I'm trying to search through every object in a scene and test its proximity to the player object…
At the moment I'm going with something like:
var allobjects = FindObjectsOfType(Rigidbody);
for (var eachObject : Rigidbody in allobjects)
{
    var dist = Vector3.Distance(transform.position, eachObject.position);
    // ...
}
Is this the most efficient way to iterate through all the objects and test proximity? Or would it be better, for example, to iterate through them and cast rays? Or some other system?
var objectsNearMe = Physics.OverlapSphere(transform.position, 10.0);
Btw, your example will only find Rigidbodies. If you really want all objects you could search for Transforms instead. (All game objects have a Transform component attached, and since you are testing the position, you will need a reference to the Transform component anyway.)
for (var other : Transform in FindObjectsOfType(Transform))
{
    var dist = Vector3.Distance(transform.position, other.position);
    // ...
}
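If you do stick with a manual loop, one small optimization (my addition, not from the thread; the 10.0 radius is just an example value) is to compare squared distances, which skips the square root that Vector3.Distance performs:

// Compare squared distances to avoid the square root in Vector3.Distance.
var maxDist = 10.0;
var maxDistSqr = maxDist * maxDist;
for (var other : Transform in FindObjectsOfType(Transform))
{
    var offset = transform.position - other.position;
    if (offset.sqrMagnitude < maxDistSqr)
    {
        // "other" is within maxDist units of this object
    }
}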
… but OverlapSphere is much more efficient. (Although you will need to attach a collider to every object that should be returned by it. Also, it will not base the decision on the distance to the center of the object as your example would, but on whether the object's collider overlaps a sphere at the given position and radius.)
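To make that concrete, a minimal sketch of the OverlapSphere approach (the 10.0 radius is just an example, and remember it returns Colliders, not Transforms):

// OverlapSphere returns every collider intersecting the sphere.
var objectsNearMe = Physics.OverlapSphere(transform.position, 10.0);
for (var nearby : Collider in objectsNearMe)
{
    Debug.Log(nearby.name + " is near me");
}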
On a similar note of efficiency, I was playing around with some circular motion of objects:
Just using that script for basic circular movement. It seems that this is slowing down the simulation, and if I put a few more of the same into the scene it appears to get worse.
Is this the 'best' way to simulate that kind of motion? Or is there a more efficient way of doing this for a larger number of objects?
You can have a lot of objects using transform.Rotate in the Update loop.
I just tested it and had 1000 objects with an Update loop executing transform.Rotate every frame, still running at the maximum frame rate in the editor. (The editor frame rate is capped at 100 fps.)
I guess the question is really: what are you rotating? Is it an object with a huge hierarchy and lots of children?
Nope… just a sphere… which was why I couldn't understand the problem…
I actually just removed everything from the scene (nothing hugely taxing) and put in a load of spheres only rotating, and it runs fine now. So I must have something else in there that is messing the rest up; I shall check it out.
Would that really slow the system so much? I know that RotateAround (as I used it) is frame dependent rather than time dependent, but surely adding more objects (even a lot) would not cause a significant drag, since the RotateAround call is not particularly intense…? Obviously the more I add, the more it has to do each Update, but all the same…
You are assuming it is the rotating that is the bottleneck. Say you have one mesh on screen that is rotating… maybe you’re getting 500 fps. Add two more meshes and you go down to 300 fps. Now your objects will only rotate at 3/5ths the speed. That is very significant and easily noticeable.
You almost always want to make everything move in a manner independent of the frame rate.
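A minimal frame-rate-independent sketch (the speed value and variable name are my own, not from the thread): multiplying by Time.deltaTime makes the rotation advance in degrees per second instead of degrees per frame.

// Rotates 90 degrees per second around the Y axis, regardless of frame rate.
var degreesPerSecond = 90.0;

function Update ()
{
    transform.Rotate(Vector3.up * degreesPerSecond * Time.deltaTime);
}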
I'm not sure if this has anything to do with it, but remember that the eye can only see about 50 images a second, so if your object is rotating really fast it might look kind of slow (or sometimes backwards). Remember, as a kid on car trips, watching the wheels of the cars passing by.
Well, no, the upper limit for what the eye can see has yet to be determined. It’s far more than 50fps though. Sorry, just gotta debunk that myth; it keeps going around. (Personally I don’t have much difficulty seeing the difference between, say, 200 and 300fps, and going from 100fps to 50fps is very very obvious.)
It may vary from person to person, like hearing, but a "regular person" is fine with 50 to 60 images. If that weren't right, TV wouldn't keep those values. Even TVs at 100 Hz display the same image twice. And TVs have an interlaced signal, so you are actually seeing two frames at a time, but no one sees two images.
Increasing the frame rate only diminishes the perception of flickering, and since we are moving away from cathode ray tubes, we won't need frame rates that high.
I’m not saying all people, but for me, beyond 50/60 frames, I don’t see any problems in the movement.
I think part of the point is that, like sound, it’s a relative sense. If your frame rate suddenly jumps, no matter how fast it is going, there is probably going to be a perceptible difference.
TV is 25fps (PAL) and 30fps (NTSC). That’s generally enough to convey smooth motion with some tricks (like motion blur and interlacing), but even a “regular person” can see far more fps than that. (Naturally, that’s limited by the refresh rate of your screen if you’re talking TVs and computer monitors.) Even if 60fps is “enough” (which it usually is), more is always better, and certainly noticeable by pretty much anyone.
But this is going off-topic here so I’ll stop. A Google search easily turns up lots of info for anyone interested.
(my understanding is that…) Actually the eye can’t see past some 25 (or whatever) images per second. However, what it does over this one-image duration is kind of average (or “integrate”) what’s displayed. Because games usually don’t have quality motion blur, it’s still much nicer to have more FPS (the eye still sees 25 images, but they’re nicely motion blurred). Movies on the other hand, already have very good motion blur in each frame (the cameras also integrate incoming light over one frame’s duration), hence no need for larger framerate.
Higher Hz on a CRT monitor is better because the pixels “fade out” after being hit by the electrons, so you can still see flickering after your eye integrates the light. LCDs can have quite lower refresh and still be good to look at.
…and 50Hz TVs are usually field interlaced, meaning they update the whole image only at 25Hz or so.
Coming from my experience with video editing, this is not entirely accurate. Each field from a video camera is usually offset by 1/50th of a second, so the two fields in a frame are not quite the same.
A common trick in video editing has been to perform field doubling, where every other field is discarded and replaced by a copy of the previous one. This of course reduces the vertical resolution of the image by half, but achieves a film-like look – people are simply used to watching films at a low frame rate.
When converting a 24 fps film to video, the speed of the film is simply increased by a tiny amount and each frame is shown in 2 fields. (This is one of the reasons PAL releases of films are a bit shorter than the original theater version. Also the sounds and voices of characters are about two-thirds of a semitone higher, although this is not really noticeable.)
This is of course a bit more complicated for NTSC, as speeding up from 24 fps to 30 fps is out of the question. In NTSC, the 4 frames ABCD are split up into fields like this: AA BB BC CC DD (with some modifications, as color NTSC is not exactly 60 Hz). This method is called 2-3-3-2 pull-down.
The fact that the two fields are temporally separate on video sources but not on footage converted from film stock poses a problem for makers of 100 Hz television sets. If the set shows each 50 Hz field twice in a row, the image resolution will in effect be cut in half (but the frame rate doubled). On the other hand, if the set chooses to repeat each frame (i.e. show fields 1 and 2 and then repeat them), some judder might be visible in high-motion video sources.
I assume modern (post-2000 – can't really call CRTs modern anymore) sets solve this problem by testing the temporal difference between fields and choosing the most appropriate deinterlacing method. My old Nokia set from 1997 uses the second method, and I can clearly see the judder.