Am I correct that using Physics.Raycast is always less performant than using Physics.RaycastNonAlloc with a RaycastHit array that holds only one element?
With a one-element array, RaycastNonAlloc will stop executing as soon as it finds one hit, just like Physics.Raycast. But it won’t generate garbage.
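For reference, the two calls being compared look something like this (a sketch; singleHit is my own name, and caching the one-element buffer matters, because allocating it fresh on every call would reintroduce garbage):

// classic Raycast: RaycastHit is a struct handed back through an out parameter
RaycastHit hit;
bool didHit = Physics.Raycast( ray, out hit );

// RaycastNonAlloc writing into a one-element buffer allocated once and reused
static readonly RaycastHit[] singleHit = new RaycastHit[ 1 ];
int hitCount = Physics.RaycastNonAlloc( ray, singleHit );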
I’m also going to ask a question about RaycastNonAlloc.
What if my results array is smaller than the potential number of hits?
Will I ever know that there were more colliders, or will I have to repeat the cast with a bigger array whenever hits == results.Length?
If it’s smaller than the potential number of hits, then yes, you will receive fewer RaycastHits than the potential number. You will receive exactly as many RaycastHits as your array’s length.
OK, but you will not know whether it completed because it found all hits or because the array length was too small.
A proper implementation of such functionality should return this information.
It’s your responsibility to ensure that your array is large enough to account for the maximum number of hits you are expecting. The point of having a limit is that the raycast stops after reaching it, not that it hits everything possible and then only returns a subset. Providing information on whether the raycast found everything it possibly could have with a higher limit defeats the purpose of having a limit in the first place.
Is Raycast causing you performance problems where RaycastNonAlloc isn’t? If no, then it doesn’t really matter. If in the future you find that Raycast is a bottleneck, it should be fairly straightforward to swap out.
Really? How can I know what to expect in an uncertain situation?
Also, I like to use the minimum memory I need.
As I said, this method could return information on whether its operation was limited by the array length.
If it was, then depending on my needs I might resize the array and redo the raycast.
Right now we can kind of simulate this by resizing whenever the result array comes back completely filled.
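For what it’s worth, a minimal sketch of that workaround (the helper name and starting size are mine): grow the buffer and re-cast whenever it comes back completely full, since a full buffer is ambiguous between “exactly enough hits” and “truncated results”.

static RaycastHit[] buffer = new RaycastHit[ 8 ];

static int RaycastGrowing( Ray ray, float maxDistance )
{
    int count = Physics.RaycastNonAlloc( ray, buffer, maxDistance );
    while ( count == buffer.Length )
    {
        // note: Array.Resize allocates a new array, so this only pays off if growth is rare
        System.Array.Resize( ref buffer, buffer.Length * 2 );
        count = Physics.RaycastNonAlloc( ray, buffer, maxDistance );
    }
    return count;
}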
Sorry, going back to the topic:
I think that a single Raycast might be better than RaycastNonAlloc with a size of 1 (though the difference might be super small).
It should not generate garbage, since RaycastHit is a struct.
That’s your job to figure out as a game programmer! If your game has literally unbounded numbers of potential raycast targets you are going to have larger logistical issues than “my hit array is too small”.
I have never needed to do anything like this. If I am expecting a large number of possible results, I will create a large hit array. For example:
// declare this somewhere convenient
public static readonly RaycastHit[] hitBuffer = new RaycastHit[ 1000 ];
// use it everywhere I need it
var hitCount = Physics.RaycastNonAlloc( ray, hitBuffer, /* etc */ );
// do stuff with hitBuffer
I’m trying to find the objective answer to the question, not an answer for one specific scenario.
It logically seems to me that RaycastNonAlloc is better in all cases. I experimented with it and found that to be the case, but I would like somebody more knowledgeable than I am to validate this.
@ … Creating a static buffer like this just begs for catastrophe.
Some other method that also uses RaycastNonAlloc might be invoked before you are fully done with the results, overwriting your buffer.
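To illustrate the hazard (DoSomethingThatAlsoRaycasts and ProcessHits are hypothetical):

int count = Physics.RaycastNonAlloc( ray, hitBuffer );
DoSomethingThatAlsoRaycasts();   // if this also writes into hitBuffer...
ProcessHits( hitBuffer, count ); // ...these results are silently stale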
If you do raycasts without the NonAlloc function in a controller that raycasts every frame, you will allocate tons of garbage and it will destroy your game’s performance.
If you want to make sure you get all the hits, just pass a big array of 1000 RaycastHits to the function; there is no problem with this. It potentially means the function will have to record up to 1000 hits, as far as I understand, but that is a risk you already take by using RaycastAll, so there really is no difference between those two functions except that one creates garbage and the other doesn’t.
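In other words, the comparison being drawn is (a sketch, assuming bigBuffer was allocated once up front, like the hitBuffer above):

// RaycastAll allocates a fresh results array on every call:
RaycastHit[] allHits = Physics.RaycastAll( ray );

// RaycastNonAlloc fills a buffer you allocated yourself, once:
int count = Physics.RaycastNonAlloc( ray, bigBuffer );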
You’re thinking about it in the wrong way. What matters most is the performance, not the fact that there could be billions of colliders in the raycast.
I understand the reasoning behind wanting to know this, and by no means fault you for wanting to know, but practically it really doesn’t matter. I don’t know this bit of perf trivia w/out benchmarking it myself precisely because it doesn’t matter.
In projects where I am raycasting for single objects, I use Raycast due to its simpler implementation. Readability, flexibility, and ease of use trump performance in all cases except where it is causing the program not to meet stated performance benchmarks, is bogging the editor down unreasonably, and/or I am specifically doing a performance pass on the code (which ideally means that everything else is done).
That’s not how the raycasting works internally. Unity’s physics space is divided into an octree, and a raycast searches through that for collisions. If it’s anything like what I’ve done, there’s a relatively high setup cost; once you’ve paid that, iterating through possible matches is (relatively) cheap.
This is purely an issue of semantics. Depending on your use case, there’s nothing stopping you from:
1) ensuring that your raycast results have a short lifetime (most do anyway),
2) ensuring that calls to RaycastNonAlloc don’t conflict,
3) copying any raycast results you do want to keep long term,
4) making more than one buffer, and/or namespacing buffers for different purposes, or
5) wiring up a simple array pooler so that you can be sure you’re always getting an array that is not in use (sketched below).
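A minimal sketch of option 5, assuming fixed-size buffers (all names are mine, and it needs using System.Collections.Generic;):

const int BufferSize = 64;
static readonly Stack<RaycastHit[]> pool = new Stack<RaycastHit[]>();

// rent a buffer that nothing else is currently using
static RaycastHit[] RentBuffer()
{
    return pool.Count > 0 ? pool.Pop() : new RaycastHit[ BufferSize ];
}

// hand it back when done so it can be reused instead of reallocated
static void ReturnBuffer( RaycastHit[] buffer )
{
    pool.Push( buffer );
}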
The larger point I was making was that if you are allocating an array for RaycastNonAlloc, there is no reason you can’t create an array that is as large as it needs to be to fulfill your worst-case scenario. This is a very common technique in game design. Saying that you can’t possibly know how many collisions you need to deal with is preposterous. There are a lot of numbers between 0 and infinity; surely you can choose one of them and confidently say that there can’t possibly be more than that.
No matter how they divide up the space and find the colliders under the hood, the more colliders your ray meets, the more costly the raycast operation becomes, right?
Agreed.
My use case is a controller script that calls Raycast every FixedUpdate. I’ll just keep believing that garbage allocation is to be avoided in scenarios like this. I’m too lazy to do lots of tests.
Necessarily, yes, but my point was that RaycastNonAlloc is not equivalent to doing lots of Raycasts under the hood, and that its cost doesn’t scale linearly with the total number of colliders.
GC is to be avoided in all scenarios as a rule of thumb. GC generation is something that I’d classify under the “to be addressed later, if needed” category, though.
I don’t think that Raycast generates garbage in builds though (I haven’t confirmed this).
The answer to these kinds of questions is always best given by a profiler, because the Profiler can measure execution time and memory allocations, whereas on the forum you often get opinions only.
If I want to know whether A or B is faster, I either implement both versions in my game, or create a new project that implements just this comparison and profile it.
You could create a new project that fires 10,000 or 100,000 raycasts per frame with either method and then use Unity’s Profiler to measure whether there is a performance difference.
Make sure to profile this in a build rather than in the editor, because the editor adds overhead and editor-only allocations that won’t exist in the build.
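As a sketch of such a test (the class name, cast count, and one-element buffer are my own choices; use a Development Build so the Profiler can attach):

using UnityEngine;
using UnityEngine.Profiling;

public class RaycastBenchmark : MonoBehaviour
{
    const int Casts = 10000;
    readonly RaycastHit[] buffer = new RaycastHit[ 1 ];

    void Update()
    {
        var ray = new Ray( transform.position, transform.forward );
        RaycastHit hit;

        Profiler.BeginSample( "Raycast" );
        for ( int i = 0; i < Casts; i++ )
            Physics.Raycast( ray, out hit );
        Profiler.EndSample();

        Profiler.BeginSample( "RaycastNonAlloc" );
        for ( int i = 0; i < Casts; i++ )
            Physics.RaycastNonAlloc( ray, buffer );
        Profiler.EndSample();
    }
}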
Are you sure? I just put together a quick test. At least as of 2018.4.11, Raycast( ray, out hit ) does not allocate in the build (or editor, for that matter).
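For anyone who wants to reproduce that quick check, a rough sketch (GC.GetTotalMemory is coarse, so the large iteration count helps; a delta near zero means Raycast itself isn’t allocating):

long before = System.GC.GetTotalMemory( false );
RaycastHit hit;
for ( int i = 0; i < 100000; i++ )
    Physics.Raycast( ray, out hit );
long delta = System.GC.GetTotalMemory( false ) - before;
Debug.Log( "Approximate bytes allocated: " + delta );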
As an addendum, GC is only a performance hit insofar as GC collection is a performance hit. The actual act of allocating the memory for GC is not really the bottleneck. Performance is rarely a direct synonym for GC allocations, or lack thereof.
You’re right, it does not create garbage in 2019.2.16 either. That’s good news.
It must be something else in the controller that creates garbage… I haven’t found what precisely yet.
This makes naming the other Raycast function “NonAlloc” weird.
EDIT: The reason for my allocation was that I was creating an int[ ] inside the FixedUpdate() loop.
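For anyone hitting the same thing, the fix is simply to hoist the allocation out of the physics loop (the array size here is illustrative):

// before: allocates a new array every physics step
void FixedUpdate()
{
    int[] ids = new int[ 8 ];
    // ... use ids ...
}

// after: allocate once as a field and reuse it every step
int[] ids = new int[ 8 ];
void FixedUpdate()
{
    // ... use ids ...
}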