Hello,
Do you know if it is possible to program parallel instructions in a script attached to a game object? For example, I want to parallelize raycasts instead of doing them sequentially in a for loop.
Thanks in advance for your help,
Minsc
The Unity API cannot be accessed from any thread other than the main thread. That includes raycasting.
There are plenty of things you can do on another thread, but not raycasting.
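For what it's worth, the usual pattern looks something like this: do the raycasts on the main thread, hand the results off as plain data to a worker thread for the heavy math, and read the results back on the main thread. A minimal sketch, assuming plain System.Threading; the class and field names are just illustrative, and the synchronisation is deliberately simplified:

```csharp
using System.Threading;
using UnityEngine;

// Illustrative only: raycasts stay on the main thread; only the
// pure-C# number crunching is moved to a worker thread.
public class ThreadedPostProcess : MonoBehaviour
{
    RaycastHit[] hits = new RaycastHit[64];
    Vector3[] points;
    volatile bool resultReady;

    void Update()
    {
        // 1. Unity API calls (raycasting) on the main thread only.
        int count = Physics.RaycastNonAlloc(
            new Ray(transform.position, transform.forward), hits, 100f);

        // Copy what the worker needs into plain data (no Unity objects).
        var input = new Vector3[count];
        for (int i = 0; i < count; i++) input[i] = hits[i].point;

        // 2. Heavy, Unity-free processing on a thread-pool thread.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ... expensive pure-math work on 'input' here ...
            points = input;        // publish the result
            resultReady = true;    // flag polled on the main thread
        });

        // 3. Consume the results back on the main thread once ready.
        if (resultReady)
        {
            resultReady = false;
            // ... use 'points' with the Unity API here ...
        }
    }
}
```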
Aw, that is limiting. Is there no workaround somehow? Like using a raycast system other than the one in the Unity API…
Sure. You could rewrite the entire physics system with a double buffer or something to make it thread safe. The benefits would be dubious, but it could be optimised for a specific type of game.
Or you could ask the question “why do I need so many raycasts?” A better algorithm beats rewriting the engine any day of the week.
I really doubt it. Without unsafe inspection of the Unity memory space, that would require moving knowledge of your game’s state into another API. The cost of duplicating your state would more than neutralize any gains from parallelization.
Why do you want to do this? To quote a number of experienced forum users: write your code first and optimize it after you’ve used the profiler.
Actually, I am building a free vision system; I have a basic version here:
The thing is, for now only the center of the object is raycast, which of course has limits. The next step is to make a character actually “see” the mesh by raycasting against a sample of points. I can already generate a point cloud of the mesh with a fairly balanced sample that is representative of the mesh, and that point cloud will follow the reference points. But the raycasts would be too slow if done in sequence, so I want to parallelize them.
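Roughly what I mean by a balanced sample is weighting the mesh triangles by their area so dense regions are not over-represented. A simplified sketch (the class name is made up for illustration, and the points come out in the mesh’s local space):

```csharp
using UnityEngine;

// Sketch: sample 'count' points over a mesh surface, picking each
// triangle with probability proportional to its area.
public static class MeshPointCloud
{
    public static Vector3[] Sample(Mesh mesh, int count)
    {
        var verts = mesh.vertices;
        var tris = mesh.triangles;
        int triCount = tris.Length / 3;

        // Cumulative triangle areas for weighted selection.
        var cumulative = new float[triCount];
        float total = 0f;
        for (int t = 0; t < triCount; t++)
        {
            Vector3 a = verts[tris[t * 3]];
            Vector3 b = verts[tris[t * 3 + 1]];
            Vector3 c = verts[tris[t * 3 + 2]];
            total += Vector3.Cross(b - a, c - a).magnitude * 0.5f;
            cumulative[t] = total;
        }

        var points = new Vector3[count];
        for (int i = 0; i < count; i++)
        {
            // Pick a triangle, weighted by area.
            float r = Random.value * total;
            int t = System.Array.BinarySearch(cumulative, r);
            if (t < 0) t = ~t;
            if (t >= triCount) t = triCount - 1;

            Vector3 a = verts[tris[t * 3]];
            Vector3 b = verts[tris[t * 3 + 1]];
            Vector3 c = verts[tris[t * 3 + 2]];

            // Uniform barycentric coordinates inside the triangle.
            float u = Random.value, v = Random.value;
            if (u + v > 1f) { u = 1f - u; v = 1f - v; }
            points[i] = a + u * (b - a) + v * (c - a);
        }
        return points;
    }
}
```

To make the points follow the object, transform them each frame with transform.TransformPoint rather than resampling.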
Raycast against the centre and the bounds to check line of sight. If you need better data, get it from the collider mesh itself. Typically there is no need to raycast to generate point clouds in realtime.
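A rough sketch of what I mean, assuming the target has a Collider and counting it as visible if a ray reaches the centre or any of the eight bounds corners first (the bounds are an AABB, so corners can sit slightly outside the actual collider; this is an approximation):

```csharp
using UnityEngine;

public static class LineOfSight
{
    // True if the eye can see the target's centre or any corner
    // of its axis-aligned bounding box.
    public static bool CanSee(Vector3 eye, Collider target)
    {
        Bounds b = target.bounds;
        var candidates = new Vector3[]
        {
            b.center,
            new Vector3(b.min.x, b.min.y, b.min.z),
            new Vector3(b.min.x, b.min.y, b.max.z),
            new Vector3(b.min.x, b.max.y, b.min.z),
            new Vector3(b.min.x, b.max.y, b.max.z),
            new Vector3(b.max.x, b.min.y, b.min.z),
            new Vector3(b.max.x, b.min.y, b.max.z),
            new Vector3(b.max.x, b.max.y, b.min.z),
            new Vector3(b.max.x, b.max.y, b.max.z),
        };

        foreach (var point in candidates)
        {
            Vector3 dir = point - eye;
            // Visible if the first thing this ray hits is the target itself.
            if (Physics.Raycast(eye, dir, out RaycastHit hit, dir.magnitude + 0.01f)
                && hit.collider == target)
                return true;
        }
        return false;
    }
}
```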
For strict object detection I would agree. But what I am trying to do is build a very realistic vision system, step by step, with the more global goal of contributing to an AI system; of course, it would also be interesting for PC gaming. It is the mesh data that contains the visual information of an object, which is why I am using a point cloud. Colliders are more limited: for example, they do not contain color data and do not necessarily represent the topology of the mesh. If I can find a way to detect point clouds efficiently from a character’s “eye”, it would open up a lot of horizons.
So there’s definitely merit to what you’re trying to do, but it’s really expensive. As in you’re going to have to do a lot of raycasts.
Raycasts in Unity work by sending calls down to the PhysX engine running on the C++ side. That happens on the main thread. So you simply can’t, unless you’re willing to jump through the hoops of getting source access.
Which means that you either have to do some smart thinking about how you model your world and how you raycast to bring down the number of calls, accept a low frame rate and low-end graphics, or switch to working directly with PhysX and drop the conveniences of an engine. Unless Unreal allows you to talk directly to its PhysX implementation, I dunno.
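On the “smart thinking” front, the cheapest win is usually to cull candidates before any raycast at all: a distance check and a view-cone dot product cost almost nothing compared to a call into PhysX. A sketch, where the thresholds and names are purely illustrative:

```csharp
using UnityEngine;

// Sketch: cheap pre-filters so only plausible candidates get raycast.
public class VisionCulling : MonoBehaviour
{
    public float maxDistance = 30f;   // illustrative threshold
    public float fieldOfView = 110f;  // degrees, illustrative

    // Run the expensive raycast path only when this returns true.
    public bool WorthRaycasting(Vector3 targetPos)
    {
        Vector3 toTarget = targetPos - transform.position;

        // 1. Distance cull (compare squared lengths, no sqrt needed).
        if (toTarget.sqrMagnitude > maxDistance * maxDistance)
            return false;

        // 2. View-cone cull: reject targets outside the field of view.
        float cosHalfFov = Mathf.Cos(fieldOfView * 0.5f * Mathf.Deg2Rad);
        if (Vector3.Dot(transform.forward, toTarget.normalized) < cosHalfFov)
            return false;

        return true; // survived the cheap tests; a raycast is now justified
    }
}
```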
Finally, I used neural networks for image recognition by integrating the Clarifai web service: AI system project - Community Showcases - Unity Discussions
It works relatively well, though it is far from fully functional yet; in my opinion, there is definitely potential in this.
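The Unity side of that kind of integration is basically just a web request. A simplified sketch of the idea; the endpoint, header, and response handling below are placeholders, not Clarifai’s actual API:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch only: posts a captured frame to an image-recognition web
// service and logs the response. The URL is a hypothetical placeholder.
public class RecognitionClient : MonoBehaviour
{
    const string Endpoint = "https://example.com/recognize"; // placeholder

    public IEnumerator Recognize(Texture2D frame)
    {
        byte[] png = frame.EncodeToPNG();

        var request = new UnityWebRequest(Endpoint, UnityWebRequest.kHttpVerbPOST);
        request.uploadHandler = new UploadHandlerRaw(png);
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "image/png");

        yield return request.SendWebRequest();

        if (string.IsNullOrEmpty(request.error))
            Debug.Log("Recognition response: " + request.downloadHandler.text);
        else
            Debug.LogError("Recognition request failed: " + request.error);

        request.Dispose();
    }
}
```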