My idea was to use cameras that render to textures, in combination with Camera.RenderWithShader and layers, to generate an image. I then want to process this image. Let's say, count the number of pixels that are red (enemies!).
That actually works fine. But it's pretty inefficient. And not because of the rendering.
My workflow so far is (sketched in code after this list):
Render the image into a RenderTexture with Camera.Render() (with the RenderTexture set as the camera's targetTexture)
Put the data into a Texture2D with Texture2D.ReadPixels()
Put that data into a Color32[] with Texture2D.GetPixels32()
Iterate over the color array and count stuff
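For reference, roughly what that pipeline looks like in code (a minimal sketch; the camera/texture setup and the "red" threshold are placeholder assumptions, not the poster's actual code):

```csharp
using UnityEngine;

public class RedPixelCounter : MonoBehaviour
{
    public Camera sightCamera;          // camera that renders the sight view
    public RenderTexture renderTexture; // target the camera renders into

    Texture2D readback;

    void Start()
    {
        sightCamera.targetTexture = renderTexture;
        readback = new Texture2D(renderTexture.width, renderTexture.height,
                                 TextureFormat.RGBA32, false);
    }

    int CountRedPixels()
    {
        sightCamera.Render(); // render into the RenderTexture

        // Copy the GPU data back to the CPU (this is the expensive part)
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = renderTexture;
        readback.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
        RenderTexture.active = previous;

        // Iterate over the pixel data and count "red" pixels
        Color32[] pixels = readback.GetPixels32();
        int count = 0;
        for (int i = 0; i < pixels.Length; i++)
        {
            // threshold for "red" is an arbitrary example
            if (pixels[i].r > 200 && pixels[i].g < 50 && pixels[i].b < 50)
                count++;
        }
        return count;
    }
}
```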
That does not feel good…
Is there a better way to do something similar?
Is there a way to leave the Texture2D out and get the information directly from the RenderTexture?
Or is there any image processing functionality, so I don't have to iterate through the whole array?
Image processing is always very resource intensive - there is no real way around this. There are some open source image processing libraries available with highly optimized algorithms to ease the processing load (the most notable being OpenCV, with its C# wrapper EmguCV that can be included in Unity), but in a 60 FPS game you will still struggle, because, as you noticed, you have to go through a whole lot of data every frame.
So the real question becomes whether there is an alternative to counting pixels for what you are trying to achieve. If you indeed intend to count enemies, there are far better ways of doing so, since your scripts are controlling them in the first place.
You could push some of the processing onto another thread. It will make the entire job an order of magnitude more complex, but it will help reduce frame rate stutter.
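A minimal sketch of that idea, assuming the Color32[] has already been read back on the main thread (Unity's texture APIs are main-thread only) and a Unity version with .NET Task support; the "red" threshold is a placeholder:

```csharp
using System.Threading.Tasks;
using UnityEngine;

public class ThreadedPixelCount : MonoBehaviour
{
    Task<int> countTask;

    // Call on the main thread after ReadPixels/GetPixels32; the worker
    // thread only reads the array, so don't mutate it while the task runs.
    public void StartCount(Color32[] pixels)
    {
        countTask = Task.Run(() =>
        {
            int count = 0;
            for (int i = 0; i < pixels.Length; i++)
                if (pixels[i].r > 200 && pixels[i].g < 50 && pixels[i].b < 50)
                    count++;
            return count;
        });
    }

    void Update()
    {
        // Pick up the result on the main thread once the worker is done
        if (countTask != null && countTask.IsCompleted)
        {
            Debug.Log("Red pixels: " + countTask.Result);
            countTask = null;
        }
    }
}
```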
Quite right. In my laser tracking for interactive projection mapping (see the link in my signature), I process the frames of a webcam stream on a separate thread and send the coordinates of the laser blobs back to the main thread so Unity can use them as input. There isn't much of a frame rate impact, but the camera runs at a lower resolution and frame rate than the game output.
Can you tell us what exactly it is you want to achieve, chef_seppel?
What I want to do is simulate the sight of my characters.
I have a camera attached to their face and render what they see into a texture, and then I want to raycast to the pixel positions. When enough raycasts hit an enemy, he is shown; if not, he remains hidden.
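Roughly the idea (a hypothetical sketch; faceCamera, the grid size, and the "Enemy" tag are placeholders):

```csharp
using UnityEngine;

public class SightGrid : MonoBehaviour
{
    public Camera faceCamera; // camera attached to the character's face
    public int gridX = 8;
    public int gridY = 6;

    // Counts how many rays through an evenly spaced grid of screen
    // positions hit an object tagged "Enemy".
    int CountEnemyHits()
    {
        int hits = 0;
        for (int x = 0; x < gridX; x++)
        {
            for (int y = 0; y < gridY; y++)
            {
                // Center of each grid cell in screen coordinates
                Vector3 screenPoint = new Vector3(
                    (x + 0.5f) / gridX * faceCamera.pixelWidth,
                    (y + 0.5f) / gridY * faceCamera.pixelHeight,
                    0f);
                Ray ray = faceCamera.ScreenPointToRay(screenPoint);

                RaycastHit hit;
                if (Physics.Raycast(ray, out hit) && hit.collider.CompareTag("Enemy"))
                    hits++;
            }
        }
        return hits;
    }
}
```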
I will look into the threading. That will definitely help.
But the main problem remains: copying the same data three times just to get it into a processable type seems unnecessary.
You would be better off just raycasting in the scene, starting from the character and in the direction the character is facing. If the raycast hits something, you can check whether it's an enemy (using tags, for instance). This is a lot faster than checking pixels, especially if you only use a couple of raycasts inside the character's field of view.
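Something like this (a minimal sketch; the "Enemy" tag and the sight range are assumptions):

```csharp
using UnityEngine;

public class SimpleSight : MonoBehaviour
{
    public float sightRange = 50f;

    // One ray straight ahead from the character's eyes
    bool CanSeeEnemy()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, sightRange))
            return hit.collider.CompareTag("Enemy");
        return false;
    }
}
```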
Doing a bunch of raycasts per frame against every enemy would probably still be several orders of magnitude cheaper than parsing the screen.
A very cheap solution is to raycast directly against each corner of your enemy's bounding box, and against the center of the enemy - if any of those hits, you see them.
If you want to make the simulation “more realistic”, raycast against more points on that bounding box.
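A sketch of that corner-and-center test (assuming the enemy's collider sits on the same transform as its Renderer; names are placeholders):

```csharp
using UnityEngine;

public static class BoundsSight
{
    // Returns true if any ray from 'eye' to the center or a corner of the
    // enemy's bounding box reaches the enemy unobstructed.
    public static bool CanSee(Vector3 eye, Renderer enemyRenderer)
    {
        Bounds b = enemyRenderer.bounds;
        Vector3 e = b.extents;
        Vector3[] targets =
        {
            b.center,
            b.center + new Vector3( e.x,  e.y,  e.z),
            b.center + new Vector3( e.x,  e.y, -e.z),
            b.center + new Vector3( e.x, -e.y,  e.z),
            b.center + new Vector3( e.x, -e.y, -e.z),
            b.center + new Vector3(-e.x,  e.y,  e.z),
            b.center + new Vector3(-e.x,  e.y, -e.z),
            b.center + new Vector3(-e.x, -e.y,  e.z),
            b.center + new Vector3(-e.x, -e.y, -e.z),
        };

        foreach (Vector3 target in targets)
        {
            RaycastHit hit;
            // Small slack on the distance so the ray can reach the surface
            if (Physics.Raycast(eye, target - eye, out hit,
                                Vector3.Distance(eye, target) + 0.1f)
                && hit.transform == enemyRenderer.transform)
            {
                return true; // this point of the enemy is visible
            }
        }
        return false;
    }
}
```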
Using the bounding box sounds really good. Is there a way to get a bounding box around my enemies with one of its sides facing me, so that I have a plane and can cast a grid of rays against that plane?
To get the bounding box of an enemy, use Renderer.bounds. You don't really have to send the raycasts from a plane; you can just make them start from the face of the character.
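A hypothetical sketch of that grid-of-rays idea, sampling points across Renderer.bounds and casting from the face (grid resolution and all names are placeholder assumptions):

```csharp
using UnityEngine;

public class GridSight : MonoBehaviour
{
    public Transform face;   // the character's eye/face position
    public int samples = 4;  // grid resolution per axis

    // Fraction of sample rays from the face that reach the enemy unobstructed.
    // Note: Renderer.bounds is axis-aligned, so this grid spans the world-space
    // box, not a face of the box turned toward the character.
    public float VisibleFraction(Renderer enemyRenderer)
    {
        Bounds b = enemyRenderer.bounds;
        int hits = 0;

        for (int x = 0; x < samples; x++)
        {
            for (int y = 0; y < samples; y++)
            {
                // Spread sample points over the box in x/y, at its center in z
                Vector3 target = new Vector3(
                    Mathf.Lerp(b.min.x, b.max.x, (x + 0.5f) / samples),
                    Mathf.Lerp(b.min.y, b.max.y, (y + 0.5f) / samples),
                    b.center.z);

                RaycastHit hit;
                if (Physics.Raycast(face.position, target - face.position, out hit)
                    && hit.transform == enemyRenderer.transform)
                {
                    hits++;
                }
            }
        }
        return (float)hits / (samples * samples);
    }
}
```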
What I want to do is use a sort of grid of raycasts against the enemy's bounding box, all coming from the face of the character. To calculate the extents of the grid, it would be nice to have a bounding box that is facing the character. The bounding box I get from Renderer.bounds is always aligned to the axes of the world coordinate system.