mmmMmmm i have a problem…
Let's say I want to render something in 3D, but I don't want to show it on screen; then I want to take that render (as a texture, I guess) and be able to go through the colors (as with Texture2D.GetPixels())…
How do I do that?
I have seen examples, but all of them render to the screen and take the info from there…
thanks
.org
It makes little sense to render offscreen and then stall the whole pipeline again with GetPixels(), which forces the texture to be downloaded back to system RAM.
Render-to-texture requires Unity Pro.
EDITED to fix an error
RenderTextures only need a shader with “GrabPass” if you actually want to take info from the screen, like the glass shader does, so you can distort it or do whatever. Otherwise you don’t use it.
You can make a RenderTexture of the desired size, do Camera.Render() to it, and then do texture.ReadPixels(), since ReadPixels() reads from the active RenderTexture as well as the screen. Depending on what exactly you want to do, you could instead make a shader that takes the RenderTexture and does the work on the GPU, which would be faster than using ReadPixels().
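A minimal sketch of that flow (UnityScript; the `snapshotCamera` variable, the 256×256 size, and the texture format are assumptions for illustration):

```javascript
// Render a hidden camera into a RenderTexture, then read the pixels back.
var snapshotCamera : Camera;

function TakeSnapshot () : Texture2D {
    var rt = new RenderTexture(256, 256, 24);
    snapshotCamera.targetTexture = rt;
    snapshotCamera.Render();                  // renders offscreen, not to the screen

    RenderTexture.active = rt;                // ReadPixels reads from the active RT
    var tex = new Texture2D(256, 256, TextureFormat.RGB24, false);
    tex.ReadPixels(Rect(0, 0, 256, 256), 0, 0);
    tex.Apply();

    RenderTexture.active = null;              // restore state
    snapshotCamera.targetTexture = null;
    return tex;
}
```

The returned Texture2D can then be read with GetPixels() or processed further, without anything ever appearing on screen.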
–Eric
Hi all,
I have a specific case where I’m grabbing two different “screenshots”. One is for the whole scene, and the other is a close up shot of the face on the main character in the scene (Mugshot). The data for both shots needs to be uploaded to a remote server as PNGs.
I’ve got the first case working, since I can use Texture2D.ReadPixels(), and it was easy enough to call EncodeToPNG() for the byte array and upload that to the server.
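That first-case flow might look roughly like this (a sketch only; the URL, form field name, and filename are placeholders I’ve assumed, and WWWForm is the old Unity upload API):

```javascript
// Encode a captured Texture2D to PNG bytes and POST them to a server.
function UploadScreenshot (tex : Texture2D) {
    var png : byte[] = tex.EncodeToPNG();
    var form = new WWWForm();
    form.AddBinaryData("screenshot", png, "shot.png", "image/png");
    var upload = new WWW("http://example.com/upload", form);
    yield upload;                             // wait for the POST to finish
}
```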
However the second case has been tricky.
I got it working by waiting for the first screenshot to finish, then switching cameras and calling the same code. However, this renders to the screen, and since the two camera perspectives differ significantly, what appears to be a “glitch” flashes on screen. I’d be interested in a solution where I can render a frame with a camera without it going to the screen and still use ReadPixels().
Next I tried attaching a RenderTexture to the second camera. This camera is disabled, but I can generate a “screenshot” into the RenderTexture using Camera.Render().
That works fine. Now I have a RenderTexture with the mugshot, but I can’t figure out how to write it to a PNG.
In the documentation it says:
It’s unclear what “specified by /source/” means, and nothing I’ve done so far allows a Texture2D to ReadPixels from the RenderTexture of the 2nd camera.
Thanks,
Matt
“Source” refers to the definition:
function ReadPixels (source : Rect, destX : int, destY : int, recalculateMipMaps : bool = true) : void
Source is the source Rect.
A bit of code I have from something:
var cam = Camera.main;
if (readFromRenderTexture) {
    cam = GameObject.Find("RenderCamera").camera;
    cam.targetTexture = new RenderTexture(Screen.width, Screen.height, 24);
    cam.Render();
    // ReadPixels reads from the active RenderTexture when one is set
    RenderTexture.active = cam.targetTexture;
}
screenTex.ReadPixels(Rect(0, 0, Screen.width, Screen.height), 0, 0);
screenTex.Apply();
–Eric
Thanks for the help. That’s pretty close to what I was doing.
Do you have that camera disabled all the time, or enabled all the time?
Thanks,
Matt
I believe the camera component is permanently disabled since I only needed it to take “snapshots” in certain circumstances, although the game object is always active.
–Eric