Hello!
I’m looking for a way to blit a source texture into a destination texture with a texel offset. In short, is there a way to insert the source into the destination starting at a certain texel coordinate <x, y>? Graphics.Blit does not have an offset parameter.
Thanks !
If the Texture is readable (not read-only), you can use Texture2D.GetPixels() and Texture2D.SetPixels()
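For readable Texture2D objects, a minimal sketch of that offset copy might look like this (the helper name CopyWithOffset and its signature are illustrative, not from the original post):

```csharp
using UnityEngine;

public static class TextureBlitUtil
{
    // Copies all of src into dst, starting at texel (x, y) in dst.
    // Both textures must be readable, and the region must fit inside dst.
    public static void CopyWithOffset(Texture2D src, Texture2D dst, int x, int y)
    {
        Color[] pixels = src.GetPixels();                    // read the source texels
        dst.SetPixels(x, y, src.width, src.height, pixels);  // write them at the offset
        dst.Apply();                                         // upload the change to the GPU
    }
}
```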
Otherwise, if you have Pro or iOS, you can just use glTexSubImage2D to write into a specific region of a texture. You can call any of the OpenGL C functions via native plugins, or link to them directly using DllImportAttribute.
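A rough sketch of the DllImport route (assuming the desktop Windows GL library name; the library differs per platform, the GL constants are standard, and the texture must be bound on the rendering thread):

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class GLSubImageExample : MonoBehaviour
{
    // "opengl32" is the Windows library; use libGL on Linux or the ES library on iOS.
    [DllImport("opengl32")]
    static extern void glBindTexture(uint target, uint texture);

    [DllImport("opengl32")]
    static extern void glTexSubImage2D(uint target, int level,
        int xoffset, int yoffset, int width, int height,
        uint format, uint type, IntPtr pixels);

    const uint GL_TEXTURE_2D    = 0x0DE1;
    const uint GL_RGBA          = 0x1908;
    const uint GL_UNSIGNED_BYTE = 0x1401;

    // Writes raw RGBA data into dst at texel (x, y). pixels must point to
    // width * height * 4 bytes of image data.
    void UploadRegion(Texture2D dst, int x, int y, int width, int height, IntPtr pixels)
    {
        glBindTexture(GL_TEXTURE_2D, (uint)dst.GetNativeTextureID());
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}
```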
Superb. I’ll try this immediately. Thanks!
A few clarifications:
I neglected to mention that both textures are actually RenderTexture objects, which do not have the GetPixels/SetPixels functions that Texture2D objects have!
Also, since it’s a RenderTexture we want to blit into another texture, it would be best to avoid reading the pixels back with glGetTexImage into an IntPtr, and then passing that IntPtr as the image data to glTexSubImage2D.
I’m stumped. I guess the best way would be to manipulate Unity RenderTextures as FBOs/PBOs, but I hardly know where to start. Suggestions welcome!
That sounds a bit odd.
Is there a specific reason why your source RenderTarget is larger than the area you want to write?
The OpenGL FAQ suggests ReadPixels/DrawPixels:
http://www.opengl.org/resources/faq/technical/rasterization.htm
It also appears that glCopyPixels could work quite well:
http://www.opengl.org/sdk/docs/man/xhtml/glCopyPixels.xml
Although this second option is not viable on mobile.
And these solutions will also require you to use the -force-opengl flag when running under Windows, which can be painful.
Is it possible there is a higher level solution to your problem that avoids trying to set a subsection of RenderTexture A onto a subsection of RenderTexture B?
Good point…
FYI, I was thinking more along the lines of what you may read here:
The goal is actually quite simple: in order to render a cylindrical panorama (or cyclorama), “blit” six contiguous 1024×768 render textures into one large 6144×768 texture. This texture is then handled outside Unity by a warping/blending engine.
Let me know if you come up with a higher level idea!
Thanks!
Ah ok, yeah that could be tricky.
One solution that jumps out at me would be to use 6 cameras with modified normalized viewport coordinates, i.e. each one is 1/6th of the actual screen width when rendering. Using the viewport Rects, you get the built-in mechanism Unity provides for rendering to part of the screen, and since you need to render all 6 viewing angles anyway, it shouldn’t cost more resources. That way you can render the entire thing directly onto one RenderTarget without intermediate steps, although it could also lead to potential edge issues where the rounding goes awry. I’m not sure if Unity will let you get away with a 6144-wide RenderTarget, but you can always try.
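A minimal sketch of that viewport-rect idea (the rig class, the camera template field, and the 60-degree step are my assumptions, based on the six-slice cyclorama described above):

```csharp
using UnityEngine;

public class PanoramaRig : MonoBehaviour
{
    public Camera cameraTemplate;   // illustrative: a camera configured for one slice
    public RenderTexture panorama;  // the large 6144 x 768 target

    void Start()
    {
        for (int i = 0; i < 6; i++)
        {
            // One camera per 60-degree viewing angle.
            Camera cam = Instantiate(cameraTemplate, transform.position,
                Quaternion.Euler(0f, i * 60f, 0f));
            // Each camera renders into its own 1/6th of the shared target's width.
            cam.rect = new Rect(i / 6f, 0f, 1f / 6f, 1f);
            cam.targetTexture = panorama;
        }
    }
}
```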
Excellent point, and you’ll find my solution (which works!) goes totally along those lines. I hope this helps other people too!
One camera is rotated 5 times in 60-degree increments around its local (Space.Self) up vector, taking a snapshot at each position. Each of these snapshots is rendered into one of 6 RenderTextures, each 1/6th the size of the final RenderTexture. The result is a 6-faced cylinder, which is in effect my decomposed cyclorama.
Now, in order to stitch this back together into 1 texture (the final, large one):
After each individual Render, in the OnRenderImage() callback, first pass the source RenderTexture on to the destination, but keep a counter indicating which view this is. Then set the final texture as RenderTexture.active, and draw a quad that occupies exactly one sixth of the final image, with its vertices offset based on the value of the counter. Voilà! 6 textures very efficiently “blitted” into one final, larger texture.
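A sketch of that callback; the field names, the use of Graphics.Blit for the pass-through, and the immediate-mode GL quad are my guesses at the details, not the poster's actual code:

```csharp
using UnityEngine;

public class StitchSlice : MonoBehaviour
{
    public RenderTexture finalTexture; // the large 6144 x 768 target
    public Material blitMaterial;      // unlit material sampling _MainTex
    public int sliceIndex;             // 0..5: which of the 6 views this is

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination); // pass the image through unchanged

        // Now draw this slice into its sixth of the final texture.
        RenderTexture.active = finalTexture;
        blitMaterial.mainTexture = source;
        blitMaterial.SetPass(0);

        GL.PushMatrix();
        GL.LoadOrtho(); // maps (0,0)-(1,1) to the full render target
        float x0 = sliceIndex / 6f;
        float x1 = (sliceIndex + 1) / 6f;
        GL.Begin(GL.QUADS);
        GL.TexCoord2(0f, 0f); GL.Vertex3(x0, 0f, 0f);
        GL.TexCoord2(1f, 0f); GL.Vertex3(x1, 0f, 0f);
        GL.TexCoord2(1f, 1f); GL.Vertex3(x1, 1f, 0f);
        GL.TexCoord2(0f, 1f); GL.Vertex3(x0, 1f, 0f);
        GL.End();
        GL.PopMatrix();

        RenderTexture.active = null;
    }
}
```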
I get tearing, but not because of texture edge issues. The only problem that remains is setting the horizontal FoV properly; indeed, according to Unity’s manual, Camera.fieldOfView is the vertical field of view, and the horizontal FoV varies depending on the viewport’s aspect ratio… damn!
Now if I could only find the exact relationship between both…
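For reference, the standard perspective-projection relationship is tan(hFoV/2) = aspect × tan(vFoV/2), so the vertical FoV that produces a desired horizontal FoV can be computed like this (a sketch; the helper name is mine):

```csharp
using UnityEngine;

public static class FovUtil
{
    // Unity's Camera.fieldOfView is vertical. The horizontal FoV follows from
    // tan(hFov/2) = aspect * tan(vFov/2), so to hit a target horizontal FoV:
    public static float VerticalFovFor(float horizontalFovDeg, float aspect)
    {
        float halfH = horizontalFovDeg * 0.5f * Mathf.Deg2Rad;
        return 2f * Mathf.Atan(Mathf.Tan(halfH) / aspect) * Mathf.Rad2Deg;
    }
}
```

For a 1024×768 slice that must cover 60 degrees horizontally, that gives cam.fieldOfView = FovUtil.VerticalFovFor(60f, 1024f / 768f), roughly 46.8 degrees.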