Can you render the output of a camera onto the bump map of a polygon? Converting the brightness (or, say, just the red channel) into height on the fly?
The effect I’m thinking of would be like a live, moving relief sculpture. (No real thickness, and no color, just a bump map that moves, along with a static stone diffuse texture.)
Any way to make that happen?
Alternatively, maybe I could make the moving relief by compositing several bump maps together. Like having two embossed symbols travel across a polygon in different directions. Can that be done on the fly using a pair of bitmaps?
Yes, with a combination of a custom shader and render textures you can do all of the above. And once you get the hang of vertex and pixel shaders, it's also fairly easy to do.
The first option, converting a heightmap to a normal map on the fly, is doable but not the fastest way. You basically have to sample the heightmap at several neighboring locations to get the "slope" and compute something like a cross product to get the surface normal.
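The neighbor-sampling idea can be sketched on the CPU with NumPy. This is a minimal illustration of the math a shader would do per texel (central differences for the slope, then the cross product of the two tangents), not actual shader code; the function name and `strength` parameter are made up for the example.

```python
import numpy as np

def height_to_normals(height, strength=1.0):
    """Convert a 2D heightmap to per-texel surface normals using
    central differences -- the same idea a shader uses when it
    samples the heightmap at neighboring texels."""
    # Slope along x and y via central differences (wrapping edges).
    dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    # Tangent vectors are (1, 0, dx) and (0, 1, dy); their cross
    # product is (-dx, -dy, 1), which we normalize per texel.
    n = np.dstack([-dx * strength, -dy * strength, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# A flat heightmap yields straight-up normals.
flat_normals = height_to_normals(np.zeros((4, 4)))
```

In a shader you would do the same thing with four or five texture samples per fragment, which is why it's slower than just authoring a normal map up front.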
The alternative is just blending several normal maps. This is not "physically correct", but it is fast and easy. The water in Unity does exactly this: it scrolls the wave normal map at two different speeds/directions and averages the results.
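The scroll-and-average trick is simple enough to sketch in a few lines. This is a CPU-side illustration under assumed names (`blend_normals`, `scroll`); in a shader, scrolling is just a UV offset of `speed * time`, and the blend is an average followed by a renormalize.

```python
import numpy as np

def blend_normals(n1, n2):
    """Average two normal maps and renormalize -- not physically
    correct, but cheap and convincing (the Unity water approach)."""
    n = (n1 + n2) * 0.5
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def scroll(tex, dx, dy):
    """Scroll a wrapping texture by whole texels; a shader would
    instead offset the UVs by speed * time each frame."""
    return np.roll(np.roll(tex, dx, axis=1), dy, axis=0)
```

Animating `dx`/`dy` at two different rates for the two maps gives the moving-relief look without ever touching a heightmap.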
Would it be possible to composite two normal maps using a mask to determine where each is used?
Like having one normal map of rough stone, and a second one of parallel ridges. Then animating a mask in the shape of a logo, so that the inside of the logo is 100% ridges, and everything else is 100% stone. If that makes sense.
In fact, you can do this even with Unity Indie if you don't use render textures. For example, use a small movie texture and change its frames to control the blending; or just scroll that texture to animate the mask.
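The masked blend described above is a single lerp in a shader. Here is a minimal CPU sketch of that math (the function name is made up for the example): mask texels at 1 pick the ridge normals, texels at 0 pick the stone normals, and anything in between cross-fades.

```python
import numpy as np

def masked_blend(stone, ridges, mask):
    """Lerp between two normal maps using a grayscale mask
    (mask == 1 -> ridges, mask == 0 -> stone), then renormalize.
    In a shader this is one lerp() on the two sampled normals."""
    m = mask[..., None]                      # broadcast mask over xyz
    n = stone * (1.0 - m) + ridges * m
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Scrolling or frame-stepping the `mask` texture over time is what makes the logo appear to move across the surface.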
Such as letting the user enter text, which then gets embossed (as a variable-width typeface) onto a wall via a bump map or decal? Or like the license plate on a Hellbender?
(I know you can use text meshes, but I want a completely flat, or carved-in, look, not 3D text sitting in front of the surface.)
edit: I don't know much about coding, but if the normal viewport font rendering can't do it, you can define a bitmap alphabet, which can definitely be rendered into a texture.
The answer is of course "yes". Render textures are a simple concept: anything rendered by a camera ends up in the texture. So if you can make a text object, use the camera's culling mask and the object's layer to render it only for that camera, and set up the camera's target render texture; the text will end up there.
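The bitmap-alphabet suggestion from the earlier post is also easy to picture. A minimal sketch, assuming hypothetical 3×3 glyph bitmaps (a real bitmap font would be loaded from an image strip), blits glyphs side by side into a texture that can then serve as the heightmap; the render-texture route does the same thing on the GPU.

```python
import numpy as np

# Hypothetical 3x3 glyphs for the example; a real bitmap font
# would be sliced out of a font atlas image.
GLYPHS = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], dtype=float),
}

def stamp_text(text, tex_w=16, tex_h=3):
    """Blit glyph bitmaps left to right into a texture used as a
    heightmap (1.0 = raised, 0.0 = flat)."""
    tex = np.zeros((tex_h, tex_w))
    x = 0
    for ch in text:
        g = GLYPHS[ch]
        tex[:g.shape[0], x:x + g.shape[1]] = g
        x += g.shape[1] + 1  # one texel of spacing between glyphs
    return tex
```

Variable-width type just means each glyph advances `x` by its own width instead of a fixed cell size.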
I’m interested in this… What would be required for one to convert a render texture into a heightmap? I imagine a custom shader…
What I'm interested in doing is using a render texture to create a heightmap, and then using that heightmap to create mesh geometry. I know the heightmap → mesh script is already available. But what are the steps in between?
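For reference, the heightmap → mesh step itself is just a grid: one vertex per texel, two triangles per quad. A minimal sketch (the function name and `scale` parameter are made up; a Unity script would feed these arrays into a Mesh):

```python
import numpy as np

def heightmap_to_mesh(height, scale=1.0):
    """Build grid-mesh vertices and triangle indices from a 2D
    heightmap: one vertex per texel, height on the y axis,
    two triangles per grid quad."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.dstack([xs, height * scale, ys]).reshape(-1, 3)
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            # Two triangles covering the quad (i, i+1, i+w, i+w+1).
            tris += [i, i + w, i + 1,  i + 1, i + w, i + w + 1]
    return verts, np.array(tris)
```

The missing step in between would be reading the render texture back to the CPU each frame (or whenever it changes) so a script like this can rebuild the vertices.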