I want to make a shader that applies an offset in world coordinates, but only in the fragment stage. I tried doing it with a standard offset in the vertex shader and by changing parameters in the generated shader, but that doesn't help (it produces artifacts in the lighting direction). Here's some pseudocode of the surface shader:
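Something along these lines (a simplified sketch rather than my exact shader; the texture properties and the height scale are placeholders):

```
Shader "Custom/WorldPosOffsetAttempt"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
        _HeightMap ("Height Map", 2D) = "black" {}
        _HeightScale ("Height Scale", Float) = 0.1
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        #pragma surface surf Standard vertex:vert fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;
        sampler2D _HeightMap;
        float _HeightScale;

        struct Input
        {
            float2 uv_MainTex;
            float3 worldPos; // read-only here, which is the problem
        };

        void vert (inout appdata_full v)
        {
            // offset the real vertices along the normal by the heightmap;
            // this moves the actual geometry, which is not what I want
            float height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
            v.vertex.xyz += v.normal * height * _HeightScale;
        }

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            // what I actually want: shift IN.worldPos by the heightmap here so
            // only the lighting sees the offset position, but worldPos is just
            // an input and there's nothing to write the shifted value back to
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```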
I want to render a cube (for example) at the same size, but modify worldPos in the fragment shader using a heightmap, so that the modified position is used to calculate the light attenuation while the cube itself stays unmodified. So I don't want to modify the real vertex positions, just send a modified position to the lighting.
Sounds like what you’re looking for is a parallax occlusion mapping (aka POM) or relief mapping shader. They’re basically the same technique with subtle differences in some of the finer implementation details, but the short version is you give it a height map and an offset scale and it gives you the appearance of geometric depth on a flat surface. You can try to implement your own version of this, or you can give this asset a go:
Amplify Shader Editor also has a built-in node for this if you want to try doing it with a node-based shader editor.
Note, if you find something called just “Parallax Mapping” or “Parallax Offset Mapping”, that isn’t really the same thing. Parallax Offset Mapping is what Unity’s built-in Standard shader does when you give it a height map. It’s a very crude approximation that gives a surface some movement that kind of sort of looks like it has height, but it’s also good at making the surface look like it’s made out of goopy paint that’s mixing badly.
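To give a sense of what the real thing involves, the core of POM is just a short fixed-step ray march through the height map along the tangent-space view direction. A minimal sketch (the property names, the 16-layer count, and the "white = raised" convention are arbitrary choices, not from any particular asset):

```
sampler2D _HeightMap;
float _ParallaxScale;

// March along the view ray in uv space until it dips below the height field,
// then return the uv where the ray "hit" the surface.
float2 ParallaxOcclusionUV(float2 uv, float3 viewDirTS)
{
    const int numLayers = 16;
    float layerDepth = 1.0 / numLayers;
    // how far the uv shifts per layer; gets larger at grazing view angles
    float2 deltaUV = (viewDirTS.xy / max(viewDirTS.z, 0.01)) * _ParallaxScale * layerDepth;

    float2 currentUV = uv;
    float currentLayerDepth = 0.0;
    // treat the height map as a depth map: 0 = top of the surface, 1 = deepest
    float surfaceDepth = 1.0 - tex2Dlod(_HeightMap, float4(currentUV, 0, 0)).r;

    for (int i = 0; i < numLayers; i++)
    {
        if (currentLayerDepth >= surfaceDepth)
            break;
        currentUV -= deltaUV;
        surfaceDepth = 1.0 - tex2Dlod(_HeightMap, float4(currentUV, 0, 0)).r;
        currentLayerDepth += layerDepth;
    }
    return currentUV;
}
```

You'd then sample your albedo and normal map with the returned uv instead of the interpolated one.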
Looks like it’s POM, thank you very much. Is there any built-in function in the cginc includes to calculate lighting in the fragment shader from a given worldPos? And do I need to use a vertex and fragment shader, or could this be done with a simple surface shader?
Basically, I want to make a shader for pixel art.
Using real geometry changes is a bad idea, since it would end up with a very large number of polygons on a 32x32 tile, and there will be many tiles. Also, the lighting attenuation itself should ideally be calculated per pixel of the texture (32x32), not per interpolated pixel on screen. As a result there are a lot of problems, and I'm already starting to wonder whether this could be done with a particle system or Unity VFX Graph instead. Any advice?
Update: found your reply about lighting https://discussions.unity.com/t/700574
Yeah, this is outside the realm of Surface Shaders. You can do basic offset mapping or even POM in Surface Shaders, but the position used for lighting and depth will always be the real geometry surface, because you don’t have access to the data needed to modify that.
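If you write your own vertex/fragment shader you can feed whatever world position you like into the lighting math. A very rough sketch for a single point light (the heightmap offset, the property names, and the 1/(1+d²) falloff are placeholders, not Unity’s exact built-in attenuation):

```
#include "UnityCG.cginc"
#include "UnityLightingCommon.cginc" // _LightColor0

sampler2D _MainTex;
sampler2D _HeightMap;
float _HeightScale;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    float3 worldPos : TEXCOORD1;
    float3 worldNormal : TEXCOORD2;
};

fixed4 frag (v2f i) : SV_Target
{
    // fake a different world position for lighting only; the geometry is untouched
    float height = tex2D(_HeightMap, i.uv).r;
    float3 shiftedWorldPos = i.worldPos + i.worldNormal * height * _HeightScale;

    // point light: direction and distance attenuation both use the shifted position
    float3 toLight = _WorldSpaceLightPos0.xyz - shiftedWorldPos;
    float3 lightDir = normalize(toLight);
    float atten = 1.0 / (1.0 + dot(toLight, toLight)); // placeholder falloff

    float ndotl = saturate(dot(normalize(i.worldNormal), lightDir));
    fixed3 albedo = tex2D(_MainTex, i.uv).rgb;
    return fixed4(albedo * _LightColor0.rgb * ndotl * atten, 1);
}
```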
If you’re looking to do something like “3D pixel” style art, you’re talking about voxels. You probably want to look at voxel or cube-stepped ray marching. It makes something like POM way cheaper because you have a known, fixed step size for the ray marching, and you don’t have to deal with the additional nebulousness of bilinear or trilinear filtering.
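Very rough sketch of the cube-stepped idea in the fragment shader (the 3D voxel texture, its size, and the fixed one-voxel step are all placeholder assumptions; a proper DDA steps exactly to the next cell boundary instead):

```
sampler3D _VoxelTex;
float _VoxelCount; // e.g. 32 for a 32x32x32 tile

// March the view ray through a point-sampled voxel volume one voxel at a time
// until a solid voxel is hit; lighting can then be done once per hit voxel.
bool MarchVoxels(float3 entryPosOS, float3 viewDirOS, out float3 hitVoxel)
{
    // work in "voxel space" where one unit = one voxel
    float3 p = entryPosOS * _VoxelCount;
    float3 dir = normalize(viewDirOS);

    const int maxSteps = 96; // enough to cross a 32^3 volume diagonally

    for (int i = 0; i < maxSteps; i++)
    {
        // sample the centre of the voxel the ray is currently inside
        float3 uvw = (floor(p) + 0.5) / _VoxelCount;
        float solid = tex3Dlod(_VoxelTex, float4(uvw, 0)).a;
        if (solid > 0.5)
        {
            hitVoxel = floor(p); // integer voxel coordinate
            return true;
        }
        p += dir; // fixed one-voxel step
        if (any(p < 0.0) || any(p >= _VoxelCount))
            break; // left the volume without hitting anything
    }
    hitVoxel = 0;
    return false;
}
```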