How to "precompute" a texture and save it for use in a shader?

Say you need to do some expensive calculations in your shader, so instead you precompute them into a texture that the shader samples. I’ve seen this suggested in numerous texts, but what I haven’t found is 1) what this actually means, specifically: is the precomputation done offline, in other software, at compile time, in a previous shader pass, etc.? And 2) how the resulting texture is saved.

A shader runs continuously, so how do you store a texture on your machine so that you can use it in a shader meant for a real-time simulation?

Or, does “precomputing” actually just mean computing in another pass of the same shader?

Those are all valid ways to do it. Usually it’s not a previous shader pass, though for some effects that is an option.

Most commonly it’s stored as a file on disk. What you use to generate that texture depends on what you’re most comfortable with. For Unity users it’s relatively easy to generate a texture in C#, either using SetPixels() or rendering a shader to a render texture, and save it as a .png or .exr file. Other people use various math programs, or Processing, or small programs they write on their own, and output to an image file. For some types of effects you might want or need to update the precomputed image while the game is running, in which case it’s usually a texture being generated on the GPU with a fragment shader or compute shader.

What you use kind of doesn’t matter, as long as the GPU gets a texture in the end.
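As a rough sketch of the SetPixels() route (the class name, the 64x64 size, the placeholder math, and the _LookupTex property name are all just assumptions to illustrate the idea):

```csharp
using UnityEngine;

public class LutBaker : MonoBehaviour
{
    // Bake an "expensive" function into a small lookup texture once at startup,
    // then hand it to the material so the shader only has to sample it.
    const int Size = 64;

    void Start()
    {
        var lut = new Texture2D(Size, Size, TextureFormat.RGBAHalf, false);
        var pixels = new Color[Size * Size];

        for (int y = 0; y < Size; y++)
        {
            for (int x = 0; x < Size; x++)
            {
                float u = x / (float)(Size - 1);
                float v = y / (float)(Size - 1);

                // Stand-in for whatever expensive math you want to avoid doing per pixel.
                float value = Mathf.Exp(-4f * u) * Mathf.Sin(v * Mathf.PI);

                pixels[y * Size + x] = new Color(value, value, value, 1f);
            }
        }

        lut.SetPixels(pixels);
        lut.Apply();  // uploads the pixel data to the GPU

        // "_LookupTex" is whatever texture property your shader declares.
        GetComponent<Renderer>().material.SetTexture("_LookupTex", lut);
    }
}
```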

Well, if you generate your textures in Unity, then:
https://docs.unity3d.com/ScriptReference/ImageConversion.EncodeToPNG.html
https://docs.unity3d.com/ScriptReference/ImageConversion.EncodeToEXR.html
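Something like this, assuming the texture was created readable from script (the file names and the ZIP compression flag are just example choices):

```csharp
using System.IO;
using UnityEngine;

public static class LutSaver
{
    // Write a baked Texture2D to disk so it can be re-imported as a normal asset.
    public static void Save(Texture2D lut, string name)
    {
        // PNG is fine for 8-bit data.
        byte[] png = ImageConversion.EncodeToPNG(lut);
        File.WriteAllBytes(Path.Combine(Application.dataPath, name + ".png"), png);

        // EXR keeps float precision, which only matters if the texture uses a
        // half/float format like RGBAHalf or RGBAFloat.
        byte[] exr = ImageConversion.EncodeToEXR(lut, Texture2D.EXRFlags.CompressZIP);
        File.WriteAllBytes(Path.Combine(Application.dataPath, name + ".exr"), exr);
    }
}
```

In the editor, Application.dataPath points at the project’s Assets folder, so the saved file gets imported like any other texture.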

So, what do you mean by “math or processing programs”? What is an example of each, so I can get a clearer idea?

If you’re using the GPU to generate and update the texture in real time, what’s the benefit of that as opposed to just using the regular vertex shader?

https://www.wolfram.com/mathematica/
https://processing.org/

Because the look up texture might be quite small, say 64x64 pixels, and you only need to do the expensive math for those 64x64 pixels: that’s 4,096 evaluations. If the object that uses the look up texture covers the entire screen, that’s a lot more pixels doing that math if it isn’t using the look up, over two million of them at 1920x1080.
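If the lookup does need to update while the game is running, a rough sketch of the GPU route looks like this. It assumes you have a “bake” material whose shader writes the expensive result into each pixel of the small render texture; the names here are placeholders:

```csharp
using UnityEngine;

public class RuntimeLut : MonoBehaviour
{
    public Material lutBakeMaterial;  // shader that computes the expensive value per pixel (assumed)
    public Material targetMaterial;   // material whose shader samples "_LookupTex" (assumed)

    RenderTexture lut;

    void Start()
    {
        // A 64x64 half-float render texture as the lookup target.
        lut = new RenderTexture(64, 64, 0, RenderTextureFormat.ARGBHalf);
        lut.Create();
        targetMaterial.SetTexture("_LookupTex", lut);
    }

    void Update()
    {
        // 64x64 = 4,096 evaluations of the expensive math per frame,
        // no matter how much of the screen the object ends up covering.
        Graphics.Blit(null, lut, lutBakeMaterial);
    }

    void OnDestroy()
    {
        if (lut != null) lut.Release();
    }
}
```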

Modern GPUs are often fast enough that doing the math for every pixel of the object can be cheaper than sampling a texture, so precomputed textures have been falling out of favor lately. They still have their benefits in some cases, but mostly when you want tight control over the styling.

The other example of doing it every frame would be things most people don’t think of as “precomputed” but kind of are, such as deferred rendering or image effects. Deferred rendering works by rendering the albedo, normals, specular color, etc. into full screen textures. The lighting then runs by reading those textures rather than rendering the objects directly. This is a form of realtime precomputed textures.

Similarly, image effects work by taking the “final” rendered image with all the lighting and altering it in some way. That could be changing the color balance, or adding fog, or ambient occlusion. In the case of image effects it would seem silly, as “obviously you can’t render the entire scene in the one shader”. However, a lot of that work (the fog, the ambient occlusion) could be done when initially rendering each object and wouldn’t need to be done later, but it often is done later because it can be faster.

Well, that gives me some better insight. I guess I should create the effect first, and if it’s too costly, that’s when to look into storing textures…