The Catlike Coding tutorials are still fairly technical, intended for someone who wants to learn to write shaders or do basic graphics programming. Trying to get through even the basic tutorials might be overly complex for someone just looking for an answer to “what are shaders”.
The short version of what shaders are: they’re small programs that run on the GPU. They take some input data (mesh vertices, texture data, transform matrices, and other arbitrary numerical data) and output a single color value per screen / render target pixel.
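To make that concrete, here’s a toy sketch of the idea in Python (not a real shader language, just the shape of it): a small function that runs once per pixel and returns a color. The gradient it draws is made up for illustration.

```python
def pixel_shader(u, v):
    """Takes normalized pixel coordinates (0..1) and returns an RGB color.
    This one just outputs a horizontal gradient from black to red."""
    return (u, 0.0, 0.0)

def render(width, height):
    """Runs the 'shader' once for every pixel of a tiny render target.
    A real GPU runs thousands of these invocations in parallel."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            u = x / (width - 1)
            v = y / (height - 1)
            row.append(pixel_shader(u, v))
        image.append(row)
    return image

image = render(4, 2)
```

The key thing to notice is that `pixel_shader` has no idea what the rest of the image looks like; it only sees its own inputs and spits out one color, which is what lets GPUs run it for millions of pixels at once.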
GLSL / HLSL are the shader programming languages for OpenGL and Direct3D respectively. HLSL stands for “High Level Shader Language”, and GLSL stands for “OpenGL Shading Language”. (The GL in OpenGL stands for “Graphics Library”, but no one really says that anymore.) These are intended to be human readable / writable programming languages, similar to C, for describing what you want to do with the data being passed in. This eventually gets compiled into GPU assembly code the GPU actually runs, just like any other C program would need to be compiled to run on a CPU.
Nvidia’s Cg is a long dead shader language & cross compiler intended to be able to compile directly for OpenGL or Direct3D, and is nearly identical to HLSL. Unity used to use it when they first added support for Direct3D (Unity was originally OpenGL only), and a lot of code and documentation still references the term “Cg” in the context of shaders, but Unity has not used Cg for a long time now. The confusion comes from the fact they switched from Cg to HLSL, but because the two programming languages are so similar, they didn’t really have to change anything for it all to keep working. The Cg shaders they had all compiled without issue when fed into an HLSL compiler. For OpenGL, as well as Vulkan (basically OpenGL 5.0), Metal (Apple), and several other proprietary graphics APIs for consoles, Unity uses their own cross compilers & code translators that convert HLSL into the forms needed for those other platforms. But now I’m probably getting too technical again.
Terms like ray tracing & ray marching describe ways of rendering.
Ray tracing is the idea that you shoot a ray through the scene and stop at the first thing you hit (or potentially hit several things and figure out which one was the closest). For simple planar geometry, this is pretty simple from a mathematical perspective, but for complex scenes this gets very hard and expensive. And some kinds of things you might want to render don’t have a surface to “hit”.
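For the “simple from a mathematical perspective” part, here’s a sketch in Python of the classic ray vs. sphere test, one of the simplest shapes to trace (the scene setup at the bottom is made up for illustration):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Classic ray/sphere intersection: solve the quadratic
    |origin + t*direction - center|^2 = radius^2 for the nearest t >= 0.
    Returns the distance t along the ray to the closest hit, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)  # nearer of the two roots
    return t if t >= 0 else None

# Shoot a ray from the origin straight down +z at a unit sphere centered at z=5.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

The expensive part of real ray tracing isn’t this math; it’s having to do a test like this against huge numbers of triangles for every ray, which is why real ray tracers lean on acceleration structures like BVHs.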
Ray marching means stepping a ray through the scene and at each step asking “am I close to / hitting / inside of something?” Think about trying to render a cloud or some other volumetric thing. There’s no surface to “hit”, so instead you have to take small steps through the data and calculate what color / opacity it is at each point. This is also very expensive.
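A toy sketch in Python of that fixed-step idea, with a made-up density function standing in for real cloud data (a real shader would sample noise or a 3D texture instead):

```python
def density(p):
    """Toy volumetric 'cloud': density 1.0 inside a unit sphere at z=5,
    zero elsewhere. Stands in for sampling real volumetric data."""
    x, y, z = p
    return 1.0 if x*x + y*y + (z - 5.0)**2 < 1.0 else 0.0

def march_opacity(origin, direction, step=0.1, max_dist=20.0):
    """Step the ray forward in small fixed increments, sampling the volume
    at each point and accumulating opacity until it saturates."""
    opacity = 0.0
    t = 0.0
    while t < max_dist and opacity < 1.0:
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        opacity += density(p) * step  # absorb a little at each step inside the cloud
        t += step
    return min(opacity, 1.0)

o = march_opacity((0, 0, 0), (0, 0, 1))
```

The cost problem is visible right in the loop: one ray can mean hundreds of volume samples, and you need a ray per pixel.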
Most GPUs don’t do ray tracing in hardware yet. Ray marching gets used in limited forms, usually done manually in a shader for certain kinds of effects. But all modern GPUs use rasterization instead. This is a very fast way to take triangles and figure out what pixels they cover on a grid … like the pixels of a screen.
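A minimal sketch in Python of one common way to do that coverage test, using “edge functions” to check pixel centers against a single triangle (real GPUs only test pixels inside the triangle’s bounding box and do this massively in parallel with dedicated hardware):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive if point (px, py) is to the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """A pixel is covered if its center lies on the same side of all three
    edges of the triangle (here assumed counter-clockwise)."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(x0, y0, x1, y1, px, py)
            w1 = edge(x1, y1, x2, y2, px, py)
            w2 = edge(x2, y2, x0, y0, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered

pixels = rasterize([(0, 0), (4, 0), (0, 4)], 4, 4)
```

Unlike ray tracing, nothing here asks “what did this ray hit” across the whole scene; each triangle is handled independently, which is a big part of why rasterization is so fast.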
While this video still goes a bit into the weeds, as it is again intended for someone who wants to learn how to write them, I think it’s a little easier to follow for people unfamiliar with shaders: