I’m just curious. Why was Cg chosen over GLSL as the basis for ShaderLab? Doesn’t Cg only work on NVIDIA cards?
The very first reason is probably that several years back GLSL either didn’t exist or was severely unstable. These days GLSL is only moderately unstable, and it has loads of other issues on OS X 10.4 (and even more so on Windows, with the myriad of driver versions out there).
Cg compiles to ARB vertex/fragment programs that are regular ARB extensions (not NVIDIA specific). Yes, Cg can also compile to NVIDIA specific extensions, but currently Unity does not use that.
That said, Cg is not the “basis” of ShaderLab. It’s just the shader language that is used in ShaderLab today. Nothing prevents us from adding more shader languages in the future.
Cg compiles down to assembly code, which is what gets shipped in the game. This runs on all cards and has far better driver compatibility than GLSL.
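If you’re curious what that looks like in practice, here’s a hand-written sketch of a minimal ShaderLab shader with an embedded Cg block (the shader name, entry points and property are made up for illustration, and exact built-in names can vary between Unity versions):

```
Shader "Example/SolidTint" {
    Properties {
        _Color ("Tint Color", Color) = (1,1,1,1)
    }
    SubShader {
        Pass {
            // Everything between CGPROGRAM and ENDCG is plain Cg;
            // Unity compiles it down to ARB vertex/fragment programs.
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            float4 _Color;

            struct v2f {
                float4 pos : POSITION;
            };

            v2f vert (float4 vertex : POSITION) {
                v2f o;
                // Built-in model*view*projection matrix provided by Unity
                o.pos = mul (UNITY_MATRIX_MVP, vertex);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                return _Color;   // solid tint color
            }
            ENDCG
        }
    }
}
```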
Ah, I never took into account that timeline of GLSL, Cg and Unity. It all makes sense now! What is unstable about GLSL on Mac OS X?
First, the bugs are different on PPC vs. Intel machines. It seems like PPC graphics drivers have not been updated since 10.4.3, while Intel drivers are updated with each OS revision. So some bugs get fixed on Intel while not being fixed on PPC. Fun, eh?
I haven’t worked much with GLSL, but I already have some issues:
- In some cases GLSL silently falls back to software rendering (example: accessing any matrix inside an if statement). It still works, but if your “frames per second” turn into “seconds per frame”, you know who gets the credit for that (see the GLSL sketch after this list).
- ftransform() does not work correctly on MacBook Pros. So if you write pixel-lit shaders, say hello to wild random flickering, because the fixed-function transformation does not match the GLSL one!
- In some cases computing (a*b) produces zero even when neither argument is zero. Changing it to (b*a) fixes it, if you can find it.
- Some preprocessor directives work differently on PPC vs. Intel machines.
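To make the first three of those concrete, here is a small hypothetical GLSL vertex shader (hand-written for illustration, not taken from a real project) showing the patterns involved:

```
uniform int lightCount;

void main()
{
    // Issue 2: ftransform() should exactly match the fixed-function
    // transform; on MacBook Pros it didn't, so multi-pass pixel-lit
    // shaders flickered against the fixed-function base pass.
    gl_Position = ftransform();

    // Issue 1: merely reading a built-in matrix inside an if statement
    // could silently drop the whole shader to software rendering.
    vec4 eyePos = vec4(0.0);
    if (lightCount > 0)
        eyePos = gl_ModelViewMatrix * gl_Vertex;   // risky on OS X

    // Issue 3: on some drivers (a * b) evaluated to zero even with
    // nonzero operands; reordering it as (b * a) worked around it.
    float a = gl_Color.r;
    float b = eyePos.z;
    gl_FrontColor = vec4(b * a);   // written as b*a instead of a*b
}
```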
The following are not OS X specific, but general issues I have with GLSL:
- When you write a GLSL shader, you have no idea whether it will run on any card/driver other than the one you’re developing on. There’s no way to get any sort of “instruction count” out of it; and if you have two shaders doing the same thing, you have no clue which one is faster (and that can vary between cards).
- GLSL has no pre-compilation. That means each time a shader is loaded, it has to be preprocessed, compiled, optimized, turned into machine microcode, and possibly optimized again. I have written a compiler once, and compiling/optimizing is not a very fast process. I don’t want to do that each time a shader is loaded (see the cgc example after this list).
- Think about multiple platforms (consoles, DX10, whatnot). GLSL is OpenGL only, so it’s mostly PC/Mac/Linux. That does not include consoles, because no console actually runs OpenGL (and the closest one to GLSL, the PS3, uses Cg). Cg can compile to OpenGL shaders, D3D shaders, PS3 shaders etc. It can also compile to GLSL (yes, that’s buggy at the moment, but I hope it will improve).
- Writing a compiler is hard, and GLSL is implemented separately by each hardware vendor. That means each and every one of them gets a chance to screw it up in different ways. Cg is implemented only once, so at least there is a fixed set of bugs in there.
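To illustrate the pre-compilation and portability points: the Cg toolkit ships a command-line compiler, cgc, so the expensive compile/optimize step can be done offline, and the same source can target different profiles. Something along these lines (the file and entry point names are made up) emits plain-text ARB assembly you can inspect, which also gives you a rough instruction count:

```
# Compile one Cg source offline to several back ends (hypothetical file names)
cgc -profile arbvp1 -entry vert lighting.cg -o lighting.vp   # ARB vertex program
cgc -profile arbfp1 -entry frag lighting.cg -o lighting.fp   # ARB fragment program
cgc -profile vs_2_0 -entry vert lighting.cg -o lighting.vsh  # Direct3D vertex shader
```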
Interesting. That makes things extremely complicated! I guess I only have one choice… Use both! lol
Seriously, thanks for the insight Aras. That cleared up a lot, but also created new questions!
Thanks for your time.
Edit - How would you do this?
Unity does this for you automatically, presumably via the compiler that ships with the Cg SDK. You can see the results of the compilation in the inspector when you highlight a shader.
-Jon
There isn’t a choice: Unity supports Cg only right now.
-Jon
True, which is the only reason I want to learn it (but I like using OpenGL, so… Unity isn’t my only fun :p).