Some basic shader questions.

Hi,

I recently had to do some image-heavy processing. First I prototyped the idea in BlitzMax, then rewrote it with threading out of curiosity, only to find it was still too slow. So I ended up using GLSL, which did the job just fine. At some point I'll need such a solution in Unity as well.

I read that you can use HLSL, Cg, and GLSL in Unity. If I understood things right, you shouldn’t use GLSL directly, among other reasons for the sake of the Windows platform. That leaves HLSL and Cg. Which is best for writing shaders in Unity?

Some of the shader examples in the docs are older, some are newer, so I’m not sure whether all the suggestions are still valid. Does there exist something like a specific shader written in several languages, for comparison, to see the differences between the implementations? As I don’t use shaders on a daily basis, I sometimes find it kind of confusing which example uses exactly what, and I found that while certain intrinsic functions or ShaderLab built-in values worked, others somehow didn’t.

Can you write HLSL shaders that run in Unity without any problems, or should you use Cg instead? If I understood things right, you should go with surface shaders wherever a flexible kind of lighting is involved, and for things like image effects you can go with Cg/HLSL and set up the vertex/fragment shaders on your own.

Last but not least, a few more specific questions:

#1 How do you obtain, in a Cg/HLSL/surface shader, the UV position on a projected, textured surface? And can you get that relative position in UV space even if no texture is applied?

#2 What’s the equivalent of GLSL’s gl_FragCoord, and how do you obtain the maximum resolution (_ScreenParams?)?

#3 How do you pass values from, say, a fragment shader back to a script in Unity?

Thanks!

If you already have the GLSL code, I would suggest using it. Just start Unity on Windows with “-force-opengl”.

#3: Render to a texture and read the pixel values back from the render target. There aren’t that many other ways in OpenGL, are there?
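The shader side of this could look roughly like the following Cg fragment program — just a sketch: computeResult() is a hypothetical function, and with an 8-bit render target the value must fit into [0,1] (or be packed across all four channels). A script would then read the render target back, e.g. with Texture2D.ReadPixels.

```cg
// Sketch: encode a computed value into the red channel of the
// output color so a script can read it back from the render target.
float4 frag(v2f i) : COLOR
{
    float result = computeResult(i.uv); // hypothetical function
    return float4(result, 0.0, 0.0, 1.0);
}
```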

Well, I wrote the GLSL shader for this specific case, but apart from that I would like to know what the best solution/workflow is, keeping possible driver/performance issues and a one-to-many platform publishing strategy in mind.

What I like about writing shaders is that you can just write sin instead of Mathf.Sin again. :O)

Okay, so there is no way via ShaderLab? That means you write your values to a properly set-up texture and then read this texture back from a script; hmm, that should at least work for a modest number of values before it turns into a bottleneck. Thanks!

Which shader language approach do you prefer and for what reasons?

Goofing around 2k11_e (maybe some weird and expensive magnetar detecting device)

So much to learn…

Cg and HLSL are nearly identical. I recommend using Cg because it is the most widely supported in Unity. Depending on decent OpenGL drivers on Windows is a bad idea.

Okay, aside from the Unity and platform discussion, does a certain language also offer possibilities, or a more elegant way of doing things, that the others don’t?

Yes, Cg can do more and can do it on all platforms, while GLSL is not even “one breed” between mobile and desktop, due to OpenGL ES vs. OpenGL.

In which aspects does Cg offer more, apart from the platform aspect, and how does it compare to HLSL in those areas?

Surface shaders (with Cg) are probably the best choice in Unity because they automate the lighting computations and are reasonably flexible and well documented. I haven’t seen surface shaders with GLSL code; thus, I don’t know whether it is possible to use GLSL code with surface shaders (and how painful that would be). In any case, it is apparently not documented.
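For illustration, a minimal surface shader along the lines of the documented examples (the shader name and texture are just placeholders):

```cg
Shader "Custom/MinimalSurface" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Unity generates the vertex/fragment shaders and the
        // Lambert lighting computation around this one function
        #pragma surface surf Lambert
        sampler2D _MainTex;
        struct Input {
            float2 uv_MainTex; // UV coordinates of _MainTex
        };
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```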

I don’t know about Cg vs. HLSL; to me it looks like Unity takes care of the differences automatically(?)

The docs definitely could be improved.

Btw., besides the official documentation for each language, any hints for good tutorials on how to think like a pixel/vertex shader?

I guess the official documentation/tutorial of Cg is the “Cg Tutorial” by Nvidia: http://developer.nvidia.com/object/cg_tutorial_home.html

The official documentation of HLSL is probably in the Direct3D documentation.

And the official specification of GLSL (version 1.0, which is supported by OpenGL ES 2.0) is: http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf

There are many tutorials for GLSL; see for example: http://www.opengl.org/code/category/C20 . I’ve written some tutorials recently, specifically for Unity: the “GLSL Programming/Unity” wikibook on Wikibooks.

One more note for when you come from GLSL to Cg: to a GLSL programmer it appears as if Cg programmers always tend to make an even number of errors when indexing matrices in order to arrive at the correct result.

For example, a GLSL programmer would say that the matrix to specify a translation is:

1 0 0 Tx
0 1 0 Ty
0 0 1 Tz
0 0 0 1

In GLSL, you would specify such a matrix column-by-column (because they are stored in column-major format):

mat4 m = mat4(
    1,  0,  0,  0,  // 0th column
    0,  1,  0,  0,  // 1st column
    0,  0,  1,  0,  // 2nd column
    tx, ty, tz, 1); // 3rd column

The element m[3][0] is tx because you access it as m[column][row] (column-major order).
And you multiply the matrix m to a vector with m * v.

Thus, once you accept that matrices are stored in column-major order, everything is fine.
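For instance, applying m to a point gives the expected translation (same m and tx/ty/tz as above):

```glsl
vec4 p = m * vec4(1.0, 2.0, 3.0, 1.0);
// p is vec4(1.0 + tx, 2.0 + ty, 3.0 + tz, 1.0)
```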

On the other hand, a Cg programmer would say that the matrix to specify a translation is:

1 0 0 0
0 1 0 0
0 0 1 0
Tx Ty Tz 1

(e.g. Numbers in Transformation Matrices · Aras' website )

You would specify such a matrix in Cg row-by-row:

float4x4 m = {
    1,  0,  0,  0,
    0,  1,  0,  0,
    0,  0,  1,  0,
    tx, ty, tz, 1};

Note that this is the same as in GLSL except that (from the point of view of a GLSL programmer) we made two “errors”: we used the transposed matrix, and we provided rows when we were supposed to provide columns. But these two errors cancel each other out, and the code is very similar.

The element m[3][0] is tx because you access it as m[row][column] in the matrix (see The Cg Tutorial - Chapter 5. Lighting ). Again (from the point of view of a GLSL programmer) we made two errors: using the wrong order of indices and using the transposed matrix.

And then you can multiply your matrix m with a vector v using mul(m, v), which works fine because internally it actually multiplies the transposed matrix with v (see The Cg Tutorial - Appendix E. Cg Standard Library Functions, which shows that the order of indices is in fact column-major for the purpose of matrix-by-vector multiplication).
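This matrix-first, vector-second pattern is also what you see in Unity’s own Cg shaders, e.g. the standard vertex transformation (sketch, using Unity’s built-in model-view-projection matrix):

```cg
// matrix first, vector second -- like GLSL's m * v
float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
```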

As a GLSL programmer I’m driven mad by the requirement to always make an even number of errors. I find it just way too confusing.

Of course, the Cg programmers will tell you that I’m talking rubbish … :-)

Well, at least it’s more informed rubbish than I would be able to offer, so thanks!

Hello! Can I ask here what the role of a shader is in a computer?

Maybe this offers some explanation.

Does there exist a tool which lets you write and compile shaders directly on the iPhone?

A quick search for “glsl” in Apple’s iTunes Store finds these two apps: Paragraf (iPhone/iPad) and RenderDuck (iPad).

Thanks! :O)

I confess I suck at searching on an iPhone, but Paragraf does fragment shaders only; no idea about RenderDuck.

Maybe it’s just me, but fragment shaders kind of feel like the Copper (the Amiga coprocessor) to me; I like them. :O)

Yes, they are cool. One limitation is that you cannot have a million texture lookups in a real-time fragment program, i.e. not every pixel can depend on all pixels of the previous frame. However, to some degree this all-to-all communication can be achieved with so-called “pyramid algorithms”.
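A sketch of one reduction pass of such a pyramid in Cg for Unity, using the built-in _TexelSize variable (assuming it is available in your Unity version): each pass renders into a render target of half the resolution, and repeating this down to 1x1 yields, e.g., the average over all pixels.

```cg
sampler2D _MainTex;
float4 _MainTex_TexelSize; // Unity-provided: (1/width, 1/height, width, height)

// each output pixel averages a 2x2 block of the input texture;
// halving the resolution per pass builds the "pyramid"
float4 frag(float2 uv : TEXCOORD0) : COLOR
{
    float2 o = 0.5 * _MainTex_TexelSize.xy;
    return 0.25 * (tex2D(_MainTex, uv + float2(-o.x, -o.y))
                 + tex2D(_MainTex, uv + float2( o.x, -o.y))
                 + tex2D(_MainTex, uv + float2(-o.x,  o.y))
                 + tex2D(_MainTex, uv + float2( o.x,  o.y)));
}
```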