Unity Editor world up is Vector3(0.0f, 1.0f, 0.0f).
If a mesh face/vertex normal points in that direction, what is its float3 in a (vertex) shader, i.e. as delivered in the shader's vertex input to begin with? [Q1]
Does this depend on whether the mesh is in a rotated child GameObject of another GameObject in the Editor? [Q2]
Does this depend on whether the normal is in the shader's native object space (i.e. as it arrives in the vertex shader's input to begin with), or whether it has been transformed to/from world space via mul(unity_ObjectToWorld, float3()) and mul(unity_WorldToObject, float3())? I.e. would I need to define a "world up" float3 in the shader differently when using it with unity_WorldToObject, compared to what it looks like natively when pointing "world up" in an unrotated root GameObject? [Q3]
I have a memory of how it "is", and I did and can do tests in shaders myself, but I would like to know the definitive specification (so as not to be susceptible to errors on my part).
Does the answer to the above change when the imported mesh has a rotation applied? (When importing a Blender file, a mesh that was "horizontal" in Blender is "vertical" in Unity; I have seen a rotation of x = -89.9° in its locked import transform properties and have not found a way to modify this in Unity, so I rotated the mesh "anti-vertical" in Blender (and applied the rotation to the mesh there) to get a "horizontal" mesh in Unity.) [Q4]
Unity considers float3(0,1,0) as up in world space.
If you’re working with a vector, either a position or direction, that is in world space, assume float3(0,1,0) is up for it.
If you’re working with a vector, either position or direction, that is not in world space, then you have to ask yourself what the definition of “up” in that space is. For an imported mesh, it could be float3(0,1,0), or it might be something else. That’s more a question of how the asset was modeled and how its pivot is aligned than a Unity question. Though Unity will sometimes automatically rotate meshes on import to conform to that float3(0,1,0) up, most of the time it leaves the mesh in its original orientation, as it was in the application you modeled it in, for that mesh’s “object space”, and uses the object-to-world matrix to rotate it into the Y-up space Unity expects, and/or uses an additional default rotation value on the mesh’s GameObject, which is what you’re seeing.
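To illustrate that, here is a minimal CG/HLSL sketch (assuming Unity's built-in render pipeline and the UnityObjectToWorldNormal helper from UnityCG.cginc; the variable names are just examples) of comparing a mesh normal against that Y-up world direction:

```hlsl
// v.normal is object space, exactly as stored in the imported mesh.
// UnityObjectToWorldNormal() (UnityCG.cginc) moves it into Unity's world space,
// which already accounts for any GameObject / import rotation.
float3 worldUp     = float3(0.0, 1.0, 0.0);
float3 worldNormal = UnityObjectToWorldNormal(v.normal);
float  upFacing    = saturate(dot(worldNormal, worldUp)); // 1 = faces straight up in world space
```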
Some applications will have options to convert the scene orientation when exporting a mesh. And as I mentioned, Unity will sometimes attempt to do its own modifications to adjust the mesh orientation depending on some factors. What those factors are I can’t tell you, because I have no idea what they are. If you’re importing .blend files directly, then what you’re seeing might be the default behavior for that format, but I’ve never imported Blender files into Unity for a real project. For .fbx files, how it’s handled depends on what application exported the fbx and what export settings were used, because fbx isn’t really one file format, but instead 10 different file formats in a trench coat.
Your answer raises more questions for me than it answers
First, I thought float3(0.0f, 0.0f, 1.0f) was the CG/HLSL shader equivalent, and that appeared to work for me when the Editor world up was Y-up; float3 and Vector3 both being (x, y, z).
I know I can (re-)orient and set “up” when exporting from Blender to the likes of FBX. However, this is one of my questions (Q4): does that affect the shader’s “world up”? Also consider Q2: is the shader’s “world up” different when the mesh is rotated relative to an unrotated root GameObject? I believed shaders do not have a world/object distinction, i.e. whenever a normal points world-up (Y) in the Editor it does so in the shader as well, just using the shader’s equivalent representation.
Then, when using one of Unity’s matrices, does the shader equivalent of Editor world up need to be different from its usual representation (if that is different to begin with), perhaps because Unity uses the Editor representation for vectors in shaders under such circumstances? This is Q3.
World up in Unity is always +Y (float3(0,1,0)). If +Z (float3(0,0,1)) was working for you, then that wasn’t world up, that was probably object space up, as Blender’s coordinate system is +Z up. Which tells me that Unity is not itself doing anything to the coordinate system of the mesh data you’re importing, and also explains that ~90 degree rotation you see when placing meshes in the scene.
Any value from vertex data is going to be in object space until you manually transform it into another space.
When you transform a normal from object to world space, you should use the UnityObjectToWorldNormal(v.normal) function, because transforming normals from object to world space isn’t quite as straightforward as mul(unity_ObjectToWorld, v.normal). Transforming positions from object space to world space should be done with a mul, but it should be mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)), as transforming a float3 value will only apply rotation (and scale). You need to transform a float4 value with a w of 1 to get translation. Technically v.vertex.w is always 1.0 as well, but it’s a little cheaper / safer to use an explicit w of 1 instead.
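As a quick sketch of the cases described above (assuming the built-in render pipeline and UnityCG.cginc; the variable names, including the placeholder objectSpaceDir, are just examples):

```hlsl
// Normals: use the helper, which handles non-uniform scale correctly.
float3 worldNormal = UnityObjectToWorldNormal(v.normal);

// Positions: plain mul, but with an explicit w of 1.0 so translation is applied too.
float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz;

// Directions that are not positions (placeholder objectSpaceDir): rotation and
// scale only, no translation, so a 3x3 mul (or a float4 with w = 0.0) is enough.
float3 worldDir = mul((float3x3)unity_ObjectToWorld, objectSpaceDir);
```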
In your shader, when you say “up”, are you referring to the polygon’s surface NORMAL? That’s (0,0,+1).
A shader doesn’t really have an “up”, but the world space of shaders agrees with the world space of your Scene. The rationale for calling +Y “up” is twofold: Unity has a constant called Vector3.up which is (0,+1,0), and Unity’s Scene view draws a grid in the XZ plane. That’s all. It’s all just a convention.
Note that Unity (and most post-DirectX game engines) uses a left-handed coordinate system, while Blender (and almost every other rational piece of software on the planet) uses a right-handed coordinate system, like our Creator intended (see ref. Euclid, Descartes, and the chirality of electromagnetism and DNA).
@halley1 By “up” I mean the Editor scene world up direction, in which a mesh face/vertex normal might point, as shader vertex data.
I thought vertices on GPUs and in shaders have no knowledge of file formats and their coordinate systems. I thought vertices are only tuples of data, and if the normal of such a vertex/face points in the direction shown as world up in the Unity Editor scene, it is (0,0,+1), independent of whether the Transform of the GameObject whose MeshFilter the mesh is assigned to is rotated, and independent of the file format the mesh was imported from.
Edit: Additionally, the question is: if such a normal has, let’s say, the representation (0,0,+1), is the tuple I would receive from [UnityObjectToWorldNormal() according to bgolus] the same representation, i.e. (0,0,+1), because that is what “Editor scene world up” looks like in a shader, or would it be (0,+1,0), because that is how Unity represents Editor scene world up in a GameObject Transform?
As I mentioned before, the “world space” agrees between Scene and shader. If an object has no parent and has an identity/no rotation, then the Object’s (0,1,0) and the World’s (0,1,0) are the same. Once you start turning the object, they diverge.
The matrix stack for every object depends on its parents; thus the local object coordinates work out to be relative to the mesh. This is vital to shaders so that you can paint the same part of a UV image onto the same triangles in the same orientation regardless of how the model spins and moves over time. The Mesh data does not need to change, the matrix in the Transform is applied, and the matrix in the Transform’s parent is applied, recursively, until there is no parent.
A shader can ask for the world coordinates, or the object’s coordinates, or the surface’s normal. Those are all things which the GPU shader infrastructure has already calculated using those matrices, and can give you.
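A minimal vertex-shader sketch (assuming the built-in render pipeline and UnityCG.cginc; the struct and field names are just examples) of pulling out those three values:

```hlsl
#include "UnityCG.cginc"

struct appdata
{
    float4 vertex : POSITION;   // object-space position, straight from the mesh asset
    float3 normal : NORMAL;     // object-space normal, straight from the mesh asset
};

struct v2f
{
    float4 pos       : SV_POSITION;
    float3 objectPos : TEXCOORD0;   // untouched mesh data
    float3 worldPos  : TEXCOORD1;   // after the composed Transform hierarchy
    float3 worldN    : TEXCOORD2;   // world-space normal
};

v2f vert (appdata v)
{
    v2f o;
    o.pos       = UnityObjectToClipPos(v.vertex);
    o.objectPos = v.vertex.xyz;
    // unity_ObjectToWorld is already the product of the whole parent chain,
    // so the shader never has to walk the hierarchy itself.
    o.worldPos  = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz;
    o.worldN    = UnityObjectToWorldNormal(v.normal);
    return o;
}
```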