Non-float coordinates passed to vertex shader

The idea behind what I’m asking is to use a type other than float for world coordinates, i.e. int, double or long. I’m not skilled enough with shaders to know whether there’s a way to pass an object’s coordinates and its vertex coordinates into the object’s shader as another type for position.

What I can think of right now is to pass them as uniforms and vertex attributes, but that implies extra data. Is there a better way?

In the vertex shader I would cast them to float and work from there.

Any overlooked gotchas to consider?

Forgot to mention I’d like to be able to work in Amplify or Shader Graph.

I wish you a rewarding new year!

Are you looking to have the vertex positions themselves stored at a higher precision, or just the pivot? It’s already possible to pass integers to the shader directly, though internally Unity stores all the values as floats and converts them to integers at render time, so this doesn’t offer any precision benefit if you use the internal systems.

If you want to pass true integer values, or doubles, to a shader, you have to use a structured buffer. For an int or uint it’s as straightforward as setting up the struct with the same scalar type and assigning it directly from C#. For doubles you have to pack the bits into a pair of 32 bit uint values and unpack them in the shader using asdouble(lowBits, highBits). Graphics APIs don’t support directly passing double precision values as inputs or outputs, so this kind of bit packing is the only option for getting them into, or out of, a shader.
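A minimal C# sketch of that double packing, assuming a hypothetical buffer property name _Positions on the material (the shader side would declare a matching StructuredBuffer and rebuild the value with asdouble(lowBits, highBits)):

```csharp
using UnityEngine;

public class DoubleBufferExample : MonoBehaviour
{
    // Layout must match the struct declared in the shader's StructuredBuffer.
    struct PackedDouble
    {
        public uint lowBits;
        public uint highBits;
    }

    [SerializeField] Material material;   // material whose shader reads the buffer
    ComputeBuffer buffer;

    void OnEnable()
    {
        buffer = new ComputeBuffer(1, sizeof(uint) * 2);

        // Reinterpret the double's 64 bits and split them into two uints.
        double value = 123456789.123456789;
        ulong bits = (ulong)System.BitConverter.DoubleToInt64Bits(value);
        var data = new PackedDouble[]
        {
            new PackedDouble { lowBits = (uint)(bits & 0xFFFFFFFFu), highBits = (uint)(bits >> 32) }
        };
        buffer.SetData(data);

        // "_Positions" is an assumed name; in HLSL the value is rebuilt with
        // asdouble(_Positions[0].lowBits, _Positions[0].highBits).
        material.SetBuffer("_Positions", buffer);
    }

    void OnDisable()
    {
        buffer?.Release();
    }
}
```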

Having the vertex positions be double precision means having a structured buffer that holds all of the vertex positions, which you access via the current vertex ID instead of using a mesh’s vertex buffer. This is how a lot of things that use DrawProcedural work, though usually only with a buffer of single precision floats.
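On the C# side that can look roughly like the sketch below, assuming a material whose vertex shader indexes the buffer with SV_VertexID (the _VertexPositions property name is made up):

```csharp
using UnityEngine;

public class ProceduralPositions : MonoBehaviour
{
    [SerializeField] Material material;   // vertex shader reads _VertexPositions[vertexID]
    ComputeBuffer positions;
    int vertexCount;

    void OnEnable()
    {
        // One float3 per vertex; replace with your own vertex data.
        Vector3[] verts =
        {
            new Vector3(0, 0, 0),
            new Vector3(1, 0, 0),
            new Vector3(0, 1, 0),
        };
        vertexCount = verts.Length;
        positions = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        positions.SetData(verts);
        material.SetBuffer("_VertexPositions", positions);  // assumed property name
    }

    void Update()
    {
        // Draw without a mesh; the vertex shader fetches positions from the buffer.
        Graphics.DrawProcedural(material, new Bounds(Vector3.zero, Vector3.one * 10f),
            MeshTopology.Triangles, vertexCount);
    }

    void OnDisable()
    {
        positions?.Release();
    }
}
```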

However …

While it’s technically possible, using a custom expression / custom function node, to access a structured buffer and call the necessary HLSL functions to get double values, neither editor supports anything better than float in the graph itself. So as soon as the values are passed out of the node’s custom code they’ll be converted back to float. Plus both apply the built in transform and projection matrices, whose components are also all 32 bit precision, so the value will again be converted back to 32 bit and you’ll lose any of the benefits. So, basically, double precision positions are pointless to implement for node based material editors.

The usual way to handle this kind of thing is to manually keep track of your objects’ double precision positions in C#. Keep the Unity camera game object at the scene origin while moving game objects around it relative to the camera’s position in your double precision “world”.
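A minimal sketch of that camera-relative bookkeeping, assuming you track your own double precision positions (the class and field names are just illustrative):

```csharp
using UnityEngine;

// Stores the "true" world position in doubles; the Unity transform only ever
// holds the camera-relative offset, which stays small and float-friendly.
public class DoublePrecisionObject : MonoBehaviour
{
    public double worldX, worldY, worldZ;   // double precision world position
}

public class CameraRelativeOrigin : MonoBehaviour
{
    // The camera's position in the double precision world; the camera's
    // Unity transform stays at (0, 0, 0).
    public double cameraX, cameraY, cameraZ;

    [SerializeField] DoublePrecisionObject[] objects;

    void LateUpdate()
    {
        foreach (var obj in objects)
        {
            // Subtract in double precision first, then demote to float.
            obj.transform.position = new Vector3(
                (float)(obj.worldX - cameraX),
                (float)(obj.worldY - cameraY),
                (float)(obj.worldZ - cameraZ));
        }
    }
}
```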


Thank you for the substantial information and for taking the time to share it!
Indeed, I want a more precise world. I would go with long as the backing type, since it’s enough to hold a light-year span of world, which is enough for me, and the precision is consistent throughout the world.

I don’t really know for sure, I’m just looking for a reasonable solution. I’m thinking of taking advantage of two things:

  1. The object itself (its size) doesn’t require a 64 bit type, only its position does.
  2. Rendering only occurs for objects between the near / far planes. AFAIK the range between the clipping planes is 32 bit and can’t be changed.
    These two make me think I could get away with keeping only the pivot as a long.

I know about this approach and others (more intricate) out there, but none of them fits my needs:

  1. I need distance based events, even collision checks, to take place at any point in the world at the same time.
  2. I want as little overhead, and as few detours, as possible when dealing with world coordinates.
  3. I want to minimize the waste incurred by float: on one hand too much precision (about 7 significant digits) within [-1, 1], and on the other hand numbers beyond ±16,777,216 start skipping integers (see the snippet after this list).
  4. I’ll use custom physics anyway.
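A quick illustration of point 3’s integer-skipping behaviour:

```csharp
using UnityEngine;

public class FloatPrecisionDemo : MonoBehaviour
{
    void Start()
    {
        // Above 2^24 = 16,777,216 a float can no longer represent every integer.
        float a = 16777216f;
        float b = a + 1f;
        Debug.Log(a == b);   // prints True: the +1 is lost
    }
}
```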

Based on your info and some more research, I’ve formed an idea, maybe not the most correct one:
I need to make a custom model matrix (not sure whether a custom view matrix too). It would be something like:

  • Space_model (float) * Matrix_model (long) = Space_world (long)
  • Here collision checking, physics etc. take place. Matrix_model would also replace the Unity Transform.
  • Space_world (long) * Matrix_view (float) = Space_camera (float). Here the values would hopefully be auto “demoted” to float.

From there on, everything proceeds as usual.
And if, for instance, two objects are close to each other at some 4 billion units from the world origin, hopefully no artifacts like overlapping would show, because they’re rendered not relative to the origin but relative to the camera (inside the 32 bit clipping planes).
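A rough sketch of what the long-based step could look like, just to make the idea concrete (these names are mine, not an established API):

```csharp
using UnityEngine;

// Replaces the Unity Transform's translation with a long-based world position.
public class LongTransform : MonoBehaviour
{
    public long worldX, worldY, worldZ;   // Space_world, in whole world units

    // Space_world (long) -> Space_camera (float): subtract the camera's long
    // position first, so the demotion to float happens on a small number.
    public Vector3 ToCameraSpace(long camX, long camY, long camZ)
    {
        return new Vector3(worldX - camX, worldY - camY, worldZ - camZ);
    }
}
```

Two objects sitting 4 billion units from the origin but only a few units from the camera end up as small float values after the subtraction, so their relative positions stay exact.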

Please share your views and ideas; it’s a long-shot approach, but if it succeeds it will have been well worth it.