I have a 3D data set of medical x-rays that I want to visualize and interact with in real time (including varying levels of translucency), so ray marching seemed like the perfect fit. I’ve created a rudimentary ray marching shader, but I couldn’t find any resources on the best way to use an input (in this case a 3D one-channel texture) to drive a ray marched geometry, as all the tutorials I’ve found simply generate these geometries procedurally within the shader itself.
I’ve currently created a 3D texture in Unity, and I was wondering: should I just iterate over every voxel, treating each voxel in my distance function as a sphere whose alpha and/or size varies with its value? (Below is a corresponding image.)
But why? The only reason to use geometry is to take advantage of the GPU’s rasterizer. For ray marching, the 3D texture is already the more suitable format.
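For scale, the shader side of marching a 3D texture directly is fairly small. A rough sketch (assuming the volume is rendered on a unit cube centered at the origin and bound to the material as `_Volume`; the transfer function and step count here are placeholders, not tuned for real data):

```
#include "UnityCG.cginc"

sampler3D _Volume;   // the 3D texture, set from C# via Material.SetTexture
int _MaxSteps;       // e.g. 128

struct v2f
{
    float4 pos     : SV_POSITION;
    float3 objPos  : TEXCOORD0;  // object-space position on the cube surface
    float3 objView : TEXCOORD1;  // object-space direction from the camera
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.objPos = v.vertex.xyz;
    // Bring the camera into object space so the march happens in [-0.5, 0.5]^3.
    float3 camObj = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1)).xyz;
    o.objView = v.vertex.xyz - camObj;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    float3 dir = normalize(i.objView);
    float3 p = i.objPos;
    float stepSize = 1.7321 / _MaxSteps;  // cube diagonal / step count

    float4 acc = 0;  // front-to-back color/opacity accumulation
    for (int s = 0; s < _MaxSteps; s++)
    {
        float d = tex3D(_Volume, p + 0.5).r;     // [-0.5,0.5] -> [0,1] texture space
        float4 src = float4(d, d, d, d * 0.05);  // trivial grayscale transfer function
        acc.rgb += (1 - acc.a) * src.a * src.rgb;
        acc.a   += (1 - acc.a) * src.a;
        p += dir * stepSize;
        if (any(abs(p) > 0.5)) break;            // ray left the volume
    }
    return acc;
}
```

Each fragment walks the texture itself; there’s no per-voxel geometry anywhere.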
Otherwise, if you really do want to convert a 3D texture or any other form of voxel data into a mesh, it’s mostly just brute-force iteration over the cells, generating box geometry from each occupied cell. You could use greedy meshing to reduce the poly count, but that doesn’t work well on data with a lot of variation.
Sorry, I think I misused the term “voxel” and was unclear because of it. I don’t want to voxelize my data or create any sort of mesh from the 3D texture. My primary concern is that using a texture to drive my ray marching won’t be feasible from a real-time performance standpoint, because I’m looking at textures of at least 64x64x256 resolution. Most of the texels would be near black, so I’m effectively left with a sparse matrix in which only a small percentage of the texels have any significant impact on the rendered scene (which potentially leaves room for huge optimizations).
My current thinking is that I could overwrite some of the vertex attributes that Unity provides to encode only the texels significantly above pure black/transparent, in order to massively reduce the complexity of the ray marching geometry to be rendered. I’m not very comfortable with shaders, so I’m wondering if anybody has advice on whether this might be a fruitful (or even possible) solution before I sink a ton of time into the idea. Perhaps a 64x64x256 texture doesn’t pose a major performance issue in the first place?
Thanks!
EDIT: I realize now that passing in custom vertex data wouldn’t work, because the fragment shader only receives the interpolated values (not the whole vertex array). Instead, I’m looking into passing a custom uniform matrix representing these geometries.
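For reference, what I’m now picturing on the shader side is just extra uniforms next to the texture sampler (untested, and the names are made up):

```
sampler3D _Volume;          // the 3D texture itself is already a uniform
float4x4 _VolumeTransform;  // hypothetical custom matrix, set from C# with Material.SetMatrix

// ...then in the fragment shader, something like:
// float3 volPos = mul(_VolumeTransform, float4(worldPos, 1)).xyz;
// float density = tex3D(_Volume, volPos).r;
```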
You’re mixing up your terminology a bit. You are starting with a 3D texture. The pixels of a 3D texture are called voxels. You explicitly do not have any geometry, only voxels. With most games these days that use voxels, like Minecraft, people incorrectly refer to the geometry used to render the game as voxels, but that’s a mesh representation of the voxel data, not really the voxels themselves.
What you’re talking about is some form of sparse representation of the voxel data, like a sparse voxel octree. That can be a performance improvement, but it isn’t guaranteed to be one. GPUs are very good at doing lots of simple calculations, and octree traversal is more complex. It’s entirely possible that ray marching through all of the voxels in a ray’s path will still be faster than a sparse voxel octree at the resolution and type of ray marching you’re looking to do (layers of partially transparent voxels vs. opaque hard surfaces).
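If you do stick with brute-force marching, the classic near-free optimization for translucent data is early ray termination: once the accumulated opacity saturates, nothing behind it can affect the pixel, so you stop. A sketch of the inner loop (reusing the accumulation variables from a front-to-back marcher like the one earlier in the thread; the threshold is arbitrary):

```
for (int s = 0; s < _MaxSteps; s++)
{
    float d = tex3D(_Volume, p + 0.5).r;       // sample density
    float4 src = float4(d, d, d, d * 0.05);    // placeholder transfer function
    acc.rgb += (1 - acc.a) * src.a * src.rgb;  // front-to-back compositing
    acc.a   += (1 - acc.a) * src.a;
    if (acc.a > 0.99) break;                   // early ray termination
    p += dir * stepSize;
    if (any(abs(p) > 0.5)) break;              // exited the unit cube
}
```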