Hi,
I’ve been trying to implement a volumetric renderer based on the Ray Marching technique. I started with something very simple, following this tutorial:
My ultimate goal is to load medical (DICOM) data and display it in Unity… Unfortunately, I’m just a beginner in shader programming, and although I’ve had some interesting results lately, they are still glitchy, so I hope you guys can help me find what I’m doing wrong here…
Okay, so my approach follows the idea of the tutorial linked above: a multi-pass shader, where I render the back faces and then the front faces of a cube, storing the positions of the generated fragments in each pass to compute the “rays” that intersect the cube. Then I use these rays and their directions to sample a 3D texture and perform the Ray Marching algorithm, generating the final image “inside” the cube.
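Just to make the idea concrete for anyone reading along: the two rasterization passes compute, per pixel, the same entry/exit points that an analytic ray–box intersection would give for the unit cube. Here is a small standalone sketch of that (Python, slab method, illustration only — names are mine, not from the shader):

```python
# Illustration only: the back-face / front-face passes described above compute,
# per pixel, the same entry and exit points that an analytic ray-box
# intersection ("slab method") would give for the unit cube [0,1]^3.

def ray_box_intersect(origin, direction, box_min=0.0, box_max=1.0):
    """Return (t_enter, t_exit) for a ray against the axis-aligned cube
    [box_min, box_max]^3, or None if the ray misses the box."""
    t_enter, t_exit = float("-inf"), float("inf")
    for o, d in zip(origin, direction):
        if d == 0.0:
            if o < box_min or o > box_max:
                return None  # parallel to this slab and outside it
            continue
        t0, t1 = (box_min - o) / d, (box_max - o) / d
        t_enter = max(t_enter, min(t0, t1))
        t_exit = min(t_exit, max(t0, t1))
    if t_enter > t_exit or t_exit < 0.0:
        return None
    return t_enter, t_exit

# A ray shot straight down the +Z axis through the cube's center:
hit = ray_box_intersect((0.5, 0.5, -1.0), (0.0, 0.0, 1.0))
```

The segment between the two returned parameters is exactly what the shader marches over; the multi-pass trick just lets the rasterizer do this intersection for free.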
Things are working, but not perfectly: my rendering generates some artifacts. I am generating a 3D texture of size 128x128x128 that is all black, except for a 64x64x64 white cube inside it… Here’s the generating code:
using UnityEngine;

public class Texture3DTester : MonoBehaviour
{
    public static Texture3D GenerateProceduralTexture( int texSize )
    {
        Texture3D generatedTex = new Texture3D( texSize, texSize, texSize, TextureFormat.RGBA32, false );
        Color[] texColors = new Color[texSize * texSize * texSize];

        for ( int z = 0; z < texSize; z++ )
        {
            for ( int y = 0; y < texSize; y++ )
            {
                for ( int x = 0; x < texSize; x++ )
                {
                    if ( x < 64 && y < 64 && z < 64 )
                    {
                        float intensity = 1;
                        texColors[x + y * texSize + z * texSize * texSize] = new Color( intensity, intensity, intensity, x / (float) texSize );
                    }
                    else
                        texColors[x + y * texSize + z * texSize * texSize] = new Color( 0, 0, 0, 0 );
                }
            }
        }

        generatedTex.SetPixels( texColors );
        generatedTex.Apply();
        return generatedTex;
    }

    void Start()
    {
        // Generate texture
        const int texSize = 128;
        Texture3D volumetricTex = GenerateProceduralTexture( texSize );

        // Update the renderer/shader/material with the loaded data
        var renderer = this.GetComponent<MeshRenderer>();
        Shader shader = renderer.material.shader;
        renderer.material.mainTexture = volumetricTex;
    }
}
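In case the flat-index layout looks suspicious to anyone: `SetPixels` expects x to vary fastest, then y, then z, which is what `x + y * texSize + z * texSize * texSize` gives. A quick engine-independent check of that indexing and of the generated volume (Python, illustration only — function names are mine):

```python
# Illustration only: the flat-index layout used by GenerateProceduralTexture
# above. Unity's Texture3D.SetPixels expects x to vary fastest, then y, then z.

def flat_index(x, y, z, size):
    """Map 3D texel coordinates to the flat Color[] index."""
    return x + y * size + z * size * size

def generate_alphas(size=128, cube=64):
    """Build the alpha channel of the same volume as the C# script:
    alpha = x / size inside the [0, cube) region on every axis, 0 elsewhere."""
    alphas = [0.0] * (size * size * size)
    for z in range(size):
        for y in range(size):
            for x in range(size):
                if x < cube and y < cube and z < cube:
                    alphas[flat_index(x, y, z, size)] = x / size
    return alphas

vol = generate_alphas(size=8, cube=4)  # tiny size so it runs instantly
```

Note that the alpha ramp `x / (float)texSize` means the x=0 slice of the white cube has alpha 0 even though its RGB is white, which matters once the shader starts compositing by alpha.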
Then I wrote a shader to do a very basic rendering of this on screen, based on the multi-pass idea I’ve described earlier:
Shader "Vinicius/Volumetric Ray Marching"
{
    // SHADER PROPERTIES
    Properties
    {
        /** Main 3D texture (generated via C# script). */
        [NoScaleOffset] _MainTex ("3D Texture (DICOM)", 3D) = "white" {}
    }

    // SHADER DEFINITION
    SubShader
    {
        Tags { "Queue" = "Transparent" }

        // FIRST PASS: renders the back faces of a cube with minimum bounds (-0.5,-0.5,-0.5) and
        // maximum bounds (+0.5,+0.5,+0.5) -- bounds given in object space. These coordinates are shifted to
        // 3D texture coordinates ranging from (0,0,0) to (1,1,1). Each resulting fragment stores these shifted
        // coordinates, which identify the point where a ray "leaves" the cube that represents the 3D
        // volumetric data being rendered; they are later used to sample the associated 3D texture in the
        // Ray Marching procedure. The point where the ray "enters" the cube is calculated in this shader's third pass.
        Pass
        {
            // Cull front: only back faces will be drawn in this pass
            Cull Front

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Fragment shader's input
            struct fragmentInput
            {
                float4 clipPos : SV_POSITION;   // The clip-space position (standard output of the vertex shader).
                float4 rayHitBack : TEXCOORD1;  // The position where the ray "hits" the back of the cube, converted to 3D texture coordinates.
            };

            // Vertex shader
            fragmentInput vert( float4 vertexPos : POSITION )
            {
                fragmentInput output;
                output.clipPos = UnityObjectToClipPos( vertexPos );
                output.rayHitBack = (vertexPos / vertexPos.w) + 0.5;   // converts object-space X, Y and Z from range [-0.5,+0.5] to range [0,1], so they can be used as 3D texture coordinates
                return output;
            }

            // Fragment shader
            fixed4 frag( fragmentInput fragmentData ) : SV_Target
            {
                // Store the texture coordinates of the ray's "far-hit point" in each fragment...
                // In future passes we'll access this information after calculating the "near-hit point" to define the ray
                // that crosses the cube and is used to sample the 3D texture and render the volumetric data.
                return fragmentData.rayHitBack;
            }
            ENDCG
        }

        // SECOND PASS: stores the contents drawn in the first pass in a texture (sampler2D) with the name specified below.
        GrabPass { "_VolumetricBackHitPoint" }

        // THIRD PASS: draws the front faces of the cube and converts each fragment to a 3D texture coordinate, just like in the first pass...
        // These coordinates are the point where the ray "enters" the cube. For each fragment, we can sample "_VolumetricBackHitPoint" at the
        // corresponding position to recover the point where the ray "leaves" the cube. With both the entry and exit points of the ray in hand,
        // we can march through the 3D texture in several iterations and build the resulting fragment by compositing the sampled colors.
        Pass
        {
            // Cull back: only front faces will be drawn in this pass
            Cull Back

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _VolumetricBackHitPoint;
            sampler3D _MainTex;

            // Fragment shader's input (which is also the output of the vertex shader)
            struct fragmentInput
            {
                float4 clipPos : SV_POSITION;
                float4 rayHitFront : TEXCOORD1;
                float4 rayHitBackUV : TEXCOORD2;
            };

            // Vertex shader
            fragmentInput vert( float4 vertexPos : POSITION )
            {
                fragmentInput output;
                output.clipPos = UnityObjectToClipPos( vertexPos );      // Same as the first pass' vertex shader...
                output.rayHitFront = (vertexPos / vertexPos.w) + 0.5;    // Same as the first pass' vertex shader...
                output.rayHitBackUV = ComputeGrabScreenPos( output.clipPos );   // Unity function for computing the UV coordinates used to sample "_VolumetricBackHitPoint" and retrieve the "leaving point" of a ray
                return output;
            }

            bool isInsideInterval( float val, float minVal, float maxVal )
            {
                return ( val >= minVal && val <= maxVal );
            }

            // Fragment shader
            fixed4 frag( fragmentInput fragmentData ) : SV_Target
            {
                // Compute the position where the ray "leaves" the cube, and use it with the position where the ray "enters" the cube to calculate the ray's direction
                float4 rayHitBack = tex2Dproj( _VolumetricBackHitPoint, fragmentData.rayHitBackUV );   // position where the ray leaves the cube
                float4 rayDirection = rayHitBack - fragmentData.rayHitFront;   // vector from the point where the ray enters the cube to the point where it leaves the cube

                // Ray Marching procedure
                const int MAX_STEPS = 50;   // Number of iterations of the Ray Marching loop
                float4 curPos = float4( fragmentData.rayHitFront.xyz, 0 );   // Current position being sampled inside the 3D texture's cube
                float4 step = float4( rayDirection.xyz / MAX_STEPS, 0 );     // After each iteration, "step" is added to "curPos", advancing a little bit further inside the 3D texture's cube
                float4 result = float4( 0, 0, 0, 0 );                        // Resulting pixel color

                for ( int s = 0; s < MAX_STEPS; s++ )
                {
                    // Break out of the loop if we are trying to sample a position outside of the 3D texture's cube
                    if ( curPos.x < 0 || curPos.y < 0 || curPos.z < 0
                        || curPos.x >= 1 || curPos.y >= 1 || curPos.z >= 1 )
                        break;

                    // Sample the texture and blend it with the resulting color!
                    float4 curSample = tex3Dlod( _MainTex, curPos );   // Sample texture
                    result += (1.0f - result.a) * curSample;           // Standard front-to-back alpha blending of the resulting pixel colors
                    if ( result.a > 0.95 )   // Stop whenever alpha gets high enough (the current fragment is already nearly opaque, so data behind it would barely affect the final result)
                        break;

                    // March a little bit further along the ray inside the 3D texture cube... but break out of the loop if we have marched outside of the cube.
                    curPos.xyz += step.xyz;
                    if ( !isInsideInterval( curPos.x, 0, 1 ) || !isInsideInterval( curPos.y, 0, 1 ) || !isInsideInterval( curPos.z, 0, 1 ) )
                        break;
                }

                // Return the resulting color
                return result;
            }
            ENDCG
        }
    }
}
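For reference, the blending line `result += (1.0f - result.a) * curSample` is standard front-to-back compositing, which assumes the samples are premultiplied (RGB already scaled by alpha). Here is that loop reduced to scalars (Python, illustration only — names are mine), showing how alpha accumulates and why the 0.95 early-out is safe:

```python
# Illustration only: the front-to-back "over" compositing used in the fragment
# shader's marching loop, reduced to scalars. Each sample is assumed to be
# premultiplied (color already scaled by its alpha).

def composite_front_to_back(samples, alpha_cutoff=0.95):
    """Accumulate (color, alpha) samples front to back, stopping early once
    the result is nearly opaque -- mirroring the shader's `result.a > 0.95`."""
    color, alpha = 0.0, 0.0
    steps = 0
    for s_color, s_alpha in samples:
        color += (1.0 - alpha) * s_color   # same update as the shader's RGB
        alpha += (1.0 - alpha) * s_alpha   # same update as the shader's alpha
        steps += 1
        if alpha > alpha_cutoff:
            break
    return color, alpha, steps

# 50 identical semi-transparent white samples (premultiplied: color == alpha):
result = composite_front_to_back([(0.5, 0.5)] * 50)
```

With identical 0.5-alpha samples, the accumulated alpha goes 0.5, 0.75, 0.875, 0.9375, 0.96875, so the loop exits after only 5 of the 50 steps and the result can never exceed 1.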
Then, in the Unity Editor, I create a cube, attach the “Texture3DTester” script to it, and apply a material with my shader to that cube. The Ray Marching algorithm seems to be working fine, except on the faces of the cube whose normals point in the negative direction of each axis (see screenshot below)… on those faces, the shader generates a strange “noise” effect, and I don’t really know why this is happening…
(White cube is drawn correctly, but some faces of the cube seem to be drawing some “noise”)
The problem always appears on the same faces. I can rotate around the cube and see the inner texture correctly, unless I am looking through one of the “troublesome” faces… through those faces, all I get is noise.
(Seen from the opposite side of the first screenshot)
So, I was wondering if somebody with more expertise could shed some light on this for me… Why do I get this noise? What am I doing wrong…?