Hey guys,
I was hoping to try using compute shaders to do some computationally heavy work.
Let's take something simple like terrain generation … if it's done on the CPU you need to run the generation on a second thread, but it's usually noise based, and with maybe 1 or 2 octaves you might get a result within about 10 seconds on a typical patch of terrain.
But what if I wanted to make really detailed terrain that requires, say, 7 or 8 octaves of noise?
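By "octaves" I just mean layering the same noise at increasing frequency and decreasing amplitude, something like this on the CPU (a rough sketch using Unity's built-in Mathf.PerlinNoise; the 0.5 gain and 2.0 lacunarity are just placeholder values):

// Sums several octaves of 2D Perlin noise and normalises the result to roughly 0..1.
float FractalHeight(float x, float z, int octaves)
{
    float amplitude = 1f, frequency = 1f, sum = 0f, max = 0f;
    for (int i = 0; i < octaves; i++)
    {
        sum += Mathf.PerlinNoise(x * frequency, z * frequency) * amplitude;
        max += amplitude;
        amplitude *= 0.5f; // gain: each octave contributes half as much
        frequency *= 2f;   // lacunarity: each octave has twice the detail
    }
    return sum / max;
}

Each extra octave is another full pass of noise samples over the patch, which is why the CPU cost adds up so quickly.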
So that got me thinking: what if I could do something like this in a compute shader …
#pragma kernel Generate

// Append buffer the kernel pushes generated vertices into.
AppendStructuredBuffer<float3> vertexBuffer : register(u0);

float3 genVertsAt(uint2 xzPos)
{
    //TODO: put some height generation code here.
    // could even run marching cubes / dual contouring code.
    return float3(xzPos.x, 0, xzPos.y); // placeholder: flat height until the noise goes in
}

[numthreads(32, 1, 32)]
void Generate(uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID)
{
    // One thread per (x, z) cell in the 32x32 patch.
    uint3 currentXZ = groupId * uint3(32, 1, 32) + threadId;
    vertexBuffer.Append(genVertsAt(currentXZ.xz));
}
And then something like this in a Unity script …
using UnityEngine;
using System.Collections;

public class Test : MonoBehaviour
{
    public ComputeShader Generator;
    public MeshTopology Topology;

    void OnEnable()
    {
        var computedMeshPoints = ComputeMesh();
        CreateMeshFrom(computedMeshPoints);
    }
    private Vector3[] ComputeMesh()
    {
        var size = 32 * 32; // one vertex per thread in a single 32x1x32 group
        // stride of 12 bytes = one float3 per element
        var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
        buffer.SetCounterValue(0); // reset the append counter before dispatching
        Generator.SetBuffer(0, "vertexBuffer", buffer);
        // one thread group covers the whole patch; group counts must all be at least 1
        Generator.Dispatch(0, 1, 1, 1);
        var results = new Vector3[size];
        buffer.GetData(results); // blocking GPU -> CPU readback
        buffer.Dispose();
        return results;
    }
    private void CreateMeshFrom(Vector3[] generatedPoints)
    {
        var filter = GetComponent<MeshFilter>();
        if (generatedPoints.Length > 0)
        {
            var mesh = new Mesh { vertices = generatedPoints };
            var indices = new int[generatedPoints.Length];
            //TODO: build this differently based on the topology of the mesh being generated
            for (int i = 0; i < indices.Length; i++)
                indices[i] = i;
            mesh.SetIndices(indices, Topology, 0);
            mesh.RecalculateNormals();
            mesh.Optimize();
            mesh.RecalculateBounds();
            filter.sharedMesh = mesh; // assign the generated mesh so it actually renders
        }
        else
        {
            filter.sharedMesh = null;
        }
    }
}
I can’t seem to find a way to do this that actually works, though.
In my case I’m building “floating islands”, so I need to consider more than just x and z values before I can generate a y. I’ve been trying to figure out a way to generate voxel volumes and then build the mesh from each volume, all in compute shaders, but unless I can get the basics working I may have to rethink my plan.
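Roughly, the sort of density field I have in mind for the islands looks like this on the CPU (just a sketch; Unity has no built-in 3D noise, so this fakes it from three 2D Perlin samples, and the scale/falloff numbers are made up):

// Density > 0 means solid voxel, <= 0 means air.
float IslandDensity(Vector3 p)
{
    // Poor man's 3D noise built from three 2D Perlin samples.
    float noise = (Mathf.PerlinNoise(p.x * 0.05f, p.y * 0.05f)
                 + Mathf.PerlinNoise(p.y * 0.05f, p.z * 0.05f)
                 + Mathf.PerlinNoise(p.z * 0.05f, p.x * 0.05f)) / 3f;
    // Fade the density out above and below a band around y = 32 so the islands float.
    float falloff = Mathf.Clamp01(1f - Mathf.Abs(p.y - 32f) / 16f);
    return noise * falloff - 0.3f; // -0.3 is an arbitrary solid/air threshold
}

Something like marching cubes would then turn that field into a mesh, which is where the append buffer comes in.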
Anyone fancy giving this a try, and maybe giving me some ideas on how to write it?
I also would love to hear from anyone who has used any of the following …
Compute shaders (generally speaking).
Append buffers (handy when generating mesh data from a voxel volume).
Getting data from the GPU back onto the CPU (a rough sketch of what I think the readback looks like is below).
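On the append buffer / readback side, this is roughly the helper I imagine ComputeMesh calling instead of the fixed-size GetData (untested sketch; as far as I know ComputeBuffer.CopyCount is how you get the appended element count back to the CPU):

// Reads back only the elements the kernel actually appended.
private static Vector3[] ReadAppendBuffer(ComputeBuffer buffer)
{
    // Copy the hidden append counter into a 4-byte buffer, then pull it back to the CPU.
    var countBuffer = new ComputeBuffer(1, sizeof(int), ComputeBufferType.IndirectArguments);
    ComputeBuffer.CopyCount(buffer, countBuffer, 0);
    var count = new int[1];
    countBuffer.GetData(count);
    countBuffer.Dispose();

    // GetData blocks until the GPU has finished writing, so this stalls the frame.
    var results = new Vector3[count[0]];
    buffer.GetData(results);
    return results;
}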
I’m keen to find a way to do this without relying on Texture3D, because I’d like a solution that works for users who don’t have Unity Pro.
Seems like such an odd thing to limit … it’s just a render target, after all.
