for (int i = 0; i < vertices.Length; i++)
{
    // Convert the vertex to world space so it can be compared against the shovel position.
    Vector3 worldVertexPosition = groundMesh.transform.TransformPoint(vertices[i]);

    // Only touch vertices within range of the carving tool (squared distance avoids a sqrt per vertex).
    if ((shovelPosition - worldVertexPosition).sqrMagnitude <= MaxDistanceSquared)
    {
        RaycastHit hit;
        if (RaycastGround(worldVertexPosition, MaxRaycastDistance, out hit))
        {
            // Move the vertex to the hit point, converted back into the mesh's local space.
            vertices[i] = groundMesh.transform.InverseTransformPoint(hit.point);
            isMeshUpdated = true;
        }
        else
        {
            //Debug.DrawRay(worldVertexPosition, Vector3.up * 0.025f, Color.green);
        }
    }
}
Hello,
I'm working on implementing a VR linocut in Unity 5. Since the shape of the carved edge is crucial, I'm raycasting from the tip of the carving tool to the ground and sculpting the mesh to match the cut. However, because the carving is so detailed, about 60,000 vertices need to be sculpted, and the code snippet above is causing performance problems.
I'm therefore considering a compute shader. Would a compute shader be effective for processing the 60,000 vertices in parallel in this case?
Yes, a compute shader will be a lot faster. It might even be able to change the vertices directly in GPU memory (not 100% sure).
Do note that OpenGL ES 3 has poor compute shader support, so you would need Vulkan if you plan on supporting standalone headsets. For PCVR, DX11 is fine.
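For reference, here is a minimal sketch of the C# side of such a dispatch. It assumes a hypothetical Displace.compute asset with a CSMain kernel declared as [numthreads(64,1,1)] that writes displaced positions into a _Vertices buffer; all of those names (and the class itself) are made up for illustration:

using UnityEngine;

public class ComputeDisplacement : MonoBehaviour
{
    public ComputeShader displaceShader;   // hypothetical Displace.compute asset
    public MeshFilter groundMeshFilter;

    ComputeBuffer vertexBuffer;
    Vector3[] vertices;
    int kernel;

    void Start()
    {
        vertices = groundMeshFilter.mesh.vertices;
        // One float3 per vertex (3 * 4 bytes).
        vertexBuffer = new ComputeBuffer(vertices.Length, sizeof(float) * 3);
        vertexBuffer.SetData(vertices);

        kernel = displaceShader.FindKernel("CSMain");
        displaceShader.SetBuffer(kernel, "_Vertices", vertexBuffer);
    }

    public void Carve(Vector3 shovelPosition, float maxDistance)
    {
        displaceShader.SetVector("_ShovelPosition", shovelPosition);
        displaceShader.SetFloat("_MaxDistanceSquared", maxDistance * maxDistance);

        // 64 threads per group matches the assumed [numthreads(64,1,1)] in the kernel.
        int groups = Mathf.CeilToInt(vertices.Length / 64f);
        displaceShader.Dispatch(kernel, groups, 1, 1);

        // Reading back stalls the pipeline; only needed if the mesh must be updated on the CPU.
        vertexBuffer.GetData(vertices);
        groundMeshFilter.mesh.vertices = vertices;
        groundMeshFilter.mesh.RecalculateNormals();
    }

    void OnDestroy()
    {
        vertexBuffer.Release();
    }
}

The GetData readback at the end is the expensive part; if the displaced positions only ever need to exist on the GPU, skipping it (and applying the buffer in a shader instead) is where most of the win would come from.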
Not sure if that exists in Unity 5, but the Job System could also multithread this, and it can be optimized with Burst. If you need it to run on the CPU or need guaranteed platform support, that will help.
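As a rough sketch (assuming a Unity version that actually has the Job System and Burst; the job and field names are illustrative), the per-vertex distance check could look like this as a Burst-compiled parallel job, with the raycasts still done afterwards on the main thread:

using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Flags which vertices are close enough to the tool, so only those need a raycast afterwards.
[BurstCompile]
struct FindVerticesInRangeJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<Vector3> worldVertices;
    public Vector3 shovelPosition;
    public float maxDistanceSquared;
    [WriteOnly] public NativeArray<bool> inRange;

    public void Execute(int i)
    {
        inRange[i] = (shovelPosition - worldVertices[i]).sqrMagnitude <= maxDistanceSquared;
    }
}

Scheduling it could then look like:

var job = new FindVerticesInRangeJob
{
    worldVertices = worldVertices,
    shovelPosition = shovelPosition,
    maxDistanceSquared = MaxDistanceSquared,
    inRange = inRange
};
// 64 vertices per batch; Complete() could be deferred to LateUpdate so the job runs in the background.
JobHandle handle = job.Schedule(worldVertices.Length, 64);
handle.Complete();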
Thank you for your response! I was considering using Jobs and Burst, but I've learned that they can't be used with raycasting. So I'm looking into using a compute shader instead. Would a compute shader be suitable for performing calculations on vertices?
You also cannot do raycasting in a compute shader. I think the job system even has a dedicated API for batched raycasts.
You need to raycast in regular C#, and then start the jobs or compute shaders.
You could start them in Update and retrieve the results in LateUpdate so they run in the background (not sure if that works the same way for compute shaders).
Have you seen RaycastCommand?
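In case it helps, here is a rough sketch of what that could look like in Unity versions that have it. The RaycastCommand constructor signature has changed across versions, so this uses the older (origin, direction, distance) form, and the variables are assumed from the snippet at the top of the thread:

using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// worldVertexPositions is assumed to hold the in-range vertices in world space.
int count = worldVertexPositions.Length;
var commands = new NativeArray<RaycastCommand>(count, Allocator.TempJob);
var results = new NativeArray<RaycastHit>(count, Allocator.TempJob);

for (int i = 0; i < count; i++)
{
    // Direction and distance are placeholders; match whatever RaycastGround does internally.
    commands[i] = new RaycastCommand(worldVertexPositions[i], Vector3.down, MaxRaycastDistance);
}

// The raycasts run on worker threads; Complete() blocks until they are all done.
JobHandle handle = RaycastCommand.ScheduleBatch(commands, results, 64);
handle.Complete();

for (int i = 0; i < count; i++)
{
    // A null collider means that ray hit nothing.
    if (results[i].collider != null)
    {
        // displace the corresponding vertex towards results[i].point here
    }
}

commands.Dispose();
results.Dispose();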
Do you need to check all 60k vertices each time? Could there be a way to divide the mesh into regions and check only relevant chunks?
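Purely as a sketch of that idea (cellSize and the bucket layout are made up, and Vector3Int needs a newer Unity than 5, but the same thing works with any hand-rolled cell key): bucket the vertex indices into a coarse world-space grid once, then each frame only loop over the cell(s) near the shovel.

using System.Collections.Generic;
using UnityEngine;

// Precompute once: map each vertex index to a coarse grid cell in world space.
var buckets = new Dictionary<Vector3Int, List<int>>();
for (int i = 0; i < vertices.Length; i++)
{
    Vector3 world = groundMesh.transform.TransformPoint(vertices[i]);
    Vector3Int cell = Vector3Int.FloorToInt(world / cellSize);
    if (!buckets.TryGetValue(cell, out var list))
        buckets[cell] = list = new List<int>();
    list.Add(i);
}

// Per frame: only visit the cell the shovel is in (plus neighbours if cellSize is close to MaxDistance).
Vector3Int shovelCell = Vector3Int.FloorToInt(shovelPosition / cellSize);
if (buckets.TryGetValue(shovelCell, out var nearby))
{
    foreach (int i in nearby)
    {
        // run the existing distance check + raycast only for these vertices
    }
}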
I would probably render the carving tool into a depth map from above and store the depth that's furthest away (ShaderLab ZTest Greater and Cull Front or Cull Off) without ever clearing the depth map. Then, when rendering the ground, I would sample the depth map in a vertex shader and displace the vertices accordingly (compare the depth map depth with the vertex position to find out how much to displace). Similar to how you would do snow imprints.
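Just to illustrate the scene-setup half of that idea (the ZTest Greater imprint material and the ground's displacement vertex shader are not shown, and every name here is illustrative), the C# side could look roughly like this:

using UnityEngine;

public class CarveDepthCapture : MonoBehaviour
{
    public Camera depthCamera;          // orthographic camera looking straight down at the ground
    public LayerMask carvingToolLayer;  // only the carving tool gets rendered into the depth map

    RenderTexture carveDepth;

    void Start()
    {
        carveDepth = new RenderTexture(1024, 1024, 24, RenderTextureFormat.RFloat);

        depthCamera.orthographic = true;
        depthCamera.targetTexture = carveDepth;
        depthCamera.cullingMask = carvingToolLayer;
        // Never clear, so every imprint the tool has ever made accumulates in the map.
        depthCamera.clearFlags = CameraClearFlags.Nothing;

        // The ground material samples this texture in its vertex stage and
        // pushes vertices down where the tool has carved.
        Shader.SetGlobalTexture("_CarveDepth", carveDepth);
    }
}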