I would like to preface this by saying that I know very little about shader programming, especially in the context of Unity.
The project I’m working on uses volume data to generate a series of floating islands. The data itself is produced by a combination of several functions, including simplex and ridge noise, sampled at discrete intervals in world space. Once generated, this volume data can change dynamically.
I would like to extend the project to generate these islands in real time as the player moves through the world, but have run into several performance issues while doing so. Profiling reveals that the crux of my problem is the noise functions themselves.
I have experimented with some success with generating the noise inside a shader and rendering it out to a Texture2D, but this approach has a severe limitation: it only gives me back values for 2D coordinates (essentially a height map). I have also scaled back the sampling size and reduced the octaves of the noise functions, but I’m not entirely happy with either solution. For reference, here's a simplified version of what I'm doing now (see sketch below).
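This is a trimmed-down sketch rather than my exact code - noiseMaterial stands in for my actual material with the noise shader, and I'm assuming float texture formats are available:

```csharp
using UnityEngine;

// Simplified version of my current approach: a fragment shader on
// noiseMaterial evaluates the noise per pixel, I blit into a RenderTexture,
// then read it back on the CPU. Works, but only gives me 2D slices.
public class NoiseToTexture : MonoBehaviour {
    public Material noiseMaterial;   // material whose shader outputs noise values

    Texture2D SampleNoiseSlice(int size) {
        var rt = new RenderTexture(size, size, 0, RenderTextureFormat.ARGBFloat);
        Graphics.Blit(null, rt, noiseMaterial);

        var result = new Texture2D(size, size, TextureFormat.RGBAFloat, false);
        RenderTexture.active = rt;
        result.ReadPixels(new Rect(0, 0, size, size), 0, 0);
        result.Apply();
        RenderTexture.active = null;
        return result;               // essentially a height map
    }
}
```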
Which brings me to this post - a feasibility question. Is there a way to process the data set inside a shader and pull the results back out?
Alternatively, can I just push the resulting volume data down the graphics pipeline and build the entire mesh there?
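To make the first question concrete, this is the kind of thing I'm imagining, based on what I've read about Unity 4's DX11 compute shader support. NoiseVolume.compute and its CSNoise3D kernel are placeholder names I made up - I haven't actually gotten any of this working:

```csharp
using UnityEngine;

// Hypothetical: dispatch a compute shader that fills a buffer with 3D noise
// samples, then read the buffer back on the CPU for mesh generation.
// "NoiseVolume.compute" / "CSNoise3D" are placeholders, not real assets.
public class NoiseVolumeGenerator : MonoBehaviour {
    public ComputeShader noiseCompute;  // would be NoiseVolume.compute
    const int size = 64;                // 64^3 = ~262k samples

    float[] GenerateVolume(Vector3 origin) {
        var buffer = new ComputeBuffer(size * size * size, sizeof(float));
        int kernel = noiseCompute.FindKernel("CSNoise3D");
        noiseCompute.SetBuffer(kernel, "Result", buffer);
        noiseCompute.SetVector("Origin", origin);
        noiseCompute.Dispatch(kernel, size / 8, size / 8, size / 8);

        var volume = new float[size * size * size];
        buffer.GetData(volume);         // pull the samples back to the CPU
        buffer.Release();
        return volume;                  // would feed my existing mesh generation
    }
}
```

Assuming something like that is even possible, is reading that many floats back every few seconds realistic, or will the readback stall eat whatever I gain?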
To give an idea of the scale: I'm looking at roughly 200k-300k samples needed every few seconds (a 64x64x64 block, for instance, is about 262k samples).
If Unity can't do it, can I write a plug-in that will?
I'm currently testing on Unity 4. Thanks in advance.