As for how they generate the mesh in the demo, I have no clue whatsoever, but it looks awesome.
Anyway, now comes the inevitable question… Is this possible in Unity? Seems like it shouldn’t be that hard. I did a little experiment and drew the depth of the scene onto a cube in front of the camera.
I have some questions. Right now, in LateUpdate, I do a switcheroo of all the materials and the camera settings, call camera.Render(), then set everything back the way it was and place the RenderTexture on the cube.
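Roughly, the script looks something like this (a simplified sketch, not my exact code; the field names are made up, and I’m only swapping in a single depth material):

```csharp
using UnityEngine;

// Simplified sketch of the swap-render-restore trick described above.
// All field names are placeholders.
public class DepthToCube : MonoBehaviour
{
    public Camera depthCamera;     // disabled camera that we render manually
    public Material depthMaterial; // shader that outputs linear depth
    public Renderer cubeRenderer;  // the cube in front of the main camera
    public RenderTexture depthRT;  // power-of-two render texture

    void LateUpdate()
    {
        // remember the original materials, swap in the depth material
        Renderer[] renderers = FindObjectsOfType<Renderer>();
        Material[][] saved = new Material[renderers.Length][];
        for (int i = 0; i < renderers.Length; i++)
        {
            saved[i] = renderers[i].sharedMaterials;
            Material[] mats = new Material[saved[i].Length];
            for (int j = 0; j < mats.Length; j++)
                mats[j] = depthMaterial;
            renderers[i].sharedMaterials = mats;
        }

        // render depth into the texture
        depthCamera.targetTexture = depthRT;
        depthCamera.Render();

        // put everything back and show the result on the cube
        for (int i = 0; i < renderers.Length; i++)
            renderers[i].sharedMaterials = saved[i];
        cubeRenderer.sharedMaterial.mainTexture = depthRT;
    }
}
```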
How would one really approach this? It almost seems like it could be a full-screen image effect. At first I tried to make a full-screen, pixel-correct render texture, but that didn’t work: when placing the render texture on the cube I got an error saying the dimensions were wrong or something, and the cube showed garbage. Making the texture power-of-two with mipmaps fixed it.
From the screenshots I’ve seen it looks like it actually works, and the shader doesn’t look extremely complex to write, so this is exciting.
Reminds me of one of my favorite pieces… Gestalt… not realtime, but it uses similar quaternion fractal geometry.
Let me know, Forest, if you’re getting fancy with realtime AO and quaternion-fractal-based race tracks… our race cars might need some modding to cope with this.
Seriously though, any experiments on this front are very interesting.
There are several components needed to make realtime AO like this (SSAO, for “screen space ambient occlusion”) practical:
1. Support for high precision, single channel render textures, so you can render linear depth into them. D3D9 has that, and the hardware has supported it since 2002, but unfortunately OpenGL does not. The only workaround on OpenGL is to burn 4x the VRAM and 4x the bandwidth and use a 4 channel floating point render texture. Duh. This is probably the major reason why high precision single channel render textures are not exposed in Unity: it’s just not practical on OpenGL.
2. An easy way to say “render from this camera, and make everything use this shader”. Currently this can be done with some manual work, just like you did: swap the materials before rendering, restore them afterwards. Someday we’ll add a built-in feature for this.
3. Support for quite long fragment shaders, as SSAO needs lots of samples to look good. Again, this would be easy on D3D9 with pixel shader 3.0; on OpenGL, GLSL could possibly be used (with lots of praying that it will actually work).
So yeah, the effect is cool, but it’s tricky to do in a practical way, mostly because of points 1 and 3.
…oh, and that demo raytraces against a fractal in a pixel shader, so there’s actually no fractal geometry whatsoever. It runs oh so slow, but then it’s also very cool.
So you’re saying, with point 1, that simply encoding the depth into the alpha channel or a color channel of the render texture won’t be precise enough? It seems one could also spread the depth out over a few channels to get more precision, but that just makes the depth shader and the SSAO shader more complicated, right?
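For example, spreading it over two 8-bit channels could look something like this (just a sketch of the math in C# to show the idea; the same arithmetic would run in the actual shaders, and the names are invented):

```csharp
using UnityEngine;

// Sketch: pack a [0,1] depth value into two 8-bit channels for
// extra precision, and unpack it again on the SSAO side.
public static class DepthPacking
{
    // Split depth into a coarse part and a fine remainder,
    // each of which survives 8-bit quantization.
    public static Vector2 Encode(float depth)
    {
        float high = Mathf.Floor(depth * 255.0f) / 255.0f; // coarse part
        float low  = (depth - high) * 255.0f;              // remainder, rescaled to [0,1]
        return new Vector2(high, low);
    }

    // Recombine the two channels into one depth value (~16 bits of precision).
    public static float Decode(Vector2 packed)
    {
        return packed.x + packed.y / 255.0f;
    }
}
```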
And point 3 I can understand, although I’ve never tested the limits of how long a fragment program can be. Is it a hard limit or specific to each card?
Maybe I shouldn’t have been thinking about this so hard, because now I’m really itching to try it out. I had some other ideas about how to improve it, like storing the “average depth” of each object in an unused channel of the render texture, to help get rid of the artifact where you get a black outline around objects in the foreground.
(this problem is visible in the bottom-right Project Offset screenshot)
I’ve been playing a bit with this too; the effect is too cool to stay away from. I already had the depth map from my DOF effect, so I thought it would be easy. Haven’t cracked it yet though. Good work!
Yeah, I thought so too, jon. It looks like pencil shading sometimes.
It’s caused by depth buffer imprecision, induced by the fact that I’m only using one channel of a render texture to store the depth.
I’m going to try to save up enough instructions to add noise too, which will probably help quite a bit. Then I’ve got to figure out the blur pass.
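For the blur pass I’m thinking of something along these lines (an untested sketch; blurMaterial would be some simple shader that averages neighbouring texels along a direction, and the _Direction property name is made up):

```csharp
using UnityEngine;

// Untested sketch of a separable two-pass blur over the AO image.
[RequireComponent(typeof(Camera))]
public class AOBlur : MonoBehaviour
{
    public Material blurMaterial; // hypothetical directional blur shader

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        RenderTexture temp = RenderTexture.GetTemporary(source.width, source.height);

        // horizontal pass into a temporary buffer, then vertical pass out
        blurMaterial.SetVector("_Direction", new Vector4(1, 0, 0, 0));
        Graphics.Blit(source, temp, blurMaterial);
        blurMaterial.SetVector("_Direction", new Vector4(0, 1, 0, 0));
        Graphics.Blit(temp, destination, blurMaterial);

        RenderTexture.ReleaseTemporary(temp);
    }
}
```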
Oh, and for patrik / anyone else who wants to have a look so far, here’s a WIP package:
just too cool - and I agree, the artifacts actually add to it; they kill the CG-ness by making it gritty, sort of like charcoal. I wonder if you can make it look like watercolor or clay too.
Just wanted to let you know it’s working in the webplayer here on my PC:
Windows XP SP2
Internet Explorer 7
GeForce 8800 GTS 320 MB
2 GB RAM
Also, that’s bloody amazing! I remember being in awe when I saw a demonstration of CryEngine 2 that included realtime AO, and now Unity has it too. That’s just so cool!
Yoggy, that is stunning. I also agree the rougher dotted artifacts really help get rid of that too-clean computer-generated look, but the web player is magnificent. I want one!
A question which is probably way too early to ask - could this be applied to the terrain system, so that trees and detail meshes look properly grounded?
Nice work
Boxy
I have a friend who runs a local games studio. He’s working on a little XBLA game and is using this kind of thing (the AO is calculated in real time, while his models mostly just have block colours on them).
I get a blank screen when running it maximized, and non-maximized it doesn’t seem to work either :S
It does work in the webplayer, though, on both my Mac and PC.
When loading up the package in Unity I get the error: “Cg in program “frag”: error C6003: arithmetic instruction limit of 64 exceeded; 70 arithmetic instructions needed to compile program at line 21.”