Can we use VFX Graph to create highly detailed meshes?

So according to my theory, if we create a point cloud from a high-poly mesh, use millions of particles to fill it up, and add textures to them… for example, like this:

Can we achieve a high level of detail with it? I know the results are noisy, so could they be denoised with some kind of denoiser or shader? Could I use frustum and occlusion culling to render only the particles in the camera's view? But this would lead to imperfect shadows… And for LOD, we could just decrease the number of particles as the camera moves further away from the object… Is this whole theory practically possible?
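As an illustrative sketch (plain Python, not Unity or VFX Graph code; the function name and numbers here are made up), the distance-based LOD idea amounts to shrinking the particle budget with the square of the camera distance, since a point's on-screen footprint shrinks quadratically:

```python
# Hypothetical distance-based LOD for a point-cloud renderer: scale the
# particle budget by (reference_distance / distance)^2, clamped to a floor.
def particle_budget(base_count, distance, reference_distance=10.0, min_count=1000):
    if distance <= reference_distance:
        return base_count  # full budget up close
    scaled = int(base_count * (reference_distance / distance) ** 2)
    return max(scaled, min_count)

print(particle_budget(2_000_000, 10.0))  # full 2M budget at the reference distance
print(particle_budget(2_000_000, 40.0))  # 125000: 1/16 of the budget at 4x distance
```

The quadratic falloff is the key assumption: halving the on-screen size of the object quarters the pixels it covers, so a linear reduction in particle count would still oversubscribe each pixel.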


Could you? Sure. Should you? Absolutely not, it’s the most inefficient possible way to handle things, oh my god. There’s a reason stuff like Nanite was in development for 10 years.


That’s why I didn’t use the word Nanite in this post… if this works, maybe we can use it until Unity adds support for something similar to Nanite in the future… :wink:
Can you try this out and post the results here on the thread?

Mate, I don’t think anybody is going to do this for you. We are all busy with our own work, so we don’t have time to be doing random errands for you.

Perhaps this should be in the dedicated VFX Graph forum.


So why not just use the highly detailed mesh?
You will need that data anyway to create it with the particle system.


No, you didn’t use Nanite in this thread because, as the UE4 thread showed before it was locked, you don’t really seem to have a grasp on how it works. What you have described here is effectively Nanite: All Potential Optimization Removed Edition, but with even more direct caveats for use. Not only that: as already mentioned, you still have to get the mesh data into this point cloud format, which involves processing the mesh anyway, and that makes just processing and rendering the mesh normally dramatically faster.

No. We can’t. The results will always look noisy, because you’re dumping multiple points per pixel, and that creates a pixel-soup effect.

Even if you blast it with an AI denoiser, then you’ll have wasted enough computing power to render the same model a hundred times over.

That’s not what Nanite does.

What you propose, rendering geometry via point clouds, is one of the least efficient possible ways to render 3D models in general. You’ll end up with visible noise (multiple points per pixel), it will look fuzzy, and it will likely render slowly.
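A quick back-of-the-envelope sketch (plain Python, with made-up numbers) of where that noise comes from: once the model covers fewer pixels on screen than it has points, several points compete for every pixel, and whichever one wins each frame reads as shimmer:

```python
# "Pixel soup" arithmetic: how many points land in each covered pixel.
def points_per_pixel(point_count, covered_pixels):
    return point_count / covered_pixels

# Hypothetical case: a 2-million-point model filling a 500x500 pixel region.
print(points_per_pixel(2_000_000, 500 * 500))  # 8.0 points fighting per pixel
```

Any denoiser then has to undo that 8-to-1 oversubscription after the fact, which is the wasted work described above.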

Nanite does not use point clouds. It is likely using parametrized texture patches, where the model is split into multiple square patches and world position is stored in a texture. If I were to look for something similar, a NURBS patch would be close (if people still remember what those were). Dumping data into a texture automatically creates texture LODs via mipmapping, which allows you to increase or decrease the number of triangles according to distance easily.
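The mipmapping point can be sketched numerically (plain Python, an illustration of the general idea rather than anything Nanite is confirmed to do): each mip level halves the patch texture in both axes, so the number of stored positions, and hence the triangles you would tessellate from them, drops by 4x per level:

```python
# Sample count of a square position texture at a given mip level.
def mip_sample_count(base_resolution, mip_level):
    size = max(base_resolution >> mip_level, 1)  # halve per level, floor at 1x1
    return size * size

for level in range(4):
    print(level, mip_sample_count(128, level))  # 16384, 4096, 1024, 256
```

That built-in 4x-per-level reduction is what makes texture-stored geometry attractive for continuous LOD: the hardware already knows how to pick and filter the right level by distance.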

Another option is voxels, at the Atomontage level. However, no engine, as far as I know, currently has this tech ready for production.

Do it yourself, or pay someone to do it for you.

Once I get back to my computer I will try it on my own and share the results here if it works… I know Nanite is not a point cloud, that’s something else (I don’t understand how they render only 20 million triangles out of billions… does it only render the triangles in the camera’s view?)… I was just trying to find a workaround… I will try voxels and compute shaders if particles don’t work, and I will figure out shadows later! Thanks for the suggestions.

No, Unity will not be able to handle a mesh with millions of triangles.

We can divide it into multiple meshes.

Setting the Mesh Index Format to 32 bits (either via code or in the mesh asset inspector) should allow you to use meshes in Unity with up to 4 billion verts (if your machine can run that).

[screenshot: Index Format setting in the mesh asset inspector]
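For reference, the index format caps how many vertices a single mesh can address; the arithmetic behind the 16-bit default versus the 32-bit option (plain Python, just the powers of two):

```python
# A mesh index is an integer per triangle corner, so its bit width caps
# how many distinct vertices one mesh can reference.
def max_addressable_vertices(index_bits):
    return 2 ** index_bits

print(max_addressable_vertices(16))  # 65536 with the default UInt16 format
print(max_addressable_vertices(32))  # 4294967296, roughly the "4 billion" above
```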


…was just about to mention mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;


Sorry for not being clear :sweat_smile: What I actually meant by “cannot handle” is that Unity does not have anything like runtime mesh decimation, auto LOD, or culling of triangles that are not visible to the camera, which overall makes Unity unable to handle high-poly scenes with millions of triangles… Yes, I have tried handling millions of triangles in Unity with the normal workflow: it can handle 1.5 million triangles at 20 fps in URP on a potato laptop with Intel HD Graphics 400, an Intel Celeron 1.60 GHz, and 4 GB RAM, while on my other computer, which has a GTX 1050 Ti, a 10th-gen i3, and 16 GB RAM, Unity was able to handle more than 7-8 million triangles…
WHAT I HAD THOUGHT OF TO ACHIEVE RESULTS LIKE UE5 IN UNITY:
So I wanted to create some kind of system for a scene with billions of triangles that would only render 20 million polygons, just like the UE5 tech demo on PS5. That would require an auto-LOD or runtime mesh decimation system and some kind of advanced culling system in Unity… There were only two things in Unity that could do this job performantly: one was compute shaders and the other was the DOTS tech. Since I didn’t have much knowledge about DOTS, I was planning to use compute shaders, started prototyping with VFX Graph, and came across point clouds… But if it’s easier and possible to make runtime mesh decimation and triangle culling systems with ECS, I am ready to learn DOTS for it…

A home PC could render “millions of triangles” back in 2003, on a Riva TNT2 Pro. Slowly. 32-bit indices are currently supported in Unity, last time I checked.

AFAIK, nobody really does that anymore.

The last game that I saw that actually used RUNTIME mesh decimation was Gothic 1 (does anyone remember ID3DXPMesh?). The reason for that is (likely) that modern games require tangents and use normal maps, and when you start progressively collapsing edges, that messes up the appearance of the model. Which is why pretty much everybody and their mother uses LODs. And which is why there’s interest in Nanite: Nanite transforms the model into a form where progressively adding detail without messing up the model is easier.

AFAIK, nobody culls individual triangles either, because that’s what the GPU is for. The GPU clips invisible geometry prior to the rasterization phase, and manual per-polygon culling is actually slower than that. That’s why people typically use per-object frustum/occlusion culling, which Unity actually implements.

Be aware that per-polygon culling doesn’t actually make much sense in the first place, as in order to clip a polygon you have to transform the object into world space first anyway. So unless it is a BSP level, it is pointless, and in the case of a BSP level, clipping multiple polys at once is the better idea.
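For contrast, per-object culling is cheap precisely because it tests one bounding volume instead of every triangle. A minimal sketch in plain Python (the function name and plane layout are assumptions for illustration, not Unity’s API):

```python
# Per-object frustum culling: test an object's bounding sphere against the
# frustum planes. The object is culled as soon as the sphere lies entirely
# behind any single plane.
def sphere_in_frustum(center, radius, planes):
    # Each plane is (nx, ny, nz, d) with the normal pointing into the frustum,
    # so a signed distance below -radius means fully outside that plane.
    for nx, ny, nz, d in planes:
        signed_distance = nx * center[0] + ny * center[1] + nz * center[2] + d
        if signed_distance < -radius:
            return False
    return True

# Toy frustum: only a near plane at z = 1 (facing +z) and a far plane at z = 100.
planes = [(0, 0, 1, -1), (0, 0, -1, 100)]
print(sphere_in_frustum((0, 0, 50), 2, planes))   # True: inside the slab
print(sphere_in_frustum((0, 0, -10), 2, planes))  # False: behind the near plane
```

One sphere-vs-plane test per object per plane, versus transforming and testing every polygon, is why the per-object approach wins.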

And you wanted to do that… why? For what reason?

Generally the idea is to have some actual problem you’re trying to solve, and not just try to overcome a completely arbitrary challenge for no reason.

To answer your year-old question, OP: no, you aren’t crazy. Dreams on PS4 does something close to this, and it runs on the PS4, is able to support VR on decade-old hardware, and looks amazing. One of the best rendering techniques, probably tied only with Nanite.
Here’s a tutorial on how something like that could be achieved in VFX Graph, but making a whole game like that would probably take a skilled graphics programmer.

VFX Graph does have an SDF converter now, which I would imagine helps with this idea.
