Organic model to voxel generator including SkinnedMeshes!

Hello everyone

I decided a few days ago to make an editor tool that generates a voxel representation of any organic model. I'm glad to say it has turned out rather well so far, in that I have it working for MeshFilter models as well as any SkinnedMesh models :wink:

Before any further explanation, here is a web player preview:

http://www.shaderbytes.co.za/ian/voxel_generator/

Controls are:

Click, hold and drag the left mouse button to rotate the models on the x and y axes.

Click, hold and drag the middle mouse button to translate the camera on the xy plane.

Use the mouse wheel to translate the camera on the z axis.

Here is a screenshot of what you will see in the web player:

The tool generates a completely new character with all armatures and animations as per the source model. When testing a few different kinds of models I realized that there is not always sufficient vertex data in the source to generate a new voxel mesh that deforms properly.

I then wrote a subdivision utility (first phase only, no position smoothing) that subdivides the data of the original mesh and interpolates the bone weights, similar to how it is done in a regular 3D application when all edges have maximum creasing. I compared my results against a subdivision done in Blender.
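To give an idea of the weight interpolation, here is a minimal sketch of the kind of averaging done for a new midpoint vertex (illustrative only, not the actual tool code):

```csharp
// Rough sketch: average the bone weights of an edge's two endpoints
// to produce the weight set for the new midpoint vertex.
using System.Collections.Generic;
using UnityEngine;

public static class WeightInterpolation
{
    public static Dictionary<int, float> Midpoint(BoneWeight a, BoneWeight b)
    {
        var accum = new Dictionary<int, float>();
        Add(accum, a);
        Add(accum, b);

        // Average the two endpoints (each endpoint's weights sum to ~1).
        var bones = new List<int>(accum.Keys);
        foreach (int bone in bones)
            accum[bone] *= 0.5f;

        return accum; // bone index -> interpolated weight
    }

    static void Add(Dictionary<int, float> accum, BoneWeight w)
    {
        AddOne(accum, w.boneIndex0, w.weight0);
        AddOne(accum, w.boneIndex1, w.weight1);
        AddOne(accum, w.boneIndex2, w.weight2);
        AddOne(accum, w.boneIndex3, w.weight3);
    }

    static void AddOne(Dictionary<int, float> accum, int bone, float weight)
    {
        if (weight <= 0f) return;
        float existing;
        accum.TryGetValue(bone, out existing);
        accum[bone] = existing + weight;
    }
}
```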

Here is a screenshot showing the results of this process in Unity:

On the left is a simple 6-face, 12-tri cube with 2 bones deforming the mesh.

On the right is the subdivided mesh (3 iterations), showing how the bone weights of all the new vertices have been interpolated.

Using this data I then find the best-suited vertex to transfer data to the voxel mesh, redo the bind poses, and there it is :wink: If you have looked at the web player you will see 3 examples: 20-, 30- and 40-unit generations. Have a look at the lowest, 20-unit generation and you will see that even there the deformations behave well.
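In sketch form, the transfer and bind pose step boils down to something like this (simplified, with illustrative names; the bind pose formula is the standard Unity one):

```csharp
// Rough sketch of the weight transfer + bind pose step, assuming the voxel
// mesh and the duplicated bones already exist.
using UnityEngine;

public static class VoxelSkinning
{
    public static void Skin(Mesh voxelMesh, Vector3[] srcVerts, BoneWeight[] srcWeights,
                            Transform[] newBones, Transform voxelRoot)
    {
        var verts = voxelMesh.vertices;
        var weights = new BoneWeight[verts.Length];

        // Each voxel vertex borrows the weights of the closest (subdivided) source vertex.
        for (int i = 0; i < verts.Length; i++)
        {
            int best = 0;
            float bestDist = float.MaxValue;
            for (int j = 0; j < srcVerts.Length; j++)
            {
                float d = (srcVerts[j] - verts[i]).sqrMagnitude;
                if (d < bestDist) { bestDist = d; best = j; }
            }
            weights[i] = srcWeights[best];
        }
        voxelMesh.boneWeights = weights;

        // Redo the bind poses for the duplicated armature.
        var bindPoses = new Matrix4x4[newBones.Length];
        for (int i = 0; i < newBones.Length; i++)
            bindPoses[i] = newBones[i].worldToLocalMatrix * voxelRoot.localToWorldMatrix;
        voxelMesh.bindposes = bindPoses;
    }
}
```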

Going forward I still want to iron out a few edge-case situations in the face generation of the voxel mesh, so it's as optimal as it can be, and I will post some more examples soon.

When it's finished I plan to put it up for sale on the Asset Store for $5 or $10 to try to make a little cash.

This looks pretty awesome :open_mouth:

Very cool! You know, this could be a really nice complement to our Cubiquity voxel engine. Cubiquity could be used for modeling the environment, while your system could create characters which naturally fit into the visual style.

Very cool. How long does it take to calculate everything? I worked on an organic model to voxel generator for a side project, but the calculations took a few seconds so it couldn't be used in real time. I basically did a raycast in four directions for each voxel and stored all the enter and exit points in a collection. Then I went through the data and processed it to figure out which areas were solid. It worked pretty well, but I didn't store any color values.
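Very roughly, the fill along one scan line of the grid looked something like this (simplified from memory; all names here are just illustrative):

```csharp
// Sketch: mark voxels as solid along one scan line by pairing entry hits
// (+X rays) with exit hits (-X rays) against the model's mesh collider.
using System.Collections.Generic;
using UnityEngine;

public static class ScanlineVoxelFill
{
    public static void FillRow(bool[] solid, float voxelSize, Vector3 rowOrigin, int countX)
    {
        float length = countX * voxelSize;

        // Entry points: cast from the left; exit points: cast from the right.
        var enters = Physics.RaycastAll(rowOrigin - Vector3.right * 0.5f, Vector3.right, length + 1f);
        var exits  = Physics.RaycastAll(rowOrigin + Vector3.right * (length + 0.5f), Vector3.left, length + 1f);

        var enterX = new List<float>(); foreach (var h in enters) enterX.Add(h.point.x);
        var exitX  = new List<float>(); foreach (var h in exits)  exitX.Add(h.point.x);
        enterX.Sort(); exitX.Sort();

        // Pair each entry with the matching exit and mark the voxels in between as solid.
        int pairs = Mathf.Min(enterX.Count, exitX.Count);
        for (int p = 0; p < pairs; p++)
        {
            int start = Mathf.Clamp(Mathf.FloorToInt((enterX[p] - rowOrigin.x) / voxelSize), 0, countX - 1);
            int end   = Mathf.Clamp(Mathf.FloorToInt((exitX[p]  - rowOrigin.x) / voxelSize), 0, countX - 1);
            for (int x = start; x <= end; x++) solid[x] = true;
        }
    }
}
```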

-Dane

Thanks Dan

David, I have seen your thread and the video of the tank blasting up the environment… looks great ;)

Dane, the calculations are very heavy indeed, as I'm not only building the voxel mesh but also recursively subdividing the data of the original mesh to meet a threshold, then interpolating all the bone weights and selecting the top 4 weights from those. From there each vertex in each voxel finds the closest vertex from the subdivided data within a set bounds to take its weights from. Then I duplicate the entire bone structure and calculate new bind poses.
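To illustrate the "top 4 weights" part, a simplified sketch (not the exact tool code): the accumulated per-bone influences for a vertex are sorted, and the strongest four are kept and renormalized to fit Unity's BoneWeight struct.

```csharp
// Simplified sketch: keep the 4 strongest bone influences and renormalize them
// so they sum to 1 again (Unity's BoneWeight supports at most 4 influences).
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class WeightReduction
{
    public static BoneWeight TopFour(Dictionary<int, float> influences)
    {
        var top = influences.OrderByDescending(kv => kv.Value).Take(4).ToArray();
        float total = top.Sum(kv => kv.Value);
        if (total <= 0f) total = 1f;

        var bw = new BoneWeight();
        if (top.Length > 0) { bw.boneIndex0 = top[0].Key; bw.weight0 = top[0].Value / total; }
        if (top.Length > 1) { bw.boneIndex1 = top[1].Key; bw.weight1 = top[1].Value / total; }
        if (top.Length > 2) { bw.boneIndex2 = top[2].Key; bw.weight2 = top[2].Value / total; }
        if (top.Length > 3) { bw.boneIndex3 = top[3].Key; bw.weight3 = top[3].Value / total; }
        return bw;
    }
}
```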

Also, I am using editor-only functionality to assign the animation clips from the original to the new model.
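For anyone curious, the editor-only part I am referring to is along the lines of UnityEditor.AnimationUtility; a minimal sketch, assuming the legacy Animation component is used:

```csharp
// Editor-only sketch: copy the animation clips from the source model's
// Animation component to the generated voxel model (legacy animation assumed).
using UnityEditor;
using UnityEngine;

public static class ClipCopy
{
    public static void CopyClips(GameObject source, Animation target)
    {
        AnimationClip[] clips = AnimationUtility.GetAnimationClips(source);
        AnimationUtility.SetAnimationClips(target, clips);
        if (clips.Length > 0) target.clip = clips[0]; // default clip
    }
}
```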

That being said, since it all happens in the editor, the work is already done, so at runtime you can switch between the organic model and the voxel model without any computation required.
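A runtime swap is then just a matter of toggling the two pre-built objects, for example something as simple as this (hypothetical component; you would wire up the references in the inspector):

```csharp
// Minimal sketch of a runtime swap between the organic and voxel versions,
// assuming both models were generated in the editor and sit under the same parent.
using UnityEngine;

public class ModelSwap : MonoBehaviour
{
    public GameObject organicModel; // illustrative references, assigned in the inspector
    public GameObject voxelModel;

    public void ShowVoxel(bool voxel)
    {
        organicModel.SetActive(!voxel);
        voxelModel.SetActive(voxel);
    }
}
```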

That would make a neat game mechanic of sorts, I guess, but I was aiming more at people who do voxel game development as a hobby or side project and have a library of organic models stored up from all their other development endeavours, which they can now use to quickly make a character for their project at the click of a button.

Hey, that's Mighty No. 9!

Anyway, really cool. This is a pretty awesome thing. I wanna see more

Snowconesolid, +100 XP for being the first to identify the character used in my example :wink:

I built him to do a #MightySalute for the Kickstarter campaign. Here, have a look at the web player below; it has the schematic drawing in the background, better lighting, cube map reflections and the salute animation button:

https://googledrive.com/host/0B5GQag5iS_vzT3h6QU16Zl9jRlk/index.html

I will post more examples for the O2V tool soon.

That’s just wonderful. Can’t wait for it to make it to the Asset Store.

That's awesome man! Really great Mighty No. 9 model.

Nice. With that, you could make a nice “3d” version of retro 8-bit games.

Any release date in mind?

Hi Mementos

I will put it up real soon, perhaps within the next few days, up to a week. I'm not sure how long it will take from when I submit it to when it gets approved by Unity, but I don't think it takes that long thereafter.

I have now added another feature which calculates the AO (ambient occlusion) as part of the build process. The AO values are stored in the alpha channel of the vertex colors. There is a property in the shader to adjust the strength of the AO overlay effect.
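To clarify what "stored in the alpha channel" means, conceptually it is just this (simplified sketch; the shader then reads color.a and darkens the vertex color by the adjustable strength):

```csharp
// Simplified sketch: write the baked AO term into the alpha channel of the
// voxel mesh's vertex colors.
using UnityEngine;

public static class AoBake
{
    public static void StoreAo(Mesh voxelMesh, float[] aoPerVertex) // values in 0..1
    {
        Color[] colors = voxelMesh.colors;
        for (int i = 0; i < colors.Length; i++)
            colors[i].a = aoPerVertex[i];
        voxelMesh.colors = colors;
    }
}
```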

This is new and different in that, in the first example above, I used an external tool to generate the AO.

I will post another demo scene by tonight or tomorrow to show the above-mentioned additions and their controls, again with different models (a parrot and a dog, perhaps).

After that there is only one more item I want to add, which is full edge splits and a vertex color surface shader, so it's as complete as it can be for release. The edge splitting is required for proper hard edges with surface shaders and will increase the vertex count by up to 5 times; the tri count will remain the same. This option will be exposed in the inspector so it can be toggled according to what is required.
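For those wondering what the edge split does: every triangle simply gets its own copy of its three vertices, so normals are no longer averaged across shared vertices. A rough sketch (the real version also has to carry the bone weights across):

```csharp
// Rough sketch of a full edge split: give every triangle its own 3 vertices so
// hard edges get correct (unshared) normals. Vertex count grows, tri count doesn't.
using UnityEngine;

public static class EdgeSplit
{
    public static void SplitAll(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        Color[] cols = mesh.colors;
        int[] tris = mesh.triangles;

        var newVerts = new Vector3[tris.Length];
        var newCols = new Color[tris.Length];
        var newTris = new int[tris.Length];

        for (int i = 0; i < tris.Length; i++)
        {
            newVerts[i] = verts[tris[i]];
            if (cols.Length > 0) newCols[i] = cols[tris[i]];
            newTris[i] = i;
        }

        mesh.vertices = newVerts;
        if (cols.Length > 0) mesh.colors = newCols;
        mesh.triangles = newTris;
        mesh.RecalculateNormals(); // now flat-shaded per face
    }
}
```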

UPDATE 2

Hello again

As mentioned, here is a demo of the AO baked into the vertex colors as part of the generation process, using a different model.

http://www.shaderbytes.co.za/ian/voxel_generator/sample_2/

Image of what you will see in the web player:

Use the slider to adjust the AO value from nothing to a maximum of 2. I have it set to 0.5 by default, which is very subtle but effective. It really makes a difference: set it to 0 and rotate around and you will see it is impossible to make out the contours, because the model is purely vertex lit.

You can click the animation button to cycle through a run, walk, jump and idle animation set, and use the slider to adjust the speed.


I currently have 3 methods for determining the voxel generation itself and have found varying results between them. The best method used the subdivision data, but it also had the problem of being too accurate, producing some voxels that are not desired.

With this in mind I have now decided to include an additional human-intervention step in the process, to fine-tune the voxels before the rest of the process continues. This intervention step can be toggled on or off in the inspector.

What this does is create a game object with every voxel as an independent child object/mesh. You can then edit this game object's children by deleting voxels, changing their colors, etc.

Once you are happy, you can complete the build, which will build up new voxel data based on this game object and from there complete the rest of the process.
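In sketch form, the rebuild just walks the surviving children of the debug rig and turns each one back into a voxel entry (names are illustrative, not the actual tool API):

```csharp
// Sketch of rebuilding the voxel data from the (possibly edited) debug rig:
// every surviving child is one voxel; its position and material color are read back.
using System.Collections.Generic;
using UnityEngine;

public struct VoxelEntry
{
    public Vector3 center;
    public Color color;
}

public static class DebugRigRebuild
{
    public static List<VoxelEntry> Collect(Transform debugRigRoot)
    {
        var voxels = new List<VoxelEntry>();
        foreach (Transform child in debugRigRoot)
        {
            var renderer = child.GetComponent<MeshRenderer>();
            if (renderer == null) continue; // non-voxel children are skipped

            voxels.Add(new VoxelEntry
            {
                center = child.localPosition,
                color = renderer.sharedMaterial.color
            });
        }
        return voxels;
    }
}
```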

I had an idea on how to handle edge-case problems where the limbs get joined because the left/right voxels are adjacent in the grid. I will define divider logic that is tested against to prevent the joining. I won't try to explain it all here; it would be best that I make a demo video explaining the usage when I'm done.

I already have edge splitting built in as part of the human-intervention step, so this can be carried across to the final build if you wish to have it (for hard edges with correct normals in surface shaders).

UPDATE 3

Hi Everyone

Before all the text, here is a web player with another demonstration:

http://www.shaderbytes.co.za/o2v_beta/test_2/

Here is a screenshot of what to expect in the web player:

[Screenshot: sample_5.jpg]

This demonstration is definitely worth checking out, as the tool was tested on a fantastic model with a huge variety of quality animations, built by the good people at 3DRT. They also sell this product on the Unity Asset Store: Warbots Micromarines.

I have been working day and night adding features and increasing production quality.

The main new addition is the human-intervention step in the generation process. This builds a debug version of the object where every voxel can be selected, and you can change its color, exclude it from the final build, change the expanded region it uses to bind bone weights, and lastly force dividing logic between any adjacent voxels.

This example was achieved with this process.

I first thought the auto-generation was always about 95% good and only required minor tweaks, until this last test, where the combination of shapes and default pose did not come close to such a ratio… I had to do quite a bit of tweaking in the debug stage, mostly excluding unwanted voxels and changing many of the colors, plus several dividers around the head area.

For colors, this texture had plenty of fine detail in it, so the current method used for automated color selection from the UV map did not produce good results here; nearly every voxel was a different color. Currently I select the subdivision vertex closest to the center of the voxel and use its UV to pick the color. I will have to upgrade this process to something more advanced.
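For reference, the current per-voxel color pick is essentially a single texture lookup at the UV of that closest vertex; a minimal sketch, assuming the source texture is readable:

```csharp
// Sketch of the current per-voxel color pick: sample the source texture at the UV
// of the subdivision vertex closest to the voxel center. A smarter approach would
// average several samples over the voxel's footprint.
using UnityEngine;

public static class VoxelColorPick
{
    public static Color PickColor(Texture2D sourceTexture, Vector3 voxelCenter,
                                  Vector3[] subdivVerts, Vector2[] subdivUvs)
    {
        int best = 0;
        float bestDist = float.MaxValue;
        for (int i = 0; i < subdivVerts.Length; i++)
        {
            float d = (subdivVerts[i] - voxelCenter).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = i; }
        }
        Vector2 uv = subdivUvs[best];
        return sourceTexture.GetPixelBilinear(uv.x, uv.y); // texture must be readable
    }
}
```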

IMPORTANT …
It's evident now more than ever that:

a. The automation is not clever enough on its own, so some tweaking is generally always going to be required.

b. It will not work in all cases for all models. I'm now not sure how best to express this limitation in the commercial release.

All the data is serialized properly to disk, and you can save and close Unity without losing your progress. From this debug rig you can generate the final model, and if something is still not as you want, you can delete the model, make more edits to the debug rig, generate again, etc.

The final model is now serialized to disk as a prefab with the mesh as a sub-asset, just like a regular imported model.

I wrote 3 shaders for the models that can all render the AO over vertex colors: unlit, vertex lit, and lastly a modified version of the built-in VertexLit which can render the AO to a certain degree. The last one is only a fallback; the first two should be fine.

You can now opt to have edge splitting for every face, and the last feature added was a semi-smoothing option.

I have it set up so it can counter any bizarre armature and skinned mesh rotations, like those that come from Blender, etc. So far so good in this regard; I've tested a few variations here.

Other limitations that will be present in first release:

I don't split mesh generations above 65k vertices into sub-meshes yet… the build will fail if it exceeds this limit that Unity sets for a single mesh.
I don't support models with sub-meshes yet.
I don't support a GameObject hierarchy with multiple models nested inside it.
It uses vertex colors based on the original UV map. So even if you switch to a regular texture shader and add the original texture, it would look the same as the vertex color shader, because all stored UV coords point to the same pixel value per voxel; in other words, there is no detailed texture mapping on any faces.

Wow, that was a mouthful. If you have any feedback on the voxel model made in this demo, please feel free to let me know.

Hi there! =D Amazing idea!
Would you like to test it with UMA?
http://forum.unity3d.com/threads/153689-procedural-character-generation

Cheers!
Fernando R.

Hey Fernando thanks.

My tool might just blow up trying to test it on your awesome generator :wink:

Jokes aside, I have read bits and pieces of your thread in the past, but I can't remember: do you have any sub-meshes in your characters?

I do not handle sub-meshes yet (or multiple materials, for that matter),

but this is one of the limitations I do think I need to resolve to increase usability going forward.

Also, once my model is generated it does not use UV maps any more; it's vertex coloured, so this is surely a point of contention for a mismatch?

I will have to go read up on your tool again to get a grip on what could possibly be used for interesting results with mine :wink:

cheers

Hey =)
Female avatars in the example scene have an extra material with opacity for the eyelashes, but of course you can consider not including those (just one line of code in UMACrowd), so all avatars would be one material/mesh.
Looks like we could get amazing results =DDD
The default shaders use a diffuse atlas and normal map + spec + gloss atlases, so I take it you could just access the diffuse atlas and bring the color values to the vertices.

Cheers!

If you want to try this out, please send me a PM and I can add you on Skype, so I can help you with the initial setup.

PM sent, thanks Fernando.

I made a few demo videos which show how to use the tool. This is the first time I have ever made instructional videos, so please bear with my novice errors. I also could not do high-bitrate videos, as my computer does not have enough grunt; I have a 3-year-old onboard chipset.

There are 5 short videos: the first 4 show how I used/improved the tool to generate the voxel marine model posted above, and the last video discusses the two different algorithms and other options.

The marine model was a great test because it is an example of a problematic model which requires some tweaking to get good results. Take note that not all models would require this level of editing.

I made a mistake in video 3 and had to do one process twice (separating the head), but I did not want to record the video again, and these videos can only be used in the WIP stage anyway. I will have to do new videos without the marine model when the tool becomes commercial.

Part 1:

Part 2:

Part 3:

Part 4:

Part 5: