Here’s a good example, though it’s from Unreal’s MetaHuman side. This is a 3D model with a face and body rig; the iPhone is capturing the mocap data and streaming it straight into the editor. The new MetaHuman updates in 5.2 make this easy to do with relatively low-cost equipment.
This is also a good example of how far Unity has slipped in the character pipeline: not only in terms of ready-made, customizable characters that you can use for…free… but also the high-end quality needed for cinematic close-up work.
(The 1:30 mark shows an iPhone being used to translate the mocap in real time.)
I do something very similar, but in Reallusion iClone, which takes just a couple of extra steps to export to Unity. The model quality, however, is on par with Digital Human/Ziva, without needing a massive team or an extensive workflow to make it a reality.
It’s something a single developer can do… something Unity has obviously (and unfortunately) lost sight of.
(By the way, I’m not advocating Unreal as a gamedev platform. I still prefer Unity for its ease of use in getting started and doing something very quickly.)
Here’s an updated workflow from Reallusion iClone… probably the best indie solution (presently) for getting “AAA-style” characters into Unity. While not free, it’s certainly the easiest process from a Next, Next, Finish standpoint.
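For the Unity side of that process, something like the AssetPostprocessor sketch below can normalize the imports so each FBX doesn’t need hand-fixing. This is a rough example, not the official bridge; the “/iClone/” folder check is just an assumed project layout, and the settings are the ones that typically matter for these characters:

```csharp
using UnityEditor;

// Hedged sketch: auto-apply common import settings to iClone/CC4 FBX files.
// The "/iClone/" folder name is an assumption about your project layout.
public class ICloneImportFixer : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("/iClone/")) return; // only touch assumed iClone imports

        var importer = (ModelImporter)assetImporter;
        importer.animationType = ModelImporterAnimationType.Human; // Humanoid rig for retargeted body anims
        importer.importBlendShapes = true;                         // keep the facial blendshapes
        importer.importBlendShapeNormals = ModelImporterNormals.Calculate; // avoids shading seams on morphs
    }
}
```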
That’s awesome! I can definitely see what you mean.
I’m starting to get quoted now, so I have to be clear that I am a programmer, and this artist wizardry is out of my wheelhouse! I have my own opinions on which parts of the engine they should focus on, and which they should put less focus on, but they’re unrelated to the art and rendering pipelines, as those are not my areas of true expertise. It seems the others in this thread already have a good grasp on the state of things without my musings.
This is awesome. And most character artists in Unity don’t build their own rigs, especially for the face. I’m planning on using my new prototype to do EXACTLY this: not with an iPhone, but taking it way further with more facial performance detail.
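As a rough illustration of what driving that facial detail looks like on the Unity side, here’s a minimal sketch that maps ARKit-style weights onto a SkinnedMeshRenderer. The weight source and the “jawOpen”-style names are assumptions about the capture feed and the mesh, not any particular product’s API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: drive a face mesh's blendshapes from ARKit-style weights (0..1).
// Whatever feeds ApplyWeights (Live Capture, a network stream, recorded data)
// is a stand-in here.
public class FacialPerformanceDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;

    // Cache blendshape indices once; names like "jawOpen" assume ARKit naming on the mesh.
    readonly Dictionary<string, int> indices = new Dictionary<string, int>();

    void Start()
    {
        Mesh mesh = face.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
            indices[mesh.GetBlendShapeName(i)] = i;
    }

    public void ApplyWeights(Dictionary<string, float> arkitWeights)
    {
        foreach (var kv in arkitWeights)
        {
            if (indices.TryGetValue(kv.Key, out int idx))
                face.SetBlendShapeWeight(idx, kv.Value * 100f); // Unity weights are 0..100
        }
    }
}
```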
The biggest problem here: spending $1,000 for the facial/model import part, or $7,000 for what would be a really nice full solution, still comes back to getting good characters, costumes, and scenes that work together for your game idea, doing the face and body acting (which some of the store animations may have done better), getting the lighting right, and making a game that is playable. In my case I’m just going with the el cheapie option, Daz to Unity with ARKit for free plus store body animations, and concentrating on what I can do, to see if anyone is interested in my game. It’s sort of the same thing with going from Unity to Unreal: it would cost me a lot of time, and Unreal is a fair bit harder to use than Unity (no offence). It may be slightly better, but my indie game is never going to be perfect; it’s a trade-off of money/effort against likely return, i.e. nothing. I’ll try the free trial version of iClone/CC4 to see what results I can get in 30 days ;) Maybe just $1,000 for Daz to CC4 to Unity with the iClone facial stuff would be worth it, if it looks better than plain Daz to Unity with ARKit.
I think that people who play games with hyper-realistic facial textures, shapes, and animation will pick out flaws more readily than they will in lower-quality faces that were never intended to be hyper-realistic; nobody goes looking for flaws in those, since the look is accepted as-is by design.
I honestly don’t see any reason to try to develop hyper-realistic faces in today’s games. The bar is too high for the benefit it promises to provide, period.
Daz has updated their bridges in the past few days, including the Unity one. It’s still hit-and-miss, though: one character model imported very nicely (probably the best I’ve seen from Daz), while the other character model test was a mess, so results will vary. It did seem to import props/static meshes much better, but everything will still be largely unusable in Unity due to the high-poly meshes.
I completely agree with this. On the Unreal side it seems much better with Lumen, but it’s very expensive, too. I also did some ambient stuff myself, but only SDF-based AO so far, which works well, though yours looks better. May I ask what technique you used, and how taxing it is on the GPU?
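For reference, the SDF-based AO I mean is essentially the standard multi-tap distance-field trick. A rough CPU-side sketch in C# for readability (in practice this lives in a shader, and sampleSdf stands in for whatever signed distance field you have):

```csharp
using System;
using UnityEngine;

// Conceptual sketch of multi-tap SDF ambient occlusion: march a few steps along
// the surface normal and compare the distance field value to the step distance.
public static class SdfAo
{
    public static float Occlusion(Vector3 pos, Vector3 normal, Func<Vector3, float> sampleSdf)
    {
        float occlusion = 0f;
        float weight = 1f;
        for (int i = 1; i <= 5; i++)
        {
            float step = 0.1f * i;                       // march outward along the normal
            float dist = sampleSdf(pos + normal * step); // distance to the nearest surface
            occlusion += weight * (step - dist);         // nearby geometry => dist < step => occlusion
            weight *= 0.5f;                              // closer samples count more
        }
        return Mathf.Clamp01(1f - occlusion);            // 1 = fully open, 0 = fully occluded
    }
}
```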
With path tracing, you can get the lighting from outside to affect the interior environments, especially with careful placement of reflection probes in the room. For my future projects, I won’t be using real-time raytracing, because the performance hit isn’t worth it.
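By careful probe placement I mean something like the sketch below: a box-projected probe fitted to the room, re-rendered only on demand instead of paying a per-frame raytracing cost. The room dimensions and refresh strategy are placeholder assumptions:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Rough sketch: a box-projected reflection probe fitted to a room, so exterior
// light coming through windows shows up in interior reflections.
public class RoomProbeSetup : MonoBehaviour
{
    void Start()
    {
        var probe = gameObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting; // re-render only when lighting changes
        probe.boxProjection = true;           // match reflections to the room's walls
        probe.size = new Vector3(8f, 3f, 6f); // assumed room bounds
        probe.RenderProbe();                  // one render, not a per-frame cost
    }
}
```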
Isn’t this “just” additional positioned area / spot lights in the Heretic demo, like classic three-point character lighting? The Enemies demo has SSGI / RT on top of this, but also fakes all reflections / bounces with additional shadow-casting lights (mostly area lights).
The Enemies demo is basically throwing everything at it: APV (Adaptive Probe Volumes, so no lightmaps), SSGI (which you can choose to have raytraced or not), and a highly complicated lighting setup for the character that changes a lot during the scene, consisting of very expensive area lights with PCSS shadows that fake the reflections / indirect light from e.g. the chess board.
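For anyone who wants the simpler, Heretic-style version of that, here’s a rough key/fill/back rig in plain Unity. Spot lights stand in for the far more expensive real-time area lights (an HDRP feature), and all positions and intensities are made-up starting values:

```csharp
using UnityEngine;

// Illustrative sketch of a classic three-point character rig: only the key
// light casts shadows; fill and back lights shape the face and rim.
public class CharacterLightRig : MonoBehaviour
{
    public Transform character;

    void Start()
    {
        CreateLight("Key",  new Vector3( 1.5f, 2f,  2f), 12f, LightShadows.Soft); // main shadow-casting light
        CreateLight("Fill", new Vector3(-2f,   1f,  1f),  4f, LightShadows.None); // softens the key's shadows
        CreateLight("Back", new Vector3( 0f,   2f, -2f),  8f, LightShadows.None); // rim separation from background
    }

    void CreateLight(string name, Vector3 offset, float intensity, LightShadows shadows)
    {
        var go = new GameObject(name + " Light");
        go.transform.position = character.position + offset;
        go.transform.LookAt(character.position + Vector3.up * 1.5f); // aim at roughly head height

        var light = go.AddComponent<Light>();
        light.type = LightType.Spot;
        light.intensity = intensity;
        light.shadows = shadows;
    }
}
```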