Enemies: New tech demo showcasing Unity's growing humanoid toolsets

No one’s talking about this!?

I wasn’t impressed when I heard Weta was picked up by Unity, but I gotta say, after watching this video I’m absolutely blown the f*** away.

The hair is likely a non-realtime bake, but the skin shaders, the rigging, and the animation are all so damned spectacular. If I’m being cynical, this is kind of what you’d expect to see from Unity after bringing the Weta guys on: a super indulgent clip with no backstory on the tools and no talk about getting this level of quality into an actual game. LA Noire comes to mind… it had all that facial capture and all those voice lines, and all that production value just didn’t really add up to a better game. I hope Unity is thinking about this from a developer perspective: “How can we develop ergonomic tools that we can build on over time to produce next-level humanoid entities in our games?”

All that said, absolutely amazing demo. Even knowing it’s likely all smoke and mirrors, it’s still incredible and a hell of a benchmark.

The question is, will we get prebuilt, automated rigs that can automatically extract animated emotion from an MP3 audio file? Or can we animate emotion by hand using simple sliders? Will we finally start getting automatic mouth animations based on parsed dialogue files? Even just giving characters realistic idle blending, with eyes darting around a room, blinks, and subtle head motions based on nearby points of interest, would be a pretty big step in the right direction. Or was this just flexing top-tier animators and riggers with decades of experience in a medium that doesn’t really translate to the real-time Unity engine? At the very least this demo shows the potential for Unity to be used as a platform to create INCREDIBLE cinematics with a few bells and whistles.
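
To be concrete about what I mean by “idle blending”, here’s a rough sketch of the kind of thing I’d want shipped and polished. This is just my own illustrative C#; the component, the blink blendshape index, and all the timings are placeholders, not anything Unity has announced.

```csharp
using UnityEngine;

// Rough sketch of procedural idling: the head drifting between nearby
// points of interest plus occasional blinks. Placeholder values throughout.
public class ProceduralIdle : MonoBehaviour
{
    public Transform head;
    public Transform[] pointsOfInterest;   // things the character may glance at
    public SkinnedMeshRenderer face;       // assumes the mesh has a blink blendshape
    public int blinkShapeIndex = 0;        // hypothetical index of that blendshape

    Transform currentTarget;
    float nextGlanceTime, nextBlinkTime, blinkTimer;

    // LateUpdate so this layers on top of whatever the Animator did this frame.
    void LateUpdate()
    {
        // Pick a new point of interest every few seconds.
        if (Time.time >= nextGlanceTime && pointsOfInterest.Length > 0)
        {
            currentTarget = pointsOfInterest[Random.Range(0, pointsOfInterest.Length)];
            nextGlanceTime = Time.time + Random.Range(2f, 5f);
        }

        // Ease the head toward the current target for subtle motion.
        if (currentTarget != null)
        {
            Quaternion look = Quaternion.LookRotation(currentTarget.position - head.position);
            head.rotation = Quaternion.Slerp(head.rotation, look, Time.deltaTime * 2f);
        }

        // Trigger a short blink every few seconds.
        if (Time.time >= nextBlinkTime)
        {
            blinkTimer = 0.2f;
            nextBlinkTime = Time.time + Random.Range(3f, 6f);
        }
        if (blinkTimer > 0f)
        {
            blinkTimer = Mathf.Max(blinkTimer - Time.deltaTime, 0f);
            // Ramps 0 -> 100 -> 0 over the 0.2 s blink window.
            float weight = Mathf.PingPong(blinkTimer * 10f, 1f) * 100f;
            face.SetBlendShapeWeight(blinkShapeIndex, weight);
        }
    }
}
```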

Maybe this is something we look back on as another “Blacksmith”: largely glitz and glamor. But maybe we look back on it as a short glimpse of how fantastic cinematic scenes can look in real time within a game engine, and of how Unity built on this hand-crafted segment and used it as a benchmark for powerful, automated tools for real-time humanoid generation.

Because we all know better.

They show off demos like this every couple of years, and they only ever work with a single point release (likely now only with heavily modified versions of various packages as well), showcasing technology that will make it into the engine five years down the line if we’re lucky.

13 Likes

Well, at least part of the tech is promised for release in Q2 this year, according to the corresponding article:

It’s nice to see that the tech is building on the last demo, though. They’re not starting from scratch, which makes it likely that more and more of this flows into the engine itself. It just takes time.

In the end, let’s be real: visuals that push the ceiling of what’s possible will ALWAYS require hand-tuned optimization and won’t work out of the box in any engine. It would be great if it were different, but that’s how computer tech works. That’s true even in entirely different industries.

1 Like

Introducing Enemies: The latest evolution in high-fidelity digital humans from Unity
“When can I have it?”
As with previous projects, the Demo team will be sharing the technology developed for Enemies with the community to try out in their own Unity projects.

In a month or two, we’ll release a Digital Human 2.0 package that contains all of the updates and enhancements we’ve made since the version we shared for The Heretic.

We will also release a package containing the strand-based Hair system on GitHub, which allows us to collect feedback and make updates before it becomes an officially supported feature. Keep an eye on Unity’s blog and social media to make sure you’re alerted when these packages are available.

Most of the improvements in Unity that originated from the production of Enemies, or were directly adopted in it, are already in Unity 2021.2 or will be shipping in 2022.1 or 2022.2.

9 Likes

While I appreciate the transparency, these aren’t the sort of assets that are going to get people excited about what fruits we can look forward to from these tech demos. The big difference between this and the Unreal stuff we got a year ago is that all of those bells and whistles were just so damned applicable to a great many people’s pipelines; the raw “it just works” factor and the technical payoff were so blatantly obvious.

If you’ve done system-taxing, overnight bakes for large mesh animations or hair simulations, this isn’t that exciting. Yes, the outside observer who doesn’t know what a bake is might be blown away by the “hair physics” and “facial animations”, but once you understand that it’s essentially a flip book streamed from an external 3D package, and isn’t feasible for mass content in most games because of the memory requirements and the sheer manpower needed to create it, the allure fades pretty quickly. While this pipeline can generate fantastic cinematics, when you get down to it you’re probably better off making these stunning short clips in an environment built from the ground up for it, like Maya, and exporting the result as your game’s cinematic or ad.
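
To put a rough number on the memory side (every figure below is made up for illustration; nothing here comes from the demo itself), here’s the back-of-the-envelope math for a baked per-frame vertex cache:

```csharp
using UnityEngine;

// Illustrative estimate of why baked per-frame vertex caches ("flip books")
// get heavy fast. All counts are hypothetical placeholder numbers.
public static class VertexCacheEstimate
{
    public static void Log()
    {
        long vertices = 100_000;                 // hero head + hair proxy mesh (assumed)
        long framesPerSecond = 30;
        long seconds = 60;                       // one minute of performance
        long bytesPerVertexPerFrame = 3 * 4;     // xyz positions as 32-bit floats

        long totalBytes = vertices * framesPerSecond * seconds * bytesPerVertexPerFrame;
        Debug.Log($"Uncompressed cache: {totalBytes / (1024f * 1024f * 1024f):F2} GB");
        // ~2.0 GB for a single one-minute performance, before normals,
        // tangents, compression, or streaming overhead.
    }
}
```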

Contrast this with Unreal already generating stunning real-time vistas at such a quality level that Hollywood is adopting the engine into its pipeline for movies and shows that are already out, and this demo feels a little uninspired.

Reading the behind-the-scenes write-up, I was really hoping to learn about some sort of custom physics system you had created to get hair working in real time, or some revolutionary toolkit that makes the animation generated for this humanoid capturable and applicable to other humanoid rigs. That somehow some of these systems were modular, real-time, blanket solutions that could broadly improve the humanoid assets of almost any team looking to use them.

I don’t know… I don’t really get it. You guys at Unity clearly have talented people, and you can clearly create amazing things. I don’t know why you consistently put out these demos without saying, “See that cool shit? We’re going to polish the hell out of it, make it ergonomic, release fantastic tutorials on how to use it, make it as painless as possible, and everyone using Unity two years from now is going to have the best real-time animated characters, with emotion and facial animation, on any platform on the planet.” We get it, sometimes things don’t go as planned, sometimes things don’t come together and you have egg on your face. But lately I’m not sure I know what it is you guys are doing. What are you excited about? What cool sh*t drives you?

7 Likes

Btw, someone named Mark Schoennagel commented on the video that this was made without source modification.
Is that true?

@IllTemperedTunas You have a slightly strong one-man-studio view on this… I’d tend to say that slightly larger studios, with at least two or three dedicated artists, can very well be excited about such improvements even if they aren’t absolutely plug and play.
As far as Unreal is concerned, do you have actual practical experience with it, or have you just heard their promises that what they show is doable through easily usable tools, without months of manpower spent on optimization?

I’d tend to say striving to push Unity’s boundaries is what drives the demo devs. Maybe those resources could be used somewhere else, who knows, but it’s a justifiable drive if you ask me xP

That’s the thing: these aren’t necessarily big additions. Bulk mesh imports and animations aren’t really a Unity addition; they depend on the quality of the assets you’re bringing into Unity. If you have a big fancy camera, a professional actress, professional lighting, recording tools, and a composite setup, then yes, of course you can get a good performance in Unity, but you could get a good performance in darn near anything else with a heck of a lot less overhead.

I made my post as someone who’s been the guy crunching the high-end simulations for these bakes. Do I have a practical view on Unreal? Yes, I’ve worked professionally in both Unreal and Unity as a tech/effects artist on a variety of projects.

There are tons of videos on their incredible advancements in particles, terrain, rendering tech, real-time scripting, animation, and on and on…

Here’s a video on their character tech. It doesn’t look as good as this demo, but it’s a true game-ready system that generates unique characters on the fly without baking animations into a streamed file:

https://www.youtube.com/watch?v=S3F1vZYpH8c

You can find more here:
https://www.youtube.com/c/UnrealEngine/videos?view=0&sort=p&flow=grid

Their recent Matrix demo was pretty mind-blowing.

1 Like

If Mark says so, then it is.

2 Likes

You can get all this done right now in Unity. I just built an entire procedural animation system that sits on top of a conversational AI and intent engine. It reacts to the dialogue, SSML tracks, audio volume, and spectral analysis, uses phoneme-to-viseme timing to sync voice to face and lip motion, and blends emotions from the face rig into the visemes. It has parameters for procedural idling and gesturing, so any kind of character can be dialed in: no canned animation loops, and it can be calmed down to sleepiness levels or amped up into a full-blown traffic-rage rant. I’d be happy with a hair shader a few steps beyond what I have now, but I’m happy with the results and, more importantly, the people paying me are stoked.
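
As a very condensed sketch of the viseme half of that (my own illustrative code, not anything Unity ships; the timing track would come from your own phoneme-to-viseme pass over the dialogue audio):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Plays back a pre-computed phoneme-to-viseme track against a face rig's
// blendshapes, with a flat emotion layer blended on top. Placeholder names.
public class VisemeDriver : MonoBehaviour
{
    [System.Serializable]
    public struct VisemeKey { public float time; public int shapeIndex; public float weight; }

    public SkinnedMeshRenderer face;
    public List<VisemeKey> track = new List<VisemeKey>();  // sorted by time
    public int emotionShapeIndex;                          // e.g. a "smile" blendshape
    [Range(0f, 1f)] public float emotionInfluence = 0.3f;  // how much the emotion layer bleeds in

    float clock;

    void Update()
    {
        clock += Time.deltaTime;

        // Find the segment we're inside and blend between its two keys.
        for (int i = 0; i < track.Count - 1; i++)
        {
            if (clock < track[i].time || clock >= track[i + 1].time) continue;

            float t = Mathf.InverseLerp(track[i].time, track[i + 1].time, clock);
            float w = Mathf.Lerp(track[i].weight, track[i + 1].weight, t);

            // Mouth shape from the viseme track, attenuated so the emotion
            // layer still reads on top of it. (A real system would also fade
            // out the previous viseme's blendshape here.)
            face.SetBlendShapeWeight(track[i].shapeIndex, w * (1f - emotionInfluence) * 100f);
            face.SetBlendShapeWeight(emotionShapeIndex, emotionInfluence * 100f);
            break;
        }
    }
}
```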

I am unabashedly using MakeHuman to extract avatars, as I’m keeping things as open source as possible. I export the one with the face rig, then change the body armature to a gaming rig, reuse the weights of the first rigged avatar, and just add the tags in C4D; a new avatar can be put together in under an hour. They can also be blend-morphed, so you can choose two avatars and pull a slider to get a hybrid. I would like to see the soft-tissue tools in the pipeline soon; proper use would add mass to those larger creatures and muscled warriors that just don’t seem to have any weight in boss-fight scenes and the like.
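
For the slider-hybrid part, this is roughly the idea in Unity terms. Purely illustrative: it assumes both exports share the same topology and vertex order (and Read/Write enabled meshes), and a real tool would bake the result instead of rebuilding it on the fly.

```csharp
using UnityEngine;

// Blends two same-topology avatar meshes into a hybrid driven by a slider.
public class AvatarHybrid : MonoBehaviour
{
    public Mesh avatarA;
    public Mesh avatarB;                    // must match avatarA's vertex order
    public SkinnedMeshRenderer target;
    [Range(0f, 1f)] public float blend = 0.5f;

    Mesh hybrid;

    void Start()
    {
        hybrid = Instantiate(avatarA);      // copy keeps A's bone weights and bind poses
        target.sharedMesh = hybrid;
        Rebuild();
    }

    // Re-run when the slider changes in the Inspector (editor only).
    void OnValidate()
    {
        if (hybrid != null) Rebuild();
    }

    void Rebuild()
    {
        Vector3[] a = avatarA.vertices;
        Vector3[] b = avatarB.vertices;
        var blended = new Vector3[a.Length];
        for (int i = 0; i < a.Length; i++)
            blended[i] = Vector3.Lerp(a[i], b[i], blend);

        hybrid.vertices = blended;
        hybrid.RecalculateNormals();
        hybrid.RecalculateBounds();
    }
}
```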

1 Like

Of course a scanned face is going to beat a synthetic face. If you want an apples-to-apples comparison with Unreal, you should be comparing against this instead of MetaHumans:

1 Like

The light diffusion is wrong, especially on darker skin; this is true of all the CG I have reviewed, and hair is even worse.
@SebLagarde did some work (I think) on dark skin in the game Remember Me, but even there they used very dark scenes and boosted the specular as an artistic stylization, as a way to mask things that don’t work.

1 Like

Hi there! Just my 2 cents, and sorry for the length. TL;DR: absolutely spectacular demo. OK, it’s a demo, but for a demo it’s one of the most (or the most) impressive short demo/CGI films. This looks totally like offline rendering; if you showed it to a movie crowd in a theater, they would think it’s a film with a 150-million-dollar budget à la Avatar.
But it’s not, and that’s good (for us).

Congratulations!

Incredible work, Unity and Enemies team. This made me change my mind on HDRP; I’ll switch now and accept whatever problems may come. It’s one of those times, half-way into the quest, where you’re at the point of no return: you either continue on half a tank (and hope) or you go back to the start (and sink). Thankfully it’s not that dire, since we can always roll back to a previous project version that worked. But sometimes there is enough reason to upgrade even mid-production, when you’re not supposed to; the choice of tools is supposed to be final so you don’t keep switching and facing new bugs, which is why you stick to an LTS version and don’t budge until the project is done. But even an LTS gets old eventually, and this new technology is worth it, kind of like UE5’s Nanite and Lumen. So thank you, Unity Technologies, for listening.

Will this scene be available to test on our own computers? I read that the demo runs on a GeForce RTX 3090 and uses HDRP GPU ray tracing: reflections, and especially the sun shadows, the light rays in the windows, and the shading itself, which definitely has that ray-traced, “offline rendering” look you’d get from RenderMan or Arnold path tracers. I’m asking because I’m trying to reach this graphic quality and visual fidelity, but we don’t know what the project settings are, exactly, and project settings can totally change the look…

  • Is it path tracing? I understand it’s HDRP ray tracing, but is it the path-traced kind, and is it RTX ray tracing specifically? To my knowledge, HDRP real-time ray tracing only supports Nvidia’s RTX hardware, not AMD Radeon’s equivalent (a quick runtime check for this is sketched right after this list). It’s not so bad, though: I checked on Steam, and the larger share of players are on Nvidia GPUs, not AMD Radeon XTs. At least 20% have Nvidia RTX cards (2000 series and up, through the RTX 3090), while the AMD Radeon 5600/5700 and 6000-series XTs are under 5%. 20% of 150 million Steam users is about 30 million Nvidia RTX holders, versus roughly 5 million AMD XT holders. In my case I have an AMD Radeon, so it’s a problem that HDRP real-time ray tracing requires a GeForce RTX card with the ray-tracing hardware on the GPU itself. It’s not ideal; you’d rather support both GPU brands and not cut yourself off from the 5 million people who happen to have an AMD XT GPU. That’s a big chunk lost to being forced onto one vendor’s hardware.

  • The YouTube comments said the demo runs at 4K resolution on something like a 12-core CPU and an RTX 3090 GPU with 24 GB of VRAM. That’s an extremely expensive card; fewer than 0.3% of Steam’s Nvidia users have one. And with COVID creating a silicon drought, cards are more expensive now, practically unaffordable, and even old GPUs sell at inflated prices (not worth buying unless you’re playing older games, and certainly not for the next-gen 3D games coming in late 2022-2023), all because of the unavailability of GPUs, silicon, and other electronic parts.

  • This demo needs next-gen hardware, simple as that. That rules out a considerable number of people still on old GPUs. Sure, the graphics can be downgraded, but these graphics are the reason one would want next-gen hardware in the first place, unless they don’t care about that. And I know a ton of people couldn’t care less, because they don’t play next-gen games and aren’t interested in them; they prefer artistic/cartoony games (small indie games) that are about a stylized look rather than the CGI photorealism of this demo. Different strokes for different folks, I guess…

  • But let’s not kid ourselves: the large market out there is interested in next-gen (as I said before), and there are over 30 million Nvidia RTX users who would like to make more use of their GPUs. RTX is that answer.

  • Often the problem with new GPUs and new games is that the new, supposedly next-gen games don’t actually make use of next-gen hardware, don’t push the GPU to its limit, or don’t use new technologies (like RTX).

  • Hence stagnation, same old same old, no visual progression. Now, progression is very subjective; once you’ve reached photographic realism there is nothing after that (diminishing returns: after CGI, well, it’s Reality™).

  • So devs face a dilemma: what’s next? More realism, or stylized realism? Most choose the latter and think photographic realism is “boring/dull” because we already see it every day with our own two eyes, in reality.

  • Games are about escaping reality…

  • But how far you escape is the dev’s choice. Some don’t like far-fetched escapism; it just looks like a joke, because it has no basis in reality and isn’t credible. Reality is credible because it’s real: you believe it, you live in it.

  • I think this demo is proof (in the CG pudding) that we can make dreamy games that are like movies, and even cartoons à la Pixar’s Toy Story; with this, the (CGI) sky’s the limit. If you want to go back down to Popeye-era cartoons à la Cuphead (1930s Sunday cartoons), why not?

  • It’s just more tools to express and realize your vision.

  • I think the thread derailed a bit with the political talk about skin SSS. I understand the “oh look, of course, white CGI again” point, but let’s not make this a deep political thing (even if there are politics behind it, let’s not shoehorn them in). There are incredible CGI renderings of African American faces out there, and I’m surprised people made this a pigmentation thing; the skin technology in the demo is incredible and some of the most accurate so far, as they said about the dispersion model. I think Epic’s MetaHumans are impressive, but they look more CGI (due to the deeper shading), while this demo looks a little less CGI and sits closer to an offline render, partly thanks to the ray tracing. MetaHumans can use ray tracing too, but the look still reads a bit more “plastic”, as people say. The protagonist here looks eerily real because she looks a bit less CGI and closer to a photograph. At some point I was almost fooled; I almost thought this was a real person acting in front of a desk, shot with a real camera. The big reasons are the ray tracing but also her face: it’s a real 3D face scan, and it shows. It’s much more accurate than MetaHumans’ synthetic faces, which are not real scans. Which brings us to the big thing:

  • The uncanny valley. I read this in the YouTube comments: “D*rn, the uncanny valley hit me hard.” It’s mostly the eyes, which are slightly robotic and lack soul or depth, so the character reads as a CGI puppet. Eyes are very hard to get right, partly because the pupils don’t react to changing light; if they stay the same, it looks like a dead “CG stare”. Micro-details are very hard, and that’s why I applaud the team: they captured maybe 98% of them, from the mouth and lips moving to the slightly too-jelly face (versus real facial musculature under the skin, where the skin slides over muscle and creates tension and micro-details like wrinkles, which they did emulate). It’s just that the human eye is very trained to detect slightly-off micro-details, and that’s what pushes a CGI face into the uncanny valley.

  • I did read some comments saying the opposite: “Wow, we are out of the uncanny valley”; that the eyes, emotions, and mouth read as canny rather than uncanny, which is good.

  • That is where we will have to stylize the ultra-realism, so that it becomes “stylized realism” instead of 1:1 photographic realism.

  • Anyway, sorry for going on so long. I am very impressed (and I’m not alone, although I know tons of other people are not impressed; some YouTubers were like, “uh, OK, same old CG, nothing impressive, realism doesn’t impress me, show me cartoons”).
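
Regarding the RTX point in the list above (the quick runtime check I mentioned): Unity exposes whether the current GPU, driver, and graphics API actually support hardware ray tracing, so a project can pick its fallback path at startup instead of hard-requiring a specific card. A minimal sketch:

```csharp
using UnityEngine;

// Reports whether hardware ray tracing is available on this machine so the
// game can fall back to non-ray-traced settings when it isn't.
public static class RayTracingCheck
{
    public static bool HardwareRayTracingAvailable()
    {
        // True only when the platform, graphics API, and GPU all support it.
        return SystemInfo.supportsRayTracing;
    }
}
```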

Just my 2 cents.

PS: The skin shader is incredible, but Unity, you need to make this scene available so we can study the settings, or at least tell us exactly what the settings are, so we can reach this look with HDRP. For me the most incredible parts are the global illumination (SSGI) and the ray-traced visual fidelity, especially the depth and detail of the shadows. The message is poignant too: “Power… is only given to those brave enough to lower themselves to pick it up.”

2 Likes

PPPPS: For anyone switching to HDRP: hold it. I guess I misspoke and was a bit too hopeful; I won’t switch now, since my Built-in Render Pipeline project is better off without this tech. I knew there was a catch, something that would slow everything down. Sigh. I lulled myself. I guess the lesson is: don’t hope too much, too fast.

The demo’s shading, the 3D look of objects, is very dependent on the new APV technology (Adaptive Probe Volumes), not so much on SSGI or ray tracing. In fact there is nothing RTX-specific at all, except that RTX cards support ray tracing; it isn’t catered to RTX, it’s just that only RTX cards offer hardware ray tracing.

Meaning 99% of the look is down to APV’s probed GI.

It is not path tracing either.

APV is not real-time: it is precomputed global illumination (kind of like reflection probes). Here I was thinking this demo’s visuals were fully real-time, and they are not; you must pre-bake this. They also said it is experimental and not a replacement for lightmap baking, on top of that. So I axed it and skipped it in my game. Probing is slow, not real-time. Simply put, this APV is the holy grail of the GI/CGI look, and so far it is impossible to get in real time. I have looked at SSGI, screen-space reflections, and ray-traced AO; alone, they don’t give that look, only APV does. The Unity BMW 2019 demo used real-time ray tracing, but it did not look as good as this demo, and that is due to APV.

Maybe later, if APV becomes real-time, I’ll switch; until then it’s too much work for too little gain. The fact that we have to pre-bake is the crux of the problem. Lightmapping is slow, and so is APV. If you have a small game with almost nothing in it and time on your hands, APV is great; if you have a giant game and can’t spend ages pre-baking and probing the GI space, it’s not.

PS: Hopefully a real-time equivalent of APV arrives in the coming years, and baking goes away for good. Changing pipelines half-way through a project just to do more baking? No. Just my 2 cents…

Last PS: One more thing. The Unity demo is impressive (beyond APV) also because it is rendered at 4K via DLSS. 4K makes a big difference to the feel of the image; it is much more filmic than 2K. (2K is filmic too, and most films are delivered in 2K, but many are shot natively on 4K digital cameras and downscaled to 2K for Blu-ray; 4K Blu-rays are more expensive but keep the full resolution of the film, and there is a very visible difference: there is simply more detail in the image.) All the micro-details appear, which is also how you enter the uncanny valley: the higher the resolution, the more detail the image carries. Native 4K rendering is very expensive, hence DLSS upscaling is the solution. In my case the Built-in Render Pipeline has no DLSS, so I will have to find a way to supersample or upscale to 4K from a lower resolution to emulate the 4K DLSS look. It’s worth it, because it sells the look of this demo; at 2K the demo loses quite a bit of visual punch in the details (especially the little details on her clothes, which vanish). In any case, I don’t wish to minimize the greatness of this achievement; it’s just not what I thought (I thought it was fully real-time). APV is precomputed and then used at runtime, not true real-time GI.
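
For the “find a way to supersample” part, this is the rough kind of setup I mean for the Built-in pipeline. Just a sketch under my own assumptions: a second, otherwise empty camera does the downscale blit, and the 2x factor is an arbitrary example value.

```csharp
using UnityEngine;

// Crude supersampling for the Built-in pipeline (no DLSS): render the main
// camera into an oversized RenderTexture, then blit it down to the screen
// from a second camera that this script is attached to.
public class SimpleSupersampling : MonoBehaviour
{
    public Camera sourceCamera;                 // the camera that renders the scene
    [Range(1f, 3f)] public float factor = 2f;   // 2x width/height = 4x the pixels

    RenderTexture highResRT;

    void OnEnable()
    {
        int w = Mathf.RoundToInt(Screen.width * factor);
        int h = Mathf.RoundToInt(Screen.height * factor);
        highResRT = new RenderTexture(w, h, 24) { filterMode = FilterMode.Bilinear };
        sourceCamera.targetTexture = highResRT;
    }

    void OnDisable()
    {
        sourceCamera.targetTexture = null;
        if (highResRT != null) highResRT.Release();
    }

    // Runs on this (second) camera after it renders; copies the oversized
    // image down to screen resolution, which is where the smoothing happens.
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(highResRT, dest);
    }
}
```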

1 Like

What actual features are here? The only one I’m seeing explicitly mentioned is the “real-time” strand-based hair system. How was the person made, and how was this hair system integrated?

“Give me MetaHuman” is a tall ask, but for someone who will never have the resources to individually sculpt, texture, rig, and who-knows-what-else-I’m-completely-unaware-of scores of individual characters, something like a character creator is the only realistic option.

Damn, finally a new impressive graphics demo by Unity, and the first thing I can’t help but think about is how ugly their new logo still is.

3 Likes

Unity’s made some pretty cool custom shaders and rendering features for The Heretic and The Blacksmith too. I always have the same question every time: how would I even author the assets that could make use of these features?

3 Likes

Technically they should be useful even on lower-fidelity assets; it’s not like hair and skin stop being themselves in “lower def”. Strand-based hair is different from card-based hair, though; they seem to hint at importing strands from hair-grooming tools like Blender’s. There are some gotchas in the details they haven’t released enough information about (for example, guide strands vs. interpolated strands; a rough sketch of that idea is below). Skin is less and less difficult as the workflow gets automated, either through capture or plain generation, so small details should be less of a problem in the future, and the artistry moves to the lower-frequency features.
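
To make the guide-vs-interpolated distinction concrete, here’s my guess at the general idea in a few lines (not the actual hair package API, just an illustration): only the guide strands get simulated, and each child strand is rebuilt by blending nearby guides.

```csharp
using UnityEngine;

// Builds one interpolated ("child") strand from two simulated guide strands.
// Illustrative only; real systems blend more guides and weight by distance.
public static class StrandInterpolation
{
    // guideA/guideB: simulated guide strands with the same point count.
    // t: blend weight between the guides; childRoot: this strand's follicle position.
    public static Vector3[] InterpolateChild(Vector3[] guideA, Vector3[] guideB,
                                             float t, Vector3 childRoot)
    {
        var child = new Vector3[guideA.Length];
        Vector3 blendedRoot = Vector3.Lerp(guideA[0], guideB[0], t);

        for (int i = 0; i < guideA.Length; i++)
        {
            // Blend the guides point by point, then re-root at the child's follicle.
            Vector3 blended = Vector3.Lerp(guideA[i], guideB[i], t);
            child[i] = childRoot + (blended - blendedRoot);
        }
        return child;
    }
}
```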

For example, here is a highly stylized character with realistic shaders and hair, in the vein of the Unity demo, done in current Unity by Sakura Rabbit on Twitter:
https://mobile.twitter.com/sakura_rabbiter

I don’t think authoring should be such a big problem once the workflow is done. You don’t actually need hyper-realistic meshes for hyper-realistic rendering (see Pixar, too). It’s just up to you what target you want to hit.

also it’s a meme:
https://ifunny.co/picture/who-would-win-unreal-meta-humans-a-character-f-sakura-jsQZ1bFN8

Personally I was more interested in the room/environment and lighting, small as they are, than the character.
Different strokes for different folks :stuck_out_tongue:

3 Likes

For something more concrete as a workflow, you can get inspired by this; you don’t need all of it, but it’s good inspiration.

https://www.youtube.com/watch?v=qnxCcY0WDAk

1 Like

In the meantime, other companies are providing similar solutions; Soul Machines is one example: https://www.soulmachines.com/. Unity may have a better mousetrap, but it’s months or years away!