Just in general, why can’t Unity display an FBX-rigged model with decent skin SSS using the HDRP/Lit shader? Every major character design tool outputs full 4K PBR materials, but when you pull them into Unity they all look wooden and flat. It doesn’t matter how much you adjust specular/metallic/alpha, etc. In Unreal, the same models look amazing without having to tweak anything or use custom third-party shaders. It doesn’t seem to be related to subdivision or normal maps; it seems to come down to HDRP boiling away detail in its lighting, or to the import workflow.
Yes, I know there’s the Digital Human pack with its one character (whoopee!), but it’s not compatible with any of the popular character design tools at all, so it can’t be used.
I’d really love for Unity to address AAA-looking digital humans (skin, eyes, and hair) without Asset Store add-ons, plus compatibility with importing characters from the tools used to create them. Granted, it’s not an apples-to-apples comparison, but has anyone been able to get a decent AAA-looking character into HDRP other than the Heretic team with their special one-off custom rig? If so, what was your workflow?
This is a part of an HDRP game being worked on by two people only.
This isn’t a showcase where there’s nothing but the character.
This is part of an actual game with RPG mechanics. I mention that to give an idea of what they accomplished when it wasn’t a skin/character demo: they gave this some focus and then moved on to the rest of… everything that makes a game. They don’t use any assets related to this, and as far as I know they use HDRP as-is. But unless an indie project is made by absolute beginners, custom tools/shaders will be made, just not at the “Unity demo” level of custom.
The question is: is the math in Unity’s and Unreal’s lighting/surface calculations actually the same, with the only difference in these scenes being light placement and shadow quality?
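For what it’s worth, the per-sample skin math is at least published: as I understand it, both HDRP’s diffusion profiles and Unreal’s subsurface scattering are based on Burley’s normalized diffusion fit, so the profile itself shouldn’t be where the two engines diverge. A minimal sketch (plain Python, function names are mine) of that profile and its energy normalization:

```python
import math

def burley_profile(r, s):
    # Burley's normalized diffusion profile R(r): the radial falloff of
    # subsurface-scattered light around the entry point.
    return s * (math.exp(-s * r) + math.exp(-s * r / 3.0)) / (8.0 * math.pi * r)

def scatter_scale(albedo):
    # One of the curve fits from Burley's paper mapping surface albedo A
    # to the scaling factor s (searchlight configuration).
    return 1.85 - albedo + 7.0 * abs(albedo - 0.8) ** 3

# Numerically confirm the profile is energy-normalized:
# integral of R(r) * 2*pi*r dr over the plane should be ~1.
s = scatter_scale(0.5)
dr = 1e-4
total = sum(burley_profile(r, s) * 2.0 * math.pi * r * dr
            for r in (i * dr for i in range(1, 400000)))
print(round(total, 3))  # ≈ 1.0
```

If the profile is shared, the visible differences are more likely in everything around it: sample counts, thickness interpretation, tonemapping, and default scene setup.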
Yeah, when I get the time, I’ll do an apples-to-apples comparison from Unity to Unreal using the same model. I think you sort of answered my question in a way. After you posted the Sua example, I did a deeper dive into the Unity Digital Human build; there I found custom shaders (which no longer work in HDRP 10.x) and some lower-level API injections they did for the example, which they hint at in the blog. I believe this is why we don’t see more AAA-style character models in Unity for cutscenes and close-up camera work.
The “Sua” model from Hyeong uses the Unity Digital Human package, which is largely non-extensible. Unity has a “making of” Digital Human blog post, here: Unity Blog. You’ll quickly see it’s not a realistic workflow for indie game developers; in fact, some of what’s there sort of answers my question: they had to “inject custom rendering work at certain stages during a frame”. Hyeong also used ray tracing in Unity, but ray tracing has a lot of drawbacks for typical Unity games. The clothes for the Sua model were done by an Asset Store developer that’s no longer in business.
Unreal 5 has MetaHuman as a package, which essentially does what the Unity Digital Human does, but Unreal made it artist-friendly, whereas Unity dumped it on GitHub with the standard “good luck” and no real tutorials.
I’d really like to see Unity do more along the lines of supporting artist tools (Maya, ZBrush, etc.) and bringing higher fidelity into game production. Unity has this bad habit of showing gamedevs one-off tech demos (Adam, Book of the Dead, The Heretic, Megacity) that are purpose-built for one demo and pretty much locked to one version of Unity with lots of experimental development. They show these things off like “Look at what you can do in Unity!”, which is really “Look at what you can do in Unity with a 20-person team working on something specific for a year or more for a 30-second demo”.
Yeah, I don’t think it’s about light and shadow, to be honest. Even with SSS, using Ultra-quality shadow maps and High-quality contact shadows doesn’t really help. GI helps (though it’s not available in HDRP 10.x in Unity 2020.3), but really only with light bounce; there’s something more that’s missing. At this point I’m attributing the loss of fidelity from design tool to game engine to what they allude to in the Digital Human blog post: custom shaders and a lot of tweaking and injection into the SRP.
You can export MetaHuman characters to Unity with Bridge. Also, it is about lights and shadows: you need to build the same scene in Unity and Unreal, and you need to take the textures and materials into consideration. Remember, when you open Unreal it’s preconfigured for high quality with effects and all that, and Unity is not. Try opening Unity’s official HDRP demo, set all quality settings to high, then import your MetaHuman model with all textures at least at 2K, make good use of the materials, and show us the results.
I’m still tweaking my own model, but from an artist’s perspective you don’t want to spend days and days trying to match the quality of the design tool in the engine. How Unreal handles this is something Unity should pay attention to. I have no idea how many hours the guy spent doing a 1:1 on the test above. I’ve been tweaking for about an hour and managed to get closer, but I still have a lot more work to do, which, frankly, is disappointing when Unreal does it so well. At this point I’m considering doing my cutscenes in Unreal, capturing them to a 4K/60fps video file, and playing them back in Unity.
Unity did add an HDRP hair shader in 10.2, but like everything else in the Unity world, there are no in-depth tutorials. It does improve the hair look over the traditional HDRP/Lit shader by quite a bit, though. I have used it, not on the model above but on another model, and the hair quality does improve. Same with eyes: Unity released an HDRP eye shader, which is decent, but as always the Asset Store has better ones, locked to specific HDRP versions (9.x).
The Unity HDRP hair shader does support vertex animation, but here’s the kicker: if you use ray tracing (DX12), vertex animation is not supported. Again, like everything in Unity, it’s a mishmash of compatibility. I can’t use ray tracing anyway because my project uses tessellation materials and terrains, but it’s an example of how Unity is approaching HDRP from an artist’s perspective, which is not well thought out.
From what I can see, the lighting is not the same (you can see it in the lips, for example), but the shader does look better in Unreal’s image. If you really need that, why don’t you use Unreal then? The beta version of Unreal is graphically way better than Unity.
You need to take into consideration that there is no release version of Unreal 5 yet; it’s still in beta, so it is in fact something new (newer than Unity’s features). But if you’re looking for 100% realism, go with Unreal, of course.
Yeah, I mean, after I finish my current game, I’m moving over to Unreal. I like the ease of development with Unity from a language (C#) and extensibility perspective, but what it offers in one area has significant drawbacks in others. The philosophy has become “use the Asset Store for functionality”. I’ve got 10k+ hours in Unity, presented at GDC, and worked for 505 Games, Microsoft, and BioWare, so it’s not like I just downloaded Unity and went, “Uh, why does this look like cardboard?!” I get lighting, probes, shadows, shaders, etc., so I think I’ve boiled this down to: Unity can’t do what Unreal can do, which bums me out, but it is what it is.
Wondering if people even have proper cubemaps/probes set up. Those are unfortunately not optional, and HDRP’s SSS/transmission needs a modified thickness map unless you know for sure the thickness values match what this particular formula in this particular engine expects.
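To make the thickness-map trap concrete, here’s a small sketch (plain Python, the numbers and function names are mine, not engine code) of why a map authored for one tool can read wrong in another: the 0..1 texture value only means something after the engine’s own remap, and the transmission falloff is steep, so a different remap range changes the look drastically.

```python
import math

def remap_thickness(t, remap_min, remap_max):
    # HDRP exposes a Thickness Remap (min/max) on the diffusion profile;
    # the 0..1 texture sample is linearly remapped into that range.
    return remap_min + t * (remap_max - remap_min)

def transmittance(thickness, s):
    # Transmittance through a slab derived from Burley's diffusion
    # profile (the shape used for thin-surface transmission), with
    # scattering scale s.
    return 0.25 * (math.exp(-s * thickness) + 3.0 * math.exp(-s * thickness / 3.0))

# The same 0.5 texture value gives very different light bleed
# depending on the engine-side remap range.
for lo, hi in [(0.0, 1.0), (0.0, 5.0)]:
    d = remap_thickness(0.5, lo, hi)
    print(lo, hi, round(transmittance(d, 1.5), 3))
```

With the wider remap the transmitted light drops to roughly a third, which on ears and nostrils is the difference between “alive” and “wax”.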
I don’t mean to be funny, but you know, a lot of people aren’t using assets made for HDRP, nor setting up their scenes properly.
Yes, I agree; in fact I’m doing the same: finishing my current project, and then for my new ideas maybe I’ll go with Unreal. But Unreal has a big downside for me: a simple <2-minute demo is 100 GB in size, which for me is completely useless, so I hope the release version is better on that front.
That is of course another important point. I love Unity, of course, but you can achieve the best graphics in Unreal doing almost nothing. That’s from an artist’s perspective, which is our case. Thinking about (for example) setting up probes, dealing with limitations and bugs, lightmaps, light limits, learning all that stuff in Unity, while in Unreal you have all that almost immediately… is very attractive for us. It is what it is. But of course Unity has very useful tools for other things too, so…
I sympathise, but I don’t think it’s a case of Unity’s HDRP not being able to do it; as you say, it’s how much friction there is. Unity can do ray-traced ground truth as well as the best of them.
Sadly you’re right: Unreal’s all fully set up for that glossy AAA scenario. But be warned, it’s not likely to be anywhere near as easy to tune. I’d say it’s worth trying out anyway so you can judge for yourself.
A little example: I opened the 2020 HDRP template scene, didn’t touch anything, just put a point light close to a wall, and I got that light leakage (in fact I made a whole post about this some time ago).
Maybe I have to re-bake, maybe I have to add probes, maybe something else, but in Unreal I don’t have to deal with this: I just add a point light in the new scene template and it works perfectly without light leakage. This is why people like us got a little disappointed.
I don’t blame you; it’s Unity’s own problem to deal with. The engine is awesome, but they’re poor at showing that and poor at keeping demos alive, unlike their competition.