On Tuesday, May 12, Ashley Alicea is hosting the next episode of our Unite Now Meet the Devs series, talking to Robert Cupisz and Krasimir Nechevski from Unity’s Demo Team - it’s the team that made The Heretic, and before that, Book of the Dead, Adam, and The Blacksmith. They’ll talk about their process, show you the project in-editor, and take your questions. They also have a special announcement for you!
This session is live and open to everyone - just register to join us here.
If you have questions about the project that you’d like us to focus on during the session, please feel free to post them in this thread. Let’s talk about the things that you want to talk about!
More info about the session here.
Heya @doppioslash! That means that you’re already registered - which is great. The session will be happening at 9:00 am PDT on May 12th, so all you have to do is return to that Zoom link and you can join.
For something that is constantly changing, like the bird, are you bending those lines manually between each frame?
Or, do you set some kind of wiggle protocol to waver within certain constraints and then see if you like how it turned out?
For people who would like to explore working with Alembic files, with Unity as the main render tool in a pipeline: what do you think is the best workflow for things created outside Unity, especially characters? What would the maps and rigging techniques be, and what would be the most rewarding workflow for somebody just starting to use Unity for filmmaking who wants to create a quality short like “The Heretic”? Thank you so much, and I love your work!
Thanks for a great presentation and some interesting insights, specifically into the Nested Timeline workflows!
Repeating my questions about Dirty Details here since there wasn’t time to reply:
How many bugs (Core/HDRP) did you find throughout development?
Were you able to stay on “vanilla” HDRP?
Have you tried exporting to other platforms (e.g. VR)?
Is Heretic now used as a test case for e.g. Engine/HDRP updates internally?
(This was answered ambiguously - if some of the internal scripts aren’t working with the latest HDRP release, the answer seems to be “only partially”?)
Is it the imperfect photogrammetry input that causes an area on the top left of the actor’s head to flex in and out in the early render?
It was on the right side of the picture early in the video showing the actor’s face rendered in gray. It looks like he has a membrane on his head.
Besides a method for player input, what other work would still need to be done to make this into a game level?
ex: If I tried to open this as a game, would everything fall through the floor?
You did an awesome job, and I love Unity and where it’s going. There are, however, two problems that I just couldn’t find an answer for:
Are there any plans for developing realistic hair for Unity? Or at least something close to the current AAA title standards? There were some fine solutions that were discontinued, but sadly with no replacements.
What happened with Jim Hugunin’s cloth simulation? It looked even better than in Marvelous and had a great performance in Unity 3 years ago.
It is. The cleanup process is still experimental and there are many areas that have issues, such as lips, eyelids, neck, etc. We are already working on improving our methods for future productions.
How many bugs? Many. We started working on this demo way before HDRP was mature enough for a demo like this (by design - that’s our mission), so we ran into all the problems you could imagine, as the demo touched the majority of HDRP features. We worked with the HDRP team to fix these issues before they reached users.
We modified HDRP in multiple places, although we kept the modifications small and localized to manage upgrades. I think the HDRP team is doing great work anticipating which features will be needed, even if sometimes I would do things in a slightly different order. But even with that, I don’t think it’s reasonable to expect that any medium-to-large production would take any stock engine and use it as-is. There will always be game-specific needs, and the role of the engine is to provide a solid, well-rounded, modifiable base. With SRPs you get that: the code that’s most likely to be modified lives in user land as C# and shaders, and is understandable to a wide audience, debuggable, modifiable, etc. Some of the initial modifications we were able to remove later, with the introduction of the custom pass API and new Shader Graph nodes.
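To give a rough idea of what that looks like in practice, here’s a minimal sketch of a full-screen custom pass. This is a generic illustration, not code from The Heretic, and the CustomPass signatures have changed across HDRP versions, so take it as an outline of the idea rather than a drop-in script:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// A minimal full-screen custom pass: draws a material over the frame
// at whatever injection point its CustomPassVolume is set to.
class FullscreenTintPass : CustomPass
{
    // Assumed to be a material using a full-screen custom pass shader.
    public Material fullscreenMaterial;

    protected override void Execute(CustomPassContext ctx)
    {
        if (fullscreenMaterial == null)
            return;

        // Issues a single full-screen draw with the given material,
        // recorded into the pass's command buffer.
        CoreUtils.DrawFullScreen(ctx.cmd, fullscreenMaterial);
    }
}
```

You’d hook this up through a CustomPassVolume component in the scene. The point is that it lives in your project as plain C#: you can step through it, modify it, and keep the change local instead of forking the render pipeline.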
Other platforms: internal console teams are interested in making the demo run, and we will probably collaborate on that.
If you’re asking whether it’s used for upgrade path testing, then the answer is not (yet?), but I absolutely think it should be.
If you’re asking whether the demo is used by internal teams to test various features - it is, and it will continue to be used like that; it’s one of the big reasons we made it in the first place!
I gave a talk about how I made Boston a short while ago. It covers only the first part of the development, and quite a bit came after that, but it should give you a good idea.
Regarding the first question, about hair: Yes, for one of our next demos, we have plans for a realistic hair solution. We are approaching this on three fronts: 1. simulation, 2. rendering, and 3. tools. The work is underway, and we currently have a relatively fast and stable simulation that is able to drive thousands of individual strands with collisions. I look forward to being able to share more on that. I could imagine that it would get a separate release similar to the digital human tech package, once we have put it through a production.
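In case it helps picture what a strand simulation like that involves, below is a tiny, generic Verlet-integration sketch of a single strand with a sphere collider standing in for the head. To be clear, this is the textbook technique many strand sims build on, not Unity’s actual hair solver, and all the names and numbers in it are made up for illustration:

```csharp
using UnityEngine;

// Generic single-strand Verlet simulation: gravity, fixed-length
// segment constraints, and one sphere collider. Illustrative only -
// not the Demo Team's hair tech.
public class StrandSim : MonoBehaviour
{
    const int ParticleCount = 16;        // particles along the strand
    const float SegmentLength = 0.02f;   // rest length between particles
    const int ConstraintIterations = 4;  // more iterations = stiffer strand

    public Vector3 rootPosition = Vector3.zero;               // pinned at the scalp
    public Vector3 sphereCenter = new Vector3(0f, -0.1f, 0f); // stand-in head
    public float sphereRadius = 0.09f;

    Vector3[] pos = new Vector3[ParticleCount];
    Vector3[] prev = new Vector3[ParticleCount];

    void Start()
    {
        // Start the strand hanging straight down from the root.
        for (int i = 0; i < ParticleCount; i++)
            pos[i] = prev[i] = rootPosition + Vector3.down * (SegmentLength * i);
    }

    void FixedUpdate()
    {
        float dt = Time.fixedDeltaTime;

        // Verlet step: velocity is implicit in (pos - prev).
        for (int i = 1; i < ParticleCount; i++)
        {
            Vector3 velocity = pos[i] - prev[i];
            prev[i] = pos[i];
            pos[i] += velocity + Physics.gravity * (dt * dt);
        }

        // Relax constraints a few times for stability.
        for (int iter = 0; iter < ConstraintIterations; iter++)
        {
            pos[0] = rootPosition; // keep the root pinned

            for (int i = 1; i < ParticleCount; i++)
            {
                // Distance constraint: snap each particle back to its
                // rest distance from the previous one.
                Vector3 delta = pos[i] - pos[i - 1];
                float dist = delta.magnitude;
                if (dist > 1e-6f)
                    pos[i] = pos[i - 1] + delta * (SegmentLength / dist);

                // Sphere collision: push particles out of the head.
                Vector3 toCenter = pos[i] - sphereCenter;
                if (toCenter.magnitude < sphereRadius)
                    pos[i] = sphereCenter + toCenter.normalized * sphereRadius;
            }
        }
    }
}
```

A production solver would run thousands of these strands with more sophisticated constraints and collision shapes, but the core loop - integrate, then iteratively satisfy constraints - is the same idea.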
Congratulations to the team on a really amazing project! I just opened the Facial animation sample scene, and my first impression is that you’ve passed the uncanny valley in a spectacular way - to the extent that looking at him moving makes me a bit nervous, because he doesn’t blink.
So the one thing I find really discouraging when starting a Unity HDRP project is that as soon as the sample scene pops up, the first thing I notice is some really annoying shadow artifacts, without even moving the camera. Since lighting is such a beast to handle, and dynamic shadows are such an important part of graphics fidelity, could I ask for any tips on how to start setting up lighting, particularly the shadow cascades, so as to avoid noticeable transitions between higher- and lower-resolution shadows when approaching objects? How did you approach the lighting setup in The Heretic?
Quick question… wasn’t the “Meet the Devs: Deep Dive into The Heretic Assets” webinar supposed to happen this past Wednesday? Is it the one that is now on June 3rd? I wasn’t able to attend the webinar that was on Wednesday and just want to make sure it’s the same.
We only see a part of it, but to me, probably the most intriguing thing in this demo is the part where he takes off his jacket. The donning and removal of clothing has always been difficult to represent in 3D graphics, which is why it’s generally avoided. At least as far back as Final Fantasy 8, game developers have been pulling tricks like fading to black and reskinning a character’s clothing, or more recently just having it happen off camera while you’re looking at something else going on nearby.
But in this, we watched the guy take off his jacket. Or did we? From the camera angle, we only saw a part of it, and not the interesting part (pulling his arms out of the sleeves). But it’s still more than I’ve seen in games. Was this just the latest trick, or did you guys get an actual, working solution to clothing removal?