HDRP Sponza, learning about HDRP

My “final” results here:
https://www.youtube.com/watch?v=CmftjaFPmaQ

Old info (useful details)
Hi and good day.
Since my latest HDRP project got damaged, I'm experimenting with this Sponza scene that someone from Unity shared some time ago. I'm making small changes and trying to imitate a real photo of the Sponza palace that I found on the internet.

Anyway, there are some things I do not understand:

1 - You can see in the original photo how light and color bounce (compare the original image with the following).

I'm trying to do that in Unity, but even with more bounces it does not work. Here is my attempt:


Definitely not there. I had to fake it with lights, which is not exactly good in all cases. Here is the faked version:


It's not the same and it's subtle, because if I make the lights stronger it looks very bad, but the effect I was looking for "is there". Yes, I tried with reflection probes, and even when I move the blend points to make it smoother, it looks very flat. In fact, I think the lightmapper is doing a very flat job, because the light is not bouncing (that is why I faked the bounce on the walls). So what can I do to make it work without faking it?

2 - The other thing I don't understand: when I clear or delete the lightmaps, all the objects become blue, as if reflecting the sky. I looked everywhere; I changed the post-processing effects, the Visual Environment, the sky, the sky colors, the probes, everything that has a blue color XD, and it's always the same. It's not a big deal, but I want to know why that happens.

Do you know anything about it? What is your feedback?
Thank you…


UPDATE:
SSGI is doing a very decent job even at medium quality on a GTX 1050; the effect is there with SSGI.

I think the lightmapper is decent, but without those bounce effects it looks flat. Here is a front view without SSGI.


Not sure what you are trying to replicate, but the materials and the colors are different, so you will get different results. Not to mention that it is not possible to match real lighting exactly, no matter what you try.

In 3D rendering and lighting, we manage to give the ILLUSION that it is the same. A convincing resemblance. But it is obviously not possible to exactly replicate the results of an unknown number of light rays interacting an unknown number of times through countless bounces with just some hundreds or thousands of samples. Not to mention the exact behavior of the materials. There is not enough computational power on our machines for that.

Another important aspect: in HDRP, if your materials are not provided with all the correct basic data channels, the lighting comes out wrong, i.e. the Mask map needs to provide the correct metalness, smoothness, AO, etc. AFAIK, all Sponza models come with very basic textures: usually just base/color, and rarely a few bump/normal maps.

Yes, it's most likely the sky reflection. By default, if you don't use SSR or local Reflection Probes, reflections fall back to the sky, which in the case of a darker environment will look very blue/metallic. This is normal, and that's why it's important to place reflection probes to get correct reflections.

https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@15.0/manual/Reflection-in-HDRP.html#reflection-hierarchy
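
If you ever want to place a row of local probes from a script instead of by hand, something like this rough sketch should work. It assumes the legacy ReflectionProbe fields are still honored next to HDRP's own HDAdditionalReflectionData component, and the positions/sizes are placeholders; creating probes via GameObject > Light > Reflection Probe and baking them does the same job.

```csharp
using UnityEngine;
using UnityEngine.Rendering;                 // ReflectionProbeMode
using UnityEngine.Rendering.HighDefinition;  // HDAdditionalReflectionData

// Rough sketch: scatter baked reflection probes along the atrium so reflections
// fall back to a nearby local probe instead of the (blue) sky.
public static class ProbeRowSketch
{
    public static void PlaceRow(Vector3 start, Vector3 step, int count, Vector3 influenceSize)
    {
        for (int i = 0; i < count; i++)
        {
            var go = new GameObject($"Reflection Probe {i}");
            go.transform.position = start + step * i;

            // Built-in probe component (baked, box-shaped influence).
            var probe = go.AddComponent<ReflectionProbe>();
            probe.mode = ReflectionProbeMode.Baked;
            probe.size = influenceSize;

            // HDRP reads its probe settings (influence volume, proxy, etc.) from
            // this extra component; the defaults are a reasonable starting point.
            go.AddComponent<HDAdditionalReflectionData>();
        }
    }
}
```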


Hi… I don't want to make a 100% identical image; I want to replicate the feeling, the lighting, the light bounces and reflections, and add my own thing, like the volumetric and bloom effects, etc…

I'm not talking about an infinite number of rays, I'm talking about ray bounces. I don't know what 3D software you use for renders, but in the one I use (LightWave) this is pretty basic stuff; in fact I can replicate that effect with fewer rays, I don't even think about it when I make my renders. Of course, those are 3D rendering programs. I think it's something I'm missing.

Hi, I get it, thanks for the link :slight_smile:

You got me fooling around with this scene.

In general you are right. The lightmapper creates a very flat GI result.
That has been my problem with Unity for a long time, and in my opinion, that is the main reason Unity lighting is inferior to Unreal's. The GI solution is flat. Soft.

To improve it you need a lot of extra work with post-processing.

This is with a full stack of effects and plenty of tweaks.



This is without my exposure tweaks (default exposure)

Flat scene.
Just baked lighting and default effects. My starting point.


You see? I'm a 3D artist, I make realistic renders and all that stuff, and that is something I can NOT just ignore in a lightmapper xD.
Your scene is not the same; you are using the new version from Intel (which is also good), while I'm using an old version with some changes I made. But anyway it is the same problem: to achieve those effects I need to add other area and point lights, and if you try SSGI it does a slightly better job, which makes no sense to me. It looks like the rays are not bouncing, as if Unity is taking the light value to make an average illumination for the scene. For example, if you are using just one light (the sun light), the amount of light in the area where the camera is makes no sense, even with exposure settings; if you compare the area where the door is, you can see that it is not accurate. But maybe @pierred_unity can help us with this, he is a pro.

With 64 bounces we should definitely see more bounce lighting in that scene.
Especially in the gallery where the light comes through.

We do not know what they have done to optimize the performance of the lightmapper.
But it definitely does not work as expected. Perhaps there is a travel distance limit?

SSGI makes sense because it is a tool that was created for real time.
It is not perfect, but it works.

I am an old-school 3D artist myself, and I have learned to work differently in offline 3D. The methods you mention are ones we used before GI was a thing: dome lights to produce soft lighting, a second directional for bounces, fillers, negative point lights for dark spots, etc. But real time with Unity requires a different approach. You should use the relevant tools provided for this kind of work, not the methods you know from offline rendering. (Some can be used and work great, but others not so much.)

Unity is not an offline renderer, and these methods should be avoided as they are costly in terms of performance.

Hahaha, I was going to write that in my previous answer, but I didn't because it would seem arrogant XD. But yeah, exactly, those are old-school techniques, something that should be unnecessary in a modern game engine that is also baking lights xD. And I said that about SSGI because it is screen space and set to medium quality, so lightmaps should look better: they are supposed to take geometry and color into consideration for bounces and bake better results than SSGI at medium quality, lol, it makes me laugh.

In many cases, it works as expected.
But in this particular case, for whatever reason, it doesn't.
And that, for me, is a bigger issue: stability of the features.
Working as expected, every time.

The way I typically work is to light my scene so that the basic lighting is correct, then improve it with filters and effects, but this time the scene looks really bad and the basic lighting is not helping.

I suspect that even if I add all practical lights in my scene to help boost the amount of samples, and try to work in a typical ArchViz approach, it is not going to work.

@impheris Are you aware the reference photo you use was shot with a flash? :wink:

What I mean by that is that it's not a great photo to base your experimentation on. Using some random JPEG from the internet with a ton of white clipping, an unknown built-in camera tonemapping, and whatever adjustments the user might have done in post is rarely a good starting point to try to "match" a real-world location. Or one must really spend time reverse engineering the photo, especially the tonemapping and the overall color science.

On top of that, even the new Intel Sponza is not a fully “exact” scan of the Sponza atrium, it uses “photogrammetry-derived” materials, which means it is a remastered asset with better geometries and PBR textures, not an exact digital twin of the real world location. If you look at the comparison below from Frank Meinl (the artist who made both the Crytek and Intel assets), you’ll see materials are slightly different, with the Intel one being more desaturated and having more weathering (darkening) overall than the real world sandstone materials. So this will certainly impact the results, as the lighting response will be slightly different to begin with (it’s still a terrific asset as you can expect from Frank, don’t get me wrong).

Then, every GI solution out there will produce different results, especially with the Sponza atrium, where the light is very unidirectional and coming from a fairly small opening. This is very tricky to handle, because the renderer needs to be told (or it must figure out on its own) that it should give more importance to the sky and/or shoot more rays towards the sky via Multiple Importance Sampling for instance, or good old manually placed portals (Unity doesn’t support the latter).

On top of it all, there are plenty of differences among offline/semi-interactive renderers. So you can imagine that trying to match Unity’s GPU Lightmapper (which is taking many shortcuts compared to full-fledged path tracers, isn’t per pixel, and only considers diffuse lighting) with a photo is going to be very tricky.

Regardless, it is fantastic that you're learning to use HDRP and trying to get closer to photorealism. You might need to lower your expectations a bit for the GI part, though, because the photo ref you use is not optimal, and the assets you have and the shortcuts taken in game engines for GI will make your task a lot more difficult.

A few options:

  1. The hard option: you should use numerous RAW photos of the atrium with a soft neutral tonemapping as a reference (or at least proper jpegs), have scanned materials (from this location), use many samples and bounces in the Lightmapper with a high texel resolution, and many very carefully placed reflection probes. Then you will be in a better position to start an objective comparison. Even then, you might realise that maybe the GPU Lightmapper doesn't manage to "grab" enough light from the sky.

  2. You can start adding raytracing effects, like RT reflections or even (per pixel) RT GI. Or experiment with the path tracer (which now has denoising!). This is much easier, as these are all well supported in HDRP, if you have a sufficiently recent and powerful GPU. But again, you’ll still need better assets to begin with (see point 1), before you can make a proper comparison vs a photo.

  3. Or you can just fake it. Play with the indirect multiplier and the albedo boost, tint the materials with warmer and lighter tones, etc. And in the worst case, manually place lights (a small sketch of these settings follows this list). Ultimately, that's often very much what artists begrudgingly end up doing when someone forces them to match a photo. :wink:
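
For reference, here is a rough editor-only sketch of where those knobs live via the LightingSettings API, covering both the option-1 settings (samples, bounces, texel resolution) and the option-3 "fake it" knobs (indirect multiplier, albedo boost). The values are purely illustrative and the property names assume a recent Unity version:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch (place in an Editor folder). Values are illustrative, not recommendations.
public static class BakeTuningSketch
{
    [MenuItem("Tools/Apply Bake Tuning (Sketch)")]
    public static void Apply()
    {
        // Throws if the scene has no Lighting Settings asset assigned.
        LightingSettings ls = Lightmapping.lightingSettings;

        ls.lightmapper = LightingSettings.Lightmapper.ProgressiveGPU;
        ls.lightmapResolution = 40f;      // texels per unit
        ls.indirectSampleCount = 1024;    // option 1: more samples
        ls.environmentSampleCount = 1024; // helps grab more light from the sky
        ls.maxBounces = 8;                // option 1: more bounces
        ls.indirectScale = 1.5f;          // option 3: "indirect multiplier" boost
        ls.albedoBoost = 1.2f;            // option 3: brightens bounced colour

        Lightmapping.BakeAsync();
    }
}
```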


Now I have a question for you: how did you get the UVs for the lightmapper? I tried with that Sponza scene before and Unity says it does not have UVs for lightmaps.

Yes, of course I'm aware of that, but I don't think the flash affects the lighting that much back in the area where the sun is reflecting light, and you are right, it is not a proper photo xD. But to be fair, as I said before to @soleron, who mentioned the same thing, I don't want a 100% replica of that photo; I want to achieve the ambience, the lighting style, the bounces and reflections. My point is about the bounces of the light and the reflections on the wall and floor; I can see that in any other photo but not in Unity's lightmap. Maybe you don't understand my point: see the comparison in the images below, where the white-to-black bar represents how the light is bouncing and reflecting on the wall.


So I don't think the asset is the problem here; I could do it with four walls and a floor and still get the same lack of "bounces"(?). As I said before, I'm not looking for a 100% replica of a random photo; it's not even about the color of the materials, it's about the light.

Don't take me wrong, I'm very proud of what I managed to do with my scene after one hour of learning HDRP XD (the original scene looked very different and not good, in my opinion), and yes, I read your whole answer and you are right. But something like that, I thought it was basic stuff for a lightmapper in a very popular 2021 game engine…
I can fake it, that is not a problem at all for me, I'm not going to cry about that hahaha (in fact I already did it, see the fourth photo of my first post). But first, I want to know all the limitations, pros and cons of HDRP. Also, you know, @pierred_unity, Unity has so many options and settings, and I need to ask before trying to reinvent the wheel; I mean, I thought maybe it was something I missed.


Hey, I’ve spent no more than 30min on our Crytek-based internal test scene.

If one just wants to get the overall lighting style, and the nice lighting falloff along the walls between the brighter and darker areas of the atrium that you want to simulate, I believe it's perfectly doable to get a good-looking result. The bake below only takes 30 seconds. See the settings on the right side of the screenshot.

I've matched the sun position of the photo (third pillar on the left slightly in the shade, and the 1st floor central pole half in the sunlight), something you don't have in your own test. I do get the nice lighting falloff on the right side of the atrium, going from warm bounce light to darker indirect closer to the camera.

Things to look into if you have poor results:

  • lightmap UVs don't have good texture usage; this is the most likely reason for poor lightmapping (and flat-looking results)
  • the reflection setup is not dense enough to capture the lighting variations; this is very important to get a proper material response, as specularity is often neglected, and with PBR you can get radically different results, especially with a shot like this, where the walls you're trying to "match" are parallel to the camera view direction
  • some odd post-processing settings? This will also radically change the way the subtle bounce light looks: too much overexposure and you will kill the soft lighting gradient; too much underexposure and you'll make the lighting look muddy (a small exposure-locking sketch follows this list)
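
On the exposure point: if you want to take auto-exposure out of the equation while judging the bake, you can temporarily force a Fixed exposure on the scene's global Volume. A small sketch below; the Volume reference and EV value are placeholders.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch: lock exposure to a fixed EV while reviewing the baked gradient,
// so auto-exposure doesn't hide or exaggerate the falloff along the walls.
public class FixedExposureForBakeReview : MonoBehaviour
{
    public Volume sceneVolume;   // the global HDRP Volume in the scene
    public float evValue = 11f;  // pick an EV that roughly matches your sun intensity

    void OnEnable()
    {
        // 'profile' works on a runtime copy, so the saved asset is untouched.
        if (sceneVolume != null && sceneVolume.profile.TryGet(out Exposure exposure))
        {
            exposure.mode.Override(ExposureMode.Fixed);
            exposure.fixedExposure.Override(evValue);
        }
    }
}
```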

And you can get a nice gradient on a plane that you would place in the scene:

For reference, this is how the lightmap UVs for the side walls of the first floor look like. As you can see, there’s a ton of lighting variation in there:

I'm told I cannot share this scene yet. But hopefully, with the settings and the suggestions right above, you can get something a lot closer in terms of lightmapping quality.


Valuable information in this thread, thank you!


And here is an attempt at matching the camera angle and the look of the photo for about 20 min; I quickly gave up, because the materials don't really match, and I suspect the dimensions of the 3D asset are not quite the same as the real-world location (which will also influence the way light bounces around). For instance, the arcades' ceilings are not as tall, I believe.

I also simulated the camera flash (see the shadows of the pillar in the top right corner). It makes a substantial difference in the top right corner, on the nearby pillar and the ceiling, propagating to the next pillar as well (cooler temperature too).


Using these settings (very dirty); please only use them for educational purposes, not a commercial product.

With a few more hours of material tuning (especially the pillars with more “rusty” patches), and adjusting the geometry too, it should not be difficult to get extremely close to the photo. That won’t be me, though. :smile:


I knew I was missing something! Thank you @pierred_unity, you are really great…
Looks like it was the lightmap resolution (mine was at 28). Now I have what I was looking for:
I took off the texture to make it more obvious.

Now the bad news is that I used your post-processing values and it looks very different here. I didn't like it, and then I forgot my own values and now it's a mess XD. I need to reconfigure everything to my liking again, but I'll do that later (and I'll also post the result here). BTW, I think I'm using the same scene as you: https://github.com/radishface/Sponza — I just changed some materials and hid some objects.

Yeah, I saw that… lol

I won't either :stuck_out_tongue:

Ah OK, it looks like you had barely done any preparation in the scene.

There are two ways to do that:

The Hard One: Properly unwrap each model that you plan to have lightmapped in your 3D application.
Make sure you are using UV Channel 2 (if you are using 3ds Max).
This way you have complete control over your lightmap unwrapping.
Some claim it is the best; for most models I see ZERO difference, just a lot of wasted time.

The Easy One: Use automatic unwrapping in Unity.
(My favorite, but for some complex models you need to tweak it a lot or even fall back to the "Hard" method.)

After importing your model, you can select all the models you imported, or even the entire building if you exported everything together.

In the Inspector, make sure you are in the Model tab.
Scroll down and:

  • Enable “Generate Lightmap UVs”
  • Depending on the complexity of your object, reduce the Hard Angle (the default is 88).
    Think of it like 3ds Max smoothing groups: the more complicated the shapes, the lower the angle, which breaks your lightmap space down into more pieces. For a box, 88 is great; for a complex sculpture, I choose around 30. It is the angle between two surfaces (i.e. polygons) that the program may consider "continuous".
  • Match this value to the lowest amount you plan to use in your lightmap resolution.
    By default it is 40, but in many cases you do not need such a high lightmap resolution. In low-frequency spaces even 20-25 should be more than enough. For previews, 5-10 is usually great; for final production it really depends on the scene. (A scripted version of these import steps is sketched after this list.)
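
If you prefer to set this up once in an import script instead of clicking through every model, here is a rough sketch using an AssetPostprocessor (put it in an Editor folder); the path filter and the 30-degree value are just examples:

```csharp
using UnityEditor;

// Sketch: apply the steps above automatically on import —
// enable "Generate Lightmap UVs" and lower the hard angle for complex shapes.
public class LightmapUVImportSketch : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("Sponza")) return; // hypothetical path filter

        var importer = (ModelImporter)assetImporter;
        importer.generateSecondaryUV = true;    // "Generate Lightmap UVs"
        importer.secondaryUVHardAngle = 30f;    // default ~88; lower for complex shapes
        importer.secondaryUVPackMargin = 4f;    // padding between UV charts
    }
}
```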


This result is with Lightmap Resolution set at 10.

There is a way to interactively see the resolution of your lightmap change as you tweak the Lightmap Resolution value. Do not make it too dense; after a certain point there is no ROI and it just takes too long to produce lightmaps.

This way you can see the checker overlay that helps you understand how much you should increase your Lightmap Resolution. As you can see, in my case an LR value of 10 is not low.

If you have a GPU with at least 8 GB of memory, it is worth trying the GPU lightmapper.
It baked the lightmaps of my scene in 3 minutes; the CPU would take significantly longer.


As with offline 3D rendering, be careful if there are trees: do not make them static.

For those you will need to use other methods to apply lighting data to the objects (i.e. Light Probes), as in the sketch below.
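
A minimal sketch of scattering a probe grid from a script, if you don't want to place every probe by hand (grid size and spacing are placeholders; run it in the editor before baking):

```csharp
using UnityEngine;

// Sketch: build a grid of light probe positions so non-static objects
// (e.g. trees) still pick up baked bounce light.
public static class ProbeGridSketch
{
    public static void Build(Vector3 origin, int countX, int countY, int countZ, float spacing)
    {
        var go = new GameObject("Light Probe Group");
        var group = go.AddComponent<LightProbeGroup>();

        var positions = new Vector3[countX * countY * countZ];
        int i = 0;
        for (int x = 0; x < countX; x++)
            for (int y = 0; y < countY; y++)
                for (int z = 0; z < countZ; z++)
                    positions[i++] = origin + new Vector3(x, y, z) * spacing;

        // Positions are relative to the group's transform (here at the world origin).
        group.probePositions = positions;
    }
}
```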

With a Lightmap Resolution of 40, you barely see any difference with all the effects on.


Although if you do not enable SSGI, you will need to increase this value.
Baking is important because there may be low-spec machines that underperform with SSGI.



Another way Unity is better than an offline renderer: you can always create different configuration files for post effects and sky setups and load them at will depending on your scene.

You didn't have to overwrite the old values.
You could have set up a brand-new configuration file (Volume Profile) for your effects Volume and switched between them to see which one you like best.
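
A minimal sketch of that idea, keeping several Volume Profile assets and swapping them on the scene's global Volume (the key binding and profile list are placeholders):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: cycle through saved Volume Profiles instead of overwriting one.
public class ProfileSwitcher : MonoBehaviour
{
    public Volume sceneVolume;        // the global HDRP Volume
    public VolumeProfile[] profiles;  // drag your saved profiles here
    int current;

    void Update()
    {
        if (profiles.Length > 0 && Input.GetKeyDown(KeyCode.Tab))
        {
            current = (current + 1) % profiles.Length;
            sceneVolume.sharedProfile = profiles[current];
        }
    }
}
```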

I was talking about the Sponza from Intel, which I guess is the FBX you are using. I tried that, it did not work, so I kept working with the old one. Right now I'm using another FBX that someone from Unity shared some time ago, and I changed some textures.
I know how to use the "Generate Lightmap UVs" tool in Unity, and I also know how to create UV maps with my 3D software; that was not the problem with that point. My problem with that Sponza FBX file is that it has its UVs but Unity does not recognize them (at least not my version of Unity). I'm using the Y-up version.

Maybe you are using: https://www.intel.com/content/www/us/en/developer/topic-technology/graphics-research/samples.html

I’m using: https://github.com/radishface/Sponza

If you are talking about creating your own UVs for lightmaps, trust me, it is better if you make them yourself; I had a lot of problems with that some months ago in my current project, and it is a low-poly game -.-"
https://discussions.unity.com/t/871248/9

Anyway, the information can still be useful for you, so I recommend you read the previous answers :sunglasses:

Now, about the post-effects file: I also knew that, but I was very dumb and changed my own file -.-"