Progressive GPU Lightmapper preview

The GPU lightmapper is a preview feature. Preview means that you should not rely on it for full-scale production. No new features will be backported to 2018.4, 2019.x, or any of the following releases. We recommend using 2023.2 or later, as that is when the GPU lightmapper reached feature parity with the CPU version.
The goal of the GPU lightmapper is to provide the same feature set as the CPU progressive lightmapper, with higher performance.

We would like you to take it for a spin in the latest alpha/beta version and let us know what you think. Please use the bug reporter to report issues instead of posting them in this thread. This way we have all the information we need, such as Editor logs, the scene used, and the system configuration.

Missing features in 2018.3. Most features will be added in the 2019.x and 2020.x release cycles:

  • Double-sided GI support. Geometry will always appear single sided from the GPU lightmapper’s point of view. Added in 2019.1.

  • Cast/receive shadows support. Geometry will always cast and receive shadows when using the GPU lightmapper. Added in 2019.1.

  • Baked LOD support. Added in 2020.1.0a20.

  • A-Trous filtering. The GPU lightmapper will use Gaussian filtering instead. Added in 2020.1.0a15.

  • Experimental custom bake API. Added in 2020.1.0a6.

  • Submesh support. Until then, the material properties of the first submesh will be used. Added in 2019.3.

  • Reduced memory usage when baking.

Features added in 2019.1 (will not be backported)

  • Double-sided GI support.

  • Cast/receive shadows support.

  • macOS and Linux support.

Features added in 2019.2 (will not be backported)

  • Multiple importance sampling for environment lighting.
  • OptiX and Open Image Denoise denoiser support.
  • Increased sampling performance when using view prioritization or low occupancy maps:

  • Direct light (2019.2.0a9).

  • Indirect and environment (2019.2.0a11).

Features added in 2019.3 (will not be backported)

  • Submesh support (2019.3.0a3).
  • Match CPU lightmapper sampling algorithm (2019.3.0a8).
  • AMD Radeon Pro Image Filters AI denoiser added. Currently Windows and AMD hardware only (2019.3.0a10).
  • Added support for baking box and pyramid shapes for SRP spotlights (2019.3.0a10).

Features added in 2020.1 (will not be backported)

  • GPU backend can now export AOVs to train ML code for denoising lightmaps. Only available in developer mode (2020.1.0a1).
  • Compressed transparency textures; 75% memory reduction by using rgba32 instead of floats (2020.1.0a2).
  • GPU lightmapper can now write out the filtered AO texture to disk, alongside the Lighting Data Asset. Only available in On Demand mode. Only available through experimental API (2020.1.0a3).
  • Support for the Experimental custom bake API for the GPU lightmapper (2020.1.0a6).
  • Accurate OpenCL memory status for AMD and Nvidia GPUs (2020.1.0a9).
  • Reduced GPU memory usage when baking lighting by using stackless BVH traversal (2020.1.0a9).
  • Show a user-friendly name in the Lighting window for AMD GPUs on Windows and Linux instead of the GPU code name (2020.1.0a9).
  • Compute device can be selected in a dropdown in the Lighting window (2020.1.0a15).
  • Limit memory allocations for light probes to fit in available memory when baking with progressive lightmappers (2020.1.0a15).
  • A-Trous filtering (2020.1.0a15).
  • Baked LOD support (2020.1.0a20).
  • Baked light cookie support (2020.1.0a22).

Features added in 2020.2

  • Brought back stack-based BVH traversal, this time with Baked LOD support (2020.2.0a1).
  • Reduced memory usage when baking large lightmaps on the GPU by disabling progressive updates and using tiling on the ray space buffers (2020.2.0a11).

Features added in 2021.2

  • Memory and performance improvements when baking Light Probes (2021.2.0a17).
  • Lightmap space tiling to reduce memory usage (2021.2.0a19).

Supported hardware
The GPU lightmapper needs a system with:

  • At least one GPU with OpenCL 1.2 support and at least 2GB of dedicated memory.

  • A CPU that supports SSE4.1 instructions.

  • Recommended AMD graphics driver: 18.9.3.

  • Recommended Nvidia graphics driver: 416.34.
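
The SSE4.1 requirement can be checked without launching the Editor. A minimal sketch in Python, assuming a Linux machine where CPU features are exposed on the "flags" line of /proc/cpuinfo (the `has_sse41` helper is hypothetical, not a Unity API):

```python
# Check whether a CPU advertises SSE4.1 by inspecting the "flags" line
# of Linux's /proc/cpuinfo output. Hypothetical helper, not a Unity API.
def has_sse41(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line lists one token per supported CPU feature.
            return "sse4_1" in line.split()
    return False

# Demo with a sample flags line; on a real machine, pass the contents
# of /proc/cpuinfo, e.g. open("/proc/cpuinfo").read().
sample = "flags\t\t: fpu vme de sse sse2 ssse3 sse4_1 sse4_2 avx"
print(has_sse41(sample))  # -> True
```

On Windows, tools such as CPU-Z report the same information.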

Platforms

  • Windows only for the 2018.3 preview.
  • macOS and Linux support was added in 2019.1.

How to select a specific GPU for baking
If the computer contains more than one graphics card, the lightmapper will attempt to automatically use the card not used for the Unity Editor’s main graphics device. The name of the card used for baking is displayed next to the bake performance in the Lighting window. The list of available OpenCL devices will be printed in the Editor log and looks like this:

-- Listing OpenCL platforms(s) --
* OpenCL platform 0
PROFILE = FULL_PROFILE
VERSION = OpenCL 2.1 AMD-APP (2580.6)
NAME = AMD Accelerated Parallel Processing
VENDOR = Advanced Micro Devices, Inc.
* OpenCL platform 1
PROFILE = FULL_PROFILE
VERSION = OpenCL 1.2 CUDA 9.2.127
NAME = NVIDIA CUDA
VENDOR = NVIDIA Corporation
-- Listing OpenCL device(s) --
* OpenCL platform 0, device 0
DEVICE_TYPE = 4
DEVICE_NAME = RX580
DEVICE_VENDOR = Advanced Micro Devices, Inc.
...
* OpenCL platform 0, device 1
DEVICE_TYPE = 2
DEVICE_NAME = Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
DEVICE_VENDOR = GenuineIntel
...
* OpenCL platform 1, device 0
DEVICE_TYPE = 4
DEVICE_NAME = GeForce GTX 660 Ti
DEVICE_VENDOR = NVIDIA Corporation
...

You can instruct the GPU lightmapper to use a specific OpenCL device using this command line option: -OpenCL-PlatformAndDeviceIndices
For example, to select the GeForce GTX 660 Ti from the log above, the Windows command line looks like this:

C:\Program Files\Unity 2019.1.0a3\Editor>Unity.exe -OpenCL-PlatformAndDeviceIndices 1 0

The card used for Unity’s main graphics device that renders the Editor viewport can be selected using the -gpu command line argument for the Unity.exe process.

If an OpenCL device is ignored for lightmapping, for instance because it has too little memory, it does not count when specifying the device index on the command line, so you have to subtract the number of ignored devices from the index yourself.
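
The platform and device indices for a given card can also be pulled out of the Editor log programmatically. A minimal sketch in Python, assuming the log format shown above (the `find_opencl_indices` helper is hypothetical; adjust the patterns if your Unity version prints a different layout):

```python
import re

def find_opencl_indices(editor_log: str, device_name: str):
    """Return (platform, device) indices for the first OpenCL device
    whose DEVICE_NAME contains device_name, or None if not found."""
    current = None  # (platform, device) of the entry currently being parsed
    for line in editor_log.splitlines():
        m = re.match(r"\* OpenCL platform (\d+), device (\d+)", line.strip())
        if m:
            current = (int(m.group(1)), int(m.group(2)))
            continue
        m = re.match(r"DEVICE_NAME\s*=\s*(.+)", line.strip())
        if m and current and device_name in m.group(1):
            return current
    return None

# Demo with an excerpt in the same shape as the Editor log above.
log = """\
* OpenCL platform 0, device 0
DEVICE_TYPE = 4
DEVICE_NAME = RX580
* OpenCL platform 1, device 0
DEVICE_TYPE = 4
DEVICE_NAME = GeForce GTX 660 Ti
"""
print(find_opencl_indices(log, "GeForce GTX 660 Ti"))  # -> (1, 0)
```

Note that this reads raw log indices; as described above, devices the lightmapper ignores still need to be subtracted from the index manually.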

Things to keep in mind

  • 2019.2 and older releases produce sampling and noise patterns slightly different from those produced by the CPU lightmapper, because the sampling algorithm is different. 2019.3 and newer use the same sampling algorithm as the CPU lightmapper.

  • If the baking process needs more than the available GPU memory, baking can fall back to the CPU lightmapper. Some drivers with virtual memory support will instead start swapping to CPU memory, making the bake much slower.

  • GPU memory usage is very high in the preview version, but we are optimizing this. In 2018.3 you need more than 12GB of GPU memory if you want to bake a 4K lightmap.

  • The Lightmapper field in the Lighting window must be set to Progressive GPU (Preview) to enable the GPU lightmapper.

Linux driver setup
For Intel GPUs, install the following packages:

sudo apt install clinfo ocl-icd-opencl-dev opencl-headers

Then install the latest Intel OpenCL Linux driver from https://software.intel.com/en-us/articles/opencl-drivers#latest_linux_driver
Do NOT install mesa-opencl-icd; even though Mesa is normally used as the Intel GPU driver, its OpenCL driver doesn't work.

When is it ready?
We removed the preview label in 2023.2.0a6.

Have been waiting for that GPU baking feature since... forever :)

Thanks! Looking forward to the Mac version. Any plans for including this in the 2018 cycle, or is this 2019?

Awesome. Btw any plans to support skinned mesh renderer baking? Currently I just want to be able to bake a static skinned mesh renderer.

[quote=“Lars-Steenhoff”, post:3, topic: 716137]
Thanks! Looking forward to the Mac version. Any plans for including this in the 2018 cycle, or is this 2019?
[/quote]
It is too early to say, it isn’t stable enough yet.

[quote=“optimise”, post:4, topic: 716137]
Awesome. Btw any plans to support skinned mesh renderer baking? Currently I just want to be able to bake a static skinned mesh renderer.
[/quote]
This is unrelated to baking lighting on the GPU, but rest assured that the task is in our backlog.

Well, I tested the new GPU lightmapper for the past 30 minutes, but every time it automatically switches to CPU baking after the bake started. Also getting an error stating: Assertion failed on expression: 'IsCLEventCompleted(events->m_StartMarker, isStartEventAnError)'

I made a bug report with scene file etc.

[quote=“Thomas-Pasieka”, post:7, topic: 716137]
Well, I tested the new GPU lightmapper for the past 30 minutes, but every time it automatically switches to CPU baking after the bake started. Also getting an error stating: Assertion failed on expression: ‘IsCLEventCompleted(events->m_StartMarker, isStartEventAnError)’

I made a bug report with scene file etc.
[/quote]
Thanks, we’ll look into that. Falling back to the CPU lightmapper happens if something goes wrong during baking. What is the case number?

[quote=“KEngelstoft”, post:8, topic: 716137]
Thanks, we’ll look into that. Falling back to the CPU lightmapper happens if something goes wrong during baking. What is the case number?
[/quote]

Case 1085280

[quote=“KEngelstoft”, post:1, topic: 716137]

  • Sampling and noise patterns will look slightly different than what is produced by the CPU lightmapper as the sampling algorithm used is different.
[/quote]

Assuming you mean the CPU fallback from GPU baking:

Why have CPU fallback then? Why not deny CPU fallback, pause baking, save, flush, resume? It’s called GPU baking for a reason. We would expect consistent bakes. If we want CPU baking, should we not … pick CPU?

Fallback only works when the end result is the same for generating assets.
Fallback should only produce a different result in realtime contexts.

Also, while Directional might be a while off, I hope it will be improved from the existing washed-out Unity effect: https://twitter.com/guycalledfrank/status/1043441539404509184
Here we see Bakery follows ground truth really closely @guycalledfrank

Not sure why these decisions were made, but if Unity will be improving baking it's probably best to always head for ground truth. If the visual can be achieved then dialled back, it's better than never being able to achieve it - unless it was a performance issue.

Oh dang....... the GPU lightmapper is very fast; it feels like baking the lightmap using Radeon ProRender GPU mode.
Sometimes I'm also getting the "IsCLEventCompleted" error though.
Seems kinda random. At a higher resolution it bakes just fine, but when I lower it, I suddenly get that error and it falls back to CPU.

[quote=“guycalledfrank”, post:941, topic: 704890]
Bakery is designed with low VRAM in mind:

It will cache stuff on the hard drive to prevent going out of RAM/VRAM, and this specific optimization allows it to only store a very limited dataset in memory (stuff in the lightmap being processed + compressed stuff that is visible from it).

If you have GTX1060 or better - good luck hitting the VRAM limit :slight_smile:
[/quote]
So Bakery works with 2GB cards and does not need to fall back to the CPU; it just works. I think this would be better than fast-then-slow with different results, as in Unity’s current implementation…

[quote=“Reanimate_L”, post:12, topic: 716137]
Oh dang… the GPU lightmapper is very fast; it feels like baking the lightmap using Radeon ProRender GPU mode.
Sometimes I’m also getting the “IsCLEventCompleted” error though.
Seems kinda random. At a higher resolution it bakes just fine, but when I lower it, I suddenly get that error and it falls back to CPU.
[/quote]
Hi,
We are aware of this issue: https://issuetracker.unity3d.com/issues/gpu-plm-fallback-from-gpu-to-cpu-lightmapper-in-cl-profiling-info-not-available-when-baking-with-gpu-lightmapper
We’ll do our best to fix it as soon as possible.
Thanks for your contribution!

Hmm, the integrated Intel UHD 630 of my 8700K is enabled alongside my 1070, and Unity seems to be preferring it for GPU baking according to Task Manager? Surprised it baked, but is there any way to switch which GPU this uses?

Edit: so I guess disabling it in Device Manager sort of resolved my question. I think a more official way of selecting a device would be nicer. I'm also curious if multi-GPU support is on the slate?

Quick question: for a GTX 1070 card, what kind of GPU usage should we be expecting?

[quote=“thelebaron”, post:15, topic: 716137]
I think a more official way of selecting a device would be nicer. I’m also curious if multi-GPU support is on the slate?
[/quote]

You can select the device used for baking with -OpenCL-PlatformAndDeviceIndices; the list of devices is printed in the Editor log.

Check the first point of the 'How to select a specific GPU for baking' section in the initial post. However, as you state, it would be preferable if the GTX 1070 were selected by default in that case. Would you mind opening a bug in that regard? :slight_smile:

Is there any chance of multiple GPU support for baking? I currently have 4 x 1080 Ti in my machine (I use this for various other CUDA-powered tasks ;) ). It would certainly improve performance ;)

I have a 1080 Ti and an Intel i5. So this means I have to go out and buy an AMD CPU, or an i7/i9 CPU......Really?????? Getting my hopes up about the "GPU" lightmapper, yet it boils down to CPU usage as well?? Very disappointing. The GPU lightmapper seems to be for people with high-end CPUs.

I actually think CPU fallback is a smart idea. VRAM is limited, and there will be places where you may be able to push past the limits, so the fallback opens up new possibilities.

It does seem like it happens waaay too easily currently, though. I do expect GPU VRAM usage to be optimized so that it happens very rarely, so everybody, keep calm.
