Hello, 2018.3b3 will include the GPU lightmapper preview. I have posted details about it here: Progressive GPU Lightmapper preview
This is great, thanks!
OMG!!
and where is 2018.3b3?
Awesome news!
27th of September
oh snap
Getting good speed: about 10 times the speed of baking on the CPU.
This was a 14-core i9 CPU vs. a 1080 Ti graphics card.
Really nice speed. Too bad that it doesn’t handle scaled meshes yet. I know that scaled transforms should be avoided in general, but there’s usually no way around it, and our level designers like to use scaling for a lot of things. Looking forward to the complete feature set. I’m liking the preview so far.
Falling back to CPU happens when there isn’t enough GPU memory available. What GPU do you have? Please attach the Editor.log.
3725989–308611–Editor.zip (29.8 KB)
Your GeForce 750 Ti card only has 2 GB of dedicated GPU memory, and that is shared with the rendering in the Editor viewport. You can try the following to lower GPU memory usage:
- Disable lightmap view prioritization
- Disable lightmap supersampling (Unity - Manual: Lightmap Parameters Asset Baked GI - Anti-aliasing Samples)
- Disable the Post Processing Stack if you use it.
- Use non-directional lightmaps
- Use a smaller lightmap size to make it fit in GPU memory (see the script sketch below).
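A minimal editor-script sketch of the scriptable parts of this list, assuming Unity 2018.3’s LightmapEditorSettings API (including the Lightmapper.ProgressiveGPU value that ships with the preview); the menu path and the 512 px atlas size are just illustrative choices. The other tips (Prioritize View, supersampling, directional mode, post processing) are easiest to change in the Lighting window or on the Lightmap Parameters asset.

```csharp
// Sketch only: place this file in an Editor folder of the project.
using UnityEditor;
using UnityEngine;

public static class GpuBakeMemoryTweaks
{
    [MenuItem("Tools/Lightmapping/Use GPU Lightmapper With Smaller Atlases")]
    static void Apply()
    {
        // Select the Progressive GPU (Preview) backend.
        LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveGPU;

        // Smaller lightmap atlases need less GPU memory during the bake.
        LightmapEditorSettings.maxAtlasSize = 512;

        Debug.Log("Switched to Progressive GPU with 512 px lightmap atlases.");
    }
}
```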
OK … but what about this? [a total of 10 GB of memory??]
Those 10 GB are the combined 2 GB of dedicated GPU memory and 8 GB of shared system memory. The shared part is not accessible for us to use on Nvidia cards, as the driver decides how to manage the buffers and it doesn’t allow virtual GPU memory. So once the 2 GB are used, we run out.
On AMD cards, the driver can swap to main memory, but once this happens the baking becomes slower.
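To see that split on your own machine, here is a small sketch using Unity’s SystemInfo API: graphicsMemorySize reports only the dedicated video memory (in MB), which per the explanation above is the budget that matters on Nvidia cards, while the shared portion counted by tools like Task Manager is not usable by the baker.

```csharp
// Sketch: attach to any GameObject to log dedicated VRAM vs. system RAM.
using UnityEngine;

public class VramReport : MonoBehaviour
{
    void Start()
    {
        // SystemInfo.graphicsMemorySize returns dedicated video memory in MB.
        Debug.Log($"Dedicated VRAM: {SystemInfo.graphicsMemorySize} MB " +
                  $"({SystemInfo.graphicsDeviceName}); " +
                  $"system RAM: {SystemInfo.systemMemorySize} MB");
    }
}
```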
This is an exceptionally great improvement! Love it to death so far! Though now I just need to get a GPU with more memory…
So … I’m planning to buy a new AMD card with more than 16 GB of VRAM.
Keep in mind that this is a preview; we aim to lower memory consumption before releasing the final version.
Well … I hate AMD!! But it seems reality is different [specifically for using the GPU lightmapper to bake large architectural scenes] … The GPU lightmapper can’t use Nvidia’s shared memory, and it gets slower when using AMD’s shared memory [i.e. system RAM] … So, because Nvidia doesn’t have fairly priced cards with 16 GB of VRAM, AMD is the only real choice for now, with cards that have 16 GB of VRAM, or maybe over 16 GB in the next release!?