OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_MEM_OBJECT_ALLOCATION_FAILURE → This happens right at the beginning, on a GTX 1070. Does that mean the baker wants more than 8 GB of memory?
Edit: I decreased the resolution dramatically and it seems to work fine, but unfortunately the quality is not satisfying. I’ll still make a couple more tries.
I am using a 750 Ti with 2 GB of RAM, which is apparently not enough for most cases.
But I can bake the Sponza with a lightmap scale of around 3-4 in a minute or two, which is awesome!
However, I can see that Windows 10 allocates a few GB of system memory to help the GPU when playing some games!
So, is there any way of using shared system memory for swapping data and keeping the baking process going?!
Windows 10 x64 - GTX 750 Ti with 2 GB of memory, with/without using some shared system memory!
Total of 10 GB of GPU memory - dedicated plus shared…
Some shared memory is used when needed (I was running a couple of games at the same time).
P.S. This might be useful even on 4, 6 or 8 GB cards for bigger scenes (if possible at all)!
When you have two GPUs, the one NOT used as the primary device (i.e. the one rendering the Unity Editor) will be used for baking automatically. Please take a look at the 'How to select a specific GPU for baking' section of the initial post above.
It really depends on the scene. If the occupancy of the lightmaps is good enough and the lightmaps themselves are large enough, you should be closing in on 100%.
As an example:
If the lightmap resolution is 2k, for example, and occupancy is around 70%, it means we load the GPU with jobs of (2048 × 2048 × 0.7 ≈) 2.9 million texels, which should give a good GPU load.
On the other hand, if the lightmap resolution is 512 and occupancy is 40%, it means we process roughly 100k texels at a time, which will probably not be enough for a good GPU load.
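The arithmetic in the two examples above can be sketched like this (the resolutions and occupancy figures are just the illustrative numbers from the post, not Unity internals):

```python
# Sketch of the texels-per-job arithmetic from the examples above.
# Resolutions and occupancy values are illustrative, not Unity internals.

def texels_per_job(lightmap_resolution: int, occupancy: float) -> int:
    """Approximate number of texels handed to the GPU in one batch."""
    return int(lightmap_resolution * lightmap_resolution * occupancy)

print(texels_per_job(2048, 0.7))  # 2936012 -> ~2.9M texels, good GPU load
print(texels_per_job(512, 0.4))   # 104857  -> ~100k texels, likely too few
```

The takeaway is that batch size scales with the square of the lightmap resolution, so small lightmaps starve the GPU much faster than the resolution numbers alone suggest.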
Hi! Can you please elaborate a bit, or perhaps file a bug report if you think there is something completely wrong with directionality in your scene? The GPU and CPU lightmappers use the same definition of directionality, so they should look the same. Unity’s directional lightmaps aren’t based on SH, so they will look different from realtime shading + SH for indirect light. So this is not an apples-to-apples comparison.
100% usage, if there is enough work to saturate the GPU. See https://discussions.unity.com/t/716137/25 for more detail. Have a look at the Compute queues in the GPU section in the Windows Task Manager or GPU-Z.
I have to disagree; falling back to CPU is simply strange, as there is already a CPU bake option. The default action should be to wait, flush out the VRAM (cache it to the hard drive), and carry on, putting up a warning message every time it does so. That would let users with less VRAM choose between GPU and CPU depending on their hardware. If falling back to CPU happens, which it will on a regular basis for people with low VRAM, the GPU lightmapper is sort of pointless for them.
As for me, I usually start baking my lightmaps when I am done for the day and let them bake overnight. I don’t want to come back in the morning and find out it has been baking with the CPU and still has 2 days to go. If that were what I intended, I would have been away for 2 days.
EDIT: It seems like losing focus on the editor slows it down a lot. I got these times with windows clock open, so I’ll try again… The GPU is more like 11 seconds focused.
These include the Preparing Bake and reflection probes part. Basically, while the blue bar is showing.
45 seconds GPU
6 minutes CPU
Both editor focused. 50 res, 1024 size, 4 bounces, non-dir, AO 1 and 0.5, prioritize view, the rest default.
(Case 1085701) Sadly, Progressive GPU is much slower than Progressive CPU for me and produces incorrect lighting results. Furthermore, Progressive GPU makes my PC lag so badly while baking that I am not even able to switch to a browser, but Progressive CPU doesn’t have this issue.
I am also using the GPU lightmapper in 2018.3b3. My only problem is that with my video card the maximum peak was 60 Mrays.
Usually it is around 15-30 Mrays; what could the cause be?
Regarding Mrays, please note that we are actually computing mega samples, i.e. the shading of the texel is taken into account. Thus the figure will be lower than what you see advertised as raw intersection performance for AMD RadeonRays or DXR.
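To illustrate why a shaded-sample metric reads lower than a raw intersection metric, here is a toy calculation. All the numbers (raw throughput, rays needed per shaded sample) are made up for illustration; they are not actual Unity or RadeonRays statistics:

```python
# Hypothetical numbers for illustration only; not real Unity/RadeonRays stats.
raw_mrays_per_sec = 200.0      # advertised raw ray-intersection throughput
rays_per_shaded_sample = 4.0   # assume each shaded sample costs several rays
                               # (e.g. visibility + shadow + bounce rays)

# Because the lightmapper counts finished *shaded samples*, the reported
# mega-samples figure is a fraction of the raw ray rate:
effective_msamples_per_sec = raw_mrays_per_sec / rays_per_shaded_sample
print(effective_msamples_per_sec)  # 50.0
```

So a card advertised at hundreds of Mrays/s can quite plausibly report tens of mega samples per second once shading work is included, as described above.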
GPU Progressive Lightmapper is what we presented at GDC and are talking about here. It is currently using RadeonRays GPU compute based ray tracing. Further down the line we will support hardware ray tracing APIs. We are currently working on the low level parts of that.
Can you share a screenshot of the incorrect lighting results in the thread? Usually just from that we can make an educated guess about what to do about it.
The issue linked here seems different from the “IsCLEventCompleted” issue; if it is the same, it’s not fixed as stated. I’m getting the following error: “Assertion failed on expression: ‘IsCLEventCompleted(data.startEvent, isStartEventAnError)’” followed by “Assertion failed on expression: ‘IsCLEventCompleted(events->m_StartMarker, isStartEventAnError)’”
I get the error at any lightmap resolution above 12. At a resolution of 12 I can set all the other settings as I please.