I tried to test the Progressive Lightmapper (GPU) on my desktop and on my laptop, and it doesn't work on either device.
I already tried starting the editor with "-OpenCL-PlatformAndDeviceIndices 1 0", but Unity always uses the Intel HD graphics.
My desktop has a GTX 750 Ti with 2 GB and my laptop a GTX 1050 Ti with 4 GB.
Even in AppData/Local/Unity/Editor/Editor.log only the Intel HD is listed; the Nvidia cards don't show up at all.
Both machines have the latest Nvidia drivers, and I have the problem on several Unity versions (2018.3.0f2 | 2018.3.3f1 | 2019.1.0a10).
I've attached the Editor.log (from the laptop) below. How can I solve this?
Edit: After renaming OpenCL.dll to OpenCL.dll.bak, my GPU appears in the Editor.log and I can now select it via -OpenCL-PlatformAndDeviceIndices 0 0. This seems to work.
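Not an official tool, just a sketch for double-checking what OpenCL itself reports: the small standalone C# program below P/Invokes straight into OpenCL.dll and prints every platform/device pair with its indices, which are exactly the two numbers -OpenCL-PlatformAndDeviceIndices expects. The class name and structure are my own; the constants are the standard cl.h values.
[CODE]
using System;
using System.Runtime.InteropServices;
using System.Text;

// Hypothetical helper: lists OpenCL platforms and devices so you can pick
// the indices to pass to "-OpenCL-PlatformAndDeviceIndices <platform> <device>".
static class ClDeviceList
{
    [DllImport("OpenCL.dll")]
    static extern int clGetPlatformIDs(uint numEntries, IntPtr[] platforms, out uint numPlatforms);

    [DllImport("OpenCL.dll")]
    static extern int clGetDeviceIDs(IntPtr platform, ulong deviceType, uint numEntries,
                                     IntPtr[] devices, out uint numDevices);

    [DllImport("OpenCL.dll")]
    static extern int clGetDeviceInfo(IntPtr device, uint paramName, UIntPtr valueSize,
                                      byte[] value, out UIntPtr valueSizeRet);

    const ulong CL_DEVICE_TYPE_ALL = 0xFFFFFFFF; // standard cl.h value
    const uint CL_DEVICE_NAME = 0x102B;          // standard cl.h value

    static void Main()
    {
        clGetPlatformIDs(0, null, out uint platformCount);
        var platforms = new IntPtr[platformCount];
        clGetPlatformIDs(platformCount, platforms, out _);

        for (uint p = 0; p < platformCount; p++)
        {
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, null, out uint deviceCount);
            var devices = new IntPtr[deviceCount];
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, deviceCount, devices, out _);

            for (uint d = 0; d < deviceCount; d++)
            {
                var name = new byte[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, (UIntPtr)(uint)name.Length, name, out _);
                Console.WriteLine($"platform {p}, device {d}: " +
                                  Encoding.ASCII.GetString(name).TrimEnd('\0'));
            }
        }
    }
}
[/CODE]
If the Nvidia platform doesn't show up in that output either, the problem is below Unity (at the OpenCL ICD/driver level) rather than in the lightmapper itself.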
One thing I can think of: if the integrated GPU is set as the first device in the BIOS, it will always be used by the GPU lightmapper; I think I read somewhere that the lightmapper automatically picks the first GPU. On the laptop, go into the BIOS and look for an option to set the discrete GPU as the first/main device (just to try it out)!
EDIT: If this is the case, then I think the lightmapping team would have to give you an option to choose the baking device!
This is not GPU memory being incorrectly reported as 2 GB; it is because something in your scene requires an allocation larger than 2 GB, and the driver doesn't allow that for this particular Nvidia card (CL_DEVICE_MAX_MEM_ALLOC_SIZE is 2 GB in this case). Most Nvidia cards I have seen cap the maximum single allocation at 25% of total GPU memory (so 2 GB on an 8 GB card); AMD cards usually allow 50%.
Either bake with a card that has more memory, or reduce the supersampling count or the lightmap atlas size. Baking large terrains could also be the cause, so try reducing the heightmap resolution and see if that helps.
If you get out-of-memory errors related to clustering, it means you are precomputing realtime GI. This is unrelated to the GPU lightmapper, but please consider whether you really need realtime GI and baked GI enabled at the same time; the realtime GI precompute can be very CPU and memory intensive.
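To make those two suggestions concrete, here is a minimal editor-script sketch using the 2018.3/2019.1-era API; the menu path and the 1024 value are just examples, and the properties mirror the "Lightmap Size" and GI checkboxes in the Lighting window.
[CODE]
using UnityEditor;

// Example only: lower the lightmap atlas size and disable the realtime GI
// precompute, the two settings suggested above for reducing bake memory use.
public static class BakeMemoryHelpers
{
    [MenuItem("Tools/Lightmapping/Smaller Atlases, Baked GI Only")]
    static void Apply()
    {
        // "Lightmap Size" in the Lighting window; smaller atlases mean
        // smaller single allocations during the GPU bake.
        LightmapEditorSettings.maxAtlasSize = 1024;

        // The realtime GI (Enlighten) precompute is separate from the GPU
        // lightmapper and can be very CPU and memory intensive on its own.
        Lightmapping.realtimeGI = false;
        Lightmapping.bakedGI = true;
    }
}
[/CODE]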
By the way, if anyone experiences the problem I had with strange texture artifacts that appear only in your build, you might have this bug. Apparently the resources file is limited to 4 GB in size, a holdover from 32-bit Unity, which has to be worked around with asset bundles or multiple scenes. This mostly affects people with large, complex scenes.
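For reference, building asset bundles (so large assets no longer live in the single resources file) is only a few lines of editor code. A minimal sketch, assuming you have already assigned bundle names in the inspector; the output folder name is just an example.
[CODE]
using System.IO;
using UnityEditor;

// Example only: builds every asset bundle defined in the project so large
// assets no longer have to live in the single (4 GB-limited) resources file.
public static class BuildAllBundles
{
    [MenuItem("Tools/Build Asset Bundles")]
    static void Build()
    {
        const string outputDir = "AssetBundles"; // example path
        Directory.CreateDirectory(outputDir);
        BuildPipeline.BuildAssetBundles(outputDir,
                                        BuildAssetBundleOptions.None,
                                        EditorUserBuildSettings.activeBuildTarget);
    }
}
[/CODE]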
Just downloaded Unity 2019.1.0b1. I'm looking for the OptiX denoiser option in the lightmap settings (Progressive CPU mode), but the only choices are None / A_Trous / Gaussian.
I have a GTX 1050 with driver 416.34.
How can I enable the new OptiX denoising option?
However, there is currently a 4 GB GPU VRAM minspec, as the denoiser is really memory hungry. We are fixing this in 19.2, though. Regardless of the minspec, you should see those options.
[QUOTE="Jesper-Mortensen, post: 4167808, member: 224237"]
However, there is currently a 4 GB GPU VRAM minspec, as the denoiser is really memory hungry. We are fixing this in 19.2, though. Regardless of the minspec, you should see those options.
[/QUOTE]
Thanks for your reply! Sadly I only have 2 GB of VRAM, so that's why I can't see the OptiX option.
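If anyone else wants to check whether their card clears that bar, the editor exposes the reported VRAM through SystemInfo; a tiny sketch, where the 4096 MB threshold simply mirrors the minspec mentioned above.
[CODE]
using UnityEditor;
using UnityEngine;

// Example only: logs whether the current GPU meets the 4 GB VRAM minspec
// quoted above for the OptiX denoiser.
public static class CheckDenoiserMinspec
{
    [MenuItem("Tools/Lightmapping/Check OptiX VRAM Minspec")]
    static void Check()
    {
        int vramMB = SystemInfo.graphicsMemorySize; // reported in megabytes
        Debug.Log($"GPU: {SystemInfo.graphicsDeviceName}, VRAM: {vramMB} MB, " +
                  $"meets 4 GB minspec: {vramMB >= 4096}");
    }
}
[/CODE]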
The OptiX denoiser is greyed out for me on the Progressive GPU lightmapper but available on CPU. The tooltip says "your hardware doesn't support denoising". I have a laptop GTX 1070 with 8 GB.
I'm quite sure every setting is exactly the same as when using Progressive CPU.
Why does this happen, and can it be fixed?
Enabling a filter doesn't give an ideal result either.
Do you guys have this issue where, once the GPU lightmapper gives a warning like "out of memory" or "out of resources", you have to restart the editor to get it working again? Otherwise it gets stuck on the preparing step.
Thanks! This seems to be a sampling pattern issue (work is planned on the GPU lightmapper in that regard). The best thing would be to open a bug report with the scene attached so we can confirm it and verify that it is fixed when we do the sampling pattern work.
Switching to the CPU lightmapper should free all memory and tear down the OpenCL context, i.e. the next GPU lightmapper bake should start from scratch. So this seems like a bug. Can you provide a repro (ideally in a bug report)?
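For what it's worth, that switch can also be done from a script, which makes it quick to test whether it really resets anything. A rough sketch using the 2018.3/2019.1-era LightmapEditorSettings.Lightmapper enum; whether an immediate flip back is enough to force the teardown is exactly what the repro question above is about.
[CODE]
using UnityEditor;

// Example only: flips the backend to Progressive CPU and back to Progressive GPU,
// which (per the reply above) should tear down and recreate the OpenCL context.
public static class ResetGpuLightmapper
{
    [MenuItem("Tools/Lightmapping/Reset GPU Lightmapper Context")]
    static void Reset()
    {
        LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveCPU;
        LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveGPU;
    }
}
[/CODE]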
I thought this might clear the OpenCL context as you say and tried it, but it didn't help. I've actually had this issue for some time now on different Unity versions; it feels like it has always been there!
I will try to repro the issue and may file a report!
Very promising, but please, please, please tell us there is a plan to fix the lightmapper's UV packing algorithm; it has been extremely inefficient since the Unity 4 days.
Right now I have a single mesh in a scene using Unity's default auto-generated UV settings, and the lightmapper has created a 4K texture but has barely populated a 1K area of it with UV islands.
See this thread for a long-running discussion of the issue.
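Until the packer itself improves, the knobs that affect how the auto-generated UVs are laid out live on the model importer. A sketch of tightening them on a selected model asset; the specific values are only examples, not recommendations.
[CODE]
using UnityEditor;
using UnityEngine;

// Example only: tightens the auto-generated lightmap UV settings on the
// selected model asset; smaller pack margins waste less atlas space.
public static class TightenLightmapUVs
{
    [MenuItem("Tools/Lightmapping/Tighten Lightmap UVs On Selected Model")]
    static void Apply()
    {
        var selected = Selection.activeObject;
        string path = selected != null ? AssetDatabase.GetAssetPath(selected) : null;
        var importer = string.IsNullOrEmpty(path) ? null : AssetImporter.GetAtPath(path) as ModelImporter;
        if (importer == null)
        {
            Debug.LogWarning("Select a model asset first.");
            return;
        }

        importer.generateSecondaryUV = true;   // "Generate Lightmap UVs"
        importer.secondaryUVPackMargin = 4;    // example value, in pixels
        importer.secondaryUVHardAngle = 88;    // example value, in degrees
        importer.SaveAndReimport();
    }
}
[/CODE]
The renderer's "Scale in Lightmap" is the other per-object lever; the inefficiency discussed in the linked thread is in the atlas packer itself, though, so these settings can only mitigate it.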