Progressive GPU Lightmapper preview

I don’t get it.

I tried to test the Progressive Lightmapper (GPU) on my desktop and on my laptop, and it doesn’t work on either device.
I already tried launching with “-OpenCL-PlatformAndDeviceIndices 1 0”, but Unity always uses the Intel HD GPU.
My desktop has a GTX 750 Ti with 2GB and my laptop a GTX 1050 Ti with 4GB.
However, even in AppData/Local/Unity/Editor/Editor.log only the Intel HD is listed; the Nvidia cards don’t show up at all.

Both devices have the latest Nvidia drivers. I have the problem on multiple Unity versions (2018.3.0f2 | 2018.3.3f1 | 2019.1.0a10).

I’ve attached the Editor.log below (from the laptop). How can I solve the problem?

Edit:
After renaming OpenCL.dll to OpenCL.dll.bak, my GPU appears in the Editor.log. I can now select it via -OpenCL-PlatformAndDeviceIndices 0 0. This seems to work.
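For reference, the full launch command looks something like this (the install and project paths are just examples; the two numbers are the platform and device indices from the listing Unity prints to Editor.log):

    "C:\Program Files\Unity\Editor\Unity.exe" -projectPath "C:\Projects\MyGame" -OpenCL-PlatformAndDeviceIndices 0 0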

-- Listing OpenCL platforms(s) --
* OpenCL platform 0
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 2.1
    NAME = Intel(R) OpenCL
    VENDOR = Intel(R) Corporation
    EXTENSIONS = cl_intel_dx9_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp64 cl_khr_gl_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_spir
-- Listing OpenCL device(s) --
* OpenCL platform 0, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = Intel(R) HD Graphics 630
    DEVICE_VENDOR = Intel(R) Corporation
    DEVICE_VERSION = OpenCL 2.1
    DRIVER_VERSION = 22.20.16.4749
    DEVICE_MAX_COMPUTE_UNITS = 23
    DEVICE_MAX_CLOCK_FREQUENCY = 1000
    CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 2147483647
    CL_DEVICE_HOST_UNIFIED_MEMORY = true
    CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2147483647
    DEVICE_GLOBAL_MEM_SIZE = 3378762548
    DEVICE_EXTENSIONS = cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_d3d11_nv12_media_sharing cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_dx9_media_sharing cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_simultaneous_sharing cl_intel_subgroups cl_intel_subgroups_short cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp16 cl_khr_fp64 cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_gl_sharing cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups cl_khr_throttle_hints
* OpenCL platform 0, device 1
    DEVICE_TYPE = 2
    DEVICE_NAME = Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
    DEVICE_VENDOR = Intel(R) Corporation
    DEVICE_VERSION = OpenCL 2.1 (Build 10)
    DRIVER_VERSION = 7.2.0.10
    DEVICE_MAX_COMPUTE_UNITS = 4
    DEVICE_MAX_CLOCK_FREQUENCY = 2500
    CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 131072
    CL_DEVICE_HOST_UNIFIED_MEMORY = true
    CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2116969472
    DEVICE_GLOBAL_MEM_SIZE = 8467877888
    DEVICE_EXTENSIONS = cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_dx9_media_sharing cl_intel_dx9_media_sharing cl_khr_d3d11_sharing cl_khr_gl_sharing cl_khr_fp64 cl_khr_image2d_from_buffer

One thing I can think of: if the integrated GPU is set as the first GPU in the BIOS, it will always be used by the GPU lightmapper; I think I saw somewhere that the lightmapper automatically picks the first GPU. Regarding the laptop: go into the BIOS and look for an option to set the discrete GPU as the first/main one (just to try it)!

EDIT: if this is the case, then I think the lightmapping team would have to give you an option to choose a baking device!

This is not GPU memory being incorrectly reported as 2GB; it’s that something in your scene requires an allocation larger than 2GB, and the driver doesn’t allow that for this particular Nvidia card (CL_DEVICE_MAX_MEM_ALLOC_SIZE is 2GB in this case). Most Nvidia cards I have seen cap the maximum single allocation at 25% of total GPU memory; AMD cards usually allow 50%.
Either bake with a different card that has more memory, or reduce the supersampling count or the lightmap atlas size. Baking large terrains could also be the cause, so try reducing the heightmap resolution and see if that helps.
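To put numbers on that heuristic: an 8GB Nvidia card would cap single allocations at roughly 25% = 2GB, while a comparable AMD card would allow about 4GB. Below is a minimal editor sketch of the suggested mitigations; it assumes the 2018.3+ LightmapEditorSettings API, and the menu path, the atlas size of 1024 and the heightmap resolution of 513 are illustrative values only:

    using UnityEditor;
    using UnityEngine;

    public static class GpuBakeMemoryFixes
    {
        [MenuItem("Tools/Reduce Bake Memory Pressure")]
        public static void Apply()
        {
            // Shrink the lightmap atlas: one 4096x4096 RGBA-float buffer alone is
            // 4096 * 4096 * 16 B = 256 MB, and a bake needs several buffers of that order.
            LightmapEditorSettings.maxAtlasSize = 1024;

            // Lower the terrain heightmap resolution (must be 2^n + 1, e.g. 513).
            // Warning: assigning heightmapResolution resets the terrain's height data.
            Terrain terrain = Object.FindObjectOfType<Terrain>();
            if (terrain != null)
                terrain.terrainData.heightmapResolution = 513;
        }
    }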


If you get out-of-memory errors related to clustering, it means you are precomputing realtime GI. This is unrelated to the GPU lightmapper, but please consider whether you really need realtime GI and baked GI enabled at the same time. The realtime GI precompute can be very CPU- and memory-intensive.
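If you want to script that, here is a minimal sketch using the UnityEditor.Lightmapping API (the menu path is just an example):

    using UnityEditor;

    public static class BakeHelper
    {
        [MenuItem("Tools/Bake Without Realtime GI")]
        public static void BakeBakedOnly()
        {
            Lightmapping.realtimeGI = false; // skip the CPU/memory-heavy realtime GI precompute
            Lightmapping.bakedGI = true;     // keep baked GI
            Lightmapping.BakeAsync();        // non-blocking bake of the active scene
        }
    }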

Brilliant! Exactly the kind of information I needed. Thank you very much!

By the way, if anyone experiences the problem I had with strange texture artifacts that appear only in builds, you might have this bug. Apparently the resources file is limited to 4GB in size, a holdover from 32-bit Unity, and the workaround is to use asset bundles or multiple scenes. This applies mostly to people with large, complex scenes.
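For anyone going the asset bundle route, a minimal build-script sketch; the output folder and build target are just examples, and the folder must already exist:

    using UnityEditor;

    public static class BundleBuilder
    {
        [MenuItem("Tools/Build Asset Bundles")]
        public static void Build()
        {
            // Builds every asset that has an AssetBundle name assigned in the Inspector,
            // moving large assets out of the single size-limited resources file.
            BuildPipeline.BuildAssetBundles(
                "Assets/StreamingAssets",
                BuildAssetBundleOptions.None,
                BuildTarget.StandaloneWindows64);
        }
    }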

Just downloaded Unity 2019.1.0b1. I’m looking for the Optix denoiser option in the Lightmapping Settings (Progressive CPU mode), but the only choices are None/A-Trous/Gaussian.
I have a GTX 1050 with driver 416.34.
How can I “enable” the new Optix denoising option?


You should have new options for denoising:

However, there is currently a 4GB GPU VRAM minspec, as the denoiser is really memory hungry. We are fixing this in 19.2, though. Regardless of the minspec, you should see those options.
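If you want to check what Unity reports for your card, a quick sketch (SystemInfo.graphicsMemorySize returns an approximate size in megabytes):

    using UnityEngine;

    public class VramCheck : MonoBehaviour
    {
        void Start()
        {
            int vramMB = SystemInfo.graphicsMemorySize;
            Debug.Log("VRAM: " + vramMB + " MB; meets 4 GB denoiser minspec: " + (vramMB >= 4096));
        }
    }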


[QUOTE=“Jesper-Mortensen, post: 4167808, member: 224237”]
However, there is currently a 4GB GPU VRAM minspec, as the denoiser is really memory hungry. We are fixing this in 19.2, though. Regardless of the minspec, you should see those options.[/QUOTE]

Thanks for your reply! Sadly I have only 2GB VRAM, so that’s why I can’t see the Optix option.

Really looking forward to the 19.2 fix!

The Optix denoiser is greyed out for me on the Progressive GPU lightmapper but available on CPU? The tooltip says “your hardware doesn’t support denoising”. I have a laptop with a GTX 1070, 8GB.

I think the GPU-accelerated Optix denoising is currently available only when using the Progressive CPU lightmapper.
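If you want to try that, switching the bake backend from script is a one-liner; a minimal sketch, assuming the 2018.3+ LightmapEditorSettings API:

    using UnityEditor;

    public static class DenoiseWorkaround
    {
        [MenuItem("Tools/Switch To Progressive CPU")]
        public static void SwitchToCpu()
        {
            // Per the post above, the Optix option is selectable on the CPU backend.
            LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveCPU;
        }
    }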


Yes, what Total3D said. I have made it work in 19.2, though. Think of the denoising in 19.1 as a soft launch ;)


Unity 2018.3.4 & 2018.3.1
Hi there.
I finally managed to get the GPU lightmapper working after the recent update.
But I get some weird results like this:
(screenshot: GPU bake result with artifacts)


I’m quite sure every setting is exactly the same as when using the Progressive CPU lightmapper.
Why does this happen, and can it be fixed?
Enabling the filter doesn’t achieve an ideal result either:
(screenshot: filtered GPU bake result)

Can you post a screenshot showcasing the difference between the CPU and GPU result? Or the scene if possible?

Do you guys have this issue where, once the GPU lightmapper gives a warning like out of memory or out of resources, you have to restart the editor to get it working again? Otherwise it gets stuck on the preparing step.

(screenshot: CPU bake result)
This is the CPU result with the same settings.

Thanks! This seems to be a sampling pattern issue (work is planned on the GPU lightmapper in that regard). The best thing would be to open a bug with the scene attached so we can confirm it, and verify it is fixed when we do the sampling pattern work.

Switching to the CPU lightmapper should free all memory and tear down the OpenCL context, i.e. the next GPU lightmapper bake should start from scratch. So this seems like a bug. Can you provide a repro (ideally in a bug report)? :)
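If restarting the editor is too disruptive, that backend toggle can be scripted; a sketch that assumes switching backends tears down the OpenCL context as described above:

    using UnityEditor;

    public static class GpuLightmapperReset
    {
        [MenuItem("Tools/Reset GPU Lightmapper Context")]
        public static void ResetContext()
        {
            // Flip to CPU and back to GPU to force the OpenCL context to be recreated.
            LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveCPU;
            LightmapEditorSettings.lightmapper = LightmapEditorSettings.Lightmapper.ProgressiveGPU;
        }
    }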

I thought this might clear the OpenCL context as you say, and I tried it, but it did not help. I have actually had this issue for some time now on different Unity versions; it seems like it has always been there.
I will try to repro the issue and may file a report!


Very promising, but please, please, please tell us there is a plan to fix the lightmapper’s UV packing algorithm; it has been extremely inefficient since the Unity 4 days.

Right now I have a single mesh in a scene using Unity’s default auto-generated UV settings, and the lightmapper has created a 4K texture but has barely populated 1K of it with UV islands.

See this thread for a long-running discussion about the issue.
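In the meantime, a partial workaround is generating the lightmap UVs from script with a tighter pack margin rather than relying on the importer defaults; a hedged sketch using the UnityEditor.Unwrapping API (the margin value is just a starting point, not a recommendation):

    using UnityEditor;
    using UnityEngine;

    public static class LightmapUvTool
    {
        [MenuItem("Tools/Regenerate Lightmap UVs (Tighter Packing)")]
        public static void Regenerate()
        {
            GameObject go = Selection.activeGameObject;
            MeshFilter mf = go != null ? go.GetComponent<MeshFilter>() : null;
            if (mf == null) return;

            UnwrapParam param;
            UnwrapParam.SetDefaults(out param);
            param.packMargin = 0.001f; // smaller margin packs islands closer together

            // Writes the generated UVs into the mesh's uv2 (lightmap) channel.
            Unwrapping.GenerateSecondaryUVSet(mf.sharedMesh, param);
        }
    }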
