GPU lightmapper is a preview feature, available in 2018.3, 2019.1 and 2019.2; get it here: https://unity3d.com/beta/2019.2. Preview means that you should not rely on it for full-scale production. No new features will be backported to 2018.3. The goal of the GPU lightmapper is to provide the same feature set as the CPU progressive lightmapper, with higher performance. We would like you to take it for a spin and let us know what you think.

Please use the bug reporter to report issues instead of posting them in this thread. This way we have all the information we need, such as editor logs, the scene used, and the system configuration.

Missing features (not implemented in 2018.3)
Features will be added in the 2019.x release cycle.
- No double-sided GI support. Geometry will always appear single-sided from the GPU lightmapper's point of view. Added in 2019.1.
- No cast/receive shadows support. Geometry will always cast and receive shadows when using the GPU lightmapper. Added in 2019.1.
- No baked LOD support.
- No A-Trous filtering. The GPU lightmapper will use Gaussian filtering instead.
- No experimental custom bake API in the GPU lightmapper (https://docs.unity3d.com/ScriptReference/Experimental.Lightmapping.html).
- No submesh support; the material properties of the first submesh will be used.

Bugs that will likely be fixed during the 2018.3 preview
- Crashes

Features added in 2019.1 (will not be backported)
- Double-sided GI support.
- Cast/receive shadows support.
- macOS and Linux support.

Features added in 2019.2 (will not be backported)
- Multiple importance sampling for environment lighting.
- OptiX AI Denoiser support.
- Increased performance for direct light sampling when using view prioritization (2019.2.0a9).

Supported hardware
The GPU lightmapper needs a system with:
- At least one GPU with OpenCL 1.2 support and at least 2GB of dedicated memory.
- A CPU that supports SSE4.1 instructions.
Recommended AMD graphics driver: 18.9.3. Recommended Nvidia graphics driver: 416.34.

Platforms
Windows only for the 2018.3 preview. macOS and Linux support was added in 2019.1.0a11.

How to select a specific GPU for baking
If the computer contains more than one graphics card, the lightmapper will attempt to automatically use the card not used for the Unity Editor's main graphics device. The name of the card used for baking is displayed next to the bake performance in the Lighting window. The list of available OpenCL devices is printed in the Editor log and looks like this (a small script for inspecting the Editor's own graphics device follows the listing):

-- Listing OpenCL platforms(s) --
* OpenCL platform 0
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 2.1 AMD-APP (2580.6)
    NAME = AMD Accelerated Parallel Processing
    VENDOR = Advanced Micro Devices, Inc.
* OpenCL platform 1
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 1.2 CUDA 9.2.127
    NAME = NVIDIA CUDA
    VENDOR = NVIDIA Corporation
-- Listing OpenCL device(s) --
* OpenCL platform 0, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = RX580
    DEVICE_VENDOR = Advanced Micro Devices, Inc.
    ...
* OpenCL platform 0, device 1
    DEVICE_TYPE = 2
    DEVICE_NAME = Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
    DEVICE_VENDOR = GenuineIntel
    ...
* OpenCL platform 1, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = GeForce GTX 660 Ti
    DEVICE_VENDOR = NVIDIA Corporation
    ...
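As an aside, if you are unsure which card the Editor's main graphics device is running on (and therefore which card the lightmapper will try to avoid), you can log it from an editor script. This is a minimal sketch using the standard SystemInfo API; the menu path is an arbitrary example, and it reports only the Editor's rendering device, not the OpenCL device the lightmapper picked.
Code (CSharp):
using UnityEditor;
using UnityEngine;

public static class GraphicsDeviceInfo
{
    // Hypothetical menu path; place this script in an Editor folder.
    [MenuItem("Tools/Log Main Graphics Device")]
    static void LogMainGraphicsDevice()
    {
        // Reports the device the Editor itself renders with.
        Debug.Log("Name:   " + SystemInfo.graphicsDeviceName);
        Debug.Log("Vendor: " + SystemInfo.graphicsDeviceVendor);
        Debug.Log("Type:   " + SystemInfo.graphicsDeviceType);
        Debug.Log("VRAM:   " + SystemInfo.graphicsMemorySize + " MB");
    }
}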
You can instruct the GPU lightmapper to use a specific OpenCL device with this command-line option: -OpenCL-PlatformAndDeviceIndices <platform index> <device index>. For example, to select the GeForce GTX 660 Ti from the listing above, the Windows command line looks like this:
Code (csharp):
C:\Program Files\Unity 2019.1.0a3\Editor>Unity.exe -OpenCL-PlatformAndDeviceIndices 1 0
The card used for Unity's main graphics device, which renders the Editor viewport, can be selected with the -gpu <index> command-line argument for the Unity.exe process.

Things to keep in mind
- Sampling and noise patterns will look slightly different from what the CPU lightmapper produces, because a different sampling algorithm is used.
- If the baking process needs more than the available GPU memory, baking can fall back to the CPU lightmapper. Some drivers with virtual memory support will instead start swapping to CPU memory, making the bake much slower.
- GPU memory usage is very high in the preview version, but we are optimizing this. In 2018.3 you need more than 12GB of GPU memory to bake a 4K lightmap.
- The Lightmapper field must be set to Progressive GPU (Preview). Please refer to the image below for how to enable the GPU lightmapper; this can also be done from an editor script (see the sketch at the end of this post).

Known issues:
- Memory leak when baking the Lightmapper Trees project on GeForce 640M - https://fogbugz.unity3d.com/f/cases/1078614/

Linux driver setup
For Intel GPUs, install the following packages:
Code (CSharp):
sudo apt install clinfo ocl-icd-opencl-dev opencl-headers
Then install the latest Linux OpenCL driver from https://software.intel.com/en-us/articles/opencl-drivers#latest_linux_driver. Do NOT install `mesa-opencl-icd`, even though Mesa is normally used as the Intel GPU driver, as that OpenCL driver does not work.
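For completeness, here is a minimal editor sketch for switching to the GPU backend and kicking off a bake from script, using the standard LightmapEditorSettings and Lightmapping editor APIs (the menu path is an arbitrary example):
Code (CSharp):
using UnityEditor;
using UnityEngine;

public static class GpuBakeHelper
{
    // Hypothetical menu path; place this script in an Editor folder.
    [MenuItem("Tools/Bake With GPU Lightmapper")]
    static void BakeWithGpuLightmapper()
    {
        // Equivalent to setting the Lightmapper field to
        // "Progressive GPU (Preview)" in the Lighting window.
        LightmapEditorSettings.lightmapper =
            LightmapEditorSettings.Lightmapper.ProgressiveGPU;

        // Start an asynchronous bake if one is not already running.
        if (!Lightmapping.isRunning && Lightmapping.BakeAsync())
            Debug.Log("GPU lightmapper bake started.");
        else
            Debug.LogWarning("Could not start the bake.");
    }
}
Note that the OpenCL device used for the bake is still chosen as described above, either automatically or via -OpenCL-PlatformAndDeviceIndices.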