Official Progressive GPU Lightmapper preview

Discussion in 'Global Illumination' started by KEngelstoft, Sep 26, 2018.

  1. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    GPU lightmapper is a preview feature. Preview means that you should not rely on it for full scale production. No new features will be backported to 2018.4, 2019.x or any of the following releases. We recommend using 2023.2 or later because the GPU lightmapper has reached feature parity with the CPU version at this point.
    The goal of the GPU lightmapper is to provide the same feature set as the CPU progressive lightmapper, with higher performance.

    We would like you to take it for a spin in the latest alpha/beta version and let us know what you think. Please use the bug reporter to report issues instead of posting the issues in this thread. This way we have all the information we need like editor logs, the scene used and system configuration.

    Missing features in 2018.3. Most features will be added in the 2019.x and 2020.x release cycles:
    • Double-sided GI support. Geometry will always appear single sided from the GPU lightmapper’s point of view. Added in 2019.1.
    • Cast/receive shadows support. Geometry will always cast and receive shadows when using the GPU lightmapper. Added in 2019.1.
    • Baked LOD support. Added in 2020.1.0a20.
    • A-Trous filtering. The GPU lightmapper will use Gaussian filtering instead. Added in 2020.1a15.
    • Experimental custom bake API. Added in 2020.1a6
    • Submesh support, material properties of the first submesh will be used. Added in 2019.3.
    • Reduced memory usage when baking.
    Features added in 2019.1 (will not be backported)
    • Double-sided GI support.
    • Cast/receive shadows support.
    • macOS and Linux support.
    Features added in 2019.2 (will not be backported)
    • Multiple importance sampling for environment lighting.
    • Optix and OpenImage denoiser support.
    • Increased sampling performance when using view prioritization or low occupancy maps:
      • Direct light (2019.2.0a9).
      • Indirect and environment (2019.2.0a11).
    Features added in 2019.3 (will not be backported)
    • Submesh support (2019.3.0a3)
    • Match CPU lightmapper sampling algorithm (2019.3.0a8)
    • AMD Radeon Pro Image Filters AI denoiser added. Currently Windows and AMD hardware only (2019.3.0a10).
    • Added support for baking box and pyramid shapes for SRP spotlights (2019.3.0a10).
    Features added in 2020.1 (will not be backported)
    • GPU backend can now export AOVs to train ML code for de-noising lightmaps. Only available in developer mode (2020.1.0a1).
    • Compressed transparency textures; 75% memory reduction by using rgba32 instead of floats (2020.1.0a2).
    • GPU lightmapper can now write out the filtered AO texture to disk, alongside the Lighting Data Asset. Only available in On Demand mode. Only available through experimental API (2020.1.0a3).
    • Support for the Experimental custom bake API for GPU lightmapper (2020.1a6).
    • Accurate OpenCL memory status for AMD and Nvidia GPUs (2020.1a9).
    • Reduced GPU memory usage when baking lighting by using stackless BVH traversal (2020.1a9).
    • Show user friendly name in the Lighting window for AMD GPUs on Windows and Linux instead of GPU code name (2020.1a9).
    • Compute device can be selected in a dropdown in the Lighting window (2020.1.0a15).
    • Limit memory allocations for light probes to fit in available memory when baking with progressive lightmappers (2020.1.0a15).
    • A-Trous filtering (2020.1a15).
    • Baked LOD support (2020.1.0a20).
    • Baked light cookie support (2020.1.0a22).
    Features added in 2020.2
    • Brought back stack-based BVH traversal, this time with Baked LOD support (2020.2.0a1).
    • Reduced memory usage when baking large lightmaps on GPU by disabling progressive updates and using tiling on the ray space buffers (2020.2.0a11).
    Features added in 2021.2
    • Memory and performance improvements when baking Light Probes (2021.2.a17).
    • Lightmap space tiling to reduce memory usage (2021.2.0a19).
    Supported hardware
    The GPU lightmapper needs a system with:
    • At least one GPU with OpenCL 1.2 support and at least 2GB of dedicated memory.
    • A CPU that supports SSE4.1 instructions
    • Recommended AMD graphics driver: 18.9.3.
    • Recommended Nvidia graphics driver: 416.34.
    Platforms
    • Windows only for the 2018.3 preview.
    • macOS and Linux support was added in 2019.1
    How to select a specific GPU for baking
    If the computer contains more than one graphics card, the lightmapper will attempt to automatically use the card not used for the Unity Editor’s main graphics device. The name of the card used for baking is displayed next to the bake performance in the Lighting window. The list of available OpenCL devices will be printed in the Editor log and looks like this:

    -- Listing OpenCL platforms(s) --
    * OpenCL platform 0
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 2.1 AMD-APP (2580.6)
    NAME = AMD Accelerated Parallel Processing
    VENDOR = Advanced Micro Devices, Inc.
    * OpenCL platform 1
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 1.2 CUDA 9.2.127
    NAME = NVIDIA CUDA
    VENDOR = NVIDIA Corporation
    -- Listing OpenCL device(s) --
    * OpenCL platform 0, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = RX580
    DEVICE_VENDOR = Advanced Micro Devices, Inc.
    ...
    * OpenCL platform 0, device 1
    DEVICE_TYPE = 2
    DEVICE_NAME = Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
    DEVICE_VENDOR = GenuineIntel
    ...
    * OpenCL platform 1, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = GeForce GTX 660 Ti
    DEVICE_VENDOR = NVIDIA Corporation
    ...

    You can instruct the GPU lightmapper to use a specific OpenCL device using this command line option: -OpenCL-PlatformAndDeviceIndices <platform> <device index>
    For example, to select the GeForce GTX 660 Ti from the log above, the Windows command line to provide looks like this:

    Code (csharp):
    C:\Program Files\Unity 2019.1.0a3\Editor>Unity.exe -OpenCL-PlatformAndDeviceIndices 1 0
    The card used for Unity’s main graphics device that renders the Editor viewport can be selected using the -gpu <index> command line argument for the Unity.exe process.

    If an OpenCL device is ignored for lightmapping, for instance because it has too little memory, it will not count when specifying device index on the command line, so you have to subtract the number of ignored devices from the index yourself.

    Things to keep in mind
    • 2019.2 and older releases produce sampling and noise patterns slightly different from the CPU lightmapper's, because the sampling algorithms differ. 2019.3 and newer use the same sampling algorithm as the CPU lightmapper.
    • If the baking process needs more than the available GPU memory, baking can fall back to the CPU lightmapper. Some drivers with virtual memory support will start swapping to CPU memory instead, making the bake much slower.
    • GPU memory usage is very high in the preview version, but we are optimizing this. In 2018.3 you need more than 12GB of GPU memory if you want to bake a 4K lightmap.
    • The Lightmapper field must be set to Progressive GPU (Preview). Please refer to the image below to see how to enable the GPU lightmapper.
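    To get a feel for why a 4K bake is so memory-hungry, here is a back-of-the-envelope sketch. The buffer layout and per-texel sizes are illustrative assumptions, not the lightmapper's actual internals:

    ```python
    # Back-of-the-envelope GPU memory arithmetic for lightmap baking.
    # The buffer layout and per-texel sizes below are assumptions for
    # illustration, not the lightmapper's actual internals.

    TEXELS_4K = 4096 * 4096          # a 4K lightmap: ~16.8 million texels
    BYTES_RGBA_FLOAT = 4 * 4         # 4 channels x 32-bit float = 16 bytes/texel
    BYTES_RGBA32 = 4                 # rgba32: 4 x 8-bit = 4 bytes/texel

    def buffer_mb(texels: int, bytes_per_texel: int) -> float:
        """Size of one full-lightmap buffer in MiB."""
        return texels * bytes_per_texel / (1024 ** 2)

    print(buffer_mb(TEXELS_4K, BYTES_RGBA_FLOAT))  # 256.0 MiB per RGBA-float buffer

    # Direct, indirect, AO, validity and ray-space scratch buffers all scale the
    # same way, so several such buffers plus BVH and texture data add up quickly,
    # which is why 2018.3 needed more than 12GB for a 4K bake.

    # The "75% memory reduction" for transparency textures mentioned above:
    savings = 1 - BYTES_RGBA32 / BYTES_RGBA_FLOAT
    print(savings)  # 0.75
    ```
    
    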


    Linux driver setup
    For Intel GPUs, install the following package:
    Code (CSharp):
    sudo apt install clinfo ocl-icd-opencl-dev opencl-headers
    Then install the Intel OpenCL driver from https://software.intel.com/en-us/articles/opencl-drivers#latest_linux_driver
    Do NOT install `mesa-opencl-icd`; even though Mesa is normally used as the Intel GPU driver, its OpenCL driver doesn't work.

    When is it ready?
    We removed the preview label in 2023.2.0a6.
     
    Last edited: Mar 16, 2023
    M_MG_S, Avatar-Vick, Pecek and 16 others like this.
  2. Thomas-Pasieka

    Thomas-Pasieka

    Joined:
    Sep 19, 2005
    Posts:
    2,174
    Have been waiting for that GPU baking feature since... forever :)
     
  3. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,527
    Thanks! Looking forward to the Mac version. Any plans for including this in the 2018 cycle, or is this 2019?
     
  4. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    Awesome. Btw any plan to support skinned mesh renderer baking? Currently I just want to be able to bake static skinned mesh renderer.
     
  5. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    It is too early to say, it isn't stable enough yet.
     
  6. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    This is unrelated to baking lighting on the GPU, but rest assured that the task is in our backlog.
     
    Last edited: Sep 27, 2018
    optimise likes this.
  7. Thomas-Pasieka

    Thomas-Pasieka

    Joined:
    Sep 19, 2005
    Posts:
    2,174
    Well, I tested the new GPU lightmapper for the past 30 minutes, but every time it automatically switches to CPU baking after the bake starts. Also getting an error stating: Assertion failed on expression: 'IsCLEventCompleted(events->m_StartMarker, isStartEventAnError)'

    I made a bug report with scene file etc.
     
    AlexTuduran likes this.
  8. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Thanks, we'll look into that. Falling back to the CPU lightmapper happens if something goes wrong during baking. What is the case number?
     
  9. Thomas-Pasieka

    Thomas-Pasieka

    Joined:
    Sep 19, 2005
    Posts:
    2,174
    Case 1085280
     
  10. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Assuming you mean the CPU fallback from GPU baking:

    Why have CPU fallback then? Why not deny CPU fallback, pause baking, save, flush, resume. It's called GPU baking for a reason. We would expect consistent bakes. If we want CPU baking, should we not ... pick CPU?

    Fallback only works when the end result is the same for generating assets.
    Fallback should only produce a different result in realtime contexts.
     
    Ruslank100, jjejj87, d1favero and 2 others like this.
  11. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Also while Directional might be a while off I hope it will be improved from the existing washed out Unity effect: https://twitter.com/guycalledfrank/status/1043441539404509184
    Here we see Bakery follows ground truth really closely @guycalledfrank

    Not sure why these decisions were made, but if Unity will be improving baking it's probably best to always head for ground truth. If the visual can be achieved then dialled back, it's better than never being able to achieve it - unless it was a performance issue.
     
    Adam-Bailey, Mauri and Thomas-Pasieka like this.
  12. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Oh dang... the GPU lightmapper is very fast; it feels like baking the lightmap with the RadeonProRender GPU mode.
    Sometimes I'm also getting the "IsCLEventCompleted" error though.
    It seems kinda random. At higher resolutions it bakes just fine, but when I lower the resolution I suddenly get that error and it falls back to CPU.
     
  13. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    So Bakery works with 2gb cards and does not need to fall back to CPU, it just works. I think this would be better than fast then slow with different results from Unity's current implementation...
     
    NeatWolf and Lars-Steenhoff like this.
  14. V_Kalinichenko

    V_Kalinichenko

    Unity Technologies

    Joined:
    Apr 15, 2015
    Posts:
    16
    Hi,
    We are aware of this issue: https://issuetracker.unity3d.com/is...ot-available-when-baking-with-gpu-lightmapper
    We'll do our best to fix it as soon as possible.
    Thanks for your contribution!
     
    fguinier likes this.
  15. thelebaron

    thelebaron

    Joined:
    Jun 2, 2013
    Posts:
    857
    hmm, the integrated Intel UHD 630 of my 8700K is enabled alongside my 1070, and Unity seems to be preferring it for GPU baking according to the task manager? Surprised it baked at all, but is there any way to switch which GPU this uses?

    edit: so I guess disabling it in the Device Manager sort of resolved my question. I think a more official way of selecting a device would be nicer. I'm also curious whether multi-GPU support is on the slate?
     
    Last edited: Sep 27, 2018
  16. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    Quick question: for a 1070GTX card, what kind of GPU usage should we be expecting?
     
  17. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    You can select the device that will be used for baking using -OpenCL-PlatformAndDeviceIndices <platformIdx> <deviceIdx>. The list of devices is printed in the Editor log.

    Check the 'How to select a specific GPU for baking' section in the initial post. However, as you say, it would be preferable if the GTX 1070 were selected by default in that case. Would you mind opening a bug in that regard? :)
     
    Last edited: Sep 28, 2018
  18. drzepsuj

    drzepsuj

    Joined:
    Aug 10, 2013
    Posts:
    9
    Is there any chance for multiple GPU support for baking ? I currently have 4 x 1080Ti in my machine (I use this for various other CUDA powered tasks ;) ) It would certainly improve performance ;)
     
  19. Justin_Wingfall

    Justin_Wingfall

    Joined:
    May 15, 2014
    Posts:
    125
    I have a 1080ti and an i5 Intel CPU. So this means I have to go out and buy an AMD CPU, or an i7/i9 CPU... Really?????? Got my hopes up about the "GPU" lightmapper, yet it boils down to CPU usage as well?? Very disappointing. The GPU lightmapper seems to be for people with high-end CPUs.
     
    Last edited: Sep 27, 2018
  20. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,790
    I actually think CPU fallback is a smart idea. VRAM is limited, there will be places where you may be able to push past the limits, so the fallback opens up new possibilities.

    It does seem like it happens waaay too easily currently though. I do expect GPU VRAM usage to be optimised so that it happens very rarely, so everybody, keep calm.
     
    Ziplock9000, KEngelstoft and fguinier like this.
  21. Wawruch2

    Wawruch2

    Joined:
    Oct 6, 2016
    Posts:
    68
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_MEM_OBJECT_ALLOCATION_FAILURE -> This happens in the beginning. GTX 1070; does that mean the baker wants more than 8GB of memory?

    Edit: I decreased the resolution dramatically and it seems to work fine; unfortunately the quality is not satisfying. I'll still make a couple more tries.
     
    Last edited: Sep 28, 2018
  22. Chaiker

    Chaiker

    Joined:
    Apr 14, 2014
    Posts:
    63
    If I have 2 GPUs, can I use the first for the Unity Editor and the second for lighting baking? Without using the CPU's integrated graphics?
     
  23. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hi, I have another question!

    I am using a 750Ti with 2GB RAM, which is apparently not enough for most cases.
    But I can bake Sponza with lightmap scales around 3-4 in a minute or two, which is awesome!

    However, I can see that Windows 10 allocates some GBs of system memory to help the GPU when playing some games!

    So, is there any way of using shared system memory for swapping data to keep the baking process going!?

    Windows 10 x64 - GTX 750Ti 2GB memory, with/without using some shared system memory!
    Total of 10GB of GPU memory - dedicated plus shared...

    upload_2018-9-28_8-34-23.png

    Some shared memory is used when needed (was running a couple of games at the same time)

    upload_2018-9-28_8-46-42.png

    P.S. This might be useful even on 4, 6 or 8 GB cards for bigger scenes (if possible at all)!
     
  24. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    When you have two GPUs, the one NOT used as the primary device (i.e. Unity Editor rendering) will be used for baking automatically. Please take a look at the 'How to select a specific GPU for baking' section of the initial post above.
     
    Last edited: Sep 28, 2018
    Chaiker likes this.
  25. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    It really depends on the scene. If the occupancy of the lightmap is good enough and the lightmaps themselves are large enough, you should be closing in on 100%.

    As an example:
    If the lightmap resolution is 2k and occupancy is around 70%, we load the GPU with jobs of (2048*2048*0.7 =) 2.9 million texels, which should give a good GPU load.
    On the other hand, if the lightmap resolution is 512 and occupancy is 40%, we process about 100k texels at a time, which will probably not be enough for a good GPU load.
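    The arithmetic above can be sketched as a quick back-of-the-envelope calculation (illustrative only; how the lightmapper actually batches texels may differ):

    ```python
    # Rough estimate of how many occupied texels the lightmapper can submit per
    # GPU job: resolution^2 scaled by lightmap occupancy. The numbers mirror the
    # examples in the post; real batching behaviour may differ.

    def texels_per_job(resolution: int, occupancy: float) -> int:
        """Occupied texels in a square lightmap of the given resolution."""
        return int(resolution * resolution * occupancy)

    print(texels_per_job(2048, 0.7))  # ~2.9 million texels -> good GPU load
    print(texels_per_job(512, 0.4))   # ~105k texels -> likely underutilizes the GPU
    ```
    
    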
     
    optimise likes this.
  26. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    It is in the plan. :)
     
    drzepsuj and optimise like this.
  27. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Hi! Can you please elaborate a bit, or perhaps make a bug report if you think there is something completely wrong with directionality in your scene? The GPU and CPU lightmappers use the same definition of directionality, so they should look the same. Unity's directional lightmaps aren't based on SH, so they will look different from realtime shading + SH for indirect. So this is not an apples-to-apples comparison.
     
    hippocoder likes this.
  28. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
  29. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    I have to disagree; falling back to CPU is just strange, as there is already a CPU bake option. The default action should be to wait, flush out the VRAM (cache it to the hard drive), and go on. It should put up a warning message every time it does so. This lets users with less VRAM choose between GPU and CPU depending on their hardware. If falling back to CPU happens, which will happen on a regular basis for people with low VRAM, the GPU lightmapper is sort of pointless for them.

    As for me, I usually start baking my lightmaps when I am done for the day and let them bake overnight. I don't want to come back in the morning and find out it has been baking with the CPU and still has 2 days to go. If that were what I intended, I would have been away for 2 days.
     
  30. Rich_A

    Rich_A

    Joined:
    Nov 22, 2016
    Posts:
    338
    Anyone have any initial comparison of bake time on CPU vs GPU?
     
  31. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    I haven't measured, but when I get around 90 mrays with the GPU, it's usually under 60 seconds in a moderate scene.
     
  32. Stardog

    Stardog

    Joined:
    Jun 28, 2010
    Posts:
    1,913
    capture.PNG
    5x5 plane, default lighting settings apart from 50 Lightmap Res, 1024 Lightmap Size, AO 1 Indirect 0.5 Direct, Non-Directional.

    32 seconds GPU, 99-123 mrays/sec.
    2m 25s CPU (old i5 2500k 3.3ghz), 4.73-5 mrays/sec.

    EDIT: It seems like losing focus on the editor slows it down a lot. I got these times with the Windows clock open, so I'll try again... The GPU is more like 11 seconds when focused.

    These include the Preparing Bake and reflection probes part. Basically, while the blue bar is showing.

    capture.PNG
    45 seconds GPU
    6 minutes CPU

    Both editor focused. 50 res, 1024 size, 4 bounces, non-dir, AO 1 and 0.5, prioritize view, the rest default.
     
    Last edited: Sep 28, 2018
  33. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    (Case 1085701) Sadly, Progressive GPU is much slower than Progressive CPU for me and produces incorrect lighting results. Furthermore, Progressive GPU makes my PC so laggy while baking that I'm not able to switch to the browser to view a website, but Progressive CPU doesn't have this issue.
     
    MikeGardiner and JamesArndt like this.
  34. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  35. Codev

    Codev

    Joined:
    Oct 8, 2015
    Posts:
    5
    I am also using the GPU lightmapper in 2018.3b3; my only problem is that with my video card the maximum peak was 60 mrays.
    Usually it is around 15-30 mrays. What can it be?
    https://i.imgur.com/nSJB41V.png
     
  36. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    The GPU lightmapper is based on AMD RadeonRays.
    It might be related to low occupancy in the lightmap. Could you take a look at GPU occupancy using https://www.techpowerup.com/gpuz/ ?

    Regarding mrays, please note that we are actually counting mega `samples`, i.e. the shading of the texel is taken into account. Thus it will be lower than what you see advertised as raw intersection performance in AMD RadeonRays or DXR.
     
    Last edited: Oct 2, 2018
  37. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    232
    GPU Progressive Lightmapper is what we presented at GDC and are talking about here. It is currently using RadeonRays GPU compute based ray tracing. Further down the line we will support hardware ray tracing APIs. We are currently working on the low level parts of that.
     
  38. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    232
    Can you share a screenshot of the incorrect lighting results in the thread? Usually just from that we can make an educated guess about what to do about it.
     
  39. Murray_Zutari

    Murray_Zutari

    Joined:
    Jun 1, 2017
    Posts:
    45
    The issue linked here seems different from the "IsCLEventCompleted" issue. If it is the same, it's not fixed as stated. I'm getting the following error: "Assertion failed on expression: 'IsCLEventCompleted(data.startEvent, isStartEventAnError)'" followed by "Assertion failed on expression: 'IsCLEventCompleted(events->m_StartMarker, isStartEventAnError)'"

    I get the error at lightmap resolutions above 12. At a resolution of 12 I can set all the other settings as I please.
     
    V_Kalinichenko likes this.
  40. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    Progressive GPU
    upload_2018-10-2_17-47-34.png

    Progressive CPU
    upload_2018-10-2_17-45-13.png
     
  41. ArnoBax

    ArnoBax

    Joined:
    Oct 1, 2018
    Posts:
    2
    Bug found in the GPU lightmapper.
    Light is not transmitted through an object that has multiple materials including a transparent glass material.
    For example, take a car with car windows. If the glass is attached to the car door and the door has multiple materials, incoming baked lighting gets blocked.
    If I detach the glass windows from the doors and make them a stand-alone object, the baked light gets through.

    On CPU it works both ways: the light is always transmitted through, even with my custom lightmap parameters.

    Greetings, Arno.
    PS. If this is not the right place to post this bug, let me know where I should.

    But still the GPU lightmapper is an incredible feature and a good step forward!
     
  42. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hi, read the first post of the thread!
    Looks related to this!?

     
    ArnoBax likes this.
  43. ArnoBax

    ArnoBax

    Joined:
    Oct 1, 2018
    Posts:
    2
    Thank you for your reply, sir! I indeed forgot about that point. You are right!
     
  44. Codev

    Codev

    Joined:
    Oct 8, 2015
    Posts:
    5
    I updated the video card drivers, and now when the bake starts it automatically switches me from GPU to CPU :(
     
  45. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    For me it seems to depend on the Quality settings: on low settings everything zips through the GPU, while on higher settings it drops to a crawl and looks like it's back on the CPU.
     
  46. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    Can you provide your video card driver id before and after please?
     
  47. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Further testing, and it looks like the clustering process is the CPU hog, and that can be toggled by the Realtime Global Illumination setting.

    Can clustering be pushed onto the GPU?
     
  48. Crystalline

    Crystalline

    Joined:
    Sep 11, 2013
    Posts:
    171
    Well, when baking with the GPU baker I get only about 9% GPU usage and A LOT more on the CPU. It's like it's not using the GPU at its full potential. Strange.
     
  49. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    This is unexpected; are you positive it bakes on the GPU? At the bottom of the Lighting settings window it should show the name of the GPU currently used when doing so.
     
    Last edited: Oct 3, 2018
  50. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    232
    The clustering phase is part of the realtime GI system, this is not using the GPU. This phase runs if you have lights that are realtime with bounce or a realtime environment. Only baked GI is supported by the GPU.

    We have no current plans of moving the current realtime GI system to the GPU.

    Cheers,
    Jesper