
Official Progressive GPU Lightmapper preview

Discussion in 'Global Illumination' started by KEngelstoft, Sep 26, 2018.

  1. toto2003

    toto2003

    Joined:
    Sep 22, 2010
    Posts:
    528
Can you explain the blend probes? Do you remove all the little objects from baking?
     
  2. Omzy

    Omzy

    Joined:
    Jun 28, 2013
    Posts:
    31
  3. Stardog

    Stardog

    Joined:
    Jun 28, 2010
    Posts:
    1,910
    Easy rule for baking - avoid baking small things and organic shapes.
     
  4. yian-dev

    yian-dev

    Joined:
    Jun 26, 2017
    Posts:
    20
I came here to confirm that deleting opencl.dll from the Unity editor directory solves the error "No suitable OpenCL device found, falling back to CPU".
I'm getting 112 Mrays/sec with an RX 580 (Adrenalin 2019.2.2) + Unity 2018.3.3f1. It probably depends on what you have in the scene, but with older drivers on 2018.3.2 I never saw more than ~70 Mrays/sec. I'm not sure what changed, or whether getting rid of opencl.dll makes Unity use some other DLL that has better performance; I have no clue. UT, you should investigate these issues before the GPU lightmapper is released out of preview.
     
  5. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521



It's interesting to see how raytracing is getting integrated into Maya.

Looking forward to seeing what it will bring to Unity.
     
    Last edited: Feb 5, 2019
  6. Sjarp

    Sjarp

    Joined:
    Sep 10, 2017
    Posts:
    4
    Hey, I just tried out the Progressive GPU Lightmapper, but have run into a problem.
I'm using a GTX 1070 with 8 GB of VRAM, yet Unity complains that it's only getting 2 GB, which is not enough (2.04 GB is needed).

    The exact error is: "OpenCL Error. Falling back to CPU lightmapper. Error callback from context: Max allocation size supported by this device is 2.00 GB. 2.04 GB requested".

    Isn't there a way to allow Unity to allocate more than just 2GB?
     
  7. screenname_taken

    screenname_taken

    Joined:
    Apr 8, 2013
    Posts:
    663
    Have you tried going into Unity's folder and deleting the OpenCL.dll file?
(Or simply change the file's extension, so that you can change it back to .dll if you need to.)
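A minimal sketch of that workaround (the paths here are stand-ins for illustration only; the real file lives in your Unity editor install directory, and note that later posts in this thread warn against removing it on newer Unity versions, so rename rather than delete):

```shell
# Hypothetical example: simulate the rename-instead-of-delete workaround.
# The real file is something like C:\Program Files\Unity\Editor\OpenCL.dll;
# the directory below is a stand-in so the sketch is self-contained.
UNITY_EDITOR_DIR="./UnityEditorExample"
mkdir -p "$UNITY_EDITOR_DIR"
touch "$UNITY_EDITOR_DIR/OpenCL.dll"   # stand-in for the real DLL

# Rename, don't delete, so the change is reversible:
mv "$UNITY_EDITOR_DIR/OpenCL.dll" "$UNITY_EDITOR_DIR/OpenCL.dll.bak"
ls "$UNITY_EDITOR_DIR"
```

To undo it, just rename the `.bak` file back to `OpenCL.dll`.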
     
  8. Sjarp

    Sjarp

    Joined:
    Sep 10, 2017
    Posts:
    4
    I have tried it, but unfortunately it doesn't work. After reading other posts I assume it's only a fix for Unity not identifying the GPU.

    edit: Something changed after randomly trying it again and again. It started using the GPU for a few seconds, but then I was hit with those two errors:

    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_MEM_OBJECT_ALLOCATION_FAILURE

    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_OUT_OF_RESOURCES

    About 61 of these, before Unity changed the Lightmapper back to Progressive CPU.

edit 2: It seems to be running kinda stable now, but there are definitely memory issues, since I get some of these errors now:
Clustering job failed for system: 0x88feae6e6e24abbf6171d8bea5e3d9cb, error: 4 - 'Out of memory loading input data.'.
Please close applications to free memory, optimize the scene, increase the size of the pagefile or use a system with more memory.
Total memory (physical and paged): 27240MB.

Also I'm at about 80% CPU, but the GPU is barely being used at 10-17%. There are still about 4 GB of RAM unallocated (other programs don't allocate it either), but Unity doesn't seem to want those and prefers paged disk space.

But the baking ETA also went down from 8 hours with CPU to 1.5 hours, so I suppose it kind of works.

edit 3: Weird things keep happening. I got the "Max allocation size supported is 2 GB" error again, but it's still continuing with GPU lightmapping, even though it said it would fall back to the CPU lightmapper. I'm not complaining, but OK.
     
    Last edited: Jan 26, 2019
  9. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
1. Did you update your drivers?
2. Do you have an integrated GPU?

Don't delete the DLL file in the latest 2018.3 and 2019.x!
     
  10. Sjarp

    Sjarp

    Joined:
    Sep 10, 2017
    Posts:
    4
1. I did upgrade my GPU drivers and even did a clean reinstall of the drivers.
2. Nope, I have a dedicated GPU.
3. Too late, I already deleted (or rather, renamed) the DLL, and after a few of the aforementioned start problems it kinda works now. That was in Unity 2018.3.3f1.
     
  11. valentinwinkelmann

    valentinwinkelmann

    Joined:
    Nov 3, 2014
    Posts:
    188
    I don't get it.

I tried to test the Progressive Lightmapper (GPU) on my desktop and on my laptop; it doesn't work on either of the two devices.
I already tried starting via "-OpenCL-PlatformAndDeviceIndices 1 0", but Unity always uses the Intel HD GPU.
My desktop has a GTX 750 Ti with 2 GB and my laptop a GTX 1050 Ti with 4 GB.
However, even in AppData/Local/Unity/Editor/Editor.log only the Intel HD is displayed; the graphics cards don't show up at all.

Both devices have the latest Nvidia drivers. I have the problem on different Unity versions (2018.3.0f2 | 2018.3.3f1 | 2019.1.0a10).

I've added the Editor.log below (from the laptop). How can I solve the problem?

    Edit:
    After renaming the OpenCL.dll to OpenCL.dll.bak my GPU appears in the Editor.log. I can now access it via -OpenCL-PlatformAndDeviceIndices 0 0. This seems to work.



    Code (CSharp):
-- Listing OpenCL platforms(s) --
* OpenCL platform 0
    PROFILE = FULL_PROFILE
    VERSION = OpenCL 2.1
    NAME = Intel(R) OpenCL
    VENDOR = Intel(R) Corporation
    EXTENSIONS = cl_intel_dx9_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp64 cl_khr_gl_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_spir
-- Listing OpenCL device(s) --
* OpenCL platform 0, device 0
    DEVICE_TYPE = 4
    DEVICE_NAME = Intel(R) HD Graphics 630
    DEVICE_VENDOR = Intel(R) Corporation
    DEVICE_VERSION = OpenCL 2.1
    DRIVER_VERSION = 22.20.16.4749
    DEVICE_MAX_COMPUTE_UNITS = 23
    DEVICE_MAX_CLOCK_FREQUENCY = 1000
    CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 2147483647
    CL_DEVICE_HOST_UNIFIED_MEMORY = true
    CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2147483647
    DEVICE_GLOBAL_MEM_SIZE = 3378762548
    DEVICE_EXTENSIONS = cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_d3d11_nv12_media_sharing cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_dx9_media_sharing cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_simultaneous_sharing cl_intel_subgroups cl_intel_subgroups_short cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp16 cl_khr_fp64 cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_gl_sharing cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups cl_khr_throttle_hints
* OpenCL platform 0, device 1
    DEVICE_TYPE = 2
    DEVICE_NAME = Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
    DEVICE_VENDOR = Intel(R) Corporation
    DEVICE_VERSION = OpenCL 2.1 (Build 10)
    DRIVER_VERSION = 7.2.0.10
    DEVICE_MAX_COMPUTE_UNITS = 4
    DEVICE_MAX_CLOCK_FREQUENCY = 2500
    CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 131072
    CL_DEVICE_HOST_UNIFIED_MEMORY = true
    CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2116969472
    DEVICE_GLOBAL_MEM_SIZE = 8467877888
    DEVICE_EXTENSIONS = cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_dx9_media_sharing cl_intel_dx9_media_sharing cl_khr_d3d11_sharing cl_khr_gl_sharing cl_khr_fp64 cl_khr_image2d_from_buffer
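For reference, the `-OpenCL-PlatformAndDeviceIndices` switch mentioned above takes the platform index followed by the device index as listed in Editor.log. A sketch of composing the launch command (the Unity.exe path and the indices are illustrative; use the ones from your own log):

```shell
# Pick the OpenCL platform/device pair listed in your Editor.log.
PLATFORM_INDEX=0
DEVICE_INDEX=0

# Print the launch command line; the actual executable path depends on your
# install, e.g. "C:\Program Files\Unity\Editor\Unity.exe" on Windows.
echo "Unity.exe -OpenCL-PlatformAndDeviceIndices $PLATFORM_INDEX $DEVICE_INDEX"
```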
     
    Last edited: Jan 28, 2019
  12. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
One thing I can think of: if the integrated GPU is set as first in the BIOS, it will always be used by the GPU lightmapper, as I think I saw somewhere that the first GPU is automatically picked. Regarding the laptop, go into the BIOS and try to find an option to set the discrete GPU as the first/main one (just to try it, though)!

EDIT: If this is the case, then I think the lightmapping team would have to somehow give you an option to choose a baking device!
     
  13. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
This is not GPU memory being incorrectly reported as 2 GB; it is because something in your scene requires an allocation larger than 2 GB, and the driver doesn't allow that for this particular Nvidia card (CL_DEVICE_MAX_MEM_ALLOC_SIZE is 2 GB in this case). Most Nvidia cards I have seen have a max allocation size of 25% of total GPU memory. AMD cards usually have a 50% max.
Either bake with a different card with more memory, or reduce the supersampling count or lightmap atlas size. Baking large terrains could also be the cause, so try reducing the heightmap resolution and see if that helps.
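As a rough rule of thumb based on those percentages (observed driver behaviour, not an official spec), you can estimate the largest single allocation a card will accept:

```shell
# Estimate the max single OpenCL allocation from total VRAM.
# Assumed ratios per the post above: ~25% on most Nvidia cards, ~50% on AMD.
vram_mb=8192                       # e.g. a GTX 1070 with 8 GB
nvidia_cap_mb=$(( vram_mb / 4 ))   # ~25% of VRAM
amd_cap_mb=$(( vram_mb / 2 ))      # ~50% of VRAM
echo "Nvidia: ~${nvidia_cap_mb} MB max single allocation"
echo "AMD:    ~${amd_cap_mb} MB max single allocation"
```

This matches the error reported earlier in the thread: an 8 GB GTX 1070 caps out at 2.00 GB per allocation, so a 2.04 GB request just exceeds it.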
     
    Sjarp likes this.
  14. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
If you get out-of-memory errors related to clustering, it means you are precomputing realtime GI. This is unrelated to the GPU lightmapper, but please consider whether you need realtime GI and baked GI enabled at the same time. Realtime GI precompute can be very CPU and memory intensive.
     
  15. Sjarp

    Sjarp

    Joined:
    Sep 10, 2017
    Posts:
    4
    Brilliant! Exactly the kind of information I needed. Thank you very much!
     
  16. Omzy

    Omzy

    Joined:
    Jun 28, 2013
    Posts:
    31
By the way, if anyone experiences the problem I had with strange texture artifacts in your build only, you might have this bug. Apparently your resources file is limited to 4 GB in size as a holdover from 32-bit Unity, which has to be worked around with asset bundles or multiple scenes. This applies mostly to people with large, complex scenes.

    https://forum.unity.com/threads/bug-4gb-limit-to-textures-in-standalone-build.441116/
     
  17. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
Just downloaded Unity 2019.1.0b1. I'm looking for the Optix denoiser option in the lightmap settings (Progressive CPU mode), but there are only None/A_Trous/Gaussian.
I have a GTX 1050 with driver 416.34.
How can I "enable" the new Optix denoising option?
     
    Shizola likes this.
  18. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    231
You should have new options for denoising:


    However there is currently a 4GB GPU VRAM minspec as the denoiser is really memory hungry. We are fixing this in 19.2 though. Regardless of the minspec you should see those options.
     
    Lars-Steenhoff likes this.
  19. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
[QUOTE="Jesper-Mortensen, post: 4167808, member: 224237"]
However there is currently a 4GB GPU VRAM minspec as the denoiser is really memory hungry. We are fixing this in 19.2 though. Regardless of the minspec you should see those options.[/QUOTE]

Thanks for your reply! Sadly I have only 2 GB of VRAM, so that's why I can't see the Optix option:
unity_missing_optix.jpg

Really waiting for the 19.2 fix!
     
  20. Shizola

    Shizola

    Joined:
    Jun 29, 2014
    Posts:
    470
    Optix denoiser is greyed out for me on Progressive GPU but available on CPU? Tooltip says "your hardware doesn't support denoising". I have a 1070 laptop, 8GB.
     
  21. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
    I think the GPU accelerated Optix denoising is currently available only when using progressive CPU lightmapper.
     
    Shizola likes this.
  22. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    231
Yes, what Total3D said. I have made it work in 19.2, though. Think of the denoising in 19.1 as a soft launch ;-)
     
    Shizola likes this.
  23. jacknero

    jacknero

    Joined:
    Aug 15, 2018
    Posts:
    60
Unity 2018.3.4 & 3.1
Hi there.
I finally managed to use the GPU lightmapper after the recent update.
But I do get some weird results like this. QQ截图20190205161634.png QQ截图20190205162245.png
I'm quite sure every setting is exactly the same as when using CPU progressive.
How does this happen, and can it be fixed?
Enabling the filter doesn't achieve an ideal result either.
QQ截图20190205162628.png
     
    Last edited: Feb 5, 2019
  24. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    Can you post a screenshot showcasing the difference between the CPU and GPU result? Or the scene if possible?
     
  25. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
Do you guys have this issue where, once the GPU lightmapper gives a warning like out of memory or out of resources, you have to restart the engine to get it working again? Otherwise it gets stuck on the preparing step.
     
  26. jacknero

    jacknero

    Joined:
    Aug 15, 2018
    Posts:
    60
    CPU.png
This is the CPU result with the same settings.
     
  27. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
Thanks! This seems to be a sampling pattern issue (work is planned on the GPU lightmapper in that regard). The best thing would be to open a bug with the scene attached, so we can confirm it and verify it is fixed when we do the sampling pattern work.
     
  28. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
Switching to the CPU lightmapper should clear all memory and clear the OpenCL context, i.e. the next GPU lightmapper bake should start from scratch. So this seems like a bug. Can you repro it (ideally in a bug report)? :)
     
  29. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
I thought this might clear the OpenCL context as you say and tried it, but it did not help. I've actually had this issue for some time now, on different Unity versions; it seems like it was always there!
However, I will try to repro the issue and may file a report!
     
    fguinier likes this.
  30. kwcae

    kwcae

    Joined:
    May 26, 2017
    Posts:
    34
Very promising, but please, please, please tell us there's a plan to fix the lightmapper's UV packing algorithm; it's been extremely inefficient since the Unity 4 days.

Right now I have a single mesh using Unity's default auto-generated UV settings in a scene, and the lightmapper has created a 4K texture but has barely populated a 1K area of it with UV islands.

See this thread for a long-running discussion about the issue:
    https://forum.unity.com/threads/packing-continues-to-be-horrible-please-fix.453014/#post-3890329
     
    Shizola and Lars-Steenhoff like this.
  31. Shizola

    Shizola

    Joined:
    Jun 29, 2014
    Posts:
    470
    Just spotted this in the 2019.2.0 Alpha notes, looking forward to trying it:
    • GI: Reduced GPU memory footprint for GPU lightmapper when baking lighting, by compressing normal vectors and albedo.
    • GI: The Optix AI denoiser is now supported with GPU Lightmapper.
    • GI: Upgraded Optix AI Denoiser to version 6. This new version has better performance and a lower memory footprint.
     
    fguinier likes this.
  32. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
Just tested 2019.2.0a4.
The GPU lightmapper does not render area lights for me.
Is there any new trick to enable them?
     

    Attached Files:

  33. Adam-Bailey

    Adam-Bailey

    Joined:
    Feb 17, 2015
    Posts:
    232
    I also couldn't seem to get a directional light to bake in 2019.2.0a4
     
  34. V_Kalinichenko

    V_Kalinichenko

    Unity Technologies

    Joined:
    Apr 15, 2015
    Posts:
    16
    fguinier and Adam-Bailey like this.
  35. cmorait

    cmorait

    Joined:
    Nov 22, 2017
    Posts:
    59
A quick question: will a Vega VII, RTX 2080, or RTX 2080 Ti be faster with the GPU lightmapper? Reviewers say the Vega VII is faster at OpenCL and comparable to the RTX 2080 Ti. As I can't get both cards and run several tests to compare them, can you at Unity do some testing?
     
  36. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
If you want Optix denoising, Nvidia is the only option.
If you need the memory, take the 16 GB AMD or the 24 GB RTX Titan.

A benchmark scene would be nice.
     
  37. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
+1 vote for a universal benchmark scene.

At the moment there's only one option for denoising: Optix with an Nvidia card. And it's a must-have thing; in my archviz scenes CPU+Optix lightmapping gives me a 5x - 7x speed advantage! It's huge!
I think the wisest thing is to wait a bit for the other denoising implementation. If I'm correct, we'll soon have it built in.
In case we get a denoising option compatible with every card, the Vega VII is the winner because of its larger memory.
     
  38. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hi guys,

I'm now using Unity 2018.3.4f1 and there is still this warning, like in the image:

    upload_2019-2-8_11-26-31.png

The FBX has the option for generating UVs toggled on, and:

1. I have Lightmap Resolution set to 40
2. Lightmap Size set to 1024
3. the High Resolution preset
4. Lightmap Padding at 4 texels

How am I supposed to know how to manage this? Is it an actual issue in this and similar cases?
Is it a good idea to increase the padding even more?


Please at least make the message more sensible, as it seems to be related to something other than just enabling the "generate UVs" option! For example, if we have to leave more space between chunks, change the message to say that, or print multiple possible solutions!

Below are the lightmap settings and the UV Overlap preview. Is it an issue or not?

    Thank you !

    upload_2019-2-8_11-26-4.png


    upload_2019-2-8_11-33-23.png
     
  39. ApexofReality

    ApexofReality

    Joined:
    Feb 14, 2016
    Posts:
    102
What is a good benchmark tool that would compare well with Unity? Nobody really uses Unity's GPU lightmapper for benchmarks yet, so what is a good alternative to look at? Luxmark?
     
    Total3D likes this.
  40. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
A benchmark for lightmapping across different Unity versions and GPUs is what's needed.

It does not need to compare to Luxmark or any other engine.
We just want to know which card to buy, when to use CPU or GPU lightmapping,
and when a lot of memory is needed.
     
  41. cmorait

    cmorait

    Joined:
    Nov 22, 2017
    Posts:
    59
I am between the Vega VII and the RTX 2080 Ti (the RTX Titan is out of my price range at the moment). I will primarily use them for archviz and VR. Maybe someone from Unity could clear up for us the 11 GB vs 16 GB question and the memory actually available on each.

With a 4 GB RX 580 (underclocked GPU) in my Asus 702ZC laptop with an 8-core Ryzen 1700 (1420 score in Cinebench R15), I am getting 10x performance comparing GPU vs CPU.
I think Unity should make a benchmark app so we can have real results and see what is most suitable for all creators. As only one GPU is supported at the moment, it is very important to know which path we will need to follow for our workflow needs.
     
    ApexofReality likes this.
  42. ApexofReality

    ApexofReality

    Joined:
    Feb 14, 2016
    Posts:
    102
I agree with the app; it sounds like a cool idea. I am currently waiting for my Radeon VII 16 GB, but I might cancel if the Vega Frontier Edition's 16 GB performance is about the same.
     
  43. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
  44. jacknero

    jacknero

    Joined:
    Aug 15, 2018
    Posts:
    60
I already filed bug report 1124484. How can I track its status and your progress on it, please?
     
    fguinier likes this.
  45. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    Thanks for the bug report, @jacknero! You'll be notified via e-mail when the status of the bug changes.
     
  46. Cascho01

    Cascho01

    Joined:
    Mar 19, 2010
    Posts:
    1,347
Will we see a "Bake selected" button someday?
    (Please!)
     
  47. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
Very likely, yes. However, we've been prioritizing the final quality of single bakes and big structural changes (like GPU support in the Progressive Lightmapper) over more granular control. Once those areas stabilize, we'll look into such features.
     
    Cascho01 likes this.
  48. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,631
    I vote against bake selected if it means we'll never get tight packing again.
     
  49. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    It wouldn't imply that anymore. That was the case back in the day, as we didn't have a good system in place to store all the needed metadata.
     
    AcidArrow and Lars-Steenhoff like this.
  50. rubberrobot

    rubberrobot

    Joined:
    Jul 8, 2014
    Posts:
    2
Hi, I've been away from Unity for a bit. Can anyone explain the current status of progressive light baking? Is it solid and working, or still WIP?