
Official Progressive GPU Lightmapper preview

Discussion in 'Global Illumination' started by KEngelstoft, Sep 26, 2018.

  1. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    The CPU progressive is in a fine enough state that you can do work with it.

    It is still WIP though (in the sense that I expect further improvements; some are already coming in 2019.1, 2019.2, etc.)

    I think it's a bit early for the GPU.
     
  2. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    I just tried the progressive GPU and I get the error. After that, Unity just crashes when I try to close the app to restart it.

    Progressive CPU seems to work fine; I have a 1070 with the latest drivers that GeForce Experience can find (418.81, I believe).
     
  3. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    If you report a bug we can probably fix the issue you ran into. There is not enough information in your post to help us pinpoint the problem. Thanks!
     
  4. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    A 16GB card will enable 4K lightmap resolution in most cases, whereas 11GB will be ok for 2K lightmaps. Keep in mind the GPU lightmapper is still in preview and we are working on reducing the memory usage before going out of preview.
     
  5. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    The CPU Progressive Lightmapper has been fairly stable for a while now and the recent improvements include:
    - MIS for environment lighting (to reduce noise coming from environments with well-pronounced bright areas)
    - sampling improvements (to reduce noise in direct lighting in certain cases, e.g. light going through Persian blinds)
    - neural-network-based denoising

    The GPU Progressive Lightmapper is still in preview, but it's catching up quickly in functionality. Recently the following improvements have been made:
    - double-sided GI
    - cast and receive shadows
    - non-uniform scaling
    - lower memory usage
    The biggest outstanding work is submesh support and porting the updates from the CPU lightmapper that I mentioned above.
     
    Adam-Bailey likes this.
  6. cmorait

    cmorait

    Joined:
    Nov 22, 2017
    Posts:
    59
    This is a problem. So the Vega VII is a one-way road when using non-professional GPUs with the lightmapper.

    What are the limits of the GPU renderer in terms of project size?

    Also, I would like to ask if we should consider the OpenCL benchmark results below for choosing the proper GPU for our GPU lightmapping rig.



     
    Last edited: Feb 12, 2019
  7. screenname_taken

    screenname_taken

    Joined:
    Apr 8, 2013
    Posts:
    663
    Btw, regarding the Vega VII, it doesn't have UEFI support, so if someone is using Secure Boot, their Windows install will fail. Just throwing that out there for people to be aware of in case they use Secure Boot.
     
  8. ApexofReality

    ApexofReality

    Joined:
    Feb 14, 2016
    Posts:
    102
    AMD has since fixed this issue.
     
  9. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    I believe I submitted a bug when the crash happened; I have to double check, but I can hit that issue 100% of the time.

    The only thing that I can't submit is the project, since it is above 1 GB.

    OS: Windows 10 1809
    Unity 2018.3.0f2
    Acer Predator G9-793
    Intel i7-7700HQ 2.8 GHz
    32 GB RAM
    Nvidia GTX 1070 8 GB VRAM, drivers 25.21.14.1881

    After loading the project, if I run the Progressive CPU it runs fine, with no issues at all. If I switch to the GPU variant, I get 999+ log entries saying
    Code (CSharp):
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_INVALID_PROGRAM
    which is what brought me here.
     
    soleron likes this.
  10. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    It might be related to a driver bug on Nvidia (the driver caches the compiled kernel on disk, but it does not consider include files when hashing the source. Thus the code requested by the application might not be compiled at all; instead it can get an old version matching the source but not the includes).
    --> If so, this was fixed in 2018.3.6f1.

    However, the GPU lightmapper has come a long way in terms of stabilisation in 19.1. I would advise moving to it if you can :).
     
  11. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    I see. I did get the latest drivers because I thought the issue was the drivers, but I can try with the new 2018 build as you suggested.

    I would totally like the idea of using the GPU lightmapper... after all, my video card would be more useful that way, considering that my CPU is not a Threadripper or an i9 :)

    BTW, is there a way to flush the compiled kernel cache? If the problem is that the code can't be compiled and the old cached code is being retrieved, a flush should solve the problem, right?

    Thanks for the info
     
  12. cmorait

    cmorait

    Joined:
    Nov 22, 2017
    Posts:
    59
    Even if you have a TR, a fast GPU is faster than a 32-core CPU for baking. Consider that my 4 GB underclocked RX 580 (in a laptop chassis) is 10 times faster than the 8-core Ryzen 1700 in the same system. A 32-core Threadripper is almost 4 times faster in multicore than a Ryzen 1700. That means the RX 580 is still faster at baking than a 32-core TR. OK, I am not counting the limitations of GPU memory for baking texture maps, but we can get the general idea of the speed difference.

    For arch viz we prefer to buy multiple GPUs rather than higher-thread-count CPUs, as the rendering is far faster, almost real time. Again, you have limitations from the GPU memory, but now that cheap 16 GB Vega VII helps out. Even with an i9-9900K or 2700K system, with the proper GPU you can save a huge amount of time and money.
     
    Last edited: Feb 14, 2019
    Total3D likes this.
  13. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    Indeed! The cache is in %APPDATA%\Roaming\NVIDIA.
     
    Mauri and rapidrunner like this.
  14. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    Well, the point of which is faster was not the main topic there; I did point out that having a decent CPU with lots of physical cores would help, compared to the basic one I have on my laptop. If you can use the GPU to create lightmaps all the time, then the CPU becomes irrelevant, since the GPU will always be faster.

    Although, as you realize, memory is at a premium on the GPU, so unless you have a 32 GB Quadro, you end up hitting the point where your VRAM is gone for good, and in that case I assume the process starts to swap to regular RAM, which will slow things down for large scenes.

    If I have a GPU available for lightmaps, I will go for that, but if I have to use the CPU, then I would go for the one with the most cores.
     
  15. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    Much appreciated! I will try to wipe the contents of that folder and try again. I have had a few Unity version updates on this machine, so a cleanup won't hurt for sure :)
     
  16. Cascho01

    Cascho01

    Joined:
    Mar 19, 2010
    Posts:
    1,347
    I am applying different LightmapParameters presets (from Default VeryLow up to Default VeryHigh) within HDRP (Unity 2018.3.5) but it doesn't change anything - are they ignored or overwritten somewhere?
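    To check what each renderer actually resolves to, here is a quick diagnostic sketch of my own (not an official tool; it assumes the renderer's serialized property is named m_LightmapParameters, which isn't part of the public API):
    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    public static class DumpLightmapParameters
    {
        [MenuItem("Tools/Dump Lightmap Parameters Per Renderer")]
        public static void Dump()
        {
            foreach (MeshRenderer renderer in Object.FindObjectsOfType<MeshRenderer>())
            {
                // Read the hidden serialized field that stores the assigned asset.
                var so = new SerializedObject(renderer);
                SerializedProperty prop = so.FindProperty("m_LightmapParameters");
                Object parameters = prop != null ? prop.objectReferenceValue : null;
                Debug.Log(renderer.name + " -> " +
                          (parameters != null ? parameters.name : "Scene Default Parameters"));
            }
        }
    }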
     
    Last edited: Feb 15, 2019
  17. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    Hey, @Kuba and @KEngelstoft and @Jesper-Mortensen and @fguinier (and anyone else on the team I may have interacted with over the past few years). You know I have a lot of respect for you, right?

    Anyway.

    I'm going to ask one final time.

    Maybe I'm the only one that cares about this, I don't know, take it as you will.

    I'm playing with 2019.2a on my brand new laptop (with an RTX 2070).

    The GPU lightmapper is fast. Really fast. My scenes finish baking in minutes.

    And I like my scenes to be very indirectly lit (imagine a cave scene, lit with just skylight and a directional light that lights the entrance, and then it's just bounced light).

    But my problem is: 100k indirect rays is not enough (it's actually 131072 now, right?). I have scenes that are kinda small but very indirectly lit. They bake in minutes. That's fast.

    But they don't look good enough.

    It's really frustrating actually. I look at the results and I think "this would look amazing if I could let it bake for a couple of hours".

    But I can't. I reach the limit of indirect rays too fast. I can only bake for a few minutes.

    My only workaround is to try and bake at a much higher resolution than I need, then resize all the lightmaps down, so I can sort of get 4x the indirect samples. That's less than ideal, right?

    I know you have filtering options, but for final results, I'd rather not rely on filtering.

    I don't know what kind of user you have in mind when creating these things, but I think that for final and high quality results, leaving a computer to work on baking overnight is more than acceptable (but again, maybe that's just me; I have a background in arch viz, where leaving your computer overnight, or for multiple days, to render was normal).

    So why can't I set the indirect rays to say, 10 million, go to sleep, and wake up to find an amazing baked lightmap?

    Could you please raise the limit of indirect rays? I know you are preparing smarter solutions for some edge cases, but in the meantime, why can't I brute force it? I'm willing to.

    And while I have your attention (I'm assuming I do, maybe I don't)

    Can you also add an option for something like a 0.5-pixel Gaussian blur for the direct buffer? I often find that no filtering at all looks weird (too much stair-stepping) for baked direct shadows, while a Gaussian of one pixel makes all baked shadows look waaaaay too soft. I know we have the technology for an in-between solution, why isn't that available in Unity?

    Thanks for reading.

    I love you guys.

    Cheers.
     
    Last edited: Feb 17, 2019
  18. rapidrunner

    rapidrunner

    Joined:
    Jun 11, 2008
    Posts:
    944
    BTW, the correct path is "\Local\NVIDIA" (i.e. %LOCALAPPDATA%\NVIDIA) :) That's where my cache folder is located (Windows 10)
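    For anyone else who wants to flush it, here's a minimal editor sketch, assuming the cache lives under %LOCALAPPDATA%\NVIDIA as above (the driver will simply rebuild the cache on the next bake):
    Code (CSharp):
    using System;
    using System.IO;
    using UnityEditor;
    using UnityEngine;

    public static class NvidiaKernelCacheCleaner
    {
        [MenuItem("Tools/Clear NVIDIA Kernel Cache")]
        public static void ClearCache()
        {
            // %LOCALAPPDATA%\NVIDIA is where the driver keeps its compiled-kernel cache on Windows 10.
            string cachePath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                "NVIDIA");

            if (Directory.Exists(cachePath))
            {
                Directory.Delete(cachePath, recursive: true);
                Debug.Log("Deleted NVIDIA kernel cache at " + cachePath);
            }
            else
            {
                Debug.Log("No NVIDIA kernel cache found at " + cachePath);
            }
        }
    }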
     
  19. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Hi @AcidArrow, thank you for the feedback! We are aware that indirect can be very noisy and that's why we have added multiple importance sampling for environment lighting to the GPU lightmapper :) The feature should hit 2019.2.0a8 with some luck (so you should be able to try it within the next few weeks, please double check the release notes once out). This should make it possible to bake your setup with far fewer samples instead of maxing the slider out to 128K. Let me know how many MIS samples you need when you are using MIS for the environment.
    We would like to get to a point where you don't have to specify sample count at all but instead do adaptive sampling until the variance gets below a threshold set by the user. MIS is one step towards this. We have to do some other changes in the pipeline too before we can enable this.

    By the way, to get supersampling without your resizing hack, use a Lightmap Parameters asset (https://docs.unity3d.com/Manual/class-LightmapParameters.html) and increase the `Baked GI` Anti-aliasing Samples.
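    If you'd rather set that up from script, a minimal sketch (the asset path is just an example; antiAliasingSamples is the scripting counterpart of the Anti-aliasing Samples setting):
    Code (CSharp):
    using UnityEditor;

    public static class SupersampledLightmapParams
    {
        [MenuItem("Tools/Create Supersampled Lightmap Parameters")]
        public static void Create()
        {
            // antiAliasingSamples supersamples each texel during the bake,
            // replacing the manual bake-big-then-downscale workaround.
            var parameters = new LightmapParameters { antiAliasingSamples = 4 };
            AssetDatabase.CreateAsset(parameters, "Assets/SupersampledParams.giparams");
            AssetDatabase.SaveAssets();
        }
    }
    Then assign the resulting asset in the Lighting window (or per renderer) as usual.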
     
    Last edited: Feb 18, 2019
  20. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    I mean, that sounds like an amazing and intelligent solution, which will come eventually...

    But...

    Why can't I brute force it in the meantime? :)

    Also, please consider the 0.5-pixel Gaussian (or something equivalent) for the direct lights.
     
    fguinier and Kuba like this.
  21. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    We would like to move to 16-bit sample count buffers to reduce the memory footprint, and this means reducing the max allowed sample count to 64K (a 16-bit counter tops out at 2^16 = 65,536 samples). Before we can do this, we need to make sampling as intelligent as possible so you can get clean bakes without hitting the max sample count limit. I am not going to unlock the sample count now and remove it again in the next release; we try to make the upgrades as smooth as possible.

    The 0.5-pixel Gauss filter option will get discussed in the team today, I promise :)
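    For anyone curious what a 0.5-pixel Gaussian amounts to, here is a rough illustrative sketch, purely my own post-process on a readable texture (not the lightmapper's filtering): a separable 3-tap kernel with sigma = 0.5.
    Code (CSharp):
    using System.IO;
    using UnityEditor;
    using UnityEngine;

    public static class HalfPixelGaussian
    {
        [MenuItem("Tools/Blur Selected Texture (0.5px Gaussian)")]
        public static void Blur()
        {
            var source = Selection.activeObject as Texture2D; // must be read/write enabled
            if (source == null) return;

            // 1D kernel for sigma = 0.5: exp(-x^2 / (2 * 0.5^2)) at x = -1, 0, 1, normalized.
            float edge = Mathf.Exp(-2f);
            float norm = 1f + 2f * edge;
            float[] k = { edge / norm, 1f / norm, edge / norm };

            int w = source.width, h = source.height;
            Color[] src = source.GetPixels();
            Color[] tmp = new Color[src.Length];
            Color[] dst = new Color[src.Length];

            // Horizontal pass, then vertical pass (clamped at the borders).
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    Color c = Color.clear;
                    for (int i = -1; i <= 1; i++)
                        c += src[y * w + Mathf.Clamp(x + i, 0, w - 1)] * k[i + 1];
                    tmp[y * w + x] = c;
                }
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    Color c = Color.clear;
                    for (int i = -1; i <= 1; i++)
                        c += tmp[Mathf.Clamp(y + i, 0, h - 1) * w + x] * k[i + 1];
                    dst[y * w + x] = c;
                }

            var result = new Texture2D(w, h, TextureFormat.RGBAFloat, false);
            result.SetPixels(dst);
            File.WriteAllBytes("Assets/blurred.exr", result.EncodeToEXR());
            AssetDatabase.Refresh();
        }
    }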
     
    AcidArrow likes this.
  22. studio1h

    studio1h

    Joined:
    Jul 6, 2012
    Posts:
    31
    Hi @Kuba, @KEngelstoft, @Jesper-Mortensen and @fguinier. I have a question for the team:

    I've been testing the new Unity GPU lightmapper against Bakery. I totally understand that the GPU lightmapper is not feature complete yet, so it's not at all a fair comparison right now. That said, I'm confused by one of the results I'm seeing. Specifically, I find that Bakery consistently generates significantly less lightmap data per scene than the GPU lightmapper. On one of my test scenes, when baking with the GPU lightmapper, I get 4x the amount of lightmap data compared to baking with Bakery.

    My theory is that it may have something to do with the way the two lightmappers handle lightmap resolution. For the GPU lightmapper we select a specific lightmap size, e.g. 4K, 2K, etc. Bakery, on the other hand, asks you to set a min and max resolution value, and then it seems to try to make the most size-efficient set of lightmaps possible given your resolution range. Does that sound right to you guys? If so, would it be possible to have the GPU lightmapper do something similar to what Bakery is doing with the min/max resolution setup? I ask because I'm finding it next to impossible to get similarly small sets of lightmaps from the GPU lightmapper at the same quality I get from Bakery.

    Thanks! -- BTW, I really appreciate all the hard work you guys are doing on the GPU lightmapper. I really want to be able to use it full time. Right now, I'm using Bakery because the GPU lightmapper is not feature complete, but that means I have to work in Windows because Bakery is Windows/Nvidia only. Long term, I need a lightmapping solution that works on Macs, so I'm really looking forward to switching to the GPU lightmapper when it's ready for use in production.
     
    Lars-Steenhoff likes this.
  23. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Is the 0.5 Gauss filter for your 2x manual supersampling workflow, or without that step?
     
  24. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    Without.
     
    KEngelstoft likes this.
  25. ApexofReality

    ApexofReality

    Joined:
    Feb 14, 2016
    Posts:
    102
    Three days ago my Radeon VII arrived on my porch, and when I started using the lightmapper in Unity I was met with some awesome results.
    Cornell Box: 390 Mrays per second
    Sponza: 230 Mrays per second
     
  26. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
    @ApexofReality
    Can you share the sponza scene with your lightmapping settings?
    I'd like to test my GTX 1070 with the same scene to get comparable results. Thanks!
     
    screenname_taken likes this.
  27. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
    tachen and ApexofReality like this.
  28. ApexofReality

    ApexofReality

    Joined:
    Feb 14, 2016
    Posts:
    102
  29. pitibonom

    pitibonom

    Joined:
    Aug 17, 2010
    Posts:
    220
    Any plan on simplifying GI baking with this amazing GPU feature?
    e.g. allow the choice of target texture and UV set....

    :)
     
    Last edited: Feb 26, 2019
  30. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
    With your setup I score 87 Mrays o_O, but maybe my scene is a bit different (turned off outer walls, etc.)
    Conclusion: We need an "official" benchmark scene for testing and comparing results.
     
  31. GenOli

    GenOli

    Joined:
    Apr 21, 2013
    Posts:
    139
    I got this error when trying to bake a large scene:

    "OpenCL Error. Falling back to CPU lightmapper. Error callback from context: Max allocation size supported by this device is 2.75 GB. 3.00 GB requested."

    System: Unity 2018.3.6f1, Win10 64-bit, Ryzen 1800X, 1080 Ti (driver: 417.71), 16 GB RAM. Any ideas? I tried googling but can't find anyone else with this issue.
     
    Last edited: Feb 26, 2019
  32. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Your 1080 Ti has 11 GB of memory, and the largest chunk that can be allocated at a time is 25% of that (2.75 GB); OpenCL devices typically cap a single allocation at a quarter of total device memory. Either try with a 12 GB card or lower the lightmap atlas size. We intend to fix this issue before going out of preview.
     
  33. GenOli

    GenOli

    Joined:
    Apr 21, 2013
    Posts:
    139
    I deleted OpenCL.dll from the Unity directory (it solved other issues) and lowered the atlas size to 1024 (thanks btw). After a while, though, I still get memory errors and it eventually falls back to the CPU; it might be due to submeshes.

    I would like the option for it to stop baking rather than fall back to CPU if it fails, as I could then use the Bakery asset for a large scene instead.
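    Until something official exists, here is a rough editor sketch of that behaviour, assuming the fallback message passes through Application.logMessageReceived (which it may not for every native log path):
    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    // Sketch: cancel the bake instead of silently continuing on the CPU.
    [InitializeOnLoad]
    public static class CancelBakeOnGpuFallback
    {
        static CancelBakeOnGpuFallback()
        {
            Application.logMessageReceived += OnLog;
        }

        static void OnLog(string condition, string stackTrace, LogType type)
        {
            if (Lightmapping.isRunning &&
                condition.Contains("Falling back to CPU lightmapper"))
            {
                Lightmapping.Cancel();
                Debug.LogWarning("GPU lightmapper fell back to CPU; bake cancelled.");
            }
        }
    }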
     
    Last edited: Feb 27, 2019
  34. luellasfpg

    luellasfpg

    Joined:
    Jan 23, 2019
    Posts:
    16
    We just tested the GPU progressive lightmapper and it works great: it brought a 6+ hour light bake using the old progressive system (one that would crash at the end) down to less than 3 minutes. It would be great if we could use this feature in production, but we currently can't because it doesn't support LODs.

    Do you have a timeline for when LOD support will be added? And do you have any resources that would point us toward implementing this ourselves in the meantime? (If that's even possible?)
     
  35. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    I'm having the following issue with the GPU lightmapper:

    It seems spotlights don't emit indirect light (?).

    Look at this (obviously really rough bakes)

    bounced.jpg

    A few notes:
    1. The rest of the lights (point, emissive materials etc), work fine.
    2. I can make the spotlights emit indirect if I really crank their intensity (and it's actually weird: they keep producing almost zero indirect until I go above a certain threshold, and then suddenly, BOOM, an explosion of indirect light).
    3. I CANNOT replicate it in a new project. Granted I didn't spend a ton of time, but it's not as simple as placing a spotlight in a scene and calling it a day.
    4. On my main project, it's problematic on both 2018.3 and 2019.1 beta 6 (didn't try 2019.2 yet). Also, it's not specific to this scene; others exhibit it too, but not all scenes do. It's always spotlights.

    This obviously needs a bug report, but I'm having trouble making a minimal repro. I'm hoping someone else has stumbled upon this so hopefully they can point me in the right direction so I can make a proper bug report then.

    EDIT: Uhhhh... So... super weird. As soon as I edit the "Baked Shadow Radius" value for one of the spot lights, all of them start working properly. The value doesn't really matter... Not sure I can create a repro project for this.

    EDIT2: Even weirder, the same trick does not work in Unity 2019.2. I'm starting to believe the above "fix" was a fluke.
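    In case anyone wants to try the same accidental workaround without clicking through every light, a throwaway sketch (it just nudges Light.shadowRadius, which I believe is the scripting name for Baked Shadow Radius, on every spotlight; as per EDIT2, this may well be a fluke):
    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    public static class SpotlightShadowRadiusNudge
    {
        [MenuItem("Tools/Nudge Spotlight Baked Shadow Radius")]
        public static void Nudge()
        {
            foreach (Light light in Object.FindObjectsOfType<Light>())
            {
                if (light.type != LightType.Spot)
                    continue;

                Undo.RecordObject(light, "Nudge Baked Shadow Radius");
                light.shadowRadius += 0.0001f; // the exact value didn't seem to matter
                EditorUtility.SetDirty(light);
            }
        }
    }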
     
    Last edited: Mar 10, 2019
  36. MCoburn

    MCoburn

    Joined:
    Feb 27, 2014
    Posts:
    71
    I tried the Progressive GPU Lightmapper on my NVIDIA GeForce 1060 3GB and it complained about OpenCL being buggy/having problems and fell back to the CPU renderer. Just today I swapped out the GTX 1060 and slapped in my AMD RX 570. However, despite the latest drivers being installed and OpenCL reporting as enabled, Unity 2018.3.7 refuses to even see that it exists, let alone use it.

    Some errors pop up in the logs, followed by spam:

    Code (csharp):
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_PLATFORM_NOT_FOUND_KHR
    Failed to find a suitable OpenCL device, falling back to CPU lightmapper.
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_INVALID_PROGRAM
    [....]
    [PathTracer] Loaded programs and built CL kernels in 0.004 secs -> Timestamps: [65.610 - 65.613].
    [PathTracer] AddGeometry job with hash: 1e61de3339b1282b3cb23ee71e290694 failed with exit code 1.
    Forcing the OpenCLPlatformAndDevice arguments does jack all. In the logs, it says:

    Code (csharp):
    [PathTracer] building lightmap data asset.
    [PathTracer] m_Clear = false;
    gi::BakeBackendSwitch: switching bake backend from 3 to 2.
    OpenCL Error: 'clGetPlatformIDs(kMaxPlatforms, platforms, &numPlatforms)' returned -1001 (CL_PLATFORM_NOT_FOUND_KHR)!
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_PLATFORM_NOT_FOUND_KHR

    -- Listing OpenCL platforms(s) --
    -- Listing OpenCL device(s) --
    Failed to find a suitable OpenCL device, falling back to CPU lightmapper.
    (Filename: C:\buildslave\unity\build\Editor/Src/GI/Progressive/RadeonRays/RadeonRaysContextManager.cpp Line: 642)

    OpenCL Error: 'cl_int _err = kernelWrapper.CreateCLKernel(program, name, pvrJobType)' returned -44 (CL_INVALID_PROGRAM)!
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_INVALID_PROGRAM
    (Filename: C:\buildslave\unity\build\Editor/Src/GI/Progressive/OpenCL/OpenCLCheck.cpp Line: 122)

    CL Kernel 'prepareLightRays' creation failed.
    OpenCL Error: 'cl_int _err = kernelWrapper.CreateCLKernel(program, name, pvrJobType)' returned -44 (CL_INVALID_PROGRAM)!
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_INVALID_PROGRAM
    (Filename: C:\buildslave\unity\build\Editor/Src/GI/Progressive/OpenCL/OpenCLCheck.cpp Line: 122)

    CL Kernel 'prepareLightRaysFromBounce' creation failed.
    OpenCL Error: 'cl_int _err = kernelWrapper.CreateCLKernel(program, name, pvrJobType)' returned -44 (CL_INVALID_PROGRAM)!
    OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_INVALID_PROGRAM
    (Filename: C:\buildslave\unity\build\Editor/Src/GI/Progressive/OpenCL/OpenCLCheck.cpp Line: 122)

    [....etc....]
    I'd appreciate the help, as I want to give the GPU lightmapper a spin just to see how well it does vs my 12-thread CPU. And here are all the juicy bits as screenshots:
    GPU-Z_UHnaC85NFM.png GPU-Z_xbXB7wXGf9.png Unity_XDTIqaIPFl.png
     
  37. MCoburn

    MCoburn

    Joined:
    Feb 27, 2014
    Posts:
    71
    Turns out I can fix the issue for my AMD card if I replace Unity's OpenCL.dll with amd_opencl64.dll from the driver's extracted files (or the same file located somewhere inside the Windows system directories). I have not seen any ill side effects, and I have been baking a few scenes now; it's been running smoothly.

    That'll be an hour of my time I won't get back, but at least I can bake away until I'm happy.
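    For reference, the swap boils down to something like this; both paths are hypothetical and will differ per machine, so back up the original and run it with Unity closed:
    Code (CSharp):
    using System;
    using System.IO;

    // Throwaway helper mirroring the manual fix above. Both paths are
    // hypothetical examples: locate OpenCL.dll in your own Unity install
    // and amd_opencl64.dll in your extracted driver package.
    class ReplaceUnityOpenCL
    {
        static void Main()
        {
            string unityDll = @"C:\Program Files\Unity\Editor\OpenCL.dll"; // hypothetical
            string amdDll = @"C:\AMD\ExtractedDriver\amd_opencl64.dll";    // hypothetical

            File.Copy(unityDll, unityDll + ".bak", overwrite: true); // keep a backup
            File.Copy(amdDll, unityDll, overwrite: true);
            Console.WriteLine("Replaced Unity's OpenCL.dll with the AMD OpenCL runtime.");
        }
    }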
     
    AcidArrow likes this.
  38. kristijonas_unity

    kristijonas_unity

    Unity Technologies

    Joined:
    Feb 8, 2018
    Posts:
    1,080
    Hey! Have you tried baking in non-directional mode to see if the issue reproduces? Also, do you have any meshes that use multi-material setups? Thanks.
     
  39. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    Hey, thanks for replying.

    1. Non-directional, yes, that's what I was using in the first place. (also tried directional)
    2. Multi-material: I do use them, yes, but after you suggested that, I made the whole scene use one material and it still reproduced.

    Also:


    At the beginning it's how it's supposed to look.

    When I switch to GPU, only the blue light bounces light.

    As soon as I reduce the blue light's intensity, the orange light starts producing bounced light, while the blue light stops. Which is... weird.

    I tried replicating this behavior in a new project and I can't.

    So next, what I did was delete everything in that scene except the two spotlights and add some default Unity planes instead.

    At first I thought it started working properly, which pointed to something being wrong with my meshes, but that is not the case.

    Watch this video:


    At the beginning it's how it's supposed to look (it's a CPU bake).

    Then notice how as I adjust the properties of the blue light, the bounced light from the orange one seems to reduce.

    So I guess it's not that indirect light isn't produced at all, as I thought initially, but that under certain circumstances it's greatly reduced.

    I'll make another attempt at a repro project, but if I'm unable to produce one today it'll probably be a while before I can spend more time on this.
     
  40. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    Hurrah, I got lucky and finally made a repro. (with a different trigger than in the videos)

    Submitted a bug, it's Case 1136304.
     
  41. Laurens-Paladin-Studios

    Laurens-Paladin-Studios

    Joined:
    Apr 25, 2015
    Posts:
    54
    Hi There,

    I'm baking some lightmaps using the GPU progressive lightmap renderer and I would like to test out the new OptiX denoising... but it is grayed out:

    upload_2019-3-15_23-37-45.png

    I'm running a GTX 970... Might it be that this GPU is no longer supported?

    Thank you
    Laurens
     
  42. Total3D

    Total3D

    Joined:
    Apr 23, 2018
    Posts:
    16
    Please check your Nvidia driver version. If I remember right, the new OptiX 6 requires driver 418.
     
  43. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,724
    FWIW, Open Image Denoise is easily as good as OptiX, so even if you can't use OptiX, you're not missing much.
     
  44. BenWoodford

    BenWoodford

    Joined:
    Sep 29, 2013
    Posts:
    116
    I'd really like to see an option to not fall back to the CPU lightmapper, to be honest. There's nothing worse than the GPU mapper throwing an error, and then you come back to your desk expecting a completed bake but are instead greeted with one that's about 1% through.
     
    skype6 likes this.
  45. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    1,011
    In some cases the bake never completes with the GPU lightmapper. I used Unity 2018.3.8f1.

    The Unity log repeated this:
    This issue did not occur with the same scene and the CPU lightmapper. Decreasing the lightmap resolution and the direct/indirect samples temporarily solved it.
     
  46. Cascho01

    Cascho01

    Joined:
    Mar 19, 2010
    Posts:
    1,347
    Last edited: Mar 22, 2019
  47. KEngelstoft

    KEngelstoft

    Unity Technologies

    Joined:
    Aug 13, 2013
    Posts:
    1,366
    Does this also happen in the latest 2019.2 alpha?
     
  48. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    1,011
    Oh, I tried 2019.2.0a8 and it baked successfully. I hope that fix will be backported to 2019.1 and 2018.3.
     
  49. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    We have plans for it, but no ETA :).
     
    Lars-Steenhoff likes this.
  50. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,526
    Nice to hear it's planned.