Discussion in 'Global Illumination' started by KEngelstoft, Sep 26, 2018.
Can you explain blend probes? Do you remove all the little objects from baking?
I'd recommend giving this entire article a read. Section 5 deals with light probes and explains everything. https://unity3d.com/learn/tutorials/topics/graphics/introduction-precomputed-realtime-gi
Easy rule for baking - avoid baking small things and organic shapes.
I came here to confirm that deleting opencl.dll from the Unity editor directory solves the error "No suitable OpenCL device found, falling back to CPU".
I'm getting 112 Mrays/sec with an RX 580 (Adrenalin 2019.2.2) + Unity 2018.3.3f1. It probably depends on what you have in the scene, but with older drivers on 2018.3.2 I never saw more than ~70 Mrays/sec. I'm not sure what changed, or whether getting rid of opencl.dll makes Unity use some other DLL with better performance; I have no clue. UT, you should investigate these issues before the GPU lightmapper is released out of preview.
I'm interested to see how raytracing is getting integrated into Maya.
Looking forward to seeing what it will bring to Unity.
Hey, I just tried out the Progressive GPU Lightmapper, but have run into a problem.
I'm using a GTX 1070 with 8GB of VRAM, yet Unity complains that it's only getting 2GB, which is not enough (2.04GB is needed).
The exact error is: "OpenCL Error. Falling back to CPU lightmapper. Error callback from context: Max allocation size supported by this device is 2.00 GB. 2.04 GB requested".
Isn't there a way to allow Unity to allocate more than just 2GB?
Have you tried going into Unity's folder and deleting the OpenCL.dll file?
(Or simply change the file's extension so you can change it back to .dll if you need to.)
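For anyone trying this, here is a minimal sketch of the rename-instead-of-delete workaround in script form. The Unity install path is an example, and a scratch folder with an empty stand-in file is used so the demo is harmless:

```python
import os
import tempfile

# Sketch of the rename workaround. In practice UNITY_DIR would be your Unity
# Editor install folder, e.g. r"C:\Program Files\Unity\Editor" on Windows;
# a scratch folder and an empty stand-in file are used here for the demo.
UNITY_DIR = tempfile.mkdtemp()
dll = os.path.join(UNITY_DIR, "OpenCL.dll")
open(dll, "a").close()                # stand-in for the real OpenCL.dll

os.rename(dll, dll + ".bak")          # rename instead of delete, so it's reversible
print(os.path.exists(dll + ".bak"))   # True
# To undo: os.rename(dll + ".bak", dll)
```

Renaming keeps the change reversible if a later Unity version needs the file back.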
I have tried it, but unfortunately it doesn't work. After reading other posts I assume it's only a fix for Unity not identifying the GPU.
edit: Something changed after randomly trying it again and again. It started using the GPU for a few seconds, but then I was hit with those two errors:
OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_MEM_OBJECT_ALLOCATION_FAILURE
OpenCL Error. Falling back to CPU lightmapper. Error callback from context: CL_OUT_OF_RESOURCES
I got about 61 of these before Unity switched the lightmapper back to Progressive CPU.
edit 2: It seems to be running fairly stable now, but there are definitely memory issues, since I now get some of these errors:
Clustering job failed for system: 0x88feae6e6e24abbf6171d8bea5e3d9cb, error: 4 - 'Out of memory loading input data.'.
Please close applications to free memory, optimize the scene, increase the size of the pagefile or use a system with more memory.
Total memory (physical and paged): 27240MB.
Also I'm at about 80% CPU, but the GPU is barely being used, at 10-17%. There is still about 4GB of RAM unallocated (other programs don't allocate it either), but Unity doesn't seem to want it and prefers paged disk space.
But the baking ETA also went down from 8 hours with the CPU to 1.5 hours, so I suppose it kind of works.
edit 3: Weird things keep happening. I got the "Max allocation size supported is 2 GB" error again, but it's still continuing with GPU lightmapping, even though it said it would fall back to the CPU lightmapper. I'm not complaining, but OK.
1. Did you update your drivers?
2. Do you have an integrated GPU?
Don't delete the DLL file in the latest 2018.3 and 2019.x!
1. I did upgrade my GPU drivers and even did a clean reinstall of the drivers.
2. Nope, I have a dedicated GPU.
3. Too late, I already deleted (or rather, renamed) the DLL and after a few aforementioned start problems it kinda works now. That was in Unity 2018.3.3f1.
I don't get it.
I tried to test the Progressive Lightmapper (GPU) on my desktop and on my laptop; it doesn't work on either device.
I already tried starting via "-OpenCL-PlatformAndDeviceIndices 1 0", but Unity always uses the Intel HD.
My desktop has a GTX 750 Ti with 2GB and my laptop a GTX 1050 Ti with 4GB.
However, even in AppData/Local/Unity/Editor/Editor.log only the Intel HD is displayed; the graphics cards don't show up at all.
Both devices have the latest Nvidia drivers. I have the problem on different Unity versions (2018.3.0f2 | 2018.3.3f1 | 2019.1.0a10).
I've attached the Editor.log below (from the laptop). How can I solve the problem?
After renaming the OpenCL.dll to OpenCL.dll.bak my GPU appears in the Editor.log. I can now access it via -OpenCL-PlatformAndDeviceIndices 0 0. This seems to work.
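For reference, a sketch of how that launch command fits together. The editor path is an example, and the indices come from the "Listing OpenCL device(s)" section of Editor.log:

```python
# Build the editor launch command that pins the GPU lightmapper to a specific
# OpenCL platform/device. The editor path is an example -- use your own.
UNITY_EXE = r"C:\Program Files\Unity\Editor\Unity.exe"
platform_index, device_index = 0, 0   # e.g. "OpenCL platform 0, device 0"

cmd = [UNITY_EXE, "-OpenCL-PlatformAndDeviceIndices",
       str(platform_index), str(device_index)]
print(" ".join(cmd))
# Launching would then be: subprocess.run(cmd)
```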
-- Listing OpenCL platforms(s) --
* OpenCL platform 0
PROFILE = FULL_PROFILE
VERSION = OpenCL 2.1
NAME = Intel(R) OpenCL
VENDOR = Intel(R) Corporation
EXTENSIONS = cl_intel_dx9_media_sharing cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp64 cl_khr_gl_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_spir
-- Listing OpenCL device(s) --
* OpenCL platform 0, device 0
DEVICE_TYPE = 4
DEVICE_NAME = Intel(R) HD Graphics 630
DEVICE_VENDOR = Intel(R) Corporation
DEVICE_VERSION = OpenCL 2.1
DRIVER_VERSION = 126.96.36.19949
DEVICE_MAX_COMPUTE_UNITS = 23
DEVICE_MAX_CLOCK_FREQUENCY = 1000
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 2147483647
CL_DEVICE_HOST_UNIFIED_MEMORY = true
CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2147483647
DEVICE_GLOBAL_MEM_SIZE = 3378762548
DEVICE_EXTENSIONS = cl_intel_accelerator cl_intel_advanced_motion_estimation cl_intel_d3d11_nv12_media_sharing cl_intel_device_side_avc_motion_estimation cl_intel_driver_diagnostics cl_intel_dx9_media_sharing cl_intel_media_block_io cl_intel_motion_estimation cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_required_subgroup_size cl_intel_simultaneous_sharing cl_intel_subgroups cl_intel_subgroups_short cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_depth_images cl_khr_dx9_media_sharing cl_khr_fp16 cl_khr_fp64 cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_gl_sharing cl_khr_icd cl_khr_image2d_from_buffer cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_khr_spir cl_khr_subgroups cl_khr_throttle_hints
* OpenCL platform 0, device 1
DEVICE_TYPE = 2
DEVICE_NAME = Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
DEVICE_VENDOR = Intel(R) Corporation
DEVICE_VERSION = OpenCL 2.1 (Build 10)
DRIVER_VERSION = 188.8.131.52
DEVICE_MAX_COMPUTE_UNITS = 4
DEVICE_MAX_CLOCK_FREQUENCY = 2500
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE = 131072
CL_DEVICE_HOST_UNIFIED_MEMORY = true
CL_DEVICE_MAX_MEM_ALLOC_SIZE = 2116969472
DEVICE_GLOBAL_MEM_SIZE = 8467877888
DEVICE_EXTENSIONS = cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_dx9_media_sharing cl_intel_dx9_media_sharing cl_khr_d3d11_sharing cl_khr_gl_sharing cl_khr_fp64 cl_khr_image2d_from_buffer
One thing I can think of: if the integrated GPU is set first in the BIOS, it will always be used by the GPU lightmapper; I think I saw somewhere that the lightmapper automatically picks the first GPU. Regarding the laptop: go into the BIOS and try to find an option to set the discrete GPU as first/main (just to try it)!
EDIT: If this is the case, then I think the lightmapping team would have to give you an option to choose a baking device!
This is not GPU memory being incorrectly reported as 2GB; it's because something in your scene requires an allocation larger than 2GB, and the driver doesn't allow that for this particular Nvidia card (CL_DEVICE_MAX_MEM_ALLOC_SIZE is 2GB in this case). Most Nvidia cards I have seen have a max allocation size of 25% of total GPU memory; AMD cards usually allow 50%.
Either bake with a different card with more memory, or reduce the supersampling count or lightmap atlas size. Baking large terrains could also be the cause, so try reducing the heightmap resolution and see if that helps.
If you get out-of-memory errors related to clustering, it means you are precomputing realtime GI. This is unrelated to the GPU lightmapper, but please consider whether you need realtime GI and baked GI enabled at the same time; the realtime GI precompute can be very CPU and memory intensive.
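A back-of-the-envelope check of those allocation limits (note the 25%/50% fractions are rules of thumb observed in this thread, not driver guarantees):

```python
def max_alloc_bytes(total_vram_gib, vendor):
    """Rule-of-thumb largest single OpenCL allocation: ~25% of VRAM on
    Nvidia, ~50% on AMD (observations from this thread, not a spec)."""
    fraction = {"nvidia": 0.25, "amd": 0.50}[vendor.lower()]
    return int(total_vram_gib * fraction * 1024**3)

# GTX 1070 (8 GiB) vs. the 2.04 GB allocation from the error above
limit = max_alloc_bytes(8, "nvidia")   # 2 GiB
requested = int(2.04 * 1024**3)
print(requested > limit)               # True: the driver rejects the allocation
```

This matches the error in the earlier post: the card has plenty of total VRAM, but a single 2.04 GB buffer exceeds the per-allocation cap.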
Brilliant! Exactly the kind of information I needed. Thank you very much!
By the way, if anyone experiences the problem I had with strange texture artifacts in your build only, you might have this bug. Apparently your resources file is limited to 4GB in size, a holdover limitation from 32-bit Unity, which has to be worked around with asset bundles or multiple scenes. This applies mostly to people with large, complex scenes.
Just downloaded Unity 2019.1.0b1. I'm looking for the Optix denoiser option in the lightmap settings (Progressive CPU mode), but the only options are None/A-Trous/Gaussian.
I have a GTX 1050 with driver 416.34.
How can I "enable" the new Optix denoising option?
You should have new options for denoising:
However there is currently a 4GB GPU VRAM minspec as the denoiser is really memory hungry. We are fixing this in 19.2 though. Regardless of the minspec you should see those options.
[QUOTE="Jesper-Mortensen, post: 4167808, member: 224237"]
However there is currently a 4GB GPU VRAM minspec as the denoiser is really memory hungry. We are fixing this in 19.2 though. Regardless of the minspec you should see those options.
[/QUOTE]
Thanks for your reply! Sadly I have only 2GB of VRAM, so that's why I can't see the Optix option.
Really looking forward to the 19.2 fix!
The Optix denoiser is greyed out for me on Progressive GPU but available on CPU? The tooltip says "your hardware doesn't support denoising". I have a laptop 1070 with 8GB.
I think the GPU-accelerated Optix denoising is currently available only with the Progressive CPU lightmapper.
Yes, what Total3D said. I have made it work in 19.2 though. Think of the denoising in 19.1 as a soft launch ;-)
I finally managed to use the GPU lightmapper after the recent update.
But I'm getting some weird results like this.
I'm quite sure every setting is exactly the same as when using CPU Progressive.
Why does this happen, and can it be fixed?
Enabling the filter doesn't achieve an ideal result either.
Can you post a screenshot showcasing the difference between the CPU and GPU result? Or the scene if possible?
Do you guys have this issue where, once the GPU lightmapper gives a warning like out of memory or out of resources, you have to restart the editor to get it working again? Otherwise it gets stuck on the preparing step.
This is the CPU result with the same settings.
Thanks! This seems to be a sampling pattern issue (work is planned on the GPU lightmapper in that regard). The best thing would be for you to open a bug with the scene attached so we can confirm it and verify it is fixed when we do the sampling pattern work.
Switching to the CPU lightmapper should clear all memory and the OpenCL context, i.e. the next GPU lightmapper bake should start from scratch. So this seems like a bug. Can you repro it (ideally in a bug report)?
I thought this might clear the OpenCL context, as you say, and tried it, but it didn't help. I've actually had this issue for some time now on different Unity versions; it seems like it was always there, though!
I'll try to repro the issue and may file a report!
Very promising, but please, please, please tell us there's a plan to fix the lightmapper's UV packing algorithm; it's been extremely inefficient since the Unity 4 days.
Right now I have a single mesh in a scene using Unity's default auto-generated UV settings, and the lightmapper has created a 4K texture but has barely populated 1K of it with UV islands.
See this thread for a long-running discussion of the issue.
Just spotted this in the 2019.2.0 Alpha notes, looking forward to trying it:
GI: Reduced GPU memory footprint for GPU lightmapper when baking lighting, by compressing normal vectors and albedo.
GI: The Optix AI denoiser is now supported with GPU Lightmapper.
GI: Upgraded Optix AI Denoiser to version 6. This new version has better performance and a lower memory footprint.
Just tested 2019.2.0a4
The GPU lightmapper does not render area lights for me?
Is there any new trick to enable them?
I also couldn't seem to get a directional light to bake in 2019.2.0a4
This is a known issue: https://issuetracker.unity3d.com/is...fter-baking-lighting-with-the-gpu-lightmapper
It's fixed in 2019.2.0a5.
A quick question: which will be faster with the GPU lightmapper, a Vega VII, an RTX 2080, or an RTX 2080 Ti? Reviewers say the Vega VII is faster at OpenCL and comparable to the RTX 2080 Ti. Since I can't get both cards and run tests to compare them, could you at Unity do some testing?
If you want Optix denoising, Nvidia is the only option.
If you need the memory, take the 16GB AMD or the 24GB RTX Titan.
A benchmark scene would be nice.
+1 for a universal benchmark scene.
At the moment there's only one option for denoising: Optix with an Nvidia card. And it's a must-have; in my archviz scenes, CPU + Optix lightmapping gives me a 5x-7x speed advantage! It's huge!
I think the wisest thing is to wait a bit for the other denoising implementation. If I'm correct, we'll soon have it built in.
If we get a denoising option compatible with every card, the Vega VII is the winner because of its larger memory.
I'm now on Unity 2018.3.4f1 and there is still this warning, as in the image:
The FBX has the option for generating UVs toggled on, and:
1. i got Lightmap Resolution set to 40
2. lightmap Size set to 1024
3. HighResolution preset.
4. Lightmap Padding is 4 texels
How should I know how to manage this? Is it an actual issue in this and similar cases?
Is it a good idea to increase the padding even more?
Please at least make the message more sensible, as it seems to be related to something other than just enabling the "generate UVs" option! For example, if we have to leave more space between chunks, change the message to say that, or print multiple possible solutions!
Below are the lightmap settings and the UV Overlap preview; is it an issue or not?
Thank you !
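One rough way to reason about overlap warnings (a sketch of the usual texel-padding rule of thumb, not Unity's exact overlap test): at a given lightmap resolution, N texels of padding correspond to a world-space gap of N / resolution units between UV charts, so geometry packed closer than that can bleed into neighboring charts.

```python
def min_world_gap(padding_texels, texels_per_unit):
    """Approximate world-space separation needed between UV charts so that
    `padding_texels` of empty lightmap texels fit between them.
    (Rule of thumb only -- not Unity's actual overlap check.)"""
    return padding_texels / texels_per_unit

# The settings above: Lightmap Resolution 40 texels/unit, padding 4 texels
print(min_world_gap(4, 40))   # 0.1 world units between charts
```

By this estimate, either increasing the padding or lowering the resolution raises the separation that nearby chunks need.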
What would be a good benchmark tool to compare with Unity? Nobody really uses Unity's GPU lightmapper for benchmarks yet, so what's a good alternative to look at? Luxmark?
What's needed is a benchmark for lightmapping across different Unity versions and GPUs.
It doesn't need to compare against Luxmark or any other engine.
We just want to know what card to buy, when to use CPU or GPU lightmapping,
and when a lot of memory is needed.
I'm deciding between the Vega VII and the RTX 2080 Ti (the RTX Titan is out of my price range at the moment). I will primarily use it for archviz and VR. Maybe someone from Unity could clear this up for us: 11GB vs. 16GB, and how much of that memory is actually available on each.
With a 4GB RX 580 (underclocked GPU) in my Asus 702ZC laptop (8-core Ryzen 1700, 1420 in Cinebench R15), I am getting 10x performance comparing GPU vs. CPU.
I think Unity should make a benchmark app so we can get real results and see what's most suitable for all creators. Since only one GPU is supported at the moment, it's very important to know which path to follow for our workflow needs.
I agree, the app sounds like a cool idea. I'm currently waiting for my Radeon VII 16GB, but I might cancel if the Vega Frontier Edition's 16GB performance is about the same.
Perhaps we can just use this for benchmarking:
Adam Exterior Environment
I've already filed bug report 1124484. How can I track its status and your progress on it, please?
Thanks for the bug report, @jacknero! You'll be notified via e-mail when the status of the bug changes.
Will we see a "Bake selected" button someday?
Very likely yes. However we've been prioritizing the final quality of single bakes and big structural changes (like GPU support in the Progressive Lightmapper) over more granular control. Once the former areas stabilize we'll look into those features.
I vote against bake selected if it means we'll never get tight packing again.
It wouldn't imply that anymore. That was the case back in the day, as we didn't have a good system in place to store all the needed metadata.
Hi, I've been away from Unity for a bit. Can anyone explain the current status of progressive lightbaking? Is it solid and working, or still WIP?