
Bakery - GPU Lightmapper (v1.96) + RTPreview [RELEASED]

Discussion in 'Assets and Asset Store' started by guycalledfrank, Jun 14, 2018.

  1. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    It's way better to say they don't do it after implementation. :)
    I could assist when my job allows it.

    Probes need a full spherical calculation.
    Lightmap SGs only cover the hemisphere, like in Bake Lab.
    So yes, precalculation will take a while longer and need much more memory.
    But the indirect specular has me hyped.
    I downloaded the game and cheated my way through some levels.
    The quality of the lighting scenarios and the performance is absolutely amazing.

    Unfortunately BakeLab has no SSR implementation. I would like to see the difference for some scenarios.

    However, for HQ scene use cases they could be nice.
    I tend toward hybrid approaches: use them where you need them.

    This is what I learned from Bakery:
    bake to vertex here, use lightmap method x/y/z there...

    In a quick session I tried to replicate Blender EEVEE's SH L2 probes.

    I searched for the already integrated SH light probes in the Blender source code and found some of the spherical_harmonics_L2 and hl2_basis code.

    That was quickly done.

    https://github.com/sobotka/blender/...lender/draw/engines/eevee/eevee_lightprobes.c

    https://github.com/sobotka/blender/...raw/engines/eevee/shaders/irradiance_lib.glsl

    EEvEE_Probes.JPG

    But without the lightmap SGs no good comparison is possible, and that's hard to do in Blender (for me).

    On the other hand, extending Bake Lab with probe volumes and allowing textures to be saved is no fun job either.

    So Bakery is the best way, I think. :)
     
    Last edited: Jul 30, 2019
    guycalledfrank likes this.
  2. omacha

    omacha

    Joined:
    Aug 21, 2017
    Posts:
    9
    Hey Frank,

    I was wondering if you were planning to add some documentation on how to use the Subsurface Scattering in Bakery. The wiki page that is referenced in the manual does not exist and I couldn't find any resource about it.
     
  3. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    ...some ftrace benchmark runs.
    Confirmed: there is no multi-GPU benefit for now.


    2x GeForce GTX 1080
    Driver 425.31

    1st Render time: 5.45 sec
    2nd Render time: 4.898 sec
    3rd Render time: 4.921 sec


    4x GTX Titan V
    Driver 425.51

    1st Render time: 6.095 sec
    2nd Render time: 5.548 sec
    3rd Render time: 5.417 sec


    2x Titan RTX
    Driver 430.64
    without NV Link

    1st Render time: 1.294 sec
    2nd Render time: 0.719 sec
    3rd Render time: 0.738 sec

    1x Titan RTX
    Driver 430.64
    without NV Link

    1st Render time: 1.241 sec
    2nd Render time: 0.628 sec
    3rd Render time: 0.631 sec


    2x Quadro RTX 6000
    Driver 425.51

    1st Render time: 1.217 sec
    2nd Render time: 0.705 sec
    3rd Render time: 0.719 sec

    1x Quadro RTX 6000
    Driver 425.51

    1st Render time: 1.09 sec
    2nd Render time: 0.591 sec
    3rd Render time: 0.611 sec
    4th Render time: 0.605 sec


    In the case of
    2x Quadro RTX 6000
    vs
    1x Quadro RTX 6000

    and

    2x Titan RTX
    vs
    1x Titan RTX

    it seems the same job is broadcast to all GPUs available in the system, because the 2x and 4x GPU setups show the same GPU usage as a single one.



     

    Attached Files:

    Last edited: Jul 31, 2019
    liudian208 and guycalledfrank like this.
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Okay, I see now, but are all the small lightmaps the same lightmap? Because then that's basically the same thing as mipmapping, and we could sample that instead.

    The point of the lightmap is to hold the light value at moment t to be sampled, so it's best to duplicate the sampled point anyway? Also, a small lightmap loses all the usable data anyway; since it's surface based (tris), they would be too small relative to a pixel and therefore we would sample bad data.

    Duplicating points of the hemisphere (instead of storing UVs) is what I proposed, so they are basically local hemicubemaps, but since we can preprocess, we can optimize and pack the most important samples (like Enlighten does) instead of a vanilla hemicubemap.

    But we still need at some point to do long-distance sampling (e.g. if you pack the duplicated data on the hemisphere, you need to bounce the light back to the original point to get proper multi-bounce GI).

    So the optimal is (a rough sketch follows this list):
    1 - pack duplicate G-buffer data into a tile for each point
    2 - compute light per tile pixel and store it into a light texture
    3 - generate mipmaps of the resulting light
    4 - sample the mipmap (where tile = 1px) to display on objects
    5 - bounce the light of the mip back to other points using UV tiles
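    A rough sketch of what steps 2-4 could look like in Unity, assuming step 1 already produced a tiled G-buffer and that "ShadeTiles" is a hypothetical compute kernel; this is just an illustration of the idea above, not an existing implementation:

    Code (CSharp):
    using UnityEngine;

    public class TiledGISketch : MonoBehaviour
    {
        public ComputeShader gi;    // assumed to contain a "ShadeTiles" kernel
        RenderTexture lightTex;     // light per tile pixel (step 2); mips hold the averaged result (step 3)

        void Start()
        {
            lightTex = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGBHalf)
            {
                enableRandomWrite = true,
                useMipMap = true,
                autoGenerateMips = false
            };
            lightTex.Create();
        }

        void Update()
        {
            int shade = gi.FindKernel("ShadeTiles");
            gi.SetTexture(shade, "_LightTex", lightTex);
            gi.Dispatch(shade, 1024 / 8, 1024 / 8, 1);    // step 2: light each tile pixel
            lightTex.GenerateMips();                      // step 3: mip averaging is the cheap gather
            Shader.SetGlobalTexture("_TileGI", lightTex); // step 4: objects sample the coarse mip
        }
    }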

    Step 5 is suboptimal as we can have multiple points writing to the same target, so it's best to use the current point sampling its hemisphere anyway. That is, you sample the hemisphere's direct light to get the 1st bounce easily with fewer cache misses, but you pay for bounces beyond that point.

    Step 2 could probably do away with storing a light texture on high-end machines and just integrate the tile lighting directly; splitting the process across multiple textures lets you spread the compute on low end and get it async (render to texture, mipmap, update GI, then texture sampling are independent).

    The original way of doing it, that is, updating each direct light at the pixel, THEN sampling the GI, trades memory (the G-buffer is smaller by the tile size) for compute. But the new one has a nice optimization to get 1 bounce at low cost (mipmap). So thanks for the idea!

    Also, the G-buffer should be considered a surfel representation, not directly the geometry representation. We can cheat with precision there (i.e. assume the surfels aren't exactly on the surface geometry, only an approximation of it).
     
  5. Resident-Emil

    Resident-Emil

    Joined:
    Jun 23, 2015
    Posts:
    4
    Great asset, amazing results!
    I have a question though:
    Is it possible to bake the results to a predefined UV map? I have a forest on an island, and on the island ground I have grass (spawning off the island geometry and using a UV mapped texture for density and such). I would like to bake the ground shadows from Bakery on a texture UV mapped to the island ground so I can use it to color the grass accordingly.
    Any tips on how I could achieve this?

    Cheers,
    Emil
     
  6. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Integrate... Cycles? Well, maybe. But there are some differences, e.g. Bakery supports Unity's original (standard pipeline) unrealistic light falloff, which would perhaps be hard to teach Cycles.

    Ah, true. This feature is kinda experimental; I quietly sneaked it into the release without announcing it, but have used it myself for some stuff.
    Documented: https://geom.io/bakery/wiki/index.php?title=Subsurface_scattering

    Cool thanks!
    I will try to do something about it.

    Yes.

    How? Mipmaps are progressively smaller. Here is just a grid of lightmaps, each picking up lighting from a differently oriented ray.

    Sure, take a look at Lightmap Groups and the Original UVs mode: https://geom.io/bakery/wiki/index.p...my_own_UVs_without_any_alterations_to_them.3F
    (I assume the ground is your own authored mesh with existing UVs you need to use?)
     
    omacha and keeponshading like this.
  7. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    Why not. :)

    When you need to create realistic light falloff conditions for other galaxies, you can change the emission strength of the lights based on the Ray Length from the Light Path node. :)

    Or you use the Light Falloff node.

    You can derive the different light falloff values from each other by either multiplying or dividing by the ray length.

    Lots of possibilities. :)

    linear = quadratic * raylength
    constant = linear * raylength
    quadratic = constant / raylength / raylength
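    Just to make those relations concrete, here is the same arithmetic in C#; the names mirror the Light Falloff node outputs, but this is plain math, not Blender or Bakery API:

    Code (CSharp):
    using UnityEngine;

    public static class FalloffMath
    {
        // Derive the other falloff values from the quadratic one and the ray length.
        public static void Demo(float quadratic, float rayLength)
        {
            float linear   = quadratic * rayLength;             // ~1/r falloff
            float constant = linear * rayLength;                // no distance falloff
            float check    = constant / rayLength / rayLength;  // back to ~1/r^2
            Debug.Log($"linear={linear} constant={constant} quadratic={check}");
        }
    }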


    Unbenannt.JPG
     
    Last edited: Aug 1, 2019
    guycalledfrank likes this.
  8. BenWoodford

    BenWoodford

    Joined:
    Sep 29, 2013
    Posts:
    116
    Oh damn that is handy.

    Presumably you have to set up the lights for each map separately, turn them on/off depending on which map you're baking, and bake each one out, right?
     
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I'm a bit confused; I don't think I have fully understood what your idea is. I think my problem is how you resolve the lighting.

    I think I misread the first time; I thought it was a grid of small 2x2 duplicated lightmaps. 2x2 is the UV tile, which samples an atlas of lightmaps. I'm confused as to what the atlas stores and how it helps (especially with multiple bounces).

    So each UV value goes to a "local" lightmap to sample light?

    ...

    Oh okay, I think I got it, you reversed the packing lol. I thought you were showing a sequence, but you were contrasting "formats" (i.e. a normal lightmap, a tiled lightmap and your own proposition) lol

    Instead of hashing the position to get the tile, we use the tile to get the position, i.e. we start with the UV, not the lightmap! I need to think about the implications lol, my brain still can't picture what that looks like lol. The tile being the sample hemisphere is a nice shortcut, but in the other case I don't see it yet.

    Edit, trying to think about it:
    UV values are implicit in my old model, i.e. a point has a given UV, so we need to find which other hits happen; in some way it's already UV-order packed. If we expand each UV into a lightmap, that's kind of the same but more costly (naive implementation), as the tile is simply bigger.

    But instead of having addresses we have a mask (visibility structure) of all the points that sample this "hit" position, with the lighting values being "global" to the local lightmap and masked. Which means in the end we would just accumulate down all the masks and their values at every point to get the final value (basically sampling the same position).

    It works with direct lighting because we compute the lighting at the hit; GI is then done by "distributing" using the overlapping of the masks, each point only receiving light from the hits that see it (unmasked). Which means we could probably use a bitmask to store the lightmap, that is, we pack 32 lightmaps.

    We could probably optimize by only storing hit points in order and discarding the implicit positions (i.e. not on the lightmap layout), which makes it a surfel-based model: each surfel has the G-buffer to compute the light and the mask as a visibility structure, which is then linearly accumulated into a light accumulation buffer that objects would sample. We only bake hits that are used, and the visibility structure is potentially a single bitfield lightmap for every 32 hits. Also, that sounds like a lot of mul-add (mul being the mask against direct light, add being the light gathering), which can probably be accelerated by tensor hardware...


    That's a lot of different approaches for what is basically the same technique; that's interesting lol

    Edit2:
    In the case of the surfel decoupled from the layout (memory friendly), you would still need a readback from the lightmap values for a multi-bounce, so potentially a long sample read...

    In the case of the surfel being implicit in the lightmap layout, you just read the current position in the light accumulation. Which is the more compute-friendly version to date.

    edit3:
    Some math:
    if we have a 256²px texture,
    naively it's (all UVs x lightmap size)²; since the lightmap size is the same as the UV size it's 256^4, i.e. a 65536² texture... we only need one channel, so that's /2² if we pack into each channel (32768²); with bitfield masking we get 256² x (256²/32) = 11586²px
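    (A tiny check of that arithmetic in plain C#, just to verify the numbers quoted above:)

    Code (CSharp):
    public static class LightmapSizeMath
    {
        public static void Check()
        {
            const double size = 256;
            double naiveSide  = System.Math.Sqrt(size * size * size * size);          // 65536
            double packedSide = naiveSide / 2.0;                                       // 32768 (one channel of four)
            double maskedSide = System.Math.Sqrt(size * size * (size * size / 32.0));  // ~11585
            System.Console.WriteLine($"{naiveSide} {packedSide} {maskedSide}");
        }
    }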
     
    Last edited: Aug 1, 2019
    keeponshading and guycalledfrank like this.
  10. Zoidberg656

    Zoidberg656

    Joined:
    Sep 26, 2018
    Posts:
    17
    I truly have no idea what I can do at this point to fix this. I have tweaked and re-baked in a hundred different Bakery settings but to no avail. I continuously get little glowing artifacts that will sometimes disappear, but most are persistent and never leave. The ones on the wall binding don't go away.

    Everything has 2 UV maps. One for the textures and a second for the lightmapping. I used Blender's lightmap UV generator with a generous amount of spacing to avoid bleed.

    I uploaded some examples of my issue and a basic idea of my Bakery settings.

    Please. Any help is appreciated. I'm lost at this point and I've been working on this for weeks. The constant baking with no change is super frustrating.
     

    Attached Files:

  11. lulubosss

    lulubosss

    Joined:
    Sep 27, 2018
    Posts:
    2
    Hello, did you try to activate the "Denoise fix bright edge" box in the experimental section (Settings/Experimental) in your lightmapping tasks?
     
  12. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Are objects double-sided by any chance? Does it happen if you disable denoising completely? Is dominant direction mode active (does it happen without it?)?

    That's probably a different kind of edge.
     
    Zoidberg656 likes this.
  13. Zoidberg656

    Zoidberg656

    Joined:
    Sep 26, 2018
    Posts:
    17
    All my faces only have one normal and they aren't double sided. Disabling denoising doesn't do anything and neither does dominant direction mode, on or off.
     

    Attached Files:

  14. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    I could be totally wrong, but you said you have manually prepared
    UV0: normal UVs
    UV1: lightmap UV set

    Could it be that you are baking to the UV0 ones?
    You can check by comparing your lightmap UV layout with the layout in the baked lightmap. Is it the same?
     
  15. Zoidberg656

    Zoidberg656

    Joined:
    Sep 26, 2018
    Posts:
    17
    All the furniture and walls are texture atlased. If the wrong UV was used, there would be hundreds of artifacts.
     
  16. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    Just saying: when you have set up UV1 for the lightmap in Blender, it is not automatically used after import.

    You should use
    a Lightmap Group asset with the Original UV setting
    and make sure your UV1 lightmap set up in Blender is used.
     
    Last edited: Aug 2, 2019
    guycalledfrank likes this.
  17. Zoidberg656

    Zoidberg656

    Joined:
    Sep 26, 2018
    Posts:
    17
    How do I go about doing this to make sure?
     
  18. DEEnvironment

    DEEnvironment

    Joined:
    Dec 30, 2018
    Posts:
    437
    Hello, I have a question that someone could perhaps help clear up.

    General best practice for lighting overlap:
    we understand that Unity can fit 4 masks into the 4 channels of a single texture and that we should limit lighting overlaps to below 4...

    1. Is this cumulative over a single scene or restricted to light grouping? I.e. should we have no more than 4 lights overlapping any one location, or no more than 4 overlapping lights total in a single scene?

    2. Does this apply to all light types, or do they have different rules?
     
  19. JosephHK

    JosephHK

    Joined:
    Jan 28, 2015
    Posts:
    40
    Hello,

    From what I understand, Bakery's subtractive lighting mode is not the same as Unity's.

    If I am correct, Bakery's subtractive lighting mode also bakes direct light (in mixed lighting mode) into light probes,
    while Unity's subtractive lighting mode only bakes indirect light into light probes.
     
  20. aunity91

    aunity91

    Joined:
    May 20, 2019
    Posts:
    6
    I'm getting this error every time I try to bake, right after it finishes exporting the scene files and before it starts baking. What is the cause of this?

    Edit: lowering texels and GI bounces solved the problem, although I'm still wondering if there's a way to keep higher baking settings without hitting the C++ runtime error.

    Note: texels lowered from 100 to 50 (the scene is indoor), bounces lowered from 4 to 2.
    Working with an NVIDIA GTX 960 4GB.

    http://bysammy.com
     

    Attached Files:

    • c++.JPG
      c++.JPG
      File size:
      25.1 KB
      Views:
      592
    Last edited: Sep 1, 2019
  21. Zoidberg656

    Zoidberg656

    Joined:
    Sep 26, 2018
    Posts:
    17
    How can I fix this little issue? Most rounded surfaces do this at the seams.
     

    Attached Files:

  22. elamhut

    elamhut

    Joined:
    Sep 13, 2013
    Posts:
    45
    Hey Guy,

    I'm getting the following error when trying to bake a specific scene here, can you gimme a hand?

     
  23. lulubosss

    lulubosss

    Joined:
    Sep 27, 2018
    Posts:
    2
    Did you try to use "Fix seams" in the lightmapping tasks?

    And maybe increase the scale in lightmap of the objects?

     
  24. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Actually that was pretty good advice. Since your UV1 comes from Blender, you should definitely use a compatible lightmap resolution. Try putting a Lightmap Group on your room and setting the packing mode to "Original UV". Then set the group resolution to anything >= 3/padding (e.g. if in Blender you used a padding value of 0.005 in UV space, then the minimum sufficient resolution is 3/0.005 = 600, rounded to the next power of two -> 1024).
    Note that "Original UV" mode implies all child objects share a single packed UV layout, not separate UV maps per object.
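    (A tiny helper expressing that rule of thumb; the 3/padding constant comes from the paragraph above, the helper itself is not Bakery code:)

    Code (CSharp):
    using UnityEngine;

    public static class LightmapResHelper
    {
        // Minimum Lightmap Group resolution for "Original UV" mode, given the UV-space padding used in Blender.
        public static int MinResolutionForPadding(float uvPadding)
        {
            int minRes = Mathf.CeilToInt(3f / uvPadding);  // e.g. 3 / 0.005 = 600
            return Mathf.NextPowerOfTwo(minRes);           // -> 1024
        }
    }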

    Looks like the same problem, padding not matching resolution.

    Seam fixer can fix discontinuities, but in this case there is a clear pixel-sized UV overlap.

    Not cumulative. Lights are split into separate overlapping groups that don't depend on each other.

    It does, but there are some caveats:
    - Directional lights take a channel in all shadowmasks as they are global. If you have fully shadowed areas you may try to separate them in a Lightmap Group and use bitmasks to locally exclude directionals. This will save you a channel for points/spots.
    - Spot light overlaps are computed inaccurately at the moment. They are treated as bounding spheres (like points) instead of cones. This is on my "geez, I have to fix this" list.

    Huh. Not sure then how Unity occludes the main light on dynamic receivers. Do they bake a single occlusion probe? Occlusion probe data is inaccessible and they don't mention it in the docs much, so this can be hard to fix.

    Most likely it goes out of RAM (main RAM, not video memory) during scene export. More info on your scene polycount/amount of lightmaps generated and RAM size will help.

    Bounces don't affect memory, so you can leave them high.

    Ah, this one should never normally happen, but it can if:
    - You have too many lightmaps.
    - You have weird UV values (like NaNs).
    It should still bake after it, but you might get problems like incorrect GI from some objects.
    What's the lightmap count in this scene? Any unusual/procedural geometry?
     
    Last edited: Aug 5, 2019
    keeponshading and DEEnvironment like this.
  25. Resident-Emil

    Resident-Emil

    Joined:
    Jun 23, 2015
    Posts:
    4
  26. ephraimmiah

    ephraimmiah

    Joined:
    Jul 17, 2019
    Posts:
    2
    We downloaded our latest version on June 12th this year, so I am pretty sure it's 1.6, even though there's no place to really "find" a version number, neither in the manual nor in the menu dropdown ;) (it would be a nice addition to have that information somewhere accessible).

    This was regarding my bug report on page 65.
     
  27. JosephHK

    JosephHK

    Joined:
    Jan 28, 2015
    Posts:
    40
    You can probably just rely on the result of Unity's occlusion probe baking.
    It changes not only the light probes but also the lights.

    I can achieve Unity's subtractive mode using Bakery via a small modification.

    First, remove

    Code (CSharp):
    // Set shadowmask parameters on lights
    for (int i = 0; i < storage.bakedLights.Count; i++)
    {
    #if UNITY_2017_3_OR_NEWER
        if (storage.bakedLights[i] == null) continue;

        int channel = storage.bakedLightChannels[i];
        var output = new LightBakingOutput();
        output.isBaked = true;
        if (channel < 0)
        {
            output.lightmapBakeType = LightmapBakeType.Baked;
        }
        else
        {
            output.lightmapBakeType = LightmapBakeType.Mixed;
            output.mixedLightingMode = channel > 100 ? MixedLightingMode.Subtractive : MixedLightingMode.Shadowmask;
            output.occlusionMaskChannel = channel;
            output.probeOcclusionLightIndex = storage.bakedLights[i].bakingOutput.probeOcclusionLightIndex;
        }
        storage.bakedLights[i].bakingOutput = output;
    #endif
    }
    in ftLightmap.cs, which overrides Unity's baking results.

    Then, render the lightmaps using the Full Lighting render mode.
    And finally, render the light probes using Indirect mode with the occlusion probes option enabled.

    It breaks some features of Bakery, but I don't mind, as I just want Unity's subtractive lighting mode.
     
    keeponshading likes this.
  28. Deleted User

    Deleted User

    Guest

    Hi,

    Does the plugin work with an NVIDIA P5000?

    Thanks.
     
  29. Deleted User

    Deleted User

    Guest

  30. elamhut

    elamhut

    Joined:
    Sep 13, 2013
    Posts:
    45
    The lightmap count is big; it was a big scene. When I reduced the texel size I stopped getting this error. I could also have had UV issues, since we're currently redoing all our UV2s (so we can uncheck Unity's Auto UV2). No procedural geometry in the game, but we do have a lot of ProBuilder stuff.

    Also, when making a shader, how can I make sure the emissive in our custom shader will be baked by Bakery? Right now the only emissives we're able to bake come from the Standard Shader. We've tried a lot of things to get our custom shaders to bake, but we've yet to be successful.
     
    guycalledfrank likes this.
  31. E-Cone

    E-Cone

    Joined:
    Jul 4, 2013
    Posts:
    46
    Right. I put my light sets into different GameObjects and disable them before baking.
     
  32. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Hi! To make sure, you can try a little benchmark from this post: https://forum.unity.com/threads/bakery-gpu-lightmapper-v1-6-released.536008/page-65#post-4781201
    If there are no errors in the generated log file, you're good to go :)

    See: https://geom.io/bakery/wiki/index.php?title=Manual#Material_compatibility
     
  33. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    Hello Frank,
    on the main page it is written that LODs are supported, but I am not able to make it work and always end up with black spots from LODs self-shadowing. What is the way to make it work, please?

    Using Unity 2019.1.51 + Distance Shadowmask.

    Thanks
     
  34. tkslan

    tkslan

    Joined:
    Sep 30, 2016
    Posts:
    28
    Hi, is there an option to enable SH specular in LWRP?
     
  35. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Do you try to combine Lightmap Groups with LODs? Any LOD-related messages in the console?

    Not at the moment, it's pretty hard to inject it into the new LWRP (URP?).
     
    tkslan likes this.
  36. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    Not really, I haven't seen any info about LODs in the docs, so I haven't done anything special.
    Baking runs through without errors, and when not using LODs it looks nice.

    EDIT: Found the problem - we are using only 2 lightmaps, LOD0 and LOD1; every other LOD is Lightmap Static but its scale is 0 (we assign the lightmap of LOD1 through C#). If scale > 0 it works, but if scale == 0 self-shadowing occurs. Is there any workaround? Unity's lightmappers are OK with it.
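    (For reference, a minimal sketch of the kind of C# lightmap assignment described above, copying LOD1's baked lightmap data onto the higher LODs; the exact component wiring is an assumption:)

    Code (CSharp):
    using UnityEngine;

    public class CopyLodLightmap : MonoBehaviour
    {
        public Renderer lod1Renderer;  // the renderer that actually got baked
        public Renderer[] otherLods;   // LOD2+ renderers with scale-in-lightmap = 0

        void Start()
        {
            foreach (var r in otherLods)
            {
                r.lightmapIndex = lod1Renderer.lightmapIndex;
                r.lightmapScaleOffset = lod1Renderer.lightmapScaleOffset;
            }
        }
    }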
     
    guycalledfrank likes this.
  37. deltamish

    deltamish

    Joined:
    Nov 1, 2012
    Posts:
    58
    Any comparison between PLM & Bakery, in terms of physically accurate light distribution, performance, etc.?
     
  38. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    Hi, a short question. I am trying to find a way to bake my GPU-instanced trees around a race track. There are 178 of them, and the Bakery bake for one tree, with bark and leaves, fits perfectly in a 512px lightmap (256px is possible too).

    Would it be possible to generate a
    "Bakery Lightmap Group Instance" for the trees, with Bakery rendering all the lightmaps into a Texture2DArray with a textureIndex?

    Code (CSharp):
    Texture2D[] textures;
    int textureWidth = 512;
    int textureHeight = 512;

    Texture2DArray textureArray = new Texture2DArray(textureWidth, textureHeight, textures.Length, TextureFormat.RGBA32, false);

    for (int i = 0; i < textures.Length; i++)
    {
        Graphics.CopyTexture(textures[i], 0, 0, textureArray, i, 0); // i is the index of the texture
    }

    material.SetTexture("_Textures", textureArray);
    After the bake, the tree GPU instances could be set up and rendered with their unique lightmap by setting the textureIndex.
    (Needs a Bakery shader.)

    UNITY_SAMPLE_TEX2DARRAY takes the texture array as its first parameter and a float3(uvx, uvy, textureIndex) for the UV instead of a regular float2(uvx, uvy).

    To declare the parameters of each instance, use UNITY_DEFINE_INSTANCED_PROP.

    To retrieve the parameters of each instance, use UNITY_ACCESS_INSTANCED_PROP.

    To send these parameters to the shader (see the sketch after this list):

    • Create a MaterialPropertyBlock object.
    • Set the parameters of each instance with MaterialPropertyBlock.SetFloatArray (or any other SetXXX method).
    • Send the MaterialPropertyBlock to the shader via MeshRenderer.SetPropertyBlock or Graphics.DrawMesh.
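    A minimal sketch of that per-instance setup; the _TextureIndex property name and the instanced-drawing path are assumptions for illustration, not Bakery API:

    Code (CSharp):
    using UnityEngine;

    public class InstancedLightmapIndex : MonoBehaviour
    {
        public Mesh treeMesh;
        public Material treeMaterial;        // uses the Texture2DArray shader below; enableInstancing must be on
        public Matrix4x4[] instanceMatrices; // per-tree transforms
        MaterialPropertyBlock props;

        void Start()
        {
            props = new MaterialPropertyBlock();
            var indices = new float[instanceMatrices.Length];
            for (int i = 0; i < indices.Length; i++) indices[i] = i; // one lightmap slice per tree
            props.SetFloatArray("_TextureIndex", indices);
        }

        void Update()
        {
            Graphics.DrawMeshInstanced(treeMesh, 0, treeMaterial,
                instanceMatrices, instanceMatrices.Length, props);
        }
    }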
    Texture2DArray Example Shader

    Code (CSharp):
    Shader "Custom/Texture2DArraySurfaceShader"
    {
        Properties
        {
            _Textures("Textures", 2DArray) = "" {}
        }

        SubShader
        {
            Tags { "RenderType"="Opaque" }

            CGPROGRAM

            #pragma surface surf Standard fullforwardshadows
            #pragma target 3.5
            #include "UnityCG.cginc"

            UNITY_DECLARE_TEX2DARRAY(_Textures);

            struct Input
            {
                fixed2 uv_Textures;
            };

            UNITY_INSTANCING_CBUFFER_START(Props)
                UNITY_DEFINE_INSTANCED_PROP(float4, _Color)
                UNITY_DEFINE_INSTANCED_PROP(float, _TextureIndex)
            UNITY_INSTANCING_CBUFFER_END

            void surf (Input IN, inout SurfaceOutputStandard o)
            {
                // Sample the array slice chosen by the per-instance index.
                fixed4 c = UNITY_SAMPLE_TEX2DARRAY(_Textures, float3(IN.uv_Textures, UNITY_ACCESS_INSTANCED_PROP(_TextureIndex))) * UNITY_ACCESS_INSTANCED_PROP(_Color);
                o.Albedo = c.rgb;
                o.Alpha = c.a;
            }

            ENDCG
        }
        FallBack "Diffuse"
    }
    Source
    https://www.reddit.com/r/Unity3D/co..._source=amp&utm_medium=&utm_content=post_body

    Or is there maybe a better way, using bake to vertex, for GPU instancing? The problem there is that I would lose my wind animation.
     
    Last edited: Aug 11, 2019
  39. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    I am working on a detailed one; around 2 months to go.
    Octane, VRay, Cycles, Bakery, PLM.

    Previous performance results on an asset in Unity 2019.1,
    which you can try yourself.

    Performance
    Scene ArchViz Pro 6

    Bakery 1.6.
    00h29m19s
    LightMap Quality *****

    PLM
    05h40m19s
    LightMap Quality **

    Enlighten+FG
    12h34m08s
    LightMap Quality *****

    More details here, plus some multi-GPU optimisation proposals for the Bakery part... (click on "view attachments" for settings and results)
    https://forum.unity.com/threads/bakery-gpu-lightmapper-v1-6-released.536008/page-65#post-4763765

    ArchViz 6 is not a general example, but it shows a trend.
    Visual quality and performance are top-notch, even in VR and on Android, because it's baked. :)

    I am working on a purely calibrated IBL one with measured PBR materials (XTex). It will allow a more detailed comparison to ground-truth path-traced reality.
     
    Last edited: Aug 11, 2019
    guycalledfrank and deltamish like this.
  40. Ruuubick

    Ruuubick

    Joined:
    Apr 20, 2014
    Posts:
    17
    Hi Mr F !

    I'm having the "GenerateGBufferMap error" as well, but only after updating to 1.6 (from 1.551) and on a 2080 Ti. I never had issues previously, and going down to 1024 max res still produces this error. What else could I try? (I haven't been able to successfully import an older Bakery version into my project; if there's a unitypackage somewhere, that might solve it.) I'm also using the very default settings for the render.

    Update: Turns out that while trying out 1.6, I enabled xatlas for my first bake. What xatlas did was remake all the lightmap UVs for all my meshes in the scene. Those UVs were extremely tight and packed, and on the checker view the checkers were very, very small everywhere. My problem was solved by redoing every single lightmap UV for all the meshes in the scene. Experimental, sure, but a bit dangerous!

    https://imgur.com/L8XlFvf Original UV lightmaps
    https://imgur.com/Q4LLQ29 xatlas UV lightmaps
     
    Last edited: Aug 12, 2019
  41. namdo

    namdo

    Joined:
    Feb 23, 2015
    Posts:
    200
    I have a bit of a lighting question. I have Bakery working fine: I set shadows and lights to baked and all the items to static. However, my playable characters have absolutely no light on them, as they aren't static.

    I have some area lights and 1 directional light, all set to baked.
     
  42. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    Ahh. Objects with scale_in_lightmap=0 are considered by Bakery as shadow/GI casters which don't get their own lightmaps, and they go through a completely different path where their LOD levels are likely being omitted. Will fix!

    I had one a year ago, but PLM is being changed every month, so it's hard to keep comparisons up to date. keeponshading does amazing benchmarks though, and you can just go through the thread/reviews.

    This is very specific and won't work on every platform (e.g. WebGL1 and low-end mobiles don't support texture arrays, and Safari on iOS still doesn't support WebGL2). You can use a normal Lightmap Group (Pack Atlas) and the "override resolution" checkbox to force every tree to have the same sub-lightmap size (this is what I did with the bushes in the Sponza scene). You can then use this atlas as is, or make a script to convert it to a Tex2DArray (given identical tree lightmap sizes it is fairly straightforward).
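    (For illustration, a rough sketch of such a conversion, slicing an atlas of equally sized tree lightmaps into a Texture2DArray; the tile size and layout are assumptions, this is not shipped Bakery code:)

    Code (CSharp):
    using UnityEngine;

    public static class AtlasToArray
    {
        public static Texture2DArray Convert(Texture2D atlas, int tileSize)
        {
            int tilesPerRow = atlas.width / tileSize;
            int count = tilesPerRow * (atlas.height / tileSize);
            var array = new Texture2DArray(tileSize, tileSize, count, atlas.format, false);

            for (int i = 0; i < count; i++)
            {
                int x = (i % tilesPerRow) * tileSize;
                int y = (i / tilesPerRow) * tileSize;
                // GPU-side copy of one atlas tile into slice i (formats must match).
                Graphics.CopyTexture(atlas, 0, 0, x, y, tileSize, tileSize, array, i, 0, 0, 0);
            }
            return array;
        }
    }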

    Keep it up! Your benchmarks are extremely useful!

    Are there any more details in the error message? E.g. "Can't read texture", "Can't write texture", etc?

    I have them somewhere. PM me if you won't be able to resolve the 1.6 problem.

    Probably I should make a public archive with all the different versions?

    If your models were using automatically generated UVs (not coming from a modeling package), then this is what it is expected to do, yes. Switching the unwrapper back to Default and rendering again should remake them as they were.

    Sorry, what do you mean by that? You don't have to do anything manually for every model. If you mean waiting for reunwrapping, then... eh, yes. Maybe I need to throw a warning "are you sure" message with estimated time or something.

    Take a look at mixed lighting and light probes:
    https://geom.io/bakery/wiki/index.php?title=Manual#Render_mode
    https://docs.unity3d.com/Manual/LightMode-Mixed.html
     
  43. namdo

    namdo

    Joined:
    Feb 23, 2015
    Posts:
    200
    Oh thanks. I knew light probes existed but I didn't actually know what they were for.
     
  44. namdo

    namdo

    Joined:
    Feb 23, 2015
    Posts:
    200
    I was wondering if you could help with this.

    I use EasyCombine to combine my meshes to reduce batches and draw calls. When I bake using Bakery, I get this.



     
  45. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,671
    I don't know how easycombine works, but you need to make sure it generates proper lightmapping UVs for combined meshes.
     
  46. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    937
    The complete setup is waiting for the compute device split from the Bakery Jobs.)
     
  47. blanx

    blanx

    Joined:
    Feb 26, 2016
    Posts:
    73
    Hi,

    Is there a significant reason for using TextureImporterCompression.CompressedHQ in ftTextureProcessor.cs?
    This gives us extreme texture compression times on Android builds.
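    (For context, a sketch of the importer setting in question via the standard TextureImporter API; this is not Bakery's actual code, just what switching CompressedHQ to the normal-quality enum value would look like:)

    Code (CSharp):
    using UnityEditor;

    public static class LightmapCompressionTweak
    {
        public static void UseNormalQuality(string assetPath)
        {
            var importer = (TextureImporter)AssetImporter.GetAtPath(assetPath);
            importer.textureCompression = TextureImporterCompression.Compressed; // instead of CompressedHQ
            importer.SaveAndReimport();
        }
    }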
     
  48. DEEnvironment

    DEEnvironment

    Joined:
    Dec 30, 2018
    Posts:
    437
    I also don't use this tool; however, if it is generating good UVs and a good mesh, one thing you could look at is the way it combines/atlases the textures. In most cases, if you combine a large number of complex items, you may need to increase the atlas texture resolution or suffer quality loss. You may need to chat with the dev of your specific asset to debug. Hope this helps.
     
  49. BasenjiGames

    BasenjiGames

    Joined:
    Mar 13, 2014
    Posts:
    3
    Any suggestions for how to bake a very large interconnected scene without running out of memory? I have a large environment that I can bake at a resolution of 60-70, but at 100 it always crashes and gives an out-of-memory error. There is plenty of disk space and the GPU has 6 GB of VRAM. I understand the scene may be overtaxing it, but are there any workarounds, and would multiple GPUs solve this? I tried forcing VRAM optimization on and looked at the troubleshooting guide.

    Thank you
     
  50. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    Thanks Frank, I have found a temporary solution - commented out some lines in the code - but I will be happy with your official release (my temporary solution could introduce some hidden problems) - any ETA?

    Another thing that would be nice to add: optional mipmap generation - without mipmaps, the textures alias at distance. Could you point me to the script and line where the textures are created, for a temporary fix? As far as I know, it can only be enabled in the Texture2D constructor (if you don't want to do it afterwards in the editor, which means reapplying it for hundreds of textures).
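    (A sketch of the constructor-level switch being referred to, assuming runtime-created Texture2D lightmaps; the mipChain argument has to be true at creation time:)

    Code (CSharp):
    using UnityEngine;

    public static class LightmapTextureFactory
    {
        public static Texture2D Create(int width, int height)
        {
            bool mipChain = true; // generate mipmaps so lightmaps don't alias at distance
            return new Texture2D(width, height, TextureFormat.RGBAHalf, mipChain);
        }
    }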

    Cheers