
Quality of mipmaps from RenderToCubemap is not consistent with that of the default reflection probe.

Discussion in 'General Graphics' started by syenet, Oct 14, 2016.

  1. syenet


    May 13, 2013
    I really need to learn about the cubemap and its mipmaps used by the default reflection probe. The first problem is that the quality of the mipmaps differs between the default reflection probe's cubemap and a realtime cubemap generated via script (Camera.RenderToCubemap). Here is a comparison (left: default, right: script):
    Code (CSharp):
    rtCubemap = new RenderTexture(rtCubemapSize, rtCubemapSize, 16);
    rtCubemap.useMipMap = true;
    rtCubemap.dimension = UnityEngine.Rendering.TextureDimension.Cube;
    rtCubemap.name = "rt_reflprobe_cubemap_" + Time.realtimeSinceStartup;
    rtCubemap.hideFlags = HideFlags.HideAndDontSave;
    So there's a really huge difference in blur quality: with Camera.RenderToCubemap, things appear far shinier and smoother. I wonder whether there is a way to achieve the same convolution result as the default one produced by the reflection probe.

    I tried setting RenderTexture.mipMapBias, but then all the mipmaps are blurred more and there is no longer a sharp reflection at mip 0 (it gets blurred too). Besides, the quality is still not consistent with the default one.

    As for why I do it this way: I want realtime reflections for dynamic objects like characters and particles, but I can't afford a realtime reflection probe that renders all scene objects, static and dynamic, every frame or every few frames. If I use two reflection probes, one static for the scene and one dynamic for the dynamic objects, and blend them together, it looks good; but I actually have no clue how the default reflection probe's cubemap asset (the baked PNG) is generated or how to manipulate it.

    Therefore I currently use the default baked reflection probe's cubemap as the skybox for the camera that generates the realtime cubemap at runtime, so the scene objects are effectively baked into the background while only the dynamic objects are rendered. This solution mostly works, except for the problem described above. Are there other approaches worth trying?
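A minimal sketch of that setup, assuming the baked probe cubemap has been assigned to a skybox material (`bakedSkyboxMat` and `dynamicLayerMask` are placeholder names, not from the thread):

```csharp
using UnityEngine;

public class DynamicReflection : MonoBehaviour
{
    public Material bakedSkyboxMat;    // assumed: Skybox/Cubemap material using the baked probe cubemap
    public LayerMask dynamicLayerMask; // assumed: only dynamic objects (characters, particles)
    public int rtCubemapSize = 128;

    Camera reflCam;
    RenderTexture rtCubemap;

    void Start()
    {
        rtCubemap = new RenderTexture(rtCubemapSize, rtCubemapSize, 16);
        rtCubemap.dimension = UnityEngine.Rendering.TextureDimension.Cube;
        rtCubemap.useMipMap = true;

        reflCam = new GameObject("ReflCam").AddComponent<Camera>();
        reflCam.enabled = false;                      // rendered manually via RenderToCubemap
        reflCam.clearFlags = CameraClearFlags.Skybox; // baked scene appears as background
        reflCam.cullingMask = dynamicLayerMask;       // render dynamic objects only
        var sky = reflCam.gameObject.AddComponent<Skybox>();
        sky.material = bakedSkyboxMat;                // per-camera skybox override
    }

    void LateUpdate()
    {
        reflCam.transform.position = transform.position;
        reflCam.RenderToCubemap(rtCubemap); // note: mips are plain downsamples, not convolved
    }
}
```

This reproduces the approach described above, including its problem: the resulting mip chain is a simple downsample, not a specular convolution.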

    I really need some help; I hope this thread gets some replies. Thanks.
  2. AcidArrow


    May 20, 2010
    RenderToCubemap just makes a cubemap with mips, while Unity's reflection probes apply proper specular convolution to the mips.

    And that's probably why RenderToCubemap is faster than the built-in realtime reflection probes: the probes do convolution for the mips.

    You could do convolution on your own, but then the process would probably become as slow as the built in one, so...
  3. syenet


    May 13, 2013
    I'm truly grateful for your reply. I'd like to try applying the convolution myself, but I don't know which interfaces would let me do that. I don't know how to add a post effect to RenderToCubemap, let alone apply convolution to each mip level. About performance: I'm rendering to a small render texture (128 or 256), rendering only the relevant objects with their shaders replaced by cheap equivalents, and I could update the faces of the cubemap over a few frames (like the reflection probe's time slicing does), or skip updating insignificant faces. So if there's any interface to manipulate the mipmaps, I definitely want to try it.

    If there's no further control beyond RenderToCubemap, is there any other approach to render each face separately and combine them manually? Could someone please share anything about the internal structure and content of a cube-dimension RenderTexture, and of the baked cubemap PNG? I would like to understand the basics. Thanks.
  4. bgolus


    Dec 7, 2012
    Couldn't you use culling masks to do this with a reflection probe? Set the clear flag to solid color, then attach a box using your skybox material, hidden from the main camera, to the reflection probe. The reflection probe will render the "static skybox" and any dynamic objects (which I assume you were using culling masks on to begin with) and get the proper convolutions.
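One possible way to wire that suggestion up in a script; the layer numbers are assumptions, not from the thread:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ProbeSetup : MonoBehaviour
{
    public ReflectionProbe probe;
    public int dynamicLayer = 8;   // assumed layer for dynamic objects
    public int skyboxBoxLayer = 9; // assumed layer for the hidden skybox box

    void Start()
    {
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        probe.clearFlags = ReflectionProbeClearFlags.SolidColor;
        // Render only the skybox box and the dynamic objects:
        probe.cullingMask = (1 << dynamicLayer) | (1 << skyboxBoxLayer);
        // Hide the skybox box from the main camera:
        Camera.main.cullingMask &= ~(1 << skyboxBoxLayer);
    }

    void Update()
    {
        probe.RenderProbe(); // the probe convolves the mips itself
    }
}
```

The key point is that the probe, not the script, performs the specular convolution, so the mip quality matches the baked result.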

    Honestly, doing the convolutions isn't something that's going to be easy on your own. I don't think the process Unity uses is easily replicable with the tools they expose, and certainly nowhere near the quality or speed they achieve.

    Theoretically you could make a render texture, set its dimension to cube, and then do a blit on that cubemap render texture. I have no idea if Unity knows how to blit to a cubemap, though, and doing it by copying sides around on the CPU is going to be really, really slow.
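For the per-face route, Graphics.SetRenderTarget does accept a mip level and cubemap face, so a hedged sketch of drawing into every face and mip with a custom material might look like this (`blurMat` is a placeholder; the actual filtering shader is left unspecified):

```csharp
using UnityEngine;

public static class CubemapBlit
{
    // Sketch: draw a full-screen quad with blurMat into every face and mip
    // of a cube-dimension RenderTexture. Assumes the RT was created with
    // useMipMap = true and autoGenerateMips = false, and that blurMat
    // samples the source cubemap itself.
    public static void BlitAllFaces(RenderTexture cubeRT, Material blurMat)
    {
        int mipCount = (int)Mathf.Log(cubeRT.width, 2) + 1;
        for (int mip = 0; mip < mipCount; mip++)
        {
            for (int face = 0; face < 6; face++)
            {
                Graphics.SetRenderTarget(cubeRT, mip, (CubemapFace)face);
                GL.PushMatrix();
                GL.LoadOrtho();
                blurMat.SetPass(0);
                GL.Begin(GL.QUADS); // full-screen quad in ortho space
                GL.TexCoord2(0, 0); GL.Vertex3(0, 0, 0);
                GL.TexCoord2(1, 0); GL.Vertex3(1, 0, 0);
                GL.TexCoord2(1, 1); GL.Vertex3(1, 1, 0);
                GL.TexCoord2(0, 1); GL.Vertex3(0, 1, 0);
                GL.End();
                GL.PopMatrix();
            }
        }
    }
}
```

This only shows how to target individual faces and mips on the GPU; writing a blur shader that matches Unity's specular convolution is the hard part, as noted above.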
  5. neoshaman


    Feb 11, 2011
    You can also look at the free asset LUX, which has code to create and convolve custom cubemaps; maybe you can learn from it.
  6. syenet


    May 13, 2013
    Hi bgolus, thank you for your suggestion. Rendering a cube as the skybox for the Unity reflection probe's internal camera generally works, except that I can't directly use the replacement-shader feature to render things with cheap shading (e.g. unlit). I could still reduce the probe's shading cost by creating a dedicated reflection layer containing a cheap-material duplicate of each object, and making the reflection probe render that layer exclusively; pretty cumbersome, but feasible. One performance issue with duplicating objects that I can think of at the moment concerns particles: duplicating particle systems may have an impact on the CPU side.

    About reflection probe performance, I did some simple profiling comparing Unity's reflection probe with the Camera.RenderToCubemap interface. The setup: both render at 1024x1024 (to scale up the overhead), with the directional light's shadow enabled, rendering all 6 faces and all mipmaps in one frame (no optimization applied, just to make the difference easier to see). Here is the result (top: reflection probe, bottom: RenderToCubemap):

    Platform: Win10, Core i5, GTX 1060, VR (Vive)
    I highlight some of the overhead here:

    RED: currently I have one directional light with shadows enabled, so Unity uses UpdateDepthTexture() and later collects the shadows into the usual screen-space shadow texture; no difference on the CPU or GPU side. In my case, most of this can be optimized away because I don't need lighting details in reflections. For Camera.RenderToCubemap() I could set the shadow distance to zero and replace shaders with unlit ones, or simply disable the lights before rendering the cubemap. For the reflection probe, I could simply set the probe's shadow distance and set its culling mask to render only the layer of cheap duplicates.

    GREEN: this is mostly occlusion-culling overhead on the CPU. Since I'm rendering dynamic objects, no occlusion culling is actually needed here; to optimize, just disable occlusion culling on the reflection camera or probe.

    BLUE: Unity's reflection probe applies its own convolution here; the cost is directly and solely bound to the resolution. At 1024x1024 the cost is quite considerable, but it seems to affect only the GPU. I guess the convolution works like Graphics.Blit(), happening directly on the GPU, and at lower resolutions (128 or 256) the overhead is really small. So, apart from the layer of cheap duplicates, the cost of getting the default convolution on the mipmaps is acceptable (if I'm incorrect here, please correct me).

    YELLOW: if I use Camera.RenderToCubemap, this part has a really huge impact on the GPU side, while the reflection probe has none at all. Below are the details of this part in comparison (left: probe, right: RenderToCubemap):
    The overhead seems to occur when rendering each side of the cubemap. According to the profiler, both invoke Camera.RenderToCubemap internally; I don't understand what causes the difference in RT.SetActive() overhead. This part actually makes up the majority of the total GPU cost, and even at lower resolutions its share is still almost half. If I have to choose RenderToCubemap anyway, I have no idea how to optimize this part away.

    I'm also curious about UpdateDepthTexture; I have a related thread about whether it is necessary when using baked lightmaps:

    If anyone's interested or has any suggestion, I'd be grateful.
  7. syenet


    May 13, 2013
    That's a good reference, thank you. I'll spend some time checking it out.
  8. jvo3dc


    Oct 11, 2013
    This is a bit of an old topic, but I'm having similar considerations. I'm working on a system that dynamically updates ReflectionProbes based on my own importance metric (which is not fixed, but depends on the camera position, among other things).

    The issue I'm having is that a ReflectionProbe needs to be enabled in order to call RenderProbe, which means the ReflectionProbe has to be in use while it is being updated. That's not really the level of control I'm looking for: I'd like to be able to update a ReflectionProbe while it's not being used. Otherwise I'd have to use Camera.RenderToCubemap and filter the mipmaps myself.

    For now I'll work with the built-in solution, but that does mean I have to keep out-of-date ReflectionProbes in use, because that's the only way to update them through RenderProbe.

    Edit: I worked around the issue by adjusting the importance. In order to:
    1. not have to do all the mipmap filtering myself, and
    2. make sure the ReflectionProbe is not visible while it's being updated,
    I set the importance lower so the ReflectionProbe gets skipped. Once the update is done, I set the importance back up again.
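A sketch of that workaround as a coroutine, assuming the probe's importance is lowered so other probes win the blend while it updates:

```csharp
using System.Collections;
using UnityEngine;

public class ProbeUpdater : MonoBehaviour
{
    public ReflectionProbe probe;

    public IEnumerator UpdateProbe()
    {
        int oldImportance = probe.importance;
        probe.importance = 0;               // probe loses blend priority while updating
        int renderId = probe.RenderProbe(); // the probe must be enabled for this call
        // Wait until the (possibly time-sliced) update has finished:
        while (!probe.IsFinishedRendering(renderId))
            yield return null;
        probe.importance = oldImportance;   // back in use with fresh data
    }
}
```

IsFinishedRendering takes the render ID returned by RenderProbe, which makes it straightforward to restore the importance only once the convolved cubemap is actually ready.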
    Last edited: Apr 17, 2019