Can I create an extra GBuffer for my custom deferred shader?

Discussion in 'Shaders' started by hypnoslave, Jun 30, 2019.

  1. hypnoslave

    hypnoslave

    Joined:
    Sep 8, 2009
    Posts:
    439
    Hi there. I'm writing a custom Deferred Rendering shader. I've also got some per-object surface shaders that modify GBuffer2's unused alpha channel (by overriding the UnityToGbuffer function). I then use this data as a mask for various rendering techniques in the Deferred Rendering shaders (both shading and reflections).

    The problem is, I need more data than I can cram into this single channel. Is there a way to create a fourth GBuffer that I can cram with data? Is this a bad idea for some reason?
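
    In case it helps, here's roughly what that override looks like: a simplified sketch, not my exact shader. The surface values and _MaskValue are placeholders, and I'm assuming the packing helper in question is UnityStandardDataToGbuffer from UnityGBuffer.cginc.

    #include "UnityCG.cginc"
    #include "UnityGBuffer.cginc"

    half _MaskValue; // per-material mask, e.g. 0.5 or 1.0

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
    };

    void fragDeferred (v2f i,
        out half4 outGBuffer0 : SV_Target0,  // albedo.rgb, occlusion.a
        out half4 outGBuffer1 : SV_Target1,  // specular.rgb, smoothness.a
        out half4 outGBuffer2 : SV_Target2,  // world normal.rgb, unused.a
        out half4 outEmission : SV_Target3)  // emission / ambient
    {
        UnityStandardData data;
        data.diffuseColor  = half3(0.5, 0.5, 0.5);    // placeholder surface values
        data.occlusion     = 1;
        data.specularColor = half3(0.04, 0.04, 0.04);
        data.smoothness    = 0.5;
        data.normalWorld   = normalize(i.worldNormal);

        // standard packing into the first three gbuffers
        UnityStandardDataToGbuffer(data, outGBuffer0, outGBuffer1, outGBuffer2);
        outEmission = half4(0, 0, 0, 1);

        // stash the per-object mask in GBuffer2's otherwise unused alpha
        outGBuffer2.a = _MaskValue;
    }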
     
  2. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,549
    I believe you can define some extra targets to use. The emission buffer whose alpha channel you're using for your data corresponds to SV_Target3, and if Shadowmask is used, a fifth buffer, SV_Target4, gets used as well. Here's an example of Unity Japan using up to SV_Target6:

    https://github.com/unity3d-jp/Frame.../FrameCapturer/Shaders/CopyFrameBuffer.shader
    And the script for setting up the buffers:
    https://github.com/unity3d-jp/Frame.../UTJ/FrameCapturer/Scripts/GBufferRecorder.cs

    If you're not using the Shadowmask lighting mode in your build, though, you could just use SV_Target4.
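
    At the shader end that just means declaring more fragment outputs, something like this rough sketch. The extra targets only do anything if the script side actually binds that many render targets, the way the GBufferRecorder script above does:

    #include "UnityCG.cginc"

    struct FragOutput
    {
        half4 gbuffer0 : SV_Target0; // diffuse
        half4 gbuffer1 : SV_Target1; // spec / smoothness
        half4 gbuffer2 : SV_Target2; // normals
        half4 emission : SV_Target3; // emission
        half4 custom0  : SV_Target4; // extra data, only if a 5th target is bound
        half4 custom1  : SV_Target5; // extra data, only if a 6th target is bound
    };

    FragOutput frag (v2f_img i)
    {
        FragOutput o;
        o.gbuffer0 = half4(0, 0, 0, 1);
        o.gbuffer1 = half4(0, 0, 0, 1);
        o.gbuffer2 = half4(0.5, 0.5, 1, 1);
        o.emission = half4(0, 0, 0, 1);
        o.custom0  = half4(1, 0, 0, 1); // whatever extra data you want
        o.custom1  = half4(0, 1, 0, 1);
        return o;
    }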
     
    Last edited: Jun 30, 2019
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    Can you modify the built-in deferred rendering pipeline to add more render targets? No. You cannot.

    Can you write a fully custom deferred rendering path, or set a custom number of render targets when rendering something manually via command buffers? Yes! Does this help your specific case? No.

    That FrameCapturer example isn’t modifying the built-in deferred renderer at all. It’s just setting up a custom multi render target that it then uses to extract the previously rendered gbuffers, splitting the alpha data into their own single channel targets, along with depth and velocity buffer data, so they can be easily saved out as separate images. Modifying the shadow masking output only works if shadow masking is already enabled, in which case you’d have to override a bunch of other code to prevent it from using the shadow masking path. And you can’t just write to SV_Target4: it’s not bound to anything, and won’t be made available to later shaders unless a fifth target is actually enabled.


    The options are to render out your own “gbuffer” using a replacement shader pass rather than modifying the existing pipeline, to have shaders output their metallic value and calculate the specular color in the deferred lighting shader (which frees up two 8 bit channels in the specular gbuffer), or to skip the built-in rendering path entirely and write a new one from scratch.
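
    To illustrate that second option, here’s a very rough sketch of how a modified deferred lighting shader might reinterpret GBuffer1 once the gbuffer pass writes metallic instead of a specular color. Not drop-in code: the function name and channel layout are made up, and it assumes you’re already maintaining your own copy of Internal-DeferredShading.shader.

    #include "UnityCG.cginc"

    sampler2D _CameraGBufferTexture0; // albedo.rgb, occlusion.a
    sampler2D _CameraGBufferTexture1; // repurposed: metallic.r, custom data.gb, smoothness.a

    void UnpackRepurposedGbuffer (float2 uv,
        out half3 diffColor, out half3 specColor,
        out half smoothness, out half2 customData)
    {
        half4 g0 = tex2D(_CameraGBufferTexture0, uv);
        half4 g1 = tex2D(_CameraGBufferTexture1, uv);

        half metallic = g1.r;
        customData    = g1.gb; // the two freed up 8 bit channels
        smoothness    = g1.a;

        // rebuild the specular color from albedo + metallic, the same way the
        // Standard shader's metallic setup does before it writes to the gbuffer
        specColor = lerp(unity_ColorSpaceDielectricSpec.rgb, g0.rgb, metallic);
        half oneMinusReflectivity = (1 - metallic) * unity_ColorSpaceDielectricSpec.a;
        diffColor = g0.rgb * oneMinusReflectivity;
    }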
     
    Last edited: Jun 30, 2019
    SamFernGamer4k and Invertex like this.
  4. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,549
    What about binding an RWTexture2D to a register for use during the fragment stage instead of compute, and using that as an extra buffer? It would limit you to a higher shader model target, but could that work in theory? (Likely having to manually write to it using the pixel coordinate, I assume.)
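
    Something like this is what I have in mind (rough sketch only): shader model 5.0, with the texture bound from script via Graphics.SetRandomWriteTarget or CommandBuffer.SetRandomWriteTarget, and u1 is just a guess at a free UAV register.

    #pragma target 5.0

    // bound from script with Graphics.SetRandomWriteTarget(1, extraBufferRT)
    // or CommandBuffer.SetRandomWriteTarget(1, extraBufferRT)
    RWTexture2D<float4> _ExtraGBuffer : register(u1);

    half4 frag (float4 screenPos : SV_Position) : SV_Target
    {
        // side channel write at this pixel, independent of the bound render targets
        _ExtraGBuffer[uint2(screenPos.xy)] = float4(1, 0, 0, 1);
        return half4(0, 0, 0, 1);
    }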
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    In theory, sure.
     
  6. hypnoslave

    hypnoslave

    Joined:
    Sep 8, 2009
    Posts:
    439
    mm hmm. Okay! Well that's very clear, thanks (again) bgolus. You've got to be the most helpful person on this forum. Also thanks Invertex for the suggestion.

    I stumbled across another solution but I've got another question - is there some weird way in which info in gbuffer2.a is stored? When I directly output gbuffer2.a in the deferred shader, I get blobs of white around lights that add on one another when they overlap. Is this just some kind of weird secondary phenomenon or is the data in gbuffer2.a somehow different/transformed/something? I'm trying to use it to store/transform values but I need confidence that it doesn't follow some weird set of rules.


    In case you're curious...
    I came up with a solution-ish that might let me bypass the need to write a full render pipeline or force a higher shader model. Since I'm only using the extra channel for mask data, I'm trying to write different values there and use math magic to isolate grey values to use as masks. It's still not quite working perfectly (hence the question above), but perhaps I can get there.

    So first, any given object could write .5, or 1, or whatever, to gbuffer2.a, and then, inside the deferred shader, I create:

    float greyMask = sin((gbuffer2.a)*3.1415926); //will isolate mid grey
    float whiteMask = clamp(gbuffer2.a - .9, 0, 1) * 10; // will isolate white.

    ...that seems to work!!

    I've also tried to use:
    float greyMask2 = pow(sin((gbuffer2.a)*3.1415926*2),4); //which *should* isolate both .25 and .75 together.

    ... but that doesn't work. and it really... should????
     
    SamFernGamer4k likes this.
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    The normal buffer is stored using an ARGB2101010 format render texture. This means the RGB channels get 10 bits of precision, a 0.0 to 1.0 value with 1024 (2^10) steps, and the alpha gets 2 bits of precision. ... That means the alpha can store these four values: 0.0, 0.33, 0.66, and 1.0 ... and that’s it.
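
    Roughly speaking, whatever you write gets snapped to the nearest of those four steps, which is why the sin / pow masks above fall apart:

    // illustrative only: what a 2 bit alpha effectively does to the value you write
    half written = 0.75;                                 // the mask you tried to store
    half stored  = round(saturate(written) * 3.0) / 3.0; // -> 0.666..., not 0.75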
     
  8. hypnoslave

    hypnoslave

    Joined:
    Sep 8, 2009
    Posts:
    439
    Oh, no way! That explains a few things. Good thing I asked.

    Thanks again...
     
    SamFernGamer4k likes this.