Leveraging Stencil buffers for masked post effects

Discussion in 'General Graphics' started by miketucker, Mar 24, 2015.

  1. miketucker

    miketucker

    Joined:
    Sep 11, 2012
    Posts:
    6
I'm hoping to use the (fairly) recently added stencil buffers as a way of creating a quick mask for post effects.
This could be done with multiple RenderTextures and then a blit with a full-screen shader, but I'm wondering what the absolute most efficient approach is, and I figure stencils might be the right avenue?

The image illustrates the desired effect, with a basic blur on the result for the sake of the example.

    Any advice? The closest thing I've come across is:
    http://qiankanglai.me/misc/2015/03/07/unity-posteffect-stencil/
    but the full source is not included, and I'm lost on a few of the concepts.
    Thanks!

    stencil-mask.png
     
  2. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,705
You'll need at least 2 passes... one to draw to the stencil buffer, and another to then read from it while checking whether to output pixels to the color buffer. The stencil buffer is checked pretty early in the pipeline, so I think it is faster to discard pixels that way than to rely on render textures. With textures, the fragment has to go all the way through the pipeline, be a candidate for getting rendered, AND THEN read the texture, AND THEN figure out whether to let some other texture's pixel be output (you can do this in one pass in a shader)... but then again, here there is one write and 2 reads, whereas with stencils there are 2 writes and 2 reads. So I guess you have to benchmark it.
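    In ShaderLab terms, the two passes might look something like this (a minimal sketch, not a complete shader; the ref value 1 is arbitrary, and the two blocks would live in the masked object's shader and the full-screen effect's shader respectively):

    ```shaderlab
    // Pass 1 — in the masked object's shader: unconditionally write 1
    // into the stencil buffer wherever the object draws.
    Stencil
    {
        Ref 1
        Comp Always
        Pass Replace
    }

    // Pass 2 — in the full-screen effect's shader: only output pixels
    // where the stencil buffer already holds 1; everything else is
    // discarded before the fragment is shaded.
    Stencil
    {
        Ref 1
        Comp Equal
    }
    ```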
     
  3. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    128
    ^ This.

    I'm in the exact same situation. I crawled the entire internets (including the unity documentation) and found nothing (else) regarding how to use stencil buffers as masks for post effects. This should be easy in theory, but I have no idea how to implement it in unity.

    This is definitely possible / done before, when you look at stuff like selective bloom, or selectively blurring screenspace fog depending on edge detection etc, or selective depth of field blur etc. You should be able to use a stencil mask for these kinds of things without a problem. (case in point, qiankanglai made it work somehow)

    So, Bump!
     
    lucariolu3d and theANMATOR2b like this.
  4. AlexBM

    AlexBM

    Joined:
    Mar 26, 2015
    Posts:
    16
    The postprocess itself doesn't have any magic inside. All it does is render the camera to a RenderTexture and then (after calling the Blit function) render the simplest possible quad with that texture and the post-processing material. To get a better idea of what's going on, you can implement the Blit function yourself as described here

    http://docs.unity3d.com/ScriptReference/Material.SetPass.html

    So that basically means you need to implement all the mask operations inside that RenderTexture, without worrying much that this is so-called post-processing :) I wouldn't suggest using the stencil at all (moreover, I just can't see how it can work alongside the way post-processing is done). You can just use a simple black-and-white mask and then blend two images with it (processed and original) inside the post-processing material.

    However, you'll need one extra RenderTexture to store the previous screen content somewhere, and some tricks with SetRenderTarget inside the OnRenderImage function.
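    That mask blend inside the post-processing material could be sketched like this (a CG fragment only; `_OriginalTex` and `_MaskTex` are illustrative names you would bind from script before the blit):

    ```hlsl
    sampler2D _MainTex;      // the processed (post-effected) image
    sampler2D _OriginalTex;  // the untouched screen content
    sampler2D _MaskTex;      // black-and-white mask

    fixed4 frag(v2f i) : SV_Target
    {
        fixed4 processed = tex2D(_MainTex, i.uv);
        fixed4 original  = tex2D(_OriginalTex, i.uv);
        fixed  m         = tex2D(_MaskTex, i.uv).r;  // 1 = apply effect
        // white areas of the mask take the processed image,
        // black areas keep the original
        return lerp(original, processed, m);
    }
    ```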

    As for this article (http://qiankanglai.me/misc/2015/03/07/unity-posteffect-stencil/), it seems like that guy skipped something really important in his description, and I even tend to think that he achieved the effect not with the stencil but accidentally, with some tricky combination of cameras and depth buffers.
     
    theANMATOR2b likes this.
  5. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    128
    Cheers for the clarifications!

    Well, then how would one achieve selective bloom like in that article?
    I for one am using stencils because I need some render tricks to display object intersections (similar to this: http://docs.unity3d.com/Manual/SL-Stencil.html ), and then I need to fetch that masked content and sort of bloom just that content - not the whole scene/screen.

    The only way I see this "working" with render textures and without the stencil, is to have one camera render only those special objects to a texture (to achieve "masking" that way), but that won't cut it as the special objects need to be sorted with the rest of the geometry in the scene. (so they need to be rendered with the main camera)

    There may be a way to render to texture (to a screen-sized buffer-type texture) from within my stenciled object's CG shader's fragment program (which I have), and then fetch that texture and bloom it. But I have no idea how that's done. That's actually the first thing I tried to google, but I found nothing apart from high-level concepts, and nothing for Unity.

    [EDIT]
    I know you can do `transform.GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;` which will set a global shader variable called `_CameraDepthTexture`, which I can access in the shader of an object drawn by a different camera. So far so good. BUT as far as I can see you can't use _CameraDepthTexture inside a `Stencil { //zTests }` block, only in CG.
     
    Last edited: Apr 8, 2015
  6. Zicandar

    Zicandar

    Joined:
    Feb 10, 2014
    Posts:
    388
    Ok, my impression is that no one here has actually explained how to use the stencil buffer...
    First of all, when drawing the objects that are to be bloomed (in this case), they need to use a shader pass that sets the stencil to a value that isn't used by anything else, combined with a stencil MASK so that only those bit(s) are affected. Then all other objects need to overwrite that bit if they occlude it.
    Then the post effect needs to copy all the tagged areas of the rendered image out to a new render target. This is because the stencil comparison is done for the pixel being written, not the pixel being read, and the bloom has to affect an area outside the stenciled pixels.
    Now you do a bloom on your copied out texture.
    Then when you merge back in the bloom, merge it back with the original image.

    I hope this makes sense?
    If it does, I can tell you that using the stencil buffer to tag areas for special effects has saved me massive amounts of performance, not to mention made some things viable at all! The reason is that it's "built in" and saves me an extra G-buffer texture, not to mention it's an integer type, so it's easy to use bit masks on it without sampling problems. The trick to using it efficiently is that you do NOT need extra passes to tag the areas!
     
    Arkade, Tudor, braaad and 1 other person like this.
  7. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    128
    What does "copy to new render target" mean in Unity?
    So you mask things with the stencil, then render the masked image from the camera to a RenderTexture? So by that point, will you have rendered the stencil-masked parts as black (to hide the image from the previous camera, because you must use Don't Clear on the second) or something? And you somehow have to do a pass somewhere where you copy everything that isn't black into a new screen-sized texture / "render target"? (Or you just blend, ignoring blacks.) How/where do you do that?
    Or can you somehow pass the stencil buffer to the post effect?

    - Yes, but also, as far as I can tell, you can't use the stencil mask in post-process image effect shaders. You know, the ones you use in `Graphics.Blit(source, renderTexture, blurMaterial)`. (I tried adding a Stencil{} block to Unity's FastBlur image effect shader, and nothing happens: the whole screen is blurred just the same, as if the stencil buffer was emptied or isn't used here at all.)

    So if I can't use the stencil buffer for post fx, my plan is to use another camera, that renders my special bloomable objects (and I set this camera's Clear Flags to Don't clear, to inherit the depth buffer (so they get blended with the rest of the scene)).

    Then I add a large quad in front of this camera, which fills the whole screen (color buffer) with black (but doesn't write to the zbuffer). Then I render my special objects on top.

    And so then I have something like "for this new camera, render the quad's 'mask' as black, and render the special bloom objects normally", so I take this camera's texture, and in the bloom post effect I ignore everything black while blooming the rest, and blend this rest into the main camera...
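    The final merge in that plan could be sketched as a CG fragment like this (illustrative texture names; the black "mask" areas blur to near-black, so an additive blend leaves them effectively untouched):

    ```hlsl
    sampler2D _MainTex;   // main camera image
    sampler2D _BloomTex;  // blurred copy of the second camera's image

    fixed4 frag(v2f i) : SV_Target
    {
        fixed4 scene = tex2D(_MainTex, i.uv);
        fixed4 bloom = tex2D(_BloomTex, i.uv);
        // the black quad contributes (near) zero, so only the special
        // objects and their blurred halo are added to the scene
        return scene + bloom;
    }
    ```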
     
    Last edited: Apr 10, 2015
  8. Zicandar

    Zicandar

    Joined:
    Feb 10, 2014
    Posts:
    388
    I would do this in a post effect. And I would copy everything, independent of color, that passes the stencil test, as that is done before the pixel is calculated at all by the GPU.

    As far as I know, yes, as the stencil buffer is normally part of the depth buffer (24 bits depth, 8 bits stencil), so perhaps make sure you set the depth buffer too? As you mention later, you might instead need to use a full-screen quad combined with SetRenderTarget.
     
    Tudor likes this.
  9. Zicandar

    Zicandar

    Joined:
    Feb 10, 2014
    Posts:
    388
    Also another trick I have used before, and might seem stupid/impossible at first glance:
    Use the alpha channel!
    The reasoning is simple: nothing else normally needs to output into it! (Unless you're doing some other very special stuff.)
    The main downside is that you need to modify all transparent shaders to blend only the RGB, not the alpha channel, but once that is done, you can use the alpha channel as a mask.
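    In ShaderLab, blending only RGB means giving the alpha channel its own blend factors, e.g.:

    ```shaderlab
    // RGB blends as a normal transparent shader; the second factor
    // pair (Zero One) leaves the destination alpha untouched, so the
    // alpha channel survives as a mask for the post effect to read.
    Blend SrcAlpha OneMinusSrcAlpha, Zero One
    ```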
     
    Tudor likes this.
  10. AlexBM

    AlexBM

    Joined:
    Mar 26, 2015
    Posts:
    16
    Oh, I finally devised a way to do it, and in the end my code looks very similar to that guy's: http://qiankanglai.me/misc/2015/03/07/unity-posteffect-stencil/

    So here is a simple post-process effect that discards all color channels except the red one.
    Original scene:
    Screenshot 2015-04-11 00.54.08.png
    Full post-process:
    Screenshot 2015-04-11 00.54.16.png
    Stencil postprocess with only one ball affected:
    Screenshot 2015-04-11 00.54.39.png

    The trick is quite simple after all. You need to set your camera's render target to your own RenderTexture; then, in OnPostRender, render a quad with this RenderTexture; then (very important!) set the render target with the depth buffer of it; and then render it again with a post-processing shader that includes a stencil test.

    It's very similar to what that guy suggested (shame on me for not crediting his feat the first time :)), but there are still some key differences. First, you can't use the OnPostRender function of your main camera, because it seems it still hasn't finished rendering there and your main RenderTexture isn't ready by that time. So you need to create a separate camera that renders nothing (culling mask equal to zero) and write OnPostRender there.

    Second, you can't render this stuff directly to the screen, because it seems there is no Graphics.SetRenderTarget overload that accepts null as a color buffer together with a depth buffer from another RenderTexture. Or I'm just too tired to find out the truth :) So you need to create a second buffer RenderTexture, render everything there, and only after that render it to the screen.
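    The steps above could be sketched roughly like this in C# (a sketch only, with illustrative field names; `sceneRT` is the main camera's targetTexture, `bufferRT` a same-sized second RenderTexture, and the script sits on the extra camera with cullingMask = 0):

    ```csharp
    using UnityEngine;

    public class StencilPostProcess : MonoBehaviour
    {
        public RenderTexture sceneRT;   // main camera renders into this
        public RenderTexture bufferRT;  // second color buffer, same size
        public Material quadMat;        // plain textured material
        public Material stencilMat;     // post shader with a Stencil test

        void OnPostRender()
        {
            // 1. Draw the scene into the buffer, keeping sceneRT's
            //    depth/stencil attached so the stencil values survive.
            Graphics.SetRenderTarget(bufferRT.colorBuffer, sceneRT.depthBuffer);
            DrawFullScreenQuad(sceneRT, quadMat);

            // 2. Draw again with the stencil-testing post shader; only
            //    stencil-tagged pixels receive the effect.
            DrawFullScreenQuad(sceneRT, stencilMat);

            // 3. Finally blit the buffer to the screen.
            Graphics.Blit(bufferRT, (RenderTexture)null);
        }

        // Manual Blit, as in the Material.SetPass manual example.
        static void DrawFullScreenQuad(Texture tex, Material mat)
        {
            mat.mainTexture = tex;
            mat.SetPass(0);
            GL.PushMatrix();
            GL.LoadOrtho();
            GL.Begin(GL.QUADS);
            GL.TexCoord2(0, 0); GL.Vertex3(0, 0, 0);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 0, 0);
            GL.TexCoord2(1, 1); GL.Vertex3(1, 1, 0);
            GL.TexCoord2(0, 1); GL.Vertex3(0, 1, 0);
            GL.End();
            GL.PopMatrix();
        }
    }
    ```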

    I've attached a project with this example. Hope that helps.
    Thanks,
    Alex.
     

    Attached Files:

  11. scopigno

    scopigno

    Joined:
    Jul 10, 2014
    Posts:
    9
    Wow, it really works.
    Took some time to understand all these operations.
    But once I did I could easily apply this method to any of the standard shaders.
    Thank you a lot.
     
  12. Greg-SS

    Greg-SS

    Joined:
    Jan 19, 2014
    Posts:
    2
    When I try that scene in Unity 5, all the pixels that pass the post-process shader are black :(
     
  13. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,475
    Sorry for the old bump, but I don't think this technique works in the latest Unity with deferred shading.
     
  14. gungnir

    gungnir

    Joined:
    Nov 27, 2009
    Posts:
    16
    To get this to work in Unity 5 you have to change the postprocess shader to this:

    Shader "Custom/postprocess_effect"
    {
        Properties
        {
            _MainTex ("Base (RGB)", 2D) = "white" {}
        }
        SubShader
        {
            Stencil
            {
                Ref 2
                Comp Equal
            }

            Pass
            {
                ZTest Always Cull Off ZWrite Off

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                uniform sampler2D _MainTex;
                uniform float4 _MainTex_TexelSize;
                uniform fixed4 _Color;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                };

                v2f vert(appdata_img v)
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy;
                    return o;
                }

                half4 frag(v2f i) : SV_Target
                {
                    return tex2D(_MainTex, i.uv) * float4(1, 0, 0, 1);
                }
                ENDCG
            }
        }

        Fallback Off
    }
     
    tangwilliam and Loius like this.
  15. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    380
    So I was looking into per object post processing, and this thread seems to be the only good resource on it. I was implementing @AlexBM's technique, which works just fine, but I'm having trouble understanding why all those steps are required. I also got the same result by having a regular post processing effect, using OnRenderImage, on the camera that does the rendering. It literally just blits source into dest using a post processing shader with a stencil check, and only objects that wrote the stencil ref value earlier get affected.
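    The simpler variant described above would look something like this (a sketch, with illustrative names; as noted, it only seems to work when the destination still carries the camera's depth/stencil surface):

    ```csharp
    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class StencilMaskedEffect : MonoBehaviour
    {
        // Post shader containing a Stencil { Ref ... Comp Equal } block.
        public Material stencilMat;

        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            // Only pixels whose stencil value matches the shader's Ref
            // are overwritten; everything else passes through unchanged.
            Graphics.Blit(src, dest, stencilMat);
        }
    }
    ```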

    Is there a reason for having a second camera that renders into a render texture and copies into a separate buffer OnPostRender that I'm not aware of?



    And on this topic, what would be the good approach for allowing post processing effects to leak outside of the original stencil bounds? As you can see in the screen shot below, the edges that would be rendered outside of the object's bounds are cut off.



    Edit: I've just gone through the thread again, and I now get that the second camera and buffers are required to apply the post effects outside of the stencil bounds, but I still don't understand the entire process involved, especially because the explanation is prefaced with "it seems that...", and the linked article returns a 404. Can anyone shed some light on the matter once and for all?
     
    Last edited: May 16, 2016
  16. snw

    snw

    Joined:
    Mar 13, 2014
    Posts:
    41
    Thomas-Mountainborn likes this.
  17. LW

    LW

    Joined:
    Jun 23, 2013
    Posts:
    22
  18. dreamerflyer

    dreamerflyer

    Joined:
    Jun 11, 2011
    Posts:
    927
    alphastencil.jpg
    Alpha stencil doesn't work. Is it a bug?
     