[Resolved] Unity flipping render textures

Discussion in 'Shaders' started by AlexTorbin, Dec 30, 2020.

  1. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    I've tried everything from the Unity manual and from Google, and this is what I get (or vice versa) no matter what I do:

    [Attached screenshot: 111.PNG]

    I created a little custom RP: first it renders opaque geometry, then a skybox, and finally a post-FX shader takes the color + depth textures and inverts the green channel on the background.

    The problem is in this shader. When I pass UVs to the fragment program and use SAMPLE_TEXTURE2D, the counter-flip works in both views. However, when I switch to the position-based approach and LOAD_TEXTURE2D instead, Unity always flips the game view, regardless of what I do with positionCS in the vertex program.

    All tutorials say:
    Code (CSharp):
    #if UNITY_UV_STARTS_AT_TOP
        pos.y = -pos.y;
    #endif
    But in my case it doesn't do anything, even though the condition is always true. _ProjectionParams.x and _TexelSize.y are always positive as well. What am I doing wrong?

    Shader code:
    Code (CSharp):
    1. Shader "Hidden/FxComposition"
    2. {
    3.     Properties
    4.     {
    5.         _MainTex("Texture", 2D) = "white" {}
    6.     }
    7.  
    8.     SubShader
    9.     {
    10.         Cull Off ZWrite Off ZTest Always
    11.  
    12.         Pass
    13.         {
    14.             HLSLPROGRAM
    15.             #pragma target 3.5
    16.             #pragma vertex Vert
    17.             #pragma fragment Frag
    18.  
    19.             #include "Assets/Custom RP/Shader Library/Common.hlsl"
    20.  
    21.             TEXTURE2D(_MainTex);
    22.             TEXTURE2D(_DepthTex);
    23.             SAMPLER(sampler_point_clamp);
    24.  
    25.             struct Attributes
    26.             {
    27.                 float3 positionOS : POSITION;
    28.                 float2 uv : TEXCOORD0;
    29.             };
    30.  
    31.             struct Varyings
    32.             {
    33.                 float4 positionCS : SV_POSITION;
    34.                 float2 uv : TEXCOORD0;
    35.             };
    36.  
    37.             Varyings Vert(Attributes input)
    38.             {
    39.                 Varyings output;
    40.                 output.positionCS = TransformObjectToHClip(input.positionOS);
    41.                 output.uv = input.uv;
    42.  
    43.                 #if UNITY_UV_STARTS_AT_TOP
    44.                     output.positionCS.y *= -1.0;
    45.                     output.uv.y = 1.0 - output.uv.y;
    46.                 #endif
    47.  
    48.                 return output;
    49.             }
    50.  
    51.             float4 Frag(Varyings input) : SV_Target
    52.             {
    53.                 //Works as intended
    54.                 //float4 color = SAMPLE_TEXTURE2D(_MainTex, sampler_point_clamp, input.uv);
    55.                 //float depth = SAMPLE_DEPTH_TEXTURE(_DepthTex, sampler_point_clamp, input.uv);
    56.  
    57.                 //Upside down in game view
    58.                 int2 screenPos = (int2)input.positionCS.xy;
    59.                 float4 color = LOAD_TEXTURE2D(_MainTex, screenPos);
    60.                 float depth = LOAD_TEXTURE2D(_DepthTex, screenPos).r;
    61.  
    62.                 return depth > 0 ? color : float4(color.r, 1 - color.g, color.ba);
    63.             }      
    64.  
    65.             ENDHLSL
    66.         }
    67.     }
    68. }
     
    Last edited: Dec 30, 2020
  2. cadynbombaci
    Joined: Oct 10, 2017
    Posts: 10
    Hold up, I think you're doing a double negative here:
    Code (CSharp):
    1. output.positionCS.y *= -1.0;
    2. output.uv.y = 1.0 - output.uv.y;
    I believe the effect of this would be: flip the pixel's position, but then also read the texture from the correspondingly flipped spot, so in total it's the same as if there were no flip. I would either flip the UV or flip the position, not both.
     
  3. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    In the case of sampling textures, you are absolutely right: both lines have an effect, separately and together.
    But when loading textures by pixel position, you can comment out line 2; it obviously isn't used. And line 1 doesn't do anything, and that's the problem. Whether I leave it or remove it, the situation doesn't change: a correct render in the editor and a flipped render in the game view.
     
    Last edited: Dec 30, 2020
  4. bgolus
    Joined: Dec 7, 2012
    Posts: 12,352
    The SV_Position is unaffected by anything in the vertex shader. Flipping the output positionCS.y in the vertex shader doesn't matter, since in the fragment shader it's always just the screen space pixel coordinate, and Unity renders everything upside down on PC to match OpenGL's weirdness / texture coordinate standards. That's why everything Unity does in the fragment shader uses the UVs, and (almost) never the SV_Position or VPOS.

    If you really want to use that, you'll need to flip the positionCS.y in the fragment shader.
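    As an illustrative sketch only (not code from this thread), that fragment-shader flip could look like the snippet below. It assumes the render target height is available as _ScreenParams.y, which a custom RP has to set itself, and the #if is the naive condition that later posts refine:
    Code (CSharp):
    // Un-flip the pixel coordinate before loading (sketch; assumes _ScreenParams
    // holds the current render target size, set by the RP).
    int2 screenPos = (int2)input.positionCS.xy;
    #if UNITY_UV_STARTS_AT_TOP
        screenPos.y = (int)_ScreenParams.y - 1 - screenPos.y;
    #endif
    float4 color = LOAD_TEXTURE2D(_MainTex, screenPos);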
     
  5. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    Thanks for the answer, but I got this idea from your own article "The Quest for Very Wide Outlines". Isn't it the same thing? How was it supposed to work, if it has no effect?

    [Attached screenshot: 222.PNG]

    And of course, if I flip the position in the fragment shader, I get what was referenced as "vice versa" in the original post.

    [Attached screenshot: 333.PNG]
     
    Last edited: Dec 30, 2020
  6. bgolus
    Joined: Dec 7, 2012
    Posts: 12,352
    Yes, but if you look carefully you'll notice I only flip one pass ... the one that renders the original silhouette mask. The fragment shader for the shader that flips the clip space in the vertex shader just outputs a constant value of 1.0. Then for all other passes I only use the SV_Position. It's a hacky workaround for the funky shiz Unity does when rendering to MSAA render targets (which it does because OpenGL does it). And it means from that point forward I don't have to worry about accounting for the flipping, as I always use the raw pixel coordinates.

    Honestly, it's probably totally broken for OpenGL, but IDGAF about OpenGL on desktop. :cool:
     
  7. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    @bgolus I don't understand. You said that flipping positionCS doesn't matter in the fragment shader, but you somehow did flip that initial silhouette pass? And when sampling from a texture it clearly has an effect; watch this short video:

     
  8. bgolus
    Joined: Dec 7, 2012
    Posts: 12,352
    If you flip the positionCS.y in the vertex shader, it flips the mesh upside down. This will affect the mesh's silhouette (like in my use case) and the UVs or other interpolated values not using SV_Position, since the mesh is now upside down. It won't do anything if the mesh covers the entire screen and you use the SV_Position in the fragment shader. You can flip, rotate, and scale the mesh however you want, but it won't matter, because the xy values are just the screen space pixel coordinates and have nothing to do with the mesh anymore at all. This is a special property specific to the SV_Position semantic. In short, by the time it gets to the fragment shader it is not just the interpolated value you output from the vertex shader, like all other semantics are.
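    A quick way to see this property (an illustrative sketch, not code from the thread): output the raw coordinate and note that the gradient stays anchored to the screen no matter how the vertex shader transformed positionCS. _ScreenParams is assumed to hold the target size, which a custom RP has to provide.
    Code (CSharp):
    // SV_Position.xy in a fragment shader is the window-space pixel center
    // ((0.5, 0.5) for the top-left pixel on Direct3D), regardless of vertex flips.
    float4 Frag(float4 positionCS : SV_POSITION) : SV_Target
    {
        // Red/green gradient tied to screen position, not to the mesh.
        return float4(positionCS.xy / _ScreenParams.xy, 0.0, 1.0);
    }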

    So, in your original code sample above, you're using the input.positionCS.xy for the load function's integer coordinates.

    In that video you're using the UVs, so that is affected.

    I looked over my production outline code, which is a bit different from what I presented in my article. The article code is written for extreme efficiency. The production code is written for usability, and to that end I actually ended up fixing the same issue you're running into, where the scene and game views were flipped relative to each other.

    How? I use the UVs, exactly as shown in Unity's documentation.
    Code (csharp):
    #if UNITY_UV_STARTS_AT_TOP
    if (_MainTex_TexelSize.y < 0)
        uv.y = 1-uv.y;
    #endif
    Since I'm still using load in my shaders, I multiply the UVs by the screen size in the vertex shader, then cast them to an int2 in the fragment shader.
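    A minimal sketch of that idea (assumed names, not the actual production code): derive the pixel coordinate from the flip-corrected UV instead of from SV_Position, so the UV fix covers the load path too. _ScreenSize is an assumed float2 uniform holding (width, height), and positionSS is an assumed extra interpolator.
    Code (CSharp):
    // Vertex shader: after the UV flip above, convert the UV to pixel units
    // and pass it down in an extra interpolator.
    output.positionSS = output.uv * _ScreenSize.xy;

    // Fragment shader: truncate to integer texel coordinates for the load.
    int2 texel = (int2)input.positionSS;
    float4 color = LOAD_TEXTURE2D(_MainTex, texel);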
     
  9. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    That's good to know, so this is not some local problem with my RP. In fact, it all started when I tried to draw a full-screen triangle instead of a Unity Blit(), following the Catlike Coding tutorial. The triangle is defined by vertex ID, and outputs UV and position like so:
    [Attached screenshot: 444.PNG]
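    (For readers who can't see the attachment: a vertex-ID full-screen triangle typically looks like the sketch below, which follows the common pattern and is not necessarily the exact code in the screenshot.)
    Code (CSharp):
    Varyings Vert(uint vertexID : SV_VertexID)
    {
        Varyings output;
        // One oversized triangle that covers the screen in clip space:
        // (-1,-1), (-1,3), (3,-1).
        output.positionCS = float4(
            vertexID <= 1 ? -1.0 : 3.0,
            vertexID == 1 ? 3.0 : -1.0,
            0.0, 1.0);
        // Matching UVs: (0,0), (0,2), (2,0).
        output.uv = float2(
            vertexID <= 1 ? 0.0 : 2.0,
            vertexID == 1 ? 2.0 : 0.0);
        return output;
    }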
    And I thought: since all my render textures are screen-sized and point-filtered, why pass UVs? I don't need UVs and samplers; I can use the position only and load the textures. And then of course the triangle was flipping, and I couldn't fix it by varying the vertical position in the vertex program. It seems there is no way around using UVs: either you sample the texture with them, or you calculate the position from them.

    I still don't understand why the flip doesn't happen in editor cameras, but it's probably not that important. Thank you for the help; I'm very grateful. I need to read everything carefully and think it over again.
     
    Last edited: Dec 30, 2020
  10. bgolus
    Joined: Dec 7, 2012
    Posts: 12,352
    OpenGL and Direct3D texture coordinates are inverted on the y axis relative to each other. This affects both how UVs on meshes work and how objects render to render targets. Unity does a lot to ensure that everything matches how OpenGL would behave, by doing things like uploading textures flipped and inverting the projection matrix so it renders scenes upside down. The rendering is then flipped again automatically by Unity when displaying to the screen at the end, so this flip-flopping is mostly hidden. This is all good, because for the most part it keeps the use of textures and any screen space effects consistent between platforms.

    The problem is MSAA render textures in OpenGL are extra weird. They're flipped compared to any other OpenGL render target. Unity tries to correct for this, but also mimics that weirdness when running other APIs! You need to check if the _TexelSize.y is negative to know when to un-flip it, to undo the weirdness that Unity is intentionally trying to mimic.

    Unity also disabled MSAA in the scene view a few versions ago to "work around" (punt on fixing) a bug with the SRPs. So the result is that if you have MSAA enabled for your project, it'll be enabled for the game view only and not the scene view, and if you have post-processing shaders that aren't properly accounting for the weirdness of MSAA render textures vs non-MSAA render textures, you'll have one view show up flipped and the other not.
     
  11. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    @bgolus Yes, there is a lot of weirdness in render textures. I once asked in another thread but got no answer, so I guess nobody knows. Taking into account everything said above: if you change the temporary render texture declaration, it inverts the flipping in both views. (I have MSAA disabled on the main camera.)

    These two methods lead to opposite flipping.
    Code (CSharp):
    buffer.GetTemporaryRT(colorTexID, pixelWidth, pixelHeight, 0, FilterMode.Point, RenderTextureFormat.Default);
    Code (CSharp):
    RenderTextureDescriptor colorTexDesc = new RenderTextureDescriptor()
    {
        dimension = TextureDimension.Tex2D,
        colorFormat = RenderTextureFormat.Default,

        width = pixelWidth,
        height = pixelHeight,

        msaaSamples = 1,
        depthBufferBits = 0,

        sRGB = false,

        useMipMap = false,
        autoGenerateMips = false
    };

    buffer.GetTemporaryRT(colorTexID, colorTexDesc, FilterMode.Point);

    So when using the descriptor, I have to un-flip the UVs in the FX shader. But when using the first variant, I have to remove the un-flip code. It's all pretty confusing. In the end, I can achieve the correct results by experimenting, but it annoys me that I don't understand why it works and under what conditions it will break.

    Btw, do you know which GraphicsFormat corresponds to RenderTextureFormat.Depth? I can't find this information in the manual or anywhere else. I tried different variants of R32 and got weird results.
    Code (CSharp):
    buffer.GetTemporaryRT(depthTexID, pixelWidth, pixelHeight, 32, FilterMode.Point, RenderTextureFormat.Depth)
    ==
    buffer.GetTemporaryRT(depthTexID, pixelWidth, pixelHeight, 32, FilterMode.Point, GraphicsFormat.????????)
    (The camera doesn't want to render depth in the custom RP without a shadow pass, so I'm using an additional depth texture, which I set as the depth target when rendering the scene geometry/skybox.)

    The manual page says:
    "The name of a format is based on the following criteria:
    - For color formats, the component-format specifies the size of the R, G, B, and A components (if present).
    - For depth/stencil formats, the component-format specifies the size of the depth (D) and stencil (S) components (if present)."

    But there are no formats provided with a D in the name.
     
    Last edited: Dec 30, 2020
  12. bgolus
    Joined: Dec 7, 2012
    Posts: 12,352
    That would explain some problems I was having when I first started writing the outline effect. At one point I had a combination of legacy inline and render-texture-descriptor-based temporary render textures and could not for the life of me get the flipping to work properly. It only started working after I deleted most of it and redid it all using descriptors. I hadn't thought about how that might be related. :eek:

    No idea if that's a "bug" or an intentional change in behavior, though. Worth stepping through with the frame debugger and/or RenderDoc to see what the projection matrix is during rendering into them (to see if they're flipped), and what the _MainTex_TexelSize values are (to see if the y is negative or not).

    I spent some time trying to figure that out too, and also noticed the lack of D16, D24S8, or D32S8 formats, or R32_Typeless (which is what Direct3D actually uses). Depth texture formats are a bit of a mess, as OpenGL has explicit depth and stencil texture format types, while Direct3D uses generic typeless formats with special bind flags set. And neither is exposed in the GraphicsFormat enum. The RenderTextureFormat.Depth just magically picks the correct option for the platform that best matches the D32S8 option. At one point I was looking to have a stencil-only render texture, which is a totally valid thing to do and which I had hoped the GraphicsFormat enum would expose. But no.

    So at the moment I honestly have no idea how to make a depth-only render texture without using the older RenderTextureFormat.Depth.
     
  13. AlexTorbin
    Joined: Dec 12, 2019
    Posts: 48
    @bgolus Thanks again for sharing so much information; your help is truly priceless. I have figured out a couple of things about how to make it consistent. If someone else reads this: everything described below refers to my tests in a simple project with a custom RP (Unity 2020.2.0f1). I decided to abstain from Texture.Load(int coords), so I'm using samplers only to read texture data. All temporary render textures are declared uniformly, either inline or with a descriptor. Also, MSAA is disabled; I have no idea how this will interact with MSAA. Here's what I found:

    1. [ RenderTextures = inline ] [ drawing methods = Blit ]
    This is the simple case, where you declare render textures with the inline method and only use Blit() with them. It seems the manual is not lying, and Blit() has some anti-flip mechanics under the hood. So in this case everything renders in the proper orientation in both the scene view and the game view. No need to do anything with the UVs in shaders. Well, at least in my project.

    2. [ RenderTextures = inline ] [ drawing methods = Mixed ]
    Now let's add something fancy, for example custom drawing into a full-screen triangle; in other words, rendering with the post-FX shader using DrawProcedural() (a C# sketch of that call is at the end of this list). It will flip in the game view (not in the scene view), and the frame debugger will show only the last step upside down. I tried many things that different sources suggest; they fixed one view but flipped the other. The actual fix was to add both checks, in sequence, to the vertex program of the PostFX shader:
    Code (CSharp):
    #if UNITY_UV_STARTS_AT_TOP
        output.uv.y = 1.0 - output.uv.y;
    #endif

    if (_ProjectionParams.x < 0.0)
    {
        output.uv.y = 1.0 - output.uv.y;
    }
    3. [ RenderTextures = descriptor ] [ drawing methods = Blit ]
    If for any reason you want to declare render textures using a descriptor, everything will flip in both the scene and game views, at all stages in the frame debugger. To un-flip the rendering in the last step, add this to the PostFX shader:
    Code (CSharp):
    #if UNITY_UV_STARTS_AT_TOP
        output.uv.y = 1.0 - output.uv.y;
    #endif
    But that's not all! My camera doesn't have its own depth texture; it uses a separate one when I need it. To make editor gizmos aware of depth, I use a simple CopyDepth shader: before the gizmos are drawn, it copies depth into the camera target. In this case it started giving a flipped result. To make it work properly I added the following to the CopyDepth shader:
    Code (CSharp):
    if (_ProjectionParams.x < 0.0)
    {
        output.uv.y = 1.0 - output.uv.y;
    }
    4. [ RenderTextures = descriptor ] [ drawing methods = Mixed ]
    By default, in this case everything gets flipped, just like in the previous one. But since the last step is DrawProcedural() instead of Blit(), the game view gets "fixed" (flipped twice). To make it consistent between the scene and game views, add this to both the PostFX and CopyDepth shaders:
    Code (CSharp):
    if (_ProjectionParams.x < 0.0)
    {
        output.uv.y = 1.0 - output.uv.y;
    }
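    As referenced in case 2, the C# side of drawing the post-FX pass as a full-screen triangle looks roughly like this (a sketch with assumed names such as postFxMaterial and context, not the exact code from my project):
    Code (CSharp):
    // Bind the intermediate color texture, target the camera, and draw 3 vertices;
    // the vertex shader builds the triangle from SV_VertexID, so no mesh is needed.
    buffer.SetGlobalTexture(colorTexID, colorTexID);
    buffer.SetRenderTarget(
        BuiltinRenderTextureType.CameraTarget,
        RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store);
    buffer.DrawProcedural(
        Matrix4x4.identity, postFxMaterial, 0, MeshTopology.Triangles, 3);
    context.ExecuteCommandBuffer(buffer);
    buffer.Clear();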
    For now, I'll mark this thread resolved. If something interesting comes up in the future, I'll probably update.
     
    Last edited: Dec 31, 2020