
Trying to downscale a texture in a shader

Discussion in 'Shaders' started by ben3d, Sep 27, 2019.

  1. ben3d

     Joined: May 19, 2014
     Posts: 26
    Hello,
    What I want to do is: when the user stops moving, render the scene at a larger resolution, then downscale it with Lanczos filtering.
    I managed to do it in C#, but it takes 10-20 seconds to generate the texture. So now I'm trying to do it in a shader, but I'm stuck on the downscaling part. Is it possible to resize a texture in a shader?
    Currently, my shader takes a render texture and some integers specifying the size of the render texture and the size of the desired texture after downscaling. But since I believe my shader is called once for each pixel, I can't imagine a way to have fewer pixels out than in. I'm a bit lost here. I think I'm misunderstanding something, but I can't figure out what.
    Thanks for your help.
     
  2. bgolus

     Joined: Dec 7, 2012
     Posts: 7,683
    You control the resolution target by changing the resolution of the target. In other words, the render texture you're rendering to should be of the resolution you want the output image to be. The input render target can be whatever size you want.

    The shader is then run once for every pixel of the target resolution. You then sample the higher resolution texture multiple times in the shader at each pixel, gathering the data to apply the Lanczos filter to and output that pixel's results.
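    As a rough sketch of that idea (not from this thread — the 4x4 window, the a = 2 kernel, the half texel offsets for a fixed 2:1 downscale, and Unity's built-in v2f_img/_MainTex_TexelSize are all illustrative choices), the fragment shader could gather source texels and weight them with a Lanczos kernel like this:
    Code (CSharp):
        sampler2D _MainTex;          // the high resolution source
        float4 _MainTex_TexelSize;   // x = 1/width, y = 1/height, filled in by Unity

        // Lanczos kernel with a = 2: sinc(x) * sinc(x / 2)
        float lanczos2(float x)
        {
            if (x == 0.0) return 1.0;
            if (abs(x) >= 2.0) return 0.0;
            float px = 3.14159265 * x;
            return 2.0 * sin(px) * sin(px / 2.0) / (px * px);
        }

        float4 frag (v2f_img i) : SV_Target
        {
            float4 sum = 0.0;
            float weightSum = 0.0;
            // gather a 4x4 window of source texels around this output pixel;
            // for a 2:1 downscale the output pixel center sits between source
            // texels, so the sample distances are -1.5, -0.5, 0.5, 1.5
            for (int y = -1; y <= 2; y++)
            {
                for (int x = -1; x <= 2; x++)
                {
                    float w = lanczos2(x - 0.5) * lanczos2(y - 0.5);
                    float2 uv = i.uv + float2(x - 0.5, y - 0.5) * _MainTex_TexelSize.xy;
                    sum += tex2D(_MainTex, uv) * w;
                    weightSum += w;
                }
            }
            return sum / weightSum;
        }
    Dividing by weightSum at the end keeps overall brightness stable even though the kernel is truncated.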
     
  3. ben3d
    Okay, thanks for your answer, I'm starting to understand.
    But how can I specify that the shader will render to a render texture? I can't set a shader on a render texture. The input render texture will be set as a sampler2D parameter, but I don't understand where to set the output render texture.
    I need to be able to set both from a script since the size of each texture can change.
     
  4. bgolus
  5. ben3d
    Oh. That seems to be exactly what I was looking for! I will give it a try.
    Thank you.
     
  6. ben3d
    I'm having a hard time using my material with the Graphics.Blit function. Does the shader have to follow a specific format? Do I have to assign the source RenderTexture to the shader like this:
    myMaterial.SetTexture("_MainTex", sourceRenderTexture);
    before calling Blit?
    I'm using this shader:
    Code (CSharp):
        Properties
        {
            _MainTex("Texture", 2D) = "white" {}
        }

        SubShader
        {
            Lighting Off
            Blend One Zero

            Pass
            {
                CGPROGRAM
                #include "UnityCustomRenderTexture.cginc"
                #pragma vertex CustomRenderTextureVertexShader
                #pragma fragment frag
                #pragma target 3.0

                sampler2D _MainTex;

                float4 frag(v2f_customrendertexture IN) : COLOR
                {
                    return tex2D(_MainTex, IN.localTexcoord.xy);
                }
                ENDCG
            }
        }
    But the dest RenderTexture is empty.
     
  7. bgolus
    Not really, no. Usually Blit() shaders are super simple. The modern built-in shaders the Blit() function uses are a bit more complex because they add support for XR-related features. But for your case this shader should be enough:
    Code (csharp):
        Shader "Minimal Blit Shader" {
            Properties {
                _MainTex ("Source", 2D) = "white" {}
            }
            SubShader {
                Pass {
                    // recommended by Unity, but not strictly needed
                    Cull Off ZWrite Off ZTest Always

                    CGPROGRAM
                    #include "UnityCG.cginc"
                    #pragma vertex vert_img
                    #pragma fragment frag

                    sampler2D _MainTex;

                    half4 frag (v2f_img i) : SV_Target // not COLOR
                    {
                        return tex2D(_MainTex, i.uv);
                    }
                    ENDCG
                }
            }
        }
    If you read that page I linked to, it states you do not need to, as the source input is automatically assigned to _MainTex. However, this doesn't always work for reasons that aren't entirely understood (probably a bug?), so setting it manually doesn't hurt anything.
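    For reference, the script side of a Blit like this is only a couple of lines (blitMaterial, sourceRT, and destRT here are placeholder names, not from the thread):
    Code (CSharp):
        // destRT should already be created at the output resolution you want
        blitMaterial.SetTexture("_MainTex", sourceRT); // explicit, in case auto-binding fails
        Graphics.Blit(sourceRT, destRT, blitMaterial);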

    edit: replace vert with vertex
     
    Last edited: Oct 4, 2019
  8. ben3d
    Based on your example I made it work. I just needed to replace
    #pragma vert vert_img
    with
    #pragma vertex vert_img
    otherwise the output was a magenta screen. I didn't see any difference between SV_Target and COLOR; both seem to work correctly.
    All is good now, thanks to you. You have been a great help :)
     
  9. bgolus
    If you're only running the shader on Windows, then COLOR will work fine. That's Direct3D 9 style HLSL, which the modern Windows shader compiler will still accept (and internally convert). On other platforms it may fail to compile, especially on any Apple product or console. It may or may not work on Android, as it depends on the device you run on.
     
  10. ben3d
    Thanks for the additional information. I tried only on Windows and Android (on my Samsung S8), but I already had SV_Target when I built for Android. I need to be compatible with iOS and WebGL too, so I will use SV_Target to be sure. ;)
     
    Last edited: Oct 7, 2019
  11. ben3d
    The result I get on iOS shows that I have some problem with my shader. I think it's due to floating-point precision, since I pick specific pixels in my source image by creating UVs like
    float2(1.0 / src_width, 1.0 / src_height)
    and in the Unity Editor and PC standalone the result is as expected. Can I make the shader work on iOS the same way it works on standalone?
     
  12. bgolus
    You'd have to explain more of what you mean by "some problem". As long as you're using "float" and not "half" anywhere in your code, they should behave the same in terms of precision.
     
  13. ben3d
    For the same screen resolution, in the editor and on an iPad, the quality of the Blit result is not the same. In the editor the "dest" texture looks better than on my iPad. I'm trying to figure out why.
     
  14. bgolus
    My guess is the shader isn’t the problem. My guess is the texture you’re sampling from either isn’t set to use bilinear filtering, or the format doesn’t support it. So on the PC you’re getting the benefits of bilinear filtering, and on iOS you’re not.

    You could test this by downscaling by exactly half using the above example shader and see if it's obviously aliasing, or missing small details. If that's the case you might need to change the format of the texture you're reading from, or you'll need to take into account the lack of bilinear filtering.
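    That test could look something like this (a sketch; sourceRT and blitMaterial are assumed to already exist):
    Code (CSharp):
        // destination at exactly half the source resolution
        RenderTexture halfRT = new RenderTexture(
            sourceRT.width / 2, sourceRT.height / 2, 0, RenderTextureFormat.ARGB32);
        sourceRT.filterMode = FilterMode.Bilinear; // make sure bilinear is on
        Graphics.Blit(sourceRT, halfRT, blitMaterial);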
     
  15. ben3d
    Actually, no, it wasn't set to use bilinear. I create the source render texture in script and didn't set filterMode. The format is
    RenderTextureFormat.ARGB32
    .
    I will try downscaling by half using the above shader and see if I get the same result. I will also manually set filterMode to
    FilterMode.Bilinear
    .

    Edit: I tried it and you were right. I just manually set FilterMode.Bilinear on the source RenderTexture with the basic shader, and the results look the same. Except for light and shadow, but I suppose I can't do anything about that; it must be a PC-iOS difference. I will try again with my shader.
     
    Last edited: Oct 16, 2019
  16. ben3d
    With my shader, bilinear didn't solve anything. I think again that the problem is how I retrieve neighboring pixels. I compute the UV like this:
    i.uv + float2(px * _MainTex_TexelSize.x, py * _MainTex_TexelSize.y)

    where px and py are offsets from the current pixel. What do you think?
     
  17. bgolus
    Hmm. As long as all of the properties and values you're using are "float" values, and none are ever half, fixed, or int, it should work. It's possible there's some nasty "auto-optimization" happening someplace, but I'm not sure I've seen that happen on iOS. Usually that's an Android thing.
     
  18. ben3d
    Yes, I'm using only float.
    I uploaded an image that shows the difference:
    https://ibb.co/3rf9thf
    The left image is the result on PC, the right is on iOS.
     
  19. bgolus
    That looks more like you're rendering an image at the wrong resolution. That's the kind of artifact you'd get from rendering a point sampled 256x256 image at 280x256.
     
  20. ben3d
    I don't get it. What do you mean by rendering a point-sampled 256x256 image at 280x256?
    I create the RenderTexture based on the camera's pixelWidth attribute, like this:
    Code (CSharp):
        int width = camera.pixelWidth * 2;
        int height = camera.pixelHeight * 2;

        RenderTexture sourceRt = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32)
        {
            filterMode = FilterMode.Bilinear
        };
    The source RenderTexture is set to camera pixels * 2 and the destination RenderTexture to camera pixels.
    Why don't I get any artifacts on PC?

    ps: A question that has nothing to do with my current issue: which format do you recommend for iOS? ARGB32 is a bit too memory-hungry.
     
    Last edited: Oct 17, 2019
  21. bgolus
    I don't really have an answer for you, apart from the artifacts look very similar to those that happen when your source and destination resolutions don't perfectly match.

    ARGB32 is the smallest render texture format you can reasonably use. Anything less than that (RGB565 or RGBA4444) is going to quantize the color values significantly, to the point of being useless for most use cases. And that assumes the target platform even supports those formats (which many mobile devices do not).
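    If you do want to try a smaller format anyway, it's worth checking device support at runtime first (a sketch using Unity's SystemInfo API):
    Code (CSharp):
        RenderTextureFormat fmt = RenderTextureFormat.ARGB4444;
        if (!SystemInfo.SupportsRenderTextureFormat(fmt))
            fmt = RenderTextureFormat.ARGB32; // fall back to the safe default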
     
  22. ben3d
    Yeah, it is pretty confusing. I will continue to investigate why I'm getting this result. Thanks for your time.
    Do you think it is possible that at some point on iOS, 1/4448 results in such a tiny value that it gets rounded, and using it in a UV returns the wrong pixel? Or am I looking in the wrong place?
     
  23. bgolus
    It shouldn't be a problem with precision, again assuming all values used are float and not half or fixed. But I haven't done iOS dev in over half a decade so I don't know if there are any weird gotchas lately. Every iOS device in the last 8 years has been more than capable of full float precision UVs.