Resolved Custom Gaussian Blur unexpected "offset"

Discussion in 'Shaders' started by AverageCG, Sep 28, 2021.

  1. AverageCG

    AverageCG

    Joined:
    Nov 13, 2017
    Posts:
    14
    Hey,

    I have a custom gaussian blur compute shader for a custom pp effect I am working on, but the results are offset in the positive uv direction (i.e. towards the top left of the screen). Does anyone know why this might be the case or how to fix this issue? The only thing I could think of is that I am using the Unity macros "RW_TEXTURE2D_X" & "COORD_TEXTURE2D_X" and sampling/writing for different perspectives/eyes or something, but then why would it be skewed in the y direction as well...

    EDIT: I suppose bilinear sampling is not supported for the sampling method used, and/or the uvs are getting truncated? That would result in pixel values being sampled that are too far to the bottom left of the image.
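    The truncation suspicion is easy to check with plain arithmetic: HLSL's float-to-integer conversion truncates toward zero, so the symmetric fractional offsets land asymmetrically around the center texel. A quick sketch of that (plain Python, not shader code; the center coordinate is just an example value):

    ```python
    # Model of what happens when fractional UV offsets are used as
    # integer texel indices: float->int truncates toward zero, so the
    # + and - taps end up at different distances from the center.
    center = 10.0
    d1 = 1.3846153846  # first linear-sampling offset from the blur

    plus_tap = int(center + d1)   # 11.3846 -> 11 (offset +1)
    minus_tap = int(center - d1)  # 8.6154  -> 8  (offset -2)

    print(plus_tap - 10, minus_tap - 10)  # -> 1 -2: the kernel is skewed
    ```

    Every tap pair is pulled one texel further on the negative side than the positive one, which would shift the whole blur in one direction, matching the symptom above.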

    EDIT 2: I have implemented discrete filtering and indeed the offset disappears. That just makes me wonder whether it is possible to use linear sampling with the provided Unity macros, or in compute shaders in general, and what the alternatives are. Also (small rant warning): why are Unity macros never documented anywhere? Try googling RW_TEXTURE2D_X; this post is probably one of the top results at this point..
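    For reference, the magic constants in the shader come from collapsing a discrete 9-tap Gaussian into 5 bilinear taps (the rastergrid technique linked in the follow-up post): each pair of adjacent discrete weights is merged into one tap whose offset is the weight-averaged position. A sketch of that derivation (plain Python; the discrete weights are the standard 9-tap kernel values):

    ```python
    # Derive the linear-sampling blur constants from the discrete
    # 9-tap Gaussian weights (offsets 0..4, mirrored for the other side).
    discrete_w = [0.227027, 0.1945946, 0.1216216, 0.0540541, 0.0162162]

    def merge(w_a, w_b, off_a, off_b):
        """Merge two adjacent discrete taps into one bilinear tap."""
        w = w_a + w_b
        off = (off_a * w_a + off_b * w_b) / w
        return w, off

    w1, off1 = merge(discrete_w[1], discrete_w[2], 1.0, 2.0)
    w2, off2 = merge(discrete_w[3], discrete_w[4], 3.0, 4.0)

    print(w1, off1)  # ~0.3162162, ~1.3846154
    print(w2, off2)  # ~0.0702703, ~3.2307692
    ```

    This only works because the hardware's bilinear filter does the two-texel mix for free, which is exactly what direct `Source[...]` indexing skips.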

    I am currently not using VR, and the screenshot below is just from the Unity in-game view.

    This image shows a scene masked with the blur values to showcase the offset.

    issue.PNG


    Otherwise, here is the basic code in case I am missing something here o_O :
    Code (HLSL):

    #pragma kernel GaussianBlur

    RW_TEXTURE2D_X(float4, Source);
    RW_TEXTURE2D_X(float4, Result);
    int strideFactor;

    float4 gaussian_filter(float2 uv, float2 stride)
    {
        // center tap (res must be float4 to hold the color, not float)
        float4 res = Source[COORD_TEXTURE2D_X(uv)] * 0.227027027f;

        float2 d1 = stride * 1.3846153846f;
        res += Source[COORD_TEXTURE2D_X(uv + d1)] * 0.3162162162f;
        res += Source[COORD_TEXTURE2D_X(uv - d1)] * 0.3162162162f;

        float2 d2 = stride * 3.2307692308f;
        res += Source[COORD_TEXTURE2D_X(uv + d2)] * 0.0702702703f;
        res += Source[COORD_TEXTURE2D_X(uv - d2)] * 0.0702702703f;

        return res;
    }

    [numthreads(32, 32, 1)]
    void GaussianBlur(uint3 id : SV_DispatchThreadID)
    {
        // horizontal pass when strideFactor == 1, vertical when 0
        float2 aStride = float2(strideFactor, 1 - strideFactor);
        Result[COORD_TEXTURE2D_X(id.xy)] = gaussian_filter(id.xy, aStride);
    }
    Would be awesome if someone has an idea as to where this shift comes from or what I could try to fix it.
    :oops:
     
    Last edited: Sep 29, 2021
  2. AverageCG

    AverageCG

    Joined:
    Nov 13, 2017
    Posts:
    14
    For reference, in case anyone stumbles upon similar issues and struggles to find the right answers:

    I've converted the source texture to just TEXTURE2D_X in the compute shader, and the sampling looks like this:

    Code (HLSL):

    TEXTURE2D_X(Source);  // read-only now, sampled instead of indexed
    float _ResultWidth, _ResultHeight;

    float2 getUV(float2 id)
    {
        // shift to the texel center, then normalize to 0-1
        float2 uv = (id.xy + float2(.5f, .5f)) / float2(_ResultWidth, _ResultHeight); // *reso;

        return uv; // *_RTHandleScale.xy;
    }

    // 9-tap Gaussian filter with linear sampling
    // http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/
    float4 gaussian_filter_Linear(float2 uv, float2 stride)
    {
        float4 res = SAMPLE_TEXTURE2D_X_LOD(Source, s_linear_clamp_sampler, getUV(uv), 0) * 0.227027027;

        float2 d1 = stride * 1.3846153846;
        res += SAMPLE_TEXTURE2D_X_LOD(Source, s_linear_clamp_sampler, getUV(uv + d1), 0) * 0.3162162162;
        res += SAMPLE_TEXTURE2D_X_LOD(Source, s_linear_clamp_sampler, getUV(uv - d1), 0) * 0.3162162162;

        float2 d2 = stride * 3.2307692308;
        res += SAMPLE_TEXTURE2D_X_LOD(Source, s_linear_clamp_sampler, getUV(uv + d2), 0) * 0.0702702703;
        res += SAMPLE_TEXTURE2D_X_LOD(Source, s_linear_clamp_sampler, getUV(uv - d2), 0) * 0.0702702703;

        return res;
    }
    Keep in mind that the sampler expects uv coordinates between 0 and 1; I convert them from id.xy with getUV().
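    The `+ 0.5` in getUV() is the important part: dispatch thread IDs address texels by integer index, while the sampler expects the texel center, so the mapping is `(id + 0.5) / resolution`. A quick numeric check of that mapping (plain Python; the 8-texel width is just an example value):

    ```python
    # Map integer texel indices to normalized texel-center UVs,
    # mirroring the HLSL getUV() above (example 8-texel-wide target).
    width = 8.0

    def get_uv(x):
        return (x + 0.5) / width

    print(get_uv(0))  # -> 0.0625: center of the first texel, not 0.0
    print(get_uv(7))  # -> 0.9375: center of the last texel, not 1.0
    ```

    Without the half-texel shift, bilinear sampling at an exact texel corner averages in the neighbouring texels and reintroduces a half-pixel offset.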


    PS: why is it so hard to find basic information on stuff like this? I have to go in, read the hlsl include files, and experiment :(
     
  3. RyanKeable

    RyanKeable

    Joined:
    Oct 16, 2013
    Posts:
    60
    Thanks for doing the digging for us!

    I haven't had experience with the macro COORD_TEXTURE2D_X, so this was a good find.

    Just an aside question, as I am still learning these kinds of processes: at what point would your uvs not be 0-1 when doing a full screen blit? (assuming that's what you are doing)
     
  4. AverageCG

    AverageCG

    Joined:
    Nov 13, 2017
    Posts:
    14
    I am using a compute shader here, so the id.xy coordinates are integer coordinates from 0 up to the resolution.
    I kinda thought COORD_TEXTURE2D_X would do the job or something, but I am actually not sure what exactly it is doing (just that it was needed :D) (probably just VR-related selection of which dimension/slice of the _XR texture to sample).
    edit: looked it up, it just adds the SLICE_ARRAY_INDEX macro to the pixelCoord position. I.e. feed it an int2 and it returns an int3 with the corresponding slice index as the third vector param, which is basically just 0 unless you are using stereo instancing for VR.

    The only place in which I convert to 0-1 uvs is when I divide by
    float2(_ResultWidth, _ResultHeight)
    in my second post.
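    The macro behaviour described above can be modeled in a few lines (plain Python, purely illustrative; the real macros live in Unity's SRP Core TextureXR includes, and the slice index would be the stereo eye index under XR single-pass instancing):

    ```python
    # Toy model of COORD_TEXTURE2D_X: append the eye/slice index to a
    # 2D pixel coordinate. Outside stereo instancing the slice is 0.
    SLICE_ARRAY_INDEX = 0  # stands in for unity_StereoEyeIndex

    def coord_texture2d_x(coord_xy):
        x, y = coord_xy
        return (x, y, SLICE_ARRAY_INDEX)

    print(coord_texture2d_x((120, 64)))  # -> (120, 64, 0)
    ```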
     
    Last edited: Sep 30, 2021
  5. RyanKeable

    RyanKeable

    Joined:
    Oct 16, 2013
    Posts:
    60

    Thank you for that follow up! That makes a lot more sense to me now
     