Question Compute shader SV_DispatchThreadID to texture coord

Discussion in 'Shaders' started by mchangxe, Aug 3, 2022.

  1. mchangxe

    mchangxe

    Joined:
    Jun 16, 2019
    Posts:
    69
Hey guys, I'm having trouble understanding what the SV_DispatchThreadID argument of the kernel function is in a compute shader. For my situation, I have a compute shader operating on a rendertexture of dimensions 1920 × 960, and for testing purposes I've set my numthreads to (1, 1, 1), just for simplicity.

    First of all, I would like to confirm that my understanding of numthreads is correct:
    numthreads(1, 1, 1) means a single-threaded operation on the rendertexture.
    numthreads(8, 8, 1) means 8 × 8 = 64 threads operating on the rendertexture at the same time, where each is allocated an exclusive section of the rendertexture to operate on.

    Secondly, the core part of my shader algorithm requires changing the UV coordinates from (0, 1) space to (-2, 2) space. So just like in a normal shader, where you would take the i.uv argument, multiply it by 4, and then subtract 2, I'm taking the id.xy from SV_DispatchThreadID and doing the same operation on it. This is where I ran into the current problem: doing this gives me a plain black output.

    I then found out that id.x and id.y are not floats in the range 0-1, so I then tried dividing id.x by the width of the texture (1920), then multiplying by 4 and subtracting 2. This led to a better outcome: the effect I want is showing, but it looks like it's stretched indefinitely on the x axis. I now suspect that my poor understanding of the SV_DispatchThreadID type is the root of my problem.
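    Roughly what my kernel looks like right now (a sketch; the resource name `Result` is just for illustration):

    ```hlsl
    // Sketch of my current attempt (names are illustrative).
    RWTexture2D<float4> Result; // the 1920x960 rendertexture

    [numthreads(1, 1, 1)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        // Divide by the width (1920), then remap from (0, 1) to (-2, 2),
        // mirroring what a fragment shader would do with i.uv:
        float2 p = (float2(id.xy) / 1920.0) * 4.0 - 2.0;

        // Visualizing p for debugging; the real effect goes here.
        Result[id.xy] = float4(p, 0.0, 1.0);
    }
    ```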

    So what exactly are the 3 integers in id : SV_DispatchThreadID? How do I know the UV coordinate of the pixel being operated on by a thread?

    thanks.

    @bgolus i need you!!!!!
     
  2. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,550
    No. A compute shader is not operating on any specific resource. After all, you can bind multiple collections of data at the top; how would it know which one to be "operating on"? Even in a regular shader it's not the texture that matters: it's simply the UV values of the vertices of the mesh being operated on that map to a sample point in the 0-1 range, which you can then use to sample that location on the texture. (Though UVs can even fall outside the 0-1 range, that's usually reserved for advanced effects, or for texture scaling/tiling defined at the material level.)

    All a compute shader is, is a collection of GPU threads. It's up to you to do with those threads whatever work you want to. Note that a single thread group is limited to 1024 threads in Shader Model 5 (the product of your numthreads dimensions). More info here: https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/sv-dispatchthreadid
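    For reference, the relationship between the system-value IDs described in that doc is:

    ```hlsl
    // For a kernel declared with [numthreads(Tx, Ty, Tz)] and
    // dispatched with Dispatch(Gx, Gy, Gz):
    //
    //   SV_GroupID          : which thread group this is     (0..Gx-1, 0..Gy-1, 0..Gz-1)
    //   SV_GroupThreadID    : which thread within the group  (0..Tx-1, 0..Ty-1, 0..Tz-1)
    //   SV_DispatchThreadID = SV_GroupID * uint3(Tx, Ty, Tz) + SV_GroupThreadID
    //
    // So SV_DispatchThreadID is a global integer index across the whole
    // dispatch, not a 0-1 coordinate.
    ```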

    So, you basically want to structure your threads and dispatch in a way that creates enough work to cover all the pixels in your texture. (Usually you don't want a single thread per pixel though, as the amount of work being done on a pixel is probably going to be small, so you're wasting overhead on dispatching threads where a single thread could instead handle a block of pixels.)
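    For example, a sketch of one common layout, using your 1920×960 texture and 8×8 thread groups (resource and kernel names are made up):

    ```hlsl
    RWTexture2D<float4> Result; // assumed to be 1920x960

    [numthreads(8, 8, 1)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        // Dispatched from C# as:
        //   shader.Dispatch(kernel, 1920 / 8, 960 / 8, 1); // 240 x 120 groups
        // (If the size weren't divisible by 8, you'd round the group count
        // up and early-out here when id.xy is outside the texture.)

        // id.xy is the integer pixel coordinate. Add 0.5 to land on the
        // pixel center, then divide by the texture size to get a 0-1 UV.
        float2 uv = (id.xy + 0.5) / float2(1920.0, 960.0);

        // Remap from (0, 1) to (-2, 2), as in the question:
        float2 p = uv * 4.0 - 2.0;

        Result[id.xy] = float4(p, 0.0, 1.0);
    }
    ```

    Dividing x and y by their own dimensions (1920 and 960 respectively, rather than the width for both) is what keeps the result from stretching along one axis.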
    Here is a tutorial on a simple texture effect using a compute shader, to help build a better understanding:
    https://bronsonzgeb.com/index.php/2021/07/17/pixelate-filter-post-processing-in-a-compute-shader/
     
    Last edited: Aug 3, 2022