Communicating between graphics pipeline and compute shaders

Discussion in 'Shaders' started by JamieT, Feb 2, 2016.

  1. JamieT

    JamieT

    Joined:
    Mar 20, 2013
    Posts:
    3
    Hi,

    I was hoping to get some advice on how to pass data from various stages of the graphics pipeline into a compute shader, and then back again. So far, I've only seen tutorials focused on passing data from the CPU into buffers for the compute shader.

    How would I (for example), pass the output data of my vertex shader into a compute shader? Then pass the output from the compute shader back into a fragment shader... Is this even possible? I'm guessing that I would do something like...

    StructuredBuffer<float> inputArray : register(t0);

    "Binding" data to a register? If that's even the correct term to use? I really don't know.. :) I'm pretty stuck with where to start to be honest.

    Any help in this area would be much appreciated.
     
  2. rageingnonsense

    rageingnonsense

    Joined:
    Dec 3, 2014
    Posts:
    99
    I'm a little confused why you would even want to do something like this. If you are in the vertex portion of your shader, you are already calculating stuff on the graphics card (as you are at any point in your shader, really). What benefit would you get from using a compute shader in between?

    I am not sure what you want to do is possible. But maybe if you explain why you want to do this, you will get a better response.
     
  3. JamieT

    JamieT

    Joined:
    Mar 20, 2013
    Posts:
    3
    I'm using the tessellation stages, then using displacement mapping on the new vertices. From there I hope to pass the output of the domain (or maybe geometry) shader into a compute shader to hopefully do some other calculations. I was reading earlier about "Stream Out" stage..(?) Not sure if that's where I need to be looking...?

    My example above wasn't to be taken literally, I was just looking for the "general rules" which apply to the idea of passing data from the graphics pipeline into a compute shader. I'm pretty sure it's possible to do that right? :-/
     
  4. rageingnonsense

    rageingnonsense

    Joined:
    Dec 3, 2014
    Posts:
    99
    I don't see why it would be possible. A compute shader really is just a way to get the graphics card to calculate things outside of the graphics pipeline. It exists as a way to leverage the power of the GPU for things other than graphics.

    If you are rendering things to the screen however, you are already using the gpu.

    Are you trying to get the graphics card to tessellate a mesh for you so you can use the output on the cpu side, and then also perform some other calculations on the mesh as a whole using the gpu?
     
  5. Ellenack

    Ellenack

    Joined:
    Feb 16, 2014
    Posts:
    41
    The answer is pretty simple: to use a compute shader, you have to dispatch it from the CPU. That's the whole idea of a compute shader. Trying to run a compute shader between a vertex shader and a fragment shader is meaningless, because you are already on the GPU. What do you want to achieve by doing this that you can't already do with the shaders in the rendering pipeline?
     
  6. JamieT

    JamieT

    Joined:
    Mar 20, 2013
    Posts:
    3
    I dunno... maybe I misread some information, but I'm sure there's a way to avoid transferring the data I want from GPU --> CPU, then CPU --> GPU, when the data is already in GPU memory. Doing that seems like a huge waste of time.

    Don't really want to reveal exactly what I'm implementing. Like I say, my initial post was just an example. No need to focus on that. Just looking for a way to access (from the compute shader), what's already in GPU memory, whether that's vertex positions, or whatever it is I decide to use.
     
    ModLunar likes this.
  7. Ellenack

    Ellenack

    Joined:
    Feb 16, 2014
    Posts:
    41
    Well, I am pretty sure you can't call a compute shader between a vertex shader and a fragment shader, if that's what you are trying to do. That's not their use. You can access a structured buffer (which you can modify with a compute shader, dispatched from script on the CPU), but that's pretty much it. The rendering pipeline is pretty closed.
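    In C#, that pattern looks roughly like this (the shader, kernel, and property names here are just placeholders, not anything from an actual project):

```csharp
using UnityEngine;

// One ComputeBuffer lives in GPU memory and is visible to both the
// compute shader and the rendering material - no per-frame round trip
// through the CPU is needed.
public class SharedBufferExample : MonoBehaviour
{
    public ComputeShader computeShader; // assumed to define a kernel named "CSMain"
    public Material material;           // assumed to declare StructuredBuffer<float3> positions
    ComputeBuffer buffer;
    int kernel;
    const int count = 1024;

    void Start()
    {
        buffer = new ComputeBuffer(count, sizeof(float) * 3);
        kernel = computeShader.FindKernel("CSMain");
        // Bind the same buffer on both sides.
        computeShader.SetBuffer(kernel, "positions", buffer);
        material.SetBuffer("positions", buffer);
    }

    void Update()
    {
        // The dispatch must come from the CPU, but the data itself
        // never leaves the GPU: no GetData()/SetData() each frame.
        computeShader.Dispatch(kernel, count / 64, 1, 1);
    }

    void OnDestroy()
    {
        buffer.Release();
    }
}
```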
     
  8. Marionette

    Marionette

    Joined:
    Feb 3, 2013
    Posts:
    349
    By my understanding, isn't that basically what the following does? (BTW, this is an excerpt from scrawk's excellent blog tutorial on compute shaders.)

    Code (CSharp):

    using UnityEngine;
    using System.Collections;

    public class BufferExample : MonoBehaviour
    {
        public Material material;
        ComputeBuffer buffer;
        const int count = 1024;
        const float size = 5.0f;

        void Start()
        {
            buffer = new ComputeBuffer(count, sizeof(float) * 3, ComputeBufferType.Default);
            float[] points = new float[count * 3];
            Random.seed = 0;
            for (int i = 0; i < count; i++)
            {
                points[i * 3 + 0] = Random.Range(-size, size);
                points[i * 3 + 1] = Random.Range(-size, size);
                points[i * 3 + 2] = 0.0f;
            }
            buffer.SetData(points);
        }

        void OnPostRender()
        {
            material.SetPass(0);
            material.SetBuffer("buffer", buffer);
            Graphics.DrawProcedural(MeshTopology.Points, count, 1);
        }

        void OnDestroy()
        {
            buffer.Release();
        }
    }
    and the shader:

    Code (HLSL):

    Shader "Custom/BufferShader"
    {
        SubShader
        {
            Pass
            {
                ZTest Always Cull Off ZWrite Off
                Fog { Mode off }

                CGPROGRAM
                #include "UnityCG.cginc"
                #pragma target 5.0
                #pragma vertex vert
                #pragma fragment frag

                uniform StructuredBuffer<float3> buffer;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                };

                v2f vert(uint id : SV_VertexID)
                {
                    float4 pos = float4(buffer[id], 1);
                    v2f OUT;
                    OUT.pos = mul(UNITY_MATRIX_MVP, pos);
                    return OUT;
                }

                float4 frag(v2f IN) : COLOR
                {
                    return float4(1, 0, 0, 1);
                }
                ENDCG
            }
        }
    }
    Now if you'll notice, there isn't any GetData() call on the buffer. You still need to pass the buffer in to the material for the shader, but since you aren't actually modifying the buffer on the CPU side, I'm not sure if you're just passing a reference to the GPU buffer to the shader, 'linking' them.

    please correct me if i'm wrong about my assumptions, because i'd like to know definitively as well..

    [edit] Actually, looking at this now, it seems the compute shader feeds the vert shader. If the vert shader could write to the buffer, such as via 'RWStructuredBuffer<float3> buffer', then the vert shader could act on the buffer too, but I don't think it can. Maybe with this?: uniform AppendStructuredBuffer<float3> appendBuffer
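    [edit 2] The compute side of that "compute feeds vert" idea would look something like this, if I understand it right (the kernel and buffer names are made up; the vert shader above would then read the same buffer as a StructuredBuffer<float3>):

```hlsl
// Kernel that updates positions in place. The C# side binds the same
// ComputeBuffer here (as "positions") and to the drawing material.
#pragma kernel CSMain

RWStructuredBuffer<float3> positions;

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float3 p = positions[id.x];
    p.z = sin(p.x + p.y);   // any per-element calculation
    positions[id.x] = p;
}
```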
     
    Last edited: Feb 24, 2016
    ModLunar likes this.
  9. NemoKrad

    NemoKrad

    Joined:
    Jan 16, 2014
    Posts:
    632
    I did a vlog post on Compute and Geometry shaders in Unity, if that helps. Though I am finding a few issues with them in Unity.
     
  10. Marionette

    Marionette

    Joined:
    Feb 3, 2013
    Posts:
    349
    Brilliant!

    Thank you so much for this! Your explanations are detailed, and you explain everything. Absolutely excellent ;)

    Without sounding ungrateful (hopefully), is there any way i might beg the tutorial source so that i might play around with it?

    If not, i understand ;)

    Again, well done!
     
    NemoKrad likes this.
  11. NemoKrad

    NemoKrad

    Joined:
    Jan 16, 2014
    Posts:
    632
    I have a FB page, "Charles Will Code It!" I think I put the source on there :) Ask to join and I will add you :D

    Glad you liked the tutorial too :D
     
  12. Marionette

    Marionette

    Joined:
    Feb 3, 2013
    Posts:
    349
    Excellent, done ;)

    cheers ;)
     
    NemoKrad likes this.
  13. andSol

    andSol

    Joined:
    May 8, 2016
    Posts:
    22
    @JamieT Did you ever find a more accurate answer to your original questions? I ask because I am in doubt about this too. What I am certain of is that the initial answers given here are wrong: it *does* make sense to use a compute shader together with vertex or fragment shaders, depending on the case.

    If for nothing else, it makes your life much easier in terms of handling data in and out, without having to resort to cumbersome texture buffers. In short, you can use plain arrays of structs to handle data from the CPU. Another obvious reason to pair a compute shader with other shaders: having a "main" kernel function do calculations before mesh data (e.g. vertices or fragments) is actually processed. Yet another reason, following from that one: spatial partitioning. It seems logical to me that it would be much easier to implement a quadtree with compute shaders, and quadtrees could be super useful for handling vertices and fragments.
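    For the array-of-structs point, I mean something roughly like this (the struct layout and kernel name here are just an illustration): the C# struct and the HLSL struct have to match field for field, and the stride passed to the ComputeBuffer has to match as well.

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Must mirror the HLSL-side struct exactly, field for field.
[StructLayout(LayoutKind.Sequential)]
struct Particle
{
    public Vector3 position;
    public Vector3 velocity;
}

public class StructBufferExample : MonoBehaviour
{
    public ComputeShader computeShader; // assumed to define a kernel named "Step"
    ComputeBuffer buffer;

    void Start()
    {
        var particles = new Particle[512];
        // Stride is the byte size of one struct (2 x Vector3 = 24 bytes).
        buffer = new ComputeBuffer(particles.Length, Marshal.SizeOf(typeof(Particle)));
        buffer.SetData(particles);
        computeShader.SetBuffer(computeShader.FindKernel("Step"), "particles", buffer);
    }

    void OnDestroy()
    {
        buffer.Release();
    }
}
```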
     
    NemoKrad likes this.
  14. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    3,579
    Is it possible to make a compute shaders work across multiple frames without scheduling chunks separately?
     
  15. barneypitt

    barneypitt

    Joined:
    Mar 29, 2018
    Posts:
    4
    I agree, the initial answers are dead, dead wrong. The whole point of compute shaders is to integrate with the pipeline... nobody in their right mind would use a GLSL compute shader over OpenCL for general-purpose GPU work (they're utterly horrible compared to OpenCL). So if they didn't integrate with the pipeline, they would have no reason to exist.

    I am baffled by the answers which say "why would you want to do that?" The posters clearly have no idea how hobbled vertex/fragment shaders are compared to a GPGPU kernel: no shared memory, no way to implement the near-instant repeated "texture" lookups that local memory gives you, no way to use atomics that doesn't completely kill your performance. A fragment shader which needs to, say, access dozens (k) of pixels across N * N executions - O(N*N*k) work - can very often be replaced by a GL compute shader or OpenCL kernel which accesses those pixels in only N executions - O(N*k).
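    As a trivial sketch of what local (groupshared in HLSL terms) memory buys you, with made-up names: each 64-thread group loads its slice of the input once, then every thread reads its neighbours from fast on-chip storage instead of re-fetching from the buffer.

```hlsl
#pragma kernel Blur

StructuredBuffer<float> input;
RWStructuredBuffer<float> output;

// On-chip memory shared by the 64 threads of one group.
groupshared float cache[64];

[numthreads(64, 1, 1)]
void Blur(uint3 id : SV_DispatchThreadID, uint gi : SV_GroupIndex)
{
    // One buffer fetch per thread, then sync before neighbours read it.
    cache[gi] = input[id.x];
    GroupMemoryBarrierWithGroupSync();

    // Average with neighbours inside the group (edges clamped).
    float left  = cache[max(gi, 1u) - 1u];
    float right = cache[min(gi, 62u) + 1u];
    output[id.x] = (left + cache[gi] + right) / 3.0;
}
```

    A fragment shader doing the same blur would issue three buffer/texture fetches per pixel instead of one.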
     
    landonth, BradZoob and ModLunar like this.
  16. asdzxcv777

    asdzxcv777

    Joined:
    Jul 17, 2017
    Posts:
    29
    Lel ... those posts from 2016 are so hilarious !!!
    :p:p:p
     