Hello, I just wanted to know if we can use the stream-output stage of the pipeline in Unity to read back the geometry generated by the geometry shader on the CPU, or to resend it for another pass on the GPU. If someone has already tried this, a little example of how to declare everything would be useful, as I can't find any documentation on this.
Here's an example pass from the examples Aras posted a while back. If it's unclear, outStream.RestartStrip() starts a new strip (allowing unconnected triangles to be output). There is also the simple billboard shader that I found on the forums as well; it might be a simpler example. Hope this helps!

Code (csharp):

Pass {
    ZWrite Off ZTest Always Cull Off Fog { Mode Off }
    Blend SrcAlpha One

    CGPROGRAM
    #pragma target 5.0
    #pragma vertex vert
    #pragma geometry geom
    #pragma fragment frag
    #include "UnityCG.cginc"

    StructuredBuffer<float2> pointBuffer;

    struct vs_out {
        float4 pos : SV_POSITION;
    };

    vs_out vert (uint id : SV_VertexID)
    {
        vs_out o;
        o.pos = float4(pointBuffer[id] * 2.0 - 1.0, 0, 1);
        return o;
    }

    struct gs_out {
        float4 pos : SV_POSITION;
        float2 uv  : TEXCOORD0;
    };

    float _Size;

    [maxvertexcount(4)]
    void geom (point vs_out input[1], inout TriangleStream<gs_out> outStream)
    {
        float dx = _Size;
        float dy = _Size * _ScreenParams.x / _ScreenParams.y;

        gs_out output;
        output.pos = input[0].pos + float4(-dx,  dy, 0, 0); output.uv = float2(0, 0); outStream.Append(output);
        output.pos = input[0].pos + float4( dx,  dy, 0, 0); output.uv = float2(1, 0); outStream.Append(output);
        output.pos = input[0].pos + float4(-dx, -dy, 0, 0); output.uv = float2(0, 1); outStream.Append(output);
        output.pos = input[0].pos + float4( dx, -dy, 0, 0); output.uv = float2(1, 1); outStream.Append(output);
        outStream.RestartStrip();
    }

    sampler2D _Sprite;
    fixed4 _Color;

    fixed4 frag (gs_out i) : COLOR0
    {
        fixed4 col = tex2D(_Sprite, i.uv);
        return _Color * col;
    }
    ENDCG
}

billboard shader:

Code (csharp):

Shader "Custom/GS Billboard"
{
    Properties
    {
        _SpriteTex ("Base (RGB)", 2D) = "white" {}
        _Size ("Size", Range(0, 3)) = 0.5
    }

    SubShader
    {
        Pass
        {
            Tags { "RenderType" = "Opaque" }
            LOD 200

            CGPROGRAM
            #pragma target 5.0
            #pragma vertex VS_Main
            #pragma fragment FS_Main
            #pragma geometry GS_Main
            #include "UnityCG.cginc"

            // **************************************************************
            // Data structures
            // **************************************************************
            struct GS_INPUT
            {
                float4 pos    : POSITION;
                float3 normal : NORMAL;
                float2 tex0   : TEXCOORD0;
            };

            struct FS_INPUT
            {
                float4 pos  : POSITION;
                float2 tex0 : TEXCOORD0;
            };

            // **************************************************************
            // Vars
            // **************************************************************
            float _Size;
            float4x4 _VP;
            Texture2D _SpriteTex;
            SamplerState sampler_SpriteTex;

            // **************************************************************
            // Shader Programs
            // **************************************************************

            // Vertex Shader ------------------------------------------------
            GS_INPUT VS_Main(appdata_base v)
            {
                GS_INPUT output = (GS_INPUT)0;
                output.pos = mul(_Object2World, v.vertex);
                output.normal = v.normal;
                output.tex0 = float2(0, 0);
                return output;
            }

            // Geometry Shader ----------------------------------------------
            [maxvertexcount(4)]
            void GS_Main(point GS_INPUT p[1], inout TriangleStream<FS_INPUT> triStream)
            {
                float3 up = float3(0, 1, 0);
                float3 look = _WorldSpaceCameraPos - p[0].pos;
                look.y = 0;
                look = normalize(look);
                float3 right = cross(up, look);

                float halfS = 0.5f * _Size;

                float4 v[4];
                v[0] = float4(p[0].pos + halfS * right - halfS * up, 1.0f);
                v[1] = float4(p[0].pos + halfS * right + halfS * up, 1.0f);
                v[2] = float4(p[0].pos - halfS * right - halfS * up, 1.0f);
                v[3] = float4(p[0].pos - halfS * right + halfS * up, 1.0f);

                float4x4 vp = mul(UNITY_MATRIX_MVP, _World2Object);

                FS_INPUT pIn;
                pIn.pos = mul(vp, v[0]);
                pIn.tex0 = float2(1.0f, 0.0f);
                triStream.Append(pIn);

                pIn.pos = mul(vp, v[1]);
                pIn.tex0 = float2(1.0f, 1.0f);
                triStream.Append(pIn);

                pIn.pos = mul(vp, v[2]);
                pIn.tex0 = float2(0.0f, 0.0f);
                triStream.Append(pIn);

                pIn.pos = mul(vp, v[3]);
                pIn.tex0 = float2(0.0f, 1.0f);
                triStream.Append(pIn);
            }

            // Fragment Shader ----------------------------------------------
            float4 FS_Main(FS_INPUT input) : COLOR
            {
                return _SpriteTex.Sample(sampler_SpriteTex, input.tex0);
            }
            ENDCG
        }
    }
}
Thanks for your answer, Pyromuffin! I was referring more specifically to the stream-output stage of the pipeline as described here. This is a stage of the pipeline where you can read back on the CPU the data generated by the geometry shader; it can be useful for generating static geometry on the GPU. I would like to know the syntax to set this up on the Unity application side. There is no documentation at all on geometry shaders, so I'm not even sure we can do that. :neutral:
Ah, I see. My mistake. My best guess would be to put the data into an (append) structured buffer and then use GetData() to read it into an array on the CPU side. It doesn't use that specific pipeline stage, but it would accomplish close to the same thing.
Yes, I used a compute shader and GetData() on a compute buffer as a backup for now. Not sure how it compares in terms of speed to the real solution, though.
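For anyone landing here later, a minimal sketch of that compute-buffer workaround looks something like this. The "Generate" kernel name and the "_Points" buffer name are hypothetical placeholders; ComputeBuffer.GetData() is the real Unity API for the blocking CPU read-back:

Code (csharp):

using UnityEngine;

// Sketch: generate points on the GPU in a compute shader,
// then read them back on the CPU with ComputeBuffer.GetData().
public class GeometryReadback : MonoBehaviour
{
    public ComputeShader generator; // assumed to define a "Generate" kernel writing to _Points
    const int count = 1024;

    void Start()
    {
        // One float3 (12 bytes) per generated point.
        var buffer = new ComputeBuffer(count, sizeof(float) * 3);

        int kernel = generator.FindKernel("Generate");
        generator.SetBuffer(kernel, "_Points", buffer);
        generator.Dispatch(kernel, count / 64, 1, 1); // assumes [numthreads(64,1,1)]

        // Blocking read-back of the GPU-generated data to the CPU.
        var points = new Vector3[count];
        buffer.GetData(points);
        buffer.Release();

        // points[] can now be used to build a static Mesh, etc.
    }
}

Note that GetData() stalls the CPU until the GPU work completes, which is the main speed concern compared to a true stream-output path that keeps the data on the GPU.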
Can you not use structured buffers in the geometry shader? It looks like the append buffer is not allowed, but a regular structured buffer is. Maybe use it with the primitive ID as an index?
Are you sure that you can use an RWStructuredBuffer in a geometry shader? Aren't those reserved for pixel or compute shaders?
Ah, you're right again! I feel dumb. Indeed, all of the write-enabled objects are reserved for pixel or compute shaders. Given that limitation, the only way to do this with the geometry shader is to get access to the stream-output stage, like you requested. Hah, I'm sorry I couldn't help you, but at least I learned something new! Maybe try to get the attention of some Unity staff? Thanks again!