I have heard there is no way to use Geometry shaders on a Mac... is this true? If so, does anyone know a way to use similar effects on a Mac? Any help and input welcome!
Geometry shaders are possible on macOS. To use them you must force the game (and the editor, if you're developing on the Mac) to use OpenGL instead of the default Metal API:
https://docs.unity3d.com/Manual/CommandLineArguments.html
https://docs.unity3d.com/Manual/OpenGLCoreDetails.html

But this comes with several huge caveats. macOS's OpenGL support is limited to OpenGL 4.1, which means no Compute shader support when using OpenGL. Note that the hardware in desktop Macs from roughly the last decade has supported Compute shaders all along; if you ran Windows on those Intel-based Macs they could use OpenGL 4.3 or better, which supports both Geometry shaders and Compute shaders. Apple explicitly chose not to support any version of OpenGL past 4.1 in macOS.

Not having Compute shader support means a lot of Unity's post-processing effects no longer work, or fall back to slower versions of the effect. There are also several assets on the store that make use of Compute shaders, which obviously will not work either. Likewise, there are features of both URP and HDRP that will not work, as they are also Compute shader based.

Additionally, while you can run "OpenGL 4.1" applications on the new M1-based Macs, those will not support Geometry shaders, as the hardware does not support them at all. That is in part because Apple's Metal graphics API does not support Geometry shaders, so Apple didn't bother to support them in their own chips.

So the question you might be wondering is: why? Because Geometry shaders, while convenient, are horribly inefficient. That's not to say that Geometry shaders are always the slowest option, just that there are more efficient options, namely Compute shaders. Which is why Apple's Metal API supports Compute, but not Geometry shaders. Metal technically doesn't even support Tessellation shaders directly; those are emulated with a Compute shader.
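For reference, forcing the OpenGL Core backend is done with Unity's documented `-force-glcore` command-line flag (the application paths below are illustrative, not from this thread):

```
# Launch the editor with the OpenGL Core backend (documented Unity flag):
/Applications/Unity/Unity.app/Contents/MacOS/Unity -force-glcore

# Launch a built macOS player the same way (app name is illustrative):
open -a MyGame --args -force-glcore
```

Alternatively, in Player Settings you can disable the automatic graphics API selection for Mac and add OpenGLCore to the API list, so the build uses it without launch flags.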
And that's kind of the answer: if you want to use Geometry shaders for something, you probably actually want to use a Compute shader. Unfortunately, using Compute shaders for this requires a lot more work to implement, and Unity doesn't make things particularly easy here. If you want to take some mesh and modify it with a Compute shader, you have to manually copy the mesh data into buffers that you pass to the Compute shader. You then have to write custom shaders that can take the output of the Compute shader in place of the mesh's vertex data. See this tutorial that describes using Compute shaders to do deformation: https://medium.com/swlh/oculus-quest-mesh-deformation-with-compute-shaders-in-unity-9caa1b904fda It's not quite the same as generating new geometry from a compute shader, but unfortunately there aren't a lot of tutorials on that topic.
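A minimal sketch of that copy-to-buffer workflow (the kernel name "Deform" and the property name "_Vertices" are my own placeholders, not from the linked tutorial):

```csharp
// Sketch: copy mesh vertices into a ComputeBuffer, deform them on the GPU,
// and let a custom material's vertex shader read the buffer instead of the mesh.
using UnityEngine;

public class ComputeDeformSketch : MonoBehaviour
{
    public ComputeShader deformShader; // assumed asset with a "Deform" kernel
    public Material renderMaterial;    // custom shader reading StructuredBuffer _Vertices
    private ComputeBuffer vertexBuffer;
    private int kernel;

    void Start()
    {
        Vector3[] verts = GetComponent<MeshFilter>().mesh.vertices;
        vertexBuffer = new ComputeBuffer(verts.Length, sizeof(float) * 3);
        vertexBuffer.SetData(verts); // manual copy of the mesh data to the GPU
        kernel = deformShader.FindKernel("Deform");
        deformShader.SetBuffer(kernel, "_Vertices", vertexBuffer);
        renderMaterial.SetBuffer("_Vertices", vertexBuffer); // vertex shader indexes this by SV_VertexID
    }

    void Update()
    {
        deformShader.SetFloat("_Time", Time.time);
        deformShader.Dispatch(kernel, Mathf.CeilToInt(vertexBuffer.count / 64f), 1, 1);
    }

    void OnDestroy() => vertexBuffer?.Release();
}
```

The rendering shader then declares `StructuredBuffer<float3> _Vertices;` and fetches `_Vertices[vertexId]` in its vertex stage rather than using the mesh's own position stream.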
Hi, is there any update on this? Can I use Geometry shaders on Mac now without any issue or downgrade of rendering? If not, that would be strange: I have used Geometry shaders and they have incredible capabilities that make other systems look years behind, especially in ease of use. Not having them supported at the hardware level means Macs have suddenly become a no-go for me, not even remotely close. Also, Geometry shaders are really fast. Here is an example of the system I work on. Thanks
There was not and there will not be any update. Geometry shaders will not work on Mac (or iOS). They were explicitly left out of the Metal API specification, and Apple decided not to even implement them on their GPUs. Geometry shaders are a dead-end tech, especially after the advent of mesh shaders. They are only "very fast" on mid-range and high-end PC GPUs, and any effect using them can be made more efficiently using compute shaders and indirect instanced rendering. This is especially true on gen8 consoles (PS4/XB1/NSW). Geometry shaders are just still popular among Unity developers because they are easy to write using surface shaders, but other engines abandoned them several years ago.
I use them with URP without a surface shader, and they are extremely easy to work with and ideal for dynamic effects. What is ready in 5 minutes with a geometry shader could take me 2-3 months of tricks with compute shaders. The point is that geometry shaders are extremely easy to use and very fast, even on my five-year-old laptop (a 1050 GTX GPU), where the video above was recorded. So while I understand the decision to abandon them for various reasons, I personally find it a crazy minus for Mac. One big question: if this tech is abandoned by everyone, why do GPUs still implement it instead of using the related hardware for something else? Also, I am not aware of how you can dynamically augment the created geometry with compute shaders; so far I can only use them to manipulate geometry, not create it on the fly.
GPUs still implement it so as not to break existing games that use it. Apple has no regard for backwards compatibility, so they can afford to drop it. Yes, you can't augment on the fly with compute shaders, but in most use cases augmentation can be replaced by procedural drawing. In the case of grass, for example, there's no need for augmentation since everything is pretty much a quad, and by changing the indirect index count or instance count you can determine how many indices will be rendered on the fly, based on data output from compute shaders. This article explains the drawbacks of geometry shaders: http://www.joshbarczak.com/blog/?p=667
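A rough sketch of that indirect pattern, with buffer and kernel names of my own choosing (a hypothetical "Cull" kernel appends visible blade positions to an append buffer, its counter becomes the indirect instance count, and one quad is drawn per surviving instance):

```csharp
using UnityEngine;

public class IndirectGrassSketch : MonoBehaviour
{
    public ComputeShader cullShader;  // assumed: kernel "Cull" appends visible blade positions
    public Material grassMaterial;    // shader expands each instance into a quad from _Positions
    const int MaxBlades = 100000;

    ComputeBuffer positions; // append buffer filled on the GPU
    ComputeBuffer args;      // indirect args: verts per instance, instance count, offsets

    void Start()
    {
        positions = new ComputeBuffer(MaxBlades, sizeof(float) * 3, ComputeBufferType.Append);
        args = new ComputeBuffer(1, 4 * sizeof(uint), ComputeBufferType.IndirectArguments);
        args.SetData(new uint[] { 4, 0, 0, 0 }); // 4 vertices per quad; instance count written below
    }

    void Update()
    {
        positions.SetCounterValue(0);
        int kernel = cullShader.FindKernel("Cull");
        cullShader.SetBuffer(kernel, "_Positions", positions);
        cullShader.Dispatch(kernel, MaxBlades / 64, 1, 1);
        // Copy the append counter into the instance-count slot of the args buffer:
        ComputeBuffer.CopyCount(positions, args, sizeof(uint));
        grassMaterial.SetBuffer("_Positions", positions);
        Graphics.DrawProceduralIndirect(grassMaterial,
            new Bounds(Vector3.zero, Vector3.one * 100), MeshTopology.Quads, args);
    }

    void OnDestroy() { positions.Release(); args.Release(); }
}
```

The key point is that the CPU never knows how many blades survived culling; the GPU writes the count itself, which is what replaces geometry-shader-style amplification.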
Thanks for the article, it is indeed interesting. The thing is that augmenting polygons is very important for my use case (real-time seasonal growth), and I don't see any performance issue, though maybe that's because I use Intel. So it's still fifty-fifty: if you can get special FX the easy way, performing fast, why give that up for a much harder to program and less versatile system, for maybe minor gains? I can see that some platforms do not support them well, but again, that is a downside of the platform that chooses not to support this versatile system.
For grass rendering, you don't even need geometry shaders / compute shaders / instancing. All you need is Graphics.DrawProceduralNow and a vertex shader with SV_VertexID. It allows you to build geometry at runtime. For example, I used it for a simple hair rendering demo based on NURBS. It works on Mac (Metal API). Source code:
Code (CSharp):
using UnityEngine;

public class NurbsHair : MonoBehaviour
{
    public Shader NurbsHairShader;
    public Color HairColor = new Color(1.0f, 0.5f, 0.5f, 1.0f);
    public Color HairEnds = new Color(0.0f, 0.0f, 0.0f, 1.0f);
    [Range(1.0f, 10.0f)] public float HairPower = 5.0f;
    [Range(1, 100000)] public int HairCount = 50000;
    [Range(0.0f, 0.5f)] public float HairEffect = 0.0f;
    [Range(0.1f, 1.0f)] public float HairScale = 0.75f;
    [Range(2, 64)] public int HairQuality = 32;
    [Range(0.0f, 10.0f)] public float HairWind = 0.0f;
    public bool HairShading = true;
    public enum HairMode { NURBSDerivatives = 0, VertexPositions = 1 }
    public HairMode HairNormalsCalculation = HairMode.VertexPositions;
    public bool HairDebugNormals = false;
    public Vector4 HairWeights = Vector4.one;
    private Material _Material;

    void Start()
    {
        if (NurbsHairShader == null) NurbsHairShader = Shader.Find("Nurbs Hair");
        _Material = new Material(NurbsHairShader);
    }

    void OnRenderObject()
    {
        _Material.SetPass(0);
        _Material.SetFloat("_HairScale", HairScale);
        _Material.SetFloat("_HairEffect", HairEffect);
        _Material.SetFloat("_HairPower", HairPower);
        _Material.SetInt("_HairQuality", HairQuality * 2); // must be even number
        _Material.SetColor("_HairColor", HairColor);
        _Material.SetColor("_HairEnds", HairEnds);
        _Material.SetVector("_HairWeights", HairWeights);
        _Material.SetVector("_HairPosition", this.transform.position);
        _Material.SetInt("_HairShading", System.Convert.ToInt32(HairShading));
        _Material.SetFloat("_HairWind", HairWind);
        _Material.SetInt("_HairNormalsMode", (int)HairNormalsCalculation);
        _Material.SetInt("_HairDebugNormals", System.Convert.ToInt32(HairDebugNormals));
        Graphics.DrawProceduralNow(MeshTopology.Lines, HairQuality * HairCount, 1);
    }
}
Code (CSharp):
Shader "Nurbs Hair"
{
    Subshader
    {
        Pass
        {
            Cull Off
            CGPROGRAM
            #pragma vertex VSMain
            #pragma fragment PSMain
            #pragma target 5.0

            uniform float3 _HairColor, _HairEnds;
            uniform float4 _HairPosition, _HairWeights;
            uniform float _HairScale, _HairEffect, _HairPower, _HairWind;
            uniform int _HairQuality, _HairShading, _HairNormalsMode, _HairDebugNormals;

            // L. Piegl, W. Tiller, "The NURBS Book", Springer Verlag, 1997
            // http://nurbscalculator.in/
            float3 NurbsCurve (float4 cps[4], int cpsLength, float knots[8], int knotsLength, float u)
            {
                const int degree = 3;
                for (int t = 0; t < cpsLength; t++) cps[t].xyz *= cps[t].w;
                int index = 0;
                float4 p = 0;
                int n = knotsLength - degree - 2;
                if (u == (knots[n + 1])) index = n;
                int low = degree;
                int high = n + 1;
                int mid = (int)floor((low + high) / 2.0);
                [unroll(16)]
                while (u < knots[mid] || u >= knots[mid + 1])
                {
                    if (u < knots[mid]) high = mid; else low = mid;
                    mid = (int)floor((low + high) / 2.0);
                }
                index = mid;
                float N[degree + 1];
                float left[degree + 1];
                float right[degree + 1];
                float saved = 0.0, temp = 0.0;
                N[0] = 1.0;
                [loop]
                for (int j = 1; j <= degree; j++)
                {
                    left[j] = (u - knots[index + 1 - j]);
                    right[j] = knots[index + j] - u;
                    saved = 0.0f;
                    [loop]
                    for (int r = 0; r < j; r++)
                    {
                        temp = N[r] / (right[r + 1] + left[j - r]);
                        N[r] = saved + right[r + 1] * temp;
                        saved = left[j - r] * temp;
                    }
                    N[j] = saved;
                }
                for (int i = 0; i <= degree; i++) p += cps[index - degree + i] * N[i];
                return (p.w != 0) ? p.xyz / p.w : p.xyz;
            }

            float3 NurbsCurveTangent (float4 cps[4], int cpsLength, float knots[8], int knotsLength, float u)
            {
                const int order = 1;  // order of the derivative
                const int degree = 3; // curve degree
                float ders[order + 1][degree + 1];
                int span = 0;
                int n = knotsLength - degree - 2;
                if (u == (knots[n + 1])) span = n;
                int low = degree;
                int high = n + 1;
                int mid = (int)floor((low + high) / 2.0);
                [unroll(16)]
                while (u < knots[mid] || u >= knots[mid + 1])
                {
                    if (u < knots[mid]) high = mid; else low = mid;
                    mid = (int)floor((low + high) / 2.0);
                }
                span = mid;
                float left[degree + 1];
                float right[degree + 1];
                float ndu[degree + 1][degree + 1];
                ndu[0][0] = 1.0;
                [loop]
                for (int j = 1; j <= degree; j++)
                {
                    left[j] = u - knots[span + 1 - j];
                    right[j] = knots[span + j] - u;
                    float saved = 0.0;
                    [loop]
                    for (int r = 0; r < j; r++)
                    {
                        ndu[j][r] = right[r + 1] + left[j - r];
                        float temp = ndu[r][j - 1] / ndu[j][r];
                        ndu[r][j] = saved + right[r + 1] * temp;
                        saved = left[j - r] * temp;
                    }
                    ndu[j][j] = saved;
                }
                for (int m = 0; m <= degree; m++) ders[0][m] = ndu[m][degree];
                float a[2][degree + 1];
                for (int r = 0; r <= degree; r++)
                {
                    int s1 = 0;
                    int s2 = 1;
                    a[0][0] = 1.0;
                    [unroll(order)]
                    for (int k = 1; k <= order; k++)
                    {
                        float d = 0.0;
                        int rk = r - k;
                        int pk = degree - k;
                        int j1 = 0;
                        int j2 = 0;
                        if (r >= k)
                        {
                            a[s2][0] = a[s1][0] / ndu[pk + 1][rk];
                            d = a[s2][0] * ndu[rk][pk];
                        }
                        j1 = (rk >= -1) ? 1 : -rk;
                        j2 = (r - 1 <= pk) ? k - 1 : degree - r;
                        [unroll(order+1)]
                        for (int j = j1; j <= j2; j++)
                        {
                            a[s2][j] = (a[s1][j] - a[s1][j - 1]) / ndu[pk + 1][rk + j];
                            d += a[s2][j] * ndu[rk + j][pk];
                        }
                        if (r <= pk)
                        {
                            a[s2][k] = -a[s1][k - 1] / ndu[pk + 1][r];
                            d += a[s2][k] * ndu[r][pk];
                        }
                        ders[k][r] = d;
                        int s3 = s1;
                        s1 = s2;
                        s2 = s3;
                    }
                }
                float f = degree;
                [unroll(order)]
                for (int k = 1; k <= order; k++)
                {
                    for (int h = 0; h <= degree; h++) ders[k][h] *= f;
                    f *= degree - k;
                }
                int du = order < degree ? order : degree;
                float3 result[order + 1];
                for (int q = 0; q <= du; q++)
                {
                    for (int j = 0; j <= degree; j++)
                    {
                        float4 v = cps[span - degree + j];
                        result[q].xyz += v.xyz * ders[q][j];
                    }
                }
                return normalize(result[1]);
            }

            float Mod (float x, float y)
            {
                return x - y * floor(x / y);
            }

            float4 Hash(uint p) // Returns value in range -1..1
            {
                p = 1103515245U*((p >> 1U)^(p));
                uint h32 = 1103515245U*((p)^(p>>3U));
                uint n = h32^(h32 >> 16);
                uint4 rz = uint4(n, n*16807U, n*48271U, n*69621U);
                return float4((rz >> 1) & (uint4)(0x7fffffffU)) / float(0x7fffffff) * 2.0 - 1.0;
            }

            float2 PolarToCartesian (float2 p)
            {
                return p.x * float2(cos(p.y), sin(p.y));
            }

            float4 VSMain (uint vertexId : SV_VertexID, out float3 color : COLOR, out float3 normal : NORMAL) : SV_POSITION
            {
                float strand = float(_HairQuality); // amount of vertices per strand, default is 64
                float instance = floor(vertexId / strand); // instance ID
                float id = Mod(vertexId, strand); // vertex ID
                float t = max((id + Mod(id, 2.0) - 1.0), 0.0) / (strand - 1.0); // interpolator
                float4 n = Hash(uint(instance + 123u)); // noise
                float2 k = PolarToCartesian(float2(n.x * 3.0, n.y * 16.0));
                float4 controlPoints[4] = {0..xxxx, 0..xxxx, 0..xxxx, 0..xxxx};
                float knotVector[8] = {0.0, 0.0, 0.0, 0.0, 1.0 - _HairEffect, 1.0, 1.0, 1.0};
                float wind = sin(_Time.g * n.x * _HairWind) * 0.1;
                controlPoints[0] = float4(0.0, 0.0, 0.0, _HairWeights.x);
                controlPoints[1] = float4(k.x / 4.0, n.z + 1, k.y / 4.0, _HairWeights.y);
                controlPoints[2] = float4(k.x / 2.0, n.w + 1, k.y / 2.0, _HairWeights.z);
                controlPoints[3] = float4(k.x, -1.0 + n.w * 0.5 + wind, k.y, _HairWeights.w);
                float3 localPos = NurbsCurve(controlPoints, 4, knotVector, 8, t) * _HairScale;
                normal = _HairNormalsMode > 0 ? normalize(localPos) : NurbsCurveTangent(controlPoints, 4, knotVector, 8, t);
                color = float4(lerp(_HairColor, _HairEnds, pow(t, _HairPower)), 1);
                return UnityObjectToClipPos(float4(localPos + _HairPosition.xyz, 1.0));
            }

            float4 PSMain (float4 vertex : SV_POSITION, float3 color : COLOR, float3 normal : NORMAL) : SV_Target
            {
                float angle = 1.0 - length(_WorldSpaceLightPos0.xz) / length(_WorldSpaceLightPos0.xyz);
                float3 lightDir = normalize(_WorldSpaceLightPos0.xyz);
                float3 normalDir = normalize(normal);
                float diffuse = max(dot(lightDir, normalDir), angle);
                return _HairDebugNormals > 0 ? float4(normalDir, 1.0) : (_HairShading > 0 ? float4(diffuse.xxx * color, 1.0) : float4(color, 1.0));
            }
            ENDCG
        }
    }
}
If you want to augment geometry progressively from compute, you can as of Unity 2020.1 and up. https://docs.google.com/document/d/1QC7NV7JQcvibeelORJvsaTReTyszllOlxdfEsaVL2oA/edit https://github.com/Unity-Technologies/MeshApiExamples https://forum.unity.com/threads/feedback-wanted-mesh-compute-shader-access.1096531/
I had a look at those, and it seems there is still nothing like the extremely fast, GPU-only Geometry shaders. It's much more complex to implement, needs to be on the CPU side with DOTS, and it looks like the vertices are still not dynamically added on the GPU side, even when using compute to update. Unless I am missing something, Geometry shaders still seem like alien tech compared to the stone-age tech of mesh shaders, which still look a lot like the compute shader approach.
No, you're missing what this allows you to do. It's all GPU-side. It allows you to get the GPU-side pointer to the mesh data so that you can manipulate and use that data from a compute or fragment shader. Nothing to do with DOTS or the CPU, other than the CPU triggering the compute dispatch; "compute" shaders run on the GPU, not the CPU. It also allows you to avoid having to update the data every frame if it doesn't need to change, and avoids needing to run for every single shader pass, greatly reducing GPU load.
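A minimal sketch of that GPU-side access, assuming a recent Unity version (2021.2+, I believe, for Mesh.GetVertexBuffer) and a hypothetical compute kernel named "Modify":

```csharp
using UnityEngine;

public class GpuMeshAccessSketch : MonoBehaviour
{
    public ComputeShader modifier; // assumed asset with a "Modify" kernel writing positions in place

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        mesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw; // opt in to raw GPU access
        GraphicsBuffer vb = mesh.GetVertexBuffer(0);          // GPU-side handle to the vertex buffer

        int kernel = modifier.FindKernel("Modify");
        modifier.SetBuffer(kernel, "_VertexBuffer", vb);
        modifier.SetInt("_Stride", mesh.GetVertexBufferStride(0)); // bytes per vertex in stream 0
        modifier.Dispatch(kernel, Mathf.CeilToInt(mesh.vertexCount / 64f), 1, 1);
        vb.Dispose();
    }
}
```

The mesh data never round-trips through the CPU: the compute shader reads and writes the same buffer the renderer draws from, and since the result persists in the buffer, you only dispatch again when something actually changes.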
Is there any sample of a compute shader that dynamically changes the vertex count, for example inside the GPU side in the compute part? I recall I had to rigidly define the matrices in C# to pass the data, and that was not dynamic; doing it dynamically would not be efficient (e.g. changing the array sizes each frame when calling Dispatch, etc.). Of course I could define a vast array from the start, with redundant capacity to facilitate adding more vertices, but that is also not efficient. Another aspect is that a Geometry shader can operate directly on existing geometry; e.g. I use it for voxelization of the scene in a replacement shader, which is another use that can't be covered by compute shaders.
Just going to link my response here because we're kind of getting a mirror conversation going on here... https://forum.unity.com/threads/fee...ute-shader-access.1096531/page-2#post-8490812 We'll keep the conversation over there.