
Compute Shader: lack of support for most basic data types and operations in OpenGL / Vulkan

Discussion in 'Shaders' started by michal_gjk, Sep 4, 2020.

  1. michal_gjk

    Joined:
    Aug 13, 2016
    Posts:
    69
    Bug Report Case: 1275152

    Windows 10; Unity 2019.4.8f1 also 2020.1.4f1

    Compute Shader support for OpenGL and Vulkan (Metal untested) seems non-existent.

    When the type of a buffer is anything more complicated than <int>, for example <float4> or <SomeStruct>, it appears to be translated to a uint[] array, and every value that is read or stored goes through uintBitsToFloat(). Unfortunately this causes all kinds of problems. For example, InterlockedMin() is translated to the uint variant, atomicMin(uint, uint). There is nothing I can do to force the int variant, atomicMin(int, int), even if I explicitly cast everything to int. That obviously fails to compare negative numbers properly and returns incorrect results.
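
    To illustrate on the CPU side why the unsigned reinterpretation breaks the comparison, here is a plain C# snippet (purely illustrative, not Unity or generated code):

    Code (CSharp):

    using System;

    class ReinterpretDemo
    {
        static void Main()
        {
            int a = -5;
            int b = 3;

            // Reinterpreting the bit pattern of -5 as a uint gives 4294967291,
            // which compares greater than any non-negative value.
            uint ua = unchecked((uint)a);
            uint ub = (uint)b;

            Console.WriteLine(Math.Min(a, b));   // -5 (signed min, correct)
            Console.WriteLine(Math.Min(ua, ub)); // 3  (unsigned min, wrong for negative inputs)
        }
    }

    This is exactly the failure mode when an InterlockedMin() on int data ends up as the uint overload of atomicMin().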

    A simple compute shader example demonstrating the problem:

    Code (CSharp):

    struct ScopeMinMax
    {
        int min;
    };

    RWStructuredBuffer<float4> _Buff;
    RWStructuredBuffer<ScopeMinMax> _Min;

    #pragma kernel BugRepro

    [numthreads(8, 8, 1)]
    void BugRepro(uint3 id : SV_DispatchThreadID)
    {
        InterlockedMin((int)_Min[0].min, (int)_Buff[0].x);
    }
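
    For reference, here is a minimal C# dispatch sketch that would surface the incorrect minimum (the component name and setup are placeholders, not from the actual project):

    Code (CSharp):

    using UnityEngine;

    public class BugReproDispatch : MonoBehaviour
    {
        // Assign the .compute asset containing the BugRepro kernel in the Inspector.
        public ComputeShader shader;

        struct ScopeMinMax { public int min; }

        void Start()
        {
            var buff = new ComputeBuffer(1, sizeof(float) * 4); // one float4
            var min  = new ComputeBuffer(1, sizeof(int));       // one ScopeMinMax

            buff.SetData(new[] { new Vector4(-5f, 0f, 0f, 0f) }); // negative value to min against
            min.SetData(new[] { new ScopeMinMax { min = 0 } });

            int kernel = shader.FindKernel("BugRepro");
            shader.SetBuffer(kernel, "_Buff", buff);
            shader.SetBuffer(kernel, "_Min", min);
            shader.Dispatch(kernel, 1, 1, 1);

            var result = new ScopeMinMax[1];
            min.GetData(result);
            Debug.Log(result[0].min); // expected -5; with the uint atomicMin translation it stays 0

            buff.Release();
            min.Release();
        }
    }
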
    Below is the buggy GLSL code Unity currently generates (notice the forced cast to uint in the atomicMin() call):

    Code (CSharp):

    **** Platform OpenGL Core:
    Compiled code for kernel BugRepro
    keywords: <none>
    #version 430
    #extension GL_ARB_shading_language_420pack : require

    #define HLSLCC_ENABLE_UNIFORM_BUFFERS 1
    #if HLSLCC_ENABLE_UNIFORM_BUFFERS
    #define UNITY_UNIFORM
    #else
    #define UNITY_UNIFORM uniform
    #endif
    #define UNITY_SUPPORTS_UNIFORM_LOCATION 1
    #if UNITY_SUPPORTS_UNIFORM_LOCATION
    #define UNITY_LOCATION(x) layout(location = x)
    #define UNITY_BINDING(x) layout(binding = x, std140)
    #else
    #define UNITY_LOCATION(x)
    #define UNITY_BINDING(x) layout(std140)
    #endif
    struct _Buff_type {
        uint[4] value;
    };

    layout(std430, binding = 0) buffer _Buff {
        _Buff_type _Buff_buf[];
    };
    struct _Min_type {
        uint[1] value;
    };

    layout(std430, binding = 1) buffer _Min {
        _Min_type _Min_buf[];
    };
    float u_xlat0;
    int u_xlati0;
    layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
    void main()
    {
        u_xlat0 = uintBitsToFloat(_Buff_buf[0].value[(0 >> 2) + 0]);
        u_xlati0 = int(u_xlat0);
        atomicMin(_Min_buf[int(0)].value[int(0) >> 2], uint(u_xlati0));
        return;
    }

    Do you plan to properly support types like vec3 in your HLSL-to-GLSL converter, instead of this strange float3 <-> uint[] conversion?

    Without this, you're effectively forcing everyone to write their GLSL compute shaders from scratch by hand.

    PS. I don't have the auto-generated Vulkan code here, but the returned results seem to indicate similar, if not identical, bugs.
     
  2. Juho_Oravainen

    Unity Technologies

    Joined:
    Jun 5, 2014
    Posts:
    41
    Hi!

    The reason we use this uint array for structured buffers is that HLSLcc translates shader code by parsing the DX bytecode produced by the FXC compiler. The bytecode addresses all structured buffer data as float4 arrays, and translating back to the original types would be quite complicated due to the addressing and data layout issues. This is unlikely to change as long as we use the HLSLcc-based shader code translation.

    However, the fact that the generated code looks ugly and not very readable does not mean that it does not work at all. We should be casting the data into the proper types after the data reads, and then everything should work just fine. So, congratulations, you've found a bug in this area: the types are indeed wrong in the atomicMin call, and this is something we need to fix (thanks for the bug report, I'll push it forward!).

    I wouldn't call the system unusable because of this single bad data type case, though. However, if you find further issues with the generated shaders, please report them as individual bug cases so we become aware of them and can fix them.