Question WebGL: problem with integer vertex attributes

Discussion in 'Universal Render Pipeline' started by mehmet_soyturk, Dec 7, 2022.

  1. mehmet_soyturk

    I'm compressing vertex normals by storing them as integer data (octahedral normal vector encoding).

    This is how I fill the vertex data:
    Code (CSharp):
    VertexAttributeDescriptor[] vertexLayout = new VertexAttributeDescriptor[]
    {
        new VertexAttributeDescriptor(
                VertexAttribute.Position,
                VertexAttributeFormat.Float32,
                3,
                stream: 0),
        new VertexAttributeDescriptor(
                VertexAttribute.Normal,
                VertexAttributeFormat.SInt16,
                2,
                stream: 1)
    };

    struct Short2
    {
        public short X;
        public short Y;
    }

    // ...
    meshData.SetVertexBufferParams(vertexCount, vertexLayout);
    var positions = meshData.GetVertexData<float3>(0);
    var normals = meshData.GetVertexData<Short2>(1);
    // fill encoded vertex data, etc.
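
    For context, the "fill encoded vertex data" step is essentially the following (a minimal sketch of the octahedral encoding I use, written against Unity.Mathematics and the Short2 struct above; the Encode name is illustrative, and decodeNormal in the shader below is the matching inverse):
    Code (CSharp):
    using Unity.Mathematics;

    static class OctNormal
    {
        // Octahedral encoding: project the unit normal onto the octahedron
        // |x| + |y| + |z| = 1, fold the lower hemisphere onto the upper one,
        // and quantize the resulting xy pair to two signed 16-bit values.
        // (The sign(0) == 0 edge case is ignored here for brevity.)
        public static Short2 Encode(float3 n)
        {
            float2 p = n.xy / (math.abs(n.x) + math.abs(n.y) + math.abs(n.z));
            if (n.z < 0f)
                p = (1f - math.abs(p.yx)) * math.sign(p);

            return new Short2
            {
                X = (short)math.round(p.x * 32767f),
                Y = (short)math.round(p.y * 32767f),
            };
        }
    }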
    This is what I do in the HLSL shader code:
    Code (hlsl):
    struct VertexShaderInput
    {
        float3 position : POSITION;
        int2 normal : NORMAL;
    };

    FragShaderInput VertexShader(VertexShaderInput IN)
    {
        float3 normal = decodeNormal(IN.normal);
        // etc ...
    }
    That works on Windows desktop (I have verified it with both the DirectX and OpenGL ES 3 graphics APIs), but it does not work on WebGL.

    On Firefox I get this error message:

    WebGL warning: drawElementsInstanced: Vertex attrib 1 requires data of type INT, but is being supplied with type FLOAT.


    On Chrome I get this:
    [.WebGL-000030DC0A01C000] GL_INVALID_OPERATION: Vertex shader input type does not match the type of the bound vertex attribute.


    From what I understand, the problem is that vertexAttribPointer is used when setting up the vertex data instead of vertexAttribIPointer. Indeed, when debugging the JavaScript code, I see no calls to vertexAttribIPointer; vertexAttribPointer is called even for the VertexAttributeFormat.SInt16 data format.

    Is this a bug, a known limitation, or is there a way to set up integer vertex data properly?

    Edit: this is Unity version 2022.1.22, URP version 13.1.8.
     
  2. mehmet_soyturk

    Note: SystemInfo.SupportsVertexAttributeFormat(VertexAttributeFormat.SInt16, 2) returns true on WebGL.
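
    So a runtime capability check does not catch this. For what it is worth, this is how such a check could be used to pick a fallback layout (just a sketch; the SNorm16 fallback is illustrative):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Pick the normal format based on what the platform claims to support.
    // (Not enough here: WebGL reports SInt16 as supported, yet the attribute
    // is still bound with vertexAttribPointer instead of vertexAttribIPointer.)
    VertexAttributeFormat normalFormat =
        SystemInfo.SupportsVertexAttributeFormat(VertexAttributeFormat.SInt16, 2)
            ? VertexAttributeFormat.SInt16
            : VertexAttributeFormat.SNorm16;

    var vertexLayout = new[]
    {
        new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3, stream: 0),
        new VertexAttributeDescriptor(VertexAttribute.Normal, normalFormat, 2, stream: 1),
    };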
     
  3. mehmet_soyturk

    Note: when I manually edit webgl.framework.js like this, it works. But it is a hack!

    Code (JavaScript):
    function _glVertexAttribPointer(index, size, type, normalized, stride, ptr) {
        // Newly added code
        if (!normalized && (type == 5121 || type == 5122)) // GL_UNSIGNED_BYTE or GL_SHORT
        {
            return _glVertexAttribIPointer(index, size, type, stride, ptr);
        }

        // the rest of the original code...
    }
    Somehow I have to ensure that _glVertexAttribIPointer is called in the first place, but I don't know how.
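
    If I had to keep this hack, one way to avoid reapplying it by hand after every build would be a post-build script roughly like the one below (a sketch, assuming an uncompressed WebGL build where the unminified _glVertexAttribPointer signature survives in Build/*.framework.js; the PatchVertexAttribIPointer class and the string matching are purely illustrative):
    Code (CSharp):
    #if UNITY_EDITOR
    using System.IO;
    using UnityEditor;
    using UnityEditor.Build;
    using UnityEditor.Build.Reporting;

    class PatchVertexAttribIPointer : IPostprocessBuildWithReport
    {
        public int callbackOrder => 0;

        public void OnPostprocessBuild(BuildReport report)
        {
            if (report.summary.platform != BuildTarget.WebGL)
                return;

            // The emscripten GL bindings end up in Build/<name>.framework.js.
            string buildDir = Path.Combine(report.summary.outputPath, "Build");
            const string signature =
                "function _glVertexAttribPointer(index, size, type, normalized, stride, ptr) {";
            const string injected = signature +
                "\n  if (!normalized && (type == 5121 || type == 5122))" + // GL_UNSIGNED_BYTE or GL_SHORT
                " { return _glVertexAttribIPointer(index, size, type, stride, ptr); }";

            foreach (string file in Directory.GetFiles(buildDir, "*.framework.js"))
            {
                string js = File.ReadAllText(file);
                File.WriteAllText(file, js.Replace(signature, injected));
            }
        }
    }
    #endif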
     
  4. mehmet_soyturk

    I found that I can fix this by using the SNorm16 format:

    Code (CSharp):
    VertexAttributeDescriptor[] vertexLayout = new VertexAttributeDescriptor[]
    {
        new VertexAttributeDescriptor(
                VertexAttribute.Position,
                VertexAttributeFormat.Float32,
                3,
                stream: 0),
        new VertexAttributeDescriptor(
                VertexAttribute.Normal,
                VertexAttributeFormat.SNorm16,
                2,
                stream: 1)
    };

    struct Short2
    {
        public short X;
        public short Y;
    }

    // ...
    meshData.SetVertexBufferParams(vertexCount, vertexLayout);
    var positions = meshData.GetVertexData<float3>(0);
    var normals = meshData.GetVertexData<Short2>(1);
    // fill encoded vertex data, etc.
    Then the shader:

    Code (hlsl):
    struct VertexShaderInput
    {
        float3 position : POSITION;
        float2 normal : NORMAL;
    };

    FragShaderInput VertexShader(VertexShaderInput IN)
    {
        // IN.normal already contains data in the [-1, 1] range.
        // Originally, I was dividing it by 32767 first.
        float3 normal = decodeNormal(IN.normal);
        // etc ...
    }
    It works fine this way! In my case I suppose this was the more correct approach anyway.

    But it could become a real problem if I ever need actual integers in the shader (for example, because I use different bit patterns to pack several values into one integer). Even then there is a workaround: read the data as a float in the shader, then multiply it by the maximum integer value to recover the integer bits. Note that this would not work correctly for 32-bit integers, since a float only has a 23-bit mantissa; in that case, the data could be split across several 16-bit integers.
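
    To make that concrete, here is the same arithmetic spelled out in C# (a sketch of what the shader-side reconstruction would do; the 16-bit pattern survives the SNorm16 round trip exactly, a full 32-bit pattern would not):
    Code (CSharp):
    // Store an arbitrary 16-bit pattern in an SNorm16 attribute and recover it
    // from the normalized float the vertex shader receives.
    short packedBits = unchecked((short)0xA5C3);   // arbitrary bit pattern (-23101)
    float inShader = packedBits / 32767f;          // SNorm16 hands value / 32767 to the shader
    int recovered = UnityEngine.Mathf.RoundToInt(inShader * 32767f);
    // recovered == packedBits: all 16 bits survive
    // (avoid -32768, which SNorm16 clamps to -1 and recovers as -32767).

    // The same trick fails for full 32-bit patterns: a float mantissa has only
    // 23 bits, so value / 2147483647f cannot represent every 32-bit integer.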

    Anyway, there are workarounds; that is probably why this problem still exists today.