
Question about sending uints from CPU -> GPU via texture

Discussion in 'General Graphics' started by kadd11, May 24, 2023.

  1. kadd11

     Joined: Mar 11, 2018
     Posts: 33
    Hi all,

    I hit an issue recently, and while I have a "solution", I would like to better understand what is going on so I can know if there are any better solutions. The situation is:

    I have a buffer of data that I am sending to the GPU, for example:

    Code (CSharp):
    struct MyData
    {
        float4 SomeVector;
        float3 SomeOtherVector;
        uint SomeUInt;
    };
    On most platforms, I do this using ComputeBuffers and read it in the vertex shader. This works great.
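    For reference, the shader side of the ComputeBuffer path is just a structured load, something along these lines (a sketch with placeholder names; the buffer is bound from C# with material.SetBuffer):
    Code (CSharp):
    // Sketch of the working ComputeBuffer path (placeholder names).
    StructuredBuffer<MyData> _MyDataBuffer;

    MyData LoadData(uint index)
    {
        // Direct load in the vertex shader: no packing and no precision issues.
        return _MyDataBuffer[index];
    }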

    On platforms that don't support ComputeBuffers (or reading them in vertex shaders), I put the data in a texture and read from it multiple times in order to reconstruct the data (since my data types are larger than a float4). For example:
    Code (CSharp):
    MyData dataFromTexture;
    float4 sample1 = DoTextureRead();
    dataFromTexture.SomeVector = sample1;
    float4 sample2 = DoAnotherTextureRead();
    dataFromTexture.SomeOtherVector = sample2.xyz;
    dataFromTexture.SomeUInt = asuint(sample2.w);
    This works great _most_ of the time, except on Mali GPUs using OpenGL ES. In particular, the uints are giving me a hard time. Changing the shader precision level to full resolves the issue in some cases (like when the uints are used as indices into another buffer/texture). However, I also use uints to send colors packed into 4 bytes instead of occupying a full float4, and I haven't been able to get that to work properly even with full precision (the only way I have been able to make it work is to send colors as a float4, which ends up being a lot more data).

    So it seems that when the GPU reads the texture, it converts the samples to half precision, which obviously mangles the binary representation of what used to be a uint, so asuint no longer works. As I mentioned, I can resolve these issues by passing colors as float4s and indices as floats instead of uints. But is there a better way to go about this? Some way to force my shader to read the texture at full precision?
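    One idea I haven't been able to verify on the affected devices yet (so treat it as a sketch): Unity's HLSLSupport macros let you declare a sampler at an explicit precision, which should give a highp sampler in the generated GLSL ES instead of the default precision. Something like:
    Code (CSharp):
    // Sketch: declare the data texture with a full-precision sampler.
    // sampler2D_float / sampler2D_half are Unity's precision-specific sampler macros;
    // _DataTex is a placeholder for the actual texture name.
    sampler2D_float _DataTex;

    float4 DoTextureRead(float2 uv)
    {
        // Explicit LOD so the read also works in the vertex shader.
        return tex2Dlod(_DataTex, float4(uv, 0, 0));
    }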

    Also, any ideas what could be going on with the packed colors not working even at full precision? The unpacking is the standard approach for a color packed into a uint:
    Code (CSharp):
    // packedColor comes from asuint() on the texture sample, as above
    uint packedColor;
    float4 unpacked = float4(packedColor & 255, (packedColor >> 8) & 255, (packedColor >> 16) & 255, (packedColor >> 24) & 255) * (1.0 / 255);
    Is there some better/more robust way to pack a color into 4 bytes that won't get mangled if it gets reduced to a half? I feel like maybe it could be an endianness issue, but I'm not sure why that wouldn't affect the uints that are used as indices as well?
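    One alternative I've been toying with (just a sketch, and it still assumes the relevant values actually go through the pipeline at 32-bit float precision, since integers up to 65535 are not exactly representable as halfs): skip asuint entirely and pack the color arithmetically, two bytes per float channel. That's twice the size of a single packed uint, but still half of a full float4 color:
    Code (CSharp):
    // Sketch: arithmetic packing, two 8-bit channels per float (placeholder names).
    // CPU side would store encoded.x = R * 256 + G and encoded.y = B * 256 + A
    // as plain floats (each in 0..65535, exact in 32-bit float but NOT in half).
    float4 UnpackTwoBytesPerChannel(float2 encoded)
    {
        float r = floor(encoded.x / 256.0);
        float g = encoded.x - r * 256.0;
        float b = floor(encoded.y / 256.0);
        float a = encoded.y - b * 256.0;
        return float4(r, g, b, a) * (1.0 / 255.0);
    }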

    All of my data types are padded to a multiple of float4, and the textures use point filtering.
     
  2. c0d3_m0nk3y

     Joined: Oct 21, 2021
     Posts: 671
    I found some information about Mali GPU shader precision in the Arm documentation (linked below).

    So the integers might also be only 16 or 24 bits on older Mali-400 GPUs in the fragment shader.

    https://documentation-service.arm.com/static/5eff5d5edbdee951c1cd5f20?token=