
How to use an 8 bit channel texture if the smallest scalar is 16 bit in compute shaders?

Discussion in 'Shaders' started by TheCelt, Jun 30, 2021.

  1. TheCelt

    TheCelt

    Joined:
    Feb 27, 2013
    Posts:
    742
    Hello

    I want to use the smallest possible image format available, which I presumed is Alpha8, since I need a large resolution and I am only marking a single bit as a flag in the image, so I want to save memory at this large image size.

    But it's confusing me in the compute shader, because the image is 8 bit on the alpha channel yet I cannot set a RWTexture to 8 bits...

    The smallest I can go is RWTexture2D<half>.

    So I am wasting a lot of memory here when I only need a byte. Is there no way to use anything less than 16 bits for a RWTexture in compute shaders?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    That’s still 32 bits unless you’re on mobile, because desktop doesn’t support 16 bit math. But no, you’re not wasting any memory. All numbers on desktop are 32 bit floats in the shader. The value is just being converted from / to an 8 bit value on load / store, the same way all textures are on GPUs.

    Though you probably want to use R8 instead of Alpha8, as with an Alpha8 texture you'd probably need to use RWTexture2D<float4> and rwtex.Load(xy).a to access the alpha value.
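
    For reference, a minimal sketch of the R8 approach described above (the Flags texture name and MarkFlag kernel name are just placeholders): the texture is declared as RWTexture2D<float> in HLSL but backed by an R8 render texture created with random write enabled on the C# side, so each pixel still occupies only 8 bits in memory even though the shader works with 32 bit floats.

    Code (CSharp):
    // Minimal sketch, assuming the bound render texture uses an R8 (8 bits
    // per pixel) format. The shader declares the channel as float; the GPU
    // converts the 32 bit float to the stored 8 bit value on store and back
    // to a 0..1 float on load, so no extra memory is used.

    #pragma kernel MarkFlag

    RWTexture2D<float> Flags; // single-channel, R8-backed texture

    [numthreads(8, 8, 1)]
    void MarkFlag(uint3 id : SV_DispatchThreadID)
    {
        // Write the flag; 1.0 is stored as 255 in the 8 bit channel.
        Flags[id.xy] = 1.0;

        // Reading it back returns a float in the 0..1 range:
        // float flag = Flags[id.xy];
    }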