Question: Half Precision floats for Modern GPUs

Discussion in 'Shaders' started by VictorKs, Feb 18, 2023.

  1. VictorKs

    VictorKs

    Joined:
    Jun 2, 2013
    Posts:
    242
    Modern GPUs (Nvidia's Turing and later) support half precision at double the processing speed of full 32-bit floats. It also reduces bandwidth and register usage per warp. Does Unity support it?
     
  2. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,495
    Explicit FP16 is only supported with Shader Model 6.2/DXC. If you set your graphics API in Project Settings to DX12/Vulkan/Metal and add

    #pragma use_dxc
    #pragma require Native16Bit

    to your shader, then you should be able to use float16_t or float16_t4, for example, and they also claim that half will actually be 16-bit as well.

    More details here:
    https://docs.google.com/document/d/1yHARKE5NwOGmWKZY2z3EPwSz5V_ZxTDT8RnRl521iyE/edit#
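
    For illustration, a minimal compute shader set up this way could look something like the sketch below (the kernel and buffer names are made up):

    #pragma use_dxc
    #pragma require Native16Bit
    #pragma kernel HalfExample

    StructuredBuffer<float> _Input;    // hypothetical input buffer
    RWStructuredBuffer<float> _Output; // hypothetical output buffer

    [numthreads(64, 1, 1)]
    void HalfExample(uint3 id : SV_DispatchThreadID)
    {
        // Explicit 16-bit types, only valid when compiling with DXC + Native16Bit.
        float16_t  a = (float16_t)_Input[id.x];
        float16_t4 v = float16_t4(a, a, a, (float16_t)1.0);

        // With Native16Bit, 'half' should map to a real 16-bit float too.
        half scale = (half)2.0;

        _Output[id.x] = (float)(dot(v, v) * scale);
    }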
     
    VictorKs likes this.
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    Be aware that 16-bit float support on desktop GPUs is also unreasonably buggy. So be prepared for it to produce unexpected results, or not work at all, on GPUs that “should” support it. AAA devs I know who have tried to add 16-bit float support to their custom engines have mostly given up on trying to use it.
     
    VictorKs likes this.
  4. VictorKs

    VictorKs

    Joined:
    Jun 2, 2013
    Posts:
    242
    Thanks a lot!

    I'm going to take your word on it and avoid it for the time being; maybe in the future it will be more stable. I'm almost never bandwidth bound anyway; most of the time I'm either CPU bound or latency bound from many texture reads.
     
    DevDunk likes this.
  5. alvion

    alvion

    Joined:
    May 30, 2013
    Posts:
    8
    Sorry for dredging up a VERY old thread, but I am trying to get this working in Unity 2021 LTS and I can't seem to make it work. I have these two lines as the first lines of my compute shader:

    #pragma use_dxc
    #pragma require Native16Bit

    But when I try to use float16_t in one of my kernels like so:

    float16_t test;

    I get this error:
    Shader error in 'Llama': unknown type name 'float16_t' at kernel ScaleBuffer at line 160 (on d3d11)

    I AM in fact using DX12 (I can see the <DX12> in the title bar of my Unity window), and if I switch to DX11, simply putting in the #pragma use_dxc causes my shader to fail.
     
  6. alvion

    alvion

    Joined:
    May 30, 2013
    Posts:
    8
    I think I answered my own question: I need to gate that code with
    #if UNITY_DEVICE_SUPPORTS_NATIVE_16BIT
    I kind of just assumed that in the year of our lord 2023 my 3050 would support 16-bit, but that doesn't seem to be the case.
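
    For anyone else hitting this, a rough sketch of what that gating could look like in the compute shader (the FP16 alias, buffer, and property names are just placeholders of mine; ScaleBuffer is the kernel from the error above):

    #pragma use_dxc
    #pragma require Native16Bit
    #pragma kernel ScaleBuffer

    #if UNITY_DEVICE_SUPPORTS_NATIVE_16BIT
        // Real 16-bit float where the device/API supports it.
        #define FP16 float16_t
    #else
        // Fallback for variants without native 16-bit support (e.g. the d3d11 one).
        #define FP16 min16float
    #endif

    RWStructuredBuffer<float> _Buffer; // hypothetical buffer
    float _Scale;                      // hypothetical scale value

    [numthreads(64, 1, 1)]
    void ScaleBuffer(uint3 id : SV_DispatchThreadID)
    {
        FP16 test = (FP16)_Buffer[id.x];
        _Buffer[id.x] = (float)(test * (FP16)_Scale);
    }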