
How to pass a big array to shader?

Discussion in 'Shaders' started by zhutianlun810, Jan 3, 2019.

  1. zhutianlun810

    zhutianlun810

    Joined:
    Sep 17, 2017
    Posts:
    168
    Hello,

    I would like to pass a very big array to a shader: an array of matrices, like float3x3[4096]. It seems that if I declare it in the shader, it crashes the shader compiler. I also have a 64*64 dds file which contains the information of the array. However, Unity does not recognize my dds file correctly. The pictures below show my original dds, and the dds file after it is imported into Unity. What can I do now?

    Thanks
     

    Attached Files:

  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    I think the editor may not show dds textures in the inspector, but they may still work when assigned to a material. I seem to remember that being the behavior I saw when playing with externally compressed textures. But there may still be formats that just don't work.

    You could use another texture format that Unity can read, but since I suspect you're looking to use a floating point texture, that may not be advisable: Unity's handling of float images is not great for anything that isn't HDR color data.

    Likely the best option, as strange as it sounds, is to store the data in a .cs file and construct a floating point texture at runtime. Or store it in a compute buffer, read as a structured buffer in the shader.
     
  3. zhutianlun810

    zhutianlun810

    Joined:
    Sep 17, 2017
    Posts:
    168
    Is there any way to generate a texture in Unity whose floating point values are greater than 1 or less than 0? I am trying to pass my precomputed matrices to the shader, but I find that Unity only supports RGBA floats with values between 0 and 1.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    This seems like something people hit frequently, but I've never seen it happen myself. Can you share your code?

    The below should generate a Texture2D with a single Matrix4x4 stored across 4 pixels.
    Code (CSharp):
    // Construct array of colors, one matrix column per color
    Color[] colors = new Color[4];
    Matrix4x4 mat = transform.localToWorldMatrix;
    for (int i = 0; i < 4; i++)
        colors[i] = mat.GetColumn(i); // implicit Vector4 -> Color conversion

    // Create Texture2D and set pixel colors
    var tex = new Texture2D(4, 1, TextureFormat.RGBAHalf, false, true);
    tex.SetPixels(colors, 0);
    tex.Apply();
    I have similar code here for testing generating a texture with a -1.0 to 1.0 range. That code actually outputs and reimports an exr file too. In that case I found that if the project was set to use gamma space the imported image was clamped to a minimum value of 0.0, though the exported image did retain the -1.0. But those issues shouldn't exist when not importing an external file.
     
  5. zhutianlun810

    zhutianlun810

    Joined:
    Sep 17, 2017
    Posts:
    168
    Thank you very much. At least my texture has some influence on my scene.
     
  6. arnaud-carre

    arnaud-carre

    Unity Technologies

    Joined:
    Jun 23, 2016
    Posts:
    97
    Hi!
    By default, when you declare variables in a shader, Unity uses a constant buffer. A constant buffer can't be bigger than 64 KiB on some graphics APIs (for instance DX11). (Well, technically a constant buffer can be bigger than 64 KiB, but your shader code can only see the first 64 KiB.) That's why float3x3[4096] won't work: it's too big to be used as a constant buffer.

    As bgolus suggested, you could use a structured buffer instead (structured buffers don't have the 64 KiB limitation).
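The arithmetic behind that limit can be sketched quickly. This is an illustrative calculation only, assuming D3D11-style constant buffer packing where each row of a float3x3 array element is padded out to a full 16-byte register:

```python
# Why float3x3[4096] blows past the 64 KiB constant buffer limit:
# in D3D11 constant buffer packing, each row of a float3x3 array
# element is padded to a full 16-byte float4 register.
rows_per_matrix = 3
bytes_per_padded_row = 16          # float3 row padded to float4
matrix_count = 4096

total_bytes = matrix_count * rows_per_matrix * bytes_per_padded_row
print(total_bytes)                 # 196608 bytes
print(total_bytes / 1024)          # 192.0 KiB, well over the 64 KiB limit
```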
     
    bzor likes this.
  7. Eyap

    Eyap

    Joined:
    Nov 17, 2017
    Posts:
    40
    Sorry to necro-thread, but what would be the correct way to use a StructuredBuffer in this case? I can't find any relevant material about it.

    I use this as a declaration:
    Code (HLSL):
    uniform StructuredBuffer<half4> _Colors[15000];
    And then I tried this, but it is not the correct syntax:
    Code (HLSL):
    half4 myColor = _Colors[i];
    Also, what is the appropriate way of setting the buffer? I suppose Shader.SetGlobalVectorArray(...) is not the way to go?

    Thank you in advance,
     
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    You don't define an array size for structured buffers when defining them in the shader, as they're not arrays. They're objects.
    Code (csharp):
    StructuredBuffer<float4> _Colors;
    What you had is saying you want an array of 15000 separate structured buffers, which I don't think you can actually do.

    The data for a structured buffer is set by creating a ComputeBuffer, which you set a fixed count and stride for at creation time. The count is the total number of elements, and the stride is the number of bytes each element uses. You can look up and/or manually calculate the number of bytes, or use sizeof() to have C# calculate that for you. For example:
    Code (csharp):
    // count of 15000 elements, stride of 16 bytes (four 32-bit floats)
    ComputeBuffer myColorBuffer = new ComputeBuffer(15000, sizeof(float) * 4);
    Color[] myColorData = new Color[15000];
    for (int i = 0; i < 15000; i++)
        myColorData[i] = // set the color data
    myColorBuffer.SetData(myColorData);
    myMaterial.SetBuffer("_Colors", myColorBuffer);
    Note that sizeof(Color) won't compile outside an unsafe context, which is why sizeof(float) * 4 is used here.

    After that, if you want to update the color values, you'll want to keep both the buffer and the array around, update the array, call SetData() again, and that's it. The values the shader sees will automatically update. If you need to change the length of the array / buffer, you'd need to make a new one and assign that new one to the material.
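As a language-agnostic sketch of the count/stride relationship described above (Python here purely for illustration; the variable names are hypothetical):

```python
# A ComputeBuffer's GPU footprint is count * stride bytes: 15000
# elements of four 32-bit floats each, mirroring a Color[] upload.
import struct

count = 15000
stride = struct.calcsize("4f")     # four 32-bit floats -> 16 bytes

# Pack one element the way the color data is laid out in memory
# (r, g, b, a as floats).
element = struct.pack("4f", 1.0, 0.5, 0.25, 1.0)
assert len(element) == stride

total_bytes = count * stride
print(stride, total_bytes)         # 16 240000
```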
     
    henners999 and Eyap like this.
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Note I used float4 and not half4. Most hardware that supports structured buffers doesn't differentiate between float4 and half4; they're both 32-bit floats per component. If you want to use only 16 bits per component, you'll probably want to use a structured buffer of uint2 and decode the components using bitwise operations. Or maybe use a single uint per element and Color32, which uses a single byte per component, which you would also decode using bitwise operations in the shader.
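The bit-packing scheme described above can be sketched outside the shader. This hypothetical Python mirrors packing a Color32's four bytes into one uint and unpacking it back to normalized floats with shifts and masks, the same operations the HLSL decode would use:

```python
# Four 8-bit channels packed little-endian into one 32-bit uint.
def encode_color32(r, g, b, a):
    """Pack four 0-255 channel values into a single uint."""
    return (r & 255) | ((g & 255) << 8) | ((b & 255) << 16) | ((a & 255) << 24)

def decode_color32(packed):
    """Unpack back to normalized 0.0-1.0 floats, like the shader side."""
    return ((packed & 255) / 255.0,
            ((packed >> 8) & 255) / 255.0,
            ((packed >> 16) & 255) / 255.0,
            ((packed >> 24) & 255) / 255.0)

packed = encode_color32(255, 128, 0, 255)
print(decode_color32(packed))  # r=1.0, g≈0.502, b=0.0, a=1.0
```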
     
    Eyap likes this.
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    One caveat to this is the connection between the buffer and the material can get lost pretty easily. Alt-tabbing or loading/unloading a scene can cause it to break.

    Here's a basic example setup for passing Color32 values from C# to the shader.
    Code (csharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    [ExecuteInEditMode]
    public class StructuredBufferColor32 : MonoBehaviour
    {
        public Material mat;
        public Color32[] colors;

        private ComputeBuffer buffer;

        void Update()
        {
            if (mat == null)
            {
                var rend = GetComponent<Renderer>();
                if (rend != null)
                    mat = rend.sharedMaterial;

                if (mat == null)
                    return;
            }

            if (colors == null || colors.Length == 0)
            {
                // rainbow
                colors = new Color32[6];
                colors[0] = new Color32(255,  0,  0,255);
                colors[1] = new Color32(255,255,  0,255);
                colors[2] = new Color32(  0,255,  0,255);
                colors[3] = new Color32(  0,255,255,255);
                colors[4] = new Color32(  0,  0,255,255);
                colors[5] = new Color32(255,  0,255,255);
            }

            if (buffer == null || buffer.count != colors.Length)
            {
                // release the old buffer before replacing it
                buffer?.Release();
                buffer = new ComputeBuffer(colors.Length, sizeof(uint));
            }

            buffer.SetData(colors);

            // reassign every update to deal with the fact buffers lose their
            // connection sometimes, like when alt-tabbing or on scene loads
            mat.SetBuffer("_Colors", buffer);
        }

        void OnDisable()
        {
            // avoid leaking the GPU-side buffer
            buffer?.Release();
            buffer = null;
        }
    }
    Code (HLSL):
    Shader "Unlit/StructuredBufferColor32"
    {
        Properties
        {
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                // structured buffers need a higher shader model target
                #pragma target 5.0

                #include "UnityCG.cginc"

                float4 vert (float4 vertex : POSITION) : SV_POSITION
                {
                    return UnityObjectToClipPos(vertex);
                }

                // Color32 values packed into a uint
                StructuredBuffer<uint> _Colors;

                // decode the 8 bit color values from the uint back to a half4
                half4 DecodeColor(uint colorData)
                {
                    half4 col = half4(colorData & 255, (colorData >> 8) & 255, (colorData >> 16) & 255, (colorData >> 24) & 255) / 255.0;

                    // correct for gamma conversion when using linear space rendering
                #ifndef UNITY_COLORSPACE_GAMMA
                    col.rgb = GammaToLinearSpace(col.rgb);
                #endif

                    return col;
                }

                half4 frag () : SV_Target
                {
                    // get the number of elements in the buffer
                    uint num, stride;
                    _Colors.GetDimensions(num, stride);

                    // blend between each color and the next, once per second
                    uint indexA = uint(_Time.y) % num;
                    half4 colorA = DecodeColor(_Colors[indexA]);

                    uint indexB = (indexA + 1) % num;
                    half4 colorB = DecodeColor(_Colors[indexB]);

                    half t = frac(_Time.y);

                    return lerp(colorA, colorB, t);
                }
                ENDCG
            }
        }
    }
     
  11. Eyap

    Eyap

    Joined:
    Nov 17, 2017
    Posts:
    40
    I made it work, thanks a lot!

    I indeed had to use sizeof(float) * 4.

    Well, the only reason to use half4 was that precision was not really important, and as I use a big array I thought it would save some memory. For now I used a standard float, but I will try the solution above :).

    Thanks again, I owe you a beer!
     
  12. zhutianlun810

    zhutianlun810

    Joined:
    Sep 17, 2017
    Posts:
    168
    2 years later, I am now having the same problem. Now I am trying to create a uint[12300] in the shader, which still causes the compiler to crash. However, uint * 12300 is only about 48 KB.
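One plausible explanation, assuming D3D-style constant buffer packing rules: every element of an array in a constant buffer is aligned to its own 16-byte register, so a uint[12300] occupies far more than 12300 * 4 bytes. An illustrative calculation:

```python
# uint[12300] looks like ~48 KB, but in a constant buffer each array
# element is padded to a full 16-byte register, quadrupling the size.
count = 12300
naive_bytes = count * 4      # 49200 bytes, ~48 KB, seemingly under the limit
padded_bytes = count * 16    # each uint padded to a 16-byte register

print(naive_bytes, padded_bytes)   # 49200 196800
print(padded_bytes > 64 * 1024)    # True: ~192 KiB, well over 64 KiB
```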
     
  13. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,609
    I think you could use Marshal.SizeOf for this case.
    Code (CSharp):
    using System.Runtime.InteropServices;
    Debug.Log($"Color size is {Marshal.SizeOf(typeof(Color))}");
    Debug.Log($"Color32 size is {Marshal.SizeOf(typeof(Color32))}");
    Debug.Log($"Vector3 size is {Marshal.SizeOf(typeof(Vector3))}");
    Debug.Log($"Vector4 size is {Marshal.SizeOf(typeof(Vector4))}");
    Outputs:
    Code (CSharp):
    Color size is 16
    Color32 size is 4
    Vector3 size is 12
    Vector4 size is 16
     
    MaxEden and bgolus like this.