
Packing Vertex Data: Running Out of Space?

Discussion in 'Shaders' started by kromenak, Feb 12, 2015.

  1. kromenak

    Joined:
    Feb 9, 2011
    Posts:
    266
    I'm in the process of converting some CG shaders that pass a good amount of data to the shader as part of the vertex data. The data that needs to be passed consists of six byte values, as well as the vertex position data.

    Looking at what Unity allows, I can see that for a particular vertex, I have the following parameters that can hold this data:
    • Vertex Position (Vector3)
    • Normal (Vector3)
    • Tangent (Vector4)
    • UVs #1 (Vector2)
    • UVs #2 (Vector2)
    • Color (fixed4)
    Obviously, I'm using the position attribute to pass the vertex position. I then have four of the six byte values packed into the two UV channels.
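
    Something like this on the C# side (the helper and array names here are just illustrative, not my actual code):

    Code (CSharp):
        using UnityEngine;

        // Illustrative packing: one byte value per UV component, stored as a
        // plain float; the shader reads them back out of TEXCOORD0/TEXCOORD1.
        void PackBytesIntoUVs(Mesh mesh, byte[] a, byte[] b, byte[] c, byte[] d)
        {
            var uv1 = new Vector2[mesh.vertexCount];
            var uv2 = new Vector2[mesh.vertexCount];
            for (int i = 0; i < mesh.vertexCount; i++)
            {
                uv1[i] = new Vector2(a[i], b[i]);  // byte values 1 and 2
                uv2[i] = new Vector2(c[i], d[i]);  // byte values 3 and 4
            }
            mesh.uv  = uv1;  // first UV channel (TEXCOORD0)
            mesh.uv2 = uv2;  // second UV channel (TEXCOORD1)
        }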

    The problem I'm encountering is that when I pack the other two byte values into any of Normal, Tangent, or Color, the data gets garbled or modified before it reaches the shader. I *think* the values are being normalized, which isn't what I want in this scenario. Interestingly, these values only seem to get garbled if the game is running on mobile OR the Transform scale is either negative or non-uniform. What's that all about?

    Because of this, I'm curious if there are any creative solutions that I might be able to use to fit those two additional bytes in somewhere. A few options I've considered:
    • When packing UVs in C#, they are Vector2, but in the shader code, they're float4. Why doesn't Unity allow me to pack data into the other two floats for the UVs?
    • Turning off normalization of those other parameters. I've tried using the GLSL_NO_AUTO_NORMALIZATION pragma, but it's only effective on mobile devices.
    • If I know the length of the normal/tangent vector I pass to the shader, I might be able to "denormalize" the value in the shader code. However, my initial testing shows that the denormalized values aren't 100% accurate.
    • Pass the data via a texture parameter? But this seems very expensive: I'd need to create the texture, assign it to the shader, and then do a lookup for each vertex. Also, I'd need to know where to look up in the texture, which is just more data to pass around!
    Any other ideas as to how I might be able to achieve something like this?
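
    One more idea I've been kicking around, totally untested (helper names made up): since each UV component reaches the shader as a full 32-bit float, two byte values could share a single component and be split apart again with floor()/fmod():

    Code (CSharp):
        using UnityEngine;

        // Untested sketch: pack two bytes (0-255) into one float. The packed
        // value is at most 65535, which a 32-bit float represents exactly, so
        // nothing is lost as long as the UV channel isn't compressed to halves.
        static float PackTwoBytes(byte hi, byte lo)
        {
            return hi * 256f + lo;
        }

        // Mirror of the decode the shader would do with floor()/fmod().
        static void UnpackTwoBytes(float packed, out byte hi, out byte lo)
        {
            hi = (byte)Mathf.Floor(packed / 256f);
            lo = (byte)(packed - hi * 256f);
        }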
     
  2. kromenak

    Joined:
    Feb 9, 2011
    Posts:
    266
    As is typical whenever I post a question on the forums, I found myself finally making progress within 20 minutes of posting :p

    I looked at my remaining two bytes and noticed that neither value will ever exceed 64. Based on my understanding of the color data, it seems like I can pass a color value per vertex without worrying about it being modified in some weird way.

    So, to stick each value into the color data, I converted it to a float by dividing by 100. As a result, 10 becomes 0.1, 54 becomes 0.54, 3 becomes 0.03, etc. Then, in the shader, I multiply by 100 to get the original value back. This seems to work pretty well, though I'm still seeing some occasional weirdness in the shader values. I'll need to investigate further.
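
    A quick round-trip check in plain C# hints at where the weirdness might come from (this is just my guess, not confirmed): vertex colors are stored at 8 bits per channel, so the shader doesn't see exactly value/100 coming back.

    Code (CSharp):
        using UnityEngine;

        // Simulates the trip through an 8-bit vertex color channel.
        // 54 -> 0.54 -> stored as Round(0.54 * 255) = 138 -> 138/255 = 0.541176...
        // Multiplying by 100 in the shader then yields 54.1176, not 54, so any
        // code that truncates instead of rounding can land on the wrong value.
        static float ColorChannelRoundTrip(int value)
        {
            float encoded = value / 100f;                     // pack on the C# side
            byte stored = (byte)Mathf.Round(encoded * 255f);  // 8-bit quantization
            float sampled = stored / 255f;                    // what the shader reads
            return sampled * 100f;                            // decode in the shader
        }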

    Still open to any suggestions or insights though! If there's a better way to do this, I'm totally for it! My current results are getting better, but still not good enough!
     
  3. TechnoCraft

    Joined:
    Apr 6, 2012
    Posts:
    28
    Texture data (colors, normal maps, lightmaps, UV maps) is stored in normalized form (between 0 and 1) for many reasons; it is hardware friendly (performance, memory requirements, ...). See the example on how to encode and decode a custom float to a color and back here. 0.00 (0%) and 1.00 (100%) are the minimum and maximum values you can represent. How precisely you can represent the values in between depends on what encoding you are using. Listen to this lecture about floating point to learn more (Berkeley).

    For example: a grayscale image (8 bits = 1 byte).
    A common color format uses 256 shades of gray to represent one color channel: 1 byte = 2^8 = 256 representable values (shades of gray). Converting such a value to a float gives you steps of roughly 0.00390625, with anything in between rounded up or down. This is the maximum precision of basic color encoding (by design). The only way to add more precision is to make a custom encoding (and decode it inside the shader).
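
    A sketch of what that quantization looks like in C# (helper names are made up; the same arithmetic applies inside a shader):

    Code (CSharp):
        using UnityEngine;

        // Encode a 0..1 float into one 8-bit color channel and back. The
        // decoded result moves in fixed steps of roughly 0.004, so any extra
        // precision has to come from a custom multi-channel encoding.
        static byte Encode01ToByte(float value01)
        {
            return (byte)Mathf.Round(Mathf.Clamp01(value01) * 255f);
        }

        static float DecodeByteTo01(byte stored)
        {
            return stored / 255f;  // e.g. 1 -> 0.00392..., 2 -> 0.00784...
        }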