Vertexmap RenderTexture to Texture2D color difference

Discussion in 'General Graphics' started by Jelmer123, Jan 14, 2020.

  1. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    Hi

    I'm trying to save this rendertexture to a Texture2D. However, when saving it as PNG / JPG / whatever format, color information seems to get lost: the colors come out slightly different.
    What am I doing wrong? I tried many different texture formats, but nothing seems to change the color offset.

    Code (CSharp):
        public void SaveToFile()
        {
            // renderTexture1 and tex are fields on this component.
            RenderTexture.active = renderTexture1;
            Debug.Log("RenderTexture format: " + renderTexture1.format + ", graphics format: " + renderTexture1.graphicsFormat);
            // result is: RenderTexture format: ARGBHalf, graphics format: R16G16B16A16_SFloat

            tex = new Texture2D(renderTexture1.width, renderTexture1.height, TextureFormat.RGBAHalf, false, true);
            Debug.Log("Texture2D format: " + tex.format + ", graphics format: " + tex.graphicsFormat);
            // result is: Texture2D format: RGBAHalf, graphics format: R16G16B16A16_SFloat

            tex.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
            RenderTexture.active = null; // restore the previous render target

            var bytes = tex.EncodeToPNG();
            System.IO.File.WriteAllBytes("t1 original.png", bytes);
            var bytes3 = tex.EncodeToJPG();
            System.IO.File.WriteAllBytes("t1 original.jpg", bytes3);
        }


    Screenshot of the rendertexture in Unity:
    [attached image: pointcloudvertextmapk4a.PNG]


    The output JPG:
    [attached image: t1 original.jpg]
    (the same happens with PNG so it's not because of JPG compression)
     
  2. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    That won’t help either. While it’ll avoid the jpg compression artifacts, TGA and PNG are going to produce identical results.

    There are two issues at play. The most obvious one is the color shift, which is due to the original render texture being linear while the imported texture defaults to being treated as sRGB by Unity. Disable sRGB on the texture in the import settings to fix that.
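
    If you'd rather flip that setting from an editor script than by hand, something along these lines should work. This is just a sketch, and the asset path is an assumed example:

    Code (CSharp):
        using UnityEditor;

        public static class DisableSRGBExample
        {
            [MenuItem("Tools/Disable sRGB On Data Texture")]
            static void DisableSRGB()
            {
                // Path is hypothetical; point it at your exported texture.
                var importer = (TextureImporter)AssetImporter.GetAtPath("Assets/t1 original.png");
                importer.sRGBTexture = false; // treat the data as linear, no gamma conversion
                importer.SaveAndReimport();
            }
        }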

    The next issue you’re going to have is your render texture is a 16 bit signed (positive and negative values) float per channel, and none of the above formats support that. Technically the PNG format supports 16 bits per channel, but only unsigned (aka positive only) values, and Unity’s PNG encoder doesn’t support it.

    The only format that supports signed 16 bit values and that Unity can export is EXR. Though sadly Unity's import handling of EXR data is semi-broken. If your project is using gamma color space, EXR files are always imported with gamma correction applied, regardless of the sRGB import setting, and with any values below 0.0 clamped to 0.0. If your project is using linear color space then it should work. However I think Unity will default to compressing it to an unsigned BC6H ... which again loses all values below 0.0, so you'll need to set it to be uncompressed.
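
    For reference, the export side of that is just a different encode call; a minimal sketch (the file name is an example):

    Code (CSharp):
        // Write the RGBAHalf Texture2D out as an EXR instead of PNG/JPG.
        // OutputAsFloat keeps full float precision, including negative values.
        var exrBytes = tex.EncodeToEXR(Texture2D.EXRFlags.OutputAsFloat);
        System.IO.File.WriteAllBytes("t1 original.exr", exrBytes);

    And on the import side, set the texture's compression to Uncompressed so BC6H doesn't throw away the negative values.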
     
    Jelmer123 and neoshaman like this.
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    The TLDR version is Unity sucks at exporting & importing data textures with more than 8 bits of precision. The “best” solution is to not do it at all if you can avoid it, and save them as .asset files instead.
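
    Saving a texture as a .asset is a one-liner in the editor; a rough sketch (editor only, and the path is an example):

    Code (CSharp):
        // Serialize the Texture2D directly as a Unity asset. This skips image
        // encoding and import entirely, keeping the RGBAHalf data bit for bit.
        UnityEditor.AssetDatabase.CreateAsset(tex, "Assets/VertexMap.asset");
        UnityEditor.AssetDatabase.SaveAssets();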
     
    Jelmer123 likes this.
  5. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    Thanks for the info! So based on that, the outlook is grim...

    But I made some progress in the meantime, I think at least...

    I convert the 16 bit render texture to a Texture2D (16 bit per channel), save it to PNG, then import it as a Texture2D and Blit it to a 16 bit RenderTexture again.

    [attached image: Inked2textures_LI.jpg]

    The good thing: according to the inspector the result is the same (see image; the left one is the re-converted one, the right one is the original).
    The bad thing: when trying to render it with VFX Graph, half of the depth info seems to be missing (the negative values you mention?); half of the hologram seems clamped in the middle.
    Conclusion: is one rendertexture signed and the other unsigned? Any way to check that? Or is the inspector not to be trusted in these kinds of situations?


    PS: when I save to EXR and use that instead of PNG, everything works fine.
    But from the inspector it looks like PNG should also work, as the colors are the same before and after. I need to transfer the data in an 8 bit format...
     
    Last edited: Jan 16, 2020
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    The colors may be the same, but your monitor can only display 8 bit color too. So a 16 bit image and an 8 bit image will look the same on screen, but they do not hold the same data.

    An 8 bit image can only hold values between 0.0 and 1.0, with a precision of 1/255. So a 16 bit image holds twice as much, right? Well, yes and no. It's true it's twice as much "data", but the effect that has on the numerical precision and range is more significant. Having twice as many bits means it can represent values between -65504.0 and +65504.0, with a precision of at least 1/2048 between -1.0 and 1.0, 8 times finer than an 8 bit image. (A half float has 10 mantissa bits, so values between 0.5 and 1.0 are spaced 2^-11 = 1/2048 apart, and smaller values are spaced even finer.)

    Looking at your two examples it does indeed look like you have negative values in your original texture. If you look at the red and green channels of your image they just kind of disappear halfway through the image. Negative values are going to be represented as 0.0 in 8 bit, which is also the only thing that's going to be displayed when looking at the image. And if you look at the resulting shapes, you can see the points kind of flatten out. Only EXR is going to be able to hold those values, as it's the only one of these formats that supports signed floats.


    The alternative would be to rescale the values from between -1.0 and 1.0 to between 0.0 and 1.0 before saving as a PNG, and then scale them back on import. This is what normal maps do as they're holding a directional vector with a range of -1 to 1 on all components.
     
  7. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    Aha, thanks for clarifying! So Graphics.convert doesn't do any conversion from signed 16 bit values to unsigned 8 bit?
    What would be the best approach to "compress" the 16 bit values to 8 bit and vice versa?
    Something like: use Texture2D.GetPixel to get the value for each pixel and change it accordingly? At 60fps...
     
  8. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    So I tried to convert the signed 16 bit pixel values (from -65504 to 65504) to 8 bit unsigned (0 to 256) like this:
    (pixelValue+ 65504) / 511.75f;

    However, it all basically becomes 128, because the values in the 16 bit texture appear to be quite small...?

    It seems I had more luck with just pushing the 16 bit data into a PNG; it looks like I might not have to convert anything, only do something with the negative values.
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Yep. If you do that then any value in the original render texture between 0.0 and 512.0 becomes 128/255 in the final texture. If the only range you use is between -1.0 and +1.0 then you're losing all of it.

    "Doing something with the negative values" means you still need to do a conversion of some kind. But depending on the range you need it might not need to be that much. Going by the example above, I suspect the range is just +/- 1.0. So the conversion only needs to be:

    (16 bit color) * 0.5 + 0.5 = (8 bit color)

    (8 bit color) * 2.0 - 1.0 = (original position range)

    Which, btw, is exactly the same thing normal maps do, which is why I mentioned it earlier.

    The best approach from a pure speed point of view is to do the conversions on the GPU. Write a shader that samples the original 16 bit render texture, does the * 0.5 + 0.5, and use Blit() to output it to an ARGB32 render texture, and then use ReadPixels() to get that into your script side Texture2D and save it to a texture file.

    For converting back, the fastest solution is to do nothing at all and use the imported texture without doing anything to it. Do the remapping in the shader that uses it. I'm not exactly sure how you're using it (VFX Graph displacement?), but I suspect it'll take the Texture2D straight, and may even have some options to remap the values near where you're assigning it. If you really do need to convert it back into a render texture, have a shader that does the * 2.0 - 1.0 and use Blit() again.
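
    Script-side, that first conversion might look roughly like this. It's only a sketch: "remapMaterial" is assumed to wrap a simple shader whose fragment function returns the sampled color * 0.5 + 0.5.

    Code (CSharp):
        // Sketch of the 16 bit -> 8 bit path.
        public Texture2D SaveRemapped(RenderTexture source, Material remapMaterial)
        {
            // Blit through the remap shader into an 8 bit target.
            var rt8 = RenderTexture.GetTemporary(source.width, source.height, 0, RenderTextureFormat.ARGB32);
            Graphics.Blit(source, rt8, remapMaterial);

            // Read the 8 bit result back to the CPU and encode it.
            var tex8 = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false, true);
            RenderTexture.active = rt8;
            tex8.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
            RenderTexture.active = null;
            RenderTexture.ReleaseTemporary(rt8);

            System.IO.File.WriteAllBytes("remapped.png", tex8.EncodeToPNG());
            return tex8;
        }

    Converting back is the same idea in reverse with a * 2.0 - 1.0 material, if you can't just do the remap in the shader that consumes the texture.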

    Or just use an EXR file.

    It should be noted that ReadPixels and EncodeToPNG are both quite expensive. Depending on your PC it might be difficult to maintain 60fps doing both of those.

    ReadPixels is expensive because it potentially takes a long time to copy images from the GPU back to the CPU (which is what that function does), and that function is blocking. That means when you call ReadPixels the game's c# code just hangs until the image is transferred. This might be 0.5ms, it might be 100ms, depending on what else the GPU is doing at the time. Assuming this is just for recording some footage for reuse later, you might want to look into using AsyncGPUReadback. Here's an example project showing how to use it from one of Unity Japan's more prolific employees.
    https://github.com/keijiro/AsyncCaptureTest
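
    The core of the non-blocking version looks roughly like this; a sketch, assuming the render texture is RGBAHalf so the raw bytes can be loaded straight into a matching Texture2D:

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;

        public class AsyncReadbackExample : MonoBehaviour
        {
            public RenderTexture source;

            public void RequestReadback()
            {
                // Ask for the copy asynchronously and only touch the data once
                // the GPU is done, instead of stalling on ReadPixels.
                AsyncGPUReadback.Request(source, 0, request =>
                {
                    if (request.hasError) return;

                    var tex = new Texture2D(source.width, source.height, TextureFormat.RGBAHalf, false, true);
                    tex.LoadRawTextureData(request.GetData<byte>());
                    tex.Apply();

                    System.IO.File.WriteAllBytes("frame.exr", tex.EncodeToEXR());
                });
            }
        }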

    EncodeToTGA or EncodeToEXR should both be significantly faster than EncodeToPNG, so you may need to fall back to one of those. There are also examples on the forum of encoding to a PNG using external tools asynchronously (but still calling it from Unity c#).

    Or your computer might be fast enough that it's not a problem.

    Also, technically, it is doing a conversion. The 16 bit format and 8 bit format are very different. That conversion just happens to be "convert 16 bit floating point value into a 32 bit floating point value (what the Color uses per color component in c#), clamp to between 0.0 and 1.0, and convert to 8 bit (0 - 255) value."
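
    In code terms, that default conversion amounts to something like this sketch:

    Code (CSharp):
        // What the 16 bit -> 8 bit write effectively does per component:
        // widen to 32 bit float, clamp to 0..1, then quantize to 0..255.
        byte ToByte(float halfValue)
        {
            return (byte)Mathf.Round(Mathf.Clamp01(halfValue) * 255f);
        }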
     
  10. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    The bigger goal is this: to stream information captured by the Kinect (16 bit RGBA) to another Unity app using streaming video over the web, and then somehow visualize it in 3D.
    I currently use this repo for both Kinect capture (which generates the 16 bit RGBA RenderTexture) and for visualization (using VFX Graph, with the 16 bit RGBA RenderTexture as an input).
    [attached image: goal.jpg]

    So I suppose conversion from 16 bit to 8 bit is necessary. But there should be no need to convert from 8 to 16 bit again because I should be able to modify the VFX graph.
     
  11. Jelmer123

    Jelmer123

    Joined:
    Feb 11, 2019
    Posts:
    243
    Thanks again for the help! I managed to do the conversions in Shader Graph, so that part is solved.
    It was like you said:
    (16 bit color) * 0.5 + 0.5 = (8 bit color)
    (8 bit color) * 2.0 - 1.0 = (original position range)
    Now I only have one line of C# code (Blit).
    I just never really understood what you meant with the normal maps. Should it be possible to set the input texture as a normal map, so that no conversion is needed?
     
  12. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    A normal map is an 8 bit texture that represents a float3 unit length vector with a range of -1.0 to +1.0. When a normal map is baked out of some bit of software, before writing it to a texture it's doing normal * 0.5 + 0.5 to remap the range to the 0.0 to 1.0 that the 8 bit texture can store. After that, the shader remaps the texture value back to the -1.0 to +1.0 range by doing normalMap * 2.0 - 1.0. Basically, normal maps are, and have always been, doing the same remapping trick to fit otherwise signed 16 bit data into an 8 bit texture.

    Now, that does not mean you should be setting your textures as normal maps, as Unity does other things to normal maps that you do not want done to your data. Specifically on desktop normal maps are treated as 2 component textures and the z value is thrown away. I've written about the reasons why elsewhere on the forum so I'm not going to go to deep into it, you can search for those or others' explanations if you want, but because normal maps are a unit length vector, you can reconstruct the z from only the x and y, and you can get higher quality compressed images by only storing two components vs three. You actually need all 3 components of your position, so you can't do that, and thus don't want to mark your textures as normal maps.