Question Color issues with Tensor -> RenderTexture?

Discussion in 'Barracuda' started by jwvanderbeck, Jan 12, 2021.

  1. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    [attached GIF comparing the two outputs]
    On the left is the output of running the precompiled model in Python, and on the right is the output generated in Unity, which goes output -> RenderTexture -> Texture2D.

    It clearly looks like a colorspace issue to me, but nothing I try fixes it. I've tried various linear/sRGB settings on the RenderTexture, as well as various linear/sRGB conversions on the pixel values in the Texture2D, and nothing makes it look as expected.

    Code (CSharp):
    using System;
    using Sirenix.OdinInspector;
    using Unity.Barracuda;
    using UnityEngine;
    using UnityEngine.UI;
    using Random = UnityEngine.Random;

    public class Generator_female : MonoBehaviour
    {
        public int seed;
        public int noiseSize;
        public int imageSize;
        public float colorAdjust = 2.2f;
        public NNModel modelAsset;
        public Texture2D portrait;
        public RawImage destination;
        System.Random rand = new System.Random();

        [Button]
        public void CreatePortrait()
        {
            var mean = 0f;
            var stdDev = 1f;
            // Batch of 64 noise vectors, shape (N=64, H=1, W=1, C=noiseSize)
            Tensor input = new Tensor(64, 1, 1, noiseSize);
            // Debug.Log($"Tensor Sequence Length = {input.length}");
            // Fill the tensor with Gaussian noise via the Box-Muller transform
            for (int i = 0; i < input.length; i++)
            {
                double u1 = 1.0 - rand.NextDouble(); // uniform(0,1] random doubles
                double u2 = 1.0 - rand.NextDouble();
                double randStdNormal = Math.Sqrt(-2.0 * Math.Log(u1)) *
                                       Math.Sin(2.0 * Math.PI * u2); // random normal(0,1)
                double randNormal =
                    mean + stdDev * randStdNormal; // random normal(mean,stdDev^2)
                input[i] = (float)randNormal;
            }
            m_Worker.Execute(input);
            Tensor O = m_Worker.PeekOutput();
            input.Dispose();
            var rTexture = new RenderTexture(imageSize, imageSize, 24, RenderTextureFormat.Default, RenderTextureReadWrite.sRGB);
            O.ToRenderTexture(rTexture);
            portrait = toTexture2D(rTexture);
            destination.texture = portrait;
            O.Dispose();
            rTexture.DiscardContents();
        }

        private Model m_RuntimeModel;
        IWorker m_Worker;

        void Start()
        {
            Random.InitState(seed);
            m_RuntimeModel = ModelLoader.Load(modelAsset);
            m_Worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Compute, m_RuntimeModel);
        }

        void OnDestroy()
        {
            m_Worker.Dispose();
        }

        Texture2D toTexture2D(RenderTexture rTex)
        {
            Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false);
            RenderTexture.active = rTex;
            tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
            var pixels = tex.GetPixels();
            for (int i = 0; i < pixels.Length; i++)
            {
                pixels[i] = LinearToGamma(pixels[i]);
            }
            tex.SetPixels(pixels);
            tex.Apply();
            return tex;
        }

        Color LinearToGamma(Color c)
        {
            return new Color(Mathf.LinearToGammaSpace(c.r), Mathf.LinearToGammaSpace(c.g), Mathf.LinearToGammaSpace(c.b),
                c.a);
        }
    }
  2. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    Hi jwvanderbeck,

    If the project is in Linear color space, Barracuda expects the network to run in linear color space, i.e. when calling TensorToTexture it will convert sRGB textures to linear and keep linear textures as linear. That would be a problem if the network was actually trained in sRGB color space. If so, one way around it would be to customize TextureToTensor and TensorToTexture; there is an example in the style transfer demo: https://github.com/UnityLabs/barracuda-style-transfer
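    For illustration, a minimal CPU-side sketch of that kind of customized conversion (the demo itself is more involved and runs on the GPU) could look like the following. It assumes batch 0 of an NHWC tensor with channel values already in [0,1]; `outputIsSRGB` is a hypothetical flag meaning "the network was trained on gamma-encoded pixels":

    Code (CSharp):
    // Minimal sketch of a hand-rolled tensor -> texture conversion with
    // explicit color-space handling.
    Texture2D TensorToTexture2D(Tensor t, bool outputIsSRGB)
    {
        var tex = new Texture2D(t.width, t.height, TextureFormat.RGB24, false);
        for (int y = 0; y < t.height; y++)
        {
            for (int x = 0; x < t.width; x++)
            {
                var c = new Color(t[0, y, x, 0], t[0, y, x, 1], t[0, y, x, 2]);
                // If the network already outputs gamma-encoded pixels, skip
                // the extra encode; otherwise convert linear -> sRGB here.
                if (!outputIsSRGB)
                    c = new Color(Mathf.LinearToGammaSpace(c.r),
                                  Mathf.LinearToGammaSpace(c.g),
                                  Mathf.LinearToGammaSpace(c.b));
                // Unity texture rows count up from the bottom; tensor rows
                // are usually top-down, so flip vertically.
                tex.SetPixel(x, t.height - 1 - y, c);
            }
        }
        tex.Apply();
        return tex;
    }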

    Hope this helps
     
  3. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Shouldn't the output of the network be the output of the network, irrespective of color spaces? I mean, it knows nothing about such nuances; it just spits out numbers. Is ToRenderTexture forcibly applying changes to the numbers? Can that be turned off?

    Even if it is, why does applying colorspace math to the pixels afterward not fix it?

    Training up an entirely separate model to apply a new style to the already generated images feels like a very wrong approach to me.
     
  4. hnphan

    hnphan

    Joined:
    Nov 5, 2014
    Posts:
    7
    That's actually pretty standard. A quick fix, if you want to avoid the hassle, is to set your project to sRGB (the Gamma color space) in Project Settings.
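    For reference, the editor-script equivalent of that toggle is a one-liner (editor-only API; Gamma is what Unity calls the sRGB mode):

    Code (CSharp):
    // Editor-only: same as Project Settings > Player > Other Settings > Color Space.
    UnityEditor.PlayerSettings.colorSpace = ColorSpace.Gamma;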
     
  5. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Except I don't want my project to be in sRGB :) This weekend I'm going to experiment more and see if I can just grab the raw data out of the output tensor manually and not go through ToRenderTexture.

    Still seems I should have the option to tell it to NOT mess with the numbers at all.
     
  6. hnphan

    hnphan

    Joined:
    Nov 5, 2014
    Posts:
    7
    Ah, I see. Usually textures have a flag to mark them as sRGB or linear; not sure why ToRenderTexture doesn't have it.
     
  7. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Still stumped here. I've done a few more tests...

    1) Changing the project color space between Gamma and Linear has ZERO effect.
    2) Directly grabbing the output tensor data and pumping it into a Texture2D produces the exact same incorrect output.
    3) I've also verified it isn't an issue with the display of the texture, by writing the Texture2D out to a PNG file (snippet below) and inspecting it outside Unity. The PNG is incorrectly dark as well.
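    For completeness, the PNG dump in test (3) is just the stock EncodeToPNG call (the path is arbitrary for this test):

    Code (CSharp):
    // Write the converted texture to disk so it can be inspected outside
    // Unity, taking the display pipeline out of the equation.
    System.IO.File.WriteAllBytes(
        System.IO.Path.Combine(Application.dataPath, "portrait_debug.png"),
        portrait.EncodeToPNG());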

    Again, what really confuses me here is: why should any of that matter? All the model knows is that for a given input it generates a given output. Given that the input to the model in Unity is the same as the input in Python, why would it not generate the same output?
     
    Last edited: Jan 16, 2021
  8. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    As a further test, I literally wrote out the exact noise definition from Python and fed it into the Tensor in Unity, so they both had the exact same input.

    Issue remains the same.
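    Roughly, the Unity loading side looks like this; the wrapper class and field name are assumptions about how the noise JSON is laid out (JsonUtility cannot parse a bare top-level array):

    Code (CSharp):
    // Hypothetical wrapper for the exported noise file; JsonUtility needs a
    // named field, so the JSON is assumed to look like {"values":[0.12, ...]}.
    [System.Serializable]
    class NoiseData { public float[] values; }

    Tensor LoadNoise(string json, int batch, int noiseSize)
    {
        var data = JsonUtility.FromJson<NoiseData>(json);
        var t = new Tensor(batch, 1, 1, noiseSize);
        for (int i = 0; i < t.length; i++)
            t[i] = data.values[i]; // identical values to the Python run
        return t;
    }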
     
  9. hnphan

    hnphan

    Joined:
    Nov 5, 2014
    Posts:
    7
    I'm on Universal Render Pipeline 2019, and changing from gamma to linear changes the output, so perhaps you're on another version?
    Have you tried writing out the same image from your model in Python to see if it's as dark?
     
  10. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    The image in the original post compares an image written out in Python to an image created in Unity. Furthermore, the test I just did compared the output in Python to the output in Unity using the exact same noise field, created in Python, written to JSON, then loaded into both the Python and Unity ONNX code to produce the same output. Yet again, the one in Unity is darker and the one in Python is as expected.

    This is in Unity 2020.1.f1, but using the built-in render pipeline, not SRP.
     
    Last edited: Jan 16, 2021
  11. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    At this point the only conclusion I can reach is that the model itself is executing differently in Unity.
     
  12. amirebrahimi_unity

    amirebrahimi_unity

    Joined:
    Aug 12, 2015
    Posts:
    400
    Hi @hnphan - can you try running the model through ONNXRuntime and compare the output?
     
  13. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    I think you meant that for me? This is what I am already doing: comparing the execution of the model via ONNX in both Python and Unity.

    EDIT: There certainly might still be something I am missing, as ML is new to me. I will try to take some time to set up an easily shareable sample in both Python and Unity for test purposes.
     
  14. amirebrahimi_unity

    amirebrahimi_unity

    Joined:
    Aug 12, 2015
    Posts:
    400
    Apologies, @jwvanderbeck. Yes, I meant that for you.

    What framework did you create the model in (e.g. PyTorch)? It would be great if you were willing to share your model and sample with me, so I can look into it further.
     
  15. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Yes, I used PyTorch. I'll work up a sample and whatnot as soon as I can.
     
  16. amirebrahimi_unity

    amirebrahimi_unity

    Joined:
    Aug 12, 2015
    Posts:
    400
    In that case, when you say you have tested executing the model, do you mean executing in PyTorch, or did you execute via something like:

    Code (Python):
    import onnxruntime as rt
    session = rt.InferenceSession(model_file)
    onnx_output = session.run(...)
     
  17. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Like this:

    Code (Python):
    import torch
    import onnxruntime
    import utils

    ort_session = onnxruntime.InferenceSession("outputs/generator.onnx")
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    def create_noise(sample_size, nz):
        return torch.randn(sample_size, nz, 1, 1).to(device)

    def to_numpy(tensor):
        return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

    # compute ONNX Runtime output prediction
    num_runs = 10
    for i in range(num_runs):
        ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(create_noise(64, 98))}
        ort_outs = ort_session.run(None, ort_inputs)
        tensor = torch.tensor(ort_outs[0])
        for j, thumb in enumerate(ort_outs[0]):
            utils.save_generator_image(torch.tensor(thumb), f"outputs/portrait_{i}-{j}.png")
     
  18. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    As I understand it, the above should be using the same runtime. I am going to rework it, though, just to be absolutely sure it isn't an input issue, and actually read in the EXACT same noise data for input that I am using in Unity.
     
  19. alexandreribard_unity

    alexandreribard_unity

    Unity Technologies

    Joined:
    Sep 18, 2019
    Posts:
    53
  20. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    I have not, and to be honest I completely forgot about this! Between moving and being slammed at work, I just plumb forgot. I'll try to find time to revisit it this weekend and get things packaged up.
     
  21. alexandreribard_unity

    alexandreribard_unity

    Unity Technologies

    Joined:
    Sep 18, 2019
    Posts:
    53
    OK, thanks for the update.
    Given that you are feeding noise to generate your output image, I am guessing that the problem lies in the final output color space for the generated image.
    I think we should allow the option to gamma correct directly in `ToRenderTexture`. But in the meantime you can always add a pow(1/2.2) or pow(2.2) layer at the end of your network. Do you know how to do that with the `ModelBuilder`?
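    If the ModelBuilder route is unfamiliar, a quick-and-dirty equivalent is to apply the same power curve to the output tensor on the CPU before converting it. This is not the ModelBuilder approach, just the same math done after the fact; the clamp is there because generator outputs are often in [-1,1] and pow of a negative value is undefined:

    Code (CSharp):
    // CPU-side equivalent of appending a pow layer: gamma-correct the raw
    // tensor values before ToRenderTexture. PeekOutput's tensor is owned by
    // the worker; writing into it is fine for a quick test.
    Tensor O = m_Worker.PeekOutput();
    float gamma = 1f / 2.2f; // or 2.2f, depending on which direction is wrong
    for (int i = 0; i < O.length; i++)
        O[i] = Mathf.Pow(Mathf.Max(O[i], 0f), gamma);
    O.ToRenderTexture(rTexture);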
     
  22. jwvanderbeck

    jwvanderbeck

    Joined:
    Dec 4, 2014
    Posts:
    825
    Sorry, not sure what ModelBuilder even is :)

    I'm packaging things up right now. I had to remove Odin from the project so I could safely distribute it.

    The zip file will have both the Python/PyTorch and Unity projects in it. Things are a bit messy, my apologies; this was all about learning something new to me.

    Download link: https://www.dropbox.com/t/Z5WguWaXh8YoH7i3

    NOTE: In the interest of space, the zip file will not include my training dataset, so you won't be able to retrain anything. If this is needed, let me know and I can zip it up as well. It does of course include the compiled model.

    Some reference to what matters:
    In PyTorch (the DCGAN1 directory):
    • dcgan.py - In here are the generator and discriminator models.
    • train_dcgan.py - Pretty much what it sounds like. This is probably the most important file for this issue, as this is where the noise input is generated and saved to disk, as well as where the ONNX output is generated. This also saves snapshots during training.
    • onnx.py - This is the file I used to test and compare with Unity. In here the ONNX runtime is used to run the model, in theory identically to Unity, using noise loaded from a JSON file. The output from this is also in Unity as a UI texture for comparison. This outputs the expected color.
    • outputs directory - This is where snapshots are saved during training, as well as the ONNX model and the noise.json file.
    In Unity (the ML directory):
    • Everything currently happens in the Assets/Generator_female.cs script. There is some code in here that I was playing around with to try and do color correction, but you can ignore it. It isn't run unless the Apply Color Correction option is checked, which by default it is not.
    • Assets/outputs contains the ONNX model, noise.json, and the snapshot of the run from Python (this is essentially the outputs directory from Python, hence why it is called outputs when logically it should be named inputs in Unity lol).
    • Just open the scene Scenes/GenTest and hit play. I'm including a screenshot below of what I see, as well as the inspector showing the default settings.
    [attached screenshot: generated output and inspector default settings]