
Is there a way to read texture color without large GC Alloc?

Discussion in 'Scripting' started by Lesnikus5, Feb 24, 2021.

  1. Lesnikus5

    Lesnikus5

    Joined:
    May 20, 2016
    Posts:
    131
    When I try to save a lot of textures, each to a separate file, saving each one generates garbage. This happens when I read the data from the texture. Why can't I allocate a single buffer the size of one texture and then reuse that memory for many textures?

    I've tried many methods.

    1. GetRawTextureData()
    2. EncodeToEXR/EncodeToJPG/EncodeToPNG/EncodeToTGA
    3. GetPixel(x, z)

    Of these, only the last one does not generate garbage, but it takes forever to execute.

    What other ways are there to read a texture without a GC alloc? I need it to be fast and without a lot of garbage allocation.
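
    For concreteness, this is the kind of loop I mean (the texture array and file paths are just for illustration); every pass through it allocates a fresh array:

    Code (CSharp):
        using System.IO;
        using UnityEngine;

        public class TextureSaver : MonoBehaviour
        {
            // Hypothetical set of readable textures to be written out one by one.
            public Texture2D[] textures;

            void SaveAll()
            {
                for (int i = 0; i < textures.Length; i++)
                {
                    // EncodeToPNG allocates a brand-new managed byte[] for every
                    // texture, so each iteration produces one texture's worth of garbage.
                    byte[] png = textures[i].EncodeToPNG();
                    File.WriteAllBytes(
                        Path.Combine(Application.persistentDataPath, "tex" + i + ".png"),
                        png);
                }
            }
        }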
     
  2. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,735
    I think the underlying issue is that Unity keeps its Texture2Ds in a format that is "ready for action," i.e. ready for shoveling to the video card in whatever way the target platform needs. This is generally why the "Enable read/write" option is off by default: it lets Unity use even more-optimized formats, such as swizzled formats.

    The reason there is a large alloc is that Unity preps and copies the data for you to read in a known format. And unless there is a way to give Unity the buffer where you want the data placed, I don't think you can prevent the large allocs.
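
    One possible exception I know of (not sure it fits your case) is the generic GetRawTextureData<T> overload, which is documented to return a NativeArray view into the texture's own memory rather than a managed copy. A rough sketch, assuming a readable texture, writing the raw (unencoded) pixel data through one reusable buffer:

    Code (CSharp):
        using System.IO;
        using Unity.Collections;
        using UnityEngine;

        public static class RawTextureDump
        {
            // One reusable managed buffer, sized once for the largest texture.
            static byte[] s_Buffer;

            public static void Save(Texture2D tex, string path)
            {
                // The generic overload returns a NativeArray view into the
                // texture's own memory -- no managed copy is made here.
                NativeArray<byte> raw = tex.GetRawTextureData<byte>();

                if (s_Buffer == null || s_Buffer.Length < raw.Length)
                    s_Buffer = new byte[raw.Length]; // allocated once, then reused

                // Copy into the reusable buffer and write it out. Note this is the
                // raw platform format of the texture, not an encoded image file.
                NativeArray<byte>.Copy(raw, s_Buffer, raw.Length);
                using (var stream = new FileStream(path, FileMode.Create))
                    stream.Write(s_Buffer, 0, raw.Length);
            }
        }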

    It might be possible to use something like Graphics.Blit() to copy it to your own texture, but I suspect the same limits apply. Even to pass it into your own custom native code you would have to pin it, which causes a big GC alloc spike.
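
    The Blit idea would look roughly like this, reusing one destination texture for every source; whether the later read-back avoids allocations is exactly what I'm unsure about:

    Code (CSharp):
        using UnityEngine;

        public static class TextureCopier
        {
            // Reusable destination objects, created once for a given size.
            static RenderTexture s_RT;
            static Texture2D s_Readable;

            public static Texture2D CopyToReadable(Texture source)
            {
                if (s_RT == null)
                {
                    s_RT = new RenderTexture(source.width, source.height, 0);
                    s_Readable = new Texture2D(source.width, source.height,
                                               TextureFormat.RGBA32, false);
                }

                Graphics.Blit(source, s_RT);   // GPU-side copy, no managed alloc
                RenderTexture.active = s_RT;
                // ReadPixels writes into the existing texture's own memory...
                s_Readable.ReadPixels(new Rect(0, 0, s_RT.width, s_RT.height), 0, 0);
                s_Readable.Apply();
                RenderTexture.active = null;
                return s_Readable;             // ...but reading it out again may still copy
            }
        }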

    Maybe one of the GC pin variants lets you pin it to pre-allocated RAM, not sure. Here is how I pin/unpin my 512x512 (admittedly small) texture every single frame that my KurtMaster2D game is running, so that the native game code can manipulate the texture directly:

    Pinning a texture for manipulation via native code:
    https://forum.unity.com/threads/texture-byte-per-pixel.624343/#post-4185943

    EDIT: Looking at the above, it seems I am pinning the same Color32[] array, not the texture itself. Never mind... I don't think it changes or improves your situation.
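
    For reference, the pin/unpin pattern from that post boils down to something like this (the native plugin and its entry point are hypothetical):

    Code (CSharp):
        using System;
        using System.Runtime.InteropServices;
        using UnityEngine;

        public class NativeTextureUpdater : MonoBehaviour
        {
            [DllImport("mygamecode")] // hypothetical native plugin
            static extern void FillPixels(IntPtr pixels, int width, int height);

            Texture2D tex;
            Color32[] pixels; // the array that actually gets pinned

            void Start()
            {
                tex = new Texture2D(512, 512, TextureFormat.RGBA32, false);
                pixels = tex.GetPixels32(); // one-time managed allocation
            }

            void Update()
            {
                // Pin the managed array so the GC cannot move it while
                // native code writes into it through the raw pointer.
                GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
                FillPixels(handle.AddrOfPinnedObject(), tex.width, tex.height);
                handle.Free();

                tex.SetPixels32(pixels); // upload the modified pixels back
                tex.Apply();
            }
        }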
     
  3. Lesnikus5

    Lesnikus5

    Joined:
    May 20, 2016
    Posts:
    131
    Thanks for the answer!

    I was hoping there was something like that. It's just strange that you can't perform such a simple operation as reading a texture into your own buffer, which would let you be more memory-efficient.
     
  4. Lesnikus5

    Lesnikus5

    Joined:
    May 20, 2016
    Posts:
    131
    It's funny that WebCamTexture.GetPixels32 can take your own array to fill, and thus generates no garbage, but Texture2D.GetPixels32 has no such overload and always allocates a new array. Why not add this feature to both?
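
    The asymmetry looks like this; the buffer is allocated once and reused, but only the WebCamTexture API will accept it:

    Code (CSharp):
        using UnityEngine;

        public class GetPixelsComparison : MonoBehaviour
        {
            public WebCamTexture webcamTex; // assumed to be playing
            public Texture2D regularTex;    // assumed to be readable

            Color32[] buffer;

            void Start()
            {
                buffer = new Color32[webcamTex.width * webcamTex.height];
            }

            void Update()
            {
                // WebCamTexture fills the caller's preallocated array:
                // no garbage generated per call.
                webcamTex.GetPixels32(buffer);

                // Texture2D has no overload taking a buffer; this returns
                // a brand-new array on every call.
                Color32[] garbage = regularTex.GetPixels32();
            }
        }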
     
    Last edited: Feb 25, 2021