Create non-readable Texture2D from script?

Discussion in 'General Graphics' started by jonagill_rr, Mar 25, 2019.

  1. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    Is there any way to create a Texture2D that is just a non-readable pointer to GPU memory via C#? Right now, it seems like the only way to create a GPU-only texture is to call:

    Code (CSharp):
    var texture2DArray = new Texture2DArray(1024, 1024, 16, textureFormat, mipChain, linear);
    texture2DArray.Apply(true, makeNoLongerReadable: true);
    This is really slow and expensive, especially with texture arrays like we're trying to do here. The call to Apply() actually takes about as long as the allocation itself, even though we've never called `SetPixels()` or otherwise dirtied the texture, and it is thrashing a ton of CPU memory to perform what should basically be a no-op.

    What we want is a simple pointer to allocated GPU memory. That memory doesn't have to be cleared or backed by CPU memory, since we're going to be writing to it purely on the GPU anyway. Is there any way to get that?


    Some additional context:
    We've tried messing around with all the Texture2D and Texture2DArray constructors, including the experimental ones that take GraphicsFormat and TextureCreationFlags, but nothing seems to do what we want. There are some promising-sounding entries in TextureCreationFlags such as DontCreateSharedTextureData and APIShareable, but they're commented out as internal-only and are presumably stomped in C++ code somewhere.


    (As an interesting aside, we found that calling GraphicsFormatUtility.GetGraphicsFormat(TextureFormat.ARGB32, true) on 2018.3.5 returns "87", which isn't an entry in the GraphicsFormat enum and returns a ton of errors when we try to use it. I've no idea what's going on here, since according to the Texture2D C# source code, the standard Texture2D constructor is calling the same thing internally to convert from TextureFormat to GraphicsFormat.)
     
  2. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,022
    Why are you using ARGB32 and not RGBA32?
     
  3. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    Mostly because it's what I typed first when making some test code. In production we're using DXT5 and RGBA32, but the same issue remains: we have to call Apply() on an empty texture just to clear CPU-side memory that we didn't want to allocate in the first place.
     
  4. MD_Reptile

    MD_Reptile

    Joined:
    Jan 19, 2012
    Posts:
    2,664
    I'm not really sure if it's even relevant to you, but if you're doing a lot of texture editing at runtime, it may be a better idea to abandon the Texture2D class and Apply() calls... and instead use a Compute Shader (if it's possible on your platform), as you can get far, far superior performance that way compared to pushing from CPU-side memory to GPU-side memory, which is a very expensive transfer.
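    Roughly, the dispatch side of that looks like this (just a sketch -- the compute shader asset and its "FillTexture" kernel here are hypothetical, not something Unity ships):

    Code (CSharp):
    // Sketch only: "fillShader" is a hypothetical compute shader asset with a
    // kernel named "FillTexture" that writes to a RWTexture2D<float4> called "Result".
    public ComputeShader fillShader;

    RenderTexture CreateAndFillGpuTexture()
    {
        // GPU-only render target that the compute shader can write to directly.
        var rt = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGB32);
        rt.enableRandomWrite = true;
        rt.Create();

        int kernel = fillShader.FindKernel("FillTexture");
        fillShader.SetTexture(kernel, "Result", rt);
        // Thread group counts assume [numthreads(8,8,1)] in the shader.
        fillShader.Dispatch(kernel, 1024 / 8, 1024 / 8, 1);
        return rt;
    }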
     
  5. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    That's basically what we're trying to do. We're doing all of our texture generation on the GPU with CopyTexture, so we have no need to ever allocate CPU-side memory. However, it seems like all of the publicly accessible Texture constructors allocate that CPU-side memory regardless and require an expensive call to Apply() in order to discard it after allocation.
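    Roughly, the pattern we're after looks like this (just a sketch -- sizes and format are placeholders):

    Code (CSharp):
    // Sketch: copy each source texture into one slice of the array, entirely on the GPU.
    // Sources must match the array's size, format, and mip count for CopyTexture to work.
    Texture2DArray BuildArray(Texture2D[] sources)
    {
        var array = new Texture2DArray(1024, 1024, sources.Length, TextureFormat.RGBA32, true);
        // The expensive step this thread is about: discard the CPU copy we never wanted.
        array.Apply(false, makeNoLongerReadable: true);

        for (int slice = 0; slice < sources.Length; slice++)
            Graphics.CopyTexture(sources[slice], 0, array, slice); // GPU-side copy, all mips

        return array;
    }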
     
  6. MD_Reptile

    MD_Reptile

    Joined:
    Jan 19, 2012
    Posts:
    2,664
  7. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    @MD_Reptile I think you're trying to answer a question that I didn't ask. I don't have an issue with how to efficiently write to a Texture, I have an issue with the inefficient way that Unity's APIs force you to allocate those textures. Specifically, I have an issue with the fact that there appears to be no way to allocate a GPU-side texture without also allocating CPU-side memory for performing SetPixels()-style operations on.

    In most cases, I would probably use a RenderTexture, which provides better controls for this kind of thing than a standard Texture2D. However, we need to use Texture2DArrays, and so we're stuck with the barebones constructor provided here: https://docs.unity3d.com/ScriptReference/Texture2DArray-ctor.html. As far as I can tell, this constructor will always allocate CPU memory, forcing you to call Apply() to discard that memory, which is extremely slow and inefficient.
     
  8. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    I've also been playing with the new experimental Texture2DArray constructor, which takes flags for how the texture should be constructed. However, a lot of the flags are commented out in Unity's source and only intended for internal use, so I created my own parallel enum to see if the engine would still accept the flag values if I passed them through correctly.

    Code (CSharp):
    [Flags]
    public enum TextureCreationFlags
    {
        None = 0,
        MipChain = 1 << 0,
        DontInitializePixels = 1 << 2, // this is only used internally.
        DontDestroyTexture = 1 << 3, // this is only used internally.
        DontCreateSharedTextureData = 1 << 4, // this is only used internally.
        APIShareable = 1 << 5, // this is only used internally.
        Crunch = 1 << 6,
    }

    Texture2DArray Allocate()
    {
        var textureFormat = TextureFormat.RGBA32;
        var mipChain = true;
        var linear = false;

        GraphicsFormat format = GraphicsFormatUtility.GetGraphicsFormat(textureFormat, !linear);
        TextureCreationFlags flags = TextureCreationFlags.DontCreateSharedTextureData | TextureCreationFlags.DontInitializePixels;
        if (mipChain)
            flags |= TextureCreationFlags.MipChain;
        if (GraphicsFormatUtility.IsCrunchFormat(textureFormat))
            flags |= TextureCreationFlags.Crunch;

        return new Texture2DArray(1024, 1024, 16, format, (UnityEngine.Experimental.Rendering.TextureCreationFlags) flags);
    }
    Unfortunately, even with DontCreateSharedTextureData and DontInitializePixels set, which sound like exactly the flags that we'd need, we're still seeing the CPU-side memory generated in the Profiler. This effectively makes the texture take up double the memory in the profiler until we make that expensive call to Apply(). Presumably Unity is stomping these flags within the actual engine code (hence their being commented out in the public C# API), which is frustrating, since by all appearances these are exactly the flags we'd need to set to get the desired behavior.


    If you're interested, you can see the APIs that we're referencing at the following links, since they're not included in the public Unity docs:

    https://github.com/Unity-Technologi...a7eb10ee9a3594b2885/Runtime/Export/Texture.cs

    https://github.com/Unity-Technologi...dc6d6822cb520/Runtime/Export/GraphicsEnums.cs
     
  9. MD_Reptile

    MD_Reptile

    Joined:
    Jan 19, 2012
    Posts:
    2,664
    I mean, I don't totally know what your use case is, but I have done a lot of work on trying to get speed and runtime performance out of the Texture2D class while editing it (as in SetPixel or SetPixels calls) and then pushing changes to the GPU with Apply calls, and that is a losing battle. I've tried stuff like splitting up a large area of texture into many small single textures, and only applying changes to the smaller areas that are being edited at runtime - and that gets better performance than using one large texture - but it's just not enough if you want to do any serious amount of editing.

    If you're really looking to do a lot of per-pixel edits to a texture or textures at runtime, like, for instance, to make a game similar to "Cortex Command" by Data Realms - you almost have to look at compute shader methods to do it, because it just smokes the Texture2D class methods of doing the same thing...

    If you're doing something totally different then disregard all I've said :p

    EDIT: One last thing to consider - are you doing the profiling on builds or in the editor? I usually forget this step, but it's important to always do the critical profiling on builds, because sometimes the editor will act differently and degrade performance in ways the builds won't experience.
     
  10. jonagill_rr

    jonagill_rr

    Joined:
    Jun 21, 2017
    Posts:
    54
    Yeah, we've been doing our memory profiling in PC standalone builds, and we're still seeing the CPU allocations happening.

    To be clear, we're never calling SetPixels(), and the only reason we're calling Apply() is to force Unity to discard the CPU memory that it automatically allocated. (Which effectively doubles the cost of the texture, especially on platforms with shared CPU and GPU memory such as PS4).

    Our use case is that we are dynamically batching certain meshes that share the same shader at runtime. To do this, we take all of the textures referenced by each of those individual meshes and use Graphics.CopyTexture() to copy them into a single texture array that we can use to draw all the meshes in a single draw call.

    Doing some more reading, it looks like we might be able to use RenderTexture.dimension to create our GPU array rather than the Texture2DArray constructor. While it still feels like this should be possible when allocating a standard Texture2DArray, this could provide a workaround for our particular case.
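    Something along these lines, as a sketch of that workaround (sizes and format are just examples):

    Code (CSharp):
    // Sketch: a GPU-only 2D array created as a RenderTexture, with no CPU-side backing.
    // TextureDimension lives in UnityEngine.Rendering.
    RenderTexture CreateGpuOnlyArray(int width, int height, int slices)
    {
        var rt = new RenderTexture(width, height, 0, RenderTextureFormat.ARGB32)
        {
            dimension = UnityEngine.Rendering.TextureDimension.Tex2DArray,
            volumeDepth = slices,
            useMipMap = true,
            autoGenerateMips = false
        };
        rt.Create();
        return rt; // Graphics.CopyTexture can then target individual slices of this array
    }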
     
  11. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    We have a similar issue. Did you ever find a way to create a GPU-only texture in the end?
     
  12. dieterdb

    dieterdb

    Unity Technologies

    Joined:
    Apr 25, 2019
    Posts:
    6
    bgolus likes this.
  13. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    Thanks for the information dieterdb. That's good, but not very easy to follow. It would be really cool if there were a way to create a non-readable texture directly.
     
  14. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    If you're writing to it on the GPU, why not create a render target?
     
  15. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    Because we are using compressed texture formats, for example ASTC on mobile, to reduce GPU memory usage, and RenderTexture does not support those.
    We maintain a big tiled texture, with dynamic tile loading implemented via CopyTexture. It works well for us; the only flaw is that creating the big tiled texture causes a large one-time CPU memory allocation that we do not actually need.
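    The tile loading is basically the region overload of CopyTexture, roughly like this (a sketch; the tile size and names are just examples, and source and destination must share the same compressed format):

    Code (CSharp):
    // Sketch: copy one tile into its slot in the big tiled texture, entirely on the GPU.
    void LoadTile(Texture2D srcTile, Texture2D bigTiledTexture, int tileX, int tileY, int tileSize)
    {
        Graphics.CopyTexture(
            srcTile, 0, 0, 0, 0, tileSize, tileSize,                    // src element, mip, region
            bigTiledTexture, 0, 0, tileX * tileSize, tileY * tileSize); // dst element, mip, offset
    }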
     
    Last edited: Mar 2, 2022
    Neto_Kokku likes this.
  16. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    @dieterdb I just tested createUninitialized, and it still allocates CPU memory.
    You said "if required (readable textures for example)", but there is no "if", since there is no way to create a non-readable texture via the constructors.
     
  17. dieterdb

    dieterdb

    Unity Technologies

    Joined:
    Apr 25, 2019
    Posts:
    6
    @iileychen Are you seeing this behavior in the player? Inside the editor, textures are always CPU-backed.
     
  18. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    Yes, I profiled in the standalone player; it costs the same as in the editor. Here's my test code:

    Code (CSharp):
    private void DoTestGpuOnlyTexture()
    {
        // mipCount of -1 means a full mip chain; the last two arguments are linear and createUninitialized.
        gpuOnly = new Texture2D(4096, 4096, TextureFormat.RGBA32, -1, true, true);
        Graphics.CopyTexture(tex, gpuOnly);
        gpuOnlyMat.mainTexture = gpuOnly;
    }
     
  19. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
  20. dieterdb

    dieterdb

    Unity Technologies

    Joined:
    Apr 25, 2019
    Posts:
    6
    @iileychen Sorry for the delayed reply on this.
    I checked, and it seems that at the moment it is not possible to create a Texture2D from C# that does not have that initial CPU memory backing.
    While there is an internal solution for this, it is currently not exposed through the C# Texture constructors. I added a ticket (internal) to expose this, as I believe it should be available.
    The current advice is to use RenderTexture for GPU-only textures (though that might not always be possible, for example for non-renderable formats).
     
  21. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    In that case a workaround could be to actually have that big tiled texture be pre-generated as a blank unreadable texture asset, which you modify by using CopyTexture during runtime. The only drawback is that the size cannot be dynamic. It also adds to your build size, but since it's blank it should compress very well even with LZ4.
     
  22. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
    Would be nice to see the same resolution for texture arrays as well, rather than having to create one and then mark it as unreadable.
     
    iileychen likes this.
  23. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
  24. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    Any updates?
     
  25. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
    1 year later.....
     
    LizzyFox likes this.
  26. LizzyFox

    LizzyFox

    Joined:
    Nov 10, 2016
    Posts:
    4
  27. LizzyFox

    LizzyFox

    Joined:
    Nov 10, 2016
    Posts:
    4
    @joshuacwilde
    TextureCreationFlags.DontInitializePixels | TextureCreationFlags.DontUploadUponCreate flags can help you :)
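    Roughly, with the newer GraphicsFormat-based constructor (a sketch -- size and format are just examples, and it needs a Unity version that exposes these flags):

    Code (CSharp):
    using UnityEngine.Experimental.Rendering;

    // Sketch: skip initializing the CPU-side pixels and skip the initial upload.
    var flags = TextureCreationFlags.MipChain
              | TextureCreationFlags.DontInitializePixels
              | TextureCreationFlags.DontUploadUponCreate;
    var array = new Texture2DArray(1024, 1024, 16, GraphicsFormat.R8G8B8A8_SRGB, flags);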
     
    mgear likes this.
  28. iileychen

    iileychen

    Joined:
    Oct 13, 2015
    Posts:
    110
    Still waiting... That really makes our warning messages unreadable.