Fast runtime texture compression

Discussion in 'General Graphics' started by VictorKs, Jun 9, 2021.

  1. VictorKs

    VictorKs

    Joined:
    Jun 2, 2013
    Posts:
    242
    So I want to deploy for desktop and mobile, and I need to compress textures during runtime. Is there a faster way than Texture2D.Compress()?

    Maybe a compute shader? Is it possible and efficient?
     
    Last edited: Jun 9, 2021
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    Yes, it's possible to compress a texture using a compute shader, and doing so is very fast compared to using the CPU.

    But there are some caveats.

    Unity doesn't have anything built in, so you need to do it all yourself (though perhaps that's obvious).

    Uploading the uncompressed source texture to the GPU and then compressing it in a compute shader may take about the same amount of time as using .Compress().

    Textures must be a power of 2 resolution. You can compress DXT1/5 and ETC textures that are multiples of 4 resolution with .Compress(), as 4x4 is the block size for those formats, and technically you can do the same with the compute shader. But you can't copy the resulting data into a compressed texture to actually use it, because Unity's CopyTexture() doesn't know how to handle that case.

    For similar reasons you can't use a full mip chain, as you have to limit the smallest mip to the block size of the format you're compressing to. For example, you can't copy compressed texture data into the 2x2 or 1x1 mips, as both of those use the same block size as a 4x4 mip, but Unity's CopyTexture() doesn't account for that and throws an error that the sizes mismatch.
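    In other words, the mip limit works out to "stop at the 4x4 level". A tiny helper like this (my own naming, nothing built in) shows the idea:

    Code (CSharp):
    using UnityEngine;

    public static class BlockCompressionMips
    {
        // Mip levels that are safe to fill via CopyTexture for a 4x4 block format:
        // stop before any level smaller than 4x4, because the 2x2 and 1x1 levels still
        // occupy a whole block and CopyTexture rejects the size mismatch.
        public static int UsableMipCount(int width, int height, int blockSize = 4)
        {
            int count = 0;
            while (width >= blockSize && height >= blockSize)
            {
                count++;
                width >>= 1;
                height >>= 1;
            }
            return Mathf.Max(count, 1);
        }
    }

    // e.g. a 256x256 texture has a 9 level full chain (256 down to 1),
    // but only the 7 levels from 256 down to 4 can be copied into safely.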
     
    VictorKs likes this.
  3. VictorKs

    VictorKs

    Joined:
    Jun 2, 2013
    Posts:
    242
    Thanks for the detailed info. I don't use mips at all for these textures, and they are RGB with no alpha, so DXT1 should be the fastest I believe. I generate my textures during runtime using blits and render textures, and then I ReadPixels() from the RenderTexture into a Texture2D. So maybe the best approach would be to generate the texture inside a compute shader and compress it there as well, but then how would I use that in my scene?
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    The pseudo code version would be:
    1. Render to your current render texture.
    2. Create a render texture that's 1/4th the resolution on both dimensions (source.width >> 2) using the format RenderTextureFormat.RGInt (or GraphicsFormat.R32G32_SInt).
    3. Run a compute shader that reads 4x4 blocks of pixels from the original render texture and writes out the DXT1 block to a single pixel.
    4. Create (or reuse) a full resolution Texture2D with the DXT1 format and no mips.
    5. Call Graphics.CopyTexture(blockRT, 0, 0, 0, 0, blockRT.width, blockRT.height, newTex, 0, 0, 0, 0) to copy the encoded render texture data to the new DXT1 texture.
    6. You're done. Do not call Apply() on the new texture!
    One annoyance is this leaves the blank DXT1 texture in the CPU side memory. You can call Apply() before you call CopyTexture() the first time you create it, but that comes at some additional cost, as it'll be uploading that blank texture to the GPU and you have to wait for that to finish before you can do the CopyTexture() or it'll blow that away.
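
    Roughly, those steps in C# might look like the following. The compute shader asset, its "CSMain" kernel, and all the names here are assumptions on my part, so treat it as a sketch rather than drop-in code:

    Code (CSharp):
    using UnityEngine;

    public class RuntimeDXT1Compressor : MonoBehaviour
    {
        // Assumed compute shader: kernel "CSMain" with [numthreads(8,8,1)] that reads
        // a 4x4 pixel block from _Source and writes one packed DXT1 block (as int2) to _Result.
        public ComputeShader encodeShader;

        RenderTexture blockRT;
        Texture2D compressedTex;

        public Texture2D Compress(RenderTexture source)
        {
            // 1/4 resolution target: one pixel per 4x4 block, 8 bytes (two 32 bit ints) per DXT1 block.
            if (blockRT == null)
            {
                blockRT = new RenderTexture(source.width >> 2, source.height >> 2, 0, RenderTextureFormat.RGInt);
                blockRT.enableRandomWrite = true;
                blockRT.Create();
            }

            // Full resolution DXT1 destination, no mips. Never call Apply() on it after CopyTexture().
            if (compressedTex == null)
                compressedTex = new Texture2D(source.width, source.height, TextureFormat.DXT1, false);

            // Encode one DXT1 block per pixel of blockRT.
            int kernel = encodeShader.FindKernel("CSMain");
            encodeShader.SetTexture(kernel, "_Source", source);
            encodeShader.SetTexture(kernel, "_Result", blockRT);
            encodeShader.Dispatch(kernel, blockRT.width / 8, blockRT.height / 8, 1);

            // GPU side copy of the encoded blocks into the compressed texture.
            Graphics.CopyTexture(blockRT, 0, 0, 0, 0, blockRT.width, blockRT.height, compressedTex, 0, 0, 0, 0);

            return compressedTex;
        }
    }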
     
    VictorKs likes this.
  5. VictorKs

    VictorKs

    Joined:
    Jun 2, 2013
    Posts:
    242
    Thanks for the help, this looks much better than what I had in mind. So if I got the last part right, you copy the data only on the GPU side, and that leaves a blank texture on the CPU. So if I were to create 200 textures that way, can I just reuse that blank texture, or will that overwrite my previous texture? Sorry if that seems rather elementary, but Graphics functions like CopyTexture confuse me more than compute shaders :) Btw, is there a way to unload runtime created textures from CPU memory? To avoid the double cost.
     
    Last edited: Jun 10, 2021
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    If you create 200 textures and need to use all of them at the same time, you'll need to create 200 separate "dummy" Texture2D objects in script so they can be assigned to the materials / components that need to use them. The Texture2D class in C# is used both for optionally storing the CPU side data and for holding the reference ID the graphics API assigned to the GPU side resource, so that Unity can tell the GPU to use a specific texture on a material, etc.

    Texture assets imported into Unity exist on disk, are loaded into CPU memory, uploaded to GPU memory, and then by default are flushed from CPU memory so they only take GPU side memory.

    For texture assets created from script, the assumption is that you'll be filling in the data manually on the CPU side before uploading it to the GPU. You can call tex.Apply(false, true); which will upload the CPU side data to the GPU and then flush the CPU side (that's what the second boolean does). Ideally you would create a dummy resource without any CPU side data at all apart from the resource ID, but AFAIK there's no way to do that within Unity at the moment.
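
    As a concrete example of that pattern (arbitrary names, RGBA32 just for illustration):

    Code (CSharp):
    using UnityEngine;

    public static class TextureUpload
    {
        // Create a texture from CPU side pixel data, upload it, then drop the CPU copy.
        // After Apply(updateMipmaps: false, makeNoLongerReadable: true) the texture only
        // takes GPU memory, but it can no longer be read or modified from script.
        public static Texture2D CreateGpuOnly(Color32[] pixels, int width, int height)
        {
            var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
            tex.SetPixels32(pixels);
            tex.Apply(false, true);   // second bool = makeNoLongerReadable, flushes the CPU side copy
            return tex;
        }
    }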

    There could be something I'm missing here though, since this isn't something I've dug too deep into.
     
    VictorKs likes this.
  7. GXMark

    GXMark

    Joined:
    Oct 13, 2012
    Posts:
    514
    Just putting this out there: if you have a game where users can import their own textures, like in a building game, you could opt to use the Crunch library (not the Unity built-in one, but the open-source version Unity's is based on). That way you could crunch the textures on import and save them to your disk or a server for download.
     
  8. ekakiya

    ekakiya

    Joined:
    Jul 25, 2011
    Posts:
    79
    c0d3_m0nk3y likes this.
  9. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    666