
Discussion Why does DXTn / BCn / S3TC compression always need multiples of 4 in Unity?

Discussion in 'General Graphics' started by MarcAubryBoissel, Aug 2, 2022.

  1. MarcAubryBoissel

    MarcAubryBoissel

    Joined:
    Dec 1, 2016
    Posts:
    8
    Hello.
    I've been asking myself this question and can't find an answer that satisfies me: why, when importing a texture into Unity (from the editor or directly by script), is an error returned if the source image's width or height is not divisible by 4?

    I understand that DXTn compression uses a block-compression technique which breaks the uncompressed texture data into 4×4 blocks, compresses each block, and then stores the data. For this reason, textures to be compressed are expected to have dimensions that are multiples of 4.
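
    To make the block math concrete (my own illustration; the 8-bytes-per-block figure comes from the BC1/DXT1 format itself, nothing Unity-specific):

    Code (C++):
    #include <cstdint>
    #include <cstdio>

    // The image is carved into 4x4 texel blocks and each block compresses
    // to a fixed 8 bytes in BC1 (BC2/BC3/BC7 use 16 bytes per block).
    uint32_t BlocksPerAxis(uint32_t texels) { return (texels + 3) / 4; } // round up

    uint32_t Bc1SizeBytes(uint32_t w, uint32_t h)
    {
        return BlocksPerAxis(w) * BlocksPerAxis(h) * 8; // 8 bytes per BC1 block
    }

    int main()
    {
        // A 4x4-aligned size maps exactly onto blocks; anything else
        // forces a partial block at the right/bottom edge.
        printf("256x256 -> %u bytes\n", Bc1SizeBytes(256, 256)); // 32768
        printf("255x255 -> %u bytes\n", Bc1SizeBytes(255, 255)); // also 32768
    }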

    What I don't understand is that tools like Nvidia Texture Tools (NVTT) or AMD Compressonator can read and write DXTn textures whose dimensions are not multiples of 4, even without mipmaps.
    How do they manage that? Do they use a virtual size? A custom file header?

    It does seem possible, though, to differentiate a virtual size from the physical size. That is the technique used for mipmaps whose top-level dimensions are divisible by 4 but whose smaller levels are not.

    And if so, is there any particular reason Unity can't read and write a DXTn texture whose dimensions are not multiples of 4? A GPU-side optimization built around the 4x4 block decoder?
     
  2. georgerh

    georgerh

    Joined:
    Feb 28, 2020
    Posts:
    72
  3. MarcAubryBoissel

    MarcAubryBoissel

    Joined:
    Dec 1, 2016
    Posts:
    8
    Indeed, the multiple-of-4 requirement is logical once you know that the block-compression algorithms operate on 4x4 texel blocks.

    What surprises me the most is that the tools mentioned above (NVTT, Compressonator, DDS viewers) are able to read and write a DDS file whose dimensions are not multiples of 4.

    So I assume they must use some subterfuge to achieve this. I suppose they clamp to the nearest multiple of four when the texture is written and rescale back to the original physical size when it is read. What makes me say that: a 607x341 image written in BC1 format with Compressonator is readable in a DDS viewer, but it comes out twisted with NVTT. You can clearly see the shift of the blocks in the attachments; the image ends up with a physical size of 604x340. This would mean they use a header that differs from the BC specification.
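
    If they really do pad or clamp before compressing, here is my guess at what such a pre-pass could look like (pure speculation on my part; this version pads up by replicating edge texels so no data is lost, whereas a tool could just as well crop down, which would match the 604x340 above):

    Code (C++):
    #include <cstdint>
    #include <vector>

    // Hypothetical pre-pass: grow an RGBA8 image to the next multiple of 4
    // by replicating the edge texels, so a block compressor that requires
    // aligned dimensions can consume it. The original (virtual) size would
    // have to be stored somewhere else, e.g. in a header field.
    std::vector<uint8_t> PadToBlockSize(const uint8_t* src,
                                        uint32_t w, uint32_t h,
                                        uint32_t& paddedW, uint32_t& paddedH)
    {
        paddedW = (w + 3) & ~3u; // round up to a multiple of 4
        paddedH = (h + 3) & ~3u;
        std::vector<uint8_t> dst(paddedW * paddedH * 4);
        for (uint32_t y = 0; y < paddedH; ++y)
        {
            uint32_t sy = y < h ? y : h - 1; // clamp to the last source row
            for (uint32_t x = 0; x < paddedW; ++x)
            {
                uint32_t sx = x < w ? x : w - 1; // clamp to the last source column
                for (int c = 0; c < 4; ++c)
                    dst[(y * paddedW + x) * 4 + c] = src[(sy * w + sx) * 4 + c];
            }
        }
        return dst;
    }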

    However, I can't find any information about the method these tools use. If they encode and decode the DDS at multiples of 4 and then recreate the texture by resizing to the original size, that works well for a viewer, but the subterfuge loses much of its interest for real time.

    The reason Unity does not allow reading/writing anything other than multiples of 4 interests me. Is it strict application of the BC specification? A technical limitation of the compression/decompression libraries Unity uses? Performance issues?

    I'm starting to test ispc_texcomp, the tool Unity uses for Standalone texture compression. No idea if the same library is used for decompression. If someone from Unity passes by here, I'd be interested to know more ;)
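
    First impressions of the API, for anyone curious: a minimal BC1 call as I read the public ispc_texcomp header (take the struct fields as my best reading of that header, not gospel):

    Code (C++):
    #include <cstdint>
    #include <vector>
    #include "ispc_texcomp.h" // from Intel's ISPCTextureCompressor repo

    // Minimal BC1 compression call as I read the ispc_texcomp header.
    // Notably, the library expects width/height to already be multiples
    // of 4 -- consistent with Unity enforcing that at import time.
    std::vector<uint8_t> CompressBC1(uint8_t* rgba, int width, int height)
    {
        rgba_surface surface;
        surface.ptr    = rgba;
        surface.width  = width;      // must be a multiple of 4
        surface.height = height;     // must be a multiple of 4
        surface.stride = width * 4;  // bytes per row of RGBA8 input

        // 8 bytes of output per 4x4 block.
        std::vector<uint8_t> out((width / 4) * (height / 4) * 8);
        CompressBlocksBC1(&surface, out.data());
        return out;
    }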
     

    Attached Files:

  4. georgerh

    georgerh

    Joined:
    Feb 28, 2020
    Posts:
    72
    I doubt that the image is rescaled. It probably works like it does for the mips: just calculate the number of blocks you need for a given (odd) size and round up, then ignore all texels that fall outside the desired size. You can do that for mip 0 just like you do it for mips 1-n.
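
    A quick sketch of that rounding, just to spell out the idea (my own illustration, not from any of the tools' sources):

    Code (C++):
    #include <cstdint>
    #include <cstdio>

    // Mip 0 is treated exactly like every other mip: round the block count
    // up. Texels of the last block row/column that fall outside the mip
    // are simply never sampled.
    int main()
    {
        uint32_t w = 607, h = 341; // the odd size from the attachments above
        for (int mip = 0; ; ++mip)
        {
            uint32_t bw = (w + 3) / 4, bh = (h + 3) / 4; // blocks, rounded up
            printf("mip %d: %ux%u texels -> %ux%u blocks (%ux%u stored)\n",
                   mip, w, h, bw, bh, bw * 4, bh * 4);
            if (w == 1 && h == 1) break;
            w = w > 1 ? w / 2 : 1; // halve, never below 1 texel
            h = h > 1 ? h / 2 : 1;
        }
    }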
     
    Last edited: Aug 3, 2022
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    To concur with @georgerh, I believe the official spec still requires a multiple of 4 for mip 0. But for non-power-of-2 resolution textures the mip maps invariably end up not being multiples of 4, and that has been supported for a long time. It would make sense to apply the same logic used for the other mip levels to mip 0 for resolutions that don't match the block size, so I suspect that's what those other tools and most GPUs do.

    As for Unity itself, the problem may be in part that "most GPUs" thing. It's likely some GPUs do not handle this case properly, since it's outside the spec, so Unity decided not to support it.

    As for why some pre-compressed textures get messed up when used in Unity and some don't, that's because Unity does a magic trick with all textures. Unity generates all of its mesh data using OpenGL conventions, which includes mesh UVs. OpenGL and Direct3D (along with literally every other graphics API) are vertically flipped relative to each other in how they handle texture UVs. As a result, if you were to use a mesh with OpenGL UVs while rendering with Direct3D, the textures would be flipped upside down. And indeed, in Unity they are! Unity uploads textures to the GPU upside down!

    They don't decompress the textures to do this; they just invert the compressed data directly. And it would seem that whatever logic they use for that doesn't quite work out for some non-spec resolutions. Though curiously it works properly on the other mip levels, so it might be something they could fix if they wanted to.
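
    To illustrate why that flip can break on heights that aren't a multiple of 4, here's a reconstruction of the trick for BC1 (my own sketch, not Unity's actual code):

    Code (C++):
    #include <algorithm>
    #include <cstdint>

    // A BC1 block is 8 bytes: two 16-bit endpoint colors, then four bytes
    // of 2-bit indices, one byte per texel row (byte 4 = top row). Flipping
    // the image vertically means reversing the order of the block rows AND
    // reversing the four index bytes inside every block.
    void FlipBC1Vertically(uint8_t* data, int blocksW, int blocksH)
    {
        const int rowBytes = blocksW * 8;
        for (int top = 0, bot = blocksH - 1; top <= bot; ++top, --bot)
            for (int x = 0; x < blocksW; ++x)
            {
                uint8_t* a = data + top * rowBytes + x * 8;
                uint8_t* b = data + bot * rowBytes + x * 8;
                if (a != b) std::swap_ranges(a, a + 8, b); // swap the two blocks
                // Reverse the index rows inside each block. If the texture
                // height isn't a multiple of 4, the last block row holds
                // fewer than 4 valid rows, and this blind reversal shifts
                // them -- which would explain the artifacts seen above.
                std::swap(a[4], a[7]); std::swap(a[5], a[6]);
                if (a != b) { std::swap(b[4], b[7]); std::swap(b[5], b[6]); }
            }
    }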