[Feedback] Why are texture manipulation routines so awkward?

Discussion in 'General Discussion' started by neginfinity, Sep 6, 2017.

  1. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Why are texture manipulation routines so awkward? I'm talking about anything related to get/set/read pixels.

    Problems spotted so far:
    • Texture2D.ReadPixels reads data from the globally set RenderTexture.active, meaning we're technically passing a function parameter via a global variable, which is not the best practice.
    • The rect in ReadPixels is specified using floating point, which makes me wonder what will happen in a situation where 1024 is actually 1023.99999999781. Will it round to the nearest integer (sane way), or will it round down and stretch the image (logical way)?
    • There's no Get/SetRawTextureData on Cubemap textures. The only accessible interface is Get/SetPixels. Meaning the data I set will have to go through the floating point conversion, no matter what I do.
    • There's no GetPixels and no GetRawTextureData on RenderTargets. The only way to grab data is by first transferring it to another texture via ReadPixels.
    • There's no way to set an individual cubemap face as a render target. Apparently they're supposed to be used with RenderToCubemap only and are not usable in any other way.
    • Speaking of which, RenderToCubemap does not support replacement shaders, and I think it won't support MRT rendering either.
    So, it is awkward all the way around. Apparently to render to an individual cubemap face, I need to:
    1. Create a temporary render target and render onto THAT.
    2. Create a temporary texture and read data into it using ReadPixels.
    3. Get the data using GetPixels (hello, GC allocation)
    4. And set the data received using SetPixels, for the required cubemap face.
    I mean... come on?
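    A minimal C# sketch of the four-step workaround described above (names like `cam`, `cube` and `size` are placeholders, not anything from the Unity docs):

```csharp
using UnityEngine;

public static class CubemapFaceWorkaround
{
    // Render the camera's view into one cubemap face the long way around.
    public static void RenderFace(Camera cam, Cubemap cube, CubemapFace face, int size)
    {
        // 1. Create a temporary render target and render onto THAT.
        var rt = RenderTexture.GetTemporary(size, size, 24);
        cam.targetTexture = rt;
        cam.Render();
        cam.targetTexture = null;

        // 2. Create a temporary texture and read data into it using ReadPixels.
        var tmp = new Texture2D(size, size, TextureFormat.RGBA32, false);
        RenderTexture.active = rt;
        tmp.ReadPixels(new Rect(0, 0, size, size), 0, 0);
        RenderTexture.active = null;
        RenderTexture.ReleaseTemporary(rt);

        // 3. Get the data using GetPixels (a fresh Color[] on the managed heap).
        Color[] pixels = tmp.GetPixels();

        // 4. Set the data received using SetPixels, for the required cubemap face.
        cube.SetPixels(pixels, face);
        cube.Apply();
        Object.Destroy(tmp);
    }
}
```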

    The way I see it, the proper way to implement texture objects would be to make sure that they ALL support:
    1. GetPixels/SetPixels
    2. GetRawPixelData/SetRawPixelData

    And in the case of volumetric textures and cubemap textures, all those functions would allow the user to address individual faces and planes (in a volume texture). And of course, it should be possible to set individual faces/planes as render targets.

    Can someone, maybe, pass this up to unity devs? (@Buhlaine or @aliceingameland perhaps?)
     
    Last edited: Sep 6, 2017
    Martin_H likes this.
  2. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,790
    I have used Graphics.Blit() to great effect in the past using normal 2D render textures... for cubemaps you could probably set up a temporary texture for each face, use Graphics.Blit() to render each face as needed, then use Graphics.CopyTexture() to get each face into the cubemap.

    Haven't tried it personally, but seems plausible.
     
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    https://docs.unity3d.com/ScriptReference/Graphics.Blit.html
    Graphics.Blit does not seem to offer any way to target individual rendertarget face.
     
  4. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,790
    Right, but you can access each face with CopyTexture(), so what I mean is...

    step 1: new target texture sized for each face (x6)
    step 2: render each face individually to their individual textures using Graphics.Blit()
    step 3: copy each face texture directly into your cubemap using Graphics.CopyTexture()
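    Roughly, in C# (a sketch of the suggestion, not tested code; `faceSources` is a placeholder for six per-face source textures, and CopyTexture requires matching formats and dimensions):

```csharp
using UnityEngine;

public static class CubemapViaBlit
{
    public static void FillCubemap(Texture[] faceSources, Cubemap cube, int size)
    {
        // step 1: one temporary target sized for a face (reused here for all six)
        var rt = RenderTexture.GetTemporary(size, size, 0);
        for (int face = 0; face < 6; face++)
        {
            // step 2: render this face into the temporary target
            Graphics.Blit(faceSources[face], rt);

            // step 3: copy the target directly into the cubemap;
            // cubemap faces are element indices 0-5
            Graphics.CopyTexture(rt, 0, 0, cube, face, 0);
        }
        RenderTexture.ReleaseTemporary(rt);
    }
}
```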
     
    neginfinity likes this.
  5. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Alright. I admit I overlooked this one.
    --edit--

    Checked it out.

    CopyTexture from RenderTarget to CubemapFace doesn't work.

    CopyTexture from Texture to CubemapFace works. Meaning I'll still need to do the ReadPixels thing.
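    So the working path looks roughly like this (a sketch; the staging texture's format and size must match the cubemap's for CopyTexture to accept it):

```csharp
using UnityEngine;

public static class RtToCubemapFace
{
    // RenderTexture -> Texture2D (via ReadPixels) -> cubemap face (via CopyTexture).
    public static void CopyRtToFace(RenderTexture rt, Cubemap cube, CubemapFace face)
    {
        var staging = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);

        // ReadPixels pulls from RenderTexture.active into the CPU-side pixel data.
        RenderTexture.active = rt;
        staging.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        staging.Apply(); // upload to the GPU so CopyTexture sees the data
        RenderTexture.active = null;

        // Texture2D -> cubemap face works; faces are elements 0-5.
        Graphics.CopyTexture(staging, 0, 0, cube, (int)face, 0);
        Object.Destroy(staging);
    }
}
```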
     
    Last edited: Sep 6, 2017
  6. YesYesNoNo

    YesYesNoNo

    Joined:
    Sep 8, 2017
    Posts:
    6
    Whose idea was it to have 'clamp' settings as default? So frustrating.
     
  7. King-Kadelfek

    King-Kadelfek

    Joined:
    Mar 8, 2010
    Posts:
    18
    Unity devs come from 3D software; they don't know a lot about 2D libraries, especially when pixel manipulation is involved.
    I have personally met several Unity representatives and higher-ups, so I can give you a simple example of how they handle things.

    I'm asking them:
    "I have two textures: a character and a background. I want to create a new texture with the character and the background combined."

    Unity guys: "You can use a shader!"

    Me: "No, I want to create a new texture, by combining existing textures."

    Unity guys: "Hum... you can put your character texture on a 3D plane, your background texture on another 3D plane behind the first one, then use a render camera to take a screenshot of both textures."

    Me: "What about combining the pixels of the two textures to create a new texture?"

    Unity guys: "Huh?!"

    For 2D developers, this is the most basic stuff, but Unity guys seem to think only in terms of 3D software.
    During the yearly 2016 survey, I wrote a lengthy version of the above description, and I received apologies from another higher-up, plus his direct contact for any further questions.
    (I also told them that RPG Maker XP has better pixel management than their SetPixel / GetPixel, which is true.)

    In the same genre, I had to ask a 2D library developer to add clipping, because this basic functionality still wasn't there after a year of public release:
    https://forum.unity.com/threads/orthello-2d-framework-100-free.95827/page-7#post-984650

    And here is another topic
    https://forum.unity.com/threads/looking-for-a-texture-2d-api.190012/

    I'm saying all of this to explain that the problem of missing pixel methods and inefficient routines is mainly caused by a culture difference: 2D developers expect 3D developers to create good 2D solutions... and a lot of Unity users don't even understand the need for 2D methods, hence Unity devs not developing them.
     
    Ryiah likes this.
  8. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    The stuff I requested is actually related to 3D image manipulation. See the DX/OpenGL texture classes/functions. What I want is the ability to write and read texture data as a raw array, without using a "Color[]" array as a proxy. This way I'd be able to directly load floating-point textures, DXT textures, etc., without wasting time on conversion from/to a Color array and losing precision. Get/SetPixels and ReadPixels also have difficulty working with formats like RFloat, which only has one channel.

    You can do it with existing APIs in several ways.

    Manually:
    1. Load both textures using Texture2D.LoadImage
    2. Ensure they both have the same size.
    3. Grab texture data using GetPixels() functions.
    4. Create new texture of the same size.
    5. Create new Color[] or Color32[] array.
    6. Fill the array with data you want, combining both input arrays.
    7. Fill the new texture with data you generated, using SetPixels.
    8. Call "Apply" method on destination.

    It is fairly trivial.
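    For illustration, the eight steps above might look like this (a sketch; the alpha blend in step 6 is just one possible combine rule, and the names are placeholders):

```csharp
using UnityEngine;

public static class TextureCombiner
{
    // Alpha-blend "foreground" over "background" into a brand-new texture, on the CPU.
    public static Texture2D Combine(byte[] backgroundPng, byte[] foregroundPng)
    {
        var background = new Texture2D(2, 2);
        var foreground = new Texture2D(2, 2);
        background.LoadImage(backgroundPng);                      // step 1
        foreground.LoadImage(foregroundPng);

        // step 2: both inputs must have the same size
        Debug.Assert(background.width == foreground.width &&
                     background.height == foreground.height);

        Color[] bg = background.GetPixels();                      // step 3
        Color[] fg = foreground.GetPixels();

        var result = new Texture2D(background.width, background.height); // step 4
        var combined = new Color[bg.Length];                      // step 5

        for (int i = 0; i < bg.Length; i++)                       // step 6
            combined[i] = Color.Lerp(bg[i], fg[i], fg[i].a);

        result.SetPixels(combined);                               // step 7
        result.Apply();                                           // step 8
        return result;
    }
}
```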

    Then, you can use Graphics.Blit. This way you'll be rendering onto RenderTextures, meaning you'll have to deal with the whole annoying "ReadPixels" thing, but the process will be much faster.
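    A sketch of that variant (the combine material and its shader are hypothetical, and you'd still need ReadPixels afterwards to get the result back to the CPU):

```csharp
// Assumes "combineMaterial" uses a shader that samples _MainTex (the blit
// source) plus a second _Foreground texture and blends them.
var rt = RenderTexture.GetTemporary(background.width, background.height, 0);
combineMaterial.SetTexture("_Foreground", foreground);
Graphics.Blit(background, rt, combineMaterial);
// rt now holds the combined image on the GPU; ReadPixels is needed
// to bring it back into a Texture2D.
```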

    Then you can use ComputeShaders. Dump whatever data into compute buffers, set them as StructuredBuffer shader variables, then set target compute buffer as RWStructuredBuffer, and fire the shader. I actually did something similar last week.
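    The C# side of that might look roughly like this (the kernel name, buffer names and the compute shader itself are hypothetical; a thread-group size of 64 is assumed):

```csharp
// Push both pixel arrays into StructuredBuffers, bind an RWStructuredBuffer
// for the output, and fire the shader. Color is 4 floats = 16-byte stride.
var input1 = new ComputeBuffer(count, sizeof(float) * 4);
var input2 = new ComputeBuffer(count, sizeof(float) * 4);
var output = new ComputeBuffer(count, sizeof(float) * 4);
input1.SetData(bgColors);
input2.SetData(fgColors);

int kernel = combineShader.FindKernel("CombineKernel");
combineShader.SetBuffer(kernel, "_Background", input1);
combineShader.SetBuffer(kernel, "_Foreground", input2);
combineShader.SetBuffer(kernel, "_Result", output);
combineShader.Dispatch(kernel, Mathf.CeilToInt(count / 64f), 1, 1);

// Read the combined pixels back and release the GPU buffers.
var result = new Color[count];
output.GetData(result);
input1.Release(); input2.Release(); output.Release();
```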

    Then you can combine textures without creating a new texture, using a shader. In the shader, set a "background" and a "foreground" texture. This will use less memory. Now, this solution will suffer from the issue that, by default, Unity sprites only have one texture on them, but it is possible to work around that too (by making a material for the sprite with a custom shader).

    The important thing to realize is that Unity is not RPG Maker, meaning texture data is not readily available for reading back in many cases. Meaning... in many cases you actually should write a shader to make a combined texture.

    Speaking of clipping, you actually should be able to use only a portion of a texture for a sprite by clipping it with the sprite editor (set the type to Sprite on import and then set the borders).
    On a normal texture it can be achieved using the texture scale/offset parameters; the problem is that it won't tile automatically, and that setting the parameters is not obvious. Then again, you could write a custom inspector for that.
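    For example (scale and offset are UV fractions, not pixels; `material` is whatever material the texture sits on):

```csharp
// Show only the top-left quarter of _MainTex: half the size in each
// axis, offset so the sampled window starts at the top row.
material.SetTextureScale("_MainTex", new Vector2(0.5f, 0.5f));
material.SetTextureOffset("_MainTex", new Vector2(0.0f, 0.5f));
```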
    ----
    Either way, I posted a feedback request here:
    https://feedback.unity3d.com/suggestions/better-raw-data-access-methods-for-all-texture-classes

    But I think it'll probably be ignored.