
GetPixel() vs. GetPixels() with differing TextureFormat

Discussion in 'Scripting' started by slippyd, Apr 3, 2007.

    slippyd
    Joined: Jun 11, 2005
    Posts: 129
    I keep discovering things that I wish were in the documentation or that someone had talked about on the forums here. I think I'm going to just start posting them to the forum; that way I or anyone else can find them when needed.

    I discovered that while GetPixel() and GetPixels() work the same with Texture2Ds of TextureFormat.ARGB32, they work differently with TextureFormat.Alpha8 and TextureFormat.RGB24. Because a Color always has 4 components, an Alpha8 pixel (1 component) and an RGB24 pixel (3 components) don't map onto a Color object perfectly.

    When GetPixel() reads an Alpha8 or an RGB24 texture, it appears to return the "real" values: for Alpha8 it fills the r, g, and b components with 0.0, and for RGB24 it fills the a component with 1.0.

    When GetPixels() reads an Alpha8 or an RGB24 texture, it appears to adjust the values so the result is more directly usable: for RGB24 the a component is still filled with 1.0, but for Alpha8 the r, g, and b components are filled with 1.0 as well, so each texel comes back as white with the stored alpha.

    Here's some sample code to try this out. Try giving it textures of each of the 3 formats with different colors in the bottom-left corner, middle, and top-right corner. It's probably best to turn off mip mapping on the textures you try, as with any texture you are reading pixels from or drawing pixels onto.

    Code (javascript):
    var srcTex : Texture2D;

    function Start() {
        var srcPixels : Color[] = srcTex.GetPixels();

        // Bottom-left corner
        print("srcTex.GetPixel(0, 0): " + srcTex.GetPixel(0, 0));
        print("srcTex.GetPixels(0, 0, 1, 1)[0]: " + srcTex.GetPixels(0, 0, 1, 1)[0]);
        print("srcPixels[0]: " + srcPixels[0]);

        // Middle pixel
        var midX = (srcTex.width - 1) / 2;
        var midY = (srcTex.height - 1) / 2;
        print("srcTex.GetPixel(" + midX + ", " + midY + "): " + srcTex.GetPixel(midX, midY));
        print("srcTex.GetPixels(" + midX + ", " + midY + ", 1, 1)[0]: " + srcTex.GetPixels(midX, midY, 1, 1)[0]);
        // GetPixels() returns rows bottom-to-top, so the flat index is y * width + x
        print("srcPixels[midY * srcTex.width + midX]: " + srcPixels[midY * srcTex.width + midX]);

        // Top-right corner
        print("srcTex.GetPixel(" + (srcTex.width - 1) + ", " + (srcTex.height - 1) + "): " + srcTex.GetPixel(srcTex.width - 1, srcTex.height - 1));
        print("srcTex.GetPixels(" + (srcTex.width - 1) + ", " + (srcTex.height - 1) + ", 1, 1)[0]: " + srcTex.GetPixels(srcTex.width - 1, srcTex.height - 1, 1, 1)[0]);
        print("srcPixels[srcPixels.length - 1]: " + srcPixels[srcPixels.length - 1]);
    }
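
    To make the difference concrete: on an Alpha8 texture, I'd expect the three bottom-left prints to come out something like this (illustrative values, assuming the alpha in that corner is 0.5):

        srcTex.GetPixel(0, 0): RGBA(0.000, 0.000, 0.000, 0.500)
        srcTex.GetPixels(0, 0, 1, 1)[0]: RGBA(1.000, 1.000, 1.000, 0.500)
        srcPixels[0]: RGBA(1.000, 1.000, 1.000, 0.500)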
    I don't know whether this difference is intentional, but if it isn't, I would prefer that GetPixel() be changed to work like GetPixels(). I have some additive texture copy code that works seamlessly with all 3 TextureFormat types because of the way GetPixels() fills in the missing components. If it didn't behave this way, I would want to check the format myself with something like a Texture2D.format variable, but no such class variable exists.
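
    For reference, here's a minimal sketch of the kind of additive copy I mean (dstTex and AdditiveCopy are names I made up for this example; it assumes both textures are the same size and readable, with an ARGB32 destination you can SetPixels() on):

    Code (javascript):
    var srcTex : Texture2D;
    var dstTex : Texture2D;

    function AdditiveCopy() {
        var srcPixels : Color[] = srcTex.GetPixels();
        var dstPixels : Color[] = dstTex.GetPixels();

        for (var i = 0; i < srcPixels.length; i++) {
            // Scale the source color by its own alpha before adding.
            // Because GetPixels() returns Alpha8 texels as white-with-alpha
            // and RGB24 texels with a = 1.0, this same line does something
            // sensible for all three formats.
            dstPixels[i] += srcPixels[i] * srcPixels[i].a;
        }

        dstTex.SetPixels(dstPixels);
        dstTex.Apply();
    }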