
Question: Is it possible to get more granularity from image 'alpha' than 1/256?

Discussion in 'Editor & General Support' started by dgoyette, Dec 5, 2022.

  1. dgoyette

    Joined: Jul 1, 2016
    Posts: 4,195
    I find that even if I set the Alpha of an image using a floating-point value, Unity only renders differences down to a granularity of 1/256. Meaning, there's no perceived difference between an image with alpha "0.95099" and one with alpha "0.954901", despite a fairly big numeric difference between the two. The result is that a slow "fade to black" looks janky, fading in big steps instead of continuously.

    This makes sense when there are only 8 bits available for the alpha channel. Is there any way to get more precision out of the alpha of an image, so that very small changes will have a very small (but noticeable) impact on the perceived alpha?

    (I know there are other ways to fade to black, but if there's a simple fix to this alpha issue I'd much prefer to use that, as it will mean less rework of other code.)
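
    For reference, here's roughly the kind of fade I mean (a minimal sketch, not my exact project code; "fadeImage" is just a placeholder for whatever full-screen black Image you're fading):

    Code (CSharp):

        using System.Collections;
        using UnityEngine;
        using UnityEngine.UI;

        public class SlowFade : MonoBehaviour
        {
            [SerializeField] private Image fadeImage;      // full-screen black UI Image
            [SerializeField] private float duration = 30f; // long fades make the steps obvious

            private IEnumerator Start()
            {
                // Lerp alpha from 0 to 1; on screen it advances in visible 1/255 steps.
                for (float t = 0f; t < duration; t += Time.deltaTime)
                {
                    var c = fadeImage.color;
                    c.a = Mathf.Clamp01(t / duration);
                    fadeImage.color = c;
                    yield return null;
                }
            }
        }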
     
  2. halley

    Joined: Aug 26, 2013
    Posts: 2,433
    You answered your own question. Unless you're doing the blending in a deeper color format, like float or 16 bits per channel, nope.

    Some systems do use dithering to approximate a deeper color space. If you dither half the pixels at color + 0.25 of a step and half the pixels at color - 0.25 of a step, you get a better overall sense of the intended color.
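
    Conceptually, something like this (just a CPU-side sketch of the idea; in practice you'd do it in a shader):

    Code (CSharp):

        using UnityEngine;

        public static class AlphaDither
        {
            // Checkerboard dither: half the pixels round up, half round down,
            // so the average of neighboring pixels lands between 8-bit steps.
            public static byte Quantize(float alpha, int x, int y)
            {
                // +/- 0.25 of one 8-bit step, alternating per pixel.
                float offset = ((((x + y) & 1) == 0) ? 0.25f : -0.25f) / 255f;
                return (byte)Mathf.Clamp(Mathf.RoundToInt((alpha + offset) * 255f), 0, 255);
            }
        }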
     
  3. dgoyette

    Joined: Jul 1, 2016
    Posts: 4,195
    Yeah, I didn't see an obvious way to get a higher bit depth out of the color that the image is using.

    Interestingly, and worth mentioning: the same stepping occurs when adjusting the Alpha of a CanvasGroup rather than the color of the Image itself. So that seems like a good indication this won't work through the canvas.

    Just in case anyone stumbles on this post down the road and wants to know another solution, the approach mentioned here seems to have much greater precision: https://forum.unity.com/threads/free-basic-camera-fade-in-script.509423/

    What I don't really understand, though, is that this approach is also just using a color with an alpha channel to draw a texture directly via OnGUI, yet it seems not to suffer from the precision issue that the canvas does. For example, the code below gives a very smooth fade to black as Alpha approaches 1. So I'm not really sure why.

    Code (CSharp):

        using UnityEngine;

        [ExecuteInEditMode]
        public class FaderTest : MonoBehaviour
        {
            public float Alpha;
            private Texture2D _texture;

            public void OnGUI()
            {
                // Lazily create a 1x1 texture, tint it black with the fade alpha,
                // and stretch it over the whole screen.
                if (_texture == null) _texture = new Texture2D(1, 1);
                _texture.SetPixel(0, 0, new Color(0, 0, 0, Alpha));
                _texture.Apply();
                GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), _texture);
            }
        }