I can't seem to get BlendOp Min to work. No matter what I try it just turns my materials fully black. Does it take alpha value into account?
From how I understand the GL_MIN blend equation, it doesn't take alpha into account: http://www.opengl.org/wiki/Blending#Blend_Equations
Thanks for the link. It mentions alpha as a separate parameter when using glBlendEquationSeparate. Is it possible to use this in a shader in Unity to modify the alpha blending separately?
Okay, now I'm confused… So I can use, for example, BlendOp Add, Min and it will perform an operation on the color and alpha separately? It does compile but changing the second parameter doesn't seem to affect the result in any way.
I also tried just doing it directly in GLSL code via this:

Code (csharp):
```
#ifdef FRAGMENT
uniform lowp sampler2D _MainTex;
uniform lowp vec4 _Color;
void main()
{
    glBlendEquationSeparate(GL_FUNC_ADD, GL_MIN);
    gl_FragColor = texture2D(_MainTex, uv) * _Color;
}
#endif
ENDGLSL
```

But I get an error that GL_FUNC_ADD and GL_MIN are "undeclared identifiers". However, the OpenGL docs say those are the correct parameters :|
In blending there is basically this equation:

(SourceRGBA * Something) + (DestRGBA * Something)

The BlendOp changes the + between the two parts into something else, like Sub(tract), Min(imum of the two results), or Max(imum) of the two results, i.e. Min((SourceRGBA * Something), (DestRGBA * Something)).
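The equation above can be modeled per channel in plain Python (a sketch of my own for illustration, not Unity or driver code; the function and parameter names are made up). One caveat worth noting: per the OpenGL spec, the Min and Max equations actually ignore the blend factors entirely and compare the raw source and destination values.

```python
# Hypothetical per-channel model of the fixed-function blend stage.
# src/dst are single channel values in [0, 1]; the factors come from
# the Blend keyword, the op from BlendOp.
def blend(src, dst, src_factor, dst_factor, op):
    if op == "Add":
        return src * src_factor + dst * dst_factor
    if op == "Sub":
        return src * src_factor - dst * dst_factor
    if op == "RevSub":
        return dst * dst_factor - src * src_factor
    if op == "Min":  # Min and Max ignore the blend factors (OpenGL GL_MIN/GL_MAX)
        return min(src, dst)
    if op == "Max":
        return max(src, dst)
    raise ValueError(op)

# BlendOp Add with Blend One One: additive blending
print(blend(0.25, 0.5, 1.0, 1.0, "Add"))  # 0.75
# BlendOp Min: the factors are irrelevant, only the raw values matter
print(blend(0.25, 0.5, 1.0, 1.0, "Min"))  # 0.25
```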
Maybe I've misunderstood what the BlendOp is for, but my understanding of the render process (and I'm still learning, so please forgive any mistakes here) would be the following...

1) Opaque queue shaders are calculated and put into the frame buffer.
2) Transparent queue shaders are then dealt with...
3) If a transparent queue shader uses a blend operation, the Source is usually the current pixel fragment of the current shader...
4) ...and the Destination is usually what's already in the frame buffer (probably from the opaque shaders' pixels).
5) BlendOp Min should decide which is smaller between what's already in the frame buffer (Destination) and the current pixel fragment (Source), and replace the frame buffer with the new result.

Is this understanding correct, or am I barking up the wrong tree?
I recently experimented with BlendOps. Here are my notes: "BlendOp affects the way 'src' and 'dst' are added together. It only works with the Blend keyword present. Even when using Min and Max, which don't take the settings of Blend into consideration."

Code (csharp):
```
Shader "Custom/MaxBlending"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        Pass
        {
            BlendOp Max
            Blend One One

            CGPROGRAM
            // pragma target 5.0 because I was experimenting with DX11.1 BlendOps,
            // you might need to use 'target 3.0' instead
            #pragma target 5.0
            #pragma vertex vert
            #pragma fragment frag

            float4 _Color;

            float4 vert(float4 v:POSITION) : SV_POSITION
            {
                return mul(UNITY_MATRIX_MVP, v);
            }

            fixed4 frag() : COLOR
            {
                return _Color;
            }
            ENDCG
        }
    }
}
```

A picture I posted on Twitter used the 'min' variant.

P.S.: The forum's edit functionality appears to be glitching out; hopefully this version of the post won't glitch out and duplicate itself.
I now notice that I didn't mention that these operations are done per channel. If I recall correctly, this is also mentioned in the OpenGL documentation. (I don't dare edit the original post right now; the last time I did that, it duplicated the post.)
I finally got BlendOp working (it must have been a weird compilation bug), but I'm still stuck trying to accomplish my objective, which is illustrated below. Basically I want to take these intersecting transparent textures: ...and retrieve the smallest (Min) alpha value of each pixel, to get a result like this: Some have said it can't be done with a shader, but I feel as though it should be possible based on what you've told me, combined with my [admittedly limited] understanding of BlendOp Min. Can anyone give me a conclusive answer on whether this is possible? This is the shader I'm trying to work with:

Code (csharp):
```
// Sets the most transparent pixel when textures overlap
Shader "Mobile/Unlit/Transparent Blend"
{
    Properties
    {
        _Color ("Main Color (A=Opacity)", Color) = (1,1,1,1)
        _MainTex ("Base (A=Opacity)", 2D) = "" {}
    }
    Category
    {
        Tags { "Queue"="Transparent" "IgnoreProjector"="True" }
        BlendOp Min, Min
        Blend SrcAlpha OneMinusSrcAlpha

        SubShader
        {
            Pass
            {
                GLSLPROGRAM
                varying mediump vec2 uv;

                #ifdef VERTEX
                uniform mediump vec4 _MainTex_ST;
                void main()
                {
                    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
                    uv = gl_MultiTexCoord0.xy * _MainTex_ST.xy + _MainTex_ST.zw;
                }
                #endif

                #ifdef FRAGMENT
                uniform lowp sampler2D _MainTex;
                uniform lowp vec4 _Color;
                void main()
                {
                    gl_FragColor = texture2D(_MainTex, uv) * _Color;
                }
                #endif
                ENDGLSL
            }
        }
        SubShader
        {
            Pass
            {
                SetTexture [_MainTex] { Combine texture * constant ConstantColor [_Color] }
            }
        }
    }
}
```
Your example pictures look more like BlendOp Max. The answer to your question is yes, but only for grey textures. But perhaps "Is (...) possible?" wasn't the question you actually wanted to ask.
Thanks for the reply RC-1290, I think what you've posted is promising, and I'll try to clarify what I'm trying to do… I'm trying to create a little 2D fog of war technique on my game's level screen. On each revealed space, I use a depth mask shader to cut a hole through a dark overlay… Then I use a second quad on top with a semi-transparent 'fog' texture that blends the hole: So, the inner white space is transparent and the outer part is black, and the black part is what I'm trying to make invisible where it overlaps a transparent section.
Alright, but now you're switching the topic a bit. To make sure your original question is answered: BlendOp Min and Max do take alpha into account, but only for the alpha value that's written. Just like the color channels, all channels are handled separately. If you want someone to create a fog of war shader for you, try the commercial forum.
Thanks for the suggestion. I could pay someone to do it, but I want to try to do it myself for the sense of satisfaction and learning. I'll try a different approach if this one doesn't work, though. Currently I'm still a bit confused about how BlendOp is meant to work. I'll demonstrate with a much simpler example, showing what happens compared with what I expect. If someone can explain why it happens this way, that would be appreciated. I set up a blue background with two black ring textures… With BlendOp off, it looks like so, which is as expected: If I turn on BlendOp Min, the circles disappear… …But what I'd expect to happen is more like below, because I'm expecting the transparent parts of the textures to cancel out the black parts of the ring they overlap with: So, I'm expecting BlendOp Min to favor the completely transparent and translucent parts of the texture where the two ring textures overlap; why isn't this happening? And for comparison's sake, this is what happens with BlendOp Max:
It's important to note that the resulting alpha value isn't used for blending (unfortunately). It just determines the value that's written to the alpha channel of the buffer / texture you're writing to.
My apologies, I'm still learning and don't quite understand. Can you explain with a bit more detail what you mean?
I hope that I understand it well enough myself to be able to explain it well. It would be nice if BlendOp allowed you to choose which alpha value to use for blending: the alpha value of the current fragment, or the alpha value of the fragment in the background. But when you use BlendOp Min or Max, the values for Blend are ignored. It simply does a comparison per channel between the newly created fragment and the fragment in the background, and uses the highest (Max) or lowest (Min) value. Imagine the color of the current fragment and the background fragment as two vectors:

Code (csharp):
```
foreground = float4(0, 0.5, 0.2, 0.5)
background = float4(1, 0.2, 0.8, 0.6)
```

The two BlendOps would give you the following results:

Code (csharp):
```
Max = float4(1, 0.5, 0.8, 0.6)
Min = float4(0, 0.2, 0.2, 0.5)
```

Any further blend settings are ignored, so Blend SrcAlpha OneMinusSrcAlpha would give the same result as Blend Zero Zero. Hopefully that gives you a better idea of what's going on. So I think you'll need to use a different strategy to create the effect you're looking for.
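The per-channel comparison above can be checked directly in plain Python (standing in for the GPU here, using the same example values):

```python
foreground = (0.0, 0.5, 0.2, 0.5)
background = (1.0, 0.2, 0.8, 0.6)

# BlendOp Max / Min compare each channel independently;
# the Blend factors play no part in the result.
max_result = tuple(max(f, b) for f, b in zip(foreground, background))
min_result = tuple(min(f, b) for f, b in zip(foreground, background))

print(max_result)  # (1.0, 0.5, 0.8, 0.6)
print(min_result)  # (0.0, 0.2, 0.2, 0.5)
```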
But... BlendOp Max only blends where the sprites overlap, and the result is dark. "BlendOp Max" with "Blend One One" gives a different output, and that output looks right, but it only appears that way in the Scene view. Are Unity's BlendOp Min and Max behaving correctly?
Sorry for the necro, but because I too noticed that "BlendOp Max Blend One One" looks different from "BlendOp Max" alone, I got false hope for what I'm trying to do, and a lot of confusion. It appears that there can be duplicate "Blend ..." or "BlendOp ..." lines but only the last one is selected. So "BlendOp Max Blend One One" first chooses "BlendOp Max", then continues to "Blend One One"; but because "BlendOp Max" is incompatible, the default "BlendOp Add" is used with the "Blend One One". Reversing the order results in only "BlendOp Max" being used, without any "Blend", unfortunately.
Thanks for this discussion. I'm not sure if necroposting is a huge deal. I think I solved a similar issue. I needed a UI element to be applied on top of another element using Photoshop's Darken blending mode. As I read here somewhere, this is equivalent to BlendOp Min. But I also wanted to change the opacity of that element. So I ended up using a GrabPass, doing the BlendOp Min emulation math with the grabbed colors, and on top of that applying:

Code (CSharp):
```
BlendOp Add
Blend SrcAlpha OneMinusSrcAlpha
```

The docs say GrabPass is inefficient, but I don't know any other way to apply both effects. So the resulting code for this UI element is the following:

Code (CSharp):
```
Shader "Romeno/UIDarken"
{
    Properties
    {
        [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
        _Color ("Tint", Color) = (1,1,1,1)

        _StencilComp ("Stencil Comparison", Float) = 8
        _Stencil ("Stencil ID", Float) = 0
        _StencilOp ("Stencil Operation", Float) = 0
        _StencilWriteMask ("Stencil Write Mask", Float) = 255
        _StencilReadMask ("Stencil Read Mask", Float) = 255

        _ColorMask ("Color Mask", Float) = 15

        [Toggle(UNITY_UI_ALPHACLIP)] _UseUIAlphaClip ("Use Alpha Clip", Float) = 0
    }

    SubShader
    {
        Tags
        {
            "Queue"="Transparent"
            "IgnoreProjector"="True"
            "RenderType"="Transparent"
            "PreviewType"="Plane"
            "CanUseSpriteAtlas"="True"
        }

        Stencil
        {
            Ref [_Stencil]
            Comp [_StencilComp]
            Pass [_StencilOp]
            ReadMask [_StencilReadMask]
            WriteMask [_StencilWriteMask]
        }

        Cull Off
        Lighting Off
        ZWrite Off
        ZTest [unity_GUIZTestMode]
        ColorMask [_ColorMask]

        GrabPass { "_BackgroundTexture" }

        Pass
        {
            Name "Default"
            BlendOp Add
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 2.0

            #include "UnityCG.cginc"
            #include "UnityUI.cginc"

            #pragma multi_compile __ UNITY_UI_CLIP_RECT
            #pragma multi_compile __ UNITY_UI_ALPHACLIP

            struct appdata_t
            {
                float4 vertex   : POSITION;
                float4 color    : COLOR;
                float2 texcoord : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f
            {
                float4 vertex        : SV_POSITION;
                fixed4 color         : COLOR;
                float2 texcoord      : TEXCOORD0;
                float4 worldPosition : TEXCOORD1;
                float4 grabPos       : TEXCOORD2;
                UNITY_VERTEX_OUTPUT_STEREO
            };

            sampler2D _MainTex;
            fixed4 _Color;
            fixed4 _TextureSampleAdd;
            float4 _ClipRect;
            float4 _MainTex_ST;

            v2f vert(appdata_t v)
            {
                v2f OUT;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(OUT);
                OUT.worldPosition = v.vertex;
                OUT.vertex = UnityObjectToClipPos(OUT.worldPosition);
                OUT.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
                OUT.grabPos = ComputeGrabScreenPos(OUT.vertex);
                OUT.color = v.color * _Color;
                return OUT;
            }

            sampler2D _BackgroundTexture;

            fixed4 frag(v2f IN) : SV_Target
            {
                half4 dstColor = tex2Dproj(_BackgroundTexture, IN.grabPos);
                half4 srcColor = (tex2D(_MainTex, IN.texcoord) + _TextureSampleAdd) * IN.color;

                // Emulate BlendOp Min per channel against the grabbed background
                half4 color = half4(min(dstColor.r, srcColor.r),
                                    min(dstColor.g, srcColor.g),
                                    min(dstColor.b, srcColor.b),
                                    srcColor.a);

                #ifdef UNITY_UI_CLIP_RECT
                color.a *= UnityGet2DClipping(IN.worldPosition.xy, _ClipRect);
                #endif

                #ifdef UNITY_UI_ALPHACLIP
                clip(color.a - 0.001);
                #endif

                return color;
            }
            ENDCG
        }
    }
}
```
BlendOp Add is the default blend op. Blend and BlendOp are a pair that control how the blend equation works. Something like Blend SrcAlpha OneMinusSrcAlpha with BlendOp Add works like this:

Code (csharp):
```
finalColor = SrcColor * SrcAlpha + DstColor * OneMinusSrcAlpha;
```

The add in the middle is controlled by the BlendOp: using Subtract replaces the + with a -, and ReverseSubtract swaps the src and dst. BlendOp Min and Max don't use the Blend factors at all, or at least shouldn't. However, to recreate a blend like the one you're describing you shouldn't need a grab pass. Just use BlendOp Min and change the output of your shader to:

Code (csharp):
```
return half4(lerp(half3(1,1,1), col.rgb, col.a), 0);
```
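Why the lerp-to-white trick approximates Darken-with-opacity can be sketched numerically (a Python model of my own for illustration; the function names are made up). `darken_true` is Photoshop-style Darken applied at a given layer opacity; `darken_blendop_min` is the trick above: the fragment outputs lerp(1, src, opacity), and hardware BlendOp Min takes the per-channel minimum with the framebuffer. The two agree exactly at opacity 0 and 1 but can differ slightly in between, which is consistent with the "tiny little bit different" opacity behavior reported later in the thread.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def darken_true(dst, src, opacity):
    # Photoshop-style Darken at a given layer opacity, per channel:
    # blend the background toward min(dst, src) by the opacity.
    return lerp(dst, min(dst, src), opacity)

def darken_blendop_min(dst, src, opacity):
    # The shader outputs lerp(1, src, opacity); hardware BlendOp Min
    # then takes the per-channel minimum with the framebuffer.
    return min(dst, lerp(1.0, src, opacity))

# At full opacity the two are identical (both give min(dst, src)):
print(darken_true(0.8, 0.2, 1.0), darken_blendop_min(0.8, 0.2, 1.0))
# At half opacity they diverge slightly (about 0.5 vs about 0.6 here):
print(darken_true(0.8, 0.2, 0.5), darken_blendop_min(0.8, 0.2, 0.5))
```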
Yeah, it works; the resulting alpha just needs to be 1. But for some reason the opacity changes a tiny little bit differently, faster than in my approach. Thanks for the variation without GrabPass.
Hello, sorry for over-necro-ing this thread. I've been looking for a solution for days now and found a lot of information in your replies @bgolus, so thank you =) The thing is: I'm trying to do something not that exotic, but I keep getting mixed results. I have a background plane with colors (think sky colors). I have overlapping tree silhouette sprites in front of this plane, and I'd like them to:

- be transparent so that they take the plane's colors
- not stack alpha from other silhouettes (so I went for BlendOp Max)
- modulate the transparency with the vertex color's alpha set in the sprite
- multiply the vertex color with the texture color as well

My problem is that whether I take the texture color or just "return half4(0,0,0,1)" for plain black, the result is transparent. Another problem is that the border transparency of the silhouette returns something like the complementary color of the background color. Any clues? Is it doable that way? Did I miss something? =/ Thanks for your help anyway.
BlendOp Max means take the largest value from the src and dst. So if your shader outputs zeros, what's already been rendered is essentially guaranteed to be used, since that will be some value larger than zero. What you want is to simultaneously darken the background, but prevent multiple sprites from stacking that darkening effect. I.e., you want something like this: Instead of what you're probably currently getting, which is something like this: The easiest solution is to have the trees be 100% opaque (alpha of 1.0) and also colored by a similar gradient. Not exactly what you want, but it would work. The hard part is of course lining up the gradient between multiple trees, which might not all be the same size or have the same position in the gradient. Another option would be to use something like a named grab pass (if you're using the built-in rendering path), which would grab a copy of the screen just before rendering the first tree outline, darken that, and output it as fully opaque. After that you could look into stencils, or possibly even sprite masks. The really complex setup, which would match how something like Photoshop / GIMP works, would be to render your sprites into a separate render texture that you then render back into the scene. If it was me, I'd go with the first option I listed, and have some global settings to define the gradient in world space so I could recalculate it / sample the same texture for all of the sprites.
Thanks for your quick reply =) Yep, that's what I was afraid you'd answer. I'm using URP with the 2D renderer pipeline for mobile platforms. The gradient solution would work, but the environment is procedurally generated, so there is no "quick" way to know which gradient colors to use nor where to apply them (the gradient isn't linear, plus there is a parallax effect to make it complete, haha). I checked the stencil part (which is enabled by default) but thought it was performance-expensive for mobile. Am I wrong? Sprite masks work if we have each sprite on every rendering layer except its own. For now I have a render texture (which does the job well, except post-processing on the render cameras kills the alpha no matter what). And I'm afraid it will drastically affect performance too, since having a texture that large won't be good for the fill rate. Can I please ask you one last question: what is the overall least expensive solution here?
Sprite masks are implemented using stencils. The trick is to have each sprite both write to and read from the stencil in the same draw, whereas I believe sprite masks break that up into separate passes. The one that renders faster is the less expensive solution. Which one will that be? Who knows. It depends on what hardware you're targeting and what else is going on. You just have to try them and see if the easier solutions are fast enough.
Alright, I guess I'll have to check then =) Thanks a lot for your help, and if in the meantime I find another (better) way to achieve this, I'll post it here for posterity!
Did you find a better way to do this? I'm trying to do the same thing for drop shadows. Basically, one whole sorting layer needs to be blended.