Hello,
I’m fairly new to shaders and am working on software that does a lot of image compositing. What I’ve noticed is that almost all default shaders use this blend mode:
Blend SrcAlpha OneMinusSrcAlpha
This is fine for the vast majority of cases, since DstAlpha (the alpha value already in the framebuffer) is never used as a blend factor, so whatever ends up stored there usually doesn't matter. However, I'm working on a product that does need a correct DstAlpha value, and I'm finding this blend mode produces one with little utility, so I'm wondering: why is it used everywhere?
With this mode, the alpha written to the framebuffer is SrcA*SrcA + DstA*(1 - SrcA), which causes overlapping translucent layers to combine to an alpha that is more transparent than a correct "over" composite would give. Is there a reason for this?
Here’s my example, blending colors with half-opacity:
Backbuffer contains black, zero alpha. Apply Red at half opacity then Green at half opacity using RGBA:
Apply (1,0,0,0.5), (0,0,0,0) => (0.5,0,0,0.25)
Apply (0,1,0,0.5), (0.5,0,0,0.25) => (0.25, 0.5, 0, 0.375)
Notice how alpha is now 0.375, when two stacked half-opacity layers should cover 0.75? It's because SrcAlpha is getting multiplied by itself in the blend equation. Using the linear blend a + d(1 - a) for the alpha channel seems to make more sense.
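To double-check the arithmetic, here's a quick simulation of the fixed-function blend (just a sketch of result = src*factor + dst*factor in Python, not actual shader code; the function name is mine):

```python
def blend_src_alpha(src, dst):
    """Blend SrcAlpha OneMinusSrcAlpha, applied to all four channels.
    Note the alpha channel gets SrcA*SrcA + DstA*(1 - SrcA)."""
    a = src[3]
    return tuple(s * a + d * (1 - a) for s, d in zip(src, dst))

dst = (0.0, 0.0, 0.0, 0.0)                   # backbuffer: black, zero alpha
dst = blend_src_alpha((1, 0, 0, 0.5), dst)   # red at half opacity
print(dst)                                   # (0.5, 0.0, 0.0, 0.25)
dst = blend_src_alpha((0, 1, 0, 0.5), dst)   # green at half opacity
print(dst)                                   # (0.25, 0.5, 0.0, 0.375)
```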
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
Apply (1,0,0,0.5), (0,0,0,0) => (0.5,0,0,0.5)
Apply (0,1,0,0.5), (0.5,0,0,0.5) => (0.25, 0.5, 0, 0.75)
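Simulating the separate alpha blend the same way reproduces those numbers (again just a sketch; the function name is mine):

```python
def blend_separate_alpha(src, dst):
    """Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha:
    color uses SrcAlpha/OneMinusSrcAlpha, alpha uses One/OneMinusSrcAlpha,
    so the written alpha is the linear SrcA + DstA*(1 - SrcA)."""
    a = src[3]
    rgb = tuple(s * a + d * (1 - a) for s, d in zip(src[:3], dst[:3]))
    alpha = a + dst[3] * (1 - a)
    return rgb + (alpha,)

dst = (0.0, 0.0, 0.0, 0.0)                        # backbuffer: black, zero alpha
dst = blend_separate_alpha((1, 0, 0, 0.5), dst)   # red at half opacity
print(dst)                                        # (0.5, 0.0, 0.0, 0.5)
dst = blend_separate_alpha((0, 1, 0, 0.5), dst)   # green at half opacity
print(dst)                                        # (0.25, 0.5, 0.0, 0.75)
```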
This appears to be correct. So my question is: why isn't this blend mode used everywhere instead?
Cheers.