I don’t do anything beyond that point. Additive layering should ensure that a pixel’s full value comes from the combination of your layers, and all proportions are preserved regardless of the order of layering. Its main drawback is that there’s no way to confirm that your RGBA values actually add up to a total of 1.0 influence. There’s nothing stopping your splatmap from having more than 1.0 influence (resulting in excessively bright pixels) or less than 1.0 (resulting in dark pixels).
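For reference, the additive blend itself is just a weighted sum. A minimal sketch, assuming you’ve already sampled the four layer colors and the splatmap (the names here are mine, not anything built in):

// Additive blend: each layer contributes in proportion to its splat channel.
// layer0..layer3 and splatWeights are placeholder names for your sampled textures.
float4 BlendAdditive(float4 layer0, float4 layer1, float4 layer2, float4 layer3, float4 splatWeights)
{
    return layer0 * splatWeights.r
         + layer1 * splatWeights.g
         + layer2 * splatWeights.b
         + layer3 * splatWeights.a;
}

If the weights sum to exactly 1.0 the result stays in range; anything above or below that is where the bright or dark pixels come from.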
If you exported your splatmap from a Unity terrain, the values should add up, or close enough to make no difference. Check that your splatmap is imported as linear (uncheck sRGB), because the sRGB conversion would distort the influences. I’ve had similar issues when trying to multiply HDR color tints against a base white texture, where the result didn’t match the HDR color’s appearance. There are some color-space issues that need to be lined up when you do that.
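To put a rough number on it: the sRGB decode is approximately pow(x, 2.2), so a painted weight of 0.5 would come into the shader as roughly 0.21, and channels that summed to 1.0 in the authoring tool no longer do.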
If you’re painting the splatmap through vertex painting, you’ll want to use pure red, green, blue, and alpha brushes. Unity is kind of bad with alpha here, as Polybrush renders the swatch itself as invisible. I had to create fake opaque RGB swatches beside the correct ones, because the correct ones all appear invisible.
If you want to be certain the values are balanced, you can temporarily normalize them in your shader. It’s not the same as vector normalization (the built-in Normalize node), because vector normalization produces a length of 1 unit, not a total of 1.0 influence across all channels. This operation should ideally be baked into the splatmap rather than recalculated every frame, since the splatmap doesn’t actually change during play.
I assume it’s also better to paint your layers after designing the blending algorithm, since each blending method gravitates towards its own painting style. The order-dependent methods are especially sensitive to this.
I think Unreal recommended having three additive layers and one full-opacity base layer, which I guess would require tracking the total influence at each point so the remainder can be allotted to the base layer. This lets you ignore the alpha channel, too, and paint black to expose the base layer. With a permanent base layer, you’ll never have gaps. I don’t know how to implement that, but the concept sounds approachable if you’re digging around for options.
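Untested, but here’s a rough sketch of how I’d guess that remainder idea works (all names are mine):

// Three painted layers plus a permanent base layer that absorbs whatever influence is left over.
// Assumes the splatmap only uses RGB; painting black exposes the base layer.
float4 BlendWithBaseRemainder(float4 baseLayer, float4 layerR, float4 layerG, float4 layerB, float3 paintedWeights)
{
    // Clamp so an over-painted splatmap can't push the base weight negative.
    float baseWeight = saturate(1.0 - (paintedWeights.r + paintedWeights.g + paintedWeights.b));

    return baseLayer * baseWeight
         + layerR * paintedWeights.r
         + layerG * paintedWeights.g
         + layerB * paintedWeights.b;
}

This still won’t stop the painted channels from summing past 1.0 on their own, so you may still want a balancing step like the one below.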
I used a custom node connected to an .hlsl script, with the setup and code below, to test the blending without worrying about whether I’d balanced the splatmap correctly:

// Vector normalization adjusts the magnitude of a vector to 1.
// Weighted normalization adjusts the total weight to 1.
void BalanceVector4Weights_float(float4 RawWeights, out float4 BalancedWeights)
{
    // Find the total influence across all four channels.
    float totalInfluence = RawWeights.x + RawWeights.y + RawWeights.z + RawWeights.w;

    if (totalInfluence == 0)
    {
        // Edge case. If no weights are assigned, force them onto the first channel for safety.
        BalancedWeights = float4(1, 0, 0, 0);
    }
    else
    {
        // If 0.4 total weight is used, then multiply by: (1/totalInfluence) = (1/0.4) = 2.5.
        // If 1.5 total weight is used, then multiply by: (1/totalInfluence) = (1/1.5) = 0.66.
        // If -1 total weight is used, then multiply by: (1/totalInfluence) = (1/-1) = -1.
        BalancedWeights = RawWeights * (1 / totalInfluence);
    }
}
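If I remember right, the Custom Function node should have Type set to File, Source pointed at the .hlsl file, and the Name field set to just BalanceVector4Weights; Shader Graph appends the _float suffix based on the graph’s precision. Feed the raw splatmap sample in as RawWeights and use BalancedWeights as your blend weights.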