The document says that the max in NextFloat is exclusive, but it’s not true.
var rand = new Unity.Mathematics.Random(1783403415);
Debug.Log(1.5f == rand.NextFloat(1f, 1.5f)); // true
Indeed, it doesn’t work the way the documentation describes. You should consider submitting a bug report to bring this to their attention. It seems to be due to precision problems with floating-point values: the source random value (in [0, 1)) stays within the correct bounds, but something goes wrong in the subsequent mapping to the target bounds.
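For illustration, here is a minimal sketch of where that mapping collapses, assuming (per the source quoted further down) that the largest value NextFloat() can produce is 0x3F7FFFFE:
float r = math.asfloat(0x3F7FFFFE);    // 0.99999988079071044921875, the largest possible NextFloat() result
float remapped = 1f + r * (1.5f - 1f); // exact result is 1.5 - 2^-24, exactly halfway between two floats
Debug.Log(remapped == 1.5f);           // true: round-to-nearest-even picks 1.5f, the supposedly excluded bound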
I actually think that the documentation is wrong, because almost all float-based random-range implementations have the min and max values inclusive; only the integer version has the max exclusive. It doesn’t make much sense for floats to have the max exclusive in almost all cases.
You should still file a bug report so they can fix the documentation (or implementation depending on what they really had in mind).
I will say, the intent of float NextFloat() really seems to be [0, 1), from what I see in the code and from some checking.
/// <summary>Returns a uniformly random float value in the interval [0, 1).</summary>
/// <returns>A uniformly random float value in the range [0, 1).</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public float NextFloat()
{
    return asfloat(0x3f800000 | (NextState() >> 9)) - 1.0f;
}
The left expression creates a bit pattern for a single-precision float in the range [1, 2) by just filling in random bits for the mantissa, and the subtraction yields a value between 0 (0x00000000) and 0.99999988079071044921875 (0x3F7FFFFE). Therefore, the no-parameter overload isn’t lying about its bounds. I have some uses for this behavior, like reimplementing random-point-in-sphere / random-point-in-circle, which at some point is supposed to distribute an angle uniformly over [0, 2*pi); though the actual upper bound is so close to 1 that it evidently causes stuff like OP describes…
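As a sketch of that use (assuming, on my part, the standard sqrt-radius construction for uniform density; this is not from Unity’s code):
static float2 RandomPointInCircle(ref Unity.Mathematics.Random rng, float radius)
{
    float angle = rng.NextFloat() * 2f * math.PI;  // stays in [0, 2*pi) only because NextFloat() < 1
    float r = radius * math.sqrt(rng.NextFloat()); // sqrt keeps the area density uniform
    return r * new float2(math.cos(angle), math.sin(angle));
}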
(Here’s a nice web resource for doing stuff with the bits of floats)
@Spy-Master
@Bunny83
Thanks for the reply!!
I have just sent the bug report.
I know it’s very rare, but exclusivity is important when the value will be converted to an integer.
Unity should implement and use Math.BitDecrement in Unity.Mathematics.
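(For reference, my illustration rather than anything from the thread: BitDecrement steps to the next representable value toward negative infinity, which is exactly the clamp an exclusive upper bound needs.)
Console.WriteLine(Math.BitDecrement(1.5));   // 1.4999999999999998
Console.WriteLine(MathF.BitDecrement(1.5f)); // 1.4999999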
Interesting. Yes, it seems you’re completely right. I haven’t looked at the source yet (I don’t have time at the moment), but you’re probably also right about the precision issues when upscaling, since rounding can mess up the max value.
By the way, Burst-compiling the method greatly increases the probability of returning the maximum value, perhaps because it reduces the precision of the calculation.
Most likely. You can also configure Burst precision.
I am also super positive that the documentation is wrong in specifying [min, max).
When it comes to floating point, RNGs usually adapt the existing bit sequence by rescaling it to 0.0 .. 1.0 (typically via division, say irand / (float)uint.MaxValue), meaning 1.0 should be inclusive (even though it’s unlikely).
Alternatively, one could implement it as irand / ((float)uint.MaxValue + 1.0) to keep 1.0 exclusive. But then this happens:
// single
var oneUInt = uint.MaxValue / ((float)uint.MaxValue + 1f); // 2^32
Console.WriteLine($"{oneUInt}=1 {(1f).Equals(oneUInt)}"); // True
var oneInt = int.MaxValue / ((float)int.MaxValue + 1f); // 2^31
Console.WriteLine($"{oneInt}=1 {(1f).Equals(oneInt)}"); // True
// double
var oneUIntDouble = uint.MaxValue / ((double)uint.MaxValue + 1.0); // 2^32
Console.WriteLine($"{oneUIntDouble}=1 {(1.0).Equals(oneUIntDouble)}"); // False
var oneUIntAlt = uint.MaxValue / ((double)uint.MaxValue); // 2^32-1
Console.WriteLine($"{oneUIntAlt}=1 {(1.0).Equals(oneUIntAlt)}"); // True
From this you can see that single precision simply can’t keep up with this, so I’m sure they had the best of intentions for it; it just doesn’t come through. I’ve tested before with various ways one can map 0..2^32 integers onto a 23-bit mantissa (in the 0..1 range), and contrary to naive intuition, division is the fastest and the most robust.
Also, a 32-bit 0..1 range cannot even entirely represent the actual probabilities involved. Yes, it’s uniformly random, but you’re bound to lose something when you try to pigeonhole 32 bits onto 23 slots. In other words, the chance to get exactly 1.0 is somewhat greater than what is intuitive (min/max is a post-transformation, btw).
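To put a number on that (my own quick check, consistent with the boundary probes further down): under the division mapping, 128 consecutive inputs collapse onto exactly 1.0f.
int hits = 0;
for (uint u = uint.MaxValue - 256; u != 0; u++) // u++ wraps to 0 right after MaxValue, ending the loop
    if (u / (float)uint.MaxValue == 1f) hits++;
Console.WriteLine(hits); // 128: everything from 4_294_967_168 upward rounds to 1.0f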
So here are a couple of ways you could solve this:
float NextFloatExcl(this Unity.Mathematics.Random rng, float min, float max) {
    float r;
    do r = rng.NextFloat(); while (r >= 1f);
    Debug.Assert(0.0 <= r && r < 1.0);
    return min + r * (max - min);
}
Or try this version instead, where you shouldn’t have to do anything at all. Not even BitDecrement.
double NextDoubleExcl(this Unity.Mathematics.Random rng, double min, double max) {
    var r = rng.NextDouble();
    Debug.Assert(0.0 <= r && r < 1.0);
    return min + r * (max - min);
}
Anyway, Unity’s docs can be casually defective at times, or sometimes fairly representative of how badly things are. Here’s one such example where we have atan rotated by 90 degrees (by swapping the arguments), and where 1.0/6.28 is supposed to be (2\pi)^{-1} (so if you make anything that scales radially with that one, that’s an error of 0.1825\ldots \approx \frac{73}{400} degrees, which doesn’t sound like much, but it can quickly escalate because of the violated idempotence of several math operations over such an imprecise value).
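To sanity-check that figure: a full turn scaled by 1/6.28 instead of 1/(2\pi) overshoots by
double degreesOff = (2.0 * Math.PI / 6.28 - 1.0) * 360.0;
Console.WriteLine(degreesOff); // ~0.1826, i.e. roughly 73/400 of a degree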
I’ve reported that one, but it was honestly easier to get govt. tax reimbursement than to make Unity employees confirm this to be an undesirable implementation.
Btw this is where single-precision will finally acknowledge the discrepancy
var oneUIntA = uint.MaxValue / ((float)uint.MaxValue + 257f);
Console.WriteLine($"{oneUIntA}=1 {(1f).Equals(oneUIntA)}"); // False
var oneUIntB = (uint.MaxValue - 128) / (float)uint.MaxValue;
Console.WriteLine($"{oneUIntB}=1 {(1f).Equals(oneUIntB)}"); // False
That’s how coarse the error is on this scale!
In other words
Console.WriteLine($"{1f - (float)(uint.MaxValue - 128) / uint.MaxValue:F16}"); // 0.0000000596046448
Console.WriteLine($"{1f > (4_294_967_167 / (float)4294967295)}"); // True
Console.WriteLine($"{1f > (4_294_967_168 / (float)4294967295)}"); // False
This difference is smaller than one ULP at that magnitude and thus cannot be registered.
Or to put it shortly, single precision runs out of gas at this level of detail. Look, we can ride the very edge of it:
Console.WriteLine($"{1f > 0.99f}"); // True
Console.WriteLine($"{1f > 0.9999f}"); // True
Console.WriteLine($"{1f > 0.9999999f}"); // True
Console.WriteLine($"{1f > 0.99999997f}"); // True
Console.WriteLine($"{1f > 0.9999999701f}"); // True
Console.WriteLine($"{1f > 0.999999970197f}"); // True
Console.WriteLine($"{1f > 0.9999999701976776f}"); // True
Console.WriteLine($"{1f > 0.9999999701976777f}"); // False (== would return True)
In fact, these two literals simply quantize differently (if I had to guess, the hairline is probably right in the middle):
Console.WriteLine($"{0.9999999701976776f:F16}"); // 0.9999999403953552
Console.WriteLine($"{0.9999999701976777f:F16}"); // 1.0000000000000000
My previous post includes the source of what is done in Unity.Mathematics. Both from the apparent intent of the design and from simple experiments on the bounds, the base no-parameter overload excludes 1 as advertised. Caveat: the experiment was x64 Mono and Burst just calling that method with input state = ~0u, under the different float precision and float mode options; all produce the same assembly. More involved expressions may be evaluated differently based on settings, but predictable behavior is out the window in those cases anyway.
Your first overload doesn’t work on my end with OP’s seed and min/max; it’s effectively doing the same thing as the actual min/max overload, but with an ultimately redundant loop. As with the actual overload, the remapping in that particular case simply ends up returning the upper bound because the random value is close enough to 1 (0.99999988079071044921875 = 0x3F7FFFFE), even with FMA.
[UnityEditor.MenuItem("P1506384/8_A")]
private static void P1506384_8_A()
{
var rand = new Unity.Mathematics.Random(1783403415);
float z = NextFloatExcl(rand, 1.0f, 1.5f);
Debug.Assert(z < 1.5f, "Remapped value not less than 1.5");
}
I hasten to add, your two methods should really be passing the Random instance by reference, lest the state not be updated in the instance you passed.
(sorry, akeit, I had a now irrelevant draft post I didn’t fully close, and this message improperly shows as a reply to your post)
Yes, it does work the same; it is literally that same thing with the ultimately redundant loop. It works by guaranteeing that the r >= 1f check never lets such a value through. That said, it should not return a value that breaks the assert (r < 1f and r >= 1f are mutually exclusive). The returned number should never be close enough to 1 to be reported as 1.
The loop is in fact not redundant for this particular case: it is common to throw away bad RNG results without tinkering with them, because fudging values produces uniformity biases. RNG output should be treated as a sequence, and you move along this sequence until the data fits the constraints. The technique is called discarding and is relatively common with RNG algorithms (especially with the normal distribution, where ultra-rare results might land in practical infinities). A sketch of the same idea in another setting follows.
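The same pattern shows up when drawing unbiased bounded integers; this is my generic sketch of the technique, not Unity API (only NextUInt() is theirs, and it assumes max > 0):
static uint NextUIntUnbiased(ref Unity.Mathematics.Random rng, uint max)
{
    // Reject the low band that the modulo would over-represent, then redraw.
    uint threshold = (uint)(0x1_0000_0000UL % max);
    uint r;
    do r = rng.NextUInt(); while (r < threshold);
    return r % max;
}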
Could be, I’m a bit rusty with Unity.Mathematics atm. Oh yeah, Random is a struct. I made the extension hastily only to illustrate the contents; it wasn’t intended for copy/paste.
I guess this is what you meant? There is no ref for extensions though.
float NextFloatExcl(ref Unity.Mathematics.Random rng, float min, float max)
Regardless, the point of my posts is the issue with 32-bit FP precision; I am not talking about the assembly. I think it’s clear from my tests that there are hard physical boundaries, which is very relevant to how a 32-bit or 64-bit RNG operates when confined to 23 mantissa bits.
I didn’t quite get what you mean by this, but if this is something that will affect someone’s codebase and my solution fixes nothing, then I’d recommend using the 64-bit version (NextDouble). Unity won’t fix this anytime soon, mostly because it’s not really their problem. And basically, if a user wants greater precision – and an argument can be made that whoever needs to accurately differentiate between 1.0 and 0.9999999701976777 is ultimately after greater precision – then that user should use the types best suited for that precision.
Edit: unless you’re saying you actually caught a bug in their implementation; in that case I just didn’t understand what you said, sorry.
I’m saying that float NextFloat() is designed not to return 1 (with admittedly limited but nonetheless reproducible empirical evidence on the extrema on x64), so “guaranteeing” shouldn’t be necessary in the first place, barring the compiler playing loose with arithmetic operations when told to with Burst attributes etc. (which, again, throws predictable behavior out the window).
Breaking down the actual logic of the method: we get a float in the range [1, 2) (the max fills the mantissa with 1’s), and for that max bound, subtracting 1 has an unambiguous result: 1.99999988079071044921875 - 1 = 0.99999988079071044921875, which is exactly represented by the bit pattern 0x3f7ffffe.
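A quick way to check that on a machine (using asfloat/asuint from the package):
float upper = math.asfloat(0x3FFFFFFF);      // 1.99999988079071044921875, mantissa all ones
float diff = upper - 1.0f;                   // exact subtraction, no rounding involved
Debug.Log($"{diff} {math.asuint(diff):X8}"); // 0.9999999 3F7FFFFE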
this ref / ref this is perfectly valid on value types.
The code you wrote for the double method doesn’t do any looping. As it is, it’s the same logic as the existing double NextDouble() with an extra assert that, much in the same way as the float counterpart, shouldn’t be triggered. I feel like you’re addressing the wrong points here: you seem fixated on ensuring that the starting random value is in [0, 1), when the tangible issue we’re seeing is the precision of the remapping after getting the random value.
I get the feeling that logic changes affecting the DOTS PRNG wouldn’t be on their to-do list. At best, they would amend their docs.
Ok, fine, let’s start over.
Well, the remapping in Unity.Mathematics’ Random is done exactly as I showed:
public float NextFloat(float min, float max) { return NextFloat() * (max - min) + min; }
There is nothing important about that step, because if the whole thing doesn’t work, it’s because you can’t differentiate from 1 something that was already established as 1. So the problem is in NextFloat, which does
return asfloat(0x3f800000 | (NextState() >> 9)) - 1.0f;
With NextState being a uint, from here:
private uint NextState()
{
    CheckState();
    uint t = state;       // the value returned is the state before advancing
    state ^= state << 13; // xorshift steps (13, 17, 5)
    state ^= state >> 17;
    state ^= state << 5;
    return t;
}
And we can find what asfloat does here
/// <summary>Returns the bit pattern of a uint as a float.</summary>
/// <param name="x">The uint bits to copy.</param>
/// <returns>The float with the same bit pattern as the input.</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static float asfloat(uint x)
{
    unsafe
    {
        return *(float*)&x;
    }
}
Hopefully that comment makes the purpose of this method very clear. So we can go back to this
return asfloat(0x3f800000 | (NextState() >> 9)) - 1.0f;
and try to reason about it.
NextState here returns a 32-bit uint value that is a sequence of random bits. This is shifted to the right by 9 places, leaving 23 bits, which is exactly as much as fits into the 23-bit mantissa.
Then they OR in a fixed exponent, to constrain the float to the 2^0 binade. With 0x3f800000 you get the following binary representation (the mantissa being the # part):
S 0111 1111 #### #### #### #### #### ###
Now, with the float constrained to 2^0, and due to the IEEE 754 implicit leading bit, the whole value is increased by 1: when we inject the bits, what we get is value = 2^0 \cdot f, where the significand is just f = 1 + b_1 2^{-1} + b_2 2^{-2} + b_3 2^{-3} + \cdots (reading the injected bits b_i from left to right; the leading 1 comes for granted).
Because the significand adds 1 by default, that is why the result has to be reduced by 1.
(Also, S is 0, so the returned value is positive.)
More clarification:
For reference, value = (-1)^s\cdot2^{e-127}\cdot f
The 0x3f8 prefix sets the exponent field to 0111 1111, which is 127, so the above simplifies a lot:
value = (-1)^0\cdot2^0\cdot f = 1\cdot 1\cdot f = f
Finally, imagine the random bit soup was all 1s. This is the value we would get out:
S 0111 1111 1111 1111 1111 1111 1111 111 (hexadecimally 0x3fffffff)
\begin{align} value & = (-1)^s\cdot2^{e-127}\cdot f - 1\\ & = (-1)^0\cdot2^0\cdot f - 1\\ & = f - 1\\ & = (1 + 2^{-1} + 2^{-2} + 2^{-3} + 2^{-4} + 2^{-5} + \cdots + 2^{-23}) - 1\\ & = 2^{-1} + 2^{-2} + 2^{-3} + 2^{-4} + 2^{-5} + \cdots + 2^{-23}\\ & = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots + \frac{1}{8388608}\\ & = 1 - 2^{-23}\\ & = 0.99999988079071044921875 \end{align}
Now go back to post #9 where I show why exactly this value cannot be distinguished from 1.
Bottom line, I wouldn’t call this a bug, because the generator technically produces a 1-exclusive value. Again, the main issue is 32-bit precision. And if you don’t like it and don’t want to use double, use the discarding technique I showed earlier.
Now, could you clarify which point I got wrong?
0.99999988079071044921875 is precisely represented by 0x3f7ffffe, and it is smaller than 0.9999999701976776, a number that this 9th post says is distinguishable from 1.
System.Console.WriteLine(1f > 0.99999988079071044921875f);
prints True.
Fair point, can you show the code that doesn’t work as intended?
Originally we had just two possible explanations: either the generator itself is broken, or the remapping to min/max is. At this point we’ve proven (at least theoretically) that the Unity.Mathematics generator is implemented right. So what is the part that fails?
If you think the min-max scaling is the problem, that’s right, because the 1..2 interval has the best resolution of all exponent domains in terms of how well it reflects back onto the number line. So when you multiply this by a scalar different from 1, you basically get lossy compression (at worst). The values will either crunch together (with scalars < 1) or spread apart (with scalars > 1). When spreading apart, values should stay as distinguishable from the maximum as they originally were from 1.
So if this is not the case, maybe the way this is formulated is overflow-y. Maybe they shouldn’t do min + r * (max - min); maybe they should try the old lerp trick, which typically behaves better with large intervals:
n = a + t(b-a) = a + tb - ta = a(1 - t) + bt
static float NextFloatExcl(ref this Unity.Mathematics.Random rng, float min, float max) {
    float t = rng.NextFloat();
    return min * (1f - t) + max * t;
}
We can try something like min = -7236.0 and max = 12372.882.
Previously this would be evaluated as -7236.0 + (1 - epsilon) * (12372.882 + 7236.0) = -7236.0 + (1 - epsilon) * 19608.882, and this could destroy all nuance, so what you get in the end is effectively the same as -7236.0 + 1.0 * 19608.882 = 12372.882.
Now it would be evaluated as -7236.0 * (1 - (1 - epsilon)) + 12372.882 * (1 - epsilon), which gives us -7236.0 * (1 - 1 + epsilon) + 12372.882 * (1 - epsilon), and then -7236.0 * epsilon + 12372.882 * (1 - epsilon).
From this, -7236.0 * epsilon should be a very small but non-zero value, and 12372.882 * (1 - epsilon) can again come out as exactly 12372.882, because a very small non-zero value would have to diminish it. Edit: in other words, one of the two epsilons can be lost in computation, but only if both are lost do you get the maximum, because min * 0 + max * 1. Previously there was just one epsilon, so this should be an improvement.
So anyway, in the end, it could be that the implementation is bad (at the interval-fixing part). If anyone can try the above function, see if it helps.
Edit:
Oh and there is also this idea.
Instead of doing
public float NextFloat()
{
    return asfloat(0x3f800000 | (NextState() >> 9)) - 1.0f;
}
one can do
public float NextFloat()
{
    return 2f * (1f - asfloat(0x3f000000 | (NextState() >> 9)));
}
which runs equally well, but I think it’s better for this case. I might be wrong though.
This would effectively inject a mantissa with a value of 0.5 .. 1 (at no resolution loss), which is then subtracted from 1, so the precision bias is transferred toward 0 and the value is now in the range 0 .. 0.5. Finally the value is doubled back to 0..1, which changes the exponent back to 3f8 (127) and should leave the original mantissa intact.
Sadly, NextState() is private, but state and InitState() are not!
So what we can do is something like this (as a culmination of the previous post):
public float NextFloatExcl(this ref Unity.Mathematics.Random rng, float min, float max) {
    float r = asfloat(0x3f000000 | ~(nextState() >> 9));
    return (2f * r - 1f) * min + 2f * (1f - r) * (max - Single.Epsilon);

    uint nextState() {
        uint t = rng.state;
        rng.InitState(t); // will call NextState internally w/o changing anything else
        return t;
    }
}
Edit:
Took me some time to rearrange this properly. It now deliberately excludes the maximum (assuming max >= min), i.e. for min = 1 and max = 5:
when r is 0.5 (should return the maximum):
(2r - 1) * min + 2(1 - r)(max - eps)
= 0 * 1 + 2(1 - 0.5)(5 - eps)
= 5 - eps
when r is 1.0 (should return the minimum):
(2r - 1) * min + 2(1 - r)(max - eps)
= (2 * 1 - 1) * 1 + 2(1 - 1)(5 - eps)
= 1
Btw, this is actually not an easy problem to have/solve. One more reason why I would never rely on floating-point random value exclusivity in my work.
I haven’t tested this though; this is just a gist. All in all, not really useful except as a debate point.
Note: the 2s can’t be pulled out in front.
Your code has an out-of-place bit negation, and it does not exclude the maximum in practice.
Subtracting float.Epsilon will round right back to the minuend in most cases; math.EPSILON works for a larger range of values but still breaks with an input range as small as 1 to 5.
1 to 1.5 can return 1.5:
[UnityEditor.MenuItem("P1506384/17A")]
private static void P1506384_17A()
{
float min = 1f;
float max = 1.5f;
uint initState = 4071982377;
Unity.Mathematics.Random rr = new(initState);
float v = Generate(ref rr, min, max);
Debug.Log($"{v} {math.asuint(v):X8}");
Debug.Assert(v >= min, $"Remapped value not at least {min}");
Debug.Assert(v < max, $"Remapped value not less than {max}");
static float Generate(ref Unity.Mathematics.Random rn, float min, float max)
{
float r = math.asfloat(0x3f000000 | ((rn.NextUInt() + 1) >> 9));
return (2f * r - 1f) * min + 2f * (1f - r) * (max - float.Epsilon);
}
}
1 to 5 can return 5:
[UnityEditor.MenuItem("P1506384/17B")]
private static void P1506384_17B()
{
float min = 1f;
float max = 5f;
uint initState = 4071982377;
Unity.Mathematics.Random rr = new(initState);
float v = Generate(ref rr, min, max);
Debug.Log($"{v} {math.asuint(v):X8}");
Debug.Assert(v >= min, $"Remapped value not at least {min}");
Debug.Assert(v < max, $"Remapped value not less than {max}");
static float Generate(ref Unity.Mathematics.Random rn, float min, float max)
{
float r = math.asfloat(0x3f000000 | ((rn.NextUInt() + 1) >> 9));
return (2f * r - 1f) * min + 2f * (1f - r) * (max - float.Epsilon);
}
}
117 to 343 can return 343 in yours; the actual method doesn’t:
[UnityEditor.MenuItem("P1506384/17C")]
private static void P1506384_17C()
{
float min = 117f;
float max = 343f;
uint initState = 4071982377;
Unity.Mathematics.Random rr = new(initState);
float v = Generate(ref rr, min, max);
Debug.Log($"{v} {math.asuint(v):X8}");
Debug.Assert(v >= min, $"Remapped value not at least {min}");
Debug.Assert(v < max, $"Remapped value not less than {max}");
static float Generate(ref Unity.Mathematics.Random rn, float min, float max)
{
float r = math.asfloat(0x3f000000 | ((rn.NextUInt() + 1) >> 9));
return (2f * r - 1f) * min + 2f * (1f - r) * (max - math.EPSILON);
}
}
[UnityEditor.MenuItem("P1506384/17D")]
private static void P1506384_17D()
{
float min = 117f;
float max = 343f;
uint initState = 1584200935;
Unity.Mathematics.Random rr = new(initState);
float v = rr.NextFloat(min, max);
Debug.Log($"{v} {math.asuint(v):X8}");
Debug.Assert(v >= min, $"Remapped value not at least {min}");
Debug.Assert(v < max, $"Remapped value not less than {max}");
}
It’s not ideal, agreed, mostly because I didn’t have the patience to make a stable epsilon. Ideally it should be a double, or at least the epsilon should be factored out so that it doesn’t scale. The exponent trick didn’t really make much sense in the end; it was just an attempt to see what would happen.
Thanks for testing; I did this directly in the browser, combining some of the previous ideas into one. As I said, this isn’t actually trivial.
Yes, I was afraid that would be the case, but there is no quick way around it. The only reliable method would be BitDecrement, as mentioned by @akeit, and although it’s available in the MathF class, I don’t know if Unity supports that version of C# currently.
Here’s a quick attempt to do this differently.
// assumes max >= min
public float NextFloatExcl(this ref Unity.Mathematics.Random rng, float min, float max) {
    Debug.Assert(min <= max);
    float r = rng.NextFloat();
    //Debug.Assert(0f <= r && r <= 1f);
    float n = (1f - r) * min + r * max;
    if (n > max) n = max; // not required, but I want to avoid >= below
    while (n == max) n = System.MathF.BitDecrement(n); // I don't know if this will push to negatives
    if (n < min) n = min; // safe lower bound
    //Debug.Assert(min <= n && n < max);
    return n;
}
This might work? Obviously it fudges the value if it is == max, so this produces a slight bias toward the end.
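For anyone who wants to try it, a spot-check with the OP’s seed and bounds could look like this (assuming the method above lives in a static class so the extension call resolves):
var rng = new Unity.Mathematics.Random(1783403415);
float v = rng.NextFloatExcl(1f, 1.5f);
Debug.Assert(v >= 1f && v < 1.5f, "Exclusive remap leaked a bound");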