Years ago, when I wrote a wrapper for UnityEngine.Random to give it object identity and implement a generalized ‘IRandom’ interface, I just picked arbitrary values of 0.9999…:
For float I picked 0.99999f (5 significant digits)
For double I picked 0.99999999d (8 significant digits)
Technically speaking you can easily fit 7 significant digits into a float, so you could expand that out to 0.9999999f. I have noticed that if you give it an 8th digit, it gets rounded to 1f (and actually rounded: it will equate to 1).
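If you want to see where the rounding kicks in, a quick standalone check (the class name here is just for illustration) makes it obvious:

```csharp
using System;

public static class LiteralRoundingCheck
{
    public static void Main()
    {
        // 7 significant digits still sits below 1f...
        Console.WriteLine(0.9999999f < 1f);    // True
        // ...but an 8th digit makes the literal round up to exactly 1f.
        Console.WriteLine(0.99999999f == 1f);  // True
        // A double with 8 digits is nowhere near its precision limit.
        Console.WriteLine(0.99999999d < 1d);   // True
    }
}
```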
For double, you may notice in that link above that my “SimplePCG” random number generator does this:
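Something along these lines (a sketch only; the PCG step is replaced with a stand-in generator here, since the division is the part that matters):

```csharp
using System;

// Sketch with a stand-in generator; SimplePCG's real GetNext() is a PCG
// step, not System.Random.
public class UintToDoubleSketch
{
    private readonly Random _rng = new Random();

    // Stand-in for GetNext(): hands back a raw 32-bit value.
    public uint GetNext()
    {
        var bytes = new byte[4];
        _rng.NextBytes(bytes);
        return BitConverter.ToUInt32(bytes, 0);
    }

    // Divide by 0x100000000 (uint.MaxValue + 1, i.e. 2^32) so even
    // uint.MaxValue lands just under 1.0 -- the result is in [0, 1).
    public double NextDouble()
    {
        return GetNext() / (double)0x100000000u;
    }
}
```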
That 0x100000000u is basically uint.MaxValue + 1. Since ‘GetNext’ returns a uint, I’m basically saying: give me a number where all 32 of those bits represent a fractional value between 0->1, exclusive of 1.
Of course… looking at this now, for my ‘float’ version I just cast that to a float… and huh… I never realized this but that’s BAD. I shouldn’t have done that. Casting that to a float can round values near the top of the range up to 1f (just like the 0.99999999f above). I wrote this a decade ago though… so, not surprised I messed that up. I’m going to go fix that.
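One common way around that (sketched here; not necessarily the exact fix that will land in the linked code) is to build the float from 24 of the bits instead of casting the double:

```csharp
public static class RandomFloatSketch
{
    // Keep the top 24 bits and scale by 2^-24. A float significand holds
    // 24 bits, so the math is exact and never rounds up to 1f; the largest
    // possible result is (2^24 - 1) / 2^24 = 0.99999994f.
    public static float ToFloat01(uint raw)
    {
        return (raw >> 8) * (1.0f / (1 << 24));
    }
}
```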
…
Regardless, I wouldn’t be too concerned honestly. Just set it to 0.99999… with however many significant digits you want, short of the maximum the type supports.
I mean, a good solution is the one that works. You only have so much precision in a float, so ‘just less than 1f’ is not unreasonable to hard-code.
Probably best to have a const value somewhere to make this easier to reuse as well.
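For example (the class and constant names are placeholders; the values just follow the advice above and stay short of each type’s limit):

```csharp
public static class RandomDefaults
{
    // 7 significant digits: the most 9s a float literal can carry
    // before it rounds up to 1f.
    public const float MaxValueBelowOneF = 0.9999999f;

    // Well within double's ~15-16 significant digits, so no rounding to 1d.
    public const double MaxValueBelowOneD = 0.99999999d;
}
```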
For the record, the largest possible values < 1 that float and double can technically store are:
0xFFFFFF = 24 digits of 1 in binary
0x1FFFFFFFFFFFFF = 53 digits of 1 in binary
Those are the significand widths of float and double respectively (24 and 53 bits, counting the implicit leading bit).
And the denominators are just those values + 1: 0x1000000 and 0x20000000000000.
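As a quick sanity check (standalone, not from the linked code), you can write those fractions out in decimal and compare them against 1:

```csharp
using System;

public static class LargestBelowOneCheck
{
    public static void Main()
    {
        // 16777215 / 16777216 = 0xFFFFFF / 0x1000000 = 1 - 2^-24
        const float largestFloatBelowOne = 16777215f / 16777216f;

        // 9007199254740991 / 9007199254740992 = 0x1FFFFFFFFFFFFF / 0x20000000000000 = 1 - 2^-53
        const double largestDoubleBelowOne = 9007199254740991d / 9007199254740992d;

        Console.WriteLine(largestFloatBelowOne < 1f);   // True -- one float ULP below 1
        Console.WriteLine(largestDoubleBelowOne < 1d);  // True -- one double ULP below 1
    }
}
```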