There are two mathematically defensible approaches here. The easy one samples from uniformly spaced floating-point numbers on the interval: mask a random int to the number of significand bits, bitwise-OR it with the bit pattern of 1.0, subtract 1.0. https://twitter.com/ArvidGerstmann/status/1036661069878620161
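(An illustrative C++ sketch of that trick, assuming a 64-bit source of random bits; the function name and the use of std::mt19937_64 are mine, not from the thread:)

    #include <cstdint>
    #include <cstring>
    #include <random>

    // Keep 52 random significand bits, OR in the sign/exponent bits of 1.0 to
    // get a double drawn uniformly from the equally spaced values in [1, 2),
    // then subtract 1.0 to land in [0, 1).
    double uniform_masked(std::mt19937_64& rng) {
        std::uint64_t bits = (rng() & 0x000FFFFFFFFFFFFFULL)  // low 52 bits: significand
                           | 0x3FF0000000000000ULL;           // bit pattern of 1.0
        double x;
        std::memcpy(&x, &bits, sizeof x);                     // reinterpret as a double in [1, 2)
        return x - 1.0;                                       // one of 2^52 values in [0, 1)
    }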
As you described it, this could produce a value which rounds to 1.0, which is obviously incorrect...
-
-
Obviously correct, since both Python and C++ do it =P This is actually a really interesting question. Ideally one rounds down when sampling a half-open interval, but rejection sampling also works.
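(A hedged sketch of the rejection route: redraw whenever the underlying sampler lands on 1.0. std::generate_canonical is only a stand-in here for whichever sampler is being discussed; the standard specifies results in [0, 1), but implementations have been reported to round up to 1.0, which is exactly the complaint above.)

    #include <random>

    // Rejection sampling for the half-open interval: if the sampler ever
    // returns exactly 1.0, throw that draw away and try again.
    double uniform_half_open(std::mt19937_64& rng) {
        for (;;) {
            double u = std::generate_canonical<double, 53>(rng);
            if (u < 1.0) return u;   // rejections are vanishingly rare
        }
    }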
-
Doesn't rounding down over-represent 0.0?
-
Round *down*, not towards zero.
-
(But also, you'll never produce 0 anyway, unless you're in Binary16, so. /shrug/)
-
Ah good point - the inputs you round are effectively ~1074 bits.
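(To make the "~1074 bits" remark concrete, here is an illustrative C++20 sketch, not code from the thread, of rounding an effectively arbitrary-precision uniform real in [0, 1) down to a double: leading zero bits of the random stream only lower the exponent, the 52 bits after the first 1 fill the significand, and everything below them is truncated. Hitting 0.0 would require more than a thousand zero bits in a row.)

    #include <bit>
    #include <cmath>
    #include <cstdint>
    #include <random>

    double uniform_round_down(std::mt19937_64& rng) {
        int exponent = -64;                      // current 64-bit word has weight 2^exponent
        std::uint64_t word;
        while ((word = rng()) == 0) {            // each all-zero word lowers the exponent by 64
            exponent -= 64;
            if (exponent <= -1138)               // everything left is below 2^-1074,
                return 0.0;                      // so the real number rounds down to 0
        }
        int shift = std::countl_zero(word);      // locate the leading 1 bit
        if (shift != 0) {
            word = (word << shift) | (rng() >> (64 - shift));  // pull in more stream bits
            exponent -= shift;
        }
        word &= ~std::uint64_t{0x7FF};           // truncate to 53 bits: round down, not to nearest
        // Caveat: for results deep in the subnormal range (probability far below
        // 2^-1000), ldexp itself rounds to nearest rather than down.
        return std::ldexp(static_cast<double>(word), exponent);
    }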
End of conversation
New conversation -
-
-
The mass of real numbers in [0,1] that are rounded to 0 is 2^-1074. The mass of real numbers in [0,1] that are rounded to 1 is 2^-53. Rejecting 0 discards a negligible fraction of real numbers, but 2^-53 -- while small -- is very much a nonnegligible probability.
-
Obviously, whether to reject 0 or 1 in your application depends on your application's needs. Want to compute log(p) or log(1 - p) for uniform p, sure, reject whichever is appropriate -- but consider using a different space anyway like log-odds for your whole computation!
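(For example, an illustrative helper assuming the downstream code takes log(p); the symmetric version for log(1 - p) would reject p == 1 instead. std::generate_canonical again stands in for whatever sampler is actually in use.)

    #include <random>

    double uniform_for_log(std::mt19937_64& rng) {
        double p;
        do {
            p = std::generate_canonical<double, 53>(rng);
        } while (p == 0.0);          // log(0) is -infinity; redrawing costs essentially nothing
        return p;
    }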
-
(My numbers are full of eels^Wfenceposts in the exponents, no doubt. Caveat lector! Twitter is not a peer-reviewed math journal and doesn't even have an ‘I just need to correct one digit!’ button.)
End of conversation