Languages and tooling can help here:
- we could warn that 0.1, 0.2, and 0.3 are not exactly representable, and report the value to which they have been rounded.
- we could default decimal literals to a decimal floating-point type instead of Double.
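To make the first point concrete, here is a minimal Swift sketch of the rounding such a warning would report: printing the literals with full precision shows the binary64 values they actually become.

```swift
import Foundation

// The decimal literals 0.1, 0.2, and 0.3 are each rounded to the
// nearest binary64 (Double) value; %.17g shows enough digits to
// distinguish the rounded results.
let sum = 0.1 + 0.2
print(sum == 0.3)                      // false
print(String(format: "%.17g", 0.1))   // 0.10000000000000001
print(String(format: "%.17g", sum))   // 0.30000000000000004
print(String(format: "%.17g", 0.3))   // 0.29999999999999999
```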
Replying to @stephentyrone @Biappi
The second one is the most interesting to me, since it's akin to what Swift did with strings. What you get in other languages when working with Unicode text feels a bit like these floating-point issues
What would be the consequences of using decimal floating point? Can it be optimized as well as binary floating point? It’s pretty rare to care about the exact decimal value if you’re using floating point correctly, isn’t it?
To your optimization question: yes, there is absolutely a performance tradeoff (roughly an order of magnitude for computations); the guidance really can't be "just always use double".
Replying to @stephentyrone @anandabits and
But: (a) safer defaults with simple access to faster alternatives is generally accepted as a reasonable language design choice (b) broader SW use of decimal FP is the lever that gets us to better and faster HW support for decimal FP.
Replying to @stephentyrone @anandabits and
I think it’s not obvious that decimal floats are a safer default. Going beyond 0.1+0.2==0.3, they’re still ultimately inexact, have even more weird representations than binary floats, and less ideal rounding error accumulation
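Both sides of this exchange can be seen with Foundation's Decimal (a decimal-significand type, though not an IEEE 754 decimal64): the headline 0.1 + 0.2 surprise disappears, but the type is still ultimately inexact — the error just moves to values like 1/3.

```swift
import Foundation

// With a decimal type, the headline example compares exactly…
let sum = Decimal(string: "0.1")! + Decimal(string: "0.2")!
print(sum == Decimal(string: "0.3")!)   // true

// …but 1/3 cannot be represented in any finite decimal, so the
// quotient is rounded and rounding error is still with us.
let third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))          // false
```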
Replying to @jckarter @anandabits and
All true! It's not clear. But, there are really two overlapping use cases for floating-point: (a) algorithms that are resilient to changes in the scaling of their inputs. (b) algorithms that have known scaling, but need to represent non-integral values.
Replying to @stephentyrone @jckarter and
One can argue that (b) would be better served by fixed-point types, which is almost true, except that if you're writing such code you usually end up calling some library functions, and *those* need to be resilient to scaling because they have multiple clients.
Replying to @stephentyrone @jckarter and
Interesting way to look at it. I’m basically in camp (b) in vector graphics rendering. There’s no way to eliminate error because the whole problem is discretizing continuous functions, so I might as well use FP for better library/HW support.
For graphics specifically, there's also the fact that floating point is just significantly faster on all current hardware, provided you can find enough parallel work to do.
Yep. Though it’s interesting that the rasterizer converts to fixed point internally. (Also common to pack vertex data into fixed point for compression—I do this.)
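A common form of that vertex-data packing is unsigned-normalized 16-bit fixed point; the sketch below uses illustrative helper names and assumes coordinates already normalized to [0, 1].

```swift
// Pack a unit-range coordinate into a 16-bit unsigned-normalized
// integer (what GPUs read back as a UNorm16 attribute), halving the
// storage of a Float while keeping ~4.8 decimal digits of precision.
func packUnorm16(_ x: Double) -> UInt16 {
    UInt16((min(max(x, 0), 1) * 65535).rounded())
}

func unpackUnorm16(_ q: UInt16) -> Double {
    Double(q) / 65535
}

let x = 0.72
let q = packUnorm16(x)
// The round trip is exact to within half a quantization step.
print(abs(unpackUnorm16(q) - x) <= 0.5 / 65535)   // true
```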