Floats tend to be overused by programmers. Almost every real-world quantity (time, money, sensor readings...) is actually quantized, and is better represented as an integer count of a fixed unit (milliseconds, cents, etc.). The only real use case for floats is scientific computing (physics, ML, simulations...)
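A minimal sketch of that approach in Go (the variable names and unit choices are illustrative, not from any particular library):

```go
package main

import "fmt"

func main() {
	// Money as an integer count of cents: exact, no rounding.
	priceCents := int64(1999) // $19.99
	qty := int64(3)
	totalCents := priceCents * qty
	fmt.Printf("total: $%d.%02d\n", totalCents/100, totalCents%100) // total: $59.97

	// Time as an integer count of milliseconds: same idea.
	elapsedMs := int64(1500)
	fmt.Printf("elapsed: %dms\n", elapsedMs)
}
```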
And importantly, floats can't exactly represent most base-10 values, which matters for quantities like money. For instance, the closest 32-bit float to 10e10 is actually smaller than 10e10. Even 0.7 stored as a float is actually slightly smaller than 0.7.
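This is easy to check; a small Go demonstration printing the values a float actually stores:

```go
package main

import "fmt"

func main() {
	// The closest 32-bit float to 10e10 is below it.
	f := float32(10e10)
	fmt.Printf("%.1f\n", float64(f)) // 99999997952.0, not 100000000000.0

	// 0.7 has no exact binary representation in either width;
	// the stored value is slightly below 0.7.
	fmt.Printf("%.20f\n", 0.7)                   // 0.69999999999999995559...
	fmt.Printf("%.20f\n", float64(float32(0.7))) // 0.69999998807907104492...
}
```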
This means floats introduce small numerical discrepancies that accumulate as you chain more operations. Only use floats when you want to run large computations on approximate numbers -- scientific computing, basically.
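For example, a quick Go sketch of how the error compounds when repeatedly adding 0.1, versus counting tenths as integers:

```go
package main

import "fmt"

func main() {
	// Adding 0.1 a thousand times compounds rounding error,
	// because 0.1 has no exact binary representation.
	sum := 0.0
	for i := 0; i < 1000; i++ {
		sum += 0.1
	}
	fmt.Println(sum == 100.0)  // false
	fmt.Printf("%.13f\n", sum) // slightly off from 100

	// Counting tenths as integers is exact.
	tenths := int64(0)
	for i := 0; i < 1000; i++ {
		tenths += 1
	}
	fmt.Println(tenths == 1000) // true
}
```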
New conversation
I'd argue this holds even for physics! I used int64 for units in the physics library I wrote for periph. One challenge is that it's less intuitive for users with less programming experience. One drawback is that on Intel x64, division is faster on floats than on int64 (!) https://periph.io/x/conn/v3/physic …
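A sketch of that technique: store a physical quantity as an int64 count of a tiny base unit (nanovolts here). This is in the spirit of the physic package but uses hypothetical type and constant names, not its actual API:

```go
package main

import "fmt"

// ElectricPotential stores a voltage as an int64 count of nanovolts.
// int64 nanovolts cover roughly +/-9.2 gigavolts, plenty for electronics.
type ElectricPotential int64

const (
	NanoVolt  ElectricPotential = 1
	MicroVolt                   = 1000 * NanoVolt
	MilliVolt                   = 1000 * MicroVolt
	Volt                        = 1000 * MilliVolt
)

// String prints the value in volts. Negative values would need extra
// handling; kept simple for the sketch.
func (p ElectricPotential) String() string {
	return fmt.Sprintf("%d.%09dV", int64(p)/int64(Volt), int64(p)%int64(Volt))
}

func main() {
	v := 3*Volt + 300*MilliVolt // 3.3V, exactly, no rounding
	fmt.Println(v)              // 3.300000000V
}
```

Integer arithmetic on these values stays exact, at the cost of the readability issue mentioned above: users must know the base unit to interpret raw values.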