Floats tend to be overused by programmers. Almost every real-world quantity (time, money, sensor readings...) is actually quantized and is better represented with integers (milliseconds, cents, etc.). The only real use case for floats is scientific computing (physics, ML, simulations...)
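A minimal sketch of the point above (Python; the amounts are hypothetical): binary floats can't represent most decimal quantities exactly, while integer cents stay exact.

# Money as float dollars vs. integer cents (illustrative values only)
subtotal = 0.10 + 0.20        # float dollars
print(subtotal)               # 0.30000000000000004 -- 0.1 and 0.2 have no exact binary representation
subtotal_cents = 10 + 20      # the same amounts kept as integer cents
print(subtotal_cents)         # 30 -- exact, no surprises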
-
Because floats can only approximate most real numbers, they lead to weird numerical discrepancies that accumulate as you add more operations. Only use floats if you want to run large computations with approximate numbers -- scientific computing, basically.
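A minimal sketch of that accumulation (Python, with an arbitrary million-step loop): each addition rounds a little, and the error grows with the number of operations.

total = 0.0
for _ in range(1_000_000):
    total += 0.1              # each addition carries a tiny rounding error
print(total)                  # close to, but not exactly, 100000.0
print(total == 100_000.0)     # False -- the per-step errors have accumulated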