Floats tend to be overused by programmers. Almost every real-world quantity (time, money, sensor readings...) is actually quantized and is better represented with integers (milliseconds, cents, etc.). The main real use case for floats is scientific computing (physics, ML, simulations...)
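A minimal sketch of that point in Java (the class name is illustrative): the same sum done in floating-point dollars drifts, while integer cents stay exact.

```java
public class CentsDemo {
    public static void main(String[] args) {
        // Float representation: 0.10 has no exact binary form,
        // so repeated addition drifts away from the intended value.
        double dollars = 0.0;
        for (int i = 0; i < 3; i++) dollars += 0.10;
        System.out.println(dollars);        // 0.30000000000000004

        // Integer representation: 10 cents is exact, so arithmetic is exact.
        long cents = 0;
        for (int i = 0; i < 3; i++) cents += 10;
        System.out.println(cents / 100.0);  // 0.3 (division is for display only)
    }
}
```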
-
This is how trading software works too, for tick sizes. People have lost a lot of money using floats for tick prices and writing conditions like if (x == y), only to have x come out as 9.999999 instead of 10.
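A hedged illustration of that failure mode (the names here are hypothetical, not from any real trading system): accumulating a 0.01 tick one step at a time and then comparing with ==.

```java
public class TickBugDemo {
    public static void main(String[] args) {
        // Add a 0.01 tick 1000 times; the exact double result is not 10.0.
        double price = 0.0;
        for (int i = 0; i < 1000; i++) price += 0.01;
        System.out.println(price);          // something like 9.99999999999983
        System.out.println(price == 10.0);  // false: the condition never fires

        // Counting integer ticks avoids the problem entirely.
        long ticks = 0;
        for (int i = 0; i < 1000; i++) ticks += 1;
        System.out.println(ticks == 1000);  // true
    }
}
```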
-
Hence the practice of comparing with a tolerance in some places, and of using BigDecimal in others.
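A short sketch of both practices, in Java since BigDecimal was mentioned; the epsilon value and method names are arbitrary choices for illustration, and real systems would pick a tolerance suited to their tick size.

```java
import java.math.BigDecimal;

public class ComparisonDemo {
    static final double EPS = 1e-9;  // illustrative tolerance, not a standard

    // Practice 1: compare with a tolerance instead of ==.
    static boolean approxEquals(double a, double b) {
        return Math.abs(a - b) < EPS;
    }

    public static void main(String[] args) {
        double price = 0.0;
        for (int i = 0; i < 1000; i++) price += 0.01;
        System.out.println(approxEquals(price, 10.0));  // true

        // Practice 2: BigDecimal built from strings keeps decimal values
        // exact, so arithmetic and comparison behave as expected.
        BigDecimal tick = new BigDecimal("0.01");
        BigDecimal total = tick.multiply(new BigDecimal(1000));
        // compareTo ignores scale (10.00 vs 10); equals() would not.
        System.out.println(total.compareTo(new BigDecimal("10")) == 0);  // true
    }
}
```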