Apple chose 64 bits because it's necessary to get correct results. If you don't care about correctness you can always go faster.
Replying to @jckarter
CG worked fine with 32 bits. Every GPU is 32 bits. Show me some concrete examples where 64 bits is necessary.
Replying to @mindbrix
CG worked fine with 32 bits…until retina displays and 20+ inch monitors became standard. GPUs handle doubles just fine these days.
Replying to @jckarter
Why do you need doubles for retina displays? Please show me the math.
Replying to @mindbrix
Empirically speaking, even before retina displays, there were instances of text glyphs rendering differently on the far side of the screen, large scroll views behaving oddly, and things like that.
Replying to @mindbrix
I'm no good at math, maybe @stephentyrone can help with that. But the nice thing about doubles is that you're much less likely to need to be good at math to use them.
It's not rocket science to get at, to the point that this mostly feels like a bad-faith request. You lose a few bits to a poorly implemented primitive, 12 bits of coordinates to the screen size; then you have a view whose coordinates were poorly chosen or that the API doesn't allow the engine to renormalize nicely; then you repeat a translation a few times, eating some rounding error each time, and you have no bits left. You only had 24 of them to start with, so this doesn't take long.
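The bit budget described above can be sketched numerically. The snippet below is an illustration, not CoreGraphics code: float32 is simulated by round-tripping Python's double through `struct`, and the coordinate 5120 (roughly the width of a 5K display) is an assumed example value. It shows the float32 spacing left near the screen edge, and how repeating a translation eats it.

```python
import math
import struct

def f32(x):
    # Round a Python double to the nearest IEEE 754 float32 and back.
    return struct.unpack('f', struct.pack('f', x))[0]

def float32_ulp(x):
    # Spacing between adjacent float32 values near a normal value x.
    _, e = math.frexp(x)
    return 2.0 ** (e - 24)

# Near the right edge of a ~5K-pixel-wide coordinate space, float32
# has only about 11 bits of subpixel position left:
x = 5120.0
print(float32_ulp(x))   # 0.00048828125, i.e. steps of ~1/2048 of a unit
print(math.ulp(x))      # ~9.1e-13 for a double at the same coordinate

# Repeating a translation eats that headroom quickly: translate by
# 0.1 a thousand times and the float32 coordinate drifts visibly.
x32, x64 = x, x
for _ in range(1000):
    x32 = f32(x32 + 0.1)
    x64 = x64 + 0.1
print(x32 - 5220.0)     # drifts by roughly a tenth of a unit
print(x64 - 5220.0)     # error stays negligible (well under 1e-8)
```

The float32 result is off by about 0.098 after only a thousand repeated translations; the double accumulates error far below anything a renderer could ever show.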
Yes, it's possible to design a new API from scratch to avoid this, if you're very careful and you do it just so, and all of your users also use it just so. Or you can use double and no one ever needs to worry about it.
I’m using 32-bit floats everywhere in my vector graphics/font API and being careful about it…this may end up coming back to bite me but we’ll see :)
Replying to @pcwalton @stephentyrone
32-bit bites you in games, once you are far from the origin. Also, 32-bit is faster than 64-bit on the FPU, last time I measured with Mono and LLVM. Some numbers and a cute addendum with a real-life problem: https://tirania.org/blog/archive/2018/Apr-11.html
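The far-from-origin problem can be made concrete with a small sketch (an illustration of float32 spacing, not code from the linked post; the 1 unit = 1 meter scale is an assumption for the example):

```python
import math

def float32_ulp(x):
    # Spacing between adjacent IEEE 754 float32 values near a normal x.
    _, e = math.frexp(x)
    return 2.0 ** (e - 24)

# With 1 world unit = 1 meter, float32 positions snap to coarser and
# coarser grids as you move away from the origin:
for dist in (1.0, 1_000.0, 100_000.0, 10_000_000.0):
    print(f"{dist:>12,.0f} m: smallest step {float32_ulp(dist)} m")
```

One meter from the origin the grid is sub-micrometer; 100 km out, positions can only move in ~7.8 mm steps; at 10,000 km they snap to whole meters, which is why large game worlds either recenter the origin around the camera or use doubles.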
Replying to @migueldeicaza @pcwalton
I think the main thing isn't that doubles are exactly as fast as single-precision floats, but that most of the things that made them orders of magnitude slower (32-bit CPU buses, no hardware support, etc.) have mostly been addressed, so they're a safe default.