One thing that makes this worse is that type systems bolted on top of dynamic languages tend to be unsound; mypy, for example, is.
Replying to @whitequark
Yes. And generics systems in statically typed languages tend to be designed around the assumption that there's a JIT.
Replying to @mcclure111
You can go a long way with inline caches; see ObjC.
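For intuition, here is a minimal sketch of a monomorphic inline cache at a dynamic call site, written in plain C++ standing in for a dynamic runtime; the Klass/Object/lookup names are invented for illustration and are not any real runtime's API:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    struct Object;
    using Method = void (*)(Object*);

    // Toy "dynamic" object model: every object carries a class pointer,
    // and methods are found by a slow, name-based lookup on that class.
    struct Klass {
        std::unordered_map<std::string, Method> methods;
    };
    struct Object {
        Klass* klass;
    };

    // Slow path: full dictionary lookup (stand-in for an uncached
    // method-list walk in a dynamic runtime).
    Method lookup(Klass* k, const std::string& name) {
        return k->methods.at(name);
    }

    // One monomorphic inline cache per call site: remember the last
    // receiver class and the method it resolved to, and skip the lookup
    // when the next receiver has the same class.
    struct InlineCache {
        Klass* cached_klass = nullptr;
        Method cached_method = nullptr;
    };

    void send(Object* receiver, const std::string& name, InlineCache& ic) {
        if (receiver->klass == ic.cached_klass) {
            ic.cached_method(receiver);            // fast path: cache hit
            return;
        }
        Method m = lookup(receiver->klass, name);  // slow path: cache miss
        ic.cached_klass = receiver->klass;         // refill the cache
        ic.cached_method = m;
        m(receiver);
    }

    void draw_circle(Object*) { std::puts("circle"); }

    int main() {
        Klass circle;
        circle.methods["draw"] = draw_circle;
        Object c{&circle};
        InlineCache ic;               // one cache per static call site
        for (int i = 0; i < 3; ++i)
            send(&c, "draw", ic);     // first call misses, the rest hit
    }

The trade-off discussed below is visible in the structure: the fast path is just a pointer compare plus an indirect call, but the cache itself costs code size and memory at every call site.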
Replying to @whitequark @mcclure111
Hm, the approaches I know of to add inline caches or method caches to ObjC all ended up getting rolled back because they didn't speed things up enough to justify the code-size and memory cost. ObjC is weird, though, because when you care about perf you just…don't use objects.
I think the biggest benefit isn't so much ICs as speculative devirtualization leading to opportunistic inlining, which is really tough without a heat profile of the code (PGO or JIT).
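Roughly, speculative devirtualization is the transform sketched below, done by a compiler on its IR using profile data (or by a JIT at runtime) rather than in source. The Shape/Circle types are invented for illustration, and a real compiler would typically guard on the vtable pointer rather than using dynamic_cast:

    #include <cstdio>

    constexpr double kPi = 3.141592653589793;

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return kPi * r * r; }
    };

    // What the source says: an opaque virtual call that the optimizer
    // cannot see through without knowing the likely dynamic type.
    double total_area(Shape* const* shapes, int n) {
        double sum = 0;
        for (int i = 0; i < n; ++i)
            sum += shapes[i]->area();    // indirect call every iteration
        return sum;
    }

    // What speculative devirtualization (driven by a heat profile) turns
    // it into: guard on the hot type, inline the body behind the guard,
    // and keep the virtual call only as the cold fallback.
    double total_area_devirt(Shape* const* shapes, int n) {
        double sum = 0;
        for (int i = 0; i < n; ++i) {
            if (const Circle* c = dynamic_cast<const Circle*>(shapes[i])) {
                sum += kPi * c->r * c->r;   // inlined Circle::area()
            } else {
                sum += shapes[i]->area();   // fallback for other types
            }
        }
        return sum;
    }

    int main() {
        Circle a(1.0), b(2.0);
        Shape* shapes[] = {&a, &b};
        std::printf("%f %f\n", total_area(shapes, 2),
                    total_area_devirt(shapes, 2));
    }

Without a heat profile the compiler has no idea which type to guess, which is why this is hard to do ahead of time: a wrong guess just adds a guard that always fails.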
Yeah, that's pretty much exactly the problem with ObjC. PGO is of course hard to do well, and people generally hand-optimize the dispatch out of known hot paths anyway.
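A sketch of what "manually optimizing out the dispatch" looks like, reusing the toy dynamic-dispatch model from the inline-cache sketch above (all names invented). The ObjC analogue is caching the IMP returned by -methodForSelector: before a hot loop and calling it directly inside the loop:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    // Same toy dynamic-dispatch model as before: one lookup by name per send.
    struct Object;
    using Method = void (*)(Object*);
    struct Klass { std::unordered_map<std::string, Method> methods; };
    struct Object { Klass* klass; };

    Method lookup(Object* o, const std::string& name) {
        return o->klass->methods.at(name);
    }

    void step(Object*) { std::puts("step"); }

    // Naive hot loop: one dynamic lookup (one "msgSend") per iteration.
    void run_naive(Object* o, int n) {
        for (int i = 0; i < n; ++i)
            lookup(o, "step")(o);
    }

    // Hand-optimized version: hoist the lookup out of the loop and call
    // through the cached target directly.
    void run_hoisted(Object* o, int n) {
        Method step_fn = lookup(o, "step");   // one lookup, done up front
        for (int i = 0; i < n; ++i)
            step_fn(o);                       // plain indirect call
    }

    int main() {
        Klass k;
        k.methods["step"] = step;
        Object o{&k};
        run_naive(&o, 2);
        run_hoisted(&o, 2);
    }

This only helps in paths you already know are hot, which is exactly the information PGO or a JIT would otherwise have to supply.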
Like, a modern CPU's BTB will mostly do a good job with virtual-call overhead. (objc_msgSend is weird, but I'm guessing it's keyed off {pc, lr} on Apple's chips or something to deal with that.) But inlining obviously opens up arbitrarily many optimizations.
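To make the inlining point concrete: a well-predicted indirect call is cheap, but it is still opaque to the optimizer. Once the callee's body is visible at the call site, whole-loop optimizations open up. An invented example, assuming a typical optimizing compiler:

    #include <cstddef>
    #include <cstdio>

    struct Sample {
        virtual ~Sample() = default;
        virtual double scaled(double factor) const = 0;
    };

    struct Plain : Sample {
        double value;
        explicit Plain(double v) : value(v) {}
        // Trivial body, but hidden behind a virtual call.
        double scaled(double factor) const override { return value * factor; }
    };

    // With an opaque virtual call, the compiler must keep one indirect call
    // per element; it cannot vectorize the loop or even prove that `scaled`
    // has no side effects.
    double sum_virtual(const Sample* const* xs, std::size_t n, double f) {
        double sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            sum += xs[i]->scaled(f);
        return sum;
    }

    // Once devirtualized and inlined (done by hand here, on the concrete
    // type), the body dissolves into the loop: no calls at all, and the
    // multiply-accumulate loop is a routine auto-vectorization target.
    double sum_inlined(const Plain* xs, std::size_t n, double f) {
        double sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            sum += xs[i].value * f;
        return sum;
    }

    int main() {
        Plain a(1.5), b(2.5);
        const Sample* virt[] = {&a, &b};
        const Plain concrete[] = {Plain(1.5), Plain(2.5)};
        std::printf("%f %f\n", sum_virtual(virt, 2, 2.0),
                    sum_inlined(concrete, 2, 2.0));
    }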
Even Intel's branch predictors do a fine job with objc_msgSend these days. But since msgSends tend to fall along what would have been ABI boundaries across dylibs, the opportunity for IPO often isn't there anyway.
I trust you, but I don't understand how that works on Intel unless you're calling the same method over and over… the RIP is always the same, so it should always miss?
Intel’s contemporary predictors also use branch history AIUI.
@mattgodbolt suggests otherwise for the BTB: https://xania.org/201602/bpu-part-three
Unless it’s changed since 2016.