Approach 1: What I do currently is have dyntyped code in Lua and statically typed code in C. There is a boundary. It is *fairly* expensive. It is not horrifically expensive. One thing that makes it less expensive than it could be is that the boundary is always visible to the coder.
Replying to @mcclure111 @whitequark
So like, you never "accidentally" cross the boundary. You never pass some complex data structure that has to be unpacked w/o realizing you've done so. Crossing the boundary is "difficult" (in the sense of: slightly annoying, because it has bad ergonomics), so you avoid doing it.
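For concreteness, here is a minimal sketch of what that Lua↔C crossing looks like with the standard Lua C API (the `distance` function is hypothetical): every argument has to be explicitly checked and unpacked from the Lua stack, and every result pushed back onto it, so you can't cross the boundary without noticing.

```c
#include <math.h>
#include <lua.h>
#include <lauxlib.h>

/* A statically typed C function exposed to dynamically typed Lua code.
 * Nothing crosses the boundary "accidentally": each value is checked
 * and unpacked from the Lua stack by hand. */
static int l_distance(lua_State *L) {
    double x1 = luaL_checknumber(L, 1);
    double y1 = luaL_checknumber(L, 2);
    double x2 = luaL_checknumber(L, 3);
    double y2 = luaL_checknumber(L, 4);

    double dx = x2 - x1, dy = y2 - y1;

    lua_pushnumber(L, sqrt(dx * dx + dy * dy));  /* pack the result back */
    return 1;  /* number of return values */
}

/* After registration, Lua code can call distance(x1, y1, x2, y2). */
void register_distance(lua_State *L) {
    lua_register(L, "distance", l_distance);
}
```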
Replying to @mcclure111 @whitequark
You could have a situation where the languages are unified, but the boundary is still legible to the coder, in a way that encourages crossing it only for a good reason or in an efficient way.
Replying to @mcclure111 @whitequark
Approach 2: Something I wound up doing in the unfinished Emily2 impl was that you could only run the compiler if your *entire program* was typed. If there's any dynamic code, you have to run the interpreter.
Replying to @mcclure111 @whitequark
This was to make the initial implementation simpler, but I was *considering* making it just like… a rule. It's not a terrible rule. One of the main times I want dynamic typing is when I'm prototyping anyhow.
Replying to @mcclure111
this is actually something Rust specifically tried to avoid with GC, that is, not having the ecosystem split into GC-world and non-GC-world, and arguably something D failed at
Replying to @whitequark @mcclure111
one thing that makes this worse is that type systems bolted on top of dynlangs tend to be unsound, e.g. mypy is
Replying to @whitequark
Yes. And generics systems in statically typed languages tend to be designed around the assumption that there's a JIT.
Replying to @mcclure111
you can go a long way with inline caches, see ObjC
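Roughly what an inline cache buys you at a call site, as a toy monomorphic cache in C (the Object/Class model and lookup_method are invented for illustration; ObjC's real runtime keeps a per-class method cache inside objc_msgSend rather than a per-call-site cache like this):

```c
#include <stddef.h>

/* Toy object model: every object carries a class pointer, and
 * lookup_method() is the slow, general dispatch path. */
typedef struct Class Class;
typedef struct Object { const Class *isa; } Object;
typedef double (*Method)(Object *self);

struct Class {
    const char *name;
    Method value_method;   /* toy class: a single method slot */
};

static Method lookup_method(const Class *cls, const char *selector) {
    (void)selector;        /* toy lookup: always resolves to "value" */
    return cls->value_method;
}

/* Monomorphic inline cache: the call site remembers the last class it
 * dispatched on and the method it resolved to.  While the receiver's
 * class keeps matching, dispatch is one compare plus an indirect call. */
double call_value_cached(Object *obj) {
    static const Class *cached_cls = NULL;   /* cache key   */
    static Method cached_method = NULL;      /* cache value */

    if (obj->isa != cached_cls) {            /* miss: refill the cache */
        cached_cls = obj->isa;
        cached_method = lookup_method(obj->isa, "value");
    }
    return cached_method(obj);               /* fast path */
}
```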
Replying to @whitequark @mcclure111
Hm, the approaches I know of to add inline caches or method caches to ObjC all ended up getting rolled back because they didn't speed things up enough to justify the code size and memory cost. ObjC is weird though because when you care about perf you just… don't use objects.
I think the biggest benefit isn’t so much ICs but speculative devirtualization leading to opportunistic inlining, which is really tough without a heat profile of the code (PGO or JIT)
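A hand-written C sketch of that transform (the Shape type and the assumption that rect_area is the hot target are made up; with PGO or a JIT, the compiler inserts the guard itself, using the heat profile to pick the target):

```c
#include <stddef.h>

/* Hypothetical object with a virtual-ish area() method,
 * dispatched through a function pointer. */
typedef struct Shape Shape;
struct Shape {
    double (*area)(const Shape *self);
    double w, h;
};

static double rect_area(const Shape *s) { return s->w * s->h; }

/* Plain dynamic dispatch: an opaque indirect call the compiler
 * cannot inline through. */
double total_area_dynamic(const Shape **shapes, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += shapes[i]->area(shapes[i]);
    return sum;
}

/* Speculative devirtualization: a profile (PGO or a JIT's counters)
 * says rect_area is the overwhelmingly common target, so guard on it
 * and call it directly.  The direct call can be inlined; everything
 * else falls back to the indirect call. */
double total_area_speculative(const Shape **shapes, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        const Shape *s = shapes[i];
        if (s->area == rect_area)      /* cheap, well-predicted check */
            sum += rect_area(s);       /* direct call -> inlinable    */
        else
            sum += s->area(s);         /* cold fallback path          */
    }
    return sum;
}
```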
Yeah that’s pretty much exactly the problem with ObjC. PGO is of course hard to do well, and people generally manually optimize out the dispatch in known hot paths anyway
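That manual optimization typically means hoisting the method lookup out of the hot loop and calling the IMP directly; a rough sketch against the ObjC runtime's C API (the `value` selector and the assumption that every element shares one class are just for illustration):

```c
#include <stddef.h>
#include <objc/runtime.h>

/* Optimize objc_msgSend out of a hot loop by resolving the selector to
 * its IMP (the raw C function) once, then calling it directly. */
double sum_values(id *items, size_t count) {
    if (count == 0) return 0.0;

    SEL sel = sel_registerName("value");
    typedef double (*ValueFn)(id, SEL);
    /* One lookup, assuming all items share items[0]'s class. */
    ValueFn value = (ValueFn)class_getMethodImplementation(
        object_getClass(items[0]), sel);

    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += value(items[i], sel);   /* direct call, no dynamic dispatch */
    return sum;
}
```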
Like, a modern CPU's BTB will mostly do a good job with virtual call overhead. (objc_msgSend is weird but I'm guessing it's keyed off {pc,lr} on Apple's chips or something to deal with that.) But inlining, that obviously opens up arbitrarily many optimizations.