Conversation

If you're willing to talk about compilation time or other not-just-runtime-microbenchmark costs in your cost model (hello ABI stability / version-stable separate compilation! hello user cognitive load / predictability!) the picture gets _much_ murkier.
Yeah, it's an interesting landscape. I bring it up because 'zero cost' features often sacrifice exactly those other costs. I'm just interested in having a healthier, more honest design discussion.
Even if the _runtime_ cost model is just a bit more complex -- e.g. if you start wondering whether all the specialized copies or unrolling/inlining are hurting the cache -- you lose sight of clear answers. Speculative devirt, polymorphic inline caches, trace JITs: ??? heuristic-ville
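To make the speculative-devirt point concrete, here's a hand-written sketch of what such a pass effectively emits (the class names and the `area_specialized` function are made up for illustration, not from the thread): a guard that checks the baked-in assumption, an inlined fast path, and a generic fallback for when the assumption turns out to be wrong.

```cpp
// Hypothetical class hierarchy, purely for illustration.
struct Shape {
    virtual ~Shape() = default;
    virtual int area() const = 0;
};
struct Square : Shape {
    int s;
    explicit Square(int s) : s(s) {}
    int area() const override { return s * s; }
};
struct Rect : Shape {
    int w, h;
    Rect(int w, int h) : w(w), h(h) {}
    int area() const override { return w * h; }
};

// What a speculative-devirtualization pass might produce, written by hand:
// the specializer bakes in the assumption "the receiver is almost always a
// Square", and pays for a guard plus a slow path to stay correct when the
// assumption fails.
int area_specialized(const Shape& sh) {
    if (const auto* sq = dynamic_cast<const Square*>(&sh)) {
        return sq->s * sq->s;  // assumption holds: no virtual dispatch
    }
    return sh.area();          // assumption wrong: generic virtual call
}
```

The guard itself is one of the costs the tweet is pointing at: whether the specialized path wins depends on how often the assumption holds, which is exactly why these decisions live in heuristic-ville.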
It's been on my mind a lot lately when thinking about where _not_ to spend optimization energy in new languages. I think the staging/specialization perspective is helpful: consider your compiler as deciding not just which abstractions to look through, but which values to specialize on.
It's tempting to think a compiler is only looking through abstractions; but in many cases it's also baking in (possibly into a partial replica) the assumptions it made while looking through, and those carry costs both for the replicas themselves and for handling the cases where the assumptions are wrong.
(Absurd limit case: specializing programs on every possible input. But less absurd: every one of N scalars used as array sizes? Every one of X*Y caller/callee combinations?)
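The "N scalars used as array sizes" case maps directly onto a familiar C++ idiom (a hand-rolled sketch, not anything from the thread): promoting an array size to a template parameter is exactly "specializing on a value" -- each distinct size used anywhere in the program gets its own replica of the code.

```cpp
#include <array>
#include <cstddef>

// Specialized on the value N: the compiler emits one copy of this loop per
// distinct array size it encounters. Each copy can be unrolled and
// constant-folded, but each is a "partial replica" paid for in code size
// (and instruction cache).
template <std::size_t N>
long sum_fixed(const std::array<long, N>& xs) {
    long total = 0;
    for (std::size_t i = 0; i < N; ++i) total += xs[i];  // N is a compile-time constant
    return total;
}

// The unspecialized alternative: one copy for all callers, with the size as
// an ordinary runtime value.
long sum_dynamic(const long* xs, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += xs[i];
    return total;
}
```

Calling `sum_fixed` with a `std::array<long, 4>` and a `std::array<long, 8>` instantiates two separate functions, while every caller of `sum_dynamic` shares one; which trade wins is the question the thread is circling.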
(Tweet deleted by its author.)
Dude, your enthusiasm for PLT stuff is no small part of what’s been keeping my gears turning lately, pretty sure we all welcome a little excitement (and a little nudging to actually do the thing, hah)
Thanks, that means a lot actually. It’s what I try to do at least. It feels like a bunch of this stuff is on the cusp of combining together into something really cool, and I want to see it happen!