There’s a subtlety, I think, between “would write” (willingly) and “could write” (conceivably, but it’s tedious and error-prone)—it’s more the latter that I care about: automatic rearrangement of the higher-level code I want to read & write into the lower-level code I want to run
Some interesting barometers (a quick sketch follows this list):
- would I be willing to lean on this abstraction in a tight inner loop?
- does the optimization this abstraction leans on break easily/silently?
- how does it impact compilation time?
- how much damage does it do to the UX?
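To make the "rearrangement" concrete, here is a minimal sketch (Rust assumed as the example language; the function names are made up): the abstraction-heavy version I want to read & write, and roughly the loop-and-accumulator version I want to run and would need to trust it lowers to before leaning on it in a tight inner loop.

```rust
// Hypothetical example: the "higher-level code I want to read & write".
fn dot_high_level(xs: &[f32], ys: &[f32]) -> f32 {
    xs.iter().zip(ys).map(|(x, y)| x * y).sum()
}

// ...and roughly the "lower-level code I want to run" that the optimizer is
// being trusted to rearrange it into: indices, one accumulator, no closures.
fn dot_low_level(xs: &[f32], ys: &[f32]) -> f32 {
    let n = xs.len().min(ys.len());
    let mut acc = 0.0;
    for i in 0..n {
        acc += xs[i] * ys[i];
    }
    acc
}

fn main() {
    let (xs, ys) = (vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0]);
    assert_eq!(dot_high_level(&xs, &ys), dot_low_level(&xs, &ys)); // both 32.0
}
```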
If you're willing to talk about compilation time or other not-just-runtime-microbenchmark costs in your cost model (hello ABI stability / version-stable separate compilation! hello user cognitive load / predictability!) the picture gets _much_ murkier.
Yeah, it's an interesting landscape. I bring it up because 'zero cost' stuff often sacrifices exactly those things. I'm just interested in having a healthier, more honest design discussion.
Even if the _runtime_ cost model is just a bit more complex -- e.g. if you start wondering whether all the specialized copies or unrolling/inlining are hurting the cache -- you lose sight of clear answers. Speculative devirt, polymorphic inline caches, trace JITs: ??? heuristic-ville
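A small Rust-flavored sketch (not from the thread) of where that murkiness comes from: the generic version gets a specialized, inlinable copy per concrete element type (fast calls, more machine code, more i-cache pressure), while the `dyn` version shares one body but makes indirect calls, which is exactly the territory where speculative devirt and inline caches start guessing.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// Monomorphized: one specialized copy per concrete T. Calls can inline,
// but every instantiation is more code.
fn total_area_static<T: Shape>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// One shared body, calls go through the vtable. Smaller, but now the
// per-call cost depends on how well the compiler/runtime can guess the target.
fn total_area_dyn(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let circles = vec![Circle { r: 1.0 }, Circle { r: 2.0 }];
    let mixed: Vec<Box<dyn Shape>> =
        vec![Box::new(Circle { r: 1.0 }) as Box<dyn Shape>, Box::new(Square { s: 2.0 })];
    println!("{} {}", total_area_static(&circles), total_area_dyn(&mixed));
}
```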
Hey, I just want to say, I really appreciate these responses. Lots of really good food for thought. Thanks! 🥰
It's been on my mind a lot lately when thinking of where to _not_ spend optimization energy in new languages. I think the perspective of staging / specializing is helpful. Like consider your compiler as deciding not just "abstractions to look through" but values to specialize-on.
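A hand-rolled miniature of "values to specialize on" (Rust assumed; names hypothetical): the exponent is lifted to a compile-time parameter, so every distinct N the compiler sees gets its own body it can fully unroll, versus one shared body when the value is only known at run time.

```rust
// Specialized on the exponent: each distinct N yields a separate copy that
// the compiler is free to unroll into straight-line multiplications.
fn pow_spec<const N: u32>(x: f64) -> f64 {
    let mut acc = 1.0;
    for _ in 0..N {
        acc *= x;
    }
    acc
}

// Unspecialized counterpart: one body, exponent known only at run time.
fn pow_dyn(x: f64, n: u32) -> f64 {
    let mut acc = 1.0;
    for _ in 0..n {
        acc *= x;
    }
    acc
}

fn main() {
    assert_eq!(pow_spec::<3>(2.0), pow_dyn(2.0, 3)); // both 8.0
}
```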
It's tempting to think that a compiler is only looking-through abstractions; but in many cases it's also baking-in (possibly to a partial replica) assumptions made when looking-through, and those carry costs both for the replicas and for handling cases when assumptions are wrong.
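A sketch of that baking-in written out by hand (Rust assumed; a stand-in for what speculative devirtualization does, not anyone's actual implementation): the assumption "the receiver is usually `Fast`" becomes a guarded, inlined replica of the fast path, plus a slower branch that pays for the cases where the assumption is wrong.

```rust
use std::any::Any;

trait Draw {
    fn draw(&self) -> u32;
    fn as_any(&self) -> &dyn Any;
}

struct Fast;
impl Draw for Fast {
    fn draw(&self) -> u32 { 1 }
    fn as_any(&self) -> &dyn Any { self }
}

struct Slow;
impl Draw for Slow {
    fn draw(&self) -> u32 { 2 }
    fn as_any(&self) -> &dyn Any { self }
}

// The baked-in assumption: this call site usually sees `Fast`. The guard is
// the downcast; the "replica" is the inlined body of Fast::draw; the else
// branch handles the cases where the assumption turns out to be wrong.
fn draw_speculative(obj: &dyn Draw) -> u32 {
    if obj.as_any().downcast_ref::<Fast>().is_some() {
        1 // specialized copy of Fast::draw, inlined at this call site
    } else {
        obj.draw() // guard failed: fall back to the ordinary virtual call
    }
}

fn main() {
    assert_eq!(draw_speculative(&Fast), 1);
    assert_eq!(draw_speculative(&Slow), 2);
}
```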
(Absurd limit case: specializing programs on every possible input. But less absurd: every one of N scalars used as array sizes? Every one of X*Y caller/callee combinations?)
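And a tiny version of the "every scalar used as an array size" case (Rust const generics as the stand-in, names hypothetical): each distinct length at a call site forces its own monomorphized copy, so specializing on N values means N compiled bodies.

```rust
// Specializing on the array length: every distinct N used below produces a
// separate monomorphized copy of `sum_fixed`.
fn sum_fixed<const N: usize>(xs: [f64; N]) -> f64 {
    xs.iter().sum()
}

fn main() {
    // Three call sites, three lengths, three compiled bodies.
    println!("{}", sum_fixed([1.0]));
    println!("{}", sum_fixed([1.0, 2.0]));
    println!("{}", sum_fixed([1.0, 2.0, 3.0]));
}
```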
This Tweet was deleted by the Tweet author.
This is why I’m following the work of , , and with interest! Not sure if it will fulfill my needs though. 🤔
Staging à la Pfenning and Davies in the Granule pipeline atm.
Such excitement. Sorry I know I’m like a broken record at this point!