Conversation

Yeah I’d take you up on that (if not for the big ocean in the way)! There’s definitely a cost to zero cost. I’m also interested to see if we can define what people mean by ‘zero cost’ more precisely. There were some interesting ideas on the Pikelet Gitter, actually.
I don't think it's a very meaningful term tbh. It's close to a definition, but if you only allow translation that's "as good as if by hand" you get into a weird place: by hand, an author would write different code, not the numerous as-if-by-hand instances the compiler emits.
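A minimal Rust sketch of that last point (the function and values are hypothetical, purely illustrative): one generic definition in the source becomes several monomorphized, as-if-by-hand copies in the compiled output that nobody would have written by hand.

```rust
use std::ops::Add;

// The code the author writes once: a single generic definition.
fn sum3<T: Add<Output = T>>(a: T, b: T, c: T) -> T {
    a + b + c
}

fn main() {
    // Each distinct instantiation typically becomes its own specialized copy
    // in the compiled output (sum3::<i32>, sum3::<f64>, sum3::<u8>), even
    // though an author writing "by hand" would be unlikely to duplicate it.
    println!("{}", sum3(1, 2, 3));       // sum3::<i32>
    println!("{}", sum3(1.0, 2.0, 3.0)); // sum3::<f64>
    println!("{}", sum3(1u8, 2, 3));     // sum3::<u8>
}
```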
I agree it’s not per se terribly meaningful, but I need a similar concept as a touchstone for thinking about my language design—what are the criteria by which I judge perf-sensitive features?—so I need a working definition that does mean something in the context of that design
There’s a subtlety, I think, between “would write” (willingly) and “could write” (conceivably, but it’s tedious and error-prone)—it’s more the latter that I care about: automatic rearrangement of the higher-level code I want to read & write into the lower-level code I want to run
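One way to picture that "could write" gap, as a hedged Rust sketch (the dot-product example is assumed, not from the conversation): the first function is the higher-level code one wants to read and write, the second is roughly the lower-level code one could write by hand and wants the compiler to produce.

```rust
// The higher-level code I want to read & write.
fn dot_high(xs: &[f64], ys: &[f64]) -> f64 {
    xs.iter().zip(ys).map(|(x, y)| x * y).sum()
}

// Roughly the lower-level code I *could* write by hand (tedious, easy to get
// the bounds handling wrong) and want the compiler to rearrange the above into.
fn dot_low(xs: &[f64], ys: &[f64]) -> f64 {
    let n = xs.len().min(ys.len());
    let mut acc = 0.0;
    let mut i = 0;
    while i < n {
        acc += xs[i] * ys[i];
        i += 1;
    }
    acc
}
```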
Some interesting barometers:
- would I be willing to lean on this abstraction in tight inner loops?
- does the optimization that this abstraction leans on break easily/silently? (sketch below)
- how does it impact compilation time?
- how much damage does it do to the UX?
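To make that second barometer concrete, a small Rust sketch (an assumed example, not from the thread): the same loop-over-a-transform abstraction spelled two ways, where one spelling usually inlines away and the other can quietly fall back to indirect calls.

```rust
// Generic closure: the compiler can usually inline `f` straight into the
// loop body, so the abstraction tends to disappear entirely.
fn apply_generic(data: &mut [f64], f: impl Fn(f64) -> f64) {
    for x in data.iter_mut() {
        *x = f(*x);
    }
}

// Boxed closure: each iteration goes through an indirect call unless the
// optimizer happens to devirtualize it, so the "same" abstraction can stop
// being free without any visible change at the call site.
fn apply_boxed(data: &mut [f64], f: Box<dyn Fn(f64) -> f64>) {
    for x in data.iter_mut() {
        *x = f(*x);
    }
}
```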
If you're willing to talk about compilation time or other not-just-runtime-microbenchmark costs in your cost model (hello ABI stability / version-stable separate compilation! hello user cognitive load / predictability!), the picture gets _much_ murkier.
Yeah it's an interesting landscape. I bring it up because 'zero cost' stuff often sacrifices exactly those other things. I'm just interested in having a healthier, more honest design discussion.
Even if the _runtime_ cost model is just a bit more complex -- e.g. if you start wondering whether all the specialized copies or the unrolling/inlining are hurting the cache -- you lose sight of clear answers. Speculative devirtualization, polymorphic inline caches, trace JITs: ??? heuristic-ville
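A hedged Rust illustration of that tension (the shapes example is assumed, not from the thread): static dispatch gives one specialized copy per type with direct, inlinable calls but more code competing for instruction cache; dynamic dispatch keeps a single copy behind indirect calls, and whether those get speculatively devirtualized is exactly the heuristic territory above.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Static dispatch: one specialized copy of this function per concrete T, with
// direct (often inlined) calls -- but many copies add up to code-size and
// instruction-cache pressure that is hard to reason about locally.
fn total_area_static<T: Shape>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Dynamic dispatch: a single copy behind vtable calls; whether a compiler or
// JIT speculatively devirtualizes them depends on heuristics.
fn total_area_dyn(shapes: &[&dyn Shape]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let circles = [Circle { r: 1.0 }, Circle { r: 2.0 }];
    println!("{}", total_area_static(&circles));
    let as_dyn: Vec<&dyn Shape> = circles.iter().map(|c| c as &dyn Shape).collect();
    println!("{}", total_area_dyn(&as_dyn));
}
```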
It's been on my mind a lot lately when thinking about where to _not_ spend optimization energy in new languages. I think the perspective of staging / specializing is helpful. Like, consider your compiler as deciding not just which abstractions to look through but which values to specialize on.
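A tiny Rust sketch of specializing on a value (the pow example is assumed, just to make the staging point concrete): the second version bakes the exponent in at compile time, so the loop can be unrolled and folded into straight-line multiplications.

```rust
// "Looking through the abstraction" only: the exponent stays a runtime value.
fn pow_dynamic(x: f64, n: u32) -> f64 {
    (0..n).fold(1.0, |acc, _| acc * x)
}

// Specializing on a value: the exponent is a compile-time constant, so the
// compiler can unroll and fold the loop away for each instantiation.
fn pow_const<const N: u32>(x: f64) -> f64 {
    (0..N).fold(1.0, |acc, _| acc * x)
}

fn main() {
    println!("{}", pow_dynamic(2.0, 3)); // 8
    println!("{}", pow_const::<3>(2.0)); // 8
}
```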
It's tempting to think that a compiler is only looking through abstractions; but in many cases it's also baking in assumptions made while looking through (possibly into a partial replica of the code), and those assumptions carry costs, both for the replicas themselves and for handling the cases where they turn out to be wrong.
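A hand-written Rust analogue of that (the names and the overflow assumption are hypothetical, purely illustrative): a partial replica that bakes in an assumption, a guard that checks it, and a fallback that pays when the assumption doesn't hold.

```rust
// The replica: bakes in the assumption that the slice is short enough for a
// u32 accumulator never to overflow (255 * len still fits in u32).
fn sum_assuming_small(xs: &[u8]) -> u64 {
    xs.iter().map(|&x| x as u32).sum::<u32>() as u64
}

// The general path that makes no such assumption.
fn sum_general(xs: &[u8]) -> u64 {
    xs.iter().map(|&x| x as u64).sum()
}

fn sum(xs: &[u8]) -> u64 {
    // The guard, the extra copy of the code, and the fallback are each a
    // real cost, even though the fast path itself looks "free".
    if xs.len() <= (u32::MAX / 255) as usize {
        sum_assuming_small(xs)
    } else {
        sum_general(xs)
    }
}
```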