
Yeah I’d take you up on that (if not for the big ocean in the way)! There’s definitely a cost to zero cost. I’m also interested to see if we can define what people mean by ‘zero cost’ more precisely. There were some interesting ideas about that on the Pikelet Gitter actually.
I don't think it's a very meaningful term tbh. It's close to a definition, but if you only allow translation that's "as good as if by hand" you get into a weird place: by hand an author would write different code, not the numerous as-if-by-hand instances the compiler emits.
I agree it’s not per se terribly meaningful, but I need a similar concept as a touchstone for thinking about my language design—what are the criteria by which I judge perf-sensitive features?—so I need a working definition that does mean something in the context of that design
There’s a subtlety, I think, between “would write” (willingly) and “could write” (conceivably, but it’s tedious and error-prone)—it’s more the latter that I care about: automatic rearrangement of the higher-level code I want to read & write into the lower-level code I want to run
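To make that distinction concrete, here's a minimal sketch (my illustration, not from the thread; Rust is used only as a convenient example language). The higher-level version is the code I'd want to read & write; the hand-written loop is the code I *could* write, but it's more tedious and easier to get subtly wrong. The 'zero cost' claim is that the first should compile to something at least as good as the second.

```rust
// Higher-level version: iterator adapters state the intent directly.
fn dot_high(xs: &[f64], ys: &[f64]) -> f64 {
    xs.iter().zip(ys).map(|(x, y)| x * y).sum()
}

// Hand-written version: the loop I *could* write instead, with the
// bounds handling and accumulation spelled out by hand.
fn dot_hand(xs: &[f64], ys: &[f64]) -> f64 {
    let n = xs.len().min(ys.len());
    let mut acc = 0.0;
    for i in 0..n {
        acc += xs[i] * ys[i];
    }
    acc
}

fn main() {
    let xs = [1.0, 2.0, 3.0];
    let ys = [4.0, 5.0, 6.0];
    assert_eq!(dot_high(&xs, &ys), dot_hand(&xs, &ys));
    println!("{}", dot_high(&xs, &ys));
}
```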
Some interesting barometers:
- would I be willing to lean on this abstraction in a tight inner loop?
- does the optimization that this abstraction leans on break easily/silently?
- how does it impact compilation time?
- how much damage does it do to the UX?
If you're willing to talk about compilation time or other not-just-runtime-microbenchmark costs in your cost model (hello ABI stability / version-stable separate compilation! hello user cognitive load / predictability!) the picture gets _much_ murkier.
Yeah it's an interesting landscape. I bring it up because often 'zero cost' stuff sacrifices exactly those things. I'm just interested in having a healthier, more honest design discussion.
Even if the _runtime_ cost model is just a bit more complex -- e.g. if you start wondering whether all the specialized copies or unrolling/inlining is hurting cache -- you lose sight of clear answers. Speculative devirt, polymorphic inline caches, trace-JITs: ??? heuristic-ville
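As one illustration of why even the runtime picture goes murky (again my sketch, not from the thread), the same abstraction can be compiled either as specialised copies per type or as a single body behind dynamic dispatch: the first trades code size (and potential i-cache pressure) for direct calls, the second trades an indirect call for one compiled body, and neither is obviously free.

```rust
use std::fmt::Display;

// Generic: the compiler emits a specialised copy of this function for
// every concrete T it is called with (one for i32, one for &str, ...).
fn render_static<T: Display>(value: T) -> String {
    format!("[{value}]")
}

// Trait object: a single compiled body, dispatching through a vtable.
fn render_dynamic(value: &dyn Display) -> String {
    format!("[{value}]")
}

fn main() {
    println!("{}", render_static(42));       // specialised for i32
    println!("{}", render_static("hello"));  // specialised for &str
    println!("{}", render_dynamic(&42));     // one body, indirect call
    println!("{}", render_dynamic(&"hello"));
}
```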
That heuristic/statistical optimisation was something I both really liked & disliked about working on Mono. It made me realise that in Kitten I want to give the programmer more insight into & control over how that process works—more of a dialogue between programmer & tooling.
Same, but more than that, I’m thinking of questions like “How do you get the advantages of profile-guided optimisation, but without having to build a large representative workload like a comprehensive test suite or benchmark?”
Or “How can the compiler *suggest* potential optimisations that the programmer opts into or out of explicitly, rather than just automatically performing them?” Or “deoptimisations”—consider “Suggest boxing this to reduce the large number of static specialisations of it”.
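A crude form of that opt-in/opt-out dialogue already exists today: for instance, Rust's #[inline(always)], #[inline(never)] and #[cold] attributes let the programmer override the compiler's heuristics by hand. A small sketch, shown only as an analogy for the kind of explicit control being imagined here, not as the proposed design:

```rust
// Opt in: ask for aggressive inlining of a tiny hot-path helper.
#[inline(always)]
fn square(x: f64) -> f64 {
    x * x
}

// Opt out: keep the cold error path out of line to limit code growth.
#[inline(never)]
#[cold]
fn report_error(msg: &str) -> f64 {
    eprintln!("error: {msg}");
    f64::NAN
}

fn checked_square(x: f64) -> f64 {
    if x.is_finite() {
        square(x)
    } else {
        report_error("non-finite input")
    }
}

fn main() {
    println!("{}", checked_square(3.0));
    println!("{}", checked_square(f64::INFINITY));
}
```

The idea above goes a step further: the compiler would *propose* such annotations (including deoptimising ones, like boxing to cut down specialisations) and the programmer would accept or decline them, rather than the heuristics acting silently.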