Conversation

We incremental computation researchers can't say this and still get published, but the working assumption among readers seems to be that these techniques aren't really black boxes: writing any essentially new program with them is itself a new paper, unless your program is already well *parallelized*!
See other replies: it's hard to get performance improvements. You can write highly parallel reusable primitives (such as folds that require associativity and whose computation tree is balanced, or many query-language ops), then use them in your program.
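To make that concrete, here's a minimal sketch (mine, not from the thread) of one such reusable primitive: an incremental fold whose computation tree is balanced. The class name and API are invented for illustration; any associative operation with an identity element works.

```python
# Sketch of a reusable incremental primitive: a balanced fold that caches
# partial results. Updating one input recomputes only O(log n) cached nodes.
# (Illustrative code, not from the thread.)

class IncrementalFold:
    def __init__(self, items, op, identity):
        self.op, self.identity = op, identity
        n = 1
        while n < max(1, len(items)):   # pad to a power of two so the
            n *= 2                      # fold order stays left-to-right
        self.n = n
        self.tree = [identity] * (2 * n)
        self.tree[n:n + len(items)] = items            # leaves
        for i in range(n - 1, 0, -1):                  # cache op of children
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        i += self.n
        self.tree[i] = value
        while i > 1:                                   # refresh ancestors only
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def result(self):
        return self.tree[1]

# One cheap point update on a million-element sum:
f = IncrementalFold(list(range(1_000_000)), lambda a, b: a + b, 0)
f.update(42, 0)       # touches ~20 cached nodes, not 1,000,000
print(f.result())     # 499999499958
```

Associativity is what makes the cached partials reusable; the balanced tree is what keeps the update path short.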
Let me admit I did a PhD on this and only realized the issue halfway through, so there might be a “disgruntled failed employee” vibe to this (and I haven't studied the type systems that could guide authors of incremental programs).
I would say that most compilers I've seen are embarrassingly structured and don't adequately isolate/track dependencies at the crude level they "ought" to be able to from the language semantics. So they wind up redoing vast amounts of work every run.
(Much of this, in turn, derives from the _extremely_ convoluted rules for name lookup in virtually every real language. If you can't easily and precisely figure out what a source entity depends on name-wise, you're cooked; gotta back off to reprocessing the whole TU (translation unit). Which is really common!)
(Like, if I had one piece of advice for language design it'd be to keep your name lookup rules as simple as possible: not intertwined with type checking, not involving any form of global overload disambiguation, argument-dependent lookup, or whatever. It's a hairball almost everywhere.)
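As a toy illustration of the payoff (my example, not from the thread): when lookup is purely lexical and independent of types, the name-level dependencies of a def fall straight out of resolution. This sketch uses Python's ast module; real languages are hard precisely because their lookup rules aren't this simple.

```python
# Toy dependency extraction under purely lexical, type-independent lookup.
# Illustrative only: ignores nested scopes, local assignments, imports, etc.
import ast
import builtins

def name_deps(source):
    """Map each top-level def to the external names its body mentions."""
    deps = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            params = {a.arg for a in node.args.args}
            used = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            deps[node.name] = used - params - set(dir(builtins))
    return deps

src = """
def f(x):
    return helper(x) + CONST

def g(y):
    return y * 2
"""
print(name_deps(src))   # {'f': {'helper', 'CONST'}, 'g': set()}
```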
Like, do a thought experiment: you change one def that has only one use in a library with a million defs. How small can you make the constant (ideally zero) on the O(million) work you'll do just to _figure out_ there's only one use, and to exclude the rest?
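Here's roughly what making that constant zero looks like (my sketch, with invented names): maintain a reverse dependency index, so an edit pays in proportion to its actual users and the million unrelated defs are never touched.

```python
# Sketch of the thought experiment: a reverse dependency index makes the
# cost of an edit proportional to its users, not to the library's size.
# (Invented names; a real compiler also needs invalidation, persistence, etc.)
from collections import defaultdict

class DepIndex:
    def __init__(self):
        self.used_by = defaultdict(set)   # name -> defs that reference it

    def add_def(self, name, references):
        for r in references:
            self.used_by[r].add(name)

    def dirty_after_edit(self, name):
        # Editing `name` invalidates it plus, transitively, everything using it.
        dirty, stack = set(), [name]
        while stack:
            d = stack.pop()
            if d not in dirty:
                dirty.add(d)
                stack.extend(self.used_by[d])
        return dirty

idx = DepIndex()
idx.add_def("f", [])
idx.add_def("g", ["f"])                   # g is f's only user
for k in range(1_000_000):
    idx.add_def(f"h{k}", [])              # a million unrelated defs
print(idx.dirty_after_edit("f"))          # {'f', 'g'}: no O(million) scan per edit
```

The O(million) pass happens once, when the index is built; after that, each edit's cost is bounded by the size of its transitive user set.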