Conversation

similarly, when we use const, we might write:

f : a -> [b] -> [a]
f x ys = map (const x) ys

even assuming map is specialized to lists, there are *lots* more implicit/unknown bits here:

f : {a : ?} -> {b : ?} -> a -> [b] -> [a]
f {a} {b} x ys = map {?} {?} (const {?} {?} x) ys
in an HM type system we’d generate a fresh type variable for every node in the AST, unify using a disjoint set or whatever, and substitute out the solutions. with strictly first-order unification we don’t have to think too hard because solved variables can’t depend on locals.
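For reference, a minimal sketch of what that first-order machinery can look like; the types and names here are hypothetical rather than drawn from any particular implementation (and it uses a plain association list instead of a disjoint set):

```haskell
import Control.Monad (foldM)

-- Hypothetical first-order syntax: a type is a metavariable or a constructor
-- applied to arguments (e.g. TCon "->" [a, b], TCon "List" [a]).
data Ty = TVar Int | TCon String [Ty]
  deriving Show

type Subst = [(Int, Ty)]

-- Apply a substitution, chasing solved variables.
apply :: Subst -> Ty -> Ty
apply s (TVar v)    = maybe (TVar v) (apply s) (lookup v s)
apply s (TCon c ts) = TCon c (map (apply s) ts)

-- Occurs check: a variable may not appear in its own solution.
occurs :: Int -> Ty -> Bool
occurs v (TVar w)    = v == w
occurs v (TCon _ ts) = any (occurs v) ts

unify :: Subst -> Ty -> Ty -> Maybe Subst
unify s t u = go (apply s t) (apply s u)
  where
    go (TVar v) (TVar w) | v == w = Just s
    go (TVar v) t'
      | occurs v t' = Nothing              -- occurs check failure
      | otherwise   = Just ((v, t') : s)   -- solve the variable
    go t' (TVar w)  = go (TVar w) t'
    go (TCon c ts) (TCon d us)
      | c == d && length ts == length us =
          foldM (\acc (x, y) -> unify acc x y) s (zip ts us)
      | otherwise = Nothing
```

The simplification is exactly the one mentioned above: a solved `TVar` can only stand for another closed type, never for anything mentioning a local binder.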
but with higher-order unification, all of those holes are existentially-bound metavariables, and now you have to think about all of the in-scope variables they could depend on. so you can bind them at the outside & type them as functions of the local variables…
…or you can bind them locally, leaving their types alone, but then potentially have to move them outwards if you can’t solve them locally, moving them leftwards through the context and extending their type as you go to account for each local in turn.
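A rough sketch of how the first option tends to surface in the term syntax; the constructor names are hypothetical and only for illustration:

```haskell
-- Under the "bind at the outside" strategy, a hole written under local
-- binders x and y shows up in the term as a top-level meta applied to them:
--   Meta 0 [Var 1, Var 0]   -- i.e. ?0 x y
-- so solving ?0 means finding a *function* of those locals.
data Tm
  = Var Int        -- local variable (de Bruijn index)
  | Meta Int [Tm]  -- metavariable applied to a spine of arguments
  | Lam Tm
  | App Tm Tm
  | Pi Tm Tm       -- (x : A) -> B
  | U               -- the universe
  deriving Show
```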
binding them at the outside is the most straightforward I guess, tho IIRC it gets Weird™ given that some of the context is typing some of the rest of the context, i.e. you’ll have kinds and types in there, and you have to make sure they don’t get permuted or it’s unsound.
e.g. `f` binds `a : ?` implicitly as well as `x : a`, so (now naming the meta `?k`) you have `?k : Type, a : ?k, x : a`
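The same example written out as a toy dependency-ordered context; the constructors are hypothetical and only there to make the ordering constraint concrete:

```haskell
-- Each entry's type may only refer to entries to its left.
data EntryTy = Univ | MetaRef String | VarRef String
  deriving Show

exampleTelescope :: [(String, EntryTy)]
exampleTelescope =
  [ ("?k", Univ)           -- ?k : Type
  , ("a" , MetaRef "?k")   -- a : ?k   (depends on ?k)
  , ("x" , VarRef "a")     -- x : a    (depends on a)
  ]
-- Swapping, say, x in front of a would leave x's type mentioning a name
-- that isn't yet in scope, which is the unsoundness being described.
```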
another bit that tripped me up for a long time was that my constraint solver is no longer just doing unification, it’s computing judgemental equality, and hence also normalization, evaluation under binders, etc.; a much larger scope.
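To make “evaluation under binders” concrete, here is a minimal normalization-by-evaluation sketch for the untyped fragment; it is deliberately simplified and the names are invented, but it is roughly the shape of machinery a conversion checker ends up needing:

```haskell
-- Untyped de Bruijn syntax and a semantic domain of values.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

data Val = VLam (Val -> Val) | VNeutral Int [Val]  -- neutral: a variable (as a level) and its spine

eval :: [Val] -> Term -> Val
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)
  where
    apply (VLam g)        v = g v
    apply (VNeutral l sp) v = VNeutral l (sp ++ [v])

-- Quoting reads a value back into syntax; feeding a fresh level to the
-- closure is what lets us evaluate under binders.
quote :: Int -> Val -> Term
quote lvl (VLam f)        = Lam (quote (lvl + 1) (f (VNeutral lvl [])))
quote lvl (VNeutral l sp) = foldl App (Var (lvl - l - 1)) (map (quote lvl) sp)

-- Judgemental equality then compares normal forms (alpha-equivalence is
-- free with de Bruijn indices).
nf :: Term -> Term
nf = quote 0 . eval []
```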
and the rules for pattern unification—when it’s ok to solve metas, and why—are super non-obvious to me; even after reading Miller’s papers (and every other thing I could find written on the topic whatsoever for the past three years) I still have only the vaguest intuition.
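For what it’s worth, the usual statement of the pattern condition can at least be sketched: a meta applied to a spine of distinct bound variables can be solved by inverting that spine, provided the right-hand side also passes a scope check and an occurs check. A hypothetical sketch of just the spine check:

```haskell
import Data.List (nub)

-- Minimal spine syntax just for this check (hypothetical, simplified).
data Arg = BoundVar Int | OtherTerm
  deriving Show

-- ?m a1 .. an =? rhs falls in the pattern fragment when every ai is a bound
-- variable and the variables are pairwise distinct; only then is
--   ?m := \x1 .. xn. rhs
-- the unique well-scoped candidate (still subject to checking that every free
-- variable of rhs is one of the ai, and that ?m itself doesn't occur in rhs).
patternSpine :: [Arg] -> Maybe [Int]
patternSpine args = do
  vars <- traverse asVar args
  if length (nub vars) == length vars then Just vars else Nothing
  where
    asVar (BoundVar i) = Just i
    asVar OtherTerm    = Nothing
```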
there’s this super deep relationship between heterogeneous substitution, normalization by evaluation, pattern unification, &c., and much of it depends on the kind of dependently-typed language you’re designing; & the design space is vast & underdescribed.
e.g., the assertion repeated in a few places that “e.g. Reed’s typing modulo trick won’t cut it for ‘full-spectrum dependent types’” doesn’t really tell me anything about why, or even what the hell ‘full-spectrum dependent types’ is supposed to mean, and context merely hints.
I actually kinda think there aren’t that many, cos the people who aren’t flying solo are probably grad students with clueful advisors. What I wouldn’t give for an actual *teacher* :\
glad I could help! and please feel free to dive deeper into any of this that you’d like, or anything you pick up as you go. I’m still very much a novice in all of this; it’s proven one of the hardest nuts to crack that I’ve yet encountered in CS, which is saying rather a lot.
Yeah for sure - at the moment I've been mainly cargo-culting off the elaboration-zoo, but it's sometimes hard to understand _why_ certain choices were made, and what the other options are. I also find papers on unification rather hard to break the ice on. 😬