Yeah, I've been attempting to convert terms into the restricted subset of expressions in my dependently typed language Pikelet. I'd been calling this normalization, but assumed it was interchangeable with evaluation.
I was attempting to hackily get around having to do full substitution by just adding defs to the context and then renormalising, but I'm now running into an issue where I'm not keeping track of captured variables.
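The capture case is basically the classic one: substituting `x := y` into `\y. x` must not give `\y. y`. A sketch of what capture-avoiding substitution would look like (hypothetical `Term` type, not Pikelet's actual AST):

```rust
use std::collections::HashSet;

#[derive(Clone, Debug)]
enum Term {
    Var(String),
    Lam(String, Box<Term>),
    App(Box<Term>, Box<Term>),
}

/// Collect the free variables of `term` into `out`.
fn free_vars(term: &Term, out: &mut HashSet<String>) {
    match term {
        Term::Var(x) => {
            out.insert(x.clone());
        }
        Term::Lam(x, body) => {
            let mut inner = HashSet::new();
            free_vars(body, &mut inner);
            inner.remove(x); // the binder is not free in `\x. body`
            out.extend(inner);
        }
        Term::App(f, a) => {
            free_vars(f, out);
            free_vars(a, out);
        }
    }
}

/// Prime the name until it clashes with nothing in `avoid`.
fn fresh(base: &str, avoid: &HashSet<String>) -> String {
    let mut name = format!("{}'", base);
    while avoid.contains(&name) {
        name.push('\'');
    }
    name
}

/// Capture-avoiding substitution: term[x := value].
fn subst(term: &Term, x: &str, value: &Term) -> Term {
    match term {
        Term::Var(y) if y.as_str() == x => value.clone(),
        Term::Var(_) => term.clone(),
        Term::App(f, a) => Term::App(
            Box::new(subst(f, x, value)),
            Box::new(subst(a, x, value)),
        ),
        // the binder shadows x, so substitution stops here
        Term::Lam(y, _) if y.as_str() == x => term.clone(),
        Term::Lam(y, body) => {
            let mut avoid = HashSet::new();
            free_vars(value, &mut avoid);
            if avoid.contains(y) {
                // e.g. (\y. x)[x := y]: naively recursing would capture,
                // so rename the binder to something fresh first
                free_vars(body, &mut avoid);
                let y2 = fresh(y, &avoid);
                let body2 = subst(body, y, &Term::Var(y2.clone()));
                Term::Lam(y2, Box::new(subst(&body2, x, value)))
            } else {
                Term::Lam(y.clone(), Box::new(subst(body, x, value)))
            }
        }
    }
}
```

Renaming the binder whenever it clashes with the free variables of the substituted value is exactly the part my context hack skips.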
That hack breaks down on more complicated expressions. I then tried an explicit substitution approach like in the PiSigma paper, but then my normal forms weren't fully reduced, so I needed a more interesting conversion rule.
I then wondered, "how do I fully reduce my 'normalised terms' into 'normal forms', as opposed to 'weak head normal forms'?" It seemed like I had a gap in my understanding, and things were breaking down.
Note that I'm also trying to avoid HOAS as commonly used in Haskell/ML implementations, because Rust isn't as good at that style. (not sure, but closures capturing Rc'd values might create Rc cycles that would never be cleaned up)
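For what it's worth, the usual first-order escape hatch is to defunctionalise the closures: a closure becomes plain data (captured env + parameter + body syntax) instead of a boxed Rust fn. A sketch with hypothetical types:

```rust
use std::rc::Rc;

#[derive(Clone)]
enum Term {
    Var(String),
    Lam(String, Rc<Term>),
    App(Rc<Term>, Rc<Term>),
}

// HOAS-style (easy in Haskell/ML, awkward in Rust):
//   enum Value { Lam(Rc<dyn Fn(Rc<Value>) -> Rc<Value>>), ... }
// The boxed Fn can capture Rc'd values, which is where cycles could sneak in.

// First-order alternative: a closure is just data, so ownership stays a
// tree and an Rc never ends up pointing back at itself.
#[derive(Clone)]
enum Value {
    Neutral(String), // a stuck variable
    Lam(Closure),    // a paused lambda
}

#[derive(Clone)]
struct Closure {
    env: Vec<(String, Rc<Value>)>, // captured environment: name -> value
    param: String,
    body: Rc<Term>, // the body kept as syntax, not as a Rust fn
}
```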
Maybe I'm getting mixed up between 'environments' and 'contexts' too - I was attempting to use the typechecking context to do my substitutions, but it's obvious now that that would break down in more complex examples. In HOAS you get an environment for free.
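i.e. something like this split (just a sketch with placeholder types):

```rust
// Placeholder types, only for the sketch
struct Type;
struct Value;

// What the *type checker* consults when it looks up a variable
type Context = Vec<(String, Type)>;

// What the *evaluator* threads through and extends at each binder
type Env = Vec<(String, Value)>;
```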
Also, would it matter if my type checker used WHNF for values, but the language had strict evaluation semantics at runtime? I've never really been clear on that one.
AFAIU: context = name -> type, env = name -> value. And the diff between "normalization" and "evaluation" is more-or-less that normalization goes under binders and "fully evaluates" (to "*the* normal form"), mere 'evaluation' doesn't (necessarily).
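Concretely, something like this NbE-style sketch (hypothetical types, not Pikelet's code): `eval` stops at lambdas, and `quote` is the extra pass that goes under binders:

```rust
use std::rc::Rc;

#[derive(Clone, Debug)]
enum Term {
    Var(String),
    Lam(String, Rc<Term>),
    App(Rc<Term>, Rc<Term>),
}

// name -> value; what `eval` threads (the typing Context stays separate)
type Env = Vec<(String, Rc<Value>)>;

#[derive(Clone)]
enum Value {
    // a stuck variable applied to zero or more arguments
    Neutral(String, Vec<Rc<Value>>),
    // a lambda paused as first-order data: captured env + param + body syntax
    Lam(Env, String, Rc<Term>),
}

fn lookup(env: &Env, name: &str) -> Rc<Value> {
    env.iter()
        .rev()
        .find(|(n, _)| n.as_str() == name)
        .map(|(_, v)| v.clone())
        // unbound names stay stuck instead of crashing the sketch
        .unwrap_or_else(|| Rc::new(Value::Neutral(name.to_owned(), Vec::new())))
}

/// Evaluation: environment-driven, never looks under a lambda (WHNF-ish).
fn eval(env: &Env, term: &Term) -> Rc<Value> {
    match term {
        Term::Var(x) => lookup(env, x),
        Term::Lam(x, body) => Rc::new(Value::Lam(env.clone(), x.clone(), body.clone())),
        Term::App(f, a) => apply(eval(env, f), eval(env, a)),
    }
}

fn apply(f: Rc<Value>, a: Rc<Value>) -> Rc<Value> {
    match &*f {
        Value::Lam(env, x, body) => {
            let mut env = env.clone();
            env.push((x.clone(), a));
            eval(&env, body)
        }
        Value::Neutral(x, args) => {
            let mut args = args.clone();
            args.push(a);
            Rc::new(Value::Neutral(x.clone(), args))
        }
    }
}

/// Quotation/readback: the part that *does* go under binders, by applying
/// each closure to a fresh neutral variable. `depth` keeps the fresh names
/// unique; real implementations tend to use de Bruijn indices/levels instead.
fn quote(depth: usize, value: &Value) -> Term {
    match value {
        Value::Neutral(x, args) => args.iter().fold(Term::Var(x.clone()), |f, a| {
            Term::App(Rc::new(f), Rc::new(quote(depth, a)))
        }),
        Value::Lam(env, x, body) => {
            let fresh = format!("{}${}", x, depth);
            let mut env = env.clone();
            env.push((x.clone(), Rc::new(Value::Neutral(fresh.clone(), Vec::new()))));
            Term::Lam(fresh, Rc::new(quote(depth + 1, &eval(&env, body))))
        }
    }
}

/// Normalization = evaluation followed by quotation.
fn normalize(env: &Env, term: &Term) -> Term {
    quote(0, &eval(env, term))
}
```

`quote` is the missing piece for the "how do I fully reduce into normal forms" question: normalization is just `eval` followed by `quote`, while the runtime can skip `quote` entirely and evaluate strictly.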