Conversation

is there a methodical approach to implementing tools that check rules, e.g. type checkers/compilers, in a way that makes it easy to surface *why* a certain result was achieved? e.g. why you're getting an "expected type A, found type B"
not necessarily in a user-friendly way, as I think that is an upper layer of translating why an error happened so that it is friendly for a user. more like what a developer writing the error message needs to have access to while implementing it
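[One illustration of what that "developer-facing" layer might look like; a sketch of my own, not something proposed in the thread. The idea is to make the error a structured value, carrying the expected/found types, a source span, and machine-readable reasons, so the message-writing layer has everything it needs. The `Span` and `TypeMismatch` names are hypothetical.]

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    # hypothetical source location: character offsets into the file
    start: int
    end: int

@dataclass
class TypeMismatch:
    expected: str                  # type the context demanded
    found: str                     # type actually inferred
    span: Span                     # where the mismatch surfaced
    # machine-readable chain of reasons; the user-friendly layer
    # turns these into prose later
    because: list[str] = field(default_factory=list)

err = TypeMismatch(
    expected="A",
    found="B",
    span=Span(10, 14),
    because=["argument 1 of call to f",
             "f's parameter was declared with type A"],
)
```

The point is just that the checker records *reasons* as data at the moment it fails, rather than formatting a string immediately and throwing the context away.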
Gah, I wish I remembered what it was, but I'm pretty sure I saw a presentation where they were talking about how they recorded the trace of the typing judgements to give context about why an error occurred?
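[A minimal sketch of what "recording the trace of the typing judgements" could mean in practice; my own guess at the technique, not the presentation being half-remembered. The checker keeps a stack of in-progress judgements, and any error snapshots that stack, so the message can say what the checker was in the middle of proving.]

```python
from contextlib import contextmanager

class Checker:
    def __init__(self):
        self.trace = []    # judgements currently being derived, outermost first
        self.errors = []   # (expected, found, derivation path) triples

    @contextmanager
    def judging(self, judgement):
        # push the judgement while we work on it, pop when done
        self.trace.append(judgement)
        try:
            yield
        finally:
            self.trace.pop()

    def expect(self, expected, found):
        if expected != found:
            # snapshot the derivation path so the message can explain *why*
            self.errors.append((expected, found, list(self.trace)))

chk = Checker()
with chk.judging("check: let x = f 1 : A"):
    with chk.judging("infer: f 1"):
        chk.expect("A", "B")
```

Here the recorded error carries `["check: let x = f 1 : A", "infer: f 1"]` alongside the A/B mismatch, which is exactly the kind of context a message author would want on hand.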
I don't think it's what I was thinking of, but I remember seeing mention of Skalpel as being an interesting approach:
Quote Tweet
Replying to @ekuber and @JSMuellerRoemer
I wonder if there's any compiler out there that implements type error slicing (e.g.: as in macs.hw.ac.uk/ultra/skalpel/) to relate the type errors to the parts of the code that contributed to it. It looks pretty in the paper, but I've always wondered how it'd handle edge cases...
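[A toy sketch of the slicing idea, under my own simplifying assumptions (not Skalpel's actual algorithm): every constraint carries the source spans that gave rise to it, spans accumulate through substitution, and on a clash the "slice" is the union of spans that participated in deriving it. Type variables are strings starting with `'` by convention.]

```python
def unify(constraints):
    # constraints: list of (type, type, spans) triples
    subst = {}  # type variable -> (type, spans that justified the binding)

    def resolve(t):
        # follow the substitution, collecting the spans that justified each step
        seen_spans = set()
        while t in subst:
            t, spans = subst[t]
            seen_spans |= spans
        return t, seen_spans

    for (a, b, spans) in constraints:
        a, sa = resolve(a)
        b, sb = resolve(b)
        blame = set(spans) | sa | sb   # everything implicated so far
        if a == b:
            continue
        if a.startswith("'"):          # bind a type variable
            subst[a] = (b, blame)
        elif b.startswith("'"):
            subst[b] = (a, blame)
        else:
            # clash between two concrete types: report the slice
            return ("clash", a, b, blame)
    return ("ok", subst)
```

With `'x` constrained to `Int` at one location and `Bool` at another, the clash report blames both locations, not just the second one the checker happened to visit:

```python
unify([("'x", "Int",  {"line 1"}),
       ("'x", "Bool", {"line 2"})])
# ("clash", "Int", "Bool", {"line 1", "line 2"})
```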
doi.org/10.1016/j.entc might be interesting to look at regarding Skalpel. Just been pottering around the citation graph on Connected Papers… nothing is really jogging my memory, apart from that! Alas, I'm not super acquainted with this area of the literature. 😳
Lots of the prior work seems to be for constraint-based type inference (like Hindley-Milner), so not sure how relevant it is to you. For example, I'm currently leaning more on bidirectional type checking and unifying values directly, which is a little different…