Survey of error message research: https://dl.acm.org/doi/pdf/10.1145/3344429.3372508. Nothing in there about synthesis. The only HCI-esque work that's been done on synthesis (to my knowledge) has been for programming by example: https://www.aaai.org/ojs/index.php/aimagazine/article/view/2262
@geoffreylitt I think your intuition is correct. If you knew the best way to help someone e.g. debug H-M type inference, that's a step toward general synthesis. Another place to look would be debugging tactic programs in a proof assistant (maybe @TaliaRinger knows the literature here?)
thanks, helpful pointer. also enjoyed your PLATEAU paper! context btw: playing with Synquid for a class (synthesis w/ refinement types, http://comcom.csail.mit.edu/comcom/#Synquid ). neat tool, but specifying w/ types is subtle and challenging, and error messages are often unhelpful
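To make the "specifying w/ types" point concrete, here is a hypothetical Synquid-style specification, in the notation of the Synquid documentation (the function, measure, and refinement are illustrative choices, not examples from this thread):

```
-- Hypothetical spec: ask the synthesizer for replicate via a refinement type.
-- _v denotes the function's result; len is a user-declared measure on lists.
replicate :: n: Nat -> x: a -> {List a | len _v == n}
```

The refinement `len _v == n` is the whole functional spec: if it is too weak, the search accepts junk; if it is slightly off, the search can fail with little indication of why.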
Replying to @geoffreylitt @wcrichton
i'm still very new to this area, but producing good errors seems hard: "you've failed to specify some constraint on a value, which is causing our search to fail... think: what implicit constraint is in your head which you haven't written down yet?"
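A hypothetical illustration of such an implicit constraint, again in Synquid-style notation (illustrative only): drop the refinement relating the output to `n`, and the spec admits trivial programs, so the tool can "succeed" in a way the user never intended:

```
-- Underspecified: any function returning a list of a's satisfies this type...
replicate :: n: Nat -> x: a -> List a
-- ...so the synthesizer may legitimately return, e.g.:
replicate = \n . \x . Nil
```

The constraint "the result has length n" was in the user's head but never written down, which is exactly the failure mode described above.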
Seems to stem from a fundamental belief in synthesis that it should be a black box. Input spec, output program. But in practice you *always* have to know how the algo works to do anything interesting. Maybe the focus should change from good errors to algorithmic transparency?
Yeah, agree! Otherwise small local moves can blow up the search and you don't know why. Slightly ironic that synthesis, which is used for producing more transparent algorithms than ML, suffers from this issue in its own search process
It's useful to look at how PLs (that care about HCI) handle type inference ergonomics. I really love this post from @aaron_turon: https://blog.rust-lang.org/2017/03/02/lang-ergonomics.html "We limit type inference to be a good match for what you can hold in your head."
Replying to @wcrichton @geoffreylitt
Synthesis research is usually about more powerful search, but perhaps a better framing is carving out a space of searches that make sense to a programmer.
Replying to @wcrichton @maxkriegers
Yeah, reminds me of the point from your PLATEAU paper about the resulting program being understandable by a programmer. Seems like a difference in philosophy -- is the point to just do ML with better internals, or to use structure _and expose it to the user_?
Replying to @geoffreylitt @wcrichton
Like it's interesting that Flash Fill is a black-box result + black-box search -- the user-facing API is indistinguishable from a neural net; they'd probably replace it with a NN if it performed better
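To ground the "examples in, program out" API, here is a toy programming-by-example sketch in Python. It is purely illustrative (a two-parameter DSL with brute-force enumeration, nothing like Flash Fill's actual DSL or ranking; all names are made up), but it shows why the interface is indistinguishable from a learned model: the caller only ever supplies input/output pairs.

```python
# Toy PBE sketch (illustrative only, not Flash Fill's algorithm).
# The DSL: "extract the k-th whitespace-separated token, optionally capitalized."
from itertools import product

def make_program(k, capitalize):
    """Build one concrete DSL program as a Python function."""
    def prog(s):
        tok = s.split()[k]
        return tok.capitalize() if capitalize else tok
    return prog

def synthesize(examples):
    """Enumerate DSL programs; return the first one consistent with all examples."""
    for k, cap in product(range(3), [False, True]):
        prog = make_program(k, cap)
        try:
            if all(prog(inp) == out for inp, out in examples):
                return prog
        except IndexError:
            continue  # program not defined on some input; skip it
    return None

prog = synthesize([("jane doe", "Jane"), ("alan turing", "Alan")])
print(prog("grace hopper"))  # prints "Grace"
```

From the user's side this is just examples in, a generalizing function out: exactly the API a neural model would present, which is the point of the tweet above.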
I think they still value consistency though. I’m sure users still gain a hazy mental model of when Flash Fill seems to work. It’s like how voice assistants would be 10x more frustrating if they were really smart/contextual/etc, but only 20% of the time.