seems cool! but unless I'm wrong (possible!) they test on the same benchmark programs as they train on? seems dubious... https://twitter.com/fchollet/status/754372425505091584 …
-
Replying to @haldaume3
Good point, though for them that's still highly useful. For a MOOC, bootstrap with human annotation, then automate with ML.
1 reply 0 retweets 0 likes -
Replying to @Smerity
who is "them" in this case? I'm having trouble seeing use case (almost certainly a fault of mine!)
1 reply 0 retweets 1 like -
Replying to @haldaume3
MOOC :) I worked at @groklearning, which teaches students coding. This can be an automated way of helping students fix broken code.
1 reply 0 retweets 0 likes -
Replying to @Smerity @haldaume3
For example, all of the coding questions at @groklearning have exhaustive and helpful tests, as required by this method.
1 reply 0 retweets 0 likes -
Tiny, hard-to-catch errors can hit student morale and motivation hard: indentation issues, indexing mistakes, off-by-one errors, etc.
1 reply 0 retweets 0 likes -
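Since the thread hinges on exercises having exhaustive tests that can vouch for an automated fix, here is a minimal sketch of that idea in Python. It is not Grok Learning's API or the paper's pipeline: the `solve` entry point, the `generate_candidate_fixes` callable, and the test-case format are hypothetical stand-ins for whatever model or heuristic proposes repairs.

# Minimal sketch of test-validated repair (assumed names, not the paper's method):
# a candidate fix for a student's submission is accepted only if it passes every
# test case attached to the exercise, so the tests do the vetting.
from typing import Any, Callable, Iterable, Optional, Tuple

TestCase = Tuple[tuple, Any]  # (positional arguments, expected result)

def passes_all_tests(program_source: str, test_cases: Iterable[TestCase]) -> bool:
    """Run the submission's assumed `solve` function against every test case."""
    namespace: dict = {}
    try:
        exec(program_source, namespace)      # define the student's functions
        solve = namespace["solve"]           # assumed required entry point
        return all(solve(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                         # crashes and wrong answers both fail

def repair(student_source: str,
           test_cases: Iterable[TestCase],
           generate_candidate_fixes: Callable[[str], Iterable[str]]) -> Optional[str]:
    """Return the first candidate repair that satisfies the whole test suite, if any."""
    if passes_all_tests(student_source, test_cases):
        return student_source                # already correct, nothing to fix
    for candidate in generate_candidate_fixes(student_source):
        if passes_all_tests(candidate, test_cases):
            return candidate                 # exhaustive tests vouch for the fix
    return None                              # no validated repair; fall back to a human

An off-by-one bug of the kind mentioned above (say, range(len(xs) - 1) where range(len(xs)) was intended) is exactly the case where a candidate generator plus a thorough test suite could hand the student a highlighted fix instead of a full solution.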
Replying to @Smerity
and repairing student code automatically is better for learning than showing them a (snippet of a) solution? (honest question--idk)
2 replies 0 retweets 0 likes -
Replying to @haldaume3 @Smerity
I think fixing + highlighting mistakes is great for learning (same with language learning).
2 replies 0 retweets 2 likes
Future of language learning: AI chat app that highlights your mistakes and corrects them. Learn by doing + reinforcement.