Machine learning is incredibly good at brute-forcing better-than-human outcomes in path-dependent situations.
The software can be set up to run more iterations in a week than you could in a lifetime. E.g., it can play literally millions of games of Chess, Go, etc.
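As a minimal sketch of what "millions of iterations" looks like in practice, here is random self-play on tic-tac-toe, a toy stand-in for Chess or Go (real systems pair this loop with learned policies, but the iteration-count advantage is the same):

```python
import random

# Indices of the 8 winning lines on a 3x3 board stored as a flat 9-cell list.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_random_game(rng):
    """One game with both sides moving uniformly at random."""
    board = list('.........')
    player = 'X'
    for _ in range(9):
        move = rng.choice([i for i, cell in enumerate(board) if cell == '.'])
        board[move] = player
        w = winner(board)
        if w:
            return w
        player = 'O' if player == 'X' else 'X'
    return 'draw'

def self_play(n_games, seed=0):
    """Tally outcomes over n_games of self-play."""
    rng = random.Random(seed)
    tally = {'X': 0, 'O': 0, 'draw': 0}
    for _ in range(n_games):
        tally[play_random_game(rng)] += 1
    return tally

results = self_play(10_000)
print(results)
```

Even this naive loop plays tens of thousands of complete games per second on commodity hardware; no human opponent, no ambiguity about who won. That cheap, reliable feedback signal is exactly what the thread argues natural language lacks.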
This does not magically translate into fluid learning unconstrained by human limits in other environments.
E.g. grammar is a complete and utter clusterfuck, and we are still terrible at teaching machines natural language.
This doesn't necessarily reflect a mistake in methodology or a hardware restriction per se, short of some sci-fi uberquantumcomputer.
Natural language simply isn't an easy thing to model, nor to simulate in a way that provides reliable feedback.
Sooner or later, someone is going to figure out ways to get around these constraints - or to make other technological innovations that obviate them.
But it's not a given that any of that is happening now, or soon.
