Any problem can be treated as a pattern recognition problem if your training data covers a sufficiently dense sampling of the problem space. What's interesting is what happens when your training data is a sparse sampling of the space -- to extrapolate, you will need intelligence.
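A minimal sketch of the claim above, using a toy 1-nearest-neighbour model (the sine target and function names are illustrative, not from the thread): with dense coverage of the input range, pure memorisation interpolates well; with a sparse sample covering only part of the range, the same pattern matcher cannot extrapolate.

```python
# Toy illustration: a pattern matcher (1-nearest-neighbour) succeeds with
# dense coverage of the problem space but fails to extrapolate from a
# sparse sample that covers only part of it.
import numpy as np


def target(x):
    """Ground-truth function the model is asked to capture."""
    return np.sin(x)


def nearest_neighbour_predict(x_train, y_train, x_query):
    """Predict by copying the label of the closest training point."""
    idx = np.abs(x_train[:, None] - x_query[None, :]).argmin(axis=0)
    return y_train[idx]


rng = np.random.default_rng(0)
x_query = np.linspace(0, 10, 200)

# Dense sampling of the whole space: memorisation is enough.
x_dense = np.linspace(0, 10, 1000)
dense_error = np.abs(
    nearest_neighbour_predict(x_dense, target(x_dense), x_query) - target(x_query)
).max()

# Sparse sampling of only [0, 3]: queries outside that region require
# extrapolation, and the pattern matcher fails there.
x_sparse = rng.uniform(0, 3, size=10)
sparse_error = np.abs(
    nearest_neighbour_predict(x_sparse, target(x_sparse), x_query) - target(x_query)
).max()

print(f"max error, dense coverage : {dense_error:.3f}")   # small
print(f"max error, sparse coverage: {sparse_error:.3f}")  # large outside [0, 3]
```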
Sometimes, it's not about the outcome, it's about what you can learn from the process.
New conversation
Do you feel like playing against top players overfits the model toward that style somehow? I'd love it if a model could eventually recognize that I'm a new player and gradually help me become a better one, not just beat the heck out of me :)
Exactly what I hope comes of this: realistic game AI, something @Google could look into if rumours of their console are true. A game that never bores you because it adapts to your style, coaches you, and either aids you (co-op games) or can be ruthless with you when needed.
New conversation
"we'd have learned nothing from the outcome." This seems to be true only in a strictly academic framing with a very small definition of 'we'. 5 AIs besting the world's top players will at the very least teach the younger generations a very important lesson about their future.
-
I agree, the end result would be culturally monumental.
New conversation
Couldn't agree more, it's a victory for data, not for us.
We might not learn much from the training itself, but it may teach players something about the underlying game.
I've got a different take. The Dota 5 may learn how to communicate among themselves and thus gain an unfair, unbeatable advantage. It's not more data; more likely, an easier problem is being solved.
Bad take, Francois. There is value in creating AI that can outperform humans on complex tasks even if we truly learn nothing from that engineering effort. At a minimum, those successes inspire and move the bar for AI ever higher.
Your take also misses the fact that we still learn from those herculean engineering efforts. High-quality training data doesn’t magically appear; you have to learn how to create it en masse. Algorithms that can learn complex control tasks at scale don’t magically exist; we (1/2)