Whether an AI that plays StarCraft, DotA, or Overwatch succeeds or fails against top players, we learn nothing from the outcome. If it wins -- congrats, you've trained on enough data. If it fails -- go back, train on 10x more games, add some bells and whistles to your setup, and succeed.
Do game agent projects need to advance the boundaries of AGI theory? They're an exercise in applying known theory and processes to a complicated problem space to achieve superhuman performance. To say there's "nothing to learn" from that (process or outcome) is a bit hyperbolic.
One could easily turn such a game AI project into an experiment worth learning from, if one cared less about PR impact. See my prior comments about measuring generalization power rather than skill.
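To make "measure generalization power rather than skill" concrete, here is a minimal sketch (not from the thread; the toy environment, agents, and all names are hypothetical) of the kind of experiment being proposed: score an agent not by its raw performance, but by how little its performance drops on held-out variants of the game it never saw during training.

```python
import random

def play_episode(agent, difficulty, seed):
    """Toy stand-in for one game episode; returns a score in [0, 1].

    A hypothetical placeholder for a real evaluation harness. The agent
    is just a function mapping an observation to an action, and the
    reward tracks how closely the action matches the observation.
    """
    rng = random.Random(seed)
    score = 0.0
    for _ in range(100):
        obs = rng.random() * difficulty
        action = agent(obs)
        score += max(0.0, 1.0 - abs(action - obs)) / 100
    return score

def mean_score(agent, difficulties, n_episodes=20):
    scores = [play_episode(agent, d, seed=i)
              for d in difficulties for i in range(n_episodes)]
    return sum(scores) / len(scores)

def generalization_gap(agent, train_conditions, heldout_conditions):
    """Skill = performance on the training distribution.
    Generalization = how little performance drops on unseen conditions."""
    train = mean_score(agent, train_conditions)
    heldout = mean_score(agent, heldout_conditions)
    return train, heldout, train - heldout

# Two toy agents: one memorized the training difficulty ceiling, one didn't.
overfit_agent = lambda obs: min(obs, 1.0)  # clips at the training ceiling
robust_agent = lambda obs: obs             # tracks any difficulty

for name, agent in [("overfit", overfit_agent), ("robust", robust_agent)]:
    train, heldout, gap = generalization_gap(
        agent, train_conditions=[0.5, 1.0], heldout_conditions=[2.0, 4.0])
    print(f"{name}: train={train:.2f} heldout={heldout:.2f} gap={gap:.2f}")
```

The point of the toy numbers is the comparison: both agents show identical "skill" on the training conditions, but very different gaps on the held-out ones, and the gap is the quantity the tweet argues these projects should report instead of win rate.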
That statement applies to most AI research nowadays, IMO. Anything involving deep learning is highly unlikely to advance AGI research, and that's OK. Core AGI research is akin to walking through a maze blindfolded, because we're trying to mimic something we don't understand.