Thread about the OpenAI Dota 2 demo. As usual in AI, there is much less to it than meets the eye. (Decreasing the RL’s time preference may be technically interesting, however.) Follow links to detailed analyses. https://twitter.com/Aelkus/status/1011988370661232640 …
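"Time preference" here maps to the discount factor γ in RL: a minimal sketch (illustrative numbers only, not OpenAI's actual configuration) of how raising γ toward 1 — i.e. decreasing time preference — makes an agent value delayed rewards almost fully:

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over a reward sequence."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

rewards = [0, 0, 0, 10]  # a single payoff delayed by three steps

# A myopic agent (high time preference) discounts it heavily;
# a long-horizon agent (low time preference) barely discounts it.
print(discounted_return(rewards, 0.9))    # ~7.29
print(discounted_return(rewards, 0.999))  # ~9.97
```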
-
This Tweet is unavailable.
-
Unlike the DOTA 2 bot, the videogame-playing AI systems I wrote in the late ‘80s interfaced with the game via a neuroscience-inspired vision system. That was a deliberate constraint on their performance, and central to the architecture. pic.twitter.com/AWTMWdTOjX
-
Pengi and Sonja were lousy at strategy, but even running on a 1MHz CPU with 1MB of RAM, they had superhuman performance versus a Zerg rush (multiple attackers) if I allowed them to visually track several simultaneously. pic.twitter.com/kXuwhOmqv6
-
In 1987, I applied NN RL (backprop + TD learning) to video game playing. With < 1 megaflop available, that didn’t work well. But other problems I ran into still manifest in current work. As @Aelkus noted, “the situating of the learning machinery” in vision & action is critical. pic.twitter.com/gnbXEzwZ9V
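The ingredients named in the tweet — temporal-difference learning with a function approximator trained by gradient updates — can be sketched generically. This is not the author's 1987 system; the linear features, learning rate, and demo environment below are illustrative assumptions:

```python
def td0_linear(episodes, n_features, alpha=0.05, gamma=0.9):
    """TD(0) with a linear value function: learn w so w·phi(s) ≈ V(s).

    Each episode is a list of (feature_vector, reward) pairs, in order.
    """
    w = [0.0] * n_features
    for episode in episodes:
        for (phi, r), nxt in zip(episode, episode[1:] + [None]):
            v = sum(wi * xi for wi, xi in zip(w, phi))
            v_next = 0.0 if nxt is None else sum(wi * xi for wi, xi in zip(w, nxt[0]))
            delta = r + gamma * v_next - v          # TD error
            for i, xi in enumerate(phi):            # semi-gradient step
                w[i] += alpha * delta * xi
    return w

# Tiny demo: two-state chain A -> B -> terminal, reward 1 on leaving B.
chain = [([1.0, 0.0], 0.0), ([0.0, 1.0], 1.0)]
w = td0_linear([chain] * 2000, n_features=2)
# w[1] converges toward 1.0 (value of B); w[0] toward gamma * w[1]
```

With one-hot features this reduces to tabular TD(0); a neural network would replace the linear sum and the per-weight update with backprop, which is where the tweet's sub-megaflop budget bites.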
-
Unknown unknowns, caused by other agents, are critical in effective real-world activity. They defeat rationalist approaches (including ML AI ones) that assume a “small world” of knowable factors. They require anti-fragile heuristics instead. pic.twitter.com/7eyqkoautt