The intuition that complex learning systems are unreliable due to their inscrutable complexity will become moot once we train them to generate proofs of their solutions.
-
Perhaps the idea got lost in the brevity of the tweet: if you want a learning system to generate provably correct behavior, you can do so by letting it generate an algorithm with proven properties. The AI does not solve the task directly, but writes a [symbolic] program to do so.
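A minimal sketch of the idea (all names here are hypothetical, and a property check stands in for a real proof): the learning system emits a candidate program, and we only accept it after verifying it against the specification, rather than trusting the model's direct answers.

```python
from itertools import permutations

def candidate_sort(xs):
    # Stand-in for a symbolic program emitted by a learning system.
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

def satisfies_spec(program, inputs):
    """Check the sorting spec on a test domain.
    A real pipeline would discharge this with a machine-checked
    proof over all inputs, not finite testing."""
    return all(program(xs) == sorted(xs) for xs in inputs)

# Only trust the program once the check succeeds.
domain = [list(p) for p in permutations([3, 1, 2])]
print(satisfies_spec(candidate_sort, domain))
```

The point is the architecture, not the sorter: the artifact we deploy is the verified program, so the inscrutability of the model that wrote it no longer matters.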