The intuition that complex learning systems are unreliable due to their inscrutable complexity will become moot once we train them to generate proofs of their solutions.
Replying to @Plinz
We can’t even get close to that for symbolic systems... how will it be different for systems that learn?
Replying to @Grady_Booch
The correctness of a proof is usually easy to verify; the problem is finding it. Machines can search through massive problem spaces far faster and far more systematically than humans.
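
A toy sketch of that asymmetry, using subset-sum as a stand-in problem (the example and all names below are this annotation's assumptions, not anything from the thread): checking a proposed certificate is one linear pass, while finding one may mean searching every subset.

    from collections import Counter
    from itertools import chain, combinations

    def verify(numbers, target, certificate):
        # Checking a proposed solution is one linear pass: the certificate
        # must be a sub-multiset of the input and must sum to the target.
        return (not (Counter(certificate) - Counter(numbers))
                and sum(certificate) == target)

    def search(numbers, target):
        # Finding a solution may require examining all 2^n subsets.
        every_subset = chain.from_iterable(
            combinations(numbers, k) for k in range(len(numbers) + 1))
        for candidate in every_subset:
            if sum(candidate) == target:
                return list(candidate)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    cert = search(nums, 9)                # exponential-time search
    print(cert, verify(nums, 9, cert))    # [4, 5] True
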
Replying to @Plinz @Grady_Booch
Perhaps the idea got lost in the brevity of the tweet: if you want a learning system to generate provably correct behavior, you can do so by letting it generate an algorithm with proven properties. The AI does not solve the task directly, but writes a [symbolic] program to do so.
8:57 AM - 13 Jun 2018
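
One concrete reading of that last point, as a minimal sketch (the names and the checker below are this annotation's assumptions): the learning system only proposes candidate programs, and nothing it emits is trusted until a separate verifier has established the required property. Exhaustive checking over a finite domain stands in here for a genuine proof; a real pipeline would discharge the obligation with a proof assistant or SMT solver.

    def exhaustive_check(program, spec, domain):
        # Stand-in for a proof checker: confirm the spec holds on every
        # input in a finite domain.
        return all(spec(x, program(x)) for x in domain)

    def trusted_pipeline(candidates, spec, domain):
        # The learner only proposes programs; an independent checker
        # decides which proposal, if any, is trusted.
        for program in candidates():
            if exhaustive_check(program, spec, domain):
                return program
        raise RuntimeError("no verified candidate found")

    # Hypothetical candidates for "absolute value"; the spec demands a
    # non-negative result equal to x or -x.
    def candidates():
        yield lambda x: x                    # wrong on negative inputs
        yield lambda x: -x if x < 0 else x   # satisfies the spec

    spec = lambda x, y: y >= 0 and y in (x, -x)
    abs_prog = trusted_pipeline(candidates, spec, range(-100, 101))
    print(abs_prog(-7))   # 7
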