E.g., rationality comes easily to some people and not others. "Why?" the rational ask themselves. Because intelligence. It obviously varies a lot, there's no apparent upper limit, and it's easy. So making super-intelligent rational machines should be straightforward engineering.
"What extra hair can we add to the NN that will force it to explain itself?" rather than "Let's look at what it's actually doing, and then it will probably be obvious how it's doing that."
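To make the contrast concrete, here is a minimal sketch of the second attitude: record what a network's layers actually compute on inputs and look at that directly, rather than bolting an explanation module onto it. The model choice (resnet18), the hooked layer names, and the random input below are illustrative assumptions, not anything from the thread.

```python
# A sketch of the "look at what it's actually doing" attitude: capture
# intermediate activations with forward hooks and inspect them directly.
# Model, layers, and input are stand-ins; in practice you would load
# pretrained weights and feed real, preprocessed images.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # use pretrained weights in practice
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook stages at several depths to see what each one responds to.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(save_activation(name))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    model(x)

for name, act in activations.items():
    # Per-channel mean activation: a crude first summary of what fired.
    print(name, tuple(act.shape), act.mean(dim=(0, 2, 3))[:5])
```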
E.g., I tentatively concluded in January 2015 that image classification mostly relies on texture, which has since turned out to be true. I could do that because I understood the problem space, having done machine vision in other styles years before.
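As one hedged illustration of how that kind of conclusion can be checked (my own crude probe, not the 2015 analysis or the later published studies), you can shuffle image patches, which destroys global shape while keeping local texture, and measure how often a classifier's prediction survives:

```python
# Rough texture-reliance probe: if top-1 predictions mostly survive patch
# shuffling, the classifier is leaning on local texture rather than shape.
# Patch size and the model interface are illustrative assumptions.
import torch

def shuffle_patches(img, patch=32):
    """Randomly permute non-overlapping patches of a (C, H, W) image tensor,
    destroying global shape while keeping local texture statistics."""
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    patches = (img[:, :gh * patch, :gw * patch]
               .unfold(1, patch, patch)
               .unfold(2, patch, patch)          # (C, gh, gw, patch, patch)
               .reshape(c, gh * gw, patch, patch))
    perm = torch.randperm(gh * gw)
    return (patches[:, perm]
            .reshape(c, gh, gw, patch, patch)
            .permute(0, 1, 3, 2, 4)              # (C, gh, patch, gw, patch)
            .reshape(c, gh * patch, gw * patch))

def texture_reliance(model, images):
    """Fraction of images whose top-1 prediction survives patch shuffling."""
    model.eval()
    same = 0
    with torch.no_grad():
        for img in images:
            before = model(img.unsqueeze(0)).argmax(dim=1)
            after = model(shuffle_patches(img).unsqueeze(0)).argmax(dim=1)
            same += int((before == after).item())
    return same / len(images)
```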