I wish this had been more informed by a practitioner. Even my *mild* explorations of the state of the art raise questions like: if this is so incomplete, how can Neural Machine Translation be so superior to models crafted by humans over years?
-
-
-
Causality, hierarchy, opening the black box...these are all things with *extensive* progress, even over the last year. The whole "Once something works, we no longer call it AI" is a real issue here.
-
And of course this is by no means the only segment of science that really wants to declare humans special in some way.
-
The fundamental reason there's huge technical debt here is that you _will not_ want the same architecture a year from now. It's as if planes were being redesigned to require 25% less fuel every year. That's not debt, that's progress.
-
Both algorithmic efficiency *and* brute force capabilities are on unimaginable curves. It's like we finally figured out what to do with all this parallel computing capacity, after serial increases (and maybe developer cleverness) hit walls.
-
What's wild is how *practical* all this stuff suddenly is, and that's the big difference that's hard to see from the outside. You actually have to be doing this stuff, in the field, to see how much it's changing the basic questions you can expect to get answers to.
-
I think it's important to care about the ethics of Machine Learning, for the same reason I'm concerned about financial system ethics and (yes) bioethics: It's very easy to just "trust the system", turn your brain off, and do what the computer says.
-
But, here more than in most places, it's important to get the tech right.
-
-
"... I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence." You're kidding me? Deep learning must be abandoned completely. It has absolutely nothing to do with AGI.
-
-
-
Really interesting. Seems like the summary is DL is great for interpolation but bad for extrapolation, which is the more important problem.
-
-
-
I am sold on 5.2 and 5.3
-
-
-
We design current neural networks, yet they remain black boxes. Perhaps the behaviorists had it right.
-
-
-
Alt-Ctrl-Del
-
-
-
And in Europe it is going to be an even bigger problem, as black-box algorithms will be forbidden with regard to personal data. You will need to be able to explain them. Automated decisions are forbidden too. May 25 is the date.
#GDPR
-
-
-
Either mathematics is too big for the human mind or the human mind is more than a machine... pic.twitter.com/d5pWlxSRHf
-