Google Duplex & Why A.I. Is Harder Than You Think, latest by @garymarcus and Ernest Davis @NYTimes https://nyti.ms/2IS3mDz
I would claim that Markov logic goes a long way toward solving this problem.
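To make the Markov logic claim concrete, here is a hand-rolled toy sketch of its core idea, with no MLN library: first-order formulas carry weights, and a world's probability is proportional to the exponentiated sum of the weights of the formulas it satisfies. The two formulas, their weights, and the Smokes/Cancer atoms are all made-up illustrations, not anything from the thread.

```python
import itertools
import math

# Hypothetical weighted formulas over two ground atoms, Smokes and Cancer.
# Each entry: (weight, formula as a predicate over a world dict).
formulas = [
    (1.5, lambda w: (not w["Smokes"]) or w["Cancer"]),  # Smokes => Cancer
    (0.5, lambda w: w["Smokes"]),                       # weak prior that people smoke
]

def score(world):
    """Unnormalized log-probability: sum of weights of satisfied formulas."""
    return sum(wt for wt, f in formulas if f(world))

atoms = ["Smokes", "Cancer"]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(math.exp(score(w)) for w in worlds)  # partition function

def prob(query):
    """Probability mass of all worlds in which `query` holds."""
    return sum(math.exp(score(w)) for w in worlds if query(w)) / Z

print(prob(lambda w: w["Cancer"]))                      # ≈ 0.659
print(prob(lambda w: w["Cancer"] and w["Smokes"])
      / prob(lambda w: w["Smokes"]))                    # P(Cancer | Smokes) ≈ 0.818
```

Note how the hard rule "Smokes => Cancer" is softened by its weight: worlds that violate it are merely penalized, not ruled out, which is exactly how Markov logic blends logical knowledge with uncertainty.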
I’d agree with @pmddomingos. More generally, I see probabilistic dependency graphs as the holy grail: they capture causal relationships, quantify uncertainty, blend prior knowledge with data-driven learning, and can incorporate rich nonlinear functions. Hard to argue against that.
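A minimal probabilistic dependency graph illustrating those properties, sketched as a two-parent Bayesian network (Rain and Sprinkler both cause WetGrass): the edges are causal, inference quantifies uncertainty, and the noisy-OR combination of the parents is a simple nonlinear function. All the numbers here are invented for illustration.

```python
# Priors on the two root causes (made-up values).
P_rain = 0.2
P_sprinkler = 0.1

def p_wet(rain, sprinkler):
    """P(WetGrass | Rain, Sprinkler): noisy-OR, a nonlinear interaction of causes."""
    return 1 - (1 - 0.9 * rain) * (1 - 0.8 * sprinkler)

def joint(rain, sprinkler, wet):
    """Joint probability factorized along the graph's dependency structure."""
    pr = P_rain if rain else 1 - P_rain
    ps = P_sprinkler if sprinkler else 1 - P_sprinkler
    pw = p_wet(rain, sprinkler)
    return pr * ps * (pw if wet else 1 - pw)

# Posterior P(Rain | WetGrass=True) by enumerating over the hidden Sprinkler node.
num = sum(joint(True, s, True) for s in (False, True))
den = sum(joint(r, s, True) for r in (False, True) for s in (False, True))
print(num / den)  # ≈ 0.739: observing wet grass raises belief in rain from 0.2
```

The same enumeration scales badly, of course; real systems replace it with variable elimination or sampling, but the graph semantics are identical.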
End of conversation
New conversation
That’s more or less what I had in mind with the “CYC+sparse vector representations” remark. CYC can’t capture everything, and what it can capture can’t readily be reused. Vector embeddings seem more promising, but they remain almost exclusively adapted to single tasks.