Is AGI going to be so carefully designed that we're safe, or is it going to be so buggy that we're safe? You kind of have to pick
-
if AI design follows the same path and abides by the same rules as software design in the wild, the answer is pretty much "so buggy we're safe"
-
what if it applies data imputation first?
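(A minimal sketch of what "imputation first" could look like, assuming a scikit-learn setup; the array and strategy below are made up for illustration:)
```python
# Hypothetical sketch: fill in missing feature values before any model sees them.
# Assumes scikit-learn is available; data and strategy are illustrative only.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([
    [1.0, np.nan, 3.0],
    [4.0, 5.0, np.nan],
    [7.0, 8.0, 9.0],
])

# Replace each NaN with its column mean (one common, very blunt imputation strategy).
imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)
print(X_filled)  # no NaNs left; anything downstream never knows they were missing
```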
-
Or even worse, what if it uses a decent tractable probabilistic model, which can naturally deal with missing data by means of marginalization (assuming -- of course -- that NaNs encode the MAR scenario)? I've heard these things have become pretty scalable recently...
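(A minimal sketch of that marginalization trick, using a plain multivariate Gaussian as a stand-in for the tractable model; the data and the MAR assumption are illustrative only:)
```python
# Hypothetical sketch: a Gaussian is "tractable" enough that marginalizing out
# missing dimensions is just dropping the corresponding rows/columns.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))   # fully observed training data (made up)
mu = X_train.mean(axis=0)
cov = np.cov(X_train, rowvar=False)

x = np.array([0.3, np.nan, -1.2])     # query with a missing value (MAR assumed)
obs = ~np.isnan(x)                    # mask of observed dimensions

# Marginal of a Gaussian over the observed dims: drop the missing dims from mu and cov.
density = multivariate_normal(mu[obs], cov[np.ix_(obs, obs)]).pdf(x[obs])
print(density)  # likelihood of the observed part, missing dim integrated out exactly
```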
-
Sike, Skynet is Russian. It uses CatBoost
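(The joke works because gradient-boosting libraries like CatBoost accept missing numeric values natively; a tiny made-up sketch:)
```python
# Hypothetical sketch: CatBoost trains on NaN-ridden features without any imputation.
# Tiny made-up dataset; parameters chosen only to keep the run fast.
import numpy as np
from catboost import CatBoostClassifier

X = np.array([
    [1.0, np.nan],
    [2.0, 0.5],
    [np.nan, 1.5],
    [3.0, 2.0],
])
y = np.array([0, 0, 1, 1])

model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(X, y)                                  # NaNs in numeric features are handled natively
print(model.predict(np.array([[np.nan, 1.0]])))
```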
-
Still gonna try a water bucket first
-
But what if it's all random forests internally?
We're doomed!!
-
...or convert some data columns to string type.
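(A minimal illustration of that failure mode, assuming pandas; the column name is made up:)
```python
# Hypothetical sketch: a numeric column silently becomes strings, and downstream
# arithmetic quietly turns into string concatenation instead of math.
import pandas as pd

df = pd.DataFrame({"threat_level": [1, 2, 3]})
df["threat_level"] = df["threat_level"].astype(str)   # the "bug"

print(df["threat_level"].sum())   # '123' -- concatenated, not 6
```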
-
It just occurred to me that Skynet likely CREATED John Connor (with regard to him becoming a warrior and Skynet's nemesis) when it sent terminators back to kill him. The rest of the world probably figured this out 20 yrs ago... Better late than never, I guess...