That is true, but it seems quite likely that AGI will be built somewhere between 2 and 200 years from now, so it is irrational to dismiss the risks.
-
That is an entire field of research by now.
-
While folks like Anna Salamon, Anders Sandberg or Eliezer Yudkowsky are clearly smarter than me, I doubt that AGI safety can be solved.
-
I think that AGI will almost definitely happen. Biology does not have magic powers.
-
Sure! And yet it seems highly improbable that the probability of AGI is so close to zero that we can dismiss the risk, no?
-
Few physicists predict that antimatter weapons are feasible in the near future. Many AI researchers think AGI will happen.
-
There are clear obstacles to antimatter weapons (e.g. the lack of sufficient energy sources), but no clear obstacles to artificial brains.
-
Why? We know that the brain is a hierarchical function approximator + motivation system; we just cannot approximate all the relevant functions yet.
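(A rough illustration, not anything from the thread: "hierarchical function approximator" can be read as a stack of simple parameterized functions, composed and fitted end to end to approximate a target signal. Everything below — the target sin(x), layer sizes, learning rate — is an arbitrary choice for the sketch.)

```python
# Minimal sketch: a two-layer network as a hierarchical function
# approximator -- stacked simple functions jointly fitted to
# approximate a target function (here sin(x)).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# Two levels of the hierarchy: features, then a combination of features.
W1, b1 = rng.normal(0, 1, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 1)), np.zeros(1)
lr = 0.05

for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = np.tanh(X @ W1 + b1)      # first level: nonlinear features
    pred = h @ W2 + b2            # second level: combine the features
    err = pred - y

    # Backward pass: gradients of the mean squared error.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err**2).mean()))
```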
-
Intelligence is the ability to model: find the shortest model that can recreate and predict observable patterns, in the service of regulation.
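(One way to read "shortest model" is minimum description length: among candidate models, prefer the one that minimizes the bits needed to describe the model plus the bits needed to describe the data's deviations from it. A toy sketch of that idea, my illustration rather than the author's, using polynomial fits and a BIC-style parameter cost:)

```python
# Minimal sketch, assuming an MDL-style reading of "shortest model":
# score candidate models by (bits to describe the model) + (bits to
# describe the data given the model), and keep the cheapest one.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 100)
y = 1.5 * x**2 - 0.5 * x + rng.normal(0, 0.1, x.size)  # quadratic pattern + noise

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = max(residuals.var(), 1e-12)
    # Data cost: Gaussian code length for the residuals, in bits.
    data_bits = 0.5 * x.size * np.log2(2 * np.pi * np.e * sigma2)
    # Model cost: roughly 0.5*log2(n) bits per parameter (BIC-style assumption).
    model_bits = 0.5 * (degree + 1) * np.log2(x.size)
    return data_bits + model_bits

best = min(range(8), key=description_length)
print("selected polynomial degree:", best)  # typically 2: the shortest adequate model
```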
-
It has not been quite so black for a long time now.