Apparently the only reason superhuman AI should not kill us is Pascal's wager (a mythical hidden programmer might condemn you to AI hell if you fail the test of not eating the sacred monkeys), and also our atoms taste bad? https://twitter.com/manjola1990/status/949227035557232640
These questions have no easy, universally acceptable answer. But more importantly: how can we ensure that we get more say about our destiny than chimps did over theirs?
-
-
isn't it a bit like launching ballistic missiles? you have complete control of the initial conditions, and can make minor tweaks along the way, but you can't expect to be in complete control the whole time. raising kids is another good example.
-
very good metaphor, only we are raising the kids in the general area where the missile will hit, and while we can tweak the launch a bit we cannot prevent it from hitting