Before we deploy any AI system, we must test it carefully. We wouldn't open a bridge or a skyscraper without first testing the quality of the construction work. The same applies to software. The doomsday scenarios all rely on violating such norms.
The question of whether something is dangerous to society is also quite independent of what most people currently believe.
-
-
Yes, I agree with this. Whether something is dangerous doesn't depend on anyone's beliefs (i.e., their worldview).
-
True enough. I was thinking of our tendency to view AGI as a threat because we default to the same line of thinking we apply to extraterrestrial intelligence. But they're definitely different issues.