The way that the most prominent critics of AI risk fail to engage with even the most basic arguments made by people in the field suggests that they don't have any good counterarguments. That's very concerning!
“Don’t Fear the Terminator”
Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others.
Article by @TonyZador @YLeCun https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/
The standard counterargument, instrumental convergence, is basic enough that *any* amount of due diligence or spark of curiosity about why a bunch of thoughtful people disagree with them should have led them to at least *mention* it: a system doesn't need an evolved survival instinct to resist being shut down, because surviving and acquiring resources are useful subgoals for almost any final goal. Even the editors should've been able to realize that this isn't what a good argument looks like.
Tony, to clarify: do you think the supposed risk of sufficiently powerful AI systems eventually taking over, specifically for instrumental reasons, is overblown but still real, or do you think it's negligibly unlikely?