The most basic argument for X-Risk from Artificial Superintelligence is pretty straightforward.

1. We might create something more intelligent than us at some point in the future. It's allowed by the laws of physics, and a lot of people are working on it. Nobody knows when.
2. If we succeed at that, it is possible that the superintelligence will have goals different from ours. There is no compelling argument that all intelligences *must* have human-like final goals.
3. Having different final goals inevitably puts us in resource conflict with the superintelligence, since almost any final goal is better pursued with more resources, and we need those same resources ourselves. If it is indeed a superintelligence, it will win such a conflict. So let's figure out how to avoid this ahead of time.
Right now there are only about 100 people working on that problem, globally.