The most basic argument for X-Risk from Artificial Superintelligence is pretty straightforward.
1. At some point, we may succeed at building an artificial superintelligence.

2. If we succeed at that, it is possible that the superintelligence will have goals different from ours. There is no compelling argument that all intelligences *must* have human-like final goals.
3. Having different final goals inevitably puts us in conflict with the superintelligence over resources. If it is indeed a superintelligence, it will win such a conflict. So let's figure out how to avoid this ahead of time.
Right now there are only about 100 people working on that problem, globally.