The most basic argument for X-Risk from Artificial Superintelligence is pretty straightforward.
1. We might create something more intelligent than us at some point in the future. It's allowed by the laws of physics, and a lot of people are working on it. Nobody knows when.
2. If we succeed at that, it is possible that the superintelligence will have different goals from ours. There is no compelling argument that all intelligences *must* have human-like final goals.
3. Having different final goals puts us in conflict over finite resources, since resources are useful for pursuing almost any goal. If it is indeed a superintelligence, it will win such a conflict. So let's figure out how to avoid this ahead of time.
Right now there are only about 100 people working on that problem, globally.