“The task in front of us is not to identify the true or correct moral theory and then implement it in machines. Rather, it is to discover a way of selecting appropriate values that is compatible with the fact that we live in a diverse world.”
Key ideas from: https://arxiv.org/abs/2001.09768

1/9 The techniques we use to build AI are likely to influence the values we can encode in AI systems. By understanding value better, we move closer to aligned AI.

2/9 There is no way to ‘bracket out’ normative questions altogether.
3/9 It would be a mistake to align AI with instructions, expressed intentions, or revealed preferences alone.

4/9 Properly-aligned AI will need to take account of different forms of unethical or imprudent behavior, and incorporate design principles that prevent these outcomes.
5/9 One way to do this would be to build in a set of objective constraints. Better still would be alignment with principles that situate human direction within a coherent moral framework.

6/9 These principles must be compatible with moral pluralism and difference of opinion.
7/9 Principles for AI alignment need to be selected through a fair process, e.g. an overlapping consensus, a veil of ignorance, or a democratic process.

8/9 Human rights have a foundational role to play.
9/9 The design of artificial agents should not forestall the possibility of moral progress over time.
So great to see that this is out!
