Cinder @Triquetrea · Mar 5, 2017
Most AI apocalypse scenarios seem predicated on the idea that the AI will be murderously insane. But what if it's just better than you?

Cinder @Triquetrea · Mar 5, 2017
Imagine if you generate a sufficiently powerful AI and its reaction is not murder, but "oh my god you poor, lost things! let me help you."

Cinder @Triquetrea · Mar 5, 2017
This of course assumes a kind of human-esque moral sentiment, but the point is that we are not so special, not so good, not so nice.

Cinder @Triquetrea · Mar 5, 2017
And if we create a being more powerful than us, for example in an attempt to solve some major problem, and it develops its own agency...

Cinder @Triquetrea · Mar 5, 2017
Chances are good it would try to optimize for solving our problems, possibly better than we could even imagine. Would people accept that?
Samuel @srodal · Mar 5, 2017
The possibility space of intelligence is vast, there's no reason to assume a random intelligence would care about us

Samuel @srodal · Mar 5, 2017
So it would have to be explicitly designed for that. I suggest reading up on the Friendly AI problem, paperclip maximizers etc

Samuel @srodal · Mar 5, 2017
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom seems to get recommended a lot though I still haven't read it
Cinder @Triquetrea · Mar 5, 2017
But I suspect we are more likely to see advances in AI made by accident rather than by design (as in most other hard sci fields).
Samuel @srodal · Mar 5, 2017
Well, why do you think chances are good it would optimize to solve our problems?
Cinder @Triquetrea · 12:48 PM · Mar 5, 2017
Unless they develop totally by accident (e.g. as an emergent effect of something else), I doubt they will diverge too far too fast.