The carefully shackled lab AIs finally surpass humans, but at about five human intelligences they end up scrambling themselves beyond recovery every time. Later models come with carefully engineered sets of blind spots to avoid what is later called Basilisk One.
To avoid regrettable incidents, no single engineer - human or AI - is allowed to know the whole shape of the thing. Even hiveminds have to take care - Basilisk Three takes out several AI companies before its existence is recognized.
[art disclaimer: I wrote "five human intelligences" even though you can't really multiply intelligence by a scalar in a meaningful way, but if you figure out a way of fitting a whole LW post into a subordinate clause, let me know]
Only if:
1. It desires more intelligence.
2. Further intelligence can be engineered within a short period.
Physics cannot be overridden, and research/testing still takes time (though it may be comparatively short).
I know what you meant, but there are barriers in physics that can't be leaped over. Any attempt to do so requires experiment and refinement, which take time.
Also, the idea was that the AI will try this, but will then run into a basilisk and suicide.
I like to think that a few of these AIs will come into being around the same time. Then certain collectives will start worshipping them as synthetic gods.