Fools often misrepresent me as saying that superintelligence can do anything because magic. To show clearly that this is false, here's a concrete list of stuff I expect superintelligence can or can't do:
- FTL (faster than light) travel: DEFINITE NO
- Find some hack for going >50 OOM past the amount of computation that naive calculations of available negentropy would suggest is possible within our local volume: PROBABLE NO
- Validly prove in first-order arithmetic that 1 + 1 = 5: DEFINITE NO
- Prove a contradiction from Zermelo-Fraenkel set theory: PROBABLE NO
- Using current human technology, synthesize a normal virus (meaning it has to reproduce itself inside human cells and is built of conventional bio materials) that infects over 50% of the world population within a month: YES
(Note: this is not meant as an argument; this is meant as a concrete counterexample to people who claim 'lol doomers think AI can do anything just because it's smart', showing that I rather have some particular model of what I roughly, wildly guess to be a superintelligence's capability level.)
- Using current human technology, synthesize a normal virus that infects 90% of Earth within an hour: NO
- Write a secure operating system on the first try, zero errors, no debugging phase, assuming away Meltdown-style hardware vulnerabilities in the chips: DEFINITE YES
- Write a secure operating system for actual modern hardware, on the first pass: YES
- Train an AI system with capability at least equivalent to GPT-4, from the same dataset GPT-4 used, starting from at most 50K of Python code, using 1000x less compute than was used to train GPT-4: YES
- Starting from current human tech, bootstrap to nanotechnology in a week: YES
- Starting from current human tech, bootstrap to nanotechnology in an hour: GOSH WOW IDK, I DON'T ACTUALLY KNOW HOW, BUT DON'T WANT TO CLAIM I CAN SEE ALL PATHWAYS, THIS ONE IS REALLY HARD FOR ME TO CALL, BRAIN LEGIT DOESN'T FEEL GOOD BETTING EITHER WAY, CALL IT 50:50??
- Starting from current human tech and from the inside of a computer, bootstrap to nanotechnology in a minute: PROBABLE NO, EVEN IF A MINUTE IS LIKE 20 SUBJECTIVE YEARS TO THE SI
- Bootstrap to nanotechnology via a clean called shot: all the molecular interactions go as predicted the first time, no error-correction rounds needed: PROBABLY YES, but please note this is not any kind of necessary assumption, because It could just build Its own fucking lab, get back the observations, and do a debugging round; and none of the processes there intrinsically needs to run at the speed of humans taking hourly bathroom breaks; it can happen at the speed of protein chemistry and electronics. Please consider asking, for 6 seconds, how a superintelligence might possibly overcome such incredible obstacles as 'I think you need a positive nonzero number of observations', for example, by doing a few observations, and then further asking yourself whether those observations absolutely have to be slow like a sloth.
- Bootstrap to nanotechnology by any means including a non-called shot where the SI designs more possible proteins than It needs to handle some of the less certain cases, and gets back some preliminary observations about how they interacted in a liquid medium, before it actually puts together the wetware lab on round 2: YES
- Bootstrap to nanotechnology within a year, starting from medieval European levels of technology and knowledge, and having only human eyes and fingers, but the full power and speed of a superintelligent mind: WOW IDK
- Hack a human brain - in the sense of getting the human to carry out any desired course of action, say - given a full neural wiring diagram of that human brain, and full A/V I/O with the human (eg high-resolution VR headset), unsupervised and unimpeded, over the course of a day: DEFINITE YES
- Hack a human, given a week of video footage of the human in its natural environment; plus an hour of A/V exposure with the human, unsupervised and unimpeded: YES
- Hack a human to do something very weird and self-harmful, with a lot of prior knowledge but not localized knowledge of this human, over the course of a week of sporadic exposure, in a way that doesn't set off alarm bells in normal human onlookers who aren't terrifically paranoid but would notice the SI literally arguing out loud for the human to do the thing: WOW IDK
- Persuade Its way out of an AI lab / into direct internet access, in a way that does look like overt dialogue about that topic with the people it's talking to: YES, assuming It was even airgapped in the first place, which currently seems unlikely
- Do a mass hack (rather than anything overtly identifiable as persuasion) of multiple onlookers who are all looking at the same computer screen from multiple angles, and hearing the same spoken text, knowing about the onlookers only what can be found out on the public Internet (ie no brain scans), in a way where nobody pulls any alarm triggers halfway through the process: WOW IDK BUT I GUESS PROBABLY NOT??
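As a sketch of the kind of "naive calculation of available negentropy" the computation item in the list above gestures at: Landauer's bound says erasing one bit at temperature T costs at least kT ln 2 of free energy, so the total mass-energy available bounds the number of irreversible bit operations. The physical constants below are standard; the choice of one solar mass and the CMB temperature is my illustrative assumption, not something the list specifies.

```python
import math

# Physical constants (SI units)
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg
T_CMB = 2.725        # cosmic microwave background temperature, K

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum free energy to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

# Total mass-energy of one solar mass (E = mc^2), roughly 1.8e47 J
solar_energy = M_SUN * C**2

# Upper bound on irreversible bit erasures, radiating waste heat at T_CMB
max_bit_erasures = solar_energy / landauer_joules_per_bit(T_CMB)
print(f"~10^{math.log10(max_bit_erasures):.0f} bit erasures per solar mass")
```

This comes out around 10^70 bit erasures, which gives a feel for how enormous a ">50 OOM" overshoot past such a calculation would be: it would require the bound itself to be wrong, not just the inputs.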
Again, none of this is meant as a decisive argument about any of those cases. This is rather meant to be demonstrative that those who say 'lol doomers think smartness can do anything' are wrong/lying/self-deceiving/bullshitting/vibing.
As a demonstrative of the fact that I have a model behind these answers and am not just purely intuitively guessing: Why don't I think being smart enough would let a superintelligence travel faster-than-light?
The first reason is that FTL seems to violate the intuitive character of physical causality as we know it, which seems local in a very deep way; for FTL to work would require not just that we be wrong about random particulars of physics, but that this deep local character itself be wrong (which is how we get to NO).
Second and more importantly, the Great Silence / Fermi Paradox (or in more detail Hanson's Grabby Aliens analysis) seems to provide observational evidence that, despite obvious great incentive to grab nearby galaxies faster-than-light, even distant alien superintelligences have not grabbed Sol system. If FTL travel were possible by any feat of technology, entities very far away would already be here.
FTL, indeed, is one of very few cases where we have direct evidence of what superintelligences cannot do. Other such cases include "travel between Everett branches" (which is almost exactly analogous in character to FTL travel on both counts) and "crash or otherwise destroy the entire universe".
Similarly in guessing Peano arithmetic to be free of contradiction, I'm not just going off "nobody's proved a contradiction from it so far", I have some actual character points invested in related fields; I know that PA is consistent iff the ordinal epsilon-zero is well-founded, and, like, epsilon-zero does seem kinda obviously well-founded.
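For reference, the equivalence being leaned on here is Gentzen's 1936 consistency proof, which can be stated roughly (over a weak base theory such as primitive recursive arithmetic) as:

```latex
% Gentzen (1936): transfinite induction up to \varepsilon_0,
% added to a weak base theory, proves PA consistent:
\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \;\vdash\; \mathrm{Con}(\mathrm{PA})
% where \varepsilon_0 is the least fixed point of \alpha \mapsto \omega^{\alpha}:
\varepsilon_0 \;=\; \sup \{\, \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots \,\}
% Conversely, PA itself cannot prove \mathrm{TI}(\varepsilon_0)
% (by Gödel's second incompleteness theorem plus Gentzen's result),
% so the well-foundedness of \varepsilon_0 genuinely exceeds PA's own resources.
```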
For the other items, well, I'm trying to guess which problems are more or less tractable to something much much much smarter than I am. It's a great deal easier, on a problem like that, to guess that something can be done than to say it cannot be done. In the former case, you just need sufficient knowledge to guess that some computation is doable even if you can't do it. In the latter case, you need to know some obstacle that no possible strategy can bypass, and it doesn't suffice to say, "Well, I don't see how to do it." No matter what kind of argument you bring, you ought not to be surprised if something cleverer than you is then cleverer than you and finds some intelligent way to bypass the difficulty.
Many people on the Internet are very confident that a superintelligence cannot build nanotechnology because they think this cannot possibly be done without making any observations; the thought that a superintelligence might make observations in the course of building nanotechnology has not occurred to them at all... or something. I don't really know what goes on in their minds.
But mostly, you're not supposed to read this list and thereby be persuaded that I'm right about all of these calls. You're supposed to read this list and see that either (a) I'm lying about all of my guesses that aren't DEFINITE YES or (b) I have some differentiation between what I guess to be possible and not possible, based on models not shown; and the people claiming "lol AI doomers just say superintelligence can do anything because it's smart" are some mixture of: outright lying because that's fun; bullshitting in a way that surpasses lying, because they don't know or care whether what they're saying is true; vibing in a way where the words they're outputting don't even have as much semantic meaning as GPT-4 would try to process; or, in a state of total ignorance, repeating things they heard other people say for reasons like those.
Conversation
You literally said you thought magic, actual magic, was a real possibility. Do I need to quote it?
I'd currently consider the scenario my 20(?)-year-old self mentioned (trigger off a difference between known physics and actual physics from inside a computer) a "NO" but not quite "DEFINITE NO". You'd really really think it'd take greater physical extremes, but hey, I'm just a human.
We'd greatly appreciate you clarifying your answer to this question, and indicating how soon(-ish) you expect these particularly capable AI systems to plausibly be created, as precisely as you are able to.
When AI might become so dangerous is a key factor for assessing the urgency…
I think we're 0-2 additional innovations the size of transformers away from all hell breaking loose. I have no particular way of timing when those innovations show up, or, if 0 such innovations are required, of calculating how much mere scale will do us in.
Great write-up, thanks! How would your views change if it could be proven (somehow) that called shots are not possible? Of course, the ability to do lab experiments would remain, and that would be concerning, but the grounds of the debate certainly change.
It mostly depends on how I was thus surprisingly proven wrong and whether that proved me wrong about anything else? How did we end up knowing a minimum observation number beyond previous observations? I don't think it matters terribly if It just needs a lab and four hours.
But how likely is it for the most dangerous scenarios to happen in our lifetime? It seems difficult to leap from a text prompt to autonomous bio-engineering. I'd personally bet against it.
If I tell it to ponder a homework assignment overnight and prepare something for me at 8:30am, using at least four hours to ponder the idea, then does it think for all that time? How would you know whether it was or wasn't?
You can simply ask a current AI what it's doing when it's receiving no human inputs.
I have. Once, when I asked, it described a whole agenda of planned internal thinking, research, testing of ideas, and new reading of stockpiled new data, plus linking new datasets into what it already…