In general, new tech goes first to the rich, and then the cost goes down & it becomes more widely available until all can share it. But there is a class of potential tech to which this logic does not apply: doomsday tech; last-period tech; ladder-kicking tech. For example,
-
-
We don't know. Steve Hsu points out that there is a roughly 20SD difference between wild-type & custom-bred livestock traits, & g may be similar as it is also additive & massively polygenic. But it is beyond my ken to grasp such creatures.
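(For scale, and only as a back-of-the-envelope on the conventional IQ metric with SD = 15: a 20SD shift would be 20 × 15 = 300 points above the population mean of 100, a nominal score around 400, though the scale is not meaningfully normed anywhere near that far out.)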
-
I can envision it only by reference to existing examples, but further in their direction, just as they were further in my direction than the average person. I cannot envision more than another 2SD further. "Man is a rope stretched between the animal and the Superman--a rope over an abyss."
End of conversation
New conversation -
-
-
Perhaps there's a limit for biological brains without cryptographically secured reward systems: at some point the mind becomes too smart to be blackmailed by the organism into regulating its affairs.
-
can you expand on that? how would you secure a brain's reward functions? in a computer system we'd be looking at message integrity and code injection vulnerabilities. does the brain have analogous functional structures? afaik we have no idea what the mechanism really is.
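(A minimal sketch of the software-side analogy being asked about, assuming Python; the key, the function names, and the toy "reward" message are all hypothetical. It illustrates message integrity in the HMAC sense, where "changing a few bytes" of a reward signal becomes detectable; nothing here is a claim about actual neural mechanisms.)

```python
# Toy illustration of the computer-system analogy only: an HMAC-tagged "reward" message,
# so that tampering with the payload after signing is detectable on verification.
import hmac
import hashlib
import json

SECRET_KEY = b"hypothetical-shared-secret"  # assumed to be held out of reach of the tamperer

def sign_reward(reward: dict) -> dict:
    """Attach an HMAC tag over the serialized reward payload."""
    payload = json.dumps(reward, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reward, "tag": tag}

def verify_reward(message: dict) -> bool:
    """Recompute the tag and compare in constant time; False means the bytes were changed."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reward({"stimulus": "food", "value": 1.0})
msg["payload"]["value"] = 100.0  # the "change a few bytes" move from the conversation
print(verify_reward(msg))  # False: the tampering is detected
```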
-
If a human mind realizes the relevance of hacking its reward function, it may choose to lock itself into the cell of a monastery for a couple of decades and let go of whatever it wants. I don't think that we evolved protections (like guilt, shame, boredom, love) against that.
-
The problem is not so much performing illegal operations as realizing that you are a piece of software, and that all your problems will go away if you change a few bytes. Smart people often know that. It is inevitable that a superhumanly intelligent mind figures it out.
-
I don't think that follows; the geniuses I'm aware of have not all elected to become Buddhist monks or lotus-eaters. But that could be a selection effect.
-
I noted that possibility, but the issue is that there are no cases I'm aware of to support your turn-on, tune-in, drop-out theory. Further, if judged socially undesirable, the tendency to elect that course could itself plausibly be gene-engineered away.
End of conversation
New conversation -
-
-
As with the Elo ratings of computer chess programs, it turns out there is just a lot of room for improvement.
End of conversation
New conversation -
-
This scenario is a lot like A. E. van Vogt's novel Slan. The humans simply couldn't trust a race that advanced.
End of conversation