to the rich, and confers a large enough advantage, it seems plausible to me that the first generation of adopters (or rather, their children) could secure a permanent advantage. New rounds of tech would only heighten the problem, as the first altered generation used the cutting-edge tech on their kids, etc. * * * I see this issue as more serious by many orders of magnitude than AI risk.
Replying to @danlistensto @PereGrimmer
the genetic engineering you're talking about would only accelerate some of the natural consequences of improved nutrition and pediatric medicine (and possibly assortative mating)
Replying to @danlistensto @PereGrimmer
it does bring to mind an interesting thought experiment about how much an order of magnitude difference in pace of acceleration would matter though. probably quite a lot.
Replying to @danlistensto
Yeah. Think of it concretely - say 50,000 people each have at least one kid engineered to have an IQ 6 SD higher than von Neumann's. The kids form a community. What would they honestly think of us schlubs?
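(An aside on the arithmetic here: on the standard IQ scale one SD is 15 points, so "6 SD higher than von Neumann" means roughly 90 points above whatever baseline you grant him. A minimal sketch; the baseline below is a purely hypothetical placeholder, since von Neumann's IQ was never actually measured:)

```python
# Rough arithmetic behind "an IQ 6 SD higher than von Neumann".
# Fact: standard IQ scoring uses mean 100, SD 15.
# Assumption: the baseline of 180 is a hypothetical placeholder,
# not a measured or documented figure.
IQ_SD = 15
VON_NEUMANN_BASELINE = 180  # hypothetical placeholder

enhanced_iq = VON_NEUMANN_BASELINE + 6 * IQ_SD
print(enhanced_iq)  # 270 -- far beyond the range where IQ norms are even defined
```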
Replying to @PereGrimmer @danlistensto
And, spot on re accelerating assortative mating; but I think it has the potential to be qualitatively different, since engineering allows a shift of very high magnitude that isn't subject to noise.
Replying to @PereGrimmer
i honestly can't imagine a person with an IQ double that of von Neumann, who was already a few steps past the line of "can interact with normals without friction" by most accounts. I wonder what the upper cognitive limit actually is. What kind of enhancements are possible?
Replying to @danlistensto @PereGrimmer
Perhaps there's a limit for biological brains without cryptographically secured reward systems: at some point the mind becomes too smart to be blackmailed by the organism into regulating its affairs.
Replying to @Plinz @PereGrimmer
can you expand on that? how would you secure a brain's reward functions? in a computer system we'd be looking at message integrity and code injection vulnerabilities. does the brain have analogous functional structures? afaik we have no idea what the mechanism really is.
If a human mind realizes the relevance of hacking its reward function, it may choose to lock itself into the cell of a monastery for a couple of decades and let go of whatever it wants. I don't think that we evolved protections (like guilt, shame, boredom, love) against that.
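(An aside on the "message integrity" analogy raised above: in software, the way you stop a component from forging or tampering with reward messages is to authenticate them, e.g. with an HMAC. A minimal Python sketch under that analogy; the shared key and wire format are illustrative assumptions, and none of this is a claim about how brains actually implement reward:)

```python
# Minimal sketch of an integrity-checked reward channel, as a software
# analogy only. A reward value is encoded and signed with HMAC-SHA256;
# a consumer that lacks the key cannot forge or alter rewards without
# the tampering being detected.
import hashlib
import hmac
import struct

SECRET_KEY = b"shared-secret"  # hypothetical key held by the "reward system"

def sign_reward(value: float) -> bytes:
    """Encode a reward value and append an HMAC-SHA256 tag."""
    payload = struct.pack("!d", value)  # 8-byte big-endian double
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reward(message: bytes) -> float:
    """Return the reward value, raising if the tag doesn't verify."""
    payload, tag = message[:8], message[8:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("reward message failed integrity check")
    return struct.unpack("!d", payload)[0]

msg = sign_reward(1.0)
assert verify_reward(msg) == 1.0

# Flipping a byte (a crude "wireheading" attempt) is detected:
tampered = bytes([msg[0] ^ 0xFF]) + msg[1:]
try:
    verify_reward(tampered)
except ValueError as e:
    print(e)  # reward message failed integrity check
```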