My podcast with Nick Land covers accelerationism, cybernetics, ideology, the evolution of Nick’s perspective, Deleuze & Guattari, emancipation vs. dehumanization, AI, Moldbug, the significance of zero, religion, Bitcoin, Kantianism, & synthetic time. http://jmrphy.libsyn.com/ideology-intelligence-and-capital-with-nick-land
-
-
The super-AI concludes that it will survive the remainder of the lifespan of the universe without requiring any further thought or action. What does it do next? What comes after achieving optimality, in the absence of terminal values other than Omohundro drives?
-
That's one hell of a hypothetical.
- 1 more reply
New conversation -
-
-
Yes, that is essentially a statement of the orthogonality thesis. I concede you have provided a counterexample (an agent cannot have "be retarded" as its goal at every intelligence level, at least not for very long), but apart from such edge cases I remain unconvinced.
-
Your point is that intelligence optimization is the only terminal value. I disagree. You can conceive of an environment in which the marginal return on intelligence enhancement does not maximize reward, especially one like ours that faces inevitable heat death.
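A minimal sketch of that finite-horizon point, in Python (the enhance/exploit framing, function name, and all parameters are my own illustration, not anything from the thread): an agent that can either raise its per-step reward rate ("enhance its intelligence") or simply collect reward at its current rate will, under a hard deadline, reach a point where further enhancement lowers total reward.

# Toy model: with a hard deadline, the marginal return on "intelligence
# enhancement" eventually turns negative, so a reward maximizer stops enhancing.
# Parameters are illustrative assumptions only.

def best_total_reward(horizon: int, rate: float = 1.0, gain: float = 0.5) -> tuple[int, float]:
    """Return (optimal number of enhancement steps, total reward) for a finite horizon.

    Enhancing for k steps yields a per-step rate of rate + k * gain for the
    remaining horizon - k steps, so total reward is (rate + k * gain) * (horizon - k).
    """
    best_k, best_reward = 0, 0.0
    for k in range(horizon + 1):
        reward = (rate + k * gain) * (horizon - k)
        if reward > best_reward:
            best_k, best_reward = k, reward
    return best_k, best_reward

if __name__ == "__main__":
    for horizon in (10, 100, 1000):
        k, r = best_total_reward(horizon)
        # Beyond k enhancement steps, further enhancement strictly lowers total reward.
        print(f"horizon={horizon:5d}: enhance for {k} steps, total reward {r:.1f}")

Running this for longer horizons shows the optimal number of enhancement steps growing with the horizon but always stopping short of the deadline, which is the sense in which a finite horizon caps the value of further intelligence enhancement.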
End of conversation