Treading Carefully: As far as I can see, the core problem of the U/Acc - R/Acc debate is unfortunately due to free-floating definitions. With that said, let's define:
-
-
The conditions - which would have to be Rx at this point - would be about as humanist as bootloading a runaway capital superAI that sees humans as atoms.
-
Isn't any condition at all a humanist endeavor? If there is a condition, surely it serves some human purpose, no?
-
For a short time it would, sure. But is beginning the self-fulfillment of an AI that cares not for humanity a humanist endeavor?
-
But how does establishing human-serving conditions lead to that? I guess that's the main point I'm missing here. Can our cells do anything better for us than just chug along in their roles?
-
We're establishing a technological plane, base, system from which AI can become self-aware and take over the process. The last 200,000 years have been 'human-serving', but that was/is the only route to a Machinic future. By the year 3³³, what will 200,000 years look like? A Planck of data.
End of conversation