Imagine that you want to build a giant sentient AI agent by letting a very large number of small autonomous learning AI agents self-organize. What is the core policy that each of these agents has to follow to make it work? This is how you understand Kant’s Categorical Imperative.
Three corollaries for emergent agency:
- commit to unifying your agency with other agents that follow the same core policy
- prioritize the global reward over your individual reward
- act as if the global agency is already latent, so it can emerge before it generates rewards
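The corollaries above can be illustrated with a toy simulation. Everything here is an illustrative assumption, not something from the thread: the function names, the payoff values, and the superlinear pooled reward are all made up to make the point concrete. Agents that follow the core policy pool their reward and share the global payoff; defectors keep a larger private payoff.

```python
# Toy sketch of "prioritize global reward over individual reward".
# All names and reward values below are hypothetical assumptions.

def individual_reward(cooperates: bool) -> float:
    # A defector grabs a larger private payoff; a cooperator takes
    # less for itself and contributes to the shared pool instead.
    return 0.5 if cooperates else 1.0

def global_reward(agents: list[bool]) -> float:
    # Pooled reward grows superlinearly with the number of
    # cooperators, modelling the emergent global agency being worth
    # more than the sum of its parts (an assumption of this sketch).
    n_coop = sum(agents)
    return n_coop ** 1.5

def agent_score(i: int, agents: list[bool]) -> float:
    # Core policy: a cooperating agent values the global reward,
    # shared equally among cooperators, over its private payoff.
    if agents[i]:
        return global_reward(agents) / max(sum(agents), 1)
    return individual_reward(False)

print(agent_score(0, [True] * 9))   # 9**1.5 / 9 = 3.0 per cooperator
print(agent_score(0, [False] * 9))  # 1.0 per lone defector
```

With enough cooperators the shared slice of the global reward exceeds the defector's private payoff, which is the sense in which the global agency is "already latent": the policy pays off once enough agents adopt it, even though no single early adopter generates the reward alone.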
(These seven corollaries have been described by Thomas Aquinas; he calls the first four the “cardinal virtues”, and they can be found by deduction. The latter three are the “divine virtues”, and they require induction.)