Imagine that you want to build a giant sentient AI agent by letting a very large number of small autonomous learning AI agents self-organize. What is the core policy that each of these agents has to follow to make it work? This is one way to understand Kant's Categorical Imperative.
Replying to @Plinz
Are you assuming agents capable of developing a theory of mind, or is this behavior emergent?
Replying to @renatrigiorese
Every sufficiently complex intelligence will discover the nature of mental representations.
4:22 PM - 28 Mar 2020