Imagine that you want to build a giant sentient AI agent by letting a very large number of small autonomous learning AI agents self-organize. What is the core policy that each of these agents has to follow to make it work? That question is one way to understand Kant’s Categorical Imperative.
In all seriousness, it is more than an interesting route. All real philosophy is mathematical in nature. Since Gödel, we know that mathematics is computation (constructive math), and since Wittgenstein, that AI is the missing link between meaning and language.
-
Yeah, I just mean to say I think it’s a good framing of moral “rules,” and it also explains why they sometimes don’t work: edge-case bugs.
-
Unexplored because code didn’t exist yet in the 19th century, and most philosophers today don’t code.
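
To make the “edge-case bugs” point concrete, here is a minimal sketch, with a toy world and hypothetical names throughout (nothing here comes from the thread itself): each agent runs the Categorical Imperative as a policy filter, rejecting any action whose maxim fails when every agent adopts it.

```python
from typing import Callable

# Toy world state: a shared resource and a population of agents.
World = dict  # e.g. {"resource": 10, "agents": 100}
Action = Callable[[World], World]

def universalizable(action: Action, world: World) -> bool:
    """Kant as a policy filter: simulate every agent taking the action
    and accept it only if the shared world survives."""
    simulated = dict(world)
    for _ in range(simulated["agents"]):
        simulated = action(simulated)
    return simulated["resource"] >= 0

def take_one(world: World) -> World:
    # Harmless for one agent, ruinous when universalized.
    return {**world, "resource": world["resource"] - 1}

world = {"resource": 10, "agents": 100}
print(universalizable(take_one, world))  # False: the maxim fails the test
```

The edge-case bug lives in how the maxim is described: the same behavior, redescribed at a different granularity, can pass or fail the check, which is the classic objection to rule-based ethics restated as a specification problem.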