I think we don't have a good way to make agents behave intelligently in environments without fully-defined affordances
my critique is that invoking catastrophe in the thought experiments I've seen is unnecessary to capture and explore the failure mode
-
surely the stakes are not totally irrelevant
-
my experience with thinking through problems is that if I consider the stakes too early, I can't think about the problem clearly enough
- 3 more replies