Realizing now that I implicitly assume arguments I made successfully to the AI world 30 years ago are understood by everyone. Maybe not so.
-
-
Representations are not data structures. Rules are not effective procedures. Models are not made of wffs. Plans do not engender action.
-
None of these things are woo or work by magic; we know in detail how they do work. It’s just completely unlike the way rationalists imagine.