Bingo. So many people think they can do NLU (or robotics, or even driving) without having a commonsense knowledge database, and I just don't see how that's remotely possible. https://twitter.com/punkstrategy/status/1071560512662515712
-
BERT is achieving state-of-the-art performance on a broad range of tests, but not, you are correct, on GLUE's WNLI, due (at least as implied in the paper) to an issue with the construction of that data set. Taking a step back, I think you are looking for logical symbolic 1/
-
operations on identifiable entities to underpin "true" NLU. Assuming I understand you, why couldn't that logic instead occur as learned NN transformations on embedded representations of concepts and entities? It may require an order of magnitude more training data /2
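-
To make the idea in /2 concrete: a minimal sketch, assuming a toy knowledge-graph-style setup, of a "logical" relation learned purely as a transformation on entity embeddings rather than as a symbolic rule. The names (LearnedRelation, NUM_ENTITIES, EMBED_DIM) and the bilinear form are illustrative assumptions, not anything proposed in the thread.

```python
import torch
import torch.nn as nn

NUM_ENTITIES = 1000   # hypothetical vocabulary of concepts/entities
EMBED_DIM = 64        # illustrative embedding size

class LearnedRelation(nn.Module):
    """Scores whether a relation r(a, b) holds, using only learned
    transformations of the entities' embedded representations."""
    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_entities, dim)
        # The "logical operation" is just a learned bilinear map on embeddings.
        self.relation = nn.Bilinear(dim, dim, 1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Higher score = relation predicted to hold between a and b.
        return self.relation(self.embed(a), self.embed(b)).squeeze(-1)

model = LearnedRelation(NUM_ENTITIES, EMBED_DIM)
# Trained with ordinary supervised (entity_a, entity_b, holds?) pairs:
a = torch.tensor([3, 17])
b = torch.tensor([42, 8])
logits = model(a, b)
print(logits.shape)  # torch.Size([2])
```

The design point is the one the tweet makes: nothing here enumerates rules over identifiable symbols; whatever "logic" the model captures is absorbed into the embedding table and the bilinear map, at the cost of needing (possibly much) more training data.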