There have been several very interesting recent works probing the properties of the representations generated by BERT-like models: https://nlp.stanford.edu/~johnhew/structural-probe.html , https://openreview.net/forum?id=SJzSgnRcKX , and https://arxiv.org/abs/1906.02715 (among others)
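To give a flavour of what the first of those works does: the structural probe learns a single linear map over a model's contextual embeddings so that squared distances between projected word vectors approximate distances in the parse tree. A rough sketch of that distance computation (my own illustrative code, not taken from the paper's release; the matrix B would be trained with a gradient-based loss against gold tree distances):

```python
import numpy as np

def structural_probe_distances(h, B):
    """Squared L2 distances between linearly projected embeddings.

    h: (seq_len, d) contextual word vectors for one sentence
    B: (d, rank) probe matrix, learned so these distances match
       parse-tree distances between word pairs
    """
    t = h @ B                           # project into probe space
    diff = t[:, None, :] - t[None, :, :]  # pairwise differences
    return (diff ** 2).sum(-1)          # (seq_len, seq_len) distance matrix
```

The output is symmetric with a zero diagonal, exactly the shape a tree-distance matrix has, which is what makes the comparison well-posed.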
Curious to see the same analysis applied to GPT-2! I would guess the prompted, left-to-right setting imposes an even stronger inductive bias toward syntax than the standard MLM objective does.
New conversation
How come you mainstream AI types never talk about how the sensors the brain uses detect changes (transitions) in the environment and emit precisely timed spikes? How come you never talk about how the retina uses lots of tiny motion detectors tuned to various angles and directions?