One interesting thing about deep learning is that even as ever-better results surface, much of what we "know" about NNs keeps turning out to be wrong. A short list (in rough chronological order): - "you need to pretrain a NN" - "NNs require thousands of datapoints to train" (1/10)
Hmm, so it uses a bilingual dictionary rather than a bilingual corpus. That’s cool… following up on that, the bilingual dictionary is itself learned without any bilingual pairs, as described here: https://arxiv.org/pdf/1710.04087.pdf
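(For context: the linked paper, Conneau et al.'s "Word Translation Without Parallel Data", learns an orthogonal map W between two independently trained monolingual embedding spaces. Below is a minimal numpy sketch of two pieces of its pipeline: the orthogonal-Procrustes solve and dictionary retrieval. The function names are illustrative; in the paper the seed pairs come from an adversarial initialization rather than any bilingual resource, and retrieval uses the CSLS criterion rather than the plain cosine nearest neighbour used here.)

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: find W minimizing ||X @ W.T - Y||_F
    with W orthogonal. X, Y are (n_pairs, dim) arrays whose rows are
    the two sides of a (possibly synthetic) seed dictionary."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt  # W maps the source space into the target space

def translate(W, src_vecs, tgt_vecs, k=1):
    """Top-k translation candidates by cosine similarity after mapping.
    Plain nearest-neighbour retrieval; the paper uses CSLS instead,
    which corrects for hubness in high-dimensional embedding spaces."""
    mapped = src_vecs @ W.T
    mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = mapped @ tgt.T              # (n_src, n_tgt) cosine matrix
    return np.argsort(-sims, axis=1)[:, :k]
```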
Yes. Some papers start with just pretrained paired word embeddings or a few parallel sentences, but both of those are still *some* bilingual data IMO, while that paper seems to be genuinely fully monolingual. Which is surprising, but then humans do learn what 'gavagai' means (Quine's radical-translation thought experiment).
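(On the "genuinely fully monolingual" point: the paper never consumes human-provided pairs even during refinement. It bootstraps a synthetic seed dictionary from mutual nearest neighbours under the current mapping, re-solves Procrustes on those pairs, and iterates. A hedged sketch, reusing the procrustes helper above; plain cosine again stands in for the paper's CSLS similarity:)

```python
def synthetic_dictionary(W, src_vecs, tgt_vecs):
    """Index pairs (i, j) that are mutual nearest neighbours across spaces."""
    mapped = src_vecs @ W.T
    mapped = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = mapped @ tgt.T                     # (n_src, n_tgt)
    s2t = sims.argmax(axis=1)                 # best target per source word
    t2s = sims.argmax(axis=0)                 # best source per target word
    return [(i, int(j)) for i, j in enumerate(s2t) if t2s[j] == i]

def refine(W, src, tgt, n_iters=5):
    """Alternate dictionary induction with Procrustes re-fitting.
    W would start from the paper's adversarial initialization."""
    for _ in range(n_iters):
        pairs = synthetic_dictionary(W, src, tgt)
        idx_s, idx_t = map(list, zip(*pairs))
        W = procrustes(src[idx_s], tgt[idx_t])
    return W
```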
I’ve only skimmed the second paper, but so far it appears to be a genuinely surprising result!