We're excited to announce the 🤗Transformers release of the Retrieval-Augmented Generation model in collaboration with !
Paper: arxiv.org/abs/2005.11401
Demo: huggingface.co/rag/
🤗Doc: huggingface.co/transformers/m
Blog post: ai.facebook.com/blog/retrieval
Led by , the RAG model is trained end-to-end for retrieval-in-the-loop generation, a new paradigm that lets the model go find useful information in a text corpus while generating.
No need to try to encode all of that knowledge in a trillion parameters any more ;)
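To make the retrieval-in-the-loop idea concrete, here's a toy sketch (not the actual RAG model) of its core step: scoring passages against a query by inner product, i.e. maximum inner product search (MIPS), and handing the best passage to the generator as extra context. The passages and embeddings below are made up for illustration; in RAG the embeddings come from a trained DPR encoder and the search runs over a FAISS index.

```python
# Toy retrieval-in-the-loop sketch (illustrative only, not the RAG model).
# A query embedding selects the best passage by maximum inner product (MIPS);
# the generator would then condition on the question plus that passage.
corpus = [
    "The hellbender is the largest salamander in the United States.",
    "Rumpelstiltskin spun straw into gold for the miller's daughter.",
    "The Beagle was a famous 1830s British ship.",
]

# Stand-in embeddings, one per passage; RAG uses a trained DPR encoder instead.
doc_embs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
query_emb = [0.1, 0.9, 0.2]  # hypothetical query about the miller's daughter

# Inner-product scores between the query and every passage embedding.
scores = [sum(d * q for d, q in zip(doc, query_emb)) for doc in doc_embs]
best = max(range(len(scores)), key=lambda i: scores[i])
context = corpus[best]
print("Retrieved:", context)  # the passage the generator would condition on
```

In the real model this lookup happens inside the forward pass, which is what makes end-to-end training of the query encoder possible.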
Our implementation allows you to load and use the model in just 5 LOC, and leverages our 🤗Datasets library to efficiently use a knowledge base on disk!
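Those few lines look roughly like the sketch below, following the 🤗Transformers RAG documentation. The `facebook/rag-token-nq` checkpoint name and the `use_dummy_dataset=True` option (a small stand-in for the full Wikipedia index) are taken from that documentation; downloading the real index is much heavier.

```python
# Sketch of loading and querying RAG per the 🤗Transformers docs.
# use_dummy_dataset=True loads a tiny stand-in index instead of full Wikipedia.
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

inputs = tokenizer("who wrote romeo and juliet", return_tensors="pt")
answer = tokenizer.batch_decode(
    model.generate(**inputs), skip_special_tokens=True
)[0]
print(answer)
```

Swapping in the full Wikipedia index is a matter of changing the retriever configuration, with 🤗Datasets memory-mapping the knowledge base from disk.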
We prepared a demo that showcases the model's ability to answer AND generate Jeopardy-style questions: have fun entering a target answer and reading the prompt out loud In the Voice of Alex Trebek (IVAT)*!
(*#alextrebek text-to-speech not provided)
Some favorites so far:
1. The hellbender, the largest type of this tailed amphibian in the U.S., can reach a length of 3 feet
2. In a Grimms' fairy tale, a miller's daughter guessed the name of this little man
3. This breed shares its name with a famous 1830s British ship
How good at Jeopardy are you? ;)
Huge thanks to , , @PatrickPlaten, and who made this release possible!
Awesome to see retrieval augmentation make it into Transformers! And great to have worked with you!
What great work. Thank you!
Is it straightforward to flow gradients through the FAISS IndexHNSWFlat retriever? I think the MIPS bit is going over my head.





