Aaron Tay (@aarontay) · May 19
[Read] In-Context Retrieval-Augmented Language Models
arxiv.org
In-Context Retrieval-Augmented Language Models
Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, have been shown to significantly improve...
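The in-context RALM idea is simple enough to sketch: retrieve documents for the query and prepend them to the prompt of an otherwise frozen LM. A minimal sketch, assuming a toy word-overlap retriever and a plain string prompt in place of a real retriever and LM call:

```python
# Toy in-context RALM sketch: the LM is untouched; retrieved documents
# are simply prepended to its input. retrieve/build_prompt are
# illustrative stand-ins, not the paper's actual components.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved documents to the query; the LM itself is unchanged."""
    context = "\n".join(docs)
    return f"{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "REALM trains the retriever jointly with the language model.",
    "In-context RALM prepends retrieved documents to a frozen LM's input.",
    "Bananas are rich in potassium.",
]
query = "How does in-context RALM use retrieved documents?"
prompt = build_prompt(query, retrieve("in-context RALM retrieved documents", corpus))
print(prompt)
```

The point is that no gradients touch the LM; the only moving part is the prompt.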
Aaron Tay (@aarontay) · May 19
[Read] Generate rather than Retrieve: Large Language Models are Strong Context Generators - I'm quite confused why this works and is competitive with retrieve-and-generate models, despite retrieving no new external information.
openreview.net
Generate rather than Retrieve: Large Language Models are Strong...
We propose a novel generate-then-read pipeline for solving knowledge-intensive tasks by prompting a large language model to generate relevant contextual documents.
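The generate-then-read pipeline swaps the retrieval step for a generation step: the LLM first writes a plausible contextual document itself, then a reader answers conditioned on it. A sketch under stated assumptions, with stub functions standing in for the two LLM calls (generate_context and read_answer are hypothetical names, not the paper's API):

```python
# Generate-then-read sketch: step 1 *generates* a context document
# instead of retrieving one; step 2 reads the answer out of it.
# Both steps are stubbed; in the paper both are LLM calls.

def generate_context(question: str) -> str:
    """Step 1: the LLM generates a relevant document from its own
    parametric knowledge. Stubbed with a canned document here."""
    return "The Eiffel Tower, completed in 1889, is located in Paris, France."

def read_answer(question: str, context: str) -> str:
    """Step 2: a reader answers conditioned on the generated document.
    Stubbed with a trivial sentence lookup here."""
    for sentence in context.split("."):
        if "located" in sentence:
            return sentence.strip()
    return "unknown"

question = "Where is the Eiffel Tower?"
context = generate_context(question)   # no external corpus is consulted
answer = read_answer(question, context)
print(answer)
```

Seen this way, the surprise Aaron notes is real: the "context" adds no information beyond what the generator already encodes, yet conditioning on it apparently helps the reader.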
Aaron Tay (@aarontay)
[Read] REALM: Retrieval-Augmented Language Model Pre-Training https://arxiv.org/abs/2002.08909 . This 2020 paper seems to have started the idea of adding a retriever to LLMs. I think it didn't catch on because the retriever needs to be trained jointly with the knowledge-augmented encoder....
arxiv.org
REALM: Retrieval-Augmented Language Model Pre-Training
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the...
5:16 PM · May 19, 2023
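The joint-training point is the crux of REALM: retrieval is treated as a latent variable, so the answer likelihood marginalizes over retrieved documents and gradients flow back into the retriever. A toy numeric sketch, assuming a two-document corpus and made-up probabilities:

```python
# Toy REALM training signal: p(answer | query) =
#   sum_z p(answer | query, doc_z) * p(doc_z | query).
# Because the retriever's p(doc_z | query) sits inside this sum, it must
# be trained jointly with the knowledge-augmented encoder.
# All numbers below are illustrative, not from the paper.

p_doc_given_query = [0.7, 0.3]        # retriever's distribution over docs
p_ans_given_query_doc = [0.9, 0.1]    # encoder's answer prob per doc

p_answer = sum(pz * py for pz, py in zip(p_doc_given_query, p_ans_given_query_doc))
print(round(p_answer, 2))  # 0.7*0.9 + 0.3*0.1 = 0.66
```

Contrast with in-context RALM, where the retriever is fixed and nothing is marginalized; plausibly that extra training machinery is why REALM's recipe was harder to adopt.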