They are trained on data from the web, and so pick up statistical correlations between words that make them reasonably good at answering simple, static questions (things like "how far away is the moon from the earth?", which has a single, unchanging factual answer).
However, more nuanced questions, or ones whose factual answers change over time, are difficult or impossible for language models to answer. "Who is the prime minister of the UK?", for example, is actually hilariously hard to answer these days.
The Grounded QA Bot I have been working on lets you deploy a contextualized, factual question-answering conversation bot that uses embeddings, prompting, and web search to work around this fundamental limitation of LLMs.
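The core loop can be sketched roughly like this: search the web for the question, embed the question and the result snippets, pick the snippet most similar to the question, and ground the model's prompt in that snippet. This is a minimal toy sketch, not the bot's actual code: `search_web` is a stub returning canned snippets, and the bag-of-words embedding stands in for a real embedding API.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real bot would call an embedding API.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_web(query):
    # Stub: a real bot would call a search API and scrape result snippets.
    return [
        "The UK prime minister is appointed by the monarch.",
        "The moon is about 384,400 km from the earth.",
    ]

def grounded_prompt(question):
    # Retrieve snippets, keep the one closest to the question in
    # embedding space, and splice it into the prompt as context.
    snippets = search_web(question)
    q_vec = embed(question)
    best = max(snippets, key=lambda s: cosine(q_vec, embed(s)))
    return (
        "Answer the question using only the context below.\n"
        f"Context: {best}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = grounded_prompt("Who is the prime minister of the UK?")
print(prompt)
```

Because the context is fetched at query time, the model's answer can track facts that change after its training data was collected, which is the whole point of grounding.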