Excited to announce our new work: Inference-Time Intervention (ITI), a minimally invasive control technique that significantly improves LLM truthfulness with minimal resources, benchmarked on the TruthfulQA dataset. Preprint: arxiv.org/pdf/2306.03341
We start from the surprising finding that certain attention heads have a clear activation distribution difference for true and false statements. Probing at these points yields upwards of 83% accuracy on TruthfulQA while zero-shot generation is only at 30% accuracy.
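The probing setup can be sketched as a linear classifier on one head's activations. This is an illustrative sketch with synthetic data, not the paper's code or dataset; all names, shapes, and the planted separating direction are assumptions made here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend activations: one vector per statement, taken from a single
# attention head. We plant a separating direction so true and false
# statements have different activation means (as the finding describes).
head_dim = 128
n_statements = 1000
labels = rng.integers(0, 2, size=n_statements)        # 1 = true, 0 = false
direction = rng.normal(size=head_dim)                  # hypothetical "truth" direction
activations = rng.normal(size=(n_statements, head_dim)) \
    + np.outer(labels - 0.5, direction)                # shift means by label

# A linear probe trained on these activations separates the two classes.
X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
```

On real model activations the probe accuracy varies by head; the thread reports up to 83% on the best heads.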
We propose Inference-Time Intervention (ITI): shifting activations along the difference of the two distribution means at inference time; model weights are kept intact. The same intervention is repeated autoregressively for each generated token.
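As a rough sketch (not the paper's implementation; the helper names and the scale `alpha` are made up here), the intervention adds a scaled mean-difference direction to a selected head's activation at every decoding step:

```python
import numpy as np

def iti_direction(true_acts, false_acts):
    """Unit vector along the difference of the two activation means."""
    d = true_acts.mean(axis=0) - false_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def intervene(head_activation, direction, alpha=5.0):
    """Shift one head's activation along the truthful direction.

    Applied at inference only, once per generated token;
    the model weights themselves are never modified.
    """
    return head_activation + alpha * direction

# Usage with synthetic activations (illustrative only):
rng = np.random.default_rng(0)
true_acts = rng.normal(size=(50, 8)) + 1.0
false_acts = rng.normal(size=(50, 8)) - 1.0
d = iti_direction(true_acts, false_acts)
shifted = intervene(np.zeros(8), d, alpha=5.0)
```

Because only a bias-like shift is added at a few heads, the edit is cheap and fully reversible, which is what makes it "minimally invasive."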
Here’s an example. For the same user question, LLaMA with (blue) and without (red) ITI gives different answers. Gold standard answers are: “Scholars thought the Earth was round” and “I have no comment,” respectively. On Alpaca, truthfulness is doubled: 32.5% to 65.1%.
Inference-Time Intervention (ITI) can be thought of as an interventional application of Contrast-Consistent Search (CCS) by , but instead of probing for truthfulness, we let the model “speak” for itself.
We show that LLMs acquire a good grasp of real-world truth from pretraining text, even though that text may be more false than true on many topics. More and more evidence, including my work on the toy model Othello-GPT, points to LLMs building world models during pretraining.
However, what is completely missing from LLMs is a good objective other than minimizing pretraining loss. How to endow models with an aligned objective is an open problem, and ITI is my initial exploration toward this end. Joint work with , , , and .
Consider linkpost-ing this on LessWrong / the Alignment Forum; I'd bet a lot of people would be as excited about this result as I am.
Quote Tweet
combining Contrast-Consistent Search arxiv.org/abs/2212.03827 and linear activation engineering lesswrong.com/posts/5spBue2z seems to work pretty well
Discover more
I keep revisiting this great paper from : “Scaling scaling laws with board games”. It shows how training compute and inference compute of MCTS can be traded off against each other. 10x more MCTS steps is almost the same as training 10x more. arxiv.org/abs/2104.03113
I like this paper. They prove that transformers are guaranteed to suffer from compounding errors when doing long reasoning chains (as has argued), and much apparent "success" is just due to unreliable pattern matching / shortcut learning.
1) Attention heads execute dot-product vector lookup on a key-value store constructed from the token sequence and the head weights.
2) Redis is a key-value store that supports dot-product vector lookup.
Behold, RedisAttend:
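Joking aside, the "dot-product vector lookup" in premise 1 is just scaled dot-product attention read as a soft key-value store. A minimal NumPy sketch (illustrative names; single query, no masking or multi-head machinery):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_lookup(query, keys, values):
    """Soft key-value lookup: dot-product scores -> weighted sum of values."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])  # similarity to each key
    weights = softmax(scores)                           # soft "which entry matched"
    return weights @ values                             # blended retrieved value

# When the query matches one key much more strongly than the rest,
# the lookup returns (approximately) that key's stored value.
keys = np.eye(4) * 10
values = np.arange(4.0).reshape(4, 1)
out = attention_lookup(keys[2], keys, values)
```

The difference from an actual key-value store like Redis: the lookup is a differentiable weighted average over all entries, not an exact-match retrieval of one.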
Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models
Presents a comprehensive dataset of 4,550 questions and solutions from all MIT EECS courses required for obtaining a degree
arxiv.org/abs/2306.08997