Conversation

This has always been my model of one of my root disagreements with Paul Christiano (with myself taking 's side). Words aren't thoughts, they're log files generated by thoughts. It's amazing how far LLMs get on learning from the log files!
Quote Tweet
LLMs do *not* capture much of human thought, because most of human thought and all of animal thought is entirely non verbal. The factual, logical, and physical reasoning mistakes that current LLMs make clearly show that they have *not* captured much of human thought. twitter.com/michael_nielse…
I'm building a system that adds on top of LLMs. Though I don't entirely agree with you either: words aren't thoughts, but sequences of words are thoughts, if they are generated from an internal language that encodes understanding.
Languages that we think of as used by people are communication languages. They transfer information. They are different from, but related to, the internal languages of our inner monologues that encode understanding.
But when you can no longer tell the log files apart, the conclusion should be that they are being created by the same algorithm (i.e. actual thought). We aren't there yet, but for the first time ever I think the road to get there is open.
I find it odd that the internal structure of human thought is considered mysterious. Can't we probe it by analyzing failures? I.e. each failure has to have a reason, and that reason reveals internal structure.
E.g. recently I made a mistake in a simple calculation: 16 - 3 = 11. Why? I was thinking about hours. I strongly associate 16 with 4 in the context of hours, for obvious reasons. To do the subtraction, I needed the last digit of 16, and in the context of hours that became 4 rather than 6.
Agreed. But I also think thought is less special, complex, and exclusive than a lot of self-important humans think.
Sort of reminiscent of this
Quote Tweet
Prompting GPT-3 to reliably generate text and JSON data in a precise format using Python assertions, f‑strings, and variables declared only in our imaginations.
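The trick in the quoted tweet can be sketched as follows. This is a hedged reconstruction, not the author's actual code: Python-style assertions and f-string slots are embedded in the prompt as *imaginary* constraints for the model to satisfy, and the same assertions are then run for real on the reply. The field names and the sample reply are invented; no API call is made.

```python
import json

def build_prompt(city: str) -> str:
    # The "code" in the prompt is never executed by us; it only tells the
    # model what shape of output would make the (imaginary) assertions pass.
    return (
        f'city = "{city}"\n'
        "# Produce a JSON object describing the city.\n"
        "result = json.loads(model_output)\n"
        'assert result["city"] == city\n'
        'assert isinstance(result["population"], int)\n'
        "model_output = "
    )

def validate(model_output: str, city: str) -> dict:
    """Run the prompt's imaginary assertions for real on the model's reply."""
    result = json.loads(model_output)
    assert result["city"] == city
    assert isinstance(result["population"], int)
    return result

# Pretend reply from a model (hard-coded stand-in for an API response):
reply = '{"city": "Paris", "population": 2100000}'
print(validate(reply, "Paris"))
```

The "variables declared only in our imaginations" are the names like `model_output` that exist solely inside the prompt text.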
I like this metaphor. And humans can communicate complex ideas because the log files expand (in the sense of "unzip") across brains with similar experience and organization. In this sense, words are more like encoders that rely (imperfectly) on shared experience for meaning.