The Sapir-Whorf hypothesis (that language defines what we can perceive and think) is mostly wrong for natural language, but true for programming. Computer languages don't differ in what they can do, but in how they let us think.
Yep. I do wonder how much it has to do with Miller-style "chunks": the idea that, roughly speaking, most "natural" languages require about the same number of chunks to represent a given concept, but specialized representations can greatly reduce that number.
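(A minimal illustrative sketch, not from the thread, assuming Python with NumPy: the same quantity expressed once with general-purpose constructs and once with a specialized vectorized representation. The point is only that the second form packs the concept into fewer chunks; both compute the same thing.)

import math
import numpy as np

signal = [1.0, -2.0, 3.0, -4.0]

# General-purpose version: accumulator, loop, squaring, divide, sqrt --
# roughly one chunk per step to keep in working memory.
total = 0.0
for x in signal:
    total += x * x
rms_loop = math.sqrt(total / len(signal))

# Specialized, array-language-style version: closer to one composite chunk,
# "root-mean-square of the signal".
rms_vec = float(np.sqrt(np.mean(np.square(signal))))

assert abs(rms_loop - rms_vec) < 1e-12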
-
That is definitely part of the story, especially since you can describe all mental behavior algorithmically, and represent algorithms using chunks. But I suspect the general answer is that expertise involves building new operators, which then get specialized symbolic descriptors.
-
Those operators are (some of) the new chunks that I believe are acquired. But there's also just general pattern-recognition, along the lines of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.601.2724&rep=rep1&type=pdf
-
New conversation
-
A speculative, almost tongue-in-cheek just-so story in this vein is that g is pretty well correlated with working memory; learning far more efficient chunks in some domain is like effectively expanding your working memory, and thus increasing g in that domain.