The Sapir-Whorf hypothesis (language defines what we can perceive and think) is mostly wrong for natural language, but true for programming. Computer languages don't differ in what they can do but in how they let us think.
Replying to @Plinz
In my notes: "Sapir-Whorf seems to be only weakly true for so-called 'human languages', but strongly true for many other representations (mathematics, many interfaces, music, PL, etc.)". BTW, I don't much like the term "natural" language, though I'll concede it has some utility.
Replying to @michael_nielsen
It might simply be because we now have a largely unified global context, and all linguistic families are required to explore a similar semantic space. This does not apply to specialized semantic areas, which don't have linguistic expressions in natural languages.
Replying to @Plinz
Yep. I do wonder how much it has to do with Miller-style "chunks": roughly speaking, most "natural" languages require about the same number of chunks to represent a given concept, but specialized representations can greatly reduce that number.
A speculative, almost tongue-in-cheek just-so story in this vein is that g is pretty well correlated with working memory; learning far more efficient chunks in some domain is like effectively expanding your working memory, and thus increasing g in that domain.