I won’t @ them but limiting expressivity in order to limit cognitive load and keep codebases approachable is a totally legitimate move in language design. I’d even say essential. It’s all about balance, and expressivity _does_ have tradeoffs. https://twitter.com/SeanTAllen/status/1036236006872305665
-
Replying to @graydon_pub
In the case of generics, leaving out even simple generics carries a high cognitive-load cost. I have no problem with taking it slow and conservative; that's a reasonable place for them. But the koans and zealous arguments against generics disrupt the design process.
2 replies 1 retweet 10 likes -
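To make that cost concrete, a minimal Rust sketch (illustrative only, not from the thread): without generics, every element type needs its own copy of even trivial code, and the reader carries the knowledge that the copies are "really" one function.

```rust
// Without generics: one near-identical copy per type, and the reader
// carries the knowledge that these are "really" the same function.
fn max_i32(a: i32, b: i32) -> i32 {
    if a >= b { a } else { b }
}

fn max_f64(a: f64, b: f64) -> f64 {
    if a >= b { a } else { b }
}

// With even simple generics: one definition to read and understand.
fn max_of<T: PartialOrd>(a: T, b: T) -> T {
    if a >= b { a } else { b }
}

fn main() {
    assert_eq!(max_i32(2, 3), 3);
    assert_eq!(max_f64(2.0, 3.0), 3.0);
    assert_eq!(max_of("ab", "cd"), "cd");
}
```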
Replying to @wycats
I can't say I agree. The cognitive load of any static typing discipline is in mentally modelling the dynamic (unspecified-by-typing) residue; the residue-model gets simpler the fancier the types get, but nonlinearly. Balance is load of residue vs. load of the types themselves.
2 replies 0 retweets 17 likes -
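One way to read "residue" concretely (a hedged sketch of my own, not graydon's example): whatever the types leave unspecified, the programmer tracks in their head; a fancier type shrinks that residue, at the cost of the type itself now having to be understood.

```rust
// Simpler typing: the signature only says i64. The convention that
// -1 means "not found" is residue the caller carries in their head.
fn find_index_v1(haystack: &[i64], needle: i64) -> i64 {
    for (i, &x) in haystack.iter().enumerate() {
        if x == needle {
            return i as i64;
        }
    }
    -1 // sentinel: part of the unspecified-by-typing residue
}

// Fancier typing: Option moves "might be absent" out of the residue
// and into the type, at the cost of the caller now handling Option.
fn find_index_v2(haystack: &[i64], needle: i64) -> Option<usize> {
    haystack.iter().position(|&x| x == needle)
}

fn main() {
    let xs = [10, 20, 30];
    assert_eq!(find_index_v1(&xs, 99), -1);
    assert_eq!(find_index_v2(&xs, 20), Some(1));
}
```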
Replying to @graydon_pub @wycats
To me, the nonlinearity is the key here. Residue is already off the cliff of undecidability, so meaningful chunks are quite hard to take out of it. Whereas simpler type systems impose dramatically less load than more-complex ones (which themselves often fall off the same cliff).
1 reply 0 retweets 2 likes -
Replying to @graydon_pub @wycats
Many users have experienced "my program is simple, but the type system is so complex that I can't for the life of me get it to typecheck", which is... a thing you want to reserve for only the gravest / most-pervasive problems if you want anyone to have the patience to use the thing.
2 replies 1 retweet 11 likes -
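A minimal sketch of the shape of that complaint (an assumed example, not one quoted in the thread): the intent fits in one line, and the annotation the checker demands is the load.

```rust
// The intent is trivial; written without the explicit `'a`, this
// "obvious" function is rejected because the checker cannot tell
// which input the returned reference borrows from.
fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s = String::from("longer one");
    println!("{}", longer(&s, "short"));
}
```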
Replying to @graydon_pub @wycats
My experience is that mostly people who want their simple program to type check are asking either for significantly greater type system complexity or for the ability to ignore some aspect of the underlying language (such as concurrency or mutability).
2 replies 0 retweets 7 likes -
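A sketch of the kind of case being described here (my own illustration, not wycats'): the program looks simple precisely because it ignores mutation during iteration, which the type system refuses to do.

```rust
fn main() {
    let mut xs = vec![1, 2, 3];

    // Looks simple, is rejected: the loop borrows `xs` immutably
    // while the push needs it mutably. The "simple" reading ignores
    // mutation during iteration; the checker does not.
    //
    // for &x in xs.iter() {
    //     if x == 2 {
    //         xs.push(x * 10);
    //     }
    // }

    // The accepted version makes the two phases explicit.
    let to_add: Vec<i32> = xs.iter().filter(|&&x| x == 2).map(|&x| x * 10).collect();
    xs.extend(to_add);

    assert_eq!(xs, vec![1, 2, 3, 20]);
}
```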
IME it's more often subtle misreadings in the user's model of the type system, such as mistaking type envs / bindings for values, mistaking static bounds for existentials, wrong model of unification, variance, implicits, interaction with subtyping, etc.
3 replies 1 retweet 11 likes
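One of those misreadings made concrete (an assumed Rust illustration): a generic parameter is a static bound chosen by the caller, not "some type the function body picks", which is what an existential like `impl Trait` in return position expresses.

```rust
use std::fmt::Display;

// Misreading: "T is some type I pick inside the function."
// In fact the *caller* chooses T, so returning a String here is a
// type error unless T happens to be String:
//
// fn describe<T: Display>() -> T {
//     String::from("hello")   // error: expected type parameter `T`
// }

// What the author usually meant is an existential: some concrete
// type, chosen by the callee, that implements Display.
fn describe() -> impl Display {
    String::from("hello")
}

fn main() {
    println!("{}", describe());
}
```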
Subtyping and variance are big, big causes of cognitive load and more effort to find models that cut down on such pervasive reliance on them would be great.
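To make the variance point concrete (a sketch under my own naming, not from the thread): in Rust, `&'a T` is covariant, so lifetime subtyping passes silently, while `&mut T` is invariant in `T`, which is where the surprising rejections come from.

```rust
// Covariant position: a longer-lived reference quietly stands in for
// a shorter-lived one. Most users never notice this subtyping.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

// Invariant position: `&mut T` is invariant in T, so passing
// `&mut &'static str` forces `'a` to be exactly `'static`.
fn overwrite<'a>(slot: &mut &'a str, new: &'a str) {
    *slot = new;
}

fn main() {
    let long_lived: &'static str = "hello world";
    println!("{}", first_word(long_lived)); // fine: covariance

    let mut stat: &'static str = "static";
    {
        let local = String::from("short-lived");
        // overwrite(&mut stat, &local);
        // ^ rejected: `&mut stat` pins 'a to 'static, so the
        //   short-lived `&local` cannot be `new`; that invariance is
        //   what keeps `stat` from pointing at `local` after it dies.
        let _ = &local;
    }
    println!("{stat}");
}
```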