How should we evaluate tools for thought? There's no simple metric, as far as I can tell. The best tools change your paradigm anyway, so your old metrics (books printed per year?) aren't what matters.
Here's one (vague, but focusing): how much meaning is unlocked on the margin?
That is, you can talk about Mathematica's value by asking how many students use it or whether it improves their test scores, or by timing people as they solve problems with different tools. But its most significant value lies in the profound mathematical insights it produces on the margin.
It's all a variant of Kay's "Sistine chapels per generation," I guess! But the marginal meaning doesn't have to be a grand edifice: the most powerful measure of Twitter as a tool for thought is the transformative (off-platform) personal connections it creates.
It's not clear how to get leading indicators for any of this! As far as I can tell, you want to be on the lookout for very strange stories, like casually making an animation system in Smalltalk at age 12.
Do any of you have good leading indicator stories here?
Great epistemic tools serve as maps that provide dictionaries for moving between different worlds/perspectives. I.e., you have a set of rules by which an understanding of one thing lets you understand something else that is completely different from it.
I have a friend who says we know everything by analogy, "using an understanding of one thing to understand something else".
One can find dimensions along which transformative Tools for Thought (TfT) are *significantly* better. For example, compared with the "competition", Mathematica's (i) documentation has 10x the reach, (ii) code is 2x-10x more concise, and (iii) conceptual reach is 2x-10x broader.
Not sure if this makes sense, but how about the following (admittedly hard to quantify):
how much curiosity does it evoke, and how much action does it propel?
What would be the meaning unlocked on the margin for spaced repetition systems?
Something like insights that arise from looking at old cards, or at cards mixed together? Or something like knowledge compounding?
The metric that matters for SRS is not anything about memory or the cards themselves, but the marginal contribution to some purpose with intrinsic meaning for you. That might be powerful insights in one's original research or whatever; it's probably not what happens "in" Anki.






