I’ve been studying the dynamics of reader memory with the mnemonic medium, running experiments on interventions, etc. A big challenge: I'm essentially trying to understand changes in a continuous value (depth of encoding) through discrete measurements (remembered / didn’t).
I can approximate a continuous measure by looking at populations: “X% of users in situation Y remembered.” Compare that % for situations Y and Y’ to sorta measure an effect. This works reasonably well when many users are “just on the edge” of remembering, and poorly otherwise…
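Roughly the shape of that comparison, as a minimal sketch (the 0/1 outcomes below are invented, not real data):

```python
# Hypothetical sketch: estimate an effect on a hidden continuous variable
# by comparing recall rates across two situations. Outcomes are made up.

def recall_rate(outcomes):
    """Fraction of users who remembered; outcomes are 1 (remembered) / 0 (didn't)."""
    return sum(outcomes) / len(outcomes)

situation_y       = [1, 0, 1, 1, 0, 1, 0, 1]  # situation Y
situation_y_prime = [1, 1, 1, 0, 1, 1, 1, 1]  # situation Y'

print(f"Y:  {recall_rate(situation_y):.0%}")
print(f"Y': {recall_rate(situation_y_prime):.0%}")
print(f"Estimated effect: {recall_rate(situation_y_prime) - recall_rate(situation_y):+.0%}")
```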
It’s a threshold function on the underlying distribution. Imagine that a person will remember something iff their depth-of-encoding (a hidden variable), plus some random noise (situation), is greater than some threshold. Our population measure can distinguish A vs A’ (distributions near the threshold) but not B vs B’ (distributions far from it).
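Here’s a toy simulation of that model (all parameters invented; “near” vs “far” from the threshold is my reading of A/A’ vs B/B’):

```python
# Toy simulation of the threshold model above; all numbers are invented.
# A reader remembers iff hidden depth-of-encoding + situational noise > threshold.
import random

random.seed(0)
THRESHOLD = 0.0

def observed_recall_rate(mean_encoding, noise_sd=1.0, n=100_000):
    """Fraction of simulated readers whose encoding plus noise clears the threshold."""
    hits = sum(random.gauss(mean_encoding, noise_sd) > THRESHOLD for _ in range(n))
    return hits / n

# Near the threshold, a shift in the hidden variable shows up in the recall rate...
print(observed_recall_rate(-0.2), observed_recall_rate(+0.2))  # clearly different
# ...but the same-sized shift far above the threshold is nearly invisible (ceiling).
print(observed_recall_rate(+2.8), observed_recall_rate(+3.2))  # both ≈ 1.0
```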
So it works pretty well initially, when the distribution’s spread out. For example: I’ve been running an RCT on retry mechanics. Of readers who forget an answer while reading an essay, about 20% more succeed at their first review when the in-essay prompt gave them a chance to retry.
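For a sense of how a number like that falls out of the raw counts (the counts below are placeholders, not the experiment’s actual data):

```python
# Difference in first-review success rates between the two arms of the RCT.
# Placeholder counts; only the shape of the comparison is real.
import math

def rate_and_ci(successes, n, z=1.96):
    """Success proportion with a normal-approximation 95% confidence interval."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

retry_p, retry_ci       = rate_and_ci(310, 500)  # in-essay prompt offered a retry
no_retry_p, no_retry_ci = rate_and_ci(210, 500)  # no retry offered

print(f"retry:    {retry_p:.0%}  95% CI {retry_ci[0]:.0%} to {retry_ci[1]:.0%}")
print(f"no retry: {no_retry_p:.0%}  95% CI {no_retry_ci[0]:.0%} to {no_retry_ci[1]:.0%}")
print(f"difference: {retry_p - no_retry_p:+.0%}")
```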
Subjectively this significantly helps me! Really made a difference when you added it (I think!)
Yes, reader interviews were pretty positive about it! Though sometimes (as you’ve pointed out) it can perhaps be too pushy / literal. I’m mostly interested in it because it shows that “simple” mechanism changes can move the needle (so there may be more low-hanging fruit).
The Reverse New Coke effect in UX: changes users hate at first, complain about bitterly, then get to know the new thing, love it, forget they ever thought the old thing was better