Software interfaces undervalue peripheral vision! (a thread) My physical space is full of subtle cues. Books I read or bought most recently are lying out. Papers are lying in stacks on my desk, roughly arranged by their relationships.
Peripheral vision spontaneously prompts action. If I need to fix a door, I’ll be reminded each time I see it. Digital task lists live in a dedicated app; I have no natural cause to look at that app regularly, so I have to establish a new habit of explicitly reviewing my task list.
Peripheral vision emphasizes the concrete. Unread digital books and papers live in some folder or app, invisible until I decide that “it’s reading time.” But that confuses cause and effect.
If I leave books lying on my coffee table, I’ll naturally notice them at receptive moments. I’ll read a book if I feel an actual, concrete interest in it. By contrast, the motivation to read a digital book has to come from an abstract interest in the habit of reading.
Peripheral vision offers context. If I mark up a physical book then later flip through to see my margin notes, I’ll always see them in the context of the surrounding text. By contrast, digital annotation listings usually display only the text I highlighted, removed from context.
The primary “unit” in such systems is a single highlight or note, but that’s not how I think. Marginalia have fuzzy boundaries, and I often think of a page’s markings as a single unit. LiquidText is a lovely counterexample: it works hard to display annotations in context.
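To make the data-model point concrete, here is a minimal sketch of what a context-preserving annotation model might look like, one whose unit is the page rather than the individual highlight. The type and field names are hypothetical illustrations, not LiquidText’s (or any real app’s) actual schema.

```typescript
// Hypothetical annotation model: the unit is a page's worth of markings,
// and the page's full text is retained so listings can show marks in context.

interface Mark {
  kind: "highlight" | "marginNote" | "underline";
  text: string;                 // the exact text the mark covers (empty for pure margin notes)
  note?: string;                // the reader's own words, if any
  charRange: [number, number];  // character offsets within the page's text
}

interface PageAnnotations {
  pageNumber: number;
  pageText: string;             // full text of the page, kept for context
  marks: Mark[];                // all markings on this page, treated as one unit
}

// Render a listing entry: rather than showing each highlight in isolation,
// embed every mark in a window of the surrounding text.
function renderInContext(page: PageAnnotations, contextChars = 120): string {
  return page.marks
    .map((mark) => {
      const [start, end] = mark.charRange;
      const before = page.pageText.slice(Math.max(0, start - contextChars), start);
      const after = page.pageText.slice(end, end + contextChars);
      const note = mark.note ? `  [note: ${mark.note}]` : "";
      return `…${before}«${mark.text}»${after}…${note}`;
    })
    .join("\n");
}
```

The point of the sketch is the storage choice: keeping `pageText` alongside the marks means context is always available to the listing view without reopening the source document.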
Replying to
I thought it was still getting some development... in any case, the concept you brought up of "peripheral vision" makes a lot of sense to me. Too much high-quality content is hidden away and so gets lost.
Just saw this reply—curious if he had anything interesting to say! Not so much about my writing (don't think there would be anything new to him there), but just his current thinking on the subject.