
Software interfaces undervalue peripheral vision! (a thread) My physical space is full of subtle cues. Books I read or bought most recently are lying out. Papers are lying in stacks on my desk, roughly arranged by their relationships.
Image
Peripheral vision spontaneously prompts action. If I need to fix a door, I’ll be reminded each time I see it. Digital task lists live in a dedicated app. I have no natural cause to look at that app regularly, so I need to establish a new habit to explicitly review my task list.
Peripheral vision emphasizes the concrete. Unread digital books and papers live in some folder or app, invisible until I decide that “it’s reading time.” But that confuses cause and effect.
Image
If I leave books lying on my coffee table, I’ll naturally notice them at receptive moments. I'll read a book if I feel an actual, concrete interest in it. By contrast, the motivation to read a digital book comes from abstract interest in the habit of reading.
Peripheral vision offers context. If I mark up a physical book then later flip through to see my margin notes, I’ll always see them in the context of the surrounding text. By contrast, digital annotation listings usually display only the text I highlighted, removed from context.
Image
The primary “unit” in such systems is a single highlight or note, but that’s not how I think. Marginalia have fuzzy boundaries, and I often think of a page’s markings as a single unit. LiquidText is a lovely counterexample: it works hard to display annotations in context.
Embedded video (GIF)
In digital note systems, the UI centers on the experience of writing one note. The core operations and representations fixate on “the note you have open,” not on larger structures. I often can’t simultaneously see another note I’ve just finished writing—let alone the last four.
Most systems barely support multiple windows, but even if I can open multiple windows, it’s awkward to arrange them into the spatial relationships I might naturally use for physical index cards. Rather than peripheral vision, it’s like I’m wearing horse blinders and mittens.
Image
Backlinks are a weak form of peripheral vision. They help, but they’re generally about switching the one note you have open, not an effective means of sense-making across many notes. Contextual backlinks help more, but if you navigate, you lose object permanence.
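To make the distinction concrete, here’s a minimal sketch of what “contextual backlinks” might look like as a data structure. All names here are hypothetical illustrations, not any real app’s API; the point is simply that the index stores a surrounding excerpt alongside each link, so a backlink listing can show the link in context rather than in isolation.

```python
# Hypothetical sketch: a backlink index that keeps surrounding context,
# rather than storing only the bare link or highlighted text.
from dataclasses import dataclass


@dataclass
class Backlink:
    source_note: str  # the note the link appears in
    target_note: str  # the note being linked to
    excerpt: str      # surrounding sentence(s), kept for context


class BacklinkIndex:
    def __init__(self):
        self.links = []

    def add(self, source, target, excerpt):
        self.links.append(Backlink(source, target, excerpt))

    def backlinks_for(self, target):
        # Return each linking note *with* its surrounding excerpt,
        # so the reader sees the reference embedded in its context.
        return [(l.source_note, l.excerpt)
                for l in self.links if l.target_note == target]


idx = BacklinkIndex()
idx.add("Spaced repetition", "Peripheral vision",
        "Prompts could surface in the periphery; see [[Peripheral vision]].")
print(idx.backlinks_for("Peripheral vision"))
```

Even this keeps only a local window of text, which illustrates the thread’s complaint: the excerpt approximates context, but you still lose the surrounding page the moment you navigate.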
If I read an old digital note, I get the unnerving sense that it’s part of some “whole” that I can’t see at all—no matter how much hypertext is involved. Working with physical notes, I’d shuffle notes around to make sense of the structure. There isn’t a digital equivalent.
Roam's certainly trying! I'm curious about approaches which maintain object permanence. Most approaches, including Roam's, are heavy on "switching the primary focus" as a core operation. I want to see more unusual ideas! Most attempts here are so boring. Here's a weird prototype:
Embedded video
Would you consider heads-up displays in video games to be peripheral? Seems a little different, since a HUD is omnipresent during play, unlike a passing context that you encounter infrequently.
Yep, this
Quote Tweet (replying to @samim)
I've simplified this behavior into 'sessions' where pages (usually topic-clustered by window) are saved into browsing sessions. I can then search these later and rediscover either the source — or the fuzzy related content based on my own organization. Contextual read-it-later!
Image
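The "sessions" idea above can be sketched as a small store: pages are grouped by the window they were open in, saved under a session name, and searched later so that a hit also surfaces its fuzzily related neighbors. This is a hypothetical illustration of the described workflow, not the actual tool's implementation; all names are invented.

```python
# Hypothetical sketch of window-clustered browsing sessions:
# save pages grouped by window, then search across sessions later.
class SessionStore:
    def __init__(self):
        # session name -> {window label -> [page titles]}
        self.sessions = {}

    def save(self, name, windows):
        self.sessions[name] = windows

    def search(self, term):
        # Return (session, window, page) for any matching page; the
        # session and window identify the related pages saved alongside it.
        hits = []
        for name, windows in self.sessions.items():
            for win, pages in windows.items():
                for page in pages:
                    if term.lower() in page.lower():
                        hits.append((name, win, page))
        return hits


store = SessionStore()
store.save("2020-06 reading", {
    "research": ["Peripheral vision in UI", "Spatial hypertext survey"],
    "tools": ["LiquidText review"],
})
print(store.search("hypertext"))
```

Because each hit carries its session and window, rediscovering one page also points back to the cluster it was saved with, which is what makes this a "contextual read-it-later."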
I use workspaces heavily to separate my windows by task context: each workspace holds the windows laid out for one task. It’s good because you’re reminded of them as you switch between workspaces, but lacking in that there isn’t much visual cue that more than one workspace exists.