Software interfaces undervalue peripheral vision! (a thread)
My physical space is full of subtle cues. The books I've read or bought most recently lie out in the open; papers sit in stacks on my desk, roughly arranged by their relationships.
Peripheral vision spontaneously prompts action.
If I need to fix a door, I'll be reminded each time I see it. Digital task lists live in a dedicated app. I have no natural cause to look at that app regularly, so I must establish a whole new habit of explicitly reviewing my task list.
Peripheral vision emphasizes the concrete.
Unread digital books and papers live in some folder or app, invisible until I decide that “it’s reading time.” But that confuses cause and effect.
If I leave books lying on my coffee table, I'll naturally notice them at receptive moments. I'll read a book if I feel an actual, concrete interest in it. By contrast, the motivation to read a digital book has to come from an abstract interest in the habit of reading.
Peripheral vision offers context.
If I mark up a physical book then later flip through to see my margin notes, I’ll always see them in the context of the surrounding text. By contrast, digital annotation listings usually display only the text I highlighted, removed from context.
The primary “unit” in such systems is a single highlight or note, but that’s not how I think. Marginalia have fuzzy boundaries, and I often think of a page’s markings as a single unit.
LiquidText is a lovely counterexample: it works hard to display annotations in context.
In digital note systems, the UI centers on the experience of writing one note. The core operations and representations fixate on “the note you have open,” not on larger structures. I often can’t simultaneously see another note I’ve just finished writing—let alone the last four.
Most systems barely support multiple windows, and even when I can open several, it's awkward to arrange them into the spatial relationships I'd naturally use for physical index cards. Rather than peripheral vision, it's like I'm wearing horse blinders and mittens.
Backlinks offer a weak form of peripheral vision. They help, but they're mostly a way to switch which single note you have open, not an effective means of sense-making across many notes. Contextual backlinks help more, but the moment you navigate, you lose object permanence.
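(A sketch of what "contextual" could mean at the data level: capture the passage around each link, so a backlink pane can show it in place without navigating. All names and shapes here are hypothetical, not any particular app's API.)

```ts
// Hypothetical data model for contextual backlinks: each link carries
// the passage surrounding it, so a backlink pane can render context
// in place instead of forcing navigation. Names are illustrative only.

interface Backlink {
  fromNoteId: string; // the note containing the link
  toNoteId: string;   // the note being linked to
  context: string;    // the paragraph around the link, captured at link time
}

// Index links by target, so a note can answer "who mentions me, and how?"
function buildBacklinkIndex(links: Backlink[]): Map<string, Backlink[]> {
  const index = new Map<string, Backlink[]>();
  for (const link of links) {
    const bucket = index.get(link.toNoteId) ?? [];
    bucket.push(link);
    index.set(link.toNoteId, bucket);
  }
  return index;
}
```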
If I read an old digital note, I get the unnerving sense that it’s part of some “whole” that I can’t see at all—no matter how much hypertext is involved. Working with physical notes, I’d shuffle notes around to make sense of the structure. There isn’t a digital equivalent.
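(And a minimal sketch of what a digital equivalent might look like: notes as cards on a canvas, where rearranging is the core operation and position itself carries meaning. Again, everything here is hypothetical, not an existing app.)

```ts
// A minimal sketch of the missing equivalent: note cards on an infinite
// canvas, where spatial proximity encodes relatedness. Hypothetical types.

interface NoteCard {
  id: string;
  text: string;
  x: number; // canvas position; nearness suggests relationship
  y: number;
}

// Shuffling a card around is the core sense-making move: cheap,
// reversible, and visible in the periphery while you work elsewhere.
function moveCard(cards: NoteCard[], id: string, dx: number, dy: number): NoteCard[] {
  return cards.map(c => (c.id === id ? { ...c, x: c.x + dx, y: c.y + dy } : c));
}
```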
this is why notebooks rarely work for me when i'm designing on paper. i don't care how nice the notebook is! i always prefer a stack of printer paper that i can rearrange, put side-by-side, spread out in front of me, hand to someone, etc. notebooks always cover up your last thought.
also: i always wanted an iPad in college just to use LiquidText. they don't have it for mac, so for my research papers i would end up screenshotting snippets of PDFs, arranging them in Sketch, and then manually drawing line connections. but no link back to the original PDF 😞
The original meaning of "copy and paste", in the newspaper or book-manuscript sense, has never been returned to us.
Cutting out paragraphs, rearranging them by hand, laying notes and text out in space, then capturing the result.
This seems digitally doable, but it hasn't been done.
I was quite successful with Keynote when working on presentations reflecting on, e.g., the past year's work. The speed at which I can move slides around and copy-paste content to create WIP scratchpad slides allows such flow that I'm starting to "think with my hands".
Relatedly, there are lots of fun attempts at simultaneously showing the forest and the trees in dense graph visualization. The general approaches alone don’t seem to be enough, but maybe combined with some domain-specific semantics...
This is a nice one:
http://yunhaiwang.net/infovis18/fisheye/index.html
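(If you want to play with the idea: here's a minimal sketch of the classic Sarkar-Brown radial fisheye transform that such views build on. It's not the structure-aware method from the linked paper, and the parameter names are mine.)

```ts
// A sketch of the classic Sarkar-Brown radial fisheye transform, the
// basic distortion that fisheye graph views build on. Not the method
// from the linked paper; all parameter names here are assumptions.

interface Point { x: number; y: number }

// Magnify space near `focus` inside a lens of radius `r`, compressing
// the rest so the lens boundary stays fixed. `d` >= 0 is the distortion
// factor; d = 0 leaves everything unchanged.
function fisheye(p: Point, focus: Point, r: number, d: number): Point {
  const dx = p.x - focus.x;
  const dy = p.y - focus.y;
  const dist = Math.hypot(dx, dy);
  if (dist >= r || dist === 0) return p; // outside the lens: untouched
  const t = dist / r;                    // normalized distance in [0, 1)
  const g = ((d + 1) * t) / (d * t + 1); // Sarkar-Brown distortion curve
  const scale = (g * r) / dist;
  return { x: focus.x + dx * scale, y: focus.y + dy * scale };
}

// A node 10px from the focus gets pushed ~42px out; the periphery stays put.
console.log(fisheye({ x: 110, y: 100 }, { x: 100, y: 100 }, 200, 4));
```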