Summer project: “PageRank” my antilibrary to surface the top books I should actually read. For each pair of books, I will score the strength of the link using a sort of Hebbian heuristic of whether they evoke correlated unconscious intentions. Then cluster, label, rank...
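The ranking step can be sketched as ordinary PageRank run on a weighted affinity graph. A minimal sketch, assuming the pairwise scores have already been collected into a symmetric matrix `A` (entry `A[i][j]` = link strength between book i and book j; all names and the toy numbers are illustrative):

```python
import numpy as np

def pagerank(A, damping=0.85, iters=100):
    """Power iteration on a row-normalized affinity matrix."""
    n = A.shape[0]
    # Normalize rows so each book distributes its "vote" across its links.
    row_sums = A.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard against isolated books
    P = A / row_sums
    r = np.full(n, 1.0 / n)                # start from a uniform ranking
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return r

# Toy example: 3 books, books 0 and 1 strongly linked, book 2 peripheral.
A = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
ranks = pagerank(A)
top = np.argsort(ranks)[::-1]              # indices of top books to read
```

The clustering and labeling steps would sit on top of the same matrix; the ranking alone just needs the power iteration above.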
Shit why didn’t I think of this 20y ago 🤬 I’d guess I have maybe 300 books. That’s 300 × 299 / 2 = 44,850 pairwise scorings. At 10 s a pair that works out to about 124 hours of tedium. Plus I’d need to enter all my books and write a little program to present them in all possible pairs.
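The "little program" and the tedium estimate fit in a few lines; a sketch, with placeholder titles standing in for the real library:

```python
from itertools import combinations

# Placeholder titles; the real list would come from the catalogued shelf.
books = [f"book_{i}" for i in range(300)]

# Every unordered pair, ready to present one at a time for scoring.
pairs = list(combinations(books, 2))
n_pairs = len(pairs)            # 300 * 299 / 2 = 44850
hours = n_pairs * 10 / 3600     # ~124.6 hours at 10 s per pair
```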
Hmm. Algorithm: use a book’s existing scores to auto-score its remaining links? There’s got to be a way to simplify this.
If there are 100 books, each book has a vector of 99 affinities. If book 2’s vector matches book 1’s on, say, the first 20 entries, could we auto-fill the rest to match? Maybe manually score x% of the pairs, then estimate the rest by interpolating among the most similar vectors based on the partial scores? 🤔
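The interpolation idea is essentially nearest-neighbour imputation: compute each pair of books' similarity on the entries both have scored, then fill a book's missing affinities from its most similar neighbours. A sketch under those assumptions (NaN marks unscored pairs; the function name and `k` are illustrative, not a real API):

```python
import numpy as np

def impute_affinities(A, k=3):
    """k-nearest-neighbour fill-in of missing entries (NaN) in A."""
    n = A.shape[0]
    filled = A.copy()
    for i in range(n):
        missing = np.isnan(A[i])
        if not missing.any():
            continue
        # Rank other books by correlation over the jointly scored entries.
        sims = []
        for j in range(n):
            if j == i:
                continue
            overlap = ~np.isnan(A[i]) & ~np.isnan(A[j])
            if overlap.sum() < 2:
                continue
            c = np.corrcoef(A[i][overlap], A[j][overlap])[0, 1]
            if not np.isnan(c):
                sims.append((c, j))
        sims.sort(reverse=True)
        # Fill each gap with the mean score of the k most similar books.
        for m in np.where(missing)[0]:
            donors = [A[j][m] for _, j in sims[:k] if not np.isnan(A[j][m])]
            if donors:
                filled[i][m] = np.mean(donors)
    return filled

# 4 books, two pairs left unscored (NaN).
A = np.array([[0.0,    0.9, 0.1, np.nan],
              [0.9,    0.0, 0.2, 0.8],
              [0.1,    0.2, 0.0, 0.3],
              [np.nan, 0.8, 0.3, 0.0]])
filled = impute_affinities(A)
```

This is the same move collaborative filtering makes with sparse user–item ratings, so the usual caveat applies: it only works if books that agree on the scored 20 really do agree on the unscored 79.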
There’s a product idea here. Step 1: take pictures of the bookshelf. Step 2: an mturk HIT to turn them into a spreadsheet. Step 3: estimate the affinity graph via full scoring of a random subset and partial scoring with autocomplete of the rest.