Summer project: “PageRank” my antilibrary to surface the top books I should actually read. For each pair of books, I will score the strength of the link using a sort of Hebbian heuristic of whether they evoke correlated unconscious intentions. Then cluster, label, rank...
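The ranking step above can be sketched with plain power iteration, assuming the pairwise affinity scores have already been collected into a symmetric matrix (the matrix `A`, the function name, and the toy numbers below are illustrative, not from the thread):

```python
import numpy as np

def rank_books(A, damping=0.85, iters=100):
    """PageRank-style score per book from a symmetric affinity matrix A,
    where A[i][j] is the strength of the link between book i and book j."""
    n = A.shape[0]
    # Column-normalize so each book distributes one unit of affinity mass.
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against isolated books
    M = A / col_sums
    r = np.full(n, 1.0 / n)                # uniform starting scores
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return r

# Toy example: 3 books, book 0 strongly linked to the other two.
A = np.array([[0.0, 0.9, 0.8],
              [0.9, 0.0, 0.1],
              [0.8, 0.1, 0.0]])
scores = rank_books(A)                     # book 0 comes out on top
```

With real data you would cluster and label on the same matrix before ranking; the power iteration itself is the only moving part shown here.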
Still feels like it would be a worthwhile effort. Antilibraries are neural metadata.
Hmm. Algorithm: use previous scores of a book to auto-score other links? There’s got to be a way to simplify this.
If there are 100 books and if book 1’s vector of 99 affinities is matched by book 2’s vector for say the first 20, could we auto-fill the rest to match? Maybe manually score x% of the books, then estimate the rest via interpolation among most similar vectors based on partials?
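That autofill idea can be sketched as nearest-neighbor completion: fully score a subset of books, partially score the rest, and copy each missing entry from the fully scored book whose vector agrees most closely on the entries both have in common. A minimal sketch, assuming missing scores are marked `NaN` (the function name and example numbers are mine, not from the thread):

```python
import numpy as np

def autofill(scores):
    """scores: (n, n) affinity matrix with np.nan for unscored pairs.
    Fill each book's missing entries from the nearest fully scored book,
    judged by mean squared difference on the commonly known entries."""
    n = scores.shape[0]
    filled = scores.copy()
    complete = [i for i in range(n) if not np.isnan(scores[i]).any()]
    for i in range(n):
        if i in complete:
            continue
        known = ~np.isnan(scores[i])
        # Complete book whose known entries best match this book's partial vector.
        best = min(complete,
                   key=lambda j: np.mean((scores[i][known] - scores[j][known]) ** 2))
        missing = np.isnan(filled[i])
        filled[i][missing] = scores[best][missing]
    return filled

# 4 books; book 3 has only been scored against book 0.
S = np.array([[0.0, 0.9, 0.2, 0.8],
              [0.9, 0.0, 0.1, 0.7],
              [0.2, 0.1, 0.0, 0.3],
              [0.8, np.nan, np.nan, 0.0]])
F = autofill(S)
```

Interpolating among the k most similar vectors (rather than copying from the single best match) would be the obvious refinement, at the cost of a weighting scheme.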
There’s a product idea here. Step 1: take pictures of your bookshelf. Step 2: post an Mturk HIT to turn them into a spreadsheet. Step 3: estimate the affinity graph via full scoring of a random subset and partial scoring with autocomplete of the rest.
I’d guess I have maybe 300 books. That’s 44,850 pairwise scorings. At 10s a pair, that works out to about 124 hours of tedium. Plus I’d need to enter all my books and write a little program to present them in all possible pairs.
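The arithmetic checks out: 300 books give "300 choose 2" pairs, and at 10 seconds each:

```python
from math import comb

n_books = 300
pairs = comb(n_books, 2)        # 300 * 299 / 2
seconds = pairs * 10            # 10 seconds per pairwise scoring
hours = seconds / 3600
# pairs == 44850, hours ~ 124.6
```

Which is exactly why the partial-scoring-plus-autofill shortcut in the earlier tweets matters: fully scoring even 10% of the pairs cuts this to roughly half a day.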