Shit why didn’t I think of this 20y ago
I’d guess I have maybe 300 books. That’s 44,850 pairwise scorings. At 10 s a pair, that works out to about 124 hours of tedium. Plus I’d need to enter all my books and write a little program to present them in all possible pairs.
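The arithmetic and the "little program to present them in all possible pairs" can be sketched in a few lines. This is a hypothetical helper (the name `pair_schedule` and the random shuffle are my assumptions, not from the thread):

```python
import itertools
import random

def pair_schedule(books, seconds_per_pair=10):
    """Enumerate every unordered pair of books in random order,
    and report how long rating them all would take."""
    pairs = list(itertools.combinations(books, 2))
    random.shuffle(pairs)  # avoid rating one book against all others in a row
    hours = len(pairs) * seconds_per_pair / 3600
    return pairs, hours

# 300 books -> 300 * 299 / 2 = 44850 pairs, ~124.6 hours at 10 s each
pairs, hours = pair_schedule([f"book{i}" for i in range(300)])
print(len(pairs), round(hours, 1))  # 44850 124.6
```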
Still feels like it would be a worthwhile effort. Antilibraries are neural metadata.
Hmm. Algorithm: use previous scores of a book to auto-score other links? There’s got to be a way to simplify this.
If there are 100 books and if book 1’s vector of 99 affinities is matched by book 2’s vector for say the first 20, could we auto-fill the rest to match? Maybe manually score x% of the books, then estimate the rest via interpolation among most similar vectors based on partials?
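The interpolation idea above can be sketched as a nearest-neighbor fill: score a few "anchor" books completely, and for each partially scored book, find the anchor whose vector agrees best on the entries they share, then borrow that anchor's values for the missing slots. Everything here (the `fill_partial` name, the dict-of-dicts layout, the squared-difference distance) is my assumption, not the thread's exact scheme:

```python
import math

def fill_partial(scores, anchors):
    """scores: dict book -> {other_book: affinity}, possibly partial.
    anchors: set of books with complete affinity vectors.
    For each non-anchor book, copy the best-matching anchor's values
    into that book's missing entries."""
    def distance(a, b):
        shared = set(a) & set(b)
        if not shared:
            return math.inf
        # mean squared disagreement on the entries both vectors have
        return sum((a[k] - b[k]) ** 2 for k in shared) / len(shared)

    for book, vec in scores.items():
        if book in anchors:
            continue
        nearest = min(anchors, key=lambda x: distance(vec, scores[x]))
        for other, val in scores[nearest].items():
            vec.setdefault(other, val)  # fill only the missing slots
    return scores

scores = {
    "A": {"B": 0.9, "C": 0.1, "D": 0.8},  # fully scored anchor
    "E": {"B": 0.9, "C": 0.1},            # partial: agrees with A so far
}
fill_partial(scores, anchors={"A"})
print(scores["E"]["D"])  # 0.8, borrowed from A
```

A fancier version might blend several near anchors instead of copying the single nearest, but the one-anchor copy is the simplest instance of "estimate via interpolation among most similar vectors based on partials."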
There’s a product idea here. Step 1: take pictures of the bookshelf. Step 2: MTurk HIT to turn them into a spreadsheet. Step 3: estimate the affinity graph via full scoring of a random subset and partial scoring with autocomplete of the rest.
End of conversation

New conversation
How dare you disrupt the sacred serendipity of the antilibrary??! Kidding, I love weird book data & this sounds fun. But for me an antilibrary easily compounds & some books are way more "signal" than others. Maybe first hand-curate intuitive top 50-100 & test w/ that?