do you think there's value in non-syncable, non-collaborative "local-first"? or maybe a world where those features come from just working with git/GitHub instead of hard-to-perfect models?
Yeah, definitely. In fact, I tried hard to design a format which could just be synced as dumb flat files without constant conflicts. But in practice, any app which needs database-like features (querying, indexing) will find this quite difficult.
You may be able to separate the DB features from the data sync. Obsidian is just md files that it syncs, but there must be indexes around to speed things up.
Yes, this is an exciting direction! It strikes me as tricky and likely to involve lots of subtle bugs, but then again, so do basically all solutions in this space. And I'd love to expose a plain file structure.
If it’s easy with your structure to delete and reindex whenever a data file changes, it seems like that part may be easier than the sync itself.
I'd probably have to do it incrementally. Typical loads in my case would have 10^3-10^5 files. Actually, syncing a tree of 10^5 files is non-trivial! Not really sure how to provide hosting services for that.
Incremental indexing and sync for sure. 10^5 files is not a big deal for something like S3
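The incremental approach being suggested can be sketched as a snapshot diff: record each file's mtime and size on one pass, then on the next pass reindex only what changed. This is an illustrative sketch, not code from any actual app; the function names are made up.

```python
import os

def scan(root):
    """Walk root and return {relative path: (mtime_ns, size)} for every file."""
    snapshot = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            snapshot[os.path.relpath(path, root)] = (st.st_mtime_ns, st.st_size)
    return snapshot

def diff(old, new):
    """Compare two snapshots; only files in these sets need (re)indexing or deletion."""
    added = set(new) - set(old)
    removed = set(old) - set(new)
    modified = {p for p in set(old) & set(new) if old[p] != new[p]}
    return added, removed, modified
```

Comparing both mtime and size is a common cheap heuristic; hashing content would be stricter but defeats the point of an incremental pass.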
?? Wouldn't naive syncing involve lstat-ing 10^5 files locally? Hard to see how to get away with that!
I haven’t tried Obsidian with that many files, but I was assuming it would use file watcher API when it’s running. On mobile, probably put everything in SQLite instead?
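The "put everything in SQLite" idea amounts to persisting the file-metadata snapshot in a table, so a restart (or a mobile app without watcher APIs) doesn't need a full rescan to know what it last saw. A minimal sketch with Python's built-in sqlite3; the schema and names are illustrative, not Obsidian's actual design:

```python
import sqlite3

def init_index(db_path):
    # One row per file; path is the key so re-recording a file overwrites its row.
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS files (
               path     TEXT PRIMARY KEY,
               mtime_ns INTEGER NOT NULL,
               size     INTEGER NOT NULL
           )"""
    )
    return con

def record(con, path, mtime_ns, size):
    # UPSERT: insert a new row, or update the existing one for this path.
    con.execute(
        "INSERT INTO files (path, mtime_ns, size) VALUES (?, ?, ?) "
        "ON CONFLICT(path) DO UPDATE SET "
        "mtime_ns = excluded.mtime_ns, size = excluded.size",
        (path, mtime_ns, size),
    )
    con.commit()
```

On the next launch, the table plays the role of the "old" snapshot: stat the tree once, diff against these rows, and reindex only the differences.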
Not sure how open they are about such things, but you could try asking in the Obsidian Discord if they’d give pointers about sync for lots of files. Not sure how many they’ve tested their sync with.
Just ran a test with 50k files on my 2-year-old MBP: lstat-ing all of them took 200ms! Surprisingly good! Wow… maybe this *is* tractable…
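That experiment is easy to reproduce. A rough sketch (file count and directory layout are illustrative; absolute timings will vary by machine and filesystem cache state):

```python
import os
import time

def make_tree(root, n):
    """Create n empty .md files, spread 1000 per subdirectory."""
    for i in range(n):
        d = os.path.join(root, f"dir{i // 1000:03d}")
        os.makedirs(d, exist_ok=True)
        open(os.path.join(d, f"note{i:06d}.md"), "w").close()

def time_lstat_pass(root):
    """Walk the tree, lstat every file, and return (file_count, seconds)."""
    count = 0
    start = time.perf_counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            os.lstat(os.path.join(dirpath, name))
            count += 1
    return count, time.perf_counter() - start
```

Note that a warm filesystem cache flatters this number; a cold-cache first run after boot would be the pessimistic case for sync startup.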


