I have a question — sorry for necro'ing! — but how would I do this if it's utterly impractical to manually click to discover download URLs?
Imagine you have hundreds of big data files and you know the average user/reader can't download 5 GB in one go. But they could take a subset!
-
-
Maybe some form of pattern matching? Want to open an issue on the GitHub repo for this?
-
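A minimal sketch of what that globbing could look like, assuming osfclient's Python API (OSF, project.storage, storage.files, File.write_to); the project ID and pattern below are made-up examples, and fetch_matching is a hypothetical helper, not an existing command:

    # Sketch only: download just the files whose OSF path matches a glob.
    import os
    from fnmatch import fnmatch

    from osfclient import OSF

    def fetch_matching(project_id, pattern, provider='osfstorage'):
        """Fetch files from an OSF project whose path matches `pattern`."""
        storage = OSF().project(project_id).storage(provider)
        for f in storage.files:
            # f.path is the remote path, e.g. '/data/part-001.csv'
            if fnmatch(f.path, pattern):
                local = f.path.lstrip('/')
                os.makedirs(os.path.dirname(local) or '.', exist_ok=True)
                with open(local, 'wb') as fp:  # write_to expects binary mode
                    f.write_to(fp)

    # e.g. let a reader grab only one shard of the 5 GB:
    # fetch_matching('abc12', '/data/part-00*.csv')

-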
Sure, but for which one? My request to download subfolders, the ability to get the current subdirectories' names, or your globbing idea?
-
For subfolders, just do a clone against that subfolder's GUID? For the rest: one issue for expanding ls to include GUIDs (it could include names too), and a new issue for globbing.
-
I tried to clone a subfolder but got the whole OSF project. Can you link me to the right way to do it, please?
-
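Until a subfolder clone is supported directly, a workaround sketch in the same vein: filter on a path prefix instead of a glob pattern, so only one folder's files come down rather than the whole project. Again, the folder path, project ID, and clone_subfolder helper are hypothetical:

    # Sketch only: 'clone' a single subfolder by filtering on its path prefix.
    import os

    from osfclient import OSF

    def clone_subfolder(project_id, folder, provider='osfstorage'):
        """Download every file stored under `folder` in an OSF project."""
        prefix = '/' + folder.strip('/') + '/'
        storage = OSF().project(project_id).storage(provider)
        for f in storage.files:
            if f.path.startswith(prefix):
                local = f.path[len(prefix):]
                os.makedirs(os.path.dirname(local) or '.', exist_ok=True)
                with open(local, 'wb') as fp:
                    f.write_to(fp)

    # clone_subfolder('abc12', 'data/big-files')

-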
P.S. Thank you for continuing to engage! These are great use cases that we might not have found out about so quickly if not for you!
-
Ideally I want to put my data on OSF and have my code decide what to download and how. It's really big data, so cloning the whole OSF project is a big ask for my readers/users. I will obviously give them the option of downloading the full 5 GB... but it would be nice to be able to let them grab just a few of the big files, which would still allow some data fun!


And you are welcome, thank you all for making this and listening!