is there an efficient way, in python, to read many (10^5) small files into memory? sequential open() calls are a major IO bottleneck, even with an SSD
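For reference, the sequential pattern in question looks roughly like the following; the "data" directory and the `paths` list are hypothetical stand-ins, not details from the thread:

    from pathlib import Path

    # hypothetical stand-in for the ~10^5 small files
    paths = list(Path("data").glob("*.bin"))

    # sequential baseline: one open()/read() per file, so total time
    # is dominated by per-file IO latency rather than raw bandwidth
    blobs = {}
    for p in paths:
        with open(p, "rb") as f:
            blobs[p] = f.read()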
-
Replying to @BagelDaughter
guh, I'd dumb. 'sequential' was inaccurate: I have 24 parallel processes; each calls open(), performs its task on the binary input, and repeats.
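A minimal sketch of that 24-process setup, assuming multiprocessing.Pool; `process_one` and `do_task` are hypothetical names for the per-file work:

    from multiprocessing import Pool
    from pathlib import Path

    paths = list(Path("data").glob("*.bin"))  # hypothetical input set

    def do_task(data):
        # placeholder for the real per-file task
        return len(data)

    def process_one(path):
        # each worker opens one file, runs the task on the binary input
        with open(path, "rb") as f:
            return do_task(f.read())

    if __name__ == "__main__":
        with Pool(processes=24) as pool:
            results = pool.map(process_one, paths)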
-
Replying to @eigenrobot @BagelDaughter
jc, *I'm* dumb. anyway, the hope was that there might be a way to mass-access the drive data rather than doing repeated, maybe-colliding reads.
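One way people get that kind of mass access is to pack the small files into a single archive once, then read it back in one sequential pass; a sketch using the standard tarfile module, with a hypothetical archive name:

    import tarfile

    # one-time packing step (e.g. `tar cf data.tar data/`) turns
    # ~10^5 random-access opens into a single sequential scan
    blobs = {}
    with tarfile.open("data.tar", "r") as tar:
        for member in tar:
            if member.isfile():
                blobs[member.name] = tar.extractfile(member).read()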
-
Replying to @eigenrobot
cool. In that case I've probably got nothing, except: you should be able to beat 24 processes with thread parallelism
5:12 PM - 13 Feb 2017
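A sketch of that thread-based read: CPython releases the GIL while blocked in file IO, so a pool much wider than the core count can overlap the per-file latency; max_workers=64 is a guess to tune, not a figure from the thread:

    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    paths = list(Path("data").glob("*.bin"))  # hypothetical input set

    def read_one(path):
        with open(path, "rb") as f:
            return path, f.read()

    # IO-bound reads: threads spend most of their time blocked in
    # read(), so far more threads than cores can still pay off
    with ThreadPoolExecutor(max_workers=64) as ex:
        blobs = dict(ex.map(read_one, paths))

If the per-file task is CPU-heavy rather than IO-bound, a hybrid is the usual compromise: threads to pull the bytes in, the existing 24 processes for the compute.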