is there an efficient way, in python, to read many (10^5) small files to memory? using sequential open() -> major IO bottleneck, even w/ SSD
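One workaround (a sketch, not from the thread; assumes all the files fit in RAM and that `paths` holds their locations) is to overlap the per-file latency with a thread pool, since CPython releases the GIL during blocking file I/O:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def read_one(path: Path) -> bytes:
    # CPython releases the GIL during the blocking read, so threads
    # can overlap the per-file open/read/close latency.
    return path.read_bytes()

def read_all(paths, workers=32):
    # Threads, not processes: the work is I/O-bound, and the bytes
    # don't need to be pickled back across process boundaries.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_one, paths))  # results keep input order

# usage, with a hypothetical directory layout:
# contents = read_all(sorted(Path("data").glob("*.txt")))
```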
Replying to @eigenrobot
could you jam them together into one big temp file with eg the cat command in the shell before u run python?
1 reply 0 retweets 3 likes
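A minimal sketch of that suggestion, with one assumption added on top of it: a byte-size manifest written next to the blob so Python can split the concatenated bytes back apart. All filenames here are hypothetical, and `stat -c` is the GNU coreutils form:

```python
# Build the blob and the manifest in the shell first
# (%n = name, %s = size in bytes; same glob order as cat):
#
#   cat data/*.txt > blob.bin
#   stat -c '%n %s' data/*.txt > manifest.txt
#
from pathlib import Path

def split_blob(blob_path, manifest_path):
    blob = Path(blob_path).read_bytes()  # one big sequential read
    files, offset = {}, 0
    for line in Path(manifest_path).read_text().splitlines():
        name, size = line.rsplit(" ", 1)  # size is the last token
        files[name] = blob[offset:offset + int(size)]
        offset += int(size)
    return files

# files = split_blob("blob.bin", "manifest.txt")
```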
Replying to @ngvrnd
hmmmm. YES! I'll try this later. clever, thank you. times like this I wish I'd had a formal CS education
1 reply 0 retweets 1 like
Replying to @ngvrnd
yeah formal CS education is mostly about asymptotic algorithm performance lol
1 reply 0 retweets 3 likes
Replying to @The_Lagrangian @ngvrnd
eerily similar to classical, generally useless econometric theory coursework
2 replies 0 retweets 1 like
Replying to @eigenrobot @The_Lagrangian
i mean, damn it, if i suffered through the proof of complexity of Tarjan's disjoint-set union-find algorithm
1 reply 0 retweets 2 likes
Replying to @ngvrnd @The_Lagrangian
(iirc it scales with the inverse of the Ackermann function)
1 reply 0 retweets 1 like
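For reference, a textbook sketch of that structure (union by rank plus path compression, here the path-halving variant), with amortized O(α(n)) per operation, α being the inverse Ackermann function:

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # path halving: point each visited node at its grandparent,
        # flattening the tree as we walk toward the root
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # union by rank: hang the shorter tree under the taller one
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```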
Replying to @ngvrnd @The_Lagrangian
oh totally! And I'm glad I can prove asymptotic efficiency + unbiasedness of certain estimators, all else equal. Yet
1 reply 0 retweets 2 likes
. . . now, I'd never waste a student's time on it. Bootstrap that shit, cross-validate, and get back to study design
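A minimal sketch of that stance, assuming a plain percentile bootstrap for the mean; the data and parameters below are made up:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, reps=10_000, alpha=0.05):
    # resample with replacement, recompute the statistic each time,
    # and read the confidence interval straight off the percentiles
    n = len(data)
    boots = sorted(
        stat([random.choice(data) for _ in range(n)]) for _ in range(reps)
    )
    return boots[int(reps * alpha / 2)], boots[int(reps * (1 - alpha / 2)) - 1]

# print(bootstrap_ci([2.1, 2.4, 1.9, 2.8, 2.2, 2.5]))
```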