is there an efficient way, in python, to read many (10^5) small files into memory? sequential open() is a major I/O bottleneck, even w/ an SSD
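One common answer is to overlap the blocking open/read syscalls with a thread pool, since the GIL is released during file I/O. A minimal sketch, assuming the files all fit in memory (`read_all` is a hypothetical helper name, not from the thread):

```python
import concurrent.futures
from pathlib import Path


def read_all(paths, max_workers=32):
    """Read many small files concurrently; returns {path: bytes}.

    Threads (not processes) suffice here: CPython releases the GIL
    while blocked in open()/read(), so the syscalls overlap.
    """
    def read_one(p):
        return p, Path(p).read_bytes()

    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(read_one, paths))
```

Whether this helps depends on where the bottleneck actually is: per-file syscall latency parallelizes well on an SSD, but if the cost is Python-side overhead, batching files into an archive beforehand is the bigger win.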
eerily similar to classical, generally useless econometric theory coursework
-
-
i mean, damn it, if i suffered through the proof of the complexity of Tarjan's disjoint-set union-find algorithm
-
it had to be worth something, right?
-
(iirc it scales with the inverse Ackermann function)
-
oh totally! And I'm glad I can prove asymptotic efficiency + unbiasedness of certain estimators, all else equal. Yet
-
. . . now, I'd never waste a student's time on it. Bootstrap that shit, cross-validate, and get back to study design
-
i guess it might be Stockholm Syndrome but I feel like it was only mostly useless...