Depends on a few more factors. Large files, or tiny files? All files in a directory, or filtering (in which case out of how many total files)? Dependencies between files, or can you read them in any order?
Replying to @josh_triplett @rustlang
Tiny files, all within a single dir, six million files or more. A first pass that just counts them would suffice. Thanks!
Try walkdir (https://crates.io/crates/walkdir): walkdir::WalkDir::new(dir).min_depth(1).max_depth(1).into_iter().count() It fits in a fraction of a tweet and uses the fastest available method on your platform.
On my system, find takes 5.6s to walk 6M files (without even printing), and the above line takes 3.4s.
You could also implement the same thing with std::fs::read_dir, but walkdir handles several additional details for you.
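For reference, here is a minimal standard-library-only sketch of that approach. The function name and the example path are assumptions, not from the thread; note that read_dir yields Result<DirEntry> items, so error handling for unreadable entries is one of the details you take on yourself rather than leaving to walkdir.

```rust
use std::fs;
use std::io;

/// Count the entries of a single directory (no recursion) using only std.
/// Hypothetical helper for illustration; not from the original thread.
fn count_entries(dir: &str) -> io::Result<usize> {
    // read_dir streams entries lazily, so memory use stays constant
    // even with millions of files. count() consumes every item,
    // including any Err entries the iterator yields.
    Ok(fs::read_dir(dir)?.count())
}

fn main() -> io::Result<()> {
    // Substitute the directory you actually want to count.
    let n = count_entries(".")?;
    println!("{} entries", n);
    Ok(())
}
```

Like the walkdir one-liner, this makes a single pass and never builds a full list of names in memory, which matters at the 6M-file scale discussed above.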
Replying to @josh_triplett @rustlang
Why not use getdents64? Is that in the std lib?
Because it's not portable and it's not clear how much of an improvement it would be. Why not try it and benchmark it?
Good to know, thanks. Benchmarking in my situation is difficult: the dir is used by a lot of apps (~120) communicating with each other, so doing anything that adds latency to the process is not possible. An "ls -f | wc -l" takes from 5 to 20 minutes to complete.