I keep this bash script to run stuff on the CPU. But for spaCy, in my case, 4 threads have always proved to be the fastest. pic.twitter.com/zw2YhPw6uT
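The script itself is only shown as an image, so here is a minimal sketch of what such a thread-capping wrapper typically looks like. The filename, the variable names, and the default of 4 are assumptions, not the original script:

```shell
#!/usr/bin/env bash
# run_cpu.sh (hypothetical): cap the thread pools that OpenMP/BLAS-backed
# libraries (e.g. spaCy's numpy/thinc backends) consult, then run the job.
NUM_THREADS="${NUM_THREADS:-4}"   # 4 threads worked best for spaCy here

export OMP_NUM_THREADS="$NUM_THREADS"
export OPENBLAS_NUM_THREADS="$NUM_THREADS"
export MKL_NUM_THREADS="$NUM_THREADS"

# Run whatever command was passed in, under the capped thread settings.
exec "$@"
```

Usage would be something like `./run_cpu.sh python train.py`.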
Right - my comment only applies when you also have top-level experiment parallelism. For single experiments, using more threads is faster; it's just that when you run multiple experiments, they stomp on each other if they share resources.
That’s right. I try to make sure that my top-level parallelism * num-threads-per-job doesn’t exceed the available resources, and in that case I seem to get the most out of spaCy with 4 threads. But that might just pertain to my use case.
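That budget can be enforced from the shell. A sketch, assuming a 16-core machine and a hypothetical `run_experiment.sh` entry point (the core count, GNU parallel usage, and file names are all assumptions):

```shell
#!/usr/bin/env bash
# Keep (jobs in flight) x (threads per job) <= available cores.
TOTAL_CORES=16
THREADS_PER_JOB=4
JOBS=$(( TOTAL_CORES / THREADS_PER_JOB ))   # 16 / 4 = 4 concurrent jobs

echo "Running $JOBS jobs with $THREADS_PER_JOB threads each"
# With GNU parallel, each experiment gets its own capped thread pool:
# parallel -j "$JOBS" OMP_NUM_THREADS="$THREADS_PER_JOB" ./run_experiment.sh {} ::: exp_*.cfg
```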
I've seen a similar thing with TensorFlow on CPU: a 5x speedup for certain models with inter-op threads set to 1, amongst other things. http://blog.tabanpour.info/projects/2018/09/07/tf-docker-kube.html
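A sketch of applying that setting from the shell, assuming a TensorFlow version that reads the `TF_NUM_INTEROP_THREADS` / `TF_NUM_INTRAOP_THREADS` environment variables (older versions configured this via `inter_op_parallelism_threads` in `ConfigProto`; the intra-op value of 4 and the script name are assumptions):

```shell
#!/usr/bin/env bash
# Pin TensorFlow's two CPU thread pools before the process starts:
# inter-op = 1 serializes independent ops (the 5x win mentioned above);
# intra-op controls parallelism within a single op's kernel.
export TF_NUM_INTEROP_THREADS=1
export TF_NUM_INTRAOP_THREADS=4   # tune per machine

echo "interop=$TF_NUM_INTEROP_THREADS intraop=$TF_NUM_INTRAOP_THREADS"
# python train_model.py   # hypothetical training entry point
```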