New TensorFlow benchmarks: faster training than Caffe2; near-linear speedup with # of GPUs. https://www.tensorflow.org/performance/benchmarks https://blogs.nvidia.com/blog/2017/04/18/caffe2/ pic.twitter.com/1bG4uAHEuZ
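(For context, the "near-linear speedup" numbers come from synchronous data-parallel training: the model is replicated as one "tower" per GPU and the gradients are averaged. A minimal sketch of that pattern in TF 1.x graph code, with a toy model; the names and structure are illustrative, not the benchmark scripts themselves:)

    import tensorflow as tf

    NUM_GPUS = 4  # assumption: set to the GPUs on your machine

    images = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.int32, [None])
    optimizer = tf.train.GradientDescentOptimizer(0.01)

    tower_grads = []
    image_splits = tf.split(images, NUM_GPUS)
    label_splits = tf.split(labels, NUM_GPUS)

    for i in range(NUM_GPUS):
        # One tower per GPU; share one set of weights by reusing
        # the variable scope after the first tower.
        with tf.device('/gpu:%d' % i), \
             tf.variable_scope('model', reuse=(i > 0)):
            logits = tf.layers.dense(tf.layers.flatten(image_splits[i]), 10)
            loss = tf.reduce_mean(
                tf.nn.sparse_softmax_cross_entropy_with_logits(
                    labels=label_splits[i], logits=logits))
            tower_grads.append(optimizer.compute_gradients(loss))

    # Synchronous data parallelism: average the per-GPU gradients
    # for each shared variable, then apply the update once.
    avg_grads = []
    for grad_vars in zip(*tower_grads):
        mean_grad = tf.reduce_mean(
            tf.stack([g for g, _ in grad_vars]), axis=0)
        avg_grads.append((mean_grad, grad_vars[0][1]))
    train_op = optimizer.apply_gradients(avg_grads)

Speedup is near-linear as long as the gradient averaging (the only cross-GPU step here) stays cheap relative to the per-tower compute.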
Have you tried this on Xeon Phi? If it scales linearly, you could put 128 cores to work.
Nice! Just in time for GTC ;-)
AFAIK PyTorch, Chainer, and DyNet all construct graphs for use as an autodiff tape; they just don't do it ahead of time.
Also, in many cases they're not *significantly slower* than static-graph frameworks (think only a few %).
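(To make the tape point concrete, a minimal PyTorch sketch, current API, purely illustrative: the graph is recorded *while* the forward pass runs, so it can depend on runtime data.)

    import torch

    x = torch.randn(3, requires_grad=True)
    # The graph/tape is built as this executes, so control flow can
    # depend on runtime values -- no ahead-of-time graph definition.
    if x.sum() > 0:
        y = (x * 2).sum()
    else:
        y = (x ** 2).sum()
    y.backward()   # walk the recorded tape backwards
    print(x.grad)  # gradient through whichever branch actually ran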