It's a shame Google doesn't show the impact of these improvements on energy consumption, if there is any, as this will be increasingly important if AI is to become ubiquitous.
-
From everything I've directly experienced with Google Cloud and its manufactured "benchmarks", I call BS on this one.
-
It's just not an apples-to-apples comparison. We have no reference point in terms of cost, power consumption, or density of units.
-
V2?
-
Would it be as fast with PyTorch?
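(For reference, the usual way to try this is the torch_xla / PyTorch-XLA package. A minimal sketch below, with a hypothetical toy model and data just to show the mechanics; it says nothing about whether it would match the TensorFlow-on-TPU numbers.)

import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # PyTorch/XLA bridge; assumes torch_xla is installed

device = xm.xla_device()  # the TPU core visible to this process

# Hypothetical toy model and data, only to make the sketch self-contained.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128).to(device)
y = torch.randint(0, 10, (64,)).to(device)

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # optimizer_step with barrier=True forces the lazily built XLA graph
    # to actually execute on the TPU at each step.
    xm.optimizer_step(optimizer, barrier=True)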
-
What's their coherence time and error rate though??

-
Measured by Google
-
At 16 chips or fewer, TensorFlow + TPU loses on 4/5 benchmarks. At best performance, the TPU still loses on 2/5 benchmarks (Mask R-CNN and NMT). It appears that TPUs parallelize better than Nvidia's V100s, but PyTorch/MXNet + V100 still appear to be significantly faster per chip.
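To make the per-chip framing concrete, here's a rough sketch of the normalization that argument relies on. The entries are placeholders, not the published MLPerf numbers; plug in the real time-to-train figures to reproduce the comparison.

def per_chip_speed(time_to_train_min, num_chips):
    """Training runs completed per chip-minute; higher means faster per chip."""
    return 1.0 / (time_to_train_min * num_chips)

def scaling_efficiency(t_small, n_small, t_large, n_large):
    """Fraction of ideal linear speedup kept when going from n_small to n_large chips."""
    ideal_speedup = n_large / n_small
    actual_speedup = t_small / t_large
    return actual_speedup / ideal_speedup

# Hypothetical entries: (system, chips, time-to-train in minutes) -- placeholders only.
results = [
    ("TPU v3 slice", 16, 40.0),
    ("V100 cluster", 16, 32.0),
]
for system, chips, minutes in results:
    print(f"{system}: {per_chip_speed(minutes, chips):.5f} runs per chip-minute")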