Curious: Why are all these neural network coprocessors (TPUs, Neural Engine, etc.) separate processors instead of just GPUs with better support for 8-bit ints and floats?
Replying to @BRIAN_____
Yeah, I’m more asking why NVIDIA and Apple, which make GPUs, decided to go with separate blocks alongside their GPUs. Neither company sells their “TPUs” separate from a GPU.
Replying to @pcwalton @BRIAN_____
NVIDIA’s “Tensor Cores” are actually just an instruction in their GPUs that performs a small FP16 matrix multiply: https://devblogs.nvidia.com/programming-tensor-cores-cuda-9/
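[Editor's note: a minimal sketch of what that instruction computes, modeled in plain Python. The 16×16×16 tile shape matches the wmma API described in the linked CUDA 9 post; on real hardware the inputs are FP16 and the accumulation is FP32, which ordinary Python floats only approximate. Names here are illustrative, not NVIDIA's API.]

```python
# Model of one Tensor Core op (D = A @ B + C) on a single 16x16x16 tile.
# Real hardware: FP16 inputs, FP32 accumulation; this is a numerical sketch.
M = N = K = 16

def hmma_tile(A, B, C):
    """Return D = A @ B + C for one MxNxK tile (lists of lists of floats)."""
    D = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            acc = C[i][j]                     # accumulator seeded with C
            for k in range(K):
                acc += A[i][k] * B[k][j]      # products of the FP16 inputs
            D[i][j] = acc
    return D

# A full GEMM is built by sweeping this tile op across the K dimension.
A = [[1.0] * K for _ in range(M)]
B = [[2.0] * N for _ in range(K)]
C = [[3.0] * N for _ in range(M)]
D = hmma_tile(A, B, C)
# Each element: 16 * (1.0 * 2.0) + 3.0 = 35.0
```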
Replying to @trishume @BRIAN_____
Huh, you’re right!
Now we just need to get this exposed in SPIR-V :)
8:27 PM - 12 Sep 2018
from Dogpatch, San Francisco