Curious: Why are all these neural network coprocessors (TPUs, Neural Engine, etc.) separate processors instead of just GPUs with better support for 8-bit ints and floats?
Sure, though I’m more asking about companies that won’t sell you a TPU without an attached GPU (Apple, NVIDIA). Seems strange that they have totally separate blocks instead of reusing silicon.
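(Editor’s note: for a concrete sense of what “better support for 8-bit ints” on a GPU looks like, here is a minimal sketch using CUDA’s real `__dp4a` intrinsic, which does a packed int8×4 dot product on ordinary CUDA cores on SM 6.1+ devices. The kernel name and sizes are made up for illustration; this is a sketch, not production code.)

```
// Sketch: packed int8 dot product via __dp4a (compile with nvcc -arch=sm_61).
// Each 32-bit word holds four signed 8-bit lanes.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: each thread accumulates over a strided slice of n words.
__global__ void int8_dot_kernel(const int* a, const int* b, int* out, int n) {
    int acc = 0;
    for (int i = threadIdx.x; i < n; i += blockDim.x) {
        // __dp4a multiplies the four packed 8-bit lanes and adds into acc.
        acc = __dp4a(a[i], b[i], acc);
    }
    atomicAdd(out, acc);
}

int main() {
    const int n = 256;  // 256 words = 1024 int8 values
    int *a, *b, *out;
    cudaMallocManaged(&a, n * sizeof(int));
    cudaMallocManaged(&b, n * sizeof(int));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < n; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }
    *out = 0;
    int8_dot_kernel<<<1, 128>>>(a, b, out, n);
    cudaDeviceSynchronize();
    printf("dot = %d\n", *out);  // expect 1024 * (1*2) = 2048
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```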
Well, when it’s on a power-constrained SoC, it’s often more a game of using only the most efficient silicon for the job. On a GPU… well, a lot of that power goes to memory, which is shared. I suspect Nvidia will make a “mostly tensor core” part at some point.
Cool, yeah. I hope things converge over time. There have gotta be good uses beyond neural nets for really fast 16-bit matrix math that clever devs can think of…
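(Editor’s note: for a concrete sense of the “really fast 16-bit matrix math” being discussed, here is a minimal sketch of a single 16×16×16 fp16 tile multiply-accumulate on tensor cores via CUDA’s `nvcuda::wmma` API, which exists on SM 7.0+ parts like Volta. The kernel and host scaffolding are illustrative assumptions, not production code.)

```
// Sketch: one warp computes D = A*B + 0 for a single 16x16 fp16 tile
// on tensor cores (compile with nvcc -arch=sm_70).
#include <cstdio>
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

__global__ void wmma_16x16(const half* A, const half* B, float* D) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // accumulator = 0
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // runs on tensor cores
    wmma::store_matrix_sync(D, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *A, *B; float *D;
    cudaMallocManaged(&A, 256 * sizeof(half));
    cudaMallocManaged(&B, 256 * sizeof(half));
    cudaMallocManaged(&D, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) {
        A[i] = __float2half(1.0f);
        B[i] = __float2half(1.0f);
    }
    wmma_16x16<<<1, 32>>>(A, B, D);  // a single warp drives the whole tile
    cudaDeviceSynchronize();
    printf("D[0] = %f\n", D[0]);     // expect 16.0 (sum over k of 1*1)
    cudaFree(A); cudaFree(B); cudaFree(D);
    return 0;
}
```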
(Speculation) It’s advantageous for them not to be packaged right next to each other, for thermal reasons. As for Google’s TPU: once you’re past the up-front cost of designing a chip, it’s easier/cheaper to design and manufacture a TPU than to design or buy something like Volta.