Imagine if folks used custom chips instead of standard gaming chips for deep learning. ~100X boost possible... https://twitter.com/mappingbabel/status/676853665500495872
Replying to @oe1cxw:
@oe1cxw @adapteva Could be used for cases where data size is small enough to fit in on-chip memory? Possible for a set of use cases.
Replying to @nachiketkapre:
@nachiketkapre @adapteva NV Maxwell has up to 336 GB/s global mem BW. Pascal is expected to have 1 TB/s. Hard to beat with a custom chip.
Replying to @oe1cxw:
@oe1cxw @nachiketkapre Wrong premise, global memory is the past. Easy to get 100s of TB/s with truly distributed arch. #brain
Replying to @zeroasic:
@adapteva @nachiketkapre So you'd have to be more specific about what exactly you mean by "deep learning". Which algo? Which topology? 2/
9:26 AM - 16 Dec 2015
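The aggregate-bandwidth claim in the thread comes down to simple arithmetic: if every core reads its own local SRAM bank in parallel, bandwidth scales with core count instead of being capped by one shared memory interface. A back-of-the-envelope sketch in Python (the core count, port width, and clock speed below are illustrative assumptions, not figures from the thread):

```python
# Back-of-the-envelope comparison of aggregate distributed on-chip SRAM
# bandwidth vs. a single shared global-memory interface. All manycore
# hardware parameters here are illustrative assumptions.

def aggregate_bandwidth_tbs(cores, bytes_per_cycle, clock_ghz):
    """Aggregate bandwidth in TB/s when each core reads its own local
    memory bank every cycle, with no shared-bus contention."""
    return cores * bytes_per_cycle * clock_ghz / 1000.0

# Hypothetical manycore chip: 1024 cores, each with a 32-byte-wide
# local SRAM port, clocked at 1 GHz.
distributed = aggregate_bandwidth_tbs(cores=1024, bytes_per_cycle=32, clock_ghz=1.0)

# Shared global-memory figures quoted in the thread (GB/s -> TB/s):
maxwell = 336 / 1000.0   # NV Maxwell, up to 336 GB/s
pascal = 1000 / 1000.0   # Pascal, expected ~1 TB/s

print(f"distributed on-chip: {distributed:.1f} TB/s")  # 32.8 TB/s
print(f"Maxwell global mem:  {maxwell:.3f} TB/s")
print(f"Pascal global mem:   {pascal:.3f} TB/s")
```

Scaling to more cores or wider ports is how the thread's claim of hundreds of TB/s arises; the caveat raised earlier in the thread still applies, namely that the working set must fit in on-chip memory for this aggregate figure to be usable.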