So why *aren't* there small bits of compute baked into RAM yet? I'm referring to highly local operations with fixed dimensionality, or reducing queries (return or update a memory region where a given constraint is true). We have TB+ EC2 nodes, after all. Why move the data?
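(A minimal sketch, in C, of the kind of "reducing query" meant above; everything here, including the function names, is illustrative and not from the thread. Both loops are local and fixed-dimensional, yet on current hardware every word they touch still has to cross the memory bus to the CPU.)

#include <stdint.h>
#include <stddef.h>

/* "return ... where a constraint is true": sum the values in a
   region that fall below a limit. */
uint64_t sum_where_below(const uint32_t *region, size_t n, uint32_t limit)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        if (region[i] < limit)
            sum += region[i];
    return sum;
}

/* "update memory region where a constraint is true": clamp values
   above a limit, in place. */
void clamp_where_above(uint32_t *region, size_t n, uint32_t limit)
{
    for (size_t i = 0; i < n; i++)
        if (region[i] > limit)
            region[i] = limit;
}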
-
Replying to @dakami @mcclure111
one reason is that wafers that are good for storage (high gate capacitance) suck for computation, and vice versa
-
And this is why eDRAM isn't just "copy and paste a RAM design into the die".
-
eDRAM?
-
Intel's L4 cache / video RAM. Used on CPUs with Intel Iris graphics. Implemented on a second die sitting next to the CPU+GPU die itself.
-
Replying to @_sudoreality @sudoreality and others
to add to this, eDRAM is the name of the concept (z/196 also uses eDRAM, for example), rather than Intel's impl
-
Replying to @whitequark @sudoreality and others
Intel's implementation isn't even "real" eDRAM (though by some definitions it qualifies). It's a separate die. I was talking about actually having eDRAM on the same die as logic. That's been common on game consoles since PS2/GameCube, and recently it's used for cache on POWER.
-
Replying to @marcan42 @sudoreality and others
oh, TIL. I thought eDRAM was always different-die.
-
Yeah, half the Wii U's GPU/SoC die is eDRAM. Two blocks of different density. I think the top left bit is true SRAM. Both that & the smaller eDRAM block used to be eDRAM on Wii/GC, which was also half the die. https://marcan.st/transf/latte_stitched.jpg https://marcan.st/transf/hollywood_die.jpg (warning: huge) pic.twitter.com/Mn4FaVn1rL