I don't even know who's working on this these days. Is it a thing outside of some niches?
I mean, it doesn't *sound* all that hard, but then again it obviously didn't sound that hard back in the day either, or else Itanium would not have happened.
Replying to @johnregehr @IgorSkochinsky
I mean, GCC wasn't even SSA-based back in those days, right?
I know some older GPUs, like the VideoCore IV (Raspberry Pi), use VLIW.
I think to some extent it became unclear whether a sufficiently smart compiler was even possible for several VLIW problems. For example, moving branch prediction into the compiler requires fundamentally rethinking some styles of software development.
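(A minimal sketch of what "moving branch decisions into the compiler" can look like in practice: if-conversion, where a data-dependent branch becomes a select that a VLIW scheduler can pack into a fixed schedule. The function names are illustrative, not from the thread, and whether a compiler actually emits a select depends on target and flags.)

```c
#include <stddef.h>

/* Branchy version: the hardware branch predictor has to guess
 * which way each iteration goes at runtime. */
void clamp_branchy(float *x, size_t n, float lo) {
    for (size_t i = 0; i < n; i++) {
        if (x[i] < lo)
            x[i] = lo;
    }
}

/* If-converted version: the branch becomes a data dependency
 * (a select), so a VLIW compiler can schedule both "arms"
 * unconditionally and pick the result with a predicate,
 * with no runtime prediction required. */
void clamp_predicated(float *x, size_t n, float lo) {
    for (size_t i = 0; i < n; i++) {
        float v = x[i];
        x[i] = (v < lo) ? lo : v;  /* typically a select/cmov, not a branch */
    }
}
```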
I'd read a whole book about this stuff. Just spent a few minutes looking for anything at all (survey paper, book, whatever) about this from the last 5 years and came up empty.
Well, for GPUs it's easy to see why they moved away from VLIW. Instead of, say, shading 4 pixels at a time with 2 ALUs each, you get better utilization by just shading 8 pixels at a time, one pixel per ALU. (Terminology imprecise, but you get the idea.)
Replying to @pcwalton @johnregehr
Maybe one way to look at the history here is that, for vector processors at least, SIMD beats MIMD (where VLIW is a form of MIMD) in practice.
Replying to @pcwalton @johnregehr
Which makes sense. I mean, if you have tons of data parallelism available, why complicate things by adding multiple simultaneous instruction dispatch when you can just decode one instruction and parallelize across lanes?
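(A sketch of that "one instruction, many lanes" shape, using SSE intrinsics as a stand-in for the GPU case in the tweets above; the function and its parameters are made up for illustration.)

```c
#include <xmmintrin.h>  /* SSE intrinsics; x86 targets only */

/* One decoded instruction (ADDPS, via _mm_add_ps) performs the same
 * add across four float lanes: four "pixels" per instruction, one
 * instruction stream. Contrast with a VLIW bundle, which carries
 * several *different* ops the compiler must pack itself. */
void add_bias_simd(float *pix, float bias, int n) {
    __m128 b = _mm_set1_ps(bias);          /* broadcast bias to all 4 lanes */
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 p = _mm_loadu_ps(pix + i);  /* load 4 pixels at once */
        _mm_storeu_ps(pix + i, _mm_add_ps(p, b));
    }
    for (; i < n; i++)                     /* scalar tail */
        pix[i] += bias;
}
```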
And in machine learning the move has been to make the "software" more embarrassingly parallel.
Say more?
So today's ML often boils down to big matrix multiplications. This creates a funny feedback loop: hardware designers have spent more and more resources making MatMul fast, which in turn means there's more interest in models that use bigger MatMuls.
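(For concreteness, a minimal, unoptimized sketch of the kernel in question; a dense layer's forward pass is essentially one such GEMM call. Names and layout are my own assumptions.)

```c
#include <stddef.h>

/* Naive C = A * B, with A being MxK and B being KxN, row-major.
 * The multiply-add in the inner loop is the op that hardware keeps
 * adding silicon for (tensor cores, systolic arrays), which is the
 * feedback loop described above: hardware makes MatMul cheap, so
 * models lean on bigger MatMuls. */
void matmul(const float *A, const float *B, float *C,
            size_t M, size_t K, size_t N) {
    for (size_t i = 0; i < M; i++) {
        for (size_t j = 0; j < N; j++) {
            float acc = 0.0f;
            for (size_t k = 0; k < K; k++)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
    }
}
```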