Is memory/performance optimization a big thing in the ML world? It seems like it should be: these models take a long time to train and GPUs are expensive, so there ought to be a lot of money in making them faster/smaller. Or is this not a thing for whatever reason?

