One definition of "research" is to push the limits of today's technology to figure out what's possible, and thus inform what will be easily accessible with future technology. Complaining that researchers are using high-end tech is ... dubious.
Oh yeah, sure; in a similar vein, we don't complain that not everyone who's interested has access to a particle accelerator, etc. Still, DL sometimes feels like we're brute-forcing our way to better results without making much progress on the approach itself.
New conversation
I disagree with this analysis in so many ways. The key feature of these huge language models is that they're re-usable. The initial training is expensive, but customizing them for new purposes is very cheap, including distilling them for easier deployment.
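As a sketch of what "distilling them for easier deployment" can mean: the classic recipe (Hinton-style knowledge distillation) trains a small student model to match a large teacher's temperature-softened output distribution. The snippet below is a minimal, framework-free illustration of just the loss term; the temperature value and the pure-Python softmax are choices for this example, not anything stated in the thread.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep a consistent magnitude.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In practice this distillation term is usually mixed with the ordinary cross-entropy on hard labels, and frameworks provide the same pieces as built-ins; the point here is only that the student-training step is cheap relative to the teacher's original training run.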
You are right if we are talking about the general potential of transfer learning for slightly different corpora. But then, even loading this model (33 GB for just the parameters!) is not trivial. So how are you going to fine-tune it for your task if you don't have the GPU setup?
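A quick back-of-the-envelope calculation puts the "33 GB for just the parameters" figure in perspective. The parameter count below (8.3 billion, stored in fp32 at 4 bytes each) is an assumption chosen so that the result lands near 33 GB; it is not a figure confirmed in the thread.

```python
def param_memory_gb(n_params, bytes_per_param=4):
    # Memory for the weights alone, in decimal gigabytes (1 GB = 1e9 bytes).
    # fp32 weights take 4 bytes per parameter.
    return n_params * bytes_per_param / 1e9

# Hypothetical ~8.3-billion-parameter model in fp32:
weights_gb = param_memory_gb(8.3e9)  # about 33.2 GB for the weights alone
```

And that is only inference-time storage: fine-tuning with an optimizer like Adam keeps additional per-parameter state (momentum and variance), roughly tripling or quadrupling the memory needed, which reinforces the point about the GPU setup.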
New conversation
At the #ogAMS conference this week there was an excellent presentation by @theShiftPR0JECT. Lean ICT is but one puzzle piece in fighting climate change. But I think we #MachineLearning experts especially need to become #leanICT and #DigitalSobriety experts as well.
I wasn't yet aware of the digital-sobriety concept, but I also fully agree that this isn't sustainable. Finding more efficient algorithms has always been one of the goals of computer science, but currently we're going through a phase of simply adding more power and more machines.
New conversation
Doesn't this trade-off ultimately depend on the application? If a tiny improvement means a lower error rate for breast cancer detection, that's surely worth the (economic and environmental) cost. "Unnecessary" medical care costs $1tn/year in the US. That pays for a couple of supercomputers.
New conversation
Democratizing AI is very applicable to genomics!
@swzaranek and I are seeing the same problems: you don't need large compute for a model where one machine (and straightforward filtering) produces equivalent or better results. "Big" methods are easier to fund and publish, unfortunately.
Probabilistic methods always win over filters (and other heuristics) when trained on well-prepared datasets.