Could you also benchmark the resources needed to run the repos? Speed per epoch, iterations until convergence, etc.?
Yes, it should be possible in principle. The benchmarking libraries are here: http://github.com/paperswithcode/sotabench-eval and http://github.com/paperswithcode/torchbench - open for PRs.
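For reference, here is a minimal sketch of the kind of sotabench.py hook these libraries expect from a repo, following torchbench's image-classification example; the exact entry point and parameter names (ImageNet.benchmark, paper_model_name, paper_arxiv_id, input_transform, batch_size, num_gpu) are recalled from the library's docs and should be treated as assumptions. A resource benchmark such as speed per epoch or iterations to convergence would presumably be contributed as a similar hook via a PR.

# sotabench.py - sketch of a torchbench ImageNet evaluation hook (API details assumed)
import PIL
import torchvision.transforms as transforms
from torchvision.models.resnet import resnext101_32x8d
from torchbench.image_classification import ImageNet

# Standard ImageNet preprocessing for the evaluation pass
input_transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Evaluate the pretrained model and report results to sotabench,
# linking them to the paper via its arXiv id
ImageNet.benchmark(
    model=resnext101_32x8d(pretrained=True),
    paper_model_name='ResNeXt-101-32x8d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1,
)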
End of conversation
New conversation
Were the models re-trained from scratch with the same hyperparameters? Or were the results the outputs of the pre-trained models? Especially asking for the WMT benchmarks w.r.t. #neuralempty. I would argue that "reproduction" and re-running a trained model are rather different. =)
For WMT, http://matrix.statmt.org/ does exist too.
1 more reply
New conversation
Hi, would you add GAN benchmarks, like CelebA, LSUN, and so on?
Excellent initiative! I love your work!
Introducing sotabench: a new service with the mission of benchmarking every open-source ML model. We run GitHub repos on free GPU servers to capture their results: compare to papers and other models, and see speed/accuracy trade-offs. Check it out: