No, I will not make any such change, for two reasons: https://twitter.com/yoavgo/status/1353413967889244161
-
1. Process: The camera ready is done, and approved by all of the authors. If I make any changes past this point it will be literally only fixing typos/citations. No changes to content let alone the title.
-
2. Content: I stand by the title and the question we are asking. The question is motivated because the field has been dominated by "bigger bigger bigger!" (yes in terms of both training data and model size), with most* of the discourse only fawning over the results. >>
-
*(Exceptions to this: (1) the big body of work--including yours--into whether the models absorb bias and (2) the GPT-2 staged roll-out paper (and references cited in its sec 1): https://arxiv.org/abs/1908.09203)
-
Thus, the motivation for writing this paper. We aren't saying "LLMs are bad" but rather: these are the dangers we see, that should be accounted for in risk/benefit analyses and, if research proceeds in this direction, mitigated. >>
-
Furthermore, I've now had a minute to read your critique, and I disagree with your claim that our criticisms are independent of model size. Difficulty in curating and documenting datasets absolutely scales with dataset size, as we clearly lay out in the paper: >> pic.twitter.com/CKLsFXIKwD
-
Likewise, nowhere do we say that small LMs are necessarily good/risk-free. There, and in your points about smaller models possibly being less energy efficient, you seem to have bought into a world view where language modeling must necessarily exist and continue. >>
-
This isn't a presupposition we share. (And please don't misread: I'm not saying it should all stop this instant, but rather that research in this area should include cost/benefit analyses.) >>
-
As for the claim that our paper is one-sided, this is exhausting. All of ML gets to write papers that talk up the benefits of the tech without mentioning any risks (at least until 2020 w/broader impact statements), but when a paper focuses on the risks, it's "one-sided"? >>
-
It was really refreshing to read a paper about the risks of large-scale NLP models. We always hear about the dangers and bias with facial recognition and image processing, but there isn't enough discussion about NLP. It's not something I had properly considered before.
