Replying to
Well said! On one hand, I am in favor of sharing large language models in the interest of science (both figuratively and literally). However, as a former moderator of the arXiv machine learning category, I do share some of the concerns. 1/2
Replying to
2/2 I am not against progress, and I think there is room for compromise. For example, why not change the marketing: instead of promising a system that generates entire articles and facts, create an "article template generator" where researchers have to fill in the facts.
Replying to
From my pov, as a research paper it's nice. However, there is no use case where this LLM is useful. If you are a scientist, you won't trust its output, as it can hallucinate (and it does). Who is going to use it then? Consider that regular people often read only a paper's title to prove things.
Replying to
Nicely reasoned article. What strikes me is that, for it to be truly useful, it must produce accurate and truthful output. This would seem to be orders of magnitude harder than simply generating plausible sentences (just my intuition). It could remain in a potentially risky state for a while.
Replying to
One application of Galactica, when it generates authentic-looking fake papers, is that when a researcher gets stuck on a problem for years (it happens), some fake papers can give them a clue toward a new way of thinking, like a scientific artist. A kind of generated imagination for scientists.
Replying to
With Meta's reputation and the amount of fakery on social media, releasing this sort of product is not wise. We should be exploring suitable use cases, and ways to identify the output as not real, before releasing it. Good article.
Replying to
The biggest source of confusion for me is that I don't see the extra danger in Galactica compared to, say, GPT-3. Are we worried that since Galactica generates technical documents, it will be better at spreading false results? Isn't the review system set up to deal with this?