Remember when you went to Microsoft for stodgy but basically functional software and the bookstore for speculative fiction?
arXiv may have been useful in physics and math (and other parts of CS) but it's a cesspool in "AI"—a reservoir for hype infections
From the abstract of this 154-page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models."
And "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting."
Pièce de résistance: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."
Comic interlude: "In our exploration of GPT-4, we put special emphasis on discovering its limitations..."
(But apparently none on the limitations of their 'tests' for AGI.)
And finally: "We conclude with reflections on societal influences of the recent technological leap" --- I'm not sure I even want to look to see what they have to say there.
So, if there is a definition of AGI with associated factors that, when identified via multiple examples, evidence the beginnings or outright existence of AGI, wouldn't that be scientifically objective? Or is there an expectation of some other research design here?
I am really concerned about the hype (and the lack of reflection on the use of terminology like 'AGI' in research).
I also found the tone fawning, but some of the experiments (image generation, code execution, spatial reasoning) seem to provide evidence of an emulated world model.
That said, it has to produce strings to represent and reason about the model, rather than rely on internal state.
I'm still stuck on the fact that they say they're going to forgo separating training and testing data, instead using "human psychology" (?), and admit it's not scientifically rigorous. But it's AGI anyway.
Quote Tweet
Where Microsoft tries to tell you "don't worry bro, forget about it" on data contamination, and conveniently does not note whether, while GPT-4 was being refined, the model was given corrective feedback on these very prompts during this so-called "evolutionary" progress. twitter.com/SebastienBubec…