Unifying LLMs & Knowledge Graphs:
1) Incorporate KGs during LLM pre-training/inference, enhancing LLM understanding
2) Leverage LLMs for different KG tasks (embedding, completion, construction)
3) LLMs <> KGs bidirectional reasoning (data vs. knowledge)
arxiv.org/abs/2306.08302
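Direction (1) above, injecting KG facts into an LLM's context at inference time, can be sketched minimally. Everything here is an illustrative assumption, not from the paper: the tiny triple store, the `retrieve_facts` helper, and the prompt format are all hypothetical.

```python
# Hypothetical sketch of direction (1): retrieve knowledge-graph triples
# relevant to a question and prepend them to the LLM prompt, so the model
# can ground its answer in the KG rather than parametric memory alone.
# The triple store and prompt layout below are illustrative assumptions.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Toy in-memory KG (a real system would query a graph database).
KG: List[Triple] = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def retrieve_facts(kg: List[Triple], entity: str) -> List[Triple]:
    """Return triples whose subject or object matches the entity."""
    return [t for t in kg if entity in (t[0], t[2])]

def build_prompt(question: str, entity: str) -> str:
    """Prepend retrieved KG facts to the question before calling the LLM."""
    facts = retrieve_facts(KG, entity)
    fact_lines = "\n".join(
        f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts
    )
    return f"Known facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("Where was Marie Curie born?", "Marie Curie")
print(prompt)
```

The resulting prompt would then be sent to any LLM; the grounding facts travel with the question, which is the core idea behind KG-augmented inference.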
A recent work claimed GPT-4 can score 100% on MIT's EECS curriculum with the right prompting. My friends and I were excited to read the analysis behind such a feat, but after digging deeper, what we found left us surprised and disappointed. dub.sh/gptsucksatmit 🧵
Quote Tweet
Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models. Presents a comprehensive dataset of 4,550 questions and solutions from all MIT EECS courses required for obtaining a degree. arxiv.org/abs/2306.08997
When someone claims that a language model achieves 100% accuracy on a task, especially a task as hard as getting thousands of questions from MIT's EECS courses correct, that claim deserves scrutiny. I'm surprised that many knowledgeable ML folks on Twitter simply promoted the results without any skepticism.
Quote Tweet
Update: we've started replicating their experiments directly with GPT-4 calls, and somehow it only gets worse. We've finished running zero-shot GPT-4 on the dataset, and after hand-grading the first 30% of the dataset, the results don't seem to match the paper. 🧵 twitter.com/sauhaarda/stat…
There are many ways to enhance LLMs in terms of performance and reliability. Combining the advantages of LLMs and knowledge graphs (KGs) is a promising direction. This new paper provides a good roadmap for the unification of LLMs and KGs. Covers: - incorporating KGs in LLM…