The BLOOM paper is out. Looks like it's doing worse than the current GPT-3 API on zero-shot generation tasks in English, but better than other open-source LLMs, and better than all of them on zero-shot multilingual tasks (which was the main goal). Proud of the work from the community! arxiv.org/abs/2211.05100
Replying to
And it's just the beginning. There's cool continuation work being done with BLOOMZ, for example: twitter.com/Muennighoff/st
Quote Tweet
Crosslingual Generalization through Multitask Finetuning
Demo: huggingface.co/bigscience/blo
arxiv.org/abs/2211.01786
github.com/bigscience-wor
We present BLOOMZ & mT0, a family of models w/ up to 176B params that follow human instructions in >100 languages zero-shot. 1/7
I also LOVE the carbon footprint paper on BLOOM by the fantastic & Anne-Laure Ligozat, which was also released recently: arxiv.org/pdf/2211.02001
Replying to
Clem, our algo ranked BLOOM in the top 5 papers this week (out of 1060). We're featuring it in tomorrow's email 🤙
Replying to
Am I having déjà vu or something? I could have sworn this paper was already released 😅 Anyway, great job!
Your bar for pride isn't that high. Why not make it better than GPT-3, a two-year-old technology with all its details in the paper? Worse performance on zero-shot generation tasks is meaningless.