Incredible news for devs:
- new GPT-4 and 3.5 Turbo models
- function calling in the API (plugins)
- 16k context 3.5 Turbo model (available to everyone today)
- 75% price reduction on V2 embeddings models
And more 🤯🧵
First, if you haven’t already, check out the announcement blog post which goes over some of the highlights:
Another really important part of today's announcement: gpt-4-0314 and gpt-3.5-turbo-0301 are being deprecated.
We spun up a new deprecations page to make it clear to developers what to expect: platform.openai.com/docs/deprecati
Okay, back to the fun stuff: new models! The latest GPT-4 and GPT-3.5 Turbo models support function calling and are more steerable than previous models. The 16k-context turbo model is also going to be so useful!
Read more: platform.openai.com/docs/models/gp
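As a rough sketch of what function calling looks like in practice (the `get_current_weather` schema, the canned model response, and the `dispatch` helper below are all my own hypothetical illustration, not from the announcement; with the real API you would pass `functions=functions` to `openai.ChatCompletion.create(...)` and the model may answer with a `function_call` instead of plain text):

```python
import json

# Hypothetical function schema: the shape the new `functions` parameter
# expects is a JSON Schema description of each function the model may call.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

# A simulated assistant message of the kind the new models can return:
# no text content, just a request to call one of your functions.
model_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"city": "Paris", "unit": "celsius"}',
    },
}

def dispatch(message):
    """Run the function the model asked for and return its result."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    if call["name"] == "get_current_weather":
        # Stubbed implementation; a real app would hit a weather API here.
        return {"city": args["city"], "temp": 21, "unit": args.get("unit", "celsius")}
    raise ValueError(f"unknown function: {call['name']}")

result = dispatch(model_message)
print(result)
```

In a full round trip, `result` would go back to the model as a `role="function"` message so it can compose a natural-language answer.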
I also wanted to take a second to reiterate the 75% price drop on embeddings. This is actually pretty crazy. You used to be able to embed the whole internet for ~$50M, now it is down to ~$12.5M.
Quote Tweet
Some more napkin math - size of the Internet is ~10^11 pages of text*, this would cost (only?) $50M to embed.
Who wants to take on Google? twitter.com/BorisMPower/st…
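The napkin math checks out. Assuming roughly 1,250 tokens per page (my assumption, chosen so the totals line up with the tweet's estimate) and the ada-002 per-token price before and after the 75% cut:

```python
# Napkin math behind the embedding-cost estimate above.
PAGES = 10**11              # rough size of the web in pages, per the quoted tweet
TOKENS_PER_PAGE = 1_250     # assumed average page length in tokens

OLD_PRICE = 0.0004 / 1_000  # $ per token before the price drop
NEW_PRICE = 0.0001 / 1_000  # $ per token after the 75% reduction

tokens = PAGES * TOKENS_PER_PAGE
old_cost = tokens * OLD_PRICE
new_cost = tokens * NEW_PRICE
print(f"before: ${old_cost/1e6:.1f}M, after: ${new_cost/1e6:.1f}M")
# → before: $50.0M, after: $12.5M
```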
With the 16k turbo model, the tokens-per-minute (TPM) limit is also 2x the previous one: platform.openai.com/docs/guides/ra 👏
The more tokens the better : )
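Even with the higher TPM limit, bursts of large requests can still trip rate limits, so retrying with exponential backoff is the usual pattern. A minimal stdlib-only sketch (in a real client you would catch the SDK's rate-limit exception specifically; here any exception retries, and `flaky` is a stand-in for an API call, purely for illustration):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter.

    Sleeps base_delay * 2**attempt between tries, with a little random
    jitter so many clients don't retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2**attempt + random.random() * base_delay)

# Hypothetical usage: a flaky call that succeeds on the third try.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("simulated 429: rate limit exceeded")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
print(result)  # → ok
```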