Conversation

In the latest gpt-x-0613 models, OpenAI shipped a finetuning-grade native update with significant implications. Function calling = better API tool use = a more robust LLM with digital actuators. Making this a first-class citizen also *greatly* reduces hallucination of incorrect function signatures. On top of that, OpenAI extended GPT-3.5's context length to 16K. I continue to be amazed by their massive shipping speed.
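The "first-class citizen" point boils down to the request shape: instead of coaxing the model into emitting a call in prose, you declare a JSON-Schema signature up front and the model fills it in. A minimal sketch of that payload, using a hypothetical `get_weather` tool (the tool name and fields are illustrative, not from the thread):

```python
def build_request(user_message: str) -> dict:
    """Build a chat-completion payload that declares a callable function.

    The schema below is a hypothetical example tool; the model can respond
    with a structured call to it instead of free text, so there is no
    hallucinated signature to parse out of prose.
    """
    get_weather_schema = {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {  # standard JSON Schema, as the API expects
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
    return {
        "model": "gpt-3.5-turbo-0613",
        "messages": [{"role": "user", "content": user_message}],
        "functions": [get_weather_schema],
        "function_call": "auto",  # let the model decide whether to call
    }

payload = build_request("What's the weather in Paris?")
```

Sending this payload to the chat completions endpoint is left out here; the point is that the function signature travels as data, not as prompt text.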
Haven’t added function calls yet, but upgraded to 16K and it rocks. I do think it follows some of my system prompts a little less closely, but I have so many more tokens that I can add reinforcements and details to compensate.
Super excited about function calling. Next obvious step in terms of automation in my opinion:
Quote Tweet
OpenAI announced function calling today. My takeaways: function calling is the next, obvious step in moving from generating text to taking actions. It doesn’t really unlock any net-new functionality. If you engineered the prompt well enough before, you could get the same…
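The "you could get the same" approach the quoted tweet alludes to is worth spelling out: before native function calling, you coerced JSON via the prompt and parsed the reply yourself, with nothing actually constraining the model to your schema. A minimal sketch of that older pattern (the prompt contract and reply here are hypothetical):

```python
import json

# A prompt-level contract: nothing enforces it, the model can simply break it.
SYSTEM_PROMPT = (
    "You are an API. Reply ONLY with JSON of the form "
    '{"function": "<name>", "arguments": {...}} and nothing else.'
)

def parse_tool_call(model_reply: str) -> dict:
    """Parse a free-text model reply, failing loudly on malformed output."""
    try:
        call = json.loads(model_reply)
    except json.JSONDecodeError as err:
        raise ValueError(f"model broke the JSON contract: {err}") from err
    if "function" not in call or "arguments" not in call:
        raise ValueError("reply is JSON but missing required keys")
    return call

# A well-behaved reply parses fine; a chatty one ("Sure! Here's the JSON...")
# would raise, which is exactly the fragility native function calling removes.
call = parse_tool_call('{"function": "get_weather", "arguments": {"city": "Paris"}}')
```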
75% price reduction on v2 embeddings can be very useful too.
Quote Tweet
🎉 New features for GPT-4 and GPT-3.5 released today by @OpenAI, including:
1️⃣ New GPT-4 and 3.5 Turbo models
2️⃣ Function calling in the API (plugins)
3️⃣ 16k-context 3.5 Turbo model
4️⃣ 75% price reduction on v2 embeddings models
You can embed the whole internet for $12.5M. twitter.com/officiallogank…
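The "$12.5M" figure is easy to sanity-check as back-of-envelope arithmetic. Assuming the post-cut ada-002 embedding price of $0.0001 per 1K tokens (i.e. the 75% reduction from $0.0004, an assumption not stated in the quoted tweet), the budget buys:

```python
# Back-of-envelope check of the "$12.5M to embed the internet" claim,
# assuming $0.0001 per 1K tokens after the 75% price cut.
price_per_1k_tokens = 0.0001      # USD, assumed post-cut embedding price
budget = 12_500_000               # USD
tokens = budget / price_per_1k_tokens * 1_000
print(f"{tokens:.2e} tokens")     # prints "1.25e+14 tokens"
```

So the tweet implicitly sizes "the whole internet" at roughly 125 trillion tokens.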