In the latest gpt-x-0613 models, OpenAI made a finetuning-grade native update that has significant implications.
Function calling = better API tool use = a more robust LLM with digital actuators. Making it a first-class citizen also *greatly* reduces hallucination of the wrong function signature.
And on top of that, OpenAI extended GPT-3.5's context length to 16K. I continue to be amazed by their shipping speed.
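To make the "first-class citizen" point concrete, here is a minimal sketch of the function-calling flow the 0613 models support: you describe tools as JSON Schema via the `functions` parameter, and the model replies with a structured `function_call` instead of free text. The `get_weather` function and the simulated response payload below are made up for illustration, and no live API call is made.

```python
import json

# Hypothetical tool the model is allowed to call (a real app would hit a
# weather API here; this is a stub).
def get_weather(location: str, unit: str = "celsius") -> dict:
    return {"location": location, "unit": unit, "forecast": "sunny"}

# JSON Schema description, in the shape passed to the API's `functions` parameter.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# The model returns `arguments` as a JSON *string* that must be parsed before
# dispatching. Simulated assistant message in the 0613 response shape:
response_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": '{"location": "Tokyo", "unit": "celsius"}',
    },
}

call = response_message["function_call"]
args = json.loads(call["arguments"])  # structured arguments, not free text
result = {"get_weather": get_weather}[call["name"]](**args)
print(result["forecast"])  # sunny
```

The win over prompt engineering is exactly this last step: the arguments arrive as parseable JSON keyed to a schema you declared, so a bad function name or missing parameter is a dispatch error you catch in code, not a hallucinated signature buried in prose.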
Conversation
> OpenAI made a finetuning-grade native update
this tweet could've been phrased better😦
Haven’t added function calls yet, but upgraded to 16K and it rocks. I do think it follows some of my system prompts a little less closely, but I have so many more tokens that I can add reinforcements and details to compensate.
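The trade-off described above, spending the bigger window on reinforcement while still leaving room for a reply, can be sketched as a simple budget check. This is a rough illustration only: the ~4-characters-per-token heuristic, the reserve size, and the function names are all assumptions, and a real app would count tokens with an actual tokenizer such as tiktoken.

```python
# Keep a running message list under a token budget so the extra 16K context
# can hold system-prompt "reinforcements". Crude ~4 chars/token heuristic.
CONTEXT_LIMIT = 16_384
RESPONSE_RESERVE = 1_024  # leave room for the completion


def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)


def fit_messages(system: str, history: list[str]) -> list[str]:
    budget = CONTEXT_LIMIT - RESPONSE_RESERVE - rough_tokens(system)
    kept = []
    # Walk history newest-first; drop the oldest turns once the budget runs out.
    for msg in reversed(history):
        cost = rough_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```

The system prompt is charged first so the reinforcements always survive; only older conversation turns get trimmed.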
Can you give some examples of how this is used? I want to get into using the API but I don’t know where to start.
Super excited about function calling. Next obvious step in terms of automation in my opinion:
Quote Tweet
OpenAI announced function calling today.
My takeaways:
Function calling is the next, obvious step in moving from generating text to taking actions.
It doesn’t really unlock any net new functionality. If you engineered the prompt well enough before, you could get the same…
The 75% price reduction on v2 embeddings can be very useful too.
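Once you have vectors back from the cheaper v2 embeddings model (text-embedding-ada-002), ranking by cosine similarity is all local math. A small sketch, where the three-dimensional vectors are tiny stand-ins rather than real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.1, 0.9, 0.0]
docs = {"doc_a": [0.1, 0.8, 0.1], "doc_b": [0.9, 0.1, 0.0]}

# Pick the stored document whose embedding points most nearly the same way.
best = max(docs, key=lambda k: cosine_similarity(query, docs[k]))
print(best)  # doc_a
```

The price drop matters because embedding cost scales with corpus size; the similarity search itself is free once the vectors are stored.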
Agree! I love the shipping speed.
Let’s see if they can achieve 1M context length by end of 2023!