Prediction: 2023 will make 2022 look like a sleepy year for AI advancement & adoption.
Greg Brockman’s posts
Plugins for processing a video clip, no ffmpeg wizardry required. Actual use-case from today's launch.
One of the least-appreciated skills in programming is writing anti-frustrating error messages.
A good error message should make it self-evident (a) what the user did, (b) what acceptable inputs are, and (c) how to fix the problem.
It can determine whether developers love or hate your library.
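As a hedged illustration of those three ingredients, here is a minimal sketch in Python using a hypothetical set_temperature function, not taken from any particular library:

```python
_config = {}

def set_temperature(value):
    """Set the sampling temperature for generation."""
    if not isinstance(value, (int, float)) or not (0.0 <= value <= 2.0):
        raise ValueError(
            f"set_temperature() received {value!r}, "                       # (a) what the user did
            "but temperature must be a number between 0.0 and 2.0. "        # (b) acceptable inputs
            "Try 0.7 for balanced output or 0.0 for deterministic output."  # (c) how to fix it
        )
    _config["temperature"] = float(value)
```

Calling set_temperature("hot") then fails with a single message that answers all three questions at once.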
An agent which learned to play Mario without rewards. Instead, it was incentivized to avoid "boredom" (that is, getting into states where it can predict what will happen next). Discovered warp levels, how to defeat bosses, etc. More details: blog.openai.com/reinforcement-
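The post describes curiosity-style intrinsic motivation: reward the agent for reaching states its own forward model cannot yet predict. The details of the actual agent are in the linked post; below is only a toy sketch of that reward signal, assuming a linear forward model and made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward model": a linear map fit online to predict the next state from (state, action).
W = rng.normal(scale=0.1, size=(8 + 1, 8))   # 8-dim state + 1-dim action -> next state
lr = 0.01

def intrinsic_reward(state, action, next_state):
    """Reward = how badly the forward model predicts the next state (its "surprise")."""
    global W
    x = np.append(state, action)
    pred = x @ W
    error = next_state - pred
    reward = float(np.mean(error ** 2))       # high when the agent cannot predict, i.e. not "bored"
    W += lr * np.outer(x, error)              # online update, so familiar states become boring
    return reward

# Usage: in the environment loop, feed intrinsic_reward(s, a, s_next) to the policy instead of
# (or on top of) the game score; the agent then seeks out states it cannot yet predict.
s, a, s_next = rng.normal(size=8), 1.0, rng.normal(size=8)
print(intrinsic_reward(s, a, s_next))
```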
We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality: openai.com/blog/chatgpt-p
We’re releasing GPT-4 — a large multimodal model (image & text in, text out) which is a significant advance in both capability and alignment.
Still limited in many ways, but passes many qualification benchmarks like the bar exam & AP Calculus:
Being willing to ask dumb questions is a superpower. Often by far the fastest way to get oriented in a new domain, and though perhaps counterintuitive, experts tend to love it when people genuinely want to learn about their passion area.
Working on a professional version of ChatGPT; will offer higher limits & faster performance. If interested, please join our waitlist here:
We're piloting ChatGPT Plus, a $20/mo subscription for faster response times and reliability during peak hours:
Kind of crazy that a computing device designed for desktop gaming has become critical to the emergence of super useful AI systems.
DALL-E — our new neural network for generating images from text:
openai.com/blog/dall-e/
Code is a liability, not an asset. So the goal of a software engineer is to deliver the maximum amount of desired functionality at the least cost in code complexity, even as desired functionality evolves over time.
Big takeaway from the GPT paradigm is that the world of text is a far more complete description of the human experience than almost anyone anticipated.
ChatGPT plugin for visual learners:
Quote
The “ShowMe” plugin for ChatGPT is my new favorite plugin. I’m a visual learner and it makes it easier/faster to consume info. chat.openai.com/share/b632f068
One of my biggest growth moments as a programmer was realizing that libraries I use are just code, and I could read them directly rather than puzzling it out from the docs.
Even today, I am surprised how much faster I move every time I start reading a layer I'm building on.
A new tool for scriptwriters:
Quote
Introducing Dramatron, a new tool for writers to co-write theatre and film scripts with a language model.
Dramatron can interactively co-create new stories complete with title, characters, location descriptions and dialogue.
Try it yourself now: dpmd.ai/dramatron-gith
ChatGPT iOS app is live in the US, rolling out in other countries over upcoming weeks: apps.apple.com/app/openai-cha
Android app coming next.
DALL·E 3 is ready! It's able to understand subtle nuance & follow prompts containing great detail.
Will be available to all ChatGPT Plus & Enterprise users over upcoming weeks.
openai.com/dall-e-3
The underlying spirit in many debates about the pace of AI progress—that we need to take safety very seriously and proceed with caution—is key to our mission. We spent more than 6 months testing GPT-4 and making it even safer, and built it on years of alignment research that we…
At a low level, there is no magic in machine learning — just lots of straightforward mathematics and systems engineering. And somehow the end result is entirely magical.
ChatGPT API now available, 10% the price of our flagship language model & matching or better at pretty much any task (not just chat).
Also released Whisper API & greatly improved our developer policies in response to feedback. We ❤️ developers:
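For reference, a minimal sketch of calling both endpoints with the openai Python package as it looked at the time of this launch (the ChatCompletion and Audio endpoints); the prompt and file name are made up, and the current SDK exposes a different client interface, so treat this as illustrative:

```python
import openai

openai.api_key = "sk-..."  # your API key

# ChatGPT API: gpt-3.5-turbo, the model behind ChatGPT.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a good error message looks like, in one sentence."},
    ],
)
print(chat.choices[0].message.content)

# Whisper API: speech-to-text on an audio file.
with open("meeting.mp3", "rb") as f:
    transcript = openai.Audio.transcribe("whisper-1", f)
print(transcript.text)
```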
Writing code is not very fun for its own sake. What makes it insanely addictive is the feeling upon shipping — that actual people are doing something useful with what started as a figment of your imagination. Never gets old.
Just launched ChatGPT, our new AI system which is optimized for dialogue: openai.com/blog/chatgpt/.
Try it out here: chat.openai.com
We've raised $1 billion from Microsoft and will be working together to build next-generation supercomputers, with the goal of building a platform within Azure which will scale to AGI:
Announcing our partnership with Bain, with Coca-Cola Company as the first mutual client:
Code Interpreter becoming available for all ChatGPT Plus users over the next week. Really amazing for any data science use case:
With some exceptions, the biggest impacts in AI come from people who are experts at both software and machine learning.
Though most people expect the opposite, it’s generally much faster to learn ML than software.
So great software engineers tend to have outsize potential in AI
The difference between a startup and a large corporation is that in large corporations, everyone is assigned a seat and a desk and a parking spot and a cubicle, and in startups everyone is assigned a task.
In a startup, you should seek out activities that seem hard, boring, annoying, and unscalable. The highest-value tasks are often hiding amongst them, and no one else has noticed because they seem unappealing on the surface.
Machine learning engineering is primarily about patience, attention to detail, and thinking deeply about small things. The day-to-day can be quite tedious & frustrating — but the results of proper execution make it worth it.
Everyone is talking about the future of search, but I'm particularly excited about the future of the browser — Edge will now include an AI assistant that can help you anywhere on the web.
Really starting to point at the future of UI:
Quote
Copilot for the web: twitter.com/satyanadella/s…
there is a unique kind of joy that comes from being part of a focused team solving hard problems
Manual inspection of data has probably the highest value-to-prestige ratio of any activity in machine learning.
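A hedged sketch of what that habit can look like in practice; the dataset name and helper are hypothetical, the point is simply eyeballing raw examples:

```python
import random

def eyeball(dataset, n=10, seed=0):
    """Print a handful of raw examples: cheap, unglamorous, and often where the real bugs show up."""
    rng = random.Random(seed)
    examples = list(dataset)
    for example in rng.sample(examples, min(n, len(examples))):
        print(repr(example))  # repr() exposes stray whitespace, encoding issues, truncation

# Usage (hypothetical): run eyeball(train_examples) before every training run,
# and again on the slices where the model does worst.
```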
An OpenAI employee printed out this AI-written sample and posted it by the recycling bin: blog.openai.com/better-languag
GPT-3.5 & GPT-4 for programming education — "CS50 bot" will be integrated into Harvard's CS50 class in the fall:
Much of modern ML engineering is making Python not be your bottleneck.
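A small illustration of what that means in practice, assuming NumPy; the same principle applies to pushing work into any native or GPU kernel:

```python
import time
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)

# Pure-Python loop: the interpreter does the work, one element at a time.
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += float(v) * 2.0
t_loop = time.perf_counter() - t0

# Vectorized: Python only dispatches; the heavy lifting runs in optimized native code.
t0 = time.perf_counter()
total_vec = float((x * 2.0).sum())
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.4f}s")
```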
Never write a regex by hand again. An application I've wanted myself since about the time I wrote my first regex:
Found bug that I've been working on all week. Required leveling up conceptual understanding of a particular area of the stack, building new observability tooling, and running many iterative experiments to isolate the issue. Incredible feeling now that it's fixed.
ML bugs are so much trickier than bugs in traditional software because rather than getting an error, you get degraded performance (and it's not obvious a priori what ideal performance is).
So ML debugging works by continual sanity checking, e.g. comparing to various baselines.
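A minimal sketch of that kind of sanity check for a regression setting, with made-up names; the point is comparing against trivial baselines rather than trusting the absence of errors:

```python
import numpy as np

def sanity_check(y_true, y_pred):
    """Compare model error against dumb baselines; degraded performance shows up here
    even though no exception is ever raised."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    model_mse = np.mean((y_true - y_pred) ** 2)
    mean_mse = np.mean((y_true - y_true.mean()) ** 2)     # baseline: always predict the mean
    last_mse = np.mean((y_true[1:] - y_true[:-1]) ** 2)   # baseline: predict the previous value
    print(f"model {model_mse:.4f} | predict-mean {mean_mse:.4f} | predict-last {last_mse:.4f}")
    assert model_mse < mean_mse, "Model is not beating the trivial baseline: something is off."
```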
mathematics is beauty, programming is utility, and their intersection is both at once
Team just simplified the ChatGPT URL from "chat.openai.com/chat" to "chat.openai.com" — one of many small improvements that really add up over time.
TED talk from earlier this week. Shows a bit of the future of AI tools, how we teach AIs to follow our intent, and how the tools themselves can help scale our ability to give high-quality feedback:
Programming is so fun because you get to go from lack of understanding to mastery time and time again, at a rate determined by your speed of iteratively writing & running code, and with real-world impact when you succeed.
Hard but necessary learning in software: when something doesn't work, the problem is you.
ChatGPT for life improvement:
Quote
I became addicted to running, and lost 26 lbs. All thanks to ChatGPT.
Long-term investments are extremely painful to make, but extremely worthwhile when they finally come to fruition.
When starting OpenAI, we thought hard about what job titles to use—didn't want to bucket people into researchers & engineers. Alan Kay advised us that they used "Member of Technical Staff" at Xerox PARC; we loved it & adopted it. Recently have seen many companies doing the same—very cool!
Microsoft partnership has been one of OpenAI's secrets to success — we work very closely with Azure to produce AI training & serving infrastructure that can scale to our cutting-edge (& entirely unprecedented!) needs.
Quote
microsoft, and particularly azure, don’t get nearly enough credit for the stuff openai launches. they do an amazing amount of work to make it happen; we are deeply grateful for the partnership.
they have built by far the best AI infra out there.
Retrieval is probably going to be the most ubiquitous language model plugin for the near future, since it allows any organization to make their data searchable (with full control over permissions etc) by an AI:
Quote
stare at raw tensor values long enough and you will truly see the matrix
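The retrieval idea a couple of posts up boils down to: embed your documents, embed the query, return the nearest matches, and let the model read the results. A toy sketch with a placeholder embed function (a real system would call an embedding model and enforce permissions):

```python
import numpy as np

def embed(text):
    """Placeholder embedding: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

documents = ["Q3 revenue report", "On-call runbook for the API", "Holiday schedule 2023"]
doc_vectors = np.stack([embed(d) for d in documents])

def search(query, k=2):
    """Return the k documents whose embeddings best match the query."""
    scores = doc_vectors @ embed(query)   # vectors are unit-norm, so dot product = cosine similarity
    return [documents[i] for i in np.argsort(-scores)[:k]]

print(search("who do I page when the API is down?"))
```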
Deploying GPT-4 subject to adversarial pressures of real world has been a great practice run for practical AI alignment. Just getting started, but encouraged by degree of alignment we've achieved so far (and the engineering process we've been maturing to improve issues).
ChatGPT is now browsing enabled. It’s not just able to search, but can also click into webpages (and site owners can choose whether to permit access) to find the most helpful information & links for you. Available to Plus and Enterprise users:
Browsing & Plugins for all Plus users, rolling out over the next week:
ChatGPT just crossed 1 million users; it's been 5 days since launch.
Quote
little openai update:
gpt-3, github copilot, and dall-e each have more than 1 million signups!
took gpt-3 ~24 months to get there, copilot i think around 6 months, and dall-e only 2.5 months.
Perhaps the single most important virtue in ML engineering is persistence. The ML engineering process is one of repeatedly checking & understanding every detail of the system, until it finally goes through a phase transition from "not working at all" to "working shockingly well".
ChatGPT API coming soon — sign up for the waitlist:
DALL·E Outpainting — extend an existing image, to arbitrary size: openai.com/blog/dall-e-in
Just finished a three-day deep dive for a bug which required carefully combing through every layer of the stack. Bug ended up being simple (just a missing line of code), but in the process gained an understanding of how to improve the system to make it much easier to maintain.
ChatGPT for data science:
Quote
This
is a very big
I have access to the new GPT Code Interpreter. I uploaded an XLS file, no context:
"Can you do visualizations & descriptive analyses to help me understand the data?
"Can you try regressions and look for patterns?"
"Can you run regression diagnostics?"
I also really like this one, created by the prompt "DALL-E dreaming of becoming an AGI":
Software engineering: 50% understanding requirements, 40% complexity management, 9% debugging, 1% solving "interesting" algorithmic problems.
You'll enjoy software engineering a whole lot more if you instead think of the first 99% as the interesting part.
GPT-4 to automatically draft clinical notes within seconds of a patient visit:
a good lesson from recent AI progress is that sometimes things are a bit less impossible than they seem
Copilot for the web:
Quote
Bing and Edge + AI: a new way to search starts today blogs.microsoft.com/blog/2023/02/0
Congratulations to Team paiN, the Dota 2 pro team who just beat OpenAI Five in a 51 minute game. Lots of extremely exciting plays by both teams. Has been a great showcase of what both humans and AIs can do.
From a 2010 email where I told my AI professor I was dropping out to work on : "My background is in systems, and it's been fascinating to poke my head into the world of AI. One day, I hope to return to learn more about the field." It's good to be back :).
The "curse of dimensionality" has turned out to be a misnomer, as neural networks are trainable only due to the counterintuitive behavior of billion-dimensional spaces. Maybe time to be renamed to "gift of dimensionality".
GPT-4 for curiosity-led exploration of a concept:
Quote
here’s a force-directed knowledge graph interface for @OpenAI’s gpt-4. given a topic, it prompts new questions to ask based on its own generated responses, allowing curiosity-led exploration of a concept.
Just launched in 🇮🇳!
Voice mode and image inputs are now in ChatGPT. Starting to feel like the interface to a real AI:
GPT-4 as your personal tutor on Khan Academy.
One of my personal dream applications (fun fact, I was at one point exploring starting a programming education company — always felt that more people would program if they had access to a great teacher).
Quote
Did you hear?
Khan Academy is using GPT-4 from @OpenAI to shape the future of learning.
Starting today, you can sign up to test our AI-powered guide, Khanmigo. A tutor for learners. An assistant for teachers.
Come explore with us! 
Interesting first results of applying GPT embeddings to early diagnosis of dementia — 80% accuracy on a challenge dataset of speech recordings. Note also the paper was submitted in August; we've released much better embeddings since then! eurekalert.org/news-releases/
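The general recipe behind results like this is worth knowing, though the paper's exact pipeline may differ: embed each transcript, then fit a small classifier on top. A hedged sketch with made-up data, using the embeddings endpoint from the Python package of that era plus scikit-learn:

```python
import numpy as np
import openai
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up examples standing in for transcribed speech samples and dementia/control labels.
texts = [
    "then the boy um the boy climbs on the the stool",
    "the girl is asking her brother for a cookie from the jar",
    "water is um running over the the sink the mother does not see",
    "the mother is drying dishes while the sink overflows",
]
labels = np.array([1, 0, 1, 0])

resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
X = np.array([item["embedding"] for item in resp["data"]])

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=2).mean())
```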
GPT-4 for converting your ideas to working prototypes:
Quote
have a graveyard of tightly scoped side projects that i’m just feeding to this thing and it’s just spitting out code…….that just executes lmaooo
I asked how to set neural network init. He accidentally replied with a poem:
You want to be on the edge of chaos
Too small, and the init will be too stable, with vanishing gradients
Too large, and you'll be unstable, due to exploding gradients
You want to be on the edge
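A hedged numerical sketch of the poem's point, using plain linear layers and made-up sizes; the stable scale here is the classic 1/sqrt(fan_in):

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 512, 50
x0 = rng.normal(size=width)                      # ||x0|| is about sqrt(width)

for scale, label in [(0.01, "too small"),
                     (1 / np.sqrt(width), "~1/sqrt(fan_in)"),
                     (0.2, "too large")]:
    x = x0.copy()
    for _ in range(depth):
        W = rng.normal(scale=scale, size=(width, width))
        x = W @ x                                # linear layers, to make the scaling obvious
    print(f"{label:>16}: activation norm after {depth} layers = {np.linalg.norm(x):.2e}")
# Too small → the signal (and its gradients) vanishes; too large → it explodes.
# Scaling the init with 1/sqrt(fan_in) keeps the network near the edge.
```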
DALL·E 2 — generate any image from a text description. Imagination is the limit.
"A Shiba Inu dog wearing a beret and black turtleneck"
"A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window"
openai.com/dall-e-2/
Exciting but overlooked that ChatGPT is primarily an alignment advance—the base model (GPT-3.5) had been publicly available for many months, but making it into a useful chat system required significant strides in reliably following the intent of the developer and the user.
Quote
Replying to @sama
iterative deployment is, imo, the only safe path and the only way for people, society, and institutions to have time to update and internalize what this all means.
Most amazing fact about AI is that even though it’s starting to feel impressive, a year from now we’ll look back fondly on the AI that exists today as quaint & antiquated.
Equal cause for excitement and deliberative caution — important to get the tech and its deployment right.
GPT-4 for personalizing a lesson to your individual learning style:
Quote
GPT-4 will change the way that every student learns!
Imagine a single lesson being instantly converted into a version for visual learners, logical learners, etc.
We built an app that used GPT-4 to do exactly this - without code and in literal seconds.
Here's how we did it 
