Everything You Need to Know About OpenAI DevDay
OpenAI held its first ever DevDay event on November 6th, 2023, announcing major updates across its developer platform and new products. There were dozens of announcements ranging from powerful new AI models like GPT-4 Turbo to developer tools like the Assistants API.
Here are the key announcements from DevDay:
GPT-4 Turbo with 128K context window
The star of the show was GPT-4 Turbo, an upgraded and more capable version of GPT-4. It brings three major improvements:
Lower pricing - Input tokens are 3x cheaper and output tokens are 2x cheaper than regular GPT-4, making GPT-4 Turbo much more affordable to use in applications.
128K context window - Up from the 8K default (32K maximum) of the original GPT-4, the new Turbo model can fit the equivalent of more than 300 pages of text in a single prompt, letting it draw on far more context in a conversation.
Fresher world knowledge - GPT-4 Turbo's training data now extends to April 2023, so it can discuss more recent events.
Developers can start using GPT-4 Turbo today by passing "gpt-4-1106-preview" in their API requests. The stable production version is expected in the coming weeks.
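As a minimal sketch, a request against the preview model with the official openai Python library (v1.x) might look like the following; the prompt is illustrative, and the snippet assumes your key is set in the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative request against the GPT-4 Turbo preview model
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize OpenAI DevDay in one sentence."}],
)
print(response.choices[0].message.content)
```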
GPT-3.5 Turbo improvements
In addition to the new GPT-4 Turbo, OpenAI announced upgrades to GPT-3.5 Turbo:
16K context window by default, up from 4K previously
3x cheaper pricing on input tokens and 2x cheaper pricing on output tokens
Support for new features such as improved instruction following and JSON mode
These improvements ship in the updated model "gpt-3.5-turbo-1106". Apps using the regular gpt-3.5-turbo alias will be automatically upgraded on December 11th.
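Here is a rough sketch of JSON mode with the updated model, again assuming the v1.x openai Python library and an illustrative prompt:

```python
from openai import OpenAI

client = OpenAI()

# JSON mode constrains the model to return valid JSON. Note that the word
# "JSON" must appear somewhere in the messages or the API rejects the request.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three colors under the key 'colors'."},
    ],
)
print(response.choices[0].message.content)
```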
Assistants API and new developer tools
One of the biggest announcements was the new Assistants API, which gives developers building blocks for creating their own assistant agents (a minimal code sketch follows the list). Key capabilities include:
Persistent threads - Hold long-running conversations in threads with no size limit; the API manages the context window for you.
Code Interpreter - Write and run Python code in a sandboxed environment, letting assistants iterate on code to solve challenging programming and math problems.
Retrieval - Incorporate knowledge from proprietary documents and data sources.
Function calling - Call APIs and take actions defined by the developer.
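Putting those pieces together, a minimal sketch of the beta flow with the v1.x openai Python library might look like this; the assistant name, instructions, and question are illustrative:

```python
import time

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Create an assistant with the Code Interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="Math Tutor",  # illustrative name and instructions
    instructions="You are a math tutor. Write and run code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# 2. Threads persist the conversation; the API manages the context window.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Solve 3x + 11 = 14."
)

# 3. Runs are asynchronous: start one, then poll until it reaches a terminal state.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread.id)

# 4. Read back the thread's messages (returned newest first).
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)
```

Runs being asynchronous is why the sketch polls: the assistant may spend several steps writing and executing code before the answer lands in the thread.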
There is also a new Assistants playground for building assistants without writing any code, simply by describing what you want in natural language. Additional new tools include visibility into log probabilities, reproducible outputs using a seed parameter, and an experimental access program for GPT-4 fine-tuning.
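As a small sketch of reproducible outputs: pass the same seed with identical parameters and compare system_fingerprint values across responses (the prompt below is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# A fixed seed with identical parameters makes sampling mostly deterministic.
# system_fingerprint identifies the backend configuration, so you can detect
# server-side changes that may break determinism between calls.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=42,
    messages=[{"role": "user", "content": "Pick a random animal."}],
)
print(response.system_fingerprint)
print(response.choices[0].message.content)
```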
Multimodal capabilities
OpenAI unveiled new modalities now available in the API (sketched in code after the list):
GPT-4 Turbo with vision - Analyze images, answer questions about them, and generate captions. Accessed with the "gpt-4-vision-preview" model.
DALL-E 3 - Generate images through the Images API. Customers like Snap and Coca-Cola are already using it.
Text-to-speech - Two new voice models and six preset voices for converting text into human-like speech.
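Here is a combined sketch of the three modalities, assuming the v1.x openai Python library; the image URL, prompts, and output file name are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Vision: mix text and an image URL in a single chat message.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,  # the vision preview defaults to a low token limit
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(vision.choices[0].message.content)

# DALL-E 3: generate an image through the Images API.
image = client.images.generate(model="dall-e-3", prompt="a watercolor fox", size="1024x1024")
print(image.data[0].url)

# Text-to-speech: render text as audio with one of the six preset voices.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Hello from DevDay!")
speech.stream_to_file("hello.mp3")
```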
Lower pricing and higher rate limits
Pricing was reduced across many API offerings (a worked example follows the list):
GPT-4 Turbo costs 3x less for input tokens and 2x less for output tokens compared to regular GPT-4.
GPT-3.5 Turbo received a 3x price cut on input tokens and 2x on output.
Fine-tuned GPT-3.5 models are now much cheaper, with input tokens 4x cheaper and output tokens 2.7x cheaper.
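To make the new rates concrete: at GPT-4 Turbo's announced pricing of $0.01 per 1K input tokens and $0.03 per 1K output tokens, a request that sends 100,000 input tokens and receives 1,000 output tokens costs about $1.00 + $0.03 = $1.03.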
Rate limits were also doubled for all paying GPT-4 customers, and OpenAI published usage tiers showing how limits increase automatically with usage.
GPTs - Customizable versions of ChatGPT
One of the most exciting announcements was GPTs, a way to create customized versions of ChatGPT for specific use cases. With GPTs, anyone can:
Give ChatGPT custom instructions, knowledge sources, and skills/actions.
Build them without coding, just via conversation.
Share publicly or keep private.
Add skills like web search, image generation, and data analysis.
A GPT Store will launch later this month so people can easily find GPTs created by the community. Enterprise customers can also build internal GPTs to help with tasks like customer service and employee onboarding.
GPTs provide a new way to tap into ChatGPT's capabilities for specialized domains.
ChatGPT Plus enhancements
Finally, ChatGPT Plus was updated in two ways:
The model now has knowledge of events up to April 2023.
The interface was simplified so you can access DALL-E, browsing, and other skills without switching models.
The future of AI assistants
With announcements like the Assistants API and GPTs, OpenAI is clearly investing heavily in the next generation of AI assistants. Rather than building one monolithic ChatGPT, the platform makes it easy to create specialized assistants for any use case.
These assistants can take advantage of not just natural language conversation, but also capabilities like analyzing data, generating media, and taking actions through APIs. Over time, they may even be able to execute tasks in the real world.
OpenAI says it is thinking carefully about the societal implications as we move toward more general-purpose AI agents. For now, the new developer tools and updates offer a glimpse of what the future holds.
How to try the new features
If you want to experience the new AI capabilities first-hand, sign up for an OpenAI API key or try ChatGPT Plus. Some features are restricted to ChatGPT Enterprise.