ChatGPT-4 Turbo: What’s New in OpenAI’s Latest Release?
OpenAI has just given ChatGPT a major boost by upgrading it to run on the new GPT-4 Turbo model. This means even better reasoning skills, a more natural conversation flow, and improved performance overall.
Let’s dive into the details of this exciting update and explore the advancements that ChatGPT-4 Turbo brings to the table.
What Is ChatGPT-4 Turbo?
ChatGPT-4 Turbo is OpenAI’s latest language model, released in November 2023. It introduces more powerful features than GPT-4 and GPT-3.5, reflecting OpenAI’s dedication to advancing AI.
What’s New In ChatGPT-4 Turbo?
ChatGPT-4 Turbo ships with a range of new features and multimodal capabilities, namely:
Enhanced Model Capabilities
- Expanded context window: ChatGPT-4 Turbo raises the context window to 128K tokens, enough to process the equivalent of over 300 pages of text in a single prompt. This marks a significant leap from GPT-4’s 8K default and 32K extended limits.
- Updated knowledge: The model now incorporates knowledge of events up to April 2023, ensuring more relevant and up-to-date responses.
Better Function calling
- Multi-function calling: ChatGPT-4 Turbo can call multiple functions in a single message. For example, a user could send a command like “open the car window and turn off the A/C,” which previously would have required multiple round trips with the model (see the sketch after this list).
- Function calling accuracy: GPT-4 Turbo is more likely to return the right function parameters.
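As a loose illustration of how one user message can produce several tool calls, here is a minimal sketch using the OpenAI Python SDK. The function names (open_car_window, set_ac) and their parameter schemas are hypothetical placeholders, not part of any real API.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two hypothetical tools the model may call in one response.
tools = [
    {
        "type": "function",
        "function": {
            "name": "open_car_window",  # placeholder name
            "description": "Open a window of the car",
            "parameters": {
                "type": "object",
                "properties": {"side": {"type": "string", "enum": ["driver", "passenger"]}},
                "required": ["side"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_ac",  # placeholder name
            "description": "Turn the A/C on or off",
            "parameters": {
                "type": "object",
                "properties": {"on": {"type": "boolean"}},
                "required": ["on"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Open the car window and turn off the A/C"}],
    tools=tools,
)

# With parallel function calling, both calls can arrive in a single message.
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.function.name, tool_call.function.arguments)
```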
Better instruction following and JSON mode
GPT-4 Turbo excels at tasks that require precise adherence to instructions, such as producing output in a specific format (for example, “always respond in XML”). It also supports a new JSON mode, which ensures responses are valid JSON. This feature is particularly useful for developers.
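Based on the documented response_format parameter, a minimal sketch of enabling JSON mode might look like this; note that the prompt itself should also mention JSON.

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # JSON mode: output is guaranteed to parse
    messages=[
        {"role": "system", "content": "You are a helpful assistant that responds in JSON."},
        {"role": "user", "content": "List three primary colors with their hex codes."},
    ],
)

data = json.loads(response.choices[0].message.content)  # safe to parse in JSON mode
print(data)
```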
Multiple tools
- Integration with DALL·E 3: Direct integration with DALL·E 3 allows applications to generate images from text descriptions, facilitating creative and design tasks in apps.
- Text-to-Speech (TTS) API: The model now offers high-quality speech generation from text, with multiple voices and model variants optimized for different use cases, enhancing interaction with users through auditory responses (both endpoints are sketched below).
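A brief sketch of both endpoints, assuming the standard OpenAI Python SDK; the prompt, the voice choice, and the output filename are illustrative only.

```python
from openai import OpenAI

client = OpenAI()

# DALL·E 3: generate an image from a text description.
image = client.images.generate(
    model="dall-e-3",
    prompt="A minimalist logo for a weather app",  # illustrative prompt
    size="1024x1024",
    n=1,
)
print(image.data[0].url)

# TTS: synthesize speech from text and save it as an MP3 file.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Your order has shipped.",
)
speech.stream_to_file("notification.mp3")  # illustrative filename
```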
Model Customization
- Fine-tuning access: An experimental access program for GPT-4 fine-tuning, aimed at achieving significant improvements and offering a natural next step for developers already fine-tuning GPT-3.5 (that existing flow is sketched after this list).
- Custom models program: This exclusive program enables organizations to collaborate with OpenAI researchers to create custom versions of GPT-4 tailored to their specific needs and data, offering unparalleled customization.
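GPT-4 fine-tuning itself is gated behind the experimental program, but the existing GPT-3.5 Turbo flow it builds on looks roughly like this; the training file path is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of training examples, then start a fine-tuning job.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```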
Who Can Use ChatGPT-4 Turbo?
OpenAI has announced that GPT-4 Turbo is currently available for all paying developers to try by passing gpt-4-1106-preview as the model name in the API, as shown below. The company also plans to release a stable, production-ready model in the coming weeks; for now, the model is accessible only in preview form.
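For instance, a minimal chat completion call against the preview model might look like this, assuming the OpenAI Python SDK and an API key in the environment; the prompt is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # preview identifier from the announcement
    messages=[{"role": "user", "content": "Summarize the GPT-4 Turbo release in one sentence."}],
)
print(response.choices[0].message.content)
```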
Other Updates from OpenAI
In addition to the expanded access and capabilities of ChatGPT-4 Turbo, OpenAI has introduced several other updates that further enhance user experience:
Updated ChatGPT-3.5 Turbo
OpenAI has also unveiled an updated version of ChatGPT-3.5 Turbo, which now features a default 16K context window. This enhancement expands its capacity and brings significant improvements in instruction adherence, JSON mode functionality, and the ability to handle multiple function calls simultaneously.
In testing, the updated 3.5 Turbo model demonstrated a 38% improvement on tasks requiring precise format following, such as generating JSON, XML, and YAML.
Developers looking to leverage the upgraded capabilities of this model can do so by using the identifier gpt-3.5-turbo-1106 in their API calls, as illustrated below. The previous iteration will remain accessible under the identifier gpt-3.5-turbo-0613 in the API until June 13, 2024, ensuring a smooth transition to the latest version.
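Switching to the updated model is just a change of identifier; a minimal sketch (the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# New identifier; gpt-3.5-turbo-0613 remains available until June 13, 2024.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Return a YAML list of three fruits."}],
)
print(response.choices[0].message.content)
```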
Assistants API
OpenAI’s new Assistants API revolutionizes the development of AI applications by offering advanced features like Code Interpreter, Retrieval, and enhanced function calling. This initiative aims to simplify the creation of complex, agent-like applications across various domains, such as coding assistance, data analysis, and more.
The three new tools assistants can call are:
- Code Interpreter: Enables assistants to execute Python code, create graphs, and process diverse data, supporting complex problem-solving.
- Retrieval: Expands an assistant’s knowledge with external data, simplifying the integration of proprietary information without manual data processing.
- Function Calling: Allows assistants to invoke custom functions and incorporate their responses directly, enhancing interactivity (a minimal example follows this list).
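A minimal sketch of the Assistants API flow with the Code Interpreter tool, assuming the OpenAI Python SDK; the assistant name, instructions, and question are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Create an assistant that can run Python via the Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Data helper",  # illustrative name
    instructions="Answer questions by running Python code when useful.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations happen in threads: add a user message, then run the assistant.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.id, run.status)  # poll the run until it completes, then read the thread's messages
```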
Lower prices
OpenAI has reduced its prices significantly, enhancing affordability for AI developers.
GPT-4 Turbo’s input and output tokens are now priced at $0.01 and $0.03 per 1,000 tokens, respectively, a substantial reduction. The new GPT-3.5 Turbo model is even cheaper, at $0.001 and $0.002 per 1,000 input and output tokens.
The fine-tuned GPT-3.5 Turbo 4K model also sees input tokens drop to $0.003 and output tokens to $0.006 per 1,000 tokens; a rough cost calculation is sketched below.
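To make the numbers concrete, here is a small back-of-the-envelope calculation using the prices quoted above; the token counts are illustrative.

```python
# USD per 1,000 tokens (input, output), from the prices quoted above.
PRICES = {
    "gpt-4-1106-preview": (0.01, 0.03),
    "gpt-3.5-turbo-1106": (0.001, 0.002),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of a single request."""
    price_in, price_out = PRICES[model]
    return (input_tokens / 1000) * price_in + (output_tokens / 1000) * price_out

# Example: a 2,000-token prompt with a 500-token reply on GPT-4 Turbo.
print(estimate_cost("gpt-4-1106-preview", 2000, 500))  # 0.035 -> roughly 3.5 cents
```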
These price cuts are OpenAI’s strategy to stay competitive and make its sophisticated AI models more accessible to developers.
Higher rate limits
Furthermore, OpenAI has increased the token rate limits for all paying GPT-4 customers, effectively doubling the tokens per minute allowance to support application scaling.
The organization has also clarified its guidelines on usage tiers and rate limits, ensuring developers have a transparent understanding of how their application’s usage limits will scale.
Although GPT-4 Turbo is in its preview phase with a fixed rate limit of 20 requests per minute and 100 requests per day, OpenAI suggests that rate limit increases may follow the release of a production version.
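Given those preview caps, one common pattern is to retry with exponential backoff when a request is rejected for rate limiting; a minimal sketch with the OpenAI Python SDK (the retry count and delays are arbitrary choices):

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI()

def chat_with_retry(messages, model="gpt-4-1106-preview", max_retries=5):
    """Retry a chat completion with exponential backoff when rate limited."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("Rate limit retries exhausted")

reply = chat_with_retry([{"role": "user", "content": "Hello!"}])
print(reply.choices[0].message.content)
```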
ChatGPT-4 Turbo vs ChatGPT-4 vs ChatGPT-3.5 Turbo
Below is a comparative overview of ChatGPT-4 Turbo, ChatGPT-4, and ChatGPT-3.5 Turbo.
| Feature | ChatGPT-4 Turbo | ChatGPT-4 | ChatGPT-3.5 Turbo |
| --- | --- | --- | --- |
| Pricing | $0.01 / $0.03 per 1K input/output tokens | $20/user/month (ChatGPT Plus) | $0.001 / $0.002 per 1K tokens (new model); $0.003 / $0.006 (fine-tuned) |
| Knowledge Cutoff | April 2023 | April 2023 | January 2022 |
| Information Input | Text, text-to-speech, image, file | Text, images | Text |
| Who Can Access | Paying developers (API access) | ChatGPT Plus users | All ChatGPT users |
| Context Window | 128K tokens | 8K tokens (gpt-4); 32K tokens (gpt-4-32k) | 16K tokens by default; 4K for fine-tuned |
Bottom Line
OpenAI’s newest release, ChatGPT-4 Turbo, marks a big moment in the evolution of conversational AI models. It brings significant improvements that make AI easier for developers to use and open the door to more advanced and nuanced AI applications.