What is GPT-4 Turbo? New features, release date, pricing explained
Ever since ChatGPT creator OpenAI released its GPT-4 language model, the world of AI has been waiting with bated breath for news of a successor. And while we know that GPT-5 is in active development, it likely won’t arrive until 2025. This led many to speculate that the company would incrementally improve its existing models for efficiency and speed before developing a brand-new model. That’s exactly what happened in late 2023, when OpenAI released GPT-4 Turbo, a major refinement of its flagship language model.
GPT-4 Turbo introduced several new features, from an increased context window to improved knowledge of recent events. So in this article, let’s break down what GPT-4 Turbo brings to the table and why it’s such a big deal.
Editor’s note: As of May 2024, GPT-4 Turbo is no longer OpenAI’s latest model. That honor now goes to GPT-4o, which elevates ChatGPT’s multimodal capabilities and is also available to free users in a limited capacity. The improvements outlined in this article still apply to the latest model in the GPT-4 family, so we’ve preserved the following text.
In a hurry? Here’s a quick summary of GPT-4 Turbo's features:
- As the name suggests, you can expect faster responses from GPT-4 Turbo compared to its predecessor.
- GPT-4 Turbo supports longer inputs, up to 128K tokens in length.
- While previous models didn’t know about events that took place after September 2021, GPT-4 Turbo has been trained on a much more recent dataset.
- The latest model is significantly cheaper for developers to integrate into their own apps.
- OpenAI will also let developers use the model’s vision, text-to-speech, and AI image generation features via code.
- In ChatGPT, you can now build your own GPTs with customized instructions for specialized tasks. Likewise, you can download existing ones from the GPT Store.
Keep reading to learn more about the features included within GPT-4 Turbo and how it compares to previous OpenAI models.
What is GPT-4 Turbo?
According to OpenAI, GPT-4 Turbo is the company’s “next-generation model.” Most notably, it can retain more information and knows about events that occurred up to April 2023. That’s a big jump from prior GPT generations, which had a pretty restrictive knowledge cut-off of September 2021. OpenAI offered a way around that limitation by letting ChatGPT browse the internet, but that didn’t help developers who wanted to use GPT-4 without relying on external plugins or sources.
Context retention is another area where GPT-4 Turbo is leaps and bounds ahead of previous models. It boasts a context window of 128K tokens, which OpenAI says is roughly equivalent to 300 pages of text. This comes in handy if you need the language model to analyze a long document or remember a lot of information. For comparison, the previous model only supported context windows of 8K tokens (or 32K in some limited cases).
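To put that in perspective, here’s a quick back-of-envelope conversion. The words-per-token and words-per-page ratios below are common rules of thumb, not official OpenAI figures:

```python
# Rough sanity check of the "300 pages" claim.
# ASSUMPTIONS: ~0.75 English words per token and ~320 words per
# printed page; both are rules of thumb, not OpenAI's exact math.
context_tokens = 128_000
words = context_tokens * 0.75  # ~96,000 words
pages = words / 320            # ~300 pages
print(f"~{words:,.0f} words, or about {pages:.0f} pages")
```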
GPT-4 Turbo is simultaneously more capable and cheaper.
GPT-4 Turbo also brings a massive cost reduction for developers, with rates two to three times lower than its predecessor’s. That said, GPT-4 Turbo still costs an order of magnitude more than GPT-3.5 Turbo, the model that launched alongside ChatGPT.
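To make that gap concrete, here’s a minimal cost sketch using the per-1,000-token rates from the comparison table later in this article; the token counts are purely illustrative:

```python
# Launch-era per-1,000-token rates; check OpenAI's pricing page
# for current figures.
PRICES_PER_1K = {
    "gpt-4-turbo":   {"input": 0.01,  "output": 0.03},
    "gpt-3.5-turbo": {"input": 0.001, "output": 0.002},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single request."""
    rates = PRICES_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] + (output_tokens / 1000) * rates["output"]

# Example: a 2,000-token prompt that yields a 500-token reply.
print(f"${request_cost('gpt-4-turbo', 2000, 500):.3f}")    # $0.035
print(f"${request_cost('gpt-3.5-turbo', 2000, 500):.3f}")  # $0.003
```

At these rates, the same request costs more than ten times as much on GPT-4 Turbo as on GPT-3.5 Turbo, which is what that order-of-magnitude difference looks like in practice.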
Unfortunately, you’ll have to spring for a $20-per-month ChatGPT Plus subscription in order to access GPT-4 Turbo. Free users don’t get the now-older vanilla GPT-4 model either, presumably because of its high operating costs. On the plus side, however, Microsoft Copilot has switched over to GPT-4 Turbo. I’ve almost exclusively used Microsoft’s free chatbot over ChatGPT, as it uses OpenAI’s latest language model and can search the internet as an added bonus.
GPT-4 Turbo with Vision
When OpenAI first unveiled GPT-4 in early 2023, it made a big deal about the model’s multimodal capabilities. In short, GPT-4 was designed to handle different kinds of input beyond text, like audio, images, and even video. While this capability didn’t debut alongside the model’s release, OpenAI started allowing image inputs in September 2023.
GPT-4 Turbo with Vision allows the language model to understand images and other non-text inputs.
GPT-4 with Vision allows you to upload an image and have the language model describe or explain it in words. Whether it’s a complex math problem or a strange food that needs identifying, the model can usually glean enough from the image to spit out an answer. I’ve personally used the feature in ChatGPT to translate restaurant menus while abroad and found that it works much better than Google Lens or Translate.
With GPT-4 Turbo, developers can now access the model’s vision features via an API. Pricing is pegged at $0.00765 per 1080×1080 image. This affordability is good news as it means more apps could add the feature going forward.
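In practice, a vision request is just a chat completion whose message content includes an image part. Below is a minimal sketch using OpenAI’s Python SDK; the image URL is a placeholder, and the model name reflects the gpt-4-vision-preview alias the feature launched under:

```python
# Minimal image-understanding request with the OpenAI Python SDK (v1.x).
# ASSUMPTIONS: OPENAI_API_KEY is set in the environment, and the image
# URL below is a placeholder you'd swap for a real one.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # launch-era vision model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What dish is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/menu-item.jpg"}},
            ],
        }
    ],
    max_tokens=300,  # cap the response length explicitly
)
print(response.choices[0].message.content)
```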
GPT-4 Turbo vs GPT-4 and previous OpenAI models: What’s new?
GPT-4 Turbo is an iterative upgrade over GPT-4, but it still brings a handful of compelling features. Luckily, existing GPT-4 users don’t have to do anything, as it’s an automatic upgrade. However, if you’re still using GPT-3.5 or the free version of ChatGPT, GPT-4 Turbo is quite a big jump. Here’s how the three models compare:
| | GPT-3.5 | GPT-4 | GPT-4 Turbo |
| --- | --- | --- | --- |
| Release date | November 2022 | March 2023 | November 2023 |
| Context window | 4,096 tokens (currently 16,385) | 8,192 tokens | 128,000 tokens |
| Knowledge cut-off | September 2021 | September 2021 | April 2023 |
| Cost to developers (per 1,000 tokens) | Input: $0.001 / Output: $0.002 | Discontinued | Input: $0.01 / Output: $0.03 |
| Vision (image input) | Not available (text-only) | Available | Available |
| Image generation | None | Yes, via DALL-E 3 | Yes, via DALL-E 3 |
| Availability | All ChatGPT users | ChatGPT Plus only | ChatGPT Plus only |
How to use GPT-4 Turbo
OpenAI has opened up access to GPT-4 Turbo to all ChatGPT Plus users. It’s worth noting that all GPT-4 chats via ChatGPT Plus will still have input or character limits.
To access GPT-4 Turbo without any restrictions, simply head over to the OpenAI Playground page and log into your account. Then, look for the dropdown menu next to the word “Playground” and change the mode to Chat. Finally, change the model to GPT-4 Turbo (preview). If you don’t see models newer than GPT-3.5, you’ll have to add a payment method to your billing account.
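Alternatively, developers can skip the Playground and call the model directly through the API. The sketch below assumes OpenAI’s v1 Python SDK and uses the launch-era gpt-4-1106-preview model name, which may since have been superseded by a newer alias:

```python
# Minimal GPT-4 Turbo chat completion via the OpenAI Python SDK (v1.x).
# ASSUMPTIONS: OPENAI_API_KEY is set and a payment method is on file,
# just as the Playground requires.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # launch-era GPT-4 Turbo preview name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4 Turbo launch in two sentences."},
    ],
)
print(response.choices[0].message.content)
```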
Most users won’t want to pay for each response, however, so I’d recommend using GPT-4 via ChatGPT Plus instead. While Plus users won’t benefit from the massive 128,000-token context window, the upgrade still offers other features like a more recent knowledge cut-off, image generation, custom GPTs, and GPT-4 Vision. The newer GPT-4o model also introduces back-and-forth voice conversations, a feature that isn’t available to free ChatGPT users in any capacity.
FAQs
Is GPT-4 Turbo free to use?
No, GPT-4 Turbo requires a ChatGPT Plus subscription. Developers pay $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens (1,000 tokens is roughly 750 words).
Can GPT-4 Turbo generate images?
Yes, GPT-4 Turbo can generate images via OpenAI’s DALL-E 3 image creator.
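For developers, the same DALL-E 3 model is also exposed through OpenAI’s images endpoint. A minimal sketch, assuming the v1 Python SDK and an API key in the environment:

```python
# Generate a single image with DALL-E 3 via the OpenAI images endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a fox reading a newspaper",
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)
print(image.data[0].url)  # temporary URL to the generated image
```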
Does GPT-4 Turbo have access to the internet?
No, GPT-4 Turbo is a large language model that only analyzes and generates text on its own. However, you can use ChatGPT’s browsing feature or Microsoft Copilot (formerly Bing Chat) to connect the model to the internet.
Can GPT-4 Turbo read PDFs?
Yes, GPT-4 Turbo can read PDFs via ChatGPT’s Code Interpreter or Plugins features. Both require a paid subscription.