OpenAI has updated its GPT-4-Turbo AI model, which now offers significantly better responses and analysis features.
For now, only developers can access the model, which comes with vision technology for analyzing video, image, and audio content. However, OpenAI has stated that these capabilities will be added to ChatGPT in the near future.
For the first time, third-party developers can use GPT-4-Turbo with vision technology, which could lead to exciting new applications and services in fashion, coding, and gaming.
The updated model now includes information up until December 2023, which is when the AI’s training ended. Before this, its knowledge cutoff was April 2023.
What is GPT-4-Turbo?
GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling. https://t.co/cbvJjij3uL
Below are some great ways developers are building with vision. Drop yours in a reply 🧵
— OpenAI Developers (@OpenAIDevs) April 9, 2024
The main aim of GPT-4-Turbo is to simplify developers’ work when they access the model through OpenAI’s API. The latest update streamlines that workflow: previously, separate models were required for images and text, whereas a single model can now handle both.
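To illustrate the point, a single request can now combine text and an image in one message. Below is a minimal sketch of what such a request payload looks like, assuming the OpenAI chat completions API; the prompt and image URL are placeholders, and the request is only constructed here, not sent (sending it would require a valid API key):

```python
# Sketch of a combined text-and-image request payload for the OpenAI
# chat completions API. The prompt and image URL are placeholders;
# the payload is built but never actually sent.

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build a single chat message mixing text and an image."""
    return {
        "model": "gpt-4-turbo",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "Describe this outfit in one sentence.",
    "https://example.com/outfit.jpg",  # placeholder image URL
)
print(request["model"])  # gpt-4-turbo
```

Because text and image parts live in one `content` list, developers no longer need to route images and text to separate models.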
Going forward, the model’s ability to analyze images and videos will be enhanced and folded into consumer apps like ChatGPT, improving how the AI understands visuals.
Google has begun introducing similar capabilities with Gemini 1.5 Pro. However, as with OpenAI, it is currently only available on developer platforms, not to everyday users.
A well-known example is Devin, a coding tool from Cognition Labs that can create intricate programs based on a simple instruction.
What can you do with GPT-4-Turbo?
Recently, GPT-4 has underperformed in benchmarks compared with newer models such as Claude 3 Opus and Google’s Gemini, and it has even been outdone by some smaller models in certain areas.
The new updates aim to close that gap and offer attractive new features to business users until GPT-5 is released.
The update keeps the 128,000 token limit, which is about the size of a 300-page book. It’s not the biggest available, but it suits most needs.
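The 300-page comparison follows from some back-of-the-envelope arithmetic. The tokens-per-word and words-per-page figures below are common rules of thumb, not official values:

```python
# Rough estimate of how many book pages fit in a 128,000-token window.
# Assumptions (rules of thumb, not official figures):
#   ~0.75 words per token, ~300 words per printed page.
context_tokens = 128_000
words = context_tokens * 0.75   # ~96,000 words
pages = words / 300             # ~320 pages
print(round(pages))  # 320
```

With slightly different assumptions about page density, the same window lands at roughly the 300-page figure the article cites.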
Until now, OpenAI has worked on making ChatGPT understand audio, text, and images. With the new update, video features will reach more users. Once this is available in ChatGPT, users might be able to share short videos for the AI to summarize or highlight important parts.
What do we think?
I think the GPT-4-Turbo update is a welcome step. It should make it easier for developers to build new apps, especially ones that work with images and video.
That opens the door to fun possibilities in fashion, coding, and gaming. It’s still developer-only for now, but soon everyone might be able to use it to understand videos better.
Even though GPT-4 has struggled in some recent comparisons, this update could keep it competitive until GPT-5 arrives. I’m excited to see what people build with it.