About Us

Welcome to The GPT-4o, your entry point for trying OpenAI's GPT-4o. GPT-4o ("o" for "omni") is a step towards much more natural human-computer interaction: it accepts any combination of text, audio, and image as input and generates any combination of text, audio, and image as output. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on English text and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
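Since GPT-4o is available in the API, here is a minimal sketch of what a multimodal (text plus image) request might look like using OpenAI's Chat Completions message format. The prompt and image URL are illustrative placeholders, and the actual network call is left commented out since it requires the `openai` package and an API key:

```python
# Sketch: assembling a text + image request for GPT-4o in the
# Chat Completions message format. Prompt and URL are placeholders.

def build_multimodal_request(prompt: str, image_url: str) -> dict:
    """Build the request payload for a text + image question."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",  # placeholder image URL
)

# To actually send it (requires `pip install openai` and an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)
```

The same payload shape works for text-only requests by passing a plain string as the message content.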

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.
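The three-stage Voice Mode pipeline described above can be sketched as follows. The stage functions here are stand-in stubs (a real system would call an ASR model, a text LLM, and a TTS model); the point is only to show how the stages chain together, why latencies add up, and where information is lost:

```python
# Sketch of the pre-GPT-4o Voice Mode pipeline: audio -> text -> text -> audio.
# All three stages are placeholder stubs standing in for real models.

def transcribe(audio: bytes) -> str:
    """Stage 1 stub: speech-to-text. Tone, multiple speakers, and
    background noise are discarded here; only the words survive."""
    return "hello there"

def generate_reply(text: str) -> str:
    """Stage 2 stub: the text-only LLM (GPT-3.5 / GPT-4) sees just the
    transcript, so it cannot react to *how* something was said."""
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    """Stage 3 stub: text-to-speech. The voice cannot carry laughter or
    emotion that the LLM never expressed in its text output."""
    return text.encode("utf-8")

def voice_mode(audio_in: bytes) -> bytes:
    # Each hop adds its own latency; the user-perceived delay is the
    # sum of all three stages, hence the multi-second averages.
    transcript = transcribe(audio_in)
    reply_text = generate_reply(transcript)
    return synthesize(reply_text)

audio_out = voice_mode(b"<raw audio frames>")
```

GPT-4o collapses these three stages into a single model, which is what removes both the added latency and the information loss at each boundary.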

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.

Join us on this journey as we continue to push the limits of what's possible with AI, creating tools that are not just technologically advanced, but that also serve to inspire, educate, and empower.

If you are interested in learning more, please visit: https://openai.com/index/hello-gpt-4o/.

Text Evaluation with GPT-4o of OpenAI

GPT-4o sets a new high-score of 88.7% on 0-shot COT MMLU (general knowledge questions).

On these text benchmarks, GPT-4o performs at or above the level of previous models such as GPT-4 Turbo.

Audio ASR Performance with GPT-4o of OpenAI

GPT-4o also improves speech recognition (ASR) performance over Whisper-v3 across languages, particularly lower-resourced ones.

Audio translation with GPT-4o of OpenAI

GPT-4o sets a new state-of-the-art on speech translation and outperforms Whisper-v3 on the MLS benchmark.
