OpenAI officially reveals GPT-4, its new ‘human-level performance’ artificial intelligence

To supercharge its famous ChatGPT, OpenAI has introduced GPT-4, which can even process images.


OpenAI has announced to the world that its new artificial intelligence model, GPT-4, is now available.

The new generation, which will power the AI behind ChatGPT and has already been used in Microsoft’s search engine, comes with several promises up its sleeve, the most striking of which is that it accepts images as input.

What GPT-4 brings

While GPT-3.5 works only with text, the new version can also generate text from image inputs.

“While less capable than humans in many real-world scenarios,” the OpenAI team wrote Tuesday, “it exhibits human-level performance in various professional and academic benchmarks.”

The company reports that GPT-4 passed mock exams (such as the Uniform Bar, LSAT, GRE, and various AP tests) scoring “in the top 10% of test takers,” whereas GPT-3.5 scored around the bottom 10%.

“In casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference arises when the complexity of the task reaches a sufficient threshold: GPT-4 is more reliable, creative, and capable of handling much more nuanced instructions than GPT-3.5,” the company adds.

First steps

OpenAI notes that, for now, image inputs remain a research preview and are not yet publicly available.

And there’s an even more important caveat: “Despite its capabilities, GPT-4 has similar limitations to previous GPT models. Most importantly, it is still not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”

OpenAI says that GPT-4 will be available through both ChatGPT and the API. You will need to be a ChatGPT Plus subscriber to get access, and there will be a usage cap when experimenting with the new model. API access for the new model is handled through a waitlist.
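For developers who clear the waitlist, a minimal sketch of what a GPT-4 request could look like with OpenAI’s Python library is shown below; the prompt content is purely illustrative, and the exact model name and client version available to your account may differ.

```python
# Minimal sketch of calling GPT-4 through OpenAI's Chat Completions API.
# Assumes the `openai` Python package is installed and an API key is set
# in the OPENAI_API_KEY environment variable; the messages are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes your account has been granted GPT-4 API access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GPT-4 adds over GPT-3.5."},
    ],
)

# Print the assistant's reply from the first completion choice.
print(response["choices"][0]["message"]["content"])
```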