Microsoft Corp-backed startup OpenAI began the rollout of GPT-4, a powerful artificial intelligence (AI) model that succeeds the technology behind the wildly popular ChatGPT.
GPT-4 is “multimodal,” meaning it can generate content from image and text prompts.
What is the difference between GPT-4 and GPT-3.5?
GPT-3.5 accepts only text prompts, whereas the latest version of the large language model can also take images as inputs, recognizing and analyzing the objects in a picture.
GPT-3.5 is limited to responses of roughly 3,000 words, while GPT-4 can generate responses of more than 25,000 words.
GPT-4 is 82% less likely than its predecessor to respond to requests for disallowed content, and it scores 40% higher on certain tests of factuality.
It will also let developers set their AI’s tone and style. For example, GPT-4 can adopt a Socratic conversational style and respond to questions with questions. The previous iteration of the technology had a fixed tone and style.
ChatGPT users will soon have the option to change the chatbot’s tone and style of responses, OpenAI said.
What can GPT-4 do?
The latest version has outperformed its predecessor on the US bar exam and the Graduate Record Examination (GRE). GPT-4 can also help individuals calculate their taxes, as a demonstration by Greg Brockman, OpenAI’s president, showed.
The demo showed it could take a photo of a hand-drawn mock-up for a simple website and create a real one.
Be My Eyes, an app that caters to visually impaired people, will provide a virtual volunteer tool powered by GPT-4.
Limits of GPT-4
According to OpenAI, GPT-4 has limitations similar to those of its prior versions and is “less capable than humans in many real-world scenarios.”
Incorrect responses, known as “hallucinations,” have challenged many AI programs, including GPT-4.
OpenAI said GPT-4 could rival human propagandists in many domains, especially when teamed up with a human editor.
It cited an example in which GPT-4 came up with plausible suggestions when asked how to get two parties to disagree with each other.
OpenAI Chief Executive Officer (CEO) Sam Altman said GPT-4 was the company’s “most capable and aligned” model, referring to its alignment with human values and intent, though he added that “it is still flawed.”
GPT-4 generally lacks knowledge of events after September 2021, when most of its training data was cut off. It also does not learn from experience.
Access to GPT-4
While GPT-4 can process both text and image inputs, only the text-input feature is being made available, to ChatGPT Plus subscribers and, via a waitlist, to software developers; the image-input capability is not yet publicly available.
The subscription plan, which offers faster response time and priority access to new features and improvements, was launched in February and costs $20 monthly.
GPT-4 powers Microsoft’s Bing AI chatbot and some features on the subscription tier of the language-learning platform Duolingo. (Reuters)