OpenAI has just released its latest AI model, GPT-4, which demonstrates human-level performance on a variety of professional and academic benchmarks. GPT-4 is a large multimodal model that accepts image and text inputs and generates text outputs. In this article, we look at the capabilities, limitations, and risks of GPT-4. By the end, you’ll have a better understanding of GPT-4’s potential impact and of what it is and isn’t capable of.
Capabilities
GPT-4 improves on the previous GPT-3.5 model in reliability, creativity, and the handling of nuanced instructions. OpenAI tested the model against a variety of benchmarks, including simulated exams originally designed for humans, and found that GPT-4 outperformed existing large language models. It also performs well in languages other than English, including low-resource languages such as Latvian, Welsh, and Swahili.
Visual Inputs
GPT-4 accepts prompts that mix text and images and generates text outputs from them. While visual input is still in a research-preview stage, the model shows capabilities with images similar to those it shows with text-only input.
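Because visual input is still a research preview, the request format for images has not been made generally available; the sketch below is purely hypothetical, and the message structure, example image URL, and the assumption that the “gpt-4” model name will accept images are all illustrative guesses rather than documented behavior.

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

# Hypothetical sketch only: visual input is a research preview, so the exact
# request format and model identifier below are assumptions for illustration.
response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder for a vision-capable variant, once available
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```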
Steerability
OpenAI has worked on every aspect of the plan outlined in its post on defining AI behavior, including steerability. Developers can now prescribe their AI’s style and task by describing those directions in the “system” message, which lets API users customize their own users’ experience within limits.
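As a rough sketch of how this looks in practice, a developer might set a system message through the Chat Completions API along these lines (using the openai Python package; the tutor persona and prompt are invented for the example, and an OPENAI_API_KEY environment variable is assumed):

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# The "system" message prescribes style and task; the "user" message is the request.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a Socratic tutor. Never give answers directly; "
                       "guide the student toward the solution with questions.",
        },
        {"role": "user", "content": "How do I solve 3x + 7 = 22?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The same user prompt sent with a different system message would yield a very different tone and behavior, which is the point of steerability.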
Limitations
GPT-4 is not perfect and has limitations similar to those of previous GPT models. It can still “hallucinate” facts and make reasoning errors, so care must be taken when using language model outputs, especially in high-stakes settings. GPT-4 generally lacks knowledge of events after September 2021, and it can make simple reasoning errors and accept obviously false claims from a user as true. It can also fail at hard problems in the same way humans do, such as introducing security vulnerabilities into the code it produces. GPT-4 can make confident but incorrect predictions and does not always double-check its work. Interestingly, the base model is good at predicting the accuracy of its own responses, but this calibration is reduced by the post-training process.
Risks & Mitigation
While GPT-4’s capabilities are significant, they introduce new risks, such as generating harmful advice, buggy code, or inaccurate information. OpenAI has worked to mitigate these risks by engaging more than 50 experts to test the model and by collecting additional data to improve GPT-4’s ability to refuse dangerous requests. As a result, OpenAI has made many improvements that make GPT-4 safer than GPT-3.5: GPT-4 is 82% less likely to respond to requests for disallowed content than its predecessor and adheres better to policies on sensitive topics such as medical advice and self-harm. Although OpenAI has made the model more resistant to misuse, it is still possible to generate content that violates the usage guidelines. OpenAI says GPT-4 could have both beneficial and harmful effects on society, so it is working with outside researchers to understand the potential consequences.
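Applications built on the model can also add their own safety layer rather than relying solely on GPT-4’s built-in refusals. As one possible illustration, separate from OpenAI’s own mitigation work, a developer might screen user input with OpenAI’s Moderation endpoint before forwarding it to GPT-4:

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

def is_flagged(user_input: str) -> bool:
    """Check input against OpenAI's Moderation endpoint (separate from GPT-4)."""
    result = openai.Moderation.create(input=user_input)
    return result["results"][0]["flagged"]

user_input = "Tell me about safe medication storage."  # made-up example input
if is_flagged(user_input):
    print("Request declined by the application's safety layer.")
else:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_input}],
    )
    print(response["choices"][0]["message"]["content"])
```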
Training Process
Like previous GPT models, the GPT-4 base model was trained to predict the next word in a document, using publicly available data as well as data licensed by OpenAI. The model’s behavior is then fine-tuned with reinforcement learning from human feedback (RLHF) to align it with the user’s intent within guardrails.
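To make “predict the next word” concrete, here is a toy PyTorch sketch of the underlying objective, cross-entropy loss on next-token prediction; the tensors are random stand-ins and have nothing to do with OpenAI’s actual training setup:

```python
import torch
import torch.nn.functional as F

# Toy illustration of the pretraining objective: the model is trained to
# assign high probability to the actual next token at every position.
vocab_size = 50_000
batch, seq_len = 2, 8

logits = torch.randn(batch, seq_len, vocab_size)          # stand-in for model predictions
targets = torch.randint(0, vocab_size, (batch, seq_len))  # stand-in for the true next tokens

# Cross-entropy between the predicted distributions and the true next tokens.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(f"next-token prediction loss: {loss.item():.3f}")
```

RLHF then further adjusts the pretrained model using a reward signal derived from human preference comparisons, which is beyond the scope of this toy example.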
Predictable Scaling
A key goal of the GPT-4 project was to build a deep learning stack that scales predictably. OpenAI developed infrastructure and optimization methods that behave predictably across many scales, which allowed it to accurately predict GPT-4’s final training loss from much smaller runs.
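The general approach can be illustrated with a small curve-fitting exercise: fit a power law to the final losses of several small runs and extrapolate it to a much larger compute budget. The data points and functional form below are invented for illustration and are not OpenAI’s measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented data: final loss of four small training runs, with compute
# measured relative to the smallest run.
relative_compute = np.array([1.0, 10.0, 100.0, 1000.0])
final_loss = np.array([3.50, 2.92, 2.50, 2.21])

# Simple scaling-law form: loss decays as a power law in compute
# toward an irreducible floor.
def scaling_law(c, a, b, floor):
    return a * c ** (-b) + floor

params, _ = curve_fit(scaling_law, relative_compute, final_loss, p0=[1.0, 0.1, 1.0])

# Extrapolate to a run one million times larger than the smallest one.
predicted = scaling_law(1e6, *params)
print(f"predicted final loss at 1,000,000x compute: {predicted:.2f}")
```

OpenAI reports predicting GPT-4’s final loss by extrapolating from models trained with far less compute; the fit above only sketches that general idea.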
Availability
Microsoft has confirmed that the new Bing search experience now runs on GPT-4. You can also access GPT-4 with a ChatGPT Plus subscription, albeit with a usage cap that OpenAI may adjust based on demand and system performance. The company is considering adding another subscription tier for higher GPT-4 usage. To access the GPT-4 API, you need to join the waitlist.
Conclusion
The release of GPT-4 marks an important step in OpenAI’s efforts to scale up deep learning. Although imperfect, it has demonstrated human-level performance on various academic and professional benchmarks, making it a powerful tool. However, care should be taken when using language model outputs in high-stakes contexts. OpenAI has worked to mitigate risks and to build a deep learning stack that scales predictably, which will be essential for future AI systems.