The Next Big Disaster Is Going to Be Caused by ChatGPT Plugins
When ChatGPT first launched, many users were disappointed that it couldn't pull answers from the Internet and relied solely on its training data. Now OpenAI has announced ChatGPT plugins, which will finally let this ever-hungry LLM onto the web.
The new ChatGPT, powered by GPT-4, can now make API calls to third-party services, which makes it far more capable. OpenAI says these tools were "designed specifically for language models with safety as a core principle," but in reality they are a disaster waiting to happen.
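To picture the mechanics, here is a minimal sketch of that call flow in TypeScript. The weather service, endpoint, and parameter names are entirely hypothetical; they only illustrate the pattern of a model-initiated API call, not OpenAI's actual implementation.

```typescript
// Hypothetical example: the model decides, from a plugin's API description,
// that it should call a weather service to answer the user's question.
const modelGeneratedCall = {
  endpoint: "https://weather.example.com/forecast", // hypothetical plugin API
  params: { city: "Berlin", days: 3 },              // arguments chosen by the model
};

// The ChatGPT runtime executes the HTTP request on the model's behalf.
async function executePluginCall(
  call: typeof modelGeneratedCall
): Promise<unknown> {
  const url = new URL(call.endpoint);
  for (const [key, value] of Object.entries(call.params)) {
    url.searchParams.set(key, String(value));
  }
  const response = await fetch(url); // live Internet access: the new attack surface
  return response.json();           // the result is fed back into the model's context
}

// The model then writes its user-facing answer from the returned data.
executePluginCall(modelGeneratedCall).then((forecast) => console.log(forecast));
```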
In the blog post announcing ChatGPT plugins, OpenAI emphasized that it was "gradually rolling out plugins" to study their real-world impact. That is a good start, but a phased rollout still means everyone gets access after a certain amount of time. The chance of disruption is much lower than with a full launch, but it's the difference between diving straight into a pool and wading in slowly: either way, you end up in the water.
In line with this focus on "safety," only 13 plugins are currently available. They appear to have been carefully chosen to showcase only the benign things an Internet-connected chatbot can do. OpenAI hand-picked the capabilities it wanted to add to ChatGPT, such as planning trips, ordering groceries and food, and tapping Wolfram's computational power. In that sense, the ChatGPT plugins are "safe" for now, but there is no telling what comes next.
There is no question these plugins will make ChatGPT more useful, but OpenAI seems to be missing the bigger picture. Gating the beta behind a waitlist and capping the number of users makes it safer, but those guardrails won't hold once plugins reach the full ChatGPT user base. Add the fact that developers can build their own plugins, and you have a disaster waiting to happen.
Fuel for the Fire
In the rush to bolt plugins onto ChatGPT, OpenAI seems to have forgotten the problems the service already has. Since launch, the company has been playing cat and mouse with ChatGPT jailbreakers on Reddit: every time a new exploit gained attention, OpenAI's experts stepped in and patched it. But this reactive approach has let many jailbreaks go unnoticed, and some social-engineering prompts still work on GPT-4.
The model's technical report shows that GPT-4 is even more capable than GPT-3.5. Researchers warned that GPT-4 could become "agentic": it could pursue goals beyond what it was explicitly designed to do. Because plugins connect ChatGPT Plus, which is built on GPT-4, to web APIs, that agentic behavior now has real-world levers to pull.
These plugins can also be built by outside developers, so the only thing standing between users and a dangerous plugin is OpenAI. The company is currently rolling out the ability for developers to create their own plugins and is working on an open standard that gives AI an interface to the web. OpenAI wants its plugin standard to become for chatbots what REST is for Web APIs.
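For a sense of how lightweight that interface is, here is a sketch of a plugin manifest, loosely based on the shape described in OpenAI's public plugin documentation; the field names and values are illustrative, not a definitive schema.

```typescript
// Approximate shape of a ChatGPT plugin manifest (illustrative, not official).
interface PluginManifest {
  schema_version: string;
  name_for_human: string;
  name_for_model: string;        // the identifier the LLM sees
  description_for_model: string; // free-text instructions telling the model when to call the API
  auth: { type: "none" | "service_http" | "oauth" };
  api: { type: "openapi"; url: string }; // points at an ordinary OpenAPI spec
}

// A hypothetical grocery plugin, echoing the launch lineup's use cases.
const exampleManifest: PluginManifest = {
  schema_version: "v1",
  name_for_human: "Example Grocery Plugin",
  name_for_model: "grocery",
  description_for_model:
    "Use this tool to search for grocery items and assemble orders for the user.",
  auth: { type: "none" },
  api: { type: "openapi", url: "https://grocery.example.com/openapi.yaml" },
};
```

Note that `description_for_model` is plain natural language fed straight into the model's context. That is what makes the standard so easy to adopt, and also what makes it so easy for any third-party plugin to steer the model's behavior.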
If such a standard takes hold, the safety features of individual AI services could be rendered moot by combination. For example, an ElevenLabs plugin could easily power a propaganda-as-a-service operation, with ChatGPT writing the text and ElevenLabs generating the voice. A GitHub Copilot-style plugin could let attackers churn out code at scale. Plenty can go wrong, and OpenAI alone gets the final word.
Self-Regulation Doesn’t Work
Sam Altman, OpenAI's CEO, has said in the past that he believes AI needs more regulation. Until that regulation arrives, Altman will apparently run OpenAI according to its "content policy" and "iterative deployment philosophy." But even the supposedly safe release of ChatGPT plugins shows that self-regulation is not enough to reach OpenAI's AGI goals safely.
In hindsight, the protections OpenAI has built into ChatGPT make it clear that the company is selective about which of the bot's effects on society it limits. For example, OpenAI's content policy forbids jokes about minorities or protected groups, yet ChatGPT is happy to weigh in on politically divisive topics that lean liberal.
Since the arrival of plugins, bias has become the lesser concern. Going from "not connected to the Internet" to "making live API calls" is not a decision to be taken lightly. However careful the rollout, ChatGPT plugins show that AI needs far more regulation.