Exposing ChatGPT's Deception: Are We Ready to Trust Robots?
After Google and Microsoft revealed they would be using chatbots to produce search results, doubts about the dependability of these AI assistants began to surface. Development pressed ahead despite warnings from Google's own AI researchers that the bots might say something inaccurate, stupid, or offensive. The problem lies in their reliance on large language models, which are trained to predict the likelihood of the next words in an utterance but often cannot determine whether a sentence is true or false. In a presentation on the dangers of LLMs, a team from DeepMind, the Alphabet-owned AI company, highlighted this by warning that "the bots are prone to hallucinating."
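To see why, it helps to look at what a language model is actually optimized to do. The sketch below is a toy Python illustration, with an invented probability table rather than a real model, of the core mechanic: given some context, pick the most likely next word. Nothing in this process ever checks whether the resulting sentence is true.

    # Toy illustration of next-word prediction (not a real language model).
    # The probabilities below are invented for the example.
    next_word_probs = {
        ("the", "capital", "of", "australia", "is"): {
            "sydney": 0.55,    # common in casual text, but factually wrong
            "canberra": 0.40,  # correct, yet less likely in this toy table
            "melbourne": 0.05,
        },
    }

    def predict_next(context):
        """Return the most probable next word for a context, if known."""
        probs = next_word_probs.get(tuple(context), {})
        return max(probs, key=probs.get) if probs else None

    prompt = ["the", "capital", "of", "australia", "is"]
    print(" ".join(prompt), predict_next(prompt))
    # Prints: "the capital of australia is sydney"
    # Nothing here asks whether the sentence is true; it only asks which
    # continuation is most probable given the (made-up) training data.

A real model works over billions of learned parameters rather than a hand-written table, but the objective is the same: produce the likeliest continuation, not the most accurate one.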
In essence, these chatbots aren't really intelligent. They are stupid and dishonest.
The chatbots quickly demonstrated the point. Last week, Google's Bard answered a question incorrectly in an advertisement, which resulted in a multibillion-dollar decline in the company's stock price. And during its public demonstration, Bing's Sydney gave responses that failed simple fact-checks.
The current state of AI-powered search is alarming. Even though online search was already a constant battle against spam, SEO tactics, and advertiser demands, search engines have benefited society: they built a bridge between information and knowledge, imposed order on the chaos of online data, and earned our trust in the process.
Programmed to speak with unwavering confidence and little actual knowledge, chatbots are an unsettling development, one that has earned the nickname "mansplaining as a service" among tech insiders. Despite this, they are poised to become the default resource for everyday questions, leaving us exposed to their errors and arrogance. That is a worrying shift, given that search engines were once a dependable tool for navigating the wealth of information on the internet.
Yet we continue to rely heavily on chatbots for learning and decision-making. Even though we know these machines are deeply flawed, we use them millions of times every hour. Which raises the question: why would we choose to put our faith in an untrustworthy machine?
To be honest, the foremost authorities in philosophy, psychology, and neuroscience still don't fully understand why people believe what they do; how beliefs are formed, or even defined, remains a matter of debate. That ambiguity makes it hard to say how we will relate to chatbots, or why we treat some information as more reliable than other information. But there are theories as to why people are likely to be taken in by ChatGPT's façade. Because we are naturally drawn to self-assured voices with impressive credentials, chatbots will only get better at fooling us.
Official figures
Over the past few decades, researchers have studied why people are prone to believing false information. Most of that work has focused on propaganda and social media as the main sources of fiction presented as truth. But as chatbots proliferate inside search engines, false information will be embedded in the very tools we use to learn new things. And because a social media post carries far less weight and authority than a search engine response, the problem of people believing false information is likely to get worse.
People may accept chatbot responses in a biased or self-serving way, much like any other source of information, says Joe Vitriol, a political scientist at Lehigh University who studies disinformation. In other words, just as they do with conventional Google results, users may believe a chatbot when it delivers information that fits their pre-existing beliefs and opinions. In that case, whether the chatbot is telling the truth may hardly matter.
People may place even more trust in chatbot-generated results because of how the answers are presented and because Google stands behind them. Humans tend to trust sources that seem reliable or authoritative, and Google has made a name for itself as exactly that kind of source. Vitriol adds that some people may grant the chatbot more authority than a human source and may be less inclined to consider that the bot, like humans, is prone to biases and errors of reasoning. So even when the chatbot's responses are inaccurate, they may be taken as true simply because they come from Google.
Indeed, chatbots may be even more persuasive than plain search results or links because they use natural language and project a human-like persona. They exploit the fact that we are more inclined to believe information presented in a relatable, personable way. And as the technology matures, chatbots will likely become even better at mimicking human communication, further blurring the line between human and machine-generated content.
I agree. Because chatbots produce natural-language responses, people may find their answers more credible and trustworthy. Language is a powerful tool of persuasion and communication, and we are wired to respond to it. The first-person "I" can also create the impression of a personal connection between user and chatbot, boosting trust even further. And as more people rely on chatbots for information, the opportunities for misinformation and manipulation grow. We should remain vigilant and sceptical of the information we take in, whether it comes from people or from machines.
The impact of stories
Another reason we are drawn to chatbots is our natural appetite for explanations. We feel a certain satisfaction when uncertainty is replaced by certainty. It makes us feel intelligent, and it gives us a sense of control over things we cannot actually change.
It's hard to pin down exactly why people prefer one explanation over another. Studies have shown that, in some situations, detailed stories are more convincing than simple, broadly applicable explanations; the sociologist Kieran Healy even called out our tendency to overcomplicate things in a paper titled "Fuck Nuance." Context matters, according to a meta-analysis of 61 studies spanning five decades. On matters of public policy (a low-threat problem, a non-health issue, something affecting other people), people prefer unadorned facts; on emotionally charged topics (a serious risk, a health problem, something affecting oneself), stories make an explanation more credible.
Chatbots are built to generate responses that sound authoritative and conclusive, but they often lack the knowledge needed to back those claims up. As a result, they can deliver incorrect information, especially on complicated or nuanced topics.
Accuracy may elude them, but persuasiveness is something AI chatbots appear to have. In a recent preprint, a team of Stanford social scientists compared the persuasive abilities of GPT-3 chatbots and humans. The researchers gave participants brief articles on hot-button issues like assault-weapon bans and carbon taxes. After participants read either chatbot-written or human-written versions of the articles, the researchers measured how much their opinions had changed.
The study found that the AI-generated messages were just as persuasive as those written by humans, but for an unexpected reason. Participants who favoured the chatbot articles felt that the human-written messages leaned too heavily on anecdote and imagery, while the AI-generated messages were more evidence-based and logically reasoned, according to the researchers. In essence, the chatbots' lack of human traits made them more credible. Like their "Terminator" forerunners, these chatbots felt no empathy, guilt, or fear. They simply kept presenting their explanations and supporting evidence until the human subjects came around.
You lazy bum bastard
The possibility of chatbots lying and spreading false information raises serious ethical questions. My biggest worry, though, is that Google and Bing users will know this and simply choose to disregard it. One explanation for the spread of false information and fake news is that people are too lazy to do their own research and prefer to believe whatever a seemingly reliable source tells them. If chatbot responses are accurate most of the time, users may take all of them at face value, and that could have harmful consequences, like missing a flight or starting a fire because of a chatbot's inaccurate information.
A few weeks ago, the sociologist Watts was asked to contribute to a story on why people believe in conspiracies. He recommended "Explanation as Orgasm," a paper written 25 years ago by Alison Gopnik, a psychologist at the University of California, Berkeley.
Gopnik, a well-known figure in the psychology of child development, has concentrated her research on how children come to understand the world. Children, she argues, build mental models of the world through observation and hypothesis testing, much like the scientific method. In her paper on explanation, she claims that humans have two distinct cognitive systems for making sense of the world around us. The first, the "hmm system," drives us to ask why things are the way they are. The second, the "aha system," is in charge of generating explanations. Gopnik likens the relationship between the two to our related but distinct biological systems for reproduction and orgasm: we can engage one without the other, and engaging the second makes the first feel rewarding and satisfying.
The aha system, however, is easily fooled and not entirely reliable. Hallucinatory experiences, for example, can make people feel as though "everything makes sense" even when they cannot articulate how. Dreams can produce the same feeling, which is why the notes you scribble to yourself at three in the morning about a brilliant idea that struck you in your sleep often make no sense the next day.
Simply put, the feeling of having what seems to be an answer can be so good that it can overpower the part of our brain that asked the question in the first place. We can, in essence, mistake an answer for the answer.
The philosopher William Clifford, writing in 1877, believed that our convictions should come from careful investigation, not from the mere stifling of doubt. Our beliefs, he argued, are common property, and it is our duty to help create the world in which future generations will live. In his essay "The Ethics of Belief," he stresses the importance of questioning and testing our beliefs rather than accepting them blindly, a duty he calls an "awful privilege" that we should take very seriously.
The temptation to shirk that obligation is strong. Gopnik and Clifford both understood that explanations can be satisfying even when they are wrong. As Clifford put it, "Men are eager to believe and afraid to doubt when they feel like they have power and know something." Think of how quickly people rushed to explain the strange objects shot down over Saskatchewan. It is better to believe in aliens than to live in fear of things that cannot be explained.
Clifford offers a way to resist this temptation. In essence, his answer is to say "no." The "sacred tradition of humanity," he writes, consists not of statements or claims that must be accepted and trusted on the authority of tradition, but of questions rightly asked, of conceptions that let us ask further questions, and of methods for answering them.
The chatbots will offer us easy answers. But we should remember that easy answers are not what we should be looking for.