AI tools are producing convincing misinformation. Engaging with them means being vigilant.
AI tools can help us create content, learn about the world and (perhaps) eliminate some of life's more tedious tasks, but they aren't perfect. They have been shown to fabricate information, reproduce other people's work without permission, and exploit conversational norms to win users' trust.
Some AI chatbots, such as "companion" bots, are often built with the goal of responding empathetically, which makes them seem all the more believable. Even though these tools inspire awe and wonder, we have to use them carefully or we risk being misled.
Sam Altman, the CEO of OpenAI, the company behind the ChatGPT chatbot, has said he is "worried that these models could be used for large-scale disinformation." As someone who studies how people use technology to access information, I am too.
With AI in your pocket, misinformation will multiply.
Machine-learning tools use algorithms to perform specific tasks. They "learn" as they take in more data, refining their responses accordingly. Netflix, for example, uses AI to track the shows you like and suggest others: the more cooking shows you watch, the more cooking shows Netflix recommends.
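To make that concrete, here is a toy sketch of content-based recommendation in Python. It is not Netflix's actual system; the catalogue, genre tags and scoring are invented for illustration.

```python
# Toy sketch of content-based recommendation, in the spirit of the
# Netflix example above. NOT Netflix's real system; the catalogue,
# tags and scoring are invented for illustration.
from collections import Counter

CATALOGUE = {
    "Chef's Table": {"cooking", "documentary"},
    "Nailed It!": {"cooking", "comedy"},
    "Our Planet": {"nature", "documentary"},
    "The Crown": {"drama", "history"},
}

def recommend(watch_history, catalogue=CATALOGUE, top_n=2):
    """Score unseen titles by how often their tags appear in the
    viewer's history, then return the best matches."""
    tag_counts = Counter()
    for title in watch_history:
        tag_counts.update(catalogue.get(title, set()))

    scores = {
        title: sum(tag_counts[tag] for tag in tags)
        for title, tags in catalogue.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Watching cooking shows pushes other cooking shows up the list.
print(recommend(["Chef's Table"]))  # ['Nailed It!', ...]
```

The point of the sketch is simply that the system's suggestions are a direct function of the data you feed it, which is why the quality of that data matters so much.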
Many of us are experimenting with, and enjoying, new AI tools, but experts warn they are only as good as the data they are built on, which we know is often wrong, biased or even designed to deceive. Where we once could spot a scam email by its clumsy prose, or an AI-generated image by its extra fingers, system improvements have made fakes much harder to detect.
The growing use of AI in everyday work apps amplifies these concerns. AI tools are being added to a range of Microsoft, Google and Adobe services, including Google Docs, Gmail, Word, PowerPoint, Excel, Photoshop and Illustrator.
Creating fake photos and deepfake videos no longer requires specialist skills or tools.
Running Tests
I used the Dall-E 2 image generator to see whether it could produce a realistic image of a cat resembling my own. I started with the prompt "a white cat with a fluffy tail, orange eyes, and a grey sofa."
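For readers who want to rerun this kind of test programmatically rather than through the web interface, here is a minimal sketch using the pre-1.0 "openai" Python package from the Dall-E 2 era; the API key is a placeholder, and the package version is an assumption.

```python
# Minimal sketch: generating an image from a text prompt with the
# Dall-E 2 API via the pre-1.0 "openai" Python package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.Image.create(
    prompt="a white cat with a fluffy tail, orange eyes, and a grey sofa",
    n=1,              # number of images to generate
    size="512x512",   # Dall-E 2 supports 256x256, 512x512 and 1024x1024
)

# The API returns a temporary URL for each generated image.
print(response["data"][0]["url"])
```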
The result wasn't quite right. The fur was matted, the nose wasn't fully formed, and the eyes were cloudy and askew. It reminded me of the pets in Stephen King's Pet Sematary that come back to their owners from the dead. But the design flaws made it easy to see that the image was something a computer had made.
I then asked for the same cat "sleeping on its back on a hardwood floor." The generated cat in the new image didn't look much different from my own. An image like that could fool almost anyone.
Then I turned ChatGPT on myself, asking: "What is Lisa Given best known for?" It started off well enough, but went on to list a string of publications I didn't write. My trust in it ended there.
The chatbot was hallucinating, attributing other people's work to me. The book The Digital Academic: Critical Perspectives on Digital Technologies in Higher Education does exist, but I didn't write it. Nor did I write Digital Storytelling in Health and Social Policy, and I am not the editor of Digital Humanities Quarterly.
When I told ChatGPT it was wrong, it apologised profusely, then produced yet more errors. I didn't write or edit any of the books or journals it went on to list. While I wrote one chapter of Information and Emotion, the book wasn't co-edited by me and Paul Dourish. And it omitted my best-known book, Looking for Information, entirely.
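Anyone can rerun this kind of hallucination test. Here is a minimal sketch using the chat endpoint of the pre-1.0 "openai" Python package; the API key is a placeholder, and the model name is one of the options available at the time of writing.

```python
# Minimal sketch: asking ChatGPT a factual question via the API,
# using the pre-1.0 "openai" Python package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What is Lisa Given best known for?"},
    ],
)

# Treat the reply as a set of claims to verify, not as facts:
# check every book title and journal name against a library catalogue.
print(response["choices"][0]["message"]["content"])
```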
Fact-checking is our best defence.
In the latest edition of Looking for Information, which I co-wrote with two colleagues, we discuss how people have spread false information for centuries. Misinformation and disinformation are nothing new; AI tools are simply their newest delivery mechanism, letting them spread faster, at greater scale, and with the technology in far more hands.
Last week, the Voiceprint system used by Centrelink and the Australian Taxation Office was reported to have a worrying security flaw. The system, which lets people use their voice to access sensitive account information, can be fooled by AI-generated voices. Scammers have also used fake voices to impersonate people's loved ones on WhatsApp.
Advanced AI tools put the creation of, and access to, information in far more hands, but they come at a cost. We can't always consult an expert, so we have to make informed judgments ourselves. That is where critical thinking and verification skills are vital.
These tips will help you navigate an information landscape saturated with AI.
1. Ask questions and verify with independent sources
If you use an AI text generator, always check the sources cited in its output. If the sources do exist, ask yourself whether they are presented fairly and accurately, and whether important details have been omitted.
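Part of that first check, confirming a cited source exists at all, can be automated. Here is a minimal sketch that looks up a DOI against the public CrossRef REST API; the example DOI is a placeholder.

```python
# Minimal sketch: checking whether a cited DOI resolves to a real
# record via the public CrossRef REST API. A 404 means CrossRef has
# no such work, a strong hint the citation was fabricated.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# The DOI below is a placeholder; substitute one cited by the AI.
print(doi_exists("10.1000/example-doi"))
```

Even a real DOI only proves the work exists; you still need to read it to see whether it actually says what the chatbot claims.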
2. Don't believe everything you see.
If you see an image you suspect was made by AI, ask yourself whether it seems too "perfect" to be real, or whether one element jars with the rest of the scene, which is often a dead giveaway. Examine the textures, features, colours, shadows and, most importantly, the overall composition. You can also run a reverse image search to verify its sources (a rough do-it-yourself alternative is sketched at the end of this tip).
Just a heads-up - Midjourney's AI can now do hands correctly. Be extra critical of any political imagery (especially photography) you see online that is trying to incite a reaction. pic.twitter.com/ebEagrQAQq
— Del Walker (@TheCartelDel) March 16, 2023
If you're unsure about a written text, look for factual errors and ask yourself whether the style and content match what you would expect from the claimed source.
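On the image side, a full reverse image search is easiest through services such as Google Images or TinEye, but a rough local stand-in is perceptual hashing: comparing a suspect image against a known original to see whether they are near-duplicates. The sketch below uses the third-party imagehash and Pillow packages; the file names and the distance threshold are illustrative assumptions.

```python
# Rough local stand-in for reverse image search: perceptual hashing
# with the third-party "imagehash" and "Pillow" packages. A small
# hash distance means the two images are near-duplicates; a fresh
# AI-generated image typically won't closely match a known original.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

# File names and the threshold of 5 are illustrative only.
distance = hash_distance("suspect.jpg", "known_original.jpg")
print("Likely the same image" if distance <= 5 else "No close match")
```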
3. Talk about AI in your circles.
Make sure you and those around you use these tools responsibly, to avoid sharing or inadvertently creating AI-driven misinformation. If you, or an organisation you work with, are considering adopting AI tools, develop a plan for handling potential errors and for being transparent about which tools you used in the materials you produce.