Former Google AI Ethicist Blasts AI-Generated Language Tools as 'Bulls***'

This article was initialized by a human, created by AI, updated by a human, copy edited by AI, final copy edited by a human, and posted by a human. For human benefit. Enjoy!

Alex Hanna, a former AI ethicist at Google, recently spoke to Analytics India Magazine about the rising concerns surrounding large language models (LLMs) and their ability to generate content such as college essays. Hanna referred to LLMs as “bullshit generators” and questioned the need for such models given their high cost of training and impact on carbon emissions.

Hanna’s comments come in response to Google’s recent release of its experimental AI chatbot, ‘Bard,’ which is built on the large language model LaMDA. The chatbot drew criticism after a factual error was spotted in its promotional ad. Many Google employees believe the release of Bard was rushed in response to the popularity of ChatGPT, which was developed by OpenAI and is backed by Microsoft.

Hanna emphasized the importance of serving marginalized communities with AI technology, asking, “How is it going to serve the most marginalized people right now?” Her comments come as companies like Google, Microsoft, and OpenAI continue to pour resources into developing large language models.

While LLMs have generated excitement and promise for their potential applications, there are concerns about their impact on society. As Hanna notes, LLMs have a high cost of training and a significant carbon footprint. Additionally, the content generated by LLMs may perpetuate biases or misinformation, especially if not adequately vetted by humans.

Despite these concerns, LLMs like Bard and ChatGPT continue to attract interest and research attention. As AI technology continues to evolve, it will be important to ensure that its development serves the greater good and is accessible to all, not just a select few.

A Different Perspective

Hanna, the former Google AI ethicist, shared her thoughts on big tech's focus on language models, saying that the release of this technology has impressed the funder class of VCs, resulting in a lot of money being poured into it. According to Hanna, there are other uses of AI that are more prevalent, such as supporting people in welfare allocation or providing useful services. She believes that language models such as ChatGPT and Bard could be used to discriminate economically, socially, and politically. Hanna's concerns about the ethical implications of language models and their focus on profit over social responsibility highlight a larger issue of a tech industry driven by greed.

Data Privacy Concerns

Hanna also expressed concern about the explosion of data labelling companies and the abuse of the data used to train these large language models. In particular, the data used for models such as GPT-3.5 and LaMDA is either proprietary or scraped from the internet. According to Hanna, little attention is paid to the rights of the people represented in that data, including artists and writers who are not being compensated.

This lack of data privacy has already led several artists to sue companies for using their work without consent. For example, Sarah Andersen, Kelly McKernan, and Karla Ortiz recently sued Midjourney, DeviantArt, and Stability AI for using their work as training data without explicit consent. Additionally, OpenAI faced criticism for outsourcing data labelling work for its chatbot ChatGPT to low-paid workers in Kenya.

Hanna believes that there needs to be more attention paid to the rights of the people involved in creating and providing the data for these models. Without proper compensation and consent, the continued use of large language models could result in further exploitation and abuse of data privacy.

As the use of large language models becomes more prevalent in various industries, it's important to address these ethical concerns to ensure that the technology is being used in a responsible and fair manner.

The Power Play

As large tech corporations continue to claim to be 'AI first,' there is a growing concern about the exploitation of underpaid workers who contribute to their success, including data labourers, content moderators, warehouse workers, and delivery drivers. Alex Hanna's team is dedicated to evaluating and addressing the potential harms associated with AI, while also exploring new possibilities for the technology with the input of the community.

One example of this exploitation is Amazon's surveillance of its delivery partners and workers in order to expedite deliveries to customers. An internal document revealed that the company has been closely monitoring its employees' activities and movements, which Hanna argues is a clear exploitation of workers, disguised as technological advancement. "We are working to combat the harms of technology, but we also aim to go beyond that," she added.

Embracing Ethical AI

While the public is distracted by the spectre of machines and the noise these large language model chatbots are creating, an army of researchers is holding discussions about ethical AI. This is where people like Hanna come into the picture. Her journey began back in 2017, when she first got involved with AI ethics.

“I started focusing on the use of technologies because I’ve always had this interest in how society interacts with computing,” said Hanna. “I became disenchanted with how this stuff wasn’t really being used to serve people. It could also be used for surveillance on a massive scale.”

When she was a Senior Research Scientist in Google’s Ethical AI team, Hanna predominantly questioned the tech industry’s approach to artificial intelligence. However, over time, she became disillusioned with the company’s culture, which she deemed both racist and sexist. Hanna’s unhappiness was amplified in late 2020 when Google fired Timnit Gebru, the co-lead of the Ethical AI team.

While the episode brought new attention to her work, Hanna has made the most of the jarring turn of events. Now, she is attempting to make a change from the outside as the director of research at the Distributed Artificial Intelligence Research Institute, or DAIR. The institute focuses on AI research from the perspective of the places and people most likely to experience its harms.

Rethinking AI's Impact on Public Safety and Policing

The use of AI in areas such as child welfare, public safety, and policing has been a topic of intense debate in recent years. As Director of Research at the Distributed Artificial Intelligence Research Institute (DAIR), Hanna is at the forefront of discussions around ethical AI and its impact on society.

Hanna believes that AI should not be used in areas where it could do more harm than good. In her view, the use of facial recognition systems, predictive policing, ShotSpotter, or other data harmonizing technologies reinforces the systemic biases already present in policing, leading to the same people being targeted over and over again.

To address this issue, Hanna argues that policing needs to be completely reimagined. Rather than relying on technology to make decisions, policing should be more community-led and focused on serving the needs of the people it aims to protect. By rethinking the role of AI in public safety and policing, Hanna believes we can create a more just and equitable society for all.
