Tech experts criticise a letter that cited their study and asked for a break from AI: 'Fearmongering'

More than 2,000 people, including Elon Musk, have signed an open letter calling for a pause in the development of advanced AI systems.

"The letter makes a number of suggestions that we agree with and that we proposed in our 2021 peer-reviewed paper called "Stochastic Parrots," such as "provenance and watermarking systems to help distinguish real from synthetic" media. However, these are overshadowed by fearmongering and AI hype, which steers the conversation to the risks of imagined "powerful digital minds" with "human-competitive intelligence." In a statement released on Friday, Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell said the following. 

The four researchers were cited in an open letter released earlier this week calling for a six-month pause on training highly capable AI systems. As of Saturday, more than 2,000 people had signed the letter, among them Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak.

"Deep risks can be posed to society and humanity by AI systems that are smarter than humans, as shown by a lot of research and acknowledged by the best AI labs," the letter says. The Future of Life Institute, a non-profit that "works on reducing extreme risks from transformative technologies," put out the open letter.

Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018, in San Francisco, California. (Kimberly White/Getty Images for TechCrunch)

The first footnote in the letter's opening paragraph cites the peer-reviewed paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Gebru, Bender, McMillan-Major, and Mitchell, but the researchers say the letter propagates "AI hype."

The four authors warned that it is dangerous to divert our attention with an imagined AI-enabled utopia or catastrophe promising either a "flourishing" or "potentially catastrophic" future. "Such language, as we note in Stochastic Parrots, inflates the capabilities of automated systems and anthropomorphizes them," they wrote, "deceiving people into thinking that there is a sentient being behind the synthetic media."

Mitchell, now chief ethics scientist at the AI lab Hugging Face and formerly the head of ethical AI research at Google, told Reuters that although the letter specifically calls for a pause on AI technology "more powerful than GPT-4," it is unclear which AI systems would even meet that threshold.

The "Welcome to ChatGPT" lettering of the U.S. company OpenAI is seen on a computer screen. (Silas Stein/picture alliance via Getty Images)

"The letter asserts a set of priorities and a narrative on AI that benefits the supporters of [the Future of Life Institute] by treating a lot of dubious ideas as givens," she said. "Some of us don't have the luxury of disregarding current harms."

Shiri Dori-Hacohen, a professor at the University of Connecticut whose work was also cited in the letter, told Reuters that while she agrees with some of its concerns, she disagrees with how her research was characterized.

Dori-Hacohen co-authored a paper last year titled "Current and Near-Term AI as a Potential Existential Risk Factor," which argued that the widespread use of AI today already poses serious risks and could influence decision-making on existential threats such as nuclear war and climate change, Reuters reported.

She said AI does not need to reach human-level intelligence to exacerbate those risks.

"There are very significant non-existential threats that aren't given the same level of Hollywood attention," she said.

Sam Altman, president of Y Combinator, speaks during the New Work Summit in Half Moon Bay, California, on Feb. 25, 2019. (David Paul Morris/Bloomberg via Getty Images)

According to the letter, leaders in AI should "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

The letter continues, "In parallel, AI developers must collaborate with policymakers to significantly accelerate development of robust AI governance systems." 

While agreeing that "it is indeed time to respond," Gebru, Bender, McMillan-Major, and Mitchell asserted that the focus of worry should not be fictitious "powerful digital minds." Instead, they wrote, "we should concentrate on the exploitative business practices of the organizations claiming to be building them, which are hastening the centralization of power and escalating social injustices."

As Max Tegmark, the president of the Future of Life Institute, explained to Reuters, "if we cite someone, it just means we claim they are endorsing that sentence."

"It doesn't mean they're endorsing the letter, or we endorse everything they think," he clarified. 

He also dismissed suggestions that Musk, who donated $10 million to the Future of Life Institute in 2015 and serves as an external adviser, is using the letter to slow down his competitors.

SpaceX owner and Tesla CEO Elon Musk smiles at the E3 gaming convention in Los Angeles, June 13, 2019. (Reuters/Mike Blake/File Photo)

"It's quite funny. I've heard people say, 'Elon Musk is attempting to slow down the competition,'" he added. "This is not just about one company."

Tegmark asserted that Musk was not involved in the letter's creation. 

When asked for comment, the Future of Life Institute directed Fox News Digital to the letter's frequently asked questions page, specifically a section addressing whether the letter means the nonprofit is not "concerned about present harms."

"No way, no how. The use of AI systems, no matter how small, causes problems like discrimination and bias, false information, the concentration of economic power, bad effects on labour, weaponization, and damage to the environment, according to a part of the site's FAQ page. 

"We acknowledge and reaffirm these harms and are grateful to the many scholars, business leaders, regulators, and diplomats who keep working to bring these harms to light at the national and international levels," the page says. 

Dan Hendrycks of the Center for AI Safety in California, who was also cited in the Future of Life Institute's letter, told Reuters that he agrees with the letter's message. He said it makes sense to consider "black swan events," which appear unlikely to occur but would have severe consequences if they did.
