FTC Investigates ChatGPT for Misinformation as 2024 Elections Loom

The FTC has required OpenAI to provide details of its data security practices.

The Federal Trade Commission (FTC) has launched an investigation into ChatGPT creator OpenAI over concerns that it may have disseminated false information about individuals. The move comes amid increasing concerns regarding the misuse of AI technologies to influence the 2024 United States presidential election. 

The FTC Asks For Details About OpenAI’s AI Models

This week, the FTC sent a 20-page letter to OpenAI asking for details of its data security practices and how the company addresses potential AI risks. In the letter, the commission referred to a previous incident in which a bug allowed users to access information about other users’ chats and payment-related details.

According to The Washington Post, which obtained a copy of the letter, the FTC’s probe includes a series of inquiries covering various aspects of OpenAI’s operations, including marketing efforts, AI model training practices, and the handling of users’ personal information. The FTC is led by Chair Lina Khan and holds the authority to regulate unfair and deceptive business practices.

The report added that the FTC’s investigation into OpenAI goes beyond personal data security concerns. The agency aims to determine if OpenAI has violated consumer protection laws by placing personal reputations and data at risk. This marks the most substantial regulatory challenge faced by OpenAI in the US.

The FTC has vast enforcement capabilities. If the agency finds that a company has violated consumer protection laws, it can impose fines or issue consent decrees that dictate how the business handles data. In recent years, the FTC has actively policed tech giants such as Meta (formerly Facebook), Amazon, and Twitter, resulting in substantial fines for alleged consumer protection law violations.

As part of its investigation, the FTC has asked OpenAI to provide a detailed account of any complaints it has received regarding ChatGPT’s dissemination of false, misleading, disparaging, or harmful statements about individuals. The agency is specifically examining whether OpenAI’s practices have caused reputational harm to consumers. 

“The FTC is probing whether the company’s data security practices violate consumer protection laws,” the report said. It added that OpenAI is being investigated over whether it has “engaged in unfair or deceptive privacy or data security practices” or practices that harm users.


AI Could Influence the 2024 Election

As the 2024 US presidential election approaches, there is a growing concern over the potential influence of AI and its ability to generate and spread misinformation. Even OpenAI CEO Sam Altman highlighted the worries surrounding AI’s impact on society during his recent appearance before Congress.

Altman acknowledged that while working on AI is incredibly exciting, there are legitimate concerns about how it could shape our lives. “But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” he said. 

Echoing similar concerns, machine learning specialist Gary Marcus, who also testified before Congress, warned that AI could create persuasive lies on an unprecedented scale, threatening democracy itself. Pointing to the damage social media has done in recent political history, Marcus believes AI’s impact could be far greater.

One area of concern is the personalized disinformation that AI can generate. Researchers from Google, MIT, and Harvard have found that large language models like ChatGPT can accurately predict public opinion based on specific media diets. This allows corporate, government, or foreign entities to fine-tune their strategies and manipulate voters’ actions. 

Additionally, chatbots powered by AI have the potential to completely change people’s political beliefs by feeding them false information. The Wall Street Journal recently published an article titled “Help! My Political Beliefs Were Altered by a Chatbot!” highlighting how individuals may not even realize they are being influenced. 

Subcommittee chair Richard Blumenthal also demonstrated how easily voters can be deceived. He played an audio clip of an AI-generated voice that sounded just like his own, illustrating how convincingly fabricated content can sway public opinion.

“I worry that as the models get better and better, the users can have less and less of their own discriminating thought process,” Altman said. He added that users may eventually become less inclined to double-check the accuracy of answers provided by AI, which could undermine truth and democracy.

It is worth noting that Altman has pledged to cooperate with the FTC’s investigation. “It’s super important to us that [our] technology is safe and pro-consumer, and we are confident we follow the law,” he said.


Do you think AI poses any significant threat to the 2024 election? Let us know in the comments below. 
