
FTC Probes ChatGPT Over Concerns About Your Privacy

Feds want to know how its parent, OpenAI, handles security of user data



Excitement surrounding artificial intelligence (AI) has been off the charts since late fall, when ChatGPT exploded on the scene and became what analysts believed to be the fastest-growing app in history.

Since then, Twitter alternative Threads has surpassed it as the fastest-growing app.


Those who tried ChatGPT marveled at some of the things that it and other emerging generative AI chatbots could churn out in mere moments — letters, poems, recipes, travel suggestions, even computer code.

But a potentially darker side of AI also emerged.

Now, the Federal Trade Commission (FTC) has placed OpenAI, the startup that sired ChatGPT, in its regulatory sights. The agency opened an investigation into whether the company may have engaged in unfair or deceptive privacy or data security practices that could put consumer data and personal reputations at risk. The claims are outlined in a letter, published by The Washington Post, that the FTC sent to OpenAI.

OpenAI has not responded to an AARP request for comment. In a tweet Thursday night, OpenAI CEO Sam Altman wrote, “it’s super important to us that [our] technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.” The FTC declined to comment.

What does the federal government consider a problem?

ChatGPT and other generative AIs — such as Microsoft's Bing chatbot, which incorporates OpenAI technology; Bard from Google; and Claude from startup Anthropic — are built on large language models (LLMs), which are trained on massive amounts of human-generated data from the web. This includes works that may be copyrighted.

Comedian Sarah Silverman is suing both Facebook parent Meta, over its LLM called LLaMA, and OpenAI, alleging copyright infringement.

Leaving aside the legal questions, the FTC's action suggests that information on the internet — content you may have produced yourself, or information that is out there about you — could be part of this massive trove of training data.

Technology & Wireless

Consumer Cellular

5% off monthly fees and 30% off accessories

See more Technology & Wireless offers >

And what the AI bots are ingesting may not be truthful.

Glitches have also exposed personal data. One that ChatGPT experienced on March 20 revealed the names, addresses and partial credit card information of a small subset of users who were active during a nine-hour window. After OpenAI took the chatbot offline to fix the problem, the company said later that week that it had notified affected users that their payment information may have been compromised.

Among the things the FTC is asking OpenAI to describe in detail are the steps the company has taken to address or lessen risks that its LLMs could generate statements about individuals containing “real, accurate personal information,” as well as “statements about real individuals that are false, misleading or disparaging.”

Will the FTC question other AI developers?

The FTC’s action may be an opening salvo, a warning to other AI companies to be vigilant about their data and security practices.

Senior research fellow Will Rinehart at the Center for Growth and Opportunity, an economic research group based at Utah State University, called the letter a shakedown that puts the entire AI industry on notice. He labeled the FTC’s move risky because he believes that “advances coming from AI could bolster U.S. productivity.”

Billionaire entrepreneur Elon Musk, an OpenAI cofounder who is no longer affiliated with the company, has launched a rival startup, xAI, with the goal of building a safe AI that will be “maximally curious” to understand reality. Despite investing in AI ventures, Musk in the past has raised concerns that artificial intelligence could destroy civilization.


Will I get any money from this investigation?

It’s too early to determine how the federal scrutiny will affect older adults who use ChatGPT. But part of what the FTC is looking into is whether “monetary relief would be in the public interest.”

Meta was in the headlines recently after agreeing to shell out $725 million to settle a 5-year-old class action lawsuit over Facebook user data that was improperly shared with other companies. Google agreed to pay $23 million to settle several class action lawsuits related to privacy.

So, big tech companies paying up because of past data practices is not unprecedented.

How can I lessen the risk to my privacy?

You can certainly stop using ChatGPT and other generative AIs, or not begin in the first place. But that won’t work for everyone.

To lessen the likelihood of damage if you use any generative AI, pay attention to the disclaimers. OpenAI openly concedes that “ChatGPT may produce inaccurate information about people, places, or facts.”

And be extremely careful with your prompts. Retirees or others isolated later in life may feel like they’re conversing with a human when they're talking with an AI chatbot, says Rory Mir, associate director of community organizing at the San Francisco-based Electronic Frontier Foundation advocacy group.

The danger: “You may accidentally wind up talking about more personal or sensitive things than you initially intend,” Mir says. “If you are using a chatbot, always treat anything you put in like it’s public information.”
