
6 Ways to Stay Safe in a Chatbot World

Learn to navigate confidently in an AI-powered era


[Illustration: an older adult man in a large yard outside his house, wielding a spear and a sword to fight off flying chatbots rendered in the style of 1980s arcade games. Credit: Glenn Harvey]

For many adults over 50, artificial intelligence is no longer a futuristic concept but a tool in their daily routines, whether to find health information, make social connections or live independently and safely in their homes.

But understanding how to converse with ChatGPT, Claude, Google Gemini, Microsoft Copilot, Perplexity or other chatbots is imperative, and there needs to be more education for older adults, their caregivers and family members, says Robin Brewer, an associate professor of information at the University of Michigan.

Think of chatbots as computer programs that can simulate human conversation with an end user: a.k.a. us.

“Chatbots can be incredibly helpful for a variety of tasks that simplify the more traditional act of performing Google searches or feeding in multiple sources of information and asking it to review and summarize,” says Emily Laird, an AI integration technologist at the University of Wisconsin-Stout. “They have these core peak things that they’re very, very good at.”

Where using chatbots becomes problematic, Laird cautions, is when people fail to use their critical thinking skills and blindly trust the AI responses to be truthful or accurate.

“AIs are not using reasoning based on human thought processing,” says Camille Banger, an assistant professor at UW-Stout’s College of Science, Technology, Engineering, Mathematics & Management. “Instead, it’s taking the data it already has access to.”

More troublesome is that chatbots may hallucinate — that is, sound very confident and convincing, even if an answer is wrong.

When using a chatbot, these simple tips can make a difference

Determine if you are communicating with a chatbot or a human. “Chatbots are sycophantic by nature,” says Laird, noting that they’re designed to keep you using them. The biggest giveaway: Chatbots tend to give a long-winded response to a simple question, and it can be full of “overly embellished flowery language” and “more descriptive words.”

“AI tends to explain what you’ve just said back to you before responding because it’s working through that natural language processing,” she says. Chatbots don’t follow the normal cadence of human response with short, simple answers or natural pauses, she adds.

Vet AI-generated responses. Chatbots won’t always pull from a reputable source and, at times, will even make up a source to support their answer, Brewer says.

Always double-check answers and verify sources, especially when they relate to health, finance or legal matters. Consider AI responses as a starting point.

That is even more important if you’re a caregiver for a family member who may be experiencing cognitive decline, Brewer says, adding that you may also need to schedule weekly check-ins to review the chatbot’s browser history.

Report suspicious, inappropriate or inaccurate information. If you want to report untrustworthy or incorrect information provided by a chatbot, most have built-in features to send that report directly to the company that created them – usually a thumbs-up or thumbs-down button to click at the end of an answer.

If you think it’s important to report the misinformation to a third party, Brewer recommends contacting the AI Incident Database, a project of the Responsible AI Collaborative that collects reports of harms caused by AI systems.

Protect your personal data. Do not overshare private information, especially when you’re not behind a firewall.

“Treat any chatbot like a stranger you’ve just met at a store or a bus stop,” says Laird. “They’re polite, they’re helpful, but they’re not someone who needs any personal details.”

This applies to Social Security numbers, bank accounts, passwords, home addresses and information specific to anything medical, legal or personal – even if the chatbot makes you feel comfortable sharing it.

Take care when seeking companionship. Chatbots can help fill a void for those who are lonely. They’re meant to be “people pleasing” but lack the safety features, guardrails or regulations to ensure they don’t disregard the human component in relationships, says Laird.

There are times when other people aren’t around, of course, but Brewer warns that chatbots shouldn’t become a substitute for human connection or your only form of companionship. “I think the important part is not using the chatbot in isolation and ensuring that [the person using the chatbot] has some social interaction with another human.”

Employ caution with any type of request. Chatbots can mask online scams. And the biggest red flag? When the request comes with a sense of immediacy.

“If somebody’s calling you and it’s urgent, like ‘act now,’ or they’re requesting payments or ‘Hey, get a free gift card,’ those are usually scams,” says Banger. She warns that robocallers often sound like real people, “but although it may sound legit, I would say 100 percent of the time it’s not.”

If there’s any sense of urgency in the conversation, stop conversing immediately.
