How to Talk to Your Kids and Grandkids About AI

7 ways grownups and the young people they love can explore together and learn from each other


[Image: A father helps his teenage daughter with homework on a laptop. Maskot/Getty]

A few months ago, my teenage son, Samuel, asked if allowing ChatGPT to edit his high school essay would be appropriate.

Sam quickly pointed out that he didn’t want the artificial intelligence (AI) chatbot to write the paper. He wanted the bot to improve what he had written.


I was inclined to say no, but the question gave me pause: Was his request much different from asking me to look over an assignment and provide a few pointers, which many parents do? Was it different from summoning a spellchecker or grammar software?

These questions recall debates from decades ago about using calculators for math homework. Nowadays, calculators and even spreadsheets are widely deployed in classrooms.

Similar parallels apply to the generative AI chatbots that have arrived at breakneck speed in the past year, notably OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard and the refreshed Microsoft Bing. These uber-powerful tools can generate words, and sometimes images, from scratch based on the prompts fed to them, including generally satisfactory school essays.

AI technology introduces new ethical dilemmas

When does allowing a kid to lean too heavily on cutting-edge tech become cheating? What should parents and grandparents tell youngsters about the acceptable use of AI?

Artificial intelligence is definitely a topic for conversation and is sure to make its way into holiday gatherings in the coming weeks, but definitive answers are hard to come by. Grownups are still trying to get a grip on the effect generative AI bots will have on society, and on the ethical and legal challenges that accompany them.

Teens are torn as well.

About a fifth of U.S. teenagers who are aware of ChatGPT, and nearly a quarter of 11th and 12th graders, say they’ve used the chatbot to help with their schoolwork, according to a newly released survey from the Washington-based, nonpartisan Pew Research Center.

About 7 in 10 say working with ChatGPT to research something new is acceptable, compared with 13 percent who reject that idea and 18 percent who are unsure.

Students and the older generations confront the same issues:

  • Are AIs producing factual and up-to-date information?
  • Are the sources behind the information, assuming you can identify them, biased?
  • Do you have the right to use what an AI spits out in your school or professional work?
  • Are proper citations or credit included?

Google’s Jack Krawczyk, lead product director for Bard, stresses that AI prompts differ from search queries. In layman’s terms, generative AIs look for common language patterns, not facts, drawing on models trained on vast reservoirs of data known as large language models, or LLMs.

“What this technology does that is so fundamentally different from everything else, and why I think we’re struggling, is we’re so used to computers giving us answers,” Krawczyk says. “Tools like Bard give us possibilities.”

Educators offer some guidance on exploring AI with your kids:

1. Educate yourself first

Before you can navigate AI with your child, make sure you as a parent or grandparent are up to speed.

“The thing that feels the most daunting to parents is learning about it, trying it and having to step out of your comfort zone,” says Jasmine Hood Miller, the Philadelphia-based director of community content and engagement at Common Sense Media, a nonprofit advocacy group for families headquartered in San Francisco.

Common Sense has just launched an AI “nutrition label” and rating scale that it says is designed to assess the ethical use, transparency, safety and impact of AI products. So far, Bard and ChatGPT have each received three out of five stars.

2. Discover AI together

Experiment with your kids. Try different prompts.

Remind them that exchanging messages with an AI bot can seem like you’re engaging with a person, but no human being is at the other end of an AI conversation. Explain the risks associated with AI exchanges — the possibility of misinformation, potential for plagiarism and lack of privacy — and that everything needs to be verified.


AIs may deliver answers that sound authoritative or plausible but are just plain wrong, what the tech industry calls “hallucinations.”

“These tools are unreliable,” says Jennifer King, privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence in California. “We’re already seeing what happens to lawyers that ask ChatGPT to write a brief in a case. It literally makes up citations and facts that don’t exist.”

3. Check for school regulations

Just as adults should consult bosses about acceptable office use of AI, see what policies your kid’s school has around AI.

Even in the absence of fully baked rules, “someone already is thinking about it,” Miller says. If in doubt, ask your child’s teachers to weigh in on what’s proper.

4. Don’t cheat. It’s morally wrong

“Plagiarism and cheating [are] a human issue. That’s not a technology issue,” says Jenny Maxwell, head of Grammarly for Education. The Grammarly writing tool — free and fee-based versions are available — can change a writer’s tone, supply real-time feedback, provide citations and help students brainstorm ideas.

The San Francisco company has partnerships with more than 3,000 educational institutions. It recently launched generative AI features that Grammarly officials say will augment, not replace, a student’s critical thinking skills.

Cheating robs a student of the opportunity to learn, says Michael Steven Marx, an associate professor of English and the director of expository writing at Skidmore College in Saratoga Springs, New York. Sure, ChatGPT can correct simple errors in a school paper.

“But it’s like a free lunch, and we know there’s no free lunch,” he says. “It’s going to come back to haunt you. You’re going to be sitting somewhere [and] have to write something, and you still don’t know how to do it.”

Apart from the moral issue, cheaters are often caught. And AI-produced school papers may come across as generic, homogeneous or, on the other end of the spectrum, total bull.


“It’s often obvious to a teacher, who might [say], ‘Wow, your writing has really changed. This doesn’t sound like you,’ ” King says.

Stanford University researchers Victor Lee and Denise Pope say cheating rates among high schoolers have been high for a long time. While 60 percent to 70 percent of students surveyed have reported cheating over the years, Lee and Pope’s latest ongoing research shows that cheating rates have stayed about the same, or even decreased slightly, since AI chatbots like ChatGPT came along.


5. Learn the best ways to prompt

Obviously, students “should never claim they’ve done some work that they’ve not done,” says Helen Crompton, executive director of the Research Institute for Digital Innovation in Learning at Old Dominion University in Norfolk, Virginia. She’s also a professor of instructional technology at the school and says an AI bot can function as a copilot of sorts for learning.

Crompton recommends that students prompt AIs along these lines:

  • I’ve got to write an essay. Provide one-sentence starter ideas or topic suggestions.
  • I’m about to start a biology class and am a bit nervous. Give me some major glossary words that I need to know to get ready.
  • I didn’t quite understand this concept. Please explain it. Then, please explain it even more simply, as if I were a fifth grader.
  • I’m preparing for a test on a specific topic. Quiz me with some sample questions.

An example: A student in one of Marx’s classes was struggling with grammar. As a fun exercise, the professor fed a couple of paragraphs from the student’s paper into ChatGPT, along with a prompt like this: As a writer, I have a problem with sentence fragments. Can you rewrite these two paragraphs to correct the sentence fragments without changing my meaning, following the style presented in the handbook we use?

ChatGPT not only fixed the mistakes but explained the corrections it made.

“There was an opportunity there for the student to say, ‘Here’s what it is telling me that I’ve done wrong. Let me reflect on that,’ ” Marx says. AI can help students by offering opposing viewpoints, alternate perspectives and critical feedback.

OpenAI recently introduced a creative writing coach as one of its personalized AI agents called GPTs, an acronym for generative pretrained transformers. They’re available to subscribers of the company’s $20-a-month ChatGPT Plus plan. The coach suggests ways to improve the writing it’s given and reviews passages for clarity, relevance and structure.

6. Preach politeness

When adults talk to an AI, they’re setting examples for children, Crompton says.

“Model politeness,” she says.

That’s also a good practice when talking to smart speakers and their digital assistants. Amazon and Google are rolling out generative AI in the Alexa and Google Assistant products you may already have, and Apple is expected to jump on the bandwagon next year with improvements to Siri.

“We’ve gotten so used to [barking out instructions like] ‘Give me the weather!’ ” Google’s Krawczyk says. “I would never ask my spouse that. [Instead try,] ‘What do you think is a nice thing to do on a day like today?’ It’s really amazing to see how the technology kind of morphs to kindness.”

7. AI may help you land or keep a job

Your kids will need to understand AI by the time they enter the workforce. Grammarly recently commissioned a study with Cambridge, Massachusetts-based Forrester Consulting that found 70 percent of surveyed employees at large businesses use generative AI for most or all of their writing at work.

Although just a fourth of employers have a generative AI strategy in place now, 97 percent are expected to have one by 2025. And when Junior needs to apply for a job, AI can help with resumes and cover letters.

“We as parents and teachers … owe it to this rising generation to give them exposure to this,” Maxwell says. “But [we must] do it in a way now where they can understand what they will forgo by just pushing the ‘easy’ button.”

This story, originally published Nov. 17, 2023, was updated with the results of a Stanford University survey.
