
Master or Servant? A Parent’s Guide to AI

By Jake Cutler

I wanted to start with some mind-blowing predictions about AI but changed my mind when I came across a list of really bad tech predictions made by people much smarter than I am. For example:

  • “Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”
    —Darryl F. Zanuck, Head of 20th Century Fox, in 1946
  • “There is no reason anyone would want a computer in their home.”
    —Ken Olsen, Founder of Digital Equipment Corporation, in 1977
  • “By 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.”
    —Paul Krugman, 2008 Nobel Memorial Prize winner, in 1998

If those people missed the mark by that much, it’s probably best I spare you my own predictions. Plus, if your experience as a parent is anything like mine, you have more than enough to worry about today. We don’t want to worry about stuff that might, maybe, potentially, possibly happen years down the road.

But even if we set aside hypotheticals regarding AI, there is still plenty to talk about. Because AI’s impact is here already.

ChatGPT brought AI to the mainstream in 2022, and since that point we’ve seen AI digitally “undress” people’s photos, begin the process of communicating with whales in their own language, facilitate sophisticated online scams, help to combat addiction, and much more. 

I don’t know what else is to come. AI might evolve in ways that lead to tremendous good, and it might lead to serious harm. But, as parents, we have to accept that AI is a reality we need to deal with right here and now — because we are already using it and so are our kids.

So let’s forget predictions for now and take a look at what parents need to know about AI as it exists today.

First, a quick glossary of the key terms.

AI Terms Every Parent Should Know

AI is so new that the exact definitions for some of these key terms are still being debated. But having a general grasp will help. (These definitions are geared toward parents, not super-technical experts who will argue about tiny nuances.)

— —

ASI = Artificial Superintelligence

ASI is the ability for a machine to exceed human cognitive capabilities in every way. When people talk about ASI, they’re talking about sci-fi stuff — superhuman robots taking over the world and rendering humankind obsolete, etc. ASI is still just hypothetical.

AGI = Artificial General Intelligence

AGI is sometimes used interchangeably with ASI, but the distinction is that AGI is able to match or exceed human cognitive capabilities in a range of tasks, not every task. Some argue that today’s most advanced AI systems could already be defined as AGI.

AI = Artificial Intelligence

Artificial intelligence is more of a category of computer science than a single thing. It refers to systems that enable machines to “learn” by analyzing data in order to take actions, solve problems, and accomplish specific goals. AI is sometimes used as the umbrella term for all types of artificial intelligence (like AGI or ASI, defined above) but could also be used to mean “narrow artificial intelligence,” meaning AI that is capable of performing just one specific task.

Generative AI

Generative artificial intelligence creates new content — text, images, music, video, code, etc. — from user prompts and based on patterns in large amounts of training data. Think ChatGPT. AI-generated content can be prompted to mimic the style of specific artists, movements, or trends.

LLM = Large Language Model

Large language models are a type of generative AI that creates text from user prompts, based on patterns in large amounts of training data. Think ChatGPT when it first came out and could only produce text outputs.

Machine Learning

Machine learning focuses on training a machine to extract information from data to identify patterns that can be used to perform tasks. Machine learning is a subset of AI (so machine learning is AI but not all AI is machine learning).
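If you’re curious what “extracting patterns from data” looks like in practice, here is a minimal sketch in Python. Everything in it (the practice hours, the quiz scores) is invented purely for illustration. The program is never told the rule connecting hours to scores; it infers a line from the examples and then uses that line to predict a case it has never seen.

```python
# A toy example of machine learning: fit a straight line to example data,
# then use the "learned" line to predict an unseen value.
# All numbers are made up purely for illustration.

def fit_line(xs, ys):
    """Least-squares fit: return the (slope, intercept) of the best line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope, mean_y - slope * mean_x

# "Training data": hours of practice vs. quiz score (invented)
hours = [1, 2, 3, 4, 5]
scores = [52, 61, 70, 79, 88]

slope, intercept = fit_line(hours, scores)

# The extracted pattern lets the program predict a case it never saw
predicted = slope * 6 + intercept
print(round(predicted))  # 97 for this made-up data
```

Real machine learning systems do this with vastly more data and far more flexible “lines,” but the principle is the same: patterns in, predictions out.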

— —

The Big AI Risk Few Are Talking About

We often talk about tools being neutral (i.e. whether they’re “good” or “bad” just depends on how we use them). But a tool is never a blank slate. The circumstances surrounding the person holding it will push that person to use it in specific ways.

Put another way — I will interpret the purpose of a tool based on whatever I believe my purpose is as a person. 

The most obvious example of a framework for interpreting purpose is religion, but it’s just one example. Whatever framework you give your child, when you hand them a tool, they will receive that tool with a default assumption that the tool is meant to fulfill some part of that bigger purpose.

Now — and this is the scary part — tools not only help us accomplish a purpose, they can also suggest what that purpose should be. I don’t know if that’s true of a simple tool like a shovel but it is true of a sophisticated tool like ChatGPT.

If you haven’t given your child a solid framework to interpret all the things that come into their life, these types of tools will teach them one. And even if a child has a framework, spending hours using a tool every day can do quite a bit to morph that framework into an alternate version that makes more sense for the tool (not the child).


This is all a bit abstract so let me give you a more concrete, contemporary example most of us know well: Instagram.

  • Instagram’s infinite feed strongly implies you should give this app not just a lot of your time and attention, but all of it. You never get to the end — something outside of the feed has to pull you away. 
  • The emphasis on images strongly implies you should care an awful lot about appearances. 
  • The emphasis on short-form video strongly implies that hot takes and “mic drops” matter more than careful consideration and nuanced discussion that require mental effort to understand. 
  • The heart button and an algorithm that promotes virality strongly imply that success = likeability. 

With all these features (and more) bundled together, Instagram suggests to us that our purpose is to be as likeable as possible, that likeability is largely superficial, and that Instagram is indispensable in helping us achieve that purpose. It’s a self-serving loop — but Instagram is the self being served, not us.

If a child (or adult, for that matter) does not have a strong framework that suggests a deeper meaning to life — and a support system constantly reaffirming and refining that framework — it’s easy to see how 4-5 hours per day spent on social media could lead one to believe that one’s worth is measured by post engagement. 

Should we really be surprised then that these platforms are contributing to body dysmorphia, suicidal ideation, and an unprecedented mental health crisis?

Now, with that in mind, let’s look at how our kids are interacting with AI today.

What is AI as a tool? And if it’s as powerful as it appears to be, what kind of ideas about our purpose and very nature as human beings could it promote?

How AI Works

AI uses machine learning to perform tasks that typically require human thought. Obviously there’s a lot more to it than that, but that’s the gist.

AI is extremely technical, and even the people who create these tools don’t fully understand exactly how they work. But don’t worry. An AI CEO and clever writer, Nir Zicherman, provided a helpful analogy that covers the basics in a way just about anyone can understand. Let me give you a short version here.

If you were going to put together a nice meal for a dinner party you would choose your main course and side dishes based on things like ingredients, taste, and texture. You’d build the meal based on your own experience and intuition of what goes well together.

If a computer program wanted to plan a meal like this, it would have to determine the right dishes in a different way. Computers can’t taste so it couldn’t independently pair dishes based on qualities like taste and texture, let alone things like “gut feel” or instinct. 

What computers can do really well is analyze vast amounts of data. So give it a list of a million meals prepared by food-tasting humans and it would plot each dish on a grid to determine how all these different dishes tend to relate to each other. The computer is not at all considering taste. Just distribution. 

[Image: different food dishes plotted on a graph]

According to you, a dish is characterized by how it tastes. According to the computer, a dish is characterized by the company it keeps — and the company it keeps is determined by countless real people who have prepared meals based on taste.

So let’s say you’ve decided on an Italian main course like spaghetti and want a couple of side dishes to go along with it. Caesar salad and caprese salad both pair well with spaghetti, but you wouldn’t choose salads for both sides. You know this through good sense and experience, but how does the computer avoid this blunder?

Zicherman explains, “It’s highly likely that caesar salads are often paired with other Italian dishes within our mountain of data. And it’s also likely that the presence of a caesar salad means that there won’t be another salad in the meal. The same can be said of caprese salads. They won’t typically appear with other salads, but they will appear with Italian dishes.”

So by organizing dishes based on how they relate to each other, the computer can determine not just which dishes pair well together but what a full meal is typically composed of. But again, this is based purely on data showing how humans typically build out full meals, not on the computer developing its own artificial sense of taste to build a delicious meal.
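For readers who like to see the idea spelled out, here is a minimal sketch in Python of that pairing logic. The five meals below are invented for illustration (a real system would analyze millions). Simply counting which dishes appear together is enough to reveal that both salads keep company with spaghetti while never keeping company with each other:

```python
from collections import Counter
from itertools import combinations

# A tiny, invented "mountain of data": each meal is a set of dishes.
meals = [
    {"spaghetti", "caesar salad", "garlic bread"},
    {"spaghetti", "caprese salad", "tiramisu"},
    {"lasagna", "caesar salad", "garlic bread"},
    {"spaghetti", "caesar salad", "tiramisu"},
    {"tacos", "refried beans", "churros"},
]

# Count how often each pair of dishes shows up in the same meal.
pair_counts = Counter()
for meal in meals:
    for pair in combinations(sorted(meal), 2):
        pair_counts[pair] += 1

def times_together(dish_a, dish_b):
    """How often two dishes appeared in the same meal."""
    return pair_counts[tuple(sorted((dish_a, dish_b)))]

# Both salads keep company with spaghetti...
print(times_together("spaghetti", "caesar salad"))   # 2
print(times_together("spaghetti", "caprese salad"))  # 1
# ...but the two salads never keep company with each other.
print(times_together("caesar salad", "caprese salad"))  # 0
```

From counts like these, the computer can both suggest good pairings and avoid the two-salad blunder, all without ever tasting anything.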

If you’ve followed all of this then, believe it or not,  you understand quite a bit more than the average person about how AI works. 

All you need to do is replace “meals” with “sentences,” and replace “dishes” with “words” and you’ve got an LLM. Or for generative AI tools that create images, replace “words” and “sentences” with “colors,” “shapes,” etc.

When you ask an LLM to write a poem for you, it does so without any understanding of what words mean, or any desire to convey emotions. It just knows, based on an analysis of unimaginable amounts of data, how humans tend to group words together when they write poems.

[Image: different words plotted on a graph]

The key here is that AI is not thinking better than a human being. It’s not thinking at all. At least not in the sense that human beings think. AI is not a mind. 

But everything about the way AI is being discussed, promoted, and sold to us would have us believe otherwise.

AI as Religion

In a post on X, Geoffrey Hinton, the Nobel Prize-winning “godfather of AI,” compared training LLMs to “parenting for a supernaturally precocious child.” At a 2023 AI conference he stated, “It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

This is more than simply saying that AI is like us — that it can think and reason like us — it is saying that AI will end up being more “us” than we are. 

Anthony Levandowski, an AI pioneer in self-driving cars, put it as bluntly as any AI leader has: “[AI] will effectively be a god . . . not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

Far from being anomalies, these statements are pretty representative of the dominant thinking in AI circles. As a humanist chaplain at MIT and Harvard put it, “the secular tech world is determined to breathe new life into being, like Yahweh did to Adam in Genesis…. It constantly promotes evangelistic new leaders as prophets, visionaries, oracles, and diviners of truth and success in equal measure.”

A Mirror, Not a Mind

Shannon Vallor, a former AI Ethicist at Google who currently works as the head of the Center for Technomoral Futures at the University of Edinburgh in Scotland, recently published a book suggesting a better way to think about AI.

Vallor argues the right way to think of AI is as a mirror. Let me explain.

We probably lose sight of how miraculous mirrors are because they’re so commonplace. But think of it: stand in front of your bathroom mirror and it will instantaneously reproduce a perfect reflection of YOU. Have you ever tried to draw a human hand? Not easy. But not only can a mirror match your hand perfectly, it matches everything in view of the mirror and mimics every movement too. It’s actually very cool.

But that doesn’t mean your reflection is you. It has no depth or dimension. No warmth or softness. It can’t think, feel, or choose to do anything on its own.

Applying the mirror concept to the meal analogy above, what this means is that the computer is simply mirroring our collective decision-making when it plans a meal. It isn’t making judgments itself.

[Image: woman looking in a mirror and seeing many people looking back at her]

According to Vallor, “We know that the brain and a machine-learning model are only superficially analogous in their structure and function. In terms of what’s happening at the physical level, there’s a gulf of difference that we have every reason to think makes a difference.”

For parents just looking to give their child a happy, healthy upbringing in a digital world, does it really matter what metaphors we use for AI? Vallor argues it matters quite a lot.

“I think AI is posing a fairly imminent threat to the existential significance of human life,” Vallor explains. “You get these people who really think that the time of human agency has ended, the sun is setting on human decision-making — and that that’s a good thing and is simply scientific fact. That’s terrifying to me.”

It sounds pretty terrifying to me too.

But despite the terror some AI discussions can provoke, I don’t believe we need to be afraid of it. I believe we as parents are capable of guiding our kids safely through the AI revolution. I believe our kids are more than capable of taking this unprecedented technological innovation and doing way more good than harm with it.

You might soon be amazed to watch as your kid uses AI to write code for their own apps, create amazing visual artwork, or improve their own writing ability. Maybe your child will become the one who uses AI to uncover the link between poor sleep and Alzheimer’s or discover a better way to power an increasingly power-hungry world. AI is an incredibly potent tool.

But as we saw with the power of social media, we can’t just hand it over and let kids run wild.

How to Talk to Your Kids About AI

The first and most important tip is very simple but takes consistent effort: talk about it.

Talk about it often. Talk about the meal prep metaphor and the mirror analogy, or whatever better comparisons come to mind. Talk about AI casually over dinner. Talk about it intentionally as needs arise. Talk about it off-the-cuff as you learn things yourself. Talk about the good of it and the bad of it. 

Making AI a comfortable family topic will make it much more likely your child comes to you when they’re confronted by an AI app or AI-created situation that is so new or so alluring that no amount of conversation could have specifically prepared them for all the nuance of it.

So that’s the general tip: talk to them about AI.

Now, here are some more specific tips to use as you see fit.

Questions to Ask About New AI Tools

The following list comes courtesy of a very thought-provoking post by L.M. Sacasas:

  • If I were to become the ideal user of [this AI tool], would I be more fully human as a result? 
  • Would my agency and skill be further developed? 
  • Would my experience of community and friendship be enriched? 
  • Would my capacity to care for others be enhanced? 
  • Would my delight in the world be deepened? 
  • Would [it] be inviting me into a way of life that was, well, alive?

Other questions for you to consider before using an AI tool, or allowing your child to use it:

  • Does it allow for explicit content?
  • Are data and privacy features clearly communicated?
  • Is there a risk of spending too much time on the tool?
  • Does it replace an in-person activity that is important for my social, emotional, or physical well-being?
  • Does it provide verifiable sources for claims or assertions it makes?
  • Does it do work that would be beneficial in my own development, rather than helping me do that work myself?

Obviously, these lists aren’t comprehensive. But starting there will help you put your child in the right headspace before diving into a new AI tool.

Specific AI Risks for Parents to Know About

Every kid and every family is different so, as a parent, you’re best positioned to know which AI risks are most likely to impact your child. Here is a quick list of those that have already begun to cause enough trouble that they’re making the news. 

— —

AI Companions or Chatbots

AI-based friends or romantic partners are becoming increasingly popular. Their ability to mimic human interaction has gotten really impressive. While some use cases seem harmless or potentially beneficial, these interactions can blur the line between reality and simulation — especially for kids. For vulnerable children, these synthesized relationships may displace human relationships or foster inappropriate emotional reliance.

In one heartbreaking story, an AI companion seemingly encouraged a 14-year-old to commit suicide. Also, many of these AI tools are overtly sexual and include image generation capabilities that can introduce explicit content in the conversation, so that’s another big risk to keep an eye on.

Cheating

Generative AI and LLMs make academic dishonesty incredibly easy. Kids can use these tools to complete writing assignments, math problems, or exams without any understanding of the material.

Remember, AI is designed to accomplish complex tasks as quickly as possible but the goal of education is not the output — it is the experience and knowledge gained by working through a task. While AI can aid in research, provide feedback on early drafts of your own writing, or help brainstorm topics, misuse of these tools can stunt learning and critical thinking.

CSAM (Child Sexual Abuse Material)

As generative AI tools get increasingly good at creating visual content, it’s not surprising that some are using this to do terrible things. Most platforms have safeguards built in to prevent the creation of sexually explicit images but people are finding ways to bypass those.

As explained by Matteo Wong for The Atlantic, “50% of global law-enforcement officers surveyed had encountered AI-generated child-sexual-abuse material (CSAM).”

Data and Privacy

AI-driven platforms collect and process vast amounts of personal information. Each app, tool, or platform will have its own data and privacy policies but most of us don’t take the time to read the fine print to understand what data will be collected and how it could be used. Our kids almost certainly don’t.

In a chat-based AI tool, for example, a child might naively share information — ranging from simple likes and dislikes to sensitive details — without realizing it exposes them to targeted advertising, identity theft, or other scams.

Deepfakes

A deepfake is any piece of content — image, video, audio recording, etc. — that has been edited using AI or an algorithm to replace the person in the original with someone else, or to alter portions of the original in misleading ways.

AI makes creating deepfakes shockingly easy, including a new batch of “nudify” apps built specifically to digitally remove the clothing from an image to create an explicit version. For children, deepfakes can fuel cyberbullying, reputational harm, and deep shame that might lead to extreme reactions like suicide.

Explicit Content

After covering CSAM and deepfakes above, I might be beating a dead horse here, but kids with easy access to AI tools are at risk of encountering explicit and harmful content.

Misinformation

Even though AI products have improved drastically since ChatGPT entered the mainstream, they are still prone to providing false information as fact. These AI errors are called “hallucinations.”

Essentially, these hallucinations occur when the AI tool has a gap in its information but rather than telling you this, it just fills in the gap with something completely made up. Fact checking is crucial when it comes to any information presented by an AI tool.

Scams

The unique concerns posed by AI when it comes to deepfakes and data/privacy combine to create a perfect storm for scams. Financial losses from AI-driven scams reached $10 billion in 2023, and 2024 brought the rise of AI kidnapping scams.

One easy suggestion offered by the FBI to combat AI scams is to establish a secret safe word for your family that can be used to verify identity in suspicious situations.

— —

AI Apps and Tools Parents Should Know About

AI has dominated the tech space recently — over 60% of all tech funding went toward AI companies in 2024. That’s a lot of money going toward creating AI-assisted tools, so we can expect many more of these products to come.

That means we can’t possibly give you a list of every AI app or tool out there. But we have compiled a list of the top AI tools parents need to be aware of and will do our best to keep it updated.

That article also contains links to full articles we’ve written on some of those specific tools.

Take Tech in Steps

A general approach to tech that more and more parents are adopting is to tackle tech in steps. Much of the tech damage done to Gen Z occurred simply because they were given way too much, way too soon. Parents didn’t know any better and didn’t really have other options. That’s not the case today.

Kid-safe phones, music and messaging apps designed specifically for kids and teens, plus a general awareness led by best-selling books and Surgeon General warnings, all put parents in a much better spot than even five years ago. All of this momentum should be applied to AI.

Just because AI is being made widely available, does not mean your child needs it right now or needs it at all. You get to decide what kind of childhood your kid has, not Silicon Valley.

Join the Conversation

We’re just getting started when it comes to AI. As we learned with social media, these big tech movements present a collective action problem — it’s hard to help your child when all their friends are adopting the latest tech trends. So speak up. Here are a few simple ways to do it:

  • Comment below. Parents talking to other parents is a huge part of the movement toward safer tech for kids.
  • Join the free Gabb Now newsletter to add your voice to our regular parent surveys.
  • Share this article with friends, family, and parents of your kid’s friends.

