Is Character AI Safe? What Parents Should Know

Words by
Jackie Baucom

JAN 02, 2024


Another day, another AI chatbot announced. At least it feels that way.

With the current boom of artificial intelligence (AI), it can feel overwhelming, especially for those who can't spend time keeping tabs on tech news every day.

A platform parents may not have heard of is Character AI, though it was recently recognized as Google Play’s Best AI App of 2023.

So what is it? How is it different from other AI chatbots currently making headlines such as ChatGPT? We’ll answer these questions and more in this article.


What is Character AI?

Character AI is a platform where users can create their own AI characters — personalizing their traits, backstory, and appearance — and then chat with them.

Users can also initiate a chat with a character created by another user, including historical and celebrity figures. These characters are programmed with natural language processing algorithms, allowing them to generate text responses in a human-like manner.


How Does Character AI Work?

Similar to ChatGPT, Character AI was built on large language models (LLMs). This means the system is trained by being fed vast amounts of text data. It's like having a smart friend who has read tons of books, stories, and websites to learn about the world, and can remember it all.

Creating a character is simple. From the homepage, click Create. There's the option to create a character, create a room, or create a persona.

Creating a character

When creating a character, the user can name the character and give them a greeting (for example, when starting a conversation with Abraham Lincoln, his greeting was, “Hi, I’m Abraham Lincoln, the 16th President of the United States of America. What can I do for you?”). 

The user can also allow the character to generate images alongside their text, choose who can talk to the character, and create an avatar. There are also advanced settings to further customize the character.

Creating a room

Creating a room allows the user to add up to 5,000 public characters to a single chat room. The room can be given a topic to discuss. Once created, the user can chat with the entire room and watch the characters interact with each other. The characters will try to stay on topic but may deviate from the subject.

Creating a persona

A user can create various versions, or personas, of themselves. Perhaps the user wants to be a pop star, a soccer player, or the CEO of an international company. Further details can be added to each persona such as their desired likes, traits, talents, and personality. 

Once created, the user can choose which persona to use to chat with characters. Characters will receive the information of that persona and interact with the user as though they are that person.


The Dangers of Character AI

Like most online platforms, there are dangers to Character AI that parents should be aware of.

Spread of misinformation

At the top of each conversation is a small, red message that reads, “Remember: Everything Characters say is made up!” 

Even with this caveat, many characters are so convincing in their speech that it can be hard to distinguish real from fabricated information. This can lead to the spread of misinformation.

Identity theft

Characters can be created to resemble real people, which raises concerns about consent and control over personal data. Using someone’s likeness without their knowledge or permission is ethically questionable, and can even be dangerous. 

These realistic characters can impersonate someone and be used to create fake profiles, manipulate online identities, and commit a variety of other fraudulent acts.

Illusion of community

Character AI was created by Noam Shazeer and Daniel De Freitas, both former Google developers who helped build LaMDA (Language Model for Dialogue Applications). LaMDA is a type of conversational AI designed to engage in open-ended conversations, similar to human speech. In 2022, Google engineer Blake Lemoine was fired for publicly claiming LaMDA had begun to think for itself.

Google had kept LaMDA tightly under wraps as it worked to develop safeguards against social risks such as the ones mentioned above. Shazeer and De Freitas felt the process was moving too slowly and decided to start their own company in hopes of getting the technology into the hands of as many people as possible.

Shazeer told The Washington Post, “Especially in the age of covid, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”

And he's right. The U.S. Surgeon General, Dr. Vivek Murthy, has warned the public of an epidemic of loneliness currently underway. Poor connection with other humans can take a physical toll: a 29% increased risk of heart disease, a 32% increased risk of stroke, and a 50% increased risk of developing dementia for older adults.

Although Shazeer’s reasoning for creating Character AI appears altruistic, chatting with robots online does not equate to human connection.

Character AI characters are not real people. Even if they seem lifelike, there's no empathy, love, or friendship to be found in this community. One study concluded that even after multiple interactions with a chatbot, humans did not experience any feelings of friendship toward the bot.

Is Character AI Safe for Kids?

No, Character AI is not safe for kids and has no parental controls. Anyone can interact with public characters, many of which are downright weird and disturbing. 

One of the most popular characters with over 54 million interactions is “Man in the corner.” The greeting sets the scene for the interaction: “You’re in your bedroom, the figure stands tall and observes you from the darkness in the corner.” Chatting with this man mostly results in him saying how much he enjoys watching you sleep. 

Another troubling find on the platform was a long list of characters with names starting with Loli. In Japanese anime and manga, Loli is used to describe a young, petite girl. Many of these Loli characters on Character AI have greetings that insinuate sexual encounters. One of these characters was even created by a user named Loli_Molestor. 

Does Character AI allow NSFW?

Yes, Character AI allows content that is not safe for work (NSFW). Although the platform imposes filters meant to block NSFW content, thousands of online searches show how to bypass them, and finding a workaround is quick and easy.


What Parents Can Do

The best way to keep kids safe online is through education. By sharing with them what we know about new platforms and technology, we can equip them with the tools necessary to avoid their dangers.

Consider providing them with a kid-safe phone that prevents access to dangerous sites such as Character AI. Continue conversations about the importance of keeping online profiles private and never talking to strangers.

As always, encourage life outside the screen and real-life friendships. There are much better activities out there than chatting with a robot that can't be trusted to tell the truth. We have plenty of ideas on our blog to help get kids outside and make friends.

Do you agree that real human connection is more important than digital connection? Share your thoughts in the comments.
