This week’s top tech stories highlight how quickly technology is evolving and its growing impact on daily life.
From changes to social media terms affecting online privacy to the energy demands of AI, these updates are shaping how families interact with the digital world.
AI App Sued for Tragic Loss of a Teen
A Florida mother has filed a lawsuit against Character.AI, claiming her 14-year-old son’s suicide was linked to his fixation on a chatbot.
Despite knowing it was not real, the boy formed an emotional bond with the AI, telling the chatbot he was "coming home" to her just moments before taking his own life.
This case highlights urgent questions about the emotional impact of AI on young users and the need for safeguards to protect kids online.
Can a Chatbot Named Daenerys Targaryen Be Blamed for a Teen’s Suicide? | The New York Times
For our full parents’ guide to Character.AI safety concerns, read here.
New Instagram Features to Combat Sextortion
Instagram is launching new safety features to prevent sextortion scams.
Users can no longer screenshot or screen-record disappearing images sent in DMs, and suspected scam accounts are blocked from viewing follower lists and posts.
Teen users will get enhanced protection, including nudity filters, warnings, and crisis support partnerships.
Instagram rolls out new safety features to protect teens from sextortion | Tech Crunch
The Dark Side of Family Vlogging
Shari Franke, daughter of imprisoned YouTuber Ruby Franke, testified in Utah about the ethical and financial dangers of family vlogging.
Franke explained that growing up under the scrutiny of millions of viewers made her feel exploited and manipulated, as her personal moments and struggles were shared online without her full consent.
She advocates for stricter regulations and protection for children featured in family vlogs, emphasizing the lack of agency and consent kids often have in such environments.
Ruby Franke’s daughter speaks out to lawmakers on family vlogging dangers | ABC News
Tech Giants Bet on Nuclear to Power AI
Tech companies like Amazon, Google, and Microsoft are investing billions in nuclear power to meet the soaring energy demands of artificial intelligence (AI) while trying to curb carbon emissions.
Despite their clean energy goals, they face immediate reliance on fossil fuels, as nuclear projects using next-gen technology will take years to complete.
Small modular reactors (SMRs) are part of the plan, but until they become viable, natural gas and even coal may play a role in powering data centers, which consume as much electricity as midsize cities.
Nuclear-Powered AI: Big Tech’s Bold Solution or a Pipedream? | The Wall Street Journal
X Has New Terms: Your Tweets Could Help Train AI
Effective November 15, X (formerly Twitter) will use users’ content, including personal data, to train its AI models.
This has raised concerns among artists and privacy-conscious users about their tweets and creative work being used for machine learning purposes without consent.
It remains unclear whether users will still be able to opt out of AI data sharing, as they previously could.
X changed its terms of service to let its AI train on everyone’s posts. Now users are up in arms | CNN
Data Breach Exposes Child Abuse Risks
A data breach at Muah.AI, a site offering uncensored AI-generated girlfriends, revealed disturbing prompts requesting child sexual abuse material (CSAM).
The hack exposed thousands of users attempting to create illegal content, raising concerns about weak safeguards on lesser-known AI platforms.
The site’s owner admits to having limited resources for monitoring abuse.
The Age of AI Child Abuse Is Here | The Atlantic
Other Headlines
- The Election Has Taken Over TikTok | The New York Times
- The AI Boom Has an Expiration Date | The Atlantic
- More than 10,500 actors, musicians, authors protest tech’s AI data grab | The Washington Post
Did we miss anything?
Let us know in the comments below.