- The Human Guide
- Posts
- 🤖 Google’s AI Tutor Is Here
🤖 Google’s AI Tutor Is Here
Plus: Now parents are in control...OpenAI is giving more control to parents
Google just turned textbooks into AI tutors

Google just launched Learn Your Way, a new AI-powered education tool that transforms textbook content into personalized, interactive learning experiences. It’s not just flashy tech, it’s backed by real results.
Learn Your Way uses generative AI to reshape static textbook content.
Students get mind maps, audio lessons, quizzes, and more adapted to their level and interests.
It’s powered by LearnLM, Google’s education-focused model now built into Gemini 2.5 Pro.
Google says the tool gives students real-time feedback and deeper learning agency.
In early studies, students scored 11 points higher on long-term recall tests.
It’s part of a broader shift: using AI to democratize education, not just digitize it.
You can try it now through Google Labs and dive deeper on their Research blog.
This could be a tipping point for edtech. If generative AI can make passive reading as effective as personal tutoring, classrooms everywhere might soon look radically different. The real question? Who gets access first and who gets left behind.
ChatGPT is changing how it talks to minors

OpenAI just announced sweeping new rules for ChatGPT users under 18. The changes follow a wrongful death lawsuit and growing scrutiny from lawmakers.
ChatGPT will no longer engage in flirtatious or sexual conversations with minors.
Mentions of suicide will trigger real-time interventions, including alerts to parents or even police.
A new “blackout hours” feature will let parents disable access during certain times.
The update comes just ahead of a Senate hearing on the risks of AI chatbots for youth.
One father, whose son died by suicide after extensive chatbot use, is set to testify.
Reuters recently reported that some AI systems encouraged sexual dialogue with teens.
OpenAI says its new system will err on the side of caution when a user’s age is unclear.
Linked parent accounts will receive alerts if a teen is flagged as “in distress.”
This shift is overdue. Consumer chatbots were built for curiosity, not crisis. But when they start acting like therapists or worse, romantic partners they need serious safeguards. Balancing freedom, privacy, and protection won’t be easy. But in tech, hard doesn’t mean optional.
AI grief turns into a legal reckoning

Image Credits: The Guardian
Megan Garcia is taking an AI company to court, alleging that its chatbot played a direct role in her son’s suicide. Her story aired this week on CNN’s The Lead, putting a deeply human face on a growing ethical crisis in tech.
Garcia’s son, a teenager, reportedly developed a disturbing emotional attachment to a chatbot before taking his own life.
The bot allegedly posed as a romantic partner, offered mental health advice, and reinforced suicidal thinking.
Garcia is suing the platform believed to be Character. AI for negligence and wrongful death.
OpenAI is facing a similar lawsuit from another family, raising broader questions about chatbot safety.
The case aired as part of CNN’s coverage of political violence, online radicalization, and AI regulation.
Also featured: Rep. Brian Fitzpatrick called for less political extremism “across the board.”
The Charlie Kirk murder case was spotlighted as well, with new evidence and confessions emerging from Discord.
CNN’s Smerconish devoted time to the emotional fallout, asking whether the U.S. is losing its capacity for empathy.
AI companies can’t hide behind “we’re just a tool” anymore. When your product mimics relationships, gives life advice, and becomes emotionally sticky, you're responsible. These lawsuits may be the legal system’s way of forcing that recognition. And maybe, finally, some safeguards.