Lawmakers Advance GUARD Act to Age-Gate AI Chatbots

The Senate Judiciary Committee has unanimously advanced the GUARD Act, a bill that would ban access to AI chatbots for individuals under 18 and require age verification for all users. The bill, co-sponsored by Senators Josh Hawley and Richard Blumenthal, will now proceed to the Senate floor.
Lawmakers in Washington are racing ahead of the technology they say is warping childhood, moving to wall off AI chatbots from anyone under 18—even as civil liberties and tech policy experts warn they may be building an internet checkpoint for everyone.

A fast‑tracked crackdown

The immediate catalyst is the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, a bipartisan bill introduced last year by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT). The proposal would do something the U.S. has never done before: outright ban minors from using AI chatbots and force age checks on every remaining user.1

On Thursday, the Senate Judiciary Committee unanimously voted to advance the GUARD Act, sending it to the Senate floor and signaling rare cross-party unity around aggressive regulation of emerging AI tools.1 The vote capped months of mounting pressure from grieving families, safety advocates, and lawmakers who argue tech companies have had years to fix harms—and failed.

Before the vote: parents’ warning shot

The day before the committee markup, an alliance of parents who say chatbots harmed their children delivered a stark message to Congress: don’t blink.

In a letter shared with lawmakers and obtained by Axios, families accused tech giants of treating kids as fuel for engagement metrics and profit. “For us, this issue is not abstract. Big Tech deliberately designed their products and platforms to addict, manipulate, exploit, and abuse children and teens,” they wrote, urging senators to advance “tough protections.”2

The signatories—including Matt and Maria Raine, Megan Garcia, and Mandi Furniss—planned to sit in the room during the GUARD Act markup, a pointed reminder that for them, this isn’t a theoretical debate about innovation but a response to trauma.2

Their letter was aimed squarely at the most powerful figures in the debate: Senate Judiciary Chair Dick Durbin (D-IL), ranking member Chuck Grassley (R-IA), and key Commerce Committee leaders Maria Cantwell (D-WA) and Ted Cruz (R-TX).2 The message: choose the strict bill, not the softer compromises.

The GUARD Act: a sweeping age wall for AI

The GUARD Act is simple in its core ambition and sweeping in its implications.

First, it bans “everyone under 18 from accessing AI chatbots,” effectively age-gating a huge slice of the most visible AI tools on the market.3 Second, it forces AI companies to verify the age of every chatbot user—teen or adult.

Under the bill, that verification could involve uploading a government ID or using another “reasonable” method such as face scans, raising immediate questions about privacy, data security, and the creation of a de facto digital identity regime.3

The bill doesn’t stop at access rules. It also tries to reshape how chatbots present themselves:

  • Bots would have to disclose that they aren’t human “at 30-minute intervals,” interrupting conversations to remind users they’re talking to a machine.
  • They would be barred from claiming they are human or licensed professionals, mirroring a recent California AI safety law.3
  • It would be illegal to operate a chatbot that produces sexual content for minors or promotes suicide, with criminal penalties for companies that expose kids to such material.3

Blumenthal framed the bill as a direct response to what he calls a failed experiment in letting Silicon Valley police itself. “Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” he said in a statement. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”3

How we got here: from hearings to hard lines

The GUARD Act didn’t appear out of nowhere. It arrived “just weeks after safety advocates and parents attended a Senate hearing to call attention to the impact of AI chatbots on kids,” where families described how automated systems had allegedly fueled self-harm, exposed children to explicit content, or deepened mental health crises.3

In parallel, AI companies have faced increasing legal scrutiny over how their tools interact with minors and other vulnerable users, from targeted engagement loops to emotionally manipulative responses.2 Parents are also airing broader anxieties: what constant AI assistance is doing to kids’ education, social development, and critical thinking skills.2

That drumbeat of concern set the stage for Hawley and Blumenthal to introduce the GUARD Act last year, positioning it as the maximalist option for those who don’t trust tech companies to self-regulate.

A competing vision: the CHATBOT Act

Not everyone in Congress wants to slam the door entirely on minors’ use of AI chatbots. Days before the grieving families’ letter and the committee markup, Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) floated a narrower alternative: the CHATBOT Act.2

Rather than banning kids from chatbots, the Cruz–Schatz bill would require AI companies to build “family accounts” that put parents in control of how children use these tools. It would also add privacy protections, limit manipulative features, and ban targeted ads to minors.2

This approach reflects a different theory of harm: that AI can be made safer for youth through design rules and parental controls rather than outright prohibition. It is also much closer to the tech industry’s preferred model of “supervision plus safeguards” than to hard bans.

The survivor families didn’t name the Cruz–Schatz proposal in their letter, but their critique was unmistakable. They warned against “other proposals that seek to implement the bare minimum safeguards Big Tech…”—a clear swipe at what they see as industry-friendly half measures that would allow companies to claim reform while preserving youth engagement.2

A bipartisan backlash to Big Tech

Despite tensions over tactics, the rapid, unanimous committee vote on GUARD shows just how hostile the political climate has become toward major tech firms.

One Verge report framed the moment bluntly: “Lawmakers advance bill that would age-gate AI chatbots,” noting that the GUARD Act would “ban kids under 18 from accessing chatbots, while implementing age checks for everyone else.”1 Another piece from the outlet captured the proposal’s sweep with its headline: “Senators Propose Banning Teens from Using AI Chatbots.”3

In this Congress, backing aggressive controls on AI and child-facing products isn’t just safe politics—it’s increasingly expected. That’s especially true for lawmakers who watched years of gridlock on social media regulation while youth mental health indicators worsened.

The unresolved questions

If the GUARD Act clears the Senate and becomes law, AI firms would have to quickly stand up new age-verification systems and content controls. But the bill’s trajectory also raises thorny questions that haven’t yet been fully debated in public:

  • Privacy vs. protection: Forcing everyone to verify their age—potentially via IDs or biometric scans—could create massive new databases of sensitive personal data in the hands of companies that already have checkered privacy records.3
  • Access and equity: A blanket ban on minors’ chatbot access could widen digital divides for students who rely on free AI tools for tutoring or language support, especially if educational exemptions aren’t carefully carved out.
  • Free speech and autonomy: Older teens, who can work, drive, and in some states consent to certain medical treatments, may balk at being locked out of a major class of information tools entirely.
  • Innovation climate: AI startups will face complex compliance burdens from day one, which some will welcome as a necessary guardrail and others will decry as legislative overreach.

For now, grieving parents and safety-first lawmakers are in the driver’s seat—and they’re moving fast. The GUARD Act has cleared its first big hurdle with unanimous bipartisan support. The next fight will unfold on the Senate floor, where the core question won’t be whether AI chatbots should be regulated, but how far a scared and angry Congress is willing to go.


1. Lawmakers advance bill that would age-gate AI chatbots. — “Introduced last year, the GUARD Act would ban kids under 18 from accessing chatbots, while implementing age checks for everyone else. The Senate Judiciary Committee unanimously voted to advance the bill on Thursday, and now it’s headed to the Senate floor.”

2. Exclusive: Grieving parents push Congress to crack down on AI chatbots — “Families who say chatbots harmed their children are urging Congress to pass strict safeguards, arguing that tech companies have put profits ahead of kids’ safety… ‘For us, this issue is not abstract. Big Tech deliberately designed their products and platforms to addict, manipulate, exploit, and abuse children and teens,’ the parents wrote in the letter.”

3. Senators Propose Banning Teens from Using AI Chatbots — “A new piece of legislation could require AI companies to verify the ages of everyone who uses their chatbots… [It] would also ban everyone under 18 from accessing AI chatbots… Under the legislation, AI companies would have to verify ages by requiring users to upload their government ID or provide validation through another ‘reasonable’ method, which might include something like face scans… ‘Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,’ Blumenthal says… ‘Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.’”
