The New Sentinel

AI Surveillance Overreach and Reforms in Education and Law Enforcement

This episode dissects the rapid expansion of AI surveillance technologies in U.S. and European schools and police forces. Our hosts examine infamous incidents, regulatory differences, and the trade-offs between public safety and civil rights, drawing on case studies and personal insights to bring clarity to this high-stakes debate.

Chapter 1

False Positives and Escalation in U.S. School AI Surveillance

Chukwuka

Alright folks, welcome back to The New Sentinel. Chukwuka here. Today we're digging into something that honestly feels like a Black Mirror episode, but it's real life: the ballooning use of AI surveillance in schools across the U.S. I want to jump straight to the October 2025 incident in Baltimore County. This is wild: a student has a bag of Doritos, the Omnilert AI system spots it, tags it as a possible gun, and boom, armed officers come storming in. I mean, you can't make this stuff up.

Major Ethan “Sentinel” Graves

Yeah, I gotta say, back when I was a police captain in Texas, we flirted with some of these so-called "smart" systems. And look, I get the intent: keep kids safe, prevent the next Uvalde. But when you got AI mistaking snacks for weapons, that ain't smart tech, that's a lawsuit waiting to happen. Take Gaggle, which scans student emails, documents, and chats: it flagged over 1,200 incidents in one Kansas school district in less than a year. Most were harmless, bad jokes, out-of-context stuff. Yet students got detained, some even cuffed. That's not security, that's trauma by algorithm. You gotta ask: does more tech really mean more safety, or just more kids ending up in cuffs for nothing?

Duke Johnson

Ethan, it’s like we’re trading real boots-on-the-ground assessment for a computer’s best guess. I mean, you roll up on a kid for a Crunchwrap because a camera system can’t tell a tortilla from a Glock? Where’s the chain of command on sanity, you know? And parents are getting called left and right because their kids made a joke about something weird online. It’s escalation, not prevention. I still believe security is good, but you need soldiers, not sensors, calling the shots.

Olga Ivanova

But Duke, what we see is over-surveillance falling hardest on students who are already vulnerable, especially in under-resourced schools. When there's a 15–40% error rate and students are getting handcuffed for harmless mistakes, that's not just a technical glitch, it's a systemic issue. And let's not forget: these responses traumatize not only the children flagged but entire communities. I have interviewed students who, after being wrongly accused by these systems, struggle to trust any authority figure at all. The cost isn't measured just in suspensions or arrests, but in long-term psychological impact.

Chukwuka

There you go—it's the lack of context that gets me. AI is flagging keywords and patterns but has no clue about intent, right? And it's easy for schools to just say, "Let’s deploy the latest tech," especially after tragic shootings, but it’s mostly performative. As we've mentioned in previous episodes—when technology gets thrown at complex human problems, sometimes you just end up amplifying the mess.

Chapter 2

Bias, Privacy, and Psychological Harm: Unintended Consequences

Olga Ivanova

The consequences get even messier when you look at bias. These algorithms are trained on old data, data shaped by decades of systemic inequity. We have solid evidence now: Black, Latino, and LGBTQ+ students are at least twice as likely to be flagged. Remember that breach in Vancouver, Washington last spring? It wasn't just data loss. Sensitive youth info, mental health struggles, sexual orientation, even disciplinary records, leaked out because the district's AI surveillance system had weak safeguards. One girl I interviewed was outed to her family and her school by the system, and it destroyed her trust and sense of safety. She dropped out, even though she'd done nothing wrong.

Major Ethan “Sentinel” Graves

Olga, you're right about the chilling effect. I saw it firsthand: kids start self-censoring even when they aren't doing anything wrong. Counselors would tell me students avoided asking for mental health help because they knew the school's AI was reading their emails and search histories. Frankly, we had more problems with students feeling surveilled than with any real threats being caught. Most of those flagged alerts? Jokes gone sideways or edgy language, not actual violence.

Duke Johnson

I get what y'all are saying, but I'll just say this: when there's a credible threat, you need an immediate response. That's boots-on-the-ground talking. But when a tool's generating hundreds of bogus alerts, you overload the system and burn out the people who should be responding to the real stuff. Lawsuits are already piling up over this "digital dragnet," arguing it's unconstitutional. And we don't actually have data showing these AI systems cut school violence, or suicides for that matter.
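
Duke's overload point is, at bottom, simple base-rate arithmetic. A minimal sketch in Python makes it concrete, using purely hypothetical numbers: 1,200 alerts a year (echoing the Kansas figure mentioned earlier) and a detector that catches 90% of real threats, a rate chosen for illustration rather than taken from any vendor's claims.

```python
# A back-of-the-envelope look at alert overload, with hypothetical numbers.
# Assumptions (not figures from the episode): 1,200 alerts per year and a
# detector that catches 90% of real threats. Precision here means the
# fraction of all alerts that trace back to a genuine threat.

ALERTS_PER_YEAR = 1200   # total flags the system raises (assumed)
SENSITIVITY = 0.9        # share of real threats it actually catches (assumed)

for real_threats in (5, 25, 100):
    genuine = real_threats * SENSITIVITY       # real threats that get flagged
    precision = genuine / ALERTS_PER_YEAR      # fraction of alerts that are real
    false_alarms = ALERTS_PER_YEAR - genuine   # alerts responders chase for nothing
    print(f"{real_threats:>3} real threats: {precision:6.1%} of alerts genuine, "
          f"~{false_alarms:,.0f} false alarms")
```

Even in the most generous scenario here, responders spend almost all of their time chasing false alarms, which is exactly the burnout problem Duke describes.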

Chukwuka

And you all know I’m all for public safety, but I'm also a dad. If my own kid was wrongly flagged and dragged out of class, handcuffed for a joke? There's no amount of "oops" that makes that right. It’s classic over-policing of the same groups we talked about back in our episode on the "No Kings" protests. All that constant monitoring—students start editing what they say, or they just stop saying anything at all. That kind of psychological harm—being afraid to make a mistake—sticks with a person long after school.

Olga Ivanova

And it's not just fear, it's the complete erosion of trust in authority and institutions. In Europe, regulators call that a "chilling effect," and it's one reason their laws are so strict. But in the U.S., without federal-level oversight, vendors can market these systems straight to schools with no audit or transparency requirements. So we end up in an endless feedback loop: over-surveilled groups get flagged more, which justifies even more surveillance. It's a circle of harm, not safety.

Chapter 3

U.S. Market Adoption vs. European Rights-Based Reforms

Chukwuka

So let's zoom out. The U.S. is the Wild West of surveillance tech: a patchwork that varies state by state, school by school. Vendors have rolled out software in over 500 districts now, in a market worth over $6B, with no federal regulation. It's all about rapid deployment after high-profile incidents, public safety first, audits and rights a distant second. But cross the Atlantic and things look different, yeah?

Olga Ivanova

Absolutely, Chukwuka. The EU AI Act, whose prohibitions have been enforced since February this year, has drawn a firm line. Real-time biometric identification in public spaces is banned except for serious crimes, and even then only with judicial approval; untargeted scraping of facial images from CCTV or the internet is prohibited; and any so-called "predictive" system has to undergo strict risk audits and regular bias checks. Europe is putting human rights, the right to privacy and to expression, ahead of efficiency. Countries can be slow, and sometimes there are loopholes, but these standards do give students and families more protections than you see in the U.S.

Duke Johnson

Yeah Olga, but let's not ignore that those European standards come with trade-offs: slower tech adoption, more red tape, maybe even gaps in security. The U.K. just ran live facial recognition vans through London, 580 arrests in one year, so even they're flexing exceptions when it suits 'em. Meanwhile, U.S. kids are at real risk, and parents demand action. Focus too much on proportionality and you lag behind on innovation. That's a risk too.

Major Ethan “Sentinel” Graves

I see both sides. Europe's stricter, with big fines, mandatory audits, and centralized oversight, but they're not immune to abuse or enforcement delays. In the U.S., school districts can rush out new software overnight. Fast, but reactive and messy, especially with no clear standards for privacy or bias. After Baltimore, you saw schools pause some of this tech. Sometimes it feels like the two sides of the Atlantic are playing chess with two different rulebooks.

Chukwuka

As someone who grew up navigating both systems, it's night and day. In the States, you get calls from your kid's school about AI flags. In the U.K., there are a lot more notification and consent hoops, and honestly, more questions about "is this really necessary?" I'm not saying either side gets it perfect, but there's a lot to learn from both. Maybe the future lies in a mix: a U.S.-style focus on evidence and urgency, with Europe's bias audits and rights checks built in from day one.

Olga Ivanova

Yes, and we should be brave enough to have those transatlantic conversations—adopt each other’s strengths, not entrench the flaws. I worry that if countries don’t prioritize rights, we surrender too easily to the logic of “more safety at any cost.” But leaving innovation behind isn’t the answer either.

Duke Johnson

I’ll just say—let’s keep boots on the ground, eyes open, and remember, tech should be a tool, not a master. We need smart regulations, but not at the cost of readiness in the real world.

Major Ethan “Sentinel” Graves

Agreed, Duke. Whatever comes next, if we aren’t having open, tough debates like this, we’ll just end up repeating old mistakes with better gadgets. Well, “better”—that might not be the word, huh?

Chukwuka

Couldn’t have said it better. That’s it for today’s roundtable. This debate isn’t going anywhere soon—plenty more to come as policy catches up with the tech. Thanks to all of you—Ethan, Olga, Duke—for the insights, the arguments, the stories. Listeners, don’t forget: keep asking questions and thinking deeper. We’ll be back next week, same time, same table. Thanks for tuning in to The New Sentinel.

Olga Ivanova

Thanks, everyone. Stay brave—keep your rights close and your skepticism closer. See you next time.

Major Ethan “Sentinel” Graves

Appreciate y’all. Take care and stay sharp out there.

Duke Johnson

Hooah—that’s a wrap. Stay vigilant, y’all.