
July 2025 - Minaz Jivraj, My Take: Can AI Make Schools Safer, or Just Seem Safer? A Look Behind the Code

  • Writer: Minaz Jivraj

In recent years, as concerns over school safety have intensified, educational institutions across North America have turned to artificial intelligence (AI) as a potential safeguard against threats ranging from physical violence to mental health crises. Promising enhanced surveillance, real-time alerts, and predictive analytics, AI technologies are being rapidly adopted in hallways, classrooms, and online learning platforms with the expectation that they will detect weapons, flag concerning behaviors, and intervene before tragedy strikes.


This surge in interest is driven by a growing belief that AI can outperform human oversight by analyzing massive volumes of data, whether from security cameras, school-issued devices, or student communications, with speed and precision. From weapon-detection scanners at entry points to behavioral algorithms monitoring student language for signs of self-harm, AI is increasingly being positioned as a frontline tool in school safety strategies.


But as implementation accelerates, so too do questions from educators, parents, and privacy advocates: Are these tools truly improving safety outcomes, or simply creating the appearance of control in environments that remain complex and deeply human?


This article takes a critical look behind the code, exploring where AI is making measurable impact, where it falls short, and what school leaders must consider before handing the keys of student safety to algorithms.


Can AI Identify Safety Threats in Schools?


The Promise: Why AI Is Being Adopted

1. Real‑time video surveillance

Advanced computer vision systems like those from VOLT AI, ZeroEyes, and others are being rolled out in hundreds of U.S. school districts, analyzing live camera feeds to detect violence, medical emergencies, weapons, and atypical crowd behavior.

  • Violence prevention: For instance, VOLT AI, deployed in Loudoun County, VA, flags fights or bullying observed in hallways or locker rooms. The system flags only “a handful” of such events per week, with human operators reviewing alerts before notifying school staff. At Arizona’s Prescott High School, the system detected a student’s asthma attack in real time, prompting an immediate medical response.

  • Weapon detection: Companies like ZeroEyes scan camera footage to identify firearm shapes. Their software can flag a firearm once at least 18 % of its structure is visible, alerting both school staff and law enforcement. These systems operate in over 40 states across schools, airports, and public venues.

  • Audio analysis: Beyond visuals, AI listens for screams, gunshots, slamming doors, or keywords linked to bullying or distress in bathrooms and classrooms.
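As a rough illustration of how such camera-based systems decide when to raise an alert, here is a minimal sketch. The detection fields, thresholds, and human-review step are assumptions modeled on the descriptions above, not any vendor's actual pipeline; only the 18 % visibility figure comes from the text.

```python
# Illustrative sketch only: a simplified alert-gating pipeline of the kind
# described above. Fields and thresholds are hypothetical, not a vendor API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # e.g. "firearm", "fight", "medical"
    confidence: float        # model score in [0, 1]
    visible_fraction: float  # how much of the object the model can see

def should_alert(d: Detection,
                 min_confidence: float = 0.85,
                 min_visible: float = 0.18) -> bool:
    """Queue an alert for human review only when the model is confident
    and enough of the object is visible (min_visible mirrors the ~18%
    weapon-shape figure cited above)."""
    return d.confidence >= min_confidence and d.visible_fraction >= min_visible

# Alerts go to a human operator for review, not straight to a lockdown.
frames = [Detection("firearm", 0.91, 0.25),   # confident, clearly visible
          Detection("firearm", 0.40, 0.10)]   # weak score, mostly occluded
alerts = [d for d in frames if should_alert(d)]
print(len(alerts))  # 1 alert queued for operator review
```

The key design point, echoed by the vendors quoted above, is that the algorithm only nominates candidates; a trained person decides what happens next.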

 

2. Predictive risk analytics & digital monitoring

Other AI applications scan online student behavior:

  • Language‑model analysis: Systems tuned to essay or chat data, like Ormerod et al.’s model, detect signs of self‑harm, suicidal ideation, or violent intent.

  • School‑issued device scans: Tools like Gaggle, GoGuardian, and Securly monitor emails, documents, and search terms for risky content. Gaggle operates in ~1,500 U.S. districts; it reports that 96 % of surveyed users feel it helps prevent suicide and self‑harm, and claims to identify roughly one suicide attempt per 200 enrolled students.
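To make the monitoring step concrete, here is a minimal sketch of the kind of text screening these tools perform. The phrase list, weights, and threshold are invented for illustration and are far simpler than the language models actually deployed.

```python
# Hypothetical sketch: keyword-weighted risk scoring of student text.
# Phrases and weights are invented, not drawn from any real product.
RISK_PHRASES = {
    "hurt myself": 0.9,
    "want to die": 0.9,
    "bring a gun": 0.8,
}

def risk_score(text: str) -> float:
    """Return the highest-weighted risk phrase found, or 0.0 if none."""
    lowered = text.lower()
    return max((w for p, w in RISK_PHRASES.items() if p in lowered), default=0.0)

def flag_for_counselor(text: str, threshold: float = 0.7) -> bool:
    # Flags route to a trained human, never to automatic discipline.
    return risk_score(text) >= threshold

print(flag_for_counselor("I just want to die sometimes"))  # True
print(flag_for_counselor("This homework is killing me"))   # False
```

Even this toy version shows why context matters: a simple matcher cannot tell hyperbole from genuine distress, which is exactly why human review is essential.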

 

3. Enhanced emergency response

AI-enhanced systems also contribute to coordinated crisis responses:

  • Detection of threats triggers lockdowns, alerts law enforcement and first responders, and sends real-time updates to faculty and students.

  • AI-directed vehicle and license-plate monitoring controls campus entry and can pre-emptively block access for known threats.

 

Real‑life Successes

  • Loudoun County, VA: VOLT AI flagged a seizure event in common areas, enabling timely medical help.

  • Prescott High, AZ: AI identified a student having an asthma episode, allowing prompt intervention.

  • Ohio district study: Surveillance algorithms caught alarming journal entries and enabled mental-health outreach.

These examples illustrate that AI can enhance situational awareness and response speed, especially in large campuses where human oversight is limited.


The Challenges: Where AI Falls Short

1. Limited accuracy and false alarms

AI models rely heavily on training data quality, camera conditions, and context comprehension:

  • High false-positive rates plague weapon detection. Evolv systems reportedly misidentify everyday objects such as umbrellas and laptops, with false-alarm rates as high as 60 %.

  • At one Virginia high school, staff preoccupied with operating scanners missed a violent incident because their attention was diverted.

  • Video analytics lose effectiveness in poor lighting, at oblique camera angles, in bad weather, and at low resolution, and can miss threats as a result.

  • Behavioral analytics can misinterpret innocent gestures; for example, a student reaching for a pizza slice may trigger a weapons alert.
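The base-rate arithmetic behind the false-alarm problem is worth spelling out. The sketch below uses invented numbers (scan volume, threat prevalence, and error rates are assumptions, not vendor figures) to show why, when genuine threats are vanishingly rare, even a fairly accurate detector produces almost nothing but false alarms:

```python
# Back-of-envelope illustration; all numbers are assumptions for teaching,
# not measured performance of any real system.
daily_scans = 2000          # person/bag scans per school day
threat_prevalence = 1e-5    # assumed: genuine weapons per scan
sensitivity = 0.95          # assumed: chance a real weapon is flagged
false_positive_rate = 0.02  # assumed: chance a benign item is flagged

true_alerts = daily_scans * threat_prevalence * sensitivity
false_alerts = daily_scans * (1 - threat_prevalence) * false_positive_rate

# Precision: of all alerts raised, what fraction are real threats?
precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} false alerts/day, precision ~{precision:.4f}")
```

Under these assumed numbers the system raises roughly 40 alerts a day, essentially all of them false, which is precisely the dynamic that trains staff to stop taking alerts seriously.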

 

2. Privacy, security, and bias concerns

Extensive school surveillance raises critical ethical questions:

  • Data breaches: Investigations revealed misconfigured systems where thousands of sensitive student records were publicly accessible. In Vancouver, WA, unprotected Gaggle screenshots exposed personal mental-health admissions.

  • Privacy erosion: Students report heightened anxiety and feelings of being “under suspicion” constantly.

  • Bias issues: Facial-recognition systems misidentify Black and Latino students up to 10× more often than their white counterparts. Flagging of LGBTQ+-related terms also resulted in outing students and eroding trust, until Gaggle removed such flags in 2023.

 

3. Lack of definitive outcome evidence

Independent research on actual safety improvement remains sparse:

  • A RAND study concluded there is “scant evidence” that AI monitoring reduces violence or suicide rates.

  • The ACLU warns these systems foster a false sense of security and lack rigorous impact evaluation.

 

4. Resource strain and equity issues

Deploying AI can be expensive and resource-intensive, potentially undermining more effective human-centered strategies:

  • A Kentucky district spent $12 million on weapon detectors, only to find them ineffective. Utica, NY, spent $3.7 million and removed its units after a student bypassed detection with a knife.

  • Minority students disproportionately suffer from over-surveillance, contributing to the school‑to‑prison pipeline.

  • These “security-theater” measures can create hostile educational atmospheres, distracting from investments in counselors, mentors, or community programs.

 

5. Contextual understanding is limited

AI lacks human sensitivity and understanding of nuance:

  • It cannot differentiate whether a student is retrieving lunch money or concealing a weapon; context still requires human judgment.

  • Novel weapons or improvised threats may not match known training data and could be overlooked.

 

Conclusions & Responsible Integration

AI does hold real potential as a force multiplier: an alerting assistant that enhances human oversight.


Concrete benefits include:

  1. Early intervention in medical or mental-health crises

  2. Proactive detection of fights, weapons, or unusual behavior

  3. Faster coordinated emergency response

Yet its limitations, including false alarms, equity issues, and inconclusive evidence of effectiveness, must temper expectations.


Best practices for responsible AI use in schools include:

  • Human‑in‑the‑loop systems: Ensure every AI alert is reviewed by trained personnel prior to action.

  • Vetting performance claims: Independent validation, peer-reviewed audits, and manufacturer transparency are essential.

  • Privacy-first design: Data encryption, limited retention, auditability, and mechanisms for parental consent/opt‑out must be implemented.

  • Bias mitigation: Regular audits for false positives across demographic groups; exclusion of sensitive content like LGBTQ+ identifiers.

  • Holistic strategy: AI should complement, not replace, counselling, mental-health funding, relationship-building, and community engagement.

  • Transparency and oversight: Schools should disclose AI use, involve parents and students in policy making, and permit external review.
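As one small example of what "limited retention" from the list above might look like in practice, here is a sketch of an automatic purge routine. The record fields and the 30-day window are hypothetical choices for illustration, not drawn from any vendor's product or any legal requirement.

```python
# Hypothetical sketch of a privacy-first retention policy: alert records
# older than a fixed window are purged automatically.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed window; districts should set policy

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still inside the retention window."""
    return [r for r in records if now - r["created"] <= RETENTION]

now = datetime(2025, 7, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created": now - timedelta(days=5)},   # recent, retained
    {"id": 2, "created": now - timedelta(days=45)},  # expired, purged
]
print([r["id"] for r in purge_expired(records, now)])  # [1]
```

Pairing a rule like this with encryption, audit logs, and parental opt-out turns "privacy-first design" from a slogan into an enforceable configuration.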

 

Final Analysis

AI can help detect emergent threats, such as fights, seizures, self-harm indicators, or weapons, in real time, often more quickly than human monitoring in busy school settings. But its successes are situational and depend on human oversight.


Despite real‑world incidents where lives may have been saved, there are competing costs:

  • High false-positive rates leading to alert desensitization

  • Significant financial costs often exceeding alternatives

  • Continued data security, privacy, and bias concerns

  • Lack of hard proof tying AI to violence reduction

 

The evidence suggests AI is supportive, not definitive. Schools should proceed carefully, balancing technological capability with rigorous protocols, ethical guardrails, and proper investment in front-line human resources. AI is not a panacea, but when used responsibly, it can be a valuable tool in a comprehensive student-safety ecosystem.

Ultimately, protecting students demands both smart tech and wise stewardship.


References:

  1. Washington Post: “Can AI identify safety threats in schools? One district wants to try.”

    The Washington Post, June 17 2025. https://www.washingtonpost.com/education/2025/06/17/loudoun-schools-ai-camera-monitoring/  

  2. AP News: “Schools are buying AI software to detect guns. Some experts say it’s a mistake.” AP News, Aug 21 2024.

    https://apnews.com/article/school-safety-guns-artificial-intelligence-legislation-336b9ec34df538d93d4f06e34b11c9b1  

  3. StateScoop: “Schools are buying AI software to detect guns. Some experts say it’s a mistake.” StateScoop, Aug 21 2024.

    https://statescoop.com/zeroeyes-school-safety-ai-firearm-detection-2024/  

  4. Fox News: “AI weapon detection company seeks to prevent school, other shootings: ‘a proactive measure’.” Fox News, Mar 7 2024.

    https://www.foxnews.com/us/ai-weapon-detection-company-seeks-prevent-school-other-shootings-a-proactive-measure  

  5. Washington Post: “Kids keep bringing guns to school. Can AI weapons detectors help?”

    The Washington Post, June 11 2023.

    https://www.washingtonpost.com/education/2023/06/11/school-weapons-detectors-ai-virginia/  

  6. People.com: “AI Weapon Detection System Didn’t Detect Gun Used in Nashville School Shooting.” People, Feb 2025.

    https://people.com/ai-weapon-detection-system-nashville-school-shooting-8780320  

  7. AP News: “Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks.” AP News, Mar 12 2025.

    https://apnews.com/article/25a3946727397951fd42324139aaf70f  

  8. Wikipedia: “Gaggle (software).”

    https://en.wikipedia.org/wiki/Gaggle_(software)  

  9. Thakur, A., Shrivastav, A., Sharma, R., et al. “Real‑Time Weapon Detection Using YOLOv8 for Enhanced Safety.” arXiv, Oct 23 2024. https://arxiv.org/abs/2410.19862

  10. Ormerod, C. M., Patel, M., Wang, H. “Using Language Models to Detect Alarming Student Responses.” arXiv, May 12 2023. https://arxiv.org/abs/2305.07709

  11. Zhou, H., Jiang, F., Lu, H. “Student Dangerous Behavior Detection in School.” arXiv, Feb 19 2022. https://arxiv.org/abs/2202.09550

 


Minaz Jivraj MSc., C.P.P., C.F.E., C.F.E.I., C.C.F.I.-C., I.C.P.S., C.C.T.P.

Disclaimer: The information provided in this blog/article is for general informational purposes only and reflects the personal opinions of the author. It is not intended as legal advice and should not be relied upon as such. While every effort has been made to ensure the accuracy of the content, the author makes no representations or warranties about its completeness or suitability for any particular purpose. Readers are encouraged to seek professional legal advice specific to their situation.

 
 

© Copyright 2025 MRJ Security Consultants - All Rights Reserved