April 2, 2025
Jazmin Mignaquy
ANZ Marketing Director
In a time of increasing digital complexity, schools are being asked to do more than ever to keep students safe. But with so much activity happening online—and often silently—it’s no longer enough to rely on traditional tools like filtering and firewalls. What’s needed now is a smarter, more human approach to digital safety: one that focuses on early signals, context, and real-time visibility.
That’s why we created this three-part series to help school leaders, IT teams and wellbeing staff better understand:
Part 2 The Evolution of School Monitoring: The Critical Role of Human Oversight
Part 3 From Hesitation to Confidence: What Pastoral Alerting Can Do During School Hours
For years, schools have implemented firewalls and filtering systems to manage internet access and ensure students stay within safe online boundaries. These tools were built to block restricted content, log internet activity, and provide visibility into search behaviours. Over time, however, many schools have started using these tools to flag student behaviour, assuming that keyword-based alerts from filters can serve as indicators of student safety concerns.
But there’s a major flaw in this approach: filter logs and firewall reports were never designed to provide real safeguarding alerts. Schools that rely on them to detect wellbeing risks may end up drowning in false-positive noise, missing the very students who need urgent intervention.
Filtering and firewall systems fall under IT security—not student safeguarding. They do a great job at restricting access to inappropriate content, but they lack the sophistication needed to detect real risks to student wellbeing. This leads to several issues:
A search containing a flagged keyword doesn’t necessarily mean a student is at risk. Without knowing the intent behind it, these flags generate false positives that waste staff time on investigation.
Schools receive an overwhelming number of red flags. IT teams and pastoral care staff spend hours investigating dead ends, meaning real concerns can be missed.
Many schools have used firewall logs and filter reports as alerts when they were never designed for that. These tools track behaviour, but they don’t interpret risk.
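The false-positive problem is easy to see in a toy example. The sketch below is purely illustrative: the keyword list and searches are hypothetical, and this is not any real product's logic. It flags searches the way a naive filter log would, matching words with no sense of context:

```python
# Illustrative only: why raw keyword matching over filter logs produces
# false positives. Keywords and searches are hypothetical examples.

FLAGGED_KEYWORDS = {"kill", "overdose", "cutting"}

def keyword_flag(search: str) -> bool:
    """Naive filter-log-style check: flag if any keyword appears."""
    words = search.lower().split()
    return any(kw in words for kw in FLAGGED_KEYWORDS)

searches = [
    "how to kill time on a long bus ride",  # benign, but flagged anyway
    "cutting board care tips",              # benign, but flagged anyway
    "history homework help",                # benign, not flagged
]

flags = [s for s in searches if keyword_flag(s)]
# Two of the three benign searches are flagged. Every such alert still
# needs a human to read the surrounding context before acting on it.
```

Every flag this kind of check raises looks identical, whether the student typed something alarming or was searching for a chopping board, which is exactly why filter logs cannot stand in for safeguarding alerts.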
To truly support students, schools need to recognise the fundamental difference between traditional filtering solutions and dedicated digital monitoring for student wellbeing.
| Traditional Filtering & Firewalls | Wellbeing & Safeguarding Monitoring |
| --- | --- |
| Blocks access to restricted content | Detects wellbeing risks (e.g., self-harm, cyberbullying) |
| Flags web activity based on keywords | Uses AI and human moderation to assess intent and urgency |
| Provides logs and reports for IT teams | Provides real-time alerts for pastoral care staff |
| Designed for network security | Designed for proactive student intervention |
When it comes to student wellbeing, context is everything. A filter system may flag a single search, but a wellbeing concern is rarely an isolated event. Students who are struggling often leave digital breadcrumbs—patterns of behaviour that, when assessed together, paint a picture of a student in need of support.
Filtering systems block content and log activity, but they cannot interpret patterns of behaviour or the build-up of warning signs, such as shifts in language, repeated exposure to distressing content, or changes in online engagement. Monitoring alerts therefore require a holistic assessment that prioritises student wellbeing over technical policy enforcement.
Wellbeing concerns are rarely isolated incidents triggered by a single keyword; they typically involve a series of interactions, searches, and behaviours within the digital space. While firewalls cannot detect these patterns, monitoring systems designed for safeguarding can. AI can assist in this assessment to a point, but the most effective systems rely heavily on critical human moderation.
Schools are rethinking why and how they track student behaviour online. Instead of just flagging restricted activity (e.g. gaming, VPN use), the focus is shifting to identifying digital behaviours that signal a wellbeing concern.
That means a truly effective digital monitoring system needs more than just AI: it requires human oversight to distinguish between concerning patterns and harmless digital activity.
Interpreting Intent
AI can detect flagged keywords, but only a human can assess tone, context, and behavioural patterns over time. A single flagged term doesn’t tell the full story, but repeated distressing searches combined with changes in online behaviour signal deeper concerns.
Reducing False Positives
Automated systems often flag benign searches, creating noise and unnecessary escalations. Human moderators help filter out non-issues by providing schools with contextual screen captures and messages, ensuring staff only focus on real concerns.
Providing Meaningful Escalation
Some digital behaviours require immediate action, while others may indicate a long-term wellbeing issue that needs ongoing support. Human moderators can categorise alerts, helping schools prioritise and act accordingly.
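The escalation idea above can be sketched in a few lines. This is a simplified illustration under assumed thresholds; the tier names, seven-day window, and cut-offs are hypothetical, not Linewize's actual categories, and in practice a human moderator makes the final call:

```python
# Illustrative only: escalation driven by a pattern of signals over time,
# not a single keyword hit. Thresholds and tier names are hypothetical.

from datetime import datetime, timedelta

def escalation_tier(flag_times: list[datetime]) -> str:
    """Categorise an alert by how many flags occurred in the last 7 days."""
    cutoff = datetime.now() - timedelta(days=7)
    recent = [t for t in flag_times if t >= cutoff]
    if len(recent) >= 5:
        return "urgent"   # repeated distressing activity: act now
    if len(recent) >= 2:
        return "monitor"  # possible long-term wellbeing concern
    return "review"       # single hit: quite possibly a false positive
```

A single flagged search lands in "review", while the same keyword appearing repeatedly across a week surfaces as "urgent", which mirrors how moderators weigh accumulated context rather than isolated events.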
Schools relying on filter logs or basic AI-driven keyword detection will inevitably miss students who need real support. A monitoring system designed for safeguarding ensures that every alert is validated, meaningful, and actionable.
Filter reporting and logs are NOT alerts. They provide data, but they don’t provide meaningful intervention points. If a school wants to detect risks and escalate genuine concerns, they need a dedicated monitoring solution—something built for wellbeing and safeguarding, not just IT security.
If a school relies on filter logs to spot students who need immediate support, it will sooner or later miss a child who does.
The shift from IT-driven filtering to wellbeing-focused monitoring isn't just a philosophical one—it's a practical necessity. With schools seeing rising levels of online risks and mental health challenges, relying on basic keyword triggers simply isn't good enough.
These aren’t theoretical risks; they’re real, ongoing challenges that schools face daily. And yet many schools are still relying on tools that were designed for bandwidth management, not student wellbeing.
Linewize Monitor is built for student safety. Unlike traditional filtering, it:
- Understands context—looking beyond keywords to assess intent
- Filters out the noise—reducing false positives
- Provides real alerts—so wellbeing teams get notified when action is needed, not overwhelmed with unnecessary flags
- Leverages expert human moderators—to ensure every serious case is assessed with insight and empathy before escalation
By making the shift from IT monitoring to meaningful student safeguarding, schools can ensure critical issues aren’t lost in a sea of irrelevant flags. Schools that want real alerts for real concerns need a tool designed for student wellbeing—not just internet filtering.
If, throughout this series, you’ve found yourself with questions or would like to explore how these ideas apply to your school, we’d love to hear from you.
Linewize can work with you to explore what this looks like in practice. Get in touch to start the conversation and discuss whether this technology is right for your school setting.