
Thought Leaders

Is the K-12 Digital Environment Creating the Next Generation of Hackers?


Schools have become a digital hub for students, with edtech platforms helping them learn. A whopping $165 billion has been invested in the market as of 2026, with benefits that include tailoring content to individual learners’ needs, offering interactive and engaging materials, and using analytics to improve learning experiences. But every new device connected to the network brings additional responsibility.

AI is democratizing access to content creation tools that were once available only to trained developers and media professionals. Children’s curiosity about these tools is a positive shift for those eager to explore early interest in popular careers such as filmmaking or designing marketing adverts, but without the proper guardrails and training, the same tools can be dangerous.

Meanwhile, schools are still defining AI security policies. Ohio is among the first states to require K-12 schools to adopt formal AI policies by mid-2026, banning the use of AI for bullying, and stating that districts should set procedures to investigate suspected misuse, though the policy does not prescribe specific methods. 

The technology is advancing faster than regulation can keep pace. Schools must take responsibility for AI into their own hands to keep safe learning environments from becoming a training ground for future hackers.

Why schools must wake up and put a stop to AI deepfakes

We love to see our children excited to learn and experiment. But without proper guardrails, that curiosity can have costly consequences. From data breaches to cyberbullying, schools must understand and manage the latest AI tools to ensure a safe learning environment.

Almost 68% of children younger than two years old have been found to spend around two hours of screen time each day. These children want to play with the latest games and tools available, but they don’t understand the potential for harm. They can easily take and upload photos of themselves to create game avatars, and one picture is all it takes for content to be misused.

Two boys were charged with generating nude images of girls in their school, leading to a fight on the playground and one of the victims being expelled. AI has made it easier for anyone to alter or create such images with little to no training, and the fallout is spreading into psychological, legal, and digital safety risks.

According to one survey, 91% of districts have been misled by deepfakes. These images can cause extreme embarrassment and be used as a tool for bullying, which can lead to children missing school and not wanting to return. If images are not taken down responsibly, there is a possibility they will be sourced by students at future schools or by an employer when running a background check.

Schools are testing grounds for next-gen hackers

Today’s school networks are not only targets for existing hackers, but also testing grounds for the next generation of cybercriminals. Reports from 2025 found that over half of school insider cyberattacks were caused by students, where, in several cases, students guessed weak passwords or found them jotted down on bits of paper. 

Reasons cited for why these children decide to hack include dares, notoriety, financial gain, revenge, and rivalries. This indicates that training on both password strength and the consequences of digital misuse is a critical lesson for schools.
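To see why the weak passwords described above fall so easily to guessing, consider a minimal sketch of the kind of checks a password-strength lesson might teach. The word list and thresholds here are illustrative assumptions, not taken from any real school policy or product:

```python
# Hypothetical illustration: simple reasons a password is easy to guess.
# The common-password list and the 12-character threshold are assumptions
# chosen for the example, not a recommendation from any specific standard.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "school123"}

def password_weaknesses(password: str) -> list[str]:
    """Return a list of reasons the given password is easy to guess."""
    issues = []
    if password.lower() in COMMON_PASSWORDS:
        issues.append("appears on a common-password list")
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if password.isalpha() or password.isdigit():
        issues.append("uses only one character class")
    return issues

# A classic weak password fails every check; a long mixed passphrase passes.
print(password_weaknesses("letmein"))
print(password_weaknesses("Tr4il-mix_on-Tuesdays!"))
```

Even a toy checker like this makes the point concrete for students: a password guessed in a playground dare usually fails all three tests at once.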

When systems are easy to break, and nobody is seemingly watching, what starts as account hijacking to post embarrassing messages on a peer’s social media account can soon escalate into much more harmful threats. Open networks with insufficient identity controls and minimal cyber ethics education become a systemic problem that schools and parents must work together to fix.

Banning social media versus digital literacy

While some schools choose to ban social-media accounts for children under 16, others prioritize AI literacy. The UN has detailed guidelines addressed first to parents, then to teachers, regulators, and industry and the private sector, to help strengthen AI governance frameworks and uphold children’s rights. Recommendations include ensuring AI systems are transparent, accountable, and embedded with child-centered data protection measures.

Schools are responding by expanding the definition of online harms to recognize risks such as deepfakes alongside traditional online safety concerns, and building digital literacy into the curriculum. 

In New Jersey, for example, K-8 students in one district receive lessons on what AI is, how it’s trained, the ethical questions it raises, and how to use it responsibly, alongside classes covering coding, digital citizenship, and keyboarding. Since AI is already changing how people work and seeping into everyday life, and children are accessing technology at ever earlier ages, it’s essential that students build AI literacy so they’re prepared for it.

What effective protection looks like today

When students know systems are monitored, abuse drops. In 2020, a study comparing online courses before and after the introduction of webcam proctoring found that cheating was significantly reduced once students knew they were being monitored. Teachers who have visibility and control over students’ digital activity can provide a similar experience, helping to curb undesirable online behavior during school hours. 

The EdTech and Smart Classroom market is projected to reach approximately $498.5 billion by 2032, growing at a 15% CAGR. Classroom management tools give teachers a master-screen view of student device activity during lessons, helping steer students back to class material when they veer off track. Most tools offer features such as monitoring live browser activity, automatically blocking or manually closing distracting or unauthorized tabs, and customizable access controls that grant visibility only on a need-to-know basis.

It’s important that students are made aware of these tools and the reasons for them, to maintain a trusting relationship with teachers. The tools should be introduced alongside training on safe digital practices so students understand why using technology responsibly matters and why teachers maintain oversight to guide them on the right path.

Like any introduction of new products and technologies, classroom monitoring must be paired with carefully thought-out best practices and ongoing guidance for users. In schools, teachers must be attentive to the digital materials students engage with and ensure issues of digital misuse are addressed early and sensitively.

Charlie Sander, CEO of ManagedMethods, a company that provides cybersecurity and student safety solutions for K-12 schools.