What is e-safety?
We explain what e-safety is and how it can be managed in schools and beyond
Whether you like it or not, technology has become an integral part of our daily lives, especially when it comes to children. In marked contrast to older generations who may not understand how touchscreens work, kids today presume that everything works with a swipe.
This can be challenging for parents, particularly when it comes to keeping their children safe on the web. Even though lawmakers and regulators may think they understand what kids are doing online and on social media, the ever-changing nature of technology constantly presents new dangers.
E-safety is the concept of protecting users as they navigate the internet, especially those most vulnerable. It tries to protect users from potentially harmful content that can be found on apps or websites, or the effects of such content, such as grooming, pornography, or cyber bullying.
Classifying areas of e-safety risk
The three important areas of risk when it comes to e-safety are content, contact, and conduct.
Content concerns itself with the material being accessed online, and whether it is harmful, illegal, and/or inappropriate. This can come in a variety of formats, including text, sound, images, or video.
Contact relates to the sort of individuals that children interact with online. This includes how they are being contacted and what is exchanged. This leads directly into conduct, which is all about the nature of these exchanges, and whether they are potentially exploitative or harmful.
A lot of a child's internet time will be conducted within a school environment and it's a key place where e-safety will be implemented. The NSPCC has a number of guidelines for schools and educators to follow when protecting pupils online.
"A whole school approach to e-safety can help involve staff, governors, parents and pupils themselves in keeping children and young people safe online," it says.
Its resources help educational establishments put in place the e-safety policies, procedures, and IT infrastructure they need, as well as helping schools and colleges develop "a trained workforce who are confident in online safety, identifying and responding to concerns".
How widespread is the issue?
Research conducted by Internet Matters five years ago revealed that more and more children aged six to 16 were going online without their parents' oversight. Most parents said they didn't always monitor how their children were using the web, despite anxieties about unregulated screen time for youngsters and its unintended consequences.
The availability of hardware, such as laptops and tablets, and the increased role these play in essential activities such as work and study, is one key reason why children are browsing the internet more and more without supervision. Smartphones, many of which rival desktop machines in power, are near-universal, including among teenagers, and youngsters can carry these devices wherever they go.
Another reason is the growing acceptance of children owning their own electronic devices, something that perhaps wouldn't have been possible as recently as 30 years ago. It's now much more common for younger people to take an independent approach to their online activity. Sadly, however, large swathes of the internet are categorically not child-friendly and leave children exposed to risks and dangerous content.
A further study in 2017 by the UK Safer Internet Centre revealed that 70% of those aged eight to 17 had witnessed graphic content that was not suitable for their age group. Similarly, research by the American Psychological Association found that children are first exposed to online pornography at roughly the age of 13. With regards to cyber bullying, an ONS survey revealed that one in five children aged ten to 15 had experienced at least one form of online bullying in the year ending March 2020, which equates to roughly 764,000 children. Around a quarter of victims, however, kept this to themselves.
In 2017, in an effort to stop indecent content from reaching those it shouldn't, the government proposed a mandatory age verification system for viewing explicit materials. Plans set out under Part 3 of the Digital Economy Act 2017 involved users having to register with an age-restricted website using a traditional form of ID, such as a driver's licence, in order to proceed.
However, the plans were delayed in 2018 and ultimately scrapped a year later, putting the responsibility back in the hands of regulators and away from lawmakers. Critics claimed the divisive proposals could be easily bypassed and would be difficult to apply to social media sites. The NSPCC nonetheless described the scrapping as "disappointing".
Teaching e-safety in schools
Online safety was introduced into all key stages of the curriculum in 2012, with schools required to teach children how to stay safe online from the age of five. The various levels of guidance are aimed at different age groups, ensuring all ages understand the risks and are able to alert an adult should they be concerned about someone's behaviour online or feel they are being targeted by cyber bullies.
According to Ofsted, schools are obliged to demonstrate that they are protecting both students and staff from harmful or illegal content as well as educating them on how technology should be used. Schools should also have the means to act in an appropriate manner when an issue is flagged to them, which includes reporting the incident to the governing body, parents and, if necessary, the authorities.
Whose responsibility is e-safety?
Although Ofsted lists e-safety as the responsibility of schools, parents and carers have just as significant a role to play in educating young people on the dangers of the web. Family members and other adults should pay attention to what children are up to online, and they can do so by implementing processes to check whether the internet and connected devices are being used in a safe and secure manner. Just like their children's teachers, parents and carers should also dedicate time to educating themselves about e-safety and how to deal with issues when they arise.
Businesses should also take responsibility - and action - to protect their younger users against dangerous online activity. They can do that by making it easier for users to alert the authorities of any illegal or harmful content, as well as monitoring their services to minimise the risk to children and young people when online. This could include enforcing age restrictions for certain services, ensuring services have the tools in place to report inappropriate content and having clear communication channels with authorities in case it is necessary to file a report.
However, not every step taken by tech companies to ensure the safety of their youngest users has been met with public approval. In August 2021, Apple was criticised for its plans to scan photos on US iPhones for evidence of CSAM as they are uploaded to the iCloud storage services, as well as introduce an iMessage surveillance technology that aimed to flag any explicit content received by a child to their parent or guardian. This raised concerns over user privacy, with more than 90 civil society organisations, including the UK’s Big Brother Watch and Liberty, urging CEO Tim Cook to abandon the policies. According to privacy activists, the algorithm used to scan for images had the potential to mistakenly flag CSAM content, or even be abused by authoritarian governments. Meanwhile, the iMessage surveillance tech could be exploited by “abusive adults” and lead to LGBTQ children being outed against their will. Weeks later, Apple announced the decision to delay the rollout of the technology, citing “feedback from customers, advocacy groups, researchers, and others”.
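Scanning systems of the kind Apple proposed work by comparing hashes of a user's images against a database of hashes of known illegal material, so the provider never needs a copy of the prohibited images themselves. The sketch below is a heavily simplified illustration of that idea: it uses an exact SHA-256 digest, whereas Apple's proposed NeuralHash was a perceptual hash designed to survive resizing and re-encoding; the blocklist contents here are entirely made up for demonstration.

```python
import hashlib

# Hypothetical blocklist of digests of known prohibited images.
# Real systems use *perceptual* hashes that tolerate re-encoding;
# SHA-256 only matches byte-identical files, which keeps this simple.
KNOWN_HASHES = {
    hashlib.sha256(b"example-prohibited-image-bytes").hexdigest(),
}

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_content(b"example-prohibited-image-bytes"))  # True
print(matches_known_content(b"a family holiday photo"))          # False
```

The privacy critique summarised above follows directly from this design: whoever controls the hash database controls what gets flagged, and perceptual hashes can produce false matches in a way exact digests cannot.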
At the same time, Facebook halted its work on a version of its Instagram social network for children, citing a need for more collaboration with parents and policymakers, after leaked internal research documents showed that it was aware of Instagram's negative effect on teen girls' mental health. However, the tech giant added the option of end-to-end encryption (E2EE) for voice and video calls on its Messenger communications platform, defying warnings from the UK government that the technology would hinder law enforcement efforts to track down and arrest child abusers.
E-safety during lockdown
The coronavirus lockdown has seen a significant rise in online sexual abuse against children, who are now required to stay at home and therefore beyond the safety of security filters used by schools.
In fact, the spike in cases was so significant that, in April, the National Crime Agency (NCA) issued a statement advising parents to carefully monitor what their children are doing online when they use the internet to access school resources.
Although Ofsted places the responsibility for e-safety on schools, remote learning means that parents and carers have a greater role to play in the online safety of their children. A spokesperson for the department told IT Pro that "how Ofsted is overseeing remote learning, including matters of e-safety, is currently under review".
In order to raise awareness of the threat, the NCA has launched 15-minute activities for parents and carers to do with their children, providing e-safety resources and exercises for families with children of all age groups.