How tech traps domestic abuse victims


Smart homes, smartphones and digital assistants are supposed to make life better, but they have also had the troubling side effect of enabling domestic abuse.

Increasingly, charities and academics class “tech abuse” as a form of domestic abuse, in which the abuser takes control of internet and email accounts, installs listening and video equipment to isolate their victims, and deploys tracking software on mobile phones.

Experts we spoke to were reluctant to discuss details of tactics they have come across, for fear of giving abusers ideas, but they did share outlines of what victims face.

“The thing to remember is that abusers, stalkers and other bad actors are going to use any tool they can to control and abuse their victim,” said Chris Cox, executive director of Operation Safe Escape, a non-profit that helps with counter-surveillance to protect victims.

Restricting or monitoring communications, he said, leaves abuse victims further isolated and at risk of worse if they’re caught talking with friends or planning to escape the relationship.

“Isolation is a cornerstone of abuse,” said Cox. “It often starts with physical and societal isolation (that is, cutting off the person from their support system), which can leave a person only with technology to reach the outside world. When that’s taken away, it can make it much harder for them to escape or find comfort.”

The longer tech abuse goes on – especially when the perpetrator has local access – the more difficult it is to make a clean break, meaning homes must be swept to prevent the abuse continuing even after an abuser or victim has left.

“We’ve seen cases where the abuser has left devices on the network specifically to enable access later on,” said Cox. “Sometimes, the abuser is more technically adept and spends time compromising devices and networks so they can continue the abuse after they are cut off. And some Internet of Things (IoT) devices can give information like whether the home is occupied, which can put the person at physical risk.”

Researchers from University College London (UCL) have highlighted multiple scenarios where tech can be used for abuse, from wearable devices, which allow perpetrators to track and monitor movements, to security cameras and audio recording devices.

According to Cox, even after he or fellow volunteers have gone into homes to secure networks, or once a survivor has escaped, the abuser will often still try to use the tools they have left behind to re-infiltrate networks – something that can be their undoing.

“They rely on the idea that no one will be looking too closely – and especially that someone qualified won’t be helping,” Cox said. “We’ve seen so many different ways they try to re-attack their victims, so they get discovered fairly quickly. As an additional bonus, the more sophisticated attacks often end up falling into ‘very illegal’ territory, which helps get law enforcement engaged.”

The dangers of sharing

Of course, not all tech abuse stems from complex techniques; much of it instead relies on the way that many newer devices are set up as communal rather than private tools. “The common abuser-perpetrator is more like a UI-bound adversary,” said Leonie Tanczer, a UCL lecturer in security and emerging technologies. “So a perpetrator doesn’t go to the lengths of writing malware code and installing it – instead, they are using the features that are already available and easily accessible for them to abuse.

“That means with IoT technologies – for example, Amazon Echo – that an abuser can simply purchase them and set them up in the house. As [the] authenticated account holder, they can then use these tools to monitor and control. That’s a core message: often it is established features that in an abusive context are misappropriated.”

Even where manufacturers design IoT devices with privacy in mind – for example, by allowing separate accounts and by implementing robust privacy settings – there’s still a risk that a victim can’t take advantage of them. “Conventionally, with device administration, there’s an owner and account holder,” said Tanczer. “I might be the person who has purchased the device, but I’ve placed it in the household with my partner, who may not be agreeing with that.

“If an IoT manufacturer has the best intentions and designs a super-secure device, the problem still could be that the perpetrator would deliberately prevent a person from making use of these features.”

Stalkerware on the rise

Besides weaknesses in IoT, another major threat is “stalkerware” – an insidious branch of “security” software favoured by control freaks, often sold as legitimate software to “monitor children”, which effectively gives an attacker access to a victim’s location, messages and call logs.

According to figures from security company Kaspersky, between January and August 2019 there were 518,223 cases globally where the company’s protection technologies either registered the presence of stalkerware on user devices or detected an attempt to install it – a 373% increase on the same period in 2018.

“Most people don’t protect mobile devices and so probably that is just the tip of the iceberg,” said David Emms, principal security researcher at Kaspersky.

Using such software to access someone’s handset without their permission is illegal, experts from law firm Decoded Legal told us, but it remains easy to find online – which raises the question of why security companies are unable to kill such tools at source.

“The whole category is tricky because we can’t label it as malware and report it as we would a backdoor trojan or similar, because in some jurisdictions it’s legal so it straddles a grey area,” said Emms. “Some of them masquerade as parental control software and call themselves legal that way.”

Although the company now flags such software as a privacy threat, it’s unclear what benefit this will have. If, for example, a perpetrator is installing the software, they will override the warning, and if a warning appears later and the victim removes the tool, the consequences could be frightening.

“If someone sees the notification and then removes that software then the person behind it is going to be notified and that puts someone in harm’s way,” Emms said. “Our advice would probably be that if you get a notification of this sort and you believe you’re the victim of abuse the right thing to do is reach out to an organisation for advice – maybe get the police involved before you think about what to do about the phone.

“It may well be that you make the decision to buy a burner phone that isn’t being monitored so that if you need to reach out to a friend you use that phone.”

Finding the right sort of help can be a daunting process for victims. Charities and police try their best to offer assistance, but if people don’t know who to turn to then they may be tempted by free tools. This should be avoided.

“There’s an important void to be filled when it comes to the provision of support,” said UCL’s Tanczer. “But this gap should be filled by specialist services that know how to do thorough risk assessments, safety plans and are trained in trauma-informed work with victims and survivors, [rather] than any corporation that believes their new feature or app could help save the world.”

For confidential advice, call the National Domestic Abuse Helpline on 0808 2000 247 or visit nationaldahelpline.org.uk