Firms need to implement hybrid liveness detection to catch out ever-improving deepfake technology


Businesses should prioritize a multi-layered approach to identity verification in order to safeguard against increasingly sophisticated attacks using AI-generated content, according to Gartner.

Identity verification systems use unique identifiers such as a user’s fingerprint, voice, or face, cross-referencing this information against a verified data source, such as a driver’s license or passport, submitted by the user when opening the account.
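For illustration, the matching step at the heart of such a system can be reduced to comparing two face embeddings against a threshold. In the minimal Python sketch below, the embedding function, similarity measure, function names, and threshold are all hypothetical stand-ins, not any vendor's actual pipeline:

```python
# A minimal sketch of the cross-referencing step: a live selfie is compared
# against the photo on the submitted document. The embedding function is a
# placeholder; a production system would run a trained face-recognition model.
import numpy as np

def face_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: real systems use a face-recognition network."""
    return image.astype(float).flatten()[:128]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative threshold; real deployments tune this to their risk appetite.
MATCH_THRESHOLD = 0.85

def verify_identity(selfie: np.ndarray, document_photo: np.ndarray) -> bool:
    score = cosine_similarity(face_embedding(selfie),
                              face_embedding(document_photo))
    return score >= MATCH_THRESHOLD
```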

Advances in AI-powered image generation have given hackers the ability to create increasingly lifelike synthetic reproductions of a victim’s likeness that can be used to trick facial recognition systems.

By 2026, AI-assisted deepfake attacks on biometric recognition will lead 30% of enterprises to no longer consider their identity verification solutions reliable when used in isolation, according to Gartner.

Akif Khan, VP analyst at Gartner specializing in identity proofing, said the proliferation of increasingly sophisticated AI image generation means many organizations will lose confidence in their systems’ ability to verify a user’s identity.

“As a result, organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” he said.

Deceiving a biometric authentication system by placing a spoof of the victim in front of the device’s camera sensor is known as a presentation attack.

Threat actors can also bypass identity verification portals by employing injection attacks, where hackers inject ready-made augmented content into the security systems directly.

Attackers can create AI-generated imagery, such as a deepfake video, and feed this content through a virtual camera driver that presents it as a genuine camera feed.
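One simple countermeasure on the defender's side is to check whether a capture device identifies itself as a known virtual camera driver. The sketch below assumes a Linux host exposing V4L2 device metadata under /sys, and the driver-name list is illustrative; real injection defenses go considerably further, for example attesting to the camera hardware itself, since a renamed driver would evade a name check:

```python
# A hedged sketch of one injection-attack check: flag capture devices whose
# reported name matches a known virtual camera driver. Assumes Linux/V4L2;
# the marker list is illustrative and trivially evaded by renaming.
from pathlib import Path

KNOWN_VIRTUAL_CAMERAS = ("obs virtual camera", "v4l2loopback",
                         "dummy video device")

def suspicious_capture_devices() -> list[str]:
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        device_name = name_file.read_text().strip().lower()
        if any(marker in device_name for marker in KNOWN_VIRTUAL_CAMERAS):
            flagged.append(device_name)
    return flagged

if __name__ == "__main__":
    print(suspicious_capture_devices() or "no known virtual cameras found")
```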

Research by Gartner found injection attacks surged 200% in 2023, but they remain less common than the technically simpler presentation attack.

Bolstering spoof protections with a hybrid approach to liveness detection

Presentation attacks are the most common attack vector used to compromise biometric systems, according to Gartner, with businesses employing presentation attack detection (PAD) mechanisms to determine a user’s ‘liveness’ – whether they are a real, live person in front of the camera – and root out impersonators. 

Methods of assessing liveness can be divided into passive and active liveness detection, where active detection requires the user to perform an action like turning their head, smiling, or blinking. 
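In rough outline, an active liveness flow issues a randomly chosen challenge and checks that the requested action is performed within a short window. In the toy sketch below, the `detect_action` classifier callback is an assumed placeholder, not a real library call:

```python
# A toy sketch of active liveness detection: issue a random challenge and
# poll an (assumed) frame-level action classifier until it is satisfied or
# the time window expires.
import random
import time

CHALLENGES = ("turn_head_left", "turn_head_right", "smile", "blink")

def run_active_liveness(detect_action, timeout_s: float = 5.0) -> bool:
    challenge = random.choice(CHALLENGES)  # unpredictable to the client
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if detect_action(challenge):  # placeholder classifier callback
            return True
        time.sleep(0.1)  # avoid a busy loop between frames
    return False
```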

This is intended to expose less sophisticated spoofing attempts, but Khan told ITPro the growing maturity of AI technologies is enabling hackers to deceive some of the more robust forms of active liveness detection. 

Once threat actors know which specific actions a verification system uses to determine liveness and when these actions need to occur in the verification process, Khan explained, they can tailor their spoof to perform specific tasks convincingly. 


To get around this, Khan advocated combining this approach with passive liveness detection, which requires no user interaction and can run in the background of the facial verification process. 

This way, attackers are unaware of which liveness criteria the system is checking for, and are therefore less able to tailor their spoofs to evade detection.
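A hybrid check might simply require both signals to pass, as in this sketch, where the passive score and its 0.7 threshold are illustrative assumptions rather than a Gartner recommendation. Requiring both means an attacker who scripts the known active challenge still has to defeat analysis they cannot see:

```python
# A hedged sketch of the hybrid approach: the visible active challenge and
# the invisible passive analysis must both pass. The threshold is illustrative.
def hybrid_liveness(active_passed: bool, passive_score: float) -> bool:
    # Scripting the active challenge is not enough: the passive texture and
    # motion analysis runs in the background with no cues to tailor against.
    return active_passed and passive_score >= 0.7
```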

Asked whether these additional security layers might wear on users trying to access their accounts, much as MFA fatigue does, Khan responded that the extra measures would be invisible and should not complicate the verification process for the user.

“No, since most of the additional layers that we recommend adding to the identity verification process are in fact passive and invisible to the user. Examples include device profiling, location intelligence, and behavioral analysis,” Khan explained.

“The aim here is to add fraud detection signals that could alert you that something is wrong, even if you miss actually detecting the deepfake.”
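In practice, such layering often takes the form of a weighted risk score over the kinds of signals Khan lists. In the sketch below, the signal names, weights, and threshold are all illustrative assumptions, not Gartner guidance:

```python
# A sketch of layered fraud signals: device profiling, location intelligence,
# and behavioral analysis feed a simple weighted risk score. Names, weights,
# and the threshold are illustrative assumptions.
RISK_WEIGHTS = {
    "virtual_camera_detected": 0.4,  # device profiling
    "location_mismatch": 0.3,        # location intelligence
    "automation_like_input": 0.3,    # behavioral analysis
}

def risk_score(signals: dict[str, bool]) -> float:
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def flag_for_review(signals: dict[str, bool], threshold: float = 0.5) -> bool:
    # A high aggregate score can trigger manual review even when the
    # deepfake itself evades the liveness check.
    return risk_score(signals) >= threshold
```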

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.