
Why you don't need to worry about generative AI deepfakes with Kairos

Published on February 13, 2024 by Al Esmail

Using GenAI deepfakes to spoof digital onboarding

Threat actors use generative AI deepfakes to create or manipulate facial images for identity fraud. They either generate new synthetic faces from scratch or alter existing images so they match a victim's identity and pass as a legitimate user. Source material is often pulled from social media or public records, or the face is fabricated entirely with deepfake technology. Attackers then present these synthetic images during the onboarding process, passing them off as real faces to deceive verification technologies.

The vulnerability threat actors exploit lies in the digital onboarding provider's ability to differentiate between live individuals and synthetic images. This is particularly concerning for systems that only perform static checks, such as verifying the presence of facial features or matching a photo with an ID document. Despite advances in liveness detection technologies designed to confirm whether a subject is physically present, the sophistication of deepfakes challenges these measures. To address the problem, specific image capture security protocols and a robust liveness algorithm should be used together to effectively block generative AI spoofing attempts.

The Importance of Liveness Detection in Combating Deepfakes

Liveness detection is a crucial security feature in digital onboarding processes, designed to thwart identity fraud by distinguishing between a real person and a fake representation (such as photos, videos, or masks) during verification processes. Its primary goal is to ensure that the individual attempting to gain access or complete an onboarding process is physically present at the time of verification and not a spoof or a deepfake trying to mimic a real user. This technology plays a significant role in protecting against various forms of digital fraud, particularly in sectors like banking, telecommunications, and any service requiring secure user authentication. Here are some key aspects of liveness detection:

Active and Passive Liveness Detection: Liveness detection methods are generally categorized into active and passive. Active liveness detection requires the user to perform specific actions (e.g., blinking, smiling, or head movements) in response to prompts. In contrast, passive liveness detection works in the background, analyzing the user's behavior and the video's characteristics without requiring any user interaction, making it more seamless and user-friendly.
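To make the active variant concrete, here is a minimal sketch of a challenge-response flow. The prompt names and the idea of checking that the prompted actions appear in order are illustrative assumptions, not a description of any particular vendor's protocol; in a real system the "observed" actions would come from a client-side video analysis model.

```python
import random

# Hypothetical active-liveness challenge flow: the server issues a random
# ordered set of prompts and verifies the user's detected actions follow
# that order, which a pre-recorded video is unlikely to satisfy.
CHALLENGES = ["blink", "smile", "turn_head_left", "turn_head_right"]

def issue_challenges(n=3, rng=random):
    """Pick a random ordered subset of prompts for this session."""
    return rng.sample(CHALLENGES, n)

def verify_responses(issued, observed):
    """Pass only if every prompted action was observed, in the order issued.

    `observed` is the list of actions the client-side detector reported,
    possibly with extra actions interleaved.
    """
    it = iter(observed)
    return all(action in it for action in issued)
```

Because the prompt sequence is random per session, an attacker replaying a fixed deepfake video cannot anticipate which actions to perform or in what order.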

Techniques Used: Liveness detection employs a variety of techniques, including analyzing texture, depth, motion, eye blinking, facial movements, and response to light changes. Advanced systems might use 3D modeling to assess depth or employ algorithms that detect minor involuntary movements (such as natural eye blinking or slight facial twitches), which are difficult to replicate accurately in synthetic representations.
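As one example of the motion cues mentioned above, eye blinking is commonly tracked with the eye aspect ratio (EAR): the ratio of vertical to horizontal eyelid landmark distances, which drops sharply when the eye closes. The sketch below assumes six eye landmarks per frame (the layout used by common facial-landmark models) and a 0.2 closed-eye threshold, both of which are illustrative defaults rather than fixed standards.

```python
import math

def _dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """EAR for six eye landmarks p1..p6 (corners p1/p4, lids p2/p6 and p3/p5).

    The ratio is roughly constant while the eye is open and falls toward
    zero as the eyelids close.
    """
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.2):
    """Count closed-then-open transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```

A natural blink rate over a capture window is one of the involuntary signals that is hard for a static photo, and awkward for many synthetic videos, to reproduce convincingly.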

Best Practices for Digital Onboarding: Avoiding Photo Uploads

Liveness checks require a live mobile camera feed to ensure the user is physically present during verification. The process involves users taking a live selfie and a photo of their ID document with their mobile camera, often guided by on-screen instructions to ensure clarity and accuracy. These live captures are then analyzed for signs of liveness. This method prevents fraudsters from using static images or pre-recorded videos to spoof the system, as the dynamic interaction with the camera provides real-time evidence of the user's presence and identity, further bolstering the security of the onboarding process.

Best practices for digital onboarding strongly advise against allowing photo uploads for identity verification to mitigate the threat of deepfake and spoofing attacks. Instead, organizations should mandate live photo or video captures directly through the user's device camera. With a mobile camera required during the process, the only way for a fraudulent actor to use generative AI would be to take a photo of a screen or a print. Liveness algorithms can be trained to effectively differentiate between these photos of screens or prints and a live selfie, catching the generative AI spoof attempt.
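The "no photo uploads" policy can be sketched as a simple server-side gate. The metadata field names below (`capture_source`, `gallery_upload`) are hypothetical, invented for illustration; in practice a platform would verify capture provenance with signed client attestations rather than trusting self-reported fields.

```python
# Hypothetical server-side policy check: accept only frames captured live
# by the device camera and reject anything that arrived as a file upload.
ALLOWED_SOURCES = {"mobile_camera_live"}

def accept_capture(metadata):
    """Return (ok, reason) for an incoming selfie capture request."""
    source = metadata.get("capture_source")
    if source not in ALLOWED_SOURCES:
        return False, f"rejected: source {source!r} is not a live camera capture"
    if metadata.get("gallery_upload", False):
        return False, "rejected: gallery/file uploads are not permitted"
    return True, "accepted"
```

Closing the upload path forces a generative AI attacker to physically point a camera at a screen or print, which is exactly the artifact class liveness algorithms are trained to catch.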

Using liveness algorithms in conjunction with a mobile selfie effectively mitigates the risk of identity fraud from generative AI

Kairos's Advanced Liveness Technology Explained

Kairos has introduced an innovative liveness detection technology capable of identifying photos of prints and screens using a sophisticated 2D process with remarkable accuracy. This technology scrutinizes the texture, lighting, and other visual cues in images to distinguish between live faces and representations shown on a screen or printed photos.
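Kairos's actual algorithm is proprietary, but one toy example of a 2D texture cue is shown below: recaptured screens and prints tend to exhibit different high-frequency texture (moiré patterns, printing dots, uniform flatness) than live skin, and a crude proxy for that is the variance of a Laplacian filter response over the grayscale face crop. This is an assumption-laden illustration of the *kind* of visual analysis described, not Kairos's method.

```python
import numpy as np

# 3x3 Laplacian kernel: responds to high-frequency intensity changes.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response over a 2D grayscale array.

    Implements a 'valid' 3x3 correlation by summing shifted slices, so no
    image-processing library is required. A downstream classifier could use
    scores like this (among many other features) to separate live skin
    texture from screen or print recaptures.
    """
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

Production systems combine many such cues (texture, specular highlights, color distribution, depth) and learn the decision boundary from labeled spoof data rather than hand-set thresholds.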

By combining live mobile webcam captures with Kairos' advanced liveness detection, the system effectively counters the threat posed by generative AI deepfakes. The technology ensures that the person presenting themselves during the onboarding process is physically present and matches the identity on the provided document. This not only thwarts attempts to use synthetic images or videos but also significantly reduces the risk of identity fraud. The integration of this liveness check, alongside the requirement for mobile uploads of selfies and ID documents, provides a robust defense mechanism against the increasingly sophisticated tactics employed by fraudsters using generative AI technology.
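The combined defense described above amounts to a conjunctive decision: both the liveness score and the document face-match score must clear their thresholds independently. A minimal sketch follows; the threshold values are illustrative assumptions, not Kairos's tuned parameters.

```python
# Illustrative decision logic: onboarding passes only when BOTH checks
# succeed, so a perfect face match presented via a screen replay still
# fails the liveness gate, and a live but mismatched face fails the match.
def onboarding_decision(liveness_score, match_score,
                        liveness_thresh=0.9, match_thresh=0.8):
    """Scores are assumed normalized to [0, 1]; thresholds are examples."""
    if liveness_score < liveness_thresh:
        return "reject: liveness check failed"
    if match_score < match_thresh:
        return "reject: face does not match ID document"
    return "approve"
```

Keeping the two checks independent is the point: a generative AI deepfake must simultaneously fool the liveness algorithm on a live camera feed and match the ID document, which raises the attack cost substantially.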
