Personally, I appreciated the coziness, which made for some insightful networking opportunities and high-quality sessions. I left with plenty to think about on my trip back to Boston, so here are my takeaways from the Biometric Summit 2019:
A single factor, whether face, voice, fingerprint, iris, retina, or even electrocardiogram, is good, but it may not be enough. When multiple biometric methods are combined into one authentication attempt, accuracy can improve dramatically. Solutions were shown that merge facial recognition and voice into a single experience with very high accuracy rates.
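To make the idea concrete, here is a minimal sketch of score-level fusion, the common way to combine two biometric matchers into one decision. The weights and threshold are purely illustrative, not taken from any of the demoed products:

```python
# Hypothetical sketch: fusing face and voice match scores into one decision.
# Weights and threshold are illustrative, not from any production system.

def fuse_scores(face_score: float, voice_score: float,
                w_face: float = 0.6, w_voice: float = 0.4,
                threshold: float = 0.8) -> bool:
    """Weighted score-level fusion of two biometric matchers.

    Each score is a similarity in [0, 1]; the fused score must
    clear the threshold for the attempt to be accepted."""
    fused = w_face * face_score + w_voice * voice_score
    return fused >= threshold

# A strong face match can compensate for a weaker voice match, and vice versa.
print(fuse_scores(0.95, 0.70))  # True: fused score 0.85 clears the threshold
print(fuse_scores(0.60, 0.65))  # False: fused score 0.62 falls short
```

The point of fusion is exactly the trade-off in the comments: neither modality alone has to be perfect, as long as the combined evidence is strong enough.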
And today, biometrics are no longer limited to face, voice, and fingerprint. Electrocardiogram (ECG) was introduced as a potential new, albeit more invasive, form of authentication. Evidently, each of us has a unique heartbeat signature; beyond identification, it can also be used to gauge stress levels and other emotional states.
There was even a demo of typing-style identification. Two people could type the same phrase into a mobile phone and each would be uniquely identified by their typing rhythm and patterns. Combining this with other modalities, such as facial recognition, could boost both the accuracy and ease of authentication. For example, when authenticating to a banking chatbot, using your face alone could be less secure than combining face with typing or voice; the goal is to approach 100% accuracy.
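The simplest form of this technique (often called keystroke dynamics) compares the timing between keypresses against an enrolled rhythm. This is a toy sketch with made-up timings and an arbitrary threshold, not the demoed product's algorithm:

```python
# Hypothetical sketch: distinguishing typists by inter-key timing.
# Timestamps and threshold are made up for illustration only.

def intervals(timestamps):
    """Inter-key intervals (seconds) from a list of keypress timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def timing_distance(sample, profile):
    """Mean absolute difference between two interval sequences."""
    return sum(abs(s - p) for s, p in zip(sample, profile)) / len(profile)

# Enrolled rhythm for a phrase, vs. two people typing the same phrase.
enrolled  = intervals([0.00, 0.18, 0.35, 0.55, 0.71])
same_user = intervals([0.00, 0.19, 0.33, 0.56, 0.73])
impostor  = intervals([0.00, 0.40, 0.52, 0.95, 1.10])

THRESHOLD = 0.05  # seconds of average deviation tolerated
print(timing_distance(same_user, enrolled) < THRESHOLD)  # True: rhythm matches
print(timing_distance(impostor, enrolled) < THRESHOLD)   # False: rhythm differs
```

Even though both typists entered identical text, their timing profiles separate them, which is why this signal layers so naturally on top of face or voice.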
Facial recognition and other forms of biometrics tread a thin line between creepiness and convenience. Even the audience of biometrics experts was split on the creepiness of various systems. The general consensus: the more invisible a system makes itself, the creepier it is perceived to be.
Scanning your face to enter a building? Not so creepy. Cameras at the gas pump, triggering ads based on your age, gender and/or race? Getting creepy. Using facial recognition on a missile-equipped drone to assassinate targets in a far-off land? Beyond creepy.
Accuracy is one thing, but it matters little if the image, voice, or other sample being verified isn't from a live person. Liveness, or anti-spoofing, detection is key to determining that it is really you, and that you are really there.
And attacks are growing more sophisticated, with attackers attempting to fool systems during processes such as account creation. Think of a dating app: what happens if an attacker pre-registers an account with your face before you do? This is effectively a denial of service, preventing the legitimate customer from ever using the service. Integrating anti-spoofing and liveness detection into the registration process is key to preventing this kind of malicious behavior.
For biometrics to become more widely adopted, the experience of using them must improve. That doesn't mean making them invisible: if the user can "see" the system working, through a progress bar or some other visual feedback, they trust the process more and it "feels" more secure. Even small messages such as "Processing… Verifying… Authenticated!" go a long way toward making users comfortable with what happens behind the scenes. Making the experience "magical" shouldn't be the goal; informed consent matters, and so does avoiding "idiot traps": layers of security theater that make the system harder for a legitimate user to use while barely slowing down a determined attacker.
End users are sensitive to friction, but friction isn't always a bad thing. As the story goes, years ago the phone company would inject a little static into the line so that customers would know the connection was still "alive" and the call hadn't gone dead. In fact, it was said that the less friction there is in a biometric transaction, the more potential there is for security vulnerabilities in the process.
When biometrics are done right, they have great potential. Done wrong, they can be less secure than not using biometrics at all. That is the danger the industry is grappling with at the moment: how to walk that line.
Threat actors use generative AI deepfakes to create or manipulate facial images for identity fraud, either generating new faces or altering existing images so they pass as a legitimate user. To address this, specific image-capture security protocols and a robust liveness algorithm should be used together to effectively block generative AI spoofing attempts.
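One capture-security measure of the kind alluded to above is a server-issued challenge that the capture client must embed in its upload, proving the image was taken just now rather than generated in advance. This is a hypothetical sketch of that one idea, not Kairos's protocol; a real deployment would pair it with a liveness algorithm on the image itself:

```python
# Hypothetical sketch: an HMAC-signed, time-limited nonce that the capture
# client must return with its upload. A pre-generated deepfake cannot
# contain a nonce issued after it was created, so stale or tampered
# challenges are rejected. Names and TTL are illustrative assumptions.

import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)  # per-deployment secret (illustrative)
NONCE_TTL = 30               # seconds a challenge stays valid

def issue_challenge():
    """Server side: hand the client a fresh nonce plus an integrity tag."""
    nonce = os.urandom(16).hex()
    issued_at = time.time()
    tag = hmac.new(SERVER_KEY, f"{nonce}:{issued_at}".encode(),
                   hashlib.sha256).hexdigest()
    return nonce, issued_at, tag

def verify_capture(nonce, issued_at, tag):
    """Server side: accept only authentic, recently issued challenges."""
    expected = hmac.new(SERVER_KEY, f"{nonce}:{issued_at}".encode(),
                        hashlib.sha256).hexdigest()
    fresh = time.time() - issued_at < NONCE_TTL
    return hmac.compare_digest(tag, expected) and fresh

nonce, issued_at, tag = issue_challenge()
print(verify_capture(nonce, issued_at, tag))        # True: fresh and authentic
print(verify_capture(nonce, issued_at - 300, tag))  # False: tampered timestamp
```

The freshness check defeats replays of old captures, while the HMAC tag stops a client from forging its own timestamps; neither, on its own, says anything about whether the face is live, which is why the liveness algorithm remains essential.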
This guide outlines essential practices for capturing and uploading ID documents and selfies, ensuring a smooth and secure digital onboarding experience. By adhering to these guidelines, users can significantly enhance the accuracy of identity verification, reducing errors and improving the overall success rate.
As the landscape of digital identity verification rapidly evolves, Artificial Intelligence (AI) is at the forefront, reshaping traditional approaches. This deep dive into AI's role in identity verification is for those familiar with the nuances of data science and computer science.
Deep learning, particularly Convolutional Neural Networks (CNNs), has revolutionized facial recognition technology. The layered approach lets the model discern intricate facial patterns, enhancing recognition accuracy. Backpropagation, the training mechanism behind CNNs, refines these learned features, significantly improving the model's ability to distinguish subtle facial characteristics.
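To illustrate what one of those layers actually computes, here is a minimal NumPy sketch of a single convolution-plus-ReLU pass. The kernel is a hand-written vertical-edge detector; in a trained CNN, backpropagation would learn such weights rather than have them fixed. This is an illustrative example, not Kairos code:

```python
# Illustrative sketch: one convolutional layer pass in NumPy, showing how
# a kernel responds to a local pattern. In a trained CNN, backpropagation
# would adjust these kernel weights instead of fixing them by hand.

import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation followed by ReLU, as in a basic CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU keeps only positive responses

# A toy 6x6 "image" with a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

features = conv2d(image, edge_kernel)
print(features.shape)  # (4, 4)
print(features.max())  # 3.0 -- the strongest response sits on the edge
```

Stacking many such layers, each with many learned kernels, is what lets a CNN progress from edges to facial parts to identity-level features.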
Misidentification of people based on ethnicity, gender, and age plagues the facial recognition industry, and it’s a continuing mission of ours to fix this problem.
In our increasingly digital world, 'IDV' or identity verification is more than a buzzword; it's a fundamental component of online security and trust.
Kairos, the Miami-based face recognition provider that gained global attention in 2018 for its early stance highlighting algorithmic bias in face recognition systems, has brought back founder Brian Brackeen, who that same year was removed from his position as CEO ahead of a legal battle that ended in Brackeen's favor.
A strong scientific discipline is key to the success of any AI-focused startup, and having the best minds working on your problem is the only way to generate category-challenging results.
Google Next is Google’s annual conference focusing on its cloud computing offering, Google Cloud Platform. Thanks to our great Google Cloud account team in Miami, we were able to attend this year and learn about Google’s new announcements, network with experts and peer companies, and get some in-depth knowledge about GCP.
Every month, we’re bringing you the best news and views on the most compelling topic in technology today—Identity. All lovingly curated by the team at Kairos.
Facial recognition is in big demand with businesses all over the world, from preventing fraud to enabling more profitable customer experiences; it’s becoming the natural authority on identity. That’s why we’re excited to announce a renewed partnership between Kairos and RapidAPI, the leading API marketplace for software developers.
As Kairos’ Director of Product Integration, I’m on the front lines when it comes to customer inquiries. From pet detection to weight detection, I’ve heard it all. While some ideas are more far-fetched than others, the common trends cannot be ignored—and these represent innovations that are happening NOW.
From getting the latest TechCrunch headlines on your phone, to booking a Lyft to your office—APIs are powering most of the products and services we all take for granted each day. They make the world move.
This month, CB Insights mapped out the top-funded AI startups in every US state, and Miami-based face recognition company Kairos came out on top for Florida.