Biometric technology like facial recognition is here to stay

The recent opposition to the IRS's use of face recognition technology, and the agency's subsequent decision to abandon it, raise the question of whether face recognition systems will ever be fully adopted in the United States.

Activists and experts in particular often raise the issue of bias in the algorithms behind biometric technologies such as face recognition. But like it or not, this technology is prevalent in both the private and public sectors.

Carey O'Connor Kolaja is the CEO of AU10TIX, an identity verification provider based in Israel; Kolaja works out of New York. The provider uses various forms of authentication, including document verification and biometric authentication, to help customers such as Google, Uber and PayPal prevent fraud.

In this Q&A, Kolaja discusses what went wrong with the rollout of the IRS face recognition technology, various trends in biometric authentication, and ways to prevent bias in the AI and machine learning algorithms that underlie these technologies.

Is the United States the only country resistant to face recognition technology?


Carey O'Connor Kolaja: I don't think the U.S. is the only country where citizens have resisted sharing their personal information with the government or even the private sector.

I hesitate to say that, because if you look at the statistics right now, especially in the U.S., about 80% of the data hacks that have occurred are due to password cracking. That's what gave rise to biometrics as a way to verify yourself. [Up to] 85% of global consumers have used biometrics to confirm who they are, and in other parts of the world adoption runs even deeper. So if you look at the claim that citizens are not comfortable, the statistics show otherwise.

The bigger question underlying all this is … whom are people comfortable or uncomfortable sharing that data with? That's when you start to ask what the private sector is doing differently from the public sector.

Whether it's a facial signature or a fingerprint signature, it really comes down to our data. It's about how to create access to things fairly, while maintaining security and privacy, and while still giving choice and control.

This is one of the areas where, in my opinion, things went wrong with our U.S. government in its desire to adopt new technologies and ensure security, a goal I agree is the right one. I think the implementation failed because we shouldn't force anyone to choose between providing a biometric signature and [not] getting unemployment benefits or paying taxes. There must be other ways people can verify themselves when they make the choice not to share biometric information.

Will education about face recognition and biometrics make the public more comfortable with the technology?

O'Connor Kolaja: There's a whole body of publications and content around identity literacy: how you educate people, and who's responsible for educating people about what it means to store your personal information.

There are no ifs, ands or buts; we all share information about ourselves every time we go online, and in the physical world when we carry a card or, now, when we show a vaccination card to get into a restaurant in New York.

The broader discussion we need to have is about what responsibility the private and public sectors have to disclose to the ultimate data subject how their information is used and shared.

There was a letter from some Democratic members of Congress to the IRS on this issue. I was really impressed with some of the issues they raised in that letter. It was roughly: What kind of oversight do government agencies have after this information is collected? What happens to the data? Where is it stored? How can someone remove it if they want to?

Teach consumers and citizens about [their data], and then they can make choices about what they share and how.

What other trends do you see around face recognition and biometrics?

O'Connor Kolaja: The notion that you need a lot of data to make sure someone is who they say they are, with a high level of confidence, is being questioned. And the challenge for those of us in the industry, and in a broader sense, is how you can take the least amount of information and get the highest level of assurance, to really minimize the amount of PII [personally identifiable information] that is shared.

The second thing we are starting to see is that one type of verification is usually not enough.

If one is compromised, what happens next? I also believe that layering the various verification techniques that relate to what you're trying to do is a big move.
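To make that layering concrete, here is a hypothetical sketch (the check names and thresholds are illustrative assumptions, not AU10TIX's actual pipeline) of a verifier that requires several independent signals to agree:

```python
# Hypothetical sketch of layered identity verification: multiple
# independent checks must all pass, so compromising any single
# factor is not enough on its own.
def verify_identity(document_ok: bool, face_match: float, liveness: float) -> bool:
    """Accept only when the document check passes and the face-match
    and liveness scores (0.0-1.0) both clear their thresholds."""
    return document_ok and face_match >= 0.9 and liveness >= 0.8

# A forged document fails even when the face match alone looks strong.
print(verify_identity(document_ok=False, face_match=0.97, liveness=0.95))  # False
```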

The third major trend we see is that tokenization will be the way of the future. Since the pandemic, we've had [an] increase in fraud and … people connect to the internet 2,800 times a day or something … so we need to move more toward what are called verified credentials. They allow a person to access a tax return, PayPal account or Airbnb account without sharing personal information, while ensuring a high level of confidence that the person is who they say they are.

What are verified credentials? … A token that proves you know something or have something; it can be issued by anyone and verified by anyone, but your PII is not distributed. In a world where we want to live more safely and securely, this is very important.
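As a minimal sketch of that idea, the snippet below signs a PII-free claim with an Ed25519 key using Python's cryptography package; the field names are hypothetical, and production systems would use a standard such as the W3C Verifiable Credentials model rather than this toy format:

```python
# Toy credential: an issuer signs a claim that carries no PII, and any
# verifier holding the issuer's public key can confirm it is genuine.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Issuer side: sign a claim containing only an opaque subject token.
issuer_key = ed25519.Ed25519PrivateKey.generate()
credential = {"subject_token": "a1b2c3", "claim": "age_over_18"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Verifier side: check the signature; the subject's name, birthdate
# and other PII are never transmitted or stored by the verifier.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Credential accepted:", credential["claim"])
except InvalidSignature:
    print("Credential rejected")
```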

And then, I guess, there's a fourth trend that really concerns control and choice. GDPR [General Data Protection Regulation] was the catalyst for this; the CCPA [California Consumer Privacy Act] in California was another. As an individual, if I want to revoke access to information I've shared with a brand or merchant, I can do that. I believe we will see more and more of this. But while the rights exist, the knowledge and education don't, and the process is not as simple as it probably should be.

Is there a way to ensure that the algorithms and AI behind face recognition can be fair?

O'Connor Kolaja: There are ways you can ensure that artificial intelligence algorithms and models are unbiased.

A model is always initially trained by people, on data sets that are labeled and fed to it. In that scenario, the people who label the data sets are the ones teaching the model, so they themselves have to be diverse.

On top of that, there are mechanisms to check that the models are diverse and that no biases have been coded in.

Another way is to introduce governance and controls. For example, when we build or modify our models, we always run a pre-test and a post-test, running them against a set of data we know is unbiased, to ensure that the effectiveness of the result remains intact.
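A rough sketch of that kind of release gate, assuming a labeled benchmark whose demographic groups are known (the field names and the 0.02 accuracy-gap threshold are illustrative assumptions, not AU10TIX's actual process):

```python
# Compare a candidate model against the current one on a benchmark that
# is known to be demographically balanced, and block the release if any
# group regresses or the accuracy gap between groups grows too wide.
from collections import defaultdict

def accuracy_by_group(model, benchmark):
    """Return {group: accuracy} for a model over labeled samples shaped
    like {"features": ..., "label": ..., "group": ...}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for sample in benchmark:
        totals[sample["group"]] += 1
        if model(sample["features"]) == sample["label"]:
            hits[sample["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

def release_gate(old_model, new_model, benchmark, max_gap=0.02):
    """Pre-test/post-test check: no group may get worse, and per-group
    accuracies must stay within max_gap of each other."""
    before = accuracy_by_group(old_model, benchmark)
    after = accuracy_by_group(new_model, benchmark)
    if any(after[g] < before.get(g, 0.0) for g in after):
        return False  # some group regressed after the model change
    return max(after.values()) - min(after.values()) <= max_gap
```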

The technology is not perfect. People are not perfect, but there are steps that can be taken to ensure that these models do not make the wrong decisions.

Editor’s note: This interview has been edited for clarity and brevity.
