Exposing the secretive company at the forefront of facial recognition technology

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Facial recognition technology is convenient when you use it to unlock your phone or log into an app. But you might be surprised to know that your face is most likely already in a facial recognition database that can be used to identify who you are without you even being aware it's happening or knowing who's using it and why.

A company expanding the possibilities of this technology and testing its legal and ethical limits is Clearview AI. It's a startup whose clients already include some law enforcement and government agencies. If you haven't already heard of it, that's in part because the company didn't want you to know it existed. It did its best to remain secretive until it was exposed by my guest, Kashmir Hill. She's a New York Times tech reporter who first wrote about Clearview AI in 2020. She describes her beat as the future tech dystopia and how we can try to avoid it. Kashmir has continued to report on Clearview AI and other developments in facial recognition technology. Now she has a new book called "Your Face Belongs To Us: A Secretive Startup's Quest To End Privacy As We Know It."

Kashmir Hill, welcome to FRESH AIR. Tell us what the Clearview AI facial recognition technology is capable of doing.

KASHMIR HILL: So the way it works is that you upload someone's face - a photo of someone - to the Clearview AI app, and then it will return to you all the places on the internet where that person's face has appeared, along with links to those photos.

GROSS: So we're talking about anything that's on the internet - your photos on social media.

HILL: It could lead to your Facebook profile, your Instagram account, your Venmo account, your LinkedIn profile, reveal your name, you know, possibly where you live, who your friends are. And it may well reveal photos that you didn't realize were on the internet, maybe some photos you didn't want to be there.
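
At its core, what Hill describes is a nearest-neighbor search over face embeddings: every scraped photo is reduced to a vector, and a query photo is matched against the whole index. The sketch below illustrates that pattern with simulated embeddings and made-up URLs; a real system would substitute a deep face-embedding model and an index of billions of scraped photos.

```python
import numpy as np

rng = np.random.default_rng(0)

# In a real system, a neural network maps each face photo to a vector so
# that photos of the same person land close together. Here we fake that
# with a random "person" vector plus small per-photo noise.
def fake_embedding(person_vec: np.ndarray) -> np.ndarray:
    noisy = person_vec + rng.normal(scale=0.05, size=person_vec.shape)
    return noisy / np.linalg.norm(noisy)

people = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}

# The "scraped web": (embedding, source URL) pairs. URLs are made up.
index = [
    (fake_embedding(people["alice"]), "https://example.com/alice-profile"),
    (fake_embedding(people["alice"]), "https://example.com/party-photo"),
    (fake_embedding(people["bob"]), "https://example.com/bob-resume"),
    (fake_embedding(people["carol"]), "https://example.com/carol-blog"),
]

def search(query: np.ndarray, top_k: int = 3, threshold: float = 0.9):
    """Return the most similar indexed faces and where they were found.
    Embeddings are unit vectors, so a dot product is cosine similarity."""
    scored = sorted(((float(query @ emb), url) for emb, url in index),
                    reverse=True)
    return [(s, url) for s, url in scored[:top_k] if s >= threshold]

# "Uploading a photo": a new picture of Alice finds her other photos.
for score, url in search(fake_embedding(people["alice"])):
    print(f"{score:.3f}  {url}")
```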

GROSS: And we'll talk a little later about photos of you that you didn't know existed. So let's talk about some of the nightmare scenarios that Clearview's facial recognition technology might create.

HILL: So let's think about the worst-case scenarios for facial recognition technology. Some of the sensitive uses that I think about are, you know, a woman who is walking out of a Planned Parenthood and there are protesters outside, and they look at her face, take her photo, find out her identity, make assumptions that she had an abortion and, you know, write about her online or...

GROSS: Mentioning her name.

HILL: ...Harass her right there in the moment. Or if you are at a bar and you are talking to somebody and decide that they are creepy and you never want to talk to them again, they could take your photo and learn who you are, learn where you live, have all this information about you.

For police use of this technology, you know, it can be very useful for solving crimes, but, you know, it can also be wielded in a way that could be very chilling or intimidating. Say, if there are protesters against police brutality and the government is able to very easily identify them. And we have seen this already happen in other countries, not with Clearview AI's technology but with other facial recognition technology. In China, you know, this kind of technology has been used to identify protesters in Hong Kong, to identify Uyghur Muslims and for more surprising uses like naming and shaming people who wear pajamas in public or making sure that somebody in a public restroom doesn't take too much toilet paper. They have to look at a face recognition camera, only get a little bit of toilet paper and then wait a certain amount of time until their face can unlock more.

GROSS: Who would have ever thought of that? (Laughter) OK, so in the U.S., who has this Clearview facial recognition technology now? And are there restrictions on who can use it?

HILL: So in the U.S. right now, Clearview AI has been used by thousands of police departments, according to the company. And it has come up in public records requests; a lot of local journalists have done reporting on their local departments using it. They have a contract with the Department of Homeland Security, a contract with the FBI, and they have received funding from both the Army and the Air Force.

GROSS: So what would they use it for in the military?

HILL: Well, in the military, you can imagine this being very useful for identifying strangers around military bases, you know, in the cities where forces are stationed. Clearview AI has actually given its technology for free to Ukraine to use in its war with Russia. And the Ukrainians say that they have used it to identify Russian spies who are trying to blend in with the population - they're able to search a face and see social media profiles that link the person to Russia, that show them in their military uniforms.

Ukraine has also used Clearview AI to identify the corpses of Russian soldiers, soldiers who have been killed, and to find their identities, to find their social media profiles. And they have then sent those photos to their loved ones, you know, to a wife, to a mother, to a boyfriend, to a sister, to a brother, to say, look, this is your loved one. They are dead. And it was a way to try to turn the tide of public opinion in Russia against the war, to show them the toll. But a lot of people who saw that use thought it was just an incredibly chilling and disturbing use of this kind of technology.

GROSS: There are U.S. government agencies using this technology, too, right?

HILL: Yes. I mean, we have only a limited look at how each agency uses the technology. So I talked to a Department of Homeland Security officer who has used Clearview AI, and he told me about a specific case in which he used it. It was a case of child sexual abuse. He had an image that had been found in a foreign user's account in Syria, and they didn't know exactly who the abuser was or who the child was or even where the photo was taken. They were able to determine that it was in the U.S. based essentially on the electrical outlets visible in the photo.

And so he used Clearview AI to search the face of the abuser, and it got a hit on Instagram - a photo where this man appeared in the background of someone else's photo, taken at a bodybuilding convention in Las Vegas. And this man was standing behind a workout supplements counter. And this was the breadcrumb that the DHS officer needed to find out who he was. He ended up calling the workout supplements company, asking if they knew the man. And eventually, they located him in Las Vegas and arrested him. And so you could really see the power of a technology like this in officers' hands.

GROSS: All right. Let's take a short break here, and then we'll talk some more. If you're just joining us, my guest is Kashmir Hill. She's a tech reporter for The New York Times and author of the new book, "Your Face Belongs To Us: A Secretive Startup's Quest To End Privacy As We Know It." We'll be right back after a short break. This is FRESH AIR.

(SOUNDBITE OF ALEXANDRE DESPLAT'S "SPY MEETING")

GROSS: This is FRESH AIR. Let's get back to my interview with New York Times tech reporter Kashmir Hill. Her new book is called "Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy As We Know It." The company she investigates, Clearview AI, has developed state-of-the-art facial recognition technology that's already being used by many law enforcement agencies, as well as some government agencies. It's been used to identify criminals, including child predators. But it's also made mistakes, which have had consequences for the wrongly accused. Here's an example.

HILL: Randall Reed is a man who lives in Atlanta. He's a Black man. He was driving to his mother's house the day after Thanksgiving, and he gets pulled over by a number of police officers - something like four police cars pulled him over. And they get him out of the car, they start arresting him, and he has no idea why or what's going on. And they say, you're under arrest; there's a warrant out for you in Louisiana for larceny. And he is bewildered. He says, I've never been to Louisiana.

And it turns out there was a crime committed there - a gang of people buying very expensive designer purses from consignment stores in and around New Orleans using a stolen credit card. And they ran a surveillance still of these men, and one of them matched Randall Reed's face. And Randall Reed ends up being held in jail in Atlanta for a week while they're waiting to extradite him. And he has to hire lawyers in Georgia and a lawyer in New Orleans.

And the lawyer in New Orleans, by basically going to one of these stores and asking for the surveillance footage, was able to realize that, oh, wow, this suspect actually looks a lot like my client. And the detective ends up telling him that, yes, facial recognition was used. And so Randall Reed takes a bunch of photos and a video of his face, sends those to the police, and the charges end up being dropped.

But this is incredibly traumatic. And in this case, Clearview AI was the technology that was used to identify him. And that is one of the huge problems with the use of Clearview AI: if police are using this to solve basically a shoplifting crime, they're doing that by searching this database of millions of people. You know, Clearview says that there are 30 billion faces in its database. And so this is a question that activists are asking: should all of us who are in that database be in the lineup any time a small crime is committed in some local jurisdiction?

GROSS: You write that of the people who are falsely accused based on faulty recognition technology, the majority of them are people of color and that the algorithms have more trouble correctly identifying people of color than they do identifying white people, building in what is already a racial bias in the criminal justice system. Can you explain, without getting very technical, why these algorithms have more trouble identifying people of color?

HILL: Yeah, I mean, this is a complicated issue. So facial recognition technology for a very long time had serious bias issues. And the reason was basically that the people working on facial recognition technology tended to be white men, and they were making sure that it worked on them. And they were using photos of white men to kind of train the AI. And the way that these systems learn - and this is the case for kind of everything from facial recognition technology to tools like ChatGPT - is that you give a computer a lot of data, and it gets very good at identifying patterns.

And so if you give that computer, you know, only photos of white men, or mostly photos of white men, or mostly photos of white people, or mostly photos of men, it gets better at identifying those people. And so, yes, this was a problem for a very long time. And there were researchers like Joy Buolamwini who pointed out that this was flawed, that it didn't work as well on darker faces, on women, on children, on older people. And that criticism was heard by the facial recognition technology industry, and they have improved these systems. They have gotten more diverse faces to train the AI, and it has improved.
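
The dynamic Hill describes can be seen in a toy simulation: if a model represents one group's faces tightly and another's loosely - a crude stand-in for a model trained mostly on the first group - the same matching threshold yields very different accuracy. Everything below is simulated, and the numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 128

def embed(person_vec: np.ndarray, noise_scale: float) -> np.ndarray:
    """Toy embedding: a person's photos scatter around one point.
    A model trained mostly on one group embeds that group tightly
    (small noise) and an underrepresented group loosely (large noise)."""
    v = person_vec + rng.normal(scale=noise_scale, size=DIM)
    return v / np.linalg.norm(v)

def true_match_rate(noise_scale: float, n_people: int = 500,
                    threshold: float = 0.8) -> float:
    """Fraction of same-person photo pairs the matcher correctly accepts."""
    people = rng.normal(size=(n_people, DIM))
    hits = sum(
        float(embed(v, noise_scale) @ embed(v, noise_scale)) >= threshold
        for v in people
    )
    return hits / n_people

# Same matcher, same threshold: very different accuracy by group.
for group, noise in [("well-represented", 0.05), ("under-represented", 0.45)]:
    print(f"{group}: true-match rate = {true_match_rate(noise):.2f}")
```

As Hill notes, the industry's actual remedy was on the data side: training on more diverse faces.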

And there have been a lot of questions raised about how they got that data. I mean, part of it is that they just turn to all of the photos of ourselves that we and others have posted on the internet. In one case, Google actually hired a contractor to go out and try to get, basically, photos of Black people. And they targeted homeless people and students. A Chinese company at one point basically offered their technology for free in Africa so that they could collect darker faces to help train their algorithms.

But the technology has improved a lot since its early days when it was really, you know, quite flawed. But obviously, we are still seeing racist outcomes. Of the handful of people we know to have been wrongfully arrested for the crime of looking like someone else, in every case, the person has been Black.

GROSS: So still in my mind is that you said that Clearview AI has 30 billion faces in its database.

HILL: Yes, and that's many more faces than there are people on the planet. So for many individuals, there are going to be many different versions of your face. The CEO...

GROSS: Oh, I see.

HILL: Yeah.

GROSS: So it's, like, different photos of you counted in that?

HILL: Yeah. So the CEO has run searches on me. And I can't remember the latest number, but I think it was something like 160 different photos of me on the internet that it was pulling up.

GROSS: So Clearview AI, in developing its facial recognition technology, is responsible for technological breakthroughs, but it's also leading to a lot of legal and ethical questions about where the boundaries are. Is there a way to say stop when things go too far, and where is that line? You write about how Google and Facebook and maybe some other companies had developed facial recognition technology earlier but didn't want to release it. They thought it was too dangerous, so they didn't make it available. Can you expand on that for us?

HILL: This was a really surprising finding for me. You know, when I first got wind of Clearview AI in the fall of 2019 and started talking to experts, people were shocked that this company had come out of nowhere and built this radical tool unlike anything released by the big technology giants or even by the U.S. government. And everyone assumed that what set Clearview apart was some technological breakthrough. But what I found in working on the book is that Google had talked about developing something like this as early as 2011, and its then-chairman, Eric Schmidt, said that it was the one technology Google had built but decided to hold back. And that was because they were worried about the dangerous ways it could be used by, say, a dictator to control his or her citizens.

And I discovered that Facebook, too, developed something like this. I actually got to watch a video of engineers who work there in a conference room in Menlo Park. They had rigged up a smartphone on the brim of a baseball cap, and when the guy who was wearing it turned to look at somebody, the smartphone would call out the name of the person he was looking at. But Facebook, too, decided to hold it back. And that is pretty surprising from Google and Facebook. They are such boundary-pushing companies. They have really changed our notions of privacy. But they both felt that they didn't want to be first with this technology, that it was unethical, potentially illegal.

But Clearview, you know, didn't have those same concerns. It was this radical new startup with a very unusual background, and it just wanted to make its mark on the world. And the building blocks were there for it to do this: countless photos of people on the internet that are not very well protected against the kind of scraping, or mass downloading, that Clearview did; and facial recognition algorithms that are just easier to develop now if you have some technical savvy, because what's called the open source community around these technologies has shared them online. And so what Clearview did was just what others weren't willing to do. I call it ethical arbitrage in the book. And what is so alarming about that is it means that there will be other Clearview AIs - and there already are.
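
Those building blocks really are off the shelf. As one hedged illustration, the open-source face_recognition package (a Python wrapper around dlib's face detector and 128-dimensional face encoder) can encode and compare faces from downloaded photos in a few lines. The file paths here are hypothetical placeholders, not anything Clearview used.

```python
import face_recognition

# A "scraped" photo saved earlier, and a query photo (placeholder paths).
known_image = face_recognition.load_image_file("scraped/profile_photo.jpg")
query_image = face_recognition.load_image_file("query/street_photo.jpg")

# Each call detects faces and returns one 128-d encoding per face found.
known = face_recognition.face_encodings(known_image)
query = face_recognition.face_encodings(query_image)

if known and query:
    # compare_faces applies a distance tolerance (default 0.6; lower is
    # stricter) to decide whether two encodings are the same person.
    same = face_recognition.compare_faces([known[0]], query[0])[0]
    dist = face_recognition.face_distance([known[0]], query[0])[0]
    print(f"same person: {same} (distance {dist:.3f})")
```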

GROSS: Well, a paradox here is that although Google and Facebook developed facial recognition technology, they decided it was too potentially dangerous and withheld it from public use. However, hasn't Clearview AI harvested faces through Google and from Facebook?

HILL: Clearview AI has scraped photos from millions of websites, including Instagram, Facebook, LinkedIn, Venmo, YouTube. Yes, it has taken photos from these companies - companies like Facebook, especially, that convinced us to put our photos online alongside our names. They did offer the building blocks that Clearview AI has used. And, you know, after I reported what Clearview AI had done, many of these companies sent cease-and-desist letters to Clearview AI saying, stop scraping our sites, delete the photos that you collected from our sites, this violates our terms of service. But then they didn't do anything else besides send those letters. There hasn't been a lawsuit against Clearview AI. And as far as I understand, Clearview AI has not deleted any of those photos, and I think it's continuing to scrape those sites.

GROSS: It's time to take another break, so let me reintroduce you. If you're just joining us, my guest is Kashmir Hill. She's a tech reporter for The New York Times and author of the new book, "Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy As We Know It." We'll be right back after we take a short break. I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF MUSIC)

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with Kashmir Hill, author of the new book "Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy As We Know It." It's about a company called Clearview AI that's expanding the possibilities of facial recognition technology and testing its legal and ethical limits. It's a startup whose clients already include some law enforcement and government agencies. It had been a very secretive company until she exposed it in a 2020 article for The New York Times.

How has this affected your use of social media and putting your picture online? They already have your photos, but still.

HILL: So I think a lot of people get hopeless about privacy or feel like, what can I do to protect myself? I do think that people can make choices that will protect them, but it's also a societal responsibility. So for me personally, I am a fairly public person. I have many photos on the internet. But when I post photos of my children, for example, I tend to do so privately - you know, privately on Instagram, just for friends and family, or by texting photos to share with my friends. I am much more private about their images, knowing that this technology is out there.

It is also the case that people can get themselves, in some places, taken out of these databases, so that is advice that I give people, you know? It's not just a matter of being careful what you post. If you live in certain states that protect your face better, you can go to Clearview AI and ask for access to the information they have on you and ask them to delete it. There are privacy laws that give you those rights in California, Connecticut, Virginia, Colorado.

And so, yeah, if you're a resident of one of those states, you can get out of Clearview AI's database. And that is a kind of hopeful part of this book: we don't have to just give in to the whims of technology and what it's capable of. We can constrain what's possible with a legal framework. We can pass privacy laws and enforce them, and that will help protect us against what is now becoming possible with technology.

GROSS: You know, a lot of us already use facial recognition technology in our private lives, like, to use it to unlock your phone or log on to an app. Do you use it? Like, what are your thoughts about that in terms of what you're exposing yourself to, if anything?

HILL: Yeah, I mean, people think that because I'm a privacy reporter, I must have everything on lockdown. But I am a normal person who lives my life in normal ways. It's part of how I get ideas for stories - just seeing how we interact with the world and what happens when my information is out there. So, you know, I do unlock my phone with my face.

When I was traveling to do research for this book, I went to London because they have police vans there, these mobile vans that they send out with facial recognition cameras on the roof to scan crowds and pick up wanted people off the streets. And so I really wanted to go there and have that part of what's happening with facial recognition technology in the book. And when I got to Heathrow Airport, rather than having to wait for hours in line, you know, for a customs agent to look at my passport, I just put it on a little scanner bed, looked into a camera - and there is a biometric chip on your passport that has your face print - and it matched me to the passport and just let me right in.
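
The e-gate Hill describes is one-to-one verification: the live camera capture is compared against the single face template stored on the passport chip, not searched against a big database. A minimal sketch, with simulated embeddings and an illustrative threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

def verify(live: np.ndarray, chip: np.ndarray, threshold: float = 0.85) -> bool:
    """Accept when the live capture and the chip's stored face print
    are sufficiently similar (cosine similarity of the embeddings)."""
    a = live / np.linalg.norm(live)
    b = chip / np.linalg.norm(chip)
    return float(a @ b) >= threshold

# Toy data: the same underlying face, seen twice with sensor noise.
face = rng.normal(size=128)
chip_template = face + rng.normal(scale=0.05, size=128)  # stored at enrollment
gate_capture = face + rng.normal(scale=0.05, size=128)   # camera at the gate

print(verify(gate_capture, chip_template))          # True: the gate opens
print(verify(rng.normal(size=128), chip_template))  # False: a stranger is refused
```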

I mean, there are many beneficial uses of facial recognition technology, and it's part of why I wanted to write this book, because I wanted people to understand it doesn't need to be an all or nothing situation. I hope that we can harness the beneficial uses of facial recognition technology that are convenient to us, that make our lives better, without having to embrace this completely dystopian, you know, world in which facial recognition technology is running all the time on all the cameras, on everybody's phone. And anywhere you go, people can know who you are and, you know, have it just end anonymity as we know it.

GROSS: That's a chilling thought. Let's talk about how you first found out about Clearview AI, because it had been doing everything in its power to prevent the public from knowing about it. How did you first find out it existed?

HILL: So I got a tip in the fall of 2019 from a public records researcher who had been looking into, you know, what types of facial recognition technology police were using, you know, which companies, how much they were paying for it. And he had gotten this 26-page PDF from the Atlanta Police Department. And it included this company that he hadn't heard of before - there wasn't much online - called Clearview AI that claimed that it had scraped billions of photos from the internet, including social media sites, and that it was selling it to hundreds of law enforcement agencies.

And there was a really surprising privileged-and-confidential legal memo that the Atlanta Police Department turned over, written by Paul Clement, who used to be one of the top lawyers in the country - he was the solicitor general under George W. Bush. He had written this memo for police to reassure them that they could use Clearview AI without breaking the law. And this just caught my attention right away. And I started digging in. And, you know, the more I dug, the stranger this company seemed.

GROSS: Well, you couldn't find their office. You couldn't find anyone to talk with. What were some of the obstacles you ran into?

HILL: So...

GROSS: I mean, you found their address, but you couldn't find a building.

HILL: Yeah. So one of the strangest things was, you know, they had a very basic website, and it just described what they were doing as artificial intelligence for a better world. And there was an office address there, and it happened to be just a few blocks away from The New York Times. So I mapped it on Google Maps, I walked over, and when I got to where it was supposed to be, the building did not exist. And that was very strange to me.

I also looked them up, you know, on the internet. And they had only one employee on LinkedIn. His name was John Good. He only had two connections on the site. It definitely looked like a fake person. You know, I reached out to that John Good and never got a response. You know, I called everyone I could find that seemed to have some connection to the company. No one would call me back. And so then I turned to police officers, trying to find people using the app, and that's where I had success. I talked to officers who had used it. They said it was incredible. It worked like nothing they had ever used before.

But through the process of talking to police officers, I discovered that Clearview AI was tracking me, that they had put an alert on my face. And every time one of these officers uploaded my photo to try to show me what the results were like, they were getting a call from Clearview AI and being told to stop talking to me. And Clearview AI actually blocked my face for a while from having any results. And that was very chilling to me because I realized, well, one, this company has the power to see who law enforcement is looking for, and they're using it on me - and also that they had the ability to control whether or not a person could be found.

GROSS: Yeah. But you were able to see what pictures they had of you. And they had photos of you that you didn't know existed, including photos where you're, like, buried in the background. But it was still able to identify that photo as you. Tell us about some of the most surprising photos that were harvested.

HILL: Yeah. So eventually the company did talk to me. They hired a very seasoned crisis communications consultant. And so I was able to meet Hoan Ton-That, who is the technical co-founder of Clearview AI. And he has since run my face through the app, you know, several times. And in one case, it brought up this photo that I recognized as being taken in Washington, D.C. There's somebody in the foreground and somebody on the sidewalk in the background walking by. And I was looking at the photo, and I didn't immediately see me until I recognized that the person in profile in the background of the photo was wearing a coat that I bought at an American vintage store in Tokyo many, many years ago. And so I realized, wow, that's me - I could barely recognize with my own eyes that that's me, but this algorithm is able to find me.

There was a photo on the internet of somebody I had been talking to for a story, and that made me realize I may need to be much more careful with sensitive sources out in public if something like this becomes more ubiquitous, because I won't be able to trust anymore that if I leave my phone at home and meet them at a dive bar, someone can't make the connection between us. So, yeah, it was just very surprising. I even, at one point, covered my mouth and nose, you know, the way that you would with a COVID mask. And even then, Hoan Ton-That was still able to take a photo of me and bring up other photos of me. It really is astounding how far this technology has come from its early days, when it was very buggy and didn't work very well.

GROSS: So it can identify you even if you're wearing a mask. That's remarkable. Have you tried to get your own face removed from Clearview AI's database?

HILL: Well, unfortunately, I am a resident of New York, and so I do not have the privacy protections that other people in the U.S. or people outside of the U.S. have. So I can't get Clearview AI to delete the photos of me.

GROSS: Oh, so it's only people in other countries who have that ability.

HILL: So people in Europe have this ability. And then there are states in this country that have privacy laws that give them the right to access and delete information that companies have on them. So if you live in California, Colorado, Virginia or Connecticut, you can go to Clearview AI and get your information deleted. And if you're in Illinois, you're protected by an extra special law that specifically protects your face. But the rest of us are out of luck.

GROSS: Let me reintroduce you. If you're just joining us, my guest is New York Times tech reporter Kashmir Hill. She's the author of the new book "Your Face Belongs To Us: A Secretive Startup's Quest to End Privacy As We Know It." It's about facial recognition technology and the company Clearview AI. We'll be right back. This is FRESH AIR.

(SOUNDBITE OF THE WEE TRIO'S "LOLA")

GROSS: This is FRESH AIR. Let's get back to my interview with Kashmir Hill. She's a New York Times tech reporter and author of the new book "Your Face Belongs To Us: A Secretive Startup's Quest To End Privacy As We Know It." It's about the company Clearview AI, its quest to develop facial recognition technology, the successes and failures it's had so far, and how it's testing the ethical and legal limits of this technology.

You know, we talked about how law enforcement agencies, some government agencies, the military is using or is interested in using this technology from this company. What about private corporations? Are any of them using it?

HILL: So Clearview AI, when they were first pitching this technology, did want private corporations to use it. They were pitching it to grocery stores and hotels and real estate buildings. One of the people they pitched, actually, was John Catsimatidis, who's a businessman in New York, has run for mayor there, and owns the Gristedes grocery stores. And part of their pitch was that they would give the app to potential investors and to these businesspeople. And so John Catsimatidis told me they thought about using it - he had a lot of Haagen-Dazs thieves at his stores at the time - and so they tested it. They didn't ultimately install Clearview AI's technology. But he himself loved having the app on his phone, and he told me about how he used it one time when his daughter walked into an Italian restaurant where he was dining and she was with a date he didn't recognize. And so he had a waiter take a picture of the couple so he could identify who the man was, which I thought was a really shocking use.

So Clearview AI has agreed not to sell its database to companies and to only sell it to police agencies. But there are other facial recognition technologies out there. And I think the most notable example of this is Madison Square Garden, the big events venue in New York City. They own Radio City Music Hall and the Beacon Theater, and they installed facial recognition technology a few years ago to keep out security threats. But in the last year, the owner, James Dolan, decided that he wanted to use the technology to keep out his enemies - namely, lawyers who worked for firms that had sued him. And so Madison Square Garden ended up making a list of these 90 firms that had lawsuits against it, scraping the lawyers' photos from their own websites and creating a face ban on these people, so that when they try to go to a Knicks game or a Rangers game or a Mariah Carey concert, they get turned away at the door, and they're told, sorry, you're not welcome here until you drop your suit against us.
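
The system Hill describes is a watchlist match: each face at the door is compared against a small gallery of banned people, and a hit triggers a denial. A toy sketch, with simulated embeddings and made-up names:

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Gallery of banned faces, e.g. head shots scraped from firm websites.
banned = {f"lawyer_{i}": normalize(rng.normal(size=128)) for i in range(90)}

def screen(capture: np.ndarray, threshold: float = 0.9):
    """Return the banned identity the entrant matches, or None to admit."""
    entrant = normalize(capture)
    best_name, best_score = None, threshold
    for name, emb in banned.items():
        score = float(entrant @ emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# A banned lawyer at the door: the camera sees her gallery face plus noise.
print(screen(banned["lawyer_7"] + rng.normal(scale=0.01, size=128)))  # lawyer_7
print(screen(rng.normal(size=128)))  # None: an ordinary fan is admitted
```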

And yeah, I mean, it's a really incredible deployment of this technology, and it shows how chilling the uses could be - that you might be turned away by a company because of where you work. I could imagine a future in which a company turns you away because you wrote a bad Yelp review or they don't like your political leanings.

GROSS: You went with a lawyer who is on the banned list of Madison Square Garden to see if the technology actually prevented her from getting in. And it did. It worked.

HILL: Yeah. It was incredible. So I went with her - I can't remember if it was a Rangers game or a Knicks game - but I bought our tickets, so it was not under her name, not associated with her in any conceivable way. And we walked through the door to the stadium, put our bags down on the security belt, walked through the metal detector, and a security guard immediately walked up to her. He asked for her ID, and she showed it. And he said, you know, you're going to have to stand here for a moment; my manager's coming over. And the manager came over and said, hey, you work for this firm - you're not allowed to come into the stadium. And she said, well, I'm not working on the case against your company; it's other lawyers in my firm. He said it doesn't matter - everybody from your firm is banned. He gave her a note and kicked us out. And it happened within a minute of our walking through the door.

GROSS: Let's take another break here, and then we'll talk some more. If you're just joining us, my guest is Kashmir Hill, a tech reporter for The New York Times and author of the book "Your Face Belongs To Us." We'll be right back after we take a short break. This is FRESH AIR.

(SOUNDBITE OF THE MIDNIGHT HOUR'S "BETTER ENDEAVOR")

GROSS: This is FRESH AIR. Let's get back to my interview with Kashmir Hill. She's a tech reporter for The New York Times and author of the new book, "Your Face Belongs To Us: A Secretive Startup's Quest To End Privacy As We Know It." The startup referred to in the title is Clearview AI, and it's a company that has advanced facial recognition technology. And it's raised a lot of questions about the ethical and legal limits of this technology.

Let's talk a little bit about the founder of Clearview AI, and the CEO, Hoan Ton-That. Part of his background was that he was a MAGA supporter. What are his connections to Donald Trump and to the far right?

HILL: Yeah. So Hoan Ton-That grew up in Australia. He dropped out of college at 19, moved to San Francisco, and he was actually kind of part of a liberal crowd when he lived there - grew his hair long, was a musician, hung out with artists. But then around 2015, he moved to New York, and this seemed to be a time when his politics really shifted. He would later tell me that he was radicalized by the internet. He started following a lot of people on the far right - you know, Milo Yiannopoulos, Breitbart writers. He started hanging out with a guy named Charles Johnson, known as Chuck Johnson on the internet, who is very much a conservative provocateur and ran a very conservative news site that did what a lot of people described as race-baiting.

And Hoan Ton-That and Charles Johnson decided to go to the Republican National Convention together in 2016, where Trump was being anointed the candidate. And yeah, they were very much all in on Trump. And while they were there, they actually met with Peter Thiel, who, you know, was a big Trump supporter, and he was speaking at the convention. Peter Thiel would later become their first investor in Clearview AI before it was even called Clearview AI, when it was called Smart Checker. But that is where the company started. It did start very much within conservative circles in politics.

GROSS: What does that tell you, if anything, about how Clearview AI is using facial recognition technology? I mean, one of the fears is that authoritarian governments could use this for nefarious purposes. And Trump, whom the founders of the company - or at least most of them - supported, definitely has authoritarian tendencies.

HILL: I mean, one of the first ways that Clearview AI was used - before it was called that; it was still called Smart Checker at the time - was at the DeploraBall, which was this event in D.C. when Trump was becoming president. And Hoan Ton-That later said in documents that they had used the technology to keep anti-fascists - antifa - from being able to get into this event. And they revealed that in a pitch they made to the Hungarian government. They were trying to sell their tool for border security. And Hungary, I think, many would describe as an authoritarian government. And they said that they had fine-tuned the technology so that it could be used to identify people affiliated with George Soros and the Open Society Foundations. So specifically, they were trying to sell the technology to an authoritarian government to identify and keep out people affiliated with civil liberties causes. It was very disturbing.

But now Hoan Ton-That says that, you know, he is apolitical. He kind of says he doesn't hold those old views anymore. And, in fact, Clearview AI was used on January 6 when rioters stormed the Capitol. The FBI had photos of all these people because many of them were filming themselves on social media and posting photos online, and they weren't wearing masks. And so many police departments started running their photos through Clearview AI to identify them.

GROSS: You know, I can't help but wonder, even if this technology is regulated, what's the likelihood it's going to escape into the wild anyway? And what I'm thinking of specifically is something you write about - I think it was a potential investor who was given this technology so he could better understand it, and he let his daughter play with it, and she played with it with her friends. So if a potential investor in the company, who has been pitched all about it and knows what the boundaries are supposed to be, lets his daughter use it and share it with friends, what does that say about the potential of this technology, no matter how controlled it is, getting into hands it's not supposed to be in?

HILL: So Clearview AI, yes, in its early days was used by all of these investors; even celebrities were using the technology. Joe Montana at one point emailed Hoan Ton-That because he wanted access to help him remember people's names when he met them. The thing is, Clearview AI, because of all the blowback, because it has faced such public scrutiny, is limiting its technology to police and security use. But, you know, as we were talking about earlier, there are other people who can do what Clearview AI has done, and they have. There is a public face search engine right now called PimEyes. It does not have as robust a database as Clearview AI - it hasn't scraped as many sites, hasn't scraped social media sites, hasn't accumulated as many photos.

But yeah, I mean, I could upload your face right now to PimEyes, and I would get results - photos of you, potentially, along with links to where they appear. And, you know, I have run PimEyes on myself. It pulls up many photos of me, not as many as Clearview AI does. I ran it on my then-5-year-old daughter and had a hit - something I'd forgotten, a photo of her on the internet. PimEyes does let you ask for results to be removed, and I did for my own daughter. I mean, the cat is very much getting out of the bag, and part of why I wrote this book right now is that we need to figure out what we want, or this will become very widespread.

GROSS: Kashmir Hill, thank you so much for your reporting and your new book. I hope this isn't really the end of privacy as we know it, but... (laughter).

HILL: Thank you, Terry. And I do think there is hope for privacy.

GROSS: Oh, good to hear. OK. Thanks so much.

Kashmir Hill is a tech reporter for The New York Times and author of the new book "Your Face Belongs To Us." If you'd like to catch up on FRESH AIR interviews you missed, like our interviews with Leslie Jones, who has a new memoir, or Kerry Washington, who has a new one too, or songwriter, singer and musician Allison Russell, check out our podcast. You'll find lots of FRESH AIR interviews. And if you haven't already subscribed to our free newsletter, give it a shot. It will give you something enjoyable to read about our show and the people who produce it. You'll get it in your mailbox every Saturday morning. You can subscribe at whyy.org/freshair.

FRESH AIR's executive producer is Danny Miller. Our technical director and engineer is Audrey Bentham. Our interviews and reviews are produced and edited by Amy Salit, Phyllis Myers, Roberta Shorrock, Ann Marie Baldonado, Sam Briger, Lauren Krenzel, Heidi Saman, Therese Madden, Seth Kelley and Susan Nyakuindi. Our digital media producer is Molly Seavy-Nesper. Thea Chaloner directed today's show. Our co-host is Tonya Mosley. I'm Terry Gross. Transcript provided by NPR, Copyright NPR.
