The story of AI, as told by the people who invented it


Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features the stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable facial recognition system.

Credits:

This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens, with help from Lindsay Muscato. It was edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: I’m Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on behind the scenes here for a while.

It’s called I Was There When.

It’s an oral history project featuring the stories of how breakthroughs in artificial intelligence and computing happened… as told by the people who witnessed them.

Joseph Atick: And when I walked into the room, it spotted my face, extracted it from the background, and it said: “I see Joseph.” And that was the moment where the hair on the back of my neck… I felt like something had happened. We were witnesses.

Jennifer: We’re kicking things off with someone who helped create the first facial recognition system that could be commercialized… back in the ’90s…

[IMWT ID]

I am Joseph Atick. Today, I’m the executive chairman of ID4Africa, a humanitarian organization that aims to give people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my PhD in mathematics, my colleagues and I made some fundamental breakthroughs, which led to the first commercially viable facial recognition. That’s why people refer to me as a founding father of face recognition and of the biometric industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But it was far from having an idea of how you would implement such a thing.

It was a long period of months of programming and failure and programming and failure. And one night, early in the morning, actually, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out; I stepped out to go to the washroom. And then when I stepped back into the room, the source code had been compiled by the machine and returned. And typically, after you compile it, it runs automatically. And as I walked into the room, it spotted a human moving in, it spotted my face, extracted it from the background, and it said: “I see Joseph.” And that was the moment where the hair on the back of my neck… I felt like something had happened. We were witnesses. And I started calling on the other people who were still in the lab, and each one of them would come into the room.

And it would say, “I see Norman. I see Paul, I see Joseph.” And we would take turns running around the room just to see how many it could spot. It was a moment of truth, where I would say several years of work had finally led to a breakthrough, even though theoretically, no additional breakthrough was required. Just the fact that we had figured out how to implement it, and finally saw that capability in action, was very, very rewarding and satisfying. We developed a team, which was more of a development team than a research team, focused on putting all of those capabilities onto a PC platform. And that was the birth, the true birth, of commercial face recognition, I would put it, in 1994.

My concern started very quickly. I saw a future where there was no place to hide, with the proliferation of cameras everywhere, the commoditization of computers, and the processing abilities of computers becoming better and better. And so in 1998, I lobbied the industry and I said, we need to put together principles for responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible-use code to be followed, whatever the implementation was. However, that code did not survive the test of time. And the reason behind it is we did not anticipate the emergence of social media. Basically, at the time we established the code in 1998, we said the most important element in a facial recognition system was the tagged database of known people. We said, if I’m not in the database, the system will be blind.

And it was difficult to build the database. At most we could build thousands: 10,000, 15,000, 20,000, because each image had to be scanned and entered by hand. In the world we live in today, we are now in a regime where we have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. We are now in a world where any hope of controlling face recognition, and requiring everybody to be accountable for their use of it, is difficult. And at the same time, there is no shortage of known faces on the internet, because you can just scrape them, as has happened recently with some companies. And so I began to panic in 2011, and I wrote an op-ed saying it was time to press the panic button, because the world was heading in a direction where face recognition would be omnipresent and faces would be available everywhere in databases.

And at the time people said I was an alarmist, but today they realize that this is exactly what is happening. So where do we go from here? I’ve been lobbying for legislation. I’ve been lobbying for legal frameworks that put liability on you if you use somebody’s face without their consent. And so it’s no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we think is acceptable.

The issue of consent continues to be one of the most difficult and challenging matters when it comes to technology; just giving somebody notice doesn’t mean it’s enough. To me, consent has to be informed. People need to understand the consequences of what it means. And not just to say, well, we put up a sign and that was enough. We told people, and if they didn’t want to, they could have gone elsewhere.

And I also find that it is so easy to be seduced by the many forms of technology that can give us a short-term advantage in our lives. And then down the line, we recognize that we’ve given up something too precious. And by that point, we have desensitized the population, and we get to a point where we cannot pull back. That’s what worries me. I’m worried about the fact that face recognition, through the work of Facebook and Apple and others… I’m not saying all of it is illegitimate. A lot of it is legitimate.

We have arrived at a point where the general public may have become blasé, may have become desensitized, because they see it everywhere. And maybe in 20 years, you’ll step out of your house and you will no longer have the expectation that you won’t be recognized by dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and kidnapped. I think that’s a lot of responsibility on our hands.

And so I think the question of consent will continue to haunt the industry. And until that question is addressed, it may not be resolved. I think we need to establish limits on what can be done with this technology.

My career has also taught me that being too far ahead is not a good thing, because face recognition, as we know it today, was actually invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now proliferating around the world. In fact, at some point, I had to step down as a public CEO because I was curtailing the use of technology my company was going to promote, out of fear of negative consequences for society. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and the policymakers that this breakthrough has pluses and minuses. And therefore, in using this technology, we need some sort of guidance and frameworks to make sure it is channeled toward positive applications and not negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Do you know someone who does? Drop us an email at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020, and produced by me with help from Anthony Green and Emma Cillekens. We’re edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.

[TR ID]


