RE:WIRED 2021: Timnit Gebru Says Artificial Intelligence Must Slow Down


Artificial intelligence researchers face an accountability problem: how do you ensure that decisions are responsible when the decision maker is not a responsible person but an algorithm? At present, only a handful of individuals and organizations have the power, and the resources, to automate decision-making.

Organizations rely on algorithms to approve a loan or shape a defendant’s sentence. But the foundations on which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. That is the reality AI researcher Timnit Gebru warned against at a RE:WIRED talk on Tuesday.

“There are companies purporting [to assess] someone’s likelihood of committing a crime again,” Gebru said. “That was terrifying for me.”

Gebru was a star engineer at Google who specialized in AI ethics. She led a team tasked with guarding against algorithmic racism, sexism, and other biases. Gebru also founded the nonprofit Black in AI, which seeks to improve the inclusion, visibility, and well-being of Black people in her field.

Last year, Google forced her out. But she hasn’t stopped fighting to prevent unintended harm from machine learning algorithms.

On Tuesday, Gebru spoke with WIRED senior writer Tom Simonite about AI research incentives, the role of worker protections, and the vision for her planned independent institute for ethical and responsible AI. Her central point: AI needs to slow down.

“We don’t have time to think about how it should even be built, because we’re always putting out fires,” she said.

As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to recognize America’s racial dissonance. Lectures referred to racism in the past tense, but that didn’t square with what she saw, Gebru told Simonite earlier this year. She has found similar misalignment throughout her tech career.

Gebru’s professional career began in hardware. But she changed course when she saw the barriers to diversity, and she began to suspect that much of AI research has the potential to harm already marginalized groups.

“Encountering that took me in a different direction, which is to try to understand, and try to limit, the negative societal effects of AI,” she said.

For two years, Gebru led Google’s Ethical AI team alongside computer scientist Margaret Mitchell. The team built tools to protect Google’s product teams against AI mishaps. Over time, however, Gebru and Mitchell realized they were being left out of meetings and email threads.

In June 2020, the GPT-3 language model was released, demonstrating an ability to produce sometimes coherent prose. But Gebru’s team worried about the excitement surrounding it.

“We’re going to build bigger, and bigger, and bigger language models,” Gebru said, recalling the popular sentiment. “We had to be like, ‘Please let’s just stop and calm down for a second so we can think about the benefits, and the harms, and maybe alternative ways of doing this.’”

Her team helped write a paper on the ethical implications of language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Some at Google were not happy. Gebru was asked to retract the paper or remove the names of Google employees. She responded with a request for transparency: who was demanding such drastic action, and why? Neither side budged. Gebru learned from one of her direct reports that she had “resigned.”


