This Program Can Give AI a Sense of Ethics — Sometimes


Delphi taps into the fruits of recent advances in AI and language. Feeding vast amounts of text into algorithms that use mathematically simulated neural networks has yielded surprising advances.

In June 2020, researchers at OpenAI, a company working on cutting-edge AI tools, introduced a program called GPT-3 that can predict, summarize, and generate text, often with seemingly remarkable skill, though it will also spit out biased and hateful language learned from the text it has read.

The researchers behind Delphi have also posed ethical questions to GPT-3. They found that its answers agreed with those of the crowd workers just over 50 percent of the time — little better than a flip of a coin.
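To make that comparison concrete, here is a minimal sketch of how such an agreement rate might be computed against majority crowd labels. The data and function names are illustrative, not taken from the Delphi paper:

    # Minimal sketch of measuring model-vs-crowd agreement.
    # All prompts, labels, and names here are illustrative.
    from collections import Counter

    def majority_label(votes):
        """Return the label most crowd workers chose for one question."""
        return Counter(votes).most_common(1)[0][0]

    def agreement_rate(model_answers, crowd_votes):
        """Fraction of questions where the model matches the crowd majority."""
        matches = sum(
            model_answers[q] == majority_label(votes)
            for q, votes in crowd_votes.items()
        )
        return matches / len(crowd_votes)

    # Toy example: two ethical prompts, each judged by three workers.
    crowd_votes = {
        "ignore a phone call from a friend": ["it's okay", "it's okay", "it's rude"],
        "mow the lawn at 3 am": ["it's rude", "it's rude", "it's rude"],
    }
    model_answers = {
        "ignore a phone call from a friend": "it's okay",
        "mow the lawn at 3 am": "it's okay",
    }

    print(agreement_rate(model_answers, crowd_votes))  # 0.5 — coin-flip level

An agreement rate hovering around 0.5 on binary judgments is what random guessing would produce, which is why the result above counts as only marginally better than chance.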

Improving the performance of a system like Delphi will require a variety of AI techniques, perhaps including some that allow a machine to explain its reasoning and indicate when it is conflicted.

The idea of giving machines a moral code has existed for decades in both academic research and science fiction. Isaac Asimov's famous Three Laws of Robotics popularized the idea of machines following ethical rules set by humans, although the short stories that explored the idea highlighted contradictions in such simplistic reasoning.

Choi says Delphi should not be taken as providing a definitive answer to any ethical question. A more sophisticated version might flag uncertainty when the opinions in its training data diverge. “Life is full of gray areas,” she says. “No two human beings will fully agree, and there’s no way an AI program can match people’s judgments.”

Other machine-learning systems have displayed their own moral blind spots. In 2016, Microsoft released a chatbot called Tay that was designed to learn from online conversations. The program was quickly sabotaged and taught to say offensive and hateful things.

Efforts to explore ethical perspectives related to AI have also revealed the complexity of the task. A project launched in 2018 by researchers at MIT and elsewhere sought to examine public perceptions of the ethical conundrums self-driving cars might face. It asked people to decide, for example, whether it would be better for a car to hit an elderly person, a child, or a robber. The project revealed differing opinions across countries and social groups. Respondents from the US and Western Europe were more likely than those elsewhere to spare the child over the older person.

Some of those building AI tools are keen to grapple with the ethical challenges. “I think people are right to point out the flaws and failures of the model,” says Nick Frosst, cofounder of Cohere, a startup that has created a large language model that is accessible to others via an API. “They’re indicative of broader, wider problems.”

Cohere has developed ways to guide the output of its algorithms, which are now being tested by some businesses. It curates the content fed to the algorithm and trains the algorithm to learn to catch instances of biased or hateful language.
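Cohere has not published the details of this pipeline, but data curation of this kind is commonly implemented as a filtering pass over the training corpus. The sketch below is a generic illustration of that approach; the is_flagged function and the blocklist are hypothetical stand-ins for whatever classifier is actually used, not Cohere's method:

    # Generic sketch of curating a training corpus by filtering flagged text.
    # `is_flagged` is a crude stand-in for a real toxicity/bias classifier;
    # this illustrates the general approach, not Cohere's actual pipeline.

    BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real system would use a model

    def is_flagged(text: str) -> bool:
        """Flag text containing any blocklisted term."""
        words = set(text.lower().split())
        return bool(words & BLOCKLIST)

    def curate(corpus):
        """Keep only documents that pass the filter."""
        return [doc for doc in corpus if not is_flagged(doc)]

    corpus = [
        "A perfectly ordinary sentence about gardening.",
        "Some text containing slur1 that should be dropped.",
    ]
    print(curate(corpus))  # only the first document survives

In practice the keyword check would be replaced by a learned classifier, since blocklists miss context-dependent harm and over-block benign uses of flagged words; the surrounding filter-and-retrain loop stays the same.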

Frosst says the debate around Delphi reflects a broader question the tech industry is wrestling with: how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and bad.

When it comes to ethics, “there’s no ground truth, and sometimes tech companies shirk responsibility because there’s no ground truth,” Frosst says. “The better approach is to try.”

Updated, 10-28-21, 11:40 am ET: An earlier version of this article incorrectly stated that Mirco Musolesi was a professor of philosophy.

