The Department of Defense issues AI ethics guidelines for technology contractors


The purpose of the guidelines is to ensure that tech contractors stick to the DoD's existing ethical principles for AI, said Goodman. The DoD announced these principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, the director of the Computer Science and Artificial Intelligence Lab at MIT.

Yet some critics question whether the work promises any meaningful reform.

During the study, the board consulted a range of experts, including vocal critics of the military's use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, who is now faculty director at the AI Now Institute at New York University, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. "She was never meaningfully consulted," said Holsworth. "Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given outcome has broad buy-in from relevant stakeholders."

If the DoD lacks broad buy-in, can its guidelines still help build trust? "There are people who will never be satisfied by any set of ethics guidelines that the DoD produces, because they find the very idea paradoxical," said Goodman. "It's important to be realistic about what guidelines can and can't do."

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that regulations governing such technology are decided higher up the chain. The aim of the guidelines is to make it easier to build AI that complies with those regulations. And part of that process is to make explicit any concerns that third-party developers have. "A valid application of these guidelines is to decide not to pursue a particular system," said Jared Dunnmon of the DIU, who co-authored them. "You can decide it's not a good idea."
