Americans need a Bill of Rights for an AI-powered world


Over the past ten years, data-driven technologies have transformed the world around us. We have seen what becomes possible when vast amounts of data are gathered and artificial intelligence is trained to interpret them: computers that learn to translate languages, facial recognition systems that unlock our smartphones, algorithms that identify cancers in patients. The possibilities are enormous.

But these new tools also bring serious problems. What machines learn depends on many things, including the data used to train them.

Data sets that fail to represent American society can produce virtual assistants that do not understand Southern accents, facial recognition technology that leads to wrongful and discriminatory arrests, and health care algorithms that discount the severity of kidney disease in African Americans, preventing people from getting kidney transplants.

Training machines on past examples can embed old prejudice and enable present-day discrimination. Hiring tools that learn the characteristics of a company's current staff may reject applicants who do not resemble that staff even when they are well qualified, such as women computer programmers. Loan approval algorithms that determine creditworthiness can readily infer that certain home zip codes correlate with race and poverty, extending decades of housing discrimination into the digital age. AI can recommend medical support for the groups that use hospital services most often rather than those who need them most. And AI trained indiscriminately on internet conversations can produce "sentiment analysis" that treats the words "Black," "Jewish," and "gay" as negative.

These technologies also raise questions about privacy and transparency. When we ask a smart speaker to play a song, does it record what our children say? When a student takes an exam online, should a webcam monitor and track their every move? Do we have a right to know why we were denied a home loan or a job interview?

In addition, AI can be deliberately abused. Some autocracies use it as a tool of state-sponsored oppression, division, and discrimination.

In the United States, some failures of AI may be unintentional, but they are serious and they disproportionately affect already marginalized individuals and communities. They often result from AI developers not using appropriate data sets, not auditing systems comprehensively, and not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that cannot be fixed).

In a competitive marketplace, it may seem easier to cut corners. But it is unacceptable to build AI systems that will harm many people, just as it is unacceptable to make pharmaceuticals and other products, whether cars, children's toys, or medical devices, that will harm many people.

Americans have a right to expect better. Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created, enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a "bill of rights" to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. Exactly what those are will require discussion, but here are a few possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that has not been carefully audited to ensure it is accurate, unbiased, and trained on sufficiently representative data; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.

Of course, enumerating these rights is only a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this "bill of rights," and adopting new laws and regulations to fill gaps. States might choose to take similar approaches.


