The Struggle to Define When AI Is ‘High Risk’
EU leaders have insisted that tackling the ethical issues surrounding AI will lead to a more competitive market for AI products and services, increase AI adoption, and help the region compete with China and the United States. Regulators maintain that a high-risk label will motivate more professional and responsible business practices.
Business respondents said the draft law goes too far, with costs and rules that could stifle innovation. Meanwhile, many human rights, AI ethics, and anti-discrimination groups argued that the AI Act does not go far enough, leaving people vulnerable to powerful businesses and governments with the resources to deploy advanced AI systems. (The draft largely does not cover military uses of AI.)
Strictly (Mostly) Business
While some public comments on the AI Act came from individual EU citizens, responses came primarily from professional groups for radiologists and oncologists, trade unions for Irish and German educators, and major European businesses such as Nokia, Philips, Siemens, and the BMW Group.
American companies were also well represented, with comments from Facebook, Google, IBM, Intel, Microsoft, OpenAI, Twilio, and Workday. In fact, according to data collected by European Commission staff, the United States was the fourth-largest source of comments, after Belgium, France, and Germany.
Many companies expressed concern about the costs of the new regulation and questioned how their own AI systems would be labeled. Facebook wanted the European Commission to clarify whether the AI Act's mandate to ban subliminal techniques that manipulate people extends to targeted advertising. Equifax and MasterCard each argued against a blanket high-risk designation for any AI that judges a person's creditworthiness, claiming it would increase costs and decrease the accuracy of credit assessments. However, numerous studies have found instances of discrimination involving algorithms, financial services, and loans.
NEC, the Japanese facial recognition company, argued that the AI Act places an undue responsibility on providers of AI systems rather than on their users, and that the draft's proposal to label all remote biometric identification systems as high risk would carry high compliance costs.
One of the main points of contention in the draft is how it treats general-purpose or pretrained models capable of completing a range of tasks, such as OpenAI's GPT-3 or Google's experimental multimodal model MUM. Some of these models are open source, while others are proprietary creations sold to customers by cloud companies that possess the AI talent, data, and computing resources needed to train such systems. In a 13-page response to the AI Act, Google argued that it would be difficult or impossible for makers of general-purpose AI systems to comply with the rules.
Other companies working on multipurpose systems or artificial general intelligence, such as Google's DeepMind, IBM, and Microsoft, also suggested changes to account for AI that can carry out multiple tasks. OpenAI urged the European Commission to avoid banning general-purpose systems in the future, even if some of their use cases may fall into a high-risk category.
Businesses also want the AI Act's drafters to change definitions of critical terminology. Companies like Facebook argued that the draft uses overbroad terminology to define high-risk systems, resulting in overregulation. Others suggested more technical changes. For example, Google wants a new definition added to the draft that distinguishes between "deployers" of an AI system and the "providers," "distributors," or "importers" of AI systems. Doing so, the company argues, would place responsibility for modifications made to an AI system on the business or entity that makes the change rather than on the company that created the original. Microsoft made a similar recommendation.
The Costs of High-Risk AI
Then there is the question of how much a high-risk label will cost businesses.
A study by European Commission staff put the cost of compliance for a single AI project under the AI Act at nearly 10,000 euros and found that companies could expect initial overall costs of about 30,000 euros. As companies develop professional approaches and come to treat compliance as business as usual, they can expect those costs to fall to nearly 20,000 euros. The study used a model developed by Germany's Federal Statistical Office and acknowledged that costs can vary depending on a project's size and complexity. Since developers acquire and customize AI models, then embed them in their own products, the study concluded that a "complex ecosystem potentially involves a complex division of responsibilities."