Europe proposes strict regulation of artificial intelligence

On Wednesday, the European Union unveiled strict regulations to govern the use of artificial intelligence, the first policy of its kind, outlining how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would impose restrictions on the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections, and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems, areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major tech companies, including Amazon, Google, Facebook, and Microsoft, which have poured resources into developing artificial intelligence, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies, and assess creditworthiness. Governments have used versions of the technology in criminal justice and in allocating public services such as income support.

Companies that violate the new regulations, which could take several years to move through the European Union’s policy-making process, could face fines of up to 6 percent of global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” said Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The EU regulations would require companies providing artificial intelligence in high-risk areas to supply regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight in how the systems are created and used.

Some software, such as chatbots that provide humanlike conversation in customer service situations and programs that create hard-to-detect manipulated images such as deepfakes, would have to make clear to users that what they are seeing is computer-generated.

For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, and its policies are often used as blueprints by other nations. The bloc has already enacted the world’s most far-reaching data-privacy regulations and is debating additional antitrust and content-moderation laws.

But Europe is no longer alone in pushing for tougher oversight. The largest technology companies are now facing a broader reckoning from governments around the world, each with its own political motivations for curbing the industry’s power.

In the United States, President Biden has filled his administration with industry critics. Britain is creating a regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent.

The outcome over the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services, or online freedoms depending on where they are.

Artificial intelligence, in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data, is seen by technologists, business leaders, and government officials as one of the world’s most transformative technologies, promising major gains in productivity.

But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could worsen as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy, or lead to more jobs being automated.

The release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.

“There has been a lot of discussion over the last few years about what it would mean to regulate AI, and the fallback option up to now has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”

Kind said many were concerned that the policy was overly broad and left too much discretion for companies and technology developers to regulate themselves.

“If it does not lay down strict red lines and guidelines and very firm boundaries about what is acceptable, it opens up a lot for interpretation,” she said.

The development of artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, the co-leader of a Google team studying the ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.

In the United States, government agencies are also weighing the risks of artificial intelligence.

This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms or that could “deny people employment, housing, credit, insurance or other benefits.”

Elsewhere, in Massachusetts and in cities including Oakland, California; Portland, Oregon; and San Francisco, governments have taken steps to restrict police use of facial recognition.
