European Union lawmakers gave final approval to the 27-nation group's artificial intelligence law Wednesday. The rules are expected to take effect later this year.
Lawmakers in the European Parliament voted in favor of the Artificial Intelligence Act, five years after regulations were first proposed.
Major technology companies have generally supported the idea. But they want to make sure new AI requirements work in their favor. OpenAI chief Sam Altman suggested the maker of ChatGPT might pull out of Europe if it cannot comply with the AI Act. He later said his company had no plans to leave.
Here are some details about Europe's new AI rules:
How does the AI Act work?
Like many EU regulations, the AI Act started as consumer safety legislation. The EU took a "risk-based approach" to products or services that use artificial intelligence (AI).
If an AI application is risky, then more rules cover it. Most AI systems are expected to be low risk, like content recommendation systems or filters that block spam, or unwanted email. Companies can choose to follow voluntary requirements and codes of conduct.
High-risk uses of AI include tools used in medical devices or important infrastructure like water or electrical networks. Those face additional requirements, like using what the legislation calls high-quality data and providing clear information to users.
Some AI uses are banned because they are considered to present an unacceptable risk. Those include things like social scoring systems that are meant to govern how people behave. Some kinds of predictive policing are also banned, as are emotion recognition systems in schools and workplaces.
Other banned uses include police scanning of faces in public places using AI-powered remote "biometric identification" systems. There is an exception for use in serious crimes like kidnapping or terrorism.
What about generative AI?
The law's early versions centered on AI systems that carry out limited tasks, like reviewing employment information and job applications. But general AI models, like OpenAI's ChatGPT, forced EU officials to add rules for generative AI models. AI chatbot systems that can produce lifelike responses, images and more are examples of generative AI models.
Developers of general purpose AI models will have to provide detailed descriptions of the writings, pictures, video and other data on the internet that was used to train the systems. They must also follow EU copyright law.
AI-generated pictures, video or audio of existing people, places or events must be labeled as artificially produced. These sorts of media are known as "deepfakes" because they appear to show real people doing or saying things that are not real.
There are extra rules for the biggest and most powerful AI models that carry "systemic risks." Those include OpenAI's GPT-4 and Google's Gemini.
What do Europe's rules mean?
The EU first suggested AI regulations in 2019. Europe was quick to propose rules for the new and developing industry.
In the U.S., President Joe Biden signed an executive order on AI in October. The U.S. Congress is likely to propose legislation. Lawmakers in at least seven U.S. states are working on their own AI legislation. And international agreements are possible too.
Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI. Other major countries, including Brazil and Japan, are developing rules, as are the United Nations and the Group of Seven industrialized nations.
What happens next?
The AI Act is expected to officially become law by May or June, after approval from EU member countries. Rules will start taking effect slowly. Countries will be required to ban unapproved AI systems six months after the law takes effect.
Rules for general purpose AI systems like chatbots will start going into effect in one year. By the middle of 2026, the complete set of regulations, including requirements for high-risk systems, will be in effect.
Each EU country will set up its own AI enforcement agency. Citizens can make a complaint if they think they have been the victim of a violation of the rules. And the EU will create an AI Office that will oversee the law for general purpose AI systems.
Violations of the AI Act could be punished with a fine of up to $38 million, or seven percent of a company's worldwide revenue.
I'm Dan Novak.
Dan Novak adapted this story for VOA Learning English based on reporting by The Associated Press.
_______________________________________
Words in This Story
regulation — n. a rule or a group of rules that control how an industry can carry out business
comply — v. to do what is requested to be done or what is required by rules or law
consumer — n. a person or group that buys goods and services that are not for industrial purposes
filter — n. something that permits only what is wanted through while blocking unwanted things
codes of conduct — n. a set of rules that a group of people or businesses agrees to, usually voluntarily
infrastructure — n. the structures and systems that are needed for modern life like roads, electricity lines, dams and many other things
scan — v. to use a camera to take a picture of a group of people to find a certain person
biometric — adj. related to taking measurements of the human body to confirm someone's identity
artificial — adj. made by people, not happening naturally; something that is not real
complaint — n. an official statement of dissatisfaction that is presented to a public official with the expectation that the problem will be dealt with
revenue — n. money collected by a business through sales, investment and other operations