The United Nations (UN) General Assembly has adopted a resolution on artificial intelligence, the latest in a series of government initiatives worldwide seeking to shape the technology's development. Led by the United States and co-sponsored by 123 countries, including China, the resolution was adopted by consensus on March 21st, signaling support from all 193 UN member states.
UN’s Move on Artificial Intelligence
The resolution encourages countries to protect human rights, safeguard personal data, and monitor AI systems for risks. Although, like many UN initiatives, it lacks enforcement power, concerns persist about the technology's potential to disrupt democratic processes, turbocharge fraud, or cause dramatic upheaval in key job sectors. The resolution states:
“There are risks that the improper or malicious design, development, deployment, and use of artificial intelligence systems can undermine the protection, development, and exercise of human rights and fundamental freedoms.”
Unlike Security Council resolutions, General Assembly resolutions are not legally binding; they instead serve as a barometer of global opinion. The resolution calls on member states and other stakeholders to promote regulatory frameworks for safe artificial intelligence systems.
The resolution also aims to close the digital divide between wealthy countries and poorer developing nations, and to ensure the latter a seat at the table in discussions about artificial intelligence. It further seeks to equip developing countries with the technology and skills needed to reap AI's benefits in areas such as disease detection, flood prediction, agricultural support, and workforce training.
What’s Happening in the Field of Artificial Intelligence?
In November, the United States, the United Kingdom, and more than a dozen other countries unveiled a detailed international agreement on keeping artificial intelligence safe from rogue actors. The agreement urges technology companies to build AI systems that are secure by design. The UN resolution, for its part, warns against designing, developing, deploying, or using AI systems without adequate safeguards or in a manner inconsistent with international law.
Meanwhile, major technology firms have generally endorsed the need for AI regulation while lobbying to ensure any rules work in their favor. European Union lawmakers, for their part, gave final approval on March 13th to the world's first comprehensive AI regulations. After a few remaining procedural steps, the rules are expected to enter into force by May or June.
The EU rules ban a range of applications, including biometric surveillance, social scoring systems, predictive policing, emotion recognition, and indiscriminate facial recognition. The White House, for its part, issued an executive order in October aimed at strengthening national security while reducing AI risks to consumers, workers, and minorities.