UN Adopts Resolution for 'Secure, Trustworthy' AI
The United Nations on Thursday adopted a resolution concerning responsible use of artificial intelligence, with unclear implications for global AI security. The US-drafted proposal — co-sponsored by 120 countries and accepted without a vote — focuses on promoting "safe, secure and trustworthy artificial intelligence," a phrase it repeats 24 times in the eight-page document. The move signals an awareness of the pressing issues AI poses today — its role in disinformation campaigns and its ability to exacerbate human rights abuses and inequality between and within nations, among many others — but falls short of requiring anything of anyone, and only makes general mention of cybersecurity threats in particular.

"You need to get the right people to table and I think this is, hopefully, a step in that direction," says Joseph Thacker, principal AI engineer and security researcher at AppOmni. Down the line, he believes "you can say [to member states]: 'Hey, we agreed to do this. And now you're not following through.'"

The most direct mention of cybersecurity threats from AI in the new UN resolution can be found in its subsection 6f, which encourages member states in "strengthening investment in developing and implementing effective safeguards, including physical security, artificial intelligence systems security, and risk management across the life cycle of artificial intelligence systems." Thacker highlights the choice of the term "systems security." He says, "I like that term, because I think that it encompasses the whole [development] lifecycle and not just safety." Other suggestions focus more on protecting personal data, including "mechanisms for risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate," both during the testing and evaluation of AI systems and post-deployment.
This latest UN resolution follows stronger actions taken by Western governments in recent months. The European Union led the way with its AI Act, which prohibits certain uses of the technology — such as creating social scoring systems and manipulating human behavior — and imposes penalties for noncompliance that can reach millions of dollars, or a substantial share of a company's annual revenue. The Biden White House also made strides with an Executive Order last fall, prompting AI developers to share critical safety information, develop cybersecurity programs for finding and fixing vulnerabilities, and prevent fraud and abuse — covering everything from disinformation media to terrorists using chatbots to engineer biological weapons.
Source:
https://www.helpnetsecurity.com/2024/03/25/blockfi-ftx-phishing/