Trend Micro Inc.

09/30/2024 | News release

AI Pulse: What's new in AI regulations


AI marches on to war

Most AI regulations are about preventing AI systems from doing harm. In war, the calculus is trickier: how to ensure AI-based weapons do only the right kind of harm. A recent New York Times op-ed argued the world isn't ready for the implications of AI-powered weapons systems, describing how Ukrainian forces had to abandon tanks due to kamikaze drone strikes, a harbinger of "the end of a century of manned mechanized warfare as we know it." (Now the Ukrainians send unmanned tanks.)

These issues were on the minds of military and diplomatic leaders who took part in the second REAIM Summit this September in South Korea. (REAIM stands for Responsible AI in the Military Domain.) The meeting yielded a Blueprint for Action outlining 20 principles for military use of AI, including that "Humans remain responsible and accountable for [AI] use and [the] effects of AI applications in the military domain, and responsibility and accountability can never be transferred to machines."

Not all countries supported the blueprint, prompting a provocative headline in the Times of India: "China refuses to sign agreement to ban AI from controlling nuclear weapons." The truth is more nuanced, but REAIM does underscore the vital importance of world powers agreeing on how AI weapons will be used.

CoSAI-ing up to make AI safe
The OASIS Open standards organization spun up the Coalition for Secure AI (CoSAI) this past summer as a forum for technology industry members to work together on advancing AI safety. Specific goals include ensuring trust in AI and driving responsible development by creating systems that are secure by design.

Other groups are also spotlighting best practices that industry and AI users alike can rely on for AI safety with or without legislation in place. A prime example is the Top 10 Checklist released by the Open Worldwide Application Security Project (OWASP) earlier this year, which outlines key risks associated with large language models (LLMs) and how to mitigate them.

One current top-of-mind concern for many observers is the deceptive use of AI in elections, especially with the U.S. Presidential campaigns speeding toward the finish line. Back in March, nearly two dozen companies signed an accord to combat the deceptive use of AI in 2024 elections, including Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Truepic, X, and Trend Micro. It's another example of the power of (and need for) collective action on AI safety.

AI Threat Trends

DOJ nabs Russian Doppelganger domains
On September 4, the U.S. Department of Justice announced its seizure of 32 internet domains being used to "covertly spread Russian government propaganda with the aim of reducing international support for Ukraine, bolstering pro-Russian policies and interests, and influencing voters in U.S. and foreign elections...." Those activities were all part of an influence campaign dubbed 'Doppelganger' that, according to the DOJ, violated U.S. money laundering and criminal trademark laws.

U.S. authorities remain on high alert against disinformation, manipulation, and the deceptive use of AI to skew November's Presidential election results. According to Fox News, U.S. Attorney General Merrick Garland is also taking aim at Russia's state-controlled Russia Today (RT) media outlet, which Meta announced it was banning from Facebook and Instagram on September 17 due to alleged foreign interference.

"Let me check my riskopedia..."
This August, MIT launched a public AI Risk Repository to map and catalogue the ever-growing AI risk landscape in an accessible and manageable way. The current version enumerates more than 700 risks based on more than 40 different frameworks and includes citations as well as a pair of risk taxonomies: one causal (indicating when, how and why risks occur) and the other based on seven primary domains including privacy and security, malicious actors, misinformation, and more.

MIT says the repository will be updated regularly to support research, curricula, audits, and policy development and give the full range of interested parties a "common frame of reference" for talking about AI-related risks.

Grok AI feeds on X user data for smart-aleck 'anti-woke' outputs
X's Grok AI was developed to be an AI search assistant with fewer guardrails and less 'woke' sensitivity than other chatbots. While decidedly sarcastic, it has turned out to be more open-minded than some might have hoped, and controversial for a whole other reason. This summer it surfaced that X was automatically opting in users to have their data train Grok. That raised the ire of European regulators and criticism from folks like NordVPN CTO Marijus Briedis, who told WIRED the move has "significant privacy implications," including "[the] ability to access and analyze potentially private or sensitive information... [and the] capability to generate images and content with minimal moderation."

AI Predictions
What's Next for AI Model Building

AI is heading for a major data drought