
Trend Micro Inc. | News release | October 31, 2024

AI Pulse: Election Deepfakes, Disasters, Scams & more

In a classic 1980s science-fiction film, a futuristic detective faces the challenge of identifying advanced cyborgs that are indistinguishable from humans. His mission: to find these beings.

Critics have warned from the get-go that AI will eventually pose the same challenge. As it becomes more sophisticated, it will be harder and harder for people or machines to tell if a document, image, or recording is real or AI-generated.

Some AI deepfakes are already difficult to detect. In September, the Chair of the U.S. Senate Foreign Relations Committee booked a video conference with someone he thought was a legitimate, known contact in Ukraine. But the email he'd received was fake, and the video call, which seemed to feature the real foreign official, was also an AI scam. When the conversation veered into "politically charged" territory, the Chair and his team realized something was up and pulled the plug.

We are all targets
Public figures are by no means the only ones vulnerable to synthetic media scams. Trend Micro data shared with Dark Reading this past summer showed that 80% of consumers had seen deepfake images, 64% had seen deepfake videos, and just over a third (35%) had been personally exposed to deepfake scams.

Training people to be aware of deepfakes and other AI-generated threats is clearly essential. But as Trend's Shannon Murphy points out, humans can't see down to the pixel level. Technology-based tools are also a must: to make AI-generated content identifiable, and to detect it when it doesn't identify itself.

Getting AI to show itself
On the 'AI identifier' side of the question, one commonly promoted technique is the use of digital watermarks: machine-detectable patterns embedded in AI-generated content. The Brookings Institution notes that these watermarks are effective but not invulnerable to tampering, and can be hard to standardize while maintaining trust.
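To make the watermark idea concrete, here is a minimal sketch (Python with NumPy) that embeds a machine-readable tag in an image's least-significant bits. It is a deliberately simple illustration, not any vendor's actual scheme; production AI watermarks use far more robust statistical patterns, and the WATERMARK tag and function names here are hypothetical.

```python
import numpy as np

WATERMARK = "AI-GEN"  # hypothetical tag; real schemes embed robust statistical patterns

def embed_lsb(pixels: np.ndarray, tag: str = WATERMARK) -> np.ndarray:
    """Hide a tag in the least significant bits of an 8-bit grayscale image."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # replace each pixel's lowest bit
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, length: int = len(WATERMARK)) -> str:
    """Read `length` bytes of tag back out of the least significant bits."""
    bits = pixels.ravel()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
marked = embed_lsb(image)
print(extract_lsb(marked))  # -> "AI-GEN"
```

The sketch also illustrates the tampering caveat: because the tag lives in the lowest bits, re-encoding, resizing, or cropping the image silently destroys it, which is why real schemes spread a redundant signal across the whole image.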

Microsoft is putting something along these lines into practice with Content Credentials, a way for creators and publishers to authenticate their work cryptographically and use metadata to certify who made something, when, and whether AI was involved. The Content Credentials regime conforms to the C2PA technical standard and can be used with photos, video, and audio content.
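The core idea behind such a credential can be sketched in a few lines: hash the media, wrap the hash in a metadata claim, and sign the claim so that anyone holding the creator's public key can verify both authorship and integrity. The sketch below (Python with the pyca/cryptography package) is conceptual only; the actual C2PA standard defines a binary manifest format embedded in the asset and relies on X.509 certificate chains, and the field names here are assumptions.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical creator key; real Content Credentials rely on X.509 certificate
# chains, and the signed manifest is embedded in the asset per the C2PA spec.
creator_key = Ed25519PrivateKey.generate()

content = b"...image bytes..."  # stand-in for the actual media file
manifest = {
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "creator": "example-newsroom",          # assumed field names, not C2PA's
    "created": "2024-10-31T13:28:00Z",
    "ai_generated": False,
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = creator_key.sign(payload)

# A verifier holding the creator's public key re-checks signature and hash;
# verify() raises InvalidSignature if the manifest or media was tampered with.
creator_key.public_key().verify(signature, payload)
assert manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
```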

OpenAI is concentrating more heavily on the AI detection part of the puzzle. According to VentureBeat, the company's GPT-4o is designed to identify and stop deepfakes by detecting content from generative adversarial networks (GANs), performing audio and video anomaly detection, authenticating voices, and checking that audio and visual media components match up: for example, that mouth movements and breaths correspond to what appears onscreen in video.
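One such consistency check is easy to illustrate. The toy sketch below is not OpenAI's method; it assumes upstream models have already produced a per-frame mouth-openness signal and an audio loudness envelope, then flags footage where the two don't correlate. The threshold is arbitrary.

```python
import numpy as np

def av_sync_score(mouth_openness: np.ndarray, audio_rms: np.ndarray) -> float:
    """Pearson correlation between per-frame mouth movement and audio loudness.

    Genuine talking-head footage tends to show mouth motion tracking speech
    energy; a low correlation is one (weak) signal of synthetic or dubbed video.
    """
    m = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    a = (audio_rms - audio_rms.mean()) / audio_rms.std()
    return float(np.mean(m * a))

# Stand-in signals; in practice these would come from a face tracker and the
# audio track of the clip under inspection.
rng = np.random.default_rng(0)
speech = np.abs(rng.normal(size=300))                  # audio loudness per frame
genuine_mouth = speech + 0.3 * rng.normal(size=300)    # mouth tracks the audio
mismatched_mouth = np.abs(rng.normal(size=300))        # mouth unrelated to audio

THRESHOLD = 0.5  # arbitrary cutoff for this sketch
for label, mouth in [("genuine", genuine_mouth), ("mismatched", mismatched_mouth)]:
    score = av_sync_score(mouth, speech)
    print(f"{label}: sync={score:.2f} -> {'ok' if score >= THRESHOLD else 'flag'}")
```

A real detector would combine many such signals rather than trust any single one, which is exactly the defense-in-depth point below.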

The only real answer is defense in depth
Deepfakes and other AI threats are going to continue to challenge our senses and assail our institutions. Vigilant humans, AI identifiers, and analytical AI detection technologies are all key defenses, but none of these is perfect, meaning still more needs to be done. Zero-trust models are also essential to orient organizations and processes around protecting themselves: a "trust nothing, verify everything" stance that weighs the risks before acting on any digital content.

Combining all of the above with legal and regulatory guardrails will provide true defense in depth and our best possible protection against AI-generated threats.

More perspective from Trend Micro

Check out these additional resources: