Microsoft wants Congress to pass “a comprehensive deepfake fraud statute” to prevent AI-generated scams and abuse

Microsoft recommends a new legal framework to protect the public from abusive AI-generated content.

What you need to know

As generative AI tools like Microsoft Copilot and OpenAI's ChatGPT become more advanced and sophisticated, cases of deepfake AI-generated content flooding the internet continue to rise (see Elon Musk this week). Aside from the security and privacy issues dogging the technology's progress, the prevalence of deepfakes continues to erode the authenticity of content surfacing online, making it difficult for users to determine what's real.

Bad actors use AI to generate deepfakes for fraud, abuse, and manipulation, and a lack of comprehensive regulations and guardrails has contributed to deepfakes becoming widespread. However, Microsoft Vice Chair and President Brad Smith recently outlined new measures the company intends to use to protect the public from deepfakes.

Smith says Microsoft and other key players in the industry have been focused on ensuring that AI-generated deepfakes aren’t used to spread misinformation about the forthcoming US Presidential election.

While the company seemingly has a firm grasp on this front, the top exec says more can be done to prevent the widespread use of deepfakes in crime. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans,” added Smith.

Smith also argues that Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content, saying this is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.

In the same breath, Smith wants policymakers to ensure that federal and state laws protecting children from sexual exploitation and abuse, and people from non-consensual intimate imagery, are updated to cover AI-generated content as the technology becomes more prevalent and advanced.

It's all a work in progress

Previously, Microsoft CEO Satya Nadella pointed out that there's enough technology to protect the forthcoming US presidential elections from AI deepfakes and misinformation. This is despite several reports highlighting Copilot AI's shortcomings after the tool was spotted generating false information about the elections.

Following explicit AI-generated images of pop star Taylor Swift surfacing online, the Senate passed a bill addressing the issue. Under the bill, people depicted in explicit AI-generated content have grounds to sue for damages.

On the other hand, OpenAI rolled out a new strategy designed to help users identify AI-generated content. Images generated with ChatGPT and DALL-E 3 now carry watermark metadata, though the startup admits it's "not a silver bullet to address issues of provenance." OpenAI also announced that it was working on a tool to help identify AI-generated images and promises 99.9% accuracy.
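For readers curious what that watermarking looks like in practice: OpenAI embeds C2PA Content Credentials metadata in generated images. The sketch below is a minimal heuristic in Python, using only the standard library, that checks whether a file contains the "c2pa" manifest label. It is an illustrative assumption-level check, not a real validator: it only detects the presence of the metadata and does not cryptographically verify it.

```python
# A minimal heuristic sketch, not a full C2PA validator: scan an image
# file's bytes for the "c2pa" JUMBF label that Content Credentials
# metadata (the scheme OpenAI uses for DALL-E 3 images) carries.
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Return True if the file appears to contain a C2PA manifest label.

    This does not cryptographically verify the manifest, and it can
    false-positive if the byte sequence occurs by chance. Because the
    metadata is easily stripped, a negative result proves nothing
    about an image's origin.
    """
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifest stores live in JUMBF boxes labeled "c2pa"
    # (carried in APP11 segments in JPEG files).
    return b"c2pa" in data

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "C2PA metadata found" if has_c2pa_manifest(image_path) else "no C2PA metadata"
        print(f"{image_path}: {verdict}")
```

A production check would parse and verify the manifest with a proper C2PA SDK rather than a byte scan, which is exactly why OpenAI calls the metadata "not a silver bullet": anything that can be detected this easily can be removed just as easily.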
