Hany Farid, a professor at UC Berkeley who specializes in digital forensics but was not involved in Microsoft’s research, says that if the industry adopted the company’s model, it would be much harder to deceive the public with manipulated content. Savvy individuals or governments can work to circumvent these tools, he says, but the new standard could eliminate a significant portion of the misleading material.
“I don’t think it solves the problem, but I think it takes a good chunk out of it,” he said.
There are, however, reasons to view Microsoft’s approach as an example of somewhat naive techno-optimism. There is growing evidence that people are influenced by AI-generated content even when they know it is fake. And in a recent study of AI-generated pro-Russian videos about the war in Ukraine, comments pointing out that the videos were made with AI received significantly less engagement than comments treating them as authentic.
“Are there people who, no matter what you tell them, will believe what they believe?” asks Farid. “Yes.” But, he adds, “there are a large majority of Americans and citizens around the world who, I think, want to know the truth.”
This desire hasn’t exactly led to urgent action from tech companies. Google began adding a watermark to content generated by its AI tools in 2023, which Farid says has been helpful in his investigations. Some platforms use C2PA, a standard Microsoft helped launch in 2021. But Microsoft’s set of suggested changes, powerful as they are, could remain just suggestions if they threaten the business models of AI companies or social media platforms.
