On Thursday, I published the first confirmation that the US Department of Homeland Security, which houses the country’s immigration agencies, uses AI video generators from Google and Adobe to create content it shares with the public. The news comes as immigration agencies have flooded social media with content promoting President Trump’s mass deportation agenda, some of which appears to be made with AI (such as a video about “Christmas after the mass deportations”).
But I received two types of reader reactions, and together they go a long way toward explaining the epistemic crisis we find ourselves in.
One came from people who were not surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, making her appear hysterical and in tears. Kaelan Dorr, the deputy White House communications director, did not respond to questions about whether the White House altered the photo, but wrote, “The memes will continue.”
The second came from readers who saw no point in reporting that DHS uses AI to edit content it shares with the public, because the media is apparently doing the same. They pointed to the fact that the news network MS Now (formerly MSNBC) had aired an image of Alex Pretti that was altered by AI in a way that appeared to make him look better, which spawned numerous viral clips this week, including one from Joe Rogan’s podcast. In other words, fight fire with fire? An MS Now spokesperson told Snopes that the network broadcast the image without knowing it had been modified.
There is no reason to lump these two cases of altered content into the same category, or to read them as proof that the truth no longer matters. One involved the US government sharing a clearly altered photo with the public and refusing to say whether it had been intentionally manipulated; the other involved a media outlet broadcasting a photo it should have known had been modified, but then taking at least some steps to acknowledge the error.
What these reactions reveal instead is a flaw in the way we collectively prepared for this moment. Warnings about the AI truth crisis revolved around a central thesis: Not being able to tell what is real would destroy us, so we need tools to independently verify the truth. My two grim conclusions are that these tools are failing and that, while truth-checking remains essential, it alone is no longer capable of producing the societal trust we were promised.
