Have you ever doubted the truthfulness of an image or video you saw online? In a world where deepfakes are becoming increasingly convincing, detecting content generated by artificial intelligence has become a daily challenge. Discover how Google and other tech players are trying to arm us against this new reality.
The 3 must-know facts
- Google uses SynthID, an invisible watermarking technology, to identify content generated by its own AI models.
- The C2PA standard allows verification of the origin of images and videos, even those created by Google’s competitors like OpenAI.
- Despite these advances, no tool replaces critical thinking, essential for distinguishing the real from the artificial.
The limits of old AI models
Not long ago, AI models produced videos with obvious inconsistencies, making them easy to spot with the naked eye. However, rapid technological progress has enabled far more realistic renderings. Deepfakes have infiltrated social networks, making the distinction between reality and illusion more complex than ever.
Google and SynthID technology
Faced with this challenge, Google introduced SynthID, developed by Google DeepMind. Unlike traditional watermarks, SynthID acts on the frequency components of the digital signal: the algorithm subtly adjusts color and brightness statistics, creating an imprint undetectable to the naked eye but recognizable by Google’s Gemini model.
In a few clicks, users can submit an image or video to Gemini and ask: “Was this image created by AI?” The chatbot then analyzes the content for the SynthID marking.
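Google has not published SynthID’s exact algorithm, but the general principle it describes, nudging pixel statistics so imperceptibly that only a key holder can later detect the pattern, resembles a classic spread-spectrum watermark. The sketch below is a simplified, hypothetical illustration of that idea, not SynthID itself: the flat pixel list, the key, and the strength value are all invented for the example.

```python
import random

STRENGTH = 2.0   # how hard each pixel is nudged (kept small to stay invisible)
N = 65536        # number of pixels in the toy "image", flattened to one list

def key_pattern(key: int, n: int) -> list[int]:
    """Key-derived pseudorandom +/-1 pattern; the key is the shared secret."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels: list[float], key: int) -> list[float]:
    """Shift every pixel slightly along the key pattern."""
    pat = key_pattern(key, len(pixels))
    return [p + STRENGTH * s for p, s in zip(pixels, pat)]

def detect(pixels: list[float], key: int) -> float:
    """Correlate the image with the key pattern; natural image content is
    uncorrelated with the pattern, so a clearly positive score suggests
    the watermark is present."""
    pat = key_pattern(key, len(pixels))
    mu = sum(pixels) / len(pixels)          # remove the image's own average
    return sum((p - mu) * s for p, s in zip(pixels, pat)) / len(pixels)

rng = random.Random(0)
image = [rng.uniform(0, 255) for _ in range(N)]   # stand-in for a photo
marked = embed(image, key=42)

print(round(detect(image, 42), 2))    # close to 0: no watermark
print(round(detect(marked, 42), 2))   # close to STRENGTH: watermark found
```

Without the key, the pattern looks like faint random noise, which is why the mark survives casual viewing yet stands out immediately to the detector that generated it.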
Interoperability with the C2PA standard
To overcome the limitations of SynthID, especially with content created by competitors, Google also relies on the C2PA standard. This protocol records a file’s provenance from the moment it is created, and it has been adopted by giants like Adobe, Microsoft, and OpenAI, allowing verification of a content’s origin even in the absence of a SynthID marking.
To use C2PA, simply visit the Content Credentials platform, where an image can be analyzed in seconds. However, the standard is not foolproof: a simple screenshot, for example, re-encodes only the pixels without the attached provenance data, erasing the traceability information and defeating this method of detection.
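The screenshot weakness is easy to reproduce: provenance data travels in the file container alongside the pixels, so any tool that re-renders only the pixels drops it. The sketch below stands in for that process with a hand-built 1x1 PNG carrying an invented “Provenance” note in a tEXt chunk (real C2PA manifests use a more elaborate container format, not tEXt), and shows that a pixels-only re-save loses the note.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Minimal 1x1 grayscale PNG with a hypothetical provenance note attached.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
note = chunk(b"tEXt", b"Provenance\x00created-by-example-ai")
idat = chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + one pixel
png = b"\x89PNG\r\n\x1a\n" + ihdr + note + idat + chunk(b"IEND", b"")

def chunk_types(data: bytes) -> list[bytes]:
    """List the chunk types present in a PNG byte stream."""
    types, pos = [], 8                  # skip the 8-byte PNG signature
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        types.append(ctype)
        pos += 12 + length              # length + type + data + CRC
    return types

def screenshot(data: bytes) -> bytes:
    """Re-encode keeping only the chunks needed to draw the pixels,
    as a screenshot or naive re-save effectively does."""
    out, pos = b"\x89PNG\r\n\x1a\n", 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out += data[pos:pos + 12 + length]
        pos += 12 + length
    return out

print(b"tEXt" in chunk_types(png))              # True: note present
print(b"tEXt" in chunk_types(screenshot(png)))  # False: note lost
```

The pixels in the copy are identical to the original; only the sidecar information vanished, which is exactly why provenance standards work best when the whole chain of tools preserves them.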
The challenges of digital authenticity
Despite the sophisticated tools developed by AI giants, human discernment remains crucial. Technology cannot yet replace our ability to assess the truthfulness of content. As Blaise Pascal anticipated, truth and lies are often intertwined, and our role is to untangle these threads to distinguish the real from the artificial.
In this race for truthfulness, companies like Google, OpenAI, and Adobe are constantly innovating to offer more effective detection solutions. However, their success also depends on our ability to remain vigilant and critical of the information we consume daily.