Google is stepping up efforts to build trust in AI-generated content by introducing SynthID watermarks and adding transparency notes to Google Photos. As AI technology evolves, distinguishing human-made from AI-generated content has become harder. Google’s new tools aim to help users recognise AI-edited content, boosting transparency and reducing the potential for misinformation.
How SynthID works
SynthID, now available as open-source technology, identifies AI-generated content without altering its quality. Originally built into Google’s Gemini app and web-based tools, SynthID embeds an invisible watermark directly into images, audio, video, or text produced by AI. Imperceptible to humans, this mark serves as a digital “signature” that can later be detected to verify whether content originated from Google’s AI tools.
SynthID operates in two key ways. First, it applies the watermark at the generation stage, so all new content created by Google’s AI tools carries the invisible marker. Second, it lets users scan existing content for SynthID watermarks, verifying whether part or all of a given piece is AI-generated. Verification works across media types: images, audio, video, and text.
To achieve this for text, SynthID slightly adjusts probability scores for certain words without impacting the content’s natural flow or readability. Google’s large language models (LLMs) generate text one token at a time; each token, representing a word or part of a phrase, is predicted from the preceding words and assigned a probability score. SynthID subtly alters these scores to imprint a watermark pattern that serves as a reliable marker. The approach becomes stronger and more accurate with longer texts, but even brief content, such as a three-sentence snippet, can carry the mark. The technique preserves content quality, keeping watermarked AI outputs coherent and readable.
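Google has not published SynthID’s exact algorithm, but the token-biasing idea described above can be sketched with a toy “green-list” watermark: a keyed hash of the previous token pseudorandomly marks roughly half the vocabulary as green, generation nudges probability mass toward green tokens, and detection counts how often tokens land in their predecessor’s green set. Every name and parameter below is illustrative, not Google’s implementation.

```python
import hashlib
import random

def green_set(prev_token: str, key: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly mark roughly half the vocabulary 'green', keyed on
    the previous token and a secret key (a simplified stand-in for a
    keyed watermark scoring function)."""
    marked = set()
    for tok in vocab:
        digest = hashlib.sha256(f"{key}|{prev_token}|{tok}".encode()).digest()
        if digest[0] % 2 == 0:
            marked.add(tok)
    return marked

def watermarked_choice(prev_token: str, probs: list[float], key: str,
                       vocab: list[str], bias: float = 2.0) -> str:
    """Re-weight the model's token probabilities, boosting 'green'
    tokens, then sample. A larger bias gives a stronger watermark."""
    greens = green_set(prev_token, key, vocab)
    weights = [p * (bias if tok in greens else 1.0)
               for tok, p in zip(vocab, probs)]
    r = random.random() * sum(weights)
    acc = 0.0
    for tok, w in zip(vocab, weights):
        acc += w
        if r <= acc:
            return tok
    return vocab[-1]

def green_fraction(tokens: list[str], key: str, vocab: list[str]) -> float:
    """Detection: the fraction of tokens that fall in their predecessor's
    green set. Near 0.5 for plain text, noticeably higher for
    watermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_set(prev, key, vocab))
    return hits / max(1, len(tokens) - 1)
```

With a bias of 2.0 and a half-green vocabulary, watermarked text lands on green tokens about two-thirds of the time versus one-half for unwatermarked text, and the statistical gap widens with length, which mirrors the article’s point that longer texts give stronger, more accurate detection.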
Protecting image quality and AI transparency
Google claims SynthID’s watermarks do not affect image or video quality and remain detectable after modifications such as cropping, filtering, colour changes, and altered frame rates. This durability means the watermark survives common edits, reducing the chance of unintentional erasure.
Although SynthID isn’t a comprehensive solution to misinformation, Google views it as an essential step toward more transparent AI practices. SynthID is available within Google’s Responsible Generative AI Toolkit, a resource that offers tools and guidance for safer AI creation. Google also collaborates with the AI community, including Hugging Face, to broaden SynthID’s reach. This partnership enables developers to integrate SynthID technology into their models, promoting responsible AI use across platforms.
Tracking AI edits in Google Photos
Google is also introducing a feature in Google Photos to alert users when AI has edited an image. Starting next week, any photo altered with Google AI will display a note in the app, allowing users to see AI involvement in the “Details” section under “Edited with Google AI.” This step is part of Google’s commitment to transparency, helping users track AI edits in their media.
Additionally, Google will incorporate International Press Telecommunications Council (IPTC) metadata to signal when an image is composed of multiple photos. Features like “Best Take” on the Pixel 8 and Pixel 9 phones and “Add Me” on the Pixel 9 merge elements from closely timed captures into a single image, which is handy for group photos. While these tools don’t use generative AI, the metadata distinction helps clarify when photos involve advanced editing techniques.
Google’s latest moves in AI transparency show a commitment to responsible technology practices. They aim to maintain user trust in AI advancements while ensuring clarity around how content is created. As these changes roll out, users will have greater insight into the origins of their content, allowing for a clearer distinction between natural and AI-enhanced media.