Spot The Fake: Google Launches New Watermarking Tech

Martina Bretous

Where were you when you saw that image of Pope Francis in a white puffer jacket and jeweled crucifix, looking like he stepped out of a streetwear runway show?

We’ve seen deepfakes before, but few have spread quite so widely. Now, Google has released a tool designed to help flag images like this before they spread unchecked.

How does it work?

DeepMind, the AI research lab Google acquired in 2014, recently announced the launch of SynthID, a watermarking tool designed to label and identify AI-generated images.

“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media and for helping prevent the spread of misinformation,” the DeepMind team wrote in its announcement.

Unlike traditional watermarking techniques, which often rely on visible marks or metadata that can be stripped, SynthID embeds a digital watermark directly into the pixels of an image.

So, even if you alter the image – cropping, resizing, filters, brightness adjustments – the watermark remains. The human eye won’t spot it, but detection software can identify it.
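DeepMind hasn’t published how SynthID works, but the general idea of an invisible, key-dependent signal hidden in pixel values can be sketched with a toy spread-spectrum watermark. The code below is a simplified illustration, not DeepMind’s method: a faint pseudorandom pattern derived from a secret key is added to the pixels, and a detector checks how strongly that pattern correlates with the image.

```python
import random

def _pattern(key, n):
    """Deterministic zero-mean noise pattern derived from a secret key."""
    rng = random.Random(key)
    return [rng.gauss(0, 1) for _ in range(n)]

def embed_watermark(pixels, key, strength=2.0):
    """Nudge each pixel by a tiny, key-derived amount (invisible to the eye)."""
    pat = _pattern(key, len(pixels))
    return [min(255.0, max(0.0, p + strength * w)) for p, w in zip(pixels, pat)]

def detect_watermark(pixels, key, threshold=0.5):
    """Correlate the image against the key's pattern; high score = watermarked."""
    pat = _pattern(key, len(pixels))
    mean = sum(pixels) / len(pixels)
    # Subtracting the mean makes uniform brightness shifts irrelevant.
    score = sum((p - mean) * w for p, w in zip(pixels, pat)) / len(pixels)
    return score > threshold

# A flat gray 64x64 "image", stored as a list of grayscale values.
img = [128.0] * (64 * 64)
marked = embed_watermark(img, key=42)

print(detect_watermark(img, key=42))     # False: no watermark present
print(detect_watermark(marked, key=42))  # True: pattern detected
print(detect_watermark(marked, key=7))   # False: wrong key, no correlation
```

Because the detector subtracts the image mean and correlates against a zero-mean pattern, a uniform brightness adjustment leaves the score untouched — a small taste of the transformation-resistance described above. Real schemes are far more sophisticated in order to also survive cropping, resizing, and filtering.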

Not specific enough? Well, that’s all the team is willing to share about the tech.

“The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” said DeepMind CEO Demis Hassabis.

Currently in beta, SynthID is available to customers of Vertex AI, Google Cloud’s machine learning platform, who use Imagen, Google’s text-to-image model. These customers will be able to responsibly create, share, and identify AI-generated images.

Hassabis adds that the technology isn’t foolproof against “extreme image manipulations,” but it’s progress in the right direction.

What’s next?

The team at Deep Mind is working on expanding access to SynthID to make it available to third parties and integrate it into more Google products.

This announcement came shortly after Google and six other leading AI companies pledged at the White House to invest in AI safety tools and research for responsible use.

At that meeting, the White House requested new watermark technology as a way for AI companies to earn public trust, and the approach will likely extend to audio and video content as well.

The summit continued the government’s effort to combat deepfakes. In 2021, the Senate Homeland Security and Governmental Affairs Committee passed legislation targeting deepfakes.

On the light end of the spectrum, you have deepfakes used to style the Pope in the latest fashion trends. On the dark end, they can fuel political instability, fraud, and worse.

In 2021, Adobe cofounded the nonprofit Coalition for Content Provenance and Authenticity (C2PA), which works to standardize how media content is labeled and to combat misinformation. Its content credentials serve as a seal of approval, showing consumers that an asset was not manipulated or altered.

Due to the AI boom, C2PA’s membership has reportedly grown 56% in the past six months.

Shutterstock announced that it would integrate C2PA’s technical protocol into its AI systems and creativity tools.

The takeaway: With government pressure mounting, the big and small AI players will need to prioritize efforts toward responsible AI. Whether you’re using or creating the tools, there’s more oversight on the horizon.
