

California’s AI watermark bill: What could it mean for brands?

OpenAI, Adobe and Microsoft have put their support behind California bill AB-3211, which would require generative AI providers to identify synthetic content and online social media platforms to label that content.

My first reaction to this is a huge sigh of relief. By now we’ve probably all seen or heard about the “Swifties for Trump” AI images posted earlier this month. With election season upon us, generative AI could play a huge role in creating fake content to sway voters. Right now, all we have to rely on for policing this content are the keen eyes of users and their willingness to quickly call it out.

But as someone deeply immersed in the use of generative AI for content in advertising, I have a lot of questions about what this means for brands that use generative AI to assist in the creation of content. Before getting into that, let’s look at a summary of AB-3211’s key provisions:

  • Generative AI providers of AI images, video and audio would be required to include metadata in the content identifying it as synthetic. Synthetic is defined as content produced or significantly modified by generative AI.
  • Non-synthetic content is defined as images, video and audio captured in the physical world by natural persons, with only minor modifications such as changes to an image’s brightness or contrast, or the removal of background noise from audio.
  • Large online, public-facing social media platforms with more than 2,000,000 unique monthly California users would be required to label this content. “Social media platform” is defined broadly to include video-sharing platforms, messaging platforms, advertising networks and standalone search engines.

My first set of questions around this are from a content creation POV. Content is rarely created or modified with a single tool. Let’s look at a simple use case:

  • A photo is shot by a photographer or purchased from a stock site.
  • The photo is modified in Photoshop – the background is extended with Generative Fill so the photo can be used in different content sizes. This is saved out as a flattened JPEG.
  • The JPEG is taken into Adobe Express and used to create several social media executions. Type and animation are added, and the file is created and posted to a social media account.

First, does extending the background count as a significant modification? From a conceptual perspective, I would say “no.” We’re simply adding non-essential visual information so that the focus of the image can be retained when used in different layouts and sizes. But from a technical perspective, this could be considered significant. The use of generative AI to create visual information where there previously was none is significant by the definition of the bill; this isn’t a contrast change. In addition, depending on the area of the background extension, we could be creating a large number of pixels using generative AI. Will the number of pixels be part of the consideration when defining “significant modifications”? I hope the answer is both yes and no: a small number of pixels can have a big impact, so pixel count alone shouldn’t be the deciding factor.

Second, will the generative AI metadata from the Photoshop file be retained in the final files from Express? A flattened JPEG loses metadata in both the layers and editing history, so will generative AI metadata be lost as well? With Adobe backing AB-3211, I’d imagine they’re way ahead of me on these questions and will find a way to include this metadata in files as they move from one tool to the next.
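One way to check whether provenance metadata survived an export is to inspect the file’s marker segments directly. The C2PA standard behind Adobe’s Content Credentials embeds its manifest in JPEG APP11 segments, so a minimal sketch like the one below can at least tell you whether any APP11 container is present in an exported JPEG. Note this only detects the container; it does not validate an actual C2PA manifest, and the function name is mine, not part of any official tool.

```python
def find_app11_segments(data: bytes) -> list:
    """Scan a JPEG byte stream for APP11 (0xFFEB) marker segments,
    where C2PA/JUMBF provenance metadata is typically embedded.

    Returns the raw payload of each APP11 segment that appears before
    the start-of-scan marker.
    """
    segments = []
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not at a marker boundary; stop scanning
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # segment length includes its own 2 length bytes
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            segments.append(data[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

If a flattened export really does strip this data, the function would return an empty list for the Express output even though the Photoshop file carried credentials.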

There are more detailed use cases, which add to the complexity. Adobe has peppered significant generative AI enhancements throughout its tools, and I’ve found that some of these “smaller” enhancements have had some of the greatest impact on the work we’re doing. New, AI-enabled features allow us to more easily blend layers and composite photos together in Photoshop. On one hand, these could fall under “minor modifications” because they can be viewed as akin to a brightness or contrast change. On the other hand, they allow content creators to fabricate images, which is exactly what the bill wants to identify.

If strongly enforced (metadata carries over from tool to tool, “small” AI enhancements are considered significant), AB-3211 would mean that a large amount of brand content could be identified as “synthetic.” If loosely enforced, the door is left open for misinformation and the harm of generative AI will continue.

My second area of questions falls under the umbrella of the sites or platforms required to label this content.

Another use case:

  • A national brand located in Washington has a social media account with over 2,000,000 unique monthly California users.
  • The brand posts content to their channel that has been labeled as “synthetic.”

Will the labeling only be seen by users in California? Or will California’s rules introduce transparency for national brands, regardless of where a brand is located? Technically, it’s possible for a platform to introduce conditional labeling, identifying a user’s location from the geographical information associated with their IP address. But there are accuracy, legal and ethical considerations here. For instance, apps typically require a user’s permission to access their location. If implementation goes the route of only requiring California users to see the labeling, those users would have to opt in to see it. But if things go the other way and brands outside of California are forced to label their content for everyone because of their 2,000,000 unique monthly California users, the bill would have a national impact. I can’t imagine this would go over well, especially considering that the bill’s definition of “social media platform” extends beyond the likes of TikTok, Instagram and Meta.
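The bill doesn’t specify how conditional labeling would work in practice. Purely as a hypothetical sketch (the function and policy names below are mine, not the bill’s or any platform’s), the two enforcement extremes discussed above could be modeled like this:

```python
# Hypothetical sketch only: AB-3211 defines no enforcement API.
# "california_only" and "everyone" are illustrative policy names.

def should_show_label(is_synthetic: bool, user_region: str, policy: str) -> bool:
    """Decide whether a platform shows a "synthetic" label to a user.

    policy: "california_only" = geo-conditional labeling (e.g., keyed
            off IP geolocation, with its accuracy caveats);
            "everyone"        = label for all users once the platform
            meets the 2,000,000 monthly California-user threshold.
    """
    if not is_synthetic:
        return False
    if policy == "everyone":
        return True
    if policy == "california_only":
        return user_region == "US-CA"
    raise ValueError(f"unknown policy: {policy}")
```

The Washington brand in the use case above would be labeled for its entire national audience under “everyone,” but only for Californian users under “california_only.”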

If AB-3211 passes, the extremes of how it’s applied will determine the bill’s impact. We could all start seeing labeled content, or it might only be visible to Californians who choose to view these labels.

So how should a brand look at the potential implications of AB-3211 if it passes?

In my career, I don’t think I’ve worked with a brand that hasn’t had the word “authentic” in their brand values. I believe that brands should maintain this authenticity towards their audiences, even if it means that content will be labeled as “synthetic.” An authentic narrative can be brought to life in creative, “synthetic” ways. For years I worked on Aflac, and we told authentic stories with a talking duck. In no way were we trying to convince audiences that there was actually a talking duck out there selling insurance. I see approaching “synthetic” content in the same vein as Aflac using the duck. If generative AI is used as a creative means to tell an authentic narrative, brands and audiences should have no issue with the labels. It would be a move towards an even better word to include in brand values – “transparency.”

 


Adam Deer
Director, Creative & Innovation

 
