
Stay off the front page! A General Counsel's Toolkit for using gen AI without getting in hot water

Navigating Legal Risks in Gen AI: A Practical Guide for Companies with Betty Louie and Shareen Pathak

Generative AI tools have the power to transform creative industries. But using them comes with its own set of risks — and that fear can stifle innovation inside companies. Join Betty Louie and Shareen Pathak in this practical conversation, where we’ll talk about how to create systems that enable teams to use and scale generative AI, while minimizing risk, incorporating the company's ethical stance and increasing efficiency.

Transcript

[MUSIC PLAYING] I want to start with a really big picture question, because we've seen such amazing demos today.

And there's also so many AI tools in the market.

And I think that one thing that I hear from a lot of people I talk to in the marketing world, in the publisher world, agency world, is that they don't know where to start when it comes to using generative AI, working through all these different tools they see every day.

And I know that you at The Brandtech Group have actually developed this really interesting way of thinking about this and how to think through all of these tools.

So I think that's a question a lot of people in the audience have.

So I'd like to actually start with what you're doing in terms of going through the landscape and figuring out what to use.

Yeah.

So great question.

So about a year ago, the question of Gen AI came up.

We have our own Gen AI platform called Pencil, which Chiara had demonstrated today.

And certainly for us, not only to use Pencil, but to use so many of the different Gen AI tools out there, the question was asked, which tools are safe to use?

Which ones can we use?

So we approached it by asking how to make the process very efficient.

Because we started hearing that at a lot of places, the creative team would come up with different tools that they wanted to use.

Then they would go to some AI committee and ask the AI committee if they could use the tool.

And then a few months later, the AI committee would let them know if they could use it or not.

And we decided to reverse the process.

So we looked at a whole bunch of tools.

I think from the get-go, we looked at about 10 to 15 different tools.

And right now, we're up to about 65 different tools.

And we've analyzed them.

And we've put them onto what we call a green list.

So we put it into a traffic light system.

So green means it's a tool that you could use for public-facing work.

Amber tools are ones that you can really use just for internal work: mood boards, ideation.

And then red tools are tools that you should not use.

And the reason why we came up with this is when I myself was looking at the tools, I realized it takes a lot of time to figure out how to use each tool.

You have to figure out where to sign up and whether you want to pay for it.

And if you pay for it, which plans there are, and how to toggle certain settings on and off.

And then you start to learn how to prompt it, which is in and of itself like an entire work stream.

So if you want the creative team to really lean into using tools and being comfortable with using tools, you can't think that they would do this and then think that two months later an AI committee would let them know that they cannot touch the tool.

So by reversing the process, it really encourages people to play with the tools that are green.

It encourages innovation.

And it allows people to feel safer working within a certain framework.
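As a rough sketch, that traffic-light system could be encoded as a simple allow-list that anyone on the team can query; the tool names, categories, and ratings below are hypothetical placeholders, not the actual green list:

```python
from enum import Enum

class Rating(Enum):
    GREEN = "cleared for public-facing work"
    AMBER = "internal use only: mood boards, ideation"
    RED = "do not use"

# Hypothetical entries; the real list comes out of legal, IT,
# and creative review of each tool.
TOOL_REGISTRY = {
    "image-tool-a": {"category": "text-to-image", "rating": Rating.GREEN},
    "video-tool-b": {"category": "text-to-video", "rating": Rating.AMBER},
    "text-tool-c":  {"category": "text-to-text",  "rating": Rating.RED},
}

def can_use(tool_name: str, public_facing: bool) -> bool:
    """Check whether a tool is cleared for the intended use."""
    entry = TOOL_REGISTRY.get(tool_name)
    if entry is None or entry["rating"] is Rating.RED:
        return False  # unreviewed tools default to red
    return entry["rating"] is Rating.GREEN if public_facing else True
```

Defaulting unknown tools to red mirrors the reversed process: nothing is touchable until it has been reviewed and listed.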

That's really interesting.

And I like the idea of reversing the process in order to force efficiency into the system, so that people aren't wasting time waiting for approvals or things like that.

They know up front.

What are some of the parameters you started thinking about, in as much detail as you can get into, around what constitutes green, red, amber?

What are some broad strokes parameters you were thinking through?

Yeah.

It's a combination of legal analysis, tech analysis, IT analysis, but also input from the creative and marketing teams.

So it's a whole combination.

Some of the heavier, more technical ones are, one, the provenance of the tool.

What is it based on?

What's the underlying model that it's based on?

Certain ones disclose what they're built on top of.

Certain ones don't.

Some of it could be IT.

Where do they store their data?

Who owns the input?

Who owns the output?

What some of the indemnifications are.

And then we also look at some of the softer aspects of the tool.

Who's behind the tool?

Is it well-funded?

Is this a platform that might go away over time because it's not sustainable?

So there are so many different aspects to it.
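One way a team might capture those review dimensions is a simple record per candidate tool; the fields below track the questions above, while the scoring heuristic is an invented placeholder rather than the actual review process:

```python
from dataclasses import dataclass

@dataclass
class ToolReview:
    """Answers to the review questions for one candidate tool."""
    name: str
    underlying_model_disclosed: bool  # provenance: what is it built on?
    data_storage_location: str        # IT: where do they store their data?
    customer_owns_input: bool         # who owns the input?
    customer_owns_output: bool        # who owns the output?
    offers_indemnification: bool
    vendor_well_funded: bool          # softer: will the platform stick around?

def suggest_rating(review: ToolReview) -> str:
    """Hypothetical heuristic; real ratings come from the combined
    legal, tech, IT, creative, and marketing analysis."""
    if not (review.customer_owns_input and review.customer_owns_output):
        return "red"
    if not (review.offers_indemnification and review.underlying_model_disclosed):
        return "amber"  # usable internally, riskier for public-facing work
    return "green" if review.vendor_well_funded else "amber"
```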

I want to talk about a bunch of fears around legal issues, again, that I've heard from people and then I personally have as a publisher.

But before that, I do want to say: we're calling this a toolkit, so I want to keep it as practical as possible.

If somebody wants to start with their version of a green list, what is one starting point that they should think about just philosophically as they're starting on this?

Why do this?

And where should they begin?

Where should they begin?

I think they should first figure out what tools their team wants to use.

First, you have to canvass internally.

What are the tools people are using?

How are they using it?

And then you build parameters around that.

For us, we looked at tools for different features as well-- text to image, text to text, text to voice, text to video.

So you don't want that many duplications.

On the green list, we want at least one in each category, if not two or three in every single category.

But you don't want five in one category and zero in other ones.

So you want to build up some system of efficiency so that people could scale, but at the same time, give them the bandwidth to work with it.

So I think it's first canvass the team to see what tools people are using.
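A small sanity check along those lines, assuming each registry entry records a category and a rating as in the earlier sketch:

```python
from collections import Counter

# Each registry entry looks like: {"category": "text-to-image", "rating": "green"}
def coverage_gaps(registry: dict, categories: list[str], target: int = 1) -> list[str]:
    """Return the categories that still lack `target` green-rated tools."""
    greens = Counter(
        entry["category"]
        for entry in registry.values()
        if entry["rating"] == "green"
    )
    return [c for c in categories if greens[c] < target]

# e.g. coverage_gaps(my_registry,
#          ["text-to-image", "text-to-text", "text-to-voice", "text-to-video"])
# returns the modalities where creatives still have no approved option.
```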

Yeah, I think Noah talked about this earlier, where he said a lot of people are using tools that leadership doesn't even know about.

And so trying to start there makes a lot of sense.

Yeah, and certainly seeing it on the green list helps all the different creators understand what tools are out there, because you're not relying on yourself to figure out what's out there in the market.

You'll see different things that are green, that are amber, and you'll say, oh, that's interesting.

I didn't realize this tool is out there.

And you start playing around with it, and that certainly encourages innovation.

And then conversely, if something's on the red list, you don't touch it.

And you're not wasting your time with it.

I wanted to talk a little bit, like I said, about what I think are four or five big buckets of fears related to using AI, and legal fears related to using AI.

Or it's even psychological safety.

Am I going to get fired?

Am I going to get my boss fired?

Am I going to get on the front page for a bad reason that I don't want to be part of?

The biggest one, I think, that everybody talks about, first and foremost, is copyright.

I think that the issue of how do we think about copyrighting AI-generated material, whether it's content, image, logo, campaign, is one that I fear a lot.

How are you seeing the issue of copyright evolve as you're starting to work through this?

And especially how it's evolved over the last few months, because I think it feels like it's changing constantly.

Yeah, so the US Copyright Office is probably the most conservative on this topic.

And it is the most vocal on this topic as well.

So the US takes the position that the author of a copyrighted image can only be a human.

So if something is generated by technology, it is not protectable.

So this is very interesting.

And it's caused a lot of issues with Gen AI being so strong and the technology being so strong.

And in an unprecedented move, the US Copyright Office actually put out an inquiry in August last year to deal with this issue.

They received over 10,000 responses by the deadline, which was December of 2023.

So sometime this year, they're supposed to come out with some sort of guidance on this issue.

What they have said to date is that if a human does the prompt, but the technology itself generates the image, the image itself is not protectable.

So you could be the greatest prompter, and you could do the fanciest prompt.

If that is the only human interaction, that is not enough to protect the image.

So what this actually means is you have to go back to the traditional notion.

If it is an image that you want to protect-- and we can go into that on whether or not it's an image you want to protect-- you have to build back into the system the human tailoring, the human-- I don't want to say manipulation, but you have to build in the human aspect to it.

And more importantly, you have to have a record of all the prompts and all the work that you've done to it so that you're able to-- if you want to protect the image, you're able to have that backup for it.

So for example, our Pencil Pro, our product, we do a prompt lock.

So you're able to keep that.

As best practices, most companies probably should, if it's an image that they're looking to protect, have that sort of data as to what was generated, what was prompted, how the human changed it.

And then you might go back in and prompt some more.

And then if the human changes it some more, you record that too.
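As a minimal sketch, that kind of prompt log could be an append-only file of prompts and human edits per asset; Pencil Pro's actual prompt lock format is not public, so the field names here are invented:

```python
import json
import time
from pathlib import Path

def log_step(log_path: Path, prompt: str, output_ref: str, human_edits: str = "") -> None:
    """Append one prompt/edit step to an asset's audit trail (JSON lines).

    Keeping every prompt and every human change is the backup record
    you would reach for if you later try to protect the finished work.
    """
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "output_ref": output_ref,    # e.g. a file hash or asset ID
        "human_edits": human_edits,  # description of the manual changes
    }
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```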

And one thing to keep in mind when we're talking about the protectability of a certain work is not all images-- when you're working with Gen AI tools, you're not looking to protect every single image.

There might be certain images where you're just using AI in the background.

And it's very, very low risk.

You're just looking to spit it out with volume, speed, low cost.

And those images, your team might make a decision, you know what, I really don't need to protect this.

It's not something that's so unique.

So you're not looking to tailor it so much.

Whereas there might be certain campaigns where it's very sensitive to the company.

It's a rebranding.

It's a new line that's coming out, a new model, new shape of something, new colors.

And the team really needs to be aware of that right away.

OK, this is a model.

This is content that we really want to protect.

So you build in that human tailoring.

You make sure you have the prompt logs.

You make sure you keep the record on that.

So you should look at content that you generate from AI across a whole spectrum, where not everything needs to be protectable.

Because otherwise it becomes too cumbersome as a process.

You take some of the goodness of the Gen AI tool out of it.

So if you're looking at background work and you deem it low risk, then you're not looking to protect it.

Right, otherwise you'd remove the efficiency that's the whole point.

You've sort of spoken about images.

And I kind of want to turn and talk a little bit more about sort of content of a different sort.

Obviously, a lot of the conversation on AI has centered on news publishers, content publishers, and brands who create a ton of content.

And at Toolkits, we cover the business of content.

And I also advise a lot of brands who do content marketing.

So I hear this from them often that, oh, I have a great CMO or a great executive who has some amazing ideas for a thought leadership piece.

How do I know that what actually comes across my desk was AI generated, was AI involved?

If I'm running an op-ed page and people are sending in an opinion piece, how do I know some of this wasn't AI generated, was AI generated, to what extent was it AI generated?

And does it matter?

Am I going to-- if I'm going to print something-- you know, it's print.

If I'm going to publish something, how much should I be thoughtful about how much of this was created by AI?

And how much of this should I even disclose?

Are there best practices from an ethical perspective or from a legal perspective or both that you sort of advise people more on the sort of content article front?

Yeah, no, that's a great question in terms of who bears the responsibility.

I think, first, if you're a publisher and you're getting articles from a wide range of sources, given how easy it is to use a lot of these tools and to get ideas from these tools, I think it would be safe-- I think it would be a good question to ask people contributing pieces, how much did you use a Gen AI tool to help you with this?

How much was a tool involved in this process?

So that you get-- you understand what's out there.

Because certainly as a publisher, you don't want to be putting certain pieces out and then somebody comes back and says, hey, you know what?

It's pretty much verbatim of what I had written on something else, right?

So I would put it back into your questionnaire when you're working with different writers and pieces that they submit.

I would put it back into the questionnaire.

And then it's really a decision as to whether or not you want to disclose it.

It's a bit of an ethical issue.

It's a bit of a personal stance as to how you want to do it.

If you want to say, portions of this piece were assisted by a Gen AI tool, or however you want to disclose it.

The EU AI Act has been published, or a draft of it has been published.

And certainly there are certain requirements as to disclosure.

The EU AI Act puts people into two different buckets when it comes to Gen AI, one of which is if you're a provider of Gen AI, meaning you're a person building a Gen AI system.

And then the other is if you're a deployer, meaning somebody who just uses it.

So there are different levels of disclosure required.

So you might not want to avail yourself of the EU AI Act standards.

But at the same time, you might also want to be progressive and say, no, I will do it, even if I'm not obligated to do so as a matter of principle, as a matter of ethics and best practices.

I will disclose it.

And then you decide how you want to footnote it on that and how much you want to put it out there.

I think it's so interesting in the context of brands becoming media companies, because a lot of them are putting so many resources into hiring writers and doing great thought leadership and content.

And then I think the use of AI can obviously significantly speed up and make a lot of workflows more efficient, as we've talked about.

But there's also that issue, like you said, of sort of audience trust.

And how do you kind of play with disclosures either to garner that trust?

And maybe you say, none of this piece was made using generative AI.

And that can have a certain impact in an environment where people expect generative AI.

Or you can come out front and preempt it, right?

And say, oh, we use generative AI for some of this piece.

And that also kind of helps create that trust.

The EU Act is fascinating.

Obviously, Europe, with most of these things, including privacy, has always been kind of ahead of the curve or ahead of the US when it comes to this.

Do you have a sense of how kind of laws around this will evolve in the US, especially with sort of the White House and the FTC kind of talking about this quite a bit now around kind of disclosures and things like that?

You know, in the US, it's really patchwork right now.

I mean, so many different states are coming out with different legislation on very specific smaller pieces of it.

So it's too much of a patchwork for, I think, most companies to really wrap their arms around it.

I think what will end up happening is people will decide whether to avail themselves of the EU AI Act and use that standard as sort of a best practice standard and then roll it out, you know, similar to a GDPR standard.

And then the US might have different nuances-- Around that.

Yeah, around that.

And then back to your other point, in terms of garnering the trust of your readers in whether you say, you know, this piece was not-- you know, Gen AI was not used in any of it or a portion of it was Gen AI.

You know, it's two prongs because you could look at it as getting trust from the readers.

But you know, some of it's also reputational risk.

You know, so even if you as a publisher, you say, OK, I don't really have a legal risk-- or maybe you do, but maybe you don't.

But some of it's reputational risk because you don't want someone coming back to you and say, well, these articles that you've published, it's so similar to so many different ones.

Yeah.

Right?

So it's something to keep in mind as to how to do that.

And I know that there are a lot of tools out there, kind of AI on AI, where you use another AI tool to figure out if AI was used on it.

But that's putting a lot of burden on the publisher to figure that out.

So it's-- you know, that's an individual assessment as to how you want to approach it.

But you know, I think as part of the process, it's pretty fair to just ask anyone submitting work, did you use AI in this?

And if so, how much did you rely on it?

And you know, you can ask them, what models did you use?

What were they?

Because you also want to understand what's out there.

Yeah.

Including if it's your own boss.

Then you're talking about leadership.

Well, yeah.

And if you put it as like, you know, a matter of a standard questionnaire, then it doesn't feel so personal to your boss as to like interrogating him or her on this.

But it is just a matter of practice.

And then plus, it's also part of your backup file.

Like if somebody comes back to you and says, you know, this article is this, you say, hey, look.

This is what I've asked.

And these are the responses I got.

So it's a nice-- it's nice paperwork for you to have.
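As a sketch, those standard questions could be filed as a simple record alongside each submission; the fields are illustrative, and the actual questions are whatever your editorial policy sets:

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """One contributor's answers, kept as the backup paperwork described above."""
    submission_id: str
    used_ai: bool
    models_used: list[str] = field(default_factory=list)  # which tools or models
    extent: str = ""        # how much they relied on AI, in their own words
    collected_on: str = ""  # when the questionnaire was answered
```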

I think that's so interesting, again, in the context of content marketers who pour so much resources into creating amazing content.

Like we heard from KPMG earlier, who's spending all this time and energy having that content out there.

We don't have a lot of time left.

But I did want to talk sort of about what's happening in this space when it comes to-- and I use the term publishers loosely.

I mean news publishers.

I mean brand publishers.

Anyone who publishes content on the internet is kind of currently thinking, OK, but a lot of the work I publish gets ingested by a lot of models.

What's in it for me?

And you're seeing kind of on the news side a lot of news brands starting to strike some deals with OpenAI and sort of saying, OK, at least you can license our content so that we get some of that whatever back if you're going to be using it.

And I have heard from a lot of brands in this space who do create amazing thought leadership saying, oh, a lot of our content is being ingested by these crawlers.

How do we think about this?

Do we go the licensing route?

Do we bother?

In a few extreme cases, you've seen sort of some lawsuits, which are also a form of negotiation.

The license-or-sue approach is sort of really interesting to me in the context of this publishing.

How do you advise people to sort of think about that if they are thinking, what should I do about being sort of crawled by these web crawlers?

Yeah, I mean, we don't typically-- we don't advise on that piece per se.

I think the way I see it is if you're looking at licensing and you're a small creator, a relatively smaller creator, you would want to group up with other people that might be looking to negotiate with a larger player like OpenAI or different things.

If you're going the lawsuit route, that's going to take a while.

It takes a long time to work through the US system.

And that's a route you could choose.

But it's a very expensive route as well.

But even if you're looking to join, let's say, a group to negotiate with OpenAI or whichever platform, think about what you want to get out of that negotiated deal.

The negotiated deals haven't really been revealed as to what some of the terms are.

But what they have revealed is some of it's monetary.

They haven't shown how much people get paid.

But some of it's also attribution.

And attribution is super important because if they are able to link you back to your own site, then you might get what you want out of it.

And then the important part on attribution is you want to figure out, if they're attributing it as part of the negotiated deal, are they attributing to thousands of sites so that it's meaningless, meaning you don't really get a pop out of that attribution, or is it a short list of 10 or 15 so that you actually benefit from it?
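A toy calculation shows why the size of that attribution pool matters; the numbers are entirely hypothetical:

```python
def expected_referrals(monthly_attribution_clicks: int, attributed_sites: int) -> float:
    """Naive model: attribution traffic splits evenly across every cited site."""
    return monthly_attribution_clicks / attributed_sites

# Suppose a platform sends 10,000 attribution clicks a month:
print(expected_referrals(10_000, 15))     # ~667 visits per site: a real benefit
print(expected_referrals(10_000, 5_000))  # 2 visits per site: effectively meaningless
```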

So I think when you're looking to do these negotiated deals with these large tech platforms, figure out what you really want to get from it, whether it's a combination of attribution, money, or different things.

And then figure out the right team to kind of group up with to do the negotiations against some of the bigger players.

Amazing.

Great.

Well, Betty, we're right on time.

But thank you so much.

That was great.

Thank you.

[APPLAUSE] Thank you so much.

[MUSIC PLAYING]

BRXND is coming to LA for the first time on February 6, 2025 for another full day of marketing and AI—

Two years after launching the BRXND Marketing x AI Conference in NYC, we are ready to take things to California. On February 6, 2025 we will be in Los Angeles to explore the intersection of marketing and AI. Where must the industry go? And, most importantly, what's worth getting your hands on today? Join us in February.

BRXND.ai is an organization that exists at the intersection between brands and AI. Our mission is to help the world of marketing and AI connect and collaborate. This event will feature world-class marketers and game-changing technologists discussing what's possible today.

The day will include presentations from CMOs from leading brands talking about the effects of AI on their business, demos of the world’s best marketing AI, and conversations about the legal, ethical, and practical challenges the industry faces as it adopts this exciting new technology.

Attendees will return to work the next day with real ideas on how to immediately bring AI into their marketing, brand, strategy, and data work. We hope you will join us!
