The BrXnd Marketing X AI Conference is coming to NYC on 5/16. Only a few tickets left. Get yours now! →

Using AI to create consistent product visuals

Enhancing Product Visuals with AI: Discover Claid.ai’s Revolutionary Tools for Consistent Brand Imagery

Claid.ai is a generative AI platform that helps create beautiful product photography in seconds for websites, marketing, and social media. Its suite of tools edits and generates personalized product visuals at scale, preserving brand guidelines, shapes, and colors.

Transcript

[MUSIC PLAYING] OK.

Hi, everyone.

We are almost at the end of the day.

I'm Sophie.

I'm the CEO and co-founder of Claid.ai.

I'm very happy to be here all the way from San Francisco to explore the not-very-technical side of AI.

Ah, OK.

So Claid is a deep tech startup.

We've been working with generative AI for almost six years.

Launched our first product back in 2018.

And we are the developer.

That means we built, developed, and trained almost the entire stack that I will show you today, which really helps in understanding the possibilities and limitations of these networks.

Since launch, we have processed more than 300 million images with AI across various tasks.

We started with photo editing and added generation from scratch last year.

We work with companies like DoorDash, Rappi, and Printify, and help them process all their content with AI.

So right now, we have a robust suite of product tools that can create images for websites, marketing campaigns, and ads; whatever task you need done, it can do.

The key difference between what we do and tools like DALL-E, Midjourney, and other text-to-image tools (it's a very frequent question) is that most of the tools I mentioned are zero-to-one.

So you input the prompt, and they give you the result.

And while highly creative, their output needs to be curated.

You need to look through hundreds of different variations to find the one closest to your vision.

What we do at Claid, since we focus on e-commerce, is train and fine-tune with the product in mind.

So even with the same prompt, we make sure that we don't change colors, logos, format, shape of the product.

Product always comes first.

That's much harder to do than just text-to-image.

But it gives you much more consistency and better results for the e-commerce use case.

Some improvements take weeks, some months.

But if you look at it in retrospect, the jump in quality is impressive.

So that's our first generation, which we almost launched.

We didn't want to launch it; we were not happy with it.

But those are some of our first attempts, from January 2023.

And that's one of the latest, generation 5, that we did last month.

And let me show you how it works.

Can we switch to the laptop?

OK.

So as I mentioned, product always comes first.

We need to upload the product.

It can come from various sources.

And we actually have 15 different tools and parameters that you can use to edit the product and meet the requirements that you want.

So first comes the upscaling and enhancing part.

If the product is not high quality, you can improve it.

You can blow up the resolution.

Actually, this product is pretty decent.

So we can skip that.

The next thing that we also do is fix the light.

A lot of e-commerce sellers, especially on marketplaces like eBay and Amazon, don't have really well-lit product photos.

So we have the AI that can fix it.

It doesn't change the product that much.

But sometimes, you just want to add a little bit of light and correct the white balance.

It's a very simple task for a retoucher.

But it takes some time.

So we are saving that time at scale.

What comes next is background removal.

And we also can do background blurring.

To generate the new image, we need the product on a transparent background.

We need to have that alpha channel.

So you can pick if you want the transparent background or you want specific solid background.

I will pick transparent.

You can also set something like padding, which is basically the white space around the product.

If the product should take up 70% of the catalog photo, you can specify that, and we will resize it.
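The padding option she describes reduces to simple geometry: scale the product's bounding box to the requested fraction of the canvas and center it, leaving white space around it. A minimal sketch of that arithmetic (the function name and the 70% default are mine for illustration, not Claid's actual implementation):

```python
def fit_product(prod_w, prod_h, canvas_w, canvas_h, fill=0.70):
    """Compute resized dimensions and a centered offset so the product's
    bounding box fills `fill` of the canvas along its limiting side."""
    # Scale uniformly so neither side exceeds its share of the canvas
    scale = min(canvas_w * fill / prod_w, canvas_h * fill / prod_h)
    new_w, new_h = round(prod_w * scale), round(prod_h * scale)
    # Center the resized product; the remainder is the padding
    offset = ((canvas_w - new_w) // 2, (canvas_h - new_h) // 2)
    return (new_w, new_h), offset

# A 400x200 cutout on a 1000x1000 catalog canvas at 70% fill
size, offset = fit_product(400, 200, 1000, 1000)
```

Here the wide side governs the scale, so the product ends up 700 px wide with equal padding on both sides.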

So let's wait for that.

Perfect.

So it's removed the background.

Now we can work with that.

And what comes next is the generation part.

So we have a few levels of generations.

First and the simplest one is just the shadow.

Some brands, especially on the bigger marketplaces, are very cautious and don't want to generate something risky.

So we can just add a little bit of shadow.

But what comes next is generating the new image itself.

So we need to place the product where you want it to be and just write the prompt.

So the prompt is basically a description of what you would like to see.

It's highly contextual, depending on what product you have.

If it's beauty, it's probably something with the ingredients, or a bathroom, or whatever you want.

However, we are building technology to help with auto-prompting and do the prompting for you.

I was actually so tired of prompting that I built a custom GPT that does the prompting for me.

You can do it through GPT.

It's very, very simple.
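As a toy illustration of what such an auto-prompter might do, product metadata can be mapped to a scene description. The categories, scene text, and phrasing below are entirely invented; they are not Claid's or her custom GPT's actual logic:

```python
def build_scene_prompt(category: str, details: list[str]) -> str:
    """Toy auto-prompter: turn product metadata into a scene prompt."""
    # Hypothetical category-to-scene mapping for demonstration only
    scenes = {
        "beauty": "on a marble bathroom counter, soft natural light",
        "food": "on a rustic wooden kitchen table, warm morning light",
    }
    scene = scenes.get(category, "on a clean studio surface, diffused light")
    extras = ", surrounded by " + " and ".join(details) if details else ""
    return f"Product photo {scene}{extras}, professional photography"

prompt = build_scene_prompt("beauty", ["rose petals"])
```

In practice she drives this through a custom GPT rather than a lookup table; the point is simply that prompt writing is mechanical enough to automate.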

And here we have four different variations.

We can choose; it's never the same.

It always gives you variations of the scene that you requested.

So while this technology is probably developing the fastest, you still need to look at the results.

We don't recommend fully trusting it, even though, as the developer, we try to limit it: we have a not-safe-for-work filter and a logo detector, so that nothing unsafe pops up there.

But still, we recommend curation.

And when you are happy with the result, you can reuse the same image and basically just regenerate it in multiple formats.

So this one is the landscape one.

You can do it for stories.

You can do it for all 15 formats of the Facebook ads.

It all can be done automatically once you're happy with this image.
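Regenerating one approved scene into many ad formats mostly means re-framing it to each format's aspect ratio. A rough sketch of the cropping math (the format list is illustrative; the actual Facebook ad specs vary by placement):

```python
def crop_box(src_w, src_h, target_ratio):
    """Largest centered crop of the source matching target aspect ratio (w/h)."""
    if src_w / src_h > target_ratio:      # source too wide: trim the sides
        new_w = round(src_h * target_ratio)
        x = (src_w - new_w) // 2
        return (x, 0, x + new_w, src_h)
    new_h = round(src_w / target_ratio)   # source too tall: trim top and bottom
    y = (src_h - new_h) // 2
    return (0, y, src_w, y + new_h)

# Hypothetical format set; real placements have their own specs
FORMATS = {"landscape": 1.91, "square": 1.0, "story": 9 / 16}
boxes = {name: crop_box(1910, 1000, r) for name, r in FORMATS.items()}
```

Once a hero image is approved, a loop like this (plus outpainting for formats taller than the source) can fan it out to every placement without further review of the scene itself.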

But another frequent question we get is: OK, that's all great, but can I have a little more control, so that I don't need to look at every single image that is generated?

And you can.

And we are doing this by using templates.

Templates give us much more control in terms of... oh, I'm sorry, I need to go back to the product.

Yeah, templates give us much more control over the geometry and composition around the product.

And you can upload specific brand guidelines, like the hex colors of the scenes you would like to generate, colors specific to your brand.

And once you place the product here, it will never move.

It will be there standing on the platform, keeping the composition consistent.

We just give you a little bit of variation on the same scene.

With the color, you can also specify some extra details.

And it stays there.

And through API, you can generate hundreds of variations of that composition.
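A sketch of how such a batch might be driven programmatically. Every field name and value here is an assumption for illustration; this is not Claid's documented API schema, and the code only builds the request payloads rather than sending them:

```python
def template_jobs(template_id: str, product_url: str,
                  brand_colors: list[str], n_variations: int) -> list[dict]:
    """Build one hypothetical request payload per variation of a locked template."""
    return [
        {
            "template": template_id,
            "product": product_url,
            # Cycle through approved brand hex colors
            "background_hex": brand_colors[i % len(brand_colors)],
            "seed": i,            # vary only the seed, not the layout
            "creativity": "low",  # keep the composition fixed
        }
        for i in range(n_variations)
    ]

jobs = template_jobs("podium-v2", "https://example.com/bottle.png",
                     ["#0057B8", "#FFD700"], 4)
```

Because the template pins the product's position and the brand colors are whitelisted, each payload differs only in seed and background, which is what makes hundreds of variations safe to generate without per-image review.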

So it looks similar, but, as I picked this crazy color for the demo, the color influenced the result a lot.

You can do it in a much less crazy way.

But you can do it for your brand.

And some of the brands that we work with want to use the templates, but they don't want them to be changed.

So we have a low-creativity setting that just blends the product in and generates a little bit of shadow, so it doesn't look pasted on.

We have a client that processed around 1,000 products, placing them on kitchen counters with just the shadow.

And we didn't need to check every single one of them.

It was all done.

So that's how it works.

In the upcoming weeks, we are launching the ability to correct images after they are generated.

And the ability to upload your brand styles, brand reference images from your previous photo shoots.

So the good news is that this technology keeps getting more and more controllable.

However, we still recommend a little bit of curation.

So thank you.

[MUSIC PLAYING] [APPLAUSE]

BRXND is coming to LA for the first time on February 6, 2025 for another full day of marketing and AI—

Two years after launching the BRXND Marketing x AI Conference in NYC, we are ready to take things to California. On February 6, 2025 we will be in Los Angeles to explore the intersection of marketing and AI. Where must the industry go? And, most importantly, what's worth getting your hands on today? Join us in February.

BRXND.ai is an organization that exists at the intersection between brands and AI. Our mission is to help the world of marketing and AI connect and collaborate. This event will feature world-class marketers and game-changing technologists discussing what's possible today.

The day will include presentations from CMOs from leading brands talking about the effects of AI on their business, demos of the world’s best marketing AI, and conversations about the legal, ethical, and practical challenges the industry faces as it adopts this exciting new technology.

Attendees will return to work the next day with real ideas on how to immediately bring AI into their marketing, brand, strategy, and data work. We hope you will join us!

Who is the BrXnd Conference For?

Marketers

Advertisers

Creatives

Strategists

Executives

Technologists

Data Scientists

Media

We hope that you will join us. Add your email to be informed as we open up tickets.
