The BrXnd Marketing X AI Conference is coming to NYC on 5/16.

A Year of Living Artificially

Reflections on the last twelve months of tinkering at the intersection of marketing and AI.

Transcript

[MUSIC PLAYING] Thank you all so much for being here.

Let me get my slides on here.

OK, here we go.

Thank you so much for being here.

Thank you so much for so many of you coming back again.

So it was amazing last year.

I got a lot of great feedback.

Hopefully, I solved the sort of two biggest issues people had.

One was getting upstairs.

We tried to make it a little smoother.

And the second was that the chairs were very uncomfortable last year.

That was a large point of feedback.

So I ordered new chairs for us.

And hopefully, it's a little better.

So a few other logistical things before we dive in.

Everybody should just take a picture of that.

Just grab your phones.

That way, you can always show it to someone else who needs the Wi-Fi or needs the agenda.

Everything is there.

You're all good.

I'll give you all a second.

Looks like it.

So now, everybody can turn to the person next to them and just ask them to give them the agenda.

OK, and then, of course, before I get started, I just wanted to say a really big thank you for all the people who have supported this.

Airtable, Brandtech Group, Focal Data, Brandguard, Red Scout, Canvas Worldwide, Distillery, McKinney, Innovo, and Persistent Productions.

Thank you so much.

This really just started as a thing that I was doing because I was disappointed with the state of the conversation around AI and marketing.

And it wouldn't be possible without all of these folks.

So a huge thank you again to all of them.

So the way I opened the conference last year is actually the same way I want to open it this year, which is today is for questions.

I started this because I felt like the state of the conversation happening in this industry just wasn't where it needed to be, like that we could do a better job and we should do a better job.

And we should be asking more questions and not assuming so many answers.

And so that's very much the same mindset that I want to bring to this year and to the next six or seven or however many hours we have.

But I'd say even more specifically this year, one of the feelings I've been having is that we've been boxing this technology in too much.

Even though it's so new, it feels like people are feeling more and more sure about what it is and what it isn't.

And so sort of beyond it just being for questions, I want it to be about probing and provoking.

So my ask for all of you is just sort of like come in with an open mind, throw away sort of all of the ideas you have.

I've tried to program the event as much as possible to be about people doing things.

There's a rule that everybody who was coming to talk agreed to, which is no speculating.

So it's about sort of having things that you can go back to work with tomorrow and put into action.

Looking back at the conference last year, when I really think about sort of what I got most wrong, I'd say the thing was just like, I really thought last May that by the end of the year, we'd see sort of adoption of this technology be everywhere in brands and in marketing organizations.

And I'd say looking back, we have not quite gotten there.

I think we could probably all agree for a whole bunch of different reasons.

At the same time, obviously like interest in this has just sort of continued to skyrocket.

We have better models.

You can't go to a conference or a dinner or be in a meeting and not talk about this stuff, right?

I've probably met with a hundred brands and agencies over the last 12 months.

And I'd say for sort of all of them, AI is at or near the top of their list of priorities, but it's not really translating into action.

And I think that's sort of a very interesting question as to why that is.

I found this funny chart from an IBM survey last November.

And the funny part of it is that purple line is adoption.

And so that should be sort of getting bigger and somehow it's getting smaller.

And it's kind of hard to figure out.

And I think it's sort of fairly indicative of kind of where we are as an industry and how we're thinking about this stuff.

I think people are confused about what AI is and what it isn't, and they're confused about sort of where adoption is.

Everybody is in that sort of middle column of exploring.

And that's okay.

Like I think sort of more than anything else, I want this to be a day of reassurance.

Like I don't think anyone should have the answers.

I don't think anybody does have the answers.

I think if they tell you that they have the answers, they're lying to you.

And so like, it's all good.

And I think one of my biggest takeaways, I ran an enterprise software company for about a decade and we worked with some of the largest brands in the world.

And it was often very frustrating to me to see how slowly they adopted new technology.

And over time I came to realize I just needed to sort of flip my whole sort of purview on that, which is that, you know, these companies are really good at what they do and they're really good at protecting their brands and their assets.

And, you know, sometimes the speed that they move, even though it can be frustrating, and I'm sure it's frustrating inside, it's certainly frustrating when you're outside, but you know, there's sort of method to the madness.

And it is all good also, because it's just, this is weird.

This is the strangest technology I certainly have ever experienced.

Like this is such a weird piece of tech.

And I don't think any of us really know what to do with it yet.

And, you know, again, that sort of makes it okay.

We're all figuring it out.

We're all on this journey.

And that is sort of the vibe for the day.

There'll be lots of talks about vibe today.

You know, fundamentally AI is counterintuitive, right?

Like, and this has been a sort of continual rallying cry for me, something Tim Wong is going to be speaking about a lot, and I've stolen like half his slides from last year just to represent.

You know, it's basically, AI is bad at everything that computers are good at.

The canonical example of this is math, right?

I'm sure you've all sort of seen some version of this, but you know, if you ask an AI to do math and you ask a computer to do math, you get answers that look fairly similar.

Like here's a five and a six digit number multiplied together.

But when you zoom in a little more on the AI answer, it turns out that it's incorrect.

It looks right.

The beginning is right.

The end is right.

The middle is not right.

And the reason for that is that AI is not running a deterministic process.

Math is deterministic, right?

You do the same thing every time.

You multiply something the same way.

The answer is the same every time.

That's not what AI does at all.

AI is something completely different.
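That difference is easy to sketch in code. Below is a toy illustration, where `toy_llm_multiply` is an invented stand-in for a model, not a real one: exact multiplication returns the same answer every run, while a process that "writes" the answer digit by digit from a probability distribution can drift in the middle, just like the example on the slide.

```python
import random

def calculator(a: int, b: int) -> int:
    # Deterministic: the same inputs always produce the same output.
    return a * b

def toy_llm_multiply(a: int, b: int, seed: int) -> int:
    # Invented stand-in for a language model: it "writes" the answer
    # digit by digit, and each middle digit has a chance of being
    # sampled wrong -- so the beginning and end look right while the
    # middle drifts.
    rng = random.Random(seed)
    digits = list(str(a * b))
    for i in range(1, len(digits) - 1):  # leave first and last digits alone
        if rng.random() < 0.3:
            digits[i] = str(rng.randrange(10))
    return int("".join(digits))

# The calculator is stable across runs; the toy "model" is not.
assert calculator(48271, 912604) == calculator(48271, 912604)
print(toy_llm_multiply(48271, 912604, seed=1))
print(toy_llm_multiply(48271, 912604, seed=2))
```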

And that makes it weird, right?

And the flip side of that is that it makes it good at all these things that computers are bad at.

Like not only bad at, but like we would never even imagine asking them.

So, you know, I showed this last year.

I'm sure many of you have seen it.

I've seen some version of them, but these are AI generated brand collabs.

Like this is not a thing.

Are these the best brand collabs in the world?

Like, of course not.

Like, you know, you have designers who do amazing things, but like a computer made these, right?

Like this is not something a computer is supposed to be able to do.

Or another project that I worked on last year with a friend of mine who's a creative director is we built this manifesto generator.

And this is Katie Welch.

Katie is the CMO of Rare Beauty.

And someone sent me this video after we put the manifesto generator on the web.

And this is her reading a manifesto that she generated for her brand at a conference. - So in the year 2020, we've screened 15 years and life's became our reflection.

Rare Beauty emerged, a fearless champion in the arena of authenticity.

We retracted the digital masquerade of flawlessness.

We dismantled the filters that blur our true selves.

We cast shadows on the illusion of perfection.

We believe in beauty maintained, unfiltered, and unequivocally real.

We believe in the power of every freckle, scar, and asymmetry.

We believe in the radical notion that beauty should not conform.

We call it a bold, dispirited, the original.

Dare to dare the rare, dare to define standard.

Join us in a revolution where every selfie is a statement.

Here's to the rare ones, to the real beauty waiting to break free.

Shout at the glass, the false ideals.

Let beauty, your beauty, define the world. - So is it the best manifesto that's ever been written?

Definitely not.

But is it absolutely crazy that a computer came up with this?

Yeah, and it's just mind-blowing, right?

And if you look at them side by side, you sort of really see just how strange it all is, right?

Essentially, computers are good at all of these things, like math and information storage, that not only is AI not good at, it's just sort of not something that you can imagine doing with it, or you can imagine, but it doesn't work.

But on the flip side, AI can do these creative tasks that, like, if I asked you five years ago to have your computer write me a sonnet, it just would be a nonsensical thing to say, right?

And today, it's just sort of this reality we live in.

It's the first thing everybody wants to show you when they try out ChatGPT, is they wanna show you how it can write this great poem.

If I see another ChatGPT poem, I'm gonna die.

But, you know, again, this sort of line is blurry.

The line between computers and AI is blurry.

It's even blurry for the computers themselves.

If you ask ChatGPT if it's a piece of software or an AI model, it tells you it's an AI model.

If you probe it a little more and you say, but, well, I'm accessing you through something, it says, yes, you're actually accessing it through software.

So we are accessing these tools through software, through these deterministic processing machines, but they are non-deterministic in their nature.

I've been joking that AI fails the duck test, right?

So the duck test is if it looks like a duck and it swims like a duck and it quacks like a duck, then it's a duck.

So AI looks like software, it feels like software, it sounds like software, but it's not, right?

And that's weird.

And so, you know, to come back to sort of like this adoption thing, like it's fine that we all feel a little weird about this because it is very weird.

And I think, you know, one of the ways we deal with things when they're feeling weird is we try to come up with analogies, right?

This is something I did last year.

I offered this analogy.

Maybe one way to sort of conceive of AI is that we think of it as like this, this Ikea box.

And instead of going to Ikea and choosing a mound bed or a bookcase or that little cart that everybody has, we just buy this big box.

And when we get home, we decide what it's gonna be.

And we decide it's gonna be a bed and we could decide it's gonna be a desk.

We could decide it's gonna be a set of drawers, and it rearranges itself sort of magically.

And it's this kind of like fuzzy, amazing thing.

And about a week later, I decided that was a terrible analogy and, you know, not just 'cause like, it's just sort of too confusing and not that good, but also like, we just need fewer boxes.

And I didn't give up trying to find an analogy.

And eventually I found one sitting on the couch. Almost every night after my kids take their showers, we watch a little YouTube, and we have a set of channels that we subscribe to.

And so one of them is called Veritasium.

It's run by a physics PhD named Derek Muller.

And he had this great video where he showed why the physics of bikes don't make any sense.

They're like fundamentally counterintuitive.

And the way he showed it is he built a bike and this was a special bike that you could remotely stop from turning left or stop from turning right.

And the reason that matters is it turns out when you ride a bike, when you turn right, you first turn left.

And when you turn left, you first turn right.

If you look closely, you can see the problem.

Here, I'm trying to turn right, but steering that way puts me off balance.

If you could ride this bicycle, you would find it's impossible to turn left without first steering right.

And it's impossible to turn right without first steering left.

This seems wrong.

I think most people believe you turn a bike simply by pointing the handlebars in the direction you wanna go.

After all, this is how you drive a car.

Point the front wheels any direction you like and the car just goes that way.

But the difference with a bicycle is steering doesn't just affect the direction you're headed, it also affects your balance. - And I just kind of love this as an analogy, one, because it's less about what it can do and can't do, which is sort of the Ikea box, but it's just about sort of how you ride it, right?

Like, you know, anybody who has ever tried to teach a kid to ride a bike, you know that you don't teach them by reading them a book.

You don't teach them by giving them an instruction manual.

You run alongside them and you watch them try to balance and they skin a knee and they eventually figured it out.

And the reason for that is because it's counterintuitive.

It doesn't make sense to us.

But once we learn to ride it, we're good, right?

One funny aside on this that I thought about after I started to like this analogy is a few years ago, a member of my family who will remain nameless that works for a very large consulting company, tried to convince me that he could explain to my then six-year-old how to ride a bike by explaining to her the physics of bike riding.

So he was gonna explain to her these physics of like balance and moving left.

And so I was like, sure, man, go for it, try it out.

And it worked exactly as well as you would think it would, but he was sure he could do it.

But as I started to sort of explore bicycles, 'cause like one of the things I like to do is sort of fall into these rabbit holes, I realized actually this bike analogy was, it went a lot deeper than just this sort of initial thing.

One of the other really interesting kind of connections between bicycles and AI is that bikes were first introduced about 70 years before they were popularized.

So there were all these different configurations of bicycles.

I'm sure many of you have seen pictures of the really large wheel front bicycle.

It wasn't until the safety bicycle was introduced in the late 1800s that bikes got really popular.

There's an 1869 article from the New York Times about velocipede mania.

And that was possible because all of a sudden there were these sort of two equal size wheels.

It's kind of like the transformer model, which was introduced only about five or six years ago now.

But also the connection is sort of deeper still, which is that where the real magic of bicycles came, no one foresaw at that time.

So one of the things that's very interesting connection between bikes and what happened in the future is that the Wright brothers, who obviously were responsible for the first powered flight, they were bicycle makers.

And in fact, they built their first airplane in their bicycle shop.

So it's like we have this sort of weird second order, third order effect of bicycles, which is that these people who got really good at using lightweight, very strong parts, realized they could apply that to flying.

And so we had this thing we could never imagine could happen, happened.

Also bikes helped to bring cars to the road.

So all of a sudden, there's a quote from a 1973 article from Scientific American, "We thought the railway was good enough.

The bicycle created a new demand, which was beyond the capacity of the railroad to supply.

Then it came about that the bicycle couldn't satisfy the demand it had created.

A mechanically propelled vehicle was wanted instead of a foot propelled one.

And we now know that the automobile was the answer."

So it's like, we started with bikes, they are introduced in the early 1800s.

They don't really work all that well.

AI is sort of first, you know, neural networks are first kind of conceived in the 40s and 50s.

People have this idea for how AI could work.

And then, you know, it's not until the safety bicycle is introduced, and then it's really not until we see these second and third and fourth order effects that we see how the technology really plays out.

Maybe the most interesting of all the effects though, to me, is that eventually this Scientific American article from 1973 inspired Steve Jobs to call the personal computer the bicycle of the mind. - I remember reading an article when I was about 12 years old I think it might've been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth.

How many kilocalories did they expend to get from point A to point B?

And the condor came in at the top of the list; it surpassed everything else.

And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

And, but somebody there had the imagination to test the efficiency of a human riding a bicycle.

A human riding a bicycle blew away the condor, all the way off the top of the list.

And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes.

And so for me, a computer has always been a bicycle of the mind.

Something that takes us far beyond our inherent abilities.

And I think we're just at the early stages of this tool, very early stages.

And we've come only a very short distance, and it's still in its formation, but already we've seen enormous changes.

I think that's nothing compared to what's coming in the next hundred years. - So again, you have these sort of interesting and funny knock-on effects.

And obviously he's talking about personal computers in the early 1980s, but he could have easily been talking about sort of where we are today and AI.

You know, fundamentally analogies allow us to transfer knowledge.

Like this is what they do.

In fact, Douglas Hofstadter, who's an AI pioneer and a cognitive researcher, wrote the book "Gödel, Escher, Bach."

He basically has this amazing sort of way of explaining analogies, which is like, he believes they're so fundamental to humans, he calls them the core of cognition, but he also uses them as a way to explain why babies can't remember things from when they're little and why it feels like life is moving faster as we grow older.

And what he says essentially is that it's all due to this thing he calls chunking, which is that they lack the experience that allows understanding or perceiving of large structures.

And so nothing above a rather low level of abstraction gets perceived at all, let alone remembered in later years.

And that's as opposed to this thing that happens as we grow older, where we have more chunks, they grow in size and number, and consequently, we automatically start to perceive in a frame even larger events and constellations of events.

And so for him, analogies are how we order the world.

And in fact, he's an AI researcher, and he believes that analogies are fundamental to like having truly successful AI, like their ability to analogize and generalize problems is what makes them so amazing.

It's called a GPT, a generative pre-trained transformer, right?

This is what they do sometimes.

So, you know, one of the big problems is they're not always that good at generalizing and transferring that knowledge.

So this is a video of DeepMind training an AI to play Breakout, the Atari game.

As an interesting aside, Breakout was a game designed by Steve Wozniak and Steve Jobs when they were at Atari.

And what you can watch here is the AI is sort of like not very good at the beginning.

They don't give it any instructions.

They don't tell it anything about Breakout.

They just tell it to optimize the score.

And so it's using reinforcement learning to do that.

And eventually it gets really amazing, right?

And it develops the strategy.

Any of you who've ever played Breakout, I'm sure you've tried to play this strategy, which is that you try to break through the side and then you try to get to the top and then you try to bounce along.

So it not surprisingly becomes the best Breakout player that has ever existed, right?

It gets very, very adept at getting up that left side and knocking all the blocks out safely.

And it's good to go.

And it's better than any human could ever do until somebody had the idea after DeepMind published this research to move the paddle down by five pixels.

And when they moved the paddle down by five pixels, the machine was completely unable to play Breakout anymore.

Right, so it hadn't generalized any of its knowledge at all.

It had learned to play this specific pixel configuration of Breakout.

But if I gave my six-year-old Breakout and then I moved the paddle down five pixels, she'd have no problem, right?

She's an analogy machine.

She knows how to do that.

In modeling, machine learning, AI, anywhere, this is the act of fitting the data, right?

So if you take a plot of data, this is sort of a very random plot of data I drew, and we try to draw a line, right?

You might say the line should look like this.

We fit the data, right?

And if you overfit the data, you draw a line like that, right?

It's so perfect.

That is the perfect line for that data.

But if you give it one more data point, the odds of that data point being up at the top of that red line are very, very low, right?

Like that line is almost definitely not going to still be the correct line.

And that's what overfitting is, right?

It's too narrowly drawing conclusions from not enough data.

This is a huge problem across machine learning.

It's a huge problem across modeling.

Any of you who have ever worked with a CFO on a forecast have surely had to deal at some point with overfitting.
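The fitting-versus-overfitting idea above can be seen in a few lines of code. The numbers here are made up for illustration: five noisy training points that roughly follow a line, one held-out point, a simple least-squares line, and a model that "perfectly" memorizes the training data.

```python
# Noisy points that roughly follow y = 2x, plus one point the models never see.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8)]
test = [(6, 12.1)]

def fit_line(points):
    # Ordinary least squares for y = m*x + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return lambda x: m * x + b

def fit_memorizer(points):
    # The overfit "red line": zero error on every point it has seen,
    # useless on anything new.
    table = dict(points)
    return lambda x: table.get(x, 0.0)

def mse(model, points):
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

line, memo = fit_line(train), fit_memorizer(train)
print("train error:", mse(line, train), mse(memo, train))  # memorizer is "perfect"
print("test error: ", mse(line, test), mse(memo, test))    # the line generalizes
```

The memorizer wins on the data it has seen and loses badly on the new point, which is the whole story of overfitting.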

So that is kind of how I feel we've gone wrong over the last 12 months: we've shrunk our analogy too much, we've gone with sort of too small of one, and we need to kind of reopen that aperture.

So that's really what I mean when I say today is for probing and provoking: I just want to kind of open it up more again, because I think, you know, it's like 1869 and we're in velocipede mania, and we have no idea that there's an airplane and a car and a personal computer 150 years down the road, right?

Like this is, it's just hard for us to know all these things.

And when I talk about probing, I'm a huge fan of Marshall McLuhan.

He described his process as probing. - The medium I employ is the probe, not the package.

That is, I tend to use phrases, I tend to use observations that tease people, that squeeze them, that push at them, that disturb them, because I'm really exploring situations.

I'm not trying to deliver some complete set of observations about anything. - As an aside, I love Marshall McLuhan, but this is the greatest excuse ever for making no sense all the time, is that you just tell everybody you meant to do it.

But you know, I think this is a great way to think about analogies, right?

They're these sort of probes of reality.

They're the thing we use to sort of poke and prod and test and see if this thing sort of still makes sense.

And so what I wanna do with the rest of my time, before we really get into the day, is offer up a few probes.

And the point of these is to try to sort of flip some of the thinking.

Because again, I think like we've gotta sort of get outside of some of these boxes that we've put this technology in.

They're all structured in this way, where it's this over that.

You know, I spent a lot of my career doing strategy and fundamental to doing strategy is making a decision between two things, right?

Like the difference between good strategy and bad strategy is whether a decision was made.

And so I've got seven of them.

It's react over generate, systems over models, extract over write, vibes over evals, bottom up over top down, examples over prompts and process over output.

So I'm gonna quickly go through them.

So the first one actually, react over generate, it comes from a story.

I was sitting down at the World Trade Center with Ivan Kaiser, who's here today.

Ivan's the CEO of Red Scout.

And we both seem a demo for a AI product that was meant to help strategists.

And I think both of us felt like this was weird.

We both, you know, cut our teeth as strategists and run teams of strategists.

And we saw it and it was like, you know, it was all about helping them come up with more insights.

And you know, if you've run strategy teams, actually coming up with more insights is not like a problem that you have, right?

Like hardly anyone needs more insights.

It's like, how do you get better ones?

How do you narrow down the ones that you have?

How do you make choices between them?

And so as we sort of talked about it more and tried to figure out, you know, why we both had this tingly feeling, we were like, this is not quite right.

We started digging in: okay, you know, Ivan runs a large team of strategists, so where do they actually sort of need the most help?

And, you know, the answer is like, well, they need sort of the help in packaging their ideas, right?

They need feedback from him on sort of how to best package it for clients, how to best sell it, how to best position it, how to best build a business case around it.

It's not like they don't need more insights, right?

That actually is the thing they're best at and the thing they like doing most.

And so what we came to is that we're so focused on the generative ability of these models that we forget that they're amazing at reacting and giving feedback.

And actually in most organizations, that's where the bottleneck is.

It's not that people need more insights, people need more creative work.

I had somebody from an AI company once try to convince me that the biggest problem in marketing is that they need more ideas.

And I was like, dude, I don't think that's in the top 100.

Like, you know, marketers need better ideas.

They need to narrow down the ideas they have.

And so using these models as feedback is amazing.

And what's funny is originally I didn't have a good illustration for this.

And what I did was I took my whole talk, I recorded it, I transcribed it and I gave the PDF and the transcription to Claude and I asked it for feedback on my talk.

And it gave me these absolutely amazing feedback points.

One of which was that I needed a better illustration for this section.

And so this is it.

Like these things are amazing feedback machines, right?

They're incredible.

And again, if you think about most organizations, most large organizations, most of you are executives in the room.

Like you think about what your job is, 90% of your job is giving feedback to your teams.

And what could happen if that could happen earlier?

We're so focused on sort of automating these lower level tasks, but like actually maybe the sort of thing we should be automating is ourselves, right?

Like that ability to give feedback earlier in the process so that things can continue to move.

Feedback is a real bottleneck and AI can be a great reaction engine.

So number two, systems over models.

So I said AI failed the duck test, right?

It looks like software, it feels like software, but it's not software.

And so often people say, okay, like how does Siri do this thing where it can do math?

Like Siri uses AI, but it can still do math.

And the answer to that is they're combining the AI with a calculator, right?

And it's the way to do it.

You run a deterministic process on the math.

So it knows that you're asking a question about math and it just has the calculator do the math.

That's great.

That's basically how all of the good AI stuff works.
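That Siri-plus-calculator pattern can be sketched in a few lines. Everything here is illustrative: `fake_model` is an invented placeholder for a real LLM call, and a real router would use a proper parser and classifier. The point is only the shape: detect the deterministic task, hand it to deterministic code, and let the model handle the rest.

```python
import re

def calculator_tool(expr: str) -> str:
    # Deterministic path: parse "a op b" and compute it exactly.
    a, op, b = re.fullmatch(r"\s*(\d+)\s*([*+\-])\s*(\d+)\s*", expr).groups()
    ops = {"*": lambda x, y: x * y,
           "+": lambda x, y: x + y,
           "-": lambda x, y: x - y}
    return str(ops[op](int(a), int(b)))

def fake_model(query: str) -> str:
    # Invented placeholder for the non-deterministic model call.
    return "(model-generated answer)"

def answer(query: str) -> str:
    # Route: arithmetic goes to the calculator, everything else to the model.
    m = re.search(r"\d+\s*[*+\-]\s*\d+", query)
    if m:
        return calculator_tool(m.group(0))
    return fake_model(query)

print(answer("what is 48271 * 912604?"))  # handled by the calculator, exactly
```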

And so I think we have so many conversations about AI where we're imagining it as this sort of closed-off model, but the reality is it exists inside this ecosystem and we combine it with software.

And when you combine it with software, that's where the real power is.

Lots of people are talking about RAG, retrieval augmented generation.

That's where you sort of have data and you are able to basically access it as you chat.

So you can sort of look something up and then you can pull it back in.

I've done a whole bunch of projects over the last year with people.

I did a project with a friend; he had 40 hours of interviews that he had done, and his team needed help to sort of boil it down and put it into a pitch, into the presentation deck for the client.

And so we loaded it all in.

We put it into this vector database and we created this chat interface where the team could sort of chat with the generalized version.

The AI would grab raw transcript data.

It would feed the verbatims back to the AI and then it would answer the questions.

But the really amazing thing is you could then click in and get to the verbatim from the interview, right?
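Here is a toy sketch of the retrieval step in that kind of pipeline. Everything in it is invented for illustration: the chunks stand in for the interview transcripts, and plain word overlap stands in for the embedding similarity a real vector database would compute.

```python
import re

# Invented transcript chunks standing in for the 40 hours of interviews.
CHUNKS = [
    "Interview 3: the client said pricing was the main barrier to switching.",
    "Interview 7: respondents loved the packaging but found the name confusing.",
    "Interview 12: most people discovered the brand through a friend.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    # Score chunks by shared words and keep the top k -- the "R" in RAG.
    # A real system would rank by vector similarity instead.
    ranked = sorted(chunks, key=lambda c: len(tokens(question) & tokens(c)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    # The retrieved verbatims get pasted into the prompt, so the model
    # answers from the transcripts -- and the UI can link back to them.
    context = "\n".join(retrieve(question, CHUNKS))
    return f"Answer using only these excerpts:\n{context}\n\nQ: {question}"

print(build_prompt("What was the main barrier to switching?"))
```

Because each answer is grounded in a retrieved chunk, the interface can link straight back to the verbatim, which is what made the project work.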

This is why RAG has sort of stormed the enterprise.

It's definitely the first big enterprise use case.

I've been experimenting with building my own RAG for just all my own writing and myself.

But the point here is really the model is just this one piece of the puzzle.

I think like anytime you're in a conversation and people are sort of talking about or describing the model as if it exists in this box without anything else, it just doesn't make a lot of sense.

So the third one is about extracting over writing.

Again, there's so much focus on writing.

There's so much focus on generation.

The first thing we do is we ask it to write a poem.

We do this thing, but like the most amazing use case still, and the thing that really hooked me on AI from the beginning was that I realized that I could use it to turn unstructured data into structured data.

So I could scrape a page, I could grab all the text, and then I could tell it how I want it structured, and it would give it to me back in structure.

And that is like absolutely amazing.

Anybody who has ever had to build a web scraper knows what a pain it is, because if that page changes by a tiny bit, everything breaks, right?

There are these super brittle applications, and this just solves the problem.

Like I will never do it another way again, period.

The first thing I built with it is I built this landscape of AI tools, and I was able to build a pricing comparison page for all of them, right, for all the categories.

And I pulled all that data.

I just literally copy and pasted pricing pages, and then the AI ran it all through and pulled out the data in the same format so that I could compare everything.
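The unstructured-to-structured pattern is simple enough to sketch. The prompt template, schema, and sample reply below are all invented for illustration; the extraction itself is done by whatever model you send the prompt to, and the code you own is mostly validation of what comes back.

```python
import json

# Ask the model for a fixed schema instead of writing a brittle scraper.
# (.replace is used instead of .format so the JSON braces survive.)
SCHEMA_PROMPT = """Extract every pricing tier from the text below.
Return JSON only, shaped like: [{"tier": str, "price_usd_per_month": number}]

TEXT:
{page_text}"""

def build_prompt(page_text: str) -> str:
    return SCHEMA_PROMPT.replace("{page_text}", page_text)

def parse_response(raw: str) -> list[dict]:
    # Never trust model output blindly: parse and check the schema.
    rows = json.loads(raw)
    assert all({"tier", "price_usd_per_month"} <= row.keys() for row in rows)
    return rows

# An invented example of what a reply might look like for a pasted pricing page:
sample_reply = ('[{"tier": "Pro", "price_usd_per_month": 20},'
                ' {"tier": "Team", "price_usd_per_month": 30}]')
for row in parse_response(sample_reply):
    print(row["tier"], row["price_usd_per_month"])
```

Because every page comes back in the same shape, the comparison table builds itself, and a redesigned pricing page no longer breaks anything.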

Recently, I have a newsletter called "Why is this Interesting?"

and we built a product recommendation site.

We took 1,500 emails that we had sent out.

I used AI to suck out all the products, and then we were able to use AI to go find the right link and then write descriptions.

And again, like this is wild.

I had wanted to do this project forever, but it just wasn't feasible before, right?

And it wasn't about having the AI write anything.

Like we have these amazing writers who write this newsletter, but like having to go through 1,500 emails and find all the product recommendations, there are like 7,000 product recommendations in there.

That's like a real amount of work, right?

This unstructured-to-structured capability opens up a whole new world of possibilities, and it has very little to do with the ability of this thing to write poems.

So next one is Vibes over evals.

This is one of my favorites.

I have a real passion for this Vibes thing.

So every time a new model comes out (this one is Gemini Ultra), you see a chart like this, right?

This chart is how it did on the evaluation benchmarks, right?

The one at the top is this MMLU.

MMLU stands for Massive Multitask Language Understanding.

And it's basically a test that looks a lot like the SAT, honestly, right?

It's a whole bunch of questions, and they ask the model these questions, and it gives them these answers.

The problem is, as any of us who spend time with these models know, this is a complete nonsense way to get at what actually works in them, right?

This is like asking a creative director to take the SAT to decide if you're gonna hire them or not.

It just like, it doesn't make a lot of sense.

I prefer this model from Tim, who's gonna speak a little later, where he says that language models should come with tasting notes like wine.

Tim and I, I think, by the way, if anybody wants to get involved, we're gonna work on the first issue of AI Aficionado, where we're gonna try to get Sam Altman in a smoking jacket for the cover.

But the point that Tim made last year, and I wanna repeat is that hallucinations are a feature of these models.

They're not a bug, right?

That test, the MMLU, is trying to measure how much they hallucinate and saying it's a bad thing, but I actually think it's a pretty amazing thing.

Again, like all of these are hallucinations.

These shoes don't exist, but like even more so, this is one of my favorite examples.

Neither of these shoes was meant to be a Nike, right?

If you look at the bottom, one's a Dunkin' Saucony, the other is Doritos.

They got swooshes on them, right?

And technically that's a hallucination, but perceptually, if we went out on the street and asked a bunch of people what a sneaker is, they'd say it's a swoosh, right?

It's like Nike has the mind share.

The AI is perceptually correct.

And so we just have to lean into hallucinations more.

I think this is sort of the key.

Next one: bottom up over top down.

So I tried to find the best image for this and I ended up having to make one, but having spent the last 12 months working inside these organizations, working with a lot of really big companies, what I've seen is that shit rolls downhill, right?

CEOs are asking CMOs, CMOs are asking VPs, VPs are asking agencies, and so on; everything is rolling down.

And what's interesting though is like, on one hand, you have people inside the organizations at the bottom who are finding real use cases, but then the people at the top, the thing they should be doing rather than asking for this is actually giving them access.

I heard this amazing story from a friend who works at a very large packaged goods company, and they just did a huge training where they paid a lot of money to do AI training.

And then afterwards she asked for access, and she was told she had to get on the 1,500-person waitlist for ChatGPT.

And it's just like, this doesn't make any sense, right?

I think what we have to look for, and what we're seeing inside these big companies, is real bottom-up use cases without top-down access.

And so I think, given the people that we have in the room, that has to be the mandate, like you have to drive access to these tools.

Like people are not going to be able to imagine what's possible if they don't have access to them.

So just two more of them.

The next one is examples over prompts.

Earlier this year, I did a pretty big project with a large brand about prompting.

And we were coming up with a whole bunch of prompts to help the marketing organization.

And that led me to go back and read a whole bunch of the research on prompting.

And as much as we see LinkedIn posts about the perfect prompt and all these things, what I realized from reading this research firsthand is that you can break pretty much all prompts down into three buckets, right?

You have zero shot and zero shot is where you just ask the model a question and it answers, right?

So categorize this list of job titles, help me summarize this thing.

It's the thing we do every day if you use ChatGPT or Claude or any of these.

Second one is few shot.

So for few shot, we give it a bunch of examples and we use those examples to help it write the content.

And the amazing thing about few shot is it comes up with quantitatively better answers, right?

So if you think about what you're doing with few shot prompting, you are training the model to write like you, right?

And you're just training it and it's able to learn from three or four or five examples.

And few shot is a really, really powerful technique.

And then the last one is chain of thought.

And chain of thought came out of Google.

This is the original paper that it came from.

And chain of thought is essentially that, in addition to the examples, you explain the reasoning, right?

And so, AI is not good at doing logic.

We talked about how it's a non-deterministic process.

So it can't answer questions like, Noah has five tennis balls and Ron has three tennis balls.

And Noah gives Ron two tennis balls.

How many tennis balls does Ron have?

I have no idea.

I was not following my own chain, but the AI is not very good at that.

But it turns out that if you explain to it how to think through that problem, it's able to emulate that and then follow the logic.

And when you do this, when you put all these prompting techniques together, you get these amazing results.
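The three buckets can be sketched as simple prompt builders. The wording below is illustrative, not from any official prompting spec, and the worked example reuses the tennis-ball problem from above.

```python
def zero_shot(question: str) -> str:
    # Just ask: no examples at all.
    return f"Q: {question}\nA:"

def few_shot(examples: list[tuple[str, str]], question: str) -> str:
    # Prepend worked question/answer pairs so the model imitates them.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(examples: list[tuple[str, str, str]], question: str) -> str:
    # Like few-shot, but each example also spells out the reasoning steps.
    shots = "\n".join(
        f"Q: {q}\nA: {r} So the answer is {a}." for q, r, a in examples
    )
    return f"{shots}\nQ: {question}\nA: Let's think step by step."

prompt = chain_of_thought(
    [("Noah has 5 tennis balls and gives Ron, who has 3, two more. "
      "How many does Ron have?",
      "Ron starts with 3 and receives 2, so 3 + 2 = 5.",
      "5")],
    "A pack has 12 pencils and you give away 4. How many remain?",
)
```

The only difference between the buckets is how much worked material you put in front of the question, which is exactly the "examples over prompts" point.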

So this is from a Microsoft paper from December.

They came up with a technique called MedPrompt, which is a combination of a whole bunch of prompting techniques.

And what they found is that, if you had a good enough prompt, you could make a general model perform like a fine-tuned model.

And so what's the takeaway from all this?

The takeaway is that we, the people, are really the magic of these systems.

It's our ability to give them the examples, the work that we've done, knowing what's really good, knowing the process we need to go through to create really good work.

That is how you get really believable output out of these machines.

And speaking of output, the last one is just process over output.

So I talked about the manifesto machine.

We built that earlier this year.

I built it with a creative director friend of mine.

When we started, it was really simple.

It was brand, category, and start year.

And we had the AI write a manifesto.

And the AI wrote these really long, like 3,000-word manifestos, and nobody is reading a 3,000-word manifesto.

But there were nuggets in there that were absolutely amazing.

I kept testing with Duck brand duct tape.

And the very first manifesto that came out was, it said that Duck brand duct tape was the enemy of entropy.

And I was just like, wow, that is awesome.

In fact, if anybody knows any CMOs of duct tape companies, I am very interested in working on an enemy of entropy campaign.

But this was not like a real manifesto.

And so we started to think through like, what else needs to go into a manifesto?

Well, you need length and you need rhythm and you need context, right?

So it didn't always know the brands.

We had to have it go search Google.

You need a CTA, you need a "we believe" checkbox.

So do you want "we believe" statements in your manifesto?

Of course you do.

So then we sort of came up with the final version and it got, again, pretty good.
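A sketch of the kind of prompt such a tool might assemble from those controls. Every parameter name here is an assumption for illustration, and the start year is a placeholder, not a real founding date.

```python
def manifesto_prompt(brand, category, start_year, max_words=300,
                     include_we_believe=True, context=""):
    """Assemble a manifesto-writing prompt from the controls discussed
    above: length, rhythm, context, CTA, and the 'we believe' checkbox."""
    parts = [
        f"Write a brand manifesto for {brand}, a {category} brand "
        f"founded in {start_year}.",
        f"Keep it under {max_words} words, with short, rhythmic lines.",
        "End with a clear call to action.",
    ]
    if include_we_believe:
        parts.append('Include two or three "We believe..." statements.')
    if context:
        parts.append(f"Background from research:\n{context}")
    return "\n".join(parts)

# The year is a placeholder; the brand is the example from the talk.
prompt = manifesto_prompt("Duck Brand", "duct tape", 1950)
```

The design point matches the talk: each control you add forces you to articulate what actually goes into a manifesto.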

You heard Katie read one of those manifestos on stage at the beginning of the talk.

But the point is not like what it did.

The point was that this process that we went through made us reflect on sort of what goes into this work.

What is a manifesto, right?

Like what is research?

What is any of these projects I do?

What I keep finding is that it just makes me better at thinking about how to do these things.

It makes me better at the thing that I need to do because I need to build the thing that can do the thing that I need to do.

There's a great quote that's often attributed to Marshall McLuhan but wasn't actually said by him.

A version of it originally came from Winston Churchill.

But it's basically: we shape our tools, and thereafter our tools shape us.

Like this is what's happening, right?

Like this is what we do.

This is what we continually do as humans.

Or to go back to Hofstadter: sometimes it seems as though each new step towards AI merely reveals what real intelligence is not.

I think that's kind of where we're at.

And that's how I wanted to set up the day is like, this is meant to be fun.

It's very focused on doing things.

Keep an open mind.

Be open.

So what are the takeaways?

Well, we have all these probes.

React over generate.

Systems over models.

Extract over write.

But also I want to leave you with a version of the thing I said last year to close my talk because I think they're all really still true.

So the first one is just probe and ask questions.

The second one is just tinker and play.

Get your hands on it.

Last year I talked about Fingerspitzengefühl, which is German for fingertip feeling.

And then don't trust anybody that sounds too confident, including me.

So with that, thank you very much.

[APPLAUSE] So as the team gets some chairs up, I'm just going to quickly talk about what's going on today.

So after the conference last year, I thought it went really well.

So I put together this set of tenets that I wanted to sort of continue to make sure we programmed around.

One is doers over spectators.

So that's what you're going to see today.

People who are in it, doing it.

Curiosity over overconfidence.

Show over tell.

And then finally, process over perfection.

We have an amazing day.

I am super excited to hear from everyone.

I'm a little bit relieved that I'm now done talking.

And I just need to introduce everybody from now on.

[MUSIC PLAYING]

BRXND is coming to LA for the first time on February 6, 2025 for another full day of marketing and AI—

Two years after launching the BRXND Marketing x AI Conference in NYC, we are ready to take things to California. On February 6, 2025 we will be in Los Angeles to explore the intersection of marketing and AI. Where must the industry go? And, most importantly, what's worth getting your hands on today? Join us in February.

BRXND.ai is an organization that exists at the intersection between brands and AI. Our mission is to help the world of marketing and AI connect and collaborate. This event will feature world-class marketers and game-changing technologists discussing what's possible today.

The day will include presentations from CMOs from leading brands talking about the effects of AI on their business, demos of the world’s best marketing AI, and conversations about the legal, ethical, and practical challenges the industry faces as it adopts this exciting new technology.

Attendees will return to work the next day with real ideas on how to immediately bring AI into their marketing, brand, strategy, and data work. We hope you will join us!

Who is the BrXnd Conference For?

Marketers

Advertisers

Creatives

Strategists

Executives

Technologists

Data Scientists

Media

We hope that you will join us. Add your email to be informed as we open up tickets.
