As part of this inaugural BrXnd Conference, we will be running an “ad Turing test.” For the unfamiliar, the “Turing test” was a game proposed by famed computing pioneer Alan Turing as a way to test the intelligence of a machine. In his original thought experiment, he wondered whether you could build a computer that could fool a person into believing it was human.
With the rapid growth of AI tools and some examples of uncannily good ad copy and images generated by machines, we believe it’s a perfect moment to ask the same question of advertising experts: can they correctly identify whether an ad was produced by a computer or a person? We’re betting that recent advancements in AI make this way more difficult than people would believe.
For our test, we will be recruiting an esteemed panel of judges from across the brand and advertising world to make the call on whether AI is up to the task of besting human creatives. The format will be a print ad (8.5x11) for a fictional brand, with the brief distributed to all participants. Humans and computers alike will have one month to deliver their best work against the brief to the panel, who will attempt to guess whether each ad was made by a computer or a person.
We are recruiting two types of participants for this event:
Creative teams at advertising schools: These teams will represent the humans and will produce on-brief ads in a traditional way. All student entrants will get free entry to the BrXnd Conference in NYC and be featured on stage in front of an audience of industry professionals.
AI teams: These teams will represent the machines and be responsible for producing on-brief work entirely with models. The full rules for these teams are available upon request and will be distributed, but the basic gist is that entrants must use a model (or combination of models) and may not do post-processing, such as programmatically placing logos or text without a model.
The spirit of this event is that we want to get an accurate picture of the current state of AI in the advertising space. To that end, we are asking human and AI teams to work off the same brief and assets to produce a final ad product that will then be judged by a human jury of marketing leaders from across brands and agencies.
Although there is no prize beyond a commemorative trophy, we do want to create a level playing field that ensures an accurate portrayal of the capabilities of both our human and computer participants.
Let’s get on to the rules.
If you are a human, you can stop reading at this point. Those are all the rules you must follow.
To ensure we are getting an accurate view of the technology, there are additional rules we expect our AI participants to follow:
We are proud and honored to be joined by an incredibly esteemed jury of industry leaders who will be lending their expertise to help us understand where we are in the lifecycle of this technology.
Chief Brand Officer, Esprit
Co-Founder & Executive Chairman, New Stand
Chief Creative Officer, Squarespace
Penguin-in-Chief, CMO Huddles
Creative Director and Co-founder, Otherward
Founder, Weightless.co
Founder & CEO, The Liberty Guild
Director, Brand Solutions Marketing
Founder, Sunday Dinner
Founder, Utendahl Creative
Head of Brand Engagement, Verizon
CEO, Canvas Worldwide
Sr. Vice President of North American Whiskeys Portfolio, Diageo
Global Chief Creative Officer, R/GA
CMO, EY (Ernst & Young)
BrXnd is the brainchild of Noah Brier. Noah has spent the last twenty years operating at the intersection of marketing and technology, first at award-winning agencies and later as an entrepreneur. In 2011 Noah co-founded Percolate, the world's leading content marketing platform. Percolate worked with brands like Unilever, GE, and Google and was backed by Sequoia, First Round Capital, and Lightspeed. The company was acquired by Seismic, the leading sales enablement platform, in 2019.
Tim is a researcher working at the intersection of law, policy, and emerging technologies. He formerly led global public policy for Google on artificial intelligence and machine learning and was director of the Harvard-MIT Ethics and Governance of AI Initiative. Most recently, he was a Research Fellow at the Georgetown Center for Security and Emerging Technology, working on topics of geopolitical competition around AI and the security impact of deepfakes.
Interested in taking part as either an AI team or a student? We'd love to talk to you. Please get in touch.