
Tim Hwang
Author, "Subprime Attention Crisis"
About
Tim Hwang is a writer and researcher working on emerging technologies. He previously served as General Counsel and VP of Operations at Substack and as Google's global public policy lead for artificial intelligence and machine learning. He is the author of Subprime Attention Crisis, a book about the financial bubble in digital advertising. Dubbed "The Busiest Man on the Internet" by Forbes Magazine, he currently researches competitive dynamics in the market for LLMs and regulatory frameworks for online disinformation and misinformation.
Sessions & Events
BrXnd Ad Turing Test
NYC 2023
As part of this inaugural BrXnd Conference, we will be running an “ad Turing test.” For the unfamiliar, the “Turing test” was a game proposed by famed computing pioneer Alan Turing as a way to test the intelligence of a machine. In his original thought experiment, he wondered whether you could build a computer that could fool a person into believing it was human. During this session we will talk about the test, show the results, hear from one of the AI teams on how they approached the problem, and speak to one of the jury members about their experience judging the first ad Turing test.
Hallucinations For Fun and Profit
NYC 2023
Large language models (LLMs) have a tenuous grasp on the truth. They tend to "hallucinate," giving confident answers that often have little bearing on reality. This has typically been considered a weakness of the technology and a major obstacle to practical business use of LLMs. This session will argue the opposite: that hallucination is not only the most powerful characteristic of this technology, but also the one most likely to radically reshape how marketing works. We'll talk about why that's the case and demo some experiments that leverage hallucination as a feature, rather than a bug, of LLMs.
How to Talk So Language Models Will Listen
NYC 2024
Should you be polite to your language models? If so, why? This talk will explore recent research examining the numerous curious ways that language models have unexpectedly inherited human norms, practices, and foibles. We'll talk about what this work tells us about what language models are, how they work under the hood, and the future of our interactions with this technology.
Wrap Up: A Day at the Intersection of Marketing and AI
NYC 2024
Noah, Mark, and Tim will reflect on the day and this exciting technological intersection.