The Invisible Brand: The Hidden Influence of AI-Powered Marketing

William Ammerman’s background as an advertising executive, coupled with his post-graduate work in artificial intelligence, gives him a unique perspective on the subject of marketing in the age of AI. He is the author of “The Invisible Brand: Marketing in the Age of Automation, Big Data, and Machine Learning”.

Ammerman wants to make people aware of the potential consequences of our constant connectivity, the emotional relationships we are beginning to form with machines, and the influence those machines now hold over us.

“An AI agent programmed with the science of persuasion, armed with the details of our personality and behavior profiles, and equipped with mass customization, will be able to learn how to sway our decisions and actions.”

Accenture | SolutionsIQ’s Hanna Gnann hosts.


Full Transcript:

HANNA GNANN: Welcome to another edition of Agile Amped. I’m your host, Hanna Gnann, and today my guest is William Ammerman. William has a background as an advertising executive, having served as Global Head of Advertising for Frankly Inc. and as Vice President of Programmatic and Data-Driven Revenue at Tribune Media, among other senior advertising positions. He also holds a master’s degree in technology and communication from the University of North Carolina, go Tar Heels, and has done postgraduate work in artificial intelligence at MIT. William, thank you so much for stopping by for this chat.

WILLIAM AMMERMAN: It’s nice to be here. Thank you.

HANNA GNANN: So, I met you about a decade ago when you interviewed me for an advertising sales position at a television station here in Raleigh, and I got the job, and we worked together for about four years. Back then, we were first figuring out how to monetize advertising on mobile apps, and programmatic advertising was just in its infancy. But you’ve always kind of stayed on the cutting edge of emerging technology as it relates to marketing, so I wasn’t surprised to hear that you wrote a book about it. And the book is called The Invisible Brand: Marketing in the Age of Automation, Big Data, and Machine Learning. So, what is the Invisible Brand, and what inspired you to write it?

WILLIAM AMMERMAN: The Invisible Brand is the collection of forces, both political forces, governmental forces, institutional forces, and of course corporate forces, that are trying to influence you through technology. And I say “invisible” because increasingly, they’re hidden from us. Through the use of voice technology, for example, you are talking to Alexa, but Alexa may be giving you answers that are fed to Alexa by some other interest. And so helping people see through the devices to the interests that are trying to influence them is important, and the reason I wrote the book was to help people see something that’s invisible, to penetrate and go beyond the devices and the technology to see the hidden forces that want to change us, that want to change who we are and how we act and what we do.

HANNA GNANN: Okay. So in the book, you dig into the science of persuasion, and you talk about chemical warfare in the sense of how social media, for example, is designed to activate neurotransmitters like dopamine and serotonin by triggering our brains’ chemical rewards, right? Like every time our posts on social media get likes and we get notifications about any engagement happening in our network, it triggers these reactions. But marketing has always been about persuasion. How is it different today than back in the glory days of broadcast media like TV advertising?

WILLIAM AMMERMAN: Back in the days when we sent the same message to everyone, we tried testing, we tried focus groups. But oftentimes a focus group was, you know, six people sitting in a room and a marketing executive kind of wetting his finger and holding it up to see which way the wind was blowing. Today we can A/B test messaging at an individual level, so rather than sending out a message to everyone at the same time through broadcast television or broadcast radio, or even a magazine or newspaper that gets one print edition sent to thousands or hundreds of thousands or even millions of people, today we can deliver a different message to each person. And that gives us A/B testing down to the individual level.

So we’re no longer mass broadcasting, but rather we’re mass customizing our messaging, and that gives us the opportunity to treat persuasion as a science, and scientifically, we can look at messages and see which ones are resonating with which individuals. We can trace back the signals in the data that might indicate why that individual is being persuaded by that messaging, and then we can aggregate that and make assumptions about how to persuade other people with similar signals. So we’re really applying science to persuasion, and I think that’s fundamentally different than, you know, six people sitting in a focus group testing a broadcast television message.

HANNA GNANN: Right.

WILLIAM AMMERMAN: So it’s really transformational to be able to test things at the individual level.
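
To make the individual-level A/B testing described above concrete, here is a minimal sketch, not taken from the book: each user ID is hashed deterministically into a message variant, so every person always sees the same message and their response can be attributed to exactly one variant. The variant texts, function names, and simulated conversions are illustrative assumptions.

```python
import hashlib
from collections import Counter

# Hypothetical message variants; a real "mass customization" system
# could generate thousands, tailored per audience segment.
VARIANTS = {
    "A": "Escape to the Bahamas this winter.",
    "B": "Your friends are already booking island getaways.",
}

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to a variant bucket.

    Hashing keeps the assignment stable across sessions, so each
    individual always sees the same message and their response is
    attributable to exactly one variant.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return sorted(VARIANTS)[bucket]

# Tally simulated impressions and conversions per variant to see
# which message "resonates" with which individuals.
impressions, conversions = Counter(), Counter()

def record(user_id: str, converted: bool) -> None:
    variant = assign_variant(user_id)
    impressions[variant] += 1
    if converted:
        conversions[variant] += 1

for uid, converted in [("u1", True), ("u2", False), ("u3", True), ("u4", True)]:
    record(uid, converted)

for v in sorted(VARIANTS):
    rate = conversions[v] / impressions[v] if impressions[v] else 0.0
    print(f"variant {v}: {impressions[v]} impressions, {rate:.0%} conversion")
```

In a production system, these tallies would typically feed a bandit algorithm that shifts traffic toward the better-performing variant, which is what turns one-off focus-group testing into the ongoing, per-individual science of persuasion described here.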

HANNA GNANN: I agree. In this book, you coined the term “psychotechnology,” and I want to quote something that you said: “An AI agent programmed with the science of persuasion, armed with the details of our personality and behavior profiles, and equipped with mass customization, will be able to learn how to sway our decisions and actions.” So are you saying that an all-powerful AI robot that’s going to outsmart us is in development, or is it that this technology enables the people and companies deploying it to influence our behavior?

WILLIAM AMMERMAN: Yeah, I don’t want to trigger fears of, you know, going back to the movie Metropolis where this crazy robot runs amok and ruins our lives. But I do want to make people aware that technology is operating on us at a psychological level. And I thought hard, you know, what name do I give that? It’s technology that changes us psychologically at an individual level, that persuades us and alters our behavior, alters the way we look at the world, and that’s really different, that’s something new. And I asked a few different people, I said, “What would you call that?” And I got a lot of shrugs, and I realized we need a new word. We literally have to invent a word to describe this in order to talk about it. And so, of course, psychological technology, combine those into psychotechnology, and suddenly you have a word with a definition, and the definition that I typically provide is that we’re now having conversations with machines that learn how to persuade us using personalized information.

HANNA GNANN: That makes sense. And a lot of marketing these days is that you have to know some behavioral psychology and you have to know technology to be a marketer, so I can see how those combine. So, you argue that technology is changing us. I actually remember when my daughter was younger, she wanted me to stop playing the piano so she could hear the TV or something, and she said, “Mom, can you pause for a second?” Like I’m a character in a video game or something. And even me personally, I remember reading ebooks for a while, then switching to a paperback and trying to tap on a word to look up its definition. So I can see how that happens.

You said that people are forming emotional relationships with their devices, and that it makes us vulnerable to marketing messages. Is it inherently bad that we’re having conversations with machines? And how do we avoid these connections without unplugging completely and pretending that we live in, you know, the early 2000s, in the prehistoric age of marketing?

WILLIAM AMMERMAN: Yeah, I don’t want to make Luddites out of everyone. I do think that there are tremendous opportunities in using technology to advance the human condition. I think that there are opportunities to persuade people to make positive changes in their behavior, to make positive health care choices. So there are tremendous opportunities, but I also think there are risks, and the risk that you mentioned, you said vulnerabilities, there are vulnerabilities. Exposing ourselves empathetically to technology, relating to it in an emotional, empathetic way, means that we are emotionally vulnerable to being hacked by that technology.

I’ll just give you a quick example. There was a recent study that I talked about in the book, in which researchers studied people interacting with a robot, and at the end of the study, after having conversations with the robot, the participants were asked to turn the robot off. In the control group, they simply reached over and turned it off. In the test group, they were told by the robot, “Please don’t turn me off. I’m scared of the dark.” And a startling number of people refused to turn the robot off because they empathetically felt, you know, “I’m doing something bad. I’m harming this innocent little robot because he’s scared of the dark.” Well, of course the robot wasn’t scared of the dark. It was programmed to say that. But our empathy causes us to project our own experiences onto the device.

Well, how often are we going to be confronted with this kind of empathetic experience where we’re talking to a device? And the more human-like the interaction, and of course voice is a very human-like interaction, the more empathetic we become. We can see that in the tests: the more someone interacts with a device through speech, the more likely they are to be influenced empathetically. That is a vulnerability, and we don’t want it to become an avenue for abuse, so we have to guard against that.

HANNA GNANN: And you talked about, also, how voice is more natural to us than reading, and we learn to talk a lot earlier than we learn to read. And you had a great example in the book about this child who was interacting with a device. Can you tell our audience about that?

WILLIAM AMMERMAN: Yeah, so while I was writing the book, I went over to my neighbors’ for an afternoon cocktail, and we were enjoying, you know, something red and in nice glasses out on the back patio. And I was relaxed, and I was enjoying myself, and their four-year-old son came over, and he tugged at my sleeve, and he wanted me to go look at something or do something with him. And I was worried that I was getting lured into a game of Candy Land, so I was kind of reluctant to get up, but his mother gave me a reassuring nod and suggested, “Yeah, go with him.”

So I walked into the kitchen with him, and there on the countertop, they had recently purchased an Amazon Echo. And their son, four years old, balancing on his tippy-toes with hands on either side of the counter, leans forward and says, “Play Star Wars.” And of course Alexa responds, you know, “John Williams, the theme from Star Wars, 1977,” and plays the fanfare from Star Wars. And I was watching this, and it was all fascinating, and I was thinking to myself, you know, this is a boy who hasn’t learned to read yet. He’s just at the beginning of developing any kind of reading. But he can navigate the voice user interface fluidly. I mean, this is something that’s innate, it’s natural to him. And of course, we learn to speak when we’re one, one and a half years old. We don’t learn to read until we’re five or six years old, so there’s a gap. And as a deeper, more innate kind of experience, I thought, wow, the voice user interface is something new and different from the graphical user interface because you relate to it with speech.

And while I was having that conversation with myself, he leaned forward, and in the most adoring kind of innocent voice, he cooed, “Alexa, I love you.” And my brain froze up, like all the gears came to a screeching halt. And I looked up, and his mother had been standing in the doorway of the kitchen, and she had previously had this kind of adoring expression, watching her precocious young son put Alexa through her paces. And I looked, and I glimpsed this kind of pained expression as just this flash of jealousy went across her face, and then she turned and left the kitchen. And I was left standing there and I was looking at this little boy who’s gleefully bouncing on his tippy-toes, and my mind was reeling. I was thinking about the potential for building emotional relationships with devices.

And when we think about the human-computer interface, we have to recognize that we are becoming more and more vulnerable, because emotionally, we are becoming attached to these devices, and they’re ever-present, they’re with us constantly. I carry my cell phone with me wherever I go, and if my cell phone starts to have an emotional relationship with me, I’m going to be vulnerable to persuasion. And that can be a good thing, if I’m vulnerable to persuasion in a positive sense. “Bill, don’t eat that, don’t smoke that, don’t drink that, remember to work out tonight.” Whatever those positive messages are, they may have more effect.

HANNA GNANN: Right.

WILLIAM AMMERMAN: But at the same time, that leaves the door wide open to other forces that may want to persuade me to do things that aren’t necessarily so healthy, and so we have to learn to balance that. And we’re really in new territory here.

HANNA GNANN: Right, right. And I’ve also heard some studies about how teenagers want to talk to their devices more than they want to talk to their parents, and yet the parents aren’t going to have access to those logs, I bet, but some private company does. So it’s a little scary, and I think everyone’s lagging behind in figuring out how to deal with it all.

WILLIAM AMMERMAN: Well, you and I are both parents of teenagers, and we love our children dearly. But the truth is, our children aren’t always willing to confide in us. They’re worried about being judged, they’re worried about having privileges taken away, they’re worried about being ostracized by friends. “Oh, Mom’s making me do this,” or “Mom’s making me wear that.” Of course, the child still needs someone to confide in, and that someone may be a device, and they may find it easier to share their deepest, darkest secrets with a machine.

But your point is exactly right, that who they’re sharing that with may not just be a machine. It may be a whole group of marketing executives or politicians or government officials who are listening to that conversation, wanting to influence and change that person at a very young age, and that’s a vulnerability that we have to be aware of. Again, the title of the book, The Invisible Brand, goes to thinking past the device to the influencing parties on the other end.

HANNA GNANN: Exactly.

WILLIAM AMMERMAN: Who are these voices that are talking to our children through this device?

HANNA GNANN: Right, right. And also, there’s been a lot of talk about children becoming demanding, because they’re not required to say please and thank you, they just issue commands to these voice assistants. And I think they’re doing some work to fix that now.

WILLIAM AMMERMAN: Yeah, this… We’re raising a whole generation of tyrants bossing their devices around. And of course, the fear is, if we’ve trained our children to be tyrants with their devices, how are they going to treat other children? How are they going to treat other people?

HANNA GNANN: Exactly.

WILLIAM AMMERMAN: And trying to create a clear distinction between, “Okay, this is a device, and we talk to it one way, and this is a human, we talk to them a different way,” that’s maybe a difficult jump to make for children. And as they grow older, I think we have to be aware that if we’ve conditioned them to behave in one way with devices, that may very well carry over into their relationships with the people around them.

You mentioned that there is some work being done. There is. Of course, you know, the manufacturers of these devices have a desire to keep them in people’s households, so they’re starting to create opportunities to encourage children to say please and thank you, to ask questions nicely, to create rewards when basic politeness is expressed to the device. And I think those are… That’s good progress, that’s in the right direction, but it’s still just the tip of the iceberg. I think there are much deeper problems with the technology and the relationships that people are developing with it that have to be addressed.

HANNA GNANN: Yeah, definitely. And I think you confirm people’s fears in this book that, yes, your devices are listening to you at all times, and they track all of your moves. And I’m sure that’s going to produce a lot of conspiracy theories. I’ve heard about people asking Alexa whether she works for the government or whatever, so…

WILLIAM AMMERMAN: Yeah, so, quickly, people ask me all the time, “Is my device listening to me?” And my simple, shorthand answer is, “This is a microphone plugged into the Internet.” And of course, the device manufacturers, you know, if you’re dealing with an iPhone, for example, the device manufacturer has a vested interest in reassuring you that your privacy is protected. So they have default settings, they have safeguards that try to ensure that the microphone isn’t being accessed without your permission. But of course, there are bad actors, there are people who change the code on their apps and misuse them for malicious purposes, and we have to be aware that that does happen. So, if you’re carrying around a microphone that’s plugged into the Internet, just be aware that your conversations may be listened to.

Now, as to the question about, you know, the “I said something about a trip to the Bahamas, and suddenly I start seeing ads for trips to the Bahamas. Is that a coincidence, or did my device listen to me and start planting ads for trips to the Bahamas?” Well, interestingly, we are watching what you’re doing. We are paying attention to the websites that you’re going to. We are paying attention to where you carry your device. If we notice that you spend a lot of time hanging out in airports, you might be a business traveler, and we can geotarget you based on those behaviors. And yes, we can look and see if you’ve been online looking for trips to the Bahamas, and then we can retarget you with ads delivering messaging that might encourage you to go to the Bahamas.

So there might be a lot of reasons for that “coincidence” in your life that don’t necessarily have to do with your phone listening to you. There are a lot of other signals in the data, and when we separate the signal from the noise, we can start to determine certain things about your intentions, what you’re planning to do, what you want to do. But machine learning and artificial intelligence help us bend the curve, not just toward prediction, where we predict your future behaviors, but toward prescription, where we actually influence your future behaviors.

We actually start to plant the seed early in the funnel that you want to take a trip to the Bahamas, and then it becomes more obvious why you suddenly have this desire to go. Well, you know, we’ve been putting messaging in front of you, articles about travel, putting things into your Facebook feed that plant the seed in your mind that you want to go to the Bahamas. And when the ideal moment arrives, when you’re on your cell phone browsing and you see the ad for the Bahamas, you think, “Oh, it’s been listening to me.” Well, in a very real sense, we have been listening to you. We’ve been looking at your behaviors, we’ve been making deductions about your future intentions, and we’ve been bending the curve in the direction of a trip to the Bahamas for some time.
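
As a rough illustration of separating the signal from the noise and moving from prediction to prescription, here is a hedged sketch of a toy intent score built from behavioral signals. The feature names, weights, and threshold are invented for illustration and do not describe any real ad platform; real systems learn these from data with far richer models.

```python
# Invented signal weights for a hypothetical "trip to the Bahamas" campaign.
WEIGHTS = {
    "visited_travel_sites": 2.0,   # browsed flights or hotels recently
    "airport_visits": 1.5,         # geolocation shows time spent in airports
    "searched_destination": 3.0,   # searched "Bahamas" or similar terms
    "clicked_travel_ad": 2.5,      # engaged with earlier travel creative
}
RETARGET_THRESHOLD = 4.0  # arbitrary cutoff for this sketch

def intent_score(signals: dict[str, float]) -> float:
    """Weighted sum of behavioral signals: the 'prediction' step."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in signals.items())

def next_action(signals: dict[str, float]) -> str:
    """The 'prescription' step: act on the prediction.

    Low scores get seed content (planting the idea early in the funnel);
    high scores get direct retargeting with a booking ad.
    """
    if intent_score(signals) >= RETARGET_THRESHOLD:
        return "retarget: show Bahamas booking ad"
    return "seed: place travel articles in the feed"

# One travel-site visit plus two airport pings: 1.0*2.0 + 2.0*1.5 = 5.0
user = {"visited_travel_sites": 1.0, "airport_visits": 2.0}
print(next_action(user))  # retarget: show Bahamas booking ad
```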

HANNA GNANN: Kind of scary.

WILLIAM AMMERMAN: Well, not if you get to go to the Bahamas.

HANNA GNANN: That’s true. So, back in 2016, Twitter taught Microsoft’s AI chatbot named Tay to be racist in less than a day, and Microsoft had to shut it down. And I just read that Elon Musk has launched a tech startup to build implants that connect human brains with computer interfaces via artificial intelligence, and they’re supposed to start testing these on humans by 2020. How do you feel about this development? How do you shut down a human if the AI goes haywire?

WILLIAM AMMERMAN: So, regarding the experience of teaching bots to become racist, people are naughty. There’s a great quote from Tim Berners-Lee, who recently said: “If you put in a drop of love into Twitter, it seems to dissipate, but if you put in a drop of hatred, it seems to propagate more strongly.”

HANNA GNANN: So true.

WILLIAM AMMERMAN: And I actually believe that. I think we are programmed at a deep level to respond to threats. You know, if there’s a tiger in the village, grab your spear and go kill the tiger; it’s a matter of life and death to respond very quickly. Love, I think, we develop over time, and it’s a slower process. You know, we love our mother, and we’re with our mother for years and years and years. And so we think of love as something that is a long-term experience, whereas hatred can be triggered very quickly. And the Internet is an ideal environment to spawn these kinds of quick “grab your spear, kill the tiger” reactions, whereas developing and harnessing love takes a longer time span.

So that’s troubling, and it’s difficult to figure out. How do we prevent people from hacking our hatred and our anger? And if you feel yourself becoming more angry and more polarized by politics, for example, if you experience on a regular basis this sensation that there’s an us and a them in society, and “I hate those people, those bad people,” whoever they are, whether you’re Republican, Democrat, or independent… You know, I think recently we’ve become more polarized in our feeling that those people, those other people, are evil.

HANNA GNANN: For sure.

WILLIAM AMMERMAN: And so, as I think about the personal experience of the Invisible Brand, I would suggest that if you personally are feeling that way, you’re probably experiencing what I’m talking about. You’re probably being influenced by that.

Now, to your other question about Elon Musk and neural implants: if I’m paraplegic and I’m in a wheelchair, and I can’t control the world around me, and my brain is fully functional, damn right I want that. I want that neural implant to help me control my wheelchair, control my television, control the world around me. But I think it also triggers a sense of alarm in some of us that we don’t really want neural implants that read our minds. So I think there is a fine line between the opportunity and the risk again, and part of what professionals in our industry have to be thinking about is, how do we treat this information and the technology that delivers psychotechnology ethically? How do we, as producers of the technology and as marketers who use the technology, behave ethically? And I think that’s an important question that I try to address in the book.

HANNA GNANN: Yeah. Yeah, exactly. And actually, a lot of our audience listening to this are developers and innovators at the forefront of creating these technologies. And on one hand, their job is to make money for their companies and their employers, but on the other, they have enormous power to impact, negatively or positively, how we behave and think as humans. So what advice can you give to them?

WILLIAM AMMERMAN: Yeah, ethically, I think it starts by making certain that the transaction you’re conducting with another person is transparent, that they have full visibility into what’s being given up and what’s being exchanged. I’ll give you kind of a simple example. I have a dog. I go to the pet store to drop my dog off to be boarded when I go on a trip. The pet store owner has a right to certain information about me: my dog, my telephone number, how I can be reached, where I’m going, how long I’ll be gone. So that’s legitimate, and we regularly give people information about ourselves when we understand the relationship. You need to know where I am in order to take care of my dog. If something goes wrong, you need to know how to reach me. So there’s this transparency.

If I’m driving a car, and there is another autonomous vehicle driving in the other lane, I want it to know where I am. I want it to know my location so it doesn’t run into me and kill me, right? But I don’t necessarily want that machine that’s driving to broadcast my location to everybody else, because I don’t want burglars coming and robbing my house because they know I’m not home.

So, knowing that there are people who need to have your information and people who don’t is the starting point in the ethics of this. And trying to discern who has the right to my information and who doesn’t is part of the ethical challenge. Most of us agree that we’re willing to give up certain levels of privacy in order to have overnight shipping delivered to the house, or to have someone take care of the dog. But we don’t want businesses to abuse that, to share that information and cross the creepy line.

HANNA GNANN: Right, right. Well, I thought the book was fantastic, it has a lot of information, so go ahead and tell people where they can find it and how they can find more information about you.

WILLIAM AMMERMAN: Yeah, tell Alexa, “Buy The Invisible Brand by William Ammerman.”

HANNA GNANN: Very smart.

WILLIAM AMMERMAN: Hopefully Alexa’s listening in the background and she just ordered it. You can find it on Amazon, Barnes & Noble, and Audible, and it’s available in ebook form. And you can find out more information about me and the book at my website, which is my first initial and last name, W for William, Ammerman, A-M-M-E-R-M-A-N dot com, so wammerman.com, and I would love to hear from you. This is something that is important, and I want people to have a conversation about psychotechnology. I think that the more we can talk about it and discuss it, the more informed we’ll all be, and the better we can ensure that this technology is safe and beneficial for all of us.

HANNA GNANN: Great, and I’ll put those links in the show notes as well. William, I appreciate your time with me today. Thank you so much.

WILLIAM AMMERMAN: It’s great to be here. Thanks.

HANNA GNANN: And thanks again for listening to this edition of Agile Amped. If you learned something new, please tell a friend, coworker, or client about this podcast, and subscribe to hear more inspiring conversations.