Trust and the AI Marketing Assistant

AI Advertising: Dreams and Nightmares

We are on the verge of a world in which advertising has grown so responsive that it is effectively a living thing. AI-driven advertising is already ubiquitous on the internet, and in the coming years we’ll see more of it in malls and other “real world” locations, powered by increasingly sophisticated AI. On the one hand, this could be a dream scenario: advertising subtly tailored to our desires and expectations, simultaneously less intrusive and more effective for both consumer and advertiser. On the other hand, it is easy to imagine AI advertising becoming intrusive, manipulative, and predatory. How might we deal with a problem like that?

An AI Ecology

One possible answer is to fight fire with fire and add more AI to the equation. As a society, we are already familiar with simple AI personal assistants such as Siri. As AI advances, a market will emerge for sophisticated personal assistants that filter, and negotiate with, other AIs on our behalf. This is “arms race” logic of the sort we see in anti-virus software, spam filters, and, most relevantly here, ad blockers. Interestingly, though, it would also be in marketers’ interest to develop such filter software, because the filter acts as a gateway to the consumer’s attention. Thus we can expect the emergence of an ecology, or ecosystem, of artificial intelligences, woven together by complex game-theoretic relationships. The single most crucial factor shaping those relationships is trust, between humans and software agents in any combination.
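To make the “gateway” idea concrete, here is a minimal Python sketch of the dynamic, assuming a toy repeated-game setup in which an assistant admits an advertiser’s messages only after that advertiser has earned enough trust. All names, thresholds, and update rules here are hypothetical illustrations, not a real protocol:

```python
from dataclasses import dataclass, field

@dataclass
class FilterAgent:
    """Toy personal-assistant filter that gates ads by per-advertiser trust."""
    threshold: float = 0.6                     # minimum trust needed to reach the user
    trust: dict = field(default_factory=dict)  # advertiser id -> trust score in [0, 1]

    def admit(self, advertiser: str, ad: str) -> bool:
        """Pass the ad through only if the advertiser has earned enough trust."""
        return self.trust.get(advertiser, 0.5) >= self.threshold

    def feedback(self, advertiser: str, user_approved: bool, rate: float = 0.1) -> None:
        """Repeated-game update: trust drifts up after good recommendations, down after bad ones."""
        current = self.trust.get(advertiser, 0.5)
        target = 1.0 if user_approved else 0.0
        self.trust[advertiser] = current + rate * (target - current)

assistant = FilterAgent()
print(assistant.admit("acme_ads", "New running shoes!"))   # False: unknown advertiser
for _ in range(3):
    assistant.feedback("acme_ads", user_approved=True)
print(assistant.admit("acme_ads", "New running shoes!"))   # True: trust has accrued
```

The point of the toy model is the incentive it encodes: an advertiser’s AI maximizes its long-run access to the consumer by being honest with the filter, which is exactly the kind of game-theoretic relationship described above.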

Trust in Advice and Decision-Making

Cognitive scientists have long studied the role of trust in human judgment and decision making. In recent decades the field has expanded to ask what makes people trust (or distrust) each other online, and what makes them trust software agents to act, or refrain from acting, on their behalf. Many factors determine trust, but the common thread is similarity to oneself: physical similarity, shared values, and shared beliefs and judgments. People trust those they perceive as similar to themselves, these similarity effects compound multiplicatively, and by default people trust themselves above all. Furthermore, the patterns of trust people claim often bear little relationship to the patterns their behavior reveals. In short, an effective marketer will be one who can build a “chain of trust” between themselves and the consumer, a chain that may well run through one or more software agents along the way.
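As a toy illustration of that multiplicative claim (the dimensions and numbers below are invented for the example, not taken from the literature), a trust score computed as a product of similarity factors behaves very differently from an additive one: a single strong mismatch collapses the whole score, which is why a marketer needs to appear similar on every dimension at once.

```python
from math import prod

def trust_score(similarities: dict[str, float]) -> float:
    """Illustrative model: each similarity dimension is a factor in [0, 1],
    so the factors compound and one large mismatch dominates the result."""
    return prod(similarities.values())

# Hypothetical similarity judgments along the dimensions named above.
alike = {"appearance": 0.9, "values": 0.8, "beliefs": 0.9}
mixed = {"appearance": 0.9, "values": 0.2, "beliefs": 0.9}

print(trust_score(alike))  # ~0.65: consistently similar, hence trusted
print(trust_score(mixed))  # ~0.16: one value clash drags everything down
```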

You Know Me So Well

The natural conclusion to this line of thought is that, in order to be trusted, AI advertisers and personal assistants will emphasize their similarity and relevance to you. You are more likely to buy something from an advertiser, or accept a filter’s recommendation, if you feel it comes from someone just like you: a good and trusted friend rather than a stranger. Even the near future is a strange place, and you shouldn’t be surprised if in a few decades your best friends are software agents, and those friends are very popular among other software agents in the bustling world of AI advertising.