AI’s Role in Political Manipulation
https://www.admonsters.com/ai-role-in-political-manipulation/

In this article, Søren H. Dinesen, CEO of Digiseg, explores the complex world of AI in politics, its benefits and risks, and examines why oversight and regulation are crucial for preserving democracy.

In an era where technology shapes our daily lives, generative AI has emerged as a powerful force in the political landscape, and its role is both revolutionary and potentially dangerous.

With generative AI, politicians can create targeted campaign ads, amplify campaign messages, and engage voters, but there’s a downside. Consider how AI can also be misused for creating convincingly fake campaign ads, disseminating disinformation, and turning voter outreach into voter manipulation.

Generative AI and Politics

OpenAI certainly changed the world in November 2022 when it introduced ChatGPT, the first popular and widely available generative AI tool. Public reaction was varied. Many warned it was the end of numerous careers (indeed, the Hollywood writers’ strike was partially due to fear that ChatGPT would eliminate their jobs).

And a great many experts worried that ChatGPT would usher in a new era of fake news, disinformation, and more believable scams, as generative AI can create text that feels legitimate to the average person. This isn’t an idle fear: one study found that large language models (LLMs) can write more persuasively than human authors.

Election officials are sounding the alarm over the use of generative AI to create political ads, write phony but convincing campaign fundraising letters, and orchestrate voter outreach initiatives. These officials weren’t wrong; we’ve already seen generative AI used for such purposes. In January 2024, registered Democratic voters in New Hampshire received fake Joe Biden robocalls telling them not to vote in the primaries so that they could save their vote for November.

This is not to say that all use cases for generative AI in the political sphere are nefarious. Many legitimate political parties and candidates see generative AI as a useful tool in amplifying the impact of their political ads. For instance, they can use it to deliver highly targeted ads at the household level, including those encouraging voter turnout. In fact, generative AI can help less-resourced campaigns compete against well-funded ones.

That said, generative AI can (and likely will) have harmful impacts on elections across the world, and it’s well worth our time to be aware of its dangers, and take steps to mitigate them.

Insufficient Oversight in AI-Generated Political Ads

There’s no doubt that AI can create high-quality text that many people and voters find quite credible. But therein lies the danger. 

Most reasonable people assume that the ads they hear or see have been endorsed by a campaign and vetted by the media source that runs them. In the US, radio and television ads end with the candidate saying, “I’m [candidate name] and I approve this message.” Internet-based ads are exempt from this disclosure requirement, a loophole that the Honest Ads Act of 2017 sought to close (it didn’t pass).

Today, few regulations require political ads to disclose the role of AI in their creation. The one exception is the EU AI Act, which classifies AI systems used to influence voters in political campaigns as “high-risk” and therefore subject to strict regulations.

The United States government has failed to enact a national AI disclosure law, even as the 2024 presidential election looms. In the absence of a national law, a dozen or so states enacted laws regulating the use of AI and deepfakes (more on that later) in political advertising and requiring disclosure. Those states are California, Florida, Idaho, Indiana, Michigan, Minnesota, New Mexico, Oregon, Texas, Utah, and Wisconsin. Additionally, Google said last year it would require AI disclosure on political ads, and Meta soon followed suit.  

But there are challenges to these efforts. Common Cause, an advocacy group focused on promoting ethics, accountability, and reform in government and politics, says the Florida law is too weak to be effective: it imposes fines but provides no mechanism for removing offending ads. In Wisconsin, the Voting Rights Lab warns that the state law is too narrow, regulating only candidate campaigns and not special interest group ads.

The bigger challenge is that it’s up to the ad creators to self-disclose, an unlikely event for people bent on fear-mongering. And even if an ad is deemed violative, it will remain in circulation until it is spotted and identified. In other words, AI-generated ads with misinformation will still have ample opportunities to be seen and believed by a great many voters.

Generative AI Hallucinations

Another challenge is AI hallucinations. Most AI tools warn the user that responses may contain incorrect information, which means a campaign may willingly or inadvertently create campaign ads containing false information.

This isn’t a theoretical concern. Research from a European non-profit organization, AI Forensics, found that one out of three answers provided by AI was wrong. Microsoft’s Bing search bot gave wrong answers when asked basic questions about elections in Germany and Switzerland, often misquoting its sources.

In the United States, misleading and incorrect responses from chatbots threaten to disenfranchise voters. AI-generated responses told users to vote at locations that don’t exist or aren’t official polling stations. Columbia University tested five AI models, and all failed to provide accurate responses to basic questions about the democratic process.

In the U.S., misinformation about voting times and locations is a tried-and-true voter suppression tactic, so it’s concerning that generative AI will make its practitioners more effective.

Inherent Bias of Generative AI

All AI is trained on data; the accuracy of the AI is wholly driven by how well the training data is vetted and labeled. And data is often inherently biased. In the political sphere, LLMs are trained on news stories about elections and candidates, but liberal news sites block AI bots as a matter of course, whereas right-wing ones welcome them. The result is that AI models are trained on data skewed to a particular point of view that may not reflect the full range of opinion.

Going further, some people intentionally seek to influence the responses of a chatbot. In 2023, The New York Times reported that David Rozado, a researcher in New Zealand, used prompt engineering to create a right-wing version of ChatGPT, a chatbot intentionally designed to give right-wing answers.

Political Manipulation

Perhaps the biggest concern is that AI will be used to manipulate the voter, as the fake Biden robocalls sought to do.

This isn’t a new fear, of course, as we saw AI used in political manipulation long before the widespread availability of ChatGPT. During the 2018 midterm elections in the US, for instance, election officials warned voters to be wary of deepfakes. To raise awareness of just how realistic deepfake videos can seem, Oscar-winning filmmaker Jordan Peele created a video in which a fake Barack Obama says “stay woke.” The message is clear: don’t believe what you hear on the internet.

Despite the warning, deepfake videos and images continue to appear in the media. In June 2023, Florida Gov. Ron DeSantis’s presidential campaign shared fake AI-generated images depicting Donald Trump embracing Dr. Anthony Fauci, the former head of the National Institute of Allergy and Infectious Diseases (NIAID) and someone Trump came to loathe. Trump supporters have also targeted African Americans with fake AI images as part of a strategic ploy to convince voters that Trump is popular among Black voters.

Deepfakes also played a key role in the 2023 Argentine elections. Candidate Sergio Massa’s team created a video featuring his main rival, Javier Milei, describing the revenues that could be gained by selling human organs and suggesting that parents could consider having children as a “long-term investment.” Despite the video’s explicit AI-generated label, it was quickly shared across platforms without disclaimers.

Over in Turkey, President Recep Tayyip Erdoğan’s staff shared a video depicting his main rival, Kemal Kiliçdaroğlu, receiving the endorsement of the Kurdistan Workers’ Party, a designated terrorist group. The video was clearly fabricated, but that didn’t stop voters from viewing and sharing it widely.

Given what we’ve already seen occur, it’s no surprise that election experts call generative AI a “political super-weapon.” Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, takes it one step further, saying that AI poses “epoch-defining” risks, including the widespread proliferation of disinformation.

When People Aren’t Real: The Rise of Bots & Psychochats

There’s one final threat to consider: AI posing as humans to sway how people think and ultimately vote. Once again, nefarious players have access to sophisticated tools to help them deploy their schemes.

For instance, bots have been effective at disseminating disinformation with a great deal of speed and efficiency. In 2019, The New York Times reported that Epoch Media Group created over 600 fake media profiles, all featuring profile photos generated by AI. Those profiles were then deployed to distribute fake news and disinformation.

It’s not that hard to come up with AI-generated profile pics; a simple Google search serves up numerous sites that create realistic headshots and photos for social media. These bots can then be used to sway voters who may be on the fence, or to direct people who are intent on voting to a non-existent polling station.

Psychochats go one step further. These are AI avatars of candidates, deployed online to interact with potential voters. It’s only a matter of time before psychochats are used by campaign opponents to spread misinformation about their rivals, similar to Sergio Massa’s smear campaign against Javier Milei.

Think this is too outlandish to be true? Politico reports that Meta is already experimenting with licensed AI celebrity avatars. And Hello History invites users to “have in-depth conversations” with historical figures.

Democracy in Peril: Why We Must Act

When elections are marked by rampant misinformation, the very foundation of democracy is compromised. At the end of the day, misinformation leads to governments formed under false pretenses, and chaos results when governments lack the legitimacy to govern effectively.

The erosion of trust brought on by deepfakes, AI-generated lies, and psychochats undermines the democratic process, ultimately threatening the stability of societies. Never has it been more important to protect the integrity of information during election cycles. AI tools are cool and offer tremendous benefits to everyone in the digital media industry. But we must also acknowledge their potential for abuse, and work tirelessly to control how they’re used.

Echo Chambers & Political Discourse: Essential Reads on Media Manipulation and Algorithms
https://www.admonsters.com/echo-chambers-political-discourse-essential-reads-on-media-manipulation-and-algorithms/

Discover crucial insights into digital media and privacy with Digiseg’s latest reading list. These recommended books delve into how algorithms create echo chambers and influence political discourse, offering an in-depth look at their impact on society.

Algorithms shape our world in ways we never imagined, influencing everything from the news we consume to our political beliefs. In this article, Søren H. Dinesen, co-founder and CEO of Digiseg, explores critical trends in digital media through a curated list of must-read books on the intricate relationship between algorithms, media, and privacy. These works dissect the creation of echo chambers, the rise of filter bubbles, and the profound impact these phenomena have on political discourse and societal polarization.

The Algorithmic Influence: Shaping Our News and Beliefs

Dinesen reviews key titles that unravel how algorithms narrow our perspectives and distort our cultural and democratic frameworks. By examining these influential books, he provides a comprehensive overview of the challenges and implications of living in an algorithm-driven world, urging us to rethink how digital media shapes our reality.

Let’s dive in.

The Media Manipulation Machine: A Closer Look

In their book United States of Distraction: Media Manipulation in Post-Truth America, Dr. Nolan Higdon and Mickey Huff tackle urgent questions facing America. Key among them: How did we get to the point where citizens decide what’s true based on the number of people who believe it, rather than on facts or reality?

Consider some of the outlandish things Americans believe:

  • Three in ten believe the 2020 presidential election was stolen from Donald Trump
  • About 25% say there is at least some truth that COVID was planned
  • 22% believed that the “storm” predicted by QAnon would occur
  • 7% believe chocolate milk comes from brown cows

One would think it’s easy enough to dispel such ridiculousness … isn’t that what the media is supposed to do? Provide reality checks for a reading public? Sadly, vetted journalism isn’t convincing anyone because distrust of the media is at an all-time high.

Echo Chambers: The Rise of Filter Bubbles

How did we arrive at such a sad state of affairs? Over the past decade and more, numerous scholars have concluded that the root of our troubles lies in the deployment of algorithms. Algorithms are designed to present us with the information we have shown a propensity to consume, which, in turn, limits the range of information we see and shapes what we believe.

The Impact of Digital Isolation on Democracy

In 2011, Eli Pariser coined the term “filter bubbles” in his book The Filter Bubble: What the Internet Is Hiding from You. Two years earlier, Google had deployed AI in its search results, narrowing sources to just those the algorithm predicts will interest the user.

To Pariser, it was the dawn of the great dumbing down of people as the range of information they were exposed to was limited. We thought we had access to the full spectrum of global information — that we were traveling along a great information superhighway — but with algorithms deciding what we see, that is no longer the case. An important onramp — Google Search — merged with its algorithmic offramp, unbeknownst to us.

Radicalization Through Algorithms: How Big Tech Went Wrong

But that was only one of the challenges. The other was how these algorithms group us into clusters of people who think like us, trapping us in “filter bubbles.” Today we call those bubbles echo chambers.

All of this narrowing and grouping is fueled by AI — algorithms that track our online activity and preferences, then selectively show us content that aligns with our existing beliefs and interests. With enough positive reinforcement from others on the internet, it comes as no surprise that people fall for conspiracy theories.
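
To see why the loop narrows on its own, consider a deliberately simplified sketch in Python. No platform publishes its ranking code, so the topics and click rates below are invented; the point is the dynamic itself: a click raises a topic’s weight, a higher weight means the topic is shown more often, and more exposure earns still more clicks.

```python
import random

# Toy feedback loop: a recommender that learns only from clicks will
# narrow its own candidate pool. Topics and click rates are made up.
topics = {"sports": 1.0, "cooking": 1.0, "politics": 1.0, "conspiracy": 1.0}

def recommend() -> str:
    # Probability of showing a topic is proportional to its weight.
    names = list(topics)
    return random.choices(names, weights=[topics[t] for t in names], k=1)[0]

user_bias = {"conspiracy": 0.9}  # this user clicks conspiracy posts 90% of the time

for _ in range(1000):
    shown = recommend()
    if random.random() < user_bias.get(shown, 0.1):
        topics[shown] += 1.0  # reinforce whatever earned the click

print(max(topics, key=topics.get))  # almost surely "conspiracy"
```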

It’s one thing to believe that chocolate milk comes from brown cows (when I was young I thought trees made wind by bending back and forth). But when AI-curated content leads someone to ransack a Target outlet or bring a gun to a pizza joint to free kids held as slaves, it’s time to admit we have a problem.

This is a theme echoed by Stanford University professors Rob Reich, Mehran Sahami, and Jeremy M. Weinstein in their book System Error: Where Big Tech Went Wrong and How We Can Reboot. They argue that when Big Tech hyper-focuses on a single metric — say, YouTube’s decision to prioritize time spent consuming videos — bad things happen. They’re not wrong: it’s now understood that tuning an algorithm to prompt binge-watching leads to the radicalization of users and the spread of conspiracy theories.

Memes as Propaganda: The Online Battle for Truth

In the book Meme Wars: The Untold Story of the Online Battles Upending Democracy in America, Harvard’s Joan Donovan and her co-authors argue that memes serve as the bedrock of conspiracy theories, helping to make the unbelievable believable. Clever and built to go viral, memes like “Stop the Steal” can move entire cohorts of people to act violently and in anti-democratic ways.

Conspiracy Theories and Algorithmic Fuel: A Dangerous Mix

Memes are particularly effective at swaying people and recruiting them to extremism (the book Accountable: The True Story of a Racist Social Media Account and the Teenagers Whose Lives It Changed describes how Googling “black on black crime” will lead people down a white supremacist rabbit hole). 

One of the reasons why memes are effective at converting people is that they break down offensive and outlandish beliefs into bite-sized and memorable riffs. They serve as a starting point to a process that slowly eases people into horrific belief systems. Recruiters of extremist beliefs understand this, and they’ve honed their skills. This, by the way, isn’t just an American problem; it’s global. 

Cultural Flattening: Algorithms Blunting Creativity

In his book The Next Civil War, Stephen Marche also warns that algorithms make us more extreme, but he adds another wrinkle: they make it easy for racist and violent people to find one another. In the days before the internet, a disaffected youth would have had trouble joining a white supremacist or neo-Nazi group because such groups were underground. Today, social media algorithms will recommend them as something users may be interested in.

But it’s not just the worst parts of society that algorithms distort. Algorithms are also blunting the best parts of cultural life, as Kyle Chayka’s recent book Filterworld: How Algorithms Flatten Culture clarifies.

He warns that algorithms have effectively constricted our access to information, serving up the lowest common denominator of content because such blandness appeals to the largest number of people. The result is books, music, and even physical spaces such as cafes all reading, sounding, and looking alike, because algorithms have taught us to expect no better. Groundbreaking ideas are penalized because they lack the virality of more market-tested ones. Put another way, algorithms are flattening global culture.

A Broken Promise: The Internet’s Failed Information Superhighway

The internet was supposed to be an information superhighway, providing unrestricted access to ideas to anyone with an interest in learning. But instead of providing easy access to ideas and information, the algorithms that now rule the Web are shrinking what we’re exposed to, and in many ways, our free will. Instead of being presented with new ideas, we are grouped into cohorts of people who amplify our beliefs and prejudices, setting the stage for outlandish, post-truth beliefs.

The Path Forward: Reclaiming Our Shared Reality

Shared truths are essential to functioning societies: this candidate won an election, the Earth is round, not flat, and climate change is an urgent issue that needs addressing. Without a basic set of facts we can all believe in, humanity will only get more mired in the deception and misinformation propagated by AI, with people of differing opinions retreating further into their corners.

It’s our duty to prevent that from occurring.
___

This is the third article in Digiseg’s Privacy Series. The first, Privacy Signals, AI in Advertising & the Democratic Dilemma, takes a broad view of the issues of private signals and one-to-one signals as we see them. The second, Surveillance Capitalism 2.0, examines how emerging privacy-centric solutions track user behavior just as much as cookies ever did.

Surveillance Capitalism 2.0: The New Era of Digital Ad Tracking and Privacy
https://www.admonsters.com/surveillance-capitalism-2-0-the-new-era-of-digital-ad-tracking-and-privacy/

Søren H. Dinesen, CEO of Digiseg, delves into the privacy dilemma as cookie deprecation raises new concerns about consumer expectations. From the early days of contextual ads to the rise of identity resolution graphs, Dinesen unpacks how the ad tech industry continues to track users despite privacy regulations. Are we truly anonymous, or is it all just a myth?

In the introduction to this series, I raised the concern that the targeting, measurement, and attribution solutions arising in the wake of cookie deprecation won’t meet consumers’ expectations of privacy. It’s a hugely critical issue, and one worth exploring in depth. This article does just that.

The Rise of the New Tracking Cookie

In the early days of digital advertising, nearly all ads were contextual: Google AdSense assessed web page content, and if it matched the topic of an ad creative, Google would fill the impression. The challenge was that contextual targeting back then was rudimentary, leading to horribly embarrassing and often brand-unsafe placements. A few memorable ones include:

  • A “put your feet up” ad for a travel company appeared next to an article titled “Sixth Severed Foot Appears Off Canadian Coast” on CNN.
  • A VacationsToGo.com banner ad ran over a photo of a cruise ship that sank in Italy.
  • An ad for Aflac, the insurance company whose mascot is a duck and whose tagline is “We’ve got you under our wing,” appeared next to a story about anatidaephobia, the tongue-in-cheek fear of being watched by a duck.

Marketers naturally wanted better tools for targeting, and deservedly so. By the mid-2000s, Web 2.0 was in full swing, with consumers increasing the amount of time they spent online and on social media, generating vast amounts of data. For marketers, it was the start of the data-driven revolution.

That revolution was powered by private signals: any signal tied to an individual that allows the industry to follow consumers as they go about their digital lives, whether that’s surfing the web, using apps on their mobile devices, or streaming content via their smart TVs or radios.

Initially, the main tracking device was the third-party cookie: little text files dropped into the browser, unbeknownst to the user, so their every move could be logged and their future behavior monetized.

Ad tech companies and agencies retrieved that data from consumers’ browsers and used it to make assumptions about people: users who visited a parenting site were women aged 25-to-35; users who read about new automobile models were actively in-market for a new car.

Here’s a true story about an American on the Digiseg team: She signed onto her health insurance account to check on something. Later, she saw an ad on Facebook that said something like, “Dr. Smith is in your healthcare network, schedule an appointment today.” This was far from a unique event.

For everyday citizens the message was clear: We’re watching you. For many, installing ad blocking software was an act of desperation. Such software didn’t end tracking, but at least they weren’t reminded of how much they were under the microscope of entities they didn’t know.

Consumers complained, of course. More importantly, they demanded that regulators in their home states end the tracking. For the industry, that meant finding a replacement for third-party cookies, but not for tracking users.

But — and it’s a big one — blocking cookies and ceasing the tracking of users in this industry seem to be two different things, though why that is the case is beyond us. Users still emit private signals as they go online, and the industry is still collecting them. Consumers still have no control over the matter, which means brands and ad tech companies still follow them around, whether they like it or not.

The new crop of tracking signals stems from the user’s device or from the individual consumer directly, such as hashed emails and CTV device IDs. Worse, these signals make it more difficult for users to protect their identity from prying eyes.

Identity Resolution Graphs

Identity resolution graphs are seen as an important step forward in consumer privacy protection, but whether or not they respect a user’s desire for anonymity is up for debate. These databases are built on vast identity signals: email, device ID, cookie data, CTV ID, work computer, home computer, and even physical address. An identity resolution graph connects all known signals to a single ID that typically represents individual consumers.

The benefit of ID graphs is to allow marketers and data companies to “recognize” users across multiple IDs. Let’s say a site invites users to register for a free account and the site collects the user’s email address (i.e. first-party data). Next, the site purchases an ID resolution graph to recognize users when they visit the site via a mobile device or computer from work.
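
To make the mechanics concrete, here is a minimal sketch of how such a graph can link signals. The identifiers are made up, and commercial graphs are far larger and often probabilistic; this only illustrates the core idea of merging every observed signal into one per-person cluster.

```python
class IdentityGraph:
    """Toy identity resolution graph: union-find over identifiers.

    Any two signals observed together (say, a hashed email and a cookie
    from the same login event) are merged into one cluster that is
    presumed to represent a single person.
    """

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b belong to the same person."""
        self.parent[self._find(a)] = self._find(b)

    def person_id(self, signal):
        """Resolve any known signal to its cluster's canonical ID."""
        return self._find(signal)


graph = IdentityGraph()
graph.link("email:3f2a9c...", "cookie:abc123")  # same registration event
graph.link("cookie:abc123", "ctv:device-789")   # same household connection
# A CTV impression now resolves to the same person as the email:
assert graph.person_id("ctv:device-789") == graph.person_id("email:3f2a9c...")
```

Once every signal resolves to one canonical ID, a visit from a work laptop, a phone, or a living-room TV all look like the same person, which is exactly the cross-device recognition described above.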

Are there benefits for the user? Yes: the site can recognize the user and display content of interest. But wouldn’t it be better to ask the user to sign in or register on each device, or to ask permission during the initial registration to recognize them on other devices? Recognizing people without asking is exactly the type of behavior that got the industry in trouble before. How hard is it to request permission?

In worst-case scenarios, the site allows advertisers or partners to target those users across their devices — without the user’s permission or input.

The Myth of Anonymity

Signals can be anonymized: emails can be hashed, and device IDs can be hidden in data clean rooms. But how relevant is that anonymity if the signal can still be used to track users without their permission and for purposes they never agreed to? We forget that cookie data was also “anonymized,” yet consumers still complained vociferously about being tracked.

The new private signals don’t even guarantee anonymity. Take hashed emails, which aren’t so private when everyone applies the same hash function. Because the same address always produces the same hash, any party holding that hash can recognize it as the consumer who, say, purchased this dog food or subscribes to that streaming service.
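
A tiny sketch shows why. The normalization details vary by vendor, but lowercasing the address and applying SHA-256 is a common convention; the address below is, of course, invented.

```python
import hashlib

def hashed_email(email: str) -> str:
    """Normalize and hash an email address (trim, lowercase, SHA-256).
    Deterministic: there is no secret key involved."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# A retailer and a streaming service each hash the address independently...
retailer_id = hashed_email("Jane.Doe@example.com ")
streamer_id = hashed_email("jane.doe@example.com")

# ...and arrive at the identical value, so their records can be joined
# without either party exchanging the raw address. "Anonymous," yet
# fully matchable by anyone who holds the same hash.
assert retailer_id == streamer_id
```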

As I mentioned in the first article in this series, this level of tracking is all in pursuit of one-to-one marketing, which itself is a bit of a myth.  

Digital as Mass Media

We’re pursuing a find-and-replace option for cookies, and in doing so we’re ignoring effective and truly privacy-compliant options in front of us: one-to-many ad campaigns. Two such options stand out.

The first is contextual targeting, which has come a long way since the days of Google AdSense. Numerous AI solutions, including natural language processing, sentiment analysis, and computer vision, can now assess the true content of an article and place ads accordingly, avoiding brand-unsafe placements. This segmentation method is inherently anonymous, eschews every form of tracking, and can achieve massive scale with the right approach, as the sketch below illustrates.
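
The word lists here are invented, and production systems use trained models rather than keyword matching, but the sketch shows the crucial property: nothing in it ever touches a user identifier. The decision depends only on the page.

```python
# Hypothetical word lists; real systems use trained NLP/vision models.
UNSAFE = {"crash", "severed", "sank", "disaster", "shooting"}
TOPICS = {
    "travel": {"cruise", "vacation", "beach", "resort", "flight"},
    "auto":   {"sedan", "horsepower", "dealership", "ev", "hybrid"},
}

def place_ad(article_text: str) -> str | None:
    """Pick an ad category from page content alone; no user data involved."""
    words = set(article_text.lower().split())
    if words & UNSAFE:
        return None  # skip the impression rather than risk the brand
    scores = {topic: len(words & terms) for topic, terms in TOPICS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(place_ad("New EV sedan boasts 400 horsepower"))      # -> "auto"
print(place_ad("Cruise ship sank off the Italian coast"))  # -> None
```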

The second option is offline demographic data, collected, verified, and anonymized by national statistics offices, ensuring it is both accurate and privacy compliant. Going further, with modern modeling and methodology, entire countries can be segmented into neighborhoods of as few as 100 households.

Ultimately, the evolution of digital ad tracking reflects the ongoing tension between technological advancements and privacy concerns. As the ad tech industry continues to innovate, the challenge lies in balancing effective marketing strategies with the imperative to respect user privacy. By embracing more privacy-compliant options such as advanced contextual targeting and offline demographic data, the industry can pave the way for a future where digital advertising is both effective and ethical. As we navigate this new era of surveillance capitalism, the need for transparency, user consent, and robust privacy protections has never been more critical.

Privacy Signals, AI in Advertising & the Democratic Dilemma
https://www.admonsters.com/privacy-signals-ai-in-advertising-the-democratic-dilemma/

For reasons that completely baffle me, the digital advertising industry congratulates itself for taking steps to eliminate third-party tracking cookies from the ecosystem, while replacing them with something equally bad from a consumer privacy perspective: various private signals that allow for one-to-one targeting.

Okay, that’s a lot to unpack, so let’s break it down. GDPR, CCPA, and their counterparts are a direct result of consumer blowback against constant online tracking. Private citizens felt that their every move was captured, recorded, packaged, and sold to anyone willing to pay. They weren’t wrong. Can we blame them for hating that level of snooping?

Inside the industry, we saw things differently. While consumers objected, we celebrated the age of data. Almost 100 years after John Wanamaker complained that 50% of his advertising was wasted, the digital advertising industry had devised a way to ensure every dollar an advertiser spent was targeted at a qualified user. We called it one-to-one advertising. 

Outside observers had a different name for our activities: surveillance capitalism.

Surveillance capitalism, data-driven marketing, whatever you want to call it, we can all agree it powers the constant bombardment of messages that influence consumer behavior. Some have deployed it to the point where consumers are prodded to buy things they don’t need (and will return to the retailer at considerable loss), incur unmanageable levels of consumer debt, and rent self-storage units for the stuff they buy but have no room for in their homes.

And that’s just the start. Authoritarian figures and conspiracy theorists also use one-to-one messaging to proliferate their extreme beliefs that pose a serious threat to democracies all over the world.

So it’s no surprise that privacy regulations sprung up. But how effective are those regulations if we replace tracking cookies with other private signals (e.g. hashed emails, User ID resolution graphs) that allow for the same type of one-to-one targeting?

Let’s Face It: Cookies Were Not Effective

I find the private signals craze that has seized the industry to be rather puzzling. Big-time marketers, such as Kraft’s Julie Fleischer, made no bones about the sub-par quality of cookie data, telling attendees of a data conference that 90% of the data for sale is “crap.” Mind you, she said this back in 2014, when the so-called data revolution was in its heyday. Other studies during that time showed that up to 50% of audience segments for sale failed to reach the target audience. The cookie was never able to live up to its promise, as I wrote in 2022.

But we ignored the cognitive dissonance and dug into the tactic, relentlessly targeting consumers who happened to read an article about a new car model with ads for new autos, on the assumption that they must be in market for one. Into countless auto-intender segments they went.

Consumers weren’t happy about it at all, and a few — Max Schrems in Europe and Alastair Mactaggart in California — began successful campaigns to regulate the collection and sale of user data. 

They were hardly outliers. Consumers began installing ad blockers, adopting VPNs, and downloading encryption software in a desperate attempt to protect their privacy. All they wanted was to be left alone to browse the internet in peace.

Given the consumer’s utter distaste for the incessant tracking, one must ask: Why did we go down this road?

The False Promise: One-to-One Marketing

To put it simply, we were led astray by the myth that one-to-one marketing was both possible and embraced by the consumer. We believed that everything the consumer did online — every click, visit, page view, and video watched — revealed clues to the user’s predilections, brand preferences, purchase intent, and political outlook.

By collecting, storing and analyzing every piece of data generated, we sought to influence what consumers buy and how they vote at scale. 

Worse, we believed that consumers saw the benefit of all this intrusion. We told ourselves that consumers expect highly personalized experiences and it was our job to provide them. After all, modern advertising demands relevance.

There are alternatives, of course, which we’ll explore in later articles.

The Great Accelerator: AI

The whole data revolution and one-to-one marketing scheme had a valuable technology in its corner: machine-learning-based AI. AI has been an integral part of programmatic advertising and user profiling from the very beginning. The industry deploys it to analyze who clicked on what ad and who visited which page, and to select which impression out of billions to fill with an ad.

A more recent form of machine learning — generative AI — is now a topic du jour. It is viewed as a way to take one-to-one marketing to the next level, creating ad copy and images in real-time based on the user behind it. What could possibly go wrong?

This Keeps Me Up at Night

As someone who is deeply committed to digital advertising, our failure to learn from past mistakes keeps me up at night.

Our industry has never really questioned the notion that one-to-one marketing is a worthy goal. Rather than learn the lessons of the consumer rebellion that led to GDPR, CCPA, and countless other regulations, we are embarking on a new style of consumer spying based on a new set of private signals. Today we leverage those signals to bully people into buying stuff they don’t need, and to instill irrational fears that prompt them to support anti-democratic candidates.

Why are we repeating the same mistakes? And make no mistake about it, the “alternatives” to third-party cookies function in the same way: they log a consumer’s private behavior and use it to follow them around the internet. Today we stand at the dawn of cookie-free advertising, tasked with reimagining the world. Instead, we are dangerously close to a colossal failure of imagination. Our focus is on identity resolution graphs and hashed emails — the exact kind of tracking we had with third-party cookies. Call it surveillance capitalism 2.0.

We can — and must — do better. Doing better means resisting the siren call of one-to-one marketing. Until we jettison that fantasy, all of our industry’s brain power and financial investments will do nothing more than recreate surveillance capitalism, ultimately leading us back to where we find ourselves today: facing angry citizens and governments demanding that we stop abusing privacy.
