Acknowledging the intersections of human rights and advertising, a United Nations panel calls on the advertising industry to use its global reach to positively influence human rights crises worldwide.
The advertising industry operates across various platforms: print, broadcast, radio, and digital. Advertisers wield significant power in shaping public narratives and influencing societal perceptions of diverse issues and individuals.
As Pia Oberoi notes in her opening remarks at the recent United Nations panel, Intersections Between Advertising and Human Rights, the industry commands enormous sums in global ad spend.
In 2023, digital ad spend is projected to reach a staggering $679 billion worldwide. With such substantial financial resources and pervasive outreach into households and communities, advertisers possess immense potential to affect lives positively or negatively, both online and offline.
Oberoi argues that ethical and informed decisions by industry players can channel advertising spend toward independent and trustworthy media. That spend can promote inclusive narratives, foster diversity, challenge stereotypes, and empower individuals and communities to enhance their lives and contribute to the betterment of others.
However, when advertising decisions lack a conscious focus on human rights, there is a risk of funding disinformation and hate speech. This not only fuels discrimination and hostility but can also incite real-world violence, unintentionally perpetuating abuse. As we watch human rights violations unfold across the world and in our own countries, we must consider the ethics and moral responsibility of our professions.
Positive Journalism, Bad Actors, and a Call to Action for Privacy
“We’re on a mission to address the economic link between advertising and harmful content because it is causing human rights harm around the world,” says Harriet Kingaby, Co-chair of the Conscious Advertising Network. Kingaby leads a network of over 180 organizations, including major brands, advertising agencies, and civil society groups, all united around that goal.
First, she emphasizes the importance of advertisers funding the positive side of media, such as truthful journalism and entertainment. She also warns that failing to invest in these outlets helps hateful content and misinformation spread.
But most of her talking points focus on a common criticism of the advertising industry: questionable data extraction and targeting practices. Kingaby asserts that bad actors have exploited these practices for their own ends. In her words, “Hate speech and misinformation are disrupting democracy, slowing our progress against global crises.”
We all know many of these practices have disregarded users’ privacy, but the advertising industry is working to atone for its past mistakes, such as eliminating third-party cookies and testing alternative solutions. As Dan Rua, CEO of Admiral, emphasizes, “Privacy consent is at the core of a publisher’s relationship with their visitors. The strongest relationships are built on mutual trust.”
The industry is working to rebuild that trust, and Kingaby issues a call to action: “This system will not fix itself. But coalitions of the willing will.”
We Must Regulate How Humans Use AI
The industry is aware that unregulated AI can reshape human thinking and that freely distributed, unchecked content affects how society perceives information. Just log on to social media and you will see users believing misinformation from gossip blogs that are rarely, if ever, fact-checked.
Amir Malik, Managing Director at Accenture, compares the current state of media consumption to decades prior. He states, “In the 1900s, people saw about 50 human-created images in a lifetime. Now, we’re inundated with imagery due to AI, impacting everything from consumerism to politics.”
Unfortunately, as Malik points out, some businesses disregard AI’s ethical considerations because it reduces production costs and enhances productivity. Additionally, as generative AI is a relatively new technology, we are still figuring out how much it impacts the human psyche, and thus, society at large.
Platforms like Google are attempting policy changes, such as flagging AI-generated content on YouTube, but AI’s potential impact on media demands further regulation. According to Malik, nations are weaponizing AI-driven propaganda via these platforms. If that’s the case, we must establish governance and regulation to prevent the misuse of AI.
Looking Toward a Solution
While much of the panel focused on the problems, the speakers also offered solutions.
Chris Hajecki is the Director of Ads for News, a media development nonprofit active in over 100 countries that helps advertisers make informed decisions by curating inclusion lists of quality local news websites worldwide.
The initiative aims to help move away from broad news exclusion policies, foster a transparent supply path, and ensure brand safety. It encourages investment in vital local news while mitigating the risk of inadvertently funding disinformation. Essentially, these inclusion lists advise advertisers on publishers that uphold ethical advertising and journalism standards.
Hajecki says Ads for News upholds these standards by “embracing open sourcing, establishing in-country research teams, tailoring outputs to partners’ needs, and employing inclusive criteria. These standards evaluate legitimacy, journalist employment, and commitment to public interest without disseminating disinformation or hate speech.”
Of course, brand-safe advertising has sparked much discussion this year, especially on the topic of reforming made-for-advertising (MFA) sites. But how do we establish safe advertising standards while respecting consumers’ human rights?
Gerry D’Angelo, Senior Advisor at McKinsey & Company, believes that “aligning commercial interests with ethical considerations, such as not supporting misinformation or hate speech, while investing in quality journalism and entertainment” is a start.