FTC Archives - AdMonsters

The Perils of Hashed IDs: FTC Reasserts They Are Not Anonymous
https://www.admonsters.com/the-perils-of-hashed-ids-ftc-reasserts-they-are-not-anonymous/ (Thu, 01 Aug 2024)

As the ad tech industry grapples with privacy compliance, the FTC’s latest warning reveals that hashed IDs are not the anonymity shield many believe them to be, urging a rethink of data privacy strategies.

No pun intended, but the ad tech industry is still hashing out its privacy concerns. With Google essentially pulling the plug on third-party cookie deprecation and instead heading in the direction of an opt-out mechanism, ad tech’s privacy terrain is still in a state of limbo. 

But as publishers and advertisers search for the privacy-compliant tech that works best for them, the FTC reissued a warning about hashed IDs. 

In a recent blog post, the FTC reiterated a critical privacy principle: hashed IDs are not anonymous. Despite some companies’ claims, hashing—a process that transforms data like email addresses or phone numbers into seemingly random strings—does not render data anonymous. 

Data is only anonymous when it cannot be traced back to an individual, according to the FTC. Treating hashed data as anonymous can lead to significant privacy violations, because bad actors can still use it to identify and track users, potentially causing harm.

Hashing provides a layer of obfuscation but does not eliminate the potential for re-identification. The FTC has highlighted several cases where companies misused hashing, believing it ensured anonymity. Notable instances include the 2015 case against Nomi, which tracked consumers in stores using hashed MAC addresses, and the 2023 case against BetterHelp, where hashed email addresses were shared with Facebook, compromising user privacy. 

A Quick Refresh on Hashed IDs

Companies often use hashing to obscure personal data. A hash function transforms information such as an email address or phone number into a fixed-length value, typically rendered as a string of hexadecimal characters, known as a hash. 

The process is deterministic, meaning the same input data will always generate the same hash, while the hash itself is difficult to reverse directly back into the original data.

The advantage of hashing is that it allows companies to store data without directly revealing identifiable information. A hash appears meaningless and preserves user privacy, as companies cannot easily trace it back to the original data. This is why companies often use hashing when they are reluctant to record or share direct identifiers but still need the data for future matching.
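To make that matching use case concrete, here is a minimal Python sketch, not taken from the article, of how two parties might independently hash the same email address and join records on the result. The SHA-256 algorithm, the normalization step, the function name, and the example address are illustrative assumptions.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize an email address and return its SHA-256 hash as hex.

    SHA-256 is an illustrative choice; the key property is that the
    function is deterministic, so the same input always yields the
    same output.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A publisher and an advertiser hash the same (made-up) address
# independently and end up with identical tokens, which lets them
# match records without ever exchanging the raw email address.
publisher_token = hash_email("Jane.Doe@example.com")
advertiser_token = hash_email(" jane.doe@example.com ")
assert publisher_token == advertiser_token
print(publisher_token)
```

That determinism is the entire value of the technique for matching, and, as the next paragraph explains, it is also its weakness.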

However, according to the FTC, the belief that hashing fully anonymizes data is flawed. Companies and bad actors can still use hashed IDs to identify users, and that misuse can lead to harm. The agency warns that companies should not claim that hashing personal information makes it completely anonymous, and says it will continue monitoring and addressing deceptive privacy claims to ensure that companies comply with the law.
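A hedged sketch of the re-identification risk the FTC describes: because the hash is deterministic, anyone holding a list of known or guessable addresses can recompute the hashes and match them against the supposedly anonymous tokens. The addresses below are made up for the example.

```python
import hashlib

def hash_email(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Tokens a company shares because it considers them "anonymous"
# (the address is fabricated for this example).
shared_tokens = {hash_email("jane.doe@example.com")}

# Anyone with a list of candidate addresses, whether from a breach,
# a CRM file, or simple enumeration, can hash every candidate and
# check for matches, reversing the supposed anonymization.
candidates = ["john.smith@example.com", "jane.doe@example.com"]
reidentified = {e for e in candidates if hash_email(e) in shared_tokens}

print(reidentified)  # the "anonymous" token traces back to jane.doe@example.com
```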

Hashing Out Industry Sentiments

The ad tech industry is all in on alternative IDs as a go-to solution for privacy compliance. But the FTC just threw a wrench in the works by declaring that hashed IDs aren't truly anonymous, and a shift might be on the horizon. This puts universal ID formats that hash and encrypt personal data, like The Trade Desk's UID2 and LiveRamp's RampID, under the microscope, suggesting they might not be the ultimate fix we once believed.

Where will the industry pivot after this? 

Third-party cookies and hashed IDs will not stand the test of time, according to Adam Schenkel, EVP at GumGum. Schenkel instead argues that contextual targeting will be the next wave of privacy-compliant solutions. 

“This news and the FTC’s commitment to safeguarding data privacy for Americans indicates that privacy-invasive targeting tactics like third-party cookies and hashed IDs will not stand the test of time,” said Schenkel. “Instead, advanced contextual advertising emerges as a superior solution once again because not only is contextual respectful of a user’s privacy, but it’s also able to match ad content with a user’s real-time interests and mindset.” 

Publishers Hash Out the Sit-and-Wait Approach

Sam Cheng, Director of Advertising Operations at TeamSnap, noted the uncertainty and potential difficulties ahead when asked for his initial reaction to the FTC's warning. "It's too soon to know until major publishers start taking an approach," he said, emphasizing a cautious outlook.

Cheng highlighted the complexity of finding the next ID solution that complies with the FTC's guidance, noting that hashed IDs will likely stick around until a new one surfaces. For now, he's taking a wait-and-see approach.

He also emphasized the tough challenges publishers face in meeting the FTC’s data anonymity and user tracking guidelines. He acknowledged that implementing a new solution would be a pain, especially for companies lacking the technical bandwidth to adapt quickly. 

“Assuming most companies don’t have the technical bandwidth, it will be challenging to implement a new solution when it does come out,” he explained.

Decoding the Legal Shifts: Legal Expert Jason Gordon on Chevron Deference and Its Advertising Fallout
https://www.admonsters.com/decoding-the-legal-shifts-legal-expert-jason-gordon-on-chevron-deference-and-its-advertising-fallout/ (Fri, 19 Jul 2024)

The Supreme Court’s Chevron decision reduces the FTC’s authority to unilaterally define unfair or deceptive advertising practices, likely increasing legal challenges and emboldening advertisers to contest FTC guidelines, particularly in areas like data privacy and consumer consent.

Last week, we published an article detailing how the Supreme Court’s decision to overturn Chevron deference will affect the advertising industry. The consensus was that it will infringe on regulatory efforts and cause publishers and advertisers to challenge government agency regulations that affect their businesses. 

But what does that mean for publishers and advertisers still waiting for a federal privacy law in the US? What does it mean for the FTC, which writes many of the federal regulations governing the advertising industry? 

We spoke with Jason Gordon, partner at Reed Smith in its Entertainment and Media group, to get a more in-depth understanding of how SCOTUS' decision will impact advertising. 

Andrew Byrd: Can you elaborate on how the Supreme Court’s Chevron decision impacts the advertising industry?

Jason Gordon: The Supreme Court decision marks a significant shift in regulatory authority. Before this ruling, regulatory agencies, including the FTC, were given substantial deference in defining and enforcing what constitutes unfair or deceptive advertising practices. As the primary enforcer of advertising regulations, the FTC and other specific agencies could interpret these practices broadly. 

This deference allowed the FTC to develop guides such as the “Made in the USA” guide, endorsement and testimonial guides, and green guides for environmental marketing claims. These guides provided advertisers with clear frameworks but granted the FTC significant power in interpreting what constitutes false or misleading advertising.

Now, the role of the courts has become more central in interpreting vague statutes related to advertising practices, reducing the FTC’s unilateral authority. This shift implies that while the FTC’s interpretations remain influential, they are not definitive. Courts will now have the final say in disputes over what constitutes an unfair or deceptive act, which could lead to more legal challenges against the FTC’s rulings. This change introduces a new dynamic where advertisers might feel emboldened to contest FTC’s guidelines and enforcement actions, potentially leading to a more litigious environment in the advertising industry.

Additionally, this shift occurs amid an environment of heightened regulatory scrutiny under the current FTC leadership, which has adopted a more aggressive stance on enforcement. The FTC’s focus areas, such as artificial intelligence, privacy, and novel concepts like “junk fees” and “dark patterns,” may face judicial scrutiny as advertisers challenge the agency’s jurisdiction and interpretations. 

AB: Since this will affect the FTC’s regulatory power, do you see opportunities for the advertising community to challenge federal regulators, especially in privacy and deceptive practices?

JG: While this might limit the FTC’s unilateral power, it’s crucial to understand that the broader regulatory ecosystem remains robust. Besides the FTC, states have their own statutes against unfair or deceptive acts. This means state attorneys general, competitors, or class action lawyers can still pursue false advertising claims even if SCOTUS curtailed some of the FTC’s power.

Specifically regarding privacy, the FTC’s diminished authority doesn’t stop states like California from enacting and amending laws like the CCPA and CPRA. Similarly, Illinois continues to enforce its biometric privacy law. Therefore, despite a potential reduction in FTC enforcement, state-level regulations and actions will persist, maintaining pressure on advertisers to comply with privacy and data security standards. 

Thus, even if the FTC’s scope of action is restricted, advertisers must still deal with a complex regulatory environment where state laws and other legal avenues remain active. 

AB: Given the shifting power dynamics from federal regulations to the Supreme Court, do you foresee increased scrutiny from the FTC on advertisers and publishers? 

JG: The FTC stated that its enforcement priorities will remain steadfast despite the shifting power dynamics from federal regulations to the Supreme Court. However, the practical implications of this stance will unfold over the next three to nine months. 

Advertisers might react by adopting more aggressive strategies, believing they have a better chance in court. Trade organizations could also start challenging the FTC’s authority, leading to potential legal battles. If advertisers contest FTC actions in court, they will rely on precedents like the Lanham Act, which governs false advertising disputes between competitors. This act requires proving both the falsity of the advertising and its material impact on consumer decisions, setting a high bar for legal challenges.

AB: In your initial introduction, you mentioned that some in the advertising industry view specific FTC rules as lacking evidentiary support. How does the industry address concerns about these perceived gaps in evidence?

JG: When the FTC updates its rules, it follows a notice and comment period, allowing for public feedback before finalizing conclusions. However, there have been puzzling decisions. For instance, when the FTC updated its endorsement and testimonial guides, it included FAQs advising businesses and influencers on permissible practices. While FAQs aren’t law, they can signal enforcement intentions.

For example, when Facebook introduced influencer disclosure tools to help disclose connections between influencers and advertisers, the FTC updated its FAQ a week later, stating that these tools wouldn’t necessarily be a defense and that the overall message context matters. The FTC provided no evidence for this stance, reflecting a broader issue of issuing guidelines without substantiation. The FTC’s mandate to prevent false advertising should include evidence-based requirements, but such support is often lacking, leading to industry frustration and confusion.

To address these concerns, industry stakeholders engage in dialogue with the FTC, provide feedback during the comment periods, and sometimes seek legal clarification on ambiguous guidelines. They also invest in compliance training to ensure they adhere to the evolving standards, despite the frustration with some of the FTC’s seemingly unsupported mandates. 

AB: How might SCOTUS’ decision affect the FTC’s focus on data collection practices and consumer consent? 

JG: I foresee two primary reactions from the industry. First, trade organizations and data collectors may argue that the current practices are neither deceptive nor unfair and that if Congress wants to impose stricter data privacy laws, it should do so through new legislation rather than through the FTC’s existing framework. They may contend that the FTC is overstepping its authority by trying to impose these new standards.

Second, if the FTC enforces new data privacy measures, some companies may challenge these regulations in court. They could argue that the FTC lacks sufficient evidence or legal basis for such stringent controls. This could lead to significant legal battles as companies seek to protect their current data collection practices and avoid the operational disruptions that new regulations might entail.

ChatGPT Under Fire for Allegedly Violating EU Privacy Laws
https://www.admonsters.com/chatgpt-alleged-violations-of-eu-privacy-laws/ (Tue, 30 Jan 2024)

As Ad Ops and Rev Ops professionals gear up to integrate generative AI into their daily routines, they must understand the legal consequences of not adhering to the privacy and copyright laws that protect consumers.

Amid the New York Times' copyright lawsuit against it, OpenAI faces privacy scrutiny in Europe after a multi-month investigation into ChatGPT's data collection methods. Italy's Data Protection Authority gave OpenAI 30 days to respond to the allegations. 

Critics have long called out generative AI for spreading misinformation and raising data privacy concerns. There are also worries about deepfakes, such as cloning artists' voices to create new music or the horrific fake explicit images of American pop princess Taylor Swift.

This technology is evolving much faster than regulation can control. But regulators like Italy’s Data Protection Authority are working to ensure we can all reap AI’s benefits while complying with data ethics. 

As Italy’s privacy watchdog readies its case against OpenAI and OpenAI prepares its response, it could set new precedents in AI regulation standards. 

A History of Data Collection Misfires and the Lawsuits They Bore

Should the breach be confirmed, OpenAI faces a potential fine of up to €20 million or 4% of its global annual turnover, whichever is higher. Beyond financial penalties, data protection authorities can mandate changes to how a company processes data when it violates privacy laws. This could mean altered data collection practices or even a halt to the technology's use in regions where regulators enforce compliance.
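For readers keeping score, the GDPR Article 83(5) ceiling is the greater of the two figures. A quick illustrative calculation in Python, using a made-up turnover number rather than OpenAI's actual revenue:

```python
def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Ceiling for a GDPR Article 83(5) fine: the greater of EUR 20 million
    or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# With a purely hypothetical turnover of EUR 1.5 billion, the 4% prong
# governs and the ceiling works out to EUR 60 million.
print(f"{max_gdpr_fine_eur(1_500_000_000):,.0f}")
```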

Given its history of legal challenges over its data collection practices, OpenAI is no stranger to the intricacies of AI data handling. That history includes the notable lawsuit brought by The New York Times over OpenAI's alleged use of copyrighted material to train its chatbot. 

Attorney Justin Nelson, representing the New York Times in the lawsuit, accused OpenAI of “building this product on the back of other people’s intellectual property. OpenAI is saying they have a free ride to take anybody else’s intellectual property since the dawn of time, as long as it’s been on the internet.” 

In both cases, OpenAI responded that the lawsuits were without merit — a big shocker. In the case of the NYT lawsuit, OpenAI released a public statement saying that using publicly available internet materials is fair use. 

In response to Italy’s DPA, they said, “We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.” 

The Regulatory Perspective and the Implication for the Ops Industry

Last year, Italian authorities raised GDPR concerns about OpenAI, temporarily banning ChatGPT's local data processing. The March 30 provision cited issues like the lack of a legal basis for personal data collection, AI 'hallucinations,' and child safety problems. The authority suspected breaches of GDPR Articles 5, 6, 8, 13, and 25. 

AI regulators are fighting tooth and nail for industry-wide standards, and there is no sign of their momentum stopping. For example, the FTC launched a new inquiry into five major AI players (Alphabet, Amazon, Microsoft, Anthropic, and OpenAI), investigating how their investments and partnerships affect competition. More specifically, the FTC is examining "whether tech giants are using their power to trick the public, and whether the AI investments allow giants to 'exert undue influence or gain privileged access' to secure an advantage across the AI sector."

"Just as we've seen behavioral advertising fuel the endless collection of user data, model training is emerging as another feature that could further incentivize surveillance," said FTC Chair Lina Khan. "The FTC's work has made clear that these business incentives cannot justify violations of the law."

As the Ad Ops and Rev Ops industries build out their AI capabilities, and I know they are, they must also be cognizant of the potential pitfalls of using this technology. Mark Sturino, VP of Data and Analytics at Good Apple, said during his keynote address at AdMonsters Ops 2023 that publishers can differentiate themselves by using AI to provide insights and transparency. Still, they must be careful about using AI to create targeted audiences. 

“AI is playing more of a role from a publisher selection perspective. At least at Good Apple, it is less and less about flash, and it’s more about the actual results you’re giving us because everybody will be judged based on performance,” said Sturino.

The FTC’s Push for Privacy: Setting the Stage for Federal Regulation and AI’s Future
https://www.admonsters.com/ftcs-push-for-privacy/ (Wed, 17 Jan 2024)

The U.S. government has failed to pass a federal privacy law despite broad desire for one, but the FTC's privacy-compliance crusade may be the push lawmakers need to get the ball rolling.

In 2022, the Federal Trade Commission launched a lawsuit against Kochava, accusing the company of selling sensitive location data from reproductive health clinics, places of worship, and more. The FTC further alleges that Kochava's actions exposed consumers to risks like "stigma, stalking, discrimination, job loss, and even physical violence."

Kochava isn't the only one accused of deceiving customers. A recent study by data compliance tech company Compliant revealed that 90% of publishers shared consumer data with third parties without consent. It's high time publishers and ad tech companies prioritized consumer privacy; these stats reveal there's more work to do, despite some efforts toward privacy compliance.

New Year, New FTC. The agency is doubling down on protecting consumer privacy rights, including its ongoing lawsuit against Kochava, a recent settlement with Outlogic, and other actions. These cases could lay the groundwork for the future of digital media privacy.

Kochava’s (Alleged) Controversial Practices

Kochava, the self-proclaimed industry leader in mobile app data analytics, is in the FTC’s crosshairs for its data collection practices.  A recent FTC settlement with Outlogic, banning the use and sale of sensitive location data, suggests that Kochava might face similar repercussions.

The mobile data analytics company refutes the FTC’s accusations. However, the FTC contends that Kochava offers a ‘360-degree perspective’ on individuals, combining geolocation with other data to identify consumers. Also, using AI, Kochava allegedly predicts and influences consumer behavior in invasive ways before selling this data.

The Dangers of Data Misuse

The unsealing of court documents last November revealed Kochava’s secret sale of the ‘Kochava Collective’ data. This includes precise geolocation, consumer profiles, app usage, and sensitive details like gender identity and medical data. With this data, customers can target specific groups down to exact locations, raising concerns about privacy and potential misuse.

This allows organizations like advertisers, insurers, political campaigns, and individuals with harmful intentions to purchase data containing names, addresses, emails, economic status, and more about people in selected groups.

The sensitive data, allegedly accessed without user consent, is acquired through Kochava's SDKs installed in over 10,000 apps globally and directly from other data brokers. In a separate California lawsuit, Greenley v. Kochava, the plaintiff alleges that Kochava engages in covert data collection and analysis, selling customized data feeds tailored to clients' specific needs.

Inching Towards Stronger Privacy and AI Regulations

U.S. District Judge B. Lynn Winmill initially dismissed the FTC's complaint because it lacked sufficient detail. While Winmill has yet to rule on the motion to dismiss the FTC's amended complaint, discovery is underway, with a trial expected in 2025.

Advancing AI tools are reshaping data analysis and enabling new invasions of privacy. Generative AI can infer and disclose sensitive data, such as medical records, and can predict consumer behavior, influencing decisions without consumers' knowledge.

With the FTC’s lawsuit against Kochava unfolding and new proposals for AI regulation on the table, the legal landscape for data privacy is evolving. The case’s outcome, alongside the push for federal privacy laws, could mark a significant shift in how data and AI are regulated and used ethically.

The White House is also concerned about protecting consumer privacy and curbing surveillance advertising. While the U.S. is still stuck with state-led privacy regulations, most agree that more robust federal privacy laws should be enacted. This case may just be the push needed to get some version of the American Data Privacy and Protection Act (ADPPA) passed.

Big Tech, Walled Gardens, and Regulatory Battles in 2024
https://www.admonsters.com/big-tech-walled-gardens-and-regulatory-battles-in-2024/ (Fri, 29 Dec 2023)

New rules and rulings can upend the business models that have dominated Web 2.0.

Since the dawn of the digital ad, Big Tech and their walled gardens have cast a long shadow over every aspect of the industry. 

Despite promises otherwise, marketers face persistent challenges around data ownership, transparency, and the ability to do necessary things like verify campaign results. Thanks to their dominance, Google, Meta, and Amazon will continue to attract the lion's share of advertisers' budgets and, over the next few years, are projected to account for 83% of global digital advertising revenue.

Regulators across the globe are doing all they can to stop the dominance, of course, but Big Tech shows no sign of going down without a fight. While 2023 was a year for antitrust suits filed and in some cases lost, 2024 will be a year when key decisions will be made and appealed. It will also be a year when the impact of sweeping regulations in the EU may be felt.

Canada Online News Act

In the summer of 2023, Canada passed the Online News Act, which requires dominant platforms to compensate news organizations when their content is made available in search results and social media.

A game of brinkmanship ensued, with Meta announcing that as of August 1, it would block news content from Facebook and Instagram feeds for all Canadian users. A few years back, Australia passed a similar law, although Google and Meta quickly made a deal with the Australian government to keep news on their platforms. 

While Canada and Google reached a deal in late November to keep news stories in search results, and for Google to pay $72 million (C$100 million) annually to Canadian news publishers, no such agreement has been reached with Meta. Nor is it likely, as the news ban has had very little impact on Facebook’s traffic.

United States

Google

Google has been in the crosshairs of regulators all year, beginning with the news in January 2023 that the Department of Justice filed a suit against the company over its alleged digital ad tech monopoly. Meanwhile, the DOJ's separate search antitrust case against Google, which went to trial in September, is ongoing and it's anyone's guess how it will turn out. The presiding judge, Amit Mehta of the U.S. District Court for the District of Columbia, has said "I have no idea what I'm going to do." He set closing arguments for early May. 

Also, as previously reported by AdMonsters, a San Francisco jury sided with Epic Games, ruling that Google violated antitrust laws by unfairly stifling competition within Google Play. The verdict compels Google to allow app stores other than Google Play on Android devices. Google made good on its vow to appeal, and that case will head to the San Francisco-based 9th U.S. Circuit Court of Appeals sometime in 2024.

These cases could turn 2024 into a monumental year for the entire industry. Depending on the outcomes, we could be headed for major changes in advertising, including increased competition and a reduction in ad costs (along with headaches for smaller publishers that appreciate the simplicity of the Google ecosystem). They could also usher in a new era of government regulation and oversight, which the industry has assiduously tried to avoid.

Amazon

Financially, 2024 will be a banner year for Amazon’s advertising business, reports DoubleVerify. Amazon more or less invented the concept of retail media and it has become a favored tactic of marketers everywhere, especially as the pool of third-party data and cookies diminishes. Per DoubleVerify, most of the marketers they surveyed said they planned to spend their budgets primarily on Amazon Advertising.

Amazon will need that revenue to beef up its legal team. On September 26, US regulators and 17 states filed an antitrust lawsuit against Amazon. The FTC and a bipartisan group of state attorneys general argue that Amazon is a monopolist that stifles its competitors, imposing unfair prices on sellers and consumers. More specifically, Amazon allegedly punishes sellers for offering lower prices and pressures them into paying for Amazon's delivery network.

“Amazon is a monopolist and it is exploiting its monopolies in ways that leave shoppers and sellers paying more for worse service,” said FTC Chair Lina Khan. “In a competitive world, a monopoly hiking prices and degrading service would open up rivals and potential rivals to … grow and compete. But Amazon’s unlawful monopolistic strategy has closed off that possibility, and the public is paying dearly.”

The suit, which is the result of a lengthy investigation spanning many years, asks the court to issue a permanent injunction.

Meta

In late November, Meta filed a lawsuit  that challenges the constitutionality of the FTC; the complaint literally says, “Meta respectfully requests that this Court declare that certain fundamental aspects of the Commission’s structure violate the U.S. Constitution and that these violations render unlawful the FTC Proceeding against Meta.”

The background: In May 2023, the FTC accused Meta of violating the privacy settlement it had reached with the company back in 2020. That settlement included a $5 billion fine and required Meta to implement a comprehensive privacy program to protect user information. It also mandated that Facebook conduct privacy reviews of new or modified products, services, or practices before implementing them and document how any risks could be mitigated.

In its May action, the FTC accused Meta of misleading parents about how much control they have over who their children contact within the Messenger Kids app, among other issues. As a result, the FTC has reopened the 2020 settlement and proposes changes that will prevent Meta and its platforms from benefiting financially from any data it collects from people aged 18 or under.

According to the AP, Meta could prevail. The wire service writes, "Meta's complaint came after the U.S. Supreme Court's conservative majority on Wednesday seemed open to a challenge to how the Securities and Exchange Commission fights fraud in a case that could have far-reaching effects on other regulatory agencies." If the analogy holds, Meta may be able to request that a jury, not the FTC, settle the question of any privacy violations.

Europe

Meta

Meta’s 2023 privacy woes across the pond began in early January with the EU fining it $412 million for data collection violations. According to the regulator, blanket terms of service — such as those that apply to U.S. citizens — don’t fly under GDPR. On August 1, Meta announced it is changing the legal basis for processing personal data for behavioral targeting, transitioning from “contractual necessity” to “consent.”

A few months later, Meta announced a "Pay for Your Rights" model that would allow EU citizens to pay $10 a month to opt out of data collection for behavioral targeting. The EU was far from impressed, and in November, the European Data Protection Board (EDPB) issued an urgent binding decision banning Meta's behavioral advertising practices in the European Economic Area (EEA). Violating the ban can result in fines of up to 4% of Meta's global annual revenue.

Meta says it’s not backing down and that the fight will continue.

Digital Markets Act (DMA)

While Meta fiddles with its legal basis for collecting data, it, along with five other walled gardens, faces a new challenge to its hegemony: the EU's Digital Markets Act (DMA), not to be confused with the EU's Digital Services Act (DSA), which went into effect in May 2023. In September, the EU designated six companies as "gatekeepers," meaning any company that has a significant impact on the internal market, operates a core platform service, and serves as an important gateway for business users to reach end users. 

The six named gatekeepers are Alphabet, Amazon, Apple, Meta, Microsoft, and ByteDance, spanning 22 platforms. The DMA requires these gatekeepers to comply with strict competition regulations, such as allowing end users to access and use content, subscriptions, features, or other items through their core platform services by using the software application of a third party. This is pretty much what the San Francisco jury said when it ruled that Google must allow other app stores on Android devices. To date, the gatekeepers haven’t said much about their designations, other than quibbling about some individual platforms and whether they meet the criteria.

It’s hard to say if 2024 will be the year when the garden walls will tumble down. We can, however, predict that Big Tech will continue to lawyer up and appeal all moves that take direct aim at their profitable business models.

Vizio’s FTC Data Verdict Is a Well-Worn Lesson
https://www.admonsters.com/vizios-ftc-data-verdict-well-worn-lesson/ (Mon, 20 Mar 2017)

A little over a month ago, Vizio settled for $2.2 million with the FTC after an investigation into its data collection practices. In the settlement, the TV-maker agreed to collect future data only when users opt in. However, Vizio is still facing a class action lawsuit based on VPPA (Video Privacy Protection Act).

The dispute is over Vizio's "Smart Interactivity" feature in its smart TVs, which enables content suggestions and offers for users. Vizio had collected data around users' viewing habits and matched it to users' demographic data (age, sex, income, marital status, education level and so on) without getting explicit user consent—then turned around and sold that data to data platforms, ad tech providers and agencies.

According to a recent AdExchanger article, Vizio's been catching heat not just from the feds, but from its business partners. Some vendors and agencies have reportedly "temporarily cut ties with" the smart TV maker, and one tech CEO said his company has been approached by partners warily asking whether it used Vizio data. The article concludes that Vizio can still bounce back, even though its data business will be subject to greater scrutiny and may not be as broad as it had been.

When you look at the business of digital video, the Vizio case really doesn’t set a precedent. Long-form video providers have already been hauled out on the carpet for similar violations. Netflix was sued for linking its former customers to their rental histories after cancelling membership. (Netflix lost that case.) Hulu was sued for linking users’ Facebook IDs to their viewing history. (Hulu won, but the case is being appealed.) With Vizio implicated, it’s clear that… OTTs are not immune? It’s kind of hard to tell what the lesson learned might be this time. The lesson—don’t match viewing histories to anything that might identify a specific user, unless that user gives explicit consent by opting in—has already been taught in other recent instances.

At this point, another VPPA-related lawsuit isn’t going to make an example of anyone. It’s also hard to say that the FTC is using the law in this sense to compromise the ability of companies in digital to do business. Collecting user data and sharing it with partners is legal in itself—the digital ad model isn’t particularly threatened. The illegal part is failing to inform users of what data was being collected, who was going to use it and how, and then giving the user the opportunity to say they were willing or unwilling to have their data shared this way.

Over here at AdMonsters, we’ve been posting a bit lately about how ops can be an asset in keeping publishers out of legal hot water. Ops teams should be proactive in working with legal teams, and should be able to flag issues that go beyond the publisher’s minimum business requirements and might create a legal conundrum. Ops can recognize the difference between targeting at the audience level and targeting a specific user. And ops needs to assure there’s a consistent partner screening process, while documenting which third parties are on the site and what their code is doing. We researched these guidelines and suggestions by talking to ops people at publishers that distribute video content on a wide-scale, international level—suggesting that while compliance with the FTC can sometimes be complicated, it’s possible and it’s not a threat to a video-heavy publisher’s business.
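As a small illustration of what "documenting which third parties are on the site" can look like in practice, here is a hedged Python sketch, not a tool referenced in the article, that lists the external hosts serving script tags on a given page. The function name and example URL are assumptions, and it only inspects static HTML, so ad tech tags injected at runtime would still need a separate audit.

```python
from urllib.parse import urlparse

import requests  # third-party: pip install requests beautifulsoup4
from bs4 import BeautifulSoup

def third_party_script_hosts(page_url: str) -> set[str]:
    """Return the set of external hostnames serving <script src=...> tags.

    Static HTML only: scripts injected later by other scripts will not
    appear here, so treat this as a starting point for documentation.
    """
    first_party = urlparse(page_url).netloc
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hosts = set()
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc
        if host and host != first_party:
            hosts.add(host)
    return hosts

# Example usage with a hypothetical URL:
# print(sorted(third_party_script_hosts("https://www.example.com/")))
```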

Of course, if the process of sharing sensitive data about your users, obtained without their consent, with your partners is baked into your business model, you’re going to lose some business if you suddenly need to assure user consent. Transparency is better for everyone in the industry, and it encourages users to consent. But 100% user consent for data collection and sharing is not likely. A publisher or platform’s business needs to be built to run on a smaller, more reasonable percent of fully on-the-level data.

This article has been updated to correct an earlier version, which conflated Vizio’s February FTC case with its ongoing class action lawsuit.

Who’s Watching Whom? Ad Tracking, Surveillance and Ethics in the Exchanges
https://www.admonsters.com/whos-watching-whom-ad-tracking-surveillance-and-ethics-exchanges/ (Wed, 18 Jan 2017)

The other day, a publisher friend of mine tapped my shoulder about a release the Electronic Frontier Foundation put out fairly recently. This was a release calling for a radical tightening of security around user data.

The sense of urgency, as EFF’s argument goes, is that we’re in an increasingly charged political environment in the U.S.; and that with the way data is commonly shared, government agencies would be in a prime position to monitor and persecute common citizens through common user data. (That is, if those government agencies really wanted to.) The EFF was putting forward a plea to tech companies, then, to bring up their shields and heighten their default data security and privacy measures for users.

“Oh, hey, this is about ad platforms,” my friend told me as he forwarded me the link.

I should clarify this is a longtime editorial guy who works for a pure digital outlet, but who’s no great fan of the ad-supported model. We’ve had these conversations about data privacy for years.

He’ll raise a privacy concern. I’ll say, “Yeah, but tech companies are only going to use the data they can monetize, which is pretty innocuous. And in any case, there are government regulations about PII and whatnot; there’s a lot of personal data tech companies legally aren’t supposed to disclose.”

He’ll say, “Yeah, but they can if they wanted to.” I’ll say, “Yeah, but if you want to get technical about it, so can basically every consumer-facing company with an internet connection.” He’ll say, “But ad platforms are in a unique position to process and deliver data.”

We keep having variations of this conversation because neither of us is about to adopt the other’s perspective, and neither of us is particularly wrong.

In any case, this time, regarding the EFF’s release, my friend wanted to know whether I thought this was the right moment for the digital ad industry to re-examine the way it handled and shared consumer data. I told him it wasn’t likely, because the ad industry generally doesn’t mobilize to solve hypothetical problems.

The EFF, on its own part, might be worried about the role tech companies might play if the government decided to start targeting dissidents. The ad tech industry isn’t about to do very much about it, though, because it’s not happening now. There are plenty of problems in digital advertising that need to be solved right now, because they’re causing someone to hemorrhage money, or they’re causing someone to leave money on the table.

Most of those problems can be solved, but it’s impossible to say the solutions won’t leave a window open for further problems to get in. As one exec from one tech vendor company told me in a recent conversation, “The ad tech industry is awesome at solving for exactly what you want and not worrying how you get there.”

It’s difficult to make decisions purely on principle in digital. Furthermore, the pressures of the marketplace make it extremely difficult to step back and re-evaluate the basic structure of just about anything. Change usually happens incrementally, and it tends to come about when the bottom line is threatened.

Think about the issue of fraud in the exchanges. Everyone more or less knew about it for ages, but there wasn’t much incentive to combat it broadly until buyers became furious about lost spend and certain tech players were hauled out onto the rug for their alleged complicity.

Think about the “fake news” problem. Premium publishers are livid about it, but many buyers fear that if they blacklist fake news sites, their competitors will swoop in instead. How much business are you going to risk to take a principled stand?

It’s hard to argue the prevalence of fake news hasn’t brought us to some kind of civic crisis. But a civic crisis is unlikely to move the needle on business before it becomes a business crisis.

Does consumer data that hasn’t been properly encrypted present another potential civic crisis, if a certain narrative plays out in Washington? Of course it does. And it always has.

But it would take a truly severe series of events before the digital ad industry substantially changed its practice of sorting users into audiences based on assumptions about their demographic characteristics and interests—because that’s what everyone else in the industry has always done and is set up to keep doing.

To dial back the more incendiary suggestions of the EFF’s statement, there’s something pretty easy to agree with, something toward which the industry is trending anyway: It’s generally a good idea to allow users to opt out of certain kinds of digital tracking if that’s what they want. And the reason it’s trending is—surprise!—there’s a money-driven imperative.

The rise of ad blocking in the U.S. caused a tectonic freakout among publishers. What followed was an admission that users want options, and a re-thinking of the user/publisher value exchange. As of now, it looks like we’re seeing a new paradigm emerging, where users have more control over their ad experiences, but publishers can still get some kind of value from users who don’t want to be subject to ad targeting.

It didn’t take a massive government surveillance program to start pushing toward a situation where users have more control over how they opt in or opt out of sharing their own data. All it took was widespread panic that publisher revenue was going to fall into the toilet otherwise.
