
AI-Driven Pushpaganda Scam Exploits Google Discover to Spread Scareware and Ad Fraud

A large-scale ad fraud and scareware campaign dubbed 'Pushpaganda' has been uncovered exploiting Google Discover, using AI-generated content to poison the personalised discovery feed and lure users into enabling malicious push notifications. At its peak the operation generated approximately 240 million bid requests across 113 domains in a single week, demonstrating how AI-generated disinformation can be weaponised as an automated delivery mechanism for financial fraud. The campaign highlights the growing abuse of generative AI to scale deceptive content operations against trusted platform surfaces.

Overview

Researchers at HUMAN’s Satori Threat Intelligence and Research Team have exposed a sophisticated ad fraud and scareware operation codenamed Pushpaganda, which weaponises AI-generated content to infiltrate Google Discover — the personalised content feed served to Android and Chrome users worldwide. At peak activity, the campaign was responsible for approximately 240 million bid requests across 113 domains in a single seven-day period, initially targeting India before expanding to the U.S., Australia, Canada, South Africa, and the U.K. Google has since deployed a fix to address the spam vector.

The campaign matters to the AI security community because it represents a scaled, operationally mature example of generative AI being used not to attack ML models directly, but to abuse the outputs of trusted AI-powered discovery surfaces as a fraud delivery mechanism.

Technical Analysis

The attack chain operates in three stages:

  1. SEO Poisoning via AI Content: The operators run a network of actor-controlled domains populated with AI-generated fake news articles. These articles are optimised for Google Discover’s ranking signals, allowing them to surface organically in personalised feeds without paid promotion.

  2. Notification Coercion: Once a victim clicks through to a poisoned domain, the page presents a browser-native push notification permission prompt framed as a required action (e.g., “click allow to continue reading”). This social engineering step is the campaign’s namesake.

  3. Scareware and Ad Fraud Loop: With notification permission granted, the threat actor delivers persistent scareware alerts: fake legal threats, virus warnings, and financial lures. Clicking these notifications redirects victims to additional actor-controlled pages laden with display ads, generating illicit programmatic advertising revenue from invalid traffic that originates from real mobile devices and therefore appears organic.
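Stage two of this chain leaves a recognisable fingerprint in page content. The sketch below is a hypothetical defender-side heuristic (not from the HUMAN report): it flags pages that pair a browser notification-permission request with coercive "click allow" lure wording. The phrase list and function name are illustrative assumptions.

```python
import re

# Hypothetical lure phrases; real campaigns vary wording and language.
LURE_PHRASES = [
    r"allow\s+to\s+continue",
    r"click\s+allow",
    r"press\s+allow\s+to\s+(read|watch|verify)",
]

def looks_like_notification_coercion(html: str) -> bool:
    """Return True when a page pairs a permission request with lure text."""
    requests_permission = (
        "Notification.requestPermission" in html
        or "pushManager.subscribe" in html
    )
    lure = any(re.search(p, html, re.IGNORECASE) for p in LURE_PHRASES)
    return requests_permission and lure

# Example: a minimal poisoned page versus a benign one.
poisoned = ("<script>Notification.requestPermission()</script>"
            "<p>Click Allow to continue reading</p>")
benign = "<p>Latest headlines</p>"
print(looks_like_notification_coercion(poisoned))  # True
print(looks_like_notification_coercion(benign))    # False
```

A crawler feeding fetched HTML of newly registered news-styled domains through such a check is one cheap triage signal; it is not sufficient on its own, since legitimate publishers also request notification permission.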

The use of real devices and organic-looking engagement is significant: it defeats many fraud detection systems that rely on bot signatures or headless browser fingerprints.
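Because device fingerprinting fails here, detection has to shift to behavioural signals. The sketch below is a hypothetical scoring heuristic under assumed telemetry fields (push-driven entry, shallow session depth, short dwell, ad-heavy interaction); none of the weights or field names come from the report.

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Field names are illustrative; real ad-fraud telemetry differs by vendor.
    referrer_is_push: bool   # session entered via a web push notification click
    pages_viewed: int        # session depth
    ad_clicks: int           # ad interactions in the session
    dwell_seconds: float     # total time spent on content

def ivt_score(s: Session) -> float:
    """Behavioural invalid-traffic score in [0, 1]; higher is more suspicious.

    Hypothetical weights. Bot-signature checks are deliberately absent,
    since this traffic originates from real devices.
    """
    score = 0.0
    if s.referrer_is_push:
        score += 0.4
    if s.pages_viewed <= 2:
        score += 0.2
    if s.dwell_seconds < 10:
        score += 0.2
    if s.ad_clicks >= s.pages_viewed:  # ads clicked on nearly every page
        score += 0.2
    return min(score, 1.0)

fraudulent = Session(referrer_is_push=True, pages_viewed=1, ad_clicks=2, dwell_seconds=4)
organic = Session(referrer_is_push=False, pages_viewed=6, ad_clicks=0, dwell_seconds=240)
print(ivt_score(fraudulent))  # 1.0
print(ivt_score(organic))     # 0.0
```

In practice the thresholds would be fitted to baseline traffic rather than fixed, and the score fed into programmatic pre-bid filtering.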

Framework Mapping

  • AML.T0047 (ML-Enabled Product or Service): Generative AI is used as an operational tool to produce convincing fake news content at scale, lowering the cost of the initial deception layer.
  • AML.T0043 (Craft Adversarial Data): The AI-generated articles are crafted to exploit the ranking and relevance signals of Google Discover’s ML recommendation engine, effectively adversarially manipulating a production ML system.
  • LLM02 (Insecure Output Handling): The downstream surface (Google Discover) ingests and presents AI-generated content without sufficient authenticity verification, enabling harmful output propagation.
  • LLM09 (Overreliance): End users over-trust content surfaced by AI-powered discovery feeds, making them susceptible to manipulated recommendations.

Impact Assessment

  • Users: Android and Chrome users across six countries exposed to scareware, financial scams, and deepfake content via a trusted platform surface.
  • Advertisers: Legitimate ad budgets contaminated by invalid traffic from the fraud network.
  • Platform Trust: Abuse of Google Discover erodes confidence in AI-curated content feeds broadly.
  • Scale: 240M bid requests in seven days places this among the larger documented mobile ad fraud operations.

Mitigation & Recommendations

  • Users: Audit and revoke browser notification permissions regularly; treat any notification permission prompt on a news site as a red flag.
  • Platform Operators: Implement provenance and authenticity signals for Discover-eligible content; increase scrutiny of newly registered domains with high engagement velocity.
  • Security Teams: Monitor for push notification abuse patterns in endpoint telemetry; deploy browser policies that block notification prompts on unrecognised domains.
  • Ad Ecosystem: Apply invalid traffic (IVT) detection tuned for real-device, organic-appearing fraud patterns.
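The permission-audit recommendation above can be partially automated. The sketch below assumes Chrome's JSON "Preferences" file stores notification grants under profile.content_settings.exceptions.notifications with setting == 1 meaning "allow"; paths and schema vary by OS and Chrome version, so treat this as a starting point, not a supported API.

```python
import json
from pathlib import Path

def granted_notification_origins(prefs: dict) -> list[str]:
    """List origins a Chrome profile has granted notification permission.

    Assumed schema: profile.content_settings.exceptions.notifications maps
    "<origin>,*" keys to {"setting": <int>} entries, where 1 means allow.
    """
    exceptions = (
        prefs.get("profile", {})
        .get("content_settings", {})
        .get("exceptions", {})
        .get("notifications", {})
    )
    return sorted(
        origin.split(",")[0]
        for origin, entry in exceptions.items()
        if entry.get("setting") == 1
    )

# Against a real profile (common Linux default path, adjust per OS):
# prefs = json.loads(
#     Path("~/.config/google-chrome/Default/Preferences").expanduser().read_text()
# )
# Synthetic example for illustration:
prefs = {"profile": {"content_settings": {"exceptions": {"notifications": {
    "https://news-lure.example:443,*": {"setting": 1},
    "https://trusted.example:443,*": {"setting": 2},
}}}}}
print(granted_notification_origins(prefs))  # ['https://news-lure.example:443']
```

At fleet scale, enterprise browser policies such as Chrome's DefaultNotificationsSetting or NotificationsBlockedForUrls achieve the same end centrally, without per-profile auditing.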
