Amazon Court Order Blocks Perplexity AI Bots — New CFAA Precedent for AI Agents
A federal judge ordered Perplexity to stop its Comet AI shopping agent from accessing Amazon Prime accounts, establishing the first legal precedent that the CFAA applies to autonomous AI agents — even when users authorize them. The ruling forces every builder working with agentic AI to rethink how their tools access third‑party platforms.
What the Court Ordered
U.S. District Judge Maxine Chesney of the Northern District of California granted Amazon a preliminary injunction on March 9, 2026, barring Perplexity's Comet browser from accessing password‑protected portions of Amazon customer accounts. The order, reported by Reuters, requires Perplexity to destroy copies of Amazon customer data already collected through Comet and bars the AI agent from accessing Prime subscriber accounts going forward.
The ruling is stayed for seven days to allow Perplexity to appeal. The Ninth Circuit subsequently granted Perplexity an administrative stay, temporarily pausing the injunction while the appeals court considers a longer pause, according to CyberScoop. The case is Amazon.com Services LLC v. Perplexity AI Inc, No. 3:25‑cv‑09514.
What Perplexity’s Comet Was Actually Doing
Amazon accused Perplexity of far more than casual scraping. According to Yahoo Finance, Perplexity's Comet browser was covertly logging into password‑protected Prime accounts to browse and make purchases on users' behalf. The agent disguised its automated activity as human browsing — masking Comet's digital fingerprint to impersonate Google Chrome traffic.
When Amazon deployed a technical block in August 2025, Perplexity pushed a software update within 24 hours to bypass it. The judge specifically cited this evasion as evidence of unauthorized access. Amazon had warned Perplexity at least five times before filing suit, as CNBC reported.
Why This Ruling Matters for Every AI Builder
This is the first judicial test of how the CFAA applies specifically to agentic AI — systems that autonomously make decisions and initiate transactions — acting on a human's behalf. (The CFAA has been applied to automated bots before, notably in Facebook v. Power Ventures, but never to generative AI agents.) The key precedent: platform authorization matters more than user consent. Even when a user explicitly authorizes an AI agent to act for them, the platform's authorization is also required. User consent alone does not constitute "authorized access" under the CFAA.
As IAPP explains, this directly impacts how builders design agentic tools. The era of "the user authorized it, so we're fine" is over. Platforms now have legal backing to control which AI agents access their services.
- API‑first, not scraping‑first: Use formal integration models (APIs, partnerships) rather than relying on user credentials for autonomous access to third‑party platforms.
- Agent identification is now a legal requirement: Amazon’s updated Business Solutions Agreement (effective March 4, 2026) requires all AI agents to identify themselves — likely becoming industry standard.
- Don’t evade technical blocks: Perplexity’s bypass of Amazon’s fingerprint block dramatically weakened its legal defense. If a platform blocks your tool, negotiate — don’t push a bypass update.
- Review third‑party ToS: Violating Terms of Service provisions about AI agent identification could now trigger CFAA liability. Builders must audit their tools against platform ToS provisions.
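In practice, the identification requirement can be as simple as sending an honest, descriptive User-Agent instead of impersonating a browser. A minimal sketch in Python — the agent name, URL, contact address, and extra header are illustrative assumptions, not a format Amazon or any platform has published:

```python
def agent_headers(version: str = "1.0") -> dict:
    """Build HTTP headers that openly identify an AI agent.

    The token format mirrors common bot conventions (name/version plus
    an info URL and contact), but the exact fields any given platform
    requires are assumptions here, not a published standard.
    """
    return {
        # Honest identification -- the opposite of masking agent
        # traffic to look like Google Chrome.
        "User-Agent": (
            f"ExampleShoppingAgent/{version} "
            "(+https://example.com/agent-info; AI agent; "
            "contact: agents@example.com)"
        ),
        # A dedicated header lets platforms filter agent traffic
        # without parsing User-Agent strings (hypothetical name).
        "X-Agent-Operator": "Example Corp",
    }


if __name__ == "__main__":
    print(agent_headers()["User-Agent"])
```

Identification only addresses transparency; credentialed actions on a user's behalf still need the platform's own authorization, which is the core of the ruling.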
The CFAA and Agentic AI: Uncharted Territory
The CFAA was drafted in 1986 — long before AI agents existed. How it applies to autonomous software acting on a human's behalf has never been tested at trial. The ruling suggests platforms can use CFAA claims to enforce ToS provisions about AI agent identification, though IAPP's analysis argues this may conflict with the Supreme Court's Van Buren ruling (2021), which held that violating computer‑use policies alone is insufficient for CFAA liability.
Perplexity's decision to update its software to bypass Amazon's technical block was cited as evidence of unauthorized access — similar to, but distinct from, the hiQ v. LinkedIn precedent, which concerned scraping publicly available profile data rather than password‑protected accounts. The Ninth Circuit appeal will be the next critical test, according to WSJ.
The Competitive Angle: Who Controls AI Access
Amazon is simultaneously blocking competitors' AI agents while developing its own — the Rufus shopping assistant. This raises antitrust concerns: platforms may use CFAA claims to block competitors while building their own tools. Amazon has already blocked dozens of AI agents including ChatGPT from its platform.
The stakes are massive. AI shopping agents that skip directly to checkout eliminate all sponsored listings, threatening Amazon's $68.6 billion in 2025 advertising revenue. As WSJ notes, the ruling protects this revenue stream. Other retailers like Walmart are developing their own AI assistants while likely tightening access.
In a twist noted by Yahoo Finance, Amazon founder Jeff Bezos is a personal investor in Perplexity — adding an ironic dimension to the litigation.
What Comes Next
The Ninth Circuit appeal is the next battleground. If the appeals court upholds the injunction, it sets a binding precedent across the western United States. If it reverses, the door opens for AI agents with user authorization to access platforms without platform consent. Either way, the case could eventually reach the Supreme Court.
For builders, the practical takeaway is clear: the era of open web access for AI agents is closing fast. ERP Today reports that enterprise architecture teams are already redesigning integration strategies around "API‑first, governed access" models. Builders who invest in formal API partnerships and agent identification now will be ahead of the curve when this precedent expands to other platforms and jurisdictions.
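A first step toward "governed access" is honoring a platform's published crawl rules before an agent touches a URL. The sketch below uses Python's standard-library robots.txt parser; the robots rules and agent name are made up for illustration, and real platforms increasingly gate agents through ToS and API agreements rather than robots.txt alone:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a retailer might publish for AI agents.
ROBOTS_TXT = """\
User-agent: ExampleShoppingAgent
Disallow: /checkout
Disallow: /account

User-agent: *
Disallow: /account
"""


def allowed(agent: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    """Return True if the given robots.txt permits `agent` to fetch `url`."""
    rp = RobotFileParser()
    # parse() also marks the rules as loaded, so can_fetch() will not
    # default-deny every request.
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)


if __name__ == "__main__":
    ua = "ExampleShoppingAgent/1.0"
    print(allowed(ua, "https://shop.example.com/products/widget"))  # True
    print(allowed(ua, "https://shop.example.com/checkout"))         # False
```

Checking robots.txt is a floor, not a defense: the ruling turned on password-protected accounts and evaded blocks, which no crawl policy reaches. Formal API agreements remain the durable path.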
Current law (CFAA, drafted in 1986) was not designed for AI agents. This case highlights the urgent need for updated legislation specifically addressing agentic AI access rights — but until that legislation arrives, the courts are setting the rules one injunction at a time.