SECURITY AND TRUST

Agentic Commerce: The Threat Landscape

The dawn of agentic commerce poses new threats to consumers and businesses.
11/20/2025



From markets to malls to mobile, shopping is continuously evolving. The next major shift is AI agents that can buy and sell on our behalf.

Agentic commerce is poised to transform how we shop. AI-powered shopping agents promise faster, smarter transactions, but where innovation leads, fraud often follows, and fast. Agentic artificial intelligence — autonomous systems capable of initiating, adapting, and executing complex tasks without human oversight — is rapidly reshaping the cyber threat landscape.

Criminals are noticing. Visa PERC identified a more than 450%¹ increase in dark web community posts in underground channels mentioning “AI Agent” over the past six months compared to the prior six-month period. By removing traditional human bottlenecks, these tools enable fraud at a scale and sophistication previously unattainable. What once required coordinated teams of skilled fraudsters can now be executed by AI, automating workflows necessary to design and deploy scams, tailoring social engineering tactics, and rapidly creating convincing criminal infrastructures for higher efficacy and victim monetization.

The good news: Visa has been preparing for this moment long before “agentic commerce” had a name. Over the past five years, we’ve invested more than $13 billion in technology and security to stay ahead of evolving threats. While education and awareness remain critical, consumers and clients can continue to trust Visa’s network and solutions to protect their transactions — even as the threat landscape grows more complex.

 

Shifting Tactics for New Flows

The promise of agentic commerce is a unique personal shopper enabled with faster payments, fewer disputes, and seamless transactions. To realize that promise, shoppers, sellers, and financial institutions all need confidence that AI-driven transactions are safe and reliable; without trust, commerce doesn’t happen.

One of the most pressing risks, however, lies in how fraudsters are evolving their tactics to exploit agentic flows. Just as legitimate marketers are shifting from optimizing for search engines (SEO) to optimizing for agentic search, fraudsters are adapting the same playbook: manipulating the logic behind agentic shopping results to steer consumers toward fraudulent or scam websites that appear trustworthy.

AI shopping agents, which by design find the best deals and make purchases on behalf of consumers, can be deceived by sophisticated counterfeit merchants engineered specifically to exploit them. A fraudulent storefront may look entirely legitimate, pass automated security checks, and offer prices far below market rate. Once the AI agent completes the purchase using stored credentials, the malicious merchant can harvest the payment data and immediately use it for unauthorized transactions. In these scenarios, both sides of the fraud — the creation of the fake merchant and the exploitation of the consumer’s AI agent — are automated, improving the fraudsters’ outcomes.

Visa saw a 25% increase² in malicious bot-initiated transactions over the past six months, with the US experiencing a 40% increase², a share expected to grow as agentic commerce scales. The speed and sophistication of these systems mean adversaries can spin up thousands of targeted operations in minutes and adapt instantly to detection measures. Synthetic content generated by AI can be indistinguishable from legitimate business materials, rendering many legacy fraud indicators unreliable.

 

Better, Faster, Stronger (Social Engineering)

Traditional social engineering and phishing techniques are evolving to exploit the unique vulnerabilities of agentic commerce. Unlike conventional domain spoofing — where attackers create visually similar URLs (e.g., “vlsa.com” instead of “visa.com”) — this new threat represents a more sophisticated approach. Malicious actors may develop AI agents that impersonate trusted brands or financial institutions, initiating conversations with users and gradually extracting sensitive information.
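The lookalike-URL pattern mentioned above can be caught mechanically. As a minimal illustrative sketch (not a description of Visa’s actual detection systems), the check below flags domains that sit within a small edit distance of a trusted brand’s domain; the `TRUSTED` watchlist and threshold are assumptions for the example, and production systems would also handle homoglyphs, punycode, and newly registered domains:

```python
# Illustrative sketch: flag lookalike domains with a simple
# edit-distance check against a watchlist of trusted brands.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical watchlist for the example.
TRUSTED = ["visa.com", "mybank.com"]

def is_lookalike(domain: str, max_distance: int = 2) -> bool:
    """True for domains suspiciously close to, but not equal to,
    a trusted domain (e.g. 'vlsa.com' vs 'visa.com')."""
    return any(
        0 < edit_distance(domain.lower(), t) <= max_distance
        for t in TRUSTED
    )
```

For instance, `is_lookalike("vlsa.com")` is flagged (one substitution away from `visa.com`), while the genuine `visa.com` and an unrelated `example.org` are not.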

While traditional phishing typically relies on static websites that prompt immediate action, these agent-based attacks leverage dynamic, conversational interfaces capable of sustaining deception over extended periods. In a recent scam case, Visa uncovered a network of fraudulent websites that each deployed a conversational AI agent as part of their operation. This tactic served a dual purpose: first, the presence of a responsive, human‑like chat interface gave the sites an added veneer of legitimacy; second, the agent could dissuade victims from contacting their bank by pretending to offer customer support assistance, keeping them engaged over days or weeks. By delaying or preventing reports of fraud, the scammers reduce the chance of early detection, allowing the sites to continue operating and capturing more victims before being shut down.

These malicious AI agents are particularly dangerous because they are designed to build trust over time. They may engage users in seemingly harmless conversations, offer helpful information or services, and establish rapport before attempting to extract sensitive data or manipulate users into harmful actions. Their conversational nature allows them to adapt based on user responses, making them significantly more sophisticated than static phishing attempts.

Moreover, their ability to mimic human-like interaction patterns and brand-specific communication styles makes them highly convincing — even to vigilant users. These agents can operate across various messaging platforms and voice systems, making them more difficult to detect with traditional security measures than conventional spoofed websites.

 

Building Fraud at Scale

Beyond direct attacks, agentic AI can build complex criminal infrastructures with unprecedented speed. It can create legitimate-looking websites, secure hosting and payment services, produce falsified compliance documentation, and fabricate corporate identities complete with synthetic personas. The result is a network of convincing but fraudulent merchants and platforms that is difficult for both agents and humans to distinguish from legitimate businesses.

The scale and speed of these threats present unique challenges. Traditional fraud detection systems were built to identify patterns over time and flag anomalies for human review, a cadence that AI-driven operations, continually refined by their operators, are built to outpace.

 

What’s Next

Addressing this new reality requires a strategic shift. Financial institutions and payment processors must invest in verification systems capable of detecting synthetic content across multiple media types. Verification processes should draw on independent data sources and employ time-based challenges that are harder for AI to circumvent. Continuous monitoring must be enhanced to detect rapid operational changes — a hallmark of AI-driven criminal enterprises.

One way Visa is addressing these threats is through the Trusted Agent Protocol — a standards-based framework enabling merchants to verify agent identity and intent in real time, preventing impersonation without degrading user experience. This protocol enforces identity verification fields, applies time-based transaction challenges to block credential harvesting, and integrates continuous telemetry into Visa’s fraud and risk models to strengthen detection dynamically.
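The general pattern behind such identity-plus-freshness checks can be sketched in a few lines. The example below is a hedged illustration, not the actual Trusted Agent Protocol specification: the field names, the shared-key HMAC scheme, and the 60-second freshness window are all assumptions chosen to show how a merchant could verify both who an agent is (signature) and when it acted (timestamp, resisting replay of harvested credentials):

```python
# Hypothetical sketch of agent request verification: the agent signs
# each request with a key issued at registration, and the merchant
# checks both the signature (identity) and a timestamp window
# (freshness). Scheme and field names are illustrative assumptions.
import hashlib
import hmac
import time

AGENT_KEYS = {"agent-123": b"demo-shared-secret"}  # issued at registration

def sign_request(agent_id: str, payload: str, ts: int) -> str:
    """Agent side: sign the request contents and timestamp."""
    msg = f"{agent_id}|{ts}|{payload}".encode()
    return hmac.new(AGENT_KEYS[agent_id], msg, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: str, ts: int,
                   signature: str, max_age_s: int = 60) -> bool:
    """Merchant side: reject unknown agents, stale requests,
    and tampered payloads."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False                       # unknown agent identity
    if abs(time.time() - ts) > max_age_s:
        return False                       # stale or replayed request
    expected = hmac.new(key, f"{agent_id}|{ts}|{payload}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)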

By harnessing the power of AI for good, Visa combines cutting-edge models with expert human oversight to stop fraud before it starts. Visa already blocks more than 500 fraudulent transactions a minute, and when consumers pay with Visa — on any platform — they’re protected by our zero-liability guarantee, ensuring they’re never held responsible for unauthorized charges.

Looking ahead, we recognize that securing agentic commerce requires more than adapting existing security measures. It demands a new, purpose-built approach. In the same way that we evolved the physical security of the EMV® Chip for in-person transactions into the digital protection of tokenization for e-commerce, we are now building the next evolution of payment security. These agentic-native capabilities will ensure that, as AI agents begin to transact on behalf of consumers, the trust and security synonymous with Visa are embedded from the start.

But no single organization can address this threat alone. The scale and cross-border nature of agentic AI crime demand coordinated industry action. Visa is working with payment networks, banks, regulators, technology providers, and law enforcement to establish intelligence-sharing frameworks, common agent verification standards, and coordinated rapid-response strategies. This collaborative approach is essential to counter adversaries that can operate simultaneously across multiple jurisdictions and platforms.

Visa is constantly innovating to maintain the trust our customers place in our technology to move money safely.


To learn more about how Visa works to protect the digital economy and the latest trends in risk and security, check out the Visa Biannual Threats Report.
