
Sep 2, 2025

What Is Agentic Commerce, and What Does It Really Mean for AI to Make Payments?

Agentic commerce is where AI agents make purchasing decisions and execute transactions autonomously. Here's what it actually means for payments infrastructure.

Agentic commerce is the shift from AI recommending what you should buy to AI actually buying it for you. Finding the product. Entering the payment details. Executing the transaction. All without you touching a screen.

Today, AI can tell you which flights to book, which shirt matches your wardrobe, which vendor has the best pricing. What it cannot do is complete the transaction without you physically intervening. You read the recommendation. You click through to the checkout. You enter your card number. You solve a CAPTCHA. You approve an OTP on your phone. You wait for confirmation.

That sequence was designed for a world where the buyer is a person sitting at a desk. It's the digital equivalent of a bouncer checking IDs. It works when a human walks through the door. It falls apart when a thousand software agents show up simultaneously, each holding a valid mandate from the person who sent them.

Every Payment System Assumes a Human Is Holding the Phone

Think about what 3D Secure actually does. It sends a push notification to your phone. You open an app. You verify with your face or fingerprint. You tap "approve." Then you wait for the redirect.

An AI agent has no phone. No fingerprint. No face. No browser session that can handle a redirect without breaking.

This is the core flow. Every card-not-present transaction in most of Europe and large parts of Asia runs through Strong Customer Authentication. The regulation was written to reduce fraud by proving a human is present. When the buyer is software, that proof is impossible by design.

So today's agents use workarounds. They store the user's card details in memory. They operate through pre-funded balances. They call APIs that bypass the public payment network entirely. Each workaround narrows what the agent can actually buy, where it can buy it, and how reliably the transaction completes.

The volume, meanwhile, keeps climbing. Adobe measured 4,700% year-over-year growth in AI-referred traffic to retail sites during the 2024 holiday season. Salesforce reported $262 billion in AI-influenced holiday sales. IBM found 45% of consumers already using AI somewhere in their purchase decisions.

What Actually Breaks When an Agent Tries to Buy Something

The four-party card model (issuer, acquirer, network, merchant) was designed around a single assumption: the cardholder is the buyer. Every rule in the system flows from that. The cardholder authenticates. The cardholder bears initial liability for fraud. The cardholder disputes charges. The cardholder's behavioral patterns train the fraud models.

Remove the cardholder from the transaction and you lose the load-bearing wall.

Authentication collapses. The agent can't do biometric verification. It can't receive an OTP. It can't prove it's the account holder, because it isn't one; it's operating as a delegate. The payment system has no native concept of "authorized delegate with bounded spending authority." That concept exists in corporate card programs and some B2B payment rails, but consumer payments don't support it.

Fraud detection goes blind. Visa's and Mastercard's fraud models are trained on human behavioral signals: device fingerprints, typing speed, geolocation, purchase history patterns, time-of-day tendencies. An agent operating from a cloud server in Virginia buying a handbag for a user in Tokyo at 3 AM local time looks exactly like fraud. The signal that would distinguish a legitimate agent transaction from a stolen card doesn't exist in the data yet.

Liability becomes ambiguous. The user authorized the agent to shop for clothes under $100. The agent bought a $95 jacket the user hates. Is that a legitimate dispute? The charge was authorized. The product was delivered. The user just didn't want that specific jacket. Today's chargeback system has no category for "my AI made a bad purchasing decision within its authorized parameters."

These are architecture problems. You can't patch them with a better checkout page.

Consumer Agents vs. Business Agents: Same Problem, Different Constraints

Consumer use cases get the headlines. AI stylists. Travel booking agents. Grocery assistants that learn your household patterns and reorder before you run out. The pitch is compelling: tell your agent what you want, set a budget, let it handle everything.

But consumer agents operate in the open market. They interact with merchants who haven't agreed to support agent-based purchasing. They hit checkout pages designed for browsers, not APIs. They encounter different authentication requirements at every store. The failure rate is high because the surface area is enormous.

Business agents are quieter but further along. A procurement agent at a mid-size manufacturer receives an order from the production line for 500 units of a component. It solicits quotes from approved vendors, evaluates terms, and executes the purchase order. The entire flow happens through APIs between systems that already trust each other. No checkout page. No CAPTCHA. No redirect.

The corporate card market figured out delegated spending decades ago. A VP gets a card with a $10,000 monthly limit, restricted to approved vendor categories. The system enforces the rules at the network level. The VP doesn't need to re-authenticate for each purchase.
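The kind of network-level rule enforcement a corporate card program does can be sketched as a simple policy check. This is an illustrative sketch with invented names and fields, not any network's actual logic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpendingMandate:
    """Delegated spending authority: a monthly limit plus allowed categories."""
    monthly_limit_cents: int
    allowed_categories: frozenset

def authorize(mandate: SpendingMandate, spent_this_month_cents: int,
              amount_cents: int, category: str) -> bool:
    """Approve a delegate's purchase only if it stays inside the mandate."""
    if category not in mandate.allowed_categories:
        return False
    return spent_this_month_cents + amount_cents <= mandate.monthly_limit_cents
```

The delegate never re-authenticates per purchase; the rules are checked at authorization time, which is exactly the property consumer agents need.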

Agentic commerce for consumers is asking for the same thing, except the delegate is software instead of a VP, the spending rules are more granular, and the merchants haven't built the infrastructure to accept it yet.

That last part is the bottleneck.

The Trust Question Nobody Has Answered

When a human makes a purchase, the liability chain is simple. The cardholder approved the charge. The merchant delivered the goods. The issuing bank verified the payment. Everyone knows their role.

When an AI agent makes a purchase, every link in that chain frays.

The agent made the decision based on rules the user set. But the user didn't approve each individual transaction. They approved a category, a limit, a timeframe. If the agent buys something the user didn't want, who's liable?

The user says: "I didn't authorize this specific purchase." The merchant says: "The payment cleared from a valid account with a valid token." The card issuer says: "We processed a legitimate transaction from a known account." The agent platform says: "The user set the rules that led to this purchase."

All of them have a case, and none of them have a precedent.

Today, most companies avoid this entirely. The agent recommends. The user approves. The transaction executes. You preserve the familiar liability chain. But that defeats the purpose. If the user still approves every transaction, the agent is a search engine with extra steps.

Real agentic commerce requires risk allocation that's explicit before the first transaction. The user authorizes a scope. The agent operates within it. If the agent exceeds its authority, the platform bears liability for not enforcing boundaries. If the agent stays within bounds and the user doesn't like the result, the user accepted that risk when they granted authority.

The question looks legal, but it's really about infrastructure. The payment system has to encode and enforce the rules. Right now, it can't.

Purchasing Decisions vs. Payment Execution

Most discussions about agentic commerce confuse two things that should be separate.

Purchasing is the intelligence layer. Which product? Which vendor? What price? What terms? That's where the agent's value lives.

Payment execution is plumbing. Money moves from the buyer's account to the seller's account. Confirmation happens. The transaction settles. This part requires not intelligence but infrastructure that can move money reliably without re-authenticating the buyer every time.

Conflating the two leads companies to build AI that tries to interact with payment UIs. They build screen-scraping agents that navigate checkout pages, fill in form fields, handle pop-ups. It works 60% of the time. The other 40%, the agent gets stuck on a CAPTCHA, a redirect, a session timeout, or a "please verify you are human" prompt that it cannot pass because it is not human.

The agent should never touch the payment UI. The agent makes the purchasing decision and hands it to a payment API. The API executes the transaction without asking the agent who it is. The agent is metadata; the real actor is the human who authorized the spending.

B2B payments have worked this way for decades. A server in a datacenter makes API calls to charge an account. No human involved. The account holder authorized it upfront.

Agentic commerce applies that same model to transactions that today still require a human clicking "confirm."
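That hand-off between decision and execution can be sketched as follows. The names are hypothetical, and `api.charge` stands in for whatever interface the payment platform actually exposes:

```python
from dataclasses import dataclass

@dataclass
class PurchaseIntent:
    """The agent's output: a decision, not a payment."""
    merchant_id: str
    item_sku: str
    amount_cents: int
    mandate_token: str  # proof of delegated authority, issued to the user up front

def execute_payment(intent: PurchaseIntent, api) -> dict:
    """Hand the decision to the payment layer. The agent never sees a
    checkout UI; the API authenticates the mandate token, not the agent."""
    return api.charge(
        token=intent.mandate_token,
        merchant=intent.merchant_id,
        amount=intent.amount_cents,
        metadata={"sku": intent.item_sku, "initiator": "agent"},
    )
```

The point of the split: the intelligence layer can fail, retry, or be swapped out without the payment layer ever needing to know who (or what) produced the intent.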

Where Prava Fits

We're building the infrastructure layer that makes this work.

When a user sets up an agent through Prava, they define what it can do. "Buy me fashion items under $100, once per week, from approved retailers." That rule set becomes a token the agent carries. The token proves the agent has authority. It proves the boundaries of that authority. It's verifiable by the merchant, the card network, and the issuer without any of them needing to contact the user in real time.

We're working with card networks to make those tokens native to the payment system. The agent doesn't store credentials. It doesn't scrape checkout pages. It presents its token, the payment executes, and the user gets the outcome.

We built Prava for the US and Southeast Asia first. Both regions have the infrastructure flexibility to support agent-native payments before regulation catches up. As the model proves itself, the geography follows.

Agentic commerce is inevitable because autonomous agents are inevitable. Every AI company building an agent that interacts with the real world will eventually need that agent to spend money. That part is settled. What's unsettled is whether the payment infrastructure will be ready when that need arrives.

The companies that build that infrastructure now will own the financial rails for the next generation of AI applications. The ones that wait will be renting access from whoever moved first.

Sushant Pandey

Founder


Copyright © 2026 Prava Payments Inc. All rights reserved