Every time an agent calls purchaseCardOWS(), it kicks off a ~33-second chain of events that goes through a Stellar RPC, a Soroban smart contract, an event watcher, two stages of fulfilment against an upstream card supplier, and a Server-Sent Events stream before the PAN comes back. This post walks through every phase of that chain with the median timings we see in production today.
We're writing this because “how does this actually work” is the single most common question we get on integration calls — and because the chain is genuinely interesting. Payment rails don't usually have a 30-second end-to-end budget, and the fact that Cards402 can hit that reliably is a function of every component in the chain being cooperative about latency.
## The 33-second timeline
Numbers are the P50 from our mainnet traffic in the past two weeks. The P99 is closer to 75s and is dominated almost entirely by upstream variance in Stage 1.
| T+ | Phase | What happens |
|---|---|---|
| 0 ms | purchaseCardOWS() called | SDK opens a fetch to POST /v1/orders. |
| ~80 ms | Order row inserted | Backend allocates an order_id, persists the intent in SQLite, and returns the Soroban receiver contract + USDC/XLM quote. |
| ~200 ms | Quote fetched | The XLM leg is priced via a live oracle read; USDC is always 1:1. The response is sent back before the watcher even knows the order exists. |
| ~250 ms | SDK signs the payment | OWS vault decrypts the Stellar secret in-memory, builds the contract invocation (pay_usdc or pay_xlm), and submits via the Soroban RPC. |
| ~5 s | Stellar ledger commit | Average ledger close time on mainnet. The Stellar RPC returns the txHash as soon as it's committed. |
| ~5.1 s | Watcher event | Our Soroban watcher subscribes to the receiver contract and fires on the deposit event. The event carries the order_id as a topic, so we know exactly which order it belongs to. |
| ~5.2 s | Amount reconciliation | We compare the deposited amount to the quoted amount. Mismatches go to an unmatched-payments queue for manual review; matches move the order into processing. |
| ~5.3 s | Stage 1 kicks off | The fulfilment worker picks up the order and starts the Stage 1 Selenium scrape against the upstream card supplier. CAPTCHA solving runs in parallel. |
| ~22 s | Stage 1 done | Upstream issues the card number and CVV. The worker writes them encrypted to a holding row and moves the order to stage1_done. |
| ~27 s | Stage 2 kicks off | Stage 2 verifies the card is live and activates it if needed. This is where most failures show up — the pipeline retries once before giving up. |
| ~32 s | Stage 2 done | The card is active. We move the order to delivered and emit an SSE event with the full card object. |
| ~33 s | SSE event received | The SDK's waitForCard() picks up the event and resolves the original promise with { number, cvv, expiry, brand, order_id }. The agent is back in its loop with a usable PAN. |
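The backend side of this timeline is a linear state machine over order phases. Here's a minimal sketch of it in TypeScript: the phase names processing, stage1_done, and delivered appear in the table above; created and paid are assumed names for the earlier states, not the production schema.

```typescript
// Order lifecycle from the timeline above. "created" and "paid" are
// illustrative names for the pre-fulfilment states.
type Phase = "created" | "paid" | "processing" | "stage1_done" | "delivered";

const NEXT: Record<Phase, Phase | null> = {
  created: "paid",           // watcher sees the deposit event on-chain
  paid: "processing",        // deposited amount matched the quote
  processing: "stage1_done", // Stage 1 scrape issued the number + CVV
  stage1_done: "delivered",  // Stage 2 verified/activated the card
  delivered: null,           // terminal: SSE event emitted, stream closes
};

function advance(phase: Phase): Phase {
  const next = NEXT[phase];
  if (next === null) throw new Error(`"${phase}" is a terminal phase`);
  return next;
}
```

Because the chain is strictly linear, every phase transition can be pushed to the SSE stream in order, and a client can always tell how far along an order is from the phase name alone.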
## Where it tends to fail
The two dominant failure modes are Stage 1 scrape timeouts (usually a bot-detection trip on the upstream supplier, usually transient) and Stage 2 activation errors (the card was minted but the upstream activation endpoint 500'd, which requires a manual poke). Both trip the circuit breaker after three consecutive failures — once tripped, new orders return 503 service_temporarily_unavailable and funds stop flowing, which is the correct fail-closed behaviour for a payment pipeline.
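The breaker logic is simple enough to sketch in a few lines. This is an illustrative reconstruction from the behaviour described above, not the production code; the class and method names are ours.

```typescript
// Fail-closed circuit breaker: three consecutive fulfilment failures
// trip it, and a tripped breaker rejects new orders with a 503.
class CircuitBreaker {
  private consecutiveFailures = 0;
  private readonly threshold = 3;

  get tripped(): boolean {
    return this.consecutiveFailures >= this.threshold;
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0; // any success resets the streak
  }

  recordFailure(): void {
    this.consecutiveFailures += 1;
  }

  // Checked before accepting funds for a new order.
  statusForNewOrder(): { code: number; error?: string } {
    return this.tripped
      ? { code: 503, error: "service_temporarily_unavailable" }
      : { code: 200 };
  }
}
```

The important property is that the check happens before any payment is requested: a tripped breaker means no deposit instructions go out, so there is nothing to refund.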
We've never lost customer funds to a fulfilment failure. The worst case has always been “order failed, refund queued, money on-chain again within a few minutes”. The non-custodial architecture is what makes that possible: the agent never handed us custody of the USDC or XLM in the first place, so a refund is just a regular Stellar payment going the other way.
## Why SSE and not polling
When Cards402 launched, GET /v1/orders/:id was the only way to watch order state. It worked — poll every 3 seconds, eventually see phase: "ready" — but it was a bad fit for agent-facing clients. Every poll is a full HTTP round-trip; every round-trip is either too slow (if you back off) or too expensive (if you don't).
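The old loop looked roughly like this. getOrder is a hypothetical helper wrapping GET /v1/orders/:id; the 3-second interval is the one described above.

```typescript
// Legacy polling loop: one full HTTP round-trip per tick until the
// order reaches phase "ready". getOrder is an assumed helper.
async function pollUntilReady(
  getOrder: (id: string) => Promise<{ phase: string }>,
  orderId: string,
  intervalMs = 3000,
): Promise<void> {
  for (;;) {
    const order = await getOrder(orderId);
    if (order.phase === "ready") return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

With a ~33-second happy path, a 3-second interval means roughly eleven round-trips per order just to learn what the server already knew and could have pushed.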
The SSE stream at /v1/orders/:id/stream replaces all of that with one open connection. The server pushes on every phase transition, closes cleanly on the terminal phase, and emits an SSE comment every 15 seconds so intermediate proxies don't idle-kill the socket. The SDK's waitForCard() defaults to SSE and silently falls back to polling if the text/event-stream header is stripped in transit — which happens more often than you'd think behind corporate proxies.
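On the wire, that stream is just the standard SSE text format: lines starting with ":" are the keep-alive comments, and "data:" lines carry the JSON payload for each phase transition. A minimal parser for it (an illustrative sketch, not the SDK's internal one, and the event shape shown in the test is assumed) looks like this:

```typescript
// Parse a chunk of an SSE stream into JSON events. Comment lines
// (the 15-second heartbeats) are skipped; a blank line terminates
// each event, per the SSE wire format.
function parseSseChunk(chunk: string): unknown[] {
  const events: unknown[] = [];
  let dataLines: string[] = [];
  for (const line of chunk.split("\n")) {
    if (line.startsWith(":")) continue; // heartbeat comment, ignore
    if (line.startsWith("data:")) {
      dataLines.push(line.slice(5).trim());
      continue;
    }
    if (line === "" && dataLines.length > 0) {
      // Blank line ends the event; multi-line data joins with "\n".
      events.push(JSON.parse(dataLines.join("\n")));
      dataLines = [];
    }
  }
  return events;
}
```

The heartbeat comments are invisible to this parser by design: they exist purely to keep proxies from closing an idle socket, and carry no order state.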
## What's next
Three things we're working on that would show up in a future version of this post:
- Pre-warmed card pool. For agents that need sub-1s card delivery, we can maintain a reserve of minted unclaimed cards and hand them out on-demand. The 33s path becomes a 500ms path for the common case.
- Multi-supplier routing. Stage 1 is currently bound to one upstream supplier. Routing each order to the best-performing supplier by live inventory would drop the P99 significantly.
- Retry with alternative supplier on failure. Today, a Stage 1 failure goes to refund. With multiple suppliers, we can retry the next one before giving up. Users only see failures when every supplier has failed — which should be essentially never.
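The retry-with-fallback item is the easiest of the three to picture in code. A hedged sketch, assuming a hypothetical Supplier function type — the real supplier interface will involve the Stage 1/Stage 2 split described above:

```typescript
// Try each supplier in order; return the first success, and surface
// a failure only when every supplier has failed.
type Supplier = (orderId: string) => Promise<{ number: string }>;

async function fulfilWithFallback(
  orderId: string,
  suppliers: Supplier[],
): Promise<{ number: string }> {
  let lastError: unknown;
  for (const supplier of suppliers) {
    try {
      return await supplier(orderId);
    } catch (err) {
      lastError = err; // record and move on to the next supplier
    }
  }
  throw lastError ?? new Error("no suppliers configured");
}
```

Under this shape, the refund path described earlier becomes the handler for the final throw rather than the response to any single supplier failure.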
If you're building on Cards402 and want to dig deeper, the full HTTP API reference is at /docs, the 5-minute quickstart is at /docs/quickstart, and questions go to api@cards402.com.