$15 million per week moved through a contract in production. An approval bug hid for six months. It only triggered when the transaction amount hit a certain threshold and the sender's nonce wrapped around.
That specific condition is rare.
Two separate audit firms reviewed the code. Both missed it.
When it finally triggered, $2.3 million moved without authorization.
Smart contract auditing for payments isn't the same as auditing speculation contracts or NFT code. Completely different threat model. Payment infrastructure demands specific attention, specific scoping, specific knowledge. If you're building or deploying payment contracts, understand what auditors actually need to check, how to scope an audit properly, and which vulnerabilities actually matter to your business.
The DeFi audit is about stopping theft. The payment audit is about preventing unauthorized execution, double-spending, cross-chain replays, and race conditions that let someone spend twice. These are payment-specific problems. Most auditors don't think about them unless you point them out.
What makes payment threats different
Most audit firms focus on hacks and financial loss. Someone drains the protocol. That's a DeFi audit.
Payment systems have a completely different risk profile.
Unauthorized execution means someone calls your payment function without owning the funds. Sounds basic. Sometimes the cause is broken access control, but just as often it's missing signature verification or incomplete state validation. You verify that Alice called the function, but do you verify that Alice authorized this specific payment?
Double-spending is when the same transaction processes twice. Blockchain normally prevents this with nonces. Cross-chain systems use lock-and-release or atomic swaps. But if your nonce implementation breaks or your cross-chain proofs are incomplete, a user spends the same balance twice. That's catastrophic.
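A language-agnostic sketch of the basic defense, in Python with illustrative names: each sender carries a strictly increasing nonce, and any transaction presented with an already-consumed nonce is rejected before funds move.

```python
class PaymentLedger:
    """Toy model of per-sender nonce tracking (all names are illustrative)."""

    def __init__(self):
        self.next_nonce = {}  # sender -> the only nonce we will accept next

    def execute(self, sender, nonce, amount):
        expected = self.next_nonce.get(sender, 0)
        if nonce != expected:
            # Replayed or out-of-order transaction: refuse to move funds.
            raise ValueError(f"bad nonce {nonce}, expected {expected}")
        self.next_nonce[sender] = expected + 1
        return (sender, nonce, amount)
```

The same signed payload can't execute twice: the second attempt arrives carrying a nonce that has already been consumed and reverts at the check.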
Replay attacks on payment contracts look different from classic same-chain replays. An attacker captures your signed transaction and replays it on a different chain. Ethereum to Polygon. The contract on the other side might accept it: the signature verifies, but the chain ID isn't checked. The same payment executes on two different chains. The same $10,000 twice.
Race conditions in payment flows usually involve the interaction between your contract and an external oracle or settlement layer. A user initiates a payment. The oracle reports a price. Your contract calculates the amount. If the oracle updates between mempool entry and mining, you execute with the wrong amount. Or a user triggers a refund at the exact moment the payment processes. Both execute. You refund money that's already sent.
These aren't theoretical vulnerabilities. They're the actual loss vectors in payment contracts. They're what kills these systems.
How to actually scope an audit
Before you hire anyone, define what you're asking for. Most companies skip this step. That's a mistake.
A "full security audit" of a payment contract costs $30,000 to $80,000 and takes 4 to 8 weeks. The scope determines whether you get the cheap end or the expensive end. Poor scoping means either missing critical things or discovering out-of-scope issues and getting hit with change orders.
Start by listing every public function. Write down exactly what inputs it takes, what state changes it makes, what external calls it performs. If you have an executePayment function, be specific. Not "it executes a payment." What inputs? What order? What external contracts does it call?
Define your threat model explicitly. One paragraph. "We want to ensure only the transaction originator can execute a payment, no payment executes twice, and cross-chain replays are impossible." That's it. But write it. Make the auditor confirm they understand it.
Specify which blockchains. Ethereum and Polygon have different threat models: different MEV dynamics, different consensus and finality assumptions. If you're deploying across chains, the auditor needs to know.
List your dependencies. Oracle calls, bridge interactions, external settlement layers, anything your contract relies on. Each dependency is a vulnerability vector. Does the oracle ever go offline? What happens to your contract if it does? If you depend on an AMM for pricing and the AMM gets depleted, does your contract still work?
Specify the depth. Surface-level audits check for obvious bugs but not subtle logic errors. Two weeks of work. Deep audits involve manual review, formal verification attempts, custom tooling. Six weeks. For payment contracts, never do surface-level.
Define success criteria. How will you know the audit was good? You need specific questions answered. "Can users execute payments without authorization? Can double-spending happen? Will cross-chain replays fail?" Make the auditor answer these explicitly.
The vulnerabilities that actually cause losses
Reentrancy is famous. It's also obvious. Every auditor checks for it. Your contract probably has reentrancy protection if anyone's looked at it.
The vulnerabilities that actually kill payment contracts are subtler.
Approval race conditions happen when a user approves a token allowance before paying. The user approves the contract to spend $10,000, then changes their mind and submits a new approval for $5,000. Both transactions sit in the mempool. Depending on mining order, a spender can consume the original $10,000 allowance, then spend another $5,000 against the new approval. Total: $15,000, from a user who intended to authorize at most $5,000.
OpenZeppelin flagged this years ago. Modern mitigations exist: EIP-2612's permit lets approval and spend happen in a single transaction via an off-chain signature, and increaseAllowance/decreaseAllowance sidestep the race entirely. But if your contract works with older tokens or custom approval logic, this is still a live risk.
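The classic mitigation, sketched in Python with illustrative names: refuse to move an allowance directly from one nonzero value to another. Forcing it through zero means a pending spend can't be stacked on top of a new approval.

```python
class Token:
    """Toy allowance model demonstrating the approve-race mitigation."""

    def __init__(self):
        self.allowances = {}  # (owner, spender) -> remaining allowance

    def approve(self, owner, spender, amount):
        current = self.allowances.get((owner, spender), 0)
        if amount != 0 and current != 0:
            # Nonzero -> nonzero changes are where the race lives: a spender
            # can consume the old allowance, then the new one on top of it.
            raise ValueError("reset allowance to 0 before setting a new value")
        self.allowances[(owner, spender)] = amount
```

With this rule, changing $10,000 to $5,000 requires an explicit approve-to-zero step in between, and the owner can check how much was spent before granting the new allowance.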
The nonce wrapping bug I mentioned at the start usually looks like this. Your contract increments a nonce to prevent double-spending. The nonce is a uint8 instead of a uint256. After 255 transactions, it wraps to 0. If a user has an old signed transaction with nonce 0 floating around (or an attacker creates one), it becomes valid again when the current nonce wraps back to 0.
Stupid obvious when you're looking for it. But in 3,000 lines of code? You miss it.
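The wrap is trivial to demonstrate once you isolate it. Simulating uint8 arithmetic in Python:

```python
UINT8_MAX = 255

def next_nonce_uint8(nonce):
    # uint8 addition is modulo 256: 255 + 1 wraps back to 0.
    return (nonce + 1) % (UINT8_MAX + 1)

nonce = 0
for _ in range(256):
    nonce = next_nonce_uint8(nonce)
# After 256 transactions the counter is back at 0, so any old signed
# message carrying nonce 0 verifies again.
```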
Oracle dependency vulnerabilities are common in cross-chain payment systems. You call a price oracle to determine how many tokens to send. If the oracle is stale or malicious, you send too much or too little. The fix is basic. Check the oracle timestamp and revert if it's too old. But if the contract doesn't do this, it's huge exposure.
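A minimal staleness guard, sketched in Python. The 300-second tolerance is an assumption for illustration; tune it to your oracle's actual update cadence.

```python
MAX_ORACLE_AGE = 300  # seconds; assumed tolerance, match your oracle's heartbeat

def tokens_for_usd(usd_amount, oracle_price, oracle_updated_at, now):
    if oracle_price <= 0:
        raise ValueError("invalid oracle price")
    if now - oracle_updated_at > MAX_ORACLE_AGE:
        # Revert rather than settle against a stale quote.
        raise ValueError("stale oracle price")
    return usd_amount / oracle_price
```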
Missing chain ID validation happens in contracts deployed across chains that accept signed transactions. You sign on Ethereum to send 100 DAI. An attacker replays the signature on Polygon. If the contract doesn't include the chain ID in the signature scheme, the same signed transaction is valid on both chains and both payments execute.
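The fix is to bind the chain ID (and ideally the verifying contract's address) into the signed digest, which is what EIP-712 domain separators do. A simplified Python sketch of the idea, with illustrative field names:

```python
import hashlib

def payment_digest(chain_id, contract_addr, sender, recipient, amount, nonce):
    # Any field change -- including the chain ID -- yields a different digest,
    # so a signature over the Ethereum digest verifies nothing on Polygon.
    payload = f"{chain_id}|{contract_addr}|{sender}|{recipient}|{amount}|{nonce}"
    return hashlib.sha256(payload.encode()).hexdigest()
```

The contract recomputes the digest with its own chain ID before checking the signature, so a payload captured on chain 1 simply fails verification on chain 137.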
State validation errors are more subtle. Your payment contract has "settled" state and "pending" state. Pending, users can refund. Settled, they can't. But there's a race where a transaction transitions from pending to settled while a refund processes. Refund checks the state, sees pending, proceeds. Settlement also started. Both execute. You refund money that's already been sent.
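The defense is to make the state transition the first effect of each path, so whichever path runs second fails its state check. A Python sketch with illustrative names (on-chain, each transaction is atomic, so in practice the race usually involves an off-chain settlement component):

```python
class Payment:
    """Toy payment state machine: transition before moving any funds."""

    def __init__(self, amount):
        self.amount = amount
        self.state = "pending"

    def settle(self):
        if self.state != "pending":
            raise ValueError(f"cannot settle from state {self.state!r}")
        self.state = "settled"   # flip state first, then move funds
        # ... transfer to the recipient here ...

    def refund(self):
        if self.state != "pending":
            raise ValueError(f"cannot refund from state {self.state!r}")
        self.state = "refunded"  # flip state first, then return funds
        # ... transfer back to the sender here ...
```

Because both paths require "pending" and consume it immediately, settle-then-refund (or the reverse) can never both execute.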
What an auditor must actually verify
When you hire someone, demand specific answers to these questions.
What happens if an external call fails? Oracle call, bridge call, anything external. Does the contract revert or continue with stale data? What breaks?
Can a user execute a transaction with insufficient balance? What state validation stops this?
What happens in a front-running scenario? Alice and Bob both pay in the same block. Who executes first? Who bears the slippage?
Any unsigned code paths? Can any function execute without proper authorization?
How does the contract handle chain ID? Is it replay-proof across forks or different deployments?
What's the nonce scheme? Could it wrap or get exploited?
If your contract talks to external services, what's the failure mode if those services die?
The report must include specific test cases. Not generic "we reviewed the code." Concrete demonstrations. "We created a test that would fail if nonce wrapping wasn't prevented. It passes." Show me the test. Show me it fails before the fix and passes after.
Red flags when reading code
Any use of uint8 for counters. It buys you nothing: small integer types often cost more gas because of masking, and a uint256 counter can't realistically wrap in the lifetime of the contract (and since Solidity 0.8, overflow reverts instead of wrapping silently).
Calling external contracts without checking the return value. Some ERC-20 tokens signal failure by returning false instead of reverting; if you call transfer and don't check the result, the transfer might not have happened. A safe-transfer wrapper, such as OpenZeppelin's SafeERC20, handles both styles.
State changes after external calls. Send money first, update balance second. An attacker reenters before the balance updates.
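This is the checks-effects-interactions pattern. A Python sketch of the effects-first version, with illustrative names; the `send` callback stands in for the external call that could reenter.

```python
class Vault:
    """Toy vault following checks-effects-interactions."""

    def __init__(self, balances, send):
        self.balances = dict(balances)
        self._send = send  # external transfer; may call back into the vault

    def withdraw(self, user, amount):
        if self.balances.get(user, 0) < amount:          # checks
            raise ValueError("insufficient balance")
        self.balances[user] -= amount                    # effects first
        self._send(user, amount)                         # interaction last
```

A reentrant `send` that tries to withdraw again sees the already-reduced balance and fails the check, instead of draining the vault.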
Comparisons with block.timestamp. Validators can nudge it by seconds, and it only advances once per block. Don't hang fine-grained deadlines or randomness on it; for payments, coarse relative windows are safer.
Functions that accept arbitrary contract addresses and call them. User provides an address, user provides a malicious contract. User wins.
No comprehensive test coverage. An audit tells you if a bug exists. Tests tell you if the bug actually executes. Demand 95+ percent coverage on payment contracts.
Even major auditors miss things. The contract in my opening had two audits. What saves you is layers. Multiple audits. Formal verification. Extensive testing. Slow deployment with caps. Nothing catches everything. But combining them gets close.