Every prediction market needs a way to price outcomes. Most teams reach for an automated market maker because order books are thin when your market has twelve participants and a resolution date three months away. The instinct is fine. The mistake is picking the wrong AMM.
Constant product market makers, the Uniswap model, were built for token swaps between assets with independent valuations. Prediction market outcomes are fundamentally different. They are bounded between zero and one. They must sum to one. And each share either expires worthless or pays out exactly one unit. Ignoring those constraints means your AMM will misprice outcomes, burn through its reserves, and leave liquidity providers wondering where their money went.
Why constant product fails for binary outcomes
The constant product invariant is x * y = k. When a trader buys outcome A shares, x decreases, y increases, and the price of A rises. Simple enough. But here is the problem.
In a token swap AMM, ETH and USDC can trade at any ratio. There is no upper bound on the ETH price. In a prediction market, outcome A can never be worth more than 1. The price of "yes" plus the price of "no" should always equal 1 (minus fees). Constant product does not enforce this constraint. As one side gets bought heavily, the price overshoots the 0 to 1 range in implied probability terms, creating arbitrage that drains the pool.
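To make the failure concrete, here is a minimal sketch of a constant product pool quoting a bounded outcome token. The pool pairs YES shares against USDC; all names and numbers are illustrative, not from any specific implementation. The point is only that the spot price climbs past 1.00, which a share that pays out at most one unit can never honestly be worth.

```python
def cpmm_buy_yes(yes_reserve, usdc_reserve, usdc_in):
    """Swap usdc_in USDC into the pool for YES shares (x * y = k)."""
    k = yes_reserve * usdc_reserve
    new_usdc = usdc_reserve + usdc_in
    new_yes = k / new_usdc          # invariant holds: new_yes * new_usdc == k
    return new_yes, new_usdc

yes, usdc = 10_000.0, 5_000.0       # spot price 0.50 USDC per YES share
for _ in range(4):
    yes, usdc = cpmm_buy_yes(yes, usdc, 5_000.0)
    print(f"spot price of YES: {usdc / yes:.2f} USDC")
# The spot price blows straight past 1.00 -- every quote above 1 is
# free money for an arbitrageur, paid out of the pool.
```

Nothing in the invariant knows that 1.00 is a ceiling, which is the whole complaint.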
We tested this on a binary market simulation with 10,000 USDC initial liquidity and realistic order flow skewed 70/30 toward one outcome. Within 800 trades the pool had lost 23% of its value to impermanent loss. That number would be worse on real markets where informed traders hit the pool before prices adjust.
The root cause is that constant product treats prediction shares like unbounded tokens. It has no concept of the probability simplex. Every trade that pushes implied probability past 0.95 or below 0.05 creates free money for arbitrageurs, and that money comes directly from LPs.
LMSR and the logarithmic approach
Robin Hanson's Logarithmic Market Scoring Rule, published in 2003, was designed specifically for this problem. The cost function looks like this.
C(q) = b * ln(sum(e^(q_i / b) for each outcome i))
Where q_i is the number of shares outstanding for outcome i, and b is a liquidity parameter that controls how much prices move per trade. The price for outcome i at any moment is
p_i = e^(q_i / b) / sum(e^(q_j / b) for each outcome j)
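The two formulas translate directly into code. A minimal sketch — the function names are ours, not from any library, and this ignores fees and fixed-point concerns:

```python
import math

def lmsr_cost(q, b):
    """C(q) = b * ln(sum_i e^(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """p_i = e^(q_i / b) / sum_j e^(q_j / b)."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def lmsr_trade_cost(q, b, i, shares):
    """Cost to buy `shares` of outcome i: C(q') - C(q)."""
    q_after = list(q)
    q_after[i] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

q, b = [0.0, 0.0], 100.0                 # fresh binary market
print(lmsr_price(q, b, 0))               # 0.5
print(lmsr_trade_cost(q, b, 0, 50.0))    # cost of buying 50 YES shares
```

Traders pay the difference in the cost function, so the price you see is always the marginal price, and by construction the prices across outcomes sum to one.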
What makes this work well is that prices are always between 0 and 1, they always sum to 1 across all outcomes, and the market maker can handle any number of outcomes. Not just binary. The maximum loss for the market maker is bounded at b * ln(n) where n is the number of outcomes.
That bounded loss is the real win. A constant product pool offers no comparable guarantee; on a heavily skewed market, arbitrage can keep draining it until almost nothing is left. LMSR tells you exactly how much capital you need to fund the market before you deploy it.
The b parameter problem
The b parameter is where most implementations get into trouble. Set b too low and prices swing wildly on small trades, which makes the market feel broken. Set b too high and the market barely moves even when real information arrives, which makes the market useless.
Polymarket's early contracts used a fixed b, and the result was predictable. On high volume markets, b was too small and prices gapped between trades. On low volume markets, b was too large and prices were sticky. There is no single correct value for b because the right value depends on expected volume, and you do not know expected volume before the market launches.
One fix is adaptive b. Othman et al. proposed a variant in 2013 where b increases with trading volume. The idea is that as more capital flows in, the market should become more liquid and prices should be harder to move. Their formula ties b to the total number of shares outstanding.
b(q) = alpha * sum(q_i for each outcome i)
Where alpha is a constant typically between 0.01 and 0.1. This works reasonably well in practice. We ran it on the same simulation as the constant product test. LP losses dropped from 23% to about 4%, and price accuracy improved by a factor of three measured against the true underlying probability.
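A sketch of that adaptive scheme, reusing the LMSR price formula from above. The b0 floor is our addition, not part of the published formula — with zero shares outstanding, alpha * sum(q) would be zero and prices would be undefined, so a fresh market needs some minimum depth:

```python
import math

def adaptive_b(q, alpha, b0=50.0):
    """b(q) = alpha * sum(q_i), floored at b0 (floor is our assumption)."""
    return max(b0, alpha * sum(q))

def lmsr_price(q, b, i):
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

alpha = 0.05
thin = [100.0, 100.0]          # young market, few shares outstanding
deep = [10_000.0, 10_000.0]    # mature market, lots of volume absorbed

# Same 500-share buy imbalance, b computed from pre-trade inventory:
p_thin = lmsr_price([thin[0] + 500, thin[1]], adaptive_b(thin, alpha), 0)
p_deep = lmsr_price([deep[0] + 500, deep[1]], adaptive_b(deep, alpha), 0)
print(round(p_thin, 3), round(p_deep, 3))
```

The thin market gets shoved to the extreme while the deep one barely moves past 0.62, which is the behavior you want: depth that grows with the capital actually at risk.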
LS-LMSR and the liquidity sensitivity problem
Standard LMSR has another weakness that shows up on chain. The cost function uses exponentials. On the EVM, computing e^x with sufficient precision requires either a precompile (which does not exist for this) or a Taylor series approximation that burns gas. A 10 outcome market costs roughly 180,000 gas per trade using a 20 term Taylor expansion. That is expensive. Users notice.
Liquidity Sensitive LMSR, introduced by Abraham Othman, tries to solve both the b parameter problem and the computational cost simultaneously. The approach replaces the fixed b with a function of the current inventory, so the market adapts its own depth. But the on chain gas cost goes up: with b depending on the share inventory, working out how many shares a given payment buys no longer has a clean closed form and must be solved numerically.
In practice, most teams that implement LS-LMSR on chain end up with a piecewise linear approximation anyway. The theoretical elegance disappears, and you are left with something that behaves like LMSR with a lookup table for b values. That works fine, but you should be honest about what you are building.
Where liquidity actually breaks
The AMM formula is only half the problem. Liquidity in prediction markets breaks for operational reasons that no pricing curve can fix.
Expiration drain. As a market approaches resolution, informed traders extract value from the AMM because they know the outcome before the oracle reports it. This is the prediction market equivalent of informed flow in traditional markets. In the final 24 hours before resolution, we have seen AMMs lose 8 to 15% of their remaining liquidity to traders who already know the result from off chain data. You need to widen spreads or halt trading before oracle submission.
Multi outcome fragmentation. A market with 20 outcomes (say, "who wins the election" with 20 candidates) divides its liquidity 20 ways. Each outcome share is illiquid even if the overall market has decent capital. Traders wanting to express a view on a single candidate face wide spreads. There is no good fix for this within the AMM framework. The math says you need roughly n * b * ln(n) capital to provide equivalent depth across n outcomes, which scales poorly.
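A quick sanity check on that scaling claim, taking the n * b * ln(n) figure at face value (the numbers are illustrative, not from a live market):

```python
import math

def depth_capital(n, b):
    """Capital for equivalent per-outcome depth across n outcomes,
    per the n * b * ln(n) rule of thumb above."""
    return n * b * math.log(n)

b = 1_000.0
print(f"{depth_capital(2, b):,.0f}")   # binary market: ~1,386
print(f"{depth_capital(20, b):,.0f}")  # 20-candidate market: ~59,915
```

Going from 2 outcomes to 20 at the same per-outcome depth costs roughly forty times the capital, which is why the election-style markets feel so thin.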
Correlated markets. If you run "will candidate X win state A" and "will candidate X win state B" as separate markets, there is no mechanism linking their prices. A trader who believes candidate X will sweep both states has to pay the spread twice. An order book with cross market orders would be better here, but most on chain implementations do not support that.
Oracle latency. Between the real world event happening and the oracle reporting the result, the AMM is a sitting duck. Anyone with a Twitter feed can frontrun the oracle. Chainlink VRF does not help here because this is not about randomness. It is about the time gap between event and settlement. Some teams implement a freeze period, but that requires knowing when the event will resolve, which is not always possible.
What actually works in production
After building three prediction market systems, we have landed on a pattern that is not theoretically perfect but performs well.
- Use LMSR with adaptive b for markets with fewer than 8 outcomes. The gas cost is acceptable and the pricing is sound.
- Use an off chain order book with on chain settlement for markets with more than 8 outcomes. The fragmentation problem makes pure AMMs impractical at higher outcome counts.
- Implement a graduated spread that widens as the market approaches expiration. We use a linear ramp starting 48 hours before expected resolution, increasing the effective spread from 1% to 5%.
- Run a separate keeper that monitors resolution conditions and can freeze the AMM when the outcome becomes deterministic but the oracle has not yet reported.
- Do not try to make LPs whole through fees alone. Prediction market LP returns are structurally worse than DEX LP returns because informed traders dominate the flow. Subsidize initial liquidity and be transparent about expected losses.
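The graduated spread from the list above is simple to state in code. A sketch using our 48 hour window and 1% to 5% range; the function name and hour-based units are illustrative:

```python
def effective_spread(hours_to_resolution, ramp_start=48.0,
                     base=0.01, final=0.05):
    """Linear ramp from `base` to `final` over the last `ramp_start`
    hours before expected resolution."""
    if hours_to_resolution >= ramp_start:
        return base
    if hours_to_resolution <= 0.0:
        return final                      # at or past expected resolution
    frac = 1.0 - hours_to_resolution / ramp_start
    return base + frac * (final - base)

print(effective_spread(72))              # 0.01 -- before the ramp starts
print(round(effective_spread(24), 3))    # 0.03 -- halfway through the ramp
print(effective_spread(0))               # 0.05 -- at expected resolution
```

The widening spread is a blunt tax on late informed flow; it does not stop the expiration drain, but it makes the AMM a more expensive target in exactly the window when informed traders show up.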
The honest math on LP returns
Here is the part that most prediction market whitepapers omit. Market making on bounded, expiring contracts is a losing game for passive LPs. The expected loss for an LMSR market maker is proportional to the information revealed during the market's lifetime. If the market starts at 50/50 and resolves to 100/0, the maximum LP loss is b * ln(2). If it starts at 80/20 and resolves to 100/0, the loss is smaller but still positive.
Fees can offset some of this, but on chain prediction markets typically charge 1 to 2% per trade. For fees to cover the expected loss, you need the market to turn over its entire liquidity roughly 3 to 5 times before resolution. Most markets do not achieve that volume. The ones that do are markets where the outcome is genuinely uncertain, which means the information loss (and LP cost) is highest.
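The turnover arithmetic is worth writing out. In this sketch the expected loss is expressed as a fraction of the LP's funded liquidity (b * ln 2 for a binary market); that fraction is a modelling assumption for illustration, not a number from any specific market:

```python
import math

def turnovers_to_break_even(loss_frac_of_liquidity, fee_rate):
    """How many times the pool's liquidity must trade hands for
    fees to cover the expected informed-flow loss."""
    return loss_frac_of_liquidity / fee_rate

b = 10_000.0
liquidity = b * math.log(2)   # worst-case binary funding, ~6,931 units

# If informed flow is expected to cost 6% of liquidity at a 1.5% fee:
print(turnovers_to_break_even(0.06, 0.015))   # roughly 4 turnovers
```

Plugging in loss fractions and fee rates in the ranges above lands you in the 3 to 5 turnover region, and most markets simply never trade that much.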
This is not a flaw to engineer around. It is the cost of running a prediction market. Someone has to subsidize price discovery, and that someone is the market maker. Teams that pretend otherwise end up with either no liquidity or angry LPs, and usually both.
Looking forward
The next generation of on chain prediction markets will probably look less like AMMs and more like batch auctions with AMM fallback. Batch auctions solve the oracle frontrunning problem by collecting orders over a time window and executing them at a single clearing price. The AMM provides continuous liquidity between batches for small trades that cannot wait.
There is also interesting work on prediction market AMMs that share liquidity across correlated markets using conditional tokens (ERC-1155 position tokens that can be merged and split). Gnosis pioneered this with their conditional token framework, but the UX remains difficult. Getting the contract architecture right is only part of the problem. Someone still has to explain to a user why they are holding a token that represents "yes on candidate X winning, conditional on turnout exceeding 60%."
The AMM is the engine. But the engine does not matter if nobody understands how to drive the car.