The original version of this story appeared in Quanta Magazine.
Imagine a city with two widget traders. Customers prefer cheap widgets, so the traders must compete to set the lowest price. Unhappy with their meager profits, they meet one night in a smoke-filled tavern to hatch a secret plan: if they raise prices together instead of competing, they can both make more money. But that kind of deliberate price-fixing, called collusion, has long been illegal. The traders decide not to risk it, and everyone else enjoys cheap widgets.
For more than a century, US law has followed this basic template: Ban those backroom deals, and fair prices should follow. These days, it’s not so easy. Across growing swaths of the economy, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new data about market conditions. These are often far simpler than the “deep learning” algorithms that power modern artificial intelligence, but they can still behave in unpredictable ways.
So how can regulators ensure that algorithms set fair prices? Their traditional approach won’t work, because it depends on uncovering explicit agreements. “The algorithms are definitely not drinking with each other,” said Aaron Roth, a computer scientist at the University of Pennsylvania.
Yet a widely cited 2019 paper showed that algorithms can quietly learn to collude, even when they’re not programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, then let them explore different strategies to increase their profits. Over time, each algorithm learned through trial and error to retaliate when its rival cut prices, slashing its own price by some large, disproportionate amount. The end result was higher prices, backed by the mutual threat of a price war.
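To make that experiment concrete, here is a minimal sketch of the setup, assuming Q-learning agents, one standard choice of the kind of simple trial-and-error learner the study describes. The price grid, toy demand model, and parameters below are all invented for illustration; this is not the researchers’ code. The crucial ingredient is that each agent conditions its next price on the pair of prices posted in the previous round, which is what makes a learned “punish the price-cutter” strategy possible.

```python
import random

# Illustrative sketch: two Q-learning agents repeatedly post a price,
# observe their profit, and update their strategy. All numbers here
# (price grid, cost, demand, learning parameters) are assumptions.

PRICES = [1, 2, 3, 4, 5]                 # discrete price levels each seller can pick
COST = 1                                 # per-unit production cost (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.05  # learning rate, discount, exploration rate

def profits(p1, p2):
    """Toy demand model: the cheaper seller wins most customers,
    and total demand shrinks as prices rise."""
    share1 = 0.8 if p1 < p2 else 0.2 if p1 > p2 else 0.5
    demand = max(0.0, 6 - (p1 + p2) / 2)
    return (p1 - COST) * share1 * demand, (p2 - COST) * (1 - share1) * demand

# Each agent's "state" is the pair of prices posted last round, so it
# can react to what its rival just did; that is what enables retaliation.
Q = [{}, {}]

def choose(agent, state):
    # Mostly pick the price with the highest learned value; sometimes explore.
    if random.random() < EPSILON or state not in Q[agent]:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: Q[agent][state].get(p, 0.0))

def update(agent, state, price, reward, next_state):
    # Standard Q-learning update toward reward plus discounted future value.
    q = Q[agent].setdefault(state, {})
    best_next = max(Q[agent].get(next_state, {0: 0.0}).values())
    old = q.get(price, 0.0)
    q[price] = old + ALPHA * (reward + GAMMA * best_next - old)

state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(200_000):
    p1, p2 = choose(0, state), choose(1, state)
    r1, r2 = profits(p1, p2)
    next_state = (p1, p2)
    update(0, state, p1, r1, next_state)
    update(1, state, p2, r2, next_state)
    state = next_state

print("final prices:", state)
```

In runs like this, the agents’ prices can drift upward and stay there: once each agent has learned that a price cut tends to trigger a retaliatory cut from its rival, undercutting stops being profitable, even though neither agent was ever told to cooperate.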
Such implicit threats also underlie many cases of human collusion. So if you want to guarantee fair prices, why not require sellers to use algorithms that are inherently unable to convey threats?
In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that simply optimize for their own benefit can sometimes produce bad outcomes for buyers. “You can still get higher prices in ways that look reasonable from the outside,” said Natalie Collina, a graduate student who co-authored the new study with Roth.
Not all researchers agree on the implications of the finding; a lot depends on how you define “reasonable.” But it shows how nuanced the questions surrounding algorithmic pricing can be, and how difficult the practice may be to regulate.
