Blog Ipsa Loquitur

Maciej Cegłowski is one of the best writers about the internet you can read. In April 2017, he gave a talk titled Build a Better Monster: Morality, Machine Learning, and Mass Surveillance that you should watch or read in its entirety. At base, the talk is about Surveillance Capitalism, which is the economic basis of the Internet. As Cegłowski puts it, “every interaction with a computing device leaves a data trail, and whole industries exist to consume this data.”

Here’s his bit about the advertising industry:

Ads are served indirectly, based on real-time auctions conducted when the page is served by a maze of intermediaries. This highly automated market is a magnet for fraud, so much of the complexity of modern ad technology consists of additional (and invasive) tracking.

Curiously, despite years of improvements in the technology, and the amount of user data available to the ad networks, online advertising isn’t targeted all that well. You can convince yourself of this by turning off your ad blocker for a week. In a recent example, Chase stopped running ads on 95% of the websites it had been advertising on and saw no measurable difference in ‘engagement’ metrics.

Many advertisers are simply not equipped to use the full panoply of surveillance options. More importantly, adversaries have become very good at gaming real-time ad marketplaces, which introduces noise into the system. An uncharitable but accurate description of online advertising in 2017 is “robots serving ads to robots”. A considerable fraction (only Google and Facebook have the numbers) of the money sloshing around goes to scammers.

So robots bid against one another for the right to show ads on pages, and other robots visit those pages to inflate their traffic and thus the value of the ad slots in the first place. Of all of humanity’s creations, this quasi-ecosystem has to be one of the most baffling.
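The auction mechanism Cegłowski describes can be sketched in miniature. Real exchanges are a maze of intermediaries, but many approximate a second-price (Vickrey) design: the highest bidder wins the impression but pays the runner-up’s price. The bidder names and amounts below are invented purely for illustration:

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price auction over a {bidder: bid} dict.

    The highest bidder wins but pays only the second-highest bid --
    the design many real-time ad exchanges loosely approximate.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the winner just pays their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical ad networks competing for one page impression.
bids = {"network_a": 1.20, "network_b": 0.95, "network_c": 1.05}
winner, price = second_price_auction(bids)
print(winner, price)  # network_a wins, pays 1.05
```

The second-price design is meant to make honest bidding the best strategy, which is part of why these markets run so well on autopilot; it also means the robots Cegłowski mentions can distort prices just by showing up and bidding.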

As an aside, even the biggest and ostensibly best surveillance companies still haven’t gotten the hang of this stuff. Facebook recently showed me that three of my friends had visited New York City, and encouraged me to visit as well. Somehow, Facebook’s system failed to account for the fact that all three of those friends, not to mention me, live in New York City.

That’s not to say that this demonstrates Facebook is somehow lousy at surveillance. This is just a funny outlier in the midst of surveillance so scary-good that it’s hard to say with certainty that Facebook isn’t listening to the conversations you have in front of your phone. Heck, IBM’s Watson answered the Final Jeopardy clue in the “U.S. Cities” category with “Toronto” en route to crushing its human competitors. The more capable these systems get, the funnier the outliers.

But Also

The outliers serve a second purpose, according to Cegłowski. This is one of his best arguments:

The relative ineffectiveness of targeted advertising creates pressure to collect more data. Ad networks are not just evaluated by their current ad revenue, but by expectations about what new ad formats will make possible in the future, in a dynamic I’ve called “investor storytime”. The more poorly current ads perform, the more room there is to tell convincing stories about future advertising technology, which of course will require new forms of surveillance.

This trick of constantly selling the next version of the ad economy works because new ad formats really do have better engagement. Advertising is like a disease: it takes people time to develop immunity and resistance. Even the first banner ad had a 70% click-through rate.

So long as advertising is the economic engine of the internet, the march toward ever more invasive surveillance technologies and ever creepier ads is inexorable. Toward that end, Cegłowski shares some meditations on what might make the ads of the future creepy in ways that are hard to wrap your head around. Advertising will be powered by artificial intelligences, but AIs are inherently alien, mostly because we don’t understand enough about brains to be able to reinvent them.

In the past, we assumed that when machines reached near-human performance in tasks like image recognition, it would be thanks to fundamental breakthroughs into the nature of cognition. We would be able to lift the lid on the human mind and see all the little gears turning.

What’s happened instead is odd. We found a way to get terrific results by combining fairly simple math with enormous data sets. But this discovery did not advance our understanding. The mathematical techniques used in machine learning don’t have a complex, intelligible internal structure we can reason about. Like our brains, they are a wild, interconnected tangle.
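The “fairly simple math with enormous data sets” point can be illustrated with about the simplest learner there is: a nearest-neighbor classifier. Its only “knowledge” is the raw stored examples plus a distance formula; scale that idea up to millions of learned parameters and you get systems with nothing resembling inspectable gears. The data below is invented for illustration:

```python
import math

def nearest_neighbor(train, point):
    """Classify `point` by the label of the closest stored example.

    There is no model to lift the lid on: the 'knowledge' is nothing
    but the raw examples and some distance arithmetic.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Invented 2-D training examples: (features, label).
train = [((0.0, 0.0), "cat"), ((1.0, 1.0), "dog"), ((0.9, 0.2), "cat")]
print(nearest_neighbor(train, (0.8, 0.9)))  # nearest to (1.0, 1.0) -> "dog"
```

This is a loose analogy, not a model of modern neural networks, but it captures the shape of the discovery Cegłowski describes: the math is trivial, the data does the work, and asking *why* the system answered as it did mostly points you back at the tangle of data itself.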

The result is that the algorithms that decide what we see (ads and content) are smarter than us in some ways, and dangerously unfit to decide how to filter the world for us in other ways. The future’s going to be weird!