Blog Ipsa Loquitur

Joshua Benton on ad-blocking software being built into Apple’s iOS, and what this means for news publishers:

For me, the arguments for using ad blockers range from the unconvincing (dude, information wants to be free) to the reasonable (I don’t need dozens of tracking beacons on every webpage) to the downright understandable (poorly built ads slow my browser to a crawl). I don’t use an ad blocker, but I do block all Flash by default for performance reasons, which accomplishes some of the same ends. The best arguments for adblocking are even stronger on mobile than they are on desktop — bandwidth and performance and battery life are all at a premium.

This is worrisome. Publishers already make tiny dollars on mobile, even as their readers have shifted there in huge numbers. To take one example, The New York Times has more than 50 percent of its digital audience on mobile, but generates only 10 percent of its digital advertising revenue there. Most news outlets aren’t even at that low level.

That looks grim.

I agree with Benton. For my part, I’ve had Flash disabled for years; Flash runs slowly, and it’s wildly insecure. There’s really no good reason to use it. Up until the last few months, I’ve let JavaScript run wild on the web. I initially installed a plugin to let me keep track of how many JavaScript files each web page was loading, and was a little flabbergasted by how many dozens of JavaScript files sites want to run every time you click a link.

On the one hand, I’m on these web sites, consuming the Content that some person was paid to write. If I’m reading, I should be generating money for the site, right? That’s supposed to be the tradeoff.

On the other hand, these publishers have basically zero incentive to limit the number or scope of the JavaScript things they’re running in my browser. Publishers seemingly just rent out space on their sites (and therefore your computer) to as many advertisers as they can. Web sites take forever to load all that crap, which grinds your browser to a halt; if you’re on your phone, it’s killing your battery life too.

That’s Just Like Your Opinion

Not everyone agrees with that last point. Nilay Patel, The Verge’s Editor-in-Chief, wrote an op-ed this week called “The Mobile Web Sucks.” Patel thinks it’s actually your web browser’s fault:

But man, the web browsers on phones are terrible. They are an abomination of bad user experience, poor performance, and overall disdain for the open web that kicked off the modern tech revolution. Mobile Safari on my iPhone 6 Plus is a slow, buggy, crashy affair, starved for the phone’s paltry 1GB of memory and unable to rotate from portrait to landscape without suffering an emotional crisis. Chrome on my various Android devices feels entirely outclassed at times[…].

This goes on for a while before Patel backs himself into a corner.

I happen to work at a media company, and I happen to run a website that can be bloated and slow. Some of this is our fault: The Verge is ultra-complicated, we have huge images, and we serve ads from our own direct sales and a variety of programmatic networks.

Yeah. You don’t say. A couple months ago, The Verge’s parent company wrote a grim assessment of just how “bloated and slow” their own sites are:

Here’s a sampling of our current performance metrics:

  • 4.85 seconds to first paint
  • 23.33 seconds to page complete
  • 13,406 speed index [time, in milliseconds, until most of the page is loaded]

Ouch. As you can clearly see, we’ve got a lot of work ahead of us, so the next step is to set up a budget.

And that’s not even on one of those starved or outclassed mobile browsers. That’s on one of those Real Computers. The Verge loads random JavaScript files from literally dozens of random web sites. That goes for just about any web site where folks are trying to make money. They just load it up with garbage.

Lots and Lots of Garbage

A researcher at Mozilla published a paper on this recently. As it turns out, Real Computers basically load pages twice as fast when they’re not larded up with a ton of advertising JavaScript. Mozilla is testing a new feature called Tracking Protection (noticing a trend here?) which blocks ads on popular sites:

Even though Tracking Protection prevents initial requests for only 4 HTML <script> elements, without Tracking Protection, an additional 45 domains are contacted. Of the additional resources downloaded without Tracking Protection enabled, 57% are JavaScript (as identified by the content-type HTTP header) and 27% are images.

The largest elements appear to be JavaScript libraries with advertisement-related names, each on the order of 10 or 100 KB. Even though client-side caching can alleviate data usage, we observe high-entropy GET parameters that will cause the browser to fetch them each time.

The last bit is extra lousy: advertisers could at least save your bandwidth and let your browser cache a copy of their JavaScript ad thingie. On our relatively slow mobile networks, that would help web pages load faster. But instead, advertisers tell your browser to download their JavaScript ad thingies every single time, so they can tell where you are and when you look at each page. Hooray.
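To make that cache-busting trick concrete, here’s a toy sketch. The URLs are made up and a plain dict stands in for the browser cache; no real browser works quite like this, but the arithmetic of the “high-entropy GET parameter” is the same.

```python
# Toy model of a browser cache keyed by the full URL, query string included.
cache: dict[str, str] = {}
network_hits = 0

def fetch(url: str) -> str:
    """Serve from cache when the exact URL has been seen before."""
    global network_hits
    if url not in cache:
        network_hits += 1              # simulate going out over the network
        cache[url] = f"contents of {url}"
    return cache[url]

# A stable script URL costs one download, no matter how often it's used.
for _ in range(3):
    fetch("https://ads.example.com/tracker.js")
print(network_hits)  # 1

# A unique "cache-buster" parameter (a counter here; a timestamp or random
# value in the wild) makes every URL distinct, so the same script gets
# re-downloaded on every single page view.
for i in range(3):
    fetch(f"https://ads.example.com/tracker.js?cb={i}")
print(network_hits)  # 4
```

Every one of those extra fetches is also a fresh log line on the ad server’s end, which is the point: the wasted bandwidth buys the advertiser a per-pageview tracking record.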

Publishers aren’t solely to blame for this situation, but they’re not exactly blameless. They hand the keys to advertisers and let them go nuts. Ben Thompson has a good breakdown of how we got here, how it works, and why all these sites aren’t just running their own ads instead of all these crazy ad networks’ ads.

It’s important to note that these ads aren’t just showing you random things. They’re built to gather information about you, about what you like to read, about how long you spend reading it, and so on. The EFF has built a tool to demonstrate how this works.

The Broader Implications

These companies collect that information and store it and use it to sell more specifically targeted ads to you. Some of them sell this information to other companies to help them fill in the gaps in what they know about you and what sites you go to.

You and I rely on all of these folks to treat our information properly and not dig through it, looking for high school exes or weird internet crushes. We rely on these folks to secure our personal information against hackers. And to keep making money; when they go out of business, their assets (our information) are sold off like desk chairs and printers. There’s really no telling just how much information about us is in how many different hands.

And really, none of that information is particularly private (big giant asterisk), and so I’m usually willing to make that trade. But the terms of the tradeoff shift a little bit every year. Publishers are serving us more and more advertisers’ JavaScript files, but publishers aren’t making more money in the process. And so we just make more and more trades, and more and more ads run on my computer. My browser moves slower and slower. This is less “let’s make that tradeoff” and more “how many tradeoffs can you possibly squeeze into this space?”

It’s only going to get worse. At some point, advertisers will realize that fraud is rampant, and the revenue from ads will plummet. There are entire classes of viruses which exist solely to infect users’ computers and load ads in hidden windows – this keeps users from realizing their computers are infected. Heck, you might have fake ads being loaded on your phone right now. Advertisers are already catching on that many of the “clicks” they’re paying for come from bots, not humans. So how much longer do sites keep making money for showing these ads? How many more ads will publishers display when advertisers pay them half as much for each ad?

Anyone really want to bet the answer to that is “fewer ads?”

Me neither.

Published on under Eyeballs for Hire

A new study has found that a ten-year-old study was correct, and that most new studies are wrong. No, really:

The claim that “most published research findings are false” is something you might reasonably expect to come out of the mouth of the most deluded kind of tin-foil-hat-wearing-conspiracy-theorist. Indeed, this is a statement oft-used by fans of pseudoscience who take the claim at face value, without applying the principles behind it to their own evidence. It is however, a concept that is actually increasingly well understood by scientists.

It is the title of a paper written 10 years ago by the legendary Stanford epidemiologist John Ioannidis. The paper, which has become the most widely cited paper ever published in the journal PLoS Medicine, examined how issues currently ingrained in the scientific process combined with the way we currently interpret statistical significance, means that at present, most published findings are likely to be incorrect. […]

Last year UCL pharmacologist and statistician David Colquhoun published a report in the Royal Society’s Open Science in which he backed up Ioannidis’ case: “If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30 percent of the time.” That’s assuming “the most optimistic view possible” in which every experiment is perfectly designed, with perfectly random allocation, zero bias, no multiple comparisons and publication of all negative findings.
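Colquhoun’s “at least 30 percent” falls out of simple arithmetic once you pick a prior. The 10% prior and 80% power below are illustrative assumptions for the sketch, not figures quoted above:

```python
# Back-of-the-envelope false discovery rate, under illustrative assumptions:
# 10% of tested hypotheses are actually true, every test has 80% power,
# and a "discovery" is declared at p < 0.05.
prior_true = 0.10   # fraction of hypotheses that are really true
power      = 0.80   # chance a real effect reaches p < 0.05
alpha      = 0.05   # chance a null effect reaches p < 0.05 anyway

true_positives  = prior_true * power          # real effects, correctly found
false_positives = (1 - prior_true) * alpha    # null effects, "found" anyway
false_discovery_rate = false_positives / (true_positives + false_positives)

print(f"{false_discovery_rate:.0%} of 'discoveries' are false")  # prints: 36% of 'discoveries' are false
```

Push the prior lower, as you would for long-shot hypotheses, and the false discovery rate climbs well past half, which is Ioannidis’ point.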

Read the whole article. The evidence is impressive, and meticulously footnoted.

Published on under Educated Guesses

We sit down with a team trying to revamp the Peace Corps website, then we walk over to chat with another team that recently created a user-friendly analytics web page that tracks which government websites are trending (a National Weather Service page usually tops the list). The goal here is to reveal how U.S. citizens use government websites, and to spark healthy competition among agencies to create more popular services.

In keeping with the tech corps’ guiding principles, everything is open source, so outsiders are free to adapt the program. And they do: A few weeks after the analytics website went live, Philadelphia used the program for its own analytics website, which the 18F team considered a measure of success. Thanks to their open-source code, they had improved government without doing any extra work.

The cool part is that the federal government is hiring nerds. The cooler part is that cities are using the technology that the federal government builds. It’d be nice to see New York City take advantage of this.

Published on under Like Uber But For Blogs

A former National Hockey League player just got elected to the Hall of Fame, even though he hasn’t really retired. Puck Daddy, the best hockey blog on the planet, explains why this is even more absurd than you think:

It’s surreal that Chris Pronger is now paid to prevent future Chris Prongers from hatching and menacing the NHL. It’s surreal that Chris Pronger is an NHL employee, while getting paid on an active NHL player contract. It’s surreal that Chris Pronger is an NHL employee, getting paid on an active NHL player contract, and will be a name featured on the Arizona Coyotes roster on his induction day for the Hockey Hall of Fame – a team the NHL, his employer, recently sold to its current owners.

See also: Pronger Physics.

Published on under Sportball Chronicles

Update July 29, 2015: One of the RecordTrac folks has reached out, and I’m pretty badly mistaken on a number of fundamental premises about this. The good news is that the universe is not as bad as I thought. The bad news is that I have added to the number of idiots writing incorrect things on the Internet. Your move, universe.

I spend a lot of time at my day job researching freedom of information laws around the country. I look for the best practices in other states (or cities, I’m not picky), and then try to get those adopted here in New York. Now, I’m not just looking at the text of the laws – those take a long time to change – I’m also looking at the way those laws are implemented.

You know how freedom of information (FOI) laws work, right? You send an FOI request to a government agency for a record in their possession, and they have to give it to you. There are exceptions for medical information and other private stuff, like your cell phone number and so on. But that’s the gist.

One day in my research, I came across this website built by the city of Oakland, California. They’ve got a portal called RecordTrac which people can use to send FOI requests to any agency in the city. City agencies can respond to the FOI requests and send the documents to the user. Most importantly, each request and its response are public by default and searchable. This saves people and agencies the time of re-asking and re-answering the same questions over and over again.

Sidebar: the nerds in the audience are screaming “it’s a bug tracker” at the tops of their lungs. And yeah, this is exactly what RecordTrac is. Smart managers track issues with their projects. Open source projects track their issues out in the open, so they don’t get a thousand emails with “it doesn’t work” in the subject line every day. A freedom of information request is a request to provide data that the government isn’t providing on its own. It’s left as an exercise to the reader to determine whether government should publish this information or not. (Hint: yes.)

Code for America

So Oakland didn’t invent this RecordTrac software themselves; it was built by a team of fellows working for Code for America. Code for America hires nerds and sends them to work in government offices around the country, to show how much better government could work if it had more nerds.

At the time, Code for America fellows were expected to take what they learned the government needs and… start a company to sell that service to governments everywhere. I’m decidedly ambivalent about this model. There are worse things than making a buck from a good idea, and I’m sure plenty of governments would be happy to pay for a service that directly addresses their biggest obstacles on a day to day basis. And government still gets free nerds for a year. That’s a win-win!

But startup culture. I just… I’m as bemused as anyone by the fetishization of start-up culture and the social class of entrepreneurs among my nerdy peers. It’s a joke, and some people make a staggering amount of money on it while most don’t. That makes it no better or worse than starting your own restaurant or lottery collective.

Apparently, Code for America has suspended the practice of embedding nerds for the purpose of creating startups. However, this was its practice for most of its existence, including the Oakland fellows’ time.

Back to Oakland

I’ve had the good fortune to interact to one degree or another with some of the fellows who built RecordTrac. They seem like great folks.

But, uh, they reinvented the wheel. A free and open source software system called Alaveteli has been in use in the United Kingdom since about 2007. It does exactly the same thing that RecordTrac does. It has dozens of contributors and is in extremely active development. Alaveteli had been used for hundreds of thousands of successful requests in the UK before Oakland ever got its Code for America fellows.

The fellows made RecordTrac despite the fact that there was already great FOI software out there. Instead of taking an afternoon to install and customize Alaveteli, the Code for America fellows wanted to make their own software platform so they could kick off their own startup later.

The fellows had one year to spend on using technology to solve problems, and they spent most of it reinventing a wheel instead of making it go further.

As an intellectual property attorney, I see some grey areas about who specifically owns the code for RecordTrac. I haven’t seen the contracts that the fellows signed with Code for America, or that Code for America signed with Oakland, or that Oakland signed with the fellows. Any one of those parties could own the copyright to the source code. According to the source code, Code for America owns at least some of it. Also, given that it’s open source, it might be hard to sell that software to folks when they can install it themselves.

Re-reinventing the Wheel

So the other day, I saw a press release on GovTech from the Oakland Code For America fellows. They’ve rewritten their software from scratch and call it NextRequest. They’ve solved the messy problem of selling open source software by rebranding it as a hosted service.

So they reinvented the wheel once as RecordTrac, and now they’ve re-reinvented it again as NextRequest.

Look, I want smart civic-minded nerds to make a living. There aren’t enough people doing this kind of work in this kind of space, and I firmly believe the world will be a better place for it. But I wish they would have become America’s top Alaveteli platform instead of spending all this time getting to this point. We could have been here years ago, right?

And hey, lest you think I’m picking on these folks, the field of FOI portals seems to be particularly popular for wheel-reinvention. You’ve got FOIA Machine and MuckRock, and the most successful one in the US: FOIA Online, built by the US Environmental Protection Agency for a cool $1.1 million.

Because we’re all friends here: even I was on a hackathon team that built a rudimentary FOIL portal back in 2012. How the time does fly. In forty-eight hours, we reinvented the Alaveteli wheel because fixing the actual problem (i.e. solutions for agencies deluged with FOI requests) is much harder than fixing the superficial problem (i.e. making an FOI request can be a little daunting for non-lawyers). This sentence represents me waving at everyone from inside my glass house.

It’s just a little bit deflating to know that rather than run this race, we’re all trying on different pairs of shoes at the starting line. Come on, civic technologists. We can do better.

Published on under We Can't Have Nice Things

Prenda used to be a law firm that represented copyright owners suing the pirates who illegally download movies and TV shows and other, shorter films. I say “used to,” because in about 2013, they were sanctioned (i.e. fined) by a judge for repeatedly engaging in egregious behavior. After being fined, the lawyers running Prenda started hiding their assets so they could plead poverty and avoid paying the fines. The judge found out, and held the lawyers in contempt. How delightful.

Though if you really want the full Prenda experience, Ken White of Popehat has chronicled every delicious moment of Prenda’s demise, including a wonderful blow-by-blow commentary of a hysterical appellate hearing. I do suggest watching it.

The story gets even better today, because TorrentFreak has evidence that the FBI is investigating Prenda’s attorneys for copyright infringement. Prenda was apparently the pirate providing illegal copies of the videos in the first place, and then running to copyright holders to say “hey look all these people are downloading your movie; we can sue them for you!”

The crucial evidence to back up this allegation came from The Pirate Bay, who shared upload logs with TorrentFreak that tied a user account and uploads to Prenda and its boss John Steele.

This serious allegation together with other violations piqued the interest of the FBI. For a long time there have been suspicions that the authorities are investigating the Prenda operation and today we can confirm that this is indeed the case.

The confirmation comes from Pirate Bay co-founders Peter Sunde and Fredrik Neij, who independently informed TF that they were questioned about Prenda during their stays in prison.

Oh, honeys.

Published on under Motion to Point and Laugh