Blog Ipsa Loquitur

Published under Well They Sound Harmless

Last year, I became fairly obsessed with superintelligent artificial intelligences. I dipped a toe into the Iain M. Banks Culture series of books, which are science fiction set in a distant future where humanity has created thousands of godlike AIs to fly their ships and terraform their worlds. I do recommend it.

The next book I read was “Superintelligence: Paths, Dangers, Strategies” by the philosopher Nick Bostrom. Bostrom actually gets paid to think (and write nonfiction!) about artificial intelligence, what it might look like, and when it might arrive. We’ve all seen The Terminator and The Matrix, so you get the gist of how scary the “what” could be.

Raffi Khatchadourian, writing in The New Yorker, has a great review of the book and interview with Bostrom. It’s called The Doomsday Invention, and it covers the “when” of AI. Note that expert consensus on AI is that we’re about twenty years away from being able to create it, and that we’ve been twenty years away for about sixty years.

For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”

In an array of fields—speech processing, face recognition, language translation—the approach was ascendant. Researchers working on computer vision had spent years trying to get systems to identify objects. In almost no time, the deep-learning networks crushed their records. In one common test, using a database called ImageNet, humans identify photographs with a five-per-cent error rate; Google’s network operates at 4.8 per cent. A.I. systems can differentiate a Pembroke Welsh Corgi from a Cardigan Welsh Corgi.

We’re not going to go extinct tomorrow, next year, or in ten years, but machines are getting exponentially smarter every day. It’s exciting, and only a little scary.

Published under Irreverently Irrelevant

I’m not usually one for op-eds, but The Rise of Hate Search, by Evan Soltas and Seth Stephens-Davidowitz in the New York Times, is pretty stunning:

There are thousands of searches every year, for example, for “I hate my boss,” “people are annoying” and “I am drunk.” Google searches expressing moods, rather than looking for information, represent a tiny sample of everyone who is actually thinking those thoughts.

There are about 1,600 searches for “I hate my boss” every month in the United States. In a survey of American workers, half of the respondents said that they had left a job because they hated their boss; there are about 150 million workers in America.

In November, there were about 3,600 searches in the United States for “I hate Muslims” and about 2,400 for “kill Muslims.” We suspect these Islamophobic searches represent a similarly tiny fraction of those who had the same thoughts but didn’t drop them into Google.

In 2016, there aren’t a lot of things more personal and intimate than what we search for online. (Relevant XKCD)

Published under Imaginary Property

Here’s an interesting tale of copyright gone weird from Ars Technica. The interminable CBS sitcom The Big Bang Theory is being sued for copyright infringement of a children’s poem called “Soft Kitty”. The poem reads, in its entirety:

warm kitty, soft kitty
little ball of fur
sleepy kitty, happy kitty
purr purr purr

Really? Fifteen words? Three of which are “purr” and four are “kitty?” That has to be some kind of record. There’s no way you can copyright that, right?

Well, yes. You can copyright a haiku. You can copyright surprisingly short things. The only real requirements are that you write it down and that it’s creative. Federal courts have interpreted the creativity requirement to imply some minimum length: you can’t copyright a poem which is one word long. There’s nothing creative about reciting a lone word. But a super long poem doesn’t guarantee copyright either; a list of every word in the English language in alphabetical order isn’t creative. It’s a lousy dictionary.

Published under Legal Theory

Here’s a provocative title from the usually sober Ars Technica: Secret Source Code Pronounces You Guilty As Charged:

Secret code now has infiltrated the criminal justice system. The latest challenge to it concerns Martell Chubbs, a handyman and convicted sex offender now accused of a 1977 Long Beach, California murder. Local police were investigating cold cases and arrested Chubbs after DNA taken from the crime scene long ago matched a sample in a national criminal database, the authorities said.

A private company called Sorenson Forensics, testing vaginal swabs from the victim, concluded that the frequency of the DNA profile’s occurrence in the general population was one in approximately 10,000 for African Americans. The same sample, when examined by Cybergenetics at the company’s Pittsburgh lab, concluded that the DNA match between the vaginal sperm sample and Chubbs is “1.62 quintillion times more probable than a coincidental match to an unrelated Black person,” according to court records.

Okay, both of those sound like slam dunks, right? What’s the problem with the Cybergenetics analysis if Chubbs is screwed either way?

Well, let’s back up a bit. What exactly do those numbers mean? They’re the likelihood that some random person shares the DNA profile of whoever left their DNA at the crime scene. Take Sorenson’s one in ten thousand number. It doesn’t mean there are 10,000 to 1 odds that Chubbs did it: that’s the prosecutor’s fallacy talking. It also doesn’t mean that if there are 20 million black men in America, there are 2,000 people whose DNA would match the killer’s, so there’s only a 1 in 2,000 chance that Chubbs is the killer. That’s the defense attorney’s fallacy.
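The arithmetic behind both fallacies fits in a few lines. This is a sketch using the post’s round numbers (the 20-million population figure is a stand-in, not real demographic data), not anything from the court record:

```python
# Why a 1-in-10,000 "random match probability" (RMP) is not the
# probability of innocence, using the figures from the post above.

rmp = 1 / 10_000          # chance a random, unrelated person matches
population = 20_000_000   # hypothetical pool of possible sources

# Defense attorney's fallacy: count the expected matches and treat each
# as equally likely, ignoring every other piece of evidence.
expected_matches = population * rmp   # 2,000 matching people

# Prosecutor's fallacy: conflating P(match | innocent) with
# P(innocent | match). Bayes' rule keeps them apart via the prior odds:
prior = 1 / population            # naive prior: anyone could be the source
likelihood_ratio = 1 / rmp        # a match is 10,000x likelier if guilty
posterior_odds = (prior / (1 - prior)) * likelihood_ratio

print(f"expected matches: {expected_matches:.0f}")        # 2000
print(f"posterior odds on DNA alone: {posterior_odds:.6f}")  # ~1 in 2,000
```

With a flat prior, the DNA evidence alone gets you odds of roughly 1 in 2,000, not 10,000 to 1 in favor of guilt; everything else (opportunity, location, the second lab’s far stronger likelihood ratio) has to move the prior.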

Published under This Doesn’t Add Up

Volkswagen’s diesel cars pollute far more than the company claimed for the last decade or so. The New York Times talked with Eben Moglen, who’s been evangelizing open-source software for several centuries, and he points out that this scandal could have been caught before it started if not for closed-source software:

“Software is in everything,” [Moglen] said, citing airplanes, medical devices and cars, much of it proprietary and thus invisible. “We shouldn’t use it for purposes that could conceivably cause harm, like running personal computers, let alone should we use it for things like anti-lock brakes or throttle control in automobiles.” […] “If Volkswagen knew that every customer who buys a vehicle would have a right to read the source code of all the software in the vehicle, they would never even consider the cheat, because the certainty of getting caught would terrify them.”

Moglen’s definitely not wrong, though I wouldn’t hold my breath on the “open-source software in anti-lock brakes” bit. I think the fact that it’s a felony to tinker with your car’s software is absurd, and that it’s impossible to actually regulate the functioning of closed-source software. But Volkswagen didn’t trick the E.P.A. with closed-source software.

From the Times article again:

When the test was done and the car was on the road, the pollution controls shut off automatically, apparently giving the car more pep, better fuel mileage or both, but letting it spew up to 35 times the legal limit of nitrogen oxide. This cheating was not discovered by the E.P.A., which sets emissions standards but tests only 10 to 15 percent of new cars annually, relying instead on “self certification” by auto manufacturers.
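The cheat described above is conceptually tiny. Here’s an illustrative sketch of the kind of check a “defeat device” performs; the steering-wheel heuristic is an assumption for illustration (dyno tests spin the drive wheels while the steering wheel sits still), not Volkswagen’s actual code:

```python
# Illustrative sketch of a "defeat device": run full emissions controls
# only when the car appears to be on an emissions-test dynamometer.
# This is NOT Volkswagen's real logic; the heuristic is a guess.

def looks_like_dyno_test(speed_kmh: float, steering_angle_deg: float) -> bool:
    # On a test rig the drive wheels turn but the steering wheel never moves.
    return speed_kmh > 0 and steering_angle_deg == 0

def emissions_control_active(speed_kmh: float, steering_angle_deg: float) -> bool:
    # Pollution controls on during the test, off on the road.
    return looks_like_dyno_test(speed_kmh, steering_angle_deg)

print(emissions_control_active(50.0, 0.0))   # on the rig: controls on
print(emissions_control_active(50.0, 12.0))  # on the road: controls off
```

The point is how little code it takes: a regulator who only reads the manufacturer’s self-reported results would never see a branch like this.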

Federal regulators and their European counterparts were bamboozled because the car companies were the ones doing the testing. That’s beyond ridiculous. Think of Volkswagen like a student who got ahold of the answer key and spent all night memorizing the answers to the final exam, only to be asked to grade his own test paper and report his grade to the teacher.

What Volkswagen did was pretty awful, but it’s not surprising. If Volkswagen’s engines pollute too much, they don’t get to sell cars in America. That’s objectively a good thing; I rather enjoy that cars are under strict regulations on the amount of poisonous material they can emit. But if you have those kinds of stakes and then let companies grade their own performance, they’re going to cheat. Full stop.

If we’re being honest, the idea that the E.P.A. didn’t have the resources to check the math itself is the really insane part. Open-sourcing Volkswagen’s software would have been an instant fix for this, but regardless of whether that happens, the E.P.A. should absolutely be able to afford to drive a car around in circles and measure what comes out of the tailpipe.

Published under Disrupt Everything

Marco Arment changed the pricing model for his podcast app Overcast last week. Previously, the app was ‘try a bit of it for free, but get all the good features after paying $5.’ Now, it’s ‘all the good stuff is free, but you can donate some money if you want.’ Michael Anderson pointed out that this was a somewhat predatory pricing model, on account of how Arment now sells his app… for free. Arment countered that anybody could make their app free (plus donations), that nobody is entitled to keep their market share, and that other people are copying Overcast’s features to use in their own apps.

There are a few problems with those points, and Samantha Bielefeld wrote about them rather expertly:

The idea that any app developer can witness Marco’s attempt at a different business model, and employ the idea in their own app offering, is true. Anybody can try this model if they wish, the difference is that hardly any other developers will. We are all keenly aware of the publicity surrounding Marco, and the influence he has over the entire industry. From his ground floor involvement in Tumblr (for which he is now a millionaire), to the creation and sale of a wildly successful app called Instapaper, he has become a household name in technology minded circles.

It is this extensive time spent in the spotlight, the huge following on Twitter, and dedicated listeners of his weekly aired Accidental Tech Podcast, that has granted him the freedom to break from seeking revenue in more traditional manners. The success I would see by releasing a music album stating, “pay what you feel my talent is worth”, would pale in comparison to when Radiohead does so.

Arment’s “pragmatic” pricing model reminds me of Taylor Swift’s essay in the Wall Street Journal last summer. She wrote that people will definitely pay artists because art is important and it’s important because it’s rare. She was wrong on one count: music isn’t a scarce resource. Anyone with an email address can sign up for Spotify and start streaming 20 million songs instantly. That’s more than a hundred years of music. You could start listening from the moment you were born until the day you died and never hear the same song twice.
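The “hundred years of music” figure checks out on the back of an envelope. The 3.5-minute average song length here is my assumption, not a Spotify statistic:

```python
# Back-of-the-envelope check on "20 million songs is more than a
# hundred years of music." Average song length is an assumed figure.
songs = 20_000_000
avg_minutes = 3.5  # assumption: typical pop-song length

total_years = songs * avg_minutes / (60 * 24 * 365)
print(f"{total_years:.0f} years of continuous listening")  # ≈ 133 years
```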

But Swift was absolutely correct on another count. As Nilay Patel noted, Taylor Swift is a very scarce resource.

“In the future, artists will get record deals because they have fans — not the other way around,” writes Taylor. We’re in that future now; that’s where Justin Bieber came from. […] But hitting that tipping point is almost impossible for the vast majority of artists working today, and their inability to actually sell music means they have to sell other things — and making all those other things means they’ll have less time and money to put into making their music. It’s a vicious cycle, and it means that being Taylor Swift is perhaps more valuable than Taylor Swift’s music.

Now sure, Swift was arguing that her music shouldn’t be available for free, and Arment is arguing that everyone can give away their app for free (plus donations). Also, I’m willing to bet she’s a better singer than he is, and that he’s a better coder, etc. But they’re both gigantic brands in their respective fields.

Sure, nothing was special about Arment ten years ago. He was working hard like everyone else was working hard. But he was lucky enough to hit that tipping point working at a startup a long time ago, and now he’s in a position that most developers aren’t. Like Bielefeld said, Radiohead can make a lot more money selling an album for free (plus donations) than you or I can. Arment, Swift, and Radiohead worked very hard and got very lucky. Because of that work and luck, they can take risks that other folks can’t: like giving away your stuff for free and asking for donations.

Those risks have different externalities for musicians than they do for mobile app developers. There’s room for more than one band on my phone, but there’s not really room for more than one podcast app. Radiohead isn’t putting Taylor Swift out of business by giving their album away for free (plus donations). Arment’s price point is a tough one to beat for developers who aren’t as famous as he is and don’t have the brand he has.

It’s bizarre to watch people like Arment reach their dreams and achieve so much, but still feel like they’re the little guy just doing little guy stuff. He’s famous now. He has his own category on Business Insider, for Pete’s sake. I know they’re not exactly the Wall Street Journal; they’re more like the Huffington Post of business journalism but haha oh wait he has a category there, too.

Congratulations. You’re not the underdog.