The Google Glitch

You may recall that I posted about a problem with Google on Saturday. An explanation has emerged:

…Google maintains a list of sites known to install malicious software in the background or “otherwise surreptitiously”. This is done through both manual and automated methods. Google works with a non-profit organization aimed at fighting malicious software called StopBadware.org to come up with criteria for maintaining this list. The organization then provides simple processes for webmasters to remove their site from the list.

“We periodically update that list and released one such update to the site this morning,” she wrote. “Unfortunately (and here’s the human error), the URL of ‘/’ was mistakenly checked in as a value to the file and ‘/’ expands to all URLs.” As a result, between 6:30 a.m. PST and 7:25 a.m. PST, the message “This site may harm your computer” accompanied almost every result a Google user found. Users who attempted to click through the results saw the “interstitial” warning page that mentions the possibility of badware.

IMO this is being misrepresented as human error. It’s actually several human errors, a design problem, and some systemic problems.

The first problem was the error itself. Fair enough. Accidents will happen, we’re only human, etc. However, even a single test would have revealed the problem. That’s not only a human error, it’s a systemic problem.
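To make that concrete: a pre-release smoke test as simple as the sketch below would have caught a stray '/' entry before it ever shipped. The file name, format, and prefix-matching rule here are my assumptions for illustration, not anything Google has described.

    # Hypothetical sketch of a pre-release smoke test for the badware list.
    # The file name, format, and prefix-matching rule are assumptions for
    # illustration, not Google's actual implementation.

    def load_patterns(path):
        """Read one URL-path prefix per line, skipping blank lines."""
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def is_flagged(url_path, patterns):
        """A URL path is flagged if any listed prefix matches it."""
        return any(url_path.startswith(p) for p in patterns)

    def test_known_good_page_is_not_flagged():
        patterns = load_patterns("badware_urls.txt")
        # If this innocuous path gets flagged, the list is far too broad,
        # which is exactly what a stray '/' entry would cause.
        assert not is_flagged("/index.html", patterns)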

The second systemic problem is, as I mentioned on Saturday, that Google apparently doesn't have separation of function between production and development. If it did, the problem would never have shown up in production. Google is large enough (Lord knows) to separate the two, and deploying untested stuff directly into the field is nobody's idea of best practice.

But there also appears to be a design problem. How can a single entry like '/' silently expand to match every URL on the web? Simple validation of the list before it goes live would catch exactly that. That's basic.
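A minimal sketch of the kind of validation I mean, assuming list entries are URL-path prefixes (the entry format and the set of "obviously too broad" values are my guesses):

    # Hypothetical sketch: reject obviously over-broad entries before the
    # list is checked in. The entry format is an assumption for illustration.

    OVERBROAD = {"/", "*", "http://", "https://"}

    def validate_entry(entry):
        entry = entry.strip()
        if not entry:
            raise ValueError("empty entry in badware list")
        if entry in OVERBROAD:
            raise ValueError("entry %r would match essentially every URL" % entry)
        return entry

    def validate_list(entries):
        """Validate every entry; refuse to publish the list if any fail."""
        return [validate_entry(e) for e in entries]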

Welcome to the modern world of computer software. Everything is an alpha version.
