Credit Where Credit Is Due

It’s nice to see Chicago doing something right for a change, so I was pleased to read this article at Citylab on how Chicago is using an improved scheduling algorithm to schedule inspections of establishments that prepare food, and it’s had good results:

For years, Chicago, like most every city in the U.S., scheduled these inspections by going down the complete list of food vendors and making sure they all had a visit in the mandated timeframe. That process ensured that everyone got inspected, but not that the most likely health code violators got inspected first. And speed matters in this case. Every day that unsanitary vendors serve food is a new chance for diners to get violently ill, paying in time, pain, and medical expenses.

That’s why, in 2014, Chicago’s Department of Innovation and Technology started sifting through publicly available city data and built an algorithm to predict which restaurants were most likely to be in violation of health codes, based on the characteristics of previously recorded violations. The program generated a ranked list of which establishments the inspectors should look at first. The project is notable not just because it worked—the algorithm identified violations significantly earlier than business as usual did—but because the team made it as easy as possible for other cities to replicate the approach.

The article goes on to bemoan the fact that few other cities have imitated Chicago’s approach despite the city having made the information readily available to anyone interested.
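The quoted passage boils down to a simple idea: estimate each establishment's likelihood of a violation from the characteristics of past violations, then inspect in descending order of risk. Here's a minimal sketch of that idea in Python. To be clear, this is purely illustrative: Chicago's actual model was built in R and is considerably more sophisticated, and all of the feature names and data below are invented.

```python
# Illustrative sketch of risk-ranked inspection scheduling.
# All feature names and records are hypothetical; the real Chicago
# model (built in R) uses different features and a real classifier.
from collections import defaultdict


def violation_rates(history):
    """Estimate P(violation | feature) from past inspection records."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [violations, inspections]
    for record in history:
        for feature in record["features"]:
            counts[feature][1] += 1
            if record["violation"]:
                counts[feature][0] += 1
    return {f: v / n for f, (v, n) in counts.items()}


def rank_establishments(establishments, rates):
    """Score each establishment by the mean violation rate of its
    features, then return the list riskiest-first."""
    def score(e):
        rs = [rates.get(f, 0.0) for f in e["features"]]
        return sum(rs) / len(rs) if rs else 0.0
    return sorted(establishments, key=score, reverse=True)


# Toy historical data: feature tags plus the inspection outcome.
history = [
    {"features": ["prior_violation", "high_volume"], "violation": True},
    {"features": ["prior_violation"], "violation": True},
    {"features": ["new_license"], "violation": False},
    {"features": ["high_volume"], "violation": False},
]

rates = violation_rates(history)
queue = rank_establishments(
    [
        {"name": "Diner A", "features": ["prior_violation", "high_volume"]},
        {"name": "Cafe B", "features": ["new_license"]},
        {"name": "Grill C", "features": ["high_volume"]},
    ],
    rates,
)
print([e["name"] for e in queue])  # riskiest establishments first
```

The contrast with the old approach is just the sort order: instead of walking the complete vendor list in a fixed sequence, inspectors work down a queue sorted by predicted risk, so the likely violators get visited first.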

There’s one niggling little detail with which I take exception:

But there’s still a significant benefit to having more data experts within city governments, says Eric Potash, a postdoctoral researcher at the University of Chicago’s Center for Data Science and Public Policy. He’s working with Chicago’s public health department on a project to use data to predict lead contamination in housing before it poisons children. He points out that collecting data is a messy task, with different troves of information stored separately in different departments. Having an advocate “on the inside” can really help speed up that process.

The odds are overwhelming that the people who developed the algorithm weren’t “on the inside”. As the body of the article says:

If the code belongs to someone, another city can’t just take it. The open data approach deals with this problem: cities can choose to share their work with whomever may be interested. But if the programmers build a project using expensive paid or proprietary software, other city governments probably won’t have access to it. That’s why the Chicago team worked with R, an open-source statistics program.

My experience with city government suggests that it was probably built by some folks at the University of Illinois. Just for your edification and enlightenment, here’s a list of the contractors used by the Department of Innovation and Technology. You’ll probably get bored after the first hundred or so, but it does make for illuminating reading.

The point is, if it was done by an arm of government, it was probably done by a contractor.

1 comment
  • mike shupp

    I find it … uh … fascinating …. that so many outfits having contracts with the city’s Dept of INNOVATIONS and TECHNOLOGY are law firms. Also, depressing.
