Not learning from experience

Defense Tech has some pretty sharp criticism of the techno-phobia over at the FBI. Hat tip: Winds of Change.

This might be a good time to revisit the post I wrote 15 months ago on the Virtual Case File debacle. Plus ça change, plus c’est la même chose.

Originally posted January 16, 2005

I’ve been meaning to post about this story since I first read about it. The FBI has a little development problem with a critical project they’ve contracted out:

A $170 million computer overhaul intended to give FBI agents and analysts an instantaneous and paperless way to manage criminal and terrorism cases is headed back to the drawing board, probably at a much steeper cost to taxpayers.

$170 million here and $170 million there and pretty soon you’re beginning to talk about real money. The problems the FBI has identified include functional inadequacies, security problems, and what’s referred to as “obsolescence”.

The Los Angeles Times has more on the story:

WASHINGTON — A new FBI computer program designed to help agents share information to ward off terrorist attacks may have to be scrapped, the agency has concluded, forcing a further delay in a four-year, half-billion-dollar overhaul of its antiquated computer system.

The bureau is so convinced that the software, known as Virtual Case File, will not work as planned that it has taken steps to begin soliciting proposals from outside contractors for new software, officials said.

[…]

An outside computer analyst who has studied the FBI’s technology efforts said the agency’s problem is that its officials thought they could get it right the first time. “That never happens with anybody,” he said.

That outside computer analyst is right, but I think there are even more serious issues than that, and I plan to discuss them a little further on in this post.

The contractor on this project is Science Applications International Corp., a San Diego-based Fortune 500 company. As you might imagine, they’re not amused by the state of the project, its reception, or the bad press the company is getting:

Science Applications International Corp. today rejected criticism that it botched a $170 million IT upgrade project with the FBI, saying the company has performed well and that the FBI is partly to blame for problems.

[…]

“The FBI modernization effort involved a massive technological and cultural change agency-wide,” said Duane Andrews, SAIC’s chief operating officer. “Unfortunately, implementing this change on the Trilogy contract has been difficult to do without impacts to cost and schedule. To add to that complexity, in the time that SAIC has been working on the Trilogy project, the FBI has had four different CIOs and 14 different managers. Establishing and setting system requirements in this environment has been incredibly challenging.”

Federal Computer Week has more background:

Five years of development and $170 million in costs has produced for the FBI an incomplete electronic records management system that may be outdated before it can be fully implemented, an FBI official said Jan. 13.

Only about one-tenth of the planned capability of the Virtual Case File has been completed by contractor Science Applications International Corp., said the official, who gave a formal press briefing on an anonymous basis. Virtual Case File is part of the Trilogy program, the bureau’s modernization effort. The application was originally due December 2003.

Currently, only the automated workflow portion of the case file management system is operational, on a pilot test basis in the New Orleans field office and Washington, D.C. At full capacity, the system should enable electronic records management and evidence management, and allow for varying levels of access based on a user’s security clearance. Updating the agency’s Investigative Data Warehouse currently requires FBI workers to manually scan officially signed agent reports, a cumbersome process that would be eliminated with electronic records management, the official said. Reports pertaining to counterterrorism are added to the data warehouse nightly, the official said.

Work on the Virtual Case File began in 2000. Five years later, the technology world has changed and the way the system was developed makes updating it virtually impossible. For example, the Virtual Case File can’t create or transmit electronic signatures, nor could that capability be added. FBI officials also expanded the scope of the file’s mission and began closer collaboration with the intelligence community following the Sept. 11, 2001, terrorist attacks, the official said.

With all of the IT folks out there in the blogosphere, I would have thought a lot more attention would have been paid to this story. Kim du Toit’s take is pretty much on target:

$170 million spent, and nothing to show for it. What’s scary is that, as The Mrs. pointed out to me, $170 million isn’t that much of a fuckup: we know a couple of corporations who spent more than that just customizing an existing product for their own needs. Maybe the Feebs should have tried that avenue.

But that’s not the problem. The problem is that the prototype delivered wasn’t enough even to meet a partial list of needs.

Bithead doubts the project is a total loss:

Example (and let’s bring this home for the purpose of really showing you what I’m talking about): You decide you need a video camera for your PC. But you find that most if not all cameras you can get only run on XP… and you’re still running Windows 98. And in any event that old P-II/400 isn’t gonna cut it… not really. Well, you don’t complain much about the costs of the new computer and operating system, you simply replace it, since you really needed to do that anyway…and then get your camera.

Well, let’s extend this a bit further. Turns out the camera you’ve bought is a pile of crap. It doesn’t work. Do you call the hardware and software you bought a bust, or do you keep using it because it’s faster, and better than what you had? And in any event; anything else you buy for your computer’s gonna work better with the newer hardware and OS anyway, and many won’t work without it.

Well, that, I suppose to be the situation the FBI is finding itself in at the moment.

Trust me, gang… this is NOT a $171m loss.

I’d like to consider a few aspects of this story:

  1. What does it all mean?
  2. How did it happen?
  3. How can this stuff be avoided in the future?
  4. What’s going to happen?

What does it all mean?

Well, right off the bat we’ve spent $170 million, let five years go by, and the FBI still doesn’t have the software it needs to function properly. From the standpoint of the federal budget—much as I hate to say it—$170 million is a drop in the bucket. But we can’t get those five years back. They are gone forever, and it’s going to take even more time (and even more money) just to get where we should be right this moment.

All of that would be bad enough, but what really bugs me is the turnover in the CIO position. I’d say there are two likely explanations: either the folks taking the job consider it merely a stepping-stone to a better one, or the job is so thankless and morale so low that each CIO bails as soon as humanly possible. Either way this is a ghastly situation to be in when we’re five years into a conflict that’s going to last much, much longer and which the FBI is right in the middle of. This just doesn’t reflect the attitude of an organization that’s taking the War on Terror seriously.

How did it happen?

As I commented over on Bithead’s blog, the development approach was fundamentally flawed. The sad truth is that in the current computer software development environment large monolithic projects of three years or more in length are doomed to failure. The world computing environment is evolving very, very fast.

Another problem is that in today’s computing environment a project that takes longer than a Microsoft operating system development cycle to complete (and that is required to function on the Microsoft operating system du jour) will meet one of two fates: either it never gets finished or it’s hopelessly obsolete by the time it does. That’s the computing environment we’ve got right now, folks.

But there’s another, even more fundamental problem. Automating a function within an organization by its very nature will change that organization’s needs. The very process of automating the function changes the organization so that the original specification is inadequate. Computer automation is not a project—it’s a process.

And the procurement requirements of government are such that they are very poorly adapted to handling processes. They’re project-oriented. In my opinion that’s why so many government IT operations are so old-fashioned and backward looking. They just can’t adapt fast enough.

How can this stuff be avoided in the future?

I think that when dealing with a large project (particularly for the government) it’s important to adopt a strategy whereby you minimize downside risk. It’s possible to design the idea of salvage into a project. Every milestone of a project should have a useable deliverable as-is. That requires a very, very different way of looking at a project. The project is likely to be designed differently and it will be implemented differently.

Adaptivization can be designed into projects. A project can be designed with the idea that it will change. This requires a different frame of reference for those putting together the functional specifications, and it will affect both the tools and the approaches used in design and development.
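
To make that concrete, here’s a toy sketch in Python of what designing for change can look like at the code level (all of the names here are hypothetical and the code is an illustration, not a prescription): the rest of the system talks to case-file storage only through one small interface, so the backing technology can be swapped later without rewriting everything that depends on it.

    # A toy illustration of "designing for change": the workflow code depends
    # on a small storage interface, never on a particular technology.
    import os

    class CaseFileStore:
        """The only thing the rest of the system knows about storage."""
        def save(self, case_id, document):
            raise NotImplementedError
        def load(self, case_id):
            raise NotImplementedError

    class FileSystemStore(CaseFileStore):
        """A deliberately low-tech first deliverable: plain files on disk."""
        def __init__(self, root):
            self.root = root
        def _path(self, case_id):
            return os.path.join(self.root, case_id + ".txt")
        def save(self, case_id, document):
            with open(self._path(case_id), "w") as f:
                f.write(document)
        def load(self, case_id):
            with open(self._path(case_id)) as f:
                return f.read()

    def file_report(store, case_id, report):
        # Workflow logic never mentions files, databases, or vendors, so
        # replacing FileSystemStore later doesn't touch this code.
        store.save(case_id, report)

If the requirements change later (say, a real records management product replaces the flat files), only a new CaseFileStore implementation has to be written; the deliverable from the earlier milestone keeps working in the meantime.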

Think small. Use low-tech open source tools and off-the-shelf solutions whenever you can.

What’s going to happen?

My guess is that they’re going to use exactly the same approaches that got them into this fix and get exactly the same results as they got this time around. It would take a major change of culture to do anything else and that would mean a major change in personnel. Our government just doesn’t work that way.

Did I mention that they’re outsourcing the post-mortem on this debacle offshore? I hope they have some method of controlling the security setup with their British contractors. And their subcontractors. And their subcontractors’ subcontractors.

5 comments
  • One of the problems with software development is that very, very few developers understand how to do it. There are simply not many designers or architects available in the market, so people who end up being designers and architects make stupid, stupid, stupid decisions. I have seen so many bad data models, missing object models, violations of basic programming rules (like isolation of data from logic, for example) and so forth — and all of this at a series of large companies and government organizations that have the money, time and organization to do it right — that it is sometimes difficult to see how we can continue our technological growth: if we don’t fix the problem of unskilled high-level IT workers, we will reach a point where our maintenance work exceeds our capacity, and then there’ll be no real new development for a while. Some companies are already there.

    I think that the problem, in other words, is much, much bigger than the FBI. And it’s getting worse, because in our shortage of skilled talent, we are turning to the traditional — and flawed — methods of making sure we get skilled people: we look at years of experience, or at degrees and certifications. The best system administrator and one of the best programmers and database people I’ve ever encountered had a GED. He eventually got a college degree (in less than six months) so that he could go to law school. He’s no longer available in the IT market, except for contributions to open source projects; he works as an intellectual property lawyer now.

    The reason we are in this position, frankly, is that we seem to think of programming as a profession, a form of engineering. It is not. Coding is an art, and good coders are patient and meticulous. Coders need to learn basic algorithms and the syntax of whatever language they use; after that, coding cannot be reduced to anything other than the innate mental gifts of the coder. System administration, database administration and integration are trades, and need to be taught by mentoring. Administrators and integrators (and the very top tier of coders) can, with study, be good architects. Architecture might plausibly be considered a profession, but it is a profession which cannot be practiced well without first going through one of the trade routes, because to do architecture well, you have to understand hardware, software and networks at a very, very low level, and that comes only from doing it yourself.

    So here we stand, in an industry becoming increasingly vital to the country, and all I can think is that we are going to limp along for a long time, until we can raise our supply to meet the demand. And offshoring actually hurts that, because it removes the lowest-level jobs from the American market, which in turn means that the next generation of people ready to take on high-level assignments is largely not being trained in the US at all.

    I probably wouldn’t be so annoyed about it if I didn’t know that there are numerous counter-examples of how to do things right, which are consistently ignored by both the people who need software built and the people who can build it. I think that this is why I don’t blog too much about my job: it’s horribly depressing when I start thinking about it.

  • One of the reasons I’ve been skeptical of the object model for software development, Jeff, is that I think that it has a flawed user model. Good coders are pretty common; good designers are very rare. I, doffing my customary modesty, happen to be one. Producing quality software using the object model requires more and better designers (which we don’t have) and uses fewer coders (which we have).

  • J Thomas

    The sad truth is that in the current computer software development environment large monolithic projects of three years or more in length are doomed to failure. The world computing environment is evolving very, very fast.

    They never worked well. When have they ever had a success rate over 50%?

    Another problem is that in today’s computing environment a project that takes longer than a Microsoft operating system development cycle to complete (and that is required to function on the Microsoft operating system du jour) will meet one of two fates: either it never gets finished or it’s hopelessly obsolete by the time it does. That’s the computing environment we’ve got right now, folks.

    If the project is already big enough to take 3 years, why run on Microsoft? Start with Python/Tk and maybe Tcl/Tk. List the OS calls required. Provide those calls. Now you have an OS. Whenever you upgrade equipment, part of the contract will involve updating drivers.

    Do the project itself in Python/Tk or whatever/Tk. Then you aren’t limited to Microsoft at all. You’re only limited to Tk. And if you want to pay for your own system calls, you won’t face Microsoft security issues — just your own security issues.
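
    A minimal sketch of what I mean (the oscalls module and everything in it are just illustrative stand-ins for whatever calls the project actually needs): the Tk screens and the logic behind them never touch the operating system except through that one thin layer.

        # oscalls.py -- the one place that knows about the operating system.
        # List the OS calls the project requires; provide them here and only here.
        import os

        def store_document(path, text):
            # Hand a document to whatever storage the platform provides.
            with open(path, "w") as f:
                f.write(text)

        def hostname():
            # A platform query routed through the layer instead of the UI.
            return os.environ.get("COMPUTERNAME") or os.environ.get("HOSTNAME", "unknown")

        # ui.py -- a Tk front end that depends only on Tk and the oscalls layer.
        try:
            import tkinter as tk      # Python 3
        except ImportError:
            import Tkinter as tk      # Python 2
        import oscalls

        def save_case(widget):
            oscalls.store_document("case_0001.txt", widget.get("1.0", tk.END))

        root = tk.Tk()
        root.title("Case file entry (sketch)")
        text = tk.Text(root, width=60, height=10)
        text.pack()
        tk.Button(root, text="Save", command=lambda: save_case(text)).pack()
        root.mainloop()

    Move to different hardware or a different OS and only oscalls.py needs attention; the Tk screens and everything built on them come along untouched.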

    But there’s another, even more fundamental problem. Automating a function within an organization by its very nature will change that organization’s needs. The very process of automating the function changes the organization so that the original specification is inadequate. Computer automation is not a project—it’s a process.

    That’s marketing. Start with an organization that can do X, where X is some set of tasks. Automate functions so that it now does Y, where Y includes all of X plus more. The project is a success. But the organization is now in a position to imagine that with a brand new system they could do Z, where Z includes all of Y plus more. That’s fine. The project was a success. They can fund a new project whenever they’re ready to handle the dislocation of switching over and the risk that it won’t actually work. If the organization can handle continual change, then they could hire a team to constantly rewrite the system. That’s a very good approach until they’ve enjoyed as much transformation as they can stand.

    And the procurement requirements of government are such that they are very poorly adapted to handling processes. They’re project-oriented. In my opinion that’s why so many government IT operations are so old-fashioned and backward looking. They just can’t adapt fast enough.

    I think a lot of it is that so long as they keep doing what they’ve been doing with no disruption, they get ignored, and if they try something new and it doesn’t bother anybody they get ignored, but if they try something new and it either doesn’t work or it works but somebody important liked it better the old way, they catch hell. When your primary job motivation is to be ignored, that doesn’t encourage a lot of innovation.

    When you pay a contractor to innovate for you, when it doesn’t work you can blame him. “He looked like the best choice among the competitive bids, sir.”

    Every milestone of a project should have a useable deliverable as-is.

    That’s likely to raise costs. It will be like a series of projects that aren’t quite compatible with each other. As you add functionality you’ll have to make little changes that reverberate through the whole thing. It’s considerably more expensive to make a series of projects than one project.

    This would be a bad idea if the single big complex project actually worked. 😉

    A project can be designed with the idea that it will change.

    This is also expensive. It’s likely to turn out that the possible changes you planned for don’t actually happen. Instead changes come that you didn’t imagine ahead of time at all. Like, maybe the government decides that as a test case, to confirm that it’s a good idea, your project must be recoded entirely in Ada for Microsoft.

    However, if the project has followed your previous idea of deliverables as milestones, it will be considerably easier to add changes, because it will have already suffered a series of changes with each new milestone; and if the schedule didn’t lag too much, that’s a strong indication of flexible design.

    Think small. Use low-tech open source tools and off-the-shelf solutions whenever you can.

    Yes. I can’t think of any drawbacks to this one. What am I missing?

  • Ah. What you’re missing, J Thomas, is that in many organizations there’s not just a prejudice against open source or free software, there’s an outright ban. I can name several Fortune 1000 companies where I know from first-hand experience that open source software is banned on the grounds that it is less secure than proprietary software.

    I’m aware that’s an exaggeration but there’s no telling that to some people.

    I don’t know whether the same practice holds at the FBI, but I wouldn’t be a bit surprised.

  • J Thomas

    So your supremely workable suggestion is probably not feasible because it’s been banned. [sigh]
