THE PROBLEMS WITH ELECTRONIC VOTING, ESPECIALLY DRES – A VIEW FROM THE CA TOP TO BOTTOM STUDY
Transcript of Matt Blaze and Mary Ann Gould on Voice of the Voters!
Dr. Matt Blaze of the University of Pennsylvania and leader of the Sequoia source code review team for California’s Top to Bottom Electronic Voting Investigation
August 8, 2007
MAG: Good evening, Dr. Blaze. We’re glad to have you here, especially with all the notoriety that is going around the country about the top-to-bottom study in California.
MB: Glad to be here.
MAG: I noticed on your blog, which is excellent, www.crypto.com, you noted that you found significant, deeply rooted weaknesses in all three of the vendors’ software. Then you went on to talk about the red team finding significant problems: the built-in security mechanisms they were up against simply don’t work properly.
MAG: Now, you indicated that what you found, even in the code alone, was far more pervasive and much more easily exploitable than you had ever imagined it would be. What did you mean by that?
MB: That’s right. It would be unfair to expect any large system to be completely perfect, and really nobody expects that any large software project is going to be completely free of mistakes or bugs or even little security problems. And in fact, election systems are designed with procedures that are intended to tolerate a certain amount of weakness. So we expected that we would find some things that would be wrong. What really surprised me, and I think surprised all of us, was just how deeply rooted the problems were. It wasn’t simply that there were some mechanisms that could be beefed up or that weren’t as good as they could have been, but that every single mechanism that was intended to stop somebody from doing something just didn’t work or could be defeated very, very easily.
Now, two of the three systems, Hart and Sequoia, haven’t really been studied that widely in the public literature, in the academic literature; not much had been known about them before. But the Diebold system, various versions of that have been studied by academics, by researchers, who had found that there were problems. But even there, the problems that were found by the Diebold team included some things that hadn’t been found before.
MAG: Well, Harri Hursti, on our program, had said two things: one, that there was an overall weakness in the architecture; and two, that basically the equipment he had looked at had not been built for quality.
MB: I’d say there really are two problems. This is really another way of putting that. The first, as you said: there’s a problem with the architecture, and by the architecture, what I mean is the design of the system. Even if it were built absolutely perfectly, the way it was designed puts security at a bit of a disadvantage. That is, the way these systems are designed, if you compromise one component, one voting machine somewhere, it becomes easier than it should be to interfere with the election results. The architectures of the systems aren’t designed with enough built-in checks and balances, enough built-in mistrust of the possibility of mistakes, to tolerate the kinds of problems that come up in any system run by people. So you can look at the overall design of these systems and tell right off the bat that this design was not as good for security as it could be. But, compounding that problem, when we actually went and looked inside these systems and looked at the source code that runs them, not only is the design weak, but the implementation itself is weak. The code has bugs in it, and there are some fundamental security weaknesses that could have been avoided by better programming. So that makes that weak architecture that much worse, because the weaknesses that you might be able to exploit are just all over the place.
MAG: How did these machines get certified?
MB: There’s a federal certification process in which the design is submitted and the source code is submitted to what’s called an independent testing authority, and they look at the code and they’re supposed to make sure that the code is written according to certain standards. They look at the actual machines and they test them. I frankly was surprised that the systems we looked at had passed certification.
MAG: Then that’s my question. How did they get past that certification?
MB: I think you’d have to ask the testing authorities. It frankly baffles me.
MAG: Okay. Then we get to the bottom line, I guess. Are the problems fixable, or do we have systems that might be fatally flawed?
MB: I think they’re fatally flawed, and that puts us in a real bind. We can’t just postpone our elections until the technology is ready. So we really have two problems: one, which in a lot of ways is the easier of the two problems, is what do we do in the long term? How would we design a good, secure election system for use in three to five years from now? And I think there are a number of ways we might do that, and we can talk about them. But we’re still left with the problem of what will we do in November and what will we do in the primaries, and what do we do in the presidential election in 2008?