


Andrew J. Ko

How do you know a program does what you intended?

Part of this is being clear about what you intended (by writing specifications, for example), but your intents, however clear, are not enough: you need evidence that your intents were correctly expressed computationally. To get this evidence, we do verification.

There are many ways to verify code. A reasonable first instinct is to simply run your program. After all, what better way to check whether you expressed your intents than to see with your own eyes what your program does? This empirical approach is called testing. Some testing is manual, in that a human executes a program and verifies that it does what was intended. Some testing is automated, in that the test is run automatically by a computer. Another way to verify code is to analyze it, using logic to verify its correct operation. As with testing, some analysis is manual, since humans do it; we call this manual analysis inspection. Other analysis is automated, since computers do it; we call this program analysis. This leads to a nice complementary set of verification techniques along two axes:

             manual            automatic
empirical    manual testing    automated testing
analytical   inspection        program analysis

To discuss each of these and their tradeoffs, first we have to cover some theory about verification. The first and simplest ideas are some terminology:

A failure is any program behavior that deviates from what was intended.
A defect is the part of a program's code that can cause a failure.
A bug is an informal word that can refer to a defect, a failure, or both.

Note that because defects are defined relative to intent, whether a behavior is a failure depends entirely on the definition of intent. If that intent is vague, whether something is a defect is vague. Moreover, you can define intents that result in behaviors that seem like failures: for example, I can write a program that intentionally crashes. A crash isn't a failure if it was intended! This might be pedantic, but you'd be surprised how many times I've seen professional developers in bug triage meetings say:

"Well, it's worked this way for a long time, and people have built up a lot of workarounds for this bug. It's also really hard to fix. Let's just call this by design. Closing this bug as won't fix."


So how do you find defects in a program? Let's start with testing. Testing is generally the easiest kind of verification to do, but as a practice, it has questionable efficacy. Empirical studies of testing find that it is related to fewer defects in the future, but not strongly related, and it's entirely possible that it's not the testing itself that prevents defects, but that other activities (such as more careful implementation) lead to both fewer defects and more testing effort (Ahmed et al. 2016). At the same time, modern developers don't test as much as they think they do (Beller et al. 2015). Moreover, students are often not convinced of the return on investment of automated tests and often opt for laborious manual tests (even though they regret it later) (Pham et al. 2014). Testing is therefore in a strange place: it's a widespread activity in industry, but it's often not executed systematically, and there is some evidence that it doesn't help prevent defects from being released.

Why is this? One possibility is that no amount of testing can prove a program correct with respect to its specifications. Why? It boils down to the same limitations that exist in science: with empiricism, we can provide evidence that a program does have defects, but we can't provide complete evidence that a program doesn't have defects. This is because even simple programs can execute in an infinite number of different ways.

Consider this JavaScript program:

function count(input) {
	// Count down to zero, so the function should always return 0.
	while(input > 0)
		input--;
	return input;
}
The function should always return 0, right? How many possible values of input do we have to try manually to verify that it always does? Well, if input is an integer, then there are 2^32 possible integer values, because JavaScript uses 32 bits to represent an integer. That's not infinite, but it's a lot. But what if input is a string? There are an infinite number of possible strings, because they can have any sequence of characters of any length. Now we have to manually test an infinite number of possible inputs. So if we restrict ourselves to testing, we will never know that the program is correct for all possible inputs. In this case, automated testing doesn't even help, since there are an infinite number of tests to run.
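
To make this concrete, here is a minimal sketch of what automated tests for count might look like, using JavaScript's built-in console.assert (the specific inputs are arbitrary samples, which is exactly the problem):

console.assert(count(0) === 0, "zero should return 0");
console.assert(count(7) === 0, "positive integers should count down to 0");
console.assert(count(-3) === 0, "fails: negative inputs are returned as-is");
console.assert(count("hello") === 0, "fails: strings are returned as-is");

The first two assertions pass and the last two fail, but notice what the passing ones actually tell us: only that count is correct for the inputs we happened to sample. The infinitely many unsampled inputs remain unverified.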

There are some ideas in testing that can improve how well we can find defects. For example, rather than just testing the inputs you can think of, focus on all of the lines of code in your program. If you find a set of tests that causes all of the lines of code to execute, you have achieved one notion of test coverage. Of course, lines of code aren't enough, because an individual line can contain multiple different paths within it (e.g., value ? getResult1() : getResult2()). So another notion of coverage is executing all of the possible control flow paths through the various conditionals in your program. Executing all of the possible paths is hard, of course, because every conditional in your program doubles the number of possible paths (you have 200 if statements in your program? That's up to 2^200 possible paths, a number with more than 60 digits, far more than any test suite could ever execute).
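
To illustrate the difference between these notions of coverage, consider this small hypothetical function (not from the example above), where two tests execute every line but only half of the paths:

function example(a, b) {
	let x = 0;
	if (a)
		x += 1;
	if (b)
		x -= 1;
	return x;
}

// These two tests achieve 100% line coverage...
example(true, true);    // executes both branch bodies
example(false, false);  // executes neither branch body
// ...but exercise only two of the four possible paths. The (true, false)
// and (false, true) paths, where the two conditionals interact, go untested.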

There are many types of testing that are common in software engineering:

Unit tests verify that individual functions or classes behave as intended.
Integration tests verify that multiple components interact correctly when combined.
System tests verify that an entire application behaves as intended, end to end.
Regression tests verify that behavior that was correct before a change remains correct after it.
Acceptance tests verify that the software meets the needs of its users and stakeholders.

Which tests you should write depends on what risks you want to take. Don't care about failures? Don't write any tests. If failures of a particular kind are highly consequential to your team, you should probably write tests that check for those failures. As we noted above, you can't write enough tests to catch all bugs, so deciding which tests to write and maintain is a key challenge.


Now, you might be thinking that it's obvious that the program above is defective for some integers and strings. How did you know? You analyzed the program rather than executing it with specific inputs. For example, when I read (analyzed) the program, I thought:

"if we assume input is an integer, then there are only three types of values to meaningfully consider with respect to the > in the loop condition: positive, zero, and negative. Positive numbers will always decrement to 0 and return 0. Zero will return zero. And negative numbers just get returned as is, since they're less then zero, which is wrong with respect to the specification. And in JavaScript, strings are never greater than 0 (let's not worry about whether it even makes sense to be able to compare strings and numbers), so the string is returned, which is wrong."

The above is basically an informal proof. I used logic to divide the possible values of input into cases and reason about each case's effect on the program's behavior. I used symbolic execution to verify all possible paths through the function, finding the paths that result in correct and incorrect values. This strategy was an inspection, because we did it manually. If we had written a program that read the program and performed this proof automatically, we would have called it program analysis.
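
The case analysis above also points directly at a repair. Here is one possible corrected count, a sketch that assumes the intent really is to return 0 for every input (Number.isInteger is used to separate the one class of inputs the loop handles from everything else):

function count(input) {
	// The analysis found the defective cases: anything that is not a
	// positive integer either fails the loop condition (negatives, strings)
	// or never reaches exactly zero (fractions), and is returned as-is.
	if (!Number.isInteger(input) || input <= 0)
		return 0;
	while (input > 0)
		input--;
	return input; // always exactly 0 here
}

With this repair in place, the case analysis covers every possible input, not just the ones we happened to sample with tests.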

The benefit of analysis is that it can demonstrate that a program is correct in all cases. This is because analyses can handle infinite spaces of possible inputs by mapping them onto a finite space of possible executions. It's not always possible to do this in practice, since many kinds of programs can execute in infinite ways, but analysis gets us closer to proving correctness than testing can.

One popular type of automatic program analysis tool is the static analysis tool. These tools read programs and identify potential defects using formal reasoning like the informal proof above. They typically produce a set of warnings, each one requiring inspection by a developer to verify, since some of the warnings may be false positives (something the tool thought was a defect, but wasn't). Although static analysis tools can find many kinds of defects, developers don't yet view them as all that useful, because the false positives are often large in number and the way they are presented makes them difficult to understand (Johnson et al. 2013). There is one exception to this, and it's a static analysis tool you've likely used: a compiler. Compilers verify the correctness of syntax and grammar and, for statically-typed languages, the correctness of types. As I'm sure you've discovered, compiler errors aren't always the easiest to comprehend, but they do find real defects automatically. The research community is still searching for ways to check richer specifications of program behavior.
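
As a small illustration, here is the kind of defect a static analysis tool can prove without running anything (a hypothetical snippet; ESLint, for example, flags code like this with its no-unreachable rule):

function double(x) {
	return x * 2;
	console.log("doubled", x); // unreachable: a tool can prove that control
	                           // never reaches this line on any input
}

What makes this analytical rather than empirical is that the warning holds for every possible execution, not just the inputs someone happened to test.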

Not all analytical techniques rely entirely on logic. In fact, one of the most popular methods of verification in industry is the code review, also known as an inspection. The basic idea of an inspection is to read the program analytically, following the control and data flow inside the code to look for defects. This can be done alone or in groups, and can even be included as part of the process of integrating changes, to verify them before they are committed to a branch. Modern code reviews, while informal, help find defects, stimulate knowledge transfer between developers, increase team awareness, and help identify alternative implementations that can improve quality (Bacchelli & Bird 2013). One study found that measures of how much a developer knows about a system's architecture can increase by 66% to 150% through code review, depending on the project (Rigby & Bird 2013). That said, not all reviews are created equal: the best ones are thorough and conducted by a reviewer with strong familiarity with the code (Kononenko et al. 2016); including reviewers who do not know each other or do not know the code can result in longer reviews, especially when run as meetings (Seaman & Basili 1997). Soliciting reviews asynchronously, by allowing developers to request reviews from their peers, is generally much more scalable (Rigby & Storey 2011), but this requires developers to be careful about which reviews they invest in.

Beyond these more technical considerations around verifying a program's correctness are organizational issues around different software qualities. For example, different organizations have different sensitivities to defects. If a $0.99 game on the app store has a defect, that might not hurt its sales much, unless that defect prevents a player from completing the game. If Boeing's flight automation software has a defect, hundreds of people might die. The game developer might do a little manual play testing, release, and see if anyone reports a defect. Boeing will spend years proving mathematically with automatic program analysis that every line of code does what is intended, and repeating this verification every time a line of code changes. What type of verification is right for your team depends entirely on what you're building, who's using it, and how they're depending on it.


Further reading

Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, and Carlos Jensen. 2016. Can testedness be effectively measured? In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 547-558.

Alberto Bacchelli and Christian Bird. 2013. Expectations, outcomes, and challenges of modern code review. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 712-721.

Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. 2015. When, how, and why developers (do not) test in their IDEs. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 179-190.

Brittany Johnson, Yoonki Song, Emerson Murphy-Hill, and Robert Bowdidge. 2013. Why don't software developers use static analysis tools to find bugs? In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 672-681.

Oleksii Kononenko, Olga Baysal, and Michael W. Godfrey. 2016. Code review quality: how developers see it. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1028-1038.

Raphael Pham, Stephan Kiesling, Olga Liskin, Leif Singer, and Kurt Schneider. 2014. Enablers, inhibitors, and perceptions of testing in novice software teams. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 30-40.

Peter C. Rigby and Margaret-Anne Storey. 2011. Understanding broadcast based peer review on open source software projects. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 541-550.

Peter C. Rigby and Christian Bird. 2013. Convergent contemporary software peer review practices. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.

Carolyn B. Seaman and Victor R. Basili. 1997. An empirical study of communication in code inspections. In Proceedings of the 19th international conference on Software engineering (ICSE '97). ACM, New York, NY, USA, 96-106.