Program Comprehension

Andrew J. Ko

Despite all of the activities that we've talked about so far—communicating, coordinating, planning, designing, architecting—really, most of a software engineer's time is spent reading code (Maalej et al. 2014). Sometimes this is their own code, which makes reading easier. Most of the time, it is someone else's code, whether it's a teammate's or part of a library or API they're using. We call this reading program comprehension.

Being good at program comprehension is a critical skill. You need to be able to read a function and know what it will do with its inputs; you need to be able to read a class and understand its state and functionality; you also need to be able to comprehend a whole implementation, understanding its architecture. Without these skills, you can't test well, you can't debug well, and you can't fix or enhance the systems you're building or maintaining. In fact, studies of software engineers' first year at their first job show that a significant majority of their time is spent simply trying to comprehend the architecture of the system they are building or maintaining and the processes being followed to modify and enhance it (Dagenais et al. 2010).

What's going on when developers comprehend code? Usually, developers are trying to answer questions about code that help them build larger models of how a program works. Because program comprehension is hard, they avoid it when they can, relying on explanations from other developers rather than trying to build precise models of how a program works on their own (Roehm et al. 2012). When they do try to comprehend code, several studies have catalogued the general questions that developers must be able to answer in order to understand programs (Sillito et al. 2006, LaToza & Myers 2010). Here are dozens of common questions that developers ask:

Which type represents this domain concept or this UI element or action?
Where in the code is the text in this error message or UI element?
Where is there any code involved in the implementation of this behavior?
Is there an entity named something like this in that unit (for example in a project, package, or class)?
What are the parts of this type?
Which types is this type a part of?
Where does this type fit in the type hierarchy?
Does this type have any siblings in the type hierarchy?
Where is this field declared in the type hierarchy?
Who implements this interface or these abstract methods?
Where is this method called or type referenced?
When during the execution is this method called?
Where are instances of this class created?
Where is this variable or data structure being accessed?
What data can we access from this object?
What does the declaration or definition of this look like?
What are the arguments to this function?
What are the values of these arguments at runtime?
What data is being modified in this code?
How are instances of these types created and assembled?
How are these types or objects related?
How is this feature or concern (object ownership, UI control, etc.) implemented?
What in this structure distinguishes these cases?
What is the "correct" way to use or access this data structure?
How does this data structure look at runtime?
How can data be passed to (or accessed at) this point in the code?
How is control getting (from here to) here?
Why isn't control reaching this point in the code?
Which execution path is being taken in this case?
Under what circumstances is this method called or exception thrown?
What parts of this data structure are accessed in this code?
How does the system behavior vary over these types or cases?
What are the differences between these files or types?
What is the difference between these similar parts of the code (e.g., between sets of methods)?
What is the mapping between these UI types and these model types?
How can we know this object has been created and initialized correctly?

If you think about the diversity of questions in this list, you can see why program comprehension requires expertise. You not only need to understand programming languages quite well, but you also need to have strategies for answering all of the questions above (and more) quickly, effectively, and accurately.

So how do developers go about answering these questions? Studies comparing experts and novices show that experts use prior knowledge about architecture, design patterns, and the problem domain a program is built for to decide which questions to ask and how to answer them, whereas novices rely on surface features of code, which leads them to spend considerable time reading code that is irrelevant to a question (von Mayrhauser & Vans 1994, LaToza et al. 2007). Reading and comprehending source code is also fundamentally different from reading and comprehending natural language (Binkley et al. 2013); what experts are ultimately doing is reasoning about dependencies between parts of a program (Weiser 1981). Dependencies include data dependencies (where a variable is used to compute something, what modifies a data structure, how data flows through a program, etc.) and control dependencies (which components call which functions, which events can trigger a function to be called, how a function is reached, etc.). All of the questions above fundamentally concern different kinds of data and control dependencies. In fact, theories of how developers navigate code by following these dependencies are highly predictive of what information a developer will seek next (Fleming et al. 2013), suggesting that expert behavior is highly procedural. This work, along with work explicitly investigating the role of identifier names (Lawrie et al. 2006), finds that names are critical to facilitating higher-level comprehension of program behavior.
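To make these two kinds of dependencies concrete, here is a minimal Java sketch (all names are hypothetical, not drawn from any of the studies above) annotated with the dependencies a developer would trace when answering questions like "what data is being modified in this code?" or "why isn't control reaching this point?":

    public class Checkout {
        // The value of discountRate flows into total(), so total() has a
        // data dependency on every place this field is assigned.
        private double discountRate = 0.10;

        public double total(double subtotal, boolean isMember) {
            double discount = 0.0;
            // Control dependency: the assignment below executes only when
            // isMember is true, so the result depends on every piece of
            // code that decides membership.
            if (isMember) {
                // Data dependency: discount is computed from subtotal and
                // discountRate.
                discount = subtotal * discountRate;
            }
            // Data dependency: the return value is computed from subtotal
            // and discount; a program slice on this line would include all
            // of the statements above.
            return subtotal - discount;
        }
    }

A program slicing tool in the spirit of Weiser's work answers a question like "what affects this value?" by following exactly these edges backwards from a statement of interest.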

While much of program comprehension is skill, some of it is determined by design. For example, some programming languages result in programs that are more comprehensible than others. One framework, the Cognitive Dimensions of Notations (Green 1989), lays out some of the tradeoffs in programming language design that produce these differences in comprehensibility. For example, one of the dimensions in the framework is consistency, which refers to how much of a notation can be guessed based on an initial understanding of a language. JavaScript is a low-consistency language because of operators like ==, which behave differently depending on the types of the left and right operands: knowing how == behaves for two Booleans doesn't tell you how it behaves when a Boolean is compared to an integer. Java is more consistent in this respect: comparing operands of incompatible types (such as a boolean and an int) with == is a compile-time error rather than a silent coercion.
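Here is a small Java sketch of that contrast (the variable names are hypothetical); the JavaScript behavior is described in comments, and the Java compiler's refusal to guess is what makes the notation easier to predict:

    public class EqualityConsistency {
        public static void main(String[] args) {
            int count = 1;
            boolean isReady = true;

            // In JavaScript, `1 == true` silently coerces the boolean to a
            // number and evaluates to true, so knowing how == behaves for
            // booleans does not predict how it behaves for mixed operands.

            // In Java, the equivalent comparison is rejected before the
            // program ever runs:
            // if (count == isReady) { }  // compile error: incomparable types

            // Comparisons between compatible operands behave predictably.
            System.out.println(count == 1);       // prints true
            System.out.println(isReady == true);  // prints true
        }
    }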

These differences in notation have real impact. Encapsulation through data structures leads to better comprehension than monolithic or purely functional styles (Woodfield et al. 1981, Bhattacharya & Neamtiu 2011). Declarative programming paradigms (like the JavaScript view framework React) have greater comprehensibility than imperative paradigms (Salvaneschi et al. 2014). Statically typed languages result in fewer defects (Ray et al. 2014), better comprehensibility because static types enable better documentation (Endrikat et al. 2014), and easier debugging (Hanenberg et al. 2013). In fact, studies of more dynamic languages like JavaScript and Smalltalk (Callaú et al. 2013) show that the dynamic features of these languages aren't used all that much anyway. All of this evidence suggests that the more you tell a compiler about what your code means (by declaring types, writing functional specifications, etc.), the more it helps other developers know what it means too.
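As a small illustration of that last point, consider how much a typed signature tells a reader before they ever look at the body. This is a hypothetical Java sketch (it assumes Java 16+ for the record syntax), not code from any of the studies above:

    import java.util.List;

    public class ReportService {
        // Hypothetical domain type, declared here so the sketch is
        // self-contained.
        public record Order(long priceInCents) {}

        // The types in this signature document what the method consumes
        // and produces, and a compiler or IDE can check and display them.
        // In a dynamically typed language, a reader would have to trace
        // callers or run the code to learn the same facts.
        public long totalInCents(List<Order> orders) {
            long total = 0;
            for (Order order : orders) {
                total += order.priceInCents();
            }
            return total;
        }
    }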

Code editors, development environments, and program comprehension tools can also be helpful. Early evidence showed that simple features like syntax highlighting and careful typographic choices can improve the speed of program comprehension (Baecker 1988). I have also worked on several tools to support program comprehension, including the Whyline, which automates many of the more challenging aspects of navigating dependencies in code and visualizes them (Ko & Myers 2009).

The path from novice to expert in program comprehension involves understanding programming language semantics exceedingly well and reading a lot of code, design patterns, and architectures. Anticipate that as you develop these skills, it will take time to build a robust understanding of what a program is doing, slowing down your writing, testing, and debugging.

Further reading

R. Baecker. 1988. Enhancing program readability and comprehensibility with tools for program visualization. In Proceedings of the 10th international conference on Software engineering (ICSE '88). IEEE Computer Society Press, Los Alamitos, CA, USA, 356-366.

Pamela Bhattacharya and Iulian Neamtiu. 2011. Assessing programming language impact on development and maintenance: a study on C and C++. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 171-180.

Binkley, D., Davis, M., Lawrie, D., Maletic, J. I., Morrell, C., & Sharif, B. (2013). The impact of identifier style on effort and comprehension. Empirical Software Engineering, 18(2), 219-276.

Callaú, O., Robbes, R., Tanter, É., & Röthlisberger, D. (2013). How (and why) developers use the dynamic features of programming languages: the case of Smalltalk. Empirical Software Engineering, 18(6), 1156-1194.

Barthélémy Dagenais, Harold Ossher, Rachel K. E. Bellamy, Martin P. Robillard, and Jacqueline P. de Vries. 2010. Moving into a new software project landscape. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 275-284.

Stefan Endrikat, Stefan Hanenberg, Romain Robbes, and Andreas Stefik. 2014. How do API documentation and static typing affect API usability? In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 632-642.

Green, T. R. (1989). Cognitive dimensions of notations. People and computers V, 443-460.

Fleming, S. D., Scaffidi, C., Piorkowski, D., Burnett, M., Bellamy, R., Lawrance, J., & Kwan, I. (2013). An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks. ACM Transactions on Software Engineering and Methodology (TOSEM), 22(2), 14.

Stefan Hanenberg, Sebastian Kleinschmager, Romain Robbes, Éric Tanter, Andreas Stefik. An empirical study on the impact of static typing on software maintainability. Empirical Software Engineering. 2013.

Ko, A. J., & Myers, B. A. (2009, April). Finding causes of program output with the Java Whyline. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1569-1578).

Thomas D. LaToza and Brad A. Myers. 2010. Developers ask reachability questions. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 185-194.

Thomas D. LaToza, David Garlan, James D. Herbsleb, and Brad A. Myers. 2007. Program comprehension as fact finding. In Proceedings of the 6th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering (ESEC-FSE '07). ACM, New York, NY, USA, 361-370.

Lawrie, D., Morrell, C., Feild, H., & Binkley, D. (2006, June). What's in a name? a study of identifiers. IEEE International Conference on Program Comprehension, 3-12.

Walid Maalej, Rebecca Tiarks, Tobias Roehm, and Rainer Koschke. 2014. On the Comprehension of Program Comprehension. ACM Transactions on Software Engineering and Methodology. 23, 4, Article 31 (September 2014), 37 pages.

A. von Mayrhauser and A. M. Vans. 1994. Comprehension processes during large scale maintenance. In Proceedings of the 16th international conference on Software engineering (ICSE '94). IEEE Computer Society Press, Los Alamitos, CA, USA, 39-48.

Baishakhi Ray, Daryl Posnett, Vladimir Filkov, and Premkumar Devanbu. 2014. A large scale study of programming languages and code quality in GitHub. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 155-165.

Tobias Roehm, Rebecca Tiarks, Rainer Koschke, and Walid Maalej. 2012. How do professional developers comprehend software? In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 255-265.

Guido Salvaneschi, Sven Amann, Sebastian Proksch, and Mira Mezini. 2014. An empirical study on program comprehension with reactive programming. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 564-575.

Jonathan Sillito, Gail C. Murphy, and Kris De Volder. 2006. Questions programmers ask during software evolution tasks. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 23-34.

S. N. Woodfield, H. E. Dunsmore, and V. Y. Shen. 1981. The effect of modularization and comments on program comprehension. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 215-223.

Andreas Stefik and Susanna Siebert. 2013. An Empirical Investigation into Programming Language Syntax. ACM Transactions on Computing Education 13, 4, Article 19 (November 2013), 40 pages.

Yida Tao, Yingnong Dang, Tao Xie, Dongmei Zhang, and Sunghun Kim. 2012. How do software engineers understand code changes? An exploratory study in industry. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, Article 51, 11 pages.

Mark Weiser. 1981. Program slicing. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 439-449.

Podcasts

Software Engineering Daily, Language Design with Brian Kernighan.