Using programming structures to communicate

When we first learn to code, we struggle with the basic use of the programming language and just getting the program to work. As we gain more practice, we can start to explore "better" ways to write the code while it still does the same thing.

"Better" code is really all about communication with other humans, yourself in a little while, yourself later, or some other human who needs to work with the code.

When the communication is clear and efficient we get maintainable code that is a joy to work with. When the communication is obscured or missing, the code turns into an unintelligible mess.

So what means of communication can we use?

Comments

The first communication technique we tend to learn is the use of comments, to give ourselves hints about how the code works so we can understand it later. Since we are initially still struggling with basic syntax, the comments tend to be something like the following (I have actually seen this in production code!):

// Add 1 to i
i = i + 1;

A few years down the line, the code might look something like this:

// Add 1 to i
i = i + 2;

An incorrect comment is worse than no comment. In this case it is easy to ignore, but when a comment more usefully explains the algorithm, it becomes downright devastating if it no longer applies and may send you on an hours-long wild goose chase.

So we learn the hard way that comments are untrustworthy and we need to find other ways to explain the code.

Comments can still be very useful to describe things that are NOT in the code. A todo-note, for example, explains what code has not been written even though it might need to be written in the future.

Code disposition

The order the code is written in can make a huge difference to ease of understanding. If the steps to achieve each important part of the algorithm are grouped together, the code can be understood piecemeal. It is even better when a group of connected parts can be extracted into its own named procedure (method, function) or bound to a name that explains what it represents.

The optimal way to order the code is the one that minimizes the number of things the reader needs to keep in mind at each point.
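
A small sketch of what such grouping can look like (the report-building scenario and all names are hypothetical): each step is extracted into a named method, so the top level reads as a summary.

import java.util.List;

class Report {
    // Top level: reads as a summary of the algorithm, one named step at a time.
    static String build(List<String> rawLines) {
        List<Integer> amounts = parseAmounts(rawLines);
        int total = sumPositive(amounts);
        return formatTotal(total);
    }

    // Each extracted method groups the connected parts of one step.
    private static List<Integer> parseAmounts(List<String> rawLines) {
        return rawLines.stream().map(Integer::parseInt).toList();
    }

    private static int sumPositive(List<Integer> amounts) {
        return amounts.stream().filter(a -> a > 0).mapToInt(Integer::intValue).sum();
    }

    private static String formatTotal(int total) {
        return "Total: " + total;
    }
}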

Layers of abstraction

Essentially, we create a new language when we create named procedures and values. If we start with the programming language itself as language L0, then we form language L1 from the values and procedures defined in L0. After that, we can form language L2 by combining things from L1 into more complex procedures. And so on. Finally, the program itself is expressed in the top level language we created, which serves as an explanation of what the program does without getting bogged down in too many details. 
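
A minimal sketch of this layering, with hypothetical names and Java as L0: L1 defines small concepts directly in the base language, and L2 expresses the program in terms of L1.

import java.time.DayOfWeek;

class Payroll {
    // L1: concepts defined directly in the base language (L0)
    static boolean isWeekend(DayOfWeek day) {
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }

    static double weekendRate(double baseRate) {
        return baseRate * 1.5;
    }

    // L2: a concept expressed in terms of L1 rather than in raw Java
    static double payFor(DayOfWeek day, double hours, double baseRate) {
        double rate = isWeekend(day) ? weekendRate(baseRate) : baseRate;
        return rate * hours;
    }
}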

More loosely speaking, it is often said that you should break out a named entity when the "level of abstraction" changes, which happens when "the code goes from explaining what it is doing to how to do it". That is not a very clear explanation, but rephrased in terms of the layered languages above, it means that when you dip down into concepts from a lower language layer, you should probably create a new concept on the layer you are working in.

Of course things are never that ideal. A more practical way might be to consider what things need to be kept in mind at each point in the program. If the reader needs to remember more than about 4 things at any one time, there is a risk of cognitive overload, which makes it substantially more difficult to understand. Grouping some concepts together into new richer concepts may help. Another way to look at it might be that code should take up space in proportion to its importance.

Data structures

Layering of concepts obviously also applies to data: at one level it might make sense to talk about an x-coordinate and a y-coordinate, while at a higher level we would talk about a point, and at higher levels still we might work with lines and rectangles. This is related to types, but types have larger implications and will be discussed separately.
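
A small sketch of that layering for data (the types are hypothetical): coordinates are grouped into a point, and points into a rectangle, so the higher layers never mention x and y directly.

// Layer 1: a point groups the x- and y-coordinates
record Point(double x, double y) {}

// Layer 2: a rectangle is expressed in terms of points, not coordinates
record Rectangle(Point corner1, Point corner2) {
    double width()  { return Math.abs(corner2.x() - corner1.x()); }
    double height() { return Math.abs(corner2.y() - corner1.y()); }
    double area()   { return width() * height(); }
}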

Beyond the pure representation of data, there is another aspect of structuring data: done well, it both simplifies and explains the program. In fact, the easiest way to understand any code is to start by looking at the data structures. This was expressed in 1975 as:

Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious. - Fred Brooks, The Mythical Man-month

And repeated in more modern form in 1997:

Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious. - Eric S. Raymond, The Cathedral and the Bazaar

When designing an algorithm, you need to make decisions about how you will represent relationships between the data and how you will traverse the data elements. If there is an ordering between the elements and there can be duplicates, use a list. If there should not be duplicates, use a set. Duplicates without order, a bag. A queue implies elements on the same level, a stack implies a hierarchy. A tree can be arranged in ways that enable efficient searching. If elements can be associated with a consecutive numeric index, an array affords immediate access; for other associations a map is your friend, while for checking existence we're back to a set.
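
A few of these choices sketched in Java (the collections are from java.util; the scenarios are made up):

import java.util.*;

class StructureChoices {
    public static void main(String[] args) {
        // Ordered, duplicates allowed: a list
        List<String> playlist = List.of("intro", "verse", "verse", "outro");

        // No duplicates, good for existence checks: a set
        Set<String> seenUsers = new HashSet<>(List.of("alice", "bob"));
        boolean known = seenUsers.contains("alice");

        // Consecutive numeric index: an array gives immediate access
        int[] dailyTotals = new int[365];
        int day42 = dailyTotals[41];

        // Other associations: a map
        Map<String, Integer> ageByName = Map.of("alice", 42, "bob", 37);

        // Same-level elements processed in arrival order: a queue
        Queue<String> jobs = new ArrayDeque<>(List.of("first", "second"));

        // A hierarchy processed most-recent-first: a stack (Deque used as one)
        Deque<String> callStack = new ArrayDeque<>();
        callStack.push("outer");
        callStack.push("inner");

        System.out.println(playlist.size() + " " + known + " " + day42 + " "
                + ageByName.get("alice") + " " + jobs.peek() + " " + callStack.peek());
    }
}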

Whether to recurse or iterate is related to data structures and can communicate distinctions similar to queue versus stack, although the two are equivalent: iteration can be mechanically transformed into recursion and vice versa.
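
A tiny illustration of that equivalence (hypothetical example): summing an array iteratively and recursively.

class Sums {
    // Iteration: an explicit loop with an accumulator
    static int sumIterative(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    // Recursion: the same computation expressed over the structure of the data
    static int sumRecursive(int[] xs, int from) {
        return from == xs.length ? 0 : xs[from] + sumRecursive(xs, from + 1);
    }
}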

One important aspect of data structures is whether they are immutable or not. An immutable data structure is just one thing to keep in mind; with a mutable one, every separate part that can be changed is something that needs to be kept in mind. The effects of mutability can be mitigated by clear ownership of the data.

Data ownership

Which part of the code is responsible for the data is important to communicate, in order to keep track of when the data might be changed (if mutable), when the data is no longer needed (so that memory can be reclaimed), and, if the memory has been reclaimed, that the data is no longer accessible.

Some languages side-step the memory reclamation issue by automatically reclaiming memory when it is no longer used, either through garbage collection or reference counting.

Rust has mechanisms to specifically track and express ownership in the code. In other languages you should probably document it.

Naming

Picking accurate names is critical to correct communication. It can also be quite difficult.

In one review of code that was supposed to compare images by comparing pixel values, closer inspection showed that the values named "pixel" were actually indexes into the image, so the program was just comparing that the indexes were equal, which is always true.

Names are known to the computer, so in most languages references to names will be checked and you will get a clear error when you reference an undefined name. Often this check happens before the code even starts running. This checkability can be exploited when you need to revise every usage of a particular value: just change the name and the computer will remind you of all the references you haven't revised yet. This property also applies to types, to be discussed later.

Language differences

There are of course differences in exactly how things are expressed in different languages, especially between different programming paradigms, but by and large the same kind of things will be communicated in similar ways.

In a procedural language, you have both nouns (data) and verbs (procedures) and you could say that to calculate the variance of a collection of numbers you would first subtract the mean from each element, then add together the square of every element, count the number of elements and then divide the sum by the count of the elements.

In an object-oriented language, an object has state and can respond to requests, so you would ask the numbers to subtract the mean and to square themselves, then ask the collection to add up all the elements, ask the collection how many elements there are and then ask the resulting sum to divide itself by the count.

In functional languages, even functions are things, so there are no verbs. You would have to say that the variance is the quotient between the sum of the squares of the differences between the elements and the mean, and the count of the elements. That can be a bit of a mouthful, so a more understandable grouping and ordering is often achieved with let-bindings, for example to define the squares, their sum and the count separately before obtaining the quotient. Another option is a pipeline (or a threading macro) of transforms that turns the list into a list of differences, then a list of squared differences, then into a sum and a count, and then divides them.
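
As a rough sketch of two of these styles in a single language (Java here; population variance, dividing by the count):

import java.util.List;

class Variance {
    // Procedural style: explicit steps that mutate accumulators
    static double varianceLoop(List<Double> numbers) {
        double sum = 0;
        for (double n : numbers) sum += n;
        double mean = sum / numbers.size();

        double squaredDiffs = 0;
        for (double n : numbers) squaredDiffs += (n - mean) * (n - mean);
        return squaredDiffs / numbers.size();
    }

    // Pipeline style: a chain of transforms, close to the functional description
    static double variancePipeline(List<Double> numbers) {
        double mean = numbers.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        return numbers.stream()
                .mapToDouble(n -> (n - mean) * (n - mean))
                .average()
                .orElse(0);
    }
}

Both produce the same number; the difference is in what the reader has to keep in mind while following the steps.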

A note on "declarative"

A program is said to be "declarative" when it specifies what is to be done, rather than how to do it. That's exactly the idea of the "levels of abstraction" mentioned above! Every level of the program should be as declarative as possible for most efficient communication.

Languages such as SQL and HTML are considered to be declarative. Some people want to claim that functional languages are generally more declarative and it is indeed difficult to explain how to do something when you only have nouns at your disposal. But with more than a few "of the" in a row, it really looks a lot more like a list of instructions (or, I suppose, how to define things). On the other hand, no "declarative" claims are made about OO, even though objects were created to be the most declarative way of modelling things in the real world.

Choosing a paradigm

In most modern languages you can choose between procedural, functional and object-oriented modes of expression. The trick is to determine in which of these ways each separate aspect of your program is best described.

As a guide, I like to think of a triangle of code styles as follows:

  1. At the top of the triangle you are working procedurally and imperatively with general (possibly abstract) datatypes, finally arriving at a result that you can interpret satisfactorily.

  2. When you focus on what your data IS, and create datatypes that are more and more specific to your program, you move down the left side into functional programming, with a clearly-defined specific input and a function call that produces a clearly-defined specific output.

  3. If you instead focus on what your objects DO, and create objects that do things more and more specific to your program, you move down the right side into OOP. Messages/calls produce actions and reactions until you get your answer (part of which may be encoded in the state of the system).

Which of these styles corresponds best to the way the desired functionality is most naturally described?

Automated tests

Fundamentally, a test, when run, communicates whether the specified conditions still hold or not. A failing test is, or rather should be, an alarm bell that the code no longer does what it is supposed to do.

Unfortunately, many tests "cry wolf" and merely signal that the code changed, which you probably already knew because you changed it.

Requirements can be communicated through tests, even when they are not run, if the tests are written to describe the requirements.

If you write the test before you write the code, the test is more likely to reflect the requirements and can tell you when your code is done. Just formulating the test can help clearly communicate to yourself what needs to be done. You should already have an idea of that, so why not try to make it clearer?
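
A sketch of a test written to state a requirement rather than the mechanics of the implementation (JUnit 5 style; the shopping-cart scenario, the Cart class and the shipping rule are all made up):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal stand-in implementation so the example is self-contained.
class Cart {
    private double subtotal = 0;
    void add(String item, double price) { subtotal += price; }
    double total() { return subtotal < 50 ? subtotal + 4.95 : subtotal; }
}

class CartTest {
    // The test name states the requirement, not how the code meets it.
    @Test
    void ordersUnderTheFreeShippingLimitIncludeAShippingFee() {
        Cart cart = new Cart();
        cart.add("book", 20.00);

        assertEquals(20.00 + 4.95, cart.total(), 0.001);
    }
}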

In the layered language view, tests are one layer above your code and can therefore be used for communication about the code. Thinking about the layers might also be helpful to avoid digging too deep when making assertions.

One thing to consider is that writing and maintaining tests is an extra cost and it may not be worth it for trivially analyzable code.

For further thoughts about tests, I wrote an article on writing meaningful unit tests.

Contracts

Essentially, a contract is about the guarantees made by the code.

If you hold up your end of the deal when calling a procedure (method, function) by obeying the stipulated preconditions, the procedure guarantees to hold up its end of the deal and fulfil all the postconditions. If, on the other hand, the preconditions are not obeyed, anything can happen.

Other types of contracts concern "invariants": things that are guaranteed to be true both before a call is made (or a loop iteration is entered) and after the call (or loop iteration) is completed.

Thinking about contracts can be very beneficial to making your code correct. According to the Design by Contract method, you should do so before you start writing the actual code. As with tests, you probably have an idea about the contracts already, and formalizing them can clarify your thinking as well as communicate it to a future maintainer.

The documentation (which you, like me, probably didn't write) for every piece of the program should contain information about the contracts; that is really the whole point of documentation. But documentation, like comments, is something other than the code and must be kept in mind, so it is preferable to enforce contracts in code for automatic reminders. A very successful example is runtime bounds checking on array accesses, which catches large numbers of bugs and prevents malicious attacks.

Some languages have built-in support for specifying contracts, but it is quite possible to write contract-checking code yourself. The most important thing is to check preconditions, because otherwise you have no clue what might happen. Postcondition violations are more easily spotted as bugs, and postconditions tend to be partly covered by tests; they also tend to be more expensive to compute. A common strategy is to turn off contract checking in production, with the rationalisation that the most common problems will have been discovered in staging and development environments.
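
A sketch of hand-written contract checks (the account example is hypothetical): the precondition is always checked, while the postcondition uses assert so it can be disabled in production.

class Account {
    private long balanceCents;

    Account(long openingBalanceCents) { this.balanceCents = openingBalanceCents; }

    // Precondition: the amount is positive and does not exceed the balance.
    // Postcondition: the balance is reduced by exactly that amount.
    void withdraw(long amountCents) {
        if (amountCents <= 0)
            throw new IllegalArgumentException("amount must be positive: " + amountCents);
        if (amountCents > balanceCents)
            throw new IllegalStateException("insufficient balance");

        long before = balanceCents;
        balanceCents -= amountCents;

        // Postcondition check; 'assert' is skipped unless the JVM runs with -ea.
        assert balanceCents == before - amountCents;
    }
}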

A failing contract is a serious condition signifying that you don't really know what's going on. The safest way to handle it is to exit the program as quickly as possible. If you need to keep other things working, having a monitor process automatically restart your program is useful. Don't forget to make sure some human becomes painfully aware of what happened, if you ever want it to be fixed.

Types

Types in programming languages can generate almost religious discussions. Type systems also have deep connections to mathematical and logic theory. In practice, the use of types in programming languages is all about communication.

A piece of data can be classified as belonging to a type. Different pieces of data of the same type are interchangeable because they have the same characteristics, whatever that means. Data of different types should not be confused with each other. Naming and typing can be used in somewhat overlapping ways for this information, depending on what's available in the language.

The use of types can help to show which pieces of a program are related to each other and how the data flows. This information can then be reported back to you by the computer when you make changes that don't match up. Here's an article about using types to help refactor a large program.

Specifying types on input parameters and return values has been shown to be beneficial as a form of documentation. These specified types also capture aspects of the pre- and postconditions for the procedure/function/method.
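
A minimal illustration (the Order type and totalValue function are invented): the parameter and return types alone tell a reader part of the precondition (a list of orders goes in) and part of the postcondition (a monetary total comes out), and the compiler enforces it at every call site.

import java.math.BigDecimal;
import java.util.List;

record Order(BigDecimal value) {}

class Totals {
    // The signature documents what goes in and what comes out, without any prose.
    static BigDecimal totalValue(List<Order> orders) {
        return orders.stream().map(Order::value).reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}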

TypeScript contains interesting transformations that can be performed on types to express how one type corresponds to another.

Type inference is useful for some of these things, but not as effective for documentation and contract specification.

A case study comparing an algorithm in Ada vs C shows the effectiveness both of being able to specify many different types and of having a language that applies strong typing. But as the article points out, you have to actually do some work to gain the benefits:

"During the design of the software system, the developer must select types that best model the data in the system and that are appropriate for the algorithms to be used. If a program is written with every variable declared as Integer, Float or Character, it will not be taking advantage of the capabilities of Ada."

While types written in comments, or in a language that mostly ignores or automatically converts types, do communicate some things, the biggest value comes in having them strongly enforced by the computer. 

Since it is often easier to determine the resulting type of an operation than to actually perform the operation, types can to some extent be statically checked before the program runs, in less time than it takes to run the program, which can be helpful.

Unfortunately, the definition and manipulation of types forms a separate language from your programming language and it can become very complex, which can impede communication. Once the type-specification language becomes Turing complete, which happens all too easily, it is no longer possible to generally guarantee or predict that the type-check ever finishes.

Some things like array bounds checking are generally impossible to do at compile time.

Like with contracts, a failing type check is a serious condition where the program is not doing what you think it is. Some languages gratuitously try to convert types for you, which might make sense in some contexts or to some degree, but can make bugs very difficult to find. Note that the idea of truthiness is an automatic type conversion, as is allowing equality comparisons of different types of values.

Like with tests, it is possible that having a type system that is too complex or prescriptive may not be worth the overhead in comparison to the benefit received. 

An example of a very successful use of types to ease the usability of an API is, in my opinion, AssertJ. The assertThat function returns an object type with only the assertion methods relevant to the input type.
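
For example (the values here are made up, but assertThat and these assertion methods are part of AssertJ):

import static org.assertj.core.api.Assertions.assertThat;
import java.util.List;

class AssertJExamples {
    void typeGuidedAssertions() {
        // assertThat(String) offers string-specific assertions...
        assertThat("communication").startsWith("comm").contains("unic");

        // ...while assertThat(List) offers collection-specific ones;
        // the string assertions are simply not available here.
        assertThat(List.of(1, 2, 3)).hasSize(3).contains(2);
    }
}

Your editor's auto-completion then only suggests assertions that make sense for the value you are checking.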

Modularity and information hiding

What you don't say is sometimes more important than what you say. Modularity is the ability to hide the complex "machinery" of how things are done and expose only the ability to use what it does. As long as the exposed interface and contract remain the same, you can change whatever you like inside, or even replace the whole thing with an equivalent that obeys the same interface and contract. For tests, the replacement doesn't even need to obey the contract fully, just emulate it well enough for the particular test.

Information hiding comes in many forms. The one most commonly mentioned is OO encapsulation in objects/classes. Exposing a property via getters and setters hides the fact that it's a field. Not even having getters and setters, and just performing services for the caller, hides even more information. Another way to hide information is to define helper functions inside the function that uses them (in Java you can even define a class inside a method).
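
A sketch of those degrees of hiding (the thermostat example is invented):

// Exposes the representation: every caller depends on the field being a double.
class Thermostat1 {
    public double celsius;
}

// Hides the field, but still exposes that a temperature value is stored.
class Thermostat2 {
    private double celsius;
    public double getCelsius() { return celsius; }
    public void setCelsius(double value) { celsius = value; }
}

// Only performs a service; callers cannot tell how (or whether) the value is stored.
class Thermostat3 {
    private double celsius;
    public boolean shouldHeat(double desiredCelsius) { return celsius < desiredCelsius; }
}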

Miscommunication

The clearer the communication, the easier the code will be to maintain. There is a tension between code that is easy to read and code that is easy to write and unfortunately there is often too much focus on the writing.

Overspecification

Tests often suffer from specifying how the code works rather than what it is supposed to do. Faced with a test failure, it is then difficult to know whether to fix the test or the code.

Passing a large compound data structure (or "entity") into a function that only uses one or two of its fields definitely makes the function harder to reuse. Code reuse is generally somewhat overrated, however, so the question is what communicates the intent best: should that function know about the structure of the entity, or is it the surrounding code's job to deconstruct it first?
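
A sketch of the two options (the Customer entity and greeting function are invented):

record Customer(String name, String email, String address) {}

class Greetings {
    // Option 1: the function knows about the whole entity even though it only
    // needs the name, so it is hard to reuse where no Customer exists.
    static String greet(Customer customer) {
        return "Hello, " + customer.name() + "!";
    }

    // Option 2: the surrounding code deconstructs the entity; the function
    // states exactly what it needs.
    static String greet(String name) {
        return "Hello, " + name + "!";
    }
}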

ORMs really make a mess by working with huge entities. Most of the time you are just interested in a few pieces of information, like a list of friends' names; you don't need the whole social graph. Treating the database as representing relations between facts, rather than representing entities, will make things much more manageable. Embrace SQL, and why not 6NF along with it?

Opaque languages

Decorators and annotations may certainly be declarative ways of adding functionality, but there is no practical way to find out what they do without learning their language. Unfortunately, that language is often missing functionality as well as being underdocumented. Frameworks fall in a similar category, making easy things trivial and difficult things impossible, because you have to fight with the framework.

Domain specific languages are sometimes created with the idea that the domain experts should be able to use them. Again they tend to be underpowered and underdocumented and, what's worse, too difficult for the domain experts anyway, so the programmers now end up coding in the DSL as well as maintaining it.

Note the difference from the layered languages formed by your own code, where you can easily go down a level to understand exactly how each layer works, and where you can easily refactor and reshape things as needed.

A related difficulty can arise when a section of code has too many dependencies, perhaps only using a small part of each. The "lower-level" language formed is just too large and unwieldy. A solution could be to organize a module that exports only what you need and encapsulates the other dependencies.

Hidden dependencies and unknown assumptions

Hidden dependencies arise when two different pieces of code depend on the same knowledge, for example an assumption that all files are in a specific directory. The remedy is to make sure there is a single source of truth for every fact needed by the program.
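
A minimal sketch (the paths and classes are invented): both pieces of code refer to one definition instead of each assuming where the files live.

import java.nio.file.Path;

class Locations {
    // Single source of truth: if the directory moves, it changes here and nowhere else.
    static final Path DATA_DIR = Path.of("/var/lib/myapp/data");
}

class Importer {
    Path incoming() { return Locations.DATA_DIR.resolve("incoming"); }
}

class Archiver {
    Path archive() { return Locations.DATA_DIR.resolve("archive"); }
}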

The assumptions made in a module may not always be explicitly stated (and the programmer may even be unaware that an assumption was made), which causes problems when the user of the module has a different set of assumptions (or facts) to deal with.

An example of insufficient communication of assumptions occurred in the Julia language, where functionality can often be magically shared between packages unknown to each other. Unfortunately, this can also cause obscure bugs and problems when there are unknown assumptions to deal with. In particular, arrays used to be indexed from 1 to the length of the array, until someone figured out how to make offset arrays that could start anywhere, which broke the assumptions of most other packages. (I still think Julia is worth trying, by the way.)

Implementation reuse and pre-emptive generalization

Sometimes two pieces of code can be identical and you feel the urge to merge them together. Unless they are meant to do exactly the same thing in all future versions of the program, resist the urge. A useful guide: if it is difficult to find a name that fits all the usages well, resist the urge, because then they are logically different things and they will evolve in separate directions. Communicate that logical difference; don't make a future maintainer guess which usage was which when trying to tear them apart.

Related to that is when you think a particular piece of functionality you are writing could be much more generally applicable. Resist the urge, because the generalized expression is likely to be more complex and less obviously applicable to the current use. It could also be that the code later needs to be generalized in a slightly different direction. Don't make a future maintainer have to work out that your generalization is not actually used generally and that they can, and must, undo it.

Inconsistent usage

A concrete example is the use of the word final on Java classes, which prevents the creation of subclasses. The standard String class is marked final so that it can be used for secure credentials without risk of sneaky subclasses stealing the data. In that context final means "do not override, bad things will happen if you do".

Some people feel that you should defensively mark a class final if you haven't given any thought to how it could be subclassed. This is a much weaker and less useful communication, and you are also removing the user's ability to replace your class with a different implementation.

The worst case is when you have programmers of both schools in a codebase, so the word final (or the absence of the word final) carries no discernible meaning at all.

Consistency is key to maximizing communication.

Prescriptive practices

Any practice that is mandated obviously has no communicative value.

Common mandates are to create interfaces corresponding to every class, or to always organize the code in specific layers or groups of files.

It needs to be carefully considered whether the practice itself has enough value to outweigh the loss of communication.

Conclusion

There are many ways to use the code to communicate things about the code beyond it merely working. The things you most want to communicate are the requirements and assumptions, preferably set up so the computer automatically reminds you of them. I hope this is an interesting and useful perspective, and I think code will be better if it is written deliberately for the purpose of communicating.
