The Building Blocks of Software

One of my favorite programming books is Grokking Simplicity by Eric Normand. A key concept from the book is the distinction between actions, calculations, and data. These are the building blocks of software. Understand them, learn to wield them, and your programming powers will multiply.

In this post, I share my musings on these building blocks. I’ll compare and contrast them from several points of view. I’m experimenting with a free-flowing writing style here, more stream of consciousness than my usual fare. Let me know if you like it or hate it.


Let’s start with a summary of how the book explains actions, calculations, and data:

  • Action — Code that depends on when you call it and/or how many times you call it. For example, sending an email.
  • Calculation — Functions that depend only on their input. For example, adding two numbers. Adding 2 + 2 always returns 4. It doesn’t matter when you call it or how many times you call it.
  • Data — Recorded facts about events. It’s inert (you don’t run it like code) and transparent (unlike code, you don’t have to run it to see what it is). For example, a log of emails sent.
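
In code, the three building blocks might look like this minimal sketch (`sendEmail` is a hypothetical stand-in for a real mailer, passed in as a parameter):

```javascript
// Action: depends on when and how many times you call it.
// (sendEmail is a hypothetical stand-in for a real email client.)
function sendWelcomeEmail(address, sendEmail) {
  sendEmail(address, "Welcome!"); // side effect: the outside world changes
}

// Calculation: depends only on its inputs. 2 + 2 is always 4.
function add(a, b) {
  return a + b;
}

// Data: inert, recorded facts about events. It doesn't run.
const emailLog = [
  { to: "ada@example.com", subject: "Welcome!", sentAt: "2024-01-15T09:30:00Z" },
];
```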

In other terms, actions are “impure” code, calculations are “pure” code, and data is inert. Actions are impure because they cause “side effects.” A side effect is an impact on the world, such as sending an email, writing to a database, or modifying the value of a variable. Pure code has no side effects. I like Eric’s terminology of actions and calculations, so I’ll keep using that.

Data and calculations have a correspondence. The mathematical definition of a function is a set of ordered pairs. The ordered pairs represent a mapping from a domain to a range. A set of ordered pairs can be considered a form of “data”. Any function can theoretically be replaced by this map of inputs to outputs (though of course that set may be infinite). It’s just that functions are a more compact way of expressing that correspondence.
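
For a finite slice of the domain, you can literally swap a function for its set of ordered pairs, here expressed as a Map:

```javascript
// A calculation...
const square = (n) => n * n;

// ...and the same mapping expressed as data (ordered pairs),
// limited here to a finite slice of the domain.
const squareTable = new Map([
  [1, 1],
  [2, 4],
  [3, 9],
]);

console.assert(square(3) === squareTable.get(3)); // both map 3 to 9
```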

Data can handle rough terrain. A function tries to compress the mapping into a set of rules. But sometimes that mapping has special cases and rough edges. Sometimes it is not neatly compressible. That’s where you get conditionals or more elaborate code to account for those cases. The real world doesn’t often entirely fit into elegant rules. Data can be adjusted on a case-by-case basis. The mapping is malleable. Data can exist on a spectrum between purely random and entirely generated by rules.

There are two definitions of data that I use:

  1. Facts about events.
  2. A sequence of symbols given meaning by acts of interpretation.

The first is a subset of the second. The second definition is more abstract. It allows us to say that data is code and code is data. Code is a subset of data. It’s a subset that we engineer for ease of human entry and for specification of computer behavior.

“Facts about events” gives a stronger distinction between code and data. Code, while data in the abstract sense (#2), does not quite fit the definition of “facts about events” (#1).

We can think of data as facts about events. We can think of data as specifying behavior. We can think of it as a programming language. Different perspectives give different insights into how to design software.

Data over calculations over actions

Functional programmers prefer data over calculations and calculations over actions. To understand why, let’s evaluate each building block against a set of properties.

  • Testability — How easy is it to test? Does it require mocking or stubbing? Does it require a lot of setup? Does it require a lot of dependencies?
  • Understandability — How easy is it for a human operator to understand and manipulate? How much cognitive overhead does it ask for?
    • Is it deterministic? — Given the same input, does it always return the same output? Or does it depend on the time of day or some other external factor?
    • Locality of reasoning — Can you understand it in isolation? Or do you have to understand its context?
  • Safety — How error-prone is it to use? How likely is it to cause bugs? Closely related to understandability.
    • Idempotency — Does the number of times you call it matter? Does it matter when you call it?
    • Ordering — Does the order of operations matter?
    • Concurrency — Can multiple processes access it at the same time? How difficult is it to coordinate?
  • Rate of change — How often does it change?
  • Generality — How many settings can it apply to? How many contexts?
  • Composability — How easily can it be combined with other building blocks? With other code?
  • Serializability — Can we send it over the wire? Save it to a database?
  • Transparency — Can we understand what it does without running it? Can we understand it without reading the code?
  • Metaprogramming — Can we manipulate it programmatically? Can we generate it programmatically? Data, for example, lends itself to this.

Are generality and composability the same thing? Not quite. I think they’re closely related in that when you write more general code, it has a greater chance of connecting to other code. Because it’s applicable in more contexts than specific code. However, the definitions of generality and composability are a bit different. Generality is about the applicability of code in multiple contexts/worlds. It’s code that doesn’t tie itself to the mast of specific accidental complexity. Composability is about combining components to form more complex components. You take two behaviors and connect them to each other to form a composite behavior. Then you can take that composite behavior and connect it to more behaviors to form even more complex behaviors.

We often combine data to form more complex data. For example, concatenating strings. Or adding properties to a JSON object. Pushing an item on an array. Well, function composition is like that, except for behavior.
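
Side by side, combining data and composing functions look strikingly similar (a minimal sketch):

```javascript
// Combining data: simple pieces form more complex data.
const fullName = "Ada" + " " + "Lovelace";   // concatenating strings
const user = { name: fullName };
const admin = { ...user, role: "admin" };    // adding a property
const ids = [1, 2];
ids.push(3);                                 // pushing onto an array

// Composing behavior: simple functions form more complex functions.
const compose = (f, g) => (x) => f(g(x));
const trim = (s) => s.trim();
const upper = (s) => s.toUpperCase();
const shout = compose(upper, trim);

console.assert(shout("  hi  ") === "HI");
```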

Now let’s evaluate the building blocks against these properties.

Testability

To test actions you have to mock out the components that cause side effects. Or you have to set up test systems that will receive those side effects. Because you don’t want your tests to, say, actually send emails. You don’t want to write to production databases with your tests. And so on.

You also have to get the system into your desired state for the action under test. This can be a complicated affair if you’re using Object Oriented programming — call this or that factory, create objects for dependency injection, call the right methods, and then you’ll be ready to call the specific method under test.

Testing a calculation is easier. Because if you have a pure function, you pass any dependencies in the function parameters. No massaging of external state needed.

Calculations work really well with unit tests. Because they are isolated units. If you combine calculations into larger forms, then you can test those larger forms too. As long as it’s all pure, then you should have no problem. You don’t need to set up all this external infrastructure and context to test a calculation.
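
A test for a calculation is just calls and assertions, no mocks or fixtures (a sketch with a made-up pricing function):

```javascript
// A pure calculation: applies a discount rate to a price,
// rounded to two decimal places.
function applyDiscount(price, rate) {
  return Math.round(price * (1 - rate) * 100) / 100;
}

// The "test" needs no database, no mocks, no setup, no teardown.
console.assert(applyDiscount(100, 0.2) === 80);
console.assert(applyDiscount(19.99, 0) === 19.99);
```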

To test data, well, there’s not much testing of data. Because you don’t execute data. So, testing in the sense of verifying behavior doesn’t really apply. We could think of “testing” data as validating its structure and properties. Which you should do whenever you transfer it in or out of your bounded contexts. That validation takes the form of calculations that take the data as input. Then you can test those validation functions as easily as any calculation.
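
Such a validation function is itself a calculation, so it tests like one (the record shape here is a hypothetical example):

```javascript
// A calculation that validates incoming data at the boundary.
// Returns a list of problems; an empty list means the data is valid.
function validateEmailRecord(record) {
  const errors = [];
  if (typeof record.to !== "string" || !record.to.includes("@")) {
    errors.push("missing or invalid 'to' address");
  }
  if (typeof record.sentAt !== "string") {
    errors.push("missing 'sentAt' timestamp");
  }
  return errors;
}

// Testable like any other calculation:
console.assert(validateEmailRecord({ to: "a@b.com", sentAt: "2024-01-01" }).length === 0);
console.assert(validateEmailRecord({ to: "nope" }).length === 2);
```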

Understandability

Actions are harder to understand than calculations. Because actions depend on external context. One does not simply understand a side-effecting procedure in isolation. You have to look at what it writes to or what it reads from. Does it use shared state that could change between invocations? Does it call out to an API that could go down? Does it rely on you calling other actions before or after this one?

That’s a lot of context that doesn’t need to be held in the brain when you’re looking at a calculation. With a calculation, you look at the input and the code. The behavior depends on the input only. No need to trace through the rest of the system.

So an action is not deterministic and an action does not have locality of reasoning. Locality of reasoning means that a component can be understood in isolation. This is why actions require more cognitive overhead to understand.

How easy is data to understand? Data is just a block of inert stuff. It doesn’t have connections to anything else. It doesn’t run. While calculations are independent of other context in the system, data is independent of systems. It doesn’t even need to involve software. People have been recording facts about events for thousands of years.

Safety

Three aspects of “safety” I want to look at here are idempotency, ordering dependency, and concurrency.

Actions are more dangerous than calculations because actions depend on when they’re run and/or how many times they’re run.

Special care has to be taken to design actions that are idempotent. For example, sending an email is not idempotent. Doing it twice is not the same as doing it one time. If you’re in a distributed system where messages may arrive out of order or more than once, this could be a problem.
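
Contrast a non-idempotent action with an idempotent one (a sketch; `state` stands in for any shared store):

```javascript
const state = { emailsSent: 0, status: "pending" };

// Not idempotent: each call changes the result again.
function sendEmail() {
  state.emailsSent += 1; // calling twice !== calling once
}

// Idempotent: calling it once or five times leaves the same state.
function markShipped() {
  state.status = "shipped";
}

sendEmail();
sendEmail();
console.assert(state.emailsSent === 2); // duplicate messages pile up

markShipped();
markShipped();
console.assert(state.status === "shipped"); // duplicates are harmless
```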

Actions depend on the order in which they’re called. So when saving to a database, for example, if you save row A before row B, it can be a lot different than if you save B before you save A. What if the value of B impacts the value of A? There’s an ordering dependency. You have to put in special operations in your code to help manage that.

And then for concurrency, because actions depend on things outside of themselves, those things may change. Or there might be contention or conflicts between those shared resources. If multiple commands to add and remove items from an ecommerce shopping cart arrive out of order, then the actions that operate on the shared shopping cart resource might clobber each other’s work. Similarly, with counters or row versioning, you need atomic instructions that compare and swap.
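
One common defense is to carry a version number with the shared resource and reject stale writes, compare-and-swap style (a simplified in-memory sketch; a real system would do this atomically in the database):

```javascript
// A shared resource with a version number for optimistic concurrency.
const cart = { items: [], version: 0 };

// Compare-and-swap: apply the update only if the caller saw
// the current version; otherwise reject so they can retry.
function updateCart(expectedVersion, newItems) {
  if (cart.version !== expectedVersion) return false; // someone got there first
  cart.items = newItems;
  cart.version += 1;
  return true;
}

const v = cart.version;
console.assert(updateCart(v, ["book"]) === true);  // first writer wins
console.assert(updateCart(v, ["pen"]) === false);  // stale writer is rejected
console.assert(cart.items[0] === "book");
```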

These idempotency, ordering, and concurrency problems don’t occur with calculations. Calculations run deterministically on immutable data. You can call a calculation multiple times in any order. You can run a calculation on 16 different cores at the same time if you want to. There’s nothing shared between those invocations. All it may do is chew up your CPU and memory space.

As for data, since it doesn’t execute (thus not prone to bugs, infinite loops, and such), it is the “safest” building block.

Rate of change

Rate of change is relevant to stratified design. Eric also covers stratified design in the book. The lower layers change less frequently than the higher layers. The lower layers being, for example, your programming language and standard library. At the highest level you have your specific domain requirements.

In domain-driven design, calculations capture the domain logic. Actions depend on that pure domain logic. The actions plug the domain logic into the software environment so that it can interact with humans and computers.

So the answer as to whether actions or calculations change more often… it depends. Does the surrounding infrastructure that actions depend on change more often than the domain logic? I will say that in the projects I’ve worked on, the domain logic once settled stays pretty solid. It rarely changes once you learn enough about the domain. The surrounding layers of the onion change more frequently based on new UI designs, new features, changing patterns of user interaction, and so on.

Generality

Generality means applicability in many different contexts.

Since calculations don’t depend on things outside of themselves, they tend to be more general than actions. For example, an action could be sending a network request, getting a response, and saving that response to a database. What URL are you sending this network request to? What’s the format of the request? What SQL query do you use to insert it into the database? What’s the table structure? Since it has all those dependencies, the action is usable only in that specific context.

What if you parameterize that action? Let’s say you pass in a function that saves to a database. That increases the generality. Because then you can use that anywhere you have a need to send a network request, get a response, and save to a database. However it still assumes you have a database, you have a network, and so on.
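
A sketch of that parameterization (`fetchJson` and `save` are hypothetical collaborators you inject; any fetch/save pair would do):

```javascript
// The action takes its side-effecting collaborators as parameters,
// so it no longer assumes one particular URL, format, or database.
async function fetchAndStore(url, fetchJson, save) {
  const response = await fetchJson(url);
  await save(response);
  return response;
}

// Wire it to any concrete fetcher and store, e.g.:
// await fetchAndStore("https://api.example.com/users",
//                     (u) => fetch(u).then((r) => r.json()),
//                     (data) => db.insert("users", data));
```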

How about calculations? Well, calculations tend to be more general because they don’t depend on outside context. You don’t need a database to exist. You don’t need a network to exist. You don’t even need to use the same programming language.

Imagine a calculation that takes as input the current tax rates and your tax forms. It returns the refund you may get. You could plug that into a web app that shows the result. Or not. You could save the result in a database. Or not.

Data is even more general than calculations because data can be interpreted in many different ways in many different contexts. A given fact about an event can potentially be used in any kind of software system.

Composability

Composition means combining two behaviors to form a composite behavior. You can combine that composite behavior with other behaviors. And so on. It’s how in functional programming we build up complexity in functions to handle the domain requirements.

Calculations follow the mathematical definition of composability. You compose two functions by connecting the output of one to the input of the other. Functional programming is all about composing functions. Functional programming has a whole toolkit for fitting the inputs and outputs of functions together so you can compose them. Here’s a good talk on the functional programming toolkit if you’re interested in that topic.
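
The core of that toolkit is tiny; composing is just feeding each function’s output to the next (a minimal sketch):

```javascript
// pipe: compose left-to-right; the output of each function
// feeds the input of the next.
const pipe = (...fns) => (x) => fns.reduce((acc, f) => f(acc), x);

const double = (n) => n * 2;
const inc = (n) => n + 1;
const describe = (n) => `result: ${n}`;

const run = pipe(double, inc, describe);
console.assert(run(5) === "result: 11"); // 5 -> 10 -> 11 -> "result: 11"
```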

So calculations, by nature and by design, are highly composable. Composite calculations retain the properties of their sub-calculations: Determinism, freedom from side effects, testability, and so on.

While actions don’t follow the mathematical definition of a function, you can still combine smaller actions to create more complex actions, in the spirit of composability. For example, saving to a database and then sending an email — two actions called by a larger action.

However, combining actions presents more challenges than calculations. You have to be careful of the side effects and non-determinism of actions. This means paying attention to the order of execution, handling exceptions, managing transactions, and keeping the overall state consistent. This gets more and more difficult the more actions you combine. Composite actions retain (and amplify) the properties of their sub-actions: Non-determinism, unpredictability, high cognitive load, and so on.

Serializability

By serializability I mean converting to a format like JSON or XML that can be sent over a network or saved to a database.

Data is of course the most common case of serialization. You can send it, you can save it, you can interpret it at different times and different places. Technically, given the abstract definition of data, this is all true of code too, as code is a special case of data.

You could accept code as a string over the wire. Nothing is stopping you. It could present a security risk though, if you plan on executing that code. We send code over the network to web browsers all the time (JavaScript), but browsers need a lot of security mechanisms to execute that code safely.

The interpreters of what we commonly consider data (e.g. JSON) tend to not implement unsafe operations, in contrast to the interpreters of general-purpose programming languages. So a data structure is like a less expressive but safer programming language.
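
JSON illustrates the point: its interpreter can only build inert data structures, while a full language interpreter can do anything:

```javascript
const wire = '{"event": "email_sent", "to": "ada@example.com"}';

// JSON.parse is a deliberately limited interpreter: it can only
// produce data structures, never run arbitrary operations.
const record = JSON.parse(wire);
console.assert(record.event === "email_sent");

// By contrast, eval interprets full JavaScript, so feeding it
// untrusted input from the wire is dangerous:
// eval(untrustedString); // could do anything
```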

Transparency

Transparency here means the ability to programmatically inspect a thing and take action based on it. It means your code can look at a thing and interpret it for practical purposes.

The two definitions of data are relevant here.

If data is a sequence of symbols given meaning by acts of interpretation, then all data (including code) is transparent. We use interpreters that understand the code we type. This more abstract definition of data unlocks metaprogramming. We couldn’t build software in the first place without layers of compilers and interpreters all the way down.

If data is facts about events, and code a separate thing, then only data is transparent. We record facts about events to give our programs the ability to remember things. We write that kind of code all the time. It’s harder to write code that inspects code.

Real-world software blends both these definitions.

Metaprogramming

Metaprogramming means the ability for your program to create programs or alter its own behavior. Given the three building blocks, can we generate instances of those building blocks programmatically?

Data shines here because most programming languages are designed to manipulate data. We don’t have to create another language or interpreter to work with data within our program.

If you specify parts of your system’s behavior in data, you can make the behavior dynamic and flexible, because you can programmatically manipulate the behavior by manipulating the data. For example, request handling middleware in a web server — you can change how the server responds to requests by adding or removing functions from the pipeline, even at runtime. In languages that reify function references, the pipeline of function references is “data” that the program can change.
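
A minimal sketch of a middleware pipeline held as data (the middleware names are hypothetical):

```javascript
// The pipeline is just an array of functions: data the program can edit.
const pipeline = [];

const logRequest = (req) => ({ ...req, logged: true });
const addHeader = (req) => ({ ...req, header: "X-Powered-By: data" });

pipeline.push(logRequest);
pipeline.push(addHeader);

// Handling a request means folding it through the pipeline.
function handle(req) {
  return pipeline.reduce((r, middleware) => middleware(r), req);
}

console.assert(handle({ path: "/" }).logged === true);

// Change the server's behavior at runtime by changing the data:
pipeline.splice(pipeline.indexOf(addHeader), 1); // remove a middleware
console.assert(handle({ path: "/" }).header === undefined);
```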

This is in contrast to “hardcoding” the server’s behavior in a procedure. Code itself can’t be programmatically manipulated in most languages. Most languages only reify certain elements, such as data and function references, but not the statements of the language itself.

To model behavior with data, ask yourself if you can change that behavior by changing data somewhere — a row in a database, a JSON object, a variable in memory, etc. If it requires changing the non-reified statements of a language, then the behavior is “hardcoded”.

So when you need flexibility in your system’s behavior, try modeling the behavior with data. Err on the side of a little more flexibility than needed, if it’s not too much trouble. You’ll thank yourself later.

Code (actions and calculations) is also a form of data. It can also be programmatically generated and manipulated. Compilers and transpilers do that. It’s just harder to do. High-level programming languages are optimized for human entry, not for ease of machine manipulation. Mainstream languages (e.g. JavaScript) don’t reify code the same way they reify data. You have to travel to a different layer of abstraction.

You could write a JavaScript program that generates strings of JavaScript code and then evals them to change its own behavior. Going back to the web server request pipeline, you could replace the pipeline with a function that looks at the request and generates a string of JavaScript to handle it. Then eval that string with the request in context to get a response. An interesting thought experiment, if tricky and dangerous in practice.


This has been an investigation of why to prefer data over calculations over actions. Data is inert and safe. Calculations are deterministic and decoupled. Actions do things and have strings attached. It has been my experience that the more I compose a program out of data, calculations, and actions, in that order, the faster I can ship higher-quality software to customers.