butido – a Linux Package Building Tool in Rust


Please keep in mind that I cannot go into too much detail about my company's customer-specific requirements; thus, some details here are rather vague.

Also: All statements and thoughts in this article are my own and do not necessarily represent those of my employer.


In my day job, I package FLOSS software with special adaptations, patches and customizations for enterprises, on a range of Linux distributions. As an example, we package Bash in up-to-date versions for our customers' SLES 11 installations.

Most software isn't as easy to build and package as, for example, Bash. Also, our customers have special requirements, ranging from the package format (rpm, deb, tar.gz, or even custom formats...) to installation paths, compile-time flags, special library versions used in the dependencies of a piece of software and so on...

Our main intellectual property is not the packaging mechanism, as there are many tools already available for packaging. Our IP is the knowledge of how to make libraries work together in custom environments, at special sites, with non-standard requirements, and how to package properly for the needs of our customers.

Naturally, we have tooling for the packaging. That tooling grew over years and years of changing requirements, changing target platforms (almost nobody uses IRIX/HPUX/AIX/Solaris anymore, right? At least our customers have moved from those systems to Linux completely) and changing packaging technology.

At the end of 2020, I got the opportunity to re-think the problem at hand and develop a prototype for solving it (again) with some more state-of-the-art approaches. This article documents the journey up to today.


The target audience of this article is technical users with a background in Linux and especially in Linux packaging, either with Debian/Ubuntu (.deb, APT) or RedHat/CentOS/SuSE (.rpm, YUM/ZYPPER) packages. A basic understanding of the concepts of software packages and docker is helpful, but not strictly necessary.


Finding the requirements

When I started thinking about how to re-solve the problem, I had already worked for about 1.5 years in my team. I had compiled and packaged quite a bunch of software for customers and, in the process of rethinking the approach, also improved some parts of the tooling infrastructure. So it was not that hard to find requirements. Still, patience and thoroughness were key. If I missed a critical point, the new software I wanted to develop might end up a mess.

Docker

The first big requirement was rather obvious to find. The existing tooling used docker containers as build hosts. For example, if we needed to package a software for Debian 10 Buster, we would spin up a docker container for that distribution, mount everything we needed into that container, copy sources and our custom build configuration file to the right location and then start our tool to do its job. After some automatic dependency resolution, some magic and a fair bit of compiling, a package was born. That package was then tested by us and, if found appropriate, shipped to the customer for installation.

In about 90% of the cases, we had to investigate build failures if the package was new (i.e. we had not packaged that software before). In about 75% of the cases the build just succeeded if the package was a mere update of the software, i.e. git 2.25.0 was already packaged and now we needed to package 2.26.0 – the customizations from the old version were simply copied over to the new one, and most of the time it just worked that way.

Still, the basic approach used in the old tooling was stateful as hell. Tooling, scripts, sources and artifacts were mounted to the container, resulting in state which I wanted to avoid in the new tool. Even the filesystem layout was one big pile of state, with symlinks that were required to point to certain locations or else the tooling would fail in weird (and sometimes non-obvious) ways.

I knew from years of programming (and a fair amount of system configuration experience with NixOS) that state is the devil. So my new tool should use docker for the build procedure, but inside the running container, there should be a defined state before the build, and a defined state after the build, but nothing in between. Especially if a build failed, it shouldn't clutter the filesystem with stuff that might break the next or some other build.

Package scripting

The next requirement was a bit more difficult to establish – and by that I mean to gain acceptance for in the team. The old tooling used bash scripts for build configuration. These bash scripts had a lot of variables that implicitly (at least from my point of view) told the calling tool what to do. That felt like another big pile of state to me, and I thought about ways to improve that situation without giving up on the IP that was in those bash scripts, while also providing a better way to define structured data attached to a package.

The scheme of our bash scripts was “phases” (bash functions) that were called in a certain order to do certain things. For example, there was a function to prepare the build (usually calling the configure script with the desired parameters, or preparing the cmake build). Then there was a build function that actually executed the build, and a package function that created the package after a successful build.

I wanted to keep that scheme, possibly expanding on it.

Also, I needed a (new) way of specifying package metadata. This metadata then must be available during the build process so that the build-mechanism could use it.

Because of the complexity of the requirements of our customers for a package, but also because we package for multiple customers, all with different requirements, the build procedure itself must be highly configurable. Re-using our already existing bash scripts seemed like a good way to go here.

Something like the nix expression language first came to my mind, but was then discarded as too complex and too powerful for our needs.

So I developed a simple, yet powerful mechanism based on hierarchical configuration files using “Tom's Obvious, Minimal Language”. I will come back to that later.

Parallelization of Builds

Most of the time, we are building software whose dependency trees are shallow but broad. Visually, that means we'd rather have this:


                              A
                              +
                              |
              +---------------+----------------+
              |                                |
              v                                v
              B                                C
              +                                +
              |                      +--------------------+
       +-------------+               |         |          |
       v      v      v               v         v          v
       D      E      F               G         H          I

instead of this:

              A
              +
              |
       +-------------+
       v      v      v
       B      C      D
       +             +
       |             |
       v             v
       E             F
       +             +
    +--+---+         |
    v      v         v
    G      H         I

This means we could gain a lot by parallelizing builds. In the first visualization, six packages can be built in parallel in the first step, and two in the second build step. In the second visualization, four packages could be built in parallel in the first step, and so on.

Our old tooling, though, would build all packages in sequence, in one container, on one host.

How much optimization could benefit us can be calculated easily: If each package would take one minute to build, without considering the tool runtime overhead, our old tooling would take 9 minutes for all 9 packages to be built. Not considering parallelization overhead, a parallel build procedure would result in 3 minutes for the first visualization and 4 minutes for the second.

Of course, parallelizing builds on one host does not result in the same duration because of sharing of CPU cores, and also the tooling overhead must be considered as well as waiting for dependencies to finish – but as a first approximation, this promised a big speedup (which turned out to be true in real life), especially because we got the option to parallelize over multiple build hosts, which is not possible at all with the old tooling.

Packaging targets

Because we provide support not only for one Linux distribution, but for a range of different distributions, the tool must be able to support all of them. This does not only mean that we must support APT- and RPM-based distributions; we must also be able to extend our tooling later to support other packaging tools. Also, the tool should be able to package into tar archives or other – proprietary – packaging formats, because apparently that's still a thing in the industry.

So having the option of packaging for different package formats was another dimension in our requirements matrix.

My idea here was simple and proved to be effective: not to care about packaging formats at all! All these formats have one thing in common: at the end of the day, they are just files. And thus, the tool should only care about files. So the actual packaging is just the final phase, which ideally results in artifacts in all the formats we provide.

Replicability

From using #nixos for several years now, I knew about the benefits of replicability (and reproducibility – patience!).

In the old tooling, there was some replicability, because after all, the scripts that implemented the packaging procedure were there. But artifacts were only loosely associated with the scripts. Also, as dependencies were implicitly calculated, updating a library broke replicability for packages that depended on that library – consider the following package dependency chain:

A 1.0 -> B 1.0 -> C 1.0

If we updated B to 2.0, nothing in A would tell us that an a-1.0.tar.gz was built with B 1.0 instead of 2.0, because dependencies were only specified by name, not by version. Only very careful investigation of filesystem timestamps and implicitly created files somewhere in the filesystem could lead to the right information. It was there – buried, hard to find and implicitly created during the packaging procedure.

There clearly was big room for improvement here as well. The most obvious step was to make dependencies, with their version numbers, explicit. Updating dependencies, though, had to be easy with the new tooling, because you want to be able to update a lot of dependencies easily. Think of openssl, which releases bugfixes more than once a month. You want to be able to update all your packages without a hassle.

Another big point in the topic of replicability is logs. Logs are important to keep, because they are the one thing that can tell you the most about that old build job you ran two months ago for some customer – the one that resulted in that one package with that very specific configuration which you now need to reproduce with a new version of the sources.

Yep, logs are important.

From time to time we did a cleanup of the build directories and just deleted them completely along with all the logs. That was, in my opinion, not good enough. After all, why not keep all logs?

The next thing was, of course, the build scripts. With the old tool, each package was a git repository with some configuration. On each build, a commit was automatically created to ensure the build configuration was persisted. I didn't like how that worked at all. Why not have one big repository where all build configurations for all packages reside? Maybe that's just me coming from #nixos, where #nixpkgs is just this big pile of packages – maybe. I just felt that this was the way it should work. Also, with some careful engineering, this would result in way less code duplication, because reusing things would be easy – it wouldn't depend on some file that should be somewhere in the filesystem, but on a file that is just there because it was committed to the repository.

Also, replicability could be easily ensured by

  1. not allowing any build with uncommitted stuff in the package repository
  2. recording the git commit hash a package was built from

This way, if you have a package in a format that allows recording arbitrary data in its meta-information (.tar.gz does not, of course), you can just check out the commit the package was built from, get the logs for the relevant build job, and off you go debugging your problem!
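To make this concrete, here is a minimal sketch of how those two rules could be enforced – assuming plain calls to the git CLI, not necessarily how butido implements it:

use std::process::Command;

/// Refuse to continue if the package repository has uncommitted changes.
fn ensure_clean(repo: &str) -> Result<(), String> {
    let out = Command::new("git")
        .args(["-C", repo, "status", "--porcelain"])
        .output()
        .map_err(|e| e.to_string())?;
    if out.stdout.is_empty() {
        Ok(())
    } else {
        Err("refusing to build: uncommitted changes in package repository".into())
    }
}

/// Return the commit hash the build is started from, so it can be recorded
/// alongside the produced package.
fn current_commit(repo: &str) -> Result<String, String> {
    let out = Command::new("git")
        .args(["-C", repo, "rev-parse", "HEAD"])
        .output()
        .map_err(|e| e.to_string())?;
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}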

Reproducibility

With everything recorded in one big git repository for all packages, one big step towards reproducibility was made.

But there was more.

First of all, because of our requirements with different customers, different package configurations and so on, our tooling must be powerful when it comes to parametrization. We also needed to be able to record the package parameters. For example, if we package a software for a customer “CoolCompany” and we need, for whatever reason, to change some CFLAGS for some or all packages, these parameters need to be recorded, along with possible default parameters we provide for individual packages anyway.

If we package, for whatever reason, some package with some library dependencies explicitly set to old versions of said library, that needs to be recorded as well.

Selecting Technologies

The technologies we selected were determined not so much by the requirements we defined for the project, but rather by the history of our tooling and the way our intellectual property and existing technology stack were built. And that is bash scripts for the package configuration(s) and docker for our build infrastructure. Nevertheless, a goal we had in mind while developing our new tooling was always that we might, one day, step away from docker. A new and better-suited technology might surface (not that I'm saying docker is particularly well suited for what we do here), or docker might just become too much of a hassle to work with – you never know! Either way, the “backend” of our tool should be convertible, so that it can talk not only to docker, but also to other services/tools, e.g. kubernetes or even real virtual machines.

As previously said (see “Package scripting”), the existing tooling (mostly) used bash scripts for package definition. Because these scripts contained a lot of intellectual property, and an easy transfer of knowledge from the old tooling to the new tooling was crucial, the new tooling had to use bash scripts as well. Structural and static package metadata, though, could be moved to some other form. TOML, an established configuration language in the Rust community, was considered and found appropriate. But because we knew from the old tooling that repetition is an easy habit to fall into, we needed a setup that made re-use of existing definitions easy and enabled flexibility, but also gave us the option to fine-tune settings of individual packages, all while not giving away our IP. Thus, the idea of hierarchical package definitions came up. Rather than inventing our own DSL for package definition, we would use a set of predefined keys in the TOML data structures and apply a “layering” technique on top, to be able to have a generic package definition and alter it per package (or at an even finer granularity). The details on this follow below in the section “Defining the Package format”.

For replicability, aggregation of log output and tracking of build parameters, we wanted some form of database. The general theme we quickly recognized was that all data we were interested in was immutable once created: a line of log output is of course not to be altered but recorded as-is, and the same holds for the parameters we submit to the build jobs, dependency versions, and every form of flags or other metadata. Thus, an append-only storage format was considered best. Due to the nature of the project (mostly being a “let's see what we can accomplish in a certain timeframe”), the least complicated approach for data storage was to use what we knew: postgres – and never alter data once it hit the database. (To date, we have no UPDATE SQL statements in the codebase and only one DELETE statement.)

Rust

Using Rust as the language to implement the tool was a no-brainer.

First of all, and I admit that was the main reason that drove me, I am most familiar with the Rust programming language. Other languages I am familiar with (that is: Ruby, Bash or C) are not applicable for that kind of problem. Other languages that would be available for the problem domain we're talking about here would be Python or Go (or possibly even a JVM language), but I did not consider any of them, mostly because not a single one of these languages gives me the expressiveness and safety that Rust provides. After all, I needed a language where I could be certain that things actually work after I've written the code.

The Rust ecosystem gives me a handful of awesome crates (read: libraries) to solve certain types of problems:

Defining the Package format

As stated in one of the previous sections, we had to define a package format that could be used for describing the metadata of packages while keeping the possibility to customize packages per customer and also giving us the ability to use the knowledge from our old bash-based tooling without having to convert everything to the new environment.

Most of the metadata, which is rather static per package, could easily be defined in the format:

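As a rough illustration, this static metadata can be thought of as a plain data structure like the following sketch – the field names are illustrative and not necessarily butido's exact format:

use std::collections::HashMap;
use serde::Deserialize;

// Illustrative only: the static metadata of a package as it could be
// deserialized from a pkg.toml file.
#[derive(Debug, Deserialize)]
struct Package {
    name: String,
    version: String,
    description: Option<String>,
    homepage: Option<String>,
    license: Option<String>,
    // Where to fetch the sources from and how to verify them.
    source_url: Option<String>,
    source_hash: Option<String>,
    // Dependencies, explicitly pinned by name and version.
    #[serde(default)]
    dependencies: Vec<String>,
    // Environment variables passed into the build container.
    #[serde(default)]
    environment: HashMap<String, String>,
}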
All these settings are rather static per package. The real complexity, though, comes with the definition of the build script.

The idea of “phases” in the build script was valuable. So we decided that each package should be able to define a list of “phases” that could be compiled into a “script” which, when executed, transforms the sources of a package into some artifacts (for example, a .rpm file). The idea was that only the script knows about packaging formats, because (if you remember from earlier in the article) we needed flexibility in packaging targets: rpm, deb, tar.gz or even proprietary formats must be supported.

So, each package has a predefined list of phases that can each be set to a string. These strings are then concatenated by butido, and the resulting script is, along with the package sources, the artifacts from the builds of the package's dependencies and the patches for the package, copied to the build container and executed. To visualize this:

Sources      Patches      Script
   │            │            │
   │            │            │
   └────────────┼────────────┘
                ▼
         ┌─────────────┐
         │             │
         │  Container  │◄────┐
         │             │     │
         └──────┬──────┘     │
                │            │
                │            │
                ▼            │
            Artifacts────────┘
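A minimal sketch of that script-compilation step, assuming a fixed phase order (illustrative, not butido's actual code):

// Illustrative: concatenate the per-package phase definitions into one
// script, in a fixed order, skipping phases a package did not define.
fn compile_script(phases: &std::collections::HashMap<String, String>) -> String {
    const ORDER: &[&str] = &["unpack", "build", "install", "package"];
    let mut script = String::from("#!/bin/bash\nset -e\n");
    for phase in ORDER {
        if let Some(body) = phases.get(*phase) {
            script.push_str(&format!("\n### phase: {}\n{}\n", phase, body));
        }
    }
    script
}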

One critical point, though, was repetition. Having to repeat parts of a script in multiple packages was a deal-breaker, therefore being able to reuse parts of a script was necessary. Because we did not want to invent our own scripting language/DSL for this, we decided to use the layering mentioned before to implement the reuse of parts of scripts.

Consider the following tree of package definition files:

/pkg.toml

/packageA/pkg.toml
/packageA/1.0/pkg.toml
/packageA/2.0/pkg.toml
/packageA/2.0/1/pkg.toml
/packageA/2.0/2/pkg.toml

/packageB/pkg.toml
/packageB/0.1/pkg.toml
/packageB/0.2/pkg.toml

The idea behind that scheme was that we implement a high-level package definition (/pkg.toml), where variables and build functionality are predefined, and later alter variables and definitions as needed in the individual packages (/packageA/pkg.toml or /packageB/pkg.toml), in different versions of these packages (/packageA/1.0/pkg.toml, /packageA/2.0/pkg.toml, /packageB/0.1/pkg.toml or /packageB/0.2/pkg.toml), or even in different builds of a single version of a package (/packageA/2.0/1/pkg.toml, /packageA/2.0/2/pkg.toml).

Here, the top level /pkg.toml would define, for example, CFLAGS = ["-O2"], so that all packages had that CFLAGS passed to their build by default. Later, this environment variable could be overwritten in /packageB/0.2/pkg.toml, only having an effect in that very version. Meanwhile, /packageA/pkg.toml would define name = "packageA" as the name of the package. That setting automatically applies to all sub-directories (and their pkg.toml files). Package-local environment variables, special build-system scripts and metadata would be defined once and reused in all sub-pkg.toml files, so that repetition is not necessary.

That scheme also applies to the script phases of the packages. That means that we implement a generic framework for how a package is built in /pkg.toml, with a lot of bells and whistles – not tied to a build tool (autotools or cmake or...) or a packaging target (rpm or deb or...), but flexible enough to handle all of these cases gracefully. Later, we customize parts of the script by overwriting environment variables to configure the generic implementation, or we overwrite whole phases of the generic implementation with a specialized version to meet the needs of the specific package.

In reality, this looks approximately like this:

# /pkg.toml
# no name = ""
# no version = ""

[phases]
unpack.script = '''
    tar xf $sourcefile
'''

build.script = '''
    make -j 4
'''

install.script = '''
    make install
'''

package.script = '''
    # .. you get the hang of it
'''

and later:

# in /tmux/pkg.toml
name = "tmux"
# still no version = ""

[phases]
build.script = '''
    make -j 8
'''

and even later:

# in /tmux/3.2/pkg.toml
version = "3.2"

[phases]
install.script = '''
	make install PREFIX=/usr/local/tmux-3.2
'''

Not that the above example is accurate or sane, but it demonstrates the power of the approach: in the top-level /pkg.toml, we define a generic way of building packages. In /tmux/pkg.toml we overwrite some settings that are equal for all packaged tmux versions: the name and the build part of the script. And in the /tmux/3.2/pkg.toml file we define the last bits of the package definition: the version field and, for some reason, we overwrite the install part of the script to install to a certain location.

One could even go further and have a /tmux/3.2/fastbuild/pkg.toml, where version is set to "3.2-fast" and tmux is built with -O3 in that case.
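To give an idea of how such layering can be implemented: parse every pkg.toml from the repository root down to the package directory and let deeper values override shallower ones. The following sketch assumes the toml crate and is simplified compared to what butido actually does:

use toml::Value;

// Merge `overlay` into `base`: tables are merged key by key, recursing into
// sub-tables; for everything else the deeper (more specific) definition wins.
fn merge(base: &mut Value, overlay: Value) {
    match (base, overlay) {
        (Value::Table(base_table), Value::Table(overlay_table)) => {
            for (key, value) in overlay_table {
                if base_table.contains_key(&key) {
                    merge(base_table.get_mut(&key).unwrap(), value);
                } else {
                    base_table.insert(key, value);
                }
            }
        }
        (base_value, overlay_value) => *base_value = overlay_value,
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The repository root definition plus two more specific layers.
    let layers = ["pkg.toml", "tmux/pkg.toml", "tmux/3.2/pkg.toml"];
    let mut resolved: Value = toml::from_str("")?; // start with an empty table
    for path in layers {
        let overlay: Value = toml::from_str(&std::fs::read_to_string(path)?)?;
        merge(&mut resolved, overlay);
    }
    println!("{}", resolved); // the fully resolved package definition
    Ok(())
}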

Templating package scripts

The approach described above is very powerful and flexible. It has one critical problem, though: what if we need information from the lowest pkg.toml file in the tree (e.g. /tmux/3.2/fastbuild/pkg.toml), but that information has to be available in /pkg.toml?

There are two solutions to this problem. The first one would be to define a phase at the very beginning of the package script that defines all the variables. The /tmux/3.2/fastbuild/pkg.toml file would overwrite that phase and define all the variables, and later phases would use them.

That approach has one critical problem: it would render the layering of pkg.toml files meaningless, because each pkg.toml file would need to overwrite that phase with the appropriate settings for the package. If /tmux/3.2/pkg.toml defined all the variables, but one variable needed to be overwritten for /tmux/3.2/fastbuild/pkg.toml, the latter would still need to overwrite the complete phase. This was basically a pothole for the don't-repeat-yourself idea and thus a no-go.

So we asked ourselves: what data do we need in the top-level generic scripts that gets defined in the more specific package files? Turns out: only the static stuff! The generic script phases need the name of a package, or the version of a package, or meta-information of the package... and all this data is static. So, we could just sprinkle a bit of templating over the whole thing and be done with it!

That's why we added handlebars to our dependencies: To be able to access variables of a package in the build script. Now, we can define:

build.script = '''
    cd /build/{{this.name}}-{{this.version}}/
    make
'''

And even more complicated things are possible, for example iterating over all defined dependencies of a package and checking whether they are installed correctly in the container. And all that can be scripted in an easy, generic way, without knowing about the package-specific details.
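Under the hood, the interpolation boils down to rendering the script string with the package as template data. A sketch using the handlebars crate (the template data butido actually provides is richer than this):

use handlebars::Handlebars;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let script = r#"
        cd /build/{{this.name}}-{{this.version}}/
        make
    "#;

    // Illustrative package data; in butido this comes from the layered
    // pkg.toml definitions.
    let package = json!({ "name": "tmux", "version": "3.2" });

    // Render the script template with the package as the current context.
    let rendered = Handlebars::new().render_template(script, &package)?;
    println!("{}", rendered);
    Ok(())
}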

PostgreSQL

We decided to use postgresql for logging structured information of the build processes.

After identifying the entities that needed to be stored, setting up the database with the appropriate schema was not too much of a hassle, given the awesome diesel crate.

Before we started with the implementation of butido, we identified the entities with the following diagram:

+------+ 1             N +---+ 1          1 +-----------------------+
|Submit|<--------------->|Job|-------+----->|Endpoint *             |
+--+---+                 +---+       |    1 +-----------------------+
   |                                 +----->|Package *              |
   |                                 |    1 +-----------------------+
   |                                 +----->|Log                    |
   |  1  +-----------------------+   |    1 +-----------------------+
   +---->|Config Repository HEAD |   +----->|OutputPath             |
   |  1  +-----------------------+   |  N:M +-----------------------+
   +---->|Requested Package      |   +----->|Input Files            |
   |  1  +-----------------------+   |  N:M +-----------------------+
   +---->|Requested Image Name   |   +----->|ENV                    |
   | M:N +-----------------------+   |    1 +-----------------------+
   +---->|Additional ENV         |   +----->|Script *               |
   |  1  +-----------------------+          +-----------------------+
   +---->|Timestamp              |
   |  1  +-----------------------+
   +---->|Unique Build Request ID|
   |  1  +-----------------------+
   +---->|Package Tree (JSON)    |
      1  +-----------------------+

Which is explained in a few sentences:

  1. Each job builds one package
  2. Each job runs on one endpoint
  3. Each job produces one log
  4. Each job results in one output path
  5. Each job runs one script
  6. Each job has N input files, and each file belongs to M jobs
  7. Each job has N environment variables, and each environment variable belongs to M jobs
  8. One submit results in N jobs
  9. Each submit was started from one config repository commit
  10. Each submit has one package that was requested
  11. Each submit runs on one image
  12. Each submit has one timestamp
  13. Each submit has a unique ID
  14. Each submit has one Tree of packages that needed to be built

I know that this method is not necessarily “by the book” of how to develop software, but this was the very first sketch of how our data needed to be structured. Of course, this is not the final database layout that is implemented in butido today, but it hasn't changed fundamentally since. The idea of a package “Tree” that is stored in the database was removed, because after all, the packages do not form a tree but a DAG (more details below). What hasn't changed is that the script that was executed in a container is stored in the database, as well as the log output of that script.
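For a rough idea of what such an append-only schema looks like in diesel terms, here is a sketch with abbreviated, illustrative column names (not the exact butido schema):

// Illustrative diesel table definitions; the real schema has more
// columns and more tables (endpoints, packages, images, environment, ...).
diesel::table! {
    submits (id) {
        id -> Integer,
        uuid -> Text,
        submit_time -> Timestamp,
        repo_hash -> Text,          // the config repository HEAD
        requested_package -> Text,
        requested_image -> Text,
    }
}

diesel::table! {
    jobs (id) {
        id -> Integer,
        submit_id -> Integer,       // N jobs belong to 1 submit
        endpoint -> Text,
        package -> Text,
        script -> Text,
        log -> Text,
        output_path -> Text,
    }
}

// Rows are only ever inserted, never updated, e.g.:
// diesel::insert_into(jobs::table).values(&new_job).execute(&mut connection)?;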

Implementing an MVP

After a considerable planning phase and several whiteboards full of sketches, the implementation started. Initially, I was allowed to spend 50 hours of work on the problem. That was enough to get some basics done and a plan on how to reach the MVP. After 50 hours, I was able to say that with approximately another 50 hours I could get a prototype that could be used to actually build a software package.

Architecture

The architecture of butido is not as complex as it might seem. As these things are best described with a visualization, here we go:

      ┌─────────────────────────────────────────────────┐
      │                                                 │
      │                    Orchestrator                 │
      │                                                 │
      └─┬──────┬───────┬───────┬───────┬───────┬──────┬─┘
        │      │       │       │       │       │      │
        │      │       │       │       │       │      │
    ┌───▼─┐ ┌──▼──┐ ┌──▼──┐ ┌──▼──┐ ┌──▼──┐ ┌──▼──┐ ┌─▼───┐
    │     │ │     │ │     │ │     │ │     │ │     │ │     │
    │ Job │ │ Job │ │ Job │ │ Job │ │ Job │ │ Job │ │ Job │
    │     │ │     │ │     │ │     │ │     │ │     │ │     │
    └───┬─┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └─┬───┘
        │      │       │       │       │       │      │
      ┌─▼──────▼───────▼───────▼───────▼───────▼──────▼─┐
      │                                                 │
      │                    Scheduler                    │
      │                                                 │
      └────────┬───────────────────────────────┬────────┘
               │                               │
      ┌────────▼────────┐             ┌────────▼────────┐
      │                 │             │                 │
      │    Endpoint     │             │    Endpoint     │
      │                 │             │                 │
      └────────┬────────┘             └────────┬────────┘
               │                               │
┌─────┬────────▼────────┬─────┐ ┌─────┬────────▼────────┬─────┐
│     │ Docker Endpoint │     │ │     │ Docker Endpoint │     │
│     └─────────────────┘     │ │     └─────────────────┘     │
│                             │ │                             │
│       Physical Machine      │ │       Physical Machine      │
│                             │ │                             │
│ ┌───────────┐ ┌───────────┐ │ │ ┌───────────┐ ┌───────────┐ │
│ │           │ │           │ │ │ │           │ │           │ │
│ │ Container │ │ Container │ │ │ │ Container │ │ Container │ │
│ │           │ │           │ │ │ │           │ │           │ │
│ └───────────┘ └───────────┘ │ │ └───────────┘ └───────────┘ │
│                             │ │                             │
└─────────────────────────────┘ └─────────────────────────────┘

One part I could not visualize properly without messing up the whole thing is that each job talks to some other jobs. Also, some helpers are not visualized here, but they do not play a part in the overall architecture. But let's start at the beginning.

From top to bottom: the orchestrator uses a Repository type to load the definitions of packages from the filesystem. It then fetches the packages that need to be built using said Repository type, which does a recursive traversal of the packages. That process results in a DAG of packages. For each package, a Job is created, which is the set of variables that need to be associated with the Package so that it can be built: environment variables, but also the image that should be used for executing the build script, and some more settings.

Each of those jobs is then given to a “Worker” (named “Job” in the above visualization). Each of these workers gets associated with the jobs that need to be built successfully before the job itself can run – that's how dependencies are resolved. This association itself is a DAG, and it is automatically executed in the right order because each job waits on its dependencies. If an error happens, either during the execution of the script in the container or while processing the job itself, the job sends the error to its parent job in the DAG. This way, errors propagate through the DAG and all jobs exit, either with success, an error from a child, or an error of their own.

Each of those jobs knows a “scheduler” object, which can be used to submit work to a docker endpoint. This scheduler keeps track of how many builds are running at any point in time and blocks the spawning of further builds if too many are running (the limit is a configuration option for the user of butido).
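A minimal sketch of such a limit, for example with a counting semaphore (illustrative; not necessarily how butido's scheduler is implemented):

use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative: allow at most `max_jobs` container builds at a time.
struct Scheduler {
    slots: Arc<Semaphore>,
}

impl Scheduler {
    fn new(max_jobs: usize) -> Self {
        Scheduler { slots: Arc::new(Semaphore::new(max_jobs)) }
    }

    async fn schedule(&self, job_name: &str) {
        // Waits (asynchronously) until a slot is free.
        let _permit = self.slots.acquire().await.expect("semaphore closed");
        println!("running build for {}", job_name);
        // ... submit the job to a docker endpoint here; the permit is
        // released when `_permit` is dropped at the end of this scope.
    }
}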

The scheduler knows each configured and connected Endpoint and uses it to submit builds to a docker endpoint. These docker endpoints can be physical machines (like in the visualization), VMs or whatever else.

Of course, during the whole process there are some helper objects involved, the database connection is passed around, progress bar API objects are passed around and a lot of necessary type-safety stuff is done. But for the architectural part, that really was the essence.

Problems during implementation

During the implementation of butido, some problems were encountered and dealt with. Nothing too serious that made us re-do the architecture, and nothing too complicated. Still, I want to highlight some of the problems and how they were solved.

Artifacts and Paths

One problem we encountered, and which is a constant source of happiness in every software project, was the handling of the paths to our input and output artifacts.

Luckily, Rust has a very convenient and concise path handling API in the standard library, so over several refactorings, we were able to not introduce more bugs than we squashed.

There's no individual commit I can link here, because quite a few commits were made that changed how we track(ed) our Artifact objects and path to artifacts during the execution of builds. git log --grep=path --oneline | wc -l lists 37 of them at the time of writing this article.

It is not enough for code to work. (Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship)

The lesson learned is: make use of strong types as much as you can when working with paths. For example, we have a StoreRoot type that points to a directory where artifacts are stored. Because artifacts are identified by their filename, there's an ArtifactPath type. If StoreRoot::join(artifact_path) is called, the caller gets a FullArtifactPath, which is an absolute path to the actual file on disk. Objects of the aforementioned types cannot be altered; only new ones can be created from them. That, plus some careful API crafting, makes some classes of bugs impossible.
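In a simplified form, the idea looks roughly like this (butido's real types carry more logic and error handling):

use std::path::{Path, PathBuf};

// Simplified sketch of the idea, not the actual butido implementation.
struct StoreRoot(PathBuf);        // the directory all artifacts live in
struct ArtifactPath(PathBuf);     // a path relative to a StoreRoot
struct FullArtifactPath(PathBuf); // absolute path to a file on disk

impl StoreRoot {
    // The only way to get a FullArtifactPath is to join a root with an
    // artifact path; neither input can be mutated afterwards.
    fn join(&self, artifact: &ArtifactPath) -> FullArtifactPath {
        FullArtifactPath(self.0.join(&artifact.0))
    }
}

impl FullArtifactPath {
    fn as_path(&self) -> &Path {
        &self.0
    }
}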

Orchestrator rewrite(s)

In the above section on the general architecture of butido, the “Orchestrator” was already mentioned. This type takes care of loading the repository from disk, selecting the package that needs to be built, finding all the (reverse) dependencies of the package and transforming each package into a runnable job that can be submitted to the scheduler, and to the endpoints from there, using workers to orchestrate the whole thing.

The Orchestrator itself, though, was rewritten twice.

In the beginning, the orchestrator was still working on a Tree of packages. This tree was processed layer-by-layer:

    A
   / \
  B   E
 / \   \
C   D   F

So this tree resulted in the following sets of jobs:

[
    [ C, D, F ]
    [ B, E ]
    [ A ]
]

and because the packages within each list do not depend on each other, the entries of each list would then be processed in parallel (with the lists themselves processed in sequence).

This was a simple implementation for a simple view of the problem, and it worked very well until we were able to run the first prototype. It was far from optimal, though. In the above tree, the package named E could in principle have started building even though C and D were not finished yet – but the layer-by-layer approach would not start it before the whole first layer was done.

The first rewrite mentioned above solved that by reimplementing the job-spawning algorithm to perform better on such and similar cases – which are in fact not that uncommon for us.

The second rewrite of the Orchestrator, which happened shortly after the first one (after we understood the problem at hand even better), optimized that algorithm towards the best possible solution. The new implementation uses a trick: it spawns one worker for each job. Each of those workers has “incoming” and “outgoing” channels (actually multi-producer-single-consumer queues). The orchestrator associates each job in the dependency DAG with its parent by connecting the child's “outgoing” channel with the parent's “incoming” channel. Leaf nodes in the DAG have no “incoming” channel (they are closed right away after instantiation), and the “outgoing” channel of the “root node” in the DAG sends to the orchestrator itself.

The channels are used to send either successfully built artifacts to the parent, or an error.

Each worker then waits on its “incoming” channel for the artifacts it depends on. If it receives an error, it does nothing but send that error on to its parent. If all artifacts are received, it schedules its own build on the scheduler and sends the result of that process to its parent.

This way, artifacts and/or errors propagate through the DAG until the Orchestrator gets all results. And the tokio runtime, which is used for the async-await handling, orchestrates the execution of the processes automatically.
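Conceptually, a worker looks roughly like the following sketch using tokio channels; error handling and the actual build submission are omitted, and the types are stand-ins rather than butido's real ones:

use tokio::sync::mpsc::{Receiver, Sender};

type Artifact = String; // stand-in for butido's real artifact type

// One worker per job: wait for all dependency artifacts, run the own
// build, then send the result (or an error) to the parent in the DAG.
async fn worker(
    name: String,
    n_deps: usize,
    mut incoming: Receiver<Result<Artifact, String>>,
    outgoing: Sender<Result<Artifact, String>>,
) {
    let mut inputs = Vec::new();
    while inputs.len() < n_deps {
        match incoming.recv().await {
            Some(Ok(artifact)) => inputs.push(artifact),
            Some(Err(e)) => {
                // An error in a child: do nothing but pass it on to the parent.
                let _ = outgoing.send(Err(e)).await;
                return;
            }
            None => return, // all senders dropped
        }
    }
    // All dependencies arrived: schedule the own build here (omitted) and
    // send the result up the DAG.
    let built = format!("artifact of {} (from {} inputs)", name, inputs.len());
    let _ = outgoing.send(Ok(built)).await;
}

Leaf workers wait on zero dependencies and can start right away; the root worker's outgoing sender points back to the orchestrator.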

Tree vs. DAG

Another problem we encountered was not so much a problem but rather a simplification we had used when writing the package-to-job conversion algorithm the first time. When we began implementing the package-loading mechanism, we used a tree data structure for the recursive loading of dependencies. That meant that a structure like this:

A ------> B
|         ^
|         |
 `-> C --´ 

Was actually represented like this:

A ----->  B
|
|
 `-> C -> B 

That meant that B was built twice: once as a dependency of A and once as a dependency of C. That was a simplification we used to get a working prototype fast, because implementing the proper structure (a DAG) was considered too involved at the time, given the “prototype nature” the project had back then.
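The fix boils down to deduplicating packages while loading them, so that a package like B becomes a single node that both A and C point to. A simplified sketch of that idea (butido uses a proper DAG datatype; this is only for illustration):

use std::collections::HashMap;

// Illustrative: one node per package identifier, created at most once.
#[derive(Default)]
struct Dag {
    nodes: Vec<String>,           // package identifiers
    edges: Vec<(usize, usize)>,   // (dependent, dependency)
    seen: HashMap<String, usize>, // package identifier -> node index
}

impl Dag {
    fn add_package(&mut self, pkg: &str, deps: &[&str]) -> usize {
        let idx = self.node(pkg);
        for dep in deps {
            let dep_idx = self.node(dep); // reused if already loaded
            self.edges.push((idx, dep_idx));
        }
        idx
    }

    fn node(&mut self, pkg: &str) -> usize {
        if let Some(&idx) = self.seen.get(pkg) {
            return idx; // B is only ever created once
        }
        let idx = self.nodes.len();
        self.nodes.push(pkg.to_string());
        self.seen.insert(pkg.to_string(), idx);
        idx
    }
}

With that, the edges A -> B and C -> B point at the same node for B, so B is built exactly once.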

Because we used a clear separation of concerns in the codebase, and the module that implemented the datatype holding the loaded package dependency collection was separated from the rest of the logic, replacing the tree structure with a DAG structure was not too involved (merge).

To put a bit more emphasis on that: when we changed the implementation of the package dependency collection handling, we didn't need to rewrite the Orchestrator (except for changing some interfaces), because the two parts of the codebase were separated well enough that changing the implementation on one side didn't result in a rewrite on the other side.

Open sourcing

For me it was clear from the outset that this tool was not what we call our intellectual property. Our expertise is in the scripts that actually executed the build, in our handling of the large number of special cases for every package/target distribution/customer combination. The tool that orchestrated these builds is nothing that could be sold to someone, because it is not rocket science.

Because of that, I asked whether butido could be open sourced. Long story short: I was allowed to release the source code under an open source license, and since we had some experience with the Eclipse Public License (EPL 2.0), I released it under that license.

Before we were able to release it under that license, I had to add some package lints (cargo-deny) to verify that all our dependencies were compatible with it, and I had to remove one package that was pulled in but had an unclear license. Because our CI checks the licenses of all (transitive) dependencies, we can be sure that such a package is not added again.

butido 0.1.0

After about 380 hours of work, butido 0.1.0 was released. Since then, four maintenance releases (v0.1.{1, 2, 3, 4}) have been published to fix some minor bugs.

butido is usable with our infrastructure; we've started packaging our software packages with it and started (re)implementing our scripts in the butido-specific setup. For now, everything seems to work properly, and at the time of writing this article, about 30 packages have already been successfully packaged and built.

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix. (Joel Spolsky, Joel on Software)

Of course there might still be some bugs, but I think for now the most serious problems are dealt with, and I'm eager to start using butido at a large scale.

Plans for the future

We also have plans for the future: some of these plans are more in the range of “maybe in a year”, but some of them are also rather short-term.

Some refactoring is always on such a list, and so it is with butido: when implementing certain parser helpers, we used the “pom” crate, for example. This should be replaced by the much more popular “nom” crate, just to be future-proof here. Some frontend (CLI) cleanup should be done as well, to ensure consistency in our user experience.

Logging was implemented using the “log” crate, but as we are talking about an async-await codebase here, we might switch to “tracing”.

Some more general improvements are needed as well. For example, we store the “script” that gets compiled and sent to the containers to execute the build in the database, for each build. This is not very efficient, as we basically end up with hundreds of copies of the script in the database at some point, with only small changes from one build to another (or even none).
But because the script is in the git history of the package repository, it would be easy to regenerate the script from an old checkout of the repository. This is a really nice thing that could be improved, as it would reduce database pressure by a nontrivial amount.

Still, the biggest pressure on the database comes from the fact that we write build logs to the database, and some of these build logs have already reached 10,000 lines. Of course it is not very efficient to store them in a postgres TEXT field, though at the time of initial development, it was the simple solution.
There are a few thoughts flying around on how we could improve that while upholding our guarantees – especially that the log cannot be modified easily after it was written. Yes, of course, it can be altered in the postgres database as well, but the hurdle (writing an SQL statement) is much higher than with, for example, plaintext files (vim <file>). And do not get the impression that this scenario is far-fetched!

Whatever can go wrong, will go wrong. (Murphy's Law)

Having a more structured storage format for the logs (especially with timestamps on each line) could lead to some interesting forecasting features, where butido approximates the time a build will finish by matching the output lines of the running build against the output lines of an old build.

The biggest idea for butido, though, is refactoring the software into a service.

The first step for that would be refactoring the endpoint-handling modules into services. This way, if several developers submit builds, butido would talk to a handful of remote services that each represent one endpoint. Such a service could then decide whether there are still resources available on the host or whether the incoming job should be scheduled at a later time.

The next step would then be to refactor the core software component into a service as well. In that world, the developer would only have a thin client on the command line which could be used to submit builds to the service. The service would then schedule the builds appropriately, taking care of resources automatically.
Of course, integration with other services, such as prometheus for gathering statistics or nexus as an artifact repository, would then be easily possible.

All these things are more mid- or long-term ideas rather than things that will happen within just a few months.

Improvements over the status-quo

In the following section, I want to highlight some improvements over our old setup. Some changes cannot be measured, because they have an impact on quality of life, joy of use and, for example, the organization of our setup. So none of the following is hard science, because the tools compared here are very different in their approach, functionality and behaviour. Still, this section might give a good impression of how things improved.

Non-measurable improvements

The package definition format is now typed, contrary to being only a bash script in our old setup. That gives us stronger guarantees about its correctness. The interpolation of package variables into the package scripts fails if a required variable is not present. The package definition now holds metadata about the package (such as a description, the project homepage, the source URL of the package, the license, the dependencies). All these things would have been possible before, but were never implemented. Dependencies were not hardcoded into a package, but semi-automatically searched for by the old tooling. Although we had to tell each package what it depended on, the exact version was determined heuristically. Now it is hardcoded, and we always know exactly which version will be used.

The package scripts are then compiled into one big script that executes the whole build of one package. Each of these scripts is run through a linter (for example shellcheck) on each build, ensuring that we do not mess up minor things that would get us in the long run. That was simply not possible before.
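A sketch of such a lint step, assuming shellcheck is installed and the compiled script is fed to it on stdin (illustrative; butido exposes this via its own options):

use std::io::Write;
use std::process::{Command, Stdio};

// Illustrative: run the compiled package script through shellcheck
// before submitting it to a build container.
fn lint_script(script: &str) -> std::io::Result<bool> {
    let mut child = Command::new("shellcheck")
        .arg("--shell=bash")
        .arg("-") // read the script from stdin
        .stdin(Stdio::piped())
        .spawn()?;
    {
        let mut stdin = child.stdin.take().expect("stdin was piped");
        stdin.write_all(script.as_bytes())?;
    } // stdin is dropped (closed) here so shellcheck sees EOF
    Ok(child.wait()?.success())
}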

From experience I can say, that updating a package in butido is way faster than updating it with our old infrastructure. If the updated package does not need any further customizations than the prior version, a package update is as simple as:

In our old setup, a whole new package would have been created. This is, of course, also scripted. But it is not as simple as the above steps:

Of course, all of these are not measurable improvements. So let's have a look at the measurable ones!

Measurable improvements

Because of the new package description format, we can configure each package build individually, for example how many make threads should be used to build it (think of make -j 10). Building small packages with a huge number of threads only introduces overhead, while building large packages with a small number of threads results in longer builds.

The layered nature of our package format highly encourages code reuse. Considering python: the full build script for python 3.7.5 is, in our new setup, 664 lines long. The actual package definitions for the python interpreter are just 68 lines, and that is including all metadata. The rest of these 664 lines is the surrounding framework for building our packages, which is shared between all packages. The non-shared, python-3.7.5-specific part is only 11 lines. In our old setup, the package definition for python 3.7.5 is 167 lines of bash, not including the framework itself. So to have a fair comparison, we must compare these 167 lines with the 68 lines, or rather the 11 lines, from the new setup, which would mean that our new setup needs only 40%, or rather 6.6%, of the old setup. Of course this comparison is not completely fair, but – in my opinion – it is a good approximation.

A full build of our python interpreter takes less than 10 minutes! That includes generating build scripts for 12 packages, linting all individual scripts, checking that each of the sources still has the expected SHA1, putting all packages into a DAG and traversing this DAG, and building the packages in parallel (on two dedicated machines), each package in its own docker container.

For reference, the package tree python looks like this (some packages are displayed multiple times because this is a tree representation of the DAG):

python 3.7.5
├─ sqlite 3.35.1
├─ readline 8.1
├─ openssl 1.1.1i
├─ libxslt 1.1.34
│  ├─ libgcrypt 1.9.2
│  │  └─ libgpg-error 1.42
│  └─ libxml2 2.9.10
├─ libxml2 2.9.10
├─ libtk 8.6.10
│  └─ libtcl 8.6.10
├─ libtcl 8.6.10
├─ libffi 3.2.1
└─ libbzip2 1.0.8

As soon as the libraries in this tree are released, butido automatically reuses the packages if it finds that a build would result in the same package, reducing the buildtime even more!

The overhead that butido itself introduces is minimal. Considering the mentioned python package and its dependencies: the actual builds take 9:22 minutes, while the call to butido build takes 9:40 until it returns with exit code 0. This overhead amounts to 3.1% of the complete time – negligible compared to the roughly 10 minutes of overall time!

With our old tooling, this is hardly measurable. Because of the in-sequence nature of the builds and a lot of things happening in between the individual package-building steps, as well as the fact that the tooling builds in one docker container on one host, the build for python takes considerably longer. Also, the gathering of meta-information and the general overhead of the tool itself might easily be a multiple of the overhead of butido.

Comparing two builds

To get hard numbers for a clean build of a package, I had to use cmake in version 3.20.4, because our package for python differs too much from one setup to the other (especially concerning dependencies) to make a helpful comparison.

So, for the following numbers, I started a cmake 3.20.4 build for CentOS 7 with our old tooling, making sure that 24 threads were used (the machine has 24 cores). The cmake package has no dependencies, so we can compare the numbers directly here.

The old tooling took 3:46 minutes from calling the tool until it returned with exit code 0.

The call to butido was made with time butido build cmake --no-lint --no-verify -I local:rh7-default -E TARGET_FORMAT=rpm -- 3.20.4, where linting and hash-sum verification were turned off to make the numbers more comparable (our old tooling does neither). It took butido 3:32 minutes, whereas the actual build took 3:26 minutes. That means that butido had an overhead of 6 seconds, whereas the old tooling had an overhead of 14 seconds.

The comparability of those numbers is still a bit difficult, because of the following points:

Still, these numbers show a significant improvement in speed, especially considering these points.

Some more things

Now is the time to conclude this article.

First of all, I want to thank my employer for letting me write this piece of software, and especially for letting me share it with the community as open source software! I think I can say that it was a very interesting journey, I learned a lot, I think I was very productive and – as far as I can tell – it turned out to be a valuable piece of work for our future progress on the whole subject of building software packages.

I think I am allowed to speak on behalf of my team colleagues when I say that they/we are excited to use the new tooling.

I also want to thank everyone who is involved in the Rust ecosystem, either by hanging out on IRC, Matrix or other communication channels, commenting on github or, and especially, by contributing to crates in the ecosystem. The next – and last – chapter in this article will show that I tried to contribute back wherever I saw a chance!

One more thing that is very important to me and that I want to highlight quickly here: this article, but also butido itself, would have been much harder to write if we hadn't committed cleanly from day one. The fact that we crafted commit messages carefully (for example see here, here, here, here, here or here, just for some cases) made navigating the history a breeze.

Contributions to the Rust ecosystem

During the development of butido, contributions to a few other crates from the ecosystem were made. Some of these are PRs that were created, some of these are discussions where I took part to improve something, some of these are issues I reported or questions I've asked the authors of a crate.

During the development of butido, I also got maintainer rights for the config-rs crate, which I did the 0.11.0 release for and where I'm eager to continue contributing.

I want to highlight that these contributions were not necessarily financed by my employer. A few of them were mission-critical for butido itself. Nevertheless, none of them were completely irrelevant for butido, and although some of them are trivial, they all benefited the implementation of butido in one way or another.

The following list is probably not exhaustive, although I tried to catch everything.

Thanks for reading.