Rust Services on Kubernetes with Nix
I've been thinking about kubernetes a lot lately. Today I finally found the courage and motivation to just try writing some services and deploying them.
The idea was to write some simple “hello-world” style applications, deploy them on a kubernetes cluster and serve them locally on my machine – just to get the hang of the ecosystem and how it all works.
And because I'm running NixOS, of course I wanted to leverage the power of the nix package manager to ease my deployment mechanisms as far as possible.
Building docker images can be done with nixpkgs, so that's not an issue at all. There's also the great kubenix project, which can be used to generate kubernetes deployment configuration files with nix. Plus, we're using kind, which spawns a full kubernetes cluster inside a docker container on my notebook – that's really nice for development!
But first, we need to implement the
Services
Writing the service(s) was rather simple (because Rust, yay)! I decided not to make it completely trivial, and wrote three services:
- A service that serves “Hello”
- A service that serves “World”
- A service that uses the other two services, concatenates the strings they serve, and serves the result
The implementation of these services is rather simple and doesn't need any explanation for the experienced Rustacean. And because I'm assuming a fair bit of Rust knowledge, I'm not quoting the source here.
The “echo-style” services are implemented using actix-web in version 3; the “joiner” service is implemented using version 4 (beta 9), because version 3 does not support calling into the tokio runtime, which I need to query the other services asynchronously without too much hassle.
All services expect a `HOST` and a `PORT` environment variable to be set, which they use to bind to. The “joiner” service also expects `HELLO_SERVICE` and `WORLD_SERVICE`, which should point to the services serving the “Hello” and “World” strings respectively.
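The binding logic can be sketched roughly like this (a minimal sketch with assumed names and fallback defaults, not the actual source of the services):

```rust
// Hypothetical sketch of how the services turn HOST and PORT into a
// bind address; function name and defaults are assumptions.
use std::env;

/// Join host and port into a bind address, falling back to local defaults.
fn bind_address(host: Option<String>, port: Option<String>) -> String {
    let host = host.unwrap_or_else(|| "127.0.0.1".to_string());
    let port = port.unwrap_or_else(|| "8080".to_string());
    format!("{host}:{port}")
}

fn main() {
    // Read HOST and PORT from the environment, as the services do.
    let addr = bind_address(env::var("HOST").ok(), env::var("PORT").ok());
    // An actix-web server would then bind to `addr`, roughly:
    // HttpServer::new(|| App::new()).bind(addr)?.run().await
    println!("binding to {addr}");
}
```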
All three binaries are built using a very simple nix expression:

```nix
{ pkgs ? import <nixpkgs> {} }:

with pkgs.stdenv;
with pkgs.lib;

pkgs.rustPlatform.buildRustPackage rec {
  name = "hello-service";
  src = pkgs.nix-gitignore.gitignoreSourcePure "target\n" ./.;

  cargoSha256 = "sha256:0nam3yr99gg0lajdha0s9pijqwblgwggqi94mg7d3wz7vfhj8c31";

  nativeBuildInputs = with pkgs; [ pkg-config makeWrapper ];
  buildInputs = with pkgs; [ openssl ];
}
```
(the other `default.nix` files look similar)
Deployment
I started learning how to write the deployment by following the great article from tweag.io – Configuring and Testing Kubernetes Clusters with kubenix and kind.
That article is truly the main resource I used and you should definitely read it if you want to reproduce what I did here. It explains some details way better than I could!
One minor inconvenience, though, is that kubenix only supports kubernetes 1.18, while I was using kubernetes 1.22 for this example. It worked, but my implementation might stop working with kubernetes 1.23!
That said, `kind` can use older kubernetes versions via the `--image` flag when creating the cluster, which might work to get a kubernetes 1.18 cluster up and running!
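Creating such a cluster might look like this (a sketch; the exact node image tag is an assumption, check the kind release notes for images matching your kind version):

```shell
# Hypothetical invocation: pin the node image to a Kubernetes 1.18 release.
kind create cluster --name kubenix-demo --image kindest/node:v1.18.20
```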
The layout of the repository is as follows:

```
configuration.nix
default.nix
nix/deploy-to-kind.nix
nix/kubenix.nix
service-hello
service-hello/Cargo.lock
service-hello/Cargo.toml
service-hello/default.nix
service-hello/src
service-hello/src/main.rs
...
```
As you can see, each service implementation is just a subdirectory. The individual services can be built using `nix-build ./service-hello/`, for example.
The deployment gets generated by `configuration.nix`, whereas `default.nix` is used to orchestrate the whole thing and build some neat helpers.
Building docker images
Nix has awesome tooling in nixpkgs for building docker images. We can use that to build docker images with our binaries without actually pulling in too much:
```nix
helloImage = pkgs.dockerTools.buildLayeredImage {
  name = "hello-service";
  tag = "latest";
  config.Cmd = [ "${helloService}/bin/service-hello" ];
};
```
That goes into our `default.nix`. These images are later loaded into the kind cluster automatically (thanks to nix) when starting the cluster.
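Under the hood, loading a nix-built image into the cluster boils down to something like this (a sketch; the `result` symlink and image name are illustrative):

```shell
# Load the nix-built image tarball into the local docker daemon...
docker load < result
# ...and copy it into the kind nodes so pods can use it.
kind load docker-image hello-service:latest
```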
Generating the deployment
Let's have a look at the `configuration.nix`! The first few lines of that file are simple definitions for the services. We define their name (`label`), which port they run on, how much CPU the pods they run in may use, and their environment:
```nix
helloApp = rec {
  label = "hello";
  port = 3000;

  cpu = if type == "dev"
        then "100m"
        else "1000m";

  imagePolicy = if type == "dev"
                then "Never"
                else "IfNotPresent";

  env = [
    { name = "HOST"; value = "0.0.0.0"; }
    { name = "PORT"; value = "${toString port}"; }
  ];
};
```
These values are later used to write the deployment of the individual services. The neat thing is: the “joiner” service needs to know (for example) which ports the other services run on. But we don't need to duplicate that data, because we can reuse it from the definitions of said services:
```nix
joinerApp = rec {
  label = "joiner";
  # ...

  env = [
    # ...
    { name = "HELLO_SERVICE"; value = "${helloApp.label}:${toString helloApp.port}"; }
    { name = "WORLD_SERVICE"; value = "${worldApp.label}:${toString worldApp.port}"; }
  ];
};
```
Using these values, we can then implement the deployments of each of the apps with all the bells and whistles, as well as implement the services in the cluster.
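A deployment entry could look roughly like this (a hedged sketch following the kubenix module layout from the tweag.io article; attribute names may differ between kubenix versions):

```nix
# Hypothetical sketch of a kubenix deployment built from the app values above.
kubernetes.resources.deployments."${helloApp.label}" = {
  metadata.labels.app = helloApp.label;
  spec = {
    replicas = 1;
    selector.matchLabels.app = helloApp.label;
    template = {
      metadata.labels.app = helloApp.label;
      spec.containers."${helloApp.label}" = {
        image = "hello-service:latest";
        imagePullPolicy = helloApp.imagePolicy;
        env = helloApp.env;
        resources.limits.cpu = helloApp.cpu;
        ports = [ { containerPort = helloApp.port; } ];
      };
    };
  };
};
```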
Last, but not least, we need to add an ingress to route traffic from outside the cluster to the “joiner” service.
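The ingress then routes everything to the “joiner” service; a rule for that might look roughly like this (again a hedged sketch – the exact resource schema depends on your kubenix and kubernetes versions):

```nix
# Hypothetical sketch: route all incoming HTTP traffic to the joiner service.
kubernetes.resources.ingresses.joiner-ingress.spec.rules = [
  {
    http.paths = [
      {
        path = "/";
        backend = {
          serviceName = joinerApp.label;
          servicePort = joinerApp.port;
        };
      }
    ];
  }
];
```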
Deploying
That's all nice and dandy, you say, but we also need some mechanism to deploy the whole thing?
That's where nix comes in handy! What we can do is let nix write a script that handles all the deployment steps for us. That script gets the images we just built using docker as input and copies them into the kind cluster when run. It also gets the generated configuration for the cluster, which it applies automatically as soon as the cluster is created (it also prints it as JSON for discoverability). Lastly, it ensures that an ingress controller is available in the cluster, which is needed to route the incoming traffic to our “joiner” service.
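Conceptually, the generated script performs steps along these lines (a sketch of the idea, not the actual generated code; names and paths are assumptions):

```shell
# Hypothetical outline of what the nix-generated deploy script does.
kind create cluster --name kubenix-demo        # 1. start the cluster
docker load < "$helloImage"                    # 2. load the nix-built images...
kind load docker-image hello-service:latest    #    ...and copy them into kind
kubectl apply -f "$ingressController"          # 3. ensure an ingress controller
echo "$deployment"                             # 4. print the generated config...
echo "$deployment" | kubectl apply -f -        #    ...and apply it to the cluster
```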
Putting it all together
Finally, the `default.nix` file puts it all together. There are even more possibilities than the file implements (I never actually tried to run the `deploy-and-test` shell).
I simply used the shell implemented by the file and ran `deploy-to-kind` to start the kind instance and deploy everything inside.
Wrapping up
It took me one whole day to figure out all the details, but I finally got it working. This has been a really nice experience, and although kubenix seems to have been unmaintained for two years, it worked and was not too complex to use for this simple use case.
As always, nix is a pleasure to work with, as is Rust! I hope to be able to use both professionally at some point – let's see what the future brings ;–)