musicmatzes blog

The first movie I want to talk about is Riddick – actually, it is three movies. The main actor is Vin Diesel, who is one of my favourite actors in the action genre. I didn't know that Diesel had actually done a fantasy movie, because that's what “Riddick” is – an action+fantasy movie.

I wasn't aware of the Riddick movies until someone told me about them. And I fell in love within the first couple of minutes watching the first one (Pitch Black, 2000).

Diesel, best known for playing Dominic Toretto in Fast & Furious, plays Riddick, an infamous former mercenary and soldier (an “antihero”) who is held hostage on a space ship that crashes. Riddick escapes. You might think that the crew of the ship has a problem now, given that Riddick was held hostage and they might get killed by him. However, Riddick is far from being their biggest problem on the planet they crashed on.

That's the setup of the first movie, Pitch Black. The later two movies, “The Chronicles of Riddick” (2004) and “Riddick” (2013), continue that story line, and Diesel continues to impress in the role of Riddick.

The movies are rather dark (both in their theming and in their display of the environment), which is part of the plot in all three movies. Riddick remains an emotionless berserker throughout and stays in that role, which I really like. Most multi-part movies tend to give the main character a bit more emotion in the later parts, weakening the role of the heartless berserker. These movies do not.

I hope that in the fourth movie, which is not even announced yet, Diesel continues in that exact role.

#vindiesel #riddick #movie #scifi #action #fantasy #dark

On my blog, you find mostly technical articles about Rust, NixOS and stuff I fiddle around with. From time to time, there are some articles about my travels, or some social stuff like the Chaos Communication Congress or things like that.

But I want to add some articles in the future about movies and series I liked a lot. I hope you don't mind that, given that my audience (you!) is mostly technical readers (let's just face it: nerds are my audience).

I'm not the most skilled writer when it comes to non-technical stuff, I guess. I'm also not the biggest movie-nerd or geek, fwiw. I will blog about movies anyways.

Right now, the list of movies and series I want to see in the upcoming months, given that the pandemic is very much happening still, is rather long. It is mostly Fantasy and Sci-Fi stuff, with some action and thrillers mixed in. I'm not into horror movies at all, and only a little bit into romance stuff. So if you care about the same genres, you might want to follow what I post a bit. Also, I won't do in-depth reviews, but rather some more general impressions about the movie or series at hand. Think of it like a trailer in text-form, or like a short version of the “Plot” section in the wikipedia article of the movie/series.

That being said... let's go!

I am known for no longer being the biggest fan of #github, especially since #Microsoft acquired them for a shitload of money. But when I recently learned about the “Suggested Change” feature, I lost any remaining belief in them. How badly can one mess up a feature?

So, the “Suggested Change” feature, as I call it, provides a way to request changes on a pull request. A user can suggest that a snippet of code be altered in a way they think is appropriate, by selecting the line from the diff in the PR that needs to be changed and providing a replacement.

That replacement is then suggested to everyone who has write access on the source branch (most of the time that also includes maintainers of the target repository, because of a checked-by-default option in PRs) and can be applied by them.

That's everything. There can be discussion on the change, of course, but that's the whole feature. It's even somewhat useful! But the way GitHub implemented this is just a load of pure shit.

The first thing is: they make you write change suggestions for code in a Markdown editor! I mean... it's not like code editors in web browsers are a new thing, or even an uncommon thing. But GitHub thinks otherwise, and you're completely left alone with a non-monospaced font to figure out how many spaces (or tabs, anyone?) you need on your own! GitHub does not care! You want to fix the indentation of the code in there? Haha! Good luck with that! Oh, you accidentally suggested trailing whitespace? Well, GitHub cannot help you with that, because they don't know what a code editor is! In fact, your change suggestion is actually a markdown-formatted comment with the diff being a markdown code block. What the hell?

Had enough already?

Next thing: you cannot provide a sensible change description, elsewhere known as a commit message. You've probably never heard of that, GitHub, have you? Well, that's not entirely true though: the person who accepts your suggested change can. Yep, that's right! It's not the author of the diff who provides the commit message, but the committer. Nontrivial changes with “Update” as the message, anyone?

But even worse is that GitHub actually thinks that suggested changes should not even be patches. How full of shit can they be? They implemented a feature to suggest changes on a pull request, and these changes are NOT patches. There is no patch you can git-fetch, nothing you can git-cherry-pick or even git-merge on your own machine. All you can do is go to the website, click the “Apply suggested change” button, which creates new commits on your PR branch, and then fetch your own PR branch. There's no way to fetch the changes beforehand and review them locally, using your favorite tooling. This is the well-known Embrace-Extend-Extinguish bullshit that Microsoft has pulled for years!

My suggestion: if you can, run away from GitHub as fast as you can. This ship will sink at some point, either because the community recognizes how badly they are messing things up, or because Microsoft turns the whole thing into a real enterprise product: slow, complicated to use and only available with paid access. If you cannot leave GitHub at this point, for whatever reason, I suggest you gradually move away from it: make use of other git hosting providers, learn how to use alternatives, learn how to contribute via email and/or even roll your own git hosting – with gitolite and cgit it is almost trivial, and hosters that allow you to deploy such software exist – I suggest you have a look at uberspace for a really good and reasonably priced one (I am not and never have been affiliated with or paid by them for writing this).

How it could have been

You might ask how such a feature could have been implemented properly. Well, given that GitHub is a web service and wants to keep users on the platform for as long as possible, I would have implemented it as follows:

  • If you want to suggest changes, you get a monospace-ready, web-based code editor with syntax highlighting and maybe even a minimal autocompletion feature. The editor opens with your cursor at the position you initially clicked on in the changeset you are trying to alter.
  • You annotate your suggested change with your own commit message, or optionally use the “fixup! ” commit message prefix that can later be picked up by git rebase --autosquash.
  • Once you're done adding your suggestions to the diff in the PR, you submit all your individual patches and you get a branch that builds on top of the PR branch, for example named github.com/<your username>/<your repo clone>/<PR branch name>/suggestions-<N>.
  • The PR author gets notified about the suggested changes and can git-pull them from your fork properly, review them locally and push them to their PR if they see fit or filter out what they don't like.

Everyone would be totally happy with that. For your dummy-users, you could have buttons and knobs to do the whole thing. Still, your power-users would be satisfied because they have the power to use their own infrastructure and tooling.

But once again, GitHub fails to deliver.

I'm not the most active blogger out there, of course. But, and I take pride in that, I am a vim power user. And since the blogging software I use (and the one I used to use) both use Markdown for writing articles, you would assume that I use vim for writing blog articles, right?

Turns out, no. I've been experimenting with different markdown editors over the last couple of months and I must say, I think I found the one I like most. I could've used the web-based editor that ships with writefreely, but there are two problems with that: first of all, it is online. I want to be able to write my blog articles without needing an active internet connection. For example, while riding a train in Germany, you don't have internet most of the time, although it is getting better. Secondly, writing in the browser is not as distraction-free as it is with a dedicated app in fullscreen mode.

Either way...

What I want when editing Markdown

First of all, writing a blog article differs greatly from coding in one aspect: you're actually writing. When working with code, you often find yourself editing the code rather than writing new code. Of course, sometimes you implement new features and write a lot of code in one sitting, but nevertheless you don't write like you do when writing prose. You rather “construct”.

And thus, you don't need a “text editor” for writing blog articles, you need software that is made for writing prose. Plus, and that's what I value most: it should stay out of my way. That's the one feature that I would like to see in such a piece of software. No bells, no whistles, no automatic rendering, no highlighting except italic, bold, code sections and section headers. That's all I would like to see. Other than that, there should just be a cursor, and that cursor should not even blink.

So I tried to find software that implemented these features while still integrating nicely into my desktop (which is KDE Plasma on stable NixOS). I like a dark theme on my desktop, so it shouldn't be black text on a white background but rather white text on a dark-greyish background, matching my desktop theme if possible.

The first impression counts!

And as always, the first impression counts. I don't like to spend a lot of time selecting a new tool. I just want to fire it up and start working with it, optionally giving in to 5 or 10 – but not more – minutes of trying a few things to understand how the tool should be used. With the markdown editor, I only wanted to write, of course. So no more than 2 minutes of fiddling around.

So I fired up the NixOS search and asked it for “markdown editor”.

The first one I looked at was Apostrophe which seems to be nice, but GTK software. First impression: Maybe, but there might be something for KDE/QT, right?
From A, let's go to Z: zettlr. Unfortunately this looks like Apple software, and my notebook suddenly had one Firefox thread at 100% CPU usage when opening that website. Not a good impression either. Tab closed. marktext was up next. This one looks nice, but has a few features that I consider bloat: it renders diagrams and math formulas – I don't need this, especially because I paste the markdown into my blog anyways. Another tab closed.

You see what I'm getting at. The first impression really counts for me with these things. After all, I wanted something light and distraction-free.

Then, still browsing the NixOS search, I clicked on the “homepage” link for ghostwriter, which links to the project's GitHub site. There's also a GitHub-hosted website with screenshots, but I did not find it when I first searched for a tool.

So what I did is install it on my desktop and fire it up:

nix-shell -p ghostwriter --run ghostwriter

And I immediately liked what I saw.

There is indeed a live preview and an outline feature and even some more things I haven't even looked at yet. I was able to configure a dark theme in the settings within a few clicks, and when going fullscreen, it is as distraction-free as I need.

The decision was made

And that's what I use now. I've already prepared a rather long article (way over 3000 words, not yet published) with it and I enjoyed the experience working with it.

Publishing the blog article is nothing more than uploading the text content via CTRL-C, CTRL-V to my blog.

I am always in favor of strong typing (as opposed to “string typing” ;–)).

Just now, I was testing a CLI tool that I'm writing and I was wondering why my --something flag was not considered by the implementation. It turned out that the CLI specification (which was done with the awesome clap crate) specified the name of the argument as some_thing, while the code that parsed the argument and turned it into an action was using something.

And that's why we need strong typing. This error (or rather: bug) wouldn't have happened if the compiler had been able to enforce the types. But because this was merely a String, the compiler did not know anything about it, and so the bug was introduced to the code.
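To illustrate the kind of mismatch I mean, here is a minimal sketch (hypothetical tool and argument names, assuming clap's builder-style API from the 2.x era):

use clap::{App, Arg};

fn main() {
    // The CLI specification names the argument "some_thing" ...
    let matches = App::new("mytool")
        .arg(
            Arg::with_name("some_thing")
                .long("something")
                .takes_value(true),
        )
        .get_matches();

    // ... but the handler asks for "something". The compiler is perfectly happy,
    // clap silently returns None, and --something appears to be ignored.
    if let Some(value) = matches.value_of("something") {
        println!("doing the thing with {}", value);
    }
}

Using one shared constant for the argument name in both places would at least remove the duplicated string literal, even if that still isn't real type safety.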

You might say “Well, yes... you just have to know what you're doing and not mess up these things”. True. But consider this: two people working on the codebase, each one preparing a PR. The first PR changes the string, because it was non-optimal before. This could be a completely valid change. The second PR from the second person implements a new feature and relies on what is there: the string as it is on the master branch right now.

Both of these PRs are completely fine, they merge fine with master and they pass CI without any warnings. If they get merged (in any order), they do not result in conflicts and everything is fine – except that the second merge does break the tool.

Note that it does not even break the build! The code still builds fine, but because the strings do not match anymore, the tool just won't do the right thing in the case of the newly introduced feature!

This wouldn't happen if there was strong typing plus some nice bot-backed CI (for example bors).


Please note that I think clap is an awesome crate and working with it is always a pleasure. The case of the stringly-typed API is a known issue and there's work to improve the situation.

I've been a fan of the #ActivityPub stuff (or rather: the #fediverse) for a long time now, running a #mastodon account on @musicmatze@mastodon.technology for some time already, and I also have a #pixelfed account at @musicmatze@pixelfed.social.

So it is just a logical step to switch the #blogging software I use to an ActivityPub supporting solution as well.

Thus, as of today, this blog runs on writefreely.

Blogging software

I've used a lot of different blogging software until now. Most of the time it was static site generators, because this way I did not have to maintain (mainly update) the software running the blog. Because I host on uberspace, I am free to choose from a range of solutions though. In the very beginning, I was running ghost, as far as I can remember.

Lately, I wrote less and less on this blog, which is a shame. I hope that with this new solution I start writing more often, even though I have to have an internet connection while writing (possible workarounds like preparing locally with vim or some markdown editor exist, of course).

Importing old posts

All my old posts were already written in markdown, so importing was not that complicated. I had to apply some vim skills to my markdown files and then import them into #writefreely, and I had to adapt the timestamps. Also, the formatting of the imported articles sucks, and maybe some links are even broken.

TBH, that's not important enough to me to make the effort of fixing every single one (of the more than 200) articles.

#hugo

Today, I wrote a mastodon bot.

Shut up and show me the code!

Here you go.

The idea

My idea was to write a bot that fetches the latest master from a git repository, counts some commits and then posts a message to mastodon about what it counted.

Because I always complain about people pushing to the master branch of a big community git repository directly, I decided that this would be a perfect fit.

(Whether pushing to master directly is okay, and when it is not, is another topic and I won't discuss it here.)

The dependencies

Well, because I didn't want to implement everything myself, I started pulling in some dependencies:

  • log and env_logger for logging
  • structopt, toml, serde and config for argument parsing and config reading
  • anyhow because I don't care too much about error handling. It just has to work
  • getset for a bit cleaner code (not strictly necessary, tbh)
  • handlebars for templating the status message that will be posted
  • elefren as mastodon API crate
  • git2 for working with the git repository which the bot posts about

The Plan

How the bot should work was rather clear from the outset. First of all, it shouldn't be an always-running process. I wanted it to be as simple as possible, thus triggering it via a systemd timer should suffice. Next, it should only fetch the latest commits, so it should be able to work on an existing working clone of a repository. This way, we don't need another clone of a potentially huge repository on our disk. The path of the repository should of course not be hardcoded, and neither should the “upstream” remote name or the “master” branch name (because you might want to track a “release-xyz” branch, or because “master” was renamed to something else).

Also, it should be configurable how many hours of commits should be checked. Maybe the user wants to run this bot once a day, maybe once a week. Both are possible, of course. But if the user runs it once a day, they want to check only the commits of the last 24 hours. If they run it once a week, the last 168 hours would be more appropriate.

The message that gets posted should also not be hardcoded, but a template where the variables the bot counted are available.

All of the above goes into the configuration file the bot reads (which can be set via the --config option on the bot's CLI).

The configuration struct for the setup described above is rather trivial, as is the CLI setup.
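For illustration, a sketch of what these two could look like – the field names match the accessors used further down, but the actual definitions in the bot may differ:

use std::path::PathBuf;

#[derive(Debug, serde::Deserialize, getset::Getters)]
#[getset(get = "pub")]
struct Conf {
    /// Path of the working clone the bot operates on
    repository_path: PathBuf,
    /// Name of the remote to fetch from, e.g. "origin"
    origin_remote_name: String,
    /// Name of the branch to count commits on, e.g. "master"
    master_branch_name: String,
    /// How many hours of history to check
    hours_to_check: i64,
    /// Path to the (separate) file holding the mastodon credentials
    mastodon_data: PathBuf,
    /// ISO 639-1 code of the language the status is posted in
    status_language: String,
    /// Handlebars template for the status message
    status_template: String,
}

#[derive(Debug, structopt::StructOpt, getset::Getters)]
#[getset(get = "pub")]
struct Opts {
    /// Path to the configuration file
    #[structopt(long = "config")]
    config: PathBuf,
}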

The setup

The first thing the bot has to do is read the command line and the configuration after initializing the logger, which is a no-brainer, too:

fn main() -> Result<()> {
    env_logger::init();
    log::debug!("Logger initialized");

    let opts = Opts::from_args_safe()?;
    let config: Conf = {
        let mut config = ::config::Config::default();

        config
            .merge(::config::File::from(opts.config().to_path_buf()))?
            .merge(::config::Environment::with_prefix("COMBOT"))?;
        config.try_into()?
    };
    let mastodon_data: elefren::Data = toml::de::from_str(&std::fs::read_to_string(config.mastodon_data())?)?;

The mastodon data is read from a configuration file that is different from the main configuration file, because it may contain sensitive data, and a user who wants to put their bot configuration into a (public?) git repository might not want to include this data. That's why I opted for another file here; its format is described in the configuration example file (next to the setting for where the file actually is).

Next, the mastodon client has to be setup and the repository has to be opened:

    let client = elefren::Mastodon::from(mastodon_data);
    let status_language = elefren::Language::from_639_1(config.status_language())
        .ok_or_else(|| anyhow!("Could not parse status language code: {}", config.status_language()))?;
    log::debug!("config parsed");

    let repo = git2::Repository::open(config.repository_path())?;
    log::debug!("Repo opened successfully");

which is rather trivial, too.

The Calculations

Then, we fetch the appropriate remote branch and count the commits:

    let _ = fetch_main_remote(&repo, &config)?;
    log::debug!("Main branch fetched successfully");

    let (commits, merges, nonmerges) = count_commits_on_main_branch(&repo, &config)?;
    log::debug!("Counted commits successfully");

    log::info!("Commits    = {}", commits);
    log::info!("Merges     = {}", merges);
    log::info!("Non-Merges = {}", nonmerges);

The functions called in this snippet will be described later on. Just assume they work for now, and let's move on to the status-posting part of the bot.

First of all, we use the variables to compute the status message using the template from the configuration file.

    {
        let status_text = {
            let mut hb = handlebars::Handlebars::new();
            hb.register_template_string("status", config.status_template())?;
            let mut data = std::collections::BTreeMap::new();
            data.insert("commits", commits);
            data.insert("merges", merges);
            data.insert("nonmerges", nonmerges);
            hb.render("status", &data)?
        };

Handlebars is a perfect fit for that job, as it is rather trivial to use while still being a very powerful templating language. The user could, for example, even add some conditions to their template, like: if there are no commits at all, the status message could just say “I'm a lonely bot, because nobody commits to master these days...” or something like that.
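For example, a (purely hypothetical) status_template value could look like this – assuming the #if helper treats a count of 0 as false:

// Hypothetical template; it only uses the "commits", "merges" and "nonmerges"
// variables registered above.
const EXAMPLE_TEMPLATE: &str = "{{#if commits}}\
{{commits}} commits were pushed to master: {{merges}} merges and {{nonmerges}} direct pushes.\
{{else}}\
I'm a lonely bot, because nobody commits to master these days...\
{{/if}}";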

Next, we build the status object we pass to mastodon, and post it.

        let status = elefren::StatusBuilder::new()
            .status(status_text)
            .language(status_language)
            .build()
            .expect("Failed to build status");

        let status = client.new_status(status)
            .expect("Failed to post status");
        if let Some(url) = status.url.as_ref() {
            log::info!("Status posted: {}", url);
        } else {
            log::info!("Status posted, no url");
        }
        log::debug!("New status = {:?}", status);
    }

    Ok(())
} // main()

Some logging is added as well, of course.

And that's the whole main function!

Fetching the repository

But we are not done yet. First of all, we need the function that fetches the remote repository.

Thanks to the well-known git2 library, this part is rather trivial to implement as well:

fn fetch_main_remote(repo: &git2::Repository, config: &Conf) -> Result<()> {
    log::debug!("Fetch: {} / {}", config.origin_remote_name(), config.master_branch_name());
    repo.find_remote(config.origin_remote_name())?
        .fetch(&[config.master_branch_name()], None, None)
        .map_err(Error::from)
}

Here we have a function that takes a reference to the repository as well as a reference to our Conf object. After some logging, we find the appropriate remote in our repository and simply call fetch for it. In case of an Err(_), we map that to our anyhow::Error type and return it, because the caller should handle it.

Counting the commits

Counting the commits is the last part we need to implement.

fn count_commits_on_main_branch(repo: &git2::Repository, config: &Conf) -> Result<(usize, usize, usize)> {

The function, like the fetch_main_remote function, takes a reference to the repository as well as a reference to the Conf object of our program. In case of success, it returns a tuple with three elements. I did not add strong typing here, because the codebase is rather small (less than 160 lines overall), so there's no need to be very explicit about the types here.

Just keep in mind that the first of the three values is the number of all commits, the second is the number of merges and the last is the number of non-merges.

That also means:

tuple.0 = tuple.1 + tuple.2
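If one wanted the strong typing anyway, a tiny struct would do (hypothetical, not part of the actual bot):

// Hypothetical stronger-typed return value for count_commits_on_main_branch()
struct CommitCounts {
    commits: usize,
    merges: usize,
    nonmerges: usize,
}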

Next, let's have a variable that holds the branch name including the remote, like we're used to from git itself (this is later required for git2). Also, we need to calculate the lowest timestamp we still consider. Because our configuration file specifies this in hours rather than seconds, we simply multiply by 60 * 60 here.

    let branchname = format!("{}/{}", config.origin_remote_name(), config.master_branch_name());
    let minimum_time_epoch = chrono::offset::Local::now().timestamp() - (config.hours_to_check() * 60 * 60);

    log::debug!("Branch to count     : {}", branchname);
    log::debug!("Earliest commit time: {:?}", minimum_time_epoch);

Next, we need to instruct git2 to create a Revwalk object for us:

    let revwalk_start = repo
        .find_branch(&branchname, git2::BranchType::Remote)?
        .get()
        .peel_to_commit()?
        .id();

    log::debug!("Starting at: {}", revwalk_start);

    let mut rw = repo.revwalk()?;
    rw.simplify_first_parent()?;
    rw.push(revwalk_start)?;

That can be used to iterate over the history of a branch, starting at a certain commit. But before we can do that, we need to actually find that commit, which is the first part of the above snippet. Then, we create a Revwalk object, configure it to consider only the first parent (because that's what we care about) and push the rev to start walking from it.

The last bit of the function implements the actual counting.

    let mut commits = 0;
    let mut merges = 0;
    let mut nonmerges = 0;

    for rev in rw {
        let rev = rev?;
        let commit = repo.find_commit(rev)?;
        log::trace!("Found commit: {:?}", commit);

        if commit.time().seconds() < minimum_time_epoch {
            log::trace!("Commit too old, stopping iteration");
            break;
        }
        commits += 1;

        let is_merge = commit.parent_ids().count() > 1;
        log::trace!("Merge: {:?}", is_merge);

        if is_merge {
            merges += 1;
        } else {
            nonmerges += 1;
        }
    }

    log::trace!("Ready iterating");
    Ok((commits, merges, nonmerges))
}

This is done the simple way, without making use of the excellent iterator API. First, we create our variables for counting, and then we use the Revwalk object and iterate over it. For each rev, we unwrap it using the ? operator and then ask the repo to give us the corresponding commit. We then check whether the time of the commit is before our minimum time, and if it is, we abort the iteration. If it is not, we continue and count the commit. We then check whether the commit has more than one parent, because that is what makes a commit a merge commit, and increase the appropriate variable.

Last but not least, we return our findings to the caller.

Conclusion

And this is it! It was a rather nice journey to implement this bot. There isn't too much that can fail here; some calculations might wrap and produce wrong numbers. A clippy run would possibly find some things that could be improved, of course (feel free to submit patches).

If you want to run this bot on your own instance and for your own repositories, make sure to check the README file first. Also, feel free to ask questions about this bot and of course, you're welcome to send patches (make sure to --signoff your commits).

And now, enjoy the first post of the bot.

tags: #mastodon #bot #rust

Today, I challenged myself to write a prometheus exporter for MPD in Rust.

Shut up and show me the code!

Here you go and here you go for submitting patches.

The challenge

I recently started monitoring my server with prometheus and grafana. I am in no way a professional user of these pieces of software, but I slowly got everything up and running. I learned about time series databases at university, so the basic concept of prometheus was not new to me. Grafana was, though. I then started learning about prometheus exporters and how they work, and managed to set up node exporters for all my devices and import their metrics into a nice grafana dashboard I downloaded from the official website.

I figured that writing an exporter would make me understand the whole thing even better. So what would be better than exporting music data to my prometheus and plotting it with grafana? Especially because my nickname online is “musicmatze”, right?

So I started writing a prometheus exporter for MPD. And because my language of choice is Rust, I wrote it in Rust. Rust has good libraries available for everything I needed to do to export basic MPD metrics to prometheus and even a prometheus exporter library exists!

The libraries I decided to use

Note that this article was written using prometheus-mpd-exporter v0.1.0 of the prometheus-mpd-exporter code. The current codebase might differ, but this was the first working implementation.

So, the scope of my idea was set. Of course, I needed a library to talk to my music player daemon. And because I would essentially be writing a kind of web server, an async library would be the better fit. Thankfully, async_mpd exists.

Next, I needed a prometheus helper library. The examples in this library work with hyper. I was not able to implement my idea with hyper though (because of some weird borrowing error), but thankfully, actix-web worked just fine.

Besides that I used a bunch of convenience libraries:

  • anyhow and thiserror for error handling
  • env_logger and log for logging
  • structopt for CLI parsing
  • getset, parse-display and itertools to be able to write less code

The first implementation

The first implementation took me about four hours to write, because I had to understand the actix-web infrastructure first (and because I tried it with hyper in the first place, which did not work and ate about three of those four hours).

The boilerplate of the program includes

  • Defining an ApplicationError type for easy passing-around of errors that happen during the runtime of the program
  • Defining an Opt as a commandline interface definition using structopt
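A plausible sketch of that boilerplate could look like this (the exact error variants and defaults are assumptions; only the field names match the usage below):

// Error type collecting everything that can go wrong at runtime
#[derive(Debug, thiserror::Error)]
pub enum ApplicationError {
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),

    // assuming async_mpd exposes an Error type that can be converted like this
    #[error("MPD error: {0}")]
    Mpd(#[from] async_mpd::Error),
}

// Commandline interface of the exporter
#[derive(Debug, structopt::StructOpt)]
pub struct Opt {
    /// Address for the exporter to listen on
    #[structopt(long, default_value = "127.0.0.1")]
    pub bind_addr: String,

    /// Port for the exporter to listen on
    #[structopt(long, default_value = "9123")]
    pub bind_port: u16,

    /// Address of the MPD server to scrape
    #[structopt(long, default_value = "127.0.0.1")]
    pub mpd_server_addr: String,

    /// Port of the MPD server to scrape
    #[structopt(long, default_value = "6600")]
    pub mpd_server_port: u16,
}

With that boilerplate in place, the main() function starts like this: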
#[actix_web::main]
async fn main() -> Result<(), ApplicationError> {
    let _ = env_logger::init();
    log::info!("Starting...");
    let opt = Opt::from_args();

    let prometheus_bind_addr = format!("{}:{}", opt.bind_addr, opt.bind_port);
    let mpd_connect_string = format!("{}:{}", opt.mpd_server_addr, opt.mpd_server_port);

The main() function then sets up the logging and parses the commandline arguments. Thanks to env_logger and structopt, that's easy. The main() function also acts as the actix_web::main function and is async because of that. It also returns a Result<(), ApplicationError>, so I can easily fail during the setup phase of the program.

Next, I needed to set up the connection to MPD and wrap it in a Mutex, so it can be shared between request handlers.

    log::debug!("Connecting to MPD = {}", mpd_connect_string);
    let mpd = async_mpd::MpdClient::new(&*mpd_connect_string)
        .await
        .map(Mutex::new)?;

    let mpd = web::Data::new(mpd);

And then set up the HttpServer instance for actix-web and run it.

    HttpServer::new(move || {
        App::new()
            .app_data(mpd.clone()) // add shared state
            .wrap(middleware::Logger::default())
            .route("/", web::get().to(index))
            .route("/metrics", web::get().to(metrics))
    })
    .bind(prometheus_bind_addr)?
    .run()
    .await
    .map_err(ApplicationError::from)
} // end of main()

Now comes the fun part, though. At this point, the connection to MPD has been set up. In the above snippet, I add routes to the HttpServer for a basic index endpoint as well as for the /metrics endpoint prometheus fetches the metrics from.

Let's have a look at the index handler first, to get a basic understanding of how it works:

async fn index(_: web::Data<Mutex<MpdClient>>, _: HttpRequest) -> impl Responder {
    HttpResponse::build(StatusCode::OK)
        .content_type("text/text; charset=utf-8")
        .body(String::from("Running"))
}

This function gets called every time someone accesses the service without specifying an endpoint, for example curl localhost:9123 would result in this function being called.

Here, I can get the web::Data<Mutex<MpdClient>> object instance that actix-web handles for us as well as a HttpRequest object to get information about the request itself. Because I don't need this data here, the variables are not bound (_). I added them to be able to extend this function later on easily.

I return a simple 200 (that's the StatusCode::OK here) with a simple Running body. curling would result in a simple response:

$ curl 127.0.0.1:9123
Running

Now, let's have a look at the /metrics endpoint. First of all, the signature of the function is the same:

async fn metrics(mpd_data: web::Data<Mutex<MpdClient>>, _: HttpRequest) -> impl Responder {
    match metrics_handler(mpd_data).await {
        Ok(text) => {
            HttpResponse::build(StatusCode::OK)
                .content_type("text/text; charset=utf-8")
                .body(text)
        }

        Err(e) => {
            HttpResponse::build(StatusCode::INTERNAL_SERVER_ERROR)
                .content_type("text/text; charset=utf-8")
                .body(format!("{}", e))
        }
    }
}

but here, we bind the mpd client object to mpd_data, because we want to actually use that object. We then call a function metrics_handler() with that object, wait for the result (because that function itself is async, too), and match the result. If the result is Ok(_), we get the result text and return a 200 with the text as the body. If the result is an error, which means that fetching the data from MPD somehow resulted in an error, we return an internal server error (500) and the error message as body of the response.

Now, to the metrics_handler() function, which is where the real work happens.

async fn metrics_handler(mpd_data: web::Data<Mutex<MpdClient>>) -> Result<String, ApplicationError> {
    let mut mpd = mpd_data.lock().unwrap();
    let stats = mpd.stats().await?;

    let instance = String::new(); // TODO

First of all, we extract the actual MpdClient object from the web::Data<Mutex<_>> wrapper. Then, we ask MPD to get some stats() and wait for the result.

After that, we create a variable we don't fill yet, which we later push in the release without solving the “TODO” marker and when we blog about what we did, we feel ashamed about it.

Next, we create Metric objects for each metric we record from MPD and render all of them into one big String object.

    let res = vec![
        Metric::new("mpd_uptime"      , stats.uptime      , "The uptime of mpd", &instance).into_metric()?,
        Metric::new("mpd_playtime"    , stats.playtime    , "The playtime of the current playlist", &instance).into_metric()?,
        Metric::new("mpd_artists"     , stats.artists     , "The number of artists", &instance).into_metric()?,
        Metric::new("mpd_albums"      , stats.albums      , "The number of albums", &instance).into_metric()?,
        Metric::new("mpd_songs"       , stats.songs       , "The number of songs", &instance).into_metric()?,
        Metric::new("mpd_db_playtime" , stats.db_playtime , "The database playtime", &instance).into_metric()?,
        Metric::new("mpd_db_update"   , stats.db_update   , "The updates of the database", &instance).into_metric()?,
    ]
    .into_iter()
    .map(|m| {
        m.render()
    })
    .join("\n");

    log::debug!("res = {}", res);
    Ok(res)
}

Lastly, we return that String object from our handler implementation.

The Metric type is my own implementation; we'll focus on that now. It helps a bit with the interface of the prometheus_exporter_base API.

But first, I need to explain the Metric type:

pub struct Metric<'a, T: IntoNumMetric> {
    name: &'static str,
    value: T,
    description: &'static str,
    instance: &'a str,
}

The Metric type is a type that holds a name for a metric, its value and some description (and the aforementioned irrelevant instance). But because the metrics we collect can be of different types (for example an 8-bit unsigned integer u8 or a 32-bit unsigned integer u32), I made that type generic over the value type. The type of the metric value must implement an IntoNumMetric trait, though. That trait is a simple helper trait:

use num_traits::Num;
pub trait IntoNumMetric {
    type Output: Num + Display + Debug;

    fn into_num_metric(self) -> Self::Output;
}

And I implemented it for std::time::Duration, u8, u32 and i32 – the implementation itself is trivial and I won't show it here.
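Just to illustrate what such an implementation could look like (this is a sketch, not the actual code – the real implementations may differ, for example in which integer type a Duration is mapped to):

// Plain integers are already numeric, so they map to themselves
impl IntoNumMetric for u32 {
    type Output = u32;

    fn into_num_metric(self) -> Self::Output {
        self
    }
}

// Durations are exported as whole seconds
impl IntoNumMetric for std::time::Duration {
    type Output = u64;

    fn into_num_metric(self) -> Self::Output {
        self.as_secs()
    }
}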

Now, I was able to implement the Metric::into_metric() function shown above:

impl<'a, T: IntoNumMetric + Debug> Metric<'a, T> {
    // Metric::new() implementation, hidden here

    pub fn into_metric<'b>(self) -> Result<PrometheusMetric<'b>> {
        let instance = PrometheusInstance::new()
            .with_label("instance", self.instance)
            .with_value(self.value.into_num_metric())
            .with_current_timestamp()
            .map_err(Error::from)?;

        let mut m = PrometheusMetric::new(self.name, MetricType::Counter, self.description);
        m.render_and_append_instance(&instance);
        Ok(m)
    }
}

This function is used for converting a Metric object into the appropriate PrometheusMetric object from prometheus_exporter_base.

The implementation is, of course, also generic over the type the Metric object holds. A PrometheusInstance is created, a label “instance” is added (empty, you know why... :–( ). Then, the value is added to that instance using the conversion from the IntoNumMetric trait. The current timestamp is added as well, or an error is returned if that fails.

Last but not least, a new PrometheusMetric object is created with the appropriate name and description, and the instance is rendered to it.

And that's it!

Deploying

The code is there now. But of course, I still needed to deploy this to my hosts and make it available in my prometheus and grafana instances.

Because I use NixOS, I wrote a nix package definition and a nix service definition for it, making the endpoint available to my prometheus instance via my wireguard network.

After that, I was able to add queries to my grafana instance, for example:

mpd_db_playtime / 60 / 60 / 24

to display the DB playtime of an instance of my MPD in days.

I'm not yet very proficient in grafana and the query language, and the service implementation is rather minimal, so there aren't that many metrics yet.

Either way, it works!

A basic dashboard for MPD stats

Next steps and closing words

The next steps are quite simple. First of all, I want to make more stats available to prometheus. Right now, only the basic statistics of the database are exported.

The async_mpd crate makes a lot of other status information available.

Also, I want to get better with grafana queries and make some nice-looking graphs for my dashboard.

Either way, that challenge took me longer than I anticipated in the first place (“I can hack this in 15 minutes” – famous last words)! But it was fun nonetheless!

The outcome of this little journey is on crates.io and I will also submit a PR to nixpkgs to make it available there, too.

If you want to contribute patches to the sourcecode, which I encourage you to do, feel free to send me patches!

tags: #prometheus #grafana #rust #mpd #music

Happy new year!

I just managed to implement syncthing monitoring for my prometheus and grafana instance, so I figured I'd write a short blog post about it.

Note: This post is written for prometheus-json-exporter pre-0.1.0 and the configuration file format changed since.

Now, as you've read in the note above, I managed to do this using the prometheus-json-exporter. Syncthing has a status page that can be accessed with

$ curl localhost:22070/status

if enabled. This can then be scraped by prometheus using the prometheus-json-exporter mentioned above, with the following configuration for mapping the values from the JSON to prometheus metrics:

- name: syncthing_buildDate
  path: $.buildDate
  help: Value of buildDate

- name: syncthing_buildHost
  path: $.buildHost
  help: Value of buildHost

- name: syncthing_buildUser
  path: $.buildUser
  help: Value of buildUser

- name: syncthing_bytesProxied
  path: $.bytesProxied
  help: Value of bytesProxied

- name: syncthing_goArch
  path: $.goArch
  help: Value of goArch

- name: syncthing_goMaxProcs
  path: $.goMaxProcs
  help: Value of goMaxProcs

- name: syncthing_goNumRoutine
  path: $.goNumRoutine
  help: Value of goNumRoutine

- name: syncthing_goOS
  path: $.goOS
  help: Value of goOS

- name: syncthing_goVersion
  path: $.goVersion
  help: Value of goVersion

- name: syncthing_kbps10s1m5m15m30m60m
  path: $.kbps10s1m5m15m30m60m
  help: Value of kbps10s1m5m15m30m60m
  type: object
  values:
    time_10_sec: $[0]
    time_1_min: $[1]
    time_5_min: $[2]
    time_15_min: $[3]
    time_30_min: $[4]
    time_60_min: $[5]

- name: syncthing_numActiveSessions
  path: $.numActiveSessions
  help: Value of numActiveSessions

- name: syncthing_numConnections
  path: $.numConnections
  help: Value of numConnections

- name: syncthing_numPendingSessionKeys
  path: $.numPendingSessionKeys
  help: Value of numPendingSessionKeys

- name: syncthing_numProxies
  path: $.numProxies
  help: Value of numProxies

- name: syncthing_globalrate
  path: $.options.global-rate
  help: Value of options.global-rate

- name: syncthing_messagetimeout
  path: $.options.message-timeout
  help: Value of options.message-timeout

- name: syncthing_networktimeout
  path: $.options.network-timeout
  help: Value of options.network-timeout

- name: syncthing_persessionrate
  path: $.options.per-session-rate
  help: Value of options.session-rate

- name: syncthing_pinginterval
  path: $.options.ping-interval
  help: Value of options.ping-interval

- name: syncthing_startTime
  path: $.startTime
  help: Value of startTime

- name: syncthing_uptimeSeconds
  path: $.uptimeSeconds
  help: Value of uptimeSeconds

- name: syncthing_version
  path: $.version
  help: Value of version

When configured properly, one is then able to draw graphs using the syncthing-exported data in grafana.

There's nothing more to it.

tags: #nixos #grafana #prometheus #syncthing

My between-the-years project was trying to use my old Thinkpad to run some local services, for example MPD. I thought that the Thinkpad did not even have a drive anymore, and was surprised to find a 256GB SSD inside of it – with NixOS still installed!

AND RUNNING!

So, after almost two years, I booted my old Thinkpad, entered the crypto password for the hard drive, and got greeted with a login screen and an i3 instance. Firefox asked whether I wanted to restore the old session... everything just worked.

I was amazed.

Well, this is not the crazy thing I wanted to write about here. The problem now was: I update and deploy my devices using krops nowadays. This old installation had root login disabled, but root login is required for krops to work...

But, because NixOS is awesome... I did nothing more than check out the git commit the latest generation on the Thinkpad was booted from, modify some settings for the ssh server and the root user... rebuild the system and switch to the new build... and then start the deployment of the new NixOS 20.09 installation using krops.

All without hassle. I might run out of disk space now, because this deploys a full KDE Plasma 5 installation, but honestly that would surprise me... there should be enough space. I am curious, though, whether KDE Plasma 5 runs on the device. We'll see...

tags: #nixos #desktop