musicmatzes blog


Today, I challenged myself to write a prometheus exporter for MPD in Rust.

Shut up and show me the code!

Here you go and here you go for submitting patches.

The challenge

I recently started monitoring my server with prometheus and grafana. I am in no way a professional user of these pieces of software, but I slowly got everything up and running. I learned about timeseries databases at university, so the basic concept of prometheus was not new to me. Grafana was, though. I then started learning about prometheus exporters and how they work, and managed to set up node exporters for all my devices and import their metrics into a nice grafana dashboard I downloaded from the official website.

I figured that writing an exporter would make me understand the whole thing even better. So what would be better than exporting music data to my prometheus instance and plotting it with grafana? Especially since my nickname online is “musicmatze”, right?

So I started writing a prometheus exporter for MPD. And because my language of choice is Rust, I wrote it in Rust. Rust has good libraries available for everything I needed to export basic MPD metrics to prometheus, and even a prometheus exporter library exists!

The libraries I decided to use

Note that this article was written using v0.1.0 of the prometheus-mpd-exporter code. The current codebase might differ, but this was the first working implementation.

So, the scope of my idea was set. Of course, I needed a library to talk to my music player daemon. And because I would essentially be writing a kind of web server, an async library would be the better fit. Thankfully, async_mpd exists.

Next, I needed a prometheus helper library. The examples in this library work with hyper. I was not able to implement my idea with hyper though (because of some weird borrowing error), but thankfully, actix-web worked just fine.

Besides that I used a bunch of convenience libraries:

  • anyhow and thiserror for error handling
  • env_logger and log for logging
  • structopt for CLI parsing
  • getset, parse-display and itertools to be able to write less code

The first implementation

The first implementation took me about four hours to write, because I had to understand the actix-web infrastructure first (and because I tried it with hyper in the first place, which ate about three of those four hours).

The boilerplate of the program includes:

  • Defining an ApplicationError type for easy passing-around of errors that happen during the runtime of the program
  • Defining an Opt as a commandline interface definition using structopt
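Both of these are straightforward. For illustration, a minimal sketch of the Opt definition could look like this; the field names are inferred from their use in main() below, while the flag names and default values are assumptions:

use structopt::StructOpt;

#[derive(Debug, StructOpt)]
struct Opt {
    /// Address the exporter binds its HTTP server to (default is an assumption)
    #[structopt(long, default_value = "127.0.0.1")]
    bind_addr: String,

    /// Port the exporter binds its HTTP server to
    #[structopt(long, default_value = "9123")]
    bind_port: u16,

    /// Address of the MPD instance to scrape
    #[structopt(long, default_value = "localhost")]
    mpd_server_addr: String,

    /// Port of the MPD instance to scrape
    #[structopt(long, default_value = "6600")]
    mpd_server_port: u16,
}

With that in place, the main() function starts like this: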
#[actix_web::main]
async fn main() -> Result<(), ApplicationError> {
    let _ = env_logger::init();
    log::info!("Starting...");
    let opt = Opt::from_args();

    let prometheus_bind_addr = format!("{}:{}", opt.bind_addr, opt.bind_port);
    let mpd_connect_string = format!("{}:{}", opt.mpd_server_addr, opt.mpd_server_port);

The main() function then sets up the logging and parses the commandline arguments. Thanks to env_logger and structopt, that's easy. The main() function also acts as the actix_web::main function and is async because of that. It also returns a Result<(), ApplicationError>, so I can easily fail during the setup phase of the program.

Next, I needed to set up the connection to MPD and wrap it in a Mutex, so it can be shared between request handlers.

    log::debug!("Connecting to MPD = {}", mpd_connect_string);
    let mpd = async_mpd::MpdClient::new(&*mpd_connect_string)
        .await
        .map(Mutex::new)?;

    let mpd = web::Data::new(mpd);

And then set up the HttpServer instance for actix-web, and run it.

    HttpServer::new(move || {
        App::new()
            .app_data(mpd.clone()) // add shared state
            .wrap(middleware::Logger::default())
            .route("/", web::get().to(index))
            .route("/metrics", web::get().to(metrics))
    })
    .bind(prometheus_bind_addr)?
    .run()
    .await
    .map_err(ApplicationError::from)
} // end of main()

Now comes the fun part, though. With the connection to MPD set up, the snippet above adds routes to the HttpServer for a basic index endpoint as well as for the /metrics endpoint that prometheus fetches the metrics from.

Let's have a look at the index handler first, to get a basic understanding of how it works:

async fn index(_: web::Data<Mutex<MpdClient>>, _: HttpRequest) -> impl Responder {
    HttpResponse::build(StatusCode::OK)
        .content_type("text/text; charset=utf-8")
        .body(String::from("Running"))
}

This function gets called every time someone accesses the service without specifying an endpoint; for example, curl localhost:9123 would result in this function being called.

Here, I can get the web::Data<Mutex<MpdClient>> object instance that actix-web handles for us, as well as an HttpRequest object to get information about the request itself. Because I don't need this data here, the variables are not bound (_). I added them to be able to extend this function easily later on.

I return a 200 (that's the StatusCode::OK here) with a plain Running body. curling it results in a simple response:

$ curl 127.0.0.1:9123
Running

Now, let's have a look at the /metrics endpoint. First of all, the signature of the function is the same:

async fn metrics(mpd_data: web::Data<Mutex<MpdClient>>, _: HttpRequest) -> impl Responder {
    match metrics_handler(mpd_data).await {
        Ok(text) => {
            HttpResponse::build(StatusCode::OK)
                .content_type("text/text; charset=utf-8")
                .body(text)
        }

        Err(e) => {
            HttpResponse::build(StatusCode::INTERNAL_SERVER_ERROR)
                .content_type("text/text; charset=utf-8")
                .body(format!("{}", e))
        }
    }
}

But here, we bind the mpd client object to mpd_data, because we want to actually use it. We then call a function metrics_handler() with that object, wait for the result (because that function is itself async, too), and match on it. If the result is Ok(_), we get the result text and return a 200 with the text as the body. If the result is an error, meaning that fetching the data from MPD somehow failed, we return an internal server error (500) with the error message as the body of the response.

Now, to the metrics_handler() function, which is where the real work happens.

async fn metrics_handler(mpd_data: web::Data<Mutex<MpdClient>>) -> Result<String, ApplicationError> {
    let mut mpd = mpd_data.lock().unwrap();
    let stats = mpd.stats().await?;

    let instance = String::new(); // TODO

First of all, we extract the actual MpdClient object from the web::Data<Mutex<_>> wrapper. Then, we ask MPD for some stats() and wait for the result.

After that, we create a variable we don't fill yet, which we later push in the release without resolving the “TODO” marker, and which we feel a bit ashamed about when blogging about what we did.

Next, we create Metric objects for each metric we record from MPD and render all of them into one big String object.

    let res = vec![
        Metric::new("mpd_uptime"      , stats.uptime      , "The uptime of mpd", &instance).into_metric()?,
        Metric::new("mpd_playtime"    , stats.playtime    , "The playtime of the current playlist", &instance).into_metric()?,
        Metric::new("mpd_artists"     , stats.artists     , "The number of artists", &instance).into_metric()?,
        Metric::new("mpd_albums"      , stats.albums      , "The number of albums", &instance).into_metric()?,
        Metric::new("mpd_songs"       , stats.songs       , "The number of songs", &instance).into_metric()?,
        Metric::new("mpd_db_playtime" , stats.db_playtime , "The database playtime", &instance).into_metric()?,
        Metric::new("mpd_db_update"   , stats.db_update   , "The updates of the database", &instance).into_metric()?,
    ]
    .into_iter()
    .map(|m| m.render())
    .join("\n");

    log::debug!("res = {}", res);
    Ok(res)
}

Lastly, we return that String object from our handler implementation.
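For illustration, curling the /metrics endpoint then yields the standard prometheus exposition format, roughly like this (the values and the timestamp here are made up):

$ curl 127.0.0.1:9123/metrics
# HELP mpd_uptime The uptime of mpd
# TYPE mpd_uptime counter
mpd_uptime{instance=""} 3600 1609455600000
# HELP mpd_artists The number of artists
# TYPE mpd_artists counter
mpd_artists{instance=""} 42 1609455600000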

The Metric type is my own implementation, and we'll focus on it now. It helps a bit with the prometheus_exporter_base API.

But first, I need to explain the Metric type:

pub struct Metric<'a, T: IntoNumMetric> {
    name: &'static str,
    value: T,
    description: &'static str,
    instance: &'a str,
}

The Metric type holds a name for a metric, its value, and some description (and the aforementioned irrelevant instance). But because the metrics we collect can be of different types (for example an 8-bit unsigned integer u8 or a 32-bit unsigned integer u32), I made the type generic over the value. The type of the metric value must implement an IntoNumMetric trait, though. That trait is a simple helper trait:

use std::fmt::{Debug, Display};
use num_traits::Num;

pub trait IntoNumMetric {
    type Output: Num + Display + Debug;

    fn into_num_metric(self) -> Self::Output;
}

And I implemented it for std::time::Duration, u8, u32 and i32 – the implementation itself is trivial and I won't show it here.
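For reference, a sketch of what such an implementation boils down to; the integer implementations are identity conversions, and mapping a Duration to whole seconds is an assumption on my part:

impl IntoNumMetric for u32 {
    type Output = u32;

    fn into_num_metric(self) -> Self::Output {
        self // integers are already numeric values
    }
}

impl IntoNumMetric for std::time::Duration {
    // Assumption: a Duration is exported as whole seconds
    type Output = u64;

    fn into_num_metric(self) -> Self::Output {
        self.as_secs()
    }
}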

Now, I was able to implement the Metric::into_metric() function shown above:

impl<'a, T: IntoNumMetric + Debug> Metric<'a, T> {
    // Metric::new() implementation, hidden here

    pub fn into_metric<'b>(self) -> Result<PrometheusMetric<'b>> {
        let instance = PrometheusInstance::new()
            .with_label("instance", self.instance)
            .with_value(self.value.into_num_metric())
            .with_current_timestamp()
            .map_err(Error::from)?;

        let mut m = PrometheusMetric::new(self.name, MetricType::Counter, self.description);
        m.render_and_append_instance(&instance);
        Ok(m)
    }
}

This function is used for converting a Metric object into the appropriate PrometheusMetric object from prometheus_exporter_base.

The implementation is, of course, also generic over the type the Metric object holds. A PrometheusInstance is created and a label “instance” is added (empty, you know why... :–( ). Then, the value is added to that instance using the conversion from the IntoNumMetric trait. The current timestamp is added as well, or an error is returned if that fails.

Last but not least, a new PrometheusMetric object is created with the appropriate name and description, and the instance is rendered to it.
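For completeness: the Metric::new() constructor hidden in the snippet above is presumably just a plain field-by-field constructor, along the lines of this sketch (an assumption, since the actual code is not shown here):

pub fn new(name: &'static str, value: T, description: &'static str, instance: &'a str) -> Self {
    Metric { name, value, description, instance }
}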

And that's it!

Deploying

The code is there now. But of course, I still needed to deploy this to my hosts and make it available in my prometheus and grafana instances.

Because I use NixOS, I wrote a nix package definition and a nix service definition for it, making the endpoint available to my prometheus instance via my wireguard network.
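On the prometheus side, this boils down to a scrape job pointing at the exporter. A minimal sketch of such a configuration, where the job name and the wireguard address are hypothetical:

scrape_configs:
  - job_name: 'mpd'
    static_configs:
      - targets: ['10.100.0.2:9123'] # wireguard address of the host running the exporter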

After that, I was able to add queries to my grafana instance, for example:

mpd_db_playtime / 60 / 60 / 24

to display the DB playtime of an instance of my MPD in days.

I'm not yet very proficient with grafana and its query language, and the service implementation is rather minimal, so there aren't that many metrics yet.

Either way, it works!

A basic dashboard for MPD stats

Next steps and closing words

The next steps are quite simple. First of all, I want to make more stats available to prometheus. Right now, only the basic statistics of the database are exported.

The async_mpd crate makes a lot of other status information available.
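Exporting more of it should follow the same pattern as the stats() call shown above; a sketch of the idea, assuming the client's status() call and leaving the actual field mapping out:

let mut mpd = mpd_data.lock().unwrap();
let status = mpd.status().await?;
// ... build Metric objects from the fields of `status`, as done for stats() above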

Also, I want to get better with grafana queries and make some nice-looking graphs for my dashboard.

Either way, that challenge took me longer than I anticipated in the first place (“I can hack this in 15 minutes” – famous last words)! But it was fun nonetheless!

The outcome of this little journey is on crates.io and I will also submit a PR to nixpkgs to make it available there, too.

If you want to contribute to the source code, which I encourage you to do, feel free to send me patches!

tags: #prometheus #grafana #rust #mpd #music

Happy new year!

I just managed to implement syncthing monitoring for my prometheus and grafana instance, so I figured I'd write a short blog post about it.

Note: This post was written for prometheus-json-exporter pre-0.1.0, and the configuration file format has changed since.

Now, as you've read in the note above, I managed to do this using the prometheus-json-exporter. Syncthing has a status page that can be accessed with

$ curl localhost:22070/status

if enabled. The values from this JSON can then be exposed to prometheus using the prometheus-json-exporter mentioned above, with the following configuration mapping the JSON values to prometheus metrics:

- name: syncthing_buildDate
  path: $.buildDate
  help: Value of buildDate

- name: syncthing_buildHost
  path: $.buildHost
  help: Value of buildHost

- name: syncthing_buildUser
  path: $.buildUser
  help: Value of buildUser

- name: syncthing_bytesProxied
  path: $.bytesProxied
  help: Value of bytesProxied

- name: syncthing_goArch
  path: $.goArch
  help: Value of goArch

- name: syncthing_goMaxProcs
  path: $.goMaxProcs
  help: Value of goMaxProcs

- name: syncthing_goNumRoutine
  path: $.goNumRoutine
  help: Value of goNumRoutine

- name: syncthing_goOS
  path: $.goOS
  help: Value of goOS

- name: syncthing_goVersion
  path: $.goVersion
  help: Value of goVersion

- name: syncthing_kbps10s1m5m15m30m60m
  path: $.kbps10s1m5m15m30m60m
  help: Value of kbps10s1m5m15m30m60m
  type: object
  values:
    time_10_sec: $[0]
    time_1_min: $[1]
    time_5_min: $[2]
    time_15_min: $[3]
    time_30_min: $[4]
    time_60_min: $[5]

- name: syncthing_numActiveSessions
  path: $.numActiveSessions
  help: Value of numActiveSessions

- name: syncthing_numConnections
  path: $.numConnections
  help: Value of numConnections

- name: syncthing_numPendingSessionKeys
  path: $.numPendingSessionKeys
  help: Value of numPendingSessionKeys

- name: syncthing_numProxies
  path: $.numProxies
  help: Value of numProxies

- name: syncthing_globalrate
  path: $.options.global-rate
  help: Value of options.global-rate

- name: syncthing_messagetimeout
  path: $.options.message-timeout
  help: Value of options.message-timeout

- name: syncthing_networktimeout
  path: $.options.network-timeout
  help: Value of options.network-timeout

- name: syncthing_persessionrate
  path: $.options.per-session-rate
  help: Value of options.per-session-rate

- name: syncthing_pinginterval
  path: $.options.ping-interval
  help: Value of options.ping-interval

- name: syncthing_startTime
  path: $.startTime
  help: Value of startTime

- name: syncthing_uptimeSeconds
  path: $.uptimeSeconds
  help: Value of uptimeSeconds

- name: syncthing_version
  path: $.version
  help: Value of version

When configured properly, one is then able to draw graphs using the syncthing-exported data in grafana.
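For example, since bytesProxied behaves like a counter, a rate query should show the proxied relay traffic over time (a sketch, untested):

rate(syncthing_bytesProxied[5m])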

There's nothing more to it.

tags: #nixos #grafana #prometheus #syncthing