musicmatzes blog

server

This blog is now hosted via IPFS as well.

What does this mean?

This means that you can access this blog via this IPNS name directly. As it is hosted in a distributed manner, at least one node which has the content must be online, otherwise the retrieval fails. I try to keep nodes online which host this content, though I cannot promise anything, of course.

Old versions of this blog (starting today) can be found here, also hosted via IPFS. So you can see all the typos I correct now and then.

How can I help?

You can help host this very blog by installing ipfs and pinning the latest hash (which is QmefDSDRcNyQUzNaVJnazQ7GpynjyEzPrdo1sdEk8ZLt8a as of this writing, but might have changed by now).

hash=$(ipfs name resolve QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea)
ipfs pin add -r "$hash"

The newest version will be published under this name:

/ipns/QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea

So you can poll for the newest version of this blog by re-resolving this name and pinning the contents via ipfs pin add -r.
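The re-resolve-and-pin step could be wrapped into a small helper, e.g. for a cron job. This is only a sketch with names of my own choosing; the `${IPFS:-ipfs}` indirection is an assumption that lets the ipfs binary be substituted, e.g. for dry runs:

```shell
#!/usr/bin/env bash
# Sketch of a polling/pinning helper (my own naming, not from the post).

update_pin() {
    local name="$1"
    local hash

    # re-resolve the IPNS name to the current content hash ...
    hash=$("${IPFS:-ipfs}" name resolve "$name") || return 1

    # ... and pin that version recursively
    "${IPFS:-ipfs}" pin add -r "$hash"
}

# usage:
#   update_pin /ipns/QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea
```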

How is it done?

Well, that's rather trivial. I have an ipfs daemon running on my machine and a bash script for adding the contents of this blog to ipfs. Each time I run this script, it adds the blog contents to ipfs, writes the resulting hash to the “old-versions” file, and then publishes the name.
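The real script is not shown in the post, so here is only a hedged sketch of what such a publish script might look like; the directory and file names are assumptions, and `${IPFS:-ipfs}` is my addition so the ipfs binary can be swapped out for testing:

```shell
#!/usr/bin/env bash
# Sketch of a publish script (names and flags are assumptions).

publish_site() {
    local site_dir="$1"      # hugo's output directory, e.g. "public"
    local log="$2"           # the "old-versions" file
    local hash

    # add the site recursively; -Q prints only the final root hash
    hash=$("${IPFS:-ipfs}" add -r -Q "$site_dir") || return 1

    # remember this version, then point the IPNS name at it
    echo "$hash" >> "$log"
    "${IPFS:-ipfs}" name publish "$hash"
}

# usage:
#   publish_site public old-versions
```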

So my workflow is like this:

  1. Run hugo to update the blog sources locally
  2. Run my ipfs publish script to publish the site in ipfs
  3. Run my publish script to publish the site on beyermatthias.de

You really should host your things with IPFS as well!

Additional notes

What I had to do before this worked with “hugo”: I had to ensure hugo did not generate absolute URLs. If each link points to the full URL of the blog (beyermatthias.de/yaddayadda), each link in the IPFS-checked-in version points to this URL as well. Instead, each URL has to be relative to the current location; then the IPFS versions link correctly.
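In hugo this is a configuration switch. Assuming a TOML config file, the relevant settings look like this:

```toml
# config.toml
# keep generated URLs relative, so the site also works when served
# from an IPFS gateway path
relativeURLs = true
canonifyURLs = false
```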

tags: #blog #server #tools

So I'm in my internship semester at the moment, and I have to work with fastcgi++, a C++ library for fastcgi. And it is a nightmare! Here is why...

So first, the codebase of fastcgi++ is a mess. Code snippets like

bool foo = someFunction();
if (foo) return true;
else return false;

are everywhere (the whole snippet above is just a verbose return someFunction();). You really don't want to read that kind of code. And yes, you actually have to read this code, because there is no documentation on how to use the library! There is a wonderful doxygen-generated API documentation, but that does not help you, because, yeah, now you know the types and the interfaces – lucky you! How to build a fastcgi module with them? Figure it out by reading through the codebase!

Secondly, you are not allowed to define your own accept() handler. You have to inherit from a Request class template and implement the virtual bool response(void) = 0; method. Okay, no problem, I can do this! But then you want to start accepting requests. You do this by creating an instance of a class called Manager:

Fastcgipp::Manager<MyRequestInheritedClass> man;
man.handler();

or something like this (this is not real code, I just want to demonstrate what it looks like). The manager accepts one connection (through many layers of abstraction, actually), creates an instance of your request class and calls response() on it.

Well, that's not a problem, is it? Well, it becomes a big problem if you want to be able to handle multiple requests at once – speaking of multithreading/concurrency here. It is a huge mess! My codebase actually consists of three classes: a ThreadedRequest class, a BlockingQueue where requests are stored in a thread-safe way, and a RequestDispatcher, a singleton which runs several worker threads. These worker threads take the requests out of the queue and process them if they can.

The ThreadedRequest class puts itself into the queue, which is a member of the RequestDispatcher singleton instance. The problem is not that it is unnecessarily complicated to build this 1-to-N multiplexing, but that the library provides an interface which forces you to couple your classes really tightly. And this is freaking bullshit.
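The BlockingQueue part of this is the standard condition-variable pattern. A minimal sketch of such a queue (the names are mine, not from my actual codebase): workers block in pop() until a request is pushed from the accept side.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal thread-safe FIFO: push() wakes one waiting consumer,
// pop() blocks until an element is available.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        // wait until the queue is non-empty (handles spurious wakeups)
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
```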

So, finally: I'm building a wrapper lib around fastcgi++ to be able to handle requests in a concurrent way. And it sucks. But it works.

So, my conclusion is: stay away from this bullshit library, and even from fastcgi if you can. Apache, for example, offers a way to write modules, so you can simply write your own module for Apache if you need to – which is faster anyway!

If you have to use fastcgi, consider writing your own library!

tags: #c++ #linux #server #open source #programming #software

I started my music server after I came home yesterday. I mounted the external music hard drive and started mpd. Then I wanted to start mpdscribble to scrobble to my last.fm account. But it did not start. Here's why.

First, I thought the configuration must be broken somehow. But it looked fine. There had been no package updates, as I'm running Debian stable on this machine – so no updates for these packages!

I double- and triple-checked the configuration, the way I started mpd, and so on. But I couldn't find an issue. The log file was not written; the journal filled only slowly.

After a while, I noticed that the configuration had an entry for writing to the syslog daemon, which resulted in the log ending up in /var/log/daemon.log, and there I found it: the system time seemed to be broken. And yes, it showed the 28th Dec. 2014. I don't know why, but I guess that's when the BIOS battery ran out of energy or something like that. Anyway, I fixed the system time and got it working!
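For the record, the checks that would have found this immediately. The paths are the ones from above; the clock-fixing commands are only examples (they need root, and their availability depends on the system):

```shell
# mpdscribble logged via syslog, so its errors ended up here, not in
# its own log file; to find them, one would run:
#   grep mpdscribble /var/log/daemon.log

# first thing to check when a daemon silently refuses to start:
date

# fix a wrong clock manually as root, e.g.:
#   date --set "2015-01-05 20:00:00"
# or let NTP handle it:
#   ntpdate pool.ntp.org
```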

tags: #music #server #mpd