<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>server &#8212; musicmatzes blog</title>
    <link>https://beyermatthias.de/tag:server</link>
    <description></description>
    <pubDate>Wed, 06 May 2026 12:44:31 +0200</pubDate>
    <item>
      <title>IPFS hosted blog</title>
      <link>https://beyermatthias.de/ipfs-hosted-blog</link>
      <description>&lt;![CDATA[This blog is now hosted via ipfs as well.&#xA;&#xA;What does this mean?&#xA;&#xA;This means that you can access this blog&#xA;via this IPNS name&#xA;directly.&#xA;As it is hosted in a distributed manner, a node which has the content must be&#xA;online, else the retrieval fails.&#xA;I try to keep nodes online which host this content, though I cannot promise&#xA;anything, of course.&#xA;&#xA;Old versions (starting today) of this blog can be found here,&#xA;also hosted via IPFS.&#xA;So you can see all typos I correct now and then.&#xA;&#xA;How can I help?&#xA;&#xA;You can help hosting this very blog by&#xA;installing ipfs&#xA;and pinning the latest hash&#xA;(which is&#xA;QmefDSDRcNyQUzNaVJnazQ7GpynjyEzPrdo1sdEk8ZLt8a&#xA;as of writing, but might be changed by now).&#xA;&#xA;hash=$(ipfs name resolve QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea)&#xA;ipfs pin add -r &#34;$hash&#34;&#xA;&#xA;The newest version will be published under this name:&#xA;&#xA;/ipns/QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea&#xA;&#xA;So you can poll the newest version of this blog by re-resolving this name and&#xA;pinning the contents via ipfs pin add -r.&#xA;&#xA;How is it done?&#xA;&#xA;Well, that&#39;s rather trivial.&#xA;I have an ipfs daemon running on my machine and a bash script for adding the&#xA;contents of this blog to ipfs.&#xA;Each time I run this script, it adds the blog contents to ipfs and writes the&#xA;last hash to the &#34;old-versions&#34; file.&#xA;After that, it publishes the name.&#xA;&#xA;So my workflow is like this:&#xA;&#xA;Run hugo to update the blog sources locally&#xA;Run my ipfs publish script to publish the site in ipfs&#xA;Run my publish script to publish the site on beyermatthias.de&#xA;&#xA;You really should host your things with IPFS as well!&#xA;&#xA;Additional notes&#xA;&#xA;What I had to do before this worked with &#34;hugo&#34;: I had to ensure hugo did not&#xA;generate absolute URLs.&#xA;Because if each link links to the 
full URL of the blog&#xA;(beyermatthias.de/yaddayadda), each link in the IPFS-checked-in-version links&#xA;to this URL as well.&#xA;Instead, each URL has to be relative to the current location, then, the IPFS&#xA;versions link correctly.&#xA;&#xA;tags:  #blog #server #tools&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>This blog is now hosted via <a href="https://ipfs.io">ipfs</a> as well.</p>

<h1 id="what-does-this-mean">What does this mean?</h1>

<p>This means that you can access this blog
<a href="http://gateway.ipfs.io/ipns/QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea/">via this IPNS name</a>
directly.
As it is hosted in a distributed manner, a node that has the content must be
online, or the retrieval fails.
I try to keep nodes online which host this content, though I cannot promise
anything, of course.</p>

<p>Old versions of this blog (starting today) can be found <a href="/old-versions">here</a>,
also hosted via IPFS.
So you can see all the typos I correct now and then.</p>

<h1 id="how-can-i-help">How can I help?</h1>

<p>You can help host this very blog by
<a href="https://ipfs.io/docs/getting-started/">installing ipfs</a>
and pinning the latest hash
(which is
<a href="http://gateway.ipfs.io/ipfs/QmefDSDRcNyQUzNaVJnazQ7GpynjyEzPrdo1sdEk8ZLt8a"><code>QmefDSDRcNyQUzNaVJnazQ7GpynjyEzPrdo1sdEk8ZLt8a</code></a>
as of writing, but it might have changed by now).</p>

<pre><code>hash=$(ipfs name resolve QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea)
ipfs pin add -r &#34;$hash&#34;
</code></pre>

<p>The newest version will be published under this name:</p>

<pre><code>/ipns/QmX95tkM6em8MP1SDs9Qae1G9YscqwozQJbX5rWTTYcJea
</code></pre>

<p>So you can poll for the newest version of this blog by re-resolving this name and
pinning the contents via <code>ipfs pin add -r</code>.</p>

<h1 id="how-is-it-done">How is it done?</h1>

<p>Well, that&#39;s rather trivial.
I have an ipfs daemon running on my machine and a bash script for adding the
contents of this blog to ipfs.
Each time I run this script, it adds the blog contents to ipfs and writes the
last hash to the <code>old-versions</code> file.
After that, it publishes the name.</p>

<p>So my workflow is like this:</p>
<ol><li>Run <code>hugo</code> to update the blog sources locally</li>
<li>Run my ipfs publish script to publish the site in ipfs</li>
<li>Run my publish script to publish the site on <a href="https://beyermatthias.de">beyermatthias.de</a></li></ol>
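<p>The three steps above can be sketched as a single bash function. This is a sketch, not my actual script: it assumes the ipfs daemon is running, that hugo writes its output to <code>public/</code>, and that the <code>old-versions</code> file lives in the working directory.</p>

```shell
# Sketch of the publish flow described above; all paths are assumptions.
publish_blog() {
    hugo                              # 1. regenerate the site into public/
    local hash
    hash=$(ipfs add -r -Q public)     # 2. add it to ipfs; -Q prints only the root hash
    echo "$hash" >> old-versions      # 3. record the hash so old versions stay findable
    ipfs name publish "$hash"         # 4. repoint the IPNS name at the new root
}
```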

<p>You really should host your things with IPFS as well!</p>

<h1 id="additional-notes">Additional notes</h1>

<p>One thing I had to do before this worked with hugo: ensure that hugo does not
generate absolute URLs.
If every link points to the full URL of the blog
(beyermatthias.de/yaddayadda), every link in the IPFS-checked-in version
points to that URL as well.
Instead, each URL has to be relative to the current location; then the IPFS
versions link correctly.</p>
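<p>A minimal sketch of how this can look in the site configuration, assuming a <code>config.toml</code> and the option names from hugo&#39;s documentation:</p>

```toml
# config.toml: keep generated links relative so the site
# also works when served from an IPFS gateway path
relativeURLs = true
canonifyURLs = false
```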

<p>tags:  <a href="https://beyermatthias.de/tag:blog" class="hashtag"><span>#</span><span class="p-category">blog</span></a> <a href="https://beyermatthias.de/tag:server" class="hashtag"><span>#</span><span class="p-category">server</span></a> <a href="https://beyermatthias.de/tag:tools" class="hashtag"><span>#</span><span class="p-category">tools</span></a></p>
]]></content:encoded>
      <guid>https://beyermatthias.de/ipfs-hosted-blog</guid>
      <pubDate>Sat, 01 Apr 2017 16:41:08 +0200</pubDate>
    </item>
    <item>
      <title>Stay away from FastCGI++ if you can</title>
      <link>https://beyermatthias.de/stay-away-from-fastcgi-if-you-can</link>
      <description>&lt;![CDATA[So I&#39;m in my praxis semester at the moment, and I have to work with fastcgi++,&#xA;a C++ library for fastcgi. And it is a nightmare! Here comes why...&#xA;&#xA;!-- more --&#xA;&#xA;So first, the codebase of fastcgi++ is a mess. Code snippets like&#xA;&#xA;bool foo = someFunction();&#xA;if (foo) return true;&#xA;else return false;&#xA;&#xA;are everywhere. You really don&#39;t want to read that kind of code. And yes,&#xA;you have to actually read this code because there is no documentation on how&#xA;to use the library! There is a wonderful doxygen-generated API documentation,&#xA;but that does not help you because, yeah, you know the types and the&#xA;interfaces now - lucky you! How to build a fastcgi module with them - figure&#xA;it out by reading through the codebase!&#xA;&#xA;Secondly, you are not allowed to define your custom accept() handler. You&#xA;have to inherit from a Request class template and you have to implement the&#xA;virtual bool response(void) = 0; method. Okay, no problem, I can do this!&#xA;But then, you want to start accepting requests. You do this by creating an&#xA;instance of a class called Manager:&#xA;&#xA;Fastcgipp::ManagerMyRequestInheritedClass man();&#xA;man.handler();&#xA;&#xA;or something like this (this is no real code, I just want to demonstrate how&#xA;it looks like). The manager accepts one connection (through many layers of&#xA;abstraction, actually), creates an instance of your request class and calls&#xA;response() on it.&#xA;&#xA;Well, that&#39;s not a problem, is it? Well, it gets to a big problem if you want&#xA;to be able to handle multiple requests at once, speaking of&#xA;multithreading/concurrency here. It is a huge mess! My codebase exists of&#xA;three classes, actually. A ThreadedRequest class, a BlockingQueue where&#xA;requests get stored thread-safe and a RequestDispatcher which is a singleton&#xA;which runs several worker threads. 
These worker threads take the requests out&#xA;of the queue and process them if they can.&#xA;&#xA;The ThreadedRequest class puts itself into the queue, which is a member of&#xA;the RequestDispatcher singleton instance. The problem is not that it is&#xA;unnecessarily complicated to build these 1-N multiplexing thing, but that the&#xA;library provides an interface which enforces you to couple your classes&#xA;realy tightly. And this is freaking bullshit. &#xA;&#xA;So, finally: I&#39;m building a wrapper lib around fastcgi++ to be able to handle&#xA;requests in a concurrent way. And it sucks. But it works.&#xA;&#xA;So, my conclusion is: Stay away from this bullshit library. And even from&#xA;fastcgi if you can. Apache for example offers a way to write modules, so you&#xA;can simply write your own module for apache if you need to - which is faster&#xA;anyways!&#xA;&#xA;If you have to use fastcgi, consider writing your own library!&#xA;&#xA;tags:  #c++ #linux #server #open source #programming #software&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>So I&#39;m in my internship semester at the moment, and I have to work with fastcgi++,
a C++ library for fastcgi. And it is a nightmare! Here&#39;s why...</p>



<p>So first, the codebase of fastcgi++ is a mess. Code snippets like</p>

<pre><code class="language-C++">bool foo = someFunction();
if (foo) return true;
else return false;
</code></pre>

<p>are <em>everywhere</em>. You really don&#39;t want to read that kind of code. And yes,
you have to actually read this code, because there is no documentation on how
to use the library! There is wonderful doxygen-generated API documentation,
but that does not help you because, yeah, you know the types and the
interfaces now – lucky you! How to build a fastcgi module with them – figure
it out by reading through the codebase!</p>

<p>Secondly, you are not allowed to define your own <code>accept()</code> handler. You
have to inherit from a <code>Request</code> class template and implement the
<code>virtual bool response(void) = 0;</code> method. Okay, no problem, I can do this!
But then you want to start accepting requests. You do this by creating an
instance of a class called <code>Manager</code>:</p>

<pre><code class="language-C++">Fastcgipp::Manager&lt;MyRequestInheritedClass&gt; man;
man.handler();
</code></pre>

<p>or something like this (this is not real code; I just want to demonstrate what
it looks like). The manager accepts one connection (through many layers of
abstraction, actually), creates an instance of your request class and calls
<code>response()</code> on it.</p>

<p>Well, that&#39;s not a problem, is it? Well, it becomes a big problem if you want
to be able to handle multiple requests at once, speaking of
multithreading/concurrency here. It is a <em>huge</em> mess! My codebase consists of
three classes, actually: a <code>ThreadedRequest</code> class, a <code>BlockingQueue</code> where
requests get stored thread-safely, and a <code>RequestDispatcher</code> singleton
which runs several worker threads. These worker threads take the requests out
of the queue and process them if they can.</p>

<p>The <code>ThreadedRequest</code> class puts <em>itself</em> into the queue, which is a member of
the <code>RequestDispatcher</code> singleton instance. The problem is not that it is
unnecessarily complicated to build this 1-N multiplexing machinery, but that the
library provides an interface which <em>forces</em> you to couple your classes
really tightly. And this is <em>freaking bullshit</em>.</p>

<p>So, finally: I&#39;m building a wrapper lib around fastcgi++ to be able to handle
requests in a concurrent way. And it sucks. But it works.</p>

<p>So, my conclusion is: stay away from this bullshit library, and even from
fastcgi if you can. Apache, for example, offers a way to write modules, so you
can simply write your own module for apache if you need to – which is faster
anyway!</p>

<p>If you have to use fastcgi, consider writing your own library!</p>

<p>tags:  <a href="https://beyermatthias.de/tag:c" class="hashtag"><span>#</span><span class="p-category">c</span></a>++ <a href="https://beyermatthias.de/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://beyermatthias.de/tag:server" class="hashtag"><span>#</span><span class="p-category">server</span></a> <a href="https://beyermatthias.de/tag:open" class="hashtag"><span>#</span><span class="p-category">open</span></a> source <a href="https://beyermatthias.de/tag:programming" class="hashtag"><span>#</span><span class="p-category">programming</span></a> <a href="https://beyermatthias.de/tag:software" class="hashtag"><span>#</span><span class="p-category">software</span></a></p>
]]></content:encoded>
      <guid>https://beyermatthias.de/stay-away-from-fastcgi-if-you-can</guid>
      <pubDate>Thu, 14 May 2015 16:37:36 +0200</pubDate>
    </item>
    <item>
      <title>How mpdscribble did not scrobble - because of the system time!</title>
      <link>https://beyermatthias.de/how-mpdscribble-did-not-scrobble-because-of-the-system-time</link>
      <description>&lt;![CDATA[I started my music server after I came yesterday. I mounted the external&#xA;music hard drive, I started mpd. Then I wanted to start mpdscribble to&#xA;scrobble to my last.fm account. But it did not start. Here&#39;s why.&#xA;&#xA;!-- more --&#xA;&#xA;First, I thought the configuration must be broken somehow. But it looked fine.&#xA;I had no package update, as I&#39;m running Debian stable on this machine - so no&#xA;updates for these packages!&#xA;&#xA;I double- and trible-checked the configuration, the method how I started mpd&#xA;and so on. But I couldn&#39;t find an issue. The log file was not written, the&#xA;journal was slowly filled.&#xA;&#xA;After a while, I noticed that the logfile had an entry for writing to the&#xA;syslog daemon, which resulted the log to be in /var/log/daemon.log and there&#xA;I found it: The system time seem&#39;d to be broken. And yes, it was the&#xA;28th Dec. 2014. I don&#39;t know why, but I guess that&#39;s the time when the BIOS&#xA;battery run out of energy or something like this. Anyways, fixed the system&#xA;time and got it working!&#xA;&#xA;tags:  #music #server #mpd&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I started my music server after I came home yesterday. I mounted the external
music hard drive and started mpd. Then I wanted to start mpdscribble to
scrobble to my last.fm account. But it did not start. Here&#39;s why.</p>



<p>First, I thought the configuration must be broken somehow. But it looked fine.
There had been no package updates, as I&#39;m running Debian stable on this machine – so no
updates for these packages!</p>

<p>I double- and triple-checked the configuration, the way I started mpd,
and so on. But I couldn&#39;t find an issue. The log file was not written; the
journal slowly filled up.</p>

<p>After a while, I noticed that the logfile had an entry for writing to the
syslog daemon, which meant the log ended up in <code>/var/log/daemon.log</code>, and there
I found it: the system time seemed to be broken. Indeed, it was the
28th of December 2014. I don&#39;t know why, but I guess that&#39;s when the BIOS
battery ran out of energy or something like that. Anyway, I fixed the system
time and got everything working!</p>

<p>tags:  <a href="https://beyermatthias.de/tag:music" class="hashtag"><span>#</span><span class="p-category">music</span></a> <a href="https://beyermatthias.de/tag:server" class="hashtag"><span>#</span><span class="p-category">server</span></a> <a href="https://beyermatthias.de/tag:mpd" class="hashtag"><span>#</span><span class="p-category">mpd</span></a></p>
]]></content:encoded>
      <guid>https://beyermatthias.de/how-mpdscribble-did-not-scrobble-because-of-the-system-time</guid>
      <pubDate>Thu, 08 Jan 2015 16:37:28 +0100</pubDate>
    </item>
  </channel>
</rss>