musicmatzes blog

I switched to i3 recently and therefore reconfigured the whole appearance of my system. I really like it now, as it uses grey as its base color. Everything is grey: my terminal (xterm, bash), my editor (vim), my IRC client (irssi), my music player (ncmpcpp). But what I like most by far is my prompt.

Bash it to the right

I always hated the flutter of my bash prompt when diving into directories:

~ $ cd foo
~/foo $ cd bar
~/foo/bar $ cd
~ $

But now, I finally reconfigured it to be in a straight line:

:> | cd foo
:> | cd bar
:> | cd

And the path is now on the right side (I can't show it in the example, unfortunately).

There is not much magic in it... but let me show you:

The magic

The PS1 variable holds the prompt. I configured it to be just

:> |

but if the last command failed, it uses the sad smiley:

:< |

(you can do this with Unicode smileys as well!) There is not much magic in it. The pipe symbol is a Unicode character (\u2502) for me, so I get a straight line without gaps between the lines.
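For illustration, here is a minimal sketch of how such a PS1 could be built. The post does not show the actual assignment, so the helper function and the exit-code check are my assumptions; the │ is the U+2502 character mentioned above:

```shell
# Hypothetical sketch: pick the smiley based on the last exit code.
smiley() { [ "$1" -eq 0 ] && echo ':>' || echo ':<'; }
PS1='$(smiley $?) │ '   # single quotes: evaluated at prompt time, not now
```

The command substitution runs every time the prompt is drawn, so the smiley always reflects the exit code of the command that just finished.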

The real magic comes now! There is also a bash variable holding a command which gets executed before every prompt. And I abuse this variable a little bit. But first, we have to write the command (actually a bash function):

rightpromptstring() {
    local tmp=$?
    local dirty=$([[ $(git diff --shortstat 2>/dev/null | tail -n 1) != "" ]] && echo "*")
    local g=$([[ $(git branch 2>/dev/null) ]] && echo " ($(git name-rev HEAD | cut -d " " -f 2)) ")
    local p=$(pwd | sed "s:^$HOME:~:")
    local j=$(jobs | wc -l)
    local d=$(date +%H:%M)
    local r=$(echo -ne "\u2502 \033[1;30m<<$dirty$g$p [$j] [$d]\033[0m")
    local rl=${#r}
    local t=$(tput cols)
    let "p = $t - $rl"
    printf -v sp "%${p}s" ' '
    echo -ne "$sp$r\r"
    return $tmp
}

So, what happens in here? The first line saves the return value of the previous command. We need it to be returned later on, as the smiley from PS1 won't work if we don't return it.

The next few lines do a bit of git magic to include the current branch and an asterisk if the repository is “dirty”. The variable p gets the path, but if I'm in my home directory, it gets replaced by the tilde. The j variable gets the number of jobs in the shell, as I always want this printed somewhere. The d gets the current time.

The r variable now gets the string which represents the right prompt. It also gets the pipe character (actually the Unicode character \u2502) and some color stuff. Then, the rl variable gets the length of the string in r. The t variable gets the number of columns, which is required for printing spaces until we are at the right side (the printf part prints the spaces into the variable sp). Then, the spaces and the right prompt get printed, followed by a \r for carriage return.

Afterwards we return the stored exit code of the application which just exited and that's it.

Now, just set the prompt command variable:

PROMPT_COMMAND=rightpromptstring

And that's it.

I know, the rightpromptstring function is a bit ugly and I bet it can be improved, but it works very well for me! Feel free to send me patches!

tags: #bash #programming #linux

Let me list some more bullshit I just learned.

struct++

A struct can have a function as member? So this is valid C++ code:

struct foo {
    void * initthiscrap();
};

What the hell? Even worse: you can inherit from a structure! Seriously? You can inherit from a fucking structure? No shit, Sherlock!

Implicit type conversion

This one is even worse. Lets have a look at the following code:

Dot give_dot(void) { return 5; }

Dot is a class. This code is valid. What the fucking hell? In C++ it is legal to convert without any semantics for it. That sucks. Period.

Dot a = 5;

Automatic constructor calling

I already used the following syntax:

Dot a();
Dot b(4);

But did you know that

Dot c;

also calls the constructor for c? What the fucking hell is this? I declare the variable c, but no... C++ won't let you do that! It calls the constructor for Dot. Why? Because it is bullshit!

Sidenote: I would expect this

Dot c();

to include the constructor call. But not without the parentheses!

tags: #programming #c++

I really like to dump my brain into IRC channels. You do, too? So, meet ircdump.

irc_dump.sh (the repo) / ircdump (the script call) is a short shell script which dumps your text into all IRC channels you previously joined. It uses ii as IRC tool.

If you want to paste something, you just invoke it:

ircdump I like trains

and it dumps the text directly into all available IRC channels. Of course, you must do some setup first!

You have to use ii to create/join the appropriate channels. You should create them at /tmp/ircdump or set the appropriate path right in the script. Once you have connected to a server and joined the appropriate channels, the script does everything else for you. Note that it doesn't paste to the server channel, just to the channels you joined.
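For the curious, a hypothetical re-implementation sketch of the idea. The function name, the IRCDUMP_DIR override and the details are my assumptions; ii creates one directory per server and one per joined channel, each containing an "in" file (a FIFO in a live session) that the daemon reads commands from:

```shell
# Hypothetical sketch of the ircdump idea, assuming ii's directory layout.
ircdump() {
    base="${IRCDUMP_DIR:-/tmp/ircdump}"
    # channel dirs are two levels deep (server/channel), so this glob
    # skips the server's own "in" -- we never paste to the server itself
    for in_file in "$base"/*/*/in; do
        [ -e "$in_file" ] || continue
        echo "$*" > "$in_file"
    done
}
```

Used like the original: ircdump I like trains.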

Use it with care! You will paste to all IRC channels you joined with ii.

People gonna hate you!

Update: There is now a repo at my github account which contains utils when dealing with ii: irctools.sh.

tags: #programming #chat #irc #shell

I'm currently working at my university. My job is to explore the possibilities of the so-called “micro VMs” and if they can be used in the studi-cloud as well as in the research-cloud at my university.

I worked about 10 hours on this, and I'm trying to get OSv running. But at this moment, I'm stuck with compiling! First, I tried to compile OSv itself. It's written in C++, which requires me to install the latest gcc on the Debian machine I'm testing this stuff on. Ha-ha! Debian, you know, has gcc from-before-the-war installed. Well, not 4.2, but still 4.7.2, which is not modern enough to compile the sources of OSv. It always crashes with a compiler error.

But, no problem here! I just pulled the pre-built binaries from the site! Then I installed KVM and all required tools around it. I edited some configurations on the OSv part and tried to start a VM with it. But no, not yet! The problem is that the Debian kernel has VHOST disabled. Whatever VHOSTs are, it is a kernel feature.

So, I pulled the linux kernel git tree, checked out the 3.2 kernel (which Debian is currently running on) and copied the configuration from /boot/config-amd64 to .config inside the kernel sources. I edited the configuration (by hand, you can also do this with make menuconfig) to enable CONFIG_VHOST_NET by setting it to y and started compiling. After that, I realized that there is another way to compile a kernel for Debian! I installed fakeroot, kernel-package and linux-source-3.2 and started building the next kernel with these instructions. As I did all this via SSH on my testing machine, I'm not able to test whether these kernels work. The grub entries are already there, but not installed yet. On Monday, I will finish this setup and try these kernels.

But, what I wanted to say initially: it really sucks that all this stuff is not included in Debian. I mean, that there is not the newest gcc version... I can understand this! It's all about stability and so on. But I cannot understand why the kernel of a stable distribution, which is a good candidate for a server distribution, does not support features a virtual machine, namely the Kernel Virtual Machine, requires! This is really odd!
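The config tweak described above can be sketched like this. In the real workflow the .config is copied from /boot/config-amd64 into the kernel source tree first; the printf here is only a stand-in so the sketch is self-contained:

```shell
# Stand-in for the copied /boot/config-amd64 (illustration only):
printf '%s\n' '# CONFIG_VHOST_NET is not set' > .config

# Equivalent to flipping the option in `make menuconfig`:
sed -i 's/^# CONFIG_VHOST_NET is not set$/CONFIG_VHOST_NET=y/' .config

grep '^CONFIG_VHOST_NET' .config
```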

At least, the build process for a new kernel under Debian is straight-forward! You just have to copy the configuration of the kernel which is currently running, modify the parts you want to change, and then build the kernel with the appropriate commands. After that, there is a .deb package you can install with your package manager. This also includes an update for the bootloader and, if the build process also built some modules, a package for the new modules is provided, too! Really neat!

tags: #programming

So I'm learning C++ at university now. First I thought, yay, another programming language for my portfolio. But now I think “Oh shit, I won't pass my exam”. Why? I want to cover the topic in a short series of articles. And, as the title suggests, I will tell you why I hate C++.

When we started C++, I already knew C (which is the “basis” of C++, but huh - C++ isn't C at all), Java (which is close, but better [IMHO]), Ruby (of course) and Bash (just for completeness). I had already written Python, Haskell, Lua and some other languages. So – I'm not inexperienced in this field!

Please note: I'm still learning C++ and I'm of course not an expert. This article is a rant on the stuff I don't understand, or on stuff where I don't get why it's so complicated.

Declaration vs. Definition

When talking about declarations and definitions, a C programmer thinks about header files and source files. But from what I understood in the lessons, this does not apply to C++ at all. You can mix them up fairly easily. At least that's my feeling. Of course, you also have to write sooo much stuff. Let's write a header for a simple 2D vector:

class Vector2D {
    int x;
    int y; /* or shall we put this into the definition? I don't know! */

    public:
        Vector2D(); /* constructor */

        /* some ugly operator overloading, I will rant on this later on */

        std::string to_string(void);
        Vector2D * reverse(void);
};

Now, that's your prototype. And now, you can rewrite the whole stuff. Not much, you say? Now, take a class with 100+ methods and additional operator overloadings. We'll reach 500 lines really fast (pure code, no comments) without having implemented a single line of actual logic!

The fact that you can define methods for a class in two ways is also very disturbing to me:

Vector2D * Vector2D::reverse(void) { /* do your stuff */ }

/* vs. */

class Vector2D {
    Vector2D * reverse(void) { /* do your stuff */ }
    /* more stuff */
};

What the fucking hell is that?

The Java compiler is intelligent enough to use the definition of the class. Now you argue about interfaces? You need headers with the class declaration for interfacing with a binary or something? Ok then, but I would bet there is a way to do this in a much cleaner way. Not because Java does this; I know that Java and C++ differ because of the Java VM. You can do reflection on the JVM, which you can't do that easily with C++.

But I'm talking of syntax! Why can't we just say “Hey, there is the header, the information for return types and everything is there – we don't need it in the definition!” or something like this? Why do we have to write that much code for almost nothing?

I'm a friend!

What's this friend keyword about? What the fucking hell is it about? We don't need this shit! Of course, this is about operator overloading, but I won't cover that topic in this article but in the next one.

Let's think of another language which supports operator overloading. How does it do it? Take Ruby, for example. You can overload an operator in Ruby. And Ruby doesn't have this keyword or anything comparable. Why? Because the syntax of Ruby is meant to be as clean as possible. And C++ is not. That's it. This has nothing to do with implementation-related things; the whole topic is only about syntax. And the C++ guys just produced shit.

Reference vs. Pointer

What is this:

CustomType & new_customtype(int input);

An allocator function for the CustomType type you say? Well, now what is

CustomType * new_customtype(int input);

this? The same? Exactly! There is no difference, because both of these functions return a pointer to a newly allocated object. The reference thing is just a garbled pointer. So why do we need it? Exactly: we don't fucking need it! But if you don't need it, C++ supports it! For sure!

Those were just some of the points I already hate about C++.

To be continued...

tags: #programming #c++

This is the 24th iteration on what happened in the last four weeks in the imag project, the text based personal information management suite for the commandline.

imag is a personal information management suite for the commandline. Its target audience are commandline- and power-users. It does not reimplement personal information management (PIM) aspects, but re-uses existing tools and standards to be an addition to an existing workflow, so one does not have to learn a new tool before being productive again. Some simple PIM aspects are implemented as imag modules, though. It gives the user the power to connect data from different existing tools and add meta-information to these connections, so one can do data-mining on PIM data.

What happened?

In the last four weeks, not that much happened. Mainly, because I was on vacation. Semester ended on Feb 21st, so from there on I did exactly one thing: Enjoy my free time.

Nevertheless, we got some things done in the imag codebase:

  • #801 merged the initial support for annotations.
  • #897 merged a new feature which gives us the possibility to build imag with the early-panic flag, which causes the store to panic on a failure rather than to throw errors. This is mainly for debugging, not for production use.
  • #900 merged improvements for the libimagstore documentation.
  • #901 merged some notes on long-term todos which would perish in the github issue tracker.
  • #902 merged some improvements on the Ruby API.
  • #903 improved the libimagrt documentation.
  • #907 updated clap.

Four PRs are not in this list because they are too trivial.

What about Ruby? Well, because of the little progress, nothing happened regarding the Ruby bindings. But we'll get there, I promise.

What's coming?

I don't know yet. My 2nd semester of the master's course just began and I really don't know what the workload will be. So... this is all in all a rather sad update, I guess. Progress slowed down a lot, and I don't know what will be there in the next four weeks...

We'll see...

tags: #programming #rust #software #tools #imag

Have you ever had a file in your git repository which you would like to split, but you also would like to preserve the history of the file?

Here's a way how to do it.

The Problem

We want to move a (two-line) file a.txt inside a git repository to b.txt and c.txt, whereas b.txt contains the first line of the original file and c.txt the second one. You want to preserve the history of these two lines, as a simple git mv a.txt b.txt && cp b.txt c.txt or something similar would cause at least one of the files (c.txt in this case) to be a completely new file to git, and therefore have a single commit as history – its creation.

I also tested some other ideas, for example a git mv a.txt b.txt && git checkout HEAD a.txt. It results in the same issue.

The solution...

... is a merge conflict! Yuck, you say? No, not at all! It's a really simple merge conflict!

First, you create a branch which causes the merge conflict later on. Then, you do git mv a.txt b.txt && git commit. You have now moved the a.txt file to b.txt, obviously! Now, go to your new branch. The file is still a.txt there. Do a git mv a.txt c.txt && git commit. Maybe you can guess what comes next.

Create a merge conflict by going back to your branch and merging the temporarily created branch: git merge --no-ff <conflict-branch>. A merge conflict will come up, as you moved a.txt to b.txt one time and to c.txt the other time. Now, simply take both files.
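The whole dance as a self-contained sketch (branch names, commit messages and file contents are illustrative; the setup block at the top just creates a throwaway repo with a two-line a.txt):

```shell
# throwaway repo for illustration
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
printf 'first line\nsecond line\n' > a.txt
git add a.txt
git commit -qm "initial a.txt"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git

git branch tmp                          # branch off before the first rename
git mv a.txt b.txt
git commit -qm "move a.txt to b.txt"

git checkout -q tmp                     # a.txt still exists on this branch
git mv a.txt c.txt
git commit -qm "move a.txt to c.txt"

git checkout -q "$main"
git merge --no-ff tmp || true           # stops with a rename/rename conflict
git add -A                              # resolve it by keeping both files
git commit -qm "split a.txt into b.txt and c.txt"

git log --follow --oneline c.txt        # reaches back into a.txt's history
```

Afterwards, git log --follow shows the full pre-split history for both b.txt and c.txt.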

Of course there is also the possibility to delete code in the files inside the branches. Then you have “split” the a.txt file into two files but preserved the history.

Awesome!

tags: #git

Do you have an open source project? Do you have a mailing list for it? Is it open for everyone? No? Why not?

I just tried to submit a bug report to an open source project. I checked if they had a mailing list. Apparently, they do. I wrote a message and described the bug (well, it was an error in the documentation, but I think this is also a bug of some kind). I tried to send the message. A failure notice came back.

Why the hell do open source projects close their mailing lists? Users have to register before sending mails, which is a hurdle for them. I will not subscribe just to submit a single bug. Never.

That's why I love the Linux kernel mailing list. It is open, and therefore everyone can submit bugs without having to register, which enables you to file a bug report by simply writing an email.

If the LKML were closed, the Linux kernel wouldn't be at the point it is today.

So, please, please, please, if you have an open source project and a mailing list for it – keep it open for everyone!

tags: #programming

I'm working on an open source project code base. We're using travis-ci as continuous integration server. And it sucks.

Travis runs Ubuntu 12.04 LTS Server edition, 64 bit. And because of this, its packages are not up to date. We have to install gcc-4.8 by hand to be able to write C11 code. We have to use an old version of the “check” unit testing framework, because we were not able to find a PPA for an up-to-date version (Ubuntu ships 0.9.6, we want to use 0.9.14).

Also, the build VM does not cache the packages we install (because we do not pay). Therefore, we have an install time of several minutes and a build time of less than one minute.

So, our conclusion, after using travis for four weeks: stay away from travis if you have an alternative.

We will now set up our own CI server. Maybe drone on Docker, where we are able to run Docker images to build our source in. We would be able to build for several distros without much pain. We would need a hoster for this, but hey... today a hoster costs about 2 bucks per month, if you know whom to ask for hosting.

We're currently trying to set up a drone instance on a Fedora 20, just to test if it works as expected and how much effort has to go into it. I guess more posts about this will follow.

tags: #programming #travis #ci

Sharing one's dotfiles has become more and more popular with the rise of github. Almost everyone has a dotfiles repository showing her or his dotfiles, including configuration for various programs. The configuration for vim – the vimrc – gets shared, too. Much too often, I think!

Why sharing dotfiles? And why don't?

I had a dotfiles repository, too. Long time ago. I will not share my dotfiles anymore. Here is why:

People share their dotfiles because they want to tell people how awesome they are. “Hey look, I have this awesome set of mappings in my vimrc, which enables me to be fast as light when doing X.” Sharing dotfiles has become some sort of race. Who is the best at bending tools to do their jobs more efficiently? I don't want to take part in this race anymore.

Sharing the vimrc is also some kind of opening my own personal working style to everyone. From my vimrc, one can easily see how I work. So it's kind of an open window into my privacy. And that's a big no-go for me.

My vimrc has evolved over the last years. A lot of stuff got adapted to my workflow, and a lot of things got removed again. I would say that it is almost perfect by now. But only perfect for me. And this is why I consider my vimrc a big part of my privacy and something which is really intimate to me. And I don't share intimate stuff. It's that easy!

But how do you contribute to the vim community?

Well, one can now argue that sharing the vimrc file is some kind of giving something back to the community and as I do not share it, I do not contribute to the community. That's true.

But I'm really fine with sharing my knowledge if someone actually asks me! No matter whether it is via stackoverflow (I really like answering questions on SO, if I can), reddit, or personally via email. Sharing experience and knowledge has nothing to do with sharing personal information, I think. And in this context, the “how do I do this” is experience or knowledge, and the vimrc (or the dotfiles in general) is personal information.

So, feel free to ask me things about vim, mutt, mpd, git or other tools – I will answer them, of course! But I will not show you my configuration of the tool!

tags: #vim