musicmatzes blog

open

This post was published on both my personal website and imag-pim.org.

I have been thinking about closing contributions to imag for about two months now. Here I want to explain why I'm considering this step and why I'm tending towards a “yes, let's do that”.

github is awesome

First of all: github is awesome. It gives you absolutely great tools to build a repository and, eventually, an open source community around your codebase. It works flawlessly; I have never experienced any issues with pull request tracking, issue tracking, milestone management, merging, external tool integration (in my case, and in the case of imag, only Travis CI) or any other tool github offers. It just works, which is awesome.

But. There's always a but. Github has issues as well. From time to time there are outages; I wrote about them before. Still, I came to the conclusion that github does really well for the time being. So the outages are not the reason why I am thinking of moving imag away from github.

Current state of imag

It is the state of imag. Don't get me wrong, imag is awesome and gets better every day. Still, it is not yet in a state where I would use it in production. And I have been developing it for almost two years now. That's a really long time frame for an open source project that is, for the most part, developed by one person. Sure, there are a few hundred commits from others, but right now (the last time I checked the numbers) more than 95% of the commits and the code were written by me.

In my opinion, imag really should get into a state where I would use it myself before making it accessible (contribution-wise) to the public. Developing it in a more “closed” fashion therefore seems like a good way to get it into shape.

Closing down

What do I mean by “closing development”, though? I do not intend to make imag closed source or to hide the code from the public, that's for sure. What I mean by closing development is that I would move development off of github and do it only on my own site, imag-pim.org. The code would still be openly accessible via the cgit web interface. Even contributions would still be possible, via patch mails or, if a contributor wants to, via a git repository on the site. Only the barrier to entry gets a bit higher, which – I like to believe – keeps away casual contributors and attracts only long-term contributors.

The disadvantages

Of course I'd be losing the power of the open source community at github. Is this a good thing or a bad thing? I honestly don't know. On the one hand it would lessen the burden on my shoulders with community management (which is fairly little work right now), issue management and pull request management. On the other hand I would lose tools like travis-ci and others, which work flawlessly and are a real improvement to the development process.

The conclusion

I don't really have one. If there were a way to include Travis with a self-hosted repository, as well as some possibility for issue management (git-dit isn't ready in this regard yet, because one cannot extract issues from github just yet), I would switch immediately. But there isn't. And that's keeping me from moving off of github (vendor lock-in at its finest, right?).

I guess I will experiment with a dedicated issue repository with git-dit and check how the cgit web interface works with it, and if it seems to be good enough I will test how it can be integrated (manually) with emails and a mailing list. If things work out smoothly enough, I will go down this road.

What I don't want to do is to integrate the issue repository in the code repository. I will have a dedicated repository for issues only, I guess. On the other hand, that makes things complicated with pull request handling, because one cannot comment on PRs or close issues with PRs. That's really unfortunate, IMO. Maybe imag will become the first project which heavily uses git-dit. Importing the existing issues from github would be really nice for that, indeed. Maybe I'll find a way to script the import functionality. As I want a complete move, I do not have to keep the issue tracking mechanisms (git-dit and github issues) in sync, so at least I do not have this problem (which is a hard one on its own).

tags: #open-source #programming #software #tools #git #github

This article is a cry for a feature which is long overdue in KDE, in my humble opinion: syncable, user-readable (as in plain text) configuration files.

But let me start explaining where I come from.

I started with Linux when I was 17 years old. At the time I ran a Kubuntu 9.04 (if I remember correctly) with KDE 3. I disliked Gnome because it looked silly to me (no offense, Gnome or Gnome people). So it was simply aesthetics that made me use KDE. Before switching to Linux I had only experienced the cruel world of Microsoft Windows, XP at the time. When I got a new notebook for my birthday, it came with Vista, and two days later I had this Linux thing installed (which friends of mine kept talking about). Naturally, I was blown away by it.

After some time I got rather comfortable using this black box with the green text on it – the terminal finally had me! Soon, I launched a full-blown KDE 3 (themed hacker-like in dark colors) just to start a few terminals, open vim and hack away. Then, the same friend who suggested Linux told me about “tiling window managers”. wmii it was shortly after.

A long journey began. After some troubles with Ubuntu 12.04 I switched to Archlinux, later from wmii to dwm and after that to i3 which I kept for a few years.

In 2015 I learned about NixOS, switched to it at the beginning of 2016 and in late 2016 I reevaluated my desktop and settled with XFCE.

And here's the thing: I wanted KDE, but it lacked one critical feature: being able to sync its configuration between my machines.

I own a desktop machine with three 1080p monitors and two notebooks – a Thinkpad X220 and an old Toshiba Satellite (barely used), if that matters at all.

There are things in the KDE configuration files which shouldn't be there, mainly state information and temporary values. That makes the whole thing a pain in the ass if you want to git add your configuration files, git push them to a repository somewhere on the web and git pull them on another machine.

Apparently the story is not that much better with XFCE, but at least some configuration files (like keyboard shortcuts, window decoration configuration and menu-bar configuration) can be git added and synced between machines. And it works for me, even with two such different machines. Some things have to be redone on each machine, but the effort is pretty low.

But with KDE (I tried Plasma 5), it was a pure PITA. Not a single configuration file was left untouched: values get reordered, state information gets put in there, and so on and so forth.

And I am rather amazed by KDE and really want to play with it, because I think (and that's not just an attempt to kiss up to you KDE guys) that KDE is the future of Linux on the desktop! Maybe not for the hacker kid from next door, but for the normal Linux user – like my mom, your sister or the old non-techy guy from next door – KDE is simple enough to understand and powerful enough to get stuff done efficiently without getting in your way too much.

So here's my request:

Please, KDE Project, make your configuration files human-readable, editor-friendly and syncable (via tools like git). That means no temporary values and no state information in the configuration files. That means configuration files do not get shuffled when a user alters a configuration via a GUI tool. That means the format of the configuration files does not get changed automatically.

The benefits are clear, and there are no drawbacks for KDE as a project (as far as I can see) because parsing and processing the configuration files will be done either way. Maybe it even reduces the complexity of some things in your codebase?


A word on “why don't you submit this to the KDE project as a request”: I am not involved with KDE at all. I don't even know where the documentation, bug tracker, mailing list(s), ... are located. I don't know where to file these things, or whether I have to register at some forum or whatever.

If someone could point me to a place where nice people will discuss these things with me, feel free to send me an email and guide me to this place!

tags: #open-source #software #tools #git #desktop #linux

This post was written during my trip through Iceland. It is part of a series on how to improve one's open-source code. Topics (will) contain programming style, habits, project planning, management and everything that relates to these topics. Suggestions welcome.

Please note that this is totally biased and might not represent the ideas of the broad community.

During my trip through Iceland I had a really great time seeing the awesome landscapes of Iceland, the Geysir, the fjords and the Highlands. But in the evenings, when the wind got harsher and the temperatures went down, I had some time for reading and thinking.

This is when I started this blog article series. I thought about general rules that would help me improving my open source code.

I started thinking about this subject after reading a great article on Fluent C++ about how to make if statements more understandable. This article really got me thinking, because it makes some really good points, and if you haven't read it yet, you clearly should invest a few minutes reading it (and this blog in general, of course). I'll wait, and we'll continue once you're done reading it.

Why think about this in the first place?

Well, everyone knows you're already writing the best code possible, and everyone who doesn't understand it is not worthy of your almighty code! Sadly, some people actually think like this. And of course it is not true at all.

I once read this great statement – I guess it was also on the Fluent C++ blog (again: read this blog, it is awesome) – which goes approximately like this:

If you look at code you wrote six months ago and you cannot think of a way to improve it, you haven't learned anything in six months, and this is as bad as it can get.

If you think about this for one minute, it is absolutely right, and you really don't want to be at that point. So you have to start thinking about your code. But where to start? Well, at the beginning, you might think. And that's absolutely right. You have to think about the small things first, so let's start with if statements.

Making if statements more understandable

Basically, this is what Jonathan Boccara said on Fluent C++. I won't repeat what he has written, just briefly summarize: give long if expressions names by defining functions for the conditions, represent the domain in your if statements, and don't be more general than your domain specification.

The last of these is the point I want to focus on in this article. Full quote:

Don't compress an if statement more than in the spec

But in open source software development we often do not have any spec. If you're working on a hobby project in your free time, improving someone's code or contributing some functionality to an open source project you're interested in, you only have the idea in your (and maybe also in someone else's) head. Sometimes you or some other people have already had a discussion about the feature you're about to implement and have a rough idea of how it should be done. Maybe even a concrete idea. But you'll almost never have a specification where edge cases, preconditions and invariants of your functionality are defined. And most of the time you don't need one. In open source, people come together who have a similar interest and goal – no specifications required, because all contributors involved know what a functionality should do and what not.

Show me code

In the following, I'm using Rust as the language for my examples. I'm not doing anything Rust-specific here, so people without knowledge of the Rust programming language shouldn't have to learn new things to understand my point; it is just that I'm most comfortable with this language right now.

So, as Jonathan already said, one should not make if statements arbitrarily long and complex. Helper functions should be used for the conditions, even if these functions are only used once. This can heavily improve the readability of your code.

if (car.has_wheels(context.required_num_of_wheels())
    && car.max_speed() > SpeedUnit::KMH(25))
    || car.building_year() > Time::Year(2000)
{
    // ...
}

The condition from above can be greatly improved in readability by moving it to a helper function:

fn car_is_new_or_fast(car: &Car, context: &Context) -> bool {
    (car.has_wheels(context.required_num_of_wheels())
        && car.max_speed() > SpeedUnit::KMH(25))
        || car.building_year() > Time::Year(2000)
}

//...

if car_is_new_or_fast(&car, &context) {
    // ...
}

You might think that this does not improve the code at all, just moves the complexity somewhere else – that's not entirely true. If you only have five-line functions, yes. But if your functions are ten, fifteen, fifty or even a hundred lines long and you have several if statements of similar complexity, moving such things out can improve readability a lot.

Also, you can make complex conditions testable by moving them into functions, which is another nice-to-have.
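
For example, the helper from above can now be unit-tested in isolation. Here is a minimal, self-contained sketch – the Car and Context types and their concrete fields are made up for illustration, they are not from the original example:

struct Car { wheels: u32, max_speed_kmh: u32, building_year: u32 }
struct Context { required_wheels: u32 }

impl Car {
    fn has_wheels(&self, n: u32) -> bool { self.wheels >= n }
}

fn car_is_new_or_fast(car: &Car, context: &Context) -> bool {
    (car.has_wheels(context.required_wheels) && car.max_speed_kmh > 25)
        || car.building_year > 2000
}

#[test]
fn old_but_fast_car_qualifies() {
    let context = Context { required_wheels: 4 };
    let car = Car { wheels: 4, max_speed_kmh: 120, building_year: 1999 };
    assert!(car_is_new_or_fast(&car, &context));
}

#[test]
fn old_and_slow_car_does_not_qualify() {
    let context = Context { required_wheels: 4 };
    let car = Car { wheels: 4, max_speed_kmh: 20, building_year: 1999 };
    assert!(!car_is_new_or_fast(&car, &context));
}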

But, but, but... speed?

One might now come up with the obvious question: does my code get slower because of this? I would say it depends. Fluent C++ has answered this question for C++, and I would guess the answer also holds for Rust, maybe even without the 2%/7% speed decrease Jonathan is experiencing, especially if the code is inlined by the Rust compiler. Even though you might get slightly slower code, you have to think of the one question that I greatly value, not only when it comes to execution speed but also in other cases: does it matter?

Does it matter whether your code gets a bit slower? Is this particular piece of code crucial for your domain? If not – expressiveness first, speed second! If it does, write the expressive version first and then: measure. If the expressive version has a performance impact you cannot tolerate, you can still optimize it later.

Next...

What's up next? Well, I don't know. I will get myself inspired by other blog posts and articles, and maybe I'll publish the next article in this series soonish. But maybe it takes a month or two, maybe even more, until I have some content. I don't want to make this a weekly thing or anything like that, so I'll leave it undefined when the next article of this series will be published.

Thanks for reading.

tags: #open-source #programming #software

In this blog post, which might turn into a short series, I want to plan a rust library crate and write notes down how to implement it.

This article was yet another one that I wrote while on my trip through Iceland. As you can see – my head never stops thinking about problems.

Usecase

So first of all, I want to write down the use case of this library. I had this idea when thinking about how to design a user frontend for imag (of course) and came to the conclusion that rust lacks a library for such a thing. So why not write one?

I want to design the user interface of this library crate approximately like Rails did with their implementation of the same functionality for Ruby (bear with me, I'm not that involved in the Ruby world anymore so I don't know whether this is actually Rails or just another gem that comes with it).

So what I want to be able to do is something like this:

let event_date = today() - days(2) + weeks(10);

for example. I'm not yet entirely sure whether it is possible to actually do this without returning Result<_, _> instead of real types (and because I'm in Iceland without an internet connection, I cannot check). If Results need to be returned, I would design the API in a way that these functions and calls only create an AST-like object tree, on which a function can then be called to calculate the final result:

let event_date = today() - days(2) + weeks(10);
let event_date = try!(event_date.calc());

But even more ideas come to mind when thinking about functionality this library may provide:

// Creating iterators
today().repeat_every(days(4)) // -> endless iterator

// Convenience functions
(today() + weeks(8)).end_of_month() // the end of the month of the day in 8 weeks

today().end_of_year().day_name() // name of the day at the end of the current year

today().until(weeks(4)) // range of time from now until 4 weeks from now

// more ...

Later on, a convenient parser could be put in front of this, so a user can actually provide strings which are then parsed and calculated.

calculate("now - 4h + 1day")

This could then, of course, be exposed to users as well.

Core Data types

The foundation of this library would be the awesome “chrono” crate, so we do not have to reimplement all the time-related things. This eases everything quite a lot and also ensures that I do not duplicate work which others have done way better than I could have.

So at the core of the library, we need to encapsulate chrono types. But there are many user-facing types in chrono and we cannot assume we know which of them our users need. So we have to be generic over these types, too. This is where the fun starts.

At the very base level we have three kinds of types: amounts of time (like seconds, minutes, etc.), fixed points in time, as well as time ranges:

pub enum TimeType {
    Seconds(usize),
    Minutes(usize),
    // ...
    Years(usize),
    Point(C),
    Range(A, B),
}
// A, B and C being chrono types which are wrapped

As I assume right now, we cannot simply subtract and add our types (and thus chrono's types) without possible errors, so we have to handle them and return them to the user. Hence, we will create intermediate types which represent what is about to be calculated, so we can add and subtract (etc.) them without error:

enum OpArg {
    TT(TimeType),
    Add(Box<AddOp>), // boxed, as OpArg and the Op types are mutually recursive
    Sub(Box<SubOp>),
}

pub struct AddOp(OpArg, OpArg);
pub struct SubOp(OpArg, OpArg);

trait CalculateableTime {
    fn calc(self) -> Result<TimeType, Error>; // Error being some error type
}

with the trait implemented on the former types – maybe also on the enum, as I'll explain in a moment.

To explain why the CalculateableTime::calc() function returns a TimeType rather than a chrono::NaiveDateTime for example, consider this:

(minutes(15) - seconds(12)).calc()

and now you can see why: the result is an amount of time (here: 14 minutes and 48 seconds), not a point in time, so the function needs to return our own type instead of some chrono type here.

The OpArg type needs to be introduced to be able to build a tree of operations. In the calc() implementation for these types, we can then recursively call the function itself to calculate what has to be calculated. As the trait is implemented on TimeType itself, which then just returns Self, we automatically have the abort condition for the recursion. Note: this is not tail-recursive.

Optimize the types

After handing this article over to two friends for some review, I was told that the data structures can be merged into a single one. So: no traits required, no private data structures, just one enum with all functions implemented directly on it:

enum TimeType {
    Seconds(usize),
    Minutes(usize),
    // ...
    Years(usize),
    Point(C),
    Range(A, B),

    // boxed, as the enum is now recursive
    Subtraction(Box<TimeType>, Box<TimeType>),
    Addition(Box<TimeType>, Box<TimeType>),
}

and as you can see, also almost no generics.

After thinking a bit more about this enum, I concluded that even things like EndOfWeek, EndOfMonth and such have to go into it. Overall, we do not want any calculation to happen while writing down the code, only a lining-up of types, where the calculate function takes care of actually doing the work.
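
To illustrate the recursion idea, here is a minimal, self-contained sketch of such a calculate() function. It only handles the Seconds variant, uses String as a stand-in error type, and everything in it is an assumption about the design, not the actual implementation:

#[derive(Debug, Clone)]
enum TimeType {
    Seconds(usize),
    Addition(Box<TimeType>, Box<TimeType>),
    Subtraction(Box<TimeType>, Box<TimeType>),
}

impl TimeType {
    fn calculate(self) -> Result<TimeType, String> {
        match self {
            TimeType::Addition(a, b) => add(try!(a.calculate()), try!(b.calculate())),
            TimeType::Subtraction(a, b) => sub(try!(a.calculate()), try!(b.calculate())),
            // a plain value needs no work - the abort condition of the recursion
            other => Ok(other),
        }
    }
}

fn add(a: TimeType, b: TimeType) -> Result<TimeType, String> {
    match (a, b) {
        (TimeType::Seconds(x), TimeType::Seconds(y)) => Ok(TimeType::Seconds(x + y)),
        _ => Err("cannot add these variants yet".to_owned()),
    }
}

fn sub(a: TimeType, b: TimeType) -> Result<TimeType, String> {
    match (a, b) {
        // underflow handling omitted for brevity
        (TimeType::Seconds(x), TimeType::Seconds(y)) => Ok(TimeType::Seconds(x - y)),
        _ => Err("cannot subtract these variants yet".to_owned()),
    }
}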

Helper functions

Above I used some functions like seconds() or minutes() – these are just helper functions for hiding more complex type signatures and can hopefully be inlined by the compiler:

pub fn seconds(s: usize) -> TimeType {
    TimeType::Seconds(s)
}

So there is not really much to say for these.

Special Functions, Ranges, Iterators

To get the end of the year of a date, we must hold the current date already, so these functions need to be added to the TimeType type. Ranges can also be done this way:

now().until(tomorrow()) // -> TimeType::Range(_, _)

Well, now the real fun begins. When having a TimeType object, one should be able to construct an Iterator from it.

The iterator needs to hold the value by which it should increment each time, as well as a copy of the base value. With this, one could think of an iterator that holds a TimeType object and, every time next() is called, adds something to it and returns a copy of it.

Another way of implementing this would be to track how many times the iterator has been called, multiply this count by the increment and add the result to the base.

I like the latter version more, as the number of calculations needed to get the real value out of the returned TimeType instance does not grow with every iterator call.
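
A sketch of this counting approach, reusing the TimeType sketch from above – Every and mul_by() are hypothetical names, not decided API:

struct Every {
    base: TimeType,
    increment: TimeType,
    called: usize,
}

impl TimeType {
    // hypothetical helper scaling an amount; only Seconds is handled here
    fn mul_by(self, factor: usize) -> TimeType {
        match self {
            TimeType::Seconds(s) => TimeType::Seconds(s * factor),
            other => other,
        }
    }
}

impl Iterator for Every {
    type Item = TimeType;

    fn next(&mut self) -> Option<TimeType> {
        self.called += 1;
        // base + (increment * number of calls), built as an unevaluated tree
        Some(TimeType::Addition(
            Box::new(self.base.clone()),
            Box::new(self.increment.clone().mul_by(self.called)),
        ))
    }
}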

This way, one can write the following code:

let v: Vec<_> = now()
    .every(days(7))
    .map(TimeType::calculate)
    .take(5)
    .collect();

to retrieve five objects, starting from today, each separated by one week.

Next

What I think I'll do in the next iteration of this series is summarize how I want to develop this little crate. I guess test-driven is the way to go here, after defining the types described above.


Please note: This article was written a long time ago. In the meantime, I learned from a nice redditor that there is chrono::Duration, which is partly what I need here. So I will base my work (despite having already started in the direction I outlined in this article) on the chrono::Duration types and develop the API I have in mind with the functionality provided by chrono.

I deliberately did not alter this article after learning of chrono::Duration, so my thoughts are laid out the way I originally had them.

tags: #open-source #programming #software #tools #rust

Here I want to describe how I plan to refactor the logging back end implementation for imag.

This post was published on imag-pim.org as well as on my personal blog.

What we have

Right now, the logging implementation is ridiculously simple. What we do is: on every call to one of the logging macros, the log crate gives us an object with a few pieces of information (line number, file, log message, ...) – we apply our format and some color and write it to stderr.

This is of course rather simple and not really flexible.

What we want to have

I want to rewrite the logging backend to give the user more power over the logging. As we only have to rewrite the backend, and the log crate handles everything else, the actual logging calls look no different and “client” code does not change.

+----------------------------+
| imag code, libs, bins, ... |
+----------------------------+
              |
              | calls
              |
              v
+----------------------------+
| crate: "log"               |
+----------------------------+
              |
              | calls
              |
              v
+----------------------------+
| imag logging backend impl. |
+----------------------------+
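
The bottom box is the part that gets rewritten. As a rough sketch, such a backend boils down to implementing the Log trait of the log crate (shown here against the log 0.4 API; the ImagLogger name and its field are made up for illustration):

use log::{Level, Log, Metadata, Record};

struct ImagLogger {
    level: Level, // in imag this would come from config, env and CLI
}

impl Log for ImagLogger {
    fn enabled(&self, metadata: &Metadata) -> bool {
        metadata.level() <= self.level
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            // format and color would be applied here; kept plain for the sketch
            eprintln!("[imag][{}][{}:{}]: {}",
                      record.level(),
                      record.file().unwrap_or("<unknown>"),
                      record.line().unwrap_or(0),
                      record.args());
        }
    }

    fn flush(&self) {}
}

Such a logger is registered once at startup via log::set_boxed_logger() – in imag's case that would happen when the Runtime object is built.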

So what features do we want? First of all, the imag user must be able to configure the logging. Not only via the configuration file, but also via environment variables and, of course, command line parameters, with the former being overridden by the latter, respectively. This gives the user nice control: she can configure imag to log to stderr with only warnings being logged, but when calling a script of imag commands or calling imag directly from the command line, these settings can be temporarily overridden (for the script or the one command).
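
The precedence rule itself is tiny. A sketch, reusing log::Level from above – the Warn fallback is just an assumption:

use log::Level;

fn effective_level(cli: Option<Level>, env: Option<Level>, config: Option<Level>) -> Level {
    // command line beats environment variables beats configuration file
    cli.or(env).or(config).unwrap_or(Level::Warn)
}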

The configuration options I have in mind are best described by an example:

# The logging section of the configuration
[logging]

# the default logging level
# Valid values are "trace", "debug", "info", "warn", "error"
level = "debug"

# the destinations of the logging output.
# "-" is for stderr, multiple destinations are possible
default-destinations = [ "-", "/tmp/imag.log" ]

# The format for the logging output
#
# The format supports variables which are inserted for each logging call:
#
#  "%no%"       - The of the logging call
#  "%thread%"   - The thread id from the thread calling the log
#  "%level%"    - The logging level
#  "%module%"   - The module name
#  "%file%"     - The file path where the logging call appeared
#  "%line%"     - The line No of the logging call
#  "%message%"" - The logging message
#
# Functions can be applied to the variables to change the color of
# the substitutions.
#
# A format _must_ contain "%message%", else imag fails, because a format
# that drops the message would suppress logging, which should be forbidden
#
[logging.formats]
trace = "cyan([imag][%no%][%thread%][%level%][%module%][%file%][%line%]): %message%"
debug = "cyan([imag][%no%][%thread%][%level%][%module%][%file%][%line%]): %message%"
info  = "[imag]: %message%"
warn  = "red([imag]:) %message%"
error = "red(blinking([imag][uppercase(%level%)]): %message%)"

# Example entry for one imag module
# If a module is not configured or keys are missing
# the default values from above are applied
[logging.modules.libimagstore]
enabled = true
level = "trace"
destinations = [ "-" ]
# A format is only globally configurable, not per-module

One of the most complex things in here would be the format parsing, as variable expansion and functions to apply are some kind of DSL I have to implement. I hope I can do this – maybe there's even a crate for helping me with this? Maybe the shellexpand library will do?
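
For the plain variable expansion (without the color functions), a naive first version could be simple string replacement – a sketch, with only two of the variables handled:

fn render(format: &str, level: &str, message: &str) -> String {
    // naive substitution; the color-function DSL would need a real parser
    format
        .replace("%level%", level)
        .replace("%message%", message)
}

For example, render("[imag][%level%]: %message%", "debug", "foo") yields "[imag][debug]: foo".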

These things and configuration options give the user great power over the logging.

The approach

Because imag already logs a lot, I'm thinking about an approach where one thread is used for the actual logging. Because each logging call involves a lot of complexity, I want to move that work to a dedicated thread, with other threads talking to the logging thread via an MPSC queue.

Of course, this should be opt-in.

The idea is that the logging starts a thread upon construction (which is really early in the imag process, nearly one of the first operations done). This happens when the Runtime object is built, hence no “client code” has to be changed; all changes remain in libimagrt.

This thread is bound to the Runtime object, logging calls (via the logging backend which is implemented for the log crate) talk to it via a channel. The thread then does the heavy lifting. Of course, configuration can be aggregated on construction of the logging thread.

The logging thread is killed when the Runtime object is dropped (one of the last operations in each imag process). Of course, the queue has to be emptied before the logging is closed.
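
A bare-bones sketch of this mechanism with std::sync::mpsc (all names hypothetical, configuration and error handling omitted):

use std::sync::mpsc::{channel, Sender};
use std::thread::{self, JoinHandle};

// what the backend would send over the channel per logging call
struct LogMessage {
    formatted: String,
}

fn spawn_logging_thread() -> (Sender<LogMessage>, JoinHandle<()>) {
    let (tx, rx) = channel::<LogMessage>();
    let handle = thread::spawn(move || {
        // this loop ends when all senders are dropped, draining
        // the queue before the thread exits
        for msg in rx {
            eprintln!("{}", msg.formatted);
        }
    });
    (tx, handle)
}

Dropping the last Sender (when the Runtime is dropped) ends the loop, and join()ing the handle guarantees the queue is emptied before the process exits.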

I am also thinking about converting the code base to use the slog crate, which offers structured logging. But I'm not yet sure whether we would benefit from that, because I don't know whether we would need to pass a state object around. If that is the case, I cannot do this, as it would introduce a lot of complexity which I don't want to have. If no such object needs to be passed around, I still have to evaluate whether the slog crate is a good idea – and of course it would also increase the number of (complex) dependencies by one... and I'm not sure whether the benefits outweigh the inconveniences.

tags: #linux #open source #programming #rust #software #tools #imag

A major problem with open source projects is that, most of the time, it is not clearly visible what state a project is in. At least not for a casual user of the software, for example a guy like me who just wants to try it.

A version number can be a way to express a certain stability, especially if the project uses semver, though even a 1.0 in semver does not express the state a project is in, only that you won't see breaking changes between (minor) version upgrades.

This is my attempt at defining a scale which can be used to describe the maturity of an open source project.

The Levels

In the following, I'm defining Levels for the stability of an open source project.

I define them based on my experiences and views. This means that you might disagree with their definition or maybe even with the need for such a scale. This is fine. I wrote this article because I think we need a discussion about this and I'm more than happy to hear your opinion! Maybe, though, we should not do this in a closed manner but on a platform where everyone can participate, for example on a community board like reddit.

Level 0

Computer scientists start with zero. Projects also start at zero. So do I.

Level 0 is the fundamental idea of a software. You think you, your companion or maybe the whole world needs a tool for a thing. You haven't thought about this idea that hard; you have no plan for how to implement it and haven't even thought about which language to use.

Then your project is in Level 0.

Level 1

You wrote some code. That's awesome! But anyway, the code might not even compile / the interpreter might complain about invalid syntax. The code is just a basic concept – an expression of thoughts in textual form.

Maybe it does compile / the syntax is valid, but still the tool does not yet do what you're planning to do.

Level 2

Wow, now the tool works. At least for you. You know how to build and use the tool.

This is fine. You might be the only developer by now and you don't care that much about other developers, because you're still playing around. You reached Level 2.

Level 3

If you did not share your code with others you might want to do that now, in Level 3. Because now, your code works for you and maybe even your neighbor programmer who thinks you did something cool and wants to try it out.

Still, your code is hacky, you might not even have documentation on how to use or build it, but you're confident that you could at least show the world that you're working on something which does something.

(I would say this is the level where most small projects on github are when they're published).

Level 4

Now you care about contributions. You include a basic guide on how you do your build, so others are able to reproduce it and get a working software package.

You might respond to issues and pull requests and you start interacting with other people when they ask questions. Maybe you even posted your tool on reddit, hackernews or another site of your choice.

Level 5

Community is important. You try to help everybody getting their first contribution merged. Your tool has grown a lot since Level 0 (or Level 3). You think about coding style, development flow and all these things which have to do with your project, but not with solving the problem you're actually working on.

Welcome to Level 5!

This also means that at least some people with a technical background are now able to understand what you're doing and why you're doing it. They might even be able to build your tool to try it out – maybe even on another platform than the one you initially developed on.

Level 6

The community around your project is more than just your friend or fellow student hacking on the thing: people from the internet start watching your project, and from time to time an anonymous person opens a pull request to help you implement a feature or fix a bug.

This is awesome and it really feels good getting other people involved in your codebase!

Level 7

A major event in your project is the first issue/bug report/change request/feature request from a person who is not involved in development of the tool itself and does not even want to – but still wants to use or at least try the tool.

This is when you entered Level 7 – you should now start thinking about how to talk to users of your software!

Level 8

You have a small but dedicated group of users. That means that there are people on this planet using your software. And these people might not even know the programming language you're programming your tool in!

You have people answering questions on IRC who are also developers of your software, so you don't have to answer every question yourself.

Welcome to Level 8.

Now you have a certain responsibility to these users to keep your software running, because they are interested in using your software!

Level 9

You commit to stability, good documentation and bug fixes as soon as possible.

This is because your community has grown. There are people asking questions in your IRC channel whom you can't remember having seen there before. You're maybe even getting mails about how to use your software.

Everyone in your community knows that this is still a hobby for you, so you might be unresponsive for a certain amount of time, but they still hope to get bugs fixed, functionality implemented and questions answered – sooner rather than later.

At this point, the development of your software is not the main focus anymore. The users are. The support truly is. Development has slowed down a lot – not least because of your commitment to API stability.

Level 10

At this point, your software (hopefully) is not in pre-1.0 version anymore. This is because you have a really large user base by now. You have IRC channels, maybe a forum, a message board, maybe even a subreddit, google groups or facebook groups for supporting users.

Other developers are heavily involved in your software. There are people around who can do magic with your software. Maybe even people who help develop the software but never contributed a single line of code, because they contribute translations, style and design, documentation, or tutorials and guides. These things might even pop up on the internet without you noticing.

Welcome to Level 10 – where our journey ends. Your software is now used by a lot of people and hopefully your bus factor has grown to at least 2.

Where the journey ends

After writing the 10 Levels of open source project maturity, I could rewrite the title of this article to “Open source project community size definitions” or something like that. But I won't do that, because it would not express what I was thinking about before writing these words down.

Overall, one might disagree strongly with these points. As said above, this is okay. The scale might be extended as well. Maybe there are Level 11, Level 12 and Level 42 as well? I don't know, I did not think further than Level 10.

Examples for the Levels

The Levels above do not have any examples stated. That's because I want readers to form their own thoughts about this.

Either way, I'm providing some examples of what I'd assign to these Levels. I hope they match the reader's view:

  • Level 0 – No example necessary I guess.
  • Level 1 – No example necessary I guess.
  • Level 2 – No example necessary I guess.
  • Level 3 – No example necessary I guess.
  • Level 4 – Maybe my tool nixos-scripts is a good example for this Level.
  • Level 5 – This is where I actually think my own project is: imag. Besides that I'd include khal here.
  • Level 6 – I guess bjoern is a nice example for this.
  • Level 7 – I failed finding a good example for this. Feel free to suggest one.
  • Level 8 – tig might be a good example here.
  • Level 9 – Tools like dwm, surf and other tools from the suckless community would be a nice example here, I guess.
  • Level 10 – This is where tools like git, libreoffice, KDE and other “big” pieces of software should be placed.

If anyone feels offended because I categorized their tool in a level which is inaccurate for their tool, feel free to send me an email and I'll remove the link to your project.

You can always argue about the “size” of a community as well, so these points are all from my perspective and might not be that accurate from your point of view.

The end

Thanks for joining me on this journey. I hope we can have an open discussion about this.

As stated above, discussing these things via email might not be ideal, as discussion is, in my opinion, not a two-way communication but a multi-directional one.

So, if you care at all or maybe even want to write about these things, feel free to submit a link to your blog post about this and I'll link them here - maybe we can have a discussion on reddit or somewhere, but I'm not sure where to post a thread for this topic.

tags: #open source #software

The imag-pim.org website just got a new face.

I was really eager to do this because the old style was... not that optimal (I'm not a web dev, let alone a web designer).

Because the site is now generated using hugo, I also copied the “What's coming up in imag” blog posts over there (I'm keeping the old ones on this blog so as not to break any links). New articles will be published on the imag-pim.org website.

This very blog article will be published on both sites, of course.

tags: #linux #open source #rust #software #tools #imag

Right now, github shows you this:

github is down

And these things will happen more frequently in the future, I assure you!

In the first half of 2017, we already had 3 major service outages, 3 minor service outages and 21 other problems. Yes, indeed, that is very good service quality. It really is. Still, it is not optimal. Github advertises itself with 22M developers, 59M repositories and 117k businesses world-wide, which is a freakin' lot.

I really like github, it is a great platform for the open-source community and individual projects and developers.

But. There's a but.

Github will not scale endlessly. It will vanish at some point. Maybe not in 2 years, maybe not in 5 or even 10 years. But at some point, a competitor will step up and get bigger and better than github. Or it will be taken down by someone. Who knows.

But when that happens we, as a community, have to be prepared. And that's why we need distributed issue tracking like we implemented with git-dit.

Yes, it is unfortunate that git-dit itself is hosted on github. And we do not have the issues from github in the repository, yet, as there is no mapper available. But we will get there.

I won't go into detail about how git-dit works here; there's a talk from GPN17 on youtube about it (in German), where you can learn about these things.

With git-dit, we won't be tied to github, and if github vanished overnight, we would be able to continue development seamlessly, because the issues are mirrored into the repository itself. In fact, we wouldn't even need github in the first place, because the repository itself would contain everything which is needed for development.

But we are not there yet.

If you're feeling brave, you're more than welcome to try out git-dit or contribute to the codebase.

tags: #git #github #open-source #software #tools

This is the 25th iteration on what happened in the last four weeks in the imag project, the text based personal information management suite for the commandline.

imag is a personal information management suite for the commandline. Its target audience is commandline and power users. It does not reimplement personal information management (PIM) aspects, but re-uses existing tools and standards to be an addition to an existing workflow, so one does not have to learn a new tool before being productive again. Some simple PIM aspects are implemented as imag modules, though. It gives the user the power to connect data from different existing tools and add meta-information to these connections, so one can do data-mining on PIM data.

What happened?

Luckily, I can write this iteration report on imag. After there was no blog post about the progress of imag in April this year, due to a lack of time on my side, I'm now very happy to be able to report: we had progress in the last 4 (8) weeks!

Let's have a look at the merged PRs (I'm now starting to link to git.imag-pim.org here):

  • #915 merged a libruby dependency for travis.
  • #918 removed some compiler warnings.
  • #917 merged some travis enhancements/fixes.
  • #916 superseded PR #898, which simplified the implementation of the FoldResult extension.
  • #895 started a re-do of the ruby build setup.
  • #911 changed the interface of the StoreId::exists() function to return a Result now.
  • #904 added initial support for annotations in the libimagentrylink library, which gives us the possibility to add annotations to links. There are no tests yet and also no remove functionality.
  • #921 was a cleanup PR for #911 which broke master unexpectedly.
  • #914 fixed a compiler warning.
  • #929 removed libimagruby entirely, because we couldn't merge it to master as a dependency on master started to fail. The whole ruby thing is a complete mess right now: dependencies are not found, tests fail because of this... it is a mess.
  • #927 removed unused imports.
  • #924 updated links in the readme file.
  • #926 added tests for the StoreId type.
  • #919 merged preparations for the 0.3.0 release, which is overdue by one month right now, because the ruby scripting interface does not work.
  • #930 updated the toml-rs dependency to 0.4, which gives us even more superpowers.
  • #932 added some tests for the configuration parsing functionality.
  • #933 added a new dependency: is-match, a library I extracted from the imag source code into a new crate.

The libimagruby mess

Well, this is unfortunate.

libimagruby should have been ready and usable a month ago – and it is (at least the basic things, and a few things are tested, too)! But as the CI does not work (fuck you, travis!) I cannot merge it. I also don't know how to properly package a Ruby gem, so there's that.

I really hope @malept can help me.

I'm already thinking about adding another scripting interface so I can move on and start implementing frontends for imag – for example a lua or ketos interface. Lua might be the better idea, as there are libraries around for certain things, while there are none for ketos (I assume).

What will happen

I honestly don't know. I will continue working on imag, of course, but right now, the libimagruby is stalled. I'm not sure where to start working besides libimagruby – a Ruby scripting interface is what I need right now, but it won't work ... so there's that.

As soon as the Ruby interface is ready, we can have nice things. But right now, it is really hard to continue.

tags: #linux #open source #programming #rust #software #tools #imag

I just released my task-hookrs crate in version 0.3.0!

Here's what changed.

This release was made because I could update the serde dependency to 0.9, which gave me the ability to remove tons of code:

$ git diff --shortstat v0.2.2..v0.3.0
11 files changed, 126 insertions(+), 492 deletions(-)

because we now have custom derive! So I could remove my custom implementations for Serialize and Deserialize.
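For illustration: with custom derive (stable since rustc 1.15), hand-written Serialize/Deserialize impls collapse into a single attribute. A sketch with serde 0.9 and the serde_derive crate – the Task fields here are made up, not the actual task-hookrs definition:

#[macro_use]
extern crate serde_derive;
extern crate serde;

// the derive generates the Serialize/Deserialize impls at compile time
#[derive(Serialize, Deserialize)]
struct Task {
    description: String,
    status: String,
}
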

v0.3.0 also has some changes in the API, which is why this is not just another 0.2.x release. This release also pins the minimum compatible compiler version to rustc 1.15.

So, all in all, this release really was a step forward from my point of view.

tags: #programming #software #open-source #tools