musicmatzes blog

rust

Inspired by the Call for Community Blogposts, I want to summarize my experiences and thoughts on Rust in 2017 and what I am excited about for 2018.

Reflecting 2017

2017 was an amazing year for Rust. We got 8 releases of Rust itself! We got basic procedural macros allowing custom derive (also known as “macros 1.1”) in the first release of the year (1.15.0) – this is what made serde 1.0 possible, if I'm not mistaken. We got 103 stabilized APIs in 2017. This is incredible! Compile times improved and the tooling got so much better. I mean, it was awesome before. But now it is even better!

On a personal side I got a lot better at programming Rust. I wrote about 37800 lines of Rust code in my main project imag and 17380 lines in other crates (authored and contributed, according to a bit of git-fooing around). Is that a lot? I don't know.

Hopes for 2018

Now let's talk about 2018. This year will be amazing, I am sure.

Language features

I am really excited about the “impl Trait” feature. Being able to return a trait from a function will reduce the imag codebase so much, for example: we no longer need to define our own iterator helper types but can simply return impl Iterator<Item = Whatever>!
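Just to illustrate (this is a sketch of how it could look once the feature is stable – the function and names are made up, not imag code):

fn even_numbers(upto: usize) -> impl Iterator<Item = usize> {
    // the concrete adaptor type (a Filter over a Range) stays hidden
    // behind "impl Iterator", no hand-written helper type needed
    (0..upto).filter(|n| n % 2 == 0)
}

Every call site only sees “something that iterates over usize”, which is exactly what we want for the iterator helpers in imag.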

I have no other hopes for the language itself, because what we have right now is really amazing and I honestly cannot think of ways it could be improved.

Ecosystem needs / Tooling enhancements

I'm still a bit concerned about cargo's functionality for building workspace projects. From what I see, building two different crates in one workspace which share dependencies rebuilds those shared dependencies. That is surely not intended, but it's what I observe. I did not dive deep into this, so I might be wrong, though.

What I have been thinking about for several weeks now is a cargo/rust tool for calculating code metrics. I think of things like documentation/code ratio, average function length – simple things... but also about cohesion and coupling metrics and other inter-module/inter-crate metrics.

Also, I tried to set up the Rust language server (RLS) for vim on my workstation and failed hard. I guess this is a packaging problem with my distro (NixOS), though. Either way, being able to install the RLS with a stable toolchain would be nice!

Crates I am still missing / should be improved

There are some crates I would love to have which do not exist yet.

  • A (high-level) email crate. There is the email crate, but it is mainly unstable and does not even have a 0.1.0 release yet. There's also lettre_email, which is at 0.7.0, but it doesn't support parsing of emails.
  • I really hope rust-vobject (one of the crates I contributed to in 2017) will improve even more and become the de-facto standard crate for handling vcard and icalendar data.
  • I follow the development of Cursive and from what I see it is awesome. I really hope people start writing high-level objects for cursive (like a file explorer, a form builder, a text editor like thing, a tab helper and so on) so I have to do less work when implementing a TUI for imag. (To be fair, there are already some crates available).
  • I hope there will be some awesome crates for handling multi-media files and reading/writing their metadata. Especially audio formats and video formats are important to me with imag.
  • Rust bindings for pass would be awesome.
  • Markdown (and other formats, like asciidoc, restructured text, textile and maybe even bbcode) parsers and renderers should be written or improved.
  • An API for IPFS or maybe even a protocol implementation.
  • Qt bindings (yeah, I have high hopes for 2018)

There are possibly thousands more... But I won't list them all.

tags: #open-source #programming #software #rust

34c3 was awesome. I had prepared a blog article as my recap, but failed to produce enough content for it. That's why I will simply list my “toots” from Mastodon here, as a short recap of the whole congress.

  • (2017-12-26, 4:04 PM) – Arrived at #34c3
  • (2017-12-27, 9:55 AM) – Hi #31c3 ! Arrived in Adams, am excited for the intro talk in less than 65 min! (Yes, I got the tag wrong on this one.)
  • (2017-12-27, 10:01 AM) – Oh my god I'm so excited about #34c3 ... this is huge, girls and boys! The best congress ever is about to start!
  • (2017-12-27, 10:25 AM) – Be awesome to each other #34c3 ... so far it works beautifully!
  • (2017-12-27, 10:31 AM) – #34c3 first mate is empty.
  • (2017-12-27, 10:46 AM) – #34c3 – less than 15 minutes. Oh MY GOOOOOOOOOD
  • (2017-12-27, 10:49 AM) – Kinda sad that #fefe won't do the Fnord this year at #34c3 ... but I also think that this year was too shitty to laugh about it, right?
  • (2017-12-27, 10:51 AM) – #34c3 oh my good 10 minutes left!
  • (2017-12-27, 11:02 AM) – #34c3 GO GO GO GO!
  • (2017-12-27, 11:16 AM) – Vom Spieltrieb zur Wissbegierig! #34c3
  • (2017-12-27, 12:17 PM) – People asked me things because I am wearing a #nixos T-shirt! Awesome! #34c3
  • (2017-12-27, 12:59 PM) – I really hope i will be able to talk to the #secushare people today #34c3
  • (2017-12-27, 1:44 PM) – I talked to even more people about #nixos ... and also about #rust ... #34c3 barely started and is already awesome!
  • (2017-12-27, 4:28 PM) – Just found a seat in Adams. Awesome! #34c3
  • (2017-12-27, 8:16 PM) – Single girls of #34c3 – where are you?
  • (2017-12-28, 10:25 AM) – Day 2 at #34c3 ... Yeah! Today there will be the #mastodon #meetup ... Really looking forward to that!
  • (2017-12-28, 12:32 PM) – Just saw ads for a #rust #wayland compositor on an info screen at #34c3 – yeah, awesome!
  • (2017-12-28, 12:37 PM) – First mate today. Boom. I'm awake! #34c3
  • (2017-12-28, 12:42 PM) – #mastodon ads on screen! Awesome! #34c3
  • (2017-12-28, 12:45 PM) – #taskwarrior ads on screen – #34c3
  • (2017-12-28, 3:14 PM) – I think I will not publish a blog post about the #34c3 but simply list all my toots and post that as a blog article. Seems to be much easier.
  • (2017-12-28, 3:15 PM) – #34c3 does not feel like a hacker event (at least not like what I'm used to) because there are so many (beautiful) women around here.
  • (2017-12-28, 3:36 PM) – The food in the congress center in Leipzig at #34c3 is REALLY expensive IMO. 8.50 for a burger with some fries is too expensive. And it is even less than the Chili in Hamburg was.
  • (2017-12-28, 3:43 PM) – Prepare your toots! #mastodon meetup in less than 15 minutes! #34c3
  • (2017-12-28, 3:50 PM) – #34c3 Hi #mastodon #meetup !
  • (2017-12-28, 3:55 PM) – Whuha... there are much more people than I've expected here at the #mastodon #meetup #34c3
  • (2017-12-28, 4:03 PM) – Ok. Small #meetup – or not so small. Awesome. Room is packed. #34c3 awesomeness!
  • (2017-12-28, 4:09 PM) – 10 minutes in ... and we're already discussing pineapples. Community ftw! #34c3 #mastodon #meetup
  • (2017-12-28, 4:46 PM) – Limiting sharing of #toots does only work if all instances behave! #34c3 #mastodon #meetup
  • (2017-12-28, 4:56 PM) – Who-is-who #34c3 #mastodon #meetup doesn't work for me... because I don't know the 300 usernames from the top of my head...
  • (2017-12-28, 5:17 PM) – From one #meetup to the next: #nixos ! #34c3
  • (2017-12-28, 5:57 PM) – Unfortunately the #nixos community has no space for their #meetup at #34c3 ... kinda ad-hoc now!
  • (2017-12-28, 7:58 PM) – Now... Where are all the single ladies? #34c3
  • (2017-12-28, 9:27 PM) – #34c3 can we have #trance #music please?
  • (2017-12-28, 9:38 PM) – Where are my fellow #34c3 #mastodon #meetup people? Get some #toots posted, come on!
  • (2017-12-29, 1:44 AM) – Day 2 ends for me now. #34c3
  • (2017-12-29, 10:30 AM) – Methodisch Inkorrekt. Approx. 1k people waiting in line. Not nice. #34c3
  • (2017-12-29, 10:43 AM) – Damn. Notebook battery ran out of power last night. Cannot check mails and other unimportant things while waiting in line. One improvement proposal for #34c3 – more power lines outside hackcenter!
  • (2017-12-29, 10:44 AM) – Nice. Now the wlan is breaking down. #34c3
  • (2017-12-29, 10:57 AM) – LAOOOOLAAA through the hall! We did it #34c3 !
  • (2017-12-30, 3:45 AM) – 9h Party. Straight. I'm dead. #34c3
  • (2017-12-30, 9:08 PM) – After some awesome days at the #34c3 I am intellectually burned out now. That's why the #trance #techno #rave yesterday was exactly the right thing to do!
  • (2017-12-30, 11:35 PM) – Where can I get the set from yesterday night Chaos Stage #34c3 ??? Would love to trance into the next year with it!
  • (2017-12-31, 11:05 PM) – My first little #34c3 congress résumé: I should continue on #imag and invest even more time. Not that I do not continue it, but progress is slowing down with the last months of my masters thesis... Understandable I guess.

That was my congress. Yes, there are few toots after the 28th... because I was really tired by then and also had people to talk to all the time, so there was little time for microblogging. All in all: It was the best congress so far!

tags: #ccc #social

This post was written during my trip through Iceland and published much later than it was written.

In this article, and maybe in the next few, we will focus on things around the code rather than on direct code properties. I hope that's okay.

Planning of an application or library is not easy, not at all. But how much planning do we actually do before writing code? And should we do more?

My thoughts on the subject.

What we've learned

Anyone who has studied computer science should know at least some UML diagram types, like class diagrams, flow charts, module plans and use case diagrams. They are used in (let's call it) “normal” software development and in the professional world out there.

But when we develop open source software for our own needs and maybe for our friends, we often do that in our chambers at home. Class diagrams are often not drawn at all, and I can say that I have never seen a hobby programmer draw a use case diagram before writing the code of an application.

Why we don't use it

Why is that? Well, because open source software is often a hobby type of thing, there is often no need for planning ahead. A hobbyist is able to hold the use cases, simple class diagrams and flow charts “in his mind” because he has great knowledge of the domain.

In fact, as he defines the domain in its entirety, he is stakeholder, project leader, software architect, programmer, tester and marketing guy all at the same time. He knows what problems are about to be solved and can therefore adjust every aspect of the application to the actual needs.

This holds true for small and medium sized applications or code bases, where the problem is of a certain complexity but not too big. Basically, in the open-source-programming-at-home world, every aspect of the domain has to fit into one head without much effort. With a bit of training, I believe, one can even get to a point where only a few aspects of the domain have to be in a person's mind at a time to be able to work on a solution.

But there is certainly a point where the effort needed to solve a specific problem explodes. One can still write software to solve the problem at hand, but not in reasonable time.

So why do we hobby programmers not use the planning tools we learned about in university? Why don't we use diagrams to make things clearer and better documented, even before the real programming starts? The answer is quite simple: because it annoys the hell out of us. We don't like to plan ahead. We don't like to adjust plans as soon as we find out that a small aspect of our library could be changed to gain more flexibility and overall goodness. We don't like to re-check our plans before writing down the next module.

Coding is fun, planning is not.

But should we use these things?

In my opinion, this is foolish. We really should use the things we learned in university to plan out software and of course also to document it. It would be such a huge improvement to everything to simply think a bit more about it before actually implementing it!

How we do it

What we do, and why we do not use tools to plan ahead, can be explained with one sentence: We program from the user interface down to the implementation, because the other way round is too complicated. Or, in other words: we program top-down because bottom-up needs planning and is therefore not that easy.

Of course, I'm speaking about the average case. I've programmed bottom-up before but, for me, it seems much more error prone than top-down does, especially without a plan.

Also, I do not say that top-down is not error prone. Not at all. When writing an API without an actual implementation in mind, one easily ends up sacrificing cleanness and speed at some points to keep the API nice, which is not always a good idea. So top-down is only good as long as we get it right.

Tooling

Tooling is one big problem in this context. We do not have a toolchain for planning just yet. At least I do not know of one that I would like to use. Because we are really good at controlling (versioning, moving around, managing) our source code (for example with git, and to some extent github), we also want to be able to do this with charts and diagrams. But we also want the niceness of SVG-rendered graphics. We don't want to play around with layout all day long, but use tools that simply get the job done.

And there are no such tools available.

Sure, one can use graphviz to design such things, but then again we do not have a nice overview of what's going on while editing our work. One could use ASCII art to draw all those things, but hey... ASCII art. We are better than that, aren't we? We could render the ASCII art into SVG... though the tooling there is not yet as good as it should be. And even if it were, version controlling these things with git is (I fail to believe otherwise) painful.

Conclusion

Well, I can only conclude the obvious here. We need better tooling for the open source programming community to do their planning, if they need to. Clearly, one does not always have to (or want to) plan things before trying out. But when one does, the tooling should be there and be useful and help with the process.

Next

In the next episode we will talk about version control of open source software projects. I'm not going into details about git or other systems used, but rather about the style in which they should be used, so everyone is pleased with it. This might be strongly biased, but hey, isn't this whole article series biased?

tags: #open-source #programming #software #tools #rust

This post was written during my trip through Iceland and published much later than it was written.

What is a nice and good API? How is “nice” defined when it comes to library interfaces? That's the question I want to discuss in this post – and also how you can create a nice API in your open source library without having studied a topic like software architecture or similar.

Definition of a “nice”/“easy to use” API

But first, we have to define what makes an API good. And that's not that easy, because this topic is highly subjective.

For me, a good API is one where I can get the job done without thinking much about it. That means there shouldn't be much setup code involved in my code just to use the library. So no factory hell if the only thing I want to have is the current time, for example. This also means that the API has to be decently high-level, but without losing the ability to do fine-grained work if necessary. For the most part, low-level things (for example implementation details) are not interesting for me. But when I want to bit-fiddle around with the library, it should let me.

If a builder, factory or some other mechanism is necessary to produce objects in some way, the library should make clear (documentation-wise but also code-wise) why it is needed. There's no point in making the user call the tenth factory instantiation if it is not necessary – it only makes the user's codebase blow up in size and complexity.

The naming of things in the library should be good, appropriate and, for the most part, consistent. If a function on an object which returns the string representation of that object is named to_string, it should be named that way for all types from that library, not only some parts of it.

Statelessness

Calling functions of your API should always result in the same values for the same arguments. That does not mean that your API should be pure in the functional programming sense, but rather that the actions executed when calling a function should not result in some library-internal variables being set, changed or unset. This is easily achievable by letting the user of the API hold an object that carries the state, with the functions of your API working on that value. In short: your library should not have global variables.
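To illustrate the pattern, here is a minimal sketch (all names are hypothetical): the state lives in a value the user owns, and the library only ever works on that value.

pub struct Connection {
    verbose: bool,
}

// construction is the only place where state is created
pub fn connect(verbose: bool) -> Connection {
    Connection { verbose: verbose }
}

impl Connection {
    // same arguments on the same Connection always yield the same result;
    // nothing library-internal is mutated behind the user's back
    pub fn format_request(&self, path: &str) -> String {
        if self.verbose {
            format!("GET {} HTTP/1.1 (verbose)", path)
        } else {
            format!("GET {}", path)
        }
    }
}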

This simple design pattern already results in easy to use APIs and a nice user experience.

Error exposure

Good libraries don't hide errors. Indeed, it is even better if errors are exposed to the user as much as possible, because the user of the library knows best when and how to handle errors – even those from your library.

I'm also a big fan of lots of error cases. The more error cases (the better a user of a library can distinguish between different errors), the better. This way, you let the user decide where she doesn't need to distinguish between two almost-equal error cases and where it is better to handle them independently. If your library does not give that opportunity, the user has to write ugly spaghetti-code handling to be able to tell what is going on. Of course, these things have to be documented properly.

Another thing that can come in handy is when your error types or your library expose functionality to translate errors into text which can be shown to a user of your application. Nothing is worse (from a user's point of view) than a “CallOnInconsistenStateObjectBuilderFactory on line 2832” error message shown in a user-facing interface (and trust me, I've seen such things already).
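As a hedged sketch of both points (the error names are invented for illustration): a fine-grained error enum plus a Display implementation for user-presentable text.

use std::fmt;

#[derive(Debug)]
pub enum FetchError {
    ConnectionRefused,
    Timeout { seconds: u64 },
    InvalidResponse(String),
}

impl fmt::Display for FetchError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            FetchError::ConnectionRefused =>
                write!(f, "could not connect to the server"),
            FetchError::Timeout { seconds } =>
                write!(f, "the server did not answer within {} seconds", seconds),
            FetchError::InvalidResponse(ref detail) =>
                write!(f, "the server sent an invalid response: {}", detail),
        }
    }
}

The user can match on exactly the cases she cares about and fall back to the Display text for everything else.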

Completeness

Nothing is worse than an API that is not complete. I mean, don't get me wrong – sometimes one does not think of all the cases a library could be used for – and that's completely okay. But some things are too obvious to be left out. For example, if you provide functions to transform your time object from local time into GMT, why wouldn't you provide functions for converting it into UTC or EST? These matter, too!

The same goes for cleanup routines. In some languages it is necessary to provide cleanup routines for your objects. If your library exposes alloc_vacation_location_obj(), it should also provide free_vacation_location_obj()! Sure, a user could use free(), but that is not nice API-wise. Even if your function does nothing more than call free(), it is better to provide a dedicated function – if you later want to include some more cleanup in a new version of your library, a user does not have to think about it that much when upgrading their dependencies.
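Translated into Rust terms, the pairing could look like this minimal sketch (names borrowed from the example above, everything else hypothetical):

pub struct VacationLocation {
    name: String,
}

pub fn alloc_vacation_location_obj(name: &str) -> VacationLocation {
    VacationLocation { name: name.to_string() }
}

pub fn free_vacation_location_obj(obj: VacationLocation) {
    // today this merely drops the value – a later version of the library
    // can add flushing, unregistering or logging here without any caller
    // having to change
    drop(obj);
}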

Consistency

We played the naming game already, but it always comes back to us, right? Consistent naming is one of the most important things in an API. If allocating worked with functions prefixed with new_ all the time, it shouldn't be done with alloc_ this time. Also not in later versions of your library. Not even in a major version bump.

Even more important than naming is behaviour. A function named with some alloc prefix should only allocate, never initialize or do other fancy stuff (debugging output excluded, where necessary).

Next

In the next episode we will talk about how one can plan an application.

tags: #open-source #programming #software #tools #rust

In this blog post, which might turn into a short series, I want to plan a rust library crate and write notes down how to implement it.

This article was yet another one that I wrote while being on my trip through Iceland. As you can see – my head never stops thinking about problems.

Usecase

So first of all, I want to write down the use case of this library. I had this idea when thinking about how to design a user frontend for imag (of course) and came to the conclusion that Rust lacks a library for such a thing. So why not write one?

I want to design the user interface of this library crate approximately like Rails did with its implementation of the same functionality for Ruby (bear with me, I'm not that involved in the Ruby world anymore, so I don't know whether this is actually Rails or just another gem that comes with it).

So what I want to be able to do is something like this:

let event_date = today() - days(2) + weeks(10);

for example. I'm not yet entirely sure whether it is possible to actually do this without returning Result<_, _> instead of real types (and because I'm in Iceland without an internet connection, I cannot check). If Results need to be returned, I would design the API in a way that these functions and calls only create an AST-like object tree, which can then be asked, via a function call, to calculate the final result:

let event_date = today() - days(2) + weeks(10);
let event_date = try!(event_date.calc());

But even more ideas come to mind when thinking about functionality this library may provide:

// Creating iterators
today().repeat_every(days(4)) // -> endless iterator

// Convenience functions
(today() + weeks(8)).endofmonth() // the end of the month of the day in 8 weeks

today().endofyear().day_name() // name of the day at the end of the current year

today().until(weeks(4)) // range of time from now until in 4 weeks

// more ...

Later on, a convenient parser could be put in front of all this, so a user can actually provide strings which are then parsed and calculated:

calculate("now - 4h + 1day")

This could then, of course, be used in user-facing interfaces as well.

Core Data types

As the foundation of this library would be the awesome chrono crate, we do not have to reimplement all the time-related things. This eases everything quite a lot and also ensures that I do not duplicate work which others have done way better than I could have.

So at the core of the library, we need to encapsulate chrono types. But there are many user-facing types in chrono and we cannot assume we know which of them our users need. So we have to be generic over these types, too. This is where the fun starts.

At the very base level we have three kinds of types: amounts (like seconds, minutes, etc.), fixed points in time, as well as time ranges:

pub enum TimeType<A, B, C> { // A, B and C being chrono types which are wrapped
    Seconds(usize),
    Minutes(usize),
    //...
    Years(usize),
    Point(C),
    Range(A, B),
}

As I assume right now, we cannot simply subtract and add our types (and thus chrono's types) without possible errors, so we have to handle them and return them to the user. Hence, we will create intermediate types which represent what is about to be calculated, so we can add and subtract (etc.) them without error:

enum OpArg {
    TT(TimeType),
    Add(Box<AddOp>), // Box added here: the type is recursive and needs indirection
    Sub(Box<SubOp>),
}

pub struct AddOp(OpArg, OpArg);
pub struct SubOp(OpArg, OpArg);

trait CalculateableTime {
    // the concrete error type was left open in the original sketch
    fn calc(self) -> Result<TimeType, CalcError>;
}

with the trait implemented on the former types – and maybe also on the enum, as I will explain in a few words.

To explain why the CalculateableTime::calc() function returns a TimeType rather than, for example, a chrono::NaiveDateTime, consider this:

(minutes(15) - seconds(12)).calc()

and now you can see why this actually needs the function to return our own type instead of some chrono type: the result here is an amount of time, not a point in time.

The OpArg type needs to be introduced to be able to build a tree of operations. In the calc implementation for these types, we can then recursively call the function itself to calculate what has to be calculated. As the trait is implemented on TimeType itself, which then just returns self, we automatically have the abort condition for the recursion. To note: this is not tail-recursive.
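Just to illustrate the recursion, here is a hedged sketch (combine_add and combine_sub stand in for the actual date arithmetic, and CalcError is the assumed error type – none of these names are final):

impl CalculateableTime for OpArg {
    fn calc(self) -> Result<TimeType, CalcError> {
        match self {
            // abort condition: a plain TimeType "calculates" to itself
            OpArg::TT(tt) => Ok(tt),
            OpArg::Add(op) => {
                let AddOp(a, b) = *op;
                // recurse into both sides, then combine (hypothetical helper)
                combine_add(try!(a.calc()), try!(b.calc()))
            },
            OpArg::Sub(op) => {
                let SubOp(a, b) = *op;
                combine_sub(try!(a.calc()), try!(b.calc()))
            },
        }
    }
}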

Optimize the types

After handing this article over to two friends for some review, I was told that the data structures can be unified into a single one. So no traits required, no private data structures – just one enum with all functions implemented directly on it:

enum TimeType {
    Seconds(usize),
    Minutes(usize),
    //...
    Years(usize),
    Point(C),
    Range(A, B),

    // the recursive variants need indirection (Box) to have a known size
    Subtraction(Box<TimeType>, Box<TimeType>),
    Addition(Box<TimeType>, Box<TimeType>),
}

and as you can see, almost no generics remain.

After thinking a bit more about this enum, I concluded that even things like EndOfWeek, EndOfMonth and the like have to go into it. Overall, we do not want a single calculation to happen while writing down the code – only a lining-up of types, where the calculate function takes care of actually doing the work.

Helper functions

Above I used some functions like seconds() or minutes() – these are just helper functions for hiding more complex type signatures and can hopefully be inlined by the compiler:

pub fn seconds(s: usize) -> TimeType {
    TimeType::Seconds(s)
}

So there is not really much to say for these.

Special Functions, Ranges, Iterators

To get the end of the year of a date, we must already hold the current date, so these functions need to be added to the TimeType type itself. Ranges can also be done this way:

now().until(tomorrow()) // -> TimeType::Range(_, _)

Well, now the real fun begins. Given a TimeType object, one should be able to construct an Iterator from it.

The iterator needs to hold the value by which it should advance itself each time, as well as a copy of the base value. With this, one could think of an iterator that holds a TimeType object and, every time the next() function is called, adds something to it and returns a copy of it.

Another way of implementing this would be to track how many times the iterator has been called, multiply this count with the increment and add the result to the base.

I like the latter version more, as it does not let the chain of calculations needed for getting the real value out of the TimeType instance grow every time the iterator is called.
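Here is a hedged sketch of that second variant (all names are mine; it assumes the optimized TimeType enum from above implements Clone, and a hypothetical multiply() helper that scales an amount like days(7) by a count):

struct Every {
    base: TimeType,
    increment: TimeType,
    called: usize, // how many times next() has been called
}

impl Iterator for Every {
    type Item = TimeType;

    fn next(&mut self) -> Option<TimeType> {
        self.called += 1;
        // base + called * increment: we only line up types here,
        // the actual work happens later in calculate()
        Some(TimeType::Addition(
            Box::new(self.base.clone()),
            Box::new(multiply(self.called, self.increment.clone())), // multiply() is hypothetical
        ))
    }
}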

This way, one can write the following code:

let v : Vec<_> = now()
    .every(days(7))
    .map(TimeType::calculate)
    .take(5)
    .collect();

to retrieve five objects, starting from today, each separated by one week.

Next

What I think I'll do in the next iteration of this series is summarize how I want to develop this little crate. I guess test-driven is the way to go here, after defining the types described above.


Please note: This article was written a long time ago. In the meantime, I learned from a nice redditor that there is chrono::Duration, which is partly what I need here. So I will base my work (despite having already started in the direction I outlined in this article) on the chrono::Duration type and develop the API I have in mind on top of the functionality provided by chrono.

For the record, I did not alter this article after learning of chrono::Duration, so my thoughts are lined up as I originally had them.

tags: #open-source #programming #software #tools #rust

Here I want to describe how I plan to refactor the logging backend implementation for imag.

This post was published on imag-pim.org as well as on my personal blog.

What we have

Right now, the logging implementation is ridiculously simple. What we do is: on every call to one of the logging macros, the log crate gives us an object with a few pieces of information (line number, file, log message, ...) – we apply our format, some color, and write it to stderr.

This is of course rather simple and not really flexible.

What we want to have

I want to rewrite the logging backend to give the user more power over the logging. As we only have to rewrite the backend, and the log crate handles everything else, the actual logging looks no different and “client” code does not change.

+----------------------------+
| imag code, libs, bins, ... |
+----------------------------+
              |
              | calls
              |
              v
+----------------------------+
| crate: "log"               |
+----------------------------+
              |
              | calls
              |
              v
+----------------------------+
| imag logging backend impl. |
+----------------------------+

So what features do we want? First of all, the imag user must be able to configure the logging. Not only with the configuration file but also via environment variables and of course command line parameters, where the latter override the former, respectively. This gives the user nice control: she can configure imag to log to stderr with only warnings being logged, but when calling a script of imag commands or calling imag directly from the command line, these settings can be overridden temporarily (for the script or a single command).

The configuration options I have in mind are best described by an example:

# The logging section of the configuration
[logging]

# the default logging level
# Valid values are "trace", "debug", "info", "warn", "error"
level = "debug"

# the destinations of the logging output.
# "-" is for stderr, multiple destinations are possible
default-destinations = [ "-", "/tmp/imag.log" ]

# The format for the logging output
#
# The format supports variables which are inserted for each logging call:
#
#  "%no%"       - The of the logging call
#  "%thread%"   - The thread id from the thread calling the log
#  "%level%"    - The logging level
#  "%module%"   - The module name
#  "%file%"     - The file path where the logging call appeared
#  "%line%"     - The line No of the logging call
#  "%message%"" - The logging message
#
# Functions can be applied to the variables to change the color of
# the substitutions.
#
# A format _must_ contain "%message%", else imag fails, because a format
# without the message would suppress logging entirely
#
[logging.formats]
trace = "cyan([imag][%no%][%thread%][%level%][%module%][%file%][%line%]): %message%"
debug = "cyan([imag][%no%][%thread%][%level%][%module%][%file%][%line%]): %message%"
info  = "[imag]: %message%"
warn  = "red([imag]:) %message%"
error = "red(blinking([imag][uppercase(%level%)]): %message%)"

# Example entry for one imag module
# If a module is not configured or keys are missing
# the default values from above are applied
[logging.modules.libimagstore]
enabled = true
level = "trace"
destinations = [ "-" ]
# A format is only globally configurable, not per-module

One of the most complex things in here would be the format parsing, as the variable expansion and the functions to apply are some kind of DSL I have to implement. I hope I can do this – maybe there's even a crate to help me with it? Maybe the shellexpand library will do?
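Before deciding on a library, a minimal hand-rolled sketch of the %variable% expansion could look like this (no color functions yet, and all names are mine):

fn expand(format: &str, vars: &[(&str, &str)]) -> String {
    let mut out = format.to_string();
    for &(name, value) in vars {
        // replace every occurrence of "%name%" with its value
        let needle = format!("%{}%", name);
        out = out.replace(&needle, value);
    }
    out
}

// usage:
// let line = expand("[imag][%level%]: %message%",
//                   &[("level", "DEBUG"), ("message", "store opened")]);

The tricky part is the functions applied to variables (cyan(...), uppercase(...)) – that is where a proper little parser becomes unavoidable.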

These things and configuration options give the user great power over the logging.

The approach

Because imag already logs a lot, I am thinking about an approach where one thread is used for the actual logging. Because each logging call involves a lot of complexity, I want to move that work to a dedicated thread, with other threads speaking to the logging thread via an MPSC queue.

Of course, this should be opt-in.

The idea is that the logging starts a thread upon construction (which happens really early in the imag process, as nearly one of the first operations). This happens when the Runtime object is built, hence no “client code” has to be changed – all changes remain in libimagrt.

This thread is bound to the Runtime object; logging calls (via the logging backend implemented for the log crate) talk to it via a channel. The thread then does the heavy lifting. Of course, the configuration can be aggregated when the logging thread is constructed.

The logging thread is killed when the Runtime object is dropped (one of the last operations in each imag process). Of course, the queue has to be emptied before the logging is shut down.
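A minimal sketch of the mechanism I have in mind (not actual libimagrt code – log records are reduced to preformatted strings here for brevity):

use std::sync::mpsc::{channel, Sender};
use std::thread;

enum LogMsg {
    Line(String),
    Shutdown,
}

struct LoggingThread {
    tx: Sender<LogMsg>,
    handle: Option<thread::JoinHandle<()>>,
}

impl LoggingThread {
    fn start() -> LoggingThread {
        let (tx, rx) = channel();
        let handle = thread::spawn(move || {
            // the heavy lifting (formatting, destinations) would happen here
            while let Ok(LogMsg::Line(line)) = rx.recv() {
                eprintln!("{}", line);
            }
            // receiving Shutdown (or a closed channel) ends the loop
        });
        LoggingThread { tx: tx, handle: Some(handle) }
    }
}

impl Drop for LoggingThread {
    fn drop(&mut self) {
        // Shutdown is queued behind all pending lines, so the queue is
        // emptied before the thread exits and is joined
        let _ = self.tx.send(LogMsg::Shutdown);
        if let Some(handle) = self.handle.take() {
            let _ = handle.join();
        }
    }
}

The backend registered with the log crate would then only format the record and send a LogMsg::Line over a clone of the Sender.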

I am also thinking about converting the code base to use the slog crate, which offers structured logging. But I'm not yet sure whether we would benefit from that, because I don't know whether we would need to pass a state object around. If that is the case, I cannot do it, as it would introduce a lot of complexity which I don't want to have. If no such object needs to be passed around, I still have to evaluate whether the slog crate is a good idea – and of course it would also increase the number of (complex) dependencies by one... and I'm not sure whether the benefits outweigh the inconveniences.

tags: #linux #open source #programming #rust #software #tools #imag

Today I released version 0.3.0 of my imag project, which contains over 30 sub-crates in the project repository.

This was a pain in the ass (compared to how awesome the Rust tooling normally is). Here I'll explain why, hopefully prompting someone to make this whole process more enjoyable.

There is no cargo publish --all

Yep, you've read that right. I had to manually cargo publish each crate individually. This is even worse than it sounds! One might expect that, hacker that I am, I wrote a short script à la “for each crate: cargo publish” – but that's not possible, because the crates depend on each other. I had to find the “lowest” one in the stack and build up from there.

And as cargo-graph does not yet support workspace projects, I could not even compute which one was the “lowest” crate in the stack (but I know my project, so that was not that much of an issue, actually).

Still, I had to run cargo publish on each crate by hand.

Dependency-specs have to be re-written

As I depend on my own crates by path, I had to rewrite all dependency specifications in my Cargo.toml files from something like

[dependencies.libimagstore]
path = "../libimagstore"

to something like

libimagstore = "0.3.0"

which is easily doable with vim and some macros, but still inconvenient.

How it should be

How I'd like it to be: The cargo publish command should have a --all flag, which:

  1. verifies that all metadata is present (before building the code)
  2. checks all path = "..." dependencies and whether they are there
  3. expects the user to provide some data on how to re-resolve the path dependencies, so dependencies are fetched from crates.io after publishing

and then automatically finds the lowest crate in the stack, builds everything from there up to the most top-level crates, and publishes them all in one go – but only if all verification and building succeeded.

tags: #rust

The imag-pim.org website just got a new face.

I was really eager to do this because the old style was... not that optimal (I'm not a web dev, let alone a web designer).

Because the site is now generated using hugo, I also copied the “What's coming up in imag” blog posts over there (I'm keeping the old ones in this blog so as not to break any links). New articles will be published on the imag-pim.org website.

This very blog article will be published on both sites, of course.

tags: #linux #open source #rust #software #tools #imag

This is the 25th iteration on what happened in the last four weeks in the imag project, the text based personal information management suite for the commandline.

imag is a personal information management suite for the commandline. Its target audience are commandline- and power-users. It does not reimplement personal information management (PIM) aspects, but re-uses existing tools and standards to be an addition to an existing workflow, so one does not have to learn a new tool before being productive again. Some simple PIM aspects are implemented as imag modules, though. It gives the user the power to connect data from different existing tools and add meta-information to these connections, so one can do data-mining on PIM data.

What happened?

Luckily, I get to write another iteration on imag. After we had no blog post about the progress in imag in April this year, due to no time on my side, I'm very happy to be able to report: we had progress in the last 4 (8) weeks!

Let's have a look at the merged PRs (I'm now starting to link to git.imag-pim.org here):

  • #915 merged a libruby dependency for travis.
  • #918 removed some compiler warnings.
  • #917 merged some travis enhancements/fixes.
  • #916 superseded PR #898, which simplified the implementation of the FoldResult extension.
  • #895 started a re-do of the ruby build setup.
  • #911 changed the interface of the StoreId::exists() function to return a Result now.
  • #904 added initial support for annotations in the libimagentrylink library, which gives us the possibility to add annotations to links. There are no tests yet and also no remove functionality.
  • #921 was a cleanup PR for #911 which broke master unexpectedly.
  • #914 fixed a compiler warning.
  • #929 removed libimagruby entirely, because we couldn't merge to master since a dependency on master started to fail. The whole Ruby thing is a complete mess right now: dependencies are not found, tests fail because of this... it is a mess.
  • #927 removed unused imports.
  • #924 updated links in the readme file.
  • #926 added tests for the StoreId type.
  • #919 merged preparations for the 0.3.0 release, which is overdue by one month right now, because the Ruby scripting interface does not work.
  • #930 updated the toml-rs dependency to 0.4, which gives us even more superpowers.
  • #932 added some tests for the configuration parsing functionality.
  • #933 added a new dependency: is-match, a library I extracted from the imag source code into a new crate.

The libimagruby mess

Well, this is unfortunate.

libimagruby should have been ready and usable for a month now – and it is (the basic things, a few things tested, too)! But as the CI does not work (fuck you, travis!) I cannot merge it. I also don't know how to properly package a Ruby gem, so there's that.

I really hope @malept can help me.

I'm already thinking about adding another scripting interface so I can move on and start implementing frontends for imag. For example, I'm still thinking about a lua or ketos interface. Lua might be the better idea, as there are libraries around for certain things, while there are none for ketos (I assume).

What will happen

I honestly don't know. I will continue working on imag, of course, but right now, the libimagruby is stalled. I'm not sure where to start working besides libimagruby – a Ruby scripting interface is what I need right now, but it won't work ... so there's that.

As soon as the Ruby interface is ready, we can have nice things. But right now, it is really hard to continue.

tags: #linux #open source #programming #rust #software #tools #imag

When working with Rust on NixOS, one always had the problem that the stable compiler was updated very sparsely. One time I had to wait six weeks after a rustc release until I finally got the compiler update for NixOS.

This is kind of annoying, especially because rustc is a fast-moving target and each new compiler release brings more awesomeness. As an enthusiastic Rust programmer, I really want to be able to use the latest stable compiler all the time. I don't want to wait several days or even weeks until I can use it.

Use the overlay, Luke!

Finally, with NixOS 17.03 and unstable as of now, we have overlays for nixpkgs. This means that one can “inject” custom nix expressions into the nixpkgs set.

Meet the full awesomeness of nixpkgs!

Soon after overlays were introduced, the Mozilla nixpkgs overlay was announced on the nixos-dev mailinglist.

Now we can install rustc stable, beta and nightly directly, with pre-built binaries from Mozilla. So we do not need to compile rustc nightly from source on our local machines anymore if we want the latest great Rust feature.

This is so awesome. Everybody should use the Rust overlay now, in my opinion. It's really convenient to use and gives you more flexibility with your Rust installation on NixOS. So why not?

tags: #nixos #nix #rust #software