musicmatzes blog

I just noticed that I have no blog article about nixos-scripts version 0.3 yet, though the tool was released exactly one month ago.

So let me catch up on that now.

Changelog

The third release contains some neat things for the “switch” and the “update-package-def” commands – let's start with the latter.

The “update-package-def” command can now translate the --cores and --max-jobs arguments, so you can pass the number of cores or jobs to use for the build. This is extremely helpful when updating large packages. I happen to maintain a set of rather small packages myself, but from time to time I want to update packages I do not maintain, for example i3 or similar packages. Building these packages with more jobs results in much better build times, especially on my octa-core processor.

The “update-package-def” command is now also able to push to a pre-configured remote after a successful build, so your update workflow is a bit smoother now:

  1. Get the URL for the patch from monitor.nixos.org
  2. Run “update-package-def”
  3. Create pull request on github

The “switch” command got some neat updates as well. First off, the command is now quiet by default: you don't get the noisy output anymore when updating your system, as nixos-rebuild is called with -Q now. Some background behaviour changed, too: the nixpkgs repository (if known) gets updated (via git fetch) before a tag is created, so we can be sure the commit exists before tagging it. Tagging is now disabled for the build command type. The codebase was also refactored, which improved its maintainability a bit.

Another big point is the new “channel” command. We now have commands for channel updates, and they also create tags in your configuration repository. My workflow for updating my system is now:

nixos-script -v channel update
nixos-script -v switch

Which updates the channel with the first command and rebuilds the system with the second command. I don't use the nix-* commands anymore for my system, really!

We have another command (“container”) for container management. This tool has three subcommands:

  1. “setup” for building a new container from a template. You can specify a configuration.nix file, which will be copied to the container after it is created. Right afterwards your editor is called so you can customize the configuration file, and the container is then rebuilt.
  2. “stats” which prints a simple table with statistics about your containers (name, IP, status and host key).
  3. “kill” kills a container and gives you the option to destroy it as well.

More options will be added to the container subcommand as soon as I need them. We'll see...

The “show-generation” command is now able to show generations from a custom profile.

A tool was added for downloading the sources of packages into the store. So you can download all the sources you need for building a package, disconnect from the Internet, and rebuild your system while being offline.

A tool I'm rather proud of is the REPL I've added. Its capabilities are limited, but I'm still a bit proud of it. It gives you a simple “shell” where you don't need to type nixos-scripts all the time. Normal bash commands are allowed as well, though functionality is limited here, and it lacks features like autocompletion and recalling the last command with the up arrow.

The help texts were polished.

What's up next

Well, some releases are already planned in the github issue tracker. Let's iterate:

0.4

This release will contain auto-complete for all the things. Nothing more, nothing less – it is something like a one-feature release. I hope to get this done soon, but at the moment I really have no time to work on nixos-scripts, so... feel free to send me pull requests to help out.

0.5

The 0.5 release will introduce user-defined functions: the user will be allowed to put custom code in the configuration file, and these functions will then be used, for example, for generating the tag names.

I'm really looking forward to this, as I do not want to dictate the tag names my scripts generate in the user's configuration repository. Backward compatibility will be provided, though.

Another feature I want to include is a (sort of) meta-command, which updates and cleans the system in literally one step:

  1. Update channels
  2. Rebuild the system
  3. Remove old generations
  4. Run the GC
  5. Offer a reboot of the system
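The planned meta-command could look roughly like the following – a hedged sketch only, as neither the command name nor the exact sub-steps are fixed yet. The nixos-script invocations mirror my workflow above, the generation and GC steps use the standard nix tools, and the whole thing is written as a dry-run that only prints what it would execute:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the planned update-and-clean meta-command.
# The command names are assumptions; this only prints what it would run.
meta_update() {
  local steps=(
    "nixos-script channel update"       # 1. update channels
    "nixos-script switch"               # 2. rebuild the system
    "nix-env --delete-generations old"  # 3. remove old generations
    "nix-collect-garbage"               # 4. run the GC
  )
  local s
  for s in "${steps[@]}"; do
    echo "would run: $s"
  done
  echo "would offer a reboot"           # 5. ask the user before rebooting
}

meta_update
```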

0.6

As the 0.4 and 0.5 releases are sort of one-feature releases, this release will introduce all the small fixes and cleanups, new functionality for commands and so on. As this milestone is getting rather big on github, I may split it into 0.6 and 0.7 – I don't know yet.


So that's it. I apologize for writing this post only now instead of one month ago. Though I'm not committing to the repository at the moment, the project is not dead, really! Feel free to send pull requests and issues!

Stay tuned! tags: #linux #nix #nixos #software #tools

Everyone starts using shell environments, where custom variables are available, custom bash functions are enabled, special commands are available, maybe even some additional programs are installed.

Projects on github pop up for handling these things, installing bash “environments” and so on. And I have to ask: Why?

Why would you do that?

I tell you, there's a solution to all of your problems. It is called “nix”. It is a package manager which provides you with opportunities you've never dreamed of!

Yes, you could hack the functionality of these project-environment-manager things yourself (that's what people did – then they put it on github, and now you see it and decide to write your own because you can do it better). You could use this piece of clusterbash and suffer from impurities you'd never expect. Or you could just use the purity of the nix package manager to deploy the world in a shell.

With the nix package manager, you can use custom project-based environments to install packages neither globally nor in your user package environment:

nix-shell -p haskellPackages.pandoc

for example installs “Pandoc” and starts a shell where it is available. The executable is not available anymore after you type exit. It is not installed in your system; it is not even installed in your user package set. It was available in this shell, and it will only be available again after you enter the shell via nix-shell.

I have a lot of packages...

... so do I. Normally, I install one or two packages via the commandline in my nix-shell environment. If there are more packages, or I need the packages on multiple systems (when developing something, for example), I add a default.nix file to the repository:

{ pkgs ? import <nixpkgs> {} }:

let
  env = with pkgs; [ racket ];
in

pkgs.stdenv.mkDerivation rec {
    name        = "blog";
    src         = ./.;
    version     = "0.0.0";
    buildInputs = [ env ];
}

Then I only have to type nix-shell to enter the environment.

Another idea would be to make the packages “persistent”, so that a call to nix-collect-garbage won't remove them from the ominous store where the packages live:

nix-instantiate . --indirect --add-root $PWD/shell.drv
nix-shell $PWD/shell.drv

(copy-pasted from the nixos-wiki)

tags: #bash #nix #nixos #programming #software

How do you do your Personal Information Management? Or, more specifically: how do you organize your contacts over multiple devices, how do you organize your calendar, todo lists, notes, wiki, diary, browser bookmarks, shopping list, mails, news feeds, ...?

Do you use Google for all this? Maybe you do. Don't you want to uncouple from Google? Well, then... I have to tell you about the sad state of PIM for nerds.

If you want to organize your personal information without Google and host everything on your own, you will soon meet tools like owncloud, emacs orgmode or similar. Sadly, none of these things are what I want. OwnCloud is getting more buggy with every release, and it is already slow as hell. orgmode needs emacs, which is a huge tool itself, and you have to learn a whole new ecosystem. If you are a vim user like me, you don't want to use emacs.

But I'm not talking about editors here, I'm talking about PIM tools. What I do right now: OwnCloud with khard, khal and vdirsyncer for contact and calendar organization. As said, OwnCloud is buggy, and sometimes calendar entries cannot be synced to all my devices. On Android, I use apps to sync my contacts and calendar as well, and they also fail sometimes.

For todo lists I use taskwarrior, which has a sync server available. Sadly, it doesn't work yet on NixOS, but well, that's my issue and I'm working on a solution. Nevertheless, the Android client (Mirakel) is badly supported and doesn't work that well either.

For news, I use ttrss, which works fine, and the appropriate Android app works well too, so no issue here. For a wiki, I use Gollum, which works but is a bit annoying to use because it is not that customizable. I do not use note-taking tools at all, because they simply suck: there's no good note-taking tool available for commandline use which integrates with the other tools. Mails work fine with mutt, of course, but they cannot be integrated with the wiki, the todo-list tool or the other tools I just mentioned. I do not use browser bookmarks at all, because there is no CLI tool available for them. Same goes for shopping lists.

What I want

What I want is simple: One tool, which integrates

  • Personal wiki
  • Personal todolist
  • Personal notes
  • Personal mail indexing
  • Personal Calendars
  • Personal Contact management
  • Personal News Feeds (RSS/Atom mostly)
  • Personal Bookmarks
  • Personal Shopping list
  • Personal Diary

in the following ways:

  • I can use whatever
    • text editor
    • mail reader, sender, receiver
    • rss reader I want to use
  • I can synchronize everything to all devices, including Android smartphones or my Toaster
  • Everything is done with open standards. Means
    • vcard for contacts
    • ical for calendar
    • markdown for
      • wiki
      • notes
      • diary
      • shopping list
    • maybe YAML for todolist
    • mbox or Maildir for mails
    • normal Atom/RSS for news stuff
    • for bookmarks, YAML or JSON would be appropriate, I guess.
  • I can access all my data in the system with a text editor, if I have to
  • There is a clean and polished (and fast) Android application to access and modify this data
  • I can move/link data from one system to another. For example:
    • I can link an Email from my notes
    • I can link an entry from my RSS feeds, notes or calendars to (for example) my wiki
    • I can send a shopping list from my mail client to a contact and attach a calendar entry which links to the shopping list
    • ... and so on
  • All the things are encrypted (optionally)

As everything should be plain text, git would be fine for synchronization. The sync should be decentralized, so I don't have to host a server at home and can still sync while on the go. A web-hosted entity should be optional, and so should a web interface. Having a web UI like owncloud's is nice, but not that critical for me. Full encryption of the content would be nice as well, but would be kinda hard for the Android devices, at least in case a device gets lost. Anyways, my drives are encrypted and that should be enough for the first step.

It is really important to me that these tools interact well with each other. The feature that I can send a mail to a contact and attach, for example, a shopping list, which itself has a calendar entry (which gets attached as well, if I want to), is a real selling point for me. Same goes for attaching an RSS entry to a wiki article or todo item.

Another requirement is that the tool is fast and stable, of course. Open source (and ideally also free software) is a crucial point for me as well – GPLv2 would be the thing.

Do it yourself, then!

Well, developing such a tool would be a monstrously huge amount of work. I'd love to have time for all this, especially as a student, but I think I have not. I have a lot of opinions on how such a tool should work, and also a lot of ideas on how to solve certain problems which may arise, though I absolutely have no time to do this.

I, personally, would develop such a tool in Rust – simply because it gives you so much power while remaining a really fast language in terms of execution speed (speaking of zero-cost abstractions here). There would be the need for a lot of external libraries, though, for example for git, vcard, ical, yaml, json, markdown, configuration parsing, and so on. While some of these things might be available already, others are clearly not.

Sadly, such a tool is not available. Maybe I can find the time before I'm 35 years old to develop such a thing. Maybe someone else will have done so by then. Maybe I just inspired you to develop it? That would be neat!

tags: #life #linux #mail #media #open source #programming #software #rust #tools #vim #wiki

It happened that I ordered a new mobile phone for myself. And because I don't want to have a google account, pre-installed facebook Apps and the like, it is a OnePlus One.

The order

Well, to order a OnePlus One you can go to amazon and order it for about 433 Euro (64 GB version), or you go to the oneplus store and order it for 299 Euro, which is what I did. I also ordered a screen protector and a case to protect it even more.

But if you want to order from the manufacturer, you have to register on their page, which pissed me off a bit. The second problem was that you can only pay via Paypal, and because you order from outside Germany, you have to set up your Paypal account with a credit card. I do not have a Paypal account (because it really sucks) – so I asked someone else to pay the order for me and transferred the money to that person's bank account.

Unboxing

The One arrived seven days after I ordered it. It was absolutely nicely packaged, though some of the packaging was rather ridiculous: I understand that the power adapter has to be boxed separately, because of adapter variants. I do not understand why it has to be packed twice – once in a box, and again in plastic foil.

Screen protector

I also ordered a screen protector. It was shipped together with the One – nice! I applied it according to this tutorial and managed to get only one really tiny bubble under the protector foil. Well done!

Update!

Of course, when starting the One for the first time, it asks you to enter your google account data, etc. etc. I do not have a google account, so I was really pleasantly surprised to be offered a “skip” button!

As I do not have a micro/nano-SIM card yet, I also skipped the SIM setup.

Then, after some localization and skipping, I was offered an update from CM 11 to CM 12. I did this, of course. It was really fast, although it said it could take up to 20 minutes. While updating, it told me that 128 apps were being optimized. I always thought CM comes without preinstalled apps? Well... we will see.

Initial setup

After applying the update I was asked for several things again, including google account data. Of course I switched off what I could switch off.

I also disabled everything the google apps want to do. None of the google apps should now be able to access my texts or other data. I hope.

The initial setup was fun. I really enjoyed using the device. It is a bit heavier than my old device, but it is also a few inches bigger, so that's completely okay.

I really hope I can get my SIM card as soon as possible, so I can start using the device properly.

What really bothered me at first: I have a 64 GB device, of which I can use about 55 GB – copying music will be painful! But well... I solved that “issue”. I found out that one can install git-annex on Android, so I did that and just imported my complete music library. It took a rather long time to create the 44771 files and 3913 folders on the device, but having git-annex available on my Android device is a great plus. I don't have to care about syncing the files and so on – I can just use git-annex and it does everything for me. I only have to be careful with the available memory, as 55 GB cannot hold my 500 GB music library, of course.

And, just you know, I do not use the web interface. I use the terminal emulator, of course!

But then, the disillusion: the music player didn't work with the symlinks git-annex generates. So I removed the git-annex repository from the device and copied the music over as I am used to.

Sync with owncloud

Well, syncing with owncloud was a topic. I thought it would be easy, but it wasn't.

There is this DavDroid app, on both the Play Store and F-Droid, which can be used to sync between caldav/carddav servers and the device. I entered my owncloud data and synced, but it just wouldn't work.

So I filed a bug in the developers' forum, but they couldn't help me with my problem (though they responded really quickly and were really nice overall).

So I tried other synchronization apps and finally settled on CalDAV Sync Adapter and CardDAV Sync Free, which worked.

Now I have everything set up to use the oneplusone as my daily driver! Yay!

A gem I found

I found a nice app on f-droid as well: whohasmystuff, which can be used to keep track of things you lend to someone. A really nice and basic app. It just works™.

tags: #android #cmmod #linux #media #music #open source #software #oneplusone

Read another NixOS success story here, about how I updated my kernel, removed the old graphics driver and switched to another one – all in one step.

And I didn't even care about the fact that my system maybe wasn't bootable afterwards.

Indeed, that's NixOS awesomeness. So, the situation: I have an AMD graphics card in my workstation at home. I was on kernel 3.18 for a rather long time – I mean, I was on 3.18 from 3.18.1 to 3.18.whatisitatthemoment; I guess I was there until 3.18.16 or something!

I used the ati_unfree driver in my system, so it wasn't appropriate for Christmas and holy St. IGNUcius. I wanted to fix this. Kernel 4.3 will have the free AMD driver infrastructure, but it isn't out yet, so I thought of updating to kernel 4.2 with the ati_unfree driver package. But it didn't work: ati_unfree couldn't be built on my system. I don't know why, and I actually do not care.

So here's what I did:

I removed the ati_unfree driver from my services.xserver.videoDrivers setting and set it to [ "ati" "nouveau" ] – each of the drivers listed will be loaded and tested, the description of this option told me. I then did a nixos-rebuild build run to verify it builds. It did. So I updated the kernel to boot.kernelPackages = pkgs.linuxPackages_4_2; and did a nixos-rebuild switch (actually I used my awesome nixos-scripts for this), and yeah...

And then I rebooted. I didn't even have to think about potential breakage, I just booted the new system. If it had been unbootable, I would simply have booted into an older generation and instantly been back on the kernel 4.1 and ati_unfree driver I used before the update.

This is what I call NixOS awesomeness.

tags: #nixos #linux #software

I started a project some time ago in which I develop a template for a wiki that is statically compiled into an html page. As the template is more mature now, I want to introduce you to wiki.template and explain why I wrote it.

Once upon a time...

If you read this blog you know that I'm a member of the NixOS community. The NixOS community has, of course, a wiki. And the quality of this wiki is bad. No, really: it is just one large page (the landing page) linking to sub-pages which are more or less maintained and contain only snippets of information. Sure – there's a lot of content in there, but one has to search for it (especially as a beginner). You can't simply find it.

That's why I wanted to have a new wiki for the NixOS community. And because I like git, I wanted the content to be stored in a git repository.

So I came up with the idea to recreate the NixOS wiki and start it from scratch. Of course, the content of the original wiki has to be migrated to the new wiki then, but this has to be done carefully and by hand to ensure that things are ordered and easy to find afterwards.

The requirements

The requirements were rather simple, but nevertheless important to me. As already said, I wanted to have the content version-controlled with git. The templates and markup for other data should be version-controlled as well, if possible. This way, the content (or the whole wiki) can be distributed with git, hosted on github or similar hosting platforms, and so on.

I wanted to be able to customize the style of the wiki completely, and I wanted to be able to write snippets of markup which can be reused – such as a warning-alert template where you pass custom text and get a red alert box on your wiki page. Pretty normal stuff for a wiki.

Syntax highlighting, TOC generation – all these things should be integrated or available through plugins (or there should at least be the possibility to write such plugins).

So I searched for wiki software with git backends...

The state of git-backend-driven wiki software

The state of wiki software with a git backend is bad. No, it is even worse. There are actually two projects I had a look at, and one which is written in Perl (and I didn't have a look at that one, because Perl).

gitit

First of all, there is gitit. Gitit is a wonderful piece of Haskell software. It works great and is fast. But it has its issues:

  1. No Templates. Period.
  2. Content is checked in, but neither templates nor style information
  3. Haskell. So writing plugins – nah, not really.
  4. Only a few plugins available

So it was pretty clear that gitit is not an option for me.

gollum

Gollum was the other alternative. Developed by github, a ruby thing – sounds good, right?

But. There's a big but.

  1. No templates
  2. No plugins
  3. No custom styling or at least no documentation about it

It was clear to me: No option!

The solution

Well, the solution? Just write it yourself! And that's what I did. I started to work on a template for a wiki which is statically compiled with nanoc. Later I switched to jekyll.

Compiling the wiki to a static site has several advantages. First of all, hosting the site is cheap: a common web server can host thousands of pages without even noticing. You are also protected against hacking and all this stuff, as you do not have the problems dynamic websites have.

But these are common advantages of statically compiled sites. An even bigger advantage is that contributions to the wiki must pass a review process of some kind. Of course, I will not grant push access to the repository to anyone, and nobody besides myself gets access to merge pull requests. I will protect myself against pushing to master as well, so each change to the wiki has to be reviewed. And that's what matters when it comes to quality: changes must be reviewed.

Besides this big advantage, I do not have to care about updates (of the compiler software) on a server – I can do them locally and roll back if they fail. I do not have to keep track of spam bots, security updates, user account data, etc.

So here's the story how I wrote the template:

wiki.template

I started the project with nanoc, the static site compiler I love. The project is at version 4 these days and the compiler is really really good. It is flexible, scales well, fast and a joy to work with.

I just failed at integrating foundation, the CSS framework I wanted to use. I played around with Jekyll a bit, but wasn't able to get Haml working. At some point, foundation seemed more important than Haml, so I switched to Jekyll.

I also thought of building the site with Jekyll directly from the github repository as a github page, just because this way I could save money on hosting. I still think about that option, but as I use Jekyll with plugins, I cannot build it on github directly. So I searched for tutorials on how to build it with travis and deploy my github page from travis. This is documented and known to work.

Current state of wiki.template

To get my stuff working, I had to use some plugins. These plugins basically work for me, but the code was either a mess or lacked features I wanted to have. So what did I do? Right, I forked them and sent pull requests to the maintainers.

At this very point in time, I have three pending pull requests to these plugins: two of them just normal “cleanup codebase” PRs, one a feature. I also opened issues for other features I do not necessarily need but would like to see implemented. I would implement them myself, but well... not enough time.

So what is the current state? Well, wiki.template is basically usable. I want to integrate some more features, and it completely lacks documentation on how to use it.

But these things will be done soon and I hope I can then create a fork of it to start my NixOS Wiki.

The NixOS Wiki

I already announced the NixOS wiki fork (semi-)officially on the NixOS mailing list. Of course, as always in such communities, a discussion started on whether to do such a thing or not, which software to use, and whatever. I don't care about this – I will simply do it, and we will see whether it succeeds or not. Discussion all over the place helps nobody.

tags: #git #linux #nix #nixos #open source #programming #software #wiki

Imagine you have an idea for an awesome web application. Or a desktop program, or even a library in your favourite programming language to make complicated things easy. You really want to implement it. Maybe you already know that it will work really well; maybe you don't, and you want to find out whether you can make it work.

But how to start?

The first steps are easy. You make a new directory, maybe a directory structure. You create some files, such as a README.md or a LICENSE file. You create a MyGreatSoftware.MyFavouriteLanguage file and you open it in your favourite text editor or IDE.

But then you struggle, because you don't know what to do first. Let's imagine you want to write a commandline application – do you start with the argument-parsing part, or do you add some backend abstraction first? Do you build up a great framework around some functionality so you can use it more easily later on, or do you implement the functionality directly?

I had such problems, too. I started a project for a rather simple commandline application and I really didn't know where to start. It was a ruby project, so my initial steps were rather clear: I installed some dependencies I wanted to use, such as an option-parsing library and a commandline output helper for pretty formatted output (colored things and so on). I created the directories lib and bin – and then the file lib/app.rb (where “app” is the name of the program).

And then I failed to continue, because I didn't know what to implement first. So I ended up implementing the commandline-parsing part and the backend abstractions at the same time (on different branches, of course), and it resulted in me deleting the project because things got complicated.

Then, I restarted the project and did something I've never done before: I started to do TDD and I started with the very first line of code.

Disclaimer: I'm not a TDD professional. I never had any training in TDD and I don't even know whether I did it right. But whether you do TDD right or not is not the point here. The point is: you think about simple, neat pieces of code with a lot more focus. You try to imagine what explicit feature you want to have. For example, your test says something like this:

If I pass an argument --update, the current time is appended to a file ~/.app_timestamp.

That's a really, really compact requirement, and you can implement it in one single line of Ruby. And that's exactly my point: you start implementing things without building layers and layers of abstraction and losing yourself in the codebase you created for not losing track of things.
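As an illustration (in shell here rather than the Ruby of the original project; the file path is the one from the example requirement), both the feature and its test fit in a few lines:

```shell
#!/usr/bin/env bash
# Sketch of the requirement above: one line of implementation plus a minimal test.
# APP_TIMESTAMP is parameterized so you can point it away from your real home directory.
APP_TIMESTAMP="${APP_TIMESTAMP:-$HOME/.app_timestamp}"

app() {
  # The whole feature: on --update, append the current time to the file.
  [ "$1" = "--update" ] && date >> "$APP_TIMESTAMP"
}

# The "test": running the feature must grow the file by exactly one line.
if [ -f "$APP_TIMESTAMP" ]; then before=$(wc -l < "$APP_TIMESTAMP"); else before=0; fi
app --update
after=$(wc -l < "$APP_TIMESTAMP")
[ "$after" -eq $((before + 1)) ] && echo "test passed"
```

The test describes exactly one observable behaviour, and the implementation does exactly that – no framework, no abstraction layer.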

I learned that writing tests and implementing just the very functionality that makes the test succeed helps me a damn lot to focus on the functionality I want to build – rather than building layers upon layers and ending up with a kind of framework which does nothing (and is mostly even too complicated to use).

Of course, after implementing things you can clean up your codebase and refactor, so the abstraction layers will be created at some point – but after the features have been implemented. And a damn great bonus: you have tests which ensure you don't break your functionality while refactoring and making yourself happier with the codebase.

So that's what I call Test-Driven Project Initialization. You don't need to perform perfect TDD and stick to it all the time. You don't need to test every aspect of your functionality. The tests are there to keep you focused on the functionality, not to ensure there are no bugs. Of course, that's a neat side effect, and you can extend your test cases later on to find bugs. But finding bugs is not the point of Test-Driven Project Initialization.

tags: #programming #software #testing

I just wrote my very first bit of racket code which actually runs somewhere. Want to find out what it is?

Well, it is a rather simple piece of code, and it helps me generate this very blog. If you visit this blog frequently, you may have noticed that the layout changed a bit. Changing the layout was something I had wanted to do for months, and the adaptions I just made are exactly what I wanted to do all the time: move the content to the center of the page – the lines are now shorter, as the div in which the content lives got reduced in width.

I also moved the tags and dates to the left of the articles, both on the index page and on the article pages. I really like that style. It wasn't that easy (for me as a racket newbie), because the blog engine I use (frog) has no method to insert an html list of tags somewhere. So I coded it myself, right in the templates for the article and index pages.

Here's how I did that:

  1. First, I put everything into a subcontainer of the bootstrap layout. The date and tags (let's call them metadata) live in a two-column container to the left of the ten-column container which holds the content.
  2. I wrote a bit of racket code to generate an html bullet list out of the comma-separated list of tags frog gives me:
@(string-join
  (map (lambda (str)
        (string-append "<li>" str "</li>"))
       (string-split tags ",")))

What it does is simple: it splits the string in tags, which holds a list of tag links, at the separator, the comma. Then it maps a lambda over the resulting list of strings which wraps each string in "<li>" and "</li>". Afterwards, the whole list gets joined together, so we have one big string left.

The template itself contains the <ul> and </ul> tags around the list. This way, I generate a nice bullet list for both the index pages and the article pages.

Unfortunately, the solution is not very DRY, as both templates contain the same code. There is no possibility to pass custom code to frog yet, but I opened a feature request on github to include such an option (providing a pull request would be much nicer, I know – but as you already know, I'm a racket newbie and I'm not sure I could figure out such a thing myself).

tags: #blog #racket #programming

I just released version 0.2 of nixos-scripts. Read here what is included in this release.

44 patches are included in this release, merges not counted. There were some basic changes and a handful of features were added.

First, the switch command now includes the hostname of your machine in the tag by default; the flag's meaning changed accordingly, from adding the hostname to suppressing it.

The tags which are generated by switch are no longer annotated.

The switch command now also has an option to do everything except calling nixos-rebuild.

The switch command can also tag the local nixpkgs clone now, if the switch succeeded. And switch is the default subcommand type for the switch command now: you don't have to type nix-script switch -c switch, a plain nix-script switch is enough.

The command for updating a package definition now has a flag for not checking out a new branch before updating the package. This can be used to mass-update packages on one branch. There is also a flag for not checking out the base branch after a successful update of a package.

The user gets less output now: only output which can be relevant to the user is printed if the -v flag is passed to the nix-script command. The -d (debugging) flag prints all the information which can be used to debug the scripts. Normal users don't need this flag.

All the scripts support the RC file now. You can put configuration in ~/.nix-script.rc, so you don't have to pass things all the time. For examples, see the nixos-scripts repository on github.
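Since the RC file is plain shell that gets sourced, it could look something like this – note that these variable names are hypothetical illustrations, not the real keys; check the examples in the nixos-scripts repository for the actual ones:

```shell
# Hypothetical ~/.nix-script.rc sketch. The variable names below are made up
# for illustration; consult the nixos-scripts repository for the real keys.
RC_CONFIG_DIR="$HOME/config/nixos"   # where your configuration.nix lives (assumed name)
RC_NIXPKGS="$HOME/code/nixpkgs"      # local nixpkgs clone to tag (assumed name)
RC_SWITCH_TAG="yes"                  # tag the configuration repo on switch (assumed name)
```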

Channel tools were integrated as well. They can be used to diff between channel generations, list the channel generations, check out a specific channel generation, and so on. More utilities will be included as time goes on, of course.

What is in the pipeline

Of course, the next release is already planned. v0.3 will land as soon as there are enough new features. Already in the pipeline:

  • A tool to update the nixos channel and generate tags for it in the configuration directory
  • A repo-reset command, which can be used to generate a branch in your nixpkgs clone based on the commit your current nixos generation is built from
  • Container helpers. Everything you need to manage nixos-containers
  • diff-generations will be able to list the diff in your configuration
  • A tool for test-building a package based on a special branch

More things will be done behind the scenes: verification that everything worked well, checks before commands are executed, and so on.

v0.4 will be a one-feature release which will only include completion support. The PR for this feature already exists on github, but it will be developed until v0.3 is ready, and if everything works it will be integrated with a single merge commit.

So that's it for today. tags: #linux #nix #nixos #software #tools

I'm about to go on vacation with my girlfriend. We will ride along the Donau by bike, starting at the very source of the Donau and hopefully finishing at its mouth.

I will not tell you when we start – because of privacy, of course. Anyways, we will have a great time, I hope. We have eight weeks to make it to the Black Sea. I'm rather excited about Vienna and Budapest, as these cities are known for their beauty. My girlfriend really wants to make it to the mouth, so I hope we get there and are able to swim a bit and have a great time at the beach.

Of course, I will report what we've experienced and how it was. Maybe some pictures will make it to the blog, too.

tags: #life