musicmatzes blog

Last weekend I attended the RustNation23 conference in London. We were asked by the organizers to write blog posts about our experience. So here we go.

Traveling to London

I flew in from Stuttgart, Germany on Thursday. The flight was at mid-day, which was really nice: I wasn't too tired during the flight, and I didn't arrive in London too late, so hanging out with a colleague who also attended and visiting one of the famous London pubs was still possible.

The Conference

The conference took place at “The Brewery”, which was the perfect location for such an event! The really nice and tasty snacks before, between and after the talks kept me awake but not too stuffed! And that was dearly needed, because the talks were of exceptional quality! The keynotes were absolutely awesome! Seeing Jon Gjengset live and in action was a particularly great experience, but the other speakers, who, by the way, came from all around the world, were absolutely awesome as well!

The staff at the location took great care of everyone! There was always enough water available and the drinks after the conference (the “socializing” part) were also really good.

Another day in London

My company allowed me (or rather us) to stay an extra day in London, which we really enjoyed. We did a long walking trip through the city and visited Buckingham Palace, Victoria Station, Westminster and Big Ben, Leicester Square, Soho, Piccadilly Circus, Canary Wharf, Jubilee Park, Kings Cross Station, Camden Town, The Regent's Park and the Sherlock Holmes Museum (although we didn't enter). That was almost 30 km of walking through the City of London in just one day.

Flying Home

On Sunday, I flew back to Stuttgart and finally fell back into my own bed again. It was dearly needed, as my feet were wrecked, my brain was overloaded and I was tired from just being in London. A lot to process, really!

Conclusion

London was great. I am really grateful that my company let me go there, experience RustNation23 and gave me an extra day to visit London. I cannot wait for the next Rust conference (which will probably be EuroRust 23 for me). If the speakers are only half as awesome as the ones at RustNation, it will be worth it a hundred times!

I've been writing cargo-changelog lately and already published the first version (0.1.0) on crates.io.

Here I want to write down some thoughts on why I wrote this tool and what assumptions it makes. This should of course not serve as documentation of the tool, but simply as a collection of thoughts that I can refer to.

Where

Changelog management is hard. Not because it is particularly difficult to do, but because nobody really wants to do it in the first place. Especially because there's no established “place” where it should be done.

Some tools want the programmer to write commits which can serve as changelogs. I wrote about that before. It puts a burden on the programmer, who does not want to concern themselves with whether a change is user-facing or whether it impacts the user at all. That's not their job, after all! Not in an open source setting and especially not in a commercial environment. They're hired to work on the software, and that's all they should do!

In an open-source world, the programmer of a feature may even contribute changelog entries, because they know that the change will have a certain impact on users when released. But the keyword in the prior sentence is “may”. They are not required to do so and should never be. Open source projects suffer from having too few contributors. Of course, there are big open source projects out there, like Kubernetes, tokio, Django, Rust, TensorFlow or, of course, the Linux kernel. These projects do not have that issue, but I feel comfortable in assuming that they are the top 1%. Most open source projects have one or two contributors or, if lucky, see maybe ten to fifteen regular contributors. If such a project loses only one contributor, that has a significant impact on the overall project. Thus, making contributors happy is somewhat of a key concern. Making them responsible for adding changelog entries to their changes may not be the best way of doing that.

Thus, I think, changelogs should be managed by the maintainer or by someone in the project who wants to dedicate themselves to that task. The contributors should only do what they do best: produce code, deliver features, fix bugs, and so on.

Under that presumption, putting changelogs within a commit is not a particularly good idea. It does not matter whether we're talking about commit formats like conventional commits here or about git-trailers for categorizing commits. After all, if a contributor categorizes the commit in the wrong way, they would need to rewrite the commit, even though the code they changed may be optimal. That's a serious hassle.

That leaves us only with producing the changelog entry outside of the actual commits that introduce the change.

The idea may then be to add the changelog entry in a dedicated commit, but still within the pull request that introduces the relevant change. That sounds good at first, but quickly falls apart because of a simple issue: Merging this may not be possible. The changelog entry that lands in a CHANGELOG.md file normally gets appended in some form or another. Whether that is a simple append to the section for the upcoming version of the software, or to a sub-section “Bugfixes”/“Features”/... does not matter, it is still an append. If someone else produced a change to that same section, we quickly run into merge conflicts. Needing a pull request to be rebased just because the changelog entry does not merge is a serious slow-down in progress for the whole project. That should never happen!
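To make this concrete, here is a throwaway demonstration (everything below is invented for illustration): two branches each append an entry to the same CHANGELOG.md section, and the second merge inevitably conflicts.

```shell
# Throwaway demo repository: two branches append to the same
# CHANGELOG.md section, so the second merge conflicts.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb base
printf '## Unreleased\n' > CHANGELOG.md
git add CHANGELOG.md
git commit -qm "add changelog"

# The first PR appends its entry ...
git checkout -qb feature-a
printf -- '- Fix: frobnicate the gadget\n' >> CHANGELOG.md
git commit -qam "changelog for feature a"

# ... and a second PR, branched off the same base, appends another one.
git checkout -q base
git checkout -qb feature-b
printf -- '- Feature: new JSON backend\n' >> CHANGELOG.md
git commit -qam "changelog for feature b"

# Merging the first PR fast-forwards cleanly ...
git checkout -q base
git merge -q feature-a

# ... but the second one conflicts on the appended line.
if git merge -q -m "merge feature b" feature-b 2>/dev/null; then
    merge_status=merged
else
    merge_status=conflict
fi
echo "$merge_status"
```

The conflict has nothing to do with the quality of either change; it is purely an artifact of both entries landing at the same spot in the same file.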

After establishing the last point, we see that producing the changelog outside of the commits that introduce a change, as well as outside of the pull request that introduces the change, benefits the overall pace of the project. Having someone dedicated to producing the changelog, instead of burdening the programmers with it, also helps the project as a whole, not only in pace but also in developer happiness.

The above points do not mean that a programmer who feels dedicated shouldn't be able to produce a changelog for their contribution! Of course they should be enabled to produce that changelog! But they should not have to concern themselves with mergeability!

Also, producing changelogs should not slow down the project pace. After all, adding changelogs to a project is still a contribution. It should be as easy as producing code. It should not suffer from merge conflicts if two or more contributors add a changelog for different changes.

How

With all that in mind, I came up with a simple scheme. It turns out that other projects exist that follow a similar scheme – so I cannot take any credit for that. I still opted to start cargo-changelog because these already existing tools do of course not integrate with cargo, as they were written in other ecosystems.

So the general idea here is that we do not produce one large CHANGELOG.md file, but record changes in individual files, called “fragments”. These fragments get put into a special place in the repository: .changelogs/unreleased/. The filename for each fragment is produced simply from a timestamp. That ensures that adding two fragments from two different pull requests will most certainly not result in a merge conflict.
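The effect is easy to demonstrate in a throwaway repository (the timestamp-style file names below are invented for illustration and not necessarily what cargo-changelog generates): two branches each add their own fragment file, and both merge cleanly.

```shell
# Throwaway demo: one fragment file per change means no merge conflicts.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb base
mkdir -p .changelogs/unreleased
touch .changelogs/unreleased/.gitkeep
git add .changelogs
git commit -qm "init changelog directory"

# Each branch records its change in its own timestamp-named file.
git checkout -qb feature-a
echo "fixed the frobnicator" > .changelogs/unreleased/20230101120000.md
git add .changelogs
git commit -qm "changelog fragment for feature a"

git checkout -q base
git checkout -qb feature-b
echo "added a JSON backend" > .changelogs/unreleased/20230102130000.md
git add .changelogs
git commit -qm "changelog fragment for feature b"

# Both merges succeed; no shared file means nothing to conflict on.
git checkout -q base
git merge -q feature-a
git merge -q -m "merge feature b" feature-b
ls .changelogs/unreleased/
```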

A fragment contains two sections: a header with structured data, and free-form text. The structured data is encoded in YAML or TOML (although tools like this normally opt for YAML, and cargo-changelog does so as well).

I thought long and hard about what structured data may be recorded here. It turned out: I don't know, and of course I shouldn't decide this. So what I did was implement a scheme where the user can define what structured data they want to record! Each project can, in the .changelog.toml file, which serves as the configuration file for cargo-changelog, define what structured data they want to record, whether a data entry is optional and whether it has a default value. When generating a new fragment, cargo-changelog can either present the user with an interactive questionnaire to fill in that data, or (and) open the user's $EDITOR, where they can edit that structured-data header themselves.

Structured data may be the pull-request number that introduced the particular change, a classification of that change (“Bugfix”/“Feature”/“Misc”/... whatever the project defines in the .changelog.toml configuration file) or, if desired, a “Short description” of the change.

The free-form text of the fragment can be used to document the change in a human-readable way. Currently, no format is enforced here, so whether the user uses Markdown or reStructuredText or something totally different is entirely up to them (although cargo-changelog generates .md files for the fragments).
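Putting the two sections together, a fragment could look roughly like this (the header fields and their names here are invented for illustration; which fields actually exist is entirely up to the project's .changelog.toml):

```markdown
---
subject: "Implement JSON backend"
type: "Feature"
pull_request: 42
---
The JSON backend is implemented using the JSONFOO library. Content is
currently not read in a zero-copy fashion, to keep the diff small.
```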

When a release comes up

As soon as the software is about to be released, the “unreleased” fragments should be consolidated. cargo-changelog helps with that by providing a command that moves all fragments from .changelogs/unreleased/* to .changelogs/x.y.z/ (where x.y.z is of course the next release version, either by asking cargo or by letting the user specify it).

One crucial idea here was that the release will be done on a dedicated release-branch. Of course the tool does not enforce or demand this in any way, but it gives the option of doing that without running into issues later down the road.

So if the release branch gets branched off of the master branch, the person dedicated to making the release would issue the cargo-changelog command for consolidating the unreleased fragments and then commit the moved files. After that, they would issue the cargo-changelog command for generating the CHANGELOG.md file. That file would always be generated and never touched manually. There's no need to do so: changing changelog entries after the fact (for example if a typo was found) would happen in the fragment files.

Of course, the CHANGELOG.md file should also appear on the master branch of the project! Cherry-picking the commits that consolidated the unreleased fragments, as well as the one that generated the changelog file, simply works, even if master progressed with new changelog fragments!

Changelog generation

In the previous section I wrote that the CHANGELOG.md file would be generated and never edited manually. Still, the user may want to add some custom text at the end of the changelog file, or they may want a custom ordering of their changes: maybe they want to list bugfixes first and features second? Or they want only the short description of each changelog fragment to be displayed, with the long-form text residing in a <details>-enclosed part, so that a user rendering the file can get a quick overview!

That's why CHANGELOG.md files are generated with a template file. That template resides in .changelogs/template.md (that path, as everything else with cargo-changelog, can be configured). That template file uses Handlebars templating and can be tweaked as required. In the current version of cargo-changelog, there are some minimal helpers installed with the templating engine to sort the released versions, group changes by “type” and some minimal text handling. More will follow, of course.
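To give a rough idea of what such a template might look like (the variable and helper names below are invented for illustration and are not cargo-changelog's actual ones):

```handlebars
# Changelog

{{#each versions}}
## {{this.version}}

{{#each this.entries}}
- {{this.header.subject}}
  <details>{{this.text}}</details>
{{/each}}
{{/each}}
```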

Metadata crawling

Another feature that cargo-changelog has is metadata crawling. One may want to fill header fields by issuing some command and using that command's output as the value for a header field. cargo-changelog can call arbitrary commands for doing exactly that: each header field can have a “crawler” configured that issues a command. These commands may even be other interactive programs, like a script that uses skim (or its more popular counterpart fzf) for interacting with the user.

To sum up

To sum up, these are my thoughts and notes on changelog management with cargo-changelog. Of course, most of this is tailored towards open source projects (and – if someone noticed – also towards an always-green-master strategy. I may write a blog article about that as well).

cargo-changelog is at version 0.1.0 and certainly not feature complete yet. It is a first rough implementation of my ideas, and it seems to work great so far, although it is not battle tested at all! I am eager to try it out in the near future and extend and improve it as need be. One can see the tool in action in the history of the repository of the tool itself!

And as always: contributions are welcome!

At the end of last year, I published the article “I hate conventional commits”. I received a lot of good feedback on this article and it was even mentioned in a podcast (German) – thanks a lot for that!

Lately, I also grew a decent amount of hate for squash merges. I figured that I could also write an article on that, so I can link to it in discussions I have about the subject.

What are squash merges

When proposing changes to a software project that is hosted on a forge like GitHub, GitLab or Gitea, the author of that changeset opens a pull request (GitLab names it “merge request”, but I'll stick to the former here). When the pull request is approved by the maintainer of the project, it is merged. This normally happens via a click on the “Merge” button in the web interface of the forge (although it does not have to).

GitHub offers different methods for merging pull requests. The “normal” way of merging a pull request is by creating a merge commit between the base branch (for example “master”) and the pull-request branch. This is equivalent to git merge <branch> on the command line.

Another method would be the so-called “rebase and merge” method, which rebases the pull request branch onto the target branch and merges it after that. The rationale here is that if the pull request gets rebased before it gets merged, it is “up to date” with the target branch when it is merged. There are also two variants of that method, one where a merge commit is created after the rebase and one where the target branch is just fast-forwarded (git merge --ff-only) to the pull-request branch. I find these two methods problematic as well, but that's not what we're here for.

The third method, and the one I want to talk about here, is the “squash merge”. When a pull request is “merged” by the maintainer of a project, all commits that are in the pull-request branch are put into a single commit and all commit messages are joined together. This commit then is directly applied to the target branch. The (approximate) git command(s) for doing this would be

git checkout pr-branch
# join the subjects and bodies of all PR commits into one message
# (--reverse lists them in chronological order)
git log --reverse master..pr-branch --format="%s%n%b" > /tmp/message
# replay the branch on top of master ...
git rebase master
# ... then collapse it into a single commit with the joined message
git reset --soft master
git commit -a --file /tmp/message
# finally, fast-forward master to the squashed commit
git checkout master
git merge --ff-only pr-branch

Implications of squash merges

What I want to highlight here is what squash merging implies.

First of all, squash merging implies that the diff a pull-request branch introduces is put into a single commit. It does not matter whether the pull-request branch contained one commit or a hundred commits, the end-result is always one commit with one diff and one message.

That's also the second thing that a squash merge implies: There is only one message (even though crafted by simply combining multiple messages) for the whole diff the pull request introduced.

Commit signatures created with GPG or some other method are destroyed in that process.

Why I hate this

You can probably already smell why I loathe this. By combining the individual changes a pull request introduced, one loses so much information! Consider a pull request that took 10 commits to refactor something. Carefully crafted commit messages explaining why things were changed the way they were changed. A very detailed analysis in one commit message of why a certain change is needed to further refactor a piece of code somewhere else in the next commit. Maybe even performance characteristics written down in the commit message!

All this is basically lost as soon as the pull request is squashed. The end result is a huge diff with a huge message, where the individual parts of the commit message could potentially be associated with the right parts of the diff. Could be. But the effort of taking apart the huge commit is just lost time, and maybe a huge undertaking that would be completely unnecessary if the changes hadn't been introduced to the “master” branch via squash merge in the first place.

One might argue that the commits are still there, in the web interface of the forge. Yes, they might be. But git is an offline tool, I should be able to see these things without having to use a browser. I should be able to tell my editor “give me the commit message for this line here, because I want to see why it is written the way it is” and my editor should then give me that information. If it opens an enormous squashed commit, I'll just rage-quit! Because now I have to review a commit that might contain thousands of lines of changes with a message where I have to search in the commit message why that one line I care about was changed.

I really am hesitant to link an example here. Mostly because blaming someone who doesn't know better does not yield anything valuable and is just destructive. But let me assure you: I've seen projects that do this and it is just ridiculous! If you come across a change that touched 2,000 lines of code and has a commit message that is 500 lines of “Change”, “Fix things” and “refactor code”, you could also go back to the old SVN days where we had things like “Check-In #1234 from 2022-03-04”. We can do better than that!

How to do better

So, you might think that the above is all valid and sane. But now you want to know how things could be improved. And, to be honest, it is totally trivial!

First of all, let me briefly talk about responsibilities. I feel like the idea of squashing all changes in a pull request comes from the “I have to clean things up before I merge” attitude of maintainers. The idea being that they take the pull request and squash it, so that things are “clean” on the master branch. But that premise is totally wrong. The maintainer of a project (especially in open source, but in my opinion also outside of it) is never responsible for cleaning up a contributor's work. After all, it is a pull request. The contributor asks the maintainer to take changes. The contributor is the person that wants something to be changed in the project. Therefore it is the duty of the contributor to bring the changes into a form where the maintainer accepts them. And that obviously includes a clean commit history!

I reckon, though, that some contributors just do not care about committing their changes cleanly and with decent commit messages. In my opinion, a maintainer should just not take these patches – I certainly did reject patches because of badly written commit history. There's always the option for the maintainer to take the patches to a new branch and rewrite the commit messages. For example I once did this with nice changes that were just committed badly. It is, though, not the responsibility of the maintainer to do this.

Another option which I quite like is that a project introduces commit linting (though obviously not conventional commits). Commit linting can be used (for example by implementing a CI job with gitlint) to ensure that commit messages have Signed-off-by lines, do not contain swearwords, have a decent length, and more. It is a nice and easy way of automating this and working towards decent commits.

This all does help with improving the commit messages and therefore the change history of pull requests. But of course, squash merging must be disabled/forbidden still!

In my opinion, reviewing commit messages should be part of every normal code review. The GitHub web interface does not particularly support that, because one has to click through several pages until the actual commit is viewed. That's why I like to fetch the pull requests from GitHub (git fetch REMOTE pull/PR_NUMBER/head) and review them commit-by-commit on my local machine (git log $(git merge-base master FETCH_HEAD)..FETCH_HEAD).

To sum up...

To sum up, don't enable squash merging in your repository configuration! Disable it, in fact! It hurts your project more than it provides value (because it doesn't provide any value)! It is a disrespectful and destructive operation that minimizes the value your project receives via pull requests.

I, for one, stop contributing to projects if they squash merge.

I am working on a project that uses GitHub Pages for publishing its documentation. One annoying thing with GitHub Pages is that it adds a new branch to your repository that is completely useless for you; on your local machine, you do not care about that branch at all.

Very recently, this project also enabled Dependabot. Dependabot is really nice, but it creates new branches in your repository, from which it files its pull requests. That's obviously totally okay, but it results in more branches that get downloaded on every git fetch, even though I do not really care about them – after all, I can see the pull requests in the repository's web interface and handle them there!

So what do we do? Right! We use our git superpowers to simply ignore these remote refs!

Here comes the trick: you can teach git to ignore remote refs, and that even works with patterns (note that these negative refspecs require Git 2.29 or newer)! So you go to the .git/config of the repository in question, find the remote that you're concerned about, and add a new line to the configuration of that remote:

fetch = ^refs/heads/gh-pages

It is as easy as that!

To check whether everything worked, you can delete the remote ref of the GitHub Pages branch that you already fetched from your local repository via

git branch -r -d <remote>/gh-pages

And then check whether it gets re-fetched by simply running git fetch for that remote.
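The whole procedure can be played through in a throwaway setup (the repository below is invented for the demonstration; negative refspecs require Git 2.29 or newer):

```shell
# Throwaway demo: a negative refspec keeps gh-pages from being fetched.
set -e
dir=$(mktemp -d)
cd "$dir"

# Build a fake "upstream" with a master and a gh-pages branch.
git init -q src
git -C src config user.email "demo@example.com"
git -C src config user.name "Demo"
git -C src checkout -qb master
git -C src commit -q --allow-empty -m "init"
git -C src checkout -qb gh-pages
git -C src commit -q --allow-empty -m "pages"
git -C src checkout -q master

git clone -q src work
cd work

# Ignore the gh-pages ref from now on (same as editing .git/config).
git config --add remote.origin.fetch '^refs/heads/gh-pages'

# Drop the already-fetched tracking ref, then fetch again.
git branch -r -d origin/gh-pages
git fetch -q origin

remaining=$(git branch -r)
echo "$remaining"
```

After the fetch, origin/gh-pages does not reappear, while origin/master is still tracked as usual.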

To ignore by pattern, for example for dependabot branches, you can insert (on a new line of course)

fetch = ^refs/heads/dependabot/*

And delete the already fetched remote refs again via

git branch -r -d <remote>/dependabot/...

Piece of cake.

I recently came to think about how to enforce good commit messages without enforcing a commit message style such as conventional commits (read more about my passionate hate for conventional commits here).

Disclaimer: This article does not provide a solution, as I don't have one yet.

The Why

We all know what a good commit message is.

You didn't believe this, did you?

It is more like: some of the developers out there know how to write one, but most kiddies out there don't. And the commits from last night are only the ones containing swear-words. We all know projects where the developers just don't care about good commit messages, and where something like “implement #1241” is “considered” a “good enough” commit message for a 1200-lines-of-code change that touches 15 files.

I do not really have to reiterate what others have already stated numerous times, do I?

The How

In my post about my hate for conventional commits I already wrote why I think automating these kinds of things is a bad idea. But to reiterate it shortly: enforcing a certain “style” of commit messages, such as common prefixes, certain trailers and so on, makes people resort to only fulfilling the automatically tested requirements.

So if you enforce that a commit message subject should start with one of fix, change or doc, people will no longer write "Implement JSON backend" as commit message, but "change: Implement JSON backend". What have you gained? Nothing.

Instead, I want people to write commit messages that tell me why a change was made. Something along the lines of

Implement JSON backend

This patch implements the JSON backend using the JSONFOO library. Note that content is currently not read in a zero-copy fashion with this patch, to keep the diff small.

Zero-copy JSON handling will be implemented in a separate patch once we have confirmed that JSON is working as expected.

Signed-off-by: Ned Stark <ned@nohead.email>

Manual commit message review

In my opinion, commit message review should be part of every normal code review. What I think, though, is that interfaces such as GitHub's review interface do not particularly encourage this. Also, I feel like requesting commit message changes is socially not as accepted as code change requests are. Asking a developer to change a function name is (at least from my perspective) seen as a normal request. Asking a developer to rewrite their commit messages is not.

I don't know why this is or how I got this impression, but I feel this way and it concerns me. It really should be as socially acceptable as requesting a code change, because the commit message is equally as important as (or maybe even more important than) the code itself.

Today I was notified by a dear friend of mine about a fact that led me to think about the Rust project a lot. I came to the conclusion that the whole project, including but not limited to its most famous contributors such as Graydon Hoare, Alex Crichton and Niko Matsakis, is part of a giant plot.

This plot has been on-going for a long, long time. In fact, it started even before Rust 1.0 was released in May 2015! I suspect that the plot was started even before the Roadmap planning of Rust 1.0 in 2014, but this is only a (probably good) guess. I do not have any proof of it, and of course the people that were involved with the process of bringing Rust 1.0 to life would deny anything I claim here (I guess).

But I think this guess is a rather good one. In the mentioned Roadmap it was planned to release every 6 weeks, and this is what I base that guess on. They set it in stone right there. When Rust 1.0 came out on the 15th of May in 2015 – which is a rather odd date, to be honest: 2015-05-15, or 5/15/15 in US notation – the plot must already have been settled. Maybe I'm onto another trail there with this date, as 51515 almost looks like 666, but this is something I will dig into in another blog post.

So when they settled it, they already had said plot in the back of their heads. What that ominous plot is, you ask? Well... it is as simple as this:

Rust 1.60.0 came out on 7th of April 2022. Thinking forward for 9 releases, we can easily calculate that Rust 1.69.0 will be released on 20th of April 2023, or rather: 4/20 of 2023.
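For the skeptics, the arithmetic checks out: nine releases of six weeks each are 9 × 6 × 7 = 378 days, which (using GNU date here) lands exactly on that day.

```shell
# 9 releases, 6 weeks each, counted from the Rust 1.60.0 release date.
days=$((9 * 6 * 7))
release=$(date -u -d "2022-04-07 + ${days} days" +%F)
echo "1.69.0 lands on: ${release}"
```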

That's it, there it is! They have planned it all along! We've been fooled and tricked for years!

Now that I've shown that all the hours and hours of effort these maintainers and hard thinkers put into planning, preparing talks, and discussing on various channels all over the internet, but also in the real (“offline”) world, were just spent to fool us into a conspiracy of putting together funny numbers on some release note, I can go back to my quiet chamber and think a bit more about that 5/15/15.


Of course this article is a joke and nothing here is meant seriously, except that all these people indeed did a very good job and put countless hours into making Rust awesome!

Also note that if you use the original date for the calculation, everything falls apart, sadly.

Mother/Android is a 2021 American post-apocalyptic science fiction thriller film.

Holy crap, I can only say. This was probably one of the most gripping movies I have seen in the last 12 months!

Plot

Androids, which were mere servants but then became evil, try to kill all humans.

That's the plot, yes. It is nothing spectacular, maybe even boring. But the execution of this simple setting is so good that I was clinging to everything I could get hold of.

Reception

On Rotten Tomatoes, the movie got only a weak 4.9/10, whereas I would actually give it a strong 8.5/10.

So some of you already know from my last article “KDE is love” that I got a new device from my employer. I was allowed to select this device on my own and install my preferred Linux distribution on it.

Now I want to give some feedback on the device itself.

Build quality

The device is a 15.6-inch notebook measuring 35.6 cm in width, 1.7 cm in height and 23.4 cm in depth. It weighs about 1.5 kg with battery (all manufacturer specs). It also comes with a nice and large 114.5 × 70 mm touchpad and without a number pad, which was really important for me, as I think number pads are just a waste of space.

The build is very solid. I really like touching the device, it feels very well done and even compact, although it is the biggest form factor in a notebook that I owned to date.

The fans blow hot air out of the back of the device, which makes working on your lap possible without burning your thighs. The power switch is next to the keyboard, not on the side of the device as it is on my previous/private Tuxedo (an InfinityBook 14 Pro v4). That, I assume, resulted directly from the feedback TuxedoComputers got about this... the power button on the side was not that practical.

On the left side of the device there is the Kensington lock, one USB 3.2 (Gen1) and one USB 2.0 slot as well as your headphone jack and a micro SD card reader. On the right side, there's another USB 3.2 (Gen1) slot as well as a USB 3.2 (Gen 1) type C slot next to your power plug and the HDMI out plug.

Hardware Specs

My device came with a 1 TB Samsung NVMe SSD, 32 GiB of DDR4 3200 MHz Samsung memory and an AMD Ryzen 7 4800H, which has 8 cores (16 threads) running at 2.9 to 4.2 GHz, with 12 MB of L3 cache.

I really have to say, this beast is a powerhouse.

Battery

Still, I get very decent battery lifetime out of it. That's because there's a 91.25 Wh battery built in, which can even be replaced. How crazy is that? TuxedoComputers advertises this beast with 20 hours of battery life under absolutely optimal conditions (lowest screen brightness and so on)... we all know that this is never true for real-world usage. Still, they advertise 11 hours of battery life with moderate usage (screen at half brightness, WLAN on). I must say that this might be about right. Right now I am using this device on the third day without having it plugged in. I only did a bit of browsing during that time, with the screen between 5% and 35% brightness. I only compiled once, for about 5 minutes. So there's that.

If I run out of battery, though, I have got a power bank from TuxedoComputers as well, so I can make sure that I have a full workday of battery power with me when working somewhere outdoors!

Screen

The screen is “only” an HD screen. Yes, but it is completely enough for me. Your mileage may differ, of course. What strikes me is that you get a crazy viewing angle out of that screen. And, you may have guessed, even with 5% screen brightness I can work without issues. 35% screen brightness is totally fine for everyday work.

I am looking forward to trying out this device outside in the sunlight. Right now it is too cold for me to sit outside, though.

Keyboard

The keyboard is the only point that could be a bit better. Maybe I am spoiled by the InfinityBook, where the keyboard is really good (IMO, of course), or even by my old Thinkpad X220.

Don't get me wrong, the keyboard is not bad. The pressure points are decent, the keys are of course large enough. But as they are embedded into the case, I cannot feel them properly when sliding over the keyboard with my fingers. So I have to look down from time to time. Could be much worse, yes.

Conclusion

So, to conclude this: The device is a powerhouse, the build quality is really nice and the keyboard could be a bit better. I don't know how, though. The specs are phenomenal (for me).

Of course, I had absolutely no problems getting Linux (NixOS) running on it, since it comes from a Linux manufacturer. My KDE Plasma 5 setup runs as smooth as I could wish on it.

All in all, after working with it for about one week, I cannot complain at all and I am really looking forward to working with this device long-term.

Some of you might know that I have been using KDE Plasma for some time now and that I am really in love with it. Lately, I have fallen even more in love with it.

Here's why.

A new device

As I recently switched my job, I had the opportunity to get my hands on a new device (thank you, employer)! I opted for a Tuxedo Pulse 15 Gen 1 which is a wonderful device. I might be writing about that in another article.

Naturally, I installed my favourite Linux distribution (NixOS) and KDE Plasma on it. After updating the device and all my other installations to the latest release of NixOS (which is 21.11), I was greeted by the brand new KDE Plasma 5.23 welcome screen.

KDE Plasma has been running absolutely smoothly on my old device (which I still use), a Tuxedo InfinityBook 14 Pro v4. Still, I was even more amazed that it ran even smoother on the new device. There are no glitches, no screen flickering even while plugging in an external display, not the slightest stutter while moving windows around. Don't get me wrong, there weren't any of these issues with my old Tuxedo either! Still, it felt so new and refreshing on the new installation that I absolutely had to write about it.

Initial setup

The initial setup of my KDE experience involves some work. The default configuration of the KDE desktop is really nice, but I need some custom shortcuts to be as productive as I want to be. For example, I want to use the HJKL keys for some tiling stuff. KWin, the KDE window manager, is not a tiling window manager, but you can still get some basic layouts to work with it (top-right corner, right, lower-right corner and so on). I also need key bindings for fullscreen, for switching virtual desktops and for bringing up Krunner, KDE's app launcher/search tool/... whatever you call that awesome piece of software!

Krunner

Krunner is awesome. It is much more than just an app launcher. With the (Firefox) browser integration, you can search bookmarks, history and open tabs (and switch right to them), you can search windows and passwords (password-store, anyone?), and of course you can launch applications from it. It just integrates so nicely with KDE, it is amazing.

Plasma-browser integration and kdeconnect

Plasma browser integration (for Firefox) is a godsend. Combined with kdeconnect, which is another godsend, it makes the experience perfect. Notifications from my mobile phone can be synced to the desktop and the other way round (although I disabled this specific setting, because I tend to get a lot of notifications on my phone that would just annoy me on my desktop, especially when in a call with co-workers). Whenever someone calls, my music or Netflix stream (the latter made possible via the Plasma browser integration) is automatically paused and (of course) resumed as soon as I hang up. I can send websites from my browser to my phone when I want to continue reading in the bathroom (we all do that, don't laugh!) and of course also the other way round. And so on and so forth...

All the details

The details, of course, matter. For example: I just found the setting to disable the touchpad while typing. I toggled it and it just works like a charm. I added a script that executes ssh-add on my SSH keys to the autostart scripts, and it just works, asking me for the passphrase of my SSH key after login to add it to ssh-agent. Enabling the integrated redshift functionality of KDE was literally one click in the settings, and now I get a reddish screen in the evening, which is more eye-friendly.
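The ssh-add autostart script mentioned above can be as small as the following sketch. The file location, the key path and the use of ksshaskpass as the graphical passphrase prompt are assumptions on my part; KDE picks up executable scripts placed in its autostart directory and runs them at login:

```shell
#!/usr/bin/env sh
# Hypothetical autostart script, e.g. ~/.config/autostart-scripts/ssh-keys.sh.
# SSH_ASKPASS tells ssh-add which graphical program to use for the passphrase
# prompt; ksshaskpass ships with KDE Plasma.
export SSH_ASKPASS="${SSH_ASKPASS:-ksshaskpass}"

add_key() {
    key="$1"
    if [ ! -f "$key" ]; then
        echo "key not found: $key"
        return 1
    fi
    # Redirecting stdin forces ssh-add to use SSH_ASKPASS instead of a terminal.
    ssh-add "$key" </dev/null
}

# The key path is only an example; adjust it to your setup:
# add_key "$HOME/.ssh/id_ed25519"
```

Once the key is added, every subsequent ssh invocation in the session uses the agent without asking for the passphrase again.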

Conclusion

My conclusion is clear and obvious: love! I don't know why I would ever switch away from KDE again. It is the same as with my NixOS experience, which started in 2014. I just don't see why I would switch away from something that is so good.

Maybe this article was a bit annoying to you now, me fanboying about KDE. I don't know. It's just something that I wanted to share.

Enola Holmes is a 2020 (Netflix) mystery film.

I have to say that I am a big fan of Sherlock Holmes, the Benedict Cumberbatch version. So I was a bit skeptical whether “Enola Holmes” would entertain me. My fears were proven unfounded, though, as the movie was fun and exciting!

Plot

Enola Holmes is the youngest sibling in the famous Holmes family. She is extremely intelligent, observant, and insightful, defying the social norms for women of the time. Her mother, Eudoria, has taught her everything from chess to jujitsu and encouraged her to be strong-willed and to think independently.

Wikipedia

... one day, her mother goes missing. Naturally, Enola tries to find her.

I won't share more here, as always. Giving away too much of the plot would just reduce the entertainment should you, dear reader, watch the movie.

Reception

The movie has a 7.1/10 rating on Rotten Tomatoes, according to Wikipedia, which is quite good.

I would rate it with 8/10. Also, a sequel was announced in May 2021!