Feed Aggregator

Aggregates blogs and websites from people around me.

My DNS setup with PowerDNS

I recently overhauled my DNS setup (recursive for my home network and authoritative for my domains), so why not blog about it?

Let’s start with the requirements, which are kind of special. I host all my stuff at home behind a DSL line, and while I have a VPN with a static IP address as well, I don’t want to tunnel everything through it. Thus, I need dynamically updatable DNS records for almost everything.

Furthermore, I want my home network to be able to resolve my domains even when the internet connection is broken (or stated differently: I don’t want the requests to hit the internet).

Hence, I need a combined recursive and authoritative server. The recursive part, however, has some quirks of its own: I want to be able to resolve non-standard community TLDs like .dn42 in the VPN-based overlay networks I’m part of. But at the same time, I want the “usual” DNS to be DNSSEC-validated. Together, this requires the ability to have so-called “negative trust anchors”, or NTAs for short. They state that some part of the DNS must not be DNSSEC-validated, come what may. Otherwise, I could tell the DNS recursor where to look for .dn42, but it would refuse to answer any requests for it because the DNSSEC-signed root zone says there is no .dn42.

Previously, I was using bind for the authoritative part. But because bind is incapable of having configurable NTAs¹, I used unbound as recursor. This was rather cumbersome and meant that every request passed through two daemons with two caches. I had chosen bind because it was (and AFAIK still is) the only software able to DNSSEC-sign zones with only the ZSK, not the KSK, which I kept exclusively on my laptop. However, as my laptop is now backed up on my home server anyway, I got rid of this level of paranoia and looked at other software again.

In other contexts I very successfully used PowerDNS, and hence I set forth to replace the legacy combination with a nice pdns and pdns_recursor. And indeed: It works like a charm!
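To give an impression of how little is needed, the recursor side boils down to something like this. This is only a sketch: the forwarder address and file paths are assumptions for illustration, not my actual config.

```ini
# /etc/powerdns/recursor.conf (sketch)
dnssec=validate
lua-config-file=/etc/powerdns/recursor.lua
# hand .dn42 queries to a resolver inside the overlay network
forward-zones=dn42=172.20.0.53
```

```lua
-- /etc/powerdns/recursor.lua (sketch)
-- Negative trust anchor: skip DNSSEC validation below dn42.,
-- since the signed root zone would otherwise prove the TLD's non-existence.
addNTA("dn42.", "community TLD outside the public root")
```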

What’s more, there’s hardly anything interesting about the config that I could mention here: It just works!™ Okay, it’s a bit more than that, especially since much of what I considered configuration actually takes place in the database. But nothing I had to do was out of the ordinary, and I found everything I needed in the extremely good documentation. Compare that to bind, whose documentation is either incomprehensible or incomplete.

And on the way, I got some nice tools like pdnsutil edit-zone. Hooray!

  1. In newer versions you can inject them at runtime, but they aren’t preserved across restarts. 

Bonding Wifi interfaces with Network Manager

For a long time, it bothered me that transitioning from a network cable to Wifi caused the loss of all persistent connections. SSH was the most painful, but others (streams, chat, …) were noticeable too.

After upgrading to Ubuntu 18.04, I noticed that Network Manager has the ability to control bonding devices. Maybe it gained this ability even earlier, but I only noticed it now. What Linux calls bonding – others call it teaming, port channel, or link aggregation group – is usually used with multiple cables to increase throughput, reliability, or both, mostly on servers. The different modes decide which packets are sent out of which “slave” interface, sometimes in cooperation with the switches they are connected to, sometimes without. So why not use bonding to bond the Ethernet and Wifi cards in active-backup mode?

The problem was that the GUI of Network Manager apparently doesn’t want you to bond Wifi interfaces. Manually, using ip link, I was able to configure the desired bond, but ditching Network Manager altogether makes managing Wifi networks really cumbersome. So after digging around a bit, I found the holy grail:

nmcli conn modify "Wifi connection" connection.master bond0 connection.slave-type bond

This simply adds the already configured Wifi connection as a slave to the already configured active-backup bond connection. You can even use the GUI afterwards to manage both; it’s just this one link that cannot be created with the GUI.
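For completeness, setting the whole thing up from scratch might look roughly like this. Connection and interface names are placeholders, and older nmcli versions may expect `mode active-backup` instead of the `bond.options` syntax.

```shell
# create the active-backup bond itself
nmcli conn add type bond ifname bond0 con-name bond0 \
    bond.options "mode=active-backup"
# enslave the wired interface (interface name is a placeholder)
nmcli conn add type ethernet ifname enp0s25 con-name bond0-eth \
    connection.master bond0 connection.slave-type bond
# enslave the already configured Wifi connection
nmcli conn modify "Wifi connection" connection.master bond0 \
    connection.slave-type bond
nmcli conn up bond0
```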

Now, I can dock and undock my laptop without losing all SSH connections. ☺

Something with Wood

Over the last two weekends, I finally did some #WasMitHolz (“something with wood”) myself: a changing-table top for a dresser! I had hatched that plan as soon as it became clear that we needed a changing table and didn’t have a dresser of suitable height. “The plan” looked like this:

Highly professional sketch of the construction

I avoided having to saw anything myself, since I own neither a workbench nor a decent saw. So all the boards were cut to size right at the hardware store. Well… all but one, because I had blissfully ignored the “2ד in the list. So after the first attempt, the project only looked like this:

Work in progress…

A week later, I had the missing board (and some accessories from the Swedish furniture store) and could finish the whole thing:

The finished top

My Multi-Room Music Setup

For much too long, I wanted to write about how I completely overhauled our multi-room audio setup at home about a year ago.

Previously, the setup was as follows: We had four sets of speakers, distributed over the different rooms of our apartment, all connected via copper cable and a USB DAC to the HP Microserver that played the music using MPD and Pulseaudio. The obvious downside of this approach is its lack of scalability: Each room required a dedicated cable to the server and a share of the 7.1 USB DAC, which I tricked into playing the same stereo stream on each of its four mini-jack plugs. The less obvious downside is the lack of surge protection on copper audio cables.

So when I decided to move the server to another place in the apartment and equip it with a UPS (and thus surge protection, on the power line as well as on the Ethernet side), I was looking for a new solution. The main requirement was synchronicity: While standing in the hallway, I want to have exactly synchronous audio from all rooms. That’s what Pulseaudio was for in the old setup, as I couldn’t get ALSA to do the job well enough.

Secondly, to achieve surge protection on all wires towards the server, it seemed like a good idea to use something Ethernet-based (obviously requiring some playing device in every room).

Enter Snapcast. In a traditional server-client architecture, this little piece of software offers exactly that: streaming time-synchronized audio from a FIFO on the server to multiple clients’ speakers. It is topped off by an Android app that can be used both as a client and as a remote control for the various volume levels. And with the buffers all tuned low, the delay when starting and stopping music is minimal.
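In case you want to replicate such a setup, its core is just two config fragments: MPD writes into a FIFO, and snapserver reads from it. This is a sketch with assumed paths, not my exact configuration.

```ini
; /etc/snapserver.conf (sketch)
[stream]
source = pipe:///tmp/snapfifo?name=MPD
buffer = 1000   ; milliseconds of client buffer; lower means snappier start/stop
```

```
# mpd.conf: play into the FIFO instead of a sound card
audio_output {
    type   "fifo"
    name   "snapcast"
    path   "/tmp/snapfifo"
    format "48000:16:2"
}
```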

Dusting off some old Raspberry Pi Model Bs settled the (far from hi-fi) playback hardware. But because I don’t want to update a couple of Raspbians every day for no apparent reason (remember, they run a single daemon in a sealed-off network and only ever connect to my own server), I went ahead and toyed around with Buildroot and its Snapcast add-on SnapOS. And for a software guy like myself, that’s where the real fun began: How fast can I get the Pis to play music after powering them up? Or, as the next-best optimization goal: How small can I get the image?

Well, I won’t say that I’ve attained the optimum, but I’m satisfied by the current solution: My snapcast satellite configuration (currently) produces a 1.1 MiB kernel (XZ compressed) and a 3.9 MiB initramfs (uncompressed) that takes around 15 seconds to boot and play music, fetching all its configuration (network details, hostname, NTP and Snapcast servers) via DHCP. Only my SSH key is “hardcoded”.

So two of the rooms now have Raspberry Pis for playback, the workroom uses my laptop and – what I like most – the living room uses the Amazon FireTV (the Android app is also a client, remember?).

I was a bit disappointed that I actually needed wired Ethernet for decent synchronicity, but for some reason I couldn’t get a satisfactory result using Wifi links. Oh well.

GPG attacks

While communicating with the CCC office, I was informed that they couldn’t send me GPG encrypted mails because of the error “GPGME: Ambiguous name”. Of course there is more than one key if you search for my mail address (I had other keys in the past), but they also quoted my correct key ID.

Well, it turns out the GPG key ID collisions that were in the news last year also caught my key. While the key with ID 0xB17F2106D8CCEC27 really is mine (as I also state on my keys page), the fake one with long ID 0x5AAD3FC3D8CCEC27 (notice the identical last eight characters) has many of the correct parameters mentioned in the article:

  • It has the correct primary user ID.
  • It is signed by some other fake keys that correspond to real keys that signed my real key (like CACert).

However, it also lacks some details:

  • It doesn’t have any secondary user IDs.
  • It has the wrong creation time.
  • Its signatures never expire. CACert only ever signs for one year.
  • And of course, it’s revoked, as it was created by researchers.

All in all, I guess I should be proud that apparently, I’m part of the GPG strong set (as suggested by the article) and surprised that the CCC office considers revoked keys.

Preventing RDNSSD from Ruining the SD Card

So I have a Raspberry Pi running in the Freifunk Bremen network. It has a dynamic IPv6 address; and since there are currently five active gateway servers on this network which also act as DNS servers, I wanted to get these DNS server addresses dynamically as well.

This can be done with the rdnssd daemon: it listens for IPv6 Neighbour Discovery Protocol packets (in particular Router Advertisements) and extracts the DNS server addresses from them. The addresses are then written to the /etc/resolv.conf file so that they are used as normal DNS servers by the system.

However, after setting this up, I noticed that the green LED on my Raspi was lighting up every few seconds, indicating “disk” activity. Of course, with a Raspi there is no magnetic hard disk but rather a MicroSD card which contains the file system; and since these cards can only tolerate a limited number of write cycles, the frequent LED blinks were worrying.

The cause of the write accesses was that the /etc/resolv.conf file was rewritten every few seconds. IPv6 RA packets are received quite frequently here (about 100 packets per minute!); and each time the ordering of DNS servers is updated to prefer the server that was received most recently.

Updating the /etc/resolv.conf file is done by the resolvconf tool (actually the openresolv tool, if you look under the hood). It takes name server addresses from various sources (like DHCP client, VPN connections, RDNSSD, and static network configuration) and combines those into a single resolv.conf file. So whenever RDNSSD wanted to reorder the DNS servers it had received, resolvconf rewrote the /etc/resolv.conf file.

To prevent the resolvconf tool from frequently writing to SD card, I took the following steps:

  • make /etc/resolv.conf a symbolic link to /var/run/resolv.conf.
    /var/run resides on a ramdisk so data written there does not touch the actual SD card. This also means that the /var/run/resolv.conf file will be lost during reboot; but the resolvconf tool will recreate it during boot.
  • disable the unbound_conf=/var/cache/unbound/resolvconf_resolvers.conf line in /etc/resolvconf.conf. This line was causing the resolvconf tool to also update the /var/cache/unbound/resolvconf_resolvers.conf file every few seconds (which was unnecessary in my case, since I don’t have an Unbound DNS Server installed on the Raspi); and since /var/cache is stored on the SD card, this caused an actual write access to the card.
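The two steps boil down to roughly the following commands. This is a sketch: the paths match what Raspbian used on my system, but verify them on yours before running anything.

```shell
# 1. point /etc/resolv.conf at the ramdisk copy; /var/run is tmpfs-backed,
#    and resolvconf recreates the file during boot
sudo ln -sf /var/run/resolv.conf /etc/resolv.conf

# 2. comment out the unbound_conf= line so resolvconf stops rewriting
#    the Unbound resolver list on the SD card
sudo sed -i 's/^unbound_conf=/#unbound_conf=/' /etc/resolvconf.conf
```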

After making these two changes, the green LED once again remains dark, and cat /etc/resolv.conf shows that the IPv6 name servers merrily change every few seconds.


I just stumbled across this great coding error of mine and thought I could share it while waiting for the corrected program to finish its run.

My previous code read like this:

if key not in d:
    d[key] = get_value()
return d[key]

I thought “Hey, dict.setdefault() would come in handy here”, as its documentation says

setdefault(key[, default])

If key is in the dictionary, return its value. If not, insert key with a value of default and return default. default defaults to None.

So I replaced the code by

return d.setdefault(key, get_value())

Looks alright, right? Wrong, as I was greeted by an infinite recursion I couldn’t explain at first. But the difference is obvious: In the previous form, get_value() only gets executed if the key is not yet present, while in the second form, get_value() is always executed, which caused the recursion.

So while the documentation is totally correct in reading like the two forms would be equivalent, eager evaluation of function arguments makes all the difference. Subtle!
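The difference is easy to demonstrate. Here is a minimal sketch; the calls list is just instrumentation to observe how often get_value() actually runs:

```python
calls = []

def get_value():
    calls.append(1)          # record every invocation
    return "computed"

d = {"a": "cached"}

# setdefault returns the cached value for "a" -- but its default
# argument is evaluated *before* the call, so get_value() runs anyway.
result = d.setdefault("a", get_value())
assert result == "cached"
assert len(calls) == 1       # one call despite the cache hit

# The explicit "if key not in d" form never calls get_value() here:
if "a" not in d:
    d["a"] = get_value()
assert len(calls) == 1       # still only one call
```

If get_value() happens to call back into the caching function (as in my case), the setdefault variant recurses forever, while the explicit form terminates.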

Dotfile repository

Something that I should have done waaaay earlier: I finally collected some of the configuration files that I would like to have on every computer (most of which I originally stole from someone else myself) into a dotfile repository, for me to keep in sync and for everybody else to copy and use ad libitum.

In particular, my last post about terminal bells after long running shell commands is more or less contained in there. ☺

Terminal bell after long running commands

Sometimes I wonder “Why on earth didn’t I think of this before?”. Today was such a day, when I thought: Why not get notified automatically when long running commands in a terminal finish? Actually, there’s a mechanism in the window manager for highlighting windows needing attention. Why shouldn’t a terminal prompt after a long running command be such a situation?

The solution was actually pretty simple. I found a ZSH snippet that rings the bell after a long-running command, modified it a bit to my liking, and all that was left was to configure my terminal emulator urxvt to make a ringing bell call for attention, using the following line in ~/.Xresources:

urxvt*urgentOnBell:     true

There you go, now when some build process finishes or aborts, the corresponding terminal window blinks in my taskbar. Yay! ☺
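The snippet boils down to something like the following sketch for ~/.zshrc. The 10-second threshold and the function names are my own choices here, not the exact snippet I adapted.

```shell
zmodload zsh/datetime              # provides $EPOCHSECONDS
autoload -Uz add-zsh-hook

zbell_duration=10                  # seconds before a command counts as "long"
zbell_timestamp=$EPOCHSECONDS

zbell_begin() { zbell_timestamp=$EPOCHSECONDS }
zbell_end() {
  # ring the bell if the last command ran longer than the threshold
  (( EPOCHSECONDS - zbell_timestamp >= zbell_duration )) && print -n '\a'
  zbell_timestamp=$EPOCHSECONDS    # don't re-ring on empty prompts
}

add-zsh-hook preexec zbell_begin   # runs just before a command starts
add-zsh-hook precmd  zbell_end     # runs just before the next prompt
```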

Update: Since I switched to Gnome 3 shortly after, I had to find a way to make the urgent bit visible there as well. Luckily, I found an ancient extension that does precisely that.

Unicode Emojis for Gajim

I am a huge fan of Unicode emojis, because they are so universally usable. And I am a huge fan of XMPP. My XMPP desktop client of choice is Gajim, because it supports every XEP I care for. But until now, Gajim didn’t support Unicode emojis: those covered by my system font showed up as black-and-white glyphs, but that’s nothing compared to the fancy colorful images that show up on, for example, Android.

Thus, I sat down and put together an emoticon package for Gajim with support for many, many Unicode emojis. Of course I didn’t create any of the artwork, but only generated an index file that is usable with Gajim. It is backwards compatible with the legacy ASCII codes Gajim used to offer.

Now I have colorful emojis in Gajim, and so can you!

Design changes and typographic improvements

While reading the excellent Practical Typography (thanks to Fefe for the recommendation), I noticed that while my LaTeX documents all are (of course) quite well formatted, my website is a mess. In particular, the line length was terribly long and the font smaller than it needed to be. By increasing the font size, a switch to a serif font was feasible, because the details (serifs) now had more space. And lazy (and budget-less) as I am, I decided to give the free font “recommended” by the book, Charter, a try. I liked it, and complemented it with Source Code Pro as the monospace font for source code.

I intermittently considered a multi-column layout. The bad browser support for controlling column breaks, orphan control and elements spanning multiple columns – particularly in my beloved Firefox – was a first setback. But the final nail in that idea’s coffin arose from the content on my page. While on the home page each section fits nicely on a screen in a two-column layout, so that only the eyes have to move, longer blog posts would cause endless scrolling from the bottom back to the top whenever the first column has been read. This is of course mainly caused by the blog posts being unstructured, but I think design has to follow content, not the other way round.

The change was completed by serious improvements to the typography throughout all posts and pages: the correct language-specific double quotes, curved apostrophes, non-breaking spaces, nicer code formatting, images being set <aside> if enough space is available, and a better-structured list of reports. Some internal changes made my Gemfile a bit shorter, because in two cases I had used stuff that wasn’t the default even though the default works just as well.

On my to-do list, there are still some thoughts left, not only since I read the book:

  • What icon could resemble the blog in the navigation? I like the icon-only navigation, but having “Blog” written out there nags me every time.

    Update: Thanks to Tim, who pointed me to a blog icon by H.R. Sinclair, this is no longer a problem.

  • Should I get rid of the permanent header, even on big screens, and replace it with the version already active on smaller screens? Probably.
  • Again because of the bad browser support (though not Firefox this time!), should I switch to programmatically inserting soft hyphens everywhere while building the website instead of letting the browser handle hyphenation? There seems to be no Jekyll plugin yet, but writing one shouldn’t be too hard.

    Update: Having seen how the front page looks in the stock Android browser (based on Webkit, thus no hyphenation), I really need to do this soon!

I’m happy to hear feedback on any of these thoughts or the design change as a whole. After all, this is just my perception of “nice”, and it would be good to know if other people perceive this the same way.

Update: Turns out that X actually still delivers a Charter system font – as a bitmap font, resulting in a horrible look on Linux without the proper Charter OTF files installed. Thus, the system font is no longer preferred (or used at all) and the 20 KB of WOFF always have to be downloaded.

Calendarserver and DAV_DAV_NOT_CALDAV error

So I had just copied the data of my Darwin Calendarserver from an old disk to a new one, but Thunderbird only showed the error DAV_DAV_NOT_CALDAV (“CalDAV: Calendar points to a DAV resource, but not a CalDAV calendar”) on the console when trying to open the calendars.

After enabling verbose logging in Thunderbird (by setting calendar.debug.log and calendar.debug.log.verbose to true) and digging through the Lightning sources (in the extensions/{e2fda1a4-762b-4020-b5ad-a41df1933103}/components/ directory of my Thunderbird profile), it turned out that the calendars had the “{DAV:}resourcetype” attribute set to just “collection” rather than to both “collection” and “calendar”. This was also visible when opening https://homeserver/calendars/users/oliver/calendar/ in Firefox. However, I couldn’t find out what caused this in Calendarserver, neither from the logs nor from the code.

But digging into other directions finally produced a result: the extended attributes were missing on the copied data. Sure, the old disk had had the user_xattr flag set in /etc/fstab; and the new partition even uses that flag automatically (as seen from /proc/mounts). But for copying the data, I had attached the old disk via USB and had mounted it manually – and in that step I had forgotten to specify “-o user_xattr” :-( . Without that parameter, even “cp -ax” can’t copy these attributes.

After mounting the old disk with the correct option and copying the data over again, “getfattr -d -R /var/spool/caldavd” finally showed lots of extended attribute values, and Thunderbird opened the calendars. Success!
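The fixed procedure looks roughly like this. The device node and mount point are placeholders; adjust them for your disk.

```shell
# mount the old disk WITH extended attribute support this time
sudo mount -o user_xattr /dev/sdb1 /mnt/olddisk

# cp -ax can now copy the extended attributes along with the files
sudo cp -ax /mnt/olddisk/var/spool/caldavd /var/spool/

# verify: this should now list plenty of extended attribute values
getfattr -d -R /var/spool/caldavd | head
```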

Firefox Private Browsing Windows Are Not Independent

Public Service Announcement: If you have multiple “Private Browsing” windows open in Firefox (41), they all share the same cookies. So if you think you can login to a site multiple times using multiple Private Browsing windows, you’re out of luck with plain Firefox.
More importantly, if you’re logged into Facebook in one PB window, opening a second PB window won’t protect you from being tracked by Facebook.

You can test this by setting a test cookie in one PB window and then visiting the same site in a second PB window, where the cookie from the first window will be displayed.

Of course this information leak not only affects cookies (they are just the most obvious piece of information leaked between windows). For example, a browser history sniffer running in one PB window also shows sites you’ve visited in other PB windows.

In summary, this is not the kind of behavior I was expecting from a Private Browsing feature. In the end this feature is really just usable to avoid leaving traces on the computer, but doesn’t help to protect your privacy from the sites you visit.

Gedit File Search Plugin 1.2 now supports Gedit 3.12

The latest Gedit File Search Plugin now adds support for two more Gedit versions: Gedit 3.10, which will be used by Ubuntu 14.04 LTS “Trusty Tahr” (soon to be released), and Gedit 3.12, currently the latest bleeding-edge release.

Unfortunately the Gedit in Ubuntu 14.04 lacks some features which were used by this plugin: namely, the file browser in the side bar doesn’t offer a Search in Files shortcut any more, and highlighting of file search results in opened documents doesn’t work any more. While Gedit has regained these features in later versions, that’s only small consolation for Ubuntu LTS users.
I’m not sure if there’s a way to work around these shortcomings, or if it’s possible to add these features to the Ubuntu Gedit version. Guess once we actually start using it in the next weeks/months it’ll become apparent whether this is a real problem.

There haven’t been any other changes in this release; but if you run a new version of Gedit, download Gedit File Search Plugin 1.2 and give it a try!

jhbuild + etckeeper: “Please tell me who you are.”

>be using Etckeeper with Git under Ubuntu 14.04 alpha
>be running “jhbuild --sysdeps install”
>be entering password for package installation
>jhbuild installs packages successfully, but then…
>a wild error appears:

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'root@trusty64vb.(none)')
E: Problem executing scripts DPkg::Post-Invoke 'if [ -x /usr/bin/etckeeper ]; then etckeeper post-install; fi'
E: Sub-process returned an error code

>”git config --global --get user.email”
>shows correct email
>oh wait, sudo?
>”sudo git config --global --get user.email”
>still looks good
>oh wait, actual root?
>”sudo su -”
>”git config --global --get user.email”

$ sudo su -
# git config --global user.email "root@trusty64vb"
# git config --global user.name "Root"
# exit

(this also explains why the problem didn’t appear in 12.04)

Gedit File Search Plugin 1.1 with support for Gedit 3.8

There’s a new version of the Gedit File Search Plugin available, which has been ported to Python 3. This means that Gedit 3.8 is now supported as well.

Internally, the code has been ported to run under both Python 2.7 and Python 3. Also, to accommodate the plugin loaders in the different Gedit versions, there are now two File Search entries in the plugin selection dialog: one for “current” Gedit, and one for Gedit before version 3.8. If you’re unsure which one to enable, just try both: one will refuse to load, and the other one should work :-)

So if you have been itching to use the file search plugin with a current Gedit version, go ahead and download the new release!

Refreshing DNS-SD entries in Nautilus

When publishing a new DNS-SD service (aka Zeroconf, Bonjour, or Rendezvous) with Avahi (e.g. by adding a .service file in /etc/avahi/services/), Nautilus sometimes doesn’t pick up changes made to the .service file, even though avahi-discover and avahi-browse have picked up the modifications. Killing and restarting Nautilus or Avahi doesn’t help either; the trick is to kill the gvfsd-network and gvfsd-dnssd processes, e.g. with “killall gvfsd-network gvfsd-dnssd”.

By the way, for debugging, the gvfs-ls and gvfs-info command-line tools are quite useful, as they show the same info as displayed by Nautilus.

Oh, and if you want to use the “p=” parameter in a .service file for specifying the password for FTP: it’s not supported by Nautilus – only the “path” and “u” parameters are handled. If you really want to avoid any prompt when double-clicking an FTP share in Nautilus, either allow login as user anonymous on your FTP server (and specify “<txt-record>u=anonymous</txt-record>” in your .service file); or save the FTP password in the Gnome keyring.
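For reference, a complete .service file for such an anonymous FTP share might look like this sketch; the share name, port, and path are placeholders:

```xml
<?xml version="1.0" standalone="no"?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h is replaced with the hostname by Avahi -->
  <name replace-wildcards="yes">FTP files on %h</name>
  <service>
    <type>_ftp._tcp</type>
    <port>21</port>
    <!-- only "u" and "path" are honored by Nautilus -->
    <txt-record>u=anonymous</txt-record>
    <txt-record>path=/pub</txt-record>
  </service>
</service-group>
```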

Gedit File Search Plugin 1.0 available – now for Gedit 3

The first version of Gedit File Search Plugin for Gedit 3.4 is finally available. This is mainly the work of Adam Dingle who did the initial port to GTK3 and the new plugin system, and I’m very grateful for his work!

Note that this version has only been tested with Gedit 3.4.1 on Ubuntu 12.04. It might work on other systems as well, though – I’m eager to hear of your experiences.

So go ahead and download the new release!

Also, if you are using Gedit 2, there’s a separate version available.

Gedit File Search Plugin 0.6 available

A new version of Gedit File Search Plugin is available, with minor bugfixes. More importantly, this is probably the last release to support Gedit 2 – any further work on this plugin will probably only happen for Gedit 3. Given that version 0.6 should be quite stable and bugfree, this ought to be a good time to send this branch into retirement, and move focus to Gedit 3.

So go ahead and download the plugin – no use in waiting!

[Update: to clarify, this version supports Gedit 2 only. A version for Gedit 3 will be released soon.]

How to use Magic Sysrq on Lenovo Thinkpad Edge 11

The keyboard on the Lenovo Thinkpad Edge 11 doesn’t have a Sysrq key anymore. There are Fn key combos to still get the functionality of these missing keys:

Fn+B = break
Fn+P = pause
Fn+S = sysrq
Fn+C = ScrLK
Fn+I = insert

To use Magic Sysrq key combos, this worked for me when using Ubuntu 11.04:

Press and hold Alt, press Fn and S, release Fn and S, then press the Sysrq command key.

For testing, the “h” command key is nice as it just prints some help text to kernel log. You can see in dmesg whether the key combo worked.
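If the key combo doesn’t cooperate, the same commands can also be triggered without the keyboard at all, by writing to /proc/sysrq-trigger as root (a sketch; requires Sysrq to be enabled):

```shell
echo 1 | sudo tee /proc/sys/kernel/sysrq    # enable all Sysrq functions
echo h | sudo tee /proc/sysrq-trigger       # equivalent to Alt+Sysrq+h
dmesg | tail                                # the help text shows up here
```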