Christoph's last Weblog entries

Looking for a replacement Homeserver
11th August 2016

Almost exactly six years ago I bought one of these Fuloong 6064 mini PCs. The machine has been working great ever since, both collecting my mail and acting as an IMAP server as well as providing public services. However, jessie is supposed to be the last Debian release supporting the hardware, and the system is rather slow and lacks memory. This is especially noticeable with IMAP spam-filter training and mail indexing. Therefore I'm looking for some nice replacement -- preferably non-x86 again (for no technical reason). My requirements are pretty simple:

Now I'd consider one of these ARM boards and get it a nice case, but they all seem to either fail in terms of SATA or not be any faster (and one needs to go for outdated hardware to stand a chance of mainline kernel support). If anyone knows something nice and non-x86 I'll happily take suggestions.

Tags: debian linux.
doveadm deduplicate
24th February 2016

Without further words:

% for i in $(seq 1 90) ; do doveadm mailbox status messages debian.buildd.archive.2011.05 | column -t ;  doveadm deduplicate mailbox debian.buildd.archive.2011.05 ; done
debian.buildd.archive.2011.05  messages=8094
debian.buildd.archive.2011.05  messages=7939
debian.buildd.archive.2011.05  messages=7816
debian.buildd.archive.2011.05  messages=7698
debian.buildd.archive.2011.05  messages=7610
debian.buildd.archive.2011.05  messages=7529
debian.buildd.archive.2011.05  messages=7455
debian.buildd.archive.2011.05  messages=7375
debian.buildd.archive.2011.05  messages=7294
debian.buildd.archive.2011.05  messages=7215
debian.buildd.archive.2011.05  messages=7136
debian.buildd.archive.2011.05  messages=7032
debian.buildd.archive.2011.05  messages=6941
debian.buildd.archive.2011.05  messages=6839
debian.buildd.archive.2011.05  messages=6721
debian.buildd.archive.2011.05  messages=6631
debian.buildd.archive.2011.05  messages=6553
debian.buildd.archive.2011.05  messages=6476
debian.buildd.archive.2011.05  messages=6388
debian.buildd.archive.2011.05  messages=6301
debian.buildd.archive.2011.05  messages=6211
debian.buildd.archive.2011.05  messages=6140
debian.buildd.archive.2011.05  messages=6056
debian.buildd.archive.2011.05  messages=6007
debian.buildd.archive.2011.05  messages=5955
debian.buildd.archive.2011.05  messages=5887
debian.buildd.archive.2011.05  messages=5826
debian.buildd.archive.2011.05  messages=5752
debian.buildd.archive.2011.05  messages=5706
debian.buildd.archive.2011.05  messages=5657
debian.buildd.archive.2011.05  messages=5612
debian.buildd.archive.2011.05  messages=5570
debian.buildd.archive.2011.05  messages=5523
debian.buildd.archive.2011.05  messages=5474
debian.buildd.archive.2011.05  messages=5422
debian.buildd.archive.2011.05  messages=5382
debian.buildd.archive.2011.05  messages=5343
debian.buildd.archive.2011.05  messages=5308
debian.buildd.archive.2011.05  messages=5256
debian.buildd.archive.2011.05  messages=5221
debian.buildd.archive.2011.05  messages=5168
debian.buildd.archive.2011.05  messages=5133
debian.buildd.archive.2011.05  messages=5092
debian.buildd.archive.2011.05  messages=5058
debian.buildd.archive.2011.05  messages=5030
debian.buildd.archive.2011.05  messages=4994
debian.buildd.archive.2011.05  messages=4964
debian.buildd.archive.2011.05  messages=4935
debian.buildd.archive.2011.05  messages=4900
debian.buildd.archive.2011.05  messages=4868
debian.buildd.archive.2011.05  messages=4838
debian.buildd.archive.2011.05  messages=4811
debian.buildd.archive.2011.05  messages=4778
debian.buildd.archive.2011.05  messages=4748
debian.buildd.archive.2011.05  messages=4722
debian.buildd.archive.2011.05  messages=4686
debian.buildd.archive.2011.05  messages=4661
debian.buildd.archive.2011.05  messages=4637
debian.buildd.archive.2011.05  messages=4613
debian.buildd.archive.2011.05  messages=4593
debian.buildd.archive.2011.05  messages=4570
debian.buildd.archive.2011.05  messages=4554
debian.buildd.archive.2011.05  messages=4536
debian.buildd.archive.2011.05  messages=4520
debian.buildd.archive.2011.05  messages=4500
debian.buildd.archive.2011.05  messages=4481
debian.buildd.archive.2011.05  messages=4466
debian.buildd.archive.2011.05  messages=4445
debian.buildd.archive.2011.05  messages=4430
debian.buildd.archive.2011.05  messages=4417
debian.buildd.archive.2011.05  messages=4405
debian.buildd.archive.2011.05  messages=4390
debian.buildd.archive.2011.05  messages=4376
debian.buildd.archive.2011.05  messages=4366
debian.buildd.archive.2011.05  messages=4360
debian.buildd.archive.2011.05  messages=4350
debian.buildd.archive.2011.05  messages=4336
debian.buildd.archive.2011.05  messages=4329
debian.buildd.archive.2011.05  messages=4320
debian.buildd.archive.2011.05  messages=4315
debian.buildd.archive.2011.05  messages=4312
debian.buildd.archive.2011.05  messages=4311
debian.buildd.archive.2011.05  messages=4309
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
debian.buildd.archive.2011.05  messages=4308
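What the 90-iteration loop above really implements is "repeat until the message count stops shrinking". The same idea as a run-until-fixpoint loop, sketched in Python (the `doveadm` calls are replaced by a simulated mailbox; all names here are mine):

```python
def run_until_stable(step, get_count):
    """Call step() repeatedly until get_count() stops changing --
    mimicking re-running `doveadm deduplicate` until the mailbox
    message count reaches a fixpoint."""
    previous = None
    current = get_count()
    while current != previous:
        step()
        previous, current = current, get_count()
    return current

# Simulated mailbox: each pass removes up to 100 duplicates, like the
# batched removal visible in the doveadm output above.
state = {'messages': 8094, 'duplicates': 3786}

def deduplicate_pass():
    removed = min(100, state['duplicates'])
    state['messages'] -= removed
    state['duplicates'] -= removed

print(run_until_stable(deduplicate_pass, lambda: state['messages']))  # 4308
```

This saves guessing an iteration count like `seq 1 90` up front: the loop simply stops once a pass removes nothing.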
Tags: internet, rant.
Finally moving the Weblog
30th December 2015

As of a few minutes ago, the old weblog is history. I've added redirects for all the entries to the new one; if you find any dead links please contact me so I can fix them.

Note that comments are gone. I'll try to import the existing comments into the new blog some time in the future. Not sure if I will ever add a comment function again (though chronicle seems to have some support for that).

Tags: hier, web.
Welcome Kabel Deutschland
17th September 2015

So, moving away from the awesome Core-Backbone-provided internet at home to Kabel Deutschland madness. Routes missing for several hours and no one even seems to notice; there's no mail address to properly report issues. Quality network.

traceroute to (, 30 hops max, 60 byte packets
 1 (  11.033 ms * *
 2 (  28.413 ms  74.792 ms  75.376 ms
 3 (  77.083 ms  77.671 ms  79.283 ms
 4 (  79.833 ms (  81.416 ms  83.348 ms
 5 (  87.592 ms  88.101 ms *
 6  * * *
 7  * * *
 8  * * *

Needless to say, of course, that it still works perfectly fine from the old network.

Tags: internet, rant.
What's wrong with the web? -- authentication
8th August 2015

The problem

One problem to solve in web authentication has always been having a single identity provider, so you don't have to remember which username (or email address) you used for that bugtracker or website three years ago -- and ideally tying everything to one login. Five years ago this problem seemed basically solved. There was OpenID, and while it may not have been great, it worked. You could run your own provider, your institution (university, company, FOSS project, ...) could run one, and you could use your university-provided ID for all university stuff.

Today's state

Looking at the problem again today, the situation seems to have changed. For the worse. A lot. People are actively removing OpenID support. There seemed to be a replacement with, at least, proper design goals: Mozilla's Persona. However this seems to be a dead end; almost no-one actually supports it.

Then there is what people call OAuth2. However there does not seem to be such a thing as OAuth2 at all, at least not for logging into websites. Phabricator, for example, supports 12 different OAuth2 systems: Google, Facebook, Twitter, Amazon, GitHub and a whole bunch of other services -- each with a different implementation in the webapp, of course. And of course you cannot just have your university/company/... provide an OAuth2 service for you to use -- you would need to write yet another adapter on the (foreign) website to talk to your implementation and your provider.

And the strange thing: people still seem to consider OAuth2 a replacement for OpenID, while it does not even provide the core functionality of the older system. Plus, there does not seem to be any awareness of this at all.

Other features

Now of course, OpenID is not (and never was) the ultimate answer to the web authentication problem. The most obvious problem is user tracking: your identity provider sees every website you log into, sees when you log into it, and could even log into that website with your credentials.

Of course, this problem is fully inherited by OAuth2. And in contrast to OpenID, you can no longer run your own provider -- one you can fully trust and which already knows about your surfing habits (because it is actually you). Mozilla's Persona might have solved that, or at least intended to. But, again, Persona seems quite dead.

Tags: oauth, openid, web.
Systemd pitfalls
7th August 2015

logind hangs

If you just updated systemd and ssh to that host seems to hang, that's just a known bug (Debian Bug #770135). Don't panic. Wait for the logind timeout and restart logind.

restart and stop;start

One thing that confused me several times, and still confuses people, is that systemctl restart does more than systemctl stop ; systemctl start. You will notice the difference once you have a failed service: a restart will try to start the service again, while both stop and start will just ignore it. Rumor has it this has changed post-jessie, however.

sysvinit-wrapped services and stop

While there are certainly bugs with sysvinit services in general (I found myself several times without a local resolver because unbound failed to start; I haven't managed to debug this further), the stop behavior of wrapped services is just broken: systemctl stop will block until the sysv initscript has finished, and will even note the result of the action in its state. However, systemctl returns with exit code 0 and prints nothing to stdout/stderr. This has been reported as Debian Bug #792045.

zsh helper

I found the following zshrc snippet quite helpful in dealing with unreported systemctl failures. On root shells it displays a list of failed services as part of the prompt. This gives proper feedback on whether your systemctl stop failed; it also gives feedback if you still have type=simple services and if a sysv-init script or wrapper is broken.

# requires: setopt prompt_subst
precmd () {
    if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
    then
        systemd_failed="`systemctl --state=failed | grep failed | cut -d \  -f 2 | tr '\n' ' '`"
    fi
}

if [[ $UID == 0 && $+commands[systemctl] != 0 ]]
then
    PROMPT=$'%{$fg[red]%}>> $systemd_failed%{$reset_color%}\n'$PROMPT
fi


zsh completion

Speaking of zsh, there's one problem that bothers me a lot and I don't have any solution for. Tab-completing the service name for service is blazing fast. Tab-completing the service name for systemctl restart takes ages. People traced it down to truckloads of dbus communication during completion, but no further fix is known (to me).

type=simple services

As described at length by Lucas Nussbaum, type=simple services are actively harmful. Proper type=forking daemons are strictly superior (they provide feedback about finished initialization and its success) and type=notify services are so simple that there's no excuse for not using them, even for private one-off hacks. Even if your language doesn't provide libsystemd-daemon bindings:

(defun sd-notify (event-string)
  (let ((socket (make-instance 'sb-bsd-sockets:local-socket :type :datagram))
        (name (posix-getenv "NOTIFY_SOCKET"))
        (bufferlen (length event-string)))
    (when name
      (sb-bsd-sockets:socket-connect socket name)
      (sb-bsd-sockets:socket-send socket event-string bufferlen))))

This is a stable API guaranteed to not break in the future and implemented in less than ten lines of code with just basic socket functions. And if your language has support it becomes actually trivial:

try:
    import systemd.daemon
    systemd.daemon.notify('READY=1')
except ImportError:
    pass  # python-systemd not installed: behave like a plain type=simple service

Note that in both cases there is no drawback at all on systemd-free setups. It has the overhead of checking the process' environment for NOTIFY_SOCKET or for the systemd package and behaves like a simple service otherwise.
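For comparison, the raw-socket trick from the Lisp snippet translated to Python -- a sketch using nothing but the standard library (no python-systemd bindings needed; the function name mirrors the Lisp one):

```python
import os
import socket

def sd_notify(event_string):
    """Send a state string like "READY=1" to systemd's notification
    socket, if one was passed via the environment.  On systemd-free
    setups NOTIFY_SOCKET is unset and this is a no-op."""
    name = os.environ.get('NOTIFY_SOCKET')
    if not name:
        return False
    if name.startswith('@'):           # abstract-namespace socket (Linux)
        name = '\0' + name[1:]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.connect(name)
        sock.send(event_string.encode())
    finally:
        sock.close()
    return True
```

Call `sd_notify('READY=1')` once initialization has actually finished; everything before that point still counts as "starting" from systemd's perspective.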

Actually, the idea of separating the technical aspect (daemonizing) from the semantic aspect of signaling "initialization finished, everything's fine" is a pretty good one, and hopefully has the potential to reduce the number of services that signal "everything's fine" too early. Given the API, it could even be ported to non-systemd init systems easily.

Tags: debian, foss, linux.
unbreaking tt-rss
6th August 2015

Tiny Tiny RSS has some nice failure modes. And the upstream support forums aren't really helpful: when you search for your current problem, chances are there is exactly one mention of it on the web, in the forum, and the only thing happening there is people making fun of the reporter.

Anyway. This installation has seen lots of error messages from the updater in the last several months:

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/ on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/ on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/ on line 108

Warning: Fatal error, unknown preferences key: ALLOW_DUPLICATE_POSTS (owner: 3) in /srv/www/ on line 108
And, more recently, the Android app stopped working with ERROR: JSON Parse failed. Turns out both things are related.

The first thing I noticed was that changing preferences in the web panel stopped working until you use the Reset to Defaults option and then change the settings. Plugging wireshark in between showed what was going on (note: the API was displayed as enabled in Preferences/Preferences):

HTTP/1.1 200 OK
Server: nginx/1.8.0
Date: Thu, 06 Aug 2015 11:00:31 GMT
Content-Type: text/json
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/5.4.43
Content-Language: auto
Set-Cookie: [...]
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Api-Content-Length: 234


Warning: Fatal error, unknown preferences key: ENABLE_API_ACCESS (owner: 2) in /srv/www/ on line 108
{"seq":0,"status":1,"content":{"error":"API_DISABLED"}} 0

The solution for fixing the Android app (and the logspam along the way) seems to be to reset the preferences and then configure tt-rss again (in the webapp, not in the Android app!). This silences the tt-rss update_daemon as well, yay! One last thing: is there someone out there who wants to explain to me how to fix

Fatal error: Query INSERT INTO ttrss_enclosures
                                                        (content_url, content_type, title, duration, post_id, width, height) VALUES
                                                        ('', '', '', '', '0', 0, 0) failed: ERROR:  insert or update on table "ttrss_enclosures" violates foreign key constraint "ttrss_enclosures_post_id_fkey"
DETAIL:  Key (post_id)=(0) is not present in table "ttrss_entries". in /srv/www/ on line 46


Tags: web.
Export org snippets to HTML
16th July 2015

Mostly a mental note, as I've now reinvented this for the second time. If you just quickly want to share some org-mode notes with non-Emacs-users, the built-in HTML export comes in handy. However it has one problem: all source syntax highlighting is derived from your current theme. Which of course is a bad idea if your editor has a dark background (say Emacs.reverseVideo: on). The same goes if your terminal's background color is dark.

Running Emacs in batch mode and still getting colorful code formatting seems to be an unsolved problem. All that can be found on the Internet suggests adding a dark background to your HTML export (at least to the code blocks). Or maybe use an external style-sheet. Both not exactly the thing if you just want to scp snippets of HTML somewhere to share. However there's a working hack:

#!/usr/bin/make -f

%.html: %.org
	xvfb-run urxvt +rv -e emacs -nw --visit $< --funcall org-html-export-to-html --kill >/dev/null

So: use a terminal you can easily force into light-background mode (like urxvt +rv) so that emacs -nw runs with a light background, and wrap the whole thing in xvfb-run so you can run it properly over ssh (and don't get annoying windows popping up and disappearing again when typing make).

Tags: emacs, web.
Backup strategy
27th June 2014

I've been working on my backup strategy for the notebook recently. The idea is to have full backups every month and incremental backups in between, as fine-grained as possible. As it's a mobile device, there's no point in time where it is guaranteed to be up, connected, and within reach of the backup server.

As I'm running Debian GNU/kFreeBSD on it, using ZFS and specifically zfs send comes quite naturally. I'm now generating a new file system snapshot every day (if the notebook happens to be online during that day) using cron.

@daily zfs snapshot base/root@`date -I`
@daily zfs snapshot base/home@`date -I`
@reboot zfs snapshot base/root@`date -I`
@reboot zfs snapshot base/home@`date -I`

When connected to the home network, I synchronize all incrementals that are not yet on the backup server. This uses zfs send together with gpg to encrypt the data, and then pushes it off to some sftp storage. For the first snapshot of every month a full backup is created. As there doesn't seem to be a way to merge zfs send streams without importing everything into a zfs pool, I create additional incremental streams against the first snapshot of the previous month, so I'm able to delete older full backups and daily snapshots while still keeping coarse-grained backups for a longer period of time.
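The full-vs-incremental decision boils down to comparing the `YYYY-MM` prefix of the ISO-dated snapshot names: same month as the previous snapshot means a small incremental, a new month means a fresh full stream plus a month-to-month incremental. A minimal sketch of just that classification (the function name is mine):

```python
def classify(snapshots):
    """Given sorted ISO-dated snapshot names, return what happens for
    each: 'initial' (first snapshot ever), 'incremental' (same month
    as the previous snapshot), or 'full' (first snapshot of a new
    month, which also gets a month-to-month incremental)."""
    plan = []
    last = ''
    for snap in snapshots:
        if last == '':
            plan.append((snap, 'initial'))
        elif snap[:7] == last[:7]:     # same YYYY-MM prefix
            plan.append((snap, 'incremental'))
        else:
            plan.append((snap, 'full'))
        last = snap
    return plan

print(classify(['2014-05-30', '2014-05-31', '2014-06-01']))
# -> [('2014-05-30', 'initial'), ('2014-05-31', 'incremental'), ('2014-06-01', 'full')]
```

Slicing `[:7]` works because `date -I` output is fixed-width `YYYY-MM-DD`, so the first seven characters are exactly the year and month.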

# -*- coding: utf-8 -*-

# Config
SFTP_HOST = 'backup.example.org'  # placeholder -- fill in the backup server's hostname
SFTP_DIR  = '/srv/backup/mitoraj'
SFTP_USER = 'root'
ZPOOL     = 'base'
GPGUSER   = '9FED5C6CE206B70A585770CA965522B9D49AE731'

import subprocess
import os.path
import sys
import paramiko

term = {
    'green':  "\033[0;32m",
    'red':    "\033[0;31m",
    'yellow': "\033[0;33m",
    'purple': "\033[0;35m",
    'none':   "\033[0m",
}

sftp = None

def print_colored(data, color):
    sys.stdout.write(term[color] + data + term['none'] + '\n')

def postprocess_datasets(datasets):
    devices = set([entry.split('@')[0] for entry in datasets])

    result = dict()
    for device in devices:
        result[device] = sorted([ entry.split('@')[1] for entry in datasets
                                    if entry.startswith(device) ])

    return result

def sftp_connect():
    global sftp

    host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
    hostkeytype = host_keys[SFTP_HOST].keys()[0]
    hostkey = host_keys[SFTP_HOST][hostkeytype]

    agent = paramiko.Agent()
    transport = paramiko.Transport((SFTP_HOST, 22))
    transport.connect(hostkey=hostkey)

    for key in agent.get_keys():
        try:
            transport.auth_publickey(SFTP_USER, key)
            break
        except paramiko.SSHException:
            continue

    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.chdir(SFTP_DIR)

def sftp_send(dataset, reference=None):
    zfscommand = ['sudo', 'zfs', 'send', '%s/%s' % (ZPOOL, dataset)]
    if reference is not None:
        zfscommand = zfscommand + ['-i', reference]

    zfs = subprocess.Popen(zfscommand, stdout=subprocess.PIPE)

    gpgcommand = [ 'gpg', '--batch', '--compress-algo', 'ZLIB',
                   '--sign', '--encrypt', '--recipient', GPGUSER ]
    gpg = subprocess.Popen(gpgcommand, stdout=subprocess.PIPE,
                           stdin=zfs.stdout)

    if gpg.returncode not in [None, 0]:
        print_colored("Error running gpg", 'red')
        return

    if reference is None:
        filename = '%s.full.zfs.gpg' % dataset
    else:
        filename = '%s.from.%s.zfs.gpg' % (dataset, reference)

    with, 'w') as remotefile:
        while True:
            junk =*1024)
            if len(junk) == 0:
                break
            remotefile.write(junk)
        print_colored(" DONE", 'green')

def syncronize(local_datasets, remote_datasets):
    for device in local_datasets.keys():
        current = ""
        for dataset in local_datasets[device]:
            last = current
            current = dataset

            if device in remote_datasets:
                if dataset in remote_datasets[device]:
                    print_colored("%s@%s -- found on remote server" % (device, dataset), 'yellow')
                    continue

            if last == '':
                print_colored("Initial syncronization for device %s" % device, 'green')
                sftp_send("%s@%s" % (device, dataset))
                lastmonth = dataset
            elif last[:7] == dataset[:7]:
                print_colored("%s@%s -- incremental backup (reference: %s)" %
                              (device, dataset, last), 'green')
                sftp_send("%s@%s" % (device, dataset), last)
            else:
                print_colored("%s@%s -- full backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset))
                print_colored("%s@%s -- doing incremental backup" % (device, dataset), 'green')
                sftp_send("%s@%s" % (device, dataset), lastmonth)
                lastmonth = dataset

def get_remote_datasets():
    datasets = sftp.listdir()
    datasets = filter(lambda x: '@' in x, datasets)

    datasets = [ entry.split('.')[0] for entry in datasets ]

    return postprocess_datasets(datasets)

def get_local_datasets():
    datasets = subprocess.check_output(['sudo', 'zfs', 'list', '-t', 'snapshot', '-H', '-o', 'name'])
    datasets = datasets.strip().split('\n')

    datasets = [ entry[5:] for entry in datasets ]

    return postprocess_datasets(datasets)

def main():
    sftp_connect()
    syncronize(get_local_datasets(), get_remote_datasets())

if __name__ == '__main__':
    main()
Rumor has it btrfs has gained functionality similar to zfs send, so maybe I'll be able to extend this code and use it on my Linux nodes some day (after migrating them to btrfs for a start).

Tags: foss, gnupg, kfreebsd.
pass xdotool dmenu
27th June 2014

I've written a small dmenu-based script which lets me select passwords from my pass password manager and have them typed in via xdotool. This completely bypasses the clipboard (which people distrust for a reason). As I've been asked about the script a few times in the past, here it is. Feel free to copy it; any suggestions welcome.


list_passwords() {
	passwords=( ~/.password-store/**/*.gpg ~/.password-store/*.gpg )
	for password in "${passwords[@]}"
	do
		filename="${password#$HOME/.password-store/}"
		filename="${filename%.gpg}"
		echo $filename
	done
}

xdotool_command() {
	echo -n "type "
	pass "$1"
}

selected_password="$(list_passwords 2>/dev/null | dmenu)"
if [ -n "$selected_password" ]
then
	xdotool_command "$selected_password" | xdotool -
fi
Tags: foss.

RSS feed

Created by Chronicle v4.6