Noah Meyerhans

The weblog

Building OpenWRT With Docker

I’ve run OpenWRT on my home router for a long time, and these days I maintain a couple of packages for the project. In order to make the most efficient use of the hardware resources on my router, I run a custom build of the OpenWRT firmware with some default features removed and others added. For example, I install bind and ipsec-tools, while I disable the web UI in order to save space.

There are quite a few packages required for the OpenWRT build process. I don’t necessarily want all of these packages installed on my main machine, nor do I want to maintain a VM for the build environment. So I investigated using Docker for this.

Starting from a base jessie image, which I created using the Docker debootstrap wrapper, the first step was to construct a Dockerfile containing instructions on how to set up the build environment and create a non-root user to perform the build:

FROM jessie:latest
MAINTAINER Noah Meyerhans <frodo@morgul.net>

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install \
asciidoc bash bc binutils bzip2 fastjar flex git-core g++ gcc \
util-linux gawk libgtk2.0-dev intltool jikespg zlib1g-dev make \
genisoimage libncurses5-dev libssl-dev patch perl-modules \
python2.7-dev rsync ruby sdcc unzip wget gettext xsltproc \
libboost1.55-dev libxml-parser-perl libusb-dev bin86 bcc sharutils \
subversion

RUN adduser --disabled-password --uid 1000 --gecos "Docker Builder,,," builder

And we generate a docker image based on this Dockerfile per the docker build documentation. At this point, we’ve got a basic image that does what we want. To initialize the build environment (download package sources, etc), I might run:

docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt sh -c "cd /src/openwrt/openwrt && scripts/feeds update -a"

Or configure the system:

docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt menuconfig

And finally, build the OpenWRT image itself:

docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt -j3

The -v ~/src/openwrt:/src/openwrt flag tells docker to bind mount my ~/src/openwrt directory (which I’d previously cloned using git) to /src/openwrt inside the running container. Without this, one might be tempted to clone the git repo directly into the container at runtime, but changes to non-bind-mounted filesystems are lost when the container terminates. That could be suitable for an autobuild environment, in which the sources are cloned at the start of the build and any generated artifacts are archived externally at the end, but it isn’t suitable for a dev environment where I might be making and testing small changes at a relatively high frequency.

The -u builder flag tells docker to run the given commands as the builder user inside the container. Recall that builder was created with UID 1000 in the Dockerfile. Since I’m storing the source and artifacts in a bind-mounted directory, all saved files will be created with this UID. Since UID 1000 happens to be my UID on my laptop, this is fine: any files created by builder inside the container will be owned by me outside the container. However, this container should not have to rely on a user with a given UID running it! I’m not sure what the right way to approach this problem is within Docker. It may be that someone using my image should create their own derivative image that creates a user with the appropriate UID (creating such a derivative image is a cheap operation in Docker). Alternatively, whatever init process the container runs could start as root, add a new user with a specific UID, and execute the build commands as that new user. Neither of these seems as clean as it could be, though.
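The derivative-image approach might look something like this (a sketch only; the 1234 UID is just an example of a consumer’s host UID, not anything the image provides):

```dockerfile
FROM jessie/openwrt
# Re-create the builder user with the UID of whoever will run the container
# (1234 here is a placeholder for the consumer's own host UID).
RUN deluser builder && \
    adduser --disabled-password --uid 1234 --gecos "Docker Builder,,," builder
```

Since image layers are shared, the derived image adds almost nothing beyond the one RUN layer.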

In general, Docker seems quite useful for such a build environment. It’s easy to set up, and it makes it very easy to generate and share a common collection of packages and configuration. Because images are self-contained, I can reclaim a bunch of disk space by simply executing “docker rmi”.

Spamassassin Updates

If you’re running Spamassassin on Debian or Ubuntu, have you enabled automatic rule updates? If not, why not? If possible, you should enable this feature. It should be as simple as setting "CRON=1" in /etc/default/spamassassin. If you choose not to enable this feature, I’d really like to hear why. In particular, I’m thinking about changing the default behavior of the Spamassassin packages such that automatic rule updates are enabled, and I’d like to know if (and why) anybody opposes this.

Spamassassin hasn’t been providing rules as part of the upstream package for some time. In Debian, we include a snapshot of the ruleset from an essentially arbitrary point in time in our packages. We do this so Spamassassin will work “out of the box” on Debian systems. People who install Spamassassin from source must download rules using Spamassassin’s updates channel. The typical way to use this service is to have cron or something similar periodically check for rule changes. This allows the anti-spam community to quickly adapt to changes in spammer tactics, and it allows you to actually benefit from their work by taking advantage of their newer, presumably more accurate, rules. It also allows for quick reaction to issues such as those described in bugs 738872 and 774768.
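Under the hood, the update mechanism is just sa-update run periodically. The Debian package’s daily cron job (enabled by CRON=1) does roughly the following, with more care around locking and error handling; this crontab entry is a sketch, not the shipped script:

```
# Fetch new rules nightly; sa-update exits 0 only when new rules were
# actually downloaded and installed, so the daemon reload is conditional.
30 1 * * * root sa-update && service spamassassin reload
```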

If we do change the default, there are a couple of possible approaches we could take. The simplest would be to change the default value of the CRON variable in /etc/default/spamassassin. Perhaps a cleaner approach would be to provide a “spamassassin-autoupdates” package that would simply provide the cron job and a small wrapper program to perform the updates. The Spamassassin package would then specify a Recommends relationship with this package, thus providing the enabled-by-default behavior while still offering a clear and simple mechanism to disable it.

Debconf by Train

Today is the first time I’ve taken an interstate train trip in something like 15 years. A few things about the trip were pleasantly surprising, though most of them will come as no surprise to seasoned rail travelers:

  1. Less time wasted in security theater at the station prior to departure
  2. On-time departure
  3. More comfortable seats than on a plane or bus
  4. Quiet
  5. Permissive free wifi

Wifi was the biggest surprise. Not that it existed, since we’re living in the future and wifi is expected everywhere. It’s IPv4 only and stuck behind a NAT, which isn’t a big surprise, but it is reasonably open. There isn’t any port filtering of non-web TCP ports, and even non-TCP protocols are allowed out. Even my aiccu IPv6 tunnel worked fine from the train, although I did experience some weird behavior with it.

I haven’t used aiccu much in quite a while, since I have a native IPv6 connection at home, but it can be convenient while traveling. I’m still trying to figure out what happened today, though. The first symptoms were that, although I could ping IPv6 hosts, I could not actually log in via IMAP or ssh. Tcpdump showed all the standard symptoms of a PMTU blackhole: small packets flow fine, large ones are dropped. The interface MTU is set to 1280, which is the minimum MTU for IPv6, and any path on the internet is expected to handle packets of at least that size. Experimentation via ping6 reveals that the largest payload size I can successfully exchange with a peer is 820 bytes. Add 8 bytes for the ICMPv6 header, for 828 bytes of IPv6 payload, plus 40 bytes for the IPv6 header, and you get an 868-byte packet, which is well under what should be the MTU for this path.
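The arithmetic from the ping6 experiment can be checked mechanically (pure shell arithmetic, reproducing the numbers above):

```shell
# Reconstruct the packet sizes from the ping6 probing described above.
payload=820                    # largest ping6 payload that got through
icmp6_hdr=8; ipv6_hdr=40; tcp_hdr=20

# Full size of the largest IPv6 packet that made it through the tunnel.
pkt=$((payload + icmp6_hdr + ipv6_hdr))

# Conservative TCP MSS derived from the working payload size.
mss=$((payload - ipv6_hdr - tcp_hdr))

echo "largest working packet: $pkt bytes, MSS: $mss"
```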

I’ve worked around this problem with an ip6tables rule to rewrite the MSS on outgoing SYN packets to 760 bytes, which should leave room for the 40-byte IPv6 header and the 20-byte TCP header, with some margin to spare:

sudo ip6tables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 760

It is working well and will allow me to publish this from the train, which I’d otherwise have been unable to do. But… weird.

Bianchi Rebuild

Infusing an old bike with new life

I bought this early-to-mid-1990s Bianchi Brava from my officemate at MIT in 2005 or 2006. For some time before I bought it, it had been sitting unused in our office and had fallen into disrepair. When I bought it, I promptly converted it to a singlespeed with the help of Tyler from Paramount Bicycle Repair in Somerville. I rode in that configuration for a couple years, then bought bull-horn bars, scrapped the brakes, and converted it to a fixed-gear, which I rode for another couple of years. When I moved to California in 2010, I bought a road bike so the big hills surrounding the South Bay Area would be accessible, and didn’t ride the Bianchi very much. In Seattle, I’ve been commuting on a newer (but not particularly fun) Raleigh One-Way singlespeed. The Bianchi has once again been left to collect dust. Considering how much fun I had on this bike for several years in Somerville, I always felt a little sad about neglecting it.

Finally, a month or so ago, I decided I’d convert the bike back to a geared road bike. That’s a useful thing to have around here, considering how hilly Seattle is. I’d been tossing around this idea for a while, and finally decided it was time to go ahead and do it. This time it took a couple of trips to Recycled Cycles to get the process started, since I was going to build my own wheels. The first trip had me coming home with the following:

Lots of wheel parts

And I get to work on the wheel. This was my first real wheel build. I had rebuilt an old mountain bike wheel a while back, and that seemed to go reasonably well, but I never actually rode on it, so it doesn’t really count. Wheelbuilding has always seemed like something of an arcane art to me. It doesn’t really compare to anything else, so you can’t really prepare for it except by actually doing it, which makes it seem rather daunting. Fortunately, as it has done so many times in the past, Sheldon Brown’s website proved a valuable source of information.

After a short time (less than an hour, for sure), the wheel went from the above collection of parts to the following, roughly 80% laced:

Lacing a wheel

And shortly after that, it’s a wheel:

Brand new wheel

In the truing stand, with a tire mounted, making any last adjustments:

Final adjustments

Following the wheel build, there were still several tasks to complete. I wanted to switch back to drop bars from the bull-horn bar that I used when it was a fixed-gear. And I needed brakes, and of course shifters and a derailleur. That generally went smoothly. Aside from the cables and rear tire, everything was either already lying around the house or bought used at Recycled Cycles. The only problem that came up was that I didn’t buy long enough pieces of brake cable housing, and ended up having to buy more. The derailleur is a long-cage mountain bike derailleur, which is a bit weird on this bike. If I find a used Shimano 105 derailleur I’ll probably swap it out, but in general it works as is.

And this is what it looks like!

The finished product!

And in Other News…

I’ve aborted several attempts recently to get something of interest posted. For whatever reason, none of them stuck. So here’s something with fewer expectations attached to it: a collection of random updates.

Debian packaging

Spamassassin

The Spamassassin project released version 3.4.0, a major update over the 3.3.2 branch, after nearly two-and-a-half years in development. 3.4.0-1 is currently in unstable and testing, and seems to be working reasonably well for me. I’ve got another release in the works that will provide a systemd unit file (bug 716887) and also switch to using more appropriate mechanisms than ‘su’ for running cron-based maintenance tasks as the debian-spamd user.

Ipsec-tools

In response to Matt Grant’s call for help for ipsec-tools packaging, I’ve become an inaugural member of the pkg-ipsec-tools team and made a couple of uploads. The upstream release has been incremented from 0.8.0 to 0.8.2 to pick up some small bug fixes.

Cycling

With two races in the books, and another coming up in two weeks, the mountain biking season is in full swing. The first race (Stottlemeyer) was a muddy mess of mechanical carnage, while the second (Echo Valley) was dusty, dry, and sunbaked. I dealt with mechanical issues throughout the first race, and that was reflected in my results. My results at Echo Valley were pretty comparable to last year’s: my overall time was slightly worse, but the winning times were higher as well. I suspect this was due to the heat, which was really quite severe.

Two weeks from now is the Test of Metal in British Columbia, which is a really popular race and one of the high points of the summer’s race calendar. Last year was the first time I participated in this race, and I’ve been looking forward to this year’s edition ever since.

Parenthood!

Sarah & I are expecting our first child in early October. Thus far day to day life hasn’t changed a lot for me, but it certainly has for Sarah. Working in the garden, lifting things, and getting around by bike have all become pretty uncomfortable for her. Even sleeping has become a challenge because of difficulty getting comfortable. We’ve been brainstorming names, from serious to off-the-wall, but haven’t found anything worthy of going on any kind of short list. Any suggestions?

DNSSEC

I’ve enabled DNSSEC for several of my domain names, including this one. I had originally planned on putting together a blog post (or series) about my experiences, but Martín Ferrari beat me to it with a couple of good posts. 1 2

I’ve found the Firefox DNSSEC Validator extension to be really interesting for visually identifying which of the web sites I visit regularly are signing their zones.

Conferences

I’m going to debconf and LISA! Are you?

Starving Trolls

“Netiquette” is an ancient term, dating back to the earliest days of the internet. These days, one might argue that it’s no longer relevant, or that there are so many different definitions that it’s been rendered meaningless. However, one particular aspect of it endures: “Don’t feed the trolls!” A recent thread on the debian-security mailing list provided an amazingly effective demonstration of this principle. A certain pseudonymous individual made multiple posts in the thread that exhibited classic troll behavior (no meaningful contribution to the discussion, inflammatory comments, etc). On multiple occasions I was tempted to reply to some of these posts (after all, someone was wrong on the internet!), but I held off. It would do no good to engage this person, and would likely only serve to devolve the discussion into a flame war. I expected the flames to fly anyway, since surely somebody would respond to this guy; it was only a matter of time. But the time never came. The discussion remained civil and productive, and the troll was left to starve.

Hardware Hacking Like a 21st Century Sixth Grader

I’m not much of an electronics guy. If I had been exposed to it as a kid, I might have gotten into it, but there wasn’t much going on back in those days. Certainly not in central Maine, anyway. By the time I was really exposed to it in college, much of my curiosity and excitement for such things had been crushed, so I avoided the opportunity to learn when I could have. In my professional career, I’ve rarely had the need or opportunity to touch electronics. These days, however, a lot of people are doing some neat things to try to introduce today’s bright and curious kids to this world. Raspberry Pi and Arduino are two hugely popular open platforms for beginning electronics hackers to explore, experiment, and build. So, channelling my inner curiosity, I delve into “physical computing”.

Project 1: Blink the Fibonacci Sequence.

The standard Arduino demos begin by blinking an LED attached to one of the board’s digital output pins. Taking this one step further, I decide that a monotonously blinking LED is boring. Instead, I craft a simple Fibonacci blinker: flash the LED for the current value in the sequence, then pause. Repeat for the next value, and so on.
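The loop logic can be sketched in shell (the real version is an Arduino sketch toggling a GPIO pin; printing stands in for blinking here):

```shell
# Sketch of the blink loop: flash the LED a times, where a walks the
# Fibonacci sequence. Here we just print instead of driving a pin.
a=1; b=1
for step in 1 2 3 4 5; do
  echo "blink $a times, then pause"
  next=$((a + b)); a=$b; b=$next
done
```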

Project 2: Lights on, Lights off.

Basically just one of the stock demos: pressing the pushbutton changes the internal state of the Arduino program and toggles the light.

Project 2: Dial-a-scale

Play the C-major scale based on the position of a dial. Divide the full range of the potentiometer into eight steps, one for each note. Play a note based on the current position of the potentiometer; as it moves to another position, change the tone we play.
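The note-selection logic boils down to a single division, sketched here in shell arithmetic (the 0–1023 range is an assumption based on the Arduino’s 10-bit ADC; the real project does this in Arduino code):

```shell
# Map a 10-bit ADC reading (0-1023) onto one of the eight notes (0-7)
# of the scale. reading=700 is just an example dial position.
reading=700
note=$((reading * 8 / 1024))
echo "note index: $note"
```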

Project 2.5: Photoscale

Unplug the potentiometer from the breadboard and replace it with a photoresistor. The output from the photoresistor depends on the amount of light it sees, so casting shadows or shining lights on the component changes the note being played.

Project 3: Keyboards in C

Similar to the previous project, but using a pressure sensor. Divide it into eight segments corresponding to the eight notes in the scale. Play a note depending on where the pressure is.

Linux Fest Northwest

Linux Fest Northwest has been going on for 14 years. For most of those years, I’ve lived in the wrong timezone for it to be worth attending. A couple weeks ago somebody posted to a Seattle area Linux forum asking if anybody else was going to this year’s edition, taking place on the last weekend in April. My memory was jarred, and my excitement kindled. I used to make it to conferences with some regularity, but it’s been several years. This year, Linux Fest Northwest would be easy to get to, and there was no way I was going to miss it.

The night before the fest, it occurred to me that I hadn’t even checked to see whether any of my friends would be there. It turns out that some would be! It ended up being a great opportunity to catch up with bkero from Mozilla and Saint Aardvark over good food and good beer!

I made it to a few really interesting sessions. Some notable details:

  • The ACLU and EFF panel was informative, despite being down by a third of its planned size due to an illness. It was a decent survey of the current crop of frightening legal and legislative concerns. Mostly it focused on privacy (and threats such as CISPA), but also touched a bit on censorship and related threats such as the currently defunct SOPA. I learned a couple of interesting things about the Fifth Amendment that I’m a little surprised I hadn’t heard before:

    1. The Fifth Amendment doesn’t apply in civil litigation, only criminal litigation.
    2. The Fifth Amendment only applies to self incrimination. That is, you can’t invoke your fifth amendment rights to avoid revealing information that directly links somebody else to a crime.

    Neither of these is particularly surprising, but they’re not things that most people think about.

  • Russell Pavlicek gave a good talk entitled “Saving an Open Source Project: Lessons Learned from Xen”, in which he examined the rise, fall, and phoenix-like rise again of Xen. Through mismanagement on several levels, Xen went from being the virtualization technology for Linux to an afterthought. Many people, distributions, and organizations gave up on it completely. But starting about two years ago, there was a conscious effort made by what remained of the Xen community to correct past mistakes. This effort has met with good success, and Xen has a healthy community growing around it again. Russell shared some valuable wisdom and did so in a very fluid, dynamic, and approachable way. He was easily the best speaker I encountered over the weekend.

Beyond the talks, there was a lively game room and a vendor expo with a bunch of well known companies and organizations. The game room saw a lot of high-energy Armagetron competition along with slower paced games such as Widelands.

All in all, it was a great weekend and I hope to go back next year! Thanks go out to all the volunteers and organizers who made it happen, and to Bellingham Technical College for hosting the event!

Disqus – Discuss

Ok, so importing the existing posts from Posterous worked smoothly. That’s nice.

I’ve signed up for a Disqus account and enabled comments. I’m not sure how I feel about that. I don’t like generating “footprints” across a bunch of third-party sites as I navigate the web, and I take steps in my browser to avoid doing so. So requiring readers of my blog (all 3 of them) to download from a third party in order to see my blog, let alone actually comment on it, seems really hypocritical. Not only that, but my previous reliance on third parties like, oh, say, Posterous has not always ended well. The paranoid in me really objects to giving control over to a third party. If only they didn’t make it so damn easy!

Octopress Foo

Day one with Octopress was fairly productive. Or at least, I didn’t punt. There are a couple of things I’ve yet to figure out. Most glaring is comments. I don’t really want to rely on Disqus or another third party for comments, but that appears to be the accepted way of implementing commenting on Octopress blogs.

And I need to start coming up with my own layout/skin.