<!-- Feed: Noah Meyerhans (frodo+blog@morgul.net), /categories/debian/. Copyright Noah Meyerhans. -->
<h2>When You Could Hear Security Scans</h2>
<p><em>2021-12-29 (/2021/12/29/when-you-could-hear-security-scans/)</em></p>
<p>Have you ever wondered what a security probe of a computer <em>sounded
like</em>? I’d guess probably not, because on the face of it that doesn’t
make a whole lot of sense. But there was a time when I could very
clearly discern the sound of a computer being scanned. It sounded
like a small mechanical heart beat: <em>Click-click…
click-click… click-click…</em></p>
<p>Prior to 2010, I had a computer under my desk with what at the time
were not unheard-of properties: Its storage was based on a stack of
spinning metal platters (a now-antiquated device known as a “hard
drive”), and it had a publicly routable IPv4 address with an
unfiltered connection to the Internet. Naturally it ran Linux and an
ssh server. As was common in those days, service logging was handled
by a syslog daemon. The syslog daemon would sort log messages based
on various criteria and record them somewhere. In most simple
environments, “somewhere” was simply a
<a href="https://manpages.debian.org/bullseye/rsyslog/rsyslog.conf.5.en.html#Regular_file">file on local storage</a>.
When writing to a local file, syslog daemons can be optionally
configured to use the
<a href="https://manpages.debian.org/bullseye/manpages-dev/fsync.2.en.html"><code>fsync()</code></a>
system call to ensure that writes are flushed to disk. Practically
speaking, what this meant is that a page of disk-backed memory would
be written to the disk as soon as an event occurred that triggered a
log message. Because of potential performance implications, <code>fsync()</code>
was not typically enabled for most log files. However, due to the
more sensitive nature of authentication logs, it was often enabled for
<code>/var/log/auth.log</code>.</p>
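<p>In the classic sysklogd configuration syntax, which rsyslog still accepts, this sync behavior is controlled by whether the file path is prefixed with a “-”. A sketch, loosely based on the Debian defaults of that era (note that, if I recall correctly, modern rsyslog additionally requires <code>$ActionFileEnableSync on</code> for the sync to actually take effect):</p>

```
# Sketch of classic syslog.conf semantics: a leading "-" on the file
# path means "don't fsync() after every message".
auth,authpriv.*          /var/log/auth.log   # no "-": each event is flushed to disk
*.*;auth,authpriv.none   -/var/log/syslog    # "-": writes may sit in the page cache
```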
<p>In the first decade of the 2000s, there was a fairly unsophisticated
worm loose on the Internet that would
<a href="https://arstechnica.com/civis/viewtopic.php?t=500295">probe sshd</a>
with some
<a href="https://lin-web.clarkson.edu/~jmatthew/publications/leet08.pdf">common username/password combinations</a>.
The worm would pause for a second or so between login attempts, most
likely in an effort to avoid automated security responses. The effect
was that a system being probed by this worm would generate a disk write
every second, with a very distinct audible signature from the hard
drive.</p>
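<p>The audible beat had a textual counterpart in /var/log/auth.log. The log lines below are illustrative (host names, PIDs, and addresses are invented), but they show how plainly the pattern stood out; each such line triggered one synced write, and thus one click, from the drive:</p>

```shell
# Illustrative auth.log excerpt; all names and addresses are made up.
cat > /tmp/auth-sample.log <<'EOF'
Dec 29 09:00:01 host sshd[3141]: Failed password for invalid user admin from 198.51.100.7 port 4242 ssh2
Dec 29 09:00:02 host sshd[3142]: Failed password for invalid user test from 198.51.100.7 port 4243 ssh2
Dec 29 09:00:03 host sshd[3143]: Failed password for root from 198.51.100.7 port 4244 ssh2
EOF

# Each line above corresponds to one fsync()ed write, i.e. one audible click.
grep -c 'Failed password' /tmp/auth-sample.log   # prints 3
```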
<p>I think this situation is a fun demonstration of a side-channel data
leak. It’s primitive and doesn’t leak very much information, but it
was certainly enough to make some inference about the state of the
system in question. Of course, side-channel leakage issues have been
a concern for ages, but I like this one for its simplicity. It was
something that could be explained and demonstrated easily, even to
somebody with relatively limited understanding of “how computers
work”, unlike, for instance,
<a href="https://www.cse.wustl.edu/~roger/566S.s21/09065580.pdf">measuring electromagnetic emanations from CPU power management units</a>.</p>
<p>For a different take on the sounds of a computing infrastructure,
<a href="https://www.usenix.org/legacy/publications/library/proceedings/lisa2000/full_papers/gilfix/gilfix_html/index.html">Peep (The Network Auralizer)</a>
won an award at a USENIX conference long, long ago. I’d love to see a
modern deployment of such a system. I’m sure you could build
something for your cloud deployment using something like
<a href="https://aws.amazon.com/eventbridge/">AWS EventBridge</a> or
<a href="https://aws.amazon.com/sqs/">Amazon SQS</a> fairly easily.</p>
<p>For more on research into actual real-world side-channel attacks, you
can read
<a href="https://arxiv.org/abs/2103.14244">A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography</a>
or
<a href="https://arxiv.org/abs/1903.07703">A Survey of Electromagnetic Side-Channel Attacks and Discussion on their Case-Progressing Potential for Digital Forensics</a>.</p>
<h2>Debian STS: Short Term Support</h2>
<p><em>2020-10-27 (/2020/10/27/debian-sts-short-term-support/)</em></p>
<p>In another of my frequent late-night bouts with insomnia, I started thinking about the intersection of a number of different issues facing Debian today, both from a user point of view and a developer point of view.</p>
<p>Debian has a reputation for shipping “stale” software. Versions in the stable branch are often significantly behind the latest development upstream. Debian’s policy here has been that this is fine, our goal is to ship something stable, not something bleeding edge. Unofficially, our response to users is: If you need bleeding edge software, Debian may not be for you. Officially, we have no response to users who want fresher software.</p>
<p>Debian also has a problem with a <a href="https://www.theregister.com/2020/09/10/debian_project_address/">lack of manpower</a>. I believe that part of why we have a hard time attracting contributors is our reputation for stale software. It might be worth it for us to consider changes to our approach to releases.</p>
<h3 id="what-about-running-testing">What about running testing?</h3>
<p>People who want newer software often look to Debian’s testing branch as a possible solution. It’s tempting, as it’s a dynamically generated release based on unstable, so it should be quite current. In practice, it’s not at all uncommon to find people running testing, and in fact I’m running it right now on the ThinkPad on which this is being typed. However, testing comes with a glaring issue: a lack of timely security support. Security updates must still propagate through unstable, and this can take <a href="https://www.debian.org/security/faq#testing">some time</a>. They can be held up by dependencies, library transitions, or other factors. Nearly every list of “best practices for computer security” puts keeping software up-to-date at or near the top of the most important steps to take to safely use a networked computer. Debian’s testing branch makes this very difficult, especially when faced with a zero-day with potential for real-world exploitation.</p>
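<p>For what it’s worth, a common workaround for this gap is to track testing while keeping unstable available at a low pin priority, so that an individual fixed package can be pulled forward before it migrates. A sketch of the apt_preferences(5) setup (the exact priorities are a matter of taste):</p>

```
# /etc/apt/preferences.d/track-testing (sketch; assumes both testing
# and unstable are present in sources.list)
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 50
```

<p>With this in place, a specific fix can be installed with <code>apt install -t unstable somepackage</code> without the rest of the system following unstable.</p>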
<h3 id="what-about-stable-backports">What about stable-backports?</h3>
<p>Stable backports is both better and worse than testing. It’s better in that it allows you to run a system comprised mainly of packages from the stable branch, which receive updates from the security team in a timely manner. However, it’s worse in that the packages from the backports repository incur an additional delay. The expectation around backports is that a package migrates naturally from unstable to testing, and then requires a maintainer to upload a new package based on the version in testing specifically targeted at stable backports. The migration can potentially be bypassed, and we used to have a mechanism for announcing the availability of security updates for the stable backports archive, but it has gone unused for several years now. The <a href="https://backports.debian.org/Contribute/#index4h2">documentation</a> describes a workflow for posting security updates that involves creating a ticket in <a href="https://wiki.debian.org/rt.debian.org">Debian’s RT system</a>, which is going to be quite foreign to most people. News from mid 2019 <a href="https://lists.debian.org/debian-backports-announce/2019/07/msg00000.html">suggests that this process might change</a>, but nothing appears to have come of this in over a year, and we still haven’t seen a proper security advisory for stable backports in years.</p>
<h3 id="looking-to-lts-for-ideas">Looking to LTS for ideas</h3>
<p><a href="https://wiki.debian.org/LTS">The Long-Term Support project</a> is an “alternative” branch of Debian, maintained outside the normal stable release infrastructure. It’s stable, and expected to behave that way, but it’s not supported by the stable security team or release team. LTS provides a framework for providing security updates via targeted uploads by a team of interested individuals working outside the structure of the existing stable releases. This project seems to be quite active (how much of this is because at least some members are being paid?), and as of this writing has actually published more security advisories in the past month than the stable security team has published for the current stable branch. This is also interesting in that the software in LTS is quite old, first appearing in a Debian stable release in 2017.</p>
<p>LTS is particularly interesting here as it’s an example of an initiative within the Debian community taken specifically to address user needs. For some of our users, remaining on an old release is a perfectly valid thing for them to do, and we recognize this and support them in doing so.</p>
<h3 id="debian-short-term-support">Debian Short-Term Support</h3>
<p>So, what would it take to create an “LTS-like” initiative in the other direction? Instead of providing ongoing support for ancient versions of software that previously comprised a stable release, could we build a distribution branch based on something that hasn’t yet reached stable? What would that look like? How would it fit in the existing unstable→testing migration process? What impact would it have on the size of the archive? Would we want a rolling release, or discrete releases? If the latter, how many would we want between proper stable releases?</p>
<p>The <a href="https://security-tracker.debian.org/tracker/">security tracker</a> already tracks outstanding issues in unstable and testing, and can even show <a href="https://security-tracker.debian.org/tracker/status/dtsa-candidates">issues that have been fixed in unstable but haven’t yet propagated to testing</a>.</p>
<p>If we want a rolling release, maybe we could just open up the testing-security repository more broadly? There was once a <a href="https://wiki.debian.org/Teams/TestingSecurity">testing security team</a>, which IIRC was chartered to publish updated packages directly to testing-security, along with an associated security advisory. Based on the <a href="https://lists.debian.org/debian-testing-security-announce/">mailing list history</a>, that effort seems to have shut down around the time of the squeeze (Debian 6.0) release in early 2011. Would it be worth resurrecting it? We’ve probably got much of the infrastructure required in place already, since it previously existed.</p>
<p>Personally I’m not really a fan of a pure rolling release. I’d rather see a lightweight release. Maybe a snapshot of testing that gets just a date, not a Toy Story codename. Probably skip building a dedicated installer for it. Upgrade from stable or use a d-i snapshot from testing if needed. This mini release is supported until the next one comes out, maybe 6 or 8 months later. By supported, I mean that the “Short Term Release” team is responsible for it. They can upload security or other critical bug fixes directly to a dedicated repository. When the next STS snapshot is released, packages in the updates repository are either archived, if they’re a lower version than the one in the new mini release, or rebuilt against the new mini release and preserved.</p>
<p>Using some of the same mechanisms as the LTS release, we’d need</p>
<ol>
<li><p>Something to take the place of oldstable, that is the base release against which updates are released. This could be something that effectively maps to a date snapshot served by <a href="http://snapshot.debian.org/">http://snapshot.debian.org/</a>. (Snapshot itself could not currently handle the increased load, as I understand it, but conceptually it’s similar.)</p></li>
<li><p>Something to take the place of the <code>dist/updates</code> apt repository that hosts the packages that are updated.</p></li>
</ol>
<p>In theory, if the infrastructure could support those things, then we could in effect generate a mini release at any time based on a snapshot. I wonder if this could start as something totally unofficial; mirror an arbitrary testing snapshot and provide a place for interested people to publish package updates.</p>
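<p>In its most unofficial form, the “base release” side of this might amount to little more than an apt configuration pointing at a frozen snapshot (the timestamp below is illustrative, and the updates repository is entirely hypothetical):</p>

```
# Base: a fixed snapshot of testing via snapshot.debian.org.
# check-valid-until=no is needed because snapshot Release files expire.
deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20201027T000000Z/ testing main

# Hypothetical repository carrying the STS team's targeted fixes:
# deb https://sts.example.org/debian sts-updates main
```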
<h3 id="not-a-proposal-nor-a-criticism">Not a proposal, nor a criticism</h3>
<p>To be clear, I don’t really intend this as a proposal; it’s really half-baked. Maybe these ideas have already been considered and dismissed. I don’t know if people would be interested in working on such a project, and I’m not nearly familiar enough with the Debian archive tooling to even make a guess as to how hard it would be to implement much of it. I’m just posting some ideas that I came up with while pondering something that, from my perspective, is an area where Debian is clearly failing to meet the needs of some of our users. We know Debian is a popular and respected Linux distribution, and we know people value our stability. However, we also know that people like running Fedora and Ubuntu’s non-LTS releases. People like Arch Linux. Not just “end-users”, but also the people developing the software shipped by the distros themselves. There are a lot of potential contributors to Debian who are kept away by our unwillingness to provide a distro offering both fresh software and security support. I think that we could attract more people to the Debian community if we could provide a solution for these people, and that would ultimately be good for everybody.</p>
<p>Also, please don’t interpret this as being critical of the release team, the stable security team, or any other team or individual in Debian. I’m sharing this because I think there are opportunities for Debian to improve how we serve our users, not because I think anybody is doing anything wrong.</p>
<p>With all that said, though, let me know if you find the ideas interesting. If you think they’re crazy, you can tell me that, too. I’ll probably agree with you.</p>
<h2>Setting environment variables for gnome-session</h2>
<p><em>2020-07-07 (/2020/07/07/setting-environment-variables-for-gnome-session/)</em></p>
<p>Am I missing something obvious? When did this get so hard?</p>
<p>In the old days, you configured your desktop session on a Linux system by editing the .xsession file in your home directory. The display manager (login screen) would invoke the system-wide xsession script, which would either defer to your personal .xsession script or set up a standard desktop environment. You could put whatever you want in the .xsession script, and it would be executed. If you wanted a specific window manager, you’d run it from .xsession. Start emacs or a browser or an xterm or two? .xsession. It was pretty easy, and super flexible.</p>
<p>For the past 25 years or so, I’ve used X with an environment started via .xsession. Early on it was fvwm with some programs, then I replaced fvwm with <a href="http://www.windowmaker.org/">Window Maker</a> (before that was even its name!), then switched to <a href="https://kde.org/">KDE</a>. More recently (OK, like 10 years ago) I gradually replaced KDE with <a href="https://awesomewm.org/">awesome</a> and various custom widgets. Pretty much everything was based on a .xsession script, and that was fine. One particularly nice thing about it was that I could keep .xsession and any related helper programs in a git repository and manage changes over time.</p>
<p>More recently I decided to give Wayland and GNOME an honest look. This has mostly been fine, but everything I’ve been doing in .xsession is suddenly useless.
OK, fine, progress is good. I’ll just use whatever new mechanisms exist. How hard can it be?</p>
<p>OK, so here we go. I am running GNOME. This isn’t so bad. Alt+F2 brings up the “Run Command” dialog. It’s a different keystroke than what I’m used to, but I can adapt. (Obviously I can reconfigure the key binding, and maybe someday I will, but that’s not the point here.) I have some executables in ~/bin. Oops, the run command dialog can’t find them. No problem, I just need to update the PATH variable that it sees. Hmmm… So how does one do that, anyway? GNOME has a help system, but searching that doesn’t reveal anything. But that’s fine, maybe it’s inherited from the parent process. But there’s no xsession script equivalent, since this isn’t X anymore at all. The familiar stuff in /etc/X11/Xsession is no longer used. What’s the equivalent in Wayland? Turns out, there isn’t a shell script at all anymore, at least not in how Wayland and GNOME interact in Debian’s configuration, which seems fairly similar to how anybody else would set this up. The GNOME session runs from a systemd-managed user session.</p>
<p>Digging in to some web search results suggests that systemd provides <a href="https://www.freedesktop.org/software/systemd/man/environment.d.html">a mechanism</a> for setting some environment variables for services started by the user instance of the system. OK, so let’s create some files in ~/.config/environment.d and we should be good. Except no, this isn’t working. I can set some variables, but something is overriding PATH. I can create this file:</p>
<pre><code>$ cat ~/.config/environment.d/01_path.conf
USER_INITIAL_PATH=${PATH}
PATH=${HOME}/bin:${HOME}/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER_CUSTOM_PATH=${PATH}
</code></pre>
<p>After logging in, the “Run a command” dialog still doesn’t see my PATH. So I use Alt+F2 and <code>sh -c "env > /tmp/env"</code> to capture the environment, and this is what I see:</p>
<pre><code>USER_INITIAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games
USER_CUSTOM_PATH=/home/noahm/bin:/home/noahm/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
</code></pre>
<p>So, my environment.d file is there, and it’s getting looked at, but something else is clobbering my PATH later in the startup process. But what? Where? Why? The systemd docs don’t indicate that there’s anything special about PATH, and nothing in /lib/systemd/user-environment-generators/ seems to treat it specially. The string “PATH” doesn’t appear in /lib/systemd/user/ either. Looking for the specific value that’s getting assigned to PATH in /etc shows the only occurrence of it being in /etc/zsh/zshenv, so maybe that’s where it’s coming from? But it should only get set there if it’s otherwise unset or very minimally set. So I still have no idea where it’s coming from.</p>
<p>OK, so ignoring where my custom value is getting overridden, maybe what’s configured in /lib/systemd/user will point me in the right direction. <code>systemd --user status</code> suggests that the interesting part of my session is coming from gnome-shell-wayland.service. Can we use a standard systemd drop-in as documented in <a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html">systemd.unit(5)</a>? It turns out that we can. This file sets things up the way I want:</p>
<pre><code>$ cat .config/systemd/user/gnome-shell-wayland.service.d/path.conf
[Service]
Environment=PATH=%h/bin:%h/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
</code></pre>
<p>Is that right? It really doesn’t feel ideal to me. Systemd’s Environment directive can’t reference existing environment variables, and I can’t use conditionals to do things like add a directory to the PATH only if it exists, so it’s still a functional regression from what we had before. But at least it’s a text file, edited by hand, trackable in git, so that’s not too bad.</p>
<p>There are some people out there who hate systemd, and will cite this as an illustration of why. However, I’m not one of those people, and I very much like systemd as an init system. I’d be happy to throw away sysvinit scripts forever, but I’m not quite so happy with the state of .xsession’s replacements. Despite the similarities, I don’t think .xsession is entirely the same as SysV-style init scripts. The services running on a system are vastly more important than my personal .xsession, and systemd is far better at managing them than the pile of shell scripts used to set things up under sysvinit. Further, systemd the init system maintains compatibility with init scripts, so if you really want to keep using them, you can. As far as I can tell, though, systemd the user session manager does not seem to maintain compatibility with .xsession scripts, and that’s unfortunate.</p>
<p>I still haven’t figured out what was overriding the ~/.config/environment.d/ setting. Any ideas?</p>
<h2>Daily VM image builds are available from the cloud team</h2>
<p><em>2020-03-04 (/2020/03/04/daily-vm-image-builds-are-available-from-the-cloud-team/)</em></p>
<p>Did you know that the cloud team generates daily images for buster, bullseye, and sid? They’re available for download from <a href="https://cdimage.debian.org/images/cloud/">cdimage.debian.org</a> and are published to Amazon EC2 and Microsoft Azure. This is done both to exercise our image generation infrastructure, and also to facilitate testing of the actual images and distribution in general. I’ve often found it convenient to have easy access to a clean, up-to-date, disposable virtual machine, and you might too.</p>
<p><em>Please note that these images are intended for testing purposes, and older ones may be removed at any time in order to free up various resources. You should not hardcode references to specific images in any tools or configuration.</em></p>
<p>If you’re downloading an image for local use, you’ll probably want one of the “nocloud” images. They have an empty root password (the security ramifications of this should be obvious, so please be careful!), and don’t rely on any cloud service for configuration. You can use the qcow2 images with QEMU on any Linux system, or retrieve the raw images for use with another VMM.</p>
<p>If you want to use the images on Amazon EC2, you can identify the latest nightly build using the AWS CLI as follows:</p>
<pre><code># Select the most recent bullseye image for arm64 instance types:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='arm64'] | [?starts_with(Name, 'debian-11-')] | max_by([], &Name)"
{
"Architecture": "arm64",
"CreationDate": "2020-03-04T05:31:12.000Z",
"ImageId": "ami-056a2fe946ef98607",
"ImageLocation": "903794441882/debian-11-arm64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-0d7a569b159964d87",
"VolumeSize": 8,
"VolumeType": "gp2"
}
}
],
"Description": "Debian 11 (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-11-arm64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
}
# Similarly, select the most recent sid amd64 AMI:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='x86_64'] | [?starts_with(Name, 'debian-sid-')] | max_by([], &Name)"
{
"Architecture": "x86_64",
"CreationDate": "2020-03-04T05:13:58.000Z",
"ImageId": "ami-00ec9272298ca9059",
"ImageLocation": "903794441882/debian-sid-amd64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-07c3fad3ff835248a",
"VolumeSize": 8,
"VolumeType": "gp2"
}
}
],
"Description": "Debian sid (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-sid-amd64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
}
</code></pre>
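<p>The <code>max_by([], &Name)</code> part of those queries works because the daily image names embed a YYYYMMDD date plus an increasing build number, so the lexically greatest name is also the newest build. You can reproduce the selection logic with nothing but <code>sort</code>:</p>

```shell
# Daily image names sort lexically into chronological order,
# so plain sort(1) finds the newest build.
printf '%s\n' \
    debian-11-arm64-daily-20200303-188 \
    debian-11-arm64-daily-20200304-189 \
    debian-11-arm64-daily-20200302-187 | sort | tail -n 1
# prints debian-11-arm64-daily-20200304-189
```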
<p>If you’re using Microsoft Azure images, you can inspect the images with <code>az vm image list</code> and <code>az vm image show</code>, as follows:</p>
<pre><code>$ az vm image list -o table --publisher debian --offer debian-sid-daily --location westeurope --all | sort -k 5 | tail
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200228.184 0.20200228.184
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200303.188 0.20200303.188
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200303.188 0.20200303.188
Offer Publisher Sku Urn Version
$ az vm image show --location westeurope --urn debian:debian-sid-daily:sid:latest
{
"automaticOsUpgradeProperties": {
"automaticOsUpgradeSupported": false
},
"dataDiskImages": [],
"hyperVgeneration": "V1",
"id": "/Subscriptions/428325bd-cc87-41f1-b0d8-8caf8bb80b6b/Providers/Microsoft.Compute/Locations/westeurope/Publishers/debian/ArtifactTypes/VMImage/Offers/debian-sid-daily/Skus/sid/Versions/0.20200303.188",
"location": "westeurope",
"name": "0.20200303.188",
"osDiskImage": {
"operatingSystem": "Linux",
"sizeInBytes": 32212255232,
"sizeInGb": 30
},
"plan": null,
"tags": null
}
</code></pre>
<p>More information about cloud computing with Debian is available <a href="https://wiki.debian.org/Cloud/">on the wiki</a>.</p>
<h2>Buster in the AWS Marketplace</h2>
<p><em>2020-03-02 (/2020/03/02/buster-in-the-aws-marketplace/)</em></p>
<p>When buster was first released back in <a href="https://lists.debian.org/debian-announce/2019/msg00003.html">early July</a> of last year, the cloud team was in the process of setting up some new accounts with AWS to be used for AMI publication. For various reasons, the accounts we used for pre-buster releases were considered unsuitable for use long term, and the buster release was considered to be a good logical point to make the switch. Unfortunately, issues within the bureaucracy of both SPI/Debian and AWS delayed the complete switch to the new accounts. We have been publishing buster AMIs using a new account since September of 2019, but we have not been able to list them with the AWS Marketplace. This has reduced the visibility and discoverability of the AMIs and led to numerous questions on the mailing lists and other forums.</p>
<p>I’m happy to announce today that the issues blocking Marketplace publication have finally been resolved, and buster is <a href="https://aws.amazon.com/marketplace/pp/B0859NK4HC">officially available in the AWS Marketplace</a>. Please use it, please leave us ratings and reviews in the Marketplace, and most importantly, please feel welcome to contribute to the <a href="https://wiki.debian.org/Cloud/">Debian cloud team</a>.</p>
<p>As always, if you’d rather get the latest details from the Debian wiki, they’re <a href="https://wiki.debian.org/Cloud/AmazonEC2Image/Buster">available</a>, or you can query the AWS API directly, e.g. using the <a href="https://packages.debian.org/buster/awscli">awscli</a> command as follows:</p>
<pre><code>$ aws ec2 describe-images --owner 136693071363 \
--region us-east-1 --output json \
--query "Images[?Architecture=='arm64'] | [?starts_with(Name, 'debian-10-')] | max_by([], &Name)"
{
"Architecture": "arm64",
"CreationDate": "2020-02-10T19:04:55.000Z",
"ImageId": "ami-031d1abcdcbbfbd8f",
"ImageLocation": "136693071363/debian-10-arm64-20200210-166",
"ImageType": "machine",
"Public": true,
"OwnerId": "136693071363",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-0d8459c1e3fe12486",
"VolumeSize": 8,
"VolumeType": "gp2"
}
}
],
"Description": "Debian 10 (20200210-166)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-10-arm64-20200210-166",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
}
</code></pre>
<p>Hopefully this helps reduce some of the confusion around the availability of the buster AMIs.</p>
<p>Next up, <a href="https://aws.amazon.com/govcloud-us/">GovCloud</a>!</p>
<h2>Yet Another Init decision</h2>
<p><em>2019-12-26 (/2019/12/26/yet-another-init-decision/)</em></p>
<p>I’m trying to use this post to capture some of my thoughts on the current
GR, and to document my approach to this vote. If nothing else, I hope
to use this to convince myself that I’ve read and understood the
various options in the GR.</p>
<p>From my perspective, two of the choices on this ballot are easy to
deal with, in that they have very clear meaning and the ramifications
are easy to understand. We’re either all in on systemd and we don’t
care about compatibility with other init systems, or we’re only
loosely integrated with systemd and require that it be possible to
replace it. It’s pretty easy to rank these two options relative to
each other, depending on how you feel about systemd in general. The
other options are the hardest to rank. They basically all come down
to variants of “further discussion”, or of the status quo, with slight
biases in one direction or another. The significance of them is the
lens through which we want to view the situation, or how we want to
frame the discussion.</p>
<p>There are a few considerations for me when weighing the options
against each other. First, I don’t want to have another discussion
about init systems in Debian ever again. Second, I really like
systemd the init system and am happy to use it just about everywhere.
Third, while I like systemd the init system, I don’t care much for
systemd the NTP client, systemd the DNS resolver, systemd the DHCP
client, etc. Unfortunately, I think this leaves me somewhat
conflicted about choosing any of the stronger options. I’m tempted to
prefer “Focus on systemd”, but I worry about where that leads in the
long run. In particular, the GR option doesn’t discuss this at all,
except to say that “[i]ntegrating systemd more deeply into Debian will
lead to a more integrated and tested system…“, which may well be
true, but I’d like to stop the integration somewhat short of the point
where we start considering whether to rename the project Debian
GNU/systemd.</p>
<p>So the remaining options to consider are:</p>
<ul>
<li>A. Support for multiple init systems is Important</li>
<li>B. Systemd but we support exploring alternatives</li>
<li>D: Support non-systemd systems, without blocking progress</li>
<li>G: Support portability and multiple implementations</li>
<li>H: Support portability, without blocking progress</li>
</ul>
<p>(Note that I’ve re-ordered these from what’s on the ballot.)</p>
<p>First of all, I think this is <em>way</em> too many options, and I wish the
people championing them could have worked together to consolidate a
few of them. I get that there are subtleties, but I strongly believe
that a small number of very clearly worded options would make it much
easier to appreciate the differences.</p>
<p>Second, none of these options directly address my concern about how
much of the system to turn over to the control of the various systemd
components. Some of them may imply a limit, simply as a matter of
practicality, but they really don’t directly discuss it at all.</p>
<p>Option A looks like a somewhat softer variant of option E, which
states that support for non-systemd inits is required. Option A is
tempting because it seems to at least try to put limits on the
integration of systemd into Debian, but the limit is at the Policy
level, which I don’t think is the right place. Policy is never going
to declare that a given implementation of NTP must be used, for
example.  So that leaves the questions, to my mind: Is support for
non-systemd pid 1 Important (in the BTS sense)? Would I be willing to
accept a campaign of NMUs to add broad support for some other init
system? I’m not sure that I would. Fundamentally, I’m not sure how
important it is to me, when I get down to it. I haven’t heard a
compelling use case that isn’t fundamentally rooted in somebody’s
personal dislike or distrust of systemd. So, I don’t see how it can
be truly important. (And please don’t bring up embedded systems;
systemd works just fine on a Raspberry Pi Zero, and if you need to go
much smaller then you really need to be asking yourself whether you
truly need things like glibc and apt in your installation. At that
point, I’m skeptical that Debian is what you want at all.)</p>
<p>Option G (Support portability and multiple implementations) appears to
be a very long-winded way of saying “Further Discussion.” The status
quo is preserved, and we essentially resolve to avoid committing hard
either way. We assume that we are agile, creative, and focused enough
to deal with whatever scenarios arise, and that we don’t need a final
resolution to the systemd question. Or to any other technical
question, for that matter; Option G doesn’t actually mention init
systems and there’s nothing in its language that is specific to init
systems. I think voting for option G is essentially saying “I wish we
never had this GR.”</p>
<p>Option B (Systemd but we support exploring alternatives) similarly
seems to entrench the status quo. It essentially seems to be saying
that we probably don’t need a GR to resolve current init system
questions, and certainly don’t need one <em>right now</em>. It is the most
open ended of the options I’ve looked at so far. It specifically
mentions derivative distributions as something we should support, and
suggests that avoiding tight coupling with systemd leaves them with
the ability to make different init system choices. That seems like a
reasonable claim.</p>
<p>So this leaves D and H, which look similar on the surface with both
expressing goals of not “blocking progress.” Going deeper, they do
share quite a bit of text. Option H notably states several guiding
principles around the area broadly encompassed by the term
“portability”. In particular, it talks about portability to different
software stacks, which I take to mean things like non-Linux kernels or
non GNU libc implementations, although it doesn’t give such specific
examples. It talks a bit about hardware portability, though I doubt
systemd has any fundamental hardware portability issues, so I don’t
see this as much of an issue. I’m undecided about how I feel about
software portability. The kFreeBSD port, to choose an example, was
interesting, in an academic sense, but I’m not really sure how much it
matters. I have no doubt that real bugs were found and fixed during
that effort, but I do doubt that Debian is fundamentally better
because it happened. It clearly hasn’t had enough of an impact to
foster a self-sustaining sub-project. Even if supporting software
portability is very important, though, I don’t think we need this
option to enable it. Option B (Systemd but we support exploring
alternatives), for example, feels entirely compatible with software
portability, especially when support of derivative distributions is
considered. (I think the kFreeBSD port could have happened, or at
least started, just as easily in a derivative as in the main project.)</p>
<p>Aside from the stated principles, options D and H are identical.  So
the two lead to essentially the same outcome, with only the stated
basis for the decision differing.</p>
<p>I began this document thinking that it would help me decide, with
confidence, how to vote. I think it has helped me weed out a few
options, for various reasons, but I don’t think it has helped me to
understand exactly which option I truly hope to see win. On one hand,
I’d be perfectly happy to go all in on systemd and build a tightly
integrated, consistent, and unified distribution. No compatibility
layers, no abstractions, just direct tight coupling. It’s so much
easier to test, and the behavior is going to be more predictable. On
the other hand, I like the idea of people being able to experiment,
and I dislike the idea of monocultures. There’s already enough of a
Linux monoculture on the modern internet, and I’m not sure how
comfortable I am with the idea of making Linux distros more and more
homogeneous. But then again, the fractured Linux ecosystem has
hindered adoption, especially in the desktop area, so maybe we need
more homogeneity. Maybe Debian is too big and slow to really be the
best sandbox for experimentation anyway, and such work is best handled
in derivatives or other distributions altogether, especially early
on. (Consider how long it took to complete the /usr/doc ->
/usr/share/doc transition. Is Debian really the best place for
experimentation?)</p>
<p>Init integration and tooling has been a topic of discussion within
Debian for over 20 years. It had already been going on for some time
when
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=76868">bug 76868</a>
was opened in 2000, or when hmh gave
<a href="https://people.debian.org/~hmh/debconf2/initscripts/">a talk</a> based
on
<a href="https://people.debian.org/~hmh/debconf2/debconf2-initscripts-bkg.pdf">his paper</a>
on init systems in Debian at DebConf 2. Regardless of the outcome of
this GR, it’s hard to imagine it really ending here. And maybe that’s
OK. The init system provides glue that integrates a lot of different
components into a unified OS, and as such <em>is</em> important. It’s not
the only integral component, though, so in some ways it’s funny to me
that it’s the one that keeps coming up.</p>Noah Meyerhansfrodo+blog@morgul.netFurther Discussion for DPL!2019-03-10T15:18:00-07:00/2019/03/10/further-discussion-for-dpl/
<p>Further Discussion builds consensus within Debian!</p>
<p>Further Discussion gets things done!</p>
<p>Further Discussion welcomes diverse perspectives in Debian!</p>
<p>We’ll grow the community with Further Discussion!</p>
<p>Further Discussion has been with Debian from the very beginning! Don’t you think it’s time we gave Further Discussion its due, after all the things Further Discussion has accomplished for the project?</p>
<p>Somewhat more seriously, have we really exhausted the community of people interested in serving as Debian Project Leader? That seems unfortunate. I’m not worried about it from a technical point of view, as Debian has established ways of operating without a DPL. But the lack of interest suggests some kind of stagnation within the community. Or maybe this is just the cabal trying to wrest power from the community by stifling the vote. Is there still a cabal?</p>Noah Meyerhansfrodo+blog@morgul.netReviewing GitHub pull requests locally2019-02-15T14:46:00-08:00/2019/02/15/reviewing-github-pull-requests-locally/
<p>When reviewing pull requests on GitHub, it’s often useful to have local access to the changes under review. There are a few different documented ways to accomplish this, but none have left me entirely satisfied. So, I came up with something different. Maybe it’ll work for you.</p>
<p>The existing methods are:</p>
<ul>
<li>You can add a new git remote referencing the source of the PR, then fetch it and check out the branch containing the proposed changes. This is OK, but adding a bunch of remotes is kind of annoying.</li>
<li>All incoming PRs for a given GitHub repo are visible in the repo’s ref space, but not in a path that gets fetched in the default git configuration. With <a href="https://gist.github.com/piscisaureus/3342247">a bit of configuration</a> in your local clone you can add the PR ref path to the list of fetched paths. This works well, but it must be configured per clone and will fetch all pending PRs. I track some repos that get a lot of PRs, most of which don’t need my review; in cases like this, retrieving every PR whenever I fetch is kind of annoying.</li>
<li>You can fetch a specific pull request by its ID and store it in a given branch using a command like <code>git fetch origin pull/ID/head:BRANCHNAME</code>, where ID is the pull request ID and BRANCHNAME is a new local branch that will hold these changes. This is nice, but annoying to type.</li>
<li>There is a <a href="https://github.com/tj/git-extras/pull/262"><code>git pr</code></a> command available in the <a href="https://github.com/tj/git-extras/">git-extras</a> project that implements the above fetch command with less typing. But that is additional software to install.</li>
</ul>
<p>So, for my take on this, I have implemented a fairly simple git alias. It can be configured globally, in ~/.gitconfig or equivalent, and supports fetching from multiple remotes. Add the following to the ‘[alias]’ section of your git config:</p>
<pre><code>fpr = !sh -c 'git fetch ${2:-origin} +refs/pull/$1/head:refs/remotes/${2:-origin}/pr/$1 && git checkout pr/$1' -
</code></pre>
<p>The mnemonic for “fpr” is “Fetch Pull Request”. Use it to fetch a specific PR to a local branch named <code>pr/ID</code>. It will fetch from the “origin” remote by default, but you can override that. For example, let’s fetch <a href="https://github.com/openwrt/packages/pull/8287">pr #8287</a> from the OpenWRT packages repository:</p>
<pre><code>$ git remote -v
openwrt git@github.com:openwrt/packages.git (fetch)
openwrt git@github.com:openwrt/packages.git (push)
origin git@github.com:nmeyerhans/packages.git (fetch)
origin git@github.com:nmeyerhans/packages.git (push)
$ git fpr 8287 openwrt
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 5 (delta 3), reused 3 (delta 3), pack-reused 2
Unpacking objects: 100% (5/5), done.
From github.com:openwrt/packages
* [new ref] refs/pull/8287/head -> openwrt/pr/8287
Branch pr/8287 set up to track remote branch pr/8287 from openwrt.
Switched to a new branch 'pr/8287'
$ git status
On branch pr/8287
Your branch is up-to-date with 'openwrt/pr/8287'.
nothing to commit, working tree clean
</code></pre>Noah Meyerhansfrodo+blog@morgul.netOn the demise of Linux Journal2017-12-02T18:54:00-08:00/2017/12/02/on-the-demise-of-linux-journal/
<p><a href="https://lwn.net/Articles/740445/">Lwn</a>, <a href="https://linux.slashdot.org/story/17/12/01/1949229/linux-journal-ceases-publication">Slashdot</a>, and many others have marked the recent announcement of Linux Journal’s demise. I’ll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.</p>
<p>I think it’s probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it’s really not an exaggeration to say that the Internet as we know it wouldn’t exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.</p>
<p>I was in college in the late 90’s, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP/UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I’ll never forget the look of disbelief on a company exec’s face as he asked “$3000 on a Linux system?”</p>
<p>Throughout this period, on my lunch breaks from work, I’d swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward thinking companies actually <em>were</em> using Linux to do real work. An article entitled “Linux Sinks the Titanic” described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie <a href="http://www.imdb.com/title/tt0120338/?ref_=nv_sr_1"><em>Titanic</em></a>. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.</p>
<p>The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it’s pretty much the dominant OS on the planet. When Linux Journal reported in the late 90’s that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it’s no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it’s quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that’s less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.</p>
<p>It’s been a long time since I paid attention to Linux Journal, so from a practical point of view I can’t honestly say that I’ll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can’t help but worry that there’s somebody out there who might thrive in the free software community if only they had the right door open before them.</p>Noah Meyerhansfrodo+blog@morgul.netStretch images for Amazon EC2, round 22017-04-20T21:37:00-07:00/2017/04/20/stretch-images-for-amazon-ec2-round-2/
<p>Following up on <a href="https://noah.meyerhans.us/blog/2017/01/28/call-for-testing-stretch-cloud-images-on-aws/">a previous post</a> announcing the availability of a first round of AWS AMIs for stretch, I’m happy to announce the availability of a second round of images. These images address all the feedback we’ve received about the first round. The notable changes include:</p>
<ul>
<li>Don’t install a local MTA.</li>
<li>Don’t install busybox.</li>
<li>Ensure that /etc/machine-id is recreated at launch.</li>
<li>Fix the security.debian.org sources.list entry.</li>
<li>Enable Enhanced Networking and ENA support.</li>
<li>Images are owned by the official debian.org AWS account, rather than my personal account.</li>
</ul>
<p>AMI details are listed
<a href="https://wiki.debian.org/Cloud/AmazonEC2Image/Stretch">on the wiki</a>. As
usual, you’re encouraged to submit feedback to the cloud team via the
<a href="https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=cloud.debian.org;dist=unstable">cloud.debian.org</a>
BTS pseudopackage, the
<a href="https://lists.debian.org/debian-cloud/">debian-cloud</a> mailing list,
or <code>#debian-cloud</code> on irc.</p>Noah Meyerhansfrodo+blog@morgul.netUsing FAI to customize and build your own cloud images2017-02-10T23:42:00-08:00/2017/02/10/using-fai-to-customize-and-build-your-own-cloud-images/
<p>At this past November’s Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:</p>
<ol>
<li>People who directly launch our image and treat it like a classic VPS. These users most likely will be logging into their instances via ssh and configuring it interactively, though they may also install and use a configuration management system at some point.</li>
<li>People who directly launch our images but configure them automatically via launch-time configuration passed to the <code>cloud-init</code> process on the agent. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.</li>
<li>People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.</li>
</ol>
<p>This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in <a href="/blog/2017/01/28/call-for-testing-stretch-cloud-images-on-aws/">my previous post on the subject</a>, we are using the <a href="http://wiki.fai-project.org/wiki/Main_Page">FAI</a> project with configuration from the <a href="https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/">fai-cloud-images</a>. It’s probably a good idea to get familiar with FAI and our configs before proceeding, but it’s not necessary.</p>
<p>You’ll need to use FAI version 5.3.4 or greater. 5.3.4 is currently available in stretch and jessie-backports. Images can be generated locally on your non-cloud host, or on an existing cloud instance. You’ll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I’ll assume throughout this document that you’re generating your image on a cloud instance, but I’ll highlight the steps where it actually matters. I’ll also be describing the steps to target AWS, though the general workflow should be similar if you’re targeting a different platform.</p>
<p>To get started, install the <code>fai-server</code> package on your instance and clone the <code>fai-cloud-images</code> git repository. (I’ll assume the repository is cloned to <code>/srv/fai/config</code>.) In order to generate your own disk image that generally matches what we’ve been distributing, you’ll use a command like:</p>
<pre><code>sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 \
/tmp/stretch-image.raw
</code></pre>
<p>This command will create an 8 GB raw disk image at <code>/tmp/stretch-image.raw</code>, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them will be determined by the FAI config tree and the classes provided on the command line. The <code>package_config</code> subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the <code>fai-diskimage</code> command line instructs FAI to process the contents of the matching <code>package_config</code> file if such a file exists. The files use a simple grammar that provides you with the ability to request certain packages to be installed or removed.</p>
<p>Let’s say for example that you’d like to build a custom image that looks mostly identical to Debian’s images, but that also contains the Apache HTTP server. You might do that by introducing a new <code>package_config/HTTPD</code> file, as follows:</p>
<pre><code>PACKAGES install
apache2
</code></pre>
<p>Then, when running <code>fai-diskimage</code>, you’ll add <code>HTTPD</code> to the list of classes:</p>
<pre><code>sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2,HTTPD \
/tmp/stretch-image.raw
</code></pre>
<p>Aside from custom package installation, you’re likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the <a href="http://fai-project.org/doc/man/fcopy.html">fcopy</a> tool to help with this. Fcopy is aware of FAI’s class list and is able to select an appropriate file from the FAI config’s <code>files</code> subdirectory based on classes. The <code>scripts/EC2/10-apt</code> script provides a basic example of using fcopy to select and install an apt sources.list file. The <code>files/etc/apt/sources.list/</code> subdirectory contains both an <code>EC2</code> and a <code>GCE</code> file. Since we’ve enabled the <code>EC2</code> class on our command line, fcopy will find and install that file. You’ll notice that the sources.list subdirectory also contains a <code>preinst</code> file, which fcopy can use to perform additional actions prior to actually installing the specified file. <code>postinst</code> scripts are also supported.</p>
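<p>To make the fcopy workflow concrete, here is a sketch of what a customization script for the hypothetical <code>HTTPD</code> class above might look like. The script name and target path are illustrative assumptions, not files from the actual fai-cloud-images repo:</p>

```shell
#! /bin/bash
# Hypothetical scripts/HTTPD/10-config script for the HTTPD class
# sketched earlier.  fcopy consults the active class list and installs
# the best-matching file from the FAI config's files/ subdirectory
# into the target filesystem.
error=0

# Would install files/etc/apache2/sites-available/000-default.conf/HTTPD
# (or another matching class file) into the image, if such a file exists.
fcopy -M /etc/apache2/sites-available/000-default.conf || error=1

exit $error
```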
<p>Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the <code>fai-cloud-images</code> repo, as well as the <a href="http://fai-project.org/fai-guide/">FAI guide</a> for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of these environments (for example, you don’t need to worry about pxeboot and tftp!) However, this means that although you get to ignore probably half of the FAI Guide, it’s not immediately obvious which half it is that you get to ignore.</p>
<p>Once you’ve generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:</p>
<pre><code>admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START END SECTORS SIZE NAME UUID
1 2048 16777215 16775168 8G ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 1007.5K 0 part
└─xvda2 202:2 0 8G 0 part /
loop0 7:0 0 8G 0 loop
└─loop0p1 259:0 0 8G 0 loop
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/ dev/ home/ initrd.img.old@ lib64/ media/ opt/ root/ sbin/ sys/ usr/ vmlinuz@
boot/ etc/ initrd.img@ lib/ lost+found/ mnt/ proc/ run/ srv/ tmp/ var/ vmlinuz.old@
</code></pre>
<p>In order to actually use your image with your cloud provider, you’ll need to register it with them. Strictly speaking, these are the only steps that are provider specific and need to be run on your provider’s cloud infrastructure. AWS <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html">documents this process</a> in the <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html">User Guide for Linux Instances</a>. The basic workflow is:</p>
<ol>
<li>Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.</li>
<li>Use <code>dd</code> to write your image to the secondary volume, e.g. <code>sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb</code></li>
<li>Use the <code>volume-to-ami.sh</code> script in the <code>fai-cloud-images</code> repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: <code>./volume-to-ami.sh vol-04351c30c46d7dd6e</code></li>
</ol>
<p>The <code>volume-to-ami.sh</code> script must be run with access to AWS credentials that grant access to several EC2 API calls: <code>describe-snapshots</code>, <code>create-snapshot</code>, and <code>register-image</code>. It recognizes a <code>--help</code> command-line flag and several options that modify characteristics of the AMI that it registers. When <code>volume-to-ami.sh</code> completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows.</p>
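<p>For reference, the three API calls named above correspond to an IAM policy along these lines. This is a minimal sketch based only on the calls listed here; the script may need additional permissions in practice:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSnapshots",
        "ec2:CreateSnapshot",
        "ec2:RegisterImage"
      ],
      "Resource": "*"
    }
  ]
}
```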
<p>As always, we welcome feedback and contributions via the <a href="https://lists.debian.org/debian-cloud/">debian-cloud</a> mailing list or <code>#debian-cloud</code> on IRC.</p>Noah Meyerhansfrodo+blog@morgul.netCall for testing: Stretch cloud images on AWS2017-01-28T13:50:00-08:00/2017/01/28/call-for-testing-stretch-cloud-images-on-aws/
<p>Following up on <a href="https://lists.debian.org/debian-sprints/2016/11/msg00018.html">Steve McIntyre’s writeup</a> of the Debian Cloud Sprint that took place in Seattle this past November, I’m pleased to announce the availability of preliminary Debian stretch AMIs for Amazon EC2. Pre-generated images are available in all public AWS regions, or you can use <a href="http://wiki.fai-project.org/wiki/Main_Page">FAI</a> with the <a href="https://anonscm.debian.org/cgit/cloud/fai-cloud-images.git/">fai-cloud-images</a> configuration tree to generate your own images. The pre-generated AMIs were created on 25 January, shortly after Linux 4.9 entered stretch, and their details follow:</p>
<table>
<tr>
<td>ami-6d017002
</td>
<td>
ap-south-1
</td>
</tr>
<tr>
<td>ami-cc5540a8
</td>
<td>
eu-west-2
</td>
</tr>
<tr>
<td>ami-43401925
</td>
<td>
eu-west-1
</td>
</tr>
<tr>
<td>ami-870edfe9
</td>
<td>
ap-northeast-2
</td>
</tr>
<tr>
<td>ami-812266e6
</td>
<td>
ap-northeast-1
</td>
</tr>
<tr>
<td>ami-932e4aff
</td>
<td>
sa-east-1
</td>
</tr>
<tr>
<td>ami-34ce7350
</td>
<td>
ca-central-1
</td>
</tr>
<tr>
<td>ami-9f6dd8fc
</td>
<td>
ap-southeast-1
</td>
</tr>
<tr>
<td>ami-829295e1
</td>
<td>
ap-southeast-2
</td>
</tr>
<tr>
<td>ami-42448a2d
</td>
<td>
eu-central-1
</td>
</tr>
<tr>
<td>ami-98c9348e
</td>
<td>
us-east-1
</td>
</tr>
<tr>
<td>ami-57361332
</td>
<td>
us-east-2
</td>
</tr>
<tr>
<td>ami-03386563
</td>
<td>
us-west-1
</td>
</tr>
<tr>
<td>ami-7a27991a
</td>
<td>
us-west-2
</td>
</tr>
</table>
<p>As with the current jessie images, these use a default username of ‘admin’, with access controlled by the ssh key named in the ec2 <code>run-instances</code> invocation. They’re intended to provide a reasonably complete Debian environment without too much bloat. IPv6 addressing should be supported in an <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-migrate-ipv6.html">appropriately configured VPC environment</a>.</p>
<p>These images were built using Thomas Lange’s <a href="http://wiki.fai-project.org/wiki/Main_Page">FAI</a>, which has been used for over 15 years for provisioning all sorts of server, workstation, and VM systems, but which was only recently adapted to generate cloud disk images. It has proven to be well suited to this task though, and image creation is straightforward and flexible. I’ll describe in a followup post the steps you can follow to create and customize your own AMIs based on our recipes. In the meantime, please do test these images! You can submit bug reports to the <a href="https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=cloud.debian.org;dist=unstable">cloud.debian.org</a> metapackage, and feedback is welcome via the <a href="https://lists.debian.org/debian-cloud/">debian-cloud</a> mailing list or <code>#debian-cloud</code> on IRC.</p>Noah Meyerhansfrodo+blog@morgul.netWe live in strange times2015-04-22T23:24:00-07:00/2015/04/22/we-live-in-strange-times/
<p><a href="http://openness.microsoft.com/blog/2015/04/21/microsoft-debian-8-linuxfest/">Join Microsoft to celebrate Debian 8 at LinuxFest Northwest</a></p>Noah Meyerhansfrodo+blog@morgul.netBuilding OpenWRT with Docker2015-03-19T22:23:00-07:00/2015/03/19/building-openwrt-with-docker/
<p>I’ve run <a href="http://www.openwrt.org/">OpenWRT</a> on my home router for a
long time, and these days I maintain a couple of packages for the
project. In order to make most efficient use of the hardware resources
on my router, I run a custom build of the OpenWRT firmware with some
default features removed and others added. For example, I install
<a href="https://www.isc.org/downloads/bind/">bind</a> and
<a href="http://ipsec-tools.sourceforge.net/">ipsec-tools</a>, while I disable
the web UI in order to save space.</p>
<p>There are quite a few
<a href="http://wiki.openwrt.org/doc/howto/buildroot.exigence">packages</a>
required for the OpenWRT build process. I don’t necessarily want all
of these packages installed on my main machine, nor do I want to
maintain a VM for the build environment. So I investigated using
Docker for this.</p>
<p>Starting from a base jessie image, which I created using the <a href="https://github.com/docker/docker/blob/master/contrib/mkimage/debootstrap">Docker
debootstrap wrapper</a>,
the first step was to construct a Dockerfile containing instructions
on how to set up the build environment and create a non-root user to
perform the build:</p>
<pre><code>FROM jessie:latest
MAINTAINER Noah Meyerhans <frodo@morgul.net>
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
asciidoc bash bc binutils bzip2 fastjar flex git-core g++ gcc \
util-linux gawk libgtk2.0-dev intltool jikespg zlib1g-dev make \
genisoimage libncurses5-dev libssl-dev patch perl-modules \
python2.7-dev rsync ruby sdcc unzip wget gettext xsltproc \
libboost1.55-dev libxml-parser-perl libusb-dev bin86 bcc sharutils \
subversion
RUN adduser --disabled-password --uid 1000 --gecos "Docker Builder,,," builder
</code></pre>
<p>And we generate a docker image based on this Dockerfile per the
<a href="http://docs.docker.com/reference/commandline/cli/#build">docker build documentation</a>. At
this point, we’ve got a basic image that does what we want. To
initialize the build environment (download package sources, etc), I
might run:</p>
<p><code>docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt
sh -c "cd /src/openwrt/openwrt && scripts/feeds update -a"</code></p>
<p>Or configure the system:</p>
<p><code>docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i
jessie/openwrt make -C /src/openwrt/openwrt menuconfig</code></p>
<p>And finally, build the OpenWRT image itself:</p>
<p><code>docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i
jessie/openwrt make -C /src/openwrt/openwrt -j3</code></p>
<p>The <code>-v ~/src/openwrt:/src/openwrt</code> flags tell docker to bind mount my
~/src/openwrt directory (which I’d previously cloned using git) to
/src/openwrt inside the running container. Without this, one might be
tempted to clone the git repo directly into the container at runtime,
but the changes to non-bind-mount filesystems are lost when the
container terminates. This could be suitable for an autobuild
environment, in which the sources are cloned at the start of the build
and any generated artifacts are archived externally at the end, but it
isn’t suitable for a dev environment where I might be making and
testing small changes at a relatively high frequency.</p>
<p>The <code>-u builder</code> flags tell docker to run the given commands as the
builder user inside the container. Recall that builder was created
with UID 1000 in the Dockerfile. Since I’m storing the source and
artifacts in a bind-mounted directory, all saved files will be created
with this UID. Since UID 1000 happens to be my UID on my laptop, this
is fine. Any files created by builder inside the container will be
owned by me outside the container. However, this container should not
have to rely on a user with a given UID running it! I’m not sure what
the right way to approach this problem is within Docker. It may be
that someone using my image should create their own derivative image
that creates a user with the appropriate UID (creation of this
derivative image is a cheap operation in Docker). Alternatively,
whatever Docker init system is used could start as root, add a new
user with a specific UID, and execute the build commands as that new
user. Neither of these seems as clean as it could be, though.</p>
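<p>For what it’s worth, the derivative image in the first approach is
tiny. Something like this would do, with 1234 standing in for whatever
UID the downstream user actually needs:</p>
<pre><code>FROM jessie/openwrt
# Re-create the builder user with a UID matching the host user.
RUN usermod -u 1234 builder
</code></pre>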
<p>In general, Docker seems quite useful for such a build
environment. It’s easy to set up, and it makes generating and sharing
a common collection of packages and configuration very
straightforward. Because images are self-contained, I can reclaim a
bunch of disk space by simply executing “docker rmi”.</p>Noah Meyerhansfrodo+blog@morgul.netSpamassassin updates2015-01-15T07:44:00-08:00/2015/01/15/spamassassin-updates/
<p>If you’re running Spamassassin on Debian or Ubuntu, have you enabled
automatic rule updates? If not, why not? If possible, you should
enable this feature. It should be as simple as setting <code>"CRON=1"</code> in
/etc/default/spamassassin. If you choose not to enable this feature,
I’d really like to hear why. In particular, I’m thinking about
changing the default behavior of the Spamassassin packages such that
automatic rule updates are enabled, and I’d like to know if (and why)
anybody opposes this.</p>
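<p>For reference, enabling the updates amounts to a single line:</p>
<pre><code># /etc/default/spamassassin (excerpt)
# Set to a non-zero value to enable the nightly sa-update cron job
CRON=1
</code></pre>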
<p>Spamassassin hasn’t been providing rules as part of the upstream
package for some time. In Debian, we include a snapshot of the ruleset
from an essentially arbitrary point in time in our packages. We do
this so Spamassassin will work “out of the box” on Debian
systems. People who install spamassassin from source must download
rules using spamassassin’s
<a href="https://wiki.apache.org/spamassassin/RuleUpdates">updates channel</a>. The
typical way to use this service is to have cron or something similar
periodically check for rule changes. This allows the
anti-spam community to quickly adapt to changes in spammer tactics,
and for you to actually benefit from their work by taking advantage of
their newer, presumably more accurate, rules. It also allows for quick
reaction to issues such as the one described in
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=738872">bug 738872</a>
and
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774768">774768</a>.</p>
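<p>The packaged cron job boils down to something like the following
crontab entry (a sketch; the real job is a bit more careful, and the
service name may differ on your system). <code>sa-update</code> exits
0 only when new rules were actually downloaded and installed:</p>
<pre><code># Check for new rules nightly; reload the daemon only if any were installed.
30 1 * * * sa-update && service spamassassin reload
</code></pre>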
<p>If we do change the default, there are a couple of possible approaches
we could take. The simplest would be to change the default value of
the CRON variable in /etc/default/spamassassin. Perhaps a cleaner
approach would be to provide a “spamassassin-autoupdates” package
containing just the cron job and a small wrapper program to perform
the updates. The Spamassassin package would then
specify a Recommends relationship with this package, thus providing
the default enabled behavior while still providing a clear and simple
mechanism to disable it.</p>Noah Meyerhansfrodo+blog@morgul.netDebconf by train2014-08-24T13:19:00-07:00/2014/08/24/debconf-by-train/
<p>Today is the first time I’ve taken an interstate train trip in something like 15 years. A few things about the trip were pleasantly surprising, though most of them will come as no surprise:</p>
<ol>
<li>Less time wasted in security theater at the station prior to departure.</li>
<li>On-time departure.</li>
<li>More comfortable seats than a plane or bus.</li>
<li>Quiet.</li>
<li>Permissive free wifi.</li>
</ol>
<p>Wifi was the biggest surprise. Not that it existed, since we’re living in the future and wifi is expected everywhere. It’s IPv4 only and stuck behind a NAT, which isn’t a big surprise, but it is reasonably open. There isn’t any port filtering of non-web TCP ports, and even non-TCP protocols are allowed out. Even my aiccu IPv6 tunnel worked fine from the train, although I did experience some weird behavior with it.</p>
<p>I haven’t used aiccu much in quite a while, since I have a native IPv6 connection at home, but it can be convenient while traveling. I’m still trying to figure out what happened today, though. The first symptoms were that, although I could ping IPv6 hosts, I could not actually log in via IMAP or ssh. Tcpdump showed all the standard symptoms of a <a href="https://www.usenix.org/legacy/events/lisa02/tech/full_papers/vanderberg/vanderberg_html/index.html">PMTU blackhole</a>: small packets flow fine, large ones are dropped. The interface MTU is set to 1280, which is the minimum MTU for IPv6; any path on the internet is expected to handle packets of at least that size. Experimentation via ping6 reveals that the largest payload size I can successfully exchange with a peer is 820 bytes. Adding 8 bytes for the ICMPv6 header gives 828 bytes of payload, and adding 40 bytes for the IPv6 header gives an 868 byte packet, which is well under what should be the MTU for this path.</p>
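<p>Spelling out the arithmetic:</p>
<pre><code>#!/bin/sh
# Header sizes are fixed by the IPv6/ICMPv6/TCP specs; the payload
# limit is the largest ping6 payload that made it through.
ipv6_hdr=40; icmpv6_hdr=8; tcp_hdr=20
max_icmp_payload=820

path_mtu=$((max_icmp_payload + icmpv6_hdr + ipv6_hdr))
echo "effective path MTU: $path_mtu"    # 868, well below the 1280 minimum

# Largest MSS that would fit the same path with no extension headers:
echo "max safe MSS: $((path_mtu - ipv6_hdr - tcp_hdr))"    # 808
</code></pre>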
<p>I’ve worked around this problem with an ip6tables rule to rewrite the MSS on outgoing SYN packets to 760 bytes, which leaves room for the 40 byte IPv6 header, the 20 byte TCP header, and any extension headers within the observed limit:</p>
<pre><code>sudo ip6tables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 760
</code></pre>
<p>It is working well and will allow me to publish this from the train, which I’d otherwise have been unable to do. But… weird.</p>Noah Meyerhansfrodo+blog@morgul.netand in other news...2014-06-01T22:09:00-07:00/2014/06/01/and-in-other-news.../
<p>I’ve aborted several attempts recently to get something of interest
posted. For whatever reason, none of that stuck. So here’s something
with fewer expectations attached to it: A collection of random updates.</p>
<h2 id="debian-packaging">Debian packaging</h2>
<h3 id="spamassassin">Spamassassin</h3>
<p>The <a href="http://spamassassin.apache.org">Spamassassin</a> project released
version 3.4.0, a major update over the 3.3.2 branch, after nearly
two-and-a-half years in development. 3.4.0-1 is currently in unstable
and testing, and seems to be working reasonably well for me. I’ve got
another release in the works that will provide a systemd unit file
(<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=716887">bug 716887</a>)
and also switch to using more appropriate mechanisms than ‘su’ for
running cron-based maintenance tasks as the debian-spamd user.</p>
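<p>A minimal unit file for spamd might look something like this (a
sketch only, not the final packaging; the options mirror the current
Debian defaults from the init script):</p>
<pre><code>[Unit]
Description=Perl-based spam filter using text analysis
After=network.target

[Service]
Type=forking
PIDFile=/run/spamd.pid
ExecStart=/usr/sbin/spamd -d --pidfile=/run/spamd.pid --create-prefs \
          --max-children 5 --helper-home-dir
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
</code></pre>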
<h3 id="ipsec-tools">Ipsec-tools</h3>
<p>In response to Matt Grant’s
<a href="https://lists.debian.org/debian-devel/2014/04/msg00075.html">call for help</a>
for ipsec-tools packaging, I’ve become an inaugural member of the
pkg-ipsec-tools team and made a couple of uploads. The upstream
release has been incremented from 0.8.0 to 0.8.2 to pick up some small
bug fixes.</p>
<h2 id="cycling">Cycling</h2>
<p>With two races in the books, and another coming up in 2 weeks, the
mountain biking season is in full swing. The first race (Stottlemeyer)
was a muddy mess of mechanical carnage, while the second (Echo
Valley) was dusty, dry, and sunbaked. I dealt with mechanical issues
throughout the first race, and that was reflected in my results. My
results in Echo Valley were pretty comparable to last year. Overall
time was slightly worse, but the winning times were higher as well. I
suspect that this was due to the heat, which was really quite severe.</p>
<p>Two weeks from now is the
<a href="http://testofmetal.com/pages/test-of-metal.php">Test of Metal</a> in
British Columbia, which is a really popular race and one of the high
points of the summer’s race calendar. Last year was the first time I
participated in this race, and I’ve been looking forward to this
year’s edition ever since.</p>
<h2 id="parenthood">Parenthood!</h2>
<p>Sarah & I are expecting our first child in early October. Thus far day
to day life hasn’t changed a lot for me, but it certainly has for
Sarah. Working in the garden, lifting things, and getting around by
bike have all become pretty uncomfortable for her. Even sleeping has
become a challenge because of difficulty getting comfortable. We’ve
been brainstorming names, from serious to off-the-wall, but haven’t
found anything worthy of going on any kind of short list. Any
suggestions?</p>
<h2 id="dnssec">DNSSEC</h2>
<p>I’ve enabled <a href="https://en.wikipedia.org/wiki/Dnssec">DNSSEC</a> for
several of my domain names, including this one. I had originally
planned on putting together blog post (or series) about my
experiences, but <a href="http://blog.tincho.org/">Martín Ferrari</a> beat me to
it with a couple of good
posts. <a href="http://blog.tincho.org/posts/DNSSEC__44___DANE__44___SSHFP__44___etc/">1</a>
<a href="http://blog.tincho.org/posts/More_DNSSEC/">2</a></p>
<p>I’ve found the
<a href="https://addons.mozilla.org/en-US/firefox/user/cznic-labs/?src=api">Firefox DNSSEC Validator extension</a>
to be really interesting for visually identifying which of the web
sites I visit regularly are signing their zones.</p>
<h2 id="conferences">Conferences</h2>
<p>I’m going to <a href="http://debconf14.debconf.org/">debconf</a> and
<a href="https://www.usenix.org/conference/lisa14">LISA</a>! Are you?</p>Noah Meyerhansfrodo+blog@morgul.netStarving trolls2014-02-01T20:34:00-08:00/2014/02/01/starving-trolls/
<p><a href="http://en.wikipedia.org/wiki/Netiquette#Netiquette">“Netiquette”</a> is an ancient term, dating back to the earliest days on the internet. These days, one might argue that it’s no longer relevant, or that there are so many different definitions that it’s been rendered meaningless. However, one particular aspect of it endures: “Don’t feed the trolls!” A recent <a href="https://lists.debian.org/debian-security/2014/01/msg00021.html">thread</a> on the <a href="https://lists.debian.org/debian-security/2014/01/msg00021.html">debian-security</a> mailing list provided an amazing demonstration of the effectiveness of this approach. A certain pseudonymous individual made multiple posts in this thread that exhibited classic troll behavior (no meaningful contribution to the discussion, inflammatory comments, etc). On multiple occasions, I was tempted to reply to some of these posts (after all, <a href="http://xkcd.com/386/">someone was <em>wrong</em> on the internet!</a>), but I held off. It would do no good to engage this person, and doing so would likely only devolve into a flame war. I expected the flames to fly anyway, since <em>surely</em> somebody would respond to this guy. It was only a matter of time. But the time never came. The discussion remained civil and productive, and the troll was left to starve.</p>Noah Meyerhansfrodo+blog@morgul.netupdating choqok packaging2011-02-20T17:00:00-08:00/2011/02/20/updating-choqok-packaging/
<p>It’s been a little too long since I kept the Debian Choqok packages as up to date as I’d like. This has led to some issues, since bug #591100 really should have been fixed in time for squeeze. Then, when upstream stopped pushing their svn changes to gitorious and moved their actual development to kde.org’s local git hosting, all my branches got screwed up, leading to further delays. I think this is fully resolved at this point. (I really, really hope so! It was a painful experience!) So, I hope to get a new choqok package uploaded real soon!</p>