Simon Josefsson's blog
I’ve been using
rsnapshot
to take backups of around 10 servers and laptops for well over 15 years, and it is a remarkably reliable tool that has proven itself many times. Rsnapshot uses
rsync
over
SSH
and maintains a temporal hard-link file pool. Once rsnapshot is configured and running on the backup server, you get a hardlink farm with directories like this for the remote server:
/backup/serverA.domain/.sync/foo
/backup/serverA.domain/daily.0/foo
/backup/serverA.domain/daily.1/foo
/backup/serverA.domain/daily.2/foo
...
/backup/serverA.domain/daily.6/foo
/backup/serverA.domain/weekly.0/foo
/backup/serverA.domain/weekly.1/foo
...
/backup/serverA.domain/monthly.0/foo
/backup/serverA.domain/monthly.1/foo
...
/backup/serverA.domain/yearly.0/foo
I can browse and rescue files easily, going back in time when needed.
The
rsnapshot project README
explains more, and there is a long
rsnapshot HOWTO
although I usually find the
rsnapshot man page
the easiest to digest.
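The hard-link pool is what makes these snapshots cheap: a rotation only creates new directory entries, and unchanged files share storage across all snapshots. A minimal sketch of the mechanism, using hypothetical paths (rsnapshot itself does this with cp -al and rsync):

```shell
# Sketch of rsnapshot-style hard-link snapshots (hypothetical paths).
set -eu
root=$(mktemp -d)
mkdir -p "$root/.sync"
echo v1 > "$root/.sync/foo"
# "cp -al" creates a snapshot of hard links: no file data is copied.
cp -al "$root/.sync" "$root/daily.0"
# rsync replaces a changed file (write temp file, rename), which breaks
# the hard link, so the snapshot keeps the old content:
rm "$root/.sync/foo"
echo v2 > "$root/.sync/foo"
cat "$root/daily.0/foo"   # prints: v1
cat "$root/.sync/foo"     # prints: v2
```

Note the subtlety: truncating a file in place would change it through every hard link; it is rsync's replace-by-rename behavior that keeps old snapshots intact.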
I have
stored multi-TB Git-LFS data on GitLab.com
for some time. The yearly renewal is coming up, and the price for Git-LFS storage on GitLab.com is now excessive (~$10,000/year). I have reworked my workflow and finally migrated
debdistget
to only store Git-LFS stubs on GitLab.com and push the real files to S3 object storage. The cost for this is barely measurable; I have yet to run into the €25/month warning threshold.
But how do you back up stuff stored in S3?
For some time, my S3 backup solution has been to run the
minio-client
mirror
command to download all S3 objects to my laptop, and rely on rsnapshot to keep backups of this. While 4TB NVMe drives are relatively cheap, I have long felt that this disk and network churn on my laptop is unsatisfactory.
What is a better approach?
I find S3 hosting sites fairly unreliable by design. Only a couple of clicks in your web browser and you have dropped 100TB of data; or someone else has, after stealing your plaintext-equivalent cookie. Thus, I haven’t really felt comfortable using any S3-based backup option. I prefer to self-host, although continuously running a mirror job is not sufficient: if I accidentally drop the entire S3 object store, my mirror run will remove all files locally too.
The rsnapshot approach that allows going back in time and having data on self-managed servers feels superior to me.
What if we could use rsnapshot with a S3 client instead of rsync?
Someone else
asked about this several years ago
, and the suggestion was to use the fuse-based
s3fs
which sounded unreliable to me. After some experimentation, working around some hard-coded assumption in the
rsnapshot
implementation, I came up with a small configuration pattern and a wrapper tool to implement what I desired.
Here is my configuration snippet:
cmd_rsync /backup/s3/s3rsync
rsync_short_args -Q
rsync_long_args --json --remove
lockfile /backup/s3/rsnapshot.pid
snapshot_root /backup/s3
backup s3:://hetzner/debdistget-gnuinos ./debdistget-gnuinos
backup s3:://hetzner/debdistget-tacos ./debdistget-tacos
backup s3:://hetzner/debdistget-diffos ./debdistget-diffos
backup s3:://hetzner/debdistget-pureos ./debdistget-pureos
backup s3:://hetzner/debdistget-kali ./debdistget-kali
backup s3:://hetzner/debdistget-devuan ./debdistget-devuan
backup s3:://hetzner/debdistget-trisquel ./debdistget-trisquel
backup s3:://hetzner/debdistget-debian ./debdistget-debian
The idea is to save backups of a couple of S3 buckets under /backup/s3/.
I have some scripts that take a complete
rsnapshot.conf
file and append my per-directory configuration to produce the final configuration. If you are curious how I roll this,
backup-all
invokes
backup-one
appending my
rsnapshot.conf template
with the snippet above.
The
s3rsync
wrapper script is the essential hack: it converts rsnapshot’s rsync parameters into something that talks S3. The script is as follows:
#!/bin/sh
set -eu
S3ARG=
for ARG in "$@"; do
    case $ARG in
        s3:://*) S3ARG="$S3ARG "$(echo $ARG | sed -e 's,s3:://,,');;
        -Q*) ;;
        *) S3ARG="$S3ARG $ARG";;
    esac
done
echo /backup/s3/mc mirror $S3ARG
exec /backup/s3/mc mirror $S3ARG
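To see what the wrapper does to a typical rsnapshot invocation, the translation loop can be exercised standalone. This sketch echoes the resulting command instead of running mc, and uses a parameter expansion in place of the sed call (same effect for these inputs):

```shell
# Standalone sketch of s3rsync's argument translation.
translate() {
    S3ARG=
    for ARG in "$@"; do
        case $ARG in
            s3:://*) S3ARG="$S3ARG ${ARG#s3:://}";;  # strip the pseudo-scheme
            -Q*) ;;                                  # drop rsnapshot's rsync short args
            *) S3ARG="$S3ARG $ARG";;
        esac
    done
    echo "mc mirror$S3ARG"
}
translate -Qv --json --remove s3:://hetzner/bucket /backup/s3/.sync/bucket
# prints: mc mirror --json --remove hetzner/bucket /backup/s3/.sync/bucket
```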
It uses the
minio-client
tool. I first tried
s3cmd
but its
sync
command reads all files to compute MD5 checksums every time you invoke it, which is very slow. The
mc mirror
command is blazingly fast since it only compares mtimes, just like
rsync
or
git.
First you need to store credentials for your S3 bucket. These are stored in plaintext in
~/.mc/config.json
which I find to be sloppy security practice, but I don’t know of a better way to do this. Replace
AKEY
and
SKEY
with the access key and secret key from your S3 provider:
/backup/s3/mc alias set hetzner AKEY SKEY
If I invoke a
sync
job for a fully synced-up directory, the output looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V sync
Setting locale to POSIX "C"
echo 1443 > /backup/s3/rsnapshot.pid
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-gnuinos \
/backup/s3/.sync//debdistget-gnuinos
/backup/s3/mc mirror --json --remove hetzner/debdistget-gnuinos /backup/s3/.sync//debdistget-gnuinos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-tacos \
/backup/s3/.sync//debdistget-tacos
/backup/s3/mc mirror --json --remove hetzner/debdistget-tacos /backup/s3/.sync//debdistget-tacos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-diffos \
/backup/s3/.sync//debdistget-diffos
/backup/s3/mc mirror --json --remove hetzner/debdistget-diffos /backup/s3/.sync//debdistget-diffos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-pureos \
/backup/s3/.sync//debdistget-pureos
/backup/s3/mc mirror --json --remove hetzner/debdistget-pureos /backup/s3/.sync//debdistget-pureos
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-kali \
/backup/s3/.sync//debdistget-kali
/backup/s3/mc mirror --json --remove hetzner/debdistget-kali /backup/s3/.sync//debdistget-kali
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-devuan \
/backup/s3/.sync//debdistget-devuan
/backup/s3/mc mirror --json --remove hetzner/debdistget-devuan /backup/s3/.sync//debdistget-devuan
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-trisquel \
/backup/s3/.sync//debdistget-trisquel
/backup/s3/mc mirror --json --remove hetzner/debdistget-trisquel /backup/s3/.sync//debdistget-trisquel
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
/backup/s3/s3rsync -Qv --json --remove s3:://hetzner/debdistget-debian \
/backup/s3/.sync//debdistget-debian
/backup/s3/mc mirror --json --remove hetzner/debdistget-debian /backup/s3/.sync//debdistget-debian
{"status":"success","total":0,"transferred":0,"duration":0,"speed":0}
touch /backup/s3/.sync/
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1443] \
/run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
-V sync: completed successfully
root@hamster /backup#
You can tell from the paths that this machine runs Guix. This was the first production use of the Guix System for me, and the machine has been running since 2015 (with the occasional new hard drive). Before, I used rsnapshot on Debian, but some stable release of Debian dropped the rsnapshot package, paving the way for me to test Guix in production on a non-Internet exposed machine. Unfortunately,
mc
is not packaged in Guix, so you will have to install it from the
MinIO Client GitHub page
manually.
Running the daily rotation looks like this:
root@hamster /backup# /run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf -V daily
Setting locale to POSIX "C"
echo 1549 > /backup/s3/rsnapshot.pid
mv /backup/s3/daily.5/ /backup/s3/daily.6/
mv /backup/s3/daily.4/ /backup/s3/daily.5/
mv /backup/s3/daily.3/ /backup/s3/daily.4/
mv /backup/s3/daily.2/ /backup/s3/daily.3/
mv /backup/s3/daily.1/ /backup/s3/daily.2/
mv /backup/s3/daily.0/ /backup/s3/daily.1/
/run/current-system/profile/bin/cp -al /backup/s3/.sync /backup/s3/daily.0
rm -f /backup/s3/rsnapshot.pid
/run/current-system/profile/bin/logger -p user.info -t rsnapshot[1549] \
/run/current-system/profile/bin/rsnapshot -c /backup/s3/rsnapshot.conf \
-V daily: completed successfully
root@hamster /backup#
Hopefully you will feel inspired to take backups of your S3 buckets now!
Following up on my
initial announcement about Debian Libre Live
I am happy to report on continued progress and the
release of Debian Libre Live version 13.3.0.
Since both this and the
previous 13.2.0 release
are based on the stable
Debian trixie release
, there aren’t a lot of major changes, just incremental progress on the installation process. Repeated installations have a tendency to reveal bugs, and we have resolved the apt sources list confusion for
Calamares
-based installations and a couple of other nits. This release is more polished and we are not aware of any remaining issues (unlike earlier versions, which were released with known problems), although we conservatively regard the project as still in beta. A Debian Libre Live logo is needed before marking this as stable; any graphically talented takers? (Please base it on the
Debian SVG upstream logo
image.)
We provide GNOME, KDE, and XFCE desktop images, as well as a text-only “standard” image, which match the regular Debian Live images with non-free software on them. We also provide a “slim” variant which is merely 750MB compared to the 1.9GB “standard” image. The slim image can still start the Debian installer, and can still boot into a minimal live text-based system.
The GNOME, KDE and XFCE desktop images feature the Calamares installer, and we have performed testing on a variety of machines. The standard and slim images do not have an installer available from the running live system, but all images support a boot menu entry to start the installer.
With this release we also extend our arm64 support to two tested platforms. The current list of successfully installed and supported systems now include the following hardware:
Desktop ADLINK Ampere Altra Developer Platform arm64 Neoverse N1
Desktop MSI Z790-P WIFI PRO i9-14900K Dasharo
Laptop Framework 13 AMD AI 9 HX 370
Laptop Lenovo X201 i7-620M
Laptop NovaCustom NV56 Intel Ultra 7 155H i915 Dasharo
Server Dell PowerEdge R630 2xE2680v4
Server/Router Protectli VP2440
Server Supermicro MegaDC ARS-110M-NR Ampere Altra Max 128 core 2x25GBe
This is a very limited set of machines, but the diversity in CPUs and architecture should hopefully reflect well on a wide variety of commonly available machines. Several of these machines are crippled (usually GPU or WiFi) without adding non-free software; complain to your hardware vendor and adapt your use-cases and future purchases.
The images are as follows, with SHA256SUM checksums and GnuPG signature on the
13.3.0 release
page.
Amd64 GNOME
debian-live-13.3.0-amd64-libre-gnome.iso
Amd64 KDE
debian-live-13.3.0-amd64-libre-kde.iso
Amd64 XFCE
debian-live-13.3.0-amd64-libre-xfce.iso
Amd64 Standard
debian-live-13.3.0-amd64-libre-standard.iso
Amd64 Slim
debian-live-13.3.0-amd64-libre-slim.iso
Arm64 GNOME
debian-live-13.3.0-arm64-libre-gnome.iso
Arm64 KDE
debian-live-13.3.0-arm64-libre-kde.iso
Arm64 XFCE
debian-live-13.3.0-arm64-libre-xfce.iso
Arm64 Standard
debian-live-13.3.0-arm64-libre-standard.iso
Arm64 Slim
debian-live-13.3.0-arm64-libre-slim.iso
Curious how the images were made? Fear not, for the
Debian Libre Live project README
has documentation, the
run.sh script
is short and
the .gitlab-ci.yml CI/CD Pipeline definition file
brief.
Happy Libre OS hacking!
One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I’d like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I’ll summarize some pending work.
Debian Taco, or TacOS
, is a GitSecDevOps rebuild of
Debian GNU/Linux.
The Debian Taco project publishes rebuilt binary packages, package repository metadata (
InRelease,
Packages
, etc), container images, cloud images and live images.
All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian.
No servers are involved! Everything is built in GitLab pipelines and results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)
Goals
The goal of TacOS is to be bit-by-bit identical with official Debian GNU/Linux, and until that has been completed, publish
diffoscope
output with differences.
The idea is to further categorize all artifact differences into one of the following categories:
1) An obvious bug in Debian. For example, if a package does not
build reproducibly.
2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package.
3) Something else. This would be input for further research and consideration. This category also includes things where it isn’t obvious whether it is a bug in Debian or in TacOS. Known examples:
3A) Packages in TacOS are rebuilt from the latest available source code, not the (potentially) older source packages that were used to build the official Debian packages. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some
discussion about idempotent rebuilds.
Our packages are all built from source code, unless we have not yet managed to build something. In the latter situation, Debian Taco falls back and uses the official Debian artifact. This allows an incremental publication of Debian Taco that is still 100% complete without requiring that everything is rebuilt instantly. The goal is that everything should be rebuilt, and until that has been achieved, to publish a list of artifacts that we use verbatim from Debian.
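That fallback rule could be sketched as a tiny shell helper; everything here (function and variable names, file layout, package filenames) is hypothetical and not taken from the actual Debian Taco scripts:

```shell
# Hypothetical sketch of the incremental-publication rule: prefer our
# from-source rebuild, else fall back to the verbatim Debian artifact and
# record it on the "still to rebuild" list. All names are made up.
pick_artifact() {
    pkg=$1
    if [ -f "$REBUILT_DIR/$pkg" ]; then
        echo "$REBUILT_DIR/$pkg"
    else
        echo "$pkg" >> "$VERBATIM_LIST"
        echo "$DEBIAN_DIR/$pkg"
    fi
}

# demo with temporary directories
REBUILT_DIR=$(mktemp -d); DEBIAN_DIR=$(mktemp -d); VERBATIM_LIST=$(mktemp)
touch "$REBUILT_DIR/hello_2.10-3_amd64.deb"
pick_artifact hello_2.10-3_amd64.deb      # rebuilt copy wins
pick_artifact base-files_13.8_amd64.deb   # Debian's copy, and it gets listed
```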
Debian Taco Archive
The
Debian Taco Archive
project generates and publishes the package archive (
dists/tacos-trixie/InRelease,
dists/tacos-trixie/main/binary-amd64/Packages.gz,
pool/*
etc), similar to what is published at
The output of the Debian Taco Archive is available from
Debian Taco Container Images
The
Debian Taco Container Images
project provides container images of Debian Taco for
trixie,
forky,
and
sid
on the
amd64,
arm64,
ppc64el,
and
riscv64
architectures.
These images allow quick and simple interactive use of Debian Taco, and also make it easy to deploy it to container orchestration frameworks.
Debian Taco Cloud Images
The
Debian Taco Cloud Images
project provides cloud images of Debian Taco for
trixie,
forky,
and
sid
on the
amd64,
arm64,
ppc64el,
and
riscv64
architectures.
Launch and install Debian Taco for your cloud environment!
Debian Taco Live Images
The
Debian Taco Live Images
project provides live images of Debian Taco for
trixie,
forky,
and
sid
on the
amd64
and
arm64
architectures.
These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.
Debian Taco Build Images and Packages
Packages are built using
debdistbuild
, which was introduced in a
blog post about Build Debian in a GitLab Pipeline.
The first step is to prepare build images, which is done by the
Debian Taco Build Images
project. They are similar to the Debian Taco containers but have
build-essential
and debdistbuild installed on them.
Debdistbuild is launched in a per-architecture per-suite CI/CD project. Currently only
trixie-amd64
is available. That project has built some essential early packages like
base-files,
debian-archive-keyring
and
hostname
. They are stored in Git LFS backed by an S3 object storage. These packages were all built reproducibly. So this means Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming.
I have yet to launch a massive wide-scale package rebuild; some outstanding issues must be resolved first. I earlier
rebuilt around 7000 packages
from Trixie on amd64, so I know that the method easily scales.
Remaining work
Where are the diffoscope outputs and the list of package differences? That’s for another holiday! Clearly this is an important remaining work item.
Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to understand when new packages are added to Debian.
One goal was to build packages from the
tag2upload browse.dgit.debian.org
archive, before checking the Debian Archive. This ought to be really simple to implement, but other matters came first.
GitLab or Codeberg?
Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD Pipeline mechanism together with a Hetzner S3 object storage to serve packages. The scripts have only weak reliance on GitLab-specific principles, and were designed with the intention to support other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I’d like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via
Codeberg
as soon as I manage to deploy reliable Forgejo runners.
The important aspects required are:
1) Pipelines that can build and publish web sites similar to GitLab Pages. Codeberg has a pipeline mechanism. I’ve successfully used Codeberg Pages to publish the
OATH Toolkit homepage. Gluing this together seems feasible.
2) Container Registry. It seems
Forgejo supports a Container Registry
but I’ve not worked with it at Codeberg to understand if there are any limitations.
3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. This may be converted to a Pages mechanism, or possibly to Release Artifacts, if multi-GB artifacts are supported on other platforms.
I hope to continue this work and explain more details in a series of posts. Stay tuned!
Around a year ago I wrote about
Guix Container Images for GitLab CI/CD
and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of
Libtasn1 v4.20,
InetUtils v2.6,
Libidn2 v2.3.8,
Libidn v1.43,
SASL v2.2.2,
Guile-GnuTLS v5.0.1
, and
OATH Toolkit v2.6.13
. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the
verify-reproducible-artifacts
project, which I
wrote about earlier
. Guix’s time-travelling feature makes this sustainable to maintain, and hopefully it will continue to reproduce the exact same tarball artifacts for years to come.
During the last year, unfortunately
Guix was removed from Debian stable
. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot, and we don’t want to rely forever on old containers which (after the removal of Guix from Debian) could not be reproduced any more. Let this be a reminder of how user-empowering features such as
Guix time-travelling
are! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.
The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own and warranted a separate project. A more narrowly scoped project will hopefully make it easier to keep the images working. Now instead of
apt-get install guix
they use the official Guix
guix-install.sh
approach. Read more about that effort in
the announcement of Debian with Guix.
The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container, another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container, but whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N
. I had hoped to overcome this somehow (running a
guix pull
in newly generated images may work), but never finished this before Guix was removed from Debian.
So what could a better design look like?
For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after
reproducibility bugs were fixed
I was able to get reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and also about the implied dependency on
non-free software in Debian. I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, for example comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with Rocky Linux 9. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has bearing on artifacts. Using different architectures, such as
amd64
vs
arm64
also helps with deeper supply-chain concerns.
My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the
Guix on Trisquel/Debian
project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.
Do you see where the
debian-with-guix
and
guix-on-dpkg
projects are leading?
We are now ready to look at the modernized
Guix Container Images
project. The tags are the same as before:
registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash
The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Finally, the multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to Docker Hub.
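At its core, the reproduce gate is just a digest comparison between two independent builds; schematically (the gate function and digest values are illustrative, not the project's actual code):

```shell
# Schematic of the "reproduce" gate: publish only when two independently
# produced image digests agree bit-by-bit. Digest values are illustrative.
gate() {
    if [ "$1" = "$2" ]; then
        echo "reproducible: push"
    else
        echo "not reproducible: refuse" >&2
        return 1
    fi
}
gate sha256:aaaa sha256:aaaa   # prints: reproducible: push
gate sha256:aaaa sha256:bbbb || echo "second comparison refused, as expected"
```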
How would you use them? A small way to start the container is like this:
jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
guix 21ce6b3
repository URL: https://git.guix.gnu.org/guix.git
branch: master
commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2#
The need for
--entrypoint=/bin/sh
is because Guix’s
pack
command sets up the entry point differently from most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.
The need for
--privileged
is more problematic, but is
discussed
upstream. The above example works fine without it, but running anything more elaborate with
guix-daemon
installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.
cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello
This could be simplified, but we chose not to hard-code these steps in our containers, because some of them probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass
--disable-chroot
to
guix-daemon.
To use the containers to build something in a GitLab pipeline, here is an example snippet:
test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
    - cp -rL /gnu/store/*profile/etc/* /etc/
    - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
    - echo 'root:x:0:' > /etc/group
    - groupadd --system guixbuild
    - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
    - export HOME=/
    - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
    - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
    - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
    - guix describe
    - guix install libgpg-error
    - GUIX_PROFILE="//.guix-profile"
    - . "$GUIX_PROFILE/etc/profile"
  script:
    - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
    - tar xfa libksba-1.6.7.tar.bz2
    - cd libksba-1.6.7
    - ./configure
    - make V=1
    - make check VERBOSE=t V=1
More help is available on the project page for the
Guix Container Images project.
That’s it for tonight, folks, and remember: Happy Hacking!
Last week I published
Guix on Debian container images
that prepared for today’s announcement of
Guix on Trisquel/Ubuntu container images.
I have published images with reasonably modern
Guix
for
Trisquel
11 aramo, Trisquel 12 ecne,
Ubuntu
22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately
Trisquel arm64 containers aren’t available
yet, so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names are:
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
Or, if you prefer,
guix-on-dpkg on Docker Hub:
docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix
You may use them as follows. See the
guix-on-dpkg README
for how to start
guix-daemon
and install packages.
jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
guix 136fc8b
repository URL: https://gitlab.com/debdistutils/guix/mirror.git
branch: master
commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/#
You may now be asking yourself:
why?
Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and comparing the results to spot differences. Obviously.
I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since
libntlm 1.8.
Let’s walk through how to set up a CI/CD pipeline that builds a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning
Codeberg/Forgejo CI/CD
, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:
.guile-gnutls: &guile-gnutls
  before_script:
    - /root/.config/guix/current/bin/guix-daemon --version
    - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
    - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
    - type guix
    - guix --version
    - guix describe
    - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
    - time apt-get update
    - time apt-get install -y make git texinfo
    - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
    - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
    - cd guile-gnutls
    - git checkout v5.0.1
    - ./bootstrap
    - ./configure
    - make V=1
    - make V=1 check VERBOSE=t
    - make V=1 dist
  after_script:
    - mkdir -pv out/$CI_JOB_NAME_SLUG/src
    - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
    - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
      - out/**
This installs some packages, clones
guile-gnutls
(it could be any project, this is just an example), builds it, and returns tarball artifacts. The artifacts are the
git-archive
and
make dist
tarballs.
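For readers unfamiliar with the first kind: a git-archive tarball is produced directly from the repository, before any autotools bootstrapping. A hedged sketch of how such a “-src” tarball is typically made (the flags below are common git-archive usage, and the repo-v1 names are made up; the exact invocation guile-gnutls uses is not shown here):

```shell
# Sketch of producing a git-archive "-src" tarball from a throwaway repo.
set -eu
d=$(mktemp -d); cd "$d"
git init -q repo && cd repo
echo hello > README
git add README
git -c user.email=you@example.org -c user.name=you commit -qm init
# Archive the committed tree under a version prefix:
git archive --format=tar.gz --prefix=repo-v1/ -o repo-v1-src.tar.gz HEAD
tar tzf repo-v1-src.tar.gz   # lists repo-v1/ and repo-v1/README
```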
Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.
guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls
Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:
guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
    - cd out
    - sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
    - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
    - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
    - sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
    - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
    # Confirm modern git-archive tarball reproducibility
    - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
    # Confirm old git-archive (export-subst but long git describe) tarball reproducibility
    - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
    # Confirm 'make dist' generated tarball reproducibility
    - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
    - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
      - ./out/**
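The `uniq -c -w64` trick in the script relies on a SHA256 hex digest being exactly 64 characters long: after sorting, `uniq` compares only the first 64 characters of each line, so files are grouped by checksum regardless of the differing file names that follow the digest, and the leading count tells you how many jobs produced byte-identical output. A small stand-alone demonstration:

```shell
# A SHA256 hex digest is exactly 64 hex characters, so `uniq -c -w64`
# on sorted `sha256sum` output groups lines by checksum alone,
# ignoring the file names that follow the digest.
printf 'hello\n' > a.txt
cp a.txt b.txt            # b.txt is byte-identical to a.txt
printf 'world\n' > c.txt
sha256sum a.txt b.txt c.txt | sort | uniq -c -w64
# a.txt and b.txt collapse into one line with count 2; c.txt gets count 1.
# Filtering out the count-1 lines then keeps only checksums shared by
# several files, i.e. the artifacts that were reproduced.
```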
Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceed to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the
guix-on-dpkg pipeline
but here is the gist of it:
$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
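The silent `cmp` lines at the end are what actually gate the pipeline: `cmp` prints nothing and exits 0 when the two files are byte-for-byte identical, and exits non-zero at the first differing byte, which fails the CI job. A minimal illustration:

```shell
# cmp exits 0 silently for byte-identical files and non-zero otherwise,
# which makes it a natural pass/fail step in a CI script.
printf 'same\n' > f1
printf 'same\n' > f2
printf 'diff\n' > f3
cmp f1 f2 && echo "reproducible: files are identical"
cmp -s f1 f3 || echo "not reproducible: files differ"
```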
That’s it for today, but stay tuned for more updates on using Guix in containers, and remember:
Happy Hacking