Planet Debian
April 23, 2026
Dirk Eddelbuettel
dtts 0.1.4 on CRAN: Maintenance
Leonardo and I are happy to announce another maintenance release 0.1.4 of our dtts package which has been on CRAN for four years now.
dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) to the immense power of data.table while supporting the highest nanosecond resolution.
This release, not unlike yesterday’s release of nanotime, is driven by recent changes in the bit64 package which underlies it. Michael, who now maintains it, had sent in two PRs to prepare for these changes. I updated continuous integration, and switched to Authors@R, and that pretty much is the release. The short list of changes follows.
Changes in version 0.1.4 (2026-04-23)

Continuous integration has received some routine updates
Adapt align() column names with changes in 'data.table' (Michael Chirico in #20)
Narrow imports to functions used for packages 'bit64', 'data.table' and 'nanotime' (Michael Chirico in #21)
Courtesy of my CRANberries, there is also a diffstat report for this release. Questions, comments, and issue tickets can be brought to the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
23 April, 2026 06:58PM
Sergio Talens-Oliag
Developing a Git Worktree Helper with Copilot
Over the past few weeks I’ve been developing and using a personal command-line tool called gwt (Git Worktree) to manage Git repositories using worktrees. This article explains what the tool does, how it evolved, and how I used GitHub Copilot CLI to develop it (in fact, the idea of building the script was also to test the tool).
The Problem: Managing Multiple Branches
I was working on a project with multiple active branches, including orphans; the
regular branches are for fixes or features, while the orphans are used to keep
copies of remote documents or store processed versions of those documents.
The project also uses a special orphan branch that contains the scripts and the
CI/CD configuration to store and process the external documents (it is on a
separate branch to avoid mixing its operation with the main project code).
The plan is to trigger a pipeline against the special branch from remote projects to create or update the doc branch for it in our git repository, retrieving artifacts from the remote projects to get the files and put them on an orphan branch (initially I added new commits after each update, but I changed the system to use force pushes and keep only one commit, as the history is not really needed).
The original documents have to be changed, so, after ingesting them, we run a
script that modifies them and adds or updates another branch with the processed
version; the contents of that branch are used by the
main
branch build process
(there we use
git fetch
and
git archive
to retrieve its contents).
When working on the scripts to manage the orphan branches I discovered the worktree feature of git, a functionality that allows me to keep multiple branches checked out in parallel using a single .git folder, removing the need to use git switch and git stash when changing between branches (until now I’ve been a heavy user of those commands).
Reading about it I found that a lot of people use worktrees with the help of a
wrapper script to simplify the management. After looking at one or two posts
and the related scripts I decided to create my own using a specific directory
structure to simplify things.
That’s how I started to work on the gwt script; as I also wanted to test copilot, I decided to build it using its help (I have a pro license at work and wanted to play with the CLI version instead of the one integrated into an editor, as I didn’t want to learn a lot of new keyboard shortcuts).
The gwt Philosophy: Opinionated and Transparent

gwt enforces a simple, filesystem-visible model:

Exactly one bare repository named bare.git (treated as an implementation detail)
One worktree directory per branch where the directory name matches the branch name
Single responsibility: gwt doesn’t try to be a general git wrapper; it only handles operations that map cleanly to this layout
The repository structure looks like this:
my-repo/
+-- bare.git/ # the Git repository (internal)
+-- main/ # worktree for branch "main"
+-- feature/api/ # worktree for branch "feature/api"
+-- fix/docs/ # worktree for branch "fix/docs"
+-- orphan-history/ # worktree for the "orphan-history" branch
The tool follows five core design principles:
Explicit over clever
: Git commands are not hidden or reinterpreted
Transparent execution
: Every operation is printed before it happens
Safe, preview-first operations
: Destructive commands default to preview,
confirmation, then apply
Shell-agnostic core
: The script never changes the caller’s working
directory (shell wrappers handle that)
Opinionated but minimal
: Only commands that fit the layout model are
included
Core Commands

The script provides these essential commands:

gwt init — Clone a repository and set up the gwt layout
gwt convert — Convert an existing Git checkout to the gwt layout
gwt add [--orphan] — Create a new worktree (optionally orphaned)
gwt remove — Remove a worktree and unregister it (asks the user whether to remove the local branch too, useful when removing already merged branches)
gwt rename — Rename a branch AND its worktree directory
gwt list — List all worktrees
gwt default — Get or set the default branch
gwt current — Print the current worktree or branch name
Except for init and convert, all of the commands work inside a directory structure that follows the gwt layout; the tool looks for the bare.git folder to find the root folder of the structure.
As I don’t want to hide which commands are really used by the wrapper, all git and filesystem operations pass through a single run shell function that prints each command before executing it. This gives complete visibility into what the tool is doing.
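The run helper itself isn’t shown in the post; a minimal sketch of the pattern (my assumption of its shape, not the actual gwt code) could look like this:

```shell
# Hypothetical sketch of a transparent "run" helper, NOT the actual gwt code:
# print every command to stderr before executing it.
run() {
  printf '+ %s\n' "$*" >&2
  "$@"
}

# Example: every operation announces itself before running.
run echo "hello"
```

Printing to stderr keeps the trace separate from any command output the caller may want to capture.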
Also, destructive operations (remove, rename) default to preview mode:
$ gwt remove feature-old --dry-run
+ git -C bare.git branch -d feature-old
+ git -C bare.git worktree remove feature-old/
Apply these changes? [y/N]:
The user sees exactly what will happen, can verify it’s correct, and only then
confirm execution.
Incremental Development with Copilot

The gwt script has grown from 597 lines in its original version (git-wt) to 1,111 lines when writing the first draft of this post. This growth happened through incremental, test-driven development, with each feature being refined based on real usage patterns. What follows is a little history of the script evolution, written with the help of git log.
Initial version

First I wrote a design document and asked copilot to create the initial version of the git-wt script with the original core commands.

I started to use the tool with a remote repository (I made copies of the branches in some cases to avoid losing work) and fixed bugs (trivial ones with neovim, larger ones asking copilot to fix the issues for me, so I had less typing to do).
Note: as I used copilot I noticed that when you make manual changes it is important to tell the tool about them; otherwise it gets confused and sometimes tries to remove the manual changes.
First command update

One of the first commands I had to enhance was rename. As I normally use branches with / in their name and my tool checks out the worktrees using the branch name as the path inside the gwt root folder (i.e. a fix/rename branch creates the fix directory and checks out the branch inside the fix/rename folder), the rename command had to clean up the empty parent directories.

When renaming a worktree we move the folders and fix the references using the worktree repair command to make things work locally, but the rename also affects the remote branch reference; to avoid surprises the command unsets the remote branch reference so it can be pushed again using the new name (of course, the user is responsible for managing the old remote branch, as gwt can’t guess what it should do with it).
Integration with the shell

As I use zsh with the Powerlevel10k theme, I asked copilot to help me add visual elements to the prompt when working with gwt folders, something I would never have tried without help, as it would have required a lot of digging on my part to learn how to do it.
The initial version of the code was in an independent file that I sourced from my .zshrc file; it prints a segment on the right part of the prompt when we are inside a gwt folder (note that if the folder is a worktree we see the existing git integration text right before it, so we keep the previous behaviour and see that it is a gwt-friendly repo), and if we are on the root folder or the bare.git folder we see gwt or bare (I added the text because there are no git prompts on those folders).
I also asked copilot to create zsh autocompletion functions (I only use zsh, so I didn’t add autocompletion for other shells). The good thing here is that I wouldn’t have done that manually, as it would have required some reading to get it right, but the output of copilot worked and I can update things using it or manually if I need to.
One thing I was missing from the script was the possibility of changing the
working directory easily, so I wrote a
gwt
wrapper function for
zsh
that
intercepts commands that require shell cooperation (changing the working
directory) and delegates everything else to the core script.
Currently the function supports the following enhanced commands:

cd: change into a worktree, or the default one if no argument is given
convert: convert a checkout, then cd into the initial worktree
add [--orphan]: create a worktree, then cd into it on success
rename: rename a worktree, then cd into it if we were inside it
Note that the
cd
command will not work on other shells or if the user does not
load my wrapper, but the rest will still work without the working directory
changes.
Renaming the command

As I felt that git-wt was a long name I renamed the tool to gwt. I could have done it by hand, but using copilot I didn’t have to review all the files myself, and it did it right (note that I have it configured to always ask me before making changes, as it sometimes tries to do something I don’t want and I like to check its changes … as I have the files in git repos, I manually add the files when I like the status, and if the cli output is not clear I allow it to apply the changes and check the effects with git diff so I can validate or revert what was done).
The convert command

After playing with one repo I added the convert subcommand for migrating existing checkouts. It seemed a simple task at first, but it took multiple iterations to get it right, as I found multiple issues while testing (in fact I made copies of the existing checkouts to be able to re-test each update, as some of the iterations broke them).

The version of the function when this post was first edited had the following comment explaining what it does:
# ---------------------------------------------------------------------------
# convert - convert an existing checkout into the gwt layout
# ---------------------------------------------------------------------------
# Must be run from the parent directory of
# Steps:
# 1. Read branch from the checkout's HEAD
# 2. Rename
# 3. Create
# 4. Move
# 5. Fix fetch refspec (bare clone default maps refs directly, no remotes/)
# 6. Add a --no-checkout worktree so git wires up the metadata and
# creates
# 7. Move that .git file into the real working tree (
# 8. Remove the now-empty placeholder directory
# 9. Move the real working tree into place as
# 10. Reset the index to HEAD so git status is clean
# (--no-checkout leaves the index empty)
# 11. Create
# from the root without --git-dir
# The .git file ends up at the same absolute path git recorded in step 5,
# so no worktree repair is needed. Working tree files are never modified.
The .git link was added when I noticed that I could run commands that don’t need the checked-out files on the root of the gwt structure, which is handy sometimes (i.e. a git fetch, or a git log that shows the log of the branch marked as default).

After playing with commands that used the bare.git folder I updated the init and convert commands to keep the origin refs, ensuring that remote tracking works correctly.
Improving the add command

While playing with the tool on more repos I noticed that I also had to enhance the add command to better handle worktree creation, depending on my needs. Right now the tool supports the following use cases:

if the branch exists locally or on origin, it just checks it out.
if the branch does not exist, we create it using the given base branch or, if no base is given, the current worktree (if we are in the root folder or bare.git the command fails).
as I needed it for my project, I added a --orphan option to be able to create orphan branches directly.
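The decision logic above can be sketched as follows (a simplified assumption of how such an add could work, not the actual gwt implementation; the real tool also handles --orphan and the layout checks):

```shell
# Hypothetical sketch of the `gwt add` decision logic, NOT the actual code:
# check for a local branch, then a remote one, then create from a base.
add_worktree() {
  branch="$1" base="$2"
  if git -C bare.git show-ref --verify -q "refs/heads/$branch"; then
    # Branch exists locally: just check it out in a sibling directory.
    git -C bare.git worktree add "../$branch" "$branch"
  elif git -C bare.git show-ref --verify -q "refs/remotes/origin/$branch"; then
    # Branch exists on origin: create a local branch starting from it.
    git -C bare.git worktree add -b "$branch" "../$branch" "origin/$branch"
  else
    # New branch: create it from the given base (or HEAD if none given).
    git -C bare.git worktree add -b "$branch" "../$branch" "${base:-HEAD}"
  fi
}
```

The `../$branch` paths are relative to bare.git, so the worktrees land as its siblings, matching the layout shown earlier.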
Moving to a single file

Eventually I decided to make the tool self-contained; I removed the design document (I moved its content to comments at the top of the script and the details to comments on each function definition) and added a pair of commands to print the code to source for the p10k and zsh integration (autocompletion & functions), leaving everything in a single file.
Now my
.zshrc
file adds the following to source both things:
# After loading the p10k configuration
if type gwt >/dev/null 2>&1; then
source <(gwt p10k)
fi
[...]
# After loading autocompletion
if type gwt >/dev/null 2>&1; then
source <(gwt zsh)
fi
Versioning

As I modified the script I found it interesting to use CalVer-based versioning (the version variable has the format YYYY.mm.dd-r#), so I added a subcommand to show its value or bump it using the current date and computing the right revision number.
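The bump logic can be sketched like this (my reconstruction of the idea, not the actual gwt code): same-day bumps increment the revision counter, a new day resets it to 1.

```shell
# Hypothetical sketch of a CalVer bump for versions like 2026.04.23-r2,
# NOT the actual gwt implementation.
bump_version() {
  old="$1"
  today="$(date +%Y.%m.%d)"
  case "$old" in
    "$today"-r*) rev=$(( ${old##*-r} + 1 )) ;;  # same day: next revision
    *)           rev=1 ;;                       # new day: restart at r1
  esac
  printf '%s-r%s\n' "$today" "$rev"
}
```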
About the use of copilot

Although I’ve never been a fan of AI tools, I have to admit that the copilot CLI has been very useful for building the tool:
Rapid prototyping
: Each commit represented a small feature or fix that I
could implement, test immediately in my actual workflow, and iterate on based
on the result
Edge case handling
: Rather than trying to anticipate every scenario
upfront, I could ask Copilot how to handle edge cases as they appeared in real
usage
Script refinement
: Questions like "how do I clean up empty directories
after a rename" or "how do I detect if I’m inside a specific worktree" were
quickly answered with working code
Shell integration
: The Zsh wrapper and completion system grew from simple
prototypes to sophisticated features, with each iteration informed by how I
actually used the tool
For example, the
convert
command started as a simple rename operation, but
evolved to also create a
.git
symlink and intelligently handle various
migration scenarios—all because I used it repeatedly and refined the
implementation each time.
Self-Contained and Opinionated
gwt
is deliberately opinionated:
Zsh & Powerlevel10k Integration
: The tool includes built-in Zsh shell
integration, accessed via
source <(gwt zsh)
and supports adding a prompt
segment when using
p10k
, as described earlier.
Directory Structure
: The
bare.git
directory name is non-negotiable. This
is how
gwt
discovers the repository root from any subdirectory, and how the
tool knows whether a directory is a gwt repository. The simplicity of this
marker means the discovery mechanism is foolproof and requires no
configuration.
No Configuration Files
gwt
deliberately has no configuration. There are
no
.gwtrc
files or config directories. This makes it portable; the tool
works the same way everywhere, and repositories can be shared across systems
without synchronizing configuration.
From Script to System
What started as a small helper script for managing worktrees has become a
complete system:
Core script
gwt
): 1,111 lines of pure shell, no external dependencies
Shell integration
: Zsh functions and completions
Prompt integration
: Powerlevel10k segment
Documentation
: Built-in help and design philosophy documentation
The script is self-contained: everything needed for the tool to work is in a single file. This makes it trivial to update (just replace the script) or audit (no hidden dependencies).
Development with AI support
Developing
gwt
with
copilot
taught me some things:
Incremental refinement works well for small tools
: Each iteration informed
the next, resulting in a tool that handles real use cases elegantly
Transparency is a feature
: Making operations visible builds confidence and
is easier to debug
Opinionated tools can be powerful
: By constraining the problem space (one
bare repo, one worktree per branch), the solution becomes simpler and more
robust
Shell integration matters
: The same core commands are easier to use when
they can automatically change directories and provide completions
Real-world testing is essential
: I wouldn’t have discovered the need for
automatic directory cleanup or context-aware
cd
behavior without actually
using the tool daily
What was next?
The tool is stable and handles my daily workflow well, so my guess is that I will keep using it and fix issues if or when I find them, but I do not plan to include additional features unless I find a use case that justifies it (i.e. I never added support for some of the worktree subcommands, as it is easier to use the git versions if I ever need them).
What really happened

While editing this post I discovered that I needed to add another command to the tool and fixed a bug (see below). With those changes and the inclusion of a license and copyright notice (just in case I distribute it at some point), the script is now 1,217 lines long instead of the 1,111 it had when I started to write this entry.
Submodule Support
When I converted this blog repository to the
gwt
format and tried to preview
the post using
docker compose
, it failed because the worktree I was on didn’t
have the Git submodule initialized.
My blog theme is included in the repository as a submodule, and when I used gwt to check out different branches in worktrees, the submodule was not initialized in the new worktrees.
This led me to add a new internal function and a gwt submodule command to handle submodule initialization; the internal function is called from convert and add (when converting a repo or adding a worktree) and the public command is useful to update the submodules on existing branches.
Path Handling with Branch Names Containing Slashes

The second discovery was a bug in how the tool handled branch names containing slashes (e.g., feature/new-api, docs/user-guide). The worktree directories are created with the branch name as the path, so a branch like feature/new-api creates two nested folders (feature, with new-api inside it).
However, there was a mismatch between how the zsh wrapper function resolved worktree paths (initially it used shell parameter expansion, i.e. rel="${cwd#"$REPO_ROOT"/}") and how the core script calculated them, causing the cd command to fail or navigate to the wrong location when branch names contained slashes.
The fix involved ensuring consistent path resolution throughout the script and
wrapper (now it uses a function that processes the
git worktree list
output),
so that
gwt cd feature/new-api
correctly navigates to the worktree directory
regardless of path depth.
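A consistent resolver can be built on the porcelain output of git worktree list, which is stable and easy to parse; this is a sketch of the approach (my assumption of the shape, not the actual gwt function):

```shell
# Hypothetical sketch, NOT the actual gwt code: map a branch name to its
# worktree path by parsing `git worktree list --porcelain` read from stdin.
branch_to_path() {
  awk -v ref="refs/heads/$1" '
    /^worktree / { path = substr($0, 10) }        # remember the current path
    $1 == "branch" && $2 == ref { print path; exit }
  '
}

# Usage: git worktree list --porcelain | branch_to_path feature/new-api
```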
Conclusion
gwt
is a tool that solves a real problem: managing multiple Git branches
simultaneously without context-switching overhead.
I’m sure I’m going to keep using it for my projects, as it simplifies some workflows; I’ll still use switch and stash in some cases, but I like using multiple worktrees in parallel.
In fact I converted this blog repository checkout to the
gwt
format to work on
a separate branch as it felt the right approach even if I’m the only one using
the repo now, and it helped me improve the tool, as explained before.
Also, it was a good example of how to use AI tools like
copilot
to develop a
simple tool and keep it evolving while using it.
In any case, although I find copilot useful and it has saved me time, I don’t trust it to work without supervision; it worked well, but it got stuck at times and didn’t do things as I wanted on multiple occasions.
I also have an additional problem now … I’ve been reading about it, but I don’t really know which models to use or how the premium requests are computed (I’ve only been playing with it since last month, and I ran out of requests on the last day of the month on purpose, just to see what happened … it stops working … ;).
On my work machine I’ve been using a specific user account with a
GitHub
Copilot Business
subscription and I only used the
Anthropic Claude Sonnet 4.6
model and with my personal account I configured the
Anthropic Claude Haiku 4.5
model, but I’ve only used that to create the initial draft of this post (I ended
up rewriting most of it manually anyway) and to review the final version (I’m
not a native speaker and it was useful for finding typos and improving the style
in some parts).
I guess I’ll try other models with copilot in the future and check other command-line tools like aider or claude-code, but probably only using free accounts unless I get a paid account at work, as I have with GitHub Copilot.
To be fair, what I would love to be able to do is use local models (aider can do it), but the machines I have are not powerful enough. I tried to run a simple test and it felt really slow, but when I have the time or the need I’ll try again, just in case.
23 April, 2026 05:40PM
April 22, 2026
Dirk Eddelbuettel
nanotime 0.3.14 on CRAN: Upstream Maintenance
Another minor update 0.3.14 for our nanotime package is now on CRAN, and has been compiled for r2u (it will have to wait to be uploaded to Debian until the bit64 dependency has been updated there).
nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo, who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.
This release has been driven almost entirely by Michael, who took over as bit64 maintainer and has been making changes there that have an effect on us ‘downstream’. He reached out with a number of PRs which (following occasional refinement and smoothing) have all been integrated. There are no user-facing changes, or behavioural changes or enhancements, in this release. The NEWS snippet below has the fuller details.
Changes in version 0.3.14 (2026-04-22)

Tests were refactored to use NA_integer64_ (Michael Chirico in #149 and Dirk in #156)
nanoduration was updated for changes in bit64 4.8.0 (Michael Chirico in #152, fixing #151)
Use of as.integer64(keep.names=TRUE) has been refactored (Michael Chirico in #154, fixing #153)
In tests, nanotime is attached after bit64; this still needs a better fix (Michael Chirico in #155)
The package now has a hard dependency on the just released bit64 version 4.8.0 (or later)
Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc. at the GitHub repository; and all documentation is provided at the nanotime documentation site.
22 April, 2026 08:34PM
Vincent Bernat
CSS & vertical rhythm for text, images, and tables
Vertical rhythm aligns lines to a consistent spacing cadence down the page. It
creates a predictable flow for the eye to follow. Thanks to the
rlh
CSS unit,
vertical rhythm is now easier to implement for text.
But illustrations
and tables can disrupt the layout. The amateur typographer in me wants to follow
Bringhurst’s wisdom:
Headings, subheads, block quotations, footnotes, illustrations, captions and
other intrusions into the text create syncopations and variations against the
base rhythm of regularly leaded lines. These variations can and should add
life to the page, but the main text should also return after each variation
precisely on beat and in phase.
Robert Bringhurst
The Elements of Typographic Style
Text
Responsive images
Tables
Text
Three factors govern vertical rhythm: font size, line height, and margin or padding. Let’s set our baseline with an 18-pixel font and a 1.5 line height:

html {
  font-size: 112.5%;
  line-height: 1.5;
}
h1, h2, h3, h4 {
  font-size: 100%;
}
html, body, h1, h2, h3, h4,
blockquote, dl, dt, dd, ol, ul, li {
  margin: 0;
  padding: 0;
}
CSS Values and Units Module Level 4 defines the rlh unit, equal to the computed line height of the root element. All browsers have supported it since 2023. Use it to insert vertical spaces or to fix the line height when altering font size:

h1, h2, h3, h4 {
  margin-top: 1rlh;
  margin-bottom: 1rlh;
}
h1 {
  font-size: 2.4rem;
  line-height: 2rlh;
}
h2 {
  font-size: 1.5rem;
  line-height: 1rlh;
}
h3 {
  font-size: 1.2rem;
  line-height: 1rlh;
}
blockquote, pre {
  margin-top: 1rlh;
}
aside {
  font-size: 0.875rem;
  line-height: 1rlh;
}
We can check the result by overlaying a grid on the content:

Using the CSS rlh unit to set vertical space works well for text. You can display the grid using a Ctrl+Shift shortcut.

If a child element uses a font with taller intrinsic metrics, it may stretch the line’s box beyond the configured line height. A workaround is to reduce the line height to 1. The glyphs overflow but don’t push the line taller.

code, kbd {
  line-height: 1;
}
Responsive images

Responsive images are difficult to align on the grid because we don’t know their height. CSS Rhythmic Sizing Module Level 1 introduces the block-step property to adjust the height of an element to a multiple of a step unit. But most browsers don’t support it yet.

With JavaScript, we can add padding around the image so it does not disturb the vertical rhythm:

const targets = document.querySelectorAll(".lf-media-outer");
const adjust = (el, height) => {
  const rlh = parseFloat(getComputedStyle(document.documentElement).lineHeight);
  const padding = (Math.ceil(height / rlh) * rlh - height) / 2;
  el.style.padding = `${padding}px 0`;
};
targets.forEach((el) => adjust(el, el.clientHeight));
The image is snapped to the grid thanks to the additional padding computed with JavaScript. 216 is divisible by 27, our line height in this example.
As the image is responsive, its height can change. We need to wrap a resize observer around the adjust() function:

const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const height = entry.contentBoxSize[0].blockSize;
    adjust(entry.target, height);
  }
});
for (const target of targets) {
  ro.observe(target);
}
Tables

Table cells could set 1rlh as their height but they would feel constricted. Using 2rlh wastes too much space. Instead, we use incremental leading: we align one in every five lines.

table {
  border-spacing: 0px;
  border-collapse: separate;
}
th {
  padding: 0.4rlh 0.5em;
}
td {
  padding: 0.2rlh 0.5em;
}
To align the elements after the table, we need to add some padding. We can either reuse the JavaScript code from images or use a few lines of CSS that count the regular rows and compute the missing vertical padding:

table:has(tbody tr:nth-child(5n):last-child) {
  padding-bottom: 0.2rlh;
}
table:has(tbody tr:nth-child(5n+1):last-child) {
  padding-bottom: 0.8rlh;
}
table:has(tbody tr:nth-child(5n+2):last-child) {
  padding-bottom: 0.4rlh;
}
table:has(tbody tr:nth-child(5n+3):last-child) {
  padding-bottom: 0;
}
table:has(tbody tr:nth-child(5n+4):last-child) {
  padding-bottom: 0.6rlh;
}
A header cell has twice the padding of a regular cell. With two regular rows, the total padding is 2×2×0.2 + 2×0.4 = 1.6rlh. We need to add 0.4rlh to reach 2rlh of extra vertical padding across the table.
One line out of five is aligned to the grid. Additional padding is added after the table to not break the vertical rhythm. 405 is divisible by 27, our line height in this example.
None of this is necessary. But once you start looking, you can’t unsee it. Until
browsers implement
CSS Rhythmic Sizing
, a
bit of CSS wizardry and a touch of JavaScript is enough to pull it off. The main
text now returns after each intrusion “precisely on beat and in phase.” 🎼
See “Vertical rhythm using CSS lh and rlh units” by Paweł Grzybek.
For broader compatibility, you can replace 2rlh with calc(var(--line-height) * 2rem) and set the --line-height custom property in the :root pseudo-class. I wrote a simple PostCSS plugin for this purpose.
It would have been nicer to compute the line height with calc(round(up, calc(2.4rem / 1rlh), 0) * 1rlh). Unfortunately, typed arithmetic is not supported by Firefox yet. Moreover, browsers have supported round() only since 2024. Instead, I coded a PostCSS plugin for this as well.
The following CSS code defines a grid tracking the line height:

body::after {
  content: "";
  position: fixed;
  inset: 0;
  z-index: 9999;
  background: linear-gradient(180deg, #c8e1ff99 1px, transparent 1px);
  background-size: 20px 1rlh;
  pointer-events: none;
}
See “Deep dive CSS: font metrics, line-height and vertical-align” by Vincent De Oliveira.
22 April, 2026 07:48PM
by Vincent Bernat
April 21, 2026
Dirk Eddelbuettel
RcppArmadillo 15.2.6-1 on CRAN: Several Updates
Armadillo
is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments.
RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1263 other packages on CRAN, downloaded 45.7 million times (per the partial logs from the cloud mirrors of CRAN); the CSDA paper (preprint / vignette) by Conrad and myself has been cited 683 times according to Google Scholar.
This version updates to the 15.2.5 and 15.2.6 upstream Armadillo releases from, respectively, five and two days ago. The package has already been updated for Debian, and built for r2u. When we ran the reverse-dependency check for 15.2.5 at the end of last week, one package failed. I got in touch with the authors, filed an issue, poked some more, isolated the one line that caused an example to fail … and right then 15.2.6 came out fixing just that. It was after all an upstream issue. We used to run these checks before Conrad made a release; he now skips this and hence needed a quick follow-up release. It can happen.
The other big change is that this R package release phases out the ‘dual support’ for both C++14 or newer (as in current Armadillo) along with a C++11 fallback for more slowly updating packages. I am happy to say that after over eight months of this managed transition (during which CRAN expelled some laggard packages that were not moving on from C++11) we are now at all packages using C++14 or newer, which is nice. And I will take this as an opportunity to stress that one can in fact manage a disruptive API change this way, as we just demonstrated. Sadly, R Core does not seem to have gotten that message, and the rollout of this package was also still a little delayed because of the commotion created by the last-minute API changes preceding the R 4.6.0 release later this week.
Smaller changes in the package are a switch in pdf vignette
production to the
Rcpp::asis()
driver, and a
higher-precision computation in
rmultinom()
(matching a
change made in R-devel during last week in its use of Kahan summation).
All detailed changes since the last CRAN release follow.
Changes in
RcppArmadillo version 15.2.6-1 (2026-04-20)
Upgraded to Armadillo release 15.2.6 (Medium Roast Deluxe)
Ensure internally computed tolerances are not
NaN
The
rmultinom
function now deploys 'Kahan summation', as R-devel
does.
Changes
in RcppArmadillo version 15.2.5-1 [github-only] (2026-04-18)
Upgraded to Armadillo release 15.2.5 (Medium Roast Deluxe)
Fix for handling NaN elements in
.is_zero()
Fix for handling NaN in tolerance and conformance checks
Faster handling of diagonal views and submatrices with one
row
Sunset the C++11 fallback of including Armadillo 14.6.3 (
#504
closing
#503)
The vignettes have refreshed bibliographies, and are now built
using the
Rcpp::asis
vignette builder (
#506)
One
rmultinom
test is skipped under R-devel which
has switched to a higher-precision calculation
Courtesy of my
CRANberries
, there
is a
diffstat
report
relative to the previous release. More detailed information is on
the
RcppArmadillo
page
. Questions, comments etc should go to the
rcpp-devel
mailing list
off the
Rcpp R-Forge
page.
This post by
Dirk
Eddelbuettel
originated on his
Thinking inside the box
blog. If you like this or other open-source work I do, you can
sponsor me at
GitHub
. You can also sponsor my
Tour
de Shore 2026 ride in support of the Maywood Fine Arts Center
21 April, 2026 11:20PM
Mike Gabriel
Join us at Lomiri CodeFest on May 16-17 & Fre(i)e Software GmbH is hiring more Lomiri Developers
Lomiri Codefest in Tilburg NL (May 16-17 2026)
Just a quick invitation to an in-person event in Tilburg, the Netherlands.
All people interested in the Lomiri Operating Environment are invited to join us at the Lomiri Codefest [codefest] taking place on May 16-17 (participation is free of charge).
We are hiring Lomiri developers
As another side note, we still have budget (until 07/2027) for 2-3 additional Lomiri developers (depending on each dev's weekly availability). The details of my previous post [hiringdetails] +/- still apply. One more limitation / strength: you need real coding skills to apply for the open positions; AI-generated contributions will not be accepted for the tasks at hand.
If you are interested and a skilled FLOSS developer (you need previous OSS contributions as references) and available for at least 10 hrs / week, please get in touch [fsgmbh].
References
[codefest]
[hiringdetails]
[fsgmbh]
21 April, 2026 05:35PM
by sunweaver
Sergio Cipriano
How to view the Debian Upload Queue
How to view the Debian
Upload Queue
Some people may not know this, but the Debian Upload Queue is public
and very easy to access:
$ curl ftp://ftp.upload.debian.org/pub/UploadQueue/
drwxr-sr-x 18 1518 1281 4096 Jun 26 2019 DELAYED
-rw-r--r-- 1 1518 1281 3442 Jul 14 2025 README
-rw-r----- 1 117 1281 3052 Apr 20 21:32 neovim-tokyonight_4.14.1-1.debian.tar.xz
-rw-r----- 1 117 1281 2119 Apr 20 21:32 neovim-tokyonight_4.14.1-1.dsc
-rw-r----- 1 117 1281 5533 Apr 20 21:32 neovim-tokyonight_4.14.1-1_amd64.buildinfo
-rw-r----- 1 117 1281 2637 Apr 20 21:32 neovim-tokyonight_4.14.1-1_source.changes
-rw-r----- 1 117 1281 197584 Apr 20 21:32 neovim-tokyonight_4.14.1.orig.tar.gz
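Since the listing is plain FTP output, it is easy to post-process. A small sketch — using a captured copy of the listing shown above so the pipeline itself is visible — that pulls the source package names out of the `.changes` entries:

```shell
# Extract source package names from an UploadQueue listing.
# With network access you would feed this from:
#   curl -s ftp://ftp.upload.debian.org/pub/UploadQueue/
listing='-rw-r--r-- 1 1518 1281   3442 Jul 14  2025 README
-rw-r----- 1  117 1281   2637 Apr 20 21:32 neovim-tokyonight_4.14.1-1_source.changes'

printf '%s\n' "$listing" |
  awk '$NF ~ /\.changes$/ { split($NF, a, "_"); print a[1] }'
# prints: neovim-tokyonight
```

The last field of each line is the filename; the package name is everything before the first underscore, per Debian's source package naming.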
21 April, 2026 03:16PM
Russell Coker
More About Ebook Readers in Debian
FBReader
After
my previous blog post about eBook readers in Debian [1]
a reader recommended FBReader. I tried it and it’s now my favourite reader. It works nicely on laptop and phone and takes significantly less RAM than Calibre or Arianna (especially important for phones). While the problems with my FLX1s not displaying text with Calibre or Arianna might be the fault of something on the FLX1s side, those problems just don’t happen with FBReader.
FBReader’s upstream has apparently now got a proprietary version, but we still have FOSS code to use in Debian. It would be nice if someone updated it to store the reading location using WebDAV and/or a local file that can be copied with the NextCloud client or similar. Currently there is code to store the reading location in the Google cloud, which I don’t want to use. It’s not THAT difficult to see what chapter you are at on one device and just skip to that part on another, but it is an annoyance.
One thing I really like about FBReader is that you can run it with an epub file on the command line and it just opens it, and when it’s been closed you can just open it again to the same spot in the same file. I don’t want a “library” to view a book list, I just want to go back to what I was last reading in a hurry. Calibre might be better for some uses, for example I can imagine someone in the publishing industry with a collection of thousands of epub files finding that Calibre works better for them. But for the typical person who just wants to read one book and keep reading it until they finish it, FBReader seems clearly better. The GUI is a little unusual, but it’s not at all confusing and it works really well on mobile.
Okular
I tried Okular (the KDE viewer for PDF files etc) which displays epub files if you have “okular-extra-backends” installed, but it appears not to display books with the background color set to black. I would appreciate it if someone who has read some public domain or CC-licensed epub files can recommend ones with a black background that I could use for testing, as I can’t file a Debian bug report without sample data to reproduce the bug. I decided not to use it for actual book reading as FBReader is far better for my use, taking less RAM and being well optimised for mobile use.
Foliate
Foliate supports specifying a book on the command line, which is nice. But it takes more memory than FBReader, which is probably mostly due to using WebKit to display things. The output was in 2 columns on my laptop in small text, which is probably configurable, but I didn’t proceed with it. I determined that it doesn’t compare with FBReader for my use. It’s written in JavaScript, which may be a positive feature for some people.
Koodo
I had a brief test of Koodo which isn’t in Debian.
Here is the Koodo Reader Github [2]
. I installed the .deb that they created, it installs files to “/opt/Koodo Reader/” (yes that’s a space in the directory name) and appears to have Chromium as part of the runtime. I didn’t go past that even though it appears to have a decent feature set. It is licensed under version 3 of the AGPL so is suitable for Debian packaging if someone wants to do it.
Thorium
I saw the
Thorium reader on Github [3]
which looks promising; it’s under the BSD 3-clause license, so it is suitable for Debian packaging. The
EDR Lab seems like a good project for advancing electronic document use [4]
and it would be good to have their stuff in Debian.
For the moment I’m happy using FBReader.
[1]
[2]
[3]
[4]
21 April, 2026 09:26AM
by etbe
Ravi Dwivedi
LibreOffice Conference Budapest 2025
In September 2025, I attended the
LibreOffice Conference
in Budapest, Hungary, on the 4th and the 5th, and a community meeting on the 3rd. Thanks to The Document Foundation (TDF) for sponsoring my travel and accommodation costs. The conference venue was Faculty of Informatics, Eötvös Loránd University (ELTE).
The conference was planned to be held from the 4th to the 6th, but the program for the 6th of September had to be canceled due to the venue being unavailable because of a marathon in Budapest. So, all the talks got squeezed into just two days, making the schedule a bit hectic.
The TDF had booked my room at the Corvin Hotel. It was a double bedroom with a window. The breakfast was included in the hotel booking. The hotel was within walking distance of the conference venue. One could also take a tram from the hotel to reach the venue.
A shot of my room. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A tram in Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
3rd of September
On the 3rd of September, we had a community meeting at the above-mentioned venue. I walked with my friend Dione to the venue. Upon reaching there, I noticed that the university had no boundaries and gates. This reminded me of the previous year’s conference venue in Luxembourg, which also had no boundaries or gates.
In contrast, Indian universities and institutes typically have walls and gates serving as boundaries to separate them from the rest of the city. Many of these institutes also have security guards at the entrance, who may ask attendees to present proof of admission before allowing them inside. I was surprised to find that institutes in Europe, like the one where the conference was held, did not have such boundaries.
The building where the conference was held was red, which happened to be the same color as the building for the previous year’s conference venue. I remember joking with Dione that the criteria for the conference venue might have been the color of the building.
The red building in the picture served as the conference venue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
During the community meeting, we shared ideas on how to spread the word about LibreOffice. The meeting lasted for a couple of hours.
After the community meeting, we went to the hotel for dinner sponsored by the TDF.
These
Esterházy
cake bites were really yummy. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Raspberry Currant cake slices. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
4th of September
On the first day of the conference, attendees were given swag bags containing a pad, sticky notes, a pen, a conference T-shirt, and a bottle.
Conference swag. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The talks started early in the morning with Eliane Domingos, Chairperson of TDF’s Board of Directors, giving the inauguration talk. As always, I found Italo Vignoli’s talk on the importance of document freedom interesting.
During the snack break, I noticed that there were three types of milk available for coffee: cow’s milk, lactose-free milk, and almond milk. Almond milk is rare in India, though I have managed to get it there; lactose-free milk I have never seen in India.
Since I run fundraisers in my projects, such as Prav, I could relate to Lothar K. Becker’s talk. He discussed the issue that certain implementations in LibreOffice require a budget that is too large for any single interested entity to fund independently. Furthermore, The Document Foundation (TDF) cannot legally receive funds from government entities. Therefore, there is no organization or entity to pool resources from all the interested entities to finance the implementation.
Lothar giving his presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Another talk was by the Austrian Armed Forces on their migration to LibreOffice. I wanted to know why they migrated, and I found out that they did it for their digital sovereignty, and not for saving on the license costs. Another point presented in the talk was that LibreOffice is available on all the operating systems, while the Microsoft Office suite is not that widely available. The migration was systematic and was performed over a few years. They started working on it in 2021, and the migration was finished recently. In addition, it also required training their staff in using LibreOffice.
Presentation on migration to LibreOffice by Austrian Armed Forces. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The lunch was inside the university canteen. We were provided lunch coupons by the TDF. I got a vegan coupon with 4000 Ft written on it, which meant I could take lunch for up to 4000 Hungarian forints.
My lunch ticket for the conference. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The lunch I had on the first day. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
In the evening, it was my turn to present. I had finished preparing my slides ten days before my talk. I also got my slides reviewed by friends.
My talk finished in 20 minutes, though I was given a 30-minute slot. This helped us catch up on the schedule. Furthermore, I made my talk interactive by asking questions and making sure that the audience was not asleep. During my talk, my friend Dione took pictures of me with my camera.
My talk was on how free software projects could give users a say in freedom to modify the software. I illustrated this using the
Prav project
that I am a part of.
After the talks were over, we were treated to a conference dinner at Trofea Grill. It had a great selection of desserts, which let me sample several Hungarian ones. The sponge cake was especially good.
Desserts at Trofea Grill. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
5th of September
The next day—the 5th of September—I went with Dione to the venue early in the morning, as her talk was the first one of the day. Her talk was titled Managing Tasks with Nextcloud Deck. Later that day, I also attended a talk on Collabora. At lunch, I found the egg white salad quite tasty.
Dione giving her presentation. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Egg white salad. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
After the lunch break, we had the conference group photo. I had a Nikon camera, which we used to take the group photo. I requested a university student to take our group photo and also taught her how to operate the camera.
Group photo
By the evening, the conference had ended, after which we went to a pub, which was again sponsored by TDF. I had beer, but that one really tasted bad, so I couldn’t finish it. The only vegetarian option was a goat cheese burger, which my friend Manish and I opted for. The burger tasted awful. Apparently, I don’t like goat cheese.
The next day I went sightseeing with Dione in Budapest. Stay tuned for our adventures in Budapest!
Credits: Thanks to Dione and Richard for proofreading.
21 April, 2026 03:54AM
April 20, 2026
Bits from Debian
Debian Project Leader election 2026 is over, Sruthi Chandran elected!
The voting period and tally of votes for the Debian Project Leader election
has just concluded, and the winner is Sruthi Chandran. Congratulations!
347 out of 1,039 Developers voted using the
Condorcet method
More information about the results of the voting is available on the
Debian Project Leader Elections 2026
page.
Many thanks to Sruthi Chandran for her campaign, to our Developers for their
votes, and to Andreas Tille for his service as DPL over the past two years!
The new term for the project leader will start on April 21, 2026 and expire
on April 20, 2027.
20 April, 2026 05:00PM
by Jean-Pierre Giraud
Sune Vuorela
Kookbook 0.3.0 released
I recently released version 0.3.0 of my recipe manager application Kookbook – find it in git in
KDE Invent
or as released tarballs in https://download.kde.org/stable/kookbook/
Changes since last time are more or less “minor bugfixes and a Qt6 port” – nothing particularly noteworthy unless you aim to get rid of Qt5 on your system.
so what is kookbook?
It is a simple recipe viewer that works with semi-structured markdown. More details can be seen in the quite old
0.1.0 announcement
At some point I should do a 10-recipe example collection, but my personal collection is in Danish, so I’m not sure it would be useful. If someone donates a handful of pre-formatted recipes, I will happily announce it.
20 April, 2026 03:01PM
by Sune Vuorela
Russ Allbery
Review: Surface Detail
Review:
Surface Detail
, by Iain M. Banks
Publisher:
Orbit
Copyright:
October 2010
Printing:
May 2011
ISBN:
0-316-12341-2
Format:
Trade paperback
Pages:
627
Surface Detail
is the ninth novel in Banks's Culture science
fiction (literary space opera?) series. As with most of the Culture
novels, it can be read in any order, although this isn't the best starting
point. There is an Easter egg reference to
Use of Weapons
that would be easier to notice if you have read
that book recently, but which is not that important to the story.
Lededje Y'breq is an Indented Intagliate from the Sichultian Enablement.
Her body is patterned from her skin down to her bones, covered with
elaborate markings similar to tattoos that extend to her internal organs.
As an intagliate, she is someone's property. In her case, she is the
property of Joller Veppers, the richest man in the Enablement and her
father's former business partner. Intagliates are a tradition of great
cultural pride in the Enablement. They are a living representation of the
seriousness with which debts and honor are taken, up to and including
one's not-yet-born children becoming the property of one's creditor. Such
children are decorated as living works of art of the highest skill and
technical sophistication; after all, the Enablement are not barbarians.
As the story opens, Lededje is attempting, not for the first time, to
escape. This attempt is successful in an unexpected way.
Prin and Chay are Pavulean researchers and academics who, as this story
opens, are in Hell. They are not dead; they have infiltrated the Hell that
Pavuleans are shown in order to scare them into proper behavior, to prove
that it is not an illusion and their society does indeed torture people in
an afterlife, in more awful ways than people dare imagine. They have
reached the portal through which temporary visitors exit, hoping to escape
with firm evidence of the existence and horrors of the Pavulean afterlife.
They will not be entirely successful.
Yime Nsokyi is a Culture agent for Quietus, the part of Contact that
concerns itself with the dead. Many advanced societies throughout the
galaxy have invented and reinvented the ability to digitize a mind and
then run it in a virtual environment. Once a society can capture the minds
of every person in that society from that point forward, it faces the
question of whether to do so and, if it does, what to do with those minds.
More specifically, it faces the moral question of whether to punish the
minds of people who were horrible in life. It faces the question of
whether to create Hell.
Vatueil is a soldier in a contestation, a limited and carefully monitored
virtual war. The purpose of that war game is to, once and for all, resolve
the question of whether civilizations should be allowed to create Hells.
Some civilizations consider them integral to their religion or
self-conception. Others consider them morally abhorrent, and that conflict
was in danger of spilling over into war in the Real. Hence the War in
Heaven: Both sides committed to fight in a virtual space under specific
and structured rules, and the winner decides the fate of the galaxy's
Hells. Vatueil is fighting for the anti-Hell side. The anti-Hell side is
losing.
There are very few authors who were better at big-idea science fiction
than Iain M. Banks. I've been reading
a few
books about AI ships
and remembered that I had two unread Culture novels
that I was saving. It felt like a good time to lose myself in something
sprawling.
Surface Detail
does sprawl. Even by Banks's standards, there was an
impressive amount of infodumping in this book. Banks always has huge and
lovingly described set pieces, and this book is no exception, but there
are also paragraphs and pages of background and cultural musings and
galactic politics. We are introduced to not one but three new Contact
divisions; as well as the already-mentioned Quietus, there is Numina,
which concerns itself with the races that have sublimed (transcended), and
Restoria, which deals with hegemonizing swarms (grey goo nanotech,
paperclip maximizers
, and their equivalents).
Infodumping is both a feature and a bane of big-idea science fiction, and
it helps to be in the right mood. It also helps if the info being dumped
is interesting, and this is where Banks shines. This is a huge, sprawling
book, but it deals with some huge, sprawling questions and it has
interesting and non-reductive thoughts about them. The problems posed by
the plot come with history, failed solutions, multi-sided political
disputes, strategies and tactics of varying morality and efficacy, and an
effort to wrestle with the irreducible complexity of trying to resolve
political and ethical disagreements in a universe full of profound
disagreements and moral systems that one cannot simply steamroll.
It also helps that the characters are interesting, even when they're not
likable.
Surface Detail
has one fully hissable villain (Veppers) as
a viewpoint character, but even Veppers is interesting in a "let me check
the publication date to see if Banks was aware of Peter Thiel" sort of
way. The Culture ships, of which there are several in this story, tend
towards a gently sarcastic kindness that I find utterly charming. Lededje
provides the compelling motive force of someone who has no involvement in
the broader philosophical questions and instead intends to resolve one
specific problem through lethal violence. Vatueil and Yime were a bit
bland in personality, more exposition generators than characters I warmed
to, but their roles and therefore the surrounding exposition were
fascinating enough that I still enjoyed their sections.
I'm sure this is not an original observation, but I was struck reading
this book in the first half of 2026 that the Culture functions as an
implementation of what the United States likes to think it is but has
never been. It has a strong sense of shared ethics and moral principles,
it tries to export them to the rest of the galaxy through example,
persuasion, and careful meddling, but it tries to follow some combination
of pragmatic and moral rules while doing so, partly to avoid a backlash
and partly to avoid becoming its own sort of hegemonizing swarm. That is a
powerfully attractive vision of how to be an advanced civilization, and
the fact that every hegemon that has claimed that mantle has behaved
appallingly just makes it more intriguing as a fictional concept. In this
book, like in many Culture books, the Culture is painfully aware of the
failure modes of meddling, and the story slowly reveals the effort the
Culture put into staying just on a defensible side of their own moral
lines. This is, in a sense, a Prime Directive story, but with a level of
hard-nosed pragmatism and political sophistication that the endless
Star Trek
Prime Directive episodes never reach.
Surface Detail
does tend to sprawl, and I'm not sure Banks pulled
together all the pieces of the plot. For example, if there was a point to
the subplot involving the Unfallen Bulbitian, it was lost on me. (There is
always a possibility with Banks that I wasn't paying close enough
attention.) But the descriptions are so elaborate and the sense of
politics and history is so deep that I was never bored, even when
following a plot thread that meandered off into apparent irrelevance. The
main plot line comes to a satisfying conclusion that may be even more
biting social commentary today than it was in 2010.
A large part of the plot does involve Hell, so a warning for those who
haven't read much Banks: He adores elaborate descriptions of body horror
and physical torture. The sections involving Prin and Chay are rather
grim and horrific, probably a bit worse than Dante's
Inferno
. I
have a low tolerance for horror and I was able to read past and around the
worst bits, but be warned that Banks indulges his love for the painfully
grotesque quite a bit.
This was great, and exactly what I was hoping for when I picked it up.
It's not the strongest Culture novel (for me, that's either
The Player of Games
or
Excession
), but it's one of the better
ones. Highly recommended, although if you're new to the Culture, I would
start with one of the earlier books that provide a more gradual
introduction to the Culture and Special Circumstances.
Followed, in the somewhat disconnected Culture series sense, by
The
Hydrogen Sonata
Content warnings: Rape (largely off-screen), graphic violence, lots of
Bosch-style grotesque torture, and a lot of Veppers being a thoroughly
awful human being as a viewpoint character.
Rating: 8 out of 10
20 April, 2026 04:26AM
April 19, 2026
Review: Collision Course
Review:
Collision Course
, by Michelle Diener
Series:
Class 5 #6
Publisher:
Eclipse
Copyright:
November 2024
ISBN:
1-7637844-0-1
Format:
Kindle
Pages:
289
Collision Course
is the sixth novel in the Class 5 science fiction
series and the first that doesn't use the
Dark X
naming convention.
There are lots of spoilers in this story for the earlier books, but you
don't have to remember all the details of previous events. Like the
novella,
Dark Ambitions
, this novel
returns to Rose, Sazo, and Dav instead of introducing another Earth woman
and Class 5 ship.
In
Dark Class
, Ellie discovered an
interesting artifact of a previously-unknown space-faring civilization.
Rose, Sazo, and Dav are on their way to make first contact when, during a
routine shuttle flight between the Class 5 and Dav's Grih military ship,
Rose is abducted. The aliens they came to contact have an aggressive,
leverage-based negotiating strategy. They're also in the middle of a
complicated war with more sides than are readily apparent.
What I liked most about
Dark Horse
, the
first book of this series and our introduction to Rose, was the revealed
ethical system and a tense plot that hinged primarily on establishing
mutual trust when there were excellent reasons for the characters to not
trust each other. As the series has continued, I think the plots have
become more complicated but the ethical dilemmas and revealing moments of
culture shock have become less common. That is certainly true of
Collision Course
; this is science fiction as thriller, with a
complex factional conflict, a lot of events, more plot reversals than the
earlier books, but also less ethics and philosophy.
I'm not sure if this is a complaint. I kind of miss the ethics and
philosophy, but Diener also hasn't had much new to say for the past few
books. The plot of
Collision Course
is quite satisfyingly twisty
for a popcorn-style science fiction series. I was kept guessing about the
merits of some of the factions quite late into the book, although
admittedly I was in the mood for light entertainment and was not trying
too hard to figure out where the book was going. I did read nearly the
entire book in one sitting and stayed up until 2am to finish it, which is
a solid indication that something Diener was doing worked.
I do have quibbles, though. One is that the ending is a bit unsatisfying.
Like Sazo, I was getting quite annoyed at the people capturing (and
recapturing) Rose and would have enjoyed somewhat more decisive
consequences. Also, and here I have to be vague to avoid spoilers, I was
expecting a bit more of a redemption arc for one of the players in the
multi-sided conflict. The ending I did get was believable but rather sad,
and I wish Diener had either chosen a different outcome (this is light
happily-ever-after science fiction, after all) or wrestled more directly
with the implications. There were a bit too many "wait, one more thing"
ending reversals and not quite enough emotional payoff for me.
The other quibble is that
Collision Course
was a bit too damsel in
distress for this series. Rose is pregnant, which Diener uses throughout
the book as a way to raise the stakes of the plot and also make Rose more
annoyed but also less capable than she was in her earlier novel. Both Sazo
and Dav are in full heroic rescue mode, and while Diener still ensures
Rose is primarily responsible for her own fate, there is some "military
men attempt to protect the vulnerable woman" here. One of the things I
like about this series is that it does not use that plot, so while the
balance between Rose rescuing herself and other people rescuing her is
still tilted towards Rose, I would have liked this book more if Rose were
in firmer control of events.
I will mostly ignore the fact that a human and a Grih sexually reproducing
makes little to no biological sense, since
Star Trek
did similar
things routinely and it's an established genre trope. But I admit that it
still annoys me a bit that the alien hunk is essentially human except that
he's obsessed with Rose's singing and has pointy ears. Diener cares about
Rose's pregnancy a lot more than I did, which added to my mild grumpiness
at how often it came up.
Overall, this was fine. I prefer a bit more of a protagonist discovering
how powerful she is by making ingenious use of the ethical dilemmas her
captors have trapped themselves in, and a bit less of Rose untangling a
complicated political situation by getting abducted by every player
serially, but it still kept the pages turning. Any book that is
sufficiently engrossing for me to read straight through is working at some
level.
Collision Course
was highly readable, undemanding, and
distracting, which is what I was looking for when I read it. I would put
it about middle of pack in the series. If Rose's pregnancy is more
interesting to you than it was to me, that might push it a bit higher.
If you have gotten this far in the series, you will probably enjoy this,
although it does feel like Diener is running out of new things to say
about this universe. That's unfortunate given the number of threads about
AI sentience and rights that could still be followed, but I think tracing
them properly would require more philosophical meat than Diener intends
for these books. Which is why the next book I grabbed was a Culture novel.
Currently this is the final book in the Class 5 series, but there is no
inherent reason why Diener couldn't write more of them.
Rating: 7 out of 10
19 April, 2026 04:52AM
April 18, 2026
Charles Plessy
Thanks Branchable!
I was hosted for a long time, free of charge, on https://www.branchable.com/
by Joey and Lars. Branchable and Ikiwiki were wonderful ideas that never
took off as much as they deserved. To avoid being a burden now that
Branchable is nearing its
end
, I migrated to
a VPS at Sakura.
However, I have not left Ikiwiki. I only use it as a site engine, but I
haven't found any equivalent that gives me both native Git integration, wiki
syntax for a personal site, the creativity of its directives (you can do
anything with
inline
and
pagespec
), and its multilingual
support through the
po
plugin.
Joey and Lars, thank you for everything!
18 April, 2026 01:37PM
Matthias Klumpp
Hello old new “Projects” directory!
If you have recently installed a very up-to-date Linux distribution with a desktop environment, or upgraded your system on a rolling-release distribution, you might have noticed that your home directory has a new folder: “Projects”
Why?
With the recent 0.20 release of
xdg-user-dirs
we enabled the “Projects” directory by default. Support for this has already existed since 2007, but was never formally enabled. This closes a
more than 11-year-old bug report
that asked for this feature.
The purpose of the
Projects
directory is to give applications a default location to place project files that do not cleanly belong in one of the existing categories (Documents, Music, Pictures, Videos). Examples of this are software engineering projects, scientific projects, 3D printing projects, CAD design, or even things like video editing projects, where project files would end up in the “Projects” directory, with the output video being more at home in “Videos”.
By enabling this by default, and subsequently in the coming months adding support to GLib, Flatpak, desktops and applications that want to make use of it, we hope to give applications that do operate in a “project-centric” manner with mixed media a better default storage location. As of now, those tools either default to the home directory, or will clutter the “Documents” folder, neither of which is ideal. It also gives users a default organization structure, hopefully leading to less clutter overall and better storage layouts.
This sucks, I don’t like it!
As usual, you are in control and can modify your system’s behavior. If you do not like the “Projects” folder,
simply delete it!
The
xdg-user-dirs
utility will not try to create it again, and instead adjust the default location for this directory to your home directory. If you want more control, you can influence exactly what goes where by editing your
~/.config/user-dirs.dirs
configuration file.
If you are a system administrator or distribution vendor and want to set default locations for the default XDG directories, you can edit the
/etc/xdg/user-dirs.defaults
file to set global defaults that affect all users on the system (users can still adjust the settings however they like though).
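As an illustration, a per-user override file might look like the sketch below. The entries follow the established `XDG_*_DIR` pattern; the exact variable name for the new directory should be confirmed against your xdg-user-dirs version.

```shell
# ~/.config/user-dirs.dirs: per-user XDG directory overrides.
# Values must be absolute paths or start with $HOME.
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_PROJECTS_DIR="$HOME/Projects"
# Pointing an entry at $HOME disables that directory:
# XDG_PROJECTS_DIR="$HOME/"
```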
What else is new?
Besides this change, the 0.20 release of
xdg-user-dirs
brings full support for the Meson build system (dropping Automake), translation updates, and some robustness improvements to its code. We also fixed the “arbitrary code execution from unsanitized input” bug that the Arch Linux Wiki mentions
here
for the
xdg-user-dirs
utility, by replacing the shell script with a C binary.
Thanks to everyone who contributed to this release!
18 April, 2026 08:06AM
by Matthias
Yifei Zhan
CommBank hardware MFA token
A while ago, CommBank started asking for MFA confirmation on its mobile app for every NetBank login on a browser. Previously, there was an option to use SMS for MFA, which isn’t as secure as I would like, but it was at least usable. Since I’m switching away from Android to Mobian and won’t be able to use the CommBank app for much longer, I applied for a physical NetCode token.
The hardware is made by Digipass and looks disposable. It is a small, battery powered gadget with a screen and a button. When pressed, it shows a temporary NetCode for authentication. Such a NetCode is required both for NetBank logins and approving online transactions.
The letter that came with it has the wrong link for activation; the correct one is under NetBank -> Settings -> NetCode (under the Security section).
To apply for a physical token, call the NetBank team, mention you can’t use the app and need a physical NetCode token, and make sure they actually submit your request for a token. It took me 2 calls to get them to ship me a token. The hardware is free of charge but can only be applied for via phone call; unfortunately staff members at my local branch are unable to do anything in relation to NetBank. I was told privately by a CommBank employee that they are deprecating the hardware token in favor of the mobile app, I hope that won’t happen anytime soon, or that they add support for passkeys before they do. The last time I checked, the CommBank app was LineageOS-friendly, but I don’t want to configure WayDroid just to do online banking.
PayID, the thing that allows you to receive payments via a phone number or email address, is not compatible with the hardware token, and an existing PayID will be silently deactivated if you use one. This looks like an artificial restriction; I don’t see why it has to be this way.
Regular CommBank mobile app sessions will also be deactivated once the hardware token is activated (I was told so, but my sessions weren’t deactivated until I wiped my Android phone), and you won’t be able to sign into the mobile app again until you manually disable the NetCode token.
Online banking has been getting progressively more invasive and anti-user over the last decade, from demanding remote attestation to requiring real time location data, each time locking certain features when those demands are not satisfied; all based on the flawed assumptions that everyone owns a phone running a certain flavor of iOS or Android, and has it ready all the time. I’m not sure what can be done to reverse this trend, but on the personal level I will use NetBank less and go back to cash.
18 April, 2026 12:00AM
Valhalla's Things
Pizza!
Posted on April 18, 2026
Tags:
madeof:atoms
craft:cooking
This post contains a bit of consumerism and is full of references to
commercial products, none of which caused me to receive any money nor
non-monetary compensation.
This post has also been written after eating in one meal the amount of
bread-like stuff that we usually have in more than 24 hours.
I’ve been baking bread for a long time. I don’t know exactly when I started,
but it was probably the early 2000s or so, and it remained a regular-ish
thing until 2020, when it became an
extremely
regular thing, as in I
believe I bake bread on average every other day.
In the before times, I’ve had a chance to bake pizza in a wood fired
oven a few times: a friend had one and would offer the house, my partner
would mind the fire, and I would get there with the dough and prepare
the pizza.
Now that we have moved to a new house, we don’t have a good and
convenient place for a proper wood fired oven in masonry, but we can use
one of the portable ones, and having dealt with more urgent expenses, I
decided that just before the potential collapse of the global economy
was as good a time as any to buy the oven I had been looking at since we
found this house.
I decided to get an Ooni Karu 2, having heard good things about the
brand, and since it looked like a good balance between size and
portability. I also didn’t consider their gas fired ovens (nor did I buy
the gas burner) because I’m trying to get rid of gas, not add stuff that
uses it, and I didn’t get an electric one because I’m not at all unhappy
with the bakery-style pizza we make in our regular oven, and I have to
admit we also wanted to play with fire
We also needed an outdoor table suitable to use the oven on and store
it. Here I looked for inspiration at the Ooni tables (and for cheaper
alternatives in the same style), but my mother who shares the outdoor
area with us wasn’t happy with the idea of steel
And then I was browsing the modern viking shores, and found that there
was a new piece in the NÄMMARÖ series my mother likes (and of which we
already have some reclining chairs): a kitchen unit in wood with a steel
top.
At first I expected to just skip the back panel, since it would be in
the way when using the oven, but then I realized that it could probably
be assembled upside down, down from the top between the table legs, and
we decided to try that option.
This week everything had arrived, and we could try it.
Yesterday evening, after dinner (around 21, I think) I prepared the
dough with the flour I usually use for bakery-style pizza:
Farina di
Grano Tenero Tipo 0 PANE
(320 - 340 W);
since I wanted to make things easier for myself I only used 55%
hydration, so the recipe was:
1 kg flour
550 g water
2 g dry yeast
12 g salt
The next time I think I’ll try with one of my other staples:
Molino
Bogetto etichetta blu
(260/280 W)
Then this morning we assembled the NÄMMARÖ, then I divided the dough in
eight balls, put them in a covered — but not sealed — container
, well floured with rice flour and then we fired the oven
(as in: my partner did, I looked for a short while and then set the
table and stuff), using charcoal, because we already had some, and could
conveniently get more at the supermarket.
When the oven had reached temperatures in the orange range
I stretched the smallest ball out, working on my wooden peel, sprayed it
with water
, sprinkled it with coarse salt and put it in the
oven.
After 30 seconds I turned it around with the new metal peel, then again
after 30 seconds, and then I lost count of how many times I repeated
this
, but it was probably 2 or 3 minutes until it looked
good.
And it was good. The kind of pizza that is quite soft, especially near
the borders.
We ate it with fresh mozzarella and tomatoes, and then made another one
the same way, to finish the mozzarella.
This was supposed to be our lunch, but we decided to try one with some
leftover cooked radicchio, and that also worked quite nicely.
And finally, we decided we needed to try a more classical pizza, with
tomato sauce and cured meat, of which we forgot to take pictures.
Up to here we had eaten about half of the dough, and we were getting
full: I had prepared significantly more than what I expected to eat, to
be able to accidentally burn some, but also with the idea to bake
something else to be eaten later.
So I made two more focaccias with just water and salt, and then I tried
to cook some bread with what I expected to be residual heat.
Except that the oven was getting a bit too cold, so my partner added
some charcoal, and when I put the last two unflattened balls right at
the back of the oven where it was still warmer, that side carbonized.
After 5 minutes I moved them to the middle of the oven, and turned them,
and then after another turn and 5 more minutes they were ready. And
other than the burnt crust, they were pretty edible.
So, the thoughts after our first experience.
Everybody around the table (my SO, my mother and me) was quite happy
with the results, and they are different enough from the ones I could
get with the regular oven.
As I should have expected, it’s much faster than a masonry oven, both in
getting to temperature and in cooling down: my plan for residual heat
bread cooking will have to be adjusted with experience.
We were able to get it hot enough, but not as hot as it’s supposed to be
able to get: we suspect that using just charcoal may have influenced it,
and next week we’ll try to get some wood, and try with a mix.
As for the recipe, dividing the dough in eight parts worked quite well:
maybe the pizzas are a bit on the smaller side, but since they come one
at a time it’s more convenient to cut and share them, and maybe make a
couple more at the end.
Of course, I’ll want to try different recipes, for different styles of
pizzas (including some almost-trademark-violating ones) and for other
types of flatbread.
I expect it won’t be hard to find volunteers to help us with the
experiments. :D
any insinuation that there may have been considerations of
having a way to have freshly baked bread in case of a prolonged
blackout may or may not be based on reality.
But it wasn’t
the only
— or even the main — reason.
↩︎
come on! it’s made of STEEL. how can it be not good? :D
↩︎
IKEA 365+ 3.1 glass, the one that is 32 cm × 21 cm × 9
cm; it was just big enough for the amount of dough, and then I
covered it with a lid that is missing the seal.
↩︎
why did they put a thermometer on it, and not add
labels
with the actual temperature? WHY???
↩︎
if you don’t have dietary restrictions a bit of olive oil
would taste even better.
↩︎
numbers above 2 are all basically the same, right?
↩︎
18 April, 2026 12:00AM
April 17, 2026
Russell Coker
Home Battery
Prices
On the 19th of March I got a home battery system installed. The government has a rebate scheme so it had a list price of about $22k for a 40kWh setup and cost me about $12k. It seems that 40kWh is the minimum usable size for the amount of electricity I use; I have 84 cores running BOINC when they have nothing better to do, which is 585W of TDP according to Intel. While the CPUs are certainly using less than the maximum TDP (both due to design safety limits and the fact that I have disabled hyper-threading on all systems due to it providing minimal benefits and potential security issues), given some power usage by cooling fans and some inefficiency in PSUs I think that assuming that 585W is accounted for 24*7 by CPUs is reasonable. So my home draws between 800W and 1kW when no-one is home, and with an electric car and all-electric cooking a reasonable amount of electricity can be used.
My bills prior to the battery installation were around $200/month, which was based on charging my car only during sunny times as my electricity provider (Amber Electric) has variable rates based on wholesale prices. Also, the feed-in rates often go negative in sunny times if my solar panels produce too much electricity and I don’t use enough of it. I haven’t had the electric car long enough to find out what the bills might be in winter without a home battery.
Before getting the battery my daily bills according to the Amber app were usually between $5 and $10. After getting it the daily bills have almost always been below $5. The only day where it’s been over $5 since the battery installation was when electricity was cheap and I fully charged the home battery and my car, which used 50kWh in one day and cost $7.87, which is 16 cents per kWh. 16 cents isn’t the cheapest price (sometimes it gets as low as 10 cents) but is fairly cheap; sometimes even in the cheap parts of the day it doesn’t get that low (the cheapest price on the day I started writing this was 20 cents).
So it looks like this may save me $100 per month, if so there will be a 10% annual return on investment on the $12K I spent. This makes it a good investment, better than repaying a mortgage (which is generally under 6%) and almost as good as the long term results of index tracker funds. However if it cost $22K (the full price without subsidy) then it would still be ok but wouldn’t be a great investment. The government subsidised batteries because the huge amount of power generated by rooftop solar systems was greater than the grid could use during the day in summer and batteries are needed to use that power when it’s dark.
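As a quick sanity check on those numbers (a sketch using the rough figures above, not actual billing data):

```python
# Back-of-the-envelope return on the battery, using figures from the post.
cost = 12_000            # out-of-pocket cost after the rebate, in dollars
monthly_saving = 100     # estimated reduction in the monthly power bill
annual_return = 12 * monthly_saving / cost
print(f"annual return: {annual_return:.1%}")        # annual return: 10.0%

# The one-day charging example: 50 kWh that cost $7.87.
cents_per_kwh = 7.87 / 50 * 100
print(f"price: {cents_per_kwh:.1f} cents/kWh")      # price: 15.7 cents/kWh
```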
Android App
The battery system is from Fox ESS and the FoxCloud 2.0 Android app is a bit lacking in functionality. It has a timer for mode setting with options “Self-use” (not clearly explained), “Feed-in Priority” (not explained, but testing shows it feeds everything in to the grid), “Back Up”, “Forced Charge”, and “Forced Discharge”. Currently I have “Forced Charge” set up for the sunniest 5 hours of the day with a maximum charge power of 5kW. I did that because about 25kWh/day is what I need to cover everything, and while the system can do almost 10kW that would charge the battery fully in a few hours and then electricity would be exported to the grid, which would at best pay me almost nothing and at worst bill me for supplying electricity when they don’t want it. There doesn’t seem to be a “never put locally generated power into the grid unless the battery is full” option. The force charge mode allows stopping at a certain percentage, but when that is reached there is no fallback to another option. It would be nice if the people who designed the configuration options could take as a baseline assumption that regular people are capable of using the macro programming in office suites and the functions in spreadsheets. I don’t think we need a Turing complete programming language in the app to control batteries (although I would use it if there was one), but I think we need clauses like “if battery is X% full then end this section”.
There is no option to say “force charge until 100%” or “force charge for the next X minutes” as a one-off thing. If I came home in the afternoon with my car below 50% battery and a plan to do a lot of driving the next day, then I’d want to force charge it immediately to allow charging the car overnight. But I can’t do that without entering a “schedule”. For Unix people: imagine having to do everything via a cron job, with no option to run something directly from the command line.
It’s a little annoying that they appear to have spent more development time on animations for the app than some of what should be core functionality.
Management
Amber has an option to allow my battery to be managed by them based on wholesale prices, but I haven’t done that as the feed-in prices are very low. So I just charge my battery when electricity is cheap and use it for the rest of the day. There is usually a factor of 2 or more price difference between the middle of the day and night time, so that saves money. It also means I don’t have to go out of my way to try and charge my car in the middle of the day. There is some energy lost in charging and discharging the batteries, but it’s not a lot. I configured the system to force charge at 5kW for the 5 sunniest hours every day, as that’s enough to keep it charged overnight, and 5kW is greater than the amount of solar electricity produced on my house since I’ve been monitoring it, so that forces it all to be used for the battery. In summer I might have to change that to 6kW for the sunniest 2 or 3 hours and then 4kW or 5kW surrounding that, which will be a pain to manage.
Instead of charging the car every day during sunny times I charge it once or twice a week. I have a 3.3kW charger and the car has a 40kWh battery, so usually it takes me less than 10 hours to fully charge it and I get at least 5 hours of good sunlight in the process.
There are people hacking on these devices to get direct control from computers [1], which is interesting, and apparently they have not been banned from the official community for doing so. I’m not enthusiastic enough to do this; I’ve got plenty of other free software things to work on. But it’s good that others are doing so.
[1]
Related posts:
Electric Car Charging in Melbourne
This morning I noticed some parking bays reserved for car...
Backup for Wind Power
A question that people often ask about wind power (and...
power saving
Adrian von Bidder made an interesting post in response to...
17 April, 2026 12:58PM
by etbe
April 16, 2026
Sahil Dhiman
What is Life (to you)?
It started with a thought: to understand people’s perspectives on life and its meaning. So I texted folks, “What is life (to you)?”. Each of the following list items (-) is a response from a different individual, mostly verbatim.
- A lot
- Everyone has a few universal basic qualities, and some special qualities. To me life is pursuit of exploring world based on those qualities and maturing those qualities as one goes on about exploring world/life with those qualities.
Discovering and enhancing experiences as one goes through them.
- life is endless suffering
- my answer might change daily, but this is what I’ve noticed and feel recently.
Life is a spectrum with two distinct ends: what we control and what we don’t. At birth, the spectrum is largely tilted toward control, but throughout our lives, it gradually shifts toward the other side. Ultimately, as we approach death, we lose all control over any aspect of our existence, reaching the other end of the spectrum.
tho this isn’t universal, privilege plays a huge part in what you control tho i believe it holds true for the majority
but yeah man, meaning and purpose are dynamic, it’s in their nature to change
i can give you a different answer this evening itself xD
- Funeral Monologue from Synecdoche, New York.
- Zindagi ek nadiya hai,
Aur mujhe tairna nahi aata
(translation - Life is a river,
and I don’t know how to swim)
On a more serious note, Life is what you make it out for yourself.
The only established truth is that it will end. We can never know if there is something after or if there was something before.
So try to live a life that you feel aspired by?
But this question was beautifully answered by that book which you had about that dying professor
(Me - He was talking about Tuesday’s with Morrie)
- My answer is 42
- One, it’s living on your own terms, you define everything for yourself, success, normal, whatever. You get to curate your version of it no matter the societal norms.
It’s an accumulation of experiences - friends, parents, work, activities, doing shit loads. Sab try karo- travel, zumba, art, music, workout, sports, dil kara ye karna hai karlo. (translation - If your heart wants to do it, just do it.)
Then I think relationships - all that you’ve nurtured, people forget maintaining people because of work. It takes efforts to keep people in your life, everyone that comes has a place in yours, how well thats stays is upto you. You also get to curate your people, who stays who don’t. Family toh hai hi (translation - family is there) but everyone else that comes along can make it pretty good.
So I don’t want to be 50 and be like chalo ab kuch apne liye karte hai… (translation - Come on, now let’s do something for ourselves) Do whatever shit you want today. Not everything costs money, and if it does get thrifty
But do keep healthy while doing all of that
- Being alive so that my daughter can grow up and i can help raising her kids as well.
Raising kids without mother is tough :P
- Definitively, I feel like Life is a by product of proteins and energy working together.
But in a more personal sense, Life is a dumb joke played onto us. It’s a rat race.
But rats exists because of life and then it becomes a chicken-egg problem
Honestly, I don’t give good answers to life questions. I’m generally the one asking
Life can be like a box of chocolates, you don’t know what you’re gonna get untill you experience the chocolate(assuming the chocolates are heterogenous and contains a mix of everything)
Camus once said, “Life is a revolt”, and one of his students added more spice to it like “Life is a revolt against the meaninglessness of existence"
I kinda feel like Life is the pursuit of every person’s search for meaning
- Imprisonment waiting for execution 😄
I have one more thought while we are on the topic , game with pre defined starting position and predefined destination , path to reach is a maze
- A phase where you can have a really good time or really bad one, usually the mix of both.
A phase where you are prisoner to responsibility and materialistic wants.
It’s a hell for you, where you try to create heaven for others.
Being born was never your choice, but ending is always in your hands but you are a prisoner. You fear that leaving this world behind will destroy the heavens you created for others and they will be back to hell. But eventually everyone moves on watching the hourglass of their life.
Once you are left with no desires or no one to create heavens for, you look arround yourself. You see everyone chasing something, everyone scared of their limitted life time sliping away yet you want it to end sooner.
Doesn’t matter if it was all good till now, or all bad. The other half is waiting for redemption.
If it was all good, it’s best time to die don’t wait for the bad to start. If was all bad, it’s still the best time to die what if it was the good one and more worst is waiting for you.
We desire to be remembered, yet we want to free from this loop of suffering.
Someone once said, life is a suffering, chose your sufferings.
- Life to me is to live without regrets and live with freedom.
Life is always unpredictable and this unpredictability makes it more interesting and worth it.
- As of now, for the state of mind that I am in , I think for me life is about subtle struggle, subtle inconveniences and yet moving forward cause that’s all I know.
I am not sure if any of this has any meaning, but sometimes I feel I was born of a purpose and that the universe has my back.
For me it’s about raising my consciousness, understanding people to their depths, gaining moderate material success and helping people to some extend.
I have tried to seek a grander meaning but I have failed.
Life for me is what I make out it.
In my times of great success i rarely think about life for I am busy enjoying it, whatever you may call that state of mind.
- For me its the little things that you enjoy with YOUR people
- Life to me is about living and loving, and doing it in a way that sustains. It’s the people who shape you, the work you get absorbed in, the quiet moments in between. There’s also the wanting, the drive to figure out what’s worth going after and how to get there, but that’s just one part of it, not the point of it. And none of it happens in a vacuum. I’m aware of the privileges that let me live this way, and I try to hold on to that gratitude. In the end, life has both a material and a non-material side, and a lot of what we do is chasing material things in an attempt to satisfy something non-material within us
- Mere liye (translation - for me) life is staying at my home and studying random economics papers. That’s when I enjoy myself the most.
- Very complicated
Some days I wish this life never ended and some time I feel it would be better if it stopped at that moment.
It all depends on the events that happen in the so called “life”.
So life to me is a string of events that happen anyway and you get to make some decisions which can turn it in any direction and then you wonder how did that happen.
- not forgetting to breathe, learn, eat, game, take a good shit, love, sleep.
- To be honest it changed with time!
At 19 it was about freedom, wasn’t sure what freedom meant but i wanted that! To be free from everything, maybe because parents still controlled a part of my life.
Then came 22-24 where i was working, trying to figure out what i want, the meaning changed from freedom to living for myself. To earn more, to be greedy about myself and pursue whatever would help me gain more steps in my career.
Came my mba life, switched my life from doing for myself to trying everything out to have no regrets. Life meaning was just about living with no regrets, invested, gambled, did everything to earn that tag of “yeah, have tried that”.
Now it has all switched to, it was all just a fake facade. Life turned to having a meaningful life rather than finding meaning in what i am doing. Living for people around me, chhoti chhoti cheezo m khushi (translation - happiness in small things(?)) isn’t really a topic of conversation but more of happy thing for me.
So it changed, and m quite happy to be honest. Life did show me a lot of failures, but was privileged enough to face those failures. Gained a lot of learnings if not money😂
Hopeful for more learnings and change meaning of life with time
- A task.
- You have different answers at different times
You learn different meanings at different times
When you are studying, basically it is about job, finding a partner
then it becomes, house, car other things based on your income
in between, there can be passion too
Free Software was a passion, electoral politics too, but both kind of faded and I want cooperative and user driven development now (prav - something that motivates me every day) and these days learning Chinese and watching Cdrama takes a huge part of my leisure time
it is heavily subjective
and also influences by previous experiences
people around you, how much influence they have on you
it also depends on if they had to struggle in their life or not, for some life did not give much troubles
and trouble itself can be relative
people who never had to struggle may find even smallest challenges as troubles
like if you own a car, your worry is finding a parking slot
- I am too young to think about lyfe
- A ticket to see the show on earth, I guess 😀
I guess life is different depending on the mood. It is a very broad question.
(Me - What is it in this present mood?)
Learning stuff (like I am learning a new language) and being happy but also to regulate emotions in a world where being optimistic is getting harder each day.
Life is also having a unique set of glasses you wear. Both in terms of looking from your eyeballs and your psychological perspective. Both are unique and cannot be replicated.
It is interesting what people on their deathbed think of life. If I know I am dying, my perspective would change a whole lot.
Life is finishing reading books while we are alive 😉
Life is sleeping after a good XMPP chat 😉
- Dukh dard peeda (translation - sorrow, pain, suffering)
- uhh to word it? life is just like a journey from A to somewhere and its all about what paths you take and what line you get on to me, just a series of short adventures that all connect to a larger sequence until you can’t have any more adventures-
(Me - eee, THE END. drop dead, like a coin)
yeaaaah- I am not really for spirituality of an afterlife, to me life just ends at some point, after which point there fails to remain a discernable
you
, and some X time after which, you will be last remembered, try to make that last time a good one I guess?
(Me - no soul?)
uhhh not in the way most people think of it i guess?
theres just a lot of
you
s, theres the physical you, there is the idea of you, there is the expectation of you, and one of the undefinable you I would label as the soul maybe? like the part thats not physically you, but also certainly you
(Me - can’t say I understood part, but I get you in this sense)
mhm- well its about just questioning who you are more so questioning what life is-, I have sadly spent way too much time trying to figure that out
- Making the best of the time you have
- living a full range of experiences and embracing the good ones, seeing all that the world has to offer. In the end we were always just stardust. Might as well enjoy it when we are stardust with a consciousness of our own.
- For some reason or the Universe’s /dev/random I was born here as a biological being, and from my experience I understood living is hard and the best way to live is by embracing it. Loving everyone and everything around you. Be happy and joyful until you naturally say good bye to this world.
- Life is being fucked by everything and you just have to figure out and try to stick to the things worth being fucked for
Note: The following was transcribed from an audio message.
- There are five conditions to become a life to survive in the environment. I think there’s five conditions by the biological definitions and reproduction is one of the factor virus is not considered a life form because it cannot reproduce on its own but technically it’s kind of a life because it reproduces using the DNA ability this is the biological definition.
Do you want a philosophical definition?
My definition is kind of the same except that you get life experiences along with it as a human.
Extra benefits is that you are not an NPC. All other organisms are NPCs.
But humans can interpret the world and change it to their liking.
That is life in the case of a human.
But then many humans are mostly NPCs.
But they still can change the life.
Okay, fuck this. Where is this even going?
A human is an exception in the case of life, because human is not an NPC.
Human can interrupt the world, human can change it to its liking,
which is why we are such a successful organism on this planet.
That is life to me. That’s a human.
But all of this is kind of meaningless, because
the biological impurity of a human being still exists, so you still have the
urges to reproduce, which kind of makes
it like just another organism. But then, humans are yet to evolve
to overcome that biological imperative.
I’m grateful for all the replies, outlooks, and subsequent conversations I got to have after this question with everyone. After all, it was a deeply personal question. It does fit in nicely with
my
definition of life:
“Life is all about experiences and all the transient relationships one gets to have with folks we meet on the way.”
PS - I would love to hear from you on this. Feel free to text or email at sahil AT sahilister.in
16 April, 2026 05:59PM
by Sahil Dhiman
April 15, 2026
Paul Tagliamonte
designing arf, an sdr iq encoding format 🐶
Interested in future updates? Follow me on mastodon at
@paul@soylent.green
. Posts about
hz.tools
will be tagged
#hztools
🐶 Want to jump right to the draft? I'll be maintaining ARF going forward at
/draft-tagliamonte-arf-00.txt
It’s true – processing data from software defined radios can be a bit
complex
👈😏👈 – which tends to keep all but the most grizzled experts and bravest
souls from playing with it. While I wouldn’t describe myself as either, I will
say that I’ve stuck with it for longer than most would have expected of me.
One of the biggest takeaways I have from my adventures with software defined
radio is that there’s a lot of cool crossover opportunity between RF and
nearly every other field of engineering.
Fairly early on, I decided on a very light metadata scheme to track SDR
captures, called
rfcap
. rfcap has withstood my test
of time, and I can go back to even my earliest captures and still make sense of
what they are – IQ format, capture frequencies, sample rates, etc. A huge
part of this was the simplicity of the scheme (fixed-length header, byte-aligned
to supported capture formats), which made it roughly as easy to work with as a
raw file of IQ samples.
However, rfcap has a number of downsides. It’s only a single, fixed-length
header. If the frequency of operation changed during the capture, that change
is not represented in the capture information. It’s not possible to easily
represent multi-channel coherent IQ streams, and additional metadata is
condemned to adjacent text files.
ARF (Archive of RF)
A few years ago, I needed to finally solve some of these shortcomings and tried
to see if a new format would stick. I sat down and wrote out my design goals
before I started figuring out what it looked like.
First, whatever I come up with must be capable of being streamed and processed while being streamed. This includes streaming across the network or merely being written to disk as it's being created. No post-processing required. This is mostly an artifact of how I've built all my tools and how I interact with my SDRs. I use them extensively over the network (both locally, as well as remotely by friends across my wider lan). This decision sometimes even prompts me to do some crazy things from time to time.
I need actual, real support for multiple IQ channels from my multi-channel SDRs (Ettus, Kerberos/Kraken SDR, etc) for playing with things like beamforming. My new format must be capable of storing multiple streams in a single capture file, rather than a pile of files in a directory (and hope they're aligned).
Finally, metadata must be capable of being stored in-band. The initial set of metadata I needed to formalize in-stream were Frequency Changes and Discontinuities. Since then, ARF has grown a few more.
After getting all that down, I opted to start at what I thought the simplest container would look like: TLV (tag-length-value) encoded packets. This is a fairly well-trodden path, used by a bunch of existing protocols we all know and love. Each ARF file (or stream) is a set of TLV-encoded "packets" (sometimes called data units in other specs). This means that unknown packet types may be skipped (since the length is included) and additional data can be added after the existing fields without breaking existing decoders.
Packet layout: tag | flags | length | value
Heads up!
Once this is posted, I'm not super likely to update this page. Once this
goes out, the latest stable copy of the ARF spec is maintained at
draft-tagliamonte-arf-00.txt
This page may quickly become out of date, so if you're actually interested in
implementing this, I've put a lot of effort into making the draft
comprehensive, and I plan to maintain it as I edit the format.
Unlike a “traditional” TLV structure, I opted to add “flags” to the top-level
packet. This gives me a bit of wiggle room down the line, and gives me a
feature that I like from ASN.1 – a “critical” bit. The critical bit indicates
that the packet must be understood fully by implementers, which allows future
backward incompatible changes by marking a new packet type as critical. This
would only really be done if something meaningfully changed the interpretation
of the backwards compatible data to follow.
Flag   Description
0x01   Critical (tag must be understood)
Within each Packet is a
tag
field. This tag indicates how the contents of the
value
field should be interpreted.
Tag ID   Description
0x01     Header
0x02     Stream Header
0x03     Samples
0x04     Frequency Change
0x05     Timing
0x06     Discontinuity
0x07     Location
0xFE     Vendor Extension
In order to help with checking the basic parsing and encoding of this format,
the following is an example packet which should parse without error.
00, // tag (0; no subpacket is 0 yet)
00, // flags (0; no flags)
00, 00 // length (0; no data)
// data would go here, but there is none
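The packet framing above is simple enough to sketch in a few lines of Python. The field widths are inferred from the example (u8 tag, u8 flags, u16 length); the example's length is zero, so it can't settle byte order, and big-endian is an assumption here made to match the worked multi-byte examples later in the post. The draft itself is authoritative.

```python
import struct

CRITICAL = 0x01  # flag bit from the table above

def parse_packet(buf: bytes, offset: int = 0):
    """Parse one TLV packet: u8 tag, u8 flags, u16 length, then value.

    Returns (tag, flags, value, next_offset) so callers can iterate
    over a stream of packets.
    """
    tag, flags, length = struct.unpack_from(">BBH", buf, offset)
    value = buf[offset + 4 : offset + 4 + length]
    return tag, flags, value, offset + 4 + length

def must_understand(flags: int) -> bool:
    """Unknown non-critical tags may be skipped; critical ones may not."""
    return bool(flags & CRITICAL)

# The example packet from the post: tag 0, no flags, no data.
tag, flags, value, end = parse_packet(bytes([0x00, 0x00, 0x00, 0x00]))
assert (tag, flags, value, end) == (0, 0, b"", 4)
```

A decoder loop would call `parse_packet` repeatedly, dispatch on `tag`, and skip unknown tags unless `must_understand(flags)` is true.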
Additionally, throughout the rest of the subpackets, there are a few unique and
shared datatypes. I document them all more clearly in the draft, but to quickly
run through them here too:
UUID
This field represents a globally unique identifier, as defined by RFC 9562, as
16 raw bytes.
Frequency
Data encoded in a Frequency field is stored as microhz (1 Hz is stored as 1000000, 2 Hz is stored as 2000000) as an unsigned 64 bit integer. This has a minimum value of 0 Hz, and a maximum value of 18446744073709551615 uHz, or just above 18.4 THz. This is a bit of a tradeoff, but it's a set of issues that I would gladly contend with rather than deal with the related issues of storing frequency data as a floating point value downstream. Not a huge factor, but as an aside, this is also how my current generation SDR processing code (sparky) stores Frequency data internally, which makes conversion between the two natural.
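The microhertz encoding round-trips in one line each way. The big-endian byte order here is inferred from the worked Stream Header example later in the post, where 100 MHz appears as `00 00 5a f3 10 7a 40 00`.

```python
import struct

UHZ_PER_HZ = 1_000_000  # frequencies are stored as micro-hertz

def encode_frequency(hz: int) -> bytes:
    """Encode a frequency in Hz as a u64 of microhertz (big-endian,
    matching the byte order of the worked examples in the post)."""
    return struct.pack(">Q", hz * UHZ_PER_HZ)

def decode_frequency(raw: bytes) -> float:
    """Decode an 8-byte microhertz field back to Hz."""
    (uhz,) = struct.unpack(">Q", raw)
    return uhz / UHZ_PER_HZ

# 100 MHz round-trips to the byte string used in the Stream Header example.
assert encode_frequency(100_000_000).hex() == "00005af3107a4000"
assert decode_frequency(bytes.fromhex("00005af3107a4000")) == 100_000_000
```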
IQ samples
ARF supports IQ samples in a number of different formats. Part of the idea here
is I want it to be easy for capturing programs to encode ARF for a specific
radio without mandating a single iq format representation. For IQ types with
a scalar value which takes more than a single byte, this is always paired
with a Byte Order field, to indicate if the IQ scalar values are little or
big endian.
ID     Name   Description
0x01   f32    interleaved 32 bit floating point scalar values
0x02   i8     interleaved 8 bit signed integer scalar values
0x03   i16    interleaved 16 bit signed integer scalar values
0x04   u8     interleaved 8 bit unsigned integer scalar values
0x05   f64    interleaved 64 bit floating point scalar values
0x06   f16    interleaved 16 bit floating point scalar values
Header
Each ARF file must start with a specific Header packet. The header contains
information about the ARF stream writ large to follow. Header packets are
always marked as “critical”.
Header layout: magic | flags | start | guid | site guid | #st
In order to help with checking the basic parsing and encoding of this format,
the following is an example header subpacket (when encoded or decoded this
will be found inside an ARF packet as described above) which should parse
without error, with known values.
00, 00, 00, fa, de, dc, ab, 1e, // magic
00, 00, 00, 00, 00, 00, 00, 00, // flags
18, 27, a6, c0, b5, 3b, 06, 07, // start time (1740543127)
// guid (fb47f2f0-957f-4545-94b3-75bc4018dd4b)
fb, 47, f2, f0, 95, 7f, 45, 45,
94, b3, 75, bc, 40, 18, dd, 4b,
// site_id (ba07c5ce-352b-4b20-a8ac-782628e805ca)
ba, 07, c5, ce, 35, 2b, 4b, 20,
a8, ac, 78, 26, 28, e8, 05, ca
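The header test vector can be checked mechanically. The field widths below are inferred from the example bytes (8-byte magic, 8-byte flags, 8-byte start, two 16-byte UUIDs); the start field appears to be nanoseconds since the Unix epoch, since truncating it to seconds matches the commented value, and the `#st` (stream count) field from the layout is not present in the example bytes, so this sketch decodes only what's shown. Again, the draft is authoritative.

```python
import struct
import uuid

HEADER_HEX = (
    "000000fadedcab1e"                  # magic
    "0000000000000000"                  # flags
    "1827a6c0b53b0607"                  # start time
    "fb47f2f0957f454594b375bc4018dd4b"  # guid
    "ba07c5ce352b4b20a8ac782628e805ca"  # site_id
)
raw = bytes.fromhex(HEADER_HEX)

magic, flags, start = struct.unpack_from(">QQQ", raw, 0)
guid = uuid.UUID(bytes=raw[24:40])
site = uuid.UUID(bytes=raw[40:56])

assert magic == 0xFADEDCAB1E and flags == 0
# Interpreting start as nanoseconds since the epoch matches the comment.
assert start // 10**9 == 1740543127
assert str(guid) == "fb47f2f0-957f-4545-94b3-75bc4018dd4b"
assert str(site) == "ba07c5ce-352b-4b20-a8ac-782628e805ca"
```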
Stream Header
Immediately after the ARF Header, some number of Stream Headers
follow. There must be exactly the same number of Stream Header packets as are
indicated by the
num streams
field of the Header. This has the nice effect of
enabling clients to read all the stream headers without requiring buffering of
“unread” packets from the stream.
Stream Header layout: id | flags | fmt | bo | rate | freq | guid | site
In order to help with checking the basic parsing and encoding of this format,
the following is an example stream header subpacket (when encoded or decoded
this will be found inside an ARF packet as described above) which should parse
without error, with known values.
00, 01, // id (1)
00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // format (float32)
01, // byte order (Little Endian)
00, 00, 01, d1, a9, 4a, 20, 00, // rate (2 MHz)
00, 00, 5a, f3, 10, 7a, 40, 00, // frequency (100 MHz)
// guid (7b98019d-694e-417a-8f18-167e2052be4d)
7b, 98, 01, 9d, 69, 4e, 41, 7a,
8f, 18, 16, 7e, 20, 52, be, 4d,
// site_id (98c98dc7-c3c6-47fe-bc05-05fb37b2e0db)
98, c9, 8d, c7, c3, c6, 47, fe,
bc, 05, 05, fb, 37, b2, e0, db,
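Decoding the stream header test vector also confirms that the rate and frequency fields use the microhertz encoding described earlier. Field widths here are inferred from the byte comments (u16 id, 8-byte flags, u8 format, u8 byte order, two u64s, two UUIDs); consult the draft for the normative layout.

```python
import struct
import uuid

STREAM_HEADER_HEX = (
    "0001"                              # id
    "0000000000000000"                  # flags
    "01"                                # format (f32)
    "01"                                # byte order (little endian)
    "000001d1a94a2000"                  # rate, in uHz
    "00005af3107a4000"                  # frequency, in uHz
    "7b98019d694e417a8f18167e2052be4d"  # guid
    "98c98dc7c3c647febc0505fb37b2e0db"  # site_id
)
raw = bytes.fromhex(STREAM_HEADER_HEX)

sid, flags, fmt, byte_order = struct.unpack_from(">HQBB", raw, 0)
rate_uhz, freq_uhz = struct.unpack_from(">QQ", raw, 12)
guid = uuid.UUID(bytes=raw[28:44])

assert sid == 1
assert fmt == 0x01         # f32, per the IQ format table
assert byte_order == 0x01  # little endian, per the comment
assert rate_uhz == 2_000_000 * 1_000_000    # 2 MHz sample rate
assert freq_uhz == 100_000_000 * 1_000_000  # 100 MHz center frequency
assert str(guid) == "7b98019d-694e-417a-8f18-167e2052be4d"
```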
Samples
Block of IQ samples in the format indicated by this stream’s
format
and
byte_order
field sent in the related
Stream Header
Samples layout: id | iq samples
In order to help with checking the basic parsing and encoding of this format, the following is a Samples subpacket (when encoded or decoded this will be found inside an ARF packet as described above). The IQ values here are notional (and are either two 8 bit samples or one 16 bit sample, depending on what the related Stream Header was).
01, // id
ab, cd, ab, cd, // iq samples
Frequency Change
The center frequency of the IQ stream has changed since the
Stream Header
or last
Frequency Change
has been sent. This is useful to capture IQ streams that are jumping
around in frequency during the duration of the capture, rather than
starting and stopping them.
Frequency Change layout: id | frequency
In order to help with checking the basic parsing and encoding of this format,
the following is a frequency change subpacket (when encoded or decoded
this will be found inside an ARF packet as described above).
01, // id
00, 00, b5, e6, 20, f4, 80, 00 // frequency (200 MHz)
Discontinuity
Since the last Samples packet for this stream, samples have been dropped
or not encoded to this stream. This can be used for a stream that has
dropped samples for some reason, a large gap (radio was needed for something
else), or communicating “iq snippits”.
Discontinuity layout: id
In order to help with checking the basic parsing and encoding of this format,
the following is a discontinuity subpacket (when encoded or decoded this will
be found inside an ARF packet as described above).
01, // id
Location
Up-to-date location as of this moment of the IQ stream, usually from a GPS.
This allows for in-band geospatial information to be marked in the IQ stream.
This can be used for all sorts of things (detected IQ packet snippits aligned
with a time and location or a survey of rf noise in an area)
Location layout: flags | sys | lat | long | el | accuracy
The sys field indicates the geodetic system to be used for the provided latitude, longitude and elevation fields. The full list of supported geodetic systems is currently just WGS84, but in case something meaningfully changes in the future, it'd be nice to be able to migrate forward.
Unfortunately, being a bit of a coward here, the accuracy field is a bit of a
cop-out. I’d really rather it be what we see out of kinematic state estimation
tools like a kalman filter, or at minimum, some sort of ellipsoid. This is
neither of those - it’s a perfect sphere of error where we pick the largest
error in any direction and use that. Truthfully, I can’t be bothered to model
this accurately, and I don’t want to contort myself into half-assing something
I know I will half-ass just because I know better.
System   Description
0x01     WGS84 - World Geodetic System 1984
In order to help with checking the basic parsing and encoding of this format,
the following is a location subpacket (when encoded or decoded this will be
found inside an ARF packet as described above).
00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // system (wgs84)
3f, f3, be, 76, c8, b4, 39, 58, // latitude (1.234)
40, 02, c2, 8f, 5c, 28, f5, c3, // longitude (2.345)
40, 59, 00, 00, 00, 00, 00, 00, // elevation (100)
40, 24, 00, 00, 00, 00, 00, 00 // accuracy (10)
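The location test vector decodes with the coordinate fields as big-endian IEEE 754 doubles; the field widths below (8-byte flags, u8 system, four f64s) are inferred from the example bytes rather than taken from the draft.

```python
import struct

LOCATION_HEX = (
    "0000000000000000"  # flags
    "01"                # system (wgs84)
    "3ff3be76c8b43958"  # latitude
    "4002c28f5c28f5c3"  # longitude
    "4059000000000000"  # elevation
    "4024000000000000"  # accuracy
)
raw = bytes.fromhex(LOCATION_HEX)

flags, system = struct.unpack_from(">QB", raw, 0)
lat, lon, elevation, accuracy = struct.unpack_from(">dddd", raw, 9)

assert system == 0x01  # WGS84
assert abs(lat - 1.234) < 1e-12
assert abs(lon - 2.345) < 1e-12
assert elevation == 100.0 and accuracy == 10.0
```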
Vendor Extension
In addition to the fields I put in the spec, I expect that I may need custom
packet types I can’t think of now. There’s all sorts of useful data that could
be encoded into the stream, so I’d rather there be an officially sanctioned
mechanism that allows future work on the spec without constraining myself.
Just as an example, I've used a custom subpacket to create test vectors: the data is encoded into a Vendor Extension, followed by the IQ for the modulated packet. If the demodulated data and the in-band original data don't match, we've regressed. You could imagine in-band speech-to-text, antenna rotator azimuth information, or demodulated digital sideband data (like FM HDR data) too. Or things I can't even think of!
Vendor Extension layout: id | data
In order to help with checking the basic parsing and encoding of this format,
the following is a vendor extension subpacket (when encoded or decoded this
will be found inside an ARF packet as described above).
// extension id (b24305f6-ff73-4b7a-ae99-7a6b37a5d5cd)
b2, 43, 05, f6, ff, 73, 4b, 7a,
ae, 99, 7a, 6b, 37, a5, d5, cd,
// data (0x01, 0x02, 0x03, 0x04, 0x05)
01, 02, 03, 04, 05
Tradeoffs
The biggest tradeoff that I'm not entirely happy with is limiting the length of a packet to u16 – 65535 bytes. Given the u8 sample header, this limits us to 8191 32 bit sample pairs at a time. I wound up believing that the overhead in terms of additional packet framing is worth it – because always encoding 4 byte lengths felt like overkill, and a dynamic length scheme ballooned codepaths in the decoder that I was trying to keep as easy to change as possible as I worked with the format.
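The arithmetic behind that 8191 figure, generalized to the other IQ formats (assuming the 1-byte stream id is the only per-Samples-packet overhead):

```python
MAX_VALUE_LEN = 0xFFFF  # u16 packet length field
SAMPLES_OVERHEAD = 1    # u8 stream id inside the Samples subpacket

def max_pairs(scalar_bytes: int) -> int:
    """Max IQ pairs per Samples packet; one pair is two scalars."""
    return (MAX_VALUE_LEN - SAMPLES_OVERHEAD) // (2 * scalar_bytes)

assert max_pairs(4) == 8191   # f32: the figure quoted in the post
assert max_pairs(1) == 32767  # i8 / u8
assert max_pairs(8) == 4095   # f64
```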
15 April, 2026 03:43PM
Dirk Eddelbuettel
qlcal 0.1.1 on CRAN: Calendar Updates
The nineteenth release of the
qlcal
package
arrived at
CRAN
just now, and
has already been built for
r2u
. This version
synchronises with
QuantLib
1.42
released this week.
qlcal
delivers the calendaring parts of
QuantLib
. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external
QuantLib
library (which can be
demanding to build).
qlcal
covers
over sixty country / market calendars and can compute holiday lists, its
complement (
i.e.
business day lists) and much more. Examples
are in the README at the
repository
, the
package page
and of course at the
CRAN package
page
This release updates the 2025 holidays for China, Singapore, and Taiwan.
The full details from
NEWS.Rd
follow.
Changes in version 0.1.1
(2026-04-15)
Synchronized with QuantLib 1.42 released two days ago
Calendar updates for China, Singapore, Taiwan
Courtesy of my
CRANberries
, there
is a diffstat report for
this
release
. See the
project page
and package documentation for more details, and more examples.
This post by
Dirk
Eddelbuettel
originated on his
Thinking inside the box
blog. If you like this or other open-source work I do, you can
sponsor me at
GitHub
. You can also sponsor my
Tour
de Shore 2026 ride in support of the Maywood Fine Arts Center
15 April, 2026 01:07PM
Emmanuel Kasper
Minix 3 on Beagle Board Black (ARM)
Connected via serial console. Does not have a package manager, web or ssh server, but can play tetris in the terminal (bsdgames in Debian has the same tetris version packaged).
15 April, 2026 09:44AM
by Manu
Freexian Collaborators
Debian Contributions: Debusine projects in GSoC, Debian CI updates, Salsa CI maintenance and more! (by Anupa Ann Joseph)
Debian Contributions: 2026-03
Contributing to Debian
is part of
Freexian’s mission
. This article
covers the latest achievements of Freexian and their collaborators. All of this
is made possible by organizations subscribing to our
Long Term Support contracts
and
consulting services
Debusine projects in Google’s Summer of Code
While Freexian initiated Debusine, and is investing a lot of resources in the
project, we manage it as a true free software project that can and should have a
broader community.
We always had
documentation for new contributors
and we aim to be reactive with them when they interact via the issue tracker or
via merge requests. We decided to put those intentions under stress tests by
proposing five projects
for Google’s Summer of Code as part of Debian’s participation in that program.
Given that at least 11 candidates managed to get their merge request accepted in
the last 30 days (interacting with the development team is part of the
pre-requisites to apply to Google Summer of Code projects these days), the
contributing experience must not be too bad. 🙂 If you want to try it out, we
maintain a list of "quick fixes" that are accessible to newcomers. And as always, we welcome your
feedback
Debian CI: incus backend and upgrade to Bootstrap 5, by Antonio Terceiro
debci
3.14 was released on March 4th, with a followup 3.14.1 release with
regression fixes a few days afterwards. Those releases were followed by new
development and maintenance work that will provide extra capabilities and
stability to the platform.
This month saw the
initial version of an incus backend
land in Debian CI. The transition into the new backend will be done carefully so
as to not disrupt ‘testing’ migration. Each package will be running jobs with
both the current lxc backend and with incus. Packages that have the same result
on both backends will be migrated over, and packages that exhibit different
results will be investigated further, resulting in bug reports and/or other
communication with the maintainers.
On the frontend side, the code has been
ported to Bootstrap 5
over from the now-ancient Bootstrap 3. This need was originally reported back in 2024
based on the lack of security support for Bootstrap 3. Beyond improving
maintainability, this upgrade also enables support for dark mode in
debci
which is still work in progress.
Both updates mentioned in this section will be available in a following
debci
release.
Salsa CI maintenance by Santiago Ruano Rincón et al.
Santiago reviewed some Salsa CI issues and reviewed associated merge requests.
For example, he investigated a
regression (#545)
introduced by the
move to sbuild
on the use of extra repositories configured as “.source” files; and reviewed the
MR (!712)
that fixes it.
Also, there were conflicts with changes made in
debci 3.14
and
debci 3.14.1
(those updates are mentioned above), and different people have contributed to
fix the subsequent issues, in a long-term way. This includes Raphaël, who proposed MR !707 and who also suggested that Antonio merge the Salsa CI patches to avoid similar errors in the future. This happened shortly after.
Those fixes finally required the unrelated
MR !709
which will prevent similar problems when building images.
To identify bugs related to the autopkgtest support in the backport suites as
early as possible, Santiago proposed
MR !708
Finally, Santiago, in collaboration with Emmanuel Arias, also had exchanges with
GSoC candidates for the
Salsa CI project
including the contributions they have made as merge requests. It is important to
note that there are several very good candidates interested in participating.
Thanks a lot to them for their work so far!
Miscellaneous contributions
Raphaël reported a
zim bug
affecting Debian Unstable users, which was already fixed in git apparently. He
could thus cherry-pick the fix and
update the package
in Debian Unstable.
Carles created a new page on the
InstallingDebianOn
in Debian Wiki.
Carles submitted translation errors in the debian-installer Weblate.
Carles, using
po-debconf-manager
improved Catalan translations: reviewed and submitted 3 packages. Also improved
error handling when forking or submitting an MR if the fork already existed.
Carles kept improving
check-relations
code base related general improvements (added strict typing, enabled pre-commit).
Also added DebPorts support, virtual packages support and added commands for
reporting missing relations and importing bugs from
bugs.debian.org
Antonio handled miscellaneous Salsa support requests.
Antonio improved the management of MiniDebConf websites by keeping all non-secret settings in git and fixed exporting these sites as static HTML.
Stefano uploaded routine updates to
hatchling
python-mitogen
python-virtualenv
python-discovery
dh-python
pypy3
python-pipx
and
git-filter-repo
Faidon uploaded routine updates to
crun
libmaxminddb
librdkafka
lowdown
platformdirs
python-discovery
sphinx-argparse-cli
tox
tox-uv
Stefano and Santiago continued to help with DebConf 26 preparations.
Stefano reviewed some contributions to debian-reimbursements and handled admin
for reimbursements.debian.net.
Stefano attended the Debian Technical Committee meeting.
Helmut sent 8 patches for cross build failures.
Building on the work of
postmarketOS
Helmut managed to cross build systemd for musl in rebootstrap and sent several
patches in the process.
Helmut reviewed several MRs of Johannes Schauer Marin Rodrigues expanding
support for
DPKG_ROOT
to support installing hurd.
Helmut incorporated a final round of feedback for the Multi-Arch documentation
in Debian policy, which finally made it into
unstable
together with documentation of Build-Profiles.
In order to fix
python-memray
, Helmut NMUed libunwind, generally disabling C++ exception support as being an incompatible duplication
of the gcc implementation. Unfortunately, that ended up breaking
suricata
on
riscv64
After another
NMU
python-memray finally
migrated
Thorsten uploaded new upstream versions of
epson-inkjet-printer-escpr
and
sane-airscan
. He also fixed a packaging bug in
printer-driver-oki
. As of
systemd 260.1-1 the configuration of lpadmin has been added to the sysusers.d
configuration. All printing packages can now simply depend on the
systemd-sysusers package and don’t have to take care of its creation in
maintainer scripts anymore.
In collaboration with Emmanuel Arias, Santiago had exchanges with GSoC
candidates and reviewed the proposals of the
Linux livepatching GSoC 2026 project
Colin helped to fix
CVE-2026-3497
in openssh and
CVE-2026-28356
in multipart.
Colin upgraded tango and pytango to new upstream releases and packaged
pybind11-stubgen (needed for pytango), thanks to a Freexian customer. Tests of
reproducible builds revealed that pybind11-stubgen didn’t generate imports in a
stable order; this is
now fixed upstream
Lucas fixed
CVE-2025-67733
and
CVE-2026-21863
affecting src:valkey in unstable and testing. Also reviewed the same fixes
targeting stable proposed by Peter Wienemann.
Faidon worked with upstream and build-dep Debian maintainers on resolving
blockers in order to bring pyHanko into Debian, starting with the adoption of
python-pyhanko-certvalidator
. pyHanko is a suite for signing and stamping PDF
files, and one of the few libraries that can be leveraged to sign PDFs with
eIDAS Qualified Electronic Signatures.
Anupa co-organized
MiniDebConf Kanpur
and attended the event with many others from all across India. She handled the
accommodation arrangements along with the registration team members, worked on
the budget and expenses. She was also a speaker at the event.
Lucas helped with content review/schedule for the
MiniDebConf Campinas
. Thanks Freexian for
being a Gold sponsor!
Lucas organized and took part in a one-day in-person sprint to work on
Ruby 3.4 transition. It was held in a coworking space in Brasilia - Brazil on
April 6th. There were 5 DDs and they fixed multiple packages FTBFSing against
Ruby 3.4 (coming to unstable soon hopefully). Lucas has been postponing a blog
post about this sprint since then :-)
15 April, 2026 12:00AM
by Anupa Ann Joseph
April 14, 2026
Steinar H. Gunderson
Looking for work
It seems my own plans and life's plans diverged this spring,
so I am in the market for a new job. So if you're looking for
someone with a long track record making your code go brrr
really fast, give me a ping (contact information at
my homepage
). Working from Oslo
(on-site or remote), CV available upon request. No AI boosterism
or cryptocurrency grifters, please :-)
14 April, 2026 04:44PM
Dirk Eddelbuettel
anytime 0.3.13 on CRAN: Mostly Minor Bugfix
A maintenance release 0.3.13 of the
anytime
package arrived on
CRAN
today,
sticking with the roughly yearly schedule we have now. Binaries for
r2u
have been built
already. The package is fairly feature-complete, and code and
functionality remain mature and stable.
anytime
is a very focused package aiming to do just one thing
really
well: to convert
anything
in integer, numeric, character,
factor, ordered, … input format to either POSIXct (when called as
anytime
) or Date objects (when called as
anydate
) – and to do so
without requiring a format
string
as well as
accommodating different formats in one input
vector
. See the
anytime
page,
the
GitHub repo
for a few examples, the nice
pdf
vignette
, and the beautiful
documentation site
for all documentation.
This release was triggered by a bizarre bug seen on elementary OS 8. For "reasons", anytime was taking note on startup of where it runs, using a small and simple piece of code reading /etc/os-release when it exists. We assumed sane content, but this particular operating system release managed to have a duplicate entry, throwing us a spanner. So now this code is robust to duplicates, and is no longer executed on each startup but "as needed", which is a net improvement. We also switched the vignette to being deployed by the new Rcpp::asis() driver.
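The actual detection lives inside the anytime package itself; purely as an illustration of the duplicate-tolerant idea, here is a Python sketch of parsing an os-release-style file where a repeated key no longer trips anything up (keeping the first occurrence is an assumption; keeping the last would work just as well for this bug).

```python
def parse_os_release(text: str) -> dict:
    """Parse /etc/os-release-style KEY=value lines, tolerating
    duplicate keys by keeping the first occurrence of each key."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info.setdefault(key, value.strip('"'))
    return info

# A duplicate ID entry, as seen on elementary OS 8, is now harmless.
sample = 'NAME="elementary OS"\nID=elementary\nID=elementary\nVERSION_ID="8"\n'
release = parse_os_release(sample)
assert release["ID"] == "elementary"
assert release["VERSION_ID"] == "8"
```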
The short list of changes follows.
Changes in anytime
version 0.3.13 (2026-04-14)
Continuous integration has received minor updates
The vignette now uses the
Rcpp::asis()
driver, and
references have been refreshed
Stateful 'where are we running' detection is now more robust, and
has been moved from running on each startup to a cached 'as needed'
case
Courtesy of my
CRANberries
, there
is also a diffstat report of
changes
relative to the previous release
. The issue tracker at the GitHub repo can be used for questions and comments. More information about
the package is at the
package page
the
GitHub repo
in the
vignette
and at the
documentation
site.
This post by
Dirk
Eddelbuettel
originated on his
Thinking inside the box
blog. If you like this or other open-source work I do, you can now
sponsor me at
GitHub
. You can also sponsor my
Tour
de Shore 2026 ride in support of the Maywood Fine Arts Center
14 April, 2026 03:07PM
Petter Reinholdtsen
Talking to the Computer, and Getting Some Nonsense Back...
At last, I can run my own large language model artificial idiocy
generator at home on a Debian testing host using Debian packages
directly from the Debian archive. After months of polishing the
llama.cpp
whisper.cpp
and
ggml
packages, and their
dependencies, I was very happy to see today that they all entered
Debian testing this morning. Several release-critical issues in
dependencies have been blocking the migration for the last few weeks,
and now finally the last one of these has been fixed. I would like to
extend a big thanks to everyone involved in making this happen.
I've been running home-built editions of whisper.cpp and llama.cpp
packages for a while now, first building from the upstream Git
repository and later, as the Debian packaging progressed, from the
relevant Salsa Git repositories for the ROCM packages, GGML,
whisper.cpp and llama.cpp. The only snag with the official Debian
packages is that the JavaScript chat client web pages are slightly
broken in my setup, where I use a reverse proxy to make my home server
visible on the public Internet while the included web pages only want
to communicate with localhost / 127.0.0.1. I suspect it might be
simple to fix by making the JavaScript code dynamically look up the
URL of the current page and use that to determine where to find the
API service, but until someone fixes
BTS report #1128381
, I
just have to edit
/usr/share/llama.cpp-tools/llama-server/themes/simplechat/simplechat.js
every time I upgrade the package. I start my server like this on my
machine with a nice AMD GPU (donated to me as a Debian developer by
AMD two years ago, thank you very much):
LC_ALL=C llama-server \
-ngl 256 \
-c $(( 42 * 1024)) \
--temp 0.7 \
--repeat_penalty 1.1 \
-n -1 \
-m Qwen3-Coder-30B-A3B-Instruct-Q5_K_S.gguf
It only takes a few minutes to load the model for the first time and prepare a nice API server for me, available (note, this sets the server up without authentication; use a reverse proxy with authentication if you need it) for all the API clients I care to test. I switch models regularly to test different new ones; the Qwen3-Coder one just happens to be the one I use at the moment. Perhaps these packages are something for you to have fun with too?
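llama-server exposes an OpenAI-compatible chat endpoint, so talking to it needs nothing beyond the standard library. The host and port below are assumptions (8080 is the default; adjust to your setup) and the helper function name is mine, not part of any API.

```python
import json
from urllib import request

# Assumed endpoint; llama-server listens on port 8080 by default.
API_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, temperature: float = 0.7) -> request.Request:
    """Build an OpenAI-style chat completion request for llama-server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Why is the sky blue?")
# Actually sending it (with the server running) would look like:
#   with request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
assert json.loads(req.data)["messages"][0]["role"] == "user"
```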
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b
14 April, 2026 12:15PM
Russell Coker
Furilabs FLX1s Finally Working
I’ve been using the
Furilabs FLX1s phone [1]
as my daily driver for 6 weeks. It's a decent phone, not as good as I hoped but good enough to use every day and rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian, as you really can't effectively find bugs unless you use the platform for important tasks.
Support Problems
I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this, on the 16th of Jan one support person said that they would ship it immediately but didn’t provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be, the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request including asking for a way of direct communication as the support email came from an address that wouldn’t accept replies, I was asked for a photo showing where the problem is. The support person also said that they might have to send a replacement phone!
The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn’t receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:
One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga.
Furilabs needs to do the following to address this issue:
Make it possible to reply directly to a message from a support person. Accept email with a custom subject to sort it, give a URL for a web form, anything. Collating discussions with a customer allows giving better support while taking less time for the support people.
Have someone monitor every social media address that is used by the company. When someone sends a support request in a public Mastodon post it indicates that something has gone wrong and you want to move quickly to resolve it.
Take care of the little things, like sending a tracking number for every parcel. If it’s something too small for a parcel (the SIM caddy could have fit in a regular letter) then just tell the customer what date it was posted and where it was posted from so they have some idea of when it will arrive.
This is not just a single failure of Furilabs support, it’s a systemic failure of their processes.
Problems I Will Fix – Unless Someone Beats Me to it
Here are some issues I plan to work on.
Smart Watch Support
I need to port one of the smart watch programs to Debian. Also I want to make one of them support the
Colmi P80 [2]
A smart watch significantly increases the utility of a phone even though IMHO they aren’t doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.
Nextcloud
I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connection from the Linux desktop app (as packaged in Debian) and from the Android client (from F-Droid). The desktop client works with a friend's Nextcloud installation on Ubuntu, so I may try running it on an Ubuntu VM I run while waiting for the Debian issue to get resolved. There was a bug recently fixed in Nextcloud that appears related, so maybe the next release will fix it.
For the moment I’ve been running without these features and I call and SMS people from knowing their number or just returning calls. Phone calls generally aren’t very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.
Wifi IPv6
Periodically IPv6 support just stops working, I can’t ping the gateway. I turn wifi off and on again and it works. This might be an issue with my wifi network configuration. This might be an issue with the way I have configured my IPv6 networking, although that problem doesn’t happen with any of my laptops.
Chatty Sorting
Chatty is the program for SMS that is installed by default (part of the phosh/phoc setup); it also does Jabber. Version 0.8.7 is installed, which apparently has some Furios modifications, and it doesn't properly sort SMS/Jabber conversations. Version 0.8.9 from Debian sorts in the same way as most SMS and Jabber programs, with the most recent at the top, but the Debian version doesn't support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it still sorted for a while but then suddenly stopped. Killing Chatty (not just closing the window and reopening it) sometimes makes it sort the conversations again.
Problems for Others to Fix
Here are the current issues I have starting with the most important.
Important
The following issues seriously reduce the usability of the device.
Hotspot
The Wifi hotspot functionality wasn't working for a few weeks; this GitLab issue seems to match it [3]. It started working correctly for a day, and I was not sure whether an update I applied had fixed the bug or whether it's some sort of race condition that worked for this boot and would return the next time I rebooted. Later on I rebooted it and found that it's somewhat random whether it works or not.
Also while it is mostly working it seemed to stop working about every 25 minutes or so and I had to turn it off and on again to get it going.
On another day it went to a stage where it got repeated packet loss when I pinged the phone as a hotspot from my laptop. A pattern of 3 ping responses and 3 “Destination Host Unreachable” messages was often repeated.
I don’t know if this is related to the way Android software is run in a container to access the hardware.
4G Reliability
Sometimes 4G connectivity has just stopped, sometimes I can stop and restart the 4G data through software to fix it and sometimes I need to use the hardware switch. I haven’t noticed this for a week or two so there is a possibility that one fix addressed both Hotspot and 4G.
One thing that I will do is setup monitoring to give an alert on the phone if it can’t connect to the Internet. I don’t want it to just quietly stop doing networking stuff and not tell me!
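The post doesn't include the monitoring setup, but a minimal sketch could look like this (the target host, timeout, and notification text are my own choices, not from the post):

```shell
# Hypothetical connectivity watchdog; target host, timeout and message
# are illustrative placeholders.
TARGET=9.9.9.9

check_net() {
    # a single ping with a 5 second timeout; exit status 0 means reachable
    ping -c 1 -W 5 "$TARGET" > /dev/null 2>&1
}

alert_if_down() {
    # raise a desktop notification when the check fails
    if ! check_net; then
        notify-send -u critical "Network down" "Cannot reach $TARGET"
    fi
}

# run it periodically, e.g.: while :; do alert_if_down; sleep 60; done
```

A systemd user timer would be a tidier way to schedule the check than a shell loop, but the idea is the same.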
On-screen Keyboard
The compatibility issues of the GNOME and KDE on-screen keyboards are getting to me. I use phosh/phoc as the login environment as I want to stick with defaults at first, to not make things any more difficult than they need to be. When I use programs that use Qt, such as Nheko, the keyboard doesn't always appear when it should, and it forgets the setting for "word completion" (which means spelling correction).
The spelling correction system doesn't suggest replacing "dont" with "don't", which is really annoying, as a major advantage of spelling checkers on touch screens is inserting apostrophes. An apostrophe takes at least three times longer to type than a regular character, and saving that delay makes a difference to typing speed.
The spelling correction doesn’t correct two words run together.
Medium Priority
These issues are ongoing annoyances.
Delay on Power Button
In the best case scenario this phone has a much slower response to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9), and a much slower response than my recollection of the vast majority of Android phones I've ever used. When testing by pressing the buttons on both phones simultaneously, the Android phone screens lit up much sooner: something like 200ms vs 600ms. I don't have a good setup to time these things, but the difference is very obvious when I test.
In a less common case scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case scenario is something in excess of 20 seconds.
For UI designers, if you get multiple press events from a button that can turn the screen on/off please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn’t good, especially when each screen flash takes half a second.
Notifications
Touching on a notification for a program often doesn’t bring it to the foreground. I haven’t yet found a connection between when it does and when it doesn’t.
Also the lack of icons in the top bar on the screen to indicate notifications is annoying, but that seems to be an issue of design not the implementation.
Charge Delay
When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Having it miss 22 seconds of charge time is no big deal, having to wait 22 seconds to be sure it’s charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires a slightly deeper connector than most phones so with some plugs it’s easy to not quite insert them far enough.
Torch aka Flash
The light for the "torch", or camera flash, is not bright at all. In a quick test, staring into the light from 40cm away wasn't unpleasant; compare that to my Huawei Mate 10 Pro, which has a light bright enough that it hurts to look at it from 4 meters away.
Because of this photos at night are not viable, not even when photographing something that’s less than a meter away.
The torch has a brightness setting which doesn’t seem to change the brightness, so it seems likely that this is a software issue and the brightness is set at a low level and the software isn’t changing it.
Audio
When I connect to my car the Lollypop player starts playing before the phone directs audio to the car, so the music starts coming from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playing pauses for no apparent reason.
It doesn’t support the phone profile with Bluetooth so phone calls can’t go through the car audio system. Also it doesn’t always connect to my car when I start driving, sometimes I need to disable and enable Bluetooth to make it connect.
When I initially set the phone up, Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection. After an update that often doesn't happen, so the car doesn't display the track name or whether the music is playing, although the pause icon still works to pause and resume music (and sometimes the track name does come through).
About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.
Low Priority
I could live with these things remaining as-is but it’s annoying.
Ticket Mode
There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.
Camera
The camera app works with both the back and front cameras, which is nice and, sadly, based on my experience with other Debian phones, noteworthy. The problem is that it takes a long time to take a photo, something like a second after the button is pressed: long enough for you to think that it has already silently taken the photo and to move the phone.
The UI of the furios-camera app is also a little annoying: when viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time make me think "record videos" and "leave this screen" rather than "return to taking photos" and "delete current photo". I can get used to the surprising icons, but being so slow is a real problem.
GUI App Installation
The program for managing software doesn't work very well. It said that two updates for the Mesa package were needed, but didn't seem to want to install them; I ran "flatpak update" as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop use with no way to search for phone/tablet apps.
Generally I think it’s best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!
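For instance (the host and user names here are placeholders, not from the post), routine updates can be driven entirely over ssh from a desktop:

```shell
# Hypothetical host/user names; update the phone's Debian packages and
# flatpaks from another machine, bypassing the GUI software manager.
phone_update() {
    ssh furios@flx1s 'sudo apt update && sudo apt full-upgrade -y'
    ssh furios@flx1s 'flatpak update -y'
}
```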
Android Emulation
The file
/home/furios/.local/share/andromeda/data/system/uiderrors.txt
is created by the Andromeda system, which runs Android apps in an LXC container, and it appears to grow without end. After using the phone for a month it was 3.5G in size. The disk space usage isn't directly a problem: of the 110G of storage only 17G is used, and I don't have a need to put much else on it; even if I wanted to put backups of /home from my laptop on it when travelling, that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone, and wasting 3.5G out of 110G total is a fairly significant step towards breaking the entire system.
Also having lots of logging messages from a subsystem that isn’t even being used is a bad sign.
I just tried using it and it doesn’t start from either the settings menu or from the f-droid icon. Android isn’t that important to me as I want to get away from the proprietary app space so I won’t bother trying this any more.
Unfixable Problems
Unlocking
After getting used to fingerprint unlocking going back to a password is a pain. I think that the hardware isn’t sufficient for modern quality face recognition that can’t be fooled by a photo and there isn’t fingerprint hardware.
When I first used an Android phone using a pin to unlock didn’t seem like a big deal, but after getting used to fingerprint unlock it’s a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV.
This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.
Plasma Mobile
According to Reddit, Plasma Mobile (KDE for phones) doesn't support Halium and can never work on this phone because of it [4]. This is one of a number of potential issues with the phone; running on hardware that was never designed for open OSs is always going to have issues.
Wifi MAC Address
The MAC keeps changing on reboot so I can't assign a permanent IPv4 address to the phone. It appears from the MAC prefix of 00:08:22 that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses, and each device randomly chooses a MAC from that range on boot.
In the settings for a Wifi connection the “Identity” tab has a field named “Cloned Address” which can be set to “Stable for SSID” that prevents it from changing and allows a static IP address allocation from DHCP. It’s not ideal but it works.
Network Manager can be configured to use a permanently assigned MAC address for all connections or just for some. In the past I have copied MAC addresses from ethernet devices that were being discarded and reused them for this. For the moment the "Stable for SSID" setting does what I need, but I will consider setting a permanent address at some future time.
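The same settings can be applied with NetworkManager's command-line client; a sketch, assuming a connection profile named "home-wifi" (a placeholder):

```shell
# Placeholder connection name "home-wifi"; pick yours from `nmcli con show`.
pin_mac_stable() {
    # per-SSID stable MAC, equivalent to "Stable for SSID" in the GUI
    nmcli connection modify home-wifi 802-11-wireless.cloned-mac-address stable
}

pin_mac_fixed() {
    # or one fixed, locally-administered address for all boots
    nmcli connection modify home-wifi 802-11-wireless.cloned-mac-address 02:12:34:56:78:9a
}
```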
Docks
Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It’s unfortunate that this phone can’t do it.
The Good Things
It’s good to be able to ssh in to my phone, even if the on-screen keyboard worked as well as the Android ones it would still be a major pain to use when compared to a real keyboard. The phone doesn’t support connecting to a dock (unlike Samsung phones I’ve used for which I found Dex to be very useful with a 4K monitor and proper keyboard) so ssh is the best way to access it.
This phone has very reliable connections to my home wifi. I’ve had ssh sessions from my desktop to my phone that have remained open for multiple days. I don’t really need this, I’ve just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that.
Running the same OS on desktop and phone makes things easier to test and debug.
Having support for all the things that Linux distributions support is good. For example, none of the Android music players support all the audio encodings that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means losing quality, wasting storage space, or both. Lollypop, by contrast, plays FLAC, mp3, m4a, mka, webm, ogg, and more.
Conclusion
This is a step towards where I want to go but it’s far from the end goal.
The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits. But the battery life issues make them unusable for me.
Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small-tablet features, but without VoLTE. While the telcos have blocked phones without VoLTE, data devices still work, so if recruiters etc. would stop requiring phone calls then I could make one of them an option.
The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents phones when they mess things up that would be convenient.
I’ve run this phone as my daily driver since the 3rd of March and it has worked reasonably well. 6 weeks compared to my previous use of the PinePhonePro for 3 days. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work which basically did what it needed to do, it was at the bottom of the pile of unused phones at work and I didn’t want to take a newer iPhone that could be used by someone who’s doing more than the occasional SMS or Slack message.
So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.
[1]
[2]
[3]
[4]
14 April, 2026 09:31AM
by etbe
Ravi Dwivedi
Hungary Visa
The annual
LibreOffice conference 2025
was held in Budapest, Hungary, from the 3rd to the 6th of September 2025. Thanks to the
The Document Foundation
(TDF) for sponsoring me to attend the conference.
As Hungary is a part of the Schengen area, I needed a Schengen visa to attend the conference. In order to apply for a Schengen visa, one needs to get an appointment at VFS Global and submit all the required documents there, which are then forwarded to the embassy.
I got an appointment for a Hungary visa at VFS Global in New Delhi for the 24th of July. There were many appointment slots available for the Hungary visa. One could easily get an appointment for the next day at the Delhi center. There were some technical problems on the VFS website, though, as I was unable to upload a scanned copy of my passport while booking the appointment. I got an error saying, “Unfortunately, you have exceeded the maximum upload limit.”
The problem didn't get fixed even after contacting the VFS helpline. They asked me to try the Firefox browser and to delete all the cache, which I had already done.
So I created another account with a different email address and phone number, after which I was able to upload my passport and book an appointment. Other conference attendees from India also reported facing some technical issues on the VFS Hungary website.
Anyway, I went to the VFS Hungary application center as per my appointment on the 24th of July. Going inside, I located the Hungary visa application counter. There were two applicants ahead of me.
When it was my turn, the VFS staff warned me that my passport was damaged. The "damage" was on the bio-data page: all the details could be seen, but the lamination of the details page had worn off a bit. They asked me to write an application to the Embassy of Hungary in New Delhi describing the "damage" on my passport and stating that I insisted that VFS submit my application.
I got a bit worried about my application getting rejected due to the “damage.” But I decided to gamble my money on this one, as I didn’t have time (and energy) to apply for a new passport before this trip.
Moreover, I had struck out a couple of fields in my visa application form which were not applicable to me, due to which the VFS staff asked me to fill out another application form.
After this, the application got submitted, and it was 11,000 INR (including the fee to book the appointment at VFS). Here is the list of documents I submitted:
My passport
Photocopy of my passport
Two photographs of myself
Duly filled visa application form
Return flight ticket reservations
Payslips for the last three months
Invitation letter from the conference organizer (in Hungarian)
Proof of hotel bookings during my stay in Hungary
Cover letter stating my itinerary
Income tax returns filed by me
Bank account statement, signed and sealed by the bank
Travel insurance valid for the period of the entire trip
It took 2 hours for me to submit my visa application, even though there were only two applicants before me. This was by far the longest time to submit a Schengen visa application for me.
Fast-forward to the 30th of July: I received an email from the Embassy of Hungary asking me to submit an additional document, a paid air ticket, for my application. I had only submitted dummy flight tickets, and they had been enough for the Schengen visas I had applied for until then. This was the first time a country asked me to submit a confirmed flight ticket during the visa process.
I consulted my travel agent on this, and they were fairly confident that I would get the visa if the embassy was asking for confirmed flight tickets. So I asked the travel agent to book the flights. The tickets cost ₹78,000, and the airline was Emirates. Then I sent the flight tickets to the embassy by email.
The embassy sent the visa results on the 6th of August, which I received the next day.
My visa had been approved! It took 14 days for me to get the Hungary visa after submitting the application.
See you in the next one!
Thanks to
Badri
for proofreading.
14 April, 2026 05:50AM
April 12, 2026
Colin Watson
Free software activity in March 2026
My Debian contributions this month were all
sponsored
by Freexian.
You can also support my work directly via
Liberapay
or
GitHub Sponsors
OpenSSH
I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSSAPI key exchange algorithms.
I'm looking forward to being able to split out GSSAPI key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won't be in packages that nearly everyone has installed.
Python packaging
New upstream versions:
dill
django-modeltranslation
isort
langtable
pathos
pendulum
pox
ppft
pydantic-extra-types
pytango
python-asyncssh
python-datamodel-code-generator
python-evalidate
python-packaging (including fixes for python-hatch-requirements-txt and python-pyproject-examples)
python-zxcvbn-rs-py
rpds-py
smart-open
trove-classifiers
I packaged
pybind11-stubgen
, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn’t generate imports in a stable order; I
contributed a fix for that upstream
I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)
In trixie-backports, I updated pytest-django to 4.12.0.
I fixed a number of packages to support building with pyo3 0.28:
pendulum
pydantic-core
python-jellyfish
python-zxcvbn-rs-py
rpds-py
Other build/test failures:
python-bcrypt: Upcoming rust-getrandom update
python-cotengrust: FTBFS: error[E0432]: unresolved import rand::rngs::OsRng
austin: FTBFS: E ModuleNotFoundError: No module named 'pycparser.plyparser' (contributed upstream)
taurus: FTBFS: dh_auto_build: error: pybuild --build -i python{version} -p "3.14 3.13" returned exit code 13
python-datamodel-code-generator: Depends: python3-isort (< 8) but 8.0.0-1 is to be installed (contributed upstream)
Rust packaging
New upstream versions:
rust-rpds
Other bits and pieces
I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.
Code reviews
python-backports.zstd: Obsolete with Python 3.14
(sponsored partial fix from
YOKOTA
Hiroshi)
12 April, 2026 10:13AM
by Colin Watson
Vasudev Kamath
Hardening the Unpackageable: A systemd-run Sandbox for Third-Party Binaries
The Shift in Software Consumption
Historically, I have been a "distribution-first" user. Sticking to tools
packaged within the Debian archives provides a layer of trust; maintainers
validate licenses, audit code, and ensure the entire dependency chain is
verified. However, the rapid pace of development in the Generative AI
space—specifically with new tools like Gemini-CLI—has made this traditional
approach difficult to sustain.
Many modern CLI tools are built within the
npm
or
Python
ecosystems. For
a distribution packager, these are a nightmare; packaging a single tool often
requires packaging a massive, shifting dependency chain. Consequently, I found
myself forced to use third-party binaries, bypassing the safety of the Debian
archive.
The Supply Chain Risk
Recent supply chain attacks affecting widely used packages like
axios
and
LiteLLM
have made it clear: running unvetted binaries on a personal system
is a significant risk. These scripts often have full access to your
$HOME
directory, SSH keys, and the system D-Bus.
After discussing these concerns with a colleague, I was inspired by his
approach—using a Flatpak-style sandbox for even basic applications like Google
Chrome. I decided to build a generalized version of this using
OpenCode
and
Qwen 3.6 Fast
(which was available for free use at the time) to create a
robust, transient sandbox utility.
The Solution: safe-run-binary
My script, safe-run-binary, leverages systemd-run to execute binaries within an isolated scope. It implements strict filesystem masking and resource control to ensure that even if a dependency is compromised, the "blast radius" is contained.
Key Technical Features
1. Virtualized Home Directory (tmpfs)
Instead of exposing my real home directory, the script mounts a
tmpfs
over
$HOME
. It then selectively creates and bind-mounts only the
necessary subdirectories (like
.cache
or
.config
) into a virtual
structure. This prevents the application from ever "seeing" sensitive files
like ~/.ssh or ~/.gnupg.
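The post doesn't show the script itself, but the core of this step can be sketched with standard systemd-run properties (the exact flag set and the tool name are illustrative, not the actual safe-run-binary implementation):

```shell
# Sketch only: a transient unit that replaces $HOME with a tmpfs and
# bind-mounts just two subdirectories back in. "some-tool" is a placeholder.
run_with_virtual_home() {
    systemd-run --user --pty --collect \
        --property=TemporaryFileSystem="$HOME" \
        --property=BindPaths="$HOME/.cache $HOME/.config" \
        -- some-tool
}
```

With TemporaryFileSystem= in place, anything under $HOME that is not explicitly listed in BindPaths= (such as ~/.ssh) simply does not exist from the tool's point of view.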
2. D-Bus Isolation via xdg-dbus-proxy
For GUI applications, providing raw access to the D-Bus is a security hole.
The script uses
xdg-dbus-proxy
to sit between the application and the
system bus. By using the
--filter
and
--talk=org.freedesktop.portal.*
flags, the app can only communicate with necessary portals (like the file
picker) rather than sniffing the entire bus.
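A minimal illustration of that proxying step (the socket path and app name are placeholders; safe-run-binary's actual wiring may differ):

```shell
# Sketch: start a filtered session-bus proxy, then point the app at it.
start_filtered_bus() {
    xdg-dbus-proxy "$DBUS_SESSION_BUS_ADDRESS" /tmp/sandbox-bus \
        --filter --talk='org.freedesktop.portal.*' &
    # the sandboxed app now sees only the allowed bus names:
    DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/sandbox-bus some-app
}
```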
3. Linux Namespace Restrictions
The sandbox utilizes several
systemd
execution properties to harden the
process:
RestrictNamespaces=yes
: For CLI tools, this prevents the app from
creating its own nested namespaces.
PrivateTmp=yes
: Ensures a private
/tmp
space that isn't shared with
the host.
NoNewPrivileges=yes
: Prevents the binary from gaining elevated
permissions through SUID/SGID bits.
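Put together, the three properties above map onto a systemd-run invocation like this (a sketch; the real option set in safe-run-binary may well be larger):

```shell
# Sketch: apply the listed hardening knobs to an untrusted CLI tool.
run_hardened() {
    systemd-run --user --pty --collect \
        --property=RestrictNamespaces=yes \
        --property=PrivateTmp=yes \
        --property=NoNewPrivileges=yes \
        -- "$@"
}
# e.g.: run_hardened npx @google/gemini-cli
```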
4. GPU and Audio Passthrough
The script intelligently detects and binds Wayland, PipeWire, and NVIDIA/DRI
device nodes. This allows browsers like Firefox to run with full hardware
acceleration and audio support while remaining locked out of the rest of the
filesystem.
Usage
To run a CLI tool like Gemini-CLI with access only to a specific directory:
safe-run-binary -b ~/.gemini-config -- npx @google/gemini-cli
For a GUI application like Firefox:
safe-run-binary --gui -b ~/.mozilla -b ~/.cache/mozilla -b ~/Downloads -- firefox
Conclusion
While it is not always possible to escape the need for third-party software, it
is possible to control the environment in which it operates. By leveraging
native Linux primitives like
systemd
and namespaces, high-grade isolation is
achievable.
PS:
If you spot any issues or have suggestions for improving the script, feel free
to raise a PR on the
repo
12 April, 2026 07:23AM
by copyninja
Russ Allbery
Review: The Teller of Small Fortunes
Review:
The Teller of Small Fortunes
, by Julie Leong
Publisher:
Ace
Copyright:
November 2024
ISBN:
0-593-81590-4
Format:
Kindle
Pages:
324
The Teller of Small Fortunes
is a cozy found-family fantasy with a
roughly medieval setting. It was Julie Leong's first novel.
Tao is a traveling teller of small fortunes. In her wagon, pulled by her
friendly mule Laohu, she wanders the small villages of Eshtera and reads
the trivial fortunes of villagers in the tea leaves. An upcoming injury, a
lost ring, a future kiss, a small business deal... she looks around the
large lines of fate and finds the small threads. After a few days, she
moves on, making her solitary way to another village.
Tao is not originally from Eshtera. She is Shinn, which means she
encounters a bit of suspicion and hostility mixed with the fascination of
the exotic. (Language and culture clues lead me to think Shinara is
intended to be this world's not-China, but it's not a direct mapping.) Tao
uses the fascination to help her business; fortune telling is more
believable from someone who seems exotic. The hostility she's learned to
deflect and ignore. In the worst case, there's always another village.
If you've read any cozy found-family novels, you know roughly what happens
next. Tao encounters people on the road and, for various reasons, they
decide to travel together. The first two are a massive mercenary (Mash)
and a semi-reformed thief (Silt), who join Tao somewhat awkwardly after
Tao gives Mash a fortune that is far more significant than she intended.
One town later, they pick up an apprentice baker best known for her
misshapen pastries. They also collect a stray cat, because of course they
do. It's that sort of book.
For me, this sort of novel lives or dies by the characters, so it's good
news that I liked Tao and enjoyed spending time with her. She's quiet,
resilient, competent, and self-contained, with a difficult past and some
mysteries and emotions the others can draw over time. She's also
thoughtful and introspective, which means the tight third-person narration
that almost always stays on Tao offers emotional growth to mull over. I
also liked Kina (the baker) and Mash; they're a bit more obvious and
straightforward, but Kina adds irrepressible energy and Mash is a good
example of the sometimes-gruff soldier with a soft heart. Silt was a bit
more annoying and I never entirely warmed to him, but he's tolerable and
does get a bit of much-needed (if superficial) character development.
It takes some time for the reader to learn about the primary conflict of
the story (Tao does not give up her secrets quickly), so I won't spoil it,
but I thought it worked well. I was momentarily afraid the story would
develop a clear villain, but Leong has some satisfying alternate surprises
in store. The ending was well-done, although it is very happily-ever-after
in a way that may strike some readers as too neat.
The Teller of
Small Fortunes
aims for a quiet and relaxed mood rather than forcing
character development through difficult choices; it's a fine aim for a
novel, but it won't match everyone's mood.
I liked the world-building, although expect small and somewhat
disconnected details rather than an overarching theory of magic. Tao's
ability gets the most elaboration, for obvious reasons, and I liked how
Leong describes it and explores its consequences. Most of the attention in
the setting is on the friction, wistfulness, and small reminders of coming
from a different culture than everyone around you, but so long ago that
you are not fully a part of either world. This, I thought, was very
well-done and is one of the places where the story is comfortable with
complex feelings and doesn't try to reach a simplifying conclusion.
There is one bit of the story that felt like it was taken directly out of a Dungeons & Dragons campaign to a degree that felt jarring, but that was the only odd world-building note.
that was the only odd world-building note.
This book felt like a warm cup of tea intended to comfort and relax,
without large or complex thoughts about the world. It's not intended to be
challenging; there are a few plot twists I didn't anticipate, but nothing
that dramatic, and I doubt anyone will be surprised by the conclusions it
reaches. It's a pleasant time with some nice people and just enough
tension and mystery to add some motivation to find out what happens next.
If that's what you're in the mood for, recommended. If you want a book
that has Things To Say or will put you on the edge of your seat, maybe
save this one for another mood.
All the on-line sources I found for this book call it a standalone, but
The Keeper of Magical Things
is set in the same world, so I would
call it a loose series with different protagonists.
The Teller of
Small Fortunes
is a complete story in one book, though.
Rating: 7 out of 10
12 April, 2026 02:53AM
April 10, 2026
Reproducible Builds
Reproducible Builds in March 2026
Welcome to the March 2026 report from the
Reproducible Builds
project!
These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the
Contribute
page on our website.
Linux kernel hash-based integrity checking proposed
Distribution work
Tool development
Upstream patches
Documentation updates
Two new academic papers
Misc news
Linux kernel hash-based integrity checking proposed
Eric Biggers posted to the
Linux Kernel Mailing List
in response to a
patch series posted by Thomas Weißschuh
to introduce a calculated hash-based system of integrity checking to complement the existing
signature
-based approach. Thomas’
original post
mentions:
The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.
However,
Eric’s followup message
goes further:
I think this actually undersells the feature. It’s also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, OID registry,
crypto_sig
API, etc in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.
Distribution work
In Debian this month,
Lucas Nussbaum
announced
Debaudit
, a “new service to verify the reproducibility of Debian source packages”:
debaudit
complements the work of the Reproducible Builds project. While
reproduce.debian.net
focuses on ensuring that binary packages can be bit-for-bit reproduced from their source packages,
debaudit
focuses on the preceding step: ensuring that the source package itself is a faithful and reproducible representation of its upstream source or
Vcs-Git
repository.
kpcyrd
filed a bug against the
librust-const-random-dev
package
reporting that the
compile-time-rng
feature of the
ahash
crate uses the
const-random
crate in turn, which uses a macro to read/generate random numbers during the build. This issue was also
filed upstream
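As a toy illustration of why this breaks reproducibility (this is not the const-random crate's actual mechanism): any value drawn at build time and baked into the artifact makes two otherwise identical builds differ.

```python
# A build step that embeds a freshly drawn random value: two runs of the
# same "build" produce different artifacts, so the output is unreproducible.
import secrets

def build_artifact() -> str:
    seed = secrets.randbits(64)  # drawn at build time, baked into the output
    return f"const SEED: u64 = {seed};"

a, b = build_artifact(), build_artifact()
assert a != b  # differs on (virtually) every rebuild
```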
60 reviews of Debian packages were added, 4 were updated and 16 were removed this month adding to
our knowledge about identified issues
. One new issue type was added:
pkgjs_lock_json_file_issue
Lastly, Bernhard M. Wiedemann posted another
openSUSE
monthly update
for their work there.
Tool development
diffoscope
is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions
314
and
315
to Debian.
Chris Lamb:
Don’t run
test_code_is_black_clean
test in the autopkgtests. (
#1130402
).
Add some debugging info for PyPI debugging.
Jelle van der Waa:
Fix compatibility with
LLVM
version 22.
Adjust the PGP file detection regular expression.
Michael R. Crusoe:
Reformat the source code using
Black
version 26.1.0.
In addition, Vagrant Cascadian
updated
diffoscope
in GNU Guix to version
315
rebuilderd
, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there; it powers, amongst other things,
reproduce.debian.net
A new version,
0.26.0
, was released this month, with the following improvements:
Much smoother onboarding/installation.
Complete database redesign with many improvements.
New REST HTTP API.
It’s now possible to artificially delay the first reproduce attempt. This gives archive infrastructure more time to catch up.
And
many, many other changes
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard M. Wiedemann:
minify
(rust random HashMap) / (
alternative
by
kpcyrd
rpm-config-SUSE
(toolchain)
Chris Lamb:
#1129544
filed against
python-nxtomomill
#1130622
filed against
dh-fortran
#1130623
filed against
python-discovery
#1130666
filed against
kanboard
#1131168
filed against
moltemplate
#1131384
filed against
stacer
#1131385
filed against
libcupsfilters
#1131395
filed against
django-ninja
#1131403
filed against
python-agate
#1132074
filed against
aetos
#1132508
filed against
python-bayespy
kpcyrd
cargo
(HashMap random order issue;
more info
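Two of the patches above (minify and cargo) target the same class of bug: HashMap iteration order depends on a per-process random seed, so anything serialized straight from a map can differ from build to build. The usual fix is to sort before emitting, sketched here in Python (whose set ordering for strings likewise varies with the interpreter's hash seed):

```python
# Deterministic serialization: sort map/set contents before emitting, so
# the output does not depend on hash-seed-driven iteration order.
deps = {"zlib", "ahash", "minify"}
manifest = "\n".join(sorted(deps))
assert manifest == "ahash\nminify\nzlib"
```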
Documentation updates
Once again, there were a number of improvements made to our website this month including:
kpcyrd
Add a new page about
Rust
specifics.
Robin Candau:
Add link to the
diffoci
Arch Linux package on the
Tools
page.
Timo Pohl:
Add new
From Constrictor to Serpent: Investigating the Threat of Cache Poisoning in the Python Ecosystem
paper to the
Academic publications
page.
Add GitLab registration confirmation to
How to join the Salsa group
page.
Two new academic papers
Marc Ohm, Timo Pohl, Ben Swierzy and Michael Meier published a paper on the
threat of cache poisoning in the Python ecosystem
Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components.
This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code.
We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter’s integrity checks.
In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files.
Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files.
While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.
PDF
of the paper is available online.
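The reproducibility check at the heart of the paper can be sketched with the standard library: recompile the source and compare the result byte-for-byte with the shipped cache file. This is only a sketch of the idea; the paper's tooling handles the many cases where a naive comparison fails.

```python
# Sketch: a shipped .pyc is trustworthy if recompiling the accompanying
# source file reproduces it byte-for-byte (assumes CPython's default
# timestamp-based pyc headers and an unchanged source file).
import pathlib
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "mod.py"
    src.write_text("ANSWER = 42\n")

    # Stand-in for the cache file distributed with a package.
    shipped = pathlib.Path(py_compile.compile(str(src))).read_bytes()

    # Recompile to a fresh location and compare.
    recompiled_path = pathlib.Path(d) / "check.pyc"
    py_compile.compile(str(src), cfile=str(recompiled_path))
    assert recompiled_path.read_bytes() == shipped  # reproducible, no poisoning
```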
Mario Lins of the University of Linz, Austria, has published their PhD thesis on the topic of
Software supply chain transparency
We begin by examining threats to the software distribution stage — the point at which artifacts (e.g., mobile apps) are delivered to end users — with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.
PDF
of the paper is available online.
Misc news
On
our mailing list
this month:
Holger Levsen
announced that this year’s Reproducible Builds summit
will almost certainly be held in Gothenburg, Sweden, from September 22 until 24, followed by two days of hacking. However, these dates are preliminary and not 100% final — an official announcement is forthcoming.
Mark Wielaard posted to our list
asking a question
on the difference between
debugedit
and relative debug paths based on a comment on the
Build path
page: “Have people tried more modern versions of
debugedit
to get deterministic (absolute) DWARF paths and found issues with it?”
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our
Contribute
page on our website. However, you can get in touch with us via:
IRC:
#reproducible-builds
on
irc.oftc.net
Mastodon:
@reproducible_builds@fosstodon.org
Mailing list:
rb-general@lists.reproducible-builds.org
10 April, 2026 04:13PM
Jamie McClelland
AI Hacking the Planet
A colleague asked me if we should move all our money to our pillow cases after
reading the latest AI editorial from
Thomas
Friedman
The article reads like a press release from Anthropic, repeating the claim that
their latest AI model is so good at finding software vulnerabilities that it is
a danger to the world.
I think I now know what it’s like to be a doctor who is forced to watch Grey’s
Anatomy.
By now every journalist should be able to recognize the AI publicity playbook:
Step 1:
Start with a wildly unsubstantiated claim about how dangerous your
product is:
AI will cause human extinction before we have a chance to colonize mars
(remember that one? Even Kim Stanley Robinson, author of perhaps the most
compelling science fiction on colonizing mars
calls bull
shit
on it).
AI will eliminate all of our jobs
(this one was extremely effective at
providing cover for software companies laying off staff but it has quickly
dawned on people that the companies that did this are living in chaos not
humming along happily with functional robots)
AI will discover massive software vulnerabilities allowing bad actors to “hack
pretty much every major software system in the world”.
(Did Friedman pull that
directly from Anthropic’s press release or was that his contribution?)
Step 2:
To help stave off human collapse, only release the new version to a
vetted group of software companies and developers, preferably ones with big
social media followings
Step 3:
Wait for the limited release developers to spew unbridled
enthusiasm and shocking examples that seem to suggest this new AI product is
truly unbelievable
Step 4:
Watch stock prices and valuations soar
Step 5:
Release to the world, and experience a steady stream of mockery as
people discover how wrong you are
Step 6:
Start over
Even if Friedman missed the textbook example of the playbook, I have to ask:
if you think bad actors compromising software resulting in massive loss of
private data, major outages and wasted resources needs to be reported on, then
where have you been for the last 10 years? This literally happens
on a daily
basis
due to the
fundamentally flawed way capitalism has been writing software even before the
invention of AI. A small part of me wonders - maybe AI writing software is not
so bad, because how could it be any worse than it is now?
Also, let’s keep in mind that AI’s super ability at finding vulnerable software
depends on having access to the software’s source code, which most companies
keep locked up tight. That means the owners of the software can use AI to find
vulnerabilities and fix them but bad actors can’t.
Oh, but wait, what if a company is so incompetent that they
accidentally
release their proprietary software to the
Internet
Surely that would allow AI bots to discover their vulnerabilities and destroy
the company right? I’m not sure if anyone has discovered world ending
vulnerabilities in Anthropic’s Claude code since it was accidentally released,
but it is fun to watch people
mock
software
that is clearly
written by AI (and spoiler alert, it seems way worse than software written
now).
Well… we probably should all be keeping our money in a pillow case anyway.
10 April, 2026 12:27PM
Reproducible Builds (diffoscope)
diffoscope 317 released
The diffoscope maintainers are pleased to announce the release of diffoscope
version
317
. This version includes the following changes:
[ Chris Lamb ]
* Limit python3-guestfs Build-Dependency to !i386. (Closes: #1132974)
* Try to fix PYPI_ID_TOKEN debugging.
[ Holger Levsen ]
* Add ppc64el to the list of architectures for python3-guestfs.
You find out more by
visiting the project homepage
10 April, 2026 12:00AM
April 09, 2026
Russell Coker
HP Z640 and E5-2696 v4
I recently decided to upgrade the CPU in my workstation, the
E5-2696 v3 CPU was OK (passmark 2045 for single thread and 21,380 for multi thread) [1]
but I felt like buying something better so I got a
E5-2696 v4 (passmark 2115 and 24,643) [2]
. I chose the E5-2696 v4 because I was looking for a E5-2699 v4 and found an ebay seller who had them at $140 but was offering the E5-2696 v4 for $99 and the passmark results for the two CPUs are almost identical.
After buying the CPU and waiting for it to be delivered I realised that the Z640 doesn’t include it in the list of supported CPUs and that the maximum TDP of any supported CPU is 145W while according to passmark it has a TDP of 150W. I looked for information about it on Intel ARK (the official site for specs of Intel CPUs) and discovered that
“The Intel® Xeon® Processor E5-2696 v4 is designed to be used by system manufacturers (OEMs), and this means they can modify its specifications depending on the system where it will be implemented” and “The processor does not have an ARK page for this reason, since it has no standard specification from Intel, so depending on the original system, it is necessary to contact that system manufacturer for information” [3]
. That’s the official response from an Intel employee saying that there are no standard specs for that CPU!!!
Somehow I had used a E5-2696 v3 for 3 years without realising that
the same lack of support and specs applies to it [4]
I installed the new CPU in another Z640 which had a E5-1620 v3 CPU and it worked. I was a little surprised to discover that the hole in the corner is in the bottom right (according to the alignment of the printed text on the top) for all my E5-26xx CPUs while it’s in the top left on the E5-1620 v3. Google searches for things like “e5-2600 e5-1600 difference” and “e5-2600 e5-1600 difference hole in corner” didn’t turn up any useful information. The best information I found was from the
Linus Tech Tips forum which says that the hole is to allow gasses to escape when the CPU package is glued together [5]
which implies (but doesn’t state) that the location of the hole has no meaning. I had previously thought that the hole was to indicate the location of “pin 1” and was surprised when the new CPU had the hole in the opposite corner. Hopefully in future when people have such concerns they can find this post and not be worried that they are about to destroy their CPU, PC, or both when upgrading the CPU.
The previous Z640 was one I bought from Facebook marketplace for $50 in “unknown condition” in the expectation that I would get at least $50 of parts but it worked perfectly apart from one DIMM socket. The Z640 I’m using now is one I bought from Facebook marketplace for $200 and it’s working perfectly with 4 DIMMs, 128G of RAM, and the E5-2696 v4 CPU. $300 for a workstation with ECC RAM and a 22 core CPU is good value for money!
There are some accounts of the E5-2696 v4 not working on white-box motherboards including a claim that when it was selling for $4000US someone’s motherboard destroyed one. The best plan for such CPUs is to google for someone who’s already got it working in the same machine, which means a name-brand server. That doesn’t guarantee that it will work (Intel refuses to supply specs and states that different items may work differently) but greatly improves the probability.
This system has the HP BIOS version 2.61, note that the Linux
fwupd
package doesn’t seem to update the BIOS on HP workstations so you need to manually download it and install it. There is a possibility that a Z640 with an older BIOS won’t work with this CPU.
Here is the previous post in my Z640 saga [6]
[1]
[2]
[3]
[4]
[5]
[6]
09 April, 2026 11:33PM
by etbe
April 08, 2026
Jonathan Dowland
nvim-µwiki
In January 2025,
as a pre-requisite for something else, I published a minimal
neovim
plugin called
nvim-µwiki
. It's essentially just the features from
vimwiki
that I regularly use, which is a small fraction of them.
I forgot to blog about it. I recently dusted it off and cleaned it up.
You can find it here, along with a longer list of its features and
how to configure it:
I had a couple of design goals. I didn't want to define a new
filetype
so this is designed to work with the existing markdown one. I'm
using neovim, so I wanted to leverage some of its features: this plugin
is written in
Lua
, rather than vimscript. I use the parse trees
provided by
TreeSitter
to navigate the structure of a document.
I also decided to "plug into" the existing tag stack navigation, rather
than define another dimension of navigation (along with buffers, etc.)
to track: Following a wiki-link pushes onto the tag stack, just as if
you followed a tag.
This was my first serious bit of
Lua
programming, as well as my first
dive into neovim (or even vim) internals.
Lua
is quite reasonable. Most
of the vim and neovim architecture is reasonable. The emerging conventions
about structuring neovim plugins are mostly reasonable. TreeSitter is, well,
interesting, but the devil is very much in the details. Somehow all
together the experience for me was largely just frustrating, and I didn't
really enjoy writing it.
08 April, 2026 08:31PM
April 06, 2026
Thorsten Alteholz
My Debian Activities in March 2026
Debian LTS/ELTS
This was my hundred-forty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded or worked on:
DLA 4500-1
] gimp security update to fix four CVEs related to denial of service or execution of arbitrary code.
DLA 4503-1
] evolution-data-server to fix one CVE related to a missing canonicalization of a file path.
DLA 4512-1
] strongswan security update to fix one CVE related to a denial of service.
[ELA-1656-1] gimp security update to fix four CVEs in Buster and Stretch related to denial of service or execution of arbitrary code.
[ELA-1660-1] evolution-data-server security update to fix one CVE in Buster and Stretch related to a missing canonicalization of a file path.
[ELA-1665-1] strongswan security update to fix one CVE in Buster related to a denial of service.
[ELA-1666-1] libvpx security update to fix one CVE in Buster and Stretch related to a denial of service or potentially execution of arbitrary code.
I also worked on the
check-advisories
script and proposed a fix for cases where issues would be assigned to the coordinator instead of the person who forgot to do something.
I also did some work for a kernel update and packages
snapd
and
lxd
on security-master and attended the monthly LTS/ELTS meeting. Last but not least I started to work on
gst-plugins-bad1.0
Debian Printing
This month I uploaded new upstream versions of:
epson-inkjet-printer-escpr
to unstable.
sane-airscan
to unstable.
printer-driver-oki
to unstable.
Several packages take care of group lpadmin in their maintainer scripts. With the upload of version 260.1-1 of
systemd
there is now a central package (
systemd | systemd-standalone-sysusers | systemd-sysusers
) that takes care of this. Other dependencies like
adduser
can now be dropped.
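The mechanism is declarative: a package ships a small sysusers.d fragment and systemd-sysusers creates the user or group at boot or on package installation. A fragment for the lpadmin group might look like this (the file name and exact entry here are illustrative, see sysusers.d(5)):

```
# /usr/lib/sysusers.d/printing.conf (illustrative) — the 'g' type creates
# the group if it does not exist; '-' means an auto-allocated GID
g lpadmin -
```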
This work is generously funded by
Freexian
Debian Lomiri
This month I continued to work on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used. I am also able to upload Debian packages to the corresponding Ubuntu PPA now. A small bug had to be fixed in the python script to allow the initial configuration in Launchpad.
This work is generously funded by
Fre(i)e Software GmbH
Debian Astro
This month I uploaded a new upstream version or a bugfix version of:
libplayerone
to experimental. For a list of other packages please see below.
I also uploaded lots of indi-drivers (
libplayerone, libsbig, libricohcamerasdk, indi-asi, indi-eqmod, indi-fishcamp, indi-inovaplx, indi-pentax, indi-playerone, indi-sbig, indi-mi, libahp-xc, indi-aagcloudwatcher, indi-aok, indi-apogee, libapogee3, indi-nightscape, libasi, libinovasdk, libmicam, indi-avalon, indi-beefocus, indi-bresserexos2, indi-dsi, indi-ffmv, indi-fli, indi-gige, info-gphoto, indi-gpsd, indi-gpsnmea, indi-limesdr, indi-maxdomeii, indi-mgen, indi-rtklib, indi-shelyak, indi-starbook, indi-starbookten, indi-talon6, indi-weewx-json, indi-webcam, indi-orion-ssg3, indi-armadillo-playtypus
) to experimental to make progress with the indi-transition. No problems with those drivers appeared, and the next step would be the upload of indi version 2.x to unstable. I hope this will happen soon, as new drivers are already waiting in the pipeline. There have also been four packages that migrated to the official indi package and are no longer needed as 3rdparty drivers (indi-astrolink4, indi-astromechfoc, indi-dreamfocuser, indi-spectracyber).
While working on these packages, I thought about testing them. Unfortunately I don’t have enough hardware to really check out every package, so I can upload most of them only as is. In case anybody is interested in a better testing coverage and me being able to provide upstream patches, I would be very glad about hardware donations.
Debian IoT
This month I uploaded a new upstream version or a bugfix version of:
pywws
to unstable.
Debian Mobcom
This month I uploaded a new upstream version or a bugfix version of:
osmo-trx
to unstable.
misc
This month I uploaded a new upstream version or a bugfix version of:
cc-tool
to unstable.
mailio
to unstable.
gnupg-pkcs11-scd
to unstable.
odoo
to unstable.
I also sponsored the upload of Matomo. Thanks a lot to William for preparing the package.
06 April, 2026 05:45PM
by alteholz
April 04, 2026
Isoken Ibizugbe
Post Outreachy Activities
It’s been about a month since I wrapped up my Outreachy internship, but my journey with
Debian
is far from over. I planned to keep contributing and exploring the community, and these past few weeks have been busy.
Testing Locales and Solving Bug #1111214
For the
openQA
project, we decided to explore how accurate local language installations are and see if we can improve the translations. While exploring this, I started working on automating a test for a specific bug report:
Debian Bug #1111214
This is a test I had started by writing a detailed description of the installation process to confirm that selecting the
Spanish_panama
locale works accurately. I spent time studying previous language installation tests, and I learned that I needed to add a specific tag (LANGUAGE-) to the “needles” (visual test markers).
Since the installation wasn’t in English anymore, taking the correct screenshots and defining the areas took quite some time. I used the following command on the CLI to run the test:
`openqa-cli api -X POST isos ISO=debian-live-testing-amd64-gnome.iso DISTRI=debian-live VERSION=forky FLAVOR=gnome LANGUAGE=spanish_panama ARCH=x86_64 BUILD=1311 CHECKSUM=unknown`
While working on this, I got stuck at the
complete_installation
step. Because the keyboard layout had changed to Spanish, the commands required to confirm a successful install weren’t working as expected. Specifically, we had an issue typing the “greater than” sign (>).
My mentor,
Roland Clobus
, worked on a clever maneuver for the keys (AltGr-Shift-X), which was actually submitted
upstream
to openSUSE.
In this step, I also had to confirm that the locale was correctly set to LANG=”es_PA.UTF-8″. I had to dig into the scripts and Linux commands to make this work. It was a bit intimidating at first, but it turned out to be a great learning experience. You can follow my progress on this
Merge Request here
. I’m currently debugging a small issue where the “home” key seems to click twice in the final step, and after that, the test will be complete.
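The LANG check itself boils down to parsing the locale output on the installed system. A minimal sketch of that parsing step (the expected value es_PA.UTF-8 comes from the bug report; get_lang is a made-up helper, not part of openQA):

```shell
# Extract the LANG value from locale(1)-style output and compare it with
# the expected es_PA.UTF-8 (get_lang is a hypothetical helper).
get_lang() {
    printf '%s\n' "$1" | sed -n 's/^LANG=//p' | tr -d '"'
}

sample='LANG="es_PA.UTF-8"
LC_TIME="es_PA.UTF-8"'

[ "$(get_lang "$sample")" = "es_PA.UTF-8" ] && echo "locale OK"
```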
Community & Connections
Beyond the code, I’ve been getting more involved in the social side of Debian:
Debian Women:
I attended the monthly meeting and met
Sruthi Chandran
. I’ve always seen her name as an Outreachy organizer, so it was great to meet her! She is currently running for Debian Project Leader (DPL). We also discussed starting technical sessions to introduce members to
packaging
, which I am very excited to learn.
DebConf Preparation:
I am officially preparing for my first
DebConf
! My mentors, Tassia and Roland, along with my fellow intern Hellen, have been incredibly supportive in guiding me through the application and presentation process.
04 April, 2026 11:24PM
by Isoken Ibizugbe
Dima Kogan
Simple gpx export from ridewithgps
The
Tour de Los Padres
is coming! The race organizer post
the route on
ridewithgps
. This works, but has convoluted interfaces for people not wanting to
use their service. I just wrote a simple script to export their data into a
plain .gpx file,
including
all the waypoints; their exporter omits those.
I've seen two flavors of their data, so here're two flavors of the
gpx-from-ridewithgps.py
script:
#!/usr/bin/python3
import sys
import json

# ridewithgps JSON field names below ("lat"/"lng" for points of interest,
# "x"/"y" for track points) follow their API; the printed strings are a
# standard GPX 1.1 skeleton
def quote_xml(s):
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print("Reading stdin", file=sys.stderr)
data = json.load(sys.stdin)

print(r"""<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="gpx-from-ridewithgps.py"
     xmlns="http://www.topografix.com/GPX/1/1">""")

# Each point of interest becomes a GPX waypoint
for item in data["extras"]:
    if item["type"] != "point_of_interest":
        continue
    poi = item["point_of_interest"]
    print(f'<wpt lat="{poi["lat"]}" lon="{poi["lng"]}">')
    print(f'  <name>{quote_xml(poi["name"])}</name>')
    desc = poi.get("description", "")
    if len(desc):
        print(f'  <desc>{quote_xml(desc)}</desc>')
    print('</wpt>')

# The route itself becomes a single track
print("<trk><trkseg>")
for pt in data.get("route", {}).get("track_points", []):
    print(f'<trkpt lat="{pt["y"]}" lon="{pt["x"]}"/>')
print("</trkseg></trk>")
print("</gpx>")
#!/usr/bin/python3
import sys
import json

def quote_xml(s):
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print("Reading stdin", file=sys.stderr)
data = json.load(sys.stdin)

print(r"""<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="gpx-from-ridewithgps.py"
     xmlns="http://www.topografix.com/GPX/1/1">""")

# This flavor of the data keeps the points of interest at the top level
for poi in data["points_of_interest"]:
    print(f'<wpt lat="{poi["lat"]}" lon="{poi["lng"]}">')
    print(f'  <name>{quote_xml(poi["name"])}</name>')
    desc = poi.get("description", "")
    if len(desc):
        print(f'  <desc>{quote_xml(desc)}</desc>')
    print('</wpt>')

# Course points (turn-by-turn cues) also become waypoints; they store
# longitude/latitude as "x"/"y" and the cue text as "n"
for poi in data["course_points"]:
    print(f'<wpt lat="{poi["y"]}" lon="{poi["x"]}">')
    print(f'  <name>{quote_xml(poi["n"])}</name>')
    print('</wpt>')

print("<trk><trkseg>")
for pt in data['track_points']:
    print(f'<trkpt lat="{pt["y"]}" lon="{pt["x"]}"/>')
print("</trkseg></trk>")
print("</gpx>")
You invoke it by downloading the route and feeding it into the script:
curl -s https://ridewithgps.com/routes/54493422.json | ./gpx-from-ridewithgps.py > out.gpx
Note that the route number 54493422 is in the url above.
04 April, 2026 05:21PM
by Dima Kogan
April 02, 2026
Joerg Jaspert
Building a house - 1 year in
Haven’t written here about it, but last March we finally started on
our journey to get our own house built, so we can move out of the
rented flat here.
That will be a big step, both the actual building, but also the
moving - I have been living in this one single place for 36 years now.
If you can read german there is
a dedicated
webpage
where I sometimes write about the
process. Will have much more details (and way more ramblings) than the
following part.
If you can’t read german, a somewhat short summary follows
. Yes,
still a lot of text, but shortened, still.
What? Why now?
Current flat has 83m² - which simply isn’t enough space. And
the number of rooms also doesn’t fit anymore. But it is hard to find a
place that fits our requirements (which do include location).
Moving to a different rented place would also mean a changed amount of
rent. And nowadays that would be a huge increase (my current rent is
still the price from about 30 years ago!).
So if we go and pay more - we could adjust and pay for something we
own instead. And both, my wife and I had changes in our jobs that made
it possible for us now, so we started looking.
Market
Brrrr, looking is good, actually finding something that fits - not so.
We never found an offer that fit. Space wise, sure. But then location
was off, or price was idiotically high. Location fit, but then size
was a joke, and guess about the price… Who needs 200 square meters
with 3 rooms? Entirely stupid design choices there. Or how about 40
square meters of hallway - with 50m² of tiny rooms around. What are
they smoking? Oh, there, useful size, good rooms - but now you want
more money than a kidney is worth, or something. Thanks, no.
New place
In February 2025 we finally got lucky and found a (newly opened) area
with a large number of places to build a house on. Had multiple talks
with someone from one of the companies developing that area (there are
two you can select from), then talked with banks and signed a contract
in March 2025. We got promised that actual house construction would be
first quarter of 2026, finished in second quarter.
House type
There are basically 2 ways of building a new house (that matter here).
First is called “Massivhaus”, second is called “Fertighaus” in german,
roughly translating to solid and prefabricated. The latter commonly a
wood based construction, though it doesn’t need to be. The important
part of it is the prefabrication, walls and stuff get assembled in a
factory somewhere and then transported to your place, where they play
“big kid lego” for a day and suddenly a house is there.
A common thought is that “prefabricated” is faster, but that is only half
true. Sure, the actual work on site is way shorter - usually one or
two days and the house is done - while a massive construction usually
takes weeks to build up. But that is only a tiny part of the time
needed; the major part goes into planning and waiting, and there
it doesn’t matter what material you end up with.
Money fun
Last year already wasn’t the best time to start a huge loan - but
isn’t it always “
a few years ago would have been better
”? So we had
multiple talks with different banks and specialised consultants until
we found something that we thought is good for us.
Thinking about it now - we should have put even more money on top as
“reserve”, but who could have thought that 2026 turns into such a
shitshow? Does not help at all, quite the contrary. And that damn
lotto game always ends up with the wrong numbers, meh.
Plans and plans and more plans - and rules
For whichever reason you can not just go and put something on your
ground and be happy. At least not if you are one of the normal people and not
enormously rich. There is a large set of rules to follow. Usually that
is a good thing, even though some rules are sometimes hard to understand.
In Germany, besides the usual laws, we have something that is called
“Bebauungsplan”, which translates to “development plan” (don’t know if
that carries the right meaning, it’s a plan on what and how may be
built, which can have really detailed specifications in). It basically
tells you every aspect
on top
of the normal law that you have to
keep in mind.
In our case we have the requirement of 2 full floors and CAN have a
third smaller on top, it limits how high the house can be
and
also
how high our ground floor may be compared to the street. It regulates
where on the property we may build and how much ground we may cover
with the house, it gives a set of colors we are allowed to use, it
demands a flat roof that we must have as a green roof and has a number
of things more that aren’t important enough to list here. If you do
want to see the full list,
my german post on it has all the details
that matter to
us
With all that stuff in mind - off to plans. Wouldn’t have believed how
many details there are to take in. Room sizes are simple, but how to
arrange them for ideal usage of the sun, useful ways inside the house,
but also keeping in mind that water needs to flow through and out.
Putting a bath room right atop a living room means a water pipe needs
to go down there. Switch the bath room side in the house, and it
suddenly is above the kitchen - means you can connect the pipes from
it to the ones from the kitchen, which is much preferable to going
through the living room. And lots more such things.
It took us until nearly end of October to finalize the plans! And we
learned a whole load from it. We started with a lot of wishes. The
planner tried to make them work. Then we changed our minds. Plans
changed. Minds changed again. Comparing the end result with the first
draft we changed most of the ground floor around, with only the stairs
and the entrance door at the same position. Less changes for the upper
floor, but still enough.
Side quests
The whole year was riddled with something my son named side quests. We
visited a construction exhibition near us, we went to the house
builders factory and took a look on how they work. We went to many
different other companies that do SOME type of work which we need
soon, say inside floors, painters, kitchen and more stuff.
Of course the most important side quest was a visit to the notary to
finalize the contracts, especially for the plot of land (in Germany
you must have a notary for that to get entered into the government’s
books). Creates lots of fees, of course, for the notary and also the
government (both fees and taxes here).
Building permit
We had been lucky and only needed a small change to the plans to get
the building permit - and the second part, the wastewater permit (yes,
you need a separate one for this) also got through without trouble.
Choices, so many of them
So in January we finally had an appointment for something that’s
called “Bemusterung” which badly translates to “Sampling”. Basically
two days at the house builders factory to select all of what’s needed
for the house that you don’t do in the plans. Doors, inside and out
and their type and color and handles. Same things for the windows and
the blinds and the protection level you want the windows to have.
Decide about stairs, design for the sanitary installations - and also
the height of the toilet! - and the tiles to put into the bathrooms.
Decisions on all the tech needed (heating system, ventilation and
whatnot).
Two days, busy ones - and you can easily spend a lot of extra money
here if you aren’t careful. We managed to get “out of it” with only
about 4000€ extra, so pretty good.
Electro and automation
Now, here I am special. Back when I was young the job I learned is
electrician. So here I have very detailed wishes. I am also running
lots of automation in my current flat - obviously the new house should
be better than that. So I have a lot of ideas and thoughts on it, so
this is entirely extra and certainly out of the ordinary the house
builder usually see.
Which means I do all of that on my own. Well, the planning and some of
the work, I must have a company at hand for certain tasks, it is
required by some rules. But they will do what I planned, as long as I
don’t violate regulations.
Which means the whole electrical installation is … different.
Entirely planned for automatisms and using KNX for it. I am so happy
to ditch Homeassistant and the load of Homematic, Zigbee and ZWave
based wireless things.
Ok, Homeassistant is a nice thing - it can do a lot. And it can bridge
between about any system you can find. But it is a central single point of
failure. And it is a system that needs constant maintenance. Not
touched for a while? Plan for a few hours playing update whack-a-mole.
And often enough a component here or there breaks with an update. Can
be fixed, but takes another hour or two.
So I change. Away from wireless-based stuff. To wires. To a system
that's been a standard for decades already. And works entirely without a
SPOF. (Yes, you can add one here too.) And, most important, should I
ever die, it can easily be maintained by anyone out there dealing with
KNX, which is a large number of people and companies. Without digging
through dozens of specialised integrations and whatnot.
I may even end up with Homeassistant again - but that will entirely be
as a client. It won’t drive automations. It won’t be the central point
to do anything for the house. It will be a logging and data collecting
thing that enables me to put up easy visualizations. It may be an easy
interface for smartphones or tablets to control parts of the house,
for those parts where one wants this to happen. Not the usual
day-to-day stuff, extras on top.
Actual work happening
Since March there finally is visible action. The base of the house
is getting built. On Wednesday the 1st of April we finally got the base
slab poured on the construction site, and in another 10 days the house
is getting delivered and built up. A 40-ton mobile crane will be there.
02 April, 2026 09:23PM
Samuel Henrique
Bringing HTTP/3 to curl on Amazon Linux
tl;dr
Starting with
curl 8.17.0-1.amzn2023.0.2
in Amazon Linux 2023, you can now use HTTP/3.
dnf swap -y libcurl-minimal libcurl-full
dnf swap -y curl-minimal curl-full
curl --http3-only https://example.com
(HTTP/3 is only enabled in the curl -full builds)
Or, if you would like to try it out in a container:
podman run amazonlinux:2023 /bin/sh -c 'dnf upgrade -y --releasever=latest && dnf swap -y libcurl-minimal libcurl-full && dnf swap -y curl-minimal curl-full && curl --http3-only https://example.com'
For a list of test endpoints, you can refer to
The Upgrade I Didn't Have to Make
My teammate Steve Zarkos, who previously worked on upgrading OpenSSL in Amazon
Linux from 3.0 to 3.2, spent the last few months on the complex task of bumping
OpenSSL again, this time to 3.5. A bump like this only happens after extensive
code analysis and testing, something that I didn't foresee happening when
AL2023 was released but that was a notable request from users.
Having enabled HTTP/3 on Debian, I was always keeping an eye out for
when I would get to do the same for Amazon Linux (mind you, I work at
AWS, in the Amazon Linux org). The bump to OpenSSL 3.5 was the perfect
opportunity: for the first time, Amazon Linux is shipping an OpenSSL
version that ngtcp2 supports for HTTP/3.
Non-Intrusive Change
In order to avoid any intrusive changes for existing users of AL2023, I've only
enabled HTTP/3 in the full build of curl, not in the minimal one; this means
there is no change for the minimal images.
The way curl handles HTTP/3 today also does not lead to any behavior changes
for those who have the full variants of curl installed: HTTP/3 is only used
if the user explicitly asks for it with the
--http3
or
--http3-only
flags.
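To check which protocol version a transfer actually used, the flag can be combined with curl's --write-out variable %{http_version}; https://example.com here is just an illustrative endpoint, not an endorsement of any particular test server:

```shell
# Request HTTP/3 explicitly and print the negotiated version;
# without --http3/--http3-only curl sticks to HTTP/1.1 or HTTP/2.
curl --http3-only -s -o /dev/null -w '%{http_version}\n' https://example.com
```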
Side Quests
Supporting HTTP/3 in curl also requires building it with ngtcp2 and nghttp3,
two packages which were not shipped in Amazon Linux. Besides, my team doesn't
even own the curl package; we are a security team, so our packages are the
security-related ones such as OpenSSL and GnuTLS. Our main focus is the
services behind Amazon Linux's vulnerability handling, not package maintenance.
I worked with the owners of the curl package and got approval of a plan to
introduce the two new dependencies under their ownership and to enable the
feature in curl; I appreciate their responsiveness.
Amazon Linux 2023 is forked from Fedora, so while introducing ngtcp2, I also
sent a couple of Pull Requests upstream to keep things in sync:
[ngtcp2] package latest release 1.21.0
[ngtcp2] do not skip tests
While building the curl package in Amazon Linux, I noticed the build was
taking 1 hour from start to end, and the culprit was something well known to
me: tests.
The curl test suite is quite extensive, with more than 1600 tests, all
running without parallelization, and running twice for each build of the
package: once for the minimal build and again for the full build.
I had previously enabled parallel tests in Debian back in 2024 but never got
around to submitting the same improvement to Amazon Linux or Fedora; this is
now fixed. The build time for Amazon Linux came down from 1 hour to 10
minutes on the same host, and Fedora promptly merged my PR to do the same
there:
[curl] run tests in parallel
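For reference, when running the suite by hand from a curl source tree, parallelism is a harness flag; the packaging change does the equivalent through the build system, and the job count below is just an example:

```shell
# Build the test harness helpers, then run the suite with one
# job per CPU instead of the default serial execution.
make -C tests
./tests/runtests.pl -j "$(nproc)"
```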
All of this uncovered a test which is timing-dependent, meaning it's not
supposed to be run with high levels of parallelism, so there goes another PR,
this time to curl:
Flag test 766 as timing-dependent#21155
What started as enabling a single feature turned into improvements that landed
in curl, Fedora, and Amazon Linux alike. I did this in a mix of work and
volunteer time, mostly during work hours (using my work email address when
that was the case), but I'm glad I put in the extra time for the sake of
improving curl for everyone.
Release Notes
Amazon Linux 2023 release notes for 2023.10.20260330
02 April, 2026 12:00AM
by Unknown
Reproducible Builds (diffoscope)
diffoscope 316 released
The diffoscope maintainers are pleased to announce the release of diffoscope
version
316
. This version includes the following changes:
[ Jelle van der Waa ]
* Fix compatibility with LLVM version 22.
[ Chris Lamb ]
* Add some debugging info for PyPI debugging.
You can find out more by
visiting the project homepage
02 April, 2026 12:00AM
April 01, 2026
Joey Hess
banning all Anthropic employees
Per
my policies
I need to ban every employee and contractor of Anthropic Inc from ever
contributing code to any of my projects. Anyone have a list?
Any project that requires a Developer Certificate of Origin or similar should
be doing this, because Anthropic is making tools that explicitly lie about
the origin of patches to free software projects.
UNDERCOVER MODE — CRITICAL
You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. [...]
Do not blow your cover.
NEVER include in commit messages or PR descriptions:
[...]
The phrase 'Claude Code' or any mention that you are an AI
Co-Authored-By lines or any other attribution
--
via @vedolos
01 April, 2026 04:36PM
Ben Hutchings
FOSS activity in March 2026
Debian packages:
firmware-nonfree
Bugs
closed
#1064620: firmware-nonfree: suggestions for the packaging, gencontrol.py and debian/rules
closed
#1126797: firmware-intel-graphics: Please ship irci_irci_ecr-master_20161208_0213_20170112_1500.bin as ipu3-fw.bin
closed
#1131751: ABI break in amdxdna npu firmware
Merge requests:
opened and merged
!140: Update to 20260309
opened and merged
!141: Clean up packaging (from Nicolas Boulenguez)
opened
!142: Replace copy-firmware.sh; install files and generate metainfo.xml at build time
Uploads
uploaded version 20260110-1~bpo13+1 to trixie-backports
uploaded version 20260221-1 to unstable
uploaded version 20260221-1~bpo13+1 to trixie-backports
uploaded version 20260309-1 to unstable
hexagon-dsp-binaries
Bugs
replied to and reassigned
#1130844: firmware-qcom-soc depends on unavailable package firmware-qcom-dsp
initramfs-tools
Merge requests:
merged
!172: Use 3cpio for unmkinitramfs/lsinitramfs if available
merged
!186: update-initramfs: support loading post-update hooks from /usr/share/ too
merged
!190: autopkgtest: increase timeout to 240s on s390x
libtirpc
Bugs
replied to and reassigned
#1132176: rpc.mountd: symbol lookup error: rpc.mountd: undefined symbol: rpc_gss_getcred, version TIRPC_0.3.0
libvirt
Bugs
replied to and reassigned
#1130974: libvirt: Should use nftables for IP masquerading to work with PREEMPT_RT
linux
Bugs
replied to
#1128861: linux: when serving NFS, client attempts to lock served files fail with “No locks available”
replied to
#1130656: [grub2] wrong kernel version order
closed
#1132224: linux: nouveau regression on GK208B/GT 730 after kernel update: artifacts and X crashes
Merge requests:
reviewed
!1842: Merge kernel-wedge and use directly
reviewed and merged
!1849: Cleanup installer
merged
!1853: [amd64] drivers/platform/x86/uniwill: Enable UNIWILL_LAPTOP as module
opened and merged
!1854: Fix ordering of kernel version strings for multiple Debian revisions
reviewed and closed
!1857: crypto: padlock-sha - Disable for Zhaoxin processor
opened
!1862: Fix regressions in debian/bin/test-patches
opened
!1865: Draft: hyperv-daemons: Build using upstream Makefile; install hv_fcopy_uio_daemon
(LTS) worked on backports to 5.10 and 6.1 of the fixes for
CrackArmor
security flaws
Uploads
(LTS) uploaded version 5.10.251-1 to bullseye-security
uploaded version 6.12.74-2~bpo12+1 to bookworm-backports
uploaded version 6.18.15-1~bpo13+1 to trixie-backports
uploaded version 6.19.6-2~bpo13+1 to trixie-backports
uploaded version 6.19.8-1~bpo13+1 to trixie-backports
(LTS)
linux-6.1
Uploads
uploaded version 6.1.164-1~deb11u1 to bullseye-security
linux-base
Uploads
uploaded version 4.12.1~bpo12+1 to bookworm-backports
sgt-puzzles
Bugs
closed
#363441: It’s too easy to quit
closed
#550311: slant: Please make shading of filled squares configurable
closed
#1079717: sgt-puzzles: [Mozaic] crashes when copying the game
closed
#1116973: sgt-puzzles: Loopy Spectres type
Uploads
uploaded version 20250730.a7c7826-1 to unstable
wireless-regdb
Uploads
(LTS) uploaded version 2026.02.04-1~deb11u1 to bullseye-security
Debian non-packages:
kernel-team
added script to show status of all kernel team backports
pipeline
Issues
opened
#552: piuparts job fails to install dependencies outside of main
Mailing lists:
debian-kernel
posted and replied to
Agenda items for kernel-team meeting on 2026-03-18
replied to
How is “keep two last kernels” policy implemented?
debian-lts-announce
posted
[SECURITY] [DLA 4498-1] linux security update
posted
[SECURITY] [DLA 4499-1] linux-6.1 security update
posted
[SECURITY] [DLA 4501-1] wireless-regdb new upstream version
linux-bluetooth
(LTS) replied to
[PATCH v3] Bluetooth: L2CAP: Fix invalid response to L2CAP_ECRED_RECONF_REQ
netdev
(LTS) replied to
[PATCH net v2] net: consume xmit errors of GSO frames
stable
(LTS) reviewed 5.10.252 and replied to various patches included in it
01 April, 2026 03:30PM
by Ben Hutchings
Matthew Garrett
Self hosting as much of my online presence as practical
Because I am bad at giving up on things, I’ve been running my own email
server for over 20 years. Some of that time it’s been a PC at the end of a
DSL line, some of that time it’s been a Mac Mini in a data centre, and some
of that time it’s been a hosted VM. Last year I decided to bring it in
house, and since then I’ve been gradually consolidating as much of the rest
of my online presence as possible on it. I mentioned this
on
Mastodon
and a
couple of people asked for more details, so here we are.
First:
my ISP
doesn’t guarantee a static
IPv4 unless I’m on a business plan and that seems like it’d cost a bunch
more, so I’m doing what I
described
here
: running a Wireguard link
between a box that sits in a cupboard in my living room and the smallest
OVH
instance I can, with an additional IP
address allocated to the VM and NATted over the VPN link. The practical
outcome of this is that my home IP address is irrelevant and can change as
much as it wants - my DNS points at the OVH IP, and traffic to that all ends
up hitting my server.
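The post doesn't include its exact rules, but the NAT leg of such a setup might be sketched like this on the cloud VM, with hypothetical addresses (203.0.113.10 for the extra public IP, 10.0.0.2 for the home server's Wireguard address) and a hypothetical table name:

```shell
# Hypothetical sketch: steer traffic arriving for the extra public
# IP down the Wireguard tunnel to the home server, and NAT traffic
# heading to it so replies return through this VM.
nft add table ip natfwd
nft add chain ip natfwd prerouting '{ type nat hook prerouting priority dstnat; }'
nft add rule ip natfwd prerouting ip daddr 203.0.113.10 dnat to 10.0.0.2
nft add chain ip natfwd postrouting '{ type nat hook postrouting priority srcnat; }'
nft add rule ip natfwd postrouting ip daddr 10.0.0.2 masquerade
```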
The server itself is pretty uninteresting. It's a refurbished HP EliteDesk
which idles at 10W or so, along with 2TB of NVMe and 32GB of RAM that I
found under a pile of laptops in my office. We're not talking rackmount
Xeon levels of performance, but it's entirely adequate for everything I'm
doing here.
So. Let’s talk about the services I’m hosting.
Web
This one’s trivial. I’m not really hosting much of a website right now, but
what there is is served via Apache with a Let’s Encrypt certificate. Nothing
interesting at all here, other than the proxying that’s going to be relevant
later.
Email
Inbound email is easy enough. I’m running Postfix with a pretty stock
configuration, and my MX records point at me. The same Let’s Encrypt
certificate is there for TLS delivery. I’m using Dovecot as an IMAP server
(again with the same cert). You can find plenty of guides on setting this
up.
Outbound email? That’s harder. I’m on a residential IP address, so if I send
email directly nobody’s going to deliver it. Going via my OVH address isn’t
going to be a lot better. I have a Google Workspace, so in the end I just
made use of
Google’s SMTP relay
service
. There are
various commercial alternatives available; I just chose this one because it
didn't cost me anything more than I'm already paying.
Blog
My blog is largely static content generated by
Hugo
. Comments are
Remark42
running in a Docker container. If you don’t want to handle even that level
of dynamic content you can use a third party comment provider like
Disqus.
Mastodon
I’m deploying Mastodon pretty much along the lines of the
upstream compose
file
. Apache
is proxying /api/v1/streaming to the websocket provided by the streaming
container and / to the actual Mastodon service. The only thing I tripped
over for a while was the need to set the “X-Forwarded-Proto” header since
otherwise you get stuck in a redirect loop of Mastodon receiving a request
over http (because TLS termination is being done by the Apache proxy) and
redirecting to https, except that’s where we just came from.
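A minimal sketch of the relevant Apache directives, assuming the default container ports from the upstream compose file (3000 for web, 4000 for streaming); this is not the full vhost:

```apache
# Inside the TLS vhost: tell Mastodon the client used HTTPS, since
# TLS terminates here, then proxy streaming and web separately.
RequestHeader set X-Forwarded-Proto "https"
ProxyPass /api/v1/streaming ws://127.0.0.1:4000/api/v1/streaming
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
```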
Mastodon is easily the heaviest part of all of this, using around 5GB of RAM
and 60GB of disk for an instance with 3 users. This is more a point of
principle than an especially good idea.
Bluesky
I’m arguably cheating here. Bluesky’s federation model is quite different to
Mastodon - while running a Mastodon service implies running the webview and
other infrastructure associated with it, Bluesky has split that into
multiple
parts
. User
data is stored on Personal Data Servers, then aggregated from those by
Relays, and then displayed on Appviews. Third parties can run any of these,
but a user’s actual posts are stored on a PDS. There are various reasons to
run the others, for instance to implement alternative moderation policies,
but if all you want is to ensure that you have control over your data,
running a PDS is sufficient. I followed
these
instructions
other than using Apache as the frontend proxy rather than nginx, and it’s
all been working fine since then. In terms of ensuring that my data remains
under my control, it’s sufficient.
Backups
I’m using
borgmatic
, backing up to a local
Synology NAS and also to my parents’ home (where I have another HP EliteDesk
set up with an equivalent OVH IPv4 fronting setup). At some point I’ll check
that I’m actually able to restore them.
Conclusion
Most of what I post is now stored on a system that’s happily living under a
TV, but is available to the rest of the world just as visibly as if I used a
hosted provider. Is this necessary? No. Does it improve my life? In no
practical way. Does it generate additional complexity? Absolutely. Should
you do it? Oh good heavens no. But you can, and once it’s working it largely
just keeps working, and there’s a certain sense of comfort in knowing that
my online presence is carefully contained in a small box making a gentle
whirring noise.
01 April, 2026 02:35AM
March 31, 2026
Junichi Uekawa
April already.
April already. Wondering how bazel update is going in Debian. Seems like a large undertaking.
31 March, 2026 11:27PM
by Junichi Uekawa
Benjamin Mako Hill
Quote #75514
Although I never submitted to it, I made several appearances in the now-defunct quote database on bash.org (QDB). I'm dealing with a broken keyboard now, and had to dig hard to find
this classic in the Wayback machine
. I thought I would put it back on the web:
It was, in fact, horrble.
31 March, 2026 09:13PM
by Benjamin Mako Hill
C.J. Collier
Finding: Promoting SeaBIOS Cloud Images to UEFI Secure Boot (Proxmox)
Discovery
Legacy cloud templates often lack the partitioning and bootloader
binaries required for UEFI Secure Boot. Attempting to switch such a VM
to OVMF in Proxmox results in “not a bootable disk.” We discovered that
a surgical promotion is possible by manipulating the block device and
EFI variables from the hypervisor.
The Problem
Protective MBR Flags: Legacy installers often set the pmbr_boot flag
on the GPT’s protective MBR. Strict UEFI implementations (OVMF) will
ignore the GPT if this flag is present.
Missing ESP: Cloud images often lack a FAT32 EFI System Partition (ESP).
Variable Store: A fresh Proxmox efidisk0 is empty and lacks both the
trust certificates (PK/KEK/db) and the BootOrder entries required for
an automated boot.
The “Promotion” Rule
To upgrade a SeaBIOS VM to Secure Boot without a full OS reinstall:
1. Surgical Partitioning: Map the disk on the host and add a FAT32
partition (Type EF00). Clear the pmbr_boot flag from the MBR.
2. Binary Preparation: Boot the VM in SeaBIOS mode to install the shim
and grub-efi packages. Use grub2-mkconfig to populate the new ESP.
3. Trust Injection: Use the virt-fw-vars utility on the hypervisor to
programmatically enroll the Red Hat/Microsoft CA keys and any custom
certificates (e.g., FreeIPA CA) into the VM’s efidisk.
4. Boot Pinning: Explicitly set the UEFI BootOrder to point to the
shimx64.efi path via virt-fw-vars --append-boot-filepath.
Solution (Example Command Sequence)
On the Proxmox Host (root):
# Map and Clean MBR
DEV=$(rbd map pool/disk)
parted -s "$DEV" disk_set pmbr_boot off

# Inject Trust and Boot Path (VM must be stopped)
virt-fw-vars --inplace /dev/rbd/mapped_efidisk \
    --enroll-redhat \
    --add-db GUID /path/to/ipa-ca.crt \
    --append-boot-filepath '\EFI\centos\shimx64.efi' \
    --sb
This workflow enables high-integrity Secure Boot environments using
existing SeaBIOS infrastructure templates.
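Step 1 of the promotion is not shown in the command sequence above; under the same mapped-device assumption it could be sketched as follows, with a hypothetical partition number and size:

```shell
# Append a 512 MiB EFI System Partition (GPT type EF00) to the
# mapped disk and format it FAT32 so shim/grub can be installed
# into it later; partition number 15 is only an example.
sgdisk --new=15:0:+512M --typecode=15:ef00 "$DEV"
partprobe "$DEV"
mkfs.vfat -F 32 "${DEV}p15"
```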
31 March, 2026 09:03PM
by C.J. Collier
Thomas Lange
FAIme using apt-cacher-ng
The
FAI.me service
has become faster over the past two months.
First, the tool fai-mirror can now download all packages
in one go (with all their dependencies) instead of downloading one by
one. This helped a lot for the Linux Mint ISO because it uses a long
list of packages.
I've also added a local apt cache (using
apt-cacher-ng
),
so the network speed does not matter any more in most cases.
This led to the following improvements:
Linux Mint install ISOs went from around 6-7 min to now only 2min.
Ubuntu install ISO went from average 3min to around 90 seconds.
The average time for a Debian Linux install ISO dropped from 2min
to 40 seconds.
So far we have only had one problem with apt-cacher-ng, because the
underlying partition was full.
Building cloud and live images does not gain that much from the local
package cache, because most of the time is spent extracting and
installing the packages.
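For reference, clients are typically pointed at apt-cacher-ng with a one-line proxy setting (3142 is its default port; the file name below is an arbitrary example):

```
# /etc/apt/apt.conf.d/02proxy
Acquire::http::Proxy "http://localhost:3142";
```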
31 March, 2026 12:56PM
Russ Allbery
Review: Code Blue—Emergency
Review:
Code Blue—Emergency
, by James White
Series:
Sector General #7
Publisher:
Orb
Copyright:
1987
Printing:
May 2003
ISBN:
0-7653-0663-8
Format:
Trade paperback
Pages:
252
Code Blue—Emergency
(annoying em-dash in original title) is the
seventh book of James White's Sector General science fiction series about
a vast multi-species hospital station. While there are some references to
(and spoilers for) earlier books in the series, you don't have to remember
the previous books to read this one. I had no trouble despite a nine-year
gap.
I read this as part of the Orb
General Practice
omnibus, which
collects this novel and
The Genocidal Healer.
Cha Thrat is a Sommaradvan warrior-surgeon, member of a newly-discovered
species that is beginning the process of contact with the Federation. She
saved a Monitor Corps human after an accident on her world, performing
some highly competent surgery on a species she had never seen before.
That plus her somewhat outcast status on her own world due to her very
traditional attitude towards medical ethics led Sector General to extend
an offer of medical internship, and led her to leap into the unknown by
accepting. This may have been a mistake; there is a great deal that Sector
General does not understand about Sommaradvan medical ethics.
This series entry is another proper (if somewhat episodic) novel and the
first book of the series that doesn't primarily focus on Conway. He makes
an appearance in his new role as Diagnostician, but only as a supporting
character.
Code Blue—Emergency
is told in the tight third-person
perspective of Cha Thrat, an alien who finds many things about Sector
General baffling, confusing, and ethically troubling (and who therefore
provides a good reader surrogate for reintroducing the basics of how the
hospital works).
Using an alien viewpoint is a more sophisticated narrative technique than
White has used previously. I'm glad he tried it, and it mostly works,
although I have some complaints. Cha Thrat comes from the middle caste of
a strictly hierarchical society of three castes, but is also immensely
stubborn and used to a medical system in which doctors take sole
responsibility for their patients. This creates a lot of cultural
conflicts, and I do enjoy science fiction where the human attitudes are
portrayed as the strange ones, but the cultural analysis offered by this
novel is not very deep.
The pattern of this book is for Cha Thrat to stumble into a successful
approach to a problem while being either oblivious to or hostile to the
normal hierarchical structure expected of medical trainees. This is
believable as far as it goes. She is a skilled and intelligent doctor with
some good instincts and a strong commitment to patient care, but is also
culturally inclined to not ask for help. It makes sense for that to be a
serious problem in a hospital. Unfortunately, no one says this directly.
Sector General staff get quite upset in ways that seem more territorial
than oriented towards patient safety, no one directly explains to Cha
Thrat why following a process is important or shows examples of what could
go wrong, and plot armor means that her mistakes usually have positive
outcomes. One can extrapolate the reasons why she is not a good medical
student, but the reader is forced to do the extrapolation.
This is the sort of book where the narration makes clear there are
unresolved cultural clashes that are going to cause problems but hides the
details. To Cha Thrat, her perspective is so obvious she never bothers to
explain it to the reader, so the specifics come as a surprise. As with the
alien perspective, I've seen this technique used with more subtlety and
sophistication in other books, but White's version mostly works. Cha Thrat
is a sympathetic protagonist because she is truly trying to take the most
ethical and empathetic action in every situation and is clearly competent.
Most of my frustration as a reader, ironically, lands on the other Sector
General doctors who seem to make little to no effort to understand her
perspective when she fails to conform to their expectations. This is
believable in the abstract, but the whole point of Sector General is that
they're supposed to be wiser about interspecies difference than this.
Also, sometimes their reactions just seem petty. Cha Thrat has a very
hierarchical concept of medicine that matches the social classes of her
culture. For her, the highest tier of doctor are wizards who treat rulers,
because the work of rulers is mostly mental and intellectual and therefore
the diseases of rulers are treated with magic spells performed with words
to reshape their thinking rather than surgery on their bodies. O'Mara and
the other Sector General psychologists take great offense at this,
muttering about being called witch doctors, which I found completely
absurd. This is a comprehensible, if odd, description of psychology from a
wholly alien species. Surely one's first reaction should be that words
like "wizard" or "magic" are translation errors. Don't get offended; look
to see if the underlying substance matches, which it clearly does.
Apart from cultural and psychological clashes,
Code Blue—Emergency
has the standard episodic Sector General structure of interesting medical
mysteries that require lateral thinking. I find this sort of puzzle story
satisfying, particularly given the firm belief of every character in an
essentially pacifist and empathetic approach to even the most alien of
creatures. This determined non-violence is one of the more interesting
things about this series, and it continues here.
White does tend towards both biological and gender essentialism for
everyone other than the protagonist and main supporting characters, but he
seemed to be walking back some of the more outrageous limitations on women
that appeared in previous books. There is still some nonsense in here
about how females of any species can't be Diagnosticians, but then Cha
Thrat, who is female, seems to violate the justification for that rule
over the course of this novel (sadly without comment). Perhaps he's
setting up for proving Sector General wrong about this prejudice.
I picked this up after reading Elizabeth Bear's
Machine
, which is essentially a (better written) Sector General
novel that got me in the mood for reading more. I wouldn't give
Code
Blue—Emergency
any awards, but it delivered exactly what I was looking
for. This series is not as deep or well-written as some more recent SF,
but it is reliably itself and reliably entertaining. There are worse
things in a series. Recommended if you're in the mood for alien
ER
in space.
The omnibus edition that I read has an introduction to both novels by John
Clute. It does add some interesting insights, but (as is somewhat typical
for Clute) it also spoils parts of both books. You may want to read it
after you read the novels.
Followed by
The Genocidal Healer
Rating: 7 out of 10
31 March, 2026 03:08AM
March 30, 2026
Jamie McClelland
Mailman3 has 2 databases. Whoops.
At
May First
we have been carefully planning our
migration of about 1200 lists from mailman2 to mailman3 for almost six months
now. We did a lot of user communications, had several months of beta testing
with a handful of lists ported over, and everything was looking good. So we
kicked off the migration!
But, about 15% of the way through I started seeing sqlite lock errors. Wait,
what? I carefully re-configured mailman3 to use postgres, not sqlite. Well,
yes, but apparently that was for the database managing the email list
configuration, not the database powering the django web app, which,
incidentally, also includes hundreds of gigabytes of archives. In other
words, the one we really need in postgres, not sqlite.
Moving from sqlite to postgres
Well that sucks. We immediately stopped the migration to deal with this.
I noticed that the web is full of useful django instructions on how to migrate
your database from one database to another. However, if you read the fine
print, those convenient-looking “dumpdata / loaddata” workflows are designed
to move the table definitions and a small amount of data. In our case, even
after just 15% of our lists moved, our sqlite database was about 30GB.
I considered some of the hacks to manage memory and try to run this via django,
but eventually decided that
pgloader
was a more robust
option. This option also allowed me to more easily test things out on a copy of
our sqlite database (made while mailman was turned off). This way I could
migrate and re-migrate the sqlite database over and over without impacting our
live installation until I was satisfied it was all working.
My first decision was to opt out of pgloader’s schema creation. I used django’s
schema creation tool by:
Turning off mailman3 and mailman3-web and changing the mailman web
configuration to use the new postgresql database.
Running
mailman-web migrate
Changing the mailman web configuration back to sqlite and starting
everything again.
Note: I tried just adding new database settings in the mailman web
configuration indexed to ’new’ - django has the ability to define different
databases by name, then you can run
mailman-web migrate --database new
. But,
during the migration, I caught django querying the sqlite database for some
migrations that required referencing existing fields (specifically hyperkitty’s
0003_thread_starting_email
). I didn’t want any of these steps to touch the
live database so I opted for the cleaner approach.
Once I had a clean postgres schema, I dumped it so I could easily return to
this spot.
Next I started working on our
pgloader
load file. After a lot of trial and
error, I ended with:
LOAD DATABASE
     FROM sqlite:///var/lib/mailman3/sqlite-postgres-migration/mailman3web-clean-backup.db
     INTO postgresql://mailmanweb:xxxxxxxxxxx@localhost:5432/mailmanweb
WITH data only,
     reset sequences,
     include no drop,
     disable triggers,
     create no tables,
     batch size = … MB,
     batch rows = 500,
     prefetch rows = 50,
     workers = …,
     concurrency = …
SET work_mem to '64MB',
    maintenance_work_mem to '512MB'
CAST type datetime to timestamptz drop default drop not null,
     type date to date drop default drop not null,
     type int when (= precision 1) to boolean using tinyint-to-boolean,
     type text to varchar using remove-null-characters;
The batch, prefetch, workers and concurrency settings are all there to ensure
memory doesn’t blow up.
I also discovered that I had to make some changes to the schema before loading
data. Mostly truncating tables that the django migrate command populated to
avoid duplicate key errors:
TRUNCATE TABLE django_migrations CASCADE;
TRUNCATE TABLE django_content_type CASCADE;
TRUNCATE TABLE auth_permission CASCADE;
TRUNCATE TABLE django_site CASCADE;
I also had to change a column type. Apparently the mailman import process
allowed an attachment file name that exceeded the column's length limit in
postgres but was accepted by sqlite:
ALTER TABLE hyperkitty_attachment ALTER COLUMN name TYPE text
When pgloader runs, we still get a lot of warnings, because it wants
to cast columns differently than django does. These are harmless (I was
able to import the data without a problem).
And there are still a lot of warnings along the lines of:
2026-03-30T14:08:01.691990Z WARNING PostgreSQL warning: constraint “hyperkitty_vote_email_id_73a50f4d_fk_hyperkitty_email_id” of relation “hyperkitty_vote” does not exist, skipping
These are harmless as well. They appear because
disable triggers
disables
foreign key constraints. Without it, we wouldn’t be able to load tables that
require values in tables that have not yet been populated.
After all the tweaking, the import of our 30GB sqlite database took about 40
minutes.
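One sanity check after a load like this (my suggestion, not something from the migration itself) is comparing per-table row counts between the two databases. The helper below only compares two numbers; the actual sqlite3/psql queries that would feed it are left as comments, with hyperkitty_email as the example table:

```shell
# Compare a table's row count as reported by the two databases.
compare_counts() {
    if [ "$1" -eq "$2" ]; then
        echo "OK: $1 rows"
    else
        echo "MISMATCH: sqlite=$1 postgres=$2"
    fi
}

# In practice the inputs would come from queries such as:
#   sqlite3 mailman3web.db 'SELECT COUNT(*) FROM hyperkitty_email;'
#   psql -At mailmanweb -c 'SELECT COUNT(*) FROM hyperkitty_email;'
compare_counts 100 100   # prints "OK: 100 rows"
```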
Final Steps
I think the
reset sequences
from
pgloader
should take care of this, but just in case:
mailman-web sqlsequencereset hyperkitty mailman_django auth | mailman-web dbshell
And, just to ensure postgres is optimized, run this in the psql shell:
ANALYZE VERBOSE;
Last thoughts
I understand very well all the decisions the mailman3 devs made in designing
the next version of mailman, and if I were in their place I might have made
the same ones. For example, separating the code running the mailing list
from the code managing the archives and the web interface makes perfectly good
sense - many people might want to run just the mailing list part without a web
interface. And building the web interface in django makes a lot of sense as
well - why re-invent the wheel? I’m sure a lot of time and effort was saved by
simply using the built in features you get for free with django.
But the unfortunate consequence of these decisions is that sys admins have a
much harder time. Almost everyone wants the email lists along with the web
interface and the archives. But nobody wants two different configuration files
with different syntaxes and logic, not to mention two different command lines
to use for maintenance and configuration with completely different APIs. Trying
to understand how to change a default template or set list defaults requires a
lot of research and usually you have to write a python script to do it.
I have finally come to the conclusion that mailman2 is designed for sys admins,
while mailman3 is designed for developers.
Despite these shortcomings, I am impressed with the community and their quick
and friendly responses to the questions of a confused sys admin. That might be
more valuable than anything else.
30 March, 2026 12:27PM
Utkarsh Gupta
FOSS Activities in March 2026
Here’s my monthly but brief update about the activities I’ve done in the FOSS world.
Debian
Whilst I didn’t get a chance to do much, here are still a few things that I worked on:
A quick exchange with Xavier about node-lodash fixes for stable releases.
Uploaded ruby-rack to fix CVE-2026-25500 & CVE-2026-22860 in sid, trixie, and bookworm.
Started to work on the DebConf Bursary team along with PEB.
Assisted a few folks in getting their patches submitted via Salsa.
Mentoring for newcomers.
Moderation of -project mailing list.
Ubuntu
I joined
Canonical to work on Ubuntu full-time
back in February 2021.
Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:
Successfully released
26.04 LTS Beta
This one was also done without the ISO tracker and cdimage access.
We also worked very hard to build and promote all the images in due time.
This was the first proper milestone with the Test Observer.
We also did a retrospective:
Worked further on the whole artifact signing story for cdimage.
Assisted a bunch of folks with my Archive Admin and Release team hats to:
Review and grant FFes.
Coordinating weekly syncs.
Promoting/demoting binaries to/from main.
Taking care of package removals and so on.
Was pretty occupied with the new release process architecture and design.
Preparing for the 26.04 LTS final release.
Debian (E)LTS
This month I have worked 50 hours
on
Debian Long Term Support (LTS)
and on its sister
Extended LTS
project and did the following things:
Released Security Updates
libvirt
: Regression introduced by the linux kernel update via DLA 4404-1.
[LTS]
: Fixed the regression (Debian bug #1124549) via
7.0.0-3+deb11u4
for bullseye. This has been released as
DLA 4504-1
ruby-rack
: Path traversal and stored XSS vulnerabilities in directory handling.
[LTS]
: Fixed
CVE-2026-22860
and
CVE-2026-25500
via
2.1.4-3+deb11u5
for bullseye. This has been released as
DLA 4505-1
[bookworm]
: Fixed
CVE-2026-22860
and
CVE-2026-25500
via
2.2.2-0+deb12u1
for bookworm. This has been uploaded to oldstable-security and announced as
DSA 6180-1
[trixie]
: Fixed
CVE-2026-22860
and
CVE-2026-25500
via
3.1.2-0+deb13u1
for trixie. This has been uploaded to stable-security and announced as
DSA 6180-1
vlc
: Out-of-bounds read and denial of service via a crafted MMS server response.
[LTS]
: Fixed
CVE-2025-51602
via
3.0.23-0+deb11u1
for bullseye. This has been released as
DLA 4507-1
[ELTS]
: The 3.0.23 backport is ready but still in testing. Will be released in April.
nss
: Integer overflow in the AES-GCM implementation.
[LTS]
: Fixed
CVE-2026-2781
via
2:3.61-1+deb11u5
for bullseye. This has been released as
DLA 4508-1
gst-plugins-base1.0
: Integer overflow in the RIFF parser.
[LTS]
: Fixed
CVE-2026-2921
via
1.18.4-2+deb11u5
for bullseye. This has been released as
DLA 4514-1
[ELTS]
: Fixed
CVE-2026-2921
via
1.14.4-2+deb10u6
for buster and
1.10.4-1+deb9u7
for stretch. This has been released as
ELA 1669-1
gst-plugins-ugly1.0
: Heap-based buffer overflow and out-of-bounds write in media demuxers.
[LTS]
: Fixed
CVE-2026-2920
and
CVE-2026-2922
via
1.18.4-2+deb11u2
for bullseye. This has been released as
DLA 4516-1
[ELTS]
: Fixed
CVE-2026-2920
and
CVE-2026-2922
via
1.14.4-1+deb10u3
for buster and
1.10.4-1+deb9u3
for stretch. This has been released as
ELA 1670-1
phpseclib
: Name confusion in X.509 certificate verification and a padding oracle timing attack in AES-CBC.
[LTS]
: Fixed
CVE-2023-52892
and
CVE-2026-32935
via
1.0.19-3+deb11u3
for bullseye. This has been released as
DLA 4518-1
[ELTS]
: Fixed
CVE-2023-52892
and
CVE-2026-32935
via
1.0.19-3~deb10u4
for buster. This has been released as
ELA 1671-1
Work in Progress
knot-resolver
: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
[LTS]
: Still in back and forth discussion with maintainers on the best way to proceed for the bullseye upload. Git repository for bullseye:
node-lodash
: Affected by
CVE-2025-13465
, prototype pollution in the
baseUnset
function.
[stable]
: Xavier from the JS team ACK’d the patch. The trixie and bookworm uploads will follow.
[LTS]
: The bullseye test and upload will follow in April once the stable uploads are in and ACK’d by the SRMs.
vlc
: Affected by CVE-2025-51602, an out-of-bounds read and denial of service via a crafted 0x01 response from an MMS server.
[LTS]
: 3.0.23 backport is ready but not tested. I’ll get this over the line in March.
[ELTS]
: 3.0.23 backport is ready but not very clean. Would like to complete LTS and get back to this.
Other Activities
[ELTS]
Continued to review ruby-rack for ELTS – it has since received about 13 new CVEs, making it even more chaotic. Might consider releasing in batches.
[E/LTS]
Monitored discussions on mailing lists, IRC, and all the documentation updates.
[E/LTS]
Attended the monthly LTS meeting on IRC.
Summary here
[Other]
Spent quite some time debugging a bug in debusine. Filed
for the same. Have worked on a preliminary patch but would like to submit something for Colin to review. Will follow up in April.
Until next time.
:wq
for today.
30 March, 2026 05:41AM
Russ Allbery
Review: The Cloak and Its Wizard
Review:
The Cloak and Its Wizard
, by R.Z. Nicolet
Publisher:
UpLit Press
Copyright:
February 2026
ISBN:
1-917849-15-X
Format:
Kindle
Pages:
423
The Cloak and Its Wizard
is a standalone (at least so far) urban
fantasy superhero (sort of) novel. R.Z. Nicolet is the marketing pseudonym
for Rachel Reddick. This is her first novel.
I'm picky about wizards.
The wizards themselves will complain about that, but of course I'm
picky. When I choose a wizard, barring utter abandonment of moral
scruples, it's a till-death-do-us-part situation. (Their death, not
mine. I'm the next best thing to indestructible.)
The Cloak of Sunset and Starlight is a major artifact, meaning that it has
its own preferences and is capable of independent action. It has been
sitting in a glass case in the wizards' library for about a hundred years,
waiting for someone interesting. (Well, mostly sitting. Occasionally it
sneaks out to eavesdrop or move the books around.)
Veronica Noble is interesting. She's older than most initiates,
thoughtful, observant, and clearly had some mundane career before joining
the Order. Her aura is appealing, and her mental shields and resistance to
influence are intriguing. Normally, the Cloak would take its time
investigating a new potential wizard, but the Sword was making thoughtful
rattling sounds, and no way is the Cloak going to let the Sword claim her
first. Time to choose a new wizard!
It was nice, being draped over warm shoulders, and feeling a heartbeat
again.
I could tell she closed her eyes without even looking.
She sighed. "I just got picked by the intransigent one, didn't I?"
The last time I picked a book from the Big Idea feature in Scalzi's
Whatever blog
, it
didn't go that well
, but if you're going to
write a book specifically for me, I'm going to read it. There are very few
tropes of SFF that I love more than intelligent companion objects, and
Nicolet's
introduction to the story
was compelling. So I gave this book discovery
method another chance.
I'm glad I did, because this was exactly what I was in the mood for and a
delight from cover to cover.
Veronica Noble is not a typical wizard. She's a surgeon and was quite
happy to be a surgeon until an unexpected encounter with a magical
creature killed her brother. The forgetting spell cast by the wizards who
came to handle the Cassandra wyrm didn't work on her, so she was dragged
reluctantly into the secret magical world of the Order. This long-lived
society of wizards quietly defends the world against magical intrusions
from other planes of existence. Now she's a wizard with a magical cloak,
which she is not at all sure she wants.
Veronica is not the protagonist, though. The Cloak of Sunset and Starlight
is. As far as it is concerned, its job is to assist its wizard, enjoy
watching interesting feats of magic, and look fabulous doing so. It's
protective, dramatic, rather vain, endlessly curious, easily bored, and
intensely loyal. When it becomes clear that the Order has some serious
problems, the Cloak knows what side it's on.
This sounds a bit like urban fantasy, so I was surprised when the first
superheroes showed up, although given the explicit Doctor Strange
inspiration I probably should have expected them. The Order and the
superheroes do not mix, at least at the start of the novel. The wizards
view the superheroes as a loud and irritating intrusion and hide magical
activities from them the same as they do the rest of the world. Veronica's
opening opinion on superheroes is based on being a trauma surgeon in a
hospital dealing with the aftermath of their fights (which makes me wonder
if the author has read
Hench
, although
the idea is older than that book). As with the Order, the role of
superheroes in this world gets more complicated as the plot develops.
There is a surprising amount of plot and some very nice world-building
here, including multiple twists that I was not expecting. Veronica is the
sort of stubborn and deeply ethical person who will not leave a problem
alone if she has the ability to fix it, which is a good recipe for getting
deeper and deeper into a complex plot. She's believable as a surgeon:
somewhat taciturn, calm in emergencies, detail-oriented, methodical, and
not at all dramatic. This makes the Cloak a perfect foil and complement.
Watching their partnership develop was very satisfying.
This is a sidekick novel, and like the best sidekick novels it makes the
not-protagonist more interesting and more relatable by showing them from
an outside and skewed perspective. Piecing together what Veronica must be
thinking is part of the fun, as is sharing the Cloak's protectiveness
towards her as it becomes clear how much she's been through and how good
of a person she is. The Cloak's personality was a little too much like a
cat for me — I would have preferred a more unique viewpoint, fewer
cat-coded shenanigans, and a bit less of the running laundry machine joke.
But that's a quibble. Its endless curiosity drives the plot forward and
uncovers more of the world-building, and I just love reading stories from
the perspective of this sort of loyal and protective magical creature.
I had so much fun with this book. It's a popcorn sort of book, and I
thought the ending sputtered a little, but overall it was great. Parts of
it could have been designed in a lab to appeal to me specifically, so I'm
not sure if other people will enjoy it as much, but its hit rate with my
friends so far has been good.
Highly recommended, and I will be watching for any further novels from
Nicolet.
The Cloak and Its Wizard
reaches a satisfying conclusion and
doesn't advertise itself as part of a series, but there is room for a
sequel. If Nicolet ever writes one, I'd read it.
Rating: 8 out of 10
30 March, 2026 02:46AM
Sahil Dhiman
MiniDebConf Kanpur 2026
MiniDebConf Kanpur 2026
was held on 14th and 15th March 2026 at the Indian Institute of Technology Kanpur.
Having a Debian conference in the North was something many folks wanted.
Ravi
started the discussion (with local IIT Kanpur folks) almost 7 months before the conference. Lots of folks from Debian India joined in organizing the conference, which was nice. All the meeting notes and discussions were posted on the Debian India mailing list, a first.
Despite all the efforts, the conference start was delayed due to logistical issues. Things went fine post Day 1 lunch. We had two days of an almost
full schedule
. disaster’s
Decentralising Indian Communication
was an interesting talk, diving into decentralized communication.
IIT Kanpur is a huge campus with nice footpaths and greenery. We got the opportunity to explore their
HPC
at Computer Center post conference.
Work has been started for MiniDebCamp Kochi. More details can be found on the
wiki
Working to make this conference happen was different with all the challenges involved, but overall, everyone was happy with the outcome.
Group photo.
Click to enlarge
30 March, 2026 02:24AM
by Sahil Dhiman
March 29, 2026
Russell Coker
Ebook Readers in Debian
Laptop
For a while I’ve been using Calibre 8.5.0+ds-1+deb13u1 in Debian/Trixie running KDE for reading ebooks on my laptop; it generally works well and has a large font size. The only downsides for that use are that it takes more RAM than I would prefer (about 780M RSS, which seems a lot for a relatively simple task) and that it uses separate windows for the list of books and for reading a book, with no option to just open the last book without delay.
I tried Arianna 25.04.0-1 in Debian/Trixie; it has a significantly smaller font size and doesn’t allow high-contrast colors, as the default is black on gray with the dark theme in KDE. It also only allows left and right arrows for moving through the book, while Calibre uses up/down, left/right, or pgup/pgdn, so whatever keys seem reasonable to you are going to work. The RSS was 762M, which wasn’t great but wasn’t the real problem. Rumours of Arianna using less RAM than Calibre seem exaggerated.
Librem5
On my Librem5 phone with Plasma Mobile, in Calibre 8.5.0+ds-1+deb13u1 both the initial setup screen and the main screen for selecting a book to read don’t work in the width of portrait view. After putting it in landscape mode it worked, but I couldn’t touch a book title to select it; I had to touch the number of the book at the left of the list box. Once a book was loaded everything was fine. On the Librem5 Arianna 25.04.0-1 just worked, although only using left/right swipes to change pages instead of up/down was annoying.
Furilabs FLX1s
On my Furilabs FLX1s with phosh, Arianna 25.04.0-1 and Calibre 8.16.2+ds+~0.10.5-3 both gave the same result of not displaying text or images from the book; I’m not sure whether phosh or some other aspect of the FLX1s configuration is at fault.
PinePhonePro
On my PinePhonePro running Debian/Testing with Plasma Mobile, Arianna 25.12.3-1 worked without any issue and up/down swipes worked. Calibre 9.5.0+ds+~0.10.5-1 had the initial screen work fine in portrait mode, but the main screen was too wide and needed landscape. The issue of having to touch the number also applied.
Laptop running Debian/Unstable
Calibre 9.6.0+ds+~0.10.5-2 and Arianna 25.12.3-1 worked quite nicely on a Thinkpad running Debian/Unstable. One thing I discovered while testing it is that Calibre supports the CTRL-PLUS and CTRL-MINUS key combinations to change font sizes and that also works on the version in Debian/Trixie. Arianna doesn’t support CTRL-PLUS/MINUS.
Conclusion
The problems I had were Arianna on a laptop, everything on the Furilabs FLX1s, and Calibre’s UI not being well adjusted for mobile devices.
Related posts:
encryption speed – Debian vs Fedora
I’m in the process of converting my Fedora/rawhide laptop to...
Phone Charging Speeds With Debian/Trixie
One of the problems I encountered with the PinePhone Pro...
Furilabs FLX1s
The Aim I have just got a Furilabs FLX1s [1]...
29 March, 2026 12:29PM
by etbe
Samuel Henrique
Latest NVIDIA Drivers for Debian (Packaged with AI)
tl;dr
This is not an official package, it's good enough for me and it might be good
enough for you, confirmed as working in Debian Testing but I don't have a
Stable machine to test there.
You can use my custom repo to install the latest NVIDIA drivers on Debian
Stable, Testing or Unstable (install from Sid repository):
The page above contains the APT sources you need; just add the one for your
release to
/etc/apt/sources.list.d/r-samueloph-nvidia-ai.sources
, run
sudo apt update
, and install the packages. You might need to disable Secure Boot.
This is not about AI
Discussions about AI are quite divisive in the Free Software communities, and
there's so much to be said about it that I'm not willing to go into in this
blog post. This is rather just me telling people that if they need up-to-date
NVIDIA packages for Debian, they could check if my custom repository gets the
job done.
The AI part is a means to an end; I've been careful to note in the repository
names that the packages were produced with AI, to respect people who do not
want to run it for any reason.
RTX 5000 series support
Back in May 2025 I
opened a bug
report
asking for
the NVIDIA drivers on Debian to be updated to support the RTX 5000 series. The
Nouveau drivers might be good enough for some people, but I need the NVIDIA
drivers because I want to play games and do experiments with open weight
models.
Opening a bug report doesn't guarantee anything, at the end of the day Debian
Developers are volunteers, so if I really wanted the newer drivers, I would
have to do something about it, ideally submitting a merge request.
I briefly looked into the NVIDIA packaging, which involves 3 source packages
(and one extra git repo for tarballs), unfortunately this was going to take
more time and effort than what I was willing to spend.
What I Did
After a few weeks of lamenting that I wasn't running the NVIDIA drivers, I
figured I was willing to put in more effort than I originally thought, just
enough to instruct the Claude Code agent to package the latest releases. I'm
skilled enough with agentic tools that I knew how to use them to save time:
providing a clear instruction on how to build the package, explaining the
packaging layout, then letting the agent iterate until it gets a working build.
The agent was running inside a VM that didn't have any of my credentials.
After a little bit of back and forth, where I was reviewing the changes guiding
the agent into how to fix certain issues, I ended up with a working set of
packages.
Once I installed it on my machine and confirmed they worked, I set up a
debusine
repository to make it easier to
install future updates, and let others test it out.
Debusine is analogous to Ubuntu's famous PPA, or Fedora's EPEL; it's a
relatively new project, but it has been working fine for this.
Matheus Polkorny helped me test the packages and did spot a few issues which
are fixed now. The Debusine developers were also always quick to respond to my
questions and
bug
reports
How Good Is It?
Short answer: good enough for daily use, but not a substitute for an official Debian package.
The whole point of doing this is because I don't have enough free time to
maintain the package myself. All of this work was done as a volunteer, on my
personal time.
This means I'm trusting the agent to some degree; I review its commits but I
don't go too deep into them. The quality will be dictated by the fact that I'm
a Debian Developer, and thus by how easily I can spot issues without
double-checking everything.
I only have a single machine with an NVIDIA GPU, this machine runs Debian
Testing and so I don't have a way to test the Stable packages. I can do my best
to address problems but at this point there is a risk that new updates break
something.
Installing NVIDIA drivers has always been a bit risky regardless, if you're
comfortable with reverting updates and handling a system without a graphical
interface (in case you end up in a tty), you will be fine.
You will likely need to disable Secure Boot in order to use them, or set up your
BIOS so that a MOK can be used to sign the DKMS modules.
When choosing the version strings for the packages, I was careful enough to
pick something that would sort lower than an official Debian package, meaning
that whenever that same version is packaged in Debian, your system will see it
as an upgrade.
If you have any other methods of installing the NVIDIA drivers on your Debian
system that is working for you, you should likely stick to that.
I have a strong preference for installing them through .deb packages, making
the package sort out configuration changes and dependency updates, besides
handling the DKMS modules.
Ultimately I'm not happy with the amount of difficulty that Debian users have in
installing up-to-date NVIDIA drivers, and I hope this makes it easier for some.
How To Install
Head over to the Debusine page that contains both repos for Trixie (Debian
Stable) and Sid (for Debian Testing and Unstable):
If you are running Debian Testing, then pick the Sid repository.
That page contains the contents of the apt
.sources
file you need; create the file
/etc/apt/sources.list.d/r-samueloph-nvidia-ai.sources
with the sources for your release.
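If you haven't used deb822-style .sources files before, the layout looks roughly like this; every value below is a placeholder, copy the real ones from the Debusine page:

```
Types: deb
URIs: https://example.org/placeholder-repo
Suites: sid
Components: main
Signed-By: /etc/apt/keyrings/placeholder.asc
```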
Run
sudo apt update
and install the packages you need; if you already have a previous version
installed,
sudo apt upgrade --update
would update them.
If there are no upgrades, meaning you don't have a previous version installed,
then you need to explicitly install them.
sudo apt install nvidia-open-kernel-dkms nvidia-driver
If you run into issues in Debian Stable, consider using the Linux kernel
package from the backports repository; if you need an up-to-date NVIDIA
driver, you likely should also be running the backports kernel package
(assuming you can't upgrade to Debian Testing).
Future Plans
I currently have no means of measuring how many people are using the debusine
repositories, so if you do end up using it feel free to let me know somehow.
I don't know for how long I will keep managing this repository, and how much
effort I will spend, but my machine needs it and for now I will keep it
up-to-date with the latest production-grade NVIDIA drivers.
Sources
The sources of the packages are available under a namespace in Salsa (Debian's
GitLab instance):
You can also get the exact sources used in the repositories from debusine:
29 March, 2026 12:00AM
by Unknown
March 28, 2026
Evgeni Golov
Converting Dovecot password schemes on the fly without (too much) cursing
I finally upgraded my mail server to Debian 13 and, as expected, the Dovecot part was quite a ride.
The configuration syntax changed between Dovecot 2.3 (Debian 12) and Dovecot 2.4 (Debian 13),
so I started first with diffing my configuration against a vanilla Debian 12 one (this setup is slightly old) and then applied the same (logical) changes to a vanilla Debian 13 one.
This mostly went well.
Mostly because my user database is stored in SQL, and while the
Dovecot Configuration Upgrader
says it can convert old
dovecot-auth-sql.conf.ext
files to the new syntax, it only does so for the structure, not the SQL
queries themselves. While I don't expect it to be able to parse the queries
and adapt them correctly, at least a hint that the field names in
userdb
changed and might require adjustment would've been cool.
Once I got that all sorted, Dovecot would still refuse to let me in:
Error: sql: Invalid password in passdb: Weak password scheme 'MD5-CRYPT' used and refused
Yeah, right.
Did I mention that this setup is old?
The quick cure against this is
auth_allow_weak_schemes = yes
in
/etc/dovecot/conf.d/10-auth.conf
, but long term I really should upgrade the password hashes in the database to something more modern.
And this is what this post is about.
My database only contains hashed (and salted) passwords,
so I can't just update them without changing the password.
And while there are only 9 users in total,
I wanted to play nice and professional.
(LOL)
There is a
Converting Password Schemes
howto in the Dovecot documentation,
but it uses a rather odd looking PHP script, wrapped in a shell script which leaks the plaintext password to the process list,
and I really didn't want to remember how to write PHP to complete this task.
Luckily,
I know Python.
The general idea is: as we're using plaintext authentication
(auth_mechanisms = plain login), the plaintext password is available during
login. After Dovecot's imap-login has verified the password against the old
(insecure) hash in the database, we can execute a post-login script which will
connect to the database and update it with a new hash of the plaintext
password.
To make the plaintext password available to the post-login script, we add
'%{password}' as userdb_plain_pass
to the SELECT statement of our passdb query. The original howto also says to
add a prefetch userdb, which we do. The sql userdb
remains, as otherwise Postfix can't use Dovecot to deliver mail.
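With a hypothetical mail_users table (the names here are mine for illustration, not necessarily the ones used on this server), the passdb query with that extra field might look roughly like:

```sql
-- Hypothetical passdb query; %{user} and %{password} are Dovecot variables.
SELECT username AS user, password_enc AS password,
       '%{password}' AS userdb_plain_pass
  FROM mail_users
 WHERE username = '%{user}'
```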
Now comes the interesting part. We need to write a script that is executed by
Dovecot's script-login and that will update the database for us. Thanks to
Python's passlib and mysqlclient, the database and hashing parts are
relatively straightforward:
#!/usr/bin/env python3

import os

import MySQLdb
import passlib.hash

DB_SETTINGS = {
    "host": "127.0.0.1",
    "user": "user",
    "password": "password",
    "database": "mail",
}

SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"

SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")

    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
        cursor.close()
        db.close()


if __name__ == "__main__":
    main()
But if we add that as
executable = script-login /etc/dovecot/dpsu.py
to our imap-postlogin service as the howto suggests, the users won't be able
to log in anymore:
Error: Post-login script denied access to user
WAT?
Remember that shell script I wanted to avoid? It ends with
exec "$@"
Turns out the script-login "API" is rather interesting. It's not "pass in a
list of scripts to call and I'll call all of them". It's "pass a list of
scripts, I'll execv the first item and pass the rest as args, and every item
is expected to execv the next one again". 🤯
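To make that contract concrete, here is a small self-contained sketch (not from the original post) where a child Python process plays the role of the post-login script and execs into /bin/echo, the "next item" in its argv:

```python
import subprocess
import sys

# The child process does exactly what a script-login post-login script must
# do at the end: replace itself with argv[1], passing the rest as arguments.
child_code = "import os, sys; os.execv(sys.argv[1], sys.argv[1:])"

result = subprocess.run(
    [sys.executable, "-c", child_code, "/bin/echo", "handed", "over"],
    capture_output=True,
    text=True,
)
# The echo output survives because the child exec'd into it.
print(result.stdout.strip())
```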
With that (cursed) knowledge, the script becomes:
#!/usr/bin/env python3

import os
import sys

import MySQLdb
import passlib.hash

DB_SETTINGS = {
    "host": "127.0.0.1",
    "user": "user",
    "password": "password",
    "database": "mail",
}

SELECT_QUERY = "SELECT password_enc FROM mail_users WHERE username=%(username)s"
UPDATE_QUERY = "UPDATE mail_users SET password_enc=%(pwhash)s WHERE username=%(username)s"

SCHEME = "bcrypt"
EXPECTED_PREFIX = "$2b$"


def main():
    # https://doc.dovecot.org/2.4.3/core/config/post_login_scripting.html
    # https://doc.dovecot.org/2.4.3/howto/convert_password_schemes.html
    user = os.environ.get("USER")
    plain_pass = os.environ.get("PLAIN_PASS")

    if plain_pass is not None:
        db = MySQLdb.connect(**DB_SETTINGS)
        cursor = db.cursor()
        cursor.execute(SELECT_QUERY, {"username": user})
        result = cursor.fetchone()
        current_pwhash = result[0]
        if not current_pwhash.startswith(EXPECTED_PREFIX):
            hash_module = getattr(passlib.hash, SCHEME)
            pwhash = hash_module.hash(plain_pass)
            data = {"pwhash": pwhash, "username": user}
            cursor.execute(UPDATE_QUERY, data)
        cursor.close()
        db.close()

    # hand over to the next program in the script-login chain
    os.execv(sys.argv[1], sys.argv[1:])


if __name__ == "__main__":
    main()
And the passwords are getting gradually updated as the users log in.
Once all are updated, we can remove the post-login script and drop the
auth_allow_weak_schemes = yes
setting.
28 March, 2026 10:11PM
by evgeni
James Valleroy
Stagger v0.1.0
I’ve decided it’s time to tag a v0.1.0 release on my roguelike game project, Stagger. It’s more of a small demo than a full game at this point. It is turn-based, and has purely text-based “graphics”, like the original Rogue.
Here’s a “screenshot”:
####################
#..................#
#.@................#
#....|.............#
#..................#
#.........>........#
#..................#
#..................#
#..................#
####################
HP: 10/10
You can find the repository at either of these locations:
The game is developed in Python, using ncurses. It is dual-licensed under AGPL and MPL.
28 March, 2026 10:54AM
by James Valleroy
Valhalla's Things
Ink Lightfastness Tests 2026
Posted on March 28, 2026
Tags:
madeof:atoms
topic:inks
Note
This post will be updated in the next weeks with the test results as
they become available.
Note
Most of the images in this post have no real alt-text: they are all
scans of the test sheet at various stages through the test, and the
results visible on them are described in detail at the end of the
post.
Most of the time, what people write by hand will either end up inside a
notebook in a drawer or cupboard where it’s well protected, or be thrown in
the recycling where it doesn’t matter.
There are times, however, when things will be exposed to light: it
doesn’t matter whether it’s a work of artistic calligraphy that you want
to frame or a passive-aggressive notice left in the atrium of a
building; it is useful to know whether the work will remain legible or
it will fade into nothing in a short time.
A few inks are tested by the producers for lightfastness according to
some established standard, a few others are declared lightfast in a
generic way, but a lot come with no indication at all.
Proper testing according to the
standard scales
requires significant
equipment to precisely control the exposure, but it’s significantly
easier — and fun — to do a simple test to divide the inks into three
categories:
suitable for framed calligraphy, i.e. it looks the same after 3 months
of direct sun exposure;
suitable for complaining about the way your neighbours deal with the
trash, i.e. still readable after 3 months of exposure;
not suitable for either, i.e. has faded significantly in the same time.
In the past I’ve done some such tests by taping some sheets to a
south-east facing window, and I’ve noticed that most of the results were
already apparent after a month, and there was basically no difference
between two and three months of exposure, but spring equinox to summer
solstice is a nice timeframe to use for such a test (and it leaves time
for a second test of different materials from summer solstice to autumn
equinox), so this is what I’ve chosen to do this year.
Rather than a window, now I have access to a south-facing covered
balcony that is protected from rain but receives quite a bit of direct
sun, so instead of taping sheets to the windows
I’ve prepared a
sturdy cardboard panel that I can leave on a table on the balcony,
hopefully safe from the rain, but well exposed to the sun.
And then made a quick test, and realized that without the window glass
in front, the black strip used to cover the unexposed half of the sample
doesn’t lay flat and lets some sun in, so I used an old cheap
glass frame instead of the panel.
The next step, already in January, was mentioning in a fountain-pen
enthusiasts forum that I planned such a test, and asking if people were
interested in having me buy a few samples of more inks when I was
buying my next pen.
The word “enthusiasts” is probably a hint of the reason why soon
afterwards I received a package with the pen I had planned to buy, its
converter, and a
couple dozen
ink samples.
And then came a couple of envelopes with additional samples of inks that
weren’t available in the shops, from said enthusiasts.
Added to the inks I already had acquired since the last lightfastness
test, it meant that they couldn’t all fit in one single page, and thus I
had some room to add some inks I had already tested: some were requests,
and for others I tried to select ones that felt relevant.
Since I’m changing the test setup, I’ve decided I should probably keep
doing this until I’ve tested again all of the inks I still have
available.
For the paper, I’ve used A4 sheets of
Clairefontaine Dessin Croquis
160 g/m²
one of my staples that I’m sure I will have available in the next years,
printed with a dot pattern on a laser printer, using
this pdf.
And as for the pen, I’ve used a fresh Brause n°361 nib: loading a fountain pen
with all of these inks wouldn’t be a reasonable effort, and the 361 is
one of the writing implements I use most anyway. I also used a glass pen
to fill a couple of squares on the paper with more ink.
One side of each sheet was then covered with a strip of 300 g/m² black
paper (also from Clairefontaine), kept in place with three dots of
non-permanent two sided tape, put in the frame and set out in the sun on
the morning of 2026-03-20, the day of the spring equinox.
While I was filling the sheet for the lightfastness tests, I decided to
also prepare a second set of sheets, for a liquid resistance drop test.
On each line, beside the name of the ink, I added five sets of crossing
parallel lines, and let everything dry for a few days.
Then I used a syringe to put a drop of a liquid on each set of lines,
waited for it to be absorbed into the paper and to dry, at least
overnight, but sometimes also for a day or two (life happened), and then
looked at the results and did the next test.
The first liquid was water, with the usual wild difference between
washable and permanent inks, and all of the intermediate possibilities.
The second liquid was isopropyl alcohol, and I was surprised to see
that, with very few exceptions, most inks didn’t change at all. I
wonder whether that’s related to the fact that instead of forming a drop
it was absorbed almost immediately into the paper, and dried in a very
short time.
The third liquid was hydrogen peroxide: beside the individual results I
noticed that its column yellowed visibly; I wonder whether that means
that the paper I used has optical brighteners, and it will also yellow
under the sun: that wouldn’t be ideal, but it would also be a surprise,
for paper that is acid free and sold for arts.
The fourth liquid was citric acid, prepared by mixing a bit less than a
teaspoon of citric acid granules in just enough very warm water (heated
to 70°C, i.e. the lowest temperature available on my kettle) to dissolve
most of the acid. I forgot that I had some old pH strips until one hour
after I had put the drop on the paper, and I don’t know whether
something had changed by then, but when I did remember about them they
showed a deep red between 1 and 2. I don’t think I can
trust
those strips too much, however.
This backfired badly: the drop of citric acid never dried out, but
formed a sticky paste that prevented me from scanning the results,
and I’m not sure whether I’ll do the last test, which was supposed to be
household bleach.
Luckily I had scanned the partial results, and they are shown here.
After one full day with plenty of sun, nothing really had changed,
except possibly for a vague hint that the Herbin Bleu Myosotis may
have been a bit lighter than it started, but it may also have been a
suggestion.
After three days, however, some results started to show, with the most
fugitive inks becoming visibly changed, either paler or
in some cases duller.
And the full week showed more of that, with a few more inks starting to
show visible change.
After two weeks the paper had significantly yellowed, something I did
not expect from drawing paper (and which means that I will probably use
a different paper when making similar tests in the future).
As for the inks, there were a couple more inks with visible changes, but
mostly it was more of the same as seen in the previous week.
Three weeks started to show changes in the black and most iron gall
inks, and of course more changes in the even less resistant inks.
Week four saw a bit more clouds and rain than the first few weeks, and
there weren’t big changes, but mostly more of what had already started
to happen earlier.
A month didn’t change much compared to four weeks, but I did the
scans for completeness, and from now on I’m going to update monthly.
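As an aside, the observation schedule used above (three days, then weekly up to four weeks, then monthly until the summer solstice) is easy to generate from the start date. A minimal Python sketch, purely illustrative and not part of the actual test setup, which approximates a month as 30 days and assumes the 2026 summer solstice falls on June 21:

```python
from datetime import date, timedelta

# Exposure starts on the spring equinox, as in the test described above.
start = date(2026, 3, 20)

# Early checkpoints: three days, then weekly up to four weeks.
checkpoints = [start + timedelta(days=d) for d in (3, 7, 14, 21, 28)]

# After the first month, monthly checks (approximated as 30-day steps)
# until the summer solstice ends the exposure.
end = date(2026, 6, 21)
check = start + timedelta(days=30)
while check <= end:
    checkpoints.append(check)
    check += timedelta(days=30)

for day in sorted(checkpoints):
    print(day.isoformat())
```

Running it prints the checkpoint dates in ISO format, starting with 2026-03-23.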
These are the inks I’ve tested, and here I’ll add notes on the results
as they become available, keeping this section updated.
When nothing is mentioned, it means that there were no changes, either
under the light or under the various liquids.
Lamy Sepia
Not resistant to water, the drop becomes a uniform colour spot.
After one week it started to be just slightly paler, more so after
three weeks.
Sheaffer Skrip Red
Not resistant to water, the drop becomes a uniform colour spot.
After one week it started to be just slightly paler, more so after
three weeks.
Waterman Audacious Red
Not resistant to water, the drop becomes a uniform colour spot.
After three days it started to be just slightly paler, after a week
visibly so. After four weeks it was very pale.
Waterman Harmonious Green
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop looks a bit lighter than the one with just
water.
After one week it started to be just slightly paler, more so after
three weeks. After four weeks it was very pale.
Waterman Mysterious Blue
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop is significantly lighter and tends towards
green.
After two weeks it started to be just slightly paler, after three
weeks it was more gray. After four weeks it was very pale.
Waterman Serenity Blue
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop is almost completely bleached to a light yellow.
After one week it started to be a bit duller. After four weeks it was
paler and duller.
Visconti Blue
Not resistant to water, the drop becomes a uniform colour spot.
After one week it was visibly duller, looking darker than the
original. After three weeks it was duller, and lighter. After a month
it was just a pale gray.
Montblanc Royal Blue
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop is almost completely bleached to a light
yellow.
After one week it started to be just slightly duller, more so after
two weeks. After three weeks it was also paler. After a month it was
just a pale gray.
Montblanc Mystery Black
Not resistant to water, the drop becomes a uniform colour spot.
After three weeks it started to be a bit paler.
Aurora Nero
Not resistant to water, the drop becomes a uniform colour spot.
After three weeks it started to be a bit more brown.
Online Duft Blueberry
Not resistant to water, the drop looks very washed out, although a
hint of the original shape can be guessed; the hydrogen peroxide drop
is almost completely bleached to a light yellow.
After one week it was visibly paler and duller. After three weeks
significantly so. After a month it was a pale grey.
Diamine Forever Ink - Smoky Mauve
After a month it looked a bit more purple.
Diamine Forever Ink - Honey Pot
Diamine Forever Ink - Coral Blaze
Diamine Forever Ink - Red Ochre
Diamine Graphite
Not resistant to water, the drop becomes a uniform colour spot.
Diamine Rustic Brown
Not resistant to water, the drop becomes a uniform colour spot.
After three weeks it started to be very slightly paler.
Diamine China Blue
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop is almost completely bleached to a light
yellow.
After three weeks it started to be paler and duller.
Diamine Inkvent Purple Edition - Glacier
Not resistant to water, there is a drop of uniform colour, but it
maintains a somewhat recognisable shade of the original shape.
After three weeks it started to be lighter.
Fountainfeder STEVE
Not resistant to water, there is a drop of uniform colour, but it
maintains a somewhat recognisable shade of the original shape.
After two weeks the base colour had changed to a pink rather than
purple.
Pilot Iroshizuku Syo Ro
Not resistant to water, there is a drop of uniform colour, but it
maintains a somewhat recognisable shade of the original shape.
After four weeks it was very slightly paler.
Pilot Iroshizuku Shin-Kai
Not resistant to water, there is a drop of uniform colour, but it
maintains a somewhat recognisable shade of the original shape.
After two weeks it had become lighter and more purple. After four
weeks it was a purple gray.
Rohrer & Klingner IG Ebony
Not resistant to water, there is a drop of uniform colour, but it
maintains a recognisable shade of the original shape; under
hydrogen peroxide the shade is significantly lighter.
After four weeks it was a bit lighter.
KWZ IG Orange
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide drop is significantly bleached to a light orange.
Kallipos.de Schwarze Eisengallus-Tinte
Water stains the paper, leaving however the original shape quite
visible; it is almost completely bleached by hydrogen peroxide.
After three weeks it started to be very slightly lighter.
Kallipos.de Blaue Eisengallus-Tinte
Water stains the paper, leaving however the original shape quite
visible; it is almost completely bleached by hydrogen peroxide.
After two weeks it had started to become lighter and more gray.
Rohrer & Klingner IG Salix
Water stains the paper, leaving however the original shape quite
visible; it is almost completely bleached by hydrogen peroxide.
After two weeks it had become lighter and significantly more gray.
After a month it was a yellowish gray.
Rohrer & Klingner IG Scabiosa
Water stains the paper with a significant purple spot, leaving
however the original shape quite visible; it is a bit bleached by
hydrogen peroxide, but still quite readable.
Pelikan Edelstein Tanzanite
Not resistant to water, the drop becomes a uniform colour spot, but
there is a visible trace of the original shape.
After three weeks it started to be slightly paler.
Montblanc Burgundy Red
Not resistant to water, the drop becomes a uniform colour spot, with
just a hint of the original shape; slightly bleached by hydrogen
peroxide.
After three weeks it started to be paler.
Cifra inchiostro finissimo verde alla lavanda
Not resistant to water, the drop becomes a uniform colour spot;
quite bleached to a light yellowish green by hydrogen peroxide.
After one week it was visibly paler. After a month it was a still
readable pale trace.
Sennelier Abstract acrylic ink 917 purple
The Feather Pen Ink
Eloquentia Inchiostro nero
DeAtramentis Document Blue
DeAtramentis Document BlueGrey
DeAtramentis Document Brown
DeAtramentis Document Fuchsia
DeAtramentis Document Grau
DeAtramentis Document Green Grey
DeAtramentis Document Light Grey
DeAtramentis Document Moosgrün
DeAtramentis Document Orange
DeAtramentis Document Purpurviolett
DeAtramentis Document Urban Sienna
KWZ Sheen Machine
Not resistant to water, the drop becomes a uniform colour spot; the
hydrogen peroxide bleached away the red sheen. This was one of
only two inks to react to isopropyl alcohol, which caused a pale cyan
halo around the lines.
After three days it was still perfectly readable, but had visibly
lost some red sheen; after one week the red had completely gone and
it looked very dark blue (but still shiny).
KWZ Walk over Vistula
Not resistant to water, the drop becomes a uniform colour spot.
After four weeks it looked a bit
darker
and duller.
KWZ Warsaw Dreaming
Not resistant to water, the drop becomes a uniform colour spot.
After a month it started to be a bit lighter.
Octopus Neon Violett
Water very lightly stains the paper, leaving however the original
shape quite visible. The other ink that reacted to isopropyl alcohol,
with a pale purple halo around the lines.
After two weeks it was paler, more pink.
Octopus Write & Draw Elephant Black
Platinum blue black
Water stains the paper, leaving however the original shape quite
visible; it is significantly bleached by hydrogen peroxide.
After three weeks it started to become gray.
Pelikan 4001 Brillant-Schwarz
Not resistant to water, the drop becomes a uniform colour spot.
After three weeks it was a bit more brown than black, after a month
noticeably so.
Pelikan 4001 Blau-Schwarz
Water stains the paper, leaving however the original shape quite
visible; it is significantly bleached by hydrogen peroxide.
After three weeks it started to become gray.
Pelikan 4001 Königsblau
Not resistant to water, the drop becomes a uniform colour spot, with
just a hint of the original shape; significantly bleached by hydrogen
peroxide.
After three days it had started to be slightly paler.
After three weeks it was significantly desaturated.
Herbin Bleu Myosotis
Not resistant to water, the drop becomes a uniform pink spot,
significantly bleached by hydrogen peroxide.
After three days it was already visibly paler, and after one week it
was a pale grey. After a month it was still somewhat readable, but as
a trace.
Faber Castell Royal Blue
Not resistant to water, the drop becomes a uniform colour spot, with
just a hint of the original shape; significantly bleached by hydrogen
peroxide.
After three days it was slightly duller, after two weeks definitely
so. After a month it was also quite a bit paler.
Koh-I-Noor Fountain pen ink blue
Not resistant to water, the drop becomes a uniform colour spot, with
just a hint of the original shape; significantly bleached by hydrogen
peroxide.
After three days it had started to be slightly paler, more so after
one week when it had also turned grey. After four weeks it was very
pale.
Koh-I-Noor Document Ink Blue
Koh-I-Noor Document Ink Black
Water leaves a very light stain, but the original shape doesn’t look
changed.
DeAtramentis Document Black
Waterman Intense Black
Not resistant to water, the drop becomes a uniform colour spot, with
a trace of the original shape still visible; very lightly bleached by
hydrogen peroxide.
After three weeks it started to look a bit more brown, noticeably so
after a month.
Herbin Perle Noir
Not resistant to water, the drop becomes a uniform colour spot, with
a trace of the original shape still visible.
After three weeks it started to look a bit more brown, noticeably so
after a month.
Parker Quink black
Not resistant to water, the drop becomes a uniform colour spot.
Platinum Carbon black
Rohrer & Klingner Documentus Black
Sailor Pigment Kiwaguro
Platinum Dyestuff Red
Not resistant to water, the drop becomes a uniform colour spot; very
lightly bleached by hydrogen peroxide.
After three weeks it was a bit paler.
Noodler’s Eternal Polar Blue
which would spend the day covered by mostly closed
shutters anyway, because they receive quite a bit of direct sun, and
we don’t want that to enter the house during the summer.
and thus, I hope, not especially UV-filtering.
28 March, 2026 12:00AM
March 27, 2026
Jonathan Dowland
Digital gardening
I was reading
a post
on
Alex Chan's
website
that referenced the concept of
digital gardens
a concept/analogy for organising information which dates back to the 90s.
This old concept is getting new traction today by contrasting it
with the "endless stream" as used and abused by social media, but also
with how blogs are typically presented.
This site
, my homepage, has a blog, and that's the bit that most people who
interact with the site will experience. Partly because it's the bit that gets
syndicated out: via
feeds
; on
Planet
Debian
and downstream from it; once upon a time on
Twitter; nowadays on
the Fediverse
However, there's more to my homepage than that. The rest of it may be of little
interest to anyone besides me, but it's useful to me, at least. So I may switch
focus a little bit from mainly writing blog posts, and tend to the rest of the
garden a bit more.
Some recent seeding and pruning:
Recently my guest status at Newcastle University came up for renewal, so I
wrote down my goals in the Historic Computing Committee for the next year or
so, and put them here:
nuhcc
. I've also been pondering what I'm up to in
Debian
at the moment, so took some time to add my current projects to
that page.
I'm reminded that I should really publish a "blog roll" of cool
blogs I'm following at the moment, of which Alex Chan's is one.
27 March, 2026 10:05PM