The Complete Computing Environment

CCE and Emacs Update Feed


This page contains new modules for The Complete Computing Environment, my Emacs and NixOS automation environment. The CCE is designed to be modular, allowing me to intersperse code and documentation using the Literate Programming facilities built into org-mode, which is then published to the web using The Arcology Project site engine.

A feed is generated from this page at and cross-posted to the Fediverse on my Fediverse instance.

New on The Wobserver: A Wallabag deployment module for NixOS | CCE HomeOps

(ed: It was pointed out to me today that my last article in this feed had an update time far in the future, my apologies for fat-fingering that. Today I learned that Lua's os.time will convert 93 days into three months and change!)

Today I set up one of the final services in the long migration to my NixOS based Homelab Build, The Wobserver.

It's the "graveyard for web articles i'll never read" known more commonly as Wallabag.

wallabag is a web application allowing you to save web pages for later reading. Click, save and read it when you want. It extracts content so that you won't be distracted by pop-ups and cie.

Wallabag is a PHP application which is packaged in nixpkgs, but unlike many other services packaged therein, it is not trivial to enable in NixOS. I did find a year-old NixOS module in dwarfmaster/home-nix which was a good starting place, so I copied it into my system and set to work making it run with the current version of Wallabag, 2.6.6. It's nicely written but needed some work to match the current configuration format, along with other small changes.

I then customized it so that it is easy to configure and use like standard NixOS modules:

{ pkgs, ... }:
{
  imports = [ ./wallabag-mod.nix ./wallabag-secrets.nix ];

  services.wallabag = {
    enable = true;
    dataDir = "/srv/wallabag";
    domain = "";
    virtualHost.enable = true;

    parameters = {
      server_name = "rrix's Back-log Black-hole";
      twofactor_sender = "";
      locale = "en_US";
      from_email = "";
    };
  };

  services.nginx.virtualHosts."".extraConfig = ''
    error_log /var/log/nginx/wallabag_error.log;
    access_log /var/log/nginx/wallabag_access.log;
  '';
}

If you want to use this, it should be straightforward to integrate. I don't think it's high enough quality to try to contribute directly to nixpkgs right now, but if someone is brave enough to shepherd that I surely wouldn't mind. 😊

paperless-ngx is a cool little document management system | CCE HomeOps

When I agreed to be the treasurer and bookkeeper for the Blue Cliff Zen Center I bought a Brother DCP-L2550DW printer/scanner/copier, an affordable, functional and reliable laser printer that doesn't mess around with you like similarly priced printers from HP etc. do. It's fine and basically works with minimal configuration, and I can scan over the network using Skanlite. All nice and easy.

But taking this a step forward and managing my personal paper detritus has been a long-term goal. I have been broadly aware for a while that there are decent open source OCR toolkits like tesseract, and I wanted to build a pipeline for generating OCR'd PDFs from a directory of scanned documents, but I never bothered to figure out how to do this myself.

I stumbled recently on Paperless-ngx, though, and found that it was packaged in nixpkgs, with a NixOS module to easily setup and configure it. So I did that.

Importantly, the full-text search works pretty well on printed documents. It'll struggle on hand-written stuff; I wonder if I can tune it against my own handwriting, but for now this is pretty nice:

It also attempts to do some amount of auto-categorization, though with only a couple dozen documents brought in so far it's a bit too stupid to trust, and I spent about two hours after the first batch scan job clearing out the INBOX tag and manually sorting things out. It also had a habit of parsing dates as D/M/Y instead of Americanese M/D/Y, which I need to figure out how to fix.
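The ambiguity itself is easy to demonstrate. This is a hypothetical Python sketch, not paperless-ngx's actual date parser: the same slash-delimited string resolves to two different dates depending on which field order you assume.

```python
from datetime import datetime

def parse_scanned_date(raw: str, order: str = "MDY") -> datetime:
    """Parse an ambiguous slash-delimited date under an explicit field order."""
    formats = {"MDY": "%m/%d/%Y", "DMY": "%d/%m/%Y"}
    return datetime.strptime(raw, formats[order])

# "03/04/2024" is March 4th under M/D/Y, April 3rd under D/M/Y
as_mdy = parse_scanned_date("03/04/2024", "MDY")
as_dmy = parse_scanned_date("03/04/2024", "DMY")
```

If I recall correctly, paperless-ngx exposes a date-order setting (PAPERLESS_DATE_ORDER) for exactly this reason, which is probably the fix I'm looking for.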

Setting up the printer to do "Scan to FTP" was a bit of a pain: for some reason US models ship with the functionality silently disabled; I blame CISA. There is some BS you can do to go into a maintenance menu to change the locale, reconnect it to the wifi, and then configure a Scan to FTP profile in the web UI.

Anyways, I got through a year's worth of personal docs in a few hours and have a bigger shred pile than I would like, but I can shred them without feeling too bad now. I'll have encrypted backups of these documents on Backblaze B2 forever now, alongside a sqlite DB that I can full-text search. I'll probably upload my "Important Docs" directory into this thing sooner or later, but for now it'll be able to handle my mail and the Zendo's documents. It also has a Progressive Web App manifest so you can "install" the management app on your phone to search docs on the go.

As with all of my NixOS code, it's documented and exposed on the Paperless-ngx page.

A toolkit for Literate Programming imapfilter configurations | Emacs noweb

Over on imapfilter filters my imap I have cooked up a simple Org Babel page which generates a set of mail filtering rules for the imapfilter utility. imapfilter is configured in Lua, but with a lot of rules it gets a bit difficult to manage. I broke my configuration up into a number of org-mode tables that are processed by ten lines of Emacs Lisp which turn them into Lua code written to the imapfilter configuration.

While I don't publish the raw org-mode file since it has un-published headings with rules I don't want to publish, the usage flow is pretty simple, so let me walk you through it:

The end result is a set of tables which can easily be shared, published, edited, and reordered, and then seamlessly exported as a configuration file.
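To give a flavor of what those ten lines of elisp do without reproducing the unpublished config, here is a rough Python re-imagining of the idea: rows from a table become imapfilter-style Lua rules. The column layout, accounts, and patterns below are invented for this sketch.

```python
# Each row of an org-mode table: (account, header field, pattern, destination folder)
RULES = [
    ("personal", "From", "newsletter@example.com", "Lists/Newsletters"),
    ("personal", "Subject", "invoice", "Finance"),
]

def rules_to_lua(rows):
    """Render table rows as imapfilter-style Lua rules: search INBOX for
    messages whose header field matches, then move them to a folder."""
    chunks = []
    for account, field, pattern, folder in rows:
        chunks.append(
            f"results = {account}.INBOX:contain_field('{field}', '{pattern}')\n"
            f"results:move_messages({account}['{folder}'])"
        )
    return "\n\n".join(chunks)

print(rules_to_lua(RULES))
```

The actual page does this transformation in Emacs Lisp at tangle time, which is what makes the tables the single source of truth.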

I do this as well in some other places:

Using literate programming for generating configuration is really fun and lets you scale your code and configuration while keeping it legible!

I'm now running my Matrix Synapse instance on The Wobserver Nix

After last week's embarrassingly-handled WebP 0-day, I realized my Synapse instance was sorely out of date and now had a Pegasus-class vulnerability on it. Unfortunately, the dockerfile I had been using to manage that service on my Wobscale server was out of date and didn't build with more recent versions of Synapse. Rather than using the upstream Debian-based Dockerfile, I was using one prepared by my dear friend iliana, one which she stopped using quite a while ago and I was maintaining myself. Welp nyaa~.

After briefly considering migrating to spantaleev/matrix-docker-ansible-deploy, and doing some math on exactly how much data a federating synapse node passes in a week or a month, I decided I would move the Synapse install on to my home network with my Wobscale 1U acting as a reverse proxy to my homelab machine over Tailscale.

And so on Friday afternoon I decided to wreck my sleep schedule and migrate across.

In I'm now running my Matrix Synapse instance on my Wobserver, I've written at length about this migration process, but I will spare the RSS feed the gruesome details. Click through if you're curious about moving a functioning Synapse instance to a NixOS machine with "only" 24 hours of downtime.

Synapse was one of the last services running on my Seattle 1U server; it was originally deployed back in like 2017 and has served me well, but it's a geriatric Fedora Linux install that is now only running my Wallabag server. Once I migrate that to The Wobserver in my living room, I'll be able to turn this host down and ask ili for a small VM that can do the edge networking and be managed by my NixOS ecosystem instead of hand-tuned nginx configurations. That'll be nice.

This simultaneously took more and less work than I expected it to, and it's certainly not a perfect migration, but it is nice to be done with. It took about 22 hours of downtime all said and done, including some time spent sleeping while the thing was semi-functional.

This migration was a huge fnord for months where I would say "i should update my synapse, ugh, i should migrate my synapse to nixos, ugh, i should move to conduit, ugh i should just sign up for and never touch synapse again" every time my disk would fill up and i would have to do some stupid bullshit to clean it up enough to run VACUUM FULL on the synapse DB. It's still 65 fucking GB of old events I never want to see, and I recently learned why: "unfortunately the matrix-synapse delete room API does not remove anything from state_groups_state. This is similar to the way that the matrix-synapse message retention policies also do not remove anything from state_groups_state." this kills me, and this is probably why my next step will be to set up matrix-synapse-diskspace-janitor.

Archiving Old Org-Mode Tasks

Since I use org-caldav to sync my Events between my Org environment and my phone, I end up with a lot of old junk building up over the years. I recently archived ~200 old calendar entries with this function which will archive anything older than 6 months, but skip any recurring events:

(defun cce-archive-old-events ()
  "Archive any headings with an active timestamp more than 180 days
in the past, but not repeating events."
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (let ((six-months-ago (- (time-convert nil 'integer)
                             (* 60 60 24 30 6))))
      (while-let ((point (re-search-forward (org-re-timestamp 'active) nil t))
                  (ts (match-string 0)))
        (goto-char (match-beginning 0)) ; timestamp parser needs point at match-beginning
        (unless (or (> (org-time-string-to-seconds ts) six-months-ago)
                    (plist-get (cadr (org-element-timestamp-parser)) :repeater-type))
          (when (y-or-n-p (format "Archive %s from %s?" (org-get-heading) ts))
            (org-archive-subtree)))
        ;; resume the search past this timestamp
        (goto-char point)))))

It was suggested in the Emacs channel that I could use org-ql to do this, and I would like to learn how to use that properly some day, but the ts-active predicate didn't work how I expected, and I'd need to make a predicate for filtering out recurring events anyways, so I just bodged this together.

This code is simple enough on its own.

Making my NixOS system deploys as simple as possible with Morph, a hosts.toml file, and my Morph Command Wrapper Nix

This weekend I tried setting up deploy-rs, and it and flakes are kind of not very good for what I am doing with my computers.

I spent some time today doing a thing that I have wanted to do for a while: moving my "network topology" into a data file which can be ingested by the tools themselves rather than declaring it in code.

This starts with defining a hosts.toml file whose topology maps more-or-less to the one that Morph uses; a network is a deployment entity which can have any number of hosts in it:

[endpoints]
description = "my laptops and desktop"
enableRollback = true
config = "../roles/endpoint"

# target = "rose-quine"
# stateVersion = "23.05"
# user = "rrix"

There are reasonable defaults in the host configurations so that adding a new host is a single-line operation in the hosts.toml file.
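The defaulting idea is just a shallow merge, simple enough to sketch in a few lines of Python. The key names and the parsed shape below are made up for illustration; the real schema is whatever Morph expects.

```python
# Hypothetical parsed form of one network from hosts.toml
NETWORK = {
    "description": "my laptops and desktop",
    "hosts": [
        {"target": "rose-quine"},                          # one line, all defaults
        {"target": "desktop", "stateVersion": "22.11"},    # override one key
    ],
}

DEFAULTS = {"stateVersion": "23.05", "user": "rrix"}

def resolve_hosts(network):
    """Apply network-wide defaults to each host; per-host keys win,
    which is what makes adding a new host a single-line operation."""
    return [{**DEFAULTS, **host} for host in network["hosts"]]
```

Because the merge lives in one place, every consumer of hosts.toml sees the same fully-resolved host entries.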

With that file in place, Deploying from my hosts.toml defines a function that ingests the networks and spits out a Nix attrset in the shape that Morph wants to execute:

let
  pkgs = import <nixpkgs> {};
  allNetworks = pkgs.lib.importTOML ./hosts.toml;
  mkNetwork = import ./mkNetwork.nix { inherit pkgs; networks = allNetworks; };
in mkNetwork "endpoints"

And from there I could say morph build $file_defined_above and it would go off and do that. Then another invocation of morph deploy with a grab-bag of arguments would actually roll the system out to the host.

Taking things a step further is fun though. Why not make a simple wrapper that can make this easier? Morph Command Wrapper does that and allows me to just type deploy to run a change out to the host I'm sitting at, or deploy -b to just build it, or deploy --all to run it out everywhere.

Invoking deploy-targets will print out a list of all the hosts that the system knows about, which can then be conveniently fed into completing-read:

(->> (shell-command-to-string "deploy-targets")
     (s-split "\n")
     (append '("--all"))
     (completing-read "Which host do you want to deploy to? "))

And that can be used by the interactive emacs function arroyo-flood to automatically tangle the systems' org-mode role files, dynamically extracting a list of server, laptop, desktop, etc, modules from a sqlite cache along the way, and then deploying those! I'm pretty happy with this.

Hopefully systemd-inhibit-mode will keep me from burning in my monitor and burning out my laptop battery.

When I watch a full-screen Video in Firefox while using XMonad, it doesn't properly disable DPMS, so my screen blanks every five minutes into the video. Rather than try to figure out why that was, I would invoke systemd-inhibit to inhibit screen blanking. This was fine, but I'd run it in a terminal emulator which I would promptly forget about. Cue my laptop having a dead battery or my monitor burning in the lock-screen text overnight when I would manually lock the desktop and head to bed.

systemd-inhibit-mode is a dead-simple global minor mode which exists to remind me via the modeline that I have the inhibitor process running:

It's 27 lines of code that, if you are interested in, you can copy to your init.el. 😊 I'm not terribly motivated to stick stuff like this in MELPA. Anyways it's licensed under the Hey Smell This license if you are some sort of person who cares about licenses or think I do.

Moved my Org Site Engine-to-Fediverse cross-posting from feed2toot to Feediverse

Lately I have been working on integrating my org-mode site engine with my Fediverse presence. Following for technical work and for creative/tea/philosophical crap should give a full view of what I am adding to my sites and keeping my main presence for 1-1 interactions with my fedi-friends.

There are Atom feeds available for headlines on various pages on the sites, and tying those to Fediverse posts should be pretty straightforward, but finding the right tool for the job is always the hard part when I am forging my own way forward.

Yesterday in the back of a rental car I added feed metadata to the org-mode document for my 2023 Hawaii Big Island Trip and wondered how I could get that on to my fedi profile. I realized that it required my laptop to run the Morph deployment to ship a new NixOS system: a bit overkill for what I want to do, especially when the data is already in at least one sqlite database!

So I modified Arcology's Router to add an endpoint which returns all the feeds on the site in a JSON document and then set to work making a script which wrapped feed2toot to orchestrate this, but I quickly felt that feed2toot is a bit too over-engineered for what I am asking it to do, especially when I set about adding multi-account support; the configuration parser is doing a lot more than I want to deal with. Feediverse is a simple single-file script which I was able to easily modify to my needs, with a bit of code-smell in the form of its yaml.load-based configuration.
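The Router endpoint itself is conceptually tiny. Here is a Python sketch under invented table and column names; the real Arcology Router reads whatever schema Arroyo's sqlite cache actually uses.

```python
import json
import sqlite3

def feeds_json(db_path: str) -> str:
    """Render every feed the site knows about as one JSON document.

    The `feeds` table and its columns are hypothetical stand-ins for
    the real Arroyo-generated schema.
    """
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT site, path, title FROM feeds").fetchall()
    con.close()
    return json.dumps(
        {"feeds": [{"site": s, "path": p, "title": t} for s, p, t in rows]}
    )
```

Since the sqlite cache is already kept up to date by the publishing pipeline, the endpoint never needs a redeploy to learn about a new feed.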

My fork of feediverse reads that feeds.json document and iterates over every feed looking for new posts. This lets me add new feeds to my cross-poster without deploying The Wobserver on my laptop. There is a slow pipeline that prevents me from using this to Shitpost or live-toot things, but I think that's basically okay. The idea is to use it to slowly ship out things when I make new art, or have an end-of-day log, or publish a site update, without having to think too hard about it. Most of the Arroyo Systems stuff (from adding software to my laptop or server to adding pages to my site) is managed by adding metadata keys to my org-mode documents, and this is now no different, though perhaps a bit too "roundabout" for it to be considered good engineering:
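The shape of that polling loop is simple enough to sketch. This is not my actual fork of feediverse, just a minimal Python illustration: read the feeds index, compare each feed's entries against a seen-ids state file, and return anything new. The field names and state format are made up.

```python
import json
from pathlib import Path

def new_entries(feeds_index: dict, state_path: Path) -> list:
    """Return entries not yet cross-posted, updating a seen-id state file.

    `feeds_index` mirrors the shape of a feeds.json document; entry ids
    are assumed to be stable across polls.
    """
    seen = set(json.loads(state_path.read_text())) if state_path.exists() else set()
    fresh = []
    for feed in feeds_index["feeds"]:
        for entry in feed["entries"]:
            if entry["id"] not in seen:
                fresh.append(entry)
                seen.add(entry["id"])
    state_path.write_text(json.dumps(sorted(seen)))
    return fresh
```

Run on a timer, each pass posts only what appeared since the last pass, which is exactly the slow pipeline described above.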

But my axiom in writing the Arroyo Systems stuff and The Arcology Project as a whole is that Personal Software Can Be Shitty, and this certainly is, but it means I can post new feeds from my Astro Slide so who can say whether it's good or bad.

As much as I am not a fan of Pleroma, it's really nice to just be able to fire HTML right at my server and have it appear halfway decently on the Fediverse with inline links and even images working out just fine. Just have to keep my posts somewhat short. This is probably too long for Fedi already, bye.

First Update: A welcome return Emacs CCE Catchup

I noticed that my feed in the Planet Emacslife feed aggregator wasn't valid any more and am re-establishing a feed here for capturing my configuration-specific stuff. First-time readers may recognize me from the Emacs or Nix-Emacs rooms, or from building the first prototype of an Emacs Matrix Client which would later be taken and evolved and polished by alphapapa after I became frustrated with hauling bugs and implementing end-to-end encryption.

Right now most of the new functionality in my notebooks is centered on NixOS automation and the deployment of my Wobserver, but I also do increasingly silly hacks with org-roam and my own org-mode metadatabase called arroyo-db. If you haven't seen it before, my Arroyo Systems Management documents are brain-bending automation for dynamically assembling a Concept Operating System from many org-mode docs. I have vague dreams to create a system where Emacs and NixOS users could bootstrap a minimal Linux operating system by referencing documents on the web & seamlessly pulling them in to their system, but this is still a fair ways off.

All of this is published straight out of my org-roam knowledgebase using a home-built web site engine called The Arcology Project which exists to give me a way to publish these pages across multiple domains, to arbitrary web paths, without having to line them all up on a file-system. It's primarily written in Python but it uses Arroyo to generate a sqlite database which the Python reads, and some custom lua and templates which Pandoc uses to render the org-mode docs to HTML.

I'll keep this feed up to date with new modules and interesting updates. Here are a few that have fallen between the cracks over the last 6 months or more: