The Complete Computer

CCE and Emacs Update Feed


This page contains new modules for The Complete Computing Environment, my Emacs and NixOS automation environment. The CCE is designed to be modular, allowing me to intersperse code and documentation using the Literate Programming facilities built into org-mode, which are then published to the web using The Arcology Project site engine.

A feed is generated from this page at https://cce.whatthefuck.computer/updates.xml and cross-posted to my Fediverse instance.

Blocking Aggressive Scrapers at the Edge

In Limiting expensive-to-render nginx endpoints, I describe how to use the nginx limit_req module to substantially cut aggressive scraping traffic to my Gitea instance without impacting "normal" "human" behavior.

There are three layered rate-limiters in here, applied to only certain URIs:

  • One does a per-IP limit, excluding my Tailscale network and some ASNs I connect from. Each IP can make one costly request per minute; otherwise it receives a 503.

  • One tries to map certain cloud providers into a single rate-limit key, giving each of these providers 1 RPM on these endpoints. Each group of cloud IPs can make one request per minute; otherwise it receives a 503.

  • One limits all traffic to 1 RPS on each "site feature" in Gitea.
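
The shape of it looks something like this. This is a sketch only: the zone names, rates, CIDRs, and location regex here are made up for illustration, not my actual config (which lives on the linked page):

```nginx
# Map cloud-provider CIDRs to a shared key so a whole provider spends one
# rate-limit budget; an empty key means the zone doesn't count the request.
geo $scraper_group {
    default        "";
    100.64.0.0/10  "";          # Tailscale CGNAT range: exempt
    3.0.0.0/8      "aws";       # illustrative provider ranges
    34.0.0.0/8     "gcp";
}

limit_req_zone $binary_remote_addr zone=per_ip:10m     rate=1r/m;
limit_req_zone $scraper_group      zone=per_cloud:1m   rate=1r/m;
limit_req_zone $server_name        zone=per_feature:1m rate=1r/s;

server {
    # ... applied only to the expensive endpoints:
    location ~ ^/.*/(blame|commits|compare)/ {
        limit_req zone=per_ip burst=2 nodelay;
        limit_req zone=per_cloud;
        limit_req zone=per_feature;
        limit_req_status 503;
        proxy_pass http://gitea;
    }
}
```

The nice property of keying one zone off a geo map is that the exempt networks fall out for free: requests whose key is the empty string aren't accounted against that zone at all.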

So now if you try to browse my Gitea instance http://code.rix.si or make a git clone over HTTP, that will work just fine, but a handful of expensive endpoints will be aggressively rate-limited. If you want to look at the git blame for every file in my personal checkout of nixpkgs, you can do that on your own time on your own machine now.

So far installing this on my "edge" server seems to work really well, cutting the load of the small SSL terminator instance in half. Let's see if this is Good Enough.

INPROGRESS A Quick Hack: Elfeed Adaptive Scoring

holding this back from the RSS feeds until I trust that the performance isn't shite

In Elfeed Adaptive Scoring I define a minimal system for creating an adaptive sort ordering for my news feeds which can be influenced by a simple up/down signal sent by the news reader:

the populace yearns for a news reader that gently learns its preferences and isn't a fucking weirdo about it. something that lets one step in to the stream of information, the really simply syndicated under-belly of the internet, and filter that stream to a trickle of the most important information. For a while I had Universal Aggregator and Gnus Adaptive Scoring which gave me an intuitive and responsive wholly-local sorting algorithm on my laptop, and that was well and good, but I also want to use this little artificial intelligence on the go as well, with an android app. Otherwise I waste too much time scrolling social media. One can't simply stop scrolling...

This year I stopped using UA and have been using tt-rss, because it has a mobile app to keep me from scrolling and a manual scoring system that lets me bodge my most important personal rules, content warnings, and boundaries into place so that I can scroll without losing my cool; a regular-expression-based filtering program runs on each incoming message and assigns it a score, tags, etc. You can't create or edit filters in the Android app, and it's not easy to quickly edit an existing filter, so things can get out of hand, but it works well enough.

... ...

Speaking of getting out of hand... I've been trying to use only a keyboard again, which means using more Emacs tools and fewer Web Browser tools, like mastodon-mode and Ement.el (even though the fricken pantalaimon is broken again); and elfeed.

elfeed is perhaps the only Linux desktop software I've found which can sync with tt-rss; it's lucky that it's an Emacs widget too. It's allowed me to finally get around to publishing the first pass on My Blog Roll again, which is nice because elfeed-org can push my feeds from there into elfeed and tt-rss and I don't have to faff about with OPML.

Oh and elfeed also has a manual, rules-based scoring system where I can bodge in my most important personal rules, content warnings, and boundaries.

I ported my tt-rss scoring filter rules across today while learning how to build a tt-rss API plugin and looking at how the data in the tt-rss DB and elfeed DB work. I shelved the plugin work for syncing the score file and filters, and implemented adaptive scoring in elfeed instead: a feature that allows elfeed to automatically adjust its sort ordering based on what is marked as interesting to me, by me, over time. It tracks the titles and authors of the content that pass through the reader and uses that information to tweak future search results.

It's not a lot of code, but I think it'll prove to be highly effective; I hope it's not a performance tarpit, it was never prohibitive in Gnus... I'll report in later, and when/if I write an extension to elfeed-sync to send my elfeed-score rules to tt-rss.

Elfeed Adaptive Scoring is the most effective and most ugly 75 lines of elisp I've written in a while; I'm sure I'll make it work better later on.
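
The core of the idea can be sketched in a few functions. To be clear, this is a hypothetical reconstruction, not the actual CCE module: every function and variable name here is made up, and the real thing lives on the Elfeed Adaptive Scoring page.

```elisp
;; Hypothetical sketch of adaptive scoring for elfeed; names are invented.
(defvar my/elfeed-adaptive-scores (make-hash-table :test 'equal)
  "Map of title-word and feed-author tokens to accumulated up/down scores.")

(defun my/elfeed-entry-tokens (entry)
  "Tokens to score ENTRY by: its feed's title plus its title words."
  (cons (elfeed-feed-title (elfeed-entry-feed entry))
        (split-string (downcase (elfeed-entry-title entry)) "[^[:alnum:]]+" t)))

(defun my/elfeed-adaptive-vote (entry delta)
  "Record an up (+1) or down (-1) signal for each of ENTRY's tokens."
  (dolist (tok (my/elfeed-entry-tokens entry))
    (puthash tok (+ delta (gethash tok my/elfeed-adaptive-scores 0))
             my/elfeed-adaptive-scores)))

(defun my/elfeed-adaptive-score (entry)
  "Sum the learned scores for ENTRY's tokens."
  (apply #'+ (mapcar (lambda (tok) (gethash tok my/elfeed-adaptive-scores 0))
                     (my/elfeed-entry-tokens entry))))

;; Sort search results by learned score instead of only by date:
(setq elfeed-search-sort-function
      (lambda (a b) (> (my/elfeed-adaptive-score a)
                       (my/elfeed-adaptive-score b))))
```

The per-entry cost is a tokenize plus a few hash lookups, which is where the "hope it's not a performance tarpit" worry comes in: the sort predicate runs O(n log n) times over the visible results.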

Two Updates: Org+Nix dev streams, and my new DNS resolver

I've started to stream on Thursdays where I'll explore salt dunes and arcologies

The last few weeks I have started to work in earnest on Rebuild of The Complete Computer, my effort to provide a distribution of my org-mode site publishing environment as a documented, configurable Concept Operating System. My "complete computing environment" will be delivered in three parts:

  • a set of online documents, linked above, explaining how I manage a small network of private services and a knowledge management environment using my custom web publishing platform, The Arcology Project.

  • a set of videos where I work through the documents, eventually edited down into a set of video lectures where you are guided from a completely fresh Fedora VM to installing Nix and a bare-bones org-roam Emacs, bootstrapping a NixOS systems management environment, and then using Org files to dynamically add new features to those NixOS systems.

  • a handful of repositories which I'll finally have to treat like "an open source project" instead of Personal Software:

    • The Arcology codebase, which you'll have a copy of on disk to configure and compile yourself

    • the core configuration documents that are currently indexed on the CCE page, a subset of which will be required to run the editing environment, and a number of other bundles of them like "ryan's bad UX opinions", "ryan's bad org-mode opinions", "ryan's bad window manager", etc...

I hope that by reading and following along with the documents while utilizing the video resources, one can tangle source code out of the documents and write or download more, while an indexing process extracts metadata from the files that can later be queried to say, for example, "give me all the home-manager files that go on the laptops", and produce systems that use that.

Two weeks ago I produced a three-hour video where I played Caves of Qud and then spent two hours going over some of the conceptual overviews and design decisions while setting up Nix in a Fedora VM, ending with the Arcology running in a terminal and being used to kind-of-sort-of cobble together a home-manager configuration from a half-dozen org-mode files on disk. It was a good time! This is cataloged on the project page, 0x02: devstream 1.

This week I came back to it after taking a break last week to contribute an entry to the autumn lisp game jam, and it was a bit more of a chaotic stream, with only two hours to get up to speed on the project; there are many implicit dependencies in the design and implementation of the system because it has slowly accreted on top of itself for a decade now. That was 0x02: devstream 2.

This week I'll work on cleaning things up to smoothly bootstrap, and next week we'll come back with a better way to go from "well, home-manager is installed" to "home-manager is managing Emacs and Arcology, and Arcology is managing home-manager", and from there we build a NixOS machine network...

I have probably a three- to six-month "curriculum" to work through here while we polish the Rebuild documents. I will be streaming this work and talking about how to build communal publishing networks and document group chats and why anyone should care.

With the news from the US this week, it feels imperative to teach people how to build private networks, if only because the corporatist monopolist AI algorithm gang are going to run roughshod over what's left of the open web the second Lina Khan and Jonathan Kanter are fired, if they haven't already begun today. We can host Fediverse nodes and contact lists and calendars for our friends for cheap and show each other how to use end-to-end chat and ad-blocking and encrypted DNS; we oughta.

I'll stream on twitch.com/rrix on Thursdays at 9am PT and upload VODs to a slow PeerTube server I signed up for. Come through if this sounds interesting to you.

I re-did my DNS infrastructure

Years ago I moved my DNS infrastructure to a Pi-hole running on my Seattle-based edge host. It worked really nicely without my thinking about it when I lived in Seattle, but I hesitated to fix it in the years since I moved half a hundred milliseconds away. The latency finally got annoying enough that I got around to it this week.

On my devices, I've been using Tailscale's "MagicDNS", because DNS is a thing that I think should just have magic rubbed on it; as it is, I've already thought way more about DNS in my life than I'd like. If you enable MagicDNS and instruct it to use your Pi-hole's address as the global nameserver, any device on your Tailnet will use the Pi-hole for DNS. Neat.

Pi-hole isn't packaged in nixpkgs, and I was loath to configure Unbound etc. and a UI myself, so I put it off and fnord-ed the latency for months. I finally got around to it this week by deploying Blocky, which has the feature-set I need, on my LAN server; rather than shipping a UI, it ships a minimal API and a Grafana dashboard:

It's a nice little thing; I hope it'll work out. I've started documenting this at Simple DNS Infrastructure with Blocky, of course.

With the querying back on my LAN and managed by my Nix systems instead of a web GUI on an unmanaged host, I can list my blocked domains and block lists in a human-legible format, I can return different DNS results to route my servers' traffic directly over the LAN to my homelab instead of round-tripping to the SSL terminator, and I can have custom DNS entries for local IPs. All this is managed in that one document, which you'll soon be able to download from my git instance; that's the Concept Operating System promise.
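
For a sense of what that looks like, here is a rough sketch of a Blocky deployment under NixOS. The option and setting names are from my memory of the services.blocky module and Blocky's docs, and the list URLs and LAN address are placeholders, so check both before copying:

```nix
# Sketch of a Blocky resolver under NixOS; names and values illustrative.
{ ... }: {
  services.blocky = {
    enable = true;
    settings = {
      upstreams.groups.default = [ "https://dns.quad9.net/dns-query" ];
      blocking = {
        denylists.ads = [ "https://big.oisd.nl/domainswild" ];
        clientGroupsBlock.default = [ "ads" ];
      };
      # Answer LAN names directly instead of round-tripping to the edge:
      customDNS.mapping = {
        "code.rix.si" = "10.0.0.10";   # hypothetical LAN address
      };
    };
  };
}
```

Because the whole thing is one attribute set, the blocklists and custom entries live in the same org-mode document as the rest of the host's configuration.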

If you're a contented Pi-hole user who never uses the web UI and needs to move, consider taking this thing for a spin.

For better or worse, the CCE now runs on River WM

tl;dr I am now running RiverWM on my NixOS distribution and have a published configuration for it therein.

I'm not particularly happy to write this. For literally half of my life, from age 16 to 32, I ran the KDE Plasma desktop, but recently I was forced to swap away. I really like using my computer without a mouse, and KDE has made it difficult to impossible for me to do so.

For quite a number of years I was able to run KDE Plasma with another window manager under X11 by just setting a session variable in my .profile, KDEWM=/usr/bin/awesome, and that worked great; I was even able to go full Emacs for a long time with EXWM (which allows one to treat Emacs as the "root" environment, with X11 windows acting as Emacs buffers, rather than a WM being the root environment and Emacs being a window). At some point the simple solution of setting an environment variable stopped working, but all you had to do to get Plasma with another WM on X11 was set up a SystemD user-unit overlay and then basically ask the WM to manage the Plasma windows.

When I got my GPD Pocket 3, however, I found that despite it being an Intel integrated graphics machine, X11 did not run well on it, and I set up my first Wayland-native environment, which meant considering throwing away a decade and a half of good experience with stacking tiling window managers. I set up Bismuth, a KDE Plasma 5 KWin plugin which would auto-tile and lay out windows and manage an ordered stack, so that I could Super-j and Super-k up and down the stack of windows and Super-RET to push the active window to the top of the stack, re-organizing the windows so that that window was the largest. It was basically good enough; Plasma 6 is a really nice desktop, and the stacking features of Bismuth meant that my three or four main applications could be driven from my keyboard's home-row.

Bismuth does not work on Plasma 6, and I started to have some really frustrating crashes in Plasma 5's KWin Wayland which were resolved by a fix only applied to Plasma 6, and XWayland would randomly exit with code 0 and no log output. I eventually got Plasma's X11 session to work decently on the GPD Pocket 3, but at the cost that touch events and other libinput stuff didn't work well enough for it to be my daily driver, so I've had to look to other shores.

Bismuth doesn't work with Plasma 6 and is no longer under active development, so I had to go back to a non-stacking tiling WM (think sway/i3) and tried Polonium, which unfortunately did not work with how I want to use my computer. Not having an auto-tile system was driving me mad. The other Wayland tiling window managers are mostly interested in emulating the tiling philosophy that i3 uses, rather than an auto-tiling system with layouts built in like XMonad or Awesome.

River WM is the tiling system that gives you that, basically, with caveats. So I spent the last week or two, while my personal life un-winds and comes back together, building a desktop configuration with RiverWM and getting really angry with the state of modern Linux desktop systems along the way.

Any intuition I have about how a Linux desktop system is composed is thrown out the window with the move toward D-Bus and SystemD managed desktop sessions. I have spent the last week dealing with fiddly fucking environment variable propagation and XDG Desktop Portal configuration files to get things like my Matrix.org local proxy service Pantalaimon to connect to D-Bus and auto-spawn a secret service which contains the OLM keys. After every NixOS rebuild I would end up with two waybar instances running, and I spent all day today trying to get it to work within a systemd-run ephemeral user unit; but for some reason, despite verifying that the environment variables were basically equivalent by poking at /proc/$PID/environ, the user-unit version of waybar would not show icons in the taskbar, despite it working if I spawned waybar in the initialized desktop session. Infuriating shit.

I have a system I am mostly content with but I know that the "long tail" of issues in maintaining and spit-shining this thing will probably end up being about the same amount of work as it would take to just grit my teeth and get used to PaperWM or Polonium running within a desktop that provides all this ugly plumbing.

But it presents a problem: I can no longer recommend the software I use to anyone else, especially "normies." I've spent the last year carefully polishing a Linux distribution and Emacs environment that I could hand to family to plug in to a cloud that is not owned by a tech monopoly and believe they could use, built on a desktop whose control surfaces and plumbing I've tried to understand since high school. I thought I was close with the Rebuild of The Complete Computer, but this feels like a setback.

Now I have a fucked up tiling window manager system that no one else can use and a bunch of shell-script- and JSON-configured microservices that aim to provide a desktop that sucks less but mostly end up making me hate the Linux desktop and the isolated islands it's become. And meanwhile the display power management system on my laptop still barely works.

The Hey Smell This factor of going with the "choose your own Wayland Desktop" route is nearly unbounded; I can't recommend this to anyone in good faith unless they know what they're getting into.

I hope some day that there is a window manager like River that implements the same Wayland protocols as KWin and I can swap it back in over the Plasma desktop, because I am oh so tired of dealing with desktop Linux plumbing. I probably should put my money where my mouth is one of these days.

The River CCE module is laid out nicely, though I still need to un-tangle some of the more frustrating bits of configuration like the XDG Desktop Portal configuration. If you want to try out River, there is a "batteries included" lightly opinionated home-manager.nix and nixos configuration for you to walk through.

published: A High Level Overview of a Concept Operating System

The first chapter of the Rebuild of The Complete Computer project has begun with the publishing of A High Level Overview of a Concept Operating System . I'm beginning to sketch out a workbook or textbook of sorts that walks through how to construct a self-documented Linux system out of org-mode documents, and how those documents can be used as kindling for a small community's dream of self-hosting.

In short, the Concept Operating System is a set of documents that manage a Linux operating system and an Emacs text editor environment, a map and a terrain for you to implement your own. By following along with this document and future documents you'll be able to build and maintain your own computing system. These documents will build upon and synthesize the author's existing Concept Operating System, which he refers to as "The Complete Computing Environment". In the next section we'll take a look at each component of the Complete Computer and see what it provides, how we can evaluate each layer, and ultimately how to build our own.

In short it's a bunch of B.S. but it might be your kind of B.S.

Re-built my Wallabag NixOS module

After upgrading to NixOS 24.05, the Wallabag module I implemented based on dwarfmaster/home-nix's wallabag module stopped working, and I didn't understand how it worked well enough to fix it. I had planned to swap back to running Wallabag in Docker once I got around to it, but I stumbled across someone's link to another Wallabag module for NixOS, which has the benefit of running Wallabag straight out of the Nix Store and resolves some issues around cached resources which may have been responsible for the unreproducible issues I was having with my old module.

I made some changes to the module along the way, adding configuration options to it, stripping out some DRY library helpers which I didn't need, etc. It's quite easy to use:

nix source: 
{ ... }: {
  services.wallabag = {
    enable = true;
    domain = "bag.fontkeming.fail";
    virtualHost.enable = true;
    parameters = {
      domain_name = "https://bag.fontkeming.fail";
      server_name = "rrix's Back-log Black-hole";
      locale = "en_US";
      twofactor_sender = "wallabag@fontkeming.fail";
      from_email = "wallabag@fontkeming.fail";
    };
  };
}

Check it out and try it for yourself on the Wallabag page today!

The Rebuild of the Complete Computer has Begun

[2024-03-11 Mon] Eugene, OR: The Rebuild of the Complete Computer Project Committee announces the launch of the Rebuild of the Complete Computer Project, a reprisal and reintegration of the Emacs+NixOS Concept Operating System accreted over the years by The Rebuild of the Complete Computer Committee's Chair Ryan Rix. This multi-sensory multi-media experience promises to provide you and your community with new meta-cognitive powers or your money back [ed: the Committee refers us to Very Serious Legal Requirement #3 and reminds us that you actually are responsible for both pieces if it breaks].

The Rebuild of the Complete Computer is an attempt to curate a system where others could build a self-publishing platform on top of The Arcology Project and use that same knowledge base to deploy and manage their own computer systems; a system where I or other interested nerds could deploy and manage a small fleet of machines for our friends and family and provide access to powerful, private, meta-cognitive tools.

I've recently re-launched the Arcology Project as a piece of Python software built using Django; you might be reading this using that software right this second. Or perhaps you're reading it in your RSS reader, which fetched the content from this software. Or you're reading it on the Fediverse, where the CCE Wobserver's Mastodon talked to your Mastodon and told it about a new post, an HTML page crammed into an ActivityPub message.

Today I continued to work toward a replicable and reputable Arcology release. I spent some time winnowing what software is included in my "stack" so that I could pull out the base dependencies and layers of the Arcology Project and the CCE, and in the process I've structured an interesting document, A Holistic View of the Arcology and Complete Computing, where I describe the topology of a Concept Operating System in the context of the actual Arcology ideology: the theoretical ecologically just, sociologically friendly, self-sustaining human habitat idealized by Paolo Soleri and the architectural researchers at Arcosanti. You might find that interesting. Idk.

When I've done a bit more of this file organizing work, and a bit more drafting and scripting in some un-published documents and built some OBS overlays, I will begin doing a set of live-stream semi-scripted videos documenting the design and construction of a fresh Emacs+NixOS Concept Operating System and the publishing platform it's integrated with.

I'll likely be doing that on my new MakerTube channel, "Complete Computing", feel free to subscribe there or here if this is interesting to you.

Updated my DrawingBot V3 nixpkg

The new year is always a nice refresh and a chance to work on new creative projects. I have been thinking about going out, taking more photos, and pen-plotting more of them on my AxiDraw. I decided to try to get DrawingBotV3 working again, a piece of software which can take images or animations and export SVGs suitable to be plotted. Last year I bought the "premium" version, which adds CMYK and pen-color matching and can run a bunch of different path-finding algorithms to generate the pen paths, which lets you do some really neat things, like this for example:

The last version that I used was distributed as a jar file, which was nice since I could just add a wrapper program which would call java -jar with the right arguments, but newer versions are distributed only as deb or rpm files on Linux. So I updated my DrawingBot V3 on NixOS package to extract the Debian package. This is the first time I have done this, and it came together pretty quickly. It took some time to get the buildInputs lined up, especially since the Java app does some JNI loading which autoPatchelfHook doesn't detect. But this was a pretty fun little experiment, and you can look forward to seeing more pen plotter art on my Plotter Art page over on the Lion's Rear.
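
The rough shape of wrapping a vendor .deb like this is below. This is a sketch, not my actual derivation: the URL, hash, and install paths are placeholders, and the real package lives on the DrawingBot V3 on NixOS page.

```nix
# Illustrative derivation for extracting a vendor .deb; values elided.
{ stdenv, fetchurl, dpkg, autoPatchelfHook, makeWrapper }:
stdenv.mkDerivation {
  pname = "drawingbotv3";
  version = "...";
  src = fetchurl {
    url = "...";     # the upstream .deb
    hash = "...";
  };
  nativeBuildInputs = [ dpkg autoPatchelfHook makeWrapper ];
  # A .deb is just an archive; dpkg-deb -x unpacks it without installing:
  unpackPhase = "dpkg-deb -x $src .";
  installPhase = ''
    mkdir -p $out
    cp -r opt/drawingbotv3/* $out/
  '';
  # Note: autoPatchelfHook only fixes libraries it can see in ELF
  # headers; anything the bundled JRE dlopen()s through JNI has to be
  # supplied by hand, e.g. via wrapProgram and LD_LIBRARY_PATH.
}
```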

I Re-wrote my keyboard customizations to use xkb

For a long time My Custom Keyboard Layout has used xmodmap to put ! and ? closer to my fingers, under shifted , and .:

  • Shift + comma produces an exclamation point

  • Shift + period produces a question mark

  • Shift + slash produces a backslash

  • Shift + 1 produces a less-than mark

  • Shift + backslash produces a greater-than mark

Doing this with xmodmap works well enough on login, but every time I plug or unplug a USB keyboard or switch inputs on my display (thus re-routing its internal hub) I have to re-run a systemd user service to reinvoke xmodmap.

Previously, I'd tried to define this using xkb in My NixOS configuration, but the way I'd tried to do it was causing everything that depended on Xorg to be compiled from source. Yikes! I gave up for a while, but re-approached this last week once I moved my primary machine to Plasma Wayland, because xmodmap does not work in Wayland at all.

This was a bit of a pain because there aren't a lot of examples of folks doing this on GitHub, and it wasn't clear to me whether and how NixOS's services.xserver.xkb worked with KWin's Wayland compositor. Last time I tried to write an entire xkb_keymap file, but I couldn't get the syntax to work right, and frustratingly xkbcomp wouldn't output any errors, even with -w 10, until I got everything lined up. And then once I got it working with a whole xkb_keymap file that would apply with xkbcomp the_file.xkb $DISPLAY, using examples from the NixOS Wiki's Keyboard Layout Customization page, it didn't work in Wayland-native apps like my MOZ_ENABLE_WAYLAND=1 Firefox. At least services.xserver.xkb works in both X11 and Wayland, but the documentation I found, both in NixOS and in Linux-land broadly, is quite weak on this subject.
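
For orientation, an xkb_symbols fragment implementing the remaps above would look roughly like this. The key names are the standard pc105 codes as I remember them, and the fragment name is made up; see My Custom Keyboard Layout for the working version:

```
// Sketch of an xkb_symbols fragment for the remaps listed above.
partial alphanumeric_keys
xkb_symbols "rrix-punct" {
    key <AB08> { [ comma,     exclam    ] };  // Shift+, -> !
    key <AB09> { [ period,    question  ] };  // Shift+. -> ?
    key <AB10> { [ slash,     backslash ] };  // Shift+/ -> backslash
    key <AE01> { [ 1,         less      ] };  // Shift+1 -> <
    key <BKSL> { [ backslash, greater   ] };  // Shift+backslash -> >
};
```

On the NixOS side, my understanding is that a fragment like this gets registered through services.xserver.xkb.extraLayouts, which compiles it into the system xkb database so that both X11 and Wayland compositors can select it.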

But hopefully My Custom Keyboard Layout can serve as a minimal example for customizing keyboard symbols.

New on The Wobserver: A Wallabag deployment module for NixOS

(ed: It was pointed out to me today that my last article in this feed had an update time far in the future; my apologies for fat-fingering that. Today I learned that Lua's os.time will convert 93 days into three months and change!)

Today I set up one of the final services in the long migration to my NixOS-based Homelab Build, The Wobserver.

It's the "graveyard for web articles I'll never read" known more commonly as Wallabag.

wallabag is a web application allowing you to save web pages for later reading. Click, save and read it when you want. It extracts content so that you won't be distracted by pop-ups and cie.

Wallabag is a PHP application which is packaged in nixpkgs, but unlike many other services packaged therein it is not trivial to enable in NixOS. I did find a year-old NixOS module in dwarfmaster/home-nix which was a good starting place, so I copied that into my system and set to work making it work with the current version of Wallabag, 2.6.6. It's nicely written, but needed some work to match the current configuration format, among other small changes.

I then customized it so that it is as easy to configure and use as a standard NixOS module:

nix source: 
{ pkgs, ... }: {
  imports = [ ./wallabag-mod.nix ./wallabag-secrets.nix ];
  services.wallabag = {
    enable = true;
    dataDir = "/srv/wallabag";
    domain = "bag.fontkeming.fail";
    virtualHost.enable = true;
    parameters = {
      server_name = "rrix's Back-log Black-hole";
      twofactor_sender = "wallabag@fontkeming.fail";
      locale = "en_US";
      from_email = "wallabag@fontkeming.fail";
    };
  };
  services.nginx.virtualHosts."bag.fontkeming.fail".extraConfig = ''
    error_log /var/log/nginx/wallabag_error.log;
    access_log /var/log/nginx/wallabag_access.log;
  '';
}

If you want to use this, it should be straightforward to integrate. I don't think it's high enough quality to contribute directly to nixpkgs right now, but if someone is brave enough to shepherd that I surely wouldn't mind. 😊

paperless-ngx is a cool little document management system

When I agreed to be the treasurer and bookkeeper for the Blue Cliff Zen Center, I bought a Brother DCP-L2550DW printer/scanner/copier, an affordable, functional and reliable laser printer that doesn't mess around with you like similarly priced printers from HP etc. do. It's fine and basically works with minimal configuration, and I can scan over the network using Skanlite. All nice and easy.

But taking this a step further and managing my personal paper detritus has been a long-term goal; I have been broadly aware for a while that there are decent open source OCR toolkits like tesseract, and I wanted to build a pipeline for generating OCR'd PDFs from a directory of scanned documents, but I never bothered to figure out how to do this myself.

I stumbled recently on Paperless-ngx, though, and found that it is packaged in nixpkgs, with a NixOS module to easily set up and configure it. So I did that.

Importantly, the full-text search works pretty well on printed documents. On hand-written stuff it'll struggle; I wonder if I can tune it against my own handwriting, but for now this is pretty nice:

It also attempts to do some amount of auto-categorization, though with only a couple dozen documents brought in so far it's a bit too stupid to trust, and I spent about two hours after the first batch scan job clearing out the INBOX tag and manually sorting things out. It also had a habit of parsing dates as D/M/Y instead of Americanese M/D/Y, which I need to figure out how to fix.
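
If I'm reading the Paperless docs right, the date-order thing is probably just an environment setting. A sketch of the whole deployment, with option names from memory (check the services.paperless module before trusting this) and a placeholder data directory:

```nix
# Illustrative Paperless-ngx deployment; verify option names against
# the NixOS services.paperless module documentation.
{ ... }: {
  services.paperless = {
    enable = true;
    mediaDir = "/srv/paperless";        # hypothetical path
    settings = {
      # Presumably the fix for D/M/Y parsing: prefer American order.
      PAPERLESS_DATE_ORDER = "MDY";
      PAPERLESS_OCR_LANGUAGE = "eng";
    };
  };
}
```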

Setting up the printer to do "Scan to FTP" was a bit of a pain; for some reason US models have the functionality disabled (I blame CISA). There is some BS you can do to go into a maintenance menu to change the locale, reconnect it to the wifi, and then configure a Scan to FTP profile in the web UI, but this feature is silently disabled by default.

Anyways, I got through a year's worth of personal docs in a few hours and have a bigger shred pile than I would like, but I can shred them without feeling too bad now. I'll have encrypted backups of these documents on Backblaze B2 forever, alongside a sqlite DB that I can full-text search. I'll probably upload my "Important Docs" directory into this thing sooner or later, but for now it'll handle my mail and the Zendo's documents. It also has a Progressive Web App manifest, so you can "install" the management app on your phone to search docs on the go.

As with all of my NixOS code, it's documented and exposed on the Paperless-ngx page.

A toolkit for Literate Programming imapfilter configurations

Over on imapfilter filters my imap I have cooked up a simple Org Babel page which generates a set of mail filtering rules for the imapfilter utility. imapfilter is scripted in Lua, but with a lot of rules it gets a bit difficult to manage. So I broke my configuration up into a number of org-mode tables that are processed by ten lines of Emacs Lisp, which turn them into Lua code that is written to the imapfilter configuration.

While I don't publish the raw org-mode file, since it has un-published headings with rules I don't want to share, the usage flow is pretty simple, so let me walk you through it:

  • In the imapfilter configuration I define a handful of Lua helper functions which take a list of mails to act on, a search key, and a destination to move the matching mails to.

  • Define some emacs-lisp which takes a table as a variable; tables are passed into the ELisp as a list of lists, each inner list being a single row. This is stored in a named Org Mode source block. It takes each row and turns it into a single Lua statement:

    emacs-lisp source: 
    (thread-last tbl
      (seq-map (lambda (row)
                 (format "mb = file_by_%s(mb, \"%s\", \"%s\")"
                         (first row) (second row) (third row))))
      (s-join "\n"))
  • Define a table with at least three columns

    • the first is the "type" of rule -- notice in the format call that the string contains file_by_%s; this first column is used to pick one of the helpers.

    • the second column is the value to filter on: the subject, sender, or destination.

    • the third column is the destination to move the mail to.

  • Create "lua" source code blocks with noweb syntax enabled, and with a tangle destination of the imapfilter configuration. Each one calls, via noweb, our code-generation elisp, passing in a table, and that generated Lua is then tangled into the configuration file: <<call-imapfilter-from-table(social-lists)>>

The end result is a set of tables which can be shared or published, edited and reordered easily, and then seamlessly exported as a configuration file.
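
To make the pipeline concrete, here is what one of those Lua helpers might look like. This is a hypothetical shape, not my actual (un-published) config; the helper name matches the file_by_%s convention above, but the imapfilter calls should be checked against its manual:

```lua
-- Hypothetical helper: take the working set of mails, a sender to
-- match, and a destination folder; file the matches and return the
-- mails that are still unfiled.
function file_by_from(mb, sender, destination)
   local matching = mb:contain_from(sender)
   matching:move_messages(account[destination])
   return mb - matching   -- imapfilter sets support subtraction
end

-- A generated line from one of the org tables would then read:
-- mb = file_by_from(mb, "lists@example.org", "INBOX/Lists")
```

Each table row thus becomes one declarative statement, and the helpers keep all the imperative IMAP plumbing in one place.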

I do this as well in some other places:

Using literate programming for generating configuration is really fun and lets you scale your code and configuration while keeping it legible!

I'm now running my Matrix Synapse instance on The Wobserver

After last week's embarrassingly-handled WebP 0-day, I realized my Synapse instance was sorely out of date and now had a Pegasus-class vulnerability on it. Unfortunately, the dockerfile I had been using to manage that service on my Wobscale server was out of date and didn't build with more recent versions of Synapse. Rather than using the upstream Debian-based Dockerfile, I was using one prepared by my dear friend iliana, one which she stopped using quite a while ago and I was maintaining myself. Welp nyaa~.

After briefly considering migrating to spantaleev/matrix-docker-ansible-deploy, and doing some math on exactly how much data a federating synapse node passes in a week or a month, I decided I would move the Synapse install on to my home network with my Wobscale 1U acting as a reverse proxy to my homelab machine over Tailscale .

And so on Friday afternoon I decided to wreck my sleep schedule and migrate across.

In I'm now running my Matrix Synapse instance on my Wobserver , I've written at length about this migration process, but I will spare the RSS feed the gruesome details. Click through if you're curious about moving a functioning Synapse instance to a NixOS machine with "only" 24 hours of downtime.

Synapse was one of the last services running on my Seattle 1U server; it was originally deployed back in like 2017 and has served me well but it's a geriatric Fedora Linux install that is now only running my Wallabag server on it. Once I migrate that to The Wobserver in my living room, I'll be able to turn this host down and ask ili for a small VM that can do the edge networking and be managed by my NixOS ecosystem instead of hand-tuned nginx configurations. That'll be nice.

This simultaneously took more and less work than I expected it to, and it's certainly not a perfect migration, but it is nice to be done with. It took about 22 hours of downtime all said and done, including some time spent sleeping while the thing was semi-functional.

This migration was a huge fnord for months, where every time my disk filled up I would say "I should update my synapse, ugh, I should migrate my synapse to nixos, ugh, I should move to conduit, ugh, I should just sign up for beeper.com and never touch synapse again" and then do some stupid bullshit to clean it up enough to run VACUUM FULL on the synapse DB. It's still 65 fucking GB of old events I never want to see, and I recently learned why: "unfortunately the matrix-synapse delete room API does not remove anything from stategroups_state. This is similar to the way that the matrix-synapse message retention policies also do not remove anything from stategroups_state." This kills me, and it's probably why my next step will be to set up matrix-synapse-diskspace-janitor.

Archiving Old Org-Mode Tasks

Since I use org-caldav to sync my Events between my Org environment and my phone, I end up with a lot of old junk building up over the years. I recently archived ~200 old calendar entries with this function which will archive anything older than 6 months, but skip any recurring events:

emacs-lisp source: :tangle ~/org/cce/publish-snippets.el
(defun cce-archive-old-events ()
  "Archive any headings with active timestamp more than 180 days in the past, but not repeating"
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (let ((six-months-ago (- (time-convert nil 'integer) (* 60 60 24 30 6))))
      (while-let ((point (re-search-forward (org-re-timestamp 'active) nil t))
                  (ts (match-string 0)))
        (save-excursion
          (goto-char (match-beginning 0)) ; timestamp parser needs to be at match-beginning
          (unless (or (> (org-time-string-to-seconds ts) six-months-ago)
                      (plist-get (cadr (org-element-timestamp-parser)) :repeater-type))
            (when (y-or-n-p (format "Archive %s from %s?" (org-get-heading) ts))
              (org-archive-subtree-default))))))))

It was suggested in the Emacs Matrix.org channel that I could use org-ql to do this, and I would like to learn how to use that properly some day, but the ts-active predicate didn't work how I expected and I'd need to make a predicate for filtering out recurring events anyways, so I just bodged this together.

This code is simple enough on its own.

  • Generate an integer representing a time six months ago

  • Generate a regular expression representing an active timestamp

  • Repeatedly search for that timestamp; re-search-forward in its NOERROR mode is great in combination with while-let to do macro activity like this

    • skip any which repeat

    • skip any "younger" than six months

    • ask if I want to archive the heading, and do so
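The age-and-repeater filter at the heart of those steps can be sketched in Python, with hypothetical entry dicts standing in for Org headings (the real code works on buffer text via Org's timestamp parser):

```python
import time

# Same approximation the elisp uses: six 30-day months in seconds.
SIX_MONTHS = 60 * 60 * 24 * 30 * 6

def should_archive(entry, now=None):
    """True when the entry is older than ~6 months and has no repeater.

    `entry` is a stand-in dict like {"timestamp": <epoch>, "repeater": "+1w"};
    the key names are illustrative, not from the real code.
    """
    now = now or time.time()
    return (not entry.get("repeater")
            and entry["timestamp"] < now - SIX_MONTHS)

# An event from the epoch, viewed just past the six-month cutoff:
print(should_archive({"timestamp": 0}, now=SIX_MONTHS + 1))
```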

Making my NixOS system deploys as simple as possible with Morph, a hosts.toml file, and my Morph Command Wrapper

This weekend I tried setting up deploy-rs, and it and flakes are kind of not very good for what I am doing with my computers.

I spent some time today doing a thing that I have wanted to do for a while: moving my "network topology" in to a data file which can be ingested by the tools themselves rather than declaring it in code.

This starts with defining a hosts.toml file whose topology maps more-or-less to the one that Morph uses; a network is a deployment entity which can have any number of hosts in it:

toml source: 
[endpoints]
description = "my laptops and desktop"
enableRollback = true
config = "../roles/endpoint"

[endpoints.hosts.rose-quine]
# target = "rose-quine"
# stateVersion = "23.05"
# user = "rrix"

There are reasonable defaults in the host configurations so that adding a new host is a single-line operation in the hosts.toml file.

With that file in place, Deploying from my =hosts.toml= defines a function that ingests the networks and spits out a Nix attrset in the shape that Morph wants to execute:

nix source: 
let
  pkgs = import <nixpkgs> {};
  allNetworks = pkgs.lib.importTOML ./hosts.toml;
  mkNetwork = import ./mkNetwork.nix { inherit pkgs; networks = allNetworks; };
in mkNetwork "endpoints"

And from there I could say morph build $file_defined_above and it would go off and do that. Then another invocation of morph deploy with a grab-bag of arguments would actually roll the system out to the host.

Taking things a step further is fun though. Why not make a simple wrapper that can make this easier? Morph Command Wrapper does that, and allows me to just type deploy to run a change out to the host I'm sitting at, or deploy -b to just build it, or deploy --all to run it out everywhere.
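A wrapper like that mostly just translates friendly flags into morph invocations. Here's a hypothetical sketch of that translation (the -b/--all flags match the ones above; the file path and the exact morph argument order are my assumptions, not the real wrapper):

```python
import socket

def morph_commands(args, network_file="networks/endpoints.nix"):
    """Map `deploy [-b] [--all|HOST]` arguments to morph command lines.

    With no host named, deploy to the machine we're sitting at; with -b,
    stop after the build step. Returns the argv lists rather than
    executing them, so the policy is easy to inspect.
    """
    build_only = "-b" in args
    named = [a for a in args if not a.startswith("-")]
    host = "--all" if "--all" in args else (named[0] if named else socket.gethostname())
    cmds = [["morph", "build", network_file]]
    if not build_only:
        deploy = ["morph", "deploy", network_file, "switch"]
        if host != "--all":
            # morph's --on flag limits the deploy to matching hosts
            deploy.append(f"--on={host}")
        cmds.append(deploy)
    return cmds

print(morph_commands(["-b"]))
```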

Invoking deploy-targets will print out a list of all the hosts that the system knows about, which can then be conveniently fed in to completing-read:

emacs-lisp source: 
(->> (shell-command-to-string "deploy-targets")
     (s-split "\n")
     (append '("--all"))
     (completing-read "Which host do you want to deploy to? "))

And that can be used by the interactive emacs function arroyo-flood to automatically tangle the systems' org-mode role files, dynamically extracting a list of server , laptop, desktop , etc, modules from a sqlite cache along the way, and then deploying those! I'm pretty happy with this.

Hopefully systemd-inhibit-mode will keep me from burning in my monitor and burning out my laptop battery.

When I watch a full-screen Video in Firefox while using XMonad , it doesn't properly disable DPMS so my screen blanks every five minutes in to the video. Rather than try to figure out why that was, I would invoke systemd-inhibit to inhibit screen blanking. This was fine, but I'd run it in a terminal emulator which I would promptly forget about. Cue my laptop having a dead battery or my monitor burning in the lock-screen text overnight when I would manually lock the desktop and head to bed.

=systemd-inhibit-mode= is a dead-simple global minor mode which exists to remind me via the modeline that I have the inhibitor process running.

It's 27 lines of code that you can copy to your init.el if you are interested. 😊 I'm not terribly motivated to stick stuff like this in MELPA. Anyways it's licensed under the Hey Smell This license, if you are some sort of person who cares about licenses or thinks I do.
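For reference, the invocation the mode wraps looks something like the one built below. This is a hedged sketch: systemd-inhibit's --what and --why flags are real, but the particular defaults and the argv-builder itself are my own illustration, and actually spawning/killing the process is left to the caller:

```python
def inhibit_argv(what="idle:sleep", why="watching a full-screen video"):
    """Build the systemd-inhibit command line that holds a blanking lock.

    The `sleep infinity` child keeps the inhibitor alive until the
    wrapper process is killed -- which is exactly the process the minor
    mode tracks and shows in the modeline.
    """
    return ["systemd-inhibit", f"--what={what}", f"--why={why}",
            "sleep", "infinity"]

print(inhibit_argv())
```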

Moved my Org Site Engine-to-Fediverse cross-posting from feed2toot to Feediverse

Lately I have been working on integrating my org-mode site engine with my Fediverse presence. Following @garden@notes.whatthefuck.computer for technical work and @lionsrear@notes.whatthefuck.computer for creative/tea/philosophical crap should give a full view of what I am adding to my sites and keeping my main presence for 1-1 interactions with my fedi-friends.

There are Atom feeds available for headlines on various pages on the sites, and tying those to Fediverse posts should be pretty straightforward, but finding the right tool for the job is always the hard part when I am forging my own way forward.

Yesterday in the back of a rental car I added feed metadata to the org-mode document for my 2023 Hawaii Big Island Trip and wondered how I could get that on to my fedi profile -- I realized that it required my laptop to run the Morph deployment to ship a new NixOS system. A bit overkill for what I want to do, especially when the data is already in at least one sqlite database!

So I modified Arcology's Router to add an endpoint which returns all the feeds on the site in a JSON document, and then set to work making a script which wrapped =feed2toot= to orchestrate this, but quickly felt that feed2toot is a bit too over-engineered for what I am asking it to do, especially when I set about adding multi-account support to it; the configuration parser is doing a lot more than I want to deal with. Feediverse is a simple single-file script which I was able to easily modify to my needs, with a bit of code-smell in the form of yaml.load-based configuration.

My fork of feediverse reads that feeds.json document and iterates over every feed looking for new posts. This lets me add new feeds to my cross-poster without deploying The Wobserver on my laptop. There is a slow pipeline that prevents me from using this to Shitpost or live-toot things, but I think that's basically okay. The idea is to use it to slowly ship out things when I make new art, or have an end-of-day log, or publish a site update, without having to think too hard about it. Most of the Arroyo Systems stuff (from adding software to my laptop or server to adding pages to my site) is managed by adding metadata keys to my org-mode documents, and this is now no different, though perhaps a bit too "roundabout" for it to be considered good engineering:

  • add ARCOLOGY_FEED page keyword

  • add ARCOLOGY_POST_VISIBILITY page keyword

  • add PUBDATE and ID heading properties to entries on the page that will be published to the feed

  • wait for syncthing to copy files to my server

  • wait for arcology inotify-watcher to reindex the site

  • wait for Feediverse to run on the quarter-hour and post the new feed entries
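The last step of that pipeline boils down to a loop like the one below. This is a hedged sketch, not my actual feediverse fork: the feeds.json shape and the seen-ID cache are assumptions, and the feed fetcher is injected so the example stays network-free:

```python
import json

def new_entries(feeds_json_text, seen_ids, fetch_feed):
    """Yield (feed, entry) pairs that haven't been cross-posted yet.

    `fetch_feed` maps a feed URL to a list of entry dicts with at least
    an "id" key; `seen_ids` is the persistent cache of already-posted
    entry IDs, mutated as we go.
    """
    for feed in json.loads(feeds_json_text)["feeds"]:
        for entry in fetch_feed(feed["url"]):
            if entry["id"] not in seen_ids:
                seen_ids.add(entry["id"])
                yield feed, entry

# The updates.xml URL is the real one for this site; everything else here
# (document shape, entry IDs) is made up for the demo.
feeds_doc = '{"feeds": [{"url": "https://cce.whatthefuck.computer/updates.xml", "visibility": "public"}]}'
fake_fetch = lambda url: [{"id": "a"}, {"id": "b"}]
seen = {"a"}
print([entry["id"] for _feed, entry in new_entries(feeds_doc, seen, fake_fetch)])
```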

But my axiom in writing the Arroyo Systems stuff and The Arcology Project as a whole is that Personal Software Can Be Shitty , and this certainly is, but it means I can post new feeds from my Astro Slide so who can say whether it's good or bad.

As much as I am not a fan of Pleroma, it's really nice to just be able to fire HTML right at my server and have it appear halfway decently on the Fediverse with inline links and even images working out just fine. Just have to keep my posts somewhat short. This is probably too long for Fedi already, bye.

First Update: A welcome return

I noticed that my feed in the Planet Emacslife feed aggregator wasn't valid any more and am re-establishing a feed here for capturing my configuration-specific stuff. First-time readers may recognize me from the Emacs or Nix-Emacs Matrix.org rooms, or from building the first prototype of an Emacs Matrix Client, which would later be taken, evolved, and polished by alphapapa after I became frustrated with hauling bugs and implementing end-to-end encryption.

Right now most of the new functionality in my notebooks is centered on NixOS automation and the deployment of my Wobserver, but I also do increasingly silly hacks with org-roam and my own org-mode metadatabase called arroyo-db. If you haven't seen it before, my Arroyo Systems Management documents are brain-bending automation for dynamically assembling a Concept Operating System from many org-mode docs. I have vague dreams to create a system where Emacs and NixOS users could bootstrap a minimal Linux operating system by referencing documents on the web & seamlessly pulling them in to their system, but this is a fair bit of the way off still.

All of this is published straight out of my org-roam knowledgebase using a home-built web site engine called The Arcology Project which exists to give me a way to publish these pages across multiple domains, to arbitrary web paths, without having to line them all up on a file-system. It's primarily written in Python but it uses Arroyo to generate a sqlite database which the Python reads, and some custom lua and templates which Pandoc uses to render the org-mode docs to HTML.

I'll keep this feed up to date with new modules and interesting updates. Here are a few that have fallen between the cracks over the last 6 months or more:

  • Dealing with Syncthing Conflicts shows a function called cce/syncthing-deconflict which will open up ediff buffers for any "conflict" files created by Syncthing which can occur when I edit buffers while syncthing is sleeping on my phone for example.

  • I have a raft of Emacs functions for studying Japanese :

    • opening the Jisho search engine to search for any Japanese word at-point

    • templating org-fc flashcards using a python command line API for Jisho

    • showing katakana and hiraganas' meanings in the minibuffer using eldoc

  • a simple macro for defining Dynamic Org Captures , org-capture templates which have dynamic file names (for example date/week-stamped filenames) and automatic "top matter" like org-roam provides for. I use this to provide a handful of capture commands in to my Journal and other parts of my org-mode document hierarchy.

  • CCE Nixos Core defines a simple helper function cce-find-nix-output-at-point which will select and find-file an output for a Nix derivation path (a metadata-file showing all the inputs/outputs of a Nix expression) at-point.