This page contains new modules for The Complete Computing Environment, my Emacs and NixOS automation environment. The CCE is designed to be modular, allowing me to intersperse code and documentation using Literate Programming facilities built into org-mode, which is then published to the web using The Arcology Project site engine.
A feed is generated from this page at https://cce.whatthefuck.computer/updates.xml and cross-posted to the Fediverse on my instance.
New on The Wobserver: A Wallabag deployment module for NixOS | CCE HomeOps
(ed: It was pointed out to me today that my last article in this feed had an update time far in the future, my apologies for fat-fingering that. Today I learned that Lua's os.time will convert 93 days into three months and change!)
Today I set up one of the final services in the long migration to my NixOS-based Homelab Build, The Wobserver.
It's the "graveyard for web articles I'll never read", known more commonly as Wallabag.
wallabag is a web application allowing you to save web pages for later reading. Click, save and read it when you want. It extracts content so that you won't be distracted by pop-ups and cie.
Wallabag is a PHP application which is packaged in nixpkgs, but unlike many other services packaged therein, it is not trivial to enable in NixOS. I did find a year-old NixOS module in dwarfmaster/home-nix which was a good starting place, so I copied that into my system and set to work making it work with the current version of Wallabag, 2.6.6. It's nicely written, but it needed some work to match the current configuration format, along with other small changes.
I then customized it so that it is easy to configure and use like standard NixOS modules:
{ pkgs, ... }:
{
  imports = [ ./wallabag-mod.nix ./wallabag-secrets.nix ];

  services.wallabag = {
    enable = true;
    dataDir = "/srv/wallabag";
    domain = "bag.fontkeming.fail";
    virtualHost.enable = true;
    parameters = {
      server_name = "rrix's Back-log Black-hole";
      twofactor_sender = "wallabag@fontkeming.fail";
      locale = "en_US";
      from_email = "wallabag@fontkeming.fail";
    };
  };

  services.nginx.virtualHosts."bag.fontkeming.fail".extraConfig = ''
    error_log /var/log/nginx/wallabag_error.log;
    access_log /var/log/nginx/wallabag_access.log;
  '';
}
If you want to use this, it should be straightforward to integrate. I don't think it's high enough quality to try to contribute it directly to nixpkgs right now, but if someone is brave enough to shepherd that I surely wouldn't mind. 😊
paperless-ngx is a cool little document management system | CCE HomeOps
When I agreed to be the treasurer and bookkeeper for the Blue Cliff Zen Center I bought a Brother DCP-L2550DW printer/scanner/copier, an affordable, functional and reliable laser printer that doesn't mess around with you like similarly priced printers from HP etc. do. It's fine and basically works with minimal configuration, and I can scan over the network using Skanlite. All nice and easy.
But taking this a step forward and managing my personal paper detritus has been a long-term goal; I have been broadly aware for a while that there are decent open source OCR toolkits like tesseract, and I wanted to build a pipeline for generating OCR'd PDFs from a directory of scanned documents, but I never bothered to figure out how to do this myself.
I stumbled recently on Paperless-ngx, though, and found that it was packaged in nixpkgs, with a NixOS module to set it up and configure it easily. So I did that.
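The shape of that, as a minimal sketch using the services.paperless options from nixpkgs; the paths and port here are placeholders, not my real configuration:

{ ... }:
{
  # a sketch of the services.paperless module; paths and port are placeholders
  services.paperless = {
    enable = true;
    dataDir = "/srv/paperless";
    consumptionDir = "/srv/paperless/consume"; # scans dropped here get ingested
    address = "0.0.0.0";
    port = 28981;
    passwordFile = "/run/secrets/paperless-admin";
  };
}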
Importantly, the full-text search works pretty well on printed documents. It struggles on hand-written stuff (I wonder if I can tune it against my own handwriting), but for now this is pretty nice.
It also attempts to do some amount of auto-categorization, though with only a couple dozen documents brought in so far it's a bit too stupid to trust, and I spent about two hours after the first batch scan job clearing out the INBOX tag and manually sorting things. It also had a habit of parsing dates as D/M/Y instead of Americanese M/D/Y dates, which I need to figure out how to fix.
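(A likely fix, going by the PAPERLESS_DATE_ORDER setting in the paperless-ngx docs, though I haven't verified it yet, would be something like this in the NixOS module:)

services.paperless.settings.PAPERLESS_DATE_ORDER = "MDY"; # unverified; assumes the module exposes `settings`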
Setting up the printer to do "Scan to FTP" was a bit of a pain; for some reason US models have the functionality disabled. I blame CISA. There is some BS you can do to go into a maintenance menu to change the locale, reconnect it to the wifi, and then configure a Scan to FTP profile in the web UI, but this feature is silently disabled by default.
Anyways, I got through a year's worth of personal docs in a few hours and have a bigger shred pile than I would like, but I can shred them without feeling too bad now. I'll have encrypted backups of these documents on Backblaze B2 forever now, alongside a sqlite DB that I can full-text search. I'll probably upload my "Important Docs" directory into this thing sooner or later, but for now it'll be able to handle my mail and the Zendo's documents. It also has a Progressive Web App manifest, so you can "install" the management app on your phone to search docs on the go.
As with all of my NixOS code, it's documented and exposed on the Paperless-ngx page.
A toolkit for Literate Programming imapfilter configurations | Emacs noweb
Over on imapfilter filters my imap I have cooked up a simple Org Babel page which generates a set of mail filtering rules for the imapfilter utility. It's operated using Lua, but with a lot of rules it gets a bit difficult to manage. I broke my configuration up into a number of org-mode tables that are processed by ten lines of Emacs Lisp, which turns them into Lua code that is written to the imapfilter configuration.
While I don't publish the raw org-mode file, since it has un-published headings with rules I don't want to share, the usage flow is pretty simple, so let me walk you through it:
In the imapfilter configuration I define a handful of Lua helper functions which take a list of mails to act on, a search key, and a destination to move the matching mails to.
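A sketch of what one such helper might look like, assuming imapfilter's contain_subject/move_messages API and a connected account handle named account; the real helpers aren't published here:

-- move messages whose Subject matches `key` to the `dest` mailbox,
-- returning the messages that didn't match so later rules can run
function file_by_subject(mb, key, dest)
   local matched = mb:contain_subject(key)
   matched:move_messages(account[dest])
   return mb - matched
end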
Define some emacs-lisp which takes a table as a variable; tables are passed into the ELisp as a list of lists, each inner list being a single row. This is stored in a named org-mode source block. It takes each row and turns it into a single Lua statement:
(thread-last tbl
             (seq-map (lambda (row)
                        (format "mb = file_by_%s(mb, \"%s\", \"%s\")"
                                (first row) (second row) (third row))))
             (s-join "\n"))
Define a table with at least three columns (an example follows this list):
- the first is the "type" of rule – notice in the format call that the string contains file_by_%s; this first column is used to pick one of the helpers.
- the second column is the value to filter on, be that subject, sender, or destination.
- the third column is the destination to send it to.
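For illustration, a hypothetical table in that shape, with a name that the noweb call below can reference; these rules are made up, not my real ones:

#+name: social-lists
| subject | [emacs-devel]      | Lists/Emacs  |
| from    | noreply@github.com | Lists/GitHub |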
Create "lua" source code blocks with noweb syntax enabled, and with a tangle destination of the
imapfilter
configuration. That calls, via noweb, our code generation elisp, passing in each table, and then tangling that generated lua in to the configuration file.<<call-imapfilter-from-table(social-lists)>>
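Assembled, such a block might look like this; the tangle path here is a placeholder, not my real one:

#+begin_src lua :noweb yes :tangle ~/.imapfilter/config.lua
<<call-imapfilter-from-table(social-lists)>>
#+end_src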
The end result is a set of tables which can be shared or published, edited and reordered easily, and then seamlessly exported as a configuration file.
I do this as well in some other places:
- in my Nix Version Pins page (see This document contains Magic)
- Declarative KDE Configuration with Home Manager generates a bunch of kwriteconfig5 shell scripts which are run by home-manager
- When I used Universal Aggregator I used this similarly, generating a UA configuration in the Arroyo Feed Cache, but I don't use that any more.
Using literate programming for generating configuration is really fun and lets you scale your code and configuration while keeping it legible!
I'm now running my Matrix Synapse instance on The Wobserver | Nix
After last week's embarrassingly-handled WebP 0-day, I realized my Synapse instance was sorely out of date and now had a Pegasus-class vulnerability on it. Unfortunately, the dockerfile I had been using to manage that service on my Wobscale server was out of date and didn't build with more recent versions of Synapse. Rather than using the upstream Debian-based Dockerfile, I was using one prepared by my dear friend iliana, one which she stopped using quite a while ago and I had been maintaining myself. Welp nyaa~.
After briefly considering migrating to spantaleev/matrix-docker-ansible-deploy, and doing some math on exactly how much data a federating synapse node passes in a week or a month, I decided I would move the Synapse install onto my home network, with my Wobscale 1U acting as a reverse proxy to my homelab machine over Tailscale.
And so on Friday afternoon I decided to wreck my sleep schedule and migrate across.
In I'm now running my Matrix Synapse instance on my Wobserver, I've written at length about this migration process, but I will spare the RSS feed the gruesome details. Click through if you're curious about moving a functioning Synapse instance to a NixOS machine with "only" 24 hours of downtime.
Synapse was one of the last services running on my Seattle 1U server; it was originally deployed back in like 2017 and has served me well, but it's a geriatric Fedora Linux install that is now only running my Wallabag server. Once I migrate that to The Wobserver in my living room, I'll be able to turn this host down and ask ili for a small VM that can do the edge networking and be managed by my NixOS ecosystem instead of hand-tuned nginx configurations. That'll be nice.
This simultaneously took more and less work than I expected it to, and it's certainly not a perfect migration, but it is nice to be done with. It took about 22 hours of downtime all said and done, including some time spent sleeping while the thing was semi-functional.
This migration was a huge fnord for months where I would say "I should update my synapse, ugh, I should migrate my synapse to nixos, ugh, I should move to conduit, ugh, I should just sign up for beeper.com and never touch synapse again" every time my disk would fill up and I would have to do some stupid bullshit to clean it up enough to run VACUUM FULL on the synapse DB. It's still 65 fucking GB of old events I never want to see, and I recently learned why: "unfortunately the matrix-synapse delete room API does not remove anything from state_groups_state. This is similar to the way that the matrix-synapse message retention policies also do not remove anything from state_groups_state." This kills me, and this is probably why my next step will be to set up matrix-synapse-diskspace-janitor.
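For reference, a query in this shape against the synapse Postgres database will show which tables are eating the disk; it's standard pg_catalog stuff, nothing synapse-specific:

-- top ten tables by total size, including indexes and TOAST
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
 ORDER BY pg_total_relation_size(relid) DESC
 LIMIT 10;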
Archiving Old Org-Mode Tasks
Since I use org-caldav to sync my Events between my Org environment and my phone, I end up with a lot of old junk building up over the years. I recently archived ~200 old calendar entries with this function, which will archive anything older than 6 months but skip any recurring events:
(defun cce-archive-old-events ()
  "Archive any headings with active timestamp more than 180 days
in the past, but not repeating."
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (let ((six-months-ago (- (time-convert nil 'integer)
                             (* 60 60 24 30 6))))
      (while-let ((point (re-search-forward (org-re-timestamp 'active) nil t))
                  (ts (match-string 0)))
        (save-excursion
          (goto-char (match-beginning 0)) ; timestamp parser needs to be at match-beginning
          (unless (or (> (org-time-string-to-seconds ts) six-months-ago)
                      (plist-get (cadr (org-element-timestamp-parser)) :repeater-type))
            (when (y-or-n-p (format "Archive %s from %s?" (org-get-heading) ts))
              (org-archive-subtree-default))))))))
It was suggested in the Emacs Matrix.org channel that I could use org-ql to do this, and I would like to learn how to use that properly some day, but the ts-active predicate didn't work how I expected, and I'd need to make a predicate for filtering out recurring events anyways, so I just bodged this together.
This code is simple enough on its own.
- Generate an integer representing a time six months ago
- Generate a regular expression representing an active timestamp
- Repeatedly search for that timestamp; re-search-forward in its NOERROR mode is great in combination with while-let to do macro activity like this
  - skip any which repeat
  - skip any "younger" than six months
  - ask if I want to archive the heading, and do so
Making my NixOS system deploys as simple as possible with Morph, a hosts.toml file, and my Morph Command Wrapper | Nix
This weekend I tried setting up deploy-rs, and it and flakes are kind of not very good for what I am doing with my computers.
I spent some time today doing a thing that I have wanted to do for a while: moving my "network topology" into a data file which can be ingested by the tools themselves, rather than declaring it in code.
This starts with defining a hosts.toml file whose topology maps more-or-less to the one that Morph uses; a network is a deployment entity which can have any number of hosts in it:
[endpoints]
description = "my laptops and desktop"
enableRollback = true
config = "../roles/endpoint"
[endpoints.hosts.rose-quine]
# target = "rose-quine"
# stateVersion = "23.05"
# user = "rrix"
There are reasonable defaults in the host configurations so that adding a new host is a single-line operation in the hosts.toml file.
With that file in place, Deploying from my hosts.toml defines a function that ingests the networks and spits out a Nix attrset in the shape that Morph wants to execute:
let
  pkgs = import <nixpkgs> {};
  allNetworks = pkgs.lib.importTOML ./hosts.toml;
  mkNetwork = import ./mkNetwork.nix { inherit pkgs; networks = allNetworks; };
in mkNetwork "endpoints"
and from there I could say morph build $file_defined_above and it would go off and do that. Then another invocation of morph deploy with a grab-bag of arguments would actually roll the system out to the host.
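Concretely, assuming the expression above is saved as endpoints.nix, that pair of invocations looks something like:

# build the network's system closures without deploying
morph build endpoints.nix
# push the closures out and switch the hosts onto them
morph deploy endpoints.nix switch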
Taking things a step further is fun, though. Why not make a simple wrapper that can make this easier? Morph Command Wrapper does that and allows me to just type deploy to run a change out to the host I'm sitting at, or deploy -b to just build it, or deploy --all to run it out everywhere.
Invoking deploy-targets will print out a list of all the hosts that the system knows about, which can then be conveniently fed into completing-read:
"deploy-targets")
(->> (shell-command-to-string "\n")
(s-split append '("--all"))
("Which host do you want to deploy to? ")) (completing-read
And that can be used by the interactive emacs function arroyo-flood to automatically tangle the systems' org-mode role files, dynamically extracting a list of server, laptop, desktop, etc, modules from a sqlite cache along the way, and then deploying those! I'm pretty happy with this.
Hopefully systemd-inhibit-mode will keep me from burning in my monitor and burning out my laptop battery
When I watch a full-screen video in Firefox while using XMonad, it doesn't properly disable DPMS, so my screen blanks every five minutes into the video. Rather than try to figure out why that was, I would invoke systemd-inhibit to inhibit screen blanking. This was fine, but I'd run it in a terminal emulator which I would promptly forget about. Cue my laptop having a dead battery, or my monitor burning in the lock-screen text overnight when I would manually lock the desktop and head to bed.
systemd-inhibit-mode is a dead-simple global minor mode which exists to remind me via the modeline that I have the inhibitor process running:
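The real 27 lines live on the linked page; as a rough sketch of the shape, assuming systemd-inhibit is on PATH (the variable name here is invented, and this is not the actual code):

(defvar cce/systemd-inhibit-process nil
  "The running systemd-inhibit subprocess, if any.")

(define-minor-mode systemd-inhibit-mode
  "Run systemd-inhibit and show a reminder lighter in the modeline."
  :global t
  :lighter " Inhibit"
  (if systemd-inhibit-mode
      ;; hold an idle inhibitor open for as long as the mode is enabled
      (setq cce/systemd-inhibit-process
            (start-process "systemd-inhibit" nil
                           "systemd-inhibit" "--what=idle"
                           "--why=watching a video in Firefox"
                           "sleep" "infinity"))
    (when (process-live-p cce/systemd-inhibit-process)
      (kill-process cce/systemd-inhibit-process))))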
It's 27 lines of code that, if you are interested in it, you can copy into your init.el. 😊 I'm not terribly motivated to stick stuff like this in MELPA. Anyways, it's licensed under the Hey Smell This license, if you are the sort of person who cares about licenses or thinks I do.
Moved my Org Site Engine-to-Fediverse cross-posting from feed2toot to Feediverse
Lately I have been working on integrating my org-mode site engine with my Fediverse presence. Following @garden@notes.whatthefuck.computer for technical work and @lionsrear@notes.whatthefuck.computer for creative/tea/philosophical crap should give a full view of what I am adding to my sites, while I keep my main presence for 1-1 interactions with my fedi-friends.
There are Atom feeds available for headlines on various pages on the sites, and tying those to Fediverse posts should be pretty straightforward, but finding the right tool for the job is always the hard part when I am forging my own way forward.
Yesterday, in the back of a rental car, I added feed metadata to the org-mode document for my 2023 Hawaii Big Island Trip and wondered how I could get that onto my fedi profile. I realized that it required my laptop to run the Morph deployment to ship a new NixOS system: a bit overkill for what I want to do, especially when the data is already in at least one sqlite database!
So I modified Arcology's Router to add an endpoint which returns all the feeds on the site in a JSON document, and then set to work making a script which wrapped feed2toot to orchestrate this, but quickly felt that feed2toot is a bit too over-engineered for what I am asking it to do, especially when I set about adding multi-account support to it; the configuration parser is doing a lot more than I want to deal with. Feediverse is a simple single-file script which I was able to easily modify to my needs, with a bit of code-smell in the form of yaml.load-based configuration.
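For a sense of the shape, that feeds endpoint returns something like the following; these entries and field names are illustrative, not the real schema:

[
  {"feed_url": "https://cce.whatthefuck.computer/updates.xml", "account": "garden"},
  {"feed_url": "https://thelionsrear.com/feed.xml", "account": "lionsrear"}
]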
My fork of feediverse reads that feeds.json document and iterates over every feed looking for new posts. This lets me add new feeds to my cross-poster without deploying The Wobserver from my laptop. There is a slow pipeline that prevents me from using this to Shitpost or live-toot things, but I think that's basically okay. The idea is to use it to slowly ship things out when I make new art, or have an end-of-day log, or publish a site update, without having to think too hard about it. Most of the Arroyo Systems stuff (from adding software to my laptop or server to adding pages to my site) is managed by adding metadata keys to my org-mode documents, and this is now no different, though perhaps a bit too "roundabout" for it to be considered good engineering:
- add ARCOLOGY_FEED page keyword
- add ARCOLOGY_POST_VISIBILITY page keyword
- add PUBDATE and ID heading properties to entries on the page that will be published to the feed
- wait for syncthing to copy files to my server
- wait for arcology inotify-watcher to reindex the site
- wait for Feediverse to run on the quarter-hour and post the new feed entries
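In org-mode terms, the first three steps amount to something like this; the values are invented for illustration:

#+ARCOLOGY_FEED: updates.xml
#+ARCOLOGY_POST_VISIBILITY: public

* A new post to cross-post
:PROPERTIES:
:PUBDATE: <2023-10-01 Sun>
:ID:      0a1b2c3d-0000-4000-8000-hypothetical
:END: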
But my axiom in writing the Arroyo Systems stuff and The Arcology Project as a whole is that Personal Software Can Be Shitty, and this certainly is, but it means I can post new feeds from my Astro Slide so who can say whether it's good or bad.
As much as I am not a fan of Pleroma, it's really nice to just be able to fire HTML right at my server and have it appear halfway decently on the Fediverse with inline links and even images working out just fine. Just have to keep my posts somewhat short. This is probably too long for Fedi already, bye.
First Update: A welcome return | Emacs CCE Catchup
I noticed that my feed in the Planet Emacslife feed aggregator wasn't valid any more and am re-establishing a feed here for capturing my configuration-specific stuff. First-time readers may recognize me from the Emacs or Nix-Emacs Matrix.org rooms, or from building the first prototype of an Emacs Matrix Client which would later be taken and evolved and polished by alphapapa after I became frustrated with hauling bugs and implementing end-to-end encryption.
Right now most of the new functionality in my notebooks is centered on NixOS automation and the deployment of my Wobserver, but I also do increasingly silly hacks with org-roam and my own org-mode metadatabase called arroyo-db. If you haven't seen it before, my Arroyo Systems Management documents are brain-bending automation for dynamically assembling a Concept Operating System from many org-mode docs. I have vague dreams of creating a system where Emacs and NixOS users could bootstrap a minimal Linux operating system by referencing documents on the web and seamlessly pulling them into their system, but this is still a fair way off.
All of this is published straight out of my org-roam knowledgebase using a home-built web site engine called The Arcology Project, which exists to give me a way to publish these pages across multiple domains, to arbitrary web paths, without having to line them all up on a file-system. It's primarily written in Python, but it uses Arroyo to generate a sqlite database which the Python reads, and some custom Lua and templates which Pandoc uses to render the org-mode docs to HTML.
I'll keep this feed up to date with new modules and interesting updates. Here are a few that have fallen between the cracks over the last 6 months or more:
- Dealing with Syncthing Conflicts shows a function called cce/syncthing-deconflict which will open up ediff buffers for any "conflict" files created by Syncthing, which can occur when I edit buffers while syncthing is sleeping on my phone, for example.
- I have a raft of Emacs functions for studying Japanese.
- a simple macro for defining Dynamic Org Captures, org-capture templates which have dynamic file names (for example date/week-stamped filenames) and automatic "top matter" like org-roam provides for. I use this to provide a handful of capture commands into my Journal and other parts of my org-mode document hierarchy.
- CCE Nixos Core defines a simple helper function cce-find-nix-output-at-point which will select and find-file an output for a Nix derivation path (a metadata file showing all the inputs/outputs of a Nix expression) at point.