Where Angels Fear
41 min read · Sep 2, 2024

Stick A Fork In It

Fork it, Marlow!

Okay, so, an update … about a lot less than Life, the Universe, or indeed anything let alone Everything, but it’s what you’re getting, so …

One of the many and varied reasons for my absence … well, there’ve been a few anyway … and, amazingly, they weren’t all SouthpawPoet either … (left-handers … brrrrrrrrrrrrrr) … has been the fact that I’ve been working on a really quite complex IT project — I won’t overwhelm you with the details … suffice it to say that it involves server farms that host other server farms (it’s very meta), quadrillions of simultaneous connections (yep, even more than there are mobile phones, tablets, TVs, cameras, doorbells, washing machines, fridges, etc.) and has taken up quite a lot of my capacity to think about … well, anything really, never mind Medium.

But, it’s been a while … some of you are pining and there’s the inevitable risk of your withering and dying in my absence, like neglected houseplants.

So, here we are.

As said, I won’t overwhelm you with the detail, but it has given rise to something related that I might as well get off my chest, because it’ll make me feel less stressed and … actually, it won’t … I’ll still be as exasperated after writing it as I was before, because the World will still be populated by subhuman lifeforms slightly below tapeworm on the evolutionary scale (and I’m not talking about just you here either) … but you might find it educational, so …

One thing that has occurred to me is that the Future is one in which there are no users. That is, there will be no need for user accounts as we have known them thus far: either you can supply the credentials (a hash of password, biometrics, device, and/or whatever else) necessary to unlock a resource (data and/or service) and are granted the session token required to access it … or you aren’t — there’s absolutely no need for user accounts. It could be argued that’s just rearranging the deckchairs, because there will be a variety of (more or less half-arsed) approaches that include an element of the device used (which is, effectively, just a user by proxy) … and that lists of the hashes that are entitled to said access are tantamount to lists of users … but imagine a world in which employers don’t maintain employee accounts: they just make services/data available from anywhere in the World, at any time, and you only gain access to them if you supply one of the recognised hashes — there’s no need for user lists on machines or network servers in order to authorise access … and any access that is authorised is independent of the location of either the user or the resource. It’s been heading in that direction for some time now anyway … it’s just the logical conclusion, because it means anyone can host data/services on any machine anywhere in the World without having to worry about to whom they are visible.
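To make that concrete, here’s a toy sketch of the logic (everything in it … paths, factors, the lot … is hypothetical, and a real implementation would involve salted key-derivation, signed tokens and suchlike rather than a bare hash list):

```bash
#!/usr/bin/env bash
# Toy sketch of account-less access control: no user list anywhere, just a
# store of recognised credential hashes. All names/paths are hypothetical.

RECOGNISED="/srv/auth/recognised.sha256"   # hypothetical store of entitled hashes

# Whatever factors you choose to combine (password, device ID, biometric digest ...):
supplied_hash=$(printf '%s' "$1:$2" | sha256sum | cut -d' ' -f1)

if grep -qxF "$supplied_hash" "$RECOGNISED"; then
    head -c 32 /dev/urandom | base64   # grant a session token ... no account row in sight
else
    echo 'access denied' >&2
    exit 1
fi
```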

But that’s by the by … what I’m really here to vent my spleen about is Garuda Linux and, by extension, other derivatives of Arch Linux specifically and possibly (up to a point at least) other distros as well.

I have a hard time recommending Linux distros to people because … oh, lots of reasons, but it boils down to my having been both an educationalist and an IT specialist for a very long time now — which results in my mentally running in ever increasing spheres (not just circles) of “But you also need to know this as well, if you’re to understand why the other thing” whenever I contemplate explaining my personal choices and why I think they should be yours too, because most people don’t know enough to make a sufficiently informed choice and end up making a poor one as a consequence … and then the horror of that prospect overwhelms me and I end up wanting to run away screaming before I’ve even started.

A long time ago in a galaxy far, far away …

Well … okay … not that far away.

And a lot closer in Time too.

But still quite a long time ago in relation to the average human lifespan.

Less impressively so, if you’re a tortoise, perhaps; for tortoises, Time is a different matter — kinda like for dogs, but to do with longevity rather than lingering smells that mean someone who was here a week ago is, in a manner of speaking, still here. But for humans (and even tortoises, despite their long lives), quite some time ago nevertheless, Mathematics was an imprecise matter. There were numbers and … that was it really. I mean, sure, there were some fancy Greek theories about stuff, but they still relied upon things being there to count and weigh and measure and whathaveyou. It took some mathematicians in the East (and some nomadic desert dwellers to spread the word) to conceive of the absence of things.

Okay, look, if you’re a shepherd and, when you count your angels, there are fewer of them on the head of the pin than there were this morning, you know there’s an absence of something … and that absence is pretty significant, if you’re hoping to exchange some sweaters you made from their wool for some otters’ noses at the forum later … but it took mathematicians in India to formalise the concept of ‘0’ (‘zero’) … and some desert-dwelling people (whose lives consisting in no small part of staring at ‘nothing’ for days, if not weeks, on end possibly wasn’t an entirely insignificant influence) to carry it westwards to the rest of us.

And it’s a good job they did. Or you wouldn’t be reading this. Because you can’t tell a computer what to do with a string of ‘1’s and leave it to figure out where there should be gaps between them; it’ll get confused and things will go horribly wrong faster than you can say ‘Blue Screen of Death’ … or ‘BSoD’ (when car registration plates start making sense, it’s time to change career, before it’s too late and you start seeing meaning everywhere … like a paranoid schizophrenic). The categorical absence of information is as vitally informative as its presence; a bit like how you can’t pile some planks of wood on top of each other and tell people to take a left turn at the library — there need to be suitably sized gaps between them to fit the books into, otherwise people will get lost on the way to the pub (and you’ll spend the evening alone, fending off unwelcome advances from drunks).

But I digress … … … where was I?

Ah, yes … so … anyway … Alan Turing, when he wasn’t breaking the Law (or German ciphers), made use of the concept in his work on computability (binary encoding itself being rather older … Leibniz got there centuries earlier) and, from there, the Universal Turing Machine (he may have been many things, but unduly modest seemingly wasn’t one of them) — which led to the disruption of traffic in Turin by Benny Hill some years later. But, eventually, John Von Neumann had the bright idea to make Life easier, by making the thing infinite in Time rather than Space (reusing a finite memory over and over, instead of an infinite tape), and invented the modern computer as we know it, using … wait for it … Von Neumann Architecture — tech bros, eh? Still, at least Ada Lovelace was recognised for her pioneering work with Charles Babbage, so, it’s not an entirely wall to wall sausage fest in that thar silly cone valley … there are some bronettes too (even if they’re blonde).

So … 1 and 0: bits, bytes, kilobytes, megabytes, gigabytes, terabytes … that’s pretty much all you need to know to be a computer programmer; you don’t need to know anything about hardware at all: I started programming in 1981, on a ZX81, graduated by way of a BBC Micro, Atari ST, VAX/VMS on a minicomputer, Unix on a mainframe, DOS and various flavours of Windows, and it wasn’t until 1997 that I had to get to grips with hardware in any concrete, rather than abstracted, sense — and I’d already got a degree in Computer Science, specialising in AI too (so, I was pretty knowledgeable about various aspects of computing in depth and still didn’t need to know anything about hardware!).

So, the idea that those involved in the development of a computer operating system (OS) are savants in anything but the French sense of idiot savant is not one you want to believe in too fervently — developers (programmers) are people … and people are often, if not even for the most part, simpleminded (added to which, the field of IT does tend to attract a lot of those with idiot savant tendencies to start with). There hasn’t been a genuinely good OS designed yet — there are just areas in which one or other isn’t as weak as the alternatives. They’re pretty much all still 20th Century solutions to (at the latest) 19th Century problems.

There is a surprising number of OSes. But, for the purposes of this discussion, not only are The Big 3™ of Windows, MacOS/Mac OS/however they’re s(p)elling it this week (possibly in Comic Sans), and Linux the only ones of consequence, we’re furthermore going to ignore the former two and focus solely on Linux — the others are only relevant for the purpose of observing that every OS has its strengths/weaknesses and there is no single one that is better than all others; it’s just a matter of deciding which strengths appeal sufficiently to make living with the weaknesses/drawbacks more acceptable than with those of others, or (conversely) which weaknesses/drawbacks make whether another OS has any sufficiently enticing aspects irrelevant (all that matters being that it doesn’t suffer from the ones you find offputting). In fact, despite it being the most recent of the three to appear on the scene, Linux’s roots in Unix mean that, of all the relevant options, it is in many ways the least advanced, because it (largely of necessity) slavishly adheres to conventions laid down in the 1960s/70s — you won’t find a Program Files directory, for instance; executables are variously located in /bin, /opt, /sbin, /usr/bin, /usr/local/bin, /usr/local/sbin, /usr/sbin, somewhere under your /home/$username directory (or someone else’s) … and the commands are weird too (‘ls’ instead of ‘dir’ to list the contents of a directory, ‘cat’ to list the contents of a file instead of ‘type’, and so on). Moreover, despite there being several options when it comes to a Graphical User Interface (GUI), Linux still makes far more use of the Command Line Interface (CLI) than any of the others. And, just to complicate matters even further, it is hugely balkanised: imagine every version of Windows and Mac OS ever released being not simply a version of that OS, by one supplier, that is no longer used, but an independently going concern, designed and maintained by separate (entirely independent) parties, and, when deciding which one to use, you have a choice of every one of them to make before proceeding to do so — and you can use any of the GUI interfaces ever available (and more besides) with any of them (so, you could run Windows 11 with a Mac OS 7.5 GUI, for instance, whilst your friend runs the latest iteration of Mac OS 8.0 with the GUI from the Commodore Amiga).
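If you’d like to see those conventions for yourself, a couple of commands at any Linux terminal will do it (the exact paths vary a little from distro to distro):

```bash
# Where executables live ... note the absence of anything like 'Program Files':
echo "$PATH"          # typically /usr/local/sbin:/usr/local/bin:/usr/bin ...
type -a ls            # which binary a given command actually resolves to

# And those 'weird' commands, next to their DOS/Windows counterparts:
ls -l /etc            # 'dir'  ... list the contents of a directory
cat /etc/os-release   # 'type' ... print the contents of a file
```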

Linux has come a long way from its roots in the early ’90s, however, and it’s entirely possible to simply install it and run it without ever needing to know how it works underneath any more than you do to use Windows or MacOS. It is, by and large, no less stable than the alternatives and the only real issue when migrating to it is needing to decide which distribution (distro) to use, which Desktop Environment (DE) and what all the different software applications/programs are (to look at just two examples, Microsoft Office isn’t an option, for instance, so you have to get to know LibreOffice/OpenOffice/some other office suite of your choice, and there’s no Notepad … instead, you have to select from Kate, Geany, Mousepad and a myriad of other offerings). And, mostly, once you’ve made your choice of distro/DE combo, the big decisions have already been made for you in that regard — the only reason to investigate alternatives being that there’s something you prefer to use (and that’s the same for any of the OSes).

So … choosing a distro.

There are many-many-many-lots of distros, but they largely fall into a fairly small number of families of derivatives from a core group.

The major players are (in no order of significance):

Red Hat

Debian

Arch

Slackware

Gentoo

… and this is because they are the ur-progenitors of their respective package formats (there is actually more to the family trees than simply package formats, but, for our purposes here … a historical overview … it’s a more or less sufficiently ‘useful lie’).

Red Hat is RPM based.

Debian is Deb based.

Arch uses Pacman to install binary tarballs.

Slackware ships only rudimentary package tools (installpkg and friends), with no dependency resolution — you’re largely left to manage tarballs on your own.

Gentoo is a source based distro, which means you don’t install binaries (executables) at all, but compile them yourself first before installing them (see the sketch below).
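To put that in concrete terms, here’s what installing the same (entirely hypothetical) package, foo, looks like in each family … a rough sketch, since the exact tooling has varied over the years:

```bash
sudo dnf install foo       # Red Hat family (RPM ... formerly yum, or raw 'rpm -ivh')
sudo apt install foo       # Debian family (Deb)
sudo pacman -S foo         # Arch (a binary tarball, via Pacman)
sudo installpkg foo.tgz    # Slackware (pkgtools ... no dependency resolution)
sudo emerge foo            # Gentoo (Portage compiles it from source first)
```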

Over the years, a huge number of derivatives (or forks) of those core distros has arisen, adhering to greater or lesser degree to their great-great-great forebears in various ways, but sharing the same package format, so they are all, to greater or lesser extent, descended from one of the main distros, but it has got kind of complex in the interim: Fedora is descended from Red Hat Linux, but Red Hat (the company) has long since ceased delivering that and, instead, now tests new ideas in Fedora and adopts refined versions of them in its Red Hat Enterprise Linux (RHEL) offering to its corporate clients (the child has become the parent of its own parent) … Ubuntu was (is) derived from Debian but has long since evolved so far from it that it has itself long been the progenitor of many other distros — Mint, for example, is a derivative of Ubuntu rather than Debian directly, even though, technically, it is, by definition, a Debian based distro, because Debian is its grandparent.

It used to be the case that which distro you chose depended upon which package format you favoured, because package management could be fraught (especially in the case of RPM), depending on the package manager (which, naturally, depended on the package format).

Ah, yes … package managers.

Okay, so, you know how Apple and Google have their curated App and Play stores, but Windows (despite the recent advent of the Windows Store) is an anarchic free-for-all of installing whatever you like from wherever you find it, and you’re on your own re the safety of doing that?

With Linux, these days the emphasis has kind of reversed (in most cases) in that the vast majority of distros supply most of what you’re likely to need in what are called repositories (repos) … which are (basically), the same idea as the App/Play/Windows store: a curated collection of software that is (more or less) guaranteed to work with your distro. But, back in Ye Oldene Dayes, way back in the mists of Time, when Linux was an obscure project for geek hobbyists and underfunded university researchers, it was a bit of a hybrid of the two concepts, but tended more in the Windows direction. There wasn’t much software available and, if you learned of a new offering, the chances were that you’d need to go to the developer(s), download their source code, compile and install it yourself.

Some distros, such as Slackware, would provide ‘tarballs’ (‘tar’ archives, usually compressed) of precompiled binaries that you could just install instead. But you were still largely left to your own devices … needing to know what else had to be present on your system, if you wanted the application to work afterwards.

Red Hat designed a ‘universal’ package format known as RPM, which could be installed with the aid of a ‘package manager’ — the developers included an installation script in the package and the package manager did the hard work of placing all the files in their correct locations for you. But there was still the problem of Dependency Hell, whereby (just as with Slackware’s tarballs) the software relied (depended) upon the presence of other files for which the developers weren’t responsible and, if they weren’t present (or they were, but any of them were the wrong version), things wouldn’t work and you’d have to locate them yourself (if you could) and install them first, cross your fingers and try installing the original program (application) again. Of course, many of those dependencies had dependencies of their own, so, you’d have to source those, install them, try to install the original dependencies again and then, if successful, the original app again — only to find there was yet another missing dependency. And dependencies of dependencies could have dependencies … which could have their own dependencies, and so on, and it could all rapidly descend into a fraught process that left you tearing your hair out.
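Sketched out (package names invented; the pain, historically, very real), the difference between then and now amounts to this:

```bash
# Ye Oldene way: install by hand and chase whatever it complains is missing ...
rpm -ivh foo.rpm       # fails ... wants libbar
rpm -ivh libbar.rpm    # fails ... wants libbaz
rpm -ivh libbaz.rpm    # and so on, down the rabbit hole

# The modern way: a repo-aware manager walks the dependency graph for you
# and installs the whole lot in one transaction:
sudo dnf install foo
```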

Repos solved the problem up to a point by making everything you might need available from one place, but the RPM package format wasn’t terribly sophisticated and, consequently, the package managers weren’t either: yes, you could probably find everything you needed, but you’d still only find out what you needed by trial and error.

The Deb package format (created for use with Debian) was a vast improvement over this and, as a result, Debian derivatives (like Ubuntu) became very popular very quickly, because the whole process was much less painful — the Apt package manager being able to do much more of the heavy lifting than any RPM package manager of the day.

And, so, a plethora of Debian derivatives arose.

The most successful Debian derivative was (and still is) Ubuntu.

And Ubuntu, therefore, became a platform from which others built their own derivatives … so, now you have a whole family of distros that use the Deb package format and the Apt package manager (so, they are technically Debian derivatives), but follow not Debian but Ubuntu (making them Ubuntu derivatives). And Ubuntu has deviated so far from Debian over the years that there really is little point in considering any Ubuntu derivative a Debian derivative, even though they all use Deb packages and the Apt package manager.

So, it’s a trifle complex these days and to the above list of five you need to add at least Ubuntu and SUSE (an RPM based distro) as well, because it’s no longer really relevant what package format or manager is used but how popular a distro is: the more popular, the more packages are available in its repos and the more likely those that aren’t available in the repos are to be made available packaged for that distro by the developer(s) when you find yourself looking for them elsewhere … which latter is important, because it means the installation script takes account of the quirks specific to the distro’s family (not all Linux distros place the same files in the same places!).

You also need to consider the release schedule.

Linux distros tend to update every six months (or so): there’s a new version available and, before long, the old one is no longer officially supported. This means that things can stop working, because the repo maintainers don’t simply deliver the software the developers release but tailor it to the quirks of the distro — and, if those change sufficiently with a new release, you need to update your apps (just like when Microsoft or Apple release a new version of their OS). So, every six months, you need to update your OS and software, lest bugs and security holes go unattended.
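By way of illustration only (the exact incantations … and certainly the version number … depend on your distro and when you’re reading this; Fedora’s route additionally assumes the system-upgrade plugin is present):

```bash
sudo apt update && sudo apt upgrade   # routine updates within a release (Debian/Ubuntu)
sudo do-release-upgrade               # Ubuntu: move to the next release proper

sudo dnf upgrade --refresh                          # routine updates (Fedora)
sudo dnf system-upgrade download --releasever=40    # Fedora: stage the next release
sudo dnf system-upgrade reboot                      # ... then reboot into the upgrade
```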

Some people find that upgrade cycle tedious and/or stressful — it takes time, and things can go wrong during an update: you can end up with a system that is anything from disconcertingly unstable, through noticeably borked, to completely bricked.

Those people prefer to use either a Long Term Support (LTS) release, with a lifespan of anywhere from a couple of years to five (or even more), or a rolling release (RR) distro.

LTS releases are just what you’d think: the distro maintainers release various patches for things to keep them secure, stable and bug free for the duration. The drawback is that the software becomes outdated in the interim. This isn’t much of a problem when first released, as the LTS version tracks the standard release. But, once the standard release is replaced with a new version, that and the LTS diverge from each other and updates to the LTS largely do not include new software releases, just patches to the one originally delivered with it. The advantage is that it needn’t be updated as frequently and is, therefore, less prone to installation problems and/or the instability that can arise from new software versions. The disadvantage is that, if there is a fundamental flaw in the architecture of something, no amount of patching is going to resolve that — a fundamentally insecure application will never be as secure as a redesigned version that mitigates the problem in the first place without the need for (potentially buggy and equally exploitable) patches.

Rolling release distros dispense with the idea of versions altogether: you install them once and, from then on, simply update them and your software as and when new versions become available. This has given them a reputation for being unstable … because they’re constantly changing and what worked this morning might not do so this afternoon, if something you update relies upon versions of dependencies that are incompatible with other things that haven’t received an update. Arch Linux (my distro of choice), for instance, is notorious for this.

I have to admit that my own use of Arch, whilst outlandish in many ways (I configure it in ways many others find incomprehensible), is nevertheless fundamentally conservative: I eschew the use of many things others assume everyone wants (Bluetooth, printing and suchlike) … which means that, for all the low level changes I apply that make it differ considerably from the norm in some areas, the scope of those changes is very narrow and there aren’t many system operation interactions that could result in instability as a consequence. In the ten years I have used it as my daily platform, I have updated it at every frequency from monthly, to fortnightly, to weekly, to daily … even hourly (for months at a stretch). And, in all that time, I have had to downgrade precisely two applications after an update rendered them unusable — once, some eight years ago, when a new version of a set of dependencies core to an enormous amount of the Linux ecosystem was released and it was impossible to expect every application to be ready to make immediate use of it … and only two days later I successfully upgraded them again.

So, YMMV (mine certainly seems to), but you might want to think carefully before taking the plunge with an RR release. I’m a geek and highly knowledgeable and experienced … I couldn’t bear to live with the (utterly bizarre) quirks imposed on other distros by maintainers who aren’t as clever as they believe: not only does Arch facilitate my having total control of my system (it’s not simply the case that I can configure it how I like, but that, if I don’t expressly install something, including core components such as networking, it isn’t on my system in the first place) but it hews to upstream development in a way that few other distros do (the maintainers of Arch do almost nothing more than compile and package the software the developers release, and don’t tweak it to ‘improve’ it and twist it to make it work with all the other quirks the other distros see fit to install along with the kitchen sink). But there are other RR distros available (from the likes of SUSE and, I believe, even Fedora) that are more tailored by the maintainers (those quirks I was talking about) and, if you want to make use of things I don’t (like Bluetooth, printing, etc.), you might find them equally as stable with those features as I find Arch to be without them — I can’t tell you that for sure, however, because I don’t use them myself (YMMV, suck ’em and see).
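For the curious, the Arch ‘dance’ in question is a one-liner … and the rescue I described amounts to pointing pacman back at the copy it keeps in its own cache (the filename here is illustrative):

```bash
sudo pacman -Syu   # sync the repos and upgrade everything ... no versions, ever

# The once-in-a-decade rescue: roll a package back from pacman's local cache
# until the rest of the ecosystem catches up, then simply -Syu again later:
sudo pacman -U /var/cache/pacman/pkg/somepkg-1.2.3-1-x86_64.pkg.tar.zst
```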

Another thing to consider is the trend in the direction of … let’s call it sandboxing (which is really what it’s all about): everything from hypervisors through virtual machines, containers, immutable images, you name it — they’re all concerned with isolating things from each other in order to limit the damage that can be done by rogue software (whether by intent or simply oversight) or user error.

For all practical purposes, in your own case, this is likely to boil down to whether you want a traditional ‘bare metal’ installation or like (or are at least as happy as not with) the idea of application images, such as AppImage, Flatpak or Snap.

Advantages of the former are that you have a traditional operating system installation, with all that entails. Disadvantages stem from the fact that you have a traditional operating system installation, with all that entails.

Snap is a format proprietary to Canonical (the company that develops Ubuntu) but widely used by many others (whether Ubuntu derivatives or not).

Flatpak is an alternative to Snap that only concerns itself with apps that can be run in the context of a DE (or other GUI, but we’ll come to that later) — unlike Snap, it isn’t aimed at CLI based apps (not even in a virtual terminal on a DE).

AppImage is an alternative that doesn’t impose sandboxing — the relevance of which we are about to examine.

What do they do?

They deliver an app and all its dependencies in one package and in isolation from everything else on your system.

Why would you concern yourself with this?

It negates dependency hell — everything you need is delivered along with the app itself.

It negates conflict — you need never worry about what versions of the dependencies are available on your system, nor that updating them to accommodate new software will impact other software that cannot (yet) make use of the new version(s), or that something you want to use will install older versions that other apps no longer use, resulting in their ‘breaking’.

If something doesn’t work, there’s no wider impact than the space it took on your storage device: uninstall it and it’s all gone like none of it was ever there — Cf. portableapps.com for the Windows equivalent of this concept.

It can be run in isolation from the rest of the system — so, there’s (theoretically) no chance of a rogue process (whether a bug or malware) affecting anything else.

What are the considerations?

Each app duplicates all the dependencies, taking up storage space — the very thing the ‘common dependencies’ solution was designed to obviate.

Each app you use loads those duplicates into RAM — you can run fewer in the same amount of RAM than you can if they make use of shared files.

Depending upon which solution you opt for, they can create a logical structure that can be hard to parse, if you find yourself needing to investigate where things are located in order to troubleshoot problems.

Depending upon which solution you choose, you can find that you cannot share data between applications (from being unable to access the same files in different applications, to being unable to copy from one app and paste into another).

More specifically …

If you are philosophically opposed to the idea of using proprietary solutions then you might consider Snap problematic.

Linux makes use of a great many CLI based solutions, so, even if you use a GUI DE, depending upon precisely what it is you want to do, you might find Flatpak restrictive, because it doesn’t make them available … or, conversely, insufficiently secure, because it allows interaction between processes that are supposed to be sandboxed and others that aren’t (meaning you might consider that Flatpak doesn’t really sandbox things).

AppImage doesn’t sandbox things, it simply containerises them — you might regard this as beneficial or not.

You can, depending upon the distro you use, mix and match all three solutions, however. So, you might find a perfect combination that makes use of Snap when you want sandboxed apps that can run as CLI tools, Flatpak for apps that don’t need any CLI based interaction, and AppImages for when you just want to isolate the app and its dependencies without sandboxing it — you just need to bear in mind the common provisos re storage, RAM and potential issues arising from sandboxing.
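By way of (hypothetical … the package names are invented) illustration, mixing all three on one system is no more exotic than:

```bash
sudo snap install some-cli-tool           # Snap: sandboxed, CLI apps included
flatpak install flathub org.example.App   # Flatpak: sandboxed DE/GUI apps from a repo
chmod +x Some-App.AppImage                # AppImage: nothing to install at all ...
./Some-App.AppImage                       # just mark it executable and run it
```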

Also worth bearing in mind is that Canonical intend Ubuntu to be entirely Snap based and are working towards that: the entire OS, not just apps, will be delivered as snaps — potentially, therefore, making it even harder to troubleshoot issues (because they are adapting an OS conceived in an era before sandboxing, not building an OS designed to be so from the ground up). Likewise, Red Hat are intending to transition to an ‘immutable’ platform … meaning that Fedora will eventually become the testbed for a system that is, similarly, delivered entirely sandboxed and updates and apps released in containers (likely Flatpak). So, if that idea doesn’t appeal, you might need to rule Ubuntu and Fedora out … and possibly even some of their derivatives: it remains to be seen how long snaps can remain optional in something like Mint, when they are so fundamental to its parent (at least without being forked into an entirely independent distro).

I mentioned, in passing, that you might use a DE or other GUI.

All Linux systems require a Window Manager (WM), if you want to use a GUI.

For some that’s where it ends — they like the simplicity that comes with a minimalist approach and/or are happy to roll their sleeves up and tailor their WM’s behaviour to their exact needs rather than being content with a ‘one size fits all’ WM+DE combo. The only people more minimalist in their approach are those who eschew any GUI, because real wo/men are hardcore and do everything bareback at the terminal CLI — but those people are either working with servers in the corporate/government/military sector and (quite rightly) “don’t need no stinking GUI”, weirdos (if not worse) who enjoy pain, or else twelve-year-olds of all ages, with no adult peers, who feel they have something to prove for some reason (but this is a family friendly discussion, so we’d best leave it there, I think).

Others prefer a more fully featured experience ‘out of the box’ and use a DE on top of their WM (the ‘big’ DEs come with their own WM, but these can often be replaced with another of your choice, if preferred).

Some like eye-candy and use a compositor (such as Compiz) on top of either or both (and, in fact, a few DEs come with a compositor of their own too).

WMs/DEs fall into two broad categories: tiling and stacking.

Tiling is where each window is laid side by side (left/right/above/below) any others and the limit on how many may be present on the desktop depends upon the sophistication of the WM/DE: some make all windows the same size and, when you run out of free space on the desktop, you have to close some, if you want to open others … some will automatically shrink them all to fit as more are added … yet others do fancy things with size-fixing and pinning and all kinds of other things that I can’t go into here, not least because tiling doesn’t suit my workflow, so, I don’t use tiling managers myself and, consequently, don’t know what all the options are beyond that.

Stacking is what you’re used to from Windows and Mac OS: free floating windows that can be placed adjacent to, over or beneath others.

The main stacking WM/DEs are

Gnome

KDE (Plasma)

XFCE

After those, there are

Mate

Cinnamon

Thereafter there are

LXQt

Pantheon

Budgie

Deepin

COSMIC

… and a number of others — not to mention standalone WMs such as Openbox (stacking) and tiling ones like i3 … or Sway, if your GUI depends upon Wayland rather than Xorg.

Beyond that, there are various paradigms re the interface … some (like KDE, XFCE, Mate and Cinnamon) look like Windows (with a taskbar and popup menu), others (like Pantheon) look more like the Mac interface (with a top panel and a dock at the bottom), yet others (such as COSMIC) are a sort of hybrid and then there are those, like Gnome (or Unity), that are somewhat like the Mac but still quite different in some ways.

They are all variously configurable: KDE famously so, although I find that, counterintuitive though some will find this, it is much less so than XFCE (my DE of choice for that very reason) … Gnome notoriously restrictive … and others (like COSMIC) utterly restrictive (there’s nothing that can be configured in any way and you have to like it or lump it; you aren’t changing anything about it). And they can, with a few exceptions (like those developed for Pop!_OS), be used with any distro. There aren’t as many as there are distros, but there have been (and still are) quite a few to choose from, so it’s worth exploring them all to find the one that works best for you; they can often, if not even usually, be installed alongside each other, so, with the exception of that handful of distros that don’t allow this, the only real consideration is how much storage space having multiple WM/DE combinations will require and the fact that applications designed for one will not look exactly the same as those designed for another.

This last may or may not bother you (it doesn’t me; I’m interested in functionality, not eye-candy) but, if it does, there are some steps that can, under certain circumstances, be taken to mitigate any discrepancies: I use the XFCE DE myself (which relies upon the GTK framework), but use some QT (which underpins KDE) based apps too and use something called Kvantum to re-theme them to more closely match my GTK apps (I don’t care about the ‘window decorations’ but it’s surprising how jarring it can be, when working with a dark theme, to have an app suddenly pop up all in white, because it can’t use the libraries upon which the dark theme depends).
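In case you’re wondering what that Kvantum trick involves in practice, it’s roughly this (the package name is as per Arch; other distros may call it something else):

```bash
sudo pacman -S kvantum   # the Kvantum engine and its manager
kvantummanager           # pick/tweak a theme that matches your GTK one

# ... then tell Qt applications to render via Kvantum, e.g. in ~/.profile:
export QT_STYLE_OVERRIDE=kvantum
```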

As mentioned, my preference is Arch Linux with the XFCE desktop environment. How do you know which one/s is/are right for you though?

Try them out.

How do you do that?

Dual (or even multi) boot them.

Run them in a virtual machine on top of another OS.

Run them off an external device.

Try various ‘live’ distros.

Back in ‘99/’00, I was the multiboot king — whilst others crowed about how they’d got Linux to dualboot with Windows, I was quietly running DOS + Win 3.x, Win 95, Win 98, Win NT, Linux, QNX, OS/2 and BeOS all off a single drive, booting into whichever suited my needs (or simply whim) as occasion dictated. And, in my consequently not altogether inconsiderable experience, dual-/multi-booting is at best suboptimal and at worst a disaster waiting to happen.

Quite apart from the various risks that can arise from doing so (and there are a few), Windows (especially latterly, and especially since the advent of Secure Boot) notoriously wrecks Linux installations; which will make you gnash your teeth, tear out your hair and then sob inconsolably in the corner, because (sadly) however warranted a killing spree of the guilty parties might be, it would have too many negative impacts upon your life to make it a truly viable remedy to your woes (so, impotent, abject misery needs must suffice instead). And that’s assuming that Secure Boot will even let you install and boot your distro of choice to begin with. The Mac is somewhat more forgiving of dualboot shenanigans but, really, it’s still not worth it, when you consider that you can simply install Linux to an external device and boot from it completely independently of whatever else might be on your machine — neither need ever know the other exists … thereby eliminating many, if not all, of the attendant risks and all of the headache of getting the two to cohabit in the first place.

These days, not only are USB devices (everything from a drive to a key/stick) bootable but many Linux distros are expressly designed to be run from such a device. I have an external dock connected to my PC, from which I run my Linux platform, leaving the internal Windows installation untouched and, furthermore, Windows itself entirely ignorant of the fact that I have ever even had a Linux system, let alone boot my PC with it on a daily basis — it’s better that way, because there is a clear separation of concerns: Windows is Windows, MacOS is MacOS, Linux is Linux and never the twain (or whatever three of them are) shall meet. I’ve even run Linux off a USB key (did so for two years in fact) — it was slower than an internal drive, yes, but perfectly functional … and that despite its being a distro that wasn’t even intended to be run from a USB key and, so, didn’t apply clever tricks to itself to mitigate the limitations inherent in doing so.

Additionally, you don’t want to be messing around installing, deleting, installing, deleting, installing, deleting on your main system, whilst you try out different distros: any other headaches aside, there’ll come a point where the process fails to complete cleanly and, if you don’t bork (or even completely brick) your principal OS along with it, your new Linux installation won’t work properly, because it finds remnants of the last and gets confused as a result … meaning you’ve no idea whether the new one might be suitable or not, because it doesn’t work and you have to start from scratch again — which is just tedious.

It’s far easier to take a USB key, install Ventoy onto it (you can do this from either Windows or Linux and it will then work on any PC or, conceivably, Intel Mac) and try out live Linux distros by simply copying the iso images to the key, then booting from the key and trying out whichever ones you like. You can keep adding iso images until you run out of space and, when you finally do, delete any distros that didn’t tickle your fancy and replace them with something new. You can do this until you finally find the one (distro and DE combo) that makes your knees tremble simply thinking about it … or just keep playing the field, distrohopping without commitment. It’s even easier than running them in a virtual machine — which latter places demands on your system over and above anything else by needing to virtualise an entire hardware substrate and then run an OS on top of it and any apps on top of that, all on top of the OS you’re already running (moreover, putting all your eggs in the one basket of the same bare metal filesystem used to store the OS, the virtual machine management application, the virtual machine itself and any data you create with it).
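The whole process, sketched (substitute your own device for /dev/sdX … and triple-check it, because this wipes the key; the mount point will vary too):

```bash
# From the extracted Ventoy release archive, on Linux:
sudo sh Ventoy2Disk.sh -i /dev/sdX   # (on Windows, run Ventoy2Disk.exe instead)

# Thereafter, it really is just copying iso images about:
cp linuxmint-*.iso /media/you/Ventoy/
cp Fedora-Workstation-*.iso /media/you/Ventoy/
rm /media/you/Ventoy/distro-that-did-not-tickle-your-fancy.iso
# Boot from the key and pick whichever one you fancy from Ventoy's menu.
```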

You could also investigate distros designed for running under adverse/hostile conditions, such as TAILS or Kodachi.

Which distros should you try?

I’d say at least Fedora, Ubuntu, Mint, but go to DistroWatch, read around and then investigate any others that look interesting (you might find you like the sound of elementary OS, for instance).

Note that the rankings on DistroWatch do not indicate the most popular and, therefore, widely used distros (Manjaro is not, and never has been, more widely used than Ubuntu, for instance) but those that are generating the most buzz each week — a new distro will generate a lot of interest and, consequently, a higher ranking, even though only its three developers actually use it … and, likewise, even if not many people use a distro, a new update/release will generate a lot of activity amongst those who do, driving it up the rankings.

Which DEs?

Fedora defaults to Gnome … as does, I believe, Ubuntu (at least it did last time I read anything about it).

Mint defaults to Cinnamon, although Mate and XFCE editions are also still on offer (take your pick).

But they will all allow you to download an iso image of a live version with KDE, XFCE or some other WM/DE … so, you can try those too — some DEs are strongly (if not completely) tied to specific distros, so, you’ll need to research what your options are in that regard: you might like the DE, but not the distro, or you might even be able to use the DE with another distro (both Pantheon and COSMIC can be used with Arch, for instance), but I’m not au fait with anything other than Arch really, or Gnome/KDE/XFCE/Mate and a handful of others (such as Fluxbox), so, I don’t know what’s available to other distros and can’t advise on all the possibilities (I stopped distrohopping and DEhopping long ago now).

Then take a look at gnome-look.org, kde-look.org, xfce-look.org and DeviantArt to get an idea of what else people have done with them (bear in mind, however, that Gnome becomes ever less configurable with every new release, so YMMV in regard of anything you see on gnome-look.org).

So, there you have it: pick some distros, pick some WMs and/or DEs (and/or a compositor), sling them on a Ventoy key, give them a whirl … and Robert’s your auntie’s live-in lover.

See? And that’s just a potted “These are the most significant things you need to be aware of, if it’s to make any sense”, not a full breakdown of everything.

And, unless you opt for Arch, Gentoo or Slackware, the chances of your learning much (if anything at all) about your OS are pretty slim: one of the things that leaves me shaking my head is the hubristic comments, on various tech orientated fora, from users of other distros (most notably Mint), who clearly think they know Linux … well, at all actually, but more significantly precisely because they use their distro of choice; which is how I know they don’t — because, if they did, they wouldn’t. Hell, I use Arch and I just laugh whenever I see someone suggest they know Linux because they use Arch; they don’t … they know Arch — ask them to fix your Ubuntu based problem and it’ll all unravel surprisingly rapidly, because they’re unfamiliar with its quirks (the tweaks made to its structure and functionality by the maintainers). Never listen to anyone who says they know Linux … not unless they run LFS (if not BLFS) as their daily driver — and, if they do, they’re utterly insane (anyone who knows anything about Linux knows that no single individual could keep on top of all the CVEs and keep it secure and that, therefore, anyone who knows anything about Linux wouldn’t even think of doing that) … and you definitely shouldn’t listen to them (not unless they’re a sysadmin for some commercial/governmental/military body that has good reason to run a custom platform and the resources to do so).

But … would I recommend the average user install Arch/Gentoo/Slackware?

HAAAAAAAAAAAAA…hahahahahahahahahahahaha! No.

Why not?

Because … oh, dear Lord.

Look … I use Arch as a compromise between total control and being able to just get on with my life.

Why not Slackware?

Sure, yeah, I’d have even greater knowledge of my system, but … at the cost of the extra effort for very little return: there’s all too frequently little-to-nothing I could do with that knowledge, because, these days, an awful lot of decisions are made for you as a result of the path trodden over the years — there are now, for instance, applications that have systemd dependencies for absolutely no good reason (and it’s not that I’m leaning into the ‘systemd bad’ argument, it’s that they don’t even make use of any of its functionality in the first place). Also, who’s gonna take over after Volkerding? And what direction will they take it in? Will I agree with the decisions made? There are just too many unknowns looming on the horizon.

Why not Gentoo?

Setting and forgetting a few make/compile flags isn’t compiling your own OS/apps … it’s setting and forgetting a few make/compile flags.

I don’t have the motivation to purchase two PCs, two laptops, etc. that are identical in every way … right down to the ICs (silicon OEM, chipsets, revision, the works) … so that I can get on with my day whilst I compile updates on the other before copying the end result to the machine I actually use to get anything done other than recompiling my OS/apps. Even the distro maintainers have long recommended you don’t compile a huge amount of it … like your web browser, office suite, media centre, etc. (you know, all the things you actually use on a daily basis) … meaning that the whole raison d’être of the thing is, to say the least, somewhat moot: if you follow best practice with Gentoo, you’re effectively using Sabayon (and I never could get my head around why anyone would run that).
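For reference, the ‘setting and forgetting’ in question amounts to a few lines in /etc/portage/make.conf … the values here being illustrative rather than a recommendation:

```bash
# /etc/portage/make.conf ... Portage reads these for every build:
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"                        # parallel build jobs; match your core count
USE="X wayland pulseaudio -systemd"   # global feature toggles, per taste
```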

So, unless you have a fervent ideological objection to systemd and want the option of an OS without it, but don’t like the idea of a distro maintained by a single individual (à la Slackware), it’s questionable exactly what you’d get from it that you don’t from Arch … and whether whatever it is is worth the overhead involved.

So … as I don’t, as far as I am concerned, it isn’t, so, I don’t, I run Arch … which is, to my way of thinking, the optimal compromise: Sabayon with a bigger selection of software available for it than either Slackware or Gentoo.

But, unless you are prepared to go away and learn at least as much as I know about IT … and I wish I knew more myself (but I’m not a genius or idiot savant and have my limits) … then you are going to get into difficulty, if you try running even Arch, let alone Gentoo/Slackware — and likely sooner rather than later.

So, I don’t recommend it.

But I can’t recommend any of the others either.

Because …

1. the only difference between them and Windows/MacOS is that they’re not either of them: they dumb everything down in exactly the same way, hiding stuff behind GUIs and predetermined configurations and tweaks, install the kitchen sink in a fit of one-size-needs-to-fit-all paranoia that, should so much as one thing the average user desires be absent, the distro won’t be popular … and, consequently, need to apply so many tweaks to get it all working together seamlessly that there’s

1.1. a considerably reduced pool of software available in the repos
1.2. what’s available outside the repos often fails to work as expected, because it doesn’t accommodate the tweaks
1.3. often a wide discrepancy between the intent and the outcome, and it doesn’t all work together seamlessly

2. They’ll lead you astray — you’ll think you’re a Linux expert, when you’re not only just as ignorant as you were before of how things work, but not even a Linux user, you’re a <distro> user.

You can see my dilemma as an educationalist.

Ah, but …

What about an Arch derivative?

All the benefits of Arch, without the necessary prior knowledge or effort, right?

<Holds head in hands>

One of the worst things to happen recently is the reintroduction of an installer for Arch.

It means that, when things go awry … and they have done quite a lot thanks to an unfathomable decision made by the developer of the installer … people are left floundering and asking questions, on the various fora, to which they would already know the answer had they followed the installation guide instead; questions that are so core to the use of any operating system (let alone Linux, never mind Arch specifically) that, if they don’t know the answer, they’re better off using a phone/tablet/Chromebook/similar — sure, they’ll learn the hard way … when it’s too late and they need to install it by hand, using the installation guide instead of the installer … but think how much better it would have been to have learned what they will from the installation guide without having to waste time with the installer the first time around.
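For a flavour of what the guide teaches you (heavily abbreviated … partitioning, locales, users, bootloader and the rest omitted):

```bash
pacstrap /mnt base linux linux-firmware   # install the base system to your mounted root
genfstab -U /mnt >> /mnt/etc/fstab        # record the filesystems you just set up
arch-chroot /mnt                          # step inside the new system ...
passwd                                    # ... and start making it yours
```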

So, how is an Arch derivative … full of tweaks and quirks made by the maintainers … of any more benefit than any of the other distros?

That’s right … it isn’t — it’s just subject to all the problems that come with the others, at the cost of a number of the benefits that come with Arch.

How do I know?

Well, you’ll recall my mentioning that I’ve been working on quite a serious project.

The nature of said project is such that it’s platform agnostic — if it’s to work at all, it has to work with anything … everything (pick a distro, any distro).

And I cannot be bothered to waste time getting to know another distro just for the purposes of using it as a platform for some other project, rather than for getting to know it in its own right for the purpose of earning a living from it (e.g. RHEL, Debian or Ubuntu Server). It’s time I could be spending on the project itself, getting done what needs be done, not achieving something utterly irrelevant.

But, ironically, using Arch itself as the platform is not as practical as you might think. I need a semi-persistent platform from which to work: something I can tweak to render it suitable for my purposes, retain those changes persistently, roll back to a default state if/when required … and all without the need to install it or snapshot it — if a tweak doesn’t work, there’s no point committing it in the first place, so the need for subsequent rollback is redundant (if it works, commit it, otherwise don’t bother, just undo the changes in RAM and move on). And Arch doesn’t work as a live platform with a DE.

So, I’ve compromised and used a live version of Garuda, because

1. it has the advantage of my being able to make (and persist) any necessary tweaks to it with minimal effort, because I’m already familiar with the mechanisms by which to do so;

2. when I started, it was the new hotness, so, I figured I might as well see what all the fuss was about … and, as it worked sufficiently well for my purposes, I stuck with it for six or so months — after which I couldn’t be bothered to use anything else (see the point about wasting time getting to know another distro for any reason other than earning a living).

Now, some of the persistent tweaks I made concerned setting up certain security features. Some were to do with creating an access structure that would allow me to flit between Garuda and the project as a platform in its own right (or even both simultaneously) for the purpose of testing things. And a number were … ‘cosmetic’ isn’t the word, but the ‘look and feel’ of my work environment aren’t insignificant:

See here

The reason XFCE is my DE of choice is that it offers the best intersection I have found between features, stability, configurability, responsiveness and resource usage.

People rave about KDE but, in my experience, if you don’t want a Windows-alike experience, you’re out of luck: you can tweak it to your heart’s content, so long as you’re happy using it their way and only their way. Moreover, the widgets are unsuited to my needs. Take a look at my desktop. Across the top, there’s a panel with a number of applets displaying information about the state of play on my system and network. They’re visible at all times … I don’t need to ‘peek’ at my desktop to see them — so, I don’t miss any unusual activity because the widget that would have alerted me to it is hidden behind other things. And they take no more room than is necessary. Go to kde-look.org and/or your distro’s repos and investigate what’s on offer there instead … it’s all clunky AF (making having it permanently visible impractical). It’s inflexible. There’s also less choice: there are a million-and-one system-performance widgets, but they’re all variations on a speedometer … nor are any of the million-and-one email notification widgets, or crypto-asset value tickers, of any use to me (they’d be a waste of space at any size). XFCE’s panel applets do exactly what I need, how I need, in a way KDE’s just can’t — maybe they could, but I don’t have the time/inclination to learn how to develop one myself, when I can just plug everything I need into XFCE and get on with my day instead.

Likewise, the Whisker Menu popup application launcher is the perfect intersection between

  1. not having to type in an application name, but just cursor to it amongst my frequently used favourites
  2. being able to type one in when I need something less frequently used and not have to waste time mousing around … and, more significantly, change my focus of attention

I hit the menu key on the keyboard and (at least seven times out of ten) it pops up, if not exactly where I was already looking (because that’s where my mouse cursor is located) then not far from it. The cognitive load is lighter.
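(For anyone wanting to replicate that, the binding itself is a single xfconf property … the path follows XFCE’s custom-shortcut convention, and where the menu then appears is down to your panel/Whisker configuration:)

```bash
xfconf-query -c xfce4-keyboard-shortcuts \
  -p '/commands/custom/Menu' -n -t string \
  -s 'xfce4-popup-whiskermenu'
```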

And, quite apart from the utility of that in and of itself (I don’t get distracted by the need to look away in order to launch something useful in the meantime, potentially missing something, if not even losing my train of thought altogether) … remember when Windows 8 was new? No Start menu. Someone got me to take a look at their laptop, because something wasn’t working and I was just flummoxed. There was no indication of how to get at applications … and, as I wasn’t used to using Windows without a mouse … because, for a number of reasons (not least the cumbersome shortcuts), it really doesn’t lend itself to that very well … I had never been in the habit of pressing the Windows key on the keyboard in order to access it. So, it wasn’t for some minutes, until I accidentally moved the mouse to the bottom-left corner, that I had even the least idea of how I was going to resolve the issue I had been asked to fix.

Nicholas Negroponte once opined that the worst thing ever to happen to computers was the WIMP GUI. And he was right. I learnt a valuable lesson from my first Windows 8 experience: my thinking had been ossified by muscle memory — I was so used to instinctively looking, and moving the mouse, to the bottom-left corner in order to achieve anything that I couldn’t think of anything else … and, in the absence of anything else, I was helpless!

It’s all the more ironic given that, for many years already, I had been using the emergeDesktop shell on my own computers … configuring them to be driven by a right-click on the desktop with a mouse. Confronted with a system I knew to have been tailored not by myself but left at its defaults, when those defaults did not adhere to my expectations of them, I was not simply no more expert a user than a newbie, I was actually worse off than the noob … my expectations … based upon decades of prior experience … hindering me and preventing me from even thinking, never mind learning.

It was a salutary experience and the first thing I did with any OS thereafter was, where possible, configure it so that the application launcher would appear in no fixed location on the screen (usually, wherever the mouse cursor was), so that I couldn’t rely upon muscle memory but would actively have to pay attention … keeping my thinking fluid. The cognitive load of doing so is offset by the aforementioned saving with regard to having to shift focus from where I’m already looking — yes, I need to have some idea of where I left my mouse cursor, but I compensate for that by leaving it somewhere I’m likely to recall with little effort, based upon whatever activity I’m engaged in. And, as I tend to two-finger scroll on a trackpad (or wheel-scroll with a mouse, when I have no choice but to use one), that tends to be centre-screen (give or take) or wherever I’m currently focussed; so, there’s not really much thought involved and the cognitive load of noticing that it’s not exactly where I’m currently looking, nor otherwise exactly centre screen (which wouldn’t always be useful anyway), is low. Overall, it’s better for my brain, because not habitually relying on muscle memory but being responsive to stimuli is a transferable skill.

So, some of those ‘cosmetic’ tweaks serve a practical purpose re workflow and are, therefore, not as cosmetic as they might first appear. And that includes things like the theme. I’m used to a particular look as well as feel from my everyday experience. And Garuda, whilst not identical, starts out with a very similar one (colourscheme and icons), meaning that, again, I am largely unaffected by “this doesn’t look and feel the same” distractions that would be merely cosmetic — which is ironically significant. Moreover, it means that, when I am confronted with a default configuration that doesn’t behave that way, I am used to being surprised and cope with it much better than I did with Windows 8 that time: by intent and design, I already spend my days expecting the unexpected, so …

So, there you have the background to it all: what I’ve been doing, how and why.

And, consequently, I can say, hand on heart, that, once I no longer need to do things that way and I can get back to the pure Arch experience, I will never again use anything else without a seriously good reason.

Why?

There are so many tweaks made to the thing that … whilst, using a live version, I can’t say I’m 100% certain of their impact on an installed system … even something as simple as updating software (let alone the OS itself) cannot be reliably done via the usual means — which is just all manner of “What!!?” for anyone, never mind an Arch user (think about it).

Snaps that don’t install for some unfathomable reason (and I don’t even want to use a Snap in the first place, ffs!). A wild repo enabled by default and, if you disable it, a lot of things stop working (imagine if Arch relied upon the AUR for core elements of the OS!). Things that I don’t want to update but have to, because other things I do need to update have been tweaked in such a way that they require new versions of things — things that render other things inoperable (which is why I didn’t want to update them to start with). Bizarre tweaks that make absolutely no sense: the latest XFCE release won’t let me change certain things about the interface that I could before … and nothing is served by restricting it in the new release — maybe it’s fixed in the bare metal version after an update, but that not only doesn’t help me with the live version, it makes no sense for it to have been restricted in any version to start with.

And, most significantly (and I have no idea how they’ve managed it, never mind why), whereas I could previously bind-mount /etc from persistent storage … log out, log in and get on with my day … now I can’t — everything stops working properly and logging out leaves me unable to log back in! Why? Who knows? Does it make any rational sense? No. Nor will it ever do so: when the Universe collapses in on itself in the Big Crunch, in the last possible instant before Space and Time cease to be and History itself is no longer history, it will never have made any sense — immutable systems are one thing, but there is no sense in a partially immutable system … no-one rational would ever configure a mutable OS immutably, or leave parts of an immutable one mutable (it’s an incoherent strategy).
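And, lest anyone imagine the technique itself is exotic, the whole thing amounts to no more than this (device/path hypothetical):

```bash
# Overlay the live system's /etc with one kept on persistent storage,
# so tweaks survive a reboot:
sudo mount --bind /run/media/persist/etc /etc
# ... log out, log back in, carry on. At least, that's how it used to work.
```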

So … no … an Arch derivative is no better than any non-Arch distro — in fact, it’s worse ¹.

If you’re insufficiently technically literate to use Arch right from the off then, whatever you use that isn’t Arch, make sure it isn’t Arch in any way, shape or form; avoid any Arch derivative like the plague — if you think Arch has the potential to bite you in the behind when you don’t know what you’re doing, wait until you try something that is Arch but over which you don’t actually have the degree of control that Arch is designed (from the get go) to provide you with and that, consequently, you actually need. If you’re not sufficiently technical to use Arch/Gentoo/Slackware, use something like Mint instead until you are (a bit of a Catch-22, I know, but you gotta start somewhere and it’ll all but inevitably end in tears, if you don’t).

There … I’ve got that off my chest at last … and you’ve learnt something valuable along the way ².

Right … back to the grind — l8rz, Morlocks.

___
¹ Yes, I am aware (as I said) that I do some things with my systems that many would consider freakishly weird … but nothing I’ve been doing in this instance is so: bind-mounting is not even uncommon, let alone weird … and that’s the closest to ‘not run-of-the-mill’ I’ve come with regard to configuration of the work platform; more than that would risk rendering troubleshooting the project unreliable — so, I haven’t done more than that to it, duh (so, the root cause of my woes doesn’t stem from any tweaks I’ve made).

² Unless you’ve been counting grains of rice all this time.
