Managing disks in Linux is broken

Recently I figured that instead of running two servers with one disk each, I could run a single server with two disks to use for storing my backups. In this case, it'd be a bit like software RAID 1, mostly because I don't have that much trust in RAID controllers and would instead prefer incremental syncs with something like rsync at scheduled intervals.

Now, normally, this should be a pretty easy task. Just plop the disk in, format it, make it mount automatically, set up some scripts with crontab and you're good to go! Unfortunately, I found that it's not quite that easy, due to a variety of software packages in GNU/Linux (in this case, Debian) being broken.

For example, I decided that I might use GParted for partitioning the disk, a beloved piece of software that I've used to great effect in the past. So, I set up VNC on the server and tunnelled to it through SSH, which gave me something similar to RDP on Windows (I also tried xrdp, but for some reason it's broken now and won't accept connections), letting me interact with a graphical environment remotely:

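For reference, the tunnelling part is a one-liner. Something like the following, assuming the VNC server listens on port 5901 on the server's loopback interface (the user and hostname are placeholders):

```shell
# Forward local port 5901 to the VNC server's port on the remote
# machine's loopback interface; user and hostname are placeholders.
ssh -L 5901:localhost:5901 user@backup-server
# A VNC client pointed at localhost:5901 then goes through the tunnel,
# so the VNC port never needs to be exposed to the network directly.
```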

GParted is broken

In this case I'm using XFCE, which I find to be a nice middle ground: more usable than lighter desktops like LXDE, yet using fewer resources and snappier than GNOME or KDE. Next up, I decided to install GParted, which is simple enough - just a single command to get the package and be able to use it:

sudo apt install gparted

install gparted

The OS even registered it in the applications menu, so that I could open it more easily:

gparted in menu

Sadly, that's where things stopped working. Attempting to open the software through its little icon in the menu did nothing. Launching it from a terminal returned errors about no authentication agent being found. Furthermore, trying to YOLO it and run it with sudo simply returned a GTK error, stating that for some reason it cannot open the display:

gparted fails to run

The incredibly sad bit is that when running locally, it actually does work. So VNC seems to be the culprit here - it fails to provide remote access to the desktop that behaves like using it locally. That smells like a leaky abstraction to me and definitely not like how the operating system should work. It shouldn't matter whether the user is connected remotely or locally (especially since the connection comes from a local port, given the SSH tunnelling).

What's worse, I couldn't really figure out what to do next or how to solve the issue, because the error output didn't give me many hints. Some posts online suggested that I should look for polkit, however there were no actual packages by that name available. Of course, I had also installed the full XFCE desktop for Debian, so I shouldn't have to install anything manually beyond that, since the full desktop package should provide a complete, functional desktop:


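As an aside, one reason searching for "polkit" comes up empty is naming: Debian ships the polkit daemon under the name "policykit-1". I can't say whether installing it (plus a desktop authentication agent) would have fixed this particular error, but it's where I would look today:

```shell
# Debian packages polkit as "policykit-1"; searching for "polkit"
# itself turns up little, which is easy to trip over:
apt-cache search policykit
sudo apt install policykit-1
```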
So then what? Some people suggested that running "gksu" would help, which I can attest to - it saved my butt a number of times when I ran into this problem in the past. Sadly, this was not one of those times, because for some reason the package has been removed from the Debian 10 repositories and isn't even available to install anymore:


It feels wrong that packages get removed like this, much like Adobe retroactively killed off Flash, especially when I'd still like to run the software in question. Why does software even rot this way over time? Why can't we just write software once against some super basic and stable set of OS APIs and have it run in 5 to 10 years just like it does today? I know I'm basically describing OS kernels here (where even some bugs can't always be fixed easily, because there is code out there that depends on their presence), but let me have a bit of a rant.

Besides, I feel the rant is totally justified here, because this is the command that finally got things working:

xhost +local:
sudo gparted

What is xhost? What did I just do? The ArchWiki has a page detailing it, but I would never have figured out by myself that launching GParted requires a seemingly unrelated utility. After all, as a user, software should tell me what it wants me to do when it refuses or fails to work, to the best of its abilities.
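As it turns out, "xhost +local:" disables access control for every local connection, which is rather broad. A slightly tighter variant, assuming one has to poke at X access control at all, is to grant (and later revoke) access for the root user only:

```shell
# Grant only the root user access to the X server, instead of every
# local user like "xhost +local:" does:
xhost +si:localuser:root
sudo gparted

# Revoke the grant once done:
xhost -si:localuser:root
```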

Thankfully, GParted finally decided to work:

xhost works

Now, you might want to ask me: "Wouldn't using $INSERT_CLI_TOOL_HERE have been way easier here? Why not just do that?"

I feel like such a question should be invalid, because Linux is so spectacularly broken at times that even installing and running a package becomes something that needs debugging. As for why I prefer graphical software in this case: its UI and UX are superb compared to anything a CLI app can offer (maybe a TUI app could compete here). Making a partition is as easy as clicking on a bit of free space on the HDD and entering the details you want:

gparted usage

Instant visual feedback, an easy overview of what needs to be entered, and no commands to memorize - just click on the input fields. More software should be like this! Less than a minute later, I had created a new partition on the HDD and it was ready to be used.

Surely, mounting an HDD wouldn't break, right?

Mounting disks is broken

Wrong: attempting to mount the HDD returned an error:

mount fail

Let me get this straight, computer:

  • you first decided not to automatically mount an HDD that's connected to one of your SATA ports (I'm not even talking about autorun here)?
  • then you told me that I don't have permission to mount a disk that I own, which I configured to store my files, on a computer that I own?

Really? Let me repeat what I said above: if you are providing me remote desktop access, you should provide me access to a remote desktop - not something that looks like a desktop and acts like a desktop right up until I try to do anything remotely useful.

Fine, I'll do it the old-fashioned way... Thankfully, mounting disks through the CLI isn't too bad: just create a directory for the mount point, mount the partition, then check whether it succeeded:

sudo mkdir /hdd-1
sudo mount /dev/sdb1 /hdd-1
df -h

That's more like it - it simply works without throwing a fit:


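Beyond eyeballing the "df" output, "findmnt" is a handy way to confirm a mount actually took:

```shell
# Show whether, and how, a path is mounted; "lsblk -f" similarly lists
# all block devices with their filesystems and mount points.
# ("/" is used as a stand-in here - for the new disk it'd be /hdd-1.)
findmnt /
```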
That said, there is some weirdness in regards to setting up automatic mounting. For example, if we want the HDD to mount by itself after a restart (you know, like Windows does it), then we need to edit "/etc/fstab", which at this point expects disk UUIDs for some reason:


But how would I even get a... Oh, wait, a comment in the configuration file actually explains why UUIDs are used and how to get the value in the first place, by pointing at other tools. If there was ever a good example of a comment in code/config/whatever, this is probably it! It was super useful, because the mentioned tool was also available and just worked:

blkid example

Then, with a bit of additional searching, I found an Ubuntu Wiki page about fstab which explained what the weird ones and zeroes at the end actually mean. Personally, I think the format could be a bit clearer, with named options, like:

UUID=a7726670-01ca-407e-961e-103cc1ed3613 /hdd-1 ext4 errors=remount-ro command-dump=false fs-check=false

But for now, the existing syntax was also perfectly workable:

mounting fstab
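For reference, a complete entry of that shape looks like the following - device, mount point, filesystem type, mount options, then the dump flag and the fsck pass order:

```
# <file system>                            <mount point> <type> <options>         <dump> <pass>
UUID=a7726670-01ca-407e-961e-103cc1ed3613  /hdd-1        ext4   errors=remount-ro 0      2
```

The trailing "0" means the partition is skipped by the (mostly obsolete) dump backup tool, and the "2" means it gets checked by fsck after the root filesystem, which uses "1".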

And it seems that after a restart, everything worked as one would expect:

all good after restart


In conclusion - another day, another weird way in which Linux broke. I guess someone could probably figure out why VNC acted the way it did and why the X server's permission model didn't like how I was using the software, but at the end of the day, software should serve the user's needs - and this time, it failed to do so. In my opinion, if GUI software is subjectively the best tool for a job, then it definitely should be used; if a CLI or TUI tool is better, then prefer that instead. What I'm definitely not okay with is packages breaking and people passing around a particular workaround just because the ecosystem is broken like this.

If things don't get better, people will eventually view software breaking all the time for nonsensical reasons as normal - and that will be the beginning of the end of reliable software. If you ask me, we are probably already past that point.


As a sidenote, here's the actual crontab entry that I use for copying my data from one HDD to the other:

0 8 * * 6 sudo rsync -au --info=progress2 --delete "/home/" "/hdd-1/home" > /hdd-1/rsync-cron.log 2>&1

It seems to work pretty well, though using the overall progress output ("--info=progress2") instead of the verbose output or the per-file progress bar, both of which slow everything down, seems to be a must when copying lots of smaller files:

rsync output

Of course, copying the entire home directory seems like a bit of a brute-force solution, but then again, it takes less than an hour and I only intend to do it once per week, as a means of protecting myself against at least a single disk failure (seeing as the data doesn't change that often). Now, you might ask whether the OS disk itself matters, and the answer is a resounding "no": if the disk with the OS ever fails, I'll just toss it and get a new one, given that almost all of my software runs inside of Docker with bind-mounted directories and pinned image versions, and all the rest can easily be recreated with a few Ansible scripts.
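That combination is what makes the OS disk disposable in the first place. A sketch of what such a container looks like (the image, tag, port and paths here are placeholders, not my actual setup):

```shell
# Pinned image tag plus a bind-mounted data directory on the data disk:
# the container can be recreated identically on a fresh OS install,
# with all state surviving on /hdd-1. Names below are placeholders.
docker run -d \
  --name blog \
  --restart unless-stopped \
  -v /hdd-1/blog-data:/var/www/html \
  -p 8080:80 \
  nginx:1.18.0
```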

By the way, want an affordable VPN or VPS hosting in Europe?
Personally, I use Time4VPS for almost all of my hosting nowadays, including this very site and my homepage!
(affiliate link, so I get discounts from signups; I recommend it to other people anyway, so it might as well be linked here)
Maybe you want to donate some money to keep this blog going?
If you'd like to support me, you can send me a donation through PayPal. There won't be paywalls for the content I make, but my schedule isn't predictable enough for Patreon either. If you like my blog, feel free to throw enough money for coffee my way!