Moving from GitLab Registry to Sonatype Nexus

Recently i posted about how GitLab updates are broken, at least when attempting to self-host it and occasionally update it on my VPSes, which necessitated moving to something better suited to my particular set of circumstances.

Now, at the time of writing this, i've already finished the migration, which replaces 90% of what GitLab did for me (though certainly not all of the functionality that others might use) with a more lightweight and more easily manageable deployment that's hopefully also slightly less resource hungry.

Thus, i figured that i might just document the process and the outcomes of it all, over a few posts, each of which will focus on particular aspects:

  • interacting with the source code
  • doing automated CI/CD builds of apps
  • persisting dependencies and container images (this article)

Without further ado, let's get going!

Why Sonatype Nexus

GitLab has integrated registry functionality which allows storing both containers and other types of artefacts, such as Maven packages and so on, though in my experience it has been an endless source of pain. Due to the way it's structured, it becomes difficult to manage how it stores all of the package data and to clean it up selectively, at least in the version that i ran.

That's where Sonatype Nexus comes in - a free and sadly underused solution which allows storing most if not all of the same formats, but with custom blob stores, custom cleanup policies, more fine-grained access controls and a variety of other useful options:

00 nexus logo

Though, to be honest with you, my reasons for picking it are a bit more devious - so that i can throw it out if it ever becomes too annoying for me, or starts misbehaving. Clearly i cannot do that with the entire GitLab install, because who knows what could happen to my source code, but a separate package registry? In my eyes that's just good risk management (which also covers the more realistic cases where it accidentally deletes its own data, or where someone has to wipe it due to abnormal space usage, of course).

Getting Nexus up and running

Nexus is actually also pretty easy to set up, since even their Docker Hub page has all of the instructions that you'll need for a basic install. In the end, i settled on the following Docker Compose stack which i launch with Docker Swarm on my own container cluster with a simple bind mount for the data:

version: '3.3'
services:
  nexus:
    image: sonatype/nexus3:3.37.3
    environment:
      # JVM sizing, kept a bit below the container memory limit further down
      - INSTALL4J_ADD_VM_PARAMS=-Xms512m -Xmx1280m -XX:MaxDirectMemorySize=1280m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs
    networks:
      - ingress_network
    volumes:
      # bind mount for all of the Nexus data (needs to be chown-ed to 200:200, see below)
      - ..../nexus/nexus-data:/nexus-data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    deploy:
      # pin the container to a particular node, so it always finds its bind mounted data
      placement:
        constraints:
          - node.hostname == SOME_VALUE.servers.kronis.eu
      # keep Nexus from taking up more than 1.5 GB of RAM and most of a CPU core
      resources:
        limits:
          memory: 1536M
          cpus: '0.75'
networks:
  ingress_network:
    driver: overlay
    attachable: true
    external: true
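For reference, launching or updating the stack on the Swarm cluster is then just a single command away; the stack name and file name here are just examples:

# deploy (or update) the stack on the Swarm cluster
docker stack deploy --compose-file docker-compose.yml nexus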

My only problem was that i needed to manually update the permissions of the bind mount directory, because Nexus doesn't do this by itself. Permissions management is pretty screwed in Docker, since there's no command that would let you run as a non-root user yet still change certain container/bind mount directory permissions to whatever you want (maybe outside of messing around with local storage driver and bind mount options, not sure on that one):

sudo chown -R 200:200 ..../nexus/nexus-data

To their credit, that is actually documented on the Docker Hub page so it's not too cumbersome, just something to keep in mind:

01 Nexus permissions weird

Now, admittedly, there's a bit more configuration that needs to be done within the actual instance once it's up and running, but nothing stops me from detailing what i've discovered after also running Nexus at my dayjob for a while:

04 nexus loading

I'd also like to say that where previously GitLab itself was the heavyweight that was wasteful with resources, now both Gitea and Drone CI are really lightweight and Nexus is the wasteful one - perhaps its main drawback. The official documentation claims that you need almost 3 GB of RAM to run it:

02

Thankfully i managed to get it up and running with decent performance with just 1.5 GB, which in this case is good enough for me:

03 but we do not care about the requirements

You could probably go even lower, since some Java apps are notorious for wasting RAM left and right yet still sort of working when you pull in the reins, but i felt that this limit was low enough for me not to be too bothered by the resource usage, while also keeping the instance as stable as i'd like it to be, regardless of how many CI processes run against it and shuffle data around in it. This is probably extra relevant given that you can also use it for caching Maven, NuGet, pip, npm and other packages, so a slow instance would probably be detrimental to build speeds.
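To illustrate that last point, here's roughly how clients can be pointed at such proxy repositories, once you've created them in Nexus; the domain and the repository names below are just made up examples, so adjust them to match your own setup:

# npm: use a Nexus npm proxy repository instead of the public registry
npm config set registry https://nexus.example.com/repository/npm-proxy/
# pip: install through a Nexus PyPI proxy repository
pip install --index-url https://nexus.example.com/repository/pypi-proxy/simple/ requests
# Maven: a <mirror> entry pointing at a maven-public group repository
# goes into ~/.m2/settings.xml in a similar fashion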

Onwards to configuration!

Configuring Nexus

While getting rid of GitLab freed me from having 2 GB of RAM taken up just to keep it working, and Nexus proceeded to claw 1.5 GB of that back, the disk space limitations didn't really disappear anywhere either. Thus, i felt like i'd need to think about Nexus cleanup policies and multiple separate blob stores that can be wiped as necessary, should they ever become problematic.

Creating the cleanup policies is actually exceedingly simple: you just open the interface, enter how many days you want to keep stuff for and save the form. In the end, i settled on a few different types of policies, some for containers that i'd like to keep for longer, others for intermediate stores that only need to keep stuff around long enough for it to be delivered to the servers that might need it (e.g. development containers for dev/test environments):

05 nexus cleanup policies

After that, i also decided on having separate blob stores, where the actual data will be persisted, for these different types of registries, mostly due to a problematic past with Nexus where it wasn't entirely clear whether a particular registry was just being spammed by 10 different projects, or whether it was misbehaving and the cleanup policies were ineffectual (even though they have a cleanup preview, which is generally useless):

06 blob stores

As you can see, i also set up quotas here, not based on the size of the actual store, but rather on what would be left on the server's disk, so when (not "if") something goes wrong, i'll just have the registry refuse to take in any new data, rather than have the server fail instead. If this were to ever happen, i could always just purge one of the non-essential registries, which are little more than a cache at this point:

07 blob stores finished

After that, i could easily work on creating new registries. This is where another annoying facet became apparent - Docker registries can't just coexist on some context path, but rather demand that you use specific ports (connectors) for them, which can sometimes complicate things:

08

Thankfully, i'm not going to let Docker mess with me like that, so instead i decided to just set up multiple separate domains, which i could then proxy to the internal ports of the Nexus container as necessary:

09 new registry domains

Admittedly, it's a bit of an ugly solution; however, it allows you to use Let's Encrypt with the HTTP-01 challenge type whilst running everything on port 443, without needing to get into the DNS-01 challenge and its provider specifics, so overall it's definitely a win:

10 caddy configuration

(note: this is the old Caddy v1 configuration because i run it in my cluster for now; migrating over to v2 is in the cards, just not quite there yet)
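For reference, the relevant part of such a Caddy v1 configuration would look roughly like the following; the domain, the upstream service name and the connector port are placeholders that depend on how you've configured your own registry connectors in Nexus:

# one site block per registry domain, proxied to the matching Nexus connector port
docker.registry.example.com {
    tls someone@example.com
    proxy / nexus:8082 {
        transparent
    }
}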

Either way, whatever web server you use should be reasonably easy to integrate with this setup (i've personally tested Nginx at work), and Let's Encrypt should also work well enough:

11 caddy finished redeploy

A brief test confirmed that everything is indeed working as expected:

12 docker registry
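If you'd like to repeat a similar test yourself, it boils down to a few Docker CLI commands; the registry domain and image path below are just placeholders:

# log in with a Nexus user that has access to the hosted Docker repository
docker login docker.registry.example.com
# take any locally available image, retag it against the new registry and push it
docker pull alpine:3.15
docker tag alpine:3.15 docker.registry.example.com/test/alpine:3.15
docker push docker.registry.example.com/test/alpine:3.15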

However, things don't quite end there. Remember those cleanup policies that we created? Well, they won't actually run effectively yet, because there's a bit of manual configuration that you need to do, according to their documentation on the subject:

13 cleanup policies

However, after you invest a bit of time into configuring everything according to it, things should be reasonably straightforward and low maintenance from there on out:

14
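If you'd rather not click around the UI to check up on those scheduled tasks afterwards, newer Nexus versions also expose them through the REST API, which is handy for the occasional sanity check; the host and credentials below are placeholders, and it's worth double checking the endpoints against the API documentation of your particular version:

# list the configured tasks (cleanup, compact blob store and so on) and their last run results
curl -u admin:PASSWORD https://nexus.example.com/service/rest/v1/tasks
# trigger a particular task by its id manually, instead of waiting for its schedule
curl -u admin:PASSWORD -X POST https://nexus.example.com/service/rest/v1/tasks/TASK_ID/run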

From there, you'll probably also want to create a new role which has permissions to access the repository but not perform administrative actions in it, to let your CI servers connect to it as necessary:

15 new role for nexus

You might also want to create a user or two for read-only access, if you care about that sort of thing.
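For what it's worth, roles can also be created through the REST API instead of the UI, should you want to script that part; treat the following as a sketch rather than a copy-paste solution, since the host, credentials and role id are made up, and the privilege name merely follows the nx-repository-view-<format>-<repository>-<action> pattern that Nexus uses:

# create a role that can read and write repository contents, but not administer Nexus itself
curl -u admin:PASSWORD -X POST https://nexus.example.com/service/rest/v1/security/roles \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "ci-deployer",
    "name": "ci-deployer",
    "description": "Lets CI servers push and pull packages",
    "privileges": ["nx-repository-view-docker-*-*"],
    "roles": []
  }'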

Not migrating data to Nexus

"But what about the data?" you might ask.

After all, i migrated over all of the source code and whatnot from GitLab to Gitea in the previous tutorial, so shouldn't i do the same with the GitLab Registry and Sonatype Nexus?

This has taken a lot of time already and i'm exhausted, so i'm just not going to do that for now. All of the images that i care about are currently on the servers where they should be, so the repository disappearing for a while doesn't really impact me, since i'm not one of those fancy autoscaling cloud Kubernetes folks.

But supposing that you wanted to achieve such a thing, thankfully it wouldn't be too hard:

docker image pull old-registry.com/some-image
docker image tag old-registry.com/some-image new-registry.com/some-image
docker image push new-registry.com/some-image

Of course, if you have a lot of images, you might need to look into automating that, with either a script that you write yourself (a rough sketch follows below), or some other clever way to transfer everything over. Though thankfully, when i need the images again, i can just rebuild them, as i'll demonstrate in the next part of this tutorial - in that regard, having a Dockerfile is like a superpower (as long as all of your internet dependencies are still there).
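Such a script really wouldn't need to be anything fancy, either; something along these lines would probably do for simple cases (the registry domains and image names are placeholders, and images with multiple tags would need a bit more thought):

#!/bin/bash
# pull each image from the old registry, retag it and push it to the new one
OLD_REGISTRY="old-registry.com"
NEW_REGISTRY="new-registry.com"

for IMAGE in some-image another-image yet-another-image; do
  docker image pull "${OLD_REGISTRY}/${IMAGE}"
  docker image tag "${OLD_REGISTRY}/${IMAGE}" "${NEW_REGISTRY}/${IMAGE}"
  docker image push "${NEW_REGISTRY}/${IMAGE}"
done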

The only situation where you should truly care about carrying over the old images instead of rebuilding them would be when you need to support old versions of your containers as releases that should remain available or are still supported in some capacity, or when you're dealing with legacy code that cannot be trivially rebuilt, but then you have another problem to settle.

The same vaguely applies to any other dependencies, like library code - you should be able to build whatever you need yourself, or alternatively push the packages to this new source after checking them out from the old one.

Summary

In summary, Nexus is perhaps the most problematic piece of software out of Gitea, Nexus and Drone, but thankfully it is also pretty well suited to the workflow that i want to use now. The resource usage is a bit much, but its approach to cleanup policies and blob stores is actually considerably more advanced than that of the GitLab Registry, which should give you as much flexibility as you need.

Plus, if there were ever problems with it, at least the other systems would still mostly remain operational, especially since the soft quota mechanism is really useful to have! While i was a bit more reserved about recommending Gitea over GitLab, personally i think that Nexus is a really strong alternative to GitLab's package management functionality, if not superior to it in many ways!
