How and why to use Apache httpd in 2022

Suppose that you need to set up a web server to either serve some static assets or a web application front end, work as a reverse proxy, or maybe even act as the provider for SSL/TLS certificates. In the current year, you might consider something like Nginx, Caddy or even Traefik as your software of choice, given their popularity, performance or just how much of the mind share of the industry they've captured.

Apache, the alternatives and benchmarks

Nowadays, Nginx has almost become synonymous with reverse proxy or web server, especially given how many Kubernetes clusters out there use it as an ingress. Apache/httpd, on the other hand, has gotten no such love. That's unfortunate, because the Apache web server project has been in active development for decades at this point and there are a few advantages to using it:

  • it is well documented and there are lots of tutorials and examples for using it in a variety of setups
  • it has an excellent module/plugin ecosystem providing most functionality you might ever need
  • it has decent performance and your applications are still likely to actually be the bottleneck in most cases

Let's actually talk about the performance for a second, because it's a common point for people wanting to choose Nginx over anything else (even Caddy).

There's this lovely site called OpenBenchmarking that has a variety of benchmarks, including some for Apache and Nginx:

If we look at the results for comparable hardware, we can see the following:

Nginx:  Intel Core i9-12900K      485695 +/- 8564
Apache: Intel Core i9-12900K      139305 +/- 10612
Nginx:  AMD Ryzen 9 5950X 16-Core 349897 +/- 12376
Apache: AMD Ryzen 9 5950X 16-Core 58612 +/- 8606

Nginx:  2 x Intel Xeon Gold 5220R 178119 +/- 1937
Apache: 2 x Intel Xeon Gold 5220R 119167 +/- 895
Nginx:  2 x AMD EPYC 7763 64-Core 89766 +/- 5976
Apache: 2 x AMD EPYC 7763 64-Core 90509 +/- 9018

Now, there were quite a few different sets of results for those particular benchmarks, but I generally picked the ones with the most comparable public results, and explored both setups that use desktop hardware and CPUs that you're more likely to find in servers. Admittedly, there is a lot of variance in the results, but a few trends are pretty clear.

nature of results

In short:

  • both servers easily blow past the C10k problem
  • with beefy hardware, both are capable of serving around 100'000 requests per second, which is impressive
  • there is a lot of variance in the results, at best Nginx is more than 3 times faster than Apache, but that hardly matters
  • there are also results in which Nginx is actually slower than Apache or at least closer to its performance, generally on server hardware
  • that doesn't really matter much either, since either should be a viable choice for most of the applications out there

In practice, you'll probably want to pick whatever is the best suited solution for your projects of choice. If Apache is easier to configure, then go with it. If you're more familiar with Nginx, then choose it instead. Want Caddy? Don't feel discouraged by fancy Nginx performance benchmarks - in all likelihood, you'll never get to the scale where this is relevant and should you get there, you probably will have a DevOps team that will handle it for you, while you're driving around in your Tesla. Premature optimization is the root of all evil.

I actually still remember the time when one of my blog posts ended up on the front page of Hacker News. On that day, I got around 40k visits in total:

front page of Hacker News

And you know what? That's not all that much, when you think about it. Even if you look at the distribution of requests over time, the most I got was around 2k per hour:

front page of Hacker News

That's around half a request per second, which would easily be handled even by a Raspberry Pi. I would need to create something much, much more popular for the web server to become the issue - in most cases, it is more likely that whatever Java/.NET/Node/PHP software I'd be proxying would be the one to fail first.
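That back-of-the-envelope figure is easy to verify - 2000 requests spread over an hour comes out to roughly half a request per second:

```shell
# 2000 requests/hour expressed as requests/second
awk 'BEGIN { printf "%.2f\n", 2000 / 3600 }'
# → 0.56
```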

Why Apache is a good choice

So, why would I recommend Apache, apart from the aforementioned documentation and having a pretty decent track record? Simply put, it does what Caddy does - the recently added mod_md allows you to use something like Let's Encrypt to procure and automatically renew SSL/TLS certificates.

Caddy has that functionality built in, yet it is a somewhat new project (in the grand scheme of things), and Caddy v1 was essentially abandoned in favor of a rewrite to v2, forcing plenty of folks to spend time migrating to the newer version. With Nginx, you still need a separate piece of software like Certbot to handle it for you, much like you previously had to do with Apache, too.

Not only that, but Nginx can be a tad problematic sometimes: if you run healthchecks against your containers and the DNS records for those aren't available until the checks have succeeded, your instance of Nginx might fail to start with the following:

nginx: [emerg] host not found in upstream "my-app"

Now, I probably don't have to explain why an ingress that is proxying 10 legacy services going down because 1 of those isn't available isn't a good thing, but I've actually written about it in the past. Meanwhile, Caddy has some weirdness in regards to how it handles bad proxy configurations, not only crashing when renewing a certificate fails due to bad config, but also returning odd HTTP response statuses for non-existent proxy paths. I should probably dig up forum posts for both of those, but that's not necessarily what this particular piece of writing is about.
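For what it's worth, the commonly cited workaround for that Nginx startup failure (a sketch, not from this article - my-app and the Docker embedded DNS address are assumptions) is to route proxy_pass through a variable, which defers DNS resolution to request time:

```
# Hypothetical nginx sketch: resolving the upstream per request instead of at startup
resolver 127.0.0.11 valid=30s;  # Docker's embedded DNS server
server {
    listen 80;
    location / {
        set $upstream http://my-app;
        proxy_pass $upstream;
    }
}
```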

Essentially, I think that it's nice to explore various alternatives and Apache is an excellent choice. Sadly, because it's not all that popular anymore, there is a lack of tutorials for getting things up and running, which is a nice opportunity for me to jump in and offer my own advice! So, let's look at how it can serve as a production-ready web server, reverse proxy and ingress with SSL/TLS through the newly added mod_md.

I'm sure that you'll be impressed with the results!

Ways of setting it up, our own Apache containers

Now, there are many ways to run software: I've seen people installing it on the system through their package manager of choice (such as apt or yum), sometimes unzipping it from an archive wherever they want (typically in the case of something like Tomcat), though in recent years I've grown more accustomed to the idea of running as much as I can inside of containers.

Not only does that allow me to set resource limits more easily than I could with something like systemd and slices, but I can also pick storage locations without having to work with too many symlinks, expose ports however I desire with a unified way to configure any package, and get lots of other benefits, such as being able to update the host OS with minimal risk of actually breaking anything in the containers.

That's the approach that I'll take today as well, though you're free to use any other approach, since the configuration will largely be the same. This also means that you can use Docker, Podman, containerd with Kubernetes, or anything else, really.

So, let's get started: I'll build my own container image at first, basing it on Ubuntu (my image might not be publicly accessible, so consider using FROM ubuntu:focal instead):

# We base our Apache2 image on the common Ubuntu image.
FROM ubuntu:focal

# Disable time zone prompts etc.
ARG DEBIAN_FRONTEND=noninteractive

# Time zone
ENV TZ=Europe/Riga

# Use Bash as our shell
SHELL ["/bin/bash", "-c"]

# Apache web server
RUN apt-get update && apt-get install -y apache2 apache2-utils \
    libapache2-mod-security2 libapache2-mod-md && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt/*

# Create directories
RUN mkdir -p /etc/apache2/run/apache2 \
&& mkdir -p /var/lock/apache2 \
&& mkdir -p /var/log/apache2 \
&& mkdir -p /etc/apache2/certificates/md \
&& mkdir -p /etc/apache2/auth

# Initialize environment variables (note: this does not persist into later
# layers or the running container, hence the manual ENV fallbacks below)
RUN source /etc/apache2/envvars

# Manual init in case source fails
ENV APACHE_RUN_DIR /etc/apache2/run/apache2
ENV APACHE_PID_FILE /etc/apache2/run/
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_LOG_DIR /var/log/apache2

# Enable the modules we need
RUN a2enmod md \
&& a2enmod ssl \
&& a2enmod security2 \
&& a2enmod rewrite \
&& a2enmod deflate \
&& a2enmod proxy \
&& a2enmod proxy_http \
&& a2enmod headers \
&& a2enmod http2

# Copy over config
COPY ./apache/etc/apache2/apache2.conf /etc/apache2/apache2.conf
COPY ./apache/etc/apache2/sites-enabled /etc/apache2/sites-enabled
COPY ./apache/etc/apache2/conf-enabled /etc/apache2/conf-enabled
COPY ./apache/etc/apache2/certificates /etc/apache2/certificates
COPY ./apache/etc/apache2/auth /etc/apache2/auth
COPY ./apache/var/www/html /var/www/html

# Volume for configuration (md folder for Let's Encrypt, config for the rest)
VOLUME "/etc/apache2"
#VOLUME "/var/www/html"

COPY ./apache/ /
RUN chmod +x /
# Default run script
CMD "/"

# Remember to set files with permissions 33:33 for www-data!
# Store files under: /var/www/html

So, in short:

  • we set up a few things to make our lives easier (non-interactive front end so apt doesn't freak out, time zone, Bash as our shell)
  • we install Apache and prepare whatever configuration we desire
  • we ensure that all of the directories needed by Apache are present
  • we set a few environment variables
  • we enable the modules that we care about
  • finally, we copy over the configuration and set a custom entrypoint

The aforementioned entry point is the following script:


#!/bin/bash

echo "Removing old PID file, if present..."
rm -f "$APACHE_PID_FILE" || echo "No old PID to remove, proceeding"

echo "Setting up logging to STDOUT/STDERR..."
ln -sf /proc/$$/fd/1 /var/log/apache2/access.log
ln -sf /proc/$$/fd/2 /var/log/apache2/error.log

echo "Software versions..."
apache2 -v && apache2ctl -S && apache2ctl -M

run_apache() {
    # Use the ENABLE_SERVER_RESTARTS environment variable for automatic restarts below
    if [ "$ENABLE_SERVER_RESTARTS" = true ]; then
        echo "Server will occasionally be restarted for SSL/TLS cert renewal and configuration reloading..."
        while true; do
            echo "Launching Apache2..."
            apache2ctl -DFOREGROUND &
            # By default, wait until midnight
            SECONDS_UNTIL_MIDNIGHT=$(($(date -d 23:59:59 +%s) - $(date +%s) + 1))
            # Use the TIME_BETWEEN_RESTARTS environment variable if present, otherwise use the default
            SECONDS_UNTIL_RESTART=${TIME_BETWEEN_RESTARTS:-$SECONDS_UNTIL_MIDNIGHT}
            echo "Waiting for $SECONDS_UNTIL_RESTART seconds before restart..."
            sleep "$SECONDS_UNTIL_RESTART"
            echo "Restarting Apache2..."
            kill $(lsof -t -i:80) || echo "Nothing on port 80..."
            kill $(lsof -t -i:443) || echo "Nothing on port 443..."
        done
    else
        echo "Launching Apache2..."
        apache2ctl -DFOREGROUND
    fi
}

echo "Starting Apache2 container..."
run_apache

It makes sure that there are no stale PID files, sets up logging, prints version information and starts the web server in the foreground (optionally with periodic restarts).

But you know what, there is also an easier way if you don't care much about building your own container images. You can just use any of the available Apache images out there, such as the official httpd image on Docker Hub.

Regardless, the instructions above could still be pretty useful if you use something like apt or yum to install and launch it, though in that case a systemd (or similar) service should be set up for you, so restarting it becomes as easy as the following:

sudo service apache2 restart

One thing you'll want to remember is that if you're using an RPM-based distro, then the service name will be a bit different, for example:

sudo service httpd restart

If you want, you can add some global configuration in the apache2.conf file in the /etc/apache2 directory, for example to have Apache enable HTTP/2:

# HTTP/2 configuration
Protocols h2 h2c http/1.1

Then, we might be interested in having a default configuration file, which Apache might also create for us, such as having the following in 000-default.conf in the /etc/apache2/sites-enabled directory:

<VirtualHost _default_:80>
    DocumentRoot /var/www/default-html
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        DocumentRoot /var/www/default-html

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
            SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
            SSLOptions +StdEnvVars
        </Directory>
    </VirtualHost>
</IfModule>

But either way, you should be up and running in minutes, not hours:

apache running

Once that's done, we can proceed with customizing Apache for our needs, setting up a few useful modules and so on...

Configuring Apache for our needs

The good thing about Apache and most other web servers out there, is just how versatile they are! Whether you need a server to allow your users to access a bunch of static files, rewrite requests to go to your index file for a single page application, act as a reverse proxy that provides SSL/TLS termination at the point of ingress, or even want to set up some rate limiting or authentication, all of that is possible.

In my experience, shifting these responsibilities to the web server and managing them centrally, as opposed to doing that per-app (e.g. exposing Tomcat directly and managing SSL/TLS certificates there, doing it in a slightly different way for .NET apps, etc.), is a way to save yourself a lot of headaches in the future.

First up, here's an example of how I'll run my own container image, with a few persistent directories of interest:

version: '3.7'
services:
  apache:
    # My own image reference; substitute your own build here
    image: my-apache-image
    environment:
      # Should automatic restarts be done
      - ENABLE_SERVER_RESTARTS=true
      # If you want, force restarts (for cert redeployment, every X seconds)
      # - TIME_BETWEEN_RESTARTS=86400
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    volumes:
      # Site files
      - /home/kronislv/docker/apache/data/apache/var/www/html:/var/www/html
      # Site configuration
      - /home/kronislv/docker/apache/data/apache/etc/apache2/sites-enabled:/etc/apache2/sites-enabled
      # Server configuration (e.g. ACME)
      - /home/kronislv/docker/apache/data/apache/etc/apache2/conf-enabled:/etc/apache2/conf-enabled
      # Authentication files
      - /home/kronislv/docker/apache/data/apache/etc/apache2/auth:/etc/apache2/auth
      # Certificate files (for persistence)
      - /home/kronislv/docker/apache/data/apache/etc/apache2/certificates:/etc/apache2/certificates
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        delay: 300s
      resources:
        limits:
          cpus: "0.75"
          memory: "512M"

Here you'll see that even though we're running Apache2 in an immutable container, I can also persist certificates as well as configuration in the file system, should I choose to do so. So, onwards to the rest of the configuration!

Let's Encrypt SSL/TLS certificates

So, let's start with a relatively new feature that Apache has gotten: integration with Let's Encrypt (or another certificate provider) for automatic provisioning and renewal of SSL/TLS certificates. Previously, only other web servers like Caddy offered it all in a single package, but now Apache has support for it too, thanks to the mod_md module. In contrast, Nginx still relies on pieces of software like Certbot, which are nice, but need to be managed separately, making everything feel a bit more loosely coupled - not a bad thing per se, but also not something you'll want in all setups.

In my container images, I actually have an acme.conf file in the /etc/apache2/conf-enabled directory, which will handle how certificates are provisioned for all of the domains:

MDCertificateAuthority    https://acme-staging-v02.api.letsencrypt.org/directory
MDBaseServer              on
MDCertificateProtocol     ACME
MDCAChallenges            http-01
MDDriveMode               auto
MDPrivateKeys             RSA 2048
MDRenewWindow             33%
MDStoreDir                /etc/apache2/certificates/md
MDCertificateAgreement    accepted

<Location "/md-status">
    SetHandler md-status
    AuthType basic
    AuthName "md-status"
    AuthUserFile /etc/apache2/auth/md-status.htpasswd
    Require valid-user
</Location>
As you can see here, I have a password-protected endpoint for checking the certificate status, and I'm (currently) using the Let's Encrypt staging environment for testing, though I can eventually switch over to the production directory as needed.

In our case, we'll have a VPS with a domain (with a few sub-domains) pointing at it, a container with our server running (though containers aren't strictly necessary) and the appropriate configuration for provisioning the certificate:


# ==== MAIN SITE ==============================================================


<VirtualHost *:80>
    Redirect /
</VirtualHost>

<IfModule ssl_module>
    <VirtualHost *:443>
        DocumentRoot /var/www/html
        SSLEngine on
    </VirtualHost>
</IfModule>
You can also see the redirect from HTTP to HTTPS, courtesy of the Redirect directive from mod_alias, in addition to me demonstrating how to actually use multiple sub-domains.

Upon startup, the container will provision the certificates, as illustrated by the log output:

[ssl:info] [pid 20:tid 140393120648256] AH01887: Init: Initializing (virtual) servers for SSL
[ssl:info] [pid 20:tid 140393120648256] AH01914: Configuring server for SSL protocol
[md:debug] [pid 20:tid 140393120648256] mod_md.c(930): AH10113: get_certificate called for vhost

(you might need to configure the appropriate log level for this, personally most of the time LogLevel info in apache2.conf is enough)
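On that note, Apache's LogLevel accepts per-module overrides, so you can raise verbosity for just the TLS-related modules without flooding the rest of the log - a small apache2.conf sketch:

```
# Keep the global level at warn, but get detailed mod_ssl/mod_md output
LogLevel warn ssl:info md:debug
```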

And after the server restarts (whether manually or thanks to our entrypoint script), the certificates will be used properly! In this case, we can see the staging certificates being present:

example of staging cert

And if we want to use the production certificates instead? Just change MDCertificateAuthority to a different value and restart the server once so it acquires the certificate, and then again so it starts using it. Whether you want to do that based on cron, or something else, is up to you. In my case above, I've implemented two approaches - if you set TIME_BETWEEN_RESTARTS to any second value, then you'll have regular restarts, though alternatively you can leave the value blank and restarts will happen every midnight. Of course, you can also turn that off by not having ENABLE_SERVER_RESTARTS set to true.
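The interval-picking logic itself is a one-liner worth spelling out - TIME_BETWEEN_RESTARTS wins when set to a non-blank value, otherwise we fall back to the seconds remaining until local midnight:

```shell
# Pick the restart interval: TIME_BETWEEN_RESTARTS if set, else wait until midnight
SECONDS_UNTIL_MIDNIGHT=$(($(date -d 23:59:59 +%s) - $(date +%s) + 1))
SECONDS_UNTIL_RESTART=${TIME_BETWEEN_RESTARTS:-$SECONDS_UNTIL_MIDNIGHT}
echo "Sleeping for $SECONDS_UNTIL_RESTART seconds"
```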

A reverse proxy

So, next up, let's assume that I want to use Apache2 to handle SSL/TLS certificates as above, but want it to act as a reverse proxy and make sure that the traffic ends up reaching another app that I'm running. Here, let's assume that I already have another container running, though it can also just be a reference to another port running on the same server, or even a different server that can be reached from the host running our Apache2 instance.

In that case, my configuration becomes the following:

# ==== PROXY EXAMPLE ==========================================================


<VirtualHost *:80>
    Redirect /
</VirtualHost>

<IfModule ssl_module>
    <VirtualHost *:443>
        ProxyPass "/" "http://my_application:80/"
        ProxyPassReverse "/" "http://my_application:80/"
        SSLEngine on
    </VirtualHost>
</IfModule>

After this configuration is loaded, if you open the virtual host with the proxied application, you'll actually see our app in it:

proxied application

This is especially useful for container deployments, where Apache2 can sit in front of all of them and you don't need to expose the actual applications to the outside world directly, or think about configuring each of them separately (you don't need to worry about how to connect Let's Encrypt to .NET or Java, or even import custom certificates in them either).

A reverse proxy with different paths

But what if we want to expose certain resources under different paths? For example, let's suppose that we have two script files that are served by another application: analytics.js:

window.addEventListener('load', function () {
  console.log('Our analytics script would now be loaded!');
});

and other.js:

window.addEventListener('load', function () {
  console.log('The other script would be loaded!');
});

In our proxied application, we might want to access those files like so, from the aforementioned domain:

<script src="/analytics.js"></script>
<script src="/subfolder/another-file.js"></script>

To achieve that, we simply add a few more instructions for proxying:

<VirtualHost *:443>
    # Our proxy that just refers to another container
    ProxyPass "/analytics.js" "http://analytics_application:80/analytics.js"
    # Custom proxy, that slightly changes the path
    ProxyPass "/subfolder/another-file.js" "http://analytics_application:80/other.js"
    # Fallback proxy for other resources
    ProxyPass "/" "http://my_application:80/"
    ProxyPassReverse "/" "http://my_application:80/"
    SSLEngine on
</VirtualHost>

So, once we open the application, we'll also see that the resources will be loaded correctly:

proxied resources loading

This is very useful for when you want to have your analytics scripts (such as Matomo) be available on the same domain, to avoid any issues with fetching resources from another one.

A server for a SPA app (single entry point)

Okay, but what if we want to redirect all of our requests (that aren't images, JS, CSS files etc.) to a single entrypoint, for example an index.html file for our single page application?

That is also doable, pretty easily, actually!

Here's the configuration that we need:

<VirtualHost *:443>
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule ^spa\.html$ - [L]
    # Serve files that exist under the document root as-is,
    # send everything else to the single entry point
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
    RewriteRule . /spa.html [L]
</VirtualHost>

Now, this particular configuration is no longer too user friendly, but basically we check whether the requested resource can be found under the document root directory, and if not, we just return the spa.html file (normally you'd name it index.html), which works nicely:

single page application

Once you've set it up, it works nicely, but personally I think this is one of the worst things about Apache - the syntax really should just be simpler, even at the expense of the ability to customize all of this for niche use cases.

A static file server with directory listings

Let's assume that you want to do something a bit different. Maybe you want to let the visitors of your site preview the files that are available in some directory, as well as download them.

Thankfully, the configuration for that is really simple:

<VirtualHost *:443>
    DocumentRoot /var/www/html
    <Directory "/var/www/html/files">
        Options Indexes
    </Directory>
</VirtualHost>

After opening the directory, you'll see the following (I put some of the images of this blog article in there):

file server example

While it doesn't look too pretty, it's honestly a perfectly passable approach for allowing users to browse the archive of your software releases, as well as other files.
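If you do want something slightly nicer looking, mod_autoindex (the module behind Options Indexes) has a few knobs - a sketch, assuming the same files directory as above:

```
<Directory "/var/www/html/files">
    Options Indexes
    # Table-based listings with full-width file names instead of the bare defaults
    IndexOptions FancyIndexing HTMLTable NameWidth=*
</Directory>
```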

Rate limiting

But what about when you expect a lot of users and would like to prevent them from breaking your site by using up too much of its resources? That's when rate limiting could come in handy! Now, Apache2 has a really simple looking module called mod_ratelimit, but sadly I could never get it working properly. An alternative is to use the security module (mod_security2), not to limit download speeds in KB/s, but rather to cap the maximum allowed requests per second.

In its simplest form, you can configure it like so (the value in @gt 50 controls how many requests are allowed before throttling kicks in):

<VirtualHost *:443>
    DocumentRoot /var/www/html
    DirectoryIndex large.html
    SecRuleEngine On
    <LocationMatch "/large">
        SecAction "initcol:ip=%{REMOTE_ADDR},pass,nolog,id:10000001"
        SecAction "phase:5,deprecatevar:ip.ratelimitcounter=1/1,pass,nolog,id:10000002"
        SecRule IP:RATELIMITCOUNTER "@gt 50" "phase:2,pause:300,deny,status:509,setenv:RATELIMITED,skip:1,nolog,id:10000003"
        SecAction "phase:2,pass,setvar:ip.ratelimitcounter=+1,nolog,id:10000004"
        Header always set Retry-After "10" env=RATELIMITED
    </LocationMatch>
</VirtualHost>

And here's what happens when you make too many requests, demonstrated with images:

rate limit test

Of course, it might be much better to limit the total allowed download speed per user, but there might just be some weirdness preventing it from working, at least for me. Might have to revisit later.

Additional authentication

Okay, but what about needing some additional authentication? Apache2 has plenty of modules here too, but personally I'd go for at least basic auth for any administrative interfaces of your blogs and other websites. Why? Because if there are exploits that can circumvent an application's own authentication, like what happened with this very blog, putting the endpoints behind even something as simple as basic auth would prevent such issues.

First you'll want to use something like htpasswd to create the file:

htpasswd -c admin.htpasswd admin-user

(enter the password when prompted)
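If you'd rather skip the interactive prompt (e.g. in a provisioning script), an equivalent entry can be generated with openssl - a sketch, assuming openssl is available and hunter2 stands in for your real password:

```shell
# Generate an Apache-compatible MD5 (apr1) hash and write a user:hash line,
# matching the format htpasswd produces
HASH=$(openssl passwd -apr1 'hunter2')
echo "admin-user:$HASH" > admin.htpasswd
cat admin.htpasswd
```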

After that, you can add the necessary configuration:

<VirtualHost *:443>
    DocumentRoot /var/www/html
    <Location "/admin">
        AuthType basic
        AuthName "The admin section requires additional login"
        AuthUserFile /etc/apache2/auth/admin-user.htpasswd
        Require valid-user
    </Location>
</VirtualHost>

Then, upon trying to open the /admin path, you'll be prompted to log in by your browser:

basicauth login

Of course, basic auth is one of the most basic methods and isn't really considered secure on its own, especially when you use htpasswd files, as opposed to something encrypted or a separate auth system. That said, it's also really easy to set up and better than nothing, as long as you only use it over HTTPS.
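To drive home why HTTPS is mandatory here: the credentials travel as nothing more than a base64-encoded user:password pair in the Authorization header (hunter2 being a placeholder password), which anyone able to read the traffic can trivially decode:

```shell
# The value a browser sends after "Authorization: Basic" - encoding, not encryption
printf 'admin-user:hunter2' | base64
# → YWRtaW4tdXNlcjpodW50ZXIy
```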


In summary, Apache2 is a very capable web server that is suitable for most common use cases even in 2022, and has almost no issues with being deployed either in containers or directly on a Linux distro of your choice. The configuration format and some of the options could be a bit more user friendly, but the documentation itself is really good in most cases. It feels more resilient than Nginx, which loves to complain and crash when an upstream host is unavailable, and it comes with support for Let's Encrypt and other automated certificate providers out of the box, even though handling restarts is left as your responsibility.

From what I can tell, the performance and resource usage have never really been huge problems for the scales that I work at, security is pretty decent as long as you don't enable too many modules, and the whole setup process is decent as well, as long as you remember to install everything you need for the container. Honestly, I think that I'll try replacing Caddy v1 (which was simpler to use than v2) with Apache for my servers gradually and then see whether I need to look elsewhere.

Of course, in the future it might also be nice to explore more advanced setups, maybe get the mod_ratelimit module working properly or figure out why it seemed to hate me so much (no errors, just didn't work), or maybe explore the built in options for caching, be it file based or in-memory cache, so I wouldn't need something like memcached or Varnish per se. But for the most part? I just need a reverse proxy, some basic rewriting (for SPA apps and APIs that expect a single entrypoint), automated SSL/TLS and some additional auth. For all of those, Apache passes the test.