In previous posts, we’ve done our best to set up a website that is resilient to traffic spikes, bots, and mischief of all sorts. (I, II, III, IV, V, VI, VII) But there are still plenty of other things that can go wrong. A good way to protect against this is mirroring.

A mirror is like a backup website—not just a backup of the data, but of everything, including the server and domain name.

I built two additional mirrors for my blog website brandonrohrer.com. The symmetry of triple modular redundancy appeals to me, especially after hearing that it’s a principle used by NASA and growing up on some triples-themed sci-fi. You can take down one mirror, but you still have two to work with, plenty of breathing room. If you get really unlucky, two will go down, leaving you with the third. Why build one when you can build three at three times the cost?

I tried to build in as much redundancy as possible. I got three domain names, each from a different domain name registrar.

These are running on three different virtual private servers (VPSs).

They all contain the same content, which lives in a repository called blog-website. For additional redundancy, I host this repo on three different git services.
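One convenient way to wire this up (the remote URLs below are placeholders, not the actual repos) is a single git remote with three push URLs, so one `git push` updates all three services at once. Assuming you’re inside a clone of blog-website:

```shell
# Fetch from one service, but push to all three.
# Re-adding the fetch URL as a push URL matters: once any
# explicit push URL is set, the fetch URL is no longer used for pushes.
git remote add origin git@github.com:example/blog-website.git
git remote set-url --add --push origin git@github.com:example/blog-website.git
git remote set-url --add --push origin git@gitlab.com:example/blog-website.git
git remote set-url --add --push origin git@codeberg.org:example/blog-website.git
```

After this, `git remote get-url --push --all origin` lists all three destinations, and `git push origin` sends your commits to each of them in turn.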

One benefit from having three mirrors is that I can treat one as a staging environment. If I want to make a risky change, such as automatically generating new firewall rules to block annoying traffic, I can do it on my least traveled mirror and see how it goes. If something goes horribly wrong and takes down that entire server, then I can put it back together from scratch, all while the other two mirrors stay operational.

Manually updating three mirrors is a little tedious. It’s easy to see why websites with mirrors or other content distribution networks would automate this. However, several of the larger corporate outages we’ve seen recently were caused by precisely these automated deployment mechanisms. Because a single system touches every site, one misconfiguration can take down all of them; they aren’t isolated from each other. I’m not running a business that needs instant updates, so I can afford to roll mine out slowly, by hand, and trade that extra time for extra resilience.
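Even a hand-paced rollout can borrow one idea from automation: update one mirror at a time, and stop the moment anything looks wrong. A minimal sketch (hostnames are hypothetical, and the per-host command—an rsync wrapper, say—is passed in rather than assumed):

```shell
# Roll out to each mirror in sequence. $1 is whatever command
# actually pushes content to a host, e.g. an rsync or ssh wrapper.
# If any host fails, halt so the remaining mirrors stay untouched.
deploy_all() {
  for host in mirror-a.example.com mirror-b.example.com mirror-c.example.com; do
    "$1" "$host" || { echo "stopping rollout at ${host}" >&2; return 1; }
  done
}
```

The point of the early return is the same isolation argument as above: a bad change reaches one mirror, not all three.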

There are two parts of my system that are single points of failure. The first is my content hosting service. All of the images and videos associated with my blog are hosted in a separate repository on GitHub. This keeps the bandwidth to my VPSs low and keeps page load times snappy, images aside. Then I let GitHub pay the bandwidth costs for the larger files and take advantage of its global network. This comes at the cost of some fragility. If for some reason my GitHub account were compromised, I would have to find another content hosting service. (I of course have multiple backups of all the files there and could rebuild.) But for my purposes, it’s good enough for now.

The other single point of failure is me. If I were running a serious operation, there would be at least three administrators with full admin permissions on everything so that if one or two of us were incapacitated, the third could still carry on. In this particular instance, I’m not worried. If I’m in a position where I’m not capable of taking care of the blog, then likely the last thing I'll be worried about is the health of the blog.

Spreading services across companies and across continents is a good way to prevent a single event from taking down your site. It's not a good feeling to be dependent on a single corporation or even national government for your internet real estate. If AWS us-east-1 goes down and takes your site with it, that's a sad day. It's nice to know that your little piece of the web will stay standing through anything short of a global calamity.