It’s the things you don’t know you don’t know…

A few weeks ago I started a project to set up a new home for my various web projects in the AWS cloud. All these sites use WordPress, and I wanted to make sure I built them on a robust and scalable architecture.

The tricky thing with WordPress is that it was designed around a traditional web server model, where one physical (or virtual) machine serves one instance of the code. When you update themes or plugins, or upload content, you change the data on that instance’s locally attached storage. I wanted to be able to scale horizontally, and WordPress’s design makes that difficult.

Horizontal scaling means adding server instances to spread the load, whereas vertical scaling means beefing up the one machine with more CPU or memory. Horizontal scaling is generally preferable: not only is it more flexible – you can scale out during periods of peak demand and scale back in later – but it also gives you redundancy. If I have three machines serving my web sites, two of them can die and I’ll still be online, albeit perhaps in a degraded state.

To achieve this horizontal scaling in AWS, we need a way to have many machines share the same set of code. Amazon’s Elastic File System (EFS) allows us to do that by providing an NFSv4-compatible file system that can be mounted simultaneously on many EC2 instances. All the instances can read and write the same filesystem – problem solved.
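Under the hood, an EFS mount is just an NFSv4 mount. Here’s a minimal sketch of what that looks like, wrapped in Python for illustration – the filesystem ID, region and mount point are placeholders, and it needs to run as root:

```python
# Mount an EFS filesystem over NFSv4 (the options are AWS's recommended set).
# Hypothetical filesystem ID/region and mount point - substitute your own.
import subprocess

EFS_DNS = "fs-12345678.efs.ap-southeast-2.amazonaws.com"  # placeholder
MOUNT_POINT = "/mnt/efs"

subprocess.run(
    [
        "mount", "-t", "nfs4",
        "-o", "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2",
        f"{EFS_DNS}:/", MOUNT_POINT,
    ],
    check=True,  # raise CalledProcessError if the mount fails
)
```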

So, I built a machine image for my WordPress solution that, on boot, would update itself and mount the EFS volume, ready to serve the shared code within. I set up an Auto Scaling group and an Application Load Balancer to distribute traffic to the instances. I moved my domains to Route 53 and bound them via alias records to the load balancer’s AWS resource name. I used AWS Certificate Manager to create (free!) SSL certificates that are automatically bound to the load balancers. I backed this WordPress Taj Mahal with an Aurora RDS instance and enabled CloudFront as a CDN.

It was all too easy, and the result was exactly what I wanted. I sat back feeling smug.

Of course, that didn’t last long. A few days ago I got a Pingdom alert saying a site was down. Then another, and another…what was going on?

I went online to check… “504 Gateway Timeout”. Oh crap. Hm.

So what was failing? EC2 instances? Load balancer? Some weird WordPress problem? I checked them all and was none the wiser. Unfortunately for me, this happened at 8am on a workday, so I had to leave everything in a broken state and come back to it later.

When I logged in later, I noticed that reads from the EFS volume mounted on the EC2 instances were taking a long time – several seconds in some cases. Hm, dodgy EFS? Nope, all the data seemed fine and there was nothing in the EFS console that looked like an alarm.

So, I tried that classic IT trick of turning it on and off again. I rebooted EC2 instances, I unmounted and remounted the volumes…nothing helped.

Having exhausted all the other possibilities, I concluded that the EFS performance issue had to be the cause of all my problems, so I posted an issue with AWS support. The next morning, I had my answer.

“You’ve run out of EFS Burst Credits.”

Huh?

So, I guess I hadn’t been paying as much attention as I should have, and this detail had passed me by… EFS volumes accrue “Burst Credits” at a rate based on the size of the data stored on them. And it seems that when you run out of burst credits, you can expect your EFS volume’s performance to suck badly enough that your whole system can fail.

The CloudWatch chart told the story: my EFS burst credit level had been dropping steadily over the preceding two weeks, all the way down to zero.

As my EFS volume is under a GB in size, it generates very little in the way of burst credits. You can check out the details here.
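The metric in question is called BurstCreditBalance, and it lives in CloudWatch. If you’d rather pull the numbers yourself than hunt for the chart, here’s a minimal boto3 sketch (the filesystem ID is a placeholder):

```python
# Fetch two weeks of EFS BurstCreditBalance datapoints from CloudWatch.
from datetime import datetime, timedelta

import boto3

cw = boto3.client("cloudwatch")

resp = cw.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-12345678"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,              # one datapoint per hour
    Statistics=["Minimum"],
)

# Credits are reported in bytes; zero means you're down to baseline throughput.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```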

In my case, it was the ONLY metric that mattered. And boy, did it cause me some grief.

So, my new plan is to keep using EFS, but to use rsync, on boot, to clone the data from the EFS mount to the locally attached EBS volume, and to use that local copy as the web server’s document root. Since rsync doesn’t watch filesystems for changes by itself, I’ll run it on a schedule (or drive it with an inotify-based wrapper like lsyncd) to keep the local copy and the EFS master in sync.
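Here’s a rough sketch of that plan in Python – the paths are placeholders, and it only shows the EFS-to-local direction (pushing local changes back to EFS needs more care, to stop the two copies clobbering each other):

```python
# Clone the shared WordPress code from EFS to local EBS at boot, then
# keep re-syncing on a schedule. In practice a cron job or systemd timer
# would drive this rather than a sleep loop.
import subprocess
import time

EFS_ROOT = "/mnt/efs/wordpress/"  # shared master copy (placeholder path)
LOCAL_ROOT = "/var/www/html/"     # web server document root (placeholder)

def sync(src: str, dst: str) -> None:
    # -a preserves permissions and timestamps; --delete removes files
    # that have disappeared from the source.
    subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)

sync(EFS_ROOT, LOCAL_ROOT)  # initial clone at boot

while True:
    time.sleep(60)  # re-sync every minute
    sync(EFS_ROOT, LOCAL_ROOT)
```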

Note to the folks at AWS – put the burst credit metric on the EFS console! Don’t make schmucks like me find out the hard way!
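In the meantime, you can build your own early warning with a CloudWatch alarm on the BurstCreditBalance metric. A sketch – the threshold, filesystem ID and SNS topic are all placeholders to adjust for your own setup:

```python
# Alarm when EFS burst credits drop below ~1 TB (credits are in bytes),
# notifying a (hypothetical) SNS topic before performance falls off a cliff.
import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="efs-burst-credits-low",
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-12345678"}],  # placeholder
    Statistic="Minimum",
    Period=3600,                      # evaluate hourly
    EvaluationPeriods=1,
    Threshold=1_000_000_000_000,      # ~1 TB of credits remaining
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:ap-southeast-2:123456789012:ops-alerts"],  # placeholder
)
```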

What a time to be alive!

When I was at high school in the 80s, computers were about the most boring things I could imagine. They couldn’t do anything cool, unless your idea of cool was maths, and to program them was like talking very slowly to a barely literate person with an IQ of 50.

In the 90s, things changed.

By the 90s, computers had become capable of doing things for you that you couldn’t do better by hand. In the 90s, they started connecting to one another and becoming part of the Internet we all take for granted today. In the 90s, computers started waiting for us, instead of the other way around.

Let’s do stuff!

I got my first computer in the 90s, and immediately started a business doing digital imaging using Photoshop. Kodak at this point was still sleeping peacefully, figuring all this digital stuff was a fad that would be over soon.

The world wide web was hitting the news in the 90s. It took a while for people to figure out what it was, but when they did, the web took off exponentially. Even the dot-com meltdown in 2001 couldn’t really slow it down.

I started building for the web in the mid 90s and have been doing it ever since. In that time I have seen many incredible advances, and some monumental follies*, from vector-animation to 3D, streaming audio and video, WebSockets and WebGL through to initiatives like WebAssembly. The web just keeps getting stronger and more capable. Importantly, it has also stayed open, despite the best attempts by some companies to subvert it.

But even 10 years ago, few would have foreseen how different computing was going to be today.

Clouds appear

In the last 10 years, computing has gone from something we do at a desk or in server rooms to something we do everywhere, all the time. We are all carrying around computers in our pockets. We are seeing tiny cheap computers being built into every corner of our environments – from the smart TV to the wearable activity tracker and the smart watch, to the lightbulbs in your house and the locks on your doors. And it’s all connected via the web.

This is why, for me, the two most exciting technology trends today are Cloud computing and the Internet of Things.

Cloud computing got a lot of hype in the early years and some of it was just silly. The cloud’s infrastructure isn’t that different to what preceded it – it’s still run on servers in data centres, just like we did things in the past. What is different is how commoditised computing resources are changing the nature of computing itself. Servers are no longer purpose-built boxes in a DC, configured to do one thing. Now they are simply a source of computing resources that can be abstracted away by higher-level services, which deliver the outcome we want without all the configuration and systems administration we used to have to do to get it.

So while virtual machines that scale are nice, they are far from the most important thing that cloud computing has unlocked. By removing the need for me to manage my own servers, cloud computing has freed me to focus on the value I want my application to provide. Almost inevitably, this has led to the concept of serverless architectures, where my application is only the code I need and nothing more. The cloud replaces the server stack I would otherwise spend my time maintaining.
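To make that concrete, here’s a minimal sketch of what “only the code I need” can look like – a single AWS Lambda-style handler, with the rest of the stack supplied by the platform (the API Gateway proxy event shape and the “name” parameter are assumptions):

```python
# handler.py - the entire deployable artifact is this one function.
import json

def handler(event, context):
    # Assumes an API Gateway proxy event; 'name' is a hypothetical parameter.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```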

New ways to think of software

This kind of thinking is opening up new ways to build applications. An example of this is AWS Step Functions, where an entire application can be pulled together via a visual workflow. Likewise, tools like Amazon Simple Workflow Service offer ways to orchestrate your code in a serverless environment, and then to build it out and connect it to systems hosted elsewhere – even to processes that exist in the non-virtual world. Tools like these are facilitating an increased connectedness, which in turn opens up new ideas as to what a software application is, and what it could be.
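For a flavour of what that orchestration looks like, here’s a sketch of a tiny two-step Step Functions state machine defined and created with boto3 – the Lambda function and IAM role ARNs are placeholders:

```python
# Define and create a minimal Step Functions state machine: validate an
# order, then charge for it. All ARNs below are hypothetical.
import json

import boto3

definition = {
    "Comment": "Two-step order pipeline",
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "Charge",
        },
        "Charge": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)
```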

And then, humming at the edges of all that new cloud-enabled capability are the huge numbers of IoT devices that are popping up daily in our lives.

Devices everywhere

Before we had smartphones, who would have thought everyone carrying around a GPS receiver would be useful? Now we can’t live without them. This is just one familiar example of the IoT world that is heading our way, as we measure, monitor and report on more and more metrics we encounter in our everyday lives. Heart rate, steps taken, how much electricity we’re consuming, room temperature, environmental noise, pollution levels, security camera footage…it’s all being picked up and turned into knowledge we can use to improve our lives.

In industry, condition monitoring is a huge growth area, again driven in large part by low-cost computer hardware. You can now put a $100 vibration monitor on a truck, and the data it collects can let you predict when the truck will need servicing – saving your company the cost of unscheduled downtime. The economics are becoming a no-brainer as computing hardware gets cheaper and smaller and wireless networking becomes increasingly ubiquitous.

One interesting result of the rise of IoT is how the cutting edge of computing has come full circle. In a world where servers are now being commoditised and abstracted away, there is renewed interest in physical computing. People are building their own devices, and plugging them into the cloud. They are getting reacquainted with low-level knowledge, like how serial communications work. They are learning how to gather data from sensors over GPIO pins on a circuit board. It’s an interesting development and one that bodes well for humanity, I think. It gets us back in touch with the magic of what, as a species, we’ve achieved over the last century.
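As a small example of that kind of physical computing, here’s a sketch of polling a digital sensor wired to a GPIO pin on a Raspberry Pi – the RPi.GPIO library, the pin number and the sensor itself are all assumptions:

```python
# Poll a digital sensor (e.g. a vibration switch) on a GPIO pin at 10 Hz.
import time

import RPi.GPIO as GPIO

SENSOR_PIN = 17  # BCM numbering; hypothetical wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(SENSOR_PIN):  # the sensor has pulled the pin high
            print(time.time(), "event detected")
        time.sleep(0.1)
finally:
    GPIO.cleanup()  # release the pins on exit
```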


* You can put the proprietary Microsoft Network and Rupert Murdoch’s purchase of a dying MySpace in that column.