Serverless - Is It All It’s Made Out to Be?
Well, that’s quite the question.
In recent weeks, there have been a lot of differing reactions in the community to an article that Amazon Prime Video published on their tech blog at the end of March. The article is titled “Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%”. Sounds interesting, right? It certainly turned out to be just that. I’ll let you digest the article in full in your own time, but it can be summarised as follows.
The Prime Video team had built a distributed system for inspecting the audio/video quality of streams, utilising serverless technologies such as AWS Lambda and AWS Step Functions. The way in which these technologies were being used meant that the team not only ran into account limits at around 5% of the expected scale, but also found the system prohibitively expensive to run. As a result of these findings, the team opted to re-architect the system as a monolith running as an ECS task on a cluster of EC2 instances. The outcome was the ability to operate at greater scale and an infrastructure cost reduction of 90%.
Now, as an advocate of serverless technologies, I’ll admit that the optics of that story aren’t great at first glance. As is usual within technical communities, arguments from both lovers and haters of serverless started to dominate social media. Shouts of “If Amazon isn’t using serverless then why should I?” came from one corner of the ring, with some more pragmatic views echoing back from the other.
I lean towards the side viewing this through a more pragmatic lens. Distilling the article down to its core: the Prime Video team built a system of microservices using serverless technologies, their requirements changed significantly, and they subsequently had to re-architect. This isn’t an argument of microservices vs monolith, or serverless vs EC2 instances. It’s a case of using the right tool for the job. If you commuted to work on a bicycle and then got a new job 100 miles away, you’d start commuting by car or train. It doesn’t mean that a bicycle is wrong and you should never use one, just that it’s not the right tool for that particular task.
Arguments have been made based on this article that serverless is not suitable for scale. That’s absolutely not true. We’re talking about one part of a large system here, with very specific requirements, where serverless wasn’t the right fit for the task. In other parts of Amazon, such as Amazon.com, serverless technologies are used extensively at massive scale. Amazon SQS and Amazon DynamoDB are both well-established services in the serverless ecosystem, and on Prime Day in 2022 they peaked at processing 70.5 million messages per second and 105.2 million requests per second respectively (source). If that doesn’t demonstrate an ability to operate at scale, I’m not sure what does.
Okay. Enough about how Amazon does or doesn’t use serverless. Let’s go back to basics and talk about what serverless actually means, as well as how and where you might want to use it in your own architectures.
What does serverless even mean?
Much like “What is the meaning of life?”, that question will get you 10 different answers from 10 different individuals.
The one thing we can all agree on is that it can’t be taken in its literal sense. Reading it as ‘server-less’ and thinking of it as ‘no servers’ will leave you disappointed, at least in the ‘no physical servers’ sense. There are still servers; the ‘server-less’ reading refers to the way in which you interact with the platform: the concept of a physical server running your code is abstracted away from you.
From my perspective, there are a number of key characteristics that a serverless service should exhibit:
- Costs are tightly coupled to granular usage (e.g. a request that takes X seconds to process costs $Y; there’s a short sketch of this after the list)
- Easily scales to zero and can handle bursts in workload
- You don’t have to think about managing servers (or VMs)
- The underlying operating system is abstracted away
- The service has the responsibility of determining what hardware to use based on your resource requirements
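To make that first characteristic concrete, here’s a minimal sketch of how usage-coupled pricing plays out, using Lambda’s pay-per-request plus per-GB-second model. The prices are illustrative only (roughly us-east-1 at the time of writing, ignoring the free tier) and vary by region; check the current AWS pricing page before relying on them.

```python
# Minimal sketch of a usage-coupled pricing model, based on Lambda's
# pay-per-request plus per-GB-second charges. Figures are illustrative;
# always check the current AWS pricing page.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # ~$0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667     # ~$ per GB-second of compute

def monthly_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    """Cost scales directly with the work actually done; zero usage costs zero."""
    request_cost = requests * PRICE_PER_REQUEST
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# One million 200 ms requests at 512 MB in a month:
print(f"${monthly_cost(1_000_000, 0.2, 0.5):.2f}")  # ≈ $1.87
```

The point isn’t the exact figure; it’s that the bill tracks what you actually use, request by request, all the way down to zero.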
At the time of writing, AWS considers 16 services to be serverless. Whilst I would wholeheartedly agree with most of these, there are some that I believe to be pushing the boundaries of what serverless stands for.
One such service is Amazon OpenSearch Serverless. This was announced at re:Invent 2022 and was met with a reasonable amount of criticism, most of it centred on the minimum resource requirement, which results in roughly $700/month of usage even for an idle workload. Whilst it does abstract away EC2 instance types into “OpenSearch Compute Units”, to me it doesn’t fit the spirit of serverless: it can’t scale to zero, and costs aren’t tightly coupled to usage.
I’ve talked a bit about what I think serverless should mean, but how about when it should be used?
When should serverless be used?
Whilst I am an advocate of serverless, I am by no means blinded by the shine and sparkle that new technologies often bring. That being said, I do approach architectures with a serverless-first mindset. This doesn’t mean I would always recommend a serverless service; it means that I’ll always evaluate whether it’s the right choice and move on to other technologies if it’s not.
Reducing maintenance overhead
A key indicator that you should look to serverless technologies is a desire to reduce the overhead of maintaining your application as much as possible. A defining feature of serverless platforms is that they strive to enable developers to focus on business logic rather than undifferentiated heavy lifting.
For example, let’s imagine you have a requirement to run some Python code once per day at 8am. Utilising serverless platforms, all that’s required is: creating a Lambda function, choosing a version of Python, uploading the source code and then using EventBridge to schedule the execution of this function. In comparison, with an EC2 instance you’d need to: launch an EC2 instance with an up-to-date OS, install OS patches, install the version of Python needed, upload the code and then set up a cron job to run the Python script. Then there’s the need to keep the EC2 instance software up to date. Whilst as a one-off event this might not equate to a huge time saving, the cumulative time saved over months or years is not something to ignore.
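As a rough sketch of how little wiring that involves, the snippet below schedules an existing Lambda function with EventBridge via boto3. The function and rule names are hypothetical, and it assumes the function already exists and your credentials have the relevant EventBridge and Lambda permissions.

```python
import boto3

# Hypothetical names; assumes the Lambda function already exists.
FUNCTION_NAME = "daily-python-job"
RULE_NAME = "daily-python-job-8am"

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Create (or update) a rule that fires at 08:00 every day.
# EventBridge cron expressions are evaluated in UTC.
rule_arn = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(0 8 * * ? *)",
    State="ENABLED",
)["RuleArn"]

# Allow EventBridge to invoke the function...
function_arn = lambda_client.get_function(FunctionName=FUNCTION_NAME)[
    "Configuration"
]["FunctionArn"]
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="allow-eventbridge-daily",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)

# ...and point the rule at the function.
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "1", "Arn": function_arn}])
```

That’s essentially the whole deployment story beyond uploading the function code itself; there’s no instance to patch afterwards.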
A fully managed platform also brings other benefits. When it comes to ongoing patching of the underlying software that’s responsible for running your application, AWS takes care of all of that. New vulnerabilities published that affect a library used by the underlying operating system? Not your concern. All of this reduces the burden on your teams of keeping your application in tip-top shape.
Unpredictable workloads
A benefit that is seen as synonymous with serverless is the ability to handle peaky and growing workloads. This association certainly provided kindling for the fire that emerged from the Amazon Prime Video article. Let’s delve a bit deeper into why this connection is made.
Often, AWS Lambda is spoken about in the same breath as serverless. Lambda was launched with the promise of being a service that’s “designed to scale to the size of your workload, whether that’s one request a month or 10,000 a second” (source). This is absolutely true; there’s no marketing exaggeration here. Lambda functions can scale to tens of thousands of concurrent executions, which means that if each execution takes less than a second, six-figure invocation rates per second are easily achievable.
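To put rough numbers behind that claim (a back-of-the-envelope sketch, not a quota for any particular account): sustained throughput is approximately concurrency divided by average execution duration.

```python
def sustained_throughput(concurrency: int, avg_duration_s: float) -> float:
    """Approximate sustained invocations/second.

    Each concurrent execution slot completes ~1/avg_duration_s
    invocations per second, so throughput scales with concurrency.
    """
    return concurrency / avg_duration_s

# Illustrative: 20,000 concurrent executions, 100 ms per invocation.
print(sustained_throughput(20_000, 0.1))  # 200000.0 invocations/second
```

Real-world throughput will also depend on burst concurrency limits and how quickly Lambda can scale up, as well as the downstream bottlenecks mentioned below.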
It’s worth keeping in mind that whilst very high throughput is achievable with Lambda functions, they’ll likely be just one part of a wider system. As always, evaluate upstream and downstream dependencies that may introduce bottlenecks (for example, a database).
AWS Lambda functions are just one example of how serverless platforms can automatically scale to handle expected and unexpected peaks in workloads. Note that this doesn’t equate to handling unlimited growth; there are still service quotas to keep in mind. If your requirement exceeds the service quotas that Lambda has, that’s okay. It’s just not the right tool for the job.
Anyone that tells you that they can design a system today that’ll grow to infinite scale and fit any future use-case is being less than truthful.
Cost optimisation
In today’s challenging financial climate, many organisations are actively seeking cost optimisation opportunities within their cloud architectures. Adopting serverless for cost reasons is an interesting thread to unravel. Quite often, comparing the cost of operating serverless at scale with ‘traditional’ approaches such as running EC2 instances brings a surprise: the typical “serverless is cheaper” argument doesn’t hold for raw infrastructure costs. Whilst it sometimes does, it’s important to consider cost through a wider lens; this is often referred to as Total Cost of Ownership (TCO). TCO includes non-infrastructure costs such as the person-hours required to operate and maintain the application. Whilst infrastructure costs may be higher, the lower maintenance overhead that comes with a serverless architecture can make the TCO more favourable than that of traditional architectures.
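As a deliberately simplified sketch, with entirely hypothetical figures, a TCO comparison might look something like this:

```python
def monthly_tco(infra_cost: float, ops_hours: float, hourly_rate: float) -> float:
    """Total cost of ownership: infrastructure plus the people time to run it."""
    return infra_cost + ops_hours * hourly_rate

# Entirely hypothetical figures for the same workload:
serverless = monthly_tco(infra_cost=2_000, ops_hours=5, hourly_rate=80)
ec2 = monthly_tco(infra_cost=1_200, ops_hours=40, hourly_rate=80)
print(f"serverless: ${serverless:,.0f}/month vs EC2: ${ec2:,.0f}/month")
# serverless: $2,400/month vs EC2: $4,400/month: higher infrastructure
# cost, but lower TCO once patching, scaling and on-call time are counted.
```

The crossover point differs for every workload, which is exactly why the raw infrastructure bill alone is a misleading comparison.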
Peace of mind
It’s also important to consider the qualitative benefits of serverless. There’s a certain peace of mind that comes from knowing that AWS will handle the majority of the potential hardware and OS-level errors that your system could experience. Knowing that if you suddenly experience a 100% increase in traffic, the infrastructure will scale to handle it is certainly a nice place to be.
Summary
Serverless absolutely has a place in modern architectures. There’s nothing to say you can’t run a monolithic application on a serverless platform (e.g. ECS Fargate). Typically, you would see it used as part of the adoption of service-oriented architecture or microservices; as always, it’s about using the right tool for the job.
In the context of application modernisation, there is almost always opportunity to utilise serverless technologies in some form. Whether it’s refactoring an application to microservices utilising AWS Lambda or replatforming a Docker container to run on ECS Fargate, there’s something for everyone.
What next?
My colleague, Jim Wood, wrote a great piece back in January on the topic of modernising monolithic applications to microservices using AWS App Mesh. If you’re interested in how this works, have a read here.
Ubertas Consulting specialises in migrations to AWS and application modernisation. As an AWS Advanced Tier Partner with numerous competencies, we’re a great choice if you’re looking for assistance on your journey to making the most of AWS.
Just one example of where Ubertas Consulting have helped modernise an AWS architecture utilising serverless is with Filmstro. Get your hands on the details in the case study.
If you’d like an expert eye to review your existing architecture and get hands on support with bringing it up to AWS best practices, get in touch to arrange a Well-Architected Framework Review. These are cost neutral and enable you to leverage the experience of one of our Solution Architects over a number of days.
Are you a bit earlier on your journey and considering a modernisation or migration into AWS? Let’s chat and allow us to show you where we can best help you succeed.
Alex Kearns
Principal Solutions Architect, Ubertas Consulting