Can you reduce costs by going serverless?

Posted by Matthew Bradley


In the constantly evolving world of IT, systems and processes are becoming ever more streamlined. Serverless computing takes this concept to the extreme, allowing you to run websites and applications at a fraction of the cost!

What does serverless actually mean? Serverless computing is a way of running functions or applications without any of the overhead of running or maintaining a server. This is good news for developers who only care about writing code and don’t want to worry about patching and maintaining a server Operating System. Rather than paying to keep an entire server running around the clock, which can be expensive (especially with a Windows license), you only pay for the time your code actually executes. You just write your code, upload it and it runs. Simple.
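To make that concrete, here is a rough sketch of what a small serverless function can look like on Azure Functions using the Python programming model (the decorator-based style shown is the newer model, the exact layout depends on your runtime version, and the “hello” route is purely illustrative):

```python
import azure.functions as func

# One HTTP-triggered function: you are only billed while it is actually executing.
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Read an optional query-string parameter and respond.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```

Deploy that (plus a requirements.txt) to a Function App on the consumption plan and the platform looks after the servers, patching and scaling for you.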

Is serverless suitable for all sites/applications? No. Absolutely not. There are definitely some downsides to serverless that you would need to consider. For example, your site/application may be shut down when it isn’t being used, so the first request after a quiet spell can be slow (a “cold start”), and resources such as CPU, memory and execution time are limited, which may cause performance issues. However, an excellent use case is sites with a small footprint, or functions that only need to run occasionally.

My recommendation for serverless. Serverless applications can save you a lot of money, but please make sure the app is suitable for a serverless platform first. You can do this by reviewing the application and performing load testing before going live, or by speaking to a ClearCloud Architect. If you want any advice on using serverless, or would like a platform to test this on, then please get in touch with us for an informal chat and guidance.
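As a very rough illustration of the kind of pre-launch check I mean, a short script like the one below fires a burst of requests at an endpoint (the URL is a placeholder) and reports the spread of response times, which is a quick way to spot cold starts. It is no substitute for proper load-testing tooling, just a first sanity check:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # pip install requests

URL = "https://example-function.azurewebsites.net/api/hello"  # placeholder endpoint

def timed_request(_):
    start = time.perf_counter()
    resp = requests.get(URL, timeout=30)
    return resp.status_code, time.perf_counter() - start

# Fire 50 requests, 10 at a time, and time each one.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(timed_request, range(50)))

latencies = sorted(elapsed for _, elapsed in results)
ok = sum(1 for status, _ in results if status == 200)
print(f"successful: {ok}/{len(results)}")
print(f"fastest: {latencies[0]:.2f}s  median: {latencies[len(latencies) // 2]:.2f}s  slowest: {latencies[-1]:.2f}s")
```

If the slowest responses are dominated by the first few requests, you are probably looking at cold starts rather than a capacity problem.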

What if my app isn’t suitable for serverless? Is a full server the only other option? Luckily, there are more options. If your app isn’t suitable for serverless then maybe ‘containers’ are for you. They sit in the middle ground between running a full server and running a serverless application.

A container is a lightweight, standalone package of software that includes everything it needs to run: code, runtime, libraries – everything. Containers allow you to scale out instances extremely quickly (within seconds) and can be a very powerful tool in your belt. Whilst containers are versatile, they do run on servers (nodes), so there is still some management involved if you manage the nodes yourself.
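As a small illustration of how self-contained and quick to start containers are, here is a sketch that uses Docker’s official Python SDK to spin up three identical copies of a public image on a local Docker host (the image and the container names are just examples):

```python
import docker  # pip install docker; talks to the local Docker daemon

client = docker.from_env()

# Start three identical, self-contained copies of the same image side by side.
# Each container carries its own runtime and libraries, so they start in seconds.
replicas = [
    client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
    for i in range(3)
]
for c in replicas:
    print(c.name, c.status)

# Tidy up the demo containers afterwards.
for c in replicas:
    c.remove(force=True)
```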

A recent Gartner study predicts that “By 2020, more than 50% of enterprises will run mission-critical, containerised cloud-native applications in production, up from less than 5% today”.

Whilst I’m not entirely convinced by that figure, no-one can deny that containerised applications are, understandably, becoming more and more popular, and they will only become more common over the coming years.

Is moving to containers an easy move? Not initially. Running containers requires a fair amount of knowledge and understanding, which may seem daunting at the start, especially when you look at introducing orchestration for scalable enterprise applications. But they are definitely worth thinking about. Although you can run and manage containers yourself, Azure has a couple of managed options to make life easier for those without deep container experience: Azure Kubernetes Service (AKS), a managed Kubernetes cluster for applications that require scaling and orchestration, and Azure Container Instances (ACI) (which went Generally Available about a month ago) for smaller setups.
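For a flavour of what orchestration buys you on AKS, here is an illustrative sketch using the official Kubernetes Python client to scale a hypothetical deployment called “web” to five replicas; the cluster then starts or stops containers to match:

```python
from kubernetes import client, config  # pip install kubernetes

# Assumes your kubeconfig already points at the AKS cluster
# (for example after fetching credentials with the Azure CLI).
config.load_kube_config()
apps = client.AppsV1Api()

# Ask the orchestrator for five copies of the hypothetical "web" deployment.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("Requested 5 replicas for deployment 'web'")
```

With ACI there is no cluster to manage at all: you ask for a container with a given image, CPU and memory, and pay per second while it runs.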

My recommendations for containers. As mentioned earlier, containers are a very powerful tool and there are many options available; however, it can take time to learn how they work. Start by using one of the Azure managed container offerings to test and see how you get on. If you want any advice on using containers, or would like a platform to test this on, then please get in touch with us for an informal chat and guidance.
