
October 3, 2018

Serverless Computing with AWS Part 1: Introduction

What is Serverless Computing?

Wikipedia defines this as:

“Serverless computing is a cloud-computing execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.”

This does not mean that there are no servers involved. It means that the task of planning, provisioning, scaling, and maintaining the servers is left to the cloud provider, which lets you concentrate on business logic.

Should I use it?

Serverless Computing is not for all applications. One important criterion is that it cannot be used where continuous execution of code is required; it is a good fit where a short burst of execution completes a specific task. Only certain use cases see real benefits from this approach, so let's first delve into how serverless computing works and then identify the cases where it benefits solution developers the most. Once we identify those cases, it will be easier to decide whether your project can derive the benefits of serverless computing.

How it works

Serverless computing is offered by various cloud providers. We will use AWS to illustrate examples in this blog series.

AWS offers serverless computing through its “Lambda” service. Units of code are packaged as “Lambda functions”. To get started, you create a Lambda function and upload your code to it. AWS will take care of provisioning a server of appropriate capacity and executing the code on demand. Once the code execution completes, Lambda will deprovision the server if it sees the need to. AWS will charge you for the compute time and any resources that were used during the execution.
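To make this concrete, here is a minimal sketch of what the uploaded code can look like in Python. The handler name and the event field used below are illustrative assumptions; Lambda only requires a function that accepts an event and a context object.

```python
# handler.py – a minimal AWS Lambda function in Python.
# Lambda calls the function you configure as the handler, passing the
# triggering event and a context object with runtime metadata.

import json


def handler(event, context):
    # The shape of `event` depends on the trigger (S3, API Gateway, etc.).
    name = event.get("name", "world")  # illustrative field, not required by Lambda

    # For synchronous invocations, the return value is passed back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

You would zip this file, upload it when creating the function, and point the function’s handler setting at handler.handler (assuming the file is named handler.py).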

When does the Lambda function get executed? For the Lambda code to get executed, it requires a trigger.

Some of the services that can act as triggers are S3, DynamoDB, Kinesis, SNS, SQS, CloudWatch Events, and API Gateway. We will explore triggers in more detail in a later part of this series.
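As a rough illustration of how a trigger surfaces in your code, the sketch below assumes an S3 “object created” trigger. The bucket and key fields are standard parts of the S3 event payload; the processing step itself is just a placeholder.

```python
# Sketch of a Lambda handler wired to an S3 trigger.
# When an object is uploaded, S3 invokes the function with an event
# containing one or more records describing the affected objects.

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Placeholder: fetch and process the object here, e.g. with boto3.
        print(f"New object: s3://{bucket}/{key}")

    return {"processed": len(records)}
```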

Here are the pros and cons of using serverless architecture:

Pros

  • Scalability – AWS automatically scales the number of Lambda instances to match the load, without any intervention on your part. In certain cases, scalability is limited by the trigger that is used; this will be discussed in a later part of this series.
  • Operations – There is no operations team or system administrator required to manage infrastructure.
  • Cost – Pricing is based only on the compute time and other resources consumed. Not needing an operations team brings the cost down further. While this is a significant saving for applications with minimal CPU requirements, costs can escalate quickly for CPU-intensive applications. There are strategies to minimize and control costs, which we will discuss in a different blog post; a rough back-of-the-envelope estimate follows this list.
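To make the cost point concrete, here is a minimal sketch of the arithmetic. The per-request and per-GB-second rates are the published prices at the time of writing, and the traffic figures are purely illustrative assumptions; check the current AWS pricing page before relying on them.

```python
# Back-of-the-envelope Lambda cost estimate.
# Rates are the published prices at the time of writing (verify against the
# current AWS Lambda pricing page); traffic figures are made-up assumptions.

invocations_per_month = 3_000_000      # assumption: 3M requests per month
avg_duration_seconds = 0.2             # assumption: 200 ms per invocation
memory_gb = 128 / 1024                 # a 128 MB function

price_per_request = 0.20 / 1_000_000   # $0.20 per million requests
price_per_gb_second = 0.00001667       # dollars per GB-second

request_cost = invocations_per_month * price_per_request
compute_cost = (invocations_per_month * avg_duration_seconds
                * memory_gb * price_per_gb_second)

print(f"Requests: ${request_cost:.2f}, Compute: ${compute_cost:.2f}")
# Roughly $0.60 for requests and $1.25 for compute, before the monthly free tier.
```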

Cons

  • Stateless – After every execution of a Lambda, the server may get deprovisioned, and there is no built-in way to maintain state across multiple Lambda invocations. You are therefore limited to running stateless applications. However, some tricks and workarounds can overcome this problem (see the sketch after this list); they will be discussed in a later part of this series.
  • No persistent local storage – You cannot rely on local storage or disk space for temporary files (any scratch space a Lambda provides is small and ephemeral). You have to use an external object store such as S3.
  • Execution Duration – A single execution is limited to a maximum of 5 minutes, so any task that may take longer than this is not suited for a Lambda.
  • Startup Latency/Performance – Since the Lambda server may be deprovisioned after execution, the next time the Lambda is triggered AWS has to allocate, provision, and spin up a server before it can start executing the code. This typically happens when there is a long gap between two consecutive Lambda invocations. This startup latency can hurt the performance of real-time tasks.
  • Testing – Run-time debugging is not possible with Lambda; you are limited to writing to logs and inspecting them. Also, certain functionality can be tested only after deploying the code to Lambda, so frequent deployments during development may slow down the development cycle.
  • Monitoring – Some performance metrics and resource usage can be difficult to monitor, since traditional tools like profilers cannot be used. You are limited to the few metrics that AWS exposes, such as total memory used.
  • Unsuitable for heavy workloads – There are certain applications where it may be simpler and cheaper to have multiple servers running continuous heavy workloads.
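One common workaround for the statelessness mentioned above is to keep state in an external store such as DynamoDB. The sketch below assumes a hypothetical table named visit-counts with a string partition key id; it illustrates the pattern rather than a recommended design.

```python
# Sketch: persisting state across Lambda invocations in DynamoDB.
# The table name "visit-counts" and its key schema are assumptions made
# for this example; the Lambda function itself remains stateless.

import boto3

# Created once per container; reused while the container stays warm.
table = boto3.resource("dynamodb").Table("visit-counts")


def handler(event, context):
    # Atomically increment a counter keyed by a caller-supplied id.
    response = table.update_item(
        Key={"id": event.get("id", "default")},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"visits": int(response["Attributes"]["visits"])}
```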

So, Should I use it?

Looking at the pros and cons, it feels like the cons outweigh the pros, right? But not necessarily. If the cons are not applicable to your application, or if you find a workaround for them, then it makes sense to go with Lambda.

Here is a cheat sheet:

  1. My application has small but frequent requests.
  2. My application does not perform CPU intensive tasks for a long time.
  3. Payloads/Requests used by my application (Communication between client and server) can be bunched together.
  4. My application can be modeled as a collection of stateless API services.
  5. I don’t have any plans to switch cloud providers in the near term.

If you answered yes to a majority or all of the above, serverless computing may be good for you. Of course, parts of all applications can benefit in some way by using serverless computing, even if it is to take care of periodic housekeeping tasks.

Further reading

https://en.wikipedia.org/wiki/Serverless_computing

https://aws.amazon.com/lambda/#Use_cases