Serverless Applications Architecture on AWS

14 February, 2022 |  Vladimir Djurovic 

Serverless computing has seen tremendous growth in recent years. According to this DataDog study, adoption of the serverless application model keeps increasing, with 25% of cloud users deploying and running serverless applications.

With this increased momentum, new challenges arise. Widely adopted practices used to develop old-school, server-based applications cannot be directly applied to serverless applications. Instead, new approaches specific to serverless application architecture have emerged.

In this post, I will showcase some of the common approaches to architecting serverless applications on AWS. Specifically, we will focus on web applications with separate front end and back end. I chose AWS since it is the leading provider of the Function as a Service (FaaS) paradigm, with its Lambda service.

Table Of Contents

What are the pros and cons of serverless architecture?

Serverless architecture is an approach to designing applications that takes advantage of serverless computing. This means we no longer need to worry about servers and infrastructure, since all of that is handled by a cloud provider.

As with any engineering approach, serverless architecture has its benefits and drawbacks. In this section, we will list some of the prominent good and bad parts of serverless application architecture.

Benefits of serverless architecture

There are several benefits of serverless architecture over the classic server-based one.

No cost when not running

Serverless resources are a prime example of pay-as-you-go pricing. You pay only for the resources consumed while your application is running. If there is no usage, your cost is essentially zero.

In contrast, with cloud servers, you pay for the time your server runs. Even if it is idle, with no usage at all, you pay the same as for full load.

This is a great advantage for applications whose usage patterns are unpredictable or spike at certain intervals.

No infrastructure of any kind

Even if you outsource your infrastructure to the cloud, you still have virtual infrastructure to maintain: servers, virtual networks, firewalls, storage, databases and so on.

With the serverless approach, your infrastructure is totally abstracted away, and the cloud provider takes care of it. It's similar to running on a PaaS.

Complete elasticity

Although cloud servers give you a large degree of elasticity, there are still gaps in that approach. Even if you scale up and down based on demand, you still need to over-provision resources in order to have time to scale before the servers get overloaded.

In addition, you can't scale the servers down to zero when there is no activity. You still need to have at least one server running, even if it's completely idle.

With serverless, all scaling is handled automatically by the cloud provider.

Less complexity, more productivity

When working with serverless architecture, developers focus on pieces of code that perform a single function, or a set of closely related actions. This places far less cognitive burden on developers, since smaller code bases are easier to navigate and maintain.

In addition, since the entire infrastructure is abstracted away, developers can focus on business logic instead of worrying about deploying and maintaining servers.

Drawbacks of serverless architecture

After the pros, let’s take a look at some cons of the serverless approach.

Cold start

This is one of the biggest cons for applications that require minimal response times. It takes time for a serverless resource to wake from its dormant state and kick into gear.

In reality, this start time is measured in milliseconds, but it can still be a big problem when your application must perform within a certain SLA.

There are multiple ways to mitigate this issue, most of which boil down to keeping the resources “warm”, i.e. invoking them periodically with minimal load.
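One common way to implement the warming technique is to invoke the function on a schedule (for example, from an EventBridge rule) and have the handler short-circuit when it detects the warm-up ping. The sketch below assumes a custom `warmup` marker in the event payload; that marker is a convention chosen here, not an AWS feature.

```python
import json

def handler(event, context):
    """AWS Lambda entry point.

    A scheduled rule invokes the function with a small payload such as
    {"warmup": true}; we return immediately so the execution environment
    stays warm without running any business logic.
    """
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warmed"}

    # Normal request processing goes here (placeholder response).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello"}),
    }
```

The warm-up branch costs only a few milliseconds of billed time per scheduled invocation, which is usually far cheaper than the latency cost of cold starts.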

Debugging is difficult

When developers work on code, they usually test and debug it by running it on their own machine. This makes debugging easy, since they can use development tools to set breakpoints and watch the state of the application as it runs.

With serverless, this is impossible, because the code does not run on the developer’s machine. Although it is possible to emulate the serverless environment locally, there are limitations. It is easy to introduce bugs that manifest only when the code runs in the cloud, and they are hard to reproduce and track down locally.

Vendor lock-in

There is no accepted standard for serverless infrastructure. Each cloud provider develops its own in-house solution, which makes migration from one cloud provider to another almost impossible.

Migrating a classical application to serverless architecture

Now we come to the meat of this post. In the rest of the article, we will see various ways to implement a web application using serverless architecture.

Before we start, let’s look at the common approach to architecting a web application using servers:

Classic web application architecture

Classic server-based web application architecture

In this diagram, each block represents one of the components of the application. Actual implementation details may vary, but the role of each component is clearly defined:

  • UI - the user interface of the application. In our case, this is a web front end, but it could be a desktop or mobile application

  • API - the back end of the application, containing the business logic. The UI communicates with the API to execute user requests. In our case, for a web app, this will be a REST API, but it could also be RPC, SOA or any kind of remotely accessible API

  • Database - the persistence layer of the application. It could consist of one or more databases, which can be either relational or any flavor of NoSQL

  • Storage - this part is somewhat optional, but is needed for applications that store user data. Think of file sharing apps, social networks and the like

  • Auth - the authentication and authorization service for the application. Although we could develop custom security logic for our app, it is common practice to outsource this part to an external component, either third party or one managed by the organization itself. This component manages authorization and authentication through standard protocols, such as OAuth2 and OIDC.

In the rest of the post, we will break down each of these components and see how they can be implemented using serverless architecture approach. Specifically, we will use AWS services for each component.

Web front end with AWS CloudFront and S3

The first part of the serverless application architecture will be the web front end. For this component, we can take advantage of AWS services such as CloudFront and S3. The following diagram depicts the architecture in more detail:

Web front end architecture with CloudFront and S3

Web front end with CloudFront and S3

There are several services in this diagram, so let’s see what each of them does:

  • S3 - all web-related files are stored in an S3 bucket. These include HTML pages, JavaScript code, CSS files, images, video, audio, fonts or any other resource used by the front end. The benefits of S3 are cheap storage and high durability of the stored files

  • CloudFront - the global CDN used to cache content at locations close to users. This greatly improves performance and also reduces bandwidth costs, since CloudFront egress is cheaper than downloading data directly from S3. In addition, CloudFront integrates neatly with S3, providing a seamless developer experience

  • Amazon Cognito - the authentication and authorization service from AWS. It maps to the Auth component from the diagram above, but is shown here because the front end interacts with it

  • AWS Certificate Manager - provides SSL/TLS certificates for HTTPS connections to CloudFront. It lets you easily provision and deploy the certificates you need. It is also tightly integrated with CloudFront, so you can add HTTPS to your front end in no time.

  • API Gateway - actually part of the API component, but shown in this diagram because the front end interacts with it.

So, how does this all work? Let’s walk through the workflow:

  1. The user navigates to the application web page via a browser

  2. CloudFront uses an SSL certificate provided by Certificate Manager to establish the HTTPS connection

  3. CloudFront loads application files from S3. These are then cached at the edge location for fast retrieval later

  4. If a page requires authentication, the user is redirected to Cognito. After successful authentication, the user is redirected back to the application

  5. The application calls the back end API through API Gateway
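The deployment side of this setup can be sketched with boto3: upload the build output to the S3 bucket, then invalidate the CloudFront cache so the edge locations pick up the new files. The bucket name, distribution ID and helper names below are placeholders, not part of the original setup.

```python
import mimetypes
import time
from pathlib import Path

# Placeholder values; substitute your own bucket and distribution.
BUCKET = "my-frontend-bucket"
DISTRIBUTION_ID = "E1234EXAMPLE"

def build_upload_plan(build_dir):
    """Return (local_path, s3_key, content_type) for every file in build_dir."""
    plan = []
    for path in sorted(Path(build_dir).rglob("*")):
        if path.is_file():
            key = path.relative_to(build_dir).as_posix()
            content_type, _ = mimetypes.guess_type(path.name)
            plan.append((str(path), key, content_type or "application/octet-stream"))
    return plan

def deploy_frontend(build_dir):
    """Upload the build output to S3, then invalidate the CloudFront cache."""
    import boto3  # imported here so build_upload_plan stays usable without boto3

    s3 = boto3.client("s3")
    for local_path, key, content_type in build_upload_plan(build_dir):
        # Setting ContentType ensures browsers render the files correctly.
        s3.upload_file(local_path, BUCKET, key,
                       ExtraArgs={"ContentType": content_type})

    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )
```

In practice you would run something like `deploy_frontend("dist")` from your CI pipeline after the front end build completes.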

The good side of this architecture is that it’s relatively simple to implement and extremely scalable. Since CloudFront is a CDN, your users get excellent performance worldwide.

On the other hand, it can get expensive if you get a lot of traffic. Outgoing traffic is one of the most expensive items on an AWS bill. But once you have a lot of users, you can probably afford it.

Web backend with API Gateway and Lambda

The most common approach to serverless backend architecture is to use API Gateway as the entry point to the API and AWS Lambda functions to process the requests. This architecture is shown in the diagram below:

Web back end architecture with API gateway and Lambda

Web back end architecture with API gateway and Lambda

In this architecture, API Gateway is used to define your API endpoints. It also provides supporting functionality, such as security, content negotiation, caching and so on.

API Gateway receives HTTP requests and forwards them to Lambda functions for processing. Mapping Lambdas to requests can be done in different ways:

  • One Lambda per request - in this approach, you specify a separate Lambda function for each endpoint and HTTP method. For example, a GET to endpoint /foo would be handled by Lambda1, while a POST to /foo would be handled by Lambda2.

  • Single Lambda handling multiple requests - this way, one Lambda function handles all HTTP methods for the /foo endpoint, or even multiple endpoints.

The benefit of the first approach is its simplicity: the mapping is 1:1. The drawback is that the number of functions can get out of hand as the number of endpoints grows. The second approach helps with this, but the logic inside the Lambda can get complicated, since you have to route the requests yourself.
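The second approach can be sketched as a single handler dispatching on the HTTP method and path from the API Gateway proxy event. The route table and the business-logic functions below are illustrative placeholders.

```python
import json

def list_foos(event):
    # Placeholder business logic for GET /foo.
    return {"foos": []}

def create_foo(event):
    # Placeholder business logic for POST /foo.
    body = json.loads(event.get("body") or "{}")
    return {"created": body}

# Route table: (HTTP method, resource path) -> handler function.
ROUTES = {
    ("GET", "/foo"): list_foos,
    ("POST", "/foo"): create_foo,
}

def handler(event, context):
    """Dispatch an API Gateway (REST API, proxy integration) event."""
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```

Keeping the routing in a plain dictionary like this makes the dispatch logic easy to test locally, which partially offsets the debugging drawback mentioned earlier.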

In the diagram, the two additional blocks are Cognito and Certificate Manager. These provide supporting services for the API.

Cognito helps with authenticating and authorizing requests. API Gateway offers excellent integration with Cognito, which makes adding security to your API trivial.

Certificate Manager provides SSL certificates for your API. It is also tightly integrated with API Gateway, making certificate deployment a breeze.

This architecture is an excellent choice in terms of elasticity and price. There are no costs when no requests come in, and you get a generous free tier for both services. You start incurring charges only after a few million requests.

On the other hand, if you start getting a lot of concurrent requests, you might run into concurrency limits. By default, each AWS account gets 1,000 concurrent Lambda executions per region. This number can be increased, but you will eventually hit a limit.

Web backend with Fargate

Implementing a web backend with AWS Fargate is very similar to the classic server-based architecture, except that instead of a server, your application runs in a container. With Fargate, you don’t need to provision any servers; you just set the amount of resources you need (CPU, memory) and Fargate takes care of allocating and scaling your application.

This diagram depicts typical Fargate deployment:

Web back end architecture with ELB and Fargate

Web back end architecture with ELB and Fargate

Instead of API Gateway, here we have a load balancer as the entry point. It distributes traffic across the Fargate tasks, while ECS service auto scaling adds or removes tasks based on demand. So it looks really similar to the classic architecture based on ELB and EC2.

The benefit of this architecture is that it is much better suited to a large number of concurrent requests, since there is no limit like the one you have with Lambda. In addition, it is much easier for developers to work on the code, because they can test and debug the app locally, and then simply deploy it as a container.

In terms of price, this architecture is more expensive than Lambdas, because there is no scaling down to zero; you always have at least some amount of resources running. It is much better suited to applications that always have some amount of traffic.

Serverless databases: DynamoDB and Aurora Serverless

Depending on the type of application you are developing, you might have a choice between a relational and a NoSQL database. AWS offers several options here. But since we are talking about serverless application architecture, we will discuss DynamoDB and Aurora Serverless.

Of the two, DynamoDB is much better suited to serverless architecture, especially when using Lambdas. This has to do with the difference between relational and non-relational databases.

Establishing a connection to a database from application code is an expensive and time-consuming operation. In a classic application architecture, there is usually a connection pool, which enables connections to be reused and greatly improves performance.

Since Lambdas are transient and stateless, connection pools are not an option. On the other hand, opening and closing a connection on each invocation degrades performance. This is a known issue, and AWS even came up with a solution, RDS Proxy, which provides a form of connection pooling for Lambda.

With DynamoDB, there are no such issues. Its architecture is completely different, and applications access it through its API rather than by connecting to a server.

In general, if you need a relational database in your application, it is much better to start off with Fargate rather than Lambdas. You may pay a bit more, but you will save yourself a lot of headaches and will definitely speed up development.

Serverless storage options

Several storage options are available on AWS. For most use cases, you will be fine with S3, which is an object store.

Object storage is perfectly suited to user-facing applications, like file sharing apps or social networks. S3 is almost infinitely scalable in terms of capacity and also extremely durable: all files are replicated across the region’s availability zones, so the possibility of file loss is minimal.

When using Lambda with S3 and allowing users to upload or download files, you need to be careful about the time a transfer takes. Lambdas are charged by the amount of time they take to execute, and file operations can take a while, especially for large files. In extreme cases, your Lambda might time out before the upload or download completes.

In this case, it is better to use S3 pre-signed URLs to work with files. This flow is shown in the diagram below:

Web back end storage with S3

Web back end storage with S3

The way this works is the following:

  1. The web client initiates a file upload by hitting an API endpoint

  2. API Gateway invokes the corresponding Lambda function. Instead of processing the file upload itself, the Lambda generates a presigned S3 URL and returns it to the client

  3. The client uses the presigned URL to upload the file directly to S3

This approach can also be used for downloads or other file operations.

In addition to object storage, there may also be situations when you need a traditional file system. Since we are running serverless, we can’t attach hard drives. The solution is to use Amazon EFS (Elastic File System).

This service provides a network file system that can be mounted into Lambdas or containers running on Fargate. It is similar to the NFS or Samba protocols used in traditional networks.
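Once an EFS access point is attached to a function, the mount appears as a local directory and ordinary file I/O just works. In this sketch the mount path `/mnt/shared` and the log-file layout are assumptions; the `base_path` parameter exists only so the logic can be exercised outside Lambda.

```python
from pathlib import Path

# Mount path configured in the Lambda's file system settings; an assumption here.
MOUNT_PATH = Path("/mnt/shared")

def handler(event, context, base_path=MOUNT_PATH):
    """Append a message to a log file on the shared EFS volume and return
    the total line count. Plain file I/O works because EFS is mounted
    like a local POSIX file system."""
    log_file = Path(base_path) / "events.log"
    with log_file.open("a") as f:
        f.write(event.get("message", "") + "\n")
    with log_file.open() as f:
        return {"lines": sum(1 for _ in f)}
```

Because the same volume can be mounted by many Lambda instances or Fargate tasks at once, this pattern also gives you simple shared state between them.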

File system storage provides much faster access to files compared to object storage. It works at a much lower level than S3, which greatly improves performance. If your application needs fast access to files, EFS is the way to go.

The drawback of EFS is that it is much more expensive than S3.

Authentication and authorization with Cognito

Finally, we reach the Auth block from our original diagram. This is the authentication and authorization service, provided by Amazon Cognito.

In your application, you use Cognito as the identity provider, with standard protocols like OpenID Connect, OAuth 2 or SAML, to manage user accounts, authentication and authorization.

This greatly speeds up development, because you can outsource this whole segment of the application instead of spending time implementing everything yourself.

Generally, you could use any identity provider out there, like Okta, Auth0 or OneLogin. But Cognito has two significant advantages over them:

  1. It is tightly integrated with other AWS services, like API Gateway. It is trivial to secure your API using Cognito

  2. Compared to other providers, Cognito is significantly cheaper. We are talking at least an order of magnitude less than its competitors

Cognito also has some drawbacks:

  • The documentation leaves much to be desired. It can be confusing, and it is hard to wrap your head around things

  • You don’t have many options to customize the login page

  • You are limited to single region. There is no option for replication between regions

Final thoughts

That’s it! We covered most of the common cases for serverless application architecture on AWS.

The key takeaway from this post is that you have multiple viable options to choose from, but you need to be careful and plan ahead for the cases where your application needs to scale beyond initially projected size.

If your app serves a limited number of users and requests, starting with API Gateway and Lambda is a great way to reduce cost and speed up development.

But as the application grows, you might see a sharp increase in costs and hit performance bottlenecks. That is a good time to plan a migration to Fargate containers.

For a concrete example of a serverless application, you can check out this post about how to integrate PayPal checkout with an AWS serverless application based on Lambda functions, DynamoDB and API Gateway.

As always, I would love to hear your thoughts about this post. If you have any questions or insights, feel free to post a comment below.