The Pitfalls of Monolithic Serverless Applications: What You Need to Know

Many companies struggle with monolithic applications and are either redesigning them or moving to microservice architectures. One of the common mistakes I see in many organizations is that they are now building monolithic serverless applications instead. It sounds easy once you get a little more familiar with serverless technology, but serverless is most effective when the architecture itself is designed efficiently.

In addition, technologies are emerging and becoming outdated at an increasingly rapid pace. To ensure the best performance of your application, I have highlighted the five most common anti-patterns I have seen, along with some advice on how to avoid them.

1. Don’t build a monolithic serverless application

You can configure AWS Lambda functions to run for up to 15 minutes per execution; the maximum execution time (timeout) was previously 5 minutes. Longer-running functions make it easier to perform big data analysis, bulk data transformation, batch event processing, and statistical computations.


The result is that people take advantage of this and develop monolithic serverless applications. Your Lambda function code can grow bigger and bigger, which makes the function harder to maintain and drives up invocation times.

My recommendations are as follows:

  • Break up Lambda functions that carry a lot of business logic into smaller functions, each with a narrower responsibility.
  • Orchestrate multiple Lambda functions using AWS Step Functions. This gives you features such as error handling, loops, and input/output processing around your Lambda implementation.
  • Exchange information between serverless components using Amazon EventBridge. EventBridge offers a rich set of filtering, logging, and forwarding capabilities, and in this way almost any event can be converted into a native AWS event. There are more than 200 event sources available on AWS natively and through partners (see the sketch after this list).
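As a rough illustration of the EventBridge approach, here is a minimal sketch of one small Lambda function handing work off to another via a custom event instead of doing everything itself. The bus name, source, and detail-type values are hypothetical placeholders, not part of the original example.

```python
# Publish a custom event to Amazon EventBridge so that a separate, smaller
# Lambda function (subscribed via an EventBridge rule) can handle the next step.
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    # Do one small, focused piece of work here...
    order = {"orderId": event.get("orderId"), "status": "VALIDATED"}

    # Hand the next step off to whichever function subscribes to this event,
    # instead of calling it directly or keeping everything in one big function.
    events.put_events(
        Entries=[
            {
                "EventBusName": "orders-bus",   # hypothetical custom event bus
                "Source": "app.orders",         # hypothetical source name
                "DetailType": "OrderValidated",
                "Detail": json.dumps(order),
            }
        ]
    )
    return {"statusCode": 202}
```

Each downstream function stays small, and EventBridge rules decide who reacts to which event.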

2. Use an asynchronous architecture

I often see architectures that use Lambda functions to run tasks one after another in sequence. For example, a Lambda function is configured to receive files uploaded by users via Amazon API Gateway, transform each file, save the result to an Amazon Simple Storage Service (Amazon S3) bucket, and then return the HTTP response to the user.

This type of use case leads to longer Lambda runtimes and higher Lambda execution costs.

Here is my suggestion for improvement:

  • Consider introducing asynchronous workflows. For instance, a mobile app or web application can upload files directly to Amazon S3 (for example, via a presigned URL). A separate long-running process can then transform the data and handle user notifications in parallel, instead of doing everything inside the synchronous request (a minimal example follows below).
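Here is a minimal sketch of that idea, assuming a hypothetical bucket name: the API-backed Lambda only returns a presigned upload URL, so the client uploads directly to Amazon S3 and a separate function (triggered by the S3 event) can transform the file asynchronously.

```python
# Return a presigned URL so clients upload files straight to Amazon S3.
# The Lambda never streams the file payload itself.
import json
import boto3

s3 = boto3.client("s3")
UPLOAD_BUCKET = "my-upload-bucket"  # hypothetical bucket name

def handler(event, context):
    key = f"uploads/{event['queryStringParameters']['filename']}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": UPLOAD_BUCKET, "Key": key},
        ExpiresIn=300,  # URL valid for 5 minutes
    )
    # The client PUTs the file to this URL; transformation happens later,
    # in a separate Lambda triggered by the S3 object-created event.
    return {"statusCode": 200, "body": json.dumps({"uploadUrl": url, "key": key})}
```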

3. Avoid over-provisioning memory

Lambda functions with larger memory allocations cost more per millisecond, but they also receive proportionally more CPU. A faster function is therefore billed for fewer milliseconds of execution time.

Assume a function that takes 11.722 seconds to complete with 128 MB of memory, at a cost of $0.024628 per 1,000 calls. When the memory is increased to 1024 MB, the average duration drops to 1.465 seconds while the cost stays essentially the same, at $0.024638 per 1,000 calls. That is roughly an eight-fold performance improvement for the same price.
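Here is a quick back-of-the-envelope check of those numbers, assuming the commonly cited on-demand rates of roughly $0.0000166667 per GB-second plus $0.20 per million requests (actual prices vary by region and change over time).

```python
# Rough Lambda cost model: GB-seconds consumed plus a per-request charge.
GB_SECOND_PRICE = 0.0000166667          # assumed price per GB-second
REQUEST_PRICE = 0.20 / 1_000_000        # assumed price per request

def cost_per_1000_calls(memory_mb: float, duration_s: float) -> float:
    gb_seconds = (memory_mb / 1024) * duration_s
    return 1000 * (gb_seconds * GB_SECOND_PRICE + REQUEST_PRICE)

print(cost_per_1000_calls(128, 11.722))   # ~0.0246
print(cost_per_1000_calls(1024, 1.465))   # ~0.0246 -- ~8x faster for about the same price
```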

In other words, you pay for performance. Make sure you and your customers are on the same page about that trade-off.

Right-sizing a Lambda function is similar to right-sizing an Amazon Elastic Compute Cloud (Amazon EC2) instance. In many cases, Lambda functions are configured with far more memory than they need, which results in additional costs.

To arrive at a “just right” sized Lambda function, keep the following goals in mind:

  • Test the Lambda function to find the right memory and timeout values before using it in production (a rough sketch follows this list).
  • Ensure your code is optimized to run as efficiently as possible.
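The sketch below, assuming a hypothetical function name and test payload, times the same function at several memory sizes so the configuration is chosen from measurements rather than guesswork.

```python
# Invoke one function at several memory sizes and time the results.
import json
import time
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "my-function"                      # hypothetical function name
TEST_PAYLOAD = json.dumps({"sample": True}).encode()

for memory_mb in (128, 256, 512, 1024):
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory_mb
    )
    # Wait until the configuration change has finished rolling out.
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

    start = time.time()
    lambda_client.invoke(FunctionName=FUNCTION_NAME, Payload=TEST_PAYLOAD)
    print(f"{memory_mb} MB -> {time.time() - start:.2f}s round trip")
```

Round-trip timing like this includes invocation overhead and cold starts, so treat it as a first approximation and confirm with the duration reported in the function's logs.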

4. Make sure your Lambda architecture is scalable by design

Lambda itself scales automatically, but your architecture can be overwhelmed by a less scalable downstream service. Integrating Lambda directly with Amazon EC2 instances or an Amazon Relational Database Service (Amazon RDS) database can be dangerous.

Here are some tips for preventing potential problems:

  • Decouple your serverless components from non-serverless dependencies.
  • Consider using asynchronous processing whenever possible. This lets you decouple microservices from one another and introduce buffering.
  • Place an Amazon Simple Queue Service (Amazon SQS) queue as a buffer in front of non-scalable or non-serverless services (see the sketch after this list).
  • To maintain predictable database performance, use Amazon RDS Proxy in front of Amazon RDS.
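As a minimal sketch of the buffering idea, with a hypothetical queue URL and environment variable: a burst of front-end invocations only enqueues work, and a separate consumer Lambda (attached to the queue with a bounded batch size and concurrency) drains it at a pace the database can handle.

```python
# Buffer writes behind Amazon SQS instead of hitting the database directly.
import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ.get("WRITE_BUFFER_QUEUE_URL", "")  # hypothetical env var

def producer_handler(event, context):
    # Instead of writing to the database here, enqueue the work and return fast.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"statusCode": 202, "body": "queued"}

def consumer_handler(event, context):
    # Invoked by the SQS event source mapping with a small, controlled batch.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # ...write `payload` to the database here, ideally through RDS Proxy...
```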

5. Don’t use serverless for server-based applications

Lambda is ideal for short-lived workloads that can scale horizontally. There are a number of documented Lambda application types and use cases that show where Lambda is an ideal fit. Be careful when using Lambda for operations that might take a long time: if they are not designed or coded carefully, they can exceed the Lambda timeout.

Here’s some advice:

For long-running compute tasks, run the work on Amazon EC2, Amazon Elastic Container Service (Amazon ECS), or Amazon Elastic Kubernetes Service (Amazon EKS), and use Lambda only to trigger or coordinate it.
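Here is a minimal sketch of that split: a short-lived Lambda acts only as the trigger, while the heavy lifting runs as an Amazon ECS (Fargate) task. The cluster, task definition, subnet, and security group identifiers are hypothetical.

```python
# Use Lambda as a trigger; run the long-lived job in a container on ECS Fargate.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    ecs.run_task(
        cluster="batch-cluster",                  # hypothetical cluster name
        launchType="FARGATE",
        taskDefinition="long-running-job:1",      # hypothetical task definition
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # hypothetical subnet
                "securityGroups": ["sg-0123456789abcdef0"],  # hypothetical security group
                "assignPublicIp": "DISABLED",
            }
        },
    )
    # Lambda returns in milliseconds; the container can run for hours if needed.
    return {"statusCode": 202, "body": "job started"}
```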

Read the next tutorial blog, ‘Event-driven application development with Amazon EventBridge’, to understand these concepts better.

Stay tuned for more exciting AWS blogs and tutorials!
