Introduction

1. Containers

In traditional software development, an application that works in one environment often fails in another. Containerizing the application helps overcome this problem: the application is packaged together with the configuration, libraries and other dependencies it needs to run smoothly in any environment. The most popular ecosystem for containers is Docker. Docker containers can be orchestrated with Kubernetes, an open-source container orchestration system for automating application deployment, scaling and management.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is an AWS service that makes it easy to deploy, manage and scale containerized applications using Kubernetes on AWS. Amazon EKS eliminates a single point of failure by running the Kubernetes management infrastructure across multiple Availability Zones.

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows containerized applications to run and scale smoothly. Amazon ECS eliminates the need to install and operate custom container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on them.

2. Serverless functions

Serverless architecture hides the infrastructure that the application runs on and automatically scales resources up and down with the number of incoming requests. The biggest benefit is that there is no infrastructure or server software to manage, at a very low cost.

AWS Lambda is a compute service that runs code without provisioning or managing servers. AWS Lambda executes the code only when needed and scales automatically, from a few requests per day to thousands per second. It charges only for the compute time consumed and requires zero administration. AWS Lambda runs the code on a high-availability compute infrastructure and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring and logging. All that is needed is to provide the code in one of the languages AWS Lambda supports.
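For context, a Lambda function in Python is just a handler that the runtime invokes with an event and a context object. A minimal sketch, where the handler name and the response shape follow the common API Gateway proxy convention:

```python
import json

def lambda_handler(event, context):
    # event carries the request payload (e.g. from API Gateway);
    # context provides runtime information such as remaining time.
    body = {"message": "ok"}
    # API Gateway proxy integrations expect a statusCode and a string body.
    return {"statusCode": 200, "body": json.dumps(body)}
```

Lambda takes care of everything outside this function: provisioning, scaling, and retiring the execution environments.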

Method and Tools used for Experiment

Application: A Python program that queries an RDS PostgreSQL database and returns 100 records.

RDS configuration:

  1. Instance class: db.t2.micro
  2. vCPU: 1
  3. RAM: 1 GB
  4. Storage: 20 GiB

The containers are allocated 128 MB of memory in all cases.

In the case of ECS and EKS, an Application Load Balancer is used to access the application; in the case of Lambda, API Gateway is used.

ECS and EKS are both cluster-based services. We deployed two m4.large nodes (2 vCPUs and 8 GB memory) in both cases.

JMeter is used for load testing, at three load levels:

  1. 50 threads/sec, 5 sec ramp-up time for 15 minutes
  2. 500 threads/sec, 15 sec ramp-up time for 15 minutes
  3. 1000 threads/sec, 30 sec ramp-up time for 15 minutes
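For a rough sense of how aggressively each level applies load, the ramp-up period controls how quickly JMeter starts its threads; a quick back-of-the-envelope sketch (pure arithmetic, no JMeter involved):

```python
# Threads started per second during ramp-up, for the three test levels above.
levels = [
    {"threads": 50, "ramp_up_sec": 5},
    {"threads": 500, "ramp_up_sec": 15},
    {"threads": 1000, "ramp_up_sec": 30},
]

for level in levels:
    rate = level["threads"] / level["ramp_up_sec"]
    print("%d threads over %d s -> %.1f threads started/sec"
          % (level["threads"], level["ramp_up_sec"], rate))
```

So the medium and high levels reach roughly the same thread start rate, but the high level sustains twice as many concurrent threads once ramp-up finishes.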

Data obtained from Experiments

The data obtained from the experiments is based on the responses to the HTTP requests sent with JMeter. For each request, JMeter reports the number of samples, average, min, max, std. dev., error percentage, throughput, received KB/sec, sent KB/sec and average bytes.

Experiment 1: Using Lambda (Serverless Function)

This experiment involves creating a Python 2.7 Lambda function that queries an RDS PostgreSQL database and returns 100 records. To expose the Lambda function to the world as a REST API, it is integrated with API Gateway. After creating the REST API, the URL obtained is used to invoke the Lambda function.
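A sketch of what such a function might look like (the table name `records`, the environment-variable names and the helper `build_query` are illustrative assumptions, not the authors' actual code; `psycopg2` must be bundled in the deployment package):

```python
import json
import os

def build_query(limit):
    # Parameterise the row limit so the same function is easy to reuse.
    return "SELECT * FROM records LIMIT %d;" % limit

def lambda_handler(event, context):
    import psycopg2  # imported lazily; shipped inside the deployment package
    # Connection details come from Lambda environment variables.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(build_query(100))
            rows = cur.fetchall()
    finally:
        conn.close()
    return {"statusCode": 200, "body": json.dumps({"count": len(rows)})}
```

Note that each concurrent Lambda invocation opens its own database connection, which is one reason a small RDS instance can become the bottleneck under heavy load.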

Results:

1. At low load of 50 threads/sec and a ramp-up period of 5 sec

Fig 1: X-Ray reading for low load

2. At medium load of 500 threads/sec and a ramp-up period of 15 sec

Fig 2: X-Ray reading for medium load

3. At high load of 1000 threads/sec and a ramp-up period of 30 sec

Fig 3: X-Ray reading for high load

Experiment 2: Using ECS (Elastic Container Service)

This experiment involves creating an ECS cluster that pulls a Docker image from a container registry. The Docker image contains a Python Flask application that queries the database and returns 100 records. The ECS cluster is deployed with 2 nodes, and service auto scaling is configured to run a maximum of 10 tasks. The nodes are attached to an Application Load Balancer.
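A minimal sketch of such a Flask application (the table name `records`, the environment-variable names and the route are assumptions for illustration):

```python
import os

def fetch_records(limit=100):
    import psycopg2  # installed inside the Docker image
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM records LIMIT %s;", (limit,))
            return cur.fetchall()
    finally:
        conn.close()

def create_app():
    from flask import Flask, jsonify
    app = Flask(__name__)

    @app.route("/records")
    def records():
        return jsonify(fetch_records())

    return app

# In the container the app would be started with something like:
#   create_app().run(host="0.0.0.0", port=5000)
```

The same image runs unchanged on ECS and EKS, which keeps the comparison between the two orchestrators fair.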

Results:

1. At low load of 50 threads/sec and a ramp-up period of 5 sec

CPU utilization:

Memory utilization:

2. At medium load of 500 threads/sec and a ramp-up period of 15 sec

CPU utilization:

Memory utilization:

3. At high load of 1000 threads/sec and a ramp-up period of 30 sec

CPU utilization:

Memory utilization:

Experiment 3: Using EKS (Elastic Container Service for Kubernetes)

An EKS cluster is deployed with 2 nodes, which run a container pulled from a Docker registry. Horizontal pod autoscaling is enabled in the Kubernetes cluster, running up to 10 pods/containers. The nodes are attached to an Application Load Balancer.

Results:

1. At low load of 50 threads/sec and a ramp-up period of 5 sec

CPU utilization:

Memory utilization:

2. At medium load of 500 threads/sec and a ramp-up period of 15 sec

CPU utilization:

Memory utilization:

3. At high load of 1000 threads/sec and a ramp-up period of 30 sec

CPU utilization:

Memory utilization:

Cost:

Lambda:

The first 1 million requests per month are free; after that, requests cost $0.20 per million, i.e. $0.0000002 per request. There is also a compute charge: the first 400,000 GB-seconds per month (up to 3.2 million seconds of compute time) are free, and every GB-second used thereafter costs $0.00001667. The price depends on the amount of memory allocated to the function.
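The pricing model above can be sketched as simple arithmetic (the request count, memory size and duration in the example are made-up inputs, not figures from the experiments):

```python
# Sketch of the Lambda pricing model: per-request charge plus
# a per-GB-second compute charge, each with a monthly free tier.
FREE_REQUESTS = 1_000_000
PRICE_PER_REQUEST = 0.20 / 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_GB_SECOND = 0.00001667

def monthly_lambda_cost(requests, memory_mb, avg_duration_sec):
    request_cost = max(0, requests - FREE_REQUESTS) * PRICE_PER_REQUEST
    # Compute usage is billed in GB-seconds: memory (in GB) x total seconds.
    gb_seconds = requests * avg_duration_sec * (memory_mb / 1024.0)
    compute_cost = max(0.0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g. 2 million requests at 128 MB and 100 ms each use only 25,000
# GB-seconds (within the free tier), so just the extra 1 million
# requests are billed, about $0.20 for the month.
print(monthly_lambda_cost(2_000_000, 128, 0.1))
```

This is what makes Lambda attractive for spiky, short-running workloads: a workload that fits the free tiers costs nothing at all.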

ECS (EC2 Launch Type Model):

There is no additional charge for the EC2 launch type. You pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application. One m4.large instance costs $0.10 per hour.

EKS:

An EKS cluster costs $0.20 per hour, plus the cost of the instances used by the cluster. For example, if the cluster uses one m4.large instance, it costs $0.30 per hour ($0.20 for the cluster + $0.10 for the instance).

Conclusion:

Note: The results above vary for the same configuration under different bandwidths.

As seen from the results, the Lambda function is clearly leading the race. That is because Lambda is serverless: it is not bound by infrastructure limitations and can scale up almost without limit, whereas EKS and ECS are bound by the limitations of their nodes. However, EKS and ECS can perform better if larger instances and larger clusters are used, which can run a greater number of pods. Lambda has its own drawbacks, such as cold starts and a 15-minute run-time limit. Therefore, Lambda is suitable for short execution times and offers high availability, whereas ECS and EKS excel at sustained performance and are most suitable for applications with long execution times.


About the Author

Praveen Poojari
Trainee - Cloud Centre of Excellence (CoE)

Praveen and Nikhil are both Mindtree campus trainees from the October 2018 batch. Being batchmates and active techies, they are part of the Cloud Centre of Excellence (CoE) team at Mindtree. They have worked on several AWS services and serverless technologies, with a deep understanding of Docker, Kubernetes and Terraform.


About the Author

Nikhil Gupta
Trainee - Cloud Centre of Excellence (CoE)

