In a previous blog, we discussed how containers are a key capability in accelerating the shift to a Product IT Operating Model. This post is a continuation of that series.

Building an application platform that protects current technology investments while accelerating transformation through innovation is a goal many companies pursue. Alongside Agile development, the practice of releasing a product with a limited feature set and iterating on it has evolved further with micro-services taking the lead: a product is decomposed into individual services that are developed and integrated independently. This progressive approach not only enables quicker releases, but also lets each service run in its own container, and the cycle then continues. Because containers run in a standardized environment, the deployment architecture is addressed from the beginning by testing the product on a container platform.

We have quite a few options for establishing a micro-services container platform, be it an on-premise setup using Red Hat OpenShift, Docker and Kubernetes, or a managed cloud offering such as AWS EKS, Azure AKS or GCP GKE. AWS ECS with Fargate is a service that enables micro-services applications to be migrated to a serverless architecture in the cloud. This blog details a use case scenario, covering the development and deployment changes needed for the seamless migration of micro-services applications from a Kubernetes cluster to a serverless architecture using AWS Fargate.

Use case scenario

It is widely accepted that Kubernetes solutions are highly mature and stable for the automated deployment, scaling and management of container applications. A containerized application is expected to run on any platform-independent container system without dependency issues for the applications running inside the container. For instance, to run a containerized application on a laptop with Windows, Linux or macOS, we only need to ensure that prerequisite software such as Docker is installed and configured; no further dependencies specific to the container application need to be installed on the laptop.

Consider an enterprise where infrastructure is outsourced to one service provider, development to another, application administration to a third, and monitoring to yet another entity, while innovation and overall governance remain in-house. In this scenario, the bridge between development and operations must span much broader concerns for DevOps to succeed with good collaboration. This enterprise runs its micro-services application on Kubernetes using AWS EC2 instances for its business needs. The underlying EC2 instances and operating system administration are handled by the infrastructure vendor. Kubernetes installation, configuration and deployments are handled by the application administration vendor. Developers build the solution, containerizing the required application and running it as a single entity to serve the business purpose.

The management observes that the EC2 instances occasionally run into issues during the patching cycle, leading to application downtime. With modern configuration management tools, an efficient patching system can be built that ensures proper testing in lower environments before moving to higher ones; applying and reverting patches across multiple servers can then be managed easily through automation. But if the management also wants to reduce the Total Operating Cost (TOC) of having the infrastructure and application vendors manage these EC2 instances and administer Kubernetes, while improving uptime, then a serverless architecture is an ideal choice.

Deployment architecture using Kubernetes and Fargate

Kubernetes – Kubernetes follows a master-worker architecture. A highly available cluster requires a minimum of three master nodes, plus the required number of worker nodes to deploy the application containers. To keep the scenario simple, say we have one master node with three workers. The application is launched from the master node, and the containers scale across the worker nodes according to the deployment replicas. The application is accessed through the service access method defined in the Kubernetes cluster.
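To make the Kubernetes side concrete, a deployment with three replicas might be created as follows; the application name, image and port are illustrative assumptions, not details from the actual setup:

```shell
# Create a deployment; the scheduler places its 3 replicas on the worker nodes.
kubectl create deployment blog-node-app \
  --image=blog-node-app:latest --replicas=3

# Expose the deployment through a NodePort service so it is reachable
# from outside the cluster.
kubectl expose deployment blog-node-app --type=NodePort --port=3000

# Verify which worker nodes the replicas landed on.
kubectl get pods -o wide
```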

The intent of the use case solution is not just to eliminate the effort of provisioning and managing EC2 instances and administering Kubernetes; the application architecture and deployment methods must also support a seamless transition in all respects. Figure 1 shows the high-level architecture of both Kubernetes on EC2 instances and ECS Fargate.

Figure 1: Kubernetes on EC2 vs ECS Fargate architecture


Development and Deployment changes in-line with Fargate

The application code needs a change in its logging approach. Logs cannot be written locally because, with AWS Fargate, the containers can run on any instance within the defined VPC, with no access to the underlying host. Hence, the application logs must be captured in the container's CloudWatch log group. If the application running in the container is Node.js, npm modules like winston can be used for console logging. For Java applications, statements like System.out.println can be leveraged.
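For example, the CloudWatch side can be prepared and inspected with the AWS CLI; the log group name below is a hypothetical value, and the awslogs log driver configured in the task definition routes whatever the container writes to stdout/stderr into this group:

```shell
# Assumed log group name; the awslogs log driver in the task definition
# must reference the same group and region.
LOG_GROUP="/ecs/blog-node-app"

# Create the log group the containers will write to.
aws logs create-log-group --log-group-name "$LOG_GROUP" --region us-east-1

# Tail the container logs once tasks are running (AWS CLI v2).
aws logs tail "$LOG_GROUP" --follow --region us-east-1
```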

Kubernetes deployments use kubectl commands, while AWS ECS Fargate uses aws ecs commands. A Kubernetes deployment can pull images from a local registry, Docker Hub or ECR, whereas ECS Fargate mainly supports ECR. Hence, as part of continuous integration, the build image must be pushed to the ECR registry with an appropriate version tag.
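A minimal CI push step could look like the following sketch; the account ID, region and repository name are placeholders, not values from the original setup:

```shell
# Placeholder registry coordinates; substitute your account, region and repo.
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
REPO="blog-node-app"
TAG="1.0.0"

# Authenticate Docker to ECR (AWS CLI v2 syntax).
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin "$REGISTRY"

# Build the image and push it to ECR with its version tag.
docker build -t "$REGISTRY/$REPO:$TAG" .
docker push "$REGISTRY/$REPO:$TAG"
```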

To keep the Fargate containers on the latest image without creating a new task definition revision for every deployment, retag the versioned ECR image with an environment-specific tag, for example :dev for the development environment, and push it to ECR. Then aws ecs update-service can be used to roll out new deployments. Once the development team signs off on the dev image, the same image can be retagged as :qa and deployed to the Fargate QA cluster. It can then be promoted similarly to higher environments such as stage and production.
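The retag-and-redeploy flow can be sketched as follows; the registry, repository and service names are assumptions for illustration, and only the cluster name follows the later figure:

```shell
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
REPO="blog-node-app"
BUILD_TAG="1.0.0"   # versioned image produced by CI
ENV_TAG="dev"       # environment-specific tag the task definition references

# Retag the versioned image for the target environment and push it back.
docker pull "$REGISTRY/$REPO:$BUILD_TAG"
docker tag  "$REGISTRY/$REPO:$BUILD_TAG" "$REGISTRY/$REPO:$ENV_TAG"
docker push "$REGISTRY/$REPO:$ENV_TAG"

# The task definition still points at :dev, so forcing a new deployment
# pulls the fresh image without creating a new task definition revision.
aws ecs update-service --cluster blog-node-cluster \
  --service blog-node-service --force-new-deployment
```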

Fargate cluster

A Fargate cluster can run multiple services and tasks. Figure 2 shows three services running on the Fargate cluster blog-node-cluster, each with service type REPLICA, which is similar to a Kubernetes ReplicaSet. Each service has a versioned task definition and a desired task count of 3, meaning each service runs three tasks. The service configuration defines the number of tasks to run for each service, the deployment method, auto-scaling and so on. Each service has an associated task definition, which specifies the CPU, memory, ECR image, environment variables and various other parameters for the task. Any change to a task definition creates a new revision.
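These service and task settings can also be inspected and adjusted from the CLI. The cluster name below follows Figure 2; the service name is an assumption:

```shell
# List the services running on the cluster.
aws ecs list-services --cluster blog-node-cluster

# Inspect one service: task definition revision, desired/running counts, etc.
aws ecs describe-services --cluster blog-node-cluster \
  --services blog-node-service

# Change the desired task count, analogous to scaling replicas in Kubernetes.
aws ecs update-service --cluster blog-node-cluster \
  --service blog-node-service --desired-count 3
```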

Figure 2: Fargate blog-node-cluster



A few years ago, if a client migrated their infrastructure from on-premise to the cloud, it was called Digital Transformation. The cloud becomes just another remote data center if used only for infrastructure. Serverless architecture reflects the future, eliminating mundane operational activities, and the need for serverless solutions grows each day. In this blog, we have seen one such example of transformation from server-based to serverless architecture for a micro-services container platform, along with its associated continuous integration and deployment methods.

Mindtree has also created the M-Engine automation framework, which can be integrated with Mindtree's CAPE solution to build serverless container platforms (as shown in Figure 1) with Fargate containers as a one-click deployment.

Read more:

Deployment in AWS Cloud Serverless or Docker Containers?


About the Author

Murali Dhandapani
Senior Technical Architect

Murali Dhandapani is a Senior Technical Architect at Mindtree, focusing on architecting DevOps and automation solutions for digital transformation projects. He is also an Open Group certified Master Technical Specialist.
