Have we ever thought about next-generation microservices architectures through a DevOps lens, and about realizing them effectively with the AWS DevOps toolkit?
Enterprise adoption of microservices-based architecture for software development is growing rapidly and is becoming the de facto pattern, adopted either wholesale or as part of an overall architecture. A recent survey on microservices adoption reveals that enterprises adopt microservices to gain agility (82%) and to achieve scalability (78%). If your motivation for adopting microservices is agility and scalability, then a strong DevOps pipeline is an enabler in your adoption journey.
Figure 1: Microservices adoption drivers (extract from: https://lightstep.com/resources/reports/microservices-trends-2018/)
Over the past 4-5 years, container-based architecture has become the go-to standard for deploying immutable, elastic microservices. Adopting containers made the DevOps landscape for microservices deployment a lot easier than doing the same with virtual machines.
Fundamental microservices principles helped containers prevail, for the reasons below:
- Technology shift: The industry shifted from building server-based JEE or .Net applications on heavy application servers to stateless, immutable, scalable NodeJS or Spring Boot based microservices.
- Use and throw: Agility in development requires the speed to deploy, test, and destroy. Containers enable exactly that, and are easy to bring up and tear down.
- DevOps: Containerization is a powerful way to simplify DevOps for microservices.
It is now evident that microservices architecture is the way forward, containerization is its enabler, and DevOps is the central focus that makes the paradigm successful.
This blog is an attempt at building a DevOps pipeline for a microservices architecture, handling continuous deployment, and introducing a few tools that enable monitoring.
DevOps Pipeline Representation for NodeJS-based Microservices
The DevOps toolchain comprises:
- Code — code development and review, source code management tools, code merging
- Build — continuous integration tools, build status
- Test — continuous testing tools that provide feedback on business risks
- Package — artifact repository, application pre-deployment staging
- Release — change management, release approvals, release automation
- Configure — infrastructure configuration and management, Infrastructure as Code tools like AWS CloudFormation
- Monitor — application performance monitoring, end-user experience
Code repositories for a microservices-based application are derived from the Y-axis of the scale cube, i.e., the functional decomposition of the monolithic application into independently deployable units. To give each microservice's development team the flexibility to choose its own technology, release path, and deployment model, the services need to be separated into independent repositories.
The repository structure would look like:
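For illustration, with hypothetical services named `orders`, `catalog`, and `payments`, the independent repositories might look like:

```
orders-service/        # its own Git repo, Dockerfile, and pipeline definition
catalog-service/       # independently versioned and released
payments-service/      # free to use a different stack if the team chooses
```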
Build & Code Quality Check
The chain of build tooling, static code analysis, unit testing, and package generation remains the same between a microservices-based application and a monolithic one. As shown in the picture above (Figure 2), Gulp is used to build the npm package after unit testing the code with Chai and Mocha. As with any other technology, static code analysis is done using SonarQube.
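For illustration, assuming npm scripts wire these tools together (the script names, versions, and SonarQube project key below are assumptions), the relevant `package.json` fragment might look like:

```json
{
  "scripts": {
    "test": "mocha --reporter spec",
    "build": "gulp build",
    "sonar": "sonar-scanner -Dsonar.projectKey=orders-service"
  },
  "devDependencies": {
    "mocha": "^6.1.4",
    "chai": "^4.2.0",
    "gulp": "^4.0.2"
  }
}
```

A CI job can then run `npm test && npm run build && npm run sonar` as its quality gate.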
The best practice in container packaging is to create the Docker image at deployment time rather than at build time. There are two reasons behind this:
- Build packages and versions need to be maintained in Artifactory; keeping the Docker images in the repository as well would demand a lot of disk space on Artifactory.
- It is recommended to keep Docker images up to date with all security patches; building the image close to deployment ensures the latest patches are applied to the image.
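A minimal Dockerfile sketch for a NodeJS microservice illustrates the idea; running `docker build --pull` at deployment time forces a fresh, patched base image (the base image tag, port, and entry file are assumptions):

```dockerfile
# Build close to deployment with: docker build --pull -t orders-service .
# --pull re-fetches the base image so the latest OS patches are included.
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production   # exact, production-only dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```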
With the increase in cloud adoption for systems of engagement built on microservices architecture, environments are now disposable, and this is easier and more relevant for a containerized solution. A deployment pipeline can now incorporate automated tests: bring up an environment using Infrastructure as Code, run the regression test suites, and dispose of the environment once the tests complete.
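The disposable-environment flow above can be sketched as a shell script. The CloudFormation template name, stack naming, and npm script are assumptions; `DRY_RUN=1` (the default here) prints each command instead of executing it, so the flow can be inspected without an AWS account:

```shell
#!/bin/sh
# Sketch of a disposable regression-test environment lifecycle.
set -e
DRY_RUN=${DRY_RUN:-1}
STACK="regression-env-$$"                 # unique stack name per pipeline run

# In dry-run mode, echo the command; otherwise execute it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run aws cloudformation deploy --stack-name "$STACK" --template-file env.yaml
run npm run test:regression               # exercise the freshly built environment
run aws cloudformation delete-stack --stack-name "$STACK"   # dispose of it
```

Setting `DRY_RUN=0` would execute the real AWS CLI and npm commands.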
When it comes to microservice release management, the expectation is seamless deployment with little or no impact to the business. Two common ways to achieve this are blue-green and canary deployments:
Blue-Green Deployment: Two identical production environments run side by side, and traffic is switched from the old (blue) version to the new (green) version in a single step. Container orchestration tools like OpenShift and Kubernetes support blue-green deployment with no downtime in the production environment.
The drawback of this approach is that if there is an issue with the new version, it impacts all users, even for the fraction of time it was exposed to the business. To overcome this, let us look at the canary model of deployment.
Canary Deployment: The canary model deploys small, incremental replica sets of the new version and keeps the traffic exposed to them to a minimal percentage. Apart from the traffic percentage, there are other toggles one can adopt, such as limiting exposure to specific functionalities or to specific consumers.
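In Kubernetes terms, one simple sketch of this (names, labels, image, and replica counts are assumptions) is a second Deployment whose pods carry the label matched by the Service selector, so the replica ratio controls the traffic split (1 canary pod alongside 9 stable pods ≈ 10% of traffic):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-canary
spec:
  replicas: 1                  # stable Deployment keeps 9 replicas -> ~10% traffic
  selector:
    matchLabels:
      app: orders
      track: canary
  template:
    metadata:
      labels:
        app: orders            # matched by the Service selector (app: orders)
        track: canary
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:v2   # the new version under test
```

Promoting the canary is then a matter of scaling the canary up and the stable Deployment down once metrics look healthy.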
This year's microservices adoption survey reveals that the biggest challenges customers face when microservices are deployed at production scale are that 'each additional microservice increases the operational challenges' and that 'it is harder to identify the root cause of performance degradations or issues'. The solution to cracking these problems resides in the container orchestration platform that one adopts.
There are multiple open source options to monitor containers; I will describe a few of them here.
Kube-Dashboard: If Kubernetes is your choice for orchestration, then the Kubernetes Dashboard provides comprehensive insights into your cluster. Integrating it with Heapster, backed by cAdvisor, will let you understand the CPU and RAM utilization of your pods. The dashboard presents your cluster health in a nice graphical representation.
Prometheus: Prometheus is a complete open source solution; a centralized server pulls metrics from all the registered systems.
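A minimal `prometheus.yml` sketch shows this pull model, where the central server scrapes registered endpoints (the job name and target address are assumptions):

```yaml
global:
  scrape_interval: 15s          # how often the server pulls metrics
scrape_configs:
  - job_name: orders-service
    static_configs:
      - targets: ['orders.example.com:9100']   # metrics endpoint to scrape
```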
Sysdig: Sysdig is a cloud-hosted monitoring tool that works through an agent installed on the Linux machines. The product offers Docker monitoring for workloads on Kubernetes, Mesos, and Swarm.
Cloud realization of microservices-based architecture in containerized deployments is taken seriously by AWS, with mature and stable offerings. Amazon Web Services is an architect's choice: AWS approaches every service from a component point of view, so each service can subsequently be used in various architectural scenarios. The other reason it scores is the fact that once a service is released it works as intended, and the documentation is top class.
Mindtree's AWS DevOps CoE approaches the realization with AWS-native toolsets as below:
Elastic Container Service on EC2 instances is used as the AWS compute layer, with Amazon Elastic Container Registry (ECR) storing the Docker images.
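As a sketch, an ECS task definition referencing an image stored in ECR might look like the trimmed fragment below (the family name, account ID, image URI, and resource sizes are assumptions):

```json
{
  "family": "orders-service",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 3000 }]
    }
  ]
}
```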
Amazon's newer release, Elastic Kubernetes Service (EKS), will definitely be a breakthrough in approaching microservices adoption with complete automation.
With containerized environments being the de facto choice for deploying microservices-based applications, agility in development is provided through a proper DevOps implementation.
A set of key considerations while adopting DevOps for containers:
- Choice of container orchestration framework: Kubernetes, ECS, and OpenShift each have their own way of doing release management.
- Container monitoring has been a challenge; adopting a tool that meets your requirements up front will solve a lot of operational challenges.
- Container security: publicly available images may bring in a lot of security threats, so use them with caution.