One of the interesting aspects of microservices is the ability to deploy each microservice (or group of microservices) independently, unlike a monolithic application. This lets each capability, implemented as a microservice, evolve independently based on business demands. At the same time, it also provides the opportunity to handle the scalability requirements of each capability independently. Now, this brings up the question of how to deploy those services. In this context, I believe microservices and containerization are a match made in heaven :) Let me explain in detail.
Now let's start looking at the deployment options we have. Should we deploy each microservice group on its own physical machine? Obviously not, as that is a costly proposition. The next obvious choice is a traditional virtualized environment. Virtualization allows you to split a physical machine into separate virtual hosts, each capable of running a guest OS and applications on top of it. So does that mean we can split the physical server into any number of virtualized hosts? Yes and no. Yes, we can split it, but only to a certain extent. In type 2 virtualization, the hypervisor runs on top of a host OS. The role of the hypervisor is to map resources like CPU and memory from the virtual hosts to the physical host. Each virtualized host runs a guest OS with its own kernel, and the hypervisor keeps the virtualized hosts isolated from one another and from the physical host. To perform these activities, the hypervisor itself consumes CPU cycles and memory. As we add more and more virtualized hosts, the hypervisor takes more of those resources to perform its tasks, and beyond a certain point the performance of the overall infrastructure starts degrading drastically.
Now let's look at a more lightweight alternative: Linux containers. Linux containers are lightweight because they don't need a hypervisor to split and control each virtual host. Instead, they use the idea of creating a separate process space in which the contained processes live. Each container is effectively a subtree of the system process tree and is allocated physical resources like CPU, memory, and so on. Since we don't need a hypervisor, we save a lot of resources, which makes it possible to provision far more containers than typical virtual machines on the same hardware. At the same time, a Linux container is much faster to provision than a full-fledged VM image. Docker is one of the most popular lightweight container platforms built on top of this concept. Instead of the stock Linux container tooling, Docker implemented its own library, "libcontainer", which also opens the door to running containers on Microsoft OS rather than being limited to Linux. Docker's promise is the ability to run anything (if an app can run on a host, it can run in a container) and run anywhere (cloud/virtual/physical). It also supports the concept of a Docker image repository to which tested images can be pushed: basically build once, test it, push it to the container repo, and then pull it to run anywhere. From the repository, container images can be deployed to Dev/QA/UAT/Production environments without worrying about environmental dependencies and other inconsistencies across environments. If any change needs to be made, it is done by replacing the container with a newer container image in no time (Docker also supports syncing only the diff, since images are built up from layers).
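To make the build-once, push, pull, run-anywhere workflow concrete, here is a minimal sketch using standard Docker CLI commands. The image name myorg/order-service, the tag 1.0, and port 8080 are purely illustrative assumptions for this example, not part of any real project.

```
# Build an image from the Dockerfile in the current directory
# (myorg/order-service:1.0 is a hypothetical image name and tag)
docker build -t myorg/order-service:1.0 .

# Push the tested image to the image repository (registry)
docker push myorg/order-service:1.0

# On any Dev/QA/UAT/Production host: pull the exact same image...
docker pull myorg/order-service:1.0

# ...and run it, with CPU/memory limits enforced by the kernel
# rather than by a hypervisor
docker run -d -p 8080:8080 --cpus="1.0" --memory="512m" \
  myorg/order-service:1.0
```

Because every environment pulls the same immutable image, the "it works on my machine" class of environmental inconsistencies largely disappears; rolling out a change is just pushing a new image tag and running it.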
The above capabilities make containers an ideal fit for microservices deployment. Containerization is a proven concept and can be adopted without the risk of being an early mover.