Microservices architecture is an approach to building large applications as a collection of small units called services. Each service is developed, deployed, and tested independently, runs in its own process, and typically communicates with the others over lightweight mechanisms such as HTTP resource APIs. Microservices can be written in different languages, and each service may have its own database or storage system, or the services can share a common one.
Advantages of Microservices
Improved productivity and speed
Decomposing an application into small, manageable services makes development faster: different teams can work on different services simultaneously, and each microservice can be built by a fairly small team.
Better fault isolation
If one microservice fails, the other services continue to work. Each microservice can be deployed and redeployed independently without compromising the integrity of the application.
Reusability of microservices
Microservices such as payment and login can be reused across multiple business projects.
Easier scaling
Since the services are separate, we can scale the most heavily used ones at the appropriate times, instead of scaling the whole application.
Easy to adopt new technologies
Developers can use new technologies when building individual microservices, and there is no long-term commitment to a single technology stack.
Easy to run as containers
Microservices work very well with containers such as Docker. Integration with open-source continuous integration tools like Jenkins is straightforward, and services can be deployed automatically.
What is Kubernetes (k8s) and why should we use it to deploy microservices?
Kubernetes is an open-source orchestrator for deploying containerized applications (microservices). It can also be described as a platform for creating, deploying, and managing distributed applications of many different sizes and shapes. Kubernetes was originally developed by Google to deploy scalable, reliable systems in containers via application-oriented APIs.
Microservices work very well with Docker, and each microservice runs as an individual Docker container. Since these containers must communicate with each other, Kubernetes comes in: Docker builds the containers, and Kubernetes links and orchestrates them, managing containers that run across multiple hosts.
Some of the main advantages that make Kubernetes a good choice for microservice deployments are listed below:
Self-healing capability
Kubernetes keeps a deployment healthy by restarting failed containers. Containers that become unresponsive or fail a user-defined health check are killed, and replacements are started in their place.
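As a minimal sketch, a user-defined health check can be declared as a liveness probe on a container. The Pod name, image, port, and `/healthz` path below are illustrative placeholders, not part of any real service:

```yaml
# Sketch: a Pod whose container is restarted by the kubelet
# whenever the HTTP liveness probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: payments            # illustrative name
spec:
  containers:
    - name: payments
      image: example.com/payments:1.0   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz    # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
```

If the probe fails repeatedly, Kubernetes kills the container and starts a fresh one automatically.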
Service discovery and load balancing
Kubernetes can expose a container using a DNS name or its own IP address. The Service concept groups Pods and simplifies service discovery: Kubernetes gives each Pod an IP address, assigns a DNS name to each set of Pods, and load-balances traffic across the Pods in the set.
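A Service that groups Pods might look like the following sketch; the service name, label, and ports are illustrative assumptions:

```yaml
# Sketch: a Service that selects Pods labeled app=payments,
# gives them a stable cluster IP and DNS name, and
# load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: payments            # illustrative name
spec:
  selector:
    app: payments           # assumed Pod label
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # assumed container port
```

Other services in the same namespace can then reach the Pods simply via the DNS name `payments`, without knowing individual Pod IPs.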
Secret management
Kubernetes supports a Secret object, backed by the etcd datastore, so sensitive information does not have to be stored in container images.
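A Secret can be declared like the sketch below and later exposed to a container as an environment variable or a mounted volume. The credential values are placeholders only:

```yaml
# Sketch: a Secret holding database credentials, kept out of
# the container image. Values here are illustrative placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name
type: Opaque
stringData:
  username: app-user        # placeholder value
  password: change-me       # placeholder value
```

Referencing the Secret from a Pod spec keeps credentials out of both the image and the application's source tree.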
Declarative DNS Management
Ingress objects in Kubernetes provide name-based virtual hosting and HTTP routing in a straightforward, declarative manner, so Kubernetes can direct multiple domains and URL paths to different Services.
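For example, an Ingress can route two URL paths on one host to two different Services. The hostname, paths, and service names below are illustrative assumptions:

```yaml
# Sketch: name-based virtual hosting with path routing.
# Requests to shop.example.com/payments and /login go to
# two different backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress              # illustrative name
spec:
  rules:
    - host: shop.example.com      # placeholder domain
      http:
        paths:
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments    # assumed Service name
                port:
                  number: 80
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login       # assumed Service name
                port:
                  number: 80
```

An Ingress controller (such as NGINX Ingress) must be running in the cluster for these rules to take effect.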
Horizontal scalability
Kubernetes makes it easy to horizontally scale the number of containers depending on the needs of the application. The desired number of containers can be changed from the command line, or adjusted automatically by the Horizontal Pod Autoscaler based on usage metrics.
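Manual scaling is a one-liner such as `kubectl scale deployment payments --replicas=5`. Automatic scaling can be sketched with a HorizontalPodAutoscaler; the target Deployment name and thresholds below are illustrative:

```yaml
# Sketch: scale the (assumed) "payments" Deployment between
# 2 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments                  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments                # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # assumed threshold
```

This requires a metrics source (typically the metrics-server add-on) so the autoscaler can observe CPU usage.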
Storage orchestration
Kubernetes can automatically mount a storage system of your choice, such as local storage, a public cloud provider's storage, and more. Microservice containers can also share a storage volume with other containers in the same Pod.
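Sharing a volume between two containers in one Pod can be sketched as follows; the container names, images, and mount path are illustrative:

```yaml
# Sketch: an app container writes to /data on an emptyDir
# volume, and a sidecar container reads the same files.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example     # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                # scratch volume shared by both containers
  containers:
    - name: app
      image: example.com/app:1.0          # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-shipper
      image: example.com/log-shipper:1.0  # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
```

For durable storage, the `emptyDir` volume would typically be replaced with a PersistentVolumeClaim backed by the storage system of your choice.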
Automated rollouts and rollbacks with zero downtime
Kubernetes addresses this with Deployments, which create additional Pods running the newer image and ensure they are running and healthy before destroying the old Pods. Kubernetes will also roll back the change if the newer containers fail. In this way downtime is minimized, ensuring a good user experience.
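A rolling update can be sketched with a Deployment strategy like the one below; the Deployment name, image, and probe endpoint are illustrative assumptions:

```yaml
# Sketch: a Deployment that replaces Pods one at a time,
# never taking an old Pod down before a new one is ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments                  # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # keep all old Pods until replacements are ready
      maxSurge: 1                 # add at most one extra Pod during the rollout
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example.com/payments:1.1   # placeholder image (new version)
          readinessProbe:
            httpGet:
              path: /healthz      # assumed readiness endpoint
              port: 8080
```

Updating the image field triggers the rollout, and a failed release can be reverted with `kubectl rollout undo deployment/payments`.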