SideCar and Service Mesh: 101

Amit Raj · Published in Dev Genius · 3 min read · Aug 15, 2022

This blog is part of a series covering 101 concepts from the ground up for an audience with limited starting knowledge. This article sits in the intermediate-level series, since it involves understanding the primitives of Sidecar and Service Mesh and their importance as the backbone of a microservices architecture.

Some of the earlier blogs in the 101 Series are as follows:

Data Encryption 101
Database Replication 101
Database Sharding 101
Caching Strategy 101
Kubernetes Deployment 101
Async Communication 101
HTTPS 101

What is SideCar?

A sidecar is a separate container that runs alongside the application container and handles isolated peripheral tasks such as logging, proxying, and configuration management. It shares the lifecycle of its parent application container: creation and termination events are kept in sync.
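To make this concrete, here is a minimal sketch using the Kubernetes Go API types (k8s.io/api/core/v1) of a pod that pairs an application container with a log-shipping sidecar. The container names, images, and mount paths are illustrative placeholders, not from any specific deployment.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both containers live in the same pod, so they are created and
	// terminated together and can share volumes and the network namespace.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "service-a"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// Main application container (image is a placeholder).
					Name:  "app",
					Image: "example/service-a:1.0",
					VolumeMounts: []corev1.VolumeMount{
						{Name: "logs", MountPath: "/var/log/app"},
					},
				},
				{
					// Sidecar: reads the shared log volume and ships it out.
					Name:  "log-shipper",
					Image: "fluent/fluentd:v1.16",
					VolumeMounts: []corev1.VolumeMount{
						{Name: "logs", MountPath: "/var/log/app", ReadOnly: true},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name:         "logs",
					VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
				},
			},
		},
	}
	fmt.Println(pod.Name, "has", len(pod.Spec.Containers), "containers")
}
```

The key point of the sketch is that the sidecar is declared inside the same pod spec as the application, not as a separate deployment, which is what keeps their lifecycles in sync.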

The term sidecar comes from the resemblance to a sidecar attached to a motorcycle. The pattern offloads non-functional requirements such as resiliency, scalability, and security into separate containers, leaving the application container free to run the business use cases.

SideCar — Core Responsibilities

Service Mesh — Basic Architecture

Sidecars can be deployed as isolated components to support basic cross-cutting needs. In most cases, however, they are part of an overarching service mesh architecture.

A service mesh is an infrastructure layer that handles inter-pod communication, with sidecars acting as intermediate proxies for the inbound and outbound traffic of each application pod.

Components

  • Data Plane — consists of the proxies deployed as sidecars. These sidecars handle all inter-service communication between the microservices.
  • Control Plane — consists of the components that handle service discovery, management of proxies, certificates, and similar concerns (a simplified sketch follows this list).
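To make the split concrete, the toy Go sketch below models the control plane as a simple service registry that data-plane sidecars can query. Real meshes such as Istio or Linkerd expose far richer APIs; the names and structure here are illustrative assumptions only.

```go
package main

import (
	"fmt"
	"sync"
)

// ControlPlane is a toy stand-in for the mesh control plane: it tracks
// which endpoints back each service (service discovery) and, in a real
// mesh, would also push routing rules and certificates to the proxies.
type ControlPlane struct {
	mu       sync.RWMutex
	registry map[string][]string // service name -> pod endpoints
}

func NewControlPlane() *ControlPlane {
	return &ControlPlane{registry: make(map[string][]string)}
}

// Register is called when a new pod (and its sidecar) joins the mesh.
func (cp *ControlPlane) Register(service, endpoint string) {
	cp.mu.Lock()
	defer cp.mu.Unlock()
	cp.registry[service] = append(cp.registry[service], endpoint)
}

// Resolve is what a data-plane sidecar asks for before routing a request.
func (cp *ControlPlane) Resolve(service string) []string {
	cp.mu.RLock()
	defer cp.mu.RUnlock()
	return cp.registry[service]
}

func main() {
	cp := NewControlPlane()
	cp.Register("service-b", "10.0.1.12:8080")
	cp.Register("service-b", "10.0.1.13:8080")

	// A sidecar on Service A resolving Service B before proxying a call.
	fmt.Println("service-b endpoints:", cp.Resolve("service-b"))
}
```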

Traffic Flow

  • External ingress traffic flows through the Ingress Controller to a given pod, for example Service A.
  • The sidecar container in the Service A pod intercepts the request and, if the request is valid, forwards it to the application container (see the sketch after this list).
  • Invalid requests are rejected by the sidecar and never reach the running application.
  • Inter-service traffic between Service A and Service B flows only through the layer of sidecar proxies, which resolve destinations using the service discovery pattern.
  • Similarly, outbound external traffic from any service pod flows out through the Egress Controller.
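The interception step can be pictured as a tiny Go reverse proxy playing the role of the sidecar: it listens on an assumed proxy port, rejects requests that fail a toy validation check (a missing Authorization header here, purely for illustration), and forwards valid requests to the application container on localhost.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application container listens inside the pod; the port is an assumption.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Toy validation: reject requests without an auth header so the
		// application container never sees invalid traffic.
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "rejected by sidecar", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	// The sidecar listens on a separate port that the mesh routes traffic to.
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

Production sidecar proxies such as Envoy do far more (mTLS, retries, telemetry), but the shape is the same: the application never terminates external traffic directly.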

Advantages

Using sidecars and a service mesh gives engineering teams more flexibility. Depending on the application use case, some of these advantages are:

  • Sidecars reduce code complexity by separating APIs with business logic from code handling infrastructure concerns.
  • A service mesh helps secure communication by enforcing network policies at the sidecar layer, isolating the application layer from rogue traffic.
  • Sidecars can run monitoring agents such as Splunk, Fluentd, and Dynatrace, improving application and system observability.

Summary

Sidecars and service meshes are popular choices for modern microservice deployments. Solutions such as Istio and Linkerd automate the service mesh needs of cloud-native deployments across the available compute options. However, adding more components to the overall architecture introduces latency and visibility limitations with each additional hop in the network flow. Managing the service mesh layer also adds operational overhead that engineering teams must factor into their overall architecture decisions.

For feedback, please drop a message to amit[dot]894[at]gmail[dot]com or reach out to any of the links at https://about.me/amit_raj.
