Contents

Exploring Linkerd

A service mesh like Linkerd is a tool for adding observability, security, and reliability features to “cloud native” applications by transparently inserting this functionality at the platform layer rather than the application layer. In this post we will explore some of the more impressive features that Linkerd has to offer.

Getting Started

Linkerd has one of the simplest and most impressive getting started guides I have ever seen. Rather than trying to replicate or improve it, I am instead going to suggest you start with their official getting started guide.

I will then use this post to cover some features which the getting started guide does not, such as tracing, certificate rotation, and routes. Each section assumes that you are starting with a clean cluster and that Linkerd is not yet installed. We will also be making use of the Linkerd Emojivoto demo app to demonstrate some of the functionality.

Tracing (Jaeger)

During the getting started guide you were introduced to the viz extension for Linkerd. Getting Jaeger running follows similar steps, as we will be using the linkerd-jaeger extension. This extension consists of three components: the collector, the Jaeger backend (with the option of using your own backend), and lastly the jaeger-injector.

The collector receives the spans from the mesh and your application and forwards them to the backend you have configured. The jaeger-injector configures the Linkerd proxies to emit the spans.

Instrumentation Required
Please keep in mind that for tracing to work, your application must be instrumented with a tracing library (such as OpenCensus, which Emojivoto uses, or OpenTelemetry) and must propagate trace context headers between services.

Let’s install Linkerd using the following command:

linkerd install | kubectl apply -f -

# You can check the status of the install with
linkerd check

Once the Linkerd installation is done, we will also install the viz extension, setting an additional option so that we can access Jaeger from within the Linkerd dashboard:

linkerd viz install --set jaegerUrl=jaeger.linkerd-jaeger:16686 | kubectl apply -f -

# You can check the status of the install with
linkerd viz check

Run the following command to install the linkerd-jaeger extension:

linkerd jaeger install | kubectl apply -f -

# You can check the status of the install with
linkerd jaeger check

Now that all the Linkerd components are installed and running, we can install the Emojivoto application with the following command:

linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

By default Emojivoto does not emit spans; it requires the OC_AGENT_HOST environment variable to be set. Run the following command to patch the deployments and add the required environment variable:

kubectl -n emojivoto set env --all deploy OC_AGENT_HOST=collector.linkerd-jaeger:55678

For this demonstration we will also install an ingress controller so that we can see the trace from the moment a request enters the cluster. Run the following commands to install ingress-nginx and create an ingress rule for the Emojivoto app:

# Add the ingress-nginx Helm repo (skip if you already have it)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install ingress-nginx and enable tracing
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.config.enable-opentracing="true" \
  --set controller.config.zipkin-collector-host=collector.linkerd-jaeger

# Create ingress rule
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: emojivoto-web-ingress
  namespace: emojivoto
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: web-svc
      port:
        number: 80
EOF

Now we are finally ready to see what tracing can do for us. Get the IP of the ingress controller with the following command:

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

The Emojivoto app does come with a load generator, but if you want to see some spans originating from the ingress, use the IP you just retrieved and go vote on some emojis. Once you are ready to explore Jaeger, run linkerd jaeger dashboard and you should see something like this:

/images/jaeger-dashboard.png
Jaeger Dashboard

To view the details of a request, select the nginx service on the left and then click “Find Traces”. Once you click on a trace you should see something like this:

/images/jaeger-trace.png
Jaeger Trace View

This trace shows you exactly where your request went and how long each step took. You will notice that there are also a lot of “linkerd-proxy” entries: these are the Linkerd proxy sidecars sitting in the application pods. From this view you can drill in a little further and view span details:

/images/jaeger-span.png
Jaeger Span Details

The last thing to show is the linkerd-jaeger integration with the viz dashboard. Open the dashboard with linkerd viz dashboard. Once open, change the namespace to emojivoto and browse the deployments. You should see the Jaeger logo next to the Grafana logo on each row:

/images/viz-jaeger-integration.png
Jaeger Integration

Clicking on this icon will open the Jaeger dashboard to a filtered view for the selected deployment.

Service Profiles (Routes)

Service profiles provide Linkerd with additional information about a service. A service profile is configured via a Kubernetes custom resource (CRD). Once you have configured a service profile for a service, Linkerd can provide per-route metrics, and you are able to configure additional per-route features such as retries and timeouts.
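To make the shape of the resource concrete, here is a minimal sketch of a ServiceProfile, applied the same way as the other manifests in this post. The service name, namespace, and route are hypothetical placeholders; the FQDN-style metadata name is what ties the profile to its service:

```shell
# A minimal, hypothetical ServiceProfile (placeholder names throughout)
cat <<EOF | kubectl apply -f -
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # The name must be the fully qualified domain name of the service
  name: my-svc.my-namespace.svc.cluster.local
  namespace: my-namespace
spec:
  routes:
  - name: GET /api/list   # Label shown in the per-route metrics
    condition:
      method: GET
      pathRegex: /api/list
EOF
```

Requests that match a route's condition are aggregated under that route's name in the dashboard and CLI output; anything unmatched shows up under a default route.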

Creating Service Profiles

Let’s start by installing Linkerd and the viz extension:

# Install LinkerD
linkerd install | kubectl apply -f -

# You can check the status of the install with
linkerd check

# Install viz extensions
linkerd viz install | kubectl apply -f -

# You can check the status of the install with
linkerd viz check

We will again be making use of the Emojivoto app for our demo. Run the following command to install Emojivoto and inject Linkerd:

linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

Once the application is up and running, open the Linkerd viz dashboard with linkerd viz dashboard. Change the namespace to emojivoto, select Pods, click on any pod, and switch to the Route Metrics tab. Right now you should see something like this:

/images/linkerd-empty-routes.png
Default Route Metrics

As you can see, there is not much detail here. To get more out of this view we need to create our service profiles. There are a few ways to go about this: we can automatically create the service profile from network traffic to the service, automatically generate it from Protobuf or OpenAPI specifications, or, the much less impressive and fun way, write it manually from a template. The last option will not be covered in this post.
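For completeness, if you ever do want the manual route, the CLI can print a blank profile to start from; the --template flag here is taken from the linkerd profile help text, so verify it against your CLI version:

```shell
# Print an empty ServiceProfile template for web-svc to edit by hand
linkerd profile -n emojivoto web-svc --template
```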

From Traffic

To create a service profile from observed traffic run the following command:

linkerd viz profile -n emojivoto web-svc --tap deploy/web --tap-duration 10s | kubectl apply -f -

This will generate a service profile from the traffic observed over a 10 second period. The risk of this approach is that not all available routes will receive traffic during that period, leaving your service profile incomplete.

This method is a great starting point if you do not have Protobuf or OpenAPI specifications; if you do have access to the specifications, rather use the method below.

From OpenAPI/Protobuf Specification

Linkerd provides some handy commands to automatically convert an OpenAPI or Protobuf specification into a complete service profile. The commands are as follows:

# Protobuf
linkerd profile -n <service namespace> --proto <protobuf file> <service name>

# OpenAPI
linkerd profile -n <service namespace> --open-api <openapi/swagger file> <service name>

Emojivoto makes use of gRPC, and we can download the .proto files from the official GitHub repo. To configure the service profiles from these files, run the following commands:

# Download the files
wget https://raw.githubusercontent.com/BuoyantIO/emojivoto/main/proto/Emoji.proto
wget https://raw.githubusercontent.com/BuoyantIO/emojivoto/main/proto/Voting.proto

# Generate the service profiles
linkerd profile -n emojivoto --proto Voting.proto voting-svc | kubectl apply -f -
linkerd profile -n emojivoto --proto Emoji.proto emoji-svc | kubectl apply -f -

Now when you visit the Route Metrics tab of a deployment in the Linkerd dashboard, you should see something more like this:

/images/linkerd-route-metrics.png
Route Metrics

Retries

Now that we have service profiles we can do some cool things, the first of which is automatic retries. Edit the voting-svc.emojivoto.svc.cluster.local service profile (kubectl -n emojivoto edit sp/voting-svc.emojivoto.svc.cluster.local) and change the VoteDoughnut route to match the following:

  - condition:
      method: POST
      pathRegex: /emojivoto\.v1\.VotingService/VoteDoughnut
    isRetryable: true # Add this line
    name: VoteDoughnut

That’s it. Now you have automatic retries on that route. You can confirm this by running the following command:

linkerd viz routes -n emojivoto deployment/web --to service/voting-svc -o wide

If it is working as expected, you should notice that the EFFECTIVE_RPS and ACTUAL_RPS values differ: the ACTUAL columns include the retried requests.

By default Linkerd will add at most 20% additional load to the service, plus 10 “free” retries per second. For example, at 100 incoming requests per second the proxies would issue at most 10 + (100 × 0.2) = 30 retries per second. This can be modified by adding a retryBudget to the service profile:

spec:
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s

Timeouts

The second cool thing we can do is tell Linkerd how long it should wait for a response before returning a 504. By default this timeout is 10s, and it includes retries. To add a timeout, simply edit the service profile like we did for the retries:

  - condition:
      method: POST
      pathRegex: /emojivoto\.v1\.VotingService/VoteDoughnut
    timeout: 300ms # Add this line
    name: VoteDoughnut

Timeouts count as failures in the stats that Linkerd provides. It is also worth noting that the effective request rate can be higher than the actual request rate when timeouts occur, because Linkerd does not count timed-out requests toward the actual request rate.

Automatic Certificate Rotation

Before we cover automatic certificate rotation, we need to remind ourselves of the Linkerd architecture.

/images/linkerd-control-plane.png
Linkerd Architecture (source: Linkerd Architecture docs)

Linkerd offers automatic mTLS between services. This feature uses a trust anchor, plus an issuer certificate and private key, to automatically rotate the TLS certificates for the data plane proxies every 24 hours. However, the issuer credentials used to sign these certificates are not rotated by default.

In this example we will be using Cert-Manager to rotate the issuer certificate and private key. Make sure that your cluster is clean and that you have not installed Linkerd; we need to set a few things up before installing it.

# Install Cert-Manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

# Create namespace
kubectl create ns linkerd

# Generate the required certs and keys
# We will be using step for this: https://smallstep.com/cli/
step certificate create root.linkerd.cluster.local ca.crt ca.key --profile root-ca --no-password --insecure

# Create tls secret
kubectl -n linkerd create secret tls linkerd-trust-anchor --cert=ca.crt --key=ca.key

Now that Cert-Manager is installed and running and we have our trust anchor, we can create an Issuer referencing it:

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: linkerd-trust-anchor
  namespace: linkerd
spec:
  ca:
    secretName: linkerd-trust-anchor
EOF

The final configuration step is to create a Certificate resource which will use the Issuer we just created to issue our new certificates:

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  dnsNames:
  - identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
EOF

Cert-Manager should now have issued new TLS credentials and stored them in the secret named linkerd-identity-issuer. We can use this secret while installing Linkerd:

linkerd install --identity-external-issuer | kubectl apply -f -

# You can check the status of the install with
linkerd check

The additional --identity-external-issuer flag tells Linkerd to use the credentials stored in the linkerd-identity-issuer secret. And that’s it: you now have automatic rotation of the issuer certificate.
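To confirm that Linkerd picked up the Cert-Manager-issued credentials, and to watch them change when Cert-Manager renews them, one option is to inspect the certificate stored in the secret. A sketch, assuming openssl is available locally:

```shell
# Print the subject and validity window of the current issuer certificate
kubectl -n linkerd get secret linkerd-identity-issuer \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates

# Linkerd's own health checks also validate the identity certificate chain
linkerd check --proxy
```

After a renewal, re-running the first command should show a new notBefore/notAfter window.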

Trust Anchor
The trust anchor itself still needs to be rotated manually. Luckily, Linkerd documents a zero-downtime method for doing this.