NGINX tutorial: Reduce Kubernetes latency with autoscaling

Your organization built an app in Kubernetes and now it’s getting popular! You went from just a few visitors to hundreds (and sometimes thousands) per day. But there’s a problem: the increased traffic is hitting a bottleneck, causing latency and timeouts for your customers. If you can’t improve the experience, people will stop using the app. By Daniele Polencic of learnk8s.

This blog accompanies the lab for Unit 1 of Microservices March 2022 – Architecting Kubernetes Clusters for High‑Traffic Websites – but you can also use it as a tutorial in your own environment (get the examples from our GitHub repo). It demonstrates how to use NGINX Ingress Controller to expose an app and then autoscale the Ingress controller pods in response to high traffic.
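As a rough sketch of the first step, exposing the app means creating an Ingress resource that routes external traffic through NGINX Ingress Controller to the app’s Service. The manifest below is an assumption-laden example, not the tutorial’s exact file: it assumes NGINX Ingress Controller is already installed, that the Podinfo Service is named `podinfo` on its default port 9898, and that `example.com` stands in for your real hostname.

```yaml
# Hypothetical Ingress for the Podinfo app (names and host are assumptions)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
spec:
  ingressClassName: nginx        # route via NGINX Ingress Controller
  rules:
    - host: example.com          # replace with your hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo    # Podinfo Service name (assumed)
                port:
                  number: 9898   # Podinfo's default HTTP port
```

With this in place, requests to `example.com` reach the Podinfo pods through the Ingress controller, which is the component the later challenges then autoscale.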

This tutorial uses these technologies:

  • NGINX Ingress Controller
  • Helm
  • KEDA
  • Locust
  • minikube
  • Podinfo
  • Prometheus

In the final challenge, you build a configuration that autoscales resources as the traffic volume increases. The tutorial uses KEDA for autoscaling, so first you install it and create a policy that defines when and how scaling occurs. As in Challenge 3, you then use Locust to simulate a traffic surge and Prometheus to observe NGINX Ingress Controller performance when autoscaling is enabled.
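To give a feel for what such a KEDA policy looks like, here is a hedged sketch of a `ScaledObject` that scales the Ingress controller Deployment on a Prometheus metric for active NGINX connections. The Deployment name, Prometheus address, metric name, and threshold are all assumptions for illustration; your values depend on how you installed NGINX Ingress Controller and Prometheus.

```yaml
# Illustrative KEDA ScaledObject (names, query, and threshold are assumptions)
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scale
spec:
  scaleTargetRef:
    kind: Deployment
    name: main-nginx-ingress          # hypothetical Ingress controller Deployment
  minReplicaCount: 1                  # never scale below one replica
  maxReplicaCount: 20                 # cap the scale-out
  cooldownPeriod: 30                  # seconds to wait before scaling back down
  pollingInterval: 1                  # seconds between metric checks
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server   # assumed Prometheus Service URL
        query: |
          sum(avg_over_time(nginx_ingress_nginx_connections_active{app="main-nginx-ingress"}[1m]))
        threshold: "100"              # add a replica per ~100 active connections
```

The idea is that when the Locust-generated surge pushes active connections past the threshold, KEDA scales the Ingress controller out, and Prometheus lets you watch latency recover as replicas come online.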

You can also follow the tutorial by watching the accompanying video, and links to further resources and reading are included. Nice one!

Tags nginx kubernetes containers devops servers