November 29, 2019


Autoscale your Kubernetes Workload with Prometheus

Talk Title: Autoscale your Kubernetes Workload with Prometheus
Speaker: Frederic Branczyk (Software Engineer, CoreOS)
Conference: KubeCon + CloudNativeCon Europe
Location: Copenhagen, Denmark
Date: Apr 30-May 4, 2018
URL: Talk Page
Slides: Talk Slides

It's time to autoscale your cloud native deployments, but how do you make it happen? In the past, this was easier said than done: a lack of guidance and inconsistent implementations made autoscaling on Kubernetes a pain, with Heapster's tedious extensibility and difficult maintenance among the causes. Those days are over. In Kubernetes sig-instrumentation, we have developed and standardised the resource and custom metrics APIs, which finally give Kubernetes the autoscaling capabilities it so desperately needed. Frederic Branczyk, software engineer at CoreOS, will explain the history of autoscaling on Kubernetes, elaborate on the design and usage of these newly developed APIs, and describe how they benefit the consistency of autoscaling. He will cover the recommended way to autoscale Kubernetes using Prometheus, and end with a demo showcasing just that.
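To give a flavour of what the custom metrics API enables, here is a minimal sketch of a HorizontalPodAutoscaler scaling on a Prometheus-derived metric. It assumes a Prometheus adapter is deployed and serving the `custom.metrics.k8s.io` API; the metric name, deployment name, and target value are illustrative, not taken from the talk (which predates the stable `autoscaling/v2` API shown here):

```yaml
# Sketch only: assumes prometheus-adapter is installed and exposing a
# per-pod http_requests_per_second metric via custom.metrics.k8s.io.
# All names and values below are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
```

With this in place, the HPA controller queries the custom metrics API rather than Heapster, and scales the deployment so the average per-pod request rate stays near the target.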
