November 20, 2019


Ready to Serve! Speeding-Up Startup Time of Istio-Powered Workloads

Talk Title: Ready to Serve! Speeding-Up Startup Time of Istio-Powered Workloads
Speakers: Etai Lev Ran (System Architect, IBM), Michal Malka (Manager, IBM Cloud Foundations, IBM)
Conference: KubeCon + CloudNativeCon North America
Location: San Diego, CA, USA
Date: Nov 15-21, 2019
URL: Talk Page
Slides: Talk Slides

Pod startup time has long been a focus area for cloud-native platforms. Optimizing startup time is critical to support use cases such as autoscaling, upgrades, and failure recovery. The recent rise of the serverless model, along with its key value proposition of scaling idle workloads to zero, has made pod startup time more important than ever: the platform must be able to start the pod quickly enough that the latency of request-triggered scale-from-zero remains acceptable.

In this talk, we'll analyze the latency that the Istio service mesh adds to pod startup time, from pod creation up to the pod becoming ready to serve requests. We'll also examine various techniques to reduce it, including using Istio CNI to bootstrap the pod's network, launching the sidecar proxy with an initial routing configuration, and using manual sidecar injection.
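As a rough illustration of the manual sidecar injection technique mentioned above (a sketch only; the deployment file name is illustrative), the sidecar can be rendered into the manifest at deploy time with istioctl instead of relying on the mutating admission webhook:

    # Inject the sidecar into the manifest offline, then apply it.
    # Doing the injection ahead of time removes the webhook call from the pod creation path.
    istioctl kube-inject -f my-deployment.yaml | kubectl apply -f -

The trade-off is that the injected manifest must be kept in sync with the mesh version, since upgrades no longer update the sidecar automatically at admission time.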
