December 14, 2019

457 words 3 mins read

Lightning Talk: CRI Proxy: Solving the Chicken-and-Egg Problem of Running a CRI Implementation as a DaemonSet [I]


Talk Title Lightning Talk: CRI Proxy: Solving the Chicken-and-Egg Problem of Running a CRI Implementation as a DaemonSet [I]
Speakers Piotr Skamruk (Software Engineer, Travelping)
Conference KubeCon + CloudNativeCon North America
Conf Tag
Location Austin, TX, United States
Date Dec 4-8, 2017
URL Talk Page
Slides Talk Slides
Video

CRI allows for special-purpose CRI implementations such as Virtlet, which makes it possible to run VMs as if they were containers. Still, deploying these CRI implementations may bring us back to pre-container days: we run into problems with additional required software such as libvirt, the need to configure the operating system on each node differently, and so on. Upgrading the CRI implementation apps is also problematic, because unlike other components they require special treatment. It would be nice if we could use the deployment power of k8s to install these apps on some of the nodes.

Further complicating matters, if your CRI doesn't support Docker images and is too different from Docker, you need to install Kubernetes components such as kube-proxy and a CNI plugin in a special way, meaning you have to prepare special-purpose CRI nodes very carefully. Even a quick demo of your CRI on a cluster deployed with a popular tool such as kubeadm may require tweaking the node config a bit. A DaemonSet seems like the right choice for deploying a CRI implementation, but here we run into the chicken-and-egg problem: a CRI implementation must already be running on the node in order to run any pods there.

Enter CRI Proxy. CRI requests that deal with plain pods are handled by the primary CRI implementation (such as docker-shim), while requests that are marked in a special way (using pod annotations and image name conventions) are directed to the special-purpose CRI implementation. This way, the deployment headache almost goes away: all you have to do is install CRI Proxy on the node, and the proxy has minimal dependencies. For demo installations, the proxy provides a "bootstrap" mode, which automagically installs CRI Proxy on clusters set up with kubeadm, and possibly some other cluster setup tools, too.
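The routing idea can be sketched in a few lines of Go. This is a minimal illustration, not the proxy's actual code: the annotation key and image name prefix below are assumptions chosen to mirror Virtlet's conventions, and a real proxy would dispatch full CRI gRPC requests rather than return backend names.

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative constants; the real annotation key and image prefix
// are configuration details of the proxy and CRI implementation.
const (
	runtimeAnnotation = "kubernetes.io/target-runtime" // assumed pod annotation key
	vmImagePrefix     = "virtlet.cloud/"               // assumed image name convention
)

// backendForPod picks a CRI backend for pod-level requests based on
// the pod's annotations: specially marked pods go to the
// special-purpose implementation, everything else to the primary one.
func backendForPod(annotations map[string]string) string {
	if rt, ok := annotations[runtimeAnnotation]; ok && rt == "virtlet.cloud" {
		return "virtlet" // special-purpose CRI implementation
	}
	return "docker-shim" // primary CRI implementation
}

// backendForImage routes image-service requests by image name convention,
// since image pulls carry no pod annotations.
func backendForImage(image string) string {
	if strings.HasPrefix(image, vmImagePrefix) {
		return "virtlet"
	}
	return "docker-shim"
}

func main() {
	fmt.Println(backendForPod(map[string]string{runtimeAnnotation: "virtlet.cloud"}))
	fmt.Println(backendForImage("virtlet.cloud/cirros"))
	fmt.Println(backendForImage("nginx:latest"))
}
```

Because both backends speak the same CRI gRPC interface, the kubelet needs no changes; it talks to the proxy's socket as if it were a single runtime.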
(If we have time, I may also say a few words about hyper's approach: they have something like CRI Proxy built into their CRI implementation, which solves the problem of running k8s components on the node, though it doesn't help much with the deployment problem.)
