
Category: Doug’s Posts

A Kubernetes Operator Tutorial? You got it, with the Operator-SDK and an Asterisk Operator!

So you need a Kubernetes Operator tutorial, right? I sure did when I started. So guess what? I got that b-roll! In this tutorial, we're going to use the Operator SDK – and I definitely got myself up-and-running by following the Operator Framework User Guide. Once we have all that set up – oh yeah! – we're going to run a custom Operator, one designed for Asterisk: it can spin up Asterisk instances, discover them as services, and dynamically create SIP trunks between n-number-of instances of Asterisk so they can all reach one another to make calls. Fire up your terminals, it's time to get moving with Operators.
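If you want a preview of the shape of things, here's a minimal sketch of scaffolding an operator with the SDK. The project name, API group, and image name here are all hypothetical, and the exact flags and generated file names vary between SDK releases, so treat the user guide as the source of truth:

```bash
# Scaffold a new operator project (flag syntax from the early SDK releases).
operator-sdk new asterisk-operator --api-version=voip.example.com/v1alpha1 --kind=Asterisk
cd asterisk-operator

# Build the operator image and push it somewhere your cluster can pull from.
operator-sdk build docker.io/example/asterisk-operator:latest
docker push docker.io/example/asterisk-operator:latest

# Register the custom resource definition, then deploy the operator itself.
kubectl create -f deploy/crd.yaml
kubectl create -f deploy/operator.yaml
```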


Spin up a Kubernetes cluster on CentOS, a choose-your-own-adventure

So you want to install Kubernetes on CentOS? Awesome, I've got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose not to install Kubernetes, skip to the very bottom of the article. I've got just the recipe for you to brew it up. It's been a year since my last article on installing Kubernetes on CentOS, and while that one is probably still useful, some of the Ansible playbooks we were using have changed significantly. Today we'll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (we'll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything's working swimmingly.
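As a rough sketch of where we're headed, the whole flow is clone, run two playbooks, poke at a pod. The inventory paths below are placeholders for your own environment, and playbook names can shift between releases of the repo:

```bash
git clone https://github.com/redhat-nfvpe/kube-ansible.git && cd kube-ansible
ansible-galaxy install -r requirements.yml

# Provision VMs on your virtualization host (point it at your own inventory).
ansible-playbook -i inventory/your-virthost.inventory playbooks/virthost-setup.yml

# Install Kubernetes (with Flannel as the CNI plugin) on the resulting VMs.
ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml

# Sanity check: run a throwaway pod and make sure it lands somewhere.
kubectl run test-nginx --image=nginx --restart=Never
kubectl get pods -o wide
```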


Kubernetes multiple network interfaces — but! With different configs per pod; Multus CNI has your back.

You need multiple network interfaces in each pod – because you, like me, have some more serious networking requirements for Kubernetes than your average bear. The thing is, if you have different specifications for each pod – which network interfaces each pod should have based on its role – you were previously fairly limited. At least with my previous (and somewhat dated) method of using Multus CNI (a CNI plugin that enables multiple interfaces per pod), you could only apply the same configuration to all pods (or, at best, vary the CNI config per box). Thanks to Kural and crew, Multus now includes the functionality to use Kubernetes custom resources, defined by CRDs ("custom resource definitions"), which are a way to extend the Kubernetes API. Today we'll take advantage of that functionality: the CRD implementation in Multus lets us specify exactly which network interfaces each pod gets, based on annotations attached to the pod. Our goal here will be to spin up a Kubernetes cluster complete with Multus CNI (including the CRD functionality), then spin up pods – some with a single interface, some with multiple – and inspect them.
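To give a taste of the CRD functionality, here's a minimal sketch in the NetworkAttachmentDefinition form Multus has since standardized on. The exact kind and apiVersion have changed over Multus's history, so match this against your installed version; the macvlan master interface and subnet are assumptions:

```bash
# Define an additional network as a custom resource...
cat <<'EOF' | kubectl create -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
EOF

# ...then attach it to a pod by annotation; pods without the annotation
# keep only their default (e.g. Flannel) interface.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-interface-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
EOF

# Inspect: you should see eth0 plus an additional net1 interface.
kubectl exec multi-interface-pod -- ip addr
```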


Are you exhausted? IPv4 almost is — let's set up an IPv6 lab for Kubernetes

It's no secret that IPv4 exhaustion is inevitable. And it's not just tired (ba-dum-ching!). Since we're a bunch of Kubernetes fans, and we're networking fans, we really want to check out what we can do with IPv6 in Kubernetes. Thanks to some slinky automation my colleague Feng Pan contributed to kube-centos-ansible, implementing some creative work by leblancd, we have a simple setup today: we're going to deploy Kubernetes with custom binaries from leblancd, have two pods (ideally on different nodes) ping one another with ping6, and declare victory! In the future let's hope to iterate on what's necessary to get full IPv6 functionality in Kubernetes.
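The victory condition is simple enough to sketch up front: two pods, one ping6. The image and pod names here are arbitrary:

```bash
# Two throwaway pods; ideally the scheduler puts them on different nodes.
kubectl run pod-one --image=busybox --restart=Never -- sleep 3600
kubectl run pod-two --image=busybox --restart=Never -- sleep 3600

# Pull pod-two's (IPv6) pod address and ping it from pod-one.
POD_TWO_IP=$(kubectl get pod pod-two -o jsonpath='{.status.podIP}')
kubectl exec pod-one -- ping6 -c 4 "$POD_TWO_IP"
```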


Ghost Riding The Whip — A complete Kubernetes workflow without Docker, using CRI-O, Buildah & kpod

It is my decree that whenever you use Kubernetes without Docker you are officially "ghost riding the whip" – maybe even "ghost riding the kube". (Well, I'm from Vermont, so I'm more like "ghost riding the combine".) And again, we're running Kubernetes without Docker, but this time? We've got an entire workflow without Docker: from image build, to running containers, to inspecting those running containers. Thanks to the good folks from the OCI project and Project Atomic, we've got kpod for working with running containers, and we've got buildah for building our images. And of course, don't leave out CRI-O, which makes the magic happen to get it all running in Kube without Docker. Fire up your terminals, because you're about to ghost ride the kube.
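To set expectations, here's roughly what the Docker-free loop looks like. (kpod has since been folded into podman, so command names depend on vintage; the image tag is arbitrary.)

```bash
# Build an image from a Dockerfile -- no Docker daemon involved.
buildah bud -t example/my-web:latest .

# List the images we have locally.
kpod images

# With the kubelet pointed at CRI-O, pods run without Docker; inspect
# the resulting containers directly on the node.
kpod ps --all
```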


Ratchet CNI — Using VXLAN for network isolation for pods in Kubernetes

In today's episode we're looking at Ratchet CNI, an implementation of Koko – but in CNI, the container networking interface that Kubernetes uses for creating network interfaces. The idea is that the network interface creation can be performed by Kubernetes via CNI. Specifically, we're going to create some network isolation of links between containers to demonstrate a series of "cloud routers". We can use the capabilities of Koko to create veth connections between containers when they're local to the same host, and VXLAN tunnels to containers across hosts. Our goal today will be to install and configure Ratchet CNI on an existing cluster, verify it's working, and then install a cloud router setup based on zebra pen (a cloud router demo).
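Under the hood, CNI keeps things refreshingly simple: a plugin is just a binary on each node plus a JSON config the kubelet reads. So installing and verifying a plugin like Ratchet mostly comes down to poking at the conventional paths (defaults shown; your distribution may vary):

```bash
# CNI plugin binaries live here by convention -- ratchet gets dropped in too.
ls -l /opt/cni/bin

# The kubelet reads CNI configs from here, in lexical order.
cat /etc/cni/net.d/*.conf

# Once pods are up, verify the cross-host isolation links from a node.
ip -d link show type vxlan
```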


Be a hyper spaz about a hyperconverged GlusterFS setup with dynamically provisioned Kubernetes persistent volumes

I'd recently brought up my GlusterFS for persistent volumes in Kubernetes setup, and I noticed something errant: I had to REALLY baby the persistent volumes. That didn't sit right with me, so I refactored the setup to use gluster-kubernetes to hook up a hyperconverged setup. This improves on the previous setup by having the Gluster daemon itself run in Kubernetes pods, which is just feeling so fresh and so clean. The difference being that OutKast is smooth and cool – and I'm an excited spaz about this technology. gluster-kubernetes also deploys heketi, an API for GlusterFS volume management that Kube can use to give us dynamic provisioning. Our goal today is to spin up Kube (using kube-centos-ansible) with gluster-kubernetes for dynamic provisioning, and then we'll validate it with master-slave replication in MySQL, to one-up our simple MySQL setup from the last article.
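Dynamic provisioning bottoms out in a StorageClass that points Kube at heketi's REST endpoint, plus PVCs that reference it. A minimal sketch (the resturl and sizes are assumptions; gluster-kubernetes generates the real values for your cluster):

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.1.10:8080"   # heketi endpoint -- yours will differ
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: glusterfs-storage
  resources:
    requests:
      storage: 5Gi
EOF

# A bound PVC here means heketi carved out a volume, no babying required.
kubectl get pvc mysql-data
```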


Chainmail of NFV (+1 Dexterity) — Service Chaining in Containers using Koko & Koro

In this episode we're going to do some "service chaining" in containers, facilitated by Tomofumi Hayashi's work in creating koko and koro. Koko (the "container connector") gives us the ability to connect a network between containers (with veth, vxlan, or vlan interfaces) in an isolated way – and it creates multiple interfaces for our containers too, which is what allows us to chain them. Then we can use koro (the "container routing" tool) to manipulate those network interfaces, and specifically their routing, in order to chain them together – and later to further manipulate routing and IP addressing to handle changes to the chain. Our goal today will be to connect four containers in a chain of services going from an HTTP client, to a firewall, through a router, terminating at a web server. Once we have that chain together, we'll intentionally cause a failure of a service and then repair it using koro.
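To make the koko half concrete, here's a minimal sketch of wiring two local containers together with a veth pair. Container names and addressing are made up, and the flag syntax below follows koko's README at the time, so double-check it against the version you build:

```bash
# Two plain containers to act as neighboring links in the chain.
docker run -dt --name service-a centos:7 bash
docker run -dt --name service-b centos:7 bash

# koko's -d flag takes "container,ifname,address" -- this veth-pairs them.
koko -d service-a,link0,10.1.1.1/24 -d service-b,link0,10.1.1.2/24

# Prove the link is up; koro would take over from here to steer the routes.
docker exec service-a ping -c 2 10.1.1.2
```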


Using Koko to create vxlan interfaces for cross-host container network isolation — and cross-connecting them with VPP!

I've blogged about koko in the past – the container connector. Thanks to the awesome work put forward by my associate Tomofumi Hayashi, today we can run it and connect to FD.io VPP (vector packet processing), which is used for a fast data path, something we're quite interested in within the NFV space. We're going to set up vxlan links between containers (on separate hosts) back to a VPP forwarding host, where we'll create cross-connects to forward packets between those containers. As a bonus, we'll also compile koro, an auxiliary utility to use with koko for "container routing", which we'll use in a following companion article. Put your gloves on and start up your terminals, we're going to put our hands right on it and have it all up and running.
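The VPP side is worth a quick preview. Assuming made-up addresses and VNIs, the forwarding host terminates one VXLAN tunnel per container host and cross-connects the resulting interfaces (VPP names them vxlan_tunnelN in creation order, and the xconnect has to be set in both directions):

```bash
# On the VPP host: one tunnel per container host (addresses/VNIs made up).
vppctl create vxlan tunnel src 192.168.2.10 dst 192.168.2.11 vni 11
vppctl create vxlan tunnel src 192.168.2.10 dst 192.168.2.12 vni 12

# Cross-connect the two tunnel interfaces so packets forward between them.
vppctl set interface l2 xconnect vxlan_tunnel0 vxlan_tunnel1
vppctl set interface l2 xconnect vxlan_tunnel1 vxlan_tunnel0

vppctl show interface
```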


Any time in your schedule? Try using a custom scheduler in Kubernetes

I've recently been interested in the idea of extending the scheduler in Kubernetes. There are a number of reasons why, but at the top of my list is re-scheduling failed pods based on custom metrics – specifically for high-performance high availability, like we need in telecom. In my search to learn more about it, I discovered the Kube docs for configuring multiple schedulers, and even better, a practical application: a toy scheduler created by the one-and-only kube-hero Kelsey Hightower. It's about a year old, and Hightower is on his game, so he was using alpha functionality at the time of authoring. In this article I modernize at least one component to get it running in the contemporary day. Today our goal is to run through the toy scheduler and have it schedule a pod for us. We'll also dig into Kelsey's Go code for the scheduler a little bit to get an intro to what he's doing.
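The pod side is the modernized bit: instead of the alpha annotation Kelsey used, current Kube lets a pod name its scheduler right in the spec. A sketch, assuming the toy scheduler registers under the name "hightower" (check the repo for the actual name):

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-nginx
spec:
  schedulerName: hightower   # must match the name the custom scheduler watches
  containers:
  - name: nginx
    image: nginx
EOF

# Until the custom scheduler binds it to a node, the pod just sits Pending.
kubectl get pod custom-scheduled-nginx -o wide
```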
