NFVPE Blog Posts

Let’s run Homer on Kubernetes!

I have to say that Homer is a favorite of mine. Homer is VoIP analysis & monitoring – on steroids. Not only has it saved my keister a number of times when troubleshooting VoIP platforms, but it has an awesome (and helpful) open source community. In my opinion – it should be an integral part of your devops plan if you’re deploying VoIP apps (really important to have visibility of your… Ops!). Leif and I are using Homer as part of our (still WIP) vnf-asterisk demo VNF (virtualized network function), and we want to get it all running in OpenShift & Kubernetes. Our goal for this walk-through is to get Homer up and running on Kubernetes, generate some traffic using HEPgen.js, and then view it in the Homer web UI. So – why postpone joy? Let’s use homer-docker to go ahead and get Homer running on Kubernetes.

Continue reading Let’s run Homer on Kubernetes!

How-to use GlusterFS to back persistent volumes in Kubernetes

Storing persistent data is a mountain I keep walking around instead of climbing in my Kubernetes lab – I kept avoiding it. Sure, in a lab I can just throw it all out most of the time. But what about when we really need it? I decided I would use GlusterFS to back my persistent volumes, and I’ve got to say… my experience with GlusterFS was great. I really enjoyed using it, it seems rather resilient, and best of all? It was pretty easy to get going and to operate. Today we’ll spin up a Kubernetes cluster using my kube-centos-ansible playbooks, and use some newly included plays that also set up a GlusterFS cluster. With that in hand, our goal will be to set up persistent volumes and claims to those volumes, and we’ll spin up a MariaDB pod that stores data in a persistent volume – important data that we want to keep, so we’ll make some data about Vermont beer, as it’s very, very important.
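
Purely as an illustration of the pieces involved (not the exact manifests from the walk-through), here’s a minimal sketch, using the official kubernetes Python client, of a GlusterFS-backed PersistentVolume plus a claim for it. The endpoints name, Gluster volume path, and sizes are placeholder assumptions – swap in the values from your own cluster.

    # Sketch: a GlusterFS-backed PersistentVolume plus a claim for it, created
    # with the `kubernetes` Python client. The endpoints name, Gluster volume
    # path ("gv0"), and sizes are placeholders, not values from the post.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="gluster-pv-mariadb"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "2Gi"},
            access_modes=["ReadWriteOnce"],
            glusterfs=client.V1GlusterfsVolumeSource(
                endpoints="glusterfs-cluster",  # Endpoints object listing Gluster nodes
                path="gv0",                     # Gluster volume name
                read_only=False,
            ),
        ),
    )
    v1.create_persistent_volume(body=pv)

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="mariadb-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
        ),
    )
    v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

The MariaDB pod would then mount the "mariadb-data" claim as a volume at its data directory.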

Continue reading How-to use GlusterFS to back persistent volumes in Kubernetes

koko – Connect Containers together with virtual ethernet connections

Let’s dig into koko, created by Tomofumi Hayashi. koko (the name comes from “COntainer COnnector”) is a utility written in Go that gives us a way to connect containers together with “veth” (virtual ethernet) devices – a feature available in the Linux kernel. This allows us to specify the interfaces that the containers use and link them together – all without using Linux bridges. koko has become a cornerstone of the zebra-pen project, an effort I’m involved in to analyze gaps in containerized NFV workloads; specifically, it routes traffic using Quagga, and we set up all the interfaces using koko. The project really took a turn for the better when Tomo came up with koko and we implemented it in zebra-pen. Ready to see koko in action? Let’s jump in the pool!
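
koko itself is a Go binary, but purely to illustrate the underlying mechanism, here’s a Python sketch using pyroute2 that does the conceptual equivalent: create a veth pair and push one end into each container’s network namespace by PID, with no bridge in between. The PIDs and interface names are placeholders, and this approximates the idea rather than reproducing koko’s actual code.

    # Illustration of the veth mechanism koko uses (koko itself is written in
    # Go): create a veth pair and move each end into a container's network
    # namespace, identified by the container's PID. Run as root; PIDs and
    # interface names below are placeholders.
    from pyroute2 import IPRoute

    PID_A = 12345   # e.g. docker inspect -f '{{.State.Pid}}' container_a
    PID_B = 23456

    ipr = IPRoute()
    ipr.link("add", ifname="vetha", peer="vethb", kind="veth")

    idx_a = ipr.link_lookup(ifname="vetha")[0]
    idx_b = ipr.link_lookup(ifname="vethb")[0]

    # Hand one end to each container -- no Linux bridge involved.
    ipr.link("set", index=idx_a, net_ns_pid=PID_A)
    ipr.link("set", index=idx_b, net_ns_pid=PID_B)
    ipr.close()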

Continue reading koko – Connect Containers together with virtual ethernet connections

You had ONE JOB — A Kubernetes job.

Let’s take a look at how Kubernetes jobs are crafted. I had been jamming workaround shell scripts into the entrypoints of some containers in the vnf-asterisk project that Leif and I have been working on. That’s not ideal when we can use Kubernetes jobs, or in their new parlance, “run to completion finite workloads” (I’ll stick to calling them “jobs”). They’re one-shot containers that do one thing and then end (sort of like a systemd “oneshot” unit, at least in the way we’ll use them today). I like the idea of using them to handle some service discovery for me when other pods are coming up. Today we’ll fire up a pod, spin up a job to discover that pod (by querying the API for info about it), and put that info into etcd. Let’s get the job done.
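
To make the shape of a job concrete, here’s a minimal sketch built with the kubernetes Python client. The image and command are placeholders standing in for the real discovery container, not the manifest from the post.

    # Minimal sketch of a run-to-completion Job, built with the `kubernetes`
    # Python client. The image and command are placeholders for the real
    # discovery container (which queries the API and writes to etcd).
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="discover-pod"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",   # a job's pods run once, then stop
                    containers=[
                        client.V1Container(
                            name="discover",
                            image="centos:7",
                            command=["/bin/sh", "-c", "echo discovery goes here"],
                        )
                    ],
                )
            ),
        ),
    )
    batch.create_namespaced_job(namespace="default", body=job)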

Continue reading You had ONE JOB — A Kubernetes job.

A (happy happy joy joy) ansible-container hello world!

Today we’re going to explore ansible-container, a project that brings an Ansible workflow to Docker. It provides a way to build and manage container images using Ansible (so you can avoid a bunch of dirty, bash-y Dockerfiles), along with a specification of “services” that is eerily similar (on purpose) to docker-compose. It also has paths forward for managing instances of these containers on Kubernetes & OpenShift – that’s pretty tight. We’ll build two images, “ren” and “stimpy”, which contain nginx and serve up some Ren & Stimpy quotes so we can get a grip on how it’s all put together. It’s better than bad – it’s good!

Continue reading A (happy happy joy joy) ansible-container hello world!

Kuryr-Kubernetes will knock your socks off!

Seeing kuryr-kubernetes in action in my “Dr. Octagon NFV laboratory” has got me feeling that barefoot feeling – it has completely knocked my socks off. Kuryr-Kubernetes provides Kubernetes integration with OpenStack networking, and today we’ll walk through the steps to get your own instance of it up and running so you can check it out for yourself. We’ll spin up kuryr-kubernetes with devstack, create some pods and a VM, inspect Neutron, and verify the networking is working like a charm.
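
As a taste of the “inspect Neutron” step, here’s a small sketch using the openstacksdk library to list Neutron ports and their IPs, so you can match them up against your pods. The cloud name is an assumed clouds.yaml entry, not necessarily what your devstack run produces.

    # Sketch of the "inspect Neutron" step: list Neutron ports with
    # openstacksdk and print their addresses, to compare against pod IPs.
    # "devstack-admin" is a placeholder clouds.yaml entry.
    import openstack

    conn = openstack.connect(cloud="devstack-admin")

    for port in conn.network.ports():
        ips = [fixed["ip_address"] for fixed in (port.fixed_ips or [])]
        print(f"{port.id}  {port.name or '(unnamed)'}  {', '.join(ips)}")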

Continue reading Kuryr-Kubernetes will knock your socks off!

TripleO Quickstart deployments on baremetal using TOAD

This article is going to cover how to deploy TripleO Quickstart on baremetal. The undercloud will still be virtualized, but the controller and compute nodes will be deployed on baremetal. This post belongs to a series; to learn more about TOAD and tripleo-quickstart, please read http://teknoarticles.blogspot.com/2017/02/automated-osp-deployments-with-tripleo.html and http://teknoarticles.blogspot.com/2017/02/describing-cira-continuous-integration.html. Requirements – Hardware: one baremetal server to act as the Jenkins slave and host the virtualized undercloud (a multi-core CPU, 16GB of RAM and 60GB of disk is the recommended setup), plus one server for each controller/compute to be deployed, each with at least 8GB of RAM. Network: IPMI access…

Continue reading TripleO Quickstart deployments on baremetal using TOAD

So you want to expose a pod to multiple network interfaces? Enter Multus-CNI

Sometimes, one isn’t enough. Especially when you’ve got network requirements that aren’t just “your plain old HTTP API”. By default in Kubernetes, a pod is exposed only to a loopback and a single interface assigned by your pod network. In the telephony world, something we love to do is isolate our signalling, media, and management networks. If you’ve got those on separate NICs on your container host, how do you expose them to a Kubernetes pod? Let’s plug the CNI (container network interface) plugin called multus-cni into our Kubernetes cluster, and we’ll expose multiple network interfaces to a (very simple) pod.
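
To give a rough feel for the idea, here’s a sketch that composes a delegates-style multus CNI config in Python and writes it into the CNI config directory. The plugin choices, the “delegates”/“masterplugin” fields, the subnet, and the file path are written from memory of the early multus format – treat them as assumptions and check the multus-cni README for the exact schema your version expects.

    # Rough sketch of the idea behind multus-cni: one CNI config that delegates
    # to several plugins, one per pod interface. Field names, subnet, and path
    # are assumptions from the early multus format, not a verified schema.
    import json

    multus_conf = {
        "name": "multus-demo",
        "type": "multus",
        "delegates": [
            {   # extra interface: macvlan riding on a dedicated NIC on the host
                "type": "macvlan",
                "master": "eth1",
                "ipam": {"type": "host-local", "subnet": "192.168.122.0/24"},
            },
            {   # the "main" pod network (flannel here), which keeps cluster traffic working
                "type": "flannel",
                "masterplugin": True,
                "delegate": {"isDefaultGateway": True},
            },
        ],
    }

    with open("/etc/cni/net.d/10-multus.conf", "w") as f:
        json.dump(multus_conf, f, indent=2)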

Continue reading So you want to expose a pod to multiple network interfaces? Enter Multus-CNI

Let’s spin up k8s 1.5 on CentOS (with CNI pod networking, too!)

Alright, so you’ve seen my blog post about installing Kubernetes by hand on CentOS. Now let’s make that easier and do it with an Ansible playbook, specifically my kube-centos-ansible playbook. This time we’ll have Kubernetes 1.5 running on a cluster of 3 VMs, and we’ll use weave as a CNI plugin to handle our pod network. And to make it more fun, we’ll even expose some pods to the ‘outside world’, so we can actually (kinda) do something with them. Ready? Let’s go!
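
One common way of exposing pods to the outside world is a NodePort service; here’s a minimal sketch using the kubernetes Python client, where the label selector, ports, and node port are placeholders rather than the values from the playbook walk-through.

    # Sketch: expose pods labelled app=nginx outside the cluster via a NodePort
    # service, using the `kubernetes` Python client. Label, ports, and node
    # port are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="nginx-external"),
        spec=client.V1ServiceSpec(
            type="NodePort",
            selector={"app": "nginx"},
            ports=[client.V1ServicePort(port=80, target_port=80, node_port=30080)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=svc)
    # The pods are then reachable on port 30080 of any node's IP.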

Continue reading Let’s spin up k8s 1.5 on CentOS (with CNI pod networking, too!)

Let’s (manually) run k8s on CentOS!

So sometimes it’s handy to have plain old Kubernetes running on CentOS 7, whether for development purposes or to check out something new. Our goal today is to install Kubernetes by hand on a small cluster of 3 CentOS 7 boxen. We’ll spin up some libvirt VMs running CentOS generic cloud images, get Kubernetes running on those, and then run a test pod to prove it works. Along the way, this also gives you some exposure to the components that are running ‘under the hood’.
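
For the “test pod” step, something along these lines works as a smoke test; this sketch uses the kubernetes Python client with a placeholder pod name and image, rather than the exact pod from the post.

    # Sketch of a smoke test: start a throwaway nginx pod with the `kubernetes`
    # Python client and poll until it reports Running. Name and image are
    # placeholders.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="smoke-test"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="nginx", image="nginx:alpine")]
        ),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)

    while True:
        phase = v1.read_namespaced_pod(name="smoke-test", namespace="default").status.phase
        print("pod phase:", phase)
        if phase == "Running":
            break
        time.sleep(2)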

Continue reading Let’s (manually) run k8s on CentOS!
