
NFVPE Blog Posts

Setup an NFS client provisioner in Kubernetes

One of the most common needs when deploying Kubernetes is the ability to use shared storage. While there are several options available, one of the most common and easiest to set up is an NFS server. This post will explain how to set up a dynamic NFS client provisioner on Kubernetes, relying on an existing NFS server on your systems. Step 1. Set up an NFS server (sample for CentOS) The first thing you will need, of course, is an NFS server. This can be achieved in a few easy steps: Install the nfs package:…
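As a taste of that first step, here is a minimal sketch of bringing up an NFS export on CentOS; the export path and options are only illustrative, and the full post covers the details:

    # Install and enable the NFS server, then export a directory (path/options are examples)
    sudo yum install -y nfs-utils
    sudo mkdir -p /srv/nfs/kubedata
    echo "/srv/nfs/kubedata *(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
    sudo systemctl enable --now nfs-server
    sudo exportfs -rav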


A Kubernetes Operator Tutorial? You got it, with the Operator-SDK and an Asterisk Operator!

So you need a Kubernetes Operator Tutorial, right? I sure did when I started. So guess what? I got that b-roll! In this tutorial, we’re going to use the Operator SDK, and I definitely got myself up-and-running by following the Operator Framework User Guide. Once we have all that set up – oh yeah! We’re going to run a custom Operator. One that’s designed for Asterisk: it can spin up Asterisk instances, discover them as services, and dynamically create SIP trunks between n-number-of-instances of Asterisk so they can all reach one another to make calls. Fire up your terminals – it’s time to get moving with Operators.
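For a rough idea of the SDK workflow (command names and flags have changed across operator-sdk versions, and the project and image names here are purely illustrative):

    # Scaffold an operator project and a custom resource API (flags differ by SDK version)
    operator-sdk new asterisk-operator --api-version=voip.example.com/v1alpha1 --kind=Asterisk
    cd asterisk-operator
    # Regenerate the generated code after editing the API types
    operator-sdk generate k8s
    # Build the operator image and push it somewhere your cluster can pull from
    operator-sdk build example.com/asterisk-operator:v0.0.1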


How to deploy TripleO Queens without external network

TripleO Queens has an interesting feature called ‘composable networks’. It allows you to deploy OpenStack with the choice of networks that you want, depending on your environment. Please see: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html By default, the following networks are defined: Storage, Storage Management, Internal Api, Tenant, Management, External. The external network allows the endpoints to be reached externally, and also lets you define networks so the VMs can be reached externally as well. But to have that, you need a routable network with external access in your lab. Not all labs have one, especially CI environments, so it may be useful to…
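Roughly, the custom network list is passed to the deploy command like this; the file names are placeholders, and the full post describes what actually needs to be removed:

    # Copy the default network definitions, drop the External network, and deploy with the edited file
    cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml ~/custom_network_data.yaml
    # ... edit ~/custom_network_data.yaml to remove the External network entry ...
    openstack overcloud deploy --templates \
      -n ~/custom_network_data.yaml \
      -e ~/custom-environment.yaml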


OVS-DPDK: Vhostuser socket Mode

In the Newton release, the default vhostuser mode in OvS is dpdkvhostuser, and from Ocata onwards the default mode in Neutron has been changed to dpdkvhostuserclient. This post provides information on the vhostuser migration and on verifying the vhostuser modes of the VMs created with dpdkvhostuser mode. In order to understand the difference between the two modes and the advantage of moving to dpdkvhostuserclient mode, read the OvS documentation on vhostuser modes.
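To see which mode a given guest port is using, the interface can be inspected directly on the host; the port name below is just a placeholder:

    # Check the type and options of a vhostuser port on the compute node
    ovs-vsctl get Interface vhu12345678-ab type
    # dpdkvhostuser        -> OvS is the socket server (legacy mode)
    # dpdkvhostuserclient  -> OvS connects to a socket served by QEMU (vhost-server-path option)
    ovs-vsctl get Interface vhu12345678-ab options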


Newton: Minor Update (OVS-DPDK) – OvS2.9

Newton (OSP10) has seen a variety of Open vSwitch versions supported. The initial release shipped with OvS 2.5, and it was later updated to OvS 2.6, the version present for most of the support cycle. Recently, in line with the Queens (OSP13) release, support for OvS 2.9 is planned. In order to facilitate FFU (Fast Forward Upgrades) from Newton to Queens, bringing OvS 2.9 support to Newton (OSP10) will help reduce the cluster downtime for the upgrade.


Spin up a Kubernetes cluster on CentOS, a choose-your-own-adventure

So you want to install Kubernetes on CentOS? Awesome, I’ve got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose not to install Kubernetes, skip to the very bottom of the article. I’ve got just the recipe for you to brew it up. It’s been a year since my last article on installing Kubernetes on CentOS, and while it’s still probably useful, some of the Ansible playbooks we were using have changed significantly. Today we’ll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (and we’ll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything’s working swimmingly.
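In broad strokes, the workflow looks something like this; the inventory and playbook paths are illustrative, and the post walks through the real ones:

    # Clone kube-ansible and run the cluster playbook against your own inventory
    git clone https://github.com/redhat-nfvpe/kube-ansible.git && cd kube-ansible
    ansible-playbook -i inventory/your-inventory.yml playbooks/kube-install.yml
    # Once it's up, sanity-check with a test pod
    kubectl run nginx --image=nginx
    kubectl get pods -o wide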


Kubernetes multiple network interfaces — but! With different configs per pod; Multus CNI has your back.

You need multiple network interfaces in each pod – because you, like me, have some more serious networking requirements for Kubernetes than your average bear. The thing is, if you have different specifications for each pod, and different ideas of which network interfaces each pod should have based on its role, well… previously you were fairly limited. At least using my previous (and somewhat dated) method with Multus CNI (a CNI plugin that enables multiple interfaces per pod), you could only apply a single configuration to all pods (or, at best, one configuration per box by maintaining multiple CNI configs per box). Thanks to Kural and crew, Multus now includes the functionality to use Kubernetes Custom Resources (also known as “CRDs”). These “custom resource definitions” are a way to extend the Kubernetes API. Today we’ll take advantage of that functionality. The CRD implementation in Multus allows us to specify exactly which network interfaces each pod gets, based on annotations attached to each pod. Our goal here will be to spin up a Kubernetes cluster complete with Multus CNI (including the CRD functionality), then spin up pods where some have a single interface and some have multiple interfaces, and then we’ll inspect those.
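For a flavour of what that looks like, here is a minimal sketch; the exact CRD kind and annotation key have varied across Multus versions, so this assumes the NetworkAttachmentDefinition convention, and the network and pod names are just examples:

    # Define an extra network as a custom resource, then reference it from a pod annotation
    cat <<EOF | kubectl apply -f -
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-conf
    spec:
      config: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" } }'
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-if-pod
      annotations:
        # Pods without this annotation keep only their default (single) interface
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
      containers:
      - name: shell
        image: busybox
        command: ["sleep", "3600"]
    EOF

Once applied, running ip a inside multi-if-pod should show an extra interface alongside the default one, while an unannotated pod keeps just the single interface.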
