
Category: NFVPE

How to deploy TripleO Queens without external network

TripleO Queens has an interesting feature called ‘composable networks’. It allows you to deploy OpenStack with the choice of networks you want, depending on your environment. Please see: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html By default, the following networks are defined: Storage, Storage Management, Internal API, Tenant, Management, External. The external network makes the endpoints reachable from outside, and also lets you define networks from which the VMs can be reached externally. But to have that, you need a routable network with external access in your lab. Not all labs have one, especially CI environments, so it may be useful to…
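As a rough sketch of the idea (mine, not taken from the post), a custom networks file could simply leave the External entry out; the names, CIDRs, and pools below are placeholders, and the real Queens network_data.yaml carries more entries and options:

```yaml
# Hypothetical excerpt of a custom network_data.yaml with no External
# network defined; CIDRs and allocation pools are illustrative only.
- name: InternalApi
  name_lower: internal_api
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]
- name: Storage
  name_lower: storage
  vip: true
  ip_subnet: '172.16.1.0/24'
  allocation_pools: [{'start': '172.16.1.4', 'end': '172.16.1.250'}]
```

You’d then feed that file to the deploy command (in Queens, openstack overcloud deploy accepts it via the -n/--networks-file option).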

Comments closed

Newton: Minor Update (OVS-DPDK) – OvS2.9

Newton (OSP10) has seen a variety of OpenvSwitch versions supported. The initial release shipped with OvS2.5, which was later updated to OvS2.6, the version present for most of the support cycle. Recently, in the same time-line as the Queens (OSP13) release, support for OvS2.9 is planned. In order to facilitate FFU (Fast Forward Upgrades) from Newton to Queens, bringing OvS2.9 support to Newton (OSP10) will help reduce the cluster downtime for the upgrade.

Comments closed

OVS-DPDK: Vhostuser socket Mode

In the Newton release, the default vhostuser mode in OvS is dpdkvhostuser. From Ocata onwards, the default mode in Neutron has been changed to dpdkvhostuserclient. This post provides information on vhostuser migration and on verifying the vhostuser modes of VMs created with dpdkvhostuser mode. In order to understand the difference between the two modes and the advantage of moving to dpdkvhostuserclient mode, read the OvS documentation on vhostuser modes.
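As a sketch of that kind of verification (assuming an inventory with a ‘compute’ group; this isn’t lifted from the post), an Ansible play can dump each OvS interface’s type so you can spot dpdkvhostuser versus dpdkvhostuserclient ports:

```yaml
# Minimal sketch: list OvS interface names and types on compute nodes.
# The 'compute' group name is an assumption about your inventory.
- hosts: compute
  become: true
  tasks:
    - name: Dump OvS interface name/type pairs
      command: ovs-vsctl --columns=name,type list Interface
      register: ovs_ifaces
      changed_when: false

    - name: Show the result
      debug:
        var: ovs_ifaces.stdout_lines
```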

Comments closed

Spin up a Kubernetes cluster on CentOS, a choose-your-own-adventure

So you want to install Kubernetes on CentOS? Awesome, I’ve got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose not to install Kubernetes, skip to the very bottom of the article. I’ve got just the recipe for you to brew it up. It’s been a year since my last article on installing Kubernetes on CentOS, and while it’s still probably useful, some of the Ansible playbooks we were using have changed significantly. Today we’ll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (and we’ll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything’s working swimmingly.
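To give a flavor of driving such playbooks, here’s a hypothetical Ansible YAML inventory for a one-master, two-node lab; the hostnames, IPs, and group names are my assumptions, not kube-ansible’s canonical layout:

```yaml
# Hypothetical inventory.yml for a small dev cluster; adjust hosts and
# groups to whatever the kube-ansible playbooks actually expect.
all:
  children:
    master:
      hosts:
        kube-master:
          ansible_host: 192.168.122.10
    nodes:
      hosts:
        kube-node-1:
          ansible_host: 192.168.122.11
        kube-node-2:
          ansible_host: 192.168.122.12
```

From there it’s the usual ansible-playbook -i inventory.yml run against whichever playbook the repo documents.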

Comments closed

Kubernetes multiple network interfaces — but! With different configs per pod; Multus CNI has your back.

You need multiple network interfaces in each pod – because you, like me, have some more serious networking requirements for Kubernetes than your average bear. The thing is, if you have different specifications for each pod, and for which network interfaces each pod should have based on its role, well… previously you were fairly limited. At least using my previous (and somewhat dated) method of using Multus CNI (a CNI plugin that enables multiple interfaces per pod), you could only apply one configuration to all pods (or at best, with multiple CNI configs per box, have a configuration per box). Thanks to Kural and crew, Multus includes the functionality to use Kubernetes Custom Resources (also known as “CRDs”). These “custom resource definitions” are a way to extend the Kubernetes API, and today we’ll take advantage of that functionality. The CRD implementation in Multus allows us to specify exactly which network interfaces each pod has, based on annotations attached to each pod. Our goal here will be to spin up a Kubernetes cluster complete with Multus CNI (including the CRD functionality), then spin up pods, some with a single interface and some with multiple interfaces, and inspect those.
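To make the annotation idea concrete, here’s a sketch using the NPWG-style custom resource and annotation key that Multus later standardized on; the exact kinds and keys in the version this post covers may differ:

```yaml
# Hypothetical network custom resource plus a pod that requests it via
# annotation; the kind and annotation key follow the de-facto NPWG spec
# and may not match the Multus version the post describes.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "infinity"]
```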

Comments closed

Are you exhausted? IPv4 almost is — let’s set up an IPv6 lab for Kubernetes

It’s no secret that IPv4 exhaustion is inevitable. And it’s not just tired (ba-dum-ching!). Since we’re a bunch of Kubernetes fans, and we’re networking fans, we really want to check out what we can do with IPv6 in Kubernetes. Thanks to some slinky automation my colleague Feng Pan contributed to kube-centos-ansible, implementing some creative work by leblancd, we have a way to try it. In this simple setup today, we’re going to deploy Kubernetes with custom binaries from leblancd, have two pods (ideally on different nodes) ping one another with ping6, and declare victory! In the future, we hope to iterate on what’s necessary to get full IPv6 functionality in Kubernetes.
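As a flavor of the victory condition (my sketch, with a placeholder address), a throwaway pod can run the ping6 itself; substitute the IPv6 address Kubernetes reports for your target pod:

```yaml
# Hypothetical test pod: ping6 another pod's IPv6 address four times and
# exit. fd00:101::2 is a placeholder; use the target pod's real address.
apiVersion: v1
kind: Pod
metadata:
  name: ping6-test
spec:
  restartPolicy: Never
  containers:
    - name: ping6
      image: busybox
      command: ["ping6", "-c", "4", "fd00:101::2"]
```

kubectl logs ping6-test will then show whether the replies made it back.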

Comments closed

Automated TripleO upgrades

Upgrading TripleO can be a hard task. While there are instructions on how to do it manually, having a set of playbooks that automate the task can help. With this purpose, I’ve created the TripleO upgrade automation playbooks (https://github.com/redhat-nfvpe/tripleo-upgrade-automation). These are a set of playbooks that allow you to upgrade an existing TripleO deployment, especially focused on versions 8 to 10, and integrated with local mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors). If you want to know more, please visit the tripleo-upgrade-automation project on GitHub, where you’ll find instructions on how to properly use the repo to automate your upgrades.

Comments closed

AWX: The Poor Man’s CI?

I’m just going to go ahead and blame @dougbtv for all my awesome and terrible ideas. We’ve been working on several Ansible playbooks to spin up development environments, like kucean.

Due to the rapid development nature of things like Kubernetes, Heketi, GlusterFS, and other tools, it’s both possible and probable that our playbooks could become broken at any given time. We’ve been wanting to get some continuous integration spun up to test this with Zuul v3, but the learning curve for that is a bit more than we’d prefer to tackle for some simple periodic runs. Same goes for Jenkins or any other number of continuous integration software bits.

Enter the brilliantly mad mind of @dougbtv. He wondered: could AWX (Ansible Tower) be turned into a sort of “Poor Man’s CI”? Hold my beer. Challenge accepted!

Comments closed