NFVPE Blog Posts

Deploying and upgrading TripleO with local mirrors

Continued from http://teknoarticles.blogspot.com.es/2017/08/automating-local-mirrors-creation-in.html In the previous blog post, I explained how to automate RHEL mirror creation using https://github.com/redhat-nfvpe/rhel-local-mirrors. Now we are going to learn how to deploy and upgrade TripleO using those mirrors. Deploying TripleO. Undercloud: to use local mirrors in the undercloud, you simply need to take the osp<version>.repo file generated by the rhel-local-mirrors playbook and copy it to /etc/yum.repos.d/ on the undercloud host: sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo -o /etc/yum.repos.d/osp.repo Then proceed with the standard deployment instructions. Overcloud: each node in the overcloud (controllers, computes, etc.) needs to have a copy of the repository file from our…
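As a quick, hedged recap of the undercloud step above (the <local_mirror_ip> and <version> placeholders are whatever your mirror uses), the fetch plus a sanity check looks roughly like this:

# Fetch the repo file generated by rhel-local-mirrors onto the undercloud
sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo \
     -o /etc/yum.repos.d/osp.repo

# Verify that yum now resolves packages from the local mirror
sudo yum clean all
sudo yum repolist enabled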


Ratchet CNI — Using VXLAN for network isolation for pods in Kubernetes

In today’s episode we’re looking at Ratchet CNI, an implementation of Koko in CNI, the container networking interface that Kubernetes uses for creating network interfaces. The idea is that the network interface creation can be performed by Kubernetes via CNI. Specifically, we’re going to create some network isolation of the network links between containers to demonstrate a series of “cloud routers”. We can use the capabilities of Koko to create vEth connections between containers when they’re local to the same host, and VXLAN tunnels between containers when they’re on different hosts. Our goal today is to install and configure Ratchet CNI on an existing cluster, verify it’s working, and then install a cloud router setup based on zebra-pen (a cloud router demo).
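Ratchet’s own configuration is covered in the walkthrough itself; as a rough sketch of the cross-host primitive it drives (and explicitly not Ratchet’s actual config or CLI), a manual VXLAN link between two hosts with plain iproute2 looks like this, with invented addresses and VNI:

# On host A (10.1.1.10): a VXLAN interface tunneling to host B (10.1.1.11)
sudo ip link add vxlan10 type vxlan id 10 remote 10.1.1.11 dstport 4789 dev eth0
sudo ip addr add 192.168.30.1/24 dev vxlan10
sudo ip link set vxlan10 up

# On host B (10.1.1.11): the mirror image pointing back at host A
sudo ip link add vxlan10 type vxlan id 10 remote 10.1.1.10 dstport 4789 dev eth0
sudo ip addr add 192.168.30.2/24 dev vxlan10
sudo ip link set vxlan10 up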


Automating local mirrors creation in RHEL

Sometimes there is a need to consume RHEL mirrors locally, rather than using the Red Hat content delivery network. It may be needed to speed up a deployment, or due to network constraints. I created an Ansible playbook, rhel-local-mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors), that can help with that. What does rhel-local-mirrors do? It is basically a tool that connects to the Red Hat CDN and syncs the repositories locally, populating the desired mirrors so they can be accessed by other systems via HTTP. The playbook performs several tasks, which can be run together or independently: register a system on the Red Hat…
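Conceptually, the playbook automates steps you could run by hand along these lines (a sketch only, not the playbook’s actual tasks; the repo ID and paths are illustrative):

# Register the system and attach a subscription
sudo subscription-manager register
sudo subscription-manager attach --auto

# Enable the repo to mirror, sync it into a web-served directory, and build metadata
sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo reposync --gpgcheck --repoid=rhel-7-server-rpms --download_path=/var/www/html/repos/
sudo createrepo /var/www/html/repos/rhel-7-server-rpms/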


Be a hyper spaz about a hyperconverged GlusterFS setup with dynamically provisioned Kubernetes persistent volumes

I’d recently brought up my GlusterFS-for-persistent-volumes-in-Kubernetes setup and I noticed something errant: I had to REALLY baby the persistent volumes. That didn’t sit right with me, so I refactored it to use gluster-kubernetes and hook up a hyperconverged setup. This improves on the previous setup by having the Gluster daemon run in Kubernetes pods, which is just feeling so fresh and so clean. The difference being that OutKast is smooth and cool, while I’m an excited spaz about technology with this. Gluster-Kubernetes also includes heketi, an API for GlusterFS volume management that Kube can use to give us dynamic provisioning. Our goal today is to spin up Kube (using kube-centos-ansible) with gluster-kubernetes for dynamic provisioning, and then we’ll validate it with master-slave replication in MySQL, to one-up our simple MySQL setup from the last article.
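The dynamic provisioning piece hinges on a StorageClass that points at heketi’s REST endpoint; a minimal sketch might look like the following (the resturl, names, and sizes are assumptions for illustration, not the values used in the walkthrough):

# A StorageClass backed by heketi, plus a PVC that triggers dynamic provisioning
cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  storageClassName: glusterfs-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF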


Chainmail of NFV (+1 Dexterity) — Service Chaining in Containers using Koko & Koro

In this episode we’re going to do some “service chaining” in containers, with work facilitated by Tomofumi Hayashi through his creation of koko and koro. Koko (the “container connector”) gives us the ability to connect a network between containers (with veth, vxlan, or vlan interfaces) in an isolated way, and it creates multiple interfaces for our containers too, which will allow us to chain them. We can then use the functionality of Koro (the “container routing” tool) to manipulate those network interfaces, and specifically their routing, in order to chain them together, and to further manipulate routing and IP addressing to facilitate changes to that chain. Our goal today will be to connect four containers in a chain of services going from an HTTP client, to a firewall, through a router, and terminating at a web server. Once we have that chain together, we’ll intentionally cause a failure of a service and then repair it using koro.
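Under the hood, the chaining boils down to ordinary namespace, veth, and route plumbing; here is a hand-rolled sketch of the idea with plain ip commands (invented names and addresses, and explicitly not koko’s or koro’s actual CLI):

# Two namespaces joined by a veth pair, roughly what koko sets up for a local link
sudo ip netns add client
sudo ip netns add firewall
sudo ip link add c-eth0 type veth peer name f-eth0
sudo ip link set c-eth0 netns client
sudo ip link set f-eth0 netns firewall
sudo ip netns exec client ip addr add 10.60.0.1/24 dev c-eth0
sudo ip netns exec firewall ip addr add 10.60.0.2/24 dev f-eth0
sudo ip netns exec client ip link set c-eth0 up
sudo ip netns exec firewall ip link set f-eth0 up

# The koro-style step: steer the client's traffic at the next hop in the chain
sudo ip netns exec client ip route add default via 10.60.0.2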


Using Koko to create vxlan interfaces for cross-host container network isolation — and cross-connecting them with VPP!

I’ve blogged about koko, the container connector, in the past. Thanks to the awesome work put forward by my associate Tomofumi Hayashi, today we can run it and connect to FD.io VPP (vector packet processing), which is used for a fast data path, something we’re quite interested in within the NFV space. We’re going to set up vxlan links between containers (on separate hosts) back to a VPP forwarding host, where we’ll create cross-connects to forward packets between those containers. As a bonus, we’ll also compile koro, an auxiliary utility to use with koko for “container routing”, which we’ll use in a companion article to follow. Put your gloves on and start up your terminals; we’re going to put our hands right on it and have it all up and running.
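On the VPP side, the cross-connect idea looks roughly like this in the VPP CLI (the addresses, VNIs, and interface names are made up for illustration; the walkthrough has the real setup):

# Create a VXLAN tunnel toward each container host, then L2 cross-connect them
sudo vppctl create vxlan tunnel src 192.168.50.5 dst 192.168.50.10 vni 11
sudo vppctl create vxlan tunnel src 192.168.50.5 dst 192.168.50.11 vni 12
sudo vppctl set interface state vxlan_tunnel0 up
sudo vppctl set interface state vxlan_tunnel1 up

# xconnect is unidirectional, so wire both directions
sudo vppctl set interface l2 xconnect vxlan_tunnel0 vxlan_tunnel1
sudo vppctl set interface l2 xconnect vxlan_tunnel1 vxlan_tunnel0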


Any time in your schedule? Try using a custom scheduler in Kubernetes

I’ve recently been interested in the idea of extending the scheduler in Kubernetes. There are a number of reasons why, but at the top of my list is looking at re-scheduling failed pods based on custom metrics, specifically for high-performance high availability, like we need in telecom. In my search to learn more about it, I discovered the Kube docs for configuring multiple schedulers, and even better, a practical application: a toy scheduler created by the one-and-only kube-hero Kelsey Hightower. It’s about a year old, and since Hightower is on his game, he was using alpha functionality at the time of authoring. In this article I modernize at least one component to get it running in the present day. Today our goal is to run through the toy scheduler and have it schedule a pod for us. We’ll also dig into Kelsey’s Go code for the scheduler a little bit to get an intro to what he’s doing.
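For reference, the modern way to point a pod at a non-default scheduler is the spec.schedulerName field (Kelsey’s original used an alpha annotation, which is part of what needs modernizing); a sketch, where the scheduler name “hightower” is an assumption:

# A pod that asks for a specific custom scheduler instead of the default one
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom-sched
spec:
  schedulerName: hightower
  containers:
  - name: nginx
    image: nginx
EOF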


BYOB – Bring your own boxen to an OpenShift Origin lab!

Let’s spin up an OpenShift Origin lab today; we’ll be using openshift-ansible with a “BYO” (bring your own) inventory. Or I’d rather say “BYOB”, for “bring your own boxen”. OpenShift Origin is the upstream OpenShift. In short, OpenShift is a PaaS (platform-as-a-service), but one that is built with a distribution of Kubernetes, and in my opinion it is so valuable because of its strong opinions, which guide you towards some best practices for using Kubernetes in the enterprise. In addition, we’ll use my openshift-ansible-bootstrap, which we can use to (A) spin up some VMs to use in the lab, and/or (B) set up some basics on the hosts to make sure we can properly install OpenShift Origin. Our goal today will be to set up an OpenShift Origin cluster with a master and two compute nodes, verify that it’s healthy, and deploy a very basic pod.
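At its most minimal, a BYO inventory for openshift-ansible looks something like this (hostnames and variables are illustrative; the bootstrap repo and the full post cover the real values):

# A minimal BYO inventory sketch for an Origin cluster: one master, two nodes
cat <<'EOF' > byo-hosts.ini
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=centos
ansible_become=true
openshift_deployment_type=origin

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=false
node1.example.com
node2.example.com
EOF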
