NFVPE Blog Posts

Customize OpenStack images for booting from iSCSI

When working with OpenStack Ironic and TripleO, and using the boot-from-iSCSI feature, you may need to add some kernel parameters to the deployment image for that to work. When using some specific hardware, you may need the deployment image to contain specific kernel parameters on boot. For example, when trying to boot from iSCSI with iBFT NICs, you need to add the following kernel parameters: rd.iscsi.ibft=1 rd.iscsi.firmware=1. The TripleO image that is generated by default doesn’t contain those parameters, because they are very specific to the hardware you need. It is also not possible right now to send…
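
As a rough sketch of the kind of change involved (the full post above is truncated, so this is not necessarily its method), kernel arguments can be baked into an image with virt-customize from libguestfs. The image filename here is an assumption:

```bash
# Hypothetical sketch: prepend the iSCSI/iBFT kernel arguments to the
# image's default GRUB command line, then regenerate the grub config.
# "overcloud-full.qcow2" is an assumed image name, not from the post.
virt-customize -a overcloud-full.qcow2 \
  --run-command 'sed -i "s/^GRUB_CMDLINE_LINUX=\"/&rd.iscsi.ibft=1 rd.iscsi.firmware=1 /" /etc/default/grub' \
  --run-command 'grub2-mkconfig -o /boot/grub2/grub.cfg'
```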

TripleO Container – Template Configs (Pike)

In this post, I would like to provide the details of the different types of template config sections present in a typical docker service template file. There are a few configurations present in the puppet/service templates, like service_name, which still have the same interpretation in the container services in the docker/services templates too. Apart from those, there are a few container-specific configurations, which are explained in the sections below: puppet_config Specifies the puppet class (step_config) and the puppet resource tags (puppet_tags) to be applied while enabling a service. By default, all the file-operation-related puppet resources like file, concat,…
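
As a hedged illustration (the service name and values below are made up, not from the post), a puppet_config section of a docker service template might look roughly like this, written out with a heredoc so the indentation is explicit:

```bash
# Hypothetical sketch of a puppet_config section in a docker/services
# template; the service name and values are illustrative only.
cat <<'EOF' > /tmp/puppet-config-sketch.yaml
puppet_config:
  config_volume: example
  puppet_tags: example_config
  step_config:
    get_attr: [ExampleServiceBase, role_data, step_config]
  config_image: {get_param: DockerExampleConfigImage}
EOF
```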

Ghost Riding The Whip — A complete Kubernetes workflow without Docker, using CRI-O, Buildah & kpod

It is my decree that whenever you are using Kubernetes without using Docker you are officially “ghost riding the whip”, maybe even “ghost riding the kube”. (Well, I’m from Vermont, so I’m more like “ghost riding the combine”). And again, we’re running Kubernetes without Docker, but this time? We’ve got an entire workflow without Docker. From image build, to running container, to inspecting the running containers. Thanks to the good folks from the OCI project and Project Atomic, we’ve got kpod for working with running containers, and we’ve got buildah for building our images. And of course, don’t leave out CRI-O which makes the magic happen to get it all running in Kube without Docker. Fire up your terminals, because you’re about to ghost ride the kube.
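
For a taste of that Docker-free loop, here is a minimal sketch using the command names as they existed circa 2017 (the image tag and Dockerfile are assumptions, not the post's exact steps):

```bash
# Build an OCI image from a Dockerfile in the current directory, then
# poke at images and containers, all without the Docker daemon.
buildah bud -t ghost-ride-the-whip .   # build the image with Buildah
kpod images                            # list local images with kpod
kpod ps                                # see the containers CRI-O is running
```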

Persistent volumes with GlusterFS

It’s been a while since I had the original vision of how storage might work
with Kubernetes. I had seen a project called Heketi that helped to make
GlusterFS live inside the Kubernetes infrastructure itself. I wasn’t entirely
convinced by this approach because I wasn’t necessarily comfortable with
Kubernetes managing its own storage infrastructure. This is the story of how
wrong I was.

TripleO Container – Types

In the Pike release, TripleO container deployment has been completely redesigned, in a way that is backward compatible with baremetal deployment and reuses most of the existing parts of TripleO. In this post, I would like to detail the different stages of a container deployment and the associated config files and log files. With the Pike release, most of the OpenStack services are containerized, leaving some of the platform services, like Open vSwitch, to be completed in subsequent releases. Types of Container As of the Pike release, all the containers running in TripleO are based on the Kolla image format. But we can…
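
As a quick hedged example of what that looks like on a deployed node (the node address is a placeholder, and this is orientation rather than the post's own steps), the running Kolla-based containers can be listed with docker:

```bash
# Sketch: list containers on a Pike overcloud node; image names follow
# the Kolla naming scheme (registry/namespace/<base>-<type>-<service>:tag).
ssh heat-admin@<overcloud-node-ip> \
  'sudo docker ps --format "table {{.Names}}\t{{.Image}}"'
```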

TripleO Role-Specific Parameters

The OpenStack installer TripleO gives operators the flexibility to define their own custom roles. A custom role can be defined by associating a list of predefined (or custom-defined) services. A TripleO service can be associated with multiple roles, which brings in the requirement to keep its parameters role-specific. This has been achieved in the Pike release by introducing a new parameter, RoleParameters, to the TripleO service template. By default, not all parameters are role-specific. Additional implementation has to be provided in a TripleO service template to enable role-specific parameter support. With role-specific parameters supported, a parameter can be…
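
As a minimal sketch of the Pike syntax (the role and parameter chosen here are illustrative, not from the post), a role-specific value is supplied through a <RoleName>Parameters map in an environment file:

```bash
# Hypothetical sketch: scope a parameter to the Compute role only.
cat <<'EOF' > role-params.yaml
parameter_defaults:
  ComputeParameters:
    NovaReservedHostMemory: 2048
EOF
openstack overcloud deploy --templates -e role-params.yaml
```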

Deploying and upgrading TripleO with local mirrors

Continued from http://teknoarticles.blogspot.com.es/2017/08/automating-local-mirrors-creation-in.html In the previous blog post, I explained how to automate RHEL mirror creation using https://github.com/redhat-nfvpe/rhel-local-mirrors. Now we are going to learn how to deploy and upgrade TripleO using those mirrors. Deploying TripleO Undercloud To use local mirrors in the undercloud, you simply need to take the osp<version>.repo file that you generated with the rhel-local-mirrors playbook and copy it to /etc/yum.repos.d/ on the undercloud host: sudo curl http://<local_mirror_ip>/osp<version>_repo/osp<version>.repo -o /etc/yum.repos.d/osp.repo Then proceed with the standard deployment instructions. Overcloud Each node in the overcloud (controllers, computes, etc.) needs to have a copy of the repository file from our…
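
The post is truncated before the overcloud steps, but as one hedged sketch of the general idea, the repo file can be pushed to each overcloud node from the undercloud (the heat-admin user and ctlplane addressing are standard TripleO conventions; the loop itself is an assumption, not the post's method):

```bash
# Sketch: copy the local-mirror repo file to every overcloud node.
source ~/stackrc
for ip in $(openstack server list -f value -c Networks | sed 's/ctlplane=//'); do
  scp /etc/yum.repos.d/osp.repo heat-admin@"$ip":/tmp/osp.repo
  ssh heat-admin@"$ip" 'sudo mv /tmp/osp.repo /etc/yum.repos.d/'
done
```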

Ratchet CNI — Using VXLAN for network isolation for pods in Kubernetes

In today’s episode we’re looking at Ratchet CNI, an implementation of Koko – but in CNI, the container networking interface that is used by Kubernetes for creating network interfaces. The idea is that the network interface creation can be performed by Kubernetes via CNI. Specifically, we’re going to create some network isolation between containers to demonstrate a series of “cloud routers”. We can use the capabilities of Koko to create veth connections between containers when they’re local to the same host, and VXLAN tunnels to containers when they’re across hosts. Our goal today will be to install & configure Ratchet CNI on an existing cluster, verify it’s working, and then install a cloud router setup based on zebra pen (a cloud router demo).
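
To make the veth-versus-VXLAN distinction concrete, here is a generic iproute2 sketch of the kind of plumbing Koko automates for the cross-host case (this is not Ratchet's actual implementation; the VNI, device, and addresses are illustrative):

```bash
# Create a VXLAN interface tunneling to a peer host, then move it into
# a container's network namespace (identified here by PID).
ip link add vxlan42 type vxlan id 42 dev eth0 remote 192.168.1.20 dstport 4789
ip link set vxlan42 netns "$CONTAINER_PID"
nsenter -t "$CONTAINER_PID" -n ip addr add 10.10.0.1/24 dev vxlan42
nsenter -t "$CONTAINER_PID" -n ip link set vxlan42 up
```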

Automating local mirrors creation in RHEL

Sometimes there is a need to consume RHEL mirrors locally, rather than using the Red Hat content delivery network. It may be needed to speed up some deployment, or due to network constraints. I created an Ansible playbook, rhel-local-mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors), that can help with that. What does rhel-local-mirrors do? It is basically a tool that connects to the Red Hat CDN and syncs the repositories locally, allowing you to populate the desired mirrors, which can then be accessed by other systems via HTTP. The playbook performs several tasks, which can be run together or independently: register a system on the Red Hat…
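
For orientation, these are the kinds of steps the playbook automates, run by hand (a sketch built from standard RHEL tooling, not the playbook's actual task list; the repo ID and web root are examples):

```bash
# Register, enable a repo, sync it to a web root, and build repodata.
sudo subscription-manager register --username <rh_user>
sudo subscription-manager repos --enable rhel-7-server-rpms
sudo reposync --repoid rhel-7-server-rpms -p /var/www/html/
sudo createrepo /var/www/html/rhel-7-server-rpms/
```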

Be a hyper spaz about a hyperconverged GlusterFS setup with dynamically provisioned Kubernetes persistent volumes

I’d recently brought up my GlusterFS-for-persistent-volumes-in-Kubernetes setup, and I was noticing something errant: I had to REALLY baby the persistent volumes. That didn’t sit right with me, so I refactored the setup to use gluster-kubernetes to hook up a hyperconverged setup. This improves on the previous setup by having the Gluster daemon itself run in Kubernetes pods, which is just feeling so fresh and so clean. Difference being that OutKast is like smooth and cool – and I’m an excited spaz about technology with this. Gluster-kubernetes also deploys Heketi, an API for GlusterFS volume management that Kube can use to give us dynamic provisioning. Our goal today is to spin up Kube (using kube-centos-ansible) with gluster-kubernetes for dynamic provisioning, and then we’ll validate it with master-slave replication in MySQL, to one-up our simple MySQL setup from the last article.
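
The dynamic-provisioning half boils down to a StorageClass pointing at Heketi plus a claim against it. As a hedged sketch (the resturl and names are illustrative, not from the post):

```bash
# Create a GlusterFS StorageClass backed by Heketi's REST endpoint and
# a PVC that triggers dynamic volume creation.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.233.0.100:8080"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  storageClassName: glusterfs-storage
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
EOF
```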
