You need multiple network interfaces in each pod – because you, like me, have more serious networking requirements for Kubernetes than your average bear. The thing is, if you have different specifications for each pod (which network interfaces each pod should have based on its role), you were previously fairly limited. At least with my previous (and somewhat dated) method of using Multus CNI (a CNI plugin that enables multiple interfaces per pod), a configuration could only apply to all pods, or at best, with multiple CNI configs per box, to the pods on a given box. Thanks to Kural and crew, Multus now includes the functionality to use Kubernetes Custom Resources (also known as “CRDs”). These “custom resource definitions” are a way to extend the Kubernetes API, and today we’ll take advantage of that functionality. The CRD implementation in Multus allows us to specify exactly which network interfaces each pod gets, based on annotations attached to each pod. Our goal here will be to spin up a Kubernetes cluster complete with Multus CNI (including the CRD functionality), then spin up pods – some with a single interface, some with multiple interfaces – and inspect them.
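To give a taste of what that looks like, here’s a minimal sketch of a network attachment custom resource and a pod that references it by annotation. The exact API group and annotation key depend on your Multus version, and the names and CIDR here are purely illustrative:

```yaml
# Illustrative sketch only: resource names, interface, and subnet are made up.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-nic-pod
  annotations:
    # The annotation attaches the extra network to this pod only.
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "infinity"]
```

Pods without the annotation keep a single (default) interface, which is exactly the per-pod flexibility the CRD approach buys us.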
It’s no secret that IPv4 exhaustion is inevitable. And it’s not just tired (ba-dum-ching!). Since we’re a bunch of Kubernetes fans, and we’re networking fans, we really want to check out what we can do with IPv6 in Kubernetes. Thanks to some slick automation contributed to kube-centos-ansible by my colleague Feng Pan, implementing some creative work by leblancd, we can try it out. In this simple setup today, we’re going to deploy Kubernetes with custom binaries from leblancd, have two pods (ideally on different nodes) ping one another with ping6, and declare victory! In the future, let’s hope to iterate on what’s necessary to get full IPv6 functionality in Kubernetes.
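The victory condition can be sketched in a couple of commands – assuming the cluster is already up, and with hypothetical pod names and a placeholder IPv6 address:

```shell
# Illustrative sketch; pod names and the IPv6 address are placeholders.
kubectl get pods -o wide            # note each pod's IPv6 address and node
# From one pod, ping6 the other pod's address:
kubectl exec pod-one -- ping6 -c 3 fd00:101::2
```

If the pods land on different nodes and the pings come back, that’s the win we’re after.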
Upgrading TripleO can be a hard task. While there are instructions on how to do it manually, having a set of playbooks that automate this task can help. With this purpose, I’ve created the TripleO upgrade automation playbooks (https://github.com/redhat-nfvpe/tripleo-upgrade-automation). These are a set of playbooks that allow you to upgrade an existing TripleO deployment, especially focused on versions 8 to 10, and integrated with local mirrors (https://github.com/redhat-nfvpe/rhel-local-mirrors). If you want to know more, please visit the tripleo-upgrade-automation project on GitHub, where you’ll find instructions on how to properly use this repo to automate your upgrades.
I’m just going to go ahead and blame @dougbtv
for all my awesome and terrible ideas. We’ve been working on several Ansible playbooks to spin up development
environments, like kucean.
Due to the rapid development nature of things like Kubernetes, Heketi,
GlusterFS, and other tools, it’s both possible and probable that our playbooks
could become broken at any given time. We’ve been wanting to get some continuous
integration spun up to test this with Zuul v3
but the learning curve for that is a bit more than we’d prefer to
tackle for some simple periodic runs. Same goes for Jenkins
or any number of other continuous integration software bits.
Enter the brilliantly mad mind of @dougbtv. He wondered: could AWX (Ansible Tower)
be turned into a sort of “Poor Man’s CI”? Hold my beer. Challenge accepted.
Recently I’ve been playing around with AWX (the upstream, open source code base
of Ansible Tower), and wanted to make it easy to deploy. Standing on the
shoulders of giants (namely @geerlingguy)
I built out a wrapper playbook that would let me easily deploy AWX into a VM on
an OpenStack cloud (in my case, the RDO Cloud). In this blog post, I’ll show
you the wrapper playbook I built, and how to consume it to deploy a development environment.
Starting to apply since Queens. This article is a continuation of http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html. How to build the security hardened image with volumes: starting with Queens, security hardened images can be built using volumes. This has the advantage of more flexibility when resizing the different filesystems. The process of building the security hardened image is the same as in the previous blog post, but there has been a change in how the partitions, volumes and filesystems are defined. Now there is a pre-defined partition of 20G, and volumes are created under it. Volume sizes are defined as percentages, not as absolute…
When working with OpenStack Ironic and TripleO, and using the boot-from-iSCSI feature, you may need to add some kernel parameters to the deployment image for that to work. With some specific hardware, the deployment image needs to contain specific kernel parameters on boot. For example, when trying to boot from iSCSI with iBFT NICs, you need to add the following kernel parameters: rd.iscsi.ibft=1 rd.iscsi.firmware=1. The TripleO image that is generated by default doesn’t contain those parameters, because they are very specific to the hardware you use. It is also not possible right now to send…
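One way to bake those parameters into an image – an illustrative sketch, not necessarily the method this post goes on to describe – is to modify it with virt-customize before uploading it to Glance:

```shell
# Sketch: inject the iSCSI/iBFT kernel parameters into the deployment image.
# Assumes libguestfs-tools is installed; the image filename is the TripleO default.
virt-customize -a overcloud-full.qcow2 \
  --run-command 'grubby --update-kernel=ALL --args="rd.iscsi.ibft=1 rd.iscsi.firmware=1"'
```

The --run-command step runs inside the guest image, so grubby updates the boot entries that the deployed node will actually use.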
In this post, I would like to provide the details of the different types of template config sections present in a typical docker service template file. There are a few configurations present in the puppet/services templates, like service_name, which still have the same interpretation in the container services in docker/services templates too. Apart from that, there are a few container-specific configurations, which are explained in the sections below: puppet_config specifies the puppet class (step_config) and the puppet resource tags (puppet_tags) to be applied while enabling a service. By default, all the file-operation-related puppet resources like file, concat,…
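As a rough sketch, a puppet_config section in a docker/services template looks something like the fragment below. The service name, puppet class, and image parameter are made up for illustration:

```yaml
# Abbreviated, hypothetical docker/services template fragment.
outputs:
  role_data:
    value:
      service_name: my_service
      puppet_config:
        config_volume: my_service
        puppet_tags: my_service_config
        step_config: "include ::tripleo::profile::base::my_service"
        config_image: {get_param: DockerMyServiceConfigImage}
```

The step_config and puppet_tags keys carry the puppet class and resource tags described above, while config_image names the container in which that puppet run generates the configuration.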
It is my decree that whenever you are using Kubernetes without using Docker you are officially “ghost riding the whip”, maybe even “ghost riding the kube”. (Well, I’m from Vermont, so I’m more like “ghost riding the combine”). And again, we’re running Kubernetes without Docker, but this time? We’ve got an entire workflow without Docker. From image build, to running container, to inspecting the running containers. Thanks to the good folks from the OCI project and Project Atomic, we’ve got kpod for working with running containers, and we’ve got buildah for building our images. And of course, don’t leave out CRI-O which makes the magic happen to get it all running in Kube without Docker. Fire up your terminals, because you’re about to ghost ride the kube.
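To give a flavor of the Docker-free workflow, here’s a sketch with illustrative image and package names (exact flags may differ by version):

```shell
# Build an image without Docker, using buildah.
ctr=$(buildah from centos)          # create a working container from a base image
buildah run "$ctr" -- yum install -y nginx
buildah commit "$ctr" my-nginx      # commit the working container as a new image

# Inspect images and containers without Docker, using kpod.
kpod images                         # list locally stored images
kpod ps -a                          # inspect containers (e.g. those run via CRI-O)
```

Image build, running containers, and inspection – the whole ride, and not a Docker daemon in sight.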