How To: Stop and start a production OpenShift Cluster

This post was originally published on the ETI blog here.

So – you want to stop your OpenShift cluster? There are many reasons why you might want to do this. Maybe you have an annual disaster recovery test where you shut down a whole datacenter. Perhaps you want to do some maintenance on your infrastructure, or on the hypervisor or storage that your cluster is hosted on. It’s not uncommon to need to do this, so I have collated some of the best practices I have seen across a multitude of environments, both large and small.

Here is the process I recommend as a best practice for stopping and starting your OpenShift cluster(s). Following it will give you the best chance of a trouble-free maintenance window. As with all things, exercise care with this process on your important clusters: try it on an unimportant environment first and see if it is a good fit for you.

Important: This process will cause an outage to any application workload running on the cluster until the cluster is fully started. The cluster itself will be unavailable until manually started. Take care to run this process only on appropriate environments, and make sure you have backups of your environment available.
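The full, ordered sequence (including masters and etcd) is covered in the post itself, but to give a flavour of what is involved, here is a minimal sketch of the kind of commands used to take an application node out of service before powering it off. It assumes cluster-admin access via oc and SSH access to the node; the node name is purely an example.

```bash
# Minimal sketch only – not the complete recommended procedure.
oc adm cordon app-node-1.example.com                     # stop new pods landing on the node
oc adm drain app-node-1.example.com --ignore-daemonsets  # evict existing workloads
ssh app-node-1.example.com "sudo shutdown -h now"        # power the node off cleanly
```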

Continue reading “How To: Stop and start a production OpenShift Cluster”

OpenShift – From Design and Deploy to Deliver and Transform: Optimising Distributed Teams with Agile Practices

Previously published on She ITs and Giggles.

Overview

Frequently when I’m on site I am expected – even if not directly asked – to show my customers how to get the best use out of a technology. In this post I examine a recent scenario: putting structure around an OpenShift deployment in order to create a collaboration environment that would aid adoption of the technology. We were also deploying OpenShift itself, but OpenShift deployment is already a well-covered subject across the board.

Continue reading “OpenShift – From Design and Deploy to Deliver and Transform: Optimising Distributed Teams with Agile Practices”

Communities of practice: Straight from the open source

Every solution starts with sharing a problem. At Red Hat, when we talk about “open source,” we’re talking about a proven way of collaborating to create technology. The freedom to see the code, to learn from it, to ask questions and offer improvements. This is the open source way. However, bringing together people in your organization to collaborate is often easier said than done.

At Red Hat, we’ve created “Communities of Practice” (CoP) to help our own people collaborate, especially on new and emerging technologies, including automation.

Continue reading “Communities of practice: Straight from the open source”

Dynamic SSL certificates using LetsEncrypt on OpenShift

This post was originally published on the ETI blog here.

Managing SSL certificates in OpenShift can be a bit of a chore, especially when you have more than a few routes to manage. Having an automated mechanism to manage this helps with the operational overhead, and in this example LetsEncrypt is the weapon of choice.

You could quite conveniently use a wildcard certificate to cover most of your routes, but that doesn’t cover every use case you might have, especially when you manage multiple domains. Consider also that wildcard certificates are deprecated[1] in favour of tooling that provides programmatic access to create and renew SSL certificates on demand. There are a bunch of advantages (and disadvantages) to this, and a tonne of articles out there already cover the nuts and bolts of that topic, so I’m going to skip over it and instead share my experience deploying and using LetsEncrypt on OpenShift.

LetsEncrypt has been around for a while now and has been adopted into many environments, so I thought it was about time I shared how I have applied LetsEncrypt to solve my problem of managing certificates across multiple domains on my OpenShift cluster.
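To give an idea of what this looks like in practice, here is a minimal sketch of a Route annotated for an ACME controller such as openshift-acme, which watches for the annotation and then requests and renews the certificate on your behalf. The host and service names are examples, and the exact annotation depends on the controller you deploy.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"   # ask the controller to obtain and renew a certificate
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge
```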

Continue reading “Dynamic SSL certificates using LetsEncrypt on OpenShift”

Deploying AMQ 7.2 Streams on OpenShift

This post was originally published on the ETI blog here.

Today I was given the challenge of providing Kafka as a service to multiple development teams in a way that is consistent and easy to manage. There are a number of challenges to this, from how you provision the service request through to how the running service gets monitored and upgraded.

Kafka is a streaming platform designed to be highly available and scalable for building data pipelines, and it is used in production by many companies.

I wanted to manage Kafka centrally, so an operator deployed once to provide Kafka as a service to development teams was a natural fit. It means developers can quickly service their own needs, while the central Cloud team stays off their critical path and can focus on providing platform features rather than servicing individual requests.

The cleanest way to provide this type of centrally managed service is to deploy Kafka using an operator. Even though operators have only recently started to be adopted, I was not disappointed to discover that the Strimzi project gives us a way to do this. I won’t cover what operators are in this article, but if you’d like to find out more about them, take a look at this blog post. There is also a set of training scenarios available on Katacoda.
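As a rough illustration of what “Kafka as a service” looks like once the operator is installed, a development team can request a whole cluster with a single custom resource along these lines. This is a minimal sketch only: the apiVersion and sizing depend on your AMQ Streams / Strimzi release, and ephemeral storage is used purely for brevity.

```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

The cluster operator watches for Kafka resources like this one and creates and manages the underlying StatefulSets, Services and configuration on the team’s behalf.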

Continue reading “Deploying AMQ 7.2 Streams on OpenShift”

Floating Kwaaaay with Podman and systemd

This post was originally published on the ETI blog here.

Red Hat Quay (or Kwaaaay, as my US colleagues pronounce it) is a container registry originally from CoreOS, which was recently acquired by Red Hat. A container registry plays a pivotal role in a successful container strategy, making it simple for developers and administrators to store, manage, distribute and deploy container images across their container platforms, be that a laptop, a standalone server or a distributed solution like Kubernetes.
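The post walks through the full deployment, but the core idea is simply to wrap a podman run invocation in a systemd unit so the registry starts with the host and restarts on failure. A minimal sketch, with an illustrative image reference, port and config path (the real deployment also needs a database, Redis and a config bundle):

```ini
# /etc/systemd/system/quay.service – illustrative sketch only
[Unit]
Description=Red Hat Quay container registry
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f quay
ExecStart=/usr/bin/podman run --name quay \
    -p 8080:8080 \
    -v /var/lib/quay/config:/conf/stack:Z \
    quay.io/redhat/quay:v3.0.0
ExecStop=/usr/bin/podman stop -t 10 quay
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Once the unit file is in place, enable it with systemctl daemon-reload followed by systemctl enable --now quay.service.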

Continue reading “Floating Kwaaaay with Podman and systemd”

Remote Debugging of Java Applications on OpenShift

This post was originally published on Ales Nosek – The Software Practitioner.

In this article I am going to show you how to attach a debugger and a VisualVM profiler to a Java application running on OpenShift. The approach described here doesn’t make use of the Jolokia bridge; instead, we are going to leverage the port-forwarding feature of OpenShift.
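As a rough sketch of the port-forwarding approach (the DeploymentConfig and pod names are examples, and the JDWP options assume Java 8 – newer JVMs need address=*:8000):

```bash
# 1. Start the JVM in the pod with a JDWP debug agent, e.g. via an environment variable
oc set env dc/my-app JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"

# 2. Forward the debug port from the pod to your workstation
oc port-forward my-app-1-abcde 8000:8000

# 3. Attach your IDE debugger (and, similarly, a forwarded JMX port for VisualVM) to localhost:8000
```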

Continue reading “Remote Debugging of Java Applications on OpenShift”

4 ways to jump start an Open Source & Agile Automation Culture

Automation within enterprise IT is not a new topic. Whether it’s automating the creation of a user desktop or a server, the drive has always been to automate as much as possible to achieve efficiency and faster time to market. What has changed, though, is the number of infrastructure elements one can automate within an IT org. I still remember my first job in college 15 years ago, where I used a variety of tools to automatically deploy and configure Windows XP simultaneously across 50 desktop machines for a classroom lab environment. Today we can automate not only desktop deployments but also servers, applications, and even networking.

Continue reading “4 ways to jump start an Open Source & Agile Automation Culture”