Dynamic SSL certificates using LetsEncrypt on OpenShift

This post was originally published on the ETI blog here.

Managing SSL certificates in OpenShift can be a bit of a chore, especially when you have more than a few routes to look after. An automated mechanism for issuing and renewing certificates helps with the operational overhead, and in this example LetsEncrypt is the weapon of choice.

You could quite conveniently use a wildcard certificate to cover most of your routes, but that doesn’t cover every use case you might have, especially when you manage multiple domains. Consider also that wildcard certificates are deprecated[1] in favour of tooling that provides programmatic access to create and renew SSL certificates on demand. There are a bunch of advantages (and disadvantages) to this, and a tonne of articles out there already cover the nuts and bolts of that topic, so I’m going to skip over it and instead share my experience deploying and using LetsEncrypt on OpenShift.

LetsEncrypt has been around for a while now and has been adopted in many environments, so I thought it was about time I shared how I have applied LetsEncrypt to solve the problem of managing certificates across multiple domains on my OpenShift cluster.
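
To give a flavour of what that automation looks like in practice, here is a minimal sketch of a Route annotated for automatic certificate management. It assumes an ACME controller such as openshift-acme is running in the cluster and watching for the kubernetes.io/tls-acme annotation; the route name, hostname and service below are placeholders, not necessarily the exact setup described in the full post.

```yaml
# Hypothetical example: a Route that an ACME controller (e.g. openshift-acme)
# can pick up and provision a LetsEncrypt certificate for automatically.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                      # placeholder name
  annotations:
    kubernetes.io/tls-acme: "true"  # tells the controller to manage this route's cert
spec:
  host: my-app.example.com          # placeholder hostname
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```

The idea is that the controller completes the ACME challenge on your behalf, injects the issued certificate into the route’s TLS configuration, and renews it before it expires.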

Continue reading “Dynamic SSL certificates using LetsEncrypt on OpenShift”

Deploying AMQ 7.2 Streams on OpenShift

This post was originally published on the ETI blog here.

Today I was given the challenge of providing Kafka as a service to multiple development teams in a way that was consistent and could be managed easily. There are a number of challenges to this, from how the service request is provisioned in the first place through to how the running cluster gets monitored and upgraded.

Kafka is a streaming platform designed to be highly available and scalable for building data pipelines, and it is used by many companies in production.

I wanted the ability to manage Kafka centrally, so an operator deployed once to provide Kafka as a service to development teams was a natural fit. It means developers are able to quickly service their own needs, and the central Cloud team stays off their critical path and can focus on providing platform features rather than servicing individual requests.

The cleanest way to provide this type of centrally managed service is to deploy Kafka using an operator. Even though operators have only recently started to be adopted, I was not disappointed to discover that the Strimzi project gives us a way to do this. I won’t cover what operators are in this article, but if you’d like to find out more about them, take a look at this blog post. There is also a set of training scenarios available on Katacoda.
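
As a taste of what this looks like, below is a minimal sketch of the Kafka custom resource that the Strimzi / AMQ Streams operator acts on. The cluster name, replica counts and ephemeral storage are illustrative only, and the exact apiVersion and listener syntax depend on the Strimzi or AMQ Streams release you are running.

```yaml
# Illustrative Kafka custom resource for the Strimzi / AMQ Streams operator.
# apiVersion varies by release (v1alpha1 in early AMQ Streams, v1beta2 in
# current Strimzi); check the documentation for the version you deploy.
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster          # placeholder cluster name
spec:
  kafka:
    replicas: 3             # three brokers for availability
    listeners:
      plain: {}             # plaintext listener
      tls: {}               # TLS listener
    storage:
      type: ephemeral       # fine for a demo; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # lets teams manage topics as KafkaTopic resources
    userOperator: {}        # and users as KafkaUser resources
```

Once the operator is installed, applying a manifest like this with oc is all a development team needs to do to request a cluster; the operator takes care of creating and reconciling the underlying Kubernetes resources.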

Continue reading “Deploying AMQ 7.2 Streams on OpenShift”