Communities of practice: Straight from the open source

Every solution starts with sharing a problem. At Red Hat, when we talk about “open source,” we’re talking about a proven way of collaborating to create technology: the freedom to see the code, to learn from it, to ask questions, and to offer improvements. This is the open source way. However, bringing people in your organization together to collaborate is often easier said than done.

At Red Hat, we’ve created “Communities of Practice” (CoPs) to help our own people collaborate, especially on new and emerging technologies, including automation.

Continue reading “Communities of practice: Straight from the open source”

Dynamic SSL certificates using LetsEncrypt on OpenShift

This post was originally published on the ETI blog here.

Managing SSL certificates in OpenShift can be a bit of a chore, especially when you have more than a few routes to manage. Having an automated mechanism to manage this helps with the operational overhead, and in this example LetsEncrypt is the weapon of choice.

You could quite conveniently use a wildcard certificate to cover most of your routes, but that doesn’t cover every use case you might have, especially when you manage multiple domains. Consider also that wildcard certificates are deprecated[1] in favour of tooling that provides programmatic access to easily create and renew SSL certificates on demand. There are a bunch of advantages (and disadvantages) to this, and a tonne of articles out there already cover the nuts and bolts of that topic, so I’m going to skip over it and instead share my experience deploying and using LetsEncrypt on OpenShift.

LetsEncrypt has been around for a while now and has been adopted in many environments, so I thought it was about time I shared how I applied LetsEncrypt to solve my problem of managing certificates across multiple domains on my OpenShift cluster.
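To give a flavour of what an automated setup like this looks like, here is a minimal sketch. It assumes an ACME controller (such as the openshift-acme project) is already running in the cluster and watching for annotated routes; the route name is illustrative and the exact mechanism in the full article may differ.

```shell
# Sketch only: assumes an ACME controller (e.g. openshift-acme) is
# deployed in the cluster. "my-app" is a hypothetical route name.
# Opt a route in to automatic certificate issuance and renewal:
oc annotate route my-app kubernetes.io/tls-acme=true

# The controller completes the ACME HTTP-01 challenge and patches the
# route's TLS configuration with the issued certificate. Verify with:
oc get route my-app -o jsonpath='{.spec.tls.termination}'
```

Once the annotation is in place, renewal is handled for you: the controller re-runs the challenge before the certificate expires, with no manual rotation of route TLS material.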

Continue reading “Dynamic SSL certificates using LetsEncrypt on OpenShift”

Deploying AMQ 7.2 Streams on OpenShift

This post was originally published on the ETI blog here.

Today I was given the challenge of providing Kafka as a service to multiple development teams in a way that is consistent and can be managed easily. There are a number of challenges here, from how you provision the service request through to how, once it is running, it gets monitored and upgraded.

Kafka is a streaming tool designed to be a highly available and scalable platform for building data pipelines, and it is used by many companies in production.

I wanted to manage Kafka centrally, so an operator deployed once to provide Kafka as a service to development teams was a natural fit. It means that developers can quickly service their own needs, while the central Cloud team stays off their critical path and can focus on providing platform features rather than servicing individual requests.

The cleanest way to provide this type of centrally managed service is to deploy Kafka using an operator. Even though operators have only recently started to be adopted, I was not disappointed to discover that the Strimzi project gives us a way to do this. I won’t cover what operators are in this article, but if you’d like to find out more about them, take a look at this blog post. There is also a set of training scenarios available on Katacoda.
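With the operator pattern, a team requests a cluster by creating a single custom resource, and the operator does the rest. The sketch below assumes the Strimzi/AMQ Streams operator is already installed and watching the namespace; the API version, cluster name, and field values are illustrative and vary by Strimzi release.

```shell
# Sketch only: assumes the Strimzi/AMQ Streams operator is installed.
# API version and fields differ between Strimzi releases; check yours.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster          # hypothetical cluster name
spec:
  kafka:
    replicas: 3             # three brokers for availability
    storage:
      type: ephemeral       # demo only; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
EOF
```

The operator watches for this resource and creates (and later upgrades and heals) the broker and ZooKeeper pods itself, which is exactly what keeps the central team out of the provisioning loop.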

Continue reading “Deploying AMQ 7.2 Streams on OpenShift”

Floating Kwaaaay with Podman and systemd

This post was originally published on the ETI blog here.

Red Hat Quay (or Kwaaaay, as my US colleagues pronounce it) is a container registry originally from CoreOS, which was recently acquired by Red Hat. A container registry plays a pivotal role in a successful container strategy, making it simple for developers and administrators to store, manage, distribute, and deploy container images across their container platforms, be that a laptop, a standalone server, or a distributed solution like Kubernetes.
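The title hints at the mechanics: run the registry as a rootless Podman container, then let systemd keep it floating across reboots. A minimal sketch of that pattern, with an illustrative image tag and port (a real Quay deployment also needs a database and Redis configured first):

```shell
# Sketch only: image, name, and ports are illustrative; Quay needs
# supporting services (database, Redis) configured before it will start.
podman run -d --name quay -p 8080:8080 quay.io/projectquay/quay:latest

# Generate a systemd unit file for the running container, then install
# and enable it as a user service so it restarts on boot.
podman generate systemd --name quay --files
mkdir -p ~/.config/systemd/user
mv container-quay.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-quay.service
```

`podman generate systemd` is the piece that makes daemonless containers behave like managed services: systemd, not a container daemon, owns the lifecycle.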

Continue reading “Floating Kwaaaay with Podman and systemd”

Remote Debugging of Java Applications on OpenShift

This post was originally published on Ales Nosek – The Software Practitioner.

In this article I am going to show you how to attach a debugger and the VisualVM profiler to a Java application running on OpenShift. The approach described here doesn’t make use of the Jolokia bridge. Instead, we are going to leverage the port-forwarding feature of OpenShift.
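The core of the port-forwarding approach can be sketched in two steps: start the JVM with a JDWP debug agent, then tunnel the debug port to your workstation. The deployment name, pod name, and port below are hypothetical; the article covers the exact setup.

```shell
# Sketch only: "my-app" and port 5005 are illustrative.
# 1. Start the JVM with a JDWP agent listening inside the container:
oc set env dc/my-app \
  JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'

# 2. Forward local port 5005 to the pod, then point your IDE's remote
#    debugger (or VisualVM, via a similarly forwarded JMX port) at
#    localhost:5005:
oc port-forward my-app-pod 5005:5005
```

Because the tunnel runs over the existing OpenShift API connection, no extra routes or firewall openings are needed, which is the advantage over exposing the debug port directly.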

Continue reading “Remote Debugging of Java Applications on OpenShift”

4 ways to jump start an Open Source & Agile Automation Culture

Automation within enterprise IT is not a new topic. Whether it’s automating the creation of a user desktop or a server, the drive has always been to automate as much as possible to achieve faster time to market and greater efficiency. What has changed, though, is the number of infrastructure elements one can automate within an IT organization. I still remember my first job in college 15 years ago, where I used a variety of tools to automatically deploy and configure Windows XP simultaneously across 50 desktop machines for a classroom lab environment. Today we can automate not only desktop deployments but also servers, applications, and even networking.

Continue reading “4 ways to jump start an Open Source & Agile Automation Culture”

Comparing OpenAPI with gRPC

Are you still coding your API client libraries by hand? Is your manually maintained API documentation drifting away from what was actually implemented? You may be interested in two popular technologies that solve this problem. In this article, we are going to look at OpenAPI and gRPC side by side.
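Both technologies attack the drift problem the same way: the API is described in a machine-readable definition, and clients (and docs) are generated from it rather than written by hand. A rough sketch of each workflow, with hypothetical file names and assuming the respective generator tools are installed:

```shell
# Sketch only: spec/proto file names are illustrative, and both
# commands assume their tools are installed locally.

# OpenAPI: generate a client library from a spec with openapi-generator
openapi-generator generate -i petstore.yaml -g python -o ./client

# gRPC: generate client stubs from a .proto definition
# (here using the grpcio-tools protoc plugin for Python)
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. service.proto
```

Because the definition file is the single source of truth, regenerating after an API change keeps client code and documentation in lockstep with the implementation.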

Continue reading “Comparing OpenAPI with gRPC”

Welcome to the AI Thunderdome: Using OpenStack to accelerate AI training with Sahara, Spark, and Swift

Like many others in the technology industry, I have a passion for artificial intelligence (AI). This year at OpenStack Summit in Berlin, I presented a talk on parallel AI training, because OpenStack lends itself well to big data problems.

Continue reading “Welcome to the AI Thunderdome: Using OpenStack to accelerate AI training with Sahara, Spark, and Swift”