I have been asked, tasked, and dropped in by parachute on an extraordinary number of occasions recently to answer questions about, and implement solutions for, Single Sign On (SSO) to OpenShift Container Platform. These conversations can start in multiple ways:
- How do I do SSO to OpenShift?
- How do I integrate OpenShift with my existing SAML identity provider?
- How do I log into OpenShift with my PIV and PIN?
These questions typically share the same goal, and they all have the same answer. Organizations usually have an existing SAML-based identity provider they use for single sign on, and in many organizations, especially in government, the user supplies that identity via a PIV card and PIN.
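As a rough sketch of what that answer tends to look like in practice: OpenShift's OAuth layer is pointed at an OIDC broker (such as RH-SSO/Keycloak) that in turn federates the existing SAML identity provider. The example below assumes OpenShift 4's cluster-wide OAuth resource; the issuer URL, client ID, and Secret name are placeholders, not values from the post.

```yaml
# Cluster OAuth resource pointing OpenShift at an OIDC broker
# (e.g. RH-SSO/Keycloak) that federates the existing SAML IdP.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: corporate-sso                # label shown on the login page
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: openshift              # client registered in the broker (placeholder)
      clientSecret:
        name: openshift-client-secret  # Secret in the openshift-config namespace
      claims:
        preferredUsername:
        - preferred_username
        email:
        - email
      issuer: https://sso.example.com/auth/realms/corp  # placeholder realm URL
```

The broker, not OpenShift itself, handles the SAML (and PIV/PIN) exchange; OpenShift only sees standard OIDC tokens.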
Continue reading “OpenShift Single Sign On (SSO)”
This post was originally published on the ETI blog here.
Today I was given the challenge of providing Kafka as a service to multiple development teams in a way that was consistent and could be managed easily. There are a number of challenges to this, from how do you provision the service request through to when the thing is running, how does it get monitored or upgraded.
Kafka is a streaming platform designed to be highly available and scalable, used for building data pipelines and run in production by many companies.
I wanted the ability to manage Kafka centrally, so an operator deployed once to provide Kafka as a service to development teams was a natural fit. It means that developers can quickly service their own needs, while the central Cloud team stays off their critical path and can focus on providing platform features rather than servicing individual requests.
The cleanest way to provide this type of centrally managed service is to deploy Kafka using an operator. Even though operators are only recently starting to be adopted, I was not disappointed to discover that the Strimzi project gives us a way to do this. I won’t cover what operators are in this article, but if you’d like to find out more about them, take a look at this blog post. There is also a set of training scenarios available on Katacoda.
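A minimal sketch of what the self-service model looks like from a development team's side: they create a Kafka custom resource, and the centrally deployed Strimzi operator reconciles it into a running cluster. The resource name and sizing below are hypothetical, and the exact `apiVersion` and listener syntax vary between Strimzi releases.

```yaml
# A Kafka custom resource: a development team creates one of these,
# and the centrally deployed Strimzi operator does the rest.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: team-a-kafka        # placeholder name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # fine for a dev sketch; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # lets teams manage topics as custom resources too
    userOperator: {}
```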
Continue reading “Deploying AMQ 7.2 Streams on OpenShift”
This post was originally published on the ETI blog here.
Red Hat Quay (or “Kwaaaay,” as my US colleagues pronounce it) is a container registry originally from the team at CoreOS, which was recently acquired by Red Hat. A container registry plays a pivotal role in a successful container strategy, making it simple for developers and administrators to store, manage, distribute, and deploy container images across their container platforms, be that on a laptop, a standalone server, or a distributed solution like Kubernetes.
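The "Podman and systemd" combination in the post's title can be sketched as a systemd unit that runs the registry container under Podman. This is a minimal illustration, not the post's actual configuration: the image reference and volume path are placeholders, and a real Quay deployment also needs its database and Redis configured.

```ini
# /etc/systemd/system/quay.service -- run the Quay container under systemd
[Unit]
Description=Red Hat Quay container registry
After=network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/podman rm -f quay
ExecStart=/usr/bin/podman run --name quay \
    -p 8080:8080 \
    -v /var/lib/quay/config:/conf/stack:Z \
    quay.io/redhat/quay:v3
ExecStop=/usr/bin/podman stop quay

[Install]
WantedBy=multi-user.target
```

With a unit like this in place, the registry is started and kept running with `systemctl enable --now quay.service`, no container orchestrator required.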
Continue reading “Floating Kwaaaay with Podman and systemd”
Most of us have been in a position where we felt ready to take the next step. Maybe you were in the final year of your college studies; maybe you are a self-taught developer or administrator. In either case, there comes a time when you feel ready to pounce, ready to take on real-world challenges. You send out CVs, you start networking, you talk to people. Some offers have already fled, you see, for they were never within your reach. Other offers are on the table, but you are not completely sure of them. What’s more, they are not completely sure of you, either.
Continue reading “Red Hat as a Catalyst for the Learner Community”
This post was originally published on Ales Nosek – The Software Practitioner.
In this article I am going to show you how to attach a debugger and a VisualVM profiler to a Java application running on OpenShift. The approach described here doesn’t make use of the Jolokia bridge. Instead, we are going to leverage the port-forwarding feature of OpenShift.
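The general shape of the port-forwarding approach is sketched below. The pod name and port numbers are placeholders; the JVM flags are the standard JDWP debug-agent and JMX remote-management options, with the RMI port pinned and the hostname set to localhost so that JMX works through the forwarded connection.

```shell
# 1. Start the JVM in the pod with a JDWP debug agent and a JMX port
#    (e.g. via JAVA_TOOL_OPTIONS or the image's startup options):
#      -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
#      -Dcom.sun.management.jmxremote.port=9010
#      -Dcom.sun.management.jmxremote.rmi.port=9010
#      -Dcom.sun.management.jmxremote.authenticate=false
#      -Dcom.sun.management.jmxremote.ssl=false
#      -Djava.rmi.server.hostname=localhost

# 2. Forward both ports from the pod to your workstation
#    (my-app-pod is a placeholder pod name):
oc port-forward my-app-pod 8000:8000 9010:9010

# 3. Attach your IDE's remote debugger to localhost:8000, and add a
#    JMX connection to localhost:9010 in VisualVM.
```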
Continue reading “Remote Debugging of Java Applications on OpenShift”
Automation within enterprise IT is not a new topic. Whether it’s automating the creation of a user desktop or a server, the drive has always been to automate as much as possible to achieve faster time to market and greater efficiency. What has changed, though, is the number of infrastructure elements one can automate within an IT organization. I still remember my first job in college 15 years ago, where I used a variety of tools to automatically deploy and configure Windows XP simultaneously across 50 desktop machines for a classroom lab environment. Today we can automate not only desktop deployments but also servers, applications, and even networking.
Continue reading “4 ways to jump start an Open Source & Agile Automation Culture”
Are you still coding your API client libraries by hand? Is your manually maintained API documentation drifting away from what was actually implemented? You may be interested in two popular technologies that solve these problems. In this article, we are going to look at OpenAPI and gRPC side by side.
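To make the comparison concrete, here is roughly how the same one-method API is described in each. The `GetUser` service is a hypothetical example, trimmed to essentials; in both cases the contract is a machine-readable file from which client code and documentation can be generated.

```protobuf
// gRPC: the contract lives in a .proto file, and client libraries
// are generated from it by protoc.
syntax = "proto3";

service UserService {
  rpc GetUser (GetUserRequest) returns (User);
}

message GetUserRequest {
  string id = 1;
}

message User {
  string id = 1;
  string name = 2;
}
```

```yaml
# OpenAPI: the same contract as an HTTP/JSON description, from which
# both documentation and client libraries can be generated.
openapi: "3.0.0"
info:
  title: User API
  version: "1.0"
paths:
  /users/{id}:
    get:
      operationId: getUser
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: A single user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  name: { type: string }
```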
Continue reading “Comparing OpenAPI with gRPC”
Kaunas Information Technology School helps Lithuanians choose a marketable profession, then develop the skills and qualifications needed to meet market demand and obtain a job in IT. To expand its Linux® and open source curricula, the school decided to offer Red Hat Training and Certification through Red Hat Academy, a partnership program with educational organizations. By offering the Red Hat Certified System Administrator course and exam, students gain hands-on experience with Linux technology, improving their job prospects, while the school improves its reputation and competitiveness in the academic community.
Continue reading “Lithuanian IT school improves student job prospects with Linux training”