“I don’t have permission to do this!” That’s one of many phrases I used to hear from developers when trying to introduce them to a configuration management tool. This barrier was, and in some cases still is, a major obstacle in companies that didn’t adopt the DevOps movement and its practices, a problem that I like to call “The Wall”. And yes, if you’re wondering, I am a huge Pink Floyd fan!
“The Wall” appears when, for whatever reason, developers and operations can’t find a middle ground and share responsibilities. The likely result is a rigid and unreliable application. In a world where requirements are constantly modified by customer behavior, these teams will struggle to achieve a stable, resilient, and flexible platform. It’s not easy to deal with this cultural problem, and one of the main pillars supporting it is technical complexity. Each side has its own specifics, problems, architecture, languages… If we just throw packages from one side to the other, we are adding “another brick in the wall”. We need to untangle this complexity and meet halfway, working with platforms like Kubernetes, a container orchestrator that can ease teams into areas they didn’t know before.
Continue reading “Breaking Silos using the power of Infrastructure as Data in Kubernetes”
Container-native applications are becoming more and more complex, consisting of various services and features, each component with its own security constraints and complex network policy rules. This makes it more difficult to perform day-two operations once the cloud-native applications are deployed.
While upgrades, patches, and provisioning can be handled with Ansible playbooks or Helm charts, deeper tasks such as application lifecycle management and storage lifecycle management cannot, and still require intervention from the application support team.
The Operator Framework initiative introduced the Operator SDK several years ago to standardize Kubernetes Operator development, making it easier for the Kubernetes community to create Operators and control the lifecycle of container-native applications.
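The heart of an Ansible-based Operator is its `watches.yaml` file, which maps a custom resource to the Ansible role or playbook that reconciles it. As a rough sketch (the group, kind, and role names here are illustrative, not taken from the article):

```yaml
# watches.yaml: when a Memcached custom resource changes,
# run the "memcached" Ansible role to reconcile the cluster state.
- version: v1alpha1
  group: cache.example.com   # hypothetical API group
  kind: Memcached            # hypothetical custom resource kind
  role: memcached            # Ansible role bundled in the operator image
```

With this mapping in place, creating or editing a `Memcached` resource triggers the role, so day-two logic lives in ordinary Ansible rather than Go code.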
Continue reading “How to control Container-Native Applications with Ansible Operator”
I recently collaborated with fellow Red Hatters to create a whiteboarding video that introduces OpenShift Serverless at a high level. In this article, I build upon that YouTube video and my recent work with Quarkus to create a hands-on deep dive into OpenShift Serverless. This article walks you through using the OpenShift Serverless operator to seamlessly add serverless capabilities to an OpenShift 4.3 cluster and then using the Knative CLI tool to deploy a Quarkus native application as a serverless service onto that same cluster.
OpenShift Serverless helps developers deploy and run applications that scale up or scale to zero on demand. Applications are packaged as OCI-compliant Linux containers that can run anywhere. Under the serverless model, an application simply consumes compute resources and automatically scales up or down based on use. As mentioned in the introduction above, the whiteboarding YouTube video embedded below provides a high-level overview of OpenShift Serverless.
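Under the covers, a serverless service is just a Knative `Service` object. A minimal sketch of one looks like this (the service name and image are placeholders, not the ones used later in the walkthrough):

```yaml
# A minimal Knative Service: Knative builds the Route, Configuration,
# and Revision objects for you and scales pods to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-hello            # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/quarkus-hello:latest  # hypothetical image
```

The Knative CLI wraps the same object, so `kn service create quarkus-hello --image quay.io/example/quarkus-hello:latest` would produce an equivalent deployment without writing YAML.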
Continue reading “Hands on introduction to OpenShift Serverless”
Continue reading “Red Hat Training helps you take on container adoption your way”
This post was originally published on https://dev.to/tylerauerbeck.
Traditionally there have been very clear battle lines drawn for application and infrastructure deployment. When you need to run a virtual machine, you run it on your virtualization platform (OpenStack, VMware, etc.), and when you need to run a container workload, you run it on your container platform (Kubernetes). But when you’re deploying your application, do you really care where it runs? Or do you just care that it runs somewhere?
This is where I entered the discussion, and I quickly realized that in most cases, I really didn’t care. What I knew was that I needed the things required to build my application or run my training. I also knew that avoiding the need to manage multiple sets of automation would be an even bigger benefit. So if I could have both running within a single platform, I was absolutely on board to give it a shot.
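One common way to run VMs inside Kubernetes is KubeVirt (an assumption on my part; the full article may use a different mechanism), which models a virtual machine as just another Kubernetes object. A minimal sketch, based on the well-known KubeVirt demo image:

```yaml
# A KubeVirt VirtualMachine: the VM definition lives in the same API
# server as your container workloads, so one set of automation covers both.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false                 # start it later with `virtctl start testvm`
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Because the VM is a regular custom resource, `kubectl get virtualmachines` and the usual label/annotation tooling work on it just like on a Deployment.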
Continue reading “You’ve got Virtual Machines in my Container Platform!: An argument for running VM’s in Kubernetes”
Are you still doing all your Linux container management using an insecure, bloated daemon? Well, don’t feel bad. I was too until recently. Now I’m finding myself saying goodbye to my beloved Docker daemon, and saying hello to Buildah, Podman, and Skopeo. In this article, we’ll explore the exciting new world of rootless and daemon-less Linux container tools.
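To give a taste of the division of labor, here is a short command sketch (it assumes the three tools are installed and uses Red Hat’s public UBI image; run as an unprivileged user to stay rootless):

```shell
# Podman: run a container with no daemon, drop-in for `docker run`
podman run --rm registry.access.redhat.com/ubi8/ubi echo "hello, rootless world"

# Buildah: build an image step by step, no Dockerfile required
container=$(buildah from registry.access.redhat.com/ubi8/ubi)
buildah run "$container" -- dnf -y install httpd
buildah commit "$container" my-httpd

# Skopeo: inspect a remote image's metadata without pulling it
skopeo inspect docker://registry.access.redhat.com/ubi8/ubi
```

Each tool does one job: Podman runs containers, Buildah builds images, and Skopeo moves and inspects them, all without a root-owned daemon in the middle.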
Continue reading “Say “Hello” to Buildah, Podman, and Skopeo”
The pace of innovation has shortened expectations for time to market, placing pressure on IT teams to keep up with the rate of change. Organizations need just-in-time, prescriptive resources that enable their teams to leverage innovation to solve business problems. The Red Hat Learning Subscription (RHLS) delivers unlimited, on-demand, modular access to Red Hat’s entire training portfolio, including cloud-based labs, for a full year. The Early Access feature of RHLS enables subscribers to learn from real-time publishing of courses and labs currently in development.
Continue reading “Start learning Red Hat Enterprise Linux 8 and Red Hat OpenShift Container Platform 4 through Early Access”