Kogito is a cloud-native business automation framework for building intelligent business applications. It is based on battle-tested runtime components such as Drools, jBPM, and OptaPlanner, and it supports the development of process- and business-rule-centric cloud-native applications that orchestrate distributed microservices and container-native applications. Because it was designed from the ground up for container-native platforms such as Kubernetes and OpenShift, it takes full advantage of the benefits those platforms provide.
Some of the distinguishing characteristics of Kogito are as follows:
Continue reading “Cloud-native business automation with Kogito”
Container-native applications are becoming increasingly complex, consisting of many services and features, each with its own security constraints and network policy rules. This makes day-two operations more difficult once cloud-native applications are deployed.
While upgrades, patches, and provisioning can be handled with Ansible playbooks or Helm charts, managing the application lifecycle, the storage lifecycle, and other deeper analysis cannot, and these tasks still require intervention from the application support team.
The Operator Framework initiative introduced the Operator SDK several years ago to standardize Kubernetes Operator development, making it easier for the Kubernetes community to create Operators that control the lifecycle of container-native applications.
Continue reading “How to control Container-Native Applications with Ansible Operator”
In isolation, cloud-native development doesn't mean much. To be truly agile and take advantage of cloud-native technology, organizations must think beyond code and focus on delivering business value quickly, and with quality, in a fast-moving market. This change in thinking comes with a renewed focus on the people and processes involved in developing new capabilities: value comes from communication across teams.
Continue reading “Business-level impact with integrated cloud-native applications, at Red Hat Summit”
As enterprises begin their container journey and onboard applications onto the OpenShift Container Platform, application monitoring becomes critical for anticipating problems and discovering bottlenecks in a production environment. Application monitoring is also one of the biggest challenges faced by almost every organization that is migrating, or has already migrated, its workloads to OpenShift.
The growing adoption of microservices architecture makes monitoring more complex, because a large number of distributed applications are communicating with each other. What used to be a function or a direct method call in a monolithic application is now a network call from one microservice to another. Running multiple instances of these microservices as containers adds yet another layer of complexity.
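The shift described above can be sketched in plain Java: the same "greeting" capability as an in-process call versus a call over HTTP. This is an illustrative sketch using only the JDK (the `/greet` path and the `greetLocal` helper are made up for the example); in a real microservice the remote endpoint would live in another container, which is exactly why it needs monitoring.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MonolithVsMicroservice {

    // In a monolith this is a direct in-process call: no network, no
    // serialization, nothing to monitor beyond the JVM itself.
    static String greetLocal(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        // In a microservices architecture the same capability sits behind a
        // network endpoint (here a throwaway local server on a random port).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/greet", exchange -> {
            byte[] body = greetLocal("Ada").getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // The "same" call now crosses the network: it can be slow, fail, or
        // time out -- the behavior application monitoring has to surface.
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(
                        URI.create("http://localhost:" + port + "/greet")).build(),
                HttpResponse.BodyHandlers.ofString());

        server.stop(0);
        System.out.println(greetLocal("Ada"));
        System.out.println(response.body());
    }
}
```

Both calls print the same greeting, but only the second one produces the network latency, error rates, and per-instance metrics that make monitoring distributed applications harder.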
Starting with OpenShift 4.3, you can use the platform’s monitoring capabilities for your application workloads running on OpenShift. This helps keep the application monitoring centralized. You don’t need to manage an additional monitoring solution as the platform now provides these capabilities.
Continue reading “Application monitoring in OpenShift 4.3”
I recently collaborated with fellow Red Hatters to create a whiteboarding video that introduces OpenShift Serverless at a high level. In this article, I build upon that YouTube video and my recent work with Quarkus to create a hands-on deep dive into OpenShift Serverless. This article walks you through using the OpenShift Serverless operator to seamlessly add serverless capabilities to an OpenShift 4.3 cluster and then using the Knative CLI tool to deploy a Quarkus native application as a serverless service onto that same cluster.
OpenShift Serverless helps developers deploy and run applications that scale up, or scale to zero, on demand. Applications are packaged as OCI-compliant Linux containers that can run anywhere. With the serverless model, an application simply consumes compute resources and automatically scales up or down based on use. As mentioned in the introduction above, the whiteboarding YouTube video embedded below provides a high-level overview of OpenShift Serverless.
Continue reading “Hands on introduction to OpenShift Serverless”
This article is the first in a three-part series sharing my experience with troubleshooting the performance of Vert.x applications. This first article, originally posted on Ales Nosek – The Software Practitioner, provides an overview of the Vert.x event loop model; the second article covers techniques to prevent delays on the event loop; and the third focuses on troubleshooting event loop delays.
Programming with Vert.x requires a good understanding of its event loop model. From what I have seen in practice, delayed or blocked event loop threads are the number one cause of performance problems in Vert.x applications. But don't worry: in this article, we review the event loop model.
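The core idea can be illustrated without Vert.x itself. In the sketch below (a plain-JDK analogy, not actual Vert.x API), a single-threaded executor plays the role of an event loop thread: handlers run one at a time, so a handler that blocks delays every handler queued behind it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EventLoopSketch {

    // Runs three handlers on one "event loop" thread and returns the order
    // in which they completed.
    static List<String> run() throws InterruptedException {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        List<String> log = new ArrayList<>();

        eventLoop.submit(() -> log.add("handler 1: fast"));

        // A blocking handler monopolizes the loop; everything queued behind
        // it is delayed -- the classic Vert.x performance problem.
        eventLoop.submit(() -> {
            try {
                Thread.sleep(200); // stand-in for a blocking call, e.g. JDBC
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            log.add("handler 2: blocked the loop for 200 ms");
        });

        eventLoop.submit(() -> log.add("handler 3: had to wait"));

        eventLoop.shutdown();
        eventLoop.awaitTermination(5, TimeUnit.SECONDS);
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        run().forEach(System.out::println);
    }
}
```

Because a single thread serves all three handlers, handler 3 cannot start until handler 2's sleep finishes; in Vert.x, the fix is to keep blocking work off the event loop thread entirely.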
Continue reading “Troubleshooting the Performance of Vert.x Applications, Part I — The Event Loop Model”