Guest post by Eric D. Schabell and John Hurlocker - original article published here.
In this week's tips & tricks we will be slowing down and taking a closer look at possible Red Hat JBoss BRMS deployment architectures.
When we talk about deployment architectures we are referring to the options you have to deploy a rules and/or events project in your enterprise.
This is the actual runtime architecture that you need to plan for at the start of your design phase, determining the best way to deploy your upcoming application for your enterprise and infrastructure. It will also most likely affect how you design the application itself, so being aware of your options will help make your project a success.
This will be a multi-part series that will introduce the deployment architectures in phases, starting this week with the first two architectures.
The possibilities
A rule administrator or architect works with the application team(s) to design the runtime architecture for rules. Depending on the organization's needs, the architecture could be any one of the following, or a hybrid of the designs below.
In this series we will present four different deployment architectures and one design time architecture, discussing the pros and cons of each so you can evaluate them against your own needs.
The basic components in these architectures shown in the accompanying illustrations are:
- JBoss BRMS server
- Rules developer / Business analyst
- Version control (GIT)
- Deployment servers (JBoss EAP)
- Clients using your application
Rules deployed in application
The first architecture is the most basic and static in nature of all the options you have to deploy rules and events in your enterprise architecture.
A deployable rule package (e.g. JAR) is included in your application’s deployable artifact (e.g. EAR, WAR).
In this architecture the JBoss BRMS server acts as a repository to hold your rules and a design time tool. Illustration 1 shows how the JBoss BRMS server is and remains completely disconnected from the deployment or runtime environment.
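As a minimal sketch of this embedded approach, the application loads the bundled rule packages from its own classpath and evaluates them in-process via the KIE API. This assumes the rule JAR is packaged inside the application artifact and that its kmodule.xml defines a session named `ksession-rules` (a placeholder name for illustration):

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class EmbeddedRulesExample {
    public static void main(String[] args) {
        // Load the rule packages bundled on the application's own classpath
        // (e.g. a rules JAR packaged inside the EAR/WAR).
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();

        // "ksession-rules" is a placeholder session name from kmodule.xml.
        KieSession kSession = kContainer.newKieSession("ksession-rules");
        try {
            // Insert your domain facts here, then evaluate the rules in-JVM.
            kSession.fireAllRules();
        } finally {
            kSession.dispose();
        }
    }
}
```

Because the rules execute inside the same JVM as the application, there is no remote call overhead, which is the performance advantage noted below.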
Pros
- Typically better performance than using a rule execution server since the rule execution occurs within the same JVM as your application
Cons
- No ability to push rule updates to production applications
  - requires a complete rebuild of the application
  - requires complete re-testing of the application (Dev - QA - PROD)
Rules scanned from application
A second architecture, a slight modification of the previous one, adds a scanner to your application that monitors for new rule and event updates, pulling them in as they are deployed into your enterprise architecture.
The JBoss BRMS API contains a KieScanner that monitors the rules repository for new rule package versions. Once a new version is available it will be picked up by the KieScanner and loaded into your application, as shown in illustration 2.
The Cool Store demo project demonstrates the use of the JBoss BRMS KieScanner, with an example implementation showing how to scan your rule repository for the latest built package.
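A sketch of how such a scanner is wired up: a KieScanner from the KIE API is attached to a KieContainer and started with a polling interval, after which updated rule packages are hot-swapped into the running container. The Maven coordinates `com.example:cool-store-rules` are placeholder names for illustration:

```java
import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RuleScannerExample {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // Resolve the rule package by its Maven coordinates; "LATEST" lets
        // the scanner pick up newly deployed versions from the repository.
        ReleaseId releaseId =
                ks.newReleaseId("com.example", "cool-store-rules", "LATEST");
        KieContainer kContainer = ks.newKieContainer(releaseId);

        // The scanner polls the rules repository and swaps updated packages
        // into the running container without an application restart.
        KieScanner kScanner = ks.newKieScanner(kContainer);
        kScanner.start(10_000L); // check for updates every 10 seconds

        // Sessions created after an update use the newly loaded rules.
        KieSession kSession = kContainer.newKieSession();
        kSession.fireAllRules();
        kSession.dispose();
    }
}
```

Note that because updates arrive automatically, any governance over what reaches production has to happen before the package is deployed to the repository, which motivates the cons listed below.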
Pros
- No need to restart your application servers
  - in some organizations the deployment process for applications can be very lengthy
  - this allows you to push rule updates to your application(s) in real time
Cons
- Need to create a deployment process for testing rule updates with the application(s)
  - risk of pushing incorrect logic into the application(s) if that process does not test thoroughly
Next up
Next time we will dig into the two remaining deployment architectures that provide you with an Execution Server deployment and a hybrid deployment model to leverage several elements in a single architecture. Finally, we will cover a design time architecture for your teams to use while crafting and maintaining the rules and events in your enterprise.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.