by Jim Rigsbee (Red Hat)
In this article, we will convert a web project generated by the JBoss Developer Studio CDI Web Project wizard into a Maven project. Doing so gives you the power of the Maven build system, with its dependency management, build life cycles, and automated Java EE packaging. Follow these steps:
a. Right-click the project name in the Project Explorer tree and select Configure → Convert to Maven Project… In the wizard steps, be sure to select WAR packaging.
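The conversion wizard generates a pom.xml for the project. A minimal sketch of what the resulting file might look like — the groupId, artifactId, and plugin configuration here are illustrative assumptions, not values the wizard necessarily produces:

```xml
<!-- Hypothetical pom.xml sketch; coordinates and settings are assumptions -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>             <!-- illustrative groupId -->
  <artifactId>cdi-web-project</artifactId>   <!-- illustrative artifactId -->
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>                 <!-- WAR packaging selected in the wizard -->
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <!-- CDI web apps can run without a web.xml descriptor -->
          <failOnMissingWebXml>false</failOnMissingWebXml>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```

With a pom.xml like this in place, running `mvn package` would produce the deployable WAR under `target/`.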
Continue reading “How to convert a JBoss Developer Studio web project to a Maven project (JB225)”
by Duncan Doyle
With the growing popularity of cloud environments and cloud-like architectures, the Service Oriented Architecture (SOA) paradigm has become increasingly important. Having been the previous big buzzword in IT, the term SOA has often been used as a means to sell software products rather than as a term for an architectural style. However, to benefit most from the new possibilities in virtualization, just-in-time provisioning, and on-demand scalability, businesses must partition their enterprise logic and functionality into individual components that can be deployed independently in heterogeneous environments.
One of the goals of an SOA is to provide the enterprise with a set of re-usable, readily available business services, and as such reduce cost and provide greater operational agility. The autonomous nature of well-defined services makes these components the perfect candidates for deployment in cloud environments. These individual services can then be combined, or composed, into business applications that provide the actual business value. The specific composition of these services in fact defines the actual business process.
Continue reading “BPM: Utilizing JBoss technologies to increase business performance and agility”
by David Kang (Red Hat)
Cloud is not software, cloud is not hardware, cloud is not virtualization, and cloud is certainly not a panacea for broken IT. Cloud is an architecture: a set of fundamental tenets that have different implications at different levels of IT, from network, to hardware, to applications, and to the IT process itself. To say you have a cloud is to say that you have a cohesive architecture, technology set, and, most importantly, processes that work toward a defined goal under a set of well-understood principles. Building your cloud is as much about defining your goals and governing principles as it is about investing in technology.
Building your cloud and consuming cloud services
Step one is defining your governing principles. This is a crucial step before embarking on your cloud journey, as the policies and principles you define will help you navigate the rapidly expanding cloud ecosystem. This is also an opportunity to ask tough questions and examine what your principles and processes are, and why you have them. Process is ultimately about managing risk, so consider what risks are acceptable under your governance policies and weigh them against the potential benefits cloud can offer. Both Facebook and Google have adopted “deploy to production” models that seem to fly in the face of process conventions such as ITIL or RUP, yet somehow they seem to survive. The penalty for not doing this exercise is ballooning adoption costs, or failed rollouts altogether.
Continue reading “Cloud Sniff Test: Cutting through the jargon”
by Forrest Taylor (Red Hat)
Corresponding Curriculum: Content is extracted from the all-new Deploying Systems in Cloud Environments (CL260) course
Activation keys automate client repository subscriptions when registering with Red Hat CloudForms System Engine. Activation keys can define subscriptions and the default environment for a system. To manage activation keys, log in to System Engine, hover over the “Systems” tab, and choose the “Activation Key” sub-tab. Click the “+ New Key” link, enter the name and environment, then click the “Save” button.
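Once a key exists, a client system can reference it at registration time instead of supplying credentials and picking subscriptions by hand. A hedged sketch of what that registration might look like — the organization name “ACME” and key name “rhel6-web” are made-up examples, and the exact invocation may vary by System Engine version:

```shell
# Illustrative only: "ACME" and "rhel6-web" are hypothetical names
subscription-manager register \
    --org=ACME \
    --activationkey=rhel6-web
```

Because the key carries the subscriptions and default environment, the client ends up attached to the right repositories without any interactive choices.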
Continue reading “Using Activation keys in CloudForms System Engine”
by Wander Boessenkool (Red Hat)
With the release of the updated Red Hat Enterprise Clustering and Storage Management course (RH436) for Red Hat Enterprise Linux 6, a couple of new subjects have been introduced, while others have been updated to reflect the changes in the Red Hat High-Availability Add-On in moving from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 6.
One of the most noticeable new subjects in this updated course is the inclusion of an introduction to highly available, distributed, scalable storage using Red Hat Storage Server. Other updates include the use of multipathed storage throughout the course, as well as coverage of the XFS® file system.
Continue reading “Updates to the Red Hat Enterprise Clustering and Storage Management course”
by Vinny Valdez (Red Hat)
The following is an excerpt of a post originally published on June 29, 2012, on Vinny’s Tech Garage.
I’m really excited about CloudForms. In my recorded demo at Summit, I showed a two-node RHEL active/passive cluster with GFS off an iSCSI target. Then I moved all the underlying CloudForms Cloud Engine components to shared storage. I was able to launch instances, fail over Cloud Engine, and see the correct status. After managing the instances, I failed back, and all was good. All of this works because the RHEL HA cluster stops the databases and other services first, moves the floating IP over, and then starts the services on the active node. This was a very basic deployment; much more could be explored with clustered PostgreSQL and sharded Mongo.
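The ordering described here — stop the services, move the floating IP, then start the services on the active node — is what rgmanager’s nested resource trees in /etc/cluster/cluster.conf express: child resources start after their parent and stop before it. A rough, hypothetical fragment (the IP address, service name, and init scripts are assumptions, not the demo’s actual configuration):

```xml
<!-- Hypothetical cluster.conf fragment; address and scripts are assumptions -->
<rm>
  <service autostart="1" name="cloudengine" recovery="relocate">
    <!-- parent resource: the floating IP comes up first on the active node -->
    <ip address="192.168.100.10" monitor_link="on">
      <!-- children start after the IP, and stop before it on failover -->
      <script file="/etc/init.d/postgresql" name="postgresql"/>
      <script file="/etc/init.d/mongod" name="mongod"/>
    </ip>
  </service>
</rm>
```

On failover, the scripts stop on the passive node before the IP is released, and the IP is plumbed on the new active node before the scripts start — matching the behavior described above.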
Continue reading “What is CloudForms?”
by Zach Rhoads (Red Hat)
One of the core tenets of agile development is to focus on the tasks that are the highest priority and immediate need. This is sometimes referred to as “Just-in-Time” development. The idea is to focus on the tasks needed to ship the feature now and worry about everything else when it is actually needed. Another tenet that goes hand-in-hand with “Just-in-Time” is the idea of failing early. Basically, a team should know as early as possible if something is going to fail; that way, the team does not waste time going down the wrong path. This means the team should develop a feature and solicit feedback in short cycles, allowing the team to quickly understand what works and what does not.
Continue reading “Reducing friction in agile development using cloud”
It has been a little over a year since Quint Van Deman was named 2011’s worldwide Red Hat Certified Professional of the Year. Out of 600 submissions, Van Deman, an RHCA and director of open source consulting at Emergent, stood out with his experience helping clients move from last-generation, proprietary IT infrastructures to next-generation architecture that embraces the synergy of open source, open standards and cloud-based solutions. With the next winner set to be awarded on June 28 at Red Hat Summit in Boston, we wanted to catch up with Quint to hear the story that won the title, what he’s working on now and how his past year has been.
So, what story did you submit to win the award?
What I really wrote about was my journey to becoming an RHCA and how that really benefited my professional endeavors. The journey to becoming an RHCA really exposes one to the breadth of solutions that are out there in the Red Hat stack, and how those solve organizational challenges. I was very clearly able to take some of those direct lessons and apply them out into my work. Also, how the RHCA really provides what I call the ‘instant badge of credibility’ when I walk in somewhere. A lot of the time when you go into an organization as a consultant, there’s a lot of what I call ‘technical chest-bumping,’ where there will be someone in the room whose only objective in the meeting is to prove that they are smarter than you. Having that RHCA up there really defuses a lot of that, especially with folks in the room that may have taken a Red Hat exam.
Continue reading “Checking in with Quint Van Deman, 2011 RHCP of the Year”