Monitoring of application environments is critically important, and there are many kinds of monitoring to consider – infrastructure capacity and availability, as well as application responsiveness and availability, to name a few. This is also true in a Platform-as-a-Service (PaaS) environment, such as one running the Red Hat OpenShift Container Platform. The use of containers means that applications can be built as immutable images that are deployed and promoted to multiple hosting environments at a rapid pace. This makes monitoring more complex, and the ability to adapt dynamically to the running environment is key to success. But what is the best way to deploy and manage the monitoring solution itself, which in many cases is also running fully or partially within containers? This is where Infrastructure-as-Code (IaC) comes in as a useful approach. First, let’s define IaC.
Continue reading “OpenShift Container Platform Monitoring managed with Infrastructure-as-Code (IaC)”
Next week in Vancouver, Canada, thousands of OpenStack users will meet at the seventeenth OpenStack Summit. This year the event will expand beyond OpenStack to cover topics such as CI/CD, container infrastructure, edge computing, HPC, open source communities, private, hybrid, and public cloud, and telecommunications.
Continue reading “Learn how to manage OpenStack with Ansible, this year at OpenStack Summit”
The Cloud Age
In the olden days of yore, when anyone wanted a new server, they would embark on a ritual that involved everything from ordering the physical hardware from a supplier to provisioning the server after installation. This process involved multiple time-consuming, indirect steps and could take months. Have you ever seen one of those American TV ads where some distressed person in an irrational amount of agony is shown as an example of a pain point? Yeah? Well, it was kinda like that.
Continue reading “Enable agility with infrastructure-as-code”
by Christian Stankowic
If you’re maintaining multiple Red Hat Enterprise Linux systems (or derivatives such as CentOS or Scientific Linux), administering the individual hosts quickly becomes routine. Because even the best administrator might forget something, it is advantageous to have a central software and configuration management solution. Chef and Puppet are two very powerful and popular management tools for this purpose. Depending on your system landscape and needs, however, these tools might be overkill – the Red Hat Package Manager (RPM) can serve as a functional alternative in this case.
It is often forgotten that RPM can also be used to distribute your own software and configurations. If you’re not managing a huge system landscape with uncontrolled software growth and want an easy-to-use solution, you might want to have a look at RPM.
I use RPM myself to maintain my entire Red Hat Enterprise Linux system landscape – this article will show you how easily RPM can be used to simplify system management.
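As a minimal sketch of the idea, a custom configuration package can be described in an RPM spec file like the one below. All names and paths here are hypothetical examples, not taken from the article:

```
# Hypothetical spec file for a package that ships an
# organization-wide shell profile snippet.
Name:       mycompany-base-config
Version:    1.0
Release:    1%{?dist}
Summary:    Organization-wide shell environment defaults
License:    GPLv2
Source0:    mycompany.sh
BuildArch:  noarch

%description
Deploys the organization-wide shell environment defaults
to /etc/profile.d on every managed host.

%install
mkdir -p %{buildroot}%{_sysconfdir}/profile.d
install -m 0644 %{SOURCE0} %{buildroot}%{_sysconfdir}/profile.d/mycompany.sh

%files
%config(noreplace) %{_sysconfdir}/profile.d/mycompany.sh
```

Built with `rpmbuild -ba` and published to an internal yum repository, such a package can then be installed, upgraded, and verified (`rpm -V`) on every host with the standard tooling; the `%config(noreplace)` directive preserves local edits across upgrades.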
Continue reading “GUEST POST: Software and configuration management made easy with RPM”
by Satish Irrinki (Red Hat)
Increasingly in today’s world, data centers are moving toward software-defined computing, networking, and storage. The IT infrastructure that supports application and data workloads is moving from bare-metal servers to the cloud. While the most obvious justification for this shift can be summarized as increased efficiency, capacity utilization, and flexibility (to scale up or down), there are less obvious fundamental economic and financial principles in play that contribute to the overall business stability of organizations and their lines of business (LOB).
Cloud computing has changed the cost structure of IT infrastructure. Historically, IT infrastructure was treated as a capital expenditure (CapEx) that required large upfront investments, leading to higher fixed costs for the business. With the advent of cloud computing, primarily because of its pay-for-use billing model, IT expenditure has shifted from a fixed-cost (CapEx) structure to a variable operating-cost (OpEx) model.
This shift not only reduces the need for large cash outlays – or, in lieu of them, higher liabilities on the balance sheet (akin to the capitalization of lease expenses) – to fund CapEx; it also reduces volatility in the business’s operating income.
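The cash-flow difference can be illustrated with a small numeric sketch. All figures below are invented for illustration and are not taken from the article:

```python
# Illustrative (made-up) numbers: compare a CapEx purchase with a
# pay-for-use OpEx model over the same three years of demand.
capex_upfront = 300_000          # servers bought outright in year 0
capex_annual_ops = 20_000        # fixed running cost per year

demand = [0.4, 0.9, 0.6]         # fraction of peak capacity actually used
opex_rate = 120_000              # cloud cost per year at full utilization

# CapEx: one large fixed outlay up front, then fixed costs regardless of use.
capex_cash_flow = [capex_upfront + capex_annual_ops] + [capex_annual_ops] * 2

# OpEx: cost tracks actual usage year by year.
opex_cash_flow = [u * opex_rate for u in demand]

print("CapEx cash flow:", capex_cash_flow)
print("OpEx cash flow: ", opex_cash_flow)
print("CapEx total:", sum(capex_cash_flow))
print("OpEx total: ", sum(opex_cash_flow))
```

The CapEx profile front-loads the spend and stays fixed even in the low-demand years, while the OpEx profile rises and falls with usage – which is exactly why it smooths operating income against demand swings.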
Continue reading “Cloud Adoption for Enhanced Business Stability”
by Thomas Crowe (Red Hat)
A key component of a successful migration is a “migration mission statement.” Its purpose is to summarize the key parts of a migration in a succinct, simply communicated format, resulting in a clearly defined migration goal whose success is easily measurable. A sample migration mission statement could be:
Migrate the Acme Order Processing Java application from the current proprietary IBM hardware running AIX and WebSphere to a cloud infrastructure running Red Hat Enterprise Linux and JBoss Application Server, in order to provide better TCO and ROI as well as increased scalability and reliability. The migration should be performed during non-peak hours, have minimal downtime requirements, and provide for rollback if necessary.
Generally speaking, there are several factors that go into planning and executing a successful migration project. But by answering the following questions, a significant amount of the information necessary for a successful migration can be gathered.
The most basic question to ask initially is simply, “What is being migrated?” This simple question sets the stage for gathering the additional information that is required. Is the migration moving all services from one server to another? Is it migrating an application from one application server to another, or migrating storage from one array to another? Each of these scenarios has unique data-gathering requirements that must be understood in order to plan and ultimately execute a successful migration.
Continue reading “Determining your ‘migration mission statement’…and why it’s important”
by Bruno Lima
Long an acquaintance and ally of government institutions, open source is no longer considered rocket science by the enterprise.
Companies find open source attractive because they’re not tied to one vendor, can make improvements in the system at any time and realize cost savings, all helping boost market penetration. And, of course, there’s the benefit of communities continuously improving the products.
Around the world, governments are strong sponsors of this type of initiative, especially in Brazil, where the use of free and open source software is encouraged to make the market more democratic. And, of course, the market has become increasingly open to open source. While there were once concerns about reliability, security, and functionality, those fears have largely disappeared. Red Hat has made it possible to combine the benefits of these technologies with the support necessary for mission-critical environments, developing platforms that address the specific demands organizations face.
Continue reading “My thoughts on open source”