by Malcolm Herbert (Red Hat)
The post below originally appeared here on November 22, 2012.
A comparison between enterprise IT and public cloud computing dramatically highlights the benefits of moving to cloud.
Application deployment times can shrink from weeks in the traditional data centre to minutes in a cloud data centre; new application development time accelerates from years to weeks (or months at most); cost per virtual machine plummets from dollars to cents; server administrator ratios can explode from 20:1 to 300:1; while efficiency increases, with resource utilisation soaring from 20% to 75%.
With measurable benefits like these, it’s no wonder that IDC expects that by 2015 the majority of the enterprise market will require integrated hybrid cloud management capabilities (Source: IDC Cloud Management Study, 2011 Survey).
Continue reading “Five top tips for the journey to cloud”
by Thomas Crowe (Red Hat)
As an experienced IT professional, chances are you’ve been involved with a migration of some sort. Whether it’s a simple migration, such as moving static data to another node, or a highly complex migration across datacenters, all successful migrations have one thing in common – rock-solid planning. Migrations attempted without the requisite planning can be fraught with peril and end in disastrous consequences.
Ultimately, users, our customers, do not really care whether a given server is up or down. They care whether they can access a specific application, such as email, a website, or data. It is the service that users care about, and it is the service on which migration planning needs to focus.
Continue reading “Migration strategy 2.0: Plan a services-focused approach for greatest success”
by Satish Irrinki (Red Hat)
It’s a truism that adopting open source software (OSS) reduces costs, but that’s not all. Let’s take a deeper dive into the business value of adopting OSS and uncover how adoption provides immense value at multiple levels of an organization. The value proposition for OSS can be attributed to three groups within an organization – technical buyers, business buyers, and economic buyers.
Technical buyers can best be described as the line managers who operate under stringent budgets to do more with fewer resources. As a result, they aim to reduce costs and increase efficiencies within their operating units. In a bid to increase resource utilization, technical buyers seek greater reliability and flexibility in their operations. To achieve these goals they use systems that are reliable, adhere to standard specifications, and are low in cost.
The high level of collaboration and contribution within the OSS development model accelerates the pace at which typical open source software gains features. Availability of source code allows adopters to make custom changes and tailor the software for specific needs. The ability to reuse software components across the organization (develop once, use within multiple systems) reduces the unit cost of development. These virtues of OSS mesh well with the goals of technical buyers and make OSS a viable option when making technology decisions.
Continue reading “Business value of open source software”
by Larry Spangler (Red Hat)
Lately, I’ve been seeing and hearing a lot of buzz about “operational efficiency.” As some see it, operational efficiency is basically the idea of doing more with less: if you can define and follow processes, you can achieve repeatable outcomes with reduced error. Automate that, and you have a means to extend the reach of the individual IT operator while decreasing the effort and time required to build systems. It’s a straightforward value proposition that Red Hat has been touting and delivering for years with standardized operating environments (SOEs) and management tools like Red Hat Network Satellite and JBoss Operations Network.
But there’s evolution afoot here, from the classic “operational” sense to one that is more expansive and higher purposed. The basics of SOEs and management tools are now being used not only to define and develop repeatable infrastructure, but also, together with tools like virtualization, IaaS, and PaaS, to deliver on-demand capabilities. The key is that the focus is shifting from how to get the most out of your resource investment to how to effectively and efficiently instantiate, use, and release systems for true on-demand capabilities.
Continue reading “The evolution of operational efficiency”
by Guy Martin (Red Hat)
“Open source is scary!”
“How can something ‘open’ be secure?”
“Won’t using open source in my products mean I have to give away my IP?”
These are all examples from real-world conversations with both external and internal stakeholders during my career as a developer and consultant. There are many more such examples, which I previously built into a blog titled Top 10 Signs Your Enterprise Doesn’t ‘Get’ Open Source. The good news is that with the emergence of Linux, Apache, JBoss and other important open source technologies, we don’t hear these kinds of things as often. The bad news is, there are still quite a few industries and companies where these fears are the norm.
Continue reading “Keep Calm and Innersource On”
by Satish Irrinki (Red Hat)
Open source adoption within the public sector is no longer just theoretical – agencies across federal, state, and local governments have adopted open source software for a wide variety of computing tasks. In fact, new guidelines for software selection mandate that open source software be given equal consideration when making technology decisions. This is mostly because there are intrinsic characteristics of open source software that align with long-term IT adoption trends within the public sector. Of course, open source’s obvious cost savings and economic value are significant drivers of adoption as well.
The 25 Point Implementation Plan to Reform Federal IT, a guideline released by the White House, clearly focuses on driving IT strategy forward with an emphasis on open source’s intrinsic characteristics — interoperability and portability. Adopting open source software perfectly meets these goals, while fostering innovation, reducing redundancy, and providing immense economic benefit to society.
Continue reading “Open source adoption in the public sector”
by Duncan Doyle
With the growing popularity of cloud environments and cloud-like architectures, the Service Oriented Architecture (SOA) paradigm has become increasingly important. Having been the previous big buzzword in IT, the term SOA has often been used as a means to sell software products rather than as a term referring to an architectural style. However, to benefit most from the new possibilities in virtualization, just-in-time provisioning, and on-demand scalability, it has become a must for businesses to partition their enterprise logic and functionality into individual components which can be deployed independently in heterogeneous environments.
One of the goals of an SOA is to provide the enterprise with a set of reusable, readily available business services, and as such reduce cost and provide greater operational agility. The autonomous nature of well-defined services makes these components the perfect candidates for deployment in cloud environments. These individual services can then be combined, or composed, into business applications which provide the actual business value. The specific composition of these services in fact defines the actual business process.
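The idea that a business process is simply a composition of autonomous services can be sketched in a few lines. The sketch below is language-agnostic in spirit (shown here in plain Python, not a JBoss API); the service names and rules are purely illustrative.

```python
# Minimal sketch: autonomous, reusable services composed into a
# business process. Each service is independently deployable in
# principle; the composition itself defines the process.

def credit_check_service(order):
    # Hypothetical rule: approve orders up to 10,000.
    return {**order, "credit_ok": order["amount"] <= 10_000}

def inventory_service(order):
    # Stubbed out: a real service would query stock levels.
    return {**order, "in_stock": True}

def compose(*services):
    """Chain services into one business process."""
    def process(order):
        for service in services:
            order = service(order)
        return order
    return process

# The order in which services are composed IS the business process.
order_process = compose(credit_check_service, inventory_service)
result = order_process({"id": 42, "amount": 500})
```

Because each service only depends on its input and output contract, individual services can be replaced, redeployed, or scaled without touching the composition.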
Continue reading “BPM: Utilizing JBoss technologies to increase business performance and agility”
by David Kang (Red Hat)
Cloud is not software, cloud is not hardware, cloud is not virtualization, and cloud is certainly not a panacea for broken IT. Cloud is an architecture: a set of fundamental tenets that have different implications at different levels of IT, from network, to hardware, to applications, and to the IT process itself. To say you have a cloud is to say that you have a cohesive architecture, technology set, and most importantly processes, that work towards a defined goal under a set of well-understood principles. Building your cloud is as much about defining your goals and governing principles as it is about investing in technology.
Building your cloud and consuming cloud services
Step one is defining your governing principles. This is a crucial step before embarking on your cloud journey, as the policies and principles you define will help you navigate the rapidly expanding cloud ecosystem. This is also an opportunity to ask tough questions and examine what your principles and processes are, and why you have them. Process is ultimately about managing risk, so consider what risks are acceptable under your governance policies and weigh them against the potential benefits cloud can offer. Both Facebook and Google have adopted “deploy to production” models that seem to fly in the face of process conventions such as ITIL or RUP, yet somehow they seem to survive. The penalty for not doing this exercise is ballooning adoption costs, or failed rollouts altogether.
Continue reading “Cloud Sniff Test: Cutting through the jargon”