by Satish Irrinki (Red Hat)
It’s a truism that adopting open source software (OSS) reduces costs, but that’s not all. Let’s take a deeper dive into the business value of adopting OSS and uncover how its adoption provides immense value at multiple levels of an organization. The value proposition for OSS can be attributed to three groups within an organization – Technical Buyers, Business Buyers, and Economic Buyers.
Technical buyers can best be described as the line managers who operate under stringent budgets to do more with fewer resources. As a result, they aim to reduce costs and increase efficiencies within their operating units. In a bid to increase their resource utilization, technical buyers seek to increase reliability and flexibility in their operations. To achieve these goals they use systems that are reliable, adhere to standard specifications, and are low in cost.
The high level of collaboration and contribution within the OSS development model accelerates the pace at which typical open source software gains features. Availability of source code allows adopters to make custom changes and tailor the software to specific needs. The ability to reuse software components across the organization (develop once, use within multiple systems) reduces the unit cost of development. These virtues of OSS mesh well with the goals of technical buyers and make OSS a viable option when making technology decisions.
Continue reading “Business value of open source software”
by Larry Spangler (Red Hat)
Lately, I’ve been seeing and hearing a lot of buzz about “operational efficiency.” As some see it, operational efficiency is basically the idea of doing more with less: if you can define and follow processes, you can achieve repeatable outcomes with reduced error. Automate that, and you have a means to extend the reach of the individual IT operator while decreasing the effort and time required to build systems. It’s a straightforward value proposition that Red Hat has been touting and delivering for years with standardized operating environments (SOEs) and management tools like Red Hat Network Satellite and JBoss Operations Network.
But there’s evolution afoot here, from the classic “operational” sense to one that is more expansive and higher purposed. The basics of SOEs and management tools are now being used not only to define and develop repeatable infrastructure; they’re also being leveraged with other tools like virtualization, IaaS, and PaaS to deliver on-demand capabilities. The key is that the focus is shifting from how to get the most out of your resource investment to how to effectively and efficiently instantiate, use, and release systems for true on-demand capabilities.
Continue reading “The evolution of operational efficiency”
by Guy Martin (Red Hat)
“Open source is scary!”
“How can something ‘open’ be secure?”
“Won’t using open source in my products mean I have to give away my IP?”
These are all examples from real-world conversations with both external and internal stakeholders during my career as a developer and consultant. There are many more such examples, which I previously collected in a blog post titled Top 10 Signs Your Enterprise Doesn’t ‘Get’ Open Source. The good news is that with the emergence of Linux, Apache, JBoss, and other important open source technologies, we don’t hear these kinds of things as often. The bad news is that there are still quite a few industries and companies where these fears are the norm.
Continue reading “Keep Calm and Innersource On”
by Satish Irrinki (Red Hat)
Open source adoption within the public sector is no longer just theoretical – agencies across federal, state, and local governments have adopted open source software for a wide variety of computing tasks. In fact, new guidelines for software selection mandate that open source software be given equal consideration when making technology decisions. This is largely because intrinsic characteristics of open source software align with long-term IT adoption trends within the public sector. Of course, open source’s obvious cost savings and economic value are significant drivers of adoption as well.
The 25 Point Implementation Plan to Reform Federal IT, a guideline released by the White House, clearly focuses on driving IT strategy forward with an emphasis on open source’s intrinsic characteristics — interoperability and portability. Adopting open source software perfectly meets these goals, while fostering innovation, reducing redundancy, and providing immense economic benefit to society.
Continue reading “Open source adoption in the public sector”
by Duncan Doyle
With the growing popularity of cloud environments and cloud-like architectures, the Service Oriented Architecture (SOA) paradigm has become increasingly important. Having been the previous big buzzword in IT, the term SOA has often been used as a means to sell software products rather than to refer to an architectural style. However, to benefit most from the new possibilities in virtualization, just-in-time provisioning, and on-demand scalability, it has become a must for businesses to partition their enterprise logic and functionality into individual components that can be deployed independently in heterogeneous environments.
One of the goals of an SOA is to provide the enterprise with a set of reusable, readily available business services, and as such to reduce cost and provide greater operational agility. The autonomous nature of well-defined services makes these components perfect candidates for deployment in cloud environments. These individual services can then be combined, or composed, into business applications that provide the actual business value. The specific composition of these services in fact defines the actual business process.
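The idea that a business process is just a composition of reusable services can be sketched in a few lines. In this minimal, hypothetical example, two "services" are modeled as plain functions (in practice they would be independently deployed components); every name and rule below is invented for illustration:

```python
def credit_check_service(customer: dict) -> bool:
    """Hypothetical reusable business service: approve if the balance is non-negative."""
    return customer["balance"] >= 0


def order_service(customer: dict, item: str) -> dict:
    """Hypothetical reusable business service: record an order for a customer."""
    return {"customer": customer["name"], "item": item, "status": "placed"}


def place_order_process(customer: dict, item: str) -> dict:
    """The business process is simply a composition of the two services above."""
    if not credit_check_service(customer):
        return {"customer": customer["name"], "item": item, "status": "rejected"}
    return order_service(customer, item)


print(place_order_process({"name": "acme", "balance": 100}, "widget")["status"])  # placed
print(place_order_process({"name": "bust", "balance": -5}, "widget")["status"])   # rejected
```

Because each service stands alone, either one could be redeployed or scaled independently without changing the process that composes them.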
Continue reading “BPM: Utilizing JBoss technologies to increase business performance and agility”
by David Kang (Red Hat)
Cloud is not software, cloud is not hardware, cloud is not virtualization, and cloud is certainly not a panacea for broken IT. Cloud is an architecture: a set of fundamental tenets that have different implications at different levels of IT, from network, to hardware, to applications, and to the IT process itself. To say you have a cloud is to say that you have a cohesive architecture, technology set, and, most importantly, processes that work towards a defined goal under a set of well-understood principles. Building your cloud is as much about defining your goals and governing principles as it is about investing in technology.
Building your cloud and consuming cloud services
Step one is defining your governing principles. This is a crucial step before embarking on your cloud journey, as the policies and principles you define will help you navigate the rapidly expanding cloud ecosystem. This is also an opportunity to ask tough questions and examine what your principles and processes are, and why you have them. Process is ultimately about managing risk, so consider what risks are acceptable under your governance policies and weigh them against the potential benefits cloud can offer. Both Facebook and Google have adopted “deploy to production” models that seem to fly in the face of process conventions such as ITIL or RUP, yet somehow they survive. The penalty for not doing this exercise is ballooning adoption costs, or failed rollouts altogether.
Continue reading “Cloud Sniff Test: Cutting through the jargon”
by Larry Spangler (Red Hat)
The funny thing about people is that as much as we complain about how bad things are, there’s a natural resistance to actual change. More often than not, the changes we long for come with a good deal of anxiety and a great deal of process pain.
This week, we moved into our new space in the “Red Hat Tower” in downtown Raleigh. There was a lot of excitement leading up to this move – new offices, new space, new neighbors, new opportunities – a fresh start all around. But that was countered by an equal amount of uncertainty and anxiety – would we like the new space, would we be giving up amenities, would the new commutes be a hassle, how long would it take to be productive again?
Continue reading “A new view on migrations”
by Vinny Valdez (Red Hat)
The following is an excerpt of a post originally published on June 29, 2012, on Vinny’s Tech Garage.
I’m really excited about CloudForms. In my recorded demo at Summit, I showed a RHEL two-node active/passive cluster with GFS off an iSCSI target. Then I moved all the underlying CloudForms Cloud Engine components to shared storage. I was able to launch instances, fail over Cloud Engine, and see the correct status. After managing the instances, I failed back, and all was good. All of this works because the RHEL HA cluster stops the databases and other services first, moves the floating IP over, then starts the services on the active node. This was a very basic deployment; much more could be explored with clustered PostgreSQL and sharded Mongo.
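The ordering described above (services stop first, the floating IP moves, then services restart on the active node) falls out of how resources are nested in a RHEL HA cluster configuration: child resources start after their parent and stop before it. A hypothetical cluster.conf fragment, with node names, service names, and the address invented purely for illustration, might look something like:

```xml
<!-- Hypothetical excerpt of /etc/cluster/cluster.conf (rgmanager section) -->
<rm>
  <failoverdomains>
    <failoverdomain name="ce-domain" ordered="1" restricted="1">
      <failoverdomainnode name="node1.example.com" priority="1"/>
      <failoverdomainnode name="node2.example.com" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <!-- The floating IP is the parent resource: it starts before its child
       init scripts and stops after them, which yields the failover
       ordering described in the post. -->
  <service name="cloud-engine" domain="ce-domain" recovery="relocate">
    <ip address="192.168.1.100" monitor_link="on">
      <script name="postgresql" file="/etc/init.d/postgresql"/>
      <script name="mongod" file="/etc/init.d/mongod"/>
    </ip>
  </service>
</rm>
```

This is a sketch of the configuration shape rather than a working deployment; a real cluster would also need fencing and the full node definitions.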
Continue reading “What is CloudForms?”