by Rob Locke (Red Hat)
One of the new features introduced in version 3.1 of Red Hat Enterprise Virtualization is a command line interface (CLI) to connect to the manager. The CLI also contains a scripting system, which helps system administrators perform periodic maintenance or repetitive tasks on their virtualization environment.
Communication with the RHEV Manager is secured with a CA certificate, which must first be downloaded from the manager:
$ wget http://rhevm.pod0.example.com/ca.crt
Connect to the RHEV Manager using the rhevm-shell command (referring to the downloaded certificate):
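A typical invocation might look like the sketch below. The URL, port, and credentials are illustrative for this example environment, and option names can vary between versions, so check `rhevm-shell --help` on your system:

```shell
# Connect to the RHEV Manager REST API, referencing the downloaded
# CA certificate. Hostname and user are examples, not fixed values;
# verify the exact option letters against your installed version.
rhevm-shell -c \
    -l "https://rhevm.pod0.example.com/api" \
    -u "admin@internal" \
    -A ca.crt
```

Once connected, the shell drops you at an interactive prompt where commands such as `list vms` operate against the manager, and the scripting system mentioned above can batch those commands for repetitive tasks.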
Continue reading “Using the command-line interface of Red Hat Enterprise Virtualization 3.1”
by David Kang (Red Hat)
Cloud is not software, cloud is not hardware, cloud is not virtualization, and cloud is certainly not a panacea for broken IT. Cloud is an architecture: a set of fundamental tenets that have different implications at different levels of IT, from network, to hardware, to applications, and to the IT process itself. To say you have a cloud is to say that you have a cohesive architecture, technology set, and most importantly processes, that work towards a defined goal under a set of well-understood principles. Building your cloud is as much about defining your goals and governing principles as it is about investing in technology.
Building your cloud and consuming cloud services
Step one is defining your governing principles. This is a crucial step before embarking on your cloud journey, as the policies and principles you define will help you navigate the rapidly expanding cloud ecosystem. This is also an opportunity to ask tough questions and examine what your principles and processes are, and why you have them. Process is ultimately about managing risk, so consider what risks are acceptable under your governance policies and weigh them against the potential benefits cloud can offer. Both Facebook and Google have adopted “deploy to production” models that seem to fly in the face of process conventions such as ITIL or RUP, yet somehow they seem to survive. The penalty for not doing this exercise is ballooning adoption costs, or failed rollouts altogether.
Continue reading “Cloud Sniff Test: Cutting through the jargon”
by Vinny Valdez (Red Hat)
The following is an excerpt of a post originally published on June 29, 2012, on Vinny’s Tech Garage.
I’m really excited about CloudForms. In my recorded demo at Summit, I showed a RHEL 2-node active/passive cluster with GFS off an iSCSI target. Then I moved all the underlying CloudForms Cloud Engine components to shared storage. I was able to launch instances, fail over Cloud Engine, and see the correct status. After managing the instances, I failed back, and all was good. All of this works because the RHEL HA cluster stops the databases and other services first, moves the floating IP over, then starts the services on the active node. This was a very basic deployment; much more could be explored with clustered PostgreSQL and sharded Mongo.
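The ordering described above (stop services, move the floating IP, start on the newly active node) is what an rgmanager service group provides in a RHEL HA cluster: child resources start top-down and stop in reverse order, so the floating IP comes up before the services that depend on it. A minimal `cluster.conf` fragment might look like the sketch below; the IP address, node names, and init scripts are illustrative assumptions, not taken from the demo:

```xml
<!-- Sketch of an rgmanager service for an active/passive setup.
     Resources start in listed order and stop in reverse, so on
     relocation the services stop before the floating IP moves. -->
<rm>
  <failoverdomains>
    <failoverdomain name="ce-domain" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="cloud-engine" domain="ce-domain" recovery="relocate">
    <!-- Example floating IP -->
    <ip address="192.168.0.100" monitor_link="1"/>
    <!-- Example service scripts; actual Cloud Engine services differ -->
    <script file="/etc/init.d/postgresql" name="postgresql"/>
    <script file="/etc/init.d/mongod" name="mongod"/>
  </service>
</rm>
```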
Continue reading “What is CloudForms?”
by Pete Hnath (Red Hat)
Innovate or die. It’s the essence of what successful companies do, especially in the tech space. At Red Hat, there is ongoing innovation in every dimension of the business, with new products like CloudForms, new infrastructure like the Customer Portal, and new metrics like Net Promoter.
The Curriculum team is similarly pushing to innovate with our course offerings and course delivery. In the last year we’ve completely changed the way Red Hat courses are taught to ensure the most hands-on experience possible. Gone are hour-long, death-by-slide lectures. Students are actively engaged through multiple teaching approaches and near-continuous labs focused on solving problems rather than on tools and technologies. Instructors are now armed with comprehensive guides on how best to teach each topic, resulting in across-the-board consistency and a better student learning environment.
Continue reading “What’s new with Red Hat Training courses”
by Sean Thompson (Red Hat)
As technology consultants, we’re typically brought in by a customer to help them get somewhere specific they can’t reach on their own because of resources, skills, time, or a host of other reasons. One of the things I’m most surprised by during these engagements, however, is how many IT organizations know where they want to go, but don’t necessarily know where they are, or swear they are somewhere else. The knowledge they have about their infrastructure and what’s going on in their datacenter right now is extremely limited, or at best stale due to a lack of real-time data.
The value of understanding where you are today is immense, and it is an important first step in realizing your IT goals and moving toward your ideal datacenter. Knowing where you stand and having a clear map of your current environment shines a light on opportunities to become leaner, to improve performance and automation, and to drive efficiency. The benchmarks you’ll create will let you conduct TCO/ROI calculations that actually mean something, making it clear how to become more agile and more responsive to the business.
Continue reading “Understanding where you are today: Assessing the current state of your datacenter”