If you have been thinking that software-defined storage (SDS) can only be used in an OpenStack environment, think again. Storage for OpenStack represents only a tiny fraction of the multi-billion dollar enterprise and cloud storage market. Industries of every kind, from government and defense to satellite imagery, medical imaging, and media, have incredibly large storage needs that grow dramatically every year.
Before the arrival of SDS, the only choice organizations had was to invest in expensive storage appliances, and to keep investing in more and more of them whenever they needed to accommodate exploding data storage needs.
The Appliance-Maker’s Workshop
For manufacturers of these appliances this was almost like a license to print money. Even today, they continue to reap the benefits of earlier thinking, which held that it made the most sense to bake the “secret sauce” right into the appliance itself. That is, the intelligence that ran the device was hard-coded directly into it. This eliminated the need to run any other software on other devices such as separate servers. To be fair, this pricey coupling of hardware and software does have some advantages: users get a turn-key solution from a single vendor, and they are spared from dealing with the inherent complexity.
However, eliminating the “need” to run separate software on separate servers also eliminated the advantages of doing so.
With the software locked inside the appliance, users had to wait for the manufacturer to release upgrades that could be uploaded into the device. They couldn’t begin to think about adding functionality, lowering costs by choosing slower components, or tuning performance for different workloads. They were locked into the limited set of ready-made options in their appliance manufacturer’s catalog. Many users felt held hostage, frustrated by the prospect of paying ever-increasing dues to the purveyors of their storage silos.
The arrival of SDS reversed the original thinking and detached the “secret sauce” from the device, placing it instead in projects like Ceph — software that runs on commodity servers and uses standard storage hardware to build storage clusters that can scale out to mammoth petabyte scale without performance degradation. As you scale, you not only add more capacity, you also add more processing power to manage it. Not only is this far more flexible, it is also far more cost-efficient.
Ceph Storage clusters can span multiple geographic sites, using sophisticated replication to build in tremendous resilience. If a node in the cluster is lost, the others can reconstruct all the data from replicas located throughout the other cluster nodes.
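To give a flavor of how directly that replication is in the operator’s hands, here is a minimal ceph.conf sketch (the values are purely illustrative, not a recommendation):

```ini
# ceph.conf fragment -- illustrative sketch only; appropriate values
# depend on your cluster's size, failure domains, and durability goals
[global]
# Keep three copies of every object, placed on different nodes
osd_pool_default_size = 3
# Keep serving I/O as long as at least two copies remain available
osd_pool_default_min_size = 2
```

With settings like these, losing a node leaves the remaining replicas intact, and the cluster re-creates the missing copies on surviving nodes automatically.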
With Great Power Comes Great Responsibility
It is important to recognize just how much power and flexibility SDS puts back into the hands of storage architects and administrators. What was previously the exclusive realm of a handful of dedicated product designers working for the appliance vendors is suddenly in the hands of end users. (That is why some refer to this as the “democratization of storage.”) But this also presents a challenge: SDS platforms like Ceph have hundreds of configuration settings, which provide granular control over performance, capacity, throughput, and more. So much capability can never be entirely user-friendly; it requires far more knowledge to operate properly, and there are no “auto-adjust” wizards in play. The power and control available in SDS can be very beneficial if you know what you’re doing. It can be catastrophic if you do not.
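As a small taste of those hundreds of settings, consider a hedged ceph.conf sketch touching just recovery throttling and client-side caching (the values shown are placeholders; the right ones depend entirely on your hardware and workload):

```ini
# ceph.conf fragment -- a handful of the many tunables, for illustration only
[osd]
# Throttle backfill and recovery so rebuilds don't starve client I/O
osd_max_backfills = 1
osd_recovery_max_active = 3

[client]
# RBD client cache trades memory for lower read/write latency
rbd_cache = true
rbd_cache_size = 33554432
```

Each of these knobs interacts with the others and with the underlying hardware, which is exactly why expertise matters: a value that is safe on one cluster can cripple another.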
Users who attempt to simply download the software, load it onto servers, and figure it out without assistance generally spend months reading the documentation and preparing to deploy. Others prefer a trial-and-error method and jump right in. Most achieve more frustration than anything else. Unless you are an expert, these tools cannot be taken up lightly.
Leverage Experience & Expertise
Those wishing to achieve a more productive storage environment in a reasonable timeframe choose instead to engage experts who have deep experience deploying SDS solutions and deep familiarity with the technologies involved. These consultative experts help architect and design each solution to closely match the needs of an organization’s particular use cases.
This is why companies like Red Hat have become increasingly important for SDS adoption: they provide viable frameworks and accountability that enable customers to readily access the expertise they need to take fullest advantage of the technology. Their key yardstick of success? An uneventful deployment, where everything goes as planned.
Consulting cannot be the only pillar of the strategy, however. While it is a good way to get started, it does not address the ongoing operational needs of an SDS storage cluster. Those charged with operating an SDS system must know what they are doing, and the fastest way to achieve that knowledge transfer is through training.
Even trained individuals will occasionally require support on more sophisticated issues. And sometimes even a mature storage platform may exhibit defects and require someone to change the product itself. Here again an organization like Red Hat provides the ongoing support to make it all work consistently.
The old proverb tells us that if you give someone a fish they will eat for a day, but if you teach them to fish they will eat forever. Red Hat’s purpose is to bring you to the pond, teach you to fish, and help you keep baiting your hook whenever necessary.
Storage manufacturers have been “dumbing-down” storage for decades. Now that others have opened it back up, it’s time to learn how to take advantage of all you’ve been missing.
Connect with Red Hat Services
Learn more about Red Hat Consulting
Learn more about Red Hat Training
Learn more about Red Hat Certification
Subscribe to the Training Newsletter
Follow Red Hat Services on Twitter