The Pains of Technology Lifecycle in Enterprise Storage and What You Should be Looking For

Keeping up with technology lifecycles can be a huge waste of time and resources. 

Technology lifecycles are short. Just ask any one of the billions of consumers and organizations who use technology every day. While having to change smartphones every so often shouldn’t be a problem for the average user, it’s a different story for enterprises dealing with data and the storage where data resides.

Data is one of the most valuable assets of a business, but the hardware this data lives on is not. With the way storage media comes and goes (first tape, then disk, then flash), hardware upgrades have become a constant issue. Over time, new and improved media are introduced, each offering advantages over the previous products.

But these new products come at a premium cost, and are only “new” for a short time. Before you know it, they’ve already been commoditized and a superior technology has appeared in the market. Unless an existing product can be upgraded without business impact and high cost, managing technology lifecycles in enterprise data storage can be painful for users and expensive for their organizations.

For instance, flash is currently superior to hard disk. Flash storage comes at a cost several times greater than hard disk, and recent NAND shortages have affected its price and availability. As your business collects more data, does it make financial sense to pay the increased cost for 100% of your data, when perhaps only 20% of your data will benefit from it?

Now what about the next big thing in data storage? Is it 3D XPoint? Or perhaps ReRAM? Just as 2D NAND flash gave way to 3D NAND, advancements in storage media will keep coming. And with each advancement, the cycle begins again.

As you consider the cost and depreciation of your storage assets, you will start to see that the software portion of the storage solution plays a critical role in two things: first, how well the solution is able to adapt to changes, and second, how it can continue to provide maximum ROI over the years.

Why storage tiering makes good business sense

Storage tiering was thus born out of the realization that a storage system is needed that addresses performance and data availability, while keeping overall costs reasonable and leveraging new media.

So again, you may ask: what is it about tiering that meets the need? Why are tiers important? Why have them at all, instead of a single tier, be it flash today and maybe NVRAM tomorrow?

Two reasons: Because your budget is not limitless, and not all data is created equal!

The fact is, organizations are collecting more and more data, but not all of this data has the same access requirements. And, as we’ve established earlier, new media will come and go, with the latest product always priced the highest. But is that necessarily the best fit for your needs? Why spend on high-performance storage for data that’s not frequently used?

It’s all about value–getting the most for your money. That’s why “good enough” often makes good business sense.

Let me give you an example. Even after the airplane was invented, we continued to drive cars. Why? Because it’s not cost effective to fly everywhere.


Now let me ask you: Is it cost effective to use a single tier for all data? To answer that, let’s dive into the topic of the media and its tiers a little bit further.

Tier abstraction

Tiering is a continuum that will keep on changing and evolving for the foreseeable future, depending on the challenges of that particular time. Its most basic premise is that data lives at various distances from the processor, from processor cache to object-based cloud storage. “Hot” data used by applications that require low latency lives closer, while “cold” data that is accessed infrequently and used by apps where latency is not an issue, lives farther away. In most cases, more than 80% of the total data is considered “cold.”
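As a rough illustration of this continuum (not Reduxio’s actual placement logic), tier placement can be sketched as a mapping from access recency to tier. The tier names and thresholds below are hypothetical:

```python
import time
from dataclasses import dataclass, field

# Hypothetical tiers, ordered from closest to the processor to farthest.
TIERS = ["dram", "flash", "disk", "cloud"]

@dataclass
class Block:
    block_id: int
    last_access: float = field(default_factory=time.time)

def choose_tier(block: Block, now: float) -> str:
    """Map a block's access recency to a tier: the longer since its last
    access, the farther from the processor it can live."""
    age = now - block.last_access
    if age < 60:            # touched in the last minute -> hot
        return "dram"
    elif age < 3600:        # touched in the last hour -> warm
        return "flash"
    elif age < 86400 * 7:   # touched in the last week -> cool
        return "disk"
    return "cloud"          # anything older -> cold
```

In practice the thresholds would be driven by workload analysis rather than fixed constants, but the shape of the decision is the same.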

The concept of data placement on this continuum is a simple one, but engineering a solution for it is anything but simple. Tiering solutions today are tied to the specific medium, and are too slow to react to changes in workloads. Data analysis is scheduled at periodic intervals, and moves data in bulk based on information from 4, 8, 12 or even 24 hours in the past. This system is not only inefficient but also fails to respond to today’s dynamic workloads.

Efficient tiering must allow for medium changes, adding and removing tiers, and continuous data analysis to ensure data resides in the optimal tier. It also must be flexible in how it moves data, allowing both granular and bulk data movement between tiers. This is what Reduxio’s TierX™ offers.

Efficient movement of data is critical, as is efficient use of the storage medium. With in-line, in-memory global deduplication and compression, Reduxio assures no data block exists twice in any tier, maximizing efficiency and utilization of any media, from inexpensive external cloud storage, to local hard disks, to the most expensive “prime real estate” of flash, DRAM, NVRAM, and the like.
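To make the deduplication idea concrete, here is a minimal, generic sketch of a content-addressed block store: identical blocks hash to the same fingerprint and are stored (compressed) only once. This illustrates the general technique, not Reduxio’s implementation; all names are hypothetical:

```python
import hashlib
import zlib

class DedupStore:
    """Content-addressed store sketch: identical blocks are kept only once,
    whatever volume or tier references them, and stored compressed."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> compressed block data
        self.refcount = {}  # fingerprint -> number of logical references

    def write(self, data: bytes) -> str:
        """Return the block's fingerprint; store the data only if it is new."""
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = zlib.compress(data)  # one compressed copy
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.blocks[fp])
```

Writing the same 4KB block from ten volumes would consume the physical space of one compressed copy plus ten small references, which is where the capacity savings come from.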

Data protection considerations for storage renewals

Security breaches are an ever-present risk. That’s why conversations about data storage should be accompanied by a discussion of data protection. Any implementation of a new tier, medium, or system for data storage also needs to address the SLA requirements of the business. So let’s discuss this topic as well.


The way I see it, a couple of things can be established: Abstracting data from its physical location addresses the challenges of tiering, while abstracting data from time addresses data protection.


Now what if every block of your data had a timestamp of when it was created, down to the second? Wouldn’t that assure you of foolproof data protection?

The best part of this is that accessing data from the past is now instant and simple. Just select a point in time, click, and your volume is ready for use. Note that this is not a data snapshot.

What you would have there is a unique and independent volume that can be cloned again and again for use by different data consumers. It can even be sent to a remote location for offsite protection, or for sharing with internal customers at different locations. Maybe you have a development office in a different city? Maybe one of your support staff resides in another location and needs the data for troubleshooting? Select your time, clone, send, and access… all done within minutes.
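The timestamped-block idea can be illustrated with a small sketch (a generic model, not Reduxio’s engine; the class and method names are hypothetical): each write keeps its timestamp, and a point-in-time read reconstructs the volume as of any chosen second:

```python
class TimestampedVolume:
    """Time-addressed storage sketch: every block write keeps its
    timestamp, and a point-in-time read returns, for each block, the
    newest write at or before the requested time."""

    def __init__(self):
        self.history = {}  # block_id -> list of (timestamp, data), in write order

    def write(self, block_id: int, data: bytes, ts: float) -> None:
        self.history.setdefault(block_id, []).append((ts, data))

    def read_at(self, block_id: int, ts: float):
        """Return the block's contents as of time ts (None if it did not yet exist)."""
        best = None
        for wts, data in self.history.get(block_id, []):
            if wts <= ts:
                best = data
        return best
```

A “clone” of the volume at time T is then just a new volume whose reads go through `read_at(..., T)`; no data is copied up front.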

Meeting your RTO and RPO

When discussing data protection, talk about RPO and RTO cannot be avoided. Similar to storage media, time is a continuum as well. The closer we get to zero RPO/RTO, the higher the cost.

With Reduxio, you get one-second RPO for data on the array, built in, out of the box. RTO is as fast as you can mount a volume to your hosts. That, right there, is the very definition of peace of mind. With such speedy recovery guaranteed, there’s really nothing to worry about.

Now what about remote protection? That’s covered too. Send data to any iSCSI target, or any S3-compliant target, with an RPO as low as 5 minutes. Data is deduplicated and compressed before transfer.

What about RTO? Normally, organizations wait until 100% of the offsite data is local before starting to access it. But you’d be in for a long wait, because really, how long would it take to transfer 5TB across the internet? Let alone 10TB or 50TB? The prolonged wait is a huge waste and inconvenience, especially when your applications most likely need to access only 20% of this data.


There’s a better way than that.


Reduxio: From hours and days to seconds and minutes


With Reduxio, you can access the data immediately. All data appears as though it were local, and hot data begins transferring back to the local volumes. Any blocks not yet local are retrieved on demand. Your application is up and running in no time, and will soon have all its hot data local again, even while colder data is still being transferred.
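A minimal sketch of this fetch-on-demand restore pattern (the class and field names are hypothetical, and a plain dict stands in for the offsite copy):

```python
class LazyRestoreVolume:
    """Serve reads immediately: blocks already local come from the local
    store; missing blocks are fetched from the remote copy on demand and
    kept, so each block crosses the wire at most once."""

    def __init__(self, remote: dict):
        self.remote = remote     # block_id -> data (stand-in for the offsite copy)
        self.local = {}          # blocks restored so far
        self.remote_fetches = 0  # how many blocks had to cross the wire

    def read(self, block_id: int) -> bytes:
        if block_id not in self.local:
            self.local[block_id] = self.remote[block_id]  # on-demand pull
            self.remote_fetches += 1
        return self.local[block_id]
```

The application sees a complete volume from the first second; the cost of the transfer is paid only for the blocks it actually touches, with the rest trickling in as a background copy.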

That’s Reduxio for you: a 5-minute RPO and an RTO of minutes, provided out-of-the-box.

For a clearer picture, consider this useful analogy. Ever downloaded a product manual looking for a specific piece of information? You may only need a few sentences, but if you download the PDF file, you must wait until the entire file is downloaded before you can open and access the info you need. The same goes for your data restore process... until now.


Simple, efficient, and effective enterprise storage that delivers performance, protection, flexibility, and extensibility. This is what Reduxio guarantees. And having a product like this to offer is why I signed up to help customers solve the pains of technology lifecycles.

Written by Jeff Friedman

Jeff has been working with enterprise IT infrastructure since 1986. Working as a system administrator for small businesses and Fortune 500 companies, he has first-hand experience with the continuous challenges IT organizations face. Born and raised in the Chicago suburbs, he started his journey in the land of IBM mainframes and mid-range systems. He relocated to Seattle in 1998 and saw the rise of distributed and open systems, working with Unix and Linux. He worked for a major retailer during the dot-com days, and through the more recent explosion of data collection and analytics that has been straining data centers’ storage solutions. This led Jeff to focus on storage in a Sales Engineer role in 2014, when he joined Fusion-io, a pioneer in enterprise flash storage. Today Jeff works for another pioneer in enterprise storage, Reduxio Systems. Jeff believes in remaining objective when discussing technical solutions, keeping the focus on how technology helps the company’s bottom line. He understands the delicate balance of compressed timelines, tight budgets, and limited resources, and the importance of understanding TCA and TCO when selecting a solution.
