Historically, IT departments protected data in one of two ways: first, through backups, and second, usually in larger organizations, through highly available system components combined with operational best practices.
Snapshots enter the picture
Things started to change in the 1990s, when snapshot technology, popularized by NetApp, was developed. The ability to take snapshots blurred the line between backups and highly available systems for data recovery, and many of the more sophisticated storage products were quick to adopt it. In the event of a data problem, e.g. a virus attack, a software bug, or any event that corrupted business data, IT departments now had a tool that allowed them to go back and recover most of the lost data.
The business benefit of snapshots is easily articulated. Say, for example, that data corruption takes place at 2:37pm during the business day. With hourly snapshots, the company stands to lose only about 37 minutes of data, or 1 hour and 37 minutes at most if the latest snapshot must be skipped. Without snapshots, they could lose that entire business day, recovering only to the end of the previous business day, and that after several hours of restoring data from backups. In an organization with hundreds, if not thousands, of employees, the technology was fairly simple to justify.
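The arithmetic above can be sketched in a few lines of code. This is a back-of-the-envelope illustration only; the function name and the assumption that snapshots fire on interval boundaries (e.g. on the hour) are mine, not from any product.

```python
from datetime import datetime, timedelta

def data_loss_window(corruption_time: datetime, snapshot_interval: timedelta,
                     snapshots_back: int = 1) -> timedelta:
    """Time between the snapshot we restore from and the corruption event.

    Assumes snapshots are taken on interval boundaries, and that we may have
    to skip `snapshots_back - 1` recent snapshots if they are unusable.
    """
    # Find the most recent snapshot at or before the corruption event.
    midnight = corruption_time.replace(hour=0, minute=0, second=0, microsecond=0)
    elapsed = corruption_time - midnight
    intervals = elapsed // snapshot_interval          # whole intervals since midnight
    last_snapshot = midnight + intervals * snapshot_interval
    # Each unusable snapshot pushes the restore point back one more interval.
    return corruption_time - last_snapshot + (snapshots_back - 1) * snapshot_interval

corruption = datetime(2017, 6, 1, 14, 37)             # 2:37 pm
hourly = timedelta(hours=1)
print(data_loss_window(corruption, hourly))           # 0:37:00 -- restore from 2:00 pm
print(data_loss_window(corruption, hourly, 2))        # 1:37:00 -- 2:00 pm snapshot unusable
```

The second call shows where the "1 hour and 37 minutes at most" figure comes from: if the most recent snapshot cannot be used, the restore point moves back one full interval.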
When snapshots are no longer a “snap”
As with most things, however, data recovery did not remain that simple. Companies evolve over time, and as they do, their applications and data storage needs change as well. As a result, their numerous applications now stored data on multiple volumes, requiring more coordination to ensure that the correct application data was “snapped” at the same time.
The problem is compounded further when applications feed one another and store different information on different data stores. Take, as an example, a standalone online customer order system that uses an open source database and feeds some of the order information directly to an ERP system running on Oracle. Now the application managers, database administrators, and storage administrators need to understand what data is stored where and how the different pieces of information are tied together. They also need to know where all the various types of data are located at the moment the snapshots are taken.
Some expensive storage systems attempted to make things less complicated by introducing features like data volume consistency groups for snapshots. But even minor changes had to be painstakingly coordinated to ensure that these groups would continue to work. Clearly, a simple solution had become a cumbersome process that is difficult to manage and consumes significant resources.
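To see why consistency groups demand this coordination, consider a toy in-memory sketch (the classes here are hypothetical, not any vendor’s API): every volume in the group must be frozen before any of them is copied, otherwise a write landing between one volume’s copy and the next leaves the “snapped” volumes mutually inconsistent.

```python
import copy
import threading

class Volume:
    """Toy in-memory 'volume': a dict of blocks guarded by a lock."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.lock = threading.Lock()

    def write(self, key, value):
        with self.lock:
            self.blocks[key] = value

class ConsistencyGroup:
    """Snapshot several volumes at one logical instant by freezing them all."""
    def __init__(self, volumes):
        self.volumes = volumes

    def snapshot(self):
        # Acquire every volume's lock (in a fixed order, to avoid deadlock)
        # so no write can land while the group is being copied.
        ordered = sorted(self.volumes, key=lambda v: v.name)
        for v in ordered:
            v.lock.acquire()
        try:
            return {v.name: copy.deepcopy(v.blocks) for v in self.volumes}
        finally:
            for v in ordered:
                v.lock.release()

orders = Volume("orders")
erp = Volume("erp")
orders.write("order-1001", {"item": "widget"})
erp.write("order-1001", {"status": "received"})
snap = ConsistencyGroup([orders, erp]).snapshot()
print(sorted(snap))   # ['erp', 'orders']
```

Even in this toy, note how the group must know about every participating volume in advance; add a new volume or move data between volumes and the group definition has to be updated, which is exactly the coordination burden described above.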
Data recovery in today’s business landscape
Not surprisingly, many organizations have come to the conclusion that, in addition to being extremely resource-intensive, the use of snapshots cannot guarantee intra-day data recovery. While many large enterprises continue to spend time and resources implementing snapshot-based backups with varying degrees of success, most small to medium-sized businesses no longer bother, because they simply do not have the means, resources, or skills to keep up.
Organizations big and small today have to constantly deal with the increasing risk of data corruption and data loss due to issues like application bugs and database crashes, or security threats from malware such as viruses and ransomware. Because of this, the importance of being able to recover intra-day data quickly and efficiently cannot be overemphasized. Businesses that have the right systems in place stand a better chance of remaining competitive.
A viable alternative for point-in-time data recovery
Aside from snapshot technology, numerous other processes and technologies have been developed over the years to address the problem of data loss. But one innovative solution appears to have “cracked the code”, and it comes from Reduxio.
Reduxio’s storage system ensures efficient data recovery by tracking write data by time. This allows Reduxio’s BackDating™ to present any and all data volumes stored in the system as of a specific second.
Think about that...
Everything is done automatically by the storage system; no one needs to coordinate and map data for recovery between application groups, database administrators, and IT infrastructure personnel. If there’s a problem, just roll back to a second or two before it occurred and continue. If there’s a ransomware attack, for instance, go back to the second before, remove the malware, and carry on. The same holds true for a database problem: restart the database from a second or two before the failure and carry on!
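Reduxio has not published its internals, but the general idea of indexing writes by time can be sketched with a toy log (illustrative only; the class and method names are invented for this example, and a real system would of course work at the block level, not with a Python list).

```python
class TimeIndexedVolume:
    """Toy volume that records every write with a timestamp, so the state
    as of any past second can be reconstructed by replaying the log."""

    def __init__(self):
        self.log = []  # (timestamp, key, value), appended in time order

    def write(self, ts, key, value):
        self.log.append((ts, key, value))

    def as_of(self, ts):
        """Replay all writes up to and including second `ts`."""
        state = {}
        for wts, key, value in self.log:
            if wts > ts:
                break  # log is in time order; everything later is ignored
            state[key] = value
        return state

vol = TimeIndexedVolume()
vol.write(100, "file.txt", "hello")
vol.write(200, "file.txt", "ENCRYPTED-BY-RANSOMWARE")
print(vol.as_of(199))   # {'file.txt': 'hello'} -- one second before the attack
print(vol.as_of(200))   # {'file.txt': 'ENCRYPTED-BY-RANSOMWARE'}
```

Because every volume shares the same clock, “roll back to the second before the problem” needs no per-application coordination: each volume simply answers the same `as_of` question.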
It’s really that simple.
And that simplicity is where the beauty of the technology lies. BackDating can be used by organizations of any size. It allows IT departments to deliver the value the business needs while giving them more time to improve IT services. No more spending inordinate amounts of time on unnecessarily complex IT processes such as implementing snapshots and managing consistency groups.
To me, that simplicity is a game changer.