The Hot Cloud Storage Guide to Backup and Recovery
Hot cloud storage makes backed-up data instantly available if the worst should happen.
Why the “hot cloud storage” guide to backup and recovery?
Actually, you may be wondering why a “storage” guide at all. After all, isn’t storage a commodity? “Can’t I just pick the lowest-cost option and be done with it?” Sadly, it’s not that easy. There are many different storage options and countless vendors, and they don’t make figuring out your true total cost of ownership easy. Sure, the cost per gigabyte continues to fall, but complexity (and obfuscation) is on the rise. Massive data growth is outstripping on-premises storage’s ability to scale, and faster hardware obsolescence leaves IT teams in a constant state of worry about rising costs and compatibility issues with existing equipment.
Cloud storage was supposed to be the answer, but adopters of first-generation cloud solutions are encountering similar challenges with rising costs and complexity. Multiple tiers of object storage services (AWS alone has six different storage offerings), each with its own cost structure, restrictions, and transaction fees, force IT teams to establish complicated data lifecycle plans and implement automation software to move data from tier to tier in an effort to control costs. Even then, there is still no reliable way to predict the total cost of storage.
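To give a concrete sense of what those lifecycle plans look like in practice, here is a minimal sketch of an AWS S3 lifecycle rule set applied with boto3. The bucket name, prefix, and day thresholds are hypothetical examples; real policies tend to grow far more elaborate once retrieval fees, minimum-storage-duration charges, and per-request costs are factored in.

```python
# A minimal sketch (not a recommendation) of the kind of tiering automation
# first-generation cloud storage pushes onto IT teams. The bucket name,
# prefix, and day thresholds below are hypothetical examples.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Each transition moves objects to a cheaper tier that carries
                # its own retrieval fees and minimum storage duration.
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Under this example policy, objects are deleted after a year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

Even this toy policy encodes assumptions about future access patterns; get them wrong and retrieval fees or early-deletion charges can erase the savings the tiering was meant to capture.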
We agree with you. Storage should be a commodity: a standards-based utility, like electricity, with one level of service and one low usage-based fee. Use it how you want and pay only for what you use. That is the thinking behind hot cloud storage, the next generation of cloud storage technology driving the Cloud 2.0 revolution, and it can help solve many of the backup and recovery challenges described in this guide.
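By way of contrast, the back-of-the-envelope sketch below compares a multi-tier cost model (storage per tier, plus egress, plus per-request fees) with a single flat usage-based fee. Every rate is a placeholder chosen for illustration, not a quote from any vendor’s price list; the point is simply that the flat model reduces the estimate to one multiplication.

```python
# Back-of-the-envelope cost comparison. Every rate below is a placeholder
# chosen for illustration only, not an actual vendor price.

def tiered_monthly_cost(gb_hot, gb_cool, gb_archive, egress_gb, api_requests):
    """Multi-tier model: per-tier storage + egress + per-request fees."""
    storage = gb_hot * 0.023 + gb_cool * 0.0125 + gb_archive * 0.004
    egress = egress_gb * 0.09                  # data transfer out, per GB
    requests = (api_requests / 1_000) * 0.005  # fee per 1,000 API requests
    return storage + egress + requests

def flat_monthly_cost(total_gb, rate_per_gb=0.007):
    """Flat usage-based model: one rate, no egress or request fees."""
    return total_gb * rate_per_gb

# Example: 50 TB of backups split across tiers, with monthly restore tests.
print(f"Tiered: ${tiered_monthly_cost(20_000, 20_000, 10_000, 5_000, 2_000_000):,.2f}")
print(f"Flat:   ${flat_monthly_cost(50_000):,.2f}")
```

Notice that producing the tiered estimate requires knowing in advance how much data will sit in each tier and how often it will be read back, which is exactly the information that is hardest to pin down for backup and recovery workloads.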
Introduction
In today’s digital world, losing access to your data is the fastest route to failure. Data loss is not only one of the most common causes of business disruption, it’s also one of the costliest. A recent study by Verizon reported that small instances of data loss (approximately 100 lost or compromised records) can cost businesses an average of $18,120 to $35,730, depending on the size of the company and the value of the data. The same study found that large-scale data loss (100+ million records) costs between $5 million and $15.6 million. Some businesses never bounce back from data loss of this magnitude. It is estimated that 1 in 5 small businesses has been forced to close its doors due to data loss caused by a ransomware attack.
It’s no wonder business continuity and data backup and recovery are high on the list of IT priorities. If you’ve downloaded this guide, you’re no doubt interested in replacing or upgrading your existing backup and recovery solution. Like many organizations, you may be considering migrating to a third-party cloud backup provider or planning a DIY approach using one of the major cloud object storage vendors. Perhaps you’ve invested heavily in on-premises storage and are looking at the cloud as a second- or third-copy option for ensuring the continuity and resiliency of your business.
Wherever you are in your journey, this guide will help you better understand the latest trends and options for backup and recovery storage, and the pros and pitfalls of each of the storage types available. We’ll explore the evolving and growing use cases for cloud storage, and help you determine which cloud is right for you.
The current state of backup and recovery
Cloud computing. A.I. Blockchain. Cybersecurity. Data analytics. We don’t need to remind you of the evolving digital landscape that is transforming businesses and IT departments everywhere. You’re living it every day. Yet for all these advancements in IT, many companies are waking up to the fact that their traditional approaches to backup and recovery are not keeping up. Here are just a few of the reasons why:
Growing data deluge
Businesses are creating, consuming, and storing more data than ever before—not just human-generated, but machine-generated data from sensors, devices, and A.I. algorithms distributed across the enterprise. IT departments all over the world are faced with managing not just hundreds of terabytes (TB), but petabytes (PB) of data. And the data tsunami is accelerating (see Figure 1).