The Recoverability Factor: Four Key Trends in Data Recovery

The scale, frequency and cost of cyberattacks are well documented. But what’s often overlooked in a seemingly never-ending cycle of prevention and protection are the nuances of what we call the ‘recoverability factor’: a company’s readiness to respond to and recover from a major data attack or other disaster.

Knowing that at some point your data – and potentially your whole business – will be threatened, the focus shifts from security and prevention to recovery. The operational, financial and reputational implications can be catastrophic, so the priority becomes reducing the amount of data impacted and the time it takes to restore operational status.

But the recovery phase can be chaotic and stressful, and there are no second chances to conceive a new disaster recovery strategy. Organisations must execute on the plan they have put in place; this is not the time to discover shortcomings or failings. In some cases, experienced technical staff can work around a flawed or poorly thought-out plan, but it isn’t something they have trained for – and, to be fair, shouldn’t have to do.

An organisation’s ability to recover systems and data is non-negotiable. There is no room for doubt – and if there is, any uncertainty needs to be identified and addressed before disaster strikes. Yet in a recent Assurestor survey, we discovered that, far from being fully prepared, many senior IT professionals are not fully confident in their data recovery capabilities.

Here we look at some of the key trends coming out of the data.

Lack of confidence is an issue

The vast majority (78%) of our survey respondents admitted they had suffered data loss due to system failure, human error or a cyberattack at least once in the past 12 months. Yet only just over half (54%) are confident they could recover their data and mitigate downtime in a future disaster.

The fact that only just over half think their data is recoverable is concerning. How can your readiness to recover be reported confidently to the business and to senior stakeholders? Confidence comes from identifying an organisation’s realistic needs, not compromising on cost, and making sure you have the right tools for the job.

Data recovery on the business ‘fitness agenda’

Survey respondents were clear in what they are lacking from the business in terms of disaster recovery planning, with 39% pointing to a lack of skills or expertise in-house, 29% to a lack of investment or budget, and 28% to a lack of senior support.

Recoverability is no longer a choice; it must be part of a company’s fitness agenda. Support from the top down is critical, as is sufficient funding, to avoid fostering a culture of complacency. If those tasked with protecting the business in the event of system failure, a cyberattack or human error do not feel that threats are taken seriously enough, their approach and attitude may well reflect this.

A thorough testing regime brings the confidence to report that systems are recoverable and the business is ready to respond. It also fosters a culture of professionalism around an aspect of IT that often sits in the shadows – until it’s needed.

The testing ‘gold standard’

Thoroughly and repeatedly testing systems and disaster recovery processes is non-negotiable. Yet one in five senior IT professionals say they test just once a year or less, while 60% of respondents check that their data is fully recoverable and usable only once every six months.

The testing ‘gold standard’ is twice-yearly, non-invasive full failover tests, supported by monthly system boot tests and data integrity checks. Alongside rigorous data validation, testing the failover capability of workloads (applications and data) should be baked into the recovery plan. This should also allow for network and connectivity testing, an often-overlooked component of the testing process.
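To make the monthly data integrity check concrete, here is a minimal sketch in Python. It assumes backups land in a single directory and that a checksum manifest is written at backup time; the paths and file names are hypothetical and would need adapting to your own backup tooling.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical locations: adjust to your own backup layout.
BACKUP_DIR = Path("/backups/nightly")
MANIFEST = BACKUP_DIR / "manifest.json"  # {"filename": "sha256hex", ...}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large backup images don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups() -> list[str]:
    """Return a list of problems; an empty list means the check passed."""
    expected = json.loads(MANIFEST.read_text())
    problems = []
    for name, recorded_hash in expected.items():
        target = BACKUP_DIR / name
        if not target.exists():
            problems.append(f"missing: {name}")
        elif sha256_of(target) != recorded_hash:
            problems.append(f"corrupt: {name}")
    return problems

if __name__ == "__main__":
    issues = verify_backups()
    if issues:
        raise SystemExit("Integrity check FAILED:\n" + "\n".join(issues))
    print("All backup files match their recorded checksums.")
```

A check like this only proves the backup files are intact, not that they will boot or fail over; it complements, rather than replaces, the full failover and system boot tests described above.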

The challenge is that many technologies deployed to recover systems and data do not allow for non-disruptive testing. Testing can still be carried out, but without causing significant disruption it can never be thorough enough, and the result is a compromised test. Organisations need to put in place a well-structured recovery environment that optimises data recovery testing and ensures it can be conducted in the least disruptive way for the business.

Fail to plan, plan to fail

Two-thirds of respondents said they review and update their disaster recovery plans at least every six months. But there’s a risk that this could fall down the priority list. Disaster recovery and data backup is a priority that all business functions should push for, and plans should be adapted to meet any newly identified requirements that emerge from frequent recovery testing.

As part of this planning process, you should ask two other important questions. First, what constitutes a ‘disaster’ today? The traditional image of fire, flood and acts of God is outdated; the increasing threat and sophistication of cyberattacks is the new reality. Second, how long can you afford to be down? Can you afford to lose any data without significant impact? Do the maths on what just one hour of downtime would cost, as in the rough calculation below. Without this visibility, your recovery plan may be flawed.
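A back-of-the-envelope model is enough to start that conversation. The sketch below uses entirely placeholder figures – revenue per hour, idle headcount, loaded staff cost and recovery overheads are all assumptions to be replaced with your own numbers – and deliberately ignores harder-to-quantify costs such as reputational damage and contractual penalties.

```python
# Placeholder inputs: substitute your organisation's own figures.
hourly_revenue = 12_000      # average revenue earned per hour of trading
staff_idled = 40             # employees unable to work during an outage
loaded_hourly_rate = 45      # fully loaded cost per employee per hour
recovery_overhead = 2_500    # estimated per-hour cost of the recovery effort

cost_per_hour = (
    hourly_revenue                       # revenue lost outright
    + staff_idled * loaded_hourly_rate   # payroll spent on idle staff
    + recovery_overhead                  # extra spend on recovery itself
)

print(f"Estimated cost of one hour of downtime: £{cost_per_hour:,.0f}")
# -> Estimated cost of one hour of downtime: £16,300
```

Even with placeholder figures, putting a number on a single hour of downtime makes the case for recovery investment visible to the stakeholders who set the budget.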

About the Author

Stephen Young, Executive Director at Assurestor, is a seasoned business owner and entrepreneur. Innovation in technology has been central to Stephen’s career for over 30 years.

Across varying facets of IT, Stephen’s experience covers infrastructure, software development, datacentres, service and support, and IT governance, combined with management, finance and business development. With roots in software development and service and support, Stephen’s commitment to detail, thoroughness and uncompromising customer support has been a continuous thread through his businesses and a major factor in their success.

Stephen can be reached online at https://www.linkedin.com/in/stephenyoung996/ and at our company website: https://www.assurestor.com/
