27 Nov 2019
Organisations invest large sums in backing up their data and then storing it – sometimes indefinitely – but often find that backups fail when they are really needed. So what’s the real issue here?
Nearly all organisations have had a solid, tried-and-(sadly, often not fully)-tested backup solution in place for years. This involved (and in some places still does) tapes upon tapes of data, stored somewhere off-site in case it is ever needed. We now have easier options available, such as disk-to-disk backups and cloud storage, which simplify the backup process and can even be paid for on a subscription basis – little to no upfront investment and a pay-as-you-go model.
So why are organisations so loath to trust their backups, even with so much experience of taking them and so many stories of lost live data?
For years, Boards have been used to spending a proportion of the IT budget on backing up the organisation’s data and storing it in a secure off-site facility, but how confident are they in this whole process?
Organisations back up pretty much all of their data and systems on a regular basis. But is a restore tested as regularly? It only takes a minute to look at the news: when organisations have been hit with ransomware, many have chosen to rebuild from scratch rather than use the backups they have taken!
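If restores are never exercised, automating a basic restore test is a good first step. As a minimal sketch (the archive format, paths and checks here are illustrative assumptions, not a prescription), a scheduled job could restore each backup into a scratch area and compare checksums against the live copy:

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path


def checksums(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def verify_restore(backup: Path, original: Path) -> bool:
    """Restore the backup into a throwaway directory and confirm that
    every file matches the live copy, byte for byte."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(backup) as tar:
            tar.extractall(scratch)
        return checksums(Path(scratch)) == checksums(original)
```

The point of a job like this is the report it produces: “restore verified” is a far stronger line on a Board dashboard than “backup completed”.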
The feeling seems to be that simply ensuring data is backed up is enough. IT and Board reporting only seem to show successful backups, not successful restores. Maybe this is partly down to the name ‘Backup’. My view is that we should rebrand backups as something more in keeping with what organisations really need, like ‘Restorative Services’.
By giving backups a rebrand and a name that says exactly what they are for, people’s minds will be focused not just on getting the job done, but on why we do it and why it is really needed. This should raise the importance and profile of restoring data, which should be the backbone of any business continuity and disaster recovery plans. All three share the same core principles and complement each other, as they have the same ultimate goal: to allow the organisation to continue operating effectively and without interruption. That is why I believe they should be combined into a single restorative services function.
I have seen many organisations (and worked for some!) that, when faced with failing over to a disaster recovery site (the backup datacentre), would not do it because they were afraid to. They were not confident it would work as expected, or they feared that failing over and back would take longer than fixing the problem in place. Neither of these excuses should even be possible! It begs the question: why pay all that money and expend all that effort if you are not going to use your backups? Making sure that you can restore your backups is the first step towards restorative services in action.
It may take time for IT staff to grasp the subtle difference here; I am sure they all feel this is what they already do. Often, in practice, it isn’t. Once IT believes in the difference and the importance of restorative services, we can move on to convincing the Board. Although this sounds like a dramatic change, it is not – these are tweaks and improvements to existing processes that will greatly enhance the capability, rather than leave it the dead end it currently appears to be.
There will be an increase in costs, but there are also ways to save money here, especially if you can combine backup, DR and BC into one whole restoration function.
Another potential cost saving could come from how you invoke DR. Many larger organisations run expensive backup/secondary datacentres – imagine if these could be migrated to the cloud. There would be no need to buy all that extra rack space, all those hardware and software licences, or to fund the cooling and the people needed to manage it all! Put it in the cloud: replicate your servers and devices but leave them powered down, and just keep the data replicating (leaving it dormant where you can).
Whilst these devices are powered down, they are a fraction of the cost but can still be brought back online when the occasion calls for it – surely this is a far simpler DR model?
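To make that concrete, here is a rough sketch of what invoking this dormant-replica model might look like. Everything in it – the replica names, the boot ordering and the recovery point objective (RPO) check – is an assumption for illustration, not a feature of any particular cloud platform: before powering anything on, confirm the replicated data is fresh enough, then bring servers up in dependency order.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Replica:
    name: str
    boot_order: int            # lower starts first, e.g. database before app tier
    last_replicated: datetime  # when this replica's data was last synced


def dr_start_plan(replicas: list[Replica], rpo: timedelta,
                  now: datetime) -> list[str]:
    """Return the ordered power-on list for a DR invocation, refusing to
    proceed if any replica's data is staler than the RPO allows."""
    stale = [r.name for r in replicas if now - r.last_replicated > rpo]
    if stale:
        raise RuntimeError("RPO breached for: " + ", ".join(stale))
    return [r.name for r in sorted(replicas, key=lambda r: r.boot_order)]
```

A check like this turns “are we confident enough to fail over?” from a gut feeling into a yes/no answer, which is exactly the confidence gap described above.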
In short, I don’t believe that the backup is dead, it just needs a makeover and some new friends for an entirely new lease of life!