Overcoming the challenges of back-up and storage

Feb 19, 2018

We speak to Banking Technology Magazine about the solutions available to resolve back-up and storage challenges.

 


A gambling and gaming company has achieved 75% cost savings with Amazon Web Services (AWS). The return on investment (ROI) it has achieved is impressive and, more importantly, it can be replicated by banks and other financial services organisations at a time when the European Union’s General Data Protection Regulation (GDPR) is just around the corner, coming into force in May 2018.

So, now is a good time for banks to audit their back-up and storage to achieve both cost savings and regulatory compliance.

The other key challenges include:

  • data locality;
  • bandwidth and the rate of data change that needs to be replicated to a remote site hosting the cloud;
  • privacy.

The gambling and gaming company keeps some of its data on-site, while the rest resides in the cloud. To improve the speed at which it can back up and restore that data, the firm has used data acceleration to cut its back-up times. The less time it takes to back up data, the more it can save financially, despite growing data volumes. The larger the data volume, the harder companies, including banks, find it to move data to and from the cloud.

David Trossell, CEO and CTO of data acceleration company Bridgeworks, explains: “The rush to put everything in the cloud and run the organisation from there has had an impact on internal service-level agreements (SLAs). The gaming company is a case in point. After migrating everything to the cloud, the response times for HQ staff accessing the database in the cloud became unacceptable: this is purely down to the time it takes to get from the HQ to the cloud, a function of the speed of light.

“This has been the experience of many cloud-only strategies where databases have been involved. It forced the pendulum back to what is now a more acceptable model of a hybrid cloud strategy, where the critical data stays on-premise while the non-critical data, along with Backup-as-a-Service (BaaS) and Disaster-Recovery-as-a-Service (DRaaS), resides in the cloud.”

So, unlike WAN optimisation, which can’t handle encrypted data, WAN and data acceleration optimise the velocity of data transfers. Data acceleration also mitigates the impact of data and network latency, which can even have a negative impact on DRaaS. Beyond data acceleration, the trouble is that there is no efficient traditional way of moving the data around, and the options are often limited for customers.

Cloudify things

Anjan Srinivas, senior director of product management at Nutanix, comments: “Before you start to structure your IT to cloudify things, it is important to first understand a few things that form your service delivery – current cost structure, current application architecture, uptime goals, regulatory compliance and inefficiencies that one wants to overcome. The cloud does not provide a singular fix for all problems.”

He has a point, and cloud computing isn’t right for everyone either. However, he thinks that the cloud offers some great benefits, such as making IT-as-a-Service (ITaaS) a reality. “In addition, cloud enables fractional consumption and billing”, he says. He then suggests that, if your bank or financial services organisation runs in a “highly available form, with DR data centres being maintained to provide high uptime, cloud DRaaS services can be a very attractive way of achieving the same, or higher availability, without the cost of maintaining another data centre and its associated costs.”

Srinivas suggests the “same holds true for services like back-up and archival” before adding: “banks have to deal with the very important dimension of regulatory compliance, especially when it comes to customer data. There are also geographical restrictions on data locality associated with companies in the financial services domain.”

In his view, financial services should therefore:

  1. “Look at moving to a software-defined, cloud architecture on premise. This will allow predictable applications that need long continuous runs to be managed in a cost-effective manner, as if they were running in the cloud. This will also eliminate any concerns around regulation and control.”
  2. “Identify applications/data that are usually more cost effective in the cloud. DR as a service is a great example – it is for those rainy days when your data centre is down, but without the need to spend Capex and Opex on an ongoing basis.”

Encryption and security

Trossell adds that many smaller financial services companies have low data requirements, but “they would like to utilise the benefits that the cloud can bring”. However, he advises that “the data transfer needs to be encrypted and cloud security measures have to be in place to ensure that there won’t be any extra requirements for them to implement. But when we are talking about some of the large organisations in the financial service market, this is a whole different ball game.”

He continues: “Transferring large amounts of data to the cloud, either as part of the initial seeding of the cloud or when utilising the cloud for archive, BaaS or DRaaS, is a different matter.” Many organisations fail to comprehend that, even with a high-bandwidth WAN connection to the cloud, latency and packet loss can severely affect the performance of the connection, he notes. This then becomes a risk issue.
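To see why bandwidth alone does not solve this, consider the rough, textbook limits on a single TCP stream: throughput is capped by the window size divided by the round-trip time, and the well-known Mathis approximation caps it further once packets are lost. The sketch below is a simplified model, not any vendor’s figures; the window, MSS and loss values are illustrative assumptions.

# Rough illustration of why latency and packet loss, not just bandwidth,
# bound the throughput of a single TCP stream.
import math

def window_limited_mbps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling when limited by the TCP window: window / RTT."""
    return window_bytes * 8 / rtt_s / 1e6

def mathis_limited_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e6

if __name__ == "__main__":
    window = 64 * 1024          # a common default TCP window of 64 KB
    mss = 1460                  # typical Ethernet MSS in bytes
    for rtt_ms in (1, 20, 80):  # LAN, regional WAN, intercontinental-ish
        rtt = rtt_ms / 1000
        print(f"RTT {rtt_ms:>3} ms: "
              f"window-limited ~{window_limited_mbps(window, rtt):7.1f} Mbit/s, "
              f"with 0.1% loss ~{mathis_limited_mbps(mss, rtt, 0.001):7.1f} Mbit/s")

At an 80 ms round trip, a single default-window stream struggles to reach even a few megabits per second, regardless of how much bandwidth has been purchased.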

Cloud provider location

“So, do you select a cloud provider that is close to you to reduce the latency? You could suffer the same outage or denial of access because you are located within the same circle of disruption. Or do you suffer a severe performance drop-off across the WAN?” These are questions Srinivas thinks should be closely scrutinised.

“Everything we do is about two things: cost and efficiency. It will differ from financial institution to financial institution as to why they choose to use the cloud model. The mileage gained will also vary from enterprise to enterprise.”

Trossell adds: “As with many new technology ideas, there tends to be a swing all the way in one direction, only to realise that the marketing promises are not borne out or that a fundamental aspect has been overlooked, and the pendulum swings back; not all the way, but far enough to pick up on the best of both worlds. We have seen exactly the same thing with the cloud.”

“Network latency can be avoided with careful placement and planning. Knowing the distance and network bandwidth between primary and secondary clouds (private or public) will allow IT to work around it”, says Srinivas. “Eliminating latency is not possible, as the laws of physics will still apply.”

BaaS and DRaaS

In contrast, Trossell says: “There are a few key points to take into account when designing BaaS and DRaaS. Firstly, you should always design your back-up strategy around your organisation’s recovery requirements. Secondly, until your last byte of data is offsite, you do not have a valid back-up. Lastly, replication should not be your sole strategy for back-up and disaster recovery: if your primary back-up becomes corrupt, then all your copies quickly become corrupt. As for ransomware running amok, the only way to protect against it is to maintain an ‘Air Gap’ between the live copies and the back-up and disaster recovery copies.”
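Trossell’s rules can be reduced to a couple of simple checks. The sketch below is purely hypothetical; the BackupCopy structure and field names are invented for the example and are not any product’s API.

# Hypothetical sketch of the validity rules Trossell describes; the fields
# and structure are illustrative, not taken from any vendor.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str          # e.g. "onsite-cache", "cloud-baas", "offline-tape"
    bytes_written: int
    bytes_expected: int
    offsite: bool
    air_gapped: bool       # disconnected from the live systems

def backup_is_valid(copies: list[BackupCopy]) -> bool:
    """A backup only counts once at least one offsite copy is 100% complete."""
    return any(c.offsite and c.bytes_written >= c.bytes_expected for c in copies)

def ransomware_resilient(copies: list[BackupCopy]) -> bool:
    """Replication alone is not enough: at least one copy must be air-gapped."""
    return any(c.air_gapped for c in copies)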

So, beyond data acceleration with machine learning, why isn’t there an efficient, traditional way of moving data around, and why are the options limited for customers? Srinivas replies by suggesting that external data acceleration techniques are often an afterthought:

“Using new-generation datacentre platforms will allow the efficient usage of networks, in turn eliminating the need for external acceleration platforms. At the end of the day, as I said, physics plays a role and there is no magic formula. What you need to choose is a capable platform that allows you to use a hybrid architecture (private and public) and is intelligent enough to optimise data placement and transfers for the best cost and performance.”

Speed of light

“The speed of light is a fact of life for all of us in the IT world, and the traditional way of improving data throughput over the WAN is to use WAN optimisation, where the data is cached locally at both ends,” says Trossell. “The data that transfers between the nodes is de-duplicated to reduce the amount of data flowing across the WAN, giving the impression of improved performance. However, many products now have this function built in. Whilst this has served the industry well over the years, it does have some limitations that are coming to the fore, due to the type of data we use and the higher-performance bandwidths available with current WANs.”
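The caching and de-duplication Trossell describes boils down to sending a fingerprint of each block and only shipping the blocks the far end has not already seen. The following is a deliberately simplified sketch of that idea; the block size, hashing scheme and in-memory hash index are illustrative choices, not how any particular WAN optimiser is built.

# Simplified sketch of cache-based de-duplication over a WAN link, assuming
# both ends keep an index of block hashes; real products are far more involved.
import hashlib

BLOCK = 64 * 1024  # 64 KB blocks, an arbitrary choice for illustration

def send_deduplicated(data: bytes, remote_hashes: set[str]) -> tuple[list, int]:
    """Return the instruction stream and the number of bytes actually sent."""
    stream, sent = [], 0
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest in remote_hashes:
            stream.append(("ref", digest))        # remote end already has it
        else:
            stream.append(("data", block))        # must cross the WAN
            remote_hashes.add(digest)
            sent += len(block)
    return stream, sent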

He explains that much of today’s data is either encrypted at rest or in a compressed format, such as images and video: “Even with de-duplication as part of the product there is little or no reduction in the size of the data transmitted, since there is little to be gained by trying to compress a compressed or encrypted file. Secondly, de-duplication takes computing power, and once you start to get to 1Gb or more WAN connections, the amount of computing power required rises exponentially.”
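His first point is easy to verify: running a general-purpose compressor over random bytes, which is a reasonable stand-in for encrypted content, yields almost no reduction, whereas repetitive business data shrinks dramatically. A quick, illustrative test:

# Quick demonstration of Trossell's point: data that is already compressed or
# encrypted (approximated here by random bytes) barely shrinks again.
import os
import zlib

text = b"transaction record 0001;" * 40_000          # highly repetitive data
random_like = os.urandom(len(text))                   # stands in for ciphertext

for label, payload in (("repetitive text", text), ("random/encrypted-like", random_like)):
    ratio = len(zlib.compress(payload, 9)) / len(payload)
    print(f"{label:<24} compressed to {ratio:.0%} of original size")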

Cloud gateways

There are several options available today, and cloud gateways are but one of them. From a Nutanix perspective, they suit organisations that are re-architecting with next-generation data centre operating systems that understand the cloud. Srinivas also believes that, “for customers wanting to extend the efficiency of legacy architectures, gateway vendors have provided some stop-gap optimisations”.

“Cloud gateways are an extension of the WAN optimisation products and are the traditional way of securing data for cloud-based back-up and disaster recovery functions off-site – especially over slower WAN connections”, replies Trossell. The cloud gateways tend to be configured with a large local data cache of disks with “a de-dupe engine to crunch down the data before sending it to the cloud. The cache is used as a buffer between the fast data transfers of the in-house systems and the slow WAN links.”

He comments that the cloud gateway “holds the most recent set of data for fast restores to the in-house systems. However, these gateways have a number of aspects that the user should be aware of.” So, to get good “de-dupe ratios”, he recommends that the data has to be fairly stable. “A data set that has large amounts of change, especially with encrypted or compressed files, will hinder and slow the de-dupe process”, Trossell explains.

“Secondly (and a point that many cloud gateway users are unaware of), although the back-up has finished, until the gateway has finished manipulating the data and has sent every last byte off-site, you still do not have a safe offsite back-up. Lastly, and this is key to any recovery process, if you lose your data completely or the file you want isn’t in the local cache, then you have to pull the data back across the WAN and re-inflate it, which will push the recovery time out considerably. For larger organisations, using cloud gateways can push their RTO beyond what is acceptable, forcing them to use other methods to move data to the cloud.”
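The RTO risk he describes can be put into rough numbers. The sketch below is purely illustrative; the link speeds and re-inflation allowance are assumptions, not measurements from any gateway. A restore served from the local cache completes in minutes, while the same restore pulled back across the WAN takes hours.

# Illustrative sketch (not a real gateway API) of why restores that miss the
# local cache blow out the RTO: the data must come back across the slow WAN.
def estimate_restore_seconds(size_gb: float, in_local_cache: bool,
                             lan_gbps: float = 10.0, wan_mbps: float = 200.0,
                             reinflate_factor: float = 1.2) -> float:
    bits = size_gb * 8e9
    if in_local_cache:
        return bits / (lan_gbps * 1e9)                # fast local restore
    # cache miss: pull the data over the WAN, with a rough allowance
    # for re-inflating the de-duplicated data afterwards
    return (bits / (wan_mbps * 1e6)) * reinflate_factor

print(f"1 TB from cache : {estimate_restore_seconds(1000, True) / 3600:.1f} h")
print(f"1 TB over WAN   : {estimate_restore_seconds(1000, False) / 3600:.1f} h")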

Different approach

So, what are the options from Trossell’s perspective? Data acceleration is certainly one of them, allowing organisations to maintain acceptable recovery point objectives (RPO) and recovery time objectives (RTO). Data acceleration, he explains, takes a different approach to maximising throughput over the WAN from that of traditional WAN optimisation.

Trossell adds: “Where WAN optimisation uses de-dupe technology to improve the movement of data across the WAN, data acceleration and WAN acceleration products do not touch the data. They use artificial intelligence-controlled parallelisation techniques to maximise the throughput of the WAN connection, up to 90%+ of the WAN capability, while mitigating packet loss. As it does not manipulate the data, its performance is not affected by the data type and, because the computational overhead is low, it can scale to 40Gb WAN connections.”
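Bridgeworks does not publish its internals, but the general principle of parallelisation is straightforward: a single TCP stream on a long link spends most of its time waiting for acknowledgements, so splitting a transfer into chunks and running many streams concurrently keeps the pipe full. The sketch below illustrates only that general idea, with a placeholder send_chunk function rather than any real transfer API.

# A minimal sketch of latency mitigation through parallel streams; it is not
# Bridgeworks' implementation, just the idea of keeping a long, high-latency
# link busy by running several transfers concurrently.
from concurrent.futures import ThreadPoolExecutor

def send_chunk(endpoint: str, chunk: bytes) -> int:
    # Placeholder for a real upload (e.g. an HTTP PUT of one object part).
    # Each stream spends most of its time waiting on the WAN round trip,
    # which is why running many of them in parallel raises utilisation.
    ...
    return len(chunk)

def parallel_upload(endpoint: str, data: bytes, streams: int = 16,
                    chunk_size: int = 8 * 1024 * 1024) -> int:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return sum(pool.map(lambda c: send_chunk(endpoint, c), chunks))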

The trouble is that many organisations can miss their RPOs and RTOs whenever they move data around. “It all boils down to design. If you have, say, a zero RPO requirement, there is no option other than moving data within the 5ms network boundary. In turn, this translates into the maximum distance at which your secondary datacentre can be situated”, says Srinivas.
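Srinivas’s 5ms boundary can be turned into a distance with some back-of-the-envelope arithmetic, assuming signals travel through fibre at roughly 200,000 km per second and ignoring equipment and routing delays. If the 5ms budget covers the round trip, the secondary site can sit no more than about 500 km of fibre away.

# Back-of-the-envelope check of the 5 ms boundary Srinivas mentions, assuming
# ~200,000 km/s signal speed in fibre and ignoring equipment and routing delay.
FIBRE_KM_PER_S = 200_000

def max_one_way_km(rtt_budget_ms: float) -> float:
    one_way_s = (rtt_budget_ms / 1000) / 2      # half the round trip each way
    return one_way_s * FIBRE_KM_PER_S

print(f"5 ms round trip -> at most ~{max_one_way_km(5):.0f} km of fibre")   # ~500 km
print(f"1 ms round trip -> at most ~{max_one_way_km(1):.0f} km of fibre")   # ~100 km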

Trossell finds that organisations with a requirement to hold data offsite have always faced the distance and speed dilemma: the further you go, the slower you go. “In the past, the compromise (whenever there are large volumes of data involved) has been to use tape devices and a safe repository such as Iron Mountain,” he reveals.

In his view this is great for the RPO. He therefore comments: “Large data sets can be stored, but the RTOs can be poor, due to the time it takes to retrieve the data from the depository. The only other alternative was to use a high-speed Metro Area Network to replicate over. The issue with this is that the data will likely be held within the same circle of disruption, as we witnessed with Superstorm Sandy, where companies lost both data centres. WAN optimisation has been used in the past but, for the reasons highlighted above, it can no longer cope with the performance requirements of today’s enterprise.”

Increase WAN capacity

He claims that with WAN and data acceleration, and their ability to maximise WAN capacity regardless of the distance involved, it is now possible to have datacentres or the cloud thousands of miles away, whilst retaining performance and meeting RPO requirements.

So, how is traditional technology, such as WAN optimisation, inhibiting cloud back-up and storage, including the encrypted and de-duplicated data used in a cloud gateway? “The need for all these technologies will vanish over time as the new paradigm of a cloud operating system, one that melds both private and public and understands application behaviour and its data, comes to the fore,” thinks Srinivas.

To back up and store data both securely and efficiently, while also overcoming the challenges of back-up and storage, organisations must understand their business goals. They also need to comprehend application behaviour and data patterns. It is crucial for them to plan their environment so that they can adhere to service-level agreements, and this planning and analysis must include thought and action to ensure regulatory compliance. They also need to build a security methodology for each element of their design. From a Nutanix perspective, SD-WAN is a must; however, WAN and data acceleration are something that banks could certainly do with deploying.
