Per-Ola Mard, Managing Director of Data Resilience AB, shares his industry insight about the future of WAN data acceleration.
Moving large volumes of data consisting of rich multimedia and perhaps encrypted data is a challenge in itself, even if it is just between two local storage devices. Add to that long distance, resulting in increased latency and dropped packets, and the difficulty of performing such a task is likely to skyrocket. So what is the solution?
Traditional WAN (Wide Area Network) optimisers, which rely on deduplication and a local cache to provide the user experience, will not help, for the simple reason that they cannot handle encrypted data streams very well: as we all know, it is extremely difficult to dedupe encrypted data. The cache-centric design of such systems is also likely to be rendered useless by streams of backup traffic passing through, depleting the very cache that is used to optimise the transfer of chattier data types, such as documents fetched from a file server and cached in the WAN optimiser. Often the remedy is to configure exceptions and opt this particular type of traffic out, thereby also reducing the value of the original investment. Instead, what we need here is a different approach, one that accelerates the data across the WAN link, and that is a completely different thing altogether.
This is a task that is far easier said than done…
A solution for transferring large volumes of data across long distances would have to resolve four key elements:
- It must work intelligently, dynamically and continuously adjusting various settings and parameters in response to changing conditions on the WAN.
- The system must also be agnostic to data types (plain, compressed, deduped, encrypted) and transfer data at near wire speed, transparently from point A to point B, regardless of latency, while also mitigating the impact of packet loss.
- It must not force users into yet another proprietary orchestration layer; instead, it should allow users to keep their preferred method of transfer: SFTP, DFSR, HTTPS, SNAP, Object and so on.
- It must manage the data flow from ingress to egress so that the WAN is never underutilised, nor overdriven to the point of congestion.
Only then will the solution be the perfect tool for backup and recovery, replication, copying data, moving data into, out of and between clouds (public, private, hybrid), data migration and so on. Is there in fact such a tool available? Well, yes there is!
The development team at Bridgeworks has worked for many years to perfect the solution that is now marketed under the names PORTrockIT and WANrockIT. By parallelising the stream of data and applying Artificial Intelligence, pushing the data through multiple channels across the WAN and reassembling it at the far end, Bridgeworks has met the four conditions: reducing the effects of latency, mitigating packet loss, remaining data and protocol agnostic, and maximising the data flow.
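The intuition behind parallelising a single stream across multiple channels can be illustrated with a simple throughput model: each TCP stream can move at most one window of data per round trip, so running several streams side by side multiplies the achievable rate until the physical link is saturated. The window size, stream count and link rate below are my own illustrative assumptions, not Bridgeworks parameters:

```python
# Illustrative model: parallel streams each move one TCP window per round
# trip; the aggregate is capped by the physical link rate.
# All figures below are assumptions for illustration only.

def aggregate_throughput(streams: int, window_bytes: int,
                         rtt_s: float, link_bytes_per_s: float) -> float:
    """Aggregate bytes/s of N window-limited streams, capped at link rate."""
    return min(streams * window_bytes / rtt_s, link_bytes_per_s)

WINDOW = 256 * 1024   # assumed 256 KiB TCP window per stream
RTT = 0.020           # 20 ms round-trip time
LINK = 125e6          # 1GbE is roughly 125 MB/s of raw capacity

print(aggregate_throughput(1, WINDOW, RTT, LINK) / 1e6)   # one stream: ~13.1 MB/s
print(aggregate_throughput(16, WINDOW, RTT, LINK) / 1e6)  # 16 streams: capped at 125.0
```

With one stream the latency dominates; with enough parallel streams the link itself becomes the limit, which is the "near wire speed" behaviour described above.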
The solution is licensed per port/protocol and installed as either a virtual (VM) or physical (server) appliance based on standard server hardware. It either works on TCP/IP ports (PORTrockIT), as described above, or accelerates and converts the FC (Fibre Channel) and iSCSI protocols, allowing, for example, access to an FC tape library across a WAN link from a VM running a backup media server on the virtualisation platform of choice (WANrockIT). There is also an option to purchase WANrockIT as an AWS (Amazon Web Services) virtual appliance from the AWS Marketplace.
So what is in it for you, and what should you expect in terms of acceleration?
Say you lease a 1GbE WAN link with 20ms of latency that is not fully utilised, and you want to transfer 1TB of data using, for example, SFTP file transfer: without the Bridgeworks accelerator this would take about 23.3 hours, whereas with PORTrockIT you would be done in only 2.6 hours.
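The 23.3-hour figure is consistent with a single TCP stream limited by its window size: the sender can have at most one window of data in flight per round trip. As a rough sanity check, assuming a 256 KiB window and treating the 20ms figure as the round-trip time (both assumptions mine, for illustration):

```python
# Back-of-the-envelope check of the 23.3-hour figure, assuming a single
# TCP stream with a 256 KiB window and a 20 ms round-trip time.
# (Window size is an assumption for illustration; real stacks vary.)

def single_stream_throughput(window_bytes: int, rtt_s: float) -> float:
    """Max bytes/s of one TCP stream: one window delivered per RTT."""
    return window_bytes / rtt_s

WINDOW = 256 * 1024          # 256 KiB window (assumed)
RTT = 0.020                  # 20 ms round-trip time
DATA = 2**40                 # 1 TiB to transfer

throughput = single_stream_throughput(WINDOW, RTT)  # ~13.1 MB/s
hours = DATA / throughput / 3600
print(f"{hours:.1f} hours")  # -> 23.3 hours
```

The accelerated 2.6-hour figure corresponds to roughly 117 MB/s, i.e. close to the wire speed of a 1GbE link, which is the point of filling the pipe with parallel streams.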
When using the WAN accelerator described here, it is very important to understand that if the pipe is already fully utilised, regardless of bandwidth (100Mb/1Gb/10Gb), there is not much we can do: there is nothing for us to work with. On the other hand, if the WAN link is under-utilised, PORTrockIT will fill it to the brim and your data will move across very fast indeed.
In a LAB session accelerating iSCSI transfers with the WANrockIT product, the key to the performance improvement lay in the AI (Artificial Intelligence) element of the software applying its machine learning. The LAB also showed the impact of introducing 20ms of latency on the iSCSI traffic: throughput dropped from the 72MB/s line speed to 3.12MB/s. In this LAB we did not introduce any packet loss, which of course would have affected the unaccelerated transfer even more.
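The drop from 72MB/s to 3.12MB/s is roughly what a simple model of synchronous iSCSI predicts: if each I/O must complete a full round trip before the next one is issued, the added latency dominates the transfer time. Assuming 64 KiB I/Os (my assumption, for illustration):

```python
# Rough model of the LAB numbers, assuming synchronous 64 KiB iSCSI I/Os
# where each I/O pays one 20 ms round trip before the next can start.
# The I/O size is an assumption for illustration.

IO_SIZE = 64 * 1024    # 64 KiB per I/O (assumed)
LINE_SPEED = 72e6      # 72 MB/s observed with no added latency
RTT = 0.020            # 20 ms of added latency

transfer_time = IO_SIZE / LINE_SPEED          # time on the wire per I/O
throughput = IO_SIZE / (RTT + transfer_time)  # effective rate with latency
print(f"{throughput / 1e6:.2f} MB/s")         # -> 3.13 MB/s
```

The model lands within about 1% of the 3.12MB/s measured in the LAB, which illustrates why keeping many transfers in flight at once, rather than waiting out each round trip, recovers the lost bandwidth.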
You can try out the effect of the WAN accelerator yourself by entering the latency and bandwidth of your WAN link into the free Bridgeworks ThroughPut Calculator, and estimate how the software may help solve your data transfer problems.
After careful planning, the WAN accelerator is physically introduced into the customer network non-disruptively; once the traffic is redirected through the accelerator and the AI has been trained, the benefits can be observed immediately.
So, if you are not getting your backups across in time for the next shift and you have half-filled pipes, you could very likely be much better off using these products.
Data Resilience AB is a reseller of the Bridgeworks WAN accelerator products in the Nordics. Please reach out to me here on LinkedIn if you are interested in exploring the solution further for your specific data transfer challenges.
This article is based on the original post by Per-Ola Mard, Managing Director of Data Resilience AB.