
WAN Acceleration: Taking the Management Costs out of WAN Latency

Bridgeworks CEO and CTO David Trossell features in the April issue of Enterprise Viewpoint Magazine (pages 28-29). The article is reproduced below.


April 2023

 

Latency and packet loss are highly disruptive to Wide Area Networks (WANs), and in the fight to mitigate their effects, management costs can spiral out of control. To combat this, organisations and enterprises need to find a solution that tackles both head on. In fact, Ken Wood, Principal Technical Product Manager at Teradata, writes about ‘Taking the management costs out of network latency’ for TechRadar Pro.

He concludes: “The powerful combination of the right technology is perfect for on-premises, private, hybrid, public and multi-cloud solutions, where long network latency might keep an enterprise from fully leveraging access to their sensitive data. The cost savings and reduced management overhead spent on curating important data could also play a role in architecting data access methodologies.”

The cost of WAN latency to management comes in both time and risk: time to get data to where it is most needed, time to protect data, and time spent by employees and customers waiting for that data. Latency mitigation techniques are therefore needed over intercontinental distances, because the further apart data centres are, the greater the risk of data transfers and WAN speeds being impacted by latency and packet loss.

Performance is critical

Steven Umbehocker, CEO/CTO, OSNEXUS Corporation, comments regarding metro clusters: “Recovery time objectives are critical to most organisations, so it’s imperative to allow enough performance to keep up with the data ingest rate at a primary site. The other key thing with metro clusters is that they are an increasingly in-demand architecture, where the storage is distributed across multiple sites to achieve zero downtime. That’s only possible when the latency is low enough to provide sufficient performance for the workloads. PORTrockIT solves these latency issues so that metro clusters can be deployed across a much larger geographic area.”

“If you have an earthquake in one zone, that larger metro cluster has benefits. You can avoid downtime, enabling organisations to link their disparate sites together. There is a great deal of latency between New York and Tokyo, so you need something like PORTrockIT to mitigate latency.” 

Mitigation also involves a balancing act: speed against the use of strong encryption to protect critical and sensitive data. Should the worst happen, there will also be a need to recover data from an air-gapped recovery source. Air gaps are created when data is stored offsite, without any connectivity, so that if an organisation or an enterprise is attacked, both sensitive data and systems can be quickly restored. That data can then be used, via a WAN, to restore systems that have been adversely affected.

Data lineages

So why are data lineages important? The term data lineage means different things to different industry segments. For some, it may be transient data with a lifetime of only a day before it is amalgamated into other data structures and the original is deleted. For others, only one immutable copy may exist, yet it needs to be accessible across continents. Other data is immutable and governed by law to remain accessible years later. Every dataset has its own lifecycle, which organisations need to manage efficiently and effectively for the benefit of their operations.

Latency and its companion, packet loss, are the killers of performance over the WAN. They can reduce a 10Gb/s WAN to what appears to be no more than 500Mb/s of effective performance. For the many companies that rely on moving large data sets around the globe, this is a major issue – it comes back to that question: “How much is latency costing you in time and resources?” There also needs to be consideration of where to locate disaster recovery sites, to ensure they aren’t placed within the same circles of disruption.
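To put rough numbers on that claim: a single TCP stream can never move more than one window of data per round trip, so its throughput is capped at window ÷ RTT regardless of link speed. A minimal sketch, assuming a 4MB tuned window and an 80ms transatlantic round trip:

```python
# Back-of-envelope: a single TCP stream can move at most one window of
# data per round trip, so throughput <= window / RTT.
# The window size and RTT below are illustrative assumptions.

def max_tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on one TCP stream's throughput in bits per second."""
    return window_bytes * 8 / rtt_s

window = 4 * 1024 * 1024   # assumed 4 MB TCP window (a well-tuned host)
rtt = 0.080                # assumed ~80 ms round trip, e.g. transatlantic

rate = max_tcp_throughput_bps(window, rtt)
print(f"Best case: {rate / 1e6:.0f} Mb/s")           # ~419 Mb/s
print(f"Share of a 10 Gb/s WAN: {rate / 10e9:.0%}")  # ~4%
# Every lost packet also halves TCP's congestion window, so real-world
# throughput sits well below even this ceiling.
```

With those assumptions, a 10Gb/s pipe delivers barely 4% of its capacity to one stream, which is exactly the 10Gb-behaving-like-500Mb effect described above.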

Organisations and enterprises keep daily backups in their data centres in case a file is mistakenly deleted. They also keep a full disaster recovery backup in their data centre to protect against any downtime caused by a natural or manmade disaster, such as a cyber-attack. The most important backup, however, is the offsite, air-gapped backup held as far away as possible.

The problem with having that crucial air gap a long distance away is WAN latency. Some organisations and enterprises use a tape storage company, such as Iron Mountain, to provide that critical distance and air gap. However, that “offsite storage” has its own “latency” – the time to find the tapes and the speed of the delivery van when trying to recover data. So, the right WAN technology, one that mitigates WAN latency and packet loss, can add value across the whole of the data movement, protection, lineage and remote computation roles.

Determining the ‘right technology’

Determining the ‘right technology’ for mitigating the effects of latency and packet loss, with a view to increasing bandwidth utilisation, has often been a complex task. Many technologies have been employed to try to improve the performance of transporting data over a WAN. The industry has tried compression and deduplication with WAN Optimisation, but these have a performance limit well below modern multi-Gigabit WANs. This has vexed the industry for many years.

The key question is how to make TCP/IP work over long distances. A number of solutions use UDP as the transport, with mechanisms to cope with lost packets. However, these struggle on the higher-bandwidth networks of more than 10Gb/s that are now becoming the norm. A different approach is required: one that uses the trusted TCP/IP protocol, mitigates packet loss, scales to more than 40Gb/s and mitigates the effects of latency – even over continental and intercontinental distances. This technology is called WAN Acceleration.
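The scale of the problem shows up in the bandwidth-delay product: the amount of data that must be “in flight” to keep a long link busy. The sketch below assumes a 10Gb/s link, an 80ms round trip and a 4MB per-stream window, all illustrative figures:

```python
# Bandwidth-delay product: the data that must be unacknowledged on the
# wire to keep a long link full. Link speed, RTT and the per-stream
# window are illustrative assumptions.

def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to saturate the link."""
    return link_bps * rtt_s / 8

link = 10e9               # assumed 10 Gb/s WAN
rtt = 0.080               # assumed 80 ms round trip
window = 4 * 1024 * 1024  # assumed 4 MB window per TCP stream

needed = bdp_bytes(link, rtt)
print(f"In-flight data needed: {needed / 1e6:.0f} MB")              # 100 MB
print(f"Parallel streams to fill the pipe: {needed / window:.0f}")  # ~24
```

Roughly two dozen well-fed streams would be needed to fill the pipe under these assumptions – the gap that parallelisation, described next, sets out to close.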

WAN Acceleration: A different approach

WAN Acceleration takes a completely different approach to the problem. It uses the standard TCP protocol. Rather than compressing or deduplicating data, or using UDP, it uses a mixture of parallelisation of the incoming TCP stream and management of the flow of data on and off the WAN. This is coupled with Artificial Intelligence (AI) to manage every aspect of the flow of data across the WAN. There is no need for deduplication or compression; the data is neither touched nor modified. This approach can accelerate encrypted data without the need for keys. It means there are no new storage requirements and low compute and memory requirements, while driving WAN utilisation up to 90%.
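As an illustration of the parallelisation principle only – not PORTrockIT’s implementation, which is proprietary – the sketch below splits a payload into slices, moves each over its own TCP connection, and reassembles them. The host, ports, stream count and payload are arbitrary assumptions:

```python
# Minimal sketch of stream parallelisation over loopback: split a
# payload across several concurrent TCP connections, then reassemble.
import socket
import threading

HOST, BASE_PORT = "127.0.0.1", 9500  # assumed demo endpoints
STREAMS = 4                          # assumed number of parallel streams

# Open all listening sockets up front so senders cannot connect too early.
servers = [socket.create_server((HOST, BASE_PORT + i)) for i in range(STREAMS)]

def receive(srv: socket.socket, parts: list, idx: int) -> None:
    """Accept one connection and collect that stream's slice of the data."""
    conn, _ = srv.accept()
    with conn, srv:
        buf = bytearray()
        while data := conn.recv(65536):
            buf.extend(data)
        parts[idx] = bytes(buf)

def send(port: int, part: bytes) -> None:
    """Push one contiguous slice over its own TCP connection."""
    with socket.create_connection((HOST, port)) as s:
        s.sendall(part)

payload = b"x" * 1_000_000          # 1 MB of test data
size = -(-len(payload) // STREAMS)  # ceiling division: bytes per stream
slices = [payload[i * size:(i + 1) * size] for i in range(STREAMS)]

parts: list = [b""] * STREAMS
rx = [threading.Thread(target=receive, args=(servers[i], parts, i))
      for i in range(STREAMS)]
tx = [threading.Thread(target=send, args=(BASE_PORT + i, slices[i]))
      for i in range(STREAMS)]
for t in rx + tx:
    t.start()
for t in rx + tx:
    t.join()

# Reassembly is trivial because each stream carried one ordered slice.
assert b"".join(parts) == payload
print(f"Moved {len(payload)} bytes over {STREAMS} parallel streams")
```

Because each connection carries its own window, the aggregate in-flight data grows with the stream count, which is what lets the combined flow approach the link’s capacity over a long round trip.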

Umbehocker adds: “You don’t want to have to wait around for data to be copied; you want it to be continuous. You are continuously copying data between sites. This is versus strategies that are doing periodic copies. WAN Acceleration enables organisations to get increased value and performance out of a 10 gig WAN, with up to a 700% boost in throughput. Customers will tell us what WAN link they have, and we look at whether creating metro clusters is possible. With PORTrockIT, they can do more to achieve this.”

Daily backups

WAN Acceleration negates the need to stage and back up data locally, reducing both network and management costs. With the ability to move data at high speed, organisations and enterprises can back up data offsite daily, or more frequently if required. AI and machine learning (ML) are used in WAN Acceleration to reduce the need to manually manage data transfers over the WAN.
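Some back-of-envelope arithmetic shows why the effective transfer rate decides whether a daily offsite backup is even feasible. The 20TB dataset and the two rates below are illustrative assumptions:

```python
# Backup-window arithmetic: hours to move a backup set at a
# latency-throttled rate versus an accelerated one. The dataset size
# and both rates are illustrative assumptions.

def transfer_hours(data_tb: float, rate_gbps: float) -> float:
    """Hours to move data_tb terabytes at rate_gbps gigabits per second."""
    return data_tb * 8e12 / (rate_gbps * 1e9) / 3600

data = 20.0  # assumed 20 TB nightly backup set

print(f"At 0.5 Gb/s effective: {transfer_hours(data, 0.5):.1f} h")  # ~88.9 h
print(f"At 9 Gb/s (90% of 10 Gb/s): {transfer_hours(data, 9.0):.1f} h")  # ~4.9 h
```

Under these assumptions the latency-throttled transfer overruns the day several times over, while the accelerated one fits comfortably inside an overnight window.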

The AI automatically tunes the network parameters it uses as it transfers data, ensuring maximum efficiency at all times. More to the point, one of the key advantages over WAN Optimisation is that no storage is required. This makes it simple to create two instances anywhere in the world and start transferring data to the private, public or hybrid cloud.
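Bridgeworks does not publish how its AI tunes transfers, so the following is only a stand-in to make the idea concrete: a simple hill-climbing controller that nudges the parallel-stream count up or down depending on whether measured goodput improves. The measure_goodput() probe and its response curve are entirely hypothetical:

```python
# Hypothetical self-tuning loop: hill-climb on the parallel-stream
# count using a goodput probe. This is a stand-in sketch, not the
# vendor's algorithm; measure_goodput() fakes a noisy response curve.
import random

def measure_goodput(streams: int) -> float:
    """Hypothetical probe: Mb/s achieved with this many streams.
    Fakes a curve that peaks around 24 streams, plus measurement noise."""
    return (400 * min(streams, 24)
            - 5 * max(0, streams - 24) ** 2
            + random.uniform(-50, 50))

def tune(start: int = 4, rounds: int = 30) -> int:
    """Adjust the stream count toward whatever direction improves goodput."""
    streams, best = start, measure_goodput(start)
    step = 2
    for _ in range(rounds):
        candidate = max(1, streams + step)
        rate = measure_goodput(candidate)
        if rate > best:          # improvement: keep moving this way
            streams, best = candidate, rate
        else:                    # worse: reverse direction and retry
            step = -step
    return streams

print("Settled on", tune(), "parallel streams")
```

A production system would tune many more parameters than stream count, and continuously rather than once, but the feedback loop – probe, compare, adjust – is the part this sketch is meant to convey.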

Reducing management costs

So, how can organisations take the management costs out of WAN latency?

WAN Acceleration is central to achieving this. To avoid downtime, and to achieve compliance with data protection regulations and laws such as the European Union’s GDPR, organisations and enterprises can use this technology to store data thousands of miles away while still transmitting and receiving it at speed. Once all the costs – including storage space, ingress, egress, capital costs and location – are taken into account, the cost of storing data is much the same whether it sits on-premises, in the cloud or in a hybrid cloud.

However, for some classifications of data, where it is stored or resides is what matters most with respect to GDPR. Whilst it is possible to accept European citizens’ data outside of the EU – such as through a USA website order – personally identifiable data (PID) must be stored within the EU. Moving this data between countries and continents brings both time and protection pressures. WAN Acceleration can speed up encrypted data and, so, it is a simple but fast solution that can save management time and money while reducing operational risk.
