According to Bill Kleyman, Advisory Board Member of MTM Technologies and Director of Technology Solutions at EPAM Systems, the top five strategies for the data center include flash storage, hyper-converged infrastructure, Linux containers and orchestration tools, bring your own device (BYOD), and software-defined networking. However, in his 17th April 2018 article on this topic for Data Center Frontier, he misses one crucial aspect – WAN data acceleration.

All data centers are affected by data and network latency, as well as by packet loss, so there is a need to consider the top five strategies for wide area network (WAN) data acceleration – perhaps in addition to, or as a complement of, some of the strategies proposed by Kleyman, such as software-defined networking. This will ensure that your data remains secure, that it moves expediently, and that the adverse effects of latency and packet loss are mitigated. This is vital when you need to move large volumes of data fast.

Software-defined networks

In his article, Kleyman writes about software-defined networks (SDNs) and comments: “As new applications require cloud delivery or deployment, the demand for transferring data into the cloud is increasing. This requires a new approach to managing data with a layer of software on top of existing network hardware to drive innovation. Industry analyst firm Gartner predicts that by 2019, 70 percent of existing storage array products will also be available as “software only” versions. And, by 2020, between 70 percent and 80 percent of unstructured data will be held on lower-cost storage, managed by software-defined storage environments.”

“SDN can do a lot for your data center. Imagine having the ability to recreate an entire networking environment to mirror an existing infrastructure. The difference? Everything is network virtualization and completely isolated. SDN can help create connections between applications, services, VMs and numerous other workloads. Effectively, administrators can test their environment, disaster recovery (DR) plan or high-availability methodology completely from a secured and isolated configuration. From there, you’re looking at capabilities like Network Function Virtualization (NFV), WAN traffic control, load-balancing, security, WAN-OP, and even SD-WAN.”

Accurate selection

In my opinion, he accurately picks the key topics that are driving performance and cost savings within the data center – from flash storage to hyper-converged infrastructure deployed at the edge, through to the latest advances in SDNs. However, with the push towards cloud, hybrid cloud and edge computing, very little attention is given to connecting these technologies together, and yet that connectivity is high on everyone’s agenda.

Yet, whilst Kleyman looks inwardly at the top five data center strategies that will drive performance and efficiency, he has not considered the performance inhibitors outside of the data center. This is unusual, since no data center is an island. Data, and access to it, is now the lifeblood of any company. People and organizations no longer generate or consume data within the confines of a single data center.

The efficient flow of data in and out of the data center is critical to the efficiency of any data-driven company. Such is the criticality of data movement that everyone involved in the smooth and efficient running of the data center must step back and see the data transfer picture as a whole – not just from their own sphere of influence.

Considering latency

In a later article, ‘Understanding the Edge and the World of Connected Devices,’ Kleyman considers the effects of latency and packet loss on data transmissions from remote IoT devices and the problems these can cause.

Traditionally, organizations have tried to use WAN optimization to improve WAN throughput. WAN optimization is a bit of a misnomer, as it does not resolve latency or packet loss issues. In reality, it is data and application optimization – largely deduplication and caching – that gives the impression of improved performance. Kleyman’s solution is to add more compute power at the edge, closer to the Internet of Things (IoT) device or application, including a modicum of onboard computing for the more time-sensitive applications, while still transferring the data back to the data center.
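
To make the distinction concrete, here is a minimal sketch in Python of how deduplication-based WAN optimization works – the names are illustrative and this is not any vendor’s API. The sender indexes chunk hashes and transmits a short reference in place of any chunk the far side has already seen:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunking


def dedupe_stream(data: bytes, seen: set) -> list:
    """Send a chunk in full only the first time its hash is seen;
    afterwards send a short reference to it instead."""
    wire = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            wire.append(("ref", digest))  # ~32 bytes instead of 4 KB
        else:
            seen.add(digest)
            wire.append(("raw", chunk))
    return wire


seen: set = set()
payload = b"A" * CHUNK_SIZE * 3  # three identical chunks
print([kind for kind, _ in dedupe_stream(payload, seen)])
# ['raw', 'ref', 'ref'] – less data crosses the WAN, but every frame
# still pays the same round-trip latency, so nothing has been 'fixed'.
```

Less data on the wire looks like better performance, but the underlying latency and packet loss are untouched – which is exactly the misnomer described above.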

Data security

He also raises another key point that concerns many: the security of data transmissions over networks and WANs. Data encryption is a major issue for WAN optimization technologies that rely on deduplication to provide that ‘WAN performance,’ because you can’t dedupe encrypted data – see the sketch below. So, what about SD-WAN? Can SD-WANs solve the latency issue?
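
To illustrate why, here is a short sketch assuming the widely used Python cryptography package: encrypting the same block twice (with fresh nonces, as AES-GCM requires) yields completely different ciphertexts, so a deduplication index keyed on chunk hashes never finds a match.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
block = b"B" * 4096  # the same plaintext block, transmitted twice

# Each transmission uses a fresh random nonce, as AES-GCM requires,
# so identical plaintexts produce entirely different ciphertexts.
ct1 = aead.encrypt(os.urandom(12), block, None)
ct2 = aead.encrypt(os.urandom(12), block, None)

print(hashlib.sha256(ct1).digest() == hashlib.sha256(ct2).digest())
# False – the dedup index sees two 'unique' chunks, and the claimed
# WAN performance from deduplication evaporates.
```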

Well, SD-WAN is a great technological leap forward for low- to mid-sized WAN bandwidth applications, with its ability to pull disparate WAN connections together under a single software-managed WAN. However, it does not resolve latency and packet loss issues, and any performance gains are again normally due to inbuilt deduplication techniques. By adding a WAN data acceleration layer, these issues can be mitigated to improve performance, allowing large volumes of data to be moved around more quickly and efficiently than SD-WANs can achieve on their own.

WAN data acceleration

WAN data acceleration is therefore the approach that sets itself apart from the traditional approaches and technologies of WAN optimization and SD-WAN. It takes a totally different approach to mitigating the latency and packet loss issues. The inescapable fact is that the speed of light is simply not fast enough; it is what governs latency and decimates WAN performance over distance.

While nobody can fix latency itself, its effects can be addressed by using TCP/IP parallelization techniques and artificial intelligence (AI) to control the flow of data across the WAN. Typically, with the effects of latency mitigated, organizations see 95% WAN utilization. The other upside of not using compression or deduplication techniques is that WAN data acceleration speeds up any and all data in exactly the same way, with solutions such as WANrockIT and PORTrockIT.
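
The core idea behind TCP/IP parallelization can be sketched in a few lines of Python. This only illustrates the principle – it is not how WANrockIT or PORTrockIT are implemented, and the host name, port and stream count are hypothetical. The payload is sliced across several TCP connections so that multiple congestion windows are in flight at once, rather than one window per round trip:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical receiver; in practice this is the far-side appliance.
HOST, PORT, STREAMS = "wan-receiver.example.com", 9000, 8


def send_slice(args: tuple) -> None:
    """Push one slice down its own TCP connection, so each
    connection's congestion window fills independently."""
    index, chunk = args
    with socket.create_connection((HOST, PORT)) as sock:
        # Prefix each slice with its index so the receiver can reassemble.
        sock.sendall(index.to_bytes(4, "big") + chunk)


def parallel_send(payload: bytes, streams: int = STREAMS) -> None:
    size = -(-len(payload) // streams)  # ceiling division
    slices = [(i, payload[i * size:(i + 1) * size]) for i in range(streams)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        list(pool.map(send_slice, slices))

# With eight streams, roughly eight congestion windows are in flight at
# once, so throughput is no longer capped at one window per round trip.
```

In a real product the stream count and flow control would be tuned dynamically – the AI element described above – rather than fixed at eight.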

An IT industry colleague recently recounted a conversation with a marketing executive at an IT firm who had assumed that the likes of IBM and the other large IT vendors would have resolved the latency and packet loss issues by now. That’s not the case – they too need to deploy WAN data acceleration, and in fact IBM uses our WAN data acceleration solutions in its products.

Still, this is an all too common comment heard by companies that buck the trend and propose a totally different approach. In reality, people should be asking the large OEMs why they have not developed anything like this new approach, or about the other solutions that are bucking trends today.

Five accelerating strategies

So, to summarize, here are the top five strategies for WAN data acceleration:

  • There is no single solution that fixes everything; that is equally true of WAN optimization and WAN data acceleration. So, consider developing different layers of potential solutions.
  • For Office-type file transactions over low-speed WAN connections, WAN optimization should be your first port of call.
  • If the files or data you are transmitting across the WAN are encrypted, then WAN optimization will add no benefit. Instead, consider deploying WAN data acceleration, because even at low WAN bandwidths and high latencies it may improve performance.
  • Use WAN data acceleration where it works at its optimum: with a constant flow of data that can match or exceed the performance capability of the WAN.
  • Remember that WAN data acceleration’s gains grow dramatically as WAN bandwidth and latency increase. With a 10 Gb/s connection and 30 ms of latency, customers can experience up to 200 times the unaccelerated performance – see the worked example after this list.
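
To see why bandwidth and latency dominate that last point, here is a worked example for the 10 Gb/s, 30 ms case, assuming a single TCP stream limited by a classic 64 KB window – the textbook ceiling of one window per round trip; real stacks vary with window scaling:

```python
link_bps = 10e9           # 10 Gb/s WAN link
rtt_s = 0.030             # 30 ms round-trip time
window_bytes = 64 * 1024  # classic 64 KB TCP window, no scaling

# A single TCP stream can have at most one window in flight per RTT.
single_stream_bps = window_bytes * 8 / rtt_s
print(f"Single-stream ceiling: {single_stream_bps / 1e6:.0f} Mbit/s")  # ~17

# The link itself can hold this much data in flight (bandwidth-delay product).
bdp_bytes = link_bps / 8 * rtt_s
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB")  # 37.5

# Filling the pipe therefore needs hundreds of windows in flight at once –
# the headroom that parallelization exploits.
print(f"Windows needed to fill the pipe: {bdp_bytes / window_bytes:.0f}")  # ~572
```

A single default-window stream uses well under one percent of such a link, which is why gains of two orders of magnitude are plausible once the pipe is kept full.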

Increasing performance

One of the benefits of using WAN data acceleration is that it treats all types of data the same – it simply doesn’t care what the data is. This allows it to reach into other areas, such as Storage Area Networks (SANs). By decoupling the data from the protocol, organizations can transfer data between SAN devices across thousands of miles.

One such customer, CVS Caremark, connected two Virtual Tape Libraries over 2,860 miles at full WAN bandwidth, with a performance gain of 95 times the unaccelerated rate. It is therefore worth implementing as a strategy in itself – even in disaster recovery scenarios, to allow for faster recovery times and to promote service continuity.

Traditionally, data centers have often been located within the same circles of disruption, but with WAN data acceleration they can be situated on opposite sides of the globe, thousands of miles apart. This will, for example, allow data centers to continue to operate when a natural or man-made disaster occurs. Amongst other things, this ability can prevent financial and reputational loss, while creating a competitive advantage. WAN data acceleration is therefore the flexible solution with the artificial intelligence and machine learning to make a very big difference.