We speak to Data Center Journal about how data centers are evolving and the industry shift to the edge.
As data volumes grow, data centers must evolve. These facilities are known in some circles as the “digital core,” and as such they are having to adapt to an ever-wider array of technologies. Large technology vendors also tend to say they care about their customers, and perhaps they do, but they are also very keen to sell their wares, from servers to network infrastructure. Add the discussions about being at “the edge” and you’ll find yourself in a jungle that’s becoming harder to navigate.
What is certain, though, is that data volumes will continue to increase, and research suggests that digital transformation and the Internet of Things (IoT) are helping drive this trend. Richard Harris, executive editor of App Developer Magazine, went so far as to suggest in December 2016 that “in 2017, we will create more data than ever before, creating new challenges around consuming that data to make strategic and tactical decisions.”
“The key to creating the future data center is for vendors to offer customers what they need today to meet tomorrow’s needs. You need not replace all of your legacy infrastructure for the latest technology. You can achieve much with what you already have.”
– David Trossell, CEO, Bridgeworks Ltd
TechTarget defines edge computing as “a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible. The move toward edge computing is driven by mobile computing, the decreasing cost of computer components and the sheer number of networked devices in the internet of things (IoT).”
An argument for edge data centers, and for edge computing generally, is that they can reduce the impact of network latency on the edge user or process. They don’t, however, solve the problem of getting the data to and from the edge in the first place. And even when data is distributed across both the edge and the core, data-protection policies still apply, perhaps more so than ever. Ideally, data should be stored and backed up in at least three different data centers or disaster-recovery sites.
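As a loose illustration of that three-site rule (the policy threshold comes from the article; the site names, dataset names, and helper function here are hypothetical), a backup plan could be sanity-checked like this:

```python
# Hypothetical sketch: verify that copies of each dataset live in at least
# three distinct data centers or DR sites, per the rule described above.
# All site and dataset names are illustrative, not from a real deployment.

MIN_SITES = 3

def meets_policy(replica_sites):
    """True if replicas span at least MIN_SITES distinct sites."""
    return len(set(replica_sites)) >= MIN_SITES

backup_plan = {
    "customer-db":    ["core-east", "core-west", "dr-site"],  # 3 distinct sites
    "edge-telemetry": ["edge-01", "edge-01", "core-east"],    # only 2 distinct
}

for dataset, sites in backup_plan.items():
    status = "OK" if meets_policy(sites) else "POLICY VIOLATION"
    print(f"{dataset}: {status}")
```

A real deployment would of course pull replica locations from its storage or backup management system rather than a hard-coded dictionary, but the check itself is the same: count distinct sites, not copies.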
Many organizations make the mistake of storing data too close to the primary site, inside the same circle of disruption, in order to reduce latency. Should the worst happen, that choice can turn an outage into a total disaster. There is now no need to place all of your eggs in one basket: data-acceleration solutions mitigate the effects of data and network latency, speed up data flows, and reduce packet loss over distance. The result is better data flow to and from the edge as well as between data-center cores.