
The Changing Data Centre: The Impact of Networking Costs

Our CEO speaks to Networks Europe Magazine (Jan/Feb 2018, pp. 36-37) about avoiding excess costs in data centre management.

Mattias Fridström, Chief Evangelist for Telia Carrier, says lower networking hardware costs are forcing datacentres and metro networks to fundamentally change how they conduct their business. “Any location with fibre can now become a data centre, opening up new opportunities for designing, managing, and operating cloud and on-demand computing resources”, he comments.

In the past, networking hardware costs were prohibitively high, so connecting datacentres to one another was often an expensive exercise. Organisations such as Google, Facebook, Amazon and Intel have been at the forefront of the software-defined revolution in computing, and they are now moving into the networking arena with SDN and SD-WAN. This is displacing the traditional purveyors of costly, proprietary silicon in network equipment. With lower costs and higher-speed connections, the dynamics are changing. In turn, this is transforming the costs associated with datacentres and with public, hybrid and private clouds, making them more accessible and more affordable.

Restricted capacity

For years, network capacity inside the datacentre was restricted by the underlying technology, but the advent of new silicon and signal processing has pushed costs down. At the same time, network performance inside the datacentre has increased. Where connectivity was once typically limited to 10Gb/s, datacentres now commonly have 100Gb/s or higher at their disposal. So, lower costs and higher performance have become the new norm, and it is now possible to exploit this new high-capacity WAN connectivity.

Cost is always an inhibitor, but the combination of commodity hardware and open source software-defined functionality brings flexibility to organisations of all sizes, while changing the dynamics of the WAN and the possibilities it can offer. A word of caution, though: latency and its effects must be considered when planning new installations.
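
As a rough illustration of why latency matters more as link speeds climb, the short Python sketch below estimates the bandwidth-delay product, i.e. how much data has to be in flight to keep a WAN link busy. The link speeds and round-trip time are illustrative figures, not taken from the article.

```python
def bdp_megabytes(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: data that must be in flight to keep the link full."""
    bytes_per_second = bandwidth_gbps * 1e9 / 8
    return bytes_per_second * (rtt_ms / 1000) / 1e6

# The same 20 ms round trip needs ten times more in-flight data at 100Gb/s than at 10Gb/s.
for gbps in (10, 100):
    print(f"{gbps:>3} Gb/s link, 20 ms RTT -> ~{bdp_megabytes(gbps, 20):.0f} MB in flight")
```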

Locate anywhere 

Fridström claims that fibre optics now make it possible to build and locate a datacentre anywhere. In my view, you can now create global access to datacentres to mitigate disaster recovery (DR) geographical constraints. In doing so, it becomes possible to move computing closer to the consumer, and perhaps even closer to the edge. However, the speed of light is finite (for the moment), and that can cause issues, making it harder to move large volumes of data between datacentres. Network latency and packet loss remain issues that can diminish datacentre performance.

Many organisations fail to factor in the effect of the speed of light when designing geographically dispersed solutions. For high-speed trading platforms, the distance between datacentres affects the time between transactions. However, for low-speed transactional data composed of a small number of data packets, a few milliseconds of delay isn’t critical.
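
To put the speed-of-light constraint into numbers, the minimal sketch below assumes light travels through optical fibre at roughly 200,000 km/s (about two thirds of its speed in a vacuum) and estimates the unavoidable propagation delay over paths of different lengths; the distances are purely illustrative.

```python
FIBRE_SPEED_KM_PER_S = 200_000  # approximate speed of light in optical fibre

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over a fibre path, in milliseconds."""
    return distance_km / FIBRE_SPEED_KM_PER_S * 1000

# Metro, national and intercontinental-scale path lengths (illustrative).
for distance_km in (50, 400, 5_500):
    one_way = one_way_latency_ms(distance_km)
    print(f"{distance_km:>5} km -> {one_way:5.2f} ms one way, {2 * one_way:5.2f} ms round trip")
```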

Data acceleration

When transferring large volumes of data, such as workloads or backup-as-a-service transactions, latency and packet loss are massive throughput killers. You can’t make the speed of light go faster, so you must find another way around the problem. Data acceleration solutions such as PORTrockIT, through the use of parallelisation and AI, can have a dramatic effect on restoring data throughput. Unlike WAN optimisation, they also permit encrypted files to be transmitted securely at speed between datacentres located outside of each other’s circles of disruption.
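
As a generic illustration of why latency and packet loss throttle a single data stream, and why parallelisation helps, the sketch below uses the well-known Mathis approximation for TCP throughput (roughly MSS / (RTT × √loss)). The round-trip time, loss rate and stream count are hypothetical, and this is not a description of how PORTrockIT itself works.

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput per stream (Mathis et al. model), in Mb/s."""
    rtt_s = rtt_ms / 1000
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e6

# Hypothetical long-haul path: 80 ms RTT with 0.01% packet loss.
single_stream = mathis_throughput_mbps(1460, rtt_ms=80, loss_rate=0.0001)
print(f"single TCP stream  : ~{single_stream:,.0f} Mb/s")
print(f"32 parallel streams: ~{32 * single_stream:,.0f} Mb/s aggregate "
      "(ignoring congestion interactions)")
```

Even on a 10Gb/s or 100Gb/s link, a single stream under these hypothetical conditions would use only a small fraction of the available capacity, which is why moving data in parallel makes such a difference.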

WAN optimisation solutions often can’t deal with encrypted data, requiring data to be sent unencrypted to achieve speedier transmission. Moreover, while WAN optimisation and SD-WAN vendors often claim to deal with latency, they frequently don’t do so sufficiently to make a difference to network performance at the higher WAN speeds that are now available. In contrast, data acceleration solutions use machine learning to mitigate the effects of data and network latency. With them, it becomes more feasible to have optimised datacentres and disaster recovery sites in different parts of the world, with the impact of latency much reduced.

New opportunities

Lower costs also create opportunities for designing, managing and operating on-demand cloud computing resources. Indeed, with service providers now thinking globally, fibre opens up a whole range of opportunities for organisations both large and small. Many still believe that public cloud is the only model available. However, larger organisations with virtualised, distributed datacentres linked by high-speed fibre can create their own cloud infrastructure for cloud storage and cloud computing. This nevertheless leaves open the debate about whether it’s cheaper for them to outsource to third-party datacentres or to run their own.

So, with the changing datacentre in mind, and with data acceleration remaining important, here are my top tips:

  1. Understand the performance and latency requirements of your applications, whether they are databases, DRaaS, BaaS or end-user applications.
  2. Employ data acceleration solutions such as PORTrockIT to lower the latency and packet loss SLA requirements of your WAN.
  3. Remember that SD-WANs are a great way of managing WANs, but they won’t fix latency and packet loss issues.
  4. Software-defined open source network software can considerably reduce both capital and operational costs.

Future-gazing

Predicting technology over a ten-year period is as dangerous as spinning on a dime. However, by looking at some of the current trends, it becomes possible to theorise about the future from what is seen in today’s market. Firstly, with ever-increasing data volumes, datacentre power and energy consumption is bound to keep rising. This will also generate a great deal of heat, which could be used to heat homes. Datacentres are going to have to tackle their greenhouse gas emissions.

Increased fibre coverage and higher-performance networks allow datacentres to be placed almost anywhere, but in many countries rural areas still aren’t having their needs met. This means that datacentres are likely to remain in the vicinity of urban areas. However, greater government investment in network infrastructure could enable more datacentres to be located in cheaper, less urbanised areas, whether in the UK or elsewhere in the world.

It’s also worth remembering that the web is the cloud. With all this interconnectivity, it will be possible for anyone and everyone, not just large datacentres, to supply spare storage and compute capacity to a commodity brokerage, in the same way that electricity is bought and sold now. So, the changing datacentre may find itself facing an increasing amount of non-traditional competition in the future, offering more choice to organisations and to consumers.

The changing datacentre will also be increasingly software-defined, hyperscaled and virtual. With the ascendancy of artificial intelligence and software-defined infrastructure, there will be massive requirements for compute power, creating the opportunity to spread hyperscaled, virtual computing across multiple datacentres. This will solve many of the complex issues that people today think of as impossible to resolve. So, the ongoing impact of the changing datacentre and of lower networking costs will eventually make the impossible very much possible.
