
Moving Large Volumes of Data – Risk vs. Transformational Thinking

When it comes to moving large volumes of data, Enterprises are unnecessarily settling for less with their network infrastructure. 

Enterprises are struggling to move large volumes of data. Whether it's a large financial services organisation panicking about backup compliance, a healthcare company trying to replicate its patient data, or a big data team under pressure to deliver insight and justify the IT investment, the task of moving data is affected by the same challenge: network latency (delay) is dramatically slowing everything down, and it's hurting on multiple levels – from investment returns to substantial business risk.

This is a very familiar tale, told to us time and time again by our clients. And we know they are not alone, because the simple fact is that too many organisations are relying on traditional data transfer methods that can no longer cope with ever-increasing volumes of data. Data that, more often than not, cannot be deduped or is already incompressible, meaning that any attempt to speed up transfer rates fails miserably.

Traditional WAN optimization solutions will not make any difference to the speed of image and video files (on the increase in the Enterprise), or to encrypted data (essential for data security). Similarly, there is a persistent misunderstanding that simply adding bandwidth – more 'oomph' – will improve transfer speeds. It won't, because the bottleneck is latency, not capacity.
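To put some numbers on that: a single TCP stream can never push more than its window size per round trip, so its ceiling is roughly window divided by round-trip time, regardless of how fat the pipe underneath is. Here is a quick back-of-the-envelope sketch of my own (assuming a classic 64 KB TCP window; real stacks can scale windows, but rarely enough for long, fat links):

    # Back-of-the-envelope only: single-stream TCP throughput is capped
    # at roughly window / round-trip time, whatever the link capacity.
    def max_tcp_throughput_gbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e9

    window = 64 * 1024  # assumed classic 64 KB TCP window
    for rtt in (1, 10, 50, 100):  # round-trip times in milliseconds
        print(f"RTT {rtt:>3} ms -> at most {max_tcp_throughput_gbps(window, rtt):.3f} Gb/s")
    # RTT   1 ms -> at most 0.524 Gb/s
    # RTT  10 ms -> at most 0.052 Gb/s
    # RTT  50 ms -> at most 0.010 Gb/s
    # RTT 100 ms -> at most 0.005 Gb/s

At a 100 ms round trip, that single stream tops out around 5 Mb/s – on a 10 Gb/s link, that is a fraction of one per cent of the capacity you paid for, and no amount of extra bandwidth changes it.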

Sadly, all these well-intentioned attempts at achieving faster transfer speeds for massive volumes of data are costing organisations inordinate amounts of money and losing them valuable time.

There are two key things missing from these approaches that, if addressed, would make a huge difference:

1/  There is a misconception that technologies that can help simply do not exist.

Or maybe it's worse than that: maybe you know the technology exists but you are too chicken to blaze a trail because you feel the stakes are too high. Whichever category businesses fall into, here's a thought for you. In a recent Gartner survey of 2,944 CIOs – with over $250bn of spending power between them – evaluating emerging providers ranked as a top-five priority, and 25% say they are already doing so. Hooray for them. But doesn't that mean a shocking 75% are not?

My prediction is that it's the 25% of trail blazers investing in new technologies that will have the competitive advantage now and over the next 5-10 years. Why? Because we have all seen examples of how emerging technology solutions can open up new markets and opportunities. It's the ones that have done the research and are already implementing these new solutions that will be ahead of the curve while the others lag behind.

2/ Think of new ways to optimize the performance of existing infrastructure.

The scary fact is that most organisations' infrastructure is only minimally optimized. Because of the constraints inflicted by latency (not to mention packet loss and congestion), they are accepting the loss of up to 80% of their VERY expensive infrastructure's capacity, simply because they are not looking at the problem in a new way. Throwing money at the same inefficient solutions – or worse, at solutions that simply cannot cope with the increased volumes and data types – is never going to increase the efficiency or optimization of that infrastructure.

In the case of transferring large volumes of data, many enterprises are using the wrong technology for the job and, when they don't get the results, they try more of the same, just with bigger pipes.

I’ve said it before, and I’ll say it again… doing the same thing and expecting different results was Einstein’s definition of madness, right?!

Here's an example: I keep hearing reports that everyone is looking to SD-WANs to try to speed things up. It is yet another example of adding investment to get around the problem of moving data without fixing the inherent issue caused by TCP/IP – latency. Fix the inherent problem first, and your bright and shiny SD-WAN product really would add benefit.

I'm afraid to say that many of you have bought into the marketing spin of the big vendors. Most assume that nobody has really worked out that, to mitigate latency, you have to approach the problem in a different way. You wouldn't put a Band-Aid on a severed artery, would you?

You don’t have to settle for “as good as it gets”!

If you read the tech news and keep up with analyst insight, it is possible to stay ahead by adopting emerging solutions. If this is you, you should be aware that technology is already on the market that can deliver better data-transfer performance from existing infrastructure.

Let me give you a scenario and a solution and then you can assess whether sticking with what you know is really the only option open to businesses experiencing these challenges.

If you are trying to move 15.8 terabytes of data an hour over 40 Gb/s pipes, what would you do? Your first thought may be a traditional WAN optimization provider, though you would probably need 20 boxes on each side (and millions of dollars). As a side note, it is highly unlikely that you would get any kind of performance increase by doing this, especially if media files or encrypted data were involved. Your second thought would probably be lift and shift, entrusting your precious data to a courier for a few days.
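To put that scenario in perspective, here is my own arithmetic (assuming decimal terabytes, and reusing the single-stream TCP ceiling from the earlier sketch):

    # 15.8 TB/hour expressed as a sustained line rate:
    required_gbps = 15.8e12 * 8 / 3600 / 1e9
    print(f"Required: {required_gbps:.1f} Gb/s")    # ~35.1 Gb/s, ~88% of a 40 Gb/s pipe

    # One untuned 64 KB-window TCP stream at a 20 ms round trip manages:
    per_stream_gbps = (64 * 1024 * 8) / 0.020 / 1e9  # ~0.026 Gb/s
    print(f"Streams needed: {required_gbps / per_stream_gbps:,.0f}")  # ~1,339

In other words, the scenario demands nearly nine-tenths of the pipe, sustained, while a naive transfer delivers well under a thousandth of that per stream.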

On the other hand, some of the smarter enterprises have worked out that by applying machine intelligence technology to the problem, they can achieve impressive data transfer speeds and network optimization. For them, this change of approach is making a huge difference to business-critical functions such as Backup and Restore, Replication, Disaster Recovery, Big Data… the list goes on.
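The underlying idea – and this is a simplified sketch of my own, not any vendor's actual implementation – is to stop letting one round trip at a time dictate the transfer rate: run many streams in parallel, so their individual latency-bound caps add up until the pipe itself becomes the limit.

    # Simplified model (my own, not a vendor implementation): N latency-bound
    # streams in aggregate, capped by the link capacity.
    def aggregate_gbps(streams, window_bytes=64 * 1024, rtt_ms=20, pipe_gbps=40):
        per_stream = (window_bytes * 8) / (rtt_ms / 1000) / 1e9
        return min(streams * per_stream, pipe_gbps)

    for n in (1, 64, 512, 1536):
        print(f"{n:>4} streams -> {aggregate_gbps(n):6.2f} Gb/s")
    #    1 streams ->   0.03 Gb/s
    #   64 streams ->   1.68 Gb/s
    #  512 streams ->  13.42 Gb/s
    # 1536 streams ->  40.00 Gb/s

The 'machine intelligence' part, presumably, lies in deciding how many streams to run and adapting that number to congestion and packet loss in real time – exactly what a static configuration cannot do.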

It sounds too good to be true, right? But thousands of IBM Storwize customers are applying this thinking to achieve faster replication, and they are proof that this is not technology of the future – it is available in the here and now.

Solution aside, the most important thing to ask is: given these two scenarios, which option do you think gives enterprises the best chance of mitigating risk?

Is it the option where you hope that buckling infrastructure and latency-crippled network speeds will provide your enterprise with a robust security blanket should disaster strike? Let's be frank: that is operating on a wing and a prayer. You need to decide whether you are one of the 75% ignoring emerging technologies at the cost of competitive advantage and mitigated risk, or whether you are a trail blazer. Because, when you weigh the sizeable risks against the simple option of sourcing and trialling new technologies, I think I know where I would be placing my energies.
