
Why 2018 will be a year of innovation and the ‘cloud on edge’

Bridgeworks CEO David Trossell discusses with Cloud Tech his predictions for cloud innovation in 2018.

January 11, 2018

During much of 2017, many articles predicted the end of cloud computing in favour of edge computing. However, there is also the view that edge computing and cloud computing are extensions of one another. In other words, the two technological models are expected to work together. Cloud computing therefore has plenty of life in it yet.

With the increasing use of artificial intelligence, machine learning, biometric security and sensors to enable everything from connected and autonomous vehicles to facial and iris recognition in smartphones such as Apple’s 10th anniversary iPhone X, questions are also arising about whether Big Brother is taking a step too far into our private lives. Will the increasing use of body-worn video cameras, sensors and biometrics mean that our every daily movement is watched? That’s a distinct possibility, and it will concern many people who prefer to guard their lives like Fort Knox.

Arguably, though, the use of biometrics on smartphones isn’t new: some Android handsets have been using iris recognition for a while now. Yet, with the European Union’s General Data Protection Regulation (GDPR) now less than five months away at the time of writing, the issue of privacy and how to protect personal data is on everyone’s lips. However, for innovation to occur there must sometimes be a trade-off, because some of today’s mobile technologies rely upon location-based services to indicate our whereabouts and determine our proximity to points of interest, while machine learning is deployed to learn our habits and make life easier.

Looking ahead

So, even Santa has been looking at whether innovation will reside in the cloud or at the edge in 2018. He thinks his sleigh might need an upgrade to provide autonomous driving. Nevertheless, he needs to be careful, because Rudolph and his fellow reindeer might not like being replaced by a self-driving sleigh. Yet, to analyse the data and exploit the many opportunities that will arise from autonomous vehicles as time marches on, he thinks much of the data analysis should be conducted at the edge.

By conducting the analysis at the edge, it becomes possible to mitigate some of the effects of latency, and there will be occasions when connected and autonomous vehicles will need to function without any access to the internet or to cloud services. The other factor that is often considered, and why an increasing number of people are arguing that innovation will lie in edge computing, is the fact that the further away your datacentre is located, the more latency and packet loss traditionally tend to increase. Consequently, real-time data analysis becomes impossible to achieve.
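
To put rough numbers on the distance argument, here is a minimal sketch of best-case round-trip times, assuming signal propagation of roughly 200,000 km/s in fibre and ignoring routing, queuing and processing delays; the distances are arbitrary examples rather than figures from any particular deployment.

    # Rough, illustrative estimate of how datacentre distance adds latency.
    # Assumes ~200,000 km/s signal propagation in fibre (about two-thirds of
    # the speed of light) and ignores routing, queuing and processing delays,
    # which add further latency in practice.

    PROPAGATION_KM_PER_S = 200_000

    def min_round_trip_ms(distance_km: float) -> float:
        """Best-case round-trip time in milliseconds for a given one-way distance."""
        return (2 * distance_km / PROPAGATION_KM_PER_S) * 1000

    for km in (10, 500, 3200):  # e.g. roadside edge node, regional DC, cross-continent DC
        print(f"{km:>5} km -> at least {min_round_trip_ms(km):.1f} ms round trip")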

Foggy times

However, the myriad initiatives that have emerged over the past few years to connect devices together – edge computing, fog computing and cloud computing among them – have created much confusion. They are often hard to understand if you are looking at the IT world from the outside. You could therefore say we live in foggy times, because new terms are being bounced around that often relate to old technologies given a new badge to enable future commercialisation.

I’ve nevertheless no doubt that autonomous vehicles, personalised location-aware advertising and personalised drugs – to name but a few innovations – are going to radically change the way organisations and individuals generate and collect data, the volumes of data we collect, and how we crunch this data. Without doubt, they will also have implications for data privacy. The received wisdom, when faced with vast new amounts of data to store and crunch, is therefore to run it from the cloud. Yet that may not be the best solution. Organisations should therefore consider all the possibilities out there in the market – and some of them may not emanate from the large vendors, as smaller companies are often touted as the better innovators.

Autonomous cars

According to Hitachi, autonomous cars will create around 2 petabytes of data a day, while connected cars are expected to create around 25 gigabytes of data per hour. Now, consider that there are currently about 800 million or more cars in the USA, China and Europe. So, if there were 1 billion cars in the near future, with about half of them fully connected and each used for an average of 3 hours per day, around 37,500,000,000 gigabytes of data – roughly 37.5 exabytes – would be created every day.
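
For anyone who wants to check that figure, the same back-of-envelope calculation is written out below; the inputs are the assumptions quoted above, not measurements.

    # Reproducing the back-of-envelope estimate above. The inputs (25 GB/hour
    # per connected car, 1 billion cars, half connected, 3 hours of driving a
    # day) are the article's assumptions, not measurements.

    cars = 1_000_000_000
    connected_share = 0.5
    hours_per_day = 3
    gb_per_hour = 25

    gb_per_day = cars * connected_share * hours_per_day * gb_per_hour
    print(f"{gb_per_day:,.0f} GB per day")              # 37,500,000,000 GB per day
    print(f"~{gb_per_day / 1e9:.1f} exabytes per day")  # ~37.5 EB per day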

If, as expected, most new cars will be autonomous by the mid-2020s, even that number will start to look insignificant. Clearly, not all of that data can be shipped back to the cloud instantaneously without some level of data verification and reduction. There must be a compromise, and that’s what edge computing can offer in support of technologies such as autonomous vehicles.
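
As a purely hypothetical illustration of what verification and reduction at the edge might look like, the sketch below verifies a batch of sensor readings, forwards anomalies in full and sends only a summary of the rest; the data format, field names and threshold are invented for the example.

    # Hypothetical sketch of edge-side verification and reduction before
    # anything is sent to the cloud. The reading format, checksum flag, score
    # field and threshold are invented for illustration, not taken from a real
    # vehicle platform.

    from statistics import mean

    def reduce_at_edge(readings, anomaly_threshold=0.95):
        """Verify readings, keep anomalies verbatim, summarise the rest."""
        valid = [r for r in readings if r.get("checksum_ok")]          # verification
        anomalies = [r for r in valid if r["score"] > anomaly_threshold]
        summary = {
            "count": len(valid),
            "mean_score": mean(r["score"] for r in valid) if valid else None,
        }
        return {"summary": summary, "anomalies": anomalies}            # upload only this

    batch = [
        {"checksum_ok": True, "score": 0.20},
        {"checksum_ok": True, "score": 0.97},   # anomalous: forwarded in full
        {"checksum_ok": False, "score": 0.50},  # failed verification: dropped
    ]
    print(reduce_at_edge(batch))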

Storing the ever-increasing amount of data is going to be a challenge from a physical perspective. Data size does matter, of course, and with it comes the financial question of cost per gigabyte. For example, while electric vehicles are being touted as the flavour of the future, power consumption is bound to increase. So too will the need to ensure that personal or device-created data doesn’t fall foul of data protection legislation.

Data acceleration

Yet much of the data from connected and autonomous vehicles will still need to be transmitted to a cloud service for deeper analysis, back-up, storage and data-sharing with an ecosystem of partners, from vehicle manufacturers to insurers, and some of that data needs to flow to and from the vehicles themselves. In this case, to mitigate the effects of network and data latency, there may be a need for data acceleration with solutions such as PORTrockIT.

Unlike edge computing, where data is analysed close to its source, data acceleration permits the back-up, storage and analysis of data at speed and at distance, using machine learning and parallelisation to mitigate packet loss and latency. By accelerating data in this way, it becomes possible to alleviate much of the pain that organisations feel. CVS Healthcare is but one organisation that has seen the benefits of taking such an innovative approach.
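
Before looking at the CVS example in detail, a generic sketch of the parallelisation idea may help: splitting a transfer across several concurrent streams so that no single connection’s latency and loss ceiling caps overall throughput. This is not PORTrockIT’s implementation, which isn’t described here; the chunk size and the upload_chunk() helper are hypothetical placeholders.

    # Generic illustration of parallelising a transfer across several streams
    # so that a single connection's latency/loss ceiling no longer caps overall
    # throughput. NOT a description of PORTrockIT's implementation;
    # upload_chunk(), the endpoint and the chunk size are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk (arbitrary)

    def read_chunks(path):
        """Yield (index, bytes) chunks of the file to be transferred."""
        with open(path, "rb") as f:
            index = 0
            while chunk := f.read(CHUNK_SIZE):
                yield index, chunk
                index += 1

    def upload_chunk(index, data):
        """Placeholder: send one chunk over its own connection, return an ack."""
        return index, len(data)

    def parallel_upload(path, streams=8):
        """Push chunks through a pool of concurrent streams."""
        with ThreadPoolExecutor(max_workers=streams) as pool:
            futures = [pool.submit(upload_chunk, i, c) for i, c in read_chunks(path)]
            return sorted(f.result() for f in futures)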

The company’s issues were as follows:

- back-up RPO and RTO concerns
- 86ms of latency over the network (>2,000 miles)
- 1% packet loss
- a 430GB daily back-up that never completed across the WAN
- a 50GB incremental back-up taking 12 hours to complete
- operating outside its RTO SLA – an unacceptable commercial risk
- an OC12 pipe (around 622Mbit per second)
- excess Iron Mountain costs

To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, the original 50GB back-up was reduced from 12 hours to 45 minutes – a 94% reduction in back-up time. This enabled the organisation to complete its daily back-ups, totalling 430GB, in less than 4 hours per day. And in the face of a calamity, it could perform a complete disaster recovery in less than 5 hours.
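
As a quick sanity check, the quoted figures hold together arithmetically:

    # Sanity-checking the quoted improvements, using only the figures above.
    baseline_minutes = 12 * 60          # 50GB incremental: 12 hours before
    accelerated_minutes = 45            # 45 minutes after
    reduction = 1 - accelerated_minutes / baseline_minutes
    print(f"back-up time reduction: {reduction:.1%}")     # 93.8%, i.e. roughly 94%

    daily_gb, window_hours = 430, 4     # full 430GB daily back-up in under 4 hours
    required_mbit_s = daily_gb * 8 * 1000 / (window_hours * 3600)
    print(f"sustained rate needed: {required_mbit_s:.0f} Mbit/s")  # ~239 Mbit/s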

Amongst other things, the annual cost-savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, and so it will most probably need to roll this solution out across both merging entities.

Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and back-up performance.
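
A common rule of thumb for why this happens is the Mathis et al. approximation of a single TCP stream’s throughput ceiling; applying it to the latency and loss figures quoted for CVS above is illustrative only, and says nothing about any specific product’s behaviour.

    # Why latency and loss crush single-stream throughput: the Mathis et al.
    # approximation, rate ~ MSS / (RTT * sqrt(loss)), applied to the CVS
    # figures above. Illustrative only; it describes plain TCP, not any
    # specific product.

    from math import sqrt

    def mathis_throughput_bps(mss_bytes=1460, rtt_s=0.086, loss=0.01):
        """Approximate ceiling for one TCP stream, in bits per second."""
        return (mss_bytes * 8) / (rtt_s * sqrt(loss))

    rate = mathis_throughput_bps()          # 86 ms RTT, 1% packet loss
    print(f"~{rate / 1e6:.2f} Mbit/s")      # ~1.36 Mbit/s on a 622 Mbit/s OC12 link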

Data value

Moving away from healthcare and autonomous vehicles and back to GDPR, the trouble is that there are too many organisations that collate, store and archive data without knowing its true value. Jim McGann, VP Marketing & Business Development at Index Engines, says most organisations find it hard to locate personal data on their systems or in paper records.

This issue makes it impossible to know whether the data can be kept, modified, deleted permanently or rectified – making it harder to comply with GDPR, and I would argue it also makes it harder to know whether the data can be used to legitimately drive innovation. So, instead of being able to budget for innovation, organisations in this situation may find that they need to spend a significant amount of money on fines rather than on developing themselves.

He explains: “Much of this is very sensitive and so many companies don’t like to talk on the record about this, but we do a lot of work with legal advisory firms to enable organisations with their compliance.” Index Engines, for example, completed some work with a Fortune 500 electronics manufacturer that found that 40% of its data no longer contained any business value. So, the company decided to purge it from its datacentre.

Limited edge

Organisations are therefore going to need infrastructure that provides a limited level of data computation and data sieving at the edge – perhaps in expanded base stations – with the reduced data then shipped back to, and retrieved from, the cloud. This may, for example, involve a hybrid cloud-edge infrastructure. Does this solve everything? Not quite! Some fundamental problems remain, such as the need to think about how to move vast amounts of data around the world – especially if it contains personal encrypted data.

More to the point, for innovation to lie anywhere, it’s going to continue to be crucial to consider how to get data to users at the right time, and to plan now how to store the data well into the future.
