Why Do Legacy and Cloud Mix Well?

Mar 19, 2018

Our CEO David Trossell speaks to Data Center Journal about the mix of legacy systems and the cloud.

Scott Jeschonek, Director of Cloud Solutions at Avere Systems, thinks that although oil and water don’t mix, legacy and cloud do. Despite the hype about moving applications to the cloud and about turning legacy applications into cloud natives, he finds that legacy systems are alive and well, and he believes they aren’t going anywhere anytime soon: “Though the cloud promises the cost savings and scalability that businesses are eager to adopt, many organizations are not yet ready to let go of existing applications that required massive investments and have become essential to their workflows.”

Complexity and Challenges

Jeschonek adds that rewriting mission-critical applications for the cloud is often inefficient: the process is both lengthy and expensive. Unexpected issues can also arise from moving applications to the cloud, and they will vary from one firm to the next. At the top of the list is the challenge of latency. “Existing applications need fast data access, and with storage infrastructure growing in size and complexity, latency increases as the apps get farther away from the data. If there isn’t a total commitment to moving all data to the cloud, then latency is a guarantee,” he writes.

He mentions other challenges in his DatacenterDynamics article “Unlike Oil and Water, Legacy and Cloud Can Mix Well,” including mismatched protocols and the amount of time required to rewrite software applications to conform to cloud standards. With regard to mismatched protocols, he says legacy applications typically employ standard protocols such as NFS and SMB for network-attached storage (NAS). These are “incompatible with object storage, the architecture most commonly used in the cloud.” To many, this fact makes moving to the cloud a daunting prospect, but it needn’t be.
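To make the mismatch concrete, here is a minimal sketch, assuming Python and the boto3 library against an S3-compatible object store, of how the same data is reached in each world: a legacy application reads through a path that an NFS or SMB mount exposes as an ordinary file, whereas an object store is addressed by bucket and key over an HTTP API. The path, bucket and key below are hypothetical.

    import boto3

    # Legacy path: an NFS/SMB mount presents the data as an ordinary file.
    with open("/mnt/nas/reports/q1.csv", "rb") as f:
        data = f.read()

    # Object-storage path: the same bytes are fetched by bucket and key
    # through an S3-style API rather than a mounted filesystem.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="example-archive", Key="reports/q1.csv")
    data = obj["Body"].read()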

Benefits of Familiarity

The ideal situation would be to keep the familiarity of legacy applications and the time efficiencies that come with it. Few professionals are keen to embrace new technologies at the expense of these benefits, not least because they often have limited experience with the newer tools. So in many respects, using the existing applications makes sense. If they aren’t broken, then why fix or replace them? Unless there’s a dire need to replace existing infrastructure and applications, there’s no reason to buy the latest technology.

“That said, nothing is stopping you from moving applications to the cloud as-is,” Jeschonek says before adding, “While enterprises may still choose to develop a plan that includes modernization, you can gradually and non-disruptively move key application stacks while preserving your existing workflow.”

To avoid the time and money spent rewriting applications, he recommends cloud-bursting. Doing so can involve hybrid-cloud technologies, which permit legacy applications to run on servers “with their original protocols while communicating with the cloud.” Often, an application programming interface (API) connects the two for this purpose.

Cloud-Bursting

Cloud-bursting solutions can let legacy applications “run in the data center or in a remote location while letting them use the public cloud compute resources as needed,” Jeschonek says. Most of the data can remain on premises, too. This approach reduces risk and minimizes the need to move large files, saving time. It also makes life easier for IT and enables the organization to reduce time to market. Because the cloud follows a utility model, organizations pay only for what they use, which lets them focus on their core business while gaining financial agility.
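As a rough illustration of the cloud-bursting idea rather than any particular vendor’s API, the sketch below keeps jobs on premises while local capacity lasts and sends only the overflow to public cloud compute. The slot count and both dispatch functions are hypothetical stand-ins for whatever scheduler and cloud service an organization actually uses.

    ON_PREM_SLOTS = 8  # assumed local compute capacity

    def run_on_premises(job):
        print(f"{job}: running in the data center with its original protocols")

    def run_in_cloud(job):
        print(f"{job}: burst to public cloud compute, paid for per use")

    def dispatch(jobs):
        # Fill local capacity first; only the overflow bursts to the cloud.
        local, burst = jobs[:ON_PREM_SLOTS], jobs[ON_PREM_SLOTS:]
        for job in local:
            run_on_premises(job)
        for job in burst:
            run_in_cloud(job)

    dispatch([f"render-{i}" for i in range(12)])  # 8 stay local, 4 burst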

Cloud storage can also back up data. Backing up is like an insurance policy: it seems an unnecessary expense until you need it, and the cost of downtime, as British Airways recently discovered, can be far higher. In a Telegraph article on May 29, 2017 (“Devotion to Cost-Cutting ‘In the DNA’ at British Airways”), Bradley Gerrard wrote, “The financial cost of the power outage is set to cost the airline more than £100m, according to some estimates. Mr Wheeldon expected it to hit £120m, and he suggested there could also be a ‘reputational cost.’” Some experts claimed that human error was behind the downtime.

Cloud Backups

Cloud backup is a necessity. “By using cloud snapshots or more comprehensive recovery from a mirrored copy stored in a remote cloud or private object location, the needed data is accessible and recoverable while using less expensive object storage options,” claims Jeschonek. He therefore thinks companies can use a mixture of legacy systems and cloud solutions to save time and money. The cloud offers additional backup benefits too.
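As a minimal sketch of the mirrored-copy idea, assuming Python with boto3 and an S3-compatible object store, the snippet below pushes a local backup image into a lower-cost storage class. The bucket name, key and file path are hypothetical.

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        Filename="/backups/app-db-2018-03-19.dump",   # local backup image
        Bucket="example-dr-mirror",                   # remote object bucket
        Key="app-db/2018-03-19.dump",
        ExtraArgs={"StorageClass": "STANDARD_IA"},    # cheaper object tier
    )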

An article that appears on the Hewlett Packard Enterprise (HPE) website, “Cloud Control: 4 Steps to Find Your Company’s IT Balance,” talks about the findings of a 451 Research report entitled Best Practices for Workload Placement in a Hybrid IT Environment. This report found that companies must account for cost, business conditions, security, regulation and compliance. It also notes that 61% of the respondents anticipated “spending less on hardware because they are shifting from traditional to on-premises clouds.” It adds that organizations are consequently cutting their spending on servers, storage and networking.

Curt Hopkins, a staff writer for HPE Magazine and the author of the article, agrees that companies can incur huge costs when moving non-cloud infrastructure to the cloud: “If you go with the public cloud, you will need to find a provider whose costs are affordable.” He adds that if you wish to “create your own private cloud, the cost of the servers on which to run it is not inconsequential.”

With old workloads you may also have to plow through years of accumulated documentation. So before you move anything to the cloud, he advises you to carry out a total-cost-of-ownership (TCO) assessment. It will require you to factor in capital and operational costs, as well as training and personnel considerations.
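A TCO assessment of that kind can be as simple as totting up both sides over the same period. The figures in the sketch below are invented purely to show the arithmetic; they are not benchmarks for any real migration.

    YEARS = 3  # comparison period

    on_premises = {
        "capital (servers, storage, networking)": 250_000,
        "operations (power, space, support)": 60_000 * YEARS,
        "staff and training": 40_000 * YEARS,
    }
    public_cloud = {
        "migration and application rework": 120_000,
        "usage fees": 90_000 * YEARS,
        "staff and training": 30_000 * YEARS,
    }

    for name, costs in (("on-premises", on_premises), ("public cloud", public_cloud)):
        print(f"{name}: £{sum(costs.values()):,} over {YEARS} years")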

Finding Balance

Hopkins is right to suggest that it’s all about finding the right IT balance, and hybrid IT, in the form of hybrid cloud, is the most appropriate way to achieve it. “Finding your IT balance is not a zero-sum game; you don’t have to choose legacy IT, public cloud or private cloud…you can mix those options based on your workloads,” he says. To find the right balance, he stresses the need to undertake a cost-benefit analysis, and he thinks balance is to be found “in the interplay between your primary business considerations.” In practice, that means evaluating your costs, security, agility and compliance together to get a complete picture of the trade-offs.

A report by Deloitte, Cloud and Infrastructure: How Much PaaS Can You Really Use?, argues that the past was about technology stacks. It says the present situation favors infrastructure as a service (IaaS), but the future is about platform as a service (PaaS). It argues that PaaS is the future because many organizations are creating a new generation of custom applications. The main focus therefore seems to be on software developers and the ability of organizations to manage risk in new development projects.

The report says, “The widening gap between end user devices, data mobility, cloud services, and back office legacy systems can challenge the IT executive to manage and maintain technology in a complex array of delivery capabilities. From mobile apps to mainframe MIPs, and from in-house servers to sourced vendor services, managing this broad range requires a view on how much can change by when, an appropriate operating model, and a balanced perspective on what should be developed and controlled, and what needs to be monitored and governed.” Unfortunately, it makes no mention of whether any cloud model is right for managing legacy applications.

A Good Mix

The cloud can nevertheless mix well with legacy applications, but you should also consider what your organization can do with its existing infrastructure. Cloud backup is advisable, but bear in mind that increasing your network bandwidth won’t necessarily mitigate the effects of latency, and rationalizing your networking costs by reducing network performance certainly won’t either.
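A back-of-the-envelope calculation shows why bandwidth alone doesn’t help: a single TCP stream can move no more than its window size per round trip, so on a long-distance link the latency, not the pipe, sets the ceiling. The window and round-trip figures below are assumptions chosen for illustration.

    WINDOW_BYTES = 64 * 1024   # a classic 64 KB TCP window
    RTT_SECONDS = 0.08         # 80 ms round trip, e.g. a transatlantic link

    ceiling = WINDOW_BYTES / RTT_SECONDS            # bytes per second
    print(f"Per-stream ceiling: {ceiling * 8 / 1e6:.1f} Mbit/s")
    # Roughly 6.6 Mbit/s, however fast the WAN link itself is rated.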

With machine learning, for example, it becomes possible to offer data acceleration with a product such as PORTrockIT. Machine intelligence can mitigate the effects of data and network latency in a way that WAN optimization cannot. Cloud backups, as well as legacy and cloud applications that interconnect with each other, can then work more efficiently and suffer less from latency.

More to the point, although this technology is innovative, it enables you to maintain your existing infrastructure and thereby reduce costs. With respect to disaster recovery, a data-acceleration tool can shorten your recovery time so that you can keep operating whenever disaster strikes. Where data traditionally had to reside close together to minimize latency, a machine-learning data-acceleration solution allows you to place your cloud-based disaster-recovery sites far apart from each other to ensure business and service continuity. So it’s worth investing in your legacy applications, in hybrid cloud and in a data-acceleration solution. Unlike oil and water, they combine into a mix that will save you time and money.
