We speak to Data Centre Journal about why banking institutions are considering new technologies instead of investing in new data centers.
July 16, 2018
An international banking firm wanted the ability to back up, restore and recover data from its global data centers. To do so, it assumed it would need to build another data center, one neither too close to nor too far from its existing facilities. But the company already had the facilities it needed on site.
It shouldn’t need to spend $100 billion, $100 million or even $1 on a new data center. Yet with none of its existing data centers operating anywhere near full capacity, what should this firm be doing? As well as focusing on the assets they already have, and on maintaining workflows and collaboration with their main partners, firms are turning to new technologies to securely manage private and sensitive data.
Should a firm build a data center solely for disaster recovery? Often, at least three disaster-recovery sites are necessary, located far apart rather than in the same circle of disruption to ensure business continuity. You may also experience the situation British Airways (BA) recently faced in May 2017 that left thousands of passengers stranded around the world: the company’s network crashed in 70 countries, preventing staff from checking in passengers.
The company blamed the incident on a power failure, thought to have occurred in a London data center. Many questioned whether this explanation was accurate, with the media saying someone had simply flicked the wrong switch. Regardless of the cause, the downtime reportedly affected at least 75,000 passengers.
Consequently, BA lost revenue, incurred greater staffing costs, and faced compensation claims as well as fines from regulators. Rather than continuing to see BA as a quality, trustworthy brand, customers may turn to a competing airline. Aside from the reputational damage, the estimated outage cost of $150 million is why organizations must implement disaster-recovery plans immediately.
The financial-services industry faces the same risks: damaged reputations, dissatisfied customers and lost revenue. In some cases, where a data breach occurs, regulators can issue huge fines. Yet with regular audits of their IT infrastructure, a focus on prevention rather than cure, and investment in backup, storage, disaster recovery and data acceleration, organizations can avert most such calamities. Moreover, if a bank or other financial-services institution fails to audit its data and its IT infrastructure, it will remain in the dark about how to use what it already has efficiently and securely.
With the growing impact of digital transformation and data-management challenges in financial services, the temptation is to buy new technology without considering how the organization can improve the efficiency and utilization of what it already has. WAN optimization, for example, was once the savior of latent data transfer. But with the increase in data volumes, this technology no longer fulfills the requirements to transfer rich, compressed and encrypted data quickly and securely.
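As a rough illustration of why encryption defeats WAN optimization (a generic sketch, not a model of any particular product), compare how well plaintext and ciphertext-like data compress. WAN optimizers depend on finding redundancy in the byte stream; properly encrypted data is statistically indistinguishable from random bytes, so there is almost nothing left to squeeze:

```python
import os
import zlib

# Plaintext business data is highly repetitive and compresses well.
plaintext = b"customer record: account, balance, branch;\n" * 1000

# os.urandom stands in for ciphertext: encrypted output should look
# like random bytes, which is exactly what resists compression.
encrypted_like = os.urandom(len(plaintext))

plain_ratio = len(zlib.compress(plaintext)) / len(plaintext)
cipher_ratio = len(zlib.compress(encrypted_like)) / len(encrypted_like)

print(f"plaintext compresses to {plain_ratio:.1%} of original size")
print(f"ciphertext-like data stays at {cipher_ratio:.1%} of original size")
```

The repetitive plaintext shrinks to a tiny fraction of its size, while the random stand-in for ciphertext does not shrink at all, which is why a dedupe-and-compress appliance adds little once data is encrypted at source.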
Consider investing in technologies that permit you to do more with what you already have. New data-acceleration solutions, such as the Bridgeworks product PORTrockIT, use machine intelligence to mitigate the effects of data-transfer latency. By accelerating data you can make the most of real-time data analysis, making big data analytics timelier and more accurate while greatly improving network performance.
Uptime and User Experience
With the transformation in digital and mobile-banking applications, global firms such as Santander, which holds remote video meetings to arrange mortgages, understand the need to maintain a high level of uptime and a solid user experience. Financial-services organizations must conduct regular research and assessments into how customers are affected by any given scenario. Businesses and consumers will want to change services when they experience ongoing delays or the inability to access accounts with ease, but that doesn’t necessarily mean investing in a new data center is the right way to maintain business and service continuity.
It’s important to understand why organizations believe there’s a need to license the latest technology, and part of this process involves testing their assumptions that new technology will deliver the greatest return on investment. With WAN limitations, however, technology that’s less than fully used is delivering less than the full ROI.
Increased ROI can only be achieved by holding regular audits that support the right choices about technology. Why? They can save time, money and resources. Audits also enable you to make the most of the expertise your organization already has, especially given the skills shortage in areas such as cloud computing and IT security. Audits should also consider whether outsourcing data management is truly the best choice for managing IT infrastructure.
Banks and other financial-services institutions often deal with highly sensitive data, so they may have good reason to keep as much of it in house as possible, perhaps using a private cloud. It is a misconception that moving data to the cloud guarantees its safety; cloud storage is no such guarantee.
Audits must therefore test a financial-service organization’s IT security, and they should include staff training to prevent ransomware and cyberattacks. Equally important is the need to test and plan your backup and disaster recovery to identify weaknesses in your ability to operate seamlessly whenever a data center breakdown (as with BA) or ransomware-like incident occurs.
Know Your Assets
You can get the most from your existing assets by knowing what they are, how much of them is in use and how much capacity remains. Only invest in new technology if it enables greater efficiency of existing infrastructure, which might include a data center, or if it boosts ROI. Data-acceleration solutions, for example, are an innovative technology that allows you to do more with your existing infrastructure by mitigating data and network latency and by reducing packet loss. They let you do more while safeguarding your business, your customers, your brand, your time and your budget.
Building another data center or installing new network infrastructure may sound like a good option. But what problem are you solving? There’s only so much you can do about network latency, given the finite speed of light. Yet data acceleration, supported by machine learning, enables fast transmission of encrypted data, something WAN optimization cannot handle.
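A back-of-the-envelope calculation shows how hard that physical limit bites. The figures below are illustrative assumptions (a roughly 5,600 km transatlantic fibre path, light travelling through glass at about two-thirds of its vacuum speed, and a classic 64 KB TCP window without window scaling), but the arithmetic is general: a single stream can deliver at most one window of data per round trip, no matter how fat the pipe is.

```python
# Illustrative, assumed figures for a London-to-New York link.
DISTANCE_KM = 5600          # approximate fibre path length (assumption)
FIBRE_SPEED_KM_S = 200_000  # speed of light in fibre, roughly 2/3 of c
WINDOW_BYTES = 65_535       # classic TCP window without window scaling

# Best-case round-trip time imposed by physics alone.
rtt_s = 2 * DISTANCE_KM / FIBRE_SPEED_KM_S

# A single TCP stream delivers at most one window per round trip.
throughput_bps = WINDOW_BYTES * 8 / rtt_s

print(f"minimum RTT: {rtt_s * 1000:.0f} ms")
print(f"single-stream ceiling: {throughput_bps / 1e6:.1f} Mbit/s")
```

Under these assumptions the link’s minimum round trip is 56 ms and a single default-window stream tops out near 9 Mbit/s, even on a 10 Gbit/s circuit. The distance term cannot be engineered away; what can change is how the window and the number of concurrent streams are managed to keep the pipe full despite the latency.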
In conclusion, finding a balance between current technologies and new technologies that reduce expenditures while increasing network, data-analytics and storage performance is critical. Following this approach will make your company more competitive, flexible and secure.
About the Author
David Trossell is CEO and CTO of award-winning data-acceleration company Bridgeworks. The company has developed products such as PORTrockIT, which was named the Data Center ICT Networking Product of the Year in May 2018’s DCS Awards.