January is a time for new beginnings, yet it can be a challenging time too. For many companies it’s a moment to reflect on what was achieved in the previous year, what could have been done better, and what goals and objectives should guide the forthcoming months if they are to hit their financial targets and grow.
There is also no doubt that the world economy is reacting to current political tensions. With this in mind, when deciding budgets and where to spend them, the need to maximise the return on investment (ROI) must be taken into consideration.
By conducting financial, sales, marketing and technology audits, it should be possible to ensure that no money is spent without a clear business case: no procurement decision should be based on the premise that there is a budget that must be spent, for example, on new technology and infrastructure. Furthermore, with the right strategies in place – including the right investments in the technology that helps the organisation achieve its ambitions – it’s possible to free up New Year budgets for the areas that really add value to the business.
The role of data
To establish its strategies for 2019, an organisation can turn to the data held in its customer relationship management (CRM) system, and in a whole host of other databases, to complete some data and trend analysis. This might involve big data analysis, examining data stored in different places to gain a bigger picture of what could happen throughout the year.
Organisations may also need to adjust their budgets and financial targets to realise their corporate, sales and marketing strategies. Whether to invest in technology – from datacentres to WAN infrastructure and cloud computing – might also need to form part of this audit. Cloud, for example, continues to dominate IT spending, with IDC reporting that cloud infrastructure spending exceeded traditional IT infrastructure expenditure in Q3 2018.
As organisations look to move their applications – if not their whole IT infrastructure – to the cloud, it’s time to look at the movement towards software-defined products and how they can influence spending. The trouble is that not every decision to buy and implement new technology will resolve the issues an organisation has been dealing with or is currently facing.
Mistakes can therefore be made during the auditing process, because performance improvements can often be achieved with an organisation’s existing technology and infrastructure. For example, wide-area networks (WANs) are often inhibited by latency and packet loss.
So don’t rush to buy the latest and greatest WAN optimisation solution, or larger pipes to increase bandwidth, in the expectation that one or both will improve network performance. Don’t fall into the trap of being led by the large original equipment manufacturers (OEMs), either. Their marketing copy will often entice people to buy their latest technology, but that new tech may in reality offer no actual benefit over what’s already in place.
Bandwidth is very rarely the cause of poor WAN performance; the real culprits are latency and packet loss. It’s therefore surprising that their effects are so rarely considered at the planning stage. In fact, they are often only recognised as factors once an organisation has thrown more bandwidth at the problem and found that performance improved only marginally – additional expense for little or no gain.
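To see why more bandwidth doesn’t help, consider the widely used Mathis et al. approximation, which caps a single TCP stream’s throughput at roughly MSS/(RTT × √p). The sketch below uses purely illustrative figures for a high-latency, lossy link:

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate single-stream TCP throughput ceiling (Mathis model), in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_second = (mss_bytes / rtt_s) / math.sqrt(loss_rate)
    return bytes_per_second * 8 / 1_000_000

# Illustrative transatlantic link: 1460-byte segments, 80 ms RTT, 0.1% packet loss.
cap = tcp_throughput_mbps(1460, 80.0, 0.001)
print(f"Single-stream ceiling: {cap:.1f} Mbit/s")  # roughly 4.6 Mbit/s
```

On these figures, even a 1 Gbit/s circuit delivers under 5 Mbit/s to a single TCP stream – so buying a bigger pipe would change almost nothing, while halving the round-trip time or the loss rate would help substantially.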
WAN optimisation pitfalls
Often the next port of call in fixing WAN performance is to add WAN optimisation to the picture. WAN optimisation is a great tool for improving the flow of small, transactional data that can be compressed. The problem is that if the data is already compressed, deduplicated or encrypted, IT could find itself having to explain why this, once again, didn’t improve data performance across the WAN.
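The point about pre-compressed or encrypted traffic is easy to demonstrate. In this minimal sketch, random bytes stand in for an encrypted or already-compressed payload, and Python’s zlib plays the role of the optimiser’s compression stage:

```python
import os
import zlib

text = b"quarterly sales report " * 1000   # highly repetitive, compressible payload
random_like = os.urandom(len(text))        # stand-in for encrypted/pre-compressed data

ratio_text = len(zlib.compress(text)) / len(text)
ratio_rand = len(zlib.compress(random_like)) / len(random_like)

print(f"repetitive data compresses to {ratio_text:.1%} of original size")
print(f"random-looking data compresses to {ratio_rand:.1%} of original size")
```

The repetitive payload shrinks dramatically; the random-looking one doesn’t shrink at all (it can even grow slightly from format overhead), which is exactly why compression-based optimisation shows no gain on such traffic.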
However, SD-WAN technology can combine multiple low-cost network connections to deliver improved bandwidth without the cost of traditional high-performance dedicated links. As a result, it offers a very cost-effective way to increase WAN capacity.
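The economics of that bonding model can be sketched with some purely illustrative figures (actual prices vary widely by region and provider):

```python
# Hypothetical cost comparison: bonding commodity circuits via SD-WAN
# versus a single dedicated leased line. All figures are illustrative.
links = [
    {"name": "broadband A", "mbps": 200, "monthly_cost": 60},
    {"name": "broadband B", "mbps": 200, "monthly_cost": 60},
    {"name": "4G backup",   "mbps": 50,  "monthly_cost": 40},
]

bonded_mbps = sum(link["mbps"] for link in links)          # aggregate capacity
bonded_cost = sum(link["monthly_cost"] for link in links)  # combined monthly cost

leased_line = {"mbps": 400, "monthly_cost": 500}  # illustrative dedicated link

print(f"SD-WAN bundle: {bonded_mbps} Mbit/s for {bonded_cost}/month")
print(f"Leased line:   {leased_line['mbps']} Mbit/s for {leased_line['monthly_cost']}/month")
```

On these assumed prices, the bonded bundle offers more aggregate capacity for a fraction of the monthly cost – though, as the article notes, raw capacity alone does not fix latency or packet loss.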
Yet WAN optimisation, and even SD-WANs, could cost you money unnecessarily – money that could be saved and strategically spent elsewhere. SD-WANs are a good piece of technology, but they may need to be further enhanced with a WAN data acceleration overlay to improve WAN data performance. And for moving large volumes of bulk data over long distances at speed, WAN optimisation tools may simply be unable to cope, so they’re often not the answer.
However, going back to the original planning, WAN capacity can be sized correctly – but that assumes a perfect network in situ, and the gremlin in the works is that life is rarely that simple. To fix WAN performance issues, there’s a need to look at a new breed of WAN tools: WAN data acceleration solutions fit into that category.
They take a different approach from WAN optimisation products in the way they resolve poor WAN performance, irrespective of the data being transferred. They use parallelisation techniques, along with artificial intelligence, to mitigate the effects of latency and packet loss and so increase WAN performance – to close to 95 per cent of the theoretical bandwidth.
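A back-of-the-envelope sketch of the parallelisation idea, assuming for illustration a latency-bound ceiling of 5 Mbit/s per stream on a 1 Gbit/s link (the principle, not any vendor’s actual implementation):

```python
def aggregate_mbps(per_stream_mbps: float, streams: int, link_mbps: float) -> float:
    """Combined throughput of N concurrent streams, capped by the link's capacity."""
    return min(per_stream_mbps * streams, link_mbps)

PER_STREAM = 5.0   # Mbit/s: illustrative latency-bound single-stream ceiling
LINK = 1000.0      # Mbit/s: the link's rated bandwidth

for n in (1, 50, 190, 250):
    total = aggregate_mbps(PER_STREAM, n, LINK)
    print(f"{n:4d} streams -> {total:6.1f} Mbit/s ({total / LINK:.0%} of link)")
```

One stream uses a fraction of the pipe; at around 190 streams the combined flow reaches 95 per cent of the rated bandwidth, after which the link itself becomes the bottleneck.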
Don’t be misled
Be aware, too, that OEMs make people believe their organisations must buy their equipment to gain a competitive advantage. People do believe the hype, and with momentum that technology becomes the norm; the market puts blinkers on us and persuades us that this is the only way to resolve an issue. This leads organisations to reject newer, more innovative alternatives.
Obviously, the tech product must do what it says on the tin. This must be evidenced, so there is a need to define a proof of concept (POC) and its outcomes, just as with any other product. If it is a “me too” product that provides the same level of performance, then go with the big vendors. However, if the new technology offers a different approach to a problem, with markedly improved functionality and performance, then there is still a need to weigh up the pros and cons.
Small vendor benefits
With a small vendor, such as Bridgeworks, customers are not just a number in the pipeline forecast; each customer is a key customer. Even the CEO of a small vendor will know each customer, and a good one will offer customers a direct line as part of the support package, giving direct contact for technical assistance or customer service.
The other advantage of working with a small vendor is the ability to steer product development, which can in itself offer organisations a competitive edge. The customers of larger vendors have limited capacity to influence the direction of product development for any given technology; typically, they use the existing technology the large vendors offer. There also won’t be much publicly available information about what the large vendors’ customers are doing, because those customers will have had to sign non-disclosure agreements to preserve their competitive edge against other firms.
Top tips for success
With all of this in mind, here are my top tips for managing the year ahead with a new budget and perhaps with new technologies, or even with existing ones:
- Plan and test all aspects of the infrastructure. Don’t assume it will work the way you expect.
- Don’t throw the baby out with the bathwater – software-defined products can enhance existing infrastructure or lower costs.
- Test with real life data across networks, especially WANs.
- Have a look around the web and some of the nimble consultancy groups for new technologies.
- Look at some of the smaller vendors and resellers that specialise in a certain area – they sometimes have their ear to the ground on new products coming through.
Looking ahead, 2019 is going to be a real mixed bag of technologies: some will succeed; some will continue through the hype cycle, offering much promise while still finding a worthwhile role; and others are beginning to gain credibility by becoming part of the solution.
The cloud will be on just about everyone’s agenda. The planning and evaluation phase of any cloud migration tends to focus on storage requirements, along with computing performance, security and scalability, because these define the key elements of any infrastructure. However, the element so often overlooked is the connection to the cloud.
This is particularly true when organisations migrate to the cloud, or when they implement a hybrid cloud strategy – the latter is becoming the preferred model for many organisations. Typically, when faced with poor performance over the WAN, the reaction is to throw more bandwidth at the problem. Let’s face it: when confronted with poor application performance, who hasn’t thrown more CPUs, memory or storage at it? Sometimes that does solve the problem, but a New Year, a new budget and new technology may require more careful scrutiny to get it all right.
David Trossell, CEO and CTO, Bridgeworks
Image source: Shutterstock/violetkaipa