Four ways organizations can cut through the data noise to improve data center monitoring

GUEST OPINION: Data noise is the bane of the data center industry. It is every technology support team’s nightmare to be constantly bombarded with inaccurate, repetitive or trivial information from monitoring tools, or to regularly receive alerts irrelevant to their job function.

Data center monitoring is essential to ensure quality and continuity of service, but it stops being useful if technology support teams are overwhelmed by irrelevant alerts. Here are four ways organizations can cut through the data noise when monitoring data centers.

1. Monitoring needs to be concise, precise and relevant

Monitoring tools need to be concise and precise, and to lead technology professionals directly to the root causes of issues, regardless of whether those issues trace back to servers, networks or any other item of data center infrastructure.

If a monitoring tool produces a large number of irrelevant alerts, most technical support staff will simply ignore them. This leads to alert fatigue: they stop taking even urgent alerts seriously, and the ones that really matter get buried under the mountain of irrelevant information. These teams are already overloaded with work and stress, and the constant data noise only adds to the pressure they are under.



In addition, data center monitoring must provide the precise data analysts need, when they need it, particularly about urgent issues, and should only sound the alarm over genuine problems or threats rather than insignificant ones.

It’s also important to be able to consolidate multiple alerts about the same issue into one concise alert, for instance where a device or port might be up, then down, then up again, so analysts are not overwhelmed by multiple warnings regarding the same problem.
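
As a rough illustration, the sketch below (in Python, with illustrative device names and a made-up five-minute window) shows one way repeated up/down alerts for the same device or port could be collapsed into a single summary instead of a stream of separate warnings. It is a simplified example, not the behaviour of any particular monitoring product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert records: (timestamp, device, state) tuples, e.g. a port
# flapping up/down. Names and the 5-minute window are illustrative assumptions.
FLAP_WINDOW = timedelta(minutes=5)

def consolidate(alerts):
    """Collapse repeated up/down alerts for the same device into one summary."""
    grouped = defaultdict(list)
    for ts, device, state in alerts:
        grouped[device].append((ts, state))

    summaries = []
    for device, events in grouped.items():
        events.sort()
        first, last = events[0][0], events[-1][0]
        if len(events) > 1 and last - first <= FLAP_WINDOW:
            # Many state changes in a short window -> report one flapping alert
            summaries.append(
                f"{device}: {len(events)} state changes between "
                f"{first:%H:%M:%S} and {last:%H:%M:%S} (flapping suspected)"
            )
        else:
            summaries.extend(f"{device}: {state} at {ts:%H:%M:%S}" for ts, state in events)
    return summaries

alerts = [
    (datetime(2022, 6, 1, 9, 0, 5), "switch-01/port-12", "down"),
    (datetime(2022, 6, 1, 9, 1, 10), "switch-01/port-12", "up"),
    (datetime(2022, 6, 1, 9, 2, 40), "switch-01/port-12", "down"),
]
for line in consolidate(alerts):
    print(line)
```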

2. Use data reporting to enable better service quality

Effective data reporting allows IT professionals to analyze trends in their data centers and decide where more capacity is needed. This makes them more proactive: they can detect problems before they affect customers, avoid downtime and help fulfil service level agreements (SLAs).

Being able to highlight looming issues via this type of trends analysis helps organizations to provide a better quality experience to their data center users and customers. Aligning the performance data with business metrics helps them to identify what really matters and allows them to make informed investment decisions based on the potential business impact.
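
As a simple illustration of this kind of trend reporting, the sketch below (Python, with invented weekly figures) fits a straight line to utilization samples to estimate roughly how many weeks remain before a chosen capacity threshold is reached. Real capacity planning tools use far richer models; this only shows the basic idea.

```python
# A minimal capacity-trend sketch: fit a straight line to weekly utilization
# samples and estimate when a rack, link or storage pool will hit a chosen
# threshold. The figures below are invented for illustration only.

def weeks_until_threshold(samples, threshold=90.0):
    """Least-squares linear fit over (week_index, utilization_percent) samples."""
    n = len(samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # utilization is flat or falling; no capacity action needed yet
    # Weeks from the latest sample until the fitted line crosses the threshold
    return (threshold - intercept) / slope - (n - 1)

utilization = [62.0, 64.5, 66.0, 69.5, 71.0, 74.5]   # six weekly samples, percent
print(f"Estimated weeks until 90% utilization: {weeks_until_threshold(utilization):.1f}")
```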

3. Data center monitoring has to provide fully automated 360-degree visibility

Managing data center infrastructure is a challenging task because it is often extremely complex, frequently featuring a hybrid architecture with multiple data centers and cloud systems, each of which must be monitored individually along with the data paths and connections between them.

Data centers are also highly dynamic and subject to minute-by-minute change: equipment is continually being added or removed, which means hardware then has to be reconfigured.

Interconnections are often reconfigured too, because all the devices are interconnected and those connections can change. For instance, servers might be moved from one switch to another, and end users connected at the access layer of the network may move around.

This means data center monitoring tools must be equally dynamic: able to map all assets in the first place, and then to track changes as and when they occur in order to identify genuine anomalies.
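
To make the idea concrete, here is a minimal Python sketch of the kind of change tracking described above: comparing two asset snapshots to see what was added, removed or moved between switches, so that only genuine anomalies need further attention. The device and switch names are purely illustrative.

```python
# Compare two inventory snapshots (device -> attachment point) and report
# what was added, removed or moved. A real discovery tool would track far
# more attributes; this only demonstrates the diffing idea.

def diff_inventory(previous, current):
    added   = {d: p for d, p in current.items() if d not in previous}
    removed = {d: p for d, p in previous.items() if d not in current}
    moved   = {d: (previous[d], current[d])
               for d in previous.keys() & current.keys()
               if previous[d] != current[d]}
    return added, removed, moved

previous = {"server-a": "switch-01", "server-b": "switch-01", "server-c": "switch-02"}
current  = {"server-a": "switch-01", "server-b": "switch-03", "server-d": "switch-02"}

added, removed, moved = diff_inventory(previous, current)
print("Added:  ", added)    # server-d appeared
print("Removed:", removed)  # server-c disappeared
print("Moved:  ", moved)    # server-b changed switches
```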

To manage this ongoing complexity, organizations should start thinking about the role automation can play in cutting through data noise and in identifying and fixing even the smallest technology issues before they affect users.

4. Understand data center traffic patterns to avoid bottlenecks

Understanding patterns of traffic, hour by hour and week by week, allows a dynamic threshold to be generated for a typical hour’s, day’s or week’s traffic across the data center infrastructure. This enables significant deviations to be highlighted automatically, while still accounting for the fluctuations normally expected over a working day.
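
One simple way to express such a dynamic threshold is sketched below in Python: a per-hour-of-week baseline built from historical samples, with readings flagged when they fall well outside the usual range. The figures and the three-standard-deviation rule are illustrative assumptions, not a prescribed method.

```python
from statistics import mean, stdev

# history maps an hour-of-week index (0-167) to a list of past throughput
# samples for that hour. Data and the 3-sigma rule are illustrative only.

def is_anomalous(history, hour_of_week, reading, sigmas=3.0):
    samples = history.get(hour_of_week, [])
    if len(samples) < 2:
        return False                     # not enough data to judge
    mu, sd = mean(samples), stdev(samples)
    return abs(reading - mu) > sigmas * max(sd, 1e-9)

# This hour has historically seen ~400 Mbps; 950 Mbps should stand out.
history = {9: [380.0, 410.0, 395.0, 405.0, 420.0]}
print(is_anomalous(history, 9, 950.0))   # True  - significant deviation
print(is_anomalous(history, 9, 430.0))   # False - within normal variation
```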

The data behind such an auto-tuning feature can also be queried manually to determine the causes of unusual or unexpected events. Having that information at a data center professional’s fingertips might, for example, reveal a routing issue that can then be fixed, saving the cost of an unnecessary bandwidth upgrade.

The takeaway

The vast amount of data noise bombarding IT teams has been exacerbated by the rapid acceleration of cloud adoption over the past two years, driven by the pivot to remote working during the pandemic.

Organizations really need to make sense of the data noise to avoid flying into adverse operational conditions caused by their data centers. Those in highly regulated industries such as finance and healthcare should make periodic data center risk assessments and disaster testing part of their routine operations.

Risk mitigation with IT infrastructure is a shared responsibility, not just the CIO’s or CTO’s. Organizations need to have an appropriate number of IT staff trained and willing to do what it takes to stay on top of data center operations and make sense of the data that is provided to them by their monitoring system.
