Statistics have a hard time keeping up with reality, especially when it comes to a field as fast-changing as cybersecurity. But there’s little doubt that data breaches are proliferating at a head-spinning pace.
In the U.S., the Identity Theft Resource Center (ITRC) tracked 6,467 distinct data breaches between 2005 and September 2016, resulting in the exposure of more than 879 million records containing personal identifying information. Meanwhile in the U.K., the number of security breaches reported to the Information Commissioner’s Office roughly doubled between 2015 and 2016.
In response, IT security professionals have adopted a wide assortment of new strategies and techniques to better secure their IT systems and data. These approaches range from augmenting the traditional security stack with next-generation tools for detecting and responding to cyberthreats, to evaluating new “autonomous” security systems that promise to use AI and machine-learning processes – rather than human intervention – to protect networks and data.
But every change, of course, can bring some unintended consequences. A by-product of beefing up the arsenal of tools that are capable of monitoring and reporting on every aspect of networks, devices, data, and user behavior is that the number of security alerts is growing, too. As one UK-based security-firm CEO observes:
The root of the problem is that organisations are under such an intense barrage of cyber activity that threat alerts, many of which turn out to be benign, are overwhelming cybersecurity teams. There is simply too much data to analyse and verify manually.
As the number of IT security tools increases, so does the number of false positives, or false alarms. Each one demands time and attention from IT and security professionals, who must investigate the alert before they can rule it out as a genuine threat.
The worst effect of all those benign or false-positive alerts, though, may be simply to distract attention from the one alert that doesn’t get noticed – which turns out to be a real security threat.
What should an organization do when the volume of IT alerts being generated keeps multiplying, while the resources (people, time, attention, and energy) available to evaluate potential threats are finite?
Part of the answer lies in your choice of security solutions. Two key qualities to look for are:
1) Alert detection that is dynamically calibrated to your actual network. In other words, detection based on a real-world, granular understanding of your organization’s network, devices, data, and users, as opposed to an abstract model of likely configurations, “average” users, and static threat lists.
2) The capacity to detect and report actual changes in device and user behavior, backed by a high-confidence model of what normal behavior looks like. Confidence is what keeps false alarms to a minimum; without it, a tool simply dumps everything it knows about every device on the user and leaves the real detection work as an exercise, which only adds to the information overload.
Observable’s Dynamic Endpoint Modeling embodies both of these qualities. Dynamic Endpoint Modeling begins by building a concrete representation of each device’s normal behavior. If a device begins to act in an unexpected way (i.e., deviates from its model), the service generates a real-time, device-specific alert, enabling IT or security staff to respond quickly.
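To make the idea concrete, here is a minimal sketch of baseline-driven anomaly alerting in Python. This is not Observable’s actual algorithm; it is an illustration of the general technique. It learns a per-device baseline of some observed metric (say, outbound bytes per hour) using Welford’s running mean and variance, and it only raises an alert once enough history exists to be confident in “normal” — the property that keeps false alarms down. All names (`EndpointMonitor`, `DeviceBaseline`, the thresholds) are invented for this example.

```python
from collections import defaultdict
from dataclasses import dataclass
import math

@dataclass
class DeviceBaseline:
    """Running statistics for one device (Welford's online algorithm)."""
    count: int = 0
    mean: float = 0.0
    m2: float = 0.0  # running sum of squared deviations

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        return math.sqrt(self.m2 / (self.count - 1)) if self.count > 1 else 0.0

class EndpointMonitor:
    def __init__(self, min_samples: int = 30, z_threshold: float = 4.0):
        self.baselines = defaultdict(DeviceBaseline)
        self.min_samples = min_samples   # history needed before alerting
        self.z_threshold = z_threshold   # how far from normal counts as anomalous

    def observe(self, device_id: str, value: float):
        """Return an alert dict if `value` deviates from the device's learned
        baseline; otherwise fold it into the baseline and return None."""
        b = self.baselines[device_id]
        # Only alert once we have enough samples to trust the baseline.
        if b.count >= self.min_samples and b.std > 0:
            z = abs(value - b.mean) / b.std
            if z > self.z_threshold:
                # Anomalous values are NOT folded into the baseline,
                # so an attack can't quietly become the new "normal".
                return {"device": device_id, "value": value,
                        "baseline_mean": b.mean, "z_score": round(z, 1)}
        b.update(value)
        return None
```

In use, a hundred ordinary readings produce no alerts at all, while a sudden spike on the same device is reported immediately and with device-specific context (the baseline mean and the deviation score) rather than as a generic alarm. That design choice — per-device baselines plus a confidence gate before any alert fires — is what separates the approach from static, one-size-fits-all threat lists.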
This approach delivers two important advantages, compared with existing security-alerting solutions. It provides valuable, actionable intelligence with each alert; and it drastically reduces false alarms. For IT and security professionals at risk of feeling overwhelmed, that may be a welcome change of pace.
Getting better visibility into your network and improving your security couldn’t be easier. Sign up for a free, no-risk trial of Observable’s Endpoint Modeling solution, and change the way you see security.
Detect Threats Faster – Start Your Free, No-Risk Trial