The "ransomware" concept is an old one: the principles of file-encrypting malware were well known to academics by the mid-1990s. However, actual ransomware infections were relatively rare until about 2013, when payment with digital currency became feasible. Since then, ransomware has become both increasingly common and increasingly destructive.
For organizations, ransomware isn't just a particularly annoying type of malware. It's somewhat inconvenient when a user loses access to the files on an infected drive, but it's unspeakably horrible when all users lose access to the files on a network share. Backups are the best insurance policy, but restoration is time-consuming and imperfect.
To make matters worse, antivirus applications have lagged badly in catching the newer ransomware variants. As the abuse.ch Twitter feed noted back in February, it's not uncommon for dozens of antivirus vendors to have zero coverage for malware that's been seen in the wild.
Given that an infected machine can do lots of damage in a short amount of time, being able to disconnect infected machines from the network as quickly as possible becomes very important. Below I'll discuss two methods that are being used to that end.
Ransomware infections involve Internet traffic at one or more steps - when distributing the malware, when exchanging keys, when setting up payments. There have been big advances in recent months in tracking the servers involved in these operations. In March, abuse.ch launched Ransomware Tracker, which keeps tabs on the domains and IP addresses used by these servers.
These feeds represent one way to reduce identification time for infections. If you observe one of the listed IP addresses or domains on your network you'll know which machine might have a problem. Some firewalls can use IP and domain feeds to block this traffic.
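For networks without a firewall that consumes these feeds directly, the same check can be run against flow or connection logs. Here's a minimal sketch in Python; the feed file format (one IP per line, `#` comments, as published by trackers like Ransomware Tracker) and the CSV log columns (`src_ip`, `dst_ip`) are illustrative assumptions about your environment:

```python
import csv

def load_blocklist(path):
    """Read a feed file with one blocklisted IP per line; '#' starts a comment."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def flag_suspect_hosts(flow_log_path, blocklist):
    """Return internal hosts that contacted a blocklisted address.

    Assumes a CSV flow log with 'src_ip' and 'dst_ip' columns;
    adapt the field names to whatever your logging produces.
    """
    suspects = {}
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst_ip"] in blocklist:
                suspects.setdefault(row["src_ip"], set()).add(row["dst_ip"])
    return suspects
```

The output maps each internal source address to the listed destinations it touched, which is exactly the "which machine might have a problem" question above.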
This approach has its downsides. There are false positives - an IP address can be listed even if it hosts thousands of sites and only one is actually malicious. There are false negatives - someone has to identify a site as malicious before it can be listed. And in some cases identifiably bad traffic might not be generated until after the infection has done lots of damage. Nonetheless, remote server tracking will be an important tool while antivirus coverage for ransomware is low.
The worst types of ransomware infections involve network shares. If a user has write access to a shared drive, an infection could cause their machine to overwrite everything on it.
Some enterprising administrators came up with the idea of putting "canary" or "sentinel" files out on their shared drives. These are files that normal users would have no reason to access and whose contents are known to a monitoring system. If these files change, then it's possible that some automated process is systematically altering files. The monitoring system can detect the change in contents and send an alert with logs about which machine made the changes.
There are several things to consider when setting up a monitoring system. One is making sure users have access to the sentinel files, but no reason to change them. Another is making sure the sentinel files aren't easily identifiable - if ransomware authors can just skip over "SENTINEL_FILE.TXT", the sentinels won't do any good. A third is making sure that when a change is detected you can tie it back to a particular user with audit logs or network data.
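The core of such a monitoring system is small: record a known-good hash of each sentinel file, then periodically re-hash and alert on any difference. The sketch below makes some simplifying assumptions - sentinel paths are passed in as a list, and "alerting" is just returning the changed paths; a real deployment would read paths from configuration, run on a schedule, and correlate hits with audit logs to identify the offending machine:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file's full contents; sentinel files should be small."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def baseline(sentinel_paths):
    """Record the known-good hash of each sentinel file."""
    return {p: sha256_of(p) for p in sentinel_paths}

def check(baseline_hashes):
    """Return sentinel files whose contents changed or vanished.

    A deleted or renamed sentinel is treated as a change too, since
    some ransomware renames files as it encrypts them.
    """
    changed = []
    for path, known in baseline_hashes.items():
        try:
            if sha256_of(path) != known:
                changed.append(path)
        except FileNotFoundError:
            changed.append(path)
    return changed
```

Hashing contents rather than checking timestamps avoids false alarms from backup jobs or indexers that merely read the files.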
This approach has its downsides also. Curious users can generate false positives. Ransomware applications can lie dormant and watch for which files are actually used by real users before activating. The sentinel file may not be one of the first files attacked. However, it might save important files, especially if the monitoring system can take automated action after the sentinel files are altered.
We're probably at "peak ransomware" here in 2016. Eventually antivirus coverage will get better (indeed, it did for the example above), and eventually hosting providers may be able to make things more difficult for authors and controllers.
Observable uses remote server tracking by default, and recently we've been working with some customers on integrating sentinel monitoring. For sentinel monitoring, the Observable sensor can host an SMB share with audit logging. These methods, plus DGA domain detection and SMB traffic change detection, can also help reduce time to detection.
Even so: you should probably check that your backup process is working.