We’ve been hearing a lot about cloud computing lately. Just about anything you can tuck away into the clouds gets better, or so the story goes, since it lets everyone access it from anywhere and harness the power of humongous databases full of useful information, among other benefits. Sure enough, the recent RSA conference had its fair share of cloud computing hype, with claims that it will do everything from change security as we know it to solve the imminent swine flu pandemic. On the heels of RSA, a new “cloud antivirus” (AV) solution has been introduced, raising media attention to cloud-based malware protection to a fever pitch. Given all the chatter on the topic, I thought I would offer my own forecast (of sorts) for cloud-based AV.
Let me start out by saying that I agree cloud-based security makes sense for consumers. We have been using online intelligence for a while now, and a good example of this is the Norton Insight feature of our 2009 products. Instead of loading a massive blacklist into the cloud, we have a whitelist and a community-driven database of trusted applications that don’t need to be scanned (unless they change and no longer match their original, trusted fingerprint). This cloud-based approach allows us to dramatically improve scan speed and focus attention where it’s needed most: unknown applications. Since most malware we encounter now has never been seen before (it’s brand spankin’ new), we think you have to avoid over-analyzing the good stuff and home in on the unknown stuff using techniques like behavior and network traffic analysis. This is how the 2009 versions of Norton AntiVirus and Internet Security work, as well as the recently launched Norton 360 v3.
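The trust-based skipping idea can be sketched in a few lines. This is a minimal illustration of the general technique, not Norton Insight’s actual implementation; the database structure and function names are invented for the example.

```python
import hashlib

# Hypothetical trust database: path -> SHA-256 fingerprint recorded when
# the file was marked trusted. Real products use community-driven data.
trusted_db = {}

def fingerprint(data: bytes) -> str:
    """Compute a content fingerprint for a file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_trusted(path: str, data: bytes) -> None:
    """Record a file as trusted at its current content."""
    trusted_db[path] = fingerprint(data)

def needs_scan(path: str, data: bytes) -> bool:
    # Skip scanning only while the file still matches its trusted
    # fingerprint; any change sends it back through the scanner.
    return trusted_db.get(path) != fingerprint(data)

register_trusted("app.exe", b"original bytes")
print(needs_scan("app.exe", b"original bytes"))  # False: trusted, skip scan
print(needs_scan("app.exe", b"tampered bytes"))  # True: changed, scan it
```

The point is that effort shifts away from re-scanning known-good files and toward the unknown ones.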
Now cloud-based antivirus (AV) is another matter. You could consider it a subset of cloud security. The basic premise is that providers can create and deliver AV definitions more rapidly through an online database of definitions than through a client-side database. Thus, the old client-side AV definitions are replaced by a super database of AV definitions in the sky which fuels the client-side scanner, sort of like putting a bigger engine in my old 2000 Honda Accord. I’d like to offer a few counterpoints:
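To make the premise concrete, here is a toy sketch of the cloud-lookup model just described: the client sends a file hash to an online definition database instead of consulting a local signature file. The service is simulated in-process, and all names and verdicts are made up for illustration.

```python
import hashlib

# Stand-in for the vendor's online definition database: hash -> verdict.
# In a real product this lives server-side behind an HTTPS API.
CLOUD_DEFINITIONS = {}

def cloud_lookup(file_hash: str) -> str:
    # Simulates a network query to the cloud definition service.
    return CLOUD_DEFINITIONS.get(file_hash, "unknown")

sample = b"malicious bytes"
h = hashlib.sha256(sample).hexdigest()

# The vendor can publish a new definition instantly, with no client update.
CLOUD_DEFINITIONS[h] = "Example.Downloader.B"

print(cloud_lookup(h))  # verdict available without a client-side update
```

The delivery speed improves, but note what does not change: the model still only answers for hashes someone has already seen and classified.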
AV signatures will never be good enough. Signatures presume you have a sample of the malware itself, or are generic enough that you can make some assumptions about the “family” it belongs to in order to detect it properly even if it changes a little. This whole model is under severe strain, since most malware we see today is unique and designed specifically for signature evasion. Also, with thousands or even tens of thousands of new malware samples every day, how are you going to see them all? With arguably the largest installed security customer base in the world, we don’t see every new instance of malware every day. It’s impossible for any one vendor, especially smaller, regional players. So what happens? You end up with a lot of definitions for yesterday’s malware in your cloud databases while the threats of today merrily chug along, infecting people at rates that would make the H1N1 flu bug envious. We believe the right approach today is to use signatures as a last line of defense, and rely on other, newer technologies to deflect or capture unknown threats. Cloud-based models don’t change this.
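A tiny example shows why exact signatures strain against evasion. The byte patterns and threat names below are invented purely for illustration; real signature engines are far more sophisticated, but the single-byte mutation problem is the same in spirit.

```python
# Hypothetical signature database: byte pattern -> threat name.
SIGNATURES = {b"\xde\xad\xbe\xef": "Example.Trojan.A"}

def signature_scan(sample: bytes):
    """Return the threat name if any known pattern appears, else None."""
    for pattern, name in SIGNATURES.items():
        if pattern in sample:
            return name
    return None

known = b"header" + b"\xde\xad\xbe\xef" + b"payload"
mutated = b"header" + b"\xde\xad\xbe\xee" + b"payload"  # one byte changed

print(signature_scan(known))    # Example.Trojan.A
print(signature_scan(mutated))  # None: the mutated copy slips past
```

A trivially repacked or mutated sample no longer matches, which is exactly why signatures work best as a last line of defense rather than the first.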
Scanning is getting old. That’s right: the conventional scanning model is aging faster than the teenage cast of the Harry Potter movies. It relies on exhaustively searching the computer for tell-tale signs of threats. The most popular threat today is the downloader Trojan horse that sneaks onto your PC and then downloads a bunch of its pals to join the malware fiesta on the newly infected computer. Now let’s say you scan after this infection takes place because you notice something smells funny. If you’re backed up by all sorts of cloud-based intelligence, you might catch some or most of the malware on your PC. But are you really comfortable not catching all of it because some of it is unknown? And what about the fairly typical rootkits that now burrow deep into an infected system? Normal scanning can’t detect these because it can’t see them. Does this mean that scanning is useless? Absolutely not, but, once again, it is a last line of defense.
After the firewall, the first line of defense is intrusion prevention, or exploit blocking. This remarkably effective yet underappreciated technique works on the premise that many attempted infections use known vulnerability exploits to infect an unpatched computer. Intrusion prevention offers a “virtual patch” of sorts, plugging the hole left by the vulnerability and deflecting the attack. Second, malware detection techniques that monitor application behavior at low levels in the system also work, because they detect unknown threats based on what they’re doing rather than on a unique “fingerprint.” As it turns out, while threats may have very different fingerprints, they tend to do a lot of the same things, making behavior-based detection a powerful form of protection against unknown malware.
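Behavior-based detection can be sketched as scoring observed actions against a set of suspicious-behavior rules. The rule names, weights, and threshold below are hypothetical and exist only to show the shape of the technique; real products monitor far richer system telemetry.

```python
# Hypothetical behavior rules: observed action -> suspicion weight.
SUSPICIOUS_BEHAVIORS = {
    "writes_to_system_dir": 3,
    "disables_security_service": 5,
    "installs_autorun_entry": 2,
    "opens_browser_window": 0,   # benign on its own
}
THRESHOLD = 5  # illustrative cutoff, not a real product setting

def is_suspicious(observed_actions) -> bool:
    """Flag a process when its combined behavior score reaches the threshold."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(a, 0) for a in observed_actions)
    return score >= THRESHOLD

print(is_suspicious(["opens_browser_window"]))                            # False
print(is_suspicious(["writes_to_system_dir", "installs_autorun_entry"]))  # True
```

No fingerprint of the binary is involved: a brand-new sample that performs the same combination of actions still trips the rules.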
Third, watching network communications on a system, similar to what a firewall does, for signs of an unknown threat or unwanted application is also effective. This is because malware authors change the “shape” of their programs more often than they alter how those programs communicate back to the mothership. It’s simply more work, and they’d prefer not to do it if they don’t have to. Ultimately, none of the new-style protection techniques mentioned here is foolproof, but we believe all of them are required to properly defend a computer against today’s breed of aggressive threats.
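The same communication-pattern idea can be illustrated with a simple traffic check: flag a request because of how it talks to its command-and-control server, regardless of what the binary’s bytes look like. The URL pattern below is a made-up example, not a real malware indicator.

```python
import re

# Hypothetical command-and-control beacon pattern: many samples in a
# family reuse the same check-in URL shape even as their binaries mutate.
C2_PATTERN = re.compile(r"/gate\.php\?id=[0-9a-f]+", re.IGNORECASE)

def looks_like_c2(request_path: str) -> bool:
    """Flag outbound requests matching the known beacon pattern."""
    return bool(C2_PATTERN.search(request_path))

print(looks_like_c2("/gate.php?id=deadbeef"))  # True: beacon-shaped traffic
print(looks_like_c2("/index.html"))            # False: ordinary browsing
```

Because the check-in protocol changes more slowly than the packed binary, the network layer catches variants that evade file-based signatures.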
Scanning saps performance. There’s simply no way around this, and it’s why we developed Norton Insight, which is based on the idea of scanning less. You can do this without hindering performance or decreasing protection if you can determine what is trusted and, consequently, doesn’t need to be scanned. The recently launched cloud antivirus methods may sound great at first blush, but have already been shown to tax performance more than our new security products do. Three running processes? We use only two. 17 MB of RAM? We use less than 7 MB in Norton 360 v3. A 50 MB footprint? That’s roughly the same size as Norton AntiVirus, which is full-featured antivirus and antispyware (e.g., full real-time protection, behavior-based protection, intrusion prevention, Norton Insight, etc.). It turns out some people have already had trouble with the performance impact, and the suggested fix is to turn off logging, which doesn’t seem like such a wise idea when using beta software. Further, only scanning when files are executed seems like an acceptable idea as long as you’re OK with being a carrier of malware to other people; this model allows malware to be copied merrily from place to place without detection in an effort to lighten the load on the system. Sheesh, how about just adopting a model that doesn’t base its core protection on scanning?
Ultimately, cloud-based malware detection is a technique that provides an additional layer of protection. It takes the old scan-and-definition model and breathes a little new life into it, but it has to be done properly to avoid performance impact and to fully harness the online malware databases. Done right, cloud-based AV can incorporate an extra, helpful layer of heuristics beyond the traditional signatures that typically detect a single, known threat. Basic, signature-driven cloud detection methods pale in comparison to the effectiveness of more sophisticated techniques such as behavior-based malware detection, network traffic analysis, and strong intrusion prevention (exploit blocking). All of these work even on unknown malware, an essential feature given today’s threat landscape. And even these protection features, much like cloud-based AV, work a lot better when they function together inside an integrated suite with layered protection.
Source and Copyright: Norton blogs.