
Agents Smith & Jones versus the Bad Guys

Posted on December 24, 2016.

A new breed of endpoint protection software has emerged over the last few years. If we simplify things – probably over-simplifying – this breed of products uses machine-learning technology to teach itself behavioural rules that can detect and block known and unknown malware in situ. This is the central theme of what is usually called next-gen anti-virus; or, for our purposes, Jones. Based on these rules, Jones generates a probability score indicating whether a binary, and/or its behaviour, is likely to be malicious, even if the binary has never been seen before.
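To make the idea concrete, here is a deliberately toy sketch of that kind of probability scoring. Every name and weight below is invented for illustration – real products learn far richer models over far more signals – but the shape is the same: weighted behavioural evidence squashed into a 0-to-1 malice score.

```python
import math

# Hypothetical behavioural indicators with "learned" weights.
# These values are purely illustrative, not from any real product.
WEIGHTS = {
    "writes_to_startup_key": 2.1,
    "injects_into_process": 2.8,
    "encrypts_user_files": 3.5,
    "signed_binary": -1.9,   # negative weight: evidence of legitimacy
}
BIAS = -3.0  # prior: most binaries are benign

def malice_score(observed_behaviours):
    """Combine weighted behavioural evidence into a 0..1 probability-style score."""
    z = BIAS + sum(WEIGHTS[b] for b in observed_behaviours if b in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squash

# A signed binary doing nothing suspicious scores near 0;
# one injecting into processes and encrypting files scores near 1.
low = malice_score(["signed_binary"])
high = malice_score(["injects_into_process", "encrypts_user_files"])
```

The point of the sketch is that Jones never needs to have seen the binary before: the score comes from what it does, not from what it is.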

The original anti-virus (which for our purposes we shall call Smith) evolved with a slightly different initial purpose. The intent was not probable knowledge, but absolute knowledge: bad stuff would be stopped, good stuff would be allowed. Smith started with malware fingerprints. Every time a binary was detected as malware, a fingerprint (or signature) would be produced and added to a detection engine. If that fingerprint was ever seen again, the associated binary would be known malware and would be stopped.
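Smith's original fingerprint model is easy to sketch. The example below uses SHA-256 hashes of placeholder byte strings as stand-ins for real malware signatures; the important property is the exact-match lookup, which also shows the weakness: change a single byte and the fingerprint no longer matches.

```python
import hashlib

# Toy signature database: fingerprints of binaries previously judged malicious.
# The entries are hashes of placeholder strings, not of real malware.
KNOWN_BAD = {
    hashlib.sha256(b"evil-payload-v1").hexdigest(),
    hashlib.sha256(b"evil-payload-v2").hexdigest(),
}

def is_known_malware(binary: bytes) -> bool:
    """Exact-match lookup: block only if this precise fingerprint has been seen before."""
    return hashlib.sha256(binary).hexdigest() in KNOWN_BAD

is_known_malware(b"evil-payload-v1")   # matches a known fingerprint
is_known_malware(b"evil-payload-v1!")  # one byte different: an unknown binary
```

Absolute knowledge, as the article says: seen-before bad stuff is stopped cold, but anything novel sails through until a new signature is produced.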

Over time Smith found that these signatures were not enough. New malware was outpacing the ability to maintain the signature database. So Smith developed his own machine-learning ability to generate behavioural rules to detect new malware. These rules were largely developed in the lab and then physically added to the product.

Smith is mature. His was the original and archetypal security software. His members are largely stable and have evolved methods of sharing malware samples. Smith is a cooperative rather than a squabbling industry. He even developed an organisation called the Anti-Malware Testing Standards Organisation (AMTSO). AMTSO develops fair anti-malware testing standards that enable end-users to reasonably and accurately compare one product with another.

And then along came Jones. Jones, convinced that he had developed a better approach to detecting malware, found a market absolutely dominated, verging on saturated, by Smith. Jones also had a dynamic technology that could not be comparatively tested in the same way as the more static Smith products.

Jones resorted – not entirely, but certainly in some cases – to underhand methods to gain attention and traction. He hit upon the marketing approach of describing Smith as ‘signature only’ and therefore incapable of recognising new malware. And Jones used VirusTotal (without being a part of VirusTotal) to ‘demonstrate’ that he, and only he, could detect malware supposedly missed by Smith. Neither of these claims was necessarily accurate or fair. But they worked, and they left Smith fuming.

This all left people like me in a quandary. We know from hearsay and logic that Jones has a good concept. We know that Jones, in some cases, has been telling out-and-out fibs about Smith. We also know that Jones is not the final solution against malware: if Jones can use machine learning to gain an edge over the bad guys, the bad guys will pretty quickly use machine learning to regain the advantage – in fact, they are already doing so.

But what we have not had is genuine third party measurement and comparison of Smith and Jones. Until now. Over the last six months, parts (not all) of Jones have become less strident and more open. A few have joined AMTSO to help develop testing standards that are fair to both Smith and Jones. Others have had themselves independently tested by AMTSO-recognised testing organisations.

And that is why I was so pleased to hear from Jones-type SentinelOne just a couple of days ago: “SentinelOne’s Jones-type Endpoint Software Dominates New AV-Test and Outperforms Smith-type Tools.” This is the standard marketing-speak you get from all vendors. But the key here is ‘new AV-Test’. AV-Test is a respected independent test organisation and a member of AMTSO. And this is the first third-party comparative test of Smith and Jones I have seen.

You have to ignore the marketing-speak and concentrate on what AV-Test actually says. It includes this from Maik Morgenstern, CTO of AV-Test:

If security performance is the main focus, then the security packages from AVG, Bitdefender, SentinelOne and Sophos performed the most reliably in the test. If we consider the system load required for this, however, then the product from SentinelOne is the best recommendation. It places hardly any measurable system load on MacOS Sierra for daily routines.

Hats off to SentinelOne for putting its money where its mouth used to be. This is what the market needs – genuine independent testing not of Smith versus Jones, but Smith and Jones. Both exist to take on the bad guys.



2 thoughts on “Agents Smith & Jones versus the Bad Guys”

  1. Interesting!

    Looking at the Next-Gen security products test by AV-Comparatives & MRG-Effitas, they only tested 4 Jones-type programs (out of which SentinelOne received the lowest exploit-protection score, but still performed very well in the malware protection test).

    This is the test I’m referring to: https://www.av-comparatives.org/wp-content/uploads/2016/11/avc_mrg_biz_2016_nextgen_en.pdf

    Furthermore, the same report mentions something interesting:

    “In fact, a few “next-gen” vendors try to avoid having their products publicly tested or independently scrutinized. To this end, they do not sell their products to testing labs, and may even revoke a license key – without a refund – if they find out or suspect that it was bought anonymously by a testing lab.”

    Why would some Jones companies refuse to submit their software for testing, while other Jones companies do so gladly?


Submitted in: Expert Views, Kevin Townsend's opinions, News, News_malware