Posted by Kevin on December 4, 2014.
On Tuesday, Reuters published an exclusive report on a new FBI alert about destructive malware. The Reuters report was low on facts but high on conjecture (much of which is quite possibly true). Have you wondered, however, why a report on a document that the reporter has actually seen should be so short on facts?
[It should be said that the FBI has not associated its alert with the Sony malware, nor suggested that it is North Korean state malware. Everything and everyone who says otherwise is at the time of writing this doing nothing other than guessing.]
The reason for the dearth of fact in the Reuters report is that the alert document will have been marked TLP: Green. Under the Traffic Light Protocol, that means the document is restricted although not secret – recipients may share it with peers and partner organisations within their own community, but may not disclose it to the general public.
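For readers unfamiliar with the protocol, the four TLP levels in use at the time can be sketched as a simple lookup. This is an illustrative sketch of the published TLP definitions; the function and dictionary names are my own invention, not part of any official tooling:

```python
# Illustrative sketch of the Traffic Light Protocol (TLP) levels as
# published circa 2014. The names here are hypothetical, for clarity only.

TLP_AUDIENCE = {
    "RED":   "named recipients only; no further disclosure",
    "AMBER": "recipient's organisation, on a need-to-know basis",
    "GREEN": "peers and partner organisations in the community; never public channels",
    "WHITE": "unlimited; public disclosure permitted",
}

def may_publish_publicly(tlp_label: str) -> bool:
    """Only TLP: White information may be released to the general public."""
    return tlp_label.strip().upper() == "WHITE"
```

A TLP: Green alert like the FBI's therefore fails this check: `may_publish_publicly("GREEN")` is `False`, which is precisely why a reporter who has seen the document cannot simply quote it.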
I can think of nothing more absurd. This is the government (in this case the FBI and the DHS) saying, ‘we’ll warn some of the people so that they can better protect themselves from this nasty malware, but we won’t warn all of the people.’
You could claim that targeted malware is only dangerous to the targets (and the likelihood is that this particular malware was specifically targeted against Sony by a nation state, possibly North Korea); but Stuxnet has already proven this claim to be weak. Malware has a habit of spreading. It also tends to be copied by others. And nasty destructive malware is a growing trend: you can see it starting with cryptoware and escalating to the destruction of Code Spaces, the hosting company forced out of business in June 2014 after an attacker deleted its data. Now the criminals can see what chaos malware can bring to a company as large as Sony – and the possibility of a growing market for extortion based on network destruction is clear for all to see.
But it is also worth asking if there is something more sinister to this TLP: Green label beyond governments’ addiction to secrecy. I asked a number of anti-virus companies what would happen if they received malware notification via an FBI TLP: Green document. Luis Corrons, technical director at PandaLabs, was the first to respond:
In this hypothesis, it is the government… that provides you with the malware and information, so the only (legal) thing anyone can do is to comply.
He also added that if he found the malware in the wild, he would detect and protect his customers.
Mikko Hypponen from F-Secure was next up:
What would we do? Well, we would add detection for it.
Fraser Howard of Sophos expanded slightly:
Recipients may share TLP: GREEN information with peers and partner organizations within their sector or community, but not via publicly accessible channels. Obviously we do share security samples & information with other vendors, partners etc. In this, we follow the standard TLP protocol. So within our systems for sharing of that data, we have the ability to flag things as necessary to restrict external release where required.
In this, Sophos conforms with Panda: it would obey the restricted instruction. But also like Panda, that would not preclude detecting the malware and protecting its customers.
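The flagging mechanism Sophos describes, restricting external release while still sharing with trusted partners, could look something like the following in rough outline. This is a hypothetical sketch of such a gate; the class, function names, and channel labels are invented for illustration and do not describe Sophos's actual systems:

```python
# Hypothetical sketch of a sample-sharing gate that honours a TLP label.
# All names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    tlp: str  # "RED", "AMBER", "GREEN", or "WHITE"

def shareable_with(sample: Sample, channel: str) -> bool:
    """Decide whether information about a sample may go out on a channel.

    'industry' means a trusted peer/partner exchange; 'public' means blog
    posts, press statements, public feeds. TLP: Green permits the former
    but not the latter.
    """
    if channel == "public":
        return sample.tlp == "WHITE"
    if channel == "industry":
        return sample.tlp in ("GREEN", "WHITE")
    return False  # unrecognised channels get nothing

# Note: the gate restricts what is SAID about a sample, not whether the
# vendor detects it - detection for customers is unaffected.
```

The design point the vendors all make is visible in the last comment: the TLP label gates publication, not protection.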
Juraj Malcho of ESET saw more than I was asking, but some of where I was going:
I don’t think it’s likely that any government would want to share details about any of their top secret programs (which I think is a fair description of surveillance ops like this) with the public [I think he means ‘private’] sector, this simply doesn’t make sense. If I have a secret, I should keep it secret. The moment I share it I risk leaking of the information. AV companies don’t have security clearance. So I don’t think [the TLP: Green] scenario… is likely at all.
In other words, ESET doesn’t think it would receive such an alert – but doesn’t say what it would do if it did.
He adds that if they did find
…any such piece of malware, there’s no way to know where it comes from and what is its classification. We’d process it like any other malware – we would add detection, perform further research as needed, share the samples with the industry. If the malware/operation is interesting we could also publish a report.
Finally, I asked Kaspersky, which sadly simply avoided the issue and responded with marketing fluff:
As a private company, Kaspersky Lab has no political ties to any government but it is proud to collaborate with the authorities of many countries and international law enforcement agencies in fighting cybercrime. Kaspersky Lab works with the authorities in the best interests of international cybersecurity, providing technical consultations or expert analysis of malicious programs in compliance with court orders or during investigations.
If we try to get to the heart of these replies we can see a common theme: AV companies are most likely to honour the TLP: Green instruction and not provide any information to the public. They would however seek to protect their customers and would circulate information to other AV companies.
We could infer from the answers, however, that the AV companies might not feel restricted if they had already found the malware themselves. This suggests a counter to ESET’s reasoning: since the industry might well find the malware independently and respond in its normal fashion, the best way for a government to preclude that is simply to instruct the industry that it must say nothing about this particular piece of malware.
So far I have found only one analysis of the Sony-suspect malware: An Analysis of the “Destructive” Malware Behind FBI Warnings published by Trend Micro on 3 December. The FBI alert was issued late on 1 December – giving Trend less than two days to analyse the malware, and write, format and publish its report. This is surprisingly rapid – so the implication here is that either Trend found the malware independently, earlier and elsewhere (their report doesn’t say where they got it); or they got it direct from Sony and/or the FBI.
It’s almost certain that they got the malware from Sony/FBI – if only because their sample drops the same Sony-specific image as that used in the Sony breach (see below). The likelihood, then, is that Trend was asked by Sony/FBI to analyse the malware for them; and asked not to say where they got it.
What it looks like, then, is that government notification could easily be used to stop anti-virus companies from talking about specific malware (perhaps to the extent that it even exists), but will not necessarily prevent them from detecting and protecting their customers. But a quick tweak from the developers and the malware is hidden again.
This could be used as a way to keep government’s own malware below the public radar. It’s pure hypothesis of course – but look closely at all of the responses I got. No-one ever says they will defy a government instruction.