
Hoaxes and Facebook

Posted by David Harley on January 22, 2015.

The security industry doesn’t generally take hoaxes per se very seriously. There are exceptions, such as the spate of virus hoaxes that plagued us in the 90s, which damaged the credibility of the anti-virus industry to some extent. (Many would now say “what credibility?” but that’s another article…)

What are (some of) the exceptions? As an example of social engineering: technical security solutions are generally of limited effectiveness in countering psychological manipulation. As a means of distributing malware, or of persuading potential victims to run malware, of course, hoaxes are all too effective. As a tool of fraudsters, sure. But some hoaxes spread widely yet don’t do much apart from giving the hoaxer some malicious self-satisfaction at convincing himself that everyone else is stupid, making the victim feel stupid if he realizes he’s been duped, and irritating people who get tired of seeing the same garbage time and time again. And irritation is a minor issue for people who spend their working lives trying to reduce the impact of heavy-duty criminality: is it really that important if people put up a legally meaningless privacy statement that misses the point by warning Facebook that their content is their own?

My view is a little different. Over the years, I’ve seen at first hand that outside the malware analysis lab, something that simply doesn’t exist can have serious real-world consequences. Perhaps that’s because it’s only fairly recently (since 2006) that I’ve provided consultancy for security vendors rather than working for people who buy security products. In the 90s, I spent much of my time in a medical research organization restraining people from panicking about non-existent viruses like Good Times, or from pounding overstretched email facilities in order to get money from Bill Gates, free phones from Nokia, or donations for cancer research. As a security manager in the UK’s National Health Service in the early 2000s, I sometimes spent as much time countering mailstorms warning against the non-existent sulfnbk.exe and jdbgmgr.exe malware as I did implementing and maintaining countermeasures against real malware, and trying to prevent mail services buckling under the weight of emails spreading hoaxes relating to children orphaned by the 2004 tsunami.

After I left the NHS (when it was decided that the NHS should be outsourcing its security: how’s that working out for you, guys? Oh well, never mind…) I even started a blog about hoaxes and psychological manipulation as part of a masterplan for seriously reducing their impact. Unfortunately, I never got around to implementing the main plan, and now I’ve forgotten what it was. Maybe I’ll get back to it sometime when I don’t have to worry about making a living. But since I was assimilated (resistance was useless) into the security industry, I’ve been writing at least as much about fraud and deception as I have about bits-and-bytes security threats: I guess that may be because my academic background is in social sciences as well as computer science.

Let me quote myself.

If someone shares misinformation with you on the bus or in a bar, it may have relatively little impact on the community at large. But I’ve often described social media as the natural supplement to or even replacement of email as the hoaxer’s weapon of choice, and because the last thing social media are noted for is restricting the flow of information (or misinformation), they could well be described as a weapon of mass deception.

And, yes, I was thinking about Facebook in particular: it’s by no means the only social media service misused by spammers, scammers, hoaxers and fraudsters, but it does have a huge user population. Happily, it seems that FB has cottoned on to the fact that deluges of false information do not afford its users universal delight. A couple of days ago the service announced that it was taking measures to reduce the impact of misinformation on its news feeds by adding an annotation to posts ‘many people’ have reported as being hoaxes, or have deleted subsequently (on the assumption that they have done so because they were told the information was false). However, as far as I know, no information has been given on what algorithm is used to ascertain how many is ‘many’. I wouldn’t have thought that all that many people would have considered it necessary to warn their friends that an article on Scientists Demonstrate Irrefutably the Existence of Santa Claus wasn’t true, but what do I know?
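
Since Facebook hasn’t said how it decides when ‘many’ is many, any description of the mechanism is guesswork. Purely to illustrate the kind of report-counting logic presumably involved, here’s a minimal sketch in Python; the names, threshold and weighting are all my own invention, not Facebook’s:

```python
# A purely hypothetical sketch of report-threshold hoax flagging.
# Facebook has not disclosed its actual algorithm: the threshold and
# the weighting given to deletions below are invented for illustration.

REPORT_THRESHOLD = 100   # how many is 'many'? Nobody outside FB knows.
DELETION_WEIGHT = 2      # assume a deletion after a warning counts double

def should_annotate_as_hoax(hoax_reports: int, deletions_after_report: int) -> bool:
    """Return True if a post should carry a 'possible hoax' annotation."""
    score = hoax_reports + DELETION_WEIGHT * deletions_after_report
    return score >= REPORT_THRESHOLD

# Example: 60 explicit hoax reports plus 25 deletions-after-report
print(should_annotate_as_hoax(60, 25))   # True: 60 + 2*25 = 110 >= 100
```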

Facebook’s definition of a hoax includes scams and ‘deliberately false or misleading news stories’, but FB is quick to point out that it isn’t going to be ‘removing stories people report as false’ or ‘reviewing content and making a determination on its accuracy.’ That’s hardly surprising, considering the sheer volume of content that’s shared on Facebook and its siblings, but the point here is that Facebook is anxious to be seen as a platform rather than a publisher, and therefore not to be held legally responsible for content that its users share among themselves.

So how much impact will it have? Alan Martin treats as abuse a suggestion from Wired that people might flag as false a post that clashes with their political beliefs: I’d actually regard it as more a case of an inability to separate the subjective from the objective (and maybe all of us share that, to a degree), but it’s certainly not impossible that a politically contentious article (or other link) might be flagged by Facebook as false, and/or receive less exposure in News Feed, because of the number of Facebook users who’ve objected to it. If Facebook isn’t actually deleting such items, the option may remain for other FB users to make up their own minds, though FB itself admits that heavily flagged items won’t show up so often in newsfeeds. Even before this development, it wasn’t actually very clear how News Feed selects what is shown, unless you happened to come across something like this FB article from 2013, which told us:

The News Feed algorithm responds to signals from you, including, for example:

  • How often you interact with the friend, Page, or public figure (like an actor or journalist) who posted
  • The number of likes, shares and comments a post receives from the world at large and from your friends in particular
  • How much you have interacted with this type of post in the past
  • Whether or not you and other people across Facebook are hiding or reporting a given post
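
Just to make the flavour of that concrete, here’s a toy scoring function built only from the four signals quoted above. The weights and the formula are pure guesswork on my part (Facebook’s real ranking code isn’t public), but it shows how a hide-or-report signal (the place where the new hoax flagging plugs in) can outweigh raw engagement:

```python
# Toy News Feed ranking sketch based only on the four signals quoted
# above. Weights and formula are invented for illustration; Facebook's
# real algorithm is not public.

from dataclasses import dataclass

@dataclass
class PostSignals:
    interactions_with_author: int   # how often you interact with the poster
    likes_shares_comments: int      # engagement from friends and the world at large
    past_interest_in_type: float    # 0.0-1.0: your history with this type of post
    hide_or_report_count: int       # hides/reports across Facebook

def feed_score(s: PostSignals) -> float:
    """Higher score = more likely to appear in the feed (toy model)."""
    score = (3.0 * s.interactions_with_author
             + 1.0 * s.likes_shares_comments
             + 50.0 * s.past_interest_in_type)
    # Hides and reports push a post down; roughly where the 2015
    # hoax-flagging tweak plugs in as one more negative signal.
    score -= 10.0 * s.hide_or_report_count
    return score

# Example: a heavily reported hoax post vs. a friend's photo
hoax = PostSignals(2, 400, 0.1, 120)
photo = PostSignals(30, 25, 0.8, 0)
print(feed_score(hoax), feed_score(photo))  # the hoax sinks despite engagement
```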

So the latest tweak is not so different from what already happens, as regards its impact on what content actually gets to your feed. (The major difference is in the way some posts are actually flagged.) Indeed, the degree to which Facebook manipulates news feeds was the cause of a great deal of controversy not so long ago, when it became known that the company had manipulated 700,000 news feeds for experimental purposes.

It’s for these reasons, by the way, that I find it infuriating when people complain that no-one seems to have noticed one of their posts: it’s perfectly possible the post in question never reached their friends’ and acquaintances’ feeds. On the other hand, if Facebook does manage to significantly reduce the number of times a hoax is re-posted (either by manipulating the feed or by flagging it as a possible hoax) that might at least encourage more people to check before reposting uncritically.

An interesting point that the Guardian (among others) has mentioned concerns satirical content. Facebook doesn’t believe that ‘satirical content intended to be humorous, or content that is clearly labeled as satire’ is likely to be reported as false, which is no doubt good news for sites such as The Onion and its readers. Wired raises the issue of ‘clickbait mills’, some of which claim to be satirical but don’t make that clear in their stories. I’m not sure how many ‘satirical’ sites that publish only untrue stories with no obvious wit or useful function could accurately be described as clickbait mills, but all too many don’t seem to have any purpose other than to attract clicks. And there are certainly contexts in which clicks mean profits.

[Image: dinosaur]

Look, kids! Richard Attenborough, Sam Neill and Laura Dern!

An article by Rob Waugh offers a number of suggestions for identifying Facebook hoaxes, and the main identifiers are probably worth summarizing here, though they don’t cover all cases:

  • ‘ANY story where you’re asked to share before seeing it’: because that’s almost invariably clickbait of a kind we’ve been seeing for years.
  • ‘Any ‘news’ story with mermaids or living dinosaurs’: or other improbabilities like the Santa Claus story.
  • ‘Incredibly violent video news reports’: scammers have always capitalized on the worst aspects of human psychology, including many kinds of voyeurism. ‘…and do you have a picture of the pain?’*
  • ‘Outrageous news stories about Facebook itself’: like the constantly recurring stories that it’s shutting down next week, or about to start charging subscribers, and so on.
  • ‘The report about the dying girl who begs you for “Likes”’: presumably a variation on those unpleasant requests to Like a photograph so that Facebook will subsidize treatment of a seriously ill child.
  • ‘The report on the incredible ‘hack’ which will let you see who looked at your Facebook page’: or turn your Facebook page pink (I’ve always wanted to do that!), or offer a Dislike button.

Here are some more examples of out-and-out scams from the Facecrooks site:

  • Apps that are supposed to tell you who has looked at your profile or prevented you from looking at theirs. (Apps with this functionality aren’t possible.)
  • Offers to test and keep iGadgets.
  • ‘Free’ game credits.
  • ‘Free’ travel tickets, gift cards, vouchers and so on.
  • ‘Exclusive’ breaking news stories.
  • Any post starting ‘OMG’ or ‘Shocking’. (I think that’s a bit sweeping, but there’s no doubt that a lot of dubious content uses that sort of hook to grab attention and draw the reader into a survey scam or something of the sort.)
  • Fake celebrity stories. (These spread very fast via Twitter, too.)

Facebook can be fun and even useful. But you really shouldn’t assume that links and stories are safe, accurate, or even legal just because some of your friends are re-posting them.

David Harley

*Phil Ochs: ‘Crucifixion’

