
Algorithms, Arrogance and Collateral Damage

Posted by Kevin Townsend on October 29, 2016.

Artificial intelligence (AI) is the future. It’s a complex subject that basically boils down to mathematics. Mathematical rules, or algorithms, process data and deliver decisions (actually, they deliver probabilities, and we decide what level of probability constitutes a decision). Those decisions already control much of our lives, from something as simple as the stop/go sequences that maximise vehicle flow to decisions on when to buy or sell shares or currencies. In security, AI, in the form of machine learning, is used to decide whether patterns of network activity are benign or malicious.
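To make that probability-versus-decision distinction concrete, here is a minimal sketch in Python. The feature names, weights and threshold are all invented for illustration; no real product works exactly this way. The point is that the model only ever produces a number, and the verdict comes from the threshold we choose:

```python
# Hypothetical sketch: a classifier outputs a probability, and a
# human-chosen threshold turns that probability into a decision.
# Feature names, weights and the threshold are illustrative only.

def classify_traffic(features: dict, threshold: float = 0.8) -> str:
    """Score network activity, then convert the score into a verdict."""
    # A toy linear model standing in for whatever a vendor has trained.
    weights = {"failed_logins": 0.4, "bytes_out": 0.3, "odd_hours": 0.3}
    probability = sum(weights[k] * features.get(k, 0.0) for k in weights)
    probability = min(max(probability, 0.0), 1.0)  # clamp to [0, 1]

    # The model delivers a probability; the threshold is our decision.
    return "malicious" if probability >= threshold else "benign"

print(classify_traffic({"failed_logins": 1.0, "bytes_out": 0.9, "odd_hours": 0.5}))
# -> "malicious" (score 0.82 crosses the 0.8 threshold we set)
```

Move the threshold and the same traffic becomes benign: the decision belongs to whoever set the cut-off, not to the mathematics.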

The next stage in AI is to predict the future. It’s not a new idea – weather forecasting processes past and present data to predict next week’s weather. But now governments are beginning to use AI to identify criminals and terrorists, and to predict whether they are active or likely to become active.

This is where it begins to get worrying. We are in danger of being classified, in law enforcement terms, by the decision of a machine. If the algorithms decide that, because of our race, colour, creed, education, past behaviour, internet browsing, purchases, address, friends and relatives, we are probably criminals or likely to become criminals, then that is how we will be classified and treated.

And here’s the rub – those algorithms aren’t very good. To make the point, consider Windows 10’s sign-in picture offerings. For more than a year it has been giving me different scenes, promising to provide new scenes I will like based on whether I liked or disliked earlier offerings. Nothing, you would think, could be simpler. But a year and more than a thousand decisions later, it still cannot get it right. It offers scenes I have repeatedly rejected and new scenes I abhor.

My guess is that the algorithm has been designed to score probabilities against a number of different criteria: colours, shapes, shades, content and so on. But it does not include my single overriding requirement: no evidence of human or animal interference; that is, pure, unadulterated natural scenes. So just when it ‘thinks’ it has got it right, I reject the image, because there in the corner is a hiker or a fox or a paved road.
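As a sketch of that mismatch (again in Python, with every feature name and weight invented for illustration), the scorer below optimises the criteria its designers chose, while the user’s actual rule lives entirely outside the model:

```python
# Hypothetical sketch of the failure mode described above: a recommender
# scores images on the weighted criteria its designers chose, but has no
# notion of the one hard constraint the user actually cares about.

def predicted_liking(image: dict) -> float:
    """Weighted score over the criteria the designers thought mattered."""
    weights = {"colour": 0.4, "composition": 0.3, "subject": 0.3}
    return sum(weights[k] * image.get(k, 0.0) for k in weights)

def user_likes(image: dict) -> bool:
    """The user's real rule: reject any sign of human or animal presence."""
    return not image.get("contains_interference", False)

scene = {"colour": 0.9, "composition": 0.9, "subject": 0.8,
         "contains_interference": True}  # a hiker in the corner

print(predicted_liking(scene))  # 0.87 - the algorithm 'thinks' it has got it right
print(user_likes(scene))        # False - rejected anyway
```

However well the weights are tuned, the scorer can never learn a rule it was never given a feature for.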

The point is this: algorithms are based on the prejudices of their designers, and we will never get away from this. Even when algorithms are designed by algorithms run by algorithms, there will ultimately be prejudice. That prejudice is then further distorted by the accuracy, or lack of it, of the data being processed.

In probability terms, algorithms will probably get a lot of things right. But where they fail, they will fail catastrophically, and the victims will be mere collateral damage on the false altar of efficiency.



Submitted in: Expert Views, Kevin Townsend's opinions