
As you may have seen in the news recently, a court case involving the Post Office and the computer software they had been using has concluded. This isn't an article about that case specifically; many others have covered it much better than we can, and this primer from the BBC is a good place to start for the uninitiated. What we're intending to explore here are the wider issues the case raises.

That being said, it's probably worth very briefly outlining what happened in this case, as it's a perfect example of what we're going to go on to discuss. The Post Office used a system called Horizon for various accounting functions. They used the data it provided to accuse numerous employees of stealing from them. It later transpired that bugs in the software had caused it to produce incorrect data.

Off the beaten track

In this increasingly automated world, it is sometimes easy to forget that software is still created by people, and people make mistakes. In a case like this, it could be that the initial logic provided to the company creating the system was wrong. It could be that the logic was right and a mistake was made in the coding. It could be that it was right at first, and a later, seemingly unrelated change altered the output without anyone noticing.
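
To make that last failure mode concrete, here's a minimal, entirely hypothetical Python sketch (the figures and function names are invented for illustration): a branch total calculated two ways. The second version is the kind of innocent-looking tidy-up that quietly changes the output.

    from decimal import Decimal

    # Hypothetical: totalling a day's transactions for a branch ledger.
    # Version 1 works in exact integer pence; version 2 is a later
    # "tidy-up" that switches to floating-point pounds. The business
    # logic looks unchanged, but the output is not.

    transactions_pence = [1999, 2501, 3333] * 1000

    def total_v1(pence):
        # Exact decimal arithmetic: no rounding error possible.
        return Decimal(sum(pence)) / 100

    def total_v2(pence):
        # Seemingly equivalent refactor using binary floats.
        return sum(p / 100 for p in pence)

    print(total_v1(transactions_pence))  # 78330 exactly
    print(total_v2(transactions_pence))  # very close to, but not exactly, 78330

Neither version looks "wrong" in isolation. Unless someone compares the two outputs, the drift goes unnoticed.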

Typing something into a computer and getting an answer is now second nature to most of us, whether that's Googling the answer to a question or using a sat nav to get to an unfamiliar destination. It is very easy to disengage our critical faculties and accept whatever is churned out as the absolute truth. To take the example of the sat nav, though, many of us will have been in a situation where it has almost literally led us up the garden path because the data says its route is the quickest. A quick look at a map would have shown that a 30-second diversion from the route the sat nav had chosen would have taken you onto a motorway rather than a muddy single-lane backroad in the middle of the night. We just keep following it because it must be right, right?

As well as giving a questionable answer to the right question, a sat nav is also very happy to give the right answer to the wrong question. You will almost certainly have seen stories over the years of people misspelling the location they want to go to and driving hundreds of miles in the wrong direction, completely disregarding what their eyes are telling them because they're blindly following what the computer says.

Teenage Mutant Algorithms

The school exam results last year are another example of the "people" part of the system being glossed over. You will almost certainly be aware of the outline of what happened here, but in brief: because the ongoing pandemic made it impossible for students to sit their A Level and GCSE exams as planned, it was decided that the results should be determined by "algorithm". A load of data was fed into this algorithm, and it churned out results.

As the dust settled on the mess this caused, Boris Johnson blamed a "mutant algorithm" for the problems, the implication being that the computers had gone rogue, got it all wrong, and there was nothing anyone could have done about it. But that's not what happened. The algorithm did exactly what it was supposed to do. A set of rules that people decided on was fed into a computer, and the computer applied those rules exactly as instructed. The problem was not with the computers but with the rules fed into them.
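
To illustrate, here's a deliberately crude, hypothetical Python sketch, not the actual model used, of a rule in that spirit: cap each student's grade at the best grade their school has achieved before. The computer executes the rule faithfully; the unfairness is baked into the rule itself.

    # Grades in ascending order of achievement.
    GRADES = ["U", "E", "D", "C", "B", "A", "A*"]

    def standardise(teacher_grade, school_historical_best):
        # Apply the (hypothetical) rule exactly as specified: no student
        # may exceed their school's best historical grade.
        capped = min(GRADES.index(teacher_grade),
                     GRADES.index(school_historical_best))
        return GRADES[capped]

    # A strong student at a school with a weak track record is marked
    # down, precisely as the rule dictates.
    print(standardise("A*", "B"))  # -> B

Nothing here has "mutated". The code does exactly what it was told to do; the question is whether it should ever have been told to do it.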

Can you believe your eyes?

This is something we're constantly aware of as developers. We are involved in helping to design and create systems that do all sorts of calculations. Depending on the context, sometimes it's obvious from the output that we've made a mistake. Sometimes manual comparison of the expected result against the actual result will highlight a problem. Sometimes it will be caught in client testing, especially if the data is very specific to the client's line of work and their experience tells them that the set of data they're looking at just doesn't look right.
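
As a minimal sketch of that expected-versus-actual checking, here's a hypothetical example using Python's built-in unittest module: a penny-rounded VAT calculation compared against an answer worked out by hand.

    import unittest

    def add_vat(net_pence, rate=0.20):
        # Hypothetical helper: add VAT and round to the nearest penny.
        return round(net_pence * (1 + rate))

    class TestAddVat(unittest.TestCase):
        def test_known_answer(self):
            # Worked out by hand: 1000p net plus 20% VAT is 1200p.
            self.assertEqual(add_vat(1000), 1200)

    if __name__ == "__main__":
        unittest.main()

A test like this only catches what someone thought to check, which is exactly why the next scenario is possible.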

However, it's not impossible for an error to sneak past all these checks and make it into a live system. What then? Often, these errors are found and fixed quickly once the system is in real use. But what if they're not? Say we're now five years down the line and the specifics of that section of the system's design have long been forgotten by all involved. The data being output is now treated as gospel because it's always been that way. The link between the data and the people behind the initial logic has been lost. It's now just data in, data out.

So are we suggesting you should never trust what a computer is telling you? Absolutely not, not least because we'd be out of a job! However, it is important for all of us not to disengage our critical thinking when asking a computer to answer a question for us. If the answer feels wrong, maybe it's worth another check. If you act on that data and other people are telling you it's wrong, it's definitely worth another check. Maybe the answer is wrong. Maybe you misinterpreted it. Maybe you asked the wrong question. Computers are amazing tools that enable us to analyse things it would be impossible to analyse manually. But until AI gets a lot more advanced, it's always worth keeping in the back of your mind that the answer a computer gives you is the product of decisions made by people, and people can sometimes be wrong.