Evidence-critical systems: designing for dispute resolution

On Friday, 39 subpostmasters had their criminal convictions overturned by the Court of Appeal. These individuals ran Post Office branches and were prosecuted for theft, fraud and false accounting based on evidence from Horizon, the Post Office computer system created by Fujitsu. Horizon’s evidence was asserted to be reliable by the Post Office, which mounted these prosecutions, and was accepted as proof by the courts for decades. It was only through a long and expensive court case that a true record of Horizon’s problems became publicly known, with the judge concluding that it was “not remotely reliable”, paving the way for these successful appeals against conviction.

The 39 quashed convictions are only the tip of the iceberg. More than 900 subpostmasters were prosecuted based on evidence from Horizon, and many more were forced to reimburse the Post Office for losses that might never have existed. It could be the largest miscarriage of justice the UK has ever seen, and at the centre is the Horizon computer system. The causes of this failure are complex, but one of the most critical is that neither the Post Office nor Fujitsu disclosed the information necessary to establish the reliability (or lack thereof) of Horizon to subpostmasters disputing its evidence. Their reasons for not doing so include that it would be expensive to collect the information, that the details of the system are confidential, and that disclosing the information would harm their ability to conduct future prosecutions.

The judgment quashing the convictions had harsh words about this failure of disclosure, but that does not change the fact that over 900 prosecutions took place before the problem was identified. There could easily have been more. Similar questions have been raised relating to payment disputes: when a customer claims to be the victim of fraud but the bank says it’s the customer’s fault, could a computer failure be the cause? Both the Post Office and the banking industry rely on the legal presumption in England and Wales that computers operate correctly. The responsibility for showing otherwise falls on the subpostmaster or banking customer.

This presumption can and should be changed, and there should be more robust enforcement of the principle that organisations disclose all relevant information they hold, even if it might harm their case. However, that isn’t enough. Organisations might not have the information they need to show whether their computer systems are reliable or not (and may even choose not to collect it, in case it discredits their position). The information might be expensive to assemble, and so they might argue that disclosure is not justified. In some cases, publicly revealing details about the functioning of a system could assist criminals, giving organisations yet another reason (or excuse) not to disclose relevant information. For all these reasons, there will be resistance to changing the presumption that computers operate correctly.

I believe we need a new way to build systems whose records will be relied on to resolve high-stakes disputes: evidence-critical systems. The analogy to safety-critical systems is deliberate – a malfunction of a safety-critical system can lead to serious harm to individuals or equipment, while the failure of an evidence-critical system to produce accurate, interpretable and disclosable information could lead to the loss of significant sums of money or of an individual’s liberty. Well-designed evidence-critical systems can resolve disputes quickly, cost-effectively and with confidence, removing the impediments to disclosure and allowing a change in the presumption that computers operate correctly.

We already know how to build safety-critical systems, but doing so is expensive, and it would not be realistic to apply these standards to all systems. The good news is that evidence-critical engineering is easier than safety-critical engineering in several important ways. While a safety-critical system must continue working, an evidence-critical system can stop when an error is detected. Safety-critical systems must also meet tight response-time requirements, whereas an evidence-critical system can involve manual interpretation to resolve difficult situations. Also, only some parts of a system will be critical for resolving disputes; the other parts can be left unchanged. Evidence-critical systems do, however, need to work even when some individuals are acting maliciously, unlike many safety-critical systems.
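To make the last point concrete, here is a minimal sketch (my illustration, not a design from the original post) of one kind of reusable component an evidence-critical system might draw on: a hash-chained, append-only log. Each entry commits to the hash of the previous one, so later alteration of any record is detectable when the chain is verified – the sort of property that lets evidence remain credible even if some insiders act maliciously. The class and field names below are hypothetical.

```python
# Hypothetical sketch of a tamper-evident, append-only log (hash chain).
# Any modification of a stored record breaks verification of every
# entry that follows it.
import hashlib
import json


def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the hash of the previous entry."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class EvidenceLog:
    GENESIS = "0" * 64  # fixed starting point of the chain

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        h = entry_hash(prev, record)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = self.GENESIS
        for record, h in self.entries:
            if entry_hash(prev, record) != h:
                return False
            prev = h
        return True


log = EvidenceLog()
log.append({"branch": "X123", "event": "transaction", "amount": 1000})
log.append({"branch": "X123", "event": "balance_adjustment", "amount": -50})
assert log.verify()  # chain is intact until an entry is modified
```

A component like this does not by itself establish that the recorded figures are correct, but it does make the record disclosable with confidence that it has not been quietly rewritten after the fact.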

I would welcome discussion on what we should expect from evidence-critical systems. What requirements should they meet? How can these be verified? What re-usable components are needed to make evidence-critical systems engineering cost-effective? Some of my initial thoughts are in my presentation at the Security and Human Behavior workshop. Join the discussion on Twitter.


This post originally appeared on Bentham’s Gaze.
