Evidence-critical systems: what they are and why we need them
Steven J. Murdoch, University College London
Computers are limited to enforcing policies that can be unambiguously expressed in code
If you want a computer to require that actions meet certain criteria, the actions and criteria must be precisely specified in a programming language. However, many real-world tasks require human interpretation to enforce policies, or rely on information not available to the computer at the time of the decision.
Transparency can help detect violations of ambiguous policies, but only if victims have the power to do so
One option is to let anything happen and trust people to behave honestly, but not everyone should be trusted. We can do better through transparency-enhancing technologies that make actions visible, so failures can be identified through audit logs. However, there are challenges with this approach. For example, audit logs might contain sensitive information and so cannot be disclosed. One technique to address this challenge was demonstrated in VAMS: allowing statistics to be verified without access to the underlying data. However, even if the information needed to identify a problem is available, there might be other obstacles to mitigating the damage.
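As a rough illustration of the kind of transparency mechanism involved (a generic sketch in Python, not the VAMS design; the AuditLog class and its fields are hypothetical), a hash-chained audit log makes past actions tamper-evident: each entry commits to the previous one, so anyone holding the latest hash can later detect rewriting of history.

```python
import hashlib
import json

class AuditLog:
    """Minimal hash-chained log: each entry embeds the hash of the previous
    entry, so altering any past record changes every later hash."""

    def __init__(self):
        self.entries = []           # list of (record, digest) pairs
        self.prev_hash = "0" * 64   # genesis value

    def append(self, action: dict) -> str:
        record = {"action": action, "prev": self.prev_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((record, digest))
        self.prev_hash = digest
        return digest               # publishing this head makes the log checkable

    def verify(self) -> bool:
        """Recompute the whole chain; False if any entry was altered."""
        prev = "0" * 64
        for record, digest in self.entries:
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if record["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append({"clerk": "A", "operation": "adjust balance", "amount": -100})
head = log.append({"clerk": "B", "operation": "close account"})
assert log.verify()
log.entries[0][0]["action"]["amount"] = -10000   # tamper with history
assert not log.verify()                          # ...and the tampering is detectable
```

Publishing only the head hash, rather than the log itself, is one building block for checking claims about a log without full disclosure, in the spirit of the verification-without-disclosure point above.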
What’s the right process to turn verifiable data into fair outcomes for users of the system?
The legal system is the way we usually turn evidence into justice, but it’s imperfect and has proved particularly problematic where computers are involved. For example, consider the prosecutions of more than 900 subpostmasters based on evidence generated by the Horizon accounting system, which was finally shown to be not “remotely robust”. Part of the problem is that the English legal system presumes that computers are reliable unless shown otherwise, and obtaining evidence that a computer is unreliable is expensive and may be infeasible, particularly for the system’s users.
Bad news: fixing the problem is hard; good news: it’s easier than building safety-critical code
To address the problem we need systems that produce accurate and interpretable evidence to support the cost-effective resolution of disputes. If a system’s failure to resolve a dispute could result in serious harm, then it is an evidence-critical system. This is related to safety-critical systems – where a failure to operate correctly could result in serious harm – but is not the same. High-assurance engineering techniques, expected for safety-critical systems, are expensive even for the most straightforward applications of computers, so requiring them of every legally relevant computer system could reasonably be argued to be unrealistic. Actually, the situation is not so bad: safety-critical systems must produce correct and timely responses, whereas evidence-critical systems need only never produce an undetectably incorrect response. It’s OK to fail to produce a response at all, and it’s OK to produce an incorrect response provided that it can be detected.
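As a loose sketch of this weaker obligation (my illustration only, with hypothetical names such as Evidenced, compute_balance, and audit; it is not a design from the project), an evidence-critical component may abstain from answering, but should never answer without a record that lets an independent checker test the answer later:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidenced:
    """A response bundled with the inputs an independent checker needs."""
    value: int
    inputs: dict

def compute_balance(transactions: Optional[list]) -> Optional[Evidenced]:
    """Answer only when checkable evidence can be handed over; otherwise
    abstain. Returning no response is acceptable; an unverifiable one is not."""
    if transactions is None:
        return None                      # failing to respond is allowed
    total = sum(transactions)
    return Evidenced(value=total, inputs={"transactions": list(transactions)})

def audit(response: Evidenced) -> bool:
    """Independent re-check: an incorrect response is detectable from its evidence."""
    return response.value == sum(response.inputs["transactions"])

resp = compute_balance([100, -40, 25])
assert resp is not None and audit(resp)  # a correct answer passes the audit
resp.value = 0                           # an incorrect answer...
assert not audit(resp)                   # ...is detectable, which is all we require
```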
How can we design and build evidence-critical systems?
The Evidence Critical Systems research project aims to identify the right technologies and design principles to build systems that will produce adequate evidence to resolve disputes fairly, and to address the challenges in presenting and interpreting this evidence. For example:
- What are the right criteria for evaluating the likelihood that a failure will be detected? To estimate the probability that a failure occurred given some evidence, we need to know how likely that evidence is to appear when a failure has occurred, and how likely it is to appear when one has not (Bayes’ law); alternatively, starting from the system design, we can ask how likely an undetected failure is. A toy calculation follows this list.
- How do we create incentives to ensure that systems are built to these criteria?
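As a toy worked example of the Bayes’ law reasoning in the first question (all of the numbers are illustrative assumptions, not measurements from any real system), suppose a failure leaves a particular trace in the evidence 90% of the time, the same trace appears 5% of the time when there is no failure, and failures have a 1% base rate:

```python
# Toy Bayes' law calculation: P(failure | evidence) from P(evidence | failure),
# P(evidence | no failure), and the prior P(failure). All numbers are made up.
p_failure = 0.01                     # prior probability of a failure
p_evidence_given_failure = 0.90      # the trace appears when there is a failure
p_evidence_given_no_failure = 0.05   # the trace also appears without a failure

p_evidence = (p_evidence_given_failure * p_failure
              + p_evidence_given_no_failure * (1 - p_failure))
p_failure_given_evidence = p_evidence_given_failure * p_failure / p_evidence
print(f"P(failure | evidence) = {p_failure_given_evidence:.3f}")   # ~0.154

# The complementary design question: how often does a failure leave no trace?
p_undetected_failure = (1 - p_evidence_given_failure) * p_failure
print(f"P(undetected failure) = {p_undetected_failure:.4f}")       # 0.0010
```

Even with a fairly sensitive check, the low base rate means the trace alone is far from conclusive (roughly a 15% chance of a failure here), which is why the criteria for interpreting such evidence, and for bounding the chance of an undetected failure, matter.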
What’s next?
For updates on this project (up to 12 emails per year), subscribe to our mailing list by sending a blank email to cs-evidencecritical-join@ucl.ac.uk. If you have comments or suggestions, please email steven@evidencecritical.systems.