
False Positives in Digital Evidence Ruining People’s Lives

Trust the data, because people lie. This is the prevailing line of thought in the modern prosecutorial system. The more information that agencies can collect, the better.

Access to information is how the world works in the 21st century, and privacy – control over one's own personal information – is one of the major concerns for all of Europe. It is a concern weighty enough to set regulators directly against trillion-dollar businesses.

Personal information is a currency in and of itself.

But there is certain information that carries no expectation of privacy. Routine fact-checking of inventory and finances is the primary guard against fraud. Transactions happen at such scale and speed that no person – not even teams of accountants – can hope to make sense of them in practical time. You must trust that the computer is spitting out the right numbers.

This is why cybersecurity is also of such pressing concern. Computer fraud very often works through the assumption of legitimacy – fooling the computer into treating illegitimate actions as legitimate.

But in this tug-of-war between gathering information and protecting information, one thing seems to have slipped from popular consciousness: a third side to this triangle of digital protection.

What if the computer is simply… wrong?

Article brought to you by Hadaway & Hadaway Solicitors in North Shields, serving Newcastle and the North East of the UK

Computer Errors Screwing Up Lives

Bugs. Errors. Bad code. Computers exist to run programs, and programmers are highly paid to make sure those programs run efficiently and problem-free. Financial and security service providers are expected to have exercised every possible care to excise errors from their code. Protection from attacks is often built in at the hardware level for such systems.

Information servers even use special types of memory, known as ECC RAM (Error Correcting Code Random Access Memory), to protect information bits from being randomly altered by magnetic interference, voltage flickers or even cosmic rays. Authentication for every incoming and outgoing process can be baked into the processors themselves, or into dedicated chips without which no information can be read or modified. Even the mere reading of data is considered a cyber attack, because information is itself valuable and could compromise many other systems down the line.
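
To illustrate the error-correction principle – a toy sketch in Python, not how any real memory controller is implemented – a Hamming(7,4) code stores four data bits alongside three parity bits, which is enough to locate and silently repair any single flipped bit on read:

    # Toy Hamming(7,4) single-error correction, the idea behind ECC memory.
    # Positions are 1-indexed in the classic scheme; parity bits sit at 1, 2, 4.

    def encode(d):
        """Pack 4 data bits into a 7-bit codeword with 3 parity bits."""
        c = [0, 0, d[0], 0, d[1], d[2], d[3]]   # indices 0..6 = positions 1..7
        c[0] = c[2] ^ c[4] ^ c[6]               # parity over positions 3, 5, 7
        c[1] = c[2] ^ c[5] ^ c[6]               # parity over positions 3, 6, 7
        c[3] = c[4] ^ c[5] ^ c[6]               # parity over positions 5, 6, 7
        return c

    def decode(c):
        """Correct up to one flipped bit, then return the 4 data bits."""
        # The failed parity checks spell out the 1-indexed position of the error.
        s = (c[3] ^ c[4] ^ c[5] ^ c[6]) * 4 \
          + (c[1] ^ c[2] ^ c[5] ^ c[6]) * 2 \
          + (c[0] ^ c[2] ^ c[4] ^ c[6])
        if s:                                   # non-zero syndrome: flip the bad bit back
            c[s - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                                # simulate a cosmic-ray bit flip
    assert decode(word) == [1, 0, 1, 1]         # the corruption is invisible to the reader

Real ECC modules use wider codes over 64-bit words, but the principle is the same: redundancy turns silent corruption into a correctable event.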

So when the monitoring software tells you wide-scale fraud is happening, you are inclined to trust what the data says.

This is the situation the Post Office found itself in between 2000 and 2014. The Post Office prosecuted 736 sub-postmasters and sub-postmistresses based on information provided by a recently installed computer monitoring system called Horizon. Developed by the Japanese company Fujitsu, which had a long history of reliability in electronic systems, Horizon combined software and smartcard readers to track transactions, accounting, and stock for all post offices.

The software reported shortfalls across a wide area, often amounting to many thousands of pounds. Sub-postmasters complained about bugs in the system, and in some cases attempted to plug the gaps with their own money in order to correct the error. Many were left financially ruined by the criminal accusations; some went to prison for false accounting and theft. Some pleaded guilty to avoid imprisonment and from then on had to live with being labeled a criminal in their employment records, locking them out of gainful employment. Some even took their own lives as a result of these accusations.
The software was wrong.

A 20-year miscarriage of justice

This has turned into the largest miscarriage of justice ever presented to the Court of Appeal in UK history.

According to the report by the independent investigative firm Second Sight, the Horizon system suffered about “12,000 communication failures every year, with software defects at 76 branches and old and unreliable hardware”. The losses were not losses at all but discrepancies – funds that could have been generated by Horizon itself, or by its failure to track the money from lottery terminals and cash machines. Instead of looking for the cause of the errors, the Post Office accused its sub-postmasters of theft.
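
To see how a mere communication failure can masquerade as theft, consider a toy sketch in Python – hypothetical figures and logic, not Horizon's actual protocol – in which a lost acknowledgement causes one sale to be recorded twice at the centre:

    # Toy model of a phantom shortfall: the branch till is correct, but a
    # flaky link makes the central ledger disagree with it.

    branch_sales = [120.00, 85.50, 310.25, 42.00]   # what actually happened at the counter

    def transmit_with_retry(transactions, duplicated_index):
        """Simulate a lost acknowledgement: the terminal retries one message
        and the central system records that sale twice."""
        received = list(transactions)
        received.append(transactions[duplicated_index])
        return received

    central_ledger = transmit_with_retry(branch_sales, duplicated_index=2)

    expected_cash = sum(central_ledger)   # what the centre thinks should be in the till
    actual_cash = sum(branch_sales)       # what is genuinely in the till

    print(f"Centre expects £{expected_cash:.2f}, till holds £{actual_cash:.2f}")
    print(f"Apparent shortfall: £{expected_cash - actual_cash:.2f}")
    # -> Apparent shortfall: £310.25, with no money missing and no one at fault.

On paper, £310.25 has vanished; at the counter, nothing is wrong. Multiply such glitches across thousands of branches and years of operation, and “discrepancies” of many thousands of pounds are unsurprising.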

Post Office Ltd ordered Second Sight to end its investigation just one day before the report was due to be published, and to destroy all paperwork it had not handed over. It then published a report that cleared itself of any wrongdoing.

The Post Office had a standing equal to any other private prosecutor in the British legal system, a status dating back to 1683 and the Royal Mail's role as a public authority. Under this authority, the Post Office conducted aggressive criminal prosecutions of its own employees.

After two decades and many civil cases and appeals, campaigners finally won an admission from the Post Office that it had “got things wrong in dealings with a number of postmasters”, and it agreed to settle with 555 claimants. This came much too late for those affected, who had suffered financial ruin, divorce, imprisonment, criminal defamation, and suicide.

It is a classic example of how an old and established bureaucracy would rather

  1. believe that widespread corruption and theft among its own employees is more plausible than a fault in its systems, and
  2. refuse to admit error or culpability in any way, punishing those who contradict its assertion of reality.

This was an out-of-court settlement, and formally the Post Office did not accept liability. The group of claimants was awarded £57.75m, but after £46m in legal costs that leaves only about £20,000 for each person – an amount many members of Parliament considered inadequate for the damages suffered. Adding insult to injury, the Post Office itself lacked the financial resources to pay compensation and had to ask the government for funding.

The government has promised that those wrongly convicted of offences who had their convictions overturned would receive interim compensation of up to £100,000.

As of early 2022, nothing has been said about any direct cases against Post Office officials or Fujitsu over their failed deployment of Horizon.

Presumption that the computer data is correct

Prior to the Youth Justice and Criminal Evidence Act 1999, which repealed Section 69 of the Police and Criminal Evidence Act 1984, it was necessary to prove that

  1. there were no reasonable grounds to believe that a statement in a document produced by a computer was inaccurate because of improper use of the computer, and
  2. at all material times the computer was operating properly, or, if it was not, that any respect in which it was not operating properly or was out of operation was not such as to affect the production of the document or the accuracy of its contents.

Now there are no such restrictions on the use of evidence from computer records. There is a presumption that the computer producing the record was working properly at the time, and the record is admissible as evidence.

It is now up to the opposing side to rebut this presumption with evidence to the contrary, rather than for the party presenting the records to prove that the evidence is valid. This should normally speed up trials and digital forensics.

The damning thing is that the Post Office tried to hide and downplay an independent examiner's report showing that it was not doing its IT job properly.

False positives reinforce false premises

According to the National Police Chiefs' Council report Digital Forensic Science Strategy (July 2020), 90% of all criminal investigations in the UK now involve a digital element. Digitization has changed the methods and scope of criminal investigation. However, there is an unrecognized pitfall in the strategy: despite the drive for automation and error-free methodology, digital forensics still struggles with limited resources, an over-reliance on tools, and the confirmation bias of subjective opinions.

Digital evidence is increasingly presented and accepted in court without validation of the forensic methodology or tools. Classical investigative measures are subject to strict limits and fair trial guarantees, but digital investigations lack quality assurance and accountability.

As proven by the Post Office debacle, the inappropriate use of poorly-tested technology, and over-reliance on its results, undermines the right to a fair trial and threatens the presumption of innocence at an early phase of the investigation. This is also a human rights violation under the ECHR.

False positives are useful in science, and there are times when a false positive is preferable to a miss (for example, it is better for a metal detector to falsely flag a weapon or bomb, prompting a more in-depth examination, than to fail to register the danger at all), but in the criminal courts it is generally considered preferable to err towards the false negative.
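
The arithmetic behind that preference is worth making explicit. A short Python sketch (with hypothetical numbers chosen only for illustration) shows the base-rate problem: when genuine wrongdoing is rare, even a seemingly accurate detector flags mostly innocent people:

    # Base-rate arithmetic: P(actually guilty | flagged), via Bayes' theorem.

    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        true_pos = prevalence * sensitivity                  # guilty and flagged
        false_pos = (1 - prevalence) * false_positive_rate   # innocent but flagged
        return true_pos / (true_pos + false_pos)

    # Suppose fraud occurs at 1 branch in 1,000, and a monitoring system
    # catches 99% of real fraud while wrongly flagging 1% of honest branches.
    ppv = positive_predictive_value(prevalence=0.001,
                                    sensitivity=0.99,
                                    false_positive_rate=0.01)
    print(f"Chance a flagged branch is actually fraudulent: {ppv:.1%}")
    # -> about 9.0%: roughly ten of every eleven flagged branches are innocent.

A system that is “right 99% of the time” can still be wrong about ten flagged cases out of eleven.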

De facto reversal of the burden of proof

Traditionally, investigation is separate from forensics – they are entirely different career paths with different training and expertise. Forensic examination uses scientific methods to acquire impartial data, separately from law enforcement objectives. Digital forensics departed from this tradition by introducing “digital forensic investigation”, defined as follows:

“Typical forensic science areas answer comparison questions. Unknown object is compared to a standard reference and the scientist determines if they are the same. An object is identified by comparing it to several references. The process that occurs in “digital forensics” on the other hand, involves searching for evidence, identifying it, and reconstructing events. The identification and comparison process is only one part of the big picture…”

B.D. Carrier, A hypothesis-based approach to digital forensic investigations (2006)

Based on this view, it is a form of investigation designed to allow its results – digital evidence – to be admissible in a court of law. As it lacks controls and the testing of hypotheses, it does not actually follow the scientific method. It is data collection and pattern recognition. It begins with the presumption of criminal activity and the accumulation of evidence to corroborate this suspicion, not with attempts to falsify the theory.

Reasoning about digital evidence, especially when it is based on the processing of data, is exposed to high levels of uncertainty and inference as a result of interpretation by the tool or the examiner.

The complexity of digital investigation and technology provides extensive access to a large amount of “potential evidence” that is never examined by the court, but whose collection has a significant impact on the rights, comfort, and livelihood of suspects and third parties. There is an increasing trend to collect as much data as possible, just in case it proves useful sometime and somewhere else.

J. Milaj and J.P. Mifsud Bonnici, in “Unwitting subjects of surveillance and the presumption of innocence” (Computer Law and Security Review), argued that the use of technologies to surveil targeted suspects, and mass surveillance generally, undermines the presumption of innocence and the right to remain silent, weakens the protective mechanisms of the criminal process, and risks unethical criminal profiling and the “precooking” of evidential material long before any charges are pressed. It can be argued that emotionally vulnerable people might end up making false confessions when faced with such previously gathered “evidence” in a data-driven presumption of guilt.

One particularly egregious example of how this might happen is the San Francisco Police Department keeping all processed DNA – including DNA from victims of violent crime, child victims, victims of sexual violence, and even unrelated parties such as room-mates and partners – and testing it for matches in unrelated criminal cases. This had been going on for at least seven years. It is grossly unethical, shocking, and a violation of human rights, but it is also the natural endpoint of a mindset in which all data is useful data.

Examples of False Positives

Science advances, and methods previously considered highly reliable can no longer be treated as a “smoking gun”, at least not on their own.

Dental records and bite mark analysis, for example, produce a high percentage of false positives. A number of people have been convicted, or even sent to death row, solely on bite mark testimony. Peter Neufeld, founder of the Innocence Project, which has helped free hundreds of people who were wrongfully convicted, stated that most of those convictions involved the use of what he calls “invalid” science:

“Yeah, like the person who looks at scratch marks on someone's hand and says, ‘Those are human bite marks that came from that man, to the exclusion of everybody else on the planet.’

“There is no science to support that conclusion, period. It's something made up.

“The judge allowed it again and again and again. Frankly, not just one judge, but judges all over the country allowed that testimony because it came in from guys in white lab coats.”

Even fingerprint analysis, long held to be foundational in crime-solving, cannot be guaranteed to be 100% effective. The premise had always been that no two people have the same fingerprint.

But the case of Brandon Mayfield, related to the Madrid train bombings in 2004, showed that this truism had never really been adequately tested. The Spanish authorities found partial fingerprints on a bag of detonators, which they forwarded to Interpol, who then forwarded them to the FBI. This eventually led to a match with a print taken from Mayfield when he had served in the military. Despite his never having been to Madrid, multiple experts testified that the fingerprints matched to fifteen points of accuracy. Mayfield insisted he was kept on as a suspect because he was Muslim, as the only evidence against him was that partial fingerprint.

It was not until the fingerprints were later matched to an Algerian man, with evidence of actually having been in Madrid, that Mayfield was released.
Kenneth Moses, one of the forensic examiners, remarked:

Well, I knew that our profession had taken some sort of a quantum leap because, suddenly, there were new rules involved. No time before in history had there ever been two fingerprints with 15 minutiae that were not the same person.

Under our past standards, I was right. But I was wrong. I had made an error. And so had every other examiner that looked at the print. So therefore, when I heard that it was an error, I knew the ground had shifted somewhere, and indeed it had.

Of all present forensic methods, only DNA analysis so far consistently produces results that can be relied upon with fair confidence. This is because DNA analysis is constantly being tested, refined, and subjected to falsification against incomplete and degraded samples.

If actual physical evidence can be misleading, why are legal experts so confident in their digital evidence?

The Technological Protection Fallacy

Law enforcement and prosecuting authorities are often willing to use fresh and innovative science and technology to gather evidence, and judges seem enthusiastic to embrace the products of technological progress. The increasing use of automated tools to acquire and analyze digital evidence creates the false perception that technology mitigates error and individual bias, and that the results of purely mechanical or electronic tools are therefore always reliable and trustworthy.

According to the paper Cognitive and Human Factors in Expert Decision Making: Six Fallacies and the Eight Sources of Bias by Itiel E. Dror in Analytical Chemistry (2020), this is one of six cognitive fallacies that can distort the decision-making of experts and lead to false conclusions.

“People think that the mere use of technology, instrumentation, automation, or artificial intelligence eliminates bias. These can reduce bias, but even when these are in use, human biases are still at play because these systems are built, programmed, operated, or interpreted by humans. There is a danger that people will incorrectly believe that using technology is a guaranteed protection from being susceptible to and affected by bias.

Furthermore, technology can even introduce biases, be it mass spectral library matching software, Automated Fingerprint Identification Systems (AFIS), or other technological devices.”

Itiel Dror, a cognitive neuroscientist based in London and one of the leading authorities on bias in fingerprint analysis, conducted a test in which he took real cases – where examiners had previously found a match – modified the descriptions of the crime, and then asked the same examiners to analyze them again. Presented with the same evidence, a large majority of the examiners changed their minds and said it was not a match.

Table 1. Six Fallacies about Cognitive Bias Commonly Held by Experts (Dror, 2020)

Conclusion

The impact of misidentification is accentuated by technology-assisted investigations, where the lines between preventive, security, and investigative methods are blurred. False positives and false negatives become equally likely when the automated analysis of inaccurate data only introduces additional errors and bias.

Extreme cases, such as mass surveillance, mandatory biometrics, or anti-terrorist measures, tend to erode civil-liberty arguments and remove proportionality as a legal test for collecting digital evidence without consent. Big data somehow inspires a strange trust in enforcement bodies, and a bias is introduced whereby whatever conflicts with the reams of collected data is dismissed as mere outliers.

Where security software is present to prevent or monitor criminal activity, giving excessive trust to the computer – simply because it is a computer and not a person who might be biased or malicious – is foolhardy. Just as people need to be vetted, so should the computer be tested for reliability.

And if it fails those reliability tests, whoever keeps using those outdated and malfunctioning systems is engaging in willful malfeasance. Perhaps ass-covering.

This is, for now, apparently not a crime.

With regard to the story first referred to in this article, the Post Office has only one shareholder: the government itself, under the Department for Business, Energy & Industrial Strategy. Whereas in a normal stockholding organization managers and executive officers can be taken to task by shareholders for tanking the worth of the company with their mismanagement, a government-run essential service such as the Post Office is not allowed to fail despite gross misconduct.

Perhaps it should be a crime.