Firearms Analysis: A Forensic Hunch Masquerading as Science

You’ve seen it on CSI—an investigator stands surrounded by crime-scene tape. Another shooting with no leads. She kneels, squinting at the camera. What’s this? She holds up her pen. Balanced on the end of it is a small metal cylinder. A shell casing! Cut to the forensics lab, the lasers, the 3D modeling and the eventual conviction of the shooter based on the infallible technological marvel that is firearms analysis.

Unfortunately, actual forensic comparison doesn’t involve any of the things you see on CSI other than the lone examiner, looking into a microscope, seeing if things look like they might match.

Along with other feature-comparison disciplines, like the now-discredited bite-mark analysis, firearms analysis has come under increasing scrutiny for its lack of reliability and scientific validity. Reports by authoritative bodies, including the 2016 President’s Council of Advisors on Science and Technology (PCAST) report and the 2009 National Academy of Sciences (NAS) report, have raised serious concerns about its reliability. These reports, coupled with empirical studies like the Ames Study, highlight fundamental problems with the methodology and accuracy of firearms identification, casting doubt on its credibility and demanding a reevaluation of its use in the justice system.

Questioning the Reliability of Firearms Analysis

Firearms analysis involves examining marks left on bullets and cartridge (shell) cases by a firearm to determine if they were fired from a particular weapon. The underlying premise is that each firearm imparts unique, reproducible marks. However, firearms analysis depends on the ability of a human examiner to identify and match these microscopic, unique marks, while disregarding other marks left on a bullet or shell casing—there is no objective standard for declaring a match.

The 2009 NAS report, “Strengthening Forensic Science in the United States: A Path Forward,” was among the first major critiques, pointing out that firearms analysis lacks a well-defined, quantifiable basis. The report emphasized that the process of matching bullets to guns is, in the end, subjective, relying exclusively on the examiner’s expertise and experience rather than objective standards. This subjectivity introduces significant potential for human error and bias.

The 2016 PCAST report, “Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods,” further scrutinized the field. PCAST found that while some foundational studies support the basic theory that firearms can leave unique marks, the scientific evidence does not establish the reliability and validity of firearms analysis with sufficient scientific rigor. Specifically, the report highlighted the absence of extensive, independent validation studies demonstrating low error rates under practical, casework conditions. Such validation studies and established error rates are taken for granted as the baseline for acceptable science in actual scientific disciplines (think of the FDA approving a drug), but, strangely, courts do not require them before examiners’ conclusions are used to convict defendants.

Just a Few of the Issues with Accuracy Testing in Firearms Analysis

Several key issues undermine the reliability of accuracy testing in firearms analysis:

  1. Lack of Blind Testing: Firearms examiners know when they are being tested, which can influence their performance. Blind testing, where examiners are unaware that they are part of a study, is a standard practice in scientific research to ensure unbiased results. Without blind testing, the reported error rates in firearms analysis do not accurately reflect true performance under normal working conditions.
  2. Dropout Bias: Examiners participating in studies can drop out without their results affecting the study’s error rates. This introduces a bias, as those who might struggle or perform poorly are not accounted for, artificially lowering the apparent error rates. Such dropouts can lead to an overestimation of the reliability and accuracy of firearms analysis.
  3. Use of “Inconclusive” Results: In many studies, examiners are allowed to mark comparisons as “inconclusive” without those answers counting against the reported error rates. This option is a convenient way to avoid making a definitive, potentially erroneous decision, skewing the data toward a lower error rate than would exist if conclusive determinations were required. Imagine using this standard for the bar exam (or any other proficiency exam, for that matter). Since an answer of “inconclusive” is scored as correct, anyone could pass the test and the error rate would remain close to zero, as the sketch after this list illustrates. Think you’re not a qualified firearms examiner? The tests disagree!
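
To make the arithmetic concrete, here is a minimal sketch in Python of how the scoring convention for “inconclusive” answers changes a reported error rate. The counts are hypothetical, invented for illustration rather than taken from any actual proficiency study:

```python
# Hypothetical scoring sketch: how treating "inconclusive" answers as
# non-errors shrinks a study's reported error rate. The counts below are
# invented for illustration, not drawn from any real study.

def error_rate(correct, wrong, inconclusive, inconclusive_counts_as_error):
    """Return the fraction of comparisons scored as errors."""
    total = correct + wrong + inconclusive
    errors = wrong + (inconclusive if inconclusive_counts_as_error else 0)
    return errors / total

# Suppose an examiner answers 100 comparisons: 70 correctly, 5 incorrectly,
# and marks the remaining 25 "inconclusive".
correct, wrong, inconclusive = 70, 5, 25

# Common study convention: inconclusives do not count against the examiner.
print(error_rate(correct, wrong, inconclusive, False))  # 0.05 -> a reported 5% error rate

# If a definitive answer were required, the same performance looks very different.
print(error_rate(correct, wrong, inconclusive, True))   # 0.30 -> 30% of answers were not correct
```

The same convenient arithmetic helps explain dropout bias: when struggling examiners leave a study before their answers are tallied, the errors they would have contributed never enter the calculation at all.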

Implications for the Justice System

The implications of these deficiencies are profound. Courts rely on forensic evidence to make determinations of guilt or innocence, and the perceived infallibility of forensic methods can heavily influence jury decisions. The revelations about the limitations and potential inaccuracies of firearms analysis necessitate a reevaluation of its use in the judicial process.

To ensure the fair administration of justice, it is crucial that forensic methods are subjected to rigorous scientific validation. This includes comprehensive error rate studies under conditions that reflect actual casework, improved standards for examiner training and proficiency testing, and greater transparency in the reporting of forensic evidence.

Conclusion

Firearms analysis is now recognized, by scientists without a financial stake in its continued use in court, as a discipline fraught with uncertainty and subjectivity. The reports from PCAST and the National Academy of Sciences, along with empirical studies like the Ames Study, underscore the need for a critical reassessment of its role in criminal investigations. Anything less will mean more wrongful convictions.
