
JUNK SCIENCE

Forensic Science

The misapplication of forensic science contributed to 52% of wrongful convictions in Innocence Project cases. False or misleading forensic evidence was a contributing factor in 24% of all wrongful convictions nationally, according to the National Registry of Exonerations, which tracks both DNA and non-DNA based exonerations.


Some “forensic disciplines” that courts have used to convict people have been deemed completely unreliable or invalid. Studies have demonstrated that some forensic methods used in criminal investigations cannot consistently produce accurate results. Bite mark comparison, in which a biter is identified from a bite mark left on skin, is one example of an unreliable and inaccurate analysis. Other forensic disciplines in use may be capable of consistently producing accurate results, but there has not been sufficient research to establish their validity.


Misapplication of forensic science is sometimes referred to as “junk science.” The problem has been widespread enough that two government reports have been issued to help stop the use of junk science: Strengthening Forensic Science in the United States (2009) and Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (2016).


The 2016 report examined the research underlying specific forensic feature comparison disciplines, evaluated their accuracy and reliability, and made recommendations to various federal agencies to strengthen these disciplines. Among the recommendations was the need for better resources to support judicial training, given the changing landscape in the evaluation of forensic evidence and the state of validation of various forensic techniques. It also recommended that a method’s accuracy be established through large, well-designed studies. Without such studies, the results of an analysis cannot be properly interpreted and applied to a case, and conclusions drawn from that analysis should never be presented in court. Unfortunately, invalid and unreliable methods have been used to wrongfully convict innocent people.


According to these two large-scale reports, many forensic disciplines have been plagued with high-profile errors. For example, an ongoing review of the Federal Bureau of Investigation’s (FBI’s) microscopic hair comparisons, in which forensic scientists look for distinguishing features such as the thickness, texture, and pigment of a hair strand, has revealed erroneous statements in more than 90% of cases before 2000 in which FBI examiners gave testimony. Often, analysts had said that a hair could be associated with a specific person, which hair analysis cannot prove.


Even a more “well-established” method, fingerprint comparison, has faced criticism. Many fingerprint analysts use standard procedures to mark different levels of detail in a suspect’s fingerprint and in a “latent print” (one left at a crime scene). But making a so-called “individualization,” a conclusion that the prints are from the same source, is where things can get fuzzy. After examiners look at enough prints known to be from the same source and from different sources, the examiner’s brain starts to see patterns of similarity, and it becomes more likely that they will start finding points on the prints that match.

Forensic examiners and laboratory technicians sometimes provide misleading testimony.

Sometimes forensic testimony overstates or exaggerates the significance of similarities between evidence from a crime scene and evidence from an individual (a “suspect” or “person of interest”) or oversimplifies the data.


Sometimes forensic testimony fails to include information on the limitations of the methods used in the analysis, such as the method’s error rates and the situations in which the method has, and has not, been shown to be valid. Typical testimony uses phrases that overstate the reliability and validity of the testing, such as:


To a reasonable degree of scientific certainty. This is a customary phrase in scientific testimony, but it has no defined meaning. It should never be allowed in the courtroom.


It’s a match. This phrase is overused in TV crime dramas. A correct statement by a lab technician would be “I’ve looked at the two samples and they look similar.”


There is a 0% error rate. This is never true. There is always a chance, even if it is infinitesimal, that there is a different explanation for the evidence.


Identification / Individualization. This was often used in fingerprint analysis and implies 100% certainty in the conclusion. The phrase should be abandoned, according to the U.S. Army’s Defense Forensic Science Center, which no longer allows its technicians and analysts to use the term.


Basic mistakes happen too. Forensic practitioners can make errors such as mixing up samples or contaminating specimens. These can occur in any type of scientific or laboratory testing, even in well-developed and well-validated fields.


Misconduct by laboratory personnel. In some cases, forensic analysts have fabricated results, hidden exculpatory evidence, or reported results when no testing had been conducted.


How junk science has led to wrongful convictions and the need for reform.

Because shifts in scientific understanding often take decades to emerge, individuals whose wrongful convictions were based on misapplied science may struggle to prove their innocence because of filing deadlines and high evidentiary standards. In addition, some state courts do not recognize discredited scientific evidence as new evidence of a wrongful conviction.


This is one reason the Kentucky Innocence Project is reviewing many past DNA cases in Kentucky: to ensure that convictions have been based on the most current science, and that junk science has not been the cause of our clients’ convictions.
