by Alex Biedermann, William C. Thompson and Joëlle Vuille
Vol. 102 No. 1 (2018) | Forensic Fail

Big changes are occurring in forensic science, particularly among experts who compare the patterns found in fingerprints, footwear impressions, toolmarks, handwriting, and the like. Forensic examiners are reaching conclusions in new ways and changing the language they use in reports and testimony. This article explains these changes and the challenges they pose for lawyers and judges.
Although testimony about forensic comparisons has been offered in court for over a century, it has recently become controversial. Questions have emerged about the scientific foundation of the pattern-matching disciplines and about the logic underlying forensic scientists' conclusions. The traditional assumption that items like fingerprints and toolmarks have unique patterns that allow experts to accurately determine their source has been challenged and is being replaced by a new logic of forensic reporting. The new logic requires experts to evaluate and weigh probabilities rather than claim certainty. Forensic experts must now moderate the claims they make about their own accuracy and, increasingly, use numbers to describe the strength of their conclusions. Because these changes have important implications for the probative value of the conclusions that forensic experts offer in court, it is important that judges understand them.
As recently as a decade ago, forensic scientists in the pattern-matching disciplines told a common story when asked to explain how they reached conclusions. Their analytic process began with the assumption that the items they examined had unique patterns: For example, every finger was said to have a unique set of friction ridges, and thus every print left by a given finger (if sufficient in size and clarity) was expected to be different from the print made by any other finger. Similarly, every gun barrel was thought to be unique; hence the pattern of marks found on bullets fired through a given barrel (if sufficient in size and clarity) was expected to differ from the pattern found on bullets fired through any other gun barrel. The soles of shoes and human dentition also were presumed to be unique, and thus the impressions left by a given shoe, or a given set of teeth (if sufficiently clear and detailed), were assumed to differ from the impressions left by any other shoe or set of teeth. Applying the same analysis, everyone's handwriting was presumed to be unique, and hence a sample of handwriting from a given individual (if sufficiently extensive) was presumed to be distinguishable from the handwriting of any other individual. These presumptions have been called the theory of discernible uniqueness.1
According to this traditional account, the job of the forensic examiner was first to assess whether the patterns seen in impressions contained sufficient detail to allow a determination of source and, second, to compare the impression patterns. If sufficient detail was available, then a "match" between the patterns meant the source of the impressions must necessarily be the same, and a mismatch (failure to match) meant that the source of the impressions must necessarily be different. If insufficient detail was available to make a definitive determination, then the examination was inconclusive.
Examiners in a number of forensic disciplines have testified that this analysis allows them to make source determinations with complete certainty. A prominent fingerprint examiner explained the matter as follows:
Fingerprint examiners routinely claim to have "identified" or "individualized" an unknown mark to a single known print. This identification is often characterized as being "to the exclusion of all others" on earth to a 100 [percent] certainty, and the comparison method used is claimed to have a zero percent error rate. These claims are based on the premises that friction ridge skin is unique and permanent.2
Unfortunately, these claims have not withstood scientific scrutiny. Indeed, commentary on the issue in the broader scientific and academic communities (beyond the community of forensic science practitioners) has been nearly unanimous in dismissing such claims as unwarranted.3 Consider the claim that the ridge patterns on every finger are unique. Like similar claims about snowflakes, it is impossible to demonstrate empirically that this claim is true because one cannot conduct a systematic comparison of every finger against every other. Furthermore, there is a difference between the claim that the ridge pattern on each finger is unique and the claim that a fingerprint examiner can accurately determine whether two fingerprints were made by the same finger. The validity of the latter also depends on the quality of the prints and the level of analysis employed during the comparison. Even if the ridge detail of every finger were unique, it does not follow that every impression made by every finger will always be distinguishable from every impression made by any other finger, particularly when the impressions are of poor quality (e.g., limited detail, smudged, distorted, or overlaid on another impression). By analogy, it may be that every human face is unique, but we can still mistake one person for another, particularly when comparing poor-quality photos.4
This is a limitation that most fingerprint examiners now acknowledge:
When fingerprint comparisons are being made, they are not being made from friction ridge skin to friction ridge skin. They are being made from one imperfect, incomplete recording to another. . . . [Hence] correctly associating a degraded mark to its true source is by no means a certainty, even were one to presume absolute uniqueness of all friction ridge skin.5
Consequently, the key scientific question is not whether the ridge pattern of each finger is unique, but how well an examiner can distinguish the impressions of different fingers at the level of analysis applied in a forensic examination. That question cannot be answered by assertions about the uniqueness of ridge patterns; it can only be answered by empirical research.
This critique also applies to other forensic pattern-matching disciplines, such as toolmark analysis, footwear analysis, handwriting analysis, and bitemark analysis. Although some practitioners in these fields persist in making the injudicious claim that their conclusions must be accurate because they are comparing patterns that are unique, the broader scientific community has called for empirical studies to put such claims to the test.
A key event in the evolution of forensic science opinion was a 2009 report by the United States National Academy of Sciences (NAS), which called for the development of "quantifiable measures of the reliability and accuracy of forensic analyses" that reflect "actual practice on realistic case scenarios . . . ."6 It called for research to establish "the limits of reliability and accuracy that analytic methods can be expected to achieve as the conditions of forensic evidence vary."7 The report concluded that "much forensic evidence - including, for example, bitemarks and firearm and tool mark identifications - is introduced in criminal trials without any meaningful scientific validation, determination of error rates, or reliability testing to explain the limits of the discipline."8 In response to this high-level scientific criticism, forensic scientists made some efforts to study the accuracy of their methods, although these efforts have been limited. The FBI commissioned an important series of studies on the accuracy of latent print analysis, but relatively little research has been conducted on the accuracy of other forensic science disciplines.

In 2016, the President's Council of Advisors on Science and Technology (PCAST) issued a report that reviewed scientific research published to that point on the accuracy of six forensic science disciplines that rely on "feature comparison": DNA analysis, latent print analysis, firearms analysis, bitemark analysis, footwear analysis, and microscopic hair analysis.9 PCAST found that adequate research had been done to establish the "foundational validity" of latent print analysis and of DNA analysis of single-source and simple-mixture samples. "Foundational validity" means the method in question is capable of producing accurate results when properly performed. PCAST concluded, however, that too little research had been published to establish the "foundational validity" of firearms analysis, bitemark analysis, footwear analysis, microscopic hair analysis, and DNA analysis of complex mixtures.
Moreover, even if latent print examination has "foundational validity," the studies do not show that it is infallible (as examiners have claimed). The studies reviewed by PCAST showed that latent print examiners have:
. . . a false-positive rate that is substantial and is likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis. The false-positive rate could be as high as [one] error in 306 cases [based on an FBI study] and [one] error in 18 cases based on a study by another crime laboratory.10
The studies reviewed by PCAST also showed substantial numbers of false exclusions.11 In light of these developments, forensic scientists have begun to change the way they describe their analytic process and report their conclusions. They can no longer credibly claim the ability to infallibly discern whether two compared sets of features share a unique pattern and thus have a common source. Professional associations and standards-setting bodies in various branches of forensic science have recommended that examiners avoid asserting that their conclusions are infallible and avoid claiming that they can discern whether a pattern is unique.12 Experts are now discussing a variety of new approaches to reporting.
To understand and evaluate the new approaches to reporting, it is necessary to understand the logic of forensic inference - that is, the logical steps by which a forensic examiner proceeds from observations to conclusions. Let's consider, as an example, the logical steps that lead a latent print examiner from the observation that two fingerprints have similar ridge patterns to conclusions about whether they were made by the same finger. If examiners can no longer credibly claim that prints must necessarily have a common source if they appear to have "matching" ridge patterns, what conclusions can they reasonably draw?
The new approaches all recognize that forensic inference requires an inductive line of reasoning, which entails consideration of probabilities. The examiner must consider the probability of seeing the patterns observed in the impressions under two alternative hypotheses about their origin: (1) that the impressions have the same source (e.g., same finger, same tool); and (2) that the impressions have a different source.
Suppose, for example, that a latent print examiner observes that two fingerprints have similar patterns but with slight discrepancies. The examiner must consider how probable it would be to observe those particular patterns (including both similarities and discrepancies) if the prints were made by the same finger. This might involve consideration of the likelihood that slipping or torsion of the finger, or some other process, could have distorted one or both of the prints enough to produce the discrepancies. The examiner must also consider how probable it would be to observe those particular patterns (including both similarities and discrepancies) if the prints were made by different fingers. This would involve consideration of the rarity of the shared features, hence how likely or unlikely it would be to observe so much similarity in prints made by different fingers.
In order to draw inferences and reach conclusions about whether two impressions have a common source, the expert must consider the balance between the two key probabilities: (1) the probability of the observed patterns if the impressions have the same source; and (2) the probability of the observed patterns if the impressions have a different source. The ratio between these two probabilities provides an index of the probative value of the evidence for distinguishing the two hypotheses. The evidence favors a particular hypothesis to the extent that the observed results are more probable under that hypothesis than under the alternative hypothesis. For example, if a latent print examiner thinks the observed ridge patterns (including both similarities and discrepancies) would be more probable if the prints have the same source (same finger) than if they have a different source (different fingers), then the evidence supports the hypothesis that the prints have the same source.
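For readers who want to see the arithmetic, the following short sketch (in Python) computes the ratio just described. The two probabilities are made-up values of the kind an examiner might assess; they are purely illustrative assumptions, not figures from any real case or database.

```python
# Minimal sketch of the likelihood-ratio logic described above.
# The two probabilities are hypothetical values an examiner might assess;
# they are not drawn from any real case or database.

p_obs_given_same_source = 0.80   # P(observed patterns | same finger)
p_obs_given_diff_source = 0.002  # P(observed patterns | different fingers)

likelihood_ratio = p_obs_given_same_source / p_obs_given_diff_source
# 400: the observed patterns are 400 times more probable if the prints
# share a source, so the evidence favors the same-source hypothesis.
print(f"LR = {likelihood_ratio:.0f}")
```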
This logic is fundamental and inescapable. It is the basis for any conclusions that examiners choose to report.
There are several schools of thought about how examiners should report their conclusions regarding the balance of probability. In this section of the article, we will outline the different approaches and discuss their strengths and weaknesses.
Likelihood Ratios. One approach that is popular in Europe allows examiners to use numbers called likelihood ratios to describe their perception of the balance of probabilities.13 The likelihood ratio represents the expert's view of the relative probability of the observed features under the alternative hypotheses about the source of the impressions. A likelihood ratio of 1000, for example, represents the expert's view that the observed patterns are 1000 times more probable under one hypothesis (e.g., same source) than under the alternative hypothesis. Experts typically make the favored hypothesis the numerator of the likelihood ratio so that reported values range from one to infinity. A value of one means the expert thinks the observed patterns are equally likely under the two hypotheses, and hence the evidence has no value for distinguishing the hypotheses. A value greater than one means the expert thinks the observed patterns are more likely under one hypothesis than the alternative, and thus the forensic evidence supports the favored hypothesis. The larger the likelihood ratio, the more strongly, in the expert's view, the balance of probabilities supports the favored hypothesis. European latent print experts sometimes report very high likelihood ratio values, such as one million or even ten million.
The European Network of Forensic Science Institutes (ENFSI) and the U.K. Royal Statistical Society promote the use of likelihood ratios to describe experts' assessments of the strength of forensic evidence.14 Many forensic scientists in Europe, New Zealand, and parts of Australia also have adopted this approach.15 The question most commonly asked about likelihood ratios is how the experts come up with the numbers they report. In some disciplines, experts can rely on databases and statistical modeling. This is most common in fields like forensic DNA analysis and forensic voice comparison, where extensive databases exist and methods for statistical modeling have been evaluated in the scientific literature.16 Likelihood ratios have been presented in the United States for many years in connection with forensic DNA evidence. The expert typically says something like the following:
The genetic characteristics observed in the evidentiary sample are X times more likely if the defendant was a contributor than if the contributor was instead a random unknown Caucasian.
In the past, there has been insufficient data on the rarity of the features observed by experts in most pattern-matching disciplines to allow statistical estimates, but that is starting to change. Recently the Defense Forensic Science Center (DFSC) of the Department of the Army began presenting probabilities in connection with fingerprint evidence. In March 2017, the laboratory announced that future reports would include statements like the following:
The latent print on Exhibit ## and the standards bearing the name XXXX have corresponding ridge detail. The probability of observing this amount of correspondence is approximately ## times greater when impressions are made by the same source rather than by different sources.17
The laboratory uses a software program to score the similarity of the prints being compared based on "the spatial relationship and angles of the ridge details."18 The program then uses a database to evaluate how much more common it is to observe a given similarity score when comparing prints from the same finger than prints from different fingers. Although this is a novel method that has not yet been adopted by other forensic laboratories, the DFSC has reportedly offered to share this software with any government forensic laboratory in the United States, and other labs are evaluating this approach. Similar software-based, quantitative methods for assessing toolmark and handwriting evidence also are under development, although it may be a few years before they are ready for the courtroom. As experts begin offering testimony based on these new methods in United States courtrooms, litigants are likely to challenge admissibility under the Daubert or Frye standards, which will require judges to scrutinize whether the new methods are reliable and generally accepted.
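The DFSC has not published the details of its software, so the following Python sketch is only a schematic illustration of how a score-based likelihood ratio of this general kind can be computed: the similarity score for the questioned pair of prints is referred to two reference distributions of scores, one from known same-source comparisons and one from known different-source comparisons. All data here are simulated assumptions, not fingerprint data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Schematic score-based likelihood ratio (NOT the DFSC's actual software).
# Simulate similarity scores from a hypothetical database of comparisons
# with known ground truth.
rng = np.random.default_rng(0)
same_source_scores = rng.normal(loc=0.85, scale=0.05, size=5000)  # assumed
diff_source_scores = rng.normal(loc=0.40, scale=0.10, size=5000)  # assumed

# Estimate the density of similarity scores under each hypothesis.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def score_based_lr(score: float) -> float:
    """How much more probable is this similarity score if the prints
    share a source than if they do not?"""
    return float(f_same(score)[0] / f_diff(score)[0])

# A high similarity score strongly favors the same-source hypothesis.
print(score_based_lr(0.80))
```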
Likelihood ratios also can be reported in forensic science disciplines that have not developed databases and statistical models. In those fields, experts may rely on their training and experience to come up with a likelihood ratio. In some instances, a likelihood ratio can be based partly on empirical data and partly on the expert's judgment.19 While some commentators have derided such estimates as "subjective" and questioned their validity (one commentator called them "numbers from nowhere"20), the practice of presenting likelihood ratios based on expert judgment (rather than a database) appears to have taken hold in many European countries.21 Whether such testimony should be admitted in the United States is an issue judges may soon need to contemplate.
Those who support the use of likelihood ratios based on expert judgment (rather than databases) point out that a forensic examiner must make subjective judgments of probability in order to draw any conclusions about whether two items have a common source.22 If the examiner does not know enough to assess the relevant probabilities, then the examiner does not know enough to evaluate the strength of the forensic evidence - and hence nothing the examiner says about the value of the evidence should be trusted. It makes no sense, proponents say, to allow experts to testify about conclusions they reached based on a subjective judgment of the balance of probabilities but not allow them to use a likelihood ratio to say what that judgment was. When experts report their judgments of the likelihood ratio, proponents argue, the expert's judgmental process is more transparent, and hence the value of the expert's conclusions is easier to evaluate.23
Verbal Equivalents of Likelihood Ratios. Examiners may nevertheless be reluctant to put specific numbers on their subjective judgments, even if those judgments are well grounded. An examiner may justifiably believe that the observed results are more probable if the items being compared have the same source than a different source, for example, without being able to say with any precision how much more probable. Forcing examiners to articulate numbers may lend a false air of precision to a subjective assessment.
One way to avoid this problem is to allow examiners to express conclusions about the balance of probabilities using words rather than numbers. In a 2012 report, a group of experts assembled by the National Institute of Standards and Technology (NIST) recommended that latent print examiners report their conclusions using statements like the following:
It is far more probable that this degree of similarity would occur when comparing the latent print with the defendant's fingers than with someone else's fingers.24
This approach allows examiners to substitute an imprecise verbal statement ("far more probable") for a number, while still explaining the strength of the forensic evidence in terms of the balance of probabilities. Of course lawyers can (and should) ask experts testifying in this manner to explain what they mean by statements like "far more probable" and what basis they have for that conclusion.
Another approach that has been popular in Europe substitutes words for numerical likelihood ratios. The U.K.-based Association of Forensic Science Providers (AFSP) has proposed that forensic scientists use the "verbal expressions" shown in Table 1 to describe how strongly their evidence supports a particular hypothesis about the evidence (e.g., the hypothesis that two items have a common source).25 Under this approach, forensic scientists first come up with a likelihood ratio that reflects their perception of the balance of probabilities, and then use one of the verbal expressions in the table instead of (or in addition to) the number to describe their conclusions in reports and testimony.
For example, a forensic scientist who concludes (by whatever means) that the results observed in a forensic comparison are 500 times more likely if the items have a common source than if they have a different source would report that the comparison provides "moderately strong" support for the conclusion that the items have a common source. A forensic scientist who concluded that the results are 100,000 times more likely if the patterns being compared have a common source would say that the evidence provides "very strong support" for the hypothesis of a common source. Statements of this type are not common in U.S. courts, but they have been discussed extensively in the academic literature.26 They offer one possible answer to the question of how to report source conclusions.
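A mapping of this kind is easy to state explicitly, as the sketch below shows. The cut-offs used here are consistent with the two examples just given (500 yields "moderately strong"; 100,000 yields "very strong") and with scales published in the literature, but the exact bands and wording vary among organizations and should be treated as illustrative assumptions, not the AFSP's official table.

```python
# Illustrative mapping from a likelihood ratio to a verbal expression.
# Bands and wording are assumptions consistent with the text's examples;
# published scales differ in their exact cut-offs.
VERBAL_SCALE = [
    (10, "weak"),
    (100, "moderate"),
    (1_000, "moderately strong"),
    (10_000, "strong"),
    (1_000_000, "very strong"),
]

def verbal_equivalent(lr: float) -> str:
    """Return the verbal expression for a likelihood ratio."""
    for upper_bound, phrase in VERBAL_SCALE:
        if lr < upper_bound:
            return phrase
    return "extremely strong"  # beyond the top band

print(verbal_equivalent(500))      # moderately strong
print(verbal_equivalent(100_000))  # very strong
```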
Match Frequencies / Random Match Probabilities. When a comparison reveals matching features in two items, forensic scientists sometimes estimate and report the frequency of the matching features in a reference population. This occurs most commonly in forensic DNA analysis, where genetic databases provide an empirical basis for assessing the proportion of a population that has a particular genetic feature. Forensic DNA analysts sometimes refer to these estimates as match frequencies (e.g., "The blood stain at the crime scene and the reference blood sample from the suspect have the same DNA profile. This profile is estimated to occur in one person in 10 million among Caucasian-Americans."). Alternatively, they may present these estimates as random match probabilities (RMPs) (e.g., "The probability that a random Caucasian-American would match this DNA profile is 0.0000001 or 1 in 10 million."). As forensic scientists develop databases that can be used to quantify the rarity of pattern features, we are likely to see similar testimony in other pattern-matching disciplines.
Even without empirical data, experts sometimes make statements about the random match probability based on training and experience. These subjective match probabilities are typically reported with words rather than numbers. An examiner might say, for example, that the set of features shared by two items is "rare" or "unusual."
One drawback of this approach is that it addresses only one of the two questions needed to evaluate the balance of probabilities reflected in the likelihood ratio. It addresses the probability of the observed patterns under the hypothesis that they have a different source. It fails to consider the probability of the observed patterns if the impressions have the same source. Consequently, this approach may be misleading in cases in which the latter probability is low, when, for instance, the patterns have important discrepancies as well as similarities. Likelihood ratios, which consider both probabilities, arguably offer a more balanced and complete account of the strength of such evidence.
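A small numerical sketch makes the point. Reporting only a random match probability implicitly treats the probability of the observed correspondence under the same-source hypothesis as if it were 1; when that probability is actually low (say, because of discrepancies), the evidence is weaker than the RMP alone suggests. All values below are hypothetical.

```python
# Why an RMP alone can overstate the evidence: it addresses only the
# different-source probability and implicitly assumes the same-source
# probability is 1. Hypothetical values throughout.

rmp = 1e-7                    # P(observed features | different source)
p_obs_if_same_source = 0.30   # low: discrepancies are hard to explain

implied_lr = 1.0 / rmp                  # what the RMP alone suggests: 10,000,000
actual_lr = p_obs_if_same_source / rmp  # 3,000,000: weaker than implied
print(implied_lr, actual_lr)
```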
Source Probabilities. In the United States, forensic examiners often present opinions on the probability that two items have a common source. Opinions of this type can be expressed quantitatively, using probabilities or percentages. For example, a forensic scientist might say there is a 99 percent chance that two items have a common source. It is more common, however, for examiners to express such conclusions with words rather than numbers. For example, the forensic scientist might say it is "moderately probable," "highly probable," or "practically certain" that two items have a common source.
Lawyers and judges tend to like source probabilities because they are easy to understand; they address the exact question the trier of fact needs to assess: how likely is it that the two impressions (e.g., two fingerprints) come from the same source? The problem, unfortunately, is that the information forensic scientists can glean from a comparison of impressions is not, by itself, sufficient to allow them to reach conclusions about source probability. As we will explain, examiners can logically draw conclusions about source probabilities only by combining conclusions drawn from a comparison of the impressions with assumptions or conclusions about the strength of other evidence that bears on the question of whether the impressions being compared have a common source.27
To illustrate, consider the Elvis Problem discussed in the sidebar. What is the probability that Elvis Presley was the source of the evidence left at the crime scene? As explained, this question cannot be answered based on the forensic science evidence alone. It is only by making assumptions or drawing conclusions about the likelihood of Elvis being at the crime scene - a matter having nothing to do with the forensic science evidence - that the forensic examiner can draw conclusions about the probability that Elvis was the source.
The same problem arises whenever forensic scientists express opinions on source probabilities. The opinion must, of logical necessity, depend in part on conclusions or assumptions about matters having nothing to do with forensic science, such as whether the person who is alleged to have left a trace (e.g., a fingerprint or shoeprint) at the crime scene is a likely or unlikely suspect and how many other people had access to the crime scene. Forensic examiners are not in a good position to make such judgments and have no business doing so anyway.
Identification and Exclusion. In the United States, the most popular method of reporting results of forensic comparisons is to state a bottom-line conclusion about whether two traces have a common source. The conclusion that two traces have the same source is often described as "identification" or "individualization," while a conclusion that they have a different source is "exclusion." These conclusions can be seen as extreme examples of source probabilities, corresponding to either a 100 percent or a zero percent chance that the traces being compared have the same source.
The demise of the theory of discernible uniqueness has made these conclusions more difficult to justify. Most experts now acknowledge that these conclusions require the examiner to make a decision about whether the evidence is strong enough to support a definitive conclusion, but there does not appear to be a generally accepted theory regarding how experts should make that decision.
One approach requires experts to make an assessment of the source probability. They report "identification" when their assessed source probability exceeds some high threshold and "exclusion" when their assessment falls below some low threshold. As discussed in the previous section, however, the assessment of source probabilities requires the expert to make assumptions or draw conclusions about matters beyond the forensic comparison in question. Experts cannot draw conclusions about source probabilities without facing the Elvis Problem, which renders such conclusions problematic. If courts allow experts to present conclusions reached in this manner, they should also require experts to disclose the factual basis for their asserted source probabilities. To evaluate the expert's conclusion, the trier of fact will need to know the extent to which the expert's decision was influenced by assumptions or conclusions about matters beyond the realm of forensic science.
To avoid the Elvis Problem, forensic scientists might instead base their decision on their judgment of the balance of probabilities. If they believe the balance weighs strongly enough in favor of the hypothesis that the items being compared have the same source, then they might report "identification." If they believe the balance weighs strongly enough in favor of the hypothesis that the items have a different source, then they might report "exclusion." This approach avoids the need for the expert to evaluate source probabilities, but it still raises many questions. In order to understand the expert's conclusions, the trier of fact will need to know how the expert evaluated the relevant probabilities, and how, where, and why the expert set the threshold for reporting a particular decision. The trier of fact also will need information about the accuracy of decisions reached in this manner.
In the past, expert forensic science testimony about "identification" and "exclusion" often went unchallenged, with lawyers on both sides assuming such testimony was reliable and uncontroversial. As lawyers become more aware of the issues discussed in this article, we expect they will examine the logic and basis of such conclusions far more closely than they have in the past.
Imagine that a bloodstain of recent origin is found at the scene of a crime. Imagine further that the DNA profile of the bloodstain is somehow determined to be the same as the DNA profile of rock-and-roll legend Elvis Presley. Finally, imagine that the DNA profile in question is one million times more likely to be observed if the sample came from Elvis than if it came from a random person. Based on the DNA evidence, what can the examiner logically infer about the probability that the crime scene stain came from Elvis Presley?
A moment of reflection should be sufficient to realize that the examiner can draw no conclusion about the probability that the crime scene stain came from Elvis based on the DNA evidence alone; the examiner must also consider other matters, such as whether Elvis could plausibly be the source. In this case, the suspect (Elvis) has a strong alibi: he was widely reported to have died in 1977. If the forensic scientist believes this "alibi," then the probability that the bloodstain came from Elvis is necessarily zero.
An examiner who believes Elvis is dead might decide to report that there is a zero percent chance the crime scene sample came from Elvis. Notice, however, that this conclusion is not based on the strength of the DNA evidence. It depends entirely on the expert's assessment of matters beyond the realm of forensic science - in this case, Elvis's alibi.
The expert might try to take a neutral position on the alibi - assuming, for example, that the question of whether Elvis could have been the source is a toss-up or 50:50 chance. When this seemingly neutral assumption about the truth of the alibi is taken as a starting point, the expert can update the initial assessment in light of the DNA evidence. That approach leads logically to the conclusion that there is more than a 99 percent chance that Elvis was the source of the blood.28 Notice, however, that this conclusion depends only partly on the DNA evidence; it also depends critically on the assumption of a 50 percent chance a priori that the blood at the crime scene came from Elvis (an assumption many people will view as fanciful). Should forensic scientists be basing their conclusions on assumptions of this type? The problem (as should now be clear) is that no assumption about the probability of an alibi's veracity can truly be considered "neutral." Yet without some assumption about the probability of the alibi's veracity, there is no logical way to assess the probability that Elvis was the source.
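The arithmetic behind that conclusion is ordinary Bayesian updating, sketched below with the numbers from this hypothetical: a 50:50 prior combined with a likelihood ratio of one million.

```python
# Bayesian updating behind the Elvis example.
prior_prob = 0.5                          # the "neutral" 50:50 assumption
prior_odds = prior_prob / (1 - prior_prob)  # odds of 1
likelihood_ratio = 1_000_000              # from the hypothetical DNA comparison

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
# ~99.9999%: driven jointly by the DNA evidence and the prior assumption.
print(f"{posterior_prob:.6%}")
```

Changing the prior changes the answer: the same likelihood ratio applied to a prior of one in a billion (if the alibi is believed almost certainly true) yields a posterior probability well under one percent, which is the Elvis Problem in numerical form.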
This same logical conundrum arises in any case in which a forensic scientist is asked to assess the probability that a particular suspect was the source of a crime scene sample based on a forensic comparison. The expert can never answer the question based solely on the forensic evidence. Inevitably the expert must make assumptions or take a position on other matters, such as the probability that the suspect's alibi is true. Doing that may well invade the jury's province; it certainly requires the expert to delve into matters beyond his or her scientific expertise. Consequently, judges should consider carefully whether to admit statements about source probabilities into evidence. If such statements are admitted, judges (and lawyers) should try to make clear to the jury the extent to which the expert's conclusions depend on comparison of the items in question, and the extent to which they depend on assumptions or conclusions about other matters.
Regardless of how forensic scientists choose to present their conclusions, we also expect in the near future to see more testimony about the error rates of pattern-matching disciplines. The 2016 PCAST report argued forcefully that empirical research is the only way to assess the accuracy (and hence the probative value) of examiners' source conclusions:
Without appropriate estimates of accuracy, an examiner's statement that two samples are similar - or even indistinguishable - is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact. Nothing - not training, personal experience nor professional practices - can substitute for adequate empirical demonstration of accuracy.29
PCAST called for a continuing program of research in which examiners are tested by having them compare samples from known sources. PCAST recommended that the samples used in the research be representative of the samples encountered in casework, that examiners have no information about the correct answer, that multiple studies be conducted by independent groups with no stake in the outcome, and that the data be available to other scientists for review.30 Courts will need to consider the results of such studies when deciding whether testimony about forensic comparisons is sufficiently trustworthy to be admitted - whether, in the words of Rule 702(c) of the Federal Rules of Evidence, it is "the product of reliable principles and methods."31 When such testimony is admitted, error-rate data will be relevant for assessing its probative value. PCAST suggested that testimony about the results of forensic comparisons should always be accompanied by testimony about the error rates that research has shown for the relevant forensic method on samples like those in the case at hand. Experts are likely to be asked about error rates during cross-examination even if the proponent of the forensic evidence elects not to present error-rate data in direct testimony. Lawyers are likely to debate the implications and significance of error-rate data for evaluating the probability that an error occurred in the case at hand.
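One simple consequence of the likelihood-ratio logic described earlier (our illustration, not a formula taken from the PCAST report) is that error-rate data cap the probative value of a categorical conclusion: a reported "identification" cannot rationally convey a likelihood ratio greater than the method's sensitivity divided by its false-positive rate. A sketch, using the two study-based error rates quoted earlier in this article:

```python
# How error-rate data bound the probative value of a reported
# "identification." If examiners report identifications for
# different-source pairs at some false-positive rate, the likelihood
# ratio of a reported identification is at most
# sensitivity / false_positive_rate.

def max_lr_of_identification(false_positive_rate: float,
                             sensitivity: float = 1.0) -> float:
    """Upper bound on the LR conveyed by a reported identification.
    sensitivity = P(reported ID | same source), at most 1."""
    return sensitivity / false_positive_rate

print(max_lr_of_identification(1 / 306))  # FBI-study figure: LR at most ~306
print(max_lr_of_identification(1 / 18))   # other-laboratory figure: at most 18
```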
We are on the cusp of a new era for forensic science - an era in which statistics will inevitably play a greater role. Oliver Wendell Holmes once declared that "the man of the future is the man of statistics . . . ."32
In the pattern-matching disciplines of forensic science, that future has arrived.