Computational Tools in Forensic Anthropology: The Value of Open-Source Licensing as a Standard

Jeffrey James Lynch
Carl N. Stephan

Abstract

Over the last 40 years, the number of computational tools in forensic anthropology has increased considerably. No longer are forensic anthropologists limited to the narrow suite of programs previously available, including Snow and Folk’s commingling probability calculator (1970), Maples’s more comprehensive “Forensic Anthropology” suite (1986), and Jantz and Ousley’s Fordisc (1993). At least 48 different tools have now been independently developed, in part due to newly formulated programming languages and the generally broader computer literacy of researchers in the field. While new tools enable a broader range of tasks to be addressed, many are closed source, some are not provided for community use, and others lack appropriate version control. Therefore, while the popularity of microcomputer analytical tools is increasing, so too is their variation. Moreover, external validation of microcomputer test results lags behind development because the source code is often inaccessible to the larger community. Open-source licensing, as a standard, provides a path toward robust development, auditing, and validation of analytical tools; it encourages agile development, version control, and community-wide collaboration to facilitate reliability, which is favorable in the post-Daubert context. For a niche field with a relatively small user base, and therefore unlikely to attract the large commercial investment needed for thorough beta testing, open-source licensing offers major benefits. This article focuses attention on these issues, advocates for open-source licensing of new and existing tools, and provides the first detailed review of microcomputer use in forensic anthropology since Maples’s first account in 1986 (some 32 years ago).

Article Details

Section
Review Articles