
Digital Forensics and the Law
by Doug Carner (Published October 2018, all rights reserved)

In 2006, the U.S. Department of Justice sought to quantify the public's confidence in forensic science. One resulting study found that jurors trusted forensic evidence far more than the testimony of police officers, eyewitnesses, or victims.

The public's blind faith has been reinforced by television and movies that depict forensic science as both unlimited and infallible. The reality is that classic forensic practices originated in the field, far from labs and scholars, and were based on unproven assumptions. The need for investigative tools outpaced the peer-reviewed science required to validate those tools. The end result has been wrongful convictions based upon junk science that has since been codified into case law.

For example, polygraphs have existed for nearly a century, have been at the center of high-profile convictions, and are generally perceived by the public as trustworthy. That perception is hard to square with the well-documented cases of subjects cheating the polygraph to produce a desired result, or with the National Academy of Sciences' determination that polygraphs are unreliable, unscientific, and biased. Despite these facts, public confidence in this pseudo-science remains strong.

Fingerprint analysis had a similar origin. After a century of review and refinement, however, it has become an accepted science. Still at issue is its high rate of false matches, due primarily to the assumptions the reviewing analyst must make.

In my specialty of audio-video enhancement and authentication, practitioners routinely apply judgment-based assumptions and filters without an adequate level of training or peer review. My colleagues boast about how they just figure it out as they work. Unfortunately, the courts have accepted prior work from these "experts," so their unsubstantiated results continue unchallenged under those precedents. The result has been a proliferation of flawed evidence accepted as fact by the courts.

In 2013, I testified in a case that centered on altered video evidence. That opinion was easy to prove because the manipulations were crude and obvious. Later that same year, I testified in a federal case in which the hiring attorney had been given only a subtly altered video during discovery, and opposing counsel then moved for summary judgment because our opinion relied upon that flawed video. Although the decision was overturned on appeal, the manipulated evidence should never have passed through the court as legitimate.

The unfortunate reality is that, among all legal cases in the United States, faulty forensic science is second only to faulty eyewitness testimony as a leading cause of wrongful convictions. There are serious and harmful consequences when faulty forensic science reaches the courtroom, and it is unreasonable to expect judges to possess the expertise in every area of science needed to make that determination.

To avoid forming flawed opinions, analysts must test everything and assume nothing. Unfortunately, video authentication testing was still in its infancy in 2013, and analysts generally had to accept the integrity of their discovery on faith. In response, scientists and practitioners of the Forensic Working Group developed a scientific approach to evaluating the authenticity of recorded multimedia files. This led to a suite of relevant peer-reviewed tests compiled into a logical workflow, called the Multimedia Authentication Testing form, or MAT for short.

The MAT provides a roadmap that assists the analyst in assessing key file metrics and characteristics, including hidden metadata. When the test results are compared against known case facts and established manipulation models, the analyst can form a fact-based opinion regarding the tested file's evidentiary trustworthiness.
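
By way of illustration, the Python sketch below shows two first-pass checks that any authentication workflow of this kind begins with: fixing the file's identity with a cryptographic hash, and dumping the container's declared metadata for review. It is a minimal sketch, not the MAT itself, and it assumes the ffprobe utility (part of FFmpeg) is installed; the exhibit file name is hypothetical.

import hashlib
import json
import subprocess

def sha256(path):
    # Hashing at intake lets any later analyst prove the evidence file
    # has not changed since it entered the chain of custody
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def container_metadata(path):
    # ffprobe reports the container's declared encoder, creation time, and
    # stream layout, which can then be compared against the claimed
    # recording device and the case timeline
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

print(sha256("exhibit_12.mp4"))              # hypothetical exhibit name
print(json.dumps(container_metadata("exhibit_12.mp4"), indent=2))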

For example, suppose a case relies upon an audio recording of an incriminating admission, but the accused claims that the recording was edited after the fact. Only the interviewer and the accused were present at the time of the recording, so rather than hoping for a favorable courtroom battle over testimonial credibility, science can search for provable signs of content manipulation.

Locard's exchange principle states that "it is impossible for a criminal to act, especially considering the intensity of a crime, without leaving traces of this presence." Since the behavioral characteristics of microphones are well established, tampering events such as stop-start recording, editing, resaving, speed changes, and muting should leave behind measurable trace evidence.

For example, an audio recording may have captured a rhythmic sub-audible sound (e.g., a seemingly silent 60 Hz electrical harmonic) that can be isolated and amplified. Any disruption in this otherwise predictable sound pattern would serve as compelling evidence of editing, and such a finding would damage the credibility of all evidence originating through the same chain of custody. Favorable dispositions are almost always assured when damaging evidence can be excluded or the opposing side discredited. A minimal sketch of this kind of continuity check follows.
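
The Python below isolates a nominally steady 60 Hz mains component and flags spots where its envelope collapses. It assumes a mono WAV file and the SciPy library; the file name and the 10% threshold are illustrative assumptions, not calibrated forensic values.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, hilbert

rate, audio = wavfile.read("interview.wav")   # hypothetical mono recording
audio = audio.astype(float)

# Narrow band-pass around 60 Hz (use 50 Hz in regions with a 50 Hz grid)
sos = butter(4, [58, 62], btype="band", fs=rate, output="sos")
hum = sosfiltfilt(sos, audio)

# The hum's envelope should be roughly flat; deep notches suggest an edit
envelope = np.abs(hilbert(hum))
suspect = np.flatnonzero(envelope < 0.1 * np.median(envelope))
print("suspect regions (seconds):", np.unique(np.round(suspect / rate, 1)))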

In one of my early cases, I examined a telephone recording that sounded natural. In a spectral view (frequency over time), however, I observed a shift in the background noise during the incriminating portion of the recording. Upon deeper examination, it became obvious that this portion of the recording had originated from a different audio file. At that moment, I felt like a super-detective, and my career choice was forever solidified.

Metadata within, or about, a tested file can help determine the file's origin and chain of custody. I remember a case in which a plaintiff and an eyewitness accused the defendant of initiating an altercation. The eyewitness even produced a corroborating video the following day. The mobile-phone video appeared authentic, and the case was presumed open and shut.

Upon forensic inspection, I found that the recording's GPS metadata deviated from the incident address given in the police report. Opposing counsel responded with an article explaining how a GPS discrepancy can result from a lag in the phone's receipt of updated GPS satellite signals. Suspicion grew, however, once I proved that the video's GPS coordinates matched the home address of the plaintiff, who happened to be a professional video editor.
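
A simplified version of that comparison can be scripted. The sketch below reads the standard EXIF GPSInfo fields from a still image (phone videos store location differently, typically as a container tag) and measures the distance to the reported incident location. It assumes a recent version of the Pillow library; the file name and coordinates are hypothetical.

import math
from PIL import Image

def dms_to_degrees(dms, ref):
    # EXIF stores latitude/longitude as (degrees, minutes, seconds) rationals
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def gps_from_photo(path):
    gps = Image.open(path).getexif().get_ifd(0x8825)  # 0x8825 = GPSInfo IFD
    lat = dms_to_degrees(gps[2], gps[1])  # tags 1/2: latitude ref and value
    lon = dms_to_degrees(gps[4], gps[3])  # tags 3/4: longitude ref and value
    return lat, lon

def distance_m(p, q):
    # Haversine great-circle distance in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

incident = (34.0522, -118.2437)            # hypothetical incident location
print(round(distance_m(gps_from_photo("witness_photo.jpg"), incident)), "m")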

Authentication testing can identify the originating recorder's make and model, and can sometimes identify the specific unit that made the recording. This is accomplished through analysis of compression tables, file metadata, comparisons to recordings found in social media posts, and characteristics unique to the specific recording device. Such tests can be performed years after the fact, even when the original recording device has been lost or destroyed.

For example, photo response non-uniformity (PRNU) refers to the natural microscopic defects in the silicon wafer used to create a camera's imager. The relative positions of those defects create a complex and unique watermark that gets imprinted onto every recording made by that device. Because the defects within the imager can be represented as a relative pattern, PRNU testing is able to survive repeated image processing. This capability makes PRNU testing comparable to the identity analysis applied to ballistic markings and DNA profiling.
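
The core of the technique can be sketched in a few lines of Python. Production tools use wavelet denoising and peak-to-correlation-energy statistics; the Gaussian blur and plain normalized correlation below are simplifying stand-ins, and the images are assumed to be same-sized grayscale float arrays.

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    # The sensor pattern rides in what a denoiser removes:
    # residual = image - denoised(image)
    return img - gaussian_filter(img, sigma=1.5)

def camera_fingerprint(images):
    # Average the residuals of many flat, well-lit frames from one camera;
    # scene content cancels out while the fixed sensor pattern reinforces
    return np.mean([noise_residual(im) for im in images], axis=0)

def similarity(fingerprint, query_img):
    # Normalized cross-correlation between the fingerprint and the query
    # image's residual; values near zero suggest an unrelated camera
    a = fingerprint - fingerprint.mean()
    b = noise_residual(query_img)
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))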

In ruling on the 1993 case of Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, the U.S. Supreme Court set the Daubert standard for evaluating the scientific credibility of an expert's methodology: it must be generally accepted in the scientific community, have undergone peer review or publication, have been (or be able to be) tested, and have a measurable error rate.

PRNU was first presented to the scientific community at the 2005 SPIE conference. It was subsequently published and peer-reviewed, which validated its error-rate parameters. PRNU became codified in a 2011 Alabama case (United States of America v. Nathan Allen Railey), where it survived its first Daubert challenge and was used to prove that the images in that case originated from the suspect's camera.

Common examples of authentication methods that have been codified include analysis of browser history and cookies to retrace an individual's internet activity, prior cell-tower pings to track a user's movements, vehicular infotainment systems to reconstruct how and where someone drove, and temporal Electric Network Frequency (ENF) discrepancies across the power grid to determine where and when a given audio recording was produced.
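
The ENF technique lends itself to a short sketch. The grid frequency wanders by tens of millihertz, and grid operators log that wander; matching the wander measured in a recording against such a log can date the recording. The Python below assumes the hum and rate variables from a band-pass step like the earlier sketch, plus a once-per-second reference log from a hypothetical source.

import numpy as np

def enf_track(hum, rate, win=1.0):
    # Dominant frequency in each one-second window, via an FFT peak with
    # parabolic interpolation (ENF wander is tens of millihertz, far below
    # the 1 Hz resolution of the raw FFT bins)
    n = int(rate * win)
    freqs = np.fft.rfftfreq(n, 1 / rate)
    track = []
    for start in range(0, len(hum) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(hum[start:start + n]))
        k = int(np.argmax(spectrum))
        a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
        shift = 0.5 * (a - c) / (a - 2 * b + c) if (a - 2 * b + c) else 0.0
        track.append(freqs[k] + shift * (freqs[1] - freqs[0]))
    return np.array(track)

def best_alignment(track, reference):
    # Slide the recording's ENF track along the reference log, keeping the
    # offset (in seconds) with the highest correlation
    scores = [np.corrcoef(track, reference[i:i + len(track)])[0, 1]
              for i in range(len(reference) - len(track) + 1)]
    return int(np.argmax(scores)), float(max(scores))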

Having access to all this information raises an interesting legal question: is the analyst committing a crime under the 1984 Computer Fraud and Abuse Act, 18 U.S.C. § 1030(a), simply by extracting this data? It is unlikely that the end user explicitly authorized the analyst to access this identifying data; they may not even have been aware that such data existed. At best, the governing laws are vague and, with the Fifth and Fourteenth Amendments to the United States Constitution as reference, the courts generally find that vague laws violate a citizen's right to due process. While this challenge has yet to be tested in federal court, it serves as a reminder that science evolves faster than the legal landscape it serves.

Courts change slowly, and judges are reluctant to rule against legal precedent or to hear a challenge, even when prior rulings are found to have been based upon faulty science. In response, attorneys are hiring experts to elevate their evidence even when its integrity was never in question. This allows their opening statement to include: "This case relies upon recorded evidence. Opposing counsel would prefer that you trivialize it during your deliberations, so they never had it tested. We had this evidence analyzed by an independent forensic expert, who concluded that the recordings in this case are authentic and trustworthy." This tactic is effective, and it has become the fastest-growing segment of my business. When authentication testing is neglected, the evidence is left vulnerable to exactly this attack.

An example of this was a restaurant slip-and-fall case where, in the absence of eyewitness testimony, the case hinged upon video clips exported from different surveillance camera views. Each clip spanned a brief period, and none of them captured the actual incident. Although the originating DVR model was known, the site's DVR unit no longer existed, nor did the original recordings it had contained. At question was why the video clips missed the incident.

The plaintiff's expert dismissed motion-detection recording as a possible explanation, because ongoing motion was still visible as the video clips ended. The expert then concluded that the incident was absent from the exported clips because of either selective video exporting or post-production editing.

As a result, the plaintiff asserted that the defendant had intentionally omitted incriminating evidence, engaged in evidentiary spoliation, and perpetrated a fraud upon the court. The plaintiff's expert had decades of experience and an extensive CV detailing relevant training and prior testimony. On paper, this expert had no equal, and it came as no surprise that the court accepted their qualifications and opinions as fact. The problem is that prior training and testimony on a given subject do not guarantee a correct understanding of it.

As the rebuttal expert, I used the MAT method to prove that the video clips were trustworthy and had not been manipulated. I then used the visual contents of those clips to work backwards and derive both the motion-detection zones and their pre-record/post-record settings, all of which exactly matched the setting options defined in the user manual for the originating DVR model. Next, I showed that the settings calculated from each video clip validated the calculations from every other clip. Finally, I used the raw data within the opposing expert's own report to confirm my findings, proving that the other expert's opinions were based upon unsubstantiated assumptions. The case settled favorably after my deposition.
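
The arithmetic behind that cross-check is simple enough to sketch. If a DVR exports a fixed pre-record interval before in-zone motion begins and a fixed post-record interval after it ends, every clip's boundaries should fit the same two settings. The Python below uses hypothetical timings (seconds from each clip's start, with motion judged visually); agreement across clips supports a motion-triggered export rather than editing.

clips = [
    # (clip_length, first_in_zone_motion, last_in_zone_motion) in seconds
    (34.0, 4.0, 24.0),
    (51.0, 4.0, 41.0),
    (22.0, 4.0, 12.0),
]

# Settings implied by each clip; a genuine motion-triggered export
# should yield one shared value per column
pre_record = [first for _, first, _ in clips]
post_record = [length - last for length, _, last in clips]

print("pre-record estimates :", pre_record)    # here: [4.0, 4.0, 4.0]
print("post-record estimates:", post_record)   # here: [10.0, 10.0, 10.0]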

Within the field of multimedia, remaining current with ever-evolving technology is critical to an analyst's effectiveness. Until a test becomes Daubert compliant, the analyst must either apply some unverified assumption or apply the unverified test; the latter is generally favored since it is easier to defend in the courtroom.

The future is unlikely to resolve these challenges as audio and video recording equipment becomes more complex. Although numerous public and private organizations see the need for industry oversight, it is unlikely to materialize, given their competing interests and those of software vendors seeking to protect their proprietary solutions.

Within my profession, the real gatekeepers are the forensic working groups and the accredited certifications, as both require continuing education and testing. If an expert has not been certified, or re-certified, within the last few years, their knowledge may be seriously outdated. The same warning applies to experts whose only recent qualifications are public speaking and lecture or workshop attendance; they may have been texting, confused, or otherwise disconnected from the educational content claimed on their CV. Right or wrong, it often falls to the attorneys to evaluate the case experts and the opinions they present.
