When evidence is open to interpretation, opposing counsel can control the narrative and shift the case into a battle of expert testimony. Rather than placing your client’s fate in someone else’s hands, consider strengthening your audio-video exhibits into unshakable evidence. There are two methods to accomplish this: enhancement and clarification.
Since its inception, audio and video enhancement has been achieved through a series of independent modifications. Noise is averaged into usable sections, effectively destroying data to aid perceived clarity, and resolution is increased through interpolation, effectively inventing evidence. Enhancement can also introduce unwanted artifacts (amplified sounds, thickened object trace lines, unrealistic colors, etc.) and is similar to an artist using a paintbrush and special lighting to retouch a picture.
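To make the interpolation criticism concrete, here is a minimal sketch (the `lerp` helper and the sample values are hypothetical illustrations, not any product's algorithm) showing how upscaling manufactures pixel values that were never recorded:

```python
# Minimal sketch: upscaling by linear interpolation "invents" pixel values
# that exist in no original frame. Helper and values are illustrative only.

def lerp(a, b, t):
    """Linear interpolation between two recorded values."""
    return a + (b - a) * t

# Two genuine sensor readings...
recorded = [10, 30]
# ...and an "enhanced" in-between value that the camera never captured.
invented = lerp(recorded[0], recorded[1], 0.5)
print(invented)  # 20.0 -- plausible, but fabricated
```

The interpolated value is a reasonable guess, which is precisely the problem: it is a guess presented with the same visual authority as real data.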
Video enhancement remains commonplace because it is long established and requires only minimal computing power. Since enhancement is highly judgment-based, both the results and the likelihood of a court challenge depend upon the expert’s experience, tools, and patience when working on your case.
In early 2012, neighborhood watchman George Zimmerman killed Trayvon Martin, a black teenager, after a presumed altercation. Although a local news event, it ignited worldwide racial tensions. The only evidence of Zimmerman’s claimed injuries was a grainy video from Florida’s Sanford Police Department.
Several sources attempted enhancements to show the presence or absence of wounds on Zimmerman’s head. Although working from the same video, each enhancement depicted a different version of the truth. ABC was the only agency that reached out to a clarification lab. The clarified results were featured on Good Morning America and ended the speculation. You can see those clarified images at ForensicProtection.com and through a Google image search.
Clarification uses sophisticated automatic algorithms to help rebuild details lost during the original capture and recording process. This is analogous to solving a Sudoku puzzle or wearing prescription eyeglasses. The centerpiece of clarification is recursive temporal-space-frequency data reconstruction. In optimal circumstances, clarification pairs any two of time, frequency, and intensity, then uses each pair as a training set to repair the third. Every step is co-dependent, and restoring a single minute of video may require extensive computing power.
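The temporal side of that idea can be illustrated with a toy example. The median filter below is only a stand-in for the recursive temporal-space-frequency method described above, and the pixel values are invented for the demonstration; the point is that pooling repeated observations of the same pixel suppresses noise without inventing data:

```python
# Toy illustration of the temporal principle: a static pixel observed
# across many frames can be restored by pooling those observations.
# This simple median is NOT the proprietary method described above.

def temporal_median(frames, x, y):
    """Median of one pixel position across a stack of frames."""
    samples = sorted(f[y][x] for f in frames)
    n = len(samples)
    mid = n // 2
    return samples[mid] if n % 2 else (samples[mid - 1] + samples[mid]) / 2

# Three noisy observations of the same 1x2 scene; true values are 100 and 50.
frames = [[[104, 47]], [[100, 50]], [[97, 55]]]
print(temporal_median(frames, 0, 0))  # 100
print(temporal_median(frames, 1, 0))  # 50
```

Every output value is an actual recorded sample, which is why this family of techniques preserves the facts rather than repainting them.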
Clarification suppresses noise and improves clarity while preserving the facts, and its automation yields extremely accurate and reproducible results. Thus, clarification reduces the need for experts to defend their work at trial. Unfortunately, the switch from enhancement artist to clarification engineer can be a difficult transition; many experts thrive on the testimonial income that the vagueness of enhancement generates.
Video clarification can be applied to every surveillance recording. It can recover a blurry license plate and brighten the darkest nighttime scene. Clarification can be used to restore VHS tapes, zoom in on a suspect’s hands, definitively show whether a DUI stop was justified, or support facial identity matching.
Similarly, audio clarification can isolate human speech and suppress distracting sounds. Once clarified, a voice print can be created to identify who said what. Emerging advances in audio software will soon isolate a specific voice from a crowd using voice pattern matching and spatial directionality.
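One simple building block of speech isolation is band-limiting: most speech energy sits roughly between 300 Hz and 3400 Hz (the classic telephone band). The sketch below is an assumption-laden toy, not a forensic tool; real clarification software uses adaptive, far more sophisticated filters. It removes a 60 Hz mains hum while leaving an in-band tone (a stand-in for a voice) intact:

```python
import numpy as np

# Toy speech-band filter: zero spectral energy outside ~300-3400 Hz.
# Illustrative only; not a production speech-isolation algorithm.

def bandpass(signal, rate, lo=300.0, hi=3400.0):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

rate = 8000
t = np.arange(rate) / rate
speech = np.sin(2 * np.pi * 1000 * t)   # in-band tone (stand-in for voice)
hum = np.sin(2 * np.pi * 60 * t)        # out-of-band mains hum
cleaned = bandpass(speech + hum, rate)
# The 60 Hz hum is removed; the 1 kHz component survives essentially intact.
```

Voice-print comparison and voice pattern matching operate on exactly this kind of cleaned signal, which is why clarification precedes speaker identification.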
PROBLEM: Surveillance cameras are rarely in focus at the time of the target event.
Enhancement is manual (judgment-based) calibration that applies Gaussian sharpening (e.g., an unsharp mask), which relies upon a single focal point with a long tail.
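A minimal sketch of that judgment-based approach, using a 1-D scanline and a 3-tap box blur as a stand-in for the Gaussian (both the blur width and the "amount" are operator choices, which is exactly why enhancement results vary between experts):

```python
# Unsharp masking: subtract a blurred copy from the original and add the
# difference back, scaled by a hand-picked "amount". Parameters are
# operator judgments, not derived from the data.

def unsharp_1d(row, amount=1.0):
    """Sharpen a 1-D scanline; a 3-tap box blur stands in for a Gaussian."""
    blurred = [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
        for i in range(len(row))
    ]
    return [v + amount * (v - b) for v, b in zip(row, blurred)]

edge = [10, 10, 10, 90, 90, 90]
# The sharpened edge overshoots below 10 and above 90 -- values the
# camera never recorded, i.e. the "thickened trace lines" problem.
print(unsharp_1d(edge))
```

The overshoot at the edge is the hallmark of this technique: the edge looks crisper, but the extreme values are manufactured.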
Clarification is applying circular-focus correction to every pixel of every frame (it is, after all, the reverse of the cause), using automatic cross-focus-point calibration similar to a cell phone camera.
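The "reverse of the cause" idea can be sketched with Wiener deconvolution against a circular (disk) point-spread function. This is a simplified model, and the PSF radius and noise constant `k` are assumptions fixed by hand here, whereas production tools estimate them automatically per frame:

```python
import numpy as np

# Sketch: defocus modeled as convolution with a disk PSF, then inverted
# with Wiener deconvolution. Radius and k are hand-picked assumptions.

def disk_psf(size, radius):
    y, x = np.mgrid[:size, :size] - size // 2
    psf = (x**2 + y**2 <= radius**2).astype(float)
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    H = np.fft.fft2(np.fft.ifftshift(psf))      # PSF centered at origin
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G   # regularized inverse filter
    return np.real(np.fft.ifft2(F))

size = 32
scene = np.zeros((size, size))
scene[12:20, 12:20] = 1.0                       # a bright square "object"
psf = disk_psf(size, radius=3)
blurred = np.real(
    np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf)))
)
restored = wiener_deblur(blurred, psf)
# restored is far closer to the true scene than blurred is.
```

Because the correction is derived from a physical model of the blur rather than an artistic choice, re-running it on the same frame always produces the same answer.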
PROBLEM: Inadequate illumination of a nighttime scene in the rec.601 YV12 color space (a common surveillance format that stores only 220 shades of brightness and only 33% of the RGB color data).
Enhancement is performed with manual (judgment-based) changes to brightness (a static addition or subtraction to illumination values) and contrast (frequency amplification between adjacent pixels), which can destroy illumination data at the maximum and minimum levels and cause illumination blow-out.
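Why a static brightness change destroys data can be shown in a few lines. The helper and sample values below are illustrative assumptions; the 235 ceiling is the rec.601 studio-swing luma maximum mentioned above:

```python
# A fixed brightness offset pushes distinct highlight values into the
# same clipped maximum; no later step can tell them apart again.

def brighten(pixels, offset, max_level=235):  # rec.601 luma ceiling
    return [min(p + offset, max_level) for p in pixels]

highlights = [200, 215, 230]        # three distinct recorded levels
blown_out = brighten(highlights, 40)
print(blown_out)  # [235, 235, 235] -- the differences are gone for good
```

This is the "illumination blow-out" referred to above: the operation is not reversible, so the lost detail cannot be recovered by any subsequent processing.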
Clarification is applying automatic white balance and backlight compensation through histograms and curves (calibrated through maximum contrast deltas), plus optional automated color/contrast correction.
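A minimal sketch of the histogram-driven idea, using a simple min-max levels stretch (the sample values are hypothetical; real tools use full histograms and curves rather than just the two extremes):

```python
# Histogram-driven contrast: stretch the recorded min..max range across
# the full output range. The calibration comes from the data itself, so
# re-running it always yields the same result (reproducibility).

def auto_stretch(pixels, out_lo=0, out_hi=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [out_lo] * len(pixels)
    scale = (out_hi - out_lo) / (hi - lo)
    return [round(out_lo + (p - lo) * scale) for p in pixels]

dark_frame = [16, 20, 24, 32]       # underexposed nighttime luma values
print(auto_stretch(dark_frame))     # [0, 64, 128, 255]
```

Unlike a hand-tuned brightness slider, the mapping is monotonic over the recorded range, so the relative ordering of every recorded level is preserved.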
For peer-reviewed work, join the IEEE (Institute of Electrical and Electronics Engineers) and read its extensive library. The IEEE sponsors the International Conference on Image Processing, which explores the newest clarification technologies.
I started my journey with articles like this. The latest article I read was on raindrop removal.
Some seemingly unrelated elements exist in the open-source community. Alone they are handy, but when incorporated as part of an adaptive process they become quite powerful. Two that come to mind are Francois Visagie's adaptive lens blur repair and Vit's motion-compensated deinterlacer.