28 January 2021
eyeWitness’ Director, Wendy Betts, was recently called upon by the Public International Law & Policy Group (PILPG) to answer the question of what chain of custody means and why it is important for digital evidence:
Chain of custody is what ensures that footage is authentic and reliable. It refers to what happens to a piece of evidence from the time it is captured to the time it goes before court. When it comes to digital footage, you need to identify exactly when and where it was taken, and who had access and thus the ability to change it from this time forward. Having a secure chain of custody is essential to proving that no one has manipulated the footage.
One solution is to use a verifiable camera app, such as our own eyeWitness to Atrocities App. These apps, also called controlled capture apps, record when and where footage was taken. They then use encryption to protect this data and ensure that it cannot be tampered with.
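To make this idea concrete, below is a minimal sketch (in Python) of how a controlled-capture app might bind capture metadata to footage so that any later edit becomes detectable. The field names, key handling, and signing scheme are illustrative assumptions, not a description of how the eyeWitness App or any particular product works internally.

```python
# Illustrative sketch only: a simplified view of how a controlled-capture app
# might bind capture metadata to footage so that later tampering is detectable.
# The field names and signing scheme are assumptions, not eyeWitness's design.
import hashlib
import hmac
import json

SIGNING_KEY = b"app-embedded-secret"  # hypothetical key held by the capture app


def seal_capture(footage: bytes, timestamp_utc: str, gps: tuple[float, float]) -> dict:
    """Hash the footage together with its capture metadata and sign the result."""
    metadata = {
        "sha256": hashlib.sha256(footage).hexdigest(),
        "captured_at": timestamp_utc,
        "latitude": gps[0],
        "longitude": gps[1],
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return metadata


def is_untampered(footage: bytes, metadata: dict) -> bool:
    """Re-derive the hash and signature; any edit to footage or metadata breaks them."""
    claimed = {k: v for k, v in metadata.items() if k != "signature"}
    if hashlib.sha256(footage).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata["signature"])
```

In a sketch like this, changing even a single byte of the footage, or a single digit of the recorded time or location, causes verification to fail.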
However, as Betts reminds us, encryption only fixes part of the problem. There are multiple stages from capturing footage to getting it to a courtroom. The reality is that human rights organisations collecting digital evidence of atrocities and harms will need more than just technology to use their footage for justice.
In this article, we outline how and why technology needs to be combined with legal expertise and human support. It is written for human rights defenders who are interested in using photos and videos to secure justice, as well as the organisations who support them. It is also useful for lawyers who want to utilise digital evidence in their cases.
Different organisations will require different technologies, depending on their goals, resources, and environment. However, with so many tools available, it can be difficult to know which one to use.
For example, human rights organisations who wish to use images to secure justice for atrocity crimes should ideally use the eyeWitness App because it is designed to meet judicial standards. It also has security features designed specifically for documenters operating in high-risk areas. However, commercial enterprises such as insurance companies do not necessarily require the same levels of authentication and security. They will therefore benefit from a verifiable camera app more tailored to their needs.
Human rights organisations need human support to choose the right technologies for their needs and workflows. We have written a series of articles about the various features you should look for when evaluating which secure camera app to use for documenting atrocities.
Complex and specific legal evidentiary standards make it difficult to develop user-friendly technology.
Camera apps designed to capture evidence need to adhere to strict guidelines and therefore cannot include some of the flexible features most users would like. One such feature is the ability to use the phone’s native camera app, which is easily accessible from the lock screen. However, footage taken this way is difficult to authenticate for court because a phone’s default camera usually pulls the date and time from the device itself. These data points are neither reliable nor verifiable, as they can easily be changed through the phone’s settings. Moreover, the footage is insecure and easily edited because it is stored in the phone’s native, unprotected gallery. Consequently, those developing verifiable technology cannot allow users to rely on the phone’s native camera, and must instead provide a separate verifiable camera app on the phone’s home screen. A second feature many documenters would like is the ability to take photos with the device of their choosing and verify them later. However, with this approach the dates, times and locations risk reflecting where and when the metadata was generated, not where and when the footage was taken. As a result, such flexible features are not possible if footage is to be verifiable for use as evidence in court.
As a result of adhering to these strict guidelines, the technology may be slightly less intuitive for the user. Furthermore, even the best-designed and best-built technology may malfunction at times, or function differently across different makes and models of devices. Such difficulties could be disastrous for a documentation mission. Consequently, easy-to-access technical support and comprehensive training must be readily available to all users.
eyeWitness is a good example of balancing the needs of the user with the processes required for court. Because the technology follows strict evidentiary protocols, users must take, store and view all images from within the eyeWitness App itself, and cannot upload images taken outside the App for verification. However, all partners are provided with personal, comprehensive documentation training, as well as readily available technical support.
For footage to be used as evidence, it must be both relevant and reliable. Verifiable camera apps help with the reliability factor, but they cannot ensure that footage is relevant. By relevant, we mean that the images must make the existence of a fact at issue more or less probable. The sorts of details that investigators and courts look for are not necessarily the images that documenters automatically think to capture. For example, when documenting a fatal shooting, an eyewitness may take a video focussing only on the victim’s body, or the facial expressions of those at the scene. Yet there are many crucial details that help investigators discern relevance: vehicle registration plates; close-ups of weaponry and ammunition; official badges, ID cards and uniforms; any damage to nearby buildings, etc.
This is where documenters may need ongoing mentoring, not only on how to use the technology, but also how to capture relevant images that will support their case. Such mentoring requires legal expertise and input. This support is something that eyeWitness offers to all its partners in addition to training and technical support.
One of the challenges human rights organisations face is that publicly disseminating footage can affect its integrity if proper protocols and safeguards are not followed beforehand.
To understand why, we need to return to the point about chain of custody: we must prove that no one has had the ability to alter or mishandle the footage at any time between its capture and its presentation in court. However, if you decrypt your footage and take it out of its secure location to share it with others, the chain of custody is compromised: you now technically have the capacity to alter the footage.
To solve this problem, the eyeWitness App does not allow users to download or share their footage until they have uploaded the original images to a secure, encrypted server. Once this action has been completed, users can save a copy that they can distribute as they wish. This process ensures that users can never be accused of editing or manipulating their footage, because the copy that goes to court is the one that was uploaded to eyeWitness’ server and is under lock and key.
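As a rough illustration of what this guarantee rests on, the sketch below compares the cryptographic hash of a circulated copy against the hash of the original held on a secure server; if the two differ, the copy has been altered. The file paths and workflow shown are hypothetical, not eyeWitness’ actual process.

```python
# Illustrative sketch only: confirming that a copy shared publicly is
# byte-for-byte identical to the original held on the secure server.
# File paths and workflow are hypothetical examples.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large videos do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


# The hash recorded when the original was uploaded to the secure server.
original_hash = sha256_of(Path("secure_server/originals/incident_042.mp4"))

# The copy that was later downloaded and circulated.
shared_copy_hash = sha256_of(Path("downloads/incident_042_copy.mp4"))

print("Copy matches original:", shared_copy_hash == original_hash)
```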
Organisations who choose to store their footage themselves will need to create a similar setup. This includes having a secure server where footage is encrypted and access is restricted. Those individuals who do have access (for example, to analyse and prepare images for court) will need to formally track their interactions using access logs. They should also be prepared to testify in court that they have not tampered with the footage. These steps require precise, trustworthy, and rigorous human resources.
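One way such access tracking can itself be made tamper-evident is a hash-chained log, in which each entry records the hash of the entry before it, so deleting or editing a record breaks the chain. The sketch below is a minimal, hypothetical illustration of that idea; the field names and actions are assumptions rather than a prescribed setup.

```python
# Illustrative sketch only: an append-only access log in which each entry
# includes the hash of the previous entry, so removing or editing a record
# breaks the chain. Field names are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone

access_log: list[dict] = []


def log_access(user: str, action: str, file_id: str) -> None:
    previous_hash = access_log[-1]["entry_hash"] if access_log else "genesis"
    entry = {
        "user": user,
        "action": action,            # e.g. "viewed", "exported for analysis"
        "file_id": file_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "previous_hash": previous_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    access_log.append(entry)


log_access("analyst_01", "viewed", "incident_042")
log_access("analyst_02", "exported for analysis", "incident_042")
```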
The final aspect that requires human support and legal expertise is preparing and using the footage for investigations and other accountability efforts.
One of the shortcomings of photos and videos in the age of smartphones is the sheer volume of footage. Millions of videos of a single event can be shared on the internet, and it is not humanly possible to watch them all. eyeWitness alone has nearly 12,000 photos and videos in its database, collected by different civil society organisations and actors around the world. We cannot simply hand thousands of photos over to an investigator, as it would not be possible for them to review them all and find the ones most relevant to their case.
Consequently, footage needs to be manually reviewed, transcribed, described and catalogued. These steps are key to identifying and retrieving relevant footage for an investigation. Once identified, the footage and its metadata are compiled, indexed, and securely transmitted to the investigators. Each step of these processes should be documented. Transforming the footage into information that can be used more effectively for justice therefore also requires significant human resources.
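As a simplified illustration of what a catalogue can look like, each reviewed file might be paired with a description, transcription and tags so that investigators can retrieve only the items relevant to their case. The structure, field names and example data below are hypothetical.

```python
# Illustrative sketch only: a minimal catalogue structure for reviewed footage,
# so items relevant to an investigation can be searched and retrieved.
# Field names and example data are hypothetical.
from dataclasses import dataclass, field


@dataclass
class CatalogueRecord:
    file_id: str
    captured_at: str                  # UTC timestamp recorded at capture
    location: str
    description: str                  # what a reviewer saw in the footage
    transcription: str                # any speech or visible text in the footage
    tags: list[str] = field(default_factory=list)


catalogue = [
    CatalogueRecord(
        file_id="incident_042",
        captured_at="2020-06-14T09:32:00Z",
        location="Example City, northern district",
        description="Damage to residential building; military vehicle visible",
        transcription="",
        tags=["vehicle", "building damage"],
    ),
]


def find_relevant(query_tags: set[str]) -> list[CatalogueRecord]:
    """Return records whose tags overlap the investigator's query."""
    return [r for r in catalogue if query_tags & set(r.tags)]


print([r.file_id for r in find_relevant({"vehicle"})])
```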
Technology has enormous potential to streamline and strengthen the documentation process. Verifiable camera apps can certainly help courts ascertain whether footage is reliable. However, we cannot lose sight of the human input and legal expertise that are needed to ensure the technology fulfils this potential.
Thank you to PILPG for inviting eyeWitness to contribute to their report, Human Rights Documentation by Civil Society – Technological Needs, Challenges, and Workflows.