Author: Cassie Seo
Date: 6 July 2022
The requirement to adhere to the humanitarian principle of ‘doing no harm’ and to be principled in our response is not foreign to humanitarian practitioners. An increasing number of new and emerging technologies are being integrated into humanitarian action, from artificial intelligence and blockchain to digital cash transfers. These technologies are not only seen to improve humanitarian service delivery but are considered to create a paradigm shift. The appeal of harnessing technology in humanitarian action has always been alluring, with the promise of scalability from the start and of transparency through developments such as blockchain. The impact and traction of this change is so substantial that it is clearly reflected in many humanitarian organisations’ and development actors’ digital transformation and change management processes.
The impact of technology on our society as a whole is best summarized as a double-edged sword. Beyond this, there are additional design considerations and constraints when we plan to use technology for humanitarian intervention. So for those of us working in humanitarian settings, how might we do no harm (by design) in our notoriously resource-scarce environments? We must acknowledge that technology has a certain degree of agency beyond the implementer’s intention, is never neutral, and can independently cause harm to those we aim to serve (in NRC’s case, people who are forced to flee). How would we mitigate and de-risk this unintended harm when using technology in our humanitarian interventions?
In this article, I will visit a few examples of common issues that should not be overlooked when designing tech implementations in a humanitarian setting, in order to de-risk and mitigate unintended harm to the organisation (reputational and operational risk) as well as to people (both staff and the affected people we serve).
Product Safety
While there are robust guidelines and regulations on consumer product safety, most of them target ‘tangible goods’ or physical products. There is growing discourse on what safety means for digital products and services, and even more on how some of the largest international technology companies (commonly referred to as ‘Big Tech’) are failing to choose consumer safety over profit. Many such companies are developing tools and internal policies to mitigate known harms and prevent unknown risks. However, tools and policies developed for the private sector can seem far-fetched or irrelevant to humanitarian practitioners, as the humanitarian sector has different compliance requirements and operational principles. One exercise I like to use with my colleagues in humanitarian response settings is the persona non grata: imagining the worst-case scenario for the intervention or product that we are building. The activity of identifying and mapping risks and building scenarios is not foreign to humanitarian practitioners working in volatile circumstances.
User-centered design has long been hailed in the humanitarian sector. The importance of getting to know the people you are designing for, and of maintaining a systemic perspective on your users’ context and needs (namely, the ecosystem), is often the first tenet in many digital principles. Understanding your users, their needs, and their use cases is fundamental to drawing up the right product requirements, and seeing those use cases and needs in context through a systemic lens is fundamental to the product’s success. Such findings are often synthesized into a user persona, a commonly used tool in digital technology development: an archetype of proxy users whose motivations and traits represent a certain user segment of a product or service. It helps system builders empathize with the user and provides clear criteria for making design decisions.
However, designing and deploying technology in the humanitarian context isn’t all green fields and roses. Practitioners must also consider potential misuse cases: interactions where users engage with your product or project in an unintended or wrongful manner that could harm themselves, the product, other users, or others. It may be an administrator who accidentally assigns the wrong role and permissions to another staff member in an extremely complex case management system, giving that staff member access to highly sensitive information they should not see. Or it may be a working group deduplication database that uses sensitive personally identifiable information (PII) as an indicator without a proper data sharing policy, which may result in an Excel database of affected people being emailed around and, potentially, leaked to those who should not have access to it. When we identify the persona non grata or ‘extreme users’, we bring a mode of defensive thinking to every new requirement or feature we build: we think about how, and by whom, such a system could be intentionally or unintentionally abused and thus cause harm to affected people. This can at least help identify and mitigate problematic cases before they arise.
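To make the deduplication example concrete, below is a minimal Python sketch, with hypothetical identifiers and a hypothetical shared key, of one common mitigation: sharing a keyed hash of a sensitive identifier rather than the raw PII, so partners can match records without the identifier itself ever leaving the organisation.

```python
import hashlib
import hmac

# Hypothetical shared secret, agreed under a data sharing agreement,
# distributed out-of-band and rotated per response; never stored
# alongside the deduplication list itself.
SHARED_KEY = b"rotate-me-per-response"

def pseudonymize(identifier: str) -> str:
    """Return a keyed hash of a sensitive identifier (e.g. a national ID).

    Partners can compare hashes to detect duplicates but cannot recover
    the original identifier without the shared key.
    """
    normalized = identifier.strip().upper()  # avoid misses from formatting
    digest = hmac.new(SHARED_KEY, normalized.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Each partner shares only the hash column, never the raw identifiers.
partner_a = {pseudonymize(i) for i in ["ID-1001", "ID-1002"]}
partner_b = {pseudonymize(i) for i in ["id-1002 ", "ID-1003"]}

duplicates = partner_a & partner_b  # households assisted by both partners
print(f"{len(duplicates)} potential duplicate(s) found")
```

Note that keyed hashing is a mitigation, not a substitute for a data sharing policy: hashed identifiers can still be correlated across datasets, so governance around who holds the key and the hashes still matters.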
Revisit Your Key Assumptions in the System
Everything is interconnected. The system you are designing may require a certain level of competence and training from its users. In an urgent emergency setting, legal and technical safeguards (such as procedures for disposing of certain data or tools, and guidelines on how humanitarian data should be handled and shared) may not always be in place or available. Even worse, the system could easily be weaponized to spread false information or perpetuate imperialism. Design and test in a way that reduces dependency on such ‘key controls’ as much as possible, and pay attention to the assumptions you are making about your own users (target group), the technology, and the context. One key example in the humanitarian setting is biometrics. The use of biometric identification systems in humanitarian contexts has been controversial for a number of years. While biometric data offers greater transparency and accountability by providing “end-to-end auditability”, we have witnessed several high-profile misuse cases of biometrics. Whether such a catastrophe is caused by irresponsible management of the hardware or by failure to take unequal power dynamics into consideration before ticking the box that user consent to information collection and sharing has been obtained, the impact of the unintended outcome remains and will negatively affect aid recipients. Before introducing any technology, the purpose for using it and for collecting sensitive data, as well as the assumptions behind and feasibility of using such technology in a specific context, must be rigorously assessed, and risks identified and mitigated.
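One lightweight way to force that assessment is to make a documented purpose a precondition for collecting any data at all. The sketch below is a hypothetical illustration, with made-up field names and thresholds rather than any official standard: a collection schema is rejected at design time if a field lacks a stated purpose, or keeps sensitive data for an unjustified length of time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldSpec:
    name: str            # field to be collected
    purpose: str         # why this field is needed for the intervention
    retention_days: int  # how long it may be kept before disposal
    sensitive: bool      # e.g. biometrics, national IDs, health data

def validate_schema(fields: list[FieldSpec]) -> None:
    """Reject a collection schema whose fields lack a documented purpose."""
    for f in fields:
        if not f.purpose.strip():
            raise ValueError(f"Field '{f.name}' has no documented purpose")
        if f.sensitive and f.retention_days > 365:  # illustrative threshold
            raise ValueError(
                f"Sensitive field '{f.name}' retained beyond one year; "
                "justify or shorten retention"
            )

# A biometric field with no stated purpose fails loudly at design time.
schema = [
    FieldSpec("household_size", "targeting calculation", 180, False),
    FieldSpec("fingerprint", "", 730, True),
]

try:
    validate_schema(schema)
except ValueError as err:
    print(f"Schema rejected: {err}")
```

The point is not the specific thresholds but the default: ‘collect it just in case’ should fail unless someone writes down why the data is needed and when it will be disposed of.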
Product Inclusion: See Whom Our Design Affects, Especially Those Who Aren’t Easily Seen – Consider Your Excluded and Marginalized Users
In the technology world, ‘edge cases’ are unlikely yet potentially disastrous situations. In software design, edge cases often threaten to break a system, its rules, and the user experience, because they rarely occur and involve the extreme maximums and minimums of parameters. Mike Monteiro speaks about the danger of downplaying edge cases, as doing so can marginalize groups of people who are often already facing discrimination and who are deemed not crucial to the success of the product. With the scalability of technological solutions, however, such exclusions also scale. For a product like Facebook, which claims to have two billion users, one percent (often considered a safe number for edge cases) of the user group is twenty million people. For those working in humanitarian technology, the consequences of intentionally excluding a certain group of people are potentially catastrophic: we could be leaving the most vulnerable behind, setting an inappropriate targeting strategy that does not consider the full context, or losing sight of transparency when it comes to eligibility criteria. Needless to say, it is important to pay special attention to exclusion drivers in your tech-enabled humanitarian intervention design and to address them early and often, as the quick sketch below illustrates.
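The arithmetic behind that point is simple but sobering:

```python
# One percent sounds small until the product operates at scale.
users = 2_000_000_000   # Facebook-scale user base, per the claim above
edge_case_rate = 0.01   # the "safe" one percent

excluded = int(users * edge_case_rate)
print(f"{excluded:,} people fall outside the designed-for path")  # 20,000,000
```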
Security and Data Privacy
Every system can be breached and corrupted. Contrary to the bleak outlook this brings, accepting it will help you secure your systems and prepare for a breach, especially if you start by questioning your own assumptions. I learned much of this by conducting rapid risk assessments and identifying different threat scenarios, their likelihood, and their impact, through an engagement with volunteers from the GitHub Security Team to improve the data security and safety of some of NRC’s products and ecosystem under GitHub’s Skills-Based Volunteering Programme. Through the engagement, we realized that our main sources of vulnerability are not limited to technology but extend to our people and processes as well. Unintended mistakes and malpractice resulting from simple negligence, inattention, or lack of training occurred far more frequently than intentional threats such as spyware, malware, or deliberate cyber attacks. It is therefore extremely important to implement ‘security by design’ principles that mitigate such unintended harm. This is, of course, not to say that sophisticated intentional threats do not exist or pose no serious risk to humanitarian organisations and the people they serve.
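A rapid risk assessment of the kind described above can be captured with very little structure. The sketch below is illustrative, with made-up threat scenarios and scores rather than NRC’s actual register; it ranks scenarios by likelihood times impact, which is one reason negligence-driven scenarios tend to outrank sophisticated attacks once you score them honestly.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    scenario: str
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (negligible) to 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# Hypothetical register entries mixing accidental and intentional threats.
register = [
    Threat("Staff emails an unencrypted beneficiary list", 4, 4),
    Threat("Wrong role assigned in the case management system", 3, 4),
    Threat("Targeted spyware on a country office laptop", 2, 5),
]

# Highest-risk scenarios first; the accidental ones often top the list.
for t in sorted(register, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.scenario}")
```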
In my experience, few humanitarian organisations actively manage tech products in their entirety ‘in-house’, from problem identification through launch and continuous iteration. Organisations are far more likely to work with third-party vendors or partners to develop technology. I will therefore focus on a few de-risking activities, assuming an organisation working with a third-party implementation partner. As the ‘problem owner’ who initially identified a problem and decided to pursue a technological intervention, make sure your institutional and contextual knowledge about your users, their digital literacy levels, and their access to technology is well documented and considered as part of the initial user requirements. There are even published personas for security that outline high-risk users with international representation; you can use them to bring product inclusion practices into your product safety work. These personas include not only demographic information, which is common among personas, but also users’ technical capacity, threats, risks, and strengths, and you can adapt them for your own security, data protection, and privacy work.
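To make the persona idea concrete for security work, a persona can be captured as structured data so that every new feature is reviewed against it. The shape below is hypothetical, loosely echoing the attributes mentioned above (demographics, technical capacity, threats, risks, strengths) rather than reproducing any published persona format.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityPersona:
    name: str
    context: str             # where and how they use the product
    technical_capacity: str  # devices, connectivity, digital literacy
    threats: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    strengths: list[str] = field(default_factory=list)

# A hypothetical high-risk user, not a real person or published persona.
amina = SecurityPersona(
    name="Amina (hypothetical)",
    context="Displaced; registers for assistance via a shared phone",
    technical_capacity="Shared Android device, intermittent connectivity",
    threats=["Device confiscation at checkpoints", "Shoulder surfing"],
    risks=["Session left logged in on a shared device"],
    strengths=["Strong community support network"],
)

def review_feature(feature: str, persona: SecurityPersona) -> None:
    """Prompt a design review of a feature against a persona's threats."""
    print(f"Review '{feature}' against {persona.name}:")
    for threat in persona.threats:
        print(f"  - Does this feature hold up under: {threat}?")

review_feature("Remember-me login", amina)
```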
A minimum security standard checklist used while designing a system can go a long way. It will save you much hassle to assume that the technology implementer (an organisation implementing a project, or a contractor such as a third-party vendor) is, or always has a high possibility of being, compromised. In addition, you should perform a damage analysis in which you identify what could go wrong in case of compromise and what you can do to mitigate the risk of data and privacy breaches.
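Both ideas can be kept as living artifacts next to the system itself. The sketch below is illustrative, with example checklist items and assets rather than a complete standard: it flags blockers while baseline controls are missing, and pairs each data asset with a worst-case note for the damage analysis.

```python
# Illustrative minimum-security checklist; items are examples, not a standard.
checklist = {
    "Data encrypted at rest": True,
    "Data encrypted in transit (TLS)": True,
    "Role-based access control enforced": False,
    "Vendor access reviewed quarterly": False,
}

missing = [item for item, done in checklist.items() if not done]
if missing:
    print("Blockers before launch:")
    for item in missing:
        print(f"  - {item}")

# Damage analysis: assume the implementer is compromised and ask what leaks.
damage_analysis = {
    "Beneficiary registry": "Re-identification of people in hiding",
    "Staff credentials": "Attacker pivots into the case management system",
}
for asset, worst_case in damage_analysis.items():
    print(f"If '{asset}' is breached: {worst_case}")
```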
Closing Thoughts
The idiom that the road to hell is paved with good intentions sadly often applies to our humanitarian technology interventions. I admit that it can feel overwhelming, as we work in a setting where resources are scarce, goals are enormous, implementation timelines are unrealistic, and expectations are huge. However, we are also witnessing positive change: some of the largest private-sector technology makers are bringing humanitarian principles into their product design and project management to reduce unintended harm to users, by strengthening product safety, being more inclusive, and putting users’ rights in the driver’s seat when it comes to controlling their experience.
I will conclude on the importance of de-risking and continuously striving to do no harm with a quote from Mike Monteiro’s book Ruined by Design, which has been crucial in shaping my own views on designing and overseeing humanitarian technology: “Success isn’t just launching. This is where the real work begins, ensuring that everything grows.”
Cassie Jiun Seo is the Head of Unit, Digital Transformation at the Norwegian Refugee Council (NRC), where she leads a team of technologists building tools and digital infrastructure that help the NRC leverage technology safely and sustainably.