Philosophy-Criminology Assessment




Length: 1,000 words in total, made up of four short answers of 250 words each.

Originality: the submission must be plagiarism-free.

Referencing: Harvard or APA style, applied consistently.

Suggested structure for the case study:

- Introduction (125 words)
- Case study description: what is the surveillance application? (250 words)
- Rationale for surveillance, with analysis of its intended, likely, or actual effect (250 words)
- Main issues and concerns (250 words)
- Conclusion (125 words)
- References (not included in the word count)

Topic: surveillance in policing through facial recognition. The case study should identify this specific sphere of surveillance (surveillance in policing), critically analyse the rationale for surveillance in this context, consider its intended, likely, or actual effect, and outline the main issues and concerns about surveillance in policing.

Examples:

https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
https://www.perpetuallineup.org/
https://news.microsoft.com/en-au/features/wa-police-use-cloud-and-ai-to-track-criminals-digital-footprints/
https://edition.cnn.com/2021/04/29/tech/nijeer-parks-facial-recognition-police-arrest/index.html

Students should cite at least five (5) sources. These may include the essential or suggested readings, but students must go beyond them. Sources may be academic articles, books or book chapters, government reports, policy documents, op-eds, media items, or other relevant material. All sources must be referenced in either Harvard or APA style; it does not matter which style is adopted, provided it is used consistently. The reference list is not included in the word count, but in-text citations are.
AI and Ethics. https://doi.org/10.1007/s43681-021-00077-w

ORIGINAL RESEARCH

The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK regulatory frameworks

Denise Almeida · Konstantin Shmarko · Elizabeth Lomas

Received: 9 May 2021 / Accepted: 26 June 2021. © The Author(s) 2021

Abstract

The rapid development of facial recognition technologies (FRT) has led to complex ethical choices in terms of balancing individual privacy rights versus delivering societal safety. Within this space, increasingly commonplace use of these technologies by law enforcement agencies has presented a particular lens for probing this complex landscape, its application, and the acceptable extent of citizen surveillance. This analysis focuses on the regulatory contexts and recent case law in the United States (USA), United Kingdom (UK), and European Union (EU) in terms of the use and misuse of FRT by law enforcement agencies. In the case of the USA, it is one of the main global regions in which the technology is being rapidly evolved, and yet, it has a patchwork of legislation with less emphasis on data protection and privacy. Within the context of the EU and the UK, there has been a critical focus on the development of accountability requirements, particularly when considered in the context of the EU's General Data Protection Regulation (GDPR) and the legal focus on Privacy by Design (PbD). However, globally, there is no standardised human rights framework and regulatory requirements that can be easily applied to FRT rollout. This article contains a discursive discussion considering the complexity of the ethical and regulatory dimensions at play in these spaces, including considering data protection and human rights frameworks. It concludes that data protection impact assessments (DPIA) and human rights impact assessments, together with greater transparency, regulation, audit and explanation of FRT use and application in individual contexts, would improve FRT deployments. In addition, it sets out ten critical questions which it suggests need to be answered for the successful development and deployment of FRT and AI more broadly. It is suggested that these should be answered by lawmakers, policy makers, AI developers, and adopters.

Keywords: Facial recognition technology · Accountability · AI ethics · AI regulation · Data protection · GDPR · Human rights · Impact assessment · Law enforcement · Privacy · Surveillance

1 Introduction

Law enforcement agencies globally are constantly seeking new technologies to better ensure successful detection and prosecution of crimes to keep citizens and society safe. In addition, there is a public expectation to deliver value for money and where possible to provide economic efficiencies and reduced labor costs, which potentially new technologies can help deliver. Over the last decade, many new technologies have been harnessed by law enforcement agencies including, but not limited to, surveillance cameras, automated license plate readers, body cameras, drones, and now facial recognition technologies (FRT). Law enforcement agencies have been at the forefront of FRT adoption due to the benefits that can be seen to be derived and justified in this space. However, each of these technologies changes the relationships between law enforcement operatives and citizens and requires the negotiation of new boundaries and revised accountability requirements.
It is important to recognise that each technology has encroached on citizens’ privacy and relationship with the state. As such, what is being deemed as acceptable in terms of reshaping bounda- ries is under scrutiny and debate. However, the decisions All authors contributed equally to the writing, research, and ideas within this article. The initial concept was conceived by Denise Almeida with Konstantin Shmarko initiating the research work. * Denise Almeida [email protected] 1 Department of Information Studies, UCL, London, UK 2 Department of Economics, UCL, London, UK Vol.:(0123456789) 1 3 AI and Ethics being made in regard to technology adoption are not cur- rently uniform. There are distinct dierences in technology adoption and roll out nation to nation and in some national contexts state to state. These largely depend on the legal landscape in terms of privacy/data protection legislation and citizen acceptance and expectations of surveillance. Within this context, COVID-19 has further pushed the boundaries of privacy, with nations introducing new measures to track citizens’ movements and connections to contain the spread of the virus. However, the shift in enhanced monitoring, surveillance and privacy disclosures, and accountability in this regard is being questioned globally, drawing atten- tion to changes and challenges [1 , 2]. This latter question of accountability and acceptable privacy limits is critical in terms of balancing rights and responsibilities for FRT. Accountability provides for the obligation to explain, justify, and take responsibility for actions. In the context of the state and law enforcement, the state is obligated to be responsible for and answer for the choices it makes in terms of the technologies it rolls out and how these impact in particular case contexts. Many questions about the use of FRT and Articial Intelligence (AI) have yet to be fully resolved. FRT usage by law enforcement agencies provides a strong case study for considering aspects of FRT and AI ethics more generally. It provides for a very understandable use of personal data with clear impacts on individuals rights. This article considers the complexity of the ethical and regulatory dimensions at play in the space of FRT and law enforcement. The paper starts by providing a brief explana- tion of FRT, followed by an analysis of the use of FRT by law enforcement and legal approaches to the regulation of FRT in the US, EU, and UK. We conclude by recommending that there must be better checks and balances for individuals and societal needs. There needs to be accountability through greater transparency, regulation, audit and explanation of FRT use and application in individual contexts. One critical tool for this is the impact assessment, which can be used to undertake data protection impact assessments (DPIA) and human rights impact assessments. Ten critical ethical questions are framed that need to be considered for the ethi- cal development, procurement, rollout, and use of FRT for law enforcement purposes. It is worth stating these from the outset: 1. Who should control the development, purchase, and testing of FRT systems ensuring the proper manage – ment and processes to challenge bias? 2. For what purposes and in what contexts is it acceptable to use FRT to capture individuals’ images? 3. What specic consents, notices and checks and bal- ances should be in place for fairness and transparency for these purposes? 4. 
On what basis should facial data banks be built and used in relation to which purposes? 5. What specic consents, notices and checks and bal- ances should be in place for fairness and transparency for data bank accrual and use and what should not be allowable in terms of data scraping, etc.? 6. What are the limitations of FRT performance capabili- ties for dierent purposes taking into consideration the design context? 7. What accountability should be in place for dierent usages? 8. How can this accountability be explicitly exercised, explained and audited for a range of stakeholder needs? 9. How are complaint and challenge processes enabled and aorded to all? 10. Can counter-AI initiatives be conducted to challenge and test law enforcement and audit systems? Finally, it should be established that while law enforce- ment agencies are at the forefront of FRT adoption, others can learn valuable ethical lessons from the frameworks put in place to safeguard citizens’ rights and ensure account- ability through time. Many of these same questions are applicable to AI development more broadly and should be considered by law makers to legislate and mandate for robust AI frameworks. 2 Facial recognition technologies (FRT) Facial recognition in essence works by capturing an indi- vidual’s image and then identifying that person through analysing and mapping of those captured features compar – ing them to identied likenesses. Facial images, and their careful analysis, have been a critical toolkit of law enforce- ment agencies since the nineteenth century. However, in the twenty-rst century, the application of facial recognition, moving from manual techniques to facial recognition tech- nologies (FRT), to automatically extract and compare fea – tures and every nuance of their measurement through the application of articial intelligence (AI) and algorithms has signicantly enhanced this basic tool [3 ]. As such, the face can be mapped and compared to other data which oers a more formal match and identication to an individual. This can sometimes involve the introduction of other biometric data such as eye recognition data. One-to-one matching pro- vides for certain identication of an individual in a specic context. However, using an identied image in connection with other data banks or data lakes enables one-to-many pos- sibilities and connotations of usage. Matching that can pro- cess data at scale presents new possibilities and complexities when considering machine learning, algorithms, and AI. 1 3 AI and Ethics The context of the situation of FRT rollout and data gath- ering is potentially all important in terms of how it aligns with citizens’ security versus privacy concerns in diering situations. In 2008, Lenovo launched a new series of laptops that instead of requiring a password, could recognise the face of their authorised user [4 ]. This functionality was seen as a marketing benet for Lenovo and clearly users consented and engaged with the capture and use for their own per – sonal computing needs and one-to-one matching. However, there will be distinctions between expectations in one-to-one matching in a more private controlled space for transparent individual benets versus taking and using a verication process in broader and potentially big data contexts. As the proposed EU regulation on AI suggests, the use of FRT in public spaces is ethically (and legally) signicantly dierent than its use for device unlocking. 
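To make the distinction between one-to-one and one-to-many matching concrete, the following is a minimal sketch in Python. It assumes faces have already been reduced to fixed-length embedding vectors by an upstream model; the function names (`verify`, `identify`) and the 0.6 similarity threshold are illustrative assumptions, not taken from the article or from any particular vendor's system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one matching, e.g. unlocking a device for a consenting, enrolled user."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """One-to-many matching: search a gallery (e.g. a watchlist or data bank) and
    return the best-scoring identity above the threshold, or None if nothing matches."""
    if not gallery:
        return None
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else None
```

The threshold is where many of the later accuracy concerns surface: lowering it reduces missed identifications but raises the chance of false matches against the gallery, and that trade-off grows sharper as the gallery gets larger.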
Citizens will have dierent expectations about spaces in which surveillance and FRT should be in place. For example, when crossing national border jurisdictions, there has always been an exchange of data and careful identication of individuals and as such FRT may be deemed to be more acceptable in this space as opposed to when moving around public spaces more gen- erally, functioning in working spaces and nally residing within private home dwellings. In each of these spaces, the expectations for active law enforcement and surveillance clearly dier and there are a number of ethical questions to be answered for a successful rollout in dierent contexts and for dierent law enforcement purposes. In addition, there are dierences between expectations for localised enforce- ment agencies such as police services and national intel- ligence agencies undertaking more covert security opera- tions. In each citizen space, and dependent upon the form of law enforcement, there will be dierent perspectives and concerns from individuals and groups of stakeholders. As such, reaching a consensus in technological rollouts will be a journey. Even in the example of border controls, where ID data have always been exchanged, studies have shown that the views of travellers on acceptable technologies dier from the views of board control guards [5 ]. In regard to law enforcement, some scholars have advanced the theory that monitoring of social media by law enforcement could be perceived as a ‘digital stop and frisk’, potentially delivering, “everyday racism in social media policing as an emerging framework for conceptualizing how various forms of racism aect social media policing strategies” [6 ]. This statement evidences concerns about the bias and credibility of law enforcement agencies. Applying this same conceptual framework to, sometimes awed, facial recognition algorithms without taking accountability for the consequences of this usage could not only lead to further discrimination and victimisation of specic communities, but also to an even greater loss of trust between the general population and law enforcement agencies. In recent years, we have seen an exponential increase in research focused on issues of algorithmic accountability, 1 with the overarching message being that algorithms tend to reect the biases of those who build them, and the data used to train them. The extent to which they can be relied on without human checks is one of constant concern, particularly as the use of these technologies as well as identifying individuals is extending their reach to make further judgements about individuals including in regard to their behaviours, motivations, emo- tions, and protected characteristics such as gender or sexual- ity [7 ]. In the specic case of FRT, it is important to understand some aspects at play in the design and roll out that have led to concerns over biases and unbalanced power structures. The majority of technology workers in the West are claimed to be white men, which as such unintentionally inuences the development of technologies such as FRT [8 ]. Input bias has been known about for decades, but has not been fully surfaced in an FRT context. If FRT are trained on white male faces, then there will be implications when it is used to process data related to non-white and female faces. As such, studies have indicated that identication and bias failings do occur [9 ]. 
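One way to surface the identification and bias failings cited above [9] is to disaggregate error rates by demographic group rather than reporting a single headline accuracy figure. The sketch below is a toy illustration of that idea; the grouping, record layout, and numbers are invented for the example and do not reproduce any published benchmark.

```python
from collections import defaultdict

def false_match_rate_by_group(trials):
    """Disaggregate the false-match rate by demographic group.

    `trials` is an iterable of (group, same_person, system_said_match) tuples;
    the field layout and grouping are illustrative, not a prescribed protocol.
    """
    errors, impostor_trials = defaultdict(int), defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:                  # impostor comparison: different people
            impostor_trials[group] += 1
            if said_match:                   # system wrongly declared a match
                errors[group] += 1
    return {g: errors[g] / impostor_trials[g] for g in impostor_trials}

# Hypothetical audit data: a single pooled figure can hide large gaps between groups
trials = [("A", False, False)] * 990 + [("A", False, True)] * 10 \
       + [("B", False, False)] * 950 + [("B", False, True)] * 50
print(false_match_rate_by_group(trials))     # {'A': 0.01, 'B': 0.05}
```

A pooled false-match rate over these invented trials would be 3%, masking the fivefold gap between the two groups; that is the kind of disparity a headline accuracy claim can conceal.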
Even where inputs are adjusted, systems can be biased by attempting to meet the anticipated needs of pur – chasers and users which may skew the system particularly as algorithms are applied and developed through time. In each of these instances, a high proportion of the stakeholders with power and inuence are likely to be male and white [10]. These biases can lead to severe consequences, particularly when carried into uses by law enforcement. This brings to the surface issues of power dynamics and citizen trust of its law enforcement. However, it is equally to be noted that AI has the potential to challenge biases and to be used in innovative ways that can alter existing power dynamics. A signicant example of this, is the recent use of FRT by human rights activists and protesters as a way to identify, and hold accountable, law enforcement ocers who might be abusing their power [11]. This ‘turn of the tables’ adds a further layer of complexity to discussions of accountability and power. However, while a group of people who typically do not hold power may in limited circumstances use FRT to hold law enforcement 1 For example, see McGregor, L. (2018) ‘Accountability for Govern- ance Choices in Articial Intelligence: Afterword to Eyal Benven- isti’s Foreword’, European Journal of International Law, 29(4), pp. 1079–1085.; Shah, H. (2018) ‘Algorithmic accountability’, Philo- sophical Transactions of the Royal Society A: Mathematical, Physi- cal and Engineering Sciences, 376(2128), p. 20,170,362. https:// doi. org/ 10. 1098/ rsta. 2017. 0362; Buhmann, A., Paßmann, J. and Fieseler, C. (2020) ‘Managing Algorithmic Accountability: Balancing Reputa- tional Concerns, Engagement Strategies, and the Potential of Rational Discourse’, Journal of Business Ethics, 163(2), pp. 265–280. https:// doi. org/ 10. 1007/ s10551- 019- 04226-4.0. 1 3 AI and Ethics accountable, that does not make the technology ethically via- ble. However, this power shift, if more formally supported, might provide a part of the solution to FRT deployment and its impacts. For example, as images are captured and signi- cant in legal case contexts, AI has the power to potentially assist with identifying deep fakes and calling out adaptions to footage and photographs. As such, it is important to drill down into the use of FRT and the frameworks which sit around FRT. 3 The EU and UK legislative landscape for FRT in a law enforcement context There are currently no FRT specic pieces of legislation in the EU and UK domains, but there are other pieces of leg- islation that dictate the management and rollout of FRT. In terms of personal data management, the EU’s GDPR, which came into force in 2018 covering all the Member States of the EU, has been seen as setting the bar at the highest level for the management of personal data. As such, for many tech companies operating at a global level, it has been seen as the de facto standard to roll out across all global operations. It is to be noted that as the GDPR came into force, while the UK was part of the EU, it was enshrined into UK domestic legislation and still continues to apply within a UK context. The UK’s ongoing adequacy in terms of alignment to EU GDPR will continue to be judged by the EU. The GDPR has required systems to be implemented where ‘privacy by design’ (PbD) and ‘privacy by default’ are inbuilt for any personal data processing. Processing covers any activity with personal data including creating, receiving, sharing, and even destroying/deleting personal data. 
There must be a clear lawful basis for personal data processing, and in addition, the data must be processed fairly and trans- parently. Within this context, it is important to understand that this does not prevent personal data collection, but does require carefully documented processes and active personal data management through time. In addition, it must be noted that what is considered fair and lawful is potentially open to interpretation and legal debate and contest. In certain instances, consent for processing is required. In addition, there are specic data subject rights such as the right to know what is held on/about you, subject to certain exemp- tions and to ask for data to be rectied or deleted (the right to be forgotten) in certain circumstances. Where special category personal data are processed, stricter controls are required. Of note in this regard is bio – metric data which is categorised as physical or behavioural characteristics that uniquely identify an individual, includ- ing but not limited to DNA, ngerprints, faces, and voice patterns as examples. As such FRT are caught under this denition and within Article 9 of the GDPR, it is claried that biometric data should not be used to identify a person unless an individual has provided explicit consent or alter – natively other exemptions exist. One such example of an exempted area across the EU and UK is law enforcement. In the GDPR, personal data management for law enforcement purposes was derogated in Article 23, for determination at Member State level. There is therefore some divergence in terms of how the checks and balances exist between personal data rights and law enforcement rights. Within most EU Member States there is an expectation that for the purposes of pursuing law enforcement to identify and track oenders certain exemptions would exist, and consent would not be required. Within this space, the new technological landscape is further continuing to evolve and as such its rollout and use by law enforcement agencies is not consistent across the EU. Regardless of certain consent exemptions, other GDPR requirements do still apply, such as PbD, which does pro- vide a framework of accountability for law enforcement. For FRT purposes, a DPIA must be undertaken as a way of demonstrating and achieving PbD. The DPIA is a process of identifying risks that arise from data processing and is man- datory for high-risk applications, such as facial recognition in law enforcement use. 2 This requires that all aspects of a process are reviewed and considered to ensure that there are justications for the process; this ensures it is ‘fair and law – ful’, it is appropriately targeted, implemented and managed through time. This procedure is not only useful for the FRT operators, as it forces them to scrutinise their algorithms, focus and security, but can also benet the general public, as, if published, a DPIA can explain data processing in terms that are accessible to any individual, not just an IT specialist. Mandatory publication of the DPIA does not exist, but there is a requirement to be transparent about DP processing and to have in place privacy notices for this reason. Another important GDPR requirement is the need to have a Data Protection Ocer (DPO) within any public author – ity or private entities where the core activities require large scale, regular, and systematic monitoring of individuals or large-scale processing of special category data or data relat- ing to criminal convictions or oences. 
As such, this does mean that law enforcement agencies and businesses pro- viding processing services will be required to have a DPO. The DPO is required to advise an organisation on its data protection compliance. In addition, were an organisation to fail to fully comply with the GDPR, the DPO would act as a whistle-blower reporting to the relevant national ombuds- man on data protection. Each EU Member State and the UK has a regulatory requirement which establishes an oversight, complaint, and investigatory regime to be in place, a data protection 2 For the formal denition of the DPIA, see GDPR Article 35. 1 3 AI and Ethics ombudsman/regulator. There are currently 27 data protec- tion authorities in the EU, one for each country, plus the European Data Protection Supervisor, which oversees EU institutions and bodies. The UK also has a data protection supervisor. The exact responsibilities of the organisations dier, but all of them are tasked with monitoring and ensur – ing data protection and privacy compliance regionally on behalf of their citizens. In accordance with this mandate, it is not uncommon to see these authorities actively interven- ing in relevant disputes, sometimes even before any citizen complaints are led. The benet to accountability of these organisations is obvious—the data protection regulators have bigger budgets and better legal teams than most individuals, meaning that they are more eective in holding FRT opera- tors accountable. The authorities with enforcement powers can bypass litigation entirely, issuing nes and orders faster than a court would be able to. These factors ensure that the FRT providers and operators should never get complacent. Separately, citizens may bring forward lawsuits for data protection failings, but the ability to complain to a regulator provides the citizen with a cheaper alternative and one which should actively investigate and oversee any organisational data protection failings. The regulators are publicly funded and the resources for each across the EU and UK vary sig- nicantly. The extent of investigations and the timeliness of dealing with complaints have both been areas of criticism. For example, in 2020, a group of cross-party Members of the UK Parliament wrote complaining about the performance of the UK’s Information Commissioner. 3 Such complaints are not limited to the UK. In July 2020, the Irish High Court gave permission for a judicial review of the Data Protection Commissioner in respect of the delay dealing with com- plaints. It is to be noted that Ireland is the home to many tech companies’ European headquarters, and thus, these delays can impact more broadly upon EU citizens. However, equally, there are many examples of active engagement and investigation. In terms of moving to cover new developments, the GDPR is not a prescriptive piece of legislation and, as such, its ‘vagueness by default’ is intended to ensure that the regu- lation maintains its relevance, allowing for application to new technologies, including FRT. Even more importantly, the GDPR holds some sway outside of the EU as well, since any business dealing with the bloc has to adhere to the rules when managing European’s data, even if those same rules do not apply in their own domestic jurisdiction. This is gen- erally known as ‘The Brussels Eect’ [12, 13]. 
In practice, where FRT are rolled out in the EU, this means that it is much easier to hold FRT operators accountable, as there is no need to navigate a complex web of regional laws, and the operators themselves are more consistent in their behaviour, unable to use the splintering of regulation to their advan – tage.  In addition, companies will often roll out the same systems globally, meaning that those outside the EU may benet from some read over of standards. However, this is not to say that the systems will then be operated and man- aged in the same ways globally. In terms of AI more specically, this has become a focus for the EU and UK regulators and governments. The UK Information Commissioner’s Oce (ICO) has recently pub- lished [14] guidance on AI auditing, supported by impact assessments. Although this guidance marks an important start towards specic guidance tailored towards the compli- ance of AI systems, we are still lacking case studies and dedicated frameworks to address this problem in a standard- ised way [15]. Recently, the EU has engaged with the need to actively manage the ethics and legislation that sit around AI innovation. A 2019 press release by the European Data Protection Supervisor Wiewiórowsk, called out the account- ability and transparency concerns of facial recognition, par – ticularly around the input data for facial recognition systems stating, “the deployment of this technology so far has been marked by obscurity. We basically do not know how data are used by those who collect it, who has access and to whom it is sent, how long do they keep it, how a prole is formed and who is responsible at the end for the automated decision- making.” [16]. As such, the European Commission began publishing a roadmap for dealing with AI. In April 2021, the European Commission released documentation on its approach to AI, which includes an aspiration to harmonise all legislation and bring in a specic Articial Intelligence Act. FRT more specically have yet to be dealt with in detail but, within the proposals for harmonisation, law enforcement systems are categorised as high risk. It is stated that AI sys- tems used by law enforcement must ensure, “accuracy, reli- ability and transparency… to avoid adverse impacts, retain public trust and ensure accountability and eective redress” [ 17]. The documentation draws out areas of greater concern focusing on vulnerable people and those contexts where AI systems failures will have greater consequences. Examples include managing asylum seekers and ensuring individuals have a right to a fair trial. The importance of data quality and documentation is highlighted [17]. The Commission states that there must be oversight regarding: “the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to eectively mitigate the risks for health, safety and fundamental rights…” 3 See https:// www. openr ights group. org/ app/ uploa ds/ 2020/ 08/ Letter- for- MPs- Final- sigs-1. pdf. 1 3 AI and Ethics The place of the human in the system review is an impor- tant part of the process. In addition, the need for transpar – ency is highlighted. However, what is not yet in place is a prescribed system for transparency and accountability. As the publications are currently at a high level, a need to drill down and consider case examples is necessary for delivery. 
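The record-keeping, transparency, and human-oversight requirements sketched in the Commission's proposal lend themselves to a structured deployment record. The dataclass below is one hypothetical shape such a record could take; the field names are illustrative assumptions, not a schema prescribed by the ICO, the GDPR, or the draft Artificial Intelligence Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class FRTDeploymentRecord:
    """Illustrative record-keeping structure for one FRT deployment.
    Fields echo the themes in the cited proposals (data quality, oversight,
    redress) but are assumptions, not a mandated schema."""
    deployment_name: str
    lawful_basis: str               # e.g. a statutory law-enforcement purpose
    dpia_reference: str             # ID or link for the published DPIA
    training_data_description: str  # provenance and demographic coverage of the data
    known_error_rates: dict         # disaggregated accuracy figures, if available
    human_review_step: str          # who confirms a candidate match before any action
    retention_days: int             # how long non-matching data are kept, if at all
    complaint_channel: str          # how individuals can challenge the deployment
    review_date: date = field(default_factory=date.today)

    def to_json(self) -> str:
        record = asdict(self)
        record["review_date"] = self.review_date.isoformat()
        return json.dumps(record, indent=2)
```

Publishing something of this shape alongside a DPIA would give the "anyone with internet access" audience discussed below a concrete artefact to scrutinise, though nothing in the current proposals requires this exact form.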
There are some limitations to these publications and the recent publications by the EU have been criticized for not bringing in a moratorium on biometric technologies such as FRT [18] In an EU context, in addition to the GDPR which dic- tates rules around managing personal data, privacy is further legislated for through the European Convention on Human Rights. As with the GDPR, this is enshrined in UK law as well as across all 27 EU Member States. The Human Rights legislation is potentially more holistic in terms of oering frameworks for consideration of law enforcement versus individual rights in the rollout considerations for FRT. It enshrines principles of equality and inclusion as well as privacy and rights to fair legal processes. The checks and balances of dierent and sometimes competing human rights are well established and tested through the courts. Under the terms of the law, individuals can bring legal cases, and, in the EU Member States (although not the UK), cases can progress to the European Court of Human Rights. How – ever, there is not the same active regulatory framework sitting around the legislation which provides for quicker and cheaper routes to justice, and which can actively take action without the requirement for an individual to bring a case. Justice through the European Courts most normally is expensive, uncertain, and takes years. In addition, the requirements for accountability and design documentation for human rights compliance are not explicitly enshrined in the law. In terms of transparency, aspects of accountabil – ity for policy more generally fall under freedom of infor – mation legislation which is enacted at Member State level and diers very widely nation to nation in terms of public accountability requirements for administration more gener – ally. There are also certain law enforcement and national security exemptions from freedom of information require- ments. Finally, it is important to note that it does not bind on private entities who do not have the same accountability requirements. In terms of actual FRT legal accountabilities, cases have been brought under both the GDPR and the Human Rights Act in respect of FRT. One such instance is the 2019 UK case of Bridges v. South Wales Police. Bridges, a civil rights campaigner, argued that the active FRT deployed by the police at public gatherings infringed on the right to respect for human life under the Human Rights Act 1998 and his privacy rights under the Data Protection Act 2018 (DPA 2018), the UK implementation of the GDPR. Relevant to this discussion, Bridges also claimed that, since the police failed to account for this infringement, its DPIA was not performed correctly [19]. After a lengthy litigation process, the court ruled in favour of Bridges, agreeing with the points above and additionally nding that the police had too broad a discretion regarding the use of FRT. This example highlights the value of the GDPR (or simi – lar legislative frameworks) and, in particular, the importance of the DPIA. Here, the impact assessment not only provided the basis for a large portion of the claimant’s argument, but it was also released to the public, making it easy for anyone with internet access to learn the details of the FRT data pro- cessing employed by the South Wales Police. 4 In addition, the case shows that the DPIA is not a checkbox exercise but, instead, requires that the FRT operator possesses substantial knowledge about the inner workings of the algorithm and its wider repercussions. 
The lawsuit also draws attention to the holistic under – standing of privacy under the GDPR. In a country with less-developed data protection laws, it may be sucient for an FRT operator to encrypt and anonymise faceprints, and, regardless of how they are collected, this will constitute suf- cient protection; the GDPR goes to great lengths to ensure that this is never the case. Of particular importance are the concepts of PbD and privacy by default, as mentioned above and dened in Article 25 of the regulation. In this example, the South Wales Police ensured privacy by design, meaning that its facial recognition algorithms were built around data protection. That, however, was not enough, since the FRT were then deployed indiscriminately, which violated privacy by default—the amount of personal data collected was dis- proportionate with respect to the intended goal of identifying individuals on watchlists. As such, the police use of FRT for these processes had to be stopped. This “one strike and you’re out” approach to personal data collection goes a long way towards ensuring accountability in facial recognition, since it makes it much harder for the FRT operator to get away with negligent data processing for which there can be signicant consequences. However, while the Human Rights legislation was deployed as part of the case, the lack of a published Human Rights Impact Assessment does diminish accountability in this regard. It is to be noted that a similar requirement to the provision of a DPIA, in regards to Human Rights Impact Assessments and human rights’ by design and default, could better improve citizen rights more generally. In spite of the data protection legislation, it is important to highlight that authorities and corporate entities may fall short in their duties, which is why a proactive regulator is a signicant attribute in the GDPR regime. In August 2018, upon the request of the London Mayor, the UK ICO started 4 This particular assessment is available here: https:// afr. south- wales. police. uk/ wp- conte nt/ uploa ds/ 2019/ 10/ DPIA- V5.4- Live. pdf. 1 3 AI and Ethics to investigate whether a private property company (Kings Cross Estate Services), which managed the area around Kings Cross, a critical London transport hub was using FRT in its CCTV. It emerged that for a number of years, this com- pany had been using FRT for ‘public safety’ reasons, but had not properly disclosed or made people aware that the scheme was in operation. In addition, as part of this investigation it transpired that not only had it been using FRT to capture the images of all those people passing through the transport hub, but it had been working with the Metropolitan Police in London to check and match for certain people entering the area. A data sharing agreement was in place with the inten- tion of providing for the potential identication of wanted individuals, known oenders, and missing persons. Over a 2-year period from 2016 to 2018, the Police passed images of seven people to the property entity. These people had been either arrested and charged, reprimanded, cautioned, or given a formal warning for oences. However, it was clear that the Police had failed to disclose that the scheme existed. [ 20]. That said, more generally the ICO has found that it is acceptable for the Police to use FRT and that there is a great deal of public support for its use, but that nevertheless it must be done so in a carefully targeted way taking into account individual’s Article 8 human rights to privacy [21]. 
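The Bridges ruling turned partly on proportionality: faceprints were gathered from everyone in range when only watchlist matches were needed. A rough way to picture "privacy by default" as data minimisation is a screening loop that never persists non-matching faces. This sketch reuses the hypothetical `identify` helper from the earlier matching example; it illustrates the principle only, and is neither a description of what South Wales Police actually deployed nor a recipe for legal compliance.

```python
def screen_frame(frame_embeddings, watchlist, threshold=0.6):
    """Compare each detected face against the watchlist and keep ONLY candidate
    matches for human review; everything else is discarded immediately rather
    than stored, so the data retained stays proportionate to the stated purpose."""
    retained = []
    for embedding in frame_embeddings:
        hit = identify(embedding, watchlist, threshold)  # hypothetical helper from the earlier sketch
        if hit is not None:
            retained.append(hit)  # (watchlist_id, similarity) passed to an officer
        # non-matching embeddings simply go out of scope here and are never written out
    return retained
```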
Reecting on the position of the Regulators and their investigatory powers, one of the most active national data protection bodies in the EU is the Swedish Authority for Pri- vacy Protection (IMY), formerly known as the Swedish Data Protection Authority. In recent years, it has been involved in two FRT cases of note: a school using FRT to monitor class attendance [22], and the police using facial recognition software [23]. The first case, while not related to law enforcement, showcases how a data protection authority’s independence and legal expertise can ensure accountability where an indi- vidual or a civil organisation would not have been able to do so for various reasons. In this instance, the IMY “became aware through information in the media” that the school was trialing FRT on its students and decided to intervene. In the ensuing process, the authority found that the school’s use of facial recognition did not satisfy proportionality and neces- sity, which also led to the DPIA being conducted incorrectly. Most importantly, the IMY ruled that the consent that was given by the children’s parents to the school was invalid, as the students were in a position of dependence (school attend- ance is compulsory). The school’s board was subsequently ned approximately €20,000. There are several important aspects to this example. First, note that the IMY intervened in the case on its own voli- tion, without receiving any complaints or being asked to take action. This autonomy is important, as individuals may not always be able/willing to alert the authorities when their data are being collected and/or processed unlawfully. The reason why none of the parents came forward could be that they did not possess enough legal expertise to notice the problems in the FRT deployment or did not feel able to challenge the school given their own and their children’s relationship with it. The IMY had independence, sucient knowledge, and a position of power to hold the school accountable. Finally, note the “one strike and you’re out” approach mentioned above. While the school made reasonable eorts to comply with the legal requirements—the faceprints were recorded on a hard drive connected to an o ine computer locked away in a cupboard, and a DPIA was conducted—it failed to ensure complete compliance, and so was prosecuted. The second example concerns the use of FRT by the Swedish police. The IMY found that the police failed to con- duct a DPIA and were negligent enough to let unauthorised employees access the software, after which it imposed a ne of €250,000. Here, the law enforcement was ignorant to any negative consequences of FRT use and did not take appro- priate active PbD steps; as a result, it was held accountable for its failings. Exact data on how widespread FRT are across the EU is dicult to nd, but the technologies are not ubiquitous yet. In 2019, 12 national police forces had already deployed facial recognition with 7 more planning or testing deploy – ment at that date. Deployment has been deemed to be much slower than in USA [24]. This may in part be due to the fact that it is also surrounded by much more suitable, uniform legislation, greater transparency, and active data protection authorities—all of these components will play a large role in making Europe a better model for facial recognition account- ability. However, in the context of FRT, it is important to note that a lot of the development has happened outside the boundaries of the EU and UK. 
As such, while the EU may have set a high bar in terms of requiring PbD, much FRT application happens within a USA context. 4 The USA ethical and legislative landscape for FRT in a law enforcement context Having considered the European regulatory framework, strongly positioned to ensure some forms of ethical con- siderations before the deployment of FRT, we now turn to a much more fragmented legislative territory: the United States of America (USA). Within USA, FRT are heavily used by law enforcement, aecting over 117 million adults [ 25], which is over a third of the country’s total popula- tion. FRT rollouts are widespread, yet an average citizen has very limited means of holding its operators account- able should it be misused. The USA was an early adopter of freedom of information laws, passing the federal Publication Information Act in 1966, with individual state laws being passed after this date. This set of legislation provides for 1 3 AI and Ethics state authorities to answer for their policies and actions on receipt of a freedom of information request. This does not impact on private companies who are not held accountable in the same way. In addition, there are certain exemptions under the legislation for law enforcement and national secu- rity purposes. There are some sector-specic privacy laws, covering, for instance children online, but no overarching data protection law akin to the GDPR. These federal laws are then enforced by the Federal Trade Commission, which has an extremely broad mandate of protecting consumers against deceptive practices; it is not comparable, however, to the data protection authorities in European countries [26]. Such a massive rollout of FRT without a regulator/ombuds- man to investigate is a cause for concern as it then relies on individual legal action to call out wrongdoings. In addition, there are very considerable state-by-state dierences, and a notable lack of requirements for transparency or calls for that transparency. This reliance on individual action originates from USA lacking any federal (or state) data protection authority. This means that there is no body which would actively represent and protect citizens’ interests, while possessing the legal and regulatory powers of the state. Moreover, as we have seen, data protection authorities can intervene on behalf of the citizen and enforce decisions without initiating court proceedings; in the USA, this is not an option—any conict regarding FRT and related personal data has to be heard in court, necessitating lengthy and costly court battles (which is why citizen representation is so important). As a result, individuals often have to seek legal support from non-prot organisations; those who fail to secure it may not be able to hold FRT operators or providers accountable at all. The second issue is centered around state-by-state dif – ferences; it occurs thanks to an absence of a general federal privacy legislation, with state law often providing only very basic rights for holding FRT operators accountable. The extent of privacy laws in most states is limited to notifying an individual if their data have been stolen in a security breach [ 27]—hardly a consolation for someone who has been aected by unintentionally biased or malicious use of FRT. Relevant to our discussion, at the time of writing, there is only one state (Illinois) that has legislation allowing private individuals to sue and recover damages for improper usage and/or access to their biometric data, including face- prints [ 26]. 
However, even if you are lucky to live in Illinois, holding a malicious FRT provider or operator, private or public, accountable is likely to be dicult. Accountability relies on transparency—if, for instance, an individual would like to sue an FRT provider on the basis of a privacy viola- tion, they will need some knowledge of how their data are processed. This is where the USA falls short; not only are the law enforcement and federal agencies notoriously secretive, but they often do not understand how their own FRT works in the rst place. Without PbD and the requirements for a DPIA, there is less transparency on FRT processes, and it is harder to know exactly how processing is occurring and to hold operators to account. In addition, operators may often not have duly considered and weighted the implications of the FRT usage. In a USA context, the law on privacy and use of FRT for localised law enforcement operates very much at a state- by-state level. Within this context, California is often held to be the state with the strongest privacy laws; in 2020, it strengthened its existing privacy laws with the California Privacy Rights Act (CCPA), which established the Califor – nia Privacy Protection Agency and extended residents’ rights in terms of how business could collect and use their data. However, notably, it did not touch on any privacy powers in respect of law enforcement, and, in tandem with the CCPA, the state started to try to introduce a Facial Recognition Bill to enhance the use of FRT for law enforcement purposes. It is to be noted that some cities in California (e.g., Berkeley and San Francisco) have banned FRT usage. Interestingly, the Bill received lobbying support from Microsoft, but was ercely campaigned against by Civil Rights groups, and as such, it was not passed in June 2020. This period marked a growing sense of unease with the ethics around FRT. In the same month, IBM stated that it would cease all export sales of FRT. In its statement, it described FRT as akin to other innovations such as nuclear arms on which the USA has had to seize a lead for the protection of its citizens [28]. In addition, it highlighted the aws in the technology, for example its failure to deal with Black and Asian faces with sucient accuracy. At the same time, another big tech entity, Amazon stated that it would cease to sell FRT to the Police for 1 year to give Congress time to put in place new regula- tions to govern its ethical usage. Microsoft followed suit stat – ing, “we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology” [29]. Each of these entities clearly became concerned about the potential misuse of the technology by law enforcement agencies which IBM said had caused con- cerns since the revelations by Edward Snowden in 2014 [29]. Clearly, there were valid ethical concerns about the devel- opment of FRT. However, when benecial inuences leave the marketplace, this may open up the eld to less ethical developers. Each of these entities has a process for review – ing the ethics of technology roll outs, for example, IBM has an Ethics AI Board led by a Chief Privacy Ocer. It is dicult to know how ethical or eective these private enti- ties are where there is such limited transparency, although clearly these large global corporations worry about their images. 
This was evidenced in the case of Google which received international press attention and criticism when it red Timnit Gebru, co-lead of its Ethical AI Research Team, 1 3 AI and Ethics for refusing to edit out certain statements from a research article on AI [30], and as a result of the controversy, it has since had to change its publication approach. The concerns of private enterprise and the relationship with law enforcement and national security have been rec- ognised at a national level. For example in the context of the Federal Bureau of Investigation (FBI), there have been hear – ings in Washington on the acceptable use of FRT. 5 At this hearing, it was stated that the “FBI has limited information on the accuracy of its face recognition technology capabili- ties.” These hearings called for greater accountability and transparency in the use of the technologies, although deni – tive outcomes from the hearings are still awaited. A recent illustration of the current opacity of the USA system is demonstrated in the case of Willie Allen Lynch, a black man convicted in 2016 by a Florida court of sell- ing cocaine; the Police Department made the decision to arrest him based on a facial recognition match, among other factors. In an attempt to appeal the decision, Lynch argued that the facial recognition system made an erroneous match (a reasonable statement, given FRT’s known inaccuracy with black faceprints [9 ]), proving this, however, required the police to turn over the photo in question and the list of possible faceprint matches oered by the system, which it refused to do. Strikingly, the detectives involved in the case admitted that, while the FRT rated Lynch’s faceprint as the closest match, they did not actually know how the rating sys- tem worked or even which scale the rating was assigned on. Ultimately, the court ruled in favour of the Police Depart- ment, and Lynch was never given access to the photo and potential matches [31]. On a federal level, the issues of a lack of transparency and accountability persist; an attempt by the American Civil Liberties Union (ACLU) to gather information about the use of FRT by the Department of Justice, the FBI and the Drug Enforcement Administration failed, since none of the agen- cies responded to a Freedom of Information Act request. Undeterred, the ACLU pursued legal action, with results yet to be seen—there has been no information about the case since October 2019, when the initial complaint was led [ 32]. In addition, the ACLU has called out the Government’s and private enterprises’ surveillance operations at airports and customs boundaries across the USA [33]. In regard to private companies, as previously noted, these are not caught by freedom of information laws and can often aord legal repower beyond the reach of even the wealthi- est individuals. Clearview AI, one of the leading providers of FRT to the USA law enforcement agencies, supplies the technologies to more than 600 police departments across USA [34]; the ACLU led a lawsuit against the company in the state of Illinois, arguing that it collected faceprints with – out consent, as required by the state’s Biometric Information Privacy Act [ 35]. Filed in May 2020, the case remains active at the time of writing, accumulating a seemingly endless stream of motions, memoranda, and briefs from both sides. 
The amount and complexity of the legal paperwork on a case that has not even been heard yet is illustrative of how ercely opposed the company is to any eorts to hold it account- able, and it is problematic for ordinary citizens to follow the lawsuit through on their own; although crowdsourcing and group action has become a reality for legal cases, as seen in the actions brought by the Austrian Max Schrems in the EU. In addition, there has been a class action brought against the Department Store Macy’s in Illinois for its use of FRT [36], so such legal action may become more common. Nevertheless, a mature democratic nation should have other solutions in place. This absence of the threat of litigation removes the pro- verbial sword hanging above the FRT providers’ heads, allowing them to have a free-for-all feast on user informa- tion. For instance, Clearview AI openly discloses informa- tion about scraping Facebook user proles for images to build up its reference database [ 34], even though this action is explicitly prohibited by the website’s terms of service. IBM, in a similar fashion, collected individuals’ Flickr pho – tos without consent; the aected users were not given a fea- sible way of deleting their information from the database [ 37]. A complete absence of data protection and privacy rights is hugely problematic. 5 Conclusion and recommendations FRT is no longer a topic of science ction or a concern for the future. It is here now, impacting people’s lives on a daily basis, from wrongful arrests to privacy invasions and human rights infringements. The widespread adoption of this technology without appropriate considerations could have catastrophic outcomes, and ultimately may jeopardise its development if some jurisdictions decide to ban the use of the technology for an indenite amount of time [38]. How – ever, critical in the success of FRT is the transparency and accountability in each stage of its development and usage and the ability to audit and challenge as required. The idea of power is particularly linked to the intended, and actual, outcomes of FRT, which should not be dissociated from dis- cussions around accountability. This discussions in this article makes the case that at all stages of the FRT process in all aspects of design and use including specic contexts, there is a requirement to docu- ment and account for the usage ensuring mechanisms for 5 See for example the 2019 report at https:// overs ight. house. gov/ legis lation/ heari ngs/ facial- recog nition- techn ology- part- ii- ensur ing- trans paren cy- in- gover nment- use. 1 3 AI and Ethics transparency and challenge. The GDPR provides a good regulatory starting point to address some of its concerns. However, the ethical considerations of this technology go far beyond issues of privacy and transparency alone. It requires broader considerations of equality, diversity, and inclusion as well as human rights issues more generally. As such other forms of assessments, such as Human Rights Impact Assessments, in addition to DPIA, should be part of the development and rollout of FRT—a DPIA alone is insucient. These Assessments should be automatically required to be put into the public domain. In addition, the requirements must equally be enacted upon both public and private enterprises with transparency and accountabil- ity requirements. 
In conjunction with these steps, global regulators are needed with powers to actively investigate each aspect of the development and deployment processes of FRT in case contexts, and with powers to step in, stop and ne inappropriate FRT development and deployment. In addition, there should be more normal audit processes required for FRT deployment just as there are for nancial oversights. The societal impacts for FRT misconduct are not to be underestimated. We conclude this paper with the recommendation of ten critical ethical questions that need to be considered, researched, and answered in granular detail for law enforce- ment purposes and which in addition have read over to other AI development. It is suggested that these need to be dealt with and regulated for. The questions are: 1. Who should control the development, purchase, and testing of FRT systems ensuring the proper manage – ment and processes to challenge bias? 2. For what purposes and in what contexts is it acceptable to use FRT to capture individuals’ images? 3. What specic consents, notices and checks and bal- ances should be in place for fairness and transparency for these purposes? 4. On what basis should facial data banks be built and used in relation to which purposes? 5. What specic consents, notices and checks and bal- ances should be in place for fairness and transparency for data bank accrual and use and what should not be allowable in terms of data scraping, etc.? 6. What are the limitations of FRT performance capabili- ties for dierent purposes taking into consideration the design context? 7. What accountability should be in place for dierent usages? 8. How can this accountability be explicitly exercised, explained and audited for, for a range of stakeholder needs? 9. How are complaint and challenge processes enabled and aorded to all? 10. Can counter-AI initiatives be conducted to challenge and test law enforcement and audit systems? We are at a tipping point in the relationships and power structures in place between citizens and law enforcers. We cannot wait to step in and act, and in fact, there are many potential solutions to better ensure ethical FRT deployment. However, this is currently an ethical emergency requiring urgent global attention. Funding This work received partial funding from the UCL AI Centre. Declarations Conflict of interest The authors conrm there are no conicts of inter – est. Open Access This article is licensed under a Creative Commons Attri- bution 4.0 International License, which permits use, sharing, adapta- tion, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. References 1. OECD (2020a) Tracking and tracing COVID: Protecting pri- vacy and data while using apps and biometrics, OECD Policy Responses to Coronavirus (COVID-19), OECD Publishing, Paris https:// doi. org/ 10. 1787/ 8f394 636- en 2. 
2. OECD (2020b) Ensuring data privacy as we battle COVID-19. OECD Policy Responses to Coronavirus (COVID-19), OECD Publishing, Paris. https://doi.org/10.1787/36c2f31e-en
3. Mann, M. and Smith, M. (2017) 'Automated facial recognition technology: recent developments and approaches to oversight'. University of New South Wales Law Journal 40(1): 121–145.
4. Gates, K. (2011) Inventing the security-conscious, tech-savvy citizen. In: Our biometric future: facial recognition technology and the culture of surveillance (pp 125–150). NYU Press.
5. Abomhara, M.Y., Yayilgan, S., Obiora, N.L., Székely, Z. (2021) A comparison of primary stakeholders' views on the deployment of biometric technologies in border management: case study of Smart mobility at the European land borders. Technology in Society 64.
6. Patton, D.U. et al. (2017) Stop and frisk online: theorizing everyday racism in digital policing in the use of social media for identification of criminal conduct and associations. Social Media + Society 3(3): 2056305117733344. https://doi.org/10.1177/2056305117733344
7. Wang, Y., Kosinski, M. (2020) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. OSF. https://doi.org/10.17605/OSF.IO/ZN79K
8. Dickey, M. (2019) The future of diversity and inclusion in tech: where the industry needs to go from here. TechCrunch, 17 June 2019. https://techcrunch.com/2019/06/17/the-future-of-diversity-and-inclusion-in-tech/. Accessed 26 Apr 2021.
9. Buolamwini, J. and Gebru, T. Gender shades: intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, PMLR, 77–91.
10. The Economist (2021) Design bias: working in the dark. The Economist, April 10–16 2021, p. 14.
11. Hill, K. (2020) Activists turn facial recognition tools against the police. The New York Times, 21 October. https://www.nytimes.com/2020/10/21/technology/facial-recognition-police.html. Accessed 23 Feb 2021.
12. Bradford, A. (2020) The Brussels Effect: How the European Union Rules the World. New York: Oxford University Press.
13. Bendiek, A., Römer, M. (2019) Externalizing Europe: the global effects of European data protection. Digital Policy, Regulation and Governance 21(1): 32–43. https://doi.org/10.1108/DPRG-07-2018-0038
14. Information Commissioner's Office (2020) Guidance on the AI auditing framework: draft guidance for consultation. https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf
15. Kazim, E., Denny, D.M.T., Koshiyama, A. (2021) AI auditing and impact assessment: according to the UK Information Commissioner's Office. AI and Ethics. https://doi.org/10.1007/s43681-021-00039-2
16. Wiewiórowski, W. (2019) Facial recognition: a solution in search of a problem? European Data Protection Supervisor. https://edps.europa.eu/press-publications/press-news/blog/facial-recognition-solution-search-problem_en. Accessed 23 Feb 2021.
17. European Commission (2021) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. European Commission, Brussels.
18. Wiewiórowski, W. (2021) Artificial Intelligence Act: a welcomed initiative, but ban on remote biometric identification in public space is necessary. Press release, 23 April 2021. Brussels: European Commission. https://edps.europa.eu/system/files/2021-04/EDPS-2021-09-Artificial-Intelligence_EN.pdf. Accessed 23 Apr 2021.
19. Bridges, R (On the Application Of) v South Wales Police [2020] EWCA Civ 1058 (11 August 2020). https://www.bailii.org/ew/cases/EWCA/Civ/2020/1058.html. Accessed 9 May 2021.
20. Metropolitan Police Service (2018) Report to the Mayor of London. https://www.london.gov.uk/sites/default/files/040910_letter_to_unmesh_desai_am_report_re_kings_cross_data_sharing.pdf. Accessed 27 Apr 2021.
21. Information Commissioner's Office (2019) ICO investigation into how the police use facial recognition technology. Wilmslow: ICO. https://ico.org.uk/media/about-the-ico/documents/2616185/live-frt-law-enforcement-report-20191031.pdf. Accessed 27 Apr 2021.
22. IMY (2019) Supervision pursuant to the General Data Protection Regulation (EU) 2016/679 – facial recognition used to monitor the attendance of students. Stockholm (DI-2019-2221).
23. IMY (2021) Police unlawfully used facial recognition app. https://www.imy.se/nyheter/police-unlawfully-used-facial-recognition-app/. Accessed 27 Apr 2021.
24. Kayser-Bril, N. (2019) At least 11 police forces use face recognition in the EU, AlgorithmWatch reveals. AlgorithmWatch. https://algorithmwatch.org/en/story/face-recognition-police-europe. Accessed 13 Feb 2021.
25. Garvie, C., Bedoya, A. and Frankle, J. (2016) The Perpetual Line-Up: Unregulated Police Face Recognition in America. Center on Privacy & Technology at Georgetown Law.
26. Chabinsky, S. and Pittman, P.F. (2020) 'USA', in Hickman, T. and Gabel, D. (eds.) Data Protection Laws and Regulations 2020. Global Legal Group.
27. Privacy Rights Clearinghouse (2018) Data breach notification in the United States and Territories. The International Association of Privacy Professionals. https://iapp.org/media/pdf/resource_center/Data_Breach_Notification_United_States_Territories.pdf. Accessed 13 Feb 2021.
28. IBM (2020) Artificial intelligence: a precision regulated approach to controlling facial recognition technology. https://www.ibm.com/blogs/policy/facial-recognition-export-controls/. Accessed 27 Apr 2021.
29. Bajarin, T. (2020) Why it matters that IBM has abandoned facial recognition technology. Forbes, 18 June 2020. https://www.forbes.com/sites/timbajarin/2020/06/18/why-it-matters-that-ibm-has-abandoned-its-facial-recognition-technology/. Accessed 27 Apr 2021.
30. Dastin, J. and Dave, P. (2021) Two Google engineers resign over firing of AI ethics researcher Timnit Gebru. Reuters Reboot, 4 February 2021. https://www.reuters.com/article/us-alphabet-resignations-idUSKBN2A4090. Accessed 27 Apr 2021.
31. Mak, A. (2019) Facing Facts. Slate. https://slate.com/technology/2019/01/facial-recognition-arrest-transparency-willie-allen-lynch.html. Accessed 13 Feb 2021.
32. Crockford, K. (2019) The FBI is tracking our faces in secret. We're suing. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/the-fbi-is-tracking-our-faces-in-secret-were-suing/. Accessed 13 Feb 2021.
33. Gorski, A. (2020) The government has a secret plan to track everyone's faces at airports. We're suing. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/the-government-has-a-secret-plan-to-track-everyones-faces-at-airports-were-suing/. Accessed 13 Feb 2021.
34. Hill, K. (2020) The secretive company that might end privacy as we know it. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html. Accessed 13 Feb 2021.
35. ACLU (2020) ACLU v. Clearview AI. https://www.aclu.org/cases/aclu-v-clearview-ai. Accessed 13 Feb 2021.
36. Mitchell, M. (2020) Macy's faces class action lawsuit for use of facial recognition software Clearview AI. Cincinnati Enquirer. https://www.cincinnati.com/story/news/2020/08/07/macys-faces-class-action-lawsuit-use-facial-recognition-software-clearview-ai/3315099001/. Accessed 27 Apr 2021.
37. Solon, O. (2019) Facial recognition's 'dirty little secret': millions of online photos scraped without consent. NBC News. https://www.nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921. Accessed 27 Apr 2021.
38. Conger, K., Fausset, R. and Kovaleski, S.F. (2019) San Francisco bans facial recognition technology. The New York Times. https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html. Accessed 3 Mar 2021.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Full Terms & Conditions of access and use can be found at https://www.tandfonline.com/action/journalInformation?journalCode=cict20 Information & Communications Technology Law ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/cict20 Policing faces: the present and future of intelligent facial surveillance Lachlan Urquhart & Diana Miranda To cite this article: Lachlan Urquhart & Diana Miranda (2021): Policing faces: the present and future of intelligent facial surveillance, Information & Communications Technology Law, DOI: 10.1080/13600834.2021.1994220 To link to this article: https://doi.org/10.1080/13600834.2021.1994220 © 2021 The Author(s). Published by InformaUK Limited, trading as Taylor & Francis Group Published online: 28 Oct 2021.Submit your article to this journal Article views: 789View related articles View Crossmark data Policing faces: the present and future of intelligent facial surveillance Lachlan Urquhart a,b and Diana Miranda c aSchool of Law, University of Edinburgh, Edinburgh, UK; bHorizon Digital Economy Research Institute, School of Computer Science, University of Nottingham, Nottingham, UK; cFaculty of Social Sciences, University of Stirling, Stirling, UK ABSTRACTIn this paper, we discuss the present and future uses of intelligent facial surveillance (IFS) in law enforcement. We present an empirical and legally focused case study of live automated facial recognition technologies (LFR) in British policing. In Part I, we analyse insights from 26 frontline police o fficers exploring their concerns and current scepticism about LFR. We analyse recent UK case law on LFR use by police which raises concerns around human rights, data protection and anti-discrimination laws. In Part II, we consider frontline o fficers ’optimism around future uses of LFR and explore emerging forms of IFS, namely emotional AI (EAI) technologies. A key novelty of the paper is our analysis on how the proposed EU AI Regulation (AIR) will shape future uses of IFS in policing. AIR makes LFR a prohibited form of AI and EAI use by law enforcement will be regulated as high-risk AI that has to comply with new rules and design requirements. Part III presents a series of 10 practical lessons, drawn from our re flections on the legal and empirical perspectives. These aim to inform any future law enforcement use of IFS in the UK and beyond. KEYWORDSFacial recognition; Emotional AI; policing; surveillance; law Introduction Overview. In this paper, we bring together a novel analysis unpacking both legal and sociological dimensions of future uses of intelligent facial surveillance(IFS) by law enforce- ment. The paper provides a case study of live automated facial recognition (LFR) use by police in public spaces in the UK. In Part I, we present insights from 26 frontline o fficers on LFR, exploring their concerns and scepticism about the role of this technology. In particu- lar, we consider themes of ineffectiveness, inaccuracy, distrust, usefulness and intrusiveness . We then discuss the current law and policy landscape around LFR, particularly the discuss- ing legal challenges andconcerns raised in the UK Bridges cases. These focused on South Wales Police trials of LFR and raised concerns around human rights, data protection and anti-discrimination law compliance. In Part II, we advance the discussion to consider future uses of IFS by examining police offi ceroptimism around LFR when integrated © 2021 The Author(s). 
with other technological systems. We discuss the potential integration of LFR with other policing technologies such as body-worn cameras, and the potential for incorporation of face-based EAI capabilities, i.e. systems which seek to read facial expressions, not to identify individuals, but instead to understand their underlying emotive state and intent. This raises legal questions that we explore through the new EU Proposed AI Regulation (AIR), as the world's first comprehensive AI regulatory framework seeking to make AI more trustworthy.[1] We assess the implications of the proposal making LFR a prohibited form of AI in the EU and explore how EAI use by law enforcement will be regulated as a high-risk AI system (HRAIS), introducing new rules and design requirements for different stakeholders across the supply chain from providers to users. Part III draws together our consideration of legal issues and empirical insights from operational police officers. We consider the practical issues of deploying LFR and EAI in policing, and develop 10 lessons from current uses and highlight issues that need attention for legally informed IFS in future policing practice.

Background. For centuries faces have been used by law enforcement not just to identify but to attempt to read states of mind and infer suspicious behaviour.[2] LFR aims to detect and map facial features from audio-visual footage. This is done in order to produce a template that is compared with police curated watchlists to identify individuals. Despite the recent attention facial recognition has faced in the press, policymaking and in scholarship, this is a longstanding area of technology development.[3] For example, Japanese IT firm NEC has been developing facial recognition technologies since the Osaka World's Fair in 1970.[4] However, with advances in machine learning and computer vision techniques, visual surveillance mechanisms are being coupled with biometric systems in order to automate suspicion and augment human policing capabilities.[5] Legal and surveillance scholars have been raising concerns about the implementation of facial recognition, namely the implications for public space interactions and categorisation of suspicion.[6] In particular, by perpetuating forms of profiling and reinforcing categories of suspicion, groups that are already disproportionally subject to more control are significantly targeted by these systems.[7] Concurrently, concerns around police use of facial recognition also relate to (un)reliability, (in)effectiveness and (in)accuracy when identifying faces in the crowd.[8] Such concerns led to a ban on the use of LFR in several cities in the US (San Francisco, Oakland, etc.), alongside companies calling for bans on police use of these systems, including Google, Amazon, Microsoft, and IBM.[9] More recently, in the wake of the proposed AIR, the European Data Protection Supervisor, European Data Protection Board and European Parliament have all called for stricter regulation of LFR and emotional AI technologies in public spaces, particularly for law enforcement.[10] Interestingly, the EDPS/EDPB Opinion raises concerns beyond just faces, extending to a wider reading of biometrics in public too, indicating future concerns of how AI systems are used to read bodily features more broadly, including 'gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals'.[11]

In the UK context, British police forces, such as South Wales Police (SWP) and the Metropolitan Police Service (MPS), have been early adopters. MPS has been trialling LFR in events and crowded public spaces in London since 2016 and SWP has been using LFR in trials since 2017 and 2018 in high streets and at concert arenas.

Notes 1–6: [1] Even if the UK is no longer a member state, this regulation remains relevant because the EU envision it being a gold standard around the world for regulation of AI and thus it will set standards (in the same way the GDPR has). More directly, it remains relevant to the UK because the scope of the law applies to organisations providing services to EU organisations and seeking access to the EU marketplace (see Art 2). [2] Simon Cole, Suspect Identities: A History of Fingerprinting and Criminal Identification (Harvard University Press, 2001); Simone Brown, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015); Diana Miranda, 'Identifying Suspicious Bodies? Historically Tracing the Trajectory of Criminal Identification Technologies in Portugal' (2020) 18(1) Surveillance & Society 30–47. [3] Guardian News and BBC portals on facial recognition <https://www.theguardian.com/technology/facial-recognition> and <https://www.bbc.co.uk/news/topics/c12jd8v541gt/facial-recognition>, all URLs last accessed 16 July 2021. [4] Kelly Gates, Our Biometric Future – Facial Recognition Technology and the Culture of Surveillance (New York University Press, 2011). [5] Peter Fussey, Bethan Davies, Martin Innes, ''Assisted' Facial Recognition and the Reinvention of Suspicion and Discretion in Digital Policing' (2021) 61(2) The British Journal of Criminology 325–44 <https://doi.org/10.1093/bjc/azaa068>; Diana Miranda, 'Body Worn Cameras on the Move: Exploring the Contextual, Technical and Ethical Challenges in Policing Practice' (2021) Policing and Society <https://doi.org/10.1080/10439463.2021.1879074>. [6] Gates (n 5); Mitchell Gray, 'Urban Surveillance and Panopticism: Will We Recognize the Facial Recognition Society?' (2003) 1(3) Surveillance & Society 314–30; Lucas Introna and David Wood, 'Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems' (2004) 2(2/3) Surveillance & Society 177–98; Kyriakos Kotsoglou and Marion Oswald, 'The Long Arm of the Algorithm? Automated Facial Recognition as Evidence and Trigger for Police Intervention' (2020) 2 Forensic Science International: Synergy 86–89; David Lyon, Surveillance Society: Monitoring Everyday Life (Open University Press, 2001); Clive Norris, 'From Personal to Digital: CCTV, the Panopticon, and the Technological Mediation of Suspicion and Social Control' in David Lyon (ed), Surveillance as Social Sorting – Privacy, Risk and Digital Discrimination (Routledge, 2003); Gavin Smith, 'The Politics of Algorithmic Governance in the Black Box City' (2020) Big Data & Society 1–9; Rebecca Venema, 'How to Govern Visibility?: Legitimizations and Contestations of Visual Data Practices after the 2017 G20 Summit in Hamburg' (2020) 18(4) Surveillance & Society 522–39; Joe Purshouse and Liz Campbell, 'Privacy, Crime Control and Police Use of Automated Facial Recognition Technology' (2019) 3 Criminal Law Review 188–204.
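To make the template-and-watchlist step described in the Background above more concrete, the short Python sketch below shows the general shape of such a screening loop: each detected face's numerical template is scored against every watchlist entry, anything above an operator-set similarity threshold is flagged as a candidate for human review, and templates that match nothing are simply discarded. This is a minimal illustrative sketch only, not NEC's NeoFace Watch or any deployed police system; the template format, the cosine-similarity measure, the 0.6 threshold and all function names are hypothetical assumptions for illustration.

from dataclasses import dataclass

@dataclass
class WatchlistEntry:
    person_id: str
    template: list[float]  # pre-computed facial template (hypothetical format)

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Score how alike two templates are; higher means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def screen_frame(face_templates: list[list[float]],
                 watchlist: list[WatchlistEntry],
                 threshold: float = 0.6) -> list[tuple[str, float]]:
    # Return candidate matches above the operator-set threshold; anything
    # returned is only a candidate that a human operator must review.
    candidates = []
    for template in face_templates:
        best_id, best_score = None, 0.0
        for entry in watchlist:
            score = cosine_similarity(template, entry.template)
            if score > best_score:
                best_id, best_score = entry.person_id, score
        if best_id is not None and best_score >= threshold:
            candidates.append((best_id, best_score))  # flag for officer review
        # below threshold: the template goes out of scope and nothing is stored
    return candidates

The interesting design choice sits in the threshold: raising it reduces false positives at the cost of more false negatives, which is the same user-set threshold trade-off described for AFR Locate in Part 1.2 below.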
We fully discuss how NEC ’s NeoFaceWatch system was deployed by SWP in Part 1.2. Whilst there is extensive discussion of LFR in this paper, it is only one example of what we term intelligent facial surveillance (IFS). We use the concept of IFS in this paper to begin moving beyond current debates solely focused on LFR, and we believe there is value in the notion of IFS to provide long term re flections and to adopt a wider framing of AI-enabled surveillance targeti ng the face in law enforcement. We anticipate future forms of IFS beyond public space CCTV such as drones, body-worn cameras (BWCs), dashboard cams or smart home cameras (e.g. Ring partnering with the police ‘Privacy, Crime Control and Police Use of Automated Facial Recognition Technology ’(2019) 3 Criminal Law Review 188 – 204. 7Lucas Introna and Helen Nissenbaum, Facial Recognition Technology: A Survey of Policy and Implementation Issues (Lan- caster University Working Paper, 2010); Clare Garvie, Alvaro Bedoya and Jonathan Frankle, The Perpetual Line-up: Unre- gulated Police Face Recognition in America (Georgetown Law, Center on Privacy & Technology, 2016); Damien Williams, ‘Fitting the Description: Historical and Sociotechnical Elements of Facial Recognition and Anti-Black Surveillance ’(2020) 7(1) Journal of Responsible Innovation 74 –83. 8Gates (n 5); Information Commissioner Offi ce,ICO Investigation into How the Police Use Facial Recognition Technology in Public Places . Hereinafter ‘ICO 2019a’; Information Commissioner Offi ce,Live Facial Recognition Technology –Police Forces Need to Slow Down and Justify its Use . hereinafter‘ICO, 2019b’; Introna and Nissenbaum (n 8); Surveillance Camera Commissioner, Surveillance Camera Commissioner Annual Report 2017/2018 . (SCC, 2018) < https://assets.publishing.service.gov.uk/government/uploads/system/uploa ds/ attachment_data/ file/772440/CCS207_CCS1218140748-001_SCC_AR_2017-18_Web_Accessible.pdf >. Surveillance Camera Commissioner, Facing the Camera. (SCC, 2020) . 9Antoaneta Roussi,‘Resisting the Rise of Facial Recognition ’(2020) 587 Nature 350 –53.10European Parliament, Motion for a European Parliament Resolution on Arti ficial Intelligence in Criminal Law and its Use by the Police and Judicial Authorities in Criminal Matters (European Parliament, 2021). . 11European Data Protection Board and European Data Protection Supervisor, EDPB-EDPS Joint Opinion 5/2021 on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Arti ficial Intel- ligence (Arti ficial Intelligence Act) (EDPS-EDPB, 2021) . INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 3 in the US and UK). 12 We argue IFS can also incorporate non-identifying applications focused on intentionality and state of mind, speci fically emotional AI. 13 Emotion sensing has been used in advertising and commercial contexts to date, 14 but is also emerging in law enforcement through the iBorder Ctrl 15 and VibroImage. 16 Aligning LFR and EAI with smart city initiatives can lead to troubling applications in law enforce- ment, e.g. Uyghurs being targeted by these systems in police stations in Xinjiang, China. 17 Avoiding this kind of future in the UK is a key motivation of this paper, and the current focus on identi fication based harms around LFR means there is a risk these types of systems go unaddressed. 18 We now turn to our o fficer perspectives on current uses of IFS through their insights on LFR. Part I –present uses of intelligent facial surveillance 1.1. 
Frontline policing perspectives We will now discuss the perceptions of British frontline police o fficers on the use of LFR. In total 26 semi-structured inte rviews were conducted with police o fficers from two British Police forces in di fferent geographic locations (South and North of the UK). 19 Due to con fidentiality reasons we cannot name these forces directly, however, they were selected because they were in the process of implementing a range of new visual surveillance technolog ies. We sought to understand the attitudes and perceptions of frontline o fficers around current and future uses of LFR in particu- lar.Thepoliceo fficers were selected to ensure a div ersity of ranks, age, genders, patrol location areas (both urban and rural) and years of experience. All the partici- pants were informed about the aims of t he project, consented freely to being involved and were briefed on the aims of the study and they could withdraw at any time. 20 These interviews, conducted between 2018 and 2019, lasted 45 minutes on average within police stations or headq uarters, were audio recorded and tran- scribed verbatim, without recording participants ’names. The names presented below are pseudonyms. The data was anal ysed and coded following a thematic 12Met Police,Amazon Ring Internet-connected Camera-enabled Doorbells: Freedom of Information Request (Met, 2020) < https://www.met.police.uk/foi-ai/metropolitan-police/disclosure-2020/jan uary/amazon-ring-internet-connected-cam era-enabled-doorbells/ >. 13Andrew McStay and Lachlan Urquhart, ‘“’This Time with Feeling?” Assessing EU Data Governance Implications of Out of Home Appraisal Based Emotional AI’ (2019) 24(1) First Monday . 14Andrew McStay,Emotional AI(Sage, 2018); Luke Stark and Jesse Huey, ‘The Ethics of Emotion in AI Systems’(2021) FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency 782 –93. 15Javier Sánchez-Monedero & Lina Dencik, ‘The Politics of Deceptive Borders:‘Biomarkers of Deceit’and the Case of iBor- derCtrl ’(2020) Information, Communication & Society < https://doi.org/10.1080/1369118X.2020.1792530>. 16James Wright,‘Suspect AI: Vibraimage, Emotion Recognition Technology and Algorithmic Opacity ’Science, Technology and Society (2021) < https://doi.org/10.1177/09717218211003411 >. 17Jane Wakefield, ‘AI emotion-detection software tested on Uyghurs’ (2020) BBC News . 18Despite this, we have had recent calls from EU bodies to ban emotion sensing under the AI Act. See European Data Protection Supervisor and European Data Protection Board, Joint Opinion 05/2021 on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Arti ficial Intelligence (Arti ficial Intelligence Act) (EDPS/EDPB, 2021) < https://edpb.europa.eu/news/news/2021/edpb-edps-call-ban-use-ai-automate d-recognition- human-features-publicly-accessible_en>. 19Diana Miranda, Evidence to Justice Sub Committee on Policing: Facial Recognition: How Policing in Scotland Makes Use of this Technology (Scottish Parliament, 2020) < https://archive2021.parliament.scot/S5_JusticeSubCommitteeOnPolicing/ Inquiries/JS519FR25_Dr_Miranda.pdf >. 20This study was approved by Keele University Board of Ethics. 4 L. URQUHART ET AL. approach utilising principles of analytic induction, i.e. using rounds of analysis to sys- tematically re fine the thematic codes. When discussing scenarios of LFR use our participants ’position was mainly one of scep- ticism and disbelief in the technology. 
This is illustrated by Police Constable (PC) Amy (2 years of service), who felt even if it would be useful for police forces to incorporate FR capa- bilities in visual surveillance systems, such use was not perceived as realistic at present: I don’t know how useful or how much we would use something like that. Obviously, there are times where we get, say, CCTV footage of somebody who ’s committed a theft, and if people can ’t identify that person …Obviously, it would probably come in handy in cases like that, and you could get a higher detecting rate, but I can’t see that happening any time soon .Maybe that ’s me being sort of sceptical . In order to explore some of the concerns and elements of uncertainty raised by police o ffi cers, we observed the following themes: ine ffectiveness, inaccuracy, distrust (1.1.1), usefulness (1.1.2) and intrusiveness (1.1.3). 1.1.1. Ineff ectiveness, inaccuracy and distrust Despite the di fferent applications of LFR, such technology is often portrayed by police forces as an important tool in the fight against crime and as a valuable and neutral tool to aid policing. 21 This was particularly evident in the study developed by Fussey, Davies and Innes 22during the MPS trials in London, where ‘despite awareness of potential technological limitations’ , the levels of trust and belief in LFR as infallible and accurate were high. 23 In their words, ‘throughout the MPS trials, a commonly articulated and pre- vailing view was one of faith in LFR systems to enhance policing, but the challenge being in proving its worth externally’. 24 Our frontline o fficers are part of this external audience and our data shows they remain unconvinced about LFR. Our participants questioned the portrayal of objectivity and neutrality often associated with LFR. They also raised concerns in relation to the accuracy and e ffectiveness of LFR. Similarly to scholars 25 and policymakers, 26 our participants remained sceptical of its current uses and, in particular, showed disbelief and distrust in LFR technical capabilities. Portraying it as ine ffective, Sgt (Sergeant) Lawrence (12 years of service) argued: facial recognition is pretty terriblefrom what I’ve seen of it. I’ ve seen it work a couple of times and it ’s come out with all sorts of random decisionsand…so I don ’t think facial rec- ognition technology is where it needs to be at the moment. PC John (9 years of service) also discussed the current quality of footage and how more technological improvement is needed for cameras to operate e ffectively: ‘In terms of facial recognition, I ’d imagine there might need to be some improvement in relation to the quality of footage. ( …) they ’re probably going to struggle to get decent facial recognition. ’ 21Met Police, Live Facial Recognition Resource Page (Met, 2021) . 22Fussey, Davies and Innes (n 8) 14.23ibid.24ibid.25Introna and Nissenbaum (n 10).26ICO (n 11); Surveillance Camera Commissioner (n 11). INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 5 This theme is also discussed by Fussey, Davies and Innes 27 in relation to the quality of input data impacting the system ’se ffectiveness. The custody images often used by the system di ffered signifi cantly in standard and resolution, having an impact on LFR performance. In their words: ‘Because they were “naturalistic ”with people, sometimes smiling or squinting or tilting their heads at an angle, this aff ected the shape of their faces and, therefore, the algorithm ’s ability to analyse them e ffectively. 
’ 28 Nonetheless, the scepticism of our participants, as members of ‘an external audi- ence ’, was not only associated with direct experiences but also with fictional represen- tations of technology use. When re flecting about the process of searching for faces in a database, PC Larry mentioned: ‘I honestly don ’t know if the technology is like they make you believe in a Hollywood film where they can go through this massive data- base and go, oh, that ’s so-and-so, he ’s a terrorist, because we ’ve got his footage. ’ This position of scepticism was also linked to the lack of trust on a process of decision-making that involves both human and non-human actors. Our participants were hesitant to show trust in a technology that is automated and dictated by non- human actors such as computers (contrary to other elements of biometric identifi – cation). According to Sgt Patricia (14 years of service): I don ’t know how much I’ d trust that, because …Well, the only two things we go o ffnow are DNA and fingerprinting, and both of those things are individual to the person. I don’ t know how much I would trust technology if facial recognition …Because you ’re relying on a com- puter, aren ’t you …?. 1.1.2. Usefulness The use of such technologies not only is deemed ine ffective but also too expensive at a time of budget constraints. Our participants acknowledged there is a need to keep up with technological development; however, such investment comes at a high price. As mentioned by PC Kevin (12 years of service), based in a firearms unit: I think if you were to have an open chequebook, the possibilities for technology and policing, you ’d be like Robocop, you would have so much ability to do things. But it ’s just that cost, and sometimes you look at it and you roll your eyes. In particular, our participants highlighted the signi ficant financial costs associated with the potential implementation of LFR systems. As stated by Sgt Felicity (18 years of service) when questioned in relation to the use of this technology: ‘it sounds expensive, so that is a no ’. This was also illustrated by Fussey, Davies and Innes 29 in their study with both MPS and SWP, as the lack of human resources had impacts on the use of LFR. For instance, despite the alert of possible matches to di fferent units, the teams were not available to react to these matches. Indeed, ‘constraints imposed by the available human resources to service the ‘demand ’created by the LFR system moderates simplistic claims that such technologies can address austerity restrictions by replacing policing functions ’. 30 27Fussey, Davies and Innes (n 8).28ibid 12.29ibid.30ibid 16. 6 L. URQUHART ET AL. Overall, from the perspective of our participants, the possibility of incorporating FR capa- bilities in visual surveillance systems did not seem plausible, at least currently. As stated by PC Ross: Possibly a very long way in the future because it’s going to be public service in the public sector, it ’s not going to be well funded, and I don ’t think we ’re going to see that for a very long time. ( …) I don ’t know enough about facial recognition software until it ’s very common, it would be nice to see it, but it will come with its own issues … Our participants also acknowledged the availability of other technologies that are already used for identi fication purposes, in particular mobile fingerprint devices. Such technological devices already serve the purpose of recognising and identifying an individ- ual. 
In the words of Kevin ‘we always had fingerprint pads. If you said you ’re John Smith and you go, right, okay, scan your finger there, right, it ’s saying you ’re not John Smith, you ’re Dave Smith, John’ s brother, or whatever ’. The same is reiterated by PC Andrew (5 years of service) and PC John (9 years of service), respectively: I’ m sure you could talk to somebody who would tell you the facial recognition would be bril- liant, but if we ’ve any concerns about a person ’s identity, we just run their fingerprints, you get it that way as well. (Andrew) The same thing, we ’ve got the fingerprint scanner that can probably do the same job before you ’ve got facial recognition. You ’ve got to look at your fingerprintsfirst. (John) The sceptical views of our participants also highlighted di fferent operational and con- textual challenges faced with the use of technologies. Some police o fficers claimed LFR would not be the most relevant tool in the context in which they operate. In the words of PC Daniel, LFR ‘is only used in the most elite of elite ’, namely in megacities like London. When discussing this with PC Andrew, he also argued that the investment in such technology would not be particularly useful in the metropolitan area he works (a city with a population of approximately 200,000 inhabitants). Considering the size of this area and the type of criminal activity they often face, this participant stated they tend to know the members of the public they often interact with, reducing the value and relevance of technologies such as LFR: I don ’t know how useful it would be in [given city] in relation to, you know …even though it ’s a city, it ’s probably a big town, you know, you will know the majority of the bad men and women that you deal with, it ’s the same o ffenders kind of again, so I don ’t think you would need facial recognition. When you ’re talking about the Metropolitan Police, they’ re going to be dealing with terrorist incidents, you know, organised criminal gangs. Well, don ’t get me wrong, we have that up here, but they’ re probably on a lesser scale, so I think that they could probably use their money elsewhere more e ffectively. The same concern applies to rural settings and the debatable usefulness and relevance of LFR in areas where people normally know each other. Even if initially PC Matthew revealed optimism with the potential use of FR (in particular when considering large events), since he is based in a rural setting, he did not perceive the use of this technology as relevant in such context due to their interactions with the public occurring at ‘a smaller scale’ . In his words: obviously I think in terms of use maybe it wouldn ’t have quite as big an impact being kind of a rural setting because the people you encounter tend to be on a smaller scale, one on one maybe as opposed to large situations where people ’s images are being recorded. INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 7 Lastly, even if LFR could be considered and applied in such settings, the participants raised some practical concerns in relation to the quality of the internet connection they have available. As mentioned by Sgt Nelson: with our radios we struggle [with reception] in certain parts, especially if you go more rural (… ) So, if we ’re having a live feed of a camera, that backs up somewhere else, then I would suggest that you would struggle with the connection sometimes. However, even in urban areas, there are often still technical diffi culties limiting the use of AFR. 
As portrayed by Fussey, Davies and Innes (2021), even in central London radios would often fail inside the AFR-equipped vans, limiting the operators ’capacity to react and communicate with the street-based intervention team. 1.1.3. Intrusiveness The results of a national survey published by the Ada Lovelace Institute 31 revealed that the British public is mainly concerned with privacy infringements, surveillance, consent and unethical use of FR by the police. Intriguingly, such concerns were shared by the police o fficers interviewed. The need for safeguards prior to LFR implementation was raised as crucial, as it was considered by our participants that there are signi ficant poten- tial impacts on the legal and human rights of citizens. PC Matthew was particularly concerned with the legal challenges and privacy impli- cations of using LFR: ‘obviously from the public’ s point of view I’m sure there ’dbea concern about invasion of privacy and everything else ’. Sgt Simon (12 years of service) also raised some concerns in relation to data collection and the need to follow due process and clearly explain the purposes of LFR, so it is not deemed invasive by members of the public: There’ re processes in place, and those processes are there for good reason because if you could just dip in and out all of that, you could …Yes, it ’s quite scary I think ( …) keeping up with what’ s relevant, but it ’s got to be relevant and useable and useful to us, and not being invasive . Everything’s got to be done for a reason. We got to be able to justify what we ’ve done, I think. In particular, our participants voiced concerns on how LFR might be deployed and the need to clarify the purpose of using this technology. If used for recognition and identi fication purposes, the police offi cers argued that is not needed when dealing with law-abiding citizens. This is particularly relevant if we consider a scenario of body-worn cameras with LFR capabilities 32 as a future IFS system. Still in the words of Sgt Simon : We don’t need to know the identity of everybody because most of the people we deal with are law abiding. If you ’re walking down the street, I don’t need to know who everybody is . It might be bene ficial because you might find Joe Bloggs who’s wanted ( …) but actually is that right, …because that ’s what you do, you are surveilling everybody , and that’s not the spirit of the camera. 31Ada Lovelace Institute, Beyond Face Value: Public Attitudes to Facial Recognition Technology (Ada Lovelace, 2019) < https://www.adalovelaceinstitute.org/report/beyond-face-value-public-att itudes-to-facial-recognition-technology/ >. 32Miranda, (n 8). 8 L. URQUHART ET AL. PC Ian also reiterated this by highlighting that there is no need to record every inter- action with members of the public, by comparing their professional practice with the process of gathering research data: I think that [LFR] would be better not necessarily on the o fficer ’s personal body-worn camera because you ’re not going to walk about with it on 24 hours a day. ( …) You don ’t have to record every interaction. How many conversations we have today? There ’s many but you won ’t record them all. Would you walk about as a researcher and do what you do, recording everybody ’s behaviour and everybody’ s… ? You know what I mean? It ’s an interest you have but is it valuable or no? Or is it just certain times you want to do it, like now? The camera’s the same. 
This analogy is interesting, where in performing empirical research and data collec- tion, researchers need to abide by ethical approval procedures, safeguards around data collection and storage, protecting participants with consent and transparency of behav- iour. There are also particular concerns related to data management and how footage is collected and stored in a scenario of constant collection. According to our participants, footage would need to be collected continuously to enable LFR and concerns were raised in relation to its security and storage. According to Sgt Nelson (13 years of service): I guess that would have to be done then live, so if we ’re going to have facial recognition tech- nology, you ’d have to be recording the whole time to enable that. (…)We ’d have to have a system then where we ’re recording live, picked up the whole time, otherwise the person we want, that it recognises, will be gone by the time we ’ve even registered it. ( …) That brings in the problems about us having to record the whole time ( …) And that presents a whole new problem , doesn’t it, of security. This was deemed to be particularly problematic if body-worn cameras were to incor- porate LFR, as police o fficers agreed they should only record specifi c interactions with these devices. However, participants believe LFR will become common practice in the future and that specifi c guidance is needed on how to manage the data e ffectively. For instance, Insp Oliver was convinced that LFR will eventually be implemented and high- lighted the need of proportionate use: I would bet my pension on it [LFR]. ( …) I would like to think, anyway, that it would be used proportionately . I mean it would be used to investigate more major crime. ( …) Once again, it will be all about managing the data and how that ’s looked after. And whether it ’s dealt with proportionately at the time. These themes of proportionality and scale of crimes where LFR should be used for are unpacked in the legal sections below. Overall, when considering the implications of using LFR, our participants questioned how human rights might be compromised and, in particular, if the use of this technology for law enforcement purposes is in the public interest. Our participants questioned if the use of such tools would be proportionate, in particular, if the purpose of use is not clear. As illustrated by PC Daniel (5 years of service): It would be well in the future [the implementation of LFR]. I think it would be extremely useful. But for us, in an ideal world imagine your camera could recognise that person without you even going up and asking for their details and that person’ s wanted? Yes, that would be very handy. But I imagine there ’dbe a lot of questions from the public INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 9 because it would bescanning all the time .(… ) It would de finitely beinvasivebecause if members of the public knew that this was scanning your face and stuff like that I would be –even though I’ ve done nothing wrong –you ’ll probably be, your reaction is, oh the police are coming. I’ ll just walk away because I don ’t want to be scanned. Like, it ’s almost like a futuristic movie that you ’re watching where these robots can down scan people, you know what I mean? ( …)It would be great to scan everyone but morally I don ’t think that ’s right at all. Building on the issues previously explored, our participants were also concerned with how this technology might impact public con fidence (or lack of) in police work. 
If LFR is perceived negatively by the public, there is the need to clearly justify its purposes and uses to avoid damaging public’ s perception of the police. In the words of Sgt Simon: I think that’s quite a big step because then if you ’re going down that route [LFR], you ’re looking at having this on all the time almost. Not that has to be on, but it ’s always there in the background ( …) I think that would perhaps damage the perception of the police with the public because it ’s less di fficult …If it ’s always recording and they know it ’s always evi- dence gathering, it just looks like we ’re sleeping. I think if it happens it ’s going to be a long way o ff, and I think it would need to be really justi fied as to why. Really justified. I don ’t think I’ d be keen on that. As we see below in Parts II and III, changes prompted by the Bridges cases have led to initiatives from the College of Policing to create a framework for LFR use that addresses public concerns in the future. But this sits in contrast to shifts in the EU AIR, which seeks to prohibit LFR by default, and only use in specifi c circumstances (detailed in Part II). 1.2. Legal perspectives In this section, we contextualise the emerging legal framework around police use of facial surveillance in the UK by the police. Legal analysis involved examining primary sources in a doctrinal manner, namely legislation and case law pertaining to LFR. Thus, the Bridges cases, proposed EU AIR and commentary around these were considered. Our participants often point towards the lack of legal certainty surrounding the use of LFR, including around invasiveness. This section explores the Bridges High Court and Appeal Court test cases and unpacks how they will direct future police use of LFR. Bridges v South Wales Police: High Court (HC) One of the key resources in discussing legality of the use of LFR is the original High Court 33 and the Court of Appeal 34 Bridges v South Wales Policecases. We briefly consider the former before turning to the latter. The original case was decided on 4 September 2019, brought by civil liberties cam- paigner Edward Bridges. He filed the claim on the basis of two occasions where he was recorded by Automated Facial Recognition (AFR) Locate in Cardi ff, 35 an example of LFR systems. There were 50 trials run using the AFR Locatesystem by South Wales Police 33Bridges, R (On Application of) v The Chief Constable of South Wales Police [2019] EWHC 2341 (Admin) (04 September 2019) (hereinafter ‘HC’). 34R (on the application of Edward Bridges) v The Chief Constable of South Wales Police v The Secretary of State for the Home Department, The Information Commissioner, The Surveillance Camera Commissioner, The Police and Crime Commissioner for South Wales [2020] EWCA Civ 1058 , 2020 WL 04586697 hereinafer (‘CoA’). 35Para 34 CoA –they assume/accept he was, due to data retention etc, this cannot be proven. 10 L. URQUHART ET AL. (SWP), itself based on the NEC/North Gate Public Services NeoFaceWatch system. 36It is esti- mated it had scanned over 500,000 faces during these trials between 2017 and 2018, scan- ning up to 50 faces per second. 37They were overtly photographing members of the public in real time via CCTV (static or mobile on vans). They then analysed the images to detect faces and then extract facial features to create a biometric template consisting of numerical measurements. These were then checked against a watchlist database of between 400 and 800 faces. 
[38] Primarily based on a custody photograph database, this includes a variety of individuals, including those with warrants, escaped prisoners, suspects, vulnerable people and those needing protection like missing persons, in addition to those of interest for intelligence or who prompt concern by being at an event.[39]

To briefly consider how this system works, it looks for a match and presents a similarity score with a percentage likelihood of this being the individual in question. The threshold percentage value is set by the user (to manage false positives and negatives),[40] and if there was no match, the facial image and biometric template are deleted immediately and automatically[41] (although CCTV footage is retained for 31 days and then automatically deleted).[42] Once a match is identified, a police officer reviews this and decides on further action, e.g. intervention.[43] During trials the police would notify the public of the use of AFR Locate via social media channels, large posters in a 100 m radius of the cameras, on the SWP website and on postcard-sized notices given to the public.[44]

The facts of the original Bridges case have been discussed fully by numerous commentators.[45] For our purposes, we focus briefly on the human rights requirements, as they help to explain the balancing exercise necessary under Art 8(1) and (2) of the ECHR. European Court of Human Rights jurisprudence is relevant in this case as UK courts have to consider it under sections 2 and 6 of the UK Human Rights Act. Art 8(1) of the ECHR states 'everyone has a right to respect for their private and family life', but this is a qualified right under Art 8(2). It then states that public authorities should not interfere with the right except if it is in accordance with the law[46] and necessary in a democratic society (e.g. for one of the legitimate goals of national security, prevention of disorder and crime or public safety). Case law has established that necessity also includes an assessment of proportionality of the action to the legitimate aims being pursued.[47] If the Art 8(2) safeguards cannot be satisfied, then there will be a violation of Art 8(1) and a breach of the Convention right. In this case, the activities of SWP triggered Art 8(1) and infringed it for a number of reasons:

– The police use of AFR Locate was not 'expected and unsurprising', unlike the mere taking of photos in public.[48]
– The storage of data[49] by police was an interference, especially because it involved intrinsically private data like biometrics.
– The fact data was collected in public or only used for a short period of time did not legitimise the AFR Locate system either (i.e. Art 8 is not only relevant when data is retained for a long time).

The case focused on whether this infringement of Art 8(1) could be justified under Art 8(2). The requirement of Art 8(2) for any infringement to be 'in accordance with the law' is important, as the law should be accessible and foreseeable in order to guard against arbitrariness and too much discretion by the state.

Notes 36–47: [36] Para 10 CoA. [37] Para 16 CoA. [38] Para 13 CoA. [39] Para 13 CoA. [40] Para 7–10 CoA. [41] Para 93 CoA (something that can help with adequacy under the law). [42] Para 17–18 CoA. [43] Para 15 CoA. [44] Para 19 CoA. [45] Kotsoglou and Oswald (n 9). [46] Malone v UK 1984 7 EHRR 14. [47] Dudgeon v UK 1981 7525/76 s51–53; Z v Finland 1997 22009/93 s94.
50 On those points, in the original case, it was deemed in accordance with the lawbecause of common law powers police had around prevention and detection of crime. AFR Locate is subject to the Data Protection Act 2018 and UK Protection of Freedoms Act 2012 along- side also the statutory code of practice: the Surveillance Camera Code. 51 The cameras were not deemed physically intrusive methods of obtaining data (unlike seizures from homes) and watch lists were deemed legal under the UK Police and Criminal Evidence Act 1984. 52 It was deemed necessaryandproportionate in part because it was not perma- nent and focused on a specifi c geographic area. Also, the public were informed it was operating and it had specifi c targets it sought to identify. As we will now see, some of these points were contested in the Court of Appeal case, and we re flect on concerns around anti-discrimination and data protection laws below, as the CoA case has had less academic analysis thus far. Bridges v South Wales Police: Court of Appeal (COA) Following an appeal supported by Liberty, in Aug 2020 a judgment was handed down by the Court of Appeal which deemed the use of AFR Locate unlawful. Whilst the case covered 5 grounds, we focus on grounds 1, 3 and 5 on which the appeal succeeded. 53 Ground 1 found AFR Locate ‘was not in accordance with the law ’. Here, the CoA was concerned there was too much discretion for police powers namely around discretion for who is added to watchlists and locations where AFR Locateis deployed. 54 Thus‘policies do not suffi ciently set out the terms on which discretionary powers can be exercised by the police and for that reason do not have the necessary quality of law’. 55The CoA refl ects on three areas of law in the UK that could provide ‘accordance with the law ’. Whilst the 48Wood v Commissioner of Police for the Metropolis [2009] EWCA Civ 414–mere taking of a photo in public (by anyone incl. police) did not engage Art 8 unless aggravating/harassing/hounding circumstances for a subject. But in this case, still had to consider full purpose of collection (e.g. if collecting to retain/use) and explain that they will be used in this way. 49S v UK 2009 48 EHRR 50.50Re Gallagher [2019] 2 WLR 509 ‘it should not’confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself ’(para 17); SvUKpara 95. 51Para 80 HC.52R (Catt) v Association of Chief Police O fficers [2015] AC 1065.53Ground 2 failed as they deemed it to be proportionate see para 134 –144, if it had been in accordance with the law; ground 4, the challenge to the lawful basis for processing as required in s35 and quality of documentation required under s42 Data Protection Act 2018 was not considered, as the DPA 2018 was not in force at the time of the trials. 54Para 91 CoA.55Para 94 CoA. 12 L. URQUHART ET AL. DPA 2018 Part 3 is a key area of law around lawful and fair processing and necessity of sensitive processing, they conclude it is not sufficient by itself. 56Similarly, the Surveillance Camera Code provides scope to deal with aspects of the technology as it applies to LFR (as per s29 of the Protection of Freedoms Act 2012). 57 For example, whilst not currently pro- viding guidance on this, it could be a site for policy on criteria for watchlists and location of deployments, 58 and the Commissioner has already started making inroads into this on the issue of watchlists. 
59 Lastly, local police policies were questioned as not having suffi cient quality of law too, as they do not provide su fficient guidance of terms for placing individuals on watchlists or locations where AFR Locate was to be deployed. 60 For example, with locations, the police stated AFR Locate would be used at all events including sporting and music but the court was worried that this is not a set of criteria and that it is overly broad. 61 They raised similar concerns with the judgement of why someone is added to a list, particularly for category of ‘other persons where intelligence is required ’where the court says ‘In effect it could cover anyone who is of interest to the police. In our judgement, that leaves too broad a discretion vested in the individual police officer to decide who should go onto the watchlist .’ 62 Ground 3 found that the data protection impact assessment (DPIA) required under s64 of the UK Data Protection Act 2018 was inadequately done. One issue was the DPIA was written assuming Art 8 was not engaged, when in fact it was, and it was infringed. Sec- ondly, the DPIA did not deal with biometric data of members of the public who were cap- tured by AFR Locate but not present on watchlists. As the court states, the use of AFR Locate was not in ‘accordance with the law ’because of wide discretion around ‘the selec- tion of those on watchlists, especially the “persons where intelligence is required ”cat- egory, and the locations where AFR Locate may be deployed ’thus it breached Art 64 (3)(b) and (c) DPA 2018 as it ‘failed properly to assess the risks to the rights and freedoms of data subjects and failed to address the measures envisaged to address the risks arising from the de ficiencies we have found’ . Ground 5 found that the use of AFR Locate did not comply with their public sector equality duty (PSED) under s149(1) Equality Act 2010. 63 This is due to the lack of inves- tigation in the equality impact assessm ent by the police if AFR Locate enabled indirect discrimination (they did consider direct). 64 They had not investigated if there were risks from AFR Locate based on race or sex bias, which has impacts on BAME communities in particular. 65 The complaint was based on the lack of ful filling the duty to investigate 56Para 104 CoA.57Para 110 CoA.58Para 118 CoA.59Para 118 CoA and Surveillance Camera Commissioner (2020).60Para 121 –129 CoA.61Para 130 CoA.62Para 124 CoA.63The PSED is a response to BAME/police relations after the Stephen Lawrence Inquiry. The fact this was not done prop- erly by the police is even more concerning para 179 CoA. 64Para 164 and 167 CoA.65It states: A public authority must, in the exercise of its functions, have due regard to the need to –(a) eliminate discrimination, harassment, victimisation and any other conduct that is prohibited by or under this Act; (b) advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it; (c) foster good relations between persons who share a relevant protected characteristic and persons who do not share it. INFORMATION & COMMUNICATIONS TECHNOLOGY LAW 13 this, as opposed to allegations AFR Locate does perpetuate bias. 
AFR Locate uses NEC's NeoFace Watch,[67] and testimony from an NEC employee witness claims that it is trained with equal numbers of male and female face data and that the 'NeoFace Algorithm training data includes a wide spectrum of different ethnicities and has been collected from sources in regions of the world to ensure a comprehensive and representative mix.'[68] Despite these claims, the court was concerned that 'SWP have never sought to satisfy themselves, either directly or by way of independent verification, that the software program in this case does not have an unacceptable bias on grounds of race or sex.'[69] The presence of a human failsafe (i.e. where two humans have to consider whether they want to respond to a positive match) was not deemed sufficient to discharge their duty.[70] This raises questions about public procurement and the need for public sector bodies to scrutinise algorithms in order to address their equality obligations.

Footnotes:
[67] Para 195 CoA.
[68] Para 196 CoA.
[69] Para 199 CoA.
[70] Para 184–185 CoA.

Having unpacked the current law and officer perspectives through our examples, we now turn our attention to future optimism from officers around LFR, but situate this within the emerging legal landscape. This highlights how face-based surveillance will be subject to growing policy measures that, depending on jurisdiction, either seek to ban further use (as we see at the EU level with the AI Act) or to mitigate risks and legitimise roll out through policy (at the UK level with College of Policing guidance).

Part II – future uses of intelligent facial surveillance

2.1. Frontline officer perspectives

In this section, we consider three areas of future use discussed by our officers, namely how LFR can be integrated with other policing technologies; how it can be used to deal with policing large crowds; and their concerns about gradual integration shaping public acceptance.

Our participants' position of optimism and confidence was projected into the future. We observed how they see IFS tools impacting their policing practice. They showed confidence in future uses of LFR when it is linked and integrated with other technologies such as CCTV or BWCs. Considering the importance of IFS in their professional practice, participants expect LFR to play a helpful role in future policing. As stated by both Richard (a frontline response officer with 9 years of service) and PC Katherine (10 years of service), most of their work relies on CCTV footage to recognise and identify suspects during investigations. With the improvement of its technical capabilities, a system capable of pointing them in a direction could be extremely relevant in the future (Miranda, n 5):

If there was facial recognition software, (…) that would be a huge benefit to us. If we could run pictures through it and it comes up with matches, especially in more major investigations, that would be very beneficial.
If you are trawling through hours and hours of CCTV footage looking for a suspect, if a computer can scan it in minutes for you, it will save a lot of time and make positive identifications … I don't know everybody here (…) Just now we get the CCTV and whatever and people, you put them up in the muster room for people to have a look and see if they can identify them and if you had a computer system that can do that for you, that would be ideal. It would cut your time in half doing your job if people … I just don't know if the quality is up to that. I don't know. I have no idea but if it was it would be ideal. It would be half the battle for us.

The same applies to the potential integration of LFR in BWC, as this would be particularly useful with the footage collected by mobile cameras that can capture facial features with more quality and definition than CCTV systems. In the words of PC Steph (4 years of service) and PC Carol (2 years of service):

If that technology got good enough that we could really be relying on it then, yes, absolutely because BWC is going to capture somebody's face a lot better than any CCTV system could. So, yes, as and when that technology really starts to push forward then, yes, I think they would be good. (PC Steph)

Well, CCTV you can't really get here. With the BWC, you're right up close, getting their faces or directly. Because sometimes, with CCTV, it's a bit distorted, their faces, you can't see. So, it'd be good. The body worns are an excellent tool for going in and getting the faces because it's so clear. (PC Carol)

Nonetheless, such confident and optimistic positions would always be framed around the premise of a future vision where the technology is working effectively and available to help police forces. PC Oscar (14 years of service) illustrated this future vision by saying:

If that helps us to do our job, then yes, I'm all up for that and I think that is to come. I don't think obviously that's something that is readily available to us just yet, and I'm pretty sure that's being looked at and designed and worked on, evolving through other forces potentially. But if Review came back to us a couple of years down the line and said, look, we've got this package now. We have the software upgrade which will allow you to start doing identification via facial recognition from our database, from the cloud, is that something that interests you? I would say, yes please, (…) If that's my only avenue to identify that suspect, then yes, I'm all up for that. It's just another piece of evolution and development which we will obviously match towards, strive towards.

This is particularly relevant when imagining how they will deal with large events (i.e. sporting events) and public order incidents in busier, urban areas. As illustrated by PC Matthew (4 years of service) and PC Ian (16 years of service), respectively:

The more technology we have to assist us in our role the better really. I suppose, yes, on a practical level, certainly with maybe large public or situations where they'd be going to like a football match or something, there'd be a definite need for that technology. (PC Matthew)

If you are going to an event or a large crowd [unclear] that was for that and you had cameras that had that technology on it, well then yes, I can see it. (…) When a lot of people are funnelling through a small gap like a football turnstile … We've got violent football fans.
(…) If you had facial recognition at turnstiles at sporting events like football, then these people can be stopped from coming in. (PC Ian)

The participants discussed how the adoption of an emerging technology such as LFR is subject to a process of either acceptance or resistance from both members of the public and police organisations. Several examples were used to illustrate how technologies were accepted in the past and are now used on a daily basis (such as automatic number plate recognition) and how LFR could be just a 'step further' in order to 'read the picture, the image, of the person' in the future (Larry, 12 years of service). Nonetheless, officers agreed that they will face backlash from the public if LFR is not deemed to work effectively. For instance, PC Mark (27 years of service, firearms unit) believed that:

We will be using a lot in the future. I wouldn't say debug it and get one that works but then it is just one of those things that will be used to fight crime. I would imagine lots of people would moan about it to start with, but I would imagine once we get a system that works properly we will end up using it.

Now we turn to the College of Policing guidance for UK police forces on the future of LFR, which shares some of this optimism but, as we argue, raises some new concerns.

2.2. College of policing guidance

Recent College of Policing (CoP) documentation from Spring 2021[72] outlines a proposed national approach to LFR for law enforcement in the UK. It is extensive but provides useful guidance which aligns with themes discussed in this paper (particularly around watchlists, the public sector equality duty and intrusiveness). We treat each in turn below. Even if the suggestions in the guidance change, it nevertheless indicates the optimism around future LFR use at a policy level in the UK.

A. Watchlists

In attending to concerns around vulnerable individuals appearing on watchlists, the CoP raise disability and age as two key attributes of concern. This is not due to vulnerability per se, but instead reflects concerns about how these attributes impact the accuracy and effectiveness of LFR systems. For example, with disability the issue is whether subjects have suffered a facial injury or trauma, undergone facial surgery, or have features which cannot be recognised. Similarly, with age, the guidance raises concerns around young offenders under 18 or 13, because their faces change, impacting the reliability of LFR.[73]

As both Bridges cases highlighted, how images come to be included on watchlists was a concern of the courts. CoP guidance in Part 2.3 details the types of images that can be added. This is a broad list covering police-originated and non-police-originated images; the latter is relevant particularly where the force does not have suitable images in house. We provide the abridged guidance below.

Police originated images:[74]
- Custody images;
- Individuals wanted by the courts;
- Those suspected of committing an offence or with grounds to suspect;
- Those subject to bail conditions, court order or other restrictions;
- Missing persons at risk of harm;
- Those presenting harm to themselves or others;
- Victim, witness or associates;
- Other: this depends on an assessment examining the purposes underpinning why police hold these images, processing limitations, the importance of inclusion and the proportionality of using them.

Non-police originated images:[75]
The guidance does not stipulate exhaustive sources of data, but these can include:
- law enforcement partners,
- public bodies,
- private companies,
- individuals.
The types are the same as those listed above, e.g. those wanted by courts, suspects, etc. Criteria for inclusion in the watchlist include:
- only with the approval of the authorising officer;
- assessment of the purposes underpinning why police hold these images;
- processing limitations;
- importance of inclusion; and
- proportionality of using them.

Footnotes:
[72] College of Policing, Police Use of Live Facial Recognition (CoP 2021), hereinafter 'CoP'.
[73] CoP Part 2.2.3.
[74] CoP Part 2.3.1.
[75] Images not taken under the direction of the police.
This list is problematic given concerns around the role of social media intelligence (SOCMINT) in policing and the legalities of sourcing content for investigations from these channels.[76] It also highlights the risks of involving individuals in lateral surveillance practices, as we saw in police investigations following the riots in Manchester and London in 2011 (e.g. 'Catch a Looter').[77] Whilst providing criteria for being put on a watchlist, and thus satisfying the court's concerns about the nature of criteria, they can still be questioned on the merits of their breadth. As we will discuss below, there is a contrast with the EU position in the AIR, which lists tighter applications for LFR use.

B. Public sector equality duty

The proposed guidance also suggests how police can address their PSED[78] and how forces can take steps to ensure the accuracy and performance of deployed LFR. It suggests a holistic approach incorporating assessment of the software, the cameras, the LFR system and authorising officer procedures too.[79] The CoP suggest steps including:
- conducting and reviewing their equality impact assessment, or a similar process;
- ensuring they are satisfied that reasonable steps have been taken to mitigate bias risks, particularly for protected characteristics, e.g. sex, race, religion, belief;
- ensuring ongoing review of the use, performance and utility of PSED mitigation measures;
- providing oversight of vendor claims about LFR, including testing themselves.[80]

As the Bridges CoA case flagged, the interplay between private vendors and police meant that commercial confidentiality and a lack of willingness from vendors to provide information for testing prevented police from discharging their PSED.[81]

Footnotes:
[76] Lilian Edwards and Lachlan Urquhart, 'Privacy in Public Spaces: What Expectations of Privacy Do We Have in Social Media Intelligence?' (2016) 24(3) International Journal of Law and Information Technology 279–310; David Omand, Jamie Bartlett and Carl Miller, 'Introducing Social Media Intelligence' (2012) 27(1) Intelligence and National Security Review; Bert Jaap Koops, Jaap Henk Hoepman and Ronald Leenes, 'Open Source Intelligence and Privacy by Design' (2013) 29 Computer Law and Security Review 676; Daniel Trottier, 'Open Source Intelligence, Social Media and Law Enforcement: Visions, Constraints and Critiques' (2015) 18(4–5) European Journal of Cultural Studies 542.
[77] Elisa Pieri, 'Emergent Policing Practice: Operation Shop a Looter and Urban Space Securitisation in the Aftermath of the Manchester 2011 Riots' (2014) 12 Surveillance and Society 1, 38.
[78] s149 Equality Act 2010.
[79] Part 1.5.5 CoP.
[80] Part 1.5 CoP.
[81] Para 199 CoA.
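To make the 'testing themselves' step above more concrete, the sketch below shows one way a force or an independent reviewer might probe vendor assurances empirically: computing false match and false non-match rates separately for each demographic group in a labelled evaluation set. This is purely illustrative; the class and field names are ours, drawn neither from the CoP guidance, the judgment, nor any vendor documentation, and a real evaluation would need a far more carefully constructed dataset and protocol.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trial:
    """One evaluation comparison: a probe image scored against a watchlist entry."""
    group: str          # demographic label from the evaluation set (illustrative)
    same_person: bool   # ground truth: probe and watchlist entry are the same individual
    score: float        # similarity score returned by the face-matching system

def error_rates_by_group(trials, threshold):
    """Return per-group false match rate (FMR) and false non-match rate (FNMR)."""
    stats = defaultdict(lambda: {"impostor": 0, "false_match": 0,
                                 "genuine": 0, "false_non_match": 0})
    for t in trials:
        s = stats[t.group]
        if t.same_person:
            s["genuine"] += 1
            if t.score < threshold:
                s["false_non_match"] += 1
        else:
            s["impostor"] += 1
            if t.score >= threshold:
                s["false_match"] += 1
    report = {}
    for group, s in stats.items():
        report[group] = {
            "FMR": s["false_match"] / s["impostor"] if s["impostor"] else None,
            "FNMR": s["false_non_match"] / s["genuine"] if s["genuine"] else None,
        }
    return report

if __name__ == "__main__":
    # Tiny illustrative run at a hypothetical operational threshold.
    trials = [
        Trial("group_a", same_person=False, score=0.91),
        Trial("group_a", same_person=True, score=0.88),
        Trial("group_b", same_person=False, score=0.62),
        Trial("group_b", same_person=True, score=0.79),
    ]
    print(error_rates_by_group(trials, threshold=0.85))
```

Marked disparities between groups at the operational threshold would be exactly the kind of evidence the CoA said SWP never sought before relying on the software.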
Thus, the desire for oversight is an important commitment, but in practice the CoP do not provide steps for how to materialise this approach in the UK. As discussed below in section 2.3, legal obligations in the AIR target this oversight and seek to change the interactions between the public and private sector in the realm of law enforcement.

C. Procedure and discretion

There is a high level of discretion for each police force, where Chief Officers (CO) can create their own policy around LFR.[82] Nevertheless, the CoP establishes harmonised commitments to guide policy development at a national level, and we select some below as they relate to our data and analysis.[83]

Firstly, forces should only use LFR if less intrusive methods would not enable the same objectives. Our participants raised a similar point, arguing fingerprinting pads could help to identify individuals in the street just as effectively. They argue this is a less intrusive method that already exists in policing practice, which makes it lower cost and does not require additional training (unlike LFR). Touching on this point, the Bridges CoA case reflected on the differences between fingerprinting and LFR, stating that the latter involves procurement without the use of force, cooperation or knowledge of subjects, and the ability to do so on a mass scale.[84] This suggestion also aligns with the AIR provision which puts the emphasis on police to show that the use of LFR is so important that, if it were not used, harms would occur.

Secondly, they raise proactive engagement with the public and community to foster public trust and confidence as key. This again aligns with our empirical perspectives from officers around fears due to negative public perceptions of LFR. It also raises the issue of the PSED not being conducted properly, which was a key issue in the Bridges case. The PSED is an obligation brought about by poor police relations with BAME communities after the Stephen Lawrence Inquiry, making the fact that it was not done properly with a technology that could be biased against race even more problematic.[85]

Thirdly, they require that LFR use is in accordance with the law and used overtly in a 'responsible, transparent, fair and ethical way'. Whilst a laudable aim, the challenges raised in the Bridges cases showed the difficulty of ensuring use is 'in accordance with law', given the need for a quality of law that appropriately balances human rights with law enforcement goals. Also, the mention of overt usage is important, as covert uses may trigger different investigatory power rules,[86] as the CoP recognise. LFR has scope to become part of directed or even intrusive covert surveillance operations, so forces need to continue to make it clear to the public that LFR is being used in an overt way.[87]

Footnotes:
[82] Part 1.1.5 CoP.
[83] See Parts 1.6.1 A)–P) CoP.
[84] CoA para 23: 'Facial biometrics bear some similarity to fingerprints because both can be captured without the need for any form of intimate sampling and both concern a part of the body that is generally visible to the public. A significant difference, however, is that AFR technology enables facial biometrics to be procured without requiring the co-operation or knowledge of the subject or the use of force, and can be obtained on a mass scale.'
[85] Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Wiley, 2019).
[86] Provisions of the Regulation of Investigatory Powers Act 2000 not repealed by the IP Act, particularly Part II.
[87] Indeed, section 3.1 documents guidance on what kinds of steps should be taken in relation to the date, time, duration and location of live AFR usage including, but not limited to, locations such as hospitals, places of worship, polling stations, schools or demonstrations. Signage should also be accessible for children.
SWP used a number of mechanisms to do this, but if LFR is used in less transparent ways (e.g. less obvious cameras) then the quality of law, namely RIPA, could come into question too. Given existing concerns around transparency, community relations and reputational damage with overt LFR, the risks from covert LFR use seem significant for future deployments.

Fourthly, at a mundane level, the CoP also highlight the operational policy documents that need to be drafted for the use of LFR by a police force. These are interesting as they show how high-level ethical and legal concerns boil down to a series of documents being drafted to legitimise the use of LFR.[88] This includes:
- a flowchart for decision making on LFR use;
- a standardised operating procedure for LFR, including criteria for watchlists and imagery sources, guidance on what to do when alerts are created, location and camera placement, retention periods, and ensuring use is overt 'including considerations of prior notification and signage';
- data protection, equality and community impact assessments;
- training materials;
- a policy document covering processing of sensitive data.

Operationally, this checklist-type approach shows lessons learned from Bridges about the areas of concern raised there and the procedural steps that need to be taken.

2.3. Future legal perspectives: the emergence of the EU proposed AI regulation

In this penultimate section, we consider the proposed EU AI Regulation (AIR)[89] and how it seeks to shape the future of IFS in Europe. The AIR seeks to establish new risk-based, tiered rules around the use of AI, to create an ecosystem of trust. It largely splits the rules according to whether AI is prohibited, high risk or minimal risk. It takes a stance on prohibiting AI for live biometric identification in public spaces by law enforcement (i.e. LFR) and treats law enforcement use of emotion identification (i.e. EAI) as high risk. As mentioned in the introduction, we are interested in the role EAI plays in policing and how it might emerge as a near-future IFS tool. The AIR provides a useful roadmap of the priority areas and safeguards needed around integrating AI into policing practice in the future. Whilst it may be non-binding for UK policing due to Brexit,[90] UK firms (AI providers, users, distributors and others) seeking access to the EU market will need to consider the impacts of these rules. Moreover, it will remain useful policy guidance for best practice around the use of IFS now and in the future, and, in order to gain public trust, the UK may seek to align with the strategies documented here.

Footnotes:
[88] Part 1.7 CoP.
[89] European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final (2021) <https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN> hereinafter 'AIR'.
[90] Although see Art 2 on scope and impact on third countries seeking access to the EU market.
2.3.1. LFR as prohibited AI

Art 5 documents the types of AI which are prohibited from being placed on the EU market, put into service or used, and includes 'the use of "real-time" remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement'.[91] The law takes this provision seriously, with large fines for breaches of 6% of the previous year's annual turnover or up to €30m. Whilst this initially seems to be a ban on the types of LFR discussed in this paper and used by SWP and the Met, the provision is heavily caveated in the subsequent sections. It discusses elements around the purposes of the system, levels of proportionality, impacts on fundamental rights, whether use is urgently needed and whether EU member state (MS) domestic law permits it, and it tries to place oversight through authorisation from a judge or authority for usage.[92]

To explore this further, in terms of purposes for use, it is prohibited unless strictly necessary for purposes such as targeted searching for victims of crime and missing children;[93] preventing threats to life or physical safety, such as terrorist attacks;[94] and finding suspects of serious crimes punishable by at least a three-year custodial sentence and covered by the European Arrest Warrant, e.g. terrorism, trafficking in drugs or human beings, or murder.[95] This latter point is interesting insofar as it brings in a severity of crime that goes beyond just deploying LFR at an arena or on a high street in the hope of catching shoplifting or antisocial behaviour suspects from a widely framed watchlist (as was the case in Bridges). This removes 'fishing expedition' type uses and reiterates that this is a tool to be used for serious crimes.

In deciding to use LFR, it is important to consider the circumstances surrounding use, particularly issues of the 'seriousness, probability and scale of harm' that could result. Interestingly, it frames these in the negative, i.e. in the absence of the use of the system, what harms would occur? There is also a need to consider the fundamental rights and freedoms of all subjects implicated by it, i.e. not just suspects but passers-by.[96] This shifts the narrative to justifying LFR on its merits and appears to set expectations about the effectiveness of the technology. As we see in the Bridges case on real-world use, it often has mixed results in realising expectations around actual arrests in deployment.[97] Hence this point may make it hard to justify the use of LFR, particularly with concerns around false positives.[98]

The rules also state the need for 'necessary and proportionate safeguards and conditions in relation to use, in particular as regards the temporal, geographical and personal limitation'.[99] This is interesting, as it was a key point of contention in the Bridges case around the appropriateness of safeguards and criteria for LFR operating in particular locations. As we saw above, this is picked up in the College of Policing guidance.

Footnotes:
[91] Art 5(1)(d) AIR.
[92] Unless this is not possible.
[93] Art 5(1)(d)(i) AIR.
[94] Art 5(1)(d)(ii) AIR.
[95] Art 5(1)(d)(iii) AIR: the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.
[96] Art 5(2)(a) and (b) AIR.
[97] CoA para 26–30.
[98] Law Society of England and Wales, Algorithms in the Criminal Justice System (Law Society, 2019) 38.
[99] Art 5(2) AIR.
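To summarise how these layered caveats interact, the following schematic sketch (our own reading of the exception structure, not a statement of the legal test and not part of the AIR text) encodes the conditions as a simple checklist: a permitted purpose, strict necessity, assessment of harms and rights, temporal/geographical/personal safeguards, an enabling Member State law, and authorisation that can be deferred where use is urgent. All field names are illustrative assumptions.

```python
from dataclasses import dataclass

# Shorthand labels for the purposes in Art 5(1)(d)(i)-(iii); the labels are ours.
PERMITTED_PURPOSES = {
    "targeted_search_victims_or_missing_children",
    "prevent_imminent_threat_to_life_or_terrorism",
    "locate_suspect_of_serious_eaw_offence",  # punishable by at least 3 years
}

@dataclass
class LfrDeploymentRequest:
    purpose: str
    strictly_necessary: bool           # no less intrusive means would achieve the objective
    harm_if_not_used_assessed: bool    # seriousness, probability and scale of harm absent use
    rights_impact_assessed: bool       # impact on all affected persons, incl. passers-by
    temporal_geographic_personal_limits: bool
    member_state_law_permits: bool     # national law must allow this use
    prior_authorisation: bool          # judge or independent administrative authority
    urgent: bool                       # authorisation may be requested after the fact if urgent

def real_time_lfr_permitted(req: LfrDeploymentRequest) -> bool:
    """Schematic check of the exception structure for 'real-time' remote biometric
    identification in public spaces for law enforcement (illustrative only)."""
    if req.purpose not in PERMITTED_PURPOSES:
        return False
    if not (req.strictly_necessary and req.harm_if_not_used_assessed
            and req.rights_impact_assessed
            and req.temporal_geographic_personal_limits
            and req.member_state_law_permits):
        return False
    # The authorisation requirement can be deferred in urgent cases, which weakens
    # the oversight the authorisation step is meant to provide.
    return req.prior_authorisation or req.urgent
```

Even in this simplified form, the final line makes visible how the urgency carve-out softens the authorisation requirement, a point returned to next.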
The inclusion of the need for authorisation by a judicial or independent administrative authority would be welcome, particularly as the authority would need to be satisfied that use is necessary and proportionate in line with the conditions above. It is problematic that authorisation can be bypassed if use is really urgent, or can be requested after the fact when needed.[100] This hollows out the benefits of any real oversight if it can ultimately be ignored. In any case, the EU law pushes some responsibility for handling this back to Member States (MS), where requests have to be in accordance with national laws. Whilst no longer an EU member, the emerging UK position in the College of Policing guidance states the authorising officer can delegate authorisation for deployment down the chain of command when urgent.[101] Similarly, despite this law being an EU Regulation, which seeks to harmonise practices across all MS, the current proposal does provide a caveat allowing MS to pass national laws that permit the use of real-time biometric identification in public spaces for law enforcement.[102] Thus, ultimately it enables live use of AFR, if MS create laws to do this. Whilst the UK is no longer an MS, the Bridges case does highlight the kinds of issues that would need to be addressed in any law governing the use of LFR in the future.

2.3.2. Emotional AI as high-risk AI

We now turn to the rules around EAI as High-Risk AI Systems (HRAIS). The list of high-risk AI systems in Annex III of the AIR (n 89) includes law enforcement use of emotion state sensing. This covers two examples, namely:
- '6. Law Enforcement': (b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
- '7. Migration, asylum and border control management': (a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person.[103]

As LFR can also be integrated with these systems, the AIR deems data-intensive predictive policing[104] and law enforcement profiling[105] to be HRAIS as well, subject to the same rules.[106] This will challenge officer optimism about integration with existing technologies.

Footnotes:
[100] Art 5(3) AIR.
[101] CoP para 1.6.1, i.e. from superintendent down but not lower than inspector.
[102] As per the conditions in Art 5(1)(d), (2) and (3) AIR.
[103] This is included in Annex III AIR (see n 89 above) and is an updateable list of applications.
[104] '(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.'
[105] '(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups; (f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences.'
[106] Of less direct relevance, but still pertaining to the face, it includes: '(c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3)'.
The relevance of being deemed an HRAIS is that it introduces a raft of new requirements around the design of the system. These are covered in Title III and, at a high level, include putting in place mechanisms like automated logs, technical documentation, human oversight systems and data quality governance to increase transparency around how a system works and to audit when things go wrong. In more detail, this includes:
- Art 9 – requires the establishment and maintenance of a risk management system to identify risks when HRAIS are used as intended and under reasonably foreseeable misuse, including implementing risk mitigation and testing regimes;
- Art 10 – focuses on data and data governance, establishing quality criteria around training, validation and testing data;
- Art 11 – technical documentation drafted before the system goes on the market;
- Art 12 – ensuring record keeping and automated logs;
- Art 13 – ensuring 'their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately';
- Art 14 – human oversight via 'appropriate human-machine interface tools'.

It also puts obligations on different actors in the AI system supply chain, from providers and users through to importers and distributors, to conduct conformity assessments and ensure their systems comply with the requirements above. As users of HRAIS, which the police would be if using emotion sensing systems provided by AI providers, as per Art 29 they would need to take steps including:
- ensuring input data is relevant to the purpose;[107]
- monitoring operation of the high-risk AI according to the instructions of use;[108]
- keeping logs, if the logs are under their control;[109]
- conducting a DPIA as per Art 35, using the transparency information they are to be provided under Art 13.

This approach seeks to increase responsibility at each point in the life cycle and supply chain of HRAIS and not allow the police, as users, to pass responsibility to the AI provider (e.g. NEC for SWP's use of AFR Locate). When we consider the issues faced in the Bridges case around police procurement of LFR as users from the provider NEC, there were questions about adequate human oversight. Art 17 AIR states the requirements for quality management systems that AI providers need to put in place. This sits alongside a range of other obligations in Arts 16–23 around HRAIS, such as ensuring the system has undergone a conformity assessment. Cumulatively, these measures for HRAIS show how organisations providing AI tools to the police would need to provide more transparency mechanisms around how their system functions before being able to deploy it.

Footnotes:
[107] Art 29(3) AIR.
[108] Art 29(4) AIR.
[109] Art 29(5) AIR.
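As an illustration of the kind of record keeping that Arts 12 and 29 point towards for a force acting as the user of an HRAIS, the sketch below shows a possible automatically generated log entry for a single LFR alert. The schema is hypothetical: neither the AIR nor the CoP guidance prescribes these fields, and a real deployment would need to align retention and access with data protection law.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LfrAlertLogEntry:
    """One automatically generated record for a live facial recognition alert.
    Field names are illustrative only; no regulation prescribes this schema."""
    deployment_id: str                 # links back to the authorisation and the DPIA
    camera_location: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    watchlist_category: str = ""       # e.g. "wanted_by_courts", echoing CoP Part 2.3
    match_score: float = 0.0
    threshold_in_force: float = 0.0
    reviewed_by: list[str] = field(default_factory=list)  # the human failsafe reviewers
    action_taken: str = "none"         # e.g. "no_action", "stop_and_verify"

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: a log entry retained for audit, supporting the Art 29 duties to monitor
# operation and keep logs where they are under the user's control.
entry = LfrAlertLogEntry(
    deployment_id="2021-06-city-centre-01",
    camera_location="stadium north entrance",
    watchlist_category="wanted_by_courts",
    match_score=0.91,
    threshold_in_force=0.85,
    reviewed_by=["operator_1", "supervisor_2"],
    action_taken="stop_and_verify",
)
print(entry.to_json())
```

Linking each entry back to the authorising deployment and the DPIA is one way such logs could support after-the-fact audit and the human oversight that Arts 13 and 14 envisage.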
With face-based EAI, debate remains around the baseline accuracy of emotion detection technologies, the underlying models of universal emotion, and the cross-cultural dimensions of facial expression.[110] This will include determining what appropriate metrics of transparency will be needed for police use of EAI, if these systems are ever to be deployed operationally. The UK experience with the Bridges cases and emerging EU policy on regulating AI showcase the types of legal requirements that police need to attend to for future uses of AI and IFS. These are important barriers and safeguards to attend to, particularly in light of the optimism shared in Section 2.1 by police officers, as they envision future uses of LFR integrated with other IFS tools. It will be harder for the state to bypass legal controls by outsourcing surveillance roles to private actors, as can occur now,[111] because this legislation targets the whole supply chain.

To conclude Part II: whilst the EU direction of travel suggests greater regulation of LFR, with prohibition as the default, in the UK the College of Policing guidance appears more permissive. Despite the guidance seeking to harmonise policing practice across the UK as a national document to ensure consistency, providing public reassurance and guidance for forces, there remain concerns that align with the current scepticism indicated by our officers. In Part III we provide reflections on police use of IFS in the future.

Footnotes:
[110] McStay and Urquhart (n 16).
[111] Kirstie Ball and Laureen Snider, The Surveillance-Industrial Complex: A Political Economy of Surveillance (Springer, 2013).

Part III – concluding reflections on future IFS

In this final section, we formulate lessons from our empirical and legal analysis, framing these as considerations for future IFS and drawing together our thoughts on LFR and EAI. Before doing this, we reiterate the seven main contributions of this paper. Firstly, it provides empirical insights from frontline police officers around the prospective uses of LFR in policing practice. This is unique as it engages at the operational, as opposed to the strategic, level and addresses the gap in qualitative studies with practitioners on this topic. Secondly, it explores the present-day scepticism about LFR presented by police officers and contextualises this within current debates around LFR. Thirdly, it explores future optimism about the use of LFR by police officers and how the legal landscape will shape that in practice. To do this, we present the emerging legal landscape around IFS, both currently with the Bridges case in the UK and in the future with the new EU AIR. Fourthly, we anticipate wider risks for future IFS like EAI based on current understandings of LFR. Fifthly, it provides an interdisciplinary approach fusing legal, technical and criminological perspectives to assess the practice and legal requirements around IFS technologies. Sixthly, we argue that the future of IFS will be increasingly regulated, impacting policing practice and challenging optimism about future uses. Lastly, we have formulated a series of lessons to guide future discussions around IFS, which we now provide below.

Lesson 1. We urge caution with institutional optimism around future IFS, especially due to the harms arising from automating suspicion through biometric technologies. There is a risk of further entrenching bio-deterministic framings of criminality based on facial data, and this is a harmful precedent to set.
Emotional AI in policing poses a particular risk here for future IFS, given it could build on the history of physiognomy, phrenology and Lombrosian rhetoric around reading bodies and criminal intent.

Lesson 2. Innovative uses and the resulting legal test cases can establish guidelines for IFS after deployment. This occurred with Bridges and provided police with a roadmap. But from a legal perspective, and as an entity of the state, higher standards are needed for law enforcement use of experimental IFS technologies before they are deployed. The risks for equality, privacy, discrimination and human rights are too significant to rely only on responsive governance. Instead, guidance for police needs to be more precautionary and prescriptive in guiding behaviour prior to harm occurring. This will also be key for ensuring that a suitable quality of law exists to protect citizens, as required by human rights law.

Lesson 3. There is significant work to be done in navigating how to build organisational and technical safeguards into emerging IFS. For example, more transparency is needed around the organisational aspects of specification formation by law enforcement agencies. What should police learn from experiences with training data and oversight with LFR for procurement processes? Beyond this, how can regulation be baked into the system to add another layer of protection for citizens through technical architecture? This is an area needing increased research from technologists in conjunction with policymakers, especially as the law increasingly pushes in this direction, e.g. with the AIR stipulating design requirements for data quality for HRAIS. However, requiring safeguards is one thing; implementing and operationalising them is another.

Lesson 4. IFS, such as EAI, may be integrated with other existing surveillance technologies and act as a layer that augments their functionality. We already see this emerging, as limitations in facial action coding (FACS) mean EAI systems increasingly need supplementary information from on-body and environmental sensing. This is in order to construct a clearer picture of the subject's emotional state through the use of location, heart rate, temperature and accelerometer data that provides 'context'. Another example could be within IFS systems where, with LFR, the addition of EAI could provide further intent-based information to complement identification-based information. So, police could understand not just who an individual is but how they are feeling in the moment. This heightens the risks to citizens' rights and needs a clear appreciation of how integration might pose further harms.

Lesson 5. Less intrusive techniques should be used by law enforcement where they can serve the same purpose as IFS, e.g. fingerprinting for identification, which provides the same outcome albeit it cannot be used remotely. This could also help with resource management in law enforcement agencies and address the allure of technological solutionism by avoiding investment in unreliable new IFS.

Lesson 6. It is important to develop participatory approaches to involve operational users from law enforcement in discussions of deploying new technologies.
Their situated experience and sceptical narratives around the value and practical challenges of using new policing technologies can counteract unfounded optimism at the strategic level. The public has to be part of discussions around the development and deployment of IFS too. This is especially so for marginalised publics who face additional equality and fundamental rights risks due to protected characteristics such as gender, race and vulnerability being implicated in IFS, e.g. through disability or age. Further, EAI can create new categories of suspicion and risk, for example through facial micro-expressions and inferences about whether someone is angry or sad.

Lesson 7. Effectiveness of IFS remains a difficult topic. On the one hand, if systems are more accurate, then there are privacy and fundamental rights implications because they enable more invasive surveillance practices. But similarly, if they are less accurate, there are risks of false positives that disproportionately impact those who are more vulnerable. Future IFS, like EAI, have further effectiveness issues due to the lack of accuracy of models of baseline emotions. In a law enforcement context, this could impact processes of evidence gathering and admissibility in court, further questioning the utility of such technologies in the long term.

Lesson 8. Law enforcement needs to find mechanisms to exert control over private vendors of IFS to increase accountability within this public/private AI supply chain. A key area is oversight of training data processes. IFS vendors may have developed systems in different regulatory, cultural and ethical contexts, but these could cause harms when implemented in a different geographic domain. Thus, despite power asymmetries between vendors and law enforcement, there needs to be verification that the system they are using adheres to fundamental rights commitments. It is key that, given the risks from IFS, any balance of the interests of citizens with the policing goals of prevention, detection and prosecution of crime is not skewed to the latter. The EU AIR is a positive force for improving auditability around the AI supply chain. However, in the UK, the impact this might have is unclear due to Brexit, and the UK College of Policing guidance indicates a divergence for LFR, which may be mirrored for future IFS too.

Lesson 9. Provenance of images in training data for IFS is a key area of concern. As law enforcement is an arm of the state, it should be held to higher standards around how it sources images for IFS. Emerging guidance on LFR, e.g. from the UK College of Policing, indicates social media data could be used for this purpose legitimately. Firms are already marketing access to face datasets for LFR in other parts of the world, e.g. the scandal around Clearview in policing in the US.[112] In the same way that curation of watchlists for LFR is subject to greater scrutiny, the sourcing of non-police-originated images needs better oversight. Risks from police use of social media intelligence have been discussed in different contexts, e.g. in implicating technology firms in outsourced surveillance in the Snowden revelations or in policing the UK riots in 2011. There need to be publicly available guidelines for law enforcement that document how they should handle provenance tracking of images for IFS, particularly as this practice is likely to be covert. This could include embedding metadata into images flagging their origin, lawful basis for collection, use restrictions and what other agencies these can be shared with, as sketched below.

Footnotes:
[112] Tate Ryan-Mosley, 'The NYPD Used a Controversial Facial Recognition Tool. Here's What You Need to Know' (2021) MIT Technology Review <https://www.technologyreview.com/2021/04/09/1022240/clearview-ai-nypd-emails/>.
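As a purely illustrative sketch of what such provenance tracking might look like in practice, the record below attaches origin, lawful basis and sharing restrictions to an image before it is accepted onto a watchlist. The field names are our own assumptions; no existing force policy or regulator mandates this schema.

```python
from dataclasses import dataclass, field

@dataclass
class ImageProvenance:
    """Provenance record to accompany an image added to an IFS watchlist.
    Illustrative fields only; not drawn from any existing guidance."""
    image_id: str
    origin: str                      # e.g. "custody_image", "public_body", "social_media"
    collected_by: str                # unit or partner agency that sourced the image
    lawful_basis: str                # recorded legal basis for collection and use
    collection_date: str
    use_restrictions: list[str] = field(default_factory=list)   # e.g. "LFR watchlist only"
    shareable_with: list[str] = field(default_factory=list)     # agencies cleared to receive it
    review_due: str = ""             # date by which retention must be reviewed

def provenance_complete(record: ImageProvenance) -> bool:
    """Minimal completeness check before an image is accepted onto a watchlist."""
    return all([record.origin, record.collected_by, record.lawful_basis,
                record.collection_date, record.use_restrictions])

# Example: a hypothetical non-police-originated image with its provenance recorded.
record = ImageProvenance(
    image_id="img-0001",
    origin="public_body",
    collected_by="partner_agency_x",
    lawful_basis="recorded under force policy",  # placeholder text, not a real basis
    collection_date="2021-05-01",
    use_restrictions=["LFR watchlist only"],
    shareable_with=["home_force"],
)
print(provenance_complete(record))  # True
```

A completeness check of this kind would give the authorising officer something concrete to review for non-police-originated images, in line with the oversight Lesson 9 calls for.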
Lesson 10. Law enforcement needs to be responsive to emergent societal harms and risks, particularly around the integration of IFS with other technologies, e.g. predictive policing or sentencing systems. Whilst processes like impact assessments are used for initial deployment and help map initial issues, forecasting new harms during deployment needs to be an ongoing focus. Improving horizon scanning for inequalities would be one element, particularly for EAI, e.g. for those who display emotion in different ways. This will be needed in addition to process-led safeguards like public sector equality duties to ensure any use of new IFS is proportionate, legally compliant and trusted by the public.

Acknowledgements

Thanks to our participants and Prof Andrew McStay for his valuable comments on an earlier draft.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Funding

This work was supported by the Economic and Social Research Council [Grant Number ES/T00696X/1]; the Engineering and Physical Sciences Research Council [Grant Number EP/V026607/1]; and Keele University [Research Strategy Fund].
