Government Report Warns of AI Policing Bias

A new government-backed report has warned that the growing use of automation and machine learning algorithms in policing could be amplifying bias in the absence of consistent guidelines.

Commissioned by the Centre for Data Ethics and Innovation (CDEI), which sits within the Department for Digital, Culture, Media and Sport, the report, produced by the Royal United Services Institute (RUSI) think tank, will inform formal recommendations due in March 2020.

It’s based on interviews with civil society organizations, academics, legal experts and the police themselves, many of whom are already trialing technology such as controversial AI-powered facial recognition.

The report claimed that the use of such tools, along with those deployed for predictive crime mapping and individual risk assessment, can actually amplify discrimination if they are trained on flawed data containing bias.

This could include over-policing of certain areas and a greater frequency of stop and search targeting the black community.
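
The mechanism behind that warning is a feedback loop, which a toy simulation can make concrete. In the sketch below (all numbers made up, nothing drawn from the report itself), two areas have identical underlying crime rates, but patrols are allocated in proportion to historically skewed records, and crime is only recorded where officers are sent, so the skew never corrects itself.

```python
# Illustrative sketch only: every figure here is hypothetical and nothing
# comes from the RUSI report itself.
import random

random.seed(0)

TRUE_CRIME_RATE = [0.10, 0.10]  # two areas with identical underlying crime
recorded = [60, 40]             # but historical records skewed toward area 0
N_PATROLS = 100                 # fixed patrol budget per step

for step in range(20):
    total = sum(recorded)
    # "Predictive" allocation: patrols go where past records point.
    patrols = [round(N_PATROLS * r / total) for r in recorded]
    for area in (0, 1):
        # Crime is only *recorded* where officers are present, so the
        # favoured area accumulates records faster regardless of reality.
        observed = sum(
            random.random() < TRUE_CRIME_RATE[area]
            for _ in range(patrols[area])
        )
        recorded[area] += observed

share = recorded[0] / sum(recorded)
print(f"area 0 share of records after 20 steps: {share:.0%}")
# The initial 60/40 skew persists (and can drift further) even though the
# two areas are identical: the data the model learns from reflects where
# patrols were sent, not the underlying level of crime.
```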

It also warned that the emerging technology is currently being used without any clear overarching guidance or transparency, meaning key processes for scrutiny, regulation and enforcement are missing.

RUSI claimed that police forces need to consider carefully how algorithmic bias may lead them to police certain areas more heavily, and warned against over-reliance on technology, which could reduce the role of case-by-case discretion. It also said that discrimination cases could be brought by individuals unfairly “scored” by algorithms.
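
The report does not spell out how such scores should be audited, but one common first diagnostic in discrimination analysis is to compare the rate at which each group is flagged as high risk, for instance against the “four-fifths rule” used in US employment law. The sketch below applies that test to made-up risk scores; the threshold, group labels and data are all assumptions for illustration, not anything RUSI prescribes.

```python
# Minimal disparate-impact check (four-fifths rule). Scores, groups and
# the cut-off are hypothetical; this is not a test from the RUSI report.
from collections import defaultdict

THRESHOLD = 0.7  # assumed cut-off above which a person is flagged "high risk"

# Hypothetical (group, risk_score) pairs from some scoring model.
scored = [
    ("group_a", 0.82), ("group_a", 0.45), ("group_a", 0.31), ("group_a", 0.66),
    ("group_b", 0.91), ("group_b", 0.75), ("group_b", 0.73), ("group_b", 0.40),
]

flagged = defaultdict(int)
totals = defaultdict(int)
for group, score in scored:
    totals[group] += 1
    flagged[group] += score >= THRESHOLD  # True counts as 1

rates = {g: flagged[g] / totals[g] for g in totals}
print("flag rates:", rates)

# Four-fifths rule: if one group's flag rate is below 80% of another's,
# the scoring system warrants scrutiny for adverse impact.
ratio = min(rates.values()) / max(rates.values())
print(f"impact ratio: {ratio:.2f}",
      "(review needed)" if ratio < 0.8 else "(within 4/5 rule)")
```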

“Interviews conducted to date evidence a desire for clearer national guidance and leadership in the area of data analytics, and widespread recognition and appreciation of the need for legality, consistency, scientific validity and oversight,” the report concluded.

“It is also apparent that systematic investigation of claimed benefits and drawbacks is required before moving ahead with full-scale deployment of new technology.”

Zach Jarvinen, OpenText’s head of AI and analytics, argued that the best way to avoid bias in AI is to implement an “ethical code” at the data collection phase.

“This must begin with a large enough sample of data to yield trustworthy insights and minimize subjectivity. Thus, a robust system capable of collecting and processing the richest and most complex sets of information, including both structured data and unstructured, and textual content, is necessary to generate the most accurate insights,” he added.

“Data collection principles should be overseen by teams representing a rich blend of views, backgrounds, and characteristics (race, gender, etc.). In addition, organizations should consider having an HR or ethics specialist working in tandem with data scientists to ensure that AI recommendations align with the organization’s cultural values.”
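
One way to act on Jarvinen’s “large enough sample” point is to verify, before any training, that every demographic group in the dataset clears a minimum count. The sketch below uses an assumed threshold and field name; nothing in it comes from OpenText or the report.

```python
# Pre-training coverage check; the threshold and field name are assumptions
# made for illustration, not figures from OpenText or the RUSI report.
from collections import Counter

MIN_SAMPLES_PER_GROUP = 1000  # assumed floor for trustworthy per-group estimates

def under_represented(records, group_field="ethnicity"):
    """Return groups whose sample size falls below the minimum."""
    counts = Counter(r[group_field] for r in records)
    return {g: n for g, n in counts.items() if n < MIN_SAMPLES_PER_GROUP}

# Made-up dataset where one group is badly under-sampled.
records = [{"ethnicity": "group_a"}] * 5000 + [{"ethnicity": "group_b"}] * 120

gaps = under_represented(records)
if gaps:
    # Training on this data risks unreliable, skewed scores for group_b.
    print("insufficient data for:", gaps)  # -> {'group_b': 120}
```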

Source: Infosecurity