Evidence-based policing and the rise of AI
- Dan Birks
This opinion piece is one of several I have recently written with the help of generative AI, which I use to organise ideas that have been sitting in my head for too long. The thoughts are mine; the tools help me get the first draft on the page. You should try it.
Policing has historically struggled to base practice on rigorous evidence. Decisions were often guided by tradition, instinct, or perceived good practice rather than by robust evaluation or empirical testing. That is changing. Across operational meetings, professional practice forums, multi-agency partnerships, and applied academic conferences, police officers increasingly describe their work as data-informed and evidence-led. There is genuine interest in understanding what works, what does not, and what may cause unintended harm. This shift reflects a maturing profession that values the opportunity to learn from high-quality research and to deploy resources in ways that maximise public safety.
At the same time, policing is entering a period of rapid technological expansion. Artificial intelligence has moved from niche analytical applications to a field of general-purpose tools with potentially transformative impacts. Generative AI promises efficiencies, novel forms of insight, and new capabilities for summarising, interpreting and interacting with vast volumes of information. The pressure to adopt these tools is significant. Resources are stretched, demands are rising and diversifying, and governments are actively encouraging police and other public services to explore and harness AI.
This makes it more important than ever to draw a clear distinction between being data-driven and being evidence-based. A system that uses data is not inherently grounded in evidence. Data provides a record of what was observed. Evidence tells us whether something works, whether it is safe, whether it is fair, and whether it is likely to achieve the outcomes we claim. Too often, technological tools are adopted on the assumption that because they process data, they must be objective and effective. That is a category error. Evidence is established through transparency, scrutiny, evaluation, and replication. Without these, tools that happen to use data risk becoming vehicles for harm.
This distinction becomes especially critical where AI is embedded in policing. I am optimistic about the potential of AI. These tools can offer real value if they are deployed within practical governance frameworks, and if their effectiveness and impacts are systematically monitored and evaluated. There are opportunities to improve decision-making, reduce administrative burdens, support investigations, and generate insights that would otherwise be inaccessible.
However, without careful evaluation and meaningful oversight, we risk sleepwalking into problems that could set policing back significantly. Ineffective or harmful technologies will damage public trust. They will drain already stretched resources. They will undermine the legitimacy of evidence-based practice. This challenge is further amplified by the speed and scale at which private sector vendors are moving into this space, often marketing powerful systems to forces whose capacity to appraise, test, and monitor what is being sold to them is limited. A single high-profile failure in deploying these new, potentially transformative AI applications could poison the well, making it far harder for genuinely beneficial tools to gain acceptance in the future.
The task is therefore to ensure that AI adoption aligns with the principles of evidence-based policing. That means clearly defining the problem a tool is intended to solve; planning evaluations before deployment, not after; running trials and phased testing before scaling; and scrutinising accuracy, bias, operational impacts, proportionality, and public acceptability throughout. It also means learning from other sectors that have already faced similar challenges. Crucially, it means recognising that an AI tool is not evidence-based simply because it happens to use data.
AI can genuinely deliver for policing and for society if we treat its adoption as an evidence-generation challenge, not as a procurement exercise. The goal should be to build a future where technology strengthens practice because it has been scrutinised, tested, and shown to work. That is how we avoid repeating past mistakes, protect fragile public trust, and realise the positive potential of AI in the long term.
Dan Birks, Professor of Computational Social Science at the University of Leeds
Dan is also Deputy Director and Data Science Lead at the ESRC Vulnerability & Policing Futures Research Centre, and Co-Director of the Yorkshire Policing-Academic Centre of Excellence (TYP-ACE), jointly hosted by the Universities of Leeds and York. TYP-ACE is one of nine nationally recognised Policing-Academic Centres of Excellence established by the NPCC and UKRI.


