Trust me, I'm an algorithm: Why trust is vital to successful implementation of AI for investigators

By Stuart Clarke

Our enthusiasm to embrace the rapidly evolving opportunities AI offers means that we must also reflect on the broader barriers to adoption. Naturally, there are regulatory considerations in adopting AI technologies. Understanding the pertinent regulations and guidelines will help ensure responsible, compliant AI adoption within your organisation.

Benefits and pitfalls of AI regulation 

Firstly, I encourage everyone to familiarise themselves with emerging regulations such as the EU Artificial Intelligence Act and the US Executive Order on the use of AI. The EU AI framework establishes clear parameters for the use of Artificial Intelligence, and emphasises the importance of defining overarching principles to evaluate its potential impacts, both positive and negative.

Regulations like this are encouraging in that they can create common standards, but they may also be applied in ways that limit innovation. For instance, strict data privacy requirements in some jurisdictions have made it challenging for companies to leverage cross-border AI solutions, potentially slowing technological progress. I believe that it’s essential for every AI user to understand the EU AI Act, not because I think it is a gold standard, but rather because it brings the key topics and considerations around data privacy into focus. 

Trust: the ultimate goal for the AI investigator 

While we may be guided by the AI regulatory landscape, there are other, more prominent factors to consider when we think about AI adoption. For me, the most important of these is trust. Trust is foundational, not only for investigators but in all walks of life: when we lose trust in leaders, systems or technology, things start to break down. However, investigators have a particular need for trust in the systems they use: our work is frequently used to make decisions with significant societal impact, such as combating organised criminality. For an investigator to adopt any form of technology or product, we therefore need to trust that it is safe, reliable and ethical. These factors are not feelings; trust is earned through the production of evidence. Without trust, there will always be reluctance to adopt AI technology.

For investigators to begin to trust AI as a collaborative solution that can truly support investigative work, it is important that we recognise that there are several dynamics which must be satisfied: 

  1. Security and Privacy: Security is already a central consideration in investigators’ work: we spend time minimising our online footprint and ensuring we remain anonymous to reduce risk. In an AI context, we must have evidence that a system will not expose sensitive data and open the door to a new wave of cyber attacks. We must also have confidence that AI systems will not collect disproportionate volumes of data.
  2. Explainability and Transparency: Because our investigations frequently provide evidence in a law enforcement context, explainability is absolutely essential. Black-box models are not going to provide the evidence we need to develop trust. We need to understand how AI makes its decisions and be able to explain its processes in our own words. A human should be able to replicate the steps and the thinking.
  3. Fairness and Bias: Part of the skill honed by an experienced investigator is understanding and mitigating our own bias. AI is, at its heart, a learner, and it is important to remember that it is often trained by humans. We add complexity when we are involved in training AI models, because we can introduce discrimination and bias, ultimately undermining our trust in what the AI is doing.
  4. Reliability and Performance: As investigators, we are inherently systematic and methodical in our approach. Erratic behaviour from AI is not going to build trust. AI needs to be consistent and predictable, but we need to have a tolerance for mistakes and be prepared to train the system. Ultimately, well-trained AI will make fewer mistakes than humans: the key is to understand its limitations.  

How can AI earn our trust? 

Trust in AI is not only technical. It is earned over time through consistency and alignment. If we avoid using AI, we will fail to recognise the potential of correctly implemented AI technology. To help establish a strong foundation on which trust for AI can be built, it is critical that we: 

  • Adapt existing and introduce new regulatory frameworks that encourage greater understanding of AI amongst the community and develop clear standards. These could help us to introduce concepts like AI sandboxing to develop solutions to complex problems in a responsible and scalable manner. 
  • Build technical skills and set educational standards to ensure the general population is equipped to use AI safely and effectively.  
  • Create centres of excellence, such as in investigative analytics, to address key technical challenges. 
  • Accelerate the development of public-private partnerships to help keep pace with AI, while actively developing standards that support innovation and ultimately build trust. 

It is critical that we are proactive in engaging with AI now to understand its potential and, perhaps even more so, its limitations. There is little doubt that our future includes AI and demands continuous innovation. We must act now to strengthen our understanding and, even more importantly, find a level of trust that we are comfortable with.
