Our enthusiasm to embrace the rapidly evolving opportunities AI offers must be balanced by reflection on some broader barriers to adoption. Naturally, there are regulatory considerations in adopting AI technologies. Understanding pertinent regulations and guidelines will help ensure responsible, compliant AI adoption within your organisation.
Firstly, I encourage everyone to familiarise themselves with emerging regulations such as the EU Artificial Intelligence Act and the US Executive Order on the use of AI. The EU AI framework establishes clear parameters for the use of artificial intelligence, and emphasises the importance of defining overarching principles to evaluate its potential impacts, both positive and negative.
Regulations like this are encouraging in that they can create common standards, but they may also be applied in ways that limit innovation. For instance, strict data privacy requirements in some jurisdictions have made it challenging for companies to leverage cross-border AI solutions, potentially slowing technological progress. I believe that it’s essential for every AI user to understand the EU AI Act, not because I think it is a gold standard, but rather because it brings the key topics and considerations around data privacy into focus.
While we may be guided by the AI regulatory landscape, there are other, more prominent factors to consider when we think about AI adoption. For me, the most important of these is trust. Trust is foundational, not only for an investigator but in all walks of life: when we lose trust in leaders, systems or technology, things start to break down. However, investigators have a particular need for trust in the systems they use: our work is frequently relied upon to make decisions with significant societal impact, such as combating organised crime. For an investigator to adopt any form of technology or product, we therefore need to trust that it is safe, reliable and ethical. These judgements are not feelings; trust is earned through the production of evidence. Without trust, there will always be reluctance to adopt AI technology.
For investigators to begin to trust AI as a collaborative solution that can truly support investigative work, it is important that we recognise that several dynamics must be satisfied:
Trust in AI is not only technical; it is earned over time through consistency and alignment. If we avoid using AI, we will never recognise the potential of correctly implemented AI technology. To help establish a strong foundation on which trust in AI can be built, it is critical that we:
It is critical that we engage proactively with AI now to understand its potential and, perhaps more so, its limitations. There is little doubt that our future includes AI and demands continuous innovation. We must act now to strengthen our understanding and, even more importantly, find a level of trust with which we are comfortable.