AI is no silver bullet – but law enforcement must apply it to OSINT to stay ahead

Written by Brett Redman
OSINT SME

Open Source Intelligence (OSINT) is thought to make up as much as 80% of the intelligence material used by law enforcement. As a notoriously manual and time-intensive discipline, it appears to be a natural fit for AI application. But this isn’t always the reality. Many investigators argue that OSINT relies on nuanced assessments that AI isn’t yet able to provide, and that using it may be unethical.

Yet to counter malicious actors and remain competitive on a global stage, UK agencies and law enforcement must use every tool available. Considering that so much intelligence is derived from OSINT, establishing how and where AI can be used within OSINT processes should be high on their agenda.

Publicly available data: a blessing and a curse

OSINT’s widespread use is largely due to the public availability of its sources and the value that can be extracted from huge quantities of data. Intelligence agencies and law enforcement can find insights in this data that allow them to identify and stop criminals. However, because the data is public, anyone else can find those insights too. If law enforcement cannot gain an advantage over other OSINT users, its effectiveness will be limited.

What’s more, malicious entities or state actors can use OSINT for their own ends – for example, to create disinformation campaigns. Terrorist groups or state actors can craft highly shareable social media content designed to sow disruption and distrust among the public. Once this content goes viral, it cannot easily be traced back to its creators, and the information within it remains in circulation, which can be very harmful.

With OSINT increasingly accessible to the general population, investment in technology innovations like AI is one of the few ways governments can use OSINT to stay ahead of criminals and compete on a world stage. Where AI isn’t an appropriate solution, governments should prioritise automation to ensure maximum effectiveness in OSINT operations.

Where should AI use in OSINT be encouraged?

AI already has a number of obvious uses in the collection and processing of data – whether open source or privileged.

  1. Data collection: AI-driven automation is already being used to increase the efficiency of data-gathering, especially from multiple, disparate sources. This helps investigations teams overcome the challenges of limited manpower, making OSINT investigations more scalable.
  2. Monitoring: Investigators cannot be everywhere at once. AI-powered automation can flag changes to information sources online – whether news reporting or social media posts. This allows investigators to be aware of important developments in real time.
  3. Processing support: Sorting through large volumes of collected data is arguably the most time-consuming stage of the OSINT process. Natural Language Processing (NLP) can be used to bring structure to data so it is ready for analysis. It helps by extracting key entities, categorising by theme or sentiment, and filtering out irrelevant noise; a brief sketch follows this list.
  4. Translation: Language skills are an essential tool for an investigations team, but no team can cover every language needed. AI can ensure accurate translations of collected data so that key insights are not missed.
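
As a rough illustration of the processing step above, the sketch below shows how an open-source NLP library such as spaCy could extract entities and filter noise from collected text. The example snippets, field names and relevance rule are assumptions made for illustration only, not a description of any particular agency’s or vendor’s pipeline.

```python
# Minimal sketch: structuring collected text with spaCy (illustrative only).
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Placeholder snippets standing in for collected open-source material.
collected_texts = [
    "Acme Logistics opened a new warehouse in Rotterdam last week.",
    "Lovely weather this morning!",
]

def structure_item(text: str) -> dict:
    """Turn a raw snippet into a structured record ready for analysis."""
    doc = nlp(text)
    return {
        "text": text,
        # Named entities: people, organisations, places, dates, etc.
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        # Crude relevance filter: keep items mentioning a person, organisation or place.
        "relevant": any(ent.label_ in {"PERSON", "ORG", "GPE"} for ent in doc.ents),
    }

records = [structure_item(t) for t in collected_texts]
analysable = [r for r in records if r["relevant"]]  # irrelevant noise filtered out
```

In a real pipeline the extracted entities would feed link analysis or entity resolution tools, with an investigator validating the results rather than relying on the filter alone.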

Can AI ever help with analysis?

Experienced investigators continue to be integral to OSINT investigations. Human nuance and context are needed for truly effective analysis, especially where lives may be affected by investigation outcomes.

However, AI can and should be used for some aspects of analysis. In some cases, human investigators may produce less accurate results due to limited data availability, operational constraints or simple lapses in concentration. Areas where AI can help include:

  • Summarising long videos or documents: AI can review long-form content faster than a human, with less risk of missing important details.
  • Sentiment analysis: AI can assess far larger amounts of data than would be possible for a human, providing a more complete view of sentiment across many social media profiles or around a particular topic; a short sketch follows this list.
  • Navigating data: AI can cluster similar content at scale and draw attention to the most important data, or data that is connected.
  • Geolocation: AI can assess more options and draw on a greater pool of information at once. This means it’s more likely to identify the location of an individual based on information shared.
  • Facial recognition and object detection: AI can review content for the presence of individuals or objects much faster than a human.
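
To make the sentiment point more concrete, here is a minimal sketch of batch sentiment scoring using the open-source Hugging Face transformers pipeline. The example posts and the default model are assumptions for illustration; a real deployment would pin a vetted, domain-appropriate model and keep an analyst in the loop.

```python
# Minimal sketch: scoring sentiment across many posts at once (illustrative only).
from collections import Counter
from transformers import pipeline

# Placeholder posts standing in for collected social media content.
posts = [
    "Really impressed with how quickly the council responded to the flooding.",
    "This policy is a disaster, nobody is listening to residents.",
    "Does anyone know when the road will reopen?",
]

# Default sentiment model; a real deployment would choose a vetted, domain-tuned model.
classifier = pipeline("sentiment-analysis")

results = classifier(posts)                  # one {"label", "score"} dict per post
summary = Counter(r["label"] for r in results)

print(summary)  # e.g. Counter({'NEGATIVE': 2, 'POSITIVE': 1})
# The aggregate guides attention; an investigator still reviews the flagged posts.
```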

What about ethics and governance?

In all of the contexts listed above, there is the potential for AI to make mistakes or overlook important insights, justifiably giving rise to concerns about ethics and accuracy. However, it could be argued that humans are just as likely – if not more likely – to make such mistakes. On top of this, humans can even misinterpret the results AI produces.

The enormous efficiency and insight gains AI brings mean that investigators should use it to maintain an advantage and avoid the risk of falling behind. In many cases, the potential reward outweighs the risks.

Yet the risks of AI use should not be disregarded. Now, legislation is emerging to help AI users make sense of this dilemma. The EU AI Act urges caution when using ‘high-risk AI systems’. This is a category which includes systems that process personal data, such as that found through OSINT methodology. However, even systems otherwise prohibited by the Act can be used in extenuating circumstances, which government and law enforcement face every day. ‘Searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited’, ‘Preventing substantial and imminent threat to life, or foreseeable terrorist attack’ or ‘Identifying suspects in serious crimes’ are all cases in which AI may be used, with caution.

It is this attitude that we must keep in mind when using AI for OSINT. Like humans, AI can be prone to bias and inaccuracies – but, with careful use, it can and must be used as a force for good.  
