Open Source Intelligence (OSINT) is thought to make up as much as 80% of the intelligence material used by law enforcement. As a notoriously manual and time-intensive discipline, it appears to be a natural fit for AI. But this isn’t always the reality. Many investigators argue that OSINT relies on nuanced assessments that AI isn’t yet able to provide, and that using it may be unethical.
Yet to counter malicious actors and remain competitive on a global stage, UK agencies and law enforcement must use every tool available. Considering that so much intelligence is derived from OSINT, establishing how and where AI can be used within OSINT processes should be high on their agenda.
OSINT’s widespread use is largely due to its publicly available nature and the value that can be extracted from huge quantities of data. Intelligence agencies and law enforcement can find insights in this data that allow them to identify and stop criminals. However, because the data is public, so can anyone else. If law enforcement cannot gain an advantage over other OSINT users, its effectiveness will be limited.
What’s more, malicious entities or state actors can use OSINT for their own ends – for example, to create disinformation campaigns. Terrorist groups or hostile states can craft highly shareable social media content designed to sow disruption and distrust among the public. Once this content goes viral, it can no longer be traced back to its creators, and the false information within it is left in circulation, where it can be very harmful.
With OSINT increasingly accessible to the general population, investment in technology innovations like AI is one of the few ways governments can use OSINT to stay ahead of criminals and compete on a world stage. Where AI isn’t an appropriate solution, governments should prioritise automation to ensure maximum effectiveness in OSINT operations.
AI already has a number of obvious uses in the collection and processing of data – whether open source or privileged.
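To make this concrete, the sketch below is a minimal, hypothetical illustration in Python of machine-assisted processing of open-source material: named-entity recognition used to surface the people, organisations and places mentioned across a batch of public posts, leaving judgements about significance to the analyst. It assumes spaCy and its small English model (en_core_web_sm) are installed, and the sample posts are invented.

```python
# A minimal sketch of AI-assisted processing of open-source text.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

# Invented stand-ins for publicly available posts collected elsewhere.
public_posts = [
    "Protest planned outside Westminster on Friday.",
    "Shipment leaves Rotterdam for Felixstowe next week.",
]

entity_counts = Counter()
for post in public_posts:
    doc = nlp(post)
    for ent in doc.ents:
        # Keep only people, organisations and geopolitical entities.
        if ent.label_ in {"PERSON", "ORG", "GPE"}:
            entity_counts[(ent.text, ent.label_)] += 1

# Surface the most frequently mentioned entities for an analyst to review.
for (text, label), count in entity_counts.most_common(10):
    print(f"{label:6} {text} ({count} mentions)")
```

The point of a pattern like this is volume, not judgement: the model compresses thousands of posts into a reviewable shortlist, while the interpretive work stays with the investigator.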
Experienced investigators continue to be integral to OSINT investigations. Human nuance and context are needed for truly effective analysis, especially where lives may be affected by investigation outcomes.
However, AI can and should be used for some aspects of analysis. In some cases, investigators may produce less accurate results due to limited data availability, operational constraints or even lapses in concentration. These can include:
In all of the contexts listed above, there is the potential for AI to make mistakes or ignore important insights, justifiably giving rise to concerns about ethics and accuracy. However, it could be argued that humans are just as likely – if not more likely – to make such mistakes. On top of this, humans can even misinterpret the results AI produces.
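One practical way to manage these risks is a human-in-the-loop pattern: AI triages material, but any low-confidence or high-impact item is routed to a person rather than acted on automatically. The sketch below is a minimal, hypothetical illustration in Python – `model_score` is a stand-in for a real classifier, and the threshold and fields are invented for the example.

```python
# A minimal human-in-the-loop triage sketch: the model scores items,
# but low confidence or high human impact always means human review.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # invented value for illustration

@dataclass
class Item:
    text: str
    high_impact: bool  # e.g. the outcome could affect someone's liberty

def model_score(item: Item) -> float:
    """Placeholder for a real model's confidence score (0.0 to 1.0)."""
    return 0.95 if "duplicate" in item.text.lower() else 0.6

def triage(items: list[Item]) -> tuple[list[Item], list[Item]]:
    auto, review = [], []
    for item in items:
        score = model_score(item)
        # Uncertain or consequential items are never auto-processed.
        if score < CONFIDENCE_THRESHOLD or item.high_impact:
            review.append(item)
        else:
            auto.append(item)
    return auto, review

auto, review = triage([
    Item("Possible missing person sighting", high_impact=True),
    Item("Routine duplicate of an existing report", high_impact=False),
])
print(f"{len(auto)} auto-processed, {len(review)} sent for human review")
```

The design choice here mirrors the argument above: AI absorbs the repetitive load where human error is most likely, while decisions that carry real consequences remain with the investigator.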
The enormous efficiency and insight gains AI brings mean that investigators should use it to maintain an advantage and avoid the risk of falling behind. In many cases, potential reward outweighs the risks.
Yet the risks of AI use should not be disregarded. Legislation is now emerging to help AI users make sense of this dilemma. The EU AI Act urges caution when using ‘high-risk AI systems’ – a category that includes systems processing personal data, such as that found through OSINT methodology. However, even systems otherwise prohibited by the Act can be used in exceptional circumstances, which government and law enforcement face every day. ‘Searching for missing persons, abduction victims, and victims of human trafficking or sexual exploitation’, ‘Preventing substantial and imminent threat to life, or foreseeable terrorist attack’ and ‘Identifying suspects in serious crimes’ are all cases in which AI may be used, with caution.
It is this attitude that we must keep in mind when using AI for OSINT. Like humans, AI can be prone to bias and inaccuracy – but, used carefully, it can and must be a force for good.