Semiotics, online culture and OSINT with Joe Ondrak

Guest
  • Dr Joe Ondrak, Director of Peryton Intelligence


In this episode, Matthew Stibbe interviews Dr. Joe Ondrak, CEO of Peryton Intelligence, discussing the intersection of semiotics, extremism, and open source intelligence (OSINT). They explore the concept of epistemic security, the importance of cultural intelligence in OSINT, and the role of strategic narratology in understanding narratives. The conversation also touches on the implications of AI in intelligence work and the emerging phenomenon of meme coins as a new source of insight into online communities and behaviours.

Please note that any views expressed in this podcast episode are the speakers' own, and do not necessarily reflect those of Blackdot.

AI-generated transcript
Matthew Stibbe (00:01.498)
Hello and welcome to Blackdot Solutions' From The Source podcast. I'm your host Matthew Stibbe and today I'm talking to Joe Ondrak who is CEO of Peryton Intelligence. Great to have you on the show Joe.
Dr Joe Ondrak (00:15.512)
Pleasure to be here.
Matthew Stibbe (00:17.368)
Well, now, so let's start off with a question I ask all my guests. What are you geeking out about at the moment?
Dr Joe Ondrak (00:23.704)
Geeking out about... at the moment I would say broadly semiotics, but the semiotics of extremism. So there was a wonderful, well, yeah... unfortunately you can take the nerd out of academia, et cetera, et cetera. But there was a brilliant article on Marc Andre's blog. Marc Andre is someone who does a lot of work in counter-extremism.
Matthew Stibbe (00:33.454)
Gotta have a hobby, right?
Dr Joe Ondrak (00:50.794)
extremism, specifically new and emergent forms. And it's about new, transgressive online groups who are adopting the symbols of terrorist groups on the extreme right without actually believing in it. And it's posing the question of at what point that stops mattering. Are they still doing a terrorist thing, or are they just participating in the semiotics and the culture? So it's something that I've been thinking about for a little while. I wrote a piece in GNET about similar things, and participation culture and extremism, and it just ties into this new, difficult world that we're stepping into when it comes to what online extremism means.
Matthew Stibbe (01:35.14)
They're doing it to be provocative or without necessarily being ideologically committed?
Dr Joe Ondrak (01:42.05)
Yeah, so it's, I'll be honest, it's no different from when Sex Pistols walked out wearing swastikas. It's transgression, except now it's just a lot more serious and a lot more hardcore than it once was.
Matthew Stibbe (01:57.06)
And the risk is that it sort of legitimates or it brings into...
Dr Joe Ondrak (02:03.054)
There are a few risks. So there is the very easy and obvious one of normalising, you know, parading Sonnenrads and more niche extreme symbology. The other side of it is that it goes beyond the symbology and becomes sharing different manuals and manifestos, and then you're looking at Section 58 offences. So for your younger generation, and it is a largely younger generation who are participating in these subcultures, they might not know that this is massively illegal, or they might not even necessarily care. It's transgression, and it's using these symbols and these semiotic systems to illustrate and show that level of transgression.
Matthew Stibbe (02:50.978)
And so tell us a little bit about your career. What got you to the semiotics of extremism?
Dr Joe Ondrak (02:57.934)
So I have always at heart been a language nerd. From way, way back, early on, pre-undergrad, I always wanted to go to university. I always wanted to do something with language. So my undergrad was in literature and linguistics. I then did a Master's by Research in literature, stylistics and formalism, so looking at how the form of a story shapes our perception of it, looking at interesting in-between novels that tried to bring the internet onto the printed page and how they went about that using different semiotic systems, basically. And then I did my PhD in online narratology, and that was looking at how...
various stories that are told on social media shape people's participation and behaviour in that story and frame how they engage in that world. So that was looking at things like Creepypasta. One of the cases that I looked at was the 2014 Waukesha Slenderman stabbing. So you had two young girls who read stories about the Slenderman online and believed them, because the form of the story is to be believable; they were written from a first-person account. Now, for older people who were participating in that, they knew it was a story; half the fun was the affective role play. But for these young girls, they believed it, and they believed that they had to stab their friend, kill their friend, in order to be a proxy of the Slenderman. Thankfully their friend survived, but again, it was a really good example of how stories told in a certain way online shape human behaviour in a really
concrete and serious way at times. And post-PhD, I got quite disillusioned with the UK postdoctoral academic job market. And it was just serendipity that a company called Logically was set up. And they were after open source intelligence analysts, leads, and people with sort of that more journalistic eye. And yeah.
Dr Joe Ondrak (05:12.94)
The rest from there is a hop, skip and a jump to where I am now at Peryton.
Matthew Stibbe (05:16.802)
I often wonder, if I had my life again but skipped forward 20 years, whether I would have ended up in open source intelligence. I find it fascinating. And I hear that you were at Sheffield Hallam and you crossed paths with my namesake, the professor Matthew Stibbe. Is that right?
Dr Joe Ondrak (05:38.156)
Yes. So Professor Matthew Stibbe was the chair for my viva, there as sort of the independent head, although he did notice my surname. And he was doing a lot of work in the history of the... So he was doing a lot of work around the Czech airmen of the RAF who came over, which was obviously how my name ended up here in the UK.
Matthew Stibbe (05:53.882)
Because he writes about Czech history, doesn't he?
Matthew Stibbe (06:08.536)
Many years ago, when I was a pilot, I took a bunch of retired airmen who'd fought in the war flying, and one of them was a rear gunner like your grandfather, and I just, I have to say, brave man. Anyway, moving on from that. Multiple small worlds. So when we were talking earlier, you mentioned three things to me that were mind-blowing. I love new words, I love new thoughts, and I'd love to explore them with you for the benefit of our listeners. The first one is epistemic security. Can you just talk a little bit about that? And for the uninitiated, and I'm sure my audience are much brighter and better informed than I am, what does it mean, and what does it mean for what you do?
Dr Joe Ondrak (06:57.239)
Yes, so epistemic security. This is one of the things that I put at the forefront of a lot of the practice that I do, and it's certainly a lot of the training I do in counter influence operations and counter disinformation. And the idea here is that with epistemology, you are talking about the study and the establishment of what we know as true, and truthfulness.
Ideally, societies should have an epistemic commons, the idea that we can all come together with different viewpoints on various issues, divisive or not. It can be anything from football to polarising political issues, this, that, or the other. And we should all be able to argue our points to debate from a shared epistemology. We are all agreeing on what truth is, so therefore we can have that discussion. 
The internet and disinformation campaigns, influence operations, leverage the fact that the epistemic commons is much more fragmented than it once was. So epistemic security is a reframing of the practice of countering influence operations and countering disinformation away from
playing whack-a-mole or trainspotting on campaigns, pointing things out that are happening technically, and instead thinking about how we can build a level of resilience in the shared epistemic commons that we have, so information operations and disinformation campaigns don't quite have the same level of power that they currently do with certain audiences. So it's a way to sort of
inject a new perspective into it from a digital research practitioner perspective, but also to reframe things around the human, around cultural cohesion, civic cohesion, and the wider whole of society approach to the problem, which I think is needed.
Matthew Stibbe (09:00.73)
I'm going to come back to you with a question on that, but my dog is whining, if you're wondering what the noise off is. So sorry, everybody, I'm just going to...
Dr Joe Ondrak (09:07.393)
Not a problem.
Matthew Stibbe (09:15.555)
Do you remember before COVID, when people had dogs or pets or children coming on their television programmes and it was national news, and now it's just normal? Apologies for that. What can we do? What do we need to do as a society to reinforce that fabric, that epistemic centre?
Dr Joe Ondrak (09:39.48)
Honestly, I think one of the best things we can do, and this is off the back of speaking to people up and down the country at various bits of training or events that I've been speaking at: we need to actually be out there speaking to each other in person again. That's the underlying, let's-cut-to-the-chase answer here. There are a lot of things we can do with policy, there are a lot of things we can do with regulation, with sort of mapping and monitoring influence operations and known accounts and networks, but ultimately, to build that societal resilience, we need to be out there speaking to each other a bit more. The thing that COVID did in shattering a lot of in-person meetups, the fact that we had a society that was centred around actually being in person rather than being digital, that scar still remains. And I think we need to make more efforts to reconnect in person.
Matthew Stibbe (10:40.814)
That's a fascinating insight. Funny that we were mentioning COVID, because we've also sort of partly retreated behind our screens. I mean, here we are, ironically, talking screen to screen, but if you're typing messages or you're recording stuff, there's this sort of indirection in that communication. But how do people with differing opinions, I don't know, agree on that sort of shared terms of reference? If you're meeting and conversing in person, are there sort of techniques or processes that people can adopt to do that better?
Dr Joe Ondrak (11:16.045)
There's, I think, a pragmatism that comes through in face-to-face communication. People don't want to be confrontational with one another, so the sort of civic nature of the human social animal takes over a bit more. People will often try to find common ground, whereas if you are in mediated communication, be that the video call that we're on now or, even worse, just typing messages across a screen,
Matthew Stibbe (11:23.77)
Mm.
Dr Joe Ondrak (11:44.787)
that's not another person that you're engaging with there. That is your imagination of another person, conjured up by what you're reading or looking at on the screen, and you can be as mean to them as you like and as disingenuous as you like. So, yeah, a lot of it does come down to: there is something that happens when we are face to face and we can talk. Maybe it's a difficult conversation. Maybe it's one that leaves both parties really frustrated. But more of an effort can be made in rebuilding those social centres. You know, I don't want to just glorify the pub, but that is a place where those kinds of conversations can happen, and happen between lots of different groups who might have wildly different viewpoints. But at the end of the day, a discussion is had, people hash things out, and maybe mindsets are changed in different directions. But it's very, very difficult when communication is so mediated and so fractured, and when people are existing in their own bubbles. There is no physical shared space, which feeds into that idea of a shared epistemic commons.
Matthew Stibbe (12:52.954)
And how is this concept useful to OSINT practitioners?
Dr Joe Ondrak (12:59.382)
So OSINT informs this idea of epistemic security for me. A lot of that comes through in the training I'm doing rather than the open source intelligence practice. So this is more for the counter influence operations and counter disinformation training that I'm doing, especially for local authorities and local governments. It's about ensuring that they recognise that disinformation isn't some abstract online problem, or that influence operations aren't these networks that just so happen to cause people to think bad things, but instead that it's part of this wider cultural shift, and the epistemic commons is something that they are stewards of in their local area and that they need to change or protect.
Matthew Stibbe (13:47.226)
That potentially surfaces an objective or a toolset that isn't just, I like your phrase, playing whack-a-mole with the disinformation. Yeah, you could do that, but over here we can do something else that's constructive and proactive. I'd like to come on to the idea of cultural intelligence now. Tell me what that means to you and why it should matter to OSINT professionals.
Dr Joe Ondrak (14:14.828)
So for me, this is the idea of reframing, or at least establishing, the human element of OSINT. It's exciting sometimes, when you're doing large-scale data scraping, to get lost in the source of the data of it all, really, to look for patterns, to look for coincidences, especially when you're doing social listening or narrative mapping, that kind of thing. But ultimately, the online surface that we all ply our trade in wouldn't exist without people on there producing that data. So for me, the idea of cultural intelligence is being aware of the human elements of this, being aware that culture shapes human behaviour and that, inversely, technology, especially the microcultures that exist online, then shapes that data. So it's the awareness of, and then the integration into your intelligence of, those little cultural elements, those little tells. Is someone from a small, niche online community participating in some narrative that has come from a wider, state-backed influence operation? How has that shaped how they're spreading it with other people? Is that going to have knock-on effects? Those kinds of questions. So it really brings the human to the heart of it.
Matthew Stibbe (15:45.082)
I think a lot of the OSINT work that I hear about and talk to people about is about the needle in the haystack, the signal in the noise, and this sounds like it's trying to imaginatively or intelligently understand the people creating the signal. Am I overinterpreting that? Is that a helpful way of thinking?
Dr Joe Ondrak (16:06.285)
No, that's about it really. Both have their merits; both are really, really useful. It would be the difference between looking at a network map and identifying the adversarial network within a hairball of collected data from a social media website, versus looking at the edge list, looking at the edges on that network, and going, well, why is that account connecting to that account? What has been said between those two nodes that's generating that kind of interaction? So it's building out that layer: not thinking about things in terms of nodes and data points, but as potential human producers of that data, and how that shapes how that data is then interpreted and shared elsewhere.
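The edge-list view Joe describes, asking what each connection actually represents rather than just counting links, can be sketched in a few lines of Python. Everything below (account names, interaction records) is invented for illustration; a real pipeline would populate the edge list from collected platform data.

```python
# A hypothetical edge list: each record is one interaction between two
# accounts, keeping the content that produced the connection rather than
# reducing it to an anonymous link in a network "hairball".
edges = [
    {"src": "acct_a", "dst": "acct_b", "type": "quote",   "text": "why would they say this?"},
    {"src": "acct_a", "dst": "acct_b", "type": "reply",   "text": "shared slogan X"},
    {"src": "acct_c", "dst": "acct_b", "type": "retweet", "text": "shared slogan X"},
    {"src": "acct_d", "dst": "acct_e", "type": "reply",   "text": "unrelated chat"},
]

def degree_counts(edges):
    """The network-map view: how connected each node is, ignoring content."""
    counts = {}
    for e in edges:
        for node in (e["src"], e["dst"]):
            counts[node] = counts.get(node, 0) + 1
    return counts

def interactions_between(edges, a, b):
    """The cultural-intelligence view: what was actually said on one edge."""
    return [e for e in edges if {e["src"], e["dst"]} == {a, b}]

# The counts flag acct_b as a hub; the edge inspection shows *why* it is one.
print(degree_counts(edges))
print(interactions_between(edges, "acct_a", "acct_b"))
```

Both views work on the same data; the second simply keeps the human-produced content attached to each connection so an analyst can ask Joe's "why is that account connecting to that account?" question directly.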
Matthew Stibbe (16:58.29)
Cultural affinities between the... yeah, okay, so let's move on, otherwise I'm going to continue to try and educate myself rather than learn from you. The last... the third thing that you mentioned: strategic narratology. What is that?
Dr Joe Ondrak (17:17.162)
Yes. So a lot is said in open source intelligence, especially when you're looking at things like social listening and broader-scale work, around narratives. A lot of value is placed on what the prevalent narratives are in XYZ area of operations or area of interest, you know, who is saying what, what the narrative prevailing in this particular topic is. Narratology takes that a step further. It's the study of narrative, not just the identification of narrative. So narratives can be shared, they can be spread in many different ways, and the way they are spread shapes how people interact with them. So again, it loops into that cultural intelligence and the human element here. So strategic narratology won't just be identifying a prevalent narrative somewhere. It will then be asking: how is this being shared? Is it short-form vertical video? Is it text? Are there different strategies across different platforms? Are they sharing phraseology? Are they sharing particular stances, or are they trying to appeal to different audiences? Is the audience reaction different depending on how they are spread? So is one version of this, pushing a narrative using an image with an audio clip or maybe a text caption, working better than, say, a LinkedIn post doing something different? So it's breaking down how a narrative is deployed, how the form it takes shapes perception of it, and how reception of it by an audience differs based on how it's being replicated and spread across the internet.
Matthew Stibbe (19:16.25)
All of these things that we're talking about seem to require a sort of humanities-driven... not a mathematical or scientifically rigorous analysis, but more of a subjective or intuitive human interpretation. To what extent is that true? To what extent do open source intelligence analysts need to get in touch with their humanities?
Dr Joe Ondrak (19:46.612)
I mean, that is my background. That's how I got into this. Open source intelligence, and certainly the technical aspects, came out of my PhD, because I was looking for original material, tracing narratives through the internet that at that point were 10 years old or so. So the open source intelligence side for me was the self-taught thing that I got the book for. The narrative, the humanities element, has always been at the forefront of my mind, and how that can then shape my practice going forward, basically bringing all of this to where I am today. I think the open source element of open source intelligence can broaden its horizons, not just in terms of the data that we're able to gather, but in terms of the frameworks, the interpretive models, and even the disciplines that we can pluck from. It doesn't necessarily all have to be computer science, AI and coding. We can, using the intelligence side of things, bring in stuff from sociology, from linguistics and, in a GNET article I wrote recently, from folk studies, for instance, looking at how various practices in folk-study behaviour and the telling of folk stories can then be used to identify early warning signals of extremism in school shooters. So there are plenty of ways we can pluck from other sources, if we understand them and can then interpret them and bring them into the generation and dissemination of intelligence.
Matthew Stibbe (21:28.058)
Someone I knew from university did a study around the use of poetry in Islamic fundamentalist literature. And it's something that we wouldn't necessarily think of at first, so maybe, yeah, there's a lot there. I would just put in a plug for the historians. Historians, I'm sure, have got something to teach us. But all of this stands in stark contrast to the conversations I'm often having on the show with people who are excited and/or anxious about AI, both as a sort of producer of content and as a way of sifting it and understanding it. And I got the sense that you had a certain sort of wariness about AI, which is great, and I'd love to hear the sort of anti-AI story.
Dr Joe Ondrak (22:21.077)
So for me, I think, as you've mentioned, there's a felt, intuitive element to my practice, and certainly with the colleagues I've worked with in the past and the people I've trained, I try to bring that out of them: that very human, almost individual-level expertise that everyone can sort of tap into, especially using OSINT. For me, as much as it's a practice, it's a tool that you move through to get to an answer. You can train anyone to use any myriad of collection techniques, really cool tools, this, that, the other. You can't really train people to ask the right kinds of questions. A person either has that or they don't. So for me, that human element, especially on the practitioner side, is absolutely key. When it comes to AI, I'm very, very wary of using it for analysis. I use it to vibe-code scrapers here and there, to do a little bit of assistance work, but I don't think it's got the ability to be attuned to everything that we've talked about previously, things around culture, things around narratology, in the way that a human can, because these are very human factors.
When it comes to the adversarial side of things, the production of AI content, I think we're in a much more difficult place, because it's very, very easy and very, very cheap to do that at scale. And especially when we're talking about influence operations, they can just overwhelm. They can flood the zone, as Steve Bannon once said, and enough people won't necessarily believe it, but will accept it. And I think that's another key thing that comes from the more cultural-intelligence side of things: there's a lot of panic around AI meaning that we can't tell fiction from reality, but instead I think the real risk is that people won't care.
Matthew Stibbe (24:26.362)
Yeah, I was thinking about this earlier when we were talking, because that phrase 'flood the zone' came to mind. There's an old essay, later a book, I think it's called On Bullshit, about the difference between liars and bullshitters: bullshitters don't care about the truth, and liars do.
Dr Joe Ondrak (24:47.614)
Yes.
Matthew Stibbe (24:52.986)
And I think there's this sort of new category of thing out there, which is actually actively trying to subvert or distort or overload the system so that people don't care whether it's true or not or it doesn't matter whether it's true or not or they don't trust whether it's true or not. And I think that has become a real challenge. And I think the volume of this AI stuff is going to be...
contributing to that and I think people are doing it deliberately.
Dr Joe Ondrak (25:24.794)
One of the more damning anecdotal examples I've got was around some of the extreme flooding and extreme weather events that have happened in the US. There was what seemed to be a coordinated network of accounts on Facebook with a whole bunch of images of firefighters carrying children and cats through flood water, all of it AI-generated. And what was really interesting was the conversations underneath. You had people recognising it quite easily as AI: 'You know this isn't real, right?' And one of the replies underneath still sticks out, you know, six or seven months after doing this investigation, which was, 'It doesn't matter, something like this happened.' And I think that's terrifying.
Matthew Stibbe (26:10.65)
It's alternative facts. Yes, yes, yes, yes. Well, listen, I mean, this is such an interesting conversation. I think we've got time for one more little rabbit hole. See, there's a narrative that's been taken over and subverted. Another little discussion, and then I think we need to wrap this up. You told me earlier about,
Dr Joe Ondrak (26:30.036)
Yes.
Matthew Stibbe (26:40.78)
adjusting the OSINT mindset to keep up with rapid shifts in meme coins. I was like, whoa. So let's start off by unpacking this idea of meme coins and what that might mean for OSINT practitioners.
Dr Joe Ondrak (26:57.236)
Yes, so this is a shift in internet culture again. So meme coins have come out of the post-cryptocurrency gold rush, which, with Bitcoin dipping and dipping further, I think we're well past now, and also the flash in the pan that was NFTs, which then became a subculture in its own right. And so what we have now, when things hit a critical mass online, when they become popular enough, is a subculture of meme coin generation. So they will peg the meme to a particular cryptocurrency on the blockchain as a way of preserving it, but also monetising it as well. So we're past the point of meme culture being a way to simply spread information or spread mindsets, ideologies or disinformation, depending on what the meme is. It's no longer just a vehicle for human change in that sense; it's also something that can be monetised and marketed. And I think it's a very, very overlooked element when it comes to online threats and online attack surfaces, and something that the open source intelligence community can really be focusing on, because it's fascinating more than anything else, sort of looping it back around to geeking out on things.
Matthew Stibbe (28:17.698)
Is it another source of insight and intelligence about what people are talking about and thinking about? Or does it provide other sorts of information that can be used by practitioners?
Dr Joe Ondrak (28:30.602)
All of the above, really. So you will see, with various different crypto exchanges, when a meme hits a certain critical mass, someone will produce a meme coin of it. And then what you can do is look at the engagement with it at that market level, as well as looking at social listening and traditional engagement with that meme. So a good example of this would be Amelia, which is now about a fortnight or so old. Amelia was a character from an educational tool produced by the Home Office to warn people of the dangers of extremist rhetoric. Amelia was a purple-haired goth girl wearing a choker who just so happened to espouse viewpoints against immigration and about English heritage. And immediately the online right took her and said, we're having this one. A quote, verbatim, from this is: 'If we can't make her a hate symbol in three days, we don't deserve to win.' So they knew immediately that they were going to turn her into a meme. They were going to take this character and light the blue touch paper. Now, there's been a fair bit written about her in various newspapers. AI videos have been generated about her. She's become an icon, not quite as ubiquitous as Pepe the Frog, but she's up there now. Very quickly, you also had the Amelia coin minted, pegged to, I think it was, the Solana blockchain. And what that tells you is who's engaging with it, who's behind that account in terms of promotion. So you can look at the Amelia coin account on X, where a lot of the promotion for Amelia was taking place, and, using traditional OSINT methods, look at shared accounts. You can then look at the engagement on the market: who's dumping money into it, where it's making money, whether it's losing money, what the market engagement is against the social engagement of the meme. So it can give you a really interesting picture of social discourse, specifically in these sorts of micro-communities and subcultures. But I think it's something that... it's a
Dr Joe Ondrak (30:52.539)
weirdly large area now. There are meme coin crossover events with characters who have just sort of existed on there. There's a whole Charlie Kirk meme coin cinematic universe now; that's going to take about three hours if I try to explain it. I'm sorry for any of your listeners who understood all of those words in one go.
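The market-versus-social comparison Joe describes could be prototyped as below. The daily figures are entirely invented; a real workflow would pull mention counts from social listening tooling and trading volume from an exchange or blockchain API. This just shows the shape of the comparison: tracking whether attention to a meme and money flowing into its coin move together.

```python
# Hypothetical sketch: comparing social engagement with market engagement
# for a meme coin over one week. All numbers are invented for illustration.
from statistics import mean

social_mentions = [120, 340, 900, 750, 400, 220, 150]  # daily mentions of the meme
trade_volume    = [5, 60, 400, 380, 150, 70, 40]       # daily token volume (thousands)

def pearson(xs, ys):
    """Pearson correlation between two equal-length daily series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# A correlation near 1 suggests market engagement is tracking the social
# life of the meme; a divergence would itself be an investigative lead.
print(round(pearson(social_mentions, trade_volume), 2))
```

A simple correlation like this is only a starting point; the interesting analytical work is in the residue, the days where money moves without matching social chatter, or vice versa.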
Matthew Stibbe (31:11.54)
Okay, I don't want to open that box at all.
Yeah, so the message out there is: if you're listening to this and you're not thinking about meme coins, it's another source of insight and data, and one that I hadn't really given any thought to. So, amazing, Joe, this has been the most fascinating conversation. Thank you so much for joining me today, and thank you for all your wonderful, eye-opening insights.
Dr Joe Ondrak (31:28.668)
Absolutely.
Dr Joe Ondrak (31:42.109)
Thank you very much for having me and I hope it's not been too maddening.
Matthew Stibbe (31:46.682)
I think I'm going to go and have a lie down now until it all settles. But on that bombshell, that brings this episode to a close. If you'd like to learn more about OSINT, Videris or Blackdot, please visit blackdotsolutions.com. Thank you for listening, and goodbye.
