The State of OSINT in 2025, or When Do I Lose My Job to AI?
Siri, Tell Me All the Times AI Has Screwed Up Background Checks
It’s been a while since I wrote a good blog post. Scanning my brain for topics, I started gravitating toward the idea of a “state of OSINT,” or what’s new for 2025. OSINT does not change much year to year, and I was not arriving at anything bloggable. The fact that the obvious answer did not come to me right away perhaps gives away the story. Let’s see.
A year or so ago, I was interviewed by Kelly Paxton for her fraud podcast. In talking about research and fraud, AI, of course, came up.
I pooh-poohed it. My motto since I opened my shop in 1999 has been “Providing Vital Information to Manage Risks.” I can say that even before then, my motto could have been “I pooh-pooh artificial intelligence.” With Kelly, I mentioned that AI had been around in the background/open-source/due diligence field long before anyone heard of ChatGPT. People have been looking for a way to “solve” background checks for a long time.
Perhaps it’s Batman’s fault. In his Batcave, next to the Diversionary Batphone Lines and Batmobile Tracking Map, sat the Batcomputer. It knew the answer to any Bat-question, and you didn’t even need any search syntax to enter the query. Companies have long wanted the same instant gratification for their myriad background needs. But what worked great for the cowled Bruce Wayne did not work well in the corporate cubicle. Early AI systems failed on two levels. First, they struggled with broad parameters: either a name was too common or the dataset to be searched was too large. Second, they could not do what their users needed them to do: spit out clear, unambiguous answers. Instead, AI-generated results were long and confusing and required more time to analyze than if someone had just queried their LexisNexis database.
That was then.
This is 2025. We cannot pooh-pooh forever! I will tell you why I think that is, but you can stop here if you’re bored.
I use AI. Really, most background researchers have been using forms of AI for a while with watchlist products and adverse news screens. But I’m really using actual AI. Because I’m cheap, I’m using the Microsoft version called Copilot. Along with all the general poking around open sources I do, I will ask Copilot certain questions, like, is the target company reputable, or, does its owner pay bribes? I actually like to amuse myself with queries like, when did Johnny stop getting in trouble, and is it true that Acme is tied to the mob? Copilot will earnestly answer no matter how crazy the question.
I do find stuff, and in one case it performed exceptionally well, identifying information of interest about bribery, not at the target company but in the same industry and country, and that helped. I am going to continue to use AI in 2025.
I can tell you, as we get into 2025, that AI still fails at background research. Oddly enough, I know this from a recent experience, but even before that happened, I had started writing this post. To get to the answer quickly, I asked my Copilot.
“Where has AI screwed up background checks”
Copilot admits AI has not been perfect, noting, “AI [is] not without its flaws. Here are a few areas where AI has encountered issues.”
Don’t you love the lack of accountability? Encountered issues. What a euphemism. The issues cited were all general, structural ones, like how AI can be biased. I wanted more specifics, so I had to rephrase my question: “Give me examples of AI getting background checks wrong.” Again, Copilot was not willing to fess up to very much, but it did admit to a couple of good ones:
- Facial Recognition Bias: AI-powered facial recognition technology has been known to misidentify individuals, especially those from minority groups.
- False Positives: AI algorithms can sometimes flag individuals incorrectly. There have been cases where AI systems mistakenly identified people as having criminal records due to errors in data processing.
In what turned out to prove my point, two minutes of general Google futzing found an article, “12 famous AI disasters,” from CIO.com. In other words, in 2025, I’m still better than AI. The disasters cited included:
- In an April 2024 post on X, Grok, the AI chatbot from Elon Musk’s xAI, falsely accused NBA star Klay Thompson of throwing bricks through windows of multiple houses in Sacramento, CA.
- Attorney Steven Schwartz found himself in hot water with US District Judge Kevin Castel in 2023 after using ChatGPT to research precedents in a suit against Colombian airline Avianca. Schwartz used the OpenAI gen AI chatbot to find prior cases, but at least six of the cases submitted in his brief didn’t exist.
- Zillow said a bad AI algorithm led it to unintentionally purchase homes at higher prices than its current estimates of future selling prices, resulting in a $304 million inventory write-down in Q3 2021.
Obviously, these are among the most famous examples. I have my own. Just happened.
The company I was researching made a partial disclosure, noting that its former CEO had been swept up in a fraud scheme but was eventually acquitted. I was able to track down the CEO and what had happened, and the media accounts I found supported the disclosure. That’s when I asked my friend, Copilot. It answered definitively that there had been no fraud allegations involving the former CEO. This highlights a primary concern with AI: how will you know if it’s wrong?
Luckily, Google’s AI provides some more reasons:
AI can miss key aspects of due diligence when it comes to complex contextual understanding, nuanced interpretations of information, cultural nuances, and situations requiring legal or industry-specific expertise.
Key areas where AI might mess up due diligence include complex relationships and context; qualitative factors; legal and regulatory nuances; unforeseen risks; and other factors.
Even though it’s not always apparent when AI inquiries produce incorrect info, I still ask. Despite that flaw, it has earned a spot as another tool in my toolbox. The idea that AI can produce accurate and reliable background research, however, remains far off (at least in my opinion). And that’s the current state of OSINT, 2025 edition.