Combat Criminal Activities with Advanced Entity Searching
by Katie Teitler, on Jun 4, 2020 9:15:38 AM
This article has been transferred from its original source.
In heightened states of stress and scarcity, some people may be compelled to do bad things. Under pressure, well-meaning people occasionally take shortcuts or make inadvertent mistakes they otherwise wouldn’t make during normal circumstances. Then there are outright bad actors; regardless of the situation, criminals will be criminals. The threat of bad actors has moved cyber security teams to warn about insider risk and insider threat, and thus implement technologies that can help identify risky, anomalous, and just plain old inappropriate behavior.
Many of these technologies have significant value, and we at TAG Cyber are fortunate to work with a few leading providers. Despite their efficacy, many of these tools are reactive—they identify and alert on bad behavior after it has happened, which means the organization must respond to an event. In some cases, patterns of behavior can indicate a threat before it happens, allowing the organization to be proactive—and that’s when these technologies really shine, especially when paired with access controls.
Insider risk or threat solutions are primarily aimed at protecting data and the systems that house it, and identifying potential financial fraud. Companies also rely on HR tools to vet incoming employees for criminal pasts or unsavory activities. These tools and processes are extremely important, but they're also often point-in-time. It’s hard to keep track of ongoing issues, and even more difficult to identify emerging issues.
For many companies, the aforementioned solutions offer just what is needed to keep problems at bay. But sometimes the scope of the threat is bigger: in law enforcement, for instance, investigators and analysts may need to examine large numbers of people to identify nefarious behavior or intent, or an executive of a major enterprise may be the subject of controversy and popular opinion. In those cases, vetting and triaging publicly available information in one shot, or based solely on internal controls, may not be enough.
Founded in 2012 by former CBP Chief of Staff Gary M. Shiffman, PhD, Giant Oak, Inc. was born from a Defense Advanced Research Projects Agency (DARPA) anti-human trafficking project. Today, Giant Oak is a behavior analytics platform that re-indexes publicly available information (PAI) based on behavioral characteristics. I recently spoke with Anna Wheeler, Giant Oak’s Director of Public Sector, who explained how the platform identifies evidence of threats.
“Unlike Google, which only indexes information on the surface web and where search results are driven by revenue, the deep web has far more searchable information—up to 95% of the web. And although manually searching the deep web is possible, the time it would take analysts is enormous, especially when searching through thousands of data points to try to find information and activity indicative of bad actor campaigns.”
The idea behind Giant Oak is behavioral science and the premise that bad actors display certain patterns of behavior. Tracing those patterns and individual activities using web-based results can reveal a great deal about intent. The search for behavioral indicators is commonplace in counterterrorism efforts, but the same ideas can be applied to fraud cases, personal protection, or any number of illicit activities. Consider the San Bernardino shooters of 2015: experts agree there were indications of an impending attack, but because intelligence agencies didn’t have a good way to aggregate all the data and search through the “breadcrumbs,” deadly signals were missed.
In less gruesome cases, Wheeler explained to me that most fraud cases also display indicators. “The people who commit fraud,” she said, “have generally committed fraud before. They know the rules, they know the systems being used, and they know how to circumvent those rules-based systems. Our platform can identify potential fraud perpetrators by finding information on individuals building false profiles, looking through their history of behavior, and searching commonly-used applications and documents to see if the claimant’s profile matches indexed information. It’s a deeper, automated way of searching through billions of data points to find relevant information. And by using machine learning-enabled entity resolution, analysts can quickly find context that highlights areas of concern.”
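The profile-matching step Wheeler describes can be sketched in miniature. This is a toy illustration, not GOST®’s actual entity-resolution model: the profile fields, the use of Python’s difflib string similarity, and the 0.8 cutoff are all assumptions made for this example.

```python
# Toy illustration of matching a claimant's profile against an indexed
# record. GOST's entity-resolution models are proprietary; the fields,
# the difflib similarity measure, and the 0.8 cutoff are invented here.
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity between two field values (0-1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def profile_match(claimant: dict, indexed: dict) -> float:
    """Average per-field similarity across the fields both profiles share."""
    shared = claimant.keys() & indexed.keys()
    return sum(field_similarity(claimant[f], indexed[f]) for f in shared) / len(shared)

claimant = {"name": "Jon A. Smith", "address": "42 Oak St"}
indexed  = {"name": "John A. Smith", "address": "42 Oak Street"}

score = profile_match(claimant, indexed)
print(score > 0.8)  # high similarity -> surface the pair for analyst review
```

A real system would resolve entities across many more signals than string similarity, but the shape of the task is the same: score candidate matches and surface only the high-confidence pairs for human review.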
From a recent Giant Oak white paper on fraud, written as COVID-related stimulus efforts were getting underway, “A federal government agency used GOST® to reveal that an Indian industrialist living in the UK had used forged transport receipts to embezzle millions from the Reserve Bank of India. Another federal government agency using GOST® learned that a Pakistani government official had embezzled 100M RS from a provincial health department.”
These insights about prior behavior can then be traced to specific individuals, allowing organizations to act before criminal activities are committed. Wheeler says that GOST® can be used to identify intent to disrupt elections, physically attack individuals, disseminate disinformation, and much more.
How it works
The Giant Oak Search Technology (GOST®) platform leverages behavioral science and machine learning (ML) to allow analysts to easily build search queries and sort through open-source deep web content to create domains on search subjects. Using one of nine multipurpose domain templates or custom templates, GOST® scours and indexes data and organizes it based on behavioral characteristics. Next, a risk score is applied along two dimensions: ranking, which measures how relevant the results are to the specific search, and reliability, the level of confidence in entity resolution for that case. Combined, these scores provide a prioritized queue for analysts, who can then conduct deeper analysis.
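The two-score prioritization described above can be sketched as follows. Giant Oak has not published GOST®’s scoring model, so the field names, the 0-to-1 scales, and the weighted-average combination below are assumptions for illustration only.

```python
# Hypothetical sketch of combining a ranking (relevance) score and a
# reliability (entity-resolution confidence) score into one triage
# priority. The weighting scheme is an assumption, not GOST's model.
from dataclasses import dataclass

@dataclass
class SearchResult:
    entity: str
    ranking: float      # relevance of results to the specific search, 0-1
    reliability: float  # confidence in entity resolution, 0-1

def priority(result: SearchResult, w_rank: float = 0.5) -> float:
    """Weighted average of the two scores; analysts review high values first."""
    return w_rank * result.ranking + (1 - w_rank) * result.reliability

results = [
    SearchResult("entity-a", ranking=0.92, reliability=0.40),
    SearchResult("entity-b", ranking=0.65, reliability=0.95),
    SearchResult("entity-c", ranking=0.30, reliability=0.20),
]

# Sort into a triage queue, highest priority first.
queue = sorted(results, key=priority, reverse=True)
for r in queue:
    print(r.entity, round(priority(r), 2))
```

Note that a highly relevant result with weak entity resolution (entity-a) can rank below a moderately relevant but well-resolved one (entity-b), which is the point of combining the two measures rather than using relevance alone.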
GOST® can be used to vet people, companies, and even de-anonymize chat handles. Analysts can build behavioral clustering and continuously monitor for anomalous or suspicious behavior. Using ML to train algorithms on the observed behaviors of bad actors, GOST® can target entities exhibiting those behaviors and surface cases requiring further review.
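The behavior-matching idea in the paragraph above, training on the observed behaviors of bad actors and then surfacing entities that exhibit similar patterns, can be illustrated with a toy nearest-centroid check. The feature set, the numbers, and the distance threshold are invented for this sketch; GOST®’s actual models are proprietary.

```python
# Toy sketch of flagging entities whose behavioral features sit close to
# known bad-actor patterns. The features (e.g. account churn, duplicate-
# profile similarity, off-hours activity), the centroid comparison, and
# the threshold are all invented for illustration.
import math

# Feature vectors observed from known bad actors (training examples).
KNOWN_BAD = [(0.9, 0.8, 0.7), (0.8, 0.9, 0.6), (0.95, 0.7, 0.8)]

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return tuple(sum(d) / len(vectors) for d in zip(*vectors))

BAD_CENTROID = centroid(KNOWN_BAD)

def flag(entity_features, threshold=0.35):
    """Flag an entity for analyst review if its behavior lies within
    `threshold` (Euclidean distance) of the bad-actor centroid."""
    return math.dist(entity_features, BAD_CENTROID) <= threshold

print(flag((0.85, 0.8, 0.7)))  # resembles the bad-actor pattern
print(flag((0.1, 0.2, 0.1)))   # ordinary behavior, not flagged
```

A production system would use richer models than a single centroid, but the workflow matches the description: learn what bad-actor behavior looks like, then automatically surface only the entities that warrant a deeper look.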
All of this can be accomplished in seconds, even when searching thousands of entities, and increases the efficacy of targeting. Further, Wheeler told me that the platform allows for replicability: analysts can build one search query that is then reproduced across departments and used for continuous auditing.
If you’re wondering, as I did, how GOST® gets around the sticky wicket of privacy, Wheeler tells me that all searches are non-attributable, and results are encrypted in transit and at rest in the Giant Oak cloud. Unlike many tools, Giant Oak pulls PAI without storing unique personally identifiable information (PII) long-term. “GOST® allows for continuous vetting on a high volume of cases,” Wheeler explained, “and gives companies actionable information they can use to prevent illegal or undesirable behavior.”