Coronavirus scams are proliferating. Smart tech can stop them.
by Gary M. Shiffman, PhD, on May 15, 2020 11:48:41 AM
Many of the systems that government agencies and financial institutions have in place are not able to effectively vet or verify the identity and claims of applicants. If less than 1% of money launderers were caught before this crisis, imagine how pervasive financial crime will be after it.
From a technological standpoint, the Small Business Administration and the loan departments of banks are nowhere near ready to screen the new coronavirus aid applications for fraud. Like many current banking systems, most government agencies produce inefficient results because they rely on credit checks and bypass even simple Google searches to vet individuals and businesses.
The risk of fraud almost always increases during times of large-scale emergency spending. By one government estimate, 16% of the $6.3 billion in relief distributed to victims of hurricanes Katrina and Rita was spent improperly.
The difference between 2005 and today, however, is that for the first time in history, artificial intelligence and machine learning (ML) technology exists to effectively screen upfront for fraud, corruption and abuse. But such technology needs to get into the hands of the people on the frontlines countering financial crime.
Big data has already helped folks shop more efficiently, connect on social media, and find new shows aligned to their preferences. It’s time to apply the same capabilities to screening for fraud.
Because of the scale of the relief packages and the amount of crime occurring as a result, the deterrence effect of investigation and punishment is nearly nonexistent right now. To limit fraud and crime, criminals must be stopped upfront, not caught years after the fact.
Screening needs to occur during the process of approving people and companies for COVID-19 relief dollars, and it needs to be done quickly and efficiently because people urgently need funding. There needs to be proactive prevention, rather than reactive response.
It is much harder for criminals to evade fraud prevention systems driven by AI and ML technologies because the systems are built to continuously learn, readjust to new data and improve their efficacy.
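To illustrate what "continuously learn and readjust to new data" can mean in practice, here is a minimal sketch of an online-learning fraud classifier. The feature names, weights, and perceptron-style update rule are illustrative assumptions for this article, not a description of any real screening system.

```python
# Minimal sketch of an online (continuously learning) fraud scorer.
# Feature names and thresholds are illustrative assumptions only.

def predict(weights, features):
    """Score an applicant; a positive score means 'flag as suspicious'."""
    return sum(w * x for w, x in zip(weights, features))

def update(weights, features, label, lr=0.1):
    """Perceptron-style update: readjust weights whenever the model errs.
    label is +1 (confirmed fraud) or -1 (legitimate applicant)."""
    if label * predict(weights, features) <= 0:  # misclassified
        return [w + lr * label * x for w, x in zip(weights, features)]
    return weights  # correct prediction: leave weights unchanged

# Hypothetical features: [headcount_mismatch, brand_new_domain, prior_claims]
weights = [0.0, 0.0, 0.0]
stream = [
    ([1.0, 1.0, 0.0], +1),  # mismatched headcount, new domain: fraud
    ([0.0, 0.0, 0.0], -1),  # clean profile: legitimate
    ([1.0, 0.0, 1.0], +1),  # mismatch plus prior claims: fraud
]
for features, label in stream:
    weights = update(weights, features, label)
```

The key property is in the loop: each new confirmed case nudges the weights, so an evasion tactic that worked yesterday stops working once a few examples of it arrive.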
New AI and ML technology can screen a population’s entire online and public presence in a matter of seconds, identifying false narratives that signal a need for further investigation. For example, if someone claims to run a 20-person company but has a web presence indicating a one-person company, that’s a red flag.
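The headcount example above can be sketched as a simple rule. The function name, tolerance value, and inputs are hypothetical; a real system would combine many such signals rather than rely on one.

```python
# Hypothetical red-flag check: compare an applicant's claimed company
# size against the headcount suggested by their public web presence.

def headcount_red_flag(claimed_employees, web_presence_employees, tolerance=0.5):
    """Flag applicants whose claimed headcount far exceeds what their
    public footprint supports (e.g. claims 20 staff, site lists 1)."""
    if claimed_employees <= 0:
        return True  # nonsensical claim: flag for manual review
    shortfall = (claimed_employees - web_presence_employees) / claimed_employees
    return shortfall > tolerance

headcount_red_flag(20, 1)  # claims 20, web presence shows 1 -> flagged
headcount_red_flag(5, 4)   # minor discrepancy -> not flagged
```

A flag here would not be proof of fraud, only a signal, as the article says, that further investigation is needed.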
AI and ML technology can also identify a person with a history of past lawsuits or tort claims, most of which are available in local papers or on public websites.
The best AI and ML technologies are built on good computer science, good data and a strong understanding of human behavior. This is because AI and ML work through pattern recognition. Scammers and financial criminals are human, and because they are human, they behave in predictable ways that can be identified through technologies like AI and ML.
If banks want to outthink and outperform scammers, hackers, money launderers and other criminals, they need to think about the science behind the systems used — both the computer science and the behavioral science. Doing so will help save the American people and their government millions of dollars.