This article was originally published by Technical.ly.
A Florida man was arrested last week for using his Paycheck Protection Program (PPP) loan to buy a Lamborghini, vacations at high-end Miami resorts and designer clothes. He falsified the number and salaries of his employees to receive $3.9 million in loans — and his fraud was only uncovered after his new Lamborghini was involved in a hit-and-run accident.
I believe that books will be written about the summer of 2020 and the historic levels of fraud taking place. We cannot see it in the moment, but we sit in the middle of a maelstrom of illicitness made possible by two things: trillions of dollars in stimulus flowing from governments, and a marketplace enabling specialization in cyber-criminal activity. A panelist at the Berkeley Center for Law and Business’ Fraud Fest in June summed up the situation: The pandemic is the perfect storm for fraud. We’ve seen massive fraud and corruption before, and while we cannot change the past, we can do things today so that future generations cringe a little less when they read about this time.
PPP loans are meant to protect employees and cover other critical expenses, like rent, during the pandemic; but we now face billions of dollars in fraudulent taxpayer-funded loans. Earlier this month, Maryland uncovered a massive coronavirus-related fraud scheme involving 47,500 fraudulent claims totaling over $501 million. That’s just the tip of the iceberg. A federal watchdog reported last week that it has identified $250 million in COVID-19 loan funds given to “potentially ineligible recipients,” according to The Washington Post. The Secret Service has estimated that $30 billion in stimulus funds will be stolen. And according to Department of Labor Inspector General Scott Dahl, payments for fraudulent unemployment benefits could cost up to $26 billion this year.
"Government agencies and financial institutions need screening technologies that scale rapidly, so that they can process almost all claims quickly while doing due diligence on those that pose a possible risk."
Here’s the challenge: We must get money into the hands of people with legitimate needs as quickly as possible, but we must also take steps to deter illegal activity. The fraud-deterrence goal seems to conflict with the speed goal. The speed and scale of the required stimulus (nearly $3 trillion so far) create massive fraud opportunities for experienced criminals and novices alike. Financial crimes expert Jim Richards shares some hilarious arrests on his LinkedIn page: funny in the “I cannot believe that worked” sense, and sad at the same time.
As for the important ideas for the history books: So far, government agencies and financial institutions have focused on getting funds out quickly, seemingly unaware of technology’s ability to enable high-speed, large-scale screening that deters fraud. It is not too late to make a positive difference.
In 2005 and early 2006, 16% of the $6.3 billion in relief distributed to victims of Hurricane Katrina and Hurricane Rita was spent improperly. In contrast, of the nearly 200,000 prime and subcontracts awarded under the American Recovery and Reinvestment Act (ARRA) in 2009, only 0.2% led to “consequential investigations” of fraud. The answer key for this test has already been published: to deter crime, screen the applicants. ARRA used screening. ARRA also had the luxury of time, sending out funds over a two-year period. Like today, however, the post-Katrina response required quick action, so minimal screening was done.
Today is not 2005. Machine learning- and artificial intelligence-enabled screening solutions sit in the cloud, ready for CARES Act use, having been built by innovative technology companies. Shame on us for not using proven technology to screen as quickly as possible. For our financial and government institutions to get money out fast to those with legitimate needs, screening must occur.
It is much more difficult for bad actors to evade fraud prevention systems driven by AI/ML technologies because these systems are built to improve with time, by learning and readjusting continuously. We need ML-enabled entity resolution to disrupt fraud at its planning stage, and we need ML-enabled behavioral vetting using publicly available information to disrupt fraud at the later launching and cashing stages. This means examining whether an applicant’s profile and application match their publicly observable behavior. Are they who they say they are? Is their business what they say it is? Does their past behavior raise concerns?
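To make the idea concrete, here is a minimal sketch of that kind of profile-to-behavior check. The article does not describe any vendor’s implementation, so everything here is an assumption for illustration: the field names (`claimed_employees`, `observed_employees`, `registration_age_days`) are hypothetical, and Python’s standard-library `difflib` stands in for real entity-resolution tooling:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and drop punctuation so trivially different spellings match."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity in [0, 1] between two business or applicant names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def vet_application(application: dict, public_record: dict,
                    name_threshold: float = 0.85) -> list:
    """Return red flags where the application diverges from publicly
    observable behavior. Field names and rules are hypothetical."""
    flags = []
    if name_similarity(application["business_name"],
                       public_record["registered_name"]) < name_threshold:
        flags.append("business name does not match public registration")
    # Claimed headcount wildly above what is publicly observable
    if application["claimed_employees"] > 3 * max(public_record["observed_employees"], 1):
        flags.append("claimed headcount far exceeds observable headcount")
    # A business registered days before applying warrants a closer look
    if public_record.get("registration_age_days", 0) < 90:
        flags.append("business registered only recently")
    return flags
```

A real system would learn these thresholds from labeled outcomes rather than hard-code them; the point is that each check asks the article’s questions directly: are they who they say they are, and does their behavior match their application?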
Recently developed technology (such as my own company’s Giant Oak Search Technology) can highlight flimsy or fraudulent histories and unrealistic customer profiles that have gone undetected under the current approach. It does this by searching the deep web and integrating the results into customizable information domains. In defense of the large institutions, these capabilities are new: until recently, no technology could perform this kind of search quickly and with only a small team needed to clear false alarms.
Government agencies and financial institutions need screening technologies that scale rapidly, so that they can process almost all claims quickly while doing due diligence on those that pose a possible risk. In reality, almost every person seeking assistance needs the help, but a small fraction seek to commit fraud. Investing in proactive screening techniques can help agencies and financial institutions identify fraudulent applicants and disburse funds faster.