Dr. Gary Shiffman Named One of Washington Exec's Top AI Execs to Watch

by Meghan Rudorfer, on Nov 21, 2018 2:14:28 PM

October 16, 2018 — (WashingtonExec) — Gary Shiffman, founder and CEO of Giant Oak, is a behavioral economist and Navy veteran who has served in policy roles at the Defense Department. A former chief of staff for Customs and Border Protection, he helped revamp the border enforcement process. Today, his company offers technology designed to help combat illicit behaviors such as drug and human trafficking, terrorism and other types of corruption. He is also an adjunct professor in Georgetown University’s Security Studies program.

Why Watch: Search technologies abound, but Shiffman’s company offers a very targeted system that remains important in today’s security environment. It can, for example, help financial institutions avoid loaning money for illicit purposes, offer behavior-based analysis on terrorist activity, and uncover any number of insider threats.

“We are uniquely bringing together behavioral science with technology, with artificial intelligence and machine learning into the domain of illicitness,” Shiffman said. “I don’t know anybody else doing that.”

This summer, the company received a $10 million investment through a partnership with Edison Partners to further develop and market its core platform, Giant Oak Search Technology. GOST scans the open, deep and dark web to create dossiers on individuals and groups and help leaders make mission-critical decisions within the public and private sectors.

“GOST understands the specific information that the human is looking for, goes out into this massive and ever-changing universe of publicly available data, and retrieves what the human wants,” Shiffman said.

Through a feedback loop, the human can then direct GOST toward more specific results, thus driving machine learning. So, how do you account for human error in the process? Shiffman said the answer lies in training the technology to meet the user where he or she is.
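For readers curious what a feedback loop like this can look like in practice, here is a minimal, purely illustrative sketch of relevance feedback: an analyst's relevant/irrelevant judgments adjust term weights, which change the next round of ranking. Everything in it (the bag-of-words features, the perceptron-style update, the seed terms) is a hypothetical stand-in for the general idea, not a description of how GOST itself works.

```python
# Illustrative only: a minimal human-in-the-loop relevance-feedback sketch.
# Nothing here reflects GOST's actual implementation; names and logic are hypothetical.
from collections import Counter

def featurize(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def score(weights, features):
    """Dot product of learned term weights and document term counts."""
    return sum(weights.get(term, 0.0) * count for term, count in features.items())

def update(weights, features, relevant, lr=0.5):
    """Perceptron-style update: boost terms from relevant hits, penalize the rest."""
    sign = 1.0 if relevant else -1.0
    for term, count in features.items():
        weights[term] = weights.get(term, 0.0) + sign * lr * count
    return weights

# One pass of the loop: rank, collect analyst feedback, re-rank.
docs = [
    "shell company wire transfers to sanctioned entity",
    "local bakery announces new sourdough recipe",
    "front company linked to trafficking network",
]
weights = {"sanctioned": 1.0, "trafficking": 1.0}  # hypothetical seed terms
ranked = sorted(docs, key=lambda d: score(weights, featurize(d)), reverse=True)

# The analyst marks the top hit relevant and the bakery story irrelevant;
# the updated weights push similar documents up on the next pass.
weights = update(weights, featurize(ranked[0]), relevant=True)
weights = update(weights, featurize(docs[1]), relevant=False)
reranked = sorted(docs, key=lambda d: score(weights, featurize(d)), reverse=True)
print(reranked)
```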

“The way to prepare the human to work with an AI is not to expect the human to change, but to train the AI to understand the context in which the human interacts,” he said. “If you’re doing it well, AI systems are easy to use; they don’t require extensive training; and they’re not inherently error-prone.”

AI, he said, can theoretically be applied in almost any context as long as it is used properly.

“The example of AI gone wrong is in every science fiction movie you’ve ever seen, starting with ‘2001: A Space Odyssey,’ ‘The Matrix,’ ‘The Terminator’ and the list goes on and on,” Shiffman said. “There’s this very powerful meme of humans being afraid of machines, and the machines taking over. So, I think we have to be very careful as we move into this new era of artificial intelligence — which is really cool and exciting — that we make sure we design the systems where the humans are in control and keep the humans in the loop.”

Topics: Press
