Depending on your goals and needs, screening for risk can take many different forms. For example, a bank needs to screen for general illicit activity, but it might also need the specific ability to look for money laundering, human trafficking, corruption, or sanctions evasion. A government might need to screen for terrorism, but it might also need to screen for something completely different: drug trafficking, for example, or human rights abuses.
GOST (Giant Oak Search Technology) has always used its artificial intelligence and machine learning capabilities to screen for different derogatory behaviors. Grounded in behavioral science, previous versions of GOST identified negative news and derogatory information through models that relied on associations with crime and other derogatory concepts.
The richer feature space of GOST allows users to investigate finer, more precise signals such as confidence, sentiment, movement, and new information. This state-of-the-art approach is less black-and-white, allowing the user to target more behaviors for definition and training. With the ability to customize models, you and your team get to define what derogatory means to you, and the model will reflect your specific action criteria, drawing features from and training against a wider range of data. This increase in model control means that you can be as responsive as needed to meet your risk challenges.
To learn more about GOST, schedule a demo today.