Specialization in AI for Fraud

September 25, 2023

Intro

Every operating company is affected by fraud, which can be difficult to plan for in advance and expensive to resolve in real time. At a small scale, these threats can be handled by a team of analysts who manually identify, size, and mitigate threats through hand-built heuristics.

However, the volume of transactions, customers, and threats at many high-growth and mature companies renders this manual, analyst-based process unscalable, often leading to unnecessary fraud losses. For these companies, a platform of AI-powered flagging and actions can significantly reduce both fraud costs and analyst staffing.

At a glance, AI for fraud may look like other use cases. However, the adversarial, class-imbalanced, and dynamic nature of fraud threats often requires a more specialized AI toolkit. Below, we’ll walk through industry-proven techniques to tackle AI for fraud.

What’s different about AI for fraud?

AI for fraud is often adversarial, with fraudsters actively searching for uncovered edge cases and new thresholds as soon as a new mitigation strategy is launched. These relatively rare and purposefully obscured fraud attacks can cause issues for standard AI techniques.

Implement account restrictions for accounts from low-trust email domains? Fraudsters will likely shift to purchasing compromised Gmail or corporate accounts. Similarly, invoice restrictions for newly created accounts may push fraudsters to age dormant accounts before attacking. In isolation, each of these issues can be quick to identify and mitigate, but they are only a handful of the seemingly endless list of novel attacks that will crop up.

Data & Imbalance

When training a fraud model, it's important to create a detailed, accurate, and up-to-date representation of transactions and historical user behavior.

Best practices include:

  • Planning for class imbalance: If 50% of your transactions are fraudulent, the problem will resolve itself: you'll be out of business soon. More commonly, 1% or 0.1% of observations will be affected by fraud. In this case, it can be helpful to downsample the majority (negative) class, or upsample the minority (positive) class through techniques such as the Synthetic Minority Oversampling Technique (SMOTE). In either case, make sure to evaluate precision and other metrics on a natural (unsampled) data set (the first sketch after this list shows this pattern).
  • Fingerprinting through velocity features: The best predictor of future behavior is generally past behavior. While other domains can also benefit from velocity features (such as last 30-day total spend, or number of email changes in the past year), they are close to a required element of production-grade fraud solutions. Pro tips here include keeping multiple lookback windows (e.g. last 7, 30, and 90 days), normalizing (e.g. last 7-day spend and last 7-day spend per transaction), and building a solution that minimizes training/serving skew (a velocity-feature sketch also follows this list).
  • Planning for cold starts: In many of the highest-risk cases, historical user behavior is not available. Fraudsters will often create new accounts to extract value from new-user sign-ups or other growth mechanisms. In these cases, it can be helpful to include as much information as possible to separate procedurally generated accounts from real users (e.g. signup duration in seconds, email domain trust scores, and signals from external identity solutions).
  • Defining a robust label: Very rarely will a data set arrive with a clean fraud / not-fraud label. Instead, you will likely have to use a combination of human-generated labels and synthetic labels. Human labels require well-defined labeling procedures and can be time- and cost-intensive. Synthetic labels generally require a deep understanding of the business and the data-generating process, and may miss fraud outside of your positive class.
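
As a concrete illustration of the resampling pattern above, the sketch below oversamples the minority class with imbalanced-learn's SMOTE on the training split only, then evaluates on an untouched, naturally imbalanced holdout. The feature matrix X and 0/1 label vector y are hypothetical stand-ins for your prepared data; treat this as a starting point, not a production pipeline.

```python
# Minimal sketch of SMOTE + evaluation on a natural holdout.
# Assumes scikit-learn and imbalanced-learn are installed, and that
# X (features) and y (0/1 fraud labels) are already built.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Hold out a natural (unsampled) test set BEFORE any resampling,
# so metrics reflect the true class imbalance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Oversample the minority (fraud) class on the training split only.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_res, y_res)

# Precision and recall are computed against the natural distribution.
preds = model.predict(X_test)
print(f"precision={precision_score(y_test, preds):.3f}",
      f"recall={recall_score(y_test, preds):.3f}")
```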
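
Velocity features are often computed as per-user rolling aggregates over several lookback windows. Below is a rough pandas sketch assuming a hypothetical transactions DataFrame with user_id, timestamp, and amount columns; a production system would typically serve the same definitions from a feature store to limit training/serving skew.

```python
import pandas as pd

def add_velocity_features(df: pd.DataFrame) -> pd.DataFrame:
    """Add per-user rolling spend features over several lookback windows.
    Assumes columns: user_id, timestamp (datetime), amount."""
    df = df.sort_values("timestamp").set_index("timestamp")
    grouped = df.groupby("user_id")["amount"]
    for days in (7, 30, 90):
        window = f"{days}D"
        # Total spend in the trailing window (includes the current row;
        # shift if you need strict point-in-time correctness).
        df[f"spend_{days}d"] = grouped.transform(lambda s: s.rolling(window).sum())
        # Normalized variant: spend per transaction in the same window.
        txn_count = grouped.transform(lambda s: s.rolling(window).count())
        df[f"spend_per_txn_{days}d"] = df[f"spend_{days}d"] / txn_count
    return df.reset_index()

# Usage on a hypothetical transactions DataFrame:
# transactions = add_velocity_features(transactions)
```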

Model training

Model training is core to the job of most Machine Learning Engineers and AI specialists. It can be an iterative and time-intensive process even for well-defined projects; for fraud projects with loosely defined goals and weak signals, it can become never-ending.

A few recommendations to quickly get to a release candidate:

  • Choosing an algorithm: Many modern algorithms can easily handle thousands of features, time series inputs, and class imbalance. If you're not sure where to start, try XGBoost or your favorite deep learning package. Some older families (e.g. linear models, SVMs) struggle with these requirements and are generally best skipped.
  • Hyperparameter tuning: Common approaches to increasing model fidelity include standard hyperparameter grid searches and allowing a relatively high number of free parameters (to accurately model a large feature set and high class imbalance). Fraud models tend to be sensitive to changes in the training window, the number of features, and the class imbalance; whenever any of these change, it can be helpful to re-run hyperparameter tuning (see the grid-search sketch after this list).
  • Going unsupervised: For some applications, anomaly detection alone can be sufficient to mitigate fraud losses. For many others, pairing a supervised model with an unsupervised approach can help holistically tackle new and novel attacks (an anomaly detection sketch also follows this list).
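
As a starting point for the first two bullets, the sketch below wraps an XGBoost classifier in a standard scikit-learn grid search, using scale_pos_weight to offset class imbalance. The grid values and the X_train/y_train variables are illustrative assumptions, not tuned recommendations.

```python
# Minimal sketch: XGBoost + grid search. Assumes xgboost and
# scikit-learn are installed; X_train/y_train are hypothetical splits.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# scale_pos_weight offsets class imbalance: ratio of negatives to positives.
imbalance_ratio = (y_train == 0).sum() / (y_train == 1).sum()

param_grid = {
    "max_depth": [4, 6, 8],
    "n_estimators": [200, 500],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    XGBClassifier(scale_pos_weight=imbalance_ratio, eval_metric="logloss"),
    param_grid,
    scoring="average_precision",  # PR-AUC is more informative than accuracy here
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```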
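
On the unsupervised side, one common choice (an assumption here, not a prescription) is scikit-learn's IsolationForest. The sketch below scores held-out transactions; the contamination value should be tuned toward your observed fraud base rate, and the resulting scores can also be fed into the supervised model as an extra feature.

```python
from sklearn.ensemble import IsolationForest

# Fit on broad (mostly benign) traffic; the forest isolates outliers.
# contamination ~ expected anomaly rate (illustrative value here).
iso = IsolationForest(contamination=0.01, random_state=42)
iso.fit(X_train)

anomaly_scores = iso.score_samples(X_test)  # lower = more anomalous
flags = iso.predict(X_test)                 # -1 = anomaly, 1 = normal
```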

Maintaining a model

Once a fraud model is in production, fraudster behavior is guaranteed to change. This provides some job security, while also making a production fraud model a little more involved to maintain than models in other domains.

Once your model is in production, you may want to focus some time on:

  • Frequent retrains: Fraudsters will often change their behavior and focus on the regions just above and below model thresholds to keep extracting value from your company. To mitigate these risks, retrain often and include as much recent data as possible. Particularly when launching a novel solution, it may be helpful to plan a minor iteration after a 'burn-in' period, once fraudsters (and non-fraudulent users) have had time to adjust their behavior.
  • Feature store: The velocity features mentioned earlier may be key to your model, but they generally require a complex, specialized pipeline to keep running. Monitoring your feature store to minimize training/serving skew can help avoid outages and headaches; a simple drift check is sketched after this list.
  • Manual reviews: Whether or not you used human labels for your model, it's important to set up recurring reviews of flagged and actioned transactions. These reviews can help identify new attack vectors, model drift, and additional signals to include in your next update.
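
One lightweight way to monitor training/serving skew is to compare each feature's live distribution against its training distribution with a Population Stability Index (PSI). The function below is a generic sketch rather than any particular feature store's API; the 0.2 threshold is a common rule of thumb, and the train_df/serving_df frames and feature names are hypothetical.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index of one feature: training (expected)
    vs. serving (observed) sample. Larger values mean more drift."""
    # Bin edges come from the training distribution; serving values
    # outside that range simply fall out of the histogram.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    observed_counts, _ = np.histogram(observed, bins=edges)
    expected_pct = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    observed_pct = np.clip(observed_counts / observed_counts.sum(), 1e-6, None)
    return float(np.sum((observed_pct - expected_pct)
                        * np.log(observed_pct / expected_pct)))

# Rule of thumb: PSI > 0.2 suggests meaningful drift worth investigating.
for feature in ["spend_7d", "spend_30d"]:  # illustrative feature names
    score = psi(train_df[feature].to_numpy(), serving_df[feature].to_numpy())
    if score > 0.2:
        print(f"ALERT: {feature} drifted (PSI={score:.2f})")
```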

Next steps

Ready to embark? In addition to the checklist above, it's worth considering how your infrastructure, business, analytics, and ML stakeholders can support your journey.

Synergise is also available to support you through the process, with both advising and turnkey solutions available. Contact us here to chat about AI for Fraud for your business. For more information on everything AI implementation, check out our growing guide here.

Written by one of our AI Advisors, Brendan 'B' Herger

For more content by Brendan, check out his blog: https://herger.co/blog
