Andy Canning, Technology Principal

Our Thinking Fri 1st October, 2021

Four reasons why event-driven architecture is crucial in fraud and risk analysis.

“It takes 20 years to build a reputation, and a few minutes of cyber-incident to ruin it.” These are the words of security specialist Stéphane Nappo, and they perhaps ring truer now than ever before.

Whether you’re guarding against phishing and credential stuffing or enterprise-level data breaches, security is critical for every organisation with digital infrastructure. For clear and compelling reasons.

The knock-on effects of these incidents are well documented and devastating; loss of trust, loss of customers, loss of business, loss of revenue.

The scariest proposition? Cybercrime is constantly evolving; it has to in order to survive and circumvent the massive global effort associated with its elimination.

So, the question ultimately becomes this: are you willing to protect your most crucial assets with yesterday’s defence mechanisms?

When—as Nappo says—it only takes a few minutes of cyber-incident to ruin an organisation, why not invest in technology that pinpoints fraudulent activity in real time, within seconds? Or in technology that can predict and prevent the activity before it even happens?

Here’s why event-driven architecture is crucial in fraud and risk analysis.

1. Event-driven architecture is the only way to conduct risk analysis and fraud detection in real-time.

If you have a real-time need to identify and respond to customer activity, event-driven architecture is the only way forward.

Event-driven architectures empower you to respond to triggers or user behaviours within seconds, if not microseconds.

Unlike periodic batched processing—where malicious activity is only discovered once the damage has been done—event streaming provides clear visibility of potential issues as they unfold in the moment. And, over time, the ability to predict problems before they even happen (see section 3, below).

You can read more about event-driven architecture here. For those unfamiliar, ‘events’ represent point-in-time records of granular user activities. Common examples of events might be:

  • A user logs into their account
  • A user changes their password or address details
  • A user changes their client IP address
  • A user applies for a loan
  • A user initiates a payment
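In practice, an event like those above is typically a small, immutable record with a timestamp and a payload. A minimal sketch in Python (the field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    """An immutable, point-in-time record of a single user action."""
    event_type: str   # e.g. "login", "password_change", "loan_application"
    user_id: str
    client_ip: str
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A user logging in would be captured as:
login = Event(event_type="login", user_id="user-42", client_ip="203.0.113.7")
```

Because each event is frozen and timestamped, the stream becomes an append-only history that downstream services can replay or aggregate.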

In event-driven architectures, microservices are configured to ‘listen’ for specific events—or combinations of events—and respond instantly to instigate various business processes or surface information to business domains.

The use cases and potential benefits become apparent fairly quickly. Let’s consider some hypothetical situations:

  • Multiple user login attempts are detected from a single client IP address
  • A user changes their account information and then rapidly attempts to make a large payment or withdrawal
  • A single IP address is associated with 15+ user accounts

For banks, tax authorities and fintechs, real-time visibility of each of these scenarios is incredibly valuable.

While these examples may not be concrete indicators of fraud, each of these use cases likely warrants extra monitoring with a view to risk mitigation.

Event-driven architecture gives you real-time visibility of customer behaviours and triggers, and the capacity to dynamically create the most appropriate user-journeys based on their behaviours.
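As a sketch of how a listening service might flag the first of those scenarios—multiple login attempts from a single client IP—here is a simple sliding-window rule in Python. The threshold and window size are illustrative, not recommendations:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 5  # illustrative: 5 attempts from one IP inside the window

attempts_by_ip: dict[str, deque] = defaultdict(deque)

def on_login_attempt(client_ip: str, occurred_at: datetime) -> bool:
    """Consume a login-attempt event; return True if the IP should be flagged."""
    window = attempts_by_ip[client_ip]
    window.append(occurred_at)
    # Evict attempts that have aged out of the sliding window.
    while window and occurred_at - window[0] > WINDOW:
        window.popleft()
    return len(window) >= THRESHOLD

# Six rapid attempts from one IP trip the rule on the fifth attempt.
now = datetime(2021, 10, 1, 12, 0)
flags = [on_login_attempt("203.0.113.7", now + timedelta(seconds=i))
         for i in range(6)]
```

In a real deployment this logic would live in a microservice consuming from an event stream; the point is that the decision is made per event, not per nightly batch.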

2. Event-driven architecture gives you detailed historical records of customer activity, not just current state.

Without an event-driven architecture operating in real time, most organisations rely on master databases, master stores, or small databases per microservice. Whatever the data storage solution, it's more than likely that you're only capable of storing 'end state'.

In other words, you only have a singular viewpoint of a customer or user.

Event-driven architectures, on the other hand, create point-in-time feeds of data to paint incredibly granular portraits of customer activity over time.

Granular visibility of customer context and behaviour over time is incredibly powerful for fraud detection.

Imagine you're a fintech providing upfront cash loans to people at point-of-purchase in ecommerce stores, which they pay back over time. Effectively, you're provisioning micro-credit loans. Risk analysis here is key.

With ‘end state’ data available, you might see that ‘Customer A’ has a valid account, and has made purchases through the platform before. With events, you’ll see that ‘Customer A’ has attempted to log in from the same IP address multiple times against different user accounts and has made over 25 loan applications through these multiple accounts over the past 30 days. Which view of ‘Customer A’ is more valuable?

Fraud detection and risk analysis is not about ‘end state’; it’s about predicting the likelihood of an occurrence based on behavioural analysis.
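The 'Customer A' contrast above can be made concrete by replaying the same event stream two ways. The event shapes here are hypothetical, but the point holds: the end-state view and the behavioural view are both derivable from events, while the reverse is not true:

```python
# Replaying an event stream yields both 'end state' and behavioural context.
# Event fields are illustrative.
events = [
    {"type": "login",            "account": "A1", "ip": "198.51.100.9"},
    {"type": "login",            "account": "A2", "ip": "198.51.100.9"},
    {"type": "loan_application", "account": "A1", "ip": "198.51.100.9"},
    {"type": "loan_application", "account": "A2", "ip": "198.51.100.9"},
]

# End-state view: each account exists and has transacted. Both look fine.
accounts = {e["account"] for e in events}

# Behavioural view: distinct accounts sharing an IP, and the application
# volume flowing through them.
accounts_per_ip: dict[str, set] = {}
applications = 0
for e in events:
    accounts_per_ip.setdefault(e["ip"], set()).add(e["account"])
    if e["type"] == "loan_application":
        applications += 1

# IPs behind more than one account are worth a closer look.
shared = {ip: accts for ip, accts in accounts_per_ip.items() if len(accts) > 1}
```

The end-state view collapses all of this history into "two valid accounts"; the behavioural view preserves the pattern that actually matters for risk.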

3. Event-driven architecture perfectly complements machine learning models for predictive behaviour analysis.

Event-driven architecture gives you a point-in-time feed of data which can stream into a historical data lake, a data warehouse, or anywhere you prefer to keep data for historical batch analysis.

The ability to access these events is incredibly powerful for the models associated with machine learning. Behavioural analysis is facilitated by referencing information that represents an order of events, or activities, that occur within a platform or user flow.

Predictive machine learning models are fed by point-in-time based information. As a result, it’s crucial they are trained using point-in-time information too.

For example, we've worked with a fintech that uses this technology to provision credit ratings for micro-loan approvals within seven seconds. To do that, they combine event-driven architecture with machine learning.

The platform would receive transactions and process them immediately via Apache Kafka, creating a series of aggregate data points for the user in real time, with visibility of information like:

  • How many times we’ve seen this user
  • How many times we’ve seen this IP
  • How many times we’ve seen this email or IP address in the past 30 days
  • Whether—or how many times—the user has purchased and paid back a loan

All of this historical information is made available thanks to event-driven architecture and the point-in-time data processing it facilitates. This information would be fed into a machine learning model to generate a prospective credit score. The score would then be used to approve or deny the user’s loan application, within a seven second timeframe.
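As a sketch of the kind of point-in-time aggregates such a pipeline might compute per application before handing them to the model (the feature set and field names are illustrative, not the fintech's actual schema):

```python
from collections import Counter

def aggregate_features(events: list[dict], email: str, ip: str) -> dict:
    """Compute illustrative point-in-time aggregates for one loan application."""
    email_counts = Counter(e["email"] for e in events)
    ip_counts = Counter(e["ip"] for e in events)
    repaid = sum(1 for e in events
                 if e["email"] == email and e["type"] == "loan_repaid")
    return {
        "times_seen_email": email_counts[email],
        "times_seen_ip": ip_counts[ip],
        "loans_repaid": repaid,
    }

history = [
    {"type": "login",       "email": "a@example.com", "ip": "203.0.113.7"},
    {"type": "loan_repaid", "email": "a@example.com", "ip": "203.0.113.7"},
]
features = aggregate_features(history, "a@example.com", "203.0.113.7")
```

The same aggregation logic runs over historical events at training time and over live events at scoring time, which is what keeps the model's inputs consistent between the two.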

Crucially, these models begin to understand proportionate risk in relation to a specific user's behavioural profile and past activity. For example, updating your password 10 times in a month may be a signal of risk for some users, but not for others. The models adjust to accommodate these user-specific considerations.

This example speaks to risk assessment, although very similar fraud detection models exist. These work in much the same way, but they track and appraise different behavioural information to establish a profile and determine the likelihood of fraudulent intent.

A combination of behaviours or historical data points might trigger additional security monitoring, for example, as we’ve implemented in our work with Her Majesty’s Revenue and Customs.

4. Event-driven architecture future-proofs your organisation for cutting-edge fraud detection and risk analysis.

Even if your organisation doesn’t have the need or appetite to engage in real-time fraud detection now, an event-driven architecture feeding into a data lake sets you up for any real-time requirement in future. As a side note, streaming from an event-backbone is an incredibly effective way to feed your data lake or data warehouse anyway.

You can leverage these historical events in powerful, evolving ways. One example, in addition to the machine learning models highlighted above, is 'network clustering' in fraud detection: real-time clustering using information gleaned from historical events.

As events arrive, we can surface clusters of information based on certain identifiers of interest. An example might be user IPs and email addresses. You form a node network which displays, in real time, the number of IP addresses associated with any one email address, and vice versa.

This is incredibly powerful for fraud detection. If we see five email addresses registered against a single IP, there’s potentially a problem with that user.
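A minimal sketch of that node network, assuming each event carries an email address and a client IP (a production system would maintain this over a streaming graph or key-value store rather than in-process dictionaries):

```python
from collections import defaultdict

# Two sides of a bipartite email <-> IP network, updated per event.
emails_by_ip: defaultdict = defaultdict(set)
ips_by_email: defaultdict = defaultdict(set)

def observe(email: str, ip: str) -> None:
    """Update the network as each event arrives."""
    emails_by_ip[ip].add(email)
    ips_by_email[email].add(ip)

for email in ["a@x.com", "b@x.com", "c@x.com", "d@x.com", "e@x.com"]:
    observe(email, "203.0.113.7")

# Five distinct emails behind one IP: a potential fraud signal.
suspicious = {ip for ip, emails in emails_by_ip.items() if len(emails) >= 5}
```

Because the structure is updated incrementally per event, the cluster becomes visible the moment the fifth email registers, rather than in a later batch job.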

Using Apache Kafka, we've been able to conduct this type of analysis in near real time, surfacing, cataloguing and clustering events within seconds. If someone were to attempt multiple calls against a stolen credit card, we could identify that almost immediately, initiate any relevant protocols, or surface that information to the relevant business domains using data pipelines and platforms.

Looking to learn more about event-driven architecture, machine learning, or other strategies and technologies involved with fraud detection?