Increasingly, banks and financial organisations are defined by two things: their technology, and that technology’s ability to facilitate the highest quality customer experience.
Technology and customer experience are inherently connected. Speed of service, security, the flexibility to create and deliver value based on a person’s context or real-time interactions; these are some of the hallmarks of leading CX.
They’re also some of the reasons why event-driven architecture is a compelling solution for large-scale banks and fintech start-ups alike.
1 – Scale technical infrastructure in response to high demand, quickly and cost-effectively.
Banks and financial organisations typically deal with huge volumes of activity, and high-throughput requirements.
Depending on the scale of your organisation, enormous numbers of transactions, eKYC checks, risk analyses, and other micro-interactions will likely occur every hour.
If you’re reliant on a monolithic architecture, scaling to accommodate peak periods or fluctuations in demand typically represents a challenge for engineers, a strain on existing resources, or a significant cost for your organisation. Or all three in combination.
Imagine experiencing a wave of transactions that doubles your typical processing volume. This creates a huge strain on services.
In a synchronous blocking scenario, response times will inevitably balloon as requests are pushed downstream through a chain of increasingly overwhelmed services.
With an event-driven architecture, you can easily build out multiple microservices to service the same stream of requests. These services operate through a receiver-driven routing pattern; in other words, each service will only pull a request from the queue if it is ready and available to complete its function.
Event-driven processing, by contrast, is ‘receiver-driven’: each microservice pulls from the queue only when it has capacity to process a request, so many services work together to make light work of high-volume periods.
Sure, a load balancer will be serviceable if you’re consistently processing high volumes of requests. But when demand is time-sensitive and fluctuating, the receiver-driven approach of event processing is far more consistent, reliable, and powerful.
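As a rough illustration, the receiver-driven pattern can be sketched in Python with a shared queue and a pool of workers, where each worker pulls the next event only when it is free. The queue, worker count, and event shape here are illustrative assumptions, not a production setup:

```python
import queue
import threading

def worker(q: queue.Queue, processed: list, lock: threading.Lock) -> None:
    """Pull events from the queue only when this worker is free."""
    while True:
        txn = q.get()
        if txn is None:          # sentinel: no more work for this worker
            q.task_done()
            return
        # ... real work would happen here (risk check, eKYC, etc.) ...
        with lock:
            processed.append(txn)
        q.task_done()

q: queue.Queue = queue.Queue()
for txn_id in range(1000):       # a burst of 1,000 transaction events
    q.put(txn_id)

processed: list = []
lock = threading.Lock()
workers = [threading.Thread(target=worker, args=(q, processed, lock))
           for _ in range(8)]    # scale out simply by adding consumers
for _ in workers:
    q.put(None)                  # one shutdown sentinel per worker
for w in workers:
    w.start()
q.join()

print(len(processed))            # all 1,000 events handled
```

Scaling to meet a spike then becomes a matter of adding consumers to the same queue, rather than re-provisioning a monolith.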
2 – Adapt to change—almost instantly—and respond to key drivers in market.
In a competitive financial sector—where startups are establishing a foothold and accruing greater market share—the ability to respond and create value quickly is crucial.
Unfortunately, for many large-scale banks, technical infrastructure is a source of constraint rather than a driver of the change and potential we see in digital banks.
Event-driven architecture gives you the ability to respond to change rapidly, thanks to a couple of key factors.
Firstly, an event-driven architecture is typically composed of decoupled microservices. By decoupling services, and reducing the chains of interdependencies between them, you reduce the potential impact and risk associated with testing and deployment. Reducing the risk and effort involved in deployment, while maintaining quality assurance, means you can act faster to improve customer experience.
Secondly, an event-driven architecture allows you to quickly and easily create new services that leverage existing data streams.
In other words, you can rapidly spin up new services for evolving use cases as they become apparent or appealing.
Let’s consider a hypothetical scenario. Imagine your business intelligence team identifies a series of accounts sending potentially problematic transactions to an offshore bank. In this scenario, you could immediately create a new microservice to monitor for any transactions going to that bank, and fire immediate notifications to your risk analysis team for additional monitoring of the accounts associated with those transactions.
Almost immediately, you protect your organisation—and customers—against the risk associated with potentially fraudulent transactions.
You can do the equivalent in a monolithic architecture, but you’d have to plan everything ahead of time: mapping use cases, establishing how information will be used, where it will be sent, and more.
Engaging in that painstaking planning makes you slower to respond to the needs of your customers. Or, in the above example, slower to eliminate risk.
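To make the scenario concrete, here is a minimal sketch of such a monitoring microservice subscribed to an existing transaction stream. The event fields, the watched-bank identifier, and the notification hook are all hypothetical; a real service would consume from a broker such as Kafka and integrate with a proper alerting system:

```python
# Identifier supplied by the business intelligence team (hypothetical).
WATCHED_BANK = "BANK-OFFSHORE-001"

flagged: list = []

def notify_risk_team(event: dict) -> None:
    # Stand-in for a real alerting integration (ticket, email, pager).
    flagged.append(event["account"])

def handle(event: dict) -> None:
    """Flag any transaction destined for the watched bank."""
    if event["dest_bank"] == WATCHED_BANK:
        notify_risk_team(event)

# Replay a few events as they would arrive from the existing stream:
stream = [
    {"account": "A-1", "dest_bank": "BANK-LOCAL-042", "amount": 120},
    {"account": "A-2", "dest_bank": "BANK-OFFSHORE-001", "amount": 9800},
    {"account": "A-3", "dest_bank": "BANK-OFFSHORE-001", "amount": 7500},
]
for event in stream:
    handle(event)

print(flagged)   # accounts routed to the risk analysis team
```

Because the service only reads from the stream, it can be deployed without touching the systems that produce the events.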
3 – Surface data to a range of distributed business domains for comprehensive real-time views of customers and operations.
Event-driven architectures create an incredible ability to consolidate multiple flows of information in real time.
In many cases, this can create visibility of issues or opportunities that are otherwise unidentifiable across large organisations with multiple domains.
Consider the following hypothetical:
- One flow of events indicates that a customer has updated their contact information; typically, this may be visible to Customer Services.
- Another flow indicates the same customer has withdrawn $2,000; typically, this may be visible through the Banking Platform.
In isolation, neither of these actions seems problematic or noteworthy. However, if they occur in close proximity to one another, and crucially, if you have a risk analysis team with immediate visibility of both (or an automated flag to monitor this customer based on the combination of these actions), you may be able to prevent potential fraud.
Conversely, if you’re operating with batched processing, it’s far less likely that this intersection of activity will be identified in time to take meaningful action. The system may generate a report overnight, and someone will action the report the next day, long after the incident has occurred and the damage has been done.
With event-driven architecture, you could create an automated ancillary service that monitors exclusively for a range of key behavioural triggers.
If any of those triggers occur, the service automatically flags the account, and customer service agents or security teams can implement subsequent processes based on the information they have in the moment.
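A minimal sketch of such a trigger-correlation service, assuming illustrative event shapes, a one-hour correlation window, and a $1,000 withdrawal threshold (all hypothetical values), might look like this:

```python
WINDOW_SECONDS = 3600        # correlate events within one hour (assumed)
LARGE_WITHDRAWAL = 1000      # illustrative threshold

last_contact_change: dict = {}   # account -> timestamp of last update
flags: list = []

def handle(event: dict) -> None:
    """Flag accounts where a contact change precedes a large withdrawal."""
    account, ts = event["account"], event["ts"]
    if event["type"] == "contact_updated":
        last_contact_change[account] = ts
    elif event["type"] == "withdrawal" and event["amount"] >= LARGE_WITHDRAWAL:
        changed_at = last_contact_change.get(account)
        if changed_at is not None and ts - changed_at <= WINDOW_SECONDS:
            flags.append(account)

events = [
    {"type": "contact_updated", "account": "C-9", "ts": 100},
    {"type": "withdrawal", "account": "C-9", "ts": 700, "amount": 2000},
    {"type": "withdrawal", "account": "C-4", "ts": 800, "amount": 2000},
]
for e in events:
    handle(e)

print(flags)   # only the account showing both signals in close succession
```

Neither producing domain (Customer Services or the Banking Platform) needs to know this service exists; it simply consumes both flows.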
4 – Add services without risk or disruption to existing, business-critical infrastructure.
Events are all about speed, flexibility, and delivering on potential as it is identified. With an event-driven architecture, you’re typically able to create robust and reliable services within a day, often less.
Critically, the creation, testing and deployment of these services is a non-invasive process.
You have the capability to ‘annex’ new services to the side of existing flows, rather than change the foundational structure of your existing functionality. These annexed microservices are referred to as ‘ancillary services’, because they don’t affect your core services.
If a single ancillary service misfires within an event-driven architecture, the only implication is that the state of that one service becomes stale.
On the other hand, provisioning change in a traditional architecture typically means re-testing everything.
In a monolithic approach, if you make an addition or amendment to core services, you have two choices: re-test everything, or deploy at risk.
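The isolation property can be sketched as a simple fan-out, where the core and ancillary services subscribe to the same flow and a failure in the ancillary subscriber is contained (the publish loop and service names are illustrative, not a real broker):

```python
core_ledger: list = []        # state of the business-critical service
ancillary_state: list = []    # state of the newly annexed service

def core_service(event: dict) -> None:
    core_ledger.append(event)             # core path keeps working

def ancillary_service(event: dict) -> None:
    raise RuntimeError("misfire")         # simulate a buggy new service

def publish(event: dict, subscribers) -> None:
    """Deliver an event to each subscriber, isolating any failures."""
    for sub in subscribers:
        try:
            sub(event)
        except Exception:
            pass   # the failing service's own state simply goes stale

for i in range(3):
    publish({"txn": i}, [core_service, ancillary_service])

print(len(core_ledger), len(ancillary_state))   # core unaffected: 3 0
```

In practice a message broker provides this isolation for you: each consumer group tracks its own position in the stream, so one group's failure never blocks another's.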
5 – Meet regulatory obligations in real time and with greater accuracy.
With event-driven architectures, end-of-day reconciliation is a thing of the past. Teams can configure services to listen for the micro-interactions that impact or feature in regulatory reports, ensuring those reports are compiled in real time.
Coupled with the concept of ancillary services (see point 4, above), you could perform risk analysis as part of the same single flow—removing the potential for double handling of information and creating greater efficiencies for speed of processing.
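A sketch of this single-flow idea: regulatory figures are updated incrementally as each event arrives, and a risk check runs in the same handler, so the data is touched once. The categories, amounts, and $10,000 threshold are illustrative assumptions:

```python
from collections import defaultdict

# Running regulatory figures, always current; no end-of-day batch run.
report = defaultdict(lambda: {"count": 0, "total": 0})
risk_queue: list = []

RISK_THRESHOLD = 10_000   # illustrative reporting threshold

def handle(event: dict) -> None:
    """Update report figures and run risk analysis in one pass."""
    bucket = report[event["category"]]
    bucket["count"] += 1
    bucket["total"] += event["amount"]
    if event["amount"] >= RISK_THRESHOLD:   # risk check, same flow
        risk_queue.append(event["account"])

for e in [
    {"category": "fx", "account": "A-1", "amount": 12_000},
    {"category": "fx", "account": "A-2", "amount": 500},
    {"category": "domestic", "account": "A-3", "amount": 900},
]:
    handle(e)

print(report["fx"]["total"])   # figures are current after every event
```

At any moment the report reflects every event processed so far, rather than a snapshot from the previous day.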
6 – The ultimate benefit? Deliver a better experience for your customers, before your competitors can.
Each of the five previous points ultimately converge to provide greater security, flexibility, and an improved overall experience for your customers.
While event-driven architectures provide a wide range of valuable benefits for your internal teams and processes, the real value lies in delivering a leading customer experience with a view to:
- Improving customer acquisition
- Improving customer retention
- Creating broader customer interactions, through opportunities for time-sensitive or contextual promotion of a wide range of products (say your customer frequently transfers money into a savings deposit; you could trigger a service to send them information on home loans once they reach a certain threshold)
- Creating deeper customer interactions, by delivering more value than a potential competitor
Looking to learn more?
Find out how we’ve leveraged event-driven architecture to support banks and financial organisations around the world, like: