Beyond Gen AI: Experts explore the benefits of AI in finance and superannuation
When we started planning our latest Melbourne Expert Talks event, we didn’t anticipate bulldozers becoming a central AI metaphor. But that’s the beauty of bringing experts together to talk about technology and leadership – you never know where the conversation might lead!
With a recent Gartner poll finding that 55% of organisations globally are already in pilot or production with Generative AI, we wanted to discover whether Australia’s finance and superannuation sectors were investing in AI. At “Beyond Gen AI: Navigating how businesses realise the benefits of AI”, our experts shared insights into how organisations are embracing the potential of AI and the foundational steps they need to take to maximise their return on investment. As moderator of the panel, I’m delighted to share some of the discussions here.
Combining Artificial Intelligence with Human Intelligence
Gen AI continues to dominate technology conversations, but more organisations are understanding that its efficacy hinges on human oversight. Stephen Reilly, Chief Operating Officer at HESTA, illustrated this with the memorable bulldozer analogy: “No-one is surprised that a bulldozer can lift more dirt than a human with a shovel. But you still need a human to drive the bulldozer and decide where you want the dirt to go. It’s the same with AI. It’s no surprise that AI can do things faster than a human, but you still need a human in the process to get the best results.”
Claire Cornfield, Senior Executive Head of Customer Experience at La Trobe Financial, expanded on the importance of combining artificial intelligence with human intelligence, especially in organisations where trust is vital, such as financial services. “We need to decide what we want the technology to do, what we want our staff to do and where each can add value. But we’ll still need humans to be involved in highly emotive or sensitive areas, even though these are also the more challenging jobs.”
Our panel also highlighted AI’s potential to improve customer experience and create better outcomes for vulnerable people. For example, within healthcare, we are seeing the potential for AI to use data sets to predict health problems and enable early intervention. Andrea Lymbouris, Head of Information Services at State Trustees, can see the potential for AI to provide improved, personalised customer interactions within her teams. “AI could help our consultants access data about the client they are speaking to, when they last called and why, so the client doesn’t have to repeat all the information and we can give them the help they need more quickly.”
Balancing AI’s risks and rewards
The finance and superannuation sectors, bound by regulatory constraints and financial responsibilities, can often be seen as cautious in their approach to new technologies. But even within this sector, each business will have a different appetite for risk, said Michael Collins, former Chief Information Security Officer at Judo Bank. “Other businesses are going to run harder and run faster with AI – they won’t have the sensitive data we do so they’re going to be able to take more risks. But within each organisation, it comes down to what your board and your senior management are comfortable with from a risk perspective.”
Stephen noted numerous potential AI applications in the competitive superannuation sector, such as personalised experiences and detecting anomalies in customer behaviour. “But, as with everything with technology, we have been very conscious with what we enable,” he said. “We want to ensure it adds value, optimise our use and ensure we keep it secure.”
Claire also raised concerns about AI’s impact on talent development if it is used to automate some tasks. “A lot of AI use cases are automating tasks that usually fall to entry-level roles,” she said. “But these tasks, like dealing with calls in a customer centre, give people the breadth of experience that they can take into their long-term financial services careers. If we take that work away, how are they going to get that experience?”
No doubt inspired by the CrowdStrike incident in the week before the event, the panel also stressed the importance of human intervention in AI systems. Michael said: “Good AI needs three things – confidentiality, integrity and security. But if an AI system went down, you would still rely on humans to be involved to fix it or maintain the service.”
For Andrea, the biggest challenge is creating a business strategy while AI is advancing so rapidly: deciding when to leverage the AI capabilities built into tools the organisation already procures, and when to take a wider approach. Andrea added: “I think from a technology perspective we need to rapidly increase our skill set and expand our knowledge of AI.”
Alongside AI strategies, getting funding for AI projects can also be a challenge, with limited investment models currently available to organisations to use within business cases. Michael said: “You’ve got to be very clear about why you’re asking for money and what you want to do because the business is always making trade-offs. But it’ll again come back to risk appetite and where you can pivot funding from in your current strategy.”
The importance of data quality and security
I’ve worked in digital transformation for a long time and I’ve seen the same questions about risk come up with each new technology – I remember people being horrified when we first introduced APIs in banking, for example. But the difference I see with AI is that it is a technology that can be democratised: anyone can access tools like ChatGPT. During the panel, I asked whether this means that tasks we often put on the back burner, such as data cleansing or resolving our internal data permissions, now become consequential. The question of data quality is certainly one we hear from businesses looking to start working with AI, particularly organisations looking after sensitive information for their customers or clients.
Andrea said: “I think organisations are right to be thinking about protecting and securing the data. We need to be very mindful of it and put in place some additional risks and controls.” Michael also echoed the data security sentiment, adding: “You need to understand your data, where it is and how you’re going to use it before you start just running to the sexiest thing that’s on the internet and trying to install it and see how it goes.”
But Stephen also cautioned organisations against waiting too long for their data to be perfect before embarking on an AI project. “Your data is never going to be perfect, so you have to figure out how to build in the margin for error for imperfect data. You have to drive forward. My encouragement is to test the quality of your data, overlay human intelligence onto your AI and embrace data governance people.”
Conclusion
We’d like to thank everyone who attended the event and our panel members for sharing their expert insights.
We’re also delighted to announce that Equal Experts will be matching the total amount raised from the event, boosting the final figure to $1,500 donated to the Aboriginal Investment Group’s Remote Laundry Project. This will power a laundry site for an entire year, giving remote Aboriginal communities access to free laundry services to improve health and social outcomes.
Watch out for details of our next Expert Talks event, and if you’re interested in exploring Gen AI in your organisation, contact the Equal Experts Australia team.
Here’s a fact you probably won’t know about me – I went to the same school as Alan Turing.
I mentioned this while appearing as a panellist at the recent Konnecta Ko-Lab: Series 1 discussion in Sydney. Having attended the same educational institution as one of the founding fathers of artificial intelligence, I guess I was always destined to be interested in AI. It’s a fact that certainly grabbed the attention of the audience.
AI, particularly Generative AI, continues to captivate technologists and business leaders worldwide. During the panel, we dived headfirst into the technical aspects of implementing AI, but some of the more compelling and challenging audience questions revolved around how organisations can balance AI’s opportunities with the concerns surrounding the new technology.
How generative AI is impacting business
Generative AI is reshaping the way we create and consume content. It can enhance customer experience through personalisation, streamline business processes and rapidly generate new creative ideas.
But along with the benefits, there are concerns about how Generative AI is impacting business and even society as a whole. These concerns, including data privacy, intellectual property rights, and the use of technology to mislead, deceive or manipulate people, are not new.
However, the speed at which Generative AI tools have emerged, along with their huge popularity and the fact that any person or business can use them, means it’s vital that people considering implementing AI understand how to innovate ethically and responsibly.
The importance of accountability and transparency
One of the key questions from the event audience was whether businesses can ethically implement AI while still reaping the business and revenue rewards. As a panel, we explored two areas that I believe underpin responsible innovation with AI: accountability and transparency.
As Generative AI is still an emerging technology, the regulations governing its use are also still emerging. Many AI frameworks around the world, including here in Australia, are still under consultation. There is also a lack of industry-standard benchmarks, security policies and reference architectures to help people build AI technologies and assess them for security reasons.
Decisions about how and when to use AI are still largely being driven by organisations themselves – so we’re relying on businesses to innovate but also self-regulate. To innovate responsibly, businesses need to be accountable for the technology they use – including its robustness, its security, and the accuracy of the data informing it. Transparency about how this information is being used, who has access to it and what has been put in place to prevent misuse is also crucial.
The third fundamental of responsible innovation in AI: human-centricity
Alongside compliance and technical robustness, during the panel discussion, I advocated for a third fundamental of AI in a “triangle of responsible innovation”: human-centricity.
Generative AI can create content quickly, but it does not replicate human creativity. It’s an enabler for quick idea generation, proofreading or summarising, but it lacks the genuine human empathy needed to resonate authentically with audiences every time.
Additionally, Generative AI reflects the bias in its underlying data, such as social inequality, gender discrimination or minority stereotypes. For instance, it may only generate an image of a white male in his 40s when asked to depict a doctor. Humans are required in the process to recognise this bias in the generated content and act on it.
This in itself can be challenging. After attempting to correct AI’s bias against minority groups, Google overcorrected, enabling users of its Gemini AI to create misleading and historically inaccurate images of people, including the US founding fathers and Vikings, in a variety of ethnicities and genders.
Understand your business aims and collaborate with experts
As the hype around Generative AI continues to grow, more businesses are considering implementing AI within their own operations and workflows. However, all panellists agreed that a business must first define their aims, understand their use cases, and perform a thorough security and risk assessment before they onboard any new tool or start any AI development work.
To successfully embark on an AI journey, businesses also need to ensure that leadership teams become fluent in AI and can prioritise use cases that are aligned with their business strategy. Organisations also need to set up an operating model that allows experimentation with AI in a structured way that can then be scaled when needed.
Those who rush into using new tools like Generative AI, without fully considering best practices or how they can add value to the business, risk inviting security issues and wasting time, money and effort on something that might not deliver on the hype. This is especially true with new and rapidly developing technologies such as Generative AI – even large global organisations such as Google, Meta and Microsoft are constantly adapting and improving their understanding of AI and how it should be implemented.
Businesses need expert help and guidance to maximise the resources they invest in any new technology and AI is no different. If you’re interested in exploring how to implement AI in your organisation and maximise the opportunities and benefits it can offer, contact our experts in Australia.