How super fund CROs are handling the double-edged sword of AI
22 July 2024
AI will disrupt the asset management and superannuation sector, introducing both revolutionary capabilities and a raft of new risks. CROs must take a hands-on role to ensure their institutions manage those risks while still making the most of the opportunities AI presents.
Asset managers and superannuation funds are harnessing AI
AI has the capacity to revolutionise how funds are managed, member services are delivered, and core processes are run.
Already, asset managers and superannuation funds are using AI for portfolio optimisation, risk assessment and predictive analytics. Firms are using AI to undertake investment research and refine investment strategies, analysing market trends and identifying opportunities to support more efficient and informed decision-making. Other use cases we’ve seen in the market include:
- Marketing and client service enablement: AI can track and analyse client interactions and behaviours to identify preferences and anticipate future actions, leading to more personalised and effective customer engagement strategies.
- Legal and compliance: AI can continuously monitor transactions and communications, enabling 100% sampling of customer interactions whilst detecting suspicious activities and flagging them for further review. It is also being used to automate a small share of legal and compliance work, such as contract reviews.
- Complaints and disputes: AI can be used to prioritise complaints based on their sentiment, urgency and complexity, and to route each one to the most appropriate team or individual to handle it, as sketched below.
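To make the complaints triage idea concrete, here is a minimal, hypothetical sketch in Python. The fields, weights, thresholds and queue names are illustrative assumptions only; in a real deployment the sentiment and urgency signals would come from the fund’s own NLP models and the routing rules from its complaints-handling framework.

```python
# Minimal, hypothetical sketch of complaint triage: scores each complaint on
# sentiment, urgency and complexity, then routes it to an illustrative queue.
# All weights, thresholds and queue names are assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Complaint:
    reference: str
    sentiment: float   # -1.0 (very negative) to 1.0 (very positive), e.g. from an NLP model
    urgency: float     # 0.0 to 1.0, e.g. derived from keywords such as "hardship"
    complexity: float  # 0.0 to 1.0, e.g. based on the number of products or parties involved

def priority_score(c: Complaint) -> float:
    """Blend the three signals into a single 0-1 priority score (weights are assumed)."""
    negativity = (1.0 - c.sentiment) / 2.0  # map sentiment to 0 (positive) .. 1 (very negative)
    return 0.4 * negativity + 0.4 * c.urgency + 0.2 * c.complexity

def route(c: Complaint) -> str:
    """Direct the complaint to an illustrative queue based on its score."""
    score = priority_score(c)
    if score >= 0.75:
        return "senior-case-manager"
    if score >= 0.5:
        return "specialist-team"
    return "standard-queue"

if __name__ == "__main__":
    c = Complaint(reference="CMP-001", sentiment=-0.8, urgency=0.9, complexity=0.3)
    print(route(c), round(priority_score(c), 2))  # senior-case-manager 0.78
```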
But AI opportunities come with inherent risks
Although AI brings significant opportunities, it also raises concerns about data privacy, bias and cybersecurity. Boards understand that any AI-related incident will bring significant reputational damage and erode customer trust.
Robust regulatory frameworks will eventually be put in place, but current legislation in Australia is not sufficient to protect against AI ‘harm’, and it will take time to draft and embed appropriate legal and regulatory frameworks. Just like everyone else, regulators are still learning about and grappling with AI. It will take considerable time, consultation and international collaboration to arrive at standards that can be enforced globally.
Also, while global regulators broadly agree that frameworks must ensure AI delivers both economic and societal benefits while minimising risks, different jurisdictions hold divergent views on how best to regulate AI. This will create additional, complex challenges for asset managers with operations in multiple jurisdictions.
CROs must both manage AI risk and facilitate AI innovation
As CROs adapt existing risk frameworks to the new reality of AI, their mandate remains effectively the same: support the business, enable value creation and protect the value the business has created.
Many elements of the task are familiar. CROs are playing a crucial role in helping their organisations govern AI by ensuring the risks associated with AI implementation are identified, assessed and managed effectively. CROs are also responsible for drafting risk appetite statements, establishing comprehensive controls and oversight for the AI risks identified, and developing risk management frameworks that address the ethical, legal and operational implications of AI technologies within the organisation. This includes identifying and managing the risks:
- Introduced by vulnerabilities in current AI-infused tools, systems and processes.
- That AI creates unreliable, unfair or incorrect outputs due to bias, malfunction or hallucinations.
- Associated with privacy breaches, or intellectual property and copyright theft linked to the use of AI.
- In the supply chain where third-party suppliers use AI to deliver outcomes for funds.
- Of colleagues using AI tools, both supported and unsupported, to fulfil their roles.
Many CROs are proactively collaborating across their organisations to establish policies, procedures and compliance frameworks that promote responsible and compliant use of AI aligned with the organisation's risk appetite and current and incoming regulatory standards.
They are also investing time in educating themselves and their boards not only about the nature of the risks AI presents but also about the importance of safely enabling AI-based innovation. When it comes to AI adoption, the tone from the top is important. Organisations cannot be afraid of AI; they should focus on how to safely embrace the efficiency, accuracy and customer experience upsides. Leaders must build a ‘test and learn’ culture, emphasising that innovation will be supported and encouraged if AI is used within appropriate guardrails.
This is where the CRO comes in. In a clear evolution of the traditional role, CROs must proactively support their businesses to leverage the opportunities that AI brings. This is all about crafting principles and policies to enable AI innovation in a safe and compliant environment.
What steps can CROs take now?
Don’t wait for certainty. AI regulations in Australia are a way off and use cases are evolving rapidly. Start putting in place the enabling infrastructure needed to manage AI risk and opportunity now.
- Define and establish a responsibility framework to help the organisation understand its capabilities and limitations in pursuing opportunities with new AI technology. Ensure appropriate personnel are in place to perform control, oversight and monitoring activities.
- Establish your firm's risk appetite and get it approved by your Board. Update and refine your firm’s policies, standards and guidelines for AI use and adoption accordingly. This is less about creating policy documents and more about clearly defining what it means to develop and deploy AI solutions, whether they are built in-house or sourced from third parties.
- Focus on evolving your control framework and control capability to explicitly consider AI risks. As new AI risks emerge, you’ll need to evolve your existing controls in parallel as well as develop entirely new controls.
AI is full of opportunity for the financial services sector. But how you approach and manage the risks will determine the safety and success of your AI adoption.
This topic and other developments are discussed at Baringa’s CRO Symposium events attended by the CROs of many of Australia’s largest superannuation funds.
Get in touch
If you'd like to know more about this topic or Baringa's CRO Symposium, please contact us.