The Financial Stability Board has issued a paper on the use of AI and machine learning in finance, warning that, if left unmanaged, their current use creates risks that could contribute to a future financial crisis.
Banks and hedge funds are understandably eager to draw on the power of artificial intelligence and machine learning. Successful market speculation depends upon recognizing opportunities before others and panning through data to spot emerging trends. This sort of rapid pattern or deviation recognition and risk analysis is the bread and butter of AI.
“AI and machine learning applications show substantial promise if their specific risks are properly managed,” the Financial Stability Board (FSB) said in its report. However, it warned that, “Taken as a group, universal banks’ vulnerability to systemic shocks may grow if they increasingly depend on similar algorithms or data streams.”
AI and machine learning in finance
These technologies are deployed in various aspects of financial services today, including:
- assessing credit quality
- pricing and marketing insurance contracts
- automating client interactions
- optimizing scarce capital
- back-testing models and analyzing the market impact of trading large positions
- finding signals for higher uncorrelated returns
- optimizing trade execution
- ensuring regulatory compliance
- surveillance, data quality assessment and fraud detection.
Fintech: what are the risks?
The danger is that we are becoming too dependent on the emergent capabilities of machine learning. By eliminating the variability of human judgements, financial institutions are all shifting towards a shared view of risk – as decided by AI.
Firms are relying on a small selection of third-party developers and services – meaning the technical foundations that could underpin global financial systems in future are all susceptible to the same potential flaws and risks.
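The mechanics of that concentration risk can be illustrated with a toy simulation (not drawn from the FSB report): if every firm acts on the output of the same model, their sell decisions land on the same days, whereas independently built models spread those decisions out.

```python
import random

random.seed(0)

N_FIRMS, N_DAYS = 100, 250  # hypothetical market: 100 firms, one trading year

def shared_model_worst_day(n_firms, n_days):
    """Every firm follows the SAME model output, so sell decisions coincide."""
    worst = 0
    for _ in range(n_days):
        signal = random.gauss(0, 1)              # one shared model output
        sells = n_firms if signal < -1 else 0    # all firms sell together
        worst = max(worst, sells)
    return worst

def independent_models_worst_day(n_firms, n_days):
    """Each firm uses its own independent model, so sell decisions spread out."""
    worst = 0
    for _ in range(n_days):
        sells = sum(1 for _ in range(n_firms) if random.gauss(0, 1) < -1)
        worst = max(worst, sells)
    return worst

worst_shared = shared_model_worst_day(N_FIRMS, N_DAYS)
worst_independent = independent_models_worst_day(N_FIRMS, N_DAYS)
```

Under the shared model, the worst day sees every firm selling at once; with independent models, no single day concentrates anywhere near that much selling. The numbers are arbitrary, but the qualitative gap is the systemic point the FSB is making.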
The FSB panel, led by Bank of England Governor Mark Carney, cautioned that greater monitoring and testing of fintech solutions are required if we are to avoid issues that are currently unforeseen. The difficulty of interpreting machine learning methods introduces unknown risks, which are multiplied by their widespread use.
“Many of the models that result from the use of AI or machine learning techniques are difficult or impossible to interpret. The lack of interpretability may be overlooked in various situations, including, for example, if the model’s performance exceeds that of more interpretable models,” claimed the report.
Long-term security and trust
Many machine learning solutions have been ‘trained’ in times of low volatility. There are therefore questions around how the models will react in the face of economic downturn or financial crisis.
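This regime-shift concern can be sketched in a few lines of illustrative Python (a simplified example, not from the report): a 99% value-at-risk threshold calibrated on a calm period is breached far more often than its design rate when volatility jumps.

```python
import random
import statistics

random.seed(1)

# Calm "training" period: 1,000 days of returns with 1% daily volatility.
calm = [random.gauss(0, 0.01) for _ in range(1000)]

# 99% value-at-risk calibrated on the calm period
# (2.33 standard deviations, assuming normally distributed returns).
var_99 = 2.33 * statistics.pstdev(calm)

# Stressed period the model never saw: volatility triples.
stressed = [random.gauss(0, 0.03) for _ in range(1000)]

def breach_rate(returns, threshold):
    """Fraction of days whose loss exceeds the VaR threshold."""
    return sum(1 for r in returns if r < -threshold) / len(returns)

calm_breaches = breach_rate(calm, var_99)        # close to the designed 1%
stress_breaches = breach_rate(stressed, var_99)  # far above 1%
```

A model that looked well calibrated in the calm data fails badly once conditions change, which is precisely why training regimes matter when these systems are deployed through a downturn.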
This isn’t just an issue of financial risk; cyber-crime is a very real concern too. “These risks may become more important in the future if AI and machine learning are used for ‘mission-critical’ applications of financial institutions,” the FSB said. “Moreover, advanced optimization techniques and predictable patterns in the behaviour of automated trading strategies could be used by insiders or by cyber-criminals to manipulate market prices.”
There is therefore a need to better understand all the implications and intricacies of AI and machine learning applications in finance – as well as data privacy, conduct risks and cybersecurity. The trust of the public, regulators and supervisors depends on a vigilant, long-sighted approach to technology-assisted financial modelling.