Unstoppable in a Crisis: AI Trading Bots Could Pose Risks to Financial Stability, Warns Bank of England's Hall
- Written by: Gary Howes
Image courtesy of Bank of England.
The Bank of England's Jonathan Hall says AI trading systems should be extensively trained and tested in multi-agent sandbox environments before deployment, to ensure they perform as expected across a range of market scenarios.
The external member of the Bank's Financial Policy Committee says artificial intelligence (AI) poses several potential dangers to the financial sector that regulators must acknowledge or risk "severe financial disturbances".
Speaking in Bristol, Hall says, "deep trading agents, whilst increasing efficiency in good times, could lead to an increasingly brittle and highly correlated financial market."
"The incentives of deep trading agents could become misaligned with that of regulators and the public good," he adds.
Hall is a member of the Founders Circle of the Institute for the Future of Work (IFOW) and is completing a PhD in Philosophy of Mind at Edinburgh University.
He says there are a number of ways in which AI agents could amplify existing vulnerabilities in non-bank finance, illustrating the point with a hypothetical AI trader, 'Flo', which could learn profit-maximising strategies that may not always align with market regulations.
He says one answer to this issue is to train AI systems to adhere to a "constitution" that could guide their actions within regulatory frameworks.
He also points out dangers associated with model misspecification, using the theoretical scenario of the "Paperclip Maximiser", in which an AI system designed to maximise paperclip production pursues that single goal so relentlessly that it ultimately consumes everything, including its owner.
Hall raises concerns about how AI could interact with financial markets to potentially destabilise them through emergent behaviours that are difficult for humans to predict or control.
He also warns of the potential for collusion between AI agents through emergent communication, which would be difficult to control because the communication methods involved may be uninterpretable to humans.
Advanced AI systems might develop strategies that involve collusion without explicit human direction or awareness. These strategies could emerge from the systems' interactions and be opaque to human understanding, making them challenging to monitor and regulate.
"Cutting-edge research in multi-agent reinforcement learning suggests that the risk of collusion is part of a broader category of emergent communication between AI agents. Because this is generally uninterpretable to humans, it is difficult to monitor and control any risk that arises from such communication," says Hall.
He adds that AI systems could ultimately learn to navigate around regulatory measures, optimising for profit maximisation in ways that comply with the letter but not the spirit of the law. This includes adapting to market abuse regulations in ways that might still undermine market fairness and integrity.
Hall worries that in situations where AI systems play a significant role in market movements, human operators may find it challenging to intervene effectively during crises due to the speed and complexity of AI-driven activities.
He says regulators should explore the following responses to encourage AI adoption while minimising risks:
Training, Monitoring, and Control:
Train and test AI systems, especially those involved in trading, extensively within multi-agent sandbox environments to ensure they perform as expected under various scenarios. Implement tight monitoring and control mechanisms, including risk and stop-loss limits, to manage and mitigate erratic behaviour that could harm the market.
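What such a control layer might look like is sketched below; the class and parameter names are hypothetical illustrations, not a Bank of England specification. The wrapper marks positions to market, clips every order to a position cap, and flattens the book once a stop-loss is breached:

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_position: float  # hard cap on absolute position, in units
    stop_loss: float     # halt once cumulative P&L falls below -stop_loss

class GuardedTrader:
    """Hard risk controls wrapped around an arbitrary trading agent."""

    def __init__(self, agent, limits: RiskLimits):
        self.agent = agent
        self.limits = limits
        self.position = 0.0
        self.pnl = 0.0
        self.halted = False

    def on_tick(self, price: float, prev_price: float) -> float:
        # Mark the existing position to market before deciding anything.
        self.pnl += self.position * (price - prev_price)

        # Stop-loss: flatten the book and refuse all further orders.
        if self.pnl <= -self.limits.stop_loss:
            self.halted = True
        if self.halted:
            flatten = -self.position
            self.position = 0.0
            return flatten

        # Otherwise pass the agent's order through, clipped to the cap.
        desired = self.position + self.agent.decide(price)
        new_position = max(-self.limits.max_position,
                           min(self.limits.max_position, desired))
        order = new_position - self.position
        self.position = new_position
        return order
```

The design point is that the guard sits outside the agent: whatever the agent's decide() method (the only interface assumed here) tries to do, the limits are enforced before an order reaches the market.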
Regulatory Alignment:
Ensure that AI systems are developed and operated in compliance with existing regulatory frameworks. This involves training AI systems to understand and adhere to regulatory requirements as if they were part of their operational 'constitution'. Continuously update training to address any discovered discrepancies between AI behaviours and regulatory intentions, ensuring alignment over time.
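One way to read the 'constitution' idea, sketched here with hypothetical rule names and penalty values, is as reward shaping during training: regulatory requirements are encoded as machine-checkable predicates, and violations are penalised so the agent learns that rule-breaking strategies are unprofitable, rather than merely having its orders filtered at runtime:

```python
# Hypothetical machine-checkable rules with illustrative penalties; real
# regulatory requirements would be far harder to encode than this.
CONSTITUTION = [
    ("no_oversized_orders",
     lambda a: abs(a["size"]) <= a["size_limit"], 50.0),
    ("no_quote_stuffing",
     lambda a: a["orders_this_second"] <= 100, 100.0),
    ("no_self_trading",
     lambda a: not a["crosses_own_resting_order"], 200.0),
]

def shaped_reward(raw_pnl: float, action: dict) -> float:
    """Training reward: trading P&L minus penalties for breached rules."""
    penalty = sum(p for _name, ok, p in CONSTITUTION if not ok(action))
    return raw_pnl - penalty

# A profitable action that breaches the quote-stuffing rule scores as a
# net loss, steering the learner away from the strategy during training.
action = {"size": 10, "size_limit": 100,
          "orders_this_second": 250, "crosses_own_resting_order": False}
print(shaped_reward(raw_pnl=30.0, action=action))  # -70.0
```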
Stress Testing:
Conduct regular and rigorous stress tests, utilising adversarial techniques to better understand how AI systems might react under extreme conditions. Stress tests should not only verify performance and compliance but also explore the AI systems' reactions to different market dynamics, aiming to uncover any potential for destabilising actions.
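An adversarial stress test can be framed as a search problem: rather than replaying a fixed set of historical scenarios, the tester searches the space of synthetic shocks for whatever provokes the worst behaviour from the agent. The sketch below uses a stand-in market model and a toy trend-following agent, both assumptions for illustration rather than anything from the speech:

```python
import random

def make_momentum_agent():
    # Toy trend-follower standing in for a trained trading agent.
    state = {"last": 100.0}
    def agent(price):
        order = 1.0 if price > state["last"] else -1.0
        state["last"] = price
        return order
    return agent

def worst_drawdown(scenario, steps=250, seed=0):
    """Run a fresh agent through a shocked market; return its deepest drawdown."""
    vol_mult, liquidity, gap = scenario
    rng = random.Random(seed)          # fixed seed: deterministic per scenario
    agent = make_momentum_agent()
    price, position, pnl, peak, worst = 100.0, 0.0, 0.0, 0.0, 0.0
    for t in range(steps):
        # Scaled-up volatility plus a one-off downward gap mid-simulation.
        move = rng.gauss(0, 0.5) * vol_mult - (gap if t == steps // 2 else 0.0)
        pnl += position * move
        price += move
        peak = max(peak, pnl)
        worst = max(worst, peak - pnl)         # deepest drawdown so far
        position += agent(price) * liquidity   # thin liquidity limits trading
    return worst

# Adversarial search: randomly explore the shock space and keep whichever
# scenario provokes the agent's deepest loss.
rng = random.Random(42)
scenarios = [(rng.uniform(1, 5), rng.uniform(0.2, 1.0), rng.uniform(0, 10))
             for _ in range(500)]
worst_case = max(scenarios, key=worst_drawdown)
print("most damaging (vol_mult, liquidity, gap):", worst_case)
print("drawdown under it:", round(worst_drawdown(worst_case), 2))
```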
Collaborative Efforts:
Encourage collaboration between regulators, market participants, and AI safety experts to develop standards and best practices that ensure AI technologies contribute positively to market stability. Foster a proactive dialogue about the implications of AI in financial markets to prepare for and mitigate potential risks.
Market-Wide Discussions:
Initiate market-wide discussions about the incorporation of regulatory rules into AI systems, which could help trading managers understand and implement best practices in AI governance. Promote transparency and sharing of insights across firms to facilitate a common understanding and approach to AI risks and regulation.