Le Cercle IA & Finance, L’ILB
AI in the financial sector: opportunities and challenges
I took part in a series of roundtables co-organized by Le Cercle IA & Finance and Institut Louis Bachelier (Tuesday, February 4th).
The speakers of the last (more academic) roundtable (left to right in the picture):
- Aimé Lachapelle, Managing partner, Emerton Data
- Christine Balague, Professor at Institut Mines-Telecom
- Arnaud de Bresson (not in the roundtable but in the picture)
- Marie Brière, Scientific Director of the FaiR programme, Institut Louis Bachelier; Head of Investor Research, Amundi
- Charles-Albert Lehalle, Professor at Ecole Polytechnique
- Jean-Marie John-Mathew, Co-CEO and co-founder of Giskard AI
My main message:
- we should rename Artificial Intelligence as Cheap Intelligence; cheap in terms of quality, not in terms of cost (energy, and the organization of data and processes, remain expensive).
- AI is not only Large Language Models (LLMs), especially for financial applications. Non-linear statistical models (random forests, support vector machines, neural network classifiers, etc.) provide the capability to contextualize data. The true added value of Natural Language Processing is not the question-answer process itself but the capability to structure texts: documents can now be stored as vectors, compared, and processed the same way as standard tabular data (see the first sketch after this list).
- All these tools (including LLMs) are statistical tools; as such, they need to be benchmarked on reference datasets. And as such, we should not expect veracity (i.e. truth) from them, but integrity (i.e. adequacy with the data); the second sketch after this list illustrates this kind of benchmark.
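To make the point about structuring texts concrete, here is a minimal sketch (not discussed at the roundtable; the example documents and the choice of scikit-learn's TfidfVectorizer are illustrative assumptions) of how free text becomes vectors that can be compared and processed like rows of a table; a transformer-based embedding model could be substituted without changing the downstream logic.

```python
# Minimal sketch: turning free text into vectors that can be compared
# and processed like standard tabular data (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets of financial documents
documents = [
    "The issuer reports a significant increase in credit risk exposure.",
    "Credit risk exposure rose sharply according to the issuer's report.",
    "Quarterly revenue grew thanks to strong retail banking activity.",
]

# Each document becomes a row vector in a shared feature space
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)  # shape: (n_documents, n_terms)

# Once vectorized, documents can be compared like any tabular rows
similarities = cosine_similarity(X)
print(similarities.round(2))
# The first two documents (same topic, different wording) should be more
# similar to each other than to the third one.
```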
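And a minimal sketch of the benchmarking point: any statistical model, LLM or not, should be evaluated against a labeled reference dataset. The data below are synthetic and the random forest is just one possible non-linear model; what the reported numbers measure is integrity, i.e. adequacy with a held-out reference set, not truth in an absolute sense.

```python
# Minimal sketch: benchmarking a statistical model on a reference dataset
# (synthetic data; illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for a labeled reference dataset (e.g. credit defaults)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report performance on held-out reference data
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, proba))
```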
My main take-aways:
Artificial intelligence (AI) is transforming the financial sector, offering both considerable opportunities and new challenges. The roundtable and Denis Beau's introduction highlighted several key issues:
- AI applications: AI is finding applications in various areas of finance, including credit risk assessment, insurance pricing and volatility estimation. It is also used to combat money laundering and fraud by handling large quantities of data. Generative AI, which allows queries to be expressed in natural language, is further accelerating deployments in the sector.
- New risks: AI integration introduces new risk vectors. The increased complexity of AI systems can lead to errors that are damaging for customers and financial institutions; it also increases cyber risk. Online learning and the opacity of algorithms can compromise the integrity of the answers. There is also a potential environmental risk due to energy consumption.
- Financial stability: the question of automation was already addressed by regulators during the debate on HFT (see also Apports des sciences et technologies à l’évolution des marchés financiers (fr)). AI systems can nevertheless introduce new risks, notably:
- dependence on a single technology (Single Point of Failure)
- fake news, potentially affecting market perception and financial stability.
- Regulation: It is crucial to identify risky systems, apply the principle of proportionality and integrate AI into existing prudential procedures. Regulation must also take into account the opacity of algorithmic systems and ensure fairness to avoid discrimination.
- Adoption and skills: Although goodwill for AI adoption is strong, only a small proportion (5%) of IT budgets is devoted to it today. A major challenge is deploying AI solutions inside heterogeneous IT systems and building employees’ skills. Quality data is essential for AI to be effective, which highlights the importance of Data Curation and Data Annotation.
- Measuring return on investment (RoI): Few companies (7%) actually measure the RoI of their AI investments. For some smaller companies, RoI may be more visible. It is important to redefine the business model and productivity of companies to effectively assess the impact of AI.
- Example, impact on software development: AI can increase the number of lines of code produced, but since the number of bugs is often proportional to the amount of code, what should be measured is the number of features implemented.
- Training and talent: building skills for a proper usage of AI among market participants, particularly critical thinking skills.
Specific issues about metrics raised during the roundtables:
- Algorithm opacity: Algorithms’ outputs need to be monitored to avoid discrimination and ensure fairness. For that, benchmarks and metrics are important tools.
- The 2024 AI Index Report mentions a need for more specialized benchmarks; the performance of end-to-end solutions should be measured and reviewed.
- Metrics: this is linked to the RoI assessment. Benchmarks are usually statistical ones; they should be complemented by operational criteria that push companies to think about what they actually want to improve (for instance, not the number of lines of code produced, but the number of features implemented). Moreover, energy-related and fairness-related metrics should be added. Hence it is necessary to define ad hoc multi-criteria indexes (a sketch of such an index follows this list).
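As an illustration of such an ad hoc multi-criteria index, here is a minimal sketch assuming every metric has already been normalized to [0, 1] with higher meaning better; the metric names, scores and weights are hypothetical and would have to be defined per use case.

```python
# Minimal sketch of an ad hoc multi-criteria index combining statistical,
# operational, energy and fairness metrics (hypothetical names and weights).

def multi_criteria_index(metrics: dict, weights: dict) -> float:
    """Weighted average of normalized scores in [0, 1], higher is better."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Example: all scores already normalized to [0, 1], higher is better
metrics = {
    "statistical": 0.86,  # e.g. ROC AUC on a reference benchmark
    "operational": 0.60,  # e.g. share of planned features actually delivered
    "energy": 0.40,       # e.g. 1 - normalized kWh per 1,000 requests
    "fairness": 0.75,     # e.g. 1 - demographic parity gap
}
weights = {"statistical": 0.4, "operational": 0.3, "energy": 0.1, "fairness": 0.2}

print(round(multi_criteria_index(metrics, weights), 3))  # -> 0.714
```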
Towards responsible AI deployment:
- Collaboration and best practices: Collaboration and the adoption of international best practices are essential.
- Information system: An interoperable, usage-oriented information system is needed to fully exploit the benefits of AI.
- Holistic approach: The approach must be holistic, integrating talent development, practice innovation and information system evolution.
In summary, the implementation of AI in finance represents a significant opportunity to improve efficiency and risk management, but it requires a cautious approach, appropriate regulation, ongoing training and careful attention to data quality and the evaluation of results.