ALEXEI MARKOVITS
Artificial intelligence is changing the world around us. Every day, AI systems are buying and selling millions of financial instruments, assessing insurance claims, assigning credit scores and optimizing investment portfolios. As applications grow, it is not enough for AI systems to perform well. We need to understand how they work so we can trust them enough to use them to their full potential.
Unlike previous technologies, modern AI poses a particular challenge: how and why it works isn't always obvious, even to the technology's creators. Many of the advanced machine learning algorithms that power AI systems are inspired by the human brain, yet they lack the human ability to explain their actions or reasoning.
Thankfully, there’s an entire research field working towards describing the rationale behind AI decision-making: Explainable AI (XAI). Momentum in the field is growing as AI systems demonstrate performance and capabilities far beyond previous technologies, but encounter hurdles of practicality and legal compliance. For companies putting AI to work, XAI will be a key factor in successful implementations.
Explainability techniques will prove to be especially valuable in financial services, where the low signal-to-noise ratio typical of financial data demands a strong feedback loop between user and machine. AI solutions that leave no room for human feedback to guide outputs risk never being adopted, with users falling back on traditional approaches that rely on domain expertise and experience honed over many years. Regulation, too, raises the stakes by preventing AI-powered products from even entering the market if they are not auditable.
Market forecasting and investment management
Time series forecasting methods have grown in prominence across financial services. They are useful for predicting asset returns, econometric data, market volatility and bid-ask spreads, to name a few. But their success is limited by their dependence on historical values: because they incorporate little of the disparate, timely information available on any given day, using time series alone to predict the most likely value of a stock or of market volatility is very challenging. Complementing these models with explainability methods could allow users to understand the key signals the model uses in its prediction and interpret the output based on their own complementary view of the market. This would in turn enable a synergy between the domain expertise of finance specialists and the big-data-crunching abilities of modern AI.
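As a minimal sketch of what "surfacing the key signals" could look like in practice, the example below fits a forecasting model on synthetic lagged returns and two made-up exogenous signals, then ranks them with permutation importance. The data, feature names and model choice are illustrative assumptions, not a description of any particular production system.

```python
# Minimal sketch: ranking the signals behind a time-series forecast.
# All data and feature names below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n, lags = 500, 5

# Synthetic daily returns plus two hypothetical exogenous signals.
returns = rng.normal(0, 0.01, n)
volatility = np.abs(rng.normal(0.02, 0.005, n))
volume = rng.normal(1.0, 0.1, n)

# Features: the previous 5 returns, plus yesterday's volatility and volume.
X = np.column_stack(
    [returns[i : n - lags + i] for i in range(lags)]
    + [volatility[lags - 1 : n - 1], volume[lags - 1 : n - 1]]
)
y = returns[lags:]
feature_names = [f"return_lag_{lags - i}" for i in range(lags)] + [
    "volatility_lag_1",
    "volume_lag_1",
]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: how much forecast quality degrades when each signal is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>16}: {score:.4f}")
```

A ranking like this gives a finance specialist something concrete to challenge or confirm against their own view of the market, rather than a bare point forecast.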
Explainability techniques enable similar human-in-the-loop AI solutions for selecting a portfolio. An investor might not pick the suggested portfolio with the highest reward if the risk associated with it seems too large. On the other hand, a system that also provides a detailed explanation of the risks, e.g., how they are uncorrelated with the market, would be a powerful investment planning tool.
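One simple way to explain whether a candidate portfolio's risk is correlated with the market is to split its variance into a market-driven (beta) component and an idiosyncratic component. The sketch below uses made-up return series to illustrate the decomposition; the numbers carry no real-world meaning.

```python
# Minimal sketch, with synthetic numbers: explaining a candidate portfolio's risk
# by splitting it into market-correlated (beta) and idiosyncratic components.
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, 252)                      # one year of daily market returns
portfolio = 0.3 * market + rng.normal(0.0008, 0.012, 252)   # candidate portfolio returns

beta = np.cov(portfolio, market)[0, 1] / np.var(market)
residual = portfolio - beta * market

total_var = np.var(portfolio)
market_var = beta ** 2 * np.var(market)
idio_var = np.var(residual)

print(f"beta to market: {beta:.2f}")
print(f"share of variance from market exposure:  {market_var / total_var:.1%}")
print(f"share of variance that is idiosyncratic: {idio_var / total_var:.1%}")
```

Presenting the risk this way lets the investor judge whether a high-reward portfolio is risky because it simply leans on the market or because of bets that are largely uncorrelated with it.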
Credit scoring
Assigning or denying credit is a consequential decision that is well regulated to ensure fairness. Many opportunities for AI in credit scoring depend on the system's ability to provide a robust explanation of its recommendations. Beyond compliance, the value of XAI can be seen for both the client and the financial institution: clients can receive explanations that give them the information they need to improve their credit profile, while service providers can better understand predicted client churn and adapt their services. XAI in credit scoring can also help with derisking; for instance, an XAI model might provide an explanation of why a pool of assets has the best distribution to minimize the risk of a covered bond.
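For a transparent model class such as logistic regression, per-applicant explanations can be read off directly: each feature's signed contribution to the score becomes a "reason code" the client can act on. The sketch below is purely illustrative, with invented features and synthetic applicants rather than a real scoring model.

```python
# Minimal sketch (invented features, synthetic data): turning a logistic-regression
# credit model's weights into per-applicant reason codes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
feature_names = ["utilization", "late_payments", "income", "account_age_years"]

# Synthetic applicants: default risk driven mostly by utilization and late payments.
X = np.column_stack([
    rng.uniform(0, 1, 1000),     # credit utilization
    rng.poisson(0.5, 1000),      # number of late payments
    rng.normal(60, 15, 1000),    # income (k$)
    rng.uniform(0, 20, 1000),    # account age (years)
])
y = (2.5 * X[:, 0] + 0.8 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 1, 1000) > 1.5).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Reason codes for one applicant: each feature's signed contribution to the log-odds of denial.
applicant = scaler.transform(X[:1])
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -t[1]):
    print(f"{name:>18}: {c:+.2f}")
```

The same idea (attributing a score to individual inputs) underlies post-hoc attribution methods for more complex models, which is what makes explanations actionable for the client and auditable for the regulator.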
Designing for explainability
Most current AI practice focuses on performance, with explainability dealt with as an afterthought. However, both the AI community and industry are coming to see the need for more transparent AI systems. As AI solutions evolve past contained proofs-of-concept toward deployment at scale, practitioners recognize the importance of prioritizing XAI to drive adoption, power effective human-AI collaboration and satisfy audit and regulatory needs.
Our new discussion paper offers a primer on XAI and on why applying it successfully means taking a user-centric approach that starts at the beginning of solution development. As we explore in the paper, designing for explainability requires evaluating the need for transparency across an AI system and taking it into account from the first steps of building a solution through to the system rollout.