Have you ever wondered what happens behind the scenes when banks navigate ever-shifting regulatory storms? In this interview, we sit down with Abhishek Nagesh, a finance veteran who’s spent more than 15 years mastering everything from BCBS and PRA requirements to US tri-agency rulebooks. He has guided major institutions through credit risk modeling, counterparty exposures, and market-risk calculations while maintaining rock-solid financial reporting. Today, he shares how he turns dense oversight frameworks into actionable strategies and what challenges lie ahead for enterprise risk.
How are artificial intelligence and machine learning reshaping how financial institutions approach credit and market risks?
Artificial intelligence and machine learning are fundamentally transforming how banks approach credit and market risks today.
Banks are transitioning from manual, subjective qualitative assessments to instantaneous, data-driven, automated credit risk evaluations. For example, machine learning models now assist in obligor risk rating (credit scoring) by analyzing exponentially more data about a borrower than a human could: not just financial statements, but transaction and internet search history, spending patterns, market trends, and even news and social media interactions. This means credit decisions such as loan approvals can happen much faster.
Multiple AI tools, such as natural language processing (NLP), machine learning (ML), and robotic process automation (RPA), are increasingly tasked with collaborating to provide a simple “go or no-go” recommendation on prospective loan exposures. Banks already use ML for fraud detection and anti-money laundering oversight, increasingly catching suspicious patterns that traditional methods miss. This builds an early warning system in which AI can detect subtle changes in a customer’s behavior, or broader economic signals, that suggest rising risk, allowing the bank to act in real time.
What specific algorithms or models are most effective in predicting and managing financial risk today?
Today’s risk management leverages various advanced algorithms and models, each suited to particular types of risk. Here are some of the most effective ones in use:
Banks are increasingly using supervised learning algorithms, such as decision trees, random forests, and neural networks, to predict the probability of default for loans and credit portfolios. These models often outperform traditional regression-based risk scorecards by capturing nonlinear interactions and a wider range of data. For instance, a gradient boosting model can analyze hundreds of borrower attributes (income, spending patterns, even social data where allowed) to produce a more precise credit risk score. By using ML models, banks can approve credit more confidently and identify risky loans earlier.
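As a minimal sketch of the gradient-boosting approach described above, the snippet below trains scikit-learn's `GradientBoostingClassifier` on synthetic borrower data; the feature names, the toy default rule, and the applicant values are illustrative assumptions, not a production scorecard.

```python
# Sketch: gradient-boosted probability-of-default scoring on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 2000
# Illustrative borrower attributes: income, credit utilization, late payments.
income = rng.normal(60_000, 15_000, n)
utilization = rng.uniform(0, 1, n)
late_payments = rng.poisson(0.5, n)
X = np.column_stack([income, utilization, late_payments])

# Toy ground truth: default risk rises with utilization and late payments.
p_default = 1 / (1 + np.exp(-(3 * utilization + late_payments - 3)))
y = rng.random(n) < p_default

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a new applicant: estimated probability of default in [0, 1].
pd_score = model.predict_proba([[55_000, 0.9, 2]])[0, 1]
```

In practice the feature set would be far richer and the model would be validated against fairness and stability requirements before any lending decision relied on it.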
Financial fraud and money laundering are areas where AI models shine today. Techniques such as clustering, outlier detection, and neural networks (especially autoencoders) are employed to identify unusual transaction patterns that may indicate money laundering or illicit activity. These algorithms don’t necessarily predict a specific outcome; instead, they monitor behaviors and raise red flags when something looks off compared to a baseline.
A model can project what a regular spending pattern on a credit card looks like for an individual and then instantly spot when a series of transactions doesn’t fit that pattern (potentially indicating a stolen card or account takeover). Similarly, banks utilize network analysis algorithms to identify rings of transactions that suggest potential money laundering. These AI-driven systems are far more effective than static rules because they can adapt to new fraud tactics as they emerge by recognizing shifts in the data.
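To make the outlier-detection idea above concrete, here is a minimal sketch using scikit-learn's `IsolationForest`; the transaction features, amounts, and contamination rate are illustrative assumptions, not a real fraud model.

```python
# Sketch: flag card transactions that don't fit a customer's baseline pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline: the customer's typical transactions (amount in $, hour of day).
normal = np.column_stack([rng.normal(40, 10, 500), rng.normal(14, 3, 500)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New activity: two routine purchases and one large 3 a.m. withdrawal.
new = np.array([[38.0, 13.0], [45.0, 15.0], [900.0, 3.0]])
flags = detector.predict(new)  # +1 = looks normal, -1 = anomaly
```

Because the detector learns the baseline from data rather than from hand-written rules, it can adapt as the customer's normal behavior shifts, which is the property the interview highlights.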

How can AI-driven tools improve the accuracy and efficiency of risk assessments compared to traditional methods?
AI-driven tools often outperform traditional risk management methods in both accuracy and efficiency. AI can sift through thousands of publicly available disclosures about an entity or counterparty and identify a hidden trend indicating inefficiencies or imbalances in credit profiling, which forms the basis of credit strategy. Traditional methods, which rely on sampled data or simpler statistical models, may overlook these nuances because a human expert cannot match the speed and scale of machine analysis.
By capturing such details, AI provides a more accurate picture of risk, leaving fewer blind spots and surprises. In short, decisions based on AI analysis are informed by vast amounts of data, which enhances their accuracy. Tasks that once took weeks, such as building macroeconomic factor simulations, are now automated through seamless data integration and embedded algorithmic processes, and can be completed in minutes.
Many banks have now piloted the use of robotic process automation (RPA) to auto-fill regulatory report forms and gather data from various platforms. This automation streamlines reporting and minimizes human errors. Similarly, AI-driven risk models can recalculate exposures or simulate instantaneous market shocks in real-time, providing risk managers with up-to-the-minute insights rather than waiting for end-of-day reports. This efficiency enables banks to respond to changing conditions more quickly, providing a competitive advantage in today’s rapidly evolving markets.
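As a hedged illustration of the real-time shock simulation mentioned above, the sketch below revalues a toy two-asset portfolio under simulated one-day return scenarios and reads off a Value-at-Risk figure; the positions, volatilities, and correlation are illustrative assumptions, not any bank's actual parameters.

```python
# Sketch: Monte Carlo shock simulation for a toy two-asset portfolio.
import numpy as np

rng = np.random.default_rng(7)
positions = np.array([1_000_000.0, 500_000.0])  # USD exposure per asset
vols = np.array([0.02, 0.03])                   # assumed daily return vols
corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])                   # assumed return correlation
cov = np.outer(vols, vols) * corr

# Simulate 100,000 one-day return scenarios and revalue the portfolio.
returns = rng.multivariate_normal(np.zeros(2), cov, size=100_000)
pnl = returns @ positions

# 99% one-day Value-at-Risk: the loss exceeded in only 1% of scenarios.
var_99 = -np.percentile(pnl, 1)
```

The point is not the toy numbers but the turnaround: rerunning this kind of simulation on updated positions takes seconds, which is what enables the up-to-the-minute view the interview describes.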
AI tools also excel at continuous monitoring. For example, banks use AI to watch transactions and communications, flagging anomalies immediately and continuously. Traditional rule-based systems might generate too many false alarms or miss novel fraud tactics, whereas machine learning refined with human feedback can learn what “normal” behavior looks like and detect out-of-pattern events more accurately. Increasingly, AI also helps prevent insider market manipulation and abuse by monitoring trader chats and calls.
What are some of the potential dangers or blind spots that arise when relying on AI for regulatory compliance?
Relying heavily on AI for regulatory compliance without balanced human judgment can introduce new risks and blind spots that banks must manage carefully.
One significant danger is that, over time, AI models evolve into black boxes and can begin to give inconsistent results. If a machine learning model declines a loan or flags a transaction without a detailed, clear explanation, the bank is left exposed: compliance is more than making the right decision; banks have to prove they followed the rules and acted prudently.
AI models learn, unlearn, and re-learn from data, and historical datasets can be biased or incomplete. If there were inadvertent biases in past lending, a credit risk AI might perpetuate or even amplify them, for example by disadvantaging groups of borrowers based on ethnicity, which creates both compliance and ethical issues.
Conversely, there can be blind spots in the data: if a specific type of risk never occurred in the past, the AI won’t know how to spot it in the future. This is particularly worrying for regulatory compliance, where new rules might target issues that weren’t prevalent before; the AI might simply fail to flag a compliance issue because it has no precedent for it in its training data. That’s why regulators often emphasize examining AI models for bias and completeness, and precisely why human oversight is needed to catch what the AI might miss.
How should financial institutions balance automation with human oversight in risk-related decision-making?
Banks should aim for a human-machine joint venture in which AI handles heavy data processing and humans provide strategic guidance and judgment. The idea is to let AI do what it does best, i.e., processing vast amounts of information and identifying patterns and connections, while humans do what they do best: understanding context, making nuanced decisions, and ensuring ethical standards are met. Thus, successfully adopting AI means combining expert human judgment with AI analytics.
In practice, balancing automation and oversight might look like this: an AI system combs through thousands of contracts or derivative agreements and flags a handful as high risk based on complex patterns it has found. Instead of automatically acting on, say, increased margin call payments, a human credit officer reviews the AI’s findings, checks for any factors the model might not fully comprehend, and then makes the final decision.
What role does explainability and transparency play in adopting AI for risk management, especially in highly regulated environments?
Explainability and transparency are crucial to AI applications in risk management, particularly in banking, where regulators and stakeholders demand consistent clarity. In such a highly regulated environment, banks should always be able to explain how a proprietary model works and why it reached a particular conclusion; otherwise, both internal risk committees and external regulators will lack confidence in the AI and view it as a hindrance rather than a help. That is why banks have specialized Model Risk Management departments whose primary role is to ensure the robustness of all of a bank’s models, including AI.
They work closely with regulators on AI model approval, with a heavy emphasis on explainability as set out in the Federal Reserve’s SR 11-7 guidance on model risk management. This mandate is pervasive and cuts across FR Y-9C and FR Y-14A reporting for spot and projected regulatory capital requirements under the Dodd-Frank Act and CCAR stress testing.
If the model is too complex to interpret, regulators might mandate techniques such as model simplification or surrogate models for explanation, or require a more transparent but rudimentary model over a slightly more accurate but opaque one, just to satisfy the need for clarity. The role of transparency here is to build trust: regulators are more likely to green-light, and even allow reliance on, an AI tool if they see a clear line of reasoning in its decisions. Transparency is also essential for the banks themselves.
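The surrogate-model idea mentioned above can be sketched briefly: train a transparent, shallow decision tree to mimic a black-box model's outputs, then inspect the tree. The features, the random forest standing in for the opaque model, and the fidelity check are all illustrative assumptions.

```python
# Sketch: a shallow decision tree as a surrogate explaining a black-box model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (1000, 3))                 # illustrative risk features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # toy risk label

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the transparent surrogate on the black box's PREDICTIONS, not on y,
# so the tree describes the black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
rules = export_text(surrogate, feature_names=["f0", "f1", "f2"])
```

The printable `rules` give risk committees a human-readable approximation of the opaque model's logic, and the fidelity score quantifies how faithful that approximation is.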
Risk Managers
Risk managers need to understand and defend the AI’s recommendations. For example, suppose an AI model flags a specific collared exotic barrier option trading strategy on Brent as too risky due to our firm’s profile, the exponentially extensive parameter usage, and the computational intensity involved in continuously managing the risk exposure, and suggests reducing the exposure. In that case, the risk manager must be able to explain to regulators (and upper management) why the AI is making that claim.
If the AI can highlight that this option portfolio is overly correlated with crude but underrepresents a strategic shift to shale in the near term, and our models predict a potential downturn in oil futures, then that is a transparent explanation that people can debate and act on. If all the team gets is a cryptic score or directive with no reasoning, they’re less likely to trust it. Thus, explainability makes AI recommendations more actionable.
Can you share some best practices for integrating new risk management technologies into legacy financial systems?
Integrating new risk technology into legacy systems is a challenge all banks face, as it disrupts critical day-to-day operations. Here are some best practices for making this integration successful:
Legacy systems often contain decades’ worth of data, sometimes in formats that new AI tools can’t readily use. The first step is to digitize and standardize all data lakes and comprehensively map and document the mappings, for example by converting historic loan and commitment documents (even those in foreign languages or paper scans) into modern, machine-readable formats.
AI can work seamlessly only when the information available to it is in a standardized format. Similarly, banks have code and models written in older programming languages; using tools (even generative AI) to translate legacy code into modern languages can help new systems interface with old ones and contributes to the efficiency of AI. By cleaning up data and documentation upfront, you ensure that your shiny new AI risk model isn’t running blind or getting tripped up by inconsistent inputs from legacy sources.
Robotic Process Automation
Many banks utilize robotic process automation (RPA) as a bridge between their new and existing infrastructure. RPA bots can interact with legacy software just like a human user, e.g., retrieving reports, entering data, and so on. The benefit is that you can layer new technology on top of existing systems without undergoing deep, disruptive, and painstaking bottom-up integration right away. This approach enables your new risk management AI to obtain the necessary information (via RPA pulling data from an existing core banking system, for instance) and even execute actions (such as feeding results back into another system) without requiring a complete overhaul of the underlying infrastructure. It’s like building adapters so new tools can plug into the old infrastructure gradually.
As it’s a heavy lift to roll out a big new risk system all at once, it’s usually more effective to pilot the technology in a controlled area/sandbox first. Banks have been piloting AI in high-impact areas, such as stress testing, credit modeling, loan documentation creation, and reviewing bilateral derivative contracts, and then scaling up gradually.
Risk Compliance
It also helps to involve risk, compliance, and regulators early in the process; close collaboration with these stakeholders is critical for large-scale adoption. For example, if a bank is developing an AI for automated regulatory reporting or capital calculations, it should keep regulators informed and seek their feedback on the approach.
Regulators don’t like big surprises. If they understand that your new system improves accuracy and you can demonstrate its reliability, they’re more likely to be supportive when you go live. Some banks even share pilot results or validation reports with regulators proactively. Internally, involving compliance teams ensures the new tech actually meets regulatory requirements and doesn’t inadvertently break any rules. This preempts many issues and smooths the integration path.
As technology continues to evolve, how should financial institutions prepare to adapt their risk strategies over the next five to ten years?
The rapid pace of technological advancement and the emergence of new risks mean that financial institutions must adapt their risk management strategies to be future-proof and flexible. Bank risk strategies should explicitly adopt AI tools, not just as experiments on the side, but integrated into the core risk framework. Over the next 5-10 years, we can expect further Basel regulatory revisions, new market shocks, and cross-risk prudential capital requirements, among other developments, which will likely require handling even more data and complexity.
By building capabilities in machine learning, big data analytics, and automation now, banks lay the groundwork to tackle those future challenges. In practical terms, this could mean setting up dedicated AI MRM teams within risk management, investing in modern data infrastructure, and continually upgrading models with the latest techniques. Institutions that treat AI and advanced analytics as a strategic asset in risk (many are already doing so) will have a significant edge in resilience and adaptability. The next frontier of risk management goes beyond traditional credit or market risk.
Cross-Disciplinary Risks
Banks should prepare to address cross-disciplinary risks, including climate change impacts on portfolios, cybersecurity threats to financial systems, risks associated with fintech innovations, and volatility in crypto-assets. These areas are converging as new risk domains that will become part of mainstream risk regulations in the future. A forward-looking risk strategy over the next decade should start incorporating these areas.
For instance, regulators are already talking about climate stress tests for banks. Being proactive here, say, by running internal climate risk assessments or monitoring crypto exposures even if not yet required, will prepare institutions for when these risks formally enter the regulatory sphere. Most importantly, regulators will not be able to arrive at rules and guidance quickly without the support of AI.
For example, it’s well known that AI and crypto data centers consume large amounts of water to keep systems below temperature thresholds at all times, which runs counter to climate mandates. Yet AI and crypto are expected to provide benefits in numerous areas, such as delivering insights that enable greener banking, and the adoption of crypto and blockchain technology would move the world away from paper toward end-to-end electronic data storage. In the end, is there a net benefit or a net cost? That is the crucial question AI can help the world answer.
Photo by Justin Ortega; Unsplash
Kyle Lewis is a seasoned technology journalist with over a decade of experience covering the latest innovations and trends in the tech industry. With a deep passion for all things digital, he has built a reputation for delivering insightful analysis and thought-provoking commentary on everything from cutting-edge consumer electronics to groundbreaking enterprise solutions.