The AI in Lending Governance Gap: Why SR 26-2, the New Reg B, and ECOA AA Requirements Are the Same Problem

The last couple of weeks have brought a pair of announcements that, combined with previously standing guidance, could be misconstrued as separate compliance projects. In reality, they are not. We’re talking about the jointly issued SR 26-2, the updates to Regulation B under ECOA, and standing guidance on Adverse Action Notices — one of the few pieces of guidance that was not rescinded in 2025.

Here’s a quick recap: on April 17, 2026, the Federal Reserve, OCC, and FDIC jointly issued SR 26-2, replacing SR 11-7 and the legacy model risk management framework with principles-based guidance built around materiality and risk. On April 22, 2026, the CFPB published its rewrite of Regulation B, removing disparate impact from ECOA, narrowing discouragement to intent, and tightening Special Purpose Credit Programs. SR 26-2 takes effect on its issuance terms; Reg B becomes effective July 21, 2026. Sitting alongside both is the CFPB’s 2022 circular and 2024 supplemental guidance on AI-driven credit denials, which require specific, accurate adverse action reasons regardless of model complexity — guidance that has not been rescinded and has been reinforced by examination practice.

For a bank, IMB, or fintech with any AI or ML in the lending stack, these three are not three projects. They are one problem viewed from three angles. The institution that treats them as separate workstreams will end up with a model risk program that satisfies SR 26-2 on paper, a fair lending program rebuilt around intent and proxy, and an adverse action process that produces compliant reason codes — and the three will not talk to each other. When an examiner or a plaintiff’s counsel pulls on any thread, the absence of a unified framework will be visible immediately — and it exposes the very risks the institution thought it had mitigated.

The institutions that get this right will treat the next twelve months as a chance to build that unified framework. The ones that don’t will spend the following twelve months explaining why their AI underwriting model has tier-two materiality governance, no documented proxy testing, and a generic “credit history insufficient” adverse action code that doesn’t actually map to what the model decided.

Where the three frameworks converge

Each of the three regimes asks a different surface question. Underneath, they are asking the same one.

SR 26-2 asks: is this model governed proportionately to its risk? Materiality tiering, conceptual soundness review, validation rigor, ongoing monitoring, and effective challenge all flow from how much the model matters and how it is used. The framework explicitly preserves third-party and vendor models in scope and elevates aggregate model risk — risk arising from dependencies across models that share data, assumptions, or methodologies — to a first-class governance concern. It carves generative AI and agentic AI out of MRM scope while making clear that institutions still have to govern those systems through other risk management practices.

The new Reg B asks: did the institution intend to discriminate, or knowingly use a proxy? With the effects test removed from ECOA, regulator and plaintiff scrutiny shifts to how decisions get made. Facially neutral criteria adopted as proxies for protected characteristics remain actionable as disparate treatment. That covers ML model features, alternative data attributes, geographic targeting, and any underwriting variable that correlates with protected classes without a documented business explanation.

The CFPB AA circular asks: can the consumer be told a specific, accurate reason? A model that declines an applicant has to produce reason codes that reflect what the model actually weighted. Generic codes — “credit history” or “insufficient information” — are insufficient if the model’s actual driver was something else. Complex algorithms, AI features, and vendor-supplied scores all must support compliant adverse action notices.

The same model can pass one of these tests and fail the others. A gradient-boosted underwriting model can be properly tiered and validated under SR 26-2, satisfy the new Reg B because it was developed without discriminatory intent, and still produce non-compliant adverse action reasons because the institution can’t explain which specific features drove individual declines. Or it can produce explainable adverse action reasons but fail Reg B because the most predictive feature turns out to be a proxy for race that no one tested for. Or it can satisfy both Reg B and the AA circular but fail SR 26-2 because nobody documented its conceptual soundness or independently validated it.

The artifacts the three frameworks need are the same artifacts. Feature documentation, business rationale for each variable, proxy testing against protected classes, performance monitoring across segments, adverse action reason mapping that ties model outputs to specific drivers, vendor due diligence that includes fairness testing, and ongoing monitoring that flags both performance drift and disparity drift. Building those artifacts three times for three different programs is wasteful at best and inconsistent at worst. Building them once, in a unified framework, is the only approach that scales.

The four points where the frameworks have to integrate

There are four specific points where the three regimes intersect, and where a unified governance framework either holds together or doesn’t.

Point one: model materiality has to incorporate fair lending materiality. SR 26-2’s materiality framework is built around model exposure (how significant the model’s outputs are to business decisions) and model purpose (whether it serves regulatory or risk management functions). For consumer credit models, fair lending exposure has to be a third dimension. A pricing model that drives small dollar amounts but produces statistically significant disparities across protected classes is a high-fair-lending-materiality model even if its dollar exposure is modest. The institutions that bolt fair lending tiering onto their MRM tiering after the fact end up with a two-track governance system that contradicts itself. The institutions that build it in once, at the materiality assessment stage, get a coherent framework.
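The tiering logic described above can be sketched in a few lines. This is a minimal illustration, not a regulatory formula: the dimension names, dollar thresholds, and the 1.25 disparity ratio are assumptions chosen for the example, and a real program would document its own.

```python
# Hypothetical sketch: folding fair lending exposure into a single materiality
# tier at the assessment stage, rather than bolting it on after MRM tiering.
# Thresholds and dimension names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ModelProfile:
    dollar_exposure: float        # annual decisioned volume, USD
    regulatory_purpose: bool      # feeds a regulatory or risk management function
    max_disparity_ratio: float    # worst adverse-outcome ratio across protected classes

def materiality_tier(m: ModelProfile) -> int:
    """Return 1 (highest) through 3 (lowest). Fair lending disparity can
    raise the tier even when dollar exposure is modest."""
    tier = 3
    if m.dollar_exposure > 100_000_000 or m.regulatory_purpose:
        tier = 2
    if m.dollar_exposure > 1_000_000_000:
        tier = 1
    # Fair lending dimension: a significant disparity forces the top tier
    # regardless of dollar exposure.
    if m.max_disparity_ratio > 1.25:
        tier = 1
    return tier

# A small-dollar pricing model with a large disparity lands in tier 1.
print(materiality_tier(ModelProfile(5_000_000, False, 1.4)))  # 1
```

Because all three dimensions feed one function, the two-track contradiction the paragraph warns about cannot arise: there is only one tier per model.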

Point two: validation has to include proxy testing as a standard component. Under SR 11-7, fair lending testing was often handled by a separate compliance function with its own analytics, on its own schedule, against its own data. SR 26-2’s emphasis on effective challenge — the quality of independent review rather than its organizational location — opens the door to integrating fair lending analytics directly into model validation. For high-materiality consumer credit models, every validation cycle should include proxy testing for protected class correlation in features, disparate outcome testing across segments, and benchmarking against less discriminatory alternatives where reasonable alternatives exist. The CFPB has signaled in recent guidance that the search for less discriminatory alternatives is itself an expectation in advanced model deployments. Pulling that into validation rather than treating it as a separate compliance exercise is more efficient and more defensible.
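One component of the validation step above, the feature-level proxy screen, can be illustrated with a simple correlation check. This is a sketch under stated assumptions: the feature names, the protected-class indicator, and the 0.3 review threshold are invented for the example, and production programs would use proxy methodologies such as BISG, formal statistical tests, and documented thresholds.

```python
# Illustrative proxy screen: flag features whose correlation with a protected
# class indicator exceeds a review threshold. All data and the 0.3 threshold
# are assumptions for the sketch.

import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def proxy_screen(features: dict, protected: list, threshold: float = 0.3):
    """Return (feature, r) pairs whose |correlation| with the protected
    indicator exceeds the threshold and therefore need documented review."""
    return sorted(
        (name, r) for name, values in features.items()
        if abs(r := pearson(values, protected)) > threshold
    )

protected = [1, 1, 0, 0, 1, 0, 0, 1]
features = {
    "zip3_density": [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.2, 0.9],  # tracks the flag
    "months_on_file": [20, 50, 45, 25, 30, 35, 35, 40],
}
for name, r in proxy_screen(features, protected):
    print(f"{name}: r={r:.2f}")  # zip3_density: r=0.91
```

Run inside every validation cycle for high-materiality consumer credit models, a screen like this turns the fair lending question into a standard validation artifact rather than a separate compliance exercise.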

Point three: adverse action reason code mapping has to be a model artifact, not a downstream translation. The CFPB’s circular makes clear that the institution is responsible for specific, accurate adverse action reasons regardless of model complexity. In practice, many institutions handle this by mapping model outputs to a fixed set of generic reason codes after the fact — a translation layer that sits between the model and the consumer. This is fragile under both Reg B and the AA circular. The integrated approach builds reason code mapping into model documentation itself, so that for any given decline the institution can produce the specific factors that drove the decision, not a generic post-hoc translation. SR 26-2 supports this directly: documentation of model purpose, design, limitations, and outputs is a core expectation, and adverse action mapping fits naturally into that documentation. The institutions that treat AA reason code generation as a model output rather than a compliance translation will be in a significantly stronger position.
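The difference between a translation layer and a model artifact can be made concrete. In the sketch below, every feature carries a documented consumer-facing reason at build time, and a decline emits the reasons behind its largest adverse contributions; the feature names, reason text, and attribution values are illustrative assumptions, not a specific vendor’s output.

```python
# Sketch of reason code generation as a model artifact: the feature-to-reason
# mapping is part of model documentation, and each decline reports the reasons
# for its top adverse contributions (e.g., SHAP values toward "decline").
# All names and values here are illustrative assumptions.

# Build-time artifact: feature -> specific, consumer-facing reason text.
REASON_MAP = {
    "revolving_utilization": "Proportion of balances to credit limits is too high",
    "recent_delinquency":    "Delinquency on accounts in the last 12 months",
    "inquiries_6mo":         "Number of recent credit inquiries",
    "thin_file":             "Limited credit history",
}

def adverse_action_reasons(contributions: dict, top_n: int = 2) -> list:
    """Return documented reasons for the top_n adverse (positive) contributions.
    An unmapped contributing feature raises KeyError: a mapping gap is a
    documentation defect, not something to paper over with a generic code."""
    adverse = sorted(
        ((v, f) for f, v in contributions.items() if v > 0), reverse=True
    )
    return [REASON_MAP[f] for _, f in adverse[:top_n]]

# Per-decline attributions for one applicant.
contribs = {"revolving_utilization": 0.42, "inquiries_6mo": 0.15,
            "recent_delinquency": 0.31, "thin_file": -0.05}
print(adverse_action_reasons(contribs))
```

The design choice worth noting: failing loudly on an unmapped feature forces the documentation gap to surface at development time, where it belongs, instead of at notice time as a generic code.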

Point four: vendor and fintech model governance has to span all three frameworks simultaneously. SR 26-2 keeps vendor and fintech models squarely in scope and ties model risk management to third-party risk management. The new Reg B applies to whoever makes the credit decision, regardless of who built the model. The CFPB AA circular is explicit that “the algorithm did it” or “the vendor’s model declined them” is not an acceptable response. For institutions running fintech-supplied scores, vendor underwriting models, or BaaS partner decisioning, this means vendor due diligence has to include all three lenses. Contracts have to support all three. Monitoring has to cover all three. Treating vendor MRM, vendor fair lending, and vendor AA compliance as three separate vendor management workstreams produces gaps that examiners and plaintiffs will exploit.

What about generative and agentic AI

This is the part of the picture almost no one is writing about cleanly, and it is where the most exposure currently sits.

SR 26-2 explicitly excludes generative AI and agentic AI from its scope as “models,” directing institutions to apply their broader risk management and governance practices instead. The agencies have signaled they plan to issue a Request for Information on MRM generally and on bank use of AI, GenAI, and agentic AI in the near future — meaning the carve-out is not permanent, just unfilled.

In the meantime, generative and agentic AI systems used in lending sit in a genuinely uncovered intersection. They are out of MRM scope. They are within Reg B’s scope if they touch credit decisions, marketing, applicant communications, or adverse action processes — and the discouragement framework explicitly covers oral and written statements, including digital content, which generative chatbots and AI-driven marketing produce. They are within the AA circular’s scope if they influence credit decisions in any meaningful way. And they are subject to whatever fair lending exposure flows from disparate treatment via proxy.

A bank that deploys a generative AI assistant to help loan officers structure adverse action explanations, or to draft marketing copy, or to handle applicant inquiries, is operating in this uncovered space. So is a fintech using an LLM-based system to triage applications or generate reason codes. The institutions that govern these systems well over the next twelve months will be using NIST’s AI Risk Management Framework, the Colorado AI Act, the Texas Responsible AI Governance Act, and the EU AI Act as bridge frameworks — building governance that the eventual federal guidance can plug into rather than retrofitting after the fact.

The simple rule for this uncovered intersection: any generative or agentic system that touches a credit decision, an applicant communication, an adverse action process, or marketing content has to be governed under a parallel framework that produces the same artifacts a unified MRM/Reg B/AA framework produces — feature documentation, proxy testing, output explainability, performance monitoring, and disparate impact testing. Calling it “out of scope” because SR 26-2 says so is exactly the trap the agencies were trying to prevent.
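The rule above reduces to a simple gate, sketched here under stated assumptions: the covered-surface names and artifact names mirror the lists in the text, but the set-based check itself is an illustration, not a regulatory test.

```python
# Hedged sketch of the gating rule: a GenAI/agentic system that touches any
# covered surface must show the same artifact set a unified MRM/Reg B/AA
# framework produces. Surface and artifact names are assumptions taken from
# the surrounding text for illustration.

COVERED_SURFACES = {"credit_decision", "applicant_communication",
                    "adverse_action", "marketing"}
REQUIRED_ARTIFACTS = {"feature_documentation", "proxy_testing",
                      "output_explainability", "performance_monitoring",
                      "disparate_impact_testing"}

def governance_gaps(touches: set, artifacts_on_file: set) -> set:
    """Empty set means the system clears the gate; otherwise, the artifacts
    still missing before deployment."""
    if touches & COVERED_SURFACES:
        return REQUIRED_ARTIFACTS - artifacts_on_file
    return set()  # genuinely outside the covered surfaces

# An LLM chatbot answering applicant questions with only monitoring in place:
gaps = governance_gaps({"applicant_communication"}, {"performance_monitoring"})
print(sorted(gaps))
```

Calling a system “out of scope” in this sketch requires that it touch none of the covered surfaces, which is exactly the discipline the paragraph argues for.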

What to actually do

A pragmatic checklist for the next twelve months:

  1. Build a unified governance taxonomy. Define materiality across SR 26-2, fair lending, and AA dimensions in a single tiering framework. Stop running parallel inventories.
  2. Integrate proxy testing into model validation. For every high-materiality consumer credit model, validation cycles should include fair lending analytics — disparate outcome testing, feature-level proxy correlation, and less discriminatory alternative analysis where appropriate.
  3. Make adverse action reason codes a model output, not a translation layer. Document at the model level which specific factors drive declines, and ensure adverse action notices reflect what the model actually weighted.
  4. Apply the unified framework to vendor and fintech models. Vendor due diligence, contracts, validation strategy, and ongoing monitoring all have to cover MRM, fair lending, and AA dimensions simultaneously.
  5. Stand up a parallel GenAI/agentic governance track. Use NIST AI RMF and applicable state AI laws as the framework. Produce the same artifacts a unified MRM program would produce. Plan for the eventual federal RFI and resulting guidance to slot in.
  6. Map dependencies. SR 26-2’s elevation of aggregate model risk means model inventory is now a dependency graph. Identify shared data sources, shared features, and shared assumptions across models that influence credit decisions.
  7. Document the unified framework as a single program. When examiners or counsel ask how the institution governs AI in lending, the answer should be one framework with three regulatory lenses, not three programs that occasionally talk to each other.
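Checklist item 6 can be sketched as a small dependency graph over the model inventory. The model and data-source names are illustrative assumptions; the point is that once shared inputs are recorded as edges, concentration risk becomes a query rather than a discovery.

```python
# Minimal sketch of a model inventory as a dependency graph: models that share
# data sources or feature sets are linked, so a change to one input surfaces
# every credit-decision model it touches. All names are illustrative.

# Inventory entries: model -> shared inputs it consumes.
inventory = {
    "underwriting_gbm": {"bureau_tradelines", "cashflow_features"},
    "pricing_model":    {"bureau_tradelines", "rate_sheet"},
    "vendor_score":     {"bureau_tradelines"},
    "marketing_model":  {"clickstream"},
}

def impacted_models(shared_input: str) -> list:
    """All models that depend on a given data source or feature set."""
    return sorted(m for m, deps in inventory.items() if shared_input in deps)

# One shared bureau feed touches three decisioning models at once; that
# concentration is the aggregate model risk SR 26-2 elevates.
print(impacted_models("bureau_tradelines"))
```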

The bottom line

SR 26-2, the new Reg B, and the CFPB AA circular look like three separate regulatory developments. For any institution with AI or ML in the lending stack, they are one governance problem with three regulatory expressions. The institutions that build a unified framework — one model inventory, one validation discipline that includes fair lending, one adverse action infrastructure that ties to model documentation, one vendor governance approach that spans all three regimes, and one parallel track for GenAI and agentic systems — will be in a genuinely defensible position.

The institutions that treat them as three projects will produce three programs that contradict each other under examination pressure.

This is the kind of integration moment where the gap between institutions that build cleanly and institutions that bolt on compounds quickly. The next twelve months will determine which side of that gap each institution lands on.

David is the Founder and Managing Principal of Arq Advisory LLC, a Service-Disabled Veteran-Owned Small Business specializing in consumer financial regulatory compliance consulting. He is a former CFPB Senior Commissioned Examiner and holds the Mortgage Banker Association Accredited Mortgage Professional and Certified Mortgage Compliance Professional Designations.
