Thinking out loud about disparate impact, proxy disparate treatment, and why fair lending analytics still matter as operational intelligence
Where I Started
With the CFPB’s recent Regulation B amendments and the revised Section 1071 rule, I have been spending a fair amount of time thinking about what is actually going on for lenders right now. Not the legal analysis — there is plenty of that floating around — but the operational reality. When the regulatory floor moves, what should a lender actually do differently? And maybe more importantly, what should a lender choose not to do differently, even though it now legally could?
Those changes are not identical in legal effect, but they point in the same practical direction: lenders are reassessing what demographic, proxy, and outcomes‑based analysis is still required, what is still prudent, and what may be strategically useful even when it is not compelled.
That second category is the one I keep coming back to. Because the more I work through it, the more I think the answer is less obvious than the early industry chatter has made it sound.
The Thing That Struck Me
Here is what kept catching my attention as I worked through the changes. Disparate impact and disparate treatment — particularly disparate treatment by proxy — often look at the same set of attributes. Income. Debt‑to‑income. Credit history. Employment tenure. Loan amount minimums. Many of the same variables that appear in an effects‑based fair‑lending review can also become relevant in a proxy‑disparate‑treatment analysis if they are intentionally designed or applied as substitutes for prohibited characteristics.
The difference is intent. The difference is what the lender knows, what the lender decided, and how the lender is using the information. The attributes themselves do not change when the legal theory changes. The mechanics of how those attributes interact with how income, wealth, and credit history are actually distributed across the population do not change either.
That feels like an important point to sit with. Because if the attributes are the same and the underlying mechanics are the same, then the question of whether to keep paying attention to them is not really a question about which legal theory is currently in fashion. It is a question about whether understanding how your own underwriting model behaves is valuable in its own right.
What the Research Actually Shows
The empirical work is more developed than many lenders appreciate. One widely cited Federal Reserve study on credit scoring, Does Credit Scoring Produce a Disparate Impact?, found evidence of limited disparate impact by age, with certain credit‑history variables appearing to lower scores for older consumers and raise them for younger consumers. Other research on household debt, credit access, and earnings distributions underscores a broader point that matters for underwriting: facially neutral ratios and thresholds can interact with uneven income, wealth, credit‑history, and debt distributions in ways that produce uneven outcomes. Even where nominal debt loads are similar, differences in income distributions can change the ratio itself.
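The ratio point is simple enough to show in a few lines. A toy sketch, with entirely hypothetical numbers and a cutoff used only as an example:

```python
# Toy illustration: identical monthly debt, different incomes.
# A fixed DTI cutoff filters the two applicants differently even though
# their nominal debt loads are the same. All numbers are hypothetical.

def dti(monthly_debt: float, monthly_income: float) -> float:
    """Debt-to-income ratio as a fraction of gross monthly income."""
    return monthly_debt / monthly_income

CUTOFF = 0.43  # an example threshold, not a statement of any lender's policy

applicant_a = dti(1_500, 4_200)  # same debt, higher income
applicant_b = dti(1_500, 3_200)  # same debt, lower income

print(f"A: DTI {applicant_a:.3f}, passes cutoff: {applicant_a <= CUTOFF}")
print(f"B: DTI {applicant_b:.3f}, passes cutoff: {applicant_b <= CUTOFF}")
```

The debt side of the ratio is identical; only the income distribution differs, and the threshold does the sorting.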
The insurance industry has been wrestling with a similar issue in pricing and risk classification. Concepts from the algorithmic‑fairness literature — fairness through unawareness, demographic parity, and conditional demographic parity — are attempts to address a basic analytical problem: removing a protected‑class field from a model does not necessarily remove protected‑class information from the model’s output if other variables operate as proxies. That literature is worth reading. The lending side has tended to talk about these dynamics in compliance language. The insurance side has been forced to talk about them in pricing and risk language, which I think is actually closer to where the lending conversation needs to go.
The existence of meaningful proxy relationships in real‑world data is not seriously debatable. What remains contested is how much legal weight, operational significance, and remedial action should attach to them. That same literature also makes clear that eliminating proxy discrimination and achieving group‑level parity are not the same thing, and sometimes cannot be satisfied simultaneously.
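The metrics named above are easier to reason about in code than in prose. A minimal sketch on synthetic data, showing why "fairness through unawareness" can fail: the group column is dropped from the model's inputs, but a correlated proxy variable reproduces the same split. Every row and variable name here is invented for illustration:

```python
# Demographic parity vs. "fairness through unawareness" on synthetic data.
# Rows are (group, proxy_flag, approved); all values are made up.
rows = [
    ("g1", 1, 1), ("g1", 1, 1), ("g1", 1, 1), ("g1", 0, 0),
    ("g2", 0, 0), ("g2", 0, 0), ("g2", 0, 0), ("g2", 1, 1),
]

def approval_rate(rows, column, value):
    """Share of approvals among rows where the given column equals value."""
    matched = [r for r in rows if r[column] == value]
    return sum(r[2] for r in matched) / len(matched)

# Demographic parity compares approval rates across groups directly.
parity_gap = approval_rate(rows, 0, "g1") - approval_rate(rows, 0, "g2")

# Dropping the group column does not help if a proxy carries the same
# information: conditioning on the proxy flag recovers the disparity.
proxy_gap = approval_rate(rows, 1, 1) - approval_rate(rows, 1, 0)

print(f"approval gap by group: {parity_gap:.2f}")
print(f"approval gap by proxy: {proxy_gap:.2f}")
```

In this contrived data the outcome tracks the proxy perfectly, so removing the group field removes nothing; real proxies leak partially rather than completely, but the mechanism is the same.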
What I Have Been Wrestling With
If a lender takes the CFPB’s revised ECOA/Regulation B position at face value and decides to stand down on effects‑based fair‑lending analytics, a few things happen. The Fair Housing Act exposure does not change. State law analogs do not change. Private litigation does not change. CRA performance evaluations still look at lending patterns across geographies and demographics. GSE seller/servicer expectations and counterparty diligence frameworks still ask their own questions. None of those lanes closed because one federal supervisory rationale narrowed.
But more interestingly, the operational picture inside the institution starts to shift in ways that are not really about compliance at all.
Disparate‑impact testing, done well, is not just a regulatory exercise. It is a form of model validation. It surfaces variables that are doing weak predictive work but strong sorting work. It can flag forms of model drift or segmentation effects that aggregate performance metrics may miss. It identifies underwriting criteria that are excluding qualified applicants without meaningfully reducing default risk — which is to say, it identifies places where the lender is leaving credit on the table.
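One common screening heuristic in this kind of effects testing is the adverse impact ratio, with the "four‑fifths" convention borrowed from the employment context; it is a flag for further review, not a legal standard, and the counts below are synthetic:

```python
# Adverse impact ratio (AIR): each group's approval rate divided by the
# most-favored group's rate. Values below ~0.8 are often used as a
# screening flag for further review (a convention, not a rule of law).

def adverse_impact_ratio(approved: dict, totals: dict) -> dict:
    rates = {g: approved[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Synthetic approval counts, for illustration only.
approved = {"group_a": 450, "group_b": 300}
totals   = {"group_a": 500, "group_b": 500}

ratios = adverse_impact_ratio(approved, totals)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(ratios)
print(flagged)
```

A flag like this says nothing about intent or justification on its own; it tells the lender where to look next, which is exactly the model-validation role described above.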
A lender that walks away from that analytical infrastructure because the regulator no longer requires it is, I think, walking away from something that was producing operational intelligence the whole time. It just happened to be wearing a compliance jersey.
Which leads to the question I find genuinely interesting: if the analytical work was producing both regulatory cover and operational intelligence, and one slice of the regulatory cover is no longer required, does that make the operational intelligence less valuable? Or does it make it more valuable, because fewer competitors are going to keep doing it?
The Intent Question, Which I Do Not Think Is Settled
There is also a piece of this that I do not think the industry has fully thought through. The CFPB’s final rule did not say that proxy use of facially neutral criteria is fine. It said that practices intentionally designed or applied as proxies for prohibited characteristics remain subject to disparate‑treatment liability. The Bureau’s own commentary makes that explicit, noting that ECOA does not prohibit facially neutral criteria “except to the extent that facially neutral criteria function as proxies for protected characteristics designed or applied with the intention of advantaging or disadvantaging” applicants on a prohibited basis. Intent becomes the central question — and intent is rarely evaluated in a vacuum.
What I keep wondering is what happens when a lender’s own analytics surface a disparate effect from an underwriting variable, and the lender continues to use the variable without documenting the predictive value, the business justification, the alternatives considered, or the compensating controls. That may not convert an effects‑based issue into intentional discrimination by itself. But it may create a record that a plaintiff, examiner, state regulator, or prudential regulator will want to explore.
I do not have a clean answer to where that line gets drawn. I am not sure anyone does yet. But it strikes me as a question that gets harder, not easier, the more you actively decide to stop looking. The lenders who keep running the analysis at least have the documentation to show why they made the choices they made. The lenders who stop running it have neither the analysis nor the explanation.
The Practical Move
The practical move is not to keep running the old fair‑lending program exactly as it existed before. It is to reclassify the capability.
Some of the work remains legal and compliance risk management — Fair Housing Act, state law, private litigation, prudential exam readiness. Some of it belongs in model governance, where effects testing functions as a form of model validation and drift detection. Some belongs in credit strategy, where less‑discriminatory‑alternatives analysis becomes a hunt for qualified applicants the current model is filtering out. Some belongs in product design. Some belongs in board reporting and counterparty diligence.
That distinction matters. The institutions that preserve the analytics but update the governance around them will be better positioned than institutions that simply stop looking. The work does not need to live where it lived before. It just needs to keep happening, with a clear‑eyed understanding of which business question it is answering at any given moment.
Where I Keep Landing
I do not think the right read of the current environment is that lenders should keep doing fair‑lending analytics because regulators might change their minds again. They might. They might not. That is not the argument.
The argument is that the analytical capability lenders built over the last fifteen years was producing more than just a defensive posture. It was producing knowledge — about the model, the portfolio, the variables, the customer base — that has independent operational and financial value. The regulatory change may reduce one federal compliance rationale for maintaining that capability. It is not, for thoughtful operators, a directive to dismantle it.
The lenders who treated fair‑lending analysis as a cost center will probably scale it back. The lenders who treated it as a source of operational intelligence will probably keep it, repurpose it, and find that the questions it answers — which variables matter, where credit is being left on the table, where the model is drifting, which customer segments are being underserved profitably — are the same questions a CFO and chief credit officer should be asking anyway.
If that is right, the post‑rollback environment may widen the gap between lenders that understand their underwriting at a variable, segment, and outcome level and lenders that only maintained analytics because the rules told them to. That is not just a compliance distinction. It is a credit‑strategy distinction.
The institutions that keep the analytical capability but change the reason for using it may be the ones that come out of this period with the clearest view of both their risk and their opportunity.
Questions Lenders Should Ask
As lenders prepare for next steps, here are a few questions I would encourage them to ask:
- If you ran fair‑lending analytics under the old framework, what did the analysis actually teach you about your underwriting that you would not have known otherwise?
- Where do you draw the line between facially neutral criteria that are doing legitimate predictive work and criteria that are functioning primarily as demographic sorters?
- How are you thinking about the intent question under the new framework — particularly when your own internal analytics have already surfaced a disparate effect?
- And the one I most want to hear answers to: have you ever found that taking a hard look at a facially neutral underwriting variable actually opened up a profitable segment of customers your model was filtering out?
My instinct is that there are more answers to that last question than we talk about. But I would rather hear the answers than guess at them. If you would like to talk through any of this — whether you have answers, more questions, or want help thinking about what your own program should look like next — I would be glad to hear from you. david@arqadvisoryllc.com.
David Stickney
Founder and Managing Principal, Arq Advisory LLC. Former CFPB Senior Commissioned Examiner and Examiner‑in‑Charge. Arq Advisory is a Service‑Disabled Veteran‑Owned Small Business consumer financial regulatory compliance consulting firm.