By: Francois Aubin

In the financial industry, especially in banking and lending, one of the most important questions is: how do we fairly and consistently judge the quality of a client?

For decades, lenders have relied on rating systems: structured ways of evaluating management, financial capacity, and industry context. These systems are not perfect, but they are consistently more reliable than unaided human judgment. Let’s see why, referencing the work of Daniel Kahneman.

Human Judgment: Strong on One Metric, Weak on Many

Humans are good at making isolated judgments. For example:

  • Does this business owner have more than five years of experience? 
  • Is there a documented backup plan? 
  • Is the debt ratio below a certain threshold? 

On these questions, the answer is usually clear, factual, and consistent across evaluators.

But when asked to combine multiple unrelated metrics—say, strong financials but weak management—humans struggle. One account manager may emphasize the financials and approve the deal, while another may emphasize the management weakness and reject it. The result is inconsistency.

Kahneman’s research supports this:

“Wherever there is judgment, there is noise — and more of it than you think.” (via Goodreads)
“One reason for the inferiority of expert judgment [compared with algorithms] is that humans are incorrigibly inconsistent in making summary judgements of complex information.” (via Richard Smith’s non-medical blogs)

The Power of Weighting Systems

To solve this, scoring systems introduce weights for each dimension.

Example weights might be:

  • Management: 20% 
  • Finance: 40% 
  • Industry: 40% 

Each criterion is scored (e.g., on a scale of 1 to 5). Weighted values are then combined into a single overall rating.
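To make the arithmetic concrete, here is a minimal sketch in Python of how such a weighted rating could be computed. The weight values and the sample client scores are illustrative only, not a prescribed calibration.

```python
# A minimal sketch of the weighted rating described above, using the
# example weights (Management 20%, Finance 40%, Industry 40%) and
# criterion scores on a 1-5 scale. All values are illustrative.

WEIGHTS = {"management": 0.20, "finance": 0.40, "industry": 0.40}

def overall_rating(scores: dict) -> float:
    """Combine per-criterion scores (1-5) into a single weighted rating."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Two analysts with the same data and the same weights get the same score:
client = {"management": 2, "finance": 4, "industry": 4}
print(overall_rating(client))  # 0.2*2 + 0.4*4 + 0.4*4 = 3.6
```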

This approach:

  1. Reduces personal bias (everyone uses the same weights). 
  2. Ensures repeatability (two analysts, same data, same score). 
  3. Creates a foundation for further statistical validation. 

Kahneman again:

“The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment.” (via QuoteFancy)

A Golf Analogy

Think of putting in golf. A skilled golfer evaluates two factors separately:

  • The strength needed to hit the ball. 
  • The slope of the green. 

An inexperienced golfer blends these into a vague global impression, often with poor results.

Lending decisions are similar. A structured system forces evaluators to consider each factor independently before combining them.

From Scores to Predictive Models

The first step is having a consistent scoring system. The next step is testing it:

  • Apply the scoring to 100–200 past client files. 
  • See if the ranking matches actual client outcomes (repayment vs. default). 
  • Adjust weights as needed. 
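
As a rough illustration of that back-test, the sketch below scores past files and checks whether repaid clients tended to receive higher ratings than defaulted ones, using a simple AUC-style pairwise comparison. The sample history and the choice of AUC as the check are assumptions made for illustration.

```python
# A hedged sketch of the back-test described above: given (rating, defaulted?)
# pairs for past files, check how often a repaid client outranks a defaulted one.

def ranking_auc(scored_files: list) -> float:
    """AUC-style check: fraction of (repaid, defaulted) pairs in which the
    repaid client received the higher rating (ties count as half)."""
    repaid = [score for score, defaulted in scored_files if not defaulted]
    defaulted = [score for score, defaulted in scored_files if defaulted]
    pairs = [(r, d) for r in repaid for d in defaulted]
    if not pairs:
        return float("nan")
    wins = sum(1.0 if r > d else 0.5 if r == d else 0.0 for r, d in pairs)
    return wins / len(pairs)

# Hypothetical past files: (overall rating, defaulted?)
history = [(3.6, False), (2.2, True), (4.1, False), (2.8, True), (3.0, False)]
print(ranking_auc(history))  # closer to 1.0 means the ranking tracks outcomes
```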

Over time, lenders can build statistical models to estimate Probability of Default (PD) and Loss Given Default (LGD). At this point, the scoring system evolves into a predictive risk model—the backbone of modern banking risk management.
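One simple way such a PD estimate can begin is by grouping clients into rating bands and using the observed default rate per band as the PD for that band. The band cut-offs and the data below are illustrative assumptions, not a production model.

```python
# A minimal sketch of band-level PD estimation: the historical default
# rate per rating band. Band boundaries and data are illustrative.

from collections import defaultdict

def pd_by_band(files: list) -> dict:
    """Estimate PD per rating band as the observed default rate."""
    bands = defaultdict(list)
    for rating, defaulted in files:
        band = "high" if rating >= 3.5 else "medium" if rating >= 2.5 else "low"
        bands[band].append(defaulted)
    return {band: sum(outcomes) / len(outcomes) for band, outcomes in bands.items()}

history = [(3.6, False), (2.2, True), (4.1, False), (2.8, True), (3.0, False)]
print(pd_by_band(history))  # {'high': 0.0, 'low': 1.0, 'medium': 0.5}
```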

Conclusion

Human judgment is valuable, but it is not reliable for integrating multiple dimensions. Scoring systems—by forcing clarity, weighting, and consistency—outperform intuition. As Kahneman has shown, structured models routinely beat expert judgment when decisions involve multiple factors:

“If you can replace judgements by rules and algorithms, they’ll do better.” (via Farnam Street)

That’s why in banking, sports, or even golf, a simple scoring system is always better than none.