Machine Learning and Debt Collection

What is machine learning?

Machine learning is an emerging field of computer science and statistics focused on giving computers the ability to make decisions that they haven’t been explicitly programmed to make. The future of debt collection communication is digital, and by leveraging data to enable computer systems to make decisions, creditors and collectors can build a better consumer experience.

How can a machine learn?

Machine learning algorithms are able to learn by aggregating large data sets and identifying patterns, but they need help to get started. When building a machine learning system, engineers and data scientists first establish the parameters that help the model make sense of the data in a set.

A simple supervised machine learning model known as a binary classifier can serve as a foundation for more complex decision making. Imagine a program that is designed to distinguish cats from dogs. The data scientists building the system know the difference between the two and can pick a few features that are likely to identify one or the other and break the qualitative information into quantitative values that the model can recognize.

Figure 1. This classifier has been provided features that distinguish cats and dogs. These features can be broken down into numbers or binary (Y/N) responses to help the computer model understand which features likely indicate a cat and which likely indicate a dog.

Once a model has been trained, it can begin to make inferences based on similar data. If, after being fed large amounts of data, the classifier from Figure 1 was presented with new, uncategorized data, it would be able to process it and make predictions. 

Figure 2 shows how the model above can predict that the data set likely represents a dog. These models are not infallible, but the more data that they process and the more thoroughly they are tested, the more effective they become.

Figure 2. When a trained machine learning algorithm is presented with new data, it can process it and make predictions based on its previous experience. It has learned to make predictions on its own.
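To make this concrete, here is a minimal sketch of a cat-vs-dog binary classifier in the spirit of Figures 1 and 2, using scikit-learn. The features, numeric values, and training examples are illustrative assumptions rather than data taken from the figures.

```python
# A minimal sketch of the cat-vs-dog binary classifier from Figures 1 and 2.
# The features and training examples below are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Each row: [weight_kg, ear_length_cm, barks (1 = yes, 0 = no)]
training_features = [
    [4.0, 6.0, 0],    # cat
    [5.5, 7.0, 0],    # cat
    [3.8, 5.5, 0],    # cat
    [20.0, 10.0, 1],  # dog
    [8.0, 9.0, 1],    # dog
    [30.0, 12.0, 1],  # dog
]
training_labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# Train the classifier on the labeled examples (supervised learning).
model = DecisionTreeClassifier(random_state=0)
model.fit(training_features, training_labels)

# Present new, uncategorized data and let the trained model predict.
new_animal = [[12.0, 9.5, 1]]  # heavier, longer ears, barks
print(model.predict(new_animal))  # -> ['dog']
```

A real model would be trained and validated on far more examples, but the workflow is the same: quantify the features, train on labeled data, then predict on new data.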

Supervised vs. unsupervised machine learning

Supervised machine learning

The binary classifier described above is a supervised learning algorithm, meaning that it requires designers to engineer its features in order to get it up and running. Supervised learning algorithms are trained over time on foundational, labeled data. That data provides features as data points that teach the algorithm how to generate correct predictions.

These models function best in situations in which there is an expected, intentionally designed output. In the example above, the expected output is that the algorithm can properly separate cats from dogs. In digital debt collection, it may be separating accounts that will be easy to collect on from ones that are more difficult. 

Classification vs. regression

The models above are both examples of supervised learning models built for classification, but supervised learning can also be used to build regression models. The key difference between the two is that a regression model’s output is a numerical value rather than a category.

For example, a regression-based model may use input features such as a person’s job title and whether or not they own a home to predict their income. In the business world, using a regression-based model in combination with consumer data can help teams segment marketing communications.

With proper supervision, these models become more accurate over time, and the data scientists building them can adjust them as business needs change. Whether you are gathering data with a regressor or a classifier, it is up to the data scientists to build the most effective inputs in order to get the “correct” output.
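As a minimal sketch of the regression side, the snippet below fits a linear model that outputs a number (income) instead of a category. The input features (a seniority level and a homeownership flag) and all of the figures are illustrative assumptions, not real consumer data.

```python
# A minimal regression sketch: predict a numeric value (income) instead of a category.
# The features and figures below are illustrative assumptions.
from sklearn.linear_model import LinearRegression

# Each row: [job_seniority_level (1-5), owns_home (1 = yes, 0 = no)]
X = [
    [1, 0],
    [2, 0],
    [3, 1],
    [4, 1],
    [5, 1],
]
# Annual income in dollars for each row above.
y = [32_000, 45_000, 61_000, 78_000, 95_000]

regressor = LinearRegression()
regressor.fit(X, y)

# Predict income for a new person: seniority level 3, does not own a home.
predicted_income = regressor.predict([[3, 0]])[0]
print(f"Predicted income: ${predicted_income:,.0f}")
```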

Figure 3. A visual representation showing the difference between classification and regression.

Unsupervised machine learning

While supervised models require careful curation in building proper features that will lead to the “correct” output, unsupervised models can take large sets of unlabeled data and identify patterns without human help. The output variables (e.g. dog or cat) are never specified because it is now the algorithm’s role to process and sort the data based on similarities that it can identify. Using this method, you can learn things about your data that you didn’t even know!

Clustering vs. association

Just as supervised models come in two primary forms, classification and regression, unsupervised models can be trained using clustering or association. Clustering algorithms gather data into groups based on shared features that exist in the data set.

If you have thousands upon thousands of customer accounts in your system, a clustering algorithm can learn using the customer data and form them into distinct (but unlabeled) groups. Once it has assigned these clusters, data scientists can review the output data and make inferences such as:

  • This cluster is all of the accounts that have not yet established a payment plan
  • This cluster is all of the users that started signing up for a payment plan but didn’t finish the process

This new data set then provides the foundation for a new outreach strategy.
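As a rough sketch of how clustering might look in code, the snippet below runs k-means over a tiny, made-up table of account features. The choice of features (balance and days since last payment), the values, and the number of clusters are all assumptions for illustration; the algorithm only returns unlabeled group numbers, which data scientists would then interpret.

```python
# A minimal clustering sketch: group unlabeled accounts by similarity.
# The account features and the choice of k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [outstanding_balance, days_since_last_payment]
accounts = np.array([
    [120.0, 5],
    [150.0, 9],
    [2400.0, 200],
    [2650.0, 180],
    [90.0, 400],
    [110.0, 420],
])

# Scale features so balance and recency contribute comparably.
scaled = StandardScaler().fit_transform(accounts)

# Ask k-means for three unlabeled groups; data scientists label them afterward.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(scaled)
print(cluster_ids)  # e.g. [0 0 1 1 2 2] -- the numbering itself is arbitrary
```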

Figure 4. An illustration of how algorithms can sort raw data into clustered data.

Association algorithms are the other main type of unsupervised learning algorithm. Associations take the idea of grouping data points one step further and can make inferences based on the data available. A common example of this type of inference system can be seen on countless eCommerce sites.

Product recommendations are crafted by algorithms that recognize product purchasing patterns and can identify new offers based on those patterns. If a large portion of people that purchase hot dogs also purchase hot dog buns, then the association algorithm can infer that someone who puts hot dogs into their virtual shopping cart is likely to buy buns as well.

For a more industry-focused idea of how these algorithms can be applied, let’s consider the information you would receive from a new account sign-up. An association-based model can connect two data points and draw conclusions based on the patterns it finds. One such pattern may be:

A person that signed up for an account the first time they opened an email is more likely to pay off their balance.

The algorithm recognizes that each step in a customer’s journey creates another data point. Because association algorithms are still unsupervised, a team of data scientists will be responsible for labeling the output data, but the algorithm can surface previously unnoticed patterns.
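A minimal sketch of the association idea, using the hot dog example above: the snippet below computes a simple confidence score (how often buns appear in baskets that already contain hot dogs) over made-up shopping baskets. Real association-rule mining also considers measures such as support and lift, but the basket data here are illustrative assumptions.

```python
# A minimal association sketch: estimate how often buns appear in baskets
# that already contain hot dogs. The baskets below are illustrative assumptions.
baskets = [
    {"hot dogs", "buns", "mustard"},
    {"hot dogs", "buns"},
    {"hot dogs", "soda"},
    {"buns", "ketchup"},
    {"hot dogs", "buns", "soda"},
]

def confidence(antecedent: str, consequent: str) -> float:
    """Share of baskets containing the antecedent that also contain the consequent."""
    with_antecedent = [b for b in baskets if antecedent in b]
    if not with_antecedent:
        return 0.0
    return sum(consequent in b for b in with_antecedent) / len(with_antecedent)

print(confidence("hot dogs", "buns"))  # 0.75 -> hot dog buyers usually buy buns too
```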

How machine learning optimizes performance: Multi-armed and contextual bandit algorithms

What is a multi-armed bandit?

The term “multi-armed bandit” in machine learning comes from a problem in the world of probability theory. In a multi-armed bandit problem, you have a limited amount of resources to spend and must maximize your gains. You can divide those resources across multiple pathways or channels, but you do not know the payoff of each path in advance; you only learn which is performing better over time.

The name is drawn from the one-armed bandit—slot machines—and comes from the idea that a gambler will attempt to maximize their gains by either trying different slot machines or staying where they are.
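One common, simple way to attack this problem is an epsilon-greedy strategy: mostly pull the arm that has paid off best so far, but occasionally try another arm at random. The sketch below simulates this for three hypothetical slot machines; the payout rates and the epsilon value are assumptions chosen for illustration.

```python
# A minimal epsilon-greedy sketch of the multi-armed bandit problem.
# The payout probabilities and epsilon are illustrative assumptions.
import random

random.seed(0)
true_payout_rates = [0.02, 0.05, 0.11]   # unknown to the gambler
pulls = [0, 0, 0]
wins = [0, 0, 0]
epsilon = 0.1                            # how often to explore at random

def estimated_rate(arm: int) -> float:
    return wins[arm] / pulls[arm] if pulls[arm] else 0.0

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                  # explore a random arm
    else:
        arm = max(range(3), key=estimated_rate)    # exploit the best arm so far
    pulls[arm] += 1
    wins[arm] += random.random() < true_payout_rates[arm]

print(pulls)  # most pulls should end up on the highest-paying arm
```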

How do bandit algorithms fit into machine learning?

Applying this hypothetical problem to a machine-learning model involves using an algorithm to process performance data over time and optimize for better gains as it learns what is successful and what is not. 

A commonly used experimental setup with a similar goal is the A/B/n test, or split test, in which a single variable is isolated and its alternatives are directly compared. While A/B testing can be used for any number of experiments, in a consumer-facing world it is frequently used to determine the impact and effectiveness of a message.

You can test the content of a message, the timing of its delivery, and any number of other elements against an alternative, measure the results, and compare them. These tests are designed to determine the optimal version of a message, but once that perfect message is crafted and set, you’re stuck with your “perfect” message until you decide to test again.
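For comparison, a classic A/B/n test simply tallies results for each fixed variant and declares a single winner after the test ends. A minimal sketch, assuming made-up send and response counts (a real test would also check statistical significance):

```python
# A minimal A/B/n sketch: compare fixed message variants after the test ends.
# The send and response counts below are illustrative assumptions.
results = {
    "Message A": {"sent": 5000, "responded": 310},
    "Message B": {"sent": 5000, "responded": 415},
    "Message C": {"sent": 5000, "responded": 360},
}

rates = {name: r["responded"] / r["sent"] for name, r in results.items()}
winner = max(rates, key=rates.get)

for name, rate in rates.items():
    print(f"{name}: {rate:.1%} response rate")
print(f"Winner: {winner} -- every future send uses this one variant")
```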

Anyone who works directly with customers or clients knows that there is no such thing as a perfect, one-size-fits-all solution. Message A, when pitted against Message B, may perform better overall, but there is someone in your audience who may still prefer Message B.

Testing different facets of your communication in context with specific subsets of your audience can lead to higher engagement and more dynamic outreach. Figure 5 below outlines how a multi-armed bandit approach and the related contextual bandit approach can optimize for the right content at the right time for the right audience rather than committing to a single option.

What is a contextual bandit algorithm?

Contextual bandit tests are a subset of multi-armed bandit tests. Rather than entirely discarding Message A, a contextual bandit algorithm recognizes that roughly 10% of people still prefer it to the other options. A multi-armed bandit still functions like an A/B test in that it determines an optimal path, or winner, from a set of data. A contextual bandit, instead of determining a single winner, determines which paths are optimal and uses those for most cases, but still personalizes and optimizes for the individual rather than the group as a whole.

Using this more fluid model is also more efficient because you don’t have to wait for a clear winner to emerge, and as you gather more relevant data, the model becomes more effective. For example, a multi-armed test may throw out Message A because only 2% of people liked it, but a contextual bandit approach will still try to send that message to that 2% of the population.

Figure 5. Visual representations showing the differences between A/B/n, Multi-armed Bandit and Contextual Bandit Tests.
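One minimal way to sketch the contextual idea is to run a separate epsilon-greedy learner for each audience segment, so the “winning” message can differ by context. The segments, messages, and response rates below are assumptions for illustration; production contextual bandits typically use richer models such as LinUCB or Thompson sampling.

```python
# A minimal contextual sketch: one epsilon-greedy learner per audience segment,
# so different segments can converge on different messages.
# Segments, messages, and response rates are illustrative assumptions.
import random
from collections import defaultdict

random.seed(1)
messages = ["Message A", "Message B"]
# True (unknown) response rates per segment -- note "late_stage" prefers Message A.
true_rates = {
    "early_stage": {"Message A": 0.03, "Message B": 0.09},
    "late_stage":  {"Message A": 0.12, "Message B": 0.04},
}
epsilon = 0.1
stats = defaultdict(lambda: {m: {"sent": 0, "responded": 0} for m in messages})

def pick_message(segment: str) -> str:
    if random.random() < epsilon:
        return random.choice(messages)  # explore
    s = stats[segment]
    return max(  # exploit the best-known message for this segment
        messages,
        key=lambda m: (s[m]["responded"] / s[m]["sent"]) if s[m]["sent"] else 0.0,
    )

for _ in range(20_000):
    segment = random.choice(list(true_rates))
    msg = pick_message(segment)
    stats[segment][msg]["sent"] += 1
    stats[segment][msg]["responded"] += random.random() < true_rates[segment][msg]

# Each segment should end up mostly receiving its own best-performing message.
for segment, s in stats.items():
    best = max(messages, key=lambda m: s[m]["sent"])
    print(f"{segment}: mostly sends {best}")
```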

Bandit strategies in collections 

Collections continues to expand its digital footprint, and by combining more in-depth data tracking with an omni-channel communication strategy, teams can clearly understand what’s working and what isn’t. Adapting a bandit algorithm to machine learning-powered digital debt collection provides endless opportunity to craft a better consumer experience. 

Effective messaging

Following from Figure 5, digital collections strategies can determine which messaging is right for which consumer. Sorting this data in context can mean distinguishing groups based on the size or the age of the debt and determining which message is the most appropriate. 

As algorithms gradually learn to distinguish results or users and place them into groups, they can begin to do things like:

  • Understand what kinds of messaging people respond to
  • Recognize what kinds of payment offers seem to be accepted
  • Define different types of consumers

Optimizing outreach

With enough data to analyze and enough features extracted from that data, machine learning algorithms can help you optimize collections processes by making observations like:

  • “This group may prefer SMS messages to email”
  • "This group of consumers may not respond to content that includes the phrase 'tax refund'"

This information can help to inform new collections strategies, dictate the use of different communication channels, or provide further insights into effectively segmenting a customer base.

These strategies take time and thousands upon thousands of data points to get “right,” but the wonder of a contextual multi-armed bandit algorithm is that it doesn’t stop learning after making the right choice. It makes the right choice, at the right time, for the right people, and you can reach your consumers the way they want to be reached, even when those preferences change over time.

Experimentation and machine learning

One way to continue improving a machine learning model’s decision-making ability is to provide it with more data and features to learn from. Perfecting a model requires a very scientific (and iterative) approach, sketched in code after the list below:

  1. Start with a hypothesis you want to test
  2. Monitor the performance of the test
  3. Introduce new information to the data set
  4. Review how the system operates and what decisions it makes with the newly presented data
  5. Iterate
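As a minimal sketch of steps 1 through 4, the snippet below compares cross-validated accuracy before and after a hypothetical new feature is added. The synthetic data stands in for real collections data, and the performance lift only appears because the synthetic outcome was constructed to depend on the new feature.

```python
# A minimal sketch of the experiment loop: add a feature, re-evaluate, compare.
# The synthetic dataset and the "new feature" are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
baseline_features = rng.normal(size=(n, 3))   # existing data points
new_feature = rng.normal(size=(n, 1))         # hypothesis: this new signal helps
# The outcome depends partly on the new feature, so adding it should lift the score.
outcome = ((baseline_features[:, 0] + 2 * new_feature[:, 0]
            + rng.normal(scale=0.5, size=n)) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
before = cross_val_score(model, baseline_features, outcome, cv=5).mean()
after = cross_val_score(model, np.hstack([baseline_features, new_feature]),
                        outcome, cv=5).mean()
print(f"Accuracy before: {before:.3f}, after adding the feature: {after:.3f}")
```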

Machines learn and collections grow

As it becomes more and more difficult to contact consumers in debt, integrating digital solutions into a collection strategy is becoming invaluable. Digital debt collection offers more opportunities for in-depth analysis, and by introducing machine learning to that evaluation process, you can build systems that support their own growth and improvement! And by experimenting with various tools and approaches, data teams can rapidly evolve a collection-focused machine learning model and improve efficiencies at different stages of the collections process.

The more data you have, the better you can collect, and the more you collect, the more data you have. The self-sustaining nature of machine learning is revolutionizing approaches to collections, but it isn’t as easy as it sounds. Building and continuing to maintain complex systems requires a talented team and a stable infrastructure that can support these processes at scale. 

Those who can properly build and manage these systems will be the driving forces in the future of the collections industry, so find your partner and learn what you can. Maybe these machines can teach the industry a thing or two.

Ready to learn more about how machine learning can change your collections strategy?