Scenario Planning, with Data, on Steroids

In Great By Choice, Jim Collins teaches us that there is no new normal… there is only uncertainty and change. Predicting the future is hard. The best leaders and organizations therefore plan for multiple futures and hedge accordingly. They plan probabilistically. One tool that can assist in scenario planning is Monte Carlo simulation. It is scenario planning, with data, on steroids.

Cal Newport, in his book Deep Work, refers to the tool as “mysterious Monte Carlo simulations.” Having gone through the journey myself with these tools from beginner to… well, capable, I can see why he would call them mysterious. It just takes a bit of effort to first grasp the concept and then to build them yourself. And, whether you are building them or not, this is an imperative thought pattern to understand if you are operating in an uncertain environment (which you almost certainly are). It is imperative because it forces you to think in probabilities, ranges, and uncertainty (i.e., the way the world actually is).

This is my introduction to simulation using Monte Carlo methods (MCs for short). Unlike many MC tutorials that focus on the stock market, I want to use an example that has almost certainly impacted you in some way, shape, or form. The ports of Los Angeles and Long Beach – where the bulk of US imports enters the country – have faced considerable delays over the past year. Import demand has been very high, and the supply of labor at the ports has been constrained by COVID-19. To understand the impact of port delays on supply chains – particularly arrivals to domestic distribution centers – we can use MCs.

The code here is fairly inefficient (slow) because the goal is to communicate how this works rather than to build a robust, production-grade analytics platform. So bear with it. I would also steer clear of trying to use any of this directly in your operation – it is a demo.

With that said…

We need to start with some building blocks: distributions. Most people are familiar with the normal distribution – the bell curve. There are all kinds of distributions: uniform, Poisson, exponential, lognormal, and triangular, to name a few. They all represent different ways in which data is distributed. For example, think about ocean containers coming across the Pacific Ocean on a container ship. Lots of them take 14 days, with some going faster (12 days) and some going slower (16 days). A few go even slower… taking 20-30 days. Lots of transportation takes on this type of distribution. Commute times are like this too: almost normally distributed, but with a lopsided right skew.
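As a quick illustration – a sketch with made-up parameters, not data from any real lane – here is how you could generate a right-skewed sample of ocean transit times in R with a lognormal distribution:

```r
# Hypothetical ocean transit times: mostly ~14 days, with a long right tail
set.seed(42)

# meanlog/sdlog chosen so the bulk of transits lands near 14 days
ocean_transit <- rlnorm(10000, meanlog = log(14), sdlog = 0.15)

summary(ocean_transit)  # median near 14, max well above it
hist(ocean_transit, breaks = 50,
     main = "Hypothetical ocean transit times",
     xlab = "Days")
```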

Why are these the building blocks? We are basically going to take a series of events that have distributions… and string them together to determine an outcome (in this example, an arrival date). Then we are going to do it many, many times. For simplicity, imagine that there are 3 legs of transit, shown below:

In one instance, we might get these transits: 14 days for Order to Overseas Port, 16 days for Overseas Port to Domestic Port, and 14 days for Domestic Port to Domestic Warehouse. That gives us a total transit of 44 days. In another instance, we might get 14, 20, and 14, giving us a total transit of 48 days. On and on and on…
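A minimal sketch of that idea in R (the means and standard deviations below are hypothetical, chosen only to echo the numbers above):

```r
set.seed(1)

# One simulated instance: sample one value per leg and add them up
leg1 <- rnorm(1, mean = 14, sd = 1)   # order -> overseas port
leg2 <- rnorm(1, mean = 15, sd = 2)   # overseas port -> domestic port
leg3 <- rnorm(1, mean = 14, sd = 1)   # domestic port -> domestic warehouse
leg1 + leg2 + leg3                    # one hypothetical total transit

# Many instances: the distribution of possible total transits
totals <- rnorm(10000, 14, 1) + rnorm(10000, 15, 2) + rnorm(10000, 14, 1)
quantile(totals, c(0.05, 0.50, 0.95))
```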

Where do you get the distributions to sample from? In reality, you would pull historical data, plot it, run some statistical tests, and determine the distribution. Perhaps you don’t have historical data. In that case, you can do what I do here: use some common knowledge plus domain knowledge to generate hypothetical samples of data (R is great for that).
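If you do have historical data, the fit-and-check workflow might look something like this sketch (MASS ships with R; the “historical” data here is fabricated for illustration):

```r
library(MASS)  # ships with R

# Pretend this is historical ocean transit data pulled from your systems
set.seed(7)
historical <- rlnorm(500, meanlog = log(14), sdlog = 0.2)

# Fit a candidate distribution and eyeball the result
fit <- fitdistr(historical, "lognormal")
fit$estimate  # meanlog and sdlog to reuse when sampling in the simulation

hist(historical, breaks = 40, freq = FALSE,
     main = "Hypothetical historical transits", xlab = "Days")
curve(dlnorm(x, fit$estimate["meanlog"], fit$estimate["sdlog"]),
      add = TRUE, lwd = 2)
```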

Let me give you a sense of what we want the output data to look like for a single simulation. Each row represents a shipment from an origin location on a single day. In the 1st row, you are shipping 100 units on December 1, 2020… expected to arrive at the domestic warehouse on January 4, 2021.
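Here is one way such a table could be built for a single simulation – a sketch with hypothetical leg means, standard deviations, and volumes (the `simulate_shipments` helper is my own construction, not a standard function):

```r
# A single simulation: each row is one origin's shipment on one day
simulate_shipments <- function(seed) {
  set.seed(seed)
  n_days    <- 60   # shipping days to simulate
  n_origins <- 10   # origin locations
  ship_date <- rep(seq(as.Date("2020-12-01"), by = "day", length.out = n_days),
                   each = n_origins)
  n <- length(ship_date)

  # Sample every leg of transit for every shipment
  leg1 <- rnorm(n, mean = 14, sd = 1)   # order -> overseas port
  leg2 <- rnorm(n, mean = 14, sd = 2)   # ocean transit
  leg3 <- rnorm(n, mean = 4,  sd = 1)   # domestic port -> warehouse

  data.frame(
    ship_date    = ship_date,
    units        = round(rnorm(n, mean = 100, sd = 10)),
    arrival_date = ship_date + round(leg1 + leg2 + leg3)
  )
}

shipments <- simulate_shipments(seed = 11)
head(shipments)
```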

Then, if we aggregate this volume by arrival date we would see something like the following…
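Continuing the sketch above, the aggregation is a one-liner on the `shipments` data frame:

```r
# Total units arriving at the domestic warehouse per day
arrivals <- aggregate(units ~ arrival_date, data = shipments, FUN = sum)
head(arrivals)
```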

Finally, what if we ran that simulation 3x? We would have 3 hypotheticals like this…

On Jan 8th, the domestic warehouse received 2,097 units in the first simulation. In the 2nd and 3rd simulations, it received 2,294 and 2,196 units respectively. The difference arises because of the uncertainty in those foundational building blocks. If you have tight distributions, your arrival numbers will be close to each other each time. If you have wide distributions, your arrival numbers will span a wide range. This is one of the primary goals of supply chain professionals – to tighten distributions and increase precision. Thinking more broadly, it is probably a primary goal in any operations role.
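Repeating the single-simulation sketch many times is just a loop. A hedged sketch, reusing the hypothetical `simulate_shipments` helper from above:

```r
# Run the whole simulation many times and stack the daily arrival totals
n_sims <- 100
all_sims <- do.call(rbind, lapply(seq_len(n_sims), function(i) {
  s <- simulate_shipments(seed = i)
  a <- aggregate(units ~ arrival_date, data = s, FUN = sum)
  a$sim <- i
  a
}))

# Arrival volume on a single date, across all simulations
jan8 <- subset(all_sims, arrival_date == as.Date("2021-01-08"))
summary(jan8$units)
```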

Before running any simulation, it is helpful to have a common mental model for what you are doing. Simple visualization can assist. For example, there are hundreds of origin locations and many ports. A quick flow chart will help everyone using these models build expectations for the output (and adjust expectations for the model if necessary).

# Scenarios

Ok, let’s go. We are going to simulate a variety of parameters – this article is, after all, about scenario planning. We will vary the amount of uncertainty in the transits as well as the magnitude of 5-week delays at the ports. The scenarios will take these parameters:

Starting from a place with no variance (tight distributions) and no port delays helps us confirm that the code works. With almost complete certainty, if all 10 origin locations ship the same amount of volume each day (100 units, standard deviation = 0.01)… and the transits are certain (14 days order to overseas port, 14 days ocean transit, 4 days domestic transit, each with standard deviation = 0.01)… then the domestic warehouse should receive the same amount of volume each day. And it does (1,000 units every day). Planning is easy.
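That sanity check is easy to replicate with the earlier sketch’s mechanics – near-zero standard deviations everywhere (parameters as described above):

```r
set.seed(3)
ship_date <- rep(seq(as.Date("2020-12-01"), by = "day", length.out = 60), each = 10)
n <- length(ship_date)

units   <- rnorm(n, mean = 100, sd = 0.01)                              # certain volume
transit <- rnorm(n, 14, 0.01) + rnorm(n, 14, 0.01) + rnorm(n, 4, 0.01)  # certain transits

arrivals <- aggregate(units ~ arrival_date,
                      data = data.frame(units = units,
                                        arrival_date = ship_date + round(transit)),
                      FUN = sum)
range(arrivals$units)  # ~1,000 units every single day
```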

Then, if we introduce just a little bit of real-life uncertainty into the transit components (standard deviations of 1), the domestic warehouse now faces a range of possible arrival volumes. Even when the daily shipment volume remains certain (at 100 units), adding a pinch of transit variance moves the IQR for expected daily arrival volume from 0 to ~400 units (the box in the boxplot spans ~400 units). Further, the probability of receiving over 1,250 units on any given day is now ~19%, whereas before it was 0%.

That last bit is helpful for understanding the probability of a large event – in this case, a day where we receive >1,250 units. We can look at that in a histogram view like so:
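Given daily arrival totals pooled across simulations (the `all_sims` object from the earlier sketch), the histogram and the tail probability take only a few lines:

```r
# Probability of receiving more than 1,250 units on any given day
mean(all_sims$units > 1250)

hist(all_sims$units, breaks = 50,
     main = "Simulated daily arrival volume",
     xlab = "Units received per day")
abline(v = 1250, lwd = 2, lty = 2)  # the large-event threshold
```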

As we go through the scenarios I’ll continue to benchmark with this.

Let’s continue to add real-world uncertainty – this time with variance on daily shipment volume (keep the mean at 100 units per day per origin, but increase the standard deviation from 0 to 30). The chance of a >=1,250-unit day increases to ~20%. Note that transit variance has a much larger impact on the arrival range than variance in daily shipment volume (that could be an interesting finding, or it could simply reflect the magnitude of each variance relative to its metric).

Before moving on to port delays, note that the “stability” in the box plots may be deceiving. Imagine I ran the same parameters above… but for only a single simulation. We would have one arrival volume estimate for each day – and a line chart would bring us back to a sense of uncertainty (see below). The value of these simulations is to help us think in possibilities, in ranges. So, don’t forget that the boxplots are an aggregation of many potential instances.

# Layer in Port Delays

Let us add in some delays. For this demo, I am going to assume that delays occur on units that ship (from origin) between January 1st, 2021 and February 4th, 2021. That is a 5-week period, and the magnitude of delay can vary by week. For example, if we have an X-day delay in each of those 5 weeks… the magnitude by week might look like this: 1 day, 3 days, 5 days, 3 days, 1 day. In the first week, the units face a 1-day delay at the port. In the second week, the units face a 3-day delay. And so on. The port congestion clears up in week 6. Here’s what the expected volume to the domestic warehouse looks like:

Pretty decent impact from a relatively small delay. The chance of receiving >1,250 units increases slightly to 23%. However, the chance of receiving >1,250 units between March 1st and March 15th increases to almost 46%. That’s one scenario; let us try another. What if the delay was a big bang that then recovered over the 5-week period? If the delay sequence was 10 days, 8 days, 6 days, 4 days, 2 days… and then clean, our expectation would look like this:

In this scenario your receipt volume basically falls off a cliff… and then comes back and remains elevated for several weeks. It is wild that small delays in the sequence of events result in pretty significant swings at the end of the chain. And most chains – whether supply chains or commutes – have more than 3 legs, each with its own uncertainty profile (distribution) and probability of disruption.
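For the curious, here is one way a week-indexed port delay could be bolted onto the earlier sketch. The delay sequence and window come from the scenario described above; the mechanics (a hypothetical `add_port_delay` helper) are mine:

```r
# Delay (in days) applied at the domestic port, indexed by week of ship date
delay_by_week <- c(10, 8, 6, 4, 2)      # the "big bang" sequence
window_start  <- as.Date("2021-01-01")  # delays hit units shipping Jan 1 - Feb 4

add_port_delay <- function(shipments) {
  week_idx <- floor(as.numeric(shipments$ship_date - window_start) / 7) + 1
  delayed  <- week_idx >= 1 & week_idx <= length(delay_by_week)
  shipments$arrival_date[delayed] <-
    shipments$arrival_date[delayed] + delay_by_week[week_idx[delayed]]
  shipments
}

delayed_shipments <- add_port_delay(simulate_shipments(seed = 11))
```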

# Weighting Scenarios

Now we put all the scenarios together. Instead of looking at the boxplots, I like to look at a line chart of the median values so we can compare and contrast scenarios. For example, I want to look at 6 scenarios from above.
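Under the hood, those median lines are a per-day, per-scenario aggregation. A sketch, assuming `baseline` and `big_bang` are pooled outputs shaped like the earlier `all_sims` (hypothetical names):

```r
# Hypothetical: pooled outputs from two scenarios, stacked with a label
baseline$scenario <- "baseline"
big_bang$scenario <- "big bang delay"
all_scenarios <- rbind(baseline, big_bang)

# Median daily arrival volume per scenario drives the comparison chart
medians <- aggregate(units ~ arrival_date + scenario,
                     data = all_scenarios, FUN = median)
```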

There are at least two ways for decision makers to consider the information above. 

First, decision makers can assign a probability to each scenario (each line on the chart). Perhaps we put the probability of the big delay at ~10% and the probability of the small delay at ~20%. That leaves a 70% chance the average expected volume will be in the baseline range of 900-1,100 units. Now we can say that the expected volume for ~Feb 7th is (10% x 0) + (20% x 750) + (70% x 1000) = 850 units. Similarly, on the upside at ~Mar 1st: (10% x 1300) + (20% x 1400) + (70% x 1000) = 1,110 units.
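That arithmetic, as a tiny worked example in R (probabilities and volumes taken straight from the text):

```r
# Probability-weighted expected volume for ~Feb 7th
probs   <- c(big_delay = 0.10, small_delay = 0.20, baseline = 0.70)
volumes <- c(big_delay = 0,    small_delay = 750,  baseline = 1000)
sum(probs * volumes)  # 850 units

# Same idea on the upside, ~Mar 1st
sum(probs * c(1300, 1400, 1000))  # 1,110 units
```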

We can add this information to our scenario table from earlier…

Planning with the median is one perspective. But maybe delays are costly. Maybe they are very costly, so being able to work volume at the domestic warehouse as fast as possible is imperative (especially if you need to compensate for prior delays). In that case, you would probably want to look at the upper range of possibility rather than the middle. This is the second way to consider the information. We can produce the same line chart as above, but using the 80th percentile of each daily observation rather than the 50th percentile.

Now, if you experience the Big Bang Delay, you are looking at >1,500 unit days at the end of February into March rather than the 1,300 unit days using the median. 
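In code, swapping the median for the 80th percentile is a one-argument change (continuing the hypothetical `all_scenarios` sketch):

```r
# Swap the median for the 80th percentile of each day's observations
p80 <- aggregate(units ~ arrival_date + scenario, data = all_scenarios,
                 FUN = function(x) quantile(x, probs = 0.80))
```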

# Outro

“Improving decision quality is about increasing our chances of good outcomes, not guaranteeing them” is a powerful statement from Annie Duke’s book Thinking In Bets. Uncertainty (and luck) will inevitably, at some point, produce a bad outcome in spite of a good decision. What we intend to do with scenario planning – particularly using Monte Carlo methods where applicable – is reduce the potential for a ‘good decision, bad outcome’ event. And some of the best practice toward that objective is building and working with simulation models, even if they are only toy models like the one I have created here.

# Code

This is the script I used to generate all the outputs and visualizations. I made it as readable as possible, which means it is not efficient (it is relatively slow).

# Acknowledgement

When I dabble in this arena, I always have a special appreciation for Ralph Asher who perpetually increases my enthusiasm for MCs and gives me feedback on my content. Thank you Ralph.
