

Decision Making Under Uncertainty: An Introduction to Robust Optimization (Part 2)

by Saba Neyshabouri


Robust Optimization

The idea behind Robust Optimization (RO) is to find solutions to an optimization problem that are not very sensitive to changes in its parameters. Many day-to-day problems are modeled as deterministic even though, in reality, the model involves uncertainty. In many cases, the solution obtained from a deterministic model degrades badly or even becomes infeasible under minor changes in the parameters; in other words, the solution is not very robust. RO methodology aims to address this issue.

In this post, we'll introduce a modeling approach for RO introduced by Bertsimas and Sim (2004). There are multiple ways to address and model RO problems, and interested readers are encouraged to refer to review papers [Ref: Bertsimas, Ben-Tal, ...]. It is important to mention that RO is inherently a "risk-averse" methodology that aims to mitigate the adverse effects of uncertainty on solution quality. So if you are a "risk-lover", RO is not a good fit for you and your problems.

As we mentioned before, assuming that all the uncertain parameters take their worst-case values provides 100% protection, but it is also very conservative: it is rare that everything that can go wrong actually goes wrong at once. The purely pessimistic approach therefore does not mimic reality. In practice, some parameters may deviate in a direction that helps you, while others deviate so that you need more resources, so there is a chance that these effects cancel each other out!

Bertsimas and Sim (2004), based on the observation that not everything usually goes wrong, introduced the idea of a budget of uncertainty. In their model, the budget of uncertainty restricts the total normalized deviation of the uncertain parameters. Below I briefly explain the math and show how, with this idea, we can reformulate an LP under uncertainty as another LP with a greater number of variables and constraints.

Setting up our Model

Consider the following linear optimization model:

$$ \begin{array}{r l} \max & z = \sum_{j=1}^{n} c_j x_j & \\ \text{s.t.} & \sum_{j=1}^{n} a_{ij} x_{j} \leq b_i & \forall\, i=1,\dotsc,m \\ & x_j \geq 0 & \forall\, j=1,\dotsc,n \\ \end{array} $$
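To make this abstract form concrete, here is a minimal sketch (in PuLP) of the nominal production planning problem that we will make robust below. The data matches the nominal values used later in this post, and the nominal optimum is $x_1 = x_2 = 2$ with objective 4.

import pulp

# Nominal data (same numbers used later in this post)
a = [[2, 1], [1, 2]]   # resource usage per product
b = [6, 6]             # resources available

# Deterministic (nominal) production planning model
nominal = pulp.LpProblem('Nominal_Production', pulp.LpMaximize)
x = [pulp.LpVariable('x%d' % j, lowBound=0) for j in range(2)]

# Maximize total production
nominal += x[0] + x[1]

# Nominal resource constraints
for i in range(2):
    nominal += pulp.lpSum(a[i][j] * x[j] for j in range(2)) <= b[i]

nominal.solve()
print([pulp.value(v) for v in x])   # expected: [2.0, 2.0]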

Much like our production planning model, we assume that the parameters $a_{ij}$ are uncertain, with $a_{ij} \in [\bar a_{ij} -\hat a_{ij},\bar a_{ij} +\hat a_{ij}]$. We further assume that the uncertainty in one constraint (resource-wise) is independent of the other constraints, and that the parameters are independent of each other.

Let $d_{ij} = \frac{a_{ij} - \bar a_{ij}}{\hat a_{ij}}$ be the normalized deviation of the parameter $a_{ij}$ from its nominal value; clearly $-1\leq d_{ij} \leq 1$. Under a budget of uncertainty, we assume that the sum of the absolute normalized deviations does not exceed a budget $\Gamma_i$ for each constraint (resource) $i=1,\dotsc,m$. Therefore $\Gamma_i$ can be at most equal to the number of uncertain parameters in the $i^{th}$ constraint. The uncertain parameters take values within an uncertainty set $\mathcal U_i = \{ a_{ij}\in \mathbb {R} : a_{ij}= \bar a_{ij} + \hat a_{ij}d_{ij},\ -1 \leq d_{ij} \leq 1,\ \sum_{j \in J_i} |d_{ij}| \leq \Gamma_i \}$, $\forall i$, defined for the parameters of each constraint, where $J_i$ is the set of indices corresponding to products $j$ that use resource $i$ in their production.

Intuitively, if $\Gamma_i$ is integer valued, at most $\Gamma_i$ parameters in the $i^{th}$ constraint can take their worst-case value $\bar a_{ij} + \hat a_{ij}$. To keep the derivations simple, we assume the $\Gamma_i$ are integers; the final results hold for any positive value of $\Gamma$. Given this uncertainty set, the proposed formulation can be written as follows:

$$\begin{array}{r l} \max & z = \sum_{j=1}^{n} c_j x_j & \\ \text{s.t.} & \sum_{j=1}^{n} \bar a_{ij} x_{j} + \max_{S_i \subseteq J_i, |S_i| = \Gamma_i} { \sum_{j \in S_i} \hat a_{ij} y_j } \leq b_i & \forall\, i=1,\dotsc,m \\ & x_j \leq y_j & \forall\, j=1,\dotsc,n\\ & x_j \geq 0,y_j \geq 0 & \forall\, j=1,\dotsc,n \end{array}$$

So if $\Gamma_i = 0$, the problem reduces to the nominal problem in which all the uncertain parameters take their nominal values. If $\Gamma_i > 0$, we look for a subset of parameters in constraint $i$ with cardinality $\Gamma_i$ that creates the maximum resource consumption. Note that there is an optimization problem inside each constraint of our model. This is still a worst-case approach, but our definition of the worst case is controlled by the parameter $\Gamma$, which defines a budgeted worst case.
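To make the inner maximization concrete, here is a small sketch (the function name is just for illustration): for a fixed plan $y$ and an integer budget $\Gamma_i$, the protection term is simply the sum of the $\Gamma_i$ largest values of $\hat a_{ij} y_j$ in constraint $i$.

def protection(a_hat_row, y, Gamma):
    # Worst-case extra consumption for one constraint:
    # the sum of the Gamma largest deviation terms a_hat[i][j] * y[j]
    terms = sorted((a_hat_row[j] * y[j] for j in range(len(y))), reverse=True)
    return sum(terms[:int(Gamma)])

# Using the deviation data from later in this post (constraint 1) and the nominal plan y = x = [2, 2]
print(protection([1.0, 0.5], [2, 2], 1))   # 2.0 -> only the largest deviation counts
print(protection([1.0, 0.5], [2, 2], 2))   # 3.0 -> both parameters at their worst case

Since the nominal plan already uses all 6 units of the first resource, any positive protection term forces the robust model to cut production, which is exactly what we will see in the results below.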

Budget of Uncertainty Worst Case?

To solve this new formulation, one can use strong duality to reformulate the inner maximization problem in each constraint and then substitute the optimal value of the dual problem into the constraint. The authors show that the robust problem can then be written as follows:

$$\begin{array}{r l} \max & z = \sum_{j=1}^{n} c_j x_j & \\ \text{s.t.} & \sum_{j=1}^{n} \bar a_{ij} x_{j} + z_i \Gamma_i + \sum_{j \in J_i} p_{ij} \leq b_i & \forall\, i=1,\dotsc,m \\ & z_i + p_{ij} \geq \hat a_{ij} y_j & \forall\, i,\ j\in J_i\\ & x_j \leq y_j & \forall\, j=1,\dotsc,n\\ & p_{ij} \geq 0 & \forall\, i,\ j \in J_i\\ & z_i \geq 0 & \forall\, i=1,\dotsc,m\\ & x_j \geq 0,\ y_j \geq 0 & \forall\, j=1,\dotsc,n \end{array}$$

This formulation may seem strange at first: the first constraint adds a term to the nominal resource consumption that accounts for the extra consumption allowed by the budget of uncertainty. The second set of constraints determines how much each uncertain parameter contributes to that extra consumption. Note that, for integer-valued $\Gamma$, each parameter either does not deviate or deviates to its highest possible value.
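For readers curious where the variables $z_i$ and $p_{ij}$ come from, here is a sketch of the duality step (following Bertsimas and Sim). For a fixed plan $y$, the LP relaxation of the protection problem in constraint $i$ (left) and its dual (right) are:

$$\begin{array}{r l} \max_{d} & \sum_{j \in J_i} \hat a_{ij} y_j\, d_{ij} \\ \text{s.t.} & \sum_{j \in J_i} d_{ij} \leq \Gamma_i \\ & 0 \leq d_{ij} \leq 1 \quad \forall\, j \in J_i \end{array} \qquad\qquad \begin{array}{r l} \min_{z_i,\, p} & \Gamma_i z_i + \sum_{j \in J_i} p_{ij} \\ \text{s.t.} & z_i + p_{ij} \geq \hat a_{ij} y_j \quad \forall\, j \in J_i \\ & z_i \geq 0,\ p_{ij} \geq 0 \quad \forall\, j \in J_i \end{array}$$

For integer $\Gamma_i$ the primal relaxation has an optimal solution with every $d_{ij} \in \{0,1\}$, so it equals the subset-selection problem above, and strong LP duality says the two optimal values coincide. Substituting the dual objective into the resource constraint, and keeping the dual constraints, gives exactly the reformulation shown above.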

But enough with derivations. Let's model our production planning problem using the RO methodology described above. The formulation looks like this:

$$\begin{array}{r l} \max & z = x_1 + x_2 \\ \text{s.t.} & 2x_1 + x_2 + z_1 \Gamma_1 + p_{11} + p_{12} \leq 6 \\ & x_1 + 2x_2 + z_2 \Gamma_2 + p_{21} + p_{22} \leq 6 \\ & z_1 + p_{11} \geq y_1\\ & z_1 + p_{12} \geq 0.5 y_2\\ & z_2 + p_{21} \geq 0.8 y_1\\ & z_2 + p_{22} \geq 1.5 y_2\\ & x_1 \leq y_1,\; x_2 \leq y_2\\ & p_{ij} \geq 0 \quad \forall\, i,j\\ & z_1,z_2 \geq 0\\ & y_1, y_2 \geq 0\\ & x_1,x_2 \geq 0 \end{array}$$

Note that the budget of uncertainty is an input to the problem that controls the level of risk the decision maker is willing to take. In other words, this approach gives the decision maker a knob to adjust for her or his risk attitude. To see the effect of different values of the budget of uncertainty, we divide the possible range $[0,2]$ for each budget into 20 parts (increments of 0.1, i.e., 5% of the range), giving 21 values, and compute the optimal solution for all $21 \times 21 = 441$ combinations of the two budgets.

import pulp

# The resource usage for each product (columns)- Nominal Values
a = [[2, 1], [1, 2]]

# Vector of deviations in resource consumptions
a_hat = [[1, 0.5], [0.8, 1.5]]

# Resources available
b = [6, 6]
Gamma = [0.0, 0.0]

# Keep results here
Result = []
x1 = []
x2 = []
for G1 in range(21):
    # Gamma 1 is (G1/20) of the total possible value for Gamma 1, which is 2
    Gamma[0] = (G1/20.0) * 2

    # Record the results for each Gamma 1
    Res_i = []
    x1_i = []
    x2_i = []

    for G2 in range(21):
        # Gamma 2 is (G2/20) of the total possible value for Gamma 2, which is 2
        Gamma[1] = (G2/20.0) * 2

        # Create maximization model
        Production = pulp.LpProblem('Production_Planning', pulp.LpMaximize)

        # Define our decision variables >= 0
        x = {}
        for j in range(2):
            x[j] = pulp.LpVariable('x[%s]' % j, 0)

        # Define y_j >= 0
        y = {}
        for j in range(2):
            y[j] = pulp.LpVariable('y[%s]' % j, 0)

        # Define z_i >= 0
        z = {}
        for i in range(2):
            z[i] = pulp.LpVariable('z[%s]' % i, 0)

        # Define p_ij >= 0
        p = {}
        for i in range(2):
            for j in range(2):
                p[i, j] = pulp.LpVariable('p[%s,%s]' % (i,j), 0)

        # Our objective is to maximize production
        Produced = pulp.LpVariable('Produced', 0)
        Production += Produced == sum(x[j] for j in range(2))
        Production.setObjective(Produced)

        # Define resource constraints
        for i in range(2):
            Production += sum(a[i][j] * x[j] for j in range(2)) + Gamma[i] * z[i] + sum(p[i, j] for j in range(2)) <= b[i]

        # Define each deviation for each uncertain parameter
        for i in range(2):
            for j in range(2):
                Production += z[i] + p[i, j] >= a_hat[i][j] * y[j]

        # Relate deviations to the decision variables
        for j in range(2):
            Production += x[j] <= y[j]

        # Solve the model
        status = Production.solve()
        assert status == pulp.LpStatusOptimal

        Res_i.append(pulp.value(Produced))

        # Record optimal plans
        x1_i.append(pulp.value(x[0]))
        x2_i.append(pulp.value(x[1]))

    # Save results
    Result.append(Res_i)
    x1.append(x1_i)
    x2.append(x2_i)

print('G1=G2\tx1\tx2')
for i in range(21):
    print('%.1f\t%.4f\t%.4f' % ((i/20.0) * 2, x1[i][i], x2[i][i]))

Output:

G1=G2   x1  x2
0.0 2.0000  2.0000
0.1 1.9630  1.8777
0.2 1.9212  1.7734
0.3 1.8770  1.6828
0.4 1.8321  1.6031
0.5 1.7872  1.5319
0.6 1.7431  1.4679
0.7 1.7001  1.4098
0.8 1.6583  1.3568
0.9 1.6179  1.3081
1.0 1.5789  1.2632
1.1 1.5695  1.2300
1.2 1.5611  1.1969
1.3 1.5539  1.1638
1.4 1.5478  1.1306
1.5 1.5429  1.0971
1.6 1.5392  1.0634
1.7 1.5368  1.0293
1.8 1.5358  0.9946
1.9 1.5363  0.9593
2.0 1.5385  0.9231

We only printed the solutions in which $\Gamma = \Gamma_1 = \Gamma_2$, meaning the decision maker assigns the same level of uncertainty to each constraint. You can translate this into their knowledge of how things can go wrong and their preferences for protecting each resource. With equal budgets, the decision maker assumes that things are equally likely to go wrong for each resource during the process. With $\Gamma_1 = 0$ and $\Gamma_2 = 1$, for example, the decision maker is confident about the usage of labor but not about the usage of lumber, and budgets accordingly.

As can be seen from the results, as we increase the value of $\Gamma$, the production level of each product decreases to account for possible deviations in resource consumption. When $\Gamma = 0$, we get the nominal solution, and when $\Gamma = 2$, we obtain the pessimistic solution. Notice that $x_2$ (tables) decreases faster as $\Gamma$ increases. This is due to the larger uncertainty (the values of $\hat a_{i2}$) in the resource consumption coefficients for making tables: as we become more conservative, we must account for larger deviations in the resources consumed by tables.
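As a quick sanity check (a small sketch using the numbers from the table above), the pessimistic plan at $\Gamma = 2$ uses up both resources exactly when every coefficient sits at its worst-case value:

# Pessimistic plan (Gamma = 2) from the table above
x1_p, x2_p = 1.5385, 0.9231

# Worst-case coefficients: nominal + full deviation
print((2 + 1.0) * x1_p + (1 + 0.5) * x2_p)   # ~6.0, worst-case labor usage (= b_1 up to rounding)
print((1 + 0.8) * x1_p + (2 + 1.5) * x2_p)   # ~6.0, worst-case lumber usage (= b_2 up to rounding)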

The difference between the objective value of the nominal case and the objective value for a specific value of $\Gamma$ is the price we pay to gain a robust solution; it is called the price of robustness.
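For example, reading the diagonal results above (a small sketch, using the objective $z = x_1 + x_2$):

# Objective values z = x1 + x2 on the diagonal (Gamma_1 = Gamma_2), taken from the table above
z_nominal = 2.0000 + 2.0000        # Gamma = 0
z_gamma_1 = 1.5789 + 1.2632        # Gamma = 1
z_gamma_2 = 1.5385 + 0.9231        # Gamma = 2 (pessimistic)

print(round(z_nominal - z_gamma_1, 4))   # ~1.158: price of robustness at Gamma = 1
print(round(z_nominal - z_gamma_2, 4))   # ~1.538: price of robustness at Gamma = 2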

Which one would you choose?

Assuming that we want the budgets of uncertainty to be equal, we have 21 options to choose from. So which one do we choose? The results from the optimization model give us the production plan, the number of products of each type to produce, and the objective value. But does that provide enough information to make a decision?

In the process of decision analytics, we want to make sure we understand the impact of our decisions. What is the impact of each of these 21 production plans? What is the difference between each of these plans? How do you compare them? What's the next step?

As I mentioned before, RO is inherently a risk-averse methodology, so it helps to know what is at risk here. The methodology hedges against uncertainty so that the proposed solution remains feasible for a given uncertainty set and budget. The issue is that the level of the budget of uncertainty has to be set prior to the realization of uncertainty. So it is useful to know how a specific production plan will behave under uncertainty. Essentially, we have a plan, uncertainty is present, and we need to see how uncertainty can alter the plan's outcome. What is the chance that the proposed plan becomes infeasible? What is the magnitude of infeasibility?

  • The probability of infeasibility: the percentage of time that at least one of the constraints is violated
    • This is closely related to chance-constraints and quality of service
  • The magnitude of infeasibility: how many extra resources (on average) are needed to avoid constraint violation
    • This is closely related to risk measures such as Conditional Value-at-Risk (CVaR) and expected shortfall

These risk measures give us much deeper insight into the performance of the proposed production plan. Knowing the probability of infeasibility can increase the decision maker's confidence in choosing a plan. The expected shortfall gives the decision maker a better understanding of how bad things can get and what the cost of possible recourse actions might be.
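As a compact preview of the simulation below, these two measures can be estimated from sampled scenarios roughly like this (the function and argument names here are illustrative, not from the original code):

def risk_measures(slack_samples):
    # slack_samples: one list of constraint slacks (b_i minus realized usage) per scenario
    violated = [s for s in slack_samples if min(s) < 0]

    # Probability that at least one constraint is violated
    prob_infeasible = float(len(violated)) / len(slack_samples)

    # Average total shortfall, conditional on at least one violation occurring
    shortfalls = [sum(-v for v in s if v < 0) for s in violated]
    avg_shortfall = sum(shortfalls) / len(shortfalls) if shortfalls else 0.0
    return prob_infeasible, avg_shortfall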

Imagine a shortage of resources occurs and we have to stop production and order more resources. Assume that after production has started, the values of the resource consumption coefficients are realized. Small orders of raw materials and resources can likely be satisfied on short notice, but large orders might take much longer: hiring 2 man-shifts of labor is much easier than hiring 20. So both the probability of disruption (infeasibility) and the magnitude of the shortfall matter to us.


It looks like we need to simulate the production process. In our simulation, different values of the consumption coefficients are realized, and we compute the probability of disruption along with the average shortfall conditional on a shortfall occurring.

Let's assume we are not sure exactly what the underlying distributions of the parameters are, but that we can generate samples from distributions we suspect they follow. To show how this choice can impact the results, I chose the following two distributions:

  • $a_{ij} \thicksim Uniform (\bar a_{ij} - \hat a_{ij}, \bar a_{ij} + \hat a_{ij} )$
  • $a_{ij} \thicksim Triangular (\bar a_{ij} - \hat a_{ij}, \bar a_{ij} + \hat a_{ij}, \bar a_{ij})$

For each solution, we have 5 different levels for deviations:

  • when none of the parameters deviate (we know the answer to this case, all solutions are always feasible)
  • when only one parameter deviates
  • when 2 parameters deviate
  • when 3 parameters out of 4 are deviating from their nominal value
  • when all 4 parameters are deviating

For each solution and each configuration, we run 1,000 replications with each of the 2 distributions. The code follows:

import random

b = [6, 6]

# We have 4 parameters that can deviate 1-a_11  2-a_12  3-a_21 4-a_22
a_nom = [2.0, 1.0, 1.0, 2.0]
a_dev = [1.0, 0.5, 0.8, 1.5]

x = [
    [2,      2],
    [1.963,  1.8777],
    [1.9212, 1.7734],
    [1.877,  1.6828],
    [1.8321, 1.6031],
    [1.7872, 1.5319],
    [1.7431, 1.4679],
    [1.7001, 1.4098],
    [1.6583, 1.3568],
    [1.6179, 1.3081],
    [1.5789, 1.2632],
    [1.5695, 1.23],
    [1.5611, 1.1969],
    [1.5539, 1.1638],
    [1.5478, 1.1306],
    [1.5429, 1.0971],
    [1.5392, 1.0634],
    [1.5368, 1.0293],
    [1.5358, 0.9946],
    [1.5363, 0.9593],
    [1.5385, 0.9231]
 ]

# This is a simulation for assessing the quality of the solution
Uni_Result = [] # Results for Uniform distribution
Tri_Result = [] # Results for Triangular distribution

# Lists to capture the risk magnitude for each resource and each distribution
Labor_Res = []
Lumber_Res = []

# Number of replications
Rep = 1000

#for each solution
for sol in range(21):
    Res_i = []
    triRes_i = []
    Labor = [[],[]]
    Lumber = [[],[]]

    # The different combinations in which parameters can deviate:
    # 0 if no parameter deviates, ..., 4 if all 4 parameters deviate.
    # Note that when nothing goes wrong (i = 0) we already know the result of the
    # simulation. We use it to calculate the probability of infeasibility:
    # if either of the 2 constraints is violated, the solution is infeasible.

    for i in range(5):
        # Counters for the simulation of each level (levels are the number of parameters deviating)
        Counter = 0.0
        CounterTri = 0.0

        # Counters to count the number of times each specific constraint is violated
        CounterU = [0.0, 0.0]   # how many times each constraint was infeasible for the Uniform distribution
        CounterT = [0.0, 0.0]   # how many times each constraint was infeasible for the Triangular distribution
        LaborOverU = 0          # Total overage (infeasibility) in labor (only when infeasible) for Uniform
        LumberOverU = 0         # Total overage (infeasibility) in lumber (only when infeasible) for Uniform
        LaborOverT = 0          # Total overage (infeasibility) in labor (only when infeasible) for Triangular
        LumberOverT = 0         # Total overage (infeasibility) in lumber (only when infeasible) for Triangular

        for j in range(Rep):
            A = list(range(4))   # indices of the 4 uncertain parameters

            # List of elements that deviate from their nominal value
            # Initiate a list with 4 False values for none of them to deviate
            DevList = [False] * 4

            # Initialize the value for the realization of parameter a
            a_realized = []     # Uniform distribution
            a_realTri = []      # Triangular distribution

            # If i == 0 it will not enter the loop and no values will change to True
            for k in range(i):
                # Choose an index to deviate
                # Add it to the list of deviations
                # Remove it from the original List of all indexes
                DevItem = random.choice(A)
                DevList[DevItem] = True
                A.remove(DevItem)

            for h in range(4):
                if DevList[h] == True:
                    # If there has to be a deviation, generate a random number from the uniform
                    # distribution on [a_nom - a_dev, a_nom + a_dev] (and from the triangular with mode a_nom)
                    L = a_nom[h] - a_dev[h]
                    U = a_nom[h] + a_dev[h]
                    aRand = round(random.uniform(L, U), 4)
                    aRandTri = round(random.triangular(L, U, a_nom[h]), 4)
                    a_realized.append(aRand)
                    a_realTri.append(aRandTri)

                else:
                    a_realized.append(a_nom[h])
                    a_realTri.append(a_nom[h])

            # Now we need to check to see if the constraints are met!
            # Compute the excess of resources for the realization of parameter a and decision x 
            # with Gamma level at (sol/20)*2 
            s1 = b[0] - ((a_realized[0] * x[sol][0]) + (a_realized[1] * x[sol][1]))
            s2 = b[1] - ((a_realized[2] * x[sol][0]) + (a_realized[3] * x[sol][1]))

            # If one of the slacks are negative we have infeasibility
            if (s1 < 0 or s2 < 0):
                Counter += 1

            # Check the constraint violation and its magnitude (Uniform)- Labor
            if s1 < 0:
                CounterU[0] += 1
                LaborOverU += abs(s1)

            # Check the constraint violation and its magnitude (Uniform)- Lumber
            if s2 < 0:
                CounterU[1] += 1
                LumberOverU += abs(s2)

            # Check the results with triangular distribution
            s1 = b[0] - ((a_realTri[0] * x[sol][0]) + (a_realTri[1] * x[sol][1]))
            s2 = b[1] - ((a_realTri[2] * x[sol][0]) + (a_realTri[3] * x[sol][1]))

            # If one of the slacks is negative we have infeasibility
            if (s1 < 0 or s2 < 0):
                CounterTri += 1

            # Check the constraint violation and its magnitude (Triangular)- Labor
            if s1 < 0:
                CounterT[0] += 1
                LaborOverT += abs(s1)

            # Checking the constraint violation and its magnitude (Triangular)- Lumber
            if s2 < 0:
                CounterT[1] += 1
                LumberOverT += abs(s2)

        # Labor
        if CounterU[0] != 0:
            AvgLaborU = LaborOverU/CounterU[0]
        else:
            AvgLaborU = 0
        Labor[0].append(round(AvgLaborU, 4))

        if CounterT[0] != 0:    
            AvgLaborT = LaborOverT/CounterT[0]
        else:
            AvgLaborT = 0
        Labor[1].append(round(AvgLaborT, 4))

        # Lumber
        if CounterU[1] != 0 :
            AvgLumberU = LumberOverU/CounterU[1]
        else:
            AvgLumberU = 0
        Lumber[0].append(round(AvgLumberU, 4))

        if CounterT[1] != 0:
            AvgLumberT = LumberOverT/CounterT[1]
        else:
            AvgLumberT = 0
        Lumber[1].append(round(AvgLumberT, 4))

        # Uniform
        Avg_Infeasibility_Probability = Counter/Rep
        Res_i.append(Avg_Infeasibility_Probability)

        # Triangular
        Avg_Infeasibility_Probability = CounterTri/Rep
        triRes_i.append(Avg_Infeasibility_Probability)

    Uni_Result.append(Res_i)
    Tri_Result.append(triRes_i)
    Labor_Res.append(Labor)
    Lumber_Res.append(Lumber)

print "Probability of Infeasibility : Uniform Distribution || Triangular DIstribution  "
for i in range(21):
    print "Gamma =", (i/20.0) * 2,
    print Uni_Result[i], Tri_Result[i]
print

print "Labor Results (Uniform || Triangular): "
for i in range(21):
    print "Gamma =", (i/20.0) * 2,
    print Labor_Res[i]
print

print "Lumber Results (Uniform || Triangular): "
for i in range(21):
    print "Gamma =", (i/20.0) * 2,
    print Lumber_Res[i]

Output:

Probability of Infeasibility : Uniform Distribution || Triangular Distribution
Gamma = 0.0 [0.0, 0.534, 0.665, 0.773, 0.751] [0.0, 0.513, 0.704, 0.751, 0.761]
Gamma = 0.1 [0.0, 0.422, 0.588, 0.683, 0.691] [0.0, 0.408, 0.555, 0.621, 0.672]
Gamma = 0.2 [0.0, 0.345, 0.503, 0.635, 0.638] [0.0, 0.223, 0.417, 0.512, 0.565]
Gamma = 0.3 [0.0, 0.246, 0.442, 0.504, 0.582] [0.0, 0.141, 0.283, 0.399, 0.477]
Gamma = 0.4 [0.0, 0.204, 0.323, 0.421, 0.53] [0.0, 0.092, 0.201, 0.299, 0.342]
Gamma = 0.5 [0.0, 0.158, 0.267, 0.386, 0.407] [0.0, 0.057, 0.12, 0.196, 0.288]
Gamma = 0.6 [0.0, 0.112, 0.192, 0.312, 0.399] [0.0, 0.038, 0.104, 0.138, 0.195]
Gamma = 0.7 [0.0, 0.072, 0.142, 0.234, 0.294] [0.0, 0.028, 0.062, 0.094, 0.148]
Gamma = 0.8 [0.0, 0.052, 0.113, 0.127, 0.232] [0.0, 0.008, 0.026, 0.052, 0.084]
Gamma = 0.9 [0.0, 0.028, 0.071, 0.116, 0.182] [0.0, 0.004, 0.017, 0.02, 0.046]
Gamma = 1.0 [0.0, 0.0, 0.03, 0.071, 0.115] [0.0, 0.0, 0.004, 0.011, 0.026]
Gamma = 1.1 [0.0, 0.0, 0.019, 0.052, 0.099] [0.0, 0.0, 0.001, 0.008, 0.022]
Gamma = 1.2 [0.0, 0.0, 0.023, 0.032, 0.093] [0.0, 0.0, 0.003, 0.005, 0.013]
Gamma = 1.3 [0.0, 0.0, 0.009, 0.019, 0.056] [0.0, 0.0, 0.003, 0.006, 0.006]
Gamma = 1.4 [0.0, 0.0, 0.007, 0.029, 0.044] [0.0, 0.0, 0.0, 0.004, 0.004]
Gamma = 1.5 [0.0, 0.0, 0.005, 0.013, 0.025] [0.0, 0.0, 0.0, 0.001, 0.001]
Gamma = 1.6 [0.0, 0.0, 0.005, 0.013, 0.029] [0.0, 0.0, 0.0, 0.0, 0.002]
Gamma = 1.7 [0.0, 0.0, 0.001, 0.005, 0.018] [0.0, 0.0, 0.0, 0.0, 0.0]
Gamma = 1.8 [0.0, 0.0, 0.0, 0.004, 0.007] [0.0, 0.0, 0.0, 0.0, 0.0]
Gamma = 1.9 [0.0, 0.0, 0.0, 0.0, 0.002] [0.0, 0.0, 0.0, 0.0, 0.0]
Gamma = 2.0 [0.0, 0.0, 0.0, 0.0, 0.0] [0.0, 0.0, 0.0, 0.0, 0.0]

Labor Results (Uniform || Triangular): 
Gamma = 0.0 [[0.0, 0.7373, 0.7926, 0.9349, 1.0657], [0.0, 0.4679, 0.5202, 0.6436, 0.6921]]
Gamma = 0.1 [[0.0, 0.7035, 0.7001, 0.8248, 0.9839], [0.0, 0.4538, 0.452, 0.5601, 0.6074]]
Gamma = 0.2 [[0.0, 0.5131, 0.6348, 0.7155, 0.8771], [0.0, 0.3852, 0.4297, 0.4746, 0.6212]]
Gamma = 0.3 [[0.0, 0.4308, 0.5505, 0.675, 0.705], [0.0, 0.3207, 0.4323, 0.4394, 0.4811]]
Gamma = 0.4 [[0.0, 0.5055, 0.5975, 0.5953, 0.6588], [0.0, 0.3794, 0.4263, 0.4182, 0.3889]]
Gamma = 0.5 [[0.0, 0.3912, 0.5082, 0.5398, 0.5731], [0.0, 0.2723, 0.3593, 0.2978, 0.3847]]
Gamma = 0.6 [[0.0, 0.364, 0.3886, 0.4, 0.4889], [0.0, 0.256, 0.2368, 0.2634, 0.311]]
Gamma = 0.7 [[0.0, 0.2366, 0.2731, 0.3714, 0.4197], [0.0, 0.1826, 0.2273, 0.247, 0.2662]]
Gamma = 0.8 [[0.0, 0.1789, 0.1862, 0.331, 0.3232], [0.0, 0.1204, 0.1096, 0.1663, 0.1547]]
Gamma = 0.9 [[0.0, 0.0719, 0.1677, 0.235, 0.2385], [0.0, 0.1005, 0.1316, 0.0893, 0.1434]]
Gamma = 1.0 [[0.0, 0.0, 0.2078, 0.245, 0.212], [0.0, 0.0, 0.1695, 0.2668, 0.1957]]
Gamma = 1.1 [[0.0, 0.0, 0.1545, 0.1542, 0.1869], [0.0, 0.0, 0.0, 0.0872, 0.0425]]
Gamma = 1.2 [[0.0, 0.0, 0.1159, 0.1987, 0.1565], [0.0, 0.0, 0.214, 0.0, 0.114]]
Gamma = 1.3 [[0.0, 0.0, 0.1799, 0.1328, 0.1569], [0.0, 0.0, 0.1282, 0.0518, 0.1449]]
Gamma = 1.4 [[0.0, 0.0, 0.1233, 0.1083, 0.1017], [0.0, 0.0, 0.0, 0.0645, 0.0223]]
Gamma = 1.5 [[0.0, 0.0, 0.0835, 0.0776, 0.1296], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.6 [[0.0, 0.0, 0.0146, 0.054, 0.0837], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.7 [[0.0, 0.0, 0.0422, 0.0537, 0.0443], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.8 [[0.0, 0.0, 0.0, 0.0009, 0.0007], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.9 [[0.0, 0.0, 0.0, 0.0, 0.0148], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 2.0 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]]

Lumber Results (Uniform || Triangular): 
Gamma = 0.0 [[0.0, 1.2018, 1.2379, 1.324, 1.5699], [0.0, 0.7897, 0.8569, 0.9643, 1.1951]]
Gamma = 0.1 [[0.0, 0.9915, 1.0678, 1.1568, 1.4659], [0.0, 0.6339, 0.72, 0.8368, 0.9431]]
Gamma = 0.2 [[0.0, 0.8564, 0.927, 1.0097, 1.2433], [0.0, 0.5623, 0.6192, 0.7216, 0.875]]
Gamma = 0.3 [[0.0, 0.6777, 0.7024, 0.847, 1.1586], [0.0, 0.4614, 0.5588, 0.6331, 0.7364]]
Gamma = 0.4 [[0.0, 0.5489, 0.6039, 0.8053, 0.9262], [0.0, 0.3876, 0.469, 0.5102, 0.5143]]
Gamma = 0.5 [[0.0, 0.4155, 0.5319, 0.672, 0.8883], [0.0, 0.2917, 0.3713, 0.4278, 0.5006]]
Gamma = 0.6 [[0.0, 0.3941, 0.5415, 0.6163, 0.793], [0.0, 0.3136, 0.3141, 0.3611, 0.4405]]
Gamma = 0.7 [[0.0, 0.3068, 0.376, 0.541, 0.6758], [0.0, 0.2172, 0.3396, 0.4035, 0.417]]
Gamma = 0.8 [[0.0, 0.2238, 0.326, 0.4635, 0.5228], [0.0, 0.0444, 0.1342, 0.2511, 0.2677]]
Gamma = 0.9 [[0.0, 0.1091, 0.2935, 0.3943, 0.5283], [0.0, 0.1046, 0.2454, 0.1621, 0.2324]]
Gamma = 1.0 [[0.0, 0.0, 0.5332, 0.438, 0.4371], [0.0, 0.0, 0.121, 0.2605, 0.2429]]
Gamma = 1.1 [[0.0, 0.0, 0.2416, 0.3457, 0.3626], [0.0, 0.0, 0.3531, 0.1483, 0.282]]
Gamma = 1.2 [[0.0, 0.0, 0.336, 0.444, 0.3203], [0.0, 0.0, 0.2064, 0.1965, 0.1823]]
Gamma = 1.3 [[0.0, 0.0, 0.3351, 0.3466, 0.2831], [0.0, 0.0, 0.0222, 0.1532, 0.0325]]
Gamma = 1.4 [[0.0, 0.0, 0.4365, 0.2781, 0.2437], [0.0, 0.0, 0.0, 0.2439, 0.0979]]
Gamma = 1.5 [[0.0, 0.0, 0.1195, 0.1946, 0.205], [0.0, 0.0, 0.0, 0.3112, 0.2201]]
Gamma = 1.6 [[0.0, 0.0, 0.1456, 0.1087, 0.1556], [0.0, 0.0, 0.0, 0.0, 0.157]]
Gamma = 1.7 [[0.0, 0.0, 0.0, 0.2182, 0.1115], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.8 [[0.0, 0.0, 0.0, 0.0803, 0.0869], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 1.9 [[0.0, 0.0, 0.0, 0.0, 0.0417], [0.0, 0.0, 0.0, 0.0, 0.0]]
Gamma = 2.0 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]]

First, we have the probability of infeasibility for each solution, shown by the rows (one per choice of $\Gamma$, 21 in total), and each level of deviation, shown by the columns (0-4). As expected, for level 0 all the probabilities are zero: if nothing deviates, every solution is always feasible (which makes sense). But as we move to the right of each row, the number of deviating parameters increases (uncertainty increases) and the probability of infeasibility increases (which also makes sense!). As we move down each column (increasing the budget of uncertainty), the probability of infeasibility decreases, until it becomes zero at $\Gamma=2$. So as we become more conservative, we sacrifice production in order to fight infeasibility. The same trends hold for the triangular distribution, but the drop in the probability of infeasibility is faster than in the uniform case (do you know why?).

It can also be seen that, at $\Gamma = 1$, the probability of infeasibility when all 4 parameters deviate is around 12% under the uniform distribution and only around 2-3% under the triangular distribution. Note that to gain high confidence in the solution we do not need to set $\Gamma$ at its maximum. Also note that in our case the number of parameters that can deviate is very small. For larger problems (say, 10 times as many uncertain parameters), the choice of $\Gamma$ does not need to grow 10-fold to get the same level of protection; in fact, you only need to increase $\Gamma$ by a factor of $\sqrt{10}$.
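The $\sqrt{10}$ rule of thumb comes from the probabilistic guarantee behind the budget of uncertainty. Roughly speaking (a sketch of the Bertsimas and Sim bound, assuming independent, symmetric deviations), a solution $x^*$ of the robust model with budget $\Gamma_i$ violates constraint $i$ with probability at most about

$$\Pr\Big(\sum_{j} a_{ij} x_j^{*} > b_i\Big) \leq \exp\!\Big(-\frac{\Gamma_i^{2}}{2\,|J_i|}\Big),$$

so if the number of uncertain coefficients $|J_i|$ grows by a factor of 10, keeping the same guarantee only requires scaling $\Gamma_i$ by $\sqrt{10}$.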

Let's look at the case where all the parameters deviate and see the expected level of shortfall.

As $\Gamma$ increases, there is a general decreasing trend (the bumps are due to the limited number of simulation replications) in the expected shortfall for each resource. With $\Gamma = 0$ and the uniform distribution, we should expect to need about 1.6 extra units of lumber, almost 26% of the original 6 units we have. At $\Gamma=1$, however, we need about 0.45 extra units, a mere 7.5% of the original amount. The values for the same cases with the triangular distribution are even lower.

Now we have more insight into the operational aspects (results from the optimization) and the risks (results from the simulation) associated with each production plan. As you can see, the end result does not tell you which plan to choose, but it does tell you what to expect if any of these plans is chosen. As the decision maker, the final decision is yours: how much risk you are comfortable taking, and what your capabilities are for dealing with production disruptions through recourse actions.

Conclusion

We showed how robust optimization can be applied to decision making problems. We also showed that decision making does not end with solving the optimization problem. The steps proposed in this post can be used in any decision making context; their goal is to provide decision makers and managers with the tools and knowledge to make well-informed, educated decisions and to be aware of the impact of those decisions.

Hope you enjoyed it!


Visiting Data Scientist Program

Saba is the first (of hopefully many) member of the Yhat Visiting Data Scientist program. The program allows researchers to write about topics they are passionate about and exposes thousands of interested readers to research areas they might otherwise never know about.

Reach out to Saba on Twitter or email him at sneyshab [at] gmu.edu

To read more about the program, head over to the Yhat webpage.


