Modify a model

Examples: diet, feasopt, fixanddive, gc_pwl_func, lpmod, sensitivity, workforce3, workforce4, workforce5

This section considers model modification. Modification can take many forms, including adding constraints or variables, deleting constraints or variables, modifying constraint and variable attributes, changing constraint coefficients, etc. The Gurobi examples don't cover all possible modifications, but they cover the most common types.

diet

This example builds a linear model that solves the classic diet problem: to find the minimum cost diet that satisfies a set of daily nutritional requirements. Once the model has been formulated and solved, it adds an additional constraint to limit the number of servings of dairy products and solves the model again. Let's focus on the model modification.

Adding constraints to a model that has already been solved is no different from adding constraints when constructing an initial model. In C, we can introduce a limit of 6 dairy servings through the following constraint (where variables 7 and 8 capture the number of servings of milk and ice cream, respectively):

printf("\nAdding constraint: at most 6 servings of dairy\n");
cind[0] = 7;
cval[0] = 1.0;
cind[1] = 8;
cval[1] = 1.0;
error = GRBaddconstr(model, 2, cind, cval, GRB_LESS_EQUAL, 6.0,
                     "limit_dairy");
In C++:
cout << "\nAdding constraint: at most 6 servings of dairy" << endl;
model.addConstr(buy[7] + buy[8] <= 6.0, "limit_dairy");
In Java:
System.out.println("\nAdding constraint: at most 6 servings of dairy");
GRBLinExpr lhs = new GRBLinExpr();
lhs.addTerm(1.0, buy[7]);
lhs.addTerm(1.0, buy[8]);
model.addConstr(lhs, GRB.LESS_EQUAL, 6.0, "limit_dairy");
In C#:
Console.WriteLine("\nAdding constraint: at most 6 servings of dairy");
model.AddConstr(buy[7] + buy[8] <= 6.0, "limit_dairy");
In Python:
print('\nAdding constraint: at most 6 servings of dairy')
m.addConstr(buy.sum(['milk', 'ice cream']) <= 6, "limit_dairy")

For linear models, the previously computed solution can be used as an efficient warm start for the modified model. The Gurobi solver retains the previous solution, so the next optimize call automatically starts from the previous solution.

lpmod

Changing a variable bound is also straightforward. The lpmod example changes a single variable bound, then re-solves the model in two different ways. A variable bound can be changed by modifying the UB or LB attribute of the variable. In C:

error = GRBsetdblattrelement(model, "UB", minVar, 0.0);
In C++:
v[minVar].set(GRB_DoubleAttr_UB, 0.0);
In Java:
minVar.set(GRB.DoubleAttr.UB, 0.0);
In C#:
minVar.UB = 0.0;
In Python:
minVar.UB = 0.0
The model is re-solved simply by calling the optimize method again. For a continuous model, this starts the optimization from the previous solution. To illustrate the difference when solving the model from an initial, unsolved state, the lpmod example calls the reset function. In C:
error = GRBreset(model, 0);
In C++, Java, and Python:
model.reset();
In C#:
model.Reset();
When we call the optimize method after resetting the model, optimization starts from scratch. Although the difference in computation time is insignificant for this tiny example, a warm start can make a big difference for larger models.

fixanddive

The fixanddive example provides another example of bound modification. In this case, we repeatedly modify a set of variable bounds, utilizing warm starts each time. In C, variables are fixed as follows:

for (j = 0; j < nfix; ++j) {
  fixval = floor(fractional[j].X + 0.5);
  error = GRBsetdblattrelement(model, "LB", fractional[j].index, fixval);
  if (error) goto QUIT;
  error = GRBsetdblattrelement(model, "UB", fractional[j].index, fixval);
  if (error) goto QUIT;
  error = GRBgetstrattrelement(model, "VarName", fractional[j].index, &vname);
  if (error) goto QUIT;
  printf(" Fix %s to %f ( rel %f )\n", vname, fixval, fractional[j].X);
}
In C++:
for (int i = 0; i < nfix; ++i) {
  GRBVar* v = fractional[i];
  double fixval = floor(v->get(GRB_DoubleAttr_X) + 0.5);
  v->set(GRB_DoubleAttr_LB, fixval);
  v->set(GRB_DoubleAttr_UB, fixval);
  cout << " Fix " << v->get(GRB_StringAttr_VarName) << " to " << fixval
       << " ( rel " << v->get(GRB_DoubleAttr_X) << " )" << endl;
}
In Java:
for (int i = 0; i < nfix; ++i) {
  GRBVar v = fractional.get(i);
  double fixval = Math.floor(v.get(GRB.DoubleAttr.X) + 0.5);
  v.set(GRB.DoubleAttr.LB, fixval);
  v.set(GRB.DoubleAttr.UB, fixval);
  System.out.println(" Fix " + v.get(GRB.StringAttr.VarName) + " to "
      + fixval + " ( rel " + v.get(GRB.DoubleAttr.X) + " )");
}
In C#:
for (int i = 0; i < nfix; ++i) {
  GRBVar v = fractional[i];
  double fixval = Math.Floor(v.X + 0.5);
  v.LB = fixval;
  v.UB = fixval;
  Console.WriteLine(" Fix " + v.VarName + " to " + fixval
      + " ( rel " + v.X + " )");
}
In Python:
for i in range(nfix):
    v = fractional[i]
    fixval = int(v.X + 0.5)
    v.LB = fixval
    v.UB = fixval
    print(' Fix %s to %g (rel %g)' % (v.VarName, fixval, v.X))
Again, the subsequent call to optimize starts from the previous solution.
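All five listings round each fractional value to the nearest integer with floor(x + 0.5) before fixing both bounds. A small pure-Python sketch of that rounding rule, worth noting because it differs from Python's built-in round(), which rounds halves to the nearest even integer:

```python
import math

def round_to_nearest(x):
    """Nearest-integer rounding used when fixing fractional variables:
    floor(x + 0.5), matching the fixanddive listings above."""
    return math.floor(x + 0.5)

# floor(x + 0.5) always rounds halves up, while Python's round()
# rounds halves to even: round(2.5) == 2, but floor(2.5 + 0.5) == 3.
print(round_to_nearest(2.5))   # 3
print(round_to_nearest(0.49))  # 0
```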

sensitivity

The sensitivity example computes the optimal objective value associated with fixing each binary variable to 0 or 1. It first solves the given model to optimality. It then constructs a multi-scenario model, where in each scenario a binary variable is fixed to the complement of the value it took in the optimal solution. The resulting multi-scenario model is solved, giving the objective degradation associated with forcing each binary variable off of its optimal value.
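The per-scenario fixing amounts to setting a binary variable's scenario bounds to the complement of its optimal value. The bound computation itself can be sketched in plain Python (the function name is hypothetical, for illustration only):

```python
def complement_fix_bounds(optimal_values):
    """For each binary variable's optimal value x* (0.0 or 1.0), return
    the (lb, ub) pair that fixes it to 1 - x* in its own scenario."""
    bounds = []
    for xstar in optimal_values:
        flipped = 1.0 - round(xstar)  # complement of the optimal value
        bounds.append((flipped, flipped))
    return bounds

print(complement_fix_bounds([1.0, 0.0, 1.0]))
# [(0.0, 0.0), (1.0, 1.0), (0.0, 0.0)]
```

In the actual multi-scenario model these pairs would be applied through the scenario bound attributes of each variable, one scenario per binary variable.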

feasopt

The last modification example we consider is feasopt, which adds variables to existing constraints and also changes the optimization objective. Setting the objective to zero is straightforward. In C, set the Obj attribute to 0:

for (j = 0; j < numvars; ++j) {
  error = GRBsetdblattrelement(model, "Obj", j, 0.0);
  if (error) goto QUIT;
}
In the object-oriented interfaces, call setObjective with an empty linear expression. In C++:
// clear objective
feasmodel.setObjective(GRBLinExpr(0.0));
In Java:
// Clear objective
feasmodel.setObjective(new GRBLinExpr());
In C#:
// Clear objective
feasmodel.SetObjective(new GRBLinExpr());
In Python:
# clear objective
feasmodel.setObjective(0.0)
Adding new variables is somewhat more complex. In the example, we want to add artificial variables to each constraint in order to allow the constraint to be relaxed: two artificial variables for each equality constraint, and one for each inequality constraint. The C code for adding a single artificial variable to constraint i is:
error = GRBgetstrattrelement(model, "ConstrName", i, &cname);
if (error) goto QUIT;
vname = malloc(sizeof(char) * (6 + strlen(cname)));
if (!vname) goto QUIT;
strcpy(vname, "ArtN_");
strcat(vname, cname);
vind[0] = i;
vval[0] = -1.0;
error = GRBaddvar(model, 1, vind, vval, 1.0, 0.0, GRB_INFINITY,
                  GRB_CONTINUOUS, vname);
if (error) goto QUIT;
In C++:
double coef = -1.0;
feasmodel.addVar(0.0, GRB_INFINITY, 1.0, GRB_CONTINUOUS, 1,
                 &c[i], &coef,
                 "ArtN_" + c[i].get(GRB_StringAttr_ConstrName));
In Java:
GRBConstr[] constrs = new GRBConstr[] { c[i] };
double[] coeffs = new double[] { -1 };
feasmodel.addVar(0.0, GRB.INFINITY, 1.0, GRB.CONTINUOUS, constrs, coeffs,
                 "ArtN_" + c[i].get(GRB.StringAttr.ConstrName));
In C#:
GRBConstr[] constrs = new GRBConstr[] { c[i] };
double[] coeffs = new double[] { -1 };
feasmodel.AddVar(0.0, GRB.INFINITY, 1.0, GRB.CONTINUOUS, constrs, coeffs,
                 "ArtN_" + c[i].ConstrName);
In Python:
feasmodel.addVar(obj=1.0, name="ArtN_" + c.ConstrName,
                 column=gp.Column([-1], [c]))
We use the column argument of the addVar method to specify the set of constraints in which the new variable participates, as well as the associated coefficients. In this example, the new variable only participates in the constraint to be relaxed. Default values are used here for all variable attributes except the objective and the variable name.