Parameter Tuning Tool

The Gurobi Optimizer provides a wide variety of parameters that allow you to control the operation of the optimization engines. The level of control varies from extremely coarse-grained (e.g., the Method parameter, which allows you to choose the algorithm used to solve continuous models) to very fine-grained (e.g., the MarkowitzTol parameter, which allows you to adjust the tolerances used during simplex basis factorization). While these parameters provide a tremendous amount of user control, the immense space of possible options can present a significant challenge when you are searching for parameter settings that improve performance on a particular model. The purpose of the Gurobi tuning tool is to automate this search.
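As a minimal sketch of what setting these two parameters looks like through the Python API (gurobipy), where the file name model.mps and the parameter values are placeholders:

    import gurobipy as gp

    # Read a model from disk; "model.mps" is a placeholder file name.
    model = gp.read("model.mps")

    # Coarse-grained control: Method selects the algorithm used for
    # continuous models (here 2, the barrier method).
    model.Params.Method = 2

    # Fine-grained control: MarkowitzTol adjusts the tolerance used
    # during simplex basis factorization.
    model.Params.MarkowitzTol = 0.01

    model.optimize()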

The Gurobi tuning tool performs multiple solves on your model, choosing different parameter settings for each solve, in a search for settings that improve runtime. The longer you let it run, the more likely it is to find a significant improvement. If you are using a Gurobi Compute Server, you can harness the power of multiple machines to perform distributed parallel tuning in order to speed up the search for effective parameter settings.
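One possible distributed-tuning setup is sketched below, assuming the WorkerPool, WorkerPassword, and TuneJobs parameters and a hypothetical pool of Compute Server machines; substitute your own addresses and credentials:

    import gurobipy as gp

    model = gp.read("model.mps")  # placeholder model file

    # Hypothetical worker pool; replace with your own machines.
    model.Params.WorkerPool = "server1:61000,server2:61000"
    model.Params.WorkerPassword = "passwd"  # only if your pool requires one

    # Distribute the tuning work across two parallel jobs.
    model.Params.TuneJobs = 2

    model.tune()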

The tuning tool can be invoked through two different interfaces. You can either use the grbtune command-line tool, or you can invoke it from one of our programming language APIs. Both approaches share the same underlying tuning algorithm. The command-line tool offers more tuning features. For example, it allows you to provide a list of models to tune, or to specify a list of base settings to try (tunebasesettings).
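A command-line invocation might look like the following; the file names are placeholders, and the exact argument syntax (in particular the comma-separated form of tunebasesettings) should be checked against grbtune's own help output:

    grbtune TuneTimeLimit=3600 tunebasesettings=base1.prm,base2.prm model1.mps model2.mps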

A number of tuning-related parameters allow you to control the operation of the tuning tool. The most important is probably TuneTimeLimit, which controls the amount of time spent searching for an improving parameter set. Other parameters include TuneTrials (which attempts to limit the impact of randomness on the result), TuneCriterion (which specifies the tuning criterion), TuneResults (which controls the number of results that are returned), and TuneOutput (which controls the amount of output produced by the tool).
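A sketch of this workflow through the Python API (gurobipy), with model.mps and the parameter values as placeholders:

    import gurobipy as gp

    model = gp.read("model.mps")  # placeholder model file

    # Spend at most one hour tuning, and solve each candidate setting
    # three times to dampen the effect of performance variability.
    model.Params.TuneTimeLimit = 3600
    model.Params.TuneTrials = 3

    # Return only the single best parameter set found.
    model.Params.TuneResults = 1

    model.tune()

    # Load the best result into the model and save it for later reuse.
    if model.TuneResultCount > 0:
        model.getTuneResult(0)
        model.write("tune0.prm")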

Before we discuss the actual operation of the tuning tool, let us first provide a few caveats about the results. While parameter settings can have a significant effect on performance for many models, they won't solve every performance issue. One reason is simply that there are many models for which even the best possible choice of parameter settings won't produce an acceptable result. Some models are simply too large and/or difficult to solve, while others may have numerical issues that can't be fixed with parameter changes.

Another limitation of automated tuning is that performance on a model can experience significant variations due to random effects (particularly for MIP models). This is the nature of search. The Gurobi algorithms often have to choose from among multiple, equally appealing alternatives. Seemingly innocuous changes to the model (such as changing the order of the constraints or variables), or subtle changes to the algorithm (such as modifying the random number seed), can lead to different choices. Oftentimes, breaking a single tie in a different way can lead to an entirely different search. We've seen cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.

The bottom line is that automated performance tuning is meant to suggest parameter settings that could improve performance on your models. It is not meant to be a replacement for efficient modeling or careful performance testing.


