Yasha Pushak

Algorithm designers are regularly faced with the tedious task of finding suitable default values for the parameters that impact the performance of algorithms. Thoroughly evaluating even a single parameter configuration typically requires running the algorithm on a large number of problem instances, which can make the process very slow. To address this problem, many automated algorithm configuration procedures have been proposed. The vast majority of these are based on powerful meta-heuristics with strong diversification mechanisms, thereby ensuring that they sufficiently explore the parameter configuration space.

However, despite the prominence of automated algorithm configuration, relatively little is known about the algorithm configuration landscapes searched by these procedures, which relate parameter values to algorithm performance. As a result, while these strong diversification mechanisms make existing configurators robust, it is unclear whether they are actually required or whether they merely increase the running time of the configurators.

One particularly notable early work in the field showed evidence suggesting that the algorithm configuration landscapes of two optimization algorithms are, in fact, close to uni-modal. However, existing fitness landscape analysis techniques are unable to account for the stochasticity in the performance measurements of algorithms in a statistically principled way, which is a major barrier to their application to studying algorithm configuration scenarios. We address this gap by developing the first statistically principled method for detecting significant deviations from uni-modality in a stochastic landscape.
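To illustrate the kind of test such a method requires (this is a hedged sketch of the general idea, not the thesis's actual procedure), consider configurations ordered along a single parameter, each evaluated with noisy performance measurements. One statistically principled way to flag a deviation from uni-modality is to compute a bootstrap confidence interval for each configuration's mean cost and check for a configuration that is significantly worse than configurations on both sides of it, a barrier that no single-valley landscape could produce. The function names and thresholds below are illustrative assumptions.

```python
import random
import statistics


def bootstrap_ci(samples, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of
    noisy performance measurements of one configuration."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi


def deviates_from_unimodality(runs):
    """runs: per-configuration lists of measured costs (lower is
    better), ordered along one parameter dimension.

    Returns True iff some configuration is significantly worse than
    configurations on BOTH sides -- evidence of two separate valleys,
    i.e., a significant deviation from uni-modality."""
    cis = [bootstrap_ci(r) for r in runs]
    for j in range(1, len(runs) - 1):
        lo_j = cis[j][0]  # lower bound of the candidate "barrier"
        better_left = any(cis[i][1] < lo_j for i in range(j))
        better_right = any(cis[k][1] < lo_j for k in range(j + 1, len(runs)))
        if better_left and better_right:
            return True
    return False
```

The key design point is that a deviation is only reported when the confidence intervals separate, so ordinary measurement noise does not trigger false rejections of uni-modality.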

We apply this method, along with other (new and existing) landscape analysis techniques, to a variety of algorithm configuration scenarios arising in automated machine learning (AutoML) and the minimization of the running time of algorithms for solving NP-hard problems. We show that algorithm configuration landscapes are most often highly structured and relatively simple.

Using the intuition from this analysis, we develop two prototype algorithm configuration procedures designed for AutoML. We show that these methods make assumptions that are too strong, leading to mixed results. However, building on the same intuition, we develop another procedure for configuring algorithms for NP-hard problems. We show that, compared to state-of-the-art baselines, our new method often finds similar or better configurations in the same or less time.
