Date of Award
5-1-2023
Degree Name
Doctor of Philosophy
Department
Mathematics
First Advisor
Xiao, Mingqing
Second Advisor
Xu, Dashun
Abstract
The performance of classification algorithms is largely governed by the hyperparameter configuration deployed. Traditional search-based algorithms require extensive hyperparameter evaluations to identify desirable configurations and are often too inefficient for large-scale tasks. In this dissertation, we address hyperparameter selection via meta-learning, which recommends promising configurations automatically, without costly evaluations. In this approach, a meta-learner is constructed on metadata extracted from historical classification problems, and its quality directly determines the success of the recommendations. Designing meta-learners that recommend effective hyperparameter configurations efficiently is therefore of practical importance.

This dissertation is divided into six chapters: the first chapter presents the research background and related work, the second through fifth chapters detail our main work and contributions, and the sixth chapter concludes the dissertation and outlines possible future work.

In the second and third chapters, we propose two (kernel) multivariate sparse-group Lasso (SGLasso) approaches for automatic meta-feature selection. Previously, meta-features were usually chosen manually, based on researchers' preferences and experience, or by wrapper methods; the former is often ineffective and the latter time-consuming. SGLasso, as an embedded feature selection model, selects the most effective meta-features during meta-learner training and thus jointly optimizes the meta-features and the meta-learner, both of which are essential for successful recommendations.

In the fourth chapter, we formulate hyperparameter recommendation as a low-rank tensor completion problem. Previously, the hyperparameter search space was often flattened into a one-dimensional vector, which destroys its spatial structure and discards the correlations between adjacent hyperparameter configurations, characteristics that are crucial in meta-learning. Our contributions are to represent the search space as a multi-dimensional tensor and to develop a novel kernel tensor completion algorithm for estimating the performance of hyperparameter configurations.

In the fifth chapter, we propose to learn latent features of the performance space via denoising autoencoders. Although the search space is usually high-dimensional, the performances of hyperparameter configurations are correlated with one another to a certain degree, and their main structure lies on a much lower-dimensional manifold that describes the performance distribution over the search space. Denoising autoencoders are applied to extract latent features, on which two effective recommendation strategies are built.

Extensive experiments verify the effectiveness of the proposed approaches: across a range of real problems, they recommend promising hyperparameters and significantly outperform state-of-the-art meta-learning-based methods as well as search algorithms such as random search, Bayesian optimization, and Hyperband.
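To make the embedded selection mechanism of chapters two and three concrete, the following is a minimal NumPy sketch of a multivariate sparse-group Lasso fitted by proximal gradient descent, where each row group corresponds to one meta-feature, so a zeroed row drops that meta-feature entirely. The function names, regularization weights, and toy data are illustrative assumptions; the dissertation's kernel variants are not reproduced here.

```python
import numpy as np

def soft_threshold(a, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def sglasso_prox(W, lam1, lam2):
    """Proximal operator of the sparse-group Lasso penalty.
    Rows of W are groups (one group per meta-feature), so a fully
    zeroed row means the corresponding meta-feature is discarded."""
    W = soft_threshold(W, lam1)                       # elementwise (Lasso) part
    norms = np.linalg.norm(W, axis=1, keepdims=True)  # row-group norms
    scale = np.maximum(1.0 - lam2 / np.maximum(norms, 1e-12), 0.0)
    return W * scale                                  # group soft-thresholding

def multivariate_sglasso(X, Y, lam1=1.0, lam2=1.0, n_iter=500):
    """Fit min_W 0.5*||Y - XW||_F^2 + lam1*||W||_1 + lam2*sum_j ||W[j,:]||_2
    by proximal gradient descent.  X: (n, p) meta-features, Y: (n, q) targets."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2            # 1/L, L = spectral norm squared
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)
        W = sglasso_prox(W - step * grad, step * lam1, step * lam2)
    return W

# Toy check: only the first four meta-features carry signal, and their
# rows should ideally be the only nonzero rows of the estimate.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))        # 100 historical datasets, 20 meta-features
W_true = np.zeros((20, 5)); W_true[:4] = rng.standard_normal((4, 5))
Y = X @ W_true + 0.1 * rng.standard_normal((100, 5))
W_hat = multivariate_sglasso(X, Y, lam1=2.0, lam2=2.0)
print("selected meta-features:", np.nonzero(np.linalg.norm(W_hat, axis=1) > 1e-8)[0])
```

The prox step exploits the standard decomposition of the sparse-group Lasso penalty: elementwise soft-thresholding followed by group soft-thresholding, which is what allows row-level (meta-feature-level) sparsity on top of entrywise sparsity.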
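The chapter-four formulation treats performances over the hyperparameter grid as a partially observed multi-dimensional tensor. The sketch below is a generic CP-decomposition completion fitted by gradient descent on the observed entries only; it is not the dissertation's kernel tensor completion algorithm, and the rank, learning rate, and toy dimensions are assumptions made for illustration.

```python
import numpy as np

def cp_tensor_completion(T_obs, mask, rank=3, n_iter=2000, lr=0.01, seed=0):
    """Complete a 3-way tensor of hyperparameter-performance values from
    partial observations, assuming a low-rank CP structure.
    T_obs: (I, J, K) array, arbitrary values where mask == 0.
    mask:  (I, J, K) binary array, 1 = configuration actually evaluated."""
    rng = np.random.default_rng(seed)
    I, J, K = T_obs.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Reconstruction T_hat[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
        T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
        R = mask * (T_hat - T_obs)          # residual on observed entries only
        # Gradients of 0.5 * ||mask * (T_hat - T_obs)||^2 w.r.t. each factor
        A -= lr * np.einsum('ijk,jr,kr->ir', R, B, C)
        B -= lr * np.einsum('ijk,ir,kr->jr', R, A, C)
        C -= lr * np.einsum('ijk,ir,jr->kr', R, A, B)
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Toy usage: a 10x10 grid of two hyperparameters across 8 datasets,
# with only 30% of the configurations actually evaluated.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (10, 10, 8))
T_true = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = (rng.random(T_true.shape) < 0.3).astype(float)
T_hat = cp_tensor_completion(T_true * mask, mask, rank=2)
print("mean abs error on unobserved entries:",
      np.abs(T_hat - T_true)[mask == 0].mean())
```

Keeping the grid as a tensor rather than a flattened vector is exactly what lets the factors share statistical strength across adjacent hyperparameter values along each mode.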
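To illustrate the chapter-five idea, here is a minimal denoising-autoencoder sketch in PyTorch: rows are historical datasets, columns are hyperparameter configurations, and random masking plays the role of corruption so the model learns to fill in unevaluated configurations. The architecture, corruption scheme, and the simple top-k recommendation at the end are our own illustrative assumptions, not the dissertation's two strategies.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Maps a (possibly partially observed) performance vector over the
    search space to a low-dimensional latent code and back."""
    def __init__(self, n_configs, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_configs, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_configs),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(perf_matrix, latent_dim=16, drop_prob=0.5, epochs=300, lr=1e-3):
    """perf_matrix: (n_datasets, n_configs) tensor of historical performances.
    Corruption masks out random entries, mimicking partially evaluated datasets."""
    model = DenoisingAutoencoder(perf_matrix.shape[1], latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        keep = (torch.rand_like(perf_matrix) > drop_prob).float()
        loss = nn.functional.mse_loss(model(perf_matrix * keep), perf_matrix)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Toy usage: 50 historical datasets, 400 hyperparameter configurations.
perfs = torch.rand(50, 400)
model = train_dae(perfs, latent_dim=8)
partial = perfs[0] * (torch.rand(400) < 0.1)   # only 10% of configs evaluated
recon = model(partial)                          # reconstructed performance vector
print("recommended configurations:", recon.argsort(descending=True)[:5].tolist())
```

The low-dimensional bottleneck is what encodes the manifold assumption from the abstract: correlated configuration performances compress into a small latent code from which the full performance vector can be approximately recovered.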
Access
This dissertation is Open Access and may be downloaded by anyone.