Although optimization methods are at the core of ML, recent advances have mostly focused on unconstrained convex optimization. However, as mentioned earlier, many engineering applications feature combinatorial and highly constrained settings. Combinatorial optimization and mixed-integer programming (MIP) have yet to be used systematically in ML, but they have the potential to improve generalization and interpretability. This thrust will develop novel ML methods for constrained learning based on combinatorial optimization.
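As a concrete illustration of what constrained learning with combinatorial structure can look like, the sketch below fits a cardinality-constrained (ℓ0) least-squares regression by exhaustively enumerating support sets. This is a toy stand-in, not the methods developed in this thrust or in the publications listed below, which rely on MIP formulations and convexification rather than brute force; the function name `best_subset_regression` and the budget parameter `k` are illustrative assumptions.

```python
# Toy sketch: cardinality-constrained (l0) least-squares regression.
# Enumerates all supports of size at most k, which is only feasible for
# small problems; MIP solvers and convex relaxations scale much further.
from itertools import combinations

import numpy as np


def best_subset_regression(X, y, k):
    """Return (support, coefficients) minimizing ||y - X w||^2 subject to ||w||_0 <= k."""
    n_features = X.shape[1]
    best_loss, best_support, best_w = np.inf, (), np.zeros(n_features)
    for size in range(1, k + 1):
        for support in combinations(range(n_features), size):
            cols = X[:, support]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            residual = y - cols @ coef
            loss = float(residual @ residual)
            if loss < best_loss:
                w = np.zeros(n_features)
                w[list(support)] = coef
                best_loss, best_support, best_w = loss, support, w
    return best_support, best_w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 8))
    true_w = np.zeros(8)
    true_w[[1, 4]] = [2.0, -3.0]
    y = X @ true_w + 0.1 * rng.standard_normal(100)
    support, w = best_subset_regression(X, y, k=2)
    print("selected features:", support)
```

The combinatorial constraint (the support-size budget) is exactly what makes the problem non-convex; the cited work on ℓ0 formulations addresses it with exact and relaxed mixed-integer models instead of enumeration.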
Meta-Algorithms Research Group
The goal of the Meta-Algorithms Research Group is to develop automatic algorithm and model selection for optimization problems and machine learning tasks.
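As a minimal, hedged illustration of automatic model selection (not the group's own method, which is not specified here), the sketch below scores a small portfolio of scikit-learn classifiers by cross-validated accuracy and keeps the best one; the candidate set, dataset, and scoring choice are assumptions made for the example.

```python
# Minimal sketch of automatic model selection: score a portfolio of
# candidate learners with cross-validation and select the best performer.
# The candidates, dataset, and metric are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def select_model(candidates, X, y, cv=5):
    """Return the candidate name with the highest mean cross-validated accuracy."""
    scores = {
        name: cross_val_score(model, X, y, cv=cv).mean()
        for name, model in candidates.items()
    }
    best_name = max(scores, key=scores.get)
    return best_name, scores


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "random_forest": RandomForestClassifier(random_state=0),
        "svm_rbf": SVC(),
    }
    best, scores = select_model(candidates, X, y)
    print("selected:", best)
    print("cv scores:", scores)
```

A practical meta-algorithm would go beyond this loop, for example by using problem features to predict which solver or model to run, but the selection-by-validation idea is the same.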
Publications
Related work
- The max-cut decision tree: improving on the accuracy and running time of decision trees. Jonathan Bodine and Dorit S. Hochbaum. In Proceedings of the 12th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (KDIR), volume 1, pages 59–70, 2020.
- Sparse and smooth signal estimation: convexification of ℓ0 formulations. Alper Atamtürk, Andrés Gómez, and Shaoning Han. Journal of Machine Learning Research (to appear).
- Safe screening rules for ℓ0-regression from perspective relaxations. Alper Atamtürk and Andrés Gómez. In Proceedings of the 37th International Conference on Machine Learning, pages 1–10, 2020.
- First-Order and Stochastic Optimization Methods for Machine Learning. Guanghui Lan. Springer Nature, Switzerland AG, 2020.
- A comparative study of the leading machine learning techniques and two new optimization algorithms. Philipp Baumann, Dorit S. Hochbaum, and Yan T. Yang. European Journal of Operational Research, 272(3):1041–1057, 2019.
- HNCcorr: a novel combinatorial approach for cell identification in calcium-imaging movies. Quico Spaen, Roberto Asín-Achá, Selmaan N. Chettih, Matthias Minderer, Christopher Harvey, and Dorit S. Hochbaum. eNeuro, 6(2), 2019.