Three researchers affiliated with the NSF AI Institute for Advances in Optimization presented their work at the 22nd International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2025), held Nov. 10–13 in Melbourne, Australia. The annual conference brings together researchers working at the intersection of optimization, artificial intelligence, and decision-making systems.

Multi-Task Learning for MILP Solving

Junyang Cai presented research titled “Multi-task Representation Learning for Mixed Integer Linear Programming.” The work addressed limitations in current machine learning–guided methods for solving mixed integer linear programs, which often depend on separate data collection and training pipelines that restrict scalability. His paper won the CPAIOR 2025 Best Paper Award.

Junyang Cai

Cai introduced a multi-task learning framework that produces shared MILP representations capable of guiding solvers across different optimization tasks and across solver platforms, including Gurobi and SCIP. The framework supports tasks such as branching decisions and solver configuration within a unified model. Experimental results on three widely used MILP benchmarks showed that the multi-task model matched the performance of specialized models on in-distribution tasks while significantly outperforming them when generalizing across problem sizes and task types.
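At a high level, the shared-representation idea follows a familiar multi-task pattern: one encoder produces an embedding of a MILP instance, and lightweight task-specific heads reuse it. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern; the architecture, feature dimensions, and training targets are placeholders, not the paper's model.

```python
# Minimal multi-task sketch (illustrative, not the paper's architecture):
# a shared encoder embeds per-variable MILP features, and lightweight task
# heads reuse the embedding for branching scores and for predicting a
# solver-configuration class.
import torch
import torch.nn as nn

class SharedMILPEncoder(nn.Module):
    def __init__(self, var_feat_dim=16, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(var_feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, var_feats):              # (num_vars, var_feat_dim)
        return self.net(var_feats)             # (num_vars, hidden_dim)

class MultiTaskMILPModel(nn.Module):
    def __init__(self, var_feat_dim=16, hidden_dim=64, num_configs=8):
        super().__init__()
        self.encoder = SharedMILPEncoder(var_feat_dim, hidden_dim)
        self.branch_head = nn.Linear(hidden_dim, 1)            # score per variable
        self.config_head = nn.Linear(hidden_dim, num_configs)  # per-instance config class

    def forward(self, var_feats):
        h = self.encoder(var_feats)
        branch_scores = self.branch_head(h).squeeze(-1)        # which variable to branch on
        config_logits = self.config_head(h.mean(dim=0))        # pooled instance embedding
        return branch_scores, config_logits

# Joint training step on one toy instance: both task losses update the shared encoder.
model = MultiTaskMILPModel()
var_feats = torch.randn(30, 16)                # toy features for 30 variables
branch_target = torch.tensor(4)                # e.g. index chosen by strong branching
config_target = torch.tensor(2)                # e.g. best-performing configuration class
branch_scores, config_logits = model(var_feats)
loss = (nn.functional.cross_entropy(branch_scores.unsqueeze(0), branch_target.unsqueeze(0))
        + nn.functional.cross_entropy(config_logits.unsqueeze(0), config_target.unsqueeze(0)))
loss.backward()
```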

Quantum-Classical Methods for Power Systems Optimization

Rosemary Barrass presented research titled “Leveraging Quantum Computing for Accelerated Classical Algorithms in Power Systems Optimization.” The work examined the use of commercially available quantum annealing hardware to address mixed-integer problems in power systems, specifically the unit commitment problem, which grid operators solve daily to meet electricity demand while satisfying operational constraints.
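For readers unfamiliar with the problem, unit commitment couples binary on/off decisions with continuous dispatch so that demand is met at minimum cost. The toy single-period example below, a brute-force sketch with made-up generator data, illustrates that structure; it is far smaller and simpler than the IEEE RTS-96 system studied in the paper.

```python
# Toy single-period unit commitment (illustrative only): choose which
# generators to switch on and how much each produces so demand is met
# at least cost. Data and constraints are simplified placeholders.
from itertools import product

# generator data: (min output MW, max output MW, fixed on-cost $, variable cost $/MWh)
generators = [(20, 100, 500, 20.0),
              (30, 150, 800, 15.0),
              (10,  60, 300, 30.0)]
demand = 180.0

best_cost, best_plan = float("inf"), None
for on in product([0, 1], repeat=len(generators)):           # binary commitment decisions
    lo = sum(g[0] for g, u in zip(generators, on) if u)
    hi = sum(g[1] for g, u in zip(generators, on) if u)
    if not (lo <= demand <= hi):                              # committed units must cover demand
        continue
    # dispatch: every committed unit at its minimum, then fill cheapest first
    dispatch, remaining = {}, demand
    for i, g in enumerate(generators):
        if on[i]:
            dispatch[i] = g[0]
            remaining -= g[0]
    for i, g in sorted(enumerate(generators), key=lambda item: item[1][3]):
        if on[i] and remaining > 0:
            extra = min(g[1] - dispatch[i], remaining)
            dispatch[i] += extra
            remaining -= extra
    cost = sum(generators[i][2] + generators[i][3] * mw for i, mw in dispatch.items())
    if cost < best_cost:
        best_cost, best_plan = cost, (on, dispatch)

print(best_cost, best_plan)
```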

Rosemary Barrass

Barrass introduced a hybrid quantum-classical algorithm, QC4UC, designed to improve computational efficiency. The approach incorporates a novel Benders-cut generation technique that improves solution quality while reducing hardware interactions and qubit requirements. A k-local neighborhood search serves as a recovery step, further refining solutions beyond what quantum annealing hardware alone can achieve. The algorithm was evaluated on a modified IEEE RTS-96 test system, with results compared across simulated annealing and real quantum annealing hardware.
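The recovery idea behind a k-local neighborhood search can be sketched in a few lines: starting from a binary solution (for example, a raw annealer sample), examine every flip of up to k bits and keep the best improving neighbor until none remains. The snippet below is a generic illustration with a toy quadratic objective, not the QC4UC implementation.

```python
# Generic k-local neighborhood search sketch (illustrative): repeatedly try
# every flip of up to k bits of a binary solution and accept improvements.
from itertools import combinations

def k_local_search(x, objective, k=2):
    x = list(x)
    best_val = objective(x)
    improved = True
    while improved:
        improved = False
        for r in range(1, k + 1):
            for idx in combinations(range(len(x)), r):
                y = x[:]
                for i in idx:
                    y[i] = 1 - y[i]                  # flip up to k coordinates
                val = objective(y)
                if val < best_val:
                    x, best_val, improved = y, val, True
    return x, best_val

# Toy quadratic objective over binary variables (a stand-in, not the unit-commitment model).
Q = [[ 2, -1,  0,  1],
     [-1,  3, -2,  0],
     [ 0, -2,  1, -1],
     [ 1,  0, -1,  2]]

def toy_objective(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(len(x)) for j in range(len(x)))

start = [1, 0, 1, 1]                                 # e.g. a raw annealer sample
print(k_local_search(start, toy_objective, k=2))
```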

Barrass noted that discussions during the conference highlighted growing interest in combining quantum computing, machine learning, and classical optimization to address challenging real-world problems, particularly where feasibility and solution speed are critical.

Learning-Based Proxies for Robust Optimization

Wyame Benslimane presented “Self-Supervised Penalty-Based Learning for Robust Constrained Optimization,” which introduced a learning-based optimization proxy designed to solve constrained optimization problems under uncertainty. The approach builds on robust optimization principles, explicitly modeling the uncertainty while learning an approximate robust solution.
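For context, the robust constrained optimization template that such proxies approximate requires the constraints to hold for every realization of the uncertainty; the generic form below is the standard textbook statement, not the paper's specific formulation.

```latex
% Generic robust constrained problem: the decision x must remain feasible
% for every uncertainty realization \xi in the uncertainty set \Xi.
\begin{aligned}
\min_{x} \quad & f(x) \\
\text{s.t.} \quad & g(x, \xi) \le 0 \quad \forall \xi \in \Xi
\end{aligned}
```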

Wyame Benslimane

The method uses a self-supervised loss function, eliminating the need for pre-solved training datasets. Instead, the model learns by minimizing a penalty-based objective. The framework was extended to handle both continuous and combinatorial optimization problems. Experimental results across three applications showed that the learned models produced high-quality solutions significantly faster than traditional solvers. The work also demonstrated how tuning the penalty parameter allows a trade-off between solution quality and constraint satisfaction.
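The training idea can be illustrated with a short, hypothetical PyTorch sketch: a network maps problem parameters to a candidate solution and is trained on objective value plus a penalty on worst-case constraint violation over sampled uncertainty scenarios, with no pre-solved solutions required. The toy objective, constraint, and penalty weight rho below are illustrative, not the paper's formulation.

```python
# Self-supervised penalty-based training sketch (illustrative, not the paper's
# model): minimize objective(x) + rho * worst-case constraint violation over
# sampled uncertainty scenarios, with no pre-solved (parameter, solution) pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)
proxy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
rho = 10.0                                    # penalty weight: quality vs. feasibility trade-off

def objective(x):                             # toy convex cost to minimize
    return (x ** 2).sum(dim=-1)

def constraint_violation(x, xi):              # toy constraint x1 + x2 >= 1 + xi per scenario xi
    return torch.relu(1.0 + xi - x.sum(dim=-1, keepdim=True).expand_as(xi))

for step in range(200):
    params = torch.randn(32, 4)               # batch of problem parameters
    x = proxy(params)                         # predicted candidate solutions
    xi = 0.1 * torch.rand(32, 16)             # sampled uncertainty scenarios per instance
    loss = (objective(x) + rho * constraint_violation(x, xi).max(dim=-1).values).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```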
