The summer school will offer a wide range of talks and hands-on sessions on the use of Machine Learning for Constraint Programming.
The program is oriented to researchers in constraint solving (constraint programming, knowledge representation, SAT/SMT solving and more) as well as those at the intersection of constraint solving and machine learning. The program consists of a mix of invited talks and hands-on lab sessions.
Prof. Guns will welcome all to the summer school with a general motivation of why machine learning is increasingly used with constraint solving. This will be followed by an overview of the interactions between constraint solving and machine learning and how the program of the summer school highlights many of these interactions. The talk will then review the basic principles of constraint programming: from modelling problems using decision variables, constraints, and the use of global constraints, to different solver technologies and the translation to them. Real-world examples will be shown of modelling, solving and visualising problems with the Python-based CPMpy modelling library.
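As a small taste of the ideas the talk reviews (decision variables with finite domains, constraints, and search), here is a toy pure-Python backtracking sketch for the 4-queens problem. The example and all names are ours, purely illustrative; the hands-on sessions use CPMpy rather than hand-written search.

```python
# Minimal CP-style backtracking: decision variables q[0..3] (the column of
# the queen in each row), domains {0..3}, and pairwise "no attack" constraints.

def no_attack(q, r1, r2):
    # Constraint: queens in rows r1 and r2 share no column and no diagonal.
    return q[r1] != q[r2] and abs(q[r1] - q[r2]) != abs(r1 - r2)

def solve_queens(n=4):
    q = [None] * n
    def search(row):
        if row == n:
            return list(q)
        for col in range(n):          # try each value in the domain
            q[row] = col
            if all(no_attack(q, r, row) for r in range(row)):
                sol = search(row + 1)
                if sol:
                    return sol
        q[row] = None
        return None
    return search(0)

print(solve_queens(4))  # [1, 3, 0, 2]
```

A modelling library like CPMpy lets you state the same variables and constraints declaratively and hands the search to an off-the-shelf solver.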
We will briefly discuss the basic principles underlying machine learning and provide an overview of the many different types of machine learning tasks and methods that exist. Topics will include forms of supervised and unsupervised learning, reinforcement learning, symbolic versus subsymbolic methods, pattern discovery, and the inclusion of background knowledge in the form of constraints. The talk will focus on explaining terminology and providing high-level insight and understanding, rather than on the details of how methods work.
In this lab session you will get to play with the data from the visual Sudoku Assistant https://sudoku-assistant.cs.kuleuven.be. Given a (well-centered) image of a sudoku grid, you will train neural networks to predict the digits, you will develop a constraint model for sudoku solving, and you will link the prediction (probabilities) to the solver to get a CP-based joint inference method that can do better than simply solving for the predicted values. The lab session will be interactive, with open-ended questions that require experimenting with the provided Jupyter Python notebooks, as well as an open part at the end on things that can be further improved (different options requiring different skill levels).
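To see why joint inference can beat per-cell prediction, consider this toy sketch with made-up probabilities (not the lab's actual notebooks or data): four cells of one sudoku unit must be all-different, and maximising the joint log-probability repairs an infeasible per-cell argmax.

```python
import itertools, math

# Toy "joint inference": per-cell digit probabilities from a (hypothetical)
# classifier, for 4 cells that must be all-different (one sudoku unit).
probs = [
    {1: 0.6, 2: 0.3, 3: 0.05, 4: 0.05},  # cell 0
    {1: 0.5, 2: 0.4, 3: 0.05, 4: 0.05},  # cell 1 (argmax also says 1!)
    {1: 0.1, 2: 0.1, 3: 0.7, 4: 0.1},    # cell 2
    {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.7},    # cell 3
]

# Independent per-cell argmax may violate the all-different constraint:
argmax = [max(p, key=p.get) for p in probs]          # [1, 1, 3, 4]

# Joint inference: among all-different assignments, maximise total log-prob.
best = max(itertools.permutations([1, 2, 3, 4]),
           key=lambda a: sum(math.log(p[v]) for p, v in zip(probs, a)))

print(argmax)  # [1, 1, 3, 4] -- infeasible
print(best)    # (1, 2, 3, 4) -- feasible and jointly most likely
```

On a full 9x9 grid, enumeration is replaced by a CP solver maximising the same log-probability objective under the sudoku constraints.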
In recent years, the integration of Machine Learning (ML) with challenging scientific and engineering problems has experienced remarkable growth. In particular, deep learning has proven to be an effective solution for unconstrained problem settings, but it has struggled to perform as well in domains where hard constraints, domain knowledge, or physical principles need to be taken into account. In areas such as power systems, materials science, and fluid dynamics, the data follows well-established physical laws, and ignoring these laws can result in unreliable and ineffective solutions. In this talk, we will delve into the need for constraint-aware ML. We will present how to integrate key constrained optimization principles within the training process of deep learning models, endowing them with the capability of handling hard constraints and physical principles. The resulting models will bring a new level of accuracy and efficiency to hard decision tasks, which will be showcased on energy and scheduling problems. We will then introduce a powerful integration of constrained optimization as neural network layers, resulting in ML models that are able to enforce structure in the outputs of learned embeddings. This integration will provide ML models with enhanced expressiveness and modeling ability, which will be showcased through the certification of fairness in learning-to-rank tasks and the assembly of high-quality ensemble models. Finally, we will discuss a number of grand challenges that I plan to address to develop a potentially transformative technology for both optimization and machine learning.
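To make the idea of folding hard constraints into training concrete, here is a deliberately tiny sketch (our illustration, not the speaker's method): gradient descent on a penalised objective with dual ascent on a Lagrangian-style multiplier, driving the iterate onto the constraint boundary.

```python
# Toy constrained "training" loop: minimise f(x) = (x - 3)^2 subject to the
# hard constraint g(x) = x - 2 <= 0, via a Lagrangian-style penalty term.

def grad_step(x, lam, lr=0.01):
    # subgradient of (x - 3)^2 + lam * max(0, x - 2) with respect to x
    g = 2 * (x - 3) + (lam if x > 2 else 0.0)
    return x - lr * g

x, lam = 0.0, 0.0
for _ in range(2000):
    x = grad_step(x, lam)
    lam = max(0.0, lam + 0.1 * (x - 2))  # dual ascent on the multiplier

print(round(x, 2))  # settles near 2.0, the constrained optimum
```

The same primal-dual pattern scales to neural networks, where x becomes the network output and the gradient step is backpropagation through the loss plus the constraint penalty.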
You've modeled your problem, now all you need to do is publish the results. But wait, the solver can't actually solve it! You're back to square one -- what to do? In this tutorial, we'll have a look at automated methods for getting better solving performance, without having to implement your own solver or fancy constraints. Using portfolios of solvers and automatically tuning their parameters, you can often get substantial performance improvements. We'll cover some of the basic methods (which are useful not only in Constraint Programming) and how to implement them in practice, with exercises that you can re-use for the constraints application that you're interested in. The methods are mostly agnostic to the particular constraint solvers and problems they are applied to and can be adapted quickly to other applications. We'll have a look at parallels to automated machine learning, current advances in this area, and open questions.
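A toy example of the portfolio viewpoint, with made-up runtimes (the tutorial exercises use real solvers): comparing the single best solver against the per-instance "virtual best" shows the gap that a learned per-instance selector could close.

```python
# Hypothetical runtimes (seconds) of three solvers on four instances;
# a real pipeline would measure these, then train a selection model.
runtimes = {
    "solverA": [ 1.0, 12.0,  0.5, 30.0],
    "solverB": [ 5.0,  2.0,  6.0, 25.0],
    "solverC": [20.0,  3.0,  9.0,  1.0],
}
n = 4

# Single Best Solver (SBS): one solver for everything, lowest mean runtime.
sbs = min(runtimes, key=lambda s: sum(runtimes[s]) / n)

# Virtual Best Solver (VBS): per-instance oracle, a lower bound on what any
# algorithm-selection method could achieve.
vbs_mean = sum(min(r[i] for r in runtimes.values()) for i in range(n)) / n

print(sbs, sum(runtimes[sbs]) / n)  # solverC 8.25
print(vbs_mean)                     # 1.125
```

The SBS-VBS gap (here 8.25 vs 1.125 seconds on average) is exactly the improvement that portfolio and tuning methods chase.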
Combinatorial optimization is different from recognizing cats and dogs: in the limit of infinite computation, the task is trivial. Therefore, machine learning based approaches aim to reduce total computation, possibly at the expense of solution quality, by learning from past experience. A well-trained model can reduce the size of the relevant search space, but computation used for evaluating a model cannot be used for search, so this is a trade-off that should be carefully made: count your flops and make your flops count! In this talk I will go through practical examples in the context of this trade-off. I will present a number of challenges and provide some guidelines that may help define future research directions to apply deep learning for combinatorial optimization.
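One way to make the trade-off tangible is to count search nodes explicitly. The sketch below (ours, with made-up data) runs a depth-first knapsack search with and without a cheap value-density ordering standing in for a learned scoring model.

```python
# Toy illustration of "spend compute on a model to shrink the search":
# depth-first knapsack search, with and without an item-ordering heuristic.

def knapsack(values, weights, cap, order):
    best, nodes = 0, 0
    idx = order(values, weights)
    def dfs(i, w, v):
        nonlocal best, nodes
        nodes += 1
        best = max(best, v)
        if i == len(idx):
            return
        # upper bound: remaining value; prune if it cannot beat `best`
        if v + sum(values[j] for j in idx[i:]) <= best:
            return
        j = idx[i]
        if w + weights[j] <= cap:
            dfs(i + 1, w + weights[j], v + values[j])  # take item j
        dfs(i + 1, w, v)                               # skip item j
    dfs(0, 0, 0)
    return best, nodes

vals, wts, cap = [6, 5, 4, 3, 2, 1], [5, 4, 3, 3, 2, 2], 7
naive = knapsack(vals, wts, cap, lambda v, w: list(range(len(v))))
# "model": order items by value density (a stand-in for a learned scorer)
guided = knapsack(vals, wts, cap,
                  lambda v, w: sorted(range(len(v)), key=lambda j: -v[j]/w[j]))
print(naive, guided)  # same best value; guided explores no more nodes
```

Here the "model" is almost free to evaluate; with an expensive neural network the node savings must outweigh the cost of every model call, which is the flop-counting trade-off the talk discusses.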
In this lab, we will get our hands dirty in solving some actual optimization problems! We will dive into the hybrid genetic search (HGS) algorithm, a state-of-the-art algorithm for the vehicle routing problem with time windows (VRPTW). We will explore how we can parameterize an important part of the method using a deep neural network, to improve its efficiency. It is up to you to make it work!
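As a small taste of the VRPTW side of the lab, here is a toy feasibility check for a single route under time windows (illustrative data only; the lab itself builds on the full HGS machinery).

```python
# Toy time-window check for one vehicle route. Each customer is a tuple:
# (travel_time_from_previous_stop, earliest, latest, service_time).
route = [
    (4, 0, 10, 2),   # customer 1
    (3, 8, 15, 2),   # customer 2
    (5, 10, 18, 1),  # customer 3
]

def check_route(route, depot_open=0):
    t = depot_open
    for travel, earliest, latest, service in route:
        t += travel
        t = max(t, earliest)  # wait if we arrive before the window opens
        if t > latest:        # the window has already closed: infeasible
            return False, t
        t += service
    return True, t

print(check_route(route))  # (True, 17)
```

Heuristics like HGS evaluate such timing logic millions of times; learning to predict which moves are worth evaluating is the kind of parameterization explored in the lab.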
Combining prediction and optimization has been an established practice in industrial decision support for a long time. In recent years, however, the topic has been the center of renewed research interest since the introduction of Decision-Focused Learning (DFL). From a semantic point of view, DFL seeks to incorporate the decision-making phase at training time, so that predictions are optimized for minimal decision cost rather than for maximal accuracy. From a technical point of view, the key issue in DFL is finding ways to differentiate an argmin operator, so that optimization can be integrated with (stochastic) gradient descent. In this lecture, we will argue that the basic ingredients of DFL can be used for much more than their intended purpose. Starting from a technical review of DFL, we will begin a 'ramble' that will lead to possibly unexpected places, including stochastic optimization, algorithm configuration, and reinforcement learning.
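The shift from accuracy to decision cost can be shown in a few lines. In this toy sketch (ours, not the lecture's formulation), the less accurate prediction has zero decision regret because it preserves the argmin; the non-differentiability of that argmin is precisely the technical issue DFL tackles.

```python
# Toy "decision regret": predict costs of two items, then pick the cheaper.
true_costs = [3.0, 4.0]
pred_A = [3.9, 3.8]  # small average error, but it flips the argmin
pred_B = [1.0, 6.0]  # large average error, argmin unchanged

def regret(pred, true):
    choice = min(range(len(pred)), key=lambda i: pred[i])  # argmin decision
    return true[choice] - min(true)  # extra true cost of that decision

print(regret(pred_A, true_costs))  # 1.0 (picked item 1, paying 4 instead of 3)
print(regret(pred_B, true_costs))  # 0.0 (decision is optimal)
```

Training to minimise this regret, rather than prediction error, is the semantic core of DFL; making the argmin amenable to gradient descent is the technical one.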
Constraint programming is known for being an efficient approach for solving combinatorial problems. Important design choices in a solver are the branching heuristics, which are designed to lead the search to the best solutions in a minimum amount of time. However, developing these heuristics is a time-consuming process that requires problem-specific expertise. This observation has motivated many efforts to use machine learning to automatically learn efficient heuristics without expert intervention. Although several generic variable-selection heuristics are available in the literature, the options for a generic value-selection heuristic are scarcer. In this talk, I will present a generic learning procedure that can be used to obtain a value-selection heuristic inside a constraint programming solver. This has been achieved thanks to a reinforcement learning algorithm combined with an architecture based on graph neural networks.
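The talk's approach combines reinforcement learning with graph neural networks; as a much-simplified flavour of "learning which value to branch on from reward", here is a toy epsilon-greedy bandit with simulated rewards (entirely our illustration, no real solver involved).

```python
import random

# Toy RL view of value selection: learn which branching value tends to pay
# off, from a made-up reward (1 if the simulated subtree yields a solution).
random.seed(0)
Q = {0: 0.0, 1: 0.0}       # estimated merit of branching on value 0 vs 1
reward = {0: 0.2, 1: 0.8}  # hidden: value 1 succeeds more often

for _ in range(500):
    # epsilon-greedy: mostly exploit the current best estimate, sometimes explore
    v = random.choice([0, 1]) if random.random() < 0.1 else max(Q, key=Q.get)
    r = 1.0 if random.random() < reward[v] else 0.0
    Q[v] += 0.1 * (r - Q[v])  # temporal-difference style update

print(max(Q, key=Q.get))  # 1: the heuristic learned to try value 1 first
```

In the real setting the state is the full search node (encoded by a graph neural network) and the reward reflects solution quality and search effort, but the learn-from-reward loop is the same.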
The related talk presented a generic learning procedure for obtaining a value-selection heuristic inside a constraint programming solver. In this practical session, you will have the opportunity to test this idea with SeaPearl, a CP solver that uses reinforcement learning to select the next value to branch on. SeaPearl is an open-source project, available here: https://github.com/corail-research/SeaPearl.jl
Recently, many Deep Reinforcement Learning (DRL) approaches have been proposed to solve combinatorial optimization problems (COPs). Some approaches are end-to-end, which aim to learn the best decisions directly. Some are hybrid, i.e., learning is used to assist metaheuristics (search algorithms and evolutionary algorithms) in finding better decisions faster. I will discuss how to develop such learning-based approaches to solve real-world industrial optimization problems. In addition, I will introduce the benchmark problem instances of machine scheduling that we have been developing for comparing different learning-based approaches.
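As a concrete anchor for machine scheduling, here is the classic Shortest Processing Time (SPT) dispatch rule on made-up data; simple rules like this are the baselines that learned DRL dispatchers are typically compared against.

```python
# Single-machine scheduling: minimise total completion time over all jobs.
jobs = [5, 2, 8, 1, 4]  # hypothetical processing times, in arrival order

def total_completion_time(order):
    t, total = 0, 0
    for p in order:
        t += p        # the machine finishes this job at time t
        total += t    # accumulate each job's completion time
    return total

fifo = total_completion_time(jobs)         # process in arrival order
spt = total_completion_time(sorted(jobs))  # SPT: shortest job first
print(fifo, spt)  # 63 43 -- SPT is provably optimal for this objective
```

End-to-end DRL methods try to learn such dispatch decisions directly from data; hybrid methods instead use the learned signal to guide a metaheuristic.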
The basic assumption in CP is that the user models the problem, and a solver is then used to solve it. However, expressing a combinatorial problem as a constraint model is not always straightforward. As a result, modelling is considered to be a bottleneck for the wider use of CP. To overcome this obstacle, several techniques have been proposed for modelling a constraint problem (semi-)automatically. In constraint acquisition, the model of a constraint problem is acquired (i.e., learned) using a set of examples of solutions, and possibly non-solutions. The talk will be an overview of constraint acquisition research, in which learning techniques are used to learn constraint models from data. This is done either in a passive setting, using an existing set of solutions and/or non-solutions of the problem, or in an active setting where the system interacts with the user to model the problem. I will discuss both passive and (inter)active learning, the current state-of-the-art and the connections to the machine learning field. Finally, I will focus on the current challenges in Constraint Acquisition.
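A minimal sketch of the passive setting (our toy example, not a real acquisition system): start from a bias of candidate pairwise constraints and keep exactly those consistent with every known solution.

```python
import itertools

# Toy passive constraint acquisition over 3 variables. The bias is the set
# of all candidate pairwise constraints x[i] != x[j]; made-up examples.
positives = [(1, 2, 1), (3, 4, 3)]  # known solutions
negatives = [(1, 1, 2)]             # known non-solution

bias = list(itertools.combinations(range(3), 2))  # candidates x[i] != x[j]

# keep a candidate only if no positive example violates it
learned = [(i, j) for i, j in bias
           if all(sol[i] != sol[j] for sol in positives)]
print(learned)  # [(0, 1), (1, 2)]: x0 != x1 and x1 != x2 survive

# sanity check: the learned network indeed rejects the negative example
rejects = any(neg[i] == neg[j] for i, j in learned for neg in negatives)
print(rejects)  # True
```

Active acquisition instead generates queries (candidate assignments) and asks the user to label them, converging on the target model with far fewer examples.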
Leuven is a charming and historical city in Belgium known for its beautiful architecture, vibrant student life, and rich cultural heritage. It is home to the oldest Catholic university in the world, the KU Leuven, and boasts many medieval and Renaissance-style buildings. Additionally, Leuven is renowned for its beer culture, with the headquarters of Stella Artois located in the city and several microbreweries offering unique and delicious brews.
All invited talks and lab sessions will take place in the "Van den Heuvel Institute" (VHI Aula 01.29) in the center of the city.
The summer school provides shared accommodation for students at the Ibis hotel Leuven (see registration page for more details). Several other hotels are nearby and can be booked via sites such as Booking.com. Accommodation can also be booked via the summer school at reduced prices by filling in this form and sending it to firstname.lastname@example.org by 9/06/2023.