CP 2011
17th International Conference on Principles and Practice of
Constraint Programming
To be held in Perugia, Italy, from 12th to 16th September 2011

Tutorials

Automatic Solver Configuration and Solver Portfolios

Meinolf Sellmann (IBM, USA)

September 13th, Tuesday [Aula Magna]

CP and SAT solvers and solution methods have parameters, some implicit, some explicit. Moreover, different solvers and solution methods exhibit incomparable performance on different problems and distributions of problem instances. Because choosing the right method, solver, or parametrization for a given problem is very hard to do manually, we have traditionally valued methods and solvers with few parameters that work robustly across a wide range of problems and instances, even at the cost of peak performance. The third-best choice on very many problem sets may thus be chosen over a method that works extremely well on a few problems (potentially including the problem at hand!) but performs poorly otherwise.

The aim of Automatic Solver Configuration and Solver Portfolios is to automate the choice of the right method, solver, or solver parameters for a given problem or problem instance in order to boost solution performance. This approach has yielded astonishing results in the past five years. In particular, solver portfolios have won solver competitions in CP, SAT, and QBF. Furthermore, the underlying principles can be exploited to improve virtually any algorithm, be it in terms of run-time, solution quality, or even robustness.

In this tutorial, I will discuss how the CP and SAT communities have pushed the frontier of solver selection and configuration over the last decade: from the inception of parallel solver portfolios, to highly efficient solver selectors like SATzilla, to solver schedulers like CP-Hydra, to automatic configurators like ParamILS and GGA, to instance-specific tuning tools like Hydra and ISAC. I will review the most successful and general techniques to date and explain how practitioners and researchers alike can benefit from general purpose tools that are now available to them.
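
As a rough illustration of the per-instance selection idea behind portfolio approaches such as SATzilla and ISAC, the following is a minimal sketch (not taken from the tutorial materials). The feature vectors, runtimes, and solver names are made up, and a simple nearest-neighbour rule stands in for the learned models these systems actually use:

import math

# Hypothetical training data: (feature vector, {solver: runtime in seconds})
# for previously solved instances of the problem distribution.
TRAINING_DATA = [
    ((0.20, 0.80), {"solver_A": 3.0,   "solver_B": 45.0}),
    ((0.25, 0.75), {"solver_A": 2.5,   "solver_B": 60.0}),
    ((0.90, 0.10), {"solver_A": 120.0, "solver_B": 4.0}),
    ((0.85, 0.15), {"solver_A": 200.0, "solver_B": 6.0}),
]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def select_solver(features, k=2):
    """Pick the solver with the lowest average runtime among the k
    training instances whose features are closest to the new instance."""
    neighbours = sorted(TRAINING_DATA,
                        key=lambda entry: euclidean(entry[0], features))[:k]
    solvers = neighbours[0][1].keys()
    def avg(s):
        return sum(runtimes[s] for _, runtimes in neighbours) / k
    return min(solvers, key=avg)

if __name__ == "__main__":
    # A new instance whose features resemble the first two training instances.
    print(select_solver((0.22, 0.78)))   # -> solver_A

Real systems replace this toy distance rule with learned models over many instance features, but the per-instance selection step has the same shape.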

Download: Presentation [pdf]


Integer Programming for Constraint Programmers

Chris Beck (University of Toronto, Canada), Timo Berthold, Ambros Gleixner, Stefan Heinz, Kati Wolter (Zuse Institute Berlin, Germany)

September 14th, Wednesday [Aula Magna]

Integer Programming (IP) is the most widely used generic technique for solving combinatorial optimization problems. It can be seen as a special case of the general idea of Constraint Programming (CP). Whereas the power of CP arises from the flexibility to model with a large variety of expressive constraints, the strength of IP is its restriction to linear and integrality constraints.

This tutorial will provide an introduction to the solving techniques used in IP solvers, with a special focus on the Linear Programming (LP) relaxation of an IP. Participants will gain a good understanding of how an IP solver works and of the various ways in which the LP relaxation is exploited. The ideas presented will be illustrated using a well-known problem from the CP literature.
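
To illustrate how an LP relaxation can drive an IP search, here is a minimal branch-and-bound sketch for a tiny 0-1 knapsack. It assumes SciPy's linprog and is only a toy illustration of the principle, not the tutorial's own material:

from scipy.optimize import linprog

# Tiny 0-1 knapsack: maximize 10 x1 + 13 x2 + 7 x3
# subject to 4 x1 + 5 x2 + 3 x3 <= 8, x binary.
values   = [10, 13, 7]
weights  = [4, 5, 3]
capacity = 8

best = {"obj": 0.0, "x": None}   # incumbent integral solution

def solve_relaxation(lower, upper):
    # linprog minimizes, so negate the objective; the bounds encode the
    # variable fixings of the current branch-and-bound node.
    return linprog(c=[-v for v in values],
                   A_ub=[weights], b_ub=[capacity],
                   bounds=list(zip(lower, upper)),
                   method="highs")

def branch_and_bound(lower, upper):
    res = solve_relaxation(lower, upper)
    if not res.success:
        return                       # LP relaxation infeasible: prune
    obj = -res.fun
    if obj <= best["obj"]:
        return                       # LP bound no better than incumbent: prune
    x = res.x
    fractional = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-6]
    if not fractional:
        best["obj"], best["x"] = obj, [int(round(xi)) for xi in x]
        return                       # integral LP solution: new incumbent
    i = fractional[0]                # branch on the first fractional variable
    down = list(upper); down[i] = 0  # child with x_i fixed to 0
    branch_and_bound(lower, down)
    up = list(lower); up[i] = 1      # child with x_i fixed to 1
    branch_and_bound(up, upper)

if __name__ == "__main__":
    branch_and_bound([0, 0, 0], [1, 1, 1])
    print(best)   # {'obj': 20.0, 'x': [0, 1, 1]}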

Download: Presentation [pdf]


Machine Learning and Data Mining: Challenges and Opportunities for Constraint Programming

Luc De Raedt, Siegfried Nijssen (K.U. Leuven, Belgium)

September 15th, Thursday [Aula Magna]

Data mining and machine learning are well-established fields of research concerned with the analysis of data in order to discover regularities in the form of patterns or functions. Contemporary methods of data mining and machine learning rely heavily on the use of constraints on the patterns or functions of interest. This has resulted in notions such as constraint-based mining and constrained clustering. Despite the obvious relationships to constraint programming, there has not yet been a lot of work on using constraint programming techniques within data mining and machine learning. On the other hand, data mining and machine learning could potentially also be used to help constraint programming. Even though some approaches in this direction exist, we are still far away from a standard use of machine learning and data mining in constraint programming.

This tutorial will introduce machine learning and data mining to the constraint programming community. It will provide an overview of several data mining and machine learning tasks, including pattern mining, clustering, and probabilistic modeling, and of how constraints are used in these tasks, illustrated with the implementation of a number of itemset mining tasks in constraint programming. It will show how data mining and machine learning pose new challenges and opportunities for constraint programming and will address (to a lesser extent) what machine learning and data mining could do for constraint programming.
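
As a small illustration of how an itemset mining task can be expressed as a constraint model, here is a minimal sketch of frequent itemset mining. It assumes the OR-Tools CP-SAT solver and a made-up transaction database; it is not the implementation used in the tutorial:

from ortools.sat.python import cp_model

# A made-up transaction database: each transaction is a set of item ids.
transactions = [{0, 1, 2}, {0, 2}, {1, 2, 3}, {0, 1, 2}]
n_items = 4
min_support = 2   # an itemset is frequent if at least 2 transactions contain it

model = cp_model.CpModel()
item  = [model.NewBoolVar(f"item_{i}")  for i in range(n_items)]
cover = [model.NewBoolVar(f"cover_{t}") for t in range(len(transactions))]

# cover[t] is true iff transaction t contains every selected item,
# i.e. no selected item lies outside transaction t.
for t, trans in enumerate(transactions):
    missing = [item[i] for i in range(n_items) if i not in trans]
    model.Add(sum(missing) == 0).OnlyEnforceIf(cover[t])
    model.Add(sum(missing) >= 1).OnlyEnforceIf(cover[t].Not())

model.Add(sum(cover) >= min_support)   # frequency constraint
model.Add(sum(item) >= 1)              # exclude the empty itemset

class PrintItemsets(cp_model.CpSolverSolutionCallback):
    def on_solution_callback(self):
        items = [i for i in range(n_items) if self.Value(item[i])]
        print(f"frequent itemset {items}, support "
              f"{sum(self.Value(c) for c in cover)}")

solver = cp_model.CpSolver()
solver.SearchForAllSolutions(model, PrintItemsets())

Further constraints on the pattern, such as size limits or closedness, can be added to the same model, which is the kind of flexibility a constraint-based view of mining offers.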

Download: Presentation [pdf]


Doctoral Tutorial

Mastering the Empirical Maze

Laurent Michel (University of Connecticut, USA)

September 15th, Thursday [Aula 8]

This tutorial will explore the issues revolving around the design and execution of experiments and the presentation of the results. These seemingly simple tasks are rife with pitfalls and fallacies that are too often the downfall of a good piece of work. The advent of modern architectures and the increasing reliance on randomization further muddle the issue and place a significant burden on anyone attempting to reproduce poorly designed and presented results.

The purpose of the tutorial is to provide a roadmap to navigate this subtle maze and deliver convincing empirical evidence in your future papers!
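
One small, concrete example of the kind of practice the tutorial touches on: when a method is randomized, running it over several fixed seeds and reporting summary statistics, rather than a single lucky run, keeps results reproducible. A minimal sketch, with a hypothetical solve function standing in for a real solver:

import random
import statistics

def solve(instance, seed):
    """Hypothetical randomized solver; returns a runtime in seconds."""
    rng = random.Random(seed)
    return rng.uniform(1.0, 10.0)   # placeholder for a real measurement

SEEDS = [1, 2, 3, 4, 5]             # fixed seeds keep the experiment reproducible

def report(instance):
    runtimes = [solve(instance, s) for s in SEEDS]
    print(f"{instance}: median {statistics.median(runtimes):.2f}s, "
          f"mean {statistics.mean(runtimes):.2f}s, "
          f"stdev {statistics.stdev(runtimes):.2f}s over {len(SEEDS)} seeds")

if __name__ == "__main__":
    report("instance_42")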

Download: Presentation [pdf]