5. How to use GHOST advanced
In Part 4, we learned how to use GHOST to model basic CSP/COP/EF-CSP/EF-COP problems and how to call GHOST's solver to find a solution. Here, we go one step further and learn how to improve your model's performance, how to manage user-defined data structures, how to use post-processing, and how to declare some of your models as permutation problems.
Overriding Constraint::required_error
is sufficient to declare the logic of your constraints and to have functional models. However, Constraint::required_error
is a method called very frequently by GHOST's solver, and it is prone to become a performance bottleneck.
We already saw in Part 3 that modeling your problems as an EF-CSP/EF-COP gives a clear boost compared to CSP/COP models. What follows only concerns EF-CSP/EF-COP models, to boost them even more.
Rather than computing the whole error of the constraint when the solver changes the value of one or two variables, it is almost always quicker to compute only what these local changes imply as a gain or a loss for the current error. In other words, it is more efficient to compute the delta error, i.e., the variation of the current error if we assign the value v to the variable x.
For instance, the Capacity
constraint in Part 3 checks if the sum of all variables of the constraint is less than or equal to a given constant:
double Capacity::required_error( const std::vector<ghost::Variable*>& variables ) const
{
	double total_size = 0.0;
	for( size_t i = 0 ; i < variables.size() ; ++i )
		total_size += ( variables[i]->get_value() * _object_size[i] );

	return std::max( 0., total_size - _capacity );
}
But when we think about it, if the solver is about to change the value of the variable at index 42 only, just before it does so, we could compute the error gain or loss with
double updated_sum = current_total_size + ( new_value_of_var42 - variables[42]->get_value() ) * _object_size[42];
auto delta = std::max( 0., updated_sum - _capacity ) - current_error;
And if a few variables are about to change, then we should compute
double updated_sum = current_total_size;
for( int index : indexes )
	updated_sum += ( candidate_values[index] - variables[index]->get_value() ) * _object_size[index];
auto delta = std::max( 0., updated_sum - _capacity ) - current_error;
This is where Constraint::optional_delta_error
kicks in. The current assignment, as well as its error, is automatically stored in Constraint
objects. Given a vector of variable indexes and their respective candidate values, Constraint::optional_delta_error
outputs the difference between the error of the current assignment in 'variables' given as input and the error we would get if we assigned the new candidate values. This method is prefixed by optional_
, meaning it is not mandatory to implement it to have a functional model, unlike methods prefixed by required_
.
The output can be negative, positive, or equal to 0. If the candidate error is strictly lower (thus better) than the current error, the output is negative. If errors are the same, the output equals 0. Finally, if the candidate error is strictly higher (thus worse) than the current error, the output is positive.
We strongly encourage you to look at the Constraint::optional_delta_error
documentation and to use this method as much as possible within your EF-CSP/EF-COP models if performance is critical for you.
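To make the gain concrete, here is a self-contained sketch of the idea, independent of GHOST and using hypothetical helper names: a full error recomputation in O(n) versus a delta-error computation in O(k) for k changed variables, given a cached running total and error.

```cpp
#include <algorithm>
#include <vector>

// Full recomputation of a capacity-style error: O(n) every time.
double full_error( const std::vector<int>& values,
                   const std::vector<double>& sizes,
                   double capacity )
{
	double total = 0.0;
	for( size_t i = 0 ; i < values.size() ; ++i )
		total += values[i] * sizes[i];
	return std::max( 0.0, total - capacity );
}

// Delta error: O(k) for k changed variables, given the cached total and error.
// indexes[k] is the variable to change, candidate_values[k] its new value.
double delta_error( double current_total, double current_error,
                    const std::vector<int>& values,
                    const std::vector<double>& sizes,
                    const std::vector<int>& indexes,
                    const std::vector<int>& candidate_values,
                    double capacity )
{
	double updated_total = current_total;
	for( size_t k = 0 ; k < indexes.size() ; ++k )
		updated_total += ( candidate_values[k] - values[ indexes[k] ] ) * sizes[ indexes[k] ];
	return std::max( 0.0, updated_total - capacity ) - current_error;
}
```

Adding the delta to the current error gives exactly the error a full recomputation would return, without touching the unchanged variables.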
Sometimes, you need to store some data within your constraints or your objective function, or some shared data among your constraints and/or your objective function. GHOST gives you the tools to manage such data.
If your Constraint
or Objective
objects need some individual data, you need to declare them as fields of your derived classes. If these data are independent from the variable values, then there is nothing else to do.
On the other hand, if these data can change according to the current assignment in your constraint or objective function, then you need to override the Constraint::conditional_update_data_structures
and Objective::conditional_update_data_structures
methods. These methods take as input the variables in the scope of your constraint or objective function, the index of the variable the solver is about to change, as well as its new value, allowing you to update your data structures accordingly.
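As an illustration of this pattern (a standalone sketch, not GHOST's actual API), here is a class that keeps a cached sum in sync when the solver changes one variable's value, which is exactly the kind of bookkeeping conditional_update_data_structures is meant for:

```cpp
#include <numeric>
#include <utility>
#include <vector>

// A constraint-like class caching the running sum of its variables' values,
// updated incrementally instead of being recomputed from scratch.
class CachedSum
{
	std::vector<int> _values;
	long long _sum; // assignment-dependent data structure

public:
	explicit CachedSum( std::vector<int> values )
		: _values( std::move( values ) ),
		  _sum( std::accumulate( _values.begin(), _values.end(), 0LL ) )
	{}

	// Called just before the solver assigns new_value to variable 'index':
	// adjust the cached sum by the difference only.
	void update( int index, int new_value )
	{
		_sum += new_value - _values[index];
		_values[index] = new_value;
	}

	long long sum() const { return _sum; }
};
```

The update costs O(1) per change, whereas recomputing the sum would cost O(n).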
If you need to share some data among your constraints and/or your objective function that are modified according to the current assignment of variables, then you need to write a class inheriting from ghost::AuxiliaryData
. You may then override its method AuxiliaryData::optional_update
if you need to update its internal data structures when the solver changes the value of a variable.
If you don't need to override AuxiliaryData::optional_update
, then you should probably define unshared data within your constraints and/or your objective function instead, and call their constructors with the same data as input.
Usually, GHOST quickly finds high-quality solutions to many combinatorial optimization problems by itself. But sometimes, you may feel the need to inject some human knowledge into your models, either to guide the solver when you know some directions are good to explore, or to clean up some rough solutions. GHOST allows you to do that.
While dealing with an optimization problem, GHOST's solver first tries to find an assignment that satisfies all constraints of the model before optimizing the objective function. However, sometimes the solver makes random guesses between several equivalent candidates offering the same gain on the satisfaction error. Depending on your problem, you may know that some candidates are better than others. You can express this knowledge by implementing the methods Objective::expert_heuristic_variable
and Objective::expert_heuristic_value
, giving the solver a user-defined tie-breaker when it finds equivalently promising variables to change and equivalently promising values to assign.
However, as with any expert_
-prefixed method in GHOST, you must have a good understanding of how GHOST's solver works and must know what you are doing.
If you know that some optimization solutions can be improved (for instance, by optimizing a secondary objective function), then GHOST gives you the possibility to change its solutions by implementing Objective::expert_postprocess
. This method takes as input the vector of Variable
containing the current solution, as well as the best optimization cost, to help you know if your post-processing actually improves the solution (giving you the opportunity to roll back if it does not).
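The rollback logic could look like the following standalone sketch, where cost is a hypothetical placeholder objective; expert_postprocess receives the solution variables and the best cost for exactly this kind of check.

```cpp
#include <vector>

// Hypothetical objective: minimize the sum of squares of the solution values.
double cost( const std::vector<int>& solution )
{
	double c = 0.0;
	for( int v : solution )
		c += v * v;
	return c;
}

// Try a hand-written improvement; keep it only if the cost strictly
// improves, otherwise roll back to the original solution.
void postprocess( std::vector<int>& solution, double& best_cost )
{
	std::vector<int> backup = solution;

	for( int& v : solution ) // hand-crafted tweak: shrink values toward 0
		if( v > 0 )
			--v;

	double new_cost = cost( solution );
	if( new_cost < best_cost )
		best_cost = new_cost; // keep the improvement
	else
		solution = backup; // rollback
}
```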
A permutation problem is a problem where the solver does not assign new values to variables, but only swaps values among variables. Not all problems can be defined as permutation problems. However, if yours can, it can give a huge performance boost to solvers supporting permutation problems, like GHOST's solver.
You only need two things to declare a permutation problem with GHOST: give your variables an initial assignment containing exactly the values that should constitute a solution, and set a Boolean parameter to true in the ModelBuilder
constructor.
For instance, consider the Magic Square problem. You have an NxN grid that you must fill with all numbers from 1 to N², such that the sum of each row, each column and the two diagonals is the same. It is known that these sums must be equal to N(N²+1)/2.
Intuitively, this could be modeled by two kinds of constraints: an AllDifferent constraint to force having exactly all numbers from 1 to N² in the grid (no duplicates, no missing numbers), and a linear equation for each row, each column and the two diagonals, that must be equal to N(N²+1)/2.
But actually, you can get rid of the AllDifferent constraint by giving your N² variables an initial assignment with all values from 1 to N², and setting to true the Boolean parameter about permutation problems in the ModelBuilder
constructor. Thus, the solver only has to manage the linear equation constraints.
//In the user-defined ModelBuilder
UserBuilder( ... )
	: ModelBuilder( true ) //declaration of a permutation problem
{
	...
}

void UserBuilder::declare_variables()
{
	//Create N² variables with domains [1,N²]
	create_n_variables( N*N, 1, N*N );

	//Set the first variable to 1, the second to 2, ...
	for( int i = 0 ; i < N*N ; ++i )
		variables[i].set_value( i + 1 );
}
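As a sanity check on the arithmetic above, the following standalone snippet computes the magic constant N(N²+1)/2 and verifies the row sums of a known 3x3 magic square:

```cpp
#include <vector>

// The magic constant M = N(N² + 1)/2 that every row, column and diagonal
// must reach. With a permutation model, the initial assignment {1, ..., N²}
// already satisfies AllDifferent; the solver only swaps values to satisfy
// the linear sum constraints.
int magic_constant( int n )
{
	return n * ( n * n + 1 ) / 2;
}

// Check the row sums only, as an illustration of one family of
// linear equation constraints.
bool rows_are_magic( const std::vector<std::vector<int>>& square )
{
	int m = magic_constant( static_cast<int>( square.size() ) );
	for( const auto& row : square )
	{
		int sum = 0;
		for( int v : row )
			sum += v;
		if( sum != m )
			return false;
	}
	return true;
}
```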
Independently of your model, you can decide to run GHOST's solver in
parallel on several threads. To do so, simply set to true the
parallel_runs
Boolean in ghost::Options
.
//In the main function
UserBuilder builder;
ghost::Solver solver( builder );

ghost::Options options;
options.parallel_runs = true;

double cost;
std::vector<int> solution;
double timeout_in_us = 100000;
solver.fast_search( cost, solution, timeout_in_us, options );
If no number of threads is given, then GHOST will use one thread per
CPU core. To define a specific number of threads, use the option
number_threads
.
//In the main function
...
options.number_threads = 4;
...
Solver::fast_search
needs a timeout parameter, which is either a double
representing a time budget in microseconds, or a std::chrono duration
literal from std::literals::chrono_literals
, much more convenient to use since it lets you give a timeout
directly in seconds, milliseconds or any unit larger than or equal to
a microsecond.
//In the main program file
#include <chrono>
using namespace std::literals::chrono_literals;

int main( int argc, char** argv )
{
	...
	solver.fast_search( cost, solution, 10s ); //10 seconds
	solver.fast_search( cost, solution, 10ms ); //10 milliseconds
	solver.fast_search( cost, solution, 10us ); //10 microseconds
}