Multiple profiles in the same problem #450
Conversation
Last build is actually OK; Travis only reports a failure because of …
…or a profile but computed for another.
To some extent, a new way to handle some of the computed matrices is required. The simple cases matching previous behavior are:
Kind of an edge case now: providing some profile matrices but not all. We should actually allow this as long as missing matrices match a valid profile with a routing server configured. The tricky part is that users then do choose the layout with (mandatory) …
Force-pushed from ab613d1 to b8f2a1a.
This PR is now in an alpha state, meaning:
In short, the feature should be functional in terms of I/O and profile/travel-times consistency, but I'd still expect us to produce quite sub-optimal solutions depending on the use-case. There is no change on single-profile instances AFAICT, except for an increase in computing time introduced by the profile-handling overhead (we'll have to evaluate this at some point). Now for the fun part: I'll start working on actual multi-profile instances to try and improve the solving.
For the record, there are still things to improve on the "technical" part as well. I spotted situations where we hit this assertion due to rounding subtleties with …
…cle, not profile.
After quite some benchmarking, I'm pretty confident the current setup works fine in most cases. Yet the concerns raised in the initial sketch about heuristics behavior are still somewhat pending. Example of an open question: do we need to take the new speed discrepancies into account when ordering vehicles in the …

On the other hand, this kind of concern/question is stressed by this PR but its actual scope is broader: we already make some ordering choices based on vehicle capacity or working hours length that are arguably questionable in the same way. So I plan to include this whole topic in a dedicated ticket at some point and merge this PR without altering the overall heuristic logic, as it's already a massive change.
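For illustration only (made-up names and weights, not how the codebase currently orders vehicles), the kind of ordering choice in question could look like a comparator mixing capacity, working-hours length and, with this PR, some speed proxy:

```cpp
// Illustrative sketch: fold a speed proxy into a vehicle ordering criterion,
// alongside the capacity / working-hours criteria mentioned above.
// All names and the priority order are assumptions for the example.
#include <algorithm>
#include <cstdint>
#include <vector>

struct VehicleView {
  std::uint64_t capacity;        // aggregated capacity
  std::uint32_t working_seconds; // length of the working hours
  double speed_factor;           // e.g. the hypothetical vehicle.speed_factor
};

// Put "bigger, longer-working, faster" vehicles first so they are considered
// before smaller/slower ones when building routes.
void order_vehicles(std::vector<VehicleView>& vehicles) {
  std::sort(vehicles.begin(), vehicles.end(),
            [](const VehicleView& a, const VehicleView& b) {
              if (a.capacity != b.capacity) {
                return a.capacity > b.capacity;
              }
              if (a.working_seconds != b.working_seconds) {
                return a.working_seconds > b.working_seconds;
              }
              return a.speed_factor > b.speed_factor;
            });
}
```

Whether speed should enter such a criterion at all, and with what weight, is exactly the open question above.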
…ion in basic heuristic.
I've been evaluating the impact of this PR on single-profile instances. After the last commit that fixed a heuristic behavior change, I checked that this PR and current …

On those benchmarks the computing times are steadily increasing by around 17%. This is solely due to the overhead introduced to access travel times: we now have a function call wrapping the …

I've tried my best to make the changes compiler-friendly but maybe we still have some room for improvement there. This would require a closer look, so I'll ticket that as follow-up work after merging.
Issue
Fixes #394.
Tasks
This PR aims at allowing several vehicle profiles in a single optimization, requiring different kinds of changes to the codebase. In particular, all calls to travel time values should become vehicle-dependent. So on top of allowing several profiles, we could take the opportunity to introduce an additional scaling variable to allow fine-tuning travel times for vehicles sharing the same profile. Something along the lines of a `vehicle.speed_factor` value, defaulting to 1, where a value of 1.1 would mean a faster vehicle and 0.9 a slower one.
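As a minimal sketch of that idea (the `Input::duration` accessor and member layout below are assumptions for illustration, not the actual API), the scaling would apply on top of the matrix matching the vehicle's profile:

```cpp
// Hypothetical sketch: per-vehicle speed_factor applied on top of the
// duration matrix associated with that vehicle's profile.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using Duration = std::uint32_t;
using Matrix = std::vector<std::vector<Duration>>;

struct Vehicle {
  std::string profile;       // e.g. "car" or "bike"
  double speed_factor = 1.0; // 1.1 => faster vehicle, 0.9 => slower one
};

struct Input {
  std::unordered_map<std::string, Matrix> durations; // one matrix per profile
  std::vector<Vehicle> vehicles;

  // All travel time accesses become vehicle-dependent through this call.
  Duration duration(std::size_t v, std::size_t i, std::size_t j) const {
    const auto& vehicle = vehicles[v];
    const Duration raw = durations.at(vehicle.profile)[i][j];
    // A faster vehicle (factor > 1) yields a shorter travel time.
    return static_cast<Duration>(raw / vehicle.speed_factor);
  }
};
```

How the scaled value gets rounded back to an integral duration is exactly the kind of rounding subtlety mentioned earlier in the conversation.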
Technical adjustments

- [ ] `LocalSearch::try_job_additions`
Local search
Some of the local search operators currently implicitly rely on travel times being equal across vehicles. For example, when exchanging the ends of two routes (2-opt), we currently only evaluate gains for edges at the breakpoints, not gains related to the vehicle change for the route portions exchanged.
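To make this concrete, here is a minimal, self-contained sketch of what a 2-opt gain evaluation could look like once travel times are vehicle-dependent; the `duration(vehicle, from, to)` callable and all other names are illustrative assumptions, not the existing operator code.

```cpp
// Sketch of a 2-opt gain evaluation with vehicle-dependent travel times.
// Routes are plain index sequences; positions p1 and p2 are assumed valid
// (p1 + 1 < r1.size(), p2 + 1 < r2.size()).
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

using Index = std::size_t;
using Duration = std::uint32_t;
// duration(vehicle_rank, from, to): vehicle-dependent travel time lookup,
// e.g. the hypothetical Input::duration accessor sketched above.
using DurationFn = std::function<Duration(std::size_t, Index, Index)>;

// Cost of the edges of route[first], route[first + 1], ... driven by vehicle v.
long tail_cost(const DurationFn& duration, std::size_t v,
               const std::vector<Index>& route, std::size_t first) {
  long cost = 0;
  for (std::size_t s = first; s + 1 < route.size(); ++s) {
    cost += duration(v, route[s], route[s + 1]);
  }
  return cost;
}

// Gain of exchanging the portions of r1 after rank p1 and of r2 after rank
// p2, with r1 operated by vehicle v1 and r2 by vehicle v2.
long two_opt_gain(const DurationFn& duration, std::size_t v1, std::size_t v2,
                  const std::vector<Index>& r1, std::size_t p1,
                  const std::vector<Index>& r2, std::size_t p2) {
  // Breakpoint edges: removed as currently driven, added crosswise.
  long removed = static_cast<long>(duration(v1, r1[p1], r1[p1 + 1])) +
                 duration(v2, r2[p2], r2[p2 + 1]);
  long added = static_cast<long>(duration(v1, r1[p1], r2[p2 + 1])) +
               duration(v2, r2[p2], r1[p1 + 1]);

  // With travel times equal across vehicles, this is the whole gain.
  long gain = removed - added;

  // With vehicle-dependent times, the exchanged portions also change cost
  // because each is now driven by the other vehicle.
  gain += tail_cost(duration, v1, r1, p1 + 1) - tail_cost(duration, v2, r1, p1 + 1);
  gain += tail_cost(duration, v2, r2, p2 + 1) - tail_cost(duration, v1, r2, p2 + 1);

  return gain;
}
```

The two `tail_cost` loops are what precomputed per-profile cumulated travel times in `SolutionState` would replace with constant-time lookups, as discussed next.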
We should go through all operators to make gain evaluation more generic and handle the situation of different travel times between the pairs of vehicles involved. As far as I can tell, this should not increase algorithmic complexity but will in some cases require storing more data in `SolutionState`. E.g. for 2-opt, we'll probably have to store accumulated travel times up to (and from) any step, for any route and any profile (a sketch of this kind of storage follows the operator list below).

Currently existing operators are:
- CrossExchange
- [ ] Exchange
- [ ] IntraCrossExchange
- [ ] IntraExchange
- [ ] IntraMixedExchange
- [ ] IntraOrOpt
- [ ] IntraRelocate
- MixedExchange
- OrOpt
- [ ] PdShift
- [ ] Relocate
- ReverseTwoOpt
- RouteExchange
- TwoOpt
- [ ] UnassignedExchange
EDIT: I crossed out the ones that should not require any change.
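As a rough illustration of the extra data mentioned above (hypothetical layout and names, not the actual `SolutionState` members), storing cumulated travel times per profile along each route makes the cost of any route portion available in constant time for any profile:

```cpp
// Hypothetical sketch of per-profile cumulated travel times along each route,
// enabling O(1) cost queries for any route portion under any profile.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using Index = std::size_t;
using Duration = std::uint32_t;
using Matrix = std::vector<std::vector<Duration>>;

struct SolutionStateSketch {
  // fwd_durations[profile][route_rank][step]: travel time accumulated from
  // the start of the route up to that step, using the given profile.
  std::unordered_map<std::string, std::vector<std::vector<Duration>>>
    fwd_durations;

  void update_route(const std::string& profile, const Matrix& m,
                    std::size_t route_rank, const std::vector<Index>& route) {
    auto& per_route = fwd_durations[profile];
    if (per_route.size() <= route_rank) {
      per_route.resize(route_rank + 1);
    }
    auto& fwd = per_route[route_rank];
    fwd.assign(route.size(), 0);
    for (std::size_t s = 1; s < route.size(); ++s) {
      fwd[s] = fwd[s - 1] + m[route[s - 1]][route[s]];
    }
  }

  // Travel time of the route portion between steps first and last
  // (first <= last), evaluated with the given profile.
  Duration portion_duration(const std::string& profile, std::size_t route_rank,
                            std::size_t first, std::size_t last) const {
    const auto& fwd = fwd_durations.at(profile)[route_rank];
    return fwd[last] - fwd[first];
  }
};
```

Only the forward direction is sketched here; a symmetric backward table would cover the "from any step" part, and together they would let 2-opt-style gain evaluations avoid re-scanning route tails.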
Heuristics changes
This is where I expect most of the real solving adjustments to happen. We can expect the local search to fix heuristic biases to some extent, but this requires getting heuristic solutions that are not too bad in the first place.
For example, seeding a route with the furthest task (`INIT::FURTHEST`) or the most demanding one with respect to capacity (`HIGHER_AMOUNT`) does not make sense when designing a route for a slow and small vehicle, especially if bigger and faster vehicles are in line for other routes down the line. Also, the whole "regret" logic for costs (borrowed from Solomon) is likely to work in odd ways with different vehicle travel times.

So maybe this will require working on vehicle ordering prior to building routes, maybe this will mean adjusting the seeding approach, and probably this will require adjusting the combination of parameters used by the heuristics.
- [ ] Adjust heuristics to avoid the most obvious biases
- [ ] Reset parameters tuning based on changes?

Usual PR tasks
- [ ] docs/API.md
- [ ] CHANGELOG.md