The Doctor Rostering Problem - Asser Fahrenholz
Chapter 4. Solving the DRP 29

As there are between two and four shifts on a given day, with six doctors to choose from, we get between n!/(n−k)! = 6!/(6−2)! = 30 and 6!/(6−4)! = 360 possible combinations, which is, at maximum, an increase of 6000% in combinations to explore. On a 1 GHz computer, this results in solution generation taking several minutes, as opposed to mere seconds when no enumeration is used. This increase in running time makes further enumeration, a whole week for instance (332640 possible combinations), undesirable to the end user. Testing of this function proved useful, as shall be shown in chapter 7.

4.4.3 Construction heuristic

The implementation of the greedy algorithm is, given the data structure, straightforward. The implementation can be found in appendix A.2. In short, it loops through each day and each shift, requests either the best doctor for the shift or the set of best doctors for the whole day (depending on whether enumeration is used), and then performs the assignments that are returned.

4.4.4 Metaheuristics

For both metaheuristics, a neighborhood is investigated by the following procedure:

1. Find neighborhood n
2. Apply neighborhood n
3. Calculate the objective function value and feasibility
4. Decide whether:
   (a) to keep the new solution, or
   (b) to undo neighborhood n

When applying the neighborhood transformation to a solution, the information contained in the neighborhood is saved, so that the search procedure can quickly revert the transformation if the new solution turns out to be inferior.

GRASP

My implementation of the GRASP framework is based, to the letter, on the algorithms describing the adaptive greedy algorithm (algorithm 4.1 on page 19) and Local Search (algorithm 4.2 on page 22). The entire GRASP class is found in appendix A.3.
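The four-step apply/evaluate/revert procedure above can be sketched as follows. This is a minimal toy illustration, not the thesis' implementation (which lives in appendices A.3 and A.4): the roster array, the objective, and all names here are hypothetical, and feasibility checking is omitted. The key point is step 4(b), reverting a move using information saved before it was applied.

```java
import java.util.Random;

// Toy sketch of the neighborhood procedure: find, apply, evaluate,
// then either keep the move or revert it using saved information.
public class NeighborhoodSearchSketch {

    // A toy "roster": the assigned doctor ID for each of four shifts.
    static int[] roster = {1, 1, 2, 3};

    // Toy objective: penalise a doctor covering more than one shift.
    static int objective(int[] r) {
        int penalty = 0;
        int[] count = new int[10];
        for (int d : r) count[d]++;
        for (int c : count) if (c > 1) penalty += c - 1;
        return penalty;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int best = objective(roster);
        for (int iter = 0; iter < 100; iter++) {
            // 1. Find neighborhood n: pick a random shift and doctor.
            int shift = rng.nextInt(roster.length);
            int newDoctor = 1 + rng.nextInt(4);
            // Save the information needed to revert the transformation.
            int oldDoctor = roster[shift];
            // 2. Apply neighborhood n.
            roster[shift] = newDoctor;
            // 3. Calculate the objective value (feasibility omitted here).
            int value = objective(roster);
            // 4. Keep an improving solution; otherwise undo the move.
            if (value < best) {
                best = value;
            } else {
                roster[shift] = oldDoctor; // quick revert, no deep copy
            }
        }
        System.out.println("best penalty = " + best);
    }
}
```

Saving only the move's own information (here, one shift index and one doctor ID) is what makes the revert in step 4(b) cheap compared to copying the whole solution.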
Simulated Annealing

The Simulated Annealing implementation is based on algorithm 4.4 on page 26 and is included in appendix A.4.

4.4.5 Implementation optimisations

As noted above, the two heuristics calculate the objective function value and feasibility every time a neighborhood transformation has been applied. This presents an obvious area for optimisation. One optimisation that did not make it into the implementation is a delta function, which I describe in detail in chapter 8.

One of the questions that springs to mind when developing an application is: which operations of this implementation are taking the most time? I was fortunate to have a tool to investigate exactly this area. The NetBeans IDE 6.1 allows profiling of the program, enabling the developer to see which methods take the most time. When doing so, I found that the operation requiring the most time was the hash code check. Hash codes are 32-bit signed integers identifying objects by a single value, computed using prime numbers and the values of the class fields. The new, optimised hashCode method can be seen in listing 4.1, where several lines, now commented out, were previously calculated. Only the ID number is needed to uniquely identify the doctor.

public class Doctor implements Comparable<Doctor>, Serializable {
    ...
    @Override
    public int hashCode() {
        // final int prime = 31;
        // int result = 1;
        // result = prime * result + IDnumber;
        // result = prime * result + ((name == null) ? 0 : name.hashCode());
        // result = prime * result + numberOfShifts;
        // result = prime * result + ((shifts == null) ? 0 : shifts.hashCode());
        return IDnumber;
    }
    ...
}

Listing 4.1: The new hashCode method

The optimisation may not seem big, but when this hash code is calculated once per doctor per shift per constraint, every time the objective function value is evaluated, it adds up!
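To illustrate why returning IDnumber alone is safe as long as IDs are unique, the sketch below uses a cut-down Doctor with the hashCode from listing 4.1 as a HashSet key. The equals method and the toy data are additions for this illustration only (hash-based collections require equals and hashCode to agree); they are not claimed to match the thesis' Doctor class.

```java
import java.util.HashSet;
import java.util.Set;

// Cut-down Doctor using the optimised hashCode from listing 4.1.
public class DoctorHashDemo {
    static class Doctor {
        final int IDnumber;
        final String name;

        Doctor(int IDnumber, String name) {
            this.IDnumber = IDnumber;
            this.name = name;
        }

        @Override
        public int hashCode() {
            return IDnumber; // the ID alone uniquely identifies the doctor
        }

        @Override
        public boolean equals(Object o) {
            // Illustration only: equality must agree with hashCode.
            return o instanceof Doctor && ((Doctor) o).IDnumber == IDnumber;
        }
    }

    public static void main(String[] args) {
        Set<Doctor> onDuty = new HashSet<>();
        onDuty.add(new Doctor(7, "Doctor A"));
        onDuty.add(new Doctor(7, "Doctor A")); // same ID: not added again
        onDuty.add(new Doctor(9, "Doctor B"));
        System.out.println(onDuty.size()); // prints 2
    }
}
```

Because the collection behaviour is unchanged while the per-lookup cost drops from several multiplications and nested hashCode calls to a single field read, the saving is multiplied across every doctor, shift, and constraint in each objective evaluation.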