Logical Decisions - Classweb
The completed goals hierarchy is shown in Figure 10-1. After the goals hierarchy had been completed, the next step was to complete the definitions of the measures. In particular, it was necessary to define the scale points in the constructed (nonnumeric) scales. It is not sufficient to use 1-10 or similar scales, since it is not clear what the different scale points mean. This makes it difficult to consistently rank the alternatives or to assess tradeoffs concerning the measure. Figure 10-2 shows some constructed scales for the computer selection example.

Video Monitor Quality
Best   1. A brand-name multi-sync monitor comes with the system.
       2. An unknown-brand multi-sync monitor comes with the system.
       3. A brand-name VGA monitor comes with the system.
Worst  4. An unknown-brand VGA monitor comes with the system.

Video Card Quality
Best   1. A brand-name 16-bit SVGA card comes with the system.
       2. An unknown-brand 16-bit SVGA card comes with the system.
       3. An 8-bit SVGA card comes with the system.
Worst  4. A VGA-only card comes with the system.

Company Quality
Best   1. A first-rate, well-established company.
       2. A "second tier" but still well-known company.
Worst  3. A "no-name" clone maker.

Reviews
Best   1. Rated a "Best Buy" by a national computer magazine.
       2. Given a good review in a national or local publication.
       3. No reviews found.
Worst  4. Given a poor review in a national or local publication.

Figure 10-2. Constructed measure scales for the computer selection decision.

After the decision maker had defined the measures, he could enter the levels on the measures for each alternative. This step was straightforward, since most of the data were available through the ads and reviews of the various computers. The decision maker assigned a probabilistic level on the Local Service measure for a company that had just opened a local dealership. He assigned a probability of 20% that the dealership would close and that no local service would be available.
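A probabilistic level like the one on the Local Service measure is scored by its expected utility: each possible scale level contributes its utility weighted by its probability. A minimal sketch of that arithmetic (the 0.0 and 1.0 utilities for the two Local Service outcomes are hypothetical values chosen for illustration, not taken from the example):

```python
def expected_utility(outcomes):
    """Expected utility of a probabilistic measure level.

    outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    """
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# Local Service: 20% chance the new dealership closes (no local service,
# assumed utility 0.0), 80% chance it stays open (assumed utility 1.0).
local_service = [(0.20, 0.0), (0.80, 1.0)]
print(expected_utility(local_service))  # 0.8
```

The alternative with this probabilistic level is then ranked as if its Local Service score were 0.8 rather than either certain outcome.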
10-4 Section 10 -- Examples
The next step was to assess preferences. Since the measures had few uncertainties, the mid-level splitting technique was used to assess the single-measure utility functions. The tradeoffs were mostly assessed using price as a basis. Figure 10-3 is the bubble diagram for the tradeoffs for the computer selection decision.

Figure 10-3. Tradeoff assessment "bubble diagram" for the buying a computer example.

After the preference assessment had been completed, the alternatives could be ranked.
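In mid-level splitting, the decision maker names the measure level that feels exactly halfway in preference between the best and worst levels (utility 0.5), optionally repeats for the quarter points, and the assessed points are joined by linear interpolation. A sketch under assumed numbers (the price range and the assessed midpoints below are hypothetical, not from the example):

```python
from bisect import bisect_right

def make_suf(points):
    """Build a piecewise-linear single-measure utility function (SUF)
    from assessed (measure_level, utility) points, sorted by level."""
    xs = [x for x, _ in points]
    us = [u for _, u in points]

    def suf(x):
        if x <= xs[0]:
            return us[0]
        if x >= xs[-1]:
            return us[-1]
        i = bisect_right(xs, x) - 1          # segment containing x
        frac = (x - xs[i]) / (xs[i + 1] - xs[i])
        return us[i] + frac * (us[i + 1] - us[i])

    return suf

# Hypothetical price measure: $1,500 is best (utility 1), $4,000 worst (utility 0).
# Mid-level splitting assessed $2,600 as the 0.5 point, then $2,000 and $3,200
# as the 0.75 and 0.25 points.
price_suf = make_suf([(1500, 1.0), (2000, 0.75), (2600, 0.5),
                      (3200, 0.25), (4000, 0.0)])
print(round(price_suf(2300), 3))  # 0.625
```

Because the technique only asks for preference midpoints, it works even on measures whose utility is far from linear in the underlying number.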