NASA/SP-2007-6105 Rev1

NASA Systems Engineering Handbook


NASA STI Program … in Profile

Since its founding, the National Aeronautics and Space Administration (NASA) has been dedicated to the advancement of aeronautics and space science. The NASA Scientific and Technical Information (STI) program plays a key part in helping NASA maintain this important role.

The NASA STI program operates under the auspices of the Agency Chief Information Officer. It collects, organizes, provides for archiving, and disseminates NASA's STI. The NASA STI program provides access to the NASA Aeronautics and Space Database and its public interface, the NASA Technical Report Server, thus providing one of the largest collections of aeronautical and space science STI in the world. Results are published in both non-NASA channels and by NASA in the NASA STI report series, which include the following report types:

• Technical Publication: Reports of completed research or a major significant phase of research that present the results of NASA programs and include extensive data or theoretical analysis. Includes compilations of significant scientific and technical data and information deemed to be of continuing reference value. NASA counterpart of peer-reviewed formal professional papers but has less stringent limitations on manuscript length and extent of graphic presentations.
• Technical Memorandum: Scientific and technical findings that are preliminary or of specialized interest, e.g., quick release reports, working papers, and bibliographies that contain minimal annotation. Does not contain extensive analysis.
• Contractor Report: Scientific and technical findings by NASA-sponsored contractors and grantees.
• Conference Publication: Collected papers from scientific and technical conferences, symposia, seminars, or other meetings sponsored or co-sponsored by NASA.
• Special Publication: Scientific, technical, or historical information from NASA programs, projects, and missions, often concerned with subjects having substantial public interest.
• Technical Translation: English-language translations of foreign scientific and technical material pertinent to NASA's mission.

Specialized services also include creating custom thesauri, building customized databases, and organizing and publishing research results.

For more information about the NASA STI program, see the following:

• Access the NASA STI program home page at www.sti.nasa.gov
• E-mail your question via the Internet to help@sti.nasa.gov
• Fax your question to the NASA STI help desk at 301-621-0134
• Phone the NASA STI help desk at 301-621-0390
• Write to:
  NASA STI Help Desk
  NASA Center for AeroSpace Information
  7115 Standard Drive
  Hanover, MD 21076-1320


Table of Contents

4.2.2.2 Human Factors Engineering Requirements ........ 45
4.2.2.3 Requirements Decomposition, Allocation, and Validation ........ 45
4.2.2.4 Capturing Requirements and the Requirements Database ........ 47
4.2.2.5 Technical Standards ........ 47
4.3 Logical Decomposition ........ 49
4.3.1 Process Description ........ 49
4.3.1.1 Inputs ........ 49
4.3.1.2 Process Activities ........ 49
4.3.1.3 Outputs ........ 51
4.3.2 Logical Decomposition Guidance ........ 52
4.3.2.1 Product Breakdown Structure ........ 52
4.3.2.2 Functional Analysis Techniques ........ 52
4.4 Design Solution Definition ........ 55
4.4.1 Process Description ........ 55
4.4.1.1 Inputs ........ 55
4.4.1.2 Process Activities ........ 56
4.4.1.3 Outputs ........ 61
4.4.2 Design Solution Definition Guidance ........ 62
4.4.2.1 Technology Assessment ........ 62
4.4.2.2 Integrating Engineering Specialties into the Systems Engineering Process ........ 62
5.0 Product Realization ........ 71
5.1 Product Implementation ........ 73
5.1.1 Process Description ........ 73
5.1.1.1 Inputs ........ 73
5.1.1.2 Process Activities ........ 74
5.1.1.3 Outputs ........ 75
5.1.2 Product Implementation Guidance ........ 76
5.1.2.1 Buying Off-the-Shelf Products ........ 76
5.1.2.2 Heritage ........ 76
5.2 Product Integration ........ 78
5.2.1 Process Description ........ 78
5.2.1.1 Inputs ........ 79
5.2.1.2 Process Activities ........ 79
5.2.1.3 Outputs ........ 79
5.2.2 Product Integration Guidance ........ 80
5.2.2.1 Integration Strategy ........ 80
5.2.2.2 Relationship to Product Implementation ........ 80
5.2.2.3 Product/Interface Integration Support ........ 80
5.2.2.4 Product Integration of the Design Solution ........ 81
5.2.2.5 Interface Management ........ 81
5.2.2.6 Compatibility Analysis ........ 81
5.2.2.7 Interface Management Tasks ........ 81
5.3 Product Verification ........ 83
5.3.1 Process Description ........ 83
5.3.1.1 Inputs ........ 83
5.3.1.2 Process Activities ........ 84
5.3.1.3 Outputs ........ 89
5.3.2 Product Verification Guidance ........ 89
5.3.2.1 Verification Program ........ 89
5.3.2.2 Verification in the Life Cycle ........ 89
5.3.2.3 Verification Procedures ........ 92


5.3.2.4 Verification Reports ........ 93
5.3.2.5 End-to-End System Testing ........ 93
5.3.2.6 Modeling and Simulation ........ 96
5.3.2.7 Hardware-in-the-Loop ........ 96
5.4 Product Validation ........ 98
5.4.1 Process Description ........ 98
5.4.1.1 Inputs ........ 98
5.4.1.2 Process Activities ........ 99
5.4.1.3 Outputs ........ 104
5.4.2 Product Validation Guidance ........ 104
5.4.2.1 Modeling and Simulation ........ 104
5.4.2.2 Software ........ 104
5.5 Product Transition ........ 106
5.5.1 Process Description ........ 106
5.5.1.1 Inputs ........ 106
5.5.1.2 Process Activities ........ 107
5.5.1.3 Outputs ........ 109
5.5.2 Product Transition Guidance ........ 110
5.5.2.1 Additional Product Transition Input Considerations ........ 110
5.5.2.2 After Product Transition to the End User—What Next? ........ 110
6.0 Crosscutting Technical Management ........ 111
6.1 Technical Planning ........ 112
6.1.1 Process Description ........ 112
6.1.1.1 Inputs ........ 112
6.1.1.2 Process Activities ........ 113
6.1.1.3 Outputs ........ 122
6.1.2 Technical Planning Guidance ........ 122
6.1.2.1 Work Breakdown Structure ........ 122
6.1.2.2 Cost Definition and Modeling ........ 125
6.1.2.3 Lessons Learned ........ 129
6.2 Requirements Management ........ 131
6.2.1 Process Description ........ 131
6.2.1.1 Inputs ........ 131
6.2.1.2 Process Activities ........ 132
6.2.1.3 Outputs ........ 134
6.2.2 Requirements Management Guidance ........ 134
6.2.2.1 Requirements Management Plan ........ 134
6.2.2.2 Requirements Management Tools ........ 135
6.3 Interface Management ........ 136
6.3.1 Process Description ........ 136
6.3.1.1 Inputs ........ 136
6.3.1.2 Process Activities ........ 136
6.3.1.3 Outputs ........ 137
6.3.2 Interface Management Guidance ........ 137
6.3.2.1 Interface Requirements Document ........ 137
6.3.2.2 Interface Control Document or Interface Control Drawing ........ 137
6.3.2.3 Interface Definition Document ........ 138
6.3.2.4 Interface Control Plan ........ 138
6.4 Technical Risk Management ........ 139
6.4.1 Process Description ........ 140
6.4.1.1 Inputs ........ 140


6.4.1.2 Process Activities ........ 140
6.4.1.3 Outputs ........ 141
6.4.2 Technical Risk Management Guidance ........ 141
6.4.2.1 Role of Continuous Risk Management in Technical Risk Management ........ 142
6.4.2.2 The Interface Between CRM and Risk-Informed Decision Analysis ........ 142
6.4.2.3 Selection and Application of Appropriate Risk Methods ........ 143
6.5 Configuration Management ........ 151
6.5.1 Process Description ........ 151
6.5.1.1 Inputs ........ 151
6.5.1.2 Process Activities ........ 151
6.5.1.3 Outputs ........ 156
6.5.2 CM Guidance ........ 156
6.5.2.1 What Is the Impact of Not Doing CM? ........ 156
6.5.2.2 When Is It Acceptable to Use Redline Drawings? ........ 157
6.6 Technical Data Management ........ 158
6.6.1 Process Description ........ 158
6.6.1.1 Inputs ........ 158
6.6.1.2 Process Activities ........ 158
6.6.1.3 Outputs ........ 162
6.6.2 Technical Data Management Guidance ........ 162
6.6.2.1 Data Security and ITAR ........ 162
6.7 Technical Assessment ........ 166
6.7.1 Process Description ........ 166
6.7.1.1 Inputs ........ 166
6.7.1.2 Process Activities ........ 166
6.7.1.3 Outputs ........ 167
6.7.2 Technical Assessment Guidance ........ 168
6.7.2.1 Reviews, Audits, and Key Decision Points ........ 168
6.7.2.2 Status Reporting and Assessment ........ 190
6.8 Decision Analysis ........ 197
6.8.1 Process Description ........ 197
6.8.1.1 Inputs ........ 198
6.8.1.2 Process Activities ........ 199
6.8.1.3 Outputs ........ 202
6.8.2 Decision Analysis Guidance ........ 203
6.8.2.1 Systems Analysis, Simulation, and Performance ........ 203
6.8.2.2 Trade Studies ........ 205
6.8.2.3 Cost-Benefit Analysis ........ 209
6.8.2.4 Influence Diagrams ........ 210
6.8.2.5 Decision Trees ........ 210
6.8.2.6 Multi-Criteria Decision Analysis ........ 211
6.8.2.7 Utility Analysis ........ 212
6.8.2.8 Risk-Informed Decision Analysis Process Example ........ 213
7.0 Special Topics ........ 217
7.1 Engineering with Contracts ........ 217
7.1.1 Introduction, Purpose, and Scope ........ 217
7.1.2 Acquisition Strategy ........ 217
7.1.2.1 Develop an Acquisition Strategy ........ 218
7.1.2.2 Acquisition Life Cycle ........ 218
7.1.2.3 NASA Responsibility for Systems Engineering ........ 218
7.1.3 Prior to Contract Award ........ 219


7.1.3.1 Acquisition Planning ........ 219
7.1.3.2 Develop the Statement of Work ........ 223
7.1.3.3 Task Order Contracts ........ 225
7.1.3.4 Surveillance Plan ........ 225
7.1.3.5 Writing Proposal Instructions and Evaluation Criteria ........ 226
7.1.3.6 Selection of COTS Products ........ 226
7.1.3.7 Acquisition-Unique Risks ........ 227
7.1.4 During Contract Performance ........ 227
7.1.4.1 Performing Technical Surveillance ........ 227
7.1.4.2 Evaluating Work Products ........ 229
7.1.4.3 Issues with Contract-Subcontract Arrangements ........ 229
7.1.5 Contract Completion ........ 230
7.1.5.1 Acceptance of Final Deliverables ........ 230
7.1.5.2 Transition Management ........ 231
7.1.5.3 Transition to Operations and Support ........ 232
7.1.5.4 Decommissioning and Disposal ........ 233
7.1.5.5 Final Evaluation of Contractor Performance ........ 233
7.2 Integrated Design Facilities ........ 234
7.2.1 Introduction ........ 234
7.2.2 CACE Overview and Importance ........ 234
7.2.3 CACE Purpose and Benefits ........ 235
7.2.4 CACE Staffing ........ 235
7.2.5 CACE Process ........ 236
7.2.5.1 Planning and Preparation ........ 236
7.2.5.2 Activity Execution Phase ........ 236
7.2.5.3 Activity Wrap-Up ........ 237
7.2.6 CACE Engineering Tools and Techniques ........ 237
7.2.7 CACE Facility, Information Infrastructure, and Staffing ........ 238
7.2.7.1 Facility ........ 238
7.2.7.2 Information Infrastructure ........ 238
7.2.7.3 Facility Support Staff Responsibilities ........ 239
7.2.8 CACE Products ........ 239
7.2.9 CACE Best Practices ........ 239
7.2.9.1 People ........ 240
7.2.9.2 Process and Tools ........ 240
7.2.9.3 Facility ........ 240
7.3 Selecting Engineering Design Tools ........ 242
7.3.1 Program and Project Considerations ........ 242
7.3.2 Policy and Processes ........ 242
7.3.3 Collaboration ........ 242
7.3.4 Design Standards ........ 243
7.3.5 Existing IT Architecture ........ 243
7.3.6 Tool Interfaces ........ 243
7.3.7 Interoperability and Data Formats ........ 243
7.3.8 Backward Compatibility ........ 244
7.3.9 Platform ........ 244
7.3.10 Tool Configuration Control ........ 244
7.3.11 Security/Access Control ........ 244
7.3.12 Training ........ 244
7.3.13 Licenses ........ 244
7.3.14 Stability of Vendor and Customer Support ........ 244
7.4 Human Factors Engineering ........ 246


7.4.1 Basic HF Model ........ 247
7.4.2 HF Analysis and Evaluation Techniques ........ 247
7.5 Environmental, Nuclear Safety, Planetary Protection, and Asset Protection Policy Compliance ........ 256
7.5.1 NEPA and EO 12114 ........ 256
7.5.1.1 National Environmental Policy Act ........ 256
7.5.1.2 EO 12114 Environmental Effects Abroad of Major Federal Actions ........ 257
7.5.2 PD/NSC-25 ........ 257
7.5.3 Planetary Protection ........ 258
7.5.4 Space Asset Protection ........ 260
7.5.4.1 Protection Policy ........ 260
7.5.4.2 Goal ........ 260
7.5.4.3 Scoping ........ 260
7.5.4.4 Protection Planning ........ 260
7.6 Use of Metric System ........ 261
Appendix A: Acronyms ........ 263
Appendix B: Glossary ........ 266
Appendix C: How to Write a Good Requirement ........ 279
Appendix D: Requirements Verification Matrix ........ 282
Appendix E: Creating the Validation Plan (Including Validation Requirements Matrix) ........ 284
Appendix F: Functional, Timing, and State Analysis ........ 285
Appendix G: Technology Assessment/Insertion ........ 293
Appendix H: Integration Plan Outline ........ 299
Appendix I: Verification and Validation Plan Sample Outline ........ 301
Appendix J: SEMP Content Outline ........ 303
Appendix K: Plans ........ 308
Appendix L: Interface Requirements Document Outline ........ 309
Appendix M: CM Plan Outline ........ 311
Appendix N: Guidance on Technical Peer Reviews/Inspections ........ 312
Appendix O: Tradeoff Examples ........ 316
Appendix P: SOW Review Checklist ........ 317
Appendix Q: Project Protection Plan Outline ........ 321
References ........ 323
Bibliography ........ 327
Index ........ 332


Figures

2.0-1 SE in context of overall project management ........ 4
2.1-1 The systems engineering engine ........ 5
2.2-1 A miniaturized conceptualization of the poster-size NASA project life-cycle process flow for flight and ground systems accompanying this handbook ........ 6
2.3-1 SE engine tracking icon ........ 8
2.3-2 Product hierarchy, tier 1: first pass through the SE engine ........ 9
2.3-3 Product hierarchy, tier 2: external tank ........ 10
2.3-4 Product hierarchy, tier 2: orbiter ........ 10
2.3-5 Product hierarchy, tier 3: avionics system ........ 11
2.3-6 Product hierarchy: complete pass through system design processes side of the SE engine ........ 11
2.3-7 Model of typical activities during operational phase (Phase E) of a product ........ 14
2.3-8 New products or upgrades reentering the SE engine ........ 15
2.5-1 The enveloping surface of nondominated designs ........ 16
2.5-2 Estimates of outcomes to be obtained from several design concepts including uncertainty ........ 17
3.0-1 NASA program life cycle ........ 20
3.0-2 NASA project life cycle ........ 20
3.10-1 Typical NASA budget cycle ........ 29
4.0-1 Interrelationships among the system design processes ........ 31
4.1-1 Stakeholder Expectations Definition Process ........ 33
4.1-2 Product flow for stakeholder expectations ........ 34
4.1-3 Typical ConOps development for a science mission ........ 36
4.1-4 Example of an associated end-to-end operational architecture ........ 36
4.1-5a Example of a lunar sortie timeline developed early in the life cycle ........ 37
4.1-5b Example of a lunar sortie DRM early in the life cycle ........ 37
4.1-6 Example of a more detailed, integrated timeline later in the life cycle for a science mission ........ 38
4.2-1 Technical Requirements Definition Process ........ 40
4.2-2 Characteristics of functional, operational, reliability, safety, and specialty requirements ........ 43
4.2-3 The flowdown of requirements ........ 46
4.2-4 Allocation and flowdown of science pointing requirements ........ 47
4.3-1 Logical Decomposition Process ........ 49
4.3-2 Example of a PBS ........ 52
4.3-3 Example of a functional flow block diagram ........ 53
4.3-4 Example of an N2 diagram ........ 54
4.4-1 Design Solution Definition Process ........ 55
4.4-2 The doctrine of successive refinement ........ 56
4.4-3 A quantitative objective function, dependent on life-cycle cost and all aspects of effectiveness ........ 58
5.0-1 Product realization ........ 71
5.1-1 Product Implementation Process ........ 73
5.2-1 Product Integration Process ........ 78
5.3-1 Product Verification Process ........ 84
5.3-2 Bottom-up realization process ........ 90
5.3-3 Example of end-to-end data flow for a scientific satellite mission ........ 94
5.4-1 Product Validation Process ........ 99
5.5-1 Product Transition Process ........ 106
6.1-1 Technical Planning Process ........ 112
6.1-2 Activity-on-arrow and precedence diagrams for network schedules ........ 116
6.1-3 Gantt chart ........ 118
6.1-4 Relationship between a system, a PBS, and a WBS ........ 123
6.1-5 Examples of WBS development errors ........ 125
6.2-1 Requirements Management Process ........ 131
6.3-1 Interface Management Process ........ 136


6.4-1 Technical Risk Management Process ........ 140
6.4-2 Scenario-based modeling of hazards ........ 141
6.4-3 Risk as a set of triplets ........ 141
6.4-4 Continuous risk management ........ 142
6.4-5 The interface between CRM and risk-informed decision analysis ........ 143
6.4-6 Risk analysis of decision alternatives ........ 144
6.4-7 Risk matrix ........ 145
6.4-8 Example of a fault tree ........ 146
6.4-9 Deliberation ........ 147
6.4-10 Performance monitoring and control of deviations ........ 149
6.4-11 Margin management method ........ 150
6.5-1 CM Process ........ 151
6.5-2 Five elements of configuration management ........ 152
6.5-3 Evolution of technical baseline ........ 153
6.5-4 Typical change control process ........ 155
6.6-1 Technical Data Management Process ........ 158
6.7-1 Technical Assessment Process ........ 166
6.7-2 Planning and status reporting feedback loop ........ 167
6.7-3 Cost and schedule variances ........ 190
6.7-4 Relationships of MOEs, MOPs, and TPMs ........ 192
6.7-5 Use of the planned profile method for the weight TPM with rebaseline in Chandra Project ........ 194
6.7-6 Use of the margin management method for the mass TPM in Sojourner ........ 194
6.8-1 Decision Analysis Process ........ 198
6.8-2 Example of a decision matrix ........ 201
6.8-3 Systems analysis across the life cycle ........ 203
6.8-4 Simulation model analysis techniques ........ 204
6.8-5 Trade study process ........ 205
6.8-6 Influence diagrams ........ 210
6.8-7 Decision tree ........ 211
6.8-8 Utility function for a “volume” performance measure ........ 213
6.8-9 Risk-informed Decision Analysis Process ........ 214
6.8-10 Example of an objectives hierarchy ........ 215
7.1-1 Acquisition life cycle ........ 218
7.1-2 Contract requirements development process ........ 223
7.2-1 CACE people/process/tools/facility paradigm ........ 234
7.4-1 Human factors interaction model ........ 247
7.4-2 HF engineering process and its links to the NASA program/project life cycle ........ 248
F-1 FFBD flowdown ........ 286
F-2 FFBD: example 1 ........ 287
F-3 FFBD showing additional control constructs: example 2 ........ 287
F-4 Enhanced FFBD: example 3 ........ 288
F-5 Requirements allocation sheet ........ 289
F-6 N2 diagram for orbital equipment ........ 289
F-7 Timing diagram example ........ 290
F-8 Slew command status state diagram ........ 291
G-1 PBS example ........ 294
G-2 Technology assessment process ........ 295
G-3 Architectural studies and technology development ........ 296
G-4 Technology readiness levels ........ 296
G-5 The TMA thought process ........ 297
G-6 TRL assessment matrix ........ 298
N-1 The peer review/inspection process ........ 312


Tables

2.3-1 Project Life-Cycle Phases .......... 7
4.1-1 Typical Operational Phases for a NASA Mission .......... 39
4.2-1 Benefits of Well-Written Requirements .......... 42
4.2-2 Requirements Metadata .......... 48
4.4-1 ILS Technical Disciplines .......... 66
6.6-1 Technical Data Tasks .......... 163
6.7-1 Program Technical Reviews .......... 170
6.7-2 P/SRR Entrance and Success Criteria .......... 171
6.7-3 P/SDR Entrance and Success Criteria .......... 172
6.7-4 MCR Entrance and Success Criteria .......... 173
6.7-5 SRR Entrance and Success Criteria .......... 174
6.7-6 MDR Entrance and Success Criteria .......... 175
6.7-7 SDR Entrance and Success Criteria .......... 176
6.7-8 PDR Entrance and Success Criteria .......... 177
6.7-9 CDR Entrance and Success Criteria .......... 178
6.7-10 PRR Entrance and Success Criteria .......... 179
6.7-11 SIR Entrance and Success Criteria .......... 180
6.7-12 TRR Entrance and Success Criteria .......... 181
6.7-13 SAR Entrance and Success Criteria .......... 182
6.7-14 ORR Entrance and Success Criteria .......... 183
6.7-15 FRR Entrance and Success Criteria .......... 184
6.7-16 PLAR Entrance and Success Criteria .......... 185
6.7-17 CERR Entrance and Success Criteria .......... 186
6.7-18 PFAR Entrance and Success Criteria .......... 186
6.7-19 DR Entrance and Success Criteria .......... 187
6.7-20 Functional and Physical Configuration Audits .......... 189
6.7-21 Systems Engineering Process Metrics .......... 196
6.8-1 Consequence Table .......... 199
6.8-2 Typical Information to Capture in a Decision Report .......... 202
7.1-1 Applying the Technical Processes on Contract .......... 220
7.1-2 Steps in the Requirements Development Process .......... 224
7.1-3 Proposal Evaluation Criteria .......... 227
7.1-4 Risks in Acquisition .......... 228
7.1-5 Typical Work Product Documents .......... 230
7.1-6 Contract-Subcontract Issues .......... 231
7.4-1 Human and Organizational Analysis Techniques .......... 249
7.5-1 Planetary Protection Mission Categories .......... 259
7.5-2 Summarized Planetary Protection Requirements .......... 259
D-1 Requirements Verification Matrix .......... 283
E-1 Validation Requirements Matrix .......... 284
G-1 Products Provided by the TA as a Function of Program/Project Phase .......... 294
H-1 Integration Plan Contents .......... 300
M-1 CM Plan Outline .......... 311
O-1 Typical Tradeoffs for Space Systems .......... 316
O-2 Typical Tradeoffs in the Acquisition Process .......... 316
O-3 Typical Tradeoffs Throughout the Project Life Cycle .......... 316


Boxes

System Cost, Effectiveness, and Cost-Effectiveness .......... 16
The Systems Engineer’s Dilemma .......... 17
Program Formulation .......... 21
Program Implementation .......... 21
Pre-Phase A: Concept Studies .......... 22
Phase A: Concept and Technology Development .......... 23
Phase B: Preliminary Design and Technology Completion .......... 24
Phase C: Final Design and Fabrication .......... 26
Phase D: System Assembly, Integration and Test, Launch .......... 27
Phase E: Operations and Sustainment .......... 28
Phase F: Closeout .......... 28
System Design Keys .......... 32
Example of Functional and Performance Requirements .......... 43
Rationale .......... 48
DOD Architecture Framework .......... 51
Prototypes .......... 67
Product Realization Keys .......... 72
Differences Between Verification and Validation Testing .......... 83
Types of Testing .......... 85
Types of Verification .......... 86
Differences Between Verification and Validation Testing .......... 98
Types of Validation .......... 100
Examples of Enabling Products and Support Resources for Preparing to Conduct Validation .......... 102
Model Verification and Validation .......... 104
Crosscutting Technical Management Keys .......... 111
Gantt Chart Features .......... 117
WBS Hierarchies for Systems .......... 126
Definitions .......... 132
Typical Interface Management Checklist .......... 138
Key Concepts in Technical Risk Management .......... 139
Example Sources of Risk .......... 145
Limitations of Risk Matrices .......... 145
Types of Configuration Change Management Changes .......... 154
Warning Signs/Red Flags (How Do You Know When You’re in Trouble?) .......... 156
Redlines Were Identified as One of the Major Causes of the NOAA N-Prime Mishap .......... 157
Inappropriate Uses of Technical Data .......... 160
Data Collection Checklist .......... 162
Termination Review .......... 169
Analyzing the Estimate at Completion .......... 191
Examples of Technical Performance Measures .......... 193
An Example of a Trade Tree for a Mars Rover .......... 207
Trade Study Reports .......... 208
Solicitations .......... 219
Source Evaluation Board .......... 226
Context Diagrams .......... 292


Preface

Since the writing of NASA/SP-6105 in 1995, systems engineering at the National Aeronautics and Space Administration (NASA), within national and international standards bodies, and as a discipline has undergone rapid evolution. Changes include implementing standards in the International Organization for Standardization (ISO) 9000, the use of the Carnegie Mellon Software Engineering Institute’s Capability Maturity Model® Integration (CMMI®) to improve development and delivery of products, and the impacts of mission failures. Lessons learned on systems engineering were documented in reports such as those by the NASA Integrated Action Team (NIAT), the Columbia Accident Investigation Board (CAIB), and the follow-on Diaz Report. Out of these efforts came the NASA Office of the Chief Engineer (OCE) initiative to improve the overall Agency systems engineering infrastructure and capability for the efficient and effective engineering of NASA systems, to produce quality products, and to achieve mission success. In addition, Agency policy and requirements for systems engineering have been established. This handbook update is a part of the OCE-sponsored Agencywide systems engineering initiative.

In 1995, SP-6105 was initially published to bring the fundamental concepts and techniques of systems engineering to NASA personnel in a way that recognizes the nature of NASA systems and the NASA environment. This revision of SP-6105 maintains that original philosophy while updating the Agency’s systems engineering body of knowledge, providing guidance for insight into current best Agency practices, and aligning the handbook with the new Agency systems engineering policy.

The update of this handbook was twofold: a top-down compatibility with higher level Agency policy and a bottom-up infusion of guidance from NASA practitioners in the field. This approach provided the opportunity to obtain best practices from across NASA and bridge the information to the established NASA systems engineering process. The intent is to communicate principles of good practice, as well as alternative approaches, rather than to specify a particular way to accomplish a task. The result embodied in this handbook is a top-level implementation approach to the practice of systems engineering unique to NASA. The material for updating this handbook was drawn from many different sources, including NASA procedural requirements, field center systems engineering handbooks and processes, and non-NASA systems engineering textbooks and guides.

This handbook consists of six core chapters: (1) systems engineering fundamentals discussion, (2) the NASA program/project life cycles, (3) systems engineering processes to get from a concept to a design, (4) systems engineering processes to get from a design to a final product, (5) crosscutting management processes in systems engineering, and (6) special topics relative to systems engineering. These core chapters are supplemented by appendices that provide outlines, examples, and further information to illustrate topics in the core chapters. The handbook makes extensive use of boxes and figures to define, refine, illustrate, and extend concepts in the core chapters without diverting the reader from the main information.

The handbook provides top-level guidelines for good systems engineering practices; it is not intended in any way to be a directive.

NASA/SP-2007-6105 Rev1 supersedes SP-6105, dated June 1995.


AcknowledgmentsPrimary points of contact: Stephen J. Kapurch, Officeof the Chief Engineer, <strong>NASA</strong> Headquarters, and Neil E.Rainwater, Marshall Space Flight Center.The following individuals are recognized as contributingpractitioners to the content of this handbook revision:■ Core Team Member (or Representative) from Center,Directorate, or Office◆ Integration Team Member• Subject Matter Expert Team Champion Subject Matter ExpertArden Acord, <strong>NASA</strong>/Jet Propulsion Laboratory Danette Allen, <strong>NASA</strong>/Langley Research Center Deborah Amato, <strong>NASA</strong>/Goddard Space Flight Center •Jim Andary, <strong>NASA</strong>/Goddard Space Flight Center ◆Tim Beard, <strong>NASA</strong>/Ames Research Center Jim Bilbro, <strong>NASA</strong>/Marshall Space Flight Center Mike Blythe, <strong>NASA</strong>/Headquarters ■Linda Bromley, <strong>NASA</strong>/Johnson Space Center ◆•■Dave Brown, Defense Acquisition University John Brunson, <strong>NASA</strong>/Marshall Space Flight Center •Joe Burt, <strong>NASA</strong>/Goddard Space Flight Center Glenn Campbell, <strong>NASA</strong>/Headquarters Joyce Carpenter, <strong>NASA</strong>/Johnson Space Center •Keith Chamberlin, <strong>NASA</strong>/Goddard Space Flight Center Peggy Chun, <strong>NASA</strong>/<strong>NASA</strong> <strong>Engineering</strong> and SafetyCenter ◆•■Cindy Coker, <strong>NASA</strong>/Marshall Space Flight Center Nita Congress, Graphic Designer ◆Catharine Conley, <strong>NASA</strong>/Headquarters Shelley Delay, <strong>NASA</strong>/Marshall Space Flight Center Rebecca Deschamp, <strong>NASA</strong>/Stennis Space Center Homayoon Dezfuli, <strong>NASA</strong>/Headquarters •Olga Dominguez, <strong>NASA</strong>/Headquarters Rajiv Doreswamy, <strong>NASA</strong>/Headquarters ■Larry Dyer, <strong>NASA</strong>/Johnson Space Center Nelson Eng, <strong>NASA</strong>/Johnson Space Center Patricia Eng, <strong>NASA</strong>/Headquarters Amy Epps, <strong>NASA</strong>/Marshall Space Flight Center Chester Everline, <strong>NASA</strong>/Jet Propulsion Laboratory Karen Fashimpaur, Arctic Slope Regional Corporation ◆Orlando Figueroa, <strong>NASA</strong>/Goddard Space Flight Center ■Stanley Fishkind, <strong>NASA</strong>/Headquarters ■Brad Flick, <strong>NASA</strong>/Dryden Flight Research Center ■Marton Forkosh, <strong>NASA</strong>/Glenn Research Center ■Dan Freund, <strong>NASA</strong>/Johnson Space Center Greg Galbreath, <strong>NASA</strong>/Johnson Space Center Louie Galland, <strong>NASA</strong>/Langley Research Center Yuri Gawdiak, <strong>NASA</strong>/Headquarters ■•Theresa Gibson, <strong>NASA</strong>/Glenn Research Center Ronnie Gillian, <strong>NASA</strong>/Langley Research Center Julius Giriunas, <strong>NASA</strong>/Glenn Research Center Ed Gollop, <strong>NASA</strong>/Marshall Space Flight Center Lee Graham, <strong>NASA</strong>/Johnson Space Center Larry Green, <strong>NASA</strong>/Langley Research Center Owen Greulich, <strong>NASA</strong>/Headquarters ■Ben Hanel, <strong>NASA</strong>/Ames Research Center Gena Henderson, <strong>NASA</strong>/Kennedy Space Center •Amy Hemken, <strong>NASA</strong>/Marshall Space Flight Center Bob Hennessy, <strong>NASA</strong>/<strong>NASA</strong> <strong>Engineering</strong> and SafetyCenter Ellen Herring, <strong>NASA</strong>/Goddard Space Flight Center •Renee Hugger, <strong>NASA</strong>/Johnson Space Center Brian Hughitt, <strong>NASA</strong>/Headquarters Eric Isaac, <strong>NASA</strong>/Goddard Space Flight Center 
■Tom Jacks, <strong>NASA</strong>/Stennis Space Center Ken Johnson, <strong>NASA</strong>/<strong>NASA</strong> <strong>Engineering</strong> and SafetyCenter Ross Jones, <strong>NASA</strong>/Jet Propulsion Laboratory ■John Juhasz, <strong>NASA</strong>/Johnson Space Center Stephen Kapurch, <strong>NASA</strong>/Headquarters ■◆•Jason Kastner, <strong>NASA</strong>/Jet Propulsion Laboratory Kristen Kehrer, <strong>NASA</strong>/Kennedy Space Center John Kelly, <strong>NASA</strong>/Headquarters Kriss Kennedy, <strong>NASA</strong>/Johnson Space Center <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • xv


AcknowledgmentsSteven Kennedy, <strong>NASA</strong>/Kennedy Space Center Tracey Kickbusch, <strong>NASA</strong>/Kennedy Space Center ■Casey Kirchner, <strong>NASA</strong>/Stennis Space Center Kenneth Kumor, <strong>NASA</strong>/Headquarters Janne Lady, SAITECH/CSC Jerry Lake, <strong>Systems</strong> Management international Kenneth W. Ledbetter, <strong>NASA</strong>/Headquarters ■Steve Leete, <strong>NASA</strong>/Goddard Space Flight Center William Lincoln, <strong>NASA</strong>/Jet Propulsion Laboratory Dave Littman, <strong>NASA</strong>/Goddard Space Flight Center John Lucero, <strong>NASA</strong>/Glenn Research Center Paul Luz, <strong>NASA</strong>/Marshall Space Flight Center Todd MacLeod, <strong>NASA</strong>/Marshall Space Flight Center Roger Mathews, <strong>NASA</strong>/Kennedy Space Center •Bryon Maynard, <strong>NASA</strong>/Stennis Space Center Patrick McDuffee, <strong>NASA</strong>/Marshall Space Flight Center Mark McElyea, <strong>NASA</strong>/Marshall Space Flight Center William McGovern, Defense Acquisition University ◆Colleen McGraw, <strong>NASA</strong>/Goddard Space FlightCenter ◆•Melissa McGuire, <strong>NASA</strong>/Glenn Research Center Don Mendoza, <strong>NASA</strong>/Ames Research Center Leila Meshkat, <strong>NASA</strong>/Jet Propulsion Laboratory Elizabeth Messer, <strong>NASA</strong>/Stennis Space Center •Chuck Miller, <strong>NASA</strong>/Headquarters Scott Mimbs, <strong>NASA</strong>/Kennedy Space Center Steve Newton, <strong>NASA</strong>/Marshall Space Flight Center Tri Nguyen, <strong>NASA</strong>/Johnson Space Center Chuck Niles, <strong>NASA</strong>/Langley Research Center •Cynthia Null, <strong>NASA</strong>/<strong>NASA</strong> <strong>Engineering</strong> and SafetyCenter John Olson, <strong>NASA</strong>/Headquarters Tim Olson, QIC, Inc. Sam Padgett, <strong>NASA</strong>/Johnson Space Center Christine Powell, <strong>NASA</strong>/Stennis Space Center ◆•■Steve Prahst, <strong>NASA</strong>/Glenn Research Center Pete Prassinos, <strong>NASA</strong>/Headquarters ■Mark Prill, <strong>NASA</strong>/Marshall Space Flight Center Neil Rainwater, <strong>NASA</strong>/Marshall Space Flight Center ■ ◆Ron Ray, <strong>NASA</strong>/Dryden Flight Research Center Gary Rawitscher, <strong>NASA</strong>/Headquarters Joshua Reinert, ISL Inc. 
Norman Rioux, <strong>NASA</strong>/Goddard Space Flight Center Steve Robbins, <strong>NASA</strong>/Marshall Space Flight Center •Dennis Rohn, <strong>NASA</strong>/Glenn Research Center ◆Jim Rose, <strong>NASA</strong>/Jet Propulsion Laboratory Arnie Ruskin,* <strong>NASA</strong>/Jet Propulsion Laboratory •Harry Ryan, <strong>NASA</strong>/Stennis Space Center George Salazar, <strong>NASA</strong>/Johnson Space Center Nina Scheller, <strong>NASA</strong>/Ames Research Center ■Pat Schuler, <strong>NASA</strong>/Langley Research Center •Randy Seftas, <strong>NASA</strong>/Goddard Space Flight Center Joey Shelton, <strong>NASA</strong>/Marshall Space Flight Center •Robert Shishko, <strong>NASA</strong>/Jet Propulsion Laboratory ◆Burton Sigal, <strong>NASA</strong>/Jet Propulsion Laboratory Sandra Smalley, <strong>NASA</strong>/Headquarters Richard Smith, <strong>NASA</strong>/Kennedy Space Center John Snoderly, Defense Acquisition University Richard Sorge, <strong>NASA</strong>/Glenn Research Center Michael Stamatelatos, <strong>NASA</strong>/Headquarters ■Tom Sutliff, <strong>NASA</strong>/Glenn Research Center •Todd Tofil, <strong>NASA</strong>/Glenn Research Center John Tinsley, <strong>NASA</strong>/Headquarters Rob Traister, Graphic Designer ◆Clayton Turner, <strong>NASA</strong>/Langley Research Center ■Paul VanDamme, <strong>NASA</strong>/Jet Propulsion Laboratory Karen Vaner, <strong>NASA</strong>/Stennis Space Center Lynn Vernon, <strong>NASA</strong>/Johnson Space Center Linda Voss, <strong>Technical</strong> Writer ◆Britt Walters, <strong>NASA</strong>/Johnson Space Center ■Tommy Watts, <strong>NASA</strong>/Marshall Space Flight Center Richard Weinstein, <strong>NASA</strong>/Headquarters Katie Weiss, <strong>NASA</strong>/Jet Propulsion Laboratory •Martha Wetherholt, <strong>NASA</strong>/Headquarters Becky Wheeler, <strong>NASA</strong>/Jet Propulsion Laboratory Cathy White, <strong>NASA</strong>/Marshall Space Flight Center Reed Wilcox, <strong>NASA</strong>/Jet Propulsion Laboratory Barbara Woolford, <strong>NASA</strong>/Johnson Space Center •Felicia Wright, <strong>NASA</strong>/Langley Research Center Robert Youngblood, ISL Inc. Tom Zang, <strong>NASA</strong>/Langley Research Center *In memory of.xvi • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


1.0 Introduction

1.1 Purpose

This handbook is intended to provide general guidance and information on systems engineering that will be useful to the NASA community. It provides a generic description of Systems Engineering (SE) as it should be applied throughout NASA. A goal of the handbook is to increase awareness and consistency across the Agency and advance the practice of SE. This handbook provides perspectives relevant to NASA and data particular to NASA.

This handbook should be used as a companion for implementing NPR 7123.1, Systems Engineering Processes and Requirements, as well as the Center-specific handbooks and directives developed for implementing systems engineering at NASA. It provides a companion reference book for the various systems engineering related courses being offered under NASA’s auspices.

1.2 Scope and Depth

The coverage in this handbook is limited to general concepts and generic descriptions of processes, tools, and techniques. It provides information on systems engineering best practices and pitfalls to avoid. There are many Center-specific handbooks and directives, as well as textbooks, that can be consulted for in-depth tutorials.

This handbook describes systems engineering as it should be applied to the development and implementation of large and small NASA programs and projects. NASA has defined different life cycles that specifically address the major project categories, or product lines, which are: Flight Systems and Ground Support (FS&GS), Research and Technology (R&T), Construction of Facilities (CoF), and Environmental Compliance and Restoration (ECR). The technical content of the handbook provides systems engineering best practices that should be incorporated into all NASA product lines. (Check the NASA On-Line Directives Information System (NODIS) electronic document library for applicable NASA directives on topics such as product lines.) For simplicity, this handbook uses the FS&GS product line as an example. The specifics of FS&GS can be seen in the description of the life cycle and the details of the milestone reviews. Each product line will vary in these two areas; therefore, the reader should refer to the applicable NASA procedural requirements for the specific requirements for their life cycle and reviews. The engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects.

The handbook’s scope properly includes systems engineering functions regardless of whether they are performed by a manager or an engineer, in-house or by a contractor.


2.0 Fundamentals of Systems Engineering

Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. A “system” is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce system-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected.¹ Systems engineering is a way of looking at the “big picture” when making technical decisions. It is a way of achieving stakeholder functional, physical, and operational performance requirements in the intended use environment over the planned life of the systems. In other words, systems engineering is a logical way of thinking.

Systems engineering is the art and science of developing an operable system capable of meeting requirements within often opposed constraints. Systems engineering is a holistic, integrative discipline, wherein the contributions of structural engineers, electrical engineers, mechanism designers, power engineers, human factors engineers, and many more disciplines are evaluated and balanced, one against another, to produce a coherent whole that is not dominated by the perspective of a single discipline.²

Systems engineering seeks a safe and balanced design in the face of opposing interests and multiple, sometimes conflicting constraints. The systems engineer must develop the skill and instinct for identifying and focusing efforts on assessments to optimize the overall design and not favor one system/subsystem at the expense of another. The art is in knowing when and where to probe. Personnel with these skills are usually tagged as “systems engineers.” They may have other titles, such as lead systems engineer, technical manager, or chief engineer, but for this document we will use the term systems engineer.

The exact role and responsibility of the systems engineer may change from project to project depending on the size and complexity of the project and from phase to phase of the life cycle. For large projects, there may be one or more systems engineers. For small projects, the project manager may sometimes perform these practices. But whoever assumes those responsibilities, the systems engineering functions must be performed. The actual assignment of the roles and responsibilities of the named systems engineer may therefore also vary. The lead systems engineer ensures that the system technically fulfills the defined needs and requirements and that a proper systems engineering approach is being followed. The systems engineer oversees the project’s systems engineering activities as performed by the technical team and directs, communicates, monitors, and coordinates tasks. The systems engineer reviews and evaluates the technical aspects of the project to ensure that the systems/subsystems engineering processes are functioning properly and evolves the system from concept to product. The entire technical team is involved in the systems engineering process.

The systems engineer will usually play the key role in leading the development of the system architecture, defining and allocating requirements, evaluating design tradeoffs, balancing technical risk between systems, defining and assessing interfaces, and providing oversight of verification and validation activities, as well as many other tasks. The systems engineer will usually have the prime responsibility for developing many of the project documents, including the Systems Engineering Management Plan (SEMP), requirements/specification documents, verification and validation documents, certification packages, and other technical documentation.

¹ Rechtin, Systems Architecting of Organizations: Why Eagles Can’t Swim.
² Comments on systems engineering throughout Chapter 2.0 are extracted from the speech “System Engineering and the Two Cultures of Engineering” by Michael D. Griffin, NASA Administrator.
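One of the tasks named above, evaluating design tradeoffs, is often made explicit with a simple weighted decision matrix of the general kind discussed under decision analysis in Section 6.8. The sketch below is purely illustrative and is not a method this handbook prescribes; the criteria, weights, scores, and alternative names are hypothetical.

# Illustrative sketch only: a minimal weighted decision matrix for comparing
# design alternatives. Criteria, weights, and scores are invented examples.

CRITERIA = {                 # criterion -> relative weight (weights sum to 1.0)
    "performance": 0.4,
    "technical risk": 0.3,
    "life-cycle cost": 0.3,
}

ALTERNATIVES = {             # each alternative scored 1 (poor) to 5 (excellent)
    "Design concept A": {"performance": 4, "technical risk": 2, "life-cycle cost": 3},
    "Design concept B": {"performance": 3, "technical risk": 4, "life-cycle cost": 4},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single figure of merit."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# A weighted sum is only one way to combine scores; before recommending an
# option, the systems engineer would also check how sensitive the ranking is
# to the chosen weights.
for name, scores in ALTERNATIVES.items():
    print(f"{name}: {weighted_score(scores):.2f}")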


In summary, the systems engineer is skilled in the art and science of balancing organizational and technical interactions in complex systems. However, since the entire team is involved in the systems engineering approach, in some ways everyone is a systems engineer. Systems engineering is about tradeoffs and compromises, about generalists rather than specialists. Systems engineering is about looking at the “big picture” and ensuring not only that the team gets the design right (meets requirements) but that it gets the right design.

To explore this further, consider SE in the context of project management. As discussed in NPR 7120.5, NASA Space Flight Program and Project Management Requirements, project management is the function of planning, overseeing, and directing the numerous activities required to achieve the requirements, goals, and objectives of the customer and other stakeholders within specified cost, quality, and schedule constraints. Project management can be thought of as having two major areas of emphasis, both of equal weight and importance: systems engineering and project control. Figure 2.0-1 is a notional graphic depicting this concept. Note that there are areas where the two cornerstones of project management overlap. In these areas, SE provides the technical aspects or inputs, whereas project control provides the programmatic, cost, and schedule inputs.

[Figure 2.0-1 SE in context of overall project management: two overlapping areas of emphasis. The SYSTEMS ENGINEERING side lists System Design (Requirements Definition, Technical Solution Definition), Product Realization (Design Realization, Evaluation, Product Transition), and Technical Management (Technical Planning, Technical Control, Technical Assessment, Technical Decision Analysis). The PROJECT CONTROL side lists Management Planning, Integrated Assessment, Schedule Management, Configuration Management, Resource Management, Documentation and Data Management, and Acquisition Management. The overlap covers shared functions: Planning, Risk Management, Configuration Management, Data Management, Assessment, and Decision Analysis.]

This document will focus on the SE side of the diagram. These practices/processes are taken from NPR 7123.1, NASA Systems Engineering Processes and Requirements. Each will be described in much greater detail in subsequent chapters of this document, but an overview is given below.

2.1 The Common Technical Processes and the SE Engine

There are three sets of common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements: system design, product realization, and technical management. The processes in each set and their interactions and flows are illustrated by the NPR systems engineering “engine” shown in Figure 2.1-1. The processes of the SE engine are used to develop and realize the end products. This chapter provides the application context of the 17 common technical processes required in NPR 7123.1. The system design processes, the product realization processes, and the technical management processes are discussed in more detail in Chapters 4.0, 5.0, and 6.0, respectively. Steps 1 through 9 indicated in Figure 2.1-1 represent the tasks in execution of a project. Steps 10 through 17 are crosscutting tools for carrying out the processes.

[Figure 2.1-1 The systems engineering engine. Requirements flow down from the level above, and realized products are delivered to the level above. System Design Processes, applied to each work breakdown structure model down and across the system structure: Requirements Definition Processes (1. Stakeholder Expectations Definition; 2. Technical Requirements Definition) and Technical Solution Definition Processes (3. Logical Decomposition; 4. Design Solution Definition). Product Realization Processes, applied to each product up and across the system structure: Design Realization Processes (5. Product Implementation; 6. Product Integration), Evaluation Processes (7. Product Verification; 8. Product Validation), and the Product Transition Process (9. Product Transition). Technical Management Processes: Technical Planning Process (10. Technical Planning), Technical Control Processes (11. Requirements Management; 12. Interface Management; 13. Technical Risk Management; 14. Configuration Management; 15. Technical Data Management), Technical Assessment Process (16. Technical Assessment), and Technical Decision Analysis Process (17. Decision Analysis). Requirements flow down to the level below; realized products come up from the level below.]

• System Design Processes: The four system design processes shown in Figure 2.1-1 are used to define and baseline stakeholder expectations, generate and baseline technical requirements, and convert the technical requirements into a design solution that will satisfy the baselined stakeholder expectations. These processes are applied to each product of the system structure from the top of the structure to the bottom until the lowest products in any system structure branch are defined to the point where they can be built, bought, or reused. All other products in the system structure are realized by integration. Designers not only develop the design solutions to the products intended to perform the operational functions of the system, but also establish requirements for the products and services that enable each operational/mission product in the system structure.

• Product Realization Processes: The product realization processes are applied to each operational/mission product in the system structure, starting from the lowest level product and working up to higher level integrated products. These processes are used to create the design solution for each product (e.g., by the Product Implementation or Product Integration Process) and to verify, validate, and transition up to the next hierarchical level products that satisfy their design solutions and meet stakeholder expectations as a function of the applicable life-cycle phase.

• Technical Management Processes: The technical management processes are used to establish and evolve technical plans for the project, to manage communication across interfaces, to assess progress against the plans and requirements for the system products or services, to control technical execution of the project through to completion, and to aid in the decisionmaking process.

The processes within the SE engine are used both iteratively and recursively. As defined in NPR 7123.1, “iterative” is the “application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements,” whereas “recursive” is defined as adding value to the system “by the repeated application of processes to design next lower layer system products or to realize next upper layer end products within the system structure. This also applies to repeating application of the same processes to the system structure in the next life-cycle phase to mature the system definition and satisfy phase success criteria.” The example used in Section 2.3 will further explain these concepts. The technical processes are applied recursively and iteratively to break down the initializing concepts of the system to a level of detail concrete enough that the technical team can implement a product from the information. Then the processes are applied recursively and iteratively, bottom up, to integrate the lowest level products into larger and larger products until the whole of the system has been realized.
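For readers who find a flat listing easier to scan than the figure, the sketch below restates the 17 common technical processes in the three groups shown in Figure 2.1-1. The process names and numbering come from the text above; the Python dictionary layout is only an illustrative convenience, not something NPR 7123.1 prescribes.

# A compact restatement of the 17 common technical processes, grouped as in
# Figure 2.1-1. The dictionary layout is an assumed convenience for illustration.

SE_ENGINE = {
    "System Design Processes": {
        1: "Stakeholder Expectations Definition",
        2: "Technical Requirements Definition",
        3: "Logical Decomposition",
        4: "Design Solution Definition",
    },
    "Product Realization Processes": {
        5: "Product Implementation",
        6: "Product Integration",
        7: "Product Verification",
        8: "Product Validation",
        9: "Product Transition",
    },
    "Technical Management Processes": {
        10: "Technical Planning",
        11: "Requirements Management",
        12: "Interface Management",
        13: "Technical Risk Management",
        14: "Configuration Management",
        15: "Technical Data Management",
        16: "Technical Assessment",
        17: "Decision Analysis",
    },
}

# Steps 1-9 are the project-execution tasks; steps 10-17 are the crosscutting
# processes used throughout every pass of the engine.
for group, processes in SE_ENGINE.items():
    print(f"{group} (steps {min(processes)}-{max(processes)}):")
    for number, name in processes.items():
        print(f"  {number:2d}. {name}")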


2.3 Example of Using the SE Engine

To help in understanding how the SE engine is applied, an example will be posed and walked through the processes. Pertinent to this discussion are the phases of the program and project life cycles, which will be discussed in greater depth in Chapter 3.0 of this document. As described in Chapter 3.0, NPR 7120.5 defines the life cycle used for NASA programs and projects. The life-cycle phases are described in Table 2.3-1.

Use of the different phases of a life cycle allows the various products of a project to be gradually developed and matured from initial concepts through the fielding of the product and to its final retirement. The SE engine shown in Figure 2.1-1 is used throughout all phases.

In Pre-Phase A, the SE engine is used to develop the initial concepts; develop a preliminary/draft set of key high-level requirements; realize these concepts through modeling, mockups, simulation, or other means; and verify and validate that these concepts and products would be able to meet the key high-level requirements. Note that this is not the formal verification and validation program that will be performed on the final product, but a methodical runthrough ensuring that the concepts being developed in Pre-Phase A would be able to meet the likely requirements and expectations of the stakeholders. Concepts would be developed to the lowest level necessary to ensure that the concepts are feasible and to a level that will reduce the risk enough to satisfy the project. Academically, this process could proceed down to the circuit board level for every system.

Table 2.3-1 Project Life-Cycle Phases

Formulation

Pre-Phase A: Concept Studies
Purpose: To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected. Determine feasibility of desired system, develop mission concepts, draft system-level requirements, identify potential technology needs.
Typical output: Feasible system concepts in the form of simulations, analysis, study reports, models, and mockups.

Phase A: Concept and Technology Development
Purpose: To determine the feasibility and desirability of a suggested new major system and establish an initial baseline compatibility with NASA’s strategic plans. Develop final mission concept, system-level requirements, and needed system structure technology developments.
Typical output: System concept definition in the form of simulations, analysis, engineering models, and mockups, and trade study definition.

Phase B: Preliminary Design and Technology Completion
Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs. Develop system structure end product (and enabling product) requirements and generate a preliminary design for each system structure end product.
Typical output: End products in the form of mockups, trade study results, specification and interface documents, and prototypes.

Implementation

Phase C: Final Design and Fabrication
Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems), fabricate hardware, and code software. Generate final designs for each system structure end product.
Typical output: End product detailed designs, end product component fabrication, and software development.

Phase D: System Assembly, Integration and Test, Launch
Purpose: To assemble and integrate the products to create the system, meanwhile developing confidence that it will be able to meet the system requirements. Launch and prepare for operations. Perform system end product implementation, assembly, integration and test, and transition to use.
Typical output: Operations-ready system end product with supporting related enabling products.

Phase E: Operations and Sustainment
Purpose: To conduct the mission and meet the initially identified need and maintain support for that need. Implement the mission operations plan.
Typical output: Desired system.

Phase F: Closeout
Purpose: To implement the systems decommissioning/disposal plan developed in Phase E and perform analyses of the returned data and any returned samples.
Typical output: Product closeout.
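Table 2.3-1 can also be carried in project tools as simple structured data, for example to tag work products with the phase that produced them. The sketch below is one minimal, assumed representation with wording abridged from the table; nothing in the handbook requires this form.

# A minimal, assumed data representation of Table 2.3-1 (wording abridged).
from dataclasses import dataclass

@dataclass(frozen=True)
class LifeCyclePhase:
    name: str
    group: str            # "Formulation" or "Implementation"
    purpose: str          # abridged from the "Purpose" column
    typical_output: str   # abridged from the "Typical Output" column

PHASES = (
    LifeCyclePhase("Pre-Phase A", "Formulation",
                   "Concept studies; determine feasibility of the desired system",
                   "Feasible system concepts (simulations, analyses, models, mockups)"),
    LifeCyclePhase("Phase A", "Formulation",
                   "Concept and technology development",
                   "System concept definition and trade study definition"),
    LifeCyclePhase("Phase B", "Formulation",
                   "Preliminary design and technology completion",
                   "End product requirements and preliminary designs"),
    LifeCyclePhase("Phase C", "Implementation",
                   "Final design and fabrication",
                   "Detailed designs, component fabrication, software development"),
    LifeCyclePhase("Phase D", "Implementation",
                   "System assembly, integration and test, launch",
                   "Operations-ready system end product"),
    LifeCyclePhase("Phase E", "Implementation",
                   "Operations and sustainment", "Desired system"),
    LifeCyclePhase("Phase F", "Implementation",
                   "Closeout", "Product closeout"),
)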


However, that would involve a great deal of time and money. There may be a higher level or tier of product than the circuit board level that would enable designers to accurately determine the feasibility of accomplishing the project (the purpose of Pre-Phase A).

During Phase A, the recursive use of the SE engine continues, this time taking the concepts and draft key requirements that were developed and validated during Pre-Phase A and fleshing them out to become the set of baseline system requirements and Concept of Operations (ConOps). During this phase, key areas of high risk might be simulated or prototyped to ensure that the concepts and requirements being developed are good ones and to identify the verification and validation tools and techniques that will be needed in later phases.

During Phase B, the SE engine is applied recursively to further mature requirements for all products in the developing product tree, develop ConOps preliminary designs, and perform feasibility analysis of the verification and validation concepts to ensure the designs will likely be able to meet their requirements.

Phase C again uses the left side of the SE engine to finalize all requirement updates, finalize the ConOps, develop the final designs to the lowest level of the product tree, and begin fabrication. Phase D uses the right side of the SE engine to recursively perform the final implementation, integration, verification, and validation of the end product and, at the final pass, transition the end product to the user. The technical management processes of the SE engine are used in Phases E and F to monitor performance; control configuration; and make decisions associated with the operations, sustaining engineering, and closeout of the system. Any new capabilities or upgrades of the existing system would reenter the SE engine as new developments.

2.3.1 Detailed Example

Since it is already well known, the NASA Space Transportation System (STS) will be used as an example to look at how the SE engine would be used in Phase A. This example is simplified to illustrate the application of the SE processes in the engine and is in no way as detailed as would be necessary to actually build this highly complex vehicle. The SE engine is used recursively to drive out more and more detail with each pass. The icon shown in Figure 2.3-1 will be used to keep track of the applicable place in the SE engine. The numbers in the icon correspond to the numbered processes within the SE engine as shown in Figure 2.1-1. The various layers of the product hierarchy will be called “tiers.” Tiers are also called “layers” or “levels.” Basically, the higher the number of the tier or level, the lower in the product hierarchy the product is and the more detailed the product is becoming (e.g., going from boxes, to circuit boards, to components).

[Figure 2.3-1 SE engine tracking icon: a miniature of the SE engine with the 17 numbered processes, used to indicate the applicable place in the engine at each step of the example.]

2.3.2 Example Premise

NASA decides that there is a need for a transportation system that will act like a “truck” to carry large pieces of equipment and crew into Low Earth Orbit (LEO). Referring back to the project life cycle, the project first enters Pre-Phase A. During this phase, several concept studies are performed, and it is determined that it is feasible to develop such a “space truck.” This is determined through combinations of simulations, mockups, analyses, or other like means. For simplicity, assume feasibility will be proven through concept models. The processes and framework of the SE engine will be used to design and implement these models. The project would then enter the Phase A activities to take the Pre-Phase A concepts, refine them, and define the system requirements for the end product. The detailed example will begin in Phase A and show how the SE engine is used. As described in the overview, a similar process is used for the other project phases.

2.3.2.1 Example Phase A System Design Passes

First Pass

Taking the preliminary concepts and draft key system requirements developed during the Pre-Phase A activities, the SE engine is entered at the first process and used to determine who the product (i.e., the STS) stakeholders are and what they want. During Pre-Phase A these needs and expectations were pretty general ideas, probably just saying the Agency needs a “space truck” that will carry X tons of payload into LEO, accommodate a payload of so-and-so size, carry a crew of seven, etc.


During this Phase A pass, these general concepts are detailed out and agreed to. The ConOps (sometimes referred to as the operational concept) generated in Pre-Phase A is also detailed out and agreed to, to ensure all stakeholders are in agreement as to what is really expected of the product, in this case the transportation system. The detailed expectations are then converted into good requirement statements. (For more information on what constitutes a good requirement, see Appendix C.) Subsequent passes and subsequent phases will refine these requirements into specifications that can actually be built. Also note that all of the technical management processes (SE engine processes numbered 10 through 17) are also used during this and all subsequent passes and activities. These ensure that all the proper planning, control, assessment, and decisions are used and maintained. Although for simplification they will not be mentioned in the rest of this example, they will always be in effect.

Next, using the requirements and the ConOps previously developed, logical decomposition models/diagrams are built up to help bring the requirements into perspective and to show their relationships. Finally, these diagrams, requirements, and ConOps documents are used to develop one or more feasible design solutions. Note that at this point, since this is only the first pass through the SE engine, these design solutions are not detailed enough to actually build anything. Consequently, the design solutions might be summarized as, “To accomplish this transportation system, the best option in our trade studies is a three-part system: a reusable orbiter for the crew and cargo, a large external tank to hold the propellants, and two solid rocket boosters to give extra power for liftoff that can be recovered, refurbished, and reused.” (Of course, the actual design solution would be much more descriptive and detailed.) So, for this first pass, the first tier of the product hierarchy might look like Figure 2.3-2. There would also be other enabling products that might appear in the product tree, but for simplicity only the main products are shown in this example.

[Figure 2.3-2 Product hierarchy, tier 1: first pass through the SE engine. Tier 0: Space Transportation System. Tier 1: External Tank, Orbiter, Solid Rocket Booster.]

Now, obviously, the design solution is not yet at a detailed enough level to actually build the prototypes or models of any of these products. The requirements, ConOps, functional diagrams, and design solutions are still at a pretty high, general level. Note that the SE processes on the right side (i.e., the product realization processes) of the SE engine have yet to be addressed. The design must first be at a level where something can actually be built, coded, or reused before that side of the SE engine can be used. So, a second pass of the left side of the SE engine will be started.

Second Pass

The SE engine is completely recursive. That is, each of the three elements shown in the tier 1 diagram can now be considered a product of its own, and the SE engine is therefore applied to each of the three elements separately. For example, the external tank is considered an end product, and the SE engine resets back to the first processes. So now, focusing just on the external tank, who the stakeholders are and what they expect of the external tank is determined. Of course, one of the main stakeholders will be the owners of the tier 1 requirements and the STS as an end product, but there will also be other new stakeholders. A new ConOps for how the external tank would operate is generated. The tier 1 requirements that are applicable (allocated) to the external tank would be “flowed down” and validated. Usually, some of these will be too general to implement into a design, so the requirements will have to be detailed out. To these derived requirements will also be added new requirements that are generated from the stakeholder expectations, and other applicable standards for workmanship, safety, quality, etc.

Next, the external tank requirements and the external tank ConOps are established, and functional diagrams are developed as was done in the first pass with the STS product. Finally, these diagrams, requirements, and ConOps documents are used to develop some feasible design solutions for the external tank.


2.0 Fundamentals of <strong>Systems</strong> <strong>Engineering</strong>will also not be enoughdetail to actually buildor prototype the externaltank. The design solutionmight be summarized as,“To build this externaltank, since our trade studies showed the best option wasto use cryogenic propellants, a tank for the liquid hydrogenwill be needed as will another tank for the liquidoxygen, instrumentation, and an outer structure of aluminumcoated with foam.” Thus, the tier 2 product treefor the external tank might look like Figure 2.3‐3.Tier 0Tier 1101 11 2 12133144151617Tier 2 HydrogenTank101 11 2 121331441516179In a similar manner, theorbiter would also takeanother pass through theSE engine starting withidentifying the stakeholdersand their expectations,and generating a ConOps for the orbiter element.The tier 1 requirements that are applicable (allocated) tothe orbiter would be “flowed down” and validated; newrequirements derived from them and any additionalrequirements (including interfaces with the other elements)would be added.101 11 2 1213314415161795678OxygenTankFigure 2.3‐3 Product hierarchy, tier 2:external tank567895678ExternalTankSpaceTransportationSystemExternalStructureOrbiterSolidRocketBoosterNext, the orbiter requirementsand the ConOpsare taken, functional diagramsare developed,and one or more feasibledesign solutions for theorbiter are generated. As with the external tank, at thispass, there will not be enough detail to actually build ordo a complex model of the orbiter. The orbiter designsolution might be summarized as, “To build this orbiterwill require a winged vehicle with a thermal protectionsystem; an avionics system; a guidance, navigation, andcontrol system; a propulsion system; an environmentalcontrol system; etc.” So the tier 2 product tree for theorbiter element might look like Figure 2.3‐4.Tier 0Tier 1Tier 2Likewise, the solid rocket booster would also be consideredan end product, and a pass through the SE enginewould generate a tier 2 design concept, just as was donewith the external tank and the orbiter.Third PassExternalStructureExternalTankEach of the tier 2 elements is also considered an endproduct, and each undergoes another pass throughthe SE engine, definingstakeholders, generatingConOps, flowing downallocated requirements,generating new and derivedrequirements, anddeveloping functional diagrams and design solutionconcepts. As an example of just the avionics system element,the tier 3 product hierarchy tree might look likeFigure 2.3‐5.Passes 4 Through nThermalProtectionSystemSpaceTransportationSystemOrbiterAvionicsSystemSolidRocketBoosterInstrumentationEnvironmentalControlSystemFigure 2.3‐4 Product hierarchy, tier 2: orbiter101 11 2 1213314415161795678Etc.For this Phase A set of passes, this recursive process iscontinued for each product (model) on each tier downto the lowest level in the product tree. Note that in someprojects it may not be feasible, given an estimated projectcost and schedule, to perform this recursive process completelydown to the smallest component during Phase A.10 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


Figure 2.3-5 Product hierarchy, tier 3: avionics system

In these cases, engineering judgment must be used to determine what level of the product tier is feasible. Note that the lowest feasible level may occur at different tiers depending on the product-line complexity. For example, for one product line it may occur at tier 2, whereas for a more complex product it could occur at tier 8. This also means that it will take different amounts of time to reach the bottom. Thus, for any given program or project, products will be at various stages of development. For this Phase A example, Figure 2.3-6 depicts the STS product hierarchy after completely passing through the system design processes side of the SE engine. At the end of this set of passes, system requirements, ConOps, and high-level conceptual functional and physical architectures for each product in the tree would exist. Note that these would not yet be the detailed or even preliminary designs for the end products; these will come later in the life cycle.

Figure 2.3-6 Product hierarchy: complete pass through system design processes side of the SE engine (Note: The unshaded boxes represent bottom-level phase products.)
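This recursive walk down the product tree can be sketched in a few lines of code. The example below is purely illustrative and is not part of the handbook or any NASA tool; the Product class, the design_pass function, and the stopping rule are assumptions introduced only to show the pattern of one design pass per product, repeated tier by tier until the lowest feasible level is reached.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Product:
        """One product (model) in the hierarchy, e.g., 'External Tank' at tier 1."""
        name: str
        tier: int
        children: List["Product"] = field(default_factory=list)

    def design_pass(product: Product, lowest_feasible_tier: int) -> None:
        """Apply the system design processes recursively down the product tree.

        Each visit stands in for one pass through the left side of the SE
        engine: stakeholder expectations, ConOps, flowed-down and derived
        requirements, and a conceptual design solution for this product.
        """
        print(f"Tier {product.tier}: design pass for {product.name}")
        # Stop where engineering judgment says further decomposition is not
        # feasible during this phase (the bottom-level phase products).
        if product.tier >= lowest_feasible_tier or not product.children:
            return
        for child in product.children:
            design_pass(child, lowest_feasible_tier)

    # A small fragment of the STS example from Figures 2.3-3 through 2.3-5.
    sts = Product("Space Transportation System", 0, [
        Product("External Tank", 1,
                [Product("Hydrogen Tank", 2), Product("Oxygen Tank", 2)]),
        Product("Orbiter", 1,
                [Product("Avionics System", 2,
                         [Product("Command & Data Handling System", 3)])]),
        Product("Solid Rocket Booster", 1),
    ])
    design_pass(sts, lowest_feasible_tier=3)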


At this point, enough conceptual design work has been done to ensure that at least the high-risk requirements are achievable, as will be shown in the following passes.

2.3.2.2 Example Product Realization Passes

So now that the requirements and conceptual designs for the principal Phase A products have been developed, they need to be checked to ensure they are achievable. Note that there are two types of products. The first is the "end product"—the one that will actually be delivered to the final user. The second type of product will be called a "phase product." A phase product is generated within a particular life-cycle phase and helps move the project toward delivering the final product. For example, while in Pre-Phase A, a foam-core mockup might be built to help visualize some of the concepts. Those mockups would not be the final "end product," but would be "phase products." For this Phase A example, assume some computer models will be created and simulations performed of these key concepts to show that they are achievable. These models will be the phase products for our example.

Now the focus shifts to the right side (i.e., the product realization processes) of the SE engine, which will be applied recursively, starting at the bottom of the product hierarchy and moving upward.

First Pass

Each of the phase products (i.e., our computer models) for the bottom-level product tier (the ones that are unshaded in Figure 2.3-6) is taken individually and realized—that is, it is either bought, built, coded, or reused. For our example, assume the external tank product model Aa is a standard Commercial-Off-the-Shelf (COTS) product that is bought, Aba is a model that can be reused from another project, and product Abb is a model that will have to be developed with an in-house design that is to be built. Note that these models are parts of a larger model product that will be assembled or integrated on a subsequent run through the SE engine. That is, to realize the model for product Ab of the external tank, the models for products Aba and Abb must first be implemented and then later integrated together. This pass of the SE engine is the realizing part. Likewise, each of the unshaded bottom-level model products is realized in this first pass. The models will help us understand and plan the method to implement the final end product and will ensure the feasibility of the implemented method.

Next, each of the realized models (phase products) is used to verify that the end product would likely meet the requirements as defined in the Technical Requirements Definition Process during the system design pass for this product. This shows that the product would likely meet the "shall" statements that were allocated, derived, or generated for it by method of test, analysis, inspection, or demonstration—that it was "built right." Verification is performed for each of the unshaded bottom-level model products. Note that during this Phase A pass, this process is not the formal verification of the final end product. However, using analysis, simulation, models, or other means shows that the requirements are good (verifiable) and that the concepts will most likely satisfy them. This also allows draft verification procedures for key areas to be developed. What can be formally verified, however, is that the phase product (the model) meets the requirements for the model.
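To make the idea of checking "shall" statements concrete, the sketch below records a draft verification matrix: each allocated requirement is paired with the method (test, analysis, inspection, or demonstration) judged most effective for showing compliance. This is an illustrative example only—the requirement IDs, wording, and method assignments are invented and are not drawn from the handbook.

    from enum import Enum

    class Method(Enum):
        TEST = "test"
        ANALYSIS = "analysis"
        INSPECTION = "inspection"
        DEMONSTRATION = "demonstration"

    # Draft verification matrix: one "shall" statement per row, with the method
    # that will be used to show it is satisfied. All entries are hypothetical.
    verification_matrix = [
        ("ET-101", "The tank shall store liquid hydrogen at cryogenic temperature.", Method.TEST),
        ("ET-102", "The outer structure shall be aluminum coated with foam.", Method.INSPECTION),
        ("ET-103", "Propellant boil-off shall not exceed the allocated budget.", Method.ANALYSIS),
    ]

    for req_id, shall_statement, method in verification_matrix:
        print(f"{req_id}: verify by {method.value} -> {shall_statement}")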
After the phase products (models) have been verified and used for planning the end product verification, the models are then used for validation. That is, additional tests, analyses, inspections, or demonstrations are conducted to ensure that the proposed conceptual designs will likely meet the expectations of the stakeholders for this phase product and for the end product. This will track back to the ConOps that was mutually developed with the stakeholders during the Stakeholder Expectations Definition Process of the system design pass for this product. This will help ensure that the project has "built the right" product at this level.

After verification and validation of the phase products (models) and their use in planning the verification and validation of the end product, it is time to prepare the model for transition to the next level up.


Depending on complexity, where the model will be transitioned, security requirements, etc., transition may involve crating and shipment, transmitting over a network, or hand carrying over to the next lab. Whatever is appropriate, each model for the bottom-level products is prepared and handed to the next level up for further integration.

Second Pass

Now that all the models (phase products) for the bottom-level end products are realized, verified, validated, and transitioned, it is time to start integrating them into the next higher level product. For example, for the external tank, the realized tier 4 models for products Aba and Abb are integrated to form the model for the tier 3 product Ab. Note that the Product Implementation Process occurs only at the bottommost products. All subsequent passes of the SE engine employ the Product Integration Process, since already realized products are integrated to form the new, higher level products. Integrating the lower tier phase products results in the next-higher-tier phase product. This integration process can also be used for planning the integration of the final end products.

After the new integrated phase product (model) has been formed (tier 3 product Ab, for example), it must now be proven that it meets its requirements. These will be the allocated, derived, or generated requirements developed during the Technical Requirements Definition Process during the system design pass for the model of this integrated product. This ensures that the integrated product was built (assembled) right. Note that just verifying the component parts (i.e., the individual models) that were used in the integration is not sufficient to assume that the integrated product will work right. There are many sources of problems that could occur—incomplete requirements at the interfaces, wrong assumptions during design, etc. The only sure way of knowing whether an integrated product is good is to perform verification and validation at each stage. The knowledge gained from verifying this integrated phase product can also be used for planning the verification of the final end products.

Likewise, after the integrated phase product is verified, it needs to be validated to show that it meets the expectations as documented in the ConOps for the model of the product at this level. Even though the component parts making up the integrated product will have been validated at this point, the only way to know that the project has built the "right" integrated product is to perform validation on the integrated product itself. Again, this information will help in the planning for the validation of the end products.

The model for the integrated phase product at this level (tier 3 product Ab, for example) is now ready to be transitioned to the next higher level (tier 2 for the example). As with the products in the first pass, the integrated phase product is prepared according to its needs/requirements and shipped or handed over. In the example, the model for the external tank tier 3 integrated product Ab is transitioned to the owners of the model for the tier 2 product A. This effort with the phase products will be useful in planning for the transition of the end products.

Passes 3 Through n

In a similar manner as the second pass, the tier 3 models for the products are integrated together, realized, verified, validated, and transitioned to the next higher tier.
For the example, the realized model for external tank tier 3 integrated phase product Ab is integrated with the model for tier 3 realized phase product Aa to form the tier 2 phase product A. Note that tier 3 product Aa is a bottom-tier product that has yet to go through the integration process. It may also have been realized some time ago and has been waiting for the Ab product line to become realized. Part of its transition might have been to place it in secure storage until the Ab product line became available. Or it could be that Aa was the long-lead item and product Ab had been completed some time ago and was waiting for the Aa purchase to arrive before they could be integrated together.
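These bottom-up passes amount to a post-order walk of the product tree: bottom-level products are bought, built, coded, or reused, and every higher level product is formed by integrating its already realized children, with verification and validation at each level before transition. The sketch below is only an illustration of that pattern using the Aa/Ab/Aba/Abb example from the text; the tree representation, function name, and printed text are assumptions, not a NASA tool.

    # Product tree as (name, [children]); a leaf has an empty child list.
    external_tank = ("A (External Tank model)", [
        ("Aa", []),
        ("Ab", [("Aba", []), ("Abb", [])]),
    ])

    def realize(product):
        """Realize a product bottom-up, mirroring the right side of the SE engine."""
        name, children = product
        if not children:
            result = f"implemented {name}"  # bought, built, coded, or reused
        else:
            parts = [realize(child) for child in children]
            result = f"integrated {name} from [{', '.join(parts)}]"
        # Verification (against requirements) and validation (against the ConOps)
        # happen at every level before the product is transitioned upward.
        print(f"verify, validate, and transition: {name}")
        return result

    print(realize(external_tank))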


The length of a branch of the product tree does not necessarily translate to a corresponding length of time. This is why good planning in the first part of a project is so critical.

Final Pass

At some point, all the models for the tier 1 phase products will each have been used to ensure the system requirements and concepts developed during this Phase A cycle can be implemented, integrated, verified, validated, and transitioned. The elements are now defined as the external tank, the orbiter, and the solid rocket boosters. One final pass through the SE engine will show that they will likely be successfully implemented, integrated, verified, and validated. The final versions of these products—in the form of the baselined system requirements, ConOps, and conceptual functional and physical designs—provide inputs into the next life-cycle phase (B), where they will be further matured. In later phases, the products will actually be built into physical form. At this stage of the project, the key characteristics of each product are passed downstream in key SE documentation, as noted.

2.3.2.3 Example Use of the SE Engine in Phases B Through D

Phase B begins the preliminary design of the final end product. The recursive passes through the SE engine are repeated in a similar manner to that discussed in the detailed Phase A example. In this phase, the phase product might be a prototype of the product(s). Prototypes could be developed and then put through the planned verification and validation processes to ensure the design will likely meet all the requirements and expectations prior to the build of the final flight units. Any mistakes found on prototypes are much easier and less costly to correct than if not found until the flight units are built and undergoing the certification process.

Whereas the previous phases dealt with the final product in the form of analysis, concepts, or prototypes, Phases C and D work with the final end product itself. During Phase C, we recursively use the left side of the SE engine to develop the final design. In Phase D, we recursively use the right side of the SE engine to realize the final product and conduct the formal verification and validation of the final product. As we come out of the last pass of the SE engine in Phase D, we have the final, fully realized end product, the STS, ready to be delivered for launch.

2.3.2.4 Example Use of the SE Engine in Phases E and F

Even in Phase E (Operations and Sustainment) and Phase F (Closeout) of the life cycle, the technical management processes in the SE engine are still being used. During the operations phase of a project, a number of activities are still going on. In addition to the day-to-day use of the product, there is a need to monitor or manage various aspects of the system. This is where the key Technical Performance Measures (TPMs) that were defined in the early stages of development continue to play a part. (TPMs are described in Subsection 6.7.2.) These are great measures to monitor to ensure the product continues to perform as designed and expected. Configurations are still under control, still executing the Configuration Management Process. Decisions are still being made using the Decision Analysis Process. Indeed, all of the technical management processes still apply.
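As one small illustration of what tracking a TPM during operations might look like, the sketch below compares an observed value against its limit and flags an out-of-limit condition for follow-up. It is purely illustrative—the TPM names, limits, and observed values are invented, and no particular NASA tool or telemetry format is implied.

    def check_tpm(name: str, observed: float, limit: float,
                  higher_is_worse: bool = True) -> bool:
        """Return True if the technical performance measure is within its limit."""
        within_limit = observed <= limit if higher_is_worse else observed >= limit
        status = "OK" if within_limit else "OUT OF LIMIT - feed into Decision Analysis"
        print(f"TPM {name}: observed {observed}, limit {limit} -> {status}")
        return within_limit

    # Hypothetical operational TPMs monitored during Phase E.
    check_tpm("battery depth of discharge (%)", observed=32.0, limit=40.0)
    check_tpm("downlink link margin (dB)", observed=2.5, limit=3.0, higher_is_worse=False)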
For this discussion, the term "systems management" will be used for this aspect of operations. In addition to systems management and systems operation, there may also be a need for periodic refurbishment, repairing broken parts, cleaning, sparing, logistics, or other activities. Although other terms are used, for the purposes of this discussion the term "sustaining engineering" will be used for these activities. Again, all of the technical management processes still apply to these activities.

Figure 2.3-7 Model of typical activities during operational phase (Phase E) of a product


Figure 2.3-7 represents these three activities—sustaining engineering, systems management, and operation—occurring simultaneously and continuously throughout the operational lifetime of the final product. Some portions of the SE processes need to continue even after the system becomes nonoperational to handle retirement, decommissioning, and disposal. This is consistent with the basic SE principle of handling the full system life cycle from "cradle to grave."

However, if at any point in this phase a new product, a change that affects the design or certification of a product, or an upgrade to an existing product is needed, the development processes of the SE engine are reentered at the top. That is, the first thing that is done for an upgrade is to determine who the stakeholders are and what they expect. The entire SE engine is used just as for a newly developed product. This might be pictorially portrayed as in Figure 2.3-8. Note that although the SE engine is shown only once in the figure, it is used recursively down through the product hierarchy for upgraded products, just as described in our detailed example for the initial product.

Figure 2.3-8 New products or upgrades reentering the SE engine

2.4 Distinctions Between Product Verification and Product Validation

From a process perspective, the Product Verification and Product Validation Processes may be similar in nature, but the objectives are fundamentally different. Verification of a product shows proof of compliance with requirements—that the product can meet each "shall" statement as proven through performance of a test, analysis, inspection, or demonstration. Validation of a product shows that the product accomplishes the intended purpose in the intended environment—that it meets the expectations of the customer and other stakeholders as shown through performance of a test, analysis, inspection, or demonstration.

Verification testing relates back to the approved requirements set and can be performed at different stages in the product life cycle. The approved specifications, drawings, parts lists, and other configuration documentation establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, later modifications could be costly or cause major performance problems.

Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on end products for the purpose of determining the effectiveness and suitability of the product for use in mission operations by typical users. The selection of the verification or validation method is based on engineering judgment as to which is the most effective way to reliably show the product's conformance to requirements or that it will operate as intended and described in the ConOps.
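One simple way to keep the distinction straight is that every verification event traces back to a requirement ("shall" statement), while every validation event traces back to the ConOps and stakeholder expectations. The sketch below is only an illustration of that bookkeeping—the record fields and example entries are assumptions, not a NASA data format.

    from dataclasses import dataclass

    @dataclass
    class VerificationEvent:
        requirement_id: str   # traces to an approved "shall" statement
        method: str           # test, analysis, inspection, or demonstration
        passed: bool          # "built right": complies with the requirement

    @dataclass
    class ValidationEvent:
        conops_scenario: str  # traces to the ConOps / stakeholder expectations
        method: str           # also test, analysis, inspection, or demonstration
        passed: bool          # "built the right product": works as intended in use

    events = [
        VerificationEvent("SYS-042", "test", passed=True),
        ValidationEvent("Nominal ascent and payload deployment", "demonstration", passed=True),
    ]
    for event in events:
        print(event)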


2.5 Cost Aspect of Systems Engineering

The objective of systems engineering is to see that the system is designed, built, and operated so that it accomplishes its purpose safely in the most cost-effective way possible, considering performance, cost, schedule, and risk.

A cost-effective and safe system must provide a particular kind of balance between effectiveness and cost: the system must provide the most effectiveness for the resources expended or, equivalently, it must be the least expensive for the effectiveness it provides. This condition is a weak one because there are usually many designs that meet the condition. Think of each possible design as a point in the tradeoff space between effectiveness and cost. A graph plotting the maximum achievable effectiveness of designs available with current technology as a function of cost would, in general, yield a curved line such as the one shown in Figure 2.5-1. (In the figure, all the dimensions of effectiveness are represented by the ordinate (y axis) and all the dimensions of cost by the abscissa (x axis).) In other words, the curved line represents the envelope of the currently available technology in terms of cost-effectiveness.

Points above the line cannot be achieved with currently available technology; that is, they do not represent feasible designs. (Some of those points may be feasible in the future when further technological advances have been made.) Points inside the envelope are feasible, but are said to be dominated by designs whose combined cost and effectiveness lie on the envelope line. Designs represented by points on the envelope line are called cost-effective (or efficient or nondominated) solutions.

Figure 2.5-1 The enveloping surface of nondominated designs

System Cost, Effectiveness, and Cost-Effectiveness
• Cost: The cost of a system is the value of the resources needed to design, build, operate, and dispose of it. Because resources come in many forms—work performed by NASA personnel and contractors; materials; energy; and the use of facilities and equipment such as wind tunnels, factories, offices, and computers—it is convenient to express these values in common terms by using monetary units (such as dollars of a specified year).
• Effectiveness: The effectiveness of a system is a quantitative measure of the degree to which the system's purpose is achieved. Effectiveness measures are usually very dependent upon system performance. For example, launch vehicle effectiveness depends on the probability of successfully injecting a payload onto a usable trajectory. The associated system performance attributes include the mass that can be put into a specified nominal orbit, the trade between injected mass and launch velocity, and launch availability.
• Cost-Effectiveness: The cost-effectiveness of a system combines both the cost and the effectiveness of the system in the context of its objectives. While it may be necessary to measure either or both of those in terms of several numbers, it is sometimes possible to combine the components into a meaningful, single-valued objective function for use in design optimization.
Even without knowing how to trade effectiveness for cost, designs that have lower cost and higher effectiveness are always preferred.

Design trade studies, an important part of the systems engineering process, often attempt to find designs that provide a better combination of the various dimensions of cost and effectiveness. When the starting point for a design trade study is inside the envelope, there are alternatives that either reduce costs without change to the overall effectiveness or improve effectiveness without a cost increase (i.e., moving closer to the envelope curve). Then, the systems engineer's decision is easy. Other than in the sizing of subsystems, such "win-win" design trades are uncommon, but by no means rare. When the alternatives in a design trade study require trading cost for effectiveness, or even one dimension of effectiveness for another at the same cost (i.e., moving parallel to the envelope curve), the decisions become harder.
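The notion of a nondominated (cost-effective) design can be made concrete with a few lines of code: a design is dominated if some other candidate costs no more, is at least as effective, and is strictly better on one of the two dimensions. The sketch below is illustrative only; the candidate designs are made-up points in the trade space.

    # Each candidate design is (name, cost, effectiveness); units are notional.
    designs = [("D1", 100.0, 0.60), ("D2", 120.0, 0.82), ("D3", 150.0, 0.80),
               ("D4", 150.0, 0.90), ("D5", 200.0, 0.91)]

    def is_dominated(candidate, others):
        """True if some other design costs no more, is at least as effective,
        and is strictly better in cost or effectiveness."""
        _, cost, eff = candidate
        for other in others:
            if other is candidate:
                continue
            _, o_cost, o_eff = other
            if o_cost <= cost and o_eff >= eff and (o_cost < cost or o_eff > eff):
                return True
        return False

    nondominated = [d for d in designs if not is_dominated(d, designs)]
    print("Cost-effective (nondominated) designs:", [name for name, _, _ in nondominated])
    # D3 is dominated by D4 (same cost, higher effectiveness); the others lie on the envelope.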


The process of finding the most cost-effective design is further complicated by uncertainty, which is shown in Figure 2.5-2. Exactly what outcomes will be realized by a particular system design cannot be known in advance with certainty, so the projected cost and effectiveness of a design are better described by a probability distribution than by a point. This distribution can be thought of as a cloud that is thickest at the most likely value and thinnest farthest away from the most likely point, as is shown for design concept A in the figure. Distributions resulting from designs that have little uncertainty are dense and highly compact, as is shown for concept B. Distributions associated with risky designs may have significant probabilities of producing highly undesirable outcomes, as is suggested by the presence of an additional low-effectiveness/high-cost cloud for concept C. (Of course, the envelope of such clouds cannot be a sharp line such as is shown in the figure, but must itself be rather fuzzy. The line can now be thought of as representing the envelope at some fixed confidence level, that is, a specific, numerical probability of achieving that effectiveness.)

Figure 2.5-2 Estimates of outcomes to be obtained from several design concepts including uncertainty (Note: A, B, and C are design concepts with different risk patterns.)

The Systems Engineer's Dilemma
At each cost-effective solution:
• To reduce cost at constant risk, performance must be reduced.
• To reduce risk at constant cost, performance must be reduced.
• To reduce cost at constant performance, higher risks must be accepted.
• To reduce risk at constant performance, higher costs must be accepted.
In this context, time in the schedule is often a critical resource, so that schedule behaves like a kind of cost.

Both effectiveness and cost may require several descriptors. Even the Echo balloons (circa 1960), in addition to their primary mission as communications satellites, obtained scientific data on the electromagnetic environment and atmospheric drag. Furthermore, Echo was the first satellite visible to the naked eye, an unquantifiable—but not unrecognized at the beginning of the space race—aspect of its effectiveness. Sputnik (circa 1957), for example, drew much of its effectiveness from the fact that it was a "first." Costs, the expenditure of limited resources, may be measured in the several dimensions of funding, personnel, use of facilities, and so on. Schedule may appear as an attribute of effectiveness or cost, or as a constraint. A mission to Mars that misses its launch window has to wait about two years for another opportunity—a clear schedule constraint.

In some contexts, it is appropriate to seek the most effectiveness possible within a fixed budget and with a fixed risk; in other contexts, it is more appropriate to seek the least cost possible with specified effectiveness and risk. In these cases, there is the question of what level of effectiveness to specify or what level of costs to fix. In practice, these may be mandated in the form of performance or cost requirements. It then becomes appropriate to ask whether a slight relaxation of requirements could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system.

The technical team must choose among designs that differ in terms of numerous attributes.
A variety of methods have been developed that can be used to help uncover preferences between attributes and to quantify subjective assessments of relative value. When this can be done, trades between attributes can be assessed quantitatively. Often, however, the attributes seem to be truly incommensurate: decisions need to be made in spite of this multiplicity.
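As a minimal example of quantifying such a trade (one common approach, not a method prescribed by this handbook), each attribute can be scored on a common normalized scale and combined with weights that reflect stakeholder preferences. The attributes, weights, and scores below are invented for illustration.

    # Normalized scores (0 = worst, 1 = best) for two hypothetical design options.
    attributes = ["effectiveness", "cost", "schedule", "risk"]
    weights = {"effectiveness": 0.4, "cost": 0.3, "schedule": 0.1, "risk": 0.2}
    options = {
        "Option A": {"effectiveness": 0.9, "cost": 0.5, "schedule": 0.7, "risk": 0.6},
        "Option B": {"effectiveness": 0.7, "cost": 0.8, "schedule": 0.6, "risk": 0.8},
    }

    for name, scores in options.items():
        total = sum(weights[attr] * scores[attr] for attr in attributes)
        print(f"{name}: weighted score = {total:.2f}")
    # A weighted sum makes the preference structure explicit, but it only helps
    # when the attributes can credibly be placed on a common value scale.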


3.0 NASA Program/Project Life Cycle

One of the fundamental concepts used within NASA for the management of major systems is the program/project life cycle, which consists of a categorization of everything that should be done to accomplish a program or project into distinct phases, separated by Key Decision Points (KDPs). KDPs are the events at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (or to the next KDP). Phase boundaries are defined so that they provide more or less natural points for Go or No-Go decisions. Decisions to proceed may be qualified by liens that must be removed within an agreed-to time period. A program or project that fails to pass a KDP may be allowed to "go back to the drawing board" to try again later—or it may be terminated.

All systems start with the recognition of a need or the discovery of an opportunity and proceed through various stages of development to a final disposition. While the most dramatic impacts of the analysis and optimization activities associated with systems engineering are obtained in the early stages, decisions that affect millions of dollars of value or cost continue to be amenable to the systems approach even as the end of the system lifetime approaches.

Decomposing the program/project life cycle into phases organizes the entire process into more manageable pieces. The program/project life cycle should provide managers with incremental visibility into the progress being made at points in time that fit with the management and budgetary environments.

NPR 7120.5, NASA Space Flight Program and Project Management Requirements, defines the major NASA life-cycle phases as Formulation and Implementation. For Flight Systems and Ground Support (FS&GS) projects, the NASA life-cycle phases of Formulation and Implementation divide into the following seven incremental pieces. The phases of the project life cycle are:
• Pre-Phase A: Concept Studies (i.e., identify feasible alternatives)
• Phase A: Concept and Technology Development (i.e., define the project and identify and initiate necessary technology)
• Phase B: Preliminary Design and Technology Completion (i.e., establish a preliminary design and develop necessary technology)
• Phase C: Final Design and Fabrication (i.e., complete the system design and build/code the components)
• Phase D: System Assembly, Integration and Test, Launch (i.e., integrate components, verify the system, prepare for operations, and launch)
• Phase E: Operations and Sustainment (i.e., operate and maintain the system)
• Phase F: Closeout (i.e., dispose of systems and analyze data)

Figure 3.0-1 (NASA program life cycle) and Figure 3.0-2 (NASA project life cycle) identify the KDPs and reviews that characterize the phases. Sections 3.1 and 3.2 contain narrative descriptions of the purposes, major activities, products, and KDPs of the NASA program life-cycle phases.
Sections 3.3 to 3.9 contain narrative descriptions of the purposes, major activities, products, and KDPs of the NASA project life-cycle phases. Section 3.10 describes the NASA budget cycle within which program/project managers and systems engineers must operate.

3.1 Program Formulation

The program Formulation phase establishes a cost-effective program that is demonstrably capable of meeting Agency and mission directorate goals and objectives. The program Formulation Authorization Document (FAD) authorizes a Program Manager (PM) to initiate the planning of a new program and to perform the analyses required to formulate a sound program plan. Major reviews leading to approval at KDP I are the P/SRR, P/SDR, PAR, and governing Program Management Council (PMC) review. (See the full list of reviews in the program and project life-cycle figures, Figures 3.0-1 and 3.0-2.) A summary of the required gate products for the program Formulation phase can be found in NPR 7120.5.


Figure 3.0-1 NASA program life cycle

Figure 3.0-2 NASA project life cycle

Review acronyms used in the life-cycle figures: CDR = Critical Design Review; CERR = Critical Events Readiness Review; DR = Decommissioning Review; FRR = Flight Readiness Review; KDP = Key Decision Point; MCR = Mission Concept Review; MDR = Mission Definition Review; ORR = Operational Readiness Review; PDR = Preliminary Design Review; PFAR = Post-Flight Assessment Review; PIR = Program Implementation Review; PLAR = Post-Launch Assessment Review; PRR = Production Readiness Review; P/SDR = Program/System Definition Review; P/SRR = Program/System Requirements Review; PSR = Program Status Review; SAR = System Acceptance Review; SDR = System Definition Review; SIR = System Integration Review; SRR = System Requirements Review; TRR = Test Readiness Review.


Formulation for all program types is the same, involving one or more program reviews followed by KDP I, where a decision is made approving a program to begin implementation. Typically, there is no incentive to move a program into implementation until its first project is ready for implementation.

Program Formulation
Purpose: To establish a cost-effective program that is demonstrably capable of meeting Agency and mission directorate goals and objectives
Typical Activities and Their Products:
• Develop program requirements and allocate them to initial projects
• Define and approve program acquisition strategies
• Develop interfaces to other programs
• Start development of technologies that cut across multiple projects within the program
• Derive initial cost estimates and approve a program budget
• Perform required program Formulation technical activities defined in NPR 7120.5
• Satisfy program Formulation reviews' entrance/success criteria detailed in NPR 7123.1
Reviews:
• P/SRR
• P/SDR

3.2 Program Implementation

During the program Implementation phase, the PM works with the Mission Directorate Associate Administrator (MDAA) and the constituent project managers to execute the program plan cost-effectively. Program reviews ensure that the program continues to contribute to Agency and mission directorate goals and objectives within funding constraints. A summary of the required gate products for the program Implementation phase can be found in NPR 7120.5. The program life cycle has two different implementation paths, depending on program type. Each implementation path has different types of major reviews. For uncoupled and loosely coupled programs, the Implementation phase only requires PSRs and PIRs to assess the program's performance and make a recommendation on its authorization at KDPs approximately every two years. Single-project and tightly coupled programs are more complex. For single-project programs, the Implementation phase program reviews shown in Figure 3.0-1 are synonymous (not duplicative) with the project reviews in the project life cycle (see Figure 3.0-2) through Phase D. Once in operations, these programs usually have biennial KDPs preceded by attendant PSRs/PIRs. Tightly coupled programs during implementation have program reviews tied to the project reviews to ensure the proper integration of projects into the larger system.
Once in operations, tightly coupled programs also have biennial PSRs/PIRs/KDPs to assess the program's performance and authorize its continuation.

Program Implementation
Purpose: To execute the program and constituent projects and ensure that the program continues to contribute to Agency goals and objectives within funding constraints
Typical Activities and Their Products:
• Initiate projects through direct assignment or competitive process (e.g., Request for Proposal (RFP), Announcement of Opportunity (AO))
• Monitor each project's formulation, approval, implementation, integration, operation, and ultimate decommissioning
• Adjust the program as resources and requirements change
• Perform required program Implementation technical activities from NPR 7120.5
• Satisfy program Implementation reviews' entrance/success criteria from NPR 7123.1
Reviews:
• PSR/PIR (uncoupled and loosely coupled programs only)
• Reviews synonymous (not duplicative) with the project reviews in the project life cycle (see Figure 3.0-2) through Phase D (single-project and tightly coupled programs only)


3.3 Project Pre-Phase A: Concept Studies

The purpose of this phase, which is usually performed more or less continually by concept study groups, is to devise various feasible concepts from which new projects (programs) can be selected. Typically, this activity consists of loosely structured examinations of new ideas, usually without central control and mostly oriented toward small studies. Its major product is a list of suggested projects, based on the identification of needs and the discovery of opportunities that are potentially consistent with NASA's mission, capabilities, priorities, and resources.

Advanced studies may extend for several years and may be a sequence of papers that are only loosely connected. These studies typically focus on establishing mission goals and formulating top-level system requirements and ConOps. Conceptual designs are often offered to demonstrate feasibility and support programmatic estimates. The emphasis is on establishing feasibility and desirability rather than optimality. Analyses and designs are accordingly limited in both depth and number of options.

Pre-Phase A: Concept Studies
Purpose: To produce a broad spectrum of ideas and alternatives for missions from which new programs/projects can be selected
Typical Activities and Products (Note: AO projects will have defined the deliverable products.):
• Identify missions and architecture consistent with charter
• Identify and involve users and other stakeholders
• Identify and perform tradeoffs and analyses
• Identify requirements, which include: ▶ mission, ▶ science, and ▶ top-level system
• Define measures of effectiveness and measures of performance
• Identify top-level technical performance measures
• Perform preliminary evaluations of possible missions
• Prepare program/project proposals, which may include: ▶ mission justification and objectives; ▶ possible ConOps; ▶ high-level WBSs; ▶ cost, schedule, and risk estimates; and ▶ technology assessment and maturation strategies
• Prepare preliminary mission concept report
• Perform required Pre-Phase A technical activities from NPR 7120.5
• Satisfy MCR entrance/success criteria from NPR 7123.1
Reviews:
• MCR
• Informal proposal review

3.4 Project Phase A: Concept and Technology Development

During Phase A, activities are performed to fully develop a baseline mission concept and begin or assume responsibility for the development of needed technologies. This work, along with interactions with stakeholders, helps establish the mission concept and the program requirements on the project.

In Phase A, a team—often associated with a program or informal project office—readdresses the mission concept to ensure that the project justification and practicality are sufficient to warrant a place in NASA's budget. The team's effort focuses on analyzing mission requirements and establishing a mission architecture. Activities become formal, and the emphasis shifts toward establishing optimality rather than feasibility. The effort addresses more depth and considers many alternatives. Goals and objectives are solidified, and the project develops more definition in the system requirements, top-level system architecture, and ConOps. Conceptual designs are developed and exhibit more engineering detail than in advanced studies.
Technical risks are identified in more detail, and technology development needs become focused.

In Phase A, the effort focuses on allocating functions to particular items of hardware, software, personnel, etc. System functional and performance requirements, along with architectures and designs, become firm as system tradeoffs and subsystem tradeoffs iterate back and forth in the effort to seek out more cost-effective designs.


Phase A: Concept and Technology Development
Purpose: To determine the feasibility and desirability of a suggested new major system and establish an initial baseline compatibility with NASA's strategic plans
Typical Activities and Their Products:
• Prepare and initiate a project plan
• Develop top-level requirements and constraints
• Define and document system requirements (hardware and software)
• Allocate preliminary system requirements to the next lower level
• Define system software functionality description and requirements
• Define and document internal and external interface requirements
• Identify integrated logistics support requirements
• Develop corresponding evaluation criteria and metrics
• Document the ConOps
• Baseline the mission concept report
• Demonstrate that credible, feasible design(s) exist
• Perform and archive trade studies
• Develop mission architecture
• Initiate environmental evaluation/National Environmental Policy Act process
• Develop initial orbital debris assessment (NASA Safety Standard 1740.14)
• Establish technical resource estimates
• Define life-cycle cost estimates and develop system-level cost-effectiveness model
• Define the WBS
• Develop SOWs
• Acquire systems engineering tools and models
• Baseline the SEMP
• Develop system risk analyses
• Prepare and initiate a risk management plan
• Prepare and initiate a configuration management plan
• Prepare and initiate a data management plan
• Prepare engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Prepare a safety and mission assurance plan
• Prepare a software development or management plan (see NPR 7150.2)
• Prepare a technology development plan and initiate advanced technology development
• Establish human rating plan
• Define verification and validation approach and document it in verification and validation plans
• Perform required Phase A technical activities from NPR 7120.5
• Satisfy Phase A reviews' entrance/success criteria from NPR 7123.1
Reviews:
• SRR
• MDR (robotic mission only)
• SDR (human space flight only)


(Trade studies should precede—rather than follow—system design decisions.) Major products to this point include an accepted functional baseline for the system and its major end items. The effort also produces various engineering and management plans to prepare for managing the project's downstream processes, such as verification and operations, and for implementing engineering specialty programs.

3.5 Project Phase B: Preliminary Design and Technology Completion

During Phase B, activities are performed to establish an initial project baseline, which (according to NPR 7120.5 and NPR 7123.1) includes "a formal flow down of the project-level performance requirements to a complete set of system and subsystem design specifications for both flight and ground elements" and "corresponding preliminary designs." The technical requirements should be sufficiently detailed to establish firm schedule and cost estimates for the project. It also should be noted, especially for AO-driven projects, that Phase B is where the top-level requirements and the requirements flowed down to the next level are finalized and placed under configuration control. While the requirements should be baselined in Phase A, there are just enough changes resulting from the trade studies and analyses in late Phase A and early Phase B that changes are inevitable. However, by mid-Phase B, the top-level requirements should be finalized.

Actually, the Phase B baseline consists of a collection of evolving baselines covering technical and business aspects of the project: system (and subsystem) requirements and specifications, designs, verification and operations plans, and so on in the technical portion of the baseline, and schedules, cost projections, and management plans in the business portion. Establishment of baselines implies the implementation of configuration management procedures. (See Section 6.5.)

In Phase B, the effort shifts to establishing a functionally complete preliminary design solution (i.e., a functional baseline) that meets mission goals and objectives. Trade studies continue.
Phase B: Preliminary Design and Technology Completion
Purpose: To define the project in enough detail to establish an initial baseline capable of meeting mission needs
Typical Activities and Their Products:
• Baseline the project plan
• Review and update documents developed and baselined in Phase A
• Develop science/exploration operations plan based on matured ConOps
• Update engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Update technology maturation planning
• Report technology development results
• Update risk management plan
• Update cost and schedule data
• Finalize and approve top-level requirements and flowdown to the next level of requirements
• Establish and baseline design-to specifications (hardware and software) and drawings, verification and validation plans, and interface documents at lower levels
• Perform and archive trade studies' results
• Perform design analyses and report results
• Conduct engineering development tests and report results
• Select a baseline design solution
• Baseline a preliminary design report
• Define internal and external interface design solutions (e.g., interface control documents)
• Define system operations as well as PI/contract proposal management, review, and access and contingency planning
• Develop appropriate level safety data package
• Develop preliminary orbital debris assessment
• Perform required Phase B technical activities from NPR 7120.5
• Satisfy Phase B reviews' entrance/success criteria from NPR 7123.1
Reviews:
• PDR
• Safety review


Interfaces among the major end items are defined. Engineering test items may be developed and used to derive data for further design work, and project risks are reduced by successful technology developments and demonstrations. Phase B culminates in a series of PDRs, containing the system-level PDR and PDRs for lower level end items as appropriate. The PDRs reflect the successive refinement of requirements into designs. (See the doctrine of successive refinement in Subsection 4.4.1.2 and Figure 4.4-2.) Design issues uncovered in the PDRs should be resolved so that final design can begin with unambiguous design-to specifications. From this point on, almost all changes to the baseline are expected to represent successive refinements, not fundamental changes. Prior to baselining, the system architecture, preliminary design, and ConOps must have been validated by enough technical analysis and design work to establish a credible, feasible design in greater detail than was sufficient for Phase A.

3.6 Project Phase C: Final Design and Fabrication

During Phase C, activities are performed to establish a complete design (allocated baseline), fabricate or produce hardware, and code software in preparation for integration. Trade studies continue. Engineering test units more closely resembling actual hardware are built and tested to establish confidence that the design will function in the expected environments. Engineering specialty analysis results are integrated into the design, and the manufacturing process and controls are defined and validated. All the planning initiated back in Phase A for the testing and operational equipment, processes and analysis, integration of the engineering specialty analysis, and manufacturing processes and controls is implemented. Configuration management continues to track and control design changes as detailed interfaces are defined. At each step in the successive refinement of the final design, corresponding integration and verification activities are planned in greater detail. During this phase, technical parameters, schedules, and budgets are closely tracked to ensure that undesirable trends (such as an unexpected growth in spacecraft mass or an increase in its cost) are recognized early enough to take corrective action. These activities focus on preparing for the CDR, the PRR (if required), and the SIR.

Phase C contains a series of CDRs, including the system-level CDR and CDRs corresponding to the different levels of the system hierarchy. A CDR for each end item should be held prior to the start of fabrication/production for hardware and prior to the start of coding of deliverable software products. Typically, the sequence of CDRs reflects the integration process that will occur in the next phase—that is, from lower level CDRs to the system-level CDR. Projects, however, should tailor the sequencing of the reviews to meet the needs of the project. If there is a production run of products, a PRR will be performed to ensure that the production plans, facilities, and personnel are ready to begin production. Phase C culminates with an SIR. The final product of this phase is a product ready for integration.

3.7 Project Phase D: System Assembly, Integration and Test, Launch

During Phase D, activities are performed to assemble, integrate, test, and launch the system. These activities focus on preparing for the FRR.
Activities include assembly, integration, verification, and validation of the system, including testing the flight system to expected environments within margin. Other activities include the initial training of operating personnel and implementation of the logistics and spares planning. For flight projects, the focus of activities then shifts to prelaunch integration and launch. Although all these activities are conducted in this phase of a project, the planning for them was initiated in Phase A. That planning cannot be delayed until Phase D begins, because by then the design of the project would be too advanced to incorporate requirements for testing and operations. Phase D concludes with a system that has been shown to be capable of accomplishing the purpose for which it was created.


Phase C: Final Design and Fabrication
Purpose: To complete the detailed design of the system (and its associated subsystems, including its operations systems), fabricate hardware, and code software
Typical Activities and Their Products:
• Update documents developed and baselined in Phase B
• Update interface documents
• Update mission operations plan based on matured ConOps
• Update engineering specialty plans (e.g., contamination control plan, electromagnetic interference/electromagnetic compatibility control plan, reliability plan, quality control plan, parts management plan)
• Augment baselined documents to reflect the growing maturity of the system, including the system architecture, WBS, and project plans
• Update and baseline production plans
• Refine integration procedures
• Baseline logistics support plan
• Add remaining lower level design specifications to the system architecture
• Complete manufacturing and assembly plans and procedures
• Establish and baseline build-to specifications (hardware and software) and drawings, verification and validation plans, and interface documents at all levels
• Baseline detailed design report
• Maintain requirements documents
• Maintain verification and validation plans
• Monitor project progress against project plans
• Develop verification and validation procedures
• Develop hardware and software detailed designs
• Develop the system integration plan and the system operation plan
• Develop the end-to-end information system design
• Develop spares planning
• Develop command and telemetry list
• Prepare launch site checkout and operations plans
• Prepare operations and activation plan
• Prepare system decommissioning/disposal plan, including human capital transition, for use in Phase F
• Finalize appropriate level safety data package
• Develop preliminary operations handbook
• Perform and archive trade studies
• Fabricate (or code) the product
• Perform testing at the component or subsystem level
• Identify opportunities for preplanned product improvement
• Baseline orbital debris assessment
• Perform required Phase C technical activities from NPR 7120.5
• Satisfy Phase C reviews' entrance/success criteria from NPR 7123.1
Reviews:
• CDR
• PRR
• SIR
• Safety review


Phase D: System Assembly, Integration and Test, Launch
Purpose: To assemble and integrate the products and create the system, meanwhile developing confidence that it will be able to meet the system requirements; conduct launch and prepare for operations
Typical Activities and Their Products:
• Integrate and verify items according to the integration and verification plans, yielding verified components and (sub)systems
• Monitor project progress against project plans
• Refine verification and validation procedures at all levels
• Perform system qualification verifications
• Perform system acceptance verifications and validation(s) (e.g., end-to-end tests encompassing all elements, i.e., space element, ground system, and data processing system)
• Perform system environmental testing
• Assess and approve verification and validation results
• Resolve verification and validation discrepancies
• Archive documentation for verifications and validations performed
• Baseline verification and validation report
• Baseline "as-built" hardware and software documentation
• Update logistics support plan
• Document lessons learned
• Prepare and baseline operator's manuals
• Prepare and baseline maintenance manuals
• Approve and baseline operations handbook
• Train initial system operators and maintainers
• Train on contingency planning
• Finalize and implement spares planning
• Confirm telemetry validation and ground data processing
• Confirm system and support elements are ready for flight
• Integrate with launch vehicle(s) and launch, perform orbit insertion, etc., to achieve a deployed system
• Perform initial operational verification(s) and validation(s)
• Perform required Phase D technical activities from NPR 7120.5
• Satisfy Phase D reviews' entrance/success criteria from NPR 7123.1
Reviews:
• TRR (at all levels)
• SAR (human space flight only)
• ORR
• FRR
• System functional and physical configuration audits
• Safety review


3.8 Project Phase E: Operations and Sustainment

During Phase E, activities are performed to conduct the prime mission, meet the initially identified need, and maintain support for that need. The products of the phase are the results of the mission. This phase encompasses the evolution of the system only insofar as that evolution does not involve major changes to the system architecture. Changes of that scope constitute new "needs," and the project life cycle starts over. For large flight projects, there may be an extended period of cruise, orbit insertion, on-orbit assembly, and initial shakedown operations. Near the end of the prime mission, the project may apply for a mission extension to continue mission activities or attempt to perform additional mission objectives.

Phase E: Operations and Sustainment
Purpose: To conduct the mission and meet the initially identified need and maintain support for that need
Typical Activities and Their Products:
• Conduct launch vehicle performance assessment
• Conduct in-orbit spacecraft checkout
• Commission and activate science instruments
• Conduct the intended prime mission(s)
• Collect engineering and science data
• Train replacement operators and maintainers
• Train the flight team for future mission phases (e.g., planetary landed operations)
• Maintain and approve operations and maintenance logs
• Maintain and upgrade the system
• Address problem/failure reports
• Process and analyze mission data
• Apply for mission extensions, if warranted, and conduct mission activities if awarded
• Prepare for deactivation, disassembly, and decommissioning as planned (subject to mission extension)
• Complete post-flight evaluation reports
• Complete final mission report
• Perform required Phase E technical activities from NPR 7120.5
• Satisfy Phase E reviews' entrance/success criteria from NPR 7123.1
Reviews:
• PLAR
• CERR
• PFAR (human space flight only)
• System upgrade review
• Safety review

3.9 Project Phase F: Closeout

During Phase F, activities are performed to implement the systems decommissioning/disposal plan and to analyze any returned data and samples. The products of the phase are the results of the mission.

Phase F deals with the final closeout of the system when it has completed its mission; the time at which this occurs depends on many factors. For a flight system that returns to Earth after a short mission duration, closeout may require little more than deintegration of the hardware and its return to its owner. On flight projects of long duration, closeout may proceed according to established plans or may begin as a result of unplanned events, such as failures.
Alternatively, technological advances may make it uneconomical to continue operating the system either in its current configuration or an improved one. Refer to NPD 8010.3, Notification of Intent to Decommission or Terminate Operating Space Systems and Terminate Missions, for terminating an operating mission.

Phase F: Closeout

Purpose
To implement the systems decommissioning/disposal plan developed in Phase C and analyze any returned data and samples.

Typical Activities and Their Products
• Dispose of the system and supporting processes
• Document lessons learned
• Baseline mission final report
• Archive data
• Begin transition of human capital (if applicable)
• Perform required Phase F technical activities from NPR 7120.5
• Satisfy Phase F reviews' entrance/success criteria from NPR 7123.1

Reviews
• DR


To limit space debris, NPR 8715.6, NASA Procedural Requirements for Limiting Orbital Debris, provides guidelines for removing Earth-orbiting robotic satellites from their operational orbits at the end of their useful life. For Low Earth Orbit (LEO) missions, the satellite is usually deorbited. For small satellites, this is accomplished by allowing the orbit to slowly decay until the satellite eventually burns up in the Earth's atmosphere. Larger, more massive satellites and observatories must be designed to demise or be deorbited in a controlled manner so that they can be safely targeted for impact in a remote area of the ocean. Geostationary (GEO) satellites at 35,790 km above the Earth cannot be practically deorbited, so they are boosted to a higher orbit well beyond the crowded operational GEO orbit.

In addition to uncertainty as to when this part of the phase begins, the activities associated with safe closeout of a system may be long and complex and may affect the system design. Consequently, different options and strategies should be considered during the project's earlier phases along with the costs and risks associated with the different options.

3.10 Funding: The Budget Cycle

NASA operates with annual funding from Congress. This funding results, however, from a continuous rolling process of budget formulation, budget enactment, and finally, budget execution. NASA's Financial Management Requirements (FMR) Volume 4 provides the concepts, the goals, and an overview of NASA's budget system of resource alignment, referred to as Planning, Programming, Budgeting, and Execution (PPBE), and establishes guidance on the programming and budgeting phases of the PPBE process, which are critical to budget formulation for NASA. Volume 4 includes strategic budget planning and resources guidance, program review, budget development, budget presentation, and justification of estimates to the Office of Management and Budget (OMB) and to Congress. It also provides detailed descriptions of the roles and responsibilities for key players in each step of the process. It consolidates current legal, regulatory, and administrative policies and procedures applicable to NASA. A highly simplified representation of the typical NASA budget cycle is shown in Figure 3.10-1.

[Figure 3.10-1 Typical NASA budget cycle: planning, programming, budgeting, and execution phases, running from internal/external studies and the NASA Strategic Plan through program analysis and review, the OMB budget, the President's budget, appropriation, operating plans and reprogramming, and the Performance and Accountability Report]


NASA typically starts developing its budget each February with economic forecasts and general guidelines as identified in the most recent President's budget. By late August, NASA has completed the planning, programming, and budgeting phases of the PPBE process and prepares for submittal of a preliminary NASA budget to the OMB. A final NASA budget is submitted to the OMB in September for incorporation into the President's budget transmittal to Congress, which generally occurs in January. This proposed budget is then subjected to congressional review and approval, culminating in the passage of bills authorizing NASA to obligate funds in accordance with congressional stipulations and appropriating those funds. The congressional process generally lasts through the summer. In recent years, however, final bills have often been delayed past the start of the fiscal year on October 1. In those years, NASA has operated on continuing resolution by Congress.

With annual funding, there is an implicit funding control gate at the beginning of every fiscal year. While these gates place planning requirements on the project and can make significant replanning necessary, they are not part of an orderly systems engineering process. Rather, they constitute one of the sources of uncertainty that affect project risks, and they are essential to consider in project planning.


4.0 System Design

This chapter describes the activities in the system design processes listed in Figure 2.1-1. The chapter is separated into sections corresponding to steps 1 to 4 listed in Figure 2.1-1. The processes within each step are discussed in terms of inputs, activities, and outputs. Additional guidance is provided using examples that are relevant to NASA projects. The system design processes are four interdependent, highly iterative and recursive processes, resulting in a validated set of requirements and a validated design solution that satisfies a set of stakeholder expectations. The four system design processes are to develop stakeholder expectations, technical requirements, logical decompositions, and design solutions.

Figure 4.0-1 illustrates the recursive relationship among the four system design processes. These processes start with a study team collecting and clarifying the stakeholder expectations, including the mission objectives, constraints, design drivers, operational objectives, and criteria for defining mission success. This set of stakeholder expectations and high-level requirements is used to drive an iterative design loop where a strawman architecture/design, the concept of operations, and derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. Once consistency is achieved, analyses allow the project team to validate the design against the stakeholder expectations. A simplified validation asks the questions: Does the system work? Is the system safe and reliable? Is the system achievable within budget and schedule constraints?

[Figure 4.0-1 Interrelationships among the system design processes: stakeholder expectations (mission objectives and constraints, operational objectives, mission success criteria) and high-level requirements feed a trade studies and iterative design loop involving functional and logical decomposition, the design and product breakdown structure, the ConOps, and derived and allocated requirements (functional, performance, interface, operational, "ilities"), supported by decision analysis, until a baseline design solution is selected]
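To make the loop in Figure 4.0-1 concrete, the following minimal sketch (not from the handbook; the stand-in "analyses," names, and numbers are purely hypothetical) iterates a strawman design until the three simplified validation questions can all be answered yes, or signals that the expectations need rebaselining.

```python
# Illustrative sketch of the iterative design loop in Figure 4.0-1.
# The "analyses" here are trivial stand-ins for real engineering work.

def validate(design, expectations):
    """Simplified validation: Does it work? Is it safe and reliable? Is it affordable?"""
    return (design["performance"] >= expectations["required_performance"]
            and design["reliability"] >= expectations["required_reliability"]
            and design["cost"] <= expectations["cost_cap"])

def design_loop(expectations):
    design = {"performance": 0.8, "reliability": 0.90, "cost": 1.2}  # strawman design
    for iteration in range(10):
        # Refine architecture, ConOps, and derived requirements until consistent,
        # then validate the result against the stakeholder expectations.
        if validate(design, expectations):
            return design, iteration  # baseline selected
        # Otherwise change the design (or rebaseline expectations) and iterate.
        design = {"performance": design["performance"] + 0.05,
                  "reliability": min(design["reliability"] + 0.02, 1.0),
                  "cost": design["cost"] - 0.05}
    raise RuntimeError("No acceptable design found; rebaseline stakeholder expectations")

baseline, n = design_loop({"required_performance": 0.95,
                           "required_reliability": 0.95,
                           "cost_cap": 1.0})
print(f"Baseline selected after {n} iterations: {baseline}")
```

In practice each check is a substantial engineering analysis, and each "design change" is the outcome of trade studies and design decisions rather than a numeric increment.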


If the answer to any of these questions is no, then changes to the design or stakeholder expectations will be required, and the process started again. This process continues until the system—architecture, ConOps, and requirements—meets the stakeholder expectations. The depth of the design effort must be sufficient to allow analytical verification of the design to the requirements. The design must be feasible and credible when judged by a knowledgeable independent review team and must have sufficient depth to support cost modeling.

Once the system meets the stakeholder expectations, the study team baselines the products and prepares for the next phase. Often, intermediate levels of decomposition are validated as part of the process. In the next level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements, and the process begins again. These system design processes are primarily applied in Pre-Phase A and continue through Phase C.

The system design processes during Pre-Phase A focus on producing a feasible design that will lead to Formulation approval. During Phase A, alternative designs and additional analytical maturity are pursued to optimize the design architecture. Phase B results in a preliminary design that satisfies the approval criteria. During Phase C, detailed, build-to designs are completed.

This has been a simplified description intended to demonstrate the recursive relationship among the system design processes. These processes should be used as guidance and tailored for each study team depending on the size of the project and the hierarchical level of the study team. The next sections describe each of the four system design processes and their associated products for a given NASA mission.

System Design Keys
• Successfully understanding and defining the mission objectives and operational concepts are keys to capturing the stakeholder expectations, which will translate into quality requirements over the life cycle of the project.
• Complete and thorough requirements traceability is a critical factor in successful validation of requirements (see the sketch following this list).
• Clear and unambiguous requirements will help avoid misunderstanding when developing the overall system and when making major or minor changes.
• Document all decisions made during the development of the original design concept in the technical data package. This will make the original design philosophy and negotiation results available to assess future proposed changes and modifications against.
• The design solution verification occurs when an acceptable design solution has been selected and documented in a technical data package. The design solution is verified against the system requirements and constraints. However, the validation of a design solution is a continuing recursive and iterative process during which the design solution is evaluated against stakeholder expectations.
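As a minimal illustration of the traceability key above (the requirement IDs and flat parent/child structure are hypothetical, not a NASA schema), the sketch below flags the two conditions that Section 4.2.2.3 later warns about: lower level requirements that do not trace to a parent, and requirements that are never allocated downward.

```python
# Minimal bidirectional-traceability check (illustrative; requirement IDs are hypothetical).
# Each requirement records its parent; "leaf" marks the lowest design-to level.
requirements = {
    "MR-1":   {"parent": None,     "leaf": False},  # top-level mission requirement (no parent by design)
    "SYS-10": {"parent": "MR-1",   "leaf": False},
    "SUB-42": {"parent": "SYS-10", "leaf": True},
    "SYS-11": {"parent": "MR-1",   "leaf": False},  # never flowed down any further
    "SUB-99": {"parent": None,     "leaf": True},   # no parent at all
}

allocated_parents = {r["parent"] for r in requirements.values() if r["parent"] is not None}

# Lower level requirements not traceable upward suggest unjustified overdesign.
untraceable = [rid for rid, r in requirements.items() if r["parent"] is None and rid != "MR-1"]
# Requirements never allocated to a lower level suggest an objective that will not be met.
unallocated = [rid for rid, r in requirements.items()
               if not r["leaf"] and rid not in allocated_parents]

print("Not traceable to a parent:", untraceable)   # ['SUB-99']
print("Not allocated downward:  ", unallocated)    # ['SYS-11']
```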


4.1 Stakeholder Expectations Definition

The Stakeholder Expectations Definition Process is the initial process within the SE engine that establishes the foundation from which the system is designed and the product is realized. The main purpose of this process is to identify who the stakeholders are and how they intend to use the product. This is usually accomplished through use-case scenarios, Design Reference Missions (DRMs), and ConOps.

4.1.1 Process Description

Figure 4.1-1 provides a typical flow diagram for the Stakeholder Expectations Definition Process and identifies typical inputs, outputs, and activities to consider in addressing stakeholder expectations definition.

[Figure 4.1-1 Stakeholder Expectations Definition Process: inputs (initial customer expectations, other stakeholder expectations, customer flowdown requirements) feed activities to establish the list of stakeholders, elicit stakeholder expectations, establish the operations concept and support strategies, define expectations in acceptable statements, analyze expectation statements for measures of effectiveness, validate bidirectional traceability, obtain stakeholder commitments, and baseline the expectations; outputs include validated stakeholder expectations, the concept of operations, enabling product support strategies, and measures of effectiveness]

4.1.1.1 Inputs
Typical inputs needed for the Stakeholder Expectations Definition Process would include the following:
• Upper Level Requirements and Expectations: These would be the requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, external interfaces) that are being flowed down to a particular system of interest from a higher level (e.g., program, project, etc.).
• Identified Customers and Stakeholders: The organization or individual who has requested the product(s) and those who are affected by or are in some way accountable for the product's outcome.

4.1.1.2 Process Activities

Identifying Stakeholders
Advocacy for new programs and projects may originate in many organizations. These include Presidential directives, Congress, NASA Headquarters (HQ), the NASA Centers, NASA advisory committees, the National Academy of Sciences, the National Space Council, and many other groups in the science and space communities. These organizations are commonly referred to as stakeholders. A stakeholder is a group or individual who is affected by or is in some way accountable for the outcome of an undertaking.


Stakeholders can be classified as customers and other interested parties. Customers are those who will receive the goods or services and are the direct beneficiaries of the work. Examples of customers are scientists, project managers, and subsystems engineers.

Other interested parties are those who affect the project by providing broad, overarching constraints within which the customers' needs must be achieved. These parties may be affected by the resulting product, the manner in which the product is used, or have a responsibility for providing life-cycle support services. Examples include Congress, advisory planning teams, program managers, users, operators, maintainers, mission partners, and NASA contractors. It is important that the list of stakeholders be identified early in the process, as well as the primary stakeholders who will have the most significant influence over the project.

Identifying Stakeholder Expectations
Stakeholder expectations, the vision of a particular stakeholder individual or group, result when they specify what is desired as an end state or as an item to be produced and put bounds upon the achievement of the goals. These bounds may encompass expenditures (resources), time to deliver, performance objectives, or other less obvious quantities such as organizational needs or geopolitical goals.

Figure 4.1-2 shows the type of information needed when defining stakeholder expectations and depicts how the information evolves into a set of high-level requirements. The yellow paths depict validation paths. Examples of the types of information that would be defined during each step are also provided.

Defining stakeholder expectations begins with the mission authority and strategic objectives that the mission is meant to achieve. Mission authority changes depending on the category of the mission. For example, science missions are usually driven by NASA Science Mission Directorate strategic plans, whereas exploration missions may be driven by a Presidential directive.

An early task in defining stakeholder expectations is understanding the objectives of the mission. Clearly describing and documenting them helps ensure that the project team is working toward a common goal. These objectives form the basis for developing the mission, so they need to be clearly defined and articulated.

Defining the objectives is done by eliciting the needs, wants, desires, capabilities, external interfaces, assumptions, and constraints from the stakeholders. Arriving at an agreed-to set of objectives can be a long and arduous task. The proactive iteration with the stakeholders throughout the systems engineering process is the way that all parties can come to a true understanding of what should be done and what it takes to do the job.

[Figure 4.1-2 Product flow for stakeholder expectations: mission authority (Agency strategic plans, Announcements of Opportunity, road maps, directed missions) leads to mission objectives (science, exploration, technology demonstration and development, programmatic), operational objectives (integration and test, launch, on-orbit, transfer, surface, science data distribution), success criteria (what measurements or explorations, and how well), and design drivers (operational drivers and mission drivers such as launch date, mission duration, and orbit)]
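A study team might keep the Figure 4.1-2 products in a simple structured form so they can later be traced into high-level requirements. The sketch below is one hypothetical way to do that; the mission, fields, and values are invented for illustration and are not from the handbook.

```python
# Hypothetical capture of the Figure 4.1-2 products for a notional science mission
# (all names and values are illustrative only).
stakeholder_expectations = {
    "mission_authority": "Science Mission Directorate strategic plan",
    "mission_objectives": ["Measure global sea-surface height"],
    "operational_objectives": ["Downlink science data to two ground sites daily"],
    "success_criteria": {"measurement": "sea-surface height", "accuracy_cm": 5},
    "design_drivers": {"launch_date": "2030", "mission_duration_yr": 3,
                       "orbit": "LEO, sun-synchronous"},
    "constraints": ["Cost cap", "Use of an existing launch vehicle"],
}

# Each objective should later trace to at least one high-level requirement and a success criterion.
for objective in stakeholder_expectations["mission_objectives"]:
    print(f"Objective: {objective} -> needs a high-level requirement and a success criterion")
```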


It is important to know who the primary stakeholders are and who has the decision authority to help resolve conflicts.

The project team should also identify the constraints that may apply. A constraint is a condition that must be met. Sometimes a constraint is dictated by external factors such as orbital mechanics or the state of technology; sometimes constraints are the result of the overall budget environment. It is important to document the constraints and assumptions along with the mission objectives.

Operational objectives also need to be included in defining the stakeholder expectations. The operational objectives identify how the mission must be operated to achieve the mission objectives.

The mission and operational success criteria define what the mission must accomplish to be successful. This will be in the form of a measurement concept for science missions and an exploration concept for human exploration missions. The success criteria also define how well the concept measurements or exploration activities must be accomplished. The success criteria capture the stakeholder expectations and, along with programmatic requirements and constraints, are used within the high-level requirements.

The design drivers will be strongly dependent upon the ConOps, including the operational environment, orbit, and mission duration requirements. For science missions, the design drivers may include, at a minimum, the mission launch date, duration, and orbit. If alternative orbits are to be considered, a separate concept is needed for each orbit. Exploration missions must consider the destination, the duration, the operational sequence (and system configuration changes), and the in situ exploration activities that allow the exploration to succeed.

Note: It is extremely important to involve stakeholders in all phases of a project. Such involvement should be built in as a self-correcting feedback loop that will significantly enhance the chances of mission success. Involving stakeholders in a project builds confidence in the end product and serves as a validation and acceptance with the target audience.

The end result of this step is the discovery and delineation of the system's goals, which generally express the agreements, desires, and requirements of the eventual users of the system. The high-level requirements and success criteria are examples of the products representing the consensus of the stakeholders.

4.1.1.3 Outputs
Typical outputs for capturing stakeholder expectations would include the following:
• Top-Level Requirements and Expectations: These would be the top-level requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, and external interfaces) for the product(s) to be developed.
• ConOps: This describes how the system will be operated during the life-cycle phases to meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals. Examples would be the ConOps document or a DRM.

4.1.2 Stakeholder Expectations Definition Guidance

4.1.2.1 Concept of Operations
The ConOps is an important component in capturing stakeholder expectations, requirements, and the architecture of a project. It stimulates the development of the requirements and architecture related to the user elements of the system.
It serves as the basis for subsequent definition documents such as the operations plan, launch and early orbit plan, and operations handbook, and provides the foundation for the long-range operational planning activities such as operational facilities, staffing, and network scheduling.

The ConOps is an important driver in the system requirements and therefore must be considered early in the system design processes. Thinking through the ConOps and use cases often reveals requirements and design functions that might otherwise be overlooked. A simple example to illustrate this point is adding system requirements to allow for communication during a particular phase of a mission. This may require an additional antenna in a specific location that may not be required during the nominal mission.


The ConOps is important for all projects. For science projects, the ConOps describes how the systems will be operated to achieve the measurement set required for a successful mission. They are usually driven by the data volume of the measurement set. The ConOps for exploration projects is likely to be more complex. There are typically more operational phases, more configuration changes, and additional communication links required for human interaction. For human spaceflight, functions and objectives must be clearly allocated between human operators and systems early in the project.

The ConOps should consider all aspects of operations, including integration, test, and launch through disposal. Typical information contained in the ConOps includes a description of the major phases; operation timelines; operational scenarios and/or DRMs; end-to-end communications strategy; command and data architecture; operational facilities; integrated logistic support (resupply, maintenance, and assembly); and critical events. The operational scenarios describe the dynamic view of the systems' operations and include how the system is perceived to function throughout the various modes and mode transitions, including interactions with external interfaces. For exploration missions, multiple DRMs make up a ConOps. The design and performance analysis leading to the requirements must satisfy all of them. Figure 4.1-3 illustrates typical information included in the ConOps for a science mission, and Figure 4.1-4 is an example of an associated end-to-end operational architecture.

[Figure 4.1-3 Typical ConOps development for a science mission: develop operational requirements, the project operations timeline, operational configurations, critical events, and organizational responsibilities; identify operational facilities; and define end-to-end communication links and the operational drivers for the flight, ground, and launch segments]

[Figure 4.1-4 Example of an associated end-to-end operational architecture: instruments and the spacecraft linked by Ka-band science data and S-band tracking, command, and housekeeping telemetry through ground sites, a data distribution system, instrument science operations centers, a mission operations center, a flight dynamics system, and supporting ground control and flight software maintenance facilities]
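As a simple, hypothetical illustration of how the data volume of the measurement set drives the ConOps, the sketch below checks whether an assumed set of daily ground contacts can return an assumed daily science volume. All rates, pass counts, and durations are invented for the example and are not taken from the figures above.

```python
# Hypothetical sizing check: can the planned ground contacts return the daily science volume?
science_rate_mbps = 20            # average instrument data generation rate (assumed)
downlink_rate_mbps = 150          # downlink rate during a ground pass (assumed)
passes_per_day = 4                # planned contacts per day (assumed)
pass_duration_min = 10            # usable minutes per contact (assumed)

daily_volume_gbit = science_rate_mbps * 86400 / 1000                       # generated per day
daily_capacity_gbit = downlink_rate_mbps * passes_per_day * pass_duration_min * 60 / 1000

print(f"Generated: {daily_volume_gbit:.0f} Gbit/day, downlink capacity: {daily_capacity_gbit:.0f} Gbit/day")
if daily_capacity_gbit < daily_volume_gbit:
    print("ConOps does not close: add passes, raise the downlink rate, or reduce the measurement set.")
else:
    margin = (daily_capacity_gbit - daily_volume_gbit) / daily_volume_gbit
    print(f"ConOps closes with {margin:.0%} margin; onboard storage must cover gaps between passes.")
```

When such a check does not close, the ConOps, the measurement set, or the communications architecture has to change, which is exactly the kind of iteration described earlier.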


For more information about developing the ConOps, see ANSI/AIAA G-043-1992, Guide for the Preparation of Operational Concept Documents.

The operation timelines provide the basis for defining the system configurations, operational activities, and other sequenced elements necessary to achieve the mission objectives in each operational phase. Depending on the type of project (science, exploration, operational), the timeline could become quite complex.

The timeline matures along with the design. It starts as a simple time-sequenced order of the major events and matures into a detailed description of subsystem operations during all major mission modes or transitions. Examples of a lunar sortie timeline and DRM early in the life cycle are shown in Figures 4.1-5a and 4.1-5b, respectively. An example of a more detailed, integrated timeline later in the life cycle for a science mission is shown in Figure 4.1-6.

[Figure 4.1-5a Example of a lunar sortie timeline developed early in the life cycle: integration and test, launch operations, LEO operations, lunar transfer operations, lunar orbit operations, lunar surface operations, Earth transfer operations, and reentry and landing operations laid out over elapsed time in weeks]

[Figure 4.1-5b Example of a lunar sortie DRM early in the life cycle: the Crew Exploration Vehicle and Lunar Surface Access Module (LSAM) leave low Earth orbit on the Earth departure stage, the LSAM performs lunar orbit injection to a 100 km low lunar orbit, the ascent stage is expended after surface operations, and the crew returns to Earth via direct or skip land entry]
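Early in the life cycle, a timeline can be captured as little more than an ordered list of phases with durations and critical events, in the spirit of Figure 4.1-5a. The sketch below shows one lightweight representation that can mature along with the design; the phase names, durations, and consistency check are hypothetical.

```python
# Hypothetical early-life-cycle operations timeline (illustrative values only).
timeline = [
    {"phase": "Launch operations",   "start_week": 0.0,   "duration_weeks": 0.2,
     "critical_events": ["liftoff", "orbit insertion"]},
    {"phase": "In-orbit checkout",   "start_week": 0.2,   "duration_weeks": 2.0,
     "critical_events": ["solar array deploy"]},
    {"phase": "Science operations",  "start_week": 2.2,   "duration_weeks": 150.0,
     "critical_events": []},
    {"phase": "Disposal operations", "start_week": 152.2, "duration_weeks": 1.0,
     "critical_events": ["deorbit burn"]},
]

# Simple consistency check: phases should be contiguous, with no gaps or overlaps.
for earlier, later in zip(timeline, timeline[1:]):
    end = earlier["start_week"] + earlier["duration_weeks"]
    assert abs(end - later["start_week"]) < 1e-6, \
        f"Gap or overlap between {earlier['phase']} and {later['phase']}"

total = timeline[-1]["start_week"] + timeline[-1]["duration_weeks"]
print(f"Timeline is contiguous; total duration: {total} weeks")
```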


An important part of the ConOps is defining the operational phases, which will span project Phases D, E, and F. The operational phases provide a time-sequenced structure for defining the configuration changes and operational activities needed to be carried out to meet the goals of the mission. For each of the operational phases, facilities, equipment, and critical events should also be included. Table 4.1-1 identifies some common examples of operational phases for a NASA mission.

[Figure 4.1-6 Example of a more detailed, integrated timeline later in the life cycle for a science mission: an hour-by-hour launch and early-orbit timeline covering ground station and TDRS coverage, launch vehicle events, guidance and control modes, propulsion, command and data handling, RF downlink rates, recorder and power/battery state of charge, deployments, thermal and instrument states, and ground and mission operations center activities from launch through separation, solar array deployment, and sun acquisition]


Table 4.1-1 Typical Operational Phases for a NASA Mission

Integration and test operations
  Project Integration and Test: During the latter period of project integration and test, the system is tested by performing operational simulations during functional and environmental testing. The simulations typically exercise the end-to-end command and data system to provide a complete verification of system functionality and performance against simulated project operational scenarios.
  Launch Integration: The launch integration phase may repeat integration and test operational and functional verification in the launch-integrated configuration.

Launch operations
  Launch: Launch operation occurs during the launch countdown, launch ascent, and orbit injection. Critical event telemetry is an important driver during this phase.
  Deployment: Following orbit injection, spacecraft deployment operations reconfigure the spacecraft to its orbital configuration. Typically, critical events covering solar array, antenna, and other deployments and orbit trim maneuvers occur during this phase.
  In-Orbit Checkout: In-orbit checkout is used to perform a verification that all systems are healthy. This is followed by on-orbit alignment, calibration, and parameterization of the flight systems to prepare for science operations.

Science operations
  The majority of the operational lifetime is used to perform science operations.

Safe-hold operations
  As a result of on-board fault detection or by ground command, the spacecraft may transition to a safe-hold mode. This mode is designed to maintain the spacecraft in a power-positive, thermally stable state until the fault is resolved and science operations can resume.

Anomaly resolution and maintenance operations
  Anomaly resolution and maintenance operations occur throughout the mission. They may require resources beyond established operational resources.

Disposal operations
  Disposal operations occur at the end of project life. These operations are used to either provide a controlled reentry of the spacecraft or a repositioning of the spacecraft to a disposal orbit. In the latter case, the dissipation of stored fuel and electrical energy is required.
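The safe-hold row of Table 4.1-1 implies a small set of allowed mode transitions. The sketch below shows how such a rule set might be written down and checked; the mode names and transition rules are hypothetical and are not flight software.

```python
# Illustrative sketch of the safe-hold behavior described in Table 4.1-1 (hypothetical modes/rules).
ALLOWED_TRANSITIONS = {
    "launch": {"deployment"},
    "deployment": {"in_orbit_checkout", "safe_hold"},
    "in_orbit_checkout": {"science_operations", "safe_hold"},
    "science_operations": {"safe_hold", "disposal"},
    "safe_hold": {"science_operations"},   # resume only after the fault is resolved
}

def next_mode(current_mode, fault_detected, ground_command=None):
    """On-board fault detection or a ground command forces safe-hold; otherwise honor allowed commands."""
    if fault_detected or ground_command == "safe_hold":
        return "safe_hold"
    if ground_command and ground_command in ALLOWED_TRANSITIONS[current_mode]:
        return ground_command
    return current_mode  # no change

print(next_mode("science_operations", fault_detected=True))                     # safe_hold
print(next_mode("safe_hold", fault_detected=False,
                ground_command="science_operations"))                           # science_operations
```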


4.2 Technical Requirements Definition

The Technical Requirements Definition Process transforms the stakeholder expectations into a definition of the problem and then into a complete set of validated technical requirements expressed as "shall" statements that can be used for defining a design solution for the Product Breakdown Structure (PBS) model and related enabling products. The process of requirements definition is a recursive and iterative one that develops the stakeholders' requirements, product requirements, and lower level product/component requirements (e.g., PBS model products such as systems or subsystems and related enabling products such as external systems that provide or consume data). The requirements should enable the description of all inputs, outputs, and required relationships between inputs and outputs. The requirements documents organize and communicate requirements to the customer and other stakeholders and the technical community.

It is important to note that the team must not rely solely on the requirements received to design and build the system. Communication and iteration with the relevant stakeholders are essential to ensure a mutual understanding of each requirement. Otherwise, the designers run the risk of misunderstanding and implementing an unwanted solution to a different interpretation of the requirements.

Technical requirements definition activities apply to the definition of all technical requirements from the program, project, and system levels down to the lowest level product/component requirements document.

4.2.1 Process Description

Figure 4.2-1 provides a typical flow diagram for the Technical Requirements Definition Process and identifies typical inputs, outputs, and activities to consider in addressing technical requirements definition.

[Figure 4.2-1 Technical Requirements Definition Process: baselined stakeholder expectations, ConOps, enabling support strategies, and measures of effectiveness feed activities to analyze the scope of the problem; define design and product constraints; define functional and behavioral expectations in technical terms; define performance requirements for each expectation; define technical requirements in acceptable "shall" statements; validate the technical requirements; define measures of performance for each measure of effectiveness; define technical performance measures; and establish the technical requirements baseline. Outputs are validated technical requirements, measures of performance, and technical performance measures]


4.2.1.1 Inputs
Typical inputs needed for the requirements process would include the following:
• Top-Level Requirements and Expectations: These would be the agreed-to top-level requirements and expectations (e.g., needs, wants, desires, capabilities, constraints, external interfaces) for the product(s) to be developed coming from the customer and other stakeholders.
• Concept of Operations: This describes how the system will be operated during the life-cycle phases to meet stakeholder expectations. It describes the system characteristics from an operational perspective and helps facilitate an understanding of the system goals. Examples would be a ConOps document or a DRM.

4.2.1.2 Process Activities
The top-level requirements and expectations are initially assessed to understand the technical problem to be solved and establish the design boundary. This boundary is typically established by performing the following activities:
• Defining constraints that the design must adhere to or how the system will be used. The constraints are typically not able to be changed based on tradeoff analyses.
• Identifying those elements that are already under design control and cannot be changed. This helps establish those areas where further trades will be performed to narrow potential design solutions.
• Establishing physical and functional interfaces (e.g., mechanical, electrical, thermal, human, etc.) with which the system must interact.
• Defining functional and behavioral expectations for the range of anticipated uses of the system as identified in the ConOps. The ConOps describes how the system will be operated and the possible use-case scenarios.

With an overall understanding of the constraints, physical/functional interfaces, and functional/behavioral expectations, the requirements can be further defined by establishing performance criteria. The performance is expressed as the quantitative part of the requirement to indicate how well each product function is expected to be accomplished.

Finally, the requirements should be defined in acceptable "shall" statements, which are complete sentences with a single "shall" per statement. See Appendix C for guidance on how to write good requirements and Appendix E for validating requirements. A well-written requirements document provides several specific benefits to both the stakeholders and the technical team, as shown in Table 4.2-1.

4.2.1.3 Outputs
Typical outputs for the Technical Requirements Definition Process would include the following:
• Technical Requirements: This would be the approved set of requirements that represents a complete description of the problem to be solved and requirements that have been validated and approved by the customer and stakeholders. Examples of documentation that capture the requirements are a System Requirements Document (SRD), Project Requirements Document (PRD), Interface Requirements Document (IRD), etc.
• Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. Common terms for these measures are Measures of Effectiveness (MOEs), Measures of Performance (MOPs), and Technical Performance Measures (TPMs). See Section 6.7 for further details.
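One hypothetical way to keep each "shall" statement together with its metadata (parent traceability, rationale, verification method, and linked technical measures) is sketched below. The fields and the example requirement are illustrative, not a NASA-prescribed schema; the metadata idea is discussed further in Section 4.2.2.4.

```python
# Hypothetical record for a single technical requirement (fields and values are illustrative).
requirement = {
    "id": "SYS-101",
    "shall": "The spacecraft shall downlink stored science data at a rate of at least 150 Mbps.",
    "parent": "MR-3",                       # bidirectional traceability to a higher level requirement
    "rationale": "Needed to return the daily measurement set within planned ground contacts.",
    "verification_method": "test",          # test, analysis, inspection, or demonstration
    "linked_measures": ["MOP: downlink data rate", "TPM: end-to-end data latency"],
}

# A single "shall" per statement keeps each requirement individually verifiable.
assert requirement["shall"].count("shall") == 1
assert requirement["verification_method"] in {"test", "analysis", "inspection", "demonstration"}
print(requirement["id"], "->", requirement["verification_method"])
```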
4.2.2 Technical Requirements Definition Guidance

4.2.2.1 Types of Requirements
A complete set of project requirements includes the functional needs requirements (what functions need to be performed), performance requirements (how well these functions must be performed), and interface requirements (design element interface requirements). For space projects, these requirements are decomposed and allocated down to design elements through the PBS.

Functional, performance, and interface requirements are very important but do not constitute the entire set of requirements necessary for project success. The space segment design elements must also survive and continue to perform in the project environment. These environmental drivers include radiation, thermal, acoustic, mechanical loads, contamination, radio frequency, and others. In addition, reliability requirements drive design choices in design robustness, failure tolerance, and redundancy. Safety requirements drive design choices in providing diverse functional redundancy.


Other specialty requirements also may affect design choices. These may include producibility, maintainability, availability, upgradeability, human factors, and others. Unlike functional needs requirements, which are decomposed and allocated to design elements, these requirements are levied across major project elements. Designing to meet these requirements requires careful analysis of design alternatives. Figure 4.2-2 shows the characteristics of functional, operational, reliability, safety, and specialty requirements. Top-level mission requirements are generated from mission objectives, programmatic constraints, and assumptions. These are normally grouped into function and performance requirements and include the categories of requirements in Figure 4.2-2.

[Figure 4.2-2 Characteristics of functional, operational, reliability, safety, and specialty requirements: technical requirements (functional, performance, interface) allocated hierarchically to the PBS; operational requirements (mission timeline/sequence, mission configurations, command and telemetry strategy) that drive functional requirements; reliability requirements and project standards levied across systems (mission environments; robustness, fault tolerance, and diverse redundancy; verification; process and workmanship); safety requirements and project standards levied across systems (orbital debris and reentry, planetary protection, toxic substances, pressurized vessels, radio frequency energy, system safety); and specialty requirements and project standards that drive product designs (producibility, maintainability, asset protection)]

Table 4.2-1 Benefits of Well-Written Requirements

Benefit: Establish the basis for agreement between the stakeholders and the developers on what the product is to do
Rationale: The complete description of the functions to be performed by the product specified in the requirements will assist the potential users in determining if the product specified meets their needs or how the product must be modified to meet their needs. During system design, requirements are allocated to subsystems (e.g., hardware, software, and other major components of the system), people, or processes.

Benefit: Reduce the development effort because less rework is required to address poorly written, missing, and misunderstood requirements
Rationale: The Technical Requirements Definition Process activities force the relevant stakeholders to consider rigorously all of the requirements before design begins. Careful review of the requirements can reveal omissions, misunderstandings, and inconsistencies early in the development cycle when these problems are easier to correct, thereby reducing costly redesign, remanufacture, recoding, and retesting in later life-cycle phases.

Benefit: Provide a basis for estimating costs and schedules
Rationale: The description of the product to be developed as given in the requirements is a realistic basis for estimating project costs and can be used to evaluate bids or price estimates.

Benefit: Provide a baseline for validation and verification
Rationale: Organizations can develop their validation and verification plans much more productively from a good requirements document. Both system and subsystem test plans and procedures are generated from the requirements. As part of the development, the requirements document provides a baseline against which compliance can be measured. The requirements are also used to provide the stakeholders with a basis for acceptance of the system.

Benefit: Facilitate transfer
Rationale: The requirements make it easier to transfer the product to new users or new machines. Stakeholders thus find it easier to transfer the product to other parts of their organization, and developers find it easier to transfer it to new stakeholders or reuse it.

Benefit: Serve as a basis for enhancement
Rationale: The requirements serve as a basis for later enhancement or alteration of the finished product.

Functional Requirements
Functional requirements define what functions need to be done to accomplish the objectives. Performance requirements define how well the system needs to perform the functions.

The functional requirements need to be specified for all intended uses of the product over its entire lifetime. Functional analysis is used to draw out both functional and performance requirements.
Requirements are partitioned into groups, based on established criteria (e.g., similar functionality, performance, or coupling), to facilitate and focus the requirements analysis. Functional and performance requirements are allocated to functional partitions and subfunctions, objects, people, or processes. Sequencing of time-critical functions is considered. Each function is identified and described in terms of inputs, outputs, and interface requirements from the top down so that the decomposed functions are recognized as part of larger functional groupings. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path to indicate the sequential relationship of all functions that must be accomplished by the system.


It is helpful to walk through the ConOps and scenarios asking the following types of questions: what functions need to be performed, where do they need to be performed, how often, and under what operational and environmental conditions? Thinking through this process often reveals additional functional requirements.

Performance Requirements
Performance requirements quantitatively define how well the system needs to perform the functions. Again, walking through the ConOps and the scenarios often draws out the performance requirements by asking the following types of questions: how often and how well, to what accuracy (e.g., how good does the measurement need to be), what is the quality and quantity of the output, under what stress (maximum simultaneous data requests) or environmental conditions, for what duration, at what range of values, at what tolerance, and at what maximum throughput or bandwidth capacity?

Example of Functional and Performance Requirements

Initial Function Statement
The Thrust Vector Controller (TVC) shall provide vehicle control about the pitch and yaw axes.

This statement describes a high-level function that the TVC must perform. The technical team needs to transform this statement into a set of design-to functional and performance requirements.

Functional Requirements with Associated Performance Requirements
• The TVC shall gimbal the engine a maximum of 9 degrees, ±0.1 degree.
• The TVC shall gimbal the engine at a maximum rate of 5 degrees/second, ±0.3 degrees/second.
• The TVC shall provide a force of 40,000 pounds, ±500 pounds.
• The TVC shall have a frequency response of 20 Hz, ±0.1 Hz.

Be careful not to make performance requirements too restrictive. For example, for a system that must be able to run on rechargeable batteries, if the performance requirements specify that the time to recharge should be less than 3 hours when a 12-hour recharge time would be sufficient, potential design solutions are eliminated. In the same sense, if the performance requirements specify that a weight must be within ±0.5 kg when ±2.5 kg is sufficient, metrology cost will increase without adding value to the product.

Wherever possible, define the performance requirements in terms of (1) a threshold value (the minimum acceptable value needed for the system to carry out its mission) and (2) the baseline level of performance desired. Specifying performance in terms of thresholds and baseline requirements provides the system designers with trade space in which to investigate alternative designs.

All qualitative performance expectations must be analyzed and translated into quantified performance requirements. Trade studies often help quantify performance requirements.
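As a hedged illustration of the threshold/baseline idea, the sketch below compares a predicted value against a hypothetical threshold and baseline for one performance parameter (all numbers are invented), which is the kind of check a team might repeat as the design and its margins evolve.

```python
# Illustrative threshold/baseline check for one performance requirement (hypothetical values).
requirement = {
    "parameter": "battery recharge time (hours)",
    "threshold": 12.0,   # minimum acceptable performance: must recharge within 12 hours
    "baseline": 8.0,     # desired level of performance
}
predicted = 9.5          # current design prediction from analysis or test

if predicted <= requirement["baseline"]:
    status = "meets baseline"
elif predicted <= requirement["threshold"]:
    status = "meets threshold only; trade space remains between threshold and baseline"
else:
    status = "does not meet threshold; redesign or relax the requirement"

margin_to_threshold = requirement["threshold"] - predicted
print(f"{requirement['parameter']}: predicted {predicted}, {status}, "
      f"margin to threshold {margin_to_threshold} h")
```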


For example, tradeoffs can show whether a slight relaxation of the performance requirement could produce a significantly cheaper system or whether a few more resources could produce a significantly more effective system. The rationale for thresholds and goals should be documented with the requirements to understand the reason and origin for the performance requirement in case it must be changed. The performance requirements that can be quantified or changed by tradeoff analysis should be identified. See Section 6.8, Decision Analysis, for more information on tradeoff analysis.

Interface Requirements
It is important to define all interface requirements for the system, including those to enabling systems. The external interfaces form the boundaries between the product and the rest of the world. Types of interfaces include operational command and control, computer to computer, mechanical, electrical, thermal, and data. One useful tool in defining interfaces is the context diagram (see Appendix F), which depicts the product and all of its external interfaces. Once the product components are defined, a block diagram showing the major components, interconnections, and external interfaces of the system should be developed to define both the components and their interactions.

Interfaces associated with all product life-cycle phases should also be considered. Examples include interfaces with test equipment; transportation systems; Integrated Logistics Support (ILS) systems; and manufacturing facilities, operators, users, and maintainers.

As the technical requirements are defined, the interface diagram should be revisited and the documented interface requirements refined to include newly identified interface information for requirements both internal and external. More information regarding interfaces can be found in Section 6.3.

Environmental Requirements
Each space mission has a unique set of environmental requirements that apply to the flight segment elements. It is a critical function of systems engineering to identify the external and internal environments for the particular mission, analyze and quantify the expected environments, develop design guidance, and establish a margin philosophy against the expected environments.

The environments envelope should consider what can be encountered during ground test, storage, transportation, launch, deployment, and normal operations from beginning of life to end of life. Requirements derived from the mission environments should be included in the system requirements.

External and internal environment concerns that must be addressed include acceleration, vibration, shock, static loads, acoustic, thermal, contamination, crew-induced loads, total dose radiation/radiation effects, Single-Event Effects (SEEs), surface and internal charging, orbital debris, atmospheric (atomic oxygen) control and quality, attitude control system disturbance (atmospheric drag, gravity gradient, and solar pressure), magnetic, pressure gradient during launch, microbial growth, and radio frequency exposure on the ground and on orbit.

The requirements structure must address the specialty engineering disciplines that apply to the mission environments across project elements.
These discipline areas levy requirements on system elements regarding Electromagnetic Interference and Electromagnetic Compatibility (EMI/EMC), grounding, radiation and other shielding, contamination protection, and reliability.

Reliability Requirements
Reliability can be defined as the probability that a device, product, or system will not fail for a given period of time under specified operating conditions. Reliability is an inherent system design characteristic. As a principal contributing factor in operations and support costs and in system effectiveness, reliability plays a key role in determining the system's cost-effectiveness.

Reliability engineering is a major specialty discipline that contributes to the goal of a cost-effective system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to ensure that the system can perform in the predicted physical environments throughout the mission, and by making independent predictions of system reliability for design trades and for test program, operations, and integrated logistics support planning.
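For a constant failure rate, the textbook exponential model R(t) = e^(-λt) gives a first-order reliability prediction, and the effect of simple redundancy can be combined from it. The sketch below uses hypothetical numbers and assumes a constant failure rate and independent failures with no common-cause faults, so it illustrates the arithmetic rather than a real prediction.

```python
# Illustrative reliability prediction using the constant-failure-rate model R(t) = exp(-lambda*t).
# Failure rate and mission duration are hypothetical.
import math

mission_hours = 3 * 8766                  # roughly a 3-year mission (8766 hours per year)
lambda_per_hour = 2.0e-6                  # assumed constant failure rate of one unit

r_single = math.exp(-lambda_per_hour * mission_hours)   # one unit surviving the mission
r_dual = 1 - (1 - r_single) ** 2                        # two redundant units, independent failures

print(f"Single unit reliability over the mission: {r_single:.3f}")
print(f"Dual-redundant reliability:               {r_dual:.4f}")
```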


Reliability requirements ensure that the system (and subsystems, e.g., software and hardware) can perform in the predicted environments and conditions as expected throughout the mission and that the system has the ability to withstand certain numbers and types of faults, errors, or failures (e.g., withstand vibration, predicted data rates, command and/or data errors, single-event upsets, and temperature variances to specified limits). Environments can include ground (transportation and handling), launch, on-orbit (Earth or other), planetary, reentry, and landing, or they might be for software within certain modes or states of operation. Reliability addresses design and verification requirements to meet the requested level of operation as well as fault and/or failure tolerance for all expected environments and conditions. Reliability requirements cover fault/failure prevention, detection, isolation, and recovery.

Safety Requirements
NASA uses the term "safety" broadly to include human (public and workforce), environmental, and asset safety. There are two types of safety requirements—deterministic and risk-informed. A deterministic safety requirement is the qualitative or quantitative definition of a threshold of action or performance that must be met by a mission-related design item, system, or activity for that item, system, or activity to be acceptably safe. Examples of deterministic safety requirements are incorporation of safety devices (e.g., building physical hardware stops into the system to prevent a hydraulic lift/arm from extending past allowed safety height and length limits); limits on the range of values a system input variable is allowed to take on; and limit checks on input commands to ensure they are within specified safety limits or constraints for that mode or state of the system (e.g., the command to retract the landing gear is only allowed if the airplane is in the airborne state). For those components identified as "safety critical," requirements include functional redundancy or failure tolerance to allow the system to meet its requirements in the presence of one or more failures or to take the system to a safe state with reduced functionality (e.g., dual redundant computer processors, a safe-state backup processor); detection and automatic system shutdown if specified values (e.g., temperature) exceed prescribed safety limits; use of only an approved safety-critical subset of a particular computer language for safety-critical software; caution or warning devices; and safety procedures.

A risk-informed safety requirement is a requirement that has been established, at least in part, on the basis of the consideration of safety-related TPMs and their associated uncertainty. An example of a risk-informed safety requirement is the Probability of Loss of Crew (P(LOC)) not exceeding a certain value "p" with a certain confidence level. Meeting safety requirements involves identification and elimination of hazards, reducing the likelihood of the accidents associated with hazards, or reducing the impact from the hazards associated with these accidents to within acceptable levels. (For additional information concerning safety, see, for example, NPR 8705.2, Human-Rating Requirements for Space Systems; NPR 8715.3, NASA General Safety Program Requirements; and NASA-STD-8719.13, Software Safety Standard.)

4.2.2.2 Human Factors Engineering Requirements
In human spaceflight, the human—as operator and as maintainer—is a critical component of the mission and system design. Human capabilities and limitations must enter into designs in the same way that the properties of materials and characteristics of electronic components do.
Human factors engineering is the discipline that studies human-system interfaces and interactions and provides requirements, standards, and guidelines to ensure the entire system can function as designed with effective accommodation of the human component.

Humans are initially integrated into systems through analysis of the overall mission. Mission functions are allocated to humans as appropriate to the system architecture, technical capabilities, cost factors, and crew capabilities. Once functions are allocated, human factors analysts work with system designers to ensure that human operators and maintainers are provided the equipment, tools, and interfaces to perform their assigned tasks safely and effectively.

NASA-STD-3001, NASA Space Flight Human System Standards Volume 1: Crew Health, ensures that systems are safe and effective for humans. The standards focus on the human integrated with the system, the measures needed (rest, nutrition, medical care, exercise, etc.) to ensure that the human stays healthy and effective, the workplace environment, and crew-system physical and cognitive interfaces.

4.2.2.3 Requirements Decomposition, Allocation, and Validation
Requirements are decomposed in a hierarchical structure starting with the highest level requirements imposed by Presidential directives, mission directorates, program, Agency, and customer and other stakeholders. These high-level requirements are decomposed into functional and performance requirements and allocated across the system.


This decomposition and allocation process continues until a complete set of design-to requirements is achieved. At each level of decomposition (system, subsystem, component, etc.), the total set of derived requirements must be validated against the stakeholder expectations or higher level parent requirements before proceeding to the next level of decomposition.

The traceability of requirements to the lowest level ensures that each requirement is necessary to meet the stakeholder expectations. Requirements that are not allocated to lower levels or are not implemented at a lower level result in a design that does not meet objectives and is, therefore, not valid. Conversely, lower level requirements that are not traceable to higher level requirements result in an overdesign that is not justified. This hierarchical flowdown is illustrated in Figure 4.2-3.

Figure 4.2-3 The flowdown of requirements (mission authority, mission objectives, customer inputs, implementing organizations, programmatic and institutional constraints, assumptions, and environmental and other design requirements and guidelines flow into the mission requirements, then into system functional and system performance requirements, and finally into allocated and derived functional and performance requirements for each subsystem)

Figure 4.2-4 is an example of how science pointing requirements are successively decomposed and allocated from the top down for a typical science mission. It is important to understand and document the relationship between requirements. This will reduce the possibility of misinterpretation and the possibility of an unsatisfactory design and associated cost increases.

Figure 4.2-4 Allocation and flowdown of science pointing requirements (science pointing requirements are allocated to spacecraft and ground requirements and decomposed into attitude determination and science axis knowledge requirements, which are further broken into contributing error terms such as gyro-to-star-tracker calibration uncertainty, attitude estimation error, star catalog location error, gyro bias rate drift, velocity aberration, instrument calibration error, and instrument and structural thermal deformation)
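To illustrate the traceability checks described above, the following sketch flags lower level requirements with no parent (candidates for unjustified overdesign) and parent requirements that have not been allocated downward; the requirement identifiers and data layout are invented for the example and are not a prescribed database schema.

```python
# Illustrative traceability audit over a hypothetical requirement set.
# Each lower level requirement lists the parent requirement(s) it traces to.
parents = {"MR-1", "MR-2"}                      # higher level (parent) requirements
trace = {                                       # lower level requirement -> parents it traces to
    "SYS-10": {"MR-1"},
    "SYS-11": {"MR-1"},
    "SYS-12": set(),                            # no parent: possible unjustified overdesign
}

allocated_parents = set().union(*trace.values())
orphans = [req for req, p in trace.items() if not p]
unallocated = parents - allocated_parents       # parents with no lower level coverage

print("Requirements with no parent:", orphans)          # ['SYS-12']
print("Parents not allocated downward:", unallocated)   # {'MR-2'}
```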


Throughout Phases A and B, changes in requirements and constraints will occur. It is imperative that all changes be thoroughly evaluated to determine the impacts on both higher and lower hierarchical levels. All changes must be subjected to a review and approval cycle as part of a formal change control process to maintain traceability and to ensure the impacts of any changes are fully assessed for all parts of the system. A more formal change control process is required if the mission is very large and involves more than one Center or crosses other jurisdictional or organizational boundaries.

4.2.2.4 Capturing Requirements and the Requirements Database

At the time the requirements are written, it is important to capture the requirements statements along with the metadata associated with each requirement. The metadata is the supporting information necessary to help clarify and link the requirements.

The method of verification must also be thought through and captured for each requirement at the time it is developed. The verification method includes test, inspection, analysis, and demonstration. Be sure to document any new or derived requirements that are uncovered during determination of the verification method. An example is requiring an additional test port to give visibility to an internal signal during integration and test. If a requirement cannot be verified, then either it should not be a requirement or the requirement statement needs to be rewritten. For example, the requirement to "minimize noise" is vague and cannot be verified. If the requirement is restated as "the noise level of the component X shall remain under Y decibels," then it is clearly verifiable. Examples of the types of metadata are provided in Table 4.2-2.

The requirements database is an extremely useful tool for capturing the requirements and the associated metadata and for showing the bidirectional traceability between requirements. The database evolves over time and could be used for tracking status information related to requirements such as To Be Determined (TBD)/To Be Resolved (TBR) status, resolution date, and verification status. Each project should decide what metadata will be captured. The database is usually in a central location that is made available to the entire project team. (See Appendix D for a sample requirements verification matrix.)
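Building on the "noise level of component X shall remain under Y decibels" example above, a quantified requirement can be checked directly against measured data; the limit and measurements below are hypothetical.

```python
# Illustrative check of a verifiable, quantified requirement:
# "The noise level of component X shall remain under Y decibels."
NOISE_LIMIT_DB = 60.0                      # hypothetical value of "Y"
measured_noise_db = [54.2, 57.9, 58.4]     # hypothetical test measurements

requirement_met = all(sample < NOISE_LIMIT_DB for sample in measured_noise_db)
print("Noise requirement verified by test:", requirement_met)
```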


Table 4.2-2 Requirements Metadata

- Requirement ID: Provides a unique numbering system for sorting and tracking.
- Rationale: Provides additional information to help clarify the intent of the requirements at the time they were written. (See the "Rationale" box below on what should be captured.)
- Traced from: Captures the bidirectional traceability between parent requirements and lower level (derived) requirements and the relationships between requirements.
- Owner: Person or group responsible for writing, managing, and/or approving changes to this requirement.
- Verification method: Captures the method of verification (test, inspection, analysis, demonstration) and should be determined as the requirements are developed.
- Verification lead: Person or group assigned responsibility for verifying the requirement.
- Verification level: Specifies the level in the hierarchy at which the requirements will be verified (e.g., system, subsystem, element).

Rationale

The rationale should be kept up to date and include the following information:

- Reason for the Requirement: Often the reason for the requirement is not obvious, and it may be lost if not recorded as the requirement is being documented. The reason may point to a constraint or concept of operations. If there is a clear parent requirement or trade study that explains the reason, then reference it.
- Document Assumptions: If a requirement was written assuming the completion of a technology development program or a successful technology mission, document the assumption.
- Document Relationships: The relationships with the product's expected operations (e.g., expectations about how stakeholders will use a product). This may be done with a link to the ConOps.
- Document Design Constraints: Imposed by the results from decisions made as the design evolves. If the requirement states a method of implementation, the rationale should state why the decision was made to limit the solution to this one method of implementation.

4.2.2.5 Technical Standards

Importance of Standards Application

Standards provide a proven basis for establishing common technical requirements across a program or project to avoid incompatibilities and ensure that at least minimum requirements are met. Common standards can also lower implementation cost as well as costs for inspection, common supplies, etc. Typically, standards (and specifications) are used throughout the product life cycle to establish design requirements and margins, materials and process specifications, test methods, and interface specifications. Standards are used as requirements (and guidelines) for design, fabrication, verification, validation, acceptance, operations, and maintenance.

Selection of Standards

NASA policy for technical standards is provided in NPD 8070.6, Technical Standards, which addresses selection, tailoring, application, and control of standards. In general, the order of authority among standards for NASA programs and projects is as follows:

- Standards mandated by law (e.g., environmental standards),
- National or international voluntary consensus standards recognized by industry,
- Other Government standards,
- NASA policy directives, and
- NASA technical standards.

NASA may also designate mandatory or "core" standards that must be applied to all programs where technically applicable. Waivers to designated core standards must be justified and approved at the Agency level unless otherwise delegated.
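Returning to the requirements database discussed in Subsection 4.2.2.4, a minimal sketch of a record carrying the metadata fields of Table 4.2-2 might look like the following; the field names simply mirror the table and the sample values are invented.

```python
# Illustrative record structure mirroring the metadata fields of Table 4.2-2.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                      # unique ID for sorting and tracking
    statement: str                   # the "shall" statement itself
    rationale: str                   # reason, assumptions, relationships, design constraints
    traced_from: list = field(default_factory=list)   # parent requirement IDs
    owner: str = ""                  # person or group responsible for the requirement
    verification_method: str = ""    # test, inspection, analysis, or demonstration
    verification_lead: str = ""      # person or group responsible for verification
    verification_level: str = ""     # system, subsystem, element, etc.

example = Requirement(
    req_id="SYS-42",
    statement="The noise level of component X shall remain under 60 dB.",
    rationale="Derived from crew habitability expectations captured in the ConOps.",
    traced_from=["MR-7"],
    owner="Acoustics working group",
    verification_method="test",
    verification_lead="Acoustics test lead",
    verification_level="subsystem",
)
```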


4.3 Logical Decomposition

Logical Decomposition is the process for creating the detailed functional requirements that enable NASA programs and projects to meet the stakeholder expectations. This process identifies the "what" that must be achieved by the system at each level to enable a successful project. Logical decomposition utilizes functional analysis to create a system architecture and to decompose top-level (or parent) requirements and allocate them down to the lowest desired levels of the project.

The Logical Decomposition Process is used to:

- Improve understanding of the defined technical requirements and the relationships among the requirements (e.g., functional, behavioral, and temporal), and
- Decompose the parent requirements into a set of logical decomposition models and their associated sets of derived technical requirements for input to the Design Solution Definition Process.

4.3.1 Process Description

Figure 4.3-1 provides a typical flow diagram for the Logical Decomposition Process and identifies typical inputs, outputs, and activities to consider in addressing logical decomposition.

Figure 4.3-1 Logical Decomposition Process (inputs: baselined technical requirements and measures of performance; activities: define one or more logical decomposition models, allocate technical requirements to the models to form a set of derived technical requirements, resolve derived technical requirement conflicts, validate the resulting set of derived technical requirements, and establish the derived technical requirements baseline; outputs: derived technical requirements, logical decomposition models, and logical decomposition work products)

4.3.1.1 Inputs

Typical inputs needed for the Logical Decomposition Process would include the following:

- Technical Requirements: A validated set of requirements that represent a description of the problem to be solved, have been established by functional and performance analysis, and have been approved by the customer and other stakeholders. Examples of documentation that capture the requirements are an SRD, PRD, and IRD.
- Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. These measures are MOEs, MOPs, and a special subset of these called TPMs. See Subsection 6.7.2.2 for further details.

4.3.1.2 Process Activities

The key first step in the Logical Decomposition Process is establishing the system architecture model. The system architecture activity defines the underlying structure and relationships of hardware, software, communications, operations, etc., that provide for the implementation of Agency, mission directorate, program, project, and subsequent levels of the requirements.
System architecture activities drive the partitioning of system elements and requirements to lower level functions and requirements to the point that design work can be accomplished. Interfaces and relationships between partitioned subsystems and elements are defined as well.

Once the top-level (or parent) functional requirements and constraints have been established, the system designer uses functional analysis to begin to formulate a conceptual system architecture.


The system architecture can be seen as the strategic organization of the functional elements of the system, laid out to enable the roles, relationships, dependencies, and interfaces between elements to be clearly defined and understood. It is strategic in its focus on the overarching structure of the system and how its elements fit together to contribute to the whole, instead of on the particular workings of the elements themselves. It enables the elements to be developed separately from each other while ensuring that they work together effectively to achieve the top-level (or parent) requirements.

Much like the other elements of functional decomposition, the development of a good system-level architecture is a creative, recursive, and iterative process that combines an excellent understanding of the project's end objectives and constraints with an equally good knowledge of various potential technical means of delivering the end products.

Focusing on the project's ends, top-level (or parent) requirements, and constraints, the system architect must develop at least one, but preferably multiple, concept architectures capable of achieving program objectives. Each architecture concept involves specification of the functional elements (what the pieces do), their relationships to each other (interface definition), and the ConOps, i.e., how the various segments, subsystems, elements, units, etc., will operate as a system when distributed by location and environment from the start of operations to the end of the mission.

The development process for the architectural concepts must be recursive and iterative, with feedback from stakeholders and external reviewers, as well as from subsystem designers and operators, provided as often as possible to increase the likelihood of achieving the program's ends, while reducing the likelihood of cost and schedule overruns.

In the early stages of the mission, multiple concepts are developed. Cost and schedule constraints will ultimately limit how long a program or project can maintain multiple architectural concepts. For all NASA programs, architecture design is completed during the Formulation phase. For most NASA projects (and tightly coupled programs), the selection of a single architecture will happen during Phase A, and the architecture and ConOps will be baselined during Phase B. Architectural changes at higher levels occasionally occur as decomposition to lower levels produces complications in design, cost, or schedule that necessitate such changes.

Aside from the creative minds of the architects, there are multiple tools that can be utilized to develop a system's architecture. These are primarily modeling and simulation tools, functional analysis tools, architecture frameworks, and trade studies. (For example, one way of doing architecture is the Department of Defense (DOD) Architecture Framework (DODAF). See box.) As each concept is developed, analytical models of the architecture, its elements, and their operations will be developed with increased fidelity as the project evolves. Functional decomposition, requirements development, and trade studies are subsequently undertaken. Multiple iterations of these activities feed back to the evolving architectural concept as the requirements flow down and the design matures.

Functional analysis is the primary method used in system architecture development and functional requirement decomposition.
It is the systematic process of identifying, describing, and relating the functions a system must perform to fulfill its goals and objectives. Functional analysis identifies and links system functions, trade studies, interface characteristics, and rationales to requirements. It is usually based on the ConOps for the system of interest.

Three key steps in performing functional analysis are:

- Translate top-level requirements into functions that must be performed to accomplish the requirements.
- Decompose and allocate the functions to lower levels of the product breakdown structure.
- Identify and describe functional and subsystem interfaces.

The process involves analyzing each system requirement to identify all of the functions that must be performed to meet the requirement. Each function identified is described in terms of inputs, outputs, and interface requirements. The process is repeated from the top down so that subfunctions are recognized as part of larger functional areas. Functions are arranged in a logical sequence so that any specified operational usage of the system can be traced in an end-to-end path.

The process is recursive and iterative and continues until all desired levels of the architecture/system have been analyzed, defined, and baselined.
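A minimal sketch of the three steps above, using invented requirement, function, and PBS element names: a top-level requirement is translated into functions, the functions are allocated to PBS elements, and the functional interfaces implied by the end-to-end flow are recorded.

```python
# Illustrative sketch of the three functional analysis steps (names invented).
requirement = "The observatory shall downlink all science data within 24 hours of acquisition."

# Step 1: translate the requirement into functions that must be performed.
functions = ["Acquire payload data", "Store mission data", "Transmit data to ground"]

# Step 2: decompose/allocate the functions to product breakdown structure elements.
allocation = {
    "Acquire payload data": "Payload element",
    "Store mission data": "Spacecraft bus / command and data handling",
    "Transmit data to ground": "Spacecraft bus / communications",
}

# Step 3: identify the functional interfaces implied by the end-to-end flow.
interfaces = [
    ("Acquire payload data", "Store mission data"),
    ("Store mission data", "Transmit data to ground"),
]

for func in functions:
    print(f"{func} -> allocated to {allocation[func]}")
```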


There will almost certainly be alternative ways to decompose functions; therefore, the outcome is highly dependent on the creativity, skills, and experience of the engineers doing the analysis. As the analysis proceeds to lower levels of the architecture and system and the system is better understood, the systems engineer must keep an open mind and a willingness to go back and change previously established architecture and system requirements. These changes will then have to be decomposed down through the architecture and systems again, with the recursive process continuing until the system is fully defined, with all of the requirements understood and known to be viable, verifiable, and internally consistent. Only at that point should the system architecture and requirements be baselined.

DOD Architecture Framework

New ways, called architecture frameworks, have been developed in the last decade to describe and characterize evolving, complex system-of-systems. In such circumstances, architecture descriptions are very useful in ensuring that stakeholder needs are clearly understood and prioritized, that critical details such as interoperability are addressed upfront, and that major investment decisions are made strategically. In recognition of this, the U.S. Department of Defense has established policies that mandate the use of the DODAF in capital planning, acquisition, and joint capabilities integration.

An architecture can be understood as "the structure of components, their relationships, and the principles and guidelines governing their design and evolution over time."* To describe an architecture, the DODAF defines several views: operational, systems, and technical standards. In addition, a dictionary and summary information are also required. (See figure below.)

(Figure: the Operational View identifies what needs to be accomplished and by whom; the Systems View relates systems and characteristics to operational needs; and the Technical Standards View prescribes the standards and conventions governing interoperable implementation or procurement of the selected system capabilities.)

Within each of these views, DODAF contains specific products. For example, within the Operational View is a description of the operational nodes, their connectivity, and information exchange requirements. Within the Systems View is a description of all the systems contained in the operational nodes and their interconnectivity. Not all DODAF products are relevant to NASA systems engineering, but its underlying concepts and formalisms may be useful in structuring complex problems for the Technical Requirements Definition and Decision Analysis Processes.

*Definition based on Institute of Electrical and Electronics Engineers (IEEE) STD 610.12.
Source: DOD, DOD Architecture Framework.

4.3.1.3 Outputs

Typical outputs of the Logical Decomposition Process would include the following:


- System Architecture Model: Defines the underlying structure and relationship of the elements of the system (e.g., hardware, software, communications, operations, etc.) and the basis for the partitioning of requirements into lower levels to the point that design work can be accomplished.
- End Product Requirements: A defined set of make-to, buy-to, code-to, and other requirements from which design solutions can be accomplished.

4.3.2 Logical Decomposition Guidance

4.3.2.1 Product Breakdown Structure

The decompositions represented by the PBS and the Work Breakdown Structure (WBS) form important perspectives on the desired product system. The WBS is a hierarchical breakdown of the work necessary to complete the project. See Subsection 6.1.2.1 for further information on WBS development. The WBS contains the PBS, which is the hierarchical breakdown of the products such as hardware items, software items, and information items (documents, databases, etc.). The PBS is used during the Logical Decomposition and functional analysis processes. The PBS should be carried down to the lowest level for which there is a cognizant engineer or manager. Figure 4.3-2 is an example of a PBS.

Figure 4.3-2 Example of a PBS (a flight segment decomposed into a payload element with telescope, detectors, electronics, and spacecraft interface; a spacecraft bus with structure, power, electrical, thermal, payload interface, command and data handling, guidance, navigation, and control, propulsion, mechanisms, and communications; and launch accommodations with payload attached fitting and electrical supply)

4.3.2.2 Functional Analysis Techniques

Although there are many techniques available to perform functional analysis, some of the more popular are (1) Functional Flow Block Diagrams (FFBDs) to depict task sequences and relationships, (2) N2 diagrams (or N x N interaction matrices) to identify interactions or interfaces between major factors from a systems perspective, and (3) Timeline Analyses (TLAs) to depict the time sequence of time-critical functions.

Functional Flow Block Diagrams

The primary functional analysis technique is the functional flow block diagram. The purpose of the FFBD is to indicate the sequential relationship of all functions that must be accomplished by a system. When completed, these diagrams show the entire network of actions that lead to the fulfillment of a function.

FFBDs specifically depict each functional event (represented by a block) occurring following the preceding function. Some functions may be performed in parallel, or alternative paths may be taken. The FFBD network shows the logical sequence of "what" must happen; it does not ascribe a time duration to functions or between functions. The duration of the function and the time between functions may vary from a fraction of a second to many weeks. To understand time-critical requirements, a TLA is used. (See the TLA discussion later in this subsection.)

The FFBDs are function oriented, not equipment oriented. In other words, they identify "what" must happen and must not assume a particular answer to "how" a function will be performed. The "how" is then defined for each block at a given level by defining the "what" functions at the next lower level necessary to accomplish that block.


In this way, FFBDs are developed from the top down, in a series of levels, with tasks at each level identified through functional decomposition of a single task at a higher level. The FFBD displays all of the tasks at each level in their logical, sequential relationship, with their required inputs and anticipated outputs (including metrics, if applicable), plus a clear link back to the single, higher level task.

An example of an FFBD is shown in Figure 4.3-3. The FFBD depicts the entire flight mission of a spacecraft. Each block in the first level of the diagram is expanded to a series of functions, as shown in the second-level diagram for "Perform Mission Operations." Note that the diagram shows both input ("Transfer to OPS Orbit") and output ("Transfer to STS Orbit"), thus initiating the interface identification and control process. Each block in the second-level diagram can be progressively developed into a series of functions, as shown in the third-level diagram.

Figure 4.3-3 Example of a functional flow block diagram (top level: 1.0 Ascent Into Orbit Injection, 2.0 Check Out and Deploy, 3.0 Transfer to OPS Orbit, 4.0 Perform Mission Operations, 5.0 Contingency Operations, 6.0 Transfer to STS Orbit, 7.0 Retrieve Spacecraft, and 8.0 Reenter and Land; the second level expands 4.0 into functions such as provide electric power, attitude stabilization, thermal control, and orbit maintenance, receive and store/process commands, acquire payload and subsystem status data, and transmit payload and subsystem data; the third level expands 4.8 Acquire Payload Data into pointing, tracking, radar operation, and signal processing steps)
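One lightweight way to capture an FFBD such as Figure 4.3-3 is as a directed graph in which each block lists the blocks that may follow it. The sketch below encodes the top-level flow of the figure; the branch topology is approximated from the flattened figure and is illustrative only.

```python
# Illustrative encoding of the top-level FFBD of Figure 4.3-3 as a directed graph:
# each block maps to the blocks that may follow it (alternative paths included).
ffbd = {
    "1.0 Ascent Into Orbit Injection": ["2.0 Check Out and Deploy"],
    "2.0 Check Out and Deploy": ["3.0 Transfer to OPS Orbit"],
    "3.0 Transfer to OPS Orbit": ["4.0 Perform Mission Operations"],
    "4.0 Perform Mission Operations": ["6.0 Transfer to STS Orbit",
                                       "5.0 Contingency Operations"],   # OR branch
    "5.0 Contingency Operations": ["6.0 Transfer to STS Orbit"],
    "6.0 Transfer to STS Orbit": ["7.0 Retrieve Spacecraft"],
    "7.0 Retrieve Spacecraft": ["8.0 Reenter and Land"],
    "8.0 Reenter and Land": [],
}

def paths(block, trail=()):
    """Enumerate end-to-end functional paths through the diagram."""
    trail = trail + (block,)
    successors = ffbd[block]
    if not successors:
        yield trail
    for nxt in successors:
        yield from paths(nxt, trail)

for p in paths("1.0 Ascent Into Orbit Injection"):
    print(" -> ".join(p))
```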


FFBDs are used to develop, analyze, and flow down requirements, as well as to identify profitable trade studies, by identifying alternative approaches to performing each function. In certain cases, alternative FFBDs may be used to represent various means of satisfying a particular function until trade study data are acquired to permit selection among the alternatives.

The flow diagram also provides an understanding of the total operation of the system, serves as a basis for development of operational and contingency procedures, and pinpoints areas where changes in operational procedures could simplify the overall system operation.

N2 Diagrams

The N-squared (N2) diagram is used to develop system interfaces. An example of an N2 diagram is shown in Figure 4.3-4. The system components or functions are placed on the diagonal; the remainder of the squares in the N x N matrix represent the interface inputs and outputs. Where a blank appears, there is no interface between the respective components or functions. The N2 diagram can be taken down into successively lower levels to the component functional levels. In addition to defining the interfaces, the N2 diagram also pinpoints areas where conflicts could arise in interfaces, and highlights input and output dependency assumptions and requirements.

Figure 4.3-4 Example of an N2 diagram (systems or subsystems A through H on the diagonal; off-diagonal squares mark electrical (E), mechanical (M), and supplied services (SS) interfaces between them; an external input (alpha) enters and an external output (beta) leaves the matrix)

Timeline Analysis

TLA adds consideration of functional durations and is performed on those areas where time is critical to mission success, safety, utilization of resources, minimization of downtime, and/or increasing availability. TLA can be applied to such diverse operational functions as spacecraft command sequencing and launch; but for those functional sequences where time is not a critical factor, FFBDs or N2 diagrams are sufficient. The following areas are often categorized as time-critical: (1) functions affecting system reaction time, (2) mission turnaround time, (3) time countdown activities, and (4) functions for which optimum equipment and/or personnel utilization are dependent on the timing of particular activities.

Timeline Sheets (TLSs) are used to perform and record the analysis of time-critical functions and functional sequences. For time-critical functional sequences, the time requirements are specified with associated tolerances. Additional tools such as mathematical models and computer simulations may be necessary to establish the duration of each timeline.

For additional information on FFBDs, N2 diagrams, timeline analysis, and other functional analysis methods, see Appendix F.
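An N2 matrix like Figure 4.3-4 can also be assembled programmatically, with elements on the diagonal and interface entries off the diagonal; the subsystem names and interface types below are invented for the example.

```python
# Illustrative N2 matrix: subsystems on the diagonal, interfaces off-diagonal.
subsystems = ["Power", "Avionics", "Thermal", "Comm"]
interfaces = {                                   # (from, to) -> interface type (hypothetical)
    ("Power", "Avionics"): "electrical",
    ("Avionics", "Comm"): "data",
    ("Avionics", "Thermal"): "data",
    ("Thermal", "Power"): "mechanical",
}

n = len(subsystems)
matrix = [["" for _ in range(n)] for _ in range(n)]
for i, s in enumerate(subsystems):
    matrix[i][i] = s                             # diagonal holds the element itself
for (src, dst), kind in interfaces.items():
    matrix[subsystems.index(src)][subsystems.index(dst)] = kind   # blank cells mean no interface

for row in matrix:
    print(" | ".join(cell.ljust(10) for cell in row))
```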


4.4 Design Solution Definition

The Design Solution Definition Process is used to translate the high-level requirements derived from the stakeholder expectations and the outputs of the Logical Decomposition Process into a design solution. This involves transforming the defined logical decomposition models and their associated sets of derived technical requirements into alternative solutions. These alternative solutions are then analyzed through detailed trade studies that result in the selection of a preferred alternative. This preferred alternative is then fully defined into a final design solution that will satisfy the technical requirements. This design solution definition will be used to generate the end product specifications that will be used to produce the product and to conduct product verification. This process may be further refined depending on whether there are additional subsystems of the end product that need to be defined.

4.4.1 Process Description

Figure 4.4-1 provides a typical flow diagram for the Design Solution Definition Process and identifies typical inputs, outputs, and activities to consider in addressing design solution definition.

Figure 4.4-1 Design Solution Definition Process (inputs: baselined logical decomposition models and baselined derived technical requirements; activities: define alternative design solutions, analyze each alternative, select the best design solution alternative, generate the full design description of the selected solution, verify the fully defined design solution, initiate development of enabling products and of lower level products as needed, and baseline the design solution specified requirements and design descriptions; outputs: system-specified and end-product-specified requirements, initial subsystem specifications, enabling product requirements, the product verification and product validation plans, and logistics and operate-to procedures)


4.4.1.1 Inputs

There are several fundamental inputs needed to initiate the Design Solution Definition Process:

- Technical Requirements: The customer and stakeholder needs that have been translated into a reasonably complete set of validated requirements for the system, including all interface requirements.
- Logical Decomposition Models: Requirements decomposed by one or more different methods (e.g., function, time, behavior, data flow, states, modes, system architecture, etc.).

4.4.1.2 Process Activities

Define Alternative Design Solutions

The realization of a system over its life cycle involves a succession of decisions among alternative courses of action. If the alternatives are precisely defined and thoroughly understood to be well differentiated in the cost-effectiveness space, then the systems engineer can make choices among them with confidence.

To obtain assessments that are crisp enough to facilitate good decisions, it is often necessary to delve more deeply into the space of possible designs than has yet been done, as is illustrated in Figure 4.4-2. It should be realized, however, that this illustration represents neither the project life cycle, which encompasses the system development process from inception through disposal, nor the product development process by which the system design is developed and implemented.

Figure 4.4-2 The doctrine of successive refinement (a repeating cycle of recognizing the need or opportunity, identifying and quantifying goals, creating concepts, doing trade studies, selecting a design, and increasing resolution, leading ultimately to implementing decisions and performing the mission)

Each create concepts step in Figure 4.4-2 involves a recursive and iterative design loop driven by the set of stakeholder expectations where a strawman architecture/design, the associated ConOps, and the derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. This recursive and iterative design loop is illustrated in Figure 4.0-1.

Each create concepts step also involves an assessment of potential capabilities offered by the continually changing state of technology and potential pitfalls captured through experience-based review of prior program/project lessons learned data. It is imperative that there be a continual interaction between the technology development process and the design process to ensure that the design reflects the realities of the available technology and that overreliance on immature technology is avoided. Additionally, the state of any technology that is considered enabling must be properly monitored, and care must be taken when assessing the impact of this technology on the concept performance. This interaction is facilitated through a periodic assessment of the design with respect to the maturity of the technology required to implement the design. (See Subsection 4.4.2.1 for a more detailed discussion of technology assessment.) These technology elements usually exist at a lower level in the PBS. Although the process of design concept development by the integration of lower level elements is a part of the systems engineering process, there is always a danger that the top-down process cannot keep up with the bottom-up process. Therefore, system architecture issues need to be resolved early so that the system can be modeled with sufficient realism to do reliable trade studies.

As the system is realized, its particulars become clearer—but also harder to change.
The purpose of systems engineering is to make sure that the Design Solution Definition Process happens in a way that leads to the most cost-effective final system. The basic idea is that before those decisions that are hard to undo are made, the alternatives should be carefully assessed, particularly with respect to the maturity of the required technology.

Create Alternative Design Concepts

Once it is understood what the system is to accomplish, it is possible to devise a variety of ways that those goals can be met. Sometimes, that comes about as a consequence of considering alternative functional allocations and integrating available subsystem design options, all of which can have technologies at varying degrees of maturity.


Ideally, as wide a range of plausible alternatives as is consistent with the design organization's charter should be defined, keeping in mind the current stage in the process of successive refinement. When the bottom-up process is operating, a problem for the systems engineer is that the designers tend to become fond of the designs they create, so they lose their objectivity; the systems engineer often must stay an "outsider" so that there is more objectivity. This is particularly true in the assessment of the technological maturity of the subsystems and components required for implementation. There is a tendency on the part of technology developers and project management to overestimate the maturity and applicability of a technology that is required to implement a design. This is especially true of "heritage" equipment. The result is that critical aspects of systems engineering are often overlooked.

On the first turn of the successive refinement in Figure 4.4-2, the subject is often general approaches or strategies, sometimes architectural concepts. On the next, it is likely to be functional design, then detailed design, and so on. The reason for avoiding a premature focus on a single design is to permit discovery of the truly best design. Part of the systems engineer's job is to ensure that the design concepts to be compared take into account all interface requirements. "Did you include the cabling?" is a characteristic question. When possible, each design concept should be described in terms of controllable design parameters so that each represents as wide a class of designs as is reasonable. In doing so, the systems engineer should keep in mind that the potentials for change may include organizational structure, schedules, procedures, and any of the other things that make up a system. When possible, constraints should also be described by parameters.

Analyze Each Alternative Design Solution

The technical team analyzes how well each of the design alternatives meets the system goals (technology gaps, effectiveness, cost, schedule, and risk, both quantified and otherwise). This assessment is accomplished through the use of trade studies. The purpose of the trade study process is to ensure that the system architecture and design decisions move toward the best solution that can be achieved with the available resources. The basic steps in that process are:

- Devise some alternative means to meet the functional requirements. In the early phases of the project life cycle, this means focusing on system architectures; in later phases, emphasis is given to system designs.
- Evaluate these alternatives in terms of the MOEs and system cost. Mathematical models are useful in this step not only for forcing recognition of the relationships among the outcome variables, but also for helping to determine what the measures of performance must be quantitatively.
- Rank the alternatives according to appropriate selection criteria.
- Drop less promising alternatives and proceed to the next level of resolution, if needed.

The trade study process must be done openly and inclusively. While quantitative techniques and rules are used, subjectivity also plays a significant role. To make the process work effectively, participants must have open minds, and individuals with different skills—systems engineers, design engineers, specialty engineers, program analysts, decision scientists, and project managers—must cooperate. The right quantitative methods and selection criteria must be used.
Trade study assumptions, models, and results must be documented as part of the project archives. The participants must remain focused on the functional requirements, including those for enabling products. For an in-depth discussion of the trade study process, see Section 6.8. The ability to perform these studies is enhanced by the development of system models that relate the design parameters to those assessments—but it does not depend upon them.

The technical team must consider a broad range of concepts when developing the system model. The model must define the roles of crew, hardware, and software in the system. It must identify the critical technologies required to implement the mission and must consider the entire life cycle, from fabrication to disposal. Evaluation criteria for selecting concepts must be established. Cost is always a limiting factor. However, other criteria, such as time to develop and certify a unit, risk, and reliability, also are critical. This stage cannot be accomplished without addressing the roles of operators and maintainers. These contribute significantly to life-cycle costs and to the system reliability. Reliability analysis should be performed based upon estimates of component failure rates for hardware. If probabilistic risk assessment models are applied, it may be necessary to include occurrence rates or probabilities for software faults or human error events. Assessments of the maturity of the required technology must be done and a technology development plan developed.
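As one common way to support the ranking step of the trade study process above, alternatives can be scored against weighted evaluation criteria. Weighted scoring is only one of several quantitative techniques (the Decision Analysis Process in Section 6.8 covers the broader treatment), and the criteria, weights, and scores below are purely notional.

```python
# Illustrative weighted-scoring step of a trade study (notional numbers).
criteria_weights = {"effectiveness": 0.4, "life_cycle_cost": 0.3, "risk": 0.2, "schedule": 0.1}

# Normalized scores per alternative (1.0 = best) against each criterion.
alternatives = {
    "Concept A": {"effectiveness": 0.9, "life_cycle_cost": 0.5, "risk": 0.4, "schedule": 0.6},
    "Concept B": {"effectiveness": 0.7, "life_cycle_cost": 0.8, "risk": 0.9, "schedule": 0.8},
    "Concept C": {"effectiveness": 0.6, "life_cycle_cost": 0.9, "risk": 0.3, "schedule": 0.7},
}

def weighted_score(scores):
    """Combine the criterion scores of one alternative into a single figure of merit."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranking = sorted(alternatives, key=lambda a: weighted_score(alternatives[a]), reverse=True)
for alt in ranking:
    print(f"{alt}: {weighted_score(alternatives[alt]):.2f}")
```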


Controlled modification and development of design concepts, together with such system models, often permits the use of formal optimization techniques to find regions of the design space that warrant further investigation.

Whether system models are used or not, the design concepts are developed, modified, reassessed, and compared against competing alternatives in a closed-loop process that seeks the best choices for further development. System and subsystem sizes are often determined during the trade studies. The end result is the determination of bounds on the relative cost-effectiveness of the design alternatives, measured in terms of the quantified system goals. (Only bounds, rather than final values, are possible because determination of the final details of the design is intentionally deferred.) Increasing detail associated with the continually improving resolution reduces the spread between upper and lower bounds as the process proceeds.

Select the Best Design Solution Alternative

The technical team selects the best design solution from among the alternative design concepts, taking into account subjective factors that the team was unable to quantify as well as estimates of how well the alternatives meet the quantitative requirements; the maturity of the available technology; and any effectiveness, cost, schedule, risk, or other constraints.

The Decision Analysis Process, as described in Section 6.8, should be used to make an evaluation of the alternative design concepts and to recommend the "best" design solution.

When it is possible, it is usually well worth the trouble to develop a mathematical expression, called an "objective function," that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness, as illustrated in Figure 4.4-3, even if both cost and effectiveness must be described by more than one measure.

The objective function (or "cost function") assigns a real number to candidate solutions or "feasible solutions" in the alternative space or "search space." A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an "optimal solution." When achievement of the goals can be quantitatively expressed by such an objective function, designs can be compared in terms of their value. Risks associated with design concepts can cause these evaluations to be somewhat nebulous (because they are uncertain and are best described by probability distributions).

Figure 4.4-3 A quantitative objective function, dependent on life-cycle cost and all aspects of effectiveness (effectiveness, expressed in quantitative units, is plotted against life-cycle cost, expressed in constant dollars; the different shaded areas indicate different levels of uncertainty, dashed lines represent constant values of the objective function (cost-effectiveness), higher values of cost-effectiveness are achieved by moving toward the upper left, and A, B, and C are design concepts with different risk patterns)

In Figure 4.4-3, the risks are relatively high for design concept A. There is little risk in either effectiveness or cost for concept B, while the risk of an expensive failure is high for concept C, as is shown by the cloud of probability near the x axis with a high cost and essentially no effectiveness.
Schedule factors may affect the effectiveness and cost values and the risk distributions.

The mission success criteria for systems differ significantly. In some cases, effectiveness goals may be much more important than all others. Other projects may demand low costs, have an immutable schedule, or require minimization of some kinds of risks. Rarely (if ever) is it possible to produce a combined quantitative measure that relates all of the important factors, even if it is expressed as a vector with several components. Even when that can be done, it is essential that the underlying factors and relationships be thoroughly revealed to and understood by the systems engineer. The systems engineer must weigh the importance of the unquantifiable factors along with the quantitative data.
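To illustrate the idea of an objective function whose value is uncertain for each concept, the sketch below uses a simple effectiveness-per-unit-cost form and random sampling to bound the cost-effectiveness of notional concepts A, B, and C; the functional form, distributions, and numbers are invented and are not the handbook's prescribed method.

```python
# Illustrative objective function: cost-effectiveness = effectiveness / life-cycle cost,
# evaluated under uncertainty by simple random sampling (all numbers notional).
import random

concepts = {   # (mean effectiveness, effectiveness spread, mean cost $M, cost spread)
    "A": (0.80, 0.25, 900.0, 300.0),
    "B": (0.70, 0.05, 800.0, 50.0),
    "C": (0.75, 0.35, 950.0, 400.0),
}

def sample_objective(eff_mu, eff_sd, cost_mu, cost_sd):
    """Draw one sample of the objective; higher is better (upper left of Figure 4.4-3)."""
    eff = max(0.0, random.gauss(eff_mu, eff_sd))
    cost = max(1.0, random.gauss(cost_mu, cost_sd))
    return eff / cost

for name, params in concepts.items():
    draws = sorted(sample_objective(*params) for _ in range(10_000))
    lo, hi = draws[500], draws[9500]       # rough 90 percent bounds on cost-effectiveness
    print(f"Concept {name}: {lo:.2e} to {hi:.2e}")
```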


Technical reviews of the data and analyses, including technology maturity assessments, are an important part of the decision support packages prepared for the technical team. The decisions that are made are generally entered into the configuration management system as changes to (or elaborations of) the system baseline. The supporting trade studies are archived for future use. An essential feature of the systems engineering process is that trade studies are performed before decisions are made. They can then be baselined with much more confidence.

Increase the Resolution of the Design

The successive refinement process of Figure 4.4-2 illustrates a continuing refinement of the system design. At each level of decomposition, the baselined derived (and allocated) requirements become the set of high-level requirements for the decomposed elements, and the process begins again. One might ask, "When do we stop refining the design?" The answer is that the design effort proceeds to a depth that is sufficient to meet several needs: the design must penetrate sufficiently to allow analytical validation of the design to the requirements; it must also have sufficient depth to support cost modeling and to convince a review team of a feasible design with performance, cost, and risk margins.

The systems engineering engine is applied again and again as the system is developed. As the system is realized, the issues addressed evolve and the particulars of the activity change. Most of the major system decisions (goals, architecture, acceptable life-cycle cost, etc.) are made during the early phases of the project, so the successive refinements do not correspond precisely to the phases of the system life cycle. Much of the system architecture can be seen even at the outset, so the successive refinements do not correspond exactly to development of the architectural hierarchy, either. Rather, they correspond to the successively greater resolution by which the system is defined.

It is reasonable to expect the system to be defined with better resolution as time passes. This tendency is formalized at some point (in Phase B) by defining a baseline system definition. Usually, the goals, objectives, and constraints are baselined as the requirements portion of the baseline. The entire baseline is then subjected to configuration control in an attempt to ensure that any subsequent changes are indeed justified and affordable.

At this point in the systems engineering process, there is a logical branch point. For those issues for which the process of successive refinement has proceeded far enough, the next step is to implement the decisions at that level of resolution. For those issues that are still insufficiently resolved, the next step is to refine the development further.

Fully Describe the Design Solution

Once the preferred design alternative has been selected and the proper level of refinement has been completed, then the design is fully defined into a final design solution that will satisfy the technical requirements. The design solution definition will be used to generate the end product specifications that will be used to produce the product and to conduct product verification.
This processmay be further refined depending on whether thereare additional subsystems of the end product that needto be defined.The scope and content of the full design descriptionmust be appropriate for the product life-cycle phase, thephase success criteria, and the product position in thePBS (system structure). Depending on these factors, theform of the design solution definition could be simply asimulation model or a paper study report. The technicaldata package evolves from phase to phase, starting withconceptual sketches or models and ending with com pletedrawings, parts list, and other details needed for productimplementation or product integration. Typical outputdefinitions from the Design Solution Definition Processare shown in Figure 4.4-1 and are described in Subsection4.4.1.3.Verify the Design SolutionOnce an acceptable design solution has been selectedfrom among the various alternative designs and documentedin a technical data package, the design solutionmust next be verified against the system requirementsand constraints. A method to achieve this verificationis by means of a peer review to evaluate the resultingde sign solution definition. Guidelines for conducting apeer review are discussed in Section 6.7.In addition, peer reviews play a significant role as a detailedtechnical component of higher level technical and<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 59


In addition, peer reviews play a significant role as a detailed technical component of higher level technical and programmatic reviews. For example, the peer review of a component battery design can go into much more technical detail on the battery than the integrated power subsystem review. Peer reviews can cover the components of a subsystem down to the level appropriate for verifying the design against the requirements. Concerns raised at the peer review might have implications on the power subsystem design and verification and therefore must be reported at the next higher level review of the power subsystem.

The verification must show that the design solution definition:

- Is realizable within the constraints imposed on the technical effort;
- Has specified requirements that are stated in acceptable statements and have bidirectional traceability with the derived technical requirements, technical requirements, and stakeholder expectations; and
- Has decisions and assumptions made in forming the solution consistent with its set of derived technical requirements, separately allocated technical requirements, and identified system product and service constraints.

This design solution verification is in contrast to the verification of the end product described in the end product verification plan, which is part of the technical data package. That verification occurs in a later life-cycle phase and is a result of the Product Verification Process (see Section 5.3) applied to the realization of the design solution as an end product.

Validate the Design Solution

The validation of the design solution is a recursive and iterative process as shown in Figure 4.0-1. Each alternative design concept is validated against the set of stakeholder expectations. The stakeholder expectations drive the iterative design loop in which a strawman architecture/design, the ConOps, and the derived requirements are developed. These three products must be consistent with each other and will require iterations and design decisions to achieve this consistency. Once consistency is achieved, functional analyses allow the study team to validate the design against the stakeholder expectations. A simplified validation asks the questions: Does the system work? Is the system safe and reliable? Is the system affordable? If the answer to any of these questions is no, then changes to the design or stakeholder expectations will be required, and the process is started over again. This process continues until the system—architecture, ConOps, and requirements—meets the stakeholder expectations.

This design solution validation is in contrast to the validation of the end product described in the end product validation plan, which is part of the technical data package. That validation occurs in a later life-cycle phase and is a result of the Product Validation Process (see Section 5.4) applied to the realization of the design solution as an end product.

Identify Enabling Products

Enabling products are the life-cycle support products and services (e.g., production, test, deployment, training, maintenance, and disposal) that facilitate the progression and use of the operational end product through its life cycle. Since the end product and its enabling products are interdependent, they are viewed as a system. Project responsibility thus extends to responsibility for acquiring services from the relevant enabling products in each life-cycle phase.
When a suitable enabling product does not already exist, the project that is responsible for the end product also can be responsible for creating and using the enabling product.

Therefore, an important activity in the Design Solution Definition Process is the identification of the enabling products that will be required during the life cycle of the selected design solution and then initiating the acquisition or development of those enabling products. Need dates for the enabling products must be realistically identified on the project schedules, incorporating appropriate schedule slack. Then firm commitments in the form of contracts, agreements, and/or operational plans must be put in place to ensure that the enabling products will be available when needed to support the product-line life-cycle phase activities. The enabling product requirements are documented as part of the technical data package for the Design Solution Definition Process.

An environmental test chamber would be an example of an enabling product whose use would be acquired at an appropriate time during the test phase of a space flight system.

Special test fixtures or special mechanical handling devices would be examples of enabling products that would have to be created by the project.


Because of long development times as well as oversubscribed facilities, it is important to identify enabling products and secure the commitments for them as early in the design phase as possible.

Baseline the Design Solution

As shown earlier in Figure 4.0-1, once the selected system design solution meets the stakeholder expectations, the study team baselines the products and prepares for the next life-cycle phase. Because of the recursive nature of successive refinement, intermediate levels of decomposition are often validated and baselined as part of the process. In the next level of decomposition, the baselined requirements become the set of high-level requirements for the decomposed elements, and the process begins again.

Baselining a particular design solution enables the technical team to focus on one design out of all the alternative design concepts. This is a critical point in the design process. It puts a stake in the ground and gets everyone on the design team focused on the same concept. When dealing with complex systems, it is difficult for team members to design their portion of the system if the system design is a moving target. The baselined design is documented and placed under configuration control. This includes the system requirements, specifications, and configuration descriptions.

While baselining a design is beneficial to the design process, there is a danger if it is exercised too early in the Design Solution Definition Process. The early exploration of alternative designs should be free and open to a wide range of ideas, concepts, and implementations. Baselining too early takes the inventive nature out of the concept exploration. Therefore, baselining should be one of the last steps in the Design Solution Definition Process.

4.4.1.3 Outputs

Outputs of the Design Solution Definition Process are the specifications and plans that are passed on to the product realization processes. They contain the design-to, build-to, and code-to documentation that complies with the approved baseline for the system.

As mentioned earlier, the scope and content of the full design description must be appropriate for the product-line life-cycle phase, the phase success criteria, and the product position in the PBS.

Outputs of the Design Solution Definition Process include the following:

- The System Specification: The system specification contains the functional baseline for the system that is the result of the Design Solution Definition Process. The system design specification provides sufficient guidance, constraints, and system requirements for the design engineers to execute the design.
- The System External Interface Specifications: The system external interface specifications describe the functional baseline for the behavior and characteristics of all physical interfaces that the system has with the external world. These include all structural, thermal, electrical, and signal interfaces, as well as the human-system interfaces.
- The End-Product Specifications: The end-product specifications contain the detailed build-to and code-to requirements for the end product.
They are detailed,exact statements of design particulars, suchas statements prescribing materials, dimensions, andquality of work to build, install, or manufacture theend product.z z The End-Product Interface Specifications: Theend-product interface specifications contain thedetailed build-to and code-to requirements forthe behavior and characteristics of all logical andphysical inter faces that the end product has withexternal elements, including the human-system interfaces.z z Initial Subsystem Specifications: The end-productsubsystem initial specifications provide detailed informationon subsystems if they are required.z z Enabling Product Requirements: The requirementsfor associated supporting enabling products providedetails of all enabling products. Enabling products arethe life-cycle support products and services that facilitatethe progression and use of the operational endproduct through its life cycle. They are viewed as partof the system since the end product and its enablingproducts are interdependent.zzProduct Verification Plan: The end-product verificationplan provides the content and depth of detail necessaryto provide full visibility of all verification activitiesfor the end product. Depending on the scope ofthe end product, the plan encompasses qualification,acceptance, prelaunch, operational, and disposal verificationactivities for flight hardware and software.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 61


4.0 System DesignzzzzProduct Validation Plan: The end-product validationplan provides the content and depth of detailnecessary to provide full visibility of all activities tovalidate the realized product against the baselinedstakeholder expectations. The plan identifies the typeof validation, the validation procedures, and the validationenvironment that are appropriate to confirmthat the realized end product conforms to stakeholderexpectations.Logistics and Operate-to Procedures: The applicablelogistics and operate-to procedures for the system describesuch things as handling, transportation, maintenance,long-term storage, and operational considerationsfor the particular design solution.4.4.2 Design Solution Definition Guidance4.4.2.1 Technology AssessmentAs mentioned in the process description (Subsection4.4.1), the creation of alternative design solutions involvesassessment of potential capabilities offered by thecontinually changing state of technology. A continual interactionbetween the technology development processand the design process ensures that the design reflectsthe realities of the available technology. This interactionis facilitated through periodic assessment of the designwith respect to the maturity of the technology requiredto implement the design.After identifying the technology gaps existing in a givendesign concept, it will frequently be necessary to undertaketechnology development in order to ascertain viability.Given that resources will always be limited, it willbe necessary to pursue only the most promising technologiesthat are required to enable a given concept.If requirements are defined without fully understandingthe resources required to accomplish needed technologydevelopments then the program/project is at risk. Technologyassessment must be done iteratively until requirementsand available resources are aligned within an acceptablerisk posture. Technology development plays afar greater role in the life cycle of a program/project thanhas been traditionally considered, and it is the role of thesystems engineer to develop an understanding of the extentof program/project impacts—maximizing benefitsand minimizing adverse effects. Traditionally, from aprogram/project perspective, technology developmenthas been associated with the development and incorporationof any “new” technology necessary to meet requirements.However, a frequently overlooked area isthat associated with the modification of “heritage” systemsincorporated into different architectures and operatingin different environments from the ones for whichthey were designed. If the required modifications and/or operating environments fall outside the realm of experience,then these too should be considered technologydevelopment.To understand whether or not technology developmentis required—and to subsequently quantify the associatedcost, schedule, and risk—it is necessary to systematicallyassess the maturity of each system, subsystem, or componentin terms of the architecture and operational environment.It is then necessary to assess what is required inthe way of development to advance the maturity to a pointwhere it can successfully be incorporated within cost,schedule, and performance constraints. 
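As an illustration only, the short Python sketch below screens hypothetical elements against an assumed technology readiness level (TRL) threshold and flags "heritage" items that would operate outside the environment for which they were qualified; the element names, TRL values, and the TRL 6 threshold are assumptions made for this example, not handbook requirements.

```python
# Illustrative only: a minimal technology-maturity screening aid.
# Element names, TRL values, and the threshold below are hypothetical.

REQUIRED_TRL = 6  # assumed maturity threshold for this sketch

elements = {
    # element: (current TRL, does the heritage environment match the intended environment?)
    "star tracker (heritage)": (9, False),   # heritage unit, but a new thermal/radiation environment
    "cryocooler": (5, True),
    "composite tank": (4, True),
}

def needs_technology_development(trl: int, env_match: bool, required: int = REQUIRED_TRL) -> bool:
    """Flag an element if it is below the required TRL, or if a 'heritage'
    element will be used outside the environment it was qualified for."""
    return trl < required or not env_match

for name, (trl, env_match) in elements.items():
    flag = needs_technology_development(trl, env_match)
    print(f"{name}: TRL {trl}, environment match={env_match} -> "
          f"{'technology development required' if flag else 'mature for this use'}")
```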
A process for accomplishingthis assessment is described in Ap pendix G.Because technology development has the po tential forsuch significant impacts on a program/project, technologyassessment needs to play a role throughout the design anddevelopment process from concept de velopment throughPreliminary Design Review (PDR). Lessons learned froma technology development point of view should then becaptured in the final phase of the program.4.4.2.2 Integrating <strong>Engineering</strong> Specialtiesinto the <strong>Systems</strong> <strong>Engineering</strong> ProcessAs part of the technical effort, specialty engineers inco operation with systems engineering and subsystemde signers often perform tasks that are common acrossdisciplines. Foremost, they apply specialized analyticaltechniques to create information needed by the projectmanager and systems engineer. They also help defineand write system requirements in their areas of expertise,and they review data packages, <strong>Engineering</strong> Change Requests(ECRs), test results, and documentation for majorproject reviews. The project manager and/or systems engineerneeds to ensure that the information and productsso generated add value to the project commensuratewith their cost. The specialty engineering technical effortshould be well integrated into the project. The roles andresponsibilities of the specialty engineering disciplinesshould be summarized in the SEMP.The specialty engineering disciplines included in thishandbook are safety and reliability, Quality Assurance62 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


4.4 Design Solution Definition(QA), ILS, maintainability, producibility, and humanfactors. An overview of these specialty engineering disciplinesis provided to give systems engineers a brief introduction.It is not intended to be a handbook for any ofthese discipline specialties.Safety and ReliabilityOverview and PurposeA reliable system ensures mission success by functioningproperly over its intended life. It has a low and accept ableprobability of failure, achieved through simplicity, properdesign, and proper application of reliable parts and materials.In addition to long life, a reliable system is robust andfault tolerant, meaning it can tolerate fail ures and variationsin its operating parameters and en vironments.Safety and Reliability in the System DesignProcessA focus on safety and reliability throughout the missionlife cycle is essential for ensuring mission success. Thefidelity to which safety and reliability are designed andbuilt into the system depends on the information neededand the type of mission. For human-rated systems, safetyand reliability is the primary objective throughout thedesign process. For science missions, safety and reliabilityshould be commensurate with the funding andlevel of risk a program or project is willing to accept. Regardlessof the type of mission, safety and reliability considerationsmust be an intricate part of the system designprocesses.To realize the maximum benefit from reliability analysis,it is essential to integrate the risk and reliability analystswithin the design teams. The importance of this cannotbe overstated. In many cases, the reliability and risk analystsperform the analysis on the design after it has beenformulated. In this case, safety and reliability features areadded on or outsourced rather than designed in. Thisresults in unrealistic analysis that is not focused on riskdrivers and does not provide value to the design.Risk and reliability analyses evolve to answer key questionsabout design trades as the design matures. Reliabilityanalyses utilize information about the system,identify sources of risk and risk drivers, and providean important input for decisionmaking. <strong>NASA</strong>-STD-8729.1, Planning, Developing, and Maintaining an EffectiveReli ability and Maintainability (R&M) Programoutlines en gineering activities that should be tailoredfor each spe cific project. The concept is to choose an effectiveset of reliability and maintainability engineeringactivities to ensure that the systems designed, built, anddeployed will operate successfully for the required missionlife cycle.In the early phases of a project, risk and reliability analyseshelp designers understand the interrelationships ofrequirements, constraints, and resources, and uncoverkey relationships and drivers so they can be properly considered.The analyst must help designers go beyond therequirements to understand implicit dependencies thatemerge as the design concept matures. It is unrealistic toassume that design requirements will correctly captureall risk and reliability issues and “force” a reliable design.The systems engineer should develop a system strategymapped to the PBS on how to allocate and coordinatereliability, fault tolerance, and recovery between systemsboth horizontally and vertically within the architectureto meet the total mission requirements. System impactsof designs must play a key role in the design. 
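As a simple illustration of how a reliability estimate and its uncertainty can be carried together into design trades, the hedged Python sketch below samples assumed component failure-probability ranges and reports a mean and spread for a series system; the component names and ranges are invented for this example and are not analysis guidance.

```python
# Illustrative only: a toy series-system reliability estimate with uncertainty.
# Component names and failure-probability ranges are assumptions.
import random

components = {
    "propulsion": (0.005, 0.02),   # assumed range of failure probability per mission
    "avionics":   (0.001, 0.01),
    "power":      (0.002, 0.015),
}

def sample_mission_reliability() -> float:
    """One Monte Carlo draw: sample each component's failure probability from its
    assumed range and combine in series (mission succeeds only if every component succeeds)."""
    r = 1.0
    for low, high in components.values():
        r *= 1.0 - random.uniform(low, high)
    return r

draws = sorted(sample_mission_reliability() for _ in range(10_000))
mean = sum(draws) / len(draws)
print(f"mean mission reliability ~ {mean:.4f}")
print(f"5th-95th percentile: {draws[int(0.05 * len(draws))]:.4f} - {draws[int(0.95 * len(draws))]:.4f}")
```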
Makingdesigners aware of impacts of their decisions on overallmission reliability is key.As the design matures, preliminary reliability analysisoccurs using established techniques. The design andconcept of operations should be thoroughly examinedfor accident initiators and hazards that could lead tomishaps. Conservative estimates of likelihood and consequencesof the hazards can be used as a basis for applyingdesign resources to reduce the risk of failures. Theteam should also ensure that the goals can be met andfailure modes are considered and take into account theentire system.During the latter phases of a project, the team uses riskassessments and reliability techniques to verify that thedesign is meeting its risk and reliability goals and to helpdevelop mitigation strategies when the goals are not metor discrepancies/failures occur.Analysis Techniques and MethodsThis subsection provides a brief summary of the types ofanalysis techniques and methods.zz Event sequence diagrams/event trees are models thatdescribe the sequence of events and responses to offnominalconditions that can occur during a mission.zz Failure Modes and Effects Analyses (FMEAs) arebottom-up analyses that identify the types of failures<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 63


4.0 System Designthat can occur within a system and identify the causes,effects, and mitigating strategies that can be employedto control the effects of the failures.zz Qualitative top-down logic models identify how fail-ures within a system can combine to cause an undesiredevent.zz Quantitative logic models (probabilistic risk assess-ment) extend the qualitative models to include thelikelihood of failure. These models involve developingfailure criteria based on system physics and systemsuccess criteria, and employing statistical techniquesto estimate the likelihood of failure along with uncertainty.zz Reliability block diagrams are diagrams of the ele-ments to evaluate the reliability of a system to providea function.zz Preliminary Hazard Analysis (PHA) is performedearly based on the functions performed during themission. Preliminary hazard analysis is a “what if”process that considers the potential hazard, initiatingevent scenarios, effects, and potential corrective measuresand controls. The objective is to determine if thehazard can be eliminated, and if not, how it can becontrolled.zz Hazard analysis evaluates the completed design.Hazard analysis is a “what if” process that considersthe potential hazard, initiating event, effects, and potentialcorrective measures and controls. The objectiveis to determine if the hazard can be eliminated,and if not, how it can be controlled.zz Human reliability analysis is a method to understandhow human failures can lead to system failure and estimatethe likelihood of those failures.zz Probabilistic structural analysis provides a way tocombine uncertainties in materials and loads to evaluatethe failure of a structural element.zz Sparing/logistics models provide a means to estimatethe interactions of systems in time. These models includeground-processing simulations and missioncampaign simulations.Limitations on Reliability AnalysisThe engineering design team must understand that reliabilityis expressed as the probability of mission success.Probability is a mathematical measure expressing thelikelihood of occurrence of a specific event. Therefore,probability estimates should be based on engineeringand historical data, and any stated probabilities shouldinclude some measure of the uncertainty surroundingthat estimate.Uncertainty expresses the degree of belief analysts havein their estimates. Uncertainty decreases as the quality ofdata and understanding of the system improve. The initialestimates of failure rates or failure probability mightbe based on comparison to similar equipment, historicaldata (heritage), failure rate data from handbooks, or expertelicitation.In summary,zz Reliability estimates express probability of success.zz Uncertainty should be included with reliability esti-mates.zz Reliability estimates combined with FMEAs provideadditional and valuable information to aid in the decisionmakingprocess.Quality AssuranceEven with the best designs, hardware fabrication andtesting are subject to human error. The systems engineerneeds to have some confidence that the system actuallyproduced and delivered is in accordance with its functional,performance, and design requirements. QA providesan independent assessment to the project manager/systems engineer of the items produced and processesused during the project life cycle. 
The project manager/systems engineer must work with the quality assuranceengineer to develop a quality assurance program (the extent,responsibility, and timing of QA activities) tailoredto the project it supports.QA is the mainstay of quality as practiced at <strong>NASA</strong>.NPD 8730.5, <strong>NASA</strong> Quality Assurance Program Policystates that <strong>NASA</strong>’s policy is “to comply with prescribedre quirements for performance of work and to providefor independent assurance of compliance through implementationof a quality assurance program.” The qualityfunction of Safety and Mission Assurance (SMA) ensuresthat both contractors and other <strong>NASA</strong> functionsdo what they say they will do and say what they intend todo. This ensures that end product and program quality,reliability, and overall risk are at the level planned.The <strong>Systems</strong> Engineer’s Relationship to QAAs with reliability, producibility, and other characteristics,quality must be designed as an integral part of any64 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


4.4 Design Solution Definitionsystem. It is important that the systems engineer understandsSMA’s safeguarding role in the broad context oftotal risk and supports the quality role explicitly and vigorously.All of this is easier if the SMA quality function isactively included and if quality is designed in with buyinby all roles, starting at concept development. This willhelp mitigate conflicts between design and quality requirements,which can take on the effect of “tolerancestacking.”Quality is a vital part of risk management. Errors, variability,omissions, and other problems cost time, programresources, taxpayer dollars, and even lives. It is incumbenton the systems engineer to know how qualityaffects their projects and to encourage best practices toachieve the quality level.Rigid adherence to procedural requirements is necessaryin high-risk, low-volume manufacturing. In the absenceof large samples and long production runs, complianceto these written procedures is a strong step toward ensuringprocess, and, thereby, product consistency. To addressthis, <strong>NASA</strong> requires QA programs to be designedto mitigate risks associated with noncompliance to thoserequirements.There will be a large number of requirements and proceduresthus created. These must be flowed down to thesupply chain, even to lowest tier suppliers. For circumstanceswhere noncompliance can result in loss of lifeor loss of mission, there is a requirement to insert intoprocedures Government Mandatory Inspection Points(GMIPs) to ensure 100 percent compliance with safety/mission-critical attributes. Safety/mission-critical attributesinclude hardware characteristics, manufacturingprocess requirements, operating conditions, and functionalperformance criteria that, if not met, can resultin loss of life or loss of mission. There will be in placea Program/Project Quality Assurance Surveillance Plan(PQASP) as mandated by Federal Acquisition Regulation(FAR) Subpart 46.4. Preparation and content forPQASPs are outlined in NPR 8735.2, Management ofGovernment Quality Assurance Functions for <strong>NASA</strong> Contracts.This document covers quality assurance requirementsfor both low-risk and high-risk acquisitions andincludes functions such as document review, productexamination, process witnessing, quality system evaluation,nonconformance reporting and corrective action,planning for quality assurance and surveillance, andGMIPs. In addition, most <strong>NASA</strong> projects are required toadhere to either ISO 9001 (noncritical work) or AS9100(critical work) requirements for management of qualitysystems. Training in these systems is mandatory for most<strong>NASA</strong> functions, so knowledge of their applicability bythe systems engineer is assumed. Their texts and intentare strongly reflected in <strong>NASA</strong>’s quality procedural documents.Integrated Logistics SupportThe objective of ILS activities within the systems engineeringprocess is to ensure that the product system issupported during development (Phase D) and operations(Phase E) in a cost-effective manner. 
ILS is particularlyimportant to projects that are reusable or serviceable.Projects whose primary product does not evolveover its operations phase typically only apply ILS toparts of the project (for example, the ground system) orto some of the elements (for example, transportation).ILS is primarily accomplished by early, concurrent considerationof supportability characteristics; performingtrade studies on alternative system and ILS concepts;quantifying resource requirements for each ILS elementusing best practices; and acquiring the sup port items associatedwith each ILS element. During op erations, ILSactivities support the system while seeking improvementsin cost-effectiveness by conducting anal yses in responseto actual operational conditions. These analysescontinually reshape the ILS system and its re source requirements.Neglecting ILS or poor ILS deci sions invariablyhave adverse effects on the life-cycle cost of theresultant system. Table 4.4-1 summarizes the ILS disciplines.ILS planning should begin early in the project life cycleand should be documented. This plan should address theelements above including how they will be considered,conducted, and integrated into the systems engineeringprocess needs.MaintainabilityMaintainability is defined as the measure of the abilityof an item to be retained in or restored to specified conditionswhen maintenance is performed by personnelhaving specified skill levels, using prescribed proceduresand resources, at each prescribed level of maintenance. Itis the inherent characteristics of a design or installationthat contribute to the ease, economy, safety, and accu racywith which maintenance actions can be performed.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 65
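As a minimal worked example of a maintainability-related figure of merit, the sketch below computes inherent availability from mean time between failures (MTBF) and mean time to repair (MTTR); the numeric values are placeholders, not requirements.

```python
# Illustrative only: inherent availability from MTBF and MTTR.
# The numbers are placeholders, not values from the handbook.

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent (steady-state) availability considering only corrective maintenance:
    Ai = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 2_000.0   # assumed mean time between failures, hours
mttr = 8.0       # assumed mean time to repair, hours
print(f"Ai = {inherent_availability(mtbf, mttr):.4f}")  # ~0.9960 for these assumed values
```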


Table 4.4-1 ILS Technical Disciplines

• Maintenance support planning: Ongoing and iterative planning, organization, and management activities necessary to ensure that the logistics requirements for any given program are properly coordinated and implemented
• Design interface: The interaction and relationship of logistics with the systems engineering process to ensure that supportability influences the definition and design of the system so as to reduce life-cycle cost
• Technical data and technical publications: The recorded scientific, engineering, technical, and cost information used to define, produce, test, evaluate, modify, deliver, support, and operate the system
• Training and training support: Encompasses all personnel, equipment, facilities, data/documentation, and associated resources necessary for the training of operational and maintenance personnel
• Supply support: Actions required to provide all the necessary material to ensure the system's supportability and usability objectives are met
• Test and support equipment: All tools, condition-monitoring equipment, diagnostic and checkout equipment, special test equipment, metrology and calibration equipment, maintenance fixtures and stands, and special handling equipment required to support operational maintenance functions
• Packaging, handling, storage, and transportation: All materials, equipment, special provisions, containers (reusable and disposable), and supplies necessary to support the packaging, safety and preservation, storage, handling, and transportation of the prime mission-related elements of the system, including personnel, spare and repair parts, test and support equipment, technical data, computer resources, and mobile facilities
• Personnel: Involves identification and acquisition of personnel with skills and grades required to operate and maintain a system over its lifetime
• Logistics facilities: All special facilities that are unique and are required to support logistics activities, including storage buildings and warehouses and maintenance facilities at all levels
• Computer resources support: All computers, associated software, connecting components, networks, and interfaces necessary to support the day-to-day flow of information for all logistics functions

Source: Blanchard, System Engineering Management.

Role of the Maintainability Engineer
Maintainability engineering is another major specialty discipline that contributes to the goal of a supportable system. This is primarily accomplished in the systems engineering process through an active role in implementing specific design features to facilitate safe and effective maintenance actions in the predicted physical environments, and through a central role in developing the ILS system. Example tasks of the maintainability engineer include: developing and maintaining a system maintenance concept, establishing and allocating maintainability requirements, performing analysis to quantify the system's maintenance resource requirements, and verifying the system's maintainability requirements.

Producibility
Producibility is a system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization. While major NASA systems tend to be produced in small quantities, a particular producibility feature can be critical to a system's cost-effectiveness, as experience with the shuttle's thermal tiles has shown. 
Factors that influencethe producibility of a design include the choice ofmaterials, simplicity of design, flexibility in productionalternatives, tight tolerance requirements, and clarityand simplicity of the technical data package.Role of the Production EngineerThe production engineer supports the systems engineeringprocess (as a part of the multidisciplinary productdevelopment team) by taking an active role in implementingspecific design features to enhance producibilityand by performing the production engineering analysesneeded by the project. These tasks and analyses include:zz Performing the manufacturing/fabrication portionof the system risk management program. This is accomplishedby conducting a rigorous production riskassessment and by planning effective risk mitigationactions.zz Identifying system design features that enhance pro-ducibility. Efforts usually focus on design simplifica-66 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


4.4 Design Solution Definitiontion, fabrication tolerances, and avoidance of hazardousmaterials.zz Conducting producibility trade studies to determinethe most cost-effective fabrication/manufacturingprocess.zz Assessing production feasibility within project con-straints. This may include assessing contractor andprincipal subcontractor production experience andcapability, new fabrication technology, special tooling,and production personnel training requirements.zz Identifying long-lead items and critical materials.zz Estimating production costs as a part of life-cycle costmanagement.zz Supporting technology readiness assessments.zz Developing production schedules.zz Developing approaches and plans to validate fabrica-tion/manufacturing processes.The results of these tasks and production engineeringanalyses are documented in the manufacturing planwith a level of detail appropriate to the phase of theproject. The production engineer also participates in andcon tributes to major project reviews (primarily PDR andCritical Design Review (CDR)) on the above items, andto special interim reviews such as the PRR.PrototypesExperience has shown that prototype systems can beeffective in enabling efficient producibility even whenbuilding only a single flight system. Prototypes arebuilt early in the life cycle and they are made as closeto the flight item in form, fit, and function as is feasibleat that stage of the development. The prototypeis used to “wring out” the design solution so that experiencegained from the prototype can be fed backinto design changes that will improve the manufacture,integration, and maintainability of a single flightitem or the production run of several flight items. Unfortunately,prototypes are often deleted from projectsto save cost. Along with that decision, the projectaccepts an increased risk in the developmentphase of the life cycle. Fortunately, advancements incom puter-aided design and manufacturing have mitigatedthat risk somewhat by enabling the designerto visualize the design and “walk through” the integrationsequence to uncover problems before they becomea costly reality.Human Factors <strong>Engineering</strong>Overview and PurposeConsideration of human operators and maintainers ofsystems is a critical part of the design process. Humanfactors engineering is the discipline that studies thehuman-system interfaces and provides requirements,standards, and guidelines to ensure the human componentof the integrated system is able to function as intended.Human roles include operators (flight crewsand ground crews), designers, manufacturers, groundsup port, maintainers, and passengers. Flight crewfunctions include system operation, troubleshooting,and in-flight maintenance. Ground crew functions includespace craft and ground system manufacturing, assembly,test, checkout, logistics, ground maintenance,repair, refur bishment, launch control, and mission control.Human factors are generally considered in four categories.The first is anthropometry and biomechanics—the physical size, shape, and strength of the humans.The second is sensation and perception—primarilyvision and hearing, but senses such as touch are alsoimportant. 
The environment is a third factor—ambientnoise and lighting, vibration, temperature andhumidity, atmo spheric composition, and contaminants.Psychological factors comprise memory; informationprocessing com ponents such as patternrecognition, decisionmaking, and signal detection;and affective factors—e.g., emo tions, cultural patterns,and habits.zzHuman Factors <strong>Engineering</strong> in the SystemDesign ProcessStakeholder Expectations: The operators, maintainers,and passengers are all stakeholders in thesystem. The human factors specialist identifies rolesand responsibilities that can be performed by humansand scenarios that exceed human capabilities.The human factors specialist ensures that system operationalconcept development includes task analysisand human/system function allocation. As theseare refined, function allocation distributes operatorroles and responsibilities for subtasks to the crew, externalsupport teams, and automation. (For example,in aviation, tasks may be allocated to crew, air trafficcontrollers, or autopilots. In spacecraft, tasks may beperformed by crew, mission control, or onboard systems.)<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 67


4.0 System Designz z Requirements Definition: Human factors requirementsfor spacecraft and space habitats are program/project dependent, derived from <strong>NASA</strong>-STD-3001,<strong>NASA</strong> Space Flight Human System Standard Volume 1:Crew Health. Other human factors requirements ofother missions and Earth-based activities for humanspace flight missions are derived from human factorsstandards such as MIL-STD-1472, Human <strong>Engineering</strong>;NUREG-0700, Human-System InterfaceDesign Review Guidelines; and the Federal AviationAdministration’s Human Factors Design Standard.zz<strong>Technical</strong> Solution: Consider the human as a centralcomponent when doing logical decomposition anddeveloping design concepts. The users—operators ormaintainers—will not see the entire system as the designerdoes, only as the system interfaces with them.In engineering design reviews, human factors specialistspromote the usability of the design solution.With early involvement, human factors assessmentsmay catch usability problems at very early stages.For example, in one International Space Station payloaddesign project, a human factors assessment of avery early block diagram of the layout of stowage andhardware identified problems that would have madeoperations very difficult. Changes were made to theconceptual design at negligible cost—i.e., rearrangingconceptual block diagrams based on the sequence inwhich users would access items.z z Usability Evaluations of Design Concepts: Evaluationscan be performed easily using rapid prototypingtools for hardware and software interfaces, standardhuman factors engineering data-gathering and analysistools, and metrics such as task completion timeand number of errors. Systematically collected subjectivereports from operators also provide usefuldata. New technologies provide detailed objective information—e.g.,eye tracking for display and controllayout assessment. Human factors specialists provideassessment capabilities throughout the iterative designprocess.z z Verification: As mentioned, verification of requirementsfor usability, error rates, task completion times,and workload is challenging. Methods range from testswith trained personnel in mockups and simula tors, tomodels of human performance, to inspection by experts.As members of the systems engineering team,human factors specialists provide verification guidancefrom the time requirements are first devel oped.Human Factors <strong>Engineering</strong> AnalysesTechniques and MethodsExample methods used to provide human performancedata, predict human-system performance, and evaluatehuman-system designs include:z z Task Analysis: Produces a detailed description of thethings a person must do in a system to accomplish atask, with emphasis on requirements for informationpresentation, decisions to be made, task times, operatoractions, and environmental conditions.z z Timeline Analysis: Follows from task analysis. Durationsof tasks are identified in task analyses, and thetimes at which these tasks occur are plotted in graphs,which also show the task sequences. The purpose is toidentify requirements for simultaneous incompatibleactivities and activities that take longer than is available.Timelines for a given task can describe the activitiesof multiple operators or crewmembers.z z Modeling and Simulation: Models or mockups tomake predictions about system performance, compareconfigurations, evaluate procedures, and evaluatealternatives. 
Simulations can be as simple aspositioning a graphical human model with realisticanthropometric dimensions with a graphical modelof an operator station, or they can be complex stochasticmodels capturing decision points, error opportunities,etc.z z Usability Testing: Based on a task analysis and preliminarydesign, realistic tasks are carried out in a controlledenvironment with monitoring and re cordingequipment. Objective measures such as per formancetime and number of errors are evaluated; subjectiveratings are collected. The outputs system atically reporton strengths and weaknesses of candi date designsolutions.zzWorkload Assessment: Measurement on a standardizedscale such as the <strong>NASA</strong>-TLX or the Cooper-Harper rating scales of the amount and type of work.It assesses operator and crew task loading, which determinesthe ability of a human to perform the re quiredtasks in the desired time with the desired ac curacy.zz Human Error and Human Reliability Assessment:Top-down (fault tree analyses) and bottom-up (humanfactors process failure modes and effects analysis)analyses. The goal is to promote human reliability bycreating a system that can tolerate and recover fromhuman errors. Such a system must also support thehuman role in adding reliability to the system.68 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>
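To make the timeline-analysis idea listed above concrete, the following illustrative Python sketch flags tasks assigned to a single operator whose time windows overlap; the task names and times are hypothetical, and a real analysis would also address multiple crewmembers, resources, and incompatibility rules.

```python
# Illustrative only: a tiny timeline-analysis check for one crewmember.
# Task names and times are hypothetical.

tasks = [
    # (task, start_minute, duration_minutes)
    ("configure payload rack", 0, 30),
    ("voice pass with ground", 25, 10),
    ("exercise", 40, 60),
]

def overlapping_tasks(schedule):
    """Return pairs of tasks assigned to the same operator whose time windows overlap."""
    clashes = []
    for i, (name_a, start_a, dur_a) in enumerate(schedule):
        for name_b, start_b, dur_b in schedule[i + 1:]:
            if start_a < start_b + dur_b and start_b < start_a + dur_a:
                clashes.append((name_a, name_b))
    return clashes

for a, b in overlapping_tasks(tasks):
    print(f"simultaneous demand on the operator: '{a}' overlaps '{b}'")
```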


4.4 Design Solution DefinitionRoles of the Human Factors SpecialistThe human factors specialist supports the systems engineeringprocess by representing the users’ and maintainers’requirements and capabilities throughout the design,production, and operations stages. Human factors specialists’roles include:zz Identify applicable requirements based on Agencystandards for human-system integration during therequirements definition phase.zz Support development of mission concepts by pro-viding information on human performance capabilitiesand limitations.zz Support task analysis and function allocation with in-formation on human capabilities and limitations.zz Identify system design features that enhance usability.This integrates knowledge of human performance capabilitiesand design features.zz Support trade studies by providing data on effects ofalternative designs on time to complete tasks, workload,and error rates.zz Support trade studies by providing data on effects ofalternative designs on skills and training required tooperate the system.zz Support design reviews to ensure compliance withhuman-systems integration requirements.zz Conduct evaluations using mockups and pro-totypes to provide detailed data on user performance.zz Support development of training and maintenanceprocedures in conjunction with hardware designersand mission planners.zz Collect data on human-system integration issuesduring operations to inform future designs.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 69


5.0 Product Realization

This chapter describes the activities in the product realization processes listed in Figure 2.1-1. The chapter is separated into sections corresponding to steps 5 through 9 listed in Figure 2.1-1. The processes within each step are discussed in terms of the inputs, the activities, and the outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The product realization side of the SE engine is where the rubber meets the road. In this portion of the engine, five interdependent processes result in systems that meet the design specifications and stakeholder expectations. These products are produced, acquired, reused, or coded; integrated into higher level assemblies; verified against design specifications; validated against stakeholder expectations; and transitioned to the next level of the system. As has been mentioned in previous sections, products can be models and simulations, paper studies or proposals, or hardware and software. The type and level of product depends on the phase of the life cycle and the product's specific objectives. But whatever the product, all must effectively use the processes to ensure the system meets the intended operational concept.

This effort starts with the technical team taking the output from the system design processes and using the appropriate crosscutting functions, such as data and configuration management, and technical assessments to make, buy, or reuse subsystems. Once these subsystems are realized, they must be integrated to the appropriate level as designated by the appropriate interface requirements. These products are then verified through the Technical Assessment Process to ensure they are consistent with the technical data package and that "the product was built right." Once consistency is achieved, the technical team will validate the products against the stakeholder expectations that "the right product was built." Upon successful completion of validation, the products are transitioned to the next level of the system. Figure 5.0-1 illustrates these processes.

This is an iterative and recursive process. Early in the life cycle, paper products, models, and simulations are run through the five realization processes. As the system matures and progresses through the life cycle, hardware and software products are run through these processes. It is important to catch errors and failures at the lowest level of integration and early in the life cycle so that changes can be made through the design processes with minimum impact to the project.

The next sections describe each of the five product realization processes and their associated products for a given NASA mission.

[Figure 5.0-1 Product realization: Product Implementation (acquire; make/code; reuse), Product Integration (assembly; functional evaluation), Product Verification (functional; environmental), Product Validation (operational testing in the integration and test environment; operational testing in the mission environment), and Product Transition (delivery to next higher level in the PBS; delivery to the operational system), grouped as design realization, evaluation, and product transition processes.]
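As an informal illustration of the ordering described above (implement, integrate, verify, validate, transition), the following Python sketch walks one hypothetical product through the five processes and stops at the first step that uncovers a problem; the product name and the placeholder check are assumptions, not a prescribed implementation.

```python
# Illustrative only: the ordering of the five realization processes for one end product.
# Process wording follows the text above; the check itself is a stand-in.

REALIZATION_STEPS = [
    "implement (make, buy, or reuse)",
    "integrate with peer products",
    "verify against design specifications ('the product was built right')",
    "validate against stakeholder expectations ('the right product was built')",
    "transition to the next level of the system",
]

def run_step(product: str, step: str) -> bool:
    """Placeholder for project-specific work so the sketch runs end to end."""
    return True

def realize(product: str) -> None:
    """Walk a product through the realization processes in order, stopping at the
    first step that fails so problems are caught at the lowest level of integration."""
    for step in REALIZATION_STEPS:
        ok = run_step(product, step)
        print(f"{product}: {step} -> {'complete' if ok else 'issue found, rework via the design processes'}")
        if not ok:
            break

realize("camera subsystem")  # hypothetical product
```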


Product Realization Keys

• Generate and manage requirements for off-the-shelf hardware/software products as for all other products.
• Understand the differences between verification testing and validation testing.
  ▶ Verification Testing: Verification testing relates back to the approved requirements set (such as a System Requirements Document (SRD)) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify status of technical progress, to verify that design risks are minimized, to substantiate achievement of contract technical performance, and to certify readiness for initial validation testing. Verification tests use instrumentation and measurements, and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.
  ▶ Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product for the purpose of determining the effectiveness and suitability of the product for use in mission operations by typical users; and the evaluation of the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.
• Consider all customer, stakeholder, technical, programmatic, and safety requirements when evaluating the input necessary to achieve a successful product transition.
• Analyze for any potential incompatibilities with interfaces as early as possible.
• Completely understand and analyze all test data for trends and anomalies.
• Understand the limitations of the testing and any assumptions that are made.
• Ensure that a reused product meets the verification and validation required for the relevant system in which it is to be used, as opposed to relying on the original verification and validation it met for the system of its original use. It would then be required to meet the same verification and validation as a purchased product or a built product. The "pedigree" of a reused product in its original application should not be relied upon in a different system, subsystem, or application.
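The following illustrative sketch restates the verification/validation distinction in executable form: verification checks measured results against an approved requirements set, while validation exercises ConOps-derived scenarios under realistic conditions; the requirement IDs, limits, scenario wording, and measured values are all hypothetical.

```python
# Illustrative only: the verification vs. validation distinction.
# Requirement IDs, limits, scenario names, and measured values are hypothetical.

requirements = {          # approved requirements set (e.g., from an SRD)
    "SRD-101 mass <= 150 kg": lambda m: m["mass_kg"] <= 150,
    "SRD-204 downlink >= 2 Mbps": lambda m: m["downlink_mbps"] >= 2.0,
}

conops_scenarios = [      # stakeholder/ConOps expectations exercised under realistic conditions
    "operator completes a full imaging pass using only the flight procedures",
    "ground team recovers the system from a safe-mode entry within one orbit",
]

measured = {"mass_kg": 148.2, "downlink_mbps": 2.4}   # assumed test results

# Verification: was the product built right (does it meet each requirement)?
for req, check in requirements.items():
    print(f"verify {req}: {'pass' if check(measured) else 'fail'}")

# Validation: was the right product built (is it effective and suitable in use)?
for scenario in conops_scenarios:
    print(f"validate (ConOps scenario): {scenario} -> assessed with typical users in a realistic environment")
```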


5.1 Product Implementation

Product implementation is the first process encountered in the SE engine that begins the movement from the bottom of the product hierarchy up towards the Product Transition Process. This is where the plans, designs, analysis, requirements development, and drawings are realized into actual products.

Product implementation is used to generate a specified product of a project or activity through buying, making/coding, or reusing previously developed hardware, software, models, or studies to generate a product appropriate for the phase of the life cycle. The product must satisfy the design solution and its specified requirements.

The Product Implementation Process is the key activity that moves the project from plans and designs into realized products. Depending on the project and life-cycle phase within the project, the product may be hardware, software, a model, simulations, mockups, study reports, or other tangible results. These products may be realized through their purchase from commercial or other vendors, generated from scratch, or through partial or complete reuse of products from other projects or activities. The decision as to which of these realization strategies, or which combination of strategies, will be used for the products of this project will have been made early in the life cycle using the Decision Analysis Process.

5.1.1 Process Description

Figure 5.1-1 provides a typical flow diagram for the Product Implementation Process and identifies typical inputs, outputs, and activities to consider in addressing product implementation.

[Figure 5.1-1 Product Implementation Process: inputs (required raw materials; end product design specifications and configuration documentation; product implementation enabling products), activities (prepare to conduct implementation; participate in purchase of the specified end product, or evaluate readiness of enabling products and make the specified end product, or participate in acquiring the reuse end product; prepare appropriate product support documentation; capture product implementation work products), and outputs (desired end product to the Product Verification Process; end product documents and manuals and product implementation work products to the Technical Data Management Process).]

5.1.1.1 Inputs

Inputs to the Product Implementation activity depend primarily on the decision as to whether the end product will be purchased, developed from scratch, or if the product will be formed by reusing part or all of products from other projects. Typical inputs are shown in Figure 5.1-1.

• Inputs if Purchasing the End Product: If the decision was made to purchase part or all of the products for this project, the end product design specifications are obtained from the configuration management system as well as other applicable documents such as the SEMP.
• Inputs if Making/Coding the End Product: For end products that will be made/coded by the technical


5.0 Product Realizationzzteam, the inputs will be the configuration controlleddesign specifications and raw materials as provided toor purchased by the project.Inputs Needed if Reusing an End Product: For endproducts that will reuse part or all of products generatedby other projects, the inputs may be the documentationassociated with the product, as well as theproduct itself. Care must be taken to ensure that theseproducts will indeed meet the specifications and environmentsfor this project. These would have beenfactors involved in the Decision Analysis Process todetermine the make/buy/reuse decision.5.1.1.2 Process ActivitiesImplementing the product can take one of three forms:zz Purchase/buy,zz Make/code, orzz Reuse.These three forms will be discussed in the following subsections.Figure 5.1-1 shows what kind of inputs, outputs,and activities are performed during product implementationregardless of where in the product hierarchy orlife cycle it is. These activities include preparing to conductthe implementation, purchasing/making/reusingthe product, and capturing the product implementationwork product. In some cases, implementing a productmay have aspects of more than one of these forms (suchas a build-to-print). In those cases, the appropriate aspectsof the applicable forms are used.Prepare to Conduct ImplementationPreparing to conduct the product implementation is akey first step regardless of what form of implementationhas been selected. For complex projects, implementationstrategy and detailed planning or procedures need to bedeveloped and documented. For less complex projects,the implementation strategy and planning will need tobe discussed, approved, and documented as appropriatefor the complexity of the project.The documentation, specifications, and other inputs willalso need to be reviewed to ensure they are ready and atan appropriate level of detail to adequately complete thetype of implementation form being employed and forthe product life-cycle phase. For example, if the “make”implementation form is being employed, the designspecifications will need to be reviewed to ensure they areat a design-to level that will allow the product to be developed.If the product is to be bought as a pure Commercial-Off-the-Shelf(COTS) item, the specificationswill need to be checked to make sure they adequatelydescribe the vendor characteristics to narrow to a singlemake/model of their product line.Finally, the availability and skills of personnel needed toconduct the implementation as well as the availability ofany necessary raw materials, enabling products, or specialservices should also be reviewed. Any special trainingnecessary for the personnel to perform their tasks needsto be performed by this time.Purchase, Make, or Reuse the ProductPurchase the ProductIn the first case, the end product is to be purchased froma commercial or other vendor. Design/purchase specificationswill have been generated during requirementsdevelopment and provided as inputs. The technical teamwill need to review these specifications and ensure theyare in a form adequate for the contract or purchase order.This may include the generation of contracts, Statementsof Work (SOWs), requests for proposals, purchase orders,or other purchasing mechanisms. The responsibilitiesof the Government and contractor team shouldhave been documented in the SEMP. 
This will define,for example, whether <strong>NASA</strong> expects the vendor to providea fully verified and validated product or whether the<strong>NASA</strong> technical team will be performing those duties.The team will need to work with the acquisition teamto ensure the accuracy of the contract SOW or purchaseorder and to ensure that adequate documentation, certificatesof compliance, or other specific needs are requestedof the vendor.For contracted purchases, as proposals come back fromthe vendors, the technical team should work with thecontracting officer and participate in the review of thetechnical information and in the selection of the vendorthat best meets the design requirements for acceptablecost and schedule.As the purchased products arrive, the technical teamshould assist in the inspection of the delivered productand its accompanying documentation. The team shouldensure that the requested product was indeed the onedelivered, and that all necessary documentation, suchas source code, operator manuals, certificates of compliance,safety information, or drawings have been received.74 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>
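As an illustration of the receiving check described above, the short sketch below compares the deliverables requested of a vendor against what was actually received; the item names are examples only, drawn loosely from the preceding paragraph rather than from any mandated checklist.

```python
# Illustrative only: a simple receiving check of a purchased end product's deliverables.
# The deliverable names are examples, not a mandated list.

required_deliverables = {
    "end product (correct model/part number)",
    "source code",
    "operator manual",
    "certificate of compliance",
    "safety information",
    "drawings",
}

received = {
    "end product (correct model/part number)",
    "operator manual",
    "certificate of compliance",
    "drawings",
}

missing = sorted(required_deliverables - received)
if missing:
    print("hold acceptance; missing items:")
    for item in missing:
        print(f"  - {item}")
else:
    print("all requested items and documentation received")
```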


5.1 Product ImplementationThe technical team should also ensure that any enablingproducts necessary to provide test, operations, maintenance,and disposal support for the product also areready or provided as defined in the contract.Depending on the strategy and roles/responsibilities ofthe vendor, as documented in the SEMP, a determination/analysisof the vendor’s verification and validationcompliance may need to be reviewed. This may be doneinformally or formally as appropriate for the complexityof the product. For products that were verified and validatedby the vendor, after ensuring that all work productsfrom this phase have been captured, the productmay be ready to enter the Product Transition Process tobe delivered to the next higher level or to its final enduser. For products that will be verified and validated bythe technical team, the product will be ready to be verifiedafter ensuring that all work products for this phasehave been captured.Make/Code the ProductIf the strategy is to make or code the product, the technicalteam should first ensure that the enabling productsare ready. This may include ensuring all piece partsare available, drawings are complete and adequate, softwaredesign is complete and reviewed, machines to cutthe material are available, interface specifications are approved,operators are trained and available, procedures/processes are ready, software personnel are trained andavailable to generate code, test fixtures are developed andready to hold products while being generated, and softwaretest cases are available and ready to begin modelgeneration.The product is then made or coded in accordance withthe specified requirements, configuration documentation,and applicable standards. Throughout this process,the technical team should work with the quality organizationto review, inspect, and discuss progress and statuswithin the team and with higher levels of management asappropriate. Progress should be documented within thetechnical schedules. Peer reviews, audits, unit testing,code inspections, simulation checkout, and other techniquesmay be used to ensure the made or coded productis ready for the verification process.ReuseIf the strategy is to reuse a product that already exists,care must be taken to ensure that the product is truly applicableto this project and for the intended uses and theenvironment in which it will be used. This should havebeen a factor used in the decision strategy to make/buy/reuse.The documentation available from the reuse productshould be reviewed by the technical team to becomecompletely familiar with the product and to ensure itwill meet the requirements in the intended environment.Any supporting manuals, drawings, or other documentationavailable should also be gathered.The availability of any supporting or enabling productsor infrastructure needed to complete the fabrication,coding, testing, analysis, verification, validation, or shippingof the product needs to be determined. If any ofthese products or services are lacking, they will need tobe developed or arranged for before progressing to thenext phase.Special arrangements may need to be made or formssuch as nondisclosure agreements may need to be acquiredbefore the reuse product can be received.A reused product will frequently have to undergo thesame verification and validation as a purchased productor a built product. 
Relying on prior verification and validationshould only be considered if the product’s verificationand validation documentation meets the verification,validation, and documentation requirements of thecurrent project and the documentation demonstratesthat the product was verified and validated against equivalentrequirements and expectations. The savings gainedfrom reuse is not necessarily from reduced testing, butin a lower likelihood that the item will fail tests and generaterework.Capture Work ProductsRegardless of what implementation form was selected,all work products from the make/buy/reuse processshould be captured, including design drawings, designdocumentation, code listings, model descriptions, proceduresused, operator manuals, maintenance manuals,or other documentation as appropriate.5.1.1.3 Outputsz z End Product for Verification: Unless the vendorperforms verification, the made/coded, purchased,or reused end product, in a form appropriate for thelife-cycle phase, is provided for the verification process.The form of the end product is a function of the<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 75


5.0 Product Realizationlife-cycle phase and the placement within the systemstructure (the form of the end product could be hardware,software, model, prototype, first article for test,or single operational article or multiple productionarticle).z z End Product Documents and Manuals: Appropriatedocumentation is also delivered with the end productto the verification process and to the technical datamanagement process. Documentation may includeapplicable design drawings; operation, user, maintenance,or training manuals; applicable baseline documents(configuration baseline, specifications, stakeholderexpectations); certificates of compliance; orother vendor documentation.The process is complete when the following activitieshave been accomplished:zz End product is fabricated, purchased, or reuse mod-ules acquired.zz End products are reviewed, checked, and ready forverification.zz Procedures, decisions, assumptions, anomalies, cor-rective actions, lessons learned, etc., resulting fromthe make/buy/reuse are recorded.5.1.2 Product Implementation Guidance5.1.2.1 Buying Off-the-Shelf ProductsOff-the-Shelf (OTS) products are hardware/softwarethat has an existing heritage and usually originates fromone of several sources, which include commercial, military,and <strong>NASA</strong> programs. Special care needs to be takenwhen purchasing OTS products for use in the space environment.Most OTS products were developed for usein the more benign environments of Earth and may notbe suitable to endure the harsh space environments, includingvacuum, radiation, extreme temperature ranges,extreme lighting conditions, zero gravity, atomic oxygen,lack of convection cooling, launch vibration or acceleration,and shock loads.When purchasing OTS products, requirements shouldstill be generated and managed. A survey of availableOTS is made and evaluated as to the extent they satisfythe requirements. Products that meet all the requirementsare a good candidate for selection. If no productcan be found to meet all the requirements, a trade studyneeds to be performed to determine whether the requirementscan be relaxed or waived, the OTS can be modifiedto bring it into compliance, or whether another optionto build or reuse should be selected.Several additional factors should be considered when selectingthe OTS option:zz Heritage of the product;zz Critical or noncritical application;zz Amount of modification required and who performsit;zz Whether sufficient documentation is available;zz Proprietary, usage, ownership, warranty, and licensingrights;zz Future support for the product from the vendor/pro-vider;zz Any additional validation of the product needed bythe project; andzz Agreement on disclosure of defects discovered by thecommunity of users of the product.5.1.2.2 Heritage“Heritage” refers to the original manufacturer’s level ofquality and reliability that is built into parts and whichhas been proven by (1) time in service, (2) number ofunits in service, (3) mean time between failure performance,and (4) number of use cycles. High-heritageproducts are from the original supplier, who has maintainedthe great majority of the original service, design,performance, and manufacturing characteristics. Lowheritageproducts are those that (1) were not built bythe original manufacturer; (2) do not have a significanthistory of test and usage; or (3) have had significant aspectsof the original service, design, performance, ormanufacturing characteristics altered. 
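As a hedged illustration of the heritage factors just described, the sketch below screens a candidate product and treats it as low heritage if any factor applies; the numeric thresholds and the example record are assumptions made for this sketch, not criteria from the handbook.

```python
# Illustrative only: screening a candidate product against the heritage factors described above.
# Thresholds and the example record are assumptions.

from dataclasses import dataclass

@dataclass
class HeritageRecord:
    built_by_original_manufacturer: bool
    years_in_service: float
    units_in_service: int
    design_or_process_significantly_altered: bool
    prior_environment_matches_intended_use: bool   # e.g., ground heritage vs. a space application

def is_high_heritage(rec: HeritageRecord) -> bool:
    """Treat the product as low heritage if any of the factors in the text apply."""
    return (rec.built_by_original_manufacturer
            and rec.years_in_service >= 5            # assumed threshold
            and rec.units_in_service >= 20           # assumed threshold
            and not rec.design_or_process_significantly_altered
            and rec.prior_environment_matches_intended_use)

candidate = HeritageRecord(True, 8.0, 150, False, False)  # proven on the ground, now bound for space
print("high heritage for this application" if is_high_heritage(candidate)
      else "treat as low heritage / new design for this application")
```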
An important factor in assessing the heritage of a COTS product is to ensure that the use/application of the product is relevant to the application for which it is now intended. A product that has high heritage in a ground-based application could have low heritage when placed in a space environment.

The focus of a "heritage review" is to confirm the applicability of the component for the current application. Assessments must be made regarding not only technical interfaces (hardware and software) and performance, but also the environments to which the unit has been previously qualified, including electromagnetic compatibility, radiation, and contamination. The compatibility of the design with parts quality requirements must also be assessed. All noncompliances must be identified, documented, and addressed, either by modification to bring the component into compliance or by formal waivers/deviations for accepted deficiencies. This heritage review is commonly held shortly after contract award.

When reviewing a product's applicability, it is important to consider the nature of the application. A "catastrophic" application is one where a failure could cause loss of life or vehicle. A "critical" application is one where failure could cause loss of mission. For use in these applications, several additional precautions should be taken, including ensuring that the product will not be used near the boundaries of its performance or environmental envelopes. Extra scrutiny by experts should be applied during Preliminary Design Reviews (PDRs) and Critical Design Reviews (CDRs) to ensure the appropriateness of its use.

Modification of an OTS product may be required for it to be suitable for a NASA application. This affects the product's heritage; therefore, the modified product should be treated as a new design. If the product is modified by NASA and not the manufacturer, it would be beneficial for the supplier to have some involvement in reviewing the modification. NASA modification may also require the purchase of additional documentation from the supplier, such as drawings, code, or other design and test descriptions.

For additional information and suggested test and analysis requirements for OTS products, see JSC EA-WI-016 or MSFC MWI 8060.1, both titled Off the Shelf Hardware Utilization in Flight Hardware Development, and AIAA G-118-2006e, Guide for Managing the Use of Commercial Off the Shelf (COTS) Software Components for Mission-Critical Systems.
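The OTS requirements survey and heritage relevance check described above are, at their core, bookkeeping exercises and can be sketched in a few lines of code. The following Python sketch is illustrative only: the candidate names, requirement identifiers, and environment labels are hypothetical assumptions, and a real evaluation would also weigh the other selection factors listed in Section 5.1.2.1 (documentation, licensing, vendor support, and so on).

    from dataclasses import dataclass, field

    @dataclass
    class OTSCandidate:
        """One off-the-shelf product being screened against project requirements."""
        name: str
        meets: set = field(default_factory=set)            # requirement IDs the vendor data satisfies
        qualified_envs: set = field(default_factory=set)   # environments with relevant heritage

    def screen(candidates, requirement_ids, mission_envs):
        """For each candidate, list unmet requirements and environment heritage gaps.

        A candidate with gaps is not rejected outright; per the guidance above it
        feeds a trade study on relaxing or waiving requirements, modifying the
        product, or choosing to build or reuse instead.
        """
        report = {}
        for c in candidates:
            unmet = set(requirement_ids) - c.meets
            env_gaps = set(mission_envs) - c.qualified_envs
            report[c.name] = {
                "fully_compliant": not unmet and not env_gaps,
                "unmet_requirements": sorted(unmet),
                "environment_gaps": sorted(env_gaps),
            }
        return report

    # Hypothetical example data, for illustration only
    requirements = {"REQ-001", "REQ-002", "REQ-003"}
    environments = {"vacuum", "radiation", "launch vibration"}
    candidates = [
        OTSCandidate("Candidate A", {"REQ-001", "REQ-002", "REQ-003"},
                     {"vacuum", "radiation", "launch vibration"}),
        OTSCandidate("Candidate B", {"REQ-001", "REQ-002"}, {"launch vibration"}),
    ]
    for name, result in screen(candidates, requirements, environments).items():
        print(name, result)

A candidate that clears this kind of screen is a selection candidate; one that does not feeds the trade study discussed above.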


5.2 Product Integration

Product Integration is one of the SE engine product realization processes that make up the system structure. In this process, lower level products are assembled into higher level products and checked to make sure that the integrated product functions properly. It is an element of the processes that lead realized products from a level below to realized end products at a level above, between the Product Implementation, Verification, and Validation Processes.

The purpose of the Product Integration Process is to systematically assemble the higher level product from the lower level products or subsystems (e.g., product elements, units, components, subsystems, or operator tasks); ensure that the product, as integrated, functions properly; and deliver the product. Product integration is required at each level of the system hierarchy. The activities associated with product integration occur throughout the entire product life cycle. This includes all of the incremental steps, including level-appropriate testing, necessary to complete assembly of a product and to enable the top-level product tests to be conducted.

The Product Integration Process may include, and often begins with, analysis and simulations (e.g., various types of prototypes) and progresses through increasingly more realistic incremental functionality until the final product is achieved. In each successive build, prototypes are constructed, evaluated, improved, and reconstructed based upon knowledge gained in the evaluation process. The degree of virtual versus physical prototyping required depends on the functionality of the design tools and the complexity of the product and its associated risk. There is a high probability that the product, integrated in this manner, will pass product verification and validation. For some products, the last integration phase will occur when the product is deployed at its intended operational site. If any problems of incompatibility are discovered during the product verification and validation testing phase, they are resolved one at a time.

The Product Integration Process applies not only to hardware and software systems but also to service-oriented solutions, requirements, specifications, plans, and concepts. The ultimate purpose of product integration is to ensure that the system elements will function as a whole.

5.2.1 Process Description
Figure 5.2-1 provides a typical flow diagram for the Product Integration Process and identifies typical inputs, outputs, and activities to consider in addressing product integration. The activities of the Product Integration Process are truncated to indicate the action and object of the action.

[Figure 5.2-1 Product Integration Process. Inputs: lower level products to be integrated (from the Product Transition Process); end product design specifications and configuration documentation (from the Configuration Management Process); Product Integration–enabling products (from existing resources or the Product Transition Process). Activities: prepare to conduct product integration; obtain lower level products for assembly and integration; confirm that received products have been validated; prepare the integration environment for assembly and integration; assemble and integrate the received products into the desired end product; prepare appropriate product support documentation; capture product integration work products. Outputs: desired product (to the Product Verification Process); product documents and manuals and product integration work products (to the Technical Data Management Process).]
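The activity flow summarized in Figure 5.2-1 can be tracked per incremental build as a simple ordered checklist. The Python sketch below is a minimal illustration under stated assumptions (the assembly name and the idea of enforcing strict ordering are assumptions for illustration), not a prescribed NASA tool.

    from dataclasses import dataclass, field

    INTEGRATION_ACTIVITIES = [
        "Prepare to conduct product integration",
        "Obtain lower level products for assembly and integration",
        "Confirm that received products have been validated",
        "Prepare the integration environment",
        "Assemble and integrate the received products into the desired end product",
        "Prepare appropriate product support documentation",
        "Capture product integration work products",
    ]

    @dataclass
    class IntegrationRecord:
        """Tracks one incremental build through the Figure 5.2-1 activities in order."""
        assembly: str
        completed: list = field(default_factory=list)

        def complete(self, activity):
            expected = INTEGRATION_ACTIVITIES[len(self.completed)]
            if activity != expected:
                raise ValueError(f"Out of sequence: expected '{expected}', got '{activity}'")
            self.completed.append(activity)

        @property
        def ready_for_verification(self):
            return len(self.completed) == len(INTEGRATION_ACTIVITIES)

    # Hypothetical usage for one build
    rec = IntegrationRecord("Avionics pallet build 1")
    for step in INTEGRATION_ACTIVITIES:
        rec.complete(step)
    print(rec.ready_for_verification)  # True once every activity has been closed out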


5.2.1.1 Inputs
Product Integration encompasses more than a one-time assembly of the lower level products and operator tasks at the end of the design and fabrication phase of the life cycle. An integration plan must be developed and documented. An example outline for an integration plan is provided in Appendix H. Product Integration is conducted incrementally, using a recursive process of assembling lower level products and operator tasks; evaluating them through test, inspection, analysis, or demonstration; and then assembling more lower level products and operator tasks. Planning for Product Integration should be initiated during the concept formulation phase of the life cycle. The basic tasks that need to be established involve the management of internal and external interfaces of the various levels of products and operator tasks to support product integration and are as follows:
• Define interfaces;
• Identify the characteristics of the interfaces (physical, electrical, mechanical, etc.);
• Ensure interface compatibility at all defined interfaces by using a process documented and approved by the project;
• Strictly control all of the interface processes during design, construction, operation, etc.;
• Identify lower level products to be assembled and integrated (from the Product Transition Process);
• Identify assembly drawings or other documentation that show the complete configuration of the product being integrated, a parts list, and any assembly instructions (e.g., torque requirements for fasteners);
• Identify end-product, design-definition-specified requirements (specifications) and configuration documentation for the applicable work breakdown structure model, including interface specifications, in the form appropriate to satisfy the product-line life-cycle phase success criteria (from the Configuration Management Process); and
• Identify Product Integration–enabling products (from existing resources or the Product Transition Process for enabling product realization).

5.2.1.2 Process Activities
This subsection addresses the approach to the top-level implementation of the Product Integration Process, including the activities required to support the process. The project would follow this approach throughout its life cycle.

The following are typical activities that support the Product Integration Process:
• Prepare to conduct Product Integration by (1) preparing a product integration strategy, detailed planning for the integration, and integration sequences and procedures and (2) determining whether the product configuration documentation is adequate to conduct the type of product integration applicable for the product-line life-cycle phase, the location of the product in the system structure, and the management phase success criteria.
• Obtain the lower level products required to assemble and integrate into the desired product.
• Confirm that the received products that are to be assembled and integrated have been validated to demonstrate that the individual products satisfy the agreed-to set of stakeholder expectations, including interface requirements.
• Prepare the integration environment in which assembly and integration will take place, including evaluating the readiness of the product integration–enabling products and the assigned workforce.
• Assemble and integrate the received products into the desired end product in accordance with the specified requirements, configuration documentation, interface requirements, applicable standards, and integration sequencing and procedures.
• Conduct functional testing to ensure that the assembly is ready to enter verification testing and ready to be integrated into the next level.
• Prepare appropriate product support documentation, such as special procedures for performing product verification and product validation.
• Capture work products and related information generated while performing the product integration process activities.

5.2.1.3 Outputs
The following are typical outputs from this process and destinations for the products from this process:
• Integrated product(s) in the form appropriate to the product-line life-cycle phase and to satisfy phase success criteria (to the Product Verification Process).
• Documentation and manuals in a form appropriate for satisfying the life-cycle phase success criteria, including as-integrated product descriptions and operate-to and maintenance manuals (to the Technical Data Management Process).
• Work products, including reports, records, and nondeliverable outcomes of product integration activities (to support the Technical Data Management Process); the integration strategy document; assembly/check area drawings; system/component documentation sequences and rationale for selected assemblies; interface management documentation; personnel requirements; special handling requirements; system documentation; shipping schedules; test equipment and drivers' requirements; emulator requirements; and identification of limitations for both hardware and software.

5.2.2 Product Integration Guidance

5.2.2.1 Integration Strategy
An integration strategy, along with supporting documentation, is developed to identify the optimal sequence of receipt, assembly, and activation of the various components that make up the system. This strategy should use business as well as technical factors to ensure an assembly, activation, and loading sequence that minimizes cost and assembly difficulties. The larger or more complex the system or the more delicate the element, the more critical the proper sequence becomes, as small changes can cause large impacts on project results.

The optimal sequence of assembly is built from the bottom up as components become subelements, elements, and subsystems, each of which must be checked prior to fitting into the next higher assembly. The sequence will encompass any effort needed to establish and equip the assembly facilities (e.g., raised floor, hoists, jigs, test equipment, input/output, and power connections). Once established, the sequence must be periodically reviewed to ensure that variations in production and delivery schedules have not had an adverse impact on the sequence or compromised the factors on which earlier decisions were made.

5.2.2.2 Relationship to Product Implementation
As previously described, Product Implementation is where the plans, designs, analyses, requirements development, and drawings are realized into actual products. Product Integration concentrates on the control of the interfaces and on the verification and validation needed to achieve the correct product that meets the requirements. Product Integration can be thought of as released or phased deliveries. Product Integration is the process that pulls together new and existing products and ensures that they all combine properly into a complete product without interference or complications. If there are issues, the Product Integration Process documents the exceptions, which can then be evaluated to determine whether the product is ready for implementation/operations.

Integration occurs at every stage of a project's life cycle. In the Formulation phase, the decomposed requirements need to be integrated into a complete system to verify that nothing is missing or duplicated. In the Implementation phase, the design and hardware need to be integrated into an overall system to verify that they meet the requirements and that there are no duplications or omissions. The emphasis on the recursive, iterative, and integrated nature of systems engineering highlights how the product integration activities are not only integrated across all of the phases of the entire life cycle in the initial planning stages of the project, but also used recursively across all of the life-cycle phases as the project product proceeds through the flow down and flow up conveyed by the SE engine.
This ensures that when changes occur to requirements, design concepts, etc. (usually in response to updates from stakeholders and results from analysis, modeling, or testing), adequate course corrections are made to the project. This is accomplished through reevaluation by driving through the SE engine, enabling all aspects of the product integration activities to be appropriately updated. The result is a product that meets all of the new modifications approved by the project and eliminates the opportunities for costly and time-consuming modifications in the later stages of the project.

5.2.2.3 Product/Interface Integration Support
There are several processes that support the integration of products and interfaces. Each process allows either the integration of products and interfaces or the validation that the integrated products meet the needs of the project.

The following is a list of typical example processes and products that support the integration of products and interfaces and that should be addressed by the project in the overall approach to Product Integration: requirements documents; requirements reviews; design reviews; design drawings and specifications; integration and test plans; hardware configuration control documentation; quality assurance records; interface control requirements/documents; ConOps documents; verification requirement documents; verification reports/analyses; NASA, military, and industry standards; best practices; and lessons learned.

5.2.2.4 Product Integration of the Design Solution
This subsection addresses the more specific implementation of Product Integration related to the selected design solution.

Generally, system/product designs are an aggregation of subsystems and components. This is relatively obvious for complex hardware and/or software systems. The same holds true for many service-oriented solutions. For example, a solution to provide a single person access to the Internet involves hardware, software, and a communications interface. The purpose of Product Integration is to ensure that the combination of these elements achieves the required result (i.e., works as expected). Consequently, internal and external interfaces must be considered in the design and evaluated prior to production.

There are a variety of different testing requirements to verify product integration at all levels. Qualification testing and acceptance testing are examples of two of these test types that are performed as the product is integrated. Another type of testing that is important to the design and ultimate product integration is a planned test process in which development items are tested under actual or simulated mission profile environments to disclose design deficiencies and to provide engineering information on failure modes and mechanisms. If accomplished with development items, this provides early insight into any issues that may otherwise only be observed in the late stages of product integration, where it becomes costly to incorporate corrective actions. For large, complex systems/products, integration/verification efforts are accomplished using a prototype.

5.2.2.5 Interface Management
The objective of interface management is to achieve functional and physical compatibility among all interrelated system elements. Interface management is defined in more detail in Section 6.3. An interface is any boundary between one area and another. It may be cognitive, external, internal, functional, or physical. Interfaces occur within the system (internal) as well as between the system and another system (external) and may be functional or physical (e.g., mechanical, electrical) in nature. Interface requirements are documented in an Interface Requirements Document (IRD). Care should be taken to define interface requirements and to avoid specifying design solutions when creating the IRD. In its final form, the Interface Control Document (ICD) describes the detailed implementation of the requirements contained in the IRD. An interface control plan describes the management process for IRDs and ICDs. This plan provides the means to identify and resolve interface incompatibilities and to determine the impact of interface design changes.

5.2.2.6 Compatibility Analysis
During the program's life, compatibility and accessibility must be maintained for the many diverse elements. Compatibility analysis of the interface definition demonstrates completeness of the interface and traceability records. As changes are made, an authoritative means of controlling the design of interfaces must be managed with appropriate documentation, thereby avoiding the situation in which hardware or software, when integrated into the system, fails to function as part of the system as intended.
Ensuring that all system pieces work together is a complex task that involves teams, stakeholders, contractors, and program management from the end of the initial concept definition stage through the operations and support stage. Physical integration is accomplished during Phase D. At the finer levels of resolution, pieces must be tested, assembled and/or integrated, and tested again. The systems engineer's role includes performance of delegated management duties, such as configuration control, and overseeing the integration, verification, and validation processes.

5.2.2.7 Interface Management Tasks
The interface management tasks begin early in the development effort, when interface requirements can be influenced by all engineering disciplines and applicable interface standards can be invoked. They continue through design and checkout. During design, emphasis is on ensuring that interface specifications are documented and communicated. During system element checkout, both prior to assembly and in the assembled configuration, emphasis is on verifying the implemented interfaces. Throughout the product integration process activities, interface baselines are controlled to ensure that changes in the design of system elements have minimal impact on other elements with which they interface. During testing or other validation and verification activities, multiple system elements are checked out as integrated subsystems or systems. The following provides more details on these tasks.

Define Interfaces
The bulk of integration problems arise from unknown or uncontrolled aspects of interfaces. Therefore, system and subsystem interfaces are specified as early as possible in the development effort. Interface specifications address logical, physical, electrical, mechanical, human, and environmental parameters as appropriate. Intra-system interfaces are the first design consideration for developers of the system's subsystems. Interfaces are used from previous development efforts or are developed in accordance with interface standards for the given discipline or technology. Novel interfaces are constructed only for compelling reasons. Interface specifications are verified against interface requirements. Typical products include interface descriptions, ICDs, interface requirements, and specifications.

Verify Interfaces
In verifying the interfaces, the systems engineer must ensure that the interfaces of each element of the system or subsystem are controlled and known to the developers. Additionally, when changes to the interfaces are needed, the changes must at least be evaluated for possible impact on other interfacing elements and then communicated to the affected developers. Although all affected developers are part of the group that makes changes, such changes need to be captured in a readily accessible place so that the current state of the interfaces can be known to all. Typical products include ICDs and exception reports.

The use of emulators for verifying hardware and software interfaces is acceptable where the limitations of the emulator are well characterized and meet the operating environment characteristics and behavior requirements for interface verification. The integration plan should specifically document the scope of use for emulators.

Inspect and Acknowledge System and Subsystem Element Receipt
Acknowledging receipt and inspecting the condition of each system or subsystem element is required prior to assembling the system in accordance with the intended design. The elements are checked for quantity, obvious damage, and consistency between the element description and a list of element requirements. Typical products include acceptance documents, delivery receipts, and checked packing lists.

Verify System and Subsystem Elements
System and subsystem element verification confirms that the implemented design features of developed or purchased system elements meet their requirements. This is intended to ensure that each element of the system or subsystem functions in its intended environment, including those elements that are OTS for other environments. Such verifications may be by test (e.g., regression testing as a tool or as subsystems/elements are combined), inspection, analysis (deficiency or compliance reports), or demonstration and may be executed either by the organization that will assemble the system or subsystem or by the producing organization. A method of discerning the elements that "passed" verification from those elements that "failed" needs to be in place. Typical products include verified system features and exception reports.

Verify Element Interfaces
Verification of the system element interfaces ensures that the elements comply with the interface specification prior to assembly in the system.
The intent is to ensure that the interface of each element of the system or subsystem is verified against its corresponding interface specification. Such verification may be by test, inspection, analysis, or demonstration and may be executed by the organization that will assemble the system or subsystem or by another organization. Typical products include verified system element interfaces, test reports, and exception reports.

Integrate and Verify
Assembly of the elements of the system should be performed in accordance with the established integration strategy. This ensures that the assembly of the system elements into larger or more complex assemblies is conducted in accordance with the planned strategy. To ensure that the integration has been completed, a verification of the integrated system interfaces should be performed. Typical products include integration reports, exception reports, and an integrated system.
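Much of the element interface verification described above amounts to confirming that both sides of an interface agree with the governing interface control documentation. The Python sketch below illustrates that bookkeeping under stated assumptions: the ICD is reduced to a few named parameters, and the interface name, connector type, and values are hypothetical. Real interface verification, of course, also relies on test, inspection, analysis, or demonstration as described above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InterfaceSpec:
        """A few illustrative parameters an ICD might control for one interface."""
        name: str
        connector: str
        data_rate_mbps: float
        supply_voltage_v: float

    def check_interface(spec, side_a, side_b):
        """Compare each element's as-designed values against the ICD; return exception items."""
        exceptions = []
        for side_name, side in (("A", side_a), ("B", side_b)):
            for param in ("connector", "data_rate_mbps", "supply_voltage_v"):
                expected = getattr(spec, param)
                actual = side.get(param)
                if actual != expected:
                    exceptions.append(
                        f"{spec.name}, side {side_name}: {param} is {actual}, ICD requires {expected}")
        return exceptions

    # Hypothetical example: one mismatched parameter becomes an exception report item
    icd = InterfaceSpec("C&DH-to-radio", "MIL-DTL-38999", 10.0, 28.0)
    issues = check_interface(
        icd,
        side_a={"connector": "MIL-DTL-38999", "data_rate_mbps": 10.0, "supply_voltage_v": 28.0},
        side_b={"connector": "MIL-DTL-38999", "data_rate_mbps": 5.0, "supply_voltage_v": 28.0})
    print(issues)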


5.3 Product Verification

The Product Verification Process is the first of the verification and validation processes conducted on a realized end product. As used in the context of the systems engineering common technical processes, a realized product is one provided by either the Product Implementation Process or the Product Integration Process in a form suitable for meeting applicable life-cycle phase success criteria. Realization is the act of verifying, validating, and transitioning the realized product for use at the next level up of the system structure or to the customer. Simply put, the Product Verification Process answers the critical question, Was the end product realized right? The Product Validation Process addresses the equally critical question, Was the right end product realized?

Verification proves that a realized product for any system model within the system structure conforms to the build-to requirements (for software elements) or realize-to specifications and design descriptive documents (for hardware elements, manual procedures, or composite products of hardware, software, and manual procedures).

Distinctions Between Product Verification and Product Validation
From a process perspective, product verification and validation may be similar in nature, but the objectives are fundamentally different. It is essential to confirm that the realized product is in conformance with its specifications and design description documentation (i.e., verification). Such specifications and documents will establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems. However, from a customer point of view, the interest is in whether the end product provided will do what the customer intended within the environment of use (i.e., validation). When cost effective and warranted by analysis, the expense of validation testing alone can be mitigated by combining tests to perform verification and validation simultaneously.

Differences Between Verification and Validation Testing

Verification Testing
Verification testing relates back to the approved requirements set (such as an SRD) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify the status of technical progress, verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for initial validation testing. Verification tests use instrumentation and measurements and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.

Validation Testing
Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and suitability of the product for use in mission operations by typical users and to evaluate the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.
The outcome of the Product Verification Process is confirmation that the "as-realized product," whether achieved by implementation or integration, conforms to its specified requirements, i.e., verification of the end product. This subsection discusses the process activities, inputs, outcomes, and potential deficiencies.

5.3.1 Process Description
Figure 5.3-1 provides a typical flow diagram for the Product Verification Process and identifies typical inputs, outputs, and activities to consider in addressing product verification.

[Figure 5.3-1 Product Verification Process. Inputs: end product to be verified (from the Product Implementation or Product Integration Process); specified requirements baseline (from the Configuration Management Process); product verification plan (from the Design Solution Definition and Technical Planning Processes); product verification–enabling products (from existing resources or the Product Transition Process). Activities: prepare to conduct product verification; perform the product verification; analyze the outcomes of the product verification; prepare a product verification report; capture the work products from product verification. Outputs: verified end product (to the Product Validation Process); product verification results (to the Technical Assessment Process); product verification report and product verification work products (to the Technical Data Management Process).]

5.3.1.1 Inputs
Key inputs to the process are the product to be verified, the verification plan, the specified requirements baseline, and any enabling products needed to perform the Product Verification Process (including the ConOps, mission needs and goals, requirements and specifications, interface control drawings, testing standards and policies, and Agency standards and policies).

5.3.1.2 Process Activities
There are five major steps in the Product Verification Process: (1) verification planning (prepare to implement the verification plan); (2) verification preparation (prepare for conducting verification); (3) conduct verification (perform verification); (4) analyze verification results; and (5) capture the verification work products.

The objective of the Product Verification Process is to generate evidence necessary to confirm that end products, from the lowest level of the system structure to the highest, conform to the specified requirements (specifications and descriptive documents) to which they were realized, whether by the Product Implementation Process or by the Product Integration Process.

Product Verification is usually performed by the developer that produced (or "realized") the end product, with participation of the end user and customer. Product Verification confirms that the as-realized product, whether it was achieved by Product Implementation or Product Integration, conforms to its specified requirements (specifications and descriptive documentation) used for making or assembling and integrating the end product. Developers of the system, as well as the users, are typically involved in verification testing. The customer and Quality Assurance (QA) personnel are also critical in the verification planning and execution activities.

Product Verification Planning
Planning to conduct the product verification is a key first step. From the relevant specifications and product form, the type of verification (e.g., analysis, demonstration, inspection, or test) should be established based on the life-cycle phase, cost, schedule, resources, and the position of the end product within the system structure. The verification plan should be reviewed (an output of the Technical Planning Process, based on design solution outputs) for any specific procedures, constraints, success criteria, or other verification requirements. (See Appendix I for a sample verification plan outline.)
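Verification planning of this kind is commonly captured in a verification (requirements compliance) matrix that records, for each "shall," the selected method, the stage at which it will be verified, and the success criteria. The following is a minimal Python sketch; the requirement identifiers, facilities, and status values are hypothetical assumptions, not an Agency-standard schema.

    from dataclasses import dataclass
    from enum import Enum

    class Method(Enum):
        ANALYSIS = "analysis"
        DEMONSTRATION = "demonstration"
        INSPECTION = "inspection"
        TEST = "test"

    @dataclass
    class VerificationMatrixEntry:
        requirement_id: str          # hypothetical IDs used below
        requirement_text: str
        method: Method               # one of the four verification types
        stage: str                   # e.g., "qualification" or "acceptance"
        facility: str                # assumed placeholder
        success_criteria: str
        status: str = "planned"      # planned / in work / passed / failed / waived

    matrix = [
        VerificationMatrixEntry(
            "SYS-042", "The battery shall survive 500 charge/discharge cycles.",
            Method.TEST, "qualification", "Battery test lab",
            "No capacity fade greater than 20 percent after 500 cycles"),
        VerificationMatrixEntry(
            "SYS-107", "The arming pin shall carry a 'Remove Before Flight' flag.",
            Method.INSPECTION, "acceptance", "Integration floor",
            "Flag present with black stenciled lettering"),
    ]

    def open_items(entries):
        """Entries still awaiting successful verification or an approved waiver."""
        return [e for e in entries if e.status not in ("passed", "waived")]

    print([e.requirement_id for e in open_items(matrix)])  # both entries are still open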


Types of Testing
There are many different types of testing that can be used in verification of an end product. These examples are provided for consideration: acceptance, acoustic, aerodynamic, burn-in, characterization, component, drop, electromagnetic compatibility, electromagnetic interference, environmental, G-loading, go/no-go, high-/low-voltage limits, human factors engineering/human-in-the-loop, integration, leak rate, lifetime/cycling, manufacturing/random defects, nominal, off-nominal, operational, parametric, performance, pressure cycling, pressure limits, qualification flow, security checks, structural functional, system, thermal cycling, thermal limits, thermal vacuum, and vibration testing.

Verification Plan and Methods
The task of preparing the verification plan includes establishing the type of verification to be performed, dependent on the life-cycle phase; the position of the product in the system structure; the form of the product used; and the related costs of verifying individual specified requirements. The types of verification include analysis, inspection, demonstration, and test, or some combination of these four. The verification plan, typically written at a detailed technical level, plays a pivotal role in bottom-up product realization.

Note: Close alignment of the verification plan with the project's SEMP is absolutely essential.

Verification can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:
• Simulated (algorithmic models, virtual reality simulator);
• Mockup (plywood, brass board, breadboard);
• Concept description (paper report);
• Prototype (product with partial functionality);
• Engineering unit (fully functional but may not be the same form/fit);
• Design verification test units (form, fit, and function are the same, but they may not have flight parts);
• Qualification units (identical to flight units but may be subjected to extreme environments); and
• Flight units (end product that is flown, including protoflight units).

Any of these product forms may be in any of these states:
• Produced (built, fabricated, manufactured, or coded);
• Reused (modified internal nondevelopmental products or OTS product); and
• Assembled and integrated (a composite of lower level products).

The conditions and environment under which the product is to be verified should be established and the verification planned based on the associated entrance/success criteria identified. The Decision Analysis Process should be used to help finalize the planning details. Procedures should be prepared to conduct verification based on the type (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational use scenarios are thought through so as to explore all possible verification activities to be performed.

Note: The final, official verification of the end product should be for a controlled unit. Typically, attempting to "buy off" a "shall" on a prototype is not acceptable; verification is usually completed on a qualification, flight, or other more final, controlled unit.


Types of Verification
• Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a design to stakeholder expectations based on calculated data or data derived from lower system structure end product verifications. Analysis is generally used when a prototype; engineering model; or fabricated, assembled, and integrated product is not available. Analysis includes the use of modeling and simulation as analytical tools. A model is a mathematical representation of reality. A simulation is the manipulation of a model.
• Demonstration: Showing that the use of an end product achieves the individual specified requirement. It is generally a basic confirmation of performance capability, differentiated from testing by the lack of detailed data gathering. Demonstrations can involve the use of physical models or mockups; for example, a requirement that all controls shall be reachable by the pilot could be verified by having a pilot perform flight-related tasks in a cockpit mockup or simulator. A demonstration could also be the actual operation of the end product by highly qualified personnel, such as test pilots, who perform a one-time event that demonstrates a capability to operate at extreme limits of system performance, an operation not normally expected from a representative operational pilot.
• Inspection: The visual examination of a realized end product. Inspection is generally used to verify physical design features or specific manufacturer identification. For example, if there is a requirement that the safety arming pin has a red flag with the words "Remove Before Flight" stenciled on the flag in black letters, a visual inspection of the arming pin flag can be used to determine if this requirement was met.
• Test: The use of an end product to obtain detailed data needed to verify performance, or to provide sufficient information to verify performance through further analysis. Testing can be conducted on final end products, breadboards, brass boards, or prototypes. Testing produces data at discrete points for each specified requirement under controlled conditions and is the most resource-intensive verification technique. As the saying goes, "Test as you fly, and fly as you test." (See Subsection 5.3.2.5.)

Outcomes of verification planning include the following:
• The verification type that is appropriate for showing or proving that the realized product conforms to its specified requirements is selected.
• The product verification procedures are clearly defined based on: (1) the procedures for each type of verification selected, (2) the purpose and objective of each procedure, (3) any pre-verification and post-verification actions, and (4) the criteria for determining the success or failure of the procedure.
• The verification environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and climatic conditions) in which the verification procedures will be implemented is defined.
• As appropriate, project risk items are updated based on approved verification strategies that cannot duplicate fully integrated test systems, configurations, and/or target operating environments. Rationales, trade space, optimization results, and implications of the approaches are documented in the new or revised risk statements, as well as references to accommodate future design, test, and operational changes to the project baseline.

Note: Verification planning is begun early in the project life cycle, during the requirements development phase. (See Section 4.2.) Which verification approach to use should be included as part of requirements development to plan for future activities, to establish special requirements derived from the verification-enabling products identified, and to ensure that the technical statement is a verifiable requirement. Updates to verification planning continue throughout logical decomposition and design development, especially as design reviews and simulations shed light on items under consideration. (See Section 6.1.)


Product Verification Preparation
In preparation for verification, the specified requirements (outputs of the Design Solution Process) are collected and confirmed. The product to be verified is obtained (output from implementation or integration), as are any enabling products and support resources that are necessary for verification (requirements identified and acquisition initiated by design solution definition activities). The final element of verification preparation includes the preparation of the verification environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and climatic conditions). Identification of the environmental requirements is necessary, and the implications of those requirements must be carefully considered.

Note: Depending on the nature of the verification effort and the life-cycle phase the program is in, some type of review to assess readiness for verification (as well as validation later) is typically held. In earlier phases of the life cycle, these reviews may be held informally; in later phases of the life cycle, this review becomes a formal event called a Test Readiness Review. TRRs and other technical reviews are an activity of the Technical Assessment Process. On most projects, a number of TRRs with tailored entrance/success criteria are held to assess the readiness and availability of test ranges, test facilities, trained testers, instrumentation, integration labs, support equipment, and other enabling products. Peer reviews are additional reviews that may be conducted formally or informally to ensure readiness for verification (as well as the results of the verification process).

Outcomes of verification preparation include the following:
• The preparations for performing the verification as planned are completed;
• An appropriate set of specified requirements and supporting configuration documentation is available and on hand;
• Articles/models to be used for verification are on hand, assembled, and integrated with the verification environment according to verification plans and schedules;
• The resources needed to conduct the verification are available according to the verification plans and schedules; and
• The verification environment is evaluated for adequacy, completeness, readiness, and integration.

Conduct Planned Product Verification
The actual act of verifying the end product is conducted as spelled out in the plans and procedures, and conformance is established to each specified verification requirement. The responsible engineer should ensure that the procedures were followed and performed as planned, that the verification-enabling products were calibrated correctly, and that the data were collected and recorded for the required verification measures. The Decision Analysis Process should be used to help make decisions with respect to needed changes in the verification plans, environment, and/or conduct.

Outcomes of conducting verification include the following:
• A verified product is established, with supporting confirmation that the appropriate results were collected and evaluated to show completion of verification objectives;
• A determination as to whether the realized end product (in the appropriate form for the life-cycle phase) complies with its specified requirements;
• A determination that the verification product was appropriately integrated with the verification environment and that each specified requirement was properly verified; and
• A determination that product functions were verified both together and with interfacing products throughout their performance envelope.

Analyze Product Verification Results
Once the verification activities have been completed, the results are collected and analyzed. The data are analyzed for quality, integrity, correctness, consistency, and validity, and any verification anomalies, variations, and out-of-compliance conditions are identified and reviewed. Variations, anomalies, and out-of-compliance conditions must be recorded and reported for followup action and closure. Verification results should be recorded in the requirements compliance matrix developed during the Technical Requirements Definition Process, or in another mechanism used to trace compliance for each verification requirement.
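Recording outcomes against the compliance matrix and opening a tracked item for each out-of-compliance condition can also be sketched simply. In the hedged Python sketch below, the requirement identifiers, the "DR-XXXX" numbering scheme, and the status values are hypothetical assumptions for illustration.

    def record_result(entry, passed, notes, discrepancies, next_dr):
        """Record one verification outcome; open a discrepancy report on failure.

        Returns the next available discrepancy report number (hypothetical numbering).
        """
        entry["status"] = "passed" if passed else "failed"
        entry["notes"] = notes
        if not passed:
            discrepancies.append({
                "dr_number": f"DR-{next_dr:04d}",
                "requirement_id": entry["requirement_id"],
                "description": notes,
                "disposition": "open",  # closed only after corrective action or an approved waiver
            })
            next_dr += 1
        return next_dr

    # Hypothetical usage against two compliance-matrix entries
    matrix = [
        {"requirement_id": "SYS-042", "status": "planned"},
        {"requirement_id": "SYS-107", "status": "planned"},
    ]
    discrepancy_reports = []
    counter = 1
    counter = record_result(matrix[0], passed=False,
                            notes="Capacity fade of 24 percent observed at cycle 500",
                            discrepancies=discrepancy_reports, next_dr=counter)
    counter = record_result(matrix[1], passed=True, notes="Verified by inspection",
                            discrepancies=discrepancy_reports, next_dr=counter)
    print(len(discrepancy_reports), "open discrepancy report(s)")  # 1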


System design and product realization process activities may be required to resolve anomalies not resulting from poor verification conduct, design, or conditions. If there are anomalies not resulting from the verification conduct, design, or conditions, and the mitigation of these anomalies results in a change to the product, the verification may need to be planned and conducted again.

Outcomes of analyzing the verification results include the following:
• End-product variations, anomalies, and out-of-compliance conditions have been identified;
• Appropriate replanning, redefinition of requirements, design changes, and reverification have been accomplished to resolve anomalies, variations, or out-of-compliance conditions (for problems not caused by poor verification conduct);
• Variances, discrepancies, or waiver conditions have been accepted or dispositioned;
• Discrepancy and corrective action reports have been generated as needed; and
• The verification report is completed.

Reengineering
Based on analysis of verification results, it could be necessary to re-realize the end product used for verification or to reengineer the end products assembled and integrated into the product being verified, based on where and what type of defect was found. Reengineering could require the reapplication of the system design processes (Stakeholder Expectations Definition, Technical Requirements Definition, Logical Decomposition, and Design Solution Definition).

Verification Deficiencies
Verification test outcomes can be unsatisfactory for several reasons. One reason is poor conduct of the verification (e.g., procedures not followed, equipment not calibrated, improper verification environmental conditions, or failure to control other variables not involved in verifying a specified requirement). A second reason could be that the realized end product used was not realized correctly. Reapplying the system design processes could create the need for the following:
• Reengineering products lower in the system structure that make up the product and that were found to be defective (i.e., they failed to satisfy verification requirements) and/or
• Reperforming the Product Verification Process.

Note: Nonconformances and discrepancy reports may be directly linked with the Technical Risk Management Process. Depending on the nature of the nonconformance, approval through such bodies as a material review board or configuration control board (which typically includes risk management participation) may be required.

Pass Verification But Fail Validation?
Many systems successfully complete verification but are then unsuccessful in some critical phase of the validation process, delaying development and causing extensive rework and possible compromises with the stakeholder. Developing a solid ConOps in early phases of the project (and refining it through the requirements development and design phases) is critical to preventing unsuccessful validation. Communication with stakeholders helps to identify operational scenarios and key needs that must be understood when designing and implementing the end product. Should the product fail validation, redesign may be a necessary reality. Review of the understood requirements set, the existing design, operational scenarios, and support material may be necessary, as well as negotiations and compromises with the customer, other stakeholders, and/or end users to determine what, if anything, can be done to correct or resolve the situation. This can add time and cost to the overall project or, in some cases, cause the project to fail or be cancelled.

Capture Product Verification Work Products
Verification work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information. The capture and recording of verification results and related data is a very important, but often underemphasized, step in the Product Verification Process. Verification results, anomalies, and any corrective action(s) taken should be captured, as should all relevant results from the application of the Product Verification Process (related decisions, rationale for the decisions made, assumptions, and lessons learned).


Outcomes of capturing verification work products include the following:
• Verification work products are recorded, e.g., type of verification, procedures, environments, outcomes, decisions, assumptions, corrective actions, and lessons learned.
• Variations, anomalies, and out-of-compliance conditions have been identified and documented, including the actions taken to resolve them.
• Proof that the realized end product did or did not satisfy the specified requirements is documented.
• The verification report is developed, including:
  ▶ Recorded test/verification results/data;
  ▶ Version of the set of specified requirements used;
  ▶ Version of the product verified;
  ▶ Version or standard for tools, data, and equipment used;
  ▶ Results of each verification, including pass or fail declarations; and
  ▶ Expected versus actual discrepancies.

5.3.1.3 Outputs
Key outputs from the process are:
• Discrepancy reports and identified corrective actions;
• Verified product to validation or integration; and
• Verification report(s) and updates to requirements compliance documentation (including verification plans, verification procedures, verification matrices, verification results and analysis, and test/demonstration/inspection/analysis records).

Success criteria include: (1) documented objective evidence of compliance (or waiver, as appropriate) with each system-of-interest requirement and (2) closure of all discrepancy reports. The Product Verification Process is not considered or designated complete until all discrepancy reports are closed (i.e., all errors tracked to closure).

5.3.2 Product Verification Guidance

5.3.2.1 Verification Program
A verification program should be tailored to the project it supports. The project manager/systems engineer must work with the verification engineer to develop a verification program concept. Many factors need to be considered in developing this concept and the subsequent verification program. These factors include:
• Project type, especially for flight projects. Verification methods and timing depend on:
  ▶ The type of flight article involved (e.g., an experiment, payload, or launch vehicle).
  ▶ NASA payload classification (NPR 8705.4, Risk Classification for NASA Payloads). Guidelines are intended to serve as a starting point for establishing the formality of test programs, which can be tailored to the needs of a specific project based on the "A-D" payload classification.
  ▶ Project cost and schedule implications. Verification activities can be significant drivers of a project's cost and schedule; these implications should be considered early in the development of the verification program. Trade studies should be performed to support decisions about verification methods and requirements and the selection of facility types and locations. For example, a trade study might be made to decide between performing a test at a centralized facility or at several decentralized locations. (A simple rollup sketch of planned verification events follows this list.)
  ▶ Risk implications. Risk management must be considered in the development of the verification program. Qualitative risk assessments and quantitative risk analyses (e.g., a Failure Modes, Effects, and Criticality Analysis (FMECA)) often identify new concerns that can be mitigated by additional testing, thus increasing the extent of verification activities. Other risk assessments contribute to trade studies that determine the preferred methods of verification to be used and when those methods should be performed. For example, a trade might be made between performing a model test versus determining model characteristics by a less costly, but less revealing, analysis. The project manager/systems engineer must determine what risks are acceptable in terms of the project's cost and schedule.
• Availability of verification facilities/sites and transportation assets to move an article from one location to another (when needed). This requires coordination with the Integrated Logistics Support (ILS) engineer.
• Acquisition strategy (i.e., in-house development or system contract). Often, a NASA field center can shape a contractor's verification process through the project's SOW.
• Degree of design inheritance and hardware/software reuse.
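The rollup sketch referenced in the list above follows. It is a rough, hedged illustration of how planned verification events might be summed early to expose cost and schedule drivers; every event name, duration, and rate is a hypothetical assumption, and real estimates come from the project's own planning data.

    # Hedged sketch: rough rollup of planned verification events to expose
    # cost/schedule drivers early. All figures are hypothetical.
    events = [
        {"name": "Structural qualification (static + vibration)", "days": 20, "cost_per_day_k": 35.0},
        {"name": "Thermal vacuum qualification",                  "days": 15, "cost_per_day_k": 50.0},
        {"name": "EMI/EMC testing",                               "days": 10, "cost_per_day_k": 25.0},
        {"name": "End-to-end compatibility test",                 "days": 12, "cost_per_day_k": 20.0},
    ]

    total_days = sum(e["days"] for e in events)
    total_cost_k = sum(e["days"] * e["cost_per_day_k"] for e in events)
    biggest = max(events, key=lambda e: e["days"] * e["cost_per_day_k"])

    print(f"Planned verification span: {total_days} facility-days, about ${total_cost_k:,.0f}K")
    print(f"Largest single driver: {biggest['name']}")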


5.3.2.2 Verification in the Life Cycle
The type of verification completed will be a function of the life-cycle phase and the position of the end product within the system structure. The end product must be verified and validated before it is transitioned to the next level up as part of the bottom-up realization process. (See Figure 5.3-2.) While illustrated here as separate processes, there can be considerable overlap between some verification and validation events when implemented.

[Figure 5.3-2 Bottom-up realization process: at each tier of the system structure (Tier 5 up through Tier 1), the end product is verified against its specified requirements, validated against stakeholder expectations and the ConOps, and delivered as a verified end product to the next level up, ultimately to the end user/use environment.]

Quality Assurance in Verification
Even with the best of available designs, hardware fabrication, software coding, and testing, projects are subject to the vagaries of nature and human beings. The systems engineer needs to have some confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements. QA provides an independent assessment to the project manager/systems engineer of the items produced and the processes used during the project life cycle. The QA engineer typically acts as the systems engineer's eyes and ears in this context.

The QA engineer typically monitors the resolution and closeout of nonconformances and problem/failure reports; verifies that the physical configuration of the system conforms to the build-to (or code-to) documentation approved at CDR; and collects and maintains QA data for subsequent failure analyses. The QA engineer also participates in major reviews (primarily SRR, PDR, CDR, and FRR) on issues of design, materials, workmanship, fabrication and verification processes, and other characteristics that could degrade product system quality.

The project manager/systems engineer must work with the QA engineer to develop a QA program (the extent, responsibility, and timing of QA activities) tailored to the project it supports. In part, the QA program ensures that verification requirements are properly specified, especially with respect to test environments, test configurations, and pass/fail criteria, and monitors qualification and acceptance tests to ensure compliance with verification requirements and test procedures and to ensure that test data are correct and complete.


5.3 Product VerificationConfiguration VerificationConfiguration verification is the process of verifying thatresulting products (e.g., hardware and software items)conform to the baselined design and that the baselinedocumentation is current and accurate. Configurationverification is accomplished by two types of control gateactivity: audits and technical reviews.Qualification VerificationQualification-stage verification activities begin aftercompletion of development of the flight/operations hardwaredesigns and include analyses and testing to ensurethat the flight/operations or flight-type hardware (andsoftware) will meet functional and performance requirementsin anticipated environmental conditions. Duringthis stage, many performance requirements are verified,while analyses and models are updated as test data are acquired.Qualification tests generally are designed to subjectthe hardware to worst-case loads and environmentalstresses plus a defined level of margin. Some of the verificationsperformed to ensure hardware compliance arevibration/acoustic, pressure limits, leak rates, thermalvacuum, thermal cycling, Electromagnetic Interferenceand Electromagnetic Compatibility (EMI/EMC), highandlow-voltage limits, and lifetime/cycling. Safety requirements,defined by hazard analysis reports, may alsobe satisfied by qualification testing.Qualification usually occurs at the component or subsystemlevel, but could occur at the system level as well. Aproject deciding against building dedicated qualificationhardware—and using the flight/operations hardware itselffor qualification purposes—is termed “protoflight.”Here, the requirements being verified are typically lessthan that of qualification levels but higher than that ofacceptance levels.Qualification verification verifies the soundness of thedesign. Test levels are typically set with some marginabove expected flight/operations levels, including themaximum number of cycles that can be accumulatedduring acceptance testing. These margins are set to addressdesign safety margins in general, and care shouldbe exercised not to set test levels so that unrealistic failuremodes are created.Acceptance VerificationAcceptance-stage verification activities provide the assurancethat the flight/operations hardware and softwareare in compliance with all functional, performance,and design requirements and are ready for shipment tothe launch site. The acceptance stage begins with the acceptanceof each individual component or piece part forassembly into the flight/operations article, continuingthrough the System Acceptance Review (SAR). (SeeSubsection 6.7.2.1.)Some verifications cannot be performed after a flight/operations article, especially a large one, has been assembledand integrated (e.g., due to inaccessibility). Whenthis occurs, these verifications are to be performed duringfabrication and integration, and are known as “in-process”tests. In this case, acceptance testing begins with inprocesstesting and continues through functional testing,environmental testing, and end-to-end compatibilitytesting. Functional testing normally begins at the componentlevel and continues at the systems level, endingwith all systems operating simultaneously.When flight/operations hardware is unavailable, or itsuse is inappropriate for a specific test, simulators may beused to verify interfaces. Anomalies occurring duringa test are documented on the appropriate reportingsystem, and a proposed resolution should be defined beforetesting continues. 
Major anomalies, or those that are not easily dispositioned, may require resolution by a collaborative effort of the systems engineer and the design, test, and other organizations. Where appropriate, analyses and models are validated and updated as test data are acquired.

Acceptance verification verifies workmanship, not design. Test levels are set to stress items so that failures arise from defects in parts, materials, and workmanship. As such, test levels are those anticipated during flight/operations with no additional margin.

Deployment Verification

The pre-launch verification stage begins with the arrival of the flight/operations article at the launch site and concludes at liftoff. During this stage, the flight/operations article is processed and integrated with the launch vehicle. The launch vehicle could be the shuttle or some other launch vehicle, or the flight/operations article could be part of the launch vehicle. Verifications performed during this stage ensure that no visible damage to the system has occurred during shipment and that the system continues to function properly.
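The distinction between qualification, protoflight, and acceptance testing is largely a matter of the margin applied above the maximum expected flight/operations environment. As a minimal illustrative sketch of that arithmetic (the margin values, units, and environment figure below are assumptions chosen for the example; actual margins come from the applicable program environmental test requirements, not from this handbook), the relationship can be expressed as:

# Illustrative relationship between test stage and applied margin.
# All numbers here are assumed values for demonstration only.

MARGINS_DB = {            # margin applied above the maximum expected level, in dB
    "qualification": 6.0, # dedicated qualification hardware: worst case plus margin
    "protoflight": 3.0,   # flight hardware used for qualification purposes
    "acceptance": 0.0,    # workmanship screening: no additional margin
}

def test_level(max_expected_level: float, stage: str) -> float:
    """Scale a power-like environment (e.g., acceleration spectral density)
    by the margin assigned to the given verification stage."""
    factor = 10 ** (MARGINS_DB[stage] / 10.0)  # convert the dB margin to a ratio
    return max_expected_level * factor

if __name__ == "__main__":
    max_expected = 0.05  # assumed maximum expected level, g^2/Hz
    for stage in MARGINS_DB:
        print(f"{stage:13s}: {test_level(max_expected, stage):.3f} g^2/Hz")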


ments so that the completed procedure can serve as part of the verification report. The as-run and certified copy of the procedure is maintained as part of the project's archives.

5.3.2.4 Verification Reports

A verification report should be provided for each analysis and, at a minimum, for each major test activity—such as functional testing, environmental testing, and end-to-end compatibility testing—occurring over long periods of time or separated by other activities. Verification reports may be needed for each individual test activity, such as functional testing, acoustic testing, vibration testing, and thermal vacuum/thermal balance testing. Verification reports should be completed within a few weeks following a test and should provide evidence of compliance with the verification requirements for which the activity was conducted.

The verification report should include, as appropriate:
• Verification objectives and the degree to which they were met;
• Description of the verification activity;
• Test configuration and differences from the flight/operations configuration;
• Specific result of each test and each procedure, including annotated tests;
• Specific result of each analysis;
• Test performance data tables, graphs, illustrations, and pictures;
• Descriptions of deviations from nominal results, problems/failures, approved anomaly corrective actions, and retest activity;
• Summary of nonconformance/discrepancy reports, including dispositions;
• Conclusions and recommendations relative to the success of the verification activity;
• Status of support equipment as affected by the test;
• Copy of the as-run procedure; and
• Authentication of test results and authorization of acceptability.

Note: It is important to understand that, over the lifetime of a system, requirements may change or component obsolescence may make a design solution too difficult to produce from either a cost or technical standpoint. In these instances, it is critical to employ the systems engineering design processes at a lower level to ensure the modified design provides a proper design solution. An evaluation should be made to determine the magnitude of the change required, and the process should be tailored to address the issues appropriately. A modified qualification, verification, and validation process may be required to baseline a new design solution, consistent with the intent previously described for those processes. The acceptance testing will also need to be updated as necessary to verify that the new product has been manufactured and coded in compliance with the revised baselined design.

5.3.2.5 End-to-End System Testing

The objective of end-to-end testing is to demonstrate interface compatibility and desired total functionality among different elements of a system, between systems, and within a system as a whole. End-to-end tests performed on the integrated ground and flight system include all elements of the payload, its control, stimulation, communications, and data processing to demonstrate that the entire system is operating in a manner to fulfill all mission requirements and objectives.

End-to-end testing includes executing complete threads or operational scenarios across multiple configuration items, ensuring that all mission and performance requirements are verified.
Operational scenarios are used extensively to ensure that the system (or collection of systems) will successfully execute mission requirements. An operational scenario is a step-by-step description of how the system should operate and interact with its users and its external interfaces (e.g., other systems). Scenarios should be described in a manner that allows engineers to walk through them and gain an understanding of how all the various parts of the system should function and interact, as well as to verify that the system will satisfy the user's needs and expectations. Operational scenarios should be described for all operational modes, mission phases (e.g., installation, startup, typical examples of normal and contingency operations, shutdown, and maintenance), and critical sequences of activities for all classes of users identified. Each scenario should include events, actions, stimuli, information, and interactions as appropriate to provide a comprehensive understanding of the operational aspects of the system.
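Because each scenario is a step-by-step description spanning modes, phases, and user classes, teams often capture scenarios in a structured, reviewable form. The sketch below shows one possible representation; the field names and the example scenario are illustrative assumptions, not a NASA-prescribed schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioStep:
    event: str              # what triggers the step (stimulus, command, elapsed time)
    actor: str              # user class or system element performing the action
    action: str             # what the actor does
    expected_response: str  # observable behavior used as pass/fail evidence

@dataclass
class OperationalScenario:
    name: str
    mission_phase: str      # e.g., "installation", "startup", "normal ops", "shutdown"
    mode: str               # operational mode in which the scenario applies
    condition: str          # "nominal", "off-nominal", or "stressful"
    steps: List[ScenarioStep] = field(default_factory=list)

# Invented example scenario used only to show the structure.
safe_mode_recovery = OperationalScenario(
    name="Transient sensor dropout during downlink",
    mission_phase="normal ops",
    mode="science",
    condition="off-nominal",
    steps=[
        ScenarioStep("Star tracker signal lost for 10 s", "spacecraft",
                     "FDIR logic flags attitude data as invalid",
                     "Fault is isolated and safe mode is commanded"),
        ScenarioStep("Safe mode entered", "ground operator",
                     "Operator executes the recovery procedure from the ops plan",
                     "Spacecraft returns to science mode and downlink resumes"),
    ],
)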


Figure 5.3-3 presents an example of an end-to-end data flow for a scientific satellite mission. Each arrow in the diagram represents one or more data or control flows between two hardware, software, subsystem, or system configuration items. End-to-end testing verifies that the data flows throughout the multisystem environment are correct, that the system provides the required functionality, and that the outputs at the eventual end points correspond to expected results. Since the test environment is as close an approximation as possible to the operational environment, performance requirements testing is also included. This figure is not intended to show the full extent of end-to-end testing. Each system shown would need to be broken down into a further level of granularity for completeness.

Figure 5.3-3 Example of end-to-end data flow for a scientific satellite mission (the figure spans external stimuli—X-ray, visible, ultraviolet, infrared, and microwave inputs to the flight system's instrument sets—the flight system, the ground system's uplink process of mission planning, command generation, software loads, and transmission, its downlink process of data capture, archival, and analysis, and external systems such as the scientific community)

End-to-end testing is an integral part of the verification and validation of the total system and is an activity that is employed during selected hardware, software, and system phases throughout the life cycle. In comparison with configuration item testing, end-to-end testing addresses each configuration item only down to the level where it interfaces externally to other configuration items, which can be either hardware, software, or human based. Internal interfaces (e.g., software subroutine calls, analog-to-digital conversion) of a configuration item are not within the scope of end-to-end testing.

How to Perform End-to-End Testing

End-to-end testing is probably the most significant element of any project verification program, and the test should be designed to satisfy the edict to "test the way we fly." This means assembling the system in its realistic configuration, subjecting it to a realistic environment, and then "flying" it through all of its expected operational modes. For a scientific robotic mission, targets and stimuli should be designed to provide realistic inputs to the scientific instruments. The output signals from the instruments would flow through the satellite data-handling system and then be transmitted to the actual ground station through the satellite communications system. If data are transferred to the ground station through one or more satellite or ground relays (e.g., the Tracking and Data Relay Satellite System (TDRSS)), then those elements must be included as part of the test.

The end-to-end compatibility test encompasses the entire chain of operations that will occur during all mission modes in such a manner as to ensure that the system will fulfill mission requirements. The mission environment should be simulated as realistically as possible, and the instruments should receive stimuli of the kind they will receive during the mission. The Radio Frequency (RF) links, ground station operations, and software functions should be fully exercised. When acceptable simulation facilities are available for portions of the operational systems, they may be used for the test instead of the actual system elements. The specific environments under which the end-to-end test is conducted and the stimuli, payload configuration, RF links, and other system elements to be used must be determined in accordance with the characteristics of the mission.

Although end-to-end testing is probably the most complex test in any system verification program, the same careful preparation is necessary as for any other system-level test. For example, a test lead must be appointed and the test team selected and trained. Adequate time must be allocated for test planning and coordination with the design team. Test procedures and test software must be documented, approved, and placed under configuration control.

Plans, agreements, and facilities must be put in place well in advance of the test to enable end-to-end testing between all components of the system.

Note: This is particularly important when missions are developed with international or external partners.

Once the tests are run, the test results are documented and any discrepancies carefully recorded and reported. All test data must be maintained under configuration control.

Before completing end-to-end testing, the following activities are completed for each configuration item:
• All requirements, interfaces, states, and state transitions of each configuration item should be tested through the exercise of comprehensive test procedures and test cases to ensure the configuration items are complete and correct.
• A full set of operational range checking tests should be conducted on software variables to ensure that the software performs as expected within its complete range and fails, or warns, appropriately for out-of-range values or conditions.

End-to-end testing activities include the following:
1. Operational scenarios are created that span all of the following items (during nominal, off-nominal, and stressful conditions) that could occur during the mission:
   ▶ Mission phase, mode, and state transitions;
   ▶ First-time events;
   ▶ Operational performance limits;
   ▶ Fault protection routines;
   ▶ Failure Detection, Isolation, and Recovery (FDIR) logic;
   ▶ Safety properties;
   ▶ Operational responses to transient or off-nominal sensor signals; and
   ▶ Communication uplink and downlink.
2. The operational scenarios are used to test the configuration items, interfaces, and end-to-end performance as early as possible in the configuration items' development life cycle. This typically means simulators or software stubs have to be created to implement a full scenario. It is extremely important to produce a skeleton of the actual system to run full scenarios as soon as possible with both simulated/stubbed-out and actual configuration items.
3. A complete diagram and inventory of all interfaces are documented.
4. Test cases are executed to cover human-human, human-hardware, human-software, hardware-software, software-software, and subsystem-subsystem interfaces and associated inputs, outputs, and modes of operation (including safing modes).
5. It is strongly recommended that during end-to-end testing, an operations staff member who has not previously been involved in the testing activities be designated to exercise the system as it is intended to be used, to determine if it will fail.
6. The test environment should approximate/simulate the actual operational conditions when possible. The fidelity of the test environment should be authenticated. Differences between the test and operational environment should be documented in the test or verification plan.
7. When testing of a requirement is not possible, verification is demonstrated by other means (e.g., model checking, analysis, or simulation). If true end-to-end testing cannot be achieved, then the testing must be done piecemeal and patched together by analysis and simulation. An example of this would be a system that is assembled on orbit, where the various elements come together for the first time on orbit.
8. When an error in the developed system is identified and fixed, regression testing of the system or component is performed to ensure that modifications have not caused unintended effects and that the system or component still complies with previously tested specified requirements.
9. When tests are aborted or a test is known to be flawed (e.g., due to configuration or the test environment), the test should be rerun after the identified problem is fixed.
10. The operational scenarios should be used to formulate the final operations plan.
11. Prior to system delivery, as part of the system qualification testing, test cases should be executed to cover all of the plans documented in the operations plan in the order in which they are expected to occur during the mission.

End-to-end test documentation includes the following:
• Inclusion of end-to-end testing plans as a part of the test or verification plan.
• A document, matrix, or database under configuration control that traces the end-to-end system test suite to the results. Data that are typically recorded include the test-case identifier, the subsystems/hardware/program sets exercised, the list of requirements being verified, the interfaces exercised, the date, and the outcome of the test (i.e., whether the actual test output met the expected output).
• End-to-end test cases and procedures (including inputs and expected outputs).
• A record of end-to-end problems/failures/anomalies.

End-to-end testing can be integrated with other project testing activities; however, the documentation mentioned in this subsection should be readily extractable for review, status, and assessment.

5.3.2.6 Modeling and Simulation

For the Product Verification Process, a model is a physical, mathematical, or logical representation of an end product to be verified. Modeling and Simulation (M&S) can be used to augment and support the Product Verification Process and is an effective tool for performing verification, whether in early life-cycle phases or later. Both the facilities and the model itself are developed using the system design and product realization processes.

Note: The development of the physical, mathematical, or logical model includes evaluating whether the model to be used as representative of the system end product was realized according to its design-solution-specified requirements for a model and whether it will be valid for use as a model. In some cases, the model must also be accredited to certify the range of specific uses for which the model can be used. Like any other enabling product, budget and time must be planned for creating and evaluating the model to be used to verify the applicable system end product.

The model used, as well as the M&S facility, are enabling products and must use the 17 technical processes (see NPR 7123.1, NASA Systems Engineering Processes and Requirements) for their development and realization (including acceptance by the operational community) to ensure that the model and simulation adequately represent the operational environment and performance of the modeled end product. Additionally, in some cases certification is required before models and simulations can be used.

M&S assets can come from a variety of sources; for example, contractors, other Government agencies, or laboratories can provide models that address specific system attributes.

5.3.2.7 Hardware-in-the-Loop

Fully functional end products, such as an actual piece of hardware, may be combined with models and simulations that simulate the inputs and outputs of other end products of the system. This is referred to as "Hardware-in-the-Loop" (HWIL) testing.
HWIL testing links all elements (subsystems and test facilities) together within a synthetic environment to provide a high-fidelity, real-time operational evaluation for the real system or subsystems. The operator can be intimately involved in the testing, and HWIL resources can be connected to other facilities for distributed test and analysis applications. One of the uses of HWIL testing is to get as close to the actual concept of operation as possible to support verification and validation when the operational environment is difficult or expensive to recreate.

During development, this HWIL verification normally takes place in an integration laboratory or test facility. For example, HWIL could be a complete spacecraft in a special test chamber, with the inputs/outputs being provided as output from models that simulate the system in an operational environment. Real-time computers are used to control the spacecraft and subsystems in projected operational scenarios. Flight dynamics, responding to the commands issued by the guidance and control system hardware/software, are simulated in real time to determine the trajectory and to calculate system flight conditions. HWIL testing verifies that the end product being evaluated meets the interface requirements, properly transforming inputs to required outputs. HWIL modeling can provide a valuable means of testing physical end products lower in the system structure by providing simulated inputs to the end product or receiving outputs from the end product to evaluate the quality of those outputs. This tool can be used throughout the life cycle of a program or project. The shuttle program uses an HWIL to verify software and hardware updates for the control of the shuttle main engines.

Modeling, simulation, and hardware/human-in-the-loop technology, when appropriately integrated and sequenced with testing, provide a verification method at a reasonable cost. This integrated testing process specifically (1) reduces the cost of life-cycle testing, (2) provides significantly more engineering/performance insights into each system evaluated, and (3) reduces test time and lowers project risk. This process also significantly reduces the number of destructive tests required over the life of the product. The integration of M&S into verification testing provides insights into trends and tendencies of system and subsystem performance that might not otherwise be possible due to hardware limitations.
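The closed-loop structure described above—the item under test exchanging inputs and outputs with real-time simulations of the rest of the system—can be sketched as a simple test loop. The sketch below is a schematic illustration only; the stand-in controller, the dynamics model, the numerical values, and the pass/fail criterion are assumptions made for the example and do not describe any actual NASA facility.

# Minimal hardware-in-the-loop style test loop (illustrative only).
# In a real HWIL facility the "device under test" would be actual flight
# hardware or software driven through its real electrical or data interfaces.

class DeviceUnderTest:
    """Stand-in for the real guidance/control end product."""
    def command(self, sensed_rate: float) -> float:
        # Simple rate damper: command a torque opposing the sensed body rate.
        return -0.8 * sensed_rate

class PlantSimulation:
    """Real-time simulation of the rest of the system (flight dynamics)."""
    def __init__(self, rate: float = 0.2, inertia: float = 1.0):
        self.rate = rate        # body rate, rad/s (assumed initial condition)
        self.inertia = inertia  # kg*m^2 (assumed)

    def step(self, torque: float, dt: float) -> float:
        self.rate += (torque / self.inertia) * dt
        return self.rate

def run_hwil_case(duration_s: float = 30.0, dt: float = 0.1) -> bool:
    dut, plant = DeviceUnderTest(), PlantSimulation()
    for _ in range(int(duration_s / dt)):
        sensed = plant.rate            # simulated sensor input fed to the DUT
        torque = dut.command(sensed)   # DUT output fed back into the simulation
        plant.step(torque, dt)
    # Assumed pass/fail criterion: residual rate damped below a threshold.
    return abs(plant.rate) < 1e-3

if __name__ == "__main__":
    print("HWIL case passed:", run_hwil_case())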


5.4 Product Validation

The Product Validation Process is the second of the verification and validation processes conducted on a realized end product. While verification proves whether "the system was done right," validation proves whether "the right system was done." In other words, verification provides objective evidence that every "shall" statement was met, whereas validation is performed for the benefit of the customers and users to ensure that the system functions in the expected manner when placed in the intended environment. This is achieved by examining the products of the system at every level of the structure.

Validation confirms that realized end products at any position within the system structure conform to their set of stakeholder expectations captured in the ConOps, and ensures that any anomalies discovered during validation are appropriately resolved prior to product delivery. This section discusses the process activities, types of validation, inputs and outputs, and potential deficiencies.

Distinctions Between Product Verification and Product Validation

From a process perspective, Product Verification and Product Validation may be similar in nature, but the objectives are fundamentally different. From a customer point of view, the interest is in whether the end product provided will do what they intend within the environment of use. It is essential to confirm that the realized product is in conformance with its specifications and design description documentation because these specifications and documents will establish the configuration baseline of the product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems. When cost-effective and warranted by analysis, various combined tests are used. The expense of validation testing alone can be mitigated by ensuring that each end product in the system structure was correctly realized in accordance with its specified requirements before conducting validation.

5.4.1 Process Description

Figure 5.4-1 provides a typical flow diagram for the Product Validation Process and identifies typical inputs, outputs, and activities to consider in addressing product validation.

5.4.1.1 Inputs

Key inputs to the process are:
• Verified product,
• Validation plan,
• Baselined stakeholder expectations (including ConOps and mission needs and goals), and
• Any enabling products needed to perform the Product Validation Process.

Differences Between Verification and Validation Testing

• Verification Testing: Verification testing relates back to the approved requirements set (such as an SRD) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify the status of technical progress, to verify that design risks are minimized, to substantiate achievement of contract technical performance, and to certify readiness for initial validation testing. Verification tests use instrumentation and measurements, and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.
• Validation Testing: Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and suitability of the product for use in mission operations by typical users, and to evaluate the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.
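The distinction can be made concrete at the level of individual checks: verification compares measured values against the "shall" statements in the approved requirements set, while validation runs a ConOps-derived scenario and judges the result against the stakeholders' success criteria. The following sketch is a schematic illustration only; the requirement text, values, and criteria are invented for the example.

# Schematic illustration of the verification/validation distinction.
# The requirement, numbers, and scenario criterion are assumptions.

def verify_downlink_rate(measured_rate_mbps: float) -> bool:
    """Verification: objective evidence against a 'shall' statement, e.g.,
    'The system shall downlink science data at no less than 150 Mbps.'"""
    return measured_rate_mbps >= 150.0

def validate_daily_science_return(scenario_log) -> bool:
    """Validation: does the system, operated per the ConOps by typical users,
    deliver the expected capability (here, each day's science data returned
    within 24 hours of acquisition)?"""
    return all(entry["returned_within_hours"] <= 24.0 for entry in scenario_log)

if __name__ == "__main__":
    print("Verified:", verify_downlink_rate(measured_rate_mbps=162.0))
    print("Validated:", validate_daily_science_return(
        scenario_log=[{"returned_within_hours": 6.0},
                      {"returned_within_hours": 20.5}]))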


Figure 5.4-1 Product Validation Process (inputs: the end product to be validated from the Product Verification Process, the stakeholder expectation baseline from the Configuration Management Process, the product validation plan from the Design Solution Definition and Technical Planning Processes, and product validation-enabling products from existing resources or the Product Transition Process; activities: prepare to conduct product validation, perform the product validation, analyze the outcomes of the product validation, prepare a product validation report, and capture the work products from product validation; outputs: the validated end product to the Product Transition Process, product validation results to the Technical Assessment Process, and the product validation report and work products to the Technical Data Management Process)

5.4.1.2 Process Activities

The Product Validation Process demonstrates that the realized end product satisfies its stakeholder (customer and other interested party) expectations within the intended operational environments, with validation performed by anticipated operators and/or users. The type of validation is a function of the life-cycle phase and the position of the end product within the system structure.

There are five major steps in the validation process: (1) validation planning (prepare to implement the validation plan), (2) validation preparation (prepare for conducting validation), (3) conduct planned validation (perform validation), (4) analyze validation results, and (5) capture the validation work products.

The objectives of the Product Validation Process are:
• To confirm that:
  ▶ The right product was realized—the one wanted by the customer,
  ▶ The realized product can be used by intended operators/users, and
  ▶ The Measures of Effectiveness (MOEs) are satisfied.
• To confirm that the realized product fulfills its intended use when operated in its intended environment:
  ▶ Validation is performed for each realized (implemented or integrated) product from the lowest end product in a system structure branch up to the top WBS model end product.
  ▶ Evidence is generated as necessary to confirm that products at each layer of the system structure meet the capability and other operational expectations of the customer/user/operator and other interested parties.
• To ensure that any problems discovered are appropriately resolved prior to delivery of the realized product (if validation is done by the supplier of the product) or prior to integration with other products into a higher level assembled product (if validation is done by the receiver of the product).

Verification and validation events are illustrated as separate processes but, when used, can considerably overlap. When cost effective and warranted by analysis, various combined tests are used. However, while from a process perspective verification and validation are similar in nature, their objectives are fundamentally different.

From a customer's point of view, the interest is in whether the end product provided will supply the needed capabilities within the intended environments of use. The expense of validation testing alone can be mitigated by ensuring that each end product in the system structure was correctly realized in accordance with its specified requirements prior to validation, during verification. It is possible that the system design was not done properly and, even though the verification tests were successful (satisfying the specified requirements), the validation tests would still fail (stakeholder expectations not satisfied). Thus, it is essential that validation of lower products in the system structure be conducted as well as verification so as to catch design failures or deficiencies as early as possible.

Product Validation Planning

Planning to conduct the product validation is a key first step. The type of validation to be used (e.g., analysis, demonstration, inspection, or test) should be established based on the form of the realized end product, the applicable life-cycle phase, cost, schedule, resources available, and the location of the system product within the system structure. (See Appendix I for a sample verification and validation plan outline.)

An established set or subset of requirements to be validated should be identified and the validation plan reviewed (an output of the Technical Planning Process, based on design solution outputs) for any specific procedures, constraints, success criteria, or other validation requirements. The conditions and environment under which the product is to be validated should be established and the validation planned based on the relevant life-cycle phase and associated success criteria identified. The Decision Analysis Process should be used to help finalize the planning details.

It is important to review the validation plans with relevant stakeholders and understand the relationship between the context of the validation and the context of use (human involvement). As part of the planning process, validation-enabling products should be identified, and scheduling and/or acquisition initiated.

Procedures should be prepared to conduct validation based on the type (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational and use-case scenarios are thought through so as to explore all possible validation activities to be performed.

Validation Plan and Methods

The validation plan is one of the work products of the Technical Planning Process and is generated during the Design Solution Process to validate the realized product against the baselined stakeholder expectations. This plan can take many forms. The plan describes the total Test and Evaluation (T&E) planning from development of lower end through higher end products in the system structure and through operational T&E into production and acceptance. It may include the verification and validation plan. (See Appendix I for a sample verification and validation plan outline.)

The types of validation include test, demonstration, inspection, and analysis. While the name of each method is the same as the name of the methods for verification, the purpose and intent are quite different.

Types of Validation

• Analysis: The use of mathematical modeling and analytical techniques to predict the suitability of a design to stakeholder expectations based on calculated data or data derived from lower system structure end product validations. It is generally used when a prototype; engineering model; or fabricated, assembled, and integrated product is not available. Analysis includes the use of both modeling and simulation.
• Demonstration: The use of a realized end product to show that a set of stakeholder expectations can be achieved. It is generally used for a basic confirmation of performance capability and is differentiated from testing by the lack of detailed data gathering. Validation is done under realistic conditions for any end product within the system structure for the purpose of determining the effectiveness and suitability of the product for use in NASA missions or mission support by typical users and evaluating the results of such tests.
• Inspection: The visual examination of a realized end product. It is generally used to validate physical design features or specific manufacturer identification.
• Test: The use of a realized end product to obtain detailed data to validate performance or to provide sufficient information to validate performance through further analysis. Testing is the detailed quantifying method of both verification and validation, but it is required in order to validate final end products to be produced and deployed.
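In practice, projects often record which of these four methods will be used against each stakeholder expectation and on which product form, since that mapping drives facility, enabling-product, and schedule planning. The sketch below is one possible, hedged representation; every entry and field name is an invented example rather than a prescribed NASA format.

# Illustrative validation planning record: expectation, planned method,
# and the product form on which the validation will be performed.
# All entries are invented examples.

VALIDATION_PLAN = [
    {"expectation": "Operator can command safe mode from the control center",
     "method": "demonstration", "product_form": "engineering unit"},
    {"expectation": "Instrument returns calibrated images of the target star field",
     "method": "test", "product_form": "flight unit"},
    {"expectation": "Radiator area is sufficient for the worst-case hot orbit",
     "method": "analysis", "product_form": "simulated (thermal model)"},
    {"expectation": "Crew handholds are located per the stakeholder walkthrough",
     "method": "inspection", "product_form": "mockup"},
]

def methods_by_form(plan):
    """Group planned validations by the product form they require,
    which drives enabling-product and facility scheduling."""
    grouped = {}
    for entry in plan:
        grouped.setdefault(entry["product_form"], []).append(entry["method"])
    return grouped

if __name__ == "__main__":
    for form, methods in methods_by_form(VALIDATION_PLAN).items():
        print(f"{form}: {', '.join(methods)}")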


Validation is conducted by the user/operator or by the developer, as determined by NASA Center directives or the contract with the developers. Systems-level validation (e.g., customer T&E and some other types of validation) may be performed by an acquirer testing organization. For those portions of validation performed by the developer, appropriate agreements must be negotiated to ensure that validation proof-of-documentation is delivered with the realized product.

All realized end products, regardless of the source (buy, make, reuse, assemble and integrate) and the position in the system structure, should be validated to demonstrate/confirm satisfaction of stakeholder expectations. Variations, anomalies, and out-of-compliance conditions, where such have been detected, are documented along with the actions taken to resolve the discrepancies. Validation is typically carried out in the intended operational environment under simulated or actual operational conditions, not under the controlled conditions usually employed for the Product Verification Process.

Validation can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:
• Simulated (algorithmic models, virtual reality simulator);
• Mockup (plywood, brassboard, breadboard);
• Concept description (paper report);
• Prototype (product with partial functionality);
• Engineering unit (fully functional but may not be the same form/fit);
• Design validation test units (form, fit, and function may be the same, but they may not have flight parts);
• Qualification unit (identical to the flight unit but may be subjected to extreme environments); or
• Flight unit (end product that is flown).

Any of these types of product forms may be in any of these states:
• Produced (built, fabricated, manufactured, or coded);
• Reused (modified internal nondevelopmental products or off-the-shelf product); or
• Assembled and integrated (a composite of lower level products).

Note: The final, official validation of the end product should be for a controlled unit. Typically, attempting final validation against operational concepts on a prototype is not acceptable: it is usually completed on a qualification, flight, or other more final, controlled unit.

Outcomes of validation planning include the following:
• The validation type that is appropriate to confirm that the realized product or products conform to stakeholder expectations (based on the form of the realized end product) has been identified.
• Validation procedures are defined based on: (1) the needed procedures for each type of validation selected, (2) the purpose and objective of each procedure step, (3) any pre-test and post-test actions, and (4) the criteria for determining the success or failure of the procedure.
• A validation environment (e.g., facilities, equipment, tools, simulations, measuring devices, personnel, and operational conditions) in which the validation procedures will be implemented has been defined.

Note: In planning for validation, consideration should be given to the extent to which validation testing will be done. In many instances, both off-nominal and nominal operational scenarios should be utilized.
Off-nominal testing offers insight into a system's total performance characteristics and often assists in the identification of design issues and of human-machine interface, training, and procedural changes required to meet the mission goals and objectives. Off-nominal testing, as well as nominal testing, should be included when planning for validation.

Product Validation Preparation

To prepare for performing product validation, the appropriate set of expectations against which the validation is to be made should be obtained. Also, the product to be validated (output from implementation, or integration and verification), as well as the validation-enabling products and support resources (requirements identified and acquisition initiated by design solution activities) with which validation will be conducted, should be collected.


Examples of Enabling Products and Support Resources for Preparing to Conduct Validation

One of the key tasks in the Product Validation Process, "prepare for conducting validation," is to obtain the necessary enabling products and support resources needed to conduct validation. Examples of these include:
• Measurement tools (scopes, electronic devices, probes);
• Embedded test software;
• Test wiring, measurement devices, and telemetry equipment;
• Recording equipment (to capture test results);
• End products in the loop (software, electronics, or mechanics) for hardware-in-the-loop simulations;
• External interfacing products of other systems;
• Actual external interfacing products of other systems (aircraft, vehicles, humans); and
• Facilities and skilled operators.

The validation environment is then prepared (set up the equipment, sensors, recording devices, etc., that will be involved in the validation conduct) and the validation procedures reviewed to identify and resolve any issues impacting validation.

Outcomes of validation preparation include the following:
• Preparation for doing the planned validation is completed;
• The appropriate set of stakeholder expectations is available and on hand;
• Articles or models to be used for validation with the validation product and enabling products are integrated within the validation environment according to plans and schedules;
• Resources are available according to validation plans and schedules; and
• The validation environment is evaluated for adequacy, completeness, readiness, and integration.

Conduct Planned Product Validation

The act of validating the end product is conducted as spelled out in the validation plans and procedures, and conformance is established to each specified validation requirement. The responsible engineer should ensure that the procedures were followed and performed as planned, the validation-enabling products were calibrated correctly, and the data were collected and recorded for the required validation measures.

When poor validation conduct, design, or conditions cause anomalies, the validation should be replanned as necessary, the environment preparation anomalies corrected, and the validation conducted again with improved or correct procedures and resources.
The Decision Analysis Process should be used to make decisions for issues identified that may require alternative choices to be evaluated and a selection made, or when needed changes to the validation plans, environment, and/or conduct are required.

Outcomes of conducting validation include the following:
• A validated product is established with supporting confirmation that the appropriate results were collected and evaluated to show completion of validation objectives.
• A determination is made as to whether the fabricated/manufactured or assembled and integrated products (including software or firmware builds, as applicable) comply with their respective stakeholder expectations.
• A determination is made that the validated product was appropriately integrated with the validation environment and the selected stakeholder expectations set was properly validated.
• A determination is made that the product being validated functions together with interfacing products throughout their performance envelopes.

Analyze Product Validation Results

Once the validation activities have been completed, the results are collected and the data are analyzed to confirm that the end product provided will supply the customer's needed capabilities within the intended environments of use, that validation procedures were followed, and that enabling products and supporting resources functioned correctly. The data are also analyzed for quality, integrity, correctness, consistency, and validity, and any unsuitable products or product attributes are identified and reported.

It is important to compare the actual validation results to the expected results and to conduct any required system design and product realization process activities to resolve deficiencies. The deficiencies, along with recommended corrective actions and resolution results, should be recorded, and validation repeated as required.

Outcomes of analyzing validation results include the following:
• Product deficiencies and/or issues are identified.
• Assurance is provided that appropriate replanning, redefinition of requirements, design changes, and revalidation have been accomplished to resolve anomalies, variations, or out-of-compliance conditions (for problems not caused by poor validation conduct).
• Discrepancy and corrective action reports are generated as needed.
• The validation report is completed.

Validation Notes

The types of validation used depend on the life-cycle phase; the product's location in the system structure; and the cost, schedule, and resources available. Validation of products within a single system model may be conducted together (e.g., an end product with its relevant enabling products, such as operational (a control center, or a radar with its display), maintenance (required tools work with the product), or logistical (launcher or transporter) products). Each realized product of the system structure should be validated against stakeholder expectations before being integrated into a higher level product.

Reengineering

Based on the results of the Product Validation Process, it could become necessary to reengineer a deficient end product. Care should be taken that correcting a deficiency, or set of deficiencies, does not generate a new issue with a part or performance that had previously operated satisfactorily. Regression testing, a formal process of rerunning previously used acceptance tests that is primarily applied to software, is one method to ensure a change did not affect function or performance that was previously accepted.

Validation Deficiencies

Validation outcomes can be unsatisfactory for several reasons. One reason is poor conduct of the validation (e.g., enabling products and supporting resources missing or not functioning correctly, untrained operators, procedures not followed, equipment not calibrated, or improper validation environmental conditions) and failure to control other variables not involved in validating a set of stakeholder expectations. A second reason could be a shortfall in the verification process of the end product. This could create the need for:
• Reengineering end products lower in the system structure that make up the end product that was found to be deficient (i.e., that failed to satisfy validation requirements) and/or
• Reperforming any needed verification and validation processes.

Other reasons for validation deficiencies (particularly when M&S are involved) may be incorrect and/or inappropriate initial or boundary conditions; poor formulation of the modeled equations or behaviors; the impact of approximations within the modeled equations or behaviors; failure to provide the geometric and physics fidelities needed for credible simulations for the intended purpose; a referent for comparison of poor or unknown uncertainty quantification quality; and/or poor spatial, temporal, and perhaps statistical resolution of the physical phenomena used in M&S.

Note: Care should be exercised to ensure that the corrective actions identified to remove validation deficiencies do not conflict with the baselined stakeholder expectations without first coordinating such changes with the appropriate stakeholders.

Capture Product Validation Work Products

Validation work products (inputs to the Technical Data Management Process) take many forms and involve many sources of information.
The capture and recording of validation-related data is a very important, but often underemphasized, step in the Product Validation Process. Validation results, deficiencies identified, and corrective actions taken should be captured, as should all relevant results from the application of the Product Validation Process (related decisions, rationale for decisions made, assumptions, and lessons learned).

Outcomes of capturing validation work products include the following:
• Work products and related information generated while doing Product Validation Process activities and tasks are recorded; i.e., the type of validation conducted, the form of the end product used for validation, the validation procedures used, the validation environments, outcomes, decisions, assumptions, corrective actions, lessons learned, etc. (often captured in a matrix or other tool—see Appendix E).
• Deficiencies (e.g., variations, anomalies, and out-of-compliance conditions) are identified and documented, including the actions taken to resolve them.
• Proof is provided that the realized product is in conformance with the stakeholder expectation set used in the validation.
• A validation report is produced, including:
  ▶ Recorded validation results/data;
  ▶ Version of the set of stakeholder expectations used;
  ▶ Version and form of the end product validated;
  ▶ Version or standard for tools and equipment used, together with applicable calibration data;
  ▶ Outcome of each validation, including pass or fail declarations; and
  ▶ Discrepancies between expected and actual results.

5.4.1.3 Outputs

Key outputs of validation are:
• Validated product,
• Discrepancy reports and identified corrective actions, and
• Validation reports.

Success criteria for this process include: (1) objective evidence of performance and the results of each system-of-interest validation activity are documented, and (2) the validation process is not considered or designated as complete until all issues and actions are resolved.

Note: For systems where only a single deliverable item is developed, the Product Validation Process normally completes acceptance testing of the system. However, for systems with several production units, it is important to understand that continuing verification and validation is not an appropriate approach to use for the items following the first deliverable. Instead, acceptance testing is the preferred means to ensure that subsequent deliverables comply with the baselined design.

5.4.2 Product Validation Guidance

The following is some generic guidance for the Product Validation Process.

5.4.2.1 Modeling and Simulation

As stressed in the verification process material, M&S is also an important validation tool. M&S usage considerations involve the verification, validation, and certification of the models and simulations.

Model Verification and Validation
• Model Verification: The degree to which a model accurately meets its specifications. Answers "Is it what I intended?"
• Model Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.
• Model Certification: Certification for use for a specific purpose. Answers "Should I endorse this model?"

5.4.2.2 Software

Software verification is a software engineering activity that demonstrates that the software products meet specified requirements. Methods of software verification include peer reviews/inspections of software engineering products for discovery of defects, software verification of requirements by use of simulations, black box and white box testing techniques, analyses of requirement implementation, and software product demonstrations.

Software validation is a software engineering activity that demonstrates that the as-built software product or software product component satisfies its intended use in its intended environment. Methods of software validation include peer reviews/inspections of software product component behavior in a simulated environment, acceptance testing against mathematical models, analyses, and operational environment demonstrations. The project's approach for software verification and validation is documented in the software development plan.
Specific Agency-level requirements for software verification and validation, peer reviews (see Appendix N), testing, and reporting are contained in NPR 7150.2, NASA Software Requirements.
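As a small illustration of the black box testing technique mentioned above (the requirement text, the function under test, and the set point are invented for the example and are not drawn from NPR 7150.2), a software-level "shall" statement can be captured as an automated test case that is rerun whenever the code changes, which is a simple form of the regression testing described earlier in this chapter.

import unittest

# Hypothetical flight software function under test; in a real project this
# would be the as-built code item being verified against its requirement.
def compute_heater_command(sensed_temp_c: float) -> bool:
    """Turn the survival heater on below the (assumed) -10 degC set point."""
    return sensed_temp_c < -10.0

class TestHeaterRequirement(unittest.TestCase):
    """Black box verification of an invented requirement:
    'The software shall command the survival heater ON when the sensed
    temperature is below -10 degC and OFF otherwise.'"""

    def test_heater_on_below_setpoint(self):
        self.assertTrue(compute_heater_command(-10.1))

    def test_heater_off_at_or_above_setpoint(self):
        self.assertFalse(compute_heater_command(-10.0))
        self.assertFalse(compute_heater_command(25.0))

if __name__ == "__main__":
    unittest.main()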


The rigor and techniques used to verify and validate software depend upon the software classifications (which are different from project and payload classifications). A complex project will typically contain multiple systems and subsystems having different software classifications. It is important for the project to classify its software and plan verification and validation approaches that appropriately address the risks associated with each class.

In some instances, NASA management may select a project for additional independent software verification and validation by the NASA Software Independent Verification and Validation (IV&V) Facility in Fairmont, West Virginia. In this case, a Memorandum of Understanding (MOU) and a separate software IV&V plan will be created and implemented.


5.5 Product Transition

The Product Transition Process is used to transition a verified and validated end product that has been generated by product implementation or product integration to the customer at the next level in the system structure for integration into an end product or, for the top-level end product, transitioned to the intended end user. The form of the product transitioned will be a function of the product-line life-cycle phase success criteria and the location within the system structure of the WBS model in which the end product exists.

Product transition occurs during all phases of the life cycle. During the early phases, the technical team's products are documents, models, studies, and reports. As the project moves through the life cycle, these paper or soft products are transformed through the implementation and integration processes into hardware and software solutions to meet the stakeholder expectations. They are repeated with different degrees of rigor throughout the life cycle. The Product Transition Process includes product transitions from one level of the system architecture upward. The Product Transition Process is the last of the product realization processes, and it is a bridge from one level of the system to the next higher level.

The Product Transition Process is the key to bridging from one activity, subsystem, or element to the overall engineered system. As system development nears completion, the Product Transition Process is again applied for the end product, but with much more rigor since now the transition objective is delivery of the system-level end product to the actual end user. Depending on the kind or category of system developed, this may involve a Center or the Agency and impact thousands of individuals storing, handling, and transporting multiple end products; preparing user sites; training operators and maintenance personnel; and installing and sustaining, as applicable. Examples are transitioning the external tank, solid rocket boosters, and orbiter to Kennedy Space Center (KSC) for integration and flight.

5.5.1 Process Description

Figure 5.5-1 provides a typical flow diagram for the Product Transition Process and identifies typical inputs, outputs, and activities to consider in addressing product transition.

Figure 5.5-1 Product Transition Process (inputs: the end product to be transitioned from the Product Validation Process, documentation to accompany the delivered end product from the Technical Data Management Process, and product transition-enabling products from existing resources or the Product Transition Process; activities: prepare to conduct product transition; evaluate the end product, personnel, and enabling product readiness for product transition; prepare the end product for transition; transition the end product to the customer with required documentation based on the type of transition required; prepare sites, as required, where the end product will be stored, assembled, integrated, installed, used, and/or maintained; and capture work products; outputs: the delivered end product with applicable documentation to the end user or Product Integration Process (recursive loop), product transition work products to the Technical Data Management Process, and realized enabling products to the product implementation, integration, verification, validation, and transition processes)

5.5.1.1 Inputs

Inputs to the Product Transition Process depend primarily on the transition requirements, the product that is being transitioned, the form of the product transition that is taking place, and where the product is transitioning to. Typical inputs are shown in Figure 5.5-1 and described below.

• The End Product or Products To Be Transitioned (from Product Validation Process): The product to be transitioned can take several forms.
It can be a subsystem component, system assembly, or top-level end product. It can be hardware or software. It can be newly built, purchased, or reused. A product can transition from a lower system product to a higher one by being integrated with other transitioned products. This process may be repeated until the final end product is achieved. Each succeeding transition requires unique input considerations when preparing the validated product for transition to the next level.

Early phase products can take the form of information or data generated from basic or applied research using analytical or physical models and often are in paper or electronic form. In fact, the end product for many NASA research projects or science activities is a report, paper, or even an oral presentation. In a sense, the dissemination of information gathered through NASA research and development is an important form of product transition.

• Documentation Including Manuals, Procedures, and Processes That Are To Accompany the End Product (from Technical Data Management Process): The documentation required for the Product Transition Process depends on the specific end product; its current location within the system structure; and the requirements identified in various agreements, plans, or requirements documents. Typically, a product has a unique identification (i.e., serial number) and may have a pedigree (documentation) that specifies its heritage and current state. Pertinent information may be documented through a configuration management system or work order system as well as design drawings and test reports. Documentation often includes proof of verification and validation conformance. A COTS product would typically contain a manufacturer's specification or fact sheet. Documentation may include operations manuals, installation instructions, and other information.

The documentation level of detail is dependent upon where the product is within the product hierarchy and the life cycle. Early in the life cycle, this documentation may be preliminary in nature. Later in the life cycle, the documentation may be detailed design documents, user manuals, drawings, or other work products. Documentation that is gathered during the input process for the transition phase may require editing, assembling, or repackaging to ensure it is in the required condition for acceptance by the customer.

Special consideration must be given to safety, including clearly identifiable tags and markings that identify the use of hazardous materials, special handling instructions, and storage requirements.

• Product-Transition-Enabling Products, Including Packaging Materials; Containers; Handling Equipment; and Storage, Receiving, and Shipping Facilities (from Existing Resources or Product Transition Process for Enabling Product Realization): Product-transition-enabling products may be required to facilitate the implementation, integration, evaluation, transition, training, operations, support, and/or retirement of the transition product at its next higher level or for the transition of the final end product. Some or all of the enabling products may be defined in transition-related agreements, system requirements documents, or project plans.
In some cases, product-transition-enabling products are developed during the realization of the product itself or may be required to be developed during the transition stage.

As a product is developed, special containers, holders, or other devices may also be developed to aid in the storing and transporting of the product through development and realization. These may be temporary accommodations that do not satisfy all the transition requirements, but allow the product to be initiated into the transition process. In such cases, the temporary accommodations will have to be modified, or new accommodations will need to be designed and built or procured, to meet specific transportation, handling, storage, and shipping requirements.

Sensitive or hazardous products may require special enabling products such as monitoring equipment, inspection devices, safety devices, and personnel training to ensure adequate safety and environmental requirements are achieved and maintained.

5.5.1.2 Process Activities

Transitioning the product can take one of two forms:
• The delivery of lower system end products to higher ones for integration into another end product or
• The delivery of the final end product to the customer or user that will use it in its operational environment.

In the first case, the end product is one of perhaps several other pieces that will ultimately be integrated together to form the item in the second case for final delivery to the customer. For example, the end product might be one of several circuit cards that will be integrated together to form the final unit that is delivered. Or that unit might also be one of several units that have to be integrated together to form the final product.

The form of the product transitioned is not only a function of the location of that product within the system product hierarchy (i.e., WBS model), but also a function of the life-cycle phase. Early life-cycle phase products may be in the form of paper, electronic files, physical models, or technology demonstration prototypes. Later phase products may be preproduction prototypes (engineering models), the final study report, or the flight units.

Figure 5.5-1 shows what kinds of inputs, outputs, and activities are performed during product transition regardless of where the product is in the product hierarchy or life cycle. These activities include preparing to conduct the transition; making sure the end product, all personnel, and any enabling products are ready for transitioning; preparing the site; and performing the transition, including capturing and documenting all work products.

How these activities are performed and what form the documentation takes will depend on where the end items are in the product hierarchy (WBS model) and the life-cycle phase.

Prepare to Implement Transition

The first task is to identify which of the two forms of transition is needed: (1) the delivery of lower system end products to higher ones for integration into another end product or (2) the delivery of the final end product to the customer or user that will use the end product in its operational environment. The form of the product being transitioned will affect transition planning and the kind of packaging, handling, storage, and transportation that will be required. The customer and other stakeholder expectations, as well as the specific design solution, may indicate special transition procedures or enabling product needs for packaging, storage, handling, shipping/transporting, site preparation, installation, and/or sustainability. These requirements need to be reviewed during the preparation stage.

Other tasks in preparing to transition a product involve making sure the end product, personnel, and any enabling products are ready for that transition. This includes the availability of the documentation that will be sent with the end product, including proof of verification and validation conformance. The appropriate level of detail for that documentation depends upon where the product is within the product hierarchy and the life cycle. Early in the life cycle, this documentation may be preliminary in nature. Later in the life cycle, the documentation may be detailed design documents, user manuals, drawings, or other work products. Procedures necessary for conducting the transition should be reviewed and approved by this time. This includes all necessary approvals by management, legal, safety, quality, property, or other organizations as identified in the SEMP.

Finally, the availability and skills of personnel needed to conduct the transition, as well as the availability of any necessary packaging materials/containers, handling equipment, storage facilities, and shipping/transporter services, should also be reviewed.
Any special trainingnecessary for the personnel to perform their tasks needsto be performed by this time.Prepare the Product for TransitionWhether transitioning a product to the next room forintegration into the next higher assembly, or for finaltransportation across the country to the customer, caremust be taken to ensure the safe transportation of theproduct. The requirements for packaging, handling,storage, and transportation should have been identifiedduring system design. Preparing for the packaging forprotection, security, and prevention of deterioration iscritical for products placed in storage or when it is necessaryto transport or ship between and within organizationalfacilities or between organizations by land, air,and/or water vehicles. Particular emphasis needs to beon protecting surfaces from physical damage, preventingcorrosion, eliminating damage to electronic wiring orcabling, shock or stress damage, heat warping or coldfractures, moisture, and other particulate intrusion thatcould damage moving parts.The design requirements should have already addressedthe ease of handling or transporting the product such ascomponent staking, addition of transportation hooks,crating, etc. The ease and safety of packing and unpackingthe product should also have been addressed.Additional measures may also need to be implementedto show accountability and to securely track the productduring transportation. In cases where hazardous mate-108 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


5.5 Product Transitionrials are involved, special labeling or handling needs includingtransportation routes need to be in place.Prepare the Site to Receive the ProductFor either of the forms of product transition, the receivingsite needs to be prepared to receive the product.Here the end product will be stored, assembled, integrated,installed, used, and/or maintained, as appropriatefor the life-cycle phase, position of the end product inthe system structure, and customer agreement.A vast number of key complex activities, many of themoutside direct control of the technical team, have to besynchronized to ensure smooth transition to the enduser. If transition activities are not carefully controlled,there can be impacts on schedule, cost, and safety of theend product.A site survey may need to be performed to determinethe issues and needs. This should address the adequacyof existing facilities to accept, store, and operate the newend product and identify any logistical-support-enablingproducts and services required but not plannedfor. Additionally, any modifications to existing facilitiesmust be planned well in advance of fielding; therefore,the site survey should be made during an early phase inthe product life cycle. These may include logistical enablingproducts and services to provide support for endproductuse, operations, maintenance, and disposal.Training for users, operators, maintainers, and othersupport personnel may need to be conducted. NationalEnvironmental Policy Act documentation or approvalsmay need to be obtained prior to the receipt of the endproduct.Prior to shipment or after receipt, the end product mayneed to be stored in suitable storage conditions to protectand secure the product and prevent damage or thedeterioration of it. These conditions should have beenidentified early in the design life cycle.Transition the ProductThe end product is then transitioned (i.e., moved, transported,or shipped) with required documentation to thecustomer based on the type of transition required, e.g.,to the next higher level item in the Product BreakdownStructure (PBS) for product integration or to the enduser. Documentation may include operations manuals,installation instructions, and other information.The end product is finally installed into the next higherassembly or into the customer/user site using the preapprovedinstallation procedures.Confirm Ready to SupportAfter installation, whether into the next higher assemblyor into the final customer site, functional and acceptancetesting of the end product should be conducted. This ensuresno damage from the shipping/handling processhas occurred and that the product is ready for support.Any final transitional work products should be capturedas well as documentation of product acceptance.5.5.1.3 Outputszz Delivered End Product for Integration to Next Levelup in System Structure: This includes the appropriatedocumentation. 
The form of the end product and applicabledocumentation are a function of the life-cyclephase and the placement within the system structure.(The form of the end product could be hardware, software,model, prototype, first article for test, or singleoperational article or multiple production article.)Documentation includes applicable draft installation,operation, user, maintenance, or training manuals;applicable baseline documents (configuration baseline,specifications, and stakeholder expectations);and test results that reflect completion of verificationand validation of the end product.z zDelivered Operational End Product for End Users:The appropriate documentation is to be delivered withthe delivered end product as well as the operationalend product appropriately packaged. Documentationincludes applicable final installation, operation, user,maintenance, or training manuals; applicable baselinedocuments (configuration baseline, specifications,stakeholder expectations); and test results that reflectcompletion of verification and validation of the endproduct. If the end user will perform end product validation,sufficient documentation to support end uservalidation activities is delivered with the end product.zz Work Products from Transition Activities to Tech-nical Data Management: Work products could includethe transition plan, site surveys, measures,training modules, procedures, decisions, lessonslearned, corrective actions, etc.zz Realized Enabling End Products to AppropriateLife-Cycle Support Organization: Some of the enablingproducts that were developed during the var-<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 109


5.0 Product Realizationious phases could include fabrication or integrationspecialized machines; tools; jigs; fabrication processesand manuals; integration processes and manuals;specialized inspection, analysis, demonstration, ortest equipment; tools; test stands; specialized packagingmaterials and containers; handling equipment;storage-site environments; shipping or transportationvehicles or equipment; specialized courseware;instructional site environments; and delivery of thetraining instruction. For the later life-cycle phases,enabling products that are to be delivered may includespecialized mission control equipment; data collectionequipment; data analysis equipment; operationsmanuals; specialized maintenance equipment, tools,manuals, and spare parts; specialized recovery equipment;disposal equipment; and readying recovery ordisposal site environments.The process is complete when the following activitieshave been accomplished:zz The end product is validated against stakeholder ex-pectations unless the validation is to be done by theintegrator before integration is accomplished.zz For deliveries to the integration path, the end product isdelivered to intended usage sites in a condition suitablefor integration with other end products or compositesof end products. Procedures, decisions, assumptions,anomalies, corrective actions, lessons learned, etc., resultingfrom transition for integration are recorded.zz For delivery to the end user path, the end productsare installed at the appropriate sites; appropriate acceptanceand certification activities are completed;training of users, operators, maintainers, and other necessarypersonnel is completed; and delivery is closedout with appropriate acceptance documentation.zz Any realized enabling end products are also delivered asappropriate including procedures, decisions, assumptions,anomalies, corrective actions, lessons learned,etc., resulting from transition-enabling products.5.5.2 Product Transition Guidance5.5.2.1 Additional Product Transition InputConsiderationsIt is important to consider all customer, stakeholder,technical, programmatic, and safety requirementswhen evaluating the input necessary to achieve a successfulProduct Transition Process. This includes thefollowing:z z Transportability Requirements: If applicable, requirementsin this section define the required configurationof the system of interest for transport. 
Further,this section details the external systems (and theinterfaces to those systems) required for transport ofthe system of interest.zzzzzzzzEnvironmental Requirements: Requirements in thissection define the environmental conditions in whichthe system of interest is required to be during transition(including storage and transportation).Maintainability Requirements: Requirements in thissection detail how frequently, by whom, and by whatmeans the system of interest will require maintenance(also any “care and feeding,” if required).Safety Requirements: Requirements in this sectiondefine the life-cycle safety requirements for thesystem of interest and associated equipment, facilities,and personnel.Security Requirements: This section defines the InformationTechnology (IT) requirements, Federal andinternational export and security requirements, andphysical security requirements for the system of interest.z z Programmatic Requirements: Requirements in thissection define cost and schedule requirements.5.5.2.2 After Product Transition to the EndUser—What Next?As mentioned in Chapter 2.0, there is a relationship betweenthe SE engine and the activities performed afterthe product is transitioned to the end user. As shown inFigure 2.3‐8, after the final deployment to the end user,the end product is operated, managed, and maintainedthrough sustaining engineering functions. The technicalmanagement processes described in Section 6.0 areused during these activities. If at any time a new capability,upgrade, or enabling product is needed, the developmentalprocesses of the engine are reengaged. Whenthe end product’s use is completed, the plans developedearly in the life cycle to dispose, retire, or phase out theproduct are enacted.110 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.0 Crosscutting Technical Management

This chapter describes the activities in the technical management processes listed in Figure 2.1-1. The chapter is separated into sections corresponding to steps 10 through 17 listed in Figure 2.1-1. The processes within each step are discussed in terms of the inputs, the activities, and the outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The technical management processes are the bridges between project management and the technical team. In this portion of the engine, eight crosscutting processes provide the integration of the crosscutting functions that allow the design solution to be realized. Even though every technical team member might not be directly involved with these eight processes, they are indirectly affected by these key functions. Every member of the technical team relies on technical planning; management of requirements, interfaces, technical risk, configuration, and technical data; technical assessment; and decision analysis to meet the project's objectives. Without these crosscutting processes, individual members and tasks cannot be integrated into a functioning system that meets the ConOps within cost and schedule. The project management team also uses these crosscutting functions to execute project control on the apportioned tasks.

This effort starts with the technical team conducting extensive planning early in Pre-Phase A. With this early, detailed baseline plan, technical team members will understand the roles and responsibilities of each team member, and the project can establish its program cost and schedule goals and objectives. From this effort, the Systems Engineering Management Plan (SEMP) is developed and baselined. Once a SEMP has been established, it must be synchronized with the project master plans and schedule. In addition, the plans for establishing and executing all technical contracting efforts are identified.

This is a recursive and iterative process. Early in the life cycle, the plans are established and synchronized to run the design and realization processes. As the system matures and progresses through the life cycle, these plans must be updated as necessary to reflect the current environment and resources and to control the project's performance, cost, and schedule. At a minimum, these updates will occur at every Key Decision Point (KDP). However, if there is a significant change in the project, such as new stakeholder expectations, resource adjustments, or other constraints, all plans must be analyzed for the impact of these changes to the baselined project.

The next sections describe each of the eight technical management processes and their associated products for a given NASA mission.

Crosscutting Technical Management Keys
• Thoroughly understand and plan the scope of the technical effort by investing time upfront to develop the technical product breakdown structure, the technical schedule and workflow diagrams, and the technical resource requirements and constraints (funding, budget, facilities, and long-lead items) that will be the technical planning infrastructure.
• Define all interfaces and assign interface authorities and responsibilities to each, both intra- and interorganizational.
This includes understanding potential incompatibilities and defining the transition processes.
• Control of the configuration is critical to understanding how changes will impact the system. For example, changes in design and environment could invalidate previous analysis results.
• Conduct milestone reviews to enable a critical and valuable assessment to be performed. These reviews are not to be used to meet contractual or scheduling incentives. These reviews have specific entrance criteria and should be conducted when these are met.
• Understand any biases, assumptions, and constraints that impact the analysis results.
• Place all analysis under configuration control to be able to track the impact of changes and understand when the analysis needs to be reevaluated.


6.1 Technical Planning

The Technical Planning Process, the first of the eight technical management processes contained in the systems engineering engine, establishes a plan for applying and managing each of the common technical processes that will be used to drive the development of system products and associated work products. This process also establishes a plan for identifying and defining the technical effort required to satisfy the project objectives and life-cycle phase success criteria within the cost, schedule, and risk constraints of the project.

6.1.1 Process Description
Figure 6.1-1 provides a typical flow diagram for the Technical Planning Process and identifies typical inputs, outputs, and activities to consider in addressing technical planning.

[Figure 6.1-1 Technical Planning Process: project technical effort requirements and resource constraints; agreements, capability needs, and the applicable product-line life-cycle phase; applicable policies, procedures, standards, and organizational processes; prior phase or baseline plans; and replanning needs flow into the planning activities (prepare to conduct technical planning; define the technical work; schedule, organize, and cost the technical work; prepare the SEMP and other technical plans; obtain stakeholder commitments; issue authorized technical work directives; capture technical planning work products), producing cost estimates, schedules, and resource requests; product and process measures; the SEMP and other technical plans; technical work directives; and technical planning work products.]

6.1.1.1 Inputs
Input to the Technical Planning Process comes from both the project management and technical teams as outputs from the other common technical processes. Initial planning utilizing external inputs from the project to determine the general scope and framework of the technical effort will be based on known technical and programmatic requirements, constraints, policies, and processes. Throughout the project's life cycle, the technical team continually incorporates results into the technical planning strategy and documentation, along with any internal changes based on decisions and assessments generated by the other processes of the SE engine or from requirements and constraints mandated by the project.

As the project progresses through the life-cycle phases, technical planning for each subsequent phase must be assessed and continually updated. When a project transitions from one life-cycle phase to the next, the technical planning for the upcoming phase must be assessed and updated to reflect the most recent project data.

• External Inputs from the Project: The project plan provides the project's top-level technical requirements, the available budget allocated to the project


from the program, and the desired schedule for the project to support overall program needs. Although the budget and schedule allocated to the project will serve as constraints on the project, the technical team will generate a technical cost estimate and schedule based on the actual work required to satisfy the project's technical requirements. Discrepancies between the project's allocated budget and schedule and the technical team's actual cost estimate and schedule must be reconciled continuously throughout the project's life cycle.

The project plan also defines the applicable project life-cycle phases and milestones, as well as any internal and external agreements or capability needs required for successful project execution. The project's life-cycle phases and programmatic milestones will provide the general framework for establishing the technical planning effort and for generating the detailed technical activities and products required to meet the overall project milestones in each of the life-cycle phases.

Finally, the project plan will include all programmatic policies, procedures, standards, and organizational processes that must be adhered to during execution of the technical effort. The technical team must develop a technical approach that ensures the project requirements will be satisfied and that any technical procedures, processes, and standards to be used in developing the intermediate and final products comply with the policies and processes mandated in the project plan.

• Internal Inputs from Other Common Technical Processes: The latest technical plans (either baselined or from the previous life-cycle phase) from the Data Management or Configuration Management Processes should be used in updating the technical planning for the upcoming life-cycle phase.

Technical planning updates may be required based on results from technical reviews conducted in the Technical Assessment Process, issues identified during the Technical Risk Management Process, or from decisions made during the Decision Analysis Process.

6.1.1.2 Process Activities
Technical planning as it relates to systems engineering at NASA is intended to identify, define, and plan how the 17 common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements, will be applied in each life-cycle phase for all levels of the WBS model (see Subsection 6.1.2.1) within the system structure to meet product-line life-cycle phase success criteria. A key document generated by this process is the SEMP. The SEMP is a subordinate document to the project plan.
While the SEMP defines to all project participantshow the project will be technically managed within theconstraints established by the project, the project plandefines how the project will be managed to achieve itsgoals and objectives within defined programmatic constraints.The SEMP also communicates how the systemsengineering management techniques will be appliedthroughout all phases of the project life cycle.<strong>Technical</strong> planning should be tightly integrated with the<strong>Technical</strong> Risk Management Process (see Section 6.4)and the <strong>Technical</strong> Assessment Process (see Section 6.7)to ensure corrective action for future activities will be incorporatedbased on current issues identified within theproject.<strong>Technical</strong> planning, as opposed to program or projectplanning, addresses the scope of the technical effort requiredto develop the system products. While the projectmanager concentrates on managing the overall projectlife cycle, the technical team, led by the systems engineer,concentrates on managing the technical aspects of theproject. The technical team identifies, defines, and developsplans for performing decomposition, definition,integration, verification, and validation of the systemwhile orchestrating and incorporating the appropriateconcurrent engineering. Additional planning will includedefining and planning for the appropriate technicalreviews, audits, assessments, and status reports anddetermining any specialty engineering and/or designverification requirements.This section describes how to perform the activitiescontained in the <strong>Technical</strong> Planning Process shown inFigure 6.1-1. The initial technical planning at the beginningof the project will establish the technical teammembers; their roles and responsibilities; and the tools,processes, and resources that will be utilized in executingthe technical effort. In addition, the expected activitiesthe technical team will perform and the products it willproduce will be identified, defined, and scheduled. <strong>Technical</strong>planning will continue to evolve as actual data fromcompleted tasks are received and details of near-termand future activities are known.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 113


Technical Planning Preparation
For technical planning to be conducted properly, the processes and procedures to conduct technical planning should be identified, defined, and communicated. As participants are identified, their roles and responsibilities and any training and/or certification activities should be clearly defined and communicated.

Once the processes, people, and roles and responsibilities are in place, a planning strategy may be formulated for the technical effort. A basic technical planning strategy should address the following:
• The level of planning documentation required for the SEMP and all other technical planning documents;
• Identifying and collecting input documentation;
• The sequence of technical work to be conducted, including inputs and outputs;
• The deliverable products from the technical work;
• How to capture the work products of technical activities;
• How technical risks will be identified and managed;
• The tools, methods, and training needed to conduct the technical effort;
• The involvement of stakeholders in each facet of the technical effort;
• How the NASA technical team will be involved with the technical efforts of external contractors;
• The entry and success criteria for milestones, such as technical reviews and life-cycle phases;
• The identification, definition, and control of internal and external interfaces;
• The identification and incorporation of relevant lessons learned into the technical planning;
• The approach for technology development and how the resulting technology will be incorporated into the project;
• The identification and definition of the technical metrics for measuring and tracking progress to the realized product;
• The criteria for make, buy, or reuse decisions and incorporation criteria for Commercial Off-the-Shelf (COTS) software and hardware;
• The plan to identify and mitigate off-nominal performance;
• The "how-tos" for contingency planning and replanning;
• The plan for status assessment and reporting; and
• The approach to decision analysis, including materials needed, required skills, and expectations in terms of accuracy.

By addressing these items and others unique to the project, the technical team will have a basis for understanding and defining the scope of the technical effort, including the deliverable products that the overall technical effort will produce, the schedule and key milestones for the project that the technical team must support, and the resources required by the technical team to perform the work.

A key element in defining the technical planning effort is understanding the amount of work associated with performing the identified activities. Once the scope of the technical effort begins to coalesce, the technical team may begin to define specific planning activities and to estimate the amount of effort and resources required to perform each task. Historically, many projects have underestimated the resources required to perform proper planning activities and have been forced into a position of continuous crisis management in order to keep up with changes in the project.

Define the Technical Work
The technical effort must be thoroughly defined.
Whenperforming the technical planning, realistic values forcost, schedule, and labor resources should be used.Whether extrapolated from historical databases or frominteractive planning sessions with the project and stakeholders,realistic values must be calculated and providedto the project team. Contingency should be included inany estimate and based on complexity and criticality ofthe effort. Contingency planning must be conducted.The following are examples of contingency planning:zz Additional, unplanned-for software engineering re-sources are typically needed during hardware andsystems development and testing to aid in troubleshootingerrors/anomalies. Frequently, software engineersare called upon to help troubleshoot problemsand pinpoint the source of errors in hardware and systemsdevelopment and testing (e.g., for writing additiontest drivers to debug hardware problems). Additionalsoftware staff should be planned into the projectcontingencies to accommodate inevitable componentand system debugging and avoid cost and scheduleoverruns.114 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.1 <strong>Technical</strong> Planningzz Hardware-in-the-Loop (HWIL) must be accountedfor in the technical planning contingencies. HWILtesting is typically accomplished as a debugging exercisewhere the hardware and software are broughttogether for the first time in the costly environment ofan HWIL. If upfront work is not done to understandthe messages and errors arising during this test, additionaltime in the HWIL facility may result in significantcost and schedule impacts. Impacts may bemitigated through upfront planning, such as makingappropriate debugging software available to the technicalteam prior to the test, etc.Schedule, Organize, and Cost the <strong>Technical</strong> EffortOnce the technical team has defined the technical workto be done, efforts can focus on producing a scheduleand cost estimate for the technical portion of the project.The technical team must organize the technical tasksaccording to the project WBS in a logical sequence ofevents, taking into consideration the major project milestones,phasing of available funding, and timing of availabilityof supporting resources.SchedulingProducts described in the WBS are the result of activitiesthat take time to complete. These activities have timeprecedence relationships among them that may usedto create a network schedule explicitly defining the dependenciesof each activity on other activities, the availabilityof resources, and the receipt of receivables fromoutside sources.Scheduling is an essential component of planning andmanaging the activities of a project. The process of creatinga network schedule provides a standard methodfor defining and communicating what needs to be done,how long it will take, and how each element of the projectWBS might affect other elements. A complete networkschedule may be used to calculate how long it will take tocomplete a project; which activities determine that duration(i.e., critical path activities); and how much sparetime (i.e., float) exists for all the other activities of theproject.“Critical path” is the sequence of dependent tasks thatdetermines the longest duration of time needed to completethe project. These tasks drive the schedule and continuallychange, so they must be updated. The criticalpath may encompass only one task or a series of interrelatedtasks. It is important to identify the critical pathand the resources needed to complete the critical tasksalong the path if the project is to be completed on timeand within its resources. As the project progresses, thecritical path will change as the critical tasks are completedor as other tasks are delayed. This evolving criticalpath with its identified tasks needs to be carefully monitoredduring the progression of the project.Network scheduling systems help managers accuratelyassess the impact of both technical and resource changeson the cost and schedule of a project. Cost and technicalproblems often show up first as schedule problems. Understandingthe project’s schedule is a prerequisite fordetermining an accurate project budget and for trackingperformance and progress. 
Because network schedules show how each activity affects other activities, they assist in assessing and predicting the consequences of schedule slips or accelerations of an activity on the entire project.

Network Schedule Data and Graphical Formats
Network schedule data consist of:
• Activities and associated tasks;
• Dependencies among activities (e.g., where an activity depends upon another activity for a receivable);
• Products or milestones that occur as a result of one or more activities; and
• Duration of each activity.

A network schedule contains all four of the above data items. When creating a network schedule, creating graphical formats of these data elements may be a useful first step in planning and organizing schedule data.

Workflow Diagrams
A workflow diagram is a graphical display of the first three data items. Two general types of graphical formats are used, as shown in Figure 6.1-2. One places activities on arrows, with products and dependencies at the beginning and end of the arrow. This is the typical format of the Program Evaluation and Review Technique (PERT) chart.

The second format, called precedence diagrams, uses boxes to represent activities; dependencies are then shown by arrows. The precedence diagram format allows for simple depiction of the following logical relationships:


• Activity B begins when Activity A begins (start-start).
• Activity B begins only after Activity A ends (finish-start).
• Activity B ends when Activity A ends (finish-finish).

Each of these three activity relationships may be modified by attaching a lag (+ or –) to the relationship, as shown in Figure 6.1-2. It is possible to summarize a number of low-level activities in a precedence diagram with a single activity. One takes the initial low-level activity and attaches a summary activity to it using the start-start relationship described above. The summary activity is then attached to the final low-level activity using the finish-start relationship. The most common relationship used in precedence diagrams is the finish-start one. The activity-on-arrow format can represent the identical time-precedence logic as a precedence diagram by creating artificial events and activities as needed.

[Figure 6.1-2 Activity-on-arrow and precedence diagrams for network schedules: the activity-on-arrow example shows an activity artificially divided into two arrows labeled with durations; the precedence example shows activity boxes with durations connected by a start-start relationship with a 5-day lag, meaning Activity B can start 5 days after Activity A starts.]

Establishing a Network Schedule
Scheduling begins with project-level schedule objectives for delivering the products described in the upper levels of the WBS. To develop network schedules that are consistent with the project's objectives, the following six steps are applied to each element at the lowest available level of the WBS.

Step 1: Identify activities and dependencies needed to complete each WBS element. Enough activities should be identified to show exact schedule dependencies between activities and other WBS elements. This first step is most easily accomplished by:
• Ensuring that the WBS model is extended downward to describe all significant products, including documents, reports, and hardware and software items.
• For each product, listing the steps required for its generation and drawing the process as a workflow diagram.
• Indicating the dependencies among the products, and any integration and verification steps within the work package.

Step 2: Identify and negotiate external dependencies. External dependencies are any receivables from outside of, and any deliverables that go outside of, the WBS element. Negotiations should occur to ensure that there is agreement with respect to the content, format, and labeling of products that move across WBS elements so that lower level schedules can be integrated.

Step 3: Estimate durations of all activities. Assumptions behind these estimates (workforce, availability of facilities, etc.) should be written down for future reference.

Step 4: Enter the data for each WBS element into a scheduling program to obtain a network schedule and an estimate of the critical path for that element. It is not unusual at this point for some iteration of steps 1 to 4 to obtain a satisfactory schedule. Reserve is often added to critical-path activities to ensure that schedule commitments can be met within targeted risk levels.

Step 5: Integrate schedules of lower level WBS elements so that all dependencies among elements are correctly included in a project network.
It is important to include the impacts of holidays, weekends, etc., by this point. The critical path for the project is discovered at this step in the process.

Step 6: Review the workforce level and funding profile over time and make a final set of adjustments to logic and durations so that workforce levels and funding levels are within project constraints. Adjustments to the logic and the durations of activities may be needed to converge to the schedule targets established at the project level. Adjustments may include adding more activities to some WBS elements, deleting redundant activities, increasing the workforce for some activities that are on the critical path, or finding ways to do more activities in parallel, rather than in series.
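The six steps above lend themselves to simple tooling. The following sketch is not from the handbook; the activity names, durations, and dependencies are hypothetical. It illustrates the mechanics behind Steps 3 through 5 for a single WBS element: given durations and finish-start dependencies, a forward and backward pass yields the overall duration, the float of each activity, and therefore the critical path.

```python
from collections import defaultdict

# Hypothetical activities for one WBS element: name -> (duration in days, predecessors).
# All dependencies here are simple finish-start relationships.
activities = {
    "design":          (10, []),
    "fabricate":       (15, ["design"]),
    "write_test_plan": (5,  ["design"]),
    "test":            (7,  ["fabricate", "write_test_plan"]),
}

def critical_path(acts):
    # Forward pass: earliest finish of each activity (assumes no dependency cycles).
    early = {}
    def early_finish(name):
        if name not in early:
            dur, preds = acts[name]
            start = max((early_finish(p) for p in preds), default=0)
            early[name] = start + dur
        return early[name]
    project_duration = max(early_finish(a) for a in acts)

    # Backward pass: latest finish that does not delay the project.
    successors = defaultdict(list)
    for name, (_, preds) in acts.items():
        for p in preds:
            successors[p].append(name)
    late = {}
    def late_finish(name):
        if name not in late:
            late[name] = min((late_finish(s) - acts[s][0] for s in successors[name]),
                             default=project_duration)
        return late[name]

    # Float = latest finish minus earliest finish; zero-float activities form the critical path.
    floats = {a: late_finish(a) - early_finish(a) for a in acts}
    return project_duration, floats

duration, floats = critical_path(activities)
print(f"Project duration: {duration} days")
for name, slack in floats.items():
    print(f"{name}: float = {slack} days{'  <- critical path' if slack == 0 else ''}")
```

Running this toy network gives a 32-day duration with design, fabricate, and test on the critical path, while write_test_plan carries 10 days of float; a scheduling tool does the same bookkeeping at much larger scale.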


Again, it is good practice to have some schedule reserve, or float, as part of a risk mitigation strategy. The product of these last steps is a feasible baseline schedule for each WBS element that is consistent with the activities of all other WBS elements. The sum of all of these schedules should be consistent with both the technical scope and the schedule goals of the project. There should be enough float in this integrated master schedule so that schedule and associated cost risk are acceptable to the project and to the project's customer. Even when this is done, time estimates for many WBS elements will have been underestimated or work on some WBS elements will not start as early as had been originally assumed due to late arrival of receivables. Consequently, replanning is almost always needed to meet the project's goals.

Reporting Techniques
Summary data about a schedule are usually described in charts. A Gantt chart is a bar chart that depicts a project schedule using start and finish dates of the appropriate product elements tied to the project WBS. Some Gantt charts also show the dependency (i.e., precedence and critical path) relationships among activities as well as current status. A good example of a Gantt chart is shown in Figure 6.1-3. (See the box on Gantt chart features.)

[Figure 6.1-3 Gantt chart: an example subsystem schedule showing milestones; management, systems engineering, subassembly, and integration and test activities organized by WBS element; current plan versus baseline; float; the critical path; and a status line at the reporting date.]

Another type of output format is a table that shows the float and recent changes in float of key activities. For example, a project manager may wish to know precisely how much schedule reserve has been consumed by critical path activities, and whether reserves are being consumed or are being preserved in the latest reporting period. This table provides information on the rate of change of schedule reserve.

Resource Leveling
Good scheduling systems provide capabilities to show resource requirements over time and to make adjustments so that the schedule is feasible with respect to resource constraints over time. Resources may include workforce level, funding profiles, important facilities, etc. The objective is to move the start dates of tasks that have float to points where the resource profile is feasible. If that is not sufficient, then the assumed task durations for resource-intensive activities should be reexamined and, accordingly, the resource levels changed.

Gantt Chart Features
The Gantt chart shown in Figure 6.1-3 illustrates the following desirable features:
• A heading that describes the WBS element, identifies the responsible manager, and provides the date of the baseline used and the date that status was reported.
• A milestone section in the main body (lines 1 and 2).
• An activity section in the main body.
Activity data shown includes:
▶▶ WBS elements (lines 3, 5, 8, 12, 16, and 21);
▶▶ Activities (indented from WBS elements);
▶▶ Current plan (shown as thick bars);
▶▶ Baseline plan (same as the current plan or, if different, represented by thin bars under the thick bars);
▶▶ Slack for each activity (dotted horizontal line before the milestone on line 12);
▶▶ Schedule slips from the baseline (dotted horizontal lines after the current plan bars);
▶▶ The critical path, shown encompassing lines 18 through 21 and impacting line 24; and
▶▶ A status line (dotted vertical line from top to bottom of the main body of the chart) at the date the status was reported.
• A legend explaining the symbols in the chart.

This Gantt chart shows only 24 lines, which is a summary of the activities currently being worked for this WBS element. It is appropriate to tailor the amount of detail reported to those items most pertinent at the time of status reporting.

Budgeting
Budgeting and resource planning involve the establishment of a reasonable project baseline budget and the capability to analyze changes to that baseline resulting from technical and/or schedule changes. The project's WBS, baseline schedule, and budget should be viewed as mutually dependent, reflecting the technical content, time, and cost of meeting the project's goals and objectives. The budgeting process needs to take into account


whether a fixed cost cap or cost profile exists. When no such cap or profile exists, a baseline budget is developed from the WBS and network schedule. This specifically involves combining the project's workforce and other resource needs with the appropriate workforce rates and other financial and programmatic factors to obtain cost element estimates. These elements of cost include:
• Direct labor costs,
• Overhead costs,
• Other direct costs (travel, data processing, etc.),
• Subcontract costs,
• Material costs,
• General and administrative costs,
• Cost of money (i.e., interest payments, if applicable),
• Fee (if applicable), and
• Contingency.

When there is a cost cap or a fixed cost profile, there are additional logic gates that must be satisfied before completing the budgeting and planning process. A determination needs to be made whether the WBS and network schedule are feasible with respect to mandated cost caps and/or cost profiles. If not, it will be necessary to consider stretching out a project (usually at an increase in the total cost) or descoping the project's goals and objectives, requirements, design, and/or implementation approach.
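As a rough illustration of how these cost elements might be rolled up into a baseline estimate for a single WBS element, the minimal sketch below applies assumed rates for overhead, general and administrative costs, fee, and contingency. All figures and rates are purely illustrative; actual rates, wrap factors, and cost models are project- and Center-specific and are not defined by the handbook.

```python
# Illustrative roll-up of cost elements for one WBS element.
# Every number and rate below is a hypothetical assumption.
direct_labor_hours = 4000
labor_rate = 95.0          # dollars per hour, assumed
overhead_rate = 0.60       # applied to direct labor, assumed
ga_rate = 0.12             # general and administrative, applied to the subtotal, assumed
fee_rate = 0.08            # assumed
contingency_rate = 0.25    # assumed, tied to complexity and maturity of the estimate

direct_labor = direct_labor_hours * labor_rate
overhead = direct_labor * overhead_rate
other_direct = 50_000.0    # travel, data processing, etc.
subcontracts = 200_000.0
materials = 120_000.0

subtotal = direct_labor + overhead + other_direct + subcontracts + materials
ga = subtotal * ga_rate
fee = (subtotal + ga) * fee_rate
contingency = (subtotal + ga + fee) * contingency_rate

baseline_estimate = subtotal + ga + fee + contingency
print(f"Baseline estimate for this WBS element: ${baseline_estimate:,.0f}")
```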


If a cost cap or fixed cost profile exists, it is important to control costs after they have been baselined. An important aspect of cost control is project cost and schedule status reporting and assessment, methods for which are discussed in Section 6.7. Another is cost and schedule risk planning, such as developing risk avoidance and workaround strategies. At the project level, budgeting and resource planning must ensure that an adequate level of contingency funds is included to deal with unforeseen events.

The maturity of the Life-Cycle Cost Estimate (LCCE) should progress as follows:
• Pre-Phase A: Initial LCCE (70 percent confidence level; however, much uncertainty is expected)
• Phase A: Preliminary commitment to the LCCE
• Phase B: Approved LCCE (70 percent confidence level at PDR commitment)
• Phases C, D, and E: Report variances to the LCCE baseline using Earned Value Management (EVM) and LCCE updates

Credibility of the cost estimate is suspect if:
• WBS cost estimates are expressed only in dollars with no other identifiable units, indicating that requirements are not sufficiently defined for processes and resources to be identified.
• The basis of estimates does not contain sufficient detail for independent verification that work scope and estimated cost (and schedule) are reasonable.
• Actual costs vary significantly from the LCCE.
• Work is performed that was not originally planned, causing cost or schedule variance.
• Schedule and cost earned value performance trends readily indicate unfavorable performance.

Prepare the SEMP and Other Technical Plans
The SEMP is the primary, top-level technical management document for the project and is developed early in the Formulation phase and updated throughout the project life cycle. The SEMP is driven by the type of project, the phase in the project life cycle, and the technical development risks and is written specifically for each project or project element. While the specific content of the SEMP is tailored to the project, the recommended content is discussed in Appendix J.

The technical team, working under the overall project plan, develops and updates the SEMP as necessary. The technical team works with the project manager to review the content and obtain concurrence. This allows for thorough discussion and coordination of how the proposed technical activities would impact the programmatic, cost, and schedule aspects of the project. The SEMP provides the specifics of the technical effort and describes what technical processes will be used, how the processes will be applied using appropriate activities, how the project will be organized to accomplish the activities, and the cost and schedule associated with accomplishing the activities.

The physical length of a SEMP is not what is important. This will vary from project to project. The plan needs to be adequate to address the specific technical needs of the project. It is a living document that is updated as often as necessary to incorporate new information as it becomes available and as the project develops through Implementation. The SEMP should not duplicate other project documents; however, the SEMP should reference and summarize the content of other technical plans.

The systems engineer and project manager must identify additional required technical plans based on the project scope and type. If plans are not included in the SEMP, they should be referenced and coordinated in the development of the SEMP.
Other plans, such as systemsafety and the probabilistic risk assessment, also needto be planned for and coordinated with the SEMP. If atechnical plan is a stand-alone, it should be referencedin the SEMP. Depending on the size and complexity ofthe project, these may be separate plans or may be includedwithin the SEMP. Once identified, the plans canbe developed, training on these plans established, andthe plans implemented. Examples of technical plans inaddition to the SEMP are listed in Appendix K.The SEMP must be developed concurrently with theproject plan. In developing the SEMP, the technical approachto the project and, hence, the technical aspect ofthe project life cycle is developed. This determines theproject’s length and cost. The development of the programmaticand technical management approachesrequires that the key project personnel develop anunderstanding of the work to be performed and the relationshipsamong the various parts of that work. Referto Subsections 6.1.2.1 and 6.1.1.2 on WBSs and networkscheduling, respectively.The SEMP’s development requires contributions fromknowledgeable programmatic and technical experts fromall areas of the project that can significantly influence the<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 119


6.0 Crosscutting <strong>Technical</strong> Managementproject’s outcome. The involvement of recognized expertsis needed to establish a SEMP that is credible to theproject manager and to secure the full commitment ofthe project team.Role of the SEMPThe SEMP is the rule book that describes to all participantshow the project will be technically managed. The<strong>NASA</strong> field center responsible for the project shouldhave a SEMP to describe how it will conduct its technicalmanagement, and each contractor should have a SEMPto describe how it will manage in accordance with bothits contract and <strong>NASA</strong>’s technical management practices.Each Center that is involved with the project should alsohave a SEMP for its part of the project, which would interfacewith the project SEMP of the responsible <strong>NASA</strong>Center, but this lower tier SEMP specifically will addressthat Center’s technical effort and how it interfaces withthe overall project. Since the SEMP is project- and contract-unique,it must be updated for each significant programmaticchange, or it will become outmoded and unused,and the project could slide into an uncontrolled state.The lead <strong>NASA</strong> field center should have its SEMP developedbefore attempting to prepare an initial cost estimate,since activities that incur cost, such as technical risk reduction,need to be identified and described beforehand.The contractor should have its SEMP developed duringthe proposal process (prior to costing and pricing) becausethe SEMP describes the technical content of theproject, the potentially costly risk management activities,and the verification and validation techniques to beused, all of which must be included in the preparation ofproject cost estimates. The SEMPs from the supportingCenters should be developed along with the primaryproject SEMP. The project SEMP is the senior technicalmanagement document for the project: all other technicalplans must comply with it. The SEMP should becomprehensive and describe how a fully integrated engineeringeffort will be managed and conducted.Obtain Stakeholder Commitments to <strong>Technical</strong>PlansTo obtain commitments to the technical plans by thestakeholders, the technical team should ensure that theappropriate stakeholders have a method to provide inputsand to review the project planning for implementationof stakeholder interests. During Formulation,the roles of the stakeholders should be defined in theproject plan and the SEMP. Review of these plans andthe agreement from the stakeholders of the content ofthese plans will constitute buy-in from the stakeholdersin the technical approach. Later in the project life cycle,stakeholders may be responsible for delivery of productsto the project. Initial agreements regarding the responsibilitiesof the stakeholders are key to ensuring that theproject technical team obtains the appropriate deliveriesfrom stakeholders.The identification of stakeholders is one of the early steps inthe systems engineering process. As the project progresses,stakeholder expectations are flowed down through theLogical Decomposition Process, and specific stakeholdersare identified for all of the primary and derived requirements.A critical part of the stakeholders’ involvement isin the definition of the technical requirements. As requirementsand ConOps are developed, the stakeholders willbe required to agree to these products. 
Inadequate stakeholderinvolvement will lead to inadequate requirementsand a resultant product that does not meet the stakeholderexpectations. Status on relevant stakeholder involvementshould be tracked and corrective action taken if stakeholdersare not participating as planned.Throughout the project life cycle, communication withthe stakeholders and commitment from the stakeholdersmay be accomplished through the use of agreements. Organizationsmay use an Internal Task Agreement (ITA),a Memorandum of Understanding (MOU), or othersimilar documentation to establish the relationship betweenthe project and the stakeholder. These agreementsalso are used to document the customer and provider responsibilitiesfor definition of products to be delivered.These agreements should establish the Measures of Effectiveness(MOEs) or Measures of Performance (MOPs)that will be used to monitor the progress of activities. Reportingrequirements and schedule requirements shouldbe established in these agreements. Preparation of theseagreements will ensure that the stakeholders’ roles andresponsibilities support the project goals and that theproject has a method to address risks and issues as theyare identified.During development of the project plan and the SEMP,forums are established to facilitate communication anddocument decisions during the life cycle of the project.These forums include meetings, working groups, decisionpanels, and control boards. Each of these forumsshould establish a charter to define the scope and authorityof the forum and identify necessary voting or120 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.1 <strong>Technical</strong> Planningnonvoting participants. Ad hoc members may be identifiedwhen the expertise or input of specific stakeholdersis needed when specific topics are addressed. Ensure thatstakeholders have been identified to support the forum.Issue <strong>Technical</strong> Work DirectivesThe technical team provides technical work directivesto Cost Account Managers (CAMs). This enables theCAMs to prepare detailed plans that are mutually consistentand collectively address all of the work to be performed.These plans include the detailed schedules andbudgets for cost accounts that are needed for cost managementand EVM.Issuing technical work directives is an essential activityduring Phase B of a project, when a detailed planningbaseline is required. If this activity is not implemented,then the CAMs are often left with insufficient guidancefor detailed planning. The schedules and budgets that areneeded for EVM will then be based on assumptions andlocal interpretations of project-level information. If thisis the case, it is highly likely that substantial varianceswill occur between the baseline plan and the work performed.Providing technical work directives to CAMsproduces a more organized technical team. This activitymay be repeated when replanning occurs.This activity is not limited to systems engineering. Thisis a normal part of project planning wherever there is aneed for an accurate planning baseline.The technical team will provide technical directives toCAMs for every cost account within the SE element ofthe WBS. These directives may be in any format, butshould clearly communicate the following informationfor each account:zz <strong>Technical</strong> products expected;zz Documents and technical reporting requirements foreach cost account;zz Critical events, and specific products expected from aparticular CAM in support of this event (e.g., this costaccount is expected to deliver a presentation on specifictopics at the PDR);zz References to applicable requirements, policies, andstandards;zz Identification of particular tools that should be used;zz Instructions on how the technical team wants to co-ordinate and review cost account plans before they goto project management; andzz Decisions that have been made on how work is to beperformed and who is to perform it.CAMs receive these technical directives, along with theproject planning guidelines, and prepare cost accountplans. These plans may be in any format and may havevarious names at different Centers, but minimally theywill include:zz Scope of the cost account, which includes:▶▶<strong>Technical</strong> products delivered;▶▶Other products developed that will be needed tocomplete deliverables (e.g., a Configuration Management(CM) system may need development inorder to deliver the product of a “managed configuration”);▶▶A brief description of the procedures that will befollowed to complete work on these products, suchas:•• Product X will be prepared in-house, using thelocal procedure A, which is commonly used inOrganization ABC,•• Product X will be verified/validated in the followingmanner…,•• Product X will be delivered to the project in thefollowing manner…,•• Product X delivery will include the following reports(e.g., delivery of a CM system to the projectwould include regular reports on the status of theconfiguration, etc.),•• Product Y will be procured in accordance withprocurement procedure B.zz A schedule attached to this plan in a format com-patible with project guidelines for schedules. 
CAMs receive these technical directives, along with the project planning guidelines, and prepare cost account plans. These plans may be in any format and may have various names at different Centers, but minimally they will include:

• Scope of the cost account, which includes:
  ▶ Technical products delivered;
  ▶ Other products developed that will be needed to complete deliverables (e.g., a Configuration Management (CM) system may need development in order to deliver the product of a "managed configuration");
  ▶ A brief description of the procedures that will be followed to complete work on these products, such as:
    – Product X will be prepared in-house, using the local procedure A, which is commonly used in Organization ABC,
    – Product X will be verified/validated in the following manner…,
    – Product X will be delivered to the project in the following manner…,
    – Product X delivery will include the following reports (e.g., delivery of a CM system to the project would include regular reports on the status of the configuration, etc.),
    – Product Y will be procured in accordance with procurement procedure B.
• A schedule attached to this plan in a format compatible with project guidelines for schedules. This schedule would contain each of the procedures and deliverables mentioned above and provide additional information on the activity steps of each procedure.
• A budget attached to this plan in a system compatible with project guidelines for budgets. This budget would be consistent with the resources needed to accomplish the scheduled activities.
• Any necessary agreements and approvals.

If the project is going to use EVM, then the scope of a cost account needs to further identify a number of "work packages," which are units of work that can be scheduled and given cost estimates. Work packages should be based on completed products to the greatest extent possible, but may also be based on completed procedures (e.g., completion of validation). Each work package will have its own schedule and a budget. The budget for this work package becomes part of the Budgeted Cost of Work Scheduled (BCWS) in the EVM system. When this unit of work is completed, the project's earned value will increase by this amount. There may be future work in this cost account that is not well enough defined to be described as a set of work packages. For example, launch operations will be supported by the technical team, but the details of what will be done often have not been worked out during Phase B. In this case, this future work is called a "planning package," which has a high-level schedule and an overall budget. When this work is understood better, the planning package will be broken up into work packages, so that the EVM system can continue to operate during launch operations.

Cost account plans should be reviewed and approved by the technical team and by the line manager of the cost account manager's home organization. Planning guidelines may identify additional review and approval requirements.

The planning process described above is not limited to systems engineering. This is the expected process for all elements of a flight project. One role that the systems engineer may have in planning is to verify that the scope of work described in cost account plans across the project is consistent with the project WBS dictionary, and that the WBS dictionary is consistent with the architecture of the project.

Capture Technical Planning Work Products

The work products from the Technical Planning Process should be managed using either the Technical Data Management Process or the Configuration Management Process as required. Some of the more important products of technical planning (i.e., the WBS, the SEMP, the schedule, etc.) are kept under configuration control and captured using the CM process. The Technical Data Management Process is used to capture trade studies, cost estimates, technical analyses, reports, and other important documents not under formal configuration control. Work products such as meeting minutes and correspondence (including e-mail) containing decisions or agreements with stakeholders also should be retained and stored in project files for later reference.

6.1.1.3 Outputs

Typical outputs from technical planning activities are:

• Technical work cost estimates, schedules, and resource needs, e.g., funds, workforce, facilities, and equipment (to project), within the project resources;
• Product and process measures needed to assess progress of the technical effort and the effectiveness of processes (to Technical Assessment Process);
• Technical planning strategy, WBS, SEMP, and other technical plans that support implementation of the technical effort (to all processes; applicable plans to technical processes);
• Technical work directives, e.g., work packages or task orders with work authorization (to applicable technical teams); and
• Technical Planning Process work products needed to provide reports, records, and nondeliverable outcomes of process activities (to Technical Data Management Process).

The resulting technical planning strategy would constitute an outline, or rough draft, of the SEMP.
This would serve as a starting point for the overall Technical Planning Process after initial preparation is complete. When preparations for technical planning are complete, the technical team should have a cost estimate and schedule for the technical planning effort. The budget and schedule to support the defined technical planning effort can then be negotiated with the project manager to resolve any discrepancies between what is needed and what is available.

The SEMP baseline needs to be completed. Planning for the update of the SEMP based on programmatic changes needs to be developed and implemented. The SEMP needs to be approved by the appropriate level of authority.

This "technical work directives" step produces: (1) planning directives to cost account managers that result in (2) a consistent set of cost account plans. Where EVM is called for, it produces (3) an EVM planning baseline, including a BCWS.
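To make the relationship between work packages, planning packages, and the BCWS concrete, the sketch below rolls a few hypothetical work packages up into a time-phased BCWS and credits earned value as packages are completed. The account structure, budgets, dates, and the simple 0/100 crediting rule are illustrative assumptions, not a NASA-defined format.

```python
from dataclasses import dataclass

@dataclass
class WorkPackage:
    name: str
    budget: float         # budget for this package, in consistent-year dollars
    scheduled_month: int   # month (from project start) in which the work is scheduled
    complete: bool = False

# Hypothetical work packages for one SE cost account; a "planning package"
# would carry a single overall budget until it is decomposed like these.
packages = [
    WorkPackage("Verification plan", 120_000.0, 3, complete=True),
    WorkPackage("Interface control drawings", 80_000.0, 5, complete=False),
    WorkPackage("PDR data package", 60_000.0, 6, complete=False),
]

def bcws_through(month, pkgs):
    """Budgeted Cost of Work Scheduled: budget of everything scheduled by this month."""
    return sum(p.budget for p in pkgs if p.scheduled_month <= month)

def earned_value(pkgs):
    """Budgeted Cost of Work Performed: budget of completed packages (0/100 crediting)."""
    return sum(p.budget for p in pkgs if p.complete)

print(bcws_through(5, packages))   # 200000.0 scheduled by month 5
print(earned_value(packages))      # 120000.0 earned so far
```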


6.1.2 Technical Planning Guidance

6.1.2.1 Work Breakdown Structure

A work breakdown structure is a hierarchical breakdown of the work necessary to complete a project. The WBS should be a product-based, hierarchical division of deliverable items and associated services. As such, it should contain the project's Product Breakdown Structure (PBS) with the specified prime product(s) at the top and the systems, segments, subsystems, etc., at successively lower levels. At the lowest level are products such as hardware items, software items, and information items (documents, databases, etc.) for which there is a cognizant engineer or manager. Branch points in the hierarchy should show how the PBS elements are to be integrated. The WBS is built, in part, from the PBS by adding, at each branch point of the PBS, any necessary service elements, such as management, systems engineering, Integration and Verification (I&V), and integrated logistics support. If several WBS elements require similar equipment or software, then a higher level WBS element might be defined from the system level to perform a block buy or a development activity (e.g., system support equipment). Figure 6.1-4 shows the relationship between a system, a PBS, and a WBS. In summary, the WBS is a combination of the PBS and input from the system level. The system level is incorporated to capture and integrate similarities across WBS elements.

A project WBS should be carried down to the cost account level appropriate to the risks to be managed. The appropriate level of detail for a cost account is determined by management's desire to have visibility into costs, balanced against the cost of planning and reporting. Contractors may have a Contract WBS (CWBS) that is appropriate to their need to control costs. A summary CWBS, consisting of the upper levels of the full CWBS, is usually included in the project WBS to report costs to the contracting organization. WBS elements should be identified by title and by a numbering system that performs the following functions:

• Identifies the level of the WBS element,
• Identifies the higher level element into which the WBS element will be integrated, and
• Shows the cost account number of the element.

A WBS should also have a companion WBS dictionary that contains each element's title, identification number, objective, description, and any dependencies (e.g., receivables) on other WBS elements. This dictionary provides a structured project description that is valuable for orienting project members and other interested parties. It fully describes the products and/or services expected from each WBS element. This subsection provides some techniques for developing a WBS and points out some mistakes to avoid.

Role of the WBS

Figure 6.1-4 Relationship between a system, a PBS, and a WBS (the PBS shows the components that form the system; the WBS adds the work, such as management and I&V, needed to produce and integrate those components)

The technical team should receive planning guidelines from the project office. The technical team should provide the project office with any appropriate tailoring or expansion of the systems engineering WBS element, and have project-level concurrence on the WBS and WBS dictionary before issuing technical work directives.

A product-based WBS is the organizing structure for:

• Project and technical planning and scheduling.
• Cost estimation and budget formulation. (In particular, costs collected in a product-based WBS can be compared to historical data.
This is identified as a primary objective by DOD standards for WBSs.)
• Defining the scope of statements of work and specifications for contract efforts.
• Project status reporting, including schedule, cost, workforce, technical performance, and integrated cost/schedule data (such as earned value and estimated cost at completion).
• Plans, such as the SEMP, and other documentation products, such as specifications and drawings.

It provides a logical outline and vocabulary that describes the entire project, and integrates information in a consistent way. If there is a schedule slip in one element of a WBS, an observer can determine which other WBS elements are most likely to be affected. Cost impacts are more accurately estimated. If there is a design change in one element of the WBS, an observer can determine which other WBS elements will most likely be affected, and these elements can be consulted for potential adverse impacts.

Techniques for Developing the WBS

Developing a successful project WBS is likely to require several iterations through the project life cycle since it is not always obvious at the outset what the full extent of the work may be. Prior to developing a preliminary WBS, there should be some development of the system architecture to the point where a preliminary PBS can be created. The PBS and associated WBS can then be developed level by level from the top down. In this approach, a project-level systems engineer finalizes the PBS at the project level and provides a draft PBS for the next lower level. The WBS is then derived by adding appropriate services such as management and systems engineering to that lower level. This process is repeated recursively until a WBS exists down to the desired cost account level. An alternative approach is to define all levels of a complete PBS in one design activity and then develop the complete WBS. When this approach is taken, it is necessary to take great care to develop the PBS so that all products are included and all assembly/I&V branches are correct. The involvement of people who will be responsible for the lower level WBS elements is recommended.

Common Errors in Developing a WBS

There are three common errors found in WBSs.

• Error 1: The WBS describes functions, not products. This makes the project manager the only one formally responsible for products.
• Error 2: The WBS has branch points that are not consistent with how the WBS elements will be integrated. For instance, in a flight operations system with a distributed architecture, there is typically software associated with hardware items that will be integrated and verified at lower levels of a WBS. It would then be inappropriate to separate hardware and software as if they were separate systems to be integrated at the system level. This would make it difficult to assign accountability for integration and to identify the costs of integrating and testing components of a system.
• Error 3: The WBS is inconsistent with the PBS. This makes it possible that the PBS will not be fully implemented and generally complicates the management process.

Some examples of these errors are shown in Figure 6.1-5. Each one prevents the WBS from successfully performing its roles in project planning and organizing. These errors are avoided by using the WBS development techniques described above.
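As a sketch of the top-down technique described above (and a guard against Error 3), the fragment below derives a WBS from a PBS by adding service elements such as management, systems engineering, and I&V at each branch point, then checks that every PBS element still appears in the WBS. The element names and the tiny PBS are made-up examples.

```python
# Hypothetical PBS: each product maps to its list of child products.
pbs = {
    "Spacecraft": ["Subsystem A", "Subsystem B"],
    "Subsystem A": [],
    "Subsystem B": [],
}

SERVICE_ELEMENTS = ["Management", "Systems Engineering", "I&V"]

def build_wbs(pbs, root):
    """Derive a WBS level by level: keep each PBS branch and add service elements at every branch point."""
    children = pbs.get(root, [])
    node = {"element": root, "children": []}
    if children:  # a branch point also gets the work needed to integrate its children
        node["children"] = [{"element": f"{root} {svc}", "children": []} for svc in SERVICE_ELEMENTS]
        node["children"] += [build_wbs(pbs, child) for child in children]
    return node

def wbs_elements(node):
    yield node["element"]
    for child in node["children"]:
        yield from wbs_elements(child)

wbs = build_wbs(pbs, "Spacecraft")
missing = [p for p in pbs if p not in set(wbs_elements(wbs))]
print(missing)  # an empty list means every PBS element is carried into the WBS (avoids Error 3)
```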
Common to both the project management and systems engineering disciplines is the requirement for organizing and managing a system throughout its life cycle within a systematic and structured framework, reflective of the work to be performed and the associated cost, schedule, technical, and risk data to be accumulated, summarized, and reported. (See NPR 7120.5.)

A key element of this framework is a hierarchical, product-oriented WBS. Derived from both the physical and system architectures, the WBS provides a systematic, logical approach for defining and translating initial mission goals and technical concepts into tangible project goals, system products, and life-cycle support (or enabling) functions.

When appropriately structured and used in conjunction with sound engineering principles, the WBS supplies a common framework for subdividing the total project into clearly defined, product-oriented work components, logically related and sequenced according to hierarchy, schedule, and responsibility assignment.

The composition and level of detail required in the WBS hierarchy are determined by the project management and technical teams based on careful consideration of the project's size and the complexity, constraints, and risk associated with the technical effort.


Figure 6.1-5 Examples of WBS development errors (Error 1: functions without products; Error 2: inappropriate branches; Error 3: inconsistency with the PBS)

The initial WBS will provide a structured framework for conceptualizing and defining the program/project objectives and for translating the initial concepts into the major systems, component products, and services to be developed, produced, and/or obtained. As successive levels of detail are defined, the WBS hierarchy will evolve to reflect a comprehensive, complete view of both the total project effort and each system or end product to be realized throughout the project's life cycle.

Decomposition of the major deliverables into unique, tangible product or service elements should continue to a level representative of how each WBS element will be planned and managed. Whether assigned to in-house or contractor organizations, these lower WBS elements will be subdivided into subordinate tasks and activities and aggregated into the work packages and control accounts utilized to populate the project's cost plans, schedules, and performance metrics.

At a minimum, the WBS should reflect the major system products and services to be developed and/or procured, the enabling (support) products and services, and any high-cost and/or high-risk product elements residing at lower levels in the hierarchy.¹ The baseline WBS configuration will be documented as part of the program plan and utilized to structure the SEMP. The cost estimates and the WBS dictionary are maintained throughout the project's life cycle to reflect the project's current scope.

The preparation and approval of three key program/project documents, the Formulation Authorization Document (FAD), the program commitment agreement, and the program/project plans, are significant contributors to early WBS development. The initial contents of these documents will establish the purpose, scope, objectives, and applicable agreements for the program of interest and will include a list of approved projects, control plans, management approaches, and any commitments and constraints identified.

The technical team selects the appropriate system design processes to be employed in the top-down definition of each product in the system structure. Subdivision of the project and system architecture into smaller, more manageable components will provide logical summary points for assessing the overall project's accomplishments and for measuring cost and schedule performance.

Once the initial mission goals and objectives have evolved into the build-to or final design, the WBS will be refined and updated to reflect the evolving scope and architecture of the project and the bottom-up realization of each product in the system structure.

Throughout the applicable life-cycle phases, the WBS and WBS dictionary will be updated to reflect the project's current scope and to ensure control of high-risk and cost/schedule performance issues.

6.1.2.2 Cost Definition and Modeling

This subsection deals with the role of cost in the systems analysis and engineering process, how to measure it, how to control it, and how to obtain estimates of it.
¹ IEEE Standard 1220, Section C.3: "The system products and life cycle enabling products should be jointly engineered and once the enabling products and services are identified, should be treated as systems in the overall system hierarchy."


WBS Hierarchies for Systems

It is important to note that while product-oriented in nature, the standard WBS mandated for NASA space flight projects in NPR 7120.5 approaches WBS development from a project and not a system perspective. The WBS mandated reflects the scope of a major Agency project and, therefore, is structured to include the development, operation, and disposal of more than one major system of interest during the project's normal life cycle.

WBS hierarchies for NASA's space flight projects will include high-level system products, such as payload, spacecraft, and ground systems, and enabling products and services, such as project management, systems engineering, and education. These standard product elements have been established to facilitate alignment with the Agency's accounting, acquisition, and reporting systems.

Unlike the project-view WBS approach described in NPR 7120.5, creation of a technical WBS focuses on the development and realization of both the overall end product and each subproduct included as a lower level element in the overall system structure.

NPR 7123.1, NASA Systems Engineering Processes and Requirements, mandates a standard, systematic technical approach to system or end-product development and realization. Utilizing a building-block or product-hierarchy approach, the system architecture is successively defined and decomposed into subsystems (elements performing the operational functions of the system) and associated and interrelated subelements (assemblies, components, parts, and enabling life-cycle products).

The resulting hierarchy or family-product tree depicts the entire system architecture in a PBS. Recognized by Government and industry as a "best practice," utilization of the PBS and its building-block configuration facilitates both the application of NPR 7123.1's 17 common technical processes at all levels of the PBS structure and the definition and realization of successively lower level elements of the system's hierarchy.

Definition and application of the work effort to the PBS structure yields a series of functional subproducts or "children" WBS models. The overall parent or system WBS model is realized through the rollup of successive levels of these product-based, subelement WBS models. Each WBS model represents one unique unit or functional end product in the overall system configuration and, when related by the PBS into a hierarchy of individual models, represents one functional system end product or "parent" WBS model.

(See NPR 7120.5, NASA Space Flight Program and Project Management Requirements.)

The reason costs and their estimates are of great importance in systems engineering goes back to a principal objective of systems engineering: fulfilling the system's goals in the most cost-effective manner. The cost of each alternative should be one of the most important outcome variables in trade studies performed during the systems engineering process.

One role, then, for cost estimates is in helping to choose rationally among alternatives. Another is as a control mechanism during the project life cycle. Cost measures produced for project life-cycle reviews are important in determining whether the system goals and objectives are still deemed valid and achievable, and whether constraints and boundaries are worth maintaining.
These measures are also useful in determining whether system goals and objectives have properly flowed down through to the various subsystems.

As system designs and ConOps mature, cost estimates should mature as well. At each review, cost estimates need to be presented and compared to the funds likely to be available to complete the project. The cost estimates presented at early reviews must be given special attention since they usually form the basis for the initial cost commitment for the project. The systems engineer must be able to provide realistic cost estimates to the project manager. In the absence of such estimates, overruns are likely to occur, and the credibility of the entire system development process, both internal and external, is threatened.

Life-Cycle Cost and Other Cost Measures

A number of questions need to be addressed so that costs are properly treated in systems analysis and engineering. These questions include:

• What costs should be counted?
• How should costs occurring at different times be treated?
• What about costs that cannot easily be measured in dollars?

What Costs Should Be Counted

The most comprehensive measure of the cost of an alternative is its life-cycle cost. According to NPR 7120.5, a system's life-cycle cost is "the total of the direct, indirect, recurring, nonrecurring, and other related expenses incurred, or estimated to be incurred, in the design, development, verification, production, operation, maintenance, support, and disposal of a project. The life-cycle cost of a project or system can also be defined as the total cost of ownership over the project or system's life cycle from Formulation through Implementation. It includes all design, development, deployment, operation and maintenance, and disposal costs."

Costs Occurring Over Time

The life-cycle cost combines costs that typically occur over a period of several years. To facilitate engineering trades and comparison of system costs, these real-year costs are deescalated to constant-year values. This removes the impact of inflation from all estimates and allows ready comparison of alternative approaches. In those instances where major portfolio architectural trades are being conducted, it may be necessary to perform formal cost-benefit analyses or evaluate leasing versus purchase alternatives. In those trades, engineers and cost analysts should follow the guidance provided in Office of Management and Budget (OMB) Circular A-94 on rate of return and net present value calculation in comparing alternatives.
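The sketch below illustrates the two adjustments just described: deescalating real-year costs to constant-year dollars with an inflation index, and computing a net present value for comparing alternatives. The inflation and discount rates are illustrative placeholders, not NASA or OMB values; actual rates come from Agency budget guidance and OMB Circular A-94.

```python
def to_constant_year(real_year_costs, inflation_rate, base_year):
    """Deescalate real-year costs {year: dollars} to constant base-year dollars
    using a single assumed inflation rate (a simplification; real indices vary by year)."""
    return {
        year: cost / ((1.0 + inflation_rate) ** (year - base_year))
        for year, cost in real_year_costs.items()
    }

def net_present_value(constant_year_costs, discount_rate, base_year):
    """Discount a constant-year cost stream back to the base year for alternative comparison."""
    return sum(
        cost / ((1.0 + discount_rate) ** (year - base_year))
        for year, cost in constant_year_costs.items()
    )

# Hypothetical cost stream for one alternative (real-year dollars).
costs = {2024: 10.0e6, 2025: 12.0e6, 2026: 8.0e6}
constant = to_constant_year(costs, inflation_rate=0.03, base_year=2024)
print(round(net_present_value(constant, discount_rate=0.07, base_year=2024)))
```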
Difficult-to-Measure Costs

In practice, estimating some costs poses special problems. These special problems, which are not unique to NASA systems, usually occur in two areas: (1) when alternatives have differences in the irreducible chances of loss of life, and (2) when externalities are present. Two examples of externalities that impose costs are pollution caused by some launch systems and the creation of orbital debris. Because it is difficult to place a dollar figure on these resource uses, they are generally called "incommensurable costs." The general treatment of these types of costs in trade studies is not to ignore them, but instead to keep track of them along with other costs. If these elements are part of the trade space, it is generally advisable to apply Circular A-94 approaches to those trades.

Controlling Life-Cycle Costs

The project manager/systems engineer must ensure that the probabilistic life-cycle cost estimate is compatible with NASA's budget and strategic priorities. The current policy is that projects are to submit budgets sufficient to ensure a 70 percent probability of achieving the objectives within the proposed resources. Project managers and systems engineers must establish processes to estimate, assess, monitor, and control the project's life-cycle cost through every phase of the project.

Early decisions in the systems engineering process tend to have the greatest effect on the resultant system life-cycle cost. Typically, by the time the preferred system architecture is selected, between 50 and 70 percent of the system's life-cycle cost has been locked in. By the time a preliminary system design is selected, this figure may be as high as 90 percent. This presents a major dilemma to the systems engineer, who must lead this selection process. Just at the time when decisions are most critical, the state of information about the alternatives is least certain. Uncertainty about costs is a fact of systems engineering, and that uncertainty must be accommodated by complete and careful analysis of the project risks and provision of sufficient margins (cost, technical, and schedule) to ensure success. There are a number of estimating techniques to assist the systems engineer and project manager in providing for uncertainty and unknown requirements. Additional information on these techniques can be found in the NASA Cost Estimating Handbook.

This suggests that efforts to acquire better information about the life-cycle cost of each alternative early in the project life cycle (Phases A and B) potentially have very high payoffs. The systems engineer needs to identify the principal life-cycle cost drivers and the risks associated with the system design, manufacturing, and operations. Consequently, it is particularly important to bring in the specialty engineering disciplines, such as reliability, maintainability, supportability, and operations engineering, early in the systems engineering process, as they are essential to proper life-cycle cost estimation.

One mechanism for controlling life-cycle cost is to establish a life-cycle cost management program as part of the project's management approach. (Life-cycle cost management has sometimes been called "design-to-life-cycle cost.")


Such a program establishes life-cycle cost as a design goal, perhaps with subgoals for acquisition costs or operations and support costs. More specifically, the objectives of a life-cycle cost management program are to:

• Identify a common set of ground rules and assumptions for life-cycle cost estimation;
• Manage to a cost baseline and maintain traceability to the technical baseline with documentation for subsequent cost changes;
• Ensure that best-practice methods, tools, and models are used for life-cycle cost analysis;
• Track the estimated life-cycle cost throughout the project life cycle; and, most important,
• Integrate life-cycle cost considerations into the design and development process via trade studies and formal change request assessments.

Trade studies and formal change request assessments provide the means to balance the effectiveness and life-cycle cost of the system. The complexity of integrating life-cycle cost considerations into the design and development process should not be underestimated, but neither should the benefits, which can be measured in terms of greater cost-effectiveness. The existence of a rich set of potential life-cycle cost trades makes this complexity even greater.

Cost-Estimating Methods

Various cost-estimating methodologies are utilized throughout a program's life cycle. These include parametric, analogous, and engineering (grassroots) methods.

• Parametric: Parametric cost models are used in the early stages of project development when there is limited program and technical definition. Such models involve collecting relevant historical data at an aggregated level of detail and relating it to the area to be estimated through the use of mathematical techniques to create cost-estimating relationships. Normally, less detail is required for this approach than for other methods.
• Analogous: This method rests on the observation that most new programs originate or evolve from existing programs or simply represent a new combination of existing components. It uses actual costs of similar existing or past programs and adjusts for complexity, technical, or physical differences to derive the new system estimate. This method would be used when there are insufficient actual cost data to use as a basis for a detailed approach but there is a sufficient amount of program and technical definition.
• Engineering (Grassroots): These bottom-up estimates are the result of rolling up the costs estimated by each organization performing work described in the WBS. Properly done, grassroots estimates can be quite accurate, but each time a "what if" question is raised, a new estimate needs to be made. Each change of assumptions voids at least part of the old estimate. Because the process of obtaining grassroots estimates is typically time consuming and labor intensive, the number of such estimates that can be prepared during trade studies is in reality severely limited.

The type of cost-estimating method used will depend on the adequacy of program definition, level of detail required, availability of data, and time constraints. For example, during the early stages of a program, a conceptual study considering several options would dictate an estimating method requiring no actual cost data and limited program definition on the systems being estimated. A parametric model would be a sound approach at this point. Once a design is baselined and the program is more adequately defined, an analogy approach becomes appropriate. As detailed actual cost data are accumulated, a grassroots methodology is used.

More information on cost-estimating methods and the development of cost estimates can be found in the NASA Cost Estimating Handbook.
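As a sketch of the parametric approach, a cost-estimating relationship (CER) is typically a curve fit of historical cost against one or two technical parameters. The power-law form and the data below are purely illustrative assumptions for demonstration; real CERs are derived from the historical data sets referenced in the NASA Cost Estimating Handbook.

```python
import math

def power_law_cer(mass_kg, a, b):
    """Illustrative CER of the form cost = a * mass^b (cost in millions of constant-year dollars)."""
    return a * (mass_kg ** b)

def fit_power_law(masses, costs):
    """Fit a and b by least squares in log space (log(cost) vs. log(mass))."""
    n = len(masses)
    xs = [math.log(m) for m in masses]
    ys = [math.log(c) for c in costs]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)
    a = math.exp(y_bar - b * x_bar)
    return a, b

# Made-up "historical" instrument data points: (mass in kg, cost in $M).
history = [(50, 20.0), (120, 38.0), (300, 75.0)]
a, b = fit_power_law([m for m, _ in history], [c for _, c in history])
print(round(power_law_cer(200, a, b), 1))  # estimate for a hypothetical 200 kg instrument
```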
Integrating Cost Model Results for a Complete Life-Cycle Cost Estimate

A number of parametric cost models are available for costing NASA systems. A list of the models currently in use may be found in an appendix in the NASA Cost Estimating Handbook. Unfortunately, none alone is sufficient to estimate life-cycle cost. Assembling an estimate of life-cycle cost often requires that several different models (along with the other two techniques) be used together. Whether generated by parametric models, analogous methods, or grassroots methods, the estimated cost of the hardware element must frequently be "wrapped," or have factors applied, to estimate the costs associated with management, systems engineering, test, etc., of the systems being estimated. The NASA full-cost factors also must be applied separately.


To integrate the costs being estimated by these different models, the systems engineer should ensure that the inputs to and assumptions of the models are consistent, that all relevant life-cycle cost components are covered, and that the phasing of costs is correct. Estimates from different sources are often expressed in different constant-year dollars, which must be combined. Appropriate inflation factors must be applied to enable construction of a total life-cycle cost estimate in real-year dollars. Guidance on the use of inflation rates for new projects and for budget submissions for ongoing projects can be found in the annual NASA strategic guidance.

Cost models frequently produce a cost estimate for the first unit of a hardware item, but where the project requires multiple units, a learning curve can be applied to the first unit cost to obtain the required multiple-unit estimate. Learning curves are based on the concept that the resources required to produce each additional unit decline as the total number of units produced increases. The learning curve concept is used primarily for uninterrupted manufacturing and assembly tasks, which are highly repetitive and labor intensive. The major premise of learning curves is that each time the product quantity doubles, the resources (labor hours) required to produce the product will reduce by a determined percentage of the prior quantity resource requirements. The two types of learning curve approaches are the unit curve and the cumulative average curve. The systems engineer can learn more about the calculation and use of learning curves in the NASA Cost Estimating Handbook.

Models frequently provide a cost estimate of the total acquisition effort without providing a recommended phasing of costs over the life cycle. The systems engineer can use a set of phasing algorithms based on the typical ramping-up and subsequent ramping-down of acquisition costs for that type of project if a detailed project schedule is not available to form a basis for the phasing of the effort. A normal distribution curve, or beta curve, is one type of function used for spreading parametrically derived cost estimates and for R&D contracts where costs build up slowly during the initial phases and then escalate as the midpoint of the contract approaches. A beta curve is a combination of percent spent against percent time elapsed between two points in time. More about beta curves can be found in an appendix of the NASA Cost Estimating Handbook.

Although parametric cost models for space systems are already available, their proper use usually requires a considerable investment in learning how to appropriately utilize the models. For projects outside of the domains of these existing cost models, new cost models may be needed to support trade studies. Efforts to develop these models need to begin early in the project life cycle to ensure their timely application during the systems engineering process. Whether existing models or newly created ones are used, the SEMP and its associated life-cycle cost management plan should identify which (and how) models are to be used during each phase of the project life cycle.
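A minimal sketch of the unit-curve form of the learning curve described above: with an assumed learning-curve slope (for example, 90 percent means each doubling of quantity reduces unit resources to 90 percent of the prior value), the cost of unit n follows T_n = T_1 * n^b with b = log2(slope). The first-unit cost, slope, and quantity below are hypothetical.

```python
import math

def unit_cost(first_unit_cost, n, slope):
    """Unit-curve learning curve: cost of the nth unit, where slope is e.g. 0.90 for a 90% curve."""
    b = math.log(slope, 2)  # exponent such that unit cost falls to `slope` of its prior value at each doubling
    return first_unit_cost * (n ** b)

def total_cost(first_unit_cost, quantity, slope):
    """Total cost of producing `quantity` units under the unit-curve assumption."""
    return sum(unit_cost(first_unit_cost, n, slope) for n in range(1, quantity + 1))

# Hypothetical values: $4.0M first unit, 90% learning curve, 8 units required.
print(round(unit_cost(4.0e6, 8, 0.90)))   # unit 8 costs 0.9**3 of unit 1, about $2.92M
print(round(total_cost(4.0e6, 8, 0.90)))
```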
6.1.2.3 Lessons Learned

No section on technical planning guidance would be complete without the effective integration and incorporation of the lessons learned relevant to the project.

Systems Engineering Role in Lessons Learned

Systems engineers are the main users of and contributors to lessons learned systems. A lesson learned is knowledge or understanding gained by experience, whether a successful test or mission or a mishap or failure. Systems engineers compile lessons learned to serve as historical documents, requirements rationales, and other supporting data analysis. Systems engineering practitioners collect lessons learned during program and project planning, key decision points, life-cycle phases, systems engineering processes, and technical reviews. Systems engineers' responsibilities include knowing how to utilize, manage, create, and store lessons learned and knowledge management best practices.

Utilization of Lessons Learned Best Practice

Lessons learned are important to future programs, projects, and processes because they show hypotheses and conclusive insights from previous projects or processes. Practitioners determine how previous lessons from processes or tasks impact risks to current projects and implement those lessons learned that improve design and/or performance.

To pull in lessons learned at the start of a project or task:

• Search the NASA Lessons Learned Information System (LLIS) database using keywords of interest to the new program or project.


The process for recording lessons learned is explained in NPR 7120.6, Lessons Learned Process. In addition, other organizations doing similar work may have publicly available databases with lessons learned. For example, the Chemical Safety Board has a good series of case study reports on mishaps.
• Supporting lessons from each engineering discipline should be reflected in the program and project plans. Even if little information was found, the search for lessons learned can be documented.
• Compile lessons by topic and/or discipline.
• Review and select knowledge gained from particular lessons learned.
• Determine how these lessons learned may represent potential risk to the current program or project.
• Incorporate knowledge gained into the project database for risk management, cost estimates, and any other supporting data analysis.

As an example, a systems engineer working on the concept for an instrument for a spacecraft might search the lessons learned database using the keywords "environment," "mishap," or "configuration management." One of the lessons learned that search would bring up is #1514. The lesson was from Chandra. A rebaseline of the program in 1992 removed two instruments, changed Chandra's orbit from low Earth to high elliptical, and simplified the thermal control concept from the active control required by one of the descoped instruments to a passive "cold-biased" surface plus heaters. This change in thermal control concept mandated silver Teflon thermal control surfaces. The event driving the lesson was a severe spacecraft charging and electrostatic discharge environment, which necessitated an aggressive electrostatic discharge test and circuit protection effort that cost over $1 million, according to the database. The Teflon thermal control surfaces plus the high elliptical orbit created the electrostatic problem: design solutions for one environment were inappropriate in another environment. The lesson learned was that any orbit modification should trigger a complete new iteration of the systems engineering processes starting from requirements definition. Rebaselining a program should take into account changes in the natural environment before new design decisions are made. This lesson would be valuable to keep in mind when changes occur to baselines on the program currently being worked on.

Management of Lessons Learned Best Practice

Capturing lessons learned is a function of good management practice and discipline. Too often lessons learned are missed because they should have been developed and managed within, across, or between life-cycle phases. There is a tendency to wait until resolution of a situation to document a lesson learned, but the unfolding of a problem at the beginning is valuable information and hard to recreate later. It is important to document a lesson learned as it unfolds, particularly as resolution may not be reached until a later phase. Since detailed lessons are often hard to recover after the fact, waiting until a technical review or the end of a project to collect lessons learned hinders the use of lessons and the evolution of practice. A mechanism for managing and leveraging lessons as they occur, such as monthly lessons learned briefings or other periodic sharing forums, facilitates incorporating lessons into practice and carrying lessons into the next phase.

At the end of each life-cycle phase, practitioners should use systems engineering processes and procedural tasks as control gate cues.
All information passed across control gates must be managed in order to successfully enter the next phase, process, or task.

The systems engineering practitioner should make sure all lessons learned in the present phase are concise and conclusive. Conclusive lessons learned contain the series of events from which the abstract and driving event are formulated. Irresolute lessons learned may be rolled into the next phase to await proper supporting evidence. Project managers and the project technical team are to make sure lessons learned are recorded in the Agency database at the end of all life-cycle phases, major systems engineering processes, key decision points, and technical reviews.
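To illustrate the "pull in lessons learned" steps above, the sketch below filters a local collection of lesson records by keyword and groups the hits by discipline so they can be reviewed and fed into the risk database. The record fields and sample lessons are invented; an actual search would be run against LLIS or another organization's database.

```python
from collections import defaultdict

# Invented lesson records standing in for database search results.
lessons = [
    {"id": 1514, "discipline": "Thermal/Environments",
     "abstract": "Orbit change invalidated thermal control and charging assumptions",
     "keywords": {"environment", "configuration management"}},
    {"id": 2001, "discipline": "Software",
     "abstract": "Late requirements creep drove retest cost growth",
     "keywords": {"requirements", "cost"}},
]

def pull_lessons(records, search_terms):
    """Return lessons matching any search term, grouped by discipline for review."""
    grouped = defaultdict(list)
    terms = {t.lower() for t in search_terms}
    for rec in records:
        if terms & {k.lower() for k in rec["keywords"]}:
            grouped[rec["discipline"]].append(rec)
    return dict(grouped)

hits = pull_lessons(lessons, ["environment", "mishap", "configuration management"])
for discipline, recs in hits.items():
    print(discipline, [r["id"] for r in recs])  # candidates to assess as risks to the current project
```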


6.2 Requirements Management

Requirements management activities apply to the management of all stakeholder expectations, customer requirements, and technical product requirements down to the lowest level product component requirements (hereafter referred to as expectations and requirements).

The Requirements Management Process is used to:

• Manage the product requirements identified, baselined, and used in the definition of the WBS model products during system design;
• Provide bidirectional traceability back to the top WBS model requirements; and
• Manage the changes to established requirement baselines over the life cycle of the system products.

6.2.1 Process Description

Figure 6.2-1 provides a typical flow diagram for the Requirements Management Process and identifies typical inputs, outputs, and activities to consider in addressing requirements management.

6.2.1.1 Inputs

There are several fundamental inputs to the Requirements Management Process.

Note: Requirements can be generated from nonobvious stakeholders and may not directly support the current mission and its objectives, but instead provide an opportunity to gain additional benefits or information that can support the Agency or the Nation. Early in the process, the systems engineer can help identify potential areas where the system can be used to collect unique information that is not directly related to the primary mission. Often outside groups are not aware of the system goals and capabilities until it is almost too late in the process.

• Requirements and stakeholder expectations are identified during the system design processes, primarily from the Stakeholder Expectation Definition Process and the Technical Requirements Definition Process.
• The Requirements Management Process must be prepared to deal with requirement change requests that can be generated at any time during the project life cycle or as a result of reviews and assessments as part of the Technical Assessment Process.
• TPM estimation/evaluation results from the Technical Assessment Process provide an early warning of

Figure 6.2-1 Requirements Management Process (activities: prepare to conduct requirements management; conduct requirements management; conduct expectations and requirements traceability; manage expectations and requirements changes; capture work products)


that will satisfy the applicable product-line life-cycle phase success criteria.

3. Do Requirements Satisfy Stakeholders: All relevant stakeholder groups identify and remove defects.

Requirements validation results are often a deciding factor in whether to proceed with the next process of Logical Decomposition or Design Solution Definition. The project team should be prepared to: (1) demonstrate that the project requirements are complete and understandable; (2) demonstrate that prioritized evaluation criteria are consistent with requirements and the operations and logistics concepts; (3) confirm that requirements and evaluation criteria are consistent with stakeholder needs; (4) demonstrate that operations and architecture concepts support mission needs, goals, objectives, assumptions, guidelines, and constraints; and (5) demonstrate that the process for managing change in requirements is established, documented in the project information repository, and communicated to stakeholders.

Managing Requirement Changes

Throughout Phases A and B, changes in requirements and constraints will occur. It is imperative that all changes be thoroughly evaluated to determine the impacts on the architecture, design, interfaces, ConOps, and higher and lower level requirements. Performing functional and sensitivity analyses will ensure that the requirements are realistic and evenly allocated. Rigorous requirements verification and validation ensure that the requirements can be satisfied and conform to mission objectives. All changes must be subjected to a review and approval cycle to maintain traceability and to ensure that the impacts are fully assessed for all parts of the system.

Once the requirements have been validated and reviewed in the System Requirements Review, they are placed under formal configuration control. Thereafter, any changes to the requirements must be approved by the Configuration Control Board (CCB). The systems engineer, project manager, and other key engineers usually participate in the CCB approval process to assess the impact of the change, including cost, performance, programmatic, and safety impacts.

The technical team should also ensure that the approved requirements are communicated in a timely manner to all relevant people. Each project should have already established the mechanism to track and disseminate the latest project information. Further information on Configuration Management (CM) can be found in Section 6.5.

Key Issues for Requirements Management

Requirements Changes

Effective management of requirements changes requires a process that assesses the impact of the proposed changes prior to approval and implementation of the change. This is normally accomplished through the use of the Configuration Management Process. In order for CM to perform this function, a baseline configuration must be documented and tools used to assess impacts to the baseline. Typical tools used to analyze change impacts are as follows:

• Performance Margins: This tool is a list of key performance margins for the system and the current status of each margin. For example, the propellant performance margin compares the propellant available against the propellant necessary to complete the mission. Changes should be assessed for their impact to performance margins.
• CM Topic Evaluators List: This list is developed by the project office to ensure that the appropriate persons are evaluating the changes and providing impacts to the change.
All changes need to be routed to the appropriate individuals to ensure that all impacts of the change have been identified. This list will need to be updated periodically.
• Risk System and Threats List: The risk system can be used to identify risks to the project and the cost, schedule, and technical aspects of each risk. Changes to the baseline can affect the consequences and likelihood of identified risks or can introduce new risks to the project. A threats list is normally used to identify the costs associated with all the risks for the project. Project reserves are used to mitigate the appropriate risks. Analysis of the reserves available versus the needs identified by the threats list assists in prioritizing reserve use.

The process for managing requirements changes needs to take into account the distribution of information related to the decisions made during the change process. The Configuration Management Process needs to communicate the requirements change decisions to the affected organizations. During a board meeting to approve a change, actions to update documentation need to be included as part of the change package. These actions should be tracked to ensure that affected documentation is updated in a timely manner.
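A minimal sketch of the kind of screening these tools support: a proposed change is checked against the current performance margins and the reserves remaining after the threats list, so the board sees quantified impacts before approval. The margin names, reserve figures, and thresholds are hypothetical.

```python
# Hypothetical current state of key margins (remaining fraction) and reserves.
margins = {"propellant": 0.12, "mass": 0.08, "power": 0.15}
cost_reserve_remaining = 4.0e6
threats_list_total = 2.5e6   # cost exposure of currently identified risks

def screen_change(margin_impacts, cost_impact, min_margin=0.05):
    """Flag issues a proposed change would create: margins driven below a floor,
    or a cost impact exceeding reserves not already committed against the threats list."""
    issues = []
    for name, delta in margin_impacts.items():
        remaining = margins.get(name, 0.0) - delta
        if remaining < min_margin:
            issues.append(f"{name} margin would fall to {remaining:.2%}")
    uncommitted_reserve = cost_reserve_remaining - threats_list_total
    if cost_impact > uncommitted_reserve:
        issues.append(f"cost impact ${cost_impact:,.0f} exceeds uncommitted reserve ${uncommitted_reserve:,.0f}")
    return issues

# A change that consumes 5% of propellant margin and $2.0M.
print(screen_change({"propellant": 0.05}, 2.0e6))
```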


Feedback to the Requirements Baseline

During development of the system components, it will be necessary to provide feedback to the requirements. This feedback is usually generated during the product design, validation, and verification processes. The feedback to the project will include design implementation issues that impact the interfaces or operations of the system. In many cases, the design may introduce constraints on how the component can be operated, maintained, or stored. This information needs to be communicated to the project team to evaluate the impact to the affected system operation or architecture. Each system component team will optimize its own component's design and operation. It is the systems engineering function to evaluate the impact of this optimization at the component level on the optimization of the entire system.

Requirements Creep

"Requirements creep" is the term used to describe the subtle way that requirements grow imperceptibly during the course of a project. The set of requirements tends to increase relentlessly in size during the course of development, resulting in a system that is more expensive and complex than originally intended. Often the changes are quite innocent, and what appear to be changes to a system are really enhancements in disguise. However, some requirements creep involves truly new requirements that did not exist, and could not have been anticipated, during the Technical Requirements Definition Process. These new requirements are the result of evolution, and if we are to build a relevant system, we cannot ignore them.

There are several techniques for avoiding or at least minimizing requirements creep:

• In the early requirements definition phase, flush out the conscious, unconscious, and undreamt-of requirements that might otherwise not be stated.
• Establish a strict process for assessing requirement changes as part of the Configuration Management Process.
• Establish official channels for submitting change requests. This will determine who has the authority to generate requirement changes and submit them formally to the CCB (e.g., the contractor-designated representative, project technical leads, customer/science team lead, or user).
• Measure the functionality of each requirement change request and assess its impact on the rest of the system. Compare this impact with the consequences of not approving the change. What is the risk if the change is not approved?
• Determine if the proposed change can be accommodated within the fiscal and technical resource budgets. If it cannot be accommodated within the established resource margins, then the change most likely should be denied.

6.2.1.3 Outputs

Typical outputs from the requirements management activities are:

• Requirements Documents: Requirements documents are submitted to the Configuration Management Process when the requirements are baselined. The official controlled versions of these documents are generally maintained in electronic format within the requirements management tool that has been selected by the project. In this way they are linked to the requirements matrix with all of its traceable relationships.
• Approved Changes to the Requirements Baselines: Approved changes to the requirements baselines are issued as an output of the Requirements Management Process after careful assessment of all the impacts of the requirements change across the entire product or system.
A single change can have a far-reaching ripple effect that may result in several requirement changes in a number of documents.
• Various Requirements Management Work Products: Requirements management work products are any reports, records, and nondeliverable outcomes of the Requirements Management Process. For example, the bidirectional traceability status would be one of the work products used in the verification and validation reports.

6.2.2 Requirements Management Guidance

6.2.2.1 Requirements Management Plan

The technical team should prepare a plan for performing the requirements management activities. This plan is normally part of the SEMP but can also stand alone. The plan should:


• Identify the relevant stakeholders who will be involved in the Requirements Management Process (e.g., those who may be affected by, or may affect, the product as well as the processes).
• Provide a schedule for performing the requirements management procedures and activities.
• Assign responsibility, authority, and adequate resources for performing the requirements management activities, developing the requirements management work products, and providing the requirements management services defined in the activities (e.g., staff, a requirements management database tool, etc.).
• Define the level of configuration management/data management control for all requirements management work products.
• Identify the training for those who will be performing the requirements management activities.

6.2.2.2 Requirements Management Tools

For small projects and products, the requirements can usually be managed using a spreadsheet program. However, larger programs and projects require the use of one of the available requirements management tools. In selecting a tool, it is important to define the project's procedure for specifying how the requirements will be organized in the requirements management database tool and how the tool will be used. It is possible, given modern requirements management tools, to create a requirements management database that can store and sort requirements data in multiple ways according to the particular needs of the technical team. The organization of the database is not a trivial exercise and has consequences on how the requirements data can be viewed for the life of the project. Organize the database so that it has all the views into the requirements information that the technical team is likely to need. Careful consideration should be given to how flowdown of requirements and bidirectional traceability will be represented in the database. Sophisticated requirements management database tools also have the ability to capture numerous requirement attributes in the tool's requirements matrix, including the requirements traceability and allocation links. For each requirement in the requirements matrix, the verification method(s), level, and phase are documented in the verification requirements matrix housed in the requirements management database tool (e.g., the tool associates the attributes of method, level, and phase with each requirement). It is important to make sure that the requirements management database tool is compatible with the verification and validation tools chosen for the project.
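For a small project managed in a spreadsheet-like structure, the sketch below shows one way to hold requirements with parent links and verification attributes, and to check bidirectional traceability (every child points to an existing parent, and every top-level requirement has at least one child). The identifiers and attributes are illustrative, not a prescribed schema.

```python
# Illustrative requirements matrix: id -> attributes, with parent links for flowdown.
requirements = {
    "SYS-001": {"text": "The observatory shall downlink science data daily.",
                "parent": None, "verification": ("Demonstration", "System", "Phase D")},
    "COMM-010": {"text": "The comm subsystem shall support an 8 hr/day downlink window.",
                 "parent": "SYS-001", "verification": ("Test", "Subsystem", "Phase D")},
    "GND-005": {"text": "The ground system shall schedule daily station contacts.",
                "parent": "SYS-001", "verification": ("Analysis", "Segment", "Phase C")},
}

def traceability_issues(reqs):
    """Report broken upward traces and top-level requirements with no downward allocation."""
    issues = []
    children_of = {rid: [] for rid in reqs}
    for rid, attrs in reqs.items():
        parent = attrs["parent"]
        if parent is not None:
            if parent not in reqs:
                issues.append(f"{rid} traces to missing parent {parent}")
            else:
                children_of[parent].append(rid)
    for rid, kids in children_of.items():
        if reqs[rid]["parent"] is None and not kids:
            issues.append(f"top-level requirement {rid} has no child requirements")
    return issues

print(traceability_issues(requirements))  # an empty list means the matrix is bidirectionally traceable
```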


6.3 Interface Management

The management and control of interfaces is crucial to successful programs or projects. Interface management is a process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams, etc.) and/or to define and maintain compliance among the products that must interoperate.

6.3.1 Process Description

Figure 6.3-1 provides a typical flow diagram for the Interface Management Process and identifies typical inputs, outputs, and activities to consider in addressing interface management.

6.3.1.1 Inputs

Typical inputs needed to understand and address interface management would include the following:

• System Description: This allows the design of the system to be explored and examined to determine where system interfaces exist. Contractor arrangements will also dictate where interfaces are needed.
• System Boundaries: Document physical boundaries, components, and/or subsystems, which are all drivers for determining where interfaces exist.
• Organizational Structure: Decide which organization will dictate interfaces, particularly when there is the need to jointly agree on shared interface parameters of a system. The program and project WBS will also provide interface boundaries.
• Boards Structure: The SEMP should provide insight into organizational interface responsibilities and drive out interface locations.
• Interface Requirements: The internal and external functional and physical interface requirements developed as part of the Technical Requirements Definition Process for the product(s).
• Interface Change Requests: These include changes resulting from program or project agreements or changes on the part of the technical team as part of the Technical Assessment Process.

6.3.1.2 Process Activities

During project Formulation, the ConOps of the product is analyzed to identify both external and internal interfaces. This analysis will establish the origin, destination, stimuli, and special characteristics of the interfaces that need to be documented and maintained. As the system structure and architecture emerge, interfaces will be added and existing interfaces will be changed and must be maintained. Thus, the Interface Management Process has a close relationship to other areas, such as requirements definition and configuration management, during this period.

Figure 6.3-1 Interface Management Process (activities: prepare or update interface management procedures; conduct interface management during system design and during product integration; conduct interface control; capture work products)


responsible for interfacing systems, end products, enabling products, and subsystems. The IWG has the responsibility to ensure accomplishment of the planning, scheduling, and execution of all interface activities. An IWG is typically a technical team with appropriate technical membership from the interfacing parties (e.g., the project, the contractor, etc.).

During product integration, interface management activities would support the review of integration and assembly procedures to ensure interfaces are properly marked and compatible with specifications and interface control documents. The interface management process has a close relationship to verification and validation. Interface control documentation and approved interface requirement changes are used as inputs to the Product Verification Process and the Product Validation Process, particularly where verification test constraints and interface parameters are needed to set the test objectives and test plans. Interface requirements verification is a critical aspect of the overall system verification.

6.3.1.3 Outputs

Typical outputs needed to capture interface management would include interface control documentation. This is the documentation that identifies and captures the interface information and the approved interface change requests. Types of interface documentation include the Interface Requirements Document (IRD), Interface Control Document/Drawing (ICD), Interface Definition Document (IDD), and Interface Control Plan (ICP). These outputs will then be maintained and approved using the Configuration Management Process and become a part of the overall technical data package for the project.

6.3.2 Interface Management Guidance

6.3.2.1 Interface Requirements Document

An interface requirement defines the functional, performance, electrical, environmental, human, and physical requirements and constraints that exist at a common boundary between two or more functions, system elements, configuration items, or systems. Interface requirements include both logical and physical interfaces. They include, as necessary, physical measurements, definitions of sequences of energy or information transfer, and all other significant interactions between items. For example, communication interfaces involve the movement and transfer of data and information within the system, and between the system and its environment. Proper evaluation of communications requirements involves definition of both the structural components of communications (e.g., bandwidth, data rate, distribution, etc.) and content requirements (what data/information is being communicated, what is being moved among the system components, and the criticality of this information to system functionality). Interface requirements can be derived from the functional allocation if function inputs and outputs have been defined. For example:
• If function F1 outputs item A to function F2, and
• Function F1 is allocated to component C1, and
• Function F2 is allocated to component C2,
• Then there is an implicit requirement that the interface between components C1 and C2 pass item A, whether item A is a liquid, a solid, or a message containing data.

The IRD is a document that defines all physical, functional, and procedural interface requirements between two or more end items, elements, or components of a system and ensures project hardware and software compatibility. An IRD is composed of physical and functional requirements and constraints imposed on hardware configuration items and/or software configuration items.
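The implicit-interface rule illustrated by the F1/F2 example above lends itself to direct automation. The sketch below (the function, item, and component names are assumed for illustration, not taken from the handbook) applies the rule to a small functional-flow and allocation table to derive candidate interface requirements for an IRD.

```python
from typing import Dict, List, Tuple

# Functional flows: (producing function, item transferred, consuming function)
flows: List[Tuple[str, str, str]] = [
    ("F1", "item A", "F2"),
    ("F2", "telemetry frame", "F3"),
]

# Allocation of functions to components
allocation: Dict[str, str] = {"F1": "C1", "F2": "C2", "F3": "C2"}

def derive_interface_requirements(flows, allocation):
    """Apply the rule: if one function sends an item to another and the two functions
    are allocated to different components, the component-to-component interface must
    pass that item."""
    derived = []
    for source_fn, item, dest_fn in flows:
        src, dst = allocation[source_fn], allocation[dest_fn]
        if src != dst:  # flows internal to a single component need no external interface
            derived.append(f"The {src}-to-{dst} interface shall pass {item}.")
    return derived

for req in derive_interface_requirements(flows, allocation):
    print(req)
# -> The C1-to-C2 interface shall pass item A.
```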
The purpose of the IRD is to control the interfaces between interrelated components of the system under development, as well as between the system under development and any external systems (either existing or under development) that comprise a total architecture. Interface requirements may be contained in the SRD until the point in the development process where the individual interfaces are determined. IRDs are useful when separate organizations are developing components of the system or when the system must levy requirements on other systems outside program/project control. During both Phase A and Phase B, multiple IRDs are drafted for different levels of interfaces. By SRR, draft IRDs would be complete for system-to-external-system interfaces (e.g., the shuttle to the International Space Station) and segment-to-segment interfaces (e.g., the shuttle to the launch pad). An IRD generic outline is described in Appendix L.

6.3.2.2 Interface Control Document or Interface Control Drawing

An interface control document or drawing details the physical interface between two system elements, including the number and types of connectors, electrical parameters, mechanical properties, and environmental constraints. The ICD identifies the design solution to the


6.0 Crosscutting <strong>Technical</strong> Managementinterface requirement. ICDs are useful when separateorganizations are developing design solutions to be adheredto at a particular interface.6.3.2.3 Interface Definition DocumentAn IDD is a unilateral document controlled by the enditemprovider, and it basically provides the details of theinterface for a design solution that is already established.This document is sometimes referred to as a “one-sidedICD.” The user of the IDD is provided connectors, electricalparameters, mechanical properties, environmentalconstraints, etc., of the existing design. The user mustthen design the interface of the system to be compatiblewith the already existing design interface.6.3.2.4 Interface Control PlanAn ICP should be developed to address the process forcontrolling identified interfaces and the related interfacedocumentation. Key content for the ICP is the list of interfacesby category and who owns the interface. TheICP should also address the configuration control forumand mechanisms to implement the change process (e.g.,Preliminary Interface Revision Notice (PIRN)/InterfaceRevision Notice (IRN)) for the documents.Typical Interface Management Checklistzz Use the generic outline provided when developingthe IRD. Define a “reserved” placeholder if a paragraphor section is not applicable.zz Ensure that there are two or more specificationsthat are being used to serve as the parent for theIRD specific requirements.zz Ensure that “shall” statements are used to definespecific requirements.zz Each organization must approve and sign the IRD.zz A control process must be established to managechanges to the IRD.zz Corresponding ICDs are developed based upon therequirements in the IRD.zz Confirm connectivity between the interface re-quirements and the Product Verification and ProductValidation Processes.zz Define the SEMP content to address interface man-agement.Each major program or project should include anzzICP to describe the how and what of interface managementproducts.138 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.4 <strong>Technical</strong> Risk ManagementThe <strong>Technical</strong> Risk Management Process is one of thecrosscutting technical management processes. Risk is definedas the combination of (1) the probability that a programor project will experience an undesired event and(2) the consequences, impact, or severity of the undesiredevent, were it to occur. The undesired event mightcome from technical or programmatic sources (e.g., acost overrun, schedule slippage, safety mishap, healthproblem, malicious activities, environmental impact,or failure to achieve a needed scientific or technologicalobjective or success criterion). Both the probabilityand consequences may have associated uncertainties.<strong>Technical</strong> risk management is an organized, systematicrisk-informed decisionmaking discipline that proactivelyidentifies, analyzes, plans, tracks, controls, communicates,documents, and manages risk to increasethe likelihood of achieving project goals. The <strong>Technical</strong>Risk Management Process focuses on project objectives,Key Concepts in <strong>Technical</strong> Risk Managementz z Risk: Risk is a measure of the inability to achieve overall program objectives within defined cost, schedule, and technicalconstraints and has two components: (1) the probability of failing to achieve a particular outcome and (2) theconsequences/impacts of failing to achieve that outcome.z z Cost Risk: This is the risk associated with the ability of the program/project to achieve its life-cycle cost objectives andsecure appropriate funding. Two risk areas bearing on cost are (1) the risk that the cost estimates and objectives arenot accurate and reasonable and (2) the risk that program execution will not meet the cost objectives as a result of afailure to handle cost, schedule, and performance risks.zzSchedule Risk: Schedule risks are those associated with the adequacy of the time estimated and allocated for the development,production, implementation, and operation of the system. Two risk areas bearing on schedule risk are (1)the risk that the schedule estimates and objectives are not realistic and reasonable and (2) the risk that program executionwill fall short of the schedule objectives as a result of failure to handle cost, schedule, or performance risks.zz<strong>Technical</strong> Risk: This is the risk associated with the evolution of the design and the production of the system of interestaffecting the level of performance necessary to meet the stakeholder expectations and technical requirements.The design, test, and production processes (process risk) influence the technical risk and the nature of the product asdepicted in the various levels of the PBS (product risk).z z Programmatic Risk: This is the risk associated with action or inaction from outside the project, over which the projectmanager has no control, but which may have significant impact on the project. These impacts may manifestthemselves in terms of technical, cost, and/or schedule. This includes such activities as: International Traffic in ArmsRequirements (ITAR), import/export control, partner agreements with other domestic or foreign organizations, congressionaldirection or earmarks, Office of Management and Budget direction, industrial contractor restructuring, externalorganizational changes, etc.zzHazard Versus Risk: Hazard is distinguished from risk. 
A hazard represents a potential for harm, while risk includes consideration of not only the potential for harm, but also the scenarios leading to adverse outcomes and the likelihood of these outcomes. In the context of safety, "risk" considers the likelihood of undesired consequences occurring.
• Probabilistic Risk Assessment (PRA): PRA is a scenario-based risk assessment technique that quantifies the likelihoods of various possible undesired scenarios and their consequences, as well as the uncertainties in the likelihoods and consequences. Traditionally, design organizations have relied on surrogate criteria such as system redundancy or system-level reliability measures, partly because the difficulties of directly quantifying actual safety impacts, as opposed to simpler surrogates, seemed insurmountable. Depending on the detailed formulation of the objectives hierarchy, PRA can be applied to quantify Technical Performance Measures (TPMs) that are very closely related to fundamental objectives (e.g., Probability of Loss of Crew (P(LOC))). PRA focuses on the development of a comprehensive scenario set, which has immediate application to identify key and candidate contributors to risk. In all but the simplest systems, this requires the use of models to capture the important scenarios, to assess consequences, and to systematically quantify scenario likelihoods. These models include reliability models, system safety models, simulation models, performance models, and logic models.


bringing to bear an analytical basis for risk management decisions and the ensuing management activities, and a framework for dealing with uncertainty.

Strategies for risk management include transferring performance risk, eliminating the risk, reducing the likelihood of undesired events, reducing the negative effects of the risk (i.e., reducing consequence severity), reducing uncertainties if warranted, and accepting some or all of the consequences of a particular risk. Once a strategy is selected, technical risk management ensures its successful implementation through planning and implementation of the risk tracking and controlling activities. Technical risk management focuses on risk that relates to technical performance. However, management of technical risk has an impact on the nontechnical risk by affecting budget, schedule, and other stakeholder expectations. This discussion of technical risk management is applicable to technical and nontechnical risk issues, but the focus of this section is on technical risk issues.

6.4.1 Process Description

Figure 6.4-1 provides a typical flow diagram for the Technical Risk Management Process and identifies typical inputs, activities, and outputs to consider in addressing technical risk management.

[Figure 6.4-1 Technical Risk Management Process: a flow diagram relating inputs (project risk management plan; technical risk issues; technical risk status measurements; technical risk reporting requirements) to activities (prepare a strategy to conduct technical risk management; identify technical risks; conduct technical risk assessment; prepare for technical risk mitigation; monitor the status of each technical risk periodically; implement technical risk mitigation and contingency action plans as triggered; capture work products) and outputs (technical risk mitigation and/or contingency actions; technical risk reports; work products of technical risk management)]

6.4.1.1 Inputs

The following are typical inputs to technical risk management:
• Plans and Policies: Risk management plan, risk reporting requirements, systems engineering management plan, form of technical data products, and policy input to metrics and thresholds.
• Technical Inputs: Technical performance measures, program alternatives to be assessed, technical issues, and current program baseline.
• Inputs Needed for Risk Analysis of Alternatives: Design information and relevant experience data.

6.4.1.2 Process Activities

Technical risk management is an iterative process that considers activity requirements, constraints, and priorities to:
• Identify and assess the risks associated with the implementation of technical alternatives;
• Analyze, prioritize, plan, track, and control risk and the implementation of the selected alternative;
• Plan, track, and control the risk and the implementation of the selected alternative;
• Implement contingency action plans as triggered;


• Communicate, deliberate, and document work products and the risk; and
• Iterate with previous steps in light of new information throughout the life cycle.

6.4.1.3 Outputs

Following are key technical risk outputs from activities:
• Plans and Policies: Baseline-specific plan for tracking and controlling risk
• Technical Outputs: Technical risk mitigation or contingency actions and tracking results, status findings, and emergent issues
• Outputs from Risk Analysis of Alternatives: Identified, analyzed, prioritized, and assigned risk; and risk analysis updates

6.4.2 Technical Risk Management Guidance

A widely used conceptualization of risk is the scenarios, likelihoods, and consequences concept as shown in Figures 6.4-2 and 6.4-3. The scenarios, along with consequences, likelihoods, and associated uncertainties, make up the complete risk triplet (risk as a set of triplets—scenarios, likelihoods, consequences). The triplet concept applies in principle to all risk types, and includes the information needed for quantifying simpler measures, such as expected consequences. Estimates of expected consequences (probability or frequency multiplied by consequences) alone do not adequately inform technical decisions. Scenario-based analyses provide more of the information that risk-informed decisions need. For example, a rare but severe risk contributor may warrant a response different from that warranted by a frequent, less severe contributor, even though both have the same expected consequences. In all but the simplest systems, this requires the use of detailed models to capture the important scenarios, to assess consequences, and to systematically quantify scenario likelihoods. For additional information on probabilistic risk assessments, refer to NPR 8705.3, Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners.

[Figure 6.4-2 Scenario-based modeling of hazards: an initiating event propagates through accident prevention layers (the system does not compensate; failure of controls) to an accident (mishap), and through accident mitigation layers (the system does not limit the severity of consequence) to a safety adverse consequence, with hazards acting on both layers]

[Figure 6.4-3 Risk as a set of triplets: each scenario carries a structure, a likelihood and its uncertainty, and a consequence severity and its uncertainty]
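To make the distinction above concrete, the following sketch (with invented scenarios and numbers) represents two risk contributors as triplets; both have the same expected consequence, yet the triplet form preserves the information needed to treat the rare-but-severe contributor differently from the frequent-but-mild one.

```python
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    scenario: str       # what can go wrong
    likelihood: float   # probability over the mission (illustrative numbers)
    consequence: float  # severity on some agreed consequence scale

    @property
    def expected_consequence(self) -> float:
        return self.likelihood * self.consequence

contributors = [
    RiskTriplet("Rare loss of primary propulsion", likelihood=0.001, consequence=1000.0),
    RiskTriplet("Frequent minor data dropouts",     likelihood=0.5,   consequence=2.0),
]

for c in contributors:
    print(f"{c.scenario}: expected consequence = {c.expected_consequence:.1f}")
# Both evaluate to 1.0, yet the first may demand a very different response,
# which is why the full triplet (scenario, likelihood, consequence) is retained.
```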


6.4.2.1 Role of Continuous Risk Management in Technical Risk Management

Continuous Risk Management (CRM) is a widely used technique within NASA, initiated at the beginning and continuing throughout the program life cycle to monitor and control risk. It is an iterative and adaptive process, which promotes the successful handling of risk. Each step of the paradigm builds on the previous step, leading to improved designs and processes through the feedback of information generated. Figure 6.4-4 suggests this adaptive feature of CRM.

[Figure 6.4-4 Continuous risk management: the cycle of Identify, Analyze, Plan, Track, and Control, with Communicate, Deliberate, and Document at its center]

A brief overview of CRM is provided below for reference:
• Identify: Identify program risk by identifying scenarios having adverse consequences (deviations from program intent). CRM addresses risk related to safety, technical performance, cost, schedule, and other risk that is specific to the program.
• Analyze: Estimate the likelihood and consequence components of the risk through analysis, including uncertainty in the likelihoods and consequences, and the timeframes in which risk mitigation actions must be taken.
• Plan: Plan the track and control actions. Decide what will be tracked, decision thresholds for corrective action, and proposed risk control actions.
• Track: Track program observables relating to TPMs (performance data, schedule variances, etc.), measuring how close the program performance is compared to its plan.
• Control: Given an emergent risk issue, execute the appropriate control action and verify its effectiveness.
• Communicate, Deliberate, and Document: This is an element of each of the previous steps. Focus on understanding and communicating all risk information throughout each program phase. Document the risk, risk control plans, and closure/acceptance rationale. Deliberate on decisions throughout the CRM process.

6.4.2.2 The Interface Between CRM and Risk-Informed Decision Analysis

Figure 6.4-5 shows the interface between CRM and risk-informed decision analysis. (See Subsection 6.8.2 for more on the Decision Analysis Process.) The following steps are a risk-informed Decision Analysis Process:
1. Formulate the objectives hierarchy and TPMs.
2. Propose and identify decision alternatives. Alternatives from this process are combined with the alternatives identified in the other systems engineering processes, including design solution, verification, and validation as well as production.
3. Perform risk analysis and rank decision alternatives.
4. Evaluate and recommend decision alternative.
5. Track the implementation of the decision.

These steps support good decisions by focusing first on objectives, next on developing decision alternatives with those objectives clearly in mind, and using decision alternatives that have been developed under other systems engineering processes. The later steps of the decision analysis interrelate heavily with the Technical Risk Management Process, as indicated in Figure 6.4-5.

The risk analysis of decision alternatives (third box) not only guides selection of a preferred alternative, it also carries out the "identify" and "analyze" steps of CRM. Selection of a preferred alternative is based in part on an understanding of the risks associated with that alternative. Alternative selection is followed immediately by a planning activity in which key implementation aspects are addressed, namely, risk tracking and control, including risk mitigation if necessary. Also shown conceptually on
Also shown conceptually onFigure 6.4-5 is the interface between risk managementand other technical and programmatic processes.Risk Analysis, Performing Trade Studies andRankingThe goal of this step is to carry out the kinds and amountsof analysis needed to characterize the risk for two purposes:ranking risk alternatives, and performing the“identify” and “analyze” steps of CRM.142 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


[Figure 6.4-5 The interface between CRM and risk-informed decision analysis: the risk-informed decision analysis steps (formulation of objectives hierarchy and TPMs; proposing and/or identifying decision alternatives; risk analysis of decision alternatives, performing trade studies and ranking; deliberating and recommending a decision alternative; tracking and controlling performance deviations) mapped onto the CRM cycle and related systems engineering processes, leading to decisionmaking and implementation of the decision alternative]

To support ranking, trade studies may be performed. TPMs that can affect the decision outcome are quantified including uncertainty as appropriate.

To support the "identify" and "analyze" steps of CRM, the risk associated with the preferred alternative is analyzed in detail. Refer to Figure 6.4-6. Risk analysis can take many forms, ranging from qualitative risk identification (essentially scenarios and consequences, without performing detailed quantification of likelihood, using techniques such as Failure Mode and Effects Analysis (FMEA) and fault trees), to highly quantitative methods such as PRA. The analysis stops when the technical case is made; if simpler, more qualitative methods suffice, then more detailed methods need not be applied. The process is then identified, planned for, and continuously checked. Selection and application of appropriate methods is discussed as follows.

6.4.2.3 Selection and Application of Appropriate Risk Methods

The nature and context of the problem, and the specific TPMs, determine the methods to be used. In some projects, qualitative methods are adequate for making decisions; in others, these methods are not precise enough to appropriately characterize the magnitude of the problem, or to allocate scarce risk reduction resources. The technical team needs to decide whether risk identification and judgment-based characterization are adequate, or whether the improved quantification of TPMs through more detailed risk analysis is justified. In making that determination, the technical team must balance the cost of risk analysis against the value of the additional information to be gained. The concept of "value of information" is central to making the determination of what analysis is appropriate and to what extent uncertainty needs to be quantified.
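The "value of information" idea can be illustrated with a stylized expected-value-of-perfect-information calculation. The states, probabilities, and payoffs below are invented purely for illustration; a real analysis would use the program's objectives hierarchy and TPMs and would typically consider imperfect rather than perfect information.

```python
# States of nature (e.g., whether a new thruster meets its performance requirement)
# with current-belief probabilities, and payoffs for two decision alternatives.
p_state = {"meets requirement": 0.6, "falls short": 0.4}

payoff = {  # utility of each alternative in each state (illustrative units)
    "baseline design": {"meets requirement": 50, "falls short": 50},
    "new technology":  {"meets requirement": 90, "falls short": 10},
}

def expected(alt):
    return sum(p_state[s] * payoff[alt][s] for s in p_state)

# Acting now, pick the alternative with the best expected payoff.
best_without_info = max(expected(a) for a in payoff)

# With perfect information, the best alternative is chosen separately in each state.
best_with_info = sum(p_state[s] * max(payoff[a][s] for a in payoff) for s in p_state)

evpi = best_with_info - best_without_info
print(f"Best expected payoff acting now: {best_without_info:.1f}")
print(f"Expected payoff with perfect information: {best_with_info:.1f}")
print(f"Value of (perfect) information: {evpi:.1f}")
# If the cost of the additional analysis exceeds this value, it is not worth buying.
```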


A review of the lessons learned files, data, and reports from previous similar projects can produce insights and information for hazard identification on a new project. This includes studies from similar systems and historical documents, such as mishap files and near-miss reports. The key to applying this technique is in recognizing what aspects of the old projects and the current project are analogous, and what data from the old projects are relevant to the current project. In some cases the use of quantitative methods can compensate for limited availability of information because these techniques pull the most value from the information that is available.

Types of Risk

As part of selecting appropriate risk analysis methods, it is useful to categorize types of risk. Broadly, risk can be related to cost, schedule, and technical performance. Many other categories exist, such as safety, organizational, management, acquisition, supportability, political, and programmatic risk, but these can be thought of as subsets of the broad categories. For example, programmatic risk refers to risk that affects cost and/or schedule, but not technical performance.

In the early stages of a risk analysis, it is typically necessary to screen contributors to risk to determine the drivers that warrant more careful analysis. For this purpose, conservative bounding approaches may be appropriate. Overestimates of risk significance will be corrected when more detailed analysis is performed. However, it can be misleading to allow bounding estimates to drive risk ranking. For this reason, analysis will typically iterate on a problem, beginning with screening estimates, using these to prioritize subsequent analysis, and moving on to a more defensible risk profile based on careful analysis of significant contributors. This is part of the iteration loop shown in Figure 6.4-6. (A small sketch of this screening approach follows the figure below.)

Qualitative Methods

Commonly used qualitative methods accomplish the following:
• Help identify scenarios that are potential risk contributors,
• Provide some input to more quantitative methods, and
• Support judgment-based quantification of TPMs.

Examples of Decisions
• Architecture A vs. Architecture B vs. Architecture C
• Making changes to existing systems
• Extending the life of existing systems
• Responding to operational occurrences in real time
• Contingency Plan A vs. Contingency Plan B
• Technology A vs. Technology B
• Changing requirements
• Prioritization
• Launch or no launch

[Figure 6.4-6 Risk analysis of decision alternatives: decision alternatives are scoped to determine the methods to be used from the spectrum of available qualitative and quantitative techniques; preliminary risk and performance measurement results are checked for robustness of the ranking/comparison, with iteration and additional uncertainty reduction if necessary per stakeholders, before deliberation and recommendation of a decision alternative]
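The screen-then-iterate approach described above can be sketched as a simple filter over bounding estimates. The contributors, bounding likelihoods, and threshold below are illustrative only; in practice the screening criteria and the drivers list would be revisited at each analysis iteration.

```python
# Candidate risk contributors with quick, conservative (bounding) likelihood estimates.
# The screen only decides which contributors merit detailed, realistic analysis;
# bounding numbers are deliberately pessimistic and are NOT used for final ranking.
contributors = {
    "Valve fails closed":        1e-3,
    "Software command timeout":  5e-2,
    "Micrometeoroid strike":     1e-6,
    "Operator procedure error":  2e-2,
}

SCREENING_THRESHOLD = 1e-4  # illustrative cutoff

drivers = {name: bound for name, bound in contributors.items() if bound >= SCREENING_THRESHOLD}
screened_out = {name: bound for name, bound in contributors.items() if bound < SCREENING_THRESHOLD}

print("Carry forward for detailed, realistic analysis:")
for name, bound in sorted(drivers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {name} (bounding estimate {bound:.0e})")

print("Screened out at this iteration (revisit if the design or data change):")
for name in screened_out:
    print(f"  {name}")
```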


Example Sources of Risk

In the "identify" activity, checklists such as this can serve as a reminder to analysts regarding areas in which risks have been identified previously.
• Unrealistic schedule estimates or allocation
• Unrealistic cost estimates or budget allocation
• Inadequate staffing or skills
• Uncertain or inadequate contractor capability
• Uncertain or inadequate vendor capability
• Insufficient production capacity
• Operational hazards
• Issues, hazards, and vulnerabilities that could adversely affect the program's technical effort
• Unprecedented efforts without estimates
• Poorly defined requirements
• No bidirectional traceability of requirements
• Infeasible design
• Inadequate configuration management
• Unavailable technology
• Inadequate test planning
• Inadequate quality assurance
• Requirements prescribing nondevelopmental products too low in the product tree
• Lack of concurrent development of enabling products for deployment, training, production, operations, support, or disposal

These qualitative methods are discussed briefly below.

Risk Matrices

"NxM" (most commonly 5x5) risk matrices provide assistance in managing and communicating risk. (See Figure 6.4-7.) They combine qualitative and semi-quantitative measures of likelihood with similar measures of consequences. The risk matrix is not an assessment tool, but can facilitate risk discussions. Specifically, risk matrices help to:
• Track the status and effects of risk-handling efforts, and
• Communicate risk status information.

[Figure 6.4-7 Risk matrix: a 5x5 grid with likelihood (1–5) on the vertical axis and consequences (1–5) on the horizontal axis, partitioned into LOW, MODERATE, and HIGH regions]

When ranking risk, it is important to use a common methodology. Different organizations, and sometimes projects, establish their own format. This can cause confusion and miscommunication. So before using a ranking system, the definitions should be clearly established and communicated via a legend or some other method. For the purposes of this handbook, a definition widely used by NASA, other Government organizations, and industry is provided.
• Low (Green) Risk: Has little or no potential for increase in cost, disruption of schedule, or degradation of performance. Actions within the scope of the planned program and normal management attention should result in controlling acceptable risk.
• Moderate (Yellow) Risk: May cause some increase in cost, disruption of schedule, or degradation of performance. Special action and management attention may be required to handle risk.
• High (Red) Risk: Likely to cause significant increase in cost, disruption of schedule, or degradation of performance. Significant additional action and high-priority management attention will be required to handle risk.

Limitations of Risk Matrices
• Interaction between risks is not considered. Each risk is mapped onto the matrix individually. (These risks can be related to each item using FMECA or a fault tree.)
• Inability to deal with aggregate risks (i.e., total risk).
• Inability to represent uncertainties. A risk is assumed to exist within one likelihood range and consequence range, both of which are assumed to be known.
• Fixed tradeoff between likelihood and consequence. Using the standardized 5x5 matrix, the significance of different levels of likelihood and consequence are fixed and unresponsive to the context of the program.
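A risk matrix is easy to encode once a program has published its legend. The sketch below uses an illustrative partition of a 5x5 grid into LOW, MODERATE, and HIGH; the actual cell-by-cell assignments are program specific and should come from the program's own legend, not from this example.

```python
def classify(likelihood: int, consequence: int) -> str:
    """Classify a risk on a 5x5 matrix into LOW / MODERATE / HIGH.
    Both inputs are ordinal scores from 1 (lowest) to 5 (highest).
    The partition below is illustrative only; real programs define their own
    legend and should publish it alongside the matrix."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be integers from 1 to 5")
    score = likelihood + consequence
    if score >= 8:
        return "HIGH"
    if score >= 6:
        return "MODERATE"
    return "LOW"

# Print the whole matrix, highest likelihood row first, as it is usually drawn.
for likelihood in range(5, 0, -1):
    row = [classify(likelihood, consequence)[:4].ljust(4) for consequence in range(1, 6)]
    print(f"L={likelihood}: " + "  ".join(row))
```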


FMECAs, FMEAs, and Fault Trees

FMEA; Failure Modes, Effects, and Criticality Analysis (FMECA); and fault trees are methodologies designed to identify potential failure modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance, and to identify and carry out corrective actions to address the most serious concerns. These methodologies focus on the hardware components as well as processes that make up the system. According to MIL-STD-1629, Failure Mode and Effects Analysis, FMECA is an ongoing procedure by which each potential failure in a system is analyzed to determine the results or effects thereof on the system, and to classify each potential failure mode according to its consequence severity. A fault tree evaluates the combinations of failures that can lead to the top event of interest. (See Figure 6.4-8; a small evaluation sketch follows the discussion of quantitative methods below.)

[Figure 6.4-8 Example of a fault tree: basic failure events (e.g., "Pressure Transducer 1 Fails," "Controller Fails") combine through "and"/"or" logic gates into intermediate failure events (e.g., "Leak Not Detected") and ultimately into the top event of interest]

Quantitative and Communication Methods

PRA is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance.

Risk management involves prevention of (reduction of the frequency of) adverse scenarios (ones with undesirable consequences) and promotion of favorable scenarios. This requires understanding the elements of adverse scenarios so that they can be prevented and the elements of successful scenarios so that they can be promoted.

PRA quantifies risk metrics. "Risk metric" refers to the kind of quantities that might appear in a decision model: such things as the frequency or probability of consequences of a specific magnitude or perhaps expected consequences. Risk metrics of interest for NASA include probability of loss of vehicle for some specific mission type, probability of mission failure, and probability of large capital loss. Figures of merit such as system failure probability can be used as risk metrics, but the phrase "risk metric" ordinarily suggests a higher level, more consequence-oriented figure of merit. The resources needed for PRA are justified by the importance of the consequences modeled or until the cost in time and resources of further analysis is no longer justified by the expected benefits.

The NASA safety and risk directives determine the scope and the level of rigor of the risk assessments. NPR 8715.3, NASA General Safety Program Requirements, assigns the project a priority ranking based on its consequence category and other criteria. NPR 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects, then determines the scope and the level of rigor and details for the assessment based on the priority ranking and the level of design maturity.
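For the fault tree sketch promised above, the following minimal evaluator computes a top-event probability from "and"/"or" gates, assuming independent basic events. The tree structure loosely echoes the labels in Figure 6.4-8, but the event names and probabilities are invented; production fault tree tools also handle shared events, common-cause failures, and uncertainty, which this sketch does not.

```python
from typing import List, Union

class Basic:
    """A basic failure event with an assumed (independent) probability of occurrence."""
    def __init__(self, name: str, prob: float):
        self.name, self.prob = name, prob
    def probability(self) -> float:
        return self.prob

class Gate:
    """An AND or OR gate combining child events or gates."""
    def __init__(self, name: str, kind: str, children: List[Union["Gate", Basic]]):
        assert kind in ("AND", "OR")
        self.name, self.kind, self.children = name, kind, children
    def probability(self) -> float:
        probs = [c.probability() for c in self.children]
        if self.kind == "AND":          # all inputs must fail
            p = 1.0
            for q in probs:
                p *= q
            return p
        p_none = 1.0                    # OR: at least one input fails
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

# Illustrative tree (event names and numbers are invented):
leak_not_detected = Gate("Leak not detected", "AND", [
    Basic("Pressure transducer 1 fails", 1e-3),
    Basic("Pressure transducer 2 fails", 1e-3),
])
top = Gate("Loss of leak protection", "OR", [
    leak_not_detected,
    Basic("Controller fails", 5e-4),
])
print(f"Top event probability ~ {top.probability():.2e}")
```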


Quantification

TPMs are quantified for each alternative and used to quantify an overall performance index or an overall measure of effectiveness for each alternative. These results are then used for ranking alternatives.

Bounding approaches are often used for initial screening of possible risk contributors. However, realistic assessments must ultimately be performed on the risk drivers. Bounding approaches are inappropriate for ranking alternatives because they bias each TPM in which they are applied, and are very difficult to do at a quantitatively consistent level from one analysis to the next.

Because different tools employ different simplifications and approximations, it is difficult to compare analysis results in a consistent manner if they are based on different tools or done by different analysts. These sources of inconsistency need to be considered when the work is planned and when the results are applied. Vetting risk and TPM results with these factors in mind is one benefit of deliberation (discussed below).

Consideration of Uncertainty Reduction Measures

In some cases, the preliminary ranking of alternatives will not be robust. A "robust" ranking is one that is not sensitive to small changes in model parameters or assumptions. As an example, suppose that differences in TPMs of different decision alternatives are sufficiently small that variations of key parameters within the stated uncertainty bounds could change the ranking. This could arise in a range of decision situations, including architecture decisions and risk management decisions for a given architecture. In the latter case, the alternatives result in different risk mitigation approaches.

In such cases, it may be worthwhile to invest in work to reduce uncertainties. Quantification of the "value of information" can help the decisionmaker determine whether uncertainty reduction is an efficient use of resources.

Deliberation and Recommendation of Decision Alternative

Deliberation

Deliberation is recommended in order to make use of collective wisdom to promote selection of an alternative for actual implementation, or perhaps, in the case of complex and high-stakes decisions, to recommend a final round of trade studies or uncertainty reduction efforts, as suggested by the analysis arrow in Figure 6.4-9.

[Figure 6.4-9 Deliberation: risk and TPM results are presented to stakeholders; if additional uncertainty reduction or refinement of the decision alternatives is needed, the analysis iterates; otherwise a recommendation is furnished to the decisionmaker, the basis is captured, the risk management plan is developed or updated, and metrics and monitoring thresholds are refined before decisionmaking and implementation of the decision alternative]

Capturing the Preferred Alternative and the Basis for Its Selection

Depending on the level at which this methodology is being exercised (project level, subtask level, etc.), the technical team chooses an alternative, basing the choice on deliberation to the extent appropriate. The decision itself is made by appropriate authority inside of the systems engineering processes. The purpose of calling out


6.0 Crosscutting <strong>Technical</strong> Managementthis step is to emphasize that key information about thealternative needs to be captured and that this key informationincludes the perceived potential program vulnerabilitiesthat are input to the “planning” activity withinCRM. By definition, the selection of the alternative isbased at least in part on the prospective achievement ofcertain values of the TPMs. For purposes of monitoringand implementation, these TPM values help to definesuccess, and are key inputs to the determination of monitoringthresholds.Planning <strong>Technical</strong> Risk Management of theSelected AlternativeAt this point, a single alternative has been chosen. Duringanalysis, the risk of each alternative will have beenevaluated for purposes of TPM quantification, but detailedrisk management plans will not have been drawnup. At this stage, detailed planning for technical riskmanagement of the selected alternative takes place anda formal risk management plan is drafted. In the planningphase:zz Provisional decisions are made on risk control actions(eliminate, mitigate, research, watch, or accept);zz Observables are determined for use in measurementof program performance;zz Thresholds are determined for the observables suchthat nonexceedance of the thresholds indicates satisfactoryprogram performance;zz Protocols are determined that guide how often observ-ables are to be measured, what to do when a thresholdis exceeded, how often to update the analyses, decisionauthority, etc.; andzz Responsibility for the risk tracking is assigned.General categories of risk control actions from NPR8000.4, Risk Management Procedural Requirements aresummarized here. Each identified and analyzed risk canbe managed in one of five ways:zz Eliminate the risk,zz Mitigate the risk,zz Research the risk,zz Watch the risk, orzz Accept the risk.Steps should be taken to eliminate or mitigate the risk ifit is well understood and the benefits realized are commensuratewith the cost. Benefits are determined usingthe TPMs from the program’s objectives hierarchy. Theconsequences of mitigation alternatives need to be analyzedto ensure that they do not introduce unwarrantednew contributions to risk.If mitigation is not justified, other activities are considered.Suppose that there is substantial uncertainty regardingthe risk. For example, there may be uncertaintyin the probability of a scenario or in the consequences.This creates uncertainty in the benefits of mitigation,such that a mitigation decision cannot be made withconfidence. In this case, research may be warranted toreduce uncertainty and more clearly indicate an appropriatechoice for the control method. Research is only aninterim measure, eventually leading either to risk mitigationor to acceptance.If neither risk mitigation nor research is justified and theconsequence associated with the risk is small, then it mayneed to be accepted. The risk acceptance process considersthe likelihood and the severity of consequences.NPR 8000.4 delineates the program level with authorityto accept risk and requires accepted risk to be reviewedperiodically (minimum of every 6 months) to ensurethat conditions and assumptions have not changed requiringthe risk acceptance to be reevaluated. These reviewsshould take the form of quantitative and qualitativeanalyses, as appropriate.The remaining cases are those in which neither risk mitigationnor research are justified, and the consequence associatedwith the risk is large. 
If there is large uncertaintyin the risk, then it may need to be watched. This allowsthe uncertainty to reduce naturally as the program progressesand knowledge accumulates, without a researchprogram targeting that risk. As with research, watchingis an interim measure, eventually leading either to riskmitigation or to acceptance, along guidelines previouslycited.Effective PlanningThe balance of this subsection is aimed primarily at ensuringthat the implementation plan for risk monitoringis net beneficial.A good plan has a high probability of detecting significantdeviations from program intent in a timely fashion,without overburdening the program. In order to accomplishthis, a portfolio of observables and thresholdsneeds to be identified. Selective plan implementationthen checks for deviations of actual TPM values from148 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


planned TPM values, and does so in a way that adds net value by not overburdening the project with reporting requirements. Elements of the plan include financial and progress reporting requirements, which are somewhat predetermined, and additional program-specific observables, audits, and program reviews.

The selection of observables and thresholds should have the following properties:
• Measurable parameters (direct measurement of the parameter or of related parameters that can be used to calculate the parameter) exist to monitor system performance against clearly defined, objective thresholds;
• The monitoring program is set up so that, when a threshold is exceeded, it provides timely indication of performance issues; and
• The program burden associated with the activity is the minimum needed to satisfy the above.

For example, probability of loss of a specific mission cannot be directly measured, but depends on many quantities that can be measured up to a point, such as lower level reliability and availability metrics.

Monitoring protocols are established to clarify requirements, assign responsibility, and establish intervals for monitoring. The results of monitoring are collected and analyzed, and responses are triggered if performance thresholds are exceeded. These protocols also determine when the analyses must be updated. For example, technical risk management decisions should be reassessed with analysis if the goals of the program change. Due to the long lead time required for the high-technology products required by NASA programs, program requirements often change before the program completes its life cycle. These changes may include technical requirements, budget or schedule, risk tolerance, etc.

Tracking and Controlling Performance Deviations

As shown in Figure 6.4-10, tracking is the process by which parameters are observed, compiled, and reported according to the risk management plan. Risk mitigation/control is triggered when a performance threshold is exceeded, when risk that was assumed to be insignificant is found to be significant, or when risk that was not addressed during the analyses is discovered. Control may also be required if there are significant changes to the program. The need to invoke risk control measures in light of program changes is determined in the risk management plan. Alternatives are proposed and analyzed, and a preferred alternative is chosen based on the performance of the alternatives with respect to the TPM, sensitivity and uncertainty analyses, and deliberation by the stakeholders. The new preferred alternative is then subjected to planning, tracking, and control.

[Figure 6.4-10 Performance monitoring and control of deviations: execution of the chosen decision alternative is tracked by monitoring performance based on TPMs, thresholds, and other decision rules from the plan; intervention (control) occurs if performance thresholds are exceeded, iterating the continuous risk management process]

During the planning phase, control alternatives were proactively conceived before required. Once a threshold is triggered, a risk control action (as described in Subsection 6.4.2.3) is required. At this point, there may be considerably more information available to the decisionmaker than existed when the control alternatives were proposed.
Therefore, new alternatives or modifications of existing alternatives should be considered in addition to the existing alternatives by iterating this technical risk management process.

Figure 6.4-11 shows an example of tracking and controlling performance by tracking TPM margins against predetermined thresholds. At a point in time corresponding with the vertical break, the TPM's margin is less than the required margin. At this point, the alternative was changed, such that the margin and margin requirement increased.
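In the spirit of the margin management method just described, the following sketch compares a TPM margin against its required margin at successive milestones and flags when a control action (such as the replanning shown in Figure 6.4-11) would be triggered. The TPM, milestones, and numbers are illustrative only.

```python
# Track a TPM margin against a required margin and flag when a control action is needed.
# The TPM (mass margin, in percent), milestones, and values below are invented.
required_margin = 15.0   # percent margin required at this point in the life cycle

history = [              # (review milestone, current margin in percent)
    ("SRR", 28.0),
    ("SDR", 22.0),
    ("PDR", 16.5),
    ("CDR", 12.0),        # falls below the requirement here
]

for milestone, margin in history:
    if margin >= required_margin:
        status = "OK"
    else:
        status = "BELOW REQUIRED MARGIN - trigger risk control action (e.g., replanning)"
    print(f"{milestone}: margin {margin:.1f}% vs required {required_margin:.1f}% -> {status}")
```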


[Figure 6.4-11 Margin management method: a plot of TPM margin over time against the original margin requirement; when the margin falls below the requirement, replanning occurs, producing a discontinuity induced by the shift to a new allocation and a new margin requirement]

Technical risk management is not exited until the program terminates, although the level of activity varies according to the current position of the activity in the life cycle. The main outputs are the technical risk reports, including risk associated with proposed alternatives, risk control alternatives, and decision support data. Risk control alternatives are fed back to technical planning as more information is learned about the alternatives' risk. This continues until the risk management plan is established. This learning process also produces alternatives, issues, or problems and supporting data that are fed back into the project. Once a project baseline is chosen, technical risk management focuses on measuring the deviation of project risk from this baseline, and driving decision support requests based on these measurements.


6.5 Configuration Management

Configuration Management is a management discipline applied over the product's life cycle to provide visibility into and to control changes to performance and functional and physical characteristics. CM ensures that the configuration of a product is known and reflected in product information, that any product change is beneficial and is effected without adverse consequences, and that changes are managed.

CM reduces technical risks by ensuring correct product configurations, distinguishes among product versions, ensures consistency between the product and information about the product, and avoids the embarrassment of stakeholder dissatisfaction and complaint. NASA adopts the CM principles as defined by ANSI/EIA 649, NASA methods of implementation as defined by NASA CM professionals, and as approved by NASA management. When applied to the design, fabrication/assembly, system/subsystem testing, integration, operational and sustaining activities of complex technology items, CM represents the "backbone" of the enterprise structure. It instills discipline and keeps the product attributes and documentation consistent. CM enables all stakeholders in the technical effort, at any given time in the life of a product, to use identical data for development activities and decisionmaking. CM principles are applied to keep the documentation consistent with the approved engineering, and to ensure that the product conforms to the functional and physical requirements of the approved design.

6.5.1 Process Description

Figure 6.5-1 provides a typical flow diagram for the Configuration Management Process and identifies typical inputs, outputs, and activities to consider in addressing CM.

[Figure 6.5-1 CM Process: a flow diagram relating inputs (project configuration management plan; engineering change proposals; expectation, requirements, and interface documents; approved requirements baseline changes; designated configuration items to be controlled) to activities (prepare a CM strategy; identify baselines to be under configuration control; manage configuration change control; maintain the status of configuration documentation; conduct configuration audits; capture work products) and outputs (list of configuration items under control; current baselines; configuration management reports and work products)]

6.5.1.1 Inputs

The required inputs for this process are:
• CM plan,
• Work products to be controlled, and
• Proposed baseline changes.

6.5.1.2 Process Activities

There are five elements of CM (see Figure 6.5-2):


• Configuration planning and management,
• Configuration identification,
• Configuration change management,
• Configuration Status Accounting (CSA), and
• Configuration verification.

[Figure 6.5-2 Five elements of configuration management: configuration planning and management, configuration identification, configuration change management, configuration status accounting, and configuration verification]

CM Planning and Management

CM planning starts at a program's or project's inception. The CM office must carefully weigh the value of prioritizing resources into CM tools or into CM surveillance of the contractors. Reviews by the Center Configuration Management Organization (CMO) are warranted and will cost resources and time, but the correction of systemic CM problems before they erupt into losing configuration control is always preferable to explaining why incorrect or misidentified parts are causing major problems in the program/project.

One of the key inputs to preparing for CM implementation is a strategic plan for the project's complete CM process. This is typically contained in a CM plan. See Appendix M for an outline of a typical CM plan. This plan has both internal and external uses:
• Internal: It is used within the project office to guide, monitor, and measure the overall CM process. It describes both the CM activities planned for future acquisition phases and the schedule for implementing those activities.
• External: The CM plan is used to communicate the CM process to the contractors involved in the program. It establishes consistent CM processes and working relationships.

The CM plan may be a stand-alone document, or it may be combined with other program planning documents. It should describe the criteria for each technical baseline creation, technical approvals, and audits.

Configuration Identification

Configuration identification is the systematic process of selecting, organizing, and stating the product attributes. Identification requires unique identifiers for a product and its configuration documentation. The CM activity associated with identification includes selecting the Configuration Items (CIs), determining CIs' associated configuration documentation, determining the appropriate change control authority, issuing unique identifiers for both CIs and CI documentation, releasing configuration documentation, and establishing configuration baselines.

NASA has four baselines, each of which defines a distinct phase in the evolution of a product design. The baseline identifies an agreed-to description of attributes of a CI at a point in time and provides a known configuration to which changes are addressed. Baselines are established by agreeing to (and documenting) the stated definition of a CI's attributes. The approved "current" baseline defines the basis of the subsequent change. The system specification is typically finalized following the SRR.
The functional baseline is establishedat the SDR and will usually transfer to <strong>NASA</strong>’scontrol at that time.The four baselines (see Figure 6.5-3) normally controlledby the program, project, or Center are the following:z z Functional Baseline: The functional baseline is theapproved configuration documentation that describesa system’s or top-level CI’s performance requirements(functional, interoperability, and interface characteristics)and the verification required to demonstratethe achievement of those specified characteristics. Thefunctional baseline is controlled by <strong>NASA</strong>.z z Allocated Baseline: The allocated baseline is the approvedperformance-oriented configuration documentationfor a CI to be developed that describes thefunctional and interface characteristics that are allocatedfrom a higher level requirements documentor a CI and the verification required to demonstrateachievement of those specified characteristics. The allocatedbaseline extends the top-level performance


requirements of the functional baseline to sufficient detail for initiating manufacturing or coding of a CI. The allocated baseline is usually controlled by the design organization until all design requirements have been verified. The allocated baseline is typically established at the successful completion of the PDR. Prior to CDR, NASA normally reviews design output for conformance to design requirements through incremental deliveries of engineering data. NASA control of the allocated baseline occurs through review of the engineering deliveries as data items.
• Product Baseline: The product baseline is the approved technical documentation that describes the configuration of a CI during the production, fielding/deployment, and operational support phases of its life cycle. The established product baseline is controlled as described in the configuration management plan that was developed during Phase A. The product baseline is typically established at the completion of the CDR. The product baseline describes:
  ▶ Detailed physical or form, fit, and function characteristics of a CI;
  ▶ The selected functional characteristics designated for production acceptance testing; and
  ▶ The production acceptance test requirements.
• As-Deployed Baseline: The as-deployed baseline occurs at the ORR. At this point, the design is considered to be functional and ready for flight. All changes will have been incorporated into the documentation.

[Figure 6.5-3 Evolution of technical baseline: the functional baseline is established around SDR, the allocated baseline around PDR, the product baseline around CDR, and the as-deployed baseline at ORR, as the design progresses from concept through design-to and build-to specifications to the deployed product system]
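As a rough picture of what "establishing a baseline" means in data terms, the sketch below freezes a configuration item's documentation set at named milestones and requires an approval flag for post-baseline changes. The class, field names, and approval step are illustrative only and are not a description of any specific NASA or Center CM system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ConfigurationItem:
    ci_id: str
    documents: Dict[str, str] = field(default_factory=dict)   # document number -> current revision
    baselines: List[Dict[str, str]] = field(default_factory=list)

    def establish_baseline(self, name: str) -> None:
        """Freeze the current documentation set as a named baseline (e.g., at SDR, PDR, CDR, ORR)."""
        self.baselines.append({"name": name, **dict(self.documents)})

    def change(self, doc: str, new_rev: str, approved: bool) -> None:
        """Changes after a baseline is established go through change control."""
        if self.baselines and not approved:
            raise PermissionError(f"Change to {doc} requires change-board approval once baselined")
        self.documents[doc] = new_rev

ci = ConfigurationItem("CI-042", {"SYS-SPEC-001": "Rev A"})
ci.establish_baseline("Functional baseline (SDR)")
ci.change("SYS-SPEC-001", "Rev B", approved=True)
ci.establish_baseline("Allocated baseline (PDR)")
print([b["name"] for b in ci.baselines])
```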


Configuration Change Management
Configuration change management is a process to manage approved designs and the implementation of approved changes. Configuration change management is achieved via the systematic proposal, justification, and evaluation of proposed changes, followed by incorporation of approved changes and verification of implementation. Implementing configuration change management in a given program requires unique knowledge of the program objectives and requirements. The first step establishes a robust and well-disciplined internal NASA Configuration Control Board (CCB) system, which is chaired by someone with program change authority. CCB members represent the stakeholders with authority to commit the team they represent. The second step creates configuration change management surveillance of the contractor's activity. The CM office advises the NASA program or project manager to achieve a balanced configuration change management implementation that suits the unique program/project situation. See Figure 6.5-4 for an example of a typical configuration change management control process.

Types of Configuration Change Management Changes
• Engineering Change: An engineering change is an iteration in the baseline (draft or established). Changes can be major or minor. They may or may not include a specification change. Changes affecting an external interface must be coordinated and approved by all stakeholders affected.
  ▶ A "major" change is a change to the baseline configuration documentation that has significant impact (i.e., requires retrofit of delivered products or affects the baseline specification, cost, safety, compatibility with interfacing products, or operator or maintenance training).
  ▶ A "minor" change corrects or modifies configuration documentation or processes without impact to the interchangeability of products or system elements in the system structure.
• Waiver: A waiver is a documented agreement intentionally releasing a program or project from meeting a requirement. (Some Centers use deviations prior to Implementation and waivers during Implementation.) Authorized waivers do not constitute a change to a baseline.
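To make the major/minor/waiver distinctions above concrete, here is a small illustrative sketch of how a project-specific tool might classify an incoming change request; the criteria flags are hypothetical stand-ins for the impact assessments a real CCB would perform, not an Agency rule set.

def classify_change(requires_retrofit: bool,
                    affects_spec_cost_safety_or_interfaces: bool,
                    affects_training: bool,
                    releases_requirement: bool) -> str:
    """Classify a proposed change using the definitions given above (illustrative only)."""
    if releases_requirement:
        # A documented release from meeting a requirement is a waiver,
        # not a change to the baseline.
        return "waiver"
    if requires_retrofit or affects_spec_cost_safety_or_interfaces or affects_training:
        # Significant impact on delivered products, the baseline specification, cost,
        # safety, interfacing products, or operator/maintenance training.
        return "major engineering change"
    # Corrects or modifies documentation or processes without affecting interchangeability.
    return "minor engineering change"

# Example: a documentation-only correction.
print(classify_change(False, False, False, False))   # -> minor engineering change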
Configuration Status Accounting
Configuration Status Accounting (CSA) is the recording and reporting of configuration data necessary to manage CIs effectively. An effective CSA system provides timely and accurate configuration information such as:
• Complete current and historical configuration documentation and unique identifiers.
• Status of proposed changes, deviations, and waivers from initiation to implementation.
• Status and final disposition of identified discrepancies and actions identified during each configuration audit.

Some useful purposes of the CSA data include:
• An aid for proposed change evaluations, change decisions, investigations of design problems, warranties, and shelf-life calculations.
• Historical traceability.
• Software trouble reporting.
• Performance measurement data.

The following are critical functions or attributes to consider if designing or purchasing software to assist with the task of managing configuration:
• Ability to share data in real time with internal and external stakeholders securely;
• Version control and comparison (track the history of an object or product);
• Secure user checkout and check-in;
• Tracking capabilities for gathering metrics (i.e., time, date, who, time in phases, etc.);
• Web based;
• Notification capability via e-mail;
• Integration with other databases or legacy systems;
• Compatibility with required support contractors and/or suppliers (i.e., can accept data from a third party as required);
• Integration with drafting and modeling programs as required;
• Provision of a neutral-format viewer for users;
• License agreement that allows for multiple users within an agreed-to number;
• Workflow and life-cycle management;
• Limited customization;
• Migration support for software upgrades;
• User friendly;
• Consideration for users with limited access;
• Ability to attach standard-format files from the desktop;
• Workflow capability (i.e., route a CI as required based on a specific set of criteria); and
• Capability to act as the one and only source for released information.
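As a further illustration, the sketch below shows how the status-accounting information listed above (the status of changes, deviations, and waivers from initiation to implementation) might be captured so that a CSA summary can be produced on demand; the record fields and status values are assumptions for illustration, not a prescribed schema.

from dataclasses import dataclass

# Illustrative life-cycle states for a change, deviation, or waiver.
STATUSES = ("initiated", "evaluated", "dispositioned", "implemented")

@dataclass
class ChangeRecord:
    number: str          # unique identifier assigned when the request enters CSA
    ci_id: str           # configuration item affected
    kind: str            # "change", "deviation", or "waiver"
    status: str          # one of STATUSES

def csa_report(records):
    """Summarize how many requests are in each state, for periodic CSA reporting."""
    summary = {status: 0 for status in STATUSES}
    for record in records:
        summary[record.status] += 1
    return summary

records = [
    ChangeRecord("ECR-001", "CI-0042", "change", "implemented"),
    ChangeRecord("ECR-002", "CI-0042", "change", "evaluated"),
    ChangeRecord("WVR-003", "CI-0017", "waiver", "dispositioned"),
]
print(csa_report(records))   # e.g., {'initiated': 0, 'evaluated': 1, 'dispositioned': 1, 'implemented': 1}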


[Figure 6.5-4 Typical change control process: a change request flows from the originator through the configuration management organization, the evaluators, the responsible office, and the Configuration Control Board to the actionees; the request is prepared and numbered, evaluated, consolidated and presented to the CCB, dispositioned by the CCB chair, issued as a directive, implemented, checked, and finally released, stored, and distributed as required]
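The figure's routing can also be read as an ordered sequence of steps that a change request passes through. The sketch below enumerates a simplified, linear version of that flow; the role names and ordering are a simplification of Figure 6.5-4 for illustration, not a definitive encoding of it.

# Simplified, linear rendering of the change control flow in Figure 6.5-4.
CHANGE_REQUEST_FLOW = [
    ("originator", "prepare the change request and send it to the CM organization"),
    ("CM organization", "check format/content, assign a number, enter into CSA, determine evaluators"),
    ("evaluators", "evaluate the change package"),
    ("responsible office", "consolidate evaluations, recommend a disposition, present to the CCB"),
    ("CCB", "disposition the change request and assign action items"),
    ("CM organization", "finalize and release the CCB directive, update CSA, track action items"),
    ("actionees", "complete assigned actions; update documents, hardware, or software"),
    ("CM organization", "perform final checks, release per directive, store and distribute"),
]

def next_step(current_index: int):
    """Return the role and action that follow the given step, or None when the flow is complete."""
    if current_index + 1 < len(CHANGE_REQUEST_FLOW):
        return CHANGE_REQUEST_FLOW[current_index + 1]
    return None

print(next_step(0))  # -> ('CM organization', 'check format/content, ...')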


Configuration Verification
Configuration verification is accomplished by inspecting documents, products, and records; reviewing procedures, processes, and systems of operation to verify that the product has achieved its required performance requirements and functional attributes; and verifying that the product's design is documented. This is sometimes divided into functional and physical configuration audits. (See Section 6.7 for more on technical reviews.)

6.5.1.3 Outputs
NPR 7120.5 defines a project's life cycle in progressive phases. Beginning with Pre-Phase A, these phases are grouped under the headings of Formulation and Implementation. Approval is required to transition between phases. Key Decision Points (KDPs) define the transitions between phases. CM plays an important role in determining whether a KDP has been met. Major outputs of CM are procedures, approved baseline changes, configuration status, and audit reports.

6.5.2 CM Guidance
6.5.2.1 What Is the Impact of Not Doing CM?
Not doing CM may result in a project being plagued by confusion, inaccuracies, low productivity, and unmanageable configuration data.

Warning Signs/Red Flags (How Do You Know When You're in Trouble?)
General warning signs of an improper implementation of CM include the following:
• Failure of the program to define the "top-level" technical requirement ("We don't need a spec").
• Failure of the program to recognize the baseline activities that precede and follow design reviews.
• The program office reduces the time to evaluate changes to one that is impossible for engineering, SMA, or other CCB members to meet.
• The program office declares "there will be no dissent in the record" for CCB documentation.
• A contract is awarded without CM requirements concurred with by the CMO supporting the program office.
• Redlines are used inappropriately on the production floor to keep track of changes to the design.
• The Material Review Board does not know the difference between critical, major, and minor nonconformances and the appropriate classification of waivers.
• Drawings are not of high quality and do not contain appropriate notes to identify critical engineering items for configuration control or appropriate tolerancing.
• Vendors do not understand the implication of submitting waivers to safety requirements as defined in engineering.
• Subcontractors/vendors change the engineering design without approval of the integrating contractor, do not know how to coordinate and write an engineering change request, etc.
• Manufacturing tooling engineering does not keep up with engineering changes that affect tooling concepts. Manufacturing tools lose configuration control and acceptability for production.
• Verification data cannot be traced to the released part number and specification that apply to the verification task.
• Operational manuals and repair instructions cannot be traced to the latest released part number and repair drawings that apply to the repair/modification task.
• Maintenance and ground support tools and equipment cannot be traced to the latest released part number and specification that applies to the equipment.
• Parts and items cannot be identified due to improper identification markings.
• Digital closeout photography cannot be correlated to the latest released engineering.
• NASA is unable to verify the latest released engineering through access to the contractor's CM Web site.
• Tools required per installation procedures do not match the fasteners and nuts and bolts used in the design of CIs.
• CIs do not fit into their packing crates and containers due to a loss of configuration control in the design of the shipping and packing containers.
• Supporting procurement/fabrication change procedures do not adequately involve approval by the originating engineering organization.


During the Columbia accident investigation, the Columbia Accident Investigation Board found inconsistencies related to the hardware and the documentation, with "unincorporated documentation changes" that led to failure. No CM issues were cited as a cause of the accident. The usual impact of not implementing CM can be described as "losing configuration control." Within NASA, this has resulted in program delays and engineering issues, especially in fast prototyping developments (X-37 Program) where schedule has priority over recording what is being done to the hardware. If CM is implemented properly, discrepancies identified during functional and physical configuration audits will be addressed. The following impacts are possible and have occurred in the past:
• Mission failure and loss of property and life due to improperly configured or installed hardware or software,
• Failure to gather mission data due to improperly configured or installed hardware or software,
• Significant mission delay incurring additional cost due to improperly configured or installed hardware or software, and
• Significant mission costs or delays due to improperly certified parts or subsystems resulting from fraudulent verification data.

If CM is not implemented properly, problems may occur in manufacturing, quality, receiving, procurement, etc. The user will also experience problems if ILS data are not maintained. Using a shared software system that can route and track tasks provides the team with the resources necessary for a successful project.

6.5.2.2 When Is It Acceptable to Use Redline Drawings?
"Redline" refers to the controlled process of marking up drawings and documents during design, fabrication, production, and testing when they are found to contain errors or inaccuracies. Work stoppages could occur if the documents were corrected through the formal change process.

All redlines require the approval of the responsible hardware manager and quality assurance manager at a minimum. These managers will determine whether redlines are to be incorporated into the plan or procedure.

The important point is that each project must have a controlled procedure for redlines that specifies redline procedures and approvals.

Redlines Were Identified as One of the Major Causes of the NOAA N-Prime Mishap
Excerpts from the NOAA N-Prime Mishap Investigation Final Report:
"Several elements contributed to the NOAA N-PRIME incident, the most significant of which were the lack of proper TOC [Turn Over Cart] verification, including the lack of proper PA [Product Assurance] witness, the change in schedule and its effect on the crew makeup, the failure of the crew to recognize missing bolts while performing the interface surface wipe down, the failure to notify in a timely fashion or at all the Safety, PA, and Government representatives, and the improper use of procedure redlines leading to a difficult-to-follow sequence of events. The interplay of the several elements allowed a situation to exist where the extensively experienced crew was not focusing on the activity at hand. There were missed opportunities that could have averted this mishap.

"In addition, the operations team was utilizing a heavily redlined procedure that required considerable 'jumping' from step to step, and had not been previously practiced.
"The poorly written procedure and novel redlines were preconditions to the decision errors made by the RTE [Responsible Test Engineer].

"The I&T [Integration and Test] supervisors allowed routine poor test documentation and routine misuse of procedure redlines.

"Key processes that were found to be inadequate include those that regulate operational tempo, operations planning, procedure development, use of redlines, and GSE [Ground Support Equipment] configurations. For instance, the operation during which the mishap occurred was conducted using extensively redlined procedures. The procedures were essentially new at the time of the operation—that is, they had never been used in that particular instantiation in any prior operation. The rewritten procedure had been approved through the appropriate channels even though such an extensive use of redlines was unprecedented. Such approval had been given without hazard or safety analyses having been performed."


6.6 Technical Data Management
The Technical Data Management Process is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. Data Management (DM) includes the development, deployment, operations and support, eventual retirement, and retention of appropriate technical data (including mission and science data) beyond system retirement, as required by NPR 1441.1, NASA Records Retention Schedules.

DM is illustrated in Figure 6.6-1. Key aspects of DM for systems engineering include:
• Application of policies and procedures for data identification and control,
• Timely and economical acquisition of technical data,
• Assurance of the adequacy of data and its protection,
• Facilitation of access to and distribution of the data to the point of use,
• Analysis of data use,
• Evaluation of data for future value to other programs/projects, and
• Access to information written in legacy software.

6.6.1 Process Description
Figure 6.6-1 provides a typical flow diagram for the Technical Data Management Process and identifies typical inputs, outputs, and activities to consider in addressing technical data management.

[Figure 6.6-1 Technical Data Management Process: technical data products to be managed and technical data requests flow in from the project, the technical processes, and contractors; the process prepares for technical data management implementation, collects and stores required technical data, maintains stored technical data, and provides technical data to authorized parties; outputs are the form of technical data products, technical data electronic exchange formats, and delivered technical data]

6.6.1.1 Inputs
Inputs include technical data, regardless of the form or method of recording and whether the data are generated by the contractor or the Government during the life cycle of the system being developed. Major inputs to the Technical Data Management Process include:
• Program DM plan,
• Data products to be managed, and
• Data requests.

6.6.1.2 Process Activities
Each Center is responsible for policies and procedures for technical DM. NPR 7120.5 and NPR 7123.1 define the need to manage data, but leave the specifics to the individual Centers. However, NPR 7120.5 does require that DM planning be provided either as a section in the program/project plan or as a separate document. The program or project manager is responsible for ensuring that the required data are captured and stored, that data integrity is maintained, and that data are disseminated as required. Other NASA policies address the acquisition and storage of data beyond just the technical data used in the life cycle of a system.

Role of the Data Management Plan
The recommended procedure is that the DM plan be a separate plan apart from the program/project plan. DM issues are usually of sufficient magnitude to justify a separate plan. The lack of specificity in Agency policy and procedures provides further justification for more detailed DM planning. The plan should cover the following major DM topics:


• Identification/definition of data requirements for all aspects of the product life cycle.
• Control procedures—receipt, modification, review, and approval.
• Guidance on how users access and search for data.
• Data exchange formats that promote data reuse and help to ensure that data can be used consistently throughout the system, family of systems, or system of systems.
• Data rights and distribution limitations, such as export-controlled Sensitive But Unclassified (SBU) data.
• Storage and maintenance of data, including master lists of where documents and records are maintained and managed.

Technical Data Management Key Considerations
Subsequent activities collect, store, and maintain technical data and provide it to authorized parties as required. Some considerations that impact these activities when implementing Technical Data Management include:
• Requirements relating to the flow/delivery of data to or from a contractor should be specified in the technical data management plan and included in the Request for Proposal (RFP) and contractor agreement.
• NASA should not impose changes on existing contractor data management systems unless the program technical data management requirements, including data exchange requirements, cannot otherwise be met.
• Responsibility for data inputs into the technical data management system lies solely with the originator or generator of the data.
• The availability/access of technical data will lie with the author, originator, or generator of the data in conjunction with the manager of the technical data management system.
• The established availability/access description and list should be baselined and placed under configuration control.
• For new programs, a digital generation and delivery medium is desired. Existing programs must weigh the cost/benefit trades of digitizing hard-copy data.

General Data Management Roles
The Technical Data Management Process provides the basis for applying the policies and procedures to identify and control data requirements; to responsively and economically acquire, access, and distribute data; and to analyze data use.

Adherence to DM principles/rules enables the sharing, integration, and management of data for performing technical efforts by Government and industry, and ensures that information generated from managed technical data satisfies requests or meets expectations.

The Technical Data Management Process has a leading role in capturing and organizing technical data and providing information for the following uses:
• Identifying, gathering, storing, and maintaining the work products generated by other systems engineering technical and technical management processes, as well as the assumptions made in arriving at those work products;
• Enabling collaboration and life-cycle use of system product data;
• Capturing and organizing technical effort inputs, as well as current, intermediate, and final outputs;
• Data correlation and traceability among requirements, designs, solutions, decisions, and rationales;
• Documenting engineering decisions, including procedures, methods, results, and analyses;
• Facilitating technology insertion for affordability improvements during reprocurement and post-production support; and
• Supporting other technical management and technical processes, as needed.
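Several of the roles above (identification, traceability among requirements, designs, and decisions, and rights/distribution control) come down to carrying consistent metadata with every managed data item. The sketch below shows one hypothetical way to represent that metadata in Python; the field names and marking values are illustrative assumptions, not an Agency schema.

from dataclasses import dataclass, field

@dataclass
class TechnicalDataItem:
    """Metadata carried with a managed technical data item (illustrative only)."""
    item_id: str                                    # unique identifier
    title: str
    originator: str                                 # author/originator responsible for the content
    version: str                                    # version/control number
    status: str                                     # e.g., "for approval", "for information", "released"
    distribution: str                               # e.g., "unrestricted", "SBU", "ITAR"
    traces_to: list = field(default_factory=list)   # IDs of requirements, decisions, or designs it supports

item = TechnicalDataItem(
    item_id="TD-1001",
    title="Thermal analysis of the instrument deck",
    originator="Thermal Systems Branch",
    version="B",
    status="released",
    distribution="SBU",
    traces_to=["REQ-THERM-012", "DEC-2024-07"],
)
print(f"{item.item_id} v{item.version} ({item.distribution}) traces to {', '.join(item.traces_to)}")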
Data Identification/Definition
Each program/project determines data needs during the life cycle. Data types may be defined in standard documents. Center and Agency directives sometimes specify the content of documents and are appropriately used for in-house data preparation. The standard description is modified to suit program/project-specific needs, and appropriate language is included in SOWs to implement actions resulting from the data evaluation. "Data suppliers" may be a contractor, academia, or the Government. Procurement of data from an outside supplier is a formal procurement action that requires a procurement document; in-house requirements may be handled in a less formal manner. Below are the different types of data that might be utilized within a program/project:


• Data:
  ▶ "Data" is defined in general as "recorded information regardless of the form or method of recording." However, the terms "data" and "information" are frequently used interchangeably. To be more precise, data generally must be processed in some manner to generate useful, actionable information.
  ▶ "Data," as used in SE DM, includes technical data; computer software documentation; and representations of facts, numbers, or data of any nature that can be communicated, stored, and processed to form information required by a contract or agreement to be delivered to, or accessed by, the Government.
  ▶ Data include those associated with system development, modeling and simulation used in development or test, test and evaluation, installation, parts, spares, repairs, usage data required for product sustainability, and source and/or supplier data.
  ▶ Data specifically not included in Technical Data Management would be data relating to general NASA workforce operations information, communications information (except where related to a specific requirement), financial transactions, personnel data, transactional data, and other data of a purely business nature.
• Data Call: A solicitation from Government stakeholders (specifically Integrated Product Team (IPT) leads and functional managers) that identifies and justifies their data requirements from a proposed contracted procurement. Since data provided by contractors have a cost to the Government, a data call (or an equivalent activity) is a common control mechanism used to ensure that the requested data are truly needed. If approved by the data call, a description of each data item needed is then developed and placed on contract.
• Information: Information is generally considered to be processed data. The form of the processed data depends on the applicable documentation, report, or review formats or templates.
• Technical Data Package: A technical data package is a technical description of an item adequate for supporting an acquisition strategy, production, engineering, and logistics support. The package defines the required design configuration and procedures to ensure adequacy of item performance. It consists of all applicable items such as drawings, associated lists, specifications, standards, performance requirements, quality assurance provisions, and packaging details.
• Technical Data Management System: The strategies, plans, procedures, tools, people, data formats, data exchange rules, databases, and other entities and descriptions required to manage the technical data of a program.

Inappropriate Uses of Technical Data
Examples of inappropriate uses of technical data include:
• Unauthorized disclosure of classified data or data otherwise provided in confidence;
• Faulty interpretation based on incomplete, out-of-context, or otherwise misleading data; and
• Use of data for parts or maintenance procurement for which at least Government purpose rights have not been obtained.

Ways to help prevent inappropriate use of technical data include the following:
• Educate stakeholders on appropriate data use, and
• Control access to data.

Initial Data Management System Structure
When setting up a DM system, it is not necessary to acquire (that is, to purchase and take delivery of) all technical data generated on a project. Some data may be stored in other locations with accessibility provided on a need-to-know basis.
Data should be purchased only when such access is not sufficient, timely, or secure enough to provide for responsive life-cycle planning and system maintenance. Data calls are a common control mechanism to help address this need.

Data Management Planning
• Prepare a technical data management strategy. This strategy can document how the program data management plan will be implemented by the technical effort or, in the absence of such a program-level plan, be used as the basis for preparing a detailed technical data management plan, including:
  ▶ Items of data that will be managed according to program or organizational policy, agreements, or legislation;
  ▶ The data content and format;


  ▶ A framework for data flow within the program and to/from contractors, including the language(s) to be employed in technical effort information exchanges;
  ▶ Technical data management responsibilities and authorities regarding the origin, generation, capture, archiving, security, privacy, and disposal of data products;
  ▶ Establishing the rights, obligations, and commitments regarding the retention of, transmission of, and access to data items; and
  ▶ Relevant data storage, transformation, transmission, and presentation standards and conventions to be used according to program or organizational policy, agreements, or legislative constraints.
• Obtain strategy/plan commitment from relevant stakeholders.
• Prepare procedures for implementing the technical data management strategy for the technical effort and/or for implementing the activities of the technical data management plan.
• Establish a technical database(s) to use for technical data maintenance and storage, or work with the program staff to arrange use of the program database(s) for managing technical data.
• Establish data collection tools, as appropriate to the technical data management scope and available resources. (See Section 7.3.)
• Establish electronic data exchange interfaces in accordance with international standards/agreements and applicable NASA standards.
• Train appropriate stakeholders and other technical personnel in the established technical data management strategy/plan, procedures, and data collection tools, as applicable.
• Expected outcomes:
  ▶ A strategy and/or plan for implementing technical data management;
  ▶ Established procedures for performing planned Technical Data Management activities;
  ▶ A master list of managed data and its classification by category and use;
  ▶ Data collection tools established and available; and
  ▶ Qualified technical personnel capable of conducting established technical data management procedures and using available data collection tools.

Key Considerations for Planning Data Management and for Tool Selection
• All data entered into the technical data management system or delivered to a requester from the databases of the system should have traceability to the author, originator, or generator of the data.
• All technical data entered into the technical data management system should carry objective evidence of current status (for approval, for agreement, for information, etc.), version/control number, and date.
• The technical data management approach should be covered as part of the program's SEMP.
• Technical data expected to be used for reprocurement of parts, maintenance services, etc., might need to be reviewed by the Center's legal counsel.

Careful consideration should be given when planning the access and storage of data that will be generated by a project or program. If a system or tool is needed, the CM tool can often be used with less formality. If a separate tool is required to manage the data, refer to the list below for some best practices when evaluating a data management tool. Priority must be placed on the ability to access the data and the ease of inputting the data.
Second priority should be consideration of the value of the specific data to the current project/program, future programs/projects, NASA's overall efficiency, and uniqueness to NASA's engineering knowledge.

The following are critical functions or attributes to consider if designing or purchasing software to assist with the task of managing data:
• Ability to share data with internal and external stakeholders securely;
• Version control and comparison, to track the history of an object or product;
• Secure user updating;
• Access control down to the file level;
• Web based;
• Ability to link data to the CM system or its elements;
• Compatibility with required support contractors and/or suppliers (i.e., can accept data from a third party as required);
• Integration with drafting and modeling programs as required;
• Provision of a neutral-format viewer for users;
• License agreement that allows for multiuser seats;
• Workflow and life-cycle management (a suggested option);
• Limited customization;
• Migration support between software version upgrades;
• User friendly;
• Straightforward search capabilities; and
• Ability to attach standard-format files from the desktop.
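One common way to apply a checklist like this during tool selection is a simple weighted scoring of candidate tools against the attributes. The sketch below illustrates the idea; the attributes chosen, the weights, and the candidate ratings are entirely hypothetical and would be tailored by the project.

# Hypothetical weights (relative importance) for a few of the attributes listed above.
WEIGHTS = {
    "secure sharing": 5,
    "version control": 5,
    "access control": 4,
    "web based": 3,
    "neutral-format viewer": 2,
    "search": 3,
}

def score_tool(ratings: dict) -> float:
    """Weighted score for one candidate tool; ratings are 0-5 per attribute."""
    return sum(WEIGHTS[attr] * ratings.get(attr, 0) for attr in WEIGHTS)

candidates = {
    "Tool A": {"secure sharing": 4, "version control": 5, "access control": 3,
               "web based": 5, "neutral-format viewer": 2, "search": 4},
    "Tool B": {"secure sharing": 5, "version control": 3, "access control": 5,
               "web based": 2, "neutral-format viewer": 4, "search": 3},
}
for name, ratings in sorted(candidates.items(), key=lambda kv: -score_tool(kv[1])):
    print(name, score_tool(ratings))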


Value of Data
Storage of engineering data needs to be planned at the beginning of a program or project. Some of the data types will fall under the control of NPR 1441.1, NASA Records Retention Schedules; those that do not will have to be addressed. It is best to evaluate all data that will be produced and decide how long the data are of value to the program or project or to NASA engineering as a whole. There are four basic questions to ask when evaluating the value of data:
• Do the data describe the product/system that is being developed or built?
• Are the data required to accurately produce the product/system being developed or built?
• Do the data offer insight for similar future programs or projects?
• Do the data hold key information that needs to be maintained in NASA's knowledge base for future engineers to use or kept as a learning example?

Technical Data Capture Tasks
Table 6.6-1 defines the tasks required to capture technical data.

Protection for Data Deliverables
All data deliverables should include distribution statements and procedures to protect all data that contain critical technology information, as well as to ensure that limited distribution data, intellectual property data, or proprietary data are properly handled during systems engineering activities. This injunction applies whether the data are hard copy or digital.

As part of overall asset protection planning, NASA has established special procedures for the protection of Critical Program Information (CPI). CPI may include components; engineering, design, or manufacturing processes; technologies; system capabilities and vulnerabilities; and any other information that gives a system its distinctive operational capability.

CPI protection should be a key consideration for the Technical Data Management effort and is part of the asset protection planning process, as shown in Appendix Q.

6.6.1.3 Outputs

Data Collection Checklist
• Have the frequency of collection and the points in the technical and technical management processes when data inputs will be available been determined?
• Has the timeline required to move data from the point of origin to storage repositories or stakeholders been established?
• Who is responsible for the input of the data?
• Who is responsible for data storage, retrieval, and security?
• Have the necessary supporting tools been developed or acquired?

Outputs include the timely, secure availability of needed data in various representations to those authorized to receive them.
Major outputs from the Technical Data Management Process include (refer to Figure 6.6-1):
• Technical data management procedures,
• Data representation forms,
• Data exchange formats, and
• Requested data/information delivered.

Table 6.6-1 Technical Data Tasks

Technical data capture
Tasks:
• Collect and store inputs and technical effort outcomes from the technical and technical management processes, including results from technical assessments; descriptions of methods, tools, and metrics used; recommendations, decisions, assumptions, and impacts of technical efforts and decisions; lessons learned; deviations from plan; anomalies and out-of-tolerances relative to requirements; and other data for tracking requirements.
• Perform data integrity checks on collected data to ensure compliance with content and format, as well as technical data checks to ensure there are no errors in specifying or recording the data.
• Report integrity check anomalies or variances to the authors or generators of the data for correction.
• Prioritize, review, and update data collection and storage procedures as part of regularly scheduled maintenance.
Expected outcomes: Sharable data needed to perform and control the technical and technical management processes are collected and stored; a stored data inventory.

Technical data maintenance
Tasks:
• Implement technical management roles and responsibilities for technical data products received.
• Manage database(s) to ensure that collected data have proper quality and integrity and are properly retained, secure, and available to those with access authority.
• Periodically review technical data management activities to ensure consistency and to identify anomalies and variances.
• Review stored data to ensure completeness, integrity, validity, availability, accuracy, currency, and traceability.
• Perform technical data maintenance, as required.
• Identify and document significant issues, their impacts, and the changes made to technical data to correct issues and mitigate impacts.
• Maintain and control the stored data and prevent them from being used inappropriately.
• Store data in a manner that enables easy and speedy retrieval.
• Maintain stored data in a manner that protects the technical data against foreseeable hazards, e.g., fire, flood, earthquake, etc.
Expected outcomes: Records of technical data maintenance; technical effort data, including captured work products, contractor-delivered documents, and acquirer-provided documents, are controlled and maintained; the status of stored data is maintained, including version description, timeline, and security classification.

Technical data/information distribution
Tasks:
• Maintain an information library or reference index to provide technical data availability and access instructions.
• Receive and evaluate requests to determine data requirements and delivery instructions.
• Process special requests for technical effort data or information according to established procedures for handling such requests.
• Ensure that required and requested data are appropriately distributed to satisfy the needs of the acquirer and requesters in accordance with the agreement, program directives, and technical data management plans and procedures.
• Ensure that electronic access rules are followed before database access is allowed or any requested data are electronically released/transferred to the requester.
• Provide proof of correctness, reliability, and security of technical data provided to internal and external recipients.
Expected outcomes: Access information (e.g., available data, access means, security procedures, time period for availability, and personnel cleared for access) is readily available; technical data are provided to authorized requesters in the appropriate format, with the appropriate content, and by a secure mode of delivery, as applicable.

Data management system maintenance
Tasks:
• Implement safeguards to ensure protection of the technical database and of en route technical data from unauthorized access or intrusion.
• Establish proof of coherence of the overall technical data set to facilitate effective and efficient use.
• Maintain, as applicable, backups of each technical database.
• Evaluate the technical data management system to identify collection and storage performance issues and problems; satisfaction of data users; and risks associated with delayed or corrupted data, unauthorized access, or survivability of information from hazards such as fire, flood, or earthquake.
• Systematically review the technical data management system, including the database capacity, to determine its appropriateness for successive phases of the Defense Acquisition Framework.
• Recommend improvements for discovered risks and problems: handle risks identified as part of technical risk management, and control recommended changes through established program change management activities.
Expected outcomes: A current technical data management system; technical data are appropriately and regularly backed up to prevent data loss.

6.6.2 Technical Data Management Guidance
6.6.2.1 Data Security and ITAR
NASA generates an enormous amount of information, much of which is unclassified/nonsensitive in nature with few restrictions on its use and dissemination. NASA also generates and maintains Classified National Security Information (CNSI) under a variety of Agency programs and projects and through partnerships and collaboration with other Federal agencies, academia, and private enterprises. SBU markings require the author, distributor, and receiver to keep control of the sensitive document and data or to pass that control to an established control process.


Public release is prohibited, and a document or data marked as such must be transmitted by secure means. Secure means are encrypted e-mail, secure fax, or person-to-person tracking. WebEx is a nonsecure environment. Standard e-mail is not permitted for transmitting SBU documents and data. A secure way to send SBU information via e-mail is to use the Public Key Infrastructure (PKI) to transmit the file(s). PKI is a system that manages keys to lock and unlock computer data. The basic purpose of PKI is to enable you to share your data keys with other people in a secure manner. PKI provides desktop security, as well as security for desktop and network applications, including electronic and Internet commerce.

Data items such as detailed design data (models, drawings, presentations, etc.), limited rights data, source selection data, bid and proposal information, financial data, emergency contingency plans, and restricted computer software are all examples of SBU data.


Items that are deemed SBU must be clearly marked in accordance with NPR 1600.1, NASA Security Program Procedural Requirements. Data or items that cannot be directly marked, such as computer models and analyses, must have an attached copy of NASA Form 1686 that indicates the entire package is SBU data. Documents are required to have a NASA Form 1686 as a cover sheet. SBU documents and data should be safeguarded. Some examples of ways to safeguard SBU data are: access is limited on a need-to-know basis, items are copy controlled, items are attended while being used, items are properly marked (document header, footer, and NASA Form 1686), items are stored in locked containers or offices and on secure servers, items are transmitted by secure means, and items are destroyed by approved methods (shredding, etc.). For more information on SBU data, see NPR 1600.1.

International Traffic in Arms Regulations (ITAR) implement the Arms Export Control Act and contain the United States Munitions List (USML). The USML lists
The USML listsarticles, services, and related technical data that aredesignated as “defense articles” and “defense services,”pursuant to Sections 38 and 47(7) of the Arms ExportControl Act. The ITAR is administered by the U.S. Departmentof State. “<strong>Technical</strong> data” as defined in theITAR does not include information concerning generalscientific, mathematical, or engineering principles commonlytaught in schools, colleges, and universities or informationin the public domain (as that term is defined164 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


It also does not include basic marketing information on function and purpose or general system descriptions. For purposes of the ITAR, the following definitions apply:
• "Defense Article" (22 CFR 120.6): A defense article is any item or technical data on the USML. The term includes technical data recorded or stored in any physical form, models, mockups, or other items that reveal technical data directly relating to items designated in the USML. Examples of defense articles included on the USML are (1) launch vehicles, including their specifically designed or modified components, parts, accessories, attachments, and associated equipment; (2) remote sensing satellite systems, including ground control stations for telemetry, tracking, and control of such satellites, as well as passive ground stations if such stations employ any cryptographic items controlled on the USML or employ any uplink command capability; and (3) all components, parts, accessories, attachments, and associated equipment (including ground support equipment) that are specifically designed, modified, or configured for such systems. (See 22 CFR 121.1 for the complete listing.)
• "Technical Data" (22 CFR 120.10): Technical data are information required for the design, development, production, manufacture, assembly, operation, repair, testing, maintenance, or modification of defense articles. This includes information in the form of blueprints, drawings, photographs, plans, instructions, and documentation.
• Classified Information Relating to Defense Articles and Defense Services: Classified information is covered by an invention secrecy order (35 U.S.C. 181 et seq.; 35 CFR Part 5).
• Software Directly Related to Defense Articles: Controlled software includes, but is not limited to, system functional design, logic flow, algorithms, application programs, operating systems, and support software for design, implementation, test, operations, diagnosis, and repair related to defense articles.
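The handling rules above (no public release of SBU material, transmission only by secure means, access on a need-to-know basis) are often enforced as a simple gate before any data item leaves the data management system. The sketch below illustrates such a gate in Python; the marking names and channel list are assumptions for illustration and do not represent NASA's actual access-control implementation.

# Hypothetical channels treated as "secure means" for controlled data.
SECURE_CHANNELS = {"encrypted e-mail", "secure fax", "person-to-person"}
CONTROLLED_MARKINGS = {"SBU", "ITAR"}

def may_release(marking: str, channel: str, recipient_has_need_to_know: bool) -> bool:
    """Return True only if a data item may be sent over the requested channel."""
    if marking not in CONTROLLED_MARKINGS:
        return True                          # uncontrolled items have no extra gate here
    if not recipient_has_need_to_know:
        return False                         # controlled items require need-to-know
    return channel in SECURE_CHANNELS        # and a secure transmission channel

print(may_release("SBU", "standard e-mail", True))    # False: not a secure channel
print(may_release("SBU", "encrypted e-mail", True))   # True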


6.7 Technical Assessment
Technical assessment is the crosscutting process used to help monitor the technical progress of a program/project through Periodic Technical Reviews (PTRs). It also provides status information to support assessing system design, product realization, and technical management decisions.

6.7.1 Process Description
Figure 6.7-1 provides a typical flow diagram for the Technical Assessment Process and identifies typical inputs, outputs, and activities to consider in addressing technical assessment.

[Figure 6.7-1 Technical Assessment Process: inputs (technical plans, product and process measures, risk reporting requirements, technical cost and schedule status reports, product measurements, and decision support recommendations and impacts) feed activities to prepare a strategy for conducting technical assessments, assess technical work productivity against plans, assess technical product quality against requirements, conduct horizontal and vertical progress technical reviews, and capture work products; outputs include assessment results and findings, analysis support requests, technical review reports, corrective action recommendations, and work products from technical assessment]

6.7.1.1 Inputs
Typical inputs needed for the Technical Assessment Process include the following:
• Technical Plans: These are the planning documents that outline the technical review/assessment process as well as identify the technical product/process measures that will be tracked and assessed to determine technical progress. Examples of these plans are the SEMP, review plans, and the EVM plan.
• Technical Measures: These are the identified technical measures that will be tracked to determine technical progress. These measures are also referred to as MOEs, MOPs, and TPMs.
• Reporting Requirements: These are the requirements on the methodology by which the status of the technical measures will be reported with regard to risk, cost, schedule, etc. The methodology and tools used for reporting status are established on a project-by-project basis.

6.7.1.2 Process Activities
As outlined in Figure 6.7-1, the technical plans (e.g., SEMP, review plans) provide the initial inputs into the Technical Assessment Process. These documents outline the technical review/assessment approach as well as identify the technical measures that will be tracked and assessed to determine technical progress. An important part of technical planning is determining what is needed in time, resources, and performance to complete a system that meets desired goals and objectives. Project managers need visibility into the progress of those plans in order to exercise proper management control.


Typical activities in determining progress against the identified technical measures include status reporting and assessing the data. Status reporting identifies where the project stands with regard to a particular technical measure. Assessing analytically converts the output of the status reporting into a more useful form from which trends can be determined and variances from expected results can be understood. Results of the assessment activity then feed into the Decision Analysis Process (see Section 6.8) when potential corrective action is necessary.

These activities together form the feedback loop depicted in Figure 6.7-2.

[Figure 6.7-2 Planning and status reporting feedback loop: (re)planning drives execution; execution produces status reporting, which is assessed; acceptable status returns to execution, while unacceptable status triggers decisionmaking and replanning]

This loop takes place on a continual basis throughout the project life cycle and is applicable at each level of the project hierarchy. Planning data, status reporting data, and assessments flow up the hierarchy with appropriate aggregation at each level; decisions cause actions to be taken down the hierarchy. Managers at each level determine (consistent with policies established at the next higher level of the project hierarchy) how often, and in what form, reporting data and assessments should be made. In establishing these status reporting and assessment requirements, some principles of good practice are:
• Use an agreed-upon set of well-defined technical measures. (See Subsection 6.7.2.2.)
• Report these technical measures in a consistent format at all project levels.
• Maintain historical data for both trend identification and cross-project analyses.
• Encourage a logical process of rolling up technical measures (e.g., use the WBS for project progress status).
• Support assessments with quantitative risk measures.
• Summarize the condition of the project by using color-coded (red, yellow, and green) alert zones for all technical measures.

Regular, periodic (e.g., monthly) tracking of the technical measures is recommended, although some measures should be tracked more often when there is rapid change or cause for concern. Key reviews, such as PDRs and CDRs, are points at which technical measures and their trends should be carefully scrutinized for early warning signs of potential problems. Should there be indications that existing trends, if allowed to continue, will yield an unfavorable outcome, corrective action should begin as soon as practical. Subsection 6.7.2.2 provides additional information on status reporting and assessment techniques for costs and schedules (including EVM), technical performance, and systems engineering process metrics.
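As a small illustration of the color-coded reporting mentioned above, the sketch below assigns a green, yellow, or red zone to a technical performance measure based on its remaining margin against the requirement; the threshold value and the example mass data are hypothetical and would be set per measure by the project.

def tpm_zone(current_value: float, requirement: float, yellow_margin: float = 0.10) -> str:
    """Classify a TPM whose value must stay at or below its requirement (e.g., mass).

    Green  : at least `yellow_margin` of relative margin remains.
    Yellow : requirement still met, but margin has eroded below the threshold.
    Red    : requirement is violated.
    """
    if current_value > requirement:
        return "red"
    margin = (requirement - current_value) / requirement
    return "green" if margin >= yellow_margin else "yellow"

# Example: a mass TPM with a 1000 kg allocation, tracked monthly.
for month, mass in [("Jan", 870.0), ("Feb", 930.0), ("Mar", 1012.0)]:
    print(month, tpm_zone(mass, 1000.0))   # green, yellow, red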
The measures are predominantly assessed during the program and project technical reviews. Typical activities performed for technical reviews include (1) identifying, planning, and conducting phase-to-phase technical reviews; (2) establishing each review's purpose, objective, and entry and success criteria; (3) establishing the makeup of the review team; and (4) identifying and resolving action items resulting from the review. Subsection 6.7.2.1 summarizes the types of technical reviews typically conducted on a program/project and the role of these reviews in supporting management decision processes. It also identifies some general principles for holding reviews, but leaves explicit direction for executing a review to the program/project team to define.

The process of executing technical assessment has close relationships to other areas, such as risk management, decision analysis, and technical planning. These areas may provide input into the Technical Assessment Process or benefit from its outputs.

6.7.1.3 Outputs
Typical outputs of the Technical Assessment Process include the following:
• Assessment Results, Findings, and Recommendations: This is the collective data on the established measures from which trends can be determined and variances from expected results can be understood. Results then feed into the Decision Analysis Process when potential corrective action is necessary.
• Technical Review Reports/Minutes: This is the collective information coming out of each review that captures the results, recommendations, and actions with regard to meeting the review's success criteria.


6.7.2 Technical Assessment Guidance
6.7.2.1 Reviews, Audits, and Key Decision Points
To gain a general understanding of the various technical reviews called out in Agency policy (e.g., NPR 7120.5 and NPR 7123.1), we need to examine the intent of the policy within each of those documents. These reviews inform the decision authority. NPR 7120.5's primary focus is to inform the decision authority as to the readiness of a program/project to proceed into the next phase of the life cycle. This is done for each milestone review and is tied to a KDP throughout the life cycle. For KDP/milestone reviews, external independent reviewers known as Standing Review Board (SRB) members evaluate the program/project and, in the end, report their findings to the decision authority. To prepare for the SRB, the technical team must conduct its own internal peer review process. This process typically includes both informal and formal peer reviews at the subsystem and system levels. This handbook attempts to provide sufficient insight and guidance into both policy documents so that practitioners can understand how they are to be successfully integrated; however, the main focus in this handbook is on the internal review process.

The intent and policy for reviews, audits, and KDPs should be developed during Phase A and defined in the program/project plan. The specific implementation of these activities should be consistent with the types of reviews and audits described in this section and with the NASA program and project life-cycle charts (see Figures 3.0-1 and 3.0-2). However, the timing of reviews, audits, and KDPs should accommodate the needs of each specific project.

Purpose and Definition
The purpose of a review is to furnish the forum and process to provide NASA management and their contractors assurance that the most satisfactory approach, plan, or design has been selected; that a configuration item has been produced to meet the specified requirements; or that a configuration item is ready. Reviews help to develop a better understanding among task or project participants, open communication channels, alert participants and management to problems, and open avenues for solutions. Reviews are intended to add value to the project and enhance project quality and the likelihood of success. This is aided by inviting outside experts to confirm the viability of the presented approach, concept, or baseline or to recommend alternatives. Reviews may be program life-cycle reviews, project life-cycle reviews, or internal reviews.

The purpose of an audit is to provide NASA management and its contractors a thorough examination of adherence to program/project policies, plans, requirements, and specifications. Audits are the systematic examination of tangible evidence to determine the adequacy, validity, and effectiveness of the activity or documentation under review. An audit may examine documentation of policies and procedures, as well as verify adherence to them.

The purpose of a KDP is to provide a scheduled event at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (e.g., B to C, C to D, etc.) or to the next KDP. KDPs are part of NASA's oversight and approval process for programs/projects. For a detailed description of the process and the management oversight teams, see NPR 7120.5.
Essentially, KDPs serve as gates through which programs and projects must pass. Within each phase, a KDP is preceded by one or more reviews, including the governing Program Management Council (PMC) review. Allowances are made within a phase for the differences between human and robotic space flight programs and projects, but phases always end with the KDP. The potential outcomes at a KDP include:
• Approval for continuation to the next KDP.
• Approval for continuation to the next KDP, pending resolution of actions.
• Disapproval for continuation to the next KDP. In such cases, follow-up actions may include a request for more information and/or a delta independent review; a request for a Termination Review (described below) for the program or the project (Phases B, C, D, and E only); direction to continue in the current phase; or redirection of the program/project.

The decision authority reviews materials submitted by the governing PMC, SRB, Program Manager (PM), project manager, and Center Management Council (CMC), in addition to agreements and program/project documentation, to support the decision process. The decision authority makes decisions by considering a number of factors, including continued relevance to Agency strategic needs, goals, and objectives; continued cost affordability with respect to the Agency's resources; the viability and readiness to proceed to the next phase; and remaining program or project risk (cost, schedule, technical, safety).


Appeals against the final decision of the decision authority go to the next higher decision authority.

Project Termination
It should be noted that project termination, while usually disappointing to project personnel, may be a proper reaction to changes in external conditions or to an improved understanding of the system's projected cost-effectiveness.

Termination Review
A termination review is initiated by the decision authority to secure a recommendation as to whether to continue or terminate a program or project. Failing to stay within the parameters or levels specified in controlling documents will result in consideration of a termination review. At the termination review, the program and project teams present status, including any material requested by the decision authority. Appropriate support organizations are represented (e.g., procurement, external affairs, legislative affairs, public affairs) as needed. The decision and the basis of the decision are fully documented and reviewed with the NASA Associate Administrator prior to final implementation.

General Principles for Reviews
Several factors can affect the implementation plan for any given review, such as design complexity, schedule, cost, visibility, NASA Center practices, the review itself, etc. As such, there is no set standard for conducting a review across the Agency; however, there are key elements, or principles, that should be included in a review plan. These include definition of the review scope, objectives, success criteria (consistent with NPR 7123.1), and process. Definition of the review process should include identification of the schedule, including the duration of the face-to-face meeting (and a draft agenda); definition of the roles and responsibilities of participants; identification of presentation material and data package contents; and a copy of the form to be used for Review Item Disposition (RID)/Request For Action (RFA)/Comment. The review process for screening and processing discrepancies/requests/comments should also be included in the plan. The review plan must be agreed to by the technical team lead, the project manager, and, for SRB-type reviews, the SRB chair prior to the review.

It is recommended that all reviews consist of oral presentations of the applicable project requirements and the approaches, plans, or designs that satisfy those requirements. These presentations are normally provided by the cognizant design engineers or their immediate supervisor. It is also recommended that, in addition to the SRB, the review audience include key stakeholders, such as the science community, the program executive, etc. This ensures that the project obtains buy-in from the personnel who have control over the project as well as those who benefit from a successful mission. It is also very beneficial to have project personnel in attendance who are not directly associated with the design being reviewed (e.g., EPS attending a thermal discussion). This gives the project an additional opportunity to utilize cross-discipline expertise to identify design shortfalls or recommend improvements.
Of course, the audience should also include nonproject specialists from safety, quality and mission assurance, reliability, verification, and testing.

Program Technical Life-Cycle Reviews
Within NASA there are various types of programs:

• Single-project programs (e.g., the James Webb Space Telescope Program) tend to have long development and/or operational lifetimes, represent a large investment of Agency resources in one program/project, and have contributions to that program/project from multiple organizations or agencies.
• Uncoupled programs (e.g., the Discovery and Explorer Programs) are implemented under a broad scientific theme and/or a common program implementation concept, such as providing frequent flight opportunities for cost-capped projects selected through AOs or NASA research announcements. Each such project is independent of the other projects within the program.
• Loosely coupled programs (e.g., the Mars Exploration Program or the Lunar Precursor and Robotic Program) address specific scientific or exploration objectives through multiple space flight projects of varied scope. While each individual project has an assigned set of mission objectives, architectural and technological synergies and strategies that benefit the program as a whole are explored during the Formulation process. For instance, all orbiters designed for more than one year in Mars orbit are required to carry a communication system to support present and future landers.


• Tightly coupled programs (e.g., the Constellation Program) have multiple projects that execute portions of a mission or missions. No single project is capable of implementing a complete mission. Typically, multiple NASA Centers contribute to the program. Individual projects may be managed at different Centers. The program may also include other Agency or international partner contributions.

Regardless of the type, all programs are required to undergo the two technical reviews listed in Table 6.7‐1. The main difference lies between uncoupled/loosely coupled programs, which tend to conduct "status-type" reviews on their projects after KDP I, and single-project/tightly coupled programs, which tend to follow the project technical life-cycle review process after KDP I.

Table 6.7‐1 Program Technical Reviews

Program/System Requirements Review (P/SRR): The P/SRR examines the functional and performance requirements defined for the program (and its constituent projects) and ensures that the requirements and the selected concept will satisfy the program and higher level requirements. It is an internal review. Rough order of magnitude budgets and schedules are presented.

Program/System Definition Review (P/SDR): The P/SDR examines the proposed program architecture and the flowdown to the functional elements of the system.

After KDP I, single-project/tightly coupled programs are responsible for conducting the system-level reviews. These reviews bring the projects together and help ensure the flowdown of requirements and that the overall system/subsystem design solution satisfies the program requirements. These program-level reviews also help resolve interface/integration issues between projects. For the purposes of this handbook, single-project programs and tightly coupled programs follow the project life-cycle review process defined below. Best practices and lessons learned drive programs to conduct their "concept and requirements-type" reviews prior to the project concept and requirements reviews, and their "program design and acceptance-type" reviews after the project design and acceptance reviews.

Project Technical Life-Cycle Reviews
The phrase "project life cycle/project milestone reviews" has, over the years, come to mean different things at various Centers. Some use it to mean the project's controlled formal review using RIDs and pre-boards/boards, while others use it to mean the activity tied to RFAs and the SRB/KDP process. This document uses the latter meaning. Project life-cycle reviews are mandatory reviews, convened by the decision authority, that summarize the results of internal technical processes (peer reviews) throughout the project life cycle to NASA management and/or an independent review team, such as an SRB (see NPR 7120.5). These reviews are used to assess the progress and health of a project by providing NASA management assurance that the most satisfactory approach, plan, or design has been selected, that a configuration item has been produced to meet the specified requirements, or that a configuration item is ready for launch/operation.
Some examples of life-cycle reviews include the System Requirements Review, Preliminary Design Review, Critical Design Review, and Acceptance Review.

Specified life-cycle reviews are followed by a KDP at which the decision authority for the project determines, based on results and recommendations from the life-cycle review teams, whether or not the project can proceed to the next life-cycle phase.

Standing Review Boards
The SRB's role is advisory to the program/project and the convening authorities; it does not have authority over any program/project content. Its review provides expert assessment of the technical and programmatic approach, risk posture, and progress against the program/project baseline. When appropriate, it may offer recommendations to improve performance and/or reduce risk.

Internal Reviews
During the course of a project or task, it is necessary to conduct internal reviews that present technical approaches, trade studies, analyses, and problem areas to a peer group for evaluation and comment. The timing, participants, and content of these reviews are normally defined by the project manager or the manager of the performing organization, with support from the technical team. In preparation for the life-cycle reviews, a project will initiate an internal review process as defined in the project plan. These reviews are not just meetings to share ideas and resolve issues; they are internal reviews that allow the project to establish baseline requirements, plans, or designs through the review of technical approaches, trade studies, and analyses.


Internal peer reviews provide an excellent means for controlling the technical progress of the project. They should also be used to ensure that all interested parties are involved in the development early on and throughout the process. Thus, representatives from areas such as manufacturing and quality assurance should attend the internal reviews as active participants. It is also good practice to include representatives from other Centers and outside organizations that provide support or develop systems or subsystems that may interface with your system/subsystem. They can then, for example, ensure that the design is producible and integratable and that quality is managed throughout the project life cycle.

Since internal peer reviews are at a much greater level of detail than the life-cycle reviews, the team may utilize internal and external experts to help develop and assess approaches and concepts at the internal reviews. Some organizations form a red team to provide an internal, independent peer review to identify deficiencies and offer recommendations. Projects often refer to their internal reviews as "tabletop" reviews or "interim" design reviews. Whatever the name, the purpose is the same: to ensure the readiness of the baseline for a successful project life-cycle review. It should be noted that, due to the importance of these reviews, each review should have well-defined entrance and success criteria established prior to the review.

Required Technical Reviews
This subsection describes the purpose, timing, objectives, success criteria, and results of the technical reviews required by NPR 7123.1 in the NASA program and project life cycles. This information is intended to provide guidance to program/project managers and systems engineers and to illustrate the progressive maturation of review activities and systems engineering products. For Flight Systems and Ground Support (FS&GS) projects, the NASA life-cycle phases of Formulation and Implementation divide into seven project phases. The checklists provided below aid in the preparation of specific review entry and success criteria, but do not take their place. To minimize extra work, review material should be keyed to program/project documentation.

Program/System Requirements Review
The P/SRR is used to ensure that the program requirements are properly formulated and correlated with the Agency and mission directorate strategic objectives. Its entrance and success criteria are listed in Table 6.7‐2.
Table 6.7‐2 P/SRR Entrance and Success Criteria

Entrance Criteria
1. An FAD has been approved.
2. Program requirements have been defined that support mission directorate requirements on the program.
3. Major program risks and corresponding mitigation strategies have been identified.
4. The high-level program requirements have been documented to include:
   a. performance,
   b. safety, and
   c. programmatic requirements.
5. An approach for verifying compliance with program requirements has been defined.
6. Procedures for controlling changes to program requirements have been defined and approved.
7. Traceability of program requirements to individual projects is documented in accordance with Agency needs, goals, and objectives, as described in the NASA Strategic Plan.
8. Top program/project risks with significant technical, safety, cost, and schedule impacts are identified.

Success Criteria
1. With respect to mission and science requirements, defined high-level program requirements are determined to be complete and are approved.
2. Defined interfaces with other programs are approved.
3. The program requirements are determined to provide a cost-effective program.
4. The program requirements are adequately levied on either the single-program project or the multiple projects of the program.
5. The plans for controlling program requirement changes have been approved.
6. The approach for verifying compliance with program requirements has been approved.
7. The mitigation strategies for handling identified major risks have been approved.
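Entrance and success criteria such as those in Table 6.7‐2 are, in practice, checklists, and some projects track them as structured data so that open items can be reported automatically. The sketch below is a minimal illustration of that idea, not an Agency tool; the Criterion fields and the example P/SRR items are hypothetical simplifications of the table above.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One entrance or success criterion and its disposition."""
    text: str
    met: bool
    evidence: str = ""   # e.g., document number or RFA closure reference

def open_items(criteria: list[Criterion]) -> list[Criterion]:
    """Return the criteria that still lack objective evidence of closure."""
    return [c for c in criteria if not c.met]

# Hypothetical P/SRR entrance checklist drawn loosely from Table 6.7-2
psrr_entrance = [
    Criterion("FAD approved", True, "FAD rev A"),
    Criterion("Program requirements support mission directorate requirements", True),
    Criterion("Major risks and mitigation strategies identified", False),
]

for item in open_items(psrr_entrance):
    print("OPEN:", item.text)
```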


Program/System Definition Review
The P/SDR applies to all NASA space flight programs to ensure the readiness of these programs to enter an approved Program Commitment Agreement (PCA). The approved PCA permits programs to transition from the program Formulation phase to the program Implementation phase. A Program Approval Review (PAR) is conducted as part of the P/SDR to provide Agency management with an independent assessment of the readiness of the program to proceed into implementation.

The P/SDR examines the proposed program architecture and the flowdown to the functional elements of the system. The proposed program's objectives and the concept for meeting those objectives are evaluated. Key technologies and other risks are identified and assessed. The baseline program plan, budgets, and schedules are presented. The technical team provides the technical content to support the P/SDR. Entrance and success criteria are listed in Table 6.7‐3.

Table 6.7‐3 P/SDR Entrance and Success Criteria

Entrance Criteria
1. A P/SRR has been satisfactorily completed.
2. A program plan has been prepared that includes the following:
   a. how the program will be managed;
   b. a list of specific projects;
   c. the high-level program requirements (including risk criteria);
   d. performance, safety, and programmatic requirements correlated to Agency and directorate strategic objectives;
   e. a description of the systems to be developed (hardware and software), legacy systems, system interfaces, and facilities; and
   f. identification of major constraints affecting system development (e.g., cost, launch window, required launch vehicle, mission planetary environment, engine design, international partners, and technology drivers).
3. A program-level SEMP that includes project technical approaches and management plans to implement the allocated program requirements, including constituent launch, flight, and ground systems, and operations and logistics concepts.
4. Independent cost analyses (ICAs) and independent cost estimates (ICEs).
5. A management plan for resources other than budget.
6. Documentation for obtaining the PCA that includes the following:
   a. the feasibility of the program mission solution with a cost estimate within an acceptable cost range,
   b. project plans adequate for project formulation initiation,
   c. identified and prioritized program concept evaluation criteria to be used in project evaluations,
   d. estimates of required annual funding levels,
   e. credible program cost and schedule allocation estimates to projects,
   f. acceptable risk and mitigation strategies (supported by a technical risk assessment),
   g. organizational structures and defined work assignments,
   h. defined program acquisition strategies,
   i. interfaces to other programs and partners,
   j. a draft plan for program implementation, and
   k. a defined program management system.
7. A draft program control plan that includes:
   a. how the program plans to control program requirements, technical design, schedule, and cost to achieve its high-level requirements;
   b. how the requirements, technical design, schedule, and cost of the program will be controlled;
   c. how the program will utilize its technical, schedule, and cost reserves to control the baseline;
   d. how the program plans to report technical, schedule, and cost status to the MDAA, including frequency and the level of detail; and
   e. how the program will address technical waivers and how dissenting opinions will be handled.
8. For each project, a top-level description has been documented.

Success Criteria
1. An approved program plan and management approach.
2. An approved SEMP and technical approach.
3. Estimated costs are adequate.
4. Documentation for obtaining the PCA is approved.
5. An approved draft program control plan.
6. Agreement that the program is aligned with Agency needs, goals, and objectives.
7. The technical approach is adequate.
8. The schedule is adequate and consistent with cost, risk, and mission goals.
9. Resources other than budget are adequate and available.


Mission Concept Review
The MCR affirms the mission need and examines the proposed mission's objectives and the concept for meeting those objectives. It is an internal review that usually occurs at the organization cognizant for system development. The MCR should be completed prior to entering the concept development phase (Phase A).

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
A successful MCR supports the determination that the proposed mission meets the customer need and has sufficient quality and merit to support a field Center management decision to propose further study to the cognizant NASA program associate administrator as a candidate Phase A effort.

Table 6.7‐4 MCR Entrance and Success Criteria

Entrance Criteria
1. Mission goals and objectives.
2. Analysis of alternative concepts to show at least one is feasible.
3. ConOps.
4. Preliminary mission descope options.
5. Preliminary risk assessment, including technologies and associated risk management/mitigation strategies and options.
6. Conceptual test and evaluation strategy.
7. Preliminary technical plans to achieve the next phase.
8. Defined MOEs and MOPs.
9. Conceptual life-cycle support strategies (logistics, manufacturing, operation, etc.).

Success Criteria
1. Mission objectives are clearly defined and stated and are unambiguous and internally consistent.
2. The preliminary set of requirements satisfactorily provides a system that will meet the mission objectives.
3. The mission is feasible. A solution has been identified that is technically feasible. A rough cost estimate is within an acceptable cost range.
4. The concept evaluation criteria to be used in candidate systems evaluation have been identified and prioritized.
5. The need for the mission has been clearly identified.
6. The cost and schedule estimates are credible.
7. An updated technical search was done to identify existing assets or products that could satisfy the mission or parts of the mission.
8. Technical planning is sufficient to proceed to the next phase.
9. Risk and mitigation strategies have been identified and are acceptable based on technical assessments.


System Requirements Review
The SRR examines the functional and performance requirements defined for the system and the preliminary program or project plan and ensures that the requirements and the selected concept will satisfy the mission. The SRR is conducted during the concept development phase (Phase A) and before conducting the SDR or MDR.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
Successful completion of the SRR freezes program/project requirements and leads to a formal decision by the cognizant program associate administrator to proceed with proposal request preparations for project implementation.

Table 6.7‐5 SRR Entrance and Success Criteria

Entrance Criteria
1. Successful completion of the MCR and responses made to all MCR RFAs and RIDs.
2. A preliminary SRR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair prior to the SRR.
3. The following technical products for hardware and software system elements are available to the cognizant participants prior to the review:
   a. system requirements document;
   b. system software functionality description;
   c. updated ConOps;
   d. updated mission requirements, if applicable;
   e. baselined SEMP;
   f. risk management plan;
   g. preliminary system requirements allocation to the next lower level system;
   h. updated cost estimate;
   i. technology development maturity assessment plan;
   j. updated risk assessment and mitigations (including PRA, as applicable);
   k. logistics documentation (e.g., preliminary maintenance plan);
   l. preliminary human rating plan, if applicable;
   m. software development plan;
   n. system SMA plan;
   o. CM plan;
   p. initial document tree;
   q. verification and validation approach;
   r. preliminary system safety analysis; and
   s. other specialty disciplines, as required.

Success Criteria
1. The project utilizes a sound process for the allocation and control of requirements throughout all levels, and a plan has been defined to complete the definition activity within schedule constraints.
2. Requirements definition is complete with respect to top-level mission and science requirements, and interfaces with external entities and between major internal elements have been defined.
3. Requirements allocation and flowdown of key driving requirements have been defined down to subsystems.
4. Preliminary approaches have been determined for how requirements will be verified and validated down to the subsystem level.
5. Major risks have been identified and technically assessed, and viable mitigation strategies have been defined.
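Several of the SRR success criteria hinge on requirements allocation and flowdown. A project requirements database can be checked mechanically for broken flowdown (children that trace to no valid parent). The sketch below only illustrates the idea; the requirement IDs, text, and the deliberately broken trace are invented for the example.

```python
# Minimal traceability-check sketch (illustrative only; requirement IDs and
# the parent/child structure are hypothetical, not drawn from NPR 7123.1).
requirements = {
    "SYS-001": {"parent": None,      "text": "The observatory shall image in the near infrared."},
    "SUB-010": {"parent": "SYS-001", "text": "The detector shall cover 0.6 to 5.0 micrometers."},
    "SUB-011": {"parent": "SYS-999", "text": "Orphan: its parent does not exist."},
}

def flowdown_issues(reqs: dict) -> list[str]:
    """Flag child requirements whose parent is missing (broken flowdown)."""
    issues = []
    for rid, req in reqs.items():
        parent = req["parent"]
        if parent is not None and parent not in reqs:
            issues.append(f"{rid} traces to unknown parent {parent}")
    return issues

print(flowdown_issues(requirements))  # -> ['SUB-011 traces to unknown parent SYS-999']
```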


Mission Definition Review (Robotic Missions Only)
The MDR examines the proposed requirements, the mission architecture, and the flowdown to all functional elements of the mission to ensure that the overall concept is complete, feasible, and consistent with available resources. The MDR is conducted during the concept development phase (Phase A), following completion of the concept studies phase (Pre-Phase A) and before the preliminary design phase (Phase B).

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
A successful MDR supports the decision to further develop the system architecture/design and any technology needed to accomplish the mission. The results reinforce the mission's merit and provide a basis for the system acquisition strategy.

Table 6.7‐6 MDR Entrance and Success Criteria

Entrance Criteria
1. Successful completion of the SRR and responses made to all SRR RFAs and RIDs.
2. A preliminary MDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair prior to the MDR.
3. The following technical products for hardware and software system elements are available to the cognizant participants prior to the review:
   a. system architecture;
   b. updated system requirements document, if applicable;
   c. system software functionality description;
   d. updated ConOps, if applicable;
   e. updated mission requirements, if applicable;
   f. updated SEMP, if applicable;
   g. updated risk management plan, if applicable;
   h. technology development maturity assessment plan;
   i. preferred system solution definition, including major trades and options;
   j. updated risk assessment and mitigations (including PRA, as applicable);
   k. updated cost and schedule data;
   l. logistics documentation (e.g., preliminary maintenance plan);
   m. software development plan;
   n. system SMA plan;
   o. CM plan;
   p. updated initial document tree, if applicable;
   q. preliminary system safety analysis; and
   r. other specialty disciplines, as required.

Success Criteria
1. The resulting overall concept is reasonable, feasible, complete, responsive to the mission requirements, and consistent with system requirements and available resources (cost, schedule, mass, and power).
2. System and subsystem design approaches and operational concepts exist and are consistent with the requirements set.
3. The requirements, design approaches, and conceptual design will fulfill the mission needs within the estimated costs.
4. Major risks have been identified and technically assessed, and viable mitigation strategies have been defined.


System Definition Review (Human Space Flight Missions Only)
The SDR examines the proposed system architecture/design and the flowdown to all functional elements of the system. The SDR is conducted at the end of the concept development phase (Phase A) and before the preliminary design phase (Phase B) begins.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the SDR, the system and its operation are well enough understood to warrant design and acquisition of the end items. Approved specifications for the system and its segments, and preliminary specifications for the design of appropriate functional elements, may be released. A configuration management plan is established to control design and requirement changes. Plans to control and integrate the expanded technical process are in place.

Table 6.7‐7 SDR Entrance and Success Criteria

Entrance Criteria
1. Successful completion of the SRR and responses made to all SRR RFAs and RIDs.
2. A preliminary SDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair prior to the SDR.
3. The SDR technical products listed below for both hardware and software system elements have been made available to the cognizant participants prior to the review:
   a. system architecture;
   b. preferred system solution definition, including major trades and options;
   c. updated baselined documentation, as required;
   d. preliminary functional baseline (with supporting trade-off analyses and data);
   e. preliminary system software functional requirements;
   f. SEMP changes, if any;
   g. updated risk management plan;
   h. updated risk assessment and mitigations (including PRA, as applicable);
   i. updated technology development maturity assessment plan;
   j. updated cost and schedule data;
   k. updated logistics documentation;
   l. based on system complexity, an updated human rating plan;
   m. software test plan;
   n. software requirements document(s);
   o. interface requirements documents (including software);
   p. technical resource utilization estimates and margins;
   q. updated SMA plan; and
   r. updated preliminary safety analysis.

Success Criteria
1. Systems requirements, including mission success criteria and any sponsor-imposed constraints, are defined and form the basis for the proposed conceptual design.
2. All technical requirements are allocated, and the flowdown to subsystems is adequate.
   The requirements, design approaches, and conceptual design will fulfill the mission needs consistent with the available resources (cost, schedule, mass, and power).
3. The requirements process is sound and can reasonably be expected to continue to identify and flow detailed requirements in a manner timely for development.
4. The technical approach is credible and responsive to the identified requirements.
5. Technical plans have been updated, as necessary.
6. The trade-offs are completed, and those planned for Phase B adequately address the option space.
7. Significant development, mission, and safety risks are identified and technically assessed, and a risk process and resources exist to manage the risks.
8. Adequate planning exists for the development of any enabling new technology.
9. The ConOps is consistent with the proposed design concept(s) and is in alignment with the mission requirements.


Preliminary Design Review
The PDR demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints, and it establishes the basis for proceeding with detailed design. It shows that the correct design options have been selected, interfaces have been identified, approximately 10 percent of engineering drawings have been created, and verification methods have been described. The PDR occurs near the completion of the preliminary design phase (Phase B) as the last review in the Formulation phase.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the PDR, the design-to baseline is approved. A successful review result also authorizes the project to proceed into implementation and toward final design.

Table 6.7‐8 PDR Entrance and Success Criteria

Entrance Criteria
1. Successful completion of the SDR or MDR and responses made to all SDR or MDR RFAs and RIDs, or a timely closure plan exists for those remaining open.
2. A preliminary PDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair prior to the PDR.
3. The PDR technical products listed below for both hardware and software system elements have been made available to the cognizant participants prior to the review:
   a. updated baselined documentation, as required;
   b. preliminary subsystem design specifications for each configuration item (hardware and software), with supporting trade-off analyses and data, as required (the preliminary software design specification should include a completed definition of the software architecture and a preliminary database design description, as applicable);
   c. updated technology development maturity assessment plan;
   d. updated risk assessment and mitigation;
   e. updated cost and schedule data;
   f. updated logistics documentation, as required;
   g. applicable technical plans (e.g., technical performance measurement plan, contamination control plan, parts management plan, environments control plan, EMI/EMC control plan, payload-to-carrier integration plan, producibility/manufacturability program plan, reliability program plan, quality assurance plan);
   h. applicable standards;
   i. safety analyses and plans;
   j. engineering drawing tree;
   k. interface control documents;
   l. verification and validation plan;
   m. plans to respond to regulatory (e.g., National Environmental Policy Act) requirements, as required;
   n. disposal plan;
   o. technical resource utilization estimates and margins;
   p. system-level safety analysis; and
   q. preliminary LLIL.

Success Criteria
1. The top-level requirements, including mission success criteria, TPMs, and any sponsor-imposed constraints, are agreed upon, finalized, stated clearly, and consistent with the preliminary design.
2. The flowdown of verifiable requirements is complete and proper or, if not, an adequate plan exists for timely resolution of open items.
   Requirements are traceable to mission goals and objectives.
3. The preliminary design is expected to meet the requirements at an acceptable level of risk.
4. Definition of the technical interfaces is consistent with the overall technical maturity and provides an acceptable level of risk.
5. Adequate technical interfaces are consistent with the overall technical maturity and provide an acceptable level of risk.
6. Adequate technical margins exist with respect to TPMs.
7. Any required new technology has been developed to an adequate state of readiness, or backup options exist and are supported to make them a viable alternative.
8. The project risks are understood and have been credibly assessed, and plans, a process, and resources exist to effectively manage them.
9. SMA (e.g., safety, reliability, maintainability, quality, and EEE parts) has been adequately addressed in preliminary designs, and any applicable SMA products (e.g., PRA, system safety analysis, and failure modes and effects analysis) have been approved.
10. The operational concept is technically sound, includes (where appropriate) human factors, and includes the flowdown of requirements for its execution.
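The PDR success criteria call for adequate technical margins with respect to the TPMs and technical resource estimates. One common way to track this is to compare the current best estimate against the allocation and a required margin for each TPM. The sketch below is illustrative only; the TPM names, numbers, and required margins are invented for the example.

```python
# Illustrative TPM margin check (values are hypothetical).
# Margin is expressed as (allocation - current best estimate) / allocation.
tpms = {
    "dry mass (kg)":    {"allocation": 1200.0, "estimate": 1000.0, "required_margin": 0.15},
    "power at EOL (W)": {"allocation": 950.0,  "estimate": 910.0,  "required_margin": 0.10},
}

for name, tpm in tpms.items():
    margin = (tpm["allocation"] - tpm["estimate"]) / tpm["allocation"]
    status = "OK" if margin >= tpm["required_margin"] else "BELOW REQUIRED MARGIN"
    print(f"{name}: margin = {margin:.1%} ({status})")
```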


Critical Design Review
The purpose of the CDR is to demonstrate that the maturity of the design is appropriate to support proceeding with full-scale fabrication, assembly, integration, and test, and that the technical effort is on track to complete the flight and ground system development and mission operations in order to meet mission performance requirements within the identified cost and schedule constraints. Approximately 90 percent of engineering drawings are approved and released for fabrication. The CDR occurs during the final design phase (Phase C).

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the CDR, the build-to baseline, production plans, and verification plans are approved. A successful review result also authorizes coding of deliverable software (according to the build-to baseline and coding standards presented in the review) and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

Table 6.7‐9 CDR Entrance and Success Criteria

Entrance Criteria
1. Successful completion of the PDR and responses made to all PDR RFAs and RIDs, or a timely closure plan exists for those remaining open.
2. A preliminary CDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair prior to the CDR.
3. The CDR technical work products listed below for both hardware and software system elements have been made available to the cognizant participants prior to the review:
   a. updated baselined documents, as required;
   b. product build-to specifications for each hardware and software configuration item, along with supporting trade-off analyses and data;
   c. fabrication, assembly, integration, and test plans and procedures;
   d. technical data package (e.g., integrated schematics, spares provisioning list, interface control documents, engineering analyses, and specifications);
   e. operational limits and constraints;
   f. technical resource utilization estimates and margins;
   g. acceptance criteria;
   h. command and telemetry list;
   i. verification plan (including requirements and specifications);
   j. validation plan;
   k. launch site operations plan;
   l. checkout and activation plan;
   m. disposal plan (including decommissioning or termination);
   n. updated technology development maturity assessment plan;
   o. updated risk assessment and mitigation;
   p. updated reliability analyses and assessments;
   q. updated cost and schedule data;
   r. updated logistics documentation;
   s. software design document(s) (including interface design documents);
   t. updated LLIL;
   u. subsystem-level and preliminary operations safety analyses;
   v. system and subsystem certification plans and requirements (as needed); and
   w. system safety analysis with associated verifications.

Success Criteria
1. The detailed design is expected to meet the requirements with adequate margins at an acceptable level of risk.
2. Interface control documents are appropriately matured to proceed with fabrication, assembly, integration, and test, and plans are in place to manage any open items.
3. High confidence exists in the product baseline, and adequate documentation exists or will exist in a timely manner to allow proceeding with fabrication, assembly, integration, and test.
4. The product verification and product validation requirements and plans are complete.
5. The testing approach is comprehensive, and the planning for system assembly, integration, test, and launch site and mission operations is sufficient to progress into the next phase.
6. Adequate technical and programmatic margins and resources exist to complete the development within budget, schedule, and risk constraints.
7. Risks to mission success are understood and credibly assessed, and plans and resources exist to effectively manage them.
8. SMA (e.g., safety, reliability, maintainability, quality, and EEE parts) has been adequately addressed in system and operational designs, and any applicable SMA plan products (e.g., PRA, system safety analysis, and failure modes and effects analysis) have been approved.
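The CDR success criteria require that product verification and validation requirements and plans be complete. A simple rollup of a verification matrix can expose requirements that still lack an assigned method or closure status. The requirement IDs, methods, and statuses below are hypothetical and purely illustrative.

```python
from collections import Counter

# Hypothetical verification matrix rows: (requirement ID, method, status).
verification_matrix = [
    ("SYS-001", "Test",       "planned"),
    ("SYS-002", "Analysis",   "complete"),
    ("SYS-003", "Inspection", "planned"),
    ("SYS-004", None,         "unassigned"),  # no method assigned yet: should be flagged
]

unassigned = [rid for rid, method, _ in verification_matrix if method is None]
status_rollup = Counter(status for _, _, status in verification_matrix)

print("Requirements without a verification method:", unassigned)
print("Verification status rollup:", dict(status_rollup))
```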


Production Readiness Review
A PRR is held for FS&GS projects developing or acquiring multiple (more than three) or similar systems, or as determined by the project. The PRR determines the readiness of the system developers to efficiently produce the required number of systems. It ensures that the production plans; the fabrication, assembly, and integration-enabling products; and the personnel are in place and ready to begin production. The PRR occurs during the final design phase (Phase C).

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the PRR, the final production build-to baseline, production plans, and verification plans are approved. Approved drawings are released and authorized for production. A successful review result also authorizes coding of deliverable software (according to the build-to baseline and coding standards presented in the review) and system qualification testing and integration. All open issues should be resolved with closure actions and schedules.

Table 6.7‐10 PRR Entrance and Success Criteria

Entrance Criteria
1. The significant production engineering problems encountered during development are resolved.
2. The design documentation is adequate to support production.
3. The production plans and preparation are adequate to begin fabrication.
4. The production-enabling products and adequate resources are available, have been allocated, and are ready to support end product production.

Success Criteria
1. The design is appropriately certified.
2. The system requirements are fully met in the final production configuration.
3. Adequate measures are in place to support production.
4. Design-for-manufacturing considerations ensure ease and efficiency of production and assembly.
5. Risks have been identified, credibly assessed, and characterized, and mitigation efforts have been defined.
6. The bill of materials has been reviewed and critical parts identified.
7. Delivery schedules have been verified.
8. Alternative sources for resources have been identified, as appropriate.
9. Adequate spares have been planned and budgeted.
10. Required facilities and tools are sufficient for end product production.
11. Specified special tools and test equipment are available in proper quantities.
12. Production and support staff are qualified.
13. Drawings are certified.
14. Production engineering and planning are sufficiently mature for cost-effective production.
15. Production processes and methods are consistent with quality requirements and compliant with occupational safety, environmental, and energy conservation regulations.
16. Qualified suppliers are available for materials that are to be procured.


System Integration Review
An SIR ensures that the system is ready to be integrated: segments, components, and subsystems are available and ready to be integrated into the system, and integration facilities, support personnel, and integration plans and procedures are ready for integration. The SIR is conducted at the end of the final design phase (Phase C) and before the systems assembly, integration, and test phase (Phase D) begins.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the SIR, the final as-built baseline and verification plans are approved. Approved drawings are released and authorized to support integration. All open issues should be resolved with closure actions and schedules. The subsystem/system integration procedures, ground support equipment, facilities, logistical needs, and support personnel are planned for and are ready to support integration.

Table 6.7‐11 SIR Entrance and Success Criteria

Entrance Criteria
1. Integration plans and procedures have been completed and approved.
2. Segments and/or components are available for integration.
3. Mechanical and electrical interfaces have been verified against the interface control documentation.
4. All applicable functional, unit-level, subsystem, and qualification testing has been conducted successfully.
5. Integration facilities, including clean rooms, ground support equipment, handling fixtures, overhead cranes, and electrical test equipment, are ready and available.
6. Support personnel have been adequately trained.
7. Handling and safety requirements have been documented.
8. All known system discrepancies have been identified and disposed in accordance with an agreed-upon plan.
9. All previous design review success criteria and key issues have been satisfied in accordance with an agreed-upon plan.
10. The quality control organization is ready to support the integration effort.

Success Criteria
1. Adequate integration plans and procedures are completed and approved for the system to be integrated.
2. Previous component, subsystem, and system test results form a satisfactory basis for proceeding to integration.
3. The risk level is identified and accepted by program/project leadership, as required.
4. The integration procedures and workflow have been clearly defined and documented.
5. The review of the integration plans, as well as the procedures, environment, and the configuration of the items to be integrated, provides a reasonable expectation that the integration will proceed successfully.
6. Integration personnel have received appropriate training in the integration and safety procedures.


Test Readiness Review
A TRR ensures that the test article (hardware/software), test facility, support personnel, and test procedures are ready for testing and for data acquisition, reduction, and control. A TRR is held prior to commencement of verification or validation testing.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
A successful TRR signifies that test and safety engineers have certified that preparations are complete and that the project manager has authorized formal test initiation.

Table 6.7‐12 TRR Entrance and Success Criteria

Entrance Criteria
1. The objectives of the testing have been clearly defined and documented, and all of the test plans, procedures, environment, and the configuration of the test item(s) support those objectives.
2. The configuration of the system under test has been defined and agreed to. All interfaces have been placed under configuration management or have been defined in accordance with an agreed-to plan, and a version description document has been made available to TRR participants prior to the review.
3. All applicable functional, unit-level, subsystem, system, and qualification testing has been conducted successfully.
4. All TRR-specific materials, such as test plans, test cases, and procedures, have been made available to all participants prior to conducting the review.
5. All known system discrepancies have been identified and disposed in accordance with an agreed-upon plan.
6. All previous design review success criteria and key issues have been satisfied in accordance with an agreed-upon plan.
7. All required test resources, including people (with a designated test director), facilities, test articles, test instrumentation, and other enabling products, have been identified and are available to support required tests.
8. Roles and responsibilities of all test participants are defined and agreed to.
9. Test contingency planning has been accomplished, and all personnel have been trained.

Success Criteria
1. Adequate test plans are completed and approved for the system under test.
2. Adequate identification and coordination of required test resources are completed.
3. Previous component, subsystem, and system test results form a satisfactory basis for proceeding into planned tests.
4. The risk level is identified and accepted by program/competency leadership, as required.
5. Plans to capture any lessons learned from the test program are documented.
6. The objectives of the testing have been clearly defined and documented, and the review of all the test plans, as well as the procedures, environment, and the configuration of the test item, provides a reasonable expectation that the objectives will be met.
7. The test cases have been reviewed and analyzed for expected results, and the results are consistent with the test plans and objectives.
8. Test personnel have received appropriate training in test operation and safety procedures.


System Acceptance Review
The SAR verifies the completeness of the specific end products in relation to their expected maturity level and assesses compliance with stakeholder expectations. The SAR examines the system, its end products and documentation, and the test data and analyses that support verification. It also ensures that the system has sufficient technical maturity to authorize its shipment to the designated operational facility or launch site.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful completion of the SAR, the system is accepted by the buyer, and authorization is given to ship the hardware to the launch site or operational facility and to install software and hardware for operational use.

Table 6.7‐13 SAR Entrance and Success Criteria

Entrance Criteria
1. A preliminary agenda has been coordinated (nominally) prior to the SAR.
2. The following SAR technical products have been made available to the cognizant participants prior to the review:
   a. results of the SARs conducted at the major suppliers;
   b. transition to production and/or manufacturing plan;
   c. product verification results;
   d. product validation results;
   e. documentation that the delivered system complies with the established acceptance criteria;
   f. documentation that the system will perform properly in the expected operational environment;
   g. technical data package updated to include all test results;
   h. certification package;
   i. updated risk assessment and mitigation;
   j. successfully completed previous milestone reviews; and
   k. remaining liens or unclosed actions and plans for closure.

Success Criteria
1. Required tests and analyses are complete and indicate that the system will perform properly in the expected operational environment.
2. Risks are known and manageable.
3. The system meets the established acceptance criteria.
4. Required safe shipping, handling, checkout, and operational plans and procedures are complete and ready for use.
5. The technical data package is complete and reflects the delivered system.
6. All applicable lessons learned for organizational improvement and system operations are captured.


Operational Readiness Review
The ORR examines the actual system characteristics and the procedures used in the system or end product's operation, and it ensures that all system and support (flight and ground) hardware, software, personnel, procedures, and user documentation accurately reflect the deployed state of the system.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful ORR completion, the system is ready to assume normal operations.

Table 6.7‐14 ORR Entrance and Success Criteria

Entrance Criteria
1. All validation testing has been completed.
2. Test failures and anomalies from validation testing have been resolved, and the results have been incorporated into all supporting and enabling operational products.
3. All operational supporting and enabling products (e.g., facilities, equipment, documents, updated databases) that are necessary for nominal and contingency operations have been tested and delivered/installed at the site(s) necessary to support operations.
4. The operations handbook has been approved.
5. Training has been provided to the users and operators on the correct operational procedures for the system.
6. Operational contingency planning has been accomplished, and all personnel have been trained.

Success Criteria
1. The system, including any enabling products, is determined to be ready to be placed in an operational status.
2. All applicable lessons learned for organizational improvement and systems operations have been captured.
3. All waivers and anomalies have been closed.
4. Systems hardware, software, personnel, and procedures are in place to support operations.


Flight Readiness Review
The FRR examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful flight or launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.

Objectives
The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Results of Review
As a result of successful FRR completion, technical and procedural maturity exists for system launch and flight authorization and, in some cases, initiation of system operations.

Table 6.7‐15 FRR Entrance and Success Criteria

Entrance Criteria
1. Receive certification that flight operations can safely proceed with acceptable risk.
2. The system and support elements have been confirmed as properly configured and ready for flight.
3. Interfaces are compatible and function as expected.
4. The system state supports a launch Go decision based on Go or No-Go criteria.
5. Flight failures and anomalies from previously completed flights and reviews have been resolved, and the results have been incorporated into all supporting and enabling operational products.
6. The system has been configured for flight.

Success Criteria
1. The flight vehicle is ready for flight.
2. The hardware is deemed acceptably safe for flight (i.e., it meets the established acceptable risk criteria or is documented as being accepted by the PM and DGA).
3. Flight and ground software elements are ready to support flight and flight operations.
4. Interfaces are checked out and found to be functional.
5. Open items and waivers have been examined and found to be acceptable.
6. The flight and recovery environmental factors are within constraints.
7. All open safety and mission risk items have been addressed.
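One FRR entrance criterion is that the system state supports a launch Go decision based on Go or No-Go criteria. Operationally this amounts to polling stations and holding if any report no-go. The sketch below only illustrates that aggregation; the station names and votes are invented and are not part of any NASA launch procedure.

```python
# Illustrative Go/No-Go poll aggregation (station names and votes are hypothetical).
polls = {
    "flight vehicle": "go",
    "ground software": "go",
    "range safety": "no-go",   # any single no-go holds the launch
    "weather": "go",
}

holds = [station for station, vote in polls.items() if vote != "go"]
if holds:
    print("Launch hold; no-go reported by:", ", ".join(holds))
else:
    print("All stations report go.")
```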


Post-Launch Assessment Review
A PLAR is a post-deployment evaluation of the readiness of the spacecraft systems to proceed with full, routine operations. The review evaluates the status, performance, and capabilities of the project evident from the flight operations experience since launch. This can also mean assessing readiness to transfer responsibility from the development organization to the operations organization. The review also evaluates the status of the project plans and the capability to conduct the mission, with emphasis on near-term operations and mission-critical events. This review is typically held after the early flight operations and initial checkout.

The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Table 6.7‐16 PLAR Entrance and Success Criteria

Entrance Criteria
1. The launch and early operations performance, including (when appropriate) the early propulsive maneuver results, are available.
2. The observed spacecraft and science instrument performance, including instrument calibration plans and status, are available.
3. The launch vehicle performance assessment and mission implications, including launch sequence assessment and launch operations experience with lessons learned, are completed.
4. The mission operations and ground data system experience, including tracking and data acquisition support and spacecraft telemetry data analysis, are available.
5. The mission operations organization, including status of staffing, facilities, tools, and mission software (e.g., spacecraft analysis and sequencing), is available.
6. In-flight anomalies and the responsive actions taken, including any autonomous fault protection actions taken by the spacecraft, or any unexplained spacecraft telemetry, including alarms, are documented.
7. The need for significant changes to procedures, interface agreements, software, and staffing has been documented.
8. Documentation is updated, including any updates originating from the early operations experience.
9. Future development/test plans are developed.

Success Criteria
1. The observed spacecraft and science payload performance agrees with predictions, or, if not, it is adequately understood so that future behavior can be predicted with confidence.
2. All anomalies have been adequately documented and their impact on operations assessed. Further, anomalies impacting spacecraft health and safety or critical flight operations have been properly disposed.
3. The mission operations capabilities, including staffing and plans, are adequate to accommodate the actual flight performance.
4. Liens, if any, on operations, identified as part of the ORR, have been satisfactorily disposed.


Critical Event Readiness Review
A CERR confirms the project's readiness to execute the mission's critical activities during flight operation.

The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Table 6.7‐17 CERR Entrance and Success Criteria

Entrance Criteria
1. Mission overview and context for the critical event(s).
2. Activity requirements and constraints.
3. Critical activity sequence design description, including key trade-offs and rationale for the selected approach.
4. Fault protection strategy.
5. Critical activity operations plan, including planned uplinks and criticality.
6. Sequence verification (testing, walk-throughs, peer review) and critical activity validation.
7. Operations team training plan and readiness report.
8. Risk areas and mitigations.
9. Spacecraft readiness report.
10. Open items and plans.

Success Criteria
1. The critical activity design complies with requirements.
2. The preparation for the critical activity, including the verification and validation, is thorough.
3. The project (including all the systems, supporting services, and documentation) is ready to support the activity.
4. The requirements for the successful execution of the critical event(s) are complete and understood and have been flowed down to the appropriate levels for implementation.

Post-Flight Assessment Review
The PFAR evaluates the activities from the flight after recovery. The review identifies all anomalies that occurred during the flight and mission and determines the actions necessary to mitigate or resolve the anomalies for future flights.

The objectives of the review are to:
• Ensure a thorough review of the products supporting the review.
• Ensure the products meet the entrance criteria and success criteria.
• Ensure issues raised during the review are appropriately documented and a plan for resolution is prepared.

Table 6.7‐18 PFAR Entrance and Success Criteria

Entrance Criteria
1. All anomalies that occurred during the mission, as well as during preflight testing, countdown, and ascent, have been identified.
2. A report on the overall post-recovery condition has been completed.
3. Any evidence of ascent debris has been reported.
4. All photo and video documentation is available.
5. Retention plans for scrapped hardware have been completed.
6. The post-flight assessment team operating plan has been completed.
7. Disassembly activities have been planned and scheduled.
8. Processes and controls to coordinate in-flight anomaly troubleshooting and post-flight data preservation have been developed.
9. Problem reports, corrective action requests, post-flight anomaly records, and final post-flight documentation have been completed.
10. All post-flight hardware and flight data evaluation reports have been completed.

Success Criteria
1. A formal final report documents flight performance and recommendations for future missions.
2. All anomalies have been adequately documented and disposed.
3. The impact of anomalies on future flight operations has been assessed.
4. Plans for retaining assessment documentation and imaging have been made.
5. Reports and other documentation have been added to a database for performance comparison and trending.


6.7 <strong>Technical</strong> AssessmentDecommissioning ReviewThe DR confirms the decision to terminate or decommissionthe system and assesses the readiness of thesystem for the safe decommissioning and disposal ofsystem assets. The DR is normally held near the end ofroutine mission operations upon accomplishment ofplanned mission objectives. It may be advanced if someunplanned event gives rise to a need to prematurely terminatethe mission, or delayed if operational life is extendedto permit additional investigations.ObjectivesThe objectives of the review are to:zz Ensure a thorough review of the products supportingthe review.zz Ensure the products meet the entrance criteria andsuccess criteria.zz Ensure issues raised during the review are appropri-ately documented and a plan for resolution is prepared.Results of ReviewA successful DR completion ensures that the decommissioningand disposal of system items and processes areappropriate and effective.Table 6.7‐19 DR Entrance and Success CriteriaDecommissioning ReviewEntrance CriteriaSuccess Criteria1.2.3.4.5.6.Requirements associated with decommissioningand disposal are defined.Plans are in place for decommissioning,disposal, and any other removal fromservice activities.Resources are in place to support decommissioningand disposal activities,plans for disposition of project assets,and archival of essential mission andproject data.Safety, environmental, and any otherconstraints are described.Current system capabilities aredescribed.For off-nominal operations, all contributingevents, conditions, and changesto the originally expected baseline aredescribed.1.2.3.4.5.6.7.8.The reasons for decommissioning disposal are documented.The decommissioning and disposal plan is complete, approved by appropriatemanagement, and compliant with applicable Agency safety,environmental, and health regulations. Operations plans for all potentialscenarios, including contingencies, are complete and approved. Allrequired support systems are available.All personnel have been properly trained for the nominal and contingencyprocedures.Safety, health, and environmental hazards have been identified. Controlshave been verified.Risks associated with the disposal have been identified and adequatelymitigated. Residual risks have been accepted by the required management.If hardware is to be recovered from orbit:a. Return site activity plans have been defined and approved.b. Required facilities are available and meet requirements, includingthose for contamination control, if needed.c. Transportation plans are defined and approved. Shipping containersand handling equipment, as well as contamination and environmentalcontrol and monitoring devices, are available.Plans for disposition of mission-owned assets (i.e., hardware, software,facilities) have been defined and approved.Plans for archival and subsequent analysis of mission data have beendefined and approved. Arrangements have been finalized for theexecution of such plans. Plans for the capture and dissemination ofappropriate lessons learned during the project life cycle have beendefined and approved. Adequate resources (schedule, budget, andstaffing) have been identified and are available to successfully completeall decommissioning, disposal, and disposition activities.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 187


6.0 Crosscutting <strong>Technical</strong> ManagementOther <strong>Technical</strong> ReviewsThese typical technical reviews are some that have beenconducted on previous programs and projects but arenot required as part of the NPR 7123.1 systems engineeringprocess.Design Certification ReviewPurposeThe Design Certification Review (DCR) ensures that thequalification verifications demonstrate design compliancewith functional and performance requirements.TimingThe DCR follows the system CDR, and after qualificationtests and all modifications needed to implementqualification-caused corrective actions have been completed.ObjectivesThe objectives of the review are to:zz Confirm that the verification results met functionaland performance requirements, and that test plansand procedures were executed correctly in the specifiedenvironments.zz Certify that traceability between test article and pro-duction article is correct, including name, identificationnumber, and current listing of all waivers.zz Identify any incremental tests required or conducteddue to design or requirements changes made since testinitiation, and resolve issues regarding their results.Criteria for Successful CompletionThe following items comprise a checklist to aid in determiningthe readiness of DCR product preparation:zz Are the pedigrees of the test articles directly traceableto the production units?zz Is the verification plan used for this article currentand approved?zz Do the test procedures and environments used complywith those specified in the plan?zz Are there any changes in the test article configurationor design resulting from the as-run tests? Do they requiredesign or specification changes and/or retests?zz Have design and specification documents been au-dited?zz Do the verification results satisfy functional and per-formance requirements?zz Do the verification, design, and specification docu-mentation correlate?Results of ReviewAs a result of a successful DCR, the end item design isapproved for production. All open issues should be resolvedwith closure actions and schedules.188 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.7 <strong>Technical</strong> AssessmentFunctional and Physical Configuration AuditsConfiguration audits confirm that the configuredproduct is accurate and complete. The two types of configurationaudits are the Functional Configuration Audit(FCA) and the Physical Configuration Audit (PCA). TheFCA examines the functional characteristics of the configuredproduct and verifies that the product has met, viatest results, the requirements specified in its functionalbaseline documentation approved at the PDR and CDR.FCAs will be conducted on both hardware or softwareconfigured products and will precede the PCA of theconfigured product. The PCA (also known as a configurationinspection) examines the physical configurationof the configured product and verifies that the productcorresponds to the build-to (or code-to) product baselinedocumentation previously approved at the CDR.PCAs will be conducted on both hardware and softwareconfigured products.<strong>Technical</strong> Peer ReviewsPeer reviews provide the technical insight essential toensure product and process quality. Peer reviews arefocused, in-depth technical reviews that support theevolving design and development of a product, includingcritical documentation or data packages. They are often,but not always, held as supporting reviews for technicalreviews such as PDR and CDR. A purpose of the peerreview is to add value and reduce risk through expertknowledge infusion, confirmation of approach, identificationof defects, and specific suggestions for productimprovements.The results of the engineering peer reviews comprise akey element of the review process. The results and issuesthat surface during these reviews are documentedand reported out at the appropriate next higher elementlevel.The peer reviewers should be selected from outside theproject, but they should have a similar technical background,and they should be selected for their skill andexperience. Peer reviewers should be concerned withonly the technical integrity and quality of the product.Peer reviews should be kept simple and informal. Theyshould concentrate on a review of the documentationand minimize the viewgraph presentations. 
A roundtableformat rather than a stand-up presentation is preferred.The peer reviews should give the full technicalpicture of items being reviewed.Table 6.7‐20 Functional and Physical Configuration AuditsRepresentative Audit Data Listzz Design specificationszz Design drawings and parts listFCAzz <strong>Engineering</strong> change proposals/engineering change requestszz Deviation/waiver approval requests incorporated and pendingzz Specification and drawing treezz Fracture control planzz Structural dynamics, analyses, loads, and models documentationzz Materials usage agreements/materials identification usage listzz Verification and validation requirements, plans, procedures, andreportszz Software requirements and development documentszz Listing of accomplished tests and test resultszz CDR completion documentation including RIDs/RFAs and disposi-tion reportszz Analysis reportszz ALERT (Acute Launch Emergency Restraint Tip) tracking logzz Hazard analysis/risk assessmentPCAzz Final version of all specificationszz Product drawings and parts listzz Configuration accounting and status reportszz Final version of all software and software docu-mentszz Copy of all FCA findings for each productzz List of approved and outstanding engineeringchange proposals, engineering change requests,and deviation/waiver approval requestszz Indentured parts listzz As-run test procedureszz Drawing and specification treezz Manufacturing and inspection “build” recordszz Inspection recordszz As-built discrepancy reportszz Product log bookszz As-built configuration list<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 189


Technical depth should be established at a level that allows the review team to gain insight into the technical risks. Rules need to be established to ensure consistency in the peer review process. At the conclusion of the review, a report on the findings, recommendations, and actions must be distributed to the technical team.

For those projects where systems engineering is done out-of-house, peer reviews must be part of the contract. Additional guidance on establishing and conducting peer reviews can be found in Appendix N.

6.7.2.2 Status Reporting and Assessment
This subsection provides additional information on status reporting and assessment techniques for costs and schedules (including EVM), technical performance, and systems engineering process metrics.

Cost and Schedule Control Measures
Status reporting and assessment on costs and schedules provides the project manager and systems engineer visibility into how well the project is tracking against its planned cost and schedule targets. From a management point of view, achieving these targets is on a par with meeting the technical performance requirements of the system. It is useful to think of cost and schedule status reporting and assessment as measuring the performance of the "system that produces the system."

Assessment Methods
Performance measurement data are used to assess project cost, schedule, and technical performance and their impacts on the completion cost and schedule of the project. In program control terminology, a difference between actual performance and planned costs or schedule status is called a "variance." Variances must be controlled at the control account level, which is typically at the subsystem WBS level. The person responsible for this activity is frequently called the Control Account Manager (CAM). The CAM develops work and product plans, schedules, and time-phased resource plans. The technical subsystem managers/leads often take on this role as part of their subsystem management responsibilities.

Figure 6.7-3 illustrates two types of variances, cost and schedule, and some related concepts. A product-oriented WBS divides the project work into discrete tasks and products. Associated with each task and product (at any level in the WBS) is a schedule and a budgeted (i.e., planned) cost. The Budgeted Cost for Work Scheduled (BCWS_t) for any set of WBS elements is the sum of the budgeted cost of all work on tasks and products in those elements scheduled to be completed by time t. The Budgeted Cost for Work Performed (BCWP_t), also called Earned Value (EV_t), is the sum of the budgeted cost for tasks and products that have actually been produced at time t in the schedule for those WBS elements. The difference, BCWP_t - BCWS_t, is called the schedule variance at time t. A negative value indicates that the work is behind schedule.

The Actual Cost of Work Performed (ACWP_t) represents the funds that have been expended up to time t on those WBS elements. The difference between the budgeted and actual costs, BCWP_t - ACWP_t, is called the cost variance at time t. A negative value here indicates a cost overrun.

NPR 7120.5 provides specific requirements for the application of EVM to support cost and schedule management. EVM is applicable to both in-house and contracted efforts. The level of EVM system implementation will depend on the dollar value and risk of a project or contract. The standard for EVM systems is ANSI-EIA-748.
The project manager/systems engineer will use the guidelines to establish the program and project EVM implementation plan.

[Figure 6.7-3 Cost and schedule variances: cumulative plots of BCWS, BCWP (or EV), and ACWP against time, showing the schedule variance to date (BCWP - BCWS), the cost variance to date (BCWP - ACWP), the budget, the EAC, and the forecast of cost variance at completion.]
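To make the variance definitions above concrete, the short sketch below computes the schedule and cost variances from cumulative BCWS, BCWP, and ACWP values. The figures are illustrative only and are not drawn from any project.

```python
def schedule_variance(bcwp, bcws):
    """Schedule variance at time t: SV = BCWP - BCWS (negative means behind schedule)."""
    return bcwp - bcws


def cost_variance(bcwp, acwp):
    """Cost variance at time t: CV = BCWP - ACWP (negative means a cost overrun)."""
    return bcwp - acwp


# Illustrative cumulative values (in $K) for one control account at time t
bcws, bcwp, acwp = 1200.0, 1050.0, 1180.0

sv = schedule_variance(bcwp, bcws)   # -150.0 -> work is behind schedule
cv = cost_variance(bcwp, acwp)       # -130.0 -> cost overrun to date

print(f"Schedule variance to date: {sv:+.1f} $K")
print(f"Cost variance to date:     {cv:+.1f} $K")
```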


When either schedule variance or cost variance exceeds preestablished control-account-level thresholds that represent significant departures from the baseline plan, the conditions must be analyzed to identify why the variance exists. Once the cause is understood, the CAM can make an informed forecast of the time and resources needed to complete the control account. When corrective actions are feasible (can stay within the BCWS), the plan for implementing them must be included in the analysis. Sometimes no corrective action is feasible; overruns or schedule slips may be unavoidable. One must keep in mind that the earlier a technical problem is identified as a result of schedule or cost variances, the more likely the project team can minimize the impact on completion.

Variances may indicate that the cost Estimate at Completion (EAC_t) of the project is likely to be different from the Budget at Completion (BAC). The difference between the BAC and the EAC is the Variance at Completion (VAC). A negative VAC is generally unfavorable, while a positive one is usually favorable. These variances may also point toward a change in the scheduled completion date of the project. These types of variances enable a program analyst to estimate the EAC at any point in the project life cycle. (See box on analyzing the EAC.) These analytically derived estimates should be used only as a "sanity check" against the estimates prepared in the variance analysis process.

If the cost and schedule baselines and the technical scope of the work are not adequately defined and fully integrated, then it is very difficult (or impossible) to estimate the current cost EAC of the project.

Other efficiency factors can be calculated using the performance measurement data. The Schedule Performance Index (SPI) is a measure of work accomplishment in dollars. The SPI is calculated by dividing the work accomplished in dollars, or BCWP, by the dollar value of the work scheduled, or BCWS. Like any other ratio, a value less than one is a sign of a behind-schedule condition, a value equal to one indicates an on-schedule status, and a value greater than one denotes that work is ahead of schedule. The Cost Performance Index (CPI) is a measure of cost efficiency and is calculated as the ratio of the earned value, or BCWP, for a segment of work to the cost of completing that same segment of work, or ACWP. The CPI shows how much work is being accomplished for every dollar spent on the project. A CPI of less than one reveals negative cost efficiency, a CPI equal to one is right on cost, and a CPI greater than one is positive. Note that traditional measures compare planned cost to actual cost; however, this comparison is never made using earned value data. Comparing planned to actual costs is an indicator only of spending and not of overall project performance.

Analyzing the Estimate at Completion
An EAC can be estimated at any point in the project and should be reviewed at least on a monthly basis. The EAC requires a detailed review by the CAM. A statistical estimate can be used as a cross-check of the CAM's estimate and to develop a range to bound the estimate. The appropriate formula used to calculate the statistical EAC depends upon the reasons associated with any variances that may exist. If a variance exists due to a one-time event, such as an accident, then EAC = ACWP + (BAC - BCWP). The CPI and SPI should also be considered in developing the EAC. If there is a growing number of liens, action items, or significant problems that will increase the difficulty of future work, the EAC might grow at a greater rate than estimated by the above equation. Such factors could be addressed using the risk management methods described in Section 6.4.
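Continuing the same illustrative numbers, the sketch below computes the SPI and CPI and applies the one-time-event EAC formula from the box alongside a commonly used CPI-based projection. The CPI-based projection is shown only as an illustration of how the performance indices can inform an EAC estimate; it is not prescribed by the text, and all values are invented.

```python
def spi(bcwp, bcws):
    """Schedule Performance Index: <1 behind schedule, 1 on schedule, >1 ahead of schedule."""
    return bcwp / bcws


def cpi(bcwp, acwp):
    """Cost Performance Index: <1 negative cost efficiency, 1 on cost, >1 positive."""
    return bcwp / acwp


def eac_one_time_event(acwp, bac, bcwp):
    """EAC when the variance stems from a one-time event: EAC = ACWP + (BAC - BCWP)."""
    return acwp + (bac - bcwp)


def eac_cpi_based(acwp, bac, bcwp):
    """Illustrative CPI-based projection: remaining budgeted work is assumed to
    continue at the cost efficiency achieved to date."""
    return acwp + (bac - bcwp) / cpi(bcwp, acwp)


# Invented cumulative values (in $K)
bac, bcws, bcwp, acwp = 5000.0, 1200.0, 1050.0, 1180.0

print(f"SPI = {spi(bcwp, bcws):.2f}")   # 0.88 -> behind schedule
print(f"CPI = {cpi(bcwp, acwp):.2f}")   # 0.89 -> cost overrun to date
print(f"EAC (one-time event) = {eac_one_time_event(acwp, bac, bcwp):,.0f} $K")
print(f"EAC (CPI projection) = {eac_cpi_based(acwp, bac, bcwp):,.0f} $K")
print(f"VAC = BAC - EAC      = {bac - eac_cpi_based(acwp, bac, bcwp):,.0f} $K")
```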
Note that traditional measures compareplanned cost to actual cost; however, this comparisonis never made using earned value data. Comparingplanned to actual costs is an indicator only of spendingand not of overall project performance.<strong>Technical</strong> Measures—MOEs, MOPs, and TPMsMeasures of EffectivenessMOEs are the “operational” measures of success that areclosely related to the achievement of mission or operationalobjectives in the intended operational environment.MOEs are intended to focus on how well missionor operational objectives are achieved, not on how theyare achieved, i.e., MOEs should be independent of anyparticular solution. As such, MOEs are the standardsagainst which the “goodness” of each proposed solutionmay be assessed in trade studies and decision analyses.Measuring or calculating MOEs not only makes it possibleto compare alternative solutions quantitatively, butsensitivities to key assumptions regarding operationalenvironments and to any underlying MOPs can also beinvestigated. (See MOP discussion below.)In the systems engineering process, MOEs are used to:zz Define high-level operational requirements from thecustomer/stakeholder viewpoint.zz Compare and rank alternative solutions in tradestudies.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 191


• Investigate the relative sensitivity of the projected mission or operational success to key operational assumptions and performance parameters.
• Determine that the mission or operational success quantitative objectives remain achievable as system development proceeds. (See the TPM discussion below.)

Measures of Performance
MOPs are the measures that characterize physical or functional attributes relating to the system, e.g., engine Isp, max thrust, mass, and payload-to-orbit. These attributes are generally measured under specified test conditions or operational environments. MOPs are attributes deemed important in achieving mission or operational success, but they do not measure it directly. Usually multiple MOPs contribute to an MOE. MOPs often become system performance requirements that, when met by a design solution, result in achieving a critical threshold for the system MOEs.

The distinction between MOEs and MOPs is that they are formulated from different viewpoints. An MOE refers to the effectiveness of a solution from the mission or operational success criteria expressed by the user/customer/stakeholder. An MOE represents a stakeholder expectation that is critical to the success of the system, and failure to attain a critical value for it will cause the stakeholder to judge the system a failure. An MOP is a measure of actual performance of a (supplier's) particular design solution, which taken alone may only be indirectly related to the customer/stakeholder's concerns.

Technical Performance Measures
TPMs are critical or key mission success or performance parameters that are monitored during implementation by comparing the current actual achievement of the parameters with the values that were anticipated for the current time and projected for future dates. They are used to confirm progress and identify deficiencies that might jeopardize meeting a system requirement or put the project at cost or schedule risk. When a TPM value falls outside the expected range around the anticipated value, it signals a need for evaluation and corrective action.

In the systems engineering process, TPMs are used to:
• Forecast values to be achieved by critical parameters at major milestones or key events during implementation.
• Identify differences between the actual and planned values for those parameters.
• Provide projected values for those parameters in order to assess the implications for system effectiveness.
• Provide early warning for emerging risks requiring management attention (when negative margins exist).
• Provide early identification of potential opportunities to make design trades that reduce risk or cost, or increase system effectiveness (when positive margins exist).
• Support assessment of proposed design changes.

Selecting TPMs
TPMs are typically selected from the defined set of MOEs and MOPs. Understanding that TPM tracking requires allocation of resources, care should be exercised in selecting a small set of succinct TPMs that accurately reflect key parameters or risk factors, are readily measurable, and can be affected by altering design decisions. In general, TPMs can be generic (attributes that are meaningful to each PBS element, like mass or reliability) or unique (attributes that are meaningful only to specific PBS elements). The relationship of MOEs, MOPs, and TPMs is shown in Figure 6.7-4. The systems engineer needs to decide which generic and unique TPMs are worth tracking at each level of the PBS. (See box for examples of TPMs.)
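As a purely illustrative sketch of the Figure 6.7-4 relationships, the snippet below represents a small MOE-to-MOP-to-TPM hierarchy and flags any TPM whose current estimate violates its limit. All names and numbers are hypothetical and do not come from the handbook or any project.

```python
# Hypothetical MOE -> MOP -> TPM hierarchy; every entry is invented for illustration.
hierarchy = {
    "MOE: probability of mission success": {
        "MOP: injected mass capability": [
            # (TPM name, current best estimate, limit, limit type)
            ("TPM: spacecraft injected mass (kg)", 1480.0, 1500.0, "max"),
        ],
        "MOP: electrical power margin": [
            ("TPM: end-of-life power margin (%)", 8.0, 10.0, "min"),
        ],
    },
}


def violates(estimate, limit, limit_type):
    """A 'max' TPM must stay at or below its limit; a 'min' TPM at or above it."""
    return estimate > limit if limit_type == "max" else estimate < limit


for moe, mops in hierarchy.items():
    for mop, tpms in mops.items():
        for name, estimate, limit, kind in tpms:
            status = "OUT OF LIMIT" if violates(estimate, limit, kind) else "ok"
            print(f"{moe} | {mop} | {name}: {estimate} (limit {kind} {limit}) -> {status}")
```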
[Figure 6.7-4 Relationships of MOEs, MOPs, and TPMs: MOEs are derived from stakeholder expectation statements and are deemed critical to the mission or operational success of the system; MOPs are broad physical and performance parameters that provide the means of ensuring that the associated MOEs are met; TPMs are critical mission success or performance attributes that are measurable and whose progress profiles are established, controlled, and monitored.]

At lower levels of the PBS, TPMs


worth tracking can be identified through the functional and performance requirements levied on each individual system, subsystem, etc.

Examples of Technical Performance Measures

TPMs from MOEs
• Mission performance (e.g., total science data volume returned)
• Safety (e.g., probability of loss of crew, probability of loss of mission)
• Achieved availability (e.g., (system uptime)/(system uptime + system downtime))

TPMs from MOPs
• Thrust versus predicted/specified
• Isp versus predicted/specified
• End of Mission (EOM) dry mass
• Injected mass (includes EOM dry mass, baseline mission plus reserve propellant, other consumables, and upper stage adaptor mass)
• Propellant margins at EOM
• Other consumables margins at EOM
• Electrical power margins over mission life
• Control system stability margins
• EMI/EMC susceptibility margins
• Onboard data processing memory demand
• Onboard data processing throughput time
• Onboard data bus capacity
• Total pointing error
• Total vehicle mass at launch
• Payload mass (at nominal altitude or orbit)
• Reliability
• Mean time before refurbishment required
• Total crew maintenance time required
• System turnaround time
• Fault detection capability
• Percentage of system designed for on-orbit crew access

As TPMs are intended to provide an early warning of the adequacy of a design in satisfying selected critical technical parameter requirements, the systems engineer should select TPMs that fall within well-defined (quantitative) limits for reasons of system effectiveness or mission feasibility. Usually these limits represent either a firm upper or lower bound constraint. A typical example of such a TPM for a spacecraft is its injected mass, which must not exceed the capability of the selected launch vehicle. Tracking injected mass as a high-level TPM is meant to ensure that this does not happen. A high-level TPM like injected mass must often be "budgeted" and allocated to multiple system elements. Tracking and reporting should be required at these lower levels to gain visibility into the sources of any variances.

In summary, for a TPM to be a valuable status and assessment tool, certain criteria must be met:
• Be a significant descriptor of the system (e.g., weight, range, capacity, response time, safety parameter) that will be monitored at key events (e.g., reviews, audits, planned tests).
• Can be measured (either by test, inspection, demonstration, or analysis).
• Is such that reasonable projected progress profiles can be established (e.g., from historical data or based on test planning).

TPM Assessment and Reporting Methods
Status reporting and assessment of the system's TPMs complement cost and schedule control. There are a number of assessment and reporting methods that have been used on NASA projects, including the planned profile method and the margin management method.

A detailed example of the planned profile method for the Chandra Project weight TPM is illustrated in Figure 6.7-5. This figure depicts the subsystem contributions, various constraints, project limits, and management reserves from project SRR to launch.

A detailed example of the margin management method for the Sojourner mass TPM is illustrated in Figure 6.7-6. This figure depicts the margin requirements (horizontal straight lines) and actual mass margins from project SRR to launch.

Relationship of TPM Assessment Program to the SEMP
The SEMP is the usual document for describing the project's TPM assessment program.
This description shouldinclude a master list of those TPMs to be tracked, and themeasurement and assessment methods to be employed.If analytical methods and models are used to measure<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 193


[Figure 6.7-5 Use of the planned profile method for the weight TPM with rebaseline in the Chandra Project: planned and estimated observatory basic weights, contractor specification and control weights, reserves, and the tolerance band are plotted in pounds from SRR through PDR and CDR to launch.]

[Figure 6.7-6 Use of the margin management method for the mass TPM in Sojourner: actual mass margin in kilograms is plotted monthly from July 1993 through January 1997 against the 15, 10, 5, and 1 percent required margin levels. Note: Current margin description: Microrover System (Rover + Lander-Mounted Rover Equipment (LMRE)) allocation = 16.0 kg; current best estimate = 15.2 kg; margin = 0.8 kg (5.0%).]

The TPM assessment program plan, which may be a part of the SEMP or a stand-alone document for large programs/projects, should specify each TPM's allocation, time-phased planned profile or margin requirement, and alert zones, as appropriate to the selected assessment method.

A formal TPM assessment program should be fully planned and baselined with the SEMP. Tracking TPMs should begin as soon as practical in Phase B. Data to support the full set of selected TPMs may, however, not be available until later in the project life cycle. As the project life cycle proceeds through Phases C and D, the measurement of TPMs should become increasingly more accurate with the availability of more actual data about the system.

For the WBS model in the system structure, typically the following activities are performed:
• Analyze stakeholder expectation statements to establish a set of MOEs by which overall system or product effectiveness will be judged and customer satisfaction will be determined.


• Define MOPs for each identified MOE.
• Define appropriate TPMs and document the TPM assessment program in the SEMP.

Systems Engineering Process Metrics
Status reporting and assessment of systems engineering process metrics provide additional visibility into the performance of the "system that produces the system." As such, these metrics supplement the cost and schedule control measures discussed in this subsection.

Systems engineering process metrics try to quantify the effectiveness and productivity of the systems engineering process and organization. Within a single project, tracking these metrics allows the systems engineer to better understand the health and progress of that project. Across projects (and over time), the tracking of systems engineering process metrics allows for better estimation of the cost and time of performing systems engineering functions. It also allows the systems engineering organization to demonstrate its commitment to continuous improvement.

Selecting Systems Engineering Process Metrics
Generally, systems engineering process metrics fall into three categories—those that measure the progress of the systems engineering effort, those that measure the quality of that process, and those that measure its productivity. Different levels of systems engineering management are generally interested in different metrics. For example, a project manager or lead systems engineer may focus on metrics dealing with systems engineering staffing, project risk management progress, and major trade study progress. A subsystem systems engineer may focus on subsystem requirements and interface definition progress and verification procedures progress. It is useful for each systems engineer to focus on just a few process metrics. Which metrics should be tracked depends on the systems engineer's role in the total systems engineering effort. The systems engineering process metrics worth tracking also change as the project moves through its life cycle.
Itmay be more useful to track the same metric separatelyfor each of several different types of requirements.Quality-related metrics should serve to indicate whena part of the systems engineering process is overloadedand/or breaking down. These metrics can be defined andtracked in several different ways. For example, requirementsvolatility can be quantified as the number of newlyidentified requirements, or as the number of changes toalready approved requirements. As another example,<strong>Engineering</strong> Change Request (ECR) processing could betracked by comparing cumulative ECRs opened versuscumulative ECRs closed, or by plotting the age profileof open ECRs, or by examining the number of ECRsopened last month versus the total number open. Thesystems engineer should apply his or her own judgmentin picking the status reporting and assessment method.Productivity-related metrics provide an indication ofsystems engineering output per unit of input. Althoughmore sophisticated measures of input exist, the mostcommon is the number of systems engineering hours dedicatedto a particular function or activity. Because not allsystems engineering hours cost the same, an appropriateweighing scheme should be developed to ensure comparabilityof hours across systems engineering personnel.Schedule-related metrics can be depicted in a table orgraph of planned quantities versus actuals, for example,comparing planned number of verification closure noticesagainst actual. This metric should not be confused<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 195


with EVM described in this subsection. EVM is focused on integrated cost and schedule at the desired level, whereas this metric focuses on an individual process or product within a subsystem, system, or project itself.

The combination of quality, productivity, and schedule metrics can provide trends that are generally more important than isolated snapshots. The most useful kind of assessment method allows comparisons of the trend on a current project with that for a successfully completed project of the same type. The latter provides a benchmark against which the systems engineer can judge his or her own efforts.

Table 6.7-21 Systems Engineering Process Metrics (S = progress or schedule related; Q = quality related; P = productivity related)

Requirements development and management:
• Requirements identified versus completed versus approved (S)
• Requirements volatility (Q)
• Trade studies planned versus completed (S)
• Requirements approved per systems engineering hour (P)

Design and development:
• Tracking of TBAs, TBDs, and TBRs (to be announced, determined, or resolved) resolved versus remaining (S)
• Specifications planned versus completed (S)
• Processing of engineering change proposals (ECPs)/engineering change requests (ECRs) (Q)
• Engineering drawings planned versus released (S)

Verification and validation:
• Verification and validation plans identified versus approved (S)
• Verification and validation procedures planned versus completed (S)
• Functional requirements approved versus verified (S)
• Verification and validation plans approved per systems engineering hour (P)
• Processing of problem/failure reports (Q)

Reviews:
• Processing of RIDs (Q)
• Processing of action items (Q)
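As a small illustration of trend-oriented tracking of one such quality metric, the sketch below compares cumulative ECRs opened and closed month by month and reports the open backlog; all counts are fabricated.

```python
# Fabricated monthly counts of engineering change requests (ECRs).
months      = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
ecrs_opened = [4, 7, 9, 6, 5, 3]
ecrs_closed = [1, 3, 6, 7, 8, 6]

cum_opened = cum_closed = 0
for month, opened, closed in zip(months, ecrs_opened, ecrs_closed):
    cum_opened += opened
    cum_closed += closed
    backlog = cum_opened - cum_closed
    trend = "growing" if opened > closed else "shrinking" if opened < closed else "flat"
    print(f"{month}: opened(cum)={cum_opened:2d}  closed(cum)={cum_closed:2d}  "
          f"open backlog={backlog:2d} ({trend})")
```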


6.8 Decision AnalysisThe purpose of this section is to provide a descriptionof the Decision Analysis Process, including alternativetools and methodologies. Decision analysis offers individualsand organizations a methodology for makingdecisions; it also offers techniques for modeling decisionproblems mathematically and finding optimal decisionsnumerically. Decision models have the capacityfor accepting and quantifying human subjective inputs:judgments of experts and preferences of decisionmakers.Implementation of models can take the form of simplepaper-and-pencil procedures or sophisticated computerprograms known as decision aids or decision systems.The methodology is broad and must always be adaptedto the issue under consideration. The problem is structuredby identifying alternatives, one of which mustbe decided upon; possible events, one of which occursthereafter; and outcomes, each of which results from acombination of decision and event. Decisions are madethroughout a program/project life cycle and often aremade through a hierarchy of panels, boards, and teamswith increasing complementary authority, wherein eachprogressively more detailed decision is affected by theassumptions made at the lower level. Not all decisionsneed a formal process, but it is important to establish aprocess for those decisions that do require a formal process.Important decisions as well as supporting information(e.g., assumptions made), tools, and models mustbe completely documented so that new information canbe incorporated and assessed and past decisions can beresearched in context. The Decision Analysis Processaccommodates this iterative environment and occursthroughout the project life cycle.An important aspect of the Decision Analysis Process isto consider and understand at what time it is appropriateor required for a decision to be made or not made. Whenconsidering a decision, it is important to ask questionssuch as: Why is a decision required at this time? For howlong can a decision be delayed? What is the impact ofdelaying a decision? Is all of the necessary informationavailable to make a decision? Are there other key driversor dependent factors and criteria that must be in placebefore a decision can be made?The outputs from this process support the decisionmaker’sdifficult task of deciding among competing alternativeswithout complete knowledge; therefore, it is criticalto understand and document the assumptions and limitationof any tool or methodology and integrate themwith other factors when deciding among viable options.Early in the project life cycle, high-level decisions aremade regarding which technology could be used, suchas solid or liquid rockets for propulsion. Operationalscenarios, probabilities, and consequences are determinedand the design decision made without specifyingthe component-level detail of each design alternative.Once high-level design decisions are made, nested systemsengineering processes occur at progressively moredetailed design levels flowed down through the entiresystem. Each progressively more detailed decision is affectedby the assumptions made at the previous levels.For example, the solid rocket design is constrained bythe operational assumptions made during the decisionprocess that selected that design. This is an iterative processamong elements of the system. Also early in the lifecycle, the technical team should determine the types ofdata and information products required to support theDecision Analysis Process during the later stages of theproject. 
The technical team should then design, develop,or acquire the models, simulations, and other tools thatwill supply the required information to decisionmakers.In this section, application of different levels and kindsof analysis are discussed at different stages of the projectlife cycle.6.8.1 Process DescriptionThe Decision Analysis Process is used to help evaluatetechnical issues, alternatives, and their uncertainties tosupport decisionmaking. A typical process flow diagramis provided in Figure 6.8-1, including inputs, activities,and outputs.Typical processes that use decision analysis are:zz Determining how to allocate limited resources (e.g.,budget, mass, power) among competing subsysteminterests to favor the overall outcome of the project;zz Select and test evaluation methods and tools againstsample data;zz Configuration management processes for majorchange requests or problem reports;zz Design processes for making major design decisionsand selecting design approaches;<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 197


[Figure 6.8-1 Decision Analysis Process: inputs are the decision need, alternatives, issues, or problems and supporting data (from all technical processes) and analysis support requests (from the Technical Assessment Process). The activities are to establish guidelines to determine which technical issues are subject to a formal analysis/evaluation process; define the criteria for evaluating alternative solutions; identify alternative solutions to address decision issues; select evaluation methods and tools; evaluate alternative solutions with the established criteria and selected methods; select recommended solutions from the alternatives based on the evaluation criteria; report analysis results with recommendations, impacts, and corrective actions; and capture work products from decision analysis activities. Outputs are alternative selection recommendations and impacts (to all technical processes), decision support recommendations and impacts (to the Technical Assessment Process), and work products from decision analysis (to the Technical Data Management Process).]

• Key decision point reviews or technical review decisions (e.g., PDR, CDR) as defined in NPR 7120.5 and NPR 7123.1;
• Go or No-Go decisions (e.g., FRR):
  – Go—authorization to proceed or
  – No-Go—repeat some specific aspects of development or conduct further research.
• Project management of major issues, schedule delays, or budget increases;
• Procurement of major items;
• Technology decisions;
• Risk management of major risks (e.g., red or yellow);
• SMA decisions; and
• Miscellaneous decisions (e.g., whether to intervene in the project to address an emergent performance issue).

Decision analysis can also be used in emergency situations. Under such conditions, process steps, procedures, and meetings may be combined, and the decision analysis documentation may be completed at the end of the process (i.e., after the decision is made). However, a decision matrix should be completed and used during the decision. Decision analysis documentation must be archived as soon as possible following the emergency situation.

Note: Studies often deal in new territory, so it is important to test whether there are sufficient data, needed quality, resonance with decision authority, etc., before diving in, especially for large or very complex decision trade spaces.

6.8.1.1 Inputs
Formal decision analysis has the potential to consume significant resources and time. Typically, its application to a specific decision is warranted only when some of the following conditions are met:


6.8 Decision Analysisz z High Stakes: High stakes are involved in the decision,such as significant cost, safety, or mission success criteria.z z Complexity: The actual ramifications of alternativesare difficult to understand without detailed analysis.z z Uncertainty: Uncertainty in key inputs creates substantialuncertainty in the ranking of alternatives andpoints to risks that may need to be managed.z z Multiple Attributes: Greater numbers of attributescause a greater need for formal analysis.z z Diversity of Stakeholders: Extra attention is warrantedto clarify objectives and formulate TPMs whenthe set of stakeholders reflects a diversity of values,preferences, and perspectives.Satisfaction of all of these conditions is not a requirementfor initiating decision analysis. The point is, rather, thatthe need for decision analysis increases as a function ofthe above conditions. When the Decision Analysis Processis triggered, the following are inputs:zz Decision need, identified alternatives, issues, or prob-lems and supporting data (from all technical managementprocesses).zz Analysis support requests (from <strong>Technical</strong> Assess-ment Process).zz High-level objectives and constraints (from the pro-gram/project).6.8.1.2 Process ActivitiesFor the Decision Analysis Process, the following activitiestypically are performed.Establish Guidelines to Determine Which<strong>Technical</strong> Issues Are Subject to a Formal Analysis/Evaluation ProcessThis step includes determining:zz When to use a formal decisionmaking procedure,zz What needs to be documented,zz Who will be the decisionmakers and their responsi-bilities and decision authorities, andzz How decisions will be handled that do not require aformal evaluation procedure.Decisions are based on facts, qualitative and quantitativedata, engineering judgment, and open communicationsto facilitate the flow of information throughout the hierarchyof forums where technical analyses and evaluationsare presented and assessed and where decisions aremade. The extent of technical analysis and evaluation requiredshould be commensurate with the consequencesof the issue requiring a decision. The work required toconduct a formal evaluation is not insignificant and applicabilitymust be based on the nature of the problem tobe resolved. Guidelines for use can be determined by themagnitude of the possible consequences of the decisionto be made.For example, the consequence table from a risk scorecardcan be used to assign numerical values for applicabilityaccording to impacts to mission success, flightsafety, cost, and schedule. Actual numerical thresholdsfor use would then be set by a decision authority. Samplevalues could be as shown in Table 6.8-1.Table 6.8‐1 Consequence TableNumerical Value Consequence ApplicabilityConsequence = 5, 4 High MandatoryConsequence = 3 Moderate OptionalConsequence = 1, 2 Low Not requiredDefine the Criteria for Evaluating AlternativeSolutionsThis step includes identifying:zz The types of criteria to consider, such as customer ex-pectations and requirements, technology limitations,environmental impact, safety, risks, total ownershipand life-cycle costs, and schedule impact;zz The acceptable range and scale of the criteria; andzz The rank of each criterion by its importance.Decision criteria are requirements for individually assessingoptions and alternatives being considered. Typicaldecision criteria include cost, schedule, risk, safety,mission success, and supportability. 
However, considerationsshould include technical criteria specific to thedecision being made. Criteria should be objective andmeasurable. Criteria should also permit distinguishingamong options or alternatives. Some criteria may not bemeaningful to a decision; however, they should be documentedas having been considered. Identify criteria thatare mandatory (i.e., “must have”) versus the other criteria(i.e., “nice to have”). If mandatory criteria are notmet, that option should be disregarded. For complexdecisions, criteria can be grouped into categories or ob-<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 199


6.0 Crosscutting <strong>Technical</strong> Managementjectives. (See the analytical hierarchy process in Subsection6.8.2.6.)Ranking or prioritizing the criteria is probably thehardest part of completing a decision matrix. Not all criteriahave the same importance, and ranking is typicallyaccomplished by assigning weights to each. To avoid“gaming” the decision matrix (i.e., changing decisionoutcomes by playing with criteria weights), it is best toagree upon weights before the decision matrix is completed.Weights should only be changed with consensusfrom all decision stakeholders.For example, ranking can be done using a simple approachlike percentages. Have all the weights for eachcriterion add up to 100. Assign percents based on howimportant the criterion is. (The higher the percentage,the more important, such as a single criterion worth 50percent.) The weights need to be divided by 100 to calculatepercents. Using this approach, the option with thehighest percentage is typically the recommended option.Ranking can also be done using sophisticated decisiontools. For example, pair-wise comparison is a decisiontechnique that calculates the weights using paired comparisonsamong criteria and options. Other methods include:zz Formulation of objectives hierarchy and TPMs;zz Analytical hierarchy process, which addresses criteriaand paired comparisons; andzz Risk-informed decision analysis process withweighting of TPMs.Identify Alternative Solutions to AddressDecision IssuesThis step includes considering alternatives in addition tothose that may be provided with the issue.Almost every decision will have options to choose from.Brainstorm decision options, and document optionsummary names for the available options. For complexdecisions, it is also a best practice to perform a literaturesearch to identify options. Reduce the decision optionsto a reasonable set (e.g., seven plus or minus two).Some options will obviously be bad options. Documentthe fact that these options were considered. The use ofmandatory criteria also can help reduce the number ofoptions. A few decisions might only have one option. Itis a best practice to document a decision matrix even forone option if it is a major decision. 
(Sometimes doing nothing or not making a decision is an option.)

Select Evaluation Methods and Tools
Select evaluation methods and tools/techniques based on the purpose for analyzing a decision and on the availability of the information used to support the method and/or tool. Typical evaluation methods include: simulations; weighted tradeoff matrices; engineering, manufacturing, cost, and technical opportunity trade studies; surveys; extrapolations based on field experience and prototypes; user review and comment; and testing.

Additional evaluation methods include:
• Decision matrix (see Figure 6.8-2);
• Decision analysis process support, evaluation methods, and tools;
• Risk-informed decision analysis process; and
• Trade studies and decision alternatives.

Evaluate Alternative Solutions with the Established Criteria and Selected Methods
Regardless of the methods or tools used, results must include:
• Evaluation of assumptions related to evaluation criteria and of the evidence that supports the assumptions, and
• Evaluation of whether uncertainty in the values for alternative solutions affects the evaluation.

Alternatives can be compared to evaluation criteria via the use of a decision matrix as shown in Figure 6.8-2. Evaluation criteria typically are in the rows on the left side of the matrix. Alternatives are typically the column headings across the top of the matrix. Criteria weights are typically assigned to each criterion. In the example shown, there are also mandatory criteria. If mandatory criteria are not met, the option is scored at 0 percent.

When decision criteria have different measurement bases (e.g., numbers, money, weight, dates), normalization can be used to establish a common base for mathematical operations. The process of "normalization" is making a scale so that all different kinds of criteria can be compared or added together. This can be done informally (e.g., low, medium, high), on a scale (e.g., 1-3-9), or more formally with a tool. No matter how normalization is done, the most important thing to remember is to have operational definitions of the scale. An operational definition is a repeatable, measurable number. For example, "high" could mean "a probability of 67 percent and above," and "low" could mean "a probability of 33 percent and below." For complex decisions, decision tools usually provide an automated way to normalize. Be sure to question and understand the operational definitions for the weights and scales of the tool.
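The sketch below shows one way a weighted decision matrix of this kind could be computed: 1-to-3 criterion scores are normalized, weighted, and summed, and any option that fails a mandatory criterion is scored at 0 percent. The criteria, weights, and scores mirror the battery example in Figure 6.8-2; the normalization convention (dividing each score by the 3-point maximum) is an assumption chosen because it reproduces the figure's weighted totals.

```python
# Criteria: (name, weight in %, mandatory?); scores are on a 1-3 scale, 3 = best.
criteria = [
    ("Mission success",               30, True),
    ("Cost",                          10, False),
    ("Overall option risk",           15, False),
    ("Schedule",                      10, False),
    ("Safety",                        15, True),
    ("Uninterrupted data collection", 20, False),
]

options = {
    "Extend old battery life": [2, 1, 2, 3, 2, 3],
    "Buy new batteries":       [3, 2, 1, 2, 1, 1],
    "Alternative experiment":  [3, 3, 2, 1, 2, 2],
    "Cancel experiment":       [0, 1, 3, 3, 3, 1],
}

MAX_SCORE = 3.0


def weighted_total(scores):
    """Return the weighted total as a percentage; 0% if any mandatory criterion fails."""
    for (name, weight, mandatory), score in zip(criteria, scores):
        if mandatory and score == 0:
            return 0.0
    # Weights sum to 100, so the normalized, weighted sum is already a percentage.
    return sum(weight * (score / MAX_SCORE)
               for (name, weight, mandatory), score in zip(criteria, scores))


for option, scores in sorted(options.items(), key=lambda kv: -weighted_total(kv[1])):
    print(f"{option:26s} {weighted_total(scores):5.1f}%")
```

Running the sketch ranks the alternative experiment (about 77%) ahead of extending the old battery life (about 73%) and buying new batteries (60%), with the cancelled experiment scored at 0% because it fails the mandatory mission success criterion, matching the totals shown in the figure.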


[Figure 6.8-2 Example of a decision matrix: a battery trade in which the options "extend old battery life," "buy new batteries," "collect experiment data with alternative experiment," and "cancelled experiment" are scored 1 to 3 against weighted criteria for mission success, cost, overall option risk, schedule, safety, and uninterrupted data collection, with mission success and safety mandatory; the resulting weighted totals are 73%, 60%, 77%, and 0%, respectively.]

Note: Completing the decision matrix can be thought of as a default evaluation method. Completing the decision matrix is iterative. Each cell for each criterion and each option needs to be completed by the team. Use evaluation methods as needed to complete the entire decision matrix.

Select Recommended Solutions from the Alternatives Based on the Evaluation Criteria
This step includes documenting the information, including assumptions and limitations of the evaluation methods used, that justifies the recommendations made and gives the impacts of taking the recommended course of action.

The highest score (e.g., percentage, total score) is typically the option that is recommended to management. If a different option is recommended, an explanation must be provided as to why the lower-scoring option is preferred. Usually, if a lower score is recommended, the "risks" or "disadvantages" of the highest-scoring option were too great. Sometimes the benefits and advantages of a lower or close score outweigh the highest score. Ideally, all risks/benefits and advantages/disadvantages would show up in the decision matrix as criteria, but this is not always possible. If a lower-scoring option is being recommended, the weighting or scores given may not be accurate.

Report the Analysis and Evaluation Results and Findings with Recommendations, Impacts, and Corrective Actions
Typically a technical team of subject matter experts makes a recommendation to a NASA decisionmaker (e.g., a NASA board, forum, or panel). It is highly recommended that the team produce a white paper to document all major recommendations to serve as a backup to


6.0 Crosscutting <strong>Technical</strong> Managementany presentation materials used. A presentation can alsobe used, but a paper in conjunction with a decision matrixis preferred (especially for complex decisions). Decisionsare typically captured in meeting minutes, but canbe captured in the white paper.Capture Work Products from Decision AnalysisActivitiesThis step includes capturing:zz Decision analysis guidelines generated and strategyand procedures used;zz Analysis/evaluation approach, criteria, and methodsand tools used;zz Analysis/evaluation results, assumptions made in ar-riving at recommendations, uncertainties, and sensitivitiesof the recommended actions or corrective actions;andzz Lessons learned and recommendations for improvingfuture decision analyses.Typical information captured in a decision report isshown in Table 6.8-2.6.8.1.3 OutputsDecision analysis continues throughout the life cycle.The products from decision analysis include:zz Alternative selection recommendations and impacts(to all technical management processes);zz Decision support recommendations and impacts (to<strong>Technical</strong> Assessment Process);Table 6.8‐2 Typical Information to Capture in a Decision Report# Section Section Description1 Executive Summary Provide a short half-page executive summary of the report:zz Recommendation (short summary—1 sentence)zz Problem/issue requiring a decision (short summary—1 sentence)2 Problem/IssueDescription3 Decision MatrixSetup Rationale4 Decision MatrixScoring Rationale5 Final DecisionMatrixDescribe the problem/issue that requires a decision. Provide background, history, thedecisionmaker(s) (e.g., board, panel, forum, council), and decision recommendation team, etc.Provide the rationale for setting up the decision matrix:zz Criteria selectedzz Options selectedzz Weights selectedzz Evaluation methods selectedProvide a copy of the setup decision matrix.Provide the rationale for the scoring of the decision matrix. Provide the results of populating thescores of the matrix using the evaluation methods selected.Cut and paste the final spreadsheet into the document. Also include any important snapshots ofthe decision matrix.6 Risk/Benefits For the final options being considered, document the risks and benefits of each option.7 Recommendationand/or FinalDecisionDescribe the recommendation that is being made to the decisionmaker(s) and the rationale forwhy the option was selected. Can also document the final decision in this section.8 Dissent If applicable, document any dissent with the recommendation. Document how dissent wasaddressed (e.g., decision matrix, risk, etc.).9 References Provide any references.A Appendices Provide the results of the literature search, including lessons learned, previous related decisions,and previous related dissent. Also document any detailed data analysis and risk analysis used forthe decision. Can also document any decision metrics.202 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


6.8.1.3 Outputs

Decision analysis continues throughout the life cycle. The products from decision analysis include:
• Alternative selection recommendations and impacts (to all technical management processes);
• Decision support recommendations and impacts (to the Technical Assessment Process);
• Work products of decision analysis activities (to the Technical Data Management Process);
• Technical risk status measurements (to the Technical Risk Management Process); and
• TPMs, Performance Indexes (PIs) for alternatives, the program- or project-specific objectives hierarchy, and the decisionmakers' preferences (to all technical management processes).

6.8.2 Decision Analysis Guidance

The purpose of this subsection is to provide guidance, methods, and tools to support the Decision Analysis Process at NASA.

6.8.2.1 Systems Analysis, Simulation, and Performance

Systems analysis can be better understood in the context of the system's overall life cycle. Systems analysis within the context of the life cycle is responsive to the needs of the stakeholder at every phase of the life cycle, from Pre-Phase A through Phase B to realizing the final product and beyond. (See Figure 6.8-3.)

Systems analysis of a product must support the transformation from a need into a realized, definitive product; be able to support compatibility with all physical and functional requirements; and support the operational scenarios in terms of reliability, maintainability, supportability, serviceability, and disposability, while maintaining performance and affordability.
Figure 6.8-3 Systems analysis across the life cycle. (The figure maps the general types of systems analysis onto the project phases, from Pre-Phase A concept studies, through concept and technology development, preliminary and final design, system assembly, integration and test, and launch, to Phase E operations and sustainment. Early phases emphasize needs identification, requirements and functional analysis, architecture selection, trade studies, and cost/benefit analysis; later phases emphasize design refinement, evaluation of technologies, materials, and COTS items, manufacturing, logistics, and disposal alternatives, developmental and operational test and evaluation, and updating the analyses and models with new data. The chart also notes that 60 to 90 percent of life-cycle cost is locked in, but not necessarily known, early in the life cycle.)


Systems analysis support is provided from cradle to grave of the system, covering product design, verification, manufacturing, operations and support, and disposal. Viewed in this manner, life-cycle engineering is the basis for concurrent engineering.

Systems analysis should support concurrent engineering. Appropriate systems analysis can be conducted early in the life cycle to support planning and development. The intent is to support seamless systems analysis, optimally planned across the entire life cycle. For example, systems engineering early in the life cycle can support optimal performance of the deployment, operations, and disposal facets of the system.

Historically, this has not been the case. Systems analysis would focus only on the life-cycle phase that the project occupied at the time, and the systems analyses for the later phases were treated serially, in chronological order. This resulted in major design modifications that were very costly in the later life-cycle phases. Resources can be used more efficiently if the requirements across the life cycle are considered concurrently, providing results for decisionmaking about the system.

Figure 6.8-3 shows a life-cycle chart that indicates how the various general types of systems analyses fit across the phases of the life cycle. The analysis needs begin with a broad scope and many types of analysis in the early phases of the life cycle and narrow in scope as decisions are made and project requirements become clearer as the project proceeds through its life cycle. Figure 6.8-4 presents a specific spaceport example and shows how specific operational analysis inputs can provide analysis result outputs pertinent to the operations portion of the life cycle.
Note that these simulations are conducted across the life cycle and updated periodically with the new data obtained as the project evolves.

Figure 6.8-4 Simulation model analysis techniques. (Simulation model inputs include the mission model of annual launches for each vehicle configuration; process flow task durations, sequences, and resource requirements; the concept of operations, including work shifts, priorities, and weather, range, and safety constraints; probabilistic events such as weather and range events, unplanned work, equipment downtime (MTBF and MTTR), process time variability and learning effects, and loss of vehicle; and resources such as launch vehicle quantity, facilities and equipment, and personnel. Model outputs include launch rate, schedule dependability, facility and personnel utilization, turnaround time, manhours, and program schedules, along with sensitivity analyses and customized analyses tailored to specific program needs.) From: Lockheed Martin presentation to KSC, November 2003, Kevin Brughelli, Lockheed Martin Space Systems Company; Debbie Carstens, Florida Institute of Technology; and Tim Barth, KSC.
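The kind of operational analysis summarized in Figure 6.8-4 can be approximated, in miniature, with a simple Monte Carlo model. The sketch below is only an illustration of the idea; the task durations, probabilities, and delay magnitudes are assumptions, not values from any real spaceport model.

```python
import random

# Toy Monte Carlo sketch of a turnaround-time analysis like the one summarized
# in Figure 6.8-4. All task durations, probabilities, and delay magnitudes are
# illustrative assumptions.

PLANNED_TASKS = {"landing ops": 2.0, "hangar ops": 6.0, "pad ops": 4.0}  # days
P_UNPLANNED_WORK = 0.3        # chance a flow picks up unplanned maintenance
UNPLANNED_DAYS = (1.0, 5.0)   # uniform range of added days when it does
P_WEATHER_DELAY = 0.2
WEATHER_DAYS = (0.5, 2.0)

def one_flow(rng):
    """Simulate one vehicle processing flow and return its total days."""
    days = sum(rng.gauss(mu, 0.15 * mu) for mu in PLANNED_TASKS.values())
    if rng.random() < P_UNPLANNED_WORK:
        days += rng.uniform(*UNPLANNED_DAYS)
    if rng.random() < P_WEATHER_DELAY:
        days += rng.uniform(*WEATHER_DAYS)
    return days

rng = random.Random(1)
samples = sorted(one_flow(rng) for _ in range(10_000))
mean = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]
print(f"mean turnaround {mean:.1f} days, 90th percentile {p90:.1f} days")
```

A real model of this kind would be far more detailed, but the structure is the same: probabilistic inputs feed a process model, and the outputs (turnaround, utilization, launch rate) are examined as distributions rather than single numbers.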


During the early life-cycle phases, inputs should include a plan for collecting the quantitative and qualitative data necessary to manage contracts and improve processes and products as the project evolves. This plan should indicate the type of data necessary to determine the cause of problems, nonconformances, and anomalies, and to propose corrective action to prevent recurrence. This closed-loop plan, involving identification, resolution, and recurrence control systems, is critical to producing actual reliability that approaches predicted reliability. It should indicate the information technology infrastructure and database capabilities needed to provide data sorting, data mining, data analysis, and precursor management. Management of problems, nonconformances, and anomalies should begin with data collection, should be a major part of technical assessment, and should provide critical information for decision analysis.

6.8.2.2 Trade Studies

The trade study process is a critical part of systems engineering. Trade studies help to define the emerging system at each level of resolution. One key message of this subsection is that, to be effective, trade studies require the participation of people with many skills and a unity of effort to move toward an optimum system design.

Figure 6.8-5 shows the trade study process in simplest terms, beginning with the step of defining the system's goals and objectives and identifying the constraints it must meet. In the early phases of the project life cycle, the goals, objectives, and constraints are usually stated in general operational terms. In later phases of the project life cycle, when the architecture and, perhaps, some aspects of the design have already been decided, the goals and objectives may be stated as performance requirements that a segment or subsystem must meet.

At each level of system resolution, the systems engineer needs to understand the full implications of the goals, objectives, and constraints to formulate an appropriate system solution. This step is accomplished by performing a functional analysis. "Functional analysis" is the process of identifying, describing, and relating the functions a system must perform to fulfill its goals and objectives; it is described in detail in Section 4.4.

Figure 6.8-5 Trade study process. (The steps are: define/identify goals, objectives, and constraints; perform functional analysis; define measures and measurement methods for system effectiveness, system performance or technical attributes, and system cost; define the selection rule; define plausible alternatives; collect data on each alternative to support evaluation by the selected measurement methods; compute estimates of system effectiveness, performance or technical attributes, and cost for each alternative, including uncertainty ranges and sensitivity analyses; make a tentative selection; and check whether the tentative selection is acceptable, asking whether the goals, objectives, and constraints have been met, whether the selection is robust, whether more analytical refinement is needed to distinguish among alternatives, and whether the subjective aspects of the problem have been addressed, before proceeding to further resolution of the system design or to implementation. The middle steps constitute the analytical portion of trade studies.)


Closely related to defining the goals and objectives and performing a functional analysis is the step of defining the measures and measurement methods for system effectiveness (when this is practical), system performance or technical attributes, and system cost. (These variables are collectively called outcome variables, in keeping with the discussion in Section 2.3. Some systems engineering books refer to these variables as decision criteria, but this term should not be confused with the "selection rule" described below. Sections 2.5 and 6.1 discuss the concepts of system cost and effectiveness in greater detail.) Defining measures and measurement methods begins the analytical portion of the trade study process, since it suggests the involvement of those familiar with quantitative methods. For each measure, it is important to address how that quantitative measure will be computed, that is, which measurement method is to be used. One reason for doing this is that this step explicitly identifies those variables that are important in meeting the system's goals and objectives.

Evaluating the likely outcomes of various alternatives in terms of system effectiveness, the underlying performance or technical attributes, and cost before actual fabrication and/or programming usually requires the use of a mathematical model, or series of models, of the system. So a second reason for specifying the measurement methods is to identify necessary models.

Sometimes these models are already available from previous projects of a similar nature; other times, they need to be developed. In the latter case, defining the measurement methods should trigger the necessary system modeling activities. Since the development of new models can take a considerable amount of time and effort, early identification is needed to ensure they will be ready for formal use in trade studies.

Defining the selection rule is the step of explicitly determining how the outcome variables will be used to make a (tentative) selection of the preferred alternative. As an example, a selection rule may be to choose the alternative with the highest estimated system effectiveness that costs less than x dollars (with some given probability), meets safety requirements, and possibly meets other political or schedule constraints. Defining the selection rule is essentially deciding how the selection is to be made. This step is independent of the actual measurement of system effectiveness, system performance or technical attributes, and system cost.

Many different selection rules are possible. The selection rule in a particular trade study may depend on the context in which the trade study is being conducted, in particular, what level of system design resolution is being addressed. At each level of the system design, the selection rule generally should be chosen only after some guidance from the next higher level. The selection rule for trade studies at lower levels of the system design should be in consonance with the higher level selection rule.
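As a concrete illustration, the example selection rule above can be written down directly. The following sketch is illustrative only; the alternative names, effectiveness values, cost-confidence figures, and the 80 percent confidence threshold are assumptions, not handbook values.

```python
# Minimal sketch of the example selection rule quoted above: pick the
# alternative with the highest estimated effectiveness among those whose cost
# stays under a cap with a given probability and that meet safety requirements.

from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    effectiveness: float        # MOE estimate (higher is better)
    prob_cost_under_cap: float  # P(life-cycle cost < cap) from cost-risk analysis
    meets_safety: bool

COST_CONFIDENCE = 0.8   # required probability that cost stays under the cap (assumed)

def select(alternatives):
    feasible = [a for a in alternatives
                if a.meets_safety and a.prob_cost_under_cap >= COST_CONFIDENCE]
    if not feasible:
        return None   # no alternative satisfies the rule; revisit goals or alternatives
    return max(feasible, key=lambda a: a.effectiveness)

candidates = [
    Alternative("A", effectiveness=0.72, prob_cost_under_cap=0.90, meets_safety=True),
    Alternative("B", effectiveness=0.81, prob_cost_under_cap=0.60, meets_safety=True),
    Alternative("C", effectiveness=0.77, prob_cost_under_cap=0.85, meets_safety=True),
]
print(select(candidates).name)   # "C": B scores higher but fails the cost-confidence test
```

Writing the rule out this explicitly, even informally, makes it obvious which measurements the analytical portion of the trade study must supply.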
Defining plausible alternatives is the step of creating some alternatives that can potentially achieve the goals and objectives of the system. This step depends on understanding (to an appropriately detailed level) the system's functional requirements and operational concept. Running an alternative through an operational timeline or reference mission is a useful way of determining whether it can plausibly fulfill these requirements. (Sometimes it is necessary to create separate behavioral models to determine how the system reacts when a certain stimulus or control is applied, or a certain environment is encountered. This provides insights into whether it can plausibly fulfill time-critical and safety requirements.) Defining plausible alternatives also requires an understanding of the technologies available, or potentially available, at the time the system is needed. Each plausible alternative should be documented qualitatively in a description sheet. The format of the description sheet should, at a minimum, clarify the allocation of required system functions to that alternative's lower level architectural or design components (e.g., subsystems).

One way to represent the trade study alternatives under consideration is by a trade tree.

During Phase A trade studies, the trade tree should contain a number of alternative high-level system architectures to avoid a premature focus on a single one. As the systems engineering process proceeds, branches of the trade tree containing unattractive alternatives will be "pruned," and greater detail in terms of system design will be added to those branches that merit further attention. The process of pruning unattractive early alternatives is sometimes known as doing "killer trades." (See the trade tree box.)
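The bookkeeping behind a trade tree is simple combinatorics, which the following sketch illustrates with made-up layers loosely patterned on the Mars rover box that follows; the option lists and the pruning rule are assumptions, not the handbook's actual tree.

```python
from itertools import product

# Toy enumeration of a trade tree: the number of end points (candidate
# alternatives) is the product of the options kept at each layer, so it grows
# quickly even for a few layers. The specific option lists are illustrative.

layers = {
    "size":     ["small", "medium", "large"],
    "autonomy": ["low", "semi", "high"],
    "mobility": ["wheels", "legs"],
}

alternatives = [dict(zip(layers, combo)) for combo in product(*layers.values())]
print(len(alternatives))          # 3 * 3 * 2 = 18 end points before pruning

# "Killer trades" prune whole branches early, e.g., drop low-autonomy large rovers:
pruned = [a for a in alternatives
          if not (a["size"] == "large" and a["autonomy"] == "low")]
print(len(pruned))                # 16 remain for more detailed study
```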


An Example of a Trade Tree for a Mars Rover

The figure below shows part of a trade tree for a robotic Mars rover system whose goal is to find a suitable manned landing site. Each layer represents some aspect of the system that needs to be treated in a trade study to determine the best alternative. Some alternatives have been eliminated a priori because of technical feasibility, launch vehicle constraints, etc. The total number of alternatives is given by the number of end points of the tree. Even with just a few layers, the number of alternatives can increase quickly. (This tree has already been pruned to eliminate low-autonomy, large rovers.) As the systems engineering process proceeds, branches of the tree with unfavorable trade study outcomes are discarded. The remaining branches are further developed by identifying more detailed trade studies that need to be made.

A whole family of (implicit) alternatives can be represented in a trade tree by a continuous variable. In this example, rover speed or range might be so represented. By treating a variable this way, mathematical optimization techniques can be applied. Note that a trade tree is, in essence, a decision tree without chance nodes.

(Trade tree figure: the layers are rover size (small, roughly 10 kg; medium, roughly 100 kg; large, roughly 1,000 kg), number of rovers (from 50 small rovers down to 1 or 2 large ones), autonomy (low, semi, or high), and mobility (wheels or legs).)

Given a set of plausible alternatives, the next step is to collect data on each to support the evaluation of the measures by the selected measurement methods. If models are to be used to calculate some of these measures, then obtaining the model inputs provides some impetus and direction to the data collection activity. By providing data, engineers in such disciplines as reliability, maintainability, producibility, integrated logistics, software, testing, operations, and costing have an important supporting role in trade studies. The data collection activity, however, should be orchestrated by the systems engineer. The results of this step should be a quantitative description of each alternative to accompany the qualitative one. Test results on each alternative can be especially useful. Early in the systems engineering process, performance and technical attributes are generally uncertain and must be estimated. Data from breadboard and brassboard testbeds can provide additional confidence that the range of values used as model inputs is correct. Such confidence is also enhanced by drawing on data collected on related, previously developed systems.

The next step in the trade study process is to quantify the outcome variables by computing estimates of system effectiveness, its underlying system performance or technical attributes, and system cost. If the needed data have been collected and the measurement methods (for example, models) are in place, then this step is, in theory, mechanical. In practice, considerable skill is often needed to get meaningful results.

In an ideal world, all input values would be precisely known and models would perfectly predict outcome variables. This not being the case, the systems engineer should supplement point estimates of the outcome variables for each alternative with computed or estimated uncertainty ranges. For each uncertain key input, a range of values should be estimated. Using this range of input values, the sensitivity of the outcome variables can be gauged and their uncertainty ranges calculated. The systems engineer may be able to obtain meaningful probability distributions for the outcome variables using Monte Carlo simulation, but when this is not feasible, the systems engineer must be content with only ranges and sensitivities. See the risk-informed decision analysis process in Subsection 6.8.2.8 for more information on uncertainty.
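A minimal sketch of this kind of uncertainty propagation follows, assuming a toy cost model and illustrative input ranges; none of the numbers, and not the cost model itself, come from the handbook.

```python
import random

# Minimal sketch of propagating input uncertainty to an outcome variable by
# Monte Carlo sampling. The outcome model (cost as a function of mass and unit
# cost) and all ranges are illustrative assumptions.

def outcome_cost(dry_mass_kg, cost_per_kg):
    return dry_mass_kg * cost_per_kg              # stand-in measurement method (model)

rng = random.Random(0)
samples = []
for _ in range(20_000):
    mass = rng.uniform(900, 1_200)                # uncertainty range for a key input
    unit_cost = rng.triangular(40e3, 90e3, 55e3)  # low, high, most likely ($ per kg)
    samples.append(outcome_cost(mass, unit_cost))

samples.sort()
low = samples[int(0.05 * len(samples))]
high = samples[int(0.95 * len(samples))]
print(f"cost 5th-95th percentile: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
```

Reporting the spread alongside the point estimate is what lets the later "reality check" ask whether the tentative selection holds up over the plausible range of inputs.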


This essentially completes the analytical portion of the trade study process. The next steps can be described as the judgmental portion. Combining the selection rule with the results of the analytical activity should enable the systems engineer to array the alternatives from most preferred to least, in essence making a tentative selection.

This tentative selection should not be accepted blindly. In most trade studies, there is a need to subject the results to a "reality check" by considering a number of questions. Have the goals, objectives, and constraints truly been met? Is the tentative selection heavily dependent on a particular set of input values to the measurement methods, or does it hold up under a range of reasonable input values? (In the latter case, the tentative selection is said to be robust.) Are there sufficient data to back up the tentative selection? Are the measurement methods sufficiently discriminating to be sure that the tentative selection is really better than other alternatives? Have the subjective aspects of the problem been fully addressed? If the answers support the tentative selection, then the systems engineer can have greater confidence in a recommendation to proceed to a further resolution of the system design, or to the implementation of that design. The estimates of system effectiveness, its underlying performance or technical attributes, and system cost generated during the trade study process serve as inputs to that further resolution. The analytical portion of the trade study process often provides the means to quantify the performance or technical (and cost) attributes that the system's lower levels must meet. These can be formalized as performance requirements.

If the reality check is not met, the trade study process returns to one or more earlier steps. This iteration may result in a change in the goals, objectives, and constraints; a new alternative; or a change in the selection rule, based on the new information generated during the trade study. The reality check may lead instead to a decision to first improve the measures and measurement methods (e.g., models) used in evaluating the alternatives, and then to repeat the analytical portion of the trade study process.

Controlling the Trade Study Process

There are a number of mechanisms for controlling the trade study process. The most important one is the SEMP. The SEMP specifies the major trade studies that are to be performed during each phase of the project life cycle.
It should also spell out the general contents of trade study reports, which form part of the decision support packages (i.e., documentation submitted in conjunction with formal reviews and change requests).

A second mechanism for controlling the trade study process is the selection of the study team leaders and members. Because doing trade studies is part art and part science, the composition and experience of the team is an important determinant of a study's ultimate usefulness. A useful technique to avoid premature focus on a specific technical design is to include in the study team individuals with differing technology backgrounds.

Trade Study Reports

Trade study reports should be prepared for each trade study. At a minimum, each trade study report should identify:
• The system under analysis
• System goals and objectives (or requirements, as appropriate to the level of resolution), and constraints
• The measures and measurement methods (models) used
• All data sources used
• The alternatives chosen for analysis
• The computational results, including uncertainty ranges and sensitivity analyses performed
• The selection rule used
• The recommended alternative

Trade study reports should be maintained as part of the system archives so as to ensure traceability of decisions made through the systems engineering process. Using a generally consistent format for these reports also makes it easier to review and assimilate them into the formal change control process.


Another mechanism is limiting the number of alternatives that are to be carried through the study. This number is usually determined by the time and resources available to do the study, because the work required in defining additional alternatives and obtaining the necessary data on them can be considerable. However, focusing on too few or too similar alternatives defeats the purpose of the trade study process.

A fourth mechanism for controlling the trade study process can be exercised through the use (and misuse) of models. Lastly, the choice of the selection rule exerts a considerable influence on the results of the trade study process. See Appendix O for different examples of how trade studies are used throughout the life cycle.

6.8.2.3 Cost-Benefit Analysis

A cost-benefit analysis is performed to determine the advantage of one alternative over another in terms of equivalent cost or benefits. The analysis relies on the addition of positive factors and the subtraction of negative factors to determine a net result, and it maximizes net benefits (benefits minus costs). A cost-benefit analysis finds, quantifies, and adds all the positive factors (the benefits); it then identifies, quantifies, and subtracts all the negatives (the costs). The difference between the two indicates whether the planned action is a preferred alternative. The key to doing a cost-benefit analysis well is making sure to include all the costs and all the benefits and to quantify them properly.
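Because costs and benefits arrive in different years, the comparison is normally made in present-value terms. The sketch below is a minimal illustration only; the cash-flow profiles and the discount rate are assumptions, not values prescribed by the handbook.

```python
# Minimal sketch of a cost-benefit comparison in present-value terms.
# Cash-flow profiles and the discount rate are illustrative assumptions.

DISCOUNT_RATE = 0.07   # real discount rate per year (assumed)

def present_value(cash_flows):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(cf / (1 + DISCOUNT_RATE) ** year for year, cf in enumerate(cash_flows))

alternatives = {
    # name: (yearly costs, yearly benefits), in $M over a 5-year horizon
    "Refurbish existing facility": ([30, 10, 10, 10, 10], [0, 25, 25, 25, 25]),
    "Build new facility":          ([60, 20,  5,  5,  5], [0, 35, 35, 35, 35]),
}

for name, (costs, benefits) in alternatives.items():
    net = present_value(benefits) - present_value(costs)
    print(f"{name:30s} net present benefit = ${net:6.1f}M")
# Under a pure cost-benefit rule (maximize net benefits), the alternative with
# the highest net present benefit is preferred.
```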
A similar approach, used when a cost cap is imposed externally, is to maximize effectiveness for a given level of cost. Cost-effectiveness analysis is a systematic quantitative method for comparing the costs of alternative means of achieving the same equivalent benefit for a specific objective. A project is cost-effective if, on the basis of life-cycle cost analysis of competing alternatives, it is determined to have the lowest costs, expressed in present value terms, for a given amount of benefits.

Cost-effectiveness analysis is appropriate whenever it is impractical to consider the dollar value of the benefits provided by the alternatives. This is the scenario whenever each alternative has the same life-cycle benefits expressed in monetary terms, or each alternative has the same life-cycle effects but dollar values cannot be assigned to their benefits. After determining the scope of the project on the basis of mission and other requirements, and having identified, quantified, and valued the costs and benefits of the alternatives, the next step is to identify the least-cost or most cost-effective alternative for achieving the purpose of the project. A comparative analysis of the alternative options or designs is often required; this is illustrated in Figure 4.4-3. In cases in which alternatives can be defined that deliver the same benefits, it is possible to estimate the equivalent rate between each alternative for comparison. Least-cost analysis aims at identifying the least-cost project option for meeting the technical requirements; it involves comparing the costs of the various technically feasible options and selecting the one with the lowest costs. Project options must be alternative ways of achieving the mission objectives. If differences in results or quality exist, a normalization procedure must be applied, taking the benefits of one option relative to another as a cost charged to the option that does not meet all of the mission objectives, to ensure an equitable comparison. Procedures for the calculation and interpretation of the discounting factors should be made explicit, with the least-cost project identified by comparing the total life-cycle costs of the project alternatives and calculating the equalizing factors for the differences in costs. The project with the highest equalizing factors for all comparisons is the least-cost alternative.

Cost-effectiveness analysis also deals with alternative means of achieving mission requirements; however, the results may be estimated only indirectly. For example, different types of systems may be under consideration to obtain science data, and the effectiveness of each alternative may be measured by the science data obtained through the different methods. A cost-effectiveness analysis in this case divides the increase in science data by the cost for each alternative. The most cost-effective method is the one that raises science data by a given amount for the least cost; if this method is chosen and applied to all similar alternatives, the same increase in science data can be obtained for the lowest cost. Note, however, that the most cost-effective method is not necessarily the most effective method of meeting mission objectives. Another method may be the most effective, but also cost much more, so it is not the most cost-effective. The cost-effectiveness ratios (the cost per unit increase in science data for each method) can be compared to see how much more it would cost to implement the most effective method. Which method is chosen for implementation then depends jointly on the desired mission objectives and the extra cost involved in implementing the most effective method.
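The ratio comparison described above can be sketched in a few lines; the methods, data gains, and costs below are illustrative assumptions.

```python
# Minimal sketch of the cost-effectiveness comparison described above: rank
# alternative methods by cost per unit increase in science data. The methods
# and numbers are illustrative assumptions.

methods = {
    # name: (added science data in Gb/yr, life-cycle cost in $M)
    "Upgrade ground antennas": (120, 60),
    "Fly a second instrument": (180, 99),
    "Add a relay spacecraft":  (300, 210),
}

ranked = sorted(methods.items(), key=lambda kv: kv[1][1] / kv[1][0])
for name, (data_gain, cost) in ranked:
    print(f"{name:25s} ${cost / data_gain:4.2f}M per Gb/yr")
# The lowest ratio is the most cost-effective method; the most *effective*
# method (largest data gain) may still be a different, more expensive option.
```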


There will be circumstances where project alternatives have more than one outcome. To assess the cost-effectiveness of the different alternatives, it is necessary to devise a scoring system in which the results for the different factors can be added together. It also is necessary to decide on weights for adding the different elements together, reflecting their importance in relation to the objectives of the project. Such a cost-effectiveness analysis is called weighted cost-effectiveness analysis. It introduces a subjective element, the weights, into the comparison of project alternatives, both to find the most cost-effective alternative and to identify the extra cost of implementing the most effective alternative.

6.8.2.4 Influence Diagrams

An influence diagram (also called a decision network) is a compact graphical and mathematical representation of a decision state. (See Figure 6.8-6.) Influence diagrams were first developed in the mid-1970s within the decision analysis community as an intuitive approach that is easy to understand. They are now widely adopted and are becoming an alternative to decision trees, which typically suffer from exponential growth in the number of branches with each variable modeled. An influence diagram is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Its elements are:
• Decision nodes, indicating the decision inputs and the items directly influenced by the decision outcome;
• Chance nodes, indicating factors that impact the chance outcome and items influenced by the chance outcome;
• Value nodes, indicating factors that affect the value and items influenced by the value; and
• Arrows, indicating the relationships among the elements.

Figure 6.8-6 Influence diagrams. (The figure illustrates the elements with a hurricane example: a "forecast hurricane path" chance node feeds an "evacuate or stay" decision node, which, together with a "hits or misses" chance node, determines a "consequences" value node and an associated decision table. Decision nodes represent alternatives, chance nodes represent events or states of nature, and value nodes represent consequences, objectives, or calculations.)

An influence diagram does not depict a strictly sequential process. Rather, it illustrates the decision process at a particular point, showing all of the elements important to the decision. The influence diagram for a particular model is not unique. The strength of influence diagrams is their ability to display the structure of a decision problem in a clear, compact form, useful both for communication and to help the analyst think clearly during problem formulation. An influence diagram can be transformed into a decision tree for quantification.

6.8.2.5 Decision Trees

Like the influence diagram, a decision tree portrays a decision model, but a decision tree is drawn from a point of view different from that of the influence diagram. The decision tree exhaustively works out the expected consequences of all decision alternatives by discretizing all "chance" nodes and, based on this discretization, calculating and appropriately weighting all possible consequences of all alternatives. The preferred alternative is then identified by summing the appropriate outcome variables (MOE or expected utility) from the path end states.

A decision tree grows horizontally from left to right, with the trunk at the left.
Typically, the possible alternatives initially available to the decisionmaker stem from the trunk at the left. Moving across the tree, the decisionmaker encounters branch points corresponding to probabilistic outcomes and perhaps additional decision nodes. Thus, the tree branches as it is read from left to right. At the far right side of the decision tree, a vector of TPM scores is listed for each terminal branch, representing each combination of decision outcome and chance outcome. From the TPM scores and the chosen selection rule, a preferred alternative is determined.
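Folding back such a tree amounts to probability-weighting the discretized outcomes at each chance node. The sketch below uses the same structure as the contamination, crew-hours, and cost example of Figure 6.8-7 (discussed next); the probabilities and the numeric reading of the outcomes are illustrative.

```python
# Minimal sketch of "rolling back" a decision tree: for each decision
# alternative, weight the discretized chance-node outcomes by their
# probabilities to get expected TPM values, then apply a selection rule.
# Probabilities and outcome values below are illustrative assumptions.

alternatives = {
    "A": {
        "contamination": {"widespread": 0.0, "localized": 0.1, "none": 0.9},
        "crew_hours":    {100: 0.7, 150: 0.3, 200: 0.0},
        "relative_cost": {1.2: 0.7, 1.1: 0.3, 1.0: 0.0},
    },
    "B": {
        "contamination": {"widespread": 0.1, "localized": 0.8, "none": 0.1},
        "crew_hours":    {100: 0.0, 150: 0.1, 200: 0.9},
        "relative_cost": {1.2: 0.0, 1.1: 0.1, 1.0: 0.9},
    },
}

def expected(dist):
    """Expected value of a numeric chance node."""
    return sum(outcome * p for outcome, p in dist.items())

for name, nodes in alternatives.items():
    e_hours = expected(nodes["crew_hours"])
    e_cost = expected(nodes["relative_cost"])
    p_contam = 1.0 - nodes["contamination"]["none"]
    print(f"Alt {name}: E[crew hours]={e_hours:5.1f}, "
          f"E[cost factor]={e_cost:4.2f}, P(contamination)={p_contam:.2f}")
# A selection rule (Subsection 6.8.2.2) or a utility function (Subsection
# 6.8.2.7) then combines these expected TPM values to pick the preferred option.
```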


In even moderately complicated problems, decision trees can quickly become difficult to understand. Figure 6.8-7 shows a sample decision tree. The figure is only a simplified illustration; a complete decision tree with additional branches would be expanded to the level of detail required by the analysis. A commonly employed strategy is to start with an equivalent influence diagram. This often aids in understanding the principal issues involved. Some software packages make it easy to develop an influence diagram and then, based on the influence diagram, automatically furnish a decision tree. The decision tree can be edited if desired. Calculations are typically based on the decision tree itself.

Figure 6.8-7 Decision tree. (The example tree selects among alternatives A, B, and C, with chance nodes for contamination (widespread, localized, or none), crew hours (100, 150, or 200 hours), and cost (120, 110, or 100 percent), each branch carrying its probability; for instance, alternative A has contamination probabilities of 0.0, 0.1, and 0.9, while alternative B has 0.1, 0.8, and 0.1.)

6.8.2.6 Multi-Criteria Decision Analysis

Multi-Criteria Decision Analysis (MCDA) is a method aimed at supporting decisionmakers who are faced with making numerous and conflicting evaluations. These techniques aim at highlighting the conflicts among alternatives and deriving a way to come to a compromise in a transparent process. For example, NASA may apply MCDA to help assess whether selection of one set of software tools for every NASA application is cost effective. MCDA involves a certain element of subjectiveness; the bias and position of the team implementing MCDA play a significant part in the accuracy and fairness of decisions. One of the MCDA methods is the Analytic Hierarchy Process (AHP).

The Analytic Hierarchy Process

AHP was first developed and applied by Thomas Saaty. AHP is a multi-attribute methodology that provides a proven, effective means to deal with complex decisionmaking and can assist with identifying and weighting selection criteria, analyzing the data collected for the criteria, and expediting the decisionmaking process. Many different problems can be investigated with the mathematical techniques of this approach. AHP helps capture both subjective and objective evaluation measures, providing a useful mechanism for checking the consistency of the evaluation measures and alternatives suggested by the team, and thus reducing bias in decisionmaking. AHP is supported by pair-wise comparison techniques, and it can support the entire decision process. AHP is normally done in six steps:


1. Describe in summary form the alternatives under consideration.
2. Develop a set of high-level objectives.
3. Decompose the high-level objective from general to specific to produce an objectives hierarchy.
4. Determine the relative importance of the evaluation objectives and attributes by assigning weights arrived at by engaging experts through a structured process such as interviews or questionnaires.
5. Have each expert make pair-wise comparisons of the performance of each decision alternative with respect to a TPM, repeating this for each TPM. Combine the results of these subjective evaluations mathematically using a process or, commonly, an available software tool that ranks the alternatives.
6. Iterate the interviews/questionnaires and AHP evaluation process until a consensus ranking of the alternatives is achieved.

If AHP is used only to produce the TPM weights to be used in a PI or MOE calculation, then only the first four steps listed above are applicable.

With AHP, consensus may be achieved quickly, or several feedback rounds may be required. The feedback consists of reporting the computed ranking for each evaluator and for the group, for each option, along with the reasons for differences in rankings and identified areas of divergence. Experts may choose to change their judgments on TPM weights. At this point, divergent preferences can be targeted for more detailed study. AHP assumes the existence of an underlying preference vector with magnitudes and directions that are revealed through the pair-wise comparisons. This is a powerful assumption, which may at best hold only for the participating experts. The ranking of the alternatives is the result of the experts' judgments and is not necessarily a reproducible result. For further information on AHP, see references by Saaty, The Analytic Hierarchy Process.
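The weight derivation in step 4 from pair-wise comparisons is easy to sketch. The example below uses the common geometric-mean approximation to Saaty's principal-eigenvector method; the criteria and the 1-9 judgments are illustrative, not taken from the handbook.

```python
import math

# Minimal sketch of deriving criterion weights from a pair-wise comparison
# matrix using the geometric-mean approximation to Saaty's eigenvector method.
# The 1-9 judgments below are illustrative assumptions.

criteria = ["safety", "performance", "cost"]

# comparison[i][j] = how much more important criteria[i] is than criteria[j]
comparison = [
    [1.0, 3.0, 5.0],      # safety vs. (safety, performance, cost)
    [1/3, 1.0, 3.0],      # performance
    [1/5, 1/3, 1.0],      # cost
]

# Geometric mean of each row, normalized to sum to 1, approximates the weights.
row_gm = [math.prod(row) ** (1.0 / len(row)) for row in comparison]
total = sum(row_gm)
weights = [g / total for g in row_gm]

for name, w in zip(criteria, weights):
    print(f"{name:12s} weight = {w:.2f}")   # roughly 0.64, 0.26, and 0.10

# A consistency check (Saaty's consistency ratio) would normally follow, to
# confirm the judgments are not too contradictory before using the weights.
```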
Flexibility and Extensibility Attributes

In some decision situations, the selection of a particular decision alternative will have implications for the long term that are very difficult to model in the present. In such cases, it is useful to structure the problem as a series of linked decisions, with some decisions to be made in the near future and others to be made later, perhaps on the basis of information to be obtained in the meantime. There is value in delaying some decisions to the future, when additional information will be available. Some technology choices might foreclose certain opportunities that would be preserved by other choices.

In these cases, it is desirable to consider attributes such as "flexibility" and "extensibility." Flexibility refers to the ability to support more than one current application. Extensibility refers to the ability to be extended to other applications. For example, in choosing an architecture to support lunar exploration, one might consider extensibility to Mars missions. A technology choice that imposes a hard limit on the mass that can be boosted into a particular orbit has less flexibility than a choice that is more easily adaptable to boost more. Explicitly adding extensibility and flexibility as attributes to be weighted and evaluated allows these issues to be addressed systematically. In such applications, extensibility and flexibility are being used as surrogates for certain future performance attributes.

6.8.2.7 Utility Analysis

"Utility" is a measure of the relative value gained from an alternative. Given this measure, the team looks at increasing or decreasing utility and can thereby explain alternative decisions in terms of attempts to increase utility. The theoretical unit of measurement for utility is the util. The utility function maps the range of a TPM into the range of associated utilities, capturing the decisionmaker's preferences and risk attitude. It is possible to imagine simply mapping the indicated range of values linearly onto the interval [0,1] on the utility axis, but in general, this would not capture the decisionmaker's preferences. The decisionmaker's attitude toward risk causes the curve to be convex (risk prone), concave (risk averse), or even some of each.

The utility function directly reflects the decisionmaker's attitude toward risk. When ranking alternatives on the basis of utility, a risk-averse decisionmaker will rank an alternative with highly uncertain performance below an alternative having the same expected performance but less uncertainty. The opposite outcome would result for a risk-prone decisionmaker. When the individual TPM utility functions have been assessed, it is important to check the result for consistency with the decisionmaker's actual preferences (e.g., is it true that intermediate values of TPM1 and TPM2 are preferred to a high value of TPM1 and a low value of TPM2?).


An example of a utility function for the TPM "volume" is shown in Figure 6.8-8. This measure was developed in the context of the design of sensors for a space mission, where volume was a precious commodity. The implication of the graph is that low volume is good, large volume is bad, and the decisionmaker would prefer a design alternative with a very well-determined volume of a few thousand cubic centimeters to an alternative with the same expected volume but large uncertainty.

Figure 6.8-8 Utility function for a "volume" performance measure. (Utility is plotted on the vertical axis and declines as volume grows over the range of roughly 2,000 to 12,000 cubic centimeters.)

Value functions can take the place of utility functions when a formal treatment of risk attitude is unnecessary. They appear very similar to utility functions but have one important difference: value functions do not consider the risk attitude of the decisionmaker. They do not reflect how the decisionmaker compares certain outcomes to uncertain outcomes.

The assessment of a TPM's value function is relatively straightforward. The "best" end of the TPM's range is assigned a value of 1; the "worst" is assigned a value of 0. The decisionmaker makes direct assessments of the value of intermediate points to establish the preference structure in the space of possible TPM values. A utility function can be treated as a value function, but a value function is not necessarily a utility function.

One way to rank alternatives is to use a Multi-Attribute Utility Theory (MAUT) approach. With this approach, the "expected utility" of each alternative is quantified, and alternatives are ranked based on their expected utilities. Sometimes the expected utility is referred to as a PI. An important benefit of applying this method is that it is the best way to deal with significant uncertainties when the decisionmaker is not risk neutral. Probabilistic methods are used to treat uncertainties. A downside of applying this method is the need to quantify the decisionmaker's risk attitudes. Top-level system architecture decisions are natural examples of appropriate applications of MAUT.
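The effect of a risk-averse utility function on a MAUT-style ranking can be seen in a few lines. The concave exponential utility shape, the risk-tolerance parameter, and the outcome distributions below are illustrative assumptions, not handbook values.

```python
import math

# Minimal sketch of a MAUT-style ranking: map each alternative's uncertain TPM
# outcomes through a risk-averse (concave, exponential) utility function and
# rank by expected utility. All numbers and the utility shape are illustrative.

RISK_TOLERANCE = 0.5   # smaller = more risk averse

def utility(x):
    """Concave utility on a TPM already normalized so 0 = worst, 1 = best."""
    return (1 - math.exp(-x / RISK_TOLERANCE)) / (1 - math.exp(-1 / RISK_TOLERANCE))

alternatives = {
    # name: list of (probability, normalized TPM outcome)
    "Conservative design": [(1.0, 0.60)],               # certain, middling outcome
    "Aggressive design":   [(0.5, 0.95), (0.5, 0.25)],  # same mean, more uncertainty
}

for name, outcomes in alternatives.items():
    e_value = sum(p * x for p, x in outcomes)
    e_utility = sum(p * utility(x) for p, x in outcomes)
    print(f"{name:20s} E[TPM]={e_value:.2f}  E[utility]={e_utility:.2f}")
# Both alternatives have the same expected TPM value (0.60), but the concave
# utility ranks the certain alternative higher, illustrating how a risk-averse
# decisionmaker's attitude changes the preferred choice.
```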
6.8.2.8 Risk-Informed Decision Analysis Process Example

Introduction

A decision matrix works for many decisions, but it may not scale up to very complex or risky decisions. For some decisions, a tool is needed to handle the complexity. The following subsection describes a detailed Decision Analysis Process that can be used to support a risk-informed decision.

In practice, decisions are made in many different ways. Simple approaches may be useful, but it is important to recognize their limitations and upgrade to better analysis when this is warranted. Some decisionmakers, when faced with uncertainty in an important quantity, determine a best estimate for that quantity and then reason as if the best estimate were correct. This might be called the "take-your-best-shot" approach. Unfortunately, when the stakes are high and uncertainty is significant, this best-shot approach may lead to poor decisions.

The following steps are a risk-informed decision analysis process:
1. Formulation of the objectives hierarchy and TPMs.
2. Proposing and identifying decision alternatives. Alternatives from this process are combined with the alternatives identified in the other systems engineering processes, including the Design Solution Definition Process, but also including verification and validation as well as production.
3. Risk analysis of decision alternatives and ranking of alternatives.
4. Deliberation and recommendation of decision alternatives.
5. Followup tracking of the implementation of the decision.

These steps support good decisions by focusing first on objectives, next on developing decision alternatives with those objectives clearly in mind and/or using decision alternatives that have been developed under other systems engineering processes.


The later steps of the Decision Analysis Process interrelate heavily with the Technical Risk Management Process, as indicated in Figure 6.8-9. These steps include risk analysis of the decision alternatives, deliberation informed by risk analysis results, and recommendation of a decision alternative to the decisionmaker. Implementation of the decision is also important.

Figure 6.8-9 Risk-informed Decision Analysis Process. (The figure shows recognition of issues or opportunities leading to formulation of the objectives hierarchy and TPMs; proposing and/or identifying decision alternatives, tied to stakeholder expectations, requirements definition/management, design solution, and technical planning; risk analysis of decision alternatives, performing trade studies and ranking; deliberating and recommending a decision alternative; decisionmaking and implementation of the decision alternative; and tracking and controlling performance deviations under Technical Risk Management, with lessons learned and knowledge management feeding back into the process.)

Objectives Hierarchy/TPMs

As shown in Figure 6.8-9, risk-informed decision analysis starts with formulation of the objectives hierarchy. Using this hierarchy, TPMs are formulated to quantify the performance of a decision with respect to the program objectives. The TPMs should have the following characteristics:
• They can support ranking of major decision alternatives.
• They are sufficiently detailed to be used directly in the risk management process.
• They are preferentially independent. This means that they contribute in distinct ways to the program goal. This property helps to ensure that alternatives are ranked appropriately.

An example of an objectives hierarchy is shown in Figure 6.8-10. Details will vary from program to program, but a construct like Figure 6.8-10 is behind the program-specific objectives hierarchy.

The TPMs in this figure are meant to be generically important for many missions. Depending on the mission, these TPMs are further subdivided to the point where they can be objectively measured. Not all TPMs can be measured directly. For example, safety-related TPMs are defined in terms of the probability of a consequence type of a specific magnitude (e.g., the probability of any general public deaths or injuries) or the expected magnitude of a consequence type (e.g., the number of public deaths or injuries).


Figure 6.8-10 Example of an objectives hierarchy. (The hierarchy groups mission success objectives under affordability (meet budget constraints, meet schedules), technical objectives and performance (achieve mission-critical functions, enhance effectiveness/performance, provide extensibility/supportability), safety (protect workforce and public health, protect the environment, protect mission and public assets), and other stakeholders' support. Representative performance measures include design/development and operation cost overrun, schedule slippage, loss of mission function, mass/cargo capacity, reliability/availability, commercial extensibility, astronaut or public death or injury, Earth or planetary contamination, loss of flight systems or public property, and public and science community support. These measures are quantified with economics and schedule models, models of capability metrics relating to mission requirements (mass, thrust, cargo capacity), probabilistic risk assessment (PRA) models of the probability of loss of mission-critical systems or functions and of failure to meet high-level safety objectives, and stakeholder models, all evaluated for each decision alternative.)

Probability of Loss of Mission and Probability of Loss of Crew (P(LOM) and P(LOC)) are two particularly important safety-related TPMs for manned space missions. Because an actuarial basis does not suffice for prediction of these probabilities, modeling will be needed to quantify them.
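A minimal sketch of how such model-based estimates are assembled from lower-level analyses follows. The per-phase probabilities, the independence assumption, and the first-order combination are illustrative only and far simpler than an actual PRA.

```python
# Minimal sketch of why P(LOM) and P(LOC) come from models rather than
# actuarial data: combine independently estimated per-phase failure
# probabilities (themselves produced by PRA models) into mission-level
# figures. All probabilities and the independence assumption are illustrative.

phase_failure = {             # probability of losing the mission in each phase
    "ascent":   0.006,
    "on-orbit": 0.003,
    "entry":    0.004,
}
p_crew_loss_given_failure = {"ascent": 0.8, "on-orbit": 0.3, "entry": 0.9}

p_survive_all = 1.0
for p in phase_failure.values():
    p_survive_all *= (1.0 - p)          # assumes independent phases
p_lom = 1.0 - p_survive_all

# First-order approximation: crew loss requires a phase failure, weighted by
# the conditional probability of losing the crew given that failure.
p_loc = sum(p * p_crew_loss_given_failure[phase]
            for phase, p in phase_failure.items())

print(f"P(LOM) = {p_lom:.4f} (about 1 in {1 / p_lom:.0f})")
print(f"P(LOC) = {p_loc:.4f} (about 1 in {1 / p_loc:.0f})")
```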


7.0 Special Topics

The topics below are of special interest for enhancing the performance of the systems engineering process or constitute special considerations in the performance of systems engineering. The first section describes how systems engineering principles are applied to contracting and to contractors that implement NASA processes and create NASA products. Applying lessons learned enhances the efficiency of the present with the wisdom of the past. Protecting the environment and the Nation's space assets are important considerations in the development of requirements and designs. Integrated design can enhance the efficiency and effectiveness of the design process.

7.1 Engineering with Contracts

7.1.1 Introduction, Purpose, and Scope

Historically, most successful NASA projects have depended on effectively blending project management, systems engineering, and technical expertise among NASA, contractors, and third parties. Underlying these successes are a variety of agreements (e.g., contract, memorandum of understanding, grant, cooperative agreement) between NASA organizations or between NASA and other Government agencies, Government organizations, companies, universities, research laboratories, and so on. To simplify the discussion, the term "contract" is used to encompass these agreements.

This section focuses on the engineering activities pertinent to awarding a contract, managing contract performance, and completing a contract. Interfaces to the procurement process are also covered, since the engineering technical team plays a key role in the development and evaluation of contract documentation.

Contractors and third parties perform activities that supplement (or substitute for) the NASA project technical team's accomplishment of the common technical process activities and requirements. Since contractors might be involved in any part of the systems engineering life cycle, the NASA project technical team needs to know how to prepare for, perform, and complete surveillance of the technical activities that are allocated to contractors.

7.1.2 Acquisition Strategy

Creating an acquisition strategy for a project is a collaborative effort among several NASA HQ offices that leads to approval for project execution. The program and project offices characterize the acquisition strategy in sufficient detail to identify the contracts needed to execute the strategy. Awarding contracts at the project level occurs in the context of the overall program acquisition strategy.

While this section pertains to projects where the decision has been made to have a contractor implement a portion of the project, it is important to remember that the choice between "making" a product in-house at NASA or "buying" it from a contractor is one of the most crucial decisions in systems development. (See Section 5.1.) Questions that should be considered in the "make/buy" decision include the following:
• Is the desired system a development item or more off the shelf?
• What is the relevant experience of NASA versus potential contractors?
• What is the relative importance of risk, cost, schedule, and performance?
• Is there a desire to maintain an in-house capability?

As soon as it is clear that a contract will be needed to obtain a system or service, the responsible project manager should contact the local procurement office.
The contracting officer will assign a contract specialist to navigate the numerous regulatory requirements that affect NASA procurements and to guide the development of the contract documentation needed to award a contract. The contract specialist engages the local legal office as needed.


7.1.2.1 Develop an Acquisition Strategy

The project manager, assisted by the assigned procurement and legal offices, first develops a project acquisition strategy or verifies the one provided. The acquisition strategy provides a business and technical management outline for planning, directing, and managing a project and obtaining products and services via contract.

In some cases, it may be appropriate to probe outside sources in order to gather sufficient information to formulate an acquisition strategy. This can be done by issuing a Request for Information (RFI) to industry and other parties that may have interest in potential future contracts. An RFI is a way to obtain information about technology maturity, technical challenges, capabilities, price and delivery considerations, and other market information that can influence strategy decisions.

The acquisition strategy includes:
• Objectives of the acquisition: capabilities to be provided, major milestones;
• Acquisition approach: single step or evolutionary (incremental), single or multiple suppliers/contracts, competition or sole source, funding source(s), phases, system integration, Commercial-Off-the-Shelf (COTS) products;
• Business considerations: constraints (e.g., funding, schedule), availability of assets and technologies, applicability of commercial items versus internal technical product development;
• Risk management of acquired products or services: major risks and risk sharing with the supplier;
• Contract types: performance-based or level of effort, fixed-price or cost reimbursable;
• Contract elements: incentives, performance parameters, rationale for decisions on contract type; and
• Product support strategy: oversight of the delivered system, maintenance, and improvements.

The technical team gathers data to facilitate the decisionmaking process regarding the above items. The technical team knows about issues with the acquisition approach, determining the availability of assets and technologies, the applicability of commercial items, issues with system integration, and details of product support.
Transition to operations and maintenance represents activities performed to transition acquired products to the organization(s) responsible for operating and maintaining them (which could be contractor(s)). Acquisition management refers to project management activities that are performed throughout the acquisition life cycle by the acquiring organization.

Figure 7.1-1 Acquisition life cycle (acquisition planning, requirements development, solicitation, source selection, contract monitoring, acceptance, and transition to operations and maintenance)

7.1.2.3 NASA Responsibility for Systems Engineering

The technical team is responsible for systems engineering throughout the acquisition life cycle. The technical team contributes heavily to systems engineering decisions and results, whatever the acquisition strategy, for any combination of suppliers, contractors, and subcontractors. The technical team is responsible for systems engineering whether the acquisition strategy calls for the technical team, a prime contractor, or some combination of the two to perform system integration and testing of products from multiple sources.

This subsection provides specific guidance on how to assign responsibility when translating the technical processes onto a contract. Generally, the Technical Planning,


Interface Management, Technical Risk Management, Configuration Management, Technical Data Management, Technical Assessment, and Decision Analysis processes should be implemented throughout the project by both the NASA team and the contractor. The Stakeholder Expectations Definition, Technical Requirements Definition, Logical Decomposition, Design Solution Definition, Product Implementation and Integration, Product Verification and Validation, Product Transition, and Requirements Management processes are implemented by NASA or the contractor depending upon the level of the product decomposition.

Table 7.1-1 provides guidance on how to implement the 17 technical processes from NPR 7123.1. The first two columns give the number of the technical process and the requirement statement of responsibility. The next column provides general guidance on how to distinguish who has responsibility for implementing the process. The last column provides a specific example of how to implement the process for a particular project. The particular scenario is a science mission where a contractor is building the spacecraft, NASA assigns Government-Furnished Property (GFP) instruments to the contractor, and NASA operates the mission.

7.1.3 Prior to Contract Award

7.1.3.1 Acquisition Planning

Based on the acquisition strategy, the technical team needs to plan acquisitions and document the plan while developing the SEMP. The SEMP covers the technical team's involvement in the periods before contract award, during contract performance, and upon contract completion. Included in acquisition planning are solicitation preparation, source selection activities, contract phase-in, monitoring of contractor performance, acceptance of deliverables, completion of the contract, and transition beyond the contract. The SEMP focuses on interface activities with the contractor, including NASA technical team involvement with and monitoring of contracted work.

Often overlooked in project staffing estimates is the amount of time that technical team members spend on contracting-related activities. Depending on the type of procurement, a technical team member involved in source selection could be consumed nearly full time for 6 to 12 months. After contract award, technical monitoring consumes 30 to 50 percent of a team member's time, peaking at full time when critical milestones or key deliverables arrive. Keep in mind that for most contractor activities, NASA staff performs supplementary activities.
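As a rough illustration of how such planning percentages translate into staffing needs, the Python sketch below converts the guidance above into full-time-equivalent (FTE) months for one technical team member. The phase durations and percentages are assumptions chosen for the example, not handbook requirements:

    # Rough staffing estimate for one technical team member supporting a
    # contracted effort. All durations and percentages are illustrative assumptions.

    def fte_months(duration_months: float, fraction_of_time: float) -> float:
        """Convert a time commitment (fraction of full time over a duration) to FTE-months."""
        return duration_months * fraction_of_time

    source_selection   = fte_months(duration_months=9,  fraction_of_time=1.0)   # near full time, 6-12 months
    routine_monitoring = fte_months(duration_months=30, fraction_of_time=0.4)   # 30-50 percent during performance
    milestone_peaks    = fte_months(duration_months=4,  fraction_of_time=0.6)   # extra effort at key milestones

    total = source_selection + routine_monitoring + milestone_peaks
    print(f"Estimated commitment: {total:.1f} FTE-months over the contract life")

A planner would adjust the assumed durations and fractions to the procurement type and the project's milestone schedule.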
The technical team is intimately involved in developing technical documentation for the acquisition package. The acquisition package consists of the solicitation (e.g., Request for Proposals (RFPs)) and supporting documents. The solicitation contains all the documentation that is advertised to prospective contractors (or offerors). The key technical sections of the solicitation are the SOW (or performance work statement), technical specifications, and contract data requirements list. Other sections of the solicitation include proposal instructions and evaluation criteria. Documents that support the solicitation include a procurement schedule, source evaluation plan, Government cost estimate, and purchase request. Input from the technical team will be needed for some of the supporting documents.

It is the responsibility of the contract specialist, with input from the technical team, to ensure that the appropriate clauses are included in the solicitation. The contract specialist is familiar with requirements in the Federal Acquisition Regulation (FAR) and the NASA FAR Supplement (NFS) that will be included in the solicitation as clauses in full text form or as clauses incorporated by reference.

Solicitations

The release of a solicitation to interested parties is the formal indication of a future contract. A solicitation conveys sufficient details of a Government need (along with terms, conditions, and instructions) to allow prospective contractors (or offerors) to respond with a proposal. Depending on the magnitude and complexity of the work, a draft solicitation may be issued. After proposals are received, a source evaluation board (or committee) evaluates technical and business proposals per its source evaluation plan and recommends a contractor selection to the contracting officer. The source evaluation board, led by a technical expert, includes other technical experts and a contracting specialist. The source selection process is completed when the contracting officer signs the contract.

The most common NASA solicitation types are the RFP and the Announcement of Opportunity (AO). Visit the online NASA Procurement Library for a full range of details regarding procurements and source selection.


Table 7.1-1 Applying the Technical Processes on Contract

In each entry below, the NPR 7123.1 requirement begins with the common stem "The Center Directors or designees establish and maintain a process to include activities, requirements, guidelines, and documentation for...", followed by the process-specific scope quoted in the entry.

Process 1
NPR 7123.1 requirement: ...the definition of stakeholder expectations for the applicable WBS model.
Who implements: If stakeholders are at the contractor, then the contractor should have responsibility, and vice versa.
Science mission application: Stakeholders for the mission/project are within NASA; stakeholders for the spacecraft power subsystem are mostly at the contractor.

Process 2
NPR 7123.1 requirement: ...definition of the technical requirements from the set of agreed-upon stakeholder expectations for the applicable WBS model.
Who implements: Assignment of responsibility follows the stakeholders, e.g., if stakeholders are at the contractor, then requirements are developed by the contractor and vice versa.
Science mission application: NASA develops the high-level requirements, and the contractor develops the requirements for the power subsystem.

Process 3
NPR 7123.1 requirement: ...logical decomposition of the validated technical requirements of the applicable WBS.
Who implements: Follows the requirements, e.g., if requirements are developed at the contractor, then the decomposition of those requirements is implemented by the contractor and vice versa.
Science mission application: NASA performs the decomposition of the high-level requirements, and the contractor performs the decomposition of the power subsystem requirements.

Process 4
NPR 7123.1 requirement: ...designing product solution definitions within the applicable WBS model that satisfy the derived technical requirements.
Who implements: Follows the requirements, e.g., if requirements are developed at the contractor, then the design of the product solution is implemented by the contractor and vice versa.
Science mission application: NASA designs the mission/project, and the contractor designs the power subsystem.
Process 5
NPR 7123.1 requirement: ...implementation of a design solution definition by making, buying, or reusing an end product of the applicable WBS model.
Who implements: Follows the design, e.g., if the design is developed at the contractor, then the implementation of the design is performed by the contractor and vice versa.
Science mission application: NASA implements (and retains responsibility for) the design for the mission/project, and the contractor does the same for the power subsystem.

Process 6
NPR 7123.1 requirement: ...the integration of lower level products into an end product of the applicable WBS model in accordance with its design solution definition.
Who implements: Follows the design, e.g., if the design is developed at the contractor, then the integration of the design elements is performed by the contractor and vice versa.
Science mission application: NASA integrates the design for the mission/project, and the contractor does the same for the power subsystem.

Process 7
NPR 7123.1 requirement: ...verification of end products generated by the Product Implementation Process or Product Integration Process against their design solution definitions.
Who implements: Follows the product integration, e.g., if the product integration is implemented at the contractor, then the verification of the product is performed by the contractor and vice versa.
Science mission application: NASA verifies the mission/project, and the contractor does the same for the power subsystem.


Table 7.1-1 Applying the Technical Processes on Contract (continued; each NPR 7123.1 requirement continues the common stem noted at the start of the table)

Process 8
NPR 7123.1 requirement: ...validation of end products generated by the Product Implementation Process or Product Integration Process against their stakeholder expectations.
Who implements: Follows the product integration, e.g., if the product integration is implemented at the contractor, then the validation of the product is performed by the contractor and vice versa.
Science mission application: NASA validates the mission/project, and the contractor does the same for the power subsystem.

Process 9
NPR 7123.1 requirement: ...transitioning end products to the next-higher-level WBS model customer or user.
Who implements: Follows the product verification and validation, e.g., if the product verification and validation is implemented at the contractor, then the transition of the product is performed by the contractor and vice versa.
Science mission application: NASA transitions the mission/project to operations, and the contractor transitions the power subsystem to the spacecraft level.

Process 10
NPR 7123.1 requirement: ...planning the technical effort.
Who implements: Assuming both NASA and the contractor have technical work to perform, both NASA and the contractor need to plan their respective technical efforts.
Science mission application: NASA would plan the technical effort associated with the GFP instruments and the launch and operations of the spacecraft, and the contractor would plan the technical effort associated with the design, build, verification and validation, and delivery and operations of the power subsystem.

Process 11
NPR 7123.1 requirement: ...management of requirements defined and baselined during the application of the system design processes.
Who implements: Follows process #2.

Process 12
NPR 7123.1 requirement: ...management of the interfaces defined and generated during the application of the system design processes.
Who implements: Interfaces should be managed one level above the elements being interfaced.
Science mission application: The interface from the spacecraft to the project ground system would be managed by NASA, while the power subsystem to attitude control subsystem interface would be managed by the contractor.
Process 13
NPR 7123.1 requirement: ...management of the technical risk identified during the technical effort. NPR 8000.4, Risk Management Procedural Requirements, is to be used as a source document for defining this process; NPR 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects, provides one means of identifying and assessing technical risk.
Who implements: Technical risk management is a process that needs to be implemented by both NASA and the contractor. All elements of the project need to identify their risks and participate in the project risk management process. Deciding which risks to mitigate, when, and at what cost is generally a function of NASA project management.
Science mission application: NASA project management should create a project approach to risk management that includes participation from the contractor. Risks identified throughout the project, down to the power subsystem level and below, should be identified and reported to NASA for possible mitigation.


Table 7.1-1 Applying the Technical Processes on Contract (continued)

Process 14
NPR 7123.1 requirement: ...configuration management (CM).
Who implements: Like risk management, CM is a process that should be implemented throughout the project by both the NASA and contractor teams.
Science mission application: NASA project management should create a project approach to CM that includes participation from the contractor. The contractor's internal CM process will have to be integrated with the NASA approach. CM needs to be implemented throughout the project down to the power subsystem level and below.

Process 15
NPR 7123.1 requirement: ...management of the technical data generated and used in the technical effort.
Who implements: Like risk management and CM, technical data management is a process that should be implemented throughout the project by both the NASA and contractor teams.
Science mission application: NASA project management should create a project approach to technical data management that includes participation from the contractor. The contractor's internal technical data process will have to be integrated with the NASA approach. Management of technical data needs to be implemented throughout the project down to the power subsystem level and below.

Process 16
NPR 7123.1 requirement: ...making assessments of the progress of the planned technical effort and progress toward requirements satisfaction.
Who implements: Assessing progress is a process that should be implemented throughout the project by both the NASA and contractor teams.
Science mission application: NASA project management should create a project approach to assessing progress that includes participation from the contractor. Typically this would be the project review plan. The contractor's internal review process will have to be integrated with the NASA approach. Technical reviews need to be implemented throughout the project down to the power subsystem level and below.

Process 17
NPR 7123.1 requirement: ...making technical decisions.
Who implements: Technical decisions are made throughout the project by both NASA and contractor personnel. Certain types of decisions, or decisions on certain topics, may best be made by either NASA or the contractor depending upon the Center's processes and the type of project.
Science mission application: For this example, decisions affecting high-level requirements or mission success would be made by NASA, and those at the lower level (e.g., the power subsystem) that do not affect mission success would be made by the contractor.

Many of the FAR and NFS clauses relate to public laws, contract administration, and financial management. Newer clauses address information technology security, data rights, intellectual property, new technology reporting, and similar items. The contract specialist stays abreast of updates to the FAR and NFS.
As the SOW and other parts of the solicitation mature, it is important for the contract specialist and the technical team to work closely to avoid duplication of similar requirements.


7.1.3.2 Develop the Statement of Work

Effective surveillance of a contractor begins with the development of the SOW. The technical team establishes the SOW requirements for the product to be developed. The SOW contains the process, performance, and management requirements the contractor must fulfill during product development.

As depicted in Figure 7.1-2, developing the SOW requires the technical team to analyze the work to be accomplished by the contractor, along with the associated performance and data needs. The process is iterative and supports the development of other documentation needed for the contracting effort. The principal steps in the figure are discussed further in Table 7.1-2.

Figure 7.1-2 Contract requirements development process. The figure shows a front-end analysis of the need through Step 1: Analyze Work (define scope, organize SOW, write SOW requirements, document rationale), Step 2: Analyze Performance (define performance standards), and Step 3: Analyze Data (identify standards, define deliverables), leading to baselined SOW requirements and other related documentation (define product, specify standards, prepare quality assurance surveillance plan, define incentives).

Table 7.1-2 Steps in the Requirements Development Process

Step 1: Analyze the Work
• Define scope: Document in the SOW that part of the project's scope that will be contracted. Give sufficient background information to orient offerors.
• Organize SOW: Organize the work by products and associated activities (i.e., product WBS). Include activities necessary to develop products defined in the requirements specification and to support, manage, and oversee development of the products.
• Write SOW requirements: Write SOW requirements in the form "the Contractor shall." Write product requirements in the form "the system shall."
• Document rationale: Document separately from the SOW the reason(s) for including requirements that may be unique, unusual, controversial, political, etc. The rationale is not part of the solicitation.

Step 2: Analyze Performance
• Define performance standards: Define what constitutes acceptable performance by the contractor. Common metrics for use in performance standards include cost and schedule. For guidance on metrics to assess the contractor's performance and to assess adherence to product requirements on delivered products, refer to System and Software Metrics for Performance-Based Contracting.

Step 3: Analyze Data
• Identify standards: Identify standards (e.g., EIA, IEEE, ISO) that apply to deliverable work products, including plans, reports, specifications, drawings, etc. Consensus standards and codes (e.g., National Electrical Code, National Fire Protection Association, American Society of Mechanical Engineers) that apply to product development and workmanship are included in specifications.
• Define deliverables: Ensure each deliverable data item (e.g., technical data such as requirements specifications and design documents; management data such as plans and metrics reports) has a corresponding SOW requirement for its preparation. Ensure each product has a corresponding SOW requirement for its delivery.

After a few iterations, baseline the SOW requirements and place them under configuration management. (See Section 6.5.)

Use the SOW checklist, which is in Appendix P, to help ensure that the SOW is complete, consistent, correct, unambiguous, and verifiable. Below are some key items to require in the SOW:

• Technical and management deliverables having the highest risk potential (e.g., the SEMP, development and transition plans); requirements and architecture specifications; test plans, procedures, and reports; metrics reports; and delivery, installation, and maintenance documentation.
• Contractual or scheduling incentives in a contract should not be tied to the technical milestone reviews. These milestone reviews (for example, SRR, PDR, CDR, etc.) enable a critical and valuable technical assessment to be performed. These reviews have specific entrance criteria that should not be waived. The reviews should be conducted when these criteria are met, rather than being driven by a particular schedule.
• Timely electronic access to data, work products, and interim deliverables to assess contractor progress on final deliverables.
• Provision(s) to flow down requirements to subcontractors and other team members.
• Content and format requirements of deliverables in the contract data requirements list. These requirements are specified in a data requirements document or data item description, usually as an attachment. Remember that you need to be able to edit data deliverables.


• Metrics to gain visibility into technical progress for each discipline (e.g., hardware, software, thermal, optics, electrical, mechanical). For guidance on metrics to assess the contractor's performance and to assess adherence to product requirements on delivered products, refer to System and Software Metrics for Performance-Based Contracting.
• Quality incentives (defect count, error count, etc.) to reduce the risk of poor-quality deliverables. Be careful, because incentives can affect contractor behavior. For example, if you reward early detection and correction of software defects, the contractor may expend effort correcting minor defects and saving major defects for later.
• A continuous risk management program, including a periodically updated risk list, joint risk reviews, and the vendor's risk approach.
• Surveillance activities (e.g., status meetings, reviews, audits, site visits) to monitor progress and production, especially access to subcontractors and other team members.
• Specialty engineering (e.g., reliability, quality assurance, cryogenics, pyrotechnics, biomedical, waste management) that is needed to fulfill standards and verification requirements.
• Provisions to assign responsibilities between NASA and the contractor according to verification, validation, or similar plans that are not available prior to award.
• Provisions to cause a contractor to disclose changes to a critical process.
If a process is critical to human safety, require the contractor to obtain approval from the contracting officer before a different process is implemented.
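Several of the wording rules above (SOW statements written as "the Contractor shall," with unambiguous, verifiable language) lend themselves to a simple automated screen. The Python sketch below is a hypothetical illustration, not a substitute for the Appendix P checklist; the ambiguous-term list, the sample statements, and the DRD identifier are invented:

    # Screen draft SOW statements for common wording problems.
    # The ambiguous-term list and sample statements are illustrative only.

    AMBIGUOUS_TERMS = ["as appropriate", "etc.", "and/or", "minimize", "maximize",
                       "to the extent practicable", "support", "as required"]

    def screen_sow_statement(statement: str) -> list[str]:
        """Return a list of potential wording problems found in one SOW statement."""
        findings = []
        lowered = statement.lower()
        if "shall" not in lowered:
            findings.append("no 'shall' -- may not be a binding requirement")
        for term in AMBIGUOUS_TERMS:
            if term in lowered:
                findings.append(f"ambiguous term: '{term}'")
        return findings

    draft_statements = [
        "The Contractor shall deliver monthly metrics reports in the format of DRD-042.",
        "The Contractor will support integration activities as appropriate.",
    ]
    for stmt in draft_statements:
        print(stmt, "->", screen_sow_statement(stmt) or "no findings")

A screen of this kind only flags candidate problems; the technical team still judges whether each flagged statement is actually unclear or unverifiable.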


Note: If you neglect to require something in the SOW, it can be costly to add it later.

The contractor must supply a SEMP that specifies its systems engineering approach for requirements development, technical solution definition, design realization, product evaluation, product transition, and technical planning, control, assessment, and decision analysis. It is best to request a preliminary SEMP in the solicitation. The source evaluation board can use the SEMP to evaluate the offeror's understanding of the requirements, as well as the offeror's capability and capacity to deliver the system. After contract award, the technical team can eliminate any gaps between the project's SEMP and the contractor's SEMP that could affect smooth execution of the integrated set of common technical processes.

Often a technical team has experience developing technical requirements but little or no experience developing SOW requirements. If you give the contractor a complex set of technical requirements but neglect to include sufficient performance measures and reporting requirements, you will have difficulty monitoring progress and determining product and process quality. Understanding performance measures and reporting requirements will enable you to ask for the appropriate data or reports that you intend to use.

Traditionally, NASA contracts require contractors to satisfy requirements in NASA policy directives, NASA procedural requirements, NASA standards, and similar documents. These documents are almost never written in language that can be used directly in a contract, and too often they contain requirements that do not apply to contracts. So, before the technical team boldly goes where so many have gone before, it is a smart idea to understand what the requirements mean and whether they apply to contracts. The requirements that apply to contracts need to be written in a way that is suitable for contracts.

7.1.3.3 Task Order Contracts

Sometimes the technical team can obtain engineering products and services through an existing task order contract. The technical team develops a task order SOW and interacts with the contracting officer's technical representative to issue a task order. Preparing the task order SOW is simplified because the contract already establishes baseline requirements for execution. First-time users need to understand the scope of the contract and the degree to which delivery and reporting requirements, performance metrics, incentives, and so forth are already covered.

Task order contracts offer quick access (days or weeks instead of months) to engineering services for studies, analyses, design, development, and testing and to support services for configuration management, quality assurance, maintenance, and operations. Once a task order is issued, the technical team performs the engineering activities associated with managing contract performance and completing a contract (discussed later) as they apply to the task order.

7.1.3.4 Surveillance Plan

The surveillance plan defines the monitoring of the contractor's effort and is developed at the same time as the SOW. The technical team works with mission assurance personnel, generally from the local Safety and Mission Assurance (SMA) organization, to prepare the surveillance plan for the contracted effort.
Sometimes mission assurance is performed by technical experts on the project. In either case, mission assurance personnel should be engaged from the start of the project. Prior to contract award, the surveillance plan is written at a general level to cover the Government's approach to perceived programmatic risk. After contract award, the surveillance plan describes in detail the inspection, testing, and other quality-related surveillance activities that will be performed to ensure the integrity of contract deliverables, given the current perspective on programmatic risks.

Recommended items to include in the surveillance plan follow:

• Review key deliverables within the first 30 days to ensure adequate startup of activities.
• Conduct contractor/subcontractor site visits to monitor production or assess progress.
• Evaluate the effectiveness of the contractor's systems engineering processes.

Drafting the surveillance plan when the SOW is developed promotes the inclusion of key requirements in the SOW that enable the activities in the surveillance plan. For example, for the technical team to conduct site visits to monitor production at a subcontractor, the SOW must include a requirement that permits site visits, combined with a requirement for the contractor to flow down requirements that directly affect subcontractors.
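The cross-check just described, in which each planned surveillance activity is traced to an enabling SOW requirement, can be expressed as a simple traceability test. The Python sketch below is hypothetical; the activity names and SOW requirement identifiers are invented for illustration:

    # Check that every surveillance activity is enabled by at least one SOW requirement.
    # Activity names and SOW requirement IDs are illustrative only.

    surveillance_activities = {
        "Subcontractor site visits":        ["SOW-3.4"],   # assumed: access to subcontractor facilities
        "Review of key deliverables":       ["SOW-5.1"],   # assumed: deliverable review within 30 days
        "SE process effectiveness reviews": [],            # no enabling requirement identified yet
    }

    for activity, enabling_reqs in surveillance_activities.items():
        if not enabling_reqs:
            print(f"GAP: '{activity}' has no enabling SOW requirement -- add one before award")
        else:
            print(f"OK:  '{activity}' enabled by {', '.join(enabling_reqs)}")

Running such a check while the SOW is still a draft makes it easy to add the missing enabling requirements before the solicitation is released.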


7.1.3.5 Writing Proposal Instructions and Evaluation Criteria

Once the technical team has written the SOW, the Government cost estimate, and the preliminary surveillance plan and has updated the SEMP, the solicitation can be developed. Authors of the solicitation must understand the information that will be needed to evaluate the proposals and write instructions to obtain specifically needed information. In a typical source selection, the source evaluation board evaluates the offerors' understanding of the requirements, management approach, and cost and their relevant experience and past performance. This information is required in the business and technical proposals. (This section discusses only the technical proposal.) The solicitation also gives the evaluation criteria that the source evaluation board will use. The evaluation criteria section corresponds one-for-one to the items requested in the proposal instructions section.

State instructions clearly and correctly. The goal is to obtain enough information to have common grounds for evaluation. The challenge becomes how much information to give the offerors. If you are too prescriptive, the proposals may look too similar. Be careful not to level the playing field too much; otherwise, discriminating among offerors will be difficult. Because the technical merits of a proposal compete with nontechnical items of similar importance (e.g., cost), the technical team must wisely choose discriminators to facilitate the source selection.

Source Evaluation Board

One or more members of the technical team serve as members of the source evaluation board. They participate in the evaluation of proposals following applicable NASA and Center source selection procedures. Because source selection is so important, the procurement office works closely with the source evaluation board to ensure that the source selection process is properly executed. The source evaluation board develops a source evaluation plan that describes the evaluation factors and the method of evaluating the offerors' responses. Unlike decisions made by systems engineers early in a product life cycle, source selection decisions must be carefully managed in accordance with regulations governing the fairness of the selection process.

The source evaluation board evaluates nontechnical (business) and technical items. Items may be evaluated by themselves or in the context of other technical or nontechnical items.
Table 7.1-3 shows technical items to request from offerors and the evaluation criteria with which they correlate.

Evaluation Considerations

The following are important to consider when evaluating proposals:

• Give adequate weight to evaluating the capability of disciplines that could cause mission failure (e.g., hardware, software, thermal, optics, electrical, mechanical).
• Conduct a preaward site visit of production/test facilities that are critical to mission success.
• Distinguish between "pretenders" (good proposal writers) and "contenders" (good performing organizations). Pay special attention to how process descriptions match relevant experience and past performance. While good proposals can indicate good future performance, lesser quality proposals usually predict lesser quality future work products and deliverables.
• Assess the contractor's SEMP and other items submitted with the proposal based on evaluation criteria that include quality characteristics (e.g., complete, unambiguous, consistent, verifiable, and traceable).

The cost estimate that the technical team performs as part of the Technical Planning Process supports evaluation of the offerors' cost proposals, helping the source evaluation board determine the realism of the offerors' technical proposals. (See Section 6.1.) The source evaluation board can determine "whether the estimated proposed cost elements are realistic for the work to be performed; reflect a clear understanding of the requirements; and are consistent with the unique methods of performance and materials described in the offeror's technical proposal." (FAR 15.404-1(d)(1))
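To make the combined use of evaluation criteria and weights concrete, the Python sketch below scores two hypothetical proposals against a weighted set of factors. The criteria, weights, and raw scores are invented for illustration only; actual evaluations must follow the source evaluation plan and the regulations governing source selection:

    # Illustrative weighted scoring of proposals against evaluation criteria.
    # Criteria, weights, and raw scores (0-10) are invented for this example.

    criteria_weights = {
        "Technical approach (incl. SEMP)":       0.40,
        "Relevant experience/past performance":  0.25,
        "Management approach":                   0.20,
        "Cost realism":                          0.15,
    }

    proposals = {
        "Offeror A": {"Technical approach (incl. SEMP)": 8, "Relevant experience/past performance": 6,
                      "Management approach": 7, "Cost realism": 9},
        "Offeror B": {"Technical approach (incl. SEMP)": 7, "Relevant experience/past performance": 9,
                      "Management approach": 8, "Cost realism": 6},
    }

    for offeror, scores in proposals.items():
        weighted = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
        print(f"{offeror}: weighted score {weighted:.2f} out of 10")

The value of such an exercise is in forcing the board to state its discriminators and their relative importance before proposals arrive, not in the arithmetic itself.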


Table 7.1-3 Proposal Evaluation Criteria

Item: Preliminary contractor SEMP.
Criteria: How well the plan can be implemented given the resources, processes, and controls stated. Look at completeness (how well it covers all SOW requirements), internal consistency, and consistency with other proposal items. The SEMP should cover all resources and disciplines needed to meet product requirements, etc.

Item: Process descriptions, including subcontractor's (or team member's) processes.
Criteria: Effectiveness of processes and compatibility of contractor and subcontractor processes (e.g., responsibilities, decisionmaking, problem resolution, reporting).

Item: Artifacts (documents) of relevant work completed. Such documentation depicts the probable quality of work products an offeror will provide on your contract. Artifacts provide evidence (or lack thereof) of systems engineering process capability.
Criteria: Completeness of artifacts, consistency among artifacts on a given project, consistency of artifacts across projects, conformance to standards.

Item: Engineering methods and tools.
Criteria: Effectiveness of the methods and tools.

Item: Process and product metrics.
Criteria: How well the offeror measures the performance of its processes and the quality of its products.

Item: Subcontract management plan (may be part of the contractor SEMP).
Criteria: Effectiveness of subcontract monitoring and control and integration/separation of risk management and CM.

Item: Phase-in plan (may be part of the contractor SEMP).
Criteria: How well the plan can be implemented given the existing workload of resources.

7.1.3.6 Selection of COTS Products

When COTS products are given as part of the technical solution in a proposal, it is imperative that the selection of a particular product be evaluated and documented by applying the Decision Analysis Process. Bypassing this task or neglecting to document the evaluation sufficiently could lead to a situation where NASA cannot support its position in the event of a vendor protest.

7.1.3.7 Acquisition-Unique Risks

Table 7.1-4 identifies a few risks that are unique to acquisition, along with ways to manage them from an engineering perspective. Bear in mind that the legal and procurement aspects of these risks are generally covered in contract clauses.

There may also be other acquisition risks not listed in Table 7.1-4. All acquisition risks should be identified and handled the same as other project risks using the Continuous Risk Management (CRM) process. A project can also choose to separate out acquisition risks as a risk-list subset and handle them using the risk-based acquisition management process if so desired.

When the technical team completes the activities prior to contract award, it will have an updated SEMP, the Government cost estimate, an SOW, and a preliminary surveillance plan. Once the contract is awarded, the technical team begins technical oversight.

7.1.4 During Contract Performance

7.1.4.1 Performing Technical Surveillance

Surveillance of a contractor's activities and/or documentation is performed to demonstrate fiscal responsibility, ensure crew safety and mission success, and determine award fees for extraordinary (or penalty fees for substandard) contract execution. Prior to or outside of a contract award, a less formal agreement may be made for the Government to be provided with information for a trade study or engineering evaluation. Upon contract award, it may become necessary to monitor the contractor's adherence to contractual requirements more formally. (For a greater understanding of surveillance requirements, see NPR 8735.2, Management of Government Quality Assurance Functions for NASA Contracts.)

Under the authority of the contracting officer, the technical team performs technical surveillance as established in the NASA SEMP.
The technical team assesses technical work productivity, evaluates product quality, and conducts technical reviews of the contractor. (Refer to the Technical Assessment Process.) Some of the key activities are discussed below.


Table 7.1-4 Risks in Acquisition

Risk: Supplier goes bankrupt prior to delivery.
Mitigation: The source selection process is the strongest weapon. Select a supplier with a proven track record, solid financial position, and stable workforce. As a last resort, the Government may take possession of any materials, equipment, and facilities on the work site necessary for completing the work in-house or via another contract.

Risk: Supplier is acquired by another supplier with different policies.
Mitigation: Determine the differences between policies before and after the acquisition. If there is a critical difference, then consult with the procurement and legal offices. Meet with the supplier and determine if the original policy will be honored at no additional cost. If the supplier balks, then follow the advice from legal.

Risk: Deliverables include software to be developed.
Mitigation: Include an experienced software manager on the technical team. Monitor the contractor's adherence to software development processes. Discuss software progress, issues, and quality at technical interchange meetings.

Risk: Deliverables include COTS products (especially software).
Mitigation: Understand the quality of the product: look at test results (when test results show a lot of rework to correct defects, users will probably find more defects); examine problem reports (these show whether or not users are finding defects after release); evaluate user documentation; and look at product support.

Risk: Products depend on results from models or simulations.
Mitigation: Establish the credibility and uncertainty of results. Determine the depth and breadth of practices used in verification and validation of the model or simulation. Understand the quality of the software upon which the model or simulation is built. For more information, refer to NASA-STD-(I)-7009, Standard for Models and Simulations.

Risk: Budget changes prior to delivery of all products (and the contract was written without interim deliverables).
Mitigation: Options include removing deliverables or services from the contract scope in order to obtain key products, relaxing the schedule in exchange for reduced cost, or accepting deliverables "as is." To avoid this situation, include in the SOW electronic access to data, work products, and interim deliverables to assess contractor progress on final deliverables.

Risk: Contractor is a specialty supplier with no experience in a particular engineering discipline; for example, the contractor produces cryogenic systems that use alarm monitoring software from another supplier, but the contractor does not have software expertise.
Mitigation: Mitigate the risks of COTS product deliverables as discussed earlier. If the contract is for delivery of a modified COTS product or a custom product, then include provisions in the SOW to cover supplier support (beyond the product warranty) that includes subsupplier support, version upgrade/replacement plans, and surveillance of the subsupplier. If the product is inexpensive, simply purchasing spares may be more cost effective than adding surveillance requirements.

• Develop the NASA-Contractor Technical Relationship: At the contract kick-off meeting, set expectations for technical excellence throughout the execution of the contract. Highlight the requirements in the contract SOW that are the most important. Discuss the quality of work and products to be delivered against the technical requirements.
Mutually agree on the format of the technical reviews and how to resolve misunderstandings, oversights, and errors.
• Conduct Technical Interchange Meetings: Start early in the contract period and meet periodically with the contractor (and subcontractors) to confirm that the


contractor has a correct and complete understanding of the requirements and operational concepts. Establish day-to-day NASA-contractor technical communications.
• Control and Manage Requirements: Almost inevitably, new or evolving requirements will affect a project. When changes become necessary, the technical team needs to control and manage changes and additions to requirements proposed by either NASA or the contractor. (See Section 6.2.) Communicate changes to any project participants that the changes will affect. Any changes in requirements that affect contract cost, schedule, or performance must be conveyed to the contractor through a formal contract change. Consult the contracting officer's technical representative.
• Evaluate Systems Engineering Processes: Evaluate the effectiveness of defined systems engineering processes. Conduct audits and reviews of the processes. Identify process deficiencies and offer assistance with process improvement.
• Evaluate Work Products: Evaluate interim plans, reports, specifications, drawings, processes, procedures, and similar artifacts that are created during the systems engineering effort.
• Monitor Contractor Performance Against Key Metrics: Monitoring contractor performance extends beyond programmatic metrics to process and product metrics. (See Section 6.7 on technical performance measures.) These metrics depend on acceptable product quality. For example, "50 percent of design drawings completed" is misleading if most of them have defects (e.g., incorrect, incomplete, inconsistent). The amount of work to correct the drawings affects cost and schedule. It is useful to examine reports that show the amount of contractor time invested in product inspection and review.
• Conduct Technical Reviews: Assess contractor progress and performance against requirements through technical reviews. (See Section 6.7.)
• Verify and Validate Products: Verify and validate the functionality and performance of products before delivery and prior to integration with other system products. To ensure that a product is ready for system integration or to enable further system development, perform verification and validation as early as practical. (See Sections 5.3 and 5.4.)

7.1.4.2 Evaluating Work Products

Work products and deliverables share common attributes that can be used to assess quality. Additionally, relationships among work products and deliverables can be used to assess quality. Some key attributes that help determine the quality of work products are listed below:

• Satisfies content and format requirements,
• Understandable,
• Complete,
• Consistent (internally and externally), including terminology (an item is called the same thing throughout the documents), and
• Traceable.

Table 7.1-5 shows some typical work products from the contractor and key attributes with respect to other documents that can be used as evaluation criteria.
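The caution above about raw completion metrics ("50 percent of design drawings completed") can be made concrete with a small calculation. The Python sketch below is a hypothetical illustration; the counts are invented:

    # Quality-adjusted progress: raw "percent complete" discounted for defects.
    # The counts below are invented for illustration.

    drawings_planned   = 200
    drawings_reported  = 100   # contractor reports 50 percent complete
    drawings_defective = 40    # found incorrect or incomplete during Government review

    raw_progress      = drawings_reported / drawings_planned
    adjusted_progress = (drawings_reported - drawings_defective) / drawings_planned

    print(f"Reported progress: {raw_progress:.0%}")
    print(f"Quality-adjusted progress: {adjusted_progress:.0%}")  # rework will consume cost and schedule

Comparing the two numbers over time gives the technical team an early indication of whether reported progress is likely to hold up through verification.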
7.1.4.3 Issues with Contract-Subcontract Arrangements

In the ideal world, a contractor manages its subcontractors, each subcontract contains all the right requirements, and resources are adequate. In the real world, the technical team deals with contractors and subcontractors that are motivated by profit, (sub)contracts with missing or faulty requirements, and resources that are consumed more quickly than expected. These and other factors cause or influence two key issues in subcontracting:

• Limited or no oversight of subcontractors, and
• Limited access to or inability to obtain subcontractor data.

These issues are exacerbated when they apply to second- (or lower) tier subcontractors. Table 7.1-6 looks at these issues more closely, along with potential resolutions.

Scenarios other than those above are possible. Resolutions might include reducing contract scope or deliverables in lieu of cost increases or sharing information technology in order to obtain data. Even with adequate flowdown requirements in (sub)contracts, legal wrangling may be necessary to entice contractors to satisfy the conditions of their (sub)contracts.

Activities during contract performance will generate an updated surveillance plan, minutes documenting meetings, change requests, and contract change orders. Processes will be assessed, deliverables and work products evaluated, and results reviewed.


Table 7.1-5 Typical Work Product Documents

Work product: SEMP.
Evaluation criteria: Describes activities and products required in the SOW. The SEMP is not complete unless it describes (or references) how each activity and product in the SOW will be accomplished.

Work product: Software management/development plan.
Evaluation criteria: Consistent with the SEMP and related project plans. Describes how each software-related activity and product in the SOW will be accomplished. Development approach is feasible.

Work product: System design.
Evaluation criteria: Covers the technical requirements and operational concepts. System can be implemented.

Work product: Software design.
Evaluation criteria: Covers the technical requirements and operational concepts. Consistent with the hardware design. System can be implemented.

Work product: Installation plans.
Evaluation criteria: Covers all user site installation activities required in the SOW. Presents a sound approach. Shows consistency with the SEMP and related project plans.

Work product: Test plans.
Evaluation criteria: Covers qualification requirements in the SOW. Covers technical requirements. Approach is feasible.

Work product: Test procedures.
Evaluation criteria: Test cases are traceable to technical requirements.

Work product: Transition plans.
Evaluation criteria: Describes all transition activities required in the SOW. Shows consistency with the SEMP and related project plans.

Work product: User documentation.
Evaluation criteria: Sufficiently and accurately describes installation, operation, or maintenance (depending on the document) for the target audience.

Work product: Drawings and documents (general).
Evaluation criteria: Comply with content and format requirements specified in the SOW.

7.1.5 Contract Completion

The contract comes to completion with the delivery of the contracted products, services, or systems and their enabling products or systems. Along with the product, as-built documentation and operational instructions, including user manuals, must be delivered.

7.1.5.1 Acceptance of Final Deliverables

Throughout the contract period, the technical team reviews and accepts various work products and interim deliverables identified in the contract data requirements list and schedule of deliverables. The technical team also participates in milestone reviews to finalize acceptance of deliverables. At the end of the contract, the technical team ensures that each technical deliverable is received and that its respective acceptance criteria are satisfied.

The technical team records the acceptance of deliverables against the contract data requirements list and the schedule of deliverables. These documents serve as an inventory of the items and services to be accepted. Although rejections and omissions are infrequent, the technical team needs to take action in such a case. Good data management and configuration management practices facilitate the effort.


Table 7.1-6 Contract-Subcontract Issues

Issue: Oversight of the subcontractor is limited because requirement(s) are missing from the contract.
Resolution: The technical team gives the SOW requirement(s) to the contracting officer, who adds the requirement(s) to the contract and negotiates the change order, including additional costs to NASA. The contractor then adds the requirement(s) to the subcontract and negotiates the change order with the subcontractor. If the technical team explicitly wants to perform oversight, then the SOW should indicate what the contractor, its subcontractors, and team members are required to do and provide.

Issue: Oversight of the subcontractor is limited because requirement(s) were not flowed down from the contractor to the subcontractor.
Resolution: It is the contractor's responsibility to satisfy the requirements of the contract. If the contract includes provisions to flow down requirements to subcontractors, then the technical team can request the contracting officer to direct the contractor to execute the provisions. The contractor may need to add requirements and negotiate cost changes with the subcontractor. If NASA has a cost-plus contract, then expect the contractor to bill NASA for any additional costs incurred. If NASA has a fixed-price contract, then the contractor will absorb the additional costs or renegotiate cost changes with NASA. If the contract does not explicitly include requirements flowdown provisions, the contractor is responsible for performing oversight.

Issue: Oversight of a second-tier subcontractor is limited because requirement(s) were not flowed down from the subcontractor to the second-tier subcontractor.
Resolution: This is similar to the previous case, but more complicated. Assume that the contractor flowed down requirements to its subcontractor, but the subcontractor did not flow down requirements to the second-tier subcontractor. If the subcontract includes provisions to flow down requirements to lower tier subcontractors, then the technical team can request the contracting officer to direct the contractor to ensure that subcontractors execute the flowdown provisions to their subcontractors. If the subcontract does not explicitly include requirements flowdown provisions, the subcontractor is responsible for performing oversight of lower tier subcontractors.

Issue: Access to subcontractor data is limited or not provided because providing the data is not required in the contract.
Resolution: The technical team gives the SOW requirement(s) to the contracting officer, who adds the requirement(s) to the contract and negotiates the change order, including additional costs to NASA. The contractor then adds the requirement(s) to the subcontract and negotiates the change order with the subcontractor. If the technical team explicitly wants direct access to subcontractor data, then the SOW should indicate what the contractor, its subcontractors, and team members are required to do and provide.

Issue: Access to subcontractor data is limited or not provided because providing the data is not required in the subcontract.
Resolution: It is the contractor's responsibility to obtain the data (and data rights) necessary to satisfy the conditions of its contract, including data from subcontractors. If the technical team needs direct access to subcontractor data, then follow the previous case to add flowdown provisions to the contract so that the contractor will add requirements to the subcontract.
Acceptance criteria include:

• Product verification and validation completed successfully. The technical team performs or oversees verification and validation of products, integration of products into systems, and system verification and validation.
• Technical data package is current (as-built) and complete.
• Transfer of certifications, spare parts, warranties, etc., is complete.
• Transfer of software products, licenses, data rights, intellectual property rights, etc., is complete.
• Technical documentation required in contract clauses is complete (e.g., new technology reports).

It is important for NASA personnel and facilities to be ready to receive the final deliverables. Key items to have prepared include:

• A plan for support and for transitioning products to operations;
• Training of personnel;
• A configuration management system in place; and
• Allocation of responsibilities for troubleshooting, repair, and maintenance.
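Recording acceptance against the contract data requirements list amounts to maintaining a simple inventory of deliverables and their status. The Python sketch below is a hypothetical illustration; the deliverable names and status values are invented:

    # Track acceptance of final deliverables against the contract data requirements list (CDRL).
    # Deliverable names and acceptance status values are illustrative only.

    cdrl = {
        "As-built technical data package":     {"received": True,  "criteria_met": True},
        "Verification and validation reports": {"received": True,  "criteria_met": False},
        "Spare parts and warranties":          {"received": False, "criteria_met": False},
    }

    open_items = [name for name, status in cdrl.items()
                  if not (status["received"] and status["criteria_met"])]

    if open_items:
        print("Open items before contract closeout:", "; ".join(open_items))
    else:
        print("All deliverables received and accepted.")

In practice this inventory would live in the project's data management system, but the principle is the same: no contract closeout until every line item is received and its acceptance criteria are satisfied.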


7.1.5.2 Transition Management

Before the contract was awarded, a product support strategy was developed as part of the acquisition strategy. The product support strategy outlines preliminary notions regarding integration, operations, maintenance, improvements, decommissioning, and disposal. Later, after the contract is awarded, a high-level transition plan that expands the product support strategy is recorded in the SEMP. Details of product/system transition are subsequently documented in one or more transition plans. Elements of transition planning are discussed in Section 5.5.

Transition plans must clearly indicate responsibility for each action (NASA or contractor). Also, the contract SOW must have included a requirement that the contractor will execute the responsibilities assigned to it in the transition plan (usually on a cost-reimbursable basis).

Frequently, NASA (or NASA jointly with a prime contractor) is the system integrator on a project. In this situation, multiple contractors (or subcontractors) will execute their respective transition plans. NASA is responsible for developing and managing a system integration plan that incorporates inputs from each transition plan. The provisions that were written in the SOW months or years earlier accommodate the transfer of products and systems from the contractors to NASA.

7.1.5.3 Transition to Operations and Support

The successful transition of systems to operations and support, which includes maintenance and improvements, depends on clear transition criteria that the stakeholders agree on. The technical team participates in the transition, providing continuity for the customer, especially when a follow-on contract is involved. When the existing contract is used, the technical team conducts a formal transition meeting with the contractor. Alternatively, the transition may involve the same contractor under a different contract arrangement (e.g., a modified or new contract), or the transition may involve a different contractor than the developer, using a different contract arrangement.

The key benefits of using the existing contract are that the relevant stakeholders are familiar with the contractor and that the contractor knows the products and systems involved. Ensure that the contractor and other key stakeholders understand the service provisions (requirements) of the contract. This meeting may lead to contract modifications in order to amend or remove service requirements that have been affected by contract changes over the years.

Seeking to retain the development contractor under a different contract can be beneficial. Although it takes time and resources to compete the contract, it permits NASA to evaluate the contractor and other offerors against operations and support requirements only. The incumbent contractor has personnel with development knowledge of the products and systems, while service providers specialize in optimizing the cost and availability of services. In the end, the incumbent may be retained under a contract that focuses on current needs (not those of several years ago), or else a motivated service provider will work hard to understand how to operate and maintain the systems. If a follow-on contract will be used, consult the local procurement office and exercise the steps that were used to obtain the development contract. Assume that the amount of calendar time to award a follow-on contract will be comparable to the time to award the development contract.
Also consider that the incumbent may be less motivated upon losing the competition.

Some items to consider for follow-on contracts during the development of SOW requirements include:

• Staff qualifications;
• Operation schedules, shifts, and staffing levels;
• Maintenance profile (e.g., preventive, predictive, run-to-fail);
• Maintenance and improvement opportunities (e.g., schedule, turnaround time);
• Historical data for similar efforts; and
• Performance-based work.

The transition to operations and support represents a shift from the delivery of products to the delivery of services. Service contracts focus on the contractor's performance of activities rather than the development of tangible products. Consequently, performance standards reflect customer satisfaction and service efficiency, such as:

• Customer satisfaction ratings;
• Efficiency of service;
• Response time to a customer request;
• Availability (e.g., of a system, Web site, or facility);
• Time to perform a maintenance action;
• Planned versus actual staffing levels;
• Planned versus actual cost;
• Effort and cost per individual service action; and
• Percent decrease in effort and cost per individual service action.

For more examples of standards to assess the contractor's performance, refer to System and Software Metrics for Performance-Based Contracting.
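Service-oriented performance standards such as availability and response time are straightforward to compute from operational records. The Python sketch below is a hypothetical illustration; the reporting period, outage durations, and response times are invented:

    # Compute simple service-level metrics of the kind listed above.
    # The outage and response-time data are invented for illustration.

    period_hours   = 30 * 24                 # one-month reporting period
    outage_hours   = [2.5, 0.75]             # recorded outages during the period
    response_hours = [1.0, 4.0, 2.0, 0.5]    # time to respond to customer requests

    availability       = (period_hours - sum(outage_hours)) / period_hours
    mean_response_time = sum(response_hours) / len(response_hours)

    print(f"Availability: {availability:.2%}")
    print(f"Mean response time: {mean_response_time:.1f} hours")

Whatever metrics are chosen, the SOW should define how they are measured and over what period, so that the contractor and NASA compute them the same way.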


7.1.5.4 Decommissioning and Disposal

Contracts offer a means to achieve the safe and efficient decommissioning and disposal of systems and products that require specialized support systems, facilities, and trained personnel, especially when hazardous materials are involved. Consider these needs during development of the acquisition strategy and solidify them before the final design phase. Determine how many contracts will be needed across the product's life cycle.

Some items to consider for decommissioning and disposal during the development of SOW requirements:
• Handling and disposal of waste generated during the fabrication and assembly of the product.
• Reuse and recycling of materials to minimize the disposal and transformation of materials.
• Handling and disposal of materials used in the product's operations.
• End-of-life decommissioning and disposal of the product.
• Cost and schedule to decommission and dispose of the product, waste, and unwanted materials.
• Metrics to measure decommissioning and disposal of the product.
• Metrics to assess the contractor's performance. (Refer to System and Software Metrics for Performance-Based Contracting.)

For guidelines regarding disposal, refer to the Systems Engineering Handbook: A "What To" Guide for all SE Practitioners.

7.1.5.5 Final Evaluation of Contractor Performance

In preparation for closing out a contract, the technical team gives input to the procurement office regarding the contractor's final performance evaluation. Although the technical team has performed periodic contractor performance evaluations, the final evaluation offers a means to document good and bad performance that continued throughout the contract. Since the evaluation is retained in a database, it can be used as relevant experience and past performance input during a future source selection process.

This phase of oversight is complete with the closeout or modification of the existing contract, award of the follow-on contract, and an operational system. Oversight continues with follow-on contract activities.


7.2 Integrated Design Facilities

7.2.1 Introduction

Concurrent Engineering (CE) and integrated design is a systematic approach to integrated product development that emphasizes response to stakeholder expectations and embodies team values of cooperation, trust, and sharing. The objective of CE is to reduce the product development cycle time through better integration of activities and processes. Parallelism is the prime concept in reducing design lead time, and concurrent engineering becomes the central focus. Large intervals of parallel work on different parts of the design are synchronized by comparatively brief exchanges between teams to produce consensus and decisions.¹ CE has become a widely accepted concept and is regarded as an excellent alternative to a sequential engineering process.

This section addresses the specific application of CE and integrated design practiced at NASA in Capability for Accelerated Concurrent Engineering (CACE) environments. CACE comprises four essential components: people, process, tools, and facility. The CACE environment typically involves the collocation of an in-place leadership team and a core multidisciplinary engineering team working with a stakeholder team, using well-defined processes, in a dedicated collaborative, concurrent engineering facility with specialized tools. The engineering and collaboration tools are connected by the facility's integrated infrastructure. The teams work synchronously for a short period of time in a technologically intensive physical environment to complete an instrument or mission design. CACE is most often used to design space instruments and payloads or missions, including orbital configuration; hardware such as spacecraft, landers, rovers, probes, or launchers; data and ground communication systems; other ground systems; and mission operations. But the CACE process applies beyond strict instrument and/or mission conceptual design.

Most NASA Centers have a CACE facility. NASA CACE is built upon a people/process/tools/facility paradigm that enables the accelerated production of high-quality engineering design concepts in a concurrent, collaborative, rapid design environment. (See Figure 7.2-1.)

¹ From Miao and Haake, "Supporting Concurrent Design by Integrating Information Sharing and Activity Synchronization."

Figure 7.2-1 CACE people/process/tools/facility paradigm

Although CACE at NASA is based on a common philosophy and characteristics, specific CACE implementations vary in many areas. These variations include level of engineering detail, information infrastructure, knowledge base, areas of expertise, engineering staffing approach, administrative and engineering tools, type of facilitation, roles and responsibilities within the CACE team, roles and responsibilities across the CACE and stakeholder teams, activity execution approach, and duration of session. While primarily used to support early life-cycle phases such as pre-Formulation and Formulation, the CACE process has demonstrated applicability across the full project life cycle.

7.2.2 CACE Overview and Importance

CACE design techniques can be an especially effective and efficient method of generating a rapid articulation of concepts, architectures, and requirements.

The CACE approach provides an infrastructure for brainstorming and bouncing ideas between the engineers and stakeholder team representatives, which routinely results in a high-quality product that directly maps to the customer needs.
The collaboration design paradigm is so successful because it enables a radical reduction in decision latency. In a non-CACE environment, questions, issues, or problems may take several days to resolve. If a design needs to be changed or a requirement reevaluated, significant time may pass before all engineering team members get the information or stakeholder team members can discuss potential requirement changes. These delays introduce the possibility, following initial evaluation, of another round of questions, issues, and changes to design and requirements, adding further delays.

The tools, data, and supporting information technology infrastructure within CACE provide an integrated support environment that can be immediately utilized by the team. The necessary skills and experience are gathered and are resident in the environment to synchronously complete the design. In a collaborative environment, questions can be answered immediately, or key participants can explore assumptions and alternatives with the stakeholder team or other design team members and quickly reorient the whole team when a design change occurs. The collaboration triggers the creativity of the engineers and helps them close the loop and rapidly converge on their ideas. Since the mid-1990s, the CACE approach has been successfully used at several NASA Centers as well as at commercial enterprises to dramatically reduce design development time and costs when compared to traditional methods.

CACE stakeholders include NASA programs and projects, scientists, and technologists as well as other Government agencies (civil and military), Federal laboratories, and universities. CACE products and services include:
• Generating mission concepts in support of Center proposals to science Announcements of Opportunity (AOs);
• Full end-to-end designs including system/subsystem concepts, requirements, and tradeoffs;
• Focused efforts assessing specific architecture subelements and tradeoffs;
• Independent assessments of customer-provided reports, concepts, and costs;
• Roadmapping support; and
• Technology and risk assessments.

As integrated design has become more accepted, collaborative engineering design efforts have expanded from the participation of one or more Centers in a locally executed activity; to geographically distributed efforts across a few NASA Centers with limited scope and participation; to true OneNASA efforts with participation from many NASA integrated design teams addressing broad, complex architectures.

The use of geographically distributed CACE teams is a powerful engineering methodology to achieve lower risk and more creative solutions by factoring in the best skills and capabilities across the Agency. A geographically distributed process must build upon common CACE elements while considering differences among local CACE facilities and local Center cultures.

7.2.3 CACE Purpose and Benefits

The driving forces behind the creation of NASA's early CACE environments were increased systems engineering efficiency and effectiveness.
More specifically, the early CACE environments addressed the need for:
• Generating more conceptual design studies at reduced cost and schedule,
• Creating a reusable process within dedicated facilities using well-defined tools,
• Developing a database of mission requirements and designs for future use,
• Developing mission generalists from a pool of experienced discipline engineers, and
• Infusing a broader systems engineering perspective across the organization.

Additional resulting strategic benefits across NASA included:
• Core competency support (e.g., developing systems engineers, maturing and broadening of discipline engineers, training environment, etc.);
• Sensitizing the customer base to end-to-end issues and the implications of requirements upon design;
• Test-bed environment for improved tools and processes;
• Environment for forming partnerships;
• Technology development and roadmapping support;
• Improved quality and consistency of conceptual designs; and
• A OneNASA environment that enables cooperative rather than competitive efforts among NASA organizations.

7.2.4 CACE Staffing

A management or leadership team, a multidisciplinary engineering team, a stakeholder team, and a facility support team are all vital elements in achieving a successful CACE activity.

A CACE team consists of a cadre of engineers, each representing a different discipline or specialty engineering area, along with a lead systems engineer and a team lead or facilitator. As required, the core engineering team is supplemented with specialty and/or nonstandard engineering skills to meet unique stakeholder needs. These supplementary engineering capabilities can be obtained either from the local Center or from an external source. The team lead coordinates and facilitates the CACE activity and interacts with the stakeholders to ensure that their objectives are adequately captured and represented. Engineers are equipped with the techniques and software used in their area of expertise and interact with the team lead, other engineers, and the stakeholder team to study the feasibility of a proposed solution and produce a design for their specific subsystem.

A CACE operations manager serves as the Center advocate and manager, maintaining an operational capability, providing initial coordination with potential customers through final delivery of the CACE product, and infusing continuous process and product improvement as well as evolutionary growth into the CACE environment to ensure its continued relevance to the customer base.

A CACE facility support team maintains and develops the information infrastructure to support CACE activities.

7.2.5 CACE Process

The CACE process starts with a customer requesting engineering support from CACE management. CACE management establishes that the customer's request is within the scope of the team's capabilities and availability and puts together a multidisciplinary engineering team under the leadership of a team lead and a lead systems engineer collaborating closely with the customer team. The following subsections briefly describe the three major CACE activity phases: (1) planning and preparation, (2) execution, and (3) wrap-up.

7.2.5.1 Planning and Preparation

Once a customer request is approved and a team lead is chosen, a planning meeting is scheduled. The key experts attending the planning meeting may include the CACE manager, a team lead, and a systems engineer as well as key representatives from the customer/stakeholder team. Interactions with the customer/stakeholder team and their active participation in the process are integral to the successful planning, preparation, and execution of a concurrent design session. Aspects addressed include establishing the activity scope, schedule, and costs; a general agreement on the type of product to be provided; and the success criteria and metrics. Agreements reached at the planning meeting are documented and distributed for review and comment.

Products from the planning and preparation phase include the identification of activities required by the customer/stakeholder team, the CACE team, or a combination of both teams, as well as the definition of the objectives, the requirements, the deliverables, the estimated budget, and the proposed schedule.
Under some conditions, followup coordination meetings are scheduled that include the CACE team lead, the systems engineer(s), a subset of the remaining team members, and customer/stakeholder representatives, as appropriate. The makeup of participants is usually based on the elements that have been identified as the activity drivers and any work identified that needs to be done before the actual design activity begins.

During the planning and preparation process, the stakeholder-provided data and the objectives and activity plan are reviewed, and the scope of the activity is finalized. A discussion is held of what activities need to be done by each of the stakeholders and the design teams. For example, for planning a mission design study, the customer identifies the mission objectives by defining the measurement objectives and the instrument specifications, as applicable, and identifying the top-level requirements. A subset of the CACE engineering team may perform some preliminary work before the actual study (e.g., launch vehicle performance trajectory analysis; thrust and navigation requirements; the entry, descent, and landing profile; optical analysis; mechanical design; etc.) as identified in the planning meetings to further accelerate the concurrent engineering process in the study execution phase. The level of analysis in this phase is a function of many things, including the level of maturity of the incoming design, the stated goals and objectives of the engineering activity, engineer availability, and CACE scheduling.

7.2.5.2 Activity Execution Phase

A typical activity or study begins with the customer presentation of the overall mission concept and instrument concepts, as applicable, to the entire team. Additional information provided by the customer/stakeholders includes the team objectives, the science and technology goals, the initial requirements for payload, spacecraft, and mission design, the task breakdown between providers of parts or functions, top challenges and concerns, and the approximate mission timeline. This information is often provided electronically in a format accessible to the engineering team and is presented by the customer/stakeholder representatives at a high level. During this presentation, each of the subsystem engineers focuses on the part of the overall design that is relevant to their subsystem. The systems engineer puts the high-level system requirements into the systems spreadsheets and/or a database that is used throughout the process to track engineering changes. These data sources can be projected on the displays to keep the team members synchronized and the customer/stakeholders aware of the latest developments.

The engineering analysis is performed iteratively, with the CACE team lead and systems engineer playing key roles to lead the process. Thus, issues are quickly identified, so consensus on tradeoff decisions and requirements redefinition can be achieved while maintaining momentum. The customer team actively participates in the collaborative process (e.g., trade studies, requirements relaxation, clarifying priorities), contributing to the rapid development of an acceptable product.

Often, there are breakout sessions, or sidebars, in which part of the team discusses a particular tradeoff study. Each subsystem has a set of key parameters that are used for describing its design. Because of the dependencies among the various subsystems, each discipline engineer needs to know the value of certain parameters related to other subsystems. These parameters are shared via the CACE information infrastructure (a simplified illustration of such a shared parameter set appears in the sketch below). Often, there are conflicting or competing objectives for various subsystems. Many tradeoff studies, typically defined and led by the team systems engineer, are conducted among subsystem experts immediately as issues occur. Most of the communication among team members is face to face or live via video or teleconference. Additional subject matter experts are consulted as required. In the CACE environment, subsystems that need to interact extensively are clustered in close proximity to facilitate the communication process among the experts.

The team iterates on the requirements, and each subsystem expert refines or modifies design choices as schedule allows. This process continues until an acceptable solution is obtained. There may be occasions where it is not possible to iterate to an acceptable solution prior to the scheduled end of the activity. In those cases, the available iterated results are documented and form the basis of the delivered product.

In each iteration, activities such as the following take place, sometimes sequentially and other times in parallel. The subsystem experts of science, instruments, mission design, and ground systems collaboratively define the science data strategy for the mission in question. The telecommunications, ground systems, and command and data-handling experts develop the data-return strategy. The attitude control systems, power, propulsion, thermal, and structure experts iterate on the spacecraft design, and the configuration expert prepares the initial concept. The systems engineer interacts with all discipline engineers to ensure that the various subsystem designs fit into the intended system architecture.
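The shared parameter set described above is normally kept in linked spreadsheets or a study database, and implementations differ from Center to Center. As a purely illustrative sketch (not any particular Center's tool), the following Python fragment shows the underlying idea: each discipline publishes its key parameters to a common store, every change is logged so the systems engineer can track the evolving design, and simple roll-ups such as a mass rackup with growth margin can be recomputed after each iteration. The subsystem names, parameters, and 30 percent margin are assumptions made only for this example.

```python
from datetime import datetime

class SharedParameterStore:
    """Toy stand-in for the shared parameter set in a CACE information infrastructure."""

    def __init__(self):
        self.values = {}      # (subsystem, parameter) -> (value, units)
        self.change_log = []  # record of every update, for the systems engineer

    def publish(self, subsystem, parameter, value, units):
        """A discipline engineer publishes or updates one of their key parameters."""
        key = (subsystem, parameter)
        old = self.values.get(key)
        self.values[key] = (value, units)
        self.change_log.append((datetime.now(), subsystem, parameter, old, value, units))

    def get(self, subsystem, parameter):
        """Any other discipline can read a parameter it depends on."""
        return self.values[(subsystem, parameter)]

    def mass_rackup(self, margin=0.30):
        """Roll up all published '*mass_kg' parameters and apply a growth margin."""
        basic = sum(v for (_, p), (v, _u) in self.values.items()
                    if p.endswith("mass_kg"))
        return basic, basic * (1.0 + margin)

store = SharedParameterStore()
store.publish("power", "array_mass_kg", 42.0, "kg")
store.publish("structures", "bus_mass_kg", 180.0, "kg")
store.publish("telecom", "antenna_mass_kg", 11.5, "kg")
store.publish("power", "array_mass_kg", 46.0, "kg")   # updated after a trade

basic, with_margin = store.mass_rackup()
print(f"Basic mass: {basic:.1f} kg; with 30% margin: {with_margin:.1f} kg")
print(f"{len(store.change_log)} parameter changes logged this session")
```

In an actual CACE session, the same information would typically live in the facility's integrated infrastructure so that it can be projected to the whole team, as described earlier in this subsection.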
Each subsystem expert also provides design and cost information in each iteration, and the cost expert estimates the total cost for the mission.

While design activity typically takes only days or weeks, with final products available within weeks after study completion, longer term efforts take advantage of the concurrent, collaborative environment to perform more detailed analyses than those performed in the shorter duration CACE exercises.

7.2.5.3 Activity Wrap-Up

After the completion of a CACE study, the product is delivered to the customer. In some CACE environments, the wrap-up of the product is completed with minimal additional resources: the engineers respond to customer/stakeholder feedback by incorporating additional refinements or information, emphasizing basic cleanup. In other CACE environments, significant time is expended to format the final report and review it with the customer/stakeholders to ensure that their expectations have been addressed adequately.

Some CACE environments have standardized their wrap-up activities to address the customer/stakeholder feedback and develop products that are structured and uniform across different ranges of efforts.

As part of activity followup, customer/stakeholder feedback is requested on processes, whether the product met their needs, and whether there are any suggested improvements. This feedback is factored back into the CACE environment as part of a continuous improvement process.

7.2.6 CACE Engineering Tools and Techniques

Engineering tools and techniques vary within and across CACE environments in several technical aspects (e.g., level of fidelity, level of integration, generally available commercial applications versus custom tools versus customized knowledge-based Excel spreadsheets, degree of parametric design and/or engineering analysis). For example, mechanical design tools range from white-board discussions to note pad translations to computer-aided design to 3D rapid design prototyping.

Important factors in determining which tools are appropriate to an activity include the purpose and duration of the activity, the engineers' familiarity or preference, the expected product, the local culture, and the evolution of the engineering environment. Factors to be considered in the selection of CACE tools and engineering techniques should also include flexibility, compatibility with the CACE environment and process, and value and ease of use for the customer after the CACE activities.

Engineering tools may be integrated into the CACE infrastructure, routinely provided by the supporting engineering staff, and/or utilized only on an activity-by-activity basis, as appropriate. As required, auxiliary engineering analysis outside of the scope of the CACE effort can be performed external to the CACE environment and imported for reference and incorporation into the CACE product.

7.2.7 CACE Facility, Information Infrastructure, and Staffing

Each CACE instantiation is unique to the Center, program, or project that it services. While the actual implementations vary, the basic character does not. Each implementation concentrates on enabling engineers, designers, team leads, and customer/stakeholders to be more productive during concurrent activities and communication. This subsection focuses on three aspects of this environment: the facility, the supporting information infrastructure, and the staff required to keep the facility operational.

7.2.7.1 Facility

The nature of communication among discipline specialists working together simultaneously creates a somewhat chaotic environment. Although it is the duty of the team lead to maintain order in the environment, the facility itself has to be designed to allow the participants to maintain order and remain on task while seeking to increase communication and collaboration. To do this effectively requires a significant investment in infrastructure resources.

The room needs sufficient space to hold active participants from the disciplines required, customer/stakeholder representatives, and observers. CACE managers encourage observers in order to show potential future CACE users the value of active CACE sessions.

It is also important to note that the room will get reconfigured often. Processes and requirements change, and the CACE facility must change with them. The facility could appear to an onlooker as a work in progress. Tables, chairs, computer workstations, network connections, electrical supplies, and visualization systems will continually be assessed for upgrades, modification, or elimination.

CACE requirements in the area of visualization are unique. When one subject matter expert wants to communicate to either a group of other discipline specialists or to the whole group in general, the projection system needs to be able to switch to different engineering workstations. When more than one subject matter expert wants to communicate with different groups, multiple projection systems need to be able to switch. This can typically require three to six projection systems with switching capability from any specific workstation to any specific projector.
In addition, multiple projection systems switchable to the engineering workstations need to be mounted so that they can be viewed without impacting other activities in the room or so that the entire group can be refocused as required during the session. The ease of this reconfiguration is one measure of the efficacy of the environment.

7.2.7.2 Information Infrastructure

A CACE system not only requires a significant investment in the facility but also relies heavily on the information infrastructure. Information infrastructure requirements can be broken down into three sections: hardware, software, and network infrastructure.

The hardware portion of the information infrastructure used in the CACE facility is the most transient element in the system. The computational resources, the communication fabric, servers, storage media, and the visualization capabilities benefit from rapid advances in technology. A CACE facility must be able to take advantage of the economy produced by those advances and must also be flexible enough to take advantage of the new capabilities.


One of the major costs of a CACE infrastructure is software. Much of the software currently used by engineering processes is modeling and simulation software, usually produced by commercial software vendors. Infrastructure software to support the exchange of engineering data; to manage the study archive; and to track, administer, and manage facility activities is integral to CACE success. One of the functions of the CACE manager is to determine how software costs can be paid, along with what software should be the responsibility of the participants and customers.

The network infrastructure of a CACE facility is critical. Information flowing among workstations, file servers, and visualization systems in real time requires a significant network infrastructure. In addition, the network infrastructure enables collaboration with outside consultants and external discipline experts as well as intra-Center collaboration. The effective use of the network infrastructure requires a balance between network security and collaboration and, as such, will always be a source of modification, upgrade, and reconfiguration. A natural extension of this collaboration is the execution of geographically distributed CACE efforts; therefore, it is essential that a CACE facility have the tools, processes, and communications capabilities to support such distributed studies.

7.2.7.3 Facility Support Staff Responsibilities

A core staff of individuals is required to maintain an operational CACE environment. The responsibilities to be covered include end-to-end CACE operations and the management and administration of the information infrastructure.

CACE information infrastructure management and administration includes computer workstation configuration; network system administration; documentation development; user help service; and software support to maintain infrastructure databases, tools, and Web sites.

7.2.8 CACE Products

CACE products are applicable across project life-cycle phases and can be clearly mapped to the various outputs associated with the systems engineering activities such as requirements definition, trade studies, decision analysis, and risk management. CACE products from a typical design effort include a requirements summary with driving requirements identified; system and subsystem analysis; functional architectures and data flows; mass/power/data rackups; mission design and ConOps; engineering trades and associated results; technology maturity levels; issues, concerns, and risks; parametric and/or grassroots cost estimates; engineering analyses, models, and applicable tools to support potential future efforts; and a list of suggested future analyses.

CACE product format and content vary broadly both within and across CACE environments. The particular CACE environment, the goals/objectives of the supported activity, whether the activity was supported by multiple CACE teams or not, the customer's ultimate use, and the schedule requirements are some aspects that factor into the final product content and format.
A primary goal in the identification and development of CACE products and in the packaging of the final delivery is to facilitate their use after the CACE activity.

Products include in-study results presentations, PowerPoint packages, formal reports, and supporting computer-aided design models and engineering analyses. Regardless of format, the CACE final products typically summarize the incoming requirements, the study goal expectations, and the study final results.

CACE environment flexibility enables support of activities beyond a traditional engineering design study (e.g., independent technical reviews, cost validation, risk and technology assessments, roadmapping, and requirements review). Product contents for such activities might include feasibility assessments, technical recommendations, risk identification, recosting, technology infusion impact and implementation approach, and architectural options.

In addition to formal delivery of the CACE product to the customer team, the final results and planning data are archived within the CACE environment for future reference and for inclusion in internal CACE cross-study analyses.

7.2.9 CACE Best Practices

This subsection contains general CACE best practices for a successful CACE design activity. Three main topic areas (people, process, and technologies) are applicable to both local and geographically distributed activities. Many lessons about multi-CACE collaboration activities were learned through the NASA Exploration Design Team (NEDT) effort, a OneNASA multi-Center distributed collaborative design activity performed during FY05.

7.2.9.1 People

• Training: Individuals working in CACE environments benefit from specialized training. This training should equip individuals with the basic skills necessary for efficient and effective collaboration. Training should include what is required technically as well as orientation to the CACE environment and processes.
• Characteristics: Collaborative environment skills include being flexible, working with many unknowns, and a willingness to take risks. The ability and willingness to think and respond in the moment are required, as well as the ability to work as part of a team and to interact directly with customer representatives to negotiate requirements and to justify design decisions. Supporting engineers also need the ability to quickly and accurately document their final design as well as present this design in a professional manner. In addition, the CACE team leads or facilitators should have additional qualities to function well in a collaborative design environment. These include organizational and people skills, systems engineering skills and background, and broad general engineering knowledge.

7.2.9.2 Process and Tools

• Customer Involvement: Managing customer expectations is the number one factor in a positive study outcome. It is important to make the customers continuously aware of the applications and limitations of the CACE environment and to solicit their active participation in the collaborative environment.
• Adaptability: The CACE environments must adapt processes depending on study type and objectives, as determined in negotiations prior to study execution. In addition to adapting the processes, engineers with appropriate engineering and collaborative environment skills must be assigned to each study.
• Staffing: Using an established team has the benefit of the team working together and knowing each other and the tools and processes. A disadvantage is that a standing army can get "stale" and not be fluent with the latest trends and tools in their areas of expertise. Supporting a standing army full time is also an expensive proposition and often not possible. A workable compromise is to have a full-time (or nearly full-time) leadership team complemented by an engineering team. This engineering team could be composed of engineers on rotational assignments or long-term detail to the team, as appropriate. An alternative paradigm is to partially staff the engineering team with personnel provided through the customer team.
• Tools and Data Exchange: In general, each engineer should use the engineering tools with which he or she is most familiar to result in an effective and efficient process. The CACE environment should provide an information infrastructure to integrate the resulting engineering parameters.
• Decision Process: Capturing the decisionmaking and design rationale is of great interest and value to CACE customers as well as being a major challenge in the rapid engineering environment. The benefit of this is especially important as a project progresses and makes the CACE product more valuable to the customer. Further along in the life cycle of a mission or instrument, captured decisions and design rationale are more useful than a point design from some earlier time.
• Communication: CACE environments foster rapid communication among the team members. Because of the fast-paced environment and concurrent engineering activities, keeping the design elements "in synch" is a challenge.
This challenge can be addressed by proactive systems engineers, frequent tag-ups, additional systems engineering support, and the use of appropriate information infrastructure tools.
• Standards Across CACE Environments: Establishing minimum requirements and standard sets of tools and techniques across the NASA CACE environment would facilitate multi-Center collaborations.
• Planning: Proper planning and preparation are crucial for efficient CACE study execution. Customers wanting to forgo the necessary prestudy activity or planning and preparation must be aware of and accept the risk of a poor or less-than-desired outcome.

7.2.9.3 Facility

• Communication Technologies: The communication infrastructure is the backbone of the collaborative CACE environment. Certain technologies should be available to allow efficient access to resources external to a CACE facility. It is important to have "plug and play" laptop capability, for example. Multiple phones should be available to the team, and cell phone access is desirable.


• Distributed Team Connectivity: Real-time transfer of information for immediate access between geographically distributed teams or for multi-Center activities can be complicated due to firewall and other networking issues. Connectivity and information transfer methods should be reviewed and tested before study execution.


7.3 Selecting Engineering Design Tools

NASA utilizes cutting-edge design tools and techniques to create the advanced analyses, designs, and concepts required to develop unique aerospace products, spacecraft, and science experiments. The diverse nature of the design work generated and overseen by NASA requires use of a broad spectrum of robust electronic tools such as computer-aided design tools and computer-aided systems engineering tools. Because of the distributed and varied nature of NASA projects, selection of a single suite of tools from only one vendor to accomplish all design tasks is not practical. However, opportunities to improve standardization of design policy, processes, and tools remain a focus for continuous improvement activities at all levels within the Agency.

These guidelines serve as an aid in the selection of appropriate tools for the design and development of aerospace products and space systems and when selecting tools that affect multiple Centers.

7.3.1 Program and Project Considerations

When selecting a tool to support a program or project, all of the upper level constraints and requirements must be identified early in the process. Pertinent information from the project that affects the selection of the tools includes the urgency, schedule, resource restrictions, extenuating circumstances, and constraints. A tool that does not support meeting the program master schedule or is too costly to be bought in sufficient numbers will not satisfy the project manager's requirements. For example, a tool that requires extensive modification and training that is inconsistent with the master schedule should not be selected by the technical team. If the activity to be undertaken is an upgrade to an existing project, legacy tools and the availability of trained personnel are factors to be considered.

7.3.2 Policy and Processes

When selecting a tool, one must consider the applicable policies and processes at all levels, including those at the Center level, within programs and projects, and at other Centers when a program or project is a collaborative effort. In the following discussion, the term "organization" will be used to represent any controlling entity that establishes policy and/or processes for the use of tools in the design or development of NASA products. In other words, "organization" can mean the user's Center, another collaborating Center, a program, a project, in-line engineering groups, or any combination of these entities.

Policies and processes affect many aspects of a tool's functionality. First and foremost, there are policies that dictate how designs are to be formally or informally controlled within the organization. These policies address configuration management processes that must be followed as well as the type of data object that will be formally controlled (e.g., drawings or models). Clearly this will affect the types of tools that will be used and how their designs will be annotated and controlled.

The Information Technology (IT) policy of the organization also needs to be considered. Data security and export control (e.g., International Traffic in Arms Regulations (ITAR)) policies are two important IT policy considerations that will influence the selection of a particular design tool.

The policy of the organization may also dictate requirements on the format of the design data that is produced by a tool. A specific format may be required for sharing information with collaborating parties.
Other considerationsare the organizations’ quality processes, whichcontrol the versions of the software tools as well as theirverification and validation. There are also policies ontraining and certifying users of tools supporting criticalflight programs and projects. This is particularly importantwhen the selection of a new tool results in the transitionfrom a legacy tool to a new tool. Therefore, the qualityof the training support provided by the tool vendor is animportant consideration in the selection of any tool.Also, if a tool is being procured to support a multi-Centerprogram or project, then program policy may dictatewhich tool must be used by all participating Centers. IfCenters are free to select their own tool in support of amulti-Center program or project, then consideration ofthe policies of all the other Centers must be taken intoaccount to ensure compatibility among Centers.7.3.3 CollaborationThe design process is highly collaborative due to the complexspecialties that must interact to achieve a successfulintegrated design. Tools are an important part of a suc-242 • <strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong>


7.3 Selecting <strong>Engineering</strong> Design Toolscessful collaboration. To successfully select and integratetools in this environment requires a clear understandingof the intended user community size, functionality required,nature of the data to be shared, and knowledgeof tools to be used. These factors will dictate the numberof licenses, hosting capacity, tool capabilities, IT securityrequirements, and training required. The sharing ofcommon models across a broad group requires mechanismsfor advancing the design in a controlled way. Effectiveuse of data management tools can help controlthe collaborative design by requiring common namingconventions, markings, and design techniques to ensurecompatibility among distributed design tools.7.3.4 Design StandardsDepending on the specific domain or discipline, theremay be industry and Center-specific standards that mustbe followed, particularly when designing hardware. Thiscan be evident in the design of a mechanical part, wherea mechanical computer-aided design package selected tomodel the parts must have the capability to meet specificstandards, such as model accuracy, dimensioning andtolerancing, the ability to create different geometries, andthe capability to produce annotations describing how tobuild and inspect the part. However, these same issuesmust be considered regardless of the product.7.3.5 Existing IT ArchitectureAs with any new tool decision, an evaluation of definedAgency and Center IT architectures should be made thatfocuses on compatibility with and duplication of existingtools. Typical architecture considerations would includedata management tools, middleware or integration infrastructure,network transmission capacity, design analysistools, manufacturing equipment, approved hosting,and client environments.While initial focus is typically placed on current needs,the scalability of the tools and the supporting IT infrastructureshould be addressed too. Scalability applies toboth the number of users and capacity of each user tosuccessfully use the system over time.7.3.6 Tool InterfacesInformation interfaces are ubiquitous, occurring wheneverinformation is exchanged.This is particularly characteristic of any collaborativeenvironment. It is here that inefficiencies arise, informationis lost, and mistakes are made. There may be anorganizational need to interface with other capabilitiesand/or analysis tools, and understanding the tools usedby the design teams with which your team interfacesand how the outputs of your team drive other downstreamdesign functions is critical to ensure compatibilityof data.For computer-aided systems engineering tools, users areencouraged to select tools that are compatible with theObject Management Group System Modeling Language(SysML) standard. SysML is a version of the UnifiedModeling Language (UML) that has been specificallydeveloped for systems engineering.7.3.7 Interoperability and Data FormatsInteroperability is an important consideration when selectingtools. The tools must represent the designs in formatsthat are acceptable to the end user of the data. It isimportant that any selected tool include associative dataexchange and industry-standard data formats. As theAgency increasingly engages in multi-Center programsand projects, the need for interoperability among differenttools, and different versions of the same tool, becomeseven more critical. 
True interoperability reduces human error and the complexity of the integration task, resulting in reduced cost, increased productivity, and a quality product.

When considering all end users' needs, it is clear that interoperability becomes a difficult challenge. Three broad approaches, each with its own strengths and weaknesses, are:
• Have all employees become proficient in a variety of different tool systems and the associated end-use applications. While this provides a broad capability, it may not be practical or affordable.
• Require interoperability among whatever tools are used, i.e., require that each tool be capable of transferring model data in a manner that can be easily and correctly interpreted by all the other tools (a simple neutral-format exchange is sketched after this list). Considerable progress has been made in recent years in the standards for the exchange of model data. While this would be the ideal solution for many, standard data formats that contain the required information for all end users do not yet exist.
• Dictate that all participating organizations use the same version of the same tool.
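The neutral-format exchange mentioned in the second approach can be as simple as writing design parameters to a plain, tool-independent file. The Python sketch below uses JSON purely for illustration; real programs would more often rely on established neutral standards appropriate to the data (for example, STEP for geometry or SysML/XMI for system models), and the component and field names here are hypothetical assumptions.

```python
import json

def export_neutral(design, path):
    """Write design parameters to a neutral, human-readable JSON file."""
    with open(path, "w") as f:
        json.dump(design, f, indent=2, sort_keys=True)

def import_neutral(path):
    """Read design parameters back, independent of the tool that produced them."""
    with open(path) as f:
        return json.load(f)

design = {
    "schema_version": "1.0",            # lets future tools detect old archives
    "component": "reaction_wheel_assembly",
    "parameters": {
        "mass_kg": 9.8,
        "peak_power_w": 110.0,
        "operating_temp_c": [-10.0, 45.0],
    },
    "provenance": {"tool": "example-cad", "exported_by": "design team"},
}

export_neutral(design, "rwa_design.json")
restored = import_neutral("rwa_design.json")
assert restored["parameters"]["mass_kg"] == 9.8
```

Because such an archive is self-describing and human readable, it also supports the backward-compatibility strategies discussed in Section 7.3.8: the data remain accessible even after the originating tool or tool version has been retired.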


7.3.8 Backward Compatibility

On major programs and projects that span several years, it is often necessary to access design data that are more than 3 to 5 years old. However, access to old design data can be extremely difficult and expensive, either because tool vendors end their support or because later versions of the tool can no longer read the data. Strategies for maintaining access include special contracts with vendors for longer support, archiving design data in neutral formats, continuous migration of archives into current formats, and recreating data on demand. Organizations should select the strategy that works best for them after a careful consideration of the cost and risk.

7.3.9 Platform

While many tools will run on multiple hardware platforms, some perform better in specific environments or are only supported by specified versions of operating systems. In the case of open-source operating systems, many different varieties are available that may not fully support the intended tools. If the tool being considered requires a new platform, the additional procurement cost and administration support costs should be factored in.

7.3.10 Tool Configuration Control

Tool configuration control is a tradeoff between responsive adoption of the new capabilities in new versions and smooth operation across tool chain components. This is more difficult with heterogeneous (multiple vendor) tool components. An annual or biannual block upgrade strategy requires significant administrative effort. On the other hand, the desktop diversity resulting from user-managed upgrade timing also increases support requirements.

7.3.11 Security/Access Control

Special consideration should be given to the sensitivity and required access of all design data. Federal Government and Agency policy requires the assessment of all tools to ensure appropriate security controls are addressed to maintain the integrity of the data.

7.3.12 Training

Most of the major design tools have similar capabilities that will not be new concepts to a seasoned designer. However, each design tool utilizes different techniques to perform design functions, and each contains some unique tool sets that will require training. The more responsive vendors will provide followup access to instructors and onsite training with liberal distribution of training materials and worked examples. The cost and time to perform the training and the time for the designer to become proficient can be significant and should be carefully factored in when making decisions on new design tools.

The disruptive aspect of training is an important consideration in adapting to a different tool. Before transitioning to a new tool, an organization must consider the schedule of deliverables to major programs and projects. Can commitments still be met in a timely fashion? It is suggested that organizations implement a phase-in approach to a new tool, where the old tool is retained for some time to allow people to learn the new tool and become proficient in its use. The transition of a fully functional and expert team using any one system to the same team fully functional using another system is a significant undertaking. Some overlap between the old tool and the new tool will ensure flexibility in the transition and ensure that the program and project work proceeds uninterrupted.

7.3.13 Licenses

Licenses provide and control access to the various modules or components of a product or product family. Consideration of the license scheme should be taken into account while selecting a tool package.
Licenses are sometimes physical, like a hardware key that plugs into a serial or parallel port, or software that may or may not require a whole infrastructure to administer. Software licenses may be floating (able to be shared on many computers on a first-come, first-served basis) or locked (dedicated to a particular computer). A well-thought-out strategy for licenses must be developed at the beginning of the tool selection process. This strategy must take into consideration program and project requirements and constraints as well as other factors such as training and use.

7.3.14 Stability of Vendor and Customer Support

As in the selection of any support device or tool, vendor stability is of great importance. Given the significant investment in the tools (directly) and infrastructure (indirectly), it is important to look at the overall company stability to ensure the vendor will be around to support the tools. Maturity of company products, installed user base, training, and financial strength can all provide clues to the company's ability to remain in the marketplace with a viable product. In addition, a responsive vendor provides customer support in several forms. A useful venue is a Web-based, user-accessible knowledge base that includes resolved issues, product documentation, manuals, white papers, and tutorials. Live telephone support can be valuable for customers who don't provide support internally. An issue resolution and escalation process involves customers directly in prioritizing and following the closure of critical issues. An onsite presence by the sales team and application engineers, augmented by post-sales support engineers, can significantly shorten the time to discovery and resolution of issues and evolving needs.


7.4 Human Factors Engineering

The discipline of Human Factors (HF) is devoted to the study, analysis, design, and evaluation of human-system interfaces and human organizations, with an emphasis on human capabilities and limitations as they impact system operation. HF engineering issues relate to all aspects of the system life, including design, build, test, operate, and maintain, across the spectrum of operating conditions (nominal, contingency, and emergency).

People are critical components in complex aerospace systems: designers, manufacturers, operators, ground support, and maintainers. All elements of the system are influenced by human performance. In the world of human-system interaction, there are four avenues for improving performance, reducing error, and making systems more error tolerant: (1) personnel selection; (2) system, interface, and task design; (3) training; and (4) procedure improvement. The most effective performance improvement involves all four avenues. First, people can be highly selected for the work they are to perform and the environment in which they are to perform it. Second, equipment and systems can be designed to be easy to use, error resistant, and quickly learned. Third, people can be trained to proficiency on their required tasks. Fourth, improving tasks or procedures can be an important intervention.

HF focuses on those aspects where people interface with the system. It considers all personnel who must interact with the system, not just the operator; deals with organizational systems as well as hardware; and examines all types of interaction, not just hardware or software interfaces. The role of the HF specialist is to advocate for the human component and to ensure that the design of hardware, software, tasks, and environment is compatible with the sensory, perceptual, cognitive, and physical attributes of those interacting with the system. The HF specialist should elucidate why human-related issues or features should be included in analyses, design decisions, or tests and explain how design options will affect human performance in ways that impact total system performance and/or cost. As system complexity grows, the potential for conflicts between requirements increases. Sophisticated human-system interfaces create conflicts such as the need to create systems that are easy for novices to learn while also being efficient for experts to use. The HF specialist recognizes these tradeoffs and constraints and provides guidance on balancing these competing requirements. The domain of application is anywhere there are concerns regarding human and organizational performance, error, safety, and comfort. The goal is always to inform and improve the design.

What distinguishes an HF specialist is the particular knowledge and methods used, the domain of employment, and the goal of the work. HF specialists have expertise in the knowledge of human performance, both general and specific. There are many academic specialties concerned with applying knowledge of human behavior. These include psychology, cognitive science, cognitive psychology, sociology, economics, instructional system development, education, physiology, industrial psychology, organizational behavior, communication, and industrial engineering.
Project and/or process managers should consult with their engineering or SMA directorates to get advice and recommendations on specific HF specialists who would be appropriate for their particular activity.

It is recommended to consider having HF specialists on the team throughout all the systems engineering common technical processes so that they can construct the specific HF analysis techniques and tests customized to the specific process or project. Not only do the HF specialists help in the development of the end items in question, but they should also be used to make sure that the verification tests and completeness techniques are compatible and accurate for humans to undertake. Participation early in the process is especially important. Entering the system design process early ensures that human systems requirements are "designed in" rather than corrected later. Sometimes the results of analyses performed later call for a reexamination of earlier analyses. For example, functional allocation typically must be refined as design progresses because of technological breakthroughs or unforeseen technical difficulties in design or programming, or task analysis may indicate that some tasks assigned to humans exceed human capabilities under certain conditions.

During requirements definition, HF specialists ensure that HF-related goals and constraints are included in the overall plans for the system. The HF specialist must identify the HF-related issues, design risks, and tradeoffs pertinent to each human-system component and document these as part of the project's requirements so they are adequately addressed during the design phase. For stakeholder expectation definition from the HF perspective, the stakeholders include not only those who are specifying the system to be built, but also those who will be utilizing the system when it is put into operation. This approach yields requirements generated from the top down (what the system is intended to accomplish) and from the bottom up (how the system is anticipated to function). It is critical that the HF specialist contribute to the ConOps. The expectations of the role of the human in the system and the types of tasks the human is expected to perform underlie all the hardware and software requirements. The difference between a passive passenger and an active operator will drive major design decisions. The number of crewmembers will drive subsequent decisions about habitable volume and storage and about crew time available for operations and maintenance.

HF specialists ensure appropriate system design that defines the environmental range in which the system will operate and any factors that impact the human components. Many of these factors will need to accommodate human, as well as machine, tolerances. The requirements may need to specify acceptable atmospheric conditions, including temperature, pressure, composition, and humidity, for example. The requirements might also address acceptable ranges of acoustic noise, vibration, acceleration, and gravitational forces, and the use of protective clothing. The requirements may also need to accommodate adverse or emergency conditions outside the range of normal operation.

7.4.1 Basic HF Model

A key to conducting human and organizational analysis, design, and testing is to have an explicit framework that relates and scopes the work in question. The following model identifies the boundaries and the components involved in assessing human impacts.

The HF interaction model (Figure 7.4-1) provides a reference point of items to be aware of in planning, analyzing, designing, testing, operating, and maintaining systems. Detailed checklists should be generated and customized for the particular system under development. The model presented in this section is adapted from David Meister's Human Factors: Theory and Practice and is one depiction of how humans and systems interact. Environmental influences on that interaction have been added. The model illustrates a typical information flow between the human and machine components of a system.

[Figure 7.4-1 Human factors interaction model: the figure depicts information flow between the machine components (CPU, display, and input device) and the human components (sensory, cognitive, and musculoskeletal) of a system.]

Figure 7.4-2 provides a reference point of human factors process phases to be aware of in planning, analyzing, designing, testing, operating, and maintaining systems.

7.4.2 HF Analysis and Evaluation Techniques

Table 7.4-1 provides a set of techniques for human and organizational analysis and evaluation that can help to ensure that appropriate human and organizational factors have been considered and accommodated. These HF analysis methods are used to analyze systems, provide data about human performance, make predictions about human-system performance, and evaluate whether the human-machine system performance meets design criteria. Most methods involve judgment and so are highly dependent on the skill and expertise of the analyst.
7.4.2 HF Analysis and Evaluation Techniques

Table 7.4-1 provides a set of techniques for human and organizational analysis and evaluation that can help to ensure that appropriate human and organizational factors have been considered and accommodated. These HF analysis methods are used to analyze systems, provide data about human performance, make predictions about human-system performance, and evaluate if the human-machine system performance meets design criteria. Most methods involve judgment and so are highly dependent on the skill and expertise of the analyst. In addition, both experienced and inexperienced operators provide valuable information about the strengths and weaknesses of old systems and how the new system might be used.

These methods are appropriate to all phases of system design with increasing specificity and detail as development progresses. While HF principles are often researched and understood at a generic level, their application is only appropriate when tailored to fit the design phase. Each type of analysis yields different kinds of information and so they are not interchangeable. The outputs or products of the analyses go into specification documents (operational needs document, ConOps, System Requirements Document (SRD), etc.) and formal review processes (e.g., ORR, SRR, SDR, PDR, CDR, PRR, PIR).


[Figure 7.4-2 HF engineering process and its links to the NASA program/project life cycle. The figure maps seven HF engineering process integrating points onto the NASA program/project life cycle, from Pre-Phase A through Phase F: (1) operational analysis and analysis of similar systems (Pre-Phase A: Concept Studies through Phase A: Concept and Technology Development); (2) preliminary function allocation and task analysis (Phase A through Phase B: Preliminary Design and Technology Completion); (3) HF requirements definitions; (4) usability study of components, prototypes, and mockups (Phase C: Final Design and Fabrication); (5) formal usability testing of the full system (Phase D: System Assembly, Integration and Test, Launch); (6) usability testing of procedures and integration assessment (Phase E: Operations and Sustainment); and (7) tracking use of the system and validation testing (Phase F: Closeout).]

The list shown in Table 7.4-1 is not exhaustive. The main point is to show examples that demonstrate the scope and usefulness of common methods used to evaluate system design and development.


Table 7.4-1 Human and Organizational Analysis Techniques
(For each technique, the human/individual analysis is summarized by definition, purpose, inputs, process, and outputs, followed by the additional organizational analysis associated with that technique.)

A. Operational Analysis
Definition: When applied to HF, it is analysis of projected operations.
Purpose: Obtain information about situations or events that may confront operators and maintainers using the new system. Systems engineers or operations analysts have typically done operational analyses. HF specialists should also be members of the analysis team to capture important operator or maintainer activities.
Inputs: RFPs, planning documents, system requirements documents, and expert opinion.
Process: Consult the systems engineer and projected users to extract implications for operators and maintainers.
Outputs: Detailed scenarios (for nominal operations, hard and soft failures, and emergencies) including consequences; verbal descriptions of events confronting operators and maintainers; anticipated operations (list of feasible operations and those that may overstress the system); assumptions; constraints that may affect system performance; environments; list of system operation and maintenance requirements.
Additional organizational analysis: Assess interactions and logistics between individuals and organizations. Evaluate operations under different types of organizations, structures, or distributions. New or adjusted workflows to compensate for organizational impacts as appropriate.

B. Similar Systems Analysis
Definition: When applied to HF, it examines previous systems or systems in use for information useful for the new system.
Purpose: To obtain lessons learned and best practices useful in planning for a new system. Experience gained from systems in use is valuable information that should be capitalized on.
Inputs: Structured observations, interviews, questionnaires, activity analysis, accident/incident reports, maintenance records, and training records.
Process: Obtain data on the operability, maintainability, and number of people required to staff the system in use. Identify skills required to operate and maintain the system and training required to bring operators to proficiency. Obtain previous data on HF design problems and problems encountered by previous users of the previous system or system in use.
Outputs: Identification of environmental factors that may affect personnel; preliminary assessments of workload and stress levels; assessment of skills required and their impact on selection, training, and design; estimates of future staffing and manpower requirements; identification of operator and maintainer problems to avoid; assessment of desirability and consequences of reallocation of systems functions.
Additional organizational analysis: Identify the existing system's organizational hierarchies and management distribution schemes (centralized versus decentralized). Evaluation of the configuration's impact on performance and its potential risks.

C. Critical Incident Study
Definition: When applied to HF, it identifies sources of difficulties for operators or maintenance or in the operational systems (or simulations of them).
Purpose: To analyze and hypothesize sources of errors and difficulties in a system. This is particularly useful when a system has been operational and difficulties are observed or suspected, but the nature and severity of those difficulties is not known.
Inputs: Operator/maintainer accounts of accidents, near-accidents, mistakes, and near-mistakes.
Process: Interview large numbers of operators/maintainers; categorize incidents/accidents; use HF knowledge and experience to hypothesize sources of difficulty and how each one could be further studied; mitigate or redesign to eliminate difficulty.
Outputs: Sources of serious HF difficulties in the operation of a system or its maintenance, with suggested solutions to those difficulties.
Additional organizational analysis: Trace difficulties between individuals and organizations and map associated responsibilities and process assignments. Identification of potential gaps or disconnects based on the mapping.

D. Functional Flow Analysis
Definition: When applied to HF, it is a structured technique for determining system requirements. Decomposes the sequence of functions or actions that a system must perform.
Purpose: Provides a sequential ordering of functions that will achieve system requirements and a detailed checklist of system functions that must be considered in ensuring that the system will be able to perform its intended mission. These functions are needed for the solution of trade studies and determinations of their allocation among operators, equipment, software, or some combination of them. Decision-action analysis is often used instead of a functional flow analysis when the system requires binary decisions (e.g., software-oriented).
Inputs: Operational analyses, analyses of similar systems, activity analyses.
Process: Top-level functions are progressively expanded to lower levels containing more and more detailed information. If additional elaboration is needed about information requirements, sources of information, potential problems, and error-inducing features, for example, then an action-information analysis is also performed.
Outputs: Functional flow diagrams.
Additional organizational analysis: Map functional flows to associated organizational structures. Identification of any logistics or responsibility gaps based on the integrated map.

E. Action-Information Analysis
Definition: When applied to HF, it elaborates each function or action in functional flows or decision-action diagrams by identifying the information that is needed for each action or decision to occur. This analysis is often supplemented with sources of data, potential problems, and error-inducing features associated with each function or action.
Purpose: Provides more detail before allocating functions to agents.
Inputs: Data from the analysis of similar systems, activity analyses, critical incident studies, functional flow and decision-action analyses, and comments and data from knowledgeable experts.
Process: Each function or action identified in functional flows or decision-action analyses is elaborated.
Outputs: Detailed lists of information requirements for operator-system interfaces, early estimates of special personnel provisions likely to be needed, support requirements, and lists of potential problems and probable solutions. Often produces suggestions for improvements in design of hardware, software, or procedures.
Additional organizational analysis: Map associated components (function, action, decisions) to the responsible organizational structures. Identification of any logistics or responsibility gaps based on the integrated map.

F. Functional Allocation
Definition: When applied to HF, it is a procedure for assigning each system function, action, and decision to hardware, software, operators, maintainers, or some combination of them.
Purpose: To help identify user skill needs and provide preliminary estimates of staffing, training, and procedures requirements and workload assessments. Functional flows and decision-action analyses do not identify the agent (person or machine) that will execute the functions.
Inputs: Functional flow analyses, decision-action analyses, action-information analyses, past engineering experience with similar systems, state-of-the-art performance capabilities of machines and software, and store of known human capabilities and limitations.
Process: Identify and place to the side all those functions that must be allocated to personnel or equipment for reasons of safety, limitations of engineering technology, human limitations, or system requirements. List the remaining functions, those that could be either performed manually or by some combination of personnel and equipment. Prepare descriptions of implementation. Establish weighting criteria for each design alternative. Compare alternative configurations in terms of their effectiveness in performing the given function according to those criteria.
Outputs: Allocations of system functions to hardware, software, operators, maintainers, or some combination of them. Task analyses are then performed on those functions allocated to humans.
Additional organizational analysis: After the initial assignment configuration is completed, evaluate allocations against relevant organizational norms, values, and organizational interfaces for logistics and management impacts. List of potential impacts with recommended modifications in either functions or management or both.

G. Task Analysis
Definition: When applied to HF, it is a method for producing an ordered list of all the things people will do in a system.
Purpose: To develop input to all the analyses that come next. A subsequent timeline analysis chart can provide the temporal relationship among tasks: sequences of operator or maintainer actions, the times required for each action, and the time at which each action should occur.
Inputs: Data from all the methods above supplemented with information provided by experts who have had experience with similar systems.
Process: HF specialists and subject matter experts list and describe all tasks, subdividing them into subtasks with the addition of supplementary information.
Outputs: Ordered list of all the tasks people will perform in a system. Details on information requirements, evaluations and decisions that must be made, task times, operator actions, and environmental conditions.
Additional organizational analysis: Group all tasks assigned to a given organization and evaluate the range of skills, communications, and management capabilities required. Evaluate new requirements against the existing organization's standard operating procedures, norms, and values. Identify group-level workloads, management impacts, and training requirements.

H. Fault Tree Analysis
Definition: When applied to HF, it determines those combinations of events that could cause specific system failures, faults, or catastrophes. Fault tree and failure mode and effects analysis are concerned with errors.
Purpose: Anticipate mistakes that operators or maintainers might make and try to design against those mistakes. A limitation for HF is that each event must be described in terms of only two possible conditions, and it is extremely difficult to attach exact probabilities to human activities.
Inputs: All outputs of the methods described above, supplemented with data on human reliability.
Process: Construct a tree with symbols (logic gates) to represent events and consequences and describe the logical relationship between events.
Outputs: Probabilities of various undesirable workflow-related events, the probable sequences that would produce them, and the identification of sensitive elements that could reduce the probability of a mishap.
Additional organizational analysis: Anticipate mistakes and disconnects that may occur in the workflow between individuals and organizations, including unanticipated interactions between standard organization operating procedures and possible system events. Probabilities of various undesirable workflow-related events arranged by organizational interface points for the workflows, the probable sequences that would produce them, and the identification of sensitive elements that could reduce the probability of a mishap.

I. Failure Modes and Effects Analysis
Definition: When applied to HF, it is a methodology for identifying error-inducing features in a system.
Purpose: Deduce the consequences for system performance of a failure in one or more components (operators and maintainers) and the probabilities of those consequences occurring.
Inputs: All outputs of methods described above, supplemented with data on human reliability.
Process: The analyst identifies the various errors operators or maintainers may make in carrying out subtasks or functions. Estimates are made of the probabilities or frequencies of making each kind of error. The consequences of each kind of error are deduced by tracing its effects through a functional flow diagram to its final outcome.
Outputs: List of human failures that would have critical effects on system operation, the probabilities of system or subsystem failures due to human errors, and identification of those human tasks or actions that should be modified or replaced to reduce the probability of serious system failures.
Additional organizational analysis: Identify possible organizational entities and behaviors (i.e., political) involved with the system. Estimate the probabilities of occurrence and the impact of consequence. List organizational behaviors that would have a critical effect on the system operation; the probabilities of the system or subsystem failures due to the organizational behaviors; and those organizational values, culture, or actions/standard operating procedures that should be modified or replaced to reduce the probability of serious system failures.

J. Link Analysis
Definition: When applied to HF, it is an examination of the relationships between components, including the physical layout of instrument panels, control panels, workstations, or work areas to meet certain objectives.
Purpose: To determine the efficiency and the effectiveness of the physical layout of the human-machine interface.
Inputs: Data from activity and task analysis and observations of functional or simulated systems.
Process: List all personnel and items. Estimate frequencies of linkages between items, operators, or items and operators. Estimate the importance of each link. Compute frequency-importance values for each link. Starting with the highest link values, successively add items with lower link values and readjust to minimize linkages. Fit the layout into the allocated space. Evaluate the new layout against the original objectives.
Outputs: Recommended layouts of panels, workstations, or work areas.
Additional organizational analysis: Assess links at individual versus organizational levels. Adjusted layouts of panels, workstations, or work areas based on optimum individual and organizational performance priorities.

K. Simulation
Definition: When applied to HF, it is a basic engineering or HF method to predict performance of systems. Includes usability testing and prototyping.
Purpose: To predict the performance of systems, or parts of systems, that do not exist or to allow users to experience and receive training on systems, or parts of systems, that are complex, dangerous, or expensive.
Inputs: Hardware, software, functions, and tasks elucidated in task analysis, operating procedures.
Process: Users perform typical tasks on models or mockups prepared to incorporate some or all of the inputs.
Outputs: Predictions about system performance, assessment of workloads, evaluation of alternative configurations, evaluation of operating procedures, training, and identification of accident- or error-provocative situations and mismatches between personnel and equipment.
Additional organizational analysis: Assess individual performance within possible organizational models. Predict system performance under varied organizational conditions, assess workloads, evaluate alternative configurations and operating procedures, train, and identify accident- or error-provocative situations and mismatches between personnel and equipment.

L. Controlled Experimentation
Definition: When applied to HF, it is a highly controlled and structured version of simulation with deliberate manipulation of some variables.
Purpose: To answer one or more hypotheses and narrow the number of alternatives used in simulation.
Inputs: From any or all methods listed thus far.
Process: Select experimental design; identify dependent, independent, and controlled variables; set up test, apparatus, facilities, and tasks; prepare test protocol and instructions; select subjects; run tests; and analyze results statistically.
Outputs: Quantitative statements of the effects of some variables on others and differences between alternative configurations, procedures, or environments.
Additional organizational analysis: Scale to organizational levels where appropriate and feasible. Quantitative statements of the effects of some organizational variables on others and differences between alternative organizational configurations, procedures, or environments.

M. Operational Sequence Analysis
Definition: When applied to HF, it is a powerful technique used to simulate systems.
Purpose: To permit visualization of interrelationships between operators and between operators and equipment, identify interface problems, and explicitly identify decisions that might otherwise go unrecognized. Less expensive than mockups, prototypes, or computer programs that attempt to serve the same purpose.
Inputs: Data from all above listed methods.
Process: Diagram columns show timescale, external inputs, operators, machines, and external outputs. The flow of events (actions, functions, decisions) is then plotted from top to bottom against the timescale using special symbology.
Outputs: Time-based chart showing the functional relationships among system elements, the flow of materials or information, the physical and sequential distribution of operations, the inputs and outputs of subsystems, the consequences of alternative design configurations, and potential sources of human difficulties.
Additional organizational analysis: After initial analysis is complete, group results by responsible organizations. Assessment of range of skills, communication processes, and management capabilities required. Evaluation of performance under various organizational structures.

N. Workload Assessment
Definition: When applied to HF, it is a procedure for appraising operator and crew task loadings or the ability of personnel to carry out all assigned tasks in the time allotted or available.
Purpose: To keep operator workloads at reasonable levels and to ensure that workloads are distributed equitably among operators.
Inputs: Task time, frequency, and precision data are obtained from many of the above listed methods, supplemented with judgments and estimates from knowledgeable experts.
Process: DOD-HDBK-763 recommends a method that estimates the time required to perform a task divided by the time available or allotted to perform it. There are three classes of methods: performance measures, physiological measures, and subjective workloads either during or after an activity.
Outputs: Quantitative assessments of estimated workloads for particular tasks at particular times.
Additional organizational analysis: After initial analysis is complete, group results by responsible organizations. Assessment of range of skills, communication processes, and management capabilities required. Evaluation of performance under various organizational structures.

O. Situational Awareness
Definition: When applied to HF, it is a procedure for appraising operator and crew awareness of tasks and current situation.
Purpose: To raise operator and maintainer awareness to maintain safety and efficiency.
Inputs: All of the above listed analyses.
Process: Different methods have been proposed, including the situation awareness rating technique, situation awareness behaviorally anchored rating scale, situation awareness global assessment technique, and situation awareness verification and analysis tool.
Outputs: Quantitative estimates of situational awareness for particular tasks at particular times.
Additional organizational analysis: Collect organizational decisionmaking structures and processes and map the organization's situational awareness profiles. Identification of possible gaps, disconnects, and shortfalls.

P. Performance Modeling
Definition: When applied to HF, it is a computational process for predicting human behavior based on current cognitive research.
Purpose: To predict human limitations and capabilities before prototyping.
Inputs: All of the above listed analyses.
Process: Input results from the above analyses. Input current relevant environmental and machine parameters. Can be interleaved with fast-time simulation to obtain frequency of error types.
Outputs: Interrelationships between operators and between operators and equipment, and identification of interface problems and decisions that might otherwise go unrecognized.
Additional organizational analysis: Scale as appropriate to relevant organizational behaviors. Interrelationships between individuals and organizations and identification of organizational interface problems and decisions that might otherwise go unrecognized.
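The core calculation behind technique N is simple enough to show directly. The sketch below implements the DOD-HDBK-763 style ratio described in the table (time required divided by time available); the task names, times, and the 0.8 review threshold are invented example values, not handbook or DOD requirements.

```python
# Illustrative sketch of the workload ratio from Table 7.4-1, technique N:
# workload = time required to perform a task / time available for the task.
# Task names, durations, and the 0.8 flag threshold are assumed example values.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    time_required_s: float   # estimated time to perform the task
    time_available_s: float  # time allotted or available for the task

def workload_index(task: Task) -> float:
    """Return the task workload ratio; values near or above 1.0 indicate overload."""
    return task.time_required_s / task.time_available_s

tasks = [
    Task("verify hatch seal", 40.0, 60.0),
    Task("configure comm panel", 55.0, 60.0),
    Task("respond to caution tone", 20.0, 15.0),
]

THRESHOLD = 0.8  # assumed review threshold, not a handbook value
for t in tasks:
    w = workload_index(t)
    flag = "REVIEW" if w > THRESHOLD else "ok"
    print(f"{t.name:<26} workload = {w:.2f}  [{flag}]")
```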


7.5 Environmental, Nuclear Safety, Planetary Protection, and Asset Protection Policy Compliance

7.5.1 NEPA and EO 12114

7.5.1.1 National Environmental Policy Act

The National Environmental Policy Act (NEPA) declares the basic national policy for protecting the human environment. NEPA sets the Nation's goals for enhancing and preserving the environment. NEPA also provides the procedural requirements to ensure compliance by all Federal agencies. NEPA compliance can be a critical path item in project or mission implementation. NEPA requires all Federal agencies to consider, before an action is taken, environmental values in the planning of actions and activities that may have a significant impact upon the quality of the human environment. NEPA directs agencies to consider alternatives to their proposed activities. In essence, NEPA requires NASA decisionmakers to integrate the NEPA process into early planning to ensure appropriate consideration of environmental factors, along with technical and economic ones. NEPA is also an environmental disclosure statute. It requires that available information be adequately addressed and made available to the NASA decisionmakers in a timely manner so they can consider the environmental consequences of the proposed action or activity before taking final action. Environmental information must also be made available to the public as well as to other Federal, state, and local agencies. NEPA does not require that the proposed action or activity be free of environmental impacts, be the most environmentally benign of potential alternatives, or be the most environmentally wise decision. NEPA requires the decisionmaker to consider environmental impacts as one factor in the decision to implement an action.

NASA activities are implemented through specific sponsoring entities, such as NASA HQ, NASA Centers (including component facilities, e.g., Wallops Flight Facility, White Sands Test Facility, and Michoud Assembly Facility), mission directorates, program, or mission support offices. The lead officials for these entities, the officials in charge, have the primary responsibility for ensuring that the NEPA process is integrated into their organizations' project planning activities before the sponsoring entities implement activities and actions. The sponsoring entities also are responsible for ensuring that records management requirements are met. NEPA functions are not performed directly by lead officials. Each NASA Center has an Environmental Management Office (EMO), which is usually delegated the responsibility for implementing NEPA. The EMO performs the primary or working-level functions of the NEPA process, such as evaluating proposed activities, developing and/or reviewing and approving required documentation, advising project managers, and signing environmental decision documents on projects and programs having little or no environmental impact. However, ultimate responsibility for complying with NEPA and completing the process in a timely manner lies with the program or project manager. Since the EMO provides essential functional support to the sponsoring entity, and because its implementation responsibilities are delegated, the term "sponsoring entity" will be used throughout to include the implementing NEPA organization at any NASA facility. In cases where the sponsoring entity needs to be further defined, it will be specifically noted. For proposals made by tenants or entities using services or facilities at a NASA Center or component facility, the sponsoring entity shall be that Center or, if such authority is delegated to the component facility, the component facility.

NEPA compliance documentation must be completed before project planning reaches a point where NASA's ability to implement reasonable alternatives is effectively precluded (i.e., before hard decisions are made regarding project implementation). Environmental planning factors should be integrated into the Pre-Phase A concept study phase when a broad range of alternative approaches is being considered. In the Phase A concept development stage, decisions are made that could affect the Phase B preliminary design stage. At a minimum, an environmental evaluation should be initiated in the Phase A concept development stage. During this stage, the responsible project manager will have the greatest latitude in making adjustments in the plan to mitigate or avoid important environmental sensitivities and in planning the balance of the NEPA process to avoid unpleasant surprises later in the project cycle which may have schedule and/or cost implications. Before completing the NEPA process, no NASA official can take an action that would (1) affect the environment or (2) limit the choice of reasonable alternatives.


Accommodating environmental requirements early in project planning ultimately conserves both budget and schedule. Further detail regarding NEPA compliance requirements for NASA programs and projects can be found in NPR 8580.1, Implementing the National Environmental Policy Act and Executive Order 12114.

7.5.1.2 EO 12114, Environmental Effects Abroad of Major Federal Actions

Executive Order (EO) 12114 was issued "solely for the purpose of establishing internal procedures for Federal agencies to consider the significant effects of their actions on the environment outside the United States, its territories and possessions." The EO also specifically provided that its purpose is to enable the decisionmakers of the Federal agencies to be informed of pertinent environmental considerations, and factor such considerations in their decisions; however, such decisionmakers must still take into account considerations such as foreign policy, national security, and other relevant special circumstances.

The NASA Office of the General Counsel (OGC), or designee, is the NASA point of contact and official NASA representative on any matter involving EO 12114. Accordingly, any action by, or any implementation or legal interpretation of, EO 12114 requires consultations with and the concurrence of the designee of the OGC. The sponsoring entity and local EMO contemplating an action that would have global environmental effects or effects outside the territorial jurisdiction of the United States must notify the NASA Headquarters/Environmental Management Division (HQ/EMD). The HQ/EMD will, in turn, coordinate with the Office of the General Counsel, the Assistant Administrator for External Relations, and other NASA organizations as appropriate, and assist the sponsoring entity to develop a plan of action. (Such a plan is subject to the concurrence of the OGC.) Further detail regarding EO 12114 compliance requirements for NASA programs and projects can be found in NPR 8580.1.

7.5.2 PD/NSC-25

NASA has procedural requirements for characterizing and reporting potential risks associated with a planned launch of radioactive materials into space, on launch vehicles and spacecraft, during normal or abnormal flight conditions. Procedures and levels of review and analysis required for nuclear launch safety approval vary with the quantity of radioactive material planned for use and potential risk to the general public and the environment. Specific details concerning these requirements can be found in NPR 8715.3, NASA General Safety Program Requirements.

For any U.S. space mission involving the use of radioisotope power systems, radioisotope heating units, nuclear reactors, or a major nuclear source, launch approval must be obtained from the Office of the President per Presidential Directive/National Security Council Memorandum No. 25 (PD/NSC-25), "Scientific or Technological Experiments with Possible Large-Scale Adverse Environmental Effects and Launch of Nuclear Systems into Space," paragraph 9, as amended May 8, 1996.
The approval decision is based on an established and proven review process that includes an independent evaluation by an ad hoc Interagency Nuclear Safety Review Panel (INSRP) comprised of representatives from NASA, the Department of Energy (DOE), the Department of Defense, and the Environmental Protection Agency, with an additional technical advisor from the Nuclear Regulatory Commission. The process begins with development of a launch vehicle databook (i.e., a compendium of information describing the mission, launch system, and potential accident scenarios including their environments and probabilities). DOE uses the databook to prepare a Preliminary Safety Analysis Report for the space mission. In all, three Safety Analysis Reports (SARs) are typically produced and submitted to the mission's INSRP: the PSAR, an updated SAR (draft final SAR), and a final SAR. The DOE project office responsible for providing the nuclear power system develops these documents.

The ad hoc INSRP conducts its nuclear safety/risk evaluation and documents the results in a nuclear Safety Evaluation Report (SER). The SER contains an independent evaluation of the mission radiological risk. DOE uses the SER as its basis for accepting the SAR. If the DOE Secretary formally accepts the SAR-SER package, it is forwarded to the NASA Administrator for use in the launch approval process.

NASA distributes the SAR and SER to the other cognizant Government agencies involved in the INSRP, and solicits their assessment of the documents. After receiving responses from these agencies, NASA conducts internal management reviews to address the SAR and SER and any other nuclear safety information pertinent to the launch. If the NASA Administrator recommends proceeding with the launch, then a request for nuclear safety launch approval is sent to the director of the Office of Science and Technology Policy (OSTP) within the Office of the President.

NASA HQ is responsible for implementing this process for NASA missions. It has traditionally enlisted the Jet Propulsion Laboratory (JPL) to assist in this activity. DOE supports the process by analyzing the response of redundant power system hardware to the different accident scenarios identified in the databook and preparing a probabilistic risk assessment of the potential radiological consequences and risks to the public and the environment for the mission. KSC is responsible for overseeing development of databooks and traditionally uses JPL to characterize accident environments and integrate databooks. Both KSC and JPL subcontractors provide information relevant to supporting the development of databooks. The development team ultimately selected for a mission would be responsible for providing payload descriptions, describing how the nuclear hardware integrates into the spacecraft, describing the mission, and supporting KSC and JPL in their development of databooks.

Mission directorate associate administrators, Center Directors, and program executives involved with the control and processing of radioactive materials for launch into space must ensure that basic designs of vehicles, spacecraft, and systems utilizing radioactive materials provide protection to the public, the environment, and users such that radiation risk resulting from exposures to radioactive sources is as low as reasonably achievable. Nuclear safety considerations must be incorporated from the Pre-Phase A concept study stage throughout all project stages to ensure that the overall mission radiological risk is acceptable. All space flight equipment (including medical and other experimental devices) that contains or uses radioactive materials must be identified and analyzed for radiological risk. Site-specific ground operations and radiological contingency plans must be developed commensurate with the risk represented by the planned launch of nuclear materials. Contingency planning, as required by the National Response Plan, includes provisions for emergency response and support for source recovery efforts. NPR 8710.1, Emergency Preparedness Program, and NPR 8715.2, NASA Emergency Preparedness Plan Procedural Requirements - Revalidated, address the NASA emergency preparedness policy and program requirements.

7.5.3 Planetary Protection

The United States is a signatory to the United Nations' Treaty of Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies. Known as the Outer Space Treaty, it states in part (Article IX) that exploration of the Moon and other celestial bodies shall be conducted "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter." NASA policy (NPD 8020.7, Biological Contamination Control for Outbound and Inbound Planetary Spacecraft) specifies that the purpose of preserving solar system conditions is for future biological and organic constituent exploration. This NPD also establishes the basic NASA policy for the protection of the Earth and its biosphere from planetary and other extraterrestrial sources of contamination.
The general regulations to which NASA flight projects must adhere are set forth in NPR 8020.12, Planetary Protection Provisions for Robotic Extraterrestrial Missions. Different requirements apply to different missions, depending on which solar system object is targeted or encountered and the spacecraft or mission type (flyby, orbiter, lander, sample return, etc.). For some bodies (such as the Sun, Moon, and Mercury), there are minimal planetary protection requirements. Current requirements for the outbound phase of missions to Mars and Europa, however, are particularly rigorous. Table 7.5-1 shows the current planetary protection categories, while Table 7.5-2 provides a brief summary of their associated requirements.

Table 7.5-1 Planetary Protection Mission Categories

Category I (any mission type): Target bodies not of direct interest for understanding the process of chemical evolution. No protection of such planets is warranted (no requirements). Example: lunar missions.

Category II (any mission type): Target bodies of significant interest relative to the process of chemical evolution, but with only a remote chance that contamination by spacecraft could jeopardize future exploration. Examples: Stardust (outbound), Genesis (outbound), Cassini.

Category III (flyby, orbiter): Target bodies of significant interest relative to the process of chemical evolution and/or the origin of life, or for which scientific opinion provides a significant chance of contamination which could jeopardize a future biological experiment. Examples: Odyssey, Mars Global Surveyor, Mars Reconnaissance Orbiter.

Category IV (lander, probe): Same target-body priorities as Category III. Examples: Mars Exploration Rover, Phoenix, Europa Explorer, Mars Sample Return (outbound).

Category V (any solar system body): Earth-return missions. Unrestricted Earth return, where no special precautions are needed for returning material/samples back to Earth; examples: Stardust (return), Genesis (return). Restricted Earth return, where special precautions need to be taken for returning material/samples back to Earth (see NPR 8020.12); example: Mars Sample Return (return).

Table 7.5-2 Summarized Planetary Protection Requirements

Category I: Certification of category.
Category II: Avoidance of accidental impact by spacecraft and launch vehicle. Documentation of final disposition of launched hardware.
Category III: Stringent limitations on the probability of impact. Requirements on orbital lifetime or requirements for microbial cleanliness of spacecraft.
Category IV: Stringent limitations on the probability of impact and/or the contamination of the object. Microbial cleanliness of landed hardware surfaces directly established by bioassays.
Category V: Outbound requirements per the category of a lander mission to the target. Detailed restricted Earth return requirements will depend on many factors, but will likely include sterilization of any hardware that contacted the target planet before its return to Earth, and the containment of any returned sample.

At the core, planetary protection is a project management responsibility and a systems engineering activity. The effort cuts across multiple WBS elements, and failure to adopt and incorporate a viable planetary protection approach during the early planning phases will add cost and complexity to the mission. Planning for planetary protection begins in Phase A, during which feasibility of the mission is established. Prior to the end of Phase A, the project manager must send a letter to the Planetary Protection Officer (PPO) stating the mission type and planetary targets and requesting that the mission be assigned a planetary protection category.

Prior to the PDR, at the end of Phase B, the project manager must submit to the NASA PPO a planetary protection plan detailing the actions that will be taken to meet the requirements. The project's progress and completion of the requirements are reported in a planetary protection pre-launch report submitted to the NASA PPO for approval. The approval of this report at the FRR constitutes the final planetary protection approval for the project and must be obtained for permission to launch. An update to this report, the planetary protection post-launch report, is prepared to report any deviations from the planned mission due to actual launch or early mission events. For sample return missions only, additional reports and reviews are required: prior to launch toward the Earth, prior to commitment to Earth reentry, and prior to the release of any extraterrestrial sample to the scientific community for investigation. Finally, at the formally declared End of Mission (EOM), a planetary protection EOM report is prepared. This document reviews the entire history of the mission in comparison to the original planetary protection plan and documents the degree of compliance with NASA's planetary protection requirements. This document is typically reported on by the NASA PPO at a meeting of the Committee on Space Research (COSPAR) to inform other spacefaring nations of NASA's degree of compliance with international planetary protection requirements.

7.5.4 Space Asset Protection

The terrorist attacks on the World Trade Center in New York and on the Pentagon on September 11, 2001, have created an atmosphere of greater vigilance on the part of Government agencies to ensure that sufficient security is in place to protect their personnel, physical assets, and information, especially those assets that contribute to the political, economic, and military capabilities of the United States. Current trends in technology proliferation, accessibility to space, globalization of space programs and industries, commercialization of space systems and services, and foreign knowledge about U.S. space systems increase the likelihood that U.S. space systems may come under attack, particularly vulnerable systems. The ability to restrict or deny freedom of access to and operations in space is no longer limited to global military powers. The reality is that there are many existing capabilities to deny, disrupt, or physically destroy orbiting spacecraft and the ground facilities that command and control them. Knowledge of U.S. space systems' functions, locations, and physical characteristics, as well as the means to conduct counterspace operations, is increasingly available on the international market. Nations or groups hostile to the United States either possess or can acquire the means to disrupt or destroy U.S. space systems by attacking satellites in space, their communications nodes on the ground and in space, the ground nodes that command these satellites or process their data, and/or the commercial infrastructure that supports a space system's operations.

7.5.4.1 Protection Policy

The new National Space Policy authorized by the President on August 31, 2006, states that space capabilities are vital to the Nation's interests and that the United States will "take those actions necessary to protect its space capabilities." The policy also gives responsibility for Space Situational Awareness (SSA) to the Secretary of Defense. In that capacity the Secretary of Defense will conduct SSA for civil space capabilities and operations, particularly human space flight activities. SSA provides an in-depth knowledge and understanding of the threats posed to U.S., allied, and coalition space systems by adversaries and the environment, and is essential in developing and employing protection measures. Therefore, NASA's space asset protection needs will drive the requirements that NASA levies on DOD for SSA.

7.5.4.2 Goal

The overall space asset protection goal for NASA is to support sustained mission assurance through the reduction of susceptibilities and the mitigation of vulnerabilities, relative to risk, and within fiscal constraints.

7.5.4.3 Scoping

Space asset protection involves the planning and implementation of measures to protect NASA space assets from intentional or unintentional disruption, exploitation, or attack, whether natural or manmade.
It is essential that protection is provided for all segments of a space system (ground, communications/information, space, and launch) and covers the entire life cycle of a project. Space asset protection includes aspects of personnel, physical, information, communications, information technology, and operational security, as well as counterintelligence activities. The role of the systems engineer is to integrate security competencies with space systems engineering and operations expertise to develop mission protection strategies consistent with payload classifications as defined in NPR 8705.4, Risk Classification for NASA Payloads.

7.5.4.4 Protection Planning

Systems engineers use protection planning processes and products (which include engineering trade studies and cost-benefit analyses) to meet NASA's needs for acquiring, fielding, and sustaining secure and uncompromised space systems. Project protection plans are single-source documents that coordinate and integrate protection efforts and prevent inadvertent or uncontrolled disclosure of sensitive program information. Protection plans provide project management personnel (project manager, project scientist, mission systems engineer, operations manager, user community, etc.) with an overall view of the valid threats to a space system (both hostile and environmental), identify infrastructure vulnerabilities, and propose security countermeasures to mitigate risks and enhance survivability of the mission. An outline for a typical protection plan can be found in Appendix Q.


7.6 Use of Metric System

The decision whether a project or program could or should implement the System Internationale (SI), often called the "metric system," requires consideration of a number of factors, including cost, technical, risk, and other programmatic aspects.

The Metric Conversion Act of 1975 (Public Law 94-168), amended by the Omnibus Trade and Competitiveness Act of 1988 (Public Law 100-418), establishes a national goal of establishing the metric system as the preferred system of weights and measures for U.S. trade and commerce. NASA has developed NPD 8010.2, Use of the SI (Metric) System of Measurement in NASA Programs, which implements SI and provides specific requirements and responsibilities for NASA.

However, a second factor to consider is that there are possible exceptions to the required implementation approach. Both EO 12770 and NPD 8010.2 allow exceptions and, because full SI implementation may be difficult, allow the use of "hybrid" systems. Consideration of the following factors will have a direct impact on the implementation approach and use of exceptions by the program or project.

Programs or projects must do analysis during the early life-cycle phases, when the design solutions are being developed, to identify where SI is feasible or recommended and where exceptions will be required. A major factor to consider is the capability to actually produce or provide metric-based hardware components. Results and recommendations from these analyses must be presented by SRR for approval.

In planning program or project implementation to produce metric-based systems, issues to be addressed should include the following:

• Interfaces with heritage components (e.g., valves, pyrotechnic devices, etc.) built to English-based units:
  - Whether conversion from English to SI and/or interface to English-based hardware is required.
  - The team should review design implementation to ensure there is no certification impact with heritage hardware, or identify and plan for any necessary recertification efforts.
• Dimensioning and tolerancing:
  - Can result in parts that do not fit.
  - Rounding errors have occurred when converting units from one unit system to the other.
  - The team may require specific additional procedures, steps, and drawing Quality Assurance (QA) personnel when converting units.
• Tooling:
  - Not all shops have full metric tooling (e.g., drill bits, taps, end mills, reamers, etc.).
  - The team needs to inform potential contractors of intent to use SI and obtain feedback as to potential impacts.
• Fasteners and miscellaneous parts:
  - High-strength fastener choices and availability are more limited in metric sizes.
  - Bearings, pins, rod ends, bushings, etc., are readily available in English with minimal lead times.
  - The team needs to ascertain availability of acceptable SI-based fasteners in the timeframe needed.
• Reference material:
  - Some key aerospace reference materials are built only in English units, e.g., MIL-HDBK-5 (metallic material properties), and values will need to be converted when used.
  - Other key reference materials or commercial databases are built only in SI units.
  - The team needs to review the reference material to be used and ensure acceptable conversion controls are in place, if necessary.
• Corporate knowledge:
  - Many engineers presently think in English units, i.e., can relate to pressure in PSI, can relate to material strength in KSI, can relate to a tolerance of 0.003 inches, etc.
  - However, virtually all engineers coming out of school in this day and era presently think in SI units and have difficulty relating to English-based units such as slugs (for mass) and would require retraining with an attendant increase in conversion errors.
  - The team needs to be aware of their program- or project-specific knowledge in English and SI units and obtain necessary training and experience.


• Industry practices:
  - Certain industries work exclusively in English units, and sometimes have their own jargon associated with English material properties. The parachute industry falls in this category, e.g., "600-lb braided Kevlar line."
  - Other industries, especially international suppliers, may work exclusively in metric units, e.g., "30-mm-thick raw bar stock."
  - The team needs to be aware of these unique cases and ensure both procurement and technical design and integration have the appropriate controls to avoid errors.
• Program or project controls: The team needs to consider, early in the SE process, what program- or project-specific risk management controls (such as configuration management steps) are required. This will include such straightforward concerns as the conversion(s) between system elements that are in English units and those in SI units, or other, more complex, issues.

Several NASA projects have taken the approach of using both systems, which is allowed by NPD 8010.2. For example, the Mars soil drill project designed and developed their hardware using English-based components, while accomplishing their analyses using SI-based units. Other small-scale projects have successfully used a similar approach.

For larger or more dispersed projects or programs, a more systematic and complete risk management approach may be needed to successfully implement an SI-based system. Such things as standard conversion factors (e.g., from pounds to kilograms) should be documented, as should standard SI nomenclature. Many of these risk management aspects can be found in such documents as the National Institute of Standards and Technology's Guide for the Use of the International System of Units (SI) and the DOD Guide for Identification and Development of Metric Standards.

Until the Federal Government and the aerospace industrial base are fully converted to an SI-based unit system, the various NASA programs and projects will have to address their own level of SI implementation on a case-by-case basis. It is the responsibility of each NASA program and project management team, however, to comply with all laws and executive orders while still maintaining a reasonable level of risk for cost, schedule, and performance.
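One low-overhead way to realize the guidance above that standard conversion factors should be documented is to keep every factor in a single reviewed table and convert only through it, so rounding is applied once and is visible. The sketch below is illustrative, not a NASA or NIST prescription; the factor values shown are the commonly used definitions, but the function names and the 10-percent tolerance-consumption check are assumptions made for this example.

```python
# Illustrative sketch: a single documented table of conversion factors, applied
# explicitly so that rounding happens once and is visible in reviews. The
# factors for inch->mm and lbm->kg are exact by definition; psi->kPa is rounded.
# Function names and the drawing-tolerance check are assumptions for this example.

CONVERSIONS = {
    ("in", "mm"):   25.4,          # exact by definition
    ("lbm", "kg"):  0.45359237,    # exact by definition
    ("psi", "kPa"): 6.894757,      # rounded value of the defined factor
}

def convert(value: float, src: str, dst: str) -> float:
    """Convert value from src units to dst units using only documented factors."""
    try:
        return value * CONVERSIONS[(src, dst)]
    except KeyError:
        raise ValueError(f"no documented conversion from {src} to {dst}") from None

def rounding_ok(nominal_in: float, tol_in: float, decimals_mm: int = 2) -> bool:
    """Check that rounding a converted dimension to the drawing precision
    does not consume more than 10% of the stated tolerance (assumed policy)."""
    exact_mm = convert(nominal_in, "in", "mm")
    rounded_mm = round(exact_mm, decimals_mm)
    rounding_error_mm = abs(exact_mm - rounded_mm)
    return rounding_error_mm <= 0.1 * convert(tol_in, "in", "mm")

print(f'{convert(600.0, "lbm", "kg"):.2f} kg')           # e.g., a "600-lb" line rating
print(rounding_ok(nominal_in=1.2500, tol_in=0.003))       # drawing conversion check
```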


Appendix A: Acronyms

ACS  Attitude Control Systems
ACWP  Actual Cost of Work Performed
AD2  Advancement Degree of Difficulty Assessment
AHP  Analytic Hierarchy Process
AIAA  American Institute of Aeronautics and Astronautics
AO  Announcement of Opportunity
ASME  American Society of Mechanical Engineers
BAC  Budget at Completion
BCWP  Budgeted Cost for Work Performed
BCWS  Budgeted Cost for Work Scheduled
C&DH  Command and Data Handling
CACE  Capability for Accelerated Concurrent Engineering
CAIB  Columbia Accident Investigation Board
CAM  Control Account Manager or Cost Account Manager
CCB  Configuration Control Board
CDR  Critical Design Review
CE  Concurrent Engineering
CERR  Critical Event Readiness Review
CI  Configuration Item
CM  Configuration Management
CMC  Center Management Council
CMMI  Capability Maturity Model® Integration
CMO  Configuration Management Organization
CNSI  Classified National Security Information
CoF  Construction of Facilities
ConOps  Concept of Operations
COSPAR  Committee on Space Research
COTS  Commercial Off the Shelf
CPI  Critical Program Information or Cost Performance Index
CRM  Continuous Risk Management
CSA  Configuration Status Accounting
CWBS  Contract Work Breakdown Structure
DCR  Design Certification Review
DGA  Designated Governing Authority
DLA  Defense Logistics Agency
DM  Data Management
DOD  Department of Defense
DOE  Department of Energy
DODAF  DOD Architecture Framework
DR  Decommissioning Review
DRM  Design Reference Mission
EAC  Estimate at Completion
ECP  Engineering Change Proposal
ECR  Environmental Compliance and Restoration or Engineering Change Request
EEE  Electrical, Electronic, and Electromechanical
EFFBD  Enhanced Functional Flow Block Diagram
EIA  Electronic Industries Alliance
EMC  Electromagnetic Compatibility
EMI  Electromagnetic Interference
EMO  Environmental Management Office
EO  Executive Order
EOM  End of Mission
EV  Earned Value
EVM  Earned Value Management
FAD  Formulation Authorization Document
FAR  Federal Acquisition Requirement
FCA  Functional Configuration Audit
FDIR  Failure Detection, Isolation, and Recovery
FFBD  Functional Flow Block Diagram
FMEA  Failure Modes and Effects Analysis
FMECA  Failure Modes, Effects, and Criticality Analysis
FMR  Financial Management Requirements
FRR  Flight Readiness Review
FS&GS  Flight Systems and Ground Support
GEO  Geostationary
GFP  Government-Furnished Property
GMIP  Government Mandatory Inspection Point
GPS  Global Positioning Satellite
HF  Human Factors
HQ  Headquarters
HQ/EMD  NASA Headquarters/Environmental Management Division
HWIL  Hardware in the Loop
ICA  Independent Cost Analysis
ICD  Interface Control Document/Drawing
ICE  Independent Cost Estimate
ICP  Interface Control Plan
IDD  Interface Definition Document
IEEE  Institute of Electrical and Electronics Engineers
ILS  Integrated Logistics Support
INCOSE  International Council on Systems Engineering


INSRP  Interagency Nuclear Safety Review Panel
IPT  Integrated Product Team
IRD  Interface Requirements Document
IRN  Interface Revision Notice
ISO  International Organization for Standardization
IT  Information Technology or Iteration
ITA  Internal Task Agreement
ITAR  International Traffic in Arms Regulation
I&V  Integration and Verification
IV&V  Independent Verification and Validation
IWG  Interface Working Group
JPL  Jet Propulsion Laboratory
KDP  Key Decision Point
KSC  Kennedy Space Center
LCCE  Life-Cycle Cost Estimate
LEO  Low Earth Orbit or Low Earth Orbiting
LLIL  Limited Life Items List
LLIS  Lessons Learned Information System
M&S  Modeling and Simulation
MAUT  Multi-Attribute Utility Theory
MCDA  Multi-Criteria Decision Analysis
MCR  Mission Concept Review
MDAA  Mission Directorate Associate Administrator
MDR  Mission Definition Review
MOE  Measure of Effectiveness
MOP  Measure of Performance
MOU  Memorandum of Understanding
NASA  National Aeronautics and Space Administration
NEDT  NASA Exploration Design Team
NEPA  National Environmental Policy Act
NFS  NASA FAR Supplement
NODIS  NASA On-Line Directives Information System
NIAT  NASA Integrated Action Team
NOAA  National Oceanic and Atmospheric Administration
NPD  NASA Policy Directive
NPR  NASA Procedural Requirements
OCE  Office of the Chief Engineer
OGC  Office of the General Counsel
OMB  Office of Management and Budget
ORR  Operational Readiness Review
OSTP  Office of Science and Technology Policy
OTS  Off-the-Shelf
PAR  Program Approval Review
PBS  Product Breakdown Structure
PCA  Physical Configuration Audit or Program Commitment Agreement
PD/NSC  Presidential Directive/National Security Council
PDR  Preliminary Design Review
PERT  Program Evaluation and Review Technique
PFAR  Post-Flight Assessment Review
PHA  Preliminary Hazard Analysis
PI  Performance Index/Principal Investigator
PIR  Program Implementation Review
PIRN  Preliminary Interface Revision Notice
PKI  Public Key Infrastructure
PLAR  Post-Launch Assessment Review
P(LOC)  Probability of Loss of Crew
P(LOM)  Probability of Loss of Mission
PMC  Program Management Council
PPBE  Planning, Programming, Budgeting, and Execution
PPO  Planetary Protection Officer
PQASP  Program/Project Quality Assurance Surveillance Plan
PRA  Probabilistic Risk Assessment
PRD  Project Requirements Document
PRR  Production Readiness Review
P/SDR  Program/System Definition Review
PSR  Program Status Review
P/SRR  Program/System Requirements Review
PTR  Periodic Technical Reviews
QA  Quality Assurance
R&T  Research and Technology
RF  Radio Frequency
RFA  Requests for Action
RFI  Request for Information
RFP  Request for Proposal
RID  Review Item Discrepancy
SAR  System Acceptance Review or Safety Analysis Report
SBU  Sensitive But Unclassified
SDR  System Definition Review
SE  Systems Engineering
SEE  Single-Event Effects
SEMP  Systems Engineering Management Plan
SER  Safety Evaluation Report
SI  System Internationale (metric system)
SIR  System Integration Review
SMA  Safety and Mission Assurance
SOW  Statement of Work
SP  Special Publication
SPI  Schedule Performance Index
SRB  Standing Review Board
SRD  System Requirements Document
SRR  System Requirements Review
SSA  Space Situational Awareness
STI  Scientific and Technical Information
STS  Space Transportation System
SysML  System Modeling Language
T&E  Test and Evaluation


TA: Technology Assessment
TBD: To Be Determined
TBR: To Be Resolved
TDRS: Tracking and Data Relay Satellite
TDRSS: Tracking and Data Relay Satellite System
TLA: Timeline Analysis
TLS: Timeline Sheet
TMA: Technology Maturity Assessment
TPM: Technical Performance Measure
TRAR: Technology Readiness Assessment Report
TRL: Technology Readiness Level
TRR: Test Readiness Review
TVC: Thrust Vector Controller
UML: Unified Modeling Language
USML: United States Munitions List
V&V: Verification and Validation
VAC: Variance at Completion
WBS: Work Breakdown Structure


Appendix B: Glossary

Acceptable Risk: The risk that is understood and agreed to by the program/project, governing authority, mission directorate, and other customer(s) such that no further specific mitigating action is required.
Acquisition: The acquiring by contract with appropriated funds of supplies or services (including construction) by and for the use of the Government through purchase or lease, whether the supplies or services are already in existence or must be created, developed, demonstrated, and evaluated. Acquisition begins at the point when Agency needs are established and includes the description of requirements to satisfy Agency needs, solicitation and selection of sources, award of contracts, contract financing, contract performance, contract administration, and those technical and management functions directly related to the process of fulfilling Agency needs by contract.
Activity: (1) Any of the project components or research functions that are executed to deliver a product or service or provide support or insight to mature technologies. (2) A set of tasks that describe the technical effort to accomplish a process and help generate expected outcomes.
Advancement Degree of Difficulty Assessment (AD2): The process to develop an understanding of what is required to advance the level of system maturity.
Allocated Baseline (Phase C): The allocated baseline is the approved performance-oriented configuration documentation for a CI to be developed that describes the functional and interface characteristics that are allocated from a higher level requirements document or a CI and the verification required to demonstrate achievement of those specified characteristics. The allocated baseline extends the top-level performance requirements of the functional baseline to sufficient detail for initiating manufacturing or coding of a CI. The allocated baseline is controlled by NASA. The allocated baseline(s) is typically established at the Preliminary Design Review. Control of the allocated baseline would normally occur following the Functional Configuration Audit.
Analysis: Use of mathematical modeling and analytical techniques to predict the compliance of a design to its requirements based on calculated data or data derived from lower system structure end product validations.
Analysis of Alternatives: A formal analysis method that compares alternative approaches by estimating their ability to satisfy mission requirements through an effectiveness analysis and by estimating their life-cycle costs through a cost analysis. The results of these two analyses are used together to produce a cost-effectiveness comparison that allows decisionmakers to assess the relative value or potential programmatic returns of the alternatives.
Analytic Hierarchy Process: A multi-attribute methodology that provides a proven, effective means to deal with complex decisionmaking and can assist with identifying and weighting selection criteria, analyzing the data collected for the criteria, and expediting the decisionmaking process.
Approval: Authorization by a required management official to proceed with a proposed course of action. Approvals must be documented.
Approval (for Implementation): The acknowledgment by the decision authority that the program/project has met stakeholder expectations and formulation requirements, and is ready to proceed to implementation. By approving a program/project, the decision authority commits the budget resources necessary to continue into implementation.
As-Deployed Baseline: The as-deployed baseline occurs at the Operational Readiness Review. At this point, the design is considered to be functional and ready for flight. All changes will have been incorporated into the documentation.


Appendix B: GlossaryBaselineTermBidirectionalTraceabilityBrassboardBreadboardComponentFacilitiesConcept of Operations(ConOps)(sometimesOperationsConcept)ConcurrenceConcurrent<strong>Engineering</strong>ConfigurationItemsConfigurationManagementProcessContext DiagramContinuous RiskManagementContractContractorControl AccountManagerControl Gate (ormilestone)Cost-BenefitAnalysisCost-EffectivenessAnalysisDefinition/ContextAn agreed-to set of requirements, designs, or documents that will have changes controlled through aformal approval and monitoring process.An association among two or more logical entities that is discernible in either direction (i.e., to andfrom an entity).A research configuration of a system, suitable for field testing, that replicates both the function andconfiguration of the operational systems with the exception of nonessential aspects such as packaging.A research configuration of a system, generally not suitable for field testing, that replicates both thefunction but not the actual configuration of the operational system and has major differences inactual physical layout.Complexes that are geographically separated from the <strong>NASA</strong> Center or institution to which they areassigned.The ConOps describes how the system will be operated during the life-cycle phases to meet stakeholderexpectations. It describes the system characteristics from an operational perspective and helpsfacilitate an understanding of the system goals. It stimulates the development of the requirementsand architecture related to the user elements of the system. It serves as the basis for subsequentdefinition documents and provides the foundation for the long-range operational planning activities.A documented agreement by a management official that a proposed course of action is acceptable.Design in parallel rather than serial engineering fashion.A Configuration Item is any hardware, software, or combination of both that satisfies an end usefunction and is designated for separate configuration management. Configuration items are typicallyreferred to by an alphanumeric identifier which also serves as the unchanging base for the assignmentof serial numbers to uniquely identify individual units of the CI.A process that is a management discipline that is applied over a product’s life cycle to provide visibilityinto and to control changes to performance and functional and physical characteristics. It ensuresthat the configuration of a product is known and reflected in product information, that any productchange is beneficial and is effected without adverse consequences, and that changes are managed.A diagram that shows external systems that impact the system being designed.An iterative process to refine risk management measures. Steps are to analyze risk, plan for trackingand control measures, track risk, carry out control measures, document and communicate all riskinformation, and deliberate throughout the process to refine it.A mutually binding legal relationship obligating the seller to furnish the supplies or services (includingconstruction) and the buyer to pay for them. 
It includes all types of commitments that obligate theGovernment to an expenditure of appropriated funds and that, except as otherwise authorized, are inwriting.An individual, partnership, company, corporation, association, or other service having a contract withthe Agency for the design, development, manufacture, maintenance, modification, operation, orsupply of items or services under the terms of a contract to a program or project.The person responsible for controlling variances at the control account level, which is typically at thesubsystem WBS level. The CAM develops work and product plans, schedules, and time-phased resourceplans. The technical subsystem manager/lead often takes on this role as part of their subsystemmanagement responsibilities.See “Key Decision Point.”A methodology to determine the advantage of one alternative over another in terms of equivalentcost or benefits. It relies on totaling positive factors and subtracting negative factors to determine anet result.A systematic quantitative method for comparing the costs of alternative means of achieving the sameequivalent benefit for a specific objective.<strong>NASA</strong> <strong>Systems</strong> <strong>Engineering</strong> <strong>Handbook</strong> • 267
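The bidirectional traceability defined above lends itself to a simple mechanical check. The Python sketch below is illustrative only; the requirement identifiers and parent links are hypothetical, not drawn from the handbook. Each child requirement records its parent, and walking the links in both directions exposes upward traces that point at a nonexistent parent.

```python
# Minimal bidirectional traceability check (illustrative only).
# Each requirement records its parent; the downward view is derived from
# those upward links, and any parent reference that cannot be resolved is
# reported as a broken trace.

requirements = {
    "SYS-001": {"parent": None,      "text": "The system shall ..."},
    "SEG-010": {"parent": "SYS-001", "text": "The segment shall ..."},
    "ELE-100": {"parent": "SEG-010", "text": "The element shall ..."},
    "ELE-101": {"parent": "SEG-999", "text": "The element shall ..."},  # broken link
}

def trace_report(reqs):
    children = {rid: [] for rid in reqs}   # downward view, built from upward links
    orphans = []                            # upward links that point nowhere
    for rid, req in reqs.items():
        parent = req["parent"]
        if parent is None:
            continue                        # top-level requirement
        if parent in children:
            children[parent].append(rid)
        else:
            orphans.append(rid)
    return children, orphans

children, orphans = trace_report(requirements)
print(orphans)              # ['ELE-101']  -> broken upward trace
print(children["SYS-001"])  # ['SEG-010']  -> downward trace recovered from the same data
```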


Critical Design Review: A review that demonstrates that the maturity of the design is appropriate to support proceeding with full-scale fabrication, assembly, integration, and test, and that the technical effort is on track to complete the flight and ground system development and mission operations in order to meet mission performance requirements within the identified cost and schedule constraints.
Critical Event (or key event): An event that requires monitoring throughout the projected life cycle of a product that will generate critical requirements that would affect system design, development, manufacture, test, and operations (such as with an MOE, MOP, or TPM).
Critical Event Readiness Review: A review that confirms the project's readiness to execute the mission's critical activities during flight operation.
Customer: The organization or individual that has requested a product and will receive the product to be delivered. The customer may be an end user of the product, the acquiring agent for the end user, or the requestor of the work products from a technical effort. Each product within the system hierarchy has a customer.
Data Management: DM is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system.
Decision Analysis Process: A process that is a methodology for making decisions. It also offers techniques for modeling decision problems mathematically and finding optimal decisions numerically. The methodology entails identifying alternatives, one of which must be decided upon; possible events, one of which occurs thereafter; and outcomes, each of which results from a combination of decision and event.
Decision Authority: The Agency's responsible individual who authorizes the transition at a KDP to the next life-cycle phase for a program/project.
Decision Matrix: A methodology for evaluating alternatives in which valuation criteria typically are displayed in rows on the left side of the matrix, and alternatives are the column headings of the matrix. Criteria "weights" are typically assigned to each criterion.
Decision Support Package: Documentation submitted in conjunction with formal reviews and change requests.
Decision Trees: A portrayal of a decision model that displays the expected consequences of all decision alternatives by making discrete all "chance" nodes, and, based on this, calculating and appropriately weighting the possible consequences of all alternatives.
Decommissioning Review: A review that confirms the decision to terminate or decommission the system and assesses the readiness for the safe decommissioning and disposal of system assets. The DR is normally held near the end of routine mission operations upon accomplishment of planned mission objectives. It may be advanced if some unplanned event gives rise to a need to prematurely terminate the mission, or delayed if operational life is extended to permit additional investigations.
Deliverable Data Item: Consists of technical data (requirements specifications, design documents), management data (plans), and metrics reports.
Demonstration: Use of a realized end product to show that a set of stakeholder expectations can be achieved.
Derived Requirements: For a program, requirements that are required to satisfy the directorate requirements on the program. For a project, requirements that are required to satisfy the program requirements on the project.
Descope: Taken out of the scope of a project.
Design Solution Definition Process: The process by which high-level requirements derived from stakeholder expectations and outputs of the Logical Decomposition Process are translated into a design solution.
Designated Governing Authority: The management entity above the program, project, or activity level with technical oversight responsibility.
Doctrine of Successive Refinement: A recursive and iterative design loop driven by the set of stakeholder expectations where a strawman architecture/design, the associated ConOps, and the derived requirements are developed.
Earned Value: The sum of the budgeted cost for tasks and products that have actually been produced (completed or in progress) at a given time in the schedule.
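As a concrete illustration of the decision matrix defined above, the short Python sketch below scores two hypothetical alternatives against weighted criteria. The criteria, weights, and scores are invented for the example and are not from the handbook.

```python
# Illustrative weighted decision matrix (hypothetical criteria, weights, scores).
# Criteria are rows, alternatives are columns, each criterion carries a weight,
# and the weighted sum gives a single score per alternative.

weights = {"cost": 0.4, "mass": 0.3, "risk": 0.3}

# Scores on a common 1-5 scale (higher is better) for each alternative.
scores = {
    "Option A": {"cost": 4, "mass": 3, "risk": 2},
    "Option B": {"cost": 2, "mass": 5, "risk": 4},
}

def weighted_score(alt_scores, weights):
    return sum(weights[c] * alt_scores[c] for c in weights)

for name, alt in scores.items():
    print(name, round(weighted_score(alt, weights), 2))
# Option A 3.1
# Option B 3.5
```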


Earned Value Management: A tool for measuring and assessing project performance through the integration of technical scope with schedule and cost objectives during the execution of the project. EVM provides quantification of technical progress, enabling management to gain insight into project status and project completion costs and schedules. Two essential characteristics of successful EVM are EVM system data integrity and carefully targeted monthly EVM data analyses (i.e., risky WBS elements).
Enabling Products: The life-cycle support products and services (e.g., production, test, deployment, training, maintenance, and disposal) that facilitate the progression and use of the operational end product through its life cycle. Since the end product and its enabling products are interdependent, they are viewed as a system. Project responsibility thus extends to responsibility for acquiring services from the relevant enabling products in each life-cycle phase. When a suitable enabling product does not already exist, the project that is responsible for the end product may also be responsible for creating and using the enabling product.
Technical Cost Estimate: The cost estimate of the technical work on a project created by the technical team based on its understanding of the system requirements and operational concepts and its vision of the system architecture.
Enhanced Functional Flow Block Diagram: A block diagram that represents control flows and data flows as well as system functions and flow.
Entry Criteria: Minimum accomplishments each project needs to fulfill to enter into the next life-cycle phase or level of technical maturity.
Environmental Impact: The direct, indirect, or cumulative beneficial or adverse effect of an action on the environment.
Environmental Management: The activity of ensuring that program and project actions and decisions that potentially impact or damage the environment are assessed and evaluated during the formulation and planning phase and reevaluated throughout implementation. This activity must be performed according to all NASA policy and Federal, state, and local environmental laws and regulations.
Establish (with respect to processes): The act of developing policy, work instructions, or procedures to implement process activities.
Evaluation: The continual, independent (i.e., outside the advocacy chain of the program/project) evaluation of the performance of a program or project and incorporation of the evaluation findings to ensure adequacy of planning and execution according to plan.
Extensibility: The ability of a decision to be extended to other applications.
Flexibility: The ability of a decision to support more than one current application.
Flight Readiness Review: A review that examines tests, demonstrations, analyses, and audits that determine the system's readiness for a safe and successful flight/launch and for subsequent flight operations. It also ensures that all flight and ground hardware, software, personnel, and procedures are operationally ready.
Flight Systems and Ground Support: FS&GS is one of four interrelated NASA product lines. FS&GS projects result in the most complex and visible of NASA investments. To manage these systems, the Formulation and Implementation phases for FS&GS projects follow the NASA project life-cycle model consisting of Phases A (concept development) through F (closeout). Primary drivers for FS&GS projects are safety and mission success.
Float: Extra time built into a schedule.
Formulation Phase: The first part of the NASA management life cycle defined in NPR 7120.5 where system requirements are baselined, feasible concepts are determined, a system definition is baselined for the selected concept(s), and preparation is made for progressing to the Implementation phase.
Functional Analysis: The process of identifying, describing, and relating the functions a system must perform to fulfill its goals and objectives.
Functional Baseline (Phase B): The functional baseline is the approved configuration documentation that describes a system's or top-level CIs' performance requirements (functional, interoperability, and interface characteristics) and the verification required to demonstrate the achievement of those specified characteristics.
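The earned value quantities above (BCWP, BCWS, ACWP, and BAC; see Appendix A) combine into the usual cost and schedule indices. The sketch below uses hypothetical numbers, and computing EAC as BAC divided by CPI is one common formula, not the only one.

```python
# Illustrative earned value calculations with hypothetical numbers.
# BCWP = earned value, BCWS = planned value, ACWP = actual cost,
# BAC = budget at completion (acronyms per Appendix A).

BAC  = 1000.0   # budget at completion
BCWS = 400.0    # budgeted cost for work scheduled (planned to date)
BCWP = 350.0    # budgeted cost for work performed (earned to date)
ACWP = 420.0    # actual cost of work performed (spent to date)

CPI = BCWP / ACWP   # cost performance index (< 1 means over cost)
SPI = BCWP / BCWS   # schedule performance index (< 1 means behind schedule)
EAC = BAC / CPI     # a common estimate at completion
VAC = BAC - EAC     # variance at completion

print(f"CPI={CPI:.2f} SPI={SPI:.2f} EAC={EAC:.0f} VAC={VAC:.0f}")
# CPI=0.83 SPI=0.88 EAC=1200 VAC=-200
```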


Functional Configuration Audit (FCA): Examines the functional characteristics of the configured product and verifies that the product has met, via test results, the requirements specified in its functional baseline documentation approved at the PDR and CDR. FCAs will be conducted on both hardware- and software-configured products and will precede the PCA of the configured product.
Functional Decomposition: A subfunction under logical decomposition and design solution definition, it is the examination of a function to identify subfunctions necessary for the accomplishment of that function and functional relationships and interfaces.
Functional Flow Block Diagram: A block diagram that defines system functions and the time sequence of functional events.
Gantt Chart: Bar chart depicting start and finish dates of activities and products in the WBS.
Goal: Quantitative and qualitative guidance on such things as performance criteria, technology gaps, system context, effectiveness, cost, schedule, and risk.
Government Mandatory Inspection Points: Inspection points required by Federal regulations to ensure 100 percent compliance with safety/mission-critical attributes when noncompliance can result in loss of life or loss of mission.
Heritage (or legacy): Refers to the original manufacturer's level of quality and reliability that is built into the parts, which have been proven by (1) time in service, (2) number of units in service, (3) mean time between failure performance, and (4) number of use cycles.
Human Factors Engineering: The discipline that studies human-system interfaces and provides requirements, standards, and guidelines to ensure the human component of an integrated system is able to function as intended.
Implementation Phase: The part of the NASA management life cycle defined in NPR 7120.5 where the detailed design of system products is completed and the products to be deployed are fabricated, assembled, integrated, and tested and the products are deployed to their customers or users for their assigned use or mission.
Incommensurable Costs: Costs that cannot be easily measured, such as controlling pollution on launch or mitigating debris.
Influence Diagram: A compact graphical and mathematical representation of a decision state.
Inspection: Visual examination of a realized end product to validate physical design features or specific manufacturer identification.
Integrated Logistics Support: Activities within the SE process that ensure the product system is supported during development (Phase D) and operations (Phase E) in a cost-effective manner. This is primarily accomplished by early, concurrent consideration of supportability characteristics, performing trade studies on alternative system and ILS concepts, quantifying resource requirements for each ILS element using best-practice techniques, and acquiring the support items associated with each ILS element.
Interface Management Process: The process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams) and/or to define and maintain compliance among the products that must interoperate.
Iterative: Application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements. (See "Recursive" and "Repeatable.")
Key Decision Point (or milestone): The event at which the decision authority determines the readiness of a program/project to progress to the next phase of the life cycle (or to the next KDP).
Key Event: See "Critical Event."
Knowledge Management: Getting the right information to the right people at the right time without delay while helping people create knowledge and share and act upon information in ways that will measurably improve the performance of NASA and its partners.
Least-Cost Analysis: A methodology that identifies the least-cost project option for meeting the technical requirements.


Liens: Requirements or tasks not satisfied that have to be resolved within a certain assigned time to allow passage through a control gate to proceed.
Life-Cycle Cost: The total cost of ownership over the project's or system's life cycle from Formulation through Implementation. The total of the direct, indirect, recurring, nonrecurring, and other related expenses incurred, or estimated to be incurred, in the design, development, verification, production, deployment, operation, maintenance, support, and disposal of a project.
Logical Decomposition Models: Requirements decomposed by one or more different methods (e.g., function, time, behavior, data flow, states, modes, system architecture).
Logical Decomposition Process: The process for creating the detailed functional requirements that enable NASA programs and projects to meet the ends desired by Agency stakeholders. This process identifies the "what" that must be achieved by the system at each level to enable a successful project. It utilizes functional analysis to create a system architecture and to decompose top-level (or parent) requirements and allocate them down to the lowest desired levels of the project.
Logistics: The management, engineering activities, and analysis associated with design requirements definition, material procurement and distribution, maintenance, supply replacement, transportation, and disposal that are identified by space flight and ground systems supportability objectives.
Maintain (with respect to establishment of processes): The act of planning the process, providing resources, assigning responsibilities, training people, managing configurations, identifying and involving stakeholders, and monitoring process effectiveness.
Maintainability: The measure of the ability of an item to be retained in or restored to specified conditions when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance.
Margin: The allowances carried in budget, projected schedules, and technical performance parameters (e.g., weight, power, or memory) to account for uncertainties and risks. Margin allocations are baselined in the Formulation process, based on assessments of risks, and are typically consumed as the program/project proceeds through the life cycle.
Measure of Effectiveness: A measure by which a stakeholder's expectations will be judged in assessing satisfaction with products or systems produced and delivered in accordance with the associated technical effort. The MOE is deemed to be critical not only to the acceptability of the product by the stakeholder but also to operational/mission usage. An MOE is typically qualitative in nature or not able to be used directly as a design-to requirement.
Measure of Performance: A quantitative measure that, when met by the design solution, will help ensure that an MOE for a product or system will be satisfied. These MOPs are given special attention during design to ensure that the MOEs to which they are associated are met. There are generally two or more measures of performance for each MOE.
Metric: The result of a measurement taken over a period of time that communicates vital information about the status or performance of a system, process, or activity. A metric should drive appropriate action.
Mission: A major activity required to accomplish an Agency goal or to effectively pursue a scientific, technological, or engineering opportunity directly related to an Agency goal. Mission needs are independent of any particular system or technological solution.
Mission Concept Review: A review that affirms the mission need and examines the proposed mission's objectives and the concept for meeting those objectives. It is an internal review that usually occurs at the cognizant organization for system development.
Mission Definition Review: A review that examines the functional and performance requirements defined for the system and the preliminary program or project plan and ensures that the requirements and the selected concept will satisfy the mission.
NASA Life-Cycle Phases (or program life-cycle phases): Consists of Formulation and Implementation phases as defined in NPR 7120.5.


Objective Function (sometimes Cost Function): A mathematical expression that expresses the values of combinations of possible outcomes as a single measure of cost-effectiveness.
Operational Readiness Review: A review that examines the actual system characteristics and the procedures used in the system or product's operation and ensures that all system and support (flight and ground) hardware, software, personnel, procedures, and user documentation accurately reflect the deployed state of the system.
Optimal Solution: A feasible solution that minimizes (or maximizes, if that is the goal) an objective function.
Other Interested Parties (Stakeholders): A subset of "stakeholders," other interested parties are groups or individuals who are not customers of a planned technical effort but may be affected by the resulting product, the manner in which the product is realized or used, or have a responsibility for providing life-cycle support services.
Peer Review: Independent evaluation by internal or external subject matter experts who do not have a vested interest in the work product under review. Peer reviews can be planned, focused reviews conducted on selected work products by the producer's peers to identify defects and issues prior to that work product moving into a milestone review or approval cycle.
Performance Index: An overall measure of effectiveness for each alternative.
Performance Standards: Common metrics for use in performance standards include cost and schedule.
Physical Configuration Audits (or configuration inspection): The PCA examines the physical configuration of the configured product and verifies that the product corresponds to the build-to (or code-to) product baseline documentation previously approved at the CDR. PCAs will be conducted on both hardware- and software-configured products.
Post-Flight Assessment Review: A review that evaluates the activities from the flight after recovery. The review identifies all anomalies that occurred during the flight and mission and determines the actions necessary to mitigate or resolve the anomalies for future flights.
Post-Launch Assessment Review: A review that evaluates the status, performance, and capabilities of the project evident from the flight operations experience since launch. This can also mean assessing readiness to transfer responsibility from the development organization to the operations organization. The review also evaluates the status of the project plans and the capability to conduct the mission with emphasis on near-term operations and mission-critical events. This review is typically held after the early flight operations and initial checkout.
Precedence Diagram: Workflow diagram that places activities in boxes, connected by dependency arrows; typical of a Gantt chart.
Preliminary Design Review: A review that demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints and establishes the basis for proceeding with detailed design. It will show that the correct design option has been selected, interfaces have been identified, and verification methods have been described.
Process: A set of activities used to convert inputs into desired outputs to generate expected outcomes and satisfy a purpose.
Producibility: A system characteristic associated with the ease and economy with which a completed design can be transformed (i.e., fabricated, manufactured, or coded) into a hardware and/or software realization.
Product: A part of a system consisting of end products that perform operational functions and enabling products that perform life-cycle services related to the end product, or a result of the technical efforts in the form of a work product (e.g., plan, baseline, or test result).
Product Baseline (Phase D/E): The product baseline is the approved technical documentation that describes the configuration of a CI during the production, fielding/deployment, and operational support phases of its life cycle. The product baseline describes the detailed physical or form, fit, and function characteristics of a CI; the selected functional characteristics designated for production acceptance testing; and the production acceptance test requirements.


Product Breakdown Structure: A hierarchical breakdown of the hardware and software products of the program/project.
Product Implementation Process: The first process encountered in the SE engine, which begins the movement from the bottom of the product hierarchy up toward the Product Transition Process. This is where the plans, designs, analysis, requirement development, and drawings are realized into actual products.
Product Integration Process: One of the SE engine product realization processes that make up the system structure. In this process, lower level products are assembled into higher level products and checked to make sure that the integrated product functions properly. It is the first element of the processes that lead from realized products from a level below to realized end products at a level above, between the Product Implementation, Verification, and Validation Processes.
Product Realization: The act of making, buying, or reusing a product, or the assembly and integration of lower level realized products into a new product, as well as the verification and validation that the product satisfies its appropriate set of requirements and the transition of the product to its customer.
Product Transition Process: A process used to transition a verified and validated end product that has been generated by product implementation or product integration to the customer at the next level in the system structure for integration into an end product or, for the top-level end product, transitioned to the intended end user.
Product Validation Process: The second of the verification and validation processes conducted on a realized end product. While verification proves whether "the system was done right," validation proves whether "the right system was done." In other words, verification provides objective evidence that every "shall" was met, whereas validation is performed for the benefit of the customers and users to ensure that the system functions in the expected manner when placed in the intended environment. This is achieved by examining the products of the system at every level of the structure.
Product Verification Process: The first of the verification and validation processes conducted on a realized end product. As used in the context of systems engineering common technical processes, a realized product is one provided by either the Product Implementation Process or the Product Integration Process in a form suitable for meeting applicable life-cycle phase success criteria.
Production Readiness Review: A review that is held for FS&GS projects developing or acquiring multiple or similar systems greater than three or as determined by the project. The PRR determines the readiness of the system developers to efficiently produce the required number of systems. It ensures that the production plans; fabrication, assembly, and integration-enabling products; and personnel are in place and ready to begin production.
Program: A strategic investment by a mission directorate (or mission support office) that has defined goals, objectives, architecture, funding level, and a management structure that supports one or more projects.
Program/System Definition Review: A review that examines the proposed program architecture and the flowdown to the functional elements of the system. The proposed program's objectives and the concept for meeting those objectives are evaluated. Key technologies and other risks are identified and assessed. The baseline program plan, budgets, and schedules are presented.
Program/System Requirements Review: A review that is used to ensure that the program requirements are properly formulated and correlated with the Agency and mission directorate strategic objectives.
Programmatic Requirements: Requirements set by the mission directorate, program, project, and PI, if applicable. These include strategic scientific and exploration requirements, system performance requirements, and schedule, cost, and similar nontechnical constraints.
Project: (1) A specific investment having defined goals, objectives, requirements, life-cycle cost, a beginning, and an end. A project yields new or revised products or services that directly address NASA's strategic needs. They may be performed wholly in-house; by Government, industry, academia partnerships; or through contracts with private industry. (2) A unit of work performed in programs, projects, and activities.
Project Plan: The document that establishes the project's baseline for implementation, signed by the cognizant program manager, Center Director, project manager, and the MDAA, if required.


Project Technical Team: The whole technical team for the project.
Solicitation: The vehicle by which information is solicited from contractors to let a contract for products or services.
Prototype: Items (mockups, models) built early in the life cycle that are made as close to the flight item in form, fit, and function as is feasible at that stage of the development. The prototype is used to "wring out" the design solution so that experience gained from the prototype can be fed back into design changes that will improve the manufacture, integration, and maintainability of a single flight item or the production run of several flight items.
Quality Assurance: An independent assessment needed to have confidence that the system actually produced and delivered is in accordance with its functional, performance, and design requirements.
Realized Product: The desired output from the application of the four product realization processes. The form of this product is dependent on the phase of the product-line life cycle and the phase success criteria.
Recursive: Value is added to the system by the repeated application of processes to design next lower layer system products or to realize next upper layer end products within the system structure. This also applies to repeating application of the same processes to the system structure in the next life-cycle phase to mature the system definition and satisfy phase exit criteria.
Relevant Stakeholder: See "Stakeholder."
Reliability: The measure of the degree to which a system ensures mission success by functioning properly over its intended life. It has a low and acceptable probability of failure, achieved through simplicity, proper design, and proper application of reliable parts and materials. In addition to long life, a reliable system is robust and fault tolerant.
Repeatable: A characteristic of a process that can be applied to products at any level of the system structure or within any life-cycle phase.
Requirement: The agreed-upon need, desire, want, capability, capacity, or demand for personnel, equipment, facilities, or other resources or services by specified quantities for specific periods of time or at a specified time expressed as a "shall" statement. Acceptable form for a requirement statement is individually clear, correct, feasible to obtain, unambiguous in meaning, and can be validated at the level of the system structure at which stated. In pairs of requirement statements or as a set, collectively, they are not redundant, are adequately related with respect to terms used, and are not in conflict with one another.
Requirements Allocation Sheet: Documents the connection between allocated functions, allocated performance, and the physical system.
Requirements Management Process: A process that applies to the management of all stakeholder expectations, customer requirements, and technical product requirements down to the lowest level product component requirements.
Risk: The combination of the probability that a program or project will experience an undesired event (some examples include a cost overrun, schedule slippage, safety mishap, health problem, malicious activities, environmental impact, or failure to achieve a needed scientific or technological breakthrough or mission success criteria) and the consequences, impact, or severity of the undesired event, were it to occur. Both the probability and consequences may have associated uncertainties.
Risk Assessment: An evaluation of a risk item that determines (1) what can go wrong, (2) how likely it is to occur, (3) what the consequences are, and (4) what the uncertainties associated with the likelihood and consequences are.
Risk Management: An organized, systematic decisionmaking process that efficiently identifies, analyzes, plans, tracks, controls, communicates, and documents risk and establishes mitigation approaches and plans to increase the likelihood of achieving program/project goals.
Risk-Informed Decision Analysis Process: A five-step process focusing first on objectives and next on developing decision alternatives with those objectives clearly in mind and/or using decision alternatives that have been developed under other systems engineering processes. The later steps of the process interrelate heavily with the Technical Risk Management Process.
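Because risk combines the likelihood of an undesired event with its consequences, programs commonly score and rank risks on a likelihood/consequence grid. The sketch below is a minimal illustration only; the 5x5 scale, the multiplicative score, and the red/yellow/green thresholds are assumptions made for the example rather than handbook guidance.

```python
# Minimal 5x5 risk scoring sketch (scale and thresholds are illustrative
# assumptions, not taken from the handbook).

risks = [
    {"id": "R-1", "likelihood": 4, "consequence": 5},
    {"id": "R-2", "likelihood": 2, "consequence": 3},
    {"id": "R-3", "likelihood": 1, "consequence": 5},
]

def classify(likelihood, consequence):
    score = likelihood * consequence   # simple combination of the two factors
    if score >= 15:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

for r in risks:
    print(r["id"], classify(r["likelihood"], r["consequence"]))
# prints: R-1 red, R-2 yellow, R-3 green
```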


Safety: Freedom from those conditions that can cause death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.
Search Space (or Alternative Space): The envelope of concept possibilities defined by design constraints and parameters within which alternative concepts can be developed and traded off.
Software: As defined in NPD 2820.1, NASA Software Policy.
Specification: A document that prescribes completely, precisely, and verifiably the requirements, design, behavior, or characteristics of a system or system component.
Stakeholder: A group or individual who is affected by or is in some way accountable for the outcome of an undertaking. The term "relevant stakeholder" is a subset of the term "stakeholder" and describes the people identified to contribute to a specific task. There are two main classes of stakeholders. See "Customer" and "Other Interested Parties."
Stakeholder Expectations: A statement of needs, desires, capabilities, and wants that are not expressed as a requirement (not expressed as a "shall" statement) is referred to as an "expectation." Once the set of expectations from applicable stakeholders is collected, analyzed, and converted into a "shall" statement, the expectation becomes a requirement. Expectations can be stated in either qualitative (nonmeasurable) or quantitative (measurable) terms. Requirements are always stated in quantitative terms. Expectations can be stated in terms of functions, behaviors, or constraints with respect to the product being engineered or the process used to engineer the product.
Stakeholder Expectations Definition Process: The initial process within the SE engine that establishes the foundation from which the system is designed and the product realized. The main purpose of this process is to identify who the stakeholders are and how they intend to use the product. This is usually accomplished through use-case scenarios, design reference missions, and operational concepts.
Standing Review Board: The entity responsible for conducting independent reviews of the program/project per the life-cycle requirements. The SRB is advisory and is chartered to objectively assess the material presented by the program/project at a specific review.
State Diagram: A diagram that shows the flow in the system in response to varying inputs.
Success Criteria: Specific accomplishments that must be satisfactorily demonstrated to meet the objectives of a technical review so that a technical effort can progress further in the life cycle. Success criteria are documented in the corresponding technical review plan.
Surveillance (or Insight or Oversight): The monitoring of a contractor's activities (e.g., status meetings, reviews, audits, site visits) for progress and production and to demonstrate fiscal responsibility, ensure crew safety and mission success, and determine award fees for extraordinary (or penalty fees for substandard) contract execution.
System: (1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system.
System Acceptance Review: A review that verifies the completeness of the specific end item with respect to the expected maturity level and assesses compliance to stakeholder expectations. The SAR examines the system, its end items and documentation, and test data and analyses that support verification and validation. It also ensures that the system has sufficient technical maturity to authorize its shipment to the designated operational facility or launch site.
System Definition Review: A review that examines the proposed system architecture/design and the flowdown to all functional elements of the system.
System Integration Review: A review that ensures that the system is ready to be integrated; segments, components, and subsystems are available and ready to be integrated; and integration facilities, support personnel, and integration plans and procedures are ready for integration. SIR is conducted at the end of the final design phase (Phase C) and before the systems assembly, integration, and test phase (Phase D) begins.
System Requirements Review: A review that examines the functional and performance requirements defined for the system and the preliminary program or project plan and ensures that the requirements and the selected concept will satisfy the mission.


System Safety Engineering: The application of engineering and management principles, criteria, and techniques to achieve acceptable mishap risk within the constraints of operational effectiveness and suitability, time, and cost, throughout all phases of the system life cycle.
System Structure: A system structure is made up of a layered structure of product-based WBS models. (See "Work Breakdown Structure.")
Systems Analysis: The analytical process by which a need is transformed into a realized, definitive product, able to support compatibility with all physical and functional requirements and support the operational scenarios in terms of reliability, maintainability, supportability, serviceability, and disposability, while maintaining performance and affordability. Systems analysis is responsive to the needs of the customer at every phase of the life cycle, from pre-Phase A to realizing the final product and beyond.
Systems Approach: The application of a systematic, disciplined engineering approach that is quantifiable, recursive, iterative, and repeatable for the development, operation, and maintenance of systems integrated into a whole throughout the life cycle of a project or program.
Systems Engineering Engine: The technical processes framework for planning and implementing the technical effort within any phase of a product-line life cycle. The SE engine model in Figure 2.1-1 shows the 17 technical processes that are applied to products being engineered to drive the technical effort.
Systems Engineering Management Plan: The SEMP identifies the roles and responsibility interfaces of the technical effort and how those interfaces will be managed. The SEMP is the vehicle that documents and communicates the technical approach, including the application of the common technical processes; resources to be used; and key technical tasks, activities, and events along with their metrics and success criteria.
Tailoring: The documentation and approval of the adaptation of the process and approach to complying with requirements underlying the specific program or project.
Technical Assessment Process: The crosscutting process used to help monitor technical progress of a program/project through periodic technical reviews. It also provides status information in support of assessing system design, product realization, and technical management decisions.
Technical Data Management Process: The process used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. This includes its development, deployment, operations and support, eventual retirement, and retention of appropriate technical data beyond system retirement as required by current NASA policies.
Technical Data Package: An output of the Design Solution Definition Process, it evolves from phase to phase, starting with conceptual sketches or models and ending with complete drawings, parts list, and other details needed for product implementation or product integration.
Technical Measures: An established set of measures based on the expectations and requirements that will be tracked and assessed to determine overall system or product effectiveness and customer satisfaction. Common terms for these measures are MOEs, MOPs, and TPMs.
Technical Performance Measures: The set of critical or key performance parameters that are monitored by comparing the current actual achievement of the parameters with that anticipated at the current time and on future dates. Used to confirm progress and identify deficiencies that might jeopardize meeting a system requirement. Assessed parameter values that fall outside an expected range around the anticipated values indicate a need for evaluation and corrective action. Technical performance measures are typically selected from the defined set of MOPs.
Technical Planning Process: The first of the eight technical management processes contained in the SE engine, the Technical Planning Process establishes a plan for applying and managing each of the common technical processes that will be used to drive the development of system products and associated work products. This process also establishes a plan for identifying and defining the technical effort required to satisfy the project objectives and life-cycle-phase success criteria within the cost, schedule, and risk constraints of the project.
Technical Requirements Definition Process: The process used to transform the stakeholder expectations into a complete set of validated technical requirements expressed as "shall" statements that can be used for defining a design solution for the PBS model and related enabling products.
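A technical performance measure is tracked by comparing the current estimate of a parameter against its allocation and the margin expected at that point in the life cycle. A minimal sketch, assuming hypothetical mass numbers and a 10 percent expected margin:

```python
# Illustrative TPM tracking for a mass parameter (all values hypothetical).
# The current best estimate is compared with the allocation; margin below the
# expected level would trigger evaluation and corrective action.

allocation_kg   = 150.0   # allocated mass for the element
current_best    = 139.0   # current best estimate from the latest design cycle
required_margin = 0.10    # margin expected at this point in the life cycle

margin = (allocation_kg - current_best) / allocation_kg
status = "ok" if margin >= required_margin else "corrective action needed"
print(f"mass margin = {margin:.1%} -> {status}")
# mass margin = 7.3% -> corrective action needed
```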


Technical Risk: Risk associated with the achievement of a technical goal, criterion, or objective. It applies to undesired consequences related to technical performance, human safety, mission assets, or environment.
Technical Risk Management Process: The process for measuring or assessing risk and developing strategies to manage it, an important component of managing NASA programs under its charter to explore and expand knowledge. Critical to this process is the proactive identification and control of departures from the baseline program, project, or activity.
Technical Team: A group of multidisciplinary individuals with appropriate domain knowledge, experience, competencies, and skills assigned to a specific technical task.
Technology Readiness Assessment Report: A document required for transition from Phase B to Phase C/D demonstrating that all systems, subsystems, and components have achieved a level of technological maturity with demonstrated evidence of qualification in a relevant environment.
Technology Assessment: A systematic process that ascertains the need to develop or infuse technological advances into a system. The technology assessment process makes use of basic systems engineering principles and processes within the framework of the PBS. It is a two-step process comprised of (1) the determination of the current technological maturity in terms of technology readiness levels and (2) the determination of the difficulty associated with moving a technology from one TRL to the next through the use of the AD2.
Technology Development Plan: A document required for transition from Phase A to Phase B identifying technologies to be developed, heritage systems to be modified, alternative paths to be pursued, fallback positions and corresponding performance descopes, milestones, metrics, and key decision points. It is incorporated in the preliminary project plan.
Technology Maturity Assessment: The process to determine a system's technological maturity via TRLs.
Technology Readiness Level: Provides a scale against which to measure the maturity of a technology. TRLs range from 1, Basic Technology Research, to 9, Systems Test, Launch, and Operations. Typically, a TRL of 6 (i.e., technology demonstrated in a relevant environment) is required for a technology to be integrated into an SE process.
Test: The use of a realized end product to obtain detailed data to validate performance or to provide sufficient information to validate performance through further analysis.
Test Readiness Review: A review that ensures that the test article (hardware/software), test facility, support personnel, and test procedures are ready for testing and data acquisition, reduction, and control.
Traceability: A discernible association among two or more logical entities such as requirements, system elements, verifications, or tasks.
Trade Study: A means of evaluating system designs by devising alternative means to meet functional requirements, evaluating these alternatives in terms of the measures of effectiveness and system cost, ranking the alternatives according to appropriate selection criteria, dropping less promising alternatives, and proceeding to the next level of resolution, if needed.
Trade Study Report: A report written to document a trade study. It should include: the system under analysis; system goals, objectives (or requirements, as appropriate to the level of resolution), and constraints; measures and measurement methods (models) used; all data sources used; the alternatives chosen for analysis; computational results, including uncertainty ranges and sensitivity analyses performed; the selection rule used; and the recommended alternative.
Trade Tree: A representation of trade study alternatives in which each layer represents some system aspect that needs to be treated in a trade study to determine the best alternative.
Transition: The act of delivering or moving a product from the location where the product has been implemented or integrated, as well as verified and validated, to a customer. This act can include packaging, handling, storing, moving, transporting, installing, and sustainment activities.
Utility: A measure of the relative value gained from an alternative. The theoretical unit of measurement for utility is the util.


Validated Requirements: A set of requirements that are well formed (clear and unambiguous), complete (agree with customer and stakeholder needs and expectations), consistent (conflict free), and individually verifiable and traceable to a higher level requirement or goal.
Validation: Testing, possibly under simulated conditions, to ensure that a finished product works as required.
Validation (of a product): Proof that the product accomplishes the intended purpose. Validation may be determined by a combination of test, analysis, and demonstration.
Variance: In program control terminology, a difference between actual performance and planned costs or schedule status.
Verification: The process of proving or demonstrating that a finished product meets design specifications and requirements.
Verification (of a product): Proof of compliance with specifications. Verification may be determined by test, analysis, demonstration, or inspection.
Waiver: A documented agreement intentionally releasing a program or project from meeting a requirement. (Some Centers use deviations prior to Implementation and waivers during Implementation.)
WBS Model: A model that describes a system that consists of end products and their subsystems (which perform the operational functions of the system), the supporting or enabling products, and any other work products (plans, baselines) required for the development of the system.
Work Breakdown Structure (WBS): A product-oriented hierarchical division of the hardware, software, services, and data required to produce the program/project's end product(s), structured according to the way the work will be performed and reflecting the way in which program/project costs, schedule, technical, and risk data are to be accumulated, summarized, and reported.
Workflow Diagram: A scheduling chart that shows activities, dependencies among activities, and milestones.


Appendix C: How to Write a Good Requirement

Use of Correct Terms
• Shall = requirement
• Will = facts or declaration of purpose
• Should = goal

Editorial Checklist

Personnel Requirement
1. The requirement is in the form "responsible party shall perform such and such." In other words, use the active, rather than the passive, voice. A requirement must state who shall (do, perform, provide, weigh, or other verb) followed by a description of what must be performed.

Product Requirement
1. The requirement is in the form "product ABC shall XYZ." A requirement must state "The product shall" (do, perform, provide, weigh, or other verb) followed by a description of what must be done.
2. The requirement uses consistent terminology to refer to the product and its lower level entities.
3. Complete with tolerances for qualitative/performance values (e.g., less than, greater than or equal to, plus or minus, 3 sigma root sum squares).
4. Is the requirement free of implementation? (Requirements should state WHAT is needed, NOT HOW to provide it; i.e., state the problem, not the solution. Ask, "Why do you need the requirement?" The answer may point to the real requirement.)
5. Free of descriptions of operations? (Is this a need the product must satisfy or an activity involving the product? Sentences like "The operator shall…" are almost always operational statements, not requirements.)

Example Product Requirements
• The system shall operate at a power level of…
• The software shall acquire data from the…
• The structure shall withstand loads of…
• The hardware shall have a mass of…

General Goodness Checklist
1. The requirement is grammatically correct.
2. The requirement is free of typos, misspellings, and punctuation errors.
3. The requirement complies with the project's template and style rules.
4. The requirement is stated positively (as opposed to negatively, i.e., "shall not").
5. The use of "To Be Determined" (TBD) values should be minimized. It is better to use a best estimate for a value and mark it "To Be Resolved" (TBR), with the rationale along with what must be done to eliminate the TBR, who is responsible for its elimination, and by when it must be eliminated.
6. The requirement is accompanied by an intelligible rationale, including any assumptions. Can you validate (concur with) the assumptions? Assumptions must be confirmed before baselining.
7. The requirement is located in the proper section of the document (e.g., not in an appendix).
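Several of the checks above (use of "shall," avoidance of negative statements, and minimizing TBD values) can be screened mechanically before a human review. The Python sketch below is illustrative only; the sample statements and the flagged word list are assumptions for the example, not a standard NASA tool.

```python
# Illustrative screening of requirement statements against a few checklist
# items: uses "shall", is not stated as "shall not", avoids TBDs, and avoids
# some unverifiable terms. Word list and samples are assumptions.

WEAK_WORDS = ["as appropriate", "etc.", "and/or", "user-friendly", "quickly"]

def screen(statement):
    text = statement.lower()
    findings = []
    if "shall" not in text:
        findings.append("no 'shall' (is this a requirement?)")
    if "shall not" in text:
        findings.append("stated negatively")
    if "tbd" in text:
        findings.append("contains TBD (prefer a best estimate marked TBR)")
    findings += [f"unverifiable term: '{w}'" for w in WEAK_WORDS if w in text]
    return findings

print(screen("The hardware shall have a mass of TBD kg."))
print(screen("The system should respond quickly, as appropriate."))
```

A screen like this only flags candidates for rework; the judgment calls in the validation checklist that follows still require a human reviewer.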


Requirements Validation Checklist

Clarity
1. Are the requirements clear and unambiguous? (Are all aspects of the requirement understandable and not subject to misinterpretation? Is the requirement free from indefinite pronouns (this, these) and ambiguous terms (e.g., "as appropriate," "etc.," "and/or," "but not limited to")?)
2. Are the requirements concise and simple?
3. Do the requirements express only one thought per requirement statement, a standalone statement as opposed to multiple requirements in a single statement, or a paragraph that contains both requirements and rationale?
4. Does the requirement statement have one subject and one predicate?

Completeness
1. Are requirements stated as completely as possible? Have all incomplete requirements been captured as TBDs or TBRs and a complete listing of them maintained with the requirements?
2. Are any requirements missing? For example, have any of the following requirements areas been overlooked: functional, performance, interface, environment (development, manufacturing, test, transport, storage, operations), facility (manufacturing, test, storage, operations), transportation (among areas for manufacturing, assembling, delivery points, within storage facilities, loading), training, personnel, operability, safety, security, appearance and physical characteristics, and design?
3. Have all assumptions been explicitly stated?

Compliance
1. Are all requirements at the correct level (e.g., system, segment, element, subsystem)?
2. Are requirements free of implementation specifics? (Requirements should state what is needed, not how to provide it.)
3. Are requirements free of descriptions of operations? (Don't mix operations with requirements: update the ConOps instead.)

Consistency
1. Are the requirements stated consistently without contradicting themselves or the requirements of related systems?
2. Is the terminology consistent with the user's and sponsor's terminology? With the project glossary?
3. Is the terminology used consistently throughout the document?
4. Are the key terms included in the project's glossary?

Traceability
1. Are all requirements needed? Is each requirement necessary to meet the parent requirement? Is each requirement a needed function or characteristic? Distinguish between needs and wants. If it is not necessary, it is not a requirement. Ask, "What is the worst that could happen if the requirement was not included?"
2. Are all requirements (functions, structures, and constraints) bidirectionally traceable to higher level requirements or mission or system-of-interest scope (i.e., need(s), goals, objectives, constraints, or concept of operations)?
3. Is each requirement stated in such a manner that it can be uniquely referenced (e.g., each requirement is uniquely numbered) in subordinate documents?

Correctness
1. Is each requirement correct?
2. Is each stated assumption correct? Assumptions must be confirmed before the document can be baselined.
3. Are the requirements technically feasible?


Functionality
1. Are all described functions necessary and together sufficient to meet mission and system goals and objectives?

Performance
1. Are all required performance specifications and margins listed (e.g., consider timing, throughput, storage size, latency, accuracy, and precision)?
2. Is each performance requirement realistic?
3. Are the tolerances overly tight? Are the tolerances defendable and cost-effective? Ask, "What is the worst thing that could happen if the tolerance was doubled or tripled?"

Interfaces
1. Are all external interfaces clearly defined?
2. Are all internal interfaces clearly defined?
3. Are all interfaces necessary, sufficient, and consistent with each other?

Maintainability
1. Have the requirements for system maintainability been specified in a measurable, verifiable manner?
2. Are requirements written so that ripple effects from changes are minimized (i.e., requirements are as weakly coupled as possible)?

Reliability
1. Are clearly defined, measurable, and verifiable reliability requirements specified?
2. Are there error detection, reporting, handling, and recovery requirements?
3. Are undesired events (e.g., single event upset, data loss or scrambling, operator error) considered and their required responses specified?
4. Have assumptions about the intended sequence of functions been stated? Are these sequences required?
5. Do these requirements adequately address the survivability of the system after a software or hardware fault from the point of view of hardware, software, operations, personnel, and procedures?

Verifiability/Testability
1. Can the system be tested, demonstrated, inspected, or analyzed to show that it satisfies requirements? Can this be done at the level of the system at which the requirement is stated? Does a means exist to measure the accomplishment of the requirement and verify compliance? Can the criteria for verification be stated?
2. Are the requirements stated precisely to facilitate specification of system test success criteria and requirements?
3. Are the requirements free of unverifiable terms (e.g., flexible, easy, sufficient, safe, ad hoc, adequate, accommodate, user-friendly, usable, when required, if required, appropriate, fast, portable, light-weight, small, large, maximize, minimize, robust, quickly, easily, clearly, other "ly" words, other "ize" words)?

Data Usage
1. Where applicable, are "don't care" conditions truly "don't care"? ("Don't care" values identify cases when the value of a condition or flag is irrelevant, even though the value may be important for other cases.) Are "don't care" conditions explicitly stated? (Correct identification of "don't care" values may improve a design's portability.)
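As a rough illustration of the verifiability item above, requirement text can be screened against the kinds of unverifiable or ambiguous terms this checklist names. The sketch below is not part of the handbook; the word list is a partial, assumed sample drawn from the checklist, and a real project would maintain its own list.

```python
import re

# Partial, assumed sample of the unverifiable/ambiguous terms named in the checklist above.
UNVERIFIABLE_TERMS = [
    "flexible", "easy", "sufficient", "adequate", "user-friendly", "appropriate",
    "fast", "portable", "light-weight", "maximize", "minimize", "robust",
    "quickly", "easily", "clearly", "as appropriate", "etc.", "and/or", "but not limited to",
]

def verifiability_findings(requirement: str) -> list[str]:
    """Flag terms that make a requirement hard to verify, plus compound statements."""
    findings = [f'contains unverifiable/ambiguous term "{t}"'
                for t in UNVERIFIABLE_TERMS
                if re.search(re.escape(t), requirement, re.IGNORECASE)]
    # More than one "shall" in a statement usually means more than one thought.
    if len(re.findall(r"\bshall\b", requirement, re.IGNORECASE)) > 1:
        findings.append("contains multiple requirements in a single statement")
    return findings

if __name__ == "__main__":
    print(verifiability_findings(
        "The software shall be user-friendly and shall minimize operator workload."))
```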


Appendix D: Requirements Verification Matrix

When developing requirements, it is important to identify an approach for verifying the requirements. This appendix provides the matrix that defines how all the requirements are verified. Only "shall" requirements should be included in these matrices. The matrix should identify each "shall" by unique identifier and be definitive as to the source, i.e., the document from which the requirement is taken. This matrix could be divided into multiple matrices (e.g., one per requirements document) to delineate sources of requirements, depending on the project. The example is shown to provide suggested guidelines for the minimum information that should be included in the verification matrix.


Table D-1 Requirements Verification Matrix

Columns: Requirement No. (a) | Document (b) | Paragraph (c) | Shall Statement (d) | Success Criteria (e) | Verification Method (f) | Facility or Lab (g) | Phase (h) | Acceptance Requirement? (i) | Preflight Acceptance? (j) | Performing Organization (k) | Results (l)

Example entries:
• P-1 | Document xxx | Paragraph 3.2.1.1, Capability: Support Uplinked Data (LDR) | "System X shall provide a max. ground-to-station uplink of…" | Success criteria: (1) System X locks to forward link at the min and max data rate tolerances; (2) System X locks to the forward link at the min and max operating frequency tolerances | Test | xxx | 5 | xxx | TPS xxxx
• P-i | Document xxx | Other paragraphs | Other "shalls" in PTRS | Other criteria | xxx | xxx | xxx | xxx | Memo xxx
• S-i (or other unique designator) | xxxxx (other specs, ICDs, etc.) | Other paragraphs | Other "shalls" in specs, ICDs, etc. | Other criteria | xxx | xxx | xxx | xxx | Report xxx

Notes:
a. Unique identifier for each System X requirement.
b. Document number the System X requirement is contained within.
c. Paragraph number of the System X requirement.
d. Text (within reason) of the System X requirement, i.e., the "shall."
e. Success criteria for the System X requirement.
f. Verification method for the System X requirement (analysis, inspection, demonstration, or test).
g. Facility or laboratory used to perform the verification and validation.
h. Phase in which the verification and validation will be performed: (1) Pre-Declared Development, (2) Formal Box-Level Functional, (3) Formal Box-Level Environmental, (4) Formal System-Level Environmental, (5) Formal System-Level Functional, (6) Formal End-to-End Functional, (7) Integrated Vehicle Functional, (8) On-Orbit Functional.
i. Indicate whether this requirement is also verified during initial acceptance testing of each unit.
j. Indicate whether this requirement is also verified during any pre-flight or recurring acceptance testing of each unit.
k. Organization responsible for performing the verification.
l. Indicate documents that contain the objective evidence that the requirement was satisfied.
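The columns above map naturally onto a simple record structure if a project chooses to manage the matrix in a tool rather than a spreadsheet. The sketch below is illustrative only; the field names and the completeness rule are assumptions, not handbook requirements.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VerificationMatrixRow:
    requirement_no: str                   # a. unique identifier
    document: str                         # b. source document
    paragraph: str                        # c. paragraph number
    shall_statement: str                  # d. text of the "shall"
    success_criteria: list[str] = field(default_factory=list)  # e.
    method: Optional[str] = None          # f. analysis, inspection, demonstration, or test
    facility: Optional[str] = None        # g. facility or lab
    phase: Optional[str] = None           # h. verification phase
    acceptance: bool = False              # i. verified during initial acceptance testing?
    preflight_acceptance: bool = False    # j. verified during preflight/recurring acceptance?
    performing_org: Optional[str] = None  # k. responsible organization
    results: Optional[str] = None         # l. objective evidence (TPS, memo, report, ...)

def incomplete_rows(rows: list[VerificationMatrixRow]) -> list[str]:
    """Return requirement numbers whose rows lack a method or success criteria."""
    return [r.requirement_no for r in rows if not r.method or not r.success_criteria]
```

A check like incomplete_rows() makes it easy to confirm, before a review, that every "shall" has at least a verification method and success criteria assigned.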


Appendix E: Creating the Validation Plan (Including Validation Requirements Matrix)

When developing requirements, it is important to identify a validation approach for how additional validation evaluation, testing, analysis, or other demonstrations will be performed to ensure customer/sponsor satisfaction. This validation plan should include a validation matrix with the elements in the example below. The final column in the matrix below uses a display product as a specific example.

Table E-1 Validation Requirements Matrix

Columns and entry guidance:
• Product #: Unique identifier for the validation product.
• Validation Activity: Describe the evaluation by the customer/sponsor that will be performed.
• Objective: What is to be accomplished by the customer/sponsor evaluation.
• Validation Method: Validation method for the System X requirement (analysis, inspection, demonstration, or test).
• Facility or Lab: Facility or laboratory used to perform the validation.
• Phase: Phase in which the verification/validation will be performed. (a)
• Performing Organization: Organization responsible for coordinating the validation activity.
• Results: Indicate the objective evidence that the validation activity occurred.

Example entry: Product 1 — Customer/sponsor will evaluate the candidate displays; Objectives: (1) ensure legibility is acceptable, (2) ensure overall appearance is acceptable; Method: Test; Facility or Lab: xxx; Phase: Phase A; Performing Organization: xxx.

a. Example: (1) during product selection process, (2) prior to final product selection (if COTS) or prior to PDR, (3) prior to CDR, (4) during box-level functional, (5) during system-level functional, (6) during end-to-end functional, (7) during integrated vehicle functional, (8) during on-orbit functional.


Appendix F: Functional, Timing, and State Analysis

Functional Flow Block Diagrams

Functional analysis can be performed using various methods, one of which is Functional Flow Block Diagrams (FFBDs). FFBDs define the system functions and depict the time sequence of functional events. They identify "what" must happen and do not assume a particular answer to "how" a function will be performed. They are functionally oriented, not solution oriented.

FFBDs are made up of functional blocks, each of which represents a definite, finite, discrete action to be accomplished. The functional architecture is developed using a series of leveled diagrams to show the functional decomposition and display the functions in their logical, sequential relationship. A consistent numbering scheme is used to label the blocks. The numbers establish identification and relationships that carry through all the diagrams and facilitate traceability from the lower levels to the top level. Each block in the first- (top-) level diagram can be expanded to a series of functions in the second-level diagram, and so on. (See Figure F-1.) Lines connecting functions indicate function flow and not lapsed time or intermediate activity. Diagrams are laid out so that the flow direction is generally from left to right. Arrows are often used to indicate functional flows. The diagrams show both input (transfer to operational orbit) and output (transfer to STS orbit), thus facilitating the definition of interfaces and control processes.

Each diagram contains a reference to other functional diagrams to facilitate movement between pages of the diagrams. Gates are used: "AND," "OR," "Go or No-Go," sometimes with enhanced functionality, including the exclusive OR gate (XOR), iteration (IT), repetition (RP), or loop (LP). A circle is used to denote a summing gate and is used when AND/OR is present. AND is used to indicate parallel functions, and all conditions must be satisfied to proceed (i.e., concurrency). OR is used to indicate that alternative paths can be satisfied to proceed (i.e., selection). G and G-bar are used to denote Go and No-Go conditions. These symbols are placed adjacent to lines leaving a particular function to indicate alternative paths. For examples of the above, see Figures F-2 and F-3.

Enhanced Functional Flow Block Diagrams (EFFBDs) provide a data flow overlay to capture data dependencies. EFFBDs (shown in Figure F-4) represent: (1) functions, (2) control flows, and (3) data flows. An EFFBD specification of a system is complete enough that it is executable as a discrete event model, capable of dynamic, as well as static, validation. EFFBDs provide freedom to use either control constructs or data triggers or both to specify execution conditions for the system functions. EFFBDs graphically distinguish between triggering and nontriggering data inputs. Triggering data are required before a function can begin execution. Triggers are actually data items with control implications. In Figure F-4, the data input shown with a green background and double-headed arrows is a triggering data input. The nontriggering data inputs are shown with gray backgrounds and single-headed arrows. An EFFBD must be enabled by: (1) the completion of the function(s) preceding it in the control construct and (2) triggered, if trigger data are identified, before it can execute. For example, in Figure F-4, "1. Serial Function" must complete and "Data 3" must be present before "3. Function in Concurrency" can execute. It should be noted that the "External Input" data into "1. Serial Function" and the "External Output" data from "6. Output Function" should not be confused with the functional input and output for these functions, which are represented by the input and output arrows, respectively. Data flows are represented as elongated ovals, whereas functions are represented as rectangular boxes.

Functional analysis looks across all life-cycle processes. Functions required to deploy a system are very different from functions required to operate and ultimately dispose of the system. Preparing FFBDs for each phase of the life cycle, as well as the transition into the phases themselves, is necessary to draw out all the requirements. These diagrams are used both to develop requirements and to identify profitability. The functional analysis also incorporates alternative and contingency operations, which improve the probability of mission success. The flow diagrams provide an understanding of the total operation of the system, serve as a basis for development of operational and contingency procedures, and pinpoint areas where changes in operational procedures could simplify the overall system operation. This organization will eventually feed into the WBS structure and ultimately drive the overall mission organization and cost.


In certain cases, alternative FFBDs may be used to represent various means of satisfying a particular function until data are acquired, which permits selection among the alternatives. For more information on FFBDs and EFFBDs, see Jim Long's Relationships Between Common Graphical Representations in Systems Engineering.

Figure F-1 FFBD flowdown (top-level mission functions such as ascent into orbit injection, checkout and deploy, transfer to operational orbit, perform mission operations, contingency operations, transfer to STS orbit, retrieve spacecraft, and reenter and land, decomposed into second-level functions 4.1–4.11 and third-level functions 4.8.1–4.8.9)

Requirements Allocation Sheets

Requirements allocation sheets document the connection between allocated functions, allocated performance, and the physical system. They provide traceability between Technical Requirements Definition functional analysis activities and Logical Decomposition and Design Solution Definition activities, maintain consistency between them, and show disconnects. Figure F-5 provides an example of a requirements allocation sheet. The reference column to the far right indicates the function numbers from the FFBDs.


Figure F-2 FFBD: example 1 (illustrates interface reference blocks, parallel AND functions, alternative OR functions, summing gates, Go and No-Go flows, flow-level designators, tentative functions, and the title block and standard drawing number format)

Figure F-3 FFBD showing additional control constructs: example 2 (illustrates iterate (IT), loop (LP), replicate (RP), and multi-exit function constructs, with domain sets, completion criteria, and branch annotations)
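A functional flow of the kind shown in Figures F-1 through F-3 can also be captured in a lightweight data structure for checking block numbering and connectivity before the diagrams are baselined. The sketch below is an assumed, simplified representation (it models only function blocks and AND/OR gating, not the full FFBD notation), using a fragment loosely based on the second-level diagram of Figure F-1.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionBlock:
    number: str                                          # e.g., "4.8"; numbering carries through all levels
    name: str
    successors: list[str] = field(default_factory=list)  # downstream block numbers
    gate: str = "SERIAL"                                  # "AND" (concurrency) or "OR" (alternative paths)

def check_flow(blocks: dict[str, FunctionBlock]) -> list[str]:
    """Flag successor references that point to undefined blocks."""
    return [f"{b.number} -> {s}" for b in blocks.values()
            for s in b.successors if s not in blocks]

# Fragment loosely based on Figure F-1 (names abbreviated; not the actual diagram data).
flow = {
    "4.5": FunctionBlock("4.5", "Receive Command", ["4.7"], gate="OR"),
    "4.7": FunctionBlock("4.7", "Store/Process Command", ["4.8"]),
    "4.8": FunctionBlock("4.8", "Acquire Payload Data", ["4.10"], gate="AND"),
    "4.10": FunctionBlock("4.10", "Transmit Payload & Subsystem Data", []),
}
assert check_flow(flow) == []   # every referenced successor is defined
```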


Figure F-4 Enhanced FFBD: example 3 (shows serial, concurrent, iterated, and multi-exit functions with triggering and nontriggering data flows such as "External Input," "Data 3," and "External Output")

Fill in the requirements allocation sheet by performing the following:
1. Include the functions and function numbers from the FFBDs.
2. Allocate functional performance requirements and design requirements to the appropriate function(s) (many requirements may be allocated to one function, or one requirement may be allocated to many functions).
3. All system-level requirements must be allocated to a function to ensure the system meets all system requirements (functions without allocated requirements should be eliminated as unnecessary activities).
4. Allocate all derived requirements to the function that spawned the requirement.
5. Identify the physical equipment, configuration item, facilities, and specifications that will be used to meet the requirements.

(For a reference on requirements allocation sheets, see DOD's Systems Engineering Fundamentals Guide.)

N2 Diagrams

An N-squared (N2) diagram is a matrix representation of functional and/or physical interfaces between elements of a system at a particular hierarchical level. The N2 diagram has been used extensively to develop data interfaces, primarily in the software areas. However, it can also be used to develop hardware interfaces, as shown in Figure F-6. The system components are placed on the diagonal. The remainder of the squares in the NxN matrix represent the interfaces. The square at the intersection of a row and a column contains a description of the interface between the two components represented on that row and that column. For example, the solar arrays have a mechanical interface with the structure and an electrical interface and supplied service interface with the voltage converters. Where a blank appears, there is no interface between the respective components.

The N2 diagram can be taken down into successively lower levels to the hardware and software component functional levels. In addition to defining the data that must be supplied across the interface, by showing the data flows the N2 chart pinpoints areas where conflicts could arise in interfaces and highlights input and output dependency assumptions and requirements.


Figure F-5 Requirements allocation sheet example (columns: ID, Description, Requirement, Traced From, Performance, Margin, Comments, and FFBD Reference; rows M1 through M11 cover mission orbit, launch vehicle, observatory mass, data acquisition quality, communication band, tracking, data latency, daily data volume, ground stations, and orbital debris casualty area)


Timing Analysis

There are several methods for visualizing the complex timing relationships in a system. Two of the more important ones are the timing diagram and the state transition diagram. The timing diagram (see Figure F-7) defines the behavior of different objects within a timescale. It provides a visual representation of objects changing state and interacting over time. Timing diagrams can be used for defining the behavior of hardware-driven and/or software-driven components. While a simple timeline analysis is very useful in understanding relationships such as concurrency, overlap, and sequencing, state diagrams (see Figure F-8) allow for even greater flexibility in that they can depict events such as loops and decision processes that may have largely varying timelines.

Timing information can be added to an FFBD to create a timeline analysis. This is very useful for allocating resources and generating specific time-related design requirements. It also elucidates performance characteristics and design constraints. However, it is not complete. State diagrams are needed to show the flow of the system in response to varying inputs.

The tools of timing analysis are rather straightforward. While some Commercial-Off-the-Shelf (COTS) tools are available, any graphics tool and a good spreadsheet will do. The important thing to remember is that timeline analysis is better for linear flows, while circular, looping, multi-path, and combinations of these are best described with state diagrams. Complexity should be kept layered and track the FFBDs. The ultimate goal of using all these techniques is simply to force the thought process enough into the details of the system that most of the big surprises can be avoided.

State Analysis

State diagramming is another graphical tool that is most helpful for understanding and displaying the complex timing relationships in a system. Timing diagrams do not give the complete picture of the system. State diagrams are needed to show the flow of the system in response to varying inputs. State diagrams provide a sort of simplification of understanding of a system by breaking complex reactions into smaller and smaller known responses.

Figure F-7 Timing diagram example (user and access-card system objects changing among idle, wait card, and wait access states over a 0–190 time scale)


This allows detailed requirements to be developed and verified with their timing performance.

Figure F-8 shows a slew command status state diagram from the James Webb Space Telescope. Ovals represent the system states. Arcs represent the event that triggers the state change as well as the action or output taken by the system in response to the event. Self-loops are permitted. In the example in Figure F-8, the slew states can loop until they arrive at the correct location, and then they can loop while they settle.

When it is used to represent the behavior of a sequential finite-state machine, the state diagram is called a state transition diagram. A sequential finite-state machine is one that has no memory, which means that the current output only depends on the current input. The state transition diagram models the event-based, time-dependent behavior of such a system.

Figure F-8 Slew command status state diagram (states such as Initialize Slew Command, Slewing, Settled, Complete, Ready, and Rejected Slew Command, with transitions driven by good and bad slew commands, timer expiration, end of slew, end of settling, and minor cycle events)
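State behavior like that in Figure F-8 can be prototyped as a small transition table before it is written into requirements, which makes the intended responses to each event explicit and testable. The sketch below is a simplified, hypothetical model; the states and events are loosely named after the figure and are not taken from the actual JWST design.

```python
from enum import Enum, auto

class SlewState(Enum):
    READY = auto()
    SLEWING = auto()
    SETTLING = auto()
    REJECTED = auto()

# (current state, event) -> next state; self-loops model continued slewing/settling.
TRANSITIONS = {
    (SlewState.READY,    "good_slew_command"): SlewState.SLEWING,
    (SlewState.READY,    "bad_slew_command"):  SlewState.REJECTED,
    (SlewState.SLEWING,  "minor_cycle"):       SlewState.SLEWING,
    (SlewState.SLEWING,  "end_of_slew"):       SlewState.SETTLING,
    (SlewState.SETTLING, "minor_cycle"):       SlewState.SETTLING,
    (SlewState.SETTLING, "end_of_settling"):   SlewState.READY,
    (SlewState.REJECTED, "timer_expired"):     SlewState.READY,
}

def step(state: SlewState, event: str) -> SlewState:
    """Advance the machine; events with no defined transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = SlewState.READY
for event in ["good_slew_command", "end_of_slew", "end_of_settling"]:
    state = step(state, event)
assert state is SlewState.READY
```

Enumerating the transition table this way also exposes gaps — any (state, event) pair deliberately left out should correspond to a documented "no response" or error-handling requirement.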


Context Diagrams

When presented with a system design problem, the systems engineer's first task is to truly understand the problem. That means understanding the context in which the problem is set. A context diagram is a useful tool for grasping the system to be built and the external domains that are relevant to that system and which have interfaces to the system. The diagram shows the general structure of a context diagram. The system is shown surrounded by the external systems which have interfaces to the system. These systems are not part of the system, but they interact with the system via the system's external interfaces. The external systems can impact the system, and the system does impact the external systems. They play a major role in establishing the requirements for the system. Entities further removed are those in the system's context that can impact the system but cannot be impacted by the system. These entities in the system's context are responsible for some of the system's requirements.

Defining the boundaries of a system is a critical but often neglected task. Using an example from a satellite project, one of the external systems that is impacted by the satellite would be the Tracking and Data Relay Satellite System (TDRSS). The TDRSS is not part of the satellite system, but it defines requirements on the satellite and is impacted by the satellite since it must schedule contacts, receive and transmit data and commands, and downlink the satellite data to the ground. An example of an entity in the context of the satellite system that is not impacted by the satellite system is the Global Positioning Satellite (GPS) system. The GPS is not impacted in any way by the satellite, but it will levy some requirements on the satellite if the satellite is to use the GPS signals for navigation.

(The accompanying diagram shows the system at the center, surrounded by external systems that are impacted by the system, within a broader context of entities that impact, but are not impacted by, the system.)

Reference: Diagram is from Buede, The Engineering Design of Systems, p. 38.


Appendix G: Technology Assessment/Insertion

Introduction, Purpose, and Scope

The Agency's programs and projects, by their very nature, frequently require the development and infusion of new technological advances to meet mission goals, objectives, and resulting requirements. Sometimes the new technological advancement being infused is actually a heritage system that is being incorporated into a different architecture and operated in a different environment from that for which it was originally designed. In this latter case, it is often not recognized that adaptation of heritage systems frequently requires technological advancement, and as a result, key steps in the development process are given short shrift—often to the detriment of the program/project. In both contexts of technological advancement (new and adapted heritage), infusion is a very complex process that has been dealt with over the years in an ad hoc manner, differing greatly from project to project with varying degrees of success.

Frequently, technology infusion has resulted in schedule slips, cost overruns, and occasionally even cancellations or failures. In post mortem, the root cause of such events has often been attributed to "inadequate definition of requirements." If such were indeed the root cause, then correcting the situation would simply be a matter of requiring better requirements definition, but since history seems frequently to repeat itself, this must not be the case—at least not in total.

In fact, there are many contributors to schedule slip, cost overrun, and project cancellation and failure—among them lack of adequate requirements definition. The case can be made that most of these contributors are related to the degree of uncertainty at the outset of the project, and that a dominant factor in the degree of uncertainty is the lack of understanding of the maturity of the technology required to bring the project to fruition and a concomitant lack of understanding of the cost and schedule reserves required to advance the technology from its present state to a point where it can be qualified and successfully infused with a high degree of confidence. Although this uncertainty cannot be eliminated, it can be substantially reduced through the early application of good systems engineering practices focused on understanding the technological requirements; the maturity of the required technology; and the technological advancement required to meet program/project goals, objectives, and requirements.

A number of processes can be used to develop the appropriate level of understanding required for successful technology insertion. The intent of this appendix is to describe a systematic process that can be used as an example of how to apply standard systems engineering practices to perform a comprehensive Technology Assessment (TA). The TA comprises two parts, a Technology Maturity Assessment (TMA) and an Advancement Degree of Difficulty Assessment (AD2). The process begins with the TMA, which is used to determine technological maturity via NASA's Technology Readiness Level (TRL) scale. It then proceeds to develop an understanding of what is required to advance the level of maturity through AD2. It is necessary to conduct TAs at various stages throughout a program/project to provide the Key Decision Point (KDP) products required for transition between phases. (See Table G-1.)

The initial TMA provides the baseline maturity of the system's required technologies at program/project outset and allows monitoring of progress throughout development. The final TMA is performed just prior to the Preliminary Design Review. It forms the basis for the Technology Readiness Assessment Report (TRAR), which documents the maturity of the technological advancement required by the systems, subsystems, and components, demonstrated through test and analysis.

The initial AD2 assessment provides the material necessary to develop preliminary cost and schedule plans and preliminary risk assessments. In subsequent assessments, the information is used to build the technology development plan, in the process identifying alternative paths, fallback positions, and performance descope options. The information is also vital to preparing milestones and metrics for subsequent Earned Value Management (EVM).


Table G-1 Products Provided by the TA as a Function of Program/Project Phase

KDP A—Transition from Pre-Phase A to Phase A: Requires an assessment of potential technology needs versus current and planned technology readiness levels, as well as potential opportunities to use commercial, academic, and other government agency sources of technology. Included as part of the draft integrated baseline.

KDP B—Transition from Phase A to Phase B: Requires a technology development plan identifying technologies to be developed, heritage systems to be modified, alternative paths to be pursued, fall-back positions and corresponding performance descopes, milestones, metrics, and key decision points. Incorporated in the preliminary project plan.

KDP C—Transition from Phase B to Phase C/D: Requires a TRAR demonstrating that all systems, subsystems, and components have achieved a level of technological maturity with demonstrated evidence of qualification in a relevant environment.

Source: NPR 7120.5.

The TMA is performed against the hierarchical breakdown of the hardware and software products of the program/project PBS to achieve a systematic, overall understanding at the system, subsystem, and component levels. (See Figure G-1.)

Figure G-1 PBS example (Crew Launch Vehicle product breakdown: the launch vehicle decomposes into first stage, upper stage, and upper stage engine; the upper stage further decomposes into elements such as the main propulsion system, upper stage and first stage reaction control systems, thrust vector control system, avionics, software, and integrated test hardware)


Inputs/Entry Criteria

It is extremely important that a TA process be defined at the beginning of the program/project and that it be performed at the earliest possible stage (concept development) and throughout the program/project through PDR. Inputs to the process will vary in level of detail according to the phase of the program/project, and even though there is a lack of detail in Pre-Phase A, the TA will drive out the major critical technological advancements required. Therefore, at the beginning of Pre-Phase A, the following should be provided:
• Refinement of TRL definitions.
• Definition of AD2.
• Definition of terms to be used in the assessment process.
• Establishment of meaningful evaluation criteria and metrics that will allow for clear identification of gaps and shortfalls in performance.
• Establishment of the TA team.
• Establishment of an independent TA review team.

How to Do Technology Assessment

The technology assessment process makes use of basic systems engineering principles and processes. As mentioned previously, it is structured to occur within the framework of the Product Breakdown Structure (PBS) to facilitate incorporation of the results. Using the PBS as a framework has a twofold benefit—it breaks the "problem" down into systems, subsystems, and components that can be more accurately assessed; and it provides the results of the assessment in a format that can readily be used in the generation of program costs and schedules. It can also be highly beneficial in providing milestones and metrics for progress tracking using EVM. As discussed above, it is a two-step process comprised of (1) the determination of the current technological maturity in terms of TRLs and (2) the determination of the difficulty associated with moving a technology from one TRL to the next through the use of the AD2. The overall process is iterative, starting at the conceptual level during program Formulation, establishing the initial identification of critical technologies and the preliminary cost, schedule, and risk mitigation plans. Continuing on into Phase A, it is used to establish the baseline maturity, the technology development plan, and associated costs and schedule. The final TA consists only of the TMA and is used to develop the TRAR, which validates that all elements are at the requisite maturity level. (See Figure G-2.)

Even at the conceptual level, it is important to use the formalism of a PBS to avoid having important technologies slip through the crack. Because of the preliminary nature of the concept, the systems, subsystems, and components will be defined at a level that will not permit detailed assessments to be made.
The process of performing the assessment, however, is the same as that used for subsequent, more detailed steps that occur later in the program/project where systems are defined in greater detail. Once the concept has been formulated and the initial identification of critical technologies made, it is necessary to perform detailed architecture studies with the Technology Assessment Process intimately interwoven. (See Figure G-3.)

Figure G-2 Technology assessment process (identify systems, subsystems, and components per the hierarchical product breakdown of the WBS; perform the baseline technology maturity assessment; assign TRLs to all components based on the assessment of maturity; assign TRLs to subsystems and systems based on the lowest TRL of their constituents and the TRL state of integration; identify all components, subsystems, and systems that are at lower TRLs than required by the program; and perform AD2 on those items to produce the technology development plan, cost plan, schedule plan, and risk assessment)


The purpose of the architecture studies is to refine the end-item system design to meet the overall scientific requirements of the mission. It is imperative that there be a continuous relationship between architectural studies and maturing technology advances. The architectural studies must incorporate the results of the technology maturation, planning for alternative paths and identifying new areas required for development as the architecture is refined. Similarly, it is incumbent upon the technology maturation process to identify requirements that are not feasible and development routes that are not fruitful and to transmit that information to the architecture studies in a timely manner. Likewise, it is incumbent upon the architecture studies to provide feedback to the technology development process relative to changes in requirements. Particular attention must be given to "heritage" systems in that they are often used in architectures and environments different from those in which they were designed to operate.

Figure G-3 Architectural studies and technology development (requirements and concepts feed the architectural studies and the TRL/AD2 assessment, which drive technology maturation from basic technology research, research to prove feasibility, technology development, technology demonstration, and system/subsystem development through system test, launch, and operations, culminating in the system design)

Establishing TRLs

TRL is, at its most basic, a description of the performance history of a given system, subsystem, or component relative to a set of levels first described at NASA HQ in the 1980s. The TRL essentially describes the state of the art of a given technology and provides a baseline from which maturity is gauged and advancement defined. (See Figure G-4.) Even though the concept of TRL has been around for almost 20 years, it is not well understood and is frequently misinterpreted. As a result, we often undertake programs without fully understanding either the maturity of key technologies or what is needed to develop them to the required level. It is impossible to understand the magnitude and scope of a development program without having a clear understanding of the baseline technological maturity of all elements of the system. Establishing the TRL is a vital first step on the way to a successful program.

Figure G-4 Technology readiness levels:
• TRL 9—Actual system "flight proven" through successful mission operations
• TRL 8—Actual system completed and "flight qualified" through test and demonstration (ground or flight)
• TRL 7—System prototype demonstration in a target/space environment
• TRL 6—System/subsystem model or prototype demonstration in a relevant environment (ground or space)
• TRL 5—Component and/or breadboard validation in relevant environment
• TRL 4—Component and/or breadboard validation in laboratory environment
• TRL 3—Analytical and experimental critical function and/or characteristic proof-of-concept
• TRL 2—Technology concept and/or application formulated
• TRL 1—Basic principles observed and reported


A frequent misconception is that in practice it is too difficult to determine TRLs and that when you do it is not meaningful. On the contrary, identifying TRLs can be a straightforward systems engineering process of determining what was demonstrated and under what conditions it was demonstrated.

At first blush, the TRL descriptions in Figure G-4 appear to be straightforward. It is in the process of trying to assign levels that problems arise. A primary cause of difficulty is in terminology—everyone knows what a breadboard is, but not everyone has the same definition. Also, what is a "relevant environment"? What is relevant to one application may or may not be relevant to another. Many of these terms originated in various branches of engineering and had, at the time, very specific meanings to that particular field. They have since become commonly used throughout the engineering field and often take differences in meaning from discipline to discipline, some subtle, some not so subtle. "Breadboard," for example, comes from electrical engineering, where the original use referred to checking out the functional design of an electrical circuit by populating a "breadboard" with components to verify that the design operated as anticipated. Other terms come from mechanical engineering, referring primarily to units that are subjected to different levels of stress under testing, i.e., qualification, protoflight, and flight units. The first step in developing a uniform TRL assessment (see Figure G-5) is to define the terms used. It is extremely important to develop and use a consistent set of definitions over the course of the program/project.

Having established a common set of terminology, it is necessary to proceed to the next step—quantifying "judgment calls" on the basis of past experience. Even with clear definitions there will be the need for judgment calls when it comes time to assess just how similar a given element is relative to what is needed (i.e., is it close enough to a prototype to be considered a prototype, or is it more like an engineering breadboard?). Describing what has been done in terms of form, fit, and function provides a means of quantifying an element based on its design intent and subsequent performance. The current definitions for software TRLs are contained in NPR 7120.8, NASA Research and Technology Program and Project Management Requirements.

A third critical element of any assessment relates to the question of who is in the best position to make judgment calls relative to the status of the technology in question. For this step, it is extremely important to have a well-balanced, experienced assessment team. Team members do not necessarily have to be discipline experts. The primary expertise required for a TRL assessment is that the systems engineer/user understands the current state of the art in applications.

Figure G-5 The TMA thought process:
• Has an identical unit been successfully operated/launched in an identical configuration/environment? If yes, TRL 9.
• Has an identical unit in a different configuration/system architecture been successfully operated in space or the target environment or launched? If so, this initially drops to TRL 5 until differences are evaluated.
• Has an identical unit been flight qualified but not yet operated in space or the target environment or launched? If yes, TRL 8.
• Has a prototype unit (or one similar enough to be considered a prototype) been successfully operated in space or the target environment or launched? If yes, TRL 7.
• Has a prototype unit (or one similar enough to be considered a prototype) been demonstrated in a relevant environment? If yes, TRL 6.
• Has a breadboard unit been demonstrated in a relevant environment? If yes, TRL 5.
• Has a breadboard unit been demonstrated in a laboratory environment? If yes, TRL 4.
• Has analytical and experimental proof-of-concept been demonstrated? If yes, TRL 3.
• Has a concept or application been formulated? If yes, TRL 2.
• Have basic principles been observed and reported? If yes, TRL 1.
• If none of the above, rethink your position regarding this technology.


Having established a set of definitions, defined a process for quantifying judgment calls, and assembled an expert assessment team, the process primarily consists of asking the right questions. The flowchart depicted in Figure G-5 demonstrates the questions to ask to determine TRL at any level in the assessment.

Note that the second box particularly refers to heritage systems. If the architecture and the environment have changed, then the TRL drops to TRL 5—at least initially. Additional testing may need to be done for heritage systems for the new use or new environment. If in subsequent analysis the new environment is sufficiently close to the old environment, or the new architecture sufficiently close to the old architecture, then the resulting evaluation could then be TRL 6 or 7, but the most important thing to realize is that it is no longer at TRL 9. Applying this process at the system level and then proceeding to lower levels of subsystem and component identifies those elements that require development and sets the stage for the subsequent phase, determining the AD2.

A method for formalizing this process is shown in Figure G-6. Here, the process has been set up as a table: the rows identify the systems, subsystems, and components that are under assessment. The columns identify the categories that will be used to determine the TRL—i.e., what units have been built, to what scale, and in what environment they have been tested. Answers to these questions determine the TRL of an item under consideration. The TRL of the system is determined by the lowest TRL present in the system; i.e., a system is at TRL 2 if any single element in the system is at TRL 2. The problem of multiple elements being at low TRLs is dealt with in the AD2 process. Note that the issue of integration affects the TRL of every system, subsystem, and component. All of the elements can be at a higher TRL, but if they have never been integrated as a unit, the TRL will be lower for the unit. How much lower depends on the complexity of the integration.

Figure G-6 TRL assessment matrix (rows list each system, subsystem, and component; columns record the demonstration units that exist—concept, breadboard, brassboard, developmental model, prototype, flight qualified—the environment in which they were demonstrated—laboratory, relevant, space, space/launch operation—and form, fit, function, appropriate scale, and overall TRL; color coding: red = below TRL 3, yellow = TRL 3, 4, and 5, green = TRL 6 and above, white = unknown)
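The roll-up rule described above — a system's TRL is set by the lowest TRL present in it, and is further reduced if the elements have never been integrated as a unit — can be expressed compactly. The sketch below is illustrative only; in particular, the one-level integration penalty is an assumption for the example, not handbook policy (the actual reduction depends on the complexity of the integration).

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    trl: int = 9                      # leaf maturity (TRL 1-9)
    children: list["Element"] = field(default_factory=list)
    integrated: bool = True           # False if the children have never been integrated as a unit

def rolled_up_trl(element: Element) -> int:
    """System/subsystem TRL = lowest TRL among its elements, reduced if never integrated."""
    if not element.children:
        return element.trl
    trl = min(rolled_up_trl(child) for child in element.children)
    if not element.integrated:
        trl = max(1, trl - 1)         # assumed one-level penalty for this sketch
    return trl

subsystem = Element("Subsystem X", children=[
    Element("Electrical components", trl=6),
    Element("Thermal systems", trl=4),
    Element("Mechanisms", trl=3),
], integrated=False)
print(rolled_up_trl(subsystem))   # 2: lowest component TRL (3) minus the integration penalty
```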


Appendix H: Integration Plan Outline

Purpose

The integration plan defines the integration and verification strategies for a project interface with the system design and decomposition into the lower level elements.[1] The integration plan is structured to bring the elements together to assemble each subsystem and to bring all of the subsystems together to assemble the system/product. The primary purposes of the integration plan are: (1) to describe this coordinated integration effort that supports the implementation strategy, (2) to describe for the participants what needs to be done in each integration step, and (3) to identify the required resources and when and where they will be needed.

Questions/Checklist
• Does the integration plan include and cover integration of all of the components and subsystems of the project, either developed or purchased?
• Does the integration plan account for all external systems to be integrated with the system (for example, communications networks, field equipment, other complete systems owned by the government or owned by other government agencies)?
• Does the integration plan fully support the implementation strategy, for example, when and where the subsystems and system are to be used?
• Does the integration plan mesh with the verification plan?
• For each integration step, does the integration plan define what components and subsystems are to be integrated?
• For each integration step, does the integration plan identify all the needed participants and define what their roles and responsibilities are?
• Does the integration plan establish the sequence and schedule for every integration step?
• Does the integration plan spell out how integration problems are to be documented and resolved?

Integration Plan Contents

Table H-1 outlines the content of the integration plan by section.

[1] The material in this appendix is adapted from Federal Highway Administration and CalTrans, Systems Engineering Guidebook for ITS, Version 2.0.


Table H-1 Integration Plan Contents

Title page: The title page should follow the NASA procedures or style guide. At a minimum, it should contain the following information:
• INTEGRATION PLAN FOR THE [insert name of project] AND [insert name of organization]
• Contract number
• Date that the document was formally approved
• The organization responsible for preparing the document
• Internal document control number, if available
• Revision version and date issued

1.0 Purpose of Document: A brief statement of the purpose of this document. It is the plan for integrating the components and subsystems of the project prior to verification.

2.0 Scope of Project: This section gives a brief description of the planned project and the purpose of the system to be built. Special emphasis is placed on the project's deployment complexities and challenges.

3.0 Integration Strategy: This section informs the reader what the high-level plan is for integration and, most importantly, why the integration plan is structured the way it is. The integration plan is subject to several, sometimes conflicting, constraints. Also, it is one part of the larger process of build, integrate, verify, and deploy, all of which must be synchronized to support the same project strategy. So, for even a moderately complex project, the integration strategy, based on a clear and concise statement of the project's goals and objectives, is described here at a high, but all-inclusive, level. It may also be necessary to describe the analysis of alternative strategies to make it clear why this particular strategy was selected. The same strategy is the basis for the build plan, the verification plan, and the deployment plan. This section covers and describes each step in the integration process. It describes what components are integrated at each step and gives a general idea of what threads of the operational capabilities (requirements) are covered. It ties the plan to the previously identified goals and objectives so the stakeholders can understand the rationale for each integration step. This summary-level description also defines the schedule for all the integration efforts.

4.0 Phase 1 Integration: This, and the following sections, define and explain each step in the integration process. The intent here is to identify all the needed participants and to describe to them what they have to do. In general, the description of each integration step should identify:
• The location of the activities.
• The project-developed equipment and software products to be integrated. Initially this is just a high-level list, but eventually the list must be exact and complete, showing part numbers and quantity.
• Any support equipment (special software, test hardware, software stubs, and drivers to simulate yet-to-be-integrated software components, external systems) needed for this integration step. The same support equipment is most likely needed for the subsequent verification step.
• All integration activities that need to be performed after installation, including integration with on-site systems and external systems at other sites.
• A description of the verification activities, as defined in the applicable verification plan, that occur after this integration step.
• The responsible parties for each activity in the integration step.
• The schedule for each activity.

5.0 Multiple Phase Integration Steps (1 or N steps): This, and any needed additional sections, follow the format for Section 3.0. Each covers each step in a multiple-step integration effort.


Appendix I: Verification and Validation Plan
Sample Outline

1. Introduction
   1.1 Purpose and Scope
   1.2 Responsibility and Change Authority
   1.3 Definitions
2. Applicable and Reference Documents
   2.1 Applicable Documents
   2.2 Reference Documents
   2.3 Order of Precedence
3. System X Description
   3.1 System X Requirements Flow Down
   3.2 System X Architecture
   3.3 End Item Architectures
       3.3.1 System X End Item A
       3.3.n System X End Item n
   3.4 System X Ground Support Equipment
   3.5 Other Architecture Descriptions
4. Verification and Validation Process
   4.1 Verification and Validation Management Responsibilities
   4.2 Verification Methods
       4.2.1 Analysis
       4.2.2 Inspection
       4.2.3 Demonstration
       4.2.4 Test
           4.2.4.1 Qualification Testing
           4.2.4.2 Other Testing
   4.3 Validation Methods
   4.4 Certification Process
   4.5 Acceptance Testing
5. Verification and Validation Implementation
   5.1 System X Design and Verification and Validation Flow
   5.2 Test Articles
   5.3 Support Equipment
   5.4 Facilities
6. System X End Item Verification and Validation
   6.1 End Item A
       6.1.1 Developmental/Engineering Unit Evaluations
       6.1.2 Verification Activities
           6.1.2.1 Verification Testing
               6.1.2.1.1 Qualification Testing
               6.1.2.1.2 Other Testing


           6.1.2.2 Verification Analysis
               6.1.2.2.1 Thermal Analysis
               6.1.2.2.2 Stress Analysis
               6.1.2.2.3 Analysis of Fracture Control
               6.1.2.2.4 Materials Analysis
               6.1.2.2.5 EEE Parts Analysis
           6.1.2.3 Verification Inspection
           6.1.2.4 Verification Demonstration
       6.1.3 Validation Activities
       6.1.4 Acceptance Testing
   6.n End Item n
7. System X Verification and Validation
   7.1 End-Item-to-End-Item Integration
       7.1.1 Developmental/Engineering Unit Evaluations
       7.1.2 Verification Activities
           7.1.2.1 Verification Testing
           7.1.2.2 Verification Analysis
           7.1.2.3 Verification Inspection
           7.1.2.4 Verification Demonstration
       7.1.3 Validation Activities
   7.2 Complete System Integration
       7.2.1 Developmental/Engineering Unit Evaluations
       7.2.2 Verification Activities
           7.2.2.1 Verification Testing
           7.2.2.2 Verification Analysis
           7.2.2.3 Verification Inspection
           7.2.2.4 Verification Demonstration
       7.2.3 Validation Activities
8. System X Program Verification and Validation
   8.1 Vehicle Integration
   8.2 End-to-End Integration
   8.3 On-Orbit V&V Activities
9. System X Certification Products

Appendix A: Acronyms and Abbreviations
Appendix B: Definition of Terms
Appendix C: Requirement Verification Matrix
Appendix D: System X Validation Matrix


Appendix J: SEMP Content Outline

SEMP Content
The SEMP is the foundation document for the technical and engineering activities conducted during the project. The SEMP conveys information on the technical integration methodologies and activities for the project within the scope of the project plan to all of the personnel. Because the SEMP provides the specific technical and management information to understand the technical integration and interfaces, its documentation and approval serve as an agreement within the project of how the technical work will be conducted. The technical team, working under the overall program/project plan, develops and updates the SEMP as necessary. The technical team works with the project manager to review the content and obtain concurrence. The SEMP includes the following three general sections:
• Technical program planning and control, which describes the processes for planning and control of the engineering efforts for the design, development, test, and evaluation of the system.
• Systems engineering processes, which includes specific tailoring of the systems engineering process as described in the NPR, implementation procedures, trade study methodologies, tools, and models to be used.
• Engineering specialty integration, which describes the integration of the technical disciplines' efforts into the systems engineering process and summarizes each technical discipline effort and cross-references each of the specific and relevant plans.

Purpose and Scope
This section provides a brief description of the purpose, scope, and content of the SEMP. The scope encompasses the SE technical effort required to generate the work products necessary to meet the success criteria for the product-line life-cycle phases. The SEMP is a plan for doing the project technical effort by a technical team for a given WBS model in the system structure and to help meet life-cycle phase success criteria.

Applicable Documents
This section of the SEMP lists the documents applicable to this specific project and its SEMP implementation and describes major standards and procedures that the technical effort for this specific project needs to follow. Specific implementation of standardization tasking is incorporated into pertinent sections of the SEMP.
Provide the engineering standards and procedures to be used in the project. Examples of specific procedures could include any hazardous material handling, crew training for control room operations, special instrumentation techniques, special interface documentation for vehicles, and maintenance procedures specific to the project.

Technical Summary
This section contains an executive summary describing the problem to be solved by this technical effort and the purpose, context, and products of the WBS model to be developed and integrated with other interfacing systems identified.

System Description
This section contains a definition of the purpose/mission/objective of the system being developed, a brief description of the purpose of the products of the WBS models of the system structure for which this SEMP applies, and the expected scenarios for the system. Each WBS model includes the system end products and their subsystems and the supporting or enabling products and any other work products (plans, baselines) required for the development of the system. The description should include any interfacing systems and system products, including humans, with which the WBS model system products will interact physically, functionally, or electronically. Identify and document system constraints, including cost, schedule, and technical (for example, environmental, design).


System Structure
This section contains an explanation of how the WBS models will be developed, how the resulting WBS model will be integrated into the project WBS, and how the overall system structure will be developed. This section contains a description of the relationship of the specification tree and the drawing tree with the products of the system structure and how the relationship and interfaces of the system end products and their life-cycle-enabling products will be managed throughout the planned technical effort.

Product Integration
This subsection contains an explanation of how the product will be integrated and will describe clear organizational responsibilities and interdependencies whether the organizations are geographically dispersed or managed across Centers. This includes identifying organizations—intra- and inter-NASA, other Government agencies, contractors, or other partners—and delineating their roles and responsibilities.
When components or elements will be available for integration needs to be clearly understood and identified on the schedule to establish critical schedule issues.

Planning Context
This subsection contains the product-line life-cycle model constraints (e.g., NPR 7120.5) that affect the planning and implementation of the common technical processes to be applied in performing the technical effort. The constraints provide a linkage of the technical effort with the applicable product-line life-cycle phases covered by the SEMP including, as applicable, milestone decision gates, major technical reviews, key intermediate events leading to project completion, life-cycle phase, event entry and success criteria, and major baseline and other work products to be delivered to the sponsor or customer of the technical effort.

Boundary of Technical Effort
This subsection contains a description of the boundary of the general problem to be solved by the technical effort. Specifically, it identifies what can be controlled by the technical team (inside the boundary) and what influences the technical effort and is influenced by the technical effort but not controlled by the technical team (outside the boundary). Specific attention should be given to physical, functional, and electronic interfaces across the boundary.
Define the system to be addressed. A description of the boundary of the system can include the following: definition of internal and external elements/items involved in realizing the system purpose as well as the system boundaries in terms of space, time, physical, and operational. Also, identification of what initiates the transitions of the system to operational status and what initiates its disposal is important. The following is a general listing of other items to include, as appropriate:
• General and functional descriptions of the subsystems,
• Document current and established subsystem performance characteristics,
• Identify and document current interfaces and characteristics,
• Develop functional interface descriptions and functional flow diagrams,
• Identify key performance interface characteristics, and
• Identify current integration strategies and architecture.

Cross References
This subsection contains cross references to appropriate nontechnical plans and critical reference material that interface with the technical effort. It contains a summary description of how the technical activities covered in other plans are accomplished as fully integrated parts of the technical effort.

Technical Effort Integration
This section contains a description of how the various inputs to the technical effort will be integrated into a coordinated effort that meets cost, schedule, and performance objectives.
The section should describe the integration and coordination of the specialty engineering disciplines into the systems engineering process during each iteration of the processes. Where there is potential for overlap of specialty efforts, the SEMP should define the relative responsibilities and authorities of each. This section should contain, as needed, the project's approach to the following:
• Concurrent engineering,
• The activity phasing of specialty engineering,
• The participation of specialty disciplines,
• The involvement of specialty disciplines,


• The role and responsibility of specialty disciplines,
• The participation of specialty disciplines in system decomposition and definition,
• The role of specialty disciplines in verification and validation,
• Reliability,
• Maintainability,
• Quality assurance,
• Integrated logistics,
• Human engineering,
• Safety,
• Producibility,
• Survivability/vulnerability,
• National Environmental Policy Act compliance, and
• Launch approval/flight readiness.

Provide the approach for coordination of diverse technical disciplines and integration of the development tasks. For example, this can include the use of integrated teaming approaches. Ensure that the specialty engineering disciplines are properly represented on all technical teams and during all life-cycle phases of the project. Define the scope and timing of the specialty engineering tasks.

Responsibility and Authority
This subsection contains a description of the organizing structure for the technical teams assigned to this technical effort and includes how the teams will be staffed and managed, including (1) what organization/panel will serve as the designated governing authority for this project and, therefore, will have final signature authority for this SEMP; (2) how multidisciplinary teamwork will be achieved; (3) identification and definition of roles, responsibilities, and authorities required to perform the activities of each planned common technical process; (4) planned technical staffing by discipline and expertise level, with human resource loading; (5) required technical staff training; and (6) assignment of roles, responsibilities, and authorities to appropriate project stakeholders or technical teams to ensure planned activities are accomplished.
Provide an organization chart and denote who on the team is responsible for each activity. Indicate the lines of authority and responsibility. Define the resolution authority to make decisions/decision process. Show how the engineers/engineering disciplines relate.
The systems engineering roles and responsibilities need to be addressed for the following: project office, user, Contracting Officer Technical Representative (COTR), systems engineering, design engineering, specialty engineering, and contractor.

Contractor Integration
This subsection contains a description of how the technical effort of in-house and external contractors is to be integrated with the NASA technical team efforts. This includes establishing technical agreements, monitoring contractor progress against the agreement, handling technical work or product requirements change requests, and acceptance of deliverables. The subsection will specifically address how interfaces between the NASA technical team and the contractor will be implemented for each of the 17 common technical processes. For example, it addresses how the NASA technical team will be involved with reviewing or controlling contractor-generated design solution definition documentation or how the technical team will be involved with product verification and product validation activities.
Key deliverables for the contractor to complete their systems and those required of the contractor for other project participants need to be identified and established on the schedule.

Support Integration
This subsection contains a description of the methods (such as integrated computer-aided tool sets, integrated work product databases, and technical management information systems) that will be used to support technical effort integration.

Common Technical Processes Implementation
Each of the 17 common technical processes will have a separate subsection that contains a plan for performing the required process activities as appropriately tailored. (See NPR 7123.1 for the process activities required and tailoring.) Implementation of the 17 common technical processes includes (1) the generation of the outcomes needed to satisfy the entry and success criteria of the applicable product-line life-cycle phase or phases identified in D.4.4.4 and (2) the necessary inputs for other technical processes. These sections contain a description of the approach, methods, and tools for:
• Identifying and obtaining adequate human and nonhuman resources for performing the planned process, developing the work products, and providing the services of the process.


• Assigning responsibility and authority for performing the planned process, developing the work products, and providing the services of the process.
• Training the technical staff performing or supporting the process, where training is identified as needed.
• Designating and placing designated work products of the process under appropriate levels of configuration management.
• Identifying and involving stakeholders of the process.
• Monitoring and controlling the process.
• Identifying, defining, and tracking metrics and success.
• Objectively evaluating adherence of the process and the work products and services of the process to the applicable requirements, objectives, and standards and addressing noncompliance.
• Reviewing activities, status, and results of the process with appropriate levels of management and resolving issues.

This section should also include the project-specific description of each of the 17 processes to be used, including the specific tailoring of the requirements to the system and the project; the procedures to be used in implementing the processes; in-house documentation; trade study methodology; types of mathematical and/or simulation models to be used; and generation of specifications.

Technology Insertion
This section contains a description of the approach and methods for identifying key technologies and their associated risks and criteria for assessing and inserting technologies, including those for inserting critical technologies from technology development projects. An approach should be developed for appropriate level and timing of technology insertion. This could include alternative approaches to take advantage of new technologies to meet systems needs as well as alternative options if the technologies do not prove appropriate in result or timing. The strategy for an initial technology assessment within the scope of the project requirements should be provided to identify technology constraints for the system.

Additional SE Functions and Activities
This section contains a description of other areas not specifically included in previous sections but that are essential for proper planning and conduct of the overall technical effort.

System Safety
This subsection contains a description of the approach and methods for conducting safety analysis and assessing the risk to operators, the system, the environment, or the public.

Engineering Methods and Tools
This subsection contains a description of the methods and tools not included in the technology insertion section that are needed to support the overall technical effort and identifies those tools to be acquired and tool training requirements.
Define the development environment for the project, including automation and software tools. If required, develop and/or acquire the tools and facilities for all disciplines on the project. Standardize when possible across the project, or enable a common output format of the tools that can be used as input by a broad range of tools used on the project. Define the requirements for information management systems and for using existing elements. Define and plan for the training required to use the tools and technology across the project.

Specialty Engineering
This subsection contains a description of engineering discipline and specialty requirements that apply across projects and the WBS models of the system structure. Examples of these requirement areas would include planning for safety, reliability, human factors, logistics, maintainability, quality, operability, and supportability. Estimate staffing levels for these disciplines and incorporate with the project requirements.

Integration with the Project Plan and Technical Resource Allocation
This section describes how the technical effort will integrate with project management and defines roles and responsibilities. It addresses how technical requirements will be integrated with the project plan to determine the allocation of resources, including cost, schedule, and personnel, and how changes to the allocations will be coordinated.


This section describes the interface between all of the technical aspects of the project and the overall project management process during the systems engineering planning activities and updates. All activities to coordinate technical efforts with the overall project are included, such as technical interactions with the external stakeholders, users, and contractors.

Waivers
This section contains all approved waivers to the Center Director's Implementation Plan for SE NPR 7123.1 requirements for the SEMP. This section also contains a separate subsection that includes any tailored SE NPR requirements that are not related and able to be documented in a specific SEMP section or subsection.

Appendices
Appendices are included, as necessary, to provide a glossary, acronyms and abbreviations, and information published separately for convenience in document maintenance. Included would be: (1) information that may be pertinent to multiple topic areas (e.g., description of methods or procedures); (2) charts and proprietary data applicable to the technical efforts required in the SEMP; and (3) a summary of technical plans associated with the project. Each appendix should be referenced in one of the sections of the engineering plan where data would normally have been provided.

Templates
Any templates for forms, plans, or reports the technical team will need to fill out, like the format for the verification and validation plan, should be included in the appendices.

References
This section contains all documents referenced in the text of the SEMP.

SEMP Preparation Checklist
The SEMP, as the key reference document capturing the technical planning, needs to address some basic topics. For a generic SEMP preparation checklist, refer to Systems Engineering Guidebook by James Martin.


Appendix K: Plans

Activity Plan .......... 187
Baseline Plan .......... 111
Build Plan .......... 300
Closure Plan .......... 178
Configuration Management Plan .......... 176, 311
Cost Account Plan .......... 121
Data Management Plan .......... 158
Deployment Plan .......... 300
Earned Value Management Plan .......... 166
Engineering Plan .......... 307
Implementation Plan .......... 148
Installation Plan .......... 230
Integration Plan .......... 299
Interface Control Plan .......... 81
Launch and Early Orbit Plan .......... 35
Life-Cycle Cost Management Plan .......... 129
Logistics Support Plan .......... 26
Mission Operations Plan .......... 26
Operations Plan .......... 35
Production Plan .......... 25
Program Plan .......... 19
Project Plan .......... 112
Project Protection Plan .......... 260, 321
Quality Control Plan .......... 24
Requirements Management Plan .......... 134
Review Plan .......... 169
Risk Control Plan .......... 142
Risk Management Plan .......... 140
Risk Mitigation Plan .......... 295
Software Development Plan .......... 104
Software IV&V Plan .......... 105
Source Evaluation Plan .......... 219
Strategic Plan .......... 152
Surveillance Plan .......... 225
System and Subsystem Test Plans .......... 42
Systems Decommissioning/Disposal Plan .......... 7
Systems Engineering Management Plan (SEMP) .......... 113, 303
Technology Development Plan .......... 277
Test Plan .......... 230
Time-Phased Resource Plan .......... 190
Transition Plan .......... 230
Transportation Plan .......... 187
Validation Plan .......... 100, 284
Verification Plan .......... 83


Appendix L: Interface Requirements Document Outline

1.0 Introduction
1.1 Purpose and Scope. State the purpose of this document and briefly identify the interface to be defined. (For example, "This IRD defines and controls the interface(s) requirements between ______ and _____.")
1.2 Precedence. Define the relationship of this document to other program documents and specify which is controlling in the event of a conflict.
1.3 Responsibility and Change Authority. State the responsibilities of the interfacing organizations for development of this document and its contents. Define document approval authority (including change approval authority).
2.0 Documents
2.1 Applicable Documents. List binding documents that are invoked to the extent specified in this IRD. The latest revision or most recent version should be listed. Documents and requirements imposed by higher-level documents (higher order of precedence) should not be repeated.
2.2 Reference Documents. List any document that is referenced in the text in this subsection.
3.0 Interfaces
3.1 General. In the subsections that follow, provide the detailed description, responsibilities, coordinate systems, and numerical requirements as they relate to the interface plane.
3.1.1 Interface Description. Describe the interface as defined in the system specification. Use tables, figures, or drawings as appropriate.
3.1.2 Interface Responsibilities. Define interface hardware and interface boundary responsibilities to depict the interface plane. Use tables, figures, or drawings as appropriate.
3.1.3 Coordinate Systems. Define the coordinate system used for interface requirements on each side of the interface. Use tables, figures, or drawings as appropriate.
3.1.4 Engineering Units, Tolerances, and Conversions. Define the measurement units along with tolerances. If required, define the conversion between measurement systems.
3.2 Interface Requirements. In the subsections that follow, define structural limiting values at the interface, such as interface loads, forcing functions, and dynamic conditions.
3.2.1 Interface Plane. Define the interface requirements on each side of the interface plane.
3.2.1.1 Envelope
3.2.1.2 Mass Properties. Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover the mass of the element.
3.2.1.3 Structural/Mechanical. Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover attachment, stiffness, latching, and mechanisms.
3.2.1.4 Fluid. Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover fluid areas such as thermal control, O2 and N2, potable and waste water, fuel cell water, and atmospheric sampling.


3.2.1.5 Electrical (Power). Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover various electric current, voltage, wattage, and resistance levels.
3.2.1.6 Electronic (Signal). Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover various signal types such as audio, video, command data handling, and navigation.
3.2.1.7 Software and Data. Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, this subsection should cover various data standards, message timing, protocols, error detection/correction, functions, initialization, and status.
3.2.1.8 Environments. Define the derived interface requirements based on the allocated requirements contained in the applicable specification pertaining to that side of the interface. For example, cover the dynamic envelope measures of the element in English units or the metric equivalent on this side of the interface.
3.2.1.8.1 Electromagnetic Effects
3.2.1.8.1.a Electromagnetic Compatibility. Define the appropriate electromagnetic compatibility requirements. For example, end-item-1-to-end-item-2 interface shall meet the requirements [to be determined] of systems requirements for electromagnetic compatibility.
3.2.1.8.1.b Electromagnetic Interference. Define the appropriate electromagnetic interference requirements. For example, end-item-1-to-end-item-2 interface shall meet the requirements [to be determined] of electromagnetic emission and susceptibility requirements for electromagnetic compatibility.
3.2.1.8.1.c Grounding. Define the appropriate grounding requirements. For example, end-item-1-to-end-item-2 interface shall meet the requirements [to be determined] of grounding requirements.
3.2.1.8.1.d Bonding. Define the appropriate bonding requirements. For example, end-item-1-to-end-item-2 structural/mechanical interface shall meet the requirements [to be determined] of electrical bonding requirements.
3.2.1.8.1.e Cable and Wire Design. Define the appropriate cable and wire design requirements. For example, end-item-1-to-end-item-2 cable and wire interface shall meet the requirements [to be determined] of cable/wire design and control requirements for electromagnetic compatibility.
3.2.1.8.2 Acoustic. Define the appropriate acoustics requirements. Define the acoustic noise levels on each side of the interface in accordance with program or project requirements.
3.2.1.8.3 Structural Loads. Define the appropriate structural loads requirements. Define the mated loads that each end item must accommodate.
3.2.1.8.4 Vibroacoustics. Define the appropriate vibroacoustics requirements. Define the vibroacoustic loads that each end item must accommodate.
3.2.1.9 Other Types of Interface Requirements. Define other types of unique interface requirements that may be applicable.
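Subsection 3.1.4 above calls for defining units, tolerances, and conversions between measurement systems for values exchanged across the interface. The short sketch below illustrates the kind of toleranced, unit-converted compliance check those definitions enable; it is not part of the IRD outline, and the parameter names, limits, and measured values are hypothetical.

```python
# Illustrative sketch (not part of the IRD outline): checking a measured interface
# value against a toleranced requirement, with a unit conversion between the two
# sides of the interface. All names and numbers are hypothetical.

LB_TO_KG = 0.45359237  # exact conversion factor, pounds-mass to kilograms

def within_tolerance(measured, nominal, tolerance):
    """Return True if |measured - nominal| <= tolerance (all values in the same units)."""
    return abs(measured - nominal) <= tolerance

# Hypothetical mass-properties requirement (see 3.2.1.2), stated in kilograms.
nominal_mass_kg = 120.0
tolerance_kg = 2.5

# Hypothetical as-measured value reported by the other side of the interface in pounds-mass.
measured_mass_lb = 266.0
measured_mass_kg = measured_mass_lb * LB_TO_KG

print(f"measured mass = {measured_mass_kg:.2f} kg")
print("compliant" if within_tolerance(measured_mass_kg, nominal_mass_kg, tolerance_kg)
      else "out of tolerance")
```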


Appendix M: CM Plan Outline

A typical CM plan should include the following:

Table M-1 CM Plan Outline

1.0 Introduction
This section includes:
• The purpose and scope of the CM plan and the program phases to which it applies
• Brief description of the system or top-level configuration items

2.0 Applicable and Reference Documents
This section includes a list of the specifications, standards, manuals, and other documents referenced in the plan by title, document number, issuing authority, revision, and, as applicable, change notice, amendment, and issue date.

3.0 CM Concepts and Organization
This section includes:
• CM objectives
• Information needed to support the achievement of objectives in the current and future phases
• Description and graphic portraying the project's planned organization with emphasis on the CM activities

4.0 CM Process (CM management and planning, configuration identification, configuration control, configuration status accounting, and configuration audits)
This section includes a description of the project's CM process for accomplishing the five CM activities, which includes but is not limited to:
• CM activities for the current and future phases
• Baselines
• Configuration items
• Establishment and membership of configuration control boards
• Nomenclature and numbering
• Hardware/software identification
• Functional configuration audits and physical configuration audits

5.0 Management of Configuration Data
This section describes the methods for meeting the CM technical data requirements.

6.0 Interface Management
This section includes a description of how CM will maintain and control interface documentation.

7.0 CM Phasing and Schedule
This section describes milestones for implementing CM commensurate with major program milestones.

8.0 Subcontractor/Vendor Control
This section describes methods used to ensure subcontractors/vendors comply with CM requirements.


Appendix N: Guidance on Technical Peer Reviews/Inspections

Introduction
The objective of technical peer reviews/inspections is to remove defects as early as possible in the development process. Peer reviews/inspections are a well-defined review process for finding and fixing defects, conducted by a team of peers with assigned roles, each having a vested interest in the work product under review. Peer reviews/inspections are held within development phases, between milestone reviews, on completed products or completed portions of products. The results of peer reviews/inspections can be reported at milestone reviews. Checklists are heavily utilized in peer reviews/inspections to improve the quality of the review.

Technical peer reviews/inspections have proven over time to be one of the most effective practices available for ensuring quality products and on-time deliveries. Many studies have demonstrated their benefits, both within NASA and across industry. Peer reviews/inspections improve quality and reduce cost by reducing rework. The studies have shown that the rework effort saved not only pays for the effort spent on inspections, but also provides additional cost savings on the project. By removing defects at their origin (e.g., requirements and design documents, test plans and procedures, software code, etc.), inspections prevent defects from propagating through multiple phases and work products, and reduce the overall amount of rework necessary on projects. In addition, improved team efficiency is a side effect of peer reviews/inspections (e.g., by improving team communication, more quickly bringing new members up to speed, and educating project members about effective development practices).

How to Perform Technical Peer Reviews/Inspections
Figure N-1 shows a diagram of the peer review/inspection stages, and the text below the figure explains how to perform each of the stages. (Figure N-2, at the end of the appendix, summarizes the information as a quick reference guide.)

[Figure N-1 The peer review/inspection process. The figure depicts the process stages (Planning, Overview Meeting, Preparation, Inspection Meeting, Rework, and Follow-Up, plus an optional Third Hour), the participants in each stage (moderator, author, inspectors, reader, and recorder), and the forms produced (inspection announcement, individual preparation logs, inspection defect list, detailed inspection report, and inspection summary report).]


It is recommended that the moderator review the Planning Inspection Schedule and Estimating Staff Hours, Guidelines for Successful Inspections, and 10 Basic Rules of Inspections in Figure N-2 before beginning the planning stage. (Note: NPR 7150.2, NASA Software Engineering Requirements defines Agency requirements on the use of peer reviews and inspections for software development. NASA peer review/inspection training is offered by the NASA Office of the Chief Engineer.)

A. Planning
The moderator of the peer review/inspection performs the following activities.[1] (Note: Where activities have an *, the moderator records the time on the inspection summary report.)
1. Determine whether peer review/inspection entrance criteria have been met.
2. Determine whether an overview of the product is needed.
3. Select the peer review/inspection team and assign roles. For guidance on roles, see Roles of Participants in Figure N-2 at the end of this appendix. Reviewers have a vested interest in the work product (e.g., they are peers representing areas of the life cycle affected by the material being reviewed).
4. Determine if the size of the product is within the prescribed guidelines for the type of inspection. (See Meeting Rate Guidelines in Figure N-2 for guidelines on the optimal number of pages or lines of code to inspect for each type of inspection.) If the product exceeds the prescribed guidelines, break the product into parts and inspect each part separately. (It is highly recommended that the peer review/inspection meeting not exceed 2 hours.)
5. Schedule the overview (if one is needed).
6. Schedule peer review/inspection meeting time and place.
7. Prepare and distribute the inspection announcement and package. Include in the package the product to be reviewed and the appropriate checklist for the peer review/inspection.
8. Record total time spent in planning.*

B. Overview Meeting
1. Moderator runs the meeting, and the author presents background information to the reviewers.
2. Record total time spent in the overview.*

C. Peer Review/Inspection Preparation
1. Peers review the checklist definitions of defects.
2. Examine materials for understanding and possible defects.
3. Prepare for assigned role in peer review/inspection.
4. Complete and turn in individual preparation log to the moderator.
5. The moderator reviews the individual preparation logs, makes a Go or No-Go decision, and organizes the inspection meeting.
6. Record total time spent in the preparation.*

D. Peer Review/Inspection Meeting
1. The moderator introduces people and identifies their peer review/inspection roles.
2. The reader presents work products to the peer review/inspection team in a logical and orderly manner.
3. Peer reviewers/inspectors find and classify defects by severity, category, and type. (See Classification of Defects in Figure N-2.)
4. The recorder writes the major and minor defects on the inspection defect list (for definitions of major and minor, see the Severity section of Figure N-2; a minimal sketch of such a defect record follows these process steps).
5. Steps 1 through 4 are repeated until the review of the product is completed.
6. Open issues are assigned to peer reviewers/inspectors if irresolvable discrepancies occur.
7. Summarize the number of defects and their classification on the detailed inspection report.
8. Determine the need for a reinspection or third hour. Optional: Trivial defects (e.g., redlined documents) can be given directly to the author at the end of the inspection.

[1] Langley Research Center, Instructional Handbook for Formal Inspections. This document provides more detailed instructions on how to perform technical peer reviews/inspections. It also provides templates for the forms used in the peer review/inspection process described above: inspection announcement, individual preparation log, inspection defect list, detailed inspection report, and the inspection summary report.


9. The moderator obtains an estimate for rework time and completion date from the author, and does the same for action items if appropriate.
10. The moderator assigns writing of change requests and/or problem reports (if needed).
11. Record total time spent in the peer review/inspection meeting.*

E. Third Hour
1. Complete assigned action items and provide information to the author.
2. Attend third hour meeting at author's request.
3. Provide time spent in third hour to the moderator.*

F. Rework
1. All major defects noted in the inspection defect list are resolved by the author.
2. Minor and trivial defects (which would not result in faulty execution) are resolved at the discretion of the author as time and cost permit.
3. Record total time spent in the rework on the inspection defect list.

G. Followup
1. The moderator verifies all major defects have been corrected and no secondary defects have been introduced.
2. The moderator ensures all open issues are resolved and verifies all success criteria for the peer review/inspection are met.
3. Record total time spent in rework and followup.*
4. File the inspection package.
5. The inspection summary report is distributed.
6. Communicate that the peer review/inspection has been passed.
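During the inspection meeting (steps D.3 and D.4), each defect is recorded with a severity, a category, and a type, as defined under Classification of Defects in Figure N-2. The sketch below shows one minimal way such a record could be captured; the class and field values are illustrative assumptions, not a required format for the inspection defect list.

```python
# Illustrative sketch of an inspection defect record; the class and example values
# are hypothetical, not a mandated format for the inspection defect list.
from dataclasses import dataclass

SEVERITIES = ("major", "minor", "trivial")
CATEGORIES = ("missing", "wrong", "extra")

@dataclass
class Defect:
    location: str     # where in the work product the defect was found
    description: str  # what is wrong
    severity: str     # major, minor, or trivial
    category: str     # missing, wrong, or extra
    defect_type: str  # e.g., clarity, completeness, interface, data usage

    def __post_init__(self):
        # Reject values outside the classification scheme.
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Example modeled on the sample classification shown in Figure N-2.
d = Defect(
    location="line 169",
    description="wrong index used to calculate J while counting leading spaces in NAME",
    severity="major",
    category="wrong",
    defect_type="data usage",
)
print(d)
```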


Figure N-2 Peer reviews/inspections quick reference guide

Planning inspection schedule and estimating staff hours (historically, complete inspections have averaged 30.5 total staff hours for 5-person teams; the entire inspection process should be completed from start to finish within a 3-week period):
• Planning: 1 to 3 staff hours
• Overview meeting (used on approximately 17% of inspections): 1 hour x number of inspectors, plus author preparation time
• Preparation: 2 hours x number of inspectors
• Inspection meeting: 2 hours x number of inspectors
• Rework: 5 to 20 staff hours (author)
• Follow-up: 1 to 3 staff hours (author and moderator)
• Third hour (optional, used on approximately 6% of inspections): 0.5 hour x number of inspectors
(A simple estimating sketch based on these figures follows this figure.)

Transition times between stages:
1. 1-day minimum
2. 5-day minimum, when included
3. 3- to 5-day minimum for inspectors to fit preparation time into normal work schedule
4. 3- to 5-day minimum for inspectors to fit preparation time into normal work schedule
5. 4-hour minimum prior to inspection meeting
6. Immediate: rework can begin as soon as the inspection meeting ends
7. 1 day recommended
8. Minimum possible time
9. 1-week maximum from end of inspection meeting
10. 2-week maximum

Meeting length: overview, 0.5 to 1 hour (author preparation for the overview: 3 to 4 hours over 3 to 5 working days); inspection, 2 hours maximum; third hour, 1 to 2 hours.

Meeting rate guidelines for various inspection types (assumes a 2-hour inspection meeting; scale down planned meeting duration for shorter work products):
• R0: target 20 pages per 2 hours (range 10 to 30 pages)
• R1: target 20 pages (range 10 to 30 pages)
• I0: target 30 pages (range 20 to 40 pages)
• I1: target 35 pages (range 25 to 45 pages)
• I2: target 500 lines of source code (range 400 to 600 lines); flight software and other highly complex code segments should proceed at about half this rate
• IT1: target 30 pages (range 20 to 40 pages)
• IT2: target 35 pages (range 25 to 45 pages)

Types of inspections: SY1 System Requirements; SY2 System Design; SU1 Subsystem Requirements; SU2 Subsystem Design; R1 Software Requirements; I0 Architectural Design; I1 Detailed Design; I2 Source Code; IT1 Test Plan; IT2 Test Procedures and Functions.

Guidelines for successful inspections: train moderators, inspectors, and managers; no more than 25 percent of developers' time should be devoted to inspections; inspect 100 percent of the work product; be prepared; share responsibility for work product quality; be willing to associate and communicate; avoid judgmental language; do not evaluate the author; have at least one positive and negative input; raise issues, don't resolve them; avoid discussions of style; stick to the standard or change it; be technically competent; record all issues in public; stick to technical issues; distribute inspection documents as soon as possible; let the author determine when the work product is ready for inspection; keep accurate statistics.

10 basic rules of inspections:
1. Inspections are carried out at a number of points inside phases of the life cycle. Inspections are not substitutes for milestone reviews.
2. Inspections are carried out by peers representing areas of the life cycle affected by the material being inspected (usually limited to 6 or fewer people). All inspectors should have a vested interest in the work product.
3. Management is not present during inspections. Inspections are not to be used as a tool to evaluate workers.
4. Inspections are led by a trained moderator.
5. Trained inspectors are assigned roles.
6. Inspections are carried out in a prescribed series of steps.
7. The inspection meeting is limited to 2 hours.
8. Checklists of questions are used to define the task and to stimulate defect finding.
9. Material is covered during the inspection meeting within an optimal page rate, which has been found to give maximum error-finding ability.
10. Statistics on number of defects, types of defects, and time expended by engineers on inspections are kept.

Roles of participants:
• Moderator: responsible for conducting the inspection process and collecting inspection data. Plays a key role in all stages of the process except rework. Required to perform special duties during an inspection in addition to an inspector's tasks.
• Inspectors: responsible for finding defects in the work product from a general point of view, as well as defects that affect their area of expertise.
• Author: provides information about the work product during all stages of the process. Responsible for correcting all major defects and any minor and trivial defects that cost and schedule permit. Performs the duties of an inspector.
• Reader: guides the team through the work product during the inspection meeting. Reads or paraphrases the work product in detail. Should be an inspector from the same (or next) life-cycle phase as the author. Performs the duties of an inspector in addition to the reader's role.
• Recorder: accurately records each defect found during the inspection meeting on the inspection defect list. Performs the duties of an inspector in addition to the recorder's role.

Classification of defects:
• Severity. Major: an error that would cause a malfunction or prevents attainment of an expected or specified result, or any error that would in the future result in an approved change request or failure report. Minor: a violation of standards, guidelines, or rules that would not result in a deviation from requirements if not corrected, but could result in difficulties in terms of operations, maintenance, or future development. Trivial: editorial errors such as spelling, punctuation, and grammar that do not cause errors or change requests; recorded only as redlines and presented directly to the author. The author is required to correct all major defects and should correct minor and trivial defects as time and cost permit.
• Category: missing, wrong, or extra.
• Type: types of defects are derived from headings on the checklist used for the inspection and can be standardized across inspections from all phases of the life cycle. A suggested standard set of defect types is: clarity, completeness, compliance, consistency, correctness/logic, data usage, fault tolerance, functionality, interface, level of detail, maintainability, performance, reliability, testability, traceability, and other.

Example of a defect classification as recorded on the inspection defect list: description, "Line 169 – While counting the number of leading spaces in variable NAME, the wrong 'I' is used to calculate 'J'"; classification, major defect, category "wrong," type "data usage."

Origin: based on JCK/LLW/SSP/HS: 10/92.
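The per-stage staff-hour figures above can be turned into a quick planning estimate for a single inspection. The sketch below simply applies those figures for a given team size; the midpoints used for the ranged stages (planning, rework, follow-up) are illustrative assumptions and should be adjusted to local experience.

```python
# Illustrative estimate of total staff hours for one inspection, based on the
# per-stage figures in Figure N-2. Midpoints for the ranged stages are assumptions.

def estimate_staff_hours(num_inspectors, include_overview=False, include_third_hour=False):
    hours = 0.0
    hours += 2.0                       # planning: 1 to 3 hours (midpoint assumed)
    if include_overview:               # overview used on roughly 17% of inspections
        hours += 1.0 * num_inspectors  # author preparation time not included here
    hours += 2.0 * num_inspectors      # preparation
    hours += 2.0 * num_inspectors      # inspection meeting
    if include_third_hour:             # third hour used on roughly 6% of inspections
        hours += 0.5 * num_inspectors
    hours += 12.5                      # rework: 5 to 20 hours (midpoint assumed)
    hours += 2.0                       # follow-up: 1 to 3 hours (midpoint assumed)
    return hours

# A 5-person team without an overview or third hour:
# 2 + (2 x 5) + (2 x 5) + 12.5 + 2 = 36.5 staff hours
print(estimate_staff_hours(5))
```

Because rework varies so widely (5 to 20 hours), the estimate is most useful for comparing staffing options rather than predicting a single inspection's cost.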


Appendix O: Tradeoff Examples

Table O-1 Typical Tradeoffs for Space Systems

Development Related
• Custom versus commercial-off-the-shelf
• Light parts (expensive) versus heavy parts (less expensive)
• On-board versus remote processing
• Radio frequency versus optical links
• Levels of margin versus cost/risk
• Class S versus non-class S parts
• Radiation-hardened versus standard components
• Levels of redundancy
• Degrees of quality assurance
• Built-in test versus remote diagnostics
• Types of environmental exposure prior to operation
• Level of test (system versus subsystem)
• Various life-cycle approaches (e.g., waterfall versus spiral versus incremental)

Operations and Support Related
• Upgrade versus new start
• Manned versus unmanned
• Autonomous versus remotely controlled
• System of systems versus stand-alone system
• One long-life unit versus many short-life units
• Low Earth orbit versus medium Earth orbit versus geostationary orbit versus high Earth orbit
• Single satellite versus constellation
• Launch vehicle type (e.g., Atlas versus Titan)
• Single stage versus multistage launch
• Repair in-situ versus bring down to ground
• Commercial versus Government assets
• Limited versus public access
• Controlled versus uncontrolled reentry

Table O-2 Typical Tradeoffs in the Acquisition Process

• Mission needs analysis: prioritize identified user needs.
• Concept exploration (concept and technology development): (1) compare new technology with proven concepts; (2) select concepts best meeting mission needs; (3) select alternative system configurations; (4) focus on feasibility and affordability.
• Demonstration/validation: (1) select technology; (2) reduce alternative configurations to a testable number.
• Full-scale development (system development and demonstration): (1) select component/part designs; (2) select test methods; (3) select operational test and evaluation quantities.
• Production: (1) examine effectiveness of all proposed design changes; (2) perform make/buy, process, rate, and location decisions.

Table O-3 Typical Tradeoffs Throughout the Project Life Cycle

• Pre-Phase A: problem selection; upgrade versus new start
• Phase A: on-board versus ground processing; low Earth orbit versus geostationary orbit
• Phase B: levels of redundancy; radio frequency links versus optical links
• Phases C&D: single source versus multiple suppliers; level of testing
• Phases D&E: platform STS-28 versus STS-3a; launch go-ahead (Go or No-Go)
• Phases E&F: adjust orbit daily versus weekly; deorbit now versus later
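Tradeoffs like those in Tables O-1 through O-3 are commonly resolved with a structured evaluation such as a weighted scoring comparison of the alternatives. The short sketch below shows one common form of that calculation; the criteria, weights, and scores are invented for illustration and are not drawn from this appendix.

```python
# Illustrative weighted-scoring sketch for comparing trade study alternatives.
# Criteria, weights, and scores are hypothetical examples only.

criteria_weights = {"cost": 0.4, "risk": 0.3, "performance": 0.3}  # weights sum to 1.0

# Scores on a 1-10 scale (higher is better) for each alternative against each criterion.
alternatives = {
    "custom build": {"cost": 4, "risk": 6, "performance": 9},
    "commercial-off-the-shelf": {"cost": 8, "risk": 7, "performance": 6},
}

def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(alternatives.items(),
                           key=lambda kv: weighted_score(kv[1], criteria_weights),
                           reverse=True):
    print(f"{name}: {weighted_score(scores, criteria_weights):.1f}")
```

A sketch like this only ranks alternatives against the stated criteria; sensitivity of the ranking to the chosen weights should always be examined before a decision is made.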


Appendix P: SOW Review Checklist

Editorial Checklist
1. Is the SOW requirement in the form "who" shall "do what"? An example is, "The Contractor shall (perform, provide, develop, test, analyze, or other verb followed by a description of what)." Example SOW requirements:
• The Contractor shall design the XYZ flight software…
• The Contractor shall operate the ABC ground system…
• The Contractor shall provide maintenance on the following…
• The Contractor shall report software metrics monthly…
• The Contractor shall integrate the PQR instrument with the spacecraft…
2. Is the SOW requirement a simple sentence that contains only one requirement? Compound sentences that contain more than one SOW requirement need to be split into multiple simple sentences. (For example, "The Contractor shall do ABC and perform XYZ" should be rewritten as "The Contractor shall do ABC" and "The Contractor shall perform XYZ.")
3. Is the SOW composed of simple, cohesive paragraphs, each covering a single topic? Paragraphs containing many requirements should be divided into subparagraphs for clarity.
4. Has each paragraph and subparagraph been given a unique number or letter identifier? Is the numbering or lettering correct?
5. Is the SOW requirement in the active rather than the passive voice? Passive voice leads to vague statements. (For example, state, "The Contractor shall hold monthly management review meetings" instead of "Management review meeting shall be held monthly.")
6. Is the SOW requirement stated positively as opposed to negatively? (Replace statements such as, "The Contractor shall not exceed the budgetary limits specified" with "The contractor shall comply with the budgetary limits specified.")
7. Is the SOW requirement grammatically correct?
8. Is the SOW requirement free of typos, misspellings, and punctuation errors?
9. Have all acronyms been defined in an acronym list or spelled out in the first occurrence?
10. Have the quantities, delivery schedules, and delivery method been identified for each deliverable within the SOW or in a separate attachment/section?
11. Has the content of documents to be delivered been defined in a separate attachment/section and submitted with the SOW?
12. Has the file format of each electronic deliverable been defined (e.g., Microsoft—Project, Adobe—Acrobat PDF, National Instruments—LabVIEW VIs)?

Content Checklist
1. Are correct terms used to define the requirements?
• Shall = requirement (binds the contractor)
• Should = goal (leaves decision to contractor; avoid using this word)
• May = allowable action (leaves decision to contractor; avoid using this word)


• Will = facts or declaration of intent by the Government (use only in referring to the Government)
• Present tense (e.g., "is") = descriptive text only (avoid using in requirements statements; use "shall" instead)
• NEVER use "must"
2. Is the scope of the SOW clearly defined? Is it clear what you are buying?
3. Is the flow and organizational structure of the document logical and understandable? (See LPR 5000.2 "Procurement Initiator's Guide," Section 12, for helpful hints.) Is the text compatible with the title of the section it's under? Are subheadings compatible with the subject matter of headings?
4. Is the SOW requirement clear and understandable?
• Can the sentence be understood only one way?
• Will all terminology used have the same meaning to different readers without definition? Has any terminology for which this is not the case been defined in the SOW (e.g., in a definitions section or glossary)?
• Is it free from indefinite pronouns ("this," "that," "these," "those") without clear antecedents (e.g., replace statements such as, "These shall be inspected on an annual basis" with "The fan blades shall be inspected on an annual basis")?
• Is it stated concisely?
5. Have all redundant requirements been removed? Redundant requirements can reduce clarity, increase ambiguity, and lead to contradictions.
6. Is the requirement consistent with other requirements in the SOW, without contradicting itself, without using the same terminology with different meanings, and without using different terminology for the same thing?
7. If the SOW includes the delivery of a product (as opposed to just a services SOW):
• Are the technical product requirements in a separate section or attachment, apart from the activities that the contractor is required to perform? The intent is to clearly delineate between the technical product requirements and requirements for activities the contractor is to perform (e.g., separate SOW statements "The contractor shall" from technical product requirement statements such as "The system shall" and "The software shall").
• Are references to the product and its subelements in the SOW at the level described in the technical product requirements?
• Is the SOW consistent with and does it use the same terminology as the technical product requirements?
8. Is the SOW requirement free of ambiguities? Make sure the SOW requirement is free of vague terms (for example, "as appropriate," "any," "either," "etc.," "and/or," "support," "necessary," "but not limited to," "be capable of," "be able to").
9. Is the SOW requirement verifiable? Make sure the SOW requirement is free of unverifiable terms (for example, "flexible," "easy," "sufficient," "safe," "ad hoc," "adequate," "accommodate," "user-friendly," "usable," "when required," "if required," "appropriate," "fast," "portable," "lightweight," "small," "large," "maximize," "minimize," "optimize," "sufficient," "robust," "quickly," "easily," "clearly," other "ly" words, other "ize" words). (A simple automated scan for such terms is sketched after this checklist.)
10. Is the SOW requirement free of implementation constraints? SOW requirements should state WHAT the contractor is to do, NOT HOW they are to do it (for example, "The Contractor shall design the XYZ flight software" states WHAT the contractor is to do, while "The Contractor shall design the XYZ software using object-oriented design" states HOW the contractor is to implement the activity of designing the software. In addition, too low a level of decomposition of activities can result in specifying how the activities are to be done, rather than what activities are to be done).
11. Is the SOW requirement stated in such a way that compliance with the requirement is verifiable? Do the means exist to measure or otherwise assess its accomplishment? Can a method for verifying compliance with the requirement be defined (e.g., described in a quality assurance surveillance plan)?
12. Is the background material clearly labeled as such (i.e., included in the background section of the SOW if one is used)?


13. Are any assumptions able to be validated and restated as requirements? If not, the assumptions should be deleted from the SOW. Assumptions should be recorded in a document separate from the SOW.
14. Is the SOW complete, covering all of the work the contractor is to do?
   • Are all of the activities necessary to develop the product included (e.g., system, software, and hardware activities for the following: requirements, architecture, and design development; implementation and manufacturing; verification and validation; integration testing and qualification testing)?
   • Are all safety, reliability, maintainability (e.g., mean time to restore), availability, quality assurance, and security requirements defined for the total life of the contract?
   • Does the SOW include a requirement for the contractor to have a quality system (e.g., ISO certified), if one is needed?
   • Are all of the necessary management and support requirements included in the SOW (for example, project management; configuration management; systems engineering; system integration and test; risk management; interface definition and management; metrics collection, reporting, analysis, and use; acceptance testing; NASA Independent Verification and Validation (IV&V) support tasks)?
   • Are clear performance standards included, and are they sufficient to measure contractor performance (e.g., systems, software, hardware, and service performance standards for schedule, progress, size, stability, cost, resources, and defects)? See Langley's Guidance on System and Software Metrics for Performance-Based Contracting for more information and examples on performance standards.
   • Are all of the necessary service activities included (for example, transition to operations, operations, maintenance, database administration, system administration, and data management)?
   • Are all of the Government surveillance activities included (for example, project management meetings; decision points; requirements and design peer reviews for systems, software, and hardware; demonstrations; test readiness reviews; other desired meetings (e.g., technical interchange meetings); collection and delivery of metrics for systems, software, hardware, and services (to provide visibility into development progress and cost); electronic access to technical and management data; and access to subcontractors and other team members for the purposes of communication)?
   • Are the Government requirements for contractor inspection and testing addressed, if necessary?
   • Are the requirements for contractor support of Government acceptance activities addressed, if necessary?
15. Does the SOW only include contractor requirements? It should not include Government requirements.
16. Does the SOW give contractors full management responsibility and hold them accountable for the end result?
17. Is the SOW sufficiently detailed to permit a realistic estimate of cost, labor, and other resources required to accomplish each activity?
18. Are all deliverables identified (e.g., status, financial, product deliverables)?
    The following are examples of deliverables that are sometimes overlooked: management and development plans; technical progress reports that identify current work status, problems and proposed corrective actions, and planned work; financial reports that identify costs (planned, actual, projected) by category (e.g., software, hardware, quality assurance); products (e.g., source code, maintenance/user manual, test equipment); and discrepancy data (e.g., defect reports, anomalies). All deliverables should be specified in a separate document except for technical deliverables (e.g., hardware, software, prototypes), which should be included in the SOW.
19. Does each technical and management deliverable track to a paragraph in the SOW? Each deliverable should have a corresponding SOW requirement for its preparation (i.e., the SOW identifies the title of the deliverable in parentheses after the task requiring the generation of the deliverable).
20. Are all reference citations complete?
   • Are the complete number, title, and date or version of each reference specified?
   • Does the SOW reference the standards and other compliance documents in the proper SOW paragraphs?


   • Is the correct reference document cited, and is it referenced at least once?
   • Is the reference document either furnished with the SOW or available at a location identified in the SOW?
   • If the referenced standard or compliance document is only partially applicable, does the SOW explicitly and unambiguously reference the portion that is required of the contractor?
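The term screens in checklist items 8 and 9 can be partially automated. The following Python sketch is illustrative only and is not part of the handbook's prescribed review process; the term lists are an abridged subset of those given above, and the script name, file name, and usage shown are hypothetical.

    import re
    import sys

    # Abridged vague and unverifiable terms drawn from checklist items 8 and 9 (illustrative subset).
    VAGUE_TERMS = ["as appropriate", "and/or", "but not limited to",
                   "be capable of", "be able to", "support", "necessary"]
    UNVERIFIABLE_TERMS = ["flexible", "easy", "sufficient", "safe", "adequate",
                          "user-friendly", "when required", "if required", "appropriate",
                          "fast", "maximize", "minimize", "optimize", "robust",
                          "quickly", "easily", "clearly"]

    def flag_terms(sow_text: str) -> list[tuple[int, str, str]]:
        """Return (line number, offending term, line text) for each occurrence found."""
        findings = []
        for lineno, line in enumerate(sow_text.splitlines(), start=1):
            for term in VAGUE_TERMS + UNVERIFIABLE_TERMS:
                # Whole-word, case-insensitive match so that, e.g., "safety" does not trip on "safe".
                if re.search(r"\b" + re.escape(term) + r"\b", line, flags=re.IGNORECASE):
                    findings.append((lineno, term, line.strip()))
        return findings

    if __name__ == "__main__":
        # Hypothetical usage: python sow_scan.py draft_sow.txt
        with open(sys.argv[1], encoding="utf-8") as f:
            for lineno, term, text in flag_terms(f.read()):
                print(f"line {lineno}: '{term}' -> {text}")

A hit list produced this way is only a screening aid; each flagged statement still needs a reviewer's judgment against the checklist questions above.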


Appendix Q: Project Protection Plan Outline

The following outline will assist systems engineers in preparing a project protection plan. The plan is a living document that will be written and updated as the project progresses through major milestones and ultimately through end of life.

1. Introduction
   1.1 Protection Plan Overview
   1.2 Project Overview
   1.3 Acquisition Status
2. References
   2.1 Directives and Instructions
   2.2 Requirements
   2.3 Studies and Analyses
3. Threats
   3.1 Threats: Hostile Action
      3.1.1 Overview
      3.1.2 Threat Characterization
         3.1.2.1 Cyber Attack
         3.1.2.2 Electronic Attack
         3.1.2.3 Lasers
         3.1.2.4 Ground Attack
         3.1.2.5 Asymmetric Attack on Critical Commercial Infrastructure
         3.1.2.6 Anti-Satellite Weapons
         3.1.2.7 High-Energy Radio Frequency Weapons
         3.1.2.8 Artificially Enhanced Radiation Environment
   3.2 Threats: Environmental
      3.2.1 Overview
      3.2.2 Threat Characterization
         3.2.2.1 Natural Environment Storms
         3.2.2.2 Earthquakes
         3.2.2.3 Floods
         3.2.2.4 Fires
         3.2.2.5 Radiation Effects in the Natural Environment
         3.2.2.6 Radiation Effects to Spacecraft Electronics
4. Protection Vulnerabilities
   4.1 Ground Segment Vulnerabilities
      4.1.1 Command and Control Facilities
      4.1.2 Remote Tracking Stations
      4.1.3 Spacecraft Simulator(s)
      4.1.4 Mission Data Processing Facilities
      4.1.5 Flight Dynamic Facilities
      4.1.6 Flight Software Production/Verification/Validation Facilities


   4.2 Communications/Information Segment Vulnerabilities
      4.2.1 Command Link
      4.2.2 Telemetry Link (Mission Data)
      4.2.3 Telemetry Link (Engineering Data)
      4.2.4 Ground Network
   4.3 Space Segment Vulnerabilities
      4.3.1 Spacecraft Physical Characteristics
      4.3.2 Spacecraft Operational Characteristics
      4.3.3 Orbital Parameters
      4.3.4 Optical Devices (Sensors/Transmitters/Receivers)
      4.3.5 Communications Subsystem
      4.3.6 Command and Data Handling Subsystem
      4.3.7 Instruments
   4.4 Launch Segment Vulnerabilities
      4.4.1 Launch Parameters
      4.4.2 Launch Site Integration and Test Activities
   4.5 Commercial Infrastructure Vulnerabilities
      4.5.1 Electrical Power
      4.5.2 Natural Gas
      4.5.3 Telecommunications
      4.5.4 Transportation
5. Protection Countermeasures
   5.1 Protection Strategy
   5.2 Mission Threat Mitigation
   5.3 Mission Restoration Options
   5.4 Mission Survivability Characteristics
6. Debris Mitigation
   6.1 Design Guidelines
   6.2 End-of-Life Mitigation Procedures
   6.3 Collision Avoidance
7. Critical Program Information and Technologies
   7.1 Critical Program Information Elements
   7.2 Critical Information Program
8. Program Protection Costs
   8.1 System Trade Analyses
   8.2 Cost/Benefit Analyses
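Because the protection plan is a living document, some teams may find it convenient to track the status of the outline's sections in machine-readable form. The Python sketch below is purely illustrative and is not a NASA-prescribed format: the section list is abbreviated to the top-level headings above, and the milestone label and status values are hypothetical.

    from dataclasses import dataclass, field

    # Top-level sections of the project protection plan outline (abbreviated; illustrative only).
    PLAN_SECTIONS = [
        "Introduction", "References", "Threats", "Protection Vulnerabilities",
        "Protection Countermeasures", "Debris Mitigation",
        "Critical Program Information and Technologies", "Program Protection Costs",
    ]

    @dataclass
    class ProtectionPlanStatus:
        """Tracks, for one milestone, which outline sections have been updated."""
        milestone: str  # e.g., "PDR" (hypothetical label)
        updated: dict[str, bool] = field(
            default_factory=lambda: {s: False for s in PLAN_SECTIONS})

        def mark_updated(self, section: str) -> None:
            if section not in self.updated:
                raise KeyError(f"Unknown plan section: {section}")
            self.updated[section] = True

        def open_items(self) -> list[str]:
            """Sections still to be revisited before the milestone review."""
            return [s for s, done in self.updated.items() if not done]

    # Hypothetical example: record progress toward a design review.
    status = ProtectionPlanStatus(milestone="PDR")
    status.mark_updated("Threats")
    status.mark_updated("Protection Vulnerabilities")
    print(status.open_items())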


References

This appendix lists the references cited in the sections of the handbook, along with sources used in developing the material in the indicated sections. See the Bibliography for complete citations.

Section 2.0 Fundamentals of Systems Engineering
Griffin, Michael D. "System Engineering and the Two Cultures of Engineering." 2007.
Rechtin, Eberhardt. Systems Architecting of Organizations: Why Eagles Can't Swim. 2000.

Section 3.4 Project Phase A: Concept and Technology Development
NASA. NASA Safety Standard 1740.14, Guidelines and Assessment Procedures for Limiting Orbital Debris. 1995.

Section 4.1 Stakeholder Expectations Definition
ANSI. Guide for the Preparation of Operational Concept Documents. 1992.

Section 4.2 Technical Requirements Definition
NASA. NASA Space Flight Human System Standard. 2007.

Section 4.3 Logical Decomposition
Institute of Electrical and Electronics Engineers. Standard Glossary of Software Engineering Terminology. 1999.

Section 4.4 Design Solution
Blanchard, Benjamin S. System Engineering Management. 2006.
DOD. MIL-STD-1472, Human Engineering. 2003.
Federal Aviation Administration. Human Factors Design Standard. 2003.
International Organization for Standardization. Quality Systems Aerospace—Model for Quality Assurance in Design, Development, Production, Installation, and Servicing. 1999.
NASA. NASA Space Flight Human System Standard. 2007.
NASA. Planning, Developing, and Maintaining an Effective Reliability and Maintainability (R&M) Program. 1998.
U.S. Army Research Laboratory. MIL HDBK 727, Design Guidance for Producibility. 1990.
U.S. Nuclear Regulatory Commission. Human-System Interface Design Review Guidelines. 2002.

Section 5.1 Product Implementation
American Institute of Aeronautics and Astronautics. AIAA Guide for Managing the Use of Commercial Off the Shelf (COTS) Software Components for Mission-Critical Systems. 2006.
International Council on Systems Engineering. Systems Engineering Handbook. 2006.
NASA. Off-the-Shelf Hardware Utilization in Flight Hardware Development. 2004.

Section 5.3 Verification
Electronic Industries Alliance. Processes for Engineering a System. 1999.
Institute of Electrical and Electronics Engineers. Standard for Application and Management of the Systems Engineering Process. 1998.
International Organization for Standardization. Systems Engineering—System Life Cycle Processes. 2002.
NASA. Project Management: Systems Engineering & Project Control Processes and Requirements. 2004.
U.S. Air Force. SMC Systems Engineering Primer and Handbook. 2005.

Section 5.4 Validation
Electronic Industries Alliance. Processes for Engineering a System. 1999.
Institute of Electrical and Electronics Engineers. Standard for Application and Management of the Systems Engineering Process. 1998.


International Organization for Standardization. Systems Engineering—System Life Cycle Processes. 2002.
NASA. Project Management: Systems Engineering & Project Control Processes and Requirements. 2004.
U.S. Air Force. SMC Systems Engineering Primer and Handbook. 2005.

Section 5.5 Product Transition
DOD. Defense Acquisition Guidebook. 2004.
Electronic Industries Alliance. Processes for Engineering a System. 1999.
International Council on Systems Engineering. Systems Engineering Handbook. 2006.
International Organization for Standardization. Systems Engineering—A Guide for the Application of ISO/IEC 15288. 2003.
—. Systems Engineering—System Life Cycle Processes. 2002.
Naval Air Systems Command. Systems Command SE Guide: 2003. 2003.

Section 6.1 Technical Planning
American Institute of Aeronautics and Astronautics. AIAA Guide for Managing the Use of Commercial Off the Shelf (COTS) Software Components for Mission-Critical Systems. 2006.
Institute of Electrical and Electronics Engineers. Standard for Application and Management of the Systems Engineering Process. 1998.
Martin, James N. Systems Engineering Guidebook: A Process for Developing Systems and Products. 1996.
NASA. NASA Cost Estimating Handbook. 2004.
—. Standard for Models and Simulations. 2006.

Section 6.4 Technical Risk Management
Clemen, R., and T. Reilly. Making Hard Decisions with DecisionTools Suite. 2002.
Dezfuli, H. "Role of System Safety in Risk-Informed Decisionmaking." 2005.
Kaplan, S., and B. John Garrick. "On the Quantitative Definition of Risk." 1981.
Morgan, M. Granger, and M. Henrion. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. 1990.
Stamatelatos, M., H. Dezfuli, and G. Apostolakis. "A Proposed Risk-Informed Decisionmaking Framework for NASA." 2006.
Stern, Paul C., and Harvey V. Fineberg, eds. Understanding Risk: Informing Decisions in a Democratic Society. 1996.
U.S. Nuclear Regulatory Commission. White Paper on Risk-Informed and Performance-Based Regulation. 1998.

Section 6.5 Configuration Management
American Society of Mechanical Engineers. Engineering Drawing Practices. 2004.
—. Types and Applications of Engineering Drawings. 1999.
DOD. Defense Logistics Agency (DLA) Cataloging Handbook.
—. MIL-HDBK-965, Parts Control Program. 1996.
—. MIL-STD-881B, Work Breakdown Structure (WBS) for Defense Materiel Items. 1993.
DOD, U.S. General Services Administration, and NASA. Acquisition of Commercial Items. 2007.
—. Quality Assurance, Nonconforming Supplies or Services. 2007.
Institute of Electrical and Electronics Engineers. EIA Guide for Information Technology Software Life Cycle Processes—Life Cycle Data. 1997.
—. IEEE Guide to Software Configuration Management. 1987.
—. Standard for Software Configuration Management Plans. 1998.
International Organization for Standardization. Information Technology—Software Life Cycle Processes Configuration Management. 1998.
—. Quality Management—Guidelines for Configuration Management. 1995.
NASA. NOAA-N Prime Mishap Investigation Final Report. 2004.
National Defense Industrial Association. Data Management. 2004.
—. National Consensus Standard for Configuration Management. 1998.


Section 6.6 Technical Data Management
National Defense Industrial Association. Data Management. 2004.
—. National Consensus Standard for Configuration Management. 1998.

Section 6.8 Decision Analysis
Blanchard, Benjamin S. System Engineering Management. 2006.
Blanchard, Benjamin S., and Wolter Fabrycky. Systems Engineering and Analysis. 2006.
Clemen, R., and T. Reilly. Making Hard Decisions with DecisionTools Suite. 2002.
Keeney, Ralph L. Value-Focused Thinking: A Path to Creative Decisionmaking. 1992.
Keeney, Ralph L., and Timothy L. McDaniels. "A Framework to Guide Thinking and Analysis Regarding Climate Change Policies." 2001.
Keeney, Ralph L., and Howard Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. 1993.
Morgan, M. Granger, and M. Henrion. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. 1990.
Saaty, Thomas L. The Analytic Hierarchy Process. 1980.

Section 7.1 Engineering with Contracts
Adams, R. J., S. Eslinger, P. Hantos, K. L. Owens, et al. Software Development Standard for Space Systems. 2005.
DOD, U.S. General Services Administration, and NASA. Contracting Office Responsibilities. 2007.
Eslinger, Suellen. Software Acquisition Best Practices for the Early Acquisition Phases. 2004.
Hofmann, Hubert F., Kathryn M. Dodson, Gowri S. Ramani, and Deborah K. Yedlin. Adapting CMMI® for Acquisition Organizations: A Preliminary Report. 2006.
International Council on Systems Engineering. Systems Engineering Handbook: A "What To" Guide for All SE Practitioners. 2004.
The Mitre Corporation. Common Risks and Risk Mitigation Actions for a COTS-Based System.
NASA. Final Memorandum on NASA's Acquisition Approach Regarding Requirements for Certain Software Engineering Tools to Support NASA Programs. 2006.
—. The SEB Source Evaluation Process. 2001.
—. Solicitation to Contract Award. 2007.
—. Standard for Models and Simulations. 2006.
—. Statement of Work Checklist.
—. System and Software Metrics for Performance-Based Contracting.
Naval Air Systems Command. Systems Engineering Guide. 2003.

Section 7.2 Integrated Design Facilities
Miao, Y., and J. M. Haake. "Supporting Concurrent Design by Integrating Information Sharing and Activity Synchronization." 1998.

Section 7.4 Human Factors Engineering
Blanchard, Benjamin S., and Wolter Fabrycky. Systems Engineering and Analysis. 2006.
Chapanis, A. "The Error-Provocative Situation: A Central Measurement Problem in Human Factors Engineering." 1980.
DOD. Human Engineering Procedures Guide. 1987.
—. MIL-HDBK-46855A, Human Engineering Program Process and Procedures. 1996.
Eggemeier, F. T., and G. F. Wilson. "Performance and Subjective Measures of Workload in Multitask Environments." 1991.
Endsley, M. R., and M. D. Rogers. "Situation Awareness Information Requirements Analysis for En Route Air Traffic Control." 1994.
Fuld, R. B. "The Fiction of Function Allocation." 1993.
Glass, J. T., V. Zaloom, and D. Gates. "A Micro-Computer-Aided Link Analysis Tool." 1991.
Gopher, D., and E. Donchin. "Workload: An Examination of the Concept." 1986.
Hart, S. G., and C. D. Wickens. "Workload Assessment and Prediction." 1990.
Huey, B. M., and C. D. Wickens, eds. Workload Transition. 1993.
Jones, E. R., R. T. Hennessy, and S. Deutsch, eds. Human Factors Aspects of Simulation. 1985.
Kirwin, B., and L. K. Ainsworth. A Guide to Task Analysis. 1992.


Kurke, M. I. "Operational Sequence Diagrams in System Design." 1961.
Meister, David. Behavioral Analysis and Measurement Methods. 1985.
—. Human Factors: Theory and Practice. 1971.
Price, H. E. "The Allocation of Functions in Systems." 1985.
Shafer, J. B. "Practical Workload Assessment in the Development Process." 1987.

Section 7.6 Use of Metric System
DOD. DoD Guide for Identification and Development of Metric Standards. 2003.
Taylor, Barry. Guide for the Use of the International System of Units (SI). 2007.

Appendix F: Functional, Timing, and State Analysis
Buede, Dennis. The Engineering Design of Systems: Models and Methods. 2000.
Defense Acquisition University. Systems Engineering Fundamentals Guide. 2001.
Long, Jim. Relationships Between Common Graphical Representations in Systems Engineering. 2002.
NASA. Training Manual for Elements of Interface Definition and Control. 1997.
Sage, Andrew, and William Rouse. The Handbook of Systems Engineering and Management. 1999.

Appendix H: Integration Plan Outline
Federal Highway Administration and CalTrans. Systems Engineering Guidebook for ITS. 2007.

Appendix J: SEMP Content Outline
DOD. MIL-HDBK-881, Work Breakdown Structures for Defense Materiel Systems. 2005.
DOD Systems Management College. Systems Engineering Fundamentals. 2001.
Martin, James N. Systems Engineering Guidebook: A Process for Developing Systems and Products. 1996.
NASA. NASA Cost Estimating Handbook. 2004.
The Project Management Institute®. Practice Standards for Work Breakdown Structures. 2001.


Bibliography

Adams, R. J., et al. Software Development Standard for Space Systems, Aerospace Report No. TOR-2004(3909)-3537, Revision B. March 11, 2005.
American Institute of Aeronautics and Astronautics. AIAA Guide for Managing the Use of Commercial Off the Shelf (COTS) Software Components for Mission-Critical Systems, AIAA G-118-2006e. Reston, VA, 2006.
American National Standards Institute. Guide for the Preparation of Operational Concept Documents, ANSI/AIAA G-043-1992. Washington, DC, 1992.
American Society of Mechanical Engineers. Engineering Drawing Practices, ASME Y14.100. New York, 2004.
—. Types and Applications of Engineering Drawings, ASME Y14.24. New York, 1999.
Blanchard, Benjamin S. System Engineering Management, 6th ed. New Delhi: Prentice Hall of India Private Limited, 2006.
Blanchard, Benjamin S., and Wolter Fabrycky. Systems Engineering and Analysis, 6th ed. New Delhi: Prentice Hall of India Private Limited, 2006.
Buede, Dennis. The Engineering Design of Systems: Models and Methods. New York: Wiley & Sons, 2000.
Chapanis, A. "The Error-Provocative Situation: A Central Measurement Problem in Human Factors Engineering." In The Measurement of Safety Performance. Edited by W. E. Tarrants. New York: Garland STPM Press, 1980.
Clemen, R., and T. Reilly. Making Hard Decisions with DecisionTools Suite. Pacific Grove, CA: Duxbury Resource Center, 2002.
Defense Acquisition University. Systems Engineering Fundamentals Guide. Fort Belvoir, VA, 2001.
Department of Defense. DOD Architecture Framework, Version 1.5, Vol. 1. Washington, DC, 2007.
—. Defense Logistics Agency (DLA) Cataloging Handbook, H4/H8 Series. Washington, DC.
—. DoD Guide for Identification and Development of Metric Standards, SD-10. Washington, DC: DOD, Office of the Under Secretary of Defense, Acquisition, Technology, & Logistics, 2003.
—. DOD-HDBK-763, Human Engineering Procedures Guide. Washington, DC, 1987.
—. MIL-HDBK-965, Parts Control Program. Washington, DC, 1996.
—. MIL-HDBK-46855A, Human Engineering Program Process and Procedures. Washington, DC, 1996.
—. MIL-STD-881B, Work Breakdown Structure (WBS) for Defense Materiel Items. Washington, DC, 1993.
—. MIL-STD-1472, Human Engineering. Washington, DC, 2003.
DOD, Systems Management College. Systems Engineering Fundamentals. Fort Belvoir, VA: Defense Acquisition Press, 2001.
DOD, U.S. General Services Administration, and NASA. Acquisition of Commercial Items, 14CFR1214–Part 1214–Space Flight 48CFR1814. Washington, DC, 2007.
—. Contracting Office Responsibilities, 46.103(a). Washington, DC, 2007.
—. Quality Assurance, Nonconforming Supplies or Services, FAR Part 46.407. Washington, DC, 2007.
Dezfuli, H. "Role of System Safety in Risk-Informed Decisionmaking." In Proceedings of the NASA Risk Management Conference 2005. Orlando, December 7, 2005.
Eggemeier, F. T., and G. F. Wilson. "Performance and Subjective Measures of Workload in Multitask Environments." In Multiple-Task Performance. Edited by D. Damos. London: Taylor and Francis, 1991.
Electronic Industries Alliance. Processes for Engineering a System, ANSI/EIA-632. Arlington, VA, 1999.
Endsley, M. R., and M. D. Rogers. "Situation Awareness Information Requirements Analysis for En Route Air Traffic Control." In Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting. Santa Monica: Human Factors and Ergonomics Society, 1994.


Eslinger, Suellen. Software Acquisition Best Practices for the Early Acquisition Phases. El Segundo, CA: The Aerospace Corporation, 2004.
Federal Aviation Administration. HF-STD-001, Human Factors Design Standard. Washington, DC, 2003.
Federal Highway Administration, and CalTrans. Systems Engineering Guidebook for ITS, Version 2.0. Washington, DC: U.S. Department of Transportation, 2007.
Fuld, R. B. "The Fiction of Function Allocation." Ergonomics in Design (January 1993): 20–24.
Glass, J. T., V. Zaloom, and D. Gates. "A Micro-Computer-Aided Link Analysis Tool." Computers in Industry 16 (1991): 179–87.
Gopher, D., and E. Donchin. "Workload: An Examination of the Concept." In Handbook of Perception and Human Performance: Vol. II. Cognitive Processes and Performance. Edited by K. R. Boff, L. Kaufman, and J. P. Thomas. New York: John Wiley & Sons, 1986.
Griffin, Michael D., NASA Administrator. "System Engineering and the Two Cultures of Engineering." Boeing Lecture, Purdue University, March 28, 2007.
Hart, S. G., and C. D. Wickens. "Workload Assessment and Prediction." In MANPRINT: An Approach to Systems Integration. Edited by H. R. Booher. New York: Van Nostrand Reinhold, 1990.
Hofmann, Hubert F., Kathryn M. Dodson, Gowri S. Ramani, and Deborah K. Yedlin. Adapting CMMI® for Acquisition Organizations: A Preliminary Report, CMU/SEI-2006-SR-005. Pittsburgh: Software Engineering Institute, Carnegie Mellon University, 2006, pp. 338–40.
Huey, B. M., and C. D. Wickens, eds. Workload Transition. Washington, DC: National Academy Press, 1993.
Institute of Electrical and Electronics Engineers. EIA Guide for Information Technology Software Life Cycle Processes—Life Cycle Data, IEEE Std 12207.1. Washington, DC, 1997.
—. IEEE Guide to Software Configuration Management, ANSI/IEEE 1042. Washington, DC, 1987.
—. Standard for Application and Management of the Systems Engineering Process, IEEE Std 1220. Washington, DC, 1998.
—. Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990. Washington, DC, 1999.
—. Standard for Software Configuration Management Plans, IEEE Std 828. Washington, DC, 1998.
International Council on Systems Engineering. Systems Engineering Handbook, version 3. Seattle, 2006.
—. Systems Engineering Handbook: A "What To" Guide for All SE Practitioners, INCOSE-TP-2003-016-02, Version 2a. Seattle, 2004.
International Organization for Standardization. Information Technology—Software Life Cycle Processes Configuration Management, ISO TR 15846. Geneva, 1998.
—. Quality Management—Guidelines for Configuration Management, ISO 10007: 1995(E). Geneva, 1995.
—. Quality Systems Aerospace—Model for Quality Assurance in Design, Development, Production, Installation, and Servicing, ISO 9100/AS9100. Geneva: International Organization for Standardization, 1999.
—. Systems Engineering—A Guide for the Application of ISO/IEC 15288, ISO/IEC TR 19760: 2003. Geneva, 2003.
—. Systems Engineering—System Life Cycle Processes, ISO/IEC 15288: 2002. Geneva, 2002.
Jones, E. R., R. T. Hennessy, and S. Deutsch, eds. Human Factors Aspects of Simulation. Washington, DC: National Academy Press, 1985.
Kaplan, S., and B. John Garrick. "On the Quantitative Definition of Risk." Risk Analysis 1(1). 1981.
Keeney, Ralph L. Value-Focused Thinking: A Path to Creative Decisionmaking. Cambridge, MA: Harvard University Press, 1992.
Keeney, Ralph L., and Timothy L. McDaniels. "A Framework to Guide Thinking and Analysis Regarding Climate Change Policies." Risk Analysis 21(6): 989–1000. 2001.
Keeney, Ralph L., and Howard Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge, UK: Cambridge University Press, 1993.
Kirwin, B., and L. K. Ainsworth. A Guide to Task Analysis. London: Taylor and Francis, 1992.
Kurke, M. I. "Operational Sequence Diagrams in System Design." Human Factors 3: 66–73. 1961.
Long, Jim. Relationships Between Common Graphical Representations in Systems Engineering. Vienna, VA: Vitech Corporation, 2002.


Martin, James N. Processes for Engineering a System: An Overview of the ANSI/GEIA EIA-632 Standard and Its Heritage. New York: Wiley & Sons, 2000.
—. Systems Engineering Guidebook: A Process for Developing Systems and Products. Boca Raton: CRC Press, 1996.
Meister, David. Behavioral Analysis and Measurement Methods. New York: John Wiley & Sons, 1985.
—. Human Factors: Theory and Practice. New York: John Wiley & Sons, 1971.
Miao, Y., and J. M. Haake. "Supporting Concurrent Design by Integrating Information Sharing and Activity Synchronization." In Proceedings of the 5th ISPE International Conference on Concurrent Engineering Research and Applications (CE98). Tokyo, 1998, pp. 165–74.
The Mitre Corporation. Common Risks and Risk Mitigation Actions for a COTS-Based System. McLean, VA.
Morgan, M. Granger, and M. Henrion. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge, UK: Cambridge University Press, 1990.
NASA. Final Memorandum on NASA's Acquisition Approach Regarding Requirements for Certain Software Engineering Tools to Support NASA Programs, Assignment No. S06012. Washington, DC: NASA Office of Inspector General, 2006.
—. NASA Cost Estimating Handbook. Washington, DC, 2004.
—. NASA-STD-3001, NASA Space Flight Human System Standard Volume 1: Crew Health. Washington, DC, 2007.
—. NASA-STD-(I)-7009, Standard for Models and Simulations. Washington, DC, 2006.
—. NASA-STD-8719.13, Software Safety Standard, NASA Technical Standard, Rev B. Washington, DC, 2004.
—. NASA-STD-8729.1, Planning, Developing, and Maintaining an Effective Reliability and Maintainability (R&M) Program. Washington, DC, 1998.
—. NOAA N-Prime Mishap Investigation Final Report. Washington, DC, 2004.
—. NPD 2820.1, NASA Software Policy. Washington, DC, 2005.
—. NPD 8010.2, Use of the SI (Metric) System of Measurement in NASA Programs. Washington, DC, 2007.
—. NPD 8010.3, Notification of Intent to Decommission or Terminate Operating Space Systems and Terminate Missions. Washington, DC, 2004.
—. NPD 8020.7, Biological Contamination Control for Outbound and Inbound Planetary Spacecraft. Washington, DC, 1999.
—. NPD 8070.6, Technical Standards. Washington, DC, 2003.
—. NPD 8730.5, NASA Quality Assurance Program Policy. Washington, DC, 2005.
—. NPR 1441.1, NASA Records Retention Schedules. Washington, DC, 2003.
—. NPR 1600.1, NASA Security Program Procedural Requirements. Washington, DC, 2004.
—. NPR 2810.1, Security of Information Technology. Washington, DC, 2006.
—. NPR 7120.5, NASA Space Flight Program and Project Management Processes and Requirements. Washington, DC, 2007.
—. NPR 7120.6, Lessons Learned Process. Washington, DC, 2007.
—. NPR 7123.1, Systems Engineering Processes and Requirements. Washington, DC, 2007.
—. NPR 7150.2, NASA Software Engineering Requirements. Washington, DC, 2004.
—. NPR 8000.4, Risk Management Procedural Requirements. Washington, DC: NASA Office of Safety and Mission Assurance, 2007.
—. NPR 8020.12, Planetary Protection Provisions for Robotic Extraterrestrial Missions. Washington, DC, 2004.
—. NPR 8580.1, Implementing the National Environmental Policy Act and Executive Order 12114. Washington, DC, 2001.
—. NPR 8705.2, Human-Rating Requirements for Space Systems. Washington, DC, 2005.
—. NPR 8705.3, Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. Washington, DC, 2002.


—. NPR 8705.4, Risk Classification for NASA Payloads. Washington, DC, 2004.
—. NPR 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects. Washington, DC, 2004.
—. NPR 8710.1, Emergency Preparedness Program. Washington, DC, 2006.
—. NPR 8715.2, NASA Emergency Preparedness Plan Procedural Requirements—Revalidated. Washington, DC, 1999.
—. NPR 8715.3, NASA General Safety Program Requirements. Washington, DC, 2007.
—. NPR 8715.6, NASA Procedural Requirements for Limiting Orbital Debris. Washington, DC, 2007.
—. NPR 8735.2, Management of Government Quality Assurance Functions for NASA Contracts. Washington, DC, 2006.
—. NSS-1740.14, NASA Safety Standard Guidelines and Assessment Procedures for Limiting Orbital Debris. Washington, DC, 1995.
—. Off-the-Shelf Hardware Utilization in Flight Hardware Development, MSFC NASA MWI 8060.1 Rev A. Washington, DC, 2004.
—. Off-the-Shelf Hardware Utilization in Flight Hardware Development, JSC Work Instruction EA-WI-016. Washington, DC.
—. Project Management: Systems Engineering & Project Control Processes and Requirements, JPR 7120.3. Washington, DC, 2004.
—. The SEB Source Evaluation Process. Washington, DC, 2001.
—. Solicitation to Contract Award. Washington, DC: NASA Procurement Library, 2007.
—. Statement of Work Checklist. Washington, DC.
—. System and Software Metrics for Performance-Based Contracting. Washington, DC.
—. Systems Engineering Handbook, SP-6105. Washington, DC, 1995.
—. Training Manual for Elements of Interface Definition and Control, NASA Reference Publication 1370. Washington, DC, 1997.
NASA Langley Research Center. Instructional Handbook for Formal Inspections.
—. Guidance on System and Software Metrics for Performance-Based Contracting.
National Defense Industrial Association. Data Management, ANSI/GEIA GEIA-859. Arlington, VA, 2004.
—. National Consensus Standard for Configuration Management, ANSI/GEIA EIA-649. Arlington, VA, 1998.
Naval Air Systems Command. Systems Command SE Guide: 2003 (based on requirements of ANSI/EIA 632: 1998). Patuxent River, MD, 2003.
—. Systems Engineering Guide. Patuxent River, MD, 2003.
Nuclear Regulatory Commission. NUREG-0700, Human-System Interface Design Review Guidelines, Rev. 2. Washington, DC: Office of Nuclear Regulatory Research, 2002.
Price, H. E. "The Allocation of Functions in Systems." Human Factors 27: 33–45. 1985.
The Project Management Institute®. Practice Standards for Work Breakdown Structures. Newtown Square, PA, 2001.
Rechtin, Eberhardt. Systems Architecting of Organizations: Why Eagles Can't Swim. Boca Raton: CRC Press, 2000.
Saaty, Thomas L. The Analytic Hierarchy Process. New York: McGraw-Hill, 1980.
Sage, Andrew, and William Rouse. The Handbook of Systems Engineering and Management. New York: Wiley & Sons, 1999.
Shafer, J. B. "Practical Workload Assessment in the Development Process." In Proceedings of the Human Factors Society 31st Annual Meeting. Santa Monica: Human Factors Society, 1987.
Stamatelatos, M., H. Dezfuli, and G. Apostolakis. "A Proposed Risk-Informed Decisionmaking Framework for NASA." In Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management. New Orleans, LA, May 14–18, 2006.
Stern, Paul C., and Harvey V. Fineberg, eds. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academies Press, 1996.


Taylor, Barry. Guide for the Use of the International System of Units (SI), Special Publication 811. Gaithersburg, MD: National Institute of Standards and Technology, Physics Laboratory, 2007.
U.S. Air Force. SMC Systems Engineering Primer and Handbook, 3rd ed. Los Angeles: Space & Missile Systems Center, 2005.
U.S. Army Research Laboratory. Design Guidance for Producibility, MIL HDBK 727. Adelphi, MD: Weapons and Materials Research Directorate, 1990.
U.S. Nuclear Regulatory Commission. White Paper on Risk-Informed and Performance-Based Regulation, SECY-98-144. Washington, DC, 1998.


Index

acceptance verification, 91
acknowledgment of receipt of system, 82
acquisition, product, 129, 175, 217–227, 316
action-information analysis for HF, 250–251
activity-on-arrow diagram, 115, 116
actual cost of work performed (ACWP), 190
advancement degree of difficulty assessment (AD2), 293
agreements, 120, 125, 172 (see also contracting)
AHP (analytic hierarchy process), 211–212
allocated baseline, CI, 152, 153
analogous cost models, 128
analysis validation type, 100
analysis verification type, 86
analytic hierarchy process (AHP), 211–212
anomaly resolution and maintenance operations, 39
approval phase, 19, 20, 21 (see also formulation)
architecture
   IT, 243
   modeling of, 49–51
   system, 56
   and technology assessment, 296
as-deployed baseline, CI, 153
assembly, product, 78, 79–82 (see also Phase D)
assessments (see also performance; reviews)
   TA, 62, 293–298
   technical, 5, 61, 166–170, 190–196, 222
   workload assessment for HF, 68
audits, 91, 154, 157, 168, 189
authority
   CM, 152, 154
   decision analysis, 199
   and KDPs, 19
   mission, 34
   requirements management, 134, 135
   stakeholder, 35
   and standards, 48
   technical assessments, 168–170
   and technical planning, 120, 122
BAC (budget at completion), 191
backward compatibility of engineering tools, 244
baselines
   configuration identification, 152–153
   design solution, 61
   and life cycle phases, 24
   requirements, 134
   system design processes, 32
BCWP (budgeted cost of work performed), 190
BCWS (budgeted cost of work scheduled), 122, 190
beta curve in cost estimate, 129
bidirectional traceability of requirements, 132
bounding approaches to quantification of risk, 147
budget at completion (BAC), 191
budget considerations in technical planning, 117, 118
budget cycle, 29–30
budgeted cost of work performed (BCWP), 190
budgeted cost of work scheduled (BCWS), 122, 190
CAMs (cost account managers), 121, 190
capability for accelerated concurrent engineering (CACE), 234–241
CCB (configuration control board), 133, 154, 155
CDR (Critical Design Review), 25, 77, 178
CE (concurrent engineering), 234–241
CERR (Critical Event Readiness Review), 186
certification, model, 104
Chandra project, 193–194
CIs (configuration items), 152
classified national security information (CNSI), 162–163
closeout, project (see Phase F)
CM (configuration management) (see configuration management (CM))
CMO (configuration management organization), 152, 155
CM topic evaluators list, 133
CNSI (classified national security information), 162–163
coding/making end products, 73–74
collaboration design paradigm, 234–241, 242–243
Columbia disaster investigation, 156–157
Committee on Space Research (COSPAR), 260
compatibility analysis and product integration, 81
concept development (see Phase A)
concept of operations (ConOps) (see ConOps (concept of operations))
concept studies (Pre-Phase A), 7, 8, 22
concurrent engineering (CE), 234–241
configuration audits, 189
configuration change management, 154
configuration control board (CCB), 133, 154, 155
configuration inspection (PCA), 189
configuration items (CIs), 152
configuration management (CM)
   and contracting, 222
   and data management, 161
   identification, 152–153
   planning, 152, 311
   and requirements changes, 133–134
   in technical management process, 111, 122, 151–157
configuration management organization (CMO), 152, 155
configuration status accounting (CSA), 154
configuration verification, 91, 155–156
ConOps (concept of operations)
   HF participation in, 247
   in product realization, 88
   in requirements definition, 41, 42–43
   SE role of, 9–13, 15
   in stakeholder expectations definition, 35–37
   in system design processes, 35–36, 37–38, 41
constraints in system design processes, 35, 41
contingency planning, technical, 114
continuous risk management (CRM), 142–143, 227
contracting
   acquisition strategy, 217–219
   completion, 230–233
   introduction, 217
   performance, 227–230
   planning and preparation, 219–227
contractor off-the-shelf (COTS) products, 226, 227
contractors, working with, 159, 217
contract specialist, 219
contract WBS (CWBS), 123
controlled experimentation and HF analysis, 254
COSPAR (Committee on Space Research), 260
cost account managers (CAMs), 121, 190
cost account plans, 121–122
cost aspects of SE
   cost-effectiveness, 16–17, 21, 44, 58, 209–210
   estimates, 119, 126, 127, 128–129, 226
   system architecture, 50
   technical assessment, 190–191
   technical planning, 115, 117, 118–119, 121, 125, 126–129
   validation, 100
   verification, 89
cost-benefit analysis, 209–210
cost cap, 118–119
cost performance index (CPI), 191
cost risk, 139, 144
COTS (contractor off-the-shelf) products, 226, 227
CPI (cost performance index), 191
criteria (see also reviews)
   acceptance for contracted deliverables, 230
   decision, 199–201
   and design solution definition, 57
   MCDA, 211–212
   performance, 41
   proposal evaluation, 227
   success, 31, 34, 35, 57–59
Critical Design Review (CDR), 25, 77, 178
Critical Event Readiness Review (CERR), 186
critical incident study, HF, 250
critical path sequence, 115
CRM (continuous risk management), 142–143, 227
crosscutting processes, 111 (see also technical management processes)
CSA (configuration status accounting), 154
cumulative average curve approach, 129
customers in system design processes, 33–34 (see also stakeholders)
CWBS (contract WBS), 123
data, definition, 160
data call, 160
data capture requirements, 47, 48, 159
data formats and interoperability, 243
data management (DM), 122, 158–165, 222
DCR (Design Certification Review), 188
debris, space, limitation of, 29
decision analysis
   and contracting, 222
   and product validation, 102
   and product verification, 87
   risk-informed, 142, 143–148
   in system design processes, 31
   in technical management processes, 197–215
decision networks, 210
decision trees, 210–211
decommissioning, 28, 233
Decommissioning Review (DR), 187
defense article, and ITAR, 165
demonstration validation type, 100
demonstration verification type, 86
deployment
   as-deployed baseline, 153
   in launch operations, 39
   verification of, 91–92
design (see also Phase B; system design processes)
   CDR, 25, 77, 178
   collaboration paradigm for, 234–241, 242–243
   integrated facilities for, 234–241
   in life cycle phases, 22, 24–25, 26
   process metrics for, 196
   and qualification verification, 91
   realization processes for, 71, 73–82
   solution definition for, 31, 55–69, 81, 234–241
   tool selection, 242–245
   and verification vs. validation, 83
Design Certification Review (DCR), 188
design drivers, 34, 35
design-to-life-cycle cost, 127–128
deterministic safety requirements, 45
development phase and ILS, 65 (see also Phase D)
discrepancies, product, and verification, 88
disposal process, 28, 39, 92, 187, 233
DM (data management), 122, 158–165, 222
DR (Decommissioning Review), 187
EAC (estimate at completion), 191
earned value management (EVM), 121–122, 190, 196


Echo balloons, 17
EFFBDs (enhanced functional flow block diagrams), 285
effectiveness vs. cost for SE, 16
efficient solutions, 16
EMO (Environmental Management Office), 256, 257
emulators and interface verification, 82
enabling products, 60–61, 79, 102
end of mission (EOM), planetary protection report, 259–260
end-to-end system testing, 93–96
engineering (see systems engineering (SE))
engineering (grassroots) cost models, 128
enhanced functional flow block diagrams (EFFBDs), 285
entrance criteria (see reviews)
environmental compliance and restoration (ECR), 195
environmental considerations
   and HF, 247
   NEPA compliance, 256–267
   planetary protection policy, 258–260
   and product realization, 76, 87, 102, 109
   radioactive materials management, 257–258
   and technical requirements, 41, 44–45
Environmental Management Office (EMO), 256, 257
EO (executive order) 12114, 257
EOM (end of mission), planetary protection report, 259–260
estimate at completion (EAC), 191
estimates, cost, 119, 126, 127, 128–129, 226
evaluation (see also assessments; validation; verification)
   and contracting, 226, 227, 234
   decision analysis methods and tools, 200–201
   human factors engineering, 68, 247–255
   overview, 71
   PERT chart, 115
   safety, 257
   T&E, 100
event sequence diagrams/event trees, 63
EVM (earned value management), 121–122, 190, 196
executive order (EO) 12114, 257
exploration projects, 36
extensibility attributes in decision analysis, 212
fabrication (see Phase C)
facilities, integrated design, 234–241
FAD (formulation authorization document), 19, 125
failure modes and effects, and criticality analyses (FMECAs), 146
failure modes and effects analyses (FMEAs), 63–64, 146, 252–253
fault trees, 146, 252
FCA (functional configuration audit), 189
fixed-cost profile, 118–119
flexibility attributes in decision analysis, 212
Flight Readiness Review (FRR), 25, 184
Flight Systems and Ground Support (FS&GS), 19
FMEAs (failure modes and effects analyses), 63–64, 146, 252–253
FMECAs (failure modes and effects, and criticality analyses), 146
formulation
   activities of, 19, 21, 125
   life cycle role of, 20
   overview, 7, 8
   and system architecture, 50
formulation authorization document (FAD), 19, 125
FRR (Flight Readiness Review), 25, 184
FS&GS (Flight Systems and Ground Support), 19
functional allocation, HF, 251
functional analysis
   FFBDs, 52–54, 285–288
   requirements allocation sheets, 286–287, 289
   system architecture, 49–51
   and trade study process, 205–206
functional baseline, CI, 152, 153
functional configuration audit (FCA), 189
functional flow analysis, HF, 250
functional flow block diagram (FFBD), 52–54, 285–288
functional needs requirements, 41–44
funding issues
   BAC, 191
   BCWS, 122, 190
   budget cycle, 29–30
   in technical planning, 117, 118
Gantt chart, 117, 118
geostationary (GEO) satellites, 29
goal setting in system design processes, 35
Government mandatory inspection points (GMIPs), 65
grassroots cost estimates, 128
hardware-in-the-loop (HWIL) testing, 96–97, 115
hazard analysis, 64
hazard vs. risk, 139
heritage products, 76–77, 89
human error, 68
human factors (HF) engineering, 45, 67–69, 246–255
human reliability analysis, 64, 68
human spaceflight projects, 36, 45, 176
HWIL (hardware-in-the-loop) testing, 96–97, 115
ICD (interface control document/drawing), 81, 137–138
ICP (interface control plan), 138
IDD (interface definition document), 138
identification, configuration, 152–153
ILS (integrated logistics support), 65, 66
implementation
   activities of, 21
   and integration, 80
   life cycle role, 20
   overview, 7, 8
   product, 73–77, 80
   and transition, 108
influence diagrams, 210
information, data definition, 160
information infrastructure for CACE, 238–239
information technology (IT) architectures, 243
in-orbit checkout in launch operations, 39
in-process testing, 91


inspections
   configuration, 189
   GMIPs, 65
   and product integration, 82
   of purchased products, 74–75
inspection validation type, 100
inspection verification type, 86
INSRP (Interagency Nuclear Safety Review Panel), 257
integrated design facilities, 234–241
integrated logistics support (ILS), 65, 66
integration (see also Phase D)
   components of, 39
   and design solution, 81, 234–241
   and interface management, 137
   plan outline, 299–300
   product, 78–82
   and SEMP content outline, 303–307
   SIR, 180
Interagency Nuclear Safety Review Panel (INSRP), 257
interface control document/drawing (ICD), 81, 137–138
interface control plan (ICP), 138
interface definition document (IDD), 138
interface requirements document (IRD), 81, 137
interfaces
   defining, 82
   and end-to-end testing, 94
   and HF, 246–255
   information, 243
   management of, 54, 81–82, 111, 136–138, 221
   N2 diagrams, 52, 54, 288, 289–290
   and product integration, 79, 80–81
   requirements, 41, 44, 81, 137, 309–310
   verification, 82
interface working group (IWG), 136–137
internal task agreement (ITA), 120
international environmental considerations, 257
International Traffic in Arms Regulations (ITAR), 164–165
interoperability and engineering tool selection, 243
IRD (interface requirements document), 81
ITA (Internal Task Agreement), 120
ITAR (International Traffic in Arms Regulations), 164–165
iterative processes, 5–15
IT (information technology) architectures, 243
IWG (interface working group), 136–137
Jet Propulsion Laboratory (JPL), 258
Kennedy Space Center (KSC), 258
key decision points (KDPs), 19, 20, 21, 111, 168
launch, product, 39 (see also Phase D)
launch vehicle databook, 257–258
layers, definition, 8
LCCE (life-cycle cost estimate), 119
learning curve concept and cost estimates, 129
least-cost analysis, 209
LEO (low Earth orbiting) missions, 29
lessons learned in technical planning, 129–130
levels, definition, 8
license schemes for engineering tools, 244
life cycle (see also phases)
   acquisition, 218
   budget cycle, 29–30
   CACE, 234
   CM role in, 155
   cost considerations, 126–129
   decision analysis, 197
   environmental considerations, 256
   functional analysis in, 285
   HF in, 248
   overview, 6, 7
   phase overview, 19–29
   planetary protection considerations, 258–259
   planning and status reporting feedback loop, 167
   product realization, 71–72, 78–79, 80, 89–90, 103, 106, 108
   systems analysis, 203–205
   technical planning, 112–113
   technology assessment, 293–295
   tradeoffs during, 316
   WBS, 124–125
life-cycle cost estimate (LCCE), 119
link analysis and HF, 253
logical decomposition, 31, 49–54, 120
loosely coupled programs, 169
low Earth orbiting (LEO) missions, 29
maintainability, product/design, 65, 66, 110
making/coding end products, 73–74, 75
margin risk management method, 149–150, 194
maturity, system, 6, 56–58, 62
MAUT (multi-attribute utility theory), 213
MCDA (multi-criteria decision analysis), 211–212
MCR (Mission Concept Review), 173
MDAA (mission directorate associate administrator), 21
MDR (Mission Definition Review), 175
measures and measurement methods, defining, 205
measures of effectiveness (MOEs), 120, 191–192, 193
measures of performance (MOPs), 120, 192, 193
memorandum of understanding (MOU), 120
metric system usage, 261–262
milestone reviews, 170
mission
   in life cycle phases, 22, 28
   and stakeholder expectations, 34
mission assurance, 225
mission authority, 34
Mission Concept Review (MCR), 173
Mission Definition Review (MDR), 175
mission directorate associate administrator (MDAA), 21
mitigation of risk, 148, 149
modeling
   of architecture, 49–51
   certification, 104
   cost, 128, 129
   HF, 68
   logic, 64
   scenario-based hazard, 141
   simulations, 68, 96, 104, 204–205, 253–254
   validation, 104
   verification, 96–97
modeling and simulation (M&S), 96–97, 103
MOEs (measures of effectiveness), 120, 191–192, 193
MOPs (measures of performance), 120, 192, 193
MOU (memorandum of understanding), 120
M&S (modeling and simulation), 96–97, 103
multi-attribute utility theory (MAUT), 213
multi-criteria decision analysis (MCDA), 211–212
N2 diagrams (N x N interaction matrix), 52, 54, 288, 289–290
NASA, contracting responsibilities, 232
National Environmental Policy Act (NEPA), 256–257
National Space Policy (2006), 260
network infrastructure for CACE, 239
network scheduling systems, 115–117
NOAA N-Prime mishap, 157
nominal testing, 101
nondominated solutions, 16
nuclear materials management, 257–258
objectives, mission, 34
objectives hierarchy, 213, 214–215
Office of General Counsel (OGC), 257
off-nominal testing, 101
off-the-shelf (OTS) products, 76–77
Operational Readiness Review (ORR), 153, 183
operations (see also Phase E)
   analysis of, 249, 254, 285–286
   objectives for, 34, 35
   phases of, 39, 65, 232
   and requirements, 43
   verification of, 92
ORR (Operational Readiness Review), 153, 183
OTS (off-the-shelf) products, 76–77
outcome variables in trade studies, 205, 207–208
Outer Space Treaty, 258
parallelism in design integration, 234
parametric cost models, 128, 129
payload classification and verification, 89
PBS (product breakdown structure) (see product breakdown structure (PBS))
PCA (physical configuration audit), 189
PCA (program commitment agreement), 125, 172
PDR (Preliminary Design Review), 25, 77, 177
peer reviews, 59–60, 87, 170–171, 189–190, 312–315
performance (see also evaluation; technical performance measures (TPMs))
   assessment of contractor, 232, 233
   HF evaluation of, 255
   requirements for, 41–42, 43, 133, 152, 153, 224, 227
periodical technical reviews (PTRs), 166 (see also reviews)
PERT (program evaluation and review technique) chart, 115
PFAR (Post Flight Assessment Review), 186
PHA (preliminary hazard analysis), 64
Phase A
   activities of, 22, 23
   life cycle role of, 19, 20
   overview, 7, 8–9
Phase B
   activities of, 24
   life cycle role of, 19, 20
   overview, 7, 8, 14–15
Phase C
   activities of, 25, 26
   life cycle role of, 19, 20
   overview, 6, 7, 8, 14–15
Phase D
   activities of, 25, 27, 65
   life cycle role of, 19, 20
   overview, 6, 7, 8, 14–15
Phase E
   activities of, 28
   life cycle role of, 19, 20
   overview, 6, 7, 8, 14
Phase F
   activities of, 28–29
   life cycle role of, 19, 20
   overview, 6, 7, 8, 14
phases (see also formulation; implementation; Phase C; Phase D; Phase E; Phase F)
   A, 7, 8–9, 19, 20, 22, 23
   B, 7, 8, 14–15, 19, 20, 24
   life cycle role of, 20
   overview and example, 6–15
   Pre-Phase A, 7, 8, 22
physical configuration audit (PCA), 189
PKI (public key infrastructure), 163
planetary protection officer (PPO), 258
planetary protection policy, 258–260
planning, programming, budgeting, and execution (PPBE), 29
PLAR (Post Launch Assessment Review), 185
PMC (Program Management Council), 168
PM (program manager), 19
policy compliance issues, 242, 256–260
Post Flight Assessment Review (PFAR), 186
Post Launch Assessment Review (PLAR), 185
PPBE (planning, programming, budgeting, and execution), 29
PPO (planetary protection officer), 258
PQASP (program/project quality assurance surveillance plan), 65
PRA (probabilistic risk assessment), 139, 146
precedence diagrams, 115–116


pre-launch verification stage, 91–92
preliminary design (see Phase B)
Preliminary Design Review (PDR), 25, 77, 177
preliminary hazard analysis (PHA), 64
Pre-Phase A, 7, 8, 22
probabilistic risk assessment (PRA), 139, 146
probabilistic structural analysis, 64
process metrics, 195–196
procurement process and contracts, 217
producibility and system design, 66–67
product baseline, CI, 153
product breakdown structure (PBS)
   logical decomposition, 52
   in systems design processes, 40
   in technology assessment, 294, 295
   and WBS, 123, 124, 125, 126
Production Readiness Review (PRR), 179
productivity-related process metrics, 195
product management and contracts, 219
product realization processes
   acquisition, 129, 175, 217–227, 316
   example, 12–14
   implementation, 73–77, 80
   integration, 78–82
   overview, 5, 71–72
   transition, 71, 106–110
   validation, 98–105
   verification, 83–97
products, systems analysis of, 203–205
program commitment agreement (PCA), 125, 172
program evaluation and review technique (PERT) chart, 115
Program Management Council (PMC), 168
program manager (PM), 19
programmatic risk, 139, 144
program/project quality assurance surveillance plan (PQASP), 65
programs, 4, 169–170 (see also life cycle; phases)
Program/System Definition Review (P/SDR), 170, 172
Program/Systems Requirements Review (P/SRR), 170, 171
project protection plan, 260, 321–322
projects, 4, 119, 169–170 (see also life cycle; phases)
proposal, contract solicitation, 226, 227
PRR (Production Readiness Review), 179
P/SDR (Program/System Definition Review), 170, 172
P/SRR (Program/Systems Requirements Review), 170, 171
PTRs (periodical technical reviews), 166 (see also reviews)
public key infrastructure (PKI), 163
purchasing end products, 73, 74–75, 76
qualification verification, 91
qualitative logic models, 64
quality assurance (QA), 64–65, 90, 229, 230
quality-related process metrics, 195
quantitative logic models, 64
radioactive materials management, 257–258
R&D (research and development), 107
realization processes (see product realization processes)
recursive processes, 5–15
redline drawings and CM, 157
reengineering and validation discrepancies, 103
reengineering and verification discrepancies, 88
reliability
   human, 64, 68
   requirements for, 43, 44–45
   in system design process, 63–65
reliability block diagrams, 64
reports
   decision analysis, 201–203
   EOM, 259–260
   life cycle feedback loop, 167
   risk management, 150
   SARs, 257
   and SEMP, 120
   SER, 257
   in technical assessment, 167, 190, 195–196
   technical planning types, 117
   trade study, 208
   TRAR, 293, 295
   validation, 102–103
   verification, 93
request for information (RFI), 218
request for proposal (RFP), 159
requirements
   allocation of, 45–46, 47
   composition guidance, 279–281
   and contracts, 218, 221, 223–225, 229
   data capture, 47, 48, 159
   decomposition of, 45–46
   definition processes, 5, 31, 40–48, 246–247
   and design solution, 59
   engineering tools, 242
   and HF, 68
   and integration, 79
   interface, 41, 44, 81, 137, 309–310
   management of, 131–135, 166
   obsolescence of, 93
   OTS products, 77
   performance, 41–42, 43, 133, 152, 153, 224, 227
   process metrics for, 196
   product transition, 108, 110
   and resource leveling, 117
   system, 24, 174
   and system design processes, 32, 33
   traceability of, 132
   validation of, 46, 132–133, 280–281, 284
   verification of, 47, 48, 282–283
requirements allocation sheets, 286–287, 289
requirements "creep," 134
research and development (R&D), 107
resource leveling, 117
reusing end products, 73–74, 75, 89


reviews
  CDR, 25, 77, 178
  CERR, 186
  configuration audits, 189
  and configuration verification, 91
  DCR, 188
  DR, 187
  FRR, 25, 184
  heritage, 76–77
  internal, 170–171
  life cycle overview, 20, 21, 23, 24, 25, 26–27, 28
  MCR, 173
  MDR, 175
  ORR, 153, 183
  PDR, 25, 77, 177
  peer reviews, 59–60, 87, 170–171, 189–190, 312–315
  PFAR, 186
  PLAR, 185
  pre-verification, 87
  process metrics for, 196
  PRR, 179
  P/SDR, 170, 172
  P/SRR, 170, 171
  PTRs, 166
  QA for product verification, 90
  SAR, 91, 182
  SDR, 176
  SIR, 180
  SRR, 174
  technical assessment role of, 167–170
  TRR, 92, 181
RFI (request for information), 218
RFP (request for proposal), 159
risk-informed decision analysis, 213–214
risk-informed safety requirements, 45
risk matrices, 145–146
risk metrics, 146
risk/risk management
  and CM, 151
  and contracting, 221, 227, 228
  and cost-effectiveness, 17
  definition, 139
  and design alternatives, 58
  levels of, 145–146
  process of, 139–150
  and requirements changes, 133
  sources of, 145
  and system design process, 63–65
  and technical assessment, 166
  types of risk, 139, 144
  and verification, 89
robotic missions, 175
safe-hold operations, 39
safety, 41, 43, 45, 63–64, 110
safety analysis reports (SARs), 257
safety and mission assurance (SMA) organization, 225
safety evaluation report (SER), 257
SARs (safety analysis reports), 257
SAR (System Acceptance Review), 91, 182
satellites, disposal of, 29
SBU (sensitive but unclassified) information, handling of, 162–164
scenario-based modeling of hazards, 141
schedule performance index (SPI), 191
schedule-related process metrics, 195–196
schedule risk, 139, 144
scheduling, 17, 50, 89, 115–117, 190–191
science projects, 35–37, 39, 46–47
SDR (System Definition Review), 176
security
  data, 162–165, 244
  space asset protection, 260
SE engine, 5, 6–15, 59
selection rule, defining, 206
SEMP (system engineering management plan)
  content outline, 303–307
  and contracting process, 219, 225
  product realization role, 74, 85, 111, 113, 119–120, 122, 208
  and TPM assessment, 193–194
sensitive but unclassified (SBU) information, handling of, 162–164
SER (safety evaluation report), 257
service contracts, 232
SE (systems engineering) (see systems engineering (SE))
shall statements in technical requirements definition, 40, 41
similar systems analysis, HF, 249
simulations, 68, 96, 104, 204–205, 253–254
single-project programs, 169, 170
SIR (System Integration Review), 180
SI (System Internationale) (metric system), 261
situational awareness and HF, 255
SMA (safety and mission assurance) organization, 225
software
  CACE tools, 239
  in contingency planning, 114
  data management, 161–162
  security of, 165
  verification and validation, 104–105
solicitation of contract, 219, 222–227
source evaluation board, 226
SOW (statement of work), 223, 224–225, 232, 317–320
space asset protection, 260
space situational awareness (SSA), 260
space systems tradeoffs, 316
space transportation system (STS), 8–15
sparing/logistics models, 64
specialty engineering, 62–63
specialty requirements, 42, 43
specifications, design, 61
SRB (standing review board), 168, 170
SRR (System Requirements Review), 174
SSA (space situational awareness), 260
stakeholders


  CACE process, 234, 235–236, 240
  and CCB, 154
  and CM, 151
  commitment to technical plan, 120–121
  expectations definition, 31, 33–39
  and HF, 67, 247
  identifying, 33–34
  importance of ongoing communication, 40
  and MOEs, 192
  and requirements management, 131
  and SAR, 182
  and validation, 98, 99, 100, 103
standards, 47, 48, 224, 240, 243
standing review board (SRB), 168, 170
state diagrams, 290–291
statement of work (SOW), 223, 224–225, 232, 317–320
state transition diagrams, 290, 291
status accounting, configuration, 154
status reporting in technical assessment, 167, 190, 195–196
STS (space transportation system), 8–15
subcontractors, working with, 229, 231
subsystem elements, verification of, 82
success criteria, 34, 35, 57–59
surveillance of contractor operations, 225, 227
sustainment (see Phase E)
system, definition, 3
System Acceptance Review (SAR), 91, 182
System Definition Review (SDR), 176
system design processes
  design solution, 31, 55–69, 81, 234–241
  example, 8–12
  and HF, 68
  and interface management, 54
  logical decomposition, 31, 49–54, 120
  overview, 4, 5, 6, 31–32
  requirements definition, 5, 31, 40–48, 246–247
  stakeholder expectations definition, 31, 33–39
System Integration Review (SIR), 180
System Internationale (SI) (metric system), 261
system-of-systems, 51
System Requirements Review (SRR), 174
systems, verification of elements, 82
systems analysis, 203–205
systems engineer
  CACE role of, 237
  and QA, 64–65
  responsibilities of, 3–4
systems engineering (SE)
  definition, 3
  fundamentals of, 3–17
  overview, 1
  process metrics, 195–196
  and project/program life cycle, 19
  SE engine, 5, 6–15, 59
  specialty integration into, 62–63
task analysis for HF, 68, 251–252
task order contracts, 225
TA (technology assessment), 62, 293–298
teams, mission development
  and CACE process, 234, 235–236, 240–241
  contract development role, 218–219, 223, 225
  contractor evaluation role, 234
  and decision solution definition, 57
  HF specialists on, 246
  importance of technical management processes, 111
  inspection of purchased products, 74–75
  requirements validation role, 133
  risk management role, 63, 143
  surveillance of contractor operations, 227, 228–229
  trade study leadership, 208
technical assessment process, 5, 62, 166–170, 190–196, 222
  (see also reviews)
technical data package, 160
technical management processes
  (see also configuration management (CM); decision analysis; reviews; risk/risk management)
  assessment, 5, 62, 166–196, 222
  and contracts, 218–219, 220–222
  data management, 122, 158–165, 222
  interface management, 54, 81–82, 111, 136–138, 221
  overview, 5–6, 111
  planning, 112–130
  requirements management, 131–135, 166
technical performance measures (TPMs)
  objectives hierarchy, 214–215
  purpose of, 192–195
  quantification of, 147
  and risk management, 139, 148–149
  and safety requirements, 45
  utility and value functions of, 213
technical processes overview, 4–6
  (see also product realization processes; system design processes; technical management processes)
technical readiness level (TRL) scale, 293, 295, 296–298
technical requirements definition, 31, 40–48
  (see also requirements)
technical risk, 139, 144
technical solution definition processes
  design solution definition, 31, 55–69, 81, 234–241
  and HF, 68
  logical decomposition, 31, 49–54, 120
  overview, 5
technical work directives and planning, 121
technology assessment (TA), 62, 293–298
technology development, 56, 62
  (see also Phase A; Phase B)
technology infusion assessment, 293–298
technology maturity assessment (TMA), 293, 294, 297
technology readiness assessment report (TRAR), 293, 295
termination review, 169
test and evaluation (T&E), 100
testing
  (see also Phase D)


  components of, 39
  integration, 81
  post-transition to end user, 109
  in product realization processes, 72
  validation, 98
  verification, 83, 85, 91–92, 93–97, 98
testing verification type, 86
Test Readiness Review (TRR), 92, 181
test validation type, 100
T&E (test and evaluation), 100
tiers, definition, 8
tightly coupled programs, 170
timeline analyses (TLAs), 52, 54
timelines, operation, 37–38, 68
timeline sheets (TLSs), 54
timing analysis, 290–291
TLAs (timeline analyses), 52, 54
TLSs (timeline sheets), 54
TMA (technology maturity assessment), 293, 294, 297
TPM (technical performance measures) (see technical performance measures (TPMs))
traceability of requirements, 132
tradeoffs, summary of types, 316
trade study process, 57, 59, 128, 142–143, 205–209
training and engineering tool selection, 244
transition process, 71, 106–110, 231–233
transportability requirements, 110
TRAR (technology readiness assessment report), 293, 295
triggering data and functional flow analysis, 285
TRL (technical readiness level) scale, 293, 295, 296–298
TRR (test readiness review), 92, 181
uncertainty, 64, 147
  (see also risk/risk management)
uncoupled programs, 169
unit curve approach, 129
United States Munitions List (USML), 164
usability evaluation and design, 68
utility analysis, 212–213
VAC (variance at completion), 191
validation
  (see also testing)
  design solution, 60, 62
  and interface management, 137
  in life cycle phases, 25
  planning, 100, 284, 301–302
  process metrics for, 196
  product, 71, 72, 75, 98–105
  of requirements, 46, 132–133, 280–281, 284
  vs. verification, 15, 83, 88, 98
variance at completion (VAC), 191
variances, control of project, 190–191
vendor stability and engineering tool selection, 245
verification
  (see also inspections; testing)
  configuration, 91, 155–156
  design solution, 59–60
  and HF, 68
  of interfaces, 82, 137
  in life cycle phases, 25
  planning, 84–86
  process metrics for, 196
  product, 71, 72, 75, 82, 83–97
  program guidance, 89
  of requirements, 47, 48, 282–283
  sample plan outline, 301–302
  software tools, 104–105
  and system design processes, 32
  vs. validation, 15, 83, 88, 98
waivers, 154
weighted cost-effectiveness analysis, 210
work breakdown structure (WBS)
  and cost/schedule control measures, 190
  and logical decomposition, 52
  and technical measures, 194–195
  in technical planning, 116–117, 122–125, 126
workflow diagrams, technical planning, 115
workload assessment for HF, 68, 255


To request print or electronic copies or provide comments, contact the Office of the Chief Engineer via
SP6105rev1SEHandbook@nasa.gov

Electronic copies are also available from
NASA Center for AeroSpace Information
7115 Standard Drive
Hanover, MD 21076-1320
at
http://ntrs.nasa.gov/

For sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpo.gov    Phone: toll free (866) 512-1800; DC area (202) 512-1800
Fax: (202) 512-2104    Mail: Stop IDCC, Washington, DC 20402-0001
ISBN 978-0-16-079747-7
