
Qualification and Reliability of Complex Electronic Rotorcraft Systems


Alex K. Boydston, MSEE, U.S. Army Electronics Engineer
William D. Lewis, Ph.D., Director of U.S. Army Aviation Engineering Directorate
Redstone Arsenal, AL
alex.boydston@us.army.mil
bill.lewis6@us.army.mil

Executive Summary

There is a need to develop an industry standard method for qualifying complex integrated systems to a specified reliability. The goal of this paper is not to resolve the problems or evaluate past or existing designs; it is to catch the attention of the government and the commercial aviation industry and to solicit input into new processes and standards that correct longstanding issues in the development of complex avionics systems. It is intended to encourage the formation of committees to address these issues. The United States (U.S.) Army Aviation Engineering Directorate (AED) requests the input of all technical stakeholders; it does not wish to dictate these requirements, but to establish requirements, tools and guidelines for reliability and qualification as a unified community.

Abstract

Military, civil and commercial rotary-wing and fixed-wing aircraft rely on complex and highly integrated hardware and software systems for safe operation and successful execution of missions. These complex systems are now classified as "cyber-physical systems" per the National Science Foundation (NSF) [38].
Whatever the architecture chosen, be it federated, integrated or some hybrid, complex avionics systems must be robust, reliable and safe. The architecture should meet the functional requirements for timeliness, predictability and controllability in order to satisfy the needs to safely and effectively aviate, navigate, communicate, and execute a mission.

Traditionally, avionics systems were federated, but they have evolved into highly integrated systems of computer hardware and software. In an integrated system, if the architecture has flaws, faults can couple between systems and propagate, leading to unpredictable and unreliable behavior unless proper partitioning is accomplished. Qualification and reliability assessment of such systems is challenging within schedule and budget constraints, despite the use of accepted engineering practices. Given this, there is a need to develop an industry standard method for qualifying complex integrated systems to a specified reliability.

Avionics systems require deterministic performance for critical functions. Integrated Modular Avionics (IMA) systems introduce new issues with data processing, error checking, and error handling beyond their federated counterparts. Federated systems were not tightly coupled and were easier to test to some degree, since their functions were encapsulated in dedicated and separate units. Now, avionics systems rely on computer hardware running multiple processes on an integrated system, which, ideally, is implemented with a partitioned operating system.
Whether federated, integrated or some hybrid cyber-physical system, it is crucial for reliability, safety, verification and validation to be embedded in the life-cycle of a system. Dealing with this complexity is challenging because of the intricacies present in the design of such systems. Stand-alone federated systems have reliability targets based upon their functions. Complex cyber-physical systems may have both distributed and integrated functions and multithreaded processes.

Program management, systems engineers, developers, systems integration engineers, human factors engineers, test engineers and other disciplines must be cognizant of the need to design for testability and high reliability of such systems early in the life-cycle. Waiting until Preliminary Design Review (PDR) is too late to start addressing these considerations, and may require redesign later or onerous testing during qualification that is costly and time consuming. If complete and correct requirements are not given emphasis at the high level, then problems with poor reliability and inadequate qualification are found after the systems are acquired by the user.
This may result in low performance and, possibly, risk of injury or loss of life.

Processes for project management and engineering are fairly well understood and promoted through the defense acquisition guidelines and by various industry standards such as the Capability Maturity Model Integration (CMMI), MIL-STD-882, SAE ARP4754, SAE ARP4761, DO-178B, DO-254, ARINC 653 and others. Taken together, these standards provide guidance that, if followed, will likely result in safe, highly reliable and cost-effective systems over the life-cycle of the system. Typically, as the complexity and redundancy of systems increase, the reliability also increases to a break-even point and then begins to decrease at some level of complexity.
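This break-even behavior can be made concrete with a small model. The sketch below (all parameter values are hypothetical and not drawn from this paper) treats each added redundant channel as improving fault coverage while the voting/integration logic needed to manage the channels grows less reliable:

```python
import math

def system_reliability(n_channels, r_channel=0.90, c_integration=0.05):
    """Reliability of an n-channel redundant system whose voting/integration
    logic grows less reliable as complexity increases.

    r_channel:     reliability of one independent channel (hypothetical value)
    c_integration: per-channel integration-failure growth constant (hypothetical)
    """
    # Probability that at least one of the n independent channels works.
    redundancy_gain = 1.0 - (1.0 - r_channel) ** n_channels
    # Integration/voter reliability decays as more channels are coupled.
    integration_loss = math.exp(-c_integration * n_channels)
    return redundancy_gain * integration_loss

if __name__ == "__main__":
    for n in range(1, 6):
        print(f"{n} channels: R = {system_reliability(n):.4f}")
```

With these assumed numbers, reliability peaks at two channels and then declines as integration losses outpace redundancy gains, reproducing the shape of the notional complexity-versus-reliability trend.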


Redundant systems by their nature increase the complexity of the system. They are designed with the goal of providing fault tolerance, which ultimately protects the crew, passengers, test personnel, ground personnel, bystanders, and expensive equipment. Correct decisions must be made on good data, with proper timing, data type and data exchange in mind. As the complexity and integration of the system increase, costs in design and qualification, and schedule, sometimes increase as well.

The current guidelines indicate how bad a design is, but do not define how to measure the expected reliability of such complex systems. Additionally, these guidelines fall short in defining how reliability is measured throughout the life-cycle of a program. The current system design guidance often does not yield the desired reliability of complex electronic aircraft or the goal of meeting performance specifications. This must be introduced at higher levels such as the Program Executive Office (PEO), because they control overall cost and schedule. Presently, ease of test, reliability and safety are by-products of the designers' and implementers' diligence in meeting purposeful requirements. There must be a concerted effort from government agencies such as the Federal Aviation Administration (FAA) and Department of Defense (DOD), and from the commercial aviation industry, to define better design guidance. These proposed guidelines should not constrain creativity in design, but must set the boundaries necessary for achieving advances in design with performance, reliability and safety in mind.

In summary, this paper will:

1) Identify development challenges with complex systems,
2) Present the current U.S. Army approach to airworthiness and testing,
3) Discuss the challenges facing reliability and safety,
4) Survey complexity issues,
5) Cover the historical aspect of the reliability problem and identify that this is not a new problem,
6) Itemize some current design guidelines for modeling systems and identify deficiencies,
7) Address certification assessment considerations,
8) Encourage development and refinement of standard modeling and analysis tools to place power in the hands of the designer to mitigate issues with system reliability early and throughout the project life-cycle, and
9) Request input from the development and test community to establish standard processes and requirements for qualifying complex systems.

Development Challenges with Complex Systems

It would be wonderful if all systems were straightforward in design, easily testable and simple to write an Airworthiness Release (AWR) for; however, that is not the case. Legacy aircraft have been upgraded in a piecemeal fashion, acquiring much needed improvements in aviation, navigation and communication. The generally recommended system development V-curve shown in Figure 1 is not always followed in a strict sense, although it should be the goal process.

Figure 1 – Defense Acquisition System Design and Test Cycle

Neglecting the proper process makes the establishment of certification very difficult. For new system development and existing system upgrades, requirements must be clear, complete and testable.
The certification requirements must be made obvious during requirements establishment, with the goal of being fully identified during requirements development. Orchestrating agreement among all stakeholders (e.g., the program manager, systems engineers, human factors engineers, integrators, test engineers, manufacturers, users, and certifiers) is necessary to mitigate problems such as:

- juggling multiple software builds,
- producing difficult-to-test, difficult-to-certify, and difficult-to-deploy systems,
- misunderstanding system safety, and
- requiring design iterations that impact schedules and costs.

Specifications are sometimes loaded with so-called "bloat" or "feature creep" that is not necessary for achieving the mission. As mentioned by Lui Sha, "useful-but-unessential features cause most of the complexity. [The] approach should combine complexity control architecture designs with formal methods, so that the architecture design ensures the simplicity of: (1) the critical service,


modifications, to function satisfactorily when used and maintained within prescribed limits [25]. The U.S. Army has traditionally qualified systems and components by similarity, analysis, test, demonstration, or examination. Historically, most systems were federated. They were hardware based, simple and distributed. Now they have become more integrated. They are more software intensive, complex, and have combined functionality contained in one or more computers. With this evolution from simple to more complex, the Army is finding it more difficult to execute an AWR. As systems evolve into more complex systems of systems, this problem is only growing worse.

The current test approach to achieving confidence in systems for a U.S. Army AWR is based more on traditional federated avionics systems. Experienced personnel in Software Vehicle Management Systems, Avionic Communications, Navigation & Guidance Control, Electrical, Environmental, Human Factors, Electrical and Electromagnetic Effects (E3), Structures, Thermal, Safety, Integration Testing, and Flight Test, along with test pilots, all play important roles in accomplishing test and review of new and existing systems. While some may not consider areas such as thermal or E3 important to software development, they are crucial, since the software runs on physical systems that are affected by heat and susceptibility to electromagnetic radiation, which can cause abnormal operation or bit errors. Current test methodology for hardware relies on MIL-STD-810, MIL-STD-461, and requirements analysis such as traceability.
MIL-STD-810 is the Department of Defense Test Method Standard for Environmental Engineering Considerations and Laboratory Tests.

Figure 4 – Software Engineering Directorate (SED) Avionics Software Integration Facility (ASIF)

Figure 5 – Kiowa, Apache, Chinook and Blackhawk in Test Flights


Table 1 – PEO Aviation System Safety Management Decision Authority Matrix
(P = Probability based on 100,000 flight hours. Note: Dollar impact values were once associated with this.)

Table 2 – Hardware Reliability versus Software Reliability (reference [39])

1. Hardware: Failure rate follows a bathtub curve; the burn-in state is similar to the software debugging state. Software: Without considering program evolution, the failure rate is statistically non-increasing.
2. Hardware: Material deterioration can cause failures even though the system is not used. Software: Failures never occur if the software is not used.
3. Hardware: Failure data are fitted to distributions; the choice of distribution is based on analysis of failure data and experience, and emphasis is placed on analyzing failure data. Software: Most models are analytically derived from assumptions; emphasis is on developing the model, interpreting its assumptions, and the physical meaning of its parameters.
4. Hardware: Failures are caused by material deterioration, design errors, misuse, and environment. Software: Failures are caused by incorrect logic, incorrect statements, or incorrect input data.
5. Hardware: Can be improved by better design, better materials, redundancy, and accelerated life-cycle testing. Software: Can be improved by increasing testing effort and correcting discovered faults; reliability tends to change continuously during testing as new code adds problems and debugging removes them.
6. Hardware: Repairs restore the original condition. Software: Repairs establish a new piece of software.
7. Hardware: Failures are usually preceded by warnings. Software: Failures are rarely preceded by warnings.
8. Hardware: Components can be standardized. Software: Components have rarely been standardized.
9. Hardware: Can usually be tested exhaustively. Software: Essentially requires infinite testing for completeness.
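The software column's non-increasing failure rate is the premise behind software reliability growth models. As one illustration (a standard model from the reliability literature, not one prescribed by this paper), the Goel-Okumoto model assumes a finite pool of latent defects, each equally likely to surface during testing; the parameter values below are hypothetical:

```python
import math

def expected_failures(t, a=100.0, b=0.05):
    """Goel-Okumoto mean value function: expected cumulative failures found
    by test time t, with a total latent defects and detection rate b
    (both hypothetical values)."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a=100.0, b=0.05):
    """Failure intensity lambda(t) = a * b * exp(-b * t): strictly
    non-increasing, matching the software column of Table 2."""
    return a * b * math.exp(-b * t)

def remaining_defects(t, a=100.0, b=0.05):
    """Estimated defects still latent after testing to time t."""
    return a - expected_failures(t, a, b)
```

Fitting a and b to observed failure times would give a quantitative, if model-dependent, answer to "how reliable is this software now?"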


on the design, which costs time and money. Furthermore, dealing with existing, fielded complex systems that have not gone through the rigors of proper systems engineering quite often results in a complicated conglomeration of functionality to test, verify, validate, qualify, certify and maintain. This is indicative of a situation where complexity has exceeded our understanding of how to certify a system. We still do not fully understand complexity and how to address the reliability of complex systems. How do we accurately assess operating risks, performance, and reliability of complex systems based on limited testing and analysis? How do we know when a system design is good enough? How do we modularize spirally developed systems to minimize the need for re-qualification of unchanging portions of the system? We are 30-plus years into this technology, and we still deal with latent defects occurring in supposedly well-tested, mature systems. To further exacerbate the problem, we are now dealing with complex systems of systems (i.e., cyber-physical systems).

It is a given that as you continue to add redundancy and complexity, at some point the reliability will taper off.
At best, we sometimes must settle for an optimum point before reliability begins to degrade. In the same vein, system development costs and schedule increase with complexity too (see Figures 7 and 8).

Avionics parts and software change constantly over the life of a program. Typically, complex software development follows a spiral development program, which means that qualification is required frequently. This begs the question of how to streamline the process so that the need to conduct a complete requalification is avoided.

With these complex systems there are other hurdles to cross, such as fully characterizing and conducting the Functional Hazard Assessments (FHAs), Fault Tree Analyses (FTAs) and Failure Mode, Effects, and Criticality Analyses (FMECAs). It is crucial for the safety assessment that these are conducted correctly, to fully understand the risks and later perform the correct tests at the right level. Additionally, once the complex component hardware and software are integrated, yet other problems appear. It is at that time that the disparity of teams crossing multiple contractors and development groups becomes obvious and, if not properly coordinated, can impact the schedule. Other programmatic problems affect complex system development and qualification. For instance, a lack of schedule and funding resources undermines proper compliance with specifications and requirements, short-circuiting the systems engineering process.
An ever-decreasing availability of trained engineers to support the development and test of such systems exists. Non-technical political influences sometimes affect the reliability of complex systems and are difficult to avoid. Lastly, there is a lack of a centralized database that captures the various families of systems that have been built, along with their characterization of successes and failures. Such a database covering all past and present government complex systems could be valuable in establishing a reliability basis for future models.

Figure 7 – Notional Trends with Complexity vs. Reliability

Figure 8 – Notional Trends with Complexity vs. Cost and Schedule
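The fault tree analyses mentioned above reduce, at their simplest, to combining basic-event probabilities through AND/OR gates. A minimal sketch, assuming independent basic events with hypothetical probabilities:

```python
from functools import reduce

def p_and(*probs):
    """AND gate: all (assumed independent) basic events must occur."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def p_or(*probs):
    """OR gate: at least one (assumed independent) basic event occurs."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical top event: loss of a display function occurs if both
# redundant data sources fail, or if the shared bus fails.
p_source = 1e-4   # hypothetical per-flight-hour failure probability
p_bus = 1e-6      # hypothetical
p_top = p_or(p_and(p_source, p_source), p_bus)
```

Real FTAs must also account for common-cause failures, which break the independence assumption used here; that is precisely the coupling problem integrated architectures introduce.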


An Old Problem

As mentioned earlier, complex avionics systems are not a new idea. Complex avionics architectures have existed since the early 1960s, beginning with the Apollo/Saturn program. The Massachusetts Institute of Technology (MIT) Instrumentation Lab (IL), now Draper Laboratory, and International Business Machines (IBM) led the way with the MIT/IL Apollo Guidance Computer (AGC) and the Saturn V IBM triple modular redundant (TMR) voting guidance computer system. The word "software" had not even been coined at the time, but engineers such as Margaret Hamilton, MIT/IL Director of Apollo On-board Software, can attest to the fact that some of the same issues with creating reliable software then still exist today [5]. A large majority of the issues then dealt with the communication between systems engineers and the programmers. Requirements were thrown over the wall without confirmation that they were complete, and many of the issues cropped up as interface problems. In fact, according to Hamilton, on the order of 75% of the problems experienced in Apollo were interface problems. Identifying these issues prompted Hamilton to found her own company and create a modeling language called Universal Systems Language (USL) to head off the problems experienced with Apollo [10, 11]. Some 200-plus modeling programs have been developed since Apollo and used to mitigate issues and increase confidence in systems of varying complexity.

As time progressed, other systems came along. The NASA Dryden F-8 Crusader was the first digital fly-by-wire (DFBW) jet aircraft that relied heavily on complex IMA and software for flight control.
The Space Transportation System (STS) shuttle includes a Quad Modular Redundant (QMR) system with a fifth backup flight computer containing uncommon code. U.S. Air Force and Navy airplanes that have possessed complex or redundant IMA configurations include the F-14 Tomcat, F-15 Eagle, F-16 Falcon, F-18 Hornet, F-22 Raptor, F-35 Joint Strike Fighter, F-117 Nighthawk, V-22 Osprey, C-17 Globemaster and many more, along with recent Unmanned Air Vehicle Systems (UAVS). U.S. Army complex systems on helicopters include:

- the RAH-66 Comanche DFBW Triple Modular Redundant (TMR) architecture,
- glass cockpit avionics on the UH-60M Blackhawk baseline,
- the Common Avionics Architecture System (CAAS) glass cockpit on the UH-60M Blackhawk modernization, the CH-47F Chinook and other aircraft, and
- Full Authority Digital Engine Control (FADEC) units found on various helicopters.

Additionally, there are many self-checking-pair engine controller systems, along with system-of-systems Future Combat Systems (FCS) and Unmanned Air Vehicle Systems (UAVS). This has also permeated the commercial airliner market with the Airbus A320 and later Airbus models, and the Boeing 777 and Boeing 787. With this ever-increasing technology, something must be done about the reliability issue. With such a wealth of data on aviation and non-aviation cyber-physical systems, such as submarine, ship, nuclear, medical, locomotive and automotive systems, there should be adequate information to get a start on modeling systems correctly for reliability. This is not a problem isolated to avionics, and other disciplines should aid in resolving it.

Along with these examples, there have also been many instances of failures of complex systems or software.
A few examples include the following:

- Multiple crashes occurred with the V-22 Osprey [41].
- In 1999 the Mars Climate Orbiter crashed because of incorrect units in a program, caused by poor systems engineering practices [42, 44].
- In 1988 an Iran Air Airbus A300 was shot down by the USS Vincennes, in part because of cryptic and misleading output displayed by the tracking software [3].
- In 1989 an Airbus A320 crashed at an air show due to altitude indication and software handling [3].
- In 1994 a China Airlines Airbus A300 crashed, killing 264, due in part to faulty software [3].
- In 1996 the first Ariane 5 satellite launcher was destroyed in a mishap caused by a software design error: a few lines of Ada code contained unprotected variables. The horizontal velocity of the Ariane 5 exceeded that of the Ariane 4, and the guidance system veered the rocket off course. Insufficient testing did not catch this error, which was a carry-over from Ariane 4 [3, 39].
- In 1986 a Mexicana Airlines Boeing 727 crashed into a mountain because the software did not correctly determine the mountain's position [39].
- In 1986 the Therac-25 radiation therapy machines overdosed cancer patients due to a flaw in the computer program controlling the highly automated devices [3, 39, 45].
- During the maiden launch of the Columbia space shuttle in 1981, the primary flight control computer system failed to establish sync with the backup during prelaunch [43].
- In 1997 the Mars Pathfinder experienced software resets due to delayed task execution caused by priority inversion on a mutex [3, 44].
- An error in a single FORTRAN statement resulted in the loss of the first American probe to Venus.


From G. J. Myers, Software Reliability: Principles & Practice, p. 25 [3].
- In August 1997 the Korean Air Flight 801 accident in Guam killed 229 of the 254 aboard. A worldwide bug was discovered in barometric altimetry in the Ground Proximity Warning System (GPWS). From ACM SIGSOFT Software Engineering Notes, vol. 23, no. 1 [3].
- The Soviet Phobos I Mars probe was lost due to a faulty software update, at a cost of 300 million rubles. Its disorientation broke the radio link, and the solar batteries discharged before reacquisition. From Aviation Week, 13 Feb 1989 [3].
- An F-18 fighter plane crashed due to a missing exception condition. From ACM SIGSOFT Software Engineering Notes, vol. 6, no. 2 [3].
- An F-14 fighter plane was lost to an uncontrollable spin, traced to tactical software. From ACM SIGSOFT Software Engineering Notes, vol. 9, no. 5 [3].
- In 1989 a Swedish Gripen prototype crashed due to software in its digital fly-by-wire system [3, 46].
- In 1995 another Gripen fighter crashed during an air show, caused by a software issue [3, 46].
- On February 11, 2007, twelve F/A-22 Raptors were forced to head back to Hawaii when a software bug caused a computer crash as they were crossing the International Date Line [47].
- In February 2006 the German-Spanish unmanned combat air vehicle Barracuda crashed due to a software failure [4].
- In December 2004 a glitch in the flight control software probably caused an F/A-22 Raptor stealth fighter to crash on takeoff at Nellis Air Force Base [4].
- In 2008 a United Airbus A320, registration N462UA, experienced multiple avionics and electrical failures, including loss of all communications, shortly after departing Newark Liberty International Airport in Newark, New Jersey [NTSB Report ID: DCA08IA033].
- In 2006 a Malaysia Airlines Boeing 777 jetliner's autopilot caused a stall by climbing 3,000 feet. The pilots struggled to nose the plane down, and it plunged into a steep dive before they regained control. The cause was defective flight software that provided incorrect airspeed and acceleration data, confusing the flight computers and initially ignoring the pilots' commands [49].
- U.S. Army and Air Force UAV crashes from control system or human error.

In Dr. Nancy Leveson's paper, "The Role of Software in Spacecraft Accidents", she cited problems with software development within NASA on various projects [37]. According to Dr. Leveson, there were "flaws in the safety culture, diffusion of responsibility and authority, limited communication channels and poor information flow, inadequate system and software engineering, poor or missing specifications, unnecessary complexity and software functionality, software reuse or changes without appropriate safety analysis, violation of basic safety engineering practices, inadequate system safety engineering, flaws in test and simulation environments, and inadequate human factors design for software".
While these problems were identified and corrected for spacecraft development within NASA, aviation in general could learn from these lessons to mitigate issues with complex systems development.

Current Guidelines and Certification Assessment Considerations

The problems identified establish a need for standards and guidelines. The aviation community has developed guidelines such as the following to provide a path to creating robust and reliable aircraft:

- DO-178B – Software Considerations in Airborne Systems and Equipment Certification
- DO-248B – Final Report for the Clarification of DO-178B
- DO-278 – Guidelines for Communications, Navigation, Surveillance, and Air Traffic Management (CNS/ATM) Systems Software Integrity Assurance
- DO-254 – Design Assurance Guidance for Airborne Electronic Hardware
- DO-297 – Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations
- SAE ARP4754 – Certification Considerations for Highly Integrated or Complex Aircraft Systems
- SAE ARP4761 – Guidelines and Methods for Conducting the Safety Assessment Process on Airborne Systems and Equipment
- FAA Advisory Circular AC 27-1B – Certification of Normal Category Rotorcraft
- FAA Advisory Circular AC 29-2C – Certification of Transport Category Rotorcraft
- ISO/IEC 12207 – Software Life-cycle Processes
- ARINC 653 – Specification Standard for Time and Space Partitioning
- MIL-STD-882D – DoD System Safety
- ADS-51-HDBK – Rotorcraft and Aircraft Qualification Handbook
- AR 70-62 – Airworthiness Release Standard
- ADS-75-SS – Army Aviation
System SafetyAssessments <strong>and</strong> AnalysesADS-48-PRF-Performance Specification forAirworthiness <strong>Qualification</strong> Requirements forOperation <strong>of</strong> Aircraft in Instrument MeteorologicalConditions <strong>and</strong> Civil Instrument Flight Rules10


ADS-64-SP - Airworthiness Requirements for Military Rotorcraft
SED-SES-PMHFSA001 - Software Engineering Directorate (SED) Software Engineering Evaluation System (SEES) Program Manager Handbook for Flight Software Airworthiness
SED-SES-PMHSS001 - SED SEES Program Manager Handbook for Software Safety

Current guidelines such as DO-178B, DO-254, DO-297, SAE-ARP-4754, and SAE-ARP-4761, along with many others, outline the proper steps that should be taken. MIL-STD-882, the military standard for system safety management, has been in use for decades. Civilian safety standards for the aviation industry include SAE ARP4754, which shows how to incorporate system safety activities into the design process and provides guidance on techniques to ensure a safe design. SAE ARP4761 contains significant guidance on how to perform the system safety activities described in SAE ARP4754. DO-178B outlines development assurance guidance for aviation software based on the failure condition categorization of the functionality the software provides. DO-254 embodies similar guidance for aviation hardware. ARINC 653 is a widely accepted standard for ensuring time and space partitioning of software.
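The time-partitioning guarantee behind ARINC 653 rests on a statically defined major frame in which each partition owns fixed execution windows. The sketch below is illustrative only — the partition names and budgets are invented, and this is not the ARINC 653 APEX API — but it captures the invariant such a schedule must satisfy: windows are contiguous, non-overlapping, and exactly fill the major frame.

```python
# Illustrative sketch (not the ARINC 653 API): a static major-frame
# schedule in which each partition owns fixed, non-overlapping windows.
MAJOR_FRAME_MS = 100

# (partition name, window start ms, window duration ms) -- hypothetical values
schedule = [
    ("flight_controls", 0, 50),   # highest-criticality partition
    ("navigation",      50, 30),
    ("maintenance",     80, 20),  # lowest criticality, leftover budget
]

def validate_schedule(windows, major_frame):
    """Check the time-partitioning invariant: windows are contiguous,
    non-overlapping, and exactly fill the major frame."""
    windows = sorted(windows, key=lambda w: w[1])  # order by start time
    cursor = 0
    for name, start, duration in windows:
        if start != cursor:          # gap or overlap before this window
            return False
        cursor = start + duration
    return cursor == major_frame     # frame fully allocated

print(validate_schedule(schedule, MAJOR_FRAME_MS))  # True for this schedule
```

A fixed, statically verified schedule like this is what lets a low-criticality partition fail or overrun without stealing processor time from the flight-critical one.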
DO-297 does an excellent job of describing the certification tasks for an IMA system, which include:

Task 1: Module acceptance
Task 2: Application software/hardware acceptance
Task 3: IMA system acceptance
Task 4: Aircraft integration of IMA systems, including verification and validation
Task 5: Change of modules or applications
Task 6: Reuse of modules or applications

Taken together, these standards provide guidance that, if followed, will likely result in safe, highly reliable and cost-effective systems over the life-cycle of the system. Do we already have enough standards, but simply fail to place them on contract and test to them? Perhaps these standards should be consolidated and tailored into a single standard for complex systems, similar to how MIL-STD-586 consolidated several standards, taking only the applicable parts.

While these guidelines exist, there is no consistent industry-wide method to assess a system at any stage of the life-cycle to allow a tradeoff of design alternatives. Nor is there a standard defining overall system reliability that encompasses both hardware and software reliability. To achieve this level of reliability, a standard should be developed that defines the process and method for arriving at a quantifiable reliability number that would, in turn, lead to acceptance.

To execute an AWR, sufficient data must be provided to the appropriate airworthiness authorities to demonstrate that safety requirements are met to the levels defined by the System Safety Assessment (SSA), FHA, FTA, FMECA or equivalent.
The certification process is currently lengthy and depends on substantial human interpretation of the myriad functions of a complex architecture. Something needs to be done to remedy this without adversely affecting the cost and schedule of programs developing complex systems.

Architectural Modeling and Tools to Mitigate Issues

As mentioned in the introduction, the overall goal of this paper is to state the need for an industry standard method for qualifying complex integrated systems to a specified reliability. While not suggested as a total solution, better modeling practices should be considered as part of a solution to bridge the gap between design, test and implementation. A method to model the architecture early in the requirements establishment phase, follow it through detailed design and coding, and then verify the system alongside the model may be a path to greater confidence in the system and may reduce the risks, warnings and cautions that must be issued.

To achieve this, systems must be broken down to component levels and built up to subsystem and system levels, yielding an overall aggregated system reliability value (see Figure 9). The goal should be the ability to assess reliability at the component, subsystem and then system level, with each phase working toward a higher Technical Readiness Level (TRL). The end result would feed into the accepted Type Certificate (TC) or AWR.

To achieve this goal, modeling and analysis tools that follow a standard for modeling reliability should exist. As previously stated, over 200 such tools have been created since the 1970s.
Here is a list of a few of the current tools:

Universal Systems Language (USL)
Unified Modeling Language (UML)
Systems Modeling Language (SysML)
MATLAB/Simulink
Telelogic Rhapsody
MathCAD
Colored Petri Nets
Rate Monotonic Analysis (RMA)
STATEMATE (used by Airbus)
Standard for the Development of Safety-Critical Embedded Software (SCADE)
OPNET
Embedded System Modeling Language (ESML)
Component Synthesis using Model-Integrated Computing (CoSMIC)
Architectural Analysis and Design Language (AADL)

By no means is this list complete. There is currently no concrete certification process for any manufacturer or subcontractor; they self-certify in-house processes, from machining to soldering to software, against processes like ISO9001 and CMMI. Typically, different companies and projects address this challenge by choosing their own tools to perform the upfront analysis and modeling, without following a standard approach. These tools need to converge on, or be compatible with, an established modeling standard for complex systems. Compliant tools could then be used to verify requirements up front to mitigate reliability issues later in the life-cycle.

A notional approach would follow that shown in Figure 10, where the system V is followed but architectural modeling and analysis proceed in parallel with the actual development effort. This would allow reliability to be measured during the design phase and again during the implementation, test and verification phase using the model, bridging the design and test phases together. It is emphasized here that the architectural model would not replace critical testing but would augment the process to allow for better requirement identification and verification. Thorough ground and flight tests should never be replaced by modeling; modeling would only allow for more robust requirements and a higher level of confidence in the requirements and design. The model could be used in conjunction with testing to confirm the design.
Proper modeling and analysis would reduce total program costs by enforcing more complete and correct initial requirements, reducing the issues that would otherwise be discovered later in testing, where they are expensive or impossible to fix and force programs to accept high risks. Additionally, if the model is maintained and kept current, it could be used after system deployment to analyze the impacts of upgrades or changes to the system, allowing for more complete analysis and reducing overall system redesign costs.

A hurdle to cross with modeling and analysis is convincing people to believe the models. Some method to certify these models and modeling tools should be addressed in the future, and standards should be established for correct modeling techniques for complex systems. Lastly, standard verification checking tools should be considered, such as the Motor Industry Software Reliability Association (MISRA) compliance verification tools for the use of C in safety critical systems.

Summary

In conclusion, methods for achieving a design for complex systems do exist; however, methods for achieving reliability and attaining a level of qualification that would permit better system integrity and a basis of qualification and test do not. It is critical that a standard be developed to tackle this issue rather than relying on current methods to ascertain the airworthiness of complex systems. This will not occur overnight.
If it is not addressed, systems will continue to become ever more expensive in both money and schedule.

An orchestrated collaboration among industry, academia, military laboratories, and professional technical societies to develop this standard should allow us to draw upon collective experience to feed into this reliability standard with AWR safety in mind. Long-lived complex software systems exist on the Space Transportation System (STS), the International Space Station (ISS), missile systems, nuclear submarines and ship systems, nuclear control systems, and military and commercial jet systems, from which we should be able to obtain at least inferred software reliability information from the architectures and the run time these systems have accumulated. We should examine the lessons learned from these systems to see what could have been improved and what was done right that should be carried forward. The challenge is collecting this information into a central database and arriving at some figure of reliability from previous systems where data exists. This would at least provide a starting point for initial assessments and could be refined in the future. It would not be an easy task, since the information is dispersed; historical analyses of failures are also subject to error and may be inaccurate or politically sensitive. Additionally, there have been other past studies on establishing reliability metrics for complex software systems, and similar research projects have come and gone. If those studies had merit, their data should not be wasted but studied to feed into whatever standard is developed. Even if they reached a dead end, future study along the same path need not be repeated. While historical information would be useful, each design is unique and requires tools to accomplish the design. Architectural modeling constructs should be further investigated as a possible augmentation to the design and test process.

We need to determine which forum is best to conduct this effort (e.g., SAE, IEEE, AIAA, ACM, AHS, INCOSE, or other). As stated in "Space Shuttle Avionics Systems" [31], "The designers, the flight crew, and other operational users of the system often have a mindset, established in a previous program or experience, which results in a bias against new or different, 'unconventional' approaches". If nothing is done to address this problem, it will only get worse over time. It is past time to address the issue of reliability of complex systems and software.
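As a toy illustration of the "inferred reliability from accumulated run time" idea above, the sketch below fits the simplest possible model — a constant failure rate estimated from fleet history — and evaluates mission reliability. Real software failure data rarely satisfies the exponential assumption, and every number here is invented; this is only the kind of first-cut starting point the text describes.

```python
import math

# Hedged sketch: inferring a crude failure rate from historical fleet
# data, assuming an exponential (constant-rate) model -- a strong
# assumption, offered only as the starting point the text describes.
def inferred_reliability(operating_hours, failures, mission_hours):
    """Point-estimate failure rate lambda = failures / hours, then
    evaluate R(t) = exp(-lambda * t) for a mission of length t."""
    lam = failures / operating_hours
    return math.exp(-lam * mission_hours)

# Hypothetical fleet record: 200,000 flight hours, 4 relevant failures.
r = inferred_reliability(200_000, 4, mission_hours=10)
print(round(r, 6))  # prints 0.9998
```

Such a figure is only a starting estimate; it would need confidence bounds and scrutiny of how comparable the historical system is to the new design before feeding any qualification decision.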


Figure 9 - The Aggregation of Components to Systems Level Reliability

Figure 10 - Notional System V Development Curve with Architectural Modeling and Analysis Coupled
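The roll-up of component reliabilities into subsystem and system values shown in Figure 9 can be sketched with textbook reliability block-diagram math: for independent components, series blocks multiply, and redundant (parallel) blocks combine through their unreliabilities. The architecture and numbers below are hypothetical, purely to show the aggregation mechanics.

```python
from functools import reduce

def series(rels):
    """Reliability of independent blocks that must all work."""
    return reduce(lambda a, b: a * b, rels, 1.0)

def parallel(rels):
    """Reliability of redundant blocks where any one suffices:
    1 - product of the individual unreliabilities."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), rels, 1.0)

# Hypothetical subsystem: dual-redundant flight computers in series
# with a single data bus, then rolled up with one more system element.
fc_pair   = parallel([0.999, 0.999])   # redundancy raises this to 0.999999
subsystem = series([fc_pair, 0.9995])  # the single bus drags it back down
system    = series([subsystem, 0.9999])
print(round(system, 6))                # aggregated system-level value
```

The same arithmetic, applied bottom-up over a full architecture model, yields the single aggregated system reliability number the paper argues a qualification standard should require.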


Acronym List

AADL - Architectural Analysis and Design Language
AC - Advisory Circular (FAA)
ACM - Association for Computing Machinery
AED - Aviation Engineering Directorate (AMRDEC)
AFTD - Aviation Flight Test Directorate (US Army)
AGC - Apollo Guidance Computer
AHS - American Helicopter Society
AIAA - American Institute of Aeronautics and Astronautics (Inc.)
AMCOM - Aviation and Missile Command (US Army)
AMRDEC - Aviation and Missiles Research, Development and Engineering Center (US Army)
AR - Army Regulation
ARINC - Aeronautical Radio Inc.
ARP - Aerospace Recommended Practice
ASIF - Avionics Software Integration Facility
ATAM - Architecture Tradeoff Analysis Method
ATM - Air Traffic Management
AWR - Airworthiness Release
CAAS - Common Avionics Architecture System
CH-47 - Cargo Helicopter, Chinook
CMM - Capability Maturity Model
CMMI - Capability Maturity Model Integration
CMU - Carnegie Mellon University
CNS - Communications, Navigation, Surveillance
CoSMIC - Component Synthesis using Model-Integrated Computing
CPS - Cyber-Physical System
CRC - Chemical Rubber Company (i.e., CRC Press)
DFBW - Digital Fly-By-Wire
DoD - Department of Defense
E3 - Electrical and Electromagnetic Effects
ESML - Embedded System Modeling Language
FAA - Federal Aviation Administration
FCS - Future Combat Systems
FHA - Functional Hazard Assessment
FMECA - Failure Mode, Effects and Criticality Analysis
FTA - Fault Tree Analysis
GPWS - Ground Proximity Warning System
IBM - International Business Machines
IEC - International Engineering Consortium
IL - Instrumentation Lab (now Draper Laboratory)
IMA - Integrated Modular Avionics
INCOSE - International Council on Systems Engineering
ISO - International Organization for Standardization
ISS - International Space Station
KAL - Korean Airlines
MISRA - Motor Industry Software Reliability Association
MIT - Massachusetts Institute of Technology
NASA - National Aeronautics and Space Administration (USA)
PDR - Preliminary Design Review
PEO - Program Element Office
PNAS - Proceedings of the National Academy of Sciences
RAQ - Rotorcraft and Aircraft Qualification
RMA - Rate Monotonic Analysis
RTC - Redstone Test Center (US Army)
RTTC - Redstone Technical Test Center (US Army)
RTCA - Radio Technical Commission for Aeronautics
SAE - Society of Automotive Engineers
SCADE - Safety Critical Application Development
SED - Software Engineering Directorate (AMRDEC)
SEES - Software Engineering Evaluation System
SEI - Software Engineering Institute (CMU)
SIL - System Integration Laboratory
SSA - System Safety Assessment
STS - Space Transportation System
SysML - Systems Modeling Language
TC - Type Certificate
TMR - Triple Modular Redundant
QMR - Quad Modular Redundant
TRL - Technical Readiness Level
UAS - Unmanned Aircraft System
UAV - Unmanned Air Vehicle
UH-60 - Utility Helicopter, Blackhawk
UML - Unified Modeling Language
U.S. - United States
USL - Universal Systems Language

Reference Documents

ADS-51-HDBK - Rotorcraft and Aircraft Qualification Handbook
ADS-64-SP - Airworthiness Requirements for Military Rotorcraft


ADS-48-PRF - Performance Specification for Airworthiness Qualification Requirements for Operation of Aircraft in IMC and Civil Instrument Flight Rules
ADS-75-SS - Army Aviation System Safety Assessments and Analyses
AMCOM-385-17 - Safety Standard
AR-70-62 - Airworthiness Qualification of Aircraft Systems
AR-95-1 - Aviation Flight Regulations
ARINC 653 - Specification Standard for Time and System Partition
DO-178B - Software Considerations in Airborne Systems and Equipment Certification
DO-248B - Final Report for the Clarification of DO-178B
DO-254 - Design Assurance Guidance for Airborne Electronic Hardware
DO-278 - Guidelines for Communications, Navigation, Surveillance, and Air Traffic Management (CNS/ATM) Systems Software Integrity Assurance
DO-297 - Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations
FAA-AC-25.1309 - Equipment, Systems, and Installations in Part 23 Airplanes
FAA-AC-27-1B - Advisory Circular - Certification of Normal Category Rotorcraft
FAA-AC-29-2C - Advisory Circular - Certification of Transport Category Rotorcraft
ISO/IEC-12207 - Software Life-cycle Processes
MIL-STD-461 - Requirements for the Control of Electromagnetic Interference Characteristics of Subsystems and Equipment
MIL-STD-810 - Department of Defense Test Method Standard for Environmental Engineering Considerations and Laboratory Tests
MIL-STD-882D - Department of Defense System Safety
PMHFSA-001 - SED SEES Program Manager Handbook for Flight Software Airworthiness
PMHSS-001 - SED SEES Program Manager Handbook for Software Safety
SAE-ARP-4754 - Certification Consideration for Highly Integrated or Complex Aircraft Systems
SAE-ARP-4761 - Guidelines and Methods for Conducting the Safety Assessment Process on Airborne Systems and Equipment

Bibliography

[1] Israel Koren and Mani Krishna, "Fault-Tolerant Systems", Morgan Kaufmann, 2007
[2] Jiantao Pan, "Software Reliability", Carnegie Mellon University, Spring 1999
[3] Nachum Dershowitz, http://www.cs.tau.ac.il/~nachumd/horror.html
[4] http://www.air-attack.com
[5] David A. Mindell, "Digital Apollo: Human and Machine in Spaceflight", The MIT Press, 2008
[6] Software Engineering Directorate: Software Engineering Evaluation System (SEES), "Program Manager Handbook for Flight Software Airworthiness", SED-SES-PMHFSA 001, December 2003
[7] Software Engineering Directorate: Software Engineering Evaluation System (SEES), "Program Manager Handbook for Software Safety", SED-SES-PMHSSA 001, February 2006
[8] AMCOM Regulation 385-17, Software System Safety Policy, 15 March 2008
[9] NASA Software Safety Guidebook, NASA-GB-8719.13, 31 March 2004
[10] Margaret Hamilton and William Hackler, "Universal Systems Language: Lessons Learned from Apollo", IEEE Computer, IEEE Computer Society, December 2008, http://www.htius.com/Articles/r12ham.pdf
[11] M. Hamilton, "001: A Full Life Cycle Systems Engineering and Software Development Environment; Development Before the Fact in Action", cover story, special editorial supplement, 22ES-30ES, Electronic Design, June 1994, http://www.htius.com/Articles/Full_Life_Cycle.htm
[12] Peter Feiler, David Gluch, John Hudak, "The Architecture Analysis & Design Language (AADL): An Introduction", CMU/SEI-2006-TN-011, February 2006
[13] Peter Feiler, John Hudak, "Developing AADL Models for Control Systems: A Practitioner's Guide", CMU/SEI-2007-TR-014, July 2007
[14] Bruce Lewis, "Using the Architecture Analysis and Design Language for System Verification and Validation", SEI Presentation, 2006
[15] Feiler, Gluch, Hudak, Lewis, "Embedded System Architecture Analysis Using SAE AADL", CMU/SEI-2004-TN-004, June 2004
[16] Charles Pecheur, Stacy Nelson, "V&V of Advanced Systems at NASA", NASA/CR-2002-211402, April 2002
[17] Systems Integration Requirements Task Group, "ARP 4754: Certification Considerations for Highly-Integrated or Complex Aircraft Systems", SAE Aerospace, 10 April 1996
[18] SAE, "ARP 4761: Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment", December 1996
[19] Aeronautical Radio Inc. (ARINC), "ARINC Specification 653P1-2: Avionics Application Software Standard Interface Part 1 - Required Services", 7 March 2006
[20] Department of Defense, "MIL-STD-882D: Standard Practice for System Safety", 19 January 1993
[21] RTCA, Incorporated, "DO-178: Software Considerations in Airborne Systems and Equipment Certification", 1 December 1992
[22] RTCA, Incorporated, "DO-254: Design Assurance Guidance for Airborne Electronic Hardware", 19 April 2000
[23] US Army, "Aeronautical Design Standard Handbook: Rotorcraft and Aircraft Qualification (RAQ) Handbook", 21 October 1996
[24] Cary R. Spitzer (Editor), "Avionics: Elements, Software and Functions", CRC Press, 2007
[25] US Army, "Army Regulation 70-62: Airworthiness Qualification of Aircraft Systems", 21 May 2007
[26] US Army, "Army Regulation 95-1: Aviation Flight Regulations", 3 February 2006
[27] Barbacci, Clements, Lattanze, Northrop, Wood, "Using the Architecture Tradeoff Analysis Method (ATAM) to Evaluate the Software Architecture for a Product Line of Avionics Systems: A Case Study", CMU/SEI-2003-TN-012, July 2003
[28] Peter Feiler, "All in the Family: CAAS & AADL", CMU/SEI-2008-SR-021, August 2008
[29] Chrissis, Konrad, Shrum, "CMMI: Guidelines for Process Integration and Product Improvement", Pearson Education, 2007
[30] Brendan O'Connell, "Achieving Fault Tolerance via Robust Partitioning and N-Modular Redundancy", Draper Laboratory, January 2006
[31] John F. Hanaway, Robert W. Moorehead, "Space Shuttle Avionics Systems", NASA SP-504, 1989
[32] Lui Sha, "The Complexity Challenge in Modern Avionics Software", August 14, 2006
[33] "Incidents Prompt New Scrutiny of Airplane Software Glitches", Wall Street Journal, 30 May 2006
[34] Eyal Ophir, Clifford Nass, and Anthony Wagner, "Cognitive Control in Media Multitaskers", PNAS, 20 July 2009
[35] "Advisory Circular AC 25.1309-1A: System Design and Analysis", Federal Aviation Administration, 21 June 1988
[36] Program Element Office (PEO) Policy Memorandum 08-03 (Risk Matrix)
[37] Nancy Leveson, "The Role of Software in Spacecraft Accidents", MIT, AIAA Journal of Spacecraft and Rockets
[38] National Science Foundation webpage on Cyber-Physical Systems, http://www.nsf.gov/pubs/2008/nsf08611/nsf08611.htm
[39] Hoang Pham, "Software Reliability", Springer, 2000
[40] Paul Rook, editor, "Software Reliability Handbook", Elsevier Science Publishers, 1990
[41] http://www.navair.navy.mil/v22/index.cfm?fuseaction=news.detail&id=128
[42] http://mars8.jpl.nasa.gov/msp98/news/mco990930.html
[43] John Garman, "The Bug Heard Around the World", ACM SIGSOFT, October 1981
[44] "Mars Pathfinder Mission Status", July 15, 1997, http://marsprogram.jpl.nasa.gov/MPF/newspio/mpf/status/pf970715.html
[45] Nancy Leveson, "Safeware: System Safety and Computers", Addison-Wesley Publishing Company, 1995
[46] http://www.electronicaviation.com/aircraft/JAS-39_Gripen/810
[47] Brandon Hill, "Lockheed's F-22 Raptor Gets Zapped by International Date Line", DailyTech LLC, February 26, 2007, http://www.freerepublic.com/focus/fnews/1791574/posts
[48] http://www.military.com/news/article/human-error-cited-in-most-uav-crashes.html
[49] Daniel Michaels and Andy Pasztor, "Incidents Prompt New Scrutiny Of Airplane Software Glitches; As Programs Grow Complex, Bugs Are Hard to Detect; A Jet's Roller-Coaster Ride, Teaching Pilots to Get Control", Wall Street Journal, May 30, 2006

Acknowledgements

In addition to the references, the authors would like to express their appreciation to those who reviewed and provided critical input through discussion for this paper: Dr. William Lewis, David Cripps, Kevin Rotenberger, Fred Reed, Yen Fuqua, Michael Walsh, Wayne Alan Garrison, Ward Saemisch, Larry Fisher, John Byerly, Dan Francis, Glenn Carter, Raquel Gardner, Adam Robinson, Mazen Khazendar, Michael Dreyer, Bruce Lewis, Josh McNeil, Melvin Johnson, Steven Hosner, Michael Kidd, Randy Crockett, Margaret Hamilton, Dr. Paul Miner, Susan DeGuzman, David Homan, Dr. Jose Lopez, Michael Aucoin, Roger Racine, Gary Schwarz, Mervin Brokke, Amy Boydston, and the helpful staff at the Redstone Scientific Information Center (RSIC). It should be noted that the views of the reviewers do not necessarily agree with those of the author(s).

About the Authors

Alex Boydston has over 17 years of experience in Department of Defense, NASA and commercial telecommunication systems, test, and software engineering. He has held engineering positions at Teledyne Brown Engineering, SCI, ADTRAN, Draper Laboratory, and is now employed with the United States Army Aviation Engineering Directorate, all in Huntsville, AL. From 1990 to 1992 Alex was a cooperative education student in Advanced Space Programs with Teledyne Brown Engineering, working on the


Space Station Furnace Facility and Protein Crystal Growth Facility. From 1992 to 1994 he was a Communications and Systems Test Engineer for Teledyne Brown Engineering on the National Missile Defense program. From 1994 to 1996 Alex was the lead systems integrator of experiments and hardware for the Space Shuttle Middeck Glovebox. In 1996 he worked as a Tactical Systems Electronics Engineer for SCI, Inc., working on the audio systems for the F-16 and F-18 aircraft. From 1996 to 2007 he worked at ADTRAN in various roles in Technical Support (2 years), Product Qualification Test (5 years), and Embedded Software Design Engineering (4 years) for enterprise and carrier-grade data and voice communication systems. From 2007 to 2008 he served as a lead Flight Processing Architect for the Constellation Ares I rocket for Draper Laboratory at Marshall Space Flight Center. In 2009 Alex began working for the U.S. Army as an Electronics Engineer in the Avionics Branch of Mission Equipment for the Aviation Engineering Directorate. Mr. Boydston holds a Bachelor of Science (B.S.) degree in Electrical Engineering (1992) and a Master of Science (M.S.) degree in Electrical Engineering (2001), both from the University of Alabama in Huntsville.

Dr. William D. Lewis was selected for the Senior Executive Service in July 2006. As the Director of Aviation Engineering, he is the airworthiness authority for Army aircraft and provides matrix support to customers. Dr. Lewis' direct customers are the Program Executive Officer Aviation Program/Project/Product Managers.
The ultimate customers are the Army aircraft crews, passengers, and maintainers who operate Army aviation systems. In this capacity, he ensures flight safety of all Army aircraft via design and flight restrictions. Additionally, he is responsible for the design and performance verification of aircraft and equipment to support Army aviation. From 2004 to the present Dr. Lewis has been the Acting Director, Aviation Engineering Directorate, U.S. Army Aviation and Missile Research, Development, and Engineering Center, Redstone Arsenal, AL. In 2004 he was Branch Chief, Flight Controls, Aeromechanics Division, Aviation Engineering Directorate, U.S. Army Aviation and Missile Research, Development, and Engineering Center, Redstone Arsenal, AL. From 2003 to 2004 he was Chief Engineer, RAH-66 Comanche Program, U.S. Army Aviation and Missile Research, Development, and Engineering Center, Redstone Arsenal, AL. From 2002 to 2003 he was a Systems Engineer, Comanche Program, U.S. Army Aviation and Missile Research, Development, and Engineering Center, Redstone Arsenal, AL. From 1996 to 2002 he was a Professor at the University of Tennessee, Tullahoma, TN. From 1992 to 1996 he was a Product Manager with the US Army Aviation Systems Command, Ft. Eustis, VA (retired from the Army after 21 years of service at the rank of LTC). From 1989 to 1992 he was a Simulation Engineer in Atlanta, GA. From 1986 to 1989 he was an Experimental Test Pilot, US Army Aviation Systems Command, Edwards Air Force Base, CA. From 1983 to 1985 Dr. Lewis was an Aerospace Engineer with the US Army Aviation Systems Command, St. Louis, MO. Dr. Lewis holds a Ph.D. in Aerospace Engineering from the Georgia Institute of Technology, an M.S. in Aviation Management from Embry-Riddle Aeronautical University, and a B.S. from the United States Military Academy at West Point.

DISCLAIMER

Presented at the AHS Specialists' Meeting on Systems Engineering, Hartford, CT, October 15-16, 2009. This material is declared a work of the U.S. Government and is not subject to copyright protection. Approved for public release; distribution unlimited. Review completed by the AMRDEC Public Affairs Office 21 Sep 2009; FN4208. Reference herein to any specific commercial, private or public products, process, or service by trade name, trademark, manufacturer, or otherwise does not constitute or imply its endorsement, recommendation, or favoring by the United States Government. The views and opinions expressed herein are strictly those of the authors and do not represent or reflect those of the United States Government.
