Enhancing MIL-HDBK-217 Reliability Predictions with Physics of Failure Methods

James G. McLeish, CRE


Key Words: Physics of Failure, Reliability Assessment, Reliability Prediction, MIL-HDBK-217, Design for Reliability


The Defense Standardization Program Office (DSPO) of the U.S. Department of Defense (DoD) has initiated a multiphase effort to update MIL-HDBK-217 (217), the military’s often imitated and frequently criticized reliability prediction bible for electronics equipment. This document, based on empirical models fitted to field data, has not been updated since 1995. The lack of updates led to expectations that its statistically based empirical approach would be phased out, especially after science-based Physics of Failure (PoF) [a.k.a. Reliability Physics] research led Gilbert F. Decker, Assistant Secretary of the Army for Research, Development and Acquisition, to declare that MIL-HDBK-217 was not to appear in Army RFP acquisition requirements because it had been “shown to be unreliable and its use can lead to erroneous and misleading reliability predictions” [1].

Despite such criticism, MIL-HDBK-217 is now being updated as part of the recent climate within the DoD to reembrace RAM (Reliability, Availability, Maintainability) methods [2, 3]. This paper reviews the reasons for the document’s revival and update, along with the primary concerns over its shortcomings.

The team working to update MIL-HDBK-217 developed a proposal for resolving the limitations of empirical reliability prediction. A hybrid approach was developed in which improved and more holistic empirical MTBF models would be used for comparison evaluations during a program’s acquisition-supplier selection activities. Science-based PoF reliability modeling, combined with probabilistic mechanics techniques, is then proposed for use during the actual system design-development phase to evaluate and optimize the stress and wearout limitations of a design, in order to foster the creation of highly reliable, robust E/E systems.

A proposal for incorporating this approach into a future Revision H of MIL-HDBK-217 has been submitted to the DoD-DSPO, where plans for implementing and funding this proposal are now being considered. This paper reviews the concepts of how PoF methods can coexist with empirical prediction techniques in MIL-HDBK-217, discussed from the point of view of a member of the MIL-HDBK-217 revision team. The author wishes to thank the leaders and team members of the 217 workgroup for their contributions.


The two empirical reliability prediction methods defined in MIL-HDBK-217, known as “Part Count” and “Part Stress,” are used to estimate the average life of electronic equipment in terms of its Mean Time Between Failures (MTBF), which is the inverse of the failure rate λ (lambda). In the “Part Count” method, the MTBF value is determined by taking the inverse of the sum of the failure rates (from generic tables) for each component in an electronic device (see Equation 1).

MTBF = 1 / λ_EQUIP,  where  λ_EQUIP = Σ N_i (λ_g π_Q)_i  for i = 1 to n          (Equation 1)

Here N_i is the quantity of the i-th generic part type, λ_g is its generic failure rate from the handbook tables, π_Q is its quality factor, and n is the number of different part categories in the equipment.

These basic failure rates can then be scaled to account for the average increase in failure rate caused by operating under harsh environmental conditions such as ground mobile, naval, airborne, missile, or space. MIL-HDBK-217 recognizes 14 generalized environment categories. The “Part Stress” method provides additional generic scaling factors intended to account for the reliability degradation effects of usage stresses such as power, voltage, and temperature. The stress factors cannot be used for a prediction until the program has matured to the point that these stresses can be quantified by circuit simulation tools or by parametric measurements from functional design prototypes. Stress factors may be used earlier in the design process through component derating guidelines, which establish stress rules for components in a particular circuit application.
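The arithmetic of the “Part Count” method can be sketched in a few lines of Python. The part list, failure rates, quality factors, and environment factor below are invented placeholders for illustration, not values from the handbook’s tables.

```python
# Illustrative sketch of the MIL-HDBK-217 "Part Count" calculation.
# All numeric values are made-up placeholders, NOT handbook data.

parts = [
    # (part type, quantity, generic failure rate per 1e6 h, quality factor)
    ("microcircuit",   4, 0.050, 1.0),
    ("resistor",     120, 0.002, 1.0),
    ("capacitor",     60, 0.010, 3.0),
]

pi_env = 4.0  # hypothetical environment factor (e.g., ground mobile)

# Sum the scaled component failure rates, then invert to get MTBF.
lambda_total = sum(n * lam * pi_q for _, n, lam, pi_q in parts) * pi_env
mtbf_hours = 1e6 / lambda_total  # rates were in failures per 1e6 hours

print(f"lambda = {lambda_total:.3f} failures / 1e6 h, MTBF = {mtbf_hours:,.0f} h")
```

Note that the result is a single constant-rate MTBF; as discussed below, it says nothing about infant mortality or wearout trends.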


There are numerous concerns about the empirical reliability prediction methods defined in MIL-HDBK-217. The primary criticisms, which have been covered thoroughly in other publications [4, 5], are summarized below:

1) The handbook’s reliability predictions are based solely on constant failure rates, which model only random failure situations. Constant failure rates are used because they simplify failure data collection and calculations, a necessity back in the pre-computer world of the 1950s and 1960s when these prediction methods were first developed. When failure trends are modeled as purely random events via the exponential distribution, infant mortality and wearout related failures are not accounted for. Tabulation errors, where infant mortality and wearout issues are tallied as random failures, are another risk of this scheme. Significant errors can then occur when reliability predictions are made using the exponential distribution with such contaminated (mixed failure mode) base data. These inaccuracies can misdirect reliability improvement efforts away from more effective quality and durability improvement activities.

2) Empirical reliability predictions typically correlate poorly to actual field performance since they do not account for the physics or mechanics of failure. Hence, they cannot provide insight for controlling actual failure mechanisms, and they are incapable of evaluating new technologies that lack a field history on which to base projections.

3) The models are based upon industry-wide average failure rates that are not vendor, device, or event specific.

4) The MTBF results provide no insight into the starting point, growth rate, or distribution range of true failure trends. Also, the MTBF concept is often misinterpreted by people without formal reliability training.

5) Overemphasis on the Arrhenius model and steady-state temperature as the primary factor in electronic component failure, while the roles of key stress factors such as temperature cycling, humidity, vibration, and shock have not been individually modeled [6, 7, 8].

6) Overemphasis on component failures, despite RIAC (formerly RAC) data showing that at least 78% of electronic failures are due to other issues that are not modeled, such as design errors, PCB assembly defects, solder and wiring interconnect failures, PCB insulation resistance and via failures, and software errors [9].

7) The last 217 update was in 1995; new components, technology advancements and quality improvements since then are not covered, leaving it grossly out of date. For example, the microcircuit model was last updated in 1992, and the data used to develop it came from parts manufactured in or before 1991, the majority of it from the 1980s [10]. The connector model dates back to 1985 and used data that was already 20 years old [5].

8) The 217 handbook needs to be kept up to date with regularly scheduled releases of new data. This is an enormous task, further complicated by the creation of each new device and component family that needs to be tracked. The maintenance effort is underfunded; the current 217 data is over 15 years out of date, and the handbook is unable to keep pace with today’s continuous quality/reliability improvement processes that rapidly make components more reliable.
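Criticism 1) can be demonstrated with a short simulation: fitting a constant failure rate (exponential model) to lifetimes that are actually wearout dominated badly distorts early-life predictions. The Weibull parameters and mission time below are arbitrary illustrations, not data from any handbook.

```python
import math, random

random.seed(1)

# Simulate wearout-dominated lifetimes: Weibull with shape beta = 3
# (increasing hazard) and characteristic life eta = 10,000 h.
beta, eta = 3.0, 10_000.0
lifetimes = [random.weibullvariate(eta, beta) for _ in range(100_000)]

# A constant-failure-rate (exponential) model fitted to the same data
# simply uses the sample mean as the MTBF.
mtbf = sum(lifetimes) / len(lifetimes)

t = 2_000.0  # a point early in the service life
actual = sum(1 for x in lifetimes if x <= t) / len(lifetimes)
exponential_pred = 1 - math.exp(-t / mtbf)

print(f"fraction failed by {t:.0f} h: actual {actual:.4f}, "
      f"exponential model predicts {exponential_pred:.4f}")
```

The exponential model predicts roughly 20% of units failed by 2,000 hours, while the simulated wearout population has lost under 1%; the same mismatch, reversed, appears late in life, which is exactly how mixed failure mode data misdirects improvement effort.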


The MIL-HDBK-217 Rev. G update is authorized under DoD Acquisition Streamlining and Standardization Information System (ASSIST) Project # SESS-2008-001, a DoD-DSPO initiative. The Naval Surface Warfare Center (NSWC) Crane Division is managing the project. The 217 Rev G Work Group (217WG) kick-off meeting was held on May 8th, 2008 in Indianapolis, IN [11]. It was attended by 23 administrators and reliability professionals from the Navy, the Air Force, defense contractors, RAMS software providers, consultants and test labs. The Army, which no longer uses 217, did not participate. The project’s objectives were to:

  • Refresh the data for today’s electronic part technologies.
  • Not produce a new reliability prediction approach; models could be reviewed and modified if needed, but should generally remain intact.
  • Keep the handbook looking and working the same so reliability engineers would not have to learn a new tool.
  • Keep 217 a paper document, despite the obvious need for a web-based, living electronic failure rate database essential for staying current with the rapid, continuous advancements in electronics.
  • Perform Rev G on a shoestring budget relying on volunteers, contrary to past revision efforts in which universities and research organizations were contracted to make the revisions.

The objective of the Rev G project was not to develop a better, more accurate reliability prediction tool or to produce more reliable systems. The actual goal was to return to a common and consistent method for estimating the inherent reliability of an eventually mature design during acquisition, so that competitive designs could be evaluated by a common process. To support this position, data from an NSWC Crane survey was cited showing that the majority of respondents use 217 for reliability prediction and want to continue using it in its current format. The next most used methods were PRISM, 217Plus, and Telcordia SR-332.

The 217 Rev F failure rate data and models are a frozen snapshot of conditions from over 15 years ago and are well out of date. Many organizations attempted to improve their reliability estimates by using modified or alternative prediction methods. These varied from using the 217 models with their own component failure rate data, to using alternative empirical models such as SR-332, the European FIDES method, or the RAC PRISM (later renamed RIAC 217Plus) method, to using PoF techniques. These efforts to make better predictions and more reliable systems were encouraged by many reliability professionals. However, the diversity made it difficult for acquisition personnel and program managers to evaluate contractors and their products.

Concerns were raised at the kickoff meeting that a survey of current users of empirical prediction methods failed to capture the views of those who had stopped using them. There was considerable discussion that the reasons for dissatisfaction and discontinued use, and the diverse individual efforts to find better methods, should be weighed equally with the acquisition needs when defining the goals for updating 217 for the first time in many years. It was felt that the high survey ranking of MIL-HDBK-217 empirical methods was partly due to the lack of an effort to develop and sanction a better method to replace empirical reliability prediction methodologies.

A concept for starting a second effort to develop better reliability prediction methods after the 217 Rev G effort was proposed. The discussions at the kick-off meeting led to accelerating this proposal and expanding the project into multiple phases. The original Rev G effort to update the current failure rate models and data would continue as the Phase I effort. A Phase II task was created to research and define a proposal for an improved reliability prediction methodology and the best means to implement it. Upon acceptance of the Phase II plan, a Phase III effort would be created to implement it, becoming MIL-HDBK-217 Rev. H.


In researching alternative reliability prediction methods, the 217WG leveraged a Quality Function Deployment (QFD) approach using data collected by the Aerospace Vehicle Systems Institute (AVSI) AFE 70 Reliability Framework and Roadmap project, which compiled and documented the needs of potential users and correlated them into functions and tasks for achieving the objective. QFD is a widely used tool that helps project teams sort out, identify and prioritize the requirements for complex issues in order to create new products or services that incorporate key quality characteristics from the viewpoint of potential customers or end users. The results are documented in a matrix format known as the House of Quality.

(Figure 1: House of Quality matrix from the QFD analysis of reliability prediction user needs)

The QFD analysis identified that a more holistic approach to reliability prediction was needed, one that could more accurately evaluate the risks of specific issues in addition to overall reliability. Also needed were a way to evaluate the time to first failure in addition to MTBF, and a way to deal with the constant emergence of new technologies that did not require years of field performance before reliability predictions could be made. After considerable evaluation, the Phase II team converged on two basic approaches: 1) improve the empirical reliability prediction approach, and 2) embrace and standardize the science-based Physics of Failure (PoF) approach, in which deterministic cause-and-effect relationships are analyzed using fundamental engineering principles.

After further deliberation over the strengths and weaknesses of each approach, it became evident that neither method alone could resolve all reliability prediction issues or satisfy the needs of every user group, and that a two-part hybrid approach should be considered. An updated and improved empirical approach based on the RIAC 217Plus methodology was proposed to provide preliminary module or system level reliability estimates based on historical component failure rates. This approach would support acquisition comparison and program management activities during the early stages of an acquisition program.

The proposed second part would define Physics of Failure modeling for use during the actual engineering design and development phases of a program. These methods would be used to assess the susceptibility and durability of design alternatives to various failure mechanisms under the intended usage profile and application environment. In this way, items that lack the durability and reliability required for an application can be screened out early and at low cost during the design phase, resulting in more reliable military hardware and systems. Since the 217Plus approach has been well defined in other publications [12], the rest of this paper provides an overview of the PoF approach proposed for 217 Rev H.


The Physics of Failure (also known as Reliability Physics) approach applies analysis early in the design process to predict the reliability and durability of design alternatives in specific applications. This enables designers to make design and manufacturing choices that minimize failure opportunities, producing reliability optimized, robust products. PoF focuses on understanding the cause-and-effect physical processes and mechanisms that cause degradation and failure of materials and components. It is based on analyzing the loads and stresses in an application and evaluating the ability of materials to endure them from a strength and mechanics-of-materials point of view. This approach integrates reliability into the design process via a science-based process for evaluating materials, structures and technologies.

These techniques, known as load-to-strength interference analysis, have been used for centuries. They are a basic part of mechanical, structural, construction and civil engineering processes. Unfortunately, during the early development and evolution of Electrical/Electronics (E/E) technology in the 1950s and 1960s, this approach was not used, since electrical engineers were not trained in or familiar with structural analysis techniques, and the miniaturization of electronics had not yet reached the point where structural and strength optimization was required. Also, as with any new technology, the reasons for failures were not initially well understood. Research into E/E failures was slow and difficult because, unlike mechanical and structural items, most E/E failures are not readily apparent to the naked eye: most components are microscopic and electrons are not visible.

Due to these difficulties, empirical probabilistic reliability methods were adopted instead and became so entrenched that the development of better alternatives was stifled.

Over the last 25 years, great progress has been made in PoF modeling and the characterization of E/E material properties. By adapting the techniques of mechanical and structural engineering, computerized durability simulations of E/E devices using deterministic physics and chemistry models are now possible and become more practical and cost effective every year. Failure analysis research has led PoF methods to be organized around 3 generic root cause failure categories: Errors and Excessive Variation, Wearout Mechanisms, and Overstress Mechanisms.

Overstress Failures

Overstress failures such as yield, buckling and electrical surges occur when the stresses of the application rapidly or greatly exceed the strength or capabilities of a device’s materials, causing immediate or imminent failure. In items that are well designed for the loads in their application, overstress failures are rare and random. They occur only under conditions that are beyond the design intent of the device, such as acts of God or war, for example being struck by lightning or submerged in a flood. Overstress is the PoF engineering view of random failures from traditional reliability theory. If overstress failures occur frequently, then either the device was not suited for the application or the designer underestimated the range of application stresses. PoF load-stress analysis is used to determine the strength limits of a design for stresses like shock and electrical transients and to assess whether they are adequate.
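As a minimal sketch of the load-to-strength interference idea described above, the snippet below treats a single load event and the corresponding material strength as normal distributions and computes the probability that load exceeds strength. All the numbers and the solder joint scenario are hypothetical illustrations.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical example: shock-induced stress vs. solder joint strength,
# both modeled as normal distributions (values are illustrative only).
mu_load, sd_load = 40.0, 8.0        # MPa
mu_strength, sd_str = 80.0, 10.0    # MPa

# P(failure) = P(S - L < 0), where S - L is normal with
# mean (mu_S - mu_L) and std dev sqrt(sd_S^2 + sd_L^2).
margin = mu_strength - mu_load
sd_diff = math.sqrt(sd_str**2 + sd_load**2)
p_fail = normal_cdf(-margin / sd_diff)

print(f"interference failure probability per load event: {p_fail:.2e}")
```

The same calculation shows why widening either distribution (more load variation, or less consistent strength) raises the overstress risk even when the means are unchanged.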

Wearout Failures

Wearout in PoF is defined as stress-driven damage accumulation in materials, which covers failure mechanisms like fatigue and corrosion. Numerous methods for stress analysis in structural materials have been developed by mechanical engineers. These techniques are readily adapted to the microstructures of electronics once material properties have been characterized. PoF wearout analysis does more than estimate the mean time to wearout failures for an assembly. It identifies the components or features in a device most likely to fail 1st, 2nd, 3rd, etc., along with their times to first failure and their related fallout rates afterwards, for various wearout mechanisms. This enables designers to determine which (if any) items are prone to various types of wearout during the intended service life of a new product. The design can then be optimized until susceptibility to wearout risks during the desired service life is designed out.
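A common wearout calculation of this kind combines a Coffin-Manson style thermal cycling fatigue relation with Miner’s linear damage rule. The sketch below uses made-up constants and an invented mission profile purely to show the bookkeeping; a real analysis calibrates the constants to the materials and geometry involved.

```python
# Sketch of a solder-joint thermal fatigue wearout estimate using a
# simplified Coffin-Manson relation plus Miner's linear damage rule.
# The constants a and m are illustrative placeholders, not calibrated.

def cycles_to_failure(delta_t, a=3.5e6, m=2.0):
    """Simplified Coffin-Manson: N_f = a * (delta_T)^-m."""
    return a * delta_t ** -m

# Hypothetical mission profile: (temperature swing in deg C, cycles/year)
profile = [(60.0, 365), (30.0, 1000)]

# Miner's rule: sum the fractional damage per year; failure at damage = 1.
damage_per_year = sum(n / cycles_to_failure(dt) for dt, n in profile)
years_to_wearout = 1 / damage_per_year

print(f"predicted thermal-fatigue life: {years_to_wearout:.1f} years")
```

Because the damage is summed per mechanism and per site, the same loop run over every joint on a board produces exactly the ordered “1st, 2nd, 3rd to fail” ranking described above.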

Errors and Excessive Variation Related Failures

Errors and excessive variation issues comprise the PoF view of the traditional concept of infant mortality. Opportunities for error and variation touch every aspect of design, supply chain and manufacturing processes. These types of failures are the most diverse and challenging category. Since diverse, random, stochastic events are involved, these failures cannot be modeled or predicted using a deterministic PoF cause-and-effect approach. However, reliability improvements are still possible when PoF knowledge and lessons learned are used to evaluate and select manufacturing processes that are proven to be capable, ensure robustness and implement error proofing.


The 217WG developed a dual approach for integrating PoF overstress and wearout analysis into 217 Rev. H alongside improved empirical prediction methods. One proposed PoF section addresses electronic component issues while the second deals with Circuit Card Assembly (CCA) issues. These sections are meant to serve as a guide to the types of PoF models and methods that exist for reliability assessments.

Physics of Failure Methods for Components

The proposed PoF component section focuses on the failure mechanisms and reliability aspects of semiconductor dies, microcircuit packaging, interconnects and wearout mechanisms of components such as capacitors. A current key industry concern is the expected reduction in lifetime reliability due to the scaling reduction of IC die features that have reached nanoscale levels of 90, 65 and 45 nanometers (nm) [13]. Models that evaluate IC failure mechanisms such as Time Dependent Dielectric Breakdown, Electromigration, Hot Carrier Injection and Negative Bias Temperature Instability are being considered to address this concern [14].
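Electromigration, one of the IC wearout mechanisms named above, is commonly modeled with Black’s equation. The sketch below computes an acceleration factor between a hypothetical stress test and a hypothetical field condition; the current densities, temperatures and activation energy are typical textbook-style values chosen for illustration, not values from the proposed handbook section.

```python
import math

# Black's equation for electromigration: MTTF = A * J^-n * exp(Ea / (k*T)).
# The prefactor A is arbitrary here, so only ratios between conditions
# (acceleration factors) are meaningful in this sketch.

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j, t_kelvin, a=1.0, n=2.0, ea=0.7):
    """MTTF per Black's equation; J is current density in A/cm^2."""
    return a * j ** -n * math.exp(ea / (K_EV * t_kelvin))

# Acceleration factor from a stress test (high J, high T) to field use.
af = black_mttf(j=5e5, t_kelvin=358) / black_mttf(j=1e6, t_kelvin=398)
print(f"field life is ~{af:.1f}x the stress-test life")
```

The strong J and T dependence is precisely why die-feature scaling to 90, 65 and 45 nm, which raises local current densities and temperatures, is expected to erode lifetime margins.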

Physics of Failure Methods for Circuit Card Assemblies

The proposed PoF circuit card assembly section defines 4 categories of analysis techniques (see Figure 2 at the end of this paper) that can be performed with currently available Computer Aided Engineering (CAE) software. A probabilistic mechanics [15] approach is used to account for variation issues. This methodology is aligned with the Analysis, Modeling and Simulation methods recommended in Section 8 of SAE J1211 - Handbook for Robustness Validation of Automotive Electrical/Electronic Modules [16]. The 4 categories are:

1) E/E Performance and Variation Modeling used to evaluate if stable E/E circuit performance objectives are achieved under static and dynamic conditions that include tolerancing and drift concerns.

2) Electromagnetic Compatibility (EMC) and Signal Integrity Analysis evaluates whether a CCA generates, or is susceptible to disruption by, Electromagnetic Interference and whether the transfer of high frequency signals is stable.

3) Stress Analysis is used to assess the ability of a CCA’s physical packaging to maintain structural and circuit interconnection integrity, to maintain a suitable environment for E/E circuits to function reliably, and to determine if the CCA is susceptible to overstress failures [17].

4) Wearout Durability and Reliability Modeling uses the results of the stress analysis to predict the long-term stress aging/stress endurance, gradual degradation and wearout capabilities of a CCA [17]. Results are provided in terms of time to first failure and the expected failure distribution: an ordered list of the 1st, 2nd, 3rd, etc. most likely devices, features, mechanisms and sites of expected failures.

Each of the 4 groups contains analysis tasks that use similar analytical skills and tools. Combined, these techniques provide a multi-discipline virtual engineering prototyping process for finding design weaknesses and susceptibilities to failure mechanisms, and for predicting reliability early in the design when improvements can be implemented at low cost.
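A minimal sketch of how the probabilistic mechanics treatment behind category 4 might look: Monte Carlo sampling over part-to-part variation to estimate time to first failure and to rank the most likely first-failure sites on a board. The three sites and their Weibull life parameters are entirely hypothetical.

```python
import random

random.seed(42)

# Hypothetical CCA failure sites: (characteristic life eta in h, shape beta).
sites = {"BGA corner joint": (9_000, 3.0),
         "QFP lead":         (15_000, 3.5),
         "via barrel":       (25_000, 4.0)}

first_times, first_sites = [], []
for _ in range(20_000):  # one Monte Carlo trial = one simulated board
    draws = {name: random.weibullvariate(eta, beta)
             for name, (eta, beta) in sites.items()}
    site = min(draws, key=draws.get)   # the board fails at its weakest site
    first_sites.append(site)
    first_times.append(draws[site])

first_times.sort()
median = first_times[len(first_times) // 2]
print(f"median time to first failure: {median:,.0f} h")
for name in sites:
    share = first_sites.count(name) / len(first_sites)
    print(f"{name}: first to fail in {share:.0%} of trials")
```

This is the kind of output the text describes: a time-to-first-failure distribution plus an ordered list of likely failure sites, rather than a single MTBF.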

Most of these modeling techniques require specialized modeling skills and experience with CAE software. It is not expected that reliability engineers would personally learn and perform these tasks. However, the definition and recognition of PoF methods as integral, accepted reliability methods for creating robust, highly reliable systems is expected to help connect reliability professionals with design engineers and help integrate reliability-by-design concepts into design activities.

The PoF sections are not intended to mandate that every model be applied to every item in every design, or that modeling be limited to only the listed models, since new models are constantly being developed. Furthermore, the list is not all-inclusive, since PoF models for every issue do not yet exist. The goal is to identify existing evaluation methods that can be selected as needed during design and development activities to mitigate reliability risks. This way, more reliability growth can occur faster, at lower cost, in a virtual environment during a project’s design phase.

By establishing a roadmap for merging fundamental engineering analysis and reliability methods, a technology infrastructure can be encouraged to continue to grow (perhaps faster) to provide more tools and methods for reliability engineers and product design teams to use in unison.


(Figure 2: The 4 categories of PoF analysis techniques for Circuit Card Assemblies)


  1. G. F. Decker, “Policy on Incorporating a Performance Based Approach to Reliability in RFPs”, Dept of the Army Memo, Feb, 15 1995
  2. Report of the Reliability Improvement Working Group, U.S. Dept of Defense, June 2008
  3. Report of the Defense Science Board on Developmental Test & Evaluation, U.S. Dept of Defense, May 2008
  4. F. R. Nash, “Estimating Device Reliability: Assessment of Credibility”. AT&T Bell Labs/Kluwer Publishing, MA, 1993.
  5. M. Pecht, “Why the Traditional Reliability Prediction Models Do Not Work - Is There an Alternative?”, Electronics Cooling, Vol. 2, pp. 10-12, January 1996
  6. M. Osterman, “We Still have a headache with Arrhenius”, Electronics Cooling, Vol. 7, No. 1, pp 53-54, Feb. 2001
  7. M. Pecht, P. Lall, E. Hakim, “Temperature as a Reliability Factor”, 1995 Eurotherm Seminar No. 45: Thermal Management of Electronic Systems, pp. 36.1-22
  8. O. Milton, “Reliability& Failure of Electronic Materials & Devices”, Ch 4.5.8 – “Is Arrhenius Erroneous” Academic Press, San Diego CA. 1998
  9. D.D. Dylis, M.G. Priore, “A Comprehensive Reliability Assessment Tool for Electronic Systems (Prism)”, IIT Research/Reliability Analysis Center, Rome NY, RAMS 2001
  10. “PRISM vs. commercially available prediction tools”, RIAC Admin Posting #558, May 17, 2007, RIAC.ORG, http://www.theriac.org/forum/showthread.php?t=12904
  11. L. Gullo, “The Revitalization of MIL-HDBK-217”, IEEE Reliability Society Newsletter, Sept 2008, http://www.ieee.org/portal/cms_docs_relsoc/relsoc/Newsletters/Sep2008/Revitalization_MIL-HDBK-217.htm
  12. D. Nicholls, “An Introduction to the RIAC 217Plus Component Failure Rate Models”, The Journal Of The Reliability Information Analysis Center, 1st Quarter - 2007
  13. R. Alderman, “Physics of Failure: Predicting Reliability in Electronic Components”, Embedded Technology, July 2009
  14. S. Salemi, L. Yang, J. Dai, J. Qin, J.B. Bernstein, Physics-of-Failure Based Handbook of Microelectronic Systems, Defense Technical Info Center/Air Force Research Lab Report, U of MD & RIAC, Utica, NY, Mar. 2008
  15. I. Elishakoff “Probabilistic Theory of Structures” 2nd edition, Dover Publications, Feb. 1999
  16. SAE J1211 – “Handbook for Robustness Validation of Automotive E/E Modules”, Section 8 - Analysis, Modeling and Simulations, SAE, April 2009.
  17. S.A. McKeown, Mechanical Analysis of Electronic Packaging Systems, Marcel Dekker, New York 1999.


James G. McLeish, CRE

DfR (Design for Reliability) Solutions

5110 Roanoke Place, Suite 101

College Park, Maryland 20740 – USA

e-mail: jmcleish@dfrsolutions.com

Mr. McLeish holds a dual EE/ME Masters degree in Vehicle E/E Control Systems. He is a Certified Reliability Engineer and a core member of the Society of Automotive Engineers Reliability Standards Workgroup, with over 32 years of automotive and military Electrical/Electronics experience. He started his career as a practicing electronics design engineer who helped invent the first microprocessor-based engine computer at Chrysler Corp. in the 1970s. He has since worked in systems engineering, design, development, product validation, reliability and quality assurance of both E/E components and vehicle systems at General Motors and GM Military. He is credited with the introduction of Physics-of-Failure methods to GM while serving as an E/E Reliability Manager and E/E QRD (Quality/Reliability/Durability) Technology Expert. Since 2006, Mr. McLeish has been a partner and manager of the Michigan office of DfR Solutions, a quality/reliability engineering consulting and laboratory services firm formed by senior scientists and staffers from the University of Maryland’s CALCE Center for Electronic Products and Systems. DfR Solutions is a leader in providing PoF science and expertise to the global electronics industry.
