In this section, we introduce the topic of product qualification and provide an overview. Product qualification is an integral part of the component management process. It is widely understood as the activity by which one demonstrates that a product is fit for a particular application.
Let’s assume that I come to you to purchase integrated circuits for an electronics application I am building. I want to make sure that your circuits will work in my application. The challenge is how to demonstrate that. You may provide me with data that, in your mind, indicates the product will work in my application, but I may not be convinced. We need a framework that we can both agree will demonstrate the “fitness” of your component. There are a number of issues associated with this task. They include new configurations and technologies, which introduce changes from what was previously done. The complexity of the product can make the task challenging, because we may not be able to test or characterize the components fully. The reliability margins of components are shrinking, so we may not simply be able to increase the margin arbitrarily to avoid problems. Many different customers with different applications may use the component, each with their own needs and objectives. The now-prevalent fabless/foundry model can make it difficult to obtain the necessary data. And finally, there are new failure modes that we do not yet understand and for which we lack the models to make accurate predictions.
Product qualification involves a number of requirements. These are typically a set of both electrical and mechanical stresses designed to ensure the product will meet all of its specifications under the application use conditions for the desired life of the product. The qualification effort should determine the field failure rates and lifetime expectancies for the component of interest. The results will be unique to the component. Product qualification also includes certification: the manufacturer is required to certify that the product will perform all of its functions as expected and meet its functionality and usability requirements. As the graphic in Figure 1 depicts, four areas feed into this assessment: the top-level system behavior, analysis of the component behavior, its technology, and the application conditions. Furthermore, product qualification involves the intersection of functionality, reliability, and manufacturability. A successful component should meet all three at the target design point, as we show here.
There are many steps to a qualified product. We can consider two areas that affect the qualified product: the product qualification process and the quality system. Under product qualification, we can define a technology reliability component, which tracks the ability of the wafer fab process to create a working product; a product reliability component, which includes specific items related to the product design and its package; functionality, the ability of the component to work over a specified operating range; and manufacturability, the ability to create a high-yielding component that can be produced consistently. Under the quality system, we have the process for creating a qualified product, monitoring of the line for potential problems, change management to control product revisions, and returns management to correct problems that might occur. These systems should lead to a quality product if implemented and monitored correctly.
If we look at the qualification plan development in more detail, we see a number of inputs to the process. The first is customer input. Customers can work with their suppliers to help communicate application conditions, failure rate expectations, and system-level mechanisms to deal with failures. The second is product design. There are test issues caused by internal or third-party complexity, margins for the component, as well as prior experiences with similar products. The third is cost. The tests required for qualification can cost a significant amount of money. Failure analysis and hardware fixturing also cost money. Customers may not be willing to bear the cost for all of these activities. The fourth is technology reliability. This includes studies on wearout mechanisms and new failure modes. The fifth is package reliability. This includes issues like chip/package interactions and packaging reliability. The sixth is schedule, which addresses issues like resources for qualification, turnaround times, and production schedules.
Let’s take a brief look at the plan elements. We discuss the individual elements in more detail under the qualification standards elsewhere in our online training site. The first major group of elements is the accelerated product stresses. These include tests like high temperature operating life, early life failure rate, step stress, temperature cycling, non-volatile memory tests (if the product contains non-volatile memory), electrostatic discharge testing, latch-up testing, soft error testing, and potentially other tests. The second major element of the plan would be the electrical characterization of the component. This is usually done in some detail to ensure proper operation. The third major element would be manufacturability. For a fabless company, the foundry might be required to demonstrate this. For an integrated device manufacturer, there will need to be a process plan that covers die-specific failure mechanisms like dielectric breakdown, electromigration, and other mechanisms. The fourth major element group would be customer functionality. These would be tests beyond standard electrical tests to ensure the component works for a specific application. It might involve a test using the customer’s software code, specific requirements like shock, an extended temperature range, or any number of items.
Today, there are two basic approaches to qualification. The first is to use an industry standard, where one proves that a defined failure rate can be met or exceeded. The advantage of this approach is that it is a pass/fail method: you either meet the requirements, or you don’t. It is cheaper, since fewer samples are tested. There is also better agreement in the industry about the approach, since the process is codified in the standard. And the lower acceleration levels tend to be more design-friendly. The second approach is what is called a knowledge-based approach. You project failure rates under various use conditions based on models of the failure mechanisms of interest. This is a flexible method for qualification and can be quite useful for fabrication improvement cycles as well as design optimization. In a sense, the failures are good because they teach us something about the way the device might fail in the field. This approach can be more costly, though, as larger samples are necessary. It can also lead to arguments if there is no agreement about the models or how to apply them. Finally, this approach may not be possible with all designs, as the costs could outweigh the benefits.
Let’s look into the knowledge-based qualification flow more closely for a minute. This chart helps to illustrate the stress flow associated with the qualification process. There is a data collection phase and a data analysis phase. In the data collection phase, engineers apply stresses to components to bring out a failure mechanism of interest. This requires a design of experiments if the failure mechanism is unknown, or a defined set of tests if the mechanism is understood. One gathers data with an appropriate test structure or component and test system to allow projection to use conditions with a certain confidence level. On the data analysis side, the engineers determine the acceleration factors from the distributions; develop screens to reduce the problem; develop reliability, availability, and serviceability plans to mitigate potential problems; and perform failure analysis to provide feedback to the fab, so the manufacturing process can be improved to mitigate the effects of the mechanism. These activities can be used to generate a failure rate for the customer. Finally, there should be a feedback cycle that gathers field failure data, operational reliability management data, and other elements to lower the rate if possible.
A key component of the knowledge-based qualification approach is the failure model. This is normally handled as a two-phase approach. First, the engineers develop a rough model based on a test structure or test chip. A test structure or test chip can help magnify the effects of the problem, or be designed to be more sensitive to the problem. It can also be tested more easily. This provides a way to examine technology limitations, determine the product stress requirements to meet the objectives for the component, and establish process failure Pareto data for fabrication improvements. Second, the engineers develop a product model. This model might include parameters that help expose design-related weaknesses that are systematic in nature. It helps establish the product reliability as well. For example, there might be an area conversion for a TDDB model based on the number of transistors in a product relative to a gate oxide test structure.
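To make the area-conversion idea concrete, here is a minimal sketch of Poisson (weakest-link) area scaling for a Weibull TDDB distribution. All of the numbers are hypothetical, and the simple area ratio stands in for the transistor-count conversion mentioned above:

```python
import math

def scale_weibull_eta(eta_test, beta, area_test, area_product):
    """Scale the Weibull characteristic life from a test structure to a
    product using Poisson area scaling (weakest-link assumption):
    eta_product = eta_test * (A_test / A_product)^(1/beta)."""
    return eta_test * (area_test / area_product) ** (1.0 / beta)

# Hypothetical values: a gate oxide test structure of 0.001 cm^2 with
# eta = 1e6 hours and beta = 1.5, scaled to 0.1 cm^2 of product oxide.
eta_product = scale_weibull_eta(1e6, 1.5, 1e-3, 1e-1)
```

Because the product contains far more oxide area than the test structure, its characteristic life is substantially shorter, which is exactly why the conversion matters.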
Another key element of the knowledge-based qualification process is accelerated stress testing. The accelerated test serves as a bridge between the data in a lab stress and the customer’s application. This approach can work in a straightforward manner when there is linear acceleration. This occurs when the failure mechanisms are the same at stress and use conditions. One simply multiplies by an acceleration factor to obtain the mean time to failure, or some other percentile of the failure distribution. Some common acceleration variables include temperature, voltage, and mechanical stress like thermal cycling. One should watch for signs of wearout, as these are important for ascertaining the nature of the failures.
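As a concrete sketch of linear acceleration with temperature, the snippet below computes an Arrhenius acceleration factor and uses it to project a stress-test lifetime to use conditions. The activation energy and temperatures are hypothetical illustration values, not recommendations:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures.
    AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical case: Ea = 0.7 eV, use at 55 C, stress at 125 C.
af = arrhenius_af(0.7, 55.0, 125.0)

# Linear acceleration: a 1,000-hour mean time to failure at stress
# projects to af * 1,000 hours at use conditions.
mttf_use = af * 1000.0
```

The same multiplication applies to any percentile of the distribution, as long as the mechanism is the same at both conditions.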
Once a model is characterized on a test structure or test chip, the engineers need to develop a transformation to a product-based model. The challenge here is which distribution to use. Projections to lower failure rates are always difficult, because the distribution models can diverge dramatically between high failure rates and very low failure rates. Next, there is the issue of early life failures, or infant mortality failures, versus end-of-life, or wearout, failures. Some wearout mechanisms are well accelerated during high temperature operating life (HTOL) tests, like TDDB and bias temperature instability. Other mechanisms, like electromigration, may not be accelerated properly with HTOL. Furthermore, circuit layout rules work to mitigate the effects of some mechanisms, so they will not contribute significantly to product reliability. Many engineers will simply run a single set of tests and calculate both the early life failure rate and the average failure rate from the same distribution by dividing the cumulative distribution function by the number of use hours.
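As a sketch of that "divide the cumulative distribution function by the use hours" calculation, the snippet below computes average failure rates in FIT (failures per 10^9 device-hours) from a single Weibull fit. The Weibull parameters are hypothetical; a shape parameter below one gives the decreasing failure rate typical of early life:

```python
import math

def weibull_cdf(t, eta, beta):
    """Weibull cumulative distribution function F(t) = 1 - exp[-(t/eta)^beta]."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def average_failure_rate_fit(t_hours, eta, beta):
    """Average failure rate over t_hours in FIT, computed as F(t)/t."""
    return weibull_cdf(t_hours, eta, beta) * 1e9 / t_hours

# Hypothetical fit: eta = 5e6 hours, beta = 0.5 (decreasing failure rate).
elfr = average_failure_rate_fit(8760.0, 5e6, 0.5)    # first year (8,760 h)
afr = average_failure_rate_fit(87600.0, 5e6, 0.5)    # ten years (87,600 h)
```

With beta below one, the first-year rate comes out higher than the ten-year average, which is the early life versus average failure rate distinction described above.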
We need to understand, though, that the estimated failure rate at use conditions in the field will not be constant. The failure rate tends to decrease through the useful product life. With an exponential distribution, however, the failure rate is constant by definition. If you decide to use a burn-in to model field failure rates, the infant mortality failures need to be removed; otherwise, the data has censoring problems. One can define two different levels or time scales, one for the infant mortality failures and another for the use life failures, to deal with this situation. Here we show an example of this approach, where we use 8,760 hours for infant mortality failures and 40,000 hours for use life failures.
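One way to implement the two-time-scale idea is to compute a separate average rate for each window, conditioning the use-life window on survival through the infant mortality period. The sketch below uses the 8,760-hour and 40,000-hour windows from the example; the early-life Weibull parameters are hypothetical:

```python
import math

def weibull_cdf(t, eta, beta):
    """Weibull cumulative distribution function F(t) = 1 - exp[-(t/eta)^beta]."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def window_failure_rate_fit(t0, t1, eta, beta):
    """Average failure rate over [t0, t1] in FIT, using the conditional
    probability of failing in the window given survival to t0."""
    survivors_at_t0 = 1.0 - weibull_cdf(t0, eta, beta)
    frac_failing = (weibull_cdf(t1, eta, beta) - weibull_cdf(t0, eta, beta)) / survivors_at_t0
    return frac_failing * 1e9 / (t1 - t0)

# Hypothetical early-life Weibull: eta = 2e7 hours, beta = 0.4.
elfr = window_failure_rate_fit(0.0, 8760.0, 2e7, 0.4)         # infant mortality window
ufr = window_failure_rate_fit(8760.0, 48760.0, 2e7, 0.4)      # 40,000-hour use life window
```

Because the hazard decreases with time, the infant mortality window shows a much higher average rate than the use life window, which is why mixing the two into one constant rate misleads.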
Here is an example of data from both temperature acceleration and voltage acceleration. The time to failure at use conditions is in magenta, while the time to failure under temperature acceleration is shown in blue, and the time to failure under voltage acceleration is shown in green.
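When both stresses are applied, the individual acceleration factors multiply. The sketch below combines an Arrhenius temperature term with a simple exponential voltage-acceleration model; the activation energy, voltage-acceleration parameter, and stress conditions are all hypothetical illustration values:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def temp_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius temperature acceleration factor."""
    return math.exp((ea_ev / K_BOLTZMANN_EV)
                    * (1.0 / (t_use_c + 273.15) - 1.0 / (t_stress_c + 273.15)))

def volt_af(gamma, v_use, v_stress):
    """Exponential voltage-acceleration model: AF = exp[gamma * (V_stress - V_use)]."""
    return math.exp(gamma * (v_stress - v_use))

# Hypothetical combined stress: Ea = 0.7 eV, 55 C use / 125 C stress,
# gamma = 6 per volt, 1.0 V use / 1.3 V stress.
combined_af = temp_af(0.7, 55.0, 125.0) * volt_af(6.0, 1.0, 1.3)

ttf_stress = 500.0                  # hours observed under combined stress
ttf_use = ttf_stress * combined_af  # projected time to failure at use
```

This multiplication is valid only when the same mechanism dominates at both stress and use conditions, echoing the linear-acceleration caveat above.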
A part is considered qualified when all of the qualification objectives are met. These can be standards-based or knowledge-based objectives. If they are not met, then one has two options. First, one can define a reliability and/or manufacturing screening strategy to remove potential failures from the population. Second, one might define a product guardband strategy to ensure that a mechanism does not proceed far enough to cause product failures. Either strategy will need to be validated through testing to demonstrate that it is effective. If the strategy is deemed effective, and your customer agrees on the strategy, then one can qualify the product with these additional screens and guardbands.
Let’s briefly turn our attention back to standards-based qualification. This approach is still used widely in the industry. There are four major standards-based qualification processes. The United States military uses MIL-STD-883 for qualification. JEDEC, the Joint Electron Device Engineering Council, developed JESD47, a stress-based qualification standard. The Automotive Electronics Council has a standard for qualification, and the Zentralverband Elektrotechnik- und Elektronikindustrie (ZVEI), the German electrical and electronics industry association, also has a qualification standard that is used for some European electronics systems, including automotive systems. Several of these standards are discussed in our Online Training System.
These are some common types of qualification used in the semiconductor industry. They include MIL-STD-883 for military and space applications, JEDEC JESD47 for stress-driven qualification, and JESD34 for failure mechanism-driven qualification. JESD34 is no longer in active use. Scientists at SEMATECH proposed the idea of knowledge-based qualification in 2000. The Automotive Electronics Council, or AEC, released its failure mechanism-based stress test qualification for packaged ICs, known as AEC-Q100, back in 2007. It is used widely in automotive applications.
In conclusion, we introduced the subject of qualification. This is a very broad area, involving numerous tests, standards, and approaches. For more information on this topic, we encourage the reader to consider our new Product Qualification Course, or to access the Product Qualification materials on our Online Training Website.
At the leading edge, most companies have now made the transition to Hi-K/metal gate CMOS technologies. A more subtle, less well understood distinction is that between "Gate First" and "Gate Last" technologies. You may have read articles that mention "Gate First" or "Gate Last", but what exactly does that mean? Well, we will try to clear that up in this brief article. "Gate First" typically refers to the dielectric layers being deposited prior to the formation of the sidewall spacers, while "Gate Last" refers to the dielectric layers being deposited after the formation of the spacers. Some companies use the term "Replacement Gate" for "Gate Last", or for when they etch the gate structure but leave a portion of the gate stack intact. In a "Gate First" scenario, process engineers deposit the interlayer oxide, followed by the Hi-K dielectric and the metal gate. Most technologies employing gate first then use a polysilicon capping layer. Early on, engineers experimented with nickel to create a fully silicided poly region, or FUSI (FUlly SIlicided) structure, but they no longer pursue this approach due to difficulties with silicide phases and difficult threshold voltage (VT) adjustments. Afterwards, they deposit and etch the dielectric that forms the sidewall spacers.
In a "Gate Last" scenario, process engineers deposit the interlayer oxide, followed by the Hi-K dielectric, the metal gate, and lastly, the polysilicon cap. They then form the sidewall spacers using the initial structure. After forming the sidewall spacers, process engineers have two options. They can etch away a portion of the gate, leaving just the Hi-K and metal gate layers, or they can remove the entire stack down to the silicon substrate. Once they remove the stack, they re-deposit the layers. In the structure above, we show a sequence where the Hi-K/metal gate structure grows on the vertical edge of the sidewall structure followed by a polysilicon cap or PC gap fill to complete the structure.
The main advantage of "Gate First" is its conventional process flow, which leads to lower costs. Unfortunately, this approach requires a large thermal budget for deposition and annealing, it is difficult to adjust the VT of the transistors, and interface problems create mobility degradation and reliability issues at thin effective oxide thicknesses. The "Gate Last" approach uses a lower thermal budget and can amplify the stress induced by the silicon-germanium source/drain regions. This leads to increased mobility and performance. Unfortunately, the complexity is high, which leads to higher costs. Also, the structure requires more restrictive design rules due to problems like CMP dishing.
Q: My sample appears to have a burned-in rectangle after I image it for a period in the SEM. What is happening?
A: Several things come to mind. I would examine the problems in this order. One, your sample might simply be charging. This can happen if you image a dielectric layer with a higher accelerating voltage for some time. If you remove the sample from the chamber, put it back in, and the rectangle is gone, then this is likely what's happening. Two, the problem could be due to sample preparation. If you don't get the surface completely clean, it is possible for the electron beam to charge or polymerize a residue layer, creating this burn-in effect. To correct this problem, try performing ion beam milling. The oxygen or argon bombardment will remove the residue layer, eliminating this problem. Gatan and other manufacturers sell equipment that can do this. And three, the problem could be due to roughing pump failure. As a roughing pump fails, oil can potentially backstream into the chamber and onto the sample surface. This can be corrected by rebuilding or replacing the roughing pump.
Please visit http://www.semitracks.com/courses/analysis/eos-esd-and-how-to-differentiate.php to learn more about this exciting course!
Failure and Yield Analysis on August 27-30, 2012 (Mon.-Thurs.) in San Jose, CA, USA
EOS, ESD and How to Differentiate on November 7-8, 2012 (Wed.-Thurs.) in San Jose, CA, USA
Introduction to Polymers and FTIR on November 7-8, 2012 (Wed.-Thurs.) in San Jose, CA, USA
If you have a suggestion or a comment regarding our courses, online training, discussion forums, reference materials, or if you wish to suggest a new course or location, please feel free to call us at 1-505-858-0454, or e-mail us at firstname.lastname@example.org.
To submit questions to the Q&A section, inquire about an article, or suggest a topic you would like to see covered in the next newsletter, please contact Jeremy Henderson by e-mail (email@example.com).
We are always looking for ways to enhance our courses and educational materials.