Traditional test methodologies have been based on the functional model, that is to say the netlist. The best known is probably the stuck-at model, which grades a sequence of test vectors by whether they would detect the difference between a fully functional design and one in which one of the signals is permanently stuck at 0 (or 1). In some ways it is a crude measure, since many faults (such as two signals shorting together) don't manifest themselves in precisely that way. But it turns out to be a lot better than might be expected. Code coverage of Verilog (or C++) doesn't guarantee correctness, but a line of code that is never executed certainly cannot be checked at all. In the same way, if a test sequence cannot detect a stuck-at fault, then there are malfunctions that will make the chip fail but which the test sequence cannot detect.
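To make the grading idea concrete, here is a minimal sketch in Python. The toy netlist, gate set, and function names are all invented for illustration; a real fault simulator works on the actual design netlist and scan patterns, but the principle is the same: a fault counts as detected if some vector makes the faulty machine's outputs differ from the good machine's.

```python
from itertools import product

# Toy combinational netlist: each internal net is driven by (gate, input nets).
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR", ["b", "c"]),
    "out": ("XOR", ["n1", "n2"]),
}
PRIMARY_INPUTS = ["a", "b", "c"]
PRIMARY_OUTPUTS = ["out"]

GATES = {
    "AND": lambda v: all(v),
    "OR": lambda v: any(v),
    "XOR": lambda v: sum(v) % 2 == 1,
}

def evaluate(vector, fault=None):
    """Outputs for one input vector; fault is (net, stuck_value) or None."""
    values = dict(vector)

    def net_value(net):
        if fault is not None and net == fault[0]:
            return bool(fault[1])  # the stuck-at value overrides the driver
        if net not in values:
            gate, ins = NETLIST[net]
            values[net] = GATES[gate]([net_value(i) for i in ins])
        return values[net]

    return tuple(net_value(o) for o in PRIMARY_OUTPUTS)

def fault_coverage(test_vectors):
    """Fraction of stuck-at faults detected: a fault counts if any vector
    makes the faulty machine differ from the good machine at an output."""
    faults = [(net, v) for net in PRIMARY_INPUTS + list(NETLIST)
              for v in (0, 1)]
    detected = sum(
        any(evaluate(tv, f) != evaluate(tv) for tv in test_vectors)
        for f in faults)
    return detected / len(faults)

all_vectors = [dict(zip(PRIMARY_INPUTS, bits))
               for bits in product([False, True], repeat=3)]
print(f"all 8 vectors: {fault_coverage(all_vectors):.0%} coverage")
print(f"first 2 only:  {fault_coverage(all_vectors[:2]):.0%} coverage")
```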
Taking fault detection to the next level requires a better fault model, but it also requires taking into account more of the design than is available in the functional netlist alone. For example, to model whether two signals are shorted together, and whether the test sequence notices (that is, produces different results depending on whether the short is modeled), requires knowing which signals can potentially short. The functional netlist carries no such information, and with n signals there are on the order of n² candidate pairs, far too many to consider. But by taking the layout into account, the list can be pruned back to signals that are actually physically adjacent. Signal pairs that are never next to each other on the chip cannot short and so don't need to be considered.
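A sketch of that pruning step, under the simplifying assumption that each net's layout is a set of grid cells and that two nets can bridge only if some of their cells touch (real tools extract candidate pairs from polygon geometry, layer by layer); all names and coordinates below are invented:

```python
from itertools import combinations

# Each net occupies a set of unit-grid cells (purely illustrative).
LAYOUT = {
    "a":  {(0, 0), (1, 0), (2, 0)},
    "b":  {(0, 1), (1, 1)},          # runs alongside net "a"
    "c":  {(5, 5), (6, 5)},          # far away from everything else
    "n1": {(2, 1), (2, 2)},
}

def adjacent(cells1, cells2):
    """Two nets can bridge only if some of their cells are grid neighbours."""
    return any(abs(x1 - x2) + abs(y1 - y2) == 1
               for x1, y1 in cells1 for x2, y2 in cells2)

all_pairs = list(combinations(LAYOUT, 2))        # grows as n*(n-1)/2
candidates = [(p, q) for p, q in all_pairs
              if adjacent(LAYOUT[p], LAYOUT[q])]

print(f"{len(all_pairs)} pairs without layout, "
      f"{len(candidates)} physically plausible: {candidates}")
```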
Another problem that isn't modeled by stuck-at is a signal that is open. If the fanout is more than 1, then an open past the fanout point can leave some of the fanout branches connected correctly while others are not driven at all. A floating CMOS input typically settles at an intermediate voltage that partially turns on both the pull-up and pull-down transistors, creating a path from VDD to VSS that is sometimes called crowbar current.
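A rough way to see why this escapes the stuck-at model: a whole-net stuck-at fault forces every fanout branch to the same value, whereas an open past the fanout point corrupts only some branches. The sketch below, with an invented two-gate circuit, approximates a floating input as settling at either logic value (the real behavior is analog, which is what makes these faults awkward):

```python
from itertools import product

def circuit(a, b, open_branch=None, float_val=False):
    """Net n = a AND b fans out to two gates. An open on one branch leaves
    the other branch driven correctly while the broken one floats."""
    n = a and b
    n1 = float_val if open_branch == 1 else n  # branch feeding gate 1
    n2 = float_val if open_branch == 2 else n  # branch feeding gate 2
    return (n1 or b, not n2)

for a, b in product([False, True], repeat=2):
    golden = circuit(a, b)
    for branch, fv in product((1, 2), (False, True)):
        if circuit(a, b, open_branch=branch, float_val=fv) != golden:
            print(f"a={int(a)} b={int(b)} exposes branch-{branch} open "
                  f"(floats to {int(fv)})")
```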
Rating a test program for its effectiveness at finding faults is only one part of what test is about. It provides the feedback needed to improve the test program itself so that it detects more of the faults that can actually occur.
When there is a pattern of test failures, yield can potentially be improved by locating what is actually wrong and then fixing it. For example, if a layout hot-spot in optical proximity correction (OPC) is causing many failures, then a minor change to how the reticle enhancement technology (RET) decoration is done may increase yield.
Tessent has a tight connection between the layout engine and the logic engine, which means that Tessent Diagnosis can remove more than 85% of all bridge suspects, leaving just a few that are both logically and physically feasible. It can also reduce the bounding box of many suspected opens by noticing which signals are responding correctly and which are not, allowing the designer to home in on possible problem areas even though the net might be one that goes all over the chip.
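The mechanics can be sketched as two toy pruning steps. The net names, suspect lists, and geometry below are entirely invented, and this shows only the flavor of what a diagnosis tool does, not Tessent's actual algorithms:

```python
# Bridge diagnosis: suspect pairs implied by which patterns failed...
logical_suspects = {("data7", "data8"), ("rst_n", "bus12"),
                    ("clk_en", "scan_out3")}
# ...intersected with pairs that run adjacent somewhere in the layout.
physically_adjacent = {("data7", "data8"), ("rst_n", "bus12"),
                       ("addr2", "addr3")}
feasible = logical_suspects & physically_adjacent
print(f"bridge suspects pruned from {len(logical_suspects)} to "
      f"{len(feasible)}: {sorted(feasible)}")

# Open diagnosis: segments of one net's route, each tagged with the fanout
# branches it feeds ((x0, y0, x1, y1) bounding boxes, invented data).
segments = {
    (0, 0, 50, 2): {"ff_a", "ff_b"},
    (50, 0, 52, 90): {"ff_b"},
    (0, 2, 2, 40): {"ff_a"},
}
passing, failing = {"ff_a"}, {"ff_b"}  # from the tester's failure log
# A passing branch proves its whole path is intact, so only segments that
# feed a failing branch and no passing branch remain suspect.
suspect = [box for box, fed in segments.items()
           if fed & failing and not fed & passing]
print(f"open search region shrinks to: {suspect}")
```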