New Techniques for Fault Diagnosis and Isolation of Switched Mode Power Supplies

C.E. Hymowitz, L.G. Meares, B. Halal
Intusoft, P.O. Box 710, San Pedro, CA 90733-0710, info@intusoft.com
Abstract - This paper describes new software techniques to perform analysis, diagnosis, and isolation of
failures in analog and mixed-signal circuits including switched mode power supplies. Unique methods and
algorithms for schematic entry, setting of failure characteristics, definition of test strategies, recording of
simulation-based measurements, creation of fault trees, and sequencing of tests are all discussed. To ensure a realistic
test of the new software techniques, diagnostics were developed for a moderately complex analog and mixed
signal switched mode power supply. A discussion of some of the drawbacks of sensitivity based failure analysis
techniques is also included.
I. INTRODUCTION
In late 1996, the SPICE simulation tool manufacturer Intusoft initiated development of a new product,
tailored to the unique and demanding needs of the test engineer. This product, called Test Designer, provides
an effective, interactive design environment for the synthesis of diagnostic tests, generation of fault dictionaries,
and the building of diagnostic fault trees. Since the category of analog and mixed-signal test synthesis and
sequencing software is relatively new, a number of unique techniques were developed to solve key parts of
the FMEA (failure mode effects analysis) and test program set design process.
The motivations for improving the analog and mixed-signal test set design and failure analysis process are
plentiful and well documented [1-5]. For instance, identification of early production faults, improved
safety and reliability through the analysis of difficult to test failures, investigation of power supply
failure mechanisms such as power switch over-current, FET gate over-voltage, start-up failures, and
excessive component stress could all benefit from improved simulation software. Yet little dedicated
software currently exists to help streamline and organize analog and mixed-signal circuit test proce-
dures.
Most testing is done to assure product performance standards. UL standards for various types of supplies
require that tests be developed for overload protection circuitry, short circuits, and component breakdown [6].
In some cases, when circuit analysis indicates that no other component or portion of the circuit
is seriously overloaded as a result of the assumed open circuiting or short circuiting of another component,
some tests can even be bypassed.
The software can add benefits in two other ways. First, if it can identify failure modes that aren't tested, you will
have found either unnecessary parts or a flaw in the acceptance test. Either case would improve the quality
of the product. Another important aspect is the tracking of component quality during the production lifetime.
Frequently, a supplier's product will evolve or the supplier will be changed, and the product's performance
will drift away from its design center. This drift itself is observable from the acceptance test results, but the
software also allows you to track the nearby failures, including parametric failures. These, of course, aren't
really failures in the sense that the product can't be shipped; rather, they are component quality indicators.
Figure 1, A block diagram of the software system implemented to provide automated failure analysis
and test program set development.
The software, outlined in Figure 1, provides the aforementioned benefits and includes a complete system
capable of design entry, simulation, analysis, and test synthesis and fault tree sequencing. The schematic
entry program is specially enhanced to hold the entire design database, including part and model values
and tolerances, and all part failure modes. It also contains a description of the various test configurations
and measurements for each test. Using Object Linking and Embedding (OLE) communication, the schematic
builds the required netlist for IsSpice4, a SPICE 3/XSPICE based analog and mixed-signal simulator. The
simulation output data is processed using the Berkeley SPICE Interactive Command Language (ICL) in
order to extract the desired measurements. This process is continued automatically until all of the faults
are simulated. The measurements are then parsed into various report forms which contain the pass-fail
tolerances. Finally, the tests are sequenced into a fault tree.
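To make the automated loop concrete, the following is a minimal sketch of the process in Python. It is
illustrative only, not Intusoft's implementation: the netlist fragments and the simulate/build_netlist
stand-ins are invented, and a real run would invoke IsSpice4 over OLE and post-process with ICL scripts.

    def simulate(netlist):
        # Stand-in for an IsSpice4 run plus ICL measurement extraction.
        # Here we fake the result: a shorted R2 drags the output low.
        return {"FinalValue": 0.2 if "Rshort" in netlist else 5.0}

    def build_netlist(base, fault):
        # The schematic database would emit real SPICE fault syntax here (see Table 1).
        return base + ("\n" + fault if fault else "")

    base = "Vin 1 0 5\nR1 1 2 1k\nR2 2 0 1k"
    faults = [None, "Rshort_2 2 0 .1"]       # None = nominal (no-fault) run
    limits = {"FinalValue": (4.5, 5.5)}      # pass band: (min, max)

    for fault in faults:
        for name, value in simulate(build_netlist(base, fault)).items():
            lo, hi = limits[name]
            print(fault or "nominal", name, value, "pass" if lo <= value <= hi else "FAIL")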
A six-step process, described in detail below, is utilized for the development of fault diagnostics. In chrono-
logical order, the steps are:
• Design Entry, including: Schematic Layers setup, Schematic Configurations setup, Simulation Directives
setup, failure mode characteristics definition, and Test Configuration definition
• Measurement definition
• Pass-fail tolerance setting
• Fault simulation
• Results reporting
• Failure states and sequencing
II. CONFIGURABLE SCHEMATICS
A long-standing problem in electrical and mechanical circuit design has been the conflict between
the needs of the designer and the needs of production and manufacturing. The main method of conveying
the designer's creation is the circuit schematic, which is used to describe the behavior of the circuit as
well as the details of how production will build the hardware.
The designer is concerned with creating a circuit that meets specifications. This is done chiefly through
various EDA tools, but mainly with circuit simulation. The designer must build multiple test configurations,
add parasitic components and stimuli, and even include system elements in the simulation. A top-down
design methodology, where different levels of abstraction are inserted for different components, is
commonplace. Modeling electrical behavior often results in different representations for different
test configurations. In general, the schematic becomes so cluttered with circuitry and data that it
must be redrawn for production, greatly raising the probability of a transcription error.
The need for a reconfigurable schematic capability becomes even more pressing when we analyze
the needs of the failure analysis and test program development engineer. In order to be effective, the
simulation process cannot become burdened with the bookkeeping intricacies of multiple schematic
variations and analysis specifications. The designer must have a way to connect various stimuli and
loads to core circuitry, and to group the desired SPICE analyses and test measurements with each
schematic configuration.
Until now, the best approach has been to hide these special configurations in subcircuits; for example,
a resistor's parasitic capacitance could be included in a subcircuit. While this approach works for
hierarchical schematic entry and for extending individual component models, it doesn't solve the
problem of adding test equipment, different stimulus inputs, or dealing with multiple simulation
scenarios.
A test setup provides loads, voltage and current stimuli and instrumentation connections at specific
points on the Unit Under Test (UUT). When viewed in a broader context, the combination of the test
setup circuitry and the UUT can be considered to be a circuit configuration in and of itself. Indeed, for
simulation purposes, the test setup circuitry must be included as part of the circuit. Most Test Pro-
gram Sets (TPSs) implement multiple setups during the testing sequence. This increases the simula-
tion burden by requiring a separate schematic for every test setup.
The system described by Figure 2 addresses the multiple test setup problem with a unique solution. It
allows the user to assign each setup/UUT combination a different configuration name and simulates
all of the stand-alone configurations in a batch operation. The setup/UUT combination is called a
circuit configuration. The circuit configuration is defined during the schematic entry process. Ev-
ery circuit configuration is composed of one or more schematic layers. An active layer can be thought
of as a transparency that overlays other transparencies such that as you view them, you see the com-
plete circuit configuration schematic. Circuit nodes on the top layer connect with nodes on underlying
layers as if the drawing were created on a single page. The schematic allows mixing and matching
of layers to form the required circuit configurations. Any circuitry, elements, or documentation can
be placed on any layer.

Figure 2, A unique reconfigurable schematic program allows different schematic layers to be combined
in order to create various circuit configurations. Simulation and measurement directives are then added
to create multiple test descriptions.
Use of a layered concept is not in itself unique. It is generally used as a drawing management feature
to remove complexity from the user's visual field, rather than to create a multiplicity of configurations.
While PCB layout software has had a similar feature for quite some time, a configurable schematic
has not, to the best of our knowledge, been implemented before. This is the first known graphical
entry method capable of bridging the test and simulation domains using a reconfigurable layered
schematic approach.
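As an illustration of the layering idea, here is a hypothetical Python sketch (the layer contents and
names are invented; the real tool composes layers graphically): a circuit configuration is simply the
union of its layers' netlist contributions, with nodes connecting across layers by name.

    # Each layer contributes netlist lines; shared node names (e.g., "out")
    # connect across layers exactly as if everything were drawn on one page.
    layers = {
        1: ["R1 in out 1k", "C1 out 0 .01u"],       # core UUT circuitry
        2: ["Vdmm out 0 0"],                        # DMM test setup
        3: ["Vcc vcc 0 PULSE(0 5 0 100u)"],         # ramped-supply test setup
    }

    def build_configuration(layer_numbers):
        # Merge the selected layers into one stand-alone circuit configuration.
        return "\n".join(line for n in layer_numbers for line in layers[n])

    print(build_configuration([1, 3]))              # e.g., combine layers 1 & 3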
III. SIMULATION DIRECTIVES
The system also allows different sets of SPICE analysis statements to be grouped and stored (Figure 2). For
instance, an operating point analysis can be specified to run along with a brief transient analysis. In another
group, a frequency response can be run with a long transient analysis. The user can then pair any set of
SPICE simulation directives with any circuit configuration to create a unique test configuration. For
example, a single circuit configuration can be assigned multiple types of simulation analyses in order to
define multiple test configurations.
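A hypothetical sketch of the pairing (the group names and directive strings below are invented for
illustration): every combination of a circuit configuration and a directive group defines one test
configuration.

    from itertools import product

    directive_groups = {
        "op_plus_short_tran": [".OP", ".TRAN 0.1U 300U"],
        "ac_plus_long_tran":  [".AC DEC 10 10 10MEG", ".TRAN 0.1U 1.2M"],
    }
    circuit_configurations = ["ramped_supplies", "reduced_supplies"]

    # One circuit configuration paired with several directive groups yields
    # several distinct test configurations.
    test_configurations = list(product(circuit_configurations, directive_groups))
    print(test_configurations)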
IV. FAILURE DEFINITION
Each component is defined by a set of nominal device and model parameter attributes, as well as parametric
tolerances. Each component also has a set of associated failure modes. Initially, parts include the failure
modes as defined in the Navy's CASS (Consolidated Automated Support System) Red Team Package [7].
Users can edit the predefined failure modes or add their own catastrophic or parametric failure modes.
Failure modes are simulated by programmatically generating the proper SPICE 3 syntax to describe the
failure. Failure modes can be set up for primitive (resistor, transistor, etc.) as well as subcircuit macromodel-
based elements. Any node on a part can be shorted to any other node, opened, or stuck. The stuck
condition allows the user to attach a B element expression. The B element is the Berkeley SPICE 3
arbitrary dependent source, which is capable of analog behavioral modeling [8,9]. Behavioral expressions
can contain mathematical equations, If-Then-Else directives, or Boolean logic [10]. The expressions can
refer to other quantities in the design, such as nodes and currents, allowing a node to be stuck in a
virtually unlimited variety of ways. A series of examples is shown in Table 1.

Table 1, SPICE Syntax for Various Failure Modes

Fault                    Before Insertion                 After Fault Insertion
Shorted Base-Emitter     Q1 12 19 24 QN2222A              Q1 12 19 24 QN2222A
                                                          Rshort_19 19 24 .1
Open Resistor            R3 17 0 10K                      R3 17_open 0 10K
                                                          Ropen_17 17_open 17 100Meg
Low Beta                 Q1 12 19 24 QN2222               Q1 12 19 24 Q1_Fail
(parametric fault)       .MODEL QN2222 NPN AF=1 BF=105    .MODEL Q1_Fail NPN AF=1 BF=10
                         BR=4 CJC=15.2P CJE=29.5P         BR=4 CJC=15.2P CJE=29.5P
Resistor Stuck           R1 6 0 1K                        R1 6 0 1K
2V below Vcc                                              Rstuck_6 6_Stuck 6 10.00000
                                                          Bstuck_6 6_Stuck 0 V=Vcc-2
Time-Dependent           L2 3 0 62U                       L2 3 0 62U
Inductor Fault                                            Rstuck_3 3_Stuck 3 10.00000
                                                          Bstuck_3 3_Stuck 0 V=Time>10n?0:V(3)
It should be noted that the software builds the B element expressions and SPICE models, inserts the
required elements, and generates the SPICE netlist automatically. All of the failure mode definitions are
carried out in a graphical manner. No script writing or programming is necessary in order to define or
simulate a fault. There is no need for the user to write or know the required SPICE syntax (examples
are shown in Table 1). The characteristics (open/short/stuck resistance) of each failure mode can be
defined by the user (Figure 3).
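The fragment below sketches how such fault syntax could be generated programmatically. It is a Python
illustration with invented helper names, not Test Designer's internal code; it simply produces netlist lines
in the forms shown in Table 1.

    def short_fault(n1, n2, r=0.1):
        # Short two device nodes with a small resistance.
        return [f"Rshort_{n1} {n1} {n2} {r}"]

    def open_fault(node, r="100Meg"):
        # The faulted pin is moved to node '<node>_open'; a large resistance
        # bridges the break so the circuit matrix stays solvable.
        return [f"Ropen_{node} {node}_open {node} {r}"]

    def stuck_fault(node, expression, r=10.0):
        # Drive the node through a small resistance from a B-element
        # behavioral source holding the stuck expression.
        return [f"Rstuck_{node} {node}_Stuck {node} {r}",
                f"Bstuck_{node} {node}_Stuck 0 V={expression}"]

    print(short_fault(19, 24))                  # shorted base-emitter, as in Table 1
    print(stuck_fault(6, "Vcc-2"))              # node 6 stuck 2V below Vcc
    print(stuck_fault(3, "Time>10n?0:V(3)"))    # time-dependent inductor fault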
V. MEASUREMENT DEFINITION
In order to create a test, the user must combine a circuit configuration with a set of SPICE simula-
tion directives and a desired measurement which will be made on the resulting data. Therefore, the
simulator, or some type of data post-processing program, must be included to extract and record
information from each desired test point waveform. For the Test Designer software, the former was
chosen and implemented using ICL, which is available in SPICE 3 and Nutmeg.
The software uses the IsSpice4 simulation engine to perform the failure analysis. IsSpice4 is an
enhanced version of Berkeley SPICE 3 [9] and XSPICE [11,12]. IsSpice4 includes and expands upon
the standard Berkeley SPICE 3 ICL. ICL is a set of commands that can direct SPICE to perform
various operations such as running a particular analysis or changing a component value. The com-
mands, which look and act like Visual Basic scripts, can be run interactively or in a batch mode.
IsSpice4 contains new ICL commands which allow SPICE 3 plots, or sets of vectors (i.e. waveforms)
to be scanned and measured with cursors. In contrast to traditional SPICE dot statements, ICL
commands are performed in order, one at a time. This makes ICL scripts perfect for describing test
procedures.

Figure 3, An example of the dialog which is used to define the failure mode characteristics.

Table 2, Example Wizard-generated ICL scripts used to automate measurements of the failure analysis.
The ??? fields are replaced with the desired vectors which will be processed.

Group Delay Measurement
Cursor Script:
  HomeCursors                              * Reset cursors to waveform endpoints
  Vsignal = db(???)                        * Use dB values
  Vphase = PhaseExtend(ph(???))            * Use phase values
  theDelay = -differentiate(Vphase)/360    * Differentiate the phase waveform
  theMax = max(Vsignal)                    * Find the maximum
  MoveCursorRight(0,Vsignal,theMax)        * Move the left cursor to the maximum
  Fmax = GetCursorX(0)                     * Store the frequency
  SetCursor(1,Fmax)                        * Move the right cursor to the maximum
  MoveCursorRight(1,Vsignal,theMax-3)      * Move the right cursor to the upper -3dB point
  MoveCursorLeft(0,Vsignal,theMax-3)       * Move the left cursor to the lower -3dB point
Measurement Script:
  groupDelay = mean(theDelay)              * Store the measurement
A Wizard approach is taken in order to alleviate the syntax headaches associated with script development.
For example, a Cursor Wizard is employed to position one or more imaginary cursors on a waveform or set
of waveforms. Y axis cursors are positioned with respect to a single waveform or vector, while X axis
cursors are positioned with respect to an entire set of vectors. A Measurement Script Wizard is used to
manipulate the data derived from the cursors in order to produce a single measurement.
A variety of functions are available for setting the cursor positions and for measuring the data in between the
cursors. Two example scripts are shown in Table 2. As shown in Figure 2, these measurement scripts are
combined with traditional SPICE analysis directives and a test configuration description to form a
simulatable IsSpice4 netlist.
VI . EXAMPLE
Now that we have defined how a design is setup, we can proceed to show how the software performs the
simulation, and discuss the failure diagnostic and test sequencing process. This is best done through an
example.
The circuit shown in Figure 4 is a forward converter which uses the Unitrode UC1843 PWM and Magnetics
MPP5812 core models from the Intusoft Power Supply Designer's Library. The start-up transient waveform
(V(5), top right) is shown, along with a close-up view of the output ripple. Because the circuit uses a full
nonlinear switch-level PWM IC model and an accurate power MOSFET model, we can examine such detailed
phenomena as the MOSFET's switching characteristics, operating current into VCC, undervoltage lockout
threshold, and propagation delay. To simulate the start-up of the circuit, the power supplies V1 and V4
were ramped from zero to their final value over a 100us interval.

Figure 4, An SMPS forward converter example using the UC1843 PWM controller. Note the use of the nonlinear
core (MPP) model. The test point in the center of X10 shows the inductance value as a function of time.
Initially, the SMPS was simulated in full start-up mode for 1.2ms. The simulation runtime was 184.90s. It
was decided that a shorter transient run could yield enough useful tests to detect the majority of the faults and
be simulated more quickly. For the first failure analysis, a nominal transient analysis of .3ms in length was
selected.
The selected measurements were the value at 50us, the maximum value, and the final value at .3ms. The
maximum value will be useful for oscillating waveforms, while the final value will be useful for filtered and
slow-changing waveforms. The 50us measurement can be used as a comparative measurement of start-up
performance. The measurements were determined after looking at the nature of the waveforms and estimat-
ing which tests would best be able to detect significant differences when a fault is inserted.
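In Python terms, the three measurements amount to something like the following sketch (toy data; in the
real flow these measurements are made by ICL cursor scripts operating on IsSpice4 vectors):

    import bisect

    def value_at(t, times, values):
        # Sample the waveform at (or just after) time t.
        return values[bisect.bisect_left(times, t)]

    times = [0.0, 50e-6, 150e-6, 300e-6]    # toy transient time points
    v5    = [0.0, 2.1, 5.3, 5.0]            # toy output waveform V(5)

    print({"ValueAt50us": value_at(50e-6, times, v5),
           "Maximum":     max(v5),          # useful for oscillating waveforms
           "FinalValue":  v5[-1]})          # useful for filtered, slow waveforms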
For each of the three measurements, all of the available vectors (circuit test points including voltage,
current, and power dissipation) were recorded. While not all vectors can normally be probed on a circuit card, it is
important to gather all of the possible data. Later, when the test sequencing is performed, we can grade the
usefulness of each measurement. By measuring all possible test points up front, we eliminate the need to
perform subsequent simulations.
The initial results, shown in Figure 5, indicate that all of the tests fail, since no pass-fail tolerances have been
applied. Figure 6 shows the results after applying default tolerances of ±100uV for voltages and ±1mA for
currents.
The results report shows the simulated (Measured column) value, whether the test passed or failed, and
shows the minimum, nominal, and maximum values. A special histogram bar icon is used to provide a quick
visual indication of the test status.
Figure 5, The results
of an initial simulation
before the default
tolerances are applied
to the measurements.
Figure 6, The results
of an initial simulation
after the default
tolerances are applied
to the measurements.
The Results dialog
shows the pass-fail
status through the use
of a unique histogram
indicator (left side).
VII. SETTING PASS-FAIL TOLERANCES
A variety of methods are available for setting tolerances, including hard limits, Monte Carlo analysis,
and a unique Expand to Pass method (as shown in Figure 7). Expand to Pass moves the min and max
tolerance bands outward until the measurement is within the pass band. This allows tolerances to be
set through different simulation scenarios, such as high and low temperature, or high and low power
supply voltage. Setting limits for the tests is an iterative process of selecting highly reliable tests with
regard to detection characteristics, and adjusting the limits on less-than-reliable tests in order to
improve their detection characteristics when such tests are required in order to achieve desired
isolation metrics.

Test set tolerances and variations can cause measurements to fail. Therefore, a convenient way to
account for this is to expand the tolerances by increasing and decreasing the power source values and
then using the Expand to Pass feature.
Monte Carlo analysis can also be used to set the measurement tolerances. However, Monte Carlo
results tend to yield tolerances that are too tight. A combination of the two methods can also be used.
Of course, tolerances can also be set manually on individual measurements or groups of measure-
ments. Figure 8 shows the results dialog for the FinalValue measurement after increasing and de-
creasing the power supply values by 5% and using the Expand to Pass feature.
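A minimal sketch of the Expand to Pass idea, assuming measurement results have been collected from
the high and low scenario runs (the band and the numbers below are invented):

    def expand_to_pass(band, scenario_results):
        # Widen the (min, max) band just enough that every scenario result passes.
        lo, hi = band
        for value in scenario_results:
            lo, hi = min(lo, value), max(hi, value)
        return lo, hi

    nominal_band = (4.95, 5.05)
    # e.g., FinalValue measured with the supplies at -5% and +5%:
    print(expand_to_pass(nominal_band, [4.90, 5.12]))   # -> (4.9, 5.12)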
VIII. FAULT SIMULATION
At this point, a failure analysis is run. Measurement alarms can be set in order to flag failure modes
which overstress parts. Once these failure modes are detected, separate tests can be added to find
them. They can then be sequenced so that the failure mode test does not destroy functioning parts.
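A hedged sketch of such an alarm check (the stress names, ratings, and values are invented): each
fault's simulated stress measurements are compared against component ratings, and any overstress is
flagged so a safe test can be ordered ahead of it in the sequence.

    ratings = {"Q1:power": 1.0, "X16:Vgs": 20.0}     # hypothetical stress limits
    fault_stress = {                                  # simulated stress per fault
        "D1:Short": {"Q1:power": 3.2, "X16:Vgs": 12.0},
        "R2:Open":  {"Q1:power": 0.4, "X16:Vgs": 8.0},
    }

    for fault, stresses in fault_stress.items():
        for name, value in stresses.items():
            if value > ratings[name]:
                print(f"ALARM: {fault} overstresses {name} ({value} > {ratings[name]})")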
Figure 7, The dialog which is used to set
measurement tolerances.
Figure 8, The Results dialog after using the Expand to Pass feature to set tolerances for ±5% power
supply variations.
The fault universe for the SMPS consisted of 51 failure modes. Of particular interest were the PWM
and power Mosfet faults. The three considered PWM IC failure modes were: Output shorted to VCC,
Output shorted to ground, and Output open. The five considered power Mosfet failure modes were
(D=Drain, G=Gate, S=Source): shortedGS, shortedGD, shortedDS, OpenDS, and OpenG.
Failure mode simulation can proceed in one of several ways:
• One fault mode is inserted at a time for one test configuration.
• One fault mode is inserted at a time for several test configurations.
• All of the fault modes are individually inserted, in succession, for one or more test configurations.
In this example, all failure modes are individually simulated for a ramped power supplies configura-
tion, running a short transient analysis. The results are reported in a special dialog which is shown in
Figure 9. For each failure mode, the software records the value of every user-defined measurement.
Past work [13,14] implies that simulation runtime is a major inhibitor to this methodology. However,
these remarks tend to ignore recent developments in the areas of model optimization, behavioral and
Analog Hardware Description Language (AHDL) modeling, and state-of-the-art simulator and computer
performance. With the proper application of these items and control of the simulator options, analysis
of an SMPS in the transient domain using fault-by-fault simulation is clearly possible, as indicated by
the following results:
Type of Run              Analyses          Circuit Configuration   Time
Single Simulation        Full Transient    Ramped Supplies         184.90s
Single Simulation        Short Transient   Ramped Supplies         49.18s
Monte Carlo (30 cases)   Short Transient   Ramped Supplies         23.5 minutes
Failure Modes (51)       Short Transient   Ramped Supplies         47 minutes

All simulations were performed on a 200MHz Pentium processor with 32MB RAM.
IX. RESULTS REPORTING
The results for each failure are reported in a special dialog (shown in Figure 9). A tree list, similar to
the Windows 95 Explorer tree, lists each drawing configuration, the simulation setups which are used
under that configuration, and the individual analyses for each setup. Folded out from the analysis type
are the various measurements that have been defined. There are two other display types; one that
shows all of the failure results for each test, and another that shows a histogram of test measurements
vs. failure modes, as shown in Figures 10 and 11.
The meter on the left of the report is used as a quick indicator of the measurement's pass-fail status. A long
bar on the left or right of the meter center indicates that the associated failure mode moves the measured
value outside of the pass/fail limits by more than 3 times the difference between the upper and lower
pass/fail limits (i.e., a very high probability of failure detection). A short bar just to the right or left of
center indicates that the failure is detected but is out of limits by less than one tolerance range (i.e., it
could be an uncertain detection and may merit further investigation using Monte Carlo techniques).
The Variation drop-down contains a list of all of the simulated faults, making it easier to thumb
through the results of each fault.
Figure 11, All of the faults for the final value measurement of V(5), shown using a histogram sorting technique.
Failed tests are on the left. The rest of the faults are grouped into bins, where T is the pass bandwidth (max-min).
Figure 9, The results dialog after simulating all of the fault modes. The list of measurements for
R2:Open is shown.
Figure 10, This version shows all of the fault mode results for the final value measurement of the
output V(5).
X. ADDING MORE TESTS
Using the test sequencing techniques which are described below, 78% of the faults were detected.
X13:Out-to-Vcc, C3:Open, C2:Open, D1:Short, D1:Open, D2:Short, D2:Open, L2:Open, Q1:ShortCE,
R1:Open, and X16:ShortGD were not detected. Some faults could not be detected from the test
data, while other faults caused simulation convergence failures. It is evident that the SMPS diagnostics
require several additional test setups in order to completely detect all of the faults. Three other
configurations are necessary. They are described below.
Circuit Configuration   Analyses          Faults Detected                Measurement   Description
Dead Circuit Test       AC Analysis       C3:Open                        Resistance    DMM resistance check
PWM Output Test         Short Transient   X13:Out-to-VCC                 Peak-peak     PWM IC output short to Vcc
Reduced Supplies        Short Transient   C2:Open, D1/D2:Short,          Maximum       Main power supply off;
                                          D1/D2:Open, L2:Open,                         PWM supply ramped on slowly
                                          Q1:ShortCE, R1:Open,
                                          X16:ShortGD
One of the three configurations differed solely in its stimulus and power supply settings. Five sche-
matic layers were created in order to implement these three configurations. The first layer, Core
Circuitry, contains all circuitry for the SMPS. The other layers contained different power supplies
for the other configurations, or in the case of the impedance measurement, a DMM instrument. It
should be noted that each of the configurations is a stand-alone design. Each has a unique netlist and
a unique part list. The common production circuitry is carried across all configurations; this greatly
helps minimize transcription errors.
In a second failure analysis pass, the three new configurations were simulated with respective failure
modes inserted. The last step, discussed below, involves the sequencing of the tests into a fault tree.
XI. RANKING OF FAILURE STATES
The process of fault detection and fault tree generation takes place in the Fault Tree design dialog
(Figure 12) using a novel test sequencing technique.
It is generally accepted that the best fault tree is one that arrives at the highest probability failure conclusion
with the least amount of work. The best test is then the test that produces the optimum fault tree, free
from errors. Several methods for selecting the best test out of those remaining have been proposed
[1,2]. Given an equal probability of occurrence for each fault, the best test is usually one that evenly
divides the input group between the pass and fail group. A somewhat more complex procedure has
also been proposed. Note that a general solution, made by exhaustive search, rapidly becomes intrac-
table [2].
If the component failure rate is used to weight each fault in the ambiguity group, then we can assign
a probability of detection to the pass group, p, and the fail group, q. What is really determined is the
probability that a test will pass (p) and the probability that a test will fail (q). Then p + q = 1.0, because
the test will either pass or fail. The input probability must be 1.0 because one of the conclusions in the
current ambiguity group will be the answer. Weighting is used to reassess individual failure probabil-
ity, given that the group is the answer at this point in the isolation process. Now the probability of
reaching each conclusion can be predicted, based on failure weights. The best fault tree can now be
defined as the one which arrives at each failure conclusion with the least amount of work. If each test
is equally difficult, then the work is the summation of the probabilities of reaching each conclusion.
To compute these probabilities, we simply traverse the tree for each conclusion, multiplying the
probabilities of each successive outcome, and then summing the resultant probabilities for each con-
clusion. The best result from this procedure is 1.0. The figure of merit tells us how well we did, and
can be used to compare fault trees.
Clearly, we must select tests which produce high-probability outcomes. To find these tests, we
compute the entropy of each useful test that is available:

Entropy = -p*log(p) - q*log(q)
According to information theory, the highest entropy test contains the most information. Proceeding in this
manner tends to produce efficient fault trees. The software does not attempt to grade tests by diffi-
culty, since this may be very subjective. Instead, tests may be grouped into a common pool, or group,
for selection. This allows tests to be ordered not only by difficulty, but also by logical requirements;
for example, high temperature vs. low temperature and safe-to-start vs. operating point.
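The selection rule can be sketched in a few lines of Python (the weights and test pass-groups below
are invented for illustration; the log base does not affect the ranking):

    from math import log10

    def entropy(p):
        # Information content of a test whose weighted pass probability is p.
        if p <= 0.0 or p >= 1.0:
            return 0.0            # the test cannot split the ambiguity group
        q = 1.0 - p
        return -p * log10(p) - q * log10(q)

    # Failure-rate weights for the faults remaining in the ambiguity group.
    weights = {"R1:Open": 0.4, "C2:Open": 0.3, "Q1:ShortCE": 0.3}
    total = sum(weights.values())

    # For each candidate test, the faults that would land in its pass group.
    pass_groups = {"TestA": {"R1:Open"}, "TestB": {"R1:Open", "C2:Open"}}

    def p_pass(test):
        return sum(weights[f] for f in pass_groups[test]) / total

    best = max(pass_groups, key=lambda t: entropy(p_pass(t)))
    print(best)   # TestA: p = 0.4 is the most even weighted split available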
The failure state (binary, tertiary, histogram, or vector) defines the method by which measurements
are used to make a test. Tests have only two outcomes, pass or fail; but a measurement can be compared
with many different limits, creating a large number of possible tests for each measurement. Here's the
way each failure state works to define tests:
1. Binary: The test passes if the result is within the test limits, and fails if it is outside of the limits.
2. Tertiary: The measurement is divided into 3 states: fail low, pass, and fail high. Two tests are
generated for each measurement, with outcomes of pass or fail low, and pass or fail high.
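A brief sketch of how the binary and tertiary states generate tests (hypothetical Python; the limit
values are invented):

    def binary_test(value, lo, hi):
        # One test per measurement: pass inside the limits, fail outside.
        return "pass" if lo <= value <= hi else "fail"

    def tertiary_tests(value, lo, hi):
        # Two tests per measurement: one detects fail-low, the other fail-high.
        return {"fail_low":  "fail" if value < lo else "pass",
                "fail_high": "fail" if value > hi else "pass"}

    print(binary_test(5.2, 4.5, 5.5))        # pass
    print(tertiary_tests(4.1, 4.5, 5.5))     # {'fail_low': 'fail', 'fail_high': 'pass'}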