Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

Tolga Kurtoglu (1), Sriram Narasimhan (2), Scott Poll (3), David Garcia (4), Stephanie Wright (5)

(1) Mission Critical Technologies @ NASA Ames Research Center, Moffett Field, CA, 94035, USA, tolga.kurtoglu@nasa.gov
(2) University of California, Santa Cruz @ NASA Ames Research Center, Moffett Field, CA, 94035, USA, Sriram.Narasimhan-1@nasa.gov
(3) NASA Ames Research Center, Moffett Field, CA, 94035, USA, scott.poll@nasa.gov
(4) Stinger Ghaffarian Technologies @ NASA Ames Research Center, Moffett Field, CA, 94035, USA, david.garcia@nasa.gov
(5) Vanderbilt University, Nashville, TN, 37203, USA

This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

ABSTRACT

Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, a description of the faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.

1 INTRODUCTION

Fault diagnosis in physical systems involves the detection of anomalous system behavior and the identification of its cause. Key steps in the diagnostic inference are fault detection (is the output of the system different from what is expected?), fault isolation (what is broken in the system?), fault identification (what is the magnitude of the failure?), and fault recovery (how can the system continue to operate in the presence of the faults?). Expert knowledge and prior know-how about the system, models describing the behavior of the system, and sensor data from the system during actual operation are used to develop diagnostic inference algorithms. This problem is non-trivial for a variety of reasons, including:

- Incorrect and/or insufficient knowledge about system behavior
- Limited observability
- Presence of many different types of faults (system/supervisor/actuator/sensor faults, additive/multiplicative faults, abrupt/incipient faults, persistent/intermittent faults)
- Non-local and delayed effects of faults due to the dynamic nature of system behavior
- Presence of other phenomena that influence/mask the symptoms of faults (unknown inputs acting on the system, noise that affects the output of sensors, etc.)

Several communities have attempted to solve the diagnostic inference problem using various methods. Some typical approaches have been:

- Expert Systems - These approaches encode knowledge about system behavior into a form that can be used for inference. Some examples are rule-based systems (Kostelezky et al., 1990) and fault trees (Kavcic and Juricic, 1997).
- Model-based Systems - These approaches use an explicit model of the system configuration and behavior to guide the diagnostic inference. Some examples are "FDI" methods (Gertler and Inc., 1998), statistical methods (Basseville and Nikiforov, 1993), and "AI" methods (Hamscher et al., 1992).
- Data-driven Systems - These approaches use only the data from representative runs to learn parameters that can then be used for anomaly detection or diagnostic inference on future runs. Some examples are IMS (Iverson, 2004), Neural Networks (Sorsa and Koivo, 1998), etc.
- Stochastic Methods - These approaches treat the diagnosis problem as a belief state estimation problem. Some examples are Bayesian Networks (Lerner et al., 2000), Particle Filters (de Freitas, 2001), etc.

Despite the development of such a variety of notations, techniques, and algorithms, efforts to evaluate and compare the different diagnostic algorithms (DAs) have been minimal (discussed in Section 2). One of the major deterrents is the lack of a common framework for evaluating and comparing diagnostic algorithms. Such a framework would consist of the following:

- Define a standard representation format for the system description, sensor data, and diagnosis results
- Develop a software run-time architecture that can run specific scenarios from the actual system, simulation, or other data sources such as files (individually or as a batch), execute DAs, send scenario data to the DAs at appropriate time steps, and archive the diagnostic results from the DAs
- Define a set of metrics to be computed based on the comparison of the actual scenario and the diagnosis results from the DA

Some initial steps in developing such a framework are presented in (Kurtoglu et al., 2009b) as part of the DX Competition initiative (Kurtoglu et al., 2009a). In this paper, we present our efforts to benchmark a set of DAs on the NASA Ames Electrical Power System testbed (ADAPT) using the DXC framework. Section 2 describes other work related to benchmarking of DAs. Section 3 presents the DXC framework in brief. Section 4 describes how the benchmarking was performed, including a description of the ADAPT system, the faults injected, the DAs tested, and the metrics computed. Section 5 presents the results and detailed analyses of the benchmarking activity. Section 6 lists the limitations and plans for future work. Lastly, Section 7 presents the conclusions.

2 RELATED WORK

Several researchers have attempted to demonstrate benchmarking capability on different systems. Among these, (Orsagh et al., 2002) provided a set of 14 metrics to measure the performance and effectiveness of prognostics and health management algorithms for US Navy applications (Roemer et al., 2005). (Bartys et al., 2006) presented a benchmarking study for actuator fault detection and identification (FDI). This study, developed by the DAMADICS Research Training Network, introduced a set of 18 performance indices used for benchmarking of FDI algorithms on an industrial valve-actuator system. Izadi-Zamanabadi and Blanke (1999) presented a ship propulsion system as a benchmark for autonomous fault control. This benchmark has two main elements. One is the development of an FDI algorithm, and the other is the analysis and implementation of autonomous fault accommodation.
Finally, (Simon et al., 2008) introduced a benchmarking technique for gas path diagnosis methods to assess the performance of engine health management technologies.

The approach presented in this paper uses the DXC Framework (Kurtoglu et al., 2009b) (described in Section 3), which adopts some of its metrics from the literature (Society of Automotive Engineers, 2007; Orsagh et al., 2002; Simon et al., 2008) and extends prior work in this area by 1) defining a number of new benchmarking indices, 2) providing a generic, application-independent architecture that can be used for benchmarking different monitoring and diagnostic algorithms, and 3) facilitating the use of real process data on large-scale, complex engineering systems. Moreover, it is not restricted to a single-fault assumption and enables the calculation of benchmarking metrics for systems in which each fault scenario may contain multiple faults.

3 FRAMEWORK OVERVIEW

The framework architecture employed for benchmarking diagnostic algorithms is shown in Figure 1. Major elements are the physical system (ADAPT EPS testbed), diagnostic algorithms, scenario-based experiments, and benchmarking software. The physical system description and sample data (nominal and faulty) are provided to algorithm and model developers to build DAs. System documentation in XML format specifies the components, connections, and high-level mode behavior descriptions, including failure modes. A diagram with component labels and connection information is also provided. The documentation defines the component and mode identifiers DAs must report in their diagnoses for proper benchmarking. The fault catalog, part of the system documentation, establishes the failure modes that may be injected into experimental test scenarios and diagnosed by the DAs. Benchmarking software is used to quantitatively evaluate the DA output against the known fault injections using predefined metrics.

Figure 1: Benchmarking Framework Architecture
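To make the role of the catalog concrete, the sketch below shows one way a DA developer might hold the documented (component, failure mode) pairs in memory and check a candidate diagnosis against them before reporting it. This is an illustration only, not part of the DXC framework APIs; the identifier strings are placeholders patterned on the ADAPT schematic labels and the fault types listed later in Table 2, not necessarily the exact names used in the XML system description.

import java.util.List;
import java.util.Set;

public class FaultCatalogSketch {

    // One allowable (component, failure mode) pair from the fault catalog.
    record CatalogEntry(String componentId, String failureMode) {}

    public static void main(String[] args) {
        // Placeholder entries; the real catalog is read from the XML system description.
        List<CatalogEntry> catalog = List.of(
                new CatalogEntry("EY170", "StuckOpen"),     // relay
                new CatalogEntry("EY170", "StuckClosed"),
                new CatalogEntry("CB166", "FailedOpen"),    // circuit breaker
                new CatalogEntry("IT267", "StuckAtValue")); // current sensor

        // A candidate diagnosis is a set of such pairs, so multiple-fault
        // hypotheses can be expressed for ADAPT scenarios.
        Set<CatalogEntry> candidate = Set.of(new CatalogEntry("EY170", "StuckOpen"));

        // Benchmarking only credits documented identifiers, so a DA should
        // validate its output against the catalog before publishing it.
        System.out.println("Candidate uses documented identifiers: "
                + catalog.containsAll(candidate));
    }
}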

Execution and evaluation of diagnostic algorithms are accomplished with the run-time architecture depicted in Figure 2. The architecture has been designed to interface DAs with little overhead by using minimalistic C and Java APIs or by implementing a simple ASCII-based TCP messaging protocol. The Scenario Loader is the main entry point for DAs; it starts and stops other processes and cleans up upon completion of all scenarios. The Scenario Data Source publishes archived datasets containing commands and sensor values following a wall-clock schedule specified by timestamps in the scenario files. The DA uses the diagnostic framework messaging interface to receive sensor and command data, perform a diagnosis, and publish the results; it is the only component that is implemented by DA developers. The Scenario Recorder timestamps each fault injection and diagnosis message upon arrival and compiles it into a Scenario Results file. The Evaluator takes the Scenario Results file and calculates metrics to evaluate DA performance.

Messages in the run-time architecture are exchanged as ASCII text over TCP/IP. DAs may use the provided API calls for parsing, sending, and receiving messages or choose to use the underlying TCP/IP directly. The DA output is standardized to facilitate benchmarking and includes a timestamp indicating when the diagnosis has been issued; a detection signal that indicates whether the DA has detected a fault; an isolation signal that indicates whether a DA has isolated a candidate or set of candidates with associated probabilities; and a candidate fault set that has one or multiple candidates, each with a single fault or multiple faults. More details about the framework can be found in (Kurtoglu et al., 2009b).

Figure 2: Run-time Architecture
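As a rough sketch of what the thin DA-side integration can look like when the underlying TCP interface is used directly, the following minimal Java program reads ASCII scenario messages from a socket and publishes a single standardized diagnosis (timestamp, detection and isolation flags, weighted candidate set). The host, port, message grammar, and detection logic are all invented for illustration; the actual message formats and APIs are defined by the framework (Kurtoglu et al., 2009b).

import java.io.*;
import java.net.Socket;

public class TinyDiagnosticAlgorithm {
    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint published by the Scenario Data Source.
        try (Socket socket = new Socket("localhost", 5555);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

            String line;
            boolean faultReported = false;
            while ((line = in.readLine()) != null) {
                // Placeholder detection logic: report a fault the first time a
                // sensor message crosses a hard-coded threshold.
                if (!faultReported && looksFaulty(line)) {
                    long t = System.currentTimeMillis();
                    // Standardized output: timestamp, detection flag, isolation
                    // flag, and a weighted candidate fault set (invented syntax).
                    out.println("DIAGNOSIS " + t
                            + " detection=true isolation=true"
                            + " candidates=[EY170:StuckOpen@0.8]");
                    faultReported = true;
                }
            }
        }
    }

    // Stand-in for real monitoring/inference; assumes lines like "SENSOR <id> <value>".
    private static boolean looksFaulty(String msg) {
        String[] parts = msg.trim().split("\\s+");
        if (parts.length < 3 || !parts[0].equals("SENSOR")) {
            return false;
        }
        try {
            return Math.abs(Double.parseDouble(parts[2])) > 100.0;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}

A real DA would replace looksFaulty with its monitoring and inference machinery and would normally rely on the provided C or Java APIs for message handling; the point here is only the shape of the exchange: sensor and command messages stream in, detection and isolation results stream out.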
4 BENCHMARKING ON ADAPT EPS TESTBED

4.1 ADAPT EPS System Description

The physical system used for benchmarking is the Electrical Power System testbed in the ADAPT lab at NASA Ames Research Center (Poll et al., 2007). The ADAPT EPS testbed provides a means for evaluating diagnostic algorithms through the controlled insertion of faults in repeatable failure scenarios. The EPS testbed incorporates low-cost commercial off-the-shelf (COTS) components connected in a system topology that provides the functions typical of aerospace vehicle electrical power systems: energy conversion/generation (battery chargers), energy storage (three sets of lead-acid batteries), power distribution (two inverters, several relays, circuit breakers, and loads), and power management (command, control, and data acquisition). The EPS delivers AC (Alternating Current) and DC (Direct Current) power to loads, which in an aerospace vehicle could include subsystems such as the avionics, propulsion, life support, environmental controls, and science payloads. A data acquisition and control system commands the testbed into different configurations and records data from sensors that measure system variables such as voltages, currents, temperatures, and switch positions. Data is presently acquired at a 2 Hz rate.

The scope of the ADAPT EPS testbed used in the benchmarking study is shown in Figure 3. Power storage and distribution elements from the batteries to the loads are within scope; there are no power generation elements. In order to encourage more DA developers to participate in the benchmarking effort, we also included a simplified scope of the system called ADAPT-Lite, which covers a single battery feeding a single load, as indicated by the dashed lines in the schematic (Figure 3). The characteristics of ADAPT-Lite and ADAPT are summarized in Table 1.

Figure 3: ADAPT EPS Schematic

Table 1: ADAPT EPS Benchmarking System Characteristics
  Aspect                 ADAPT-Lite                               ADAPT
  #Comps/Modes           37/93                                    173/430
  Initial State          Relays closed; circuit breakers closed   Relays open; circuit breakers closed
  Nominal mode changes?  No                                       Yes
  Multiple faults?       No                                       Yes

The greatest simplification of ADAPT-Lite relative to ADAPT is not the reduced size of the domain but the elimination of nominal mode transitions. The starting configuration for ADAPT-Lite data has all relays and circuit breakers closed, and no nominal mode changes are commanded during the scenarios. Hence, any noticeable changes in sensor values may be correctly attributed to faults injected into the scenarios. By contrast, the initial configuration for ADAPT data has all relays open, and nominal mode changes are commanded during the scenarios. The commanded configuration changes result in adjustments to sensor values as well as transients, which are nominal and not indicative of injected faults. Finally, ADAPT-Lite is restricted to single faults, whereas multiple faults are allowed in ADAPT.
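To make the practical effect of that difference concrete, here is a deliberately simplistic detector sketch (not any of the benchmarked DAs): for ADAPT-Lite a persistent deviation from a nominal baseline can be attributed to a fault directly, while for full ADAPT the same logic must at least suppress detections for a short window after each commanded configuration change. The baseline, threshold, and hold-off values are invented for the example.

public class NominalChangeAwareDetector {

    private final boolean nominalModeChangesPossible;  // false for ADAPT-Lite, true for ADAPT
    private double lastCommandTime = Double.NEGATIVE_INFINITY;

    // Invented hold-off: ignore deviations this soon after a commanded change.
    private static final double TRANSIENT_HOLDOFF_S = 5.0;

    NominalChangeAwareDetector(boolean nominalModeChangesPossible) {
        this.nominalModeChangesPossible = nominalModeChangesPossible;
    }

    // Record the time of a commanded configuration change (relay, breaker, etc.).
    void onCommand(double time) {
        lastCommandTime = time;
    }

    // Returns true if a sensor deviation should be treated as a fault symptom.
    boolean onSensor(double time, double value, double nominalBaseline, double threshold) {
        boolean deviates = Math.abs(value - nominalBaseline) > threshold;
        if (!deviates) {
            return false;
        }
        if (nominalModeChangesPossible && time - lastCommandTime < TRANSIENT_HOLDOFF_S) {
            return false;  // likely a nominal transient following a commanded mode change
        }
        return true;
    }

    public static void main(String[] args) {
        // ADAPT-Lite style: no commanded changes, so a 24 V bus reading 0 V is a fault symptom.
        NominalChangeAwareDetector lite = new NominalChangeAwareDetector(false);
        System.out.println(lite.onSensor(40.0, 0.0, 24.0, 1.0));  // prints: true

        // Full ADAPT style: the same deviation right after a command is suppressed.
        NominalChangeAwareDetector full = new NominalChangeAwareDetector(true);
        full.onCommand(38.0);
        System.out.println(full.onSensor(40.0, 0.0, 24.0, 1.0));  // prints: false
    }
}

The benchmarked DAs use model-based or data-driven reasoning rather than a fixed hold-off; the sketch only illustrates why the ADAPT-Lite scenarios are easier to interpret.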

4.2 Fault Injection and Scenarios

ADAPT supports the repeatable injection of faults into the system in three ways:

Hardware-Induced Faults: These faults are physically injected at the testbed hardware. A simple example is tripping a circuit breaker using the manual throw bars. Another is using the power toggle switch to turn off the inverter. Faults may also be introduced in the loads attached to the EPS. For example, the valve can be closed slightly to vary the back pressure on the pump and reduce the flow rate.

Software-Induced Faults: In addition to fault injection through hardware, faults may be introduced via software. Software fault injection includes one or more of the following: 1) sending commands to the testbed that were not intended for nominal operations; 2) blocking commands sent to the testbed; and 3) altering the testbed sensor data.

Real Faults: In addition to the two aforementioned methods, real faults may be injected into the system by using actual faulty components. A simple example is a blown light bulb. This method of fault injection was not used in this study.

For the results presented in this paper, only abrupt discrete faults (a change in the operating mode of a component) and parametric faults (a step change in parameter values) are considered. The distinct fault types injected into the testbed for benchmarking are shown in Table 2.

Table 2: Fault types used for ADAPT and ADAPT-Lite
  Component          Fault Description
  Battery            Degraded
  Boolean Sensor     Stuck at Value
  Circuit Breaker    Failed Open, Stuck Closed
  Inverter           Failed Off
  Relay              Stuck Open, Stuck Closed
  Sensor             Stuck at Value, Offset
  Pump (Load)        Flow Blocked, Failed Off
  Fan (Load)         Over Speed, Under Speed, Failed Off
  Light Bulb (Load)  Failed Off
  Basic Load         Failed Off

The diagnostic algorithms are tested against a number of diagnostic scenarios consisting of nominal or faulty data of approximately four minutes in length. Some key points and intervals of a notional scenario are illustrated in Figure 4, which splits the scenario into three important time intervals: startup, injection, and shutdown. During the first interval, startup, the diagnostic algorithm is given time to initialize, read data files, etc. Note that this is not the time for compilation; compilation-based algorithms compile their models beforehand. Though sensor observations may be available during startup, no faults are injected during this time. Fault injection takes place during injection. Once faults are injected, they persist until the end of the scenario. Multiple faults may be injected simultaneously or sequentially. Finally, the algorithms are given some time post-injection to send final diagnoses and gracefully terminate during shutdown. The intervals used in this study are 30 seconds, 3 minutes, and 30 seconds for startup, injection, and shutdown, respectively.

Below are some notable time points for the example diagnostic scenario in Figure 4:

- t_inj - A fault is injected at this time;
- t_fd - The diagnostic algorithm has detected a fault;
- t_ffi - The diagnostic algorithm has isolated a fault for the first time;
- t_fir - The diagnostic algorithm has modified its isolation assumption;
- t_lfi - This is the last fault isolation during injection.

Figure 4: Key time points, intervals, and signals
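From these time points, latency-style measures follow directly: for example, detection latency is the gap between injection and first detection, and final isolation latency is the gap between injection and the last isolation during the injection interval. The sketch below only illustrates that arithmetic with made-up timestamps; the precise metric definitions used in the benchmark are those of the DXC framework (Kurtoglu et al., 2009b).

public class TimingMetricsSketch {

    // All values are scenario-relative times in seconds (an assumption for
    // this sketch; the framework records wall-clock timestamps).
    static double detectionLatency(double tInj, double tFd) {
        return tFd - tInj;   // time from fault injection to first detection
    }

    static double finalIsolationLatency(double tInj, double tLfi) {
        return tLfi - tInj;  // time from injection to the last isolation during injection
    }

    public static void main(String[] args) {
        // Made-up scenario: startup lasts 30 s, so injection occurs after t = 30 s.
        double tInj = 45.0;  // t_inj: fault injected
        double tFd  = 47.5;  // t_fd:  fault first detected
        double tFfi = 52.0;  // t_ffi: first fault isolation
        double tLfi = 80.0;  // t_lfi: last fault isolation during injection

        System.out.printf("Detection latency:       %.1f s%n", detectionLatency(tInj, tFd));
        System.out.printf("First isolation latency: %.1f s%n", tFfi - tInj);
        System.out.printf("Final isolation latency: %.1f s%n", finalIsolationLatency(tInj, tLfi));
    }
}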

Nominal and failure scenarios were created using the hardware- and software-induced fault injection methods according to the intervals specified above and using the faults listed in the fault catalog (Table 2). As shown in Table 3, nominal scenarios comprise roughly half of the ADAPT-Lite and one-third of the ADAPT test scenarios. The ADAPT-Lite fault scenarios are limited to single faults. Half of the ADAPT fault scenarios are single faults; the others are double or triple faults.

Table 3: Number of sample and test scenarios for ADAPT and ADAPT-Lite

4.3 Diagnostic Challenges

The ADAPT EPS testbed offers a number of challenges to DAs. It is a hybrid system with multiple modes of operation due to switching elements such as relays and circuit breakers. There are continuous dynamics within the operating modes and components from multiple physical domains, including electrical, mechanical, and hydraulic. It is possible to inject multiple faults into the system. Furthermore, timing considerations and transient behavior must be taken into account when designing DAs. For example, when power is input to the inverter there is a delay of a few seconds before power is available at the output. For some loads, there is

