
PROFICIENCY EVALUATION OF NDE PERSONNEL UTILIZING THE ULTRASONIC METHODOLOGY

Mark K. Davis
Nondestructive Inspection and Testing Section
SA-ALC/MAQCN
Kelly AFB, TX 78241-5000

INTRODUCTION

The measure by which the capability of a nondestructive testing (NDT) inspector can be assessed is the proficiency with which that individual can evaluate a given set of test specimens in a reasonable length of time with a good probability of detection at a high confidence level. These criteria are governed by parameters based upon the training and experience of the technician, as well as the technical orders and procedures under which he/she must perform inspections. This is the philosophy which governed the NDT proficiency program discussed in this paper.

The proficiency test was administered by the Nondestructive Testing Laboratory at the San Antonio Air Logistics Center (SA-ALC), Kelly AFB, Texas. The subjects were NDT technicians of the Structural Assessment Testing (SAT) Facility located in the Engine Division of the SA-ALC. The purpose of this program was to test the proficiency of NDT technicians in the inspection of F-100 stage 2 compressor blades, utilizing the ultrasonic method. The evaluation results can be used to establish general training program requirements and also to estimate the NDT capability of the individual inspector. Areas of deficient capability can be identified and corrected.

METHODOLOGY

Test Specimen Evaluation

The test specimens consisted of both flawed (artificially-induced fatigue cracks) and good F-100 stage 2 compressor blades. All of the participating technicians were experienced and routinely perform these blade inspections. Each NDT technician was given both written and verbal instruction on how the proficiency evaluation was to be administered. The inspectors were to perform an A-scan ultrasonic inspection using a surface wave technique in accordance with Technical Order (T.O.) 2J-F100-9.
The inspection methodology is similar to what the inspectors normally use in a production environment. It is noted at this point that the T.O. calls for the ultrasonic instrument calibration to be performed on a test block with a .050" Elox slot, and that the amplitude of the instrument response from this flaw is set to 50%. The technician was to hold suspect signals above this limit, but

reject any blade with a 75% amplitude or greater. Each technician was given sufficient time to complete the test.

Proficiency Evaluation

The evaluation was separated into two phases. Phase I involved 24 inspectors evaluating a 150 blade test set. The test set consisted of 112 flawed blades with artificially-induced fatigue cracks ranging in length from .006" to .173" (Fig. 1), and 38 unflawed or "good" blades. The flawed and good blades were intermixed to provide the technician with a realistic random selection. Each technician was given an eight hour work period to complete the inspection of the test set. The inspectors were not permitted any outside assistance.

A Phase I data subset consisting of 94 specimens was also evaluated because of the limitation placed on the minimum flaw size in the test blades by the inspection technique of the technical order. This subset consisted of the 56 flawed specimens ranging from .050" to .173" (Fig. 2) and the data from the original 38 good blades.

In Phase II, inspectors from Phase I were retested on a 50 blade test set. The set consisted of 12 specimens with flaws ranging in size from .050" to .084" (Fig. 3) and 38 good blades. The inspectors were allowed a four hour period in which to evaluate the test set. Again, no outside assistance was permitted.

Fig. 1. Flaw size distribution for the Phase I test set (histogram of blade flaw sizes).

Fig. 2. Flaw size distribution for the Phase I data subset (histogram of blade flaw sizes).

Fig. 3. Flaw size distribution for the Phase II test set (histogram of blade flaw sizes).

DATA DEVELOPMENT

Data Obtained

The pertinent data obtained from the technician evaluations of the Phase I and Phase II test sets were false calls (FC), FC percentage, finds, and probability of detection (POD) percentage (Tables 1 and 2). A false call occurs when a good blade is incorrectly marked as flawed. The FC percentage is the number of false calls divided by the total number of good blades evaluated, expressed as a percentage. Finds occur when flawed specimens are correctly marked as flawed. The POD percentage is the number of finds divided by the total number of flawed specimens in the test set, expressed as a percentage. Other data utilized are the number of flawed specimens missed and the number of good blades correctly determined to be unflawed.

Data Analysis

Two different types of analyses were performed on the data obtained from Phase I, the Phase I subset, and Phase II. One analytical method relates finds, false calls, unflawed specimens correctly marked as unflawed, and flaws missed. This methodology, obtained from the Lockheed-Georgia Company, arranges the four parameters into a two-by-two decision matrix (Fig. 4). A second matrix (Fig. 5), based on the data obtained from the technician's test set evaluation, is developed to show what could occur by chance. The four combinations derived from the individual technician's performance are entered into the test matrix (Fig. 6). A comparison of the test matrix with the chance/random matrix is performed by a chi-square determination as shown in Figure 7. If there is a difference between the two, then the performance is unlike chance. Conversely, no difference between the two could indicate performance attributable to chance. One further step is taken to derive an index of performance.
The chi-square value is normalized to the test size (number of decisions) and the square root is taken. The resulting value is the performance index or coefficient of contingency (C of C). The higher the C of C, the better the capability or performance of an individual inspector for a given test set.

The second analytical method relates POD percentage to FC percentage in what is termed a relative operating characteristic (ROC) curve. The ROC curve is important in determining the ability of individual technicians to inspect test specimens. Their probability of flaw detection is demonstrated

as well as the reliability of their decisions in the form of the FC percentage. The POD percentage is plotted along the Y-axis and the false call percentage along the X-axis. A "chance" line is drawn for the probability that, for a given POD percentage, an equally corresponding false call percentage is obtained.

Table 1. Selected data from the Phase I and Phase I subset evaluations

TECH. NO.   FC   FC%   Finds (150 set)   POD% (Finds/112)   Finds (94 set)   POD% (Finds/56)
01           0     0         61                 54                48                86
02           4    11         74                 66                51                91
03           3     8         79                 70                53                95
04           6    16         81                 72                53                95
05           2     5         78                 70                53                95
06          11    29         81                 72                51                91
07          29    76        101                 90                51                91
08          15    39         97                 87                56               100
09          29    76        109                 97                52                93
10           2     5         72                 64                51                91
11           8    21         75                 67                54                96
12          26    68         93                 83                48                86
13          DID NOT COMPLETE EVALUATION
14          17    45         88                 79                52                93
15           4    11         71                 63                51                91
16          36    95        108                 96                54                96
17           3     8         69                 62                51                91
18          21    55         96                 86                55                98
19          30    79        106                 95                55                98
20           9    24         89                 79                53                95
21           3     8         81                 72                53                95
22          30    79        103                 92                52                93
23          38   100        112                100                56               100
24           7    18         75                 67                52                93

Table 2. Selected data from the Phase II evaluation

TECH. NO.   FC   FC%   Finds (50 set)   POD% (Finds/12)
01 (13)      7    18        10                83
02 (07)     13    34         4                33
03 (08)      9    24         9                75
04 (09)      3     8        10                83
05 (14)     21    55         9                75
06 (16)     DID NOT ACCOMPLISH EVALUATION
07 (18)      8    21        11                92
08 (19)      5    13        11                92
09 (22)      3     8        12               100
10 (23)     38   100        12               100
11 (21)     38   100        11                92
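The FC percentage and POD percentage definitions above lend themselves to direct computation. A minimal sketch (Python) with illustrative counts of the kind reported in Table 1 (e.g., 15 false calls out of 38 good blades, 97 finds out of 112 flawed blades):

```python
# Sketch of the FC% and POD% definitions from the Data Obtained section.
# The counts below are illustrative.

def fc_percent(false_calls: int, good_blades: int) -> float:
    """False calls divided by total good blades evaluated, as a percentage."""
    return 100.0 * false_calls / good_blades

def pod_percent(finds: int, flawed_blades: int) -> float:
    """Finds divided by total flawed specimens in the test set, as a percentage."""
    return 100.0 * finds / flawed_blades

print(round(fc_percent(15, 38)))    # 15 false calls on 38 good blades -> 39
print(round(pod_percent(97, 112)))  # 97 finds on 112 flawed blades -> 87
```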

Fig. 4. The decision matrix for a flaw detection evaluation:

                FLAWED                                 UNFLAWED
MARKED          correct decision (A)                   incorrect decision - false call (B)
UNMARKED        incorrect decision - missed flaw (C)   correct decision (D)

Fig. 5. Decision matrix sums applied to the random/chance matrix formulation:

row 1:    r1 = A + B          column 1:    c1 = A + C
row 2:    r2 = C + D          column 2:    c2 = B + D
total:    N = A + B + C + D

Fig. 6. Random matrix developed from the inspection decisions and test conditions; each cell is the product of its row and column totals divided by N:

                FLAWED         UNFLAWED
MARKED          c1*r1/N        c2*r1/N
UNMARKED        c1*r2/N        c2*r2/N

Fig. 7. Statistical test for association between matrices and ranking determination:

CHI-SQUARE = sum over the four cells of (test matrix cell - random matrix cell)^2 / (random matrix cell)

COEFFICIENT OF CONTINGENCY = (CHI-SQUARE/N)^(1/2)

An example of a ROC graph is shown in Figure 8. The limits by which a ROC curve was interpreted were based on the recommendations of experienced NDT personnel. A POD percentage minimum of 80% and a FC percentage maximum of 30% define the limits for a proficient NDT inspector.

As noted previously, the inspector calibrates and applies his inspection criteria to a calibration block which contains a .050" Elox notch. However, the Phase I test set contained 112 specimens with flaws of various sizes, including 56 specimens with flaws less than .050". The most relevant analysis of the Phase I data is therefore an evaluation of the subset containing the 56 specimens with flaws greater than or equal to .050", together with the data from the 38 good blades in the test set. The complete Phase I test set of 150 specimens, the subset of 94 specimens, and the Phase II test set of 50 specimens all underwent analyses in which both coefficient of contingency (C of C) values and relative operating characteristic (ROC) curve data were obtained.

Fig. 8. Example of a relative operating characteristic (ROC) graph (POD percentage versus false call percentage, with a chance line).
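The matrix comparison of Figures 4 through 7 can be sketched in a few lines. This is a minimal illustration (Python), assuming the standard chi-square-of-independence form suggested by the figures; the counts are illustrative, not taken from the report's tables:

```python
import math

# 2x2 decision matrix cells for one hypothetical inspector on a 150-blade
# set: 97 finds (A), 15 false calls (B), 15 missed flaws (C), 23 good
# blades correctly left unmarked (D).
A, B, C, D = 97, 15, 15, 23
N = A + B + C + D                    # number of decisions (test size)

r1, r2 = A + B, C + D                # row totals (marked / unmarked)
c1, c2 = A + C, B + D                # column totals (flawed / unflawed)

# Random/chance matrix: each cell is (row total * column total) / N.
expected = [c1 * r1 / N, c2 * r1 / N, c1 * r2 / N, c2 * r2 / N]
observed = [A, B, C, D]

# Chi-square comparison of the test matrix against the chance matrix.
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Coefficient of contingency: chi-square normalized to N, square-rooted.
c_of_c = math.sqrt(chi_square / N)
print(round(c_of_c, 2))  # larger values mean performance further from chance
```

Note that the C of C defined this way lies between 0 and 1; the rankings in Tables 3 through 5 appear to tabulate ten times this quantity.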

RESULTS

Phase I

A total of 24 technicians (numbered #1 through #24) participated in the Phase I evaluation. Twenty-three individuals completed the test on the 150 blade test set. One inspector (#13) failed to complete the evaluation. The C of C data for the Phase I evaluation are presented in Table 3. A C of C distribution in the form of a histogram is presented for the 23 inspectors who completed the Phase I evaluation (Fig. 9). The performance ranking of the participants by C of C is presented in Figure 10. The histogram indicates that a majority of the individuals performed relatively well, ranging between 4.0 and 6.0. Conversely, ten individuals had C of C results less than 4.0. The average C of C value for the Phase I group is 3.7, which indicates a test difficulty that is not well matched to the overall group performance.

The performance ranking in Figure 10, as plotted, shows some linearity in the distribution of the individual inspectors. The ROC curve for the Phase I data is presented in Figure 11. Note that nine individuals had a POD percentage of 80% or greater. However, all nine of these individuals had a FC percentage exceeding 30%. Applying the accept/reject criteria of a POD percentage of 80% or greater and a FC percentage of 30% or less, none of the 23 individuals who completed the Phase I evaluation demonstrated the required NDT proficiency.

Table 3. C of C ranking for the 23 inspectors completing the Phase I evaluation

RANK   TECH.   C of C   % FINDS   FC
01     (21)     5.6      72.3     03
02     (05)     5.6      69.6     02
03     (03)     5.5      70.5     03
04     (20)     5.1      79.5     09
05     (10)     5.1      64.3     02
06     (04)     5.0      72.3     06
07     (01)     4.8      54.5     00
08     (08)     4.7      86.6     15
09     (02)     4.7      65.5     04
10     (17)     4.7      61.6     03
11     (15)     4.6      63.4     04
12     (24)     4.1      66.4     07
13     (11)     4.0      67.0     08
14     (06)     3.9      72.3     11
15     (09)     3.4      97.3     29
16     (18)     3.2      85.7     21
17     (14)     3.2      78.6     17
18     (19)     2.3      94.6     30
19     (22)     1.8      92.0     30
20     (07)     1.8      90.2     29
21     (12)     1.6      83.0     26
22     (16)     0.4      96.4     36
23     (23)     0        100      38

Fig. 9. Histogram showing the C of C data distribution for the Phase I analysis.

Fig. 10. Performance (C of C) ranking for the Phase I inspectors.

Fig. 11. ROC curve for the Phase I data analysis (POD percentage versus false call percentage, with chance line).

Phase I Subset

As noted previously, the NDT technician was to follow the inspection technique set out in the applicable technical order, T.O. 2J-F100-9. This involved calibrating the instrument on an ultrasonic test block which contained a .050" Elox notch, with the instrument amplitude on the notch set to a 50% limit. Any indication above a 75% amplitude limit resulted in the blade being considered a reject. This subset consists of the data for the 56 flawed test specimens from the Phase I test set with flaws greater than or equal to .050". The data from the 38 good blades were also included, to form a subset of 94 test specimens.

The C of C data for this Phase I subset are shown in Table 4. In Figure 12, a histogram shows the distribution of the C of C data and their occurrence. The plot of the performance ranking of the C of C data is shown in Figure 13. Note that the highest C of C is 8.9 and the lowest is 0. The mean C of C is 5.84, which shows that the difficulty of the evaluation is well matched to overall group performance capabilities.

The ROC curve (Fig. 14) is of paramount interest with regard to the entire Phase I evaluation. All 23 individual technicians had POD percentage scores greater than 80%. However, note that ten inspectors had a 30% or greater FC percentage. One inspector called a flaw in each of the specimens of the subset; he had a 100% POD percentage coupled with a 100% FC percentage. Thirteen individuals fell within the acceptable limits which define a proficient NDT technician. A majority of the 13 achieved a 90% or greater POD percentage and fell primarily into two clusters at the 10% and 20% FC percentages. Ten of the twenty-three inspectors failed to achieve the limits for a proficient inspector.

Table 4. C of C ranking from the Phase I subset data analysis

RANK   TECH.   C of C   % FINDS   FC
01     (05)     8.9      94.6     02
02     (03)     8.7      94.6     03
03     (21)     8.7      94.6     03
04     (10)     8.5      91.1     02
05     (01)     8.4      85.7     00
06     (17)     8.3      91.1     03
07     (04)     8.0      94.6     06
08     (02)     8.0      91.1     04
09     (15)     8.0      91.1     04
10     (11)     7.8      96.4     08
11     (24)     7.6      92.9     07
12     (20)     7.3      94.6     09
13     (08)     6.9      100      15
14     (06)     6.4      91.1     11
15     (18)     5.4      98.2     21
16     (14)     5.3      92.9     17
17     (19)     3.2      98.2     30
18     (09)     2.4      92.9     29
19     (12)     2.1      85.7     26
20     (22)     2.0      92.9     30
21     (07)     2.0      91.1     29
22     (16)     0.4      96.4     36
23     (23)     0.0      100      38
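The accept/reject screening applied to these ROC data reduces to a simple predicate. A brief sketch (Python) using the stated limits (POD percentage of at least 80, FC percentage of at most 30); the sample points are hypothetical, not rows from the tables:

```python
# Proficiency screen from the ROC interpretation limits: an inspector is
# proficient when POD% is at least 80 and FC% is at most 30.

def is_proficient(pod_pct: float, fc_pct: float,
                  pod_min: float = 80.0, fc_max: float = 30.0) -> bool:
    return pod_pct >= pod_min and fc_pct <= fc_max

# Hypothetical (POD%, FC%) points illustrating the possible outcomes:
results = {
    "high POD, low FC":  is_proficient(94.6, 5),   # inside both limits
    "high POD, high FC": is_proficient(97.3, 76),  # too many false calls
    "low POD, low FC":   is_proficient(54.5, 0),   # misses too many flaws
}
print(results)
```

This is why a high POD percentage alone does not qualify an inspector: the false call rate disqualifies independently, as the Phase I results show.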

Fig. 12. Histogram showing the C of C data distribution for the Phase I data subset analysis.

Fig. 13. Performance (C of C) ranking for the Phase I data subset inspectors.

Fig. 14. ROC curve for the Phase I data subset analysis (POD percentage versus false call percentage, with chance line).

Phase II

The ten technicians who did not achieve an acceptable proficiency rating in the Phase I subset evaluation, along with the individual who failed to complete the Phase I test, were retested with a modified specimen test set. The Phase II test set, as mentioned earlier, consisted of 50 test blades: twelve of the specimens had flaws ranging from .050" to .084", and the remaining 38 specimens were unflawed or "good" blades. Ten of the eleven inspectors accomplished the Phase II evaluation; one inspector (#16) did not.

The C of C data are presented in Table 5. A histogram (Fig. 15) shows the distribution of the C of C values and their occurrence among the Phase II participants. The performance ranking of the C of C values is shown in Figure 16; the distribution is linear. The highest C of C value is 8.3, with 4 out of 10 individuals having a score of zero. The mean C of C value is 3.62, which indicates that the test difficulty is relatively well matched to overall performance capabilities. In Figure 17 the ROC curve for Phase II is shown. Of the ten individuals who completed the Phase II evaluation, four fell within the region bounded by the acceptable limits (80% POD percentage minimum and 30% FC percentage maximum). Six failed to achieve the limits required of a proficient NDT inspector.

Table 5. C of C ranking for the 10 inspectors completing the Phase II evaluation

RANK   TECH.   C of C   % FINDS   FC
01     (09)     8.3      83.3     03
02     (19)     7.2      91.7     04
03     (18)     6.2      91.7     05
04     (13)     5.9      83.3     04
05     (08)     4.6      75.0     04
06     (22)     2.3      100      10
07     (14)     1.7      75.0     07
08     (07)     0        33.3     04
09     (21)     0        91.7     12
10     (23)     0        100      12

Fig. 15. Histogram showing the C of C data distribution for the Phase II analysis.

Fig. 16. Performance (C of C) ranking for the Phase II inspectors.

Fig. 17. ROC curve for the Phase II data analysis (POD percentage versus false call percentage, with chance line).

SUMMARY

The following key points emerged from this proficiency evaluation program:

a. A majority of the individuals who did not perform well on the Phase I subset analysis were subsequently shown to do poorly on the Phase II retest (Table 6).

b. Technician proficiency can be evaluated accurately with a high degree of confidence at SA-ALC by using two analytical methodologies, C of C values and, to a greater extent, the ROC curves.

c. Individual inspection capability can be assessed from the range of performance demonstrated, and individual baselines established.

d. For the range of flaw sizes in the Phase I, Phase I subset, and Phase II evaluations, technician performance appears to be less dependent on detection criteria than on individual detection capability.

e. The FC percentage data from the C of C values and ROC curves appear to be a deciding factor in eliminating a number of individuals from the acceptable proficiency limits, regardless of the high POD percentage an individual might have obtained.

Table 6. Distribution of personnel for the Phase I subset and Phase II

                                            Phase I Subset   Phase II
Within acceptable limits
(POD% 80-100 and FC% <= 30)                      13               4
Outside acceptable limits                        10               6
Totals                                           23              10

f. The information obtained from this evaluation was used by management organizations to identify those personnel in need of additional NDT training.

RECOMMENDATIONS

Recommendations are as follows:

a. The Nondestructive Testing Laboratory at SA-ALC should continue and expand its in-house capability to conduct proficiency evaluations on NDT personnel at Kelly AFB.

b. A mechanism should be established for the retraining of individuals who have demonstrated a repeated lack of proficiency in a particular NDT skill.

c. A centralized databank should be established where NDT proficiency evaluation information from all Air Force organizations can be made available for analysis and research.

REFERENCES

1. Lewis, William H., Sproat, William H. and Hamilton, James M., Air Force NDI Technician Proficiency, Final Report, Lockheed-Georgia Co., AG82ER0099, 1981.
2. Rummel, Ward D., et al., Reliability of NDI of Aircraft Engine Components, Phase IV, Final Report, Martin-Marietta Aerospace, SA-ALC/MM-85, 1984.
3. Swets, J. A., "Assessment of NDT Systems - Part I and II," Materials Evaluation, Vol. 41, No. 11, 1983, pp. 1294-1303.
4. Swets, J. A. (ed.), Signal Detection and Recognition by Human Observers, Wiley, New York, 1964.
5. Technical Order 33B-1-1, Nondestructive Inspection Methods, Air Force F41608-82-DA097, 1984.
6. Technical Order 2J-F100-9, Inspection of Stage 1 and 2 Fan Blades, Air Force F33657-83-C-200, 1983.