Chapter 4 DESIGN OF EXPERIMENTS


4.1 STRATEGY OF EXPERIMENTATION

Experimentation is an integral part of any human investigation, be it in engineering, agriculture, medicine or industry. An experiment can be defined as a test or series of tests in which purposeful changes are made to the input variables of a system or process so that we may observe, identify and analyze the reasons for any change in the output parameters. In general, experiments are used to study the performance of processes and systems, which can be represented by the general model shown in Figure 4.1. Some of the process variables are controllable, whereas others are uncontrollable. The main objective of experimentation is to determine the influence of these process variables on the output variables. The general approach to planning and conducting the experiment is called the strategy of experimentation.

[Figure 4.1: General model of a process. Controllable factors x1, x2, ..., xr and uncontrollable factors z act on a process that transforms inputs into outputs y1, y2, ..., ys.]
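The distinction between controllable and uncontrollable variables in Figure 4.1 can be sketched in a few lines of Python. The process function below is entirely hypothetical (it is not a model from this work); it only illustrates that repeated runs at fixed controllable settings still vary, because the uncontrollable factor z changes from run to run.

```python
import random

# A minimal sketch of the process model of Figure 4.1: the observed output
# depends on controllable factors (x1, x2) and an uncontrollable noise
# factor (z). The linear form below is hypothetical, for illustration only.
def process(x1, x2, rng):
    z = rng.gauss(0.0, 0.5)          # uncontrollable factor (noise)
    return 2.0 * x1 + 1.5 * x2 + z   # observed output y

rng = random.Random(42)
# Repeated runs at the same controllable settings still differ, because of z:
runs = [process(1.0, 2.0, rng) for _ in range(3)]
print(runs)  # three different outputs scattered around the noise-free value 5.0
```

Designed experiments vary x1 and x2 purposefully and use the scatter caused by z to judge which effects are real.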

4.2 DESIGN OF EXPERIMENTS (DOE)

To use a statistical approach in designing and analyzing an experiment, it is necessary to have a clear idea in advance of exactly what is to be studied, how the data are to be collected, and at least a qualitative understanding of how these data are to be analyzed. Design of experiments (DOE) is a structured, organized method of conducting experiments that is used to determine the relationship between the different factors affecting a process. It helps in drawing valid inferences about the unknown parameters from the data collected during experimentation. The science of statistical design of experiments originated with Sir Ronald Fisher in the 1920s, along with Frank Yates, who proposed the basic principles of experimentation and the data analysis technique called Analysis of Variance (ANOVA), applying experimental matrices to bring discipline and precision to the optimization of agricultural performance. DOE involves a well planned set of experimental runs in which all parameters of interest are varied over a specified range chosen on the basis of preliminary experiments. The analysis of such data is not straightforward without statistical and mathematical tools; it is through these that the effects of the various parameters on the observed data become apparent. Statistical methods offer the only sound and logical means of comparing treatments (parameter levels), and every researcher and scientist in industry should know these techniques to deal effectively with such problems.

The classical approach to experimentation, called the one-factor-at-a-time approach, studies one variable while the other variables are held constant. The major disadvantage of this approach is that it is inefficient and cannot assess interactions among the variables. Furthermore, a large number of experiments has to be carried out as the number of process parameters increases.

The correct approach for dealing with several factors is the factorial design. This is an experimental strategy in which factors are varied together, and it allows for efficient testing of the main effects of the variables as well as the interaction effects among them. Interpretation of the data obtained from factorial designs is usually accomplished using analysis of variance (ANOVA). Factorial designs coupled with ANOVA enable the experimenter to determine all main effects (i.e. the effect on the process of changing one variable) and all interaction effects (the effect on the process of several variables acting jointly). A combination of the levels of all the factors involved in the experiment is called a treatment combination. A factorial experiment involves all (or a chosen few) of the treatment combinations. A factorial experiment is said to be complete if every treatment combination is used at least once. If all the factors involved have the same number of levels, the experiment is called a symmetric factorial experiment; otherwise it is called an asymmetric or mixed-level experiment.

The problem with a complete factorial experiment is that the number of experimental units needed increases rapidly with the number of factors and levels. For example, with ten factors at two levels each there are 2^10 = 1024 treatment combinations, so a complete factorial experiment will involve at least 1024 experimental units. Due to cost and other constraints, such a large experiment is infeasible in many experimental situations, especially in industrial and engineering experimentation. Alternatively, one can experiment with only a subset of the set of all treatment combinations. An experimental strategy that uses a subset of treatment combinations while still allowing inferences on the relevant factorial effects is called fractional replication, or a fractional factorial design. An important class of fractional factorial plans is the one based on orthogonal arrays. In the 1980s Taguchi suggested highly fractional designs and other orthogonal arrays, along with some novel statistical methods, for solving engineering problems. He devised a method which uses specially designed orthogonal arrays to study the entire parameter space with only a small number of experiments. This technique helps the researcher to determine the possible combinations of factors and to identify the best combination for improving product quality and process efficiency.

In performing a designed experiment, changes are made to the input variables and the corresponding changes in the output variables are observed. The input variables are called factors and the output variables are called responses. Each factor can take several values (levels) during the experiment. Applying DOE to monitor the process characteristics in EDM is very appropriate, since it provides the best setting of EDM parameters to fulfill multiple response objectives. Yang and Tarng [77], Ghani et al. [78], Wu and Chang [79], Chang and Kuo [80], and Liu et al. [81] applied the Taguchi methodology to different processes such as turning, end milling, die casting, laser-assisted machining and electro-deposition respectively, and found promising results. Lee and Yur [2], Lin et al. [33], Simao et al. [46], Marafona and Wykes [56], George et al. [82], Lin et al. [83] and

Prihandana et al. [84] employed the Taguchi methodology to characterize the EDM process. Most EDM experiments have used this method because the EDM process involves many parameters and the number of experiments can thereby be reduced.

4.3 TAGUCHI METHOD

Taguchi methods are statistical methods developed by Genichi Taguchi to improve the quality of manufactured goods; more recently they have also been applied to marketing, advertising, biotechnology, engineering and manufacturing. Dr. Genichi Taguchi developed a robust parameter design method, based on orthogonal array experiments, which gives much reduced variance for the experiment with optimum settings of the control parameters. It is a very useful tool for reducing variation and improving product and process quality. The method is devised for process optimization and for identifying optimal combinations of factors for the selected response parameters. As a result, the Taguchi method has become a powerful tool in design of experiments [85].

Taguchi distinguishes between various types of factors, the main types being control factors and noise factors. Control factors are those that can be set at specified levels during the production process; they are the inputs applied to the system in a controlled way in order to study their influence on the responses. A noise factor is anything that causes a measurable product or process characteristic to deviate from its target value; noise factors are not controllable and their influences are not known. The purpose, then, is to learn through an experiment how changes in the settings of the control factors affect the average quality of a product, and how they affect the product's robustness to noise factors.
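The idea that a control-factor setting can change a product's sensitivity to noise can be shown with a toy example. The response function below is hypothetical, chosen only so that one control setting cancels the noise term; it is not drawn from this work.

```python
import random
import statistics

# Toy illustration of Taguchi's control/noise distinction: the noise factor
# z cannot be fixed, but the control factor x changes how strongly z
# disturbs the response. This response function is purely hypothetical.
def response(x, z):
    return 10.0 + (2.0 - x) * z   # at x = 2 the response is insensitive to z

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(200)]   # uncontrollable variation

for x in (0.0, 2.0):
    ys = [response(x, z) for z in noise]
    print(x, round(statistics.pstdev(ys), 2))
# The robust setting x = 2 gives zero spread despite the noise.
```

Both settings have the same mean response (10.0); robust parameter design prefers the setting whose response varies least under noise.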

The Taguchi experimental design is a special subset of the fractional factorial design. A full factorial design is normally easy to construct. The methods for constructing fractional factorial designs are, however, complex, and consistency between designs is not guaranteed with traditional methods. Taguchi constructed a special set of orthogonal arrays (OA) to assist in designing fractional factorial experiments. By combining orthogonal Latin squares in a unique manner, Taguchi prepared a new set of standard OAs. The arrays published by Taguchi ensure that experimenters will design almost identical experiments. The techniques used to analyze the data from a Taguchi experiment are exactly the same as those used for any fractional factorial design. The first step in the Taguchi experimental design is to determine the number of parameters that need to be evaluated. From this an appropriate OA is selected and the experiments are performed [85].

The convention for naming the fractional factorial orthogonal arrays is:

La(b^c)

where
a = number of experimental runs
b = number of levels for each factor
c = number of columns in the array

Taguchi used the term parameter design for experiments aimed at selecting the optimal levels of the control factors, at studying their individual effects, and at studying the interaction between control and noise factors; the current terminology for such designs is robust design.
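The naming convention can be made concrete with the smallest standard array, L4(2^3): a = 4 runs, b = 2 levels, c = 3 columns. The short Python sketch below writes out this array (from the usual published tables) and checks the two defining properties of an orthogonal array.

```python
from itertools import combinations

# The standard L4(2^3) orthogonal array: 4 runs, 3 columns, 2 levels.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Balance: each level appears equally often in every column.
for col in range(3):
    column = [row[col] for row in L4]
    assert column.count(1) == column.count(2) == 2

# Orthogonality: every pair of columns contains each of the four level
# combinations (1,1), (1,2), (2,1), (2,2) exactly once.
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in L4}
    assert len(pairs) == 4

print("L4(2^3) is balanced and orthogonal")
```

A full factorial in three two-level factors would need 2^3 = 8 runs; the L4 array studies the three main effects in only 4.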

The fundamental principle of robust design is to improve the quality of a product by minimizing the effect of the causes of variation without eliminating the causes themselves. This is achieved by optimizing the product and process designs so that the performance is minimally sensitive to the various causes of variation, a process called parameter design. The robustness strategy uses five primary tools:

1. P-diagram (Figure 4.2), used to classify the variables associated with the product into signal (input), control, noise, and response (output) factors.

[Figure 4.2: Parameter diagram (P-diagram) of a process/system. Signal factors, control factors and noise factors act on the product/process system to produce the responses.]

2. Orthogonal arrays, used for gathering dependable information about the control factors (design parameters) with a small number of experiments.

3. Ideal function, used to mathematically specify the ideal form of the signal-response relationship, as embodied by the design concept, for making the higher-level system work perfectly.

4. Signal-to-noise ratio, used for predicting the field quality through laboratory experiments.

5. Quadratic loss function (also known as the quality loss function), used to quantify the loss incurred by the user due to deviation from the target performance.

4.3.1 Orthogonal array

The Taguchi design of experiments makes use of orthogonal arrays (OA) to help design the experiment. It is a systematic approach to the investigation of a system (product or process) through a series of structured experimental runs in which planned changes are made to the input variables and their effects on the response variable are analyzed. By combining orthogonal Latin squares in a unique manner, Taguchi prepared a set of common orthogonal arrays. These are tables of numbers, each of which can be used to lay out experiments for a number of experimental situations. Orthogonal arrays make it possible to carry out fractional factorial experiments, avoiding numerous experimental runs and providing shortcuts for optimizing factors. The orthogonal array is determined by the number of factors and levels considered in the process. An orthogonal array selector assists in determining how many experimental runs are required and what the factor levels are for each parameter in each run or trial. Appropriate orthogonal array selection depends on the total degrees of freedom (DOF) in the experiment. The degrees of freedom are defined as the number of comparisons between process parameters that need to be made to determine which level is better and specifically how much better it is; the DOF is the minimal number of comparisons between levels of factors, or interactions, needed to improve the process characteristics. The DOF can be calculated as:

DOF of a factor = (number of levels of the factor) - 1
DOF of a column = (number of levels of the column) - 1
DOF of an array = total of all column DOFs for the array
DOF of an experiment = (total number of results of all trials) - 1

4.3.2 Signal-to-Noise (S/N) ratio

In the Taguchi experimental design, a loss function is used to calculate the deviation between the experimental value and the desired value. The loss function is further transformed into a utility function. The utility function developed by Taguchi is called the signal-to-noise (S/N) ratio, and it measures how far the performance characteristic deviates from the desired value. There are three standard types of S/N ratio, depending on the desired performance response:

Lower the better (for making the system response as low as possible)
Nominal the best (for reducing variability around a target)
Higher the better (for making the system response as high as possible)

The S/N ratio for the i-th performance characteristic in the j-th experiment can be expressed as:

Lower the better (for making the system response as low as possible):

S/N_ij = -10 log10 [ (1/n) * sum over k = 1..n of (y_ijk)^2 ]    (1)

Higher the better (for making the system response as high as possible):

S/N_ij = -10 log10 [ (1/n) * sum over k = 1..n of 1/(y_ijk)^2 ]    (2)

where
n = the number of tests
y_ijk = experimental value of the i-th performance characteristic in the j-th experiment at the k-th test.

The properties of an ideal signal-to-noise metric are as follows:
1) The S/N ratio reflects the variability in the response of a system caused by noise factors.
2) The S/N ratio is independent of the adjustment of the mean.
3) The S/N ratio measures relative quality, since it is used for comparative purposes.

The objective of the S/N analysis is to determine the optimum set of operating conditions from the variations of the influencing factors within the results. The results of the experiments are analyzed to achieve the following objectives:

To establish the best or optimal condition for the product or process.
To establish the contribution of individual factors.
To estimate the response under the optimal conditions.

A commonly applied statistical treatment for the analysis of experimental results is Analysis of Variance (ANOVA). ANOVA is used to analyze the results of the OA experiment in product design and to determine how much variation each quality-influencing factor has contributed. ANOVA is a statistical technique that identifies the factors significantly affecting the experimental results. It consists of the following steps:

(1) Summing squares for the distributions of all characteristic values (experimental data).
(2) Computing the unbiased variance.

(3) Decomposing this total sum into the sums of squares for all factors used in the experiment.
(4) Calculating unbiased variances by dividing the sums of squares for all factors by their DOF.
(5) Calculating the variance ratio F0 by dividing each unbiased variance by the error variance.
(6) Determining which factors significantly affect the experimental results by examining these variance ratios against the error variance.

The application of design of experiments (DOE) using the Taguchi methodology involves the following steps:

1. Planning (evaluating the objectives, the factors involved in the project, etc.).
2. Designing experiments (based on the factors and levels identified in the planning step).
3. Running experiments (the experimental runs are performed on the basis of the designed experiment).
4. Analyzing results (the experimental results are analyzed to provide information on the following):
   a) Main effects.
   b) ANOVA (Analysis of Variance).
   c) Optimum condition.
   d) Conclusions.
5. Running confirmation tests (confirmation experiments are performed to check the repeatability of the results).
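The main-effects step of the analysis can be sketched in Python for a small hypothetical experiment: an L4(2^3) array with made-up response data and n = 2 tests per run, scored with the lower-the-better S/N ratio of equation (1).

```python
import math

# Sketch: lower-the-better S/N analysis of a hypothetical L4(2^3) experiment.
# The response data below are invented purely for illustration.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
results = [(3.1, 3.3), (2.4, 2.6), (4.0, 4.4), (2.0, 2.2)]  # n = 2 tests per run

def sn_lower_the_better(ys):
    # equation (1): S/N = -10 log10( (1/n) * sum of y^2 )
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_lower_the_better(ys) for ys in results]

# Main effect of each factor: average S/N at each of its two levels (each
# level occurs in exactly two runs of L4). Whatever the type of S/N ratio,
# it is always maximized, so the preferred level is the one with larger S/N.
best_levels = []
for factor in range(3):
    mean_sn = {level: sum(s for row, s in zip(L4, sn) if row[factor] == level) / 2
               for level in (1, 2)}
    best_levels.append(max(mean_sn, key=mean_sn.get))

print(best_levels)  # [1, 2, 1] for this made-up data
```

The predicted optimum condition is the treatment combination formed from these best levels, which is then verified by a confirmation run (step 5 above).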

Figure 4.3 shows the detailed flowchart of the steps involved in the Taguchi method, viz. problem identification, brainstorming session, experimental design, experimental run, analysis of results and confirmation runs.

[Figure 4.3: Flowchart of the steps involved in the Taguchi method:
Problem identification ->
Brainstorming session (identify factors, factor levels, possible interactions, objectives) ->
Experimental design (trial experiments, selection of orthogonal array, research design) ->
Experimental run ->
Analysis of results ->
Confirmation runs]
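As a closing numerical check on the ANOVA steps of Section 4.3.2, the sketch below decomposes the total sum of squares for a single factor at two levels. The observations are invented for illustration; real analyses would cover every factor column of the orthogonal array.

```python
# Minimal one-factor ANOVA decomposition, following steps (1)-(5) of
# Section 4.3.2. The repeated observations per level are made-up data.
groups = {1: [3.1, 3.3, 3.0], 2: [2.4, 2.6, 2.5]}   # factor level -> responses

all_y = [y for ys in groups.values() for y in ys]
N = len(all_y)
grand_mean = sum(all_y) / N

# (1) total sum of squares about the grand mean
ss_total = sum((y - grand_mean) ** 2 for y in all_y)

# (3) decompose into the between-level (factor) and error sums of squares
ss_factor = sum(len(ys) * ((sum(ys) / len(ys)) - grand_mean) ** 2
                for ys in groups.values())
ss_error = ss_total - ss_factor

# (4) unbiased variances = sums of squares divided by their DOF
dof_factor = len(groups) - 1        # (number of levels) - 1
dof_error = N - len(groups)
v_factor = ss_factor / dof_factor
v_error = ss_error / dof_error

# (5) variance ratio F0; a large F0 marks the factor as significant
F0 = v_factor / v_error
print(round(ss_factor, 4), round(ss_error, 4), round(F0, 1))
```

The decomposition is exact (ss_total = ss_factor + ss_error); significance at step (6) is judged by comparing F0 with the critical value of the F distribution for (dof_factor, dof_error).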