An Empirical Study to Evaluate a Domain Specific Language for Formalizing Software Engineering Experiments


Marília Freire a,b, Uirá Kulesza a, Eduardo Aranha a, Andreas Jedlitschka c, Edmilson Campos a,b, Silvia T. Acuña d, Marta N. Gómez e

a Federal University of Rio Grande do Norte, Department of Informatics and Applied Mathematics, Natal, Brazil
b Federal Institute of Rio Grande do Norte, Natal, Brazil
c Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany
d Universidad Autónoma de Madrid, Madrid, Spain
e Universidad San Pablo-CEU, Madrid, Spain

{marilia.freire,edmilsoncampos}@ppgsc.ufrn.br, {uira,eduardo}@dimap.ufrn.br, andreas.jedlitschka@iese.fraunhofer.de, silvia.acunna@uam.es, mn.gomez@usp.ceu.es

Abstract

Research on the formalization and conduction of controlled experiments in software engineering has produced important insights and guidelines for conducting an experiment. However, the computational support to formalize and execute controlled experiments still requires deeper investigation. In this context, this paper presents an empirical study that evaluates a domain-specific language proposed to formalize controlled experiments in software engineering. The language is part of an approach that allows the generation of executable workflows for the experiment participants according to the statistical design of the experiment. Our study involves the modeling of eight software engineering experiments to analyze the completeness and expressiveness of our domain-specific language when specifying different controlled experiments. The results highlighted several limitations that affect the formalization and execution of experiments. These outcomes were used to extend the evaluated domain-specific language. Finally, the improved version of the language was used to model the same experiments to show the benefits of the improvements.

I. INTRODUCTION

The conduction of controlled experiments in software engineering has increased over the last years [1] [2]. Experiments and their replications are considered essential to produce scientific evidence and to enable causal-effect analysis, improving the knowledge about the benefits and limitations of new and existing software engineering (SE) methods, theories and technologies. In addition, they also promote the sharing of this knowledge inside the SE community. Consequently, controlled experiments can accelerate the development and evaluation of effective scientific innovations produced by academia or industry. Over the last decade, the community has discussed how to better support the application of controlled experiments in SE research. These studies have proposed guidelines to report controlled experiments [3], conceptual frameworks to guide the replication of controlled experiments [4], and environments/tools to support their conduction and replication [5] [6] [7]. Although these studies brought important insights and outcomes related to the conduction of controlled experiments, few of them [7] [8] [9] [10] [11] have proposed to formalize the planning, execution and analysis of controlled experiments. In addition, the existing approaches that aim at formalizing the specification of controlled experiments (such as ontologies or domain-specific languages) and/or at providing support for their execution and analysis have not been evaluated. Therefore, this subject still requires deeper investigation.
In this context, this paper describes an empirical study conducted to evaluate a domain-specific language, proposed by our research group [8] [12], that supports the modeling and execution of SE controlled experiments. In our study, we modeled several controlled experiments with the objective of analyzing the completeness and expressiveness of the domain-specific language (DSL) and the supporting environment of our approach. These criteria were analyzed based on the specification of eight different experiments and considering fundamental experimental aspects documented by the experimental software engineering community [1] [2] [3]. We present the results of our analysis and illustrate how they have been used to improve the evaluated domain-specific language.

This article is organized as follows. Section II describes the study settings. Section III discusses the evaluation results, as well as the new extensions proposed for the evaluated DSL. Sections IV and V present, respectively, the threats to validity and related work. Finally, Section VI presents conclusions and directions for future work.

II. STUDY SETTINGS

This section presents the study settings in terms of: its main goal and research question (Section II.A), the approach being evaluated (Section II.B), the study methodology (Section II.C), and the evaluation criteria (Section II.D).

A. Study Goal and Research Question

The main goal of this study is to validate the experimental domain-specific language with respect to the modeling of SE controlled experiments from the perspective of the experimenters. To achieve this goal, a research question (RQ) was defined: Are the DSL abstractions of the approach adequate to model the different aspects of SE controlled experiments? In order to answer this question, our study investigated how the different specification aspects of controlled experiments could be addressed by our DSL. Different and complementary criteria were adopted to analyze the completeness and expressiveness of the domain-specific language during the modeling of a set of existing experiments.

Table 1: Experiments modeled in our study.

  Experiment           Institution-Country
  Testing [17]         UFPE-Brazil
  Human Factors [14]   UAM-Spain
  Requirements [15]    UPM-Spain
  SPrL [20]            UFRN-Brazil
  MDD                  UPV-Spain
  SPL [21]             PUC-Rio-Brazil
  CFT [18]             Fraunhofer-IESE-Germany
  PBR [19]             University of Maryland-USA

B. Evaluated Domain Specific Language

Our study focuses on the assessment of a DSL that is part of a model-driven approach for supporting the formalization and execution of experiments in software engineering [8] [12]. The approach consists of: (i) a DSL, called ExpDSL, used to describe the process and statistical design of controlled experiments; (ii) model transformations that generate workflow models specific to each experiment participant, according to the experiment's design; and (iii) a workflow execution environment that guides and monitors the participants' activities during the experiment execution, including the gathering of participant feedback during this process.

ExpDSL is a textual DSL composed of four main parts/views: process view, metric view, experimental plan view, and questionnaire view. Each view allows defining the experiment aspects as follows: (i) the Process View defines the activities of data collection from the experiment participants; it is similar to a software process language, where we can define the activities, tasks, artifacts, and roles involved in the process; (ii) the Metric View defines the metrics that have to be collected during the experiment execution; each metric intercepts process activities or tasks in order to quantify observations of interest (dependent variables, etc.) in the study; (iii) the Experimental Plan View defines the experimental plan by identifying the factors to be controlled (treatments, noise variables, etc.) and the statistical design to be used (allocation of treatments, participants and experimental material); and (iv) the Questionnaire View defines questionnaires to collect quantitative and qualitative data from the participants of the experiment; these questionnaires can be applied before or after all the activities of the experiment process.

Figure 1: ExpDSL fragment before extensions.

Fig. 1 shows fragments of the Testing experiment specification [17] using the original version of our DSL (ExpDSLv1). It is a controlled experiment that compares two different black-box manual test design techniques: a generic technique and a product-specific technique. The study evaluates those techniques from the point of view of the test execution process. In the specification, we can see an Experimental Plan View (Fig. 1 - A, B, C, D, E), a Process View fragment (Fig. 1 - F), a Metric View fragment (Fig. 1 - G) and a Questionnaire View fragment (Fig. 1 - H). Section III presents and discusses those fragments in the context of our study.
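Since Fig. 1 itself is not reproduced here, the following Python sketch gives a rough picture of how the four views fit together in one specification. It is our own illustration: the class and attribute names are assumptions and do not reflect the concrete ExpDSL grammar.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative data model only: names are assumptions, not the ExpDSL grammar.

@dataclass
class Task:
    name: str
    role: str                                   # e.g. "Tester"
    artifacts: List[str] = field(default_factory=list)

@dataclass
class Process:                                  # Process View: activities/tasks, roles, artifacts
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Metric:                                   # Metric View: intercepts activities or tasks
    name: str
    kind: str                                   # "time" or "artifact" in ExpDSLv1
    target: str                                 # activity/task or artifact being quantified

@dataclass
class ExperimentalPlan:                         # Experimental Plan View: factors and design
    factors: Dict[str, List[str]]               # e.g. {"Technique": ["Generic", "Specific"]}
    design: str                                 # "CRD", "RCBD" or "LS" in ExpDSLv1

@dataclass
class Questionnaire:                            # Questionnaire View: applied before or after the process
    name: str
    moment: str                                 # "pre" or "post"
    questions: List[str] = field(default_factory=list)

@dataclass
class Experiment:                               # one specification groups the four views
    name: str
    processes: List[Process]
    metrics: List[Metric]
    plan: ExperimentalPlan
    questionnaires: List[Questionnaire]
```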

Figure 2: ExpDSL fragment after extensions.

The experiment modeled in ExpDSL is submitted, in our approach, to a set of transformations (model to model and model to text) that generate workflow models for the participants according to the statistical design of experiment (DoE). Finally, these workflow models can be executed in a workflow engine (a web application) that guides and monitors the participants during the experiment. For additional details of our approach, please refer to [12].
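To make this transformation step more concrete, the sketch below shows how treatments could be allocated according to a declared design and turned into a per-participant task list. This is a minimal illustration of the idea, not the approach's actual model transformations; all function, participant and activity names are our own assumptions.

```python
import random
from typing import Dict, List, Optional

# Sketch: allocate treatments per the declared design, then emit one ordered
# task list per participant. Names and example data are illustrative only.

def allocate(design: str, participants: List[str], treatments: List[str],
             blocks: Optional[List[List[str]]] = None, seed: int = 42) -> Dict[str, str]:
    rng = random.Random(seed)
    if design == "CRD":                         # completely randomized design
        return {p: rng.choice(treatments) for p in participants}
    if design == "RCBD":                        # each treatment once per block, randomized within it
        allocation = {}
        for block in blocks or []:
            order = treatments[:]
            rng.shuffle(order)
            allocation.update(dict(zip(block, order)))
        return allocation
    raise ValueError(f"design not handled in this sketch: {design}")

def participant_workflow(treatment: str, processes: Dict[str, List[str]]) -> List[str]:
    """Ordered activities of the process linked to the allocated treatment."""
    return list(processes[treatment])

subjects = ["S1", "S2", "S3", "S4"]
allocation = allocate("CRD", subjects, ["Generic", "Specific"])
processes = {"Generic": ["Design tests", "Execute tests", "Report CRs"],
             "Specific": ["Read feature spec", "Design tests", "Execute tests", "Report CRs"]}
for subject, treatment in allocation.items():
    print(subject, treatment, participant_workflow(treatment, processes))
```

In the actual approach, this role is played by the model-to-model and model-to-text transformations and by the workflow engine described above.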
C. Study Methodology

Our study was organized in three main phases: (i) the selection and specification of different experiments using the evaluated version of ExpDSL, herein called ExpDSLv1; (ii) the evaluation of each modeled experiment against the study criteria (Section II.D); and (iii) the analysis, discussion, and proposal of improvements for ExpDSL and the approach considering the study results. We selected 2 quasi-experiments and 6 controlled experiments with different statistical designs, executed and documented by the software engineering community. We also considered experiments from different research groups and software development areas (requirements, model-driven development, software product lines, testing, human factors). Finally, the selection also took into consideration the availability of information about the experiment planning and conduction. Table 1 lists the experiments used in our study. The specification of the experiments in ExpDSL is available at:

Our team (the authors of this paper) modeled the different aspects of the experiments using ExpDSLv1, based on their available documentation. For most of the experiments, we also interacted with the researchers that conducted them. In the last phase of our study, we analyzed the collected results for each specified experiment and evaluated how the completeness and expressiveness criteria were addressed by the approach. During this analysis, we investigated, for each criterion: (i) the reasons for the (non-)adequate specification of each specific aspect of the experiment; and (ii) which improvements could be applied to the DSL to enable a better modeling of the identified aspects. Finally, we implemented these improvements and re-modeled the same experiments using the improved version of ExpDSL (herein called ExpDSLv2) to highlight the achieved results. Fig. 2 shows fragments of new or modified elements in ExpDSLv2.

D. Adopted Criteria

Our study adopted two main assessment criteria used for evaluating DSLs [13]: completeness (all concepts of the domain can be expressed in the DSL) and expressiveness (the degree to which a problem-solving strategy can be mapped into a program naturally). In particular, in this study we evaluated the orthogonality aspect of expressiveness [13], which states that each DSL construct is used to represent exactly one distinct concept in the domain.

For the completeness analysis, we investigated how the different chosen experiments could be properly specified using ExpDSLv1. The following aspects were assessed during the specification: goals, hypotheses, sub-hypotheses, design of experiment (DoE), independent variables (controlled variables), dependent variables, metrics, measurement instruments, characterization/contextualization, data collection procedure, experimental roles, statistical analysis technique, and questionnaires. Our analysis considered three levels regarding the completeness of the specification: (i) supported: it was possible to specify the aspect; (ii) partially supported: the specification required adjustments to the modeling of the experiment aspect; and (iii) not supported: the DSL could not specify that experiment aspect.

The expressiveness analysis involved verifying whether each ExpDSLv1 construct was used to specify exactly one concept during the specification of the chosen experiments. Thus, if there were constructs being used to specify different abstractions of the experiments, our analysis consisted of identifying such scenarios, which indicate a lack of expressiveness of the DSL.

III. ANALYSIS OF THE RESULTS

A. General Results

The modeling of controlled experiments in the study revealed that the investigated DSL successfully addressed most of the evaluated criteria. The completeness analysis showed that 60% of the experimental aspects of the modeled experiments were supported, 15% were partially supported, and 25% were not supported. The expressiveness analysis, on the other hand, revealed that only one ExpDSLv1 construct, the Metric element, was used to specify three different concepts of the specified experiments.

Regarding the completeness of the modeled experimental aspects, about 60% were successfully satisfied. On the other hand, the results also show that a high percentage of experimental aspects was only partially supported (15%) or not supported (25%) by our approach. This demanded an investigation of how we could improve ExpDSLv1 and our approach to allow their adequate specification. There were a few cases where an experiment aspect was not specified because it was not part of the experiment (n/a, not applicable). Only three concepts were not supported for all modeled experiments, namely the dependent variables, the measurement instruments, and the statistical analysis technique. These results motivated us to propose ExpDSLv1 extensions to include: (i) a DependentVariable element that relates this concept with hypotheses and metrics, allowing traceability among them and facilitating the conduction of meta-analyses in the future; and (ii) a statistical analysis element that allows documenting the statistical test to be used for each experiment hypothesis.

There were also some scenarios where experimental aspects (such as metrics and experimental roles) were only partially satisfied during the DSL completeness assessment. The Metric element, for example, only allows specifying metrics regarding the execution time of activities/tasks and metrics that are quantified based on produced artifacts. This result also motivated the extension of ExpDSL to specify a new kind of metric related to any data informed by the user during an activity/task execution of the experiment process. Section III.C shows how we have improved the expressiveness of the language by extending the metric concept. It also discusses the lack of expressiveness of ExpDSLv1 to specify the design of experiment (DoE) and the statistical analysis technique, and how language users can deal with that. The next sections detail the results of the study regarding the modeling of the different experimental aspects, presenting limitations of our approach and proposing new improvements.

B. Completeness Results and DSL Improvements

This section describes the main results of our completeness analysis considering the different experimental aspects that were modeled in ExpDSLv1. For the most affected elements, we show and discuss the results obtained when modeling the experiments. In addition, we also describe how those results have been used to propose improvements and new extensions for ExpDSL (ExpDSLv2).

1) Hypotheses

ExpDSLv1 allows the definition of statistical hypotheses for the experiment through the Hypothesis element. In our study, it was possible to model all hypotheses of the experiments using this language element. Fig. 1 (B) shows the hypothesis specification for the Testing experiment. To enable automatic analysis of the hypotheses after the experiment execution, we propose a more formal hypothesis definition, which allows specifying an expression that defines the hypothesis by relating the factor/levels with a dependent variable. Fig. 2 (A) shows the new formalization of hypotheses after the language extension. Each textual hypothesis can be decomposed into one or more formal hypotheses, which associate the Factor or Treatment with the dependent variable.
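As an illustration of what such a formal hypothesis amounts to, the sketch below renders it as an expression over two treatments of a factor and a dependent variable. The class, factor and relation names are our own assumptions, not ExpDSLv2 syntax; only the dependent variable TestExecutionTime is taken from the Testing experiment.

```python
from dataclasses import dataclass

# Illustrative only: decomposing a textual hypothesis into a formal one that
# ties a factor's treatments to a dependent variable.

@dataclass
class FormalHypothesis:
    name: str                 # e.g. "H1_0"
    dependent_variable: str   # e.g. "TestExecutionTime"
    factor: str               # assumed factor name for illustration
    treatments: tuple         # e.g. ("Generic", "Specific")
    relation: str             # "==" for the null hypothesis, "!=", "<" or ">" otherwise

    def expression(self) -> str:
        a, b = self.treatments
        return (f"mean({self.dependent_variable} | {self.factor}={a}) "
                f"{self.relation} mean({self.dependent_variable} | {self.factor}={b})")

h1_null = FormalHypothesis("H1_0", "TestExecutionTime",
                           "TestDesignTechnique", ("Generic", "Specific"), "==")
print(h1_null.expression())
# mean(TestExecutionTime | TestDesignTechnique=Generic) == mean(TestExecutionTime | TestDesignTechnique=Specific)
```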
There are two additional benefits provided by this DSL extension: (i) it contributes to the automation of the statistical analysis of the hypotheses; and (ii) it supports the analysis of variables used across different experiments (meta-analysis). Besides, it allows specifying the hypotheses in a more formal way, associating dependent variables and treatments.

2) Design of Experiment

ExpDSLv1 supports three experimental designs (DoE): (i) completely randomized design (CRD); (ii) randomized complete block design (RCBD); and (iii) Latin square (LS). For this reason, it was not possible to model the designs used in the Human Factors and Requirements experiments, because both experiments used a simple prospective ex post facto quasi-experimental design [14] [15]. The same issue occurs with the PBR experiment, which adopts a factorial design in blocks of size 2. Wrongly choosing one of the supported DoEs of our DSL would cause problems during the model transformation, because the distribution of treatments would not correspond to the real design of these experiments. To allow the modeling of experiments that do not follow any of the supported DoEs, we extended the language in ExpDSLv2 to include the Other option. This change has a direct impact on the experiment configuration. Without knowledge about the experimental design, the transformations of models specified in ExpDSLv2 are not able to automatically randomize and allocate treatments to participants and experimental materials. Therefore, the experimenter has to configure this information manually in the execution environment. Only after this manual configuration does the execution environment instantiate the workflow correctly to start the experiment execution. The choice of the Other DoE also hinders automatic validation of ExpDSLv2 specifications based on statistical design rules and assumptions. Our DSL editor can check, for example, whether the requirement of the Latin square DoE is satisfied: the existence of two blocking variables. Therefore, the Other option increases the language flexibility but limits specification validations.

3) Dependent Variable

In ExpDSLv1, we can define metrics related to dependent variables, but there is no explicit way to specify them. Because of that, we have extended ExpDSLv1 to include the modeling of dependent variables, which can be mapped to: (i) metrics that are associated with the execution time of specific activities or tasks of the experiment; or (ii) metrics that are associated with fields (of process activities/tasks) whose values are informed by the participants or researchers (number of defects found, etc.) during the experiment execution.
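The traceability that this DependentVariable element is meant to provide can be pictured with the following sketch. It is our own illustration: the helper names and example values are assumptions, while the variable and metric names are those of the Testing experiment discussed next.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative traceability model: a dependent variable references the metrics
# that quantify it, so collected values can later be grouped per hypothesis.

@dataclass
class DependentVariable:
    name: str
    description: str
    metrics: List[str] = field(default_factory=list)   # names of time or field metrics

def values_for(variable: DependentVariable,
               collected: Dict[str, List[float]]) -> List[float]:
    """Gather every value collected under any metric of the dependent variable."""
    return [v for metric in variable.metrics for v in collected.get(metric, [])]

time_var = DependentVariable(
    name="TestExecutionTime",
    description="time to execute the tests",
    metrics=["TimeExecution1", "TimeExecution2", "TimeExecution3", "TimeExecution4"],
)
collected = {"TimeExecution1": [12.5, 10.2], "TimeExecution2": [11.7]}   # example data
print(values_for(time_var, collected))   # [12.5, 10.2, 11.7]
```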

Fig. 2 (B) shows the dependent variables specified for the Testing experiment. There are two response variables in this experiment: the number of valid terminated CRs (NumberofCR) and the time to execute the tests (TestExecutionTime). TestExecutionTime, for example, has a description and is associated with the metrics TimeExecution1, TimeExecution2, TimeExecution3, and TimeExecution4. These metrics are defined in the Metric View of the experiment specification.

4) Metrics

All metrics of the experiments were modeled using one of the two kinds of metrics existing in ExpDSLv1: those related to the execution time of activities/tasks and those associated with an artifact. Participants inform the value of a metric by using a text box provided in a web form related to the participant workflow. Another option is to define an ArtifactMetric element, forcing the participant to send a file whose content is only a value (text or number), for example. All these options are workarounds that do not allow associating the informed metric value with the experiment hypotheses or any other specific element. Fig. 1 (G) shows the definition of the metric Reported CRs used in the Testing experiment. In this experiment, the participant is asked to upload a file whose content represents the collected CRs for each reported task. The specification shows the metric defined for the process related to the Specific Technique and Feature 1 (SpecificTecFeature1_Tests). The metric name is ReportedCR1. The output artifacts related to the CR reporting are listed in the artifacts attribute of this metric (artifacts CR SP1_F1_1, CR SP1_F1_2, etc.).

In our study, the specification of different experiments using such strategies indicated the need to provide support for informing the value of a metric associated with an activity/task of the experiment. Such support not only identifies the metric explicitly, but also associates it with the experiment hypotheses, thus contributing to the analysis phase. Therefore, we extended ExpDSLv1 by creating a new Metric type, named DataMetric, that can be used to specify metrics associated with artifacts as well as metrics whose values are collected directly from the experiment subjects and/or researchers. This adaptation also provides an association between a metric and a text or number variable (the collecteddata attribute), which has to be defined during the metric definition and is used and collected in the corresponding activity/task of the experiment process. Fig. 2 (G) shows the specification of the ReportedCR1 metric in ExpDSLv2. It has a description and a collecteddata text attribute named CR. This CR collecteddata represents data that will be gathered during the experiment execution, according to the process specification. Fig. 2 (G) also illustrates that the ReportedCR1 metric is associated with the NumberofCR dependent variable. This improves the traceability between dependent variables, metrics and the associated activity/task fields, facilitating their collection during the experiment execution and their usage during the experiment analysis.

5) Data Collection Procedure

In ExpDSLv1, data collection procedures are specified as a process in the Process View, containing activities, tasks, roles and artifacts. Our DSL does not currently support loops and conditional paths, but almost all the experiments could be modeled as sequential procedures. The semantics of linking the process to the corresponding treatment combination, expressed by the Link element in ExpDSLv1 (Fig. 1 E), was moved to the process definition (Fig. 2 E), so the Link element was removed from the DSL. This element was not used in the quasi-experiment definitions (Human Factors and Requirements).
6) Statistical Analysis Technique

There was no support for defining analysis techniques in the experimental DSL. As the choice of the statistical test depends on the experiment design (DoE), we extended ExpDSLv1 to allow the selection of the statistical analysis technique from a list. Although this information is currently used only to document the experiment, our workflow engine is being extended to support the automatic application and analysis of the statistical tests of the experiments after the collection of the data of interest during the analysis phase. Fig. 2 (D) shows the statistical analysis techniques chosen for the CFT experiment: McNemar to test hypothesis H1, and Wilcoxon to test hypotheses H2 and H3. The current list of possible techniques for testing hypotheses is: Chi-2, Binomial test, t-test, F-test, McNemar test, Mann-Whitney, Paired t-test, Wilcoxon, Sign test, ANOVA, Kruskal-Wallis and, for any other test, Others. More than one type can be specified for each hypothesis.

C. Expressiveness Results

This section describes the analysis of the expressiveness criterion. We mapped each ExpDSLv1 element/construct to its corresponding domain concept in order to evaluate whether each language construct is associated with only one distinct concept in the domain. For ExpDSLv1, we observed that the ArtifactMetric element was used to model metrics related to artifacts, questionnaires, and user data (data collected during the task/activity execution). This means that distinct concepts in the domain were expressed using the same abstraction of the DSL. In order to overcome this limitation, we created the DataMetric element in ExpDSLv2, which provides different ways to specify these three domain concepts: metrics related to artifacts, questionnaires, and collected data. Fig. 2 (G) shows a metric modeled as a collecteddata element in ExpDSLv2. This defectnumber variable is referenced in each activity where the data have to be collected, which means that, during the experiment process execution, the user will be asked to inform this value in that specific activity.

We also observed an expressiveness limitation in ExpDSLv2 related to the language extension that defines the Other type to refer to any type of design of experiment (DoE) besides CRD, RCBD, and Latin square. It improved the completeness of the language (Section III.B), but, on the other hand, it affects the expressiveness of the DSL, because this element can be used to express different concepts of the domain (factorial designs, quasi-experimental designs, within-subjects designs, and so on). In order to improve the language coverage, we opted to accept this decrease in expressiveness because it allows the specification and guided execution of a larger number of experiments. The same expressiveness problem applies to the AnalysisTechniqueType element of ExpDSLv2, which also offers the Other option.
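As an aside on the planned automation of the analysis phase (Section III.B.6), the sketch below shows how a few of the AnalysisTechniqueType choices could be dispatched to standard statistical tests. The use of SciPy, the function names and the example data are our own assumptions; this is not the workflow engine's implementation.

```python
from scipy import stats

# Sketch: map a subset of the listed technique names onto standard tests and
# evaluate a hypothesis once the corresponding samples have been collected.

TESTS = {
    "t-test":         lambda a, b: stats.ttest_ind(a, b),
    "Paired t-test":  lambda a, b: stats.ttest_rel(a, b),
    "Mann-Whitney":   lambda a, b: stats.mannwhitneyu(a, b),
    "Wilcoxon":       lambda a, b: stats.wilcoxon(a, b),
    "Kruskal-Wallis": lambda a, b: stats.kruskal(a, b),
}

def test_hypothesis(technique, sample_a, sample_b, alpha=0.05):
    result = TESTS[technique](sample_a, sample_b)
    return {"technique": technique,
            "p_value": result.pvalue,
            "reject_null": result.pvalue < alpha}

# e.g. execution times collected under the two treatments of a hypothesis
print(test_hypothesis("Wilcoxon", [12.1, 10.4, 14.0, 9.8, 11.2],
                                  [9.0, 8.7, 10.1, 8.2, 9.5]))
```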

IV. THREATS TO VALIDITY

One threat to the validity of this study is related to the choice of the experiments to be modeled. This choice defines for which types of experiments the conclusions are valid, restricting the extent to which the results can be generalized. We controlled this threat by selecting real experiments from different areas and with different DoEs. Another threat to the validity of this study is reliability [1], which concerns the extent to which the data and the analysis depend on the specific researchers. We controlled this threat by modeling and validating the experiment models with experimental software engineering researchers from Fraunhofer IESE and Universidad Politécnica de Madrid (UPM), who were not involved in the DSL development.

V. RELATED WORK

Whereas some existing research works explore the development of automated environments to support the conduction of controlled experiments [16], most of them do not describe in detail how the specification of the different experimental aspects is addressed. Only a few research works have presented ways to formalize experiments in software engineering. Garcia et al. [10] present an ontology to describe experiments, with predicates to generate a plan for the experiment. Siy and Wu [11] present an ontology that allows defining an experiment and checking some constraints regarding validity threats to the experimental design, which are extracted from a set of experiments. Cartaxo et al. [9] present a graphical domain-specific language to model experiments and generate a textual plan from them. However, none of these research works provides an environment or tool support that allows interpreting and executing the controlled experiments based on their specifications. In addition, they have not conducted empirical studies to evaluate the completeness and expressiveness of their specification mechanisms (ontologies or DSLs) through the modeling of different experiments, as we have presented in this paper.

VI. CONCLUSIONS AND FUTURE WORK

This paper presented an empirical study that evaluates and improves a DSL supporting the formalization of controlled experiments. In our study, several experiments reported by the software engineering community were specified using the DSL. These specifications were evaluated against the completeness and expressiveness criteria. The results of our study demonstrated the value of the approach but, on the other hand, exposed a series of improvement opportunities. These improvements were discussed and addressed in a new version of the domain-specific language, which was also evaluated in this paper. As future work, we are preparing two new studies: (i) a survey to be applied to experts from the experimental software engineering community to collect feedback about the DSL; and (ii) the usage of the improved version of the experimental DSL by other research groups to conduct controlled experiments, in order to assess the usability and performance of the approach. Finally, we also plan to extend the study presented in this paper to consider other existing criteria adopted in DSL evaluations [13].

ACKNOWLEDGMENTS

This study is supported by the program Ciência sem Fronteiras, a joint initiative of the Ministério da Ciência, Tecnologia e Inovação (MCTI) and the Ministério da Educação (MEC) of Brazil, through CNPq and CAPES.
It is also partially supported by the National Institute of Science and Technology for Software Engineering (INES), funded by CNPq, grants / and /2011-7, and by FAPERN, CETENE and CAPES/PROAP.

REFERENCES

[1] Wohlin et al. Experimentation in Software Engineering: An Introduction. Kluwer Academic Publishers, Boston/Dordrecht/London, 2000.
[2] Juristo, N. and Moreno, A. M. Basics of Software Engineering Experimentation. Kluwer Academic Publishers, Madrid, 2001.
[3] Jedlitschka et al. Reporting Experiments in Software Engineering. In Guide to Advanced Empirical Software Engineering. Springer Science+Business Media, 2008.
[4] Mendonça et al. A Framework for Software Engineering Experimental Replications. IEEE ICECCS, 2008.
[5] Hochstein et al. An Environment for Conducting Families of Software Engineering Experiments. Advances in Computers, 74, 2008.
[6] Sjøberg et al. Conducting Realistic Experiments in Software Engineering. ISESE, 2002.
[7] Travassos et al. An Environment to Support Large Scale Experimentation in Software Engineering. 13th IEEE ICECCS, 2008.
[8] Freire et al. A Model-Driven Approach to Specifying and Monitoring Controlled Experiments in Software Engineering. PROFES, LNCS, Paphos, Cyprus, 2013.
[9] Cartaxo et al. ESEML: Empirical Software Engineering Modeling Language. Workshop on Domain-Specific Modeling, New York, 2012.
[10] Garcia et al. An Ontology for Controlled Experiments on Software Engineering. SEKE, San Francisco, 2008.
[11] Siy, H. and Wu, Y. An Ontology to Support Empirical Studies in Software Engineering. ICCEI, Fullerton, 2009.
[12] Freire, M. A. A Model-Driven Approach to Formalize and Support Controlled Experiments in Software Engineering. Proceedings of the ESEM Doctoral Symposium (ESEM 2013), Baltimore, 2013.
[13] Kahraman, G. and Bilgen, S. A Framework for Qualitative Assessment of Domain-Specific Languages. Software & Systems Modeling (SoSyM), November 2013.
[14] Acuña et al. How do Personality, Team Processes and Task Characteristics Relate to Job Satisfaction and Software Quality? Information and Software Technology (IST), 51(3), 2009.
[15] Aranda et al. In Search of Requirements Analyst Characteristics that Influence Requirements Elicitation Effectiveness: A Quasi-Experiment. INTEAMSE, Madrid, 2012.
[16] Freire et al. Automated Support for Controlled Experiments in Software Engineering: A Systematic Review. SEKE, Boston, USA, 2013.
[17] Accioly et al. Comparing Two Black-box Testing Strategies for Software Product Lines. SBCARS, 2012.
[18] Jung et al. A Controlled Experiment on Component Fault Trees. International Conference on Computer Safety, Reliability and Security (SafeComp), Toulouse, 2013.
[19] Basili et al. The Empirical Investigation of Perspective-Based Reading. 1996.
[20] Aleixo et al. Modeling Variabilities from Software Process Lines with Compositional and Annotative Techniques: A Quantitative Study. PROFES, Paphos, 2013.
[21] Cirilo et al. Configuration Knowledge of Software Product Lines: A Comprehensibility Study. Workshop on Variability & Composition, Porto de Galinhas, 2011.


More information

CHAPTER 3 RESEARCH METHODOLOGY

CHAPTER 3 RESEARCH METHODOLOGY CHAPTER 3 RESEARCH METHODOLOGY 3.1 Introduction 3.1 Methodology 3.1.1 Research Design 3.1. Research Framework Design 3.1.3 Research Instrument 3.1.4 Validity of Questionnaire 3.1.5 Statistical Measurement

More information

Appendix I Teaching outcomes of the degree programme (art. 1.3)

Appendix I Teaching outcomes of the degree programme (art. 1.3) Appendix I Teaching outcomes of the degree programme (art. 1.3) The Master graduate in Computing Science is fully acquainted with the basic terms and techniques used in Computing Science, and is familiar

More information

Not all NLP is Created Equal:

Not all NLP is Created Equal: Not all NLP is Created Equal: CAC Technology Underpinnings that Drive Accuracy, Experience and Overall Revenue Performance Page 1 Performance Perspectives Health care financial leaders and health information

More information

COMMITMENT &SOLUTIONS UNPARALLELED. Assessing Human Visual Inspection for Acceptance Testing: An Attribute Agreement Analysis Case Study

COMMITMENT &SOLUTIONS UNPARALLELED. Assessing Human Visual Inspection for Acceptance Testing: An Attribute Agreement Analysis Case Study DATAWorks 2018 - March 21, 2018 Assessing Human Visual Inspection for Acceptance Testing: An Attribute Agreement Analysis Case Study Christopher Drake Lead Statistician, Small Caliber Munitions QE&SA Statistical

More information

KOM 5113: Communication Research Methods (First Face-2-Face Meeting)

KOM 5113: Communication Research Methods (First Face-2-Face Meeting) KOM 5113: Communication Research Methods (First Face-2-Face Meeting) Siti Zobidah Omar, Ph.D zobidah@putra.upm.edu.my Second Semester (January), 2011/2012 1 What is research? Research is a common activity

More information

Economic and Social Council

Economic and Social Council United Nations Economic and Social Council Distr.: General 13 September 2013 ECE/WG.1/2013/4 Original: English Economic Commission for Europe Working Group on Ageing Sixth meeting Geneva, 25-26 November

More information

Pilot Study: Clinical Trial Task Ontology Development. A prototype ontology of common participant-oriented clinical research tasks and

Pilot Study: Clinical Trial Task Ontology Development. A prototype ontology of common participant-oriented clinical research tasks and Pilot Study: Clinical Trial Task Ontology Development Introduction A prototype ontology of common participant-oriented clinical research tasks and events was developed using a multi-step process as summarized

More information

Towards a Semantic Alignment of the ArchiMate Motivation Extension and the Goal-Question-Metric Approach

Towards a Semantic Alignment of the ArchiMate Motivation Extension and the Goal-Question-Metric Approach Towards a Semantic Alignment of the ArchiMate Motivation Extension and the Goal-Question-Metric Approach Victorio Albani de Carvalho 1,2, Julio Cesar Nardi 1,2, Maria das Graças da Silva Teixeira 2, Renata

More information

TACKLING WITH REVIEWER S COMMENTS:

TACKLING WITH REVIEWER S COMMENTS: TACKLING WITH REVIEWER S COMMENTS: Comment (a): The abstract of the research paper does not provide a bird s eye view (snapshot view) of what is being discussed throughout the paper. The reader is likely

More information

A Concise Guide to Market

A Concise Guide to Market Marko Sarstedt Erik Mooi A Concise Guide to Market Research The Process, Data, and Methods Using IBM SPSS Statistics Second Edition 4^ Springer 1 Introduction to Market Research 1 1.1 Introduction 1 1.2

More information

Protocol analysis and Verbal Reports on Thinking

Protocol analysis and Verbal Reports on Thinking Protocol analysis and Verbal Reports on Thinking An updated and extracted version from Ericsson (2002) Protocol analysis is a rigorous methodology for eliciting verbal reports of thought sequences as a

More information

Transforming OntoUML into Alloy: Towards conceptual model validation using a lightweight formal method

Transforming OntoUML into Alloy: Towards conceptual model validation using a lightweight formal method Transforming OntoUML into Alloy: Towards conceptual model validation using a lightweight formal method Bernardo F.B. Braga, João Paulo A. Almeida, Alessander Botti Benevides and Giancarlo Guizzardi http://nemo.inf.ufes.br

More information

Tackling Random Blind Spots with Strategy-Driven Stimulus Generation

Tackling Random Blind Spots with Strategy-Driven Stimulus Generation Tackling Random Blind Spots with Strategy-Driven Stimulus Generation Matthew Ballance Mentor Graphics Corporation Design Verification Technology Division Wilsonville, Oregon matt_ballance@mentor.com Abstract

More information

Experimental evaluation of an object-oriented function point measurement procedure

Experimental evaluation of an object-oriented function point measurement procedure Information and Software Technology 49 (2007) 366 380 www.elsevier.com/locate/infsof Experimental evaluation of an object-oriented function point measurement procedure Silvia Abrahão a, *, Geert Poels

More information

Emergency Monitoring and Prevention (EMERGE)

Emergency Monitoring and Prevention (EMERGE) Emergency Monitoring and Prevention (EMERGE) Results EU Specific Targeted Research Project (STREP) 6. Call of IST-Programme 6. Framework Programme Funded by IST-2005-2.6.2 Project No. 045056 Dr. Thomas

More information

Systematic Mapping Studies

Systematic Mapping Studies Systematic Mapping Studies Marcel Heinz 23. Juli 2014 Marcel Heinz Systematic Mapping Studies 23. Juli 2014 1 / 44 Presentation Overview 1 Motivation 2 Systematic Mapping Studies 3 Comparison to Systematic

More information

Comparing the Effectiveness of Equivalence Partitioning, Branch Testing and Code Reading by Stepwise Abstraction Applied by Subjects

Comparing the Effectiveness of Equivalence Partitioning, Branch Testing and Code Reading by Stepwise Abstraction Applied by Subjects 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation Comparing the Effectiveness of Equivalence Partitioning, Branch Testing and Code Reading by Stepwise Abstraction

More information

Who? What? What do you want to know? What scope of the product will you evaluate?

Who? What? What do you want to know? What scope of the product will you evaluate? Usability Evaluation Why? Organizational perspective: To make a better product Is it usable and useful? Does it improve productivity? Reduce development and support costs Designer & developer perspective:

More information

MODULE 3 APPRAISING EVIDENCE. Evidence-Informed Policy Making Training

MODULE 3 APPRAISING EVIDENCE. Evidence-Informed Policy Making Training MODULE 3 APPRAISING EVIDENCE Evidence-Informed Policy Making Training RECAP OF PREVIOUS DAY OR SESSION MODULE 3 OBJECTIVES At the end of this module participants will: Identify characteristics of basic

More information

Unifying Data-Directed and Goal-Directed Control: An Example and Experiments

Unifying Data-Directed and Goal-Directed Control: An Example and Experiments Unifying Data-Directed and Goal-Directed Control: An Example and Experiments Daniel D. Corkill, Victor R. Lesser, and Eva Hudlická Department of Computer and Information Science University of Massachusetts

More information

March 2010, 15 male adolescents between the ages of 18 and 22 were placed in the unit for treatment or PIJ-prolongation advice. The latter unit has

March 2010, 15 male adolescents between the ages of 18 and 22 were placed in the unit for treatment or PIJ-prolongation advice. The latter unit has Weeland, J., Mulders, L.T.E., Wied, M. de, & Brugman, D. Process evaluation study of observation units in Teylingereind [Procesevaluatie Observatieafdelingen Teylingereind]. Universiteit Utrecht: Vakgroep

More information

BPMN Business Process Modeling Notations

BPMN Business Process Modeling Notations BPMN Business Process Modeling Notations Hala Skaf-Molli Hala.Skaf@univ-nantes.fr http://pagesperso.lina.univ-nantes.fr/~skaf-h References BBMN January 2011: http://www.omg.org/spec/bpmn/2.0 (538 pages)

More information