TeamHCMUS: Analysis of Clinical Text


Nghia Huynh
Faculty of Information Technology, University of Science, Ho Chi Minh City, Vietnam
huynhnghiavn@gmail.com

Quoc Ho
Faculty of Information Technology, University of Science, Ho Chi Minh City, Vietnam
hbquoc@fit.hcmus.edu.vn

Abstract

We developed a system to participate in the shared task on the analysis of clinical text. Our system combines machine learning-based and rule-based approaches: we applied the machine learning-based approach to Task 1, disorder identification, and the rule-based approach to Task 2, template slot filling for the disorder. In Task 1, we built a supervised conditional random fields model based on a rich feature set and used it to predict disorder mentions. In Task 2, we built a rule set based on the dependency tree; this rule set was extracted from the training data and applied to fill the values of the disorder attribute types on the test data. The evaluation on the test data showed that our system achieved an F-score of 0.656 (0.685 under relaxed scoring) for Task 1, an F*WA of 0.576 for Task 2a, and an F*WA of 0.671 for Task 2b.

1 Introduction

SemEval-2015 Task 14 is a continuation of previous tasks such as the CLEF eHealth Evaluation Labs 2013 [1] (Suominen et al., 2013), the CLEF eHealth Evaluation Labs 2014 [2] (Kelly et al., 2014), and SemEval-2014 Task 7 [3] (Pradhan et al., 2014). The aim of these tasks is to improve natural language processing (NLP) methods for the clinical domain and to introduce clinical text processing more widely to the NLP research community.

[1] https://sites.google.com/site/shareclefehealth/
[2] http://clefehealth2014.dcu.ie/
[3] http://alt.qcri.org/semeval2014/task7/

The clinical narrative is abundant in mentions of clinical conditions, anatomical sites, medications, and procedures. It is completely different from the newswire domain, where text is dominated by mentions of countries, locations, and people. Many surface forms represent the same concept. Unlike the general domain, biomedicine is rich in lexical and ontology resources that can be leveraged when applications are built.

SemEval-2015 Task 14 is split into two tasks: 1) Task 1, disorder identification, whose goal is to recognize the spans of disorder mentions (named entity recognition) and to normalize each mention to a unique CUI in the SNOMED-CT terminology in a set of clinical notes; SNOMED-CT is a resource provided by the organizers for the normalization in Task 1. 2) Task 2, disorder slot filling, which focuses on identifying the normalized values of nine modifiers of a disorder mentioned in a clinical note: the CUI of the disorder (much like Task 1) as well as its attributes (negation indicator, subject, uncertainty indicator, course, severity, conditional, generic indicator, and body location). Participants could submit to either or both of the tasks; we participated in both. In this paper, we describe a combined machine learning and rule-based approach to the two tasks.

2 Our Approach

2.1 Data Analysis

The Organizing Committee provided a data set consisting of one set of training data (train) and one set of development data (devel). The training set contains 298 files, including radiology reports, discharge summaries, and ECG/ECHO reports. The development set contains 133 files, all of which are discharge summaries. Processing the data shows that disorder mentions are represented in three forms: 1) a disorder consisting of a continuous sequence of words (Form 1); 2) a disorder split into two separate chunks (Form 2); and 3) a disorder split into three separate chunks (Form 3). Figure 1 illustrates the three forms. Statistics on how often each form appears in the training and development sets are shown in Table 1.

                     Form 1    Form 2    Form 3    Total
Train   #disorder    10077     1028      62        11167
        Percentage   91.8%     7.9%      0.3%
Devel   #disorder    7374      608       16        7998
        Percentage   92.2%     7.6%      0.2%
Table 1. Number and percentage of disorders of each form in the training and development data.

Form 1: The rhythm appears to be atrial fibrillation.
Form 2: The left atrium is moderately dilated.
Form 3: Heart: VI systolic murmur, irregular rate and rhythm.
Figure 1. Examples of the disorder representable forms.

These analysis results helped us develop a more effective approach to extracting disorders.

2.2 Disorder Identification

For disorder identification, the system follows a machine learning approach. The training data are converted into the BIO format, in which each word is assigned one of three labels: B marks the beginning of a disorder, I the inside of a disorder, and O the outside of a disorder. These labels work only when a disorder consists of consecutive words (Form 1); they cannot handle disorders with non-consecutive words (Form 2 or Form 3), as described in Section 2.1. We therefore developed different strategies for disorders with consecutive and non-consecutive words. For disorders with consecutive words, we labeled words using the traditional BIO tags. For discontinuous disorder mentions, we created two additional tag sets: 1) {B2, I2}, assigned to the words of a disorder with two separate chunks (Form 2); and 2) {B3, I3}, used to label a disorder with three separate chunks (Form 3). Figure 2 shows examples of labeling disorders with consecutive and non-consecutive words using the new tag sets. In this approach, we assign one of seven tags {B, I, O, B2, I2, B3, I3} to each word; the disorder identification problem is thus converted into a classification problem that assigns one of the seven labels to each word.

Form 1: The/O rhythm/O appears/O to/O be/O atrial/B fibrillation/I ./O
Form 2: The/O left/B2 atrium/I2 is/O moderately/O dilated/I2 ./O
Form 3: Heart/B3 :/O VI/O systolic/O murmur/O ,/O irregular/I3 rate/O and/O rhythm/I3 ./O
Figure 2. Examples of labeling consecutive and non-consecutive disorder words.

The machine learning algorithms and feature set offered by the Stanford Named Entity Recognizer [4] were used, and Stanford CoreNLP [5] was used to split sentences and tokenize the training and test data. In addition, some simple rules were used for labeling disorder words, i.e., {B, B2, B3} are assigned to the begin-token of a disorder and {I, I2, I3} to its inside-tokens, as indicated in Figure 2. The Stanford NER tool and the feature set offered by Stanford NLP were used to build a supervised conditional random fields model on the training data. Then, this model was used to assign a label to each token in the test data.

[4] http://nlp.stanford.edu/software/crf-ner.shtml
[5] http://nlp.stanford.edu/software/corenlp.shtml
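To make the seven-label scheme concrete, the following is a minimal sketch (not the system's actual code) of how annotated disorder mentions, possibly discontinuous, could be converted into {B, I, O, B2, I2, B3, I3} labels. The input format assumed here (a token list plus, for each disorder, its token indices grouped into contiguous chunks) is an assumption made for the example.

```python
# Minimal sketch: convert disorder annotations into the {B, I, O, B2, I2, B3, I3}
# scheme described in Section 2.2. One chunk = Form 1, two chunks = Form 2,
# three chunks = Form 3.

def label_sentence(tokens, disorders):
    labels = ["O"] * len(tokens)
    for chunks in disorders:                    # e.g. [[1, 2], [5]] for a 2-chunk mention
        suffix = "" if len(chunks) == 1 else str(len(chunks))
        first = True
        for chunk in chunks:
            for idx in chunk:
                if first:
                    labels[idx] = "B" + suffix  # begin-token of the disorder
                    first = False
                else:
                    labels[idx] = "I" + suffix  # inside-token of the disorder
    return labels

# Form 2 example from Figure 2: "The left atrium is moderately dilated ."
tokens = ["The", "left", "atrium", "is", "moderately", "dilated", "."]
disorders = [[[1, 2], [5]]]                     # "left atrium ... dilated"
print(list(zip(tokens, label_sentence(tokens, disorders))))
# [('The','O'), ('left','B2'), ('atrium','I2'), ('is','O'),
#  ('moderately','O'), ('dilated','I2'), ('.','O')]
```

The example reproduces the Form 2 labeling shown in Figure 2.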

Some of our rules were built to identify disorders: for each sentence, we identified each disorder in turn based on the label sets {B, I}, {B2, I2}, and {B3, I3}.

For disorder normalization to a unique CUI in the UMLS/SNOMED-CT terminology, we extracted a list of the annotated disorder entities and their CUIs from the training and development data. This list was the primary search source for each recognized disorder entity. When a disorder was not found in the list, we used MetaMap [6] (Rogers, 2013) and UMLS [7] to continue the search. When no CUI could be determined, the disorder was marked as CUI-less.

[6] http://metamap.nlm.nih.gov/javaapi.shtml
[7] https://uts.nlm.nih.gov/home.html

2.3 Disorder Slot Filling

Huynh et al. (2014) developed a system to participate in Task 2 of the CLEF eHealth Evaluation Labs 2014. They used rule-based and machine learning methods for the task of disease/disorder template filling, and their system achieved an accuracy of 0.827.

Our system was developed using a rule-based approach. The rules are based on the representation of the dependency tree: a rule is established when there is a path on the dependency tree from the node containing the disorder to the node containing the cue word. Each attribute has its own rule set and is handled differently because of how the data are represented. For example, consider filling values for the Uncertainty Indicator (UI) attribute in the segment illustrated in Figure 3: there are three disorders, Congestive heart failure, Coronary artery disease, and Aortic valve disease, and the cue word for all of them is Indication. The segment is split into three sentences, as shown in Figure 4, and Sent 2 and Sent 3 lose the cue word information; in those sentences it is then impossible to determine the normalized value of the attribute from the dependency tree.

Indication: Congestive heart failure. Coronary artery disease. Aortic valve disease.
Figure 3. Example of a text segment in a discharge summary.

Sent 1: Indication: Congestive heart failure.
Sent 2: Coronary artery disease.
Sent 3: Aortic valve disease.
Figure 4. Result of splitting the text segment into sentences.

The attributes Negation Indicator, Subject Class, Uncertainty Indicator, Course Class, Severity Class, Conditional Class, and Generic Class are processed with the same method, as follows. From the training and development data, the system extracts a list of disorders and triggers, and, for each attribute, lists of normalized values and cue words. Every trigger list consists of two columns: the first column contains the normalized value, and the second column contains the cue word of the respective disorder slot. The lists of disorders and triggers are the input parameters used to define the rule sets based on the dependency tree. Figure 5 illustrates the dependency tree of the sentence "Gastric lavage showed maroon/black but no fresh blood", in which blood is a disorder, no is the cue word of blood, and the normalized value of blood to be determined is yes. A rule for the Negation Indicator attribute type is set up as follows: ({relation = neg} {governor = blood} {dependent = no}) -> (blood: yes). Each attribute has its own separate rule set.

Figure 5. The dependency tree of the sentence "Gastric lavage showed maroon/black but no fresh blood".

The Disorder CUI attribute type is analyzed in the same way as the normalization of disorders described above.
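As a rough illustration only, the normalization cascade referred to above (training/development lexicon first, then MetaMap/UMLS, otherwise CUI-less) could be sketched as follows. The dictionary and the metamap_lookup helper are hypothetical stand-ins for the example, not the actual system components.

```python
# Hedged sketch of the CUI normalization cascade from Sections 2.2-2.3.
# `training_lexicon` and `metamap_lookup` are hypothetical placeholders for the
# lexicon extracted from the training/development data and a MetaMap/UMLS query.

def normalize_to_cui(mention, training_lexicon, metamap_lookup):
    key = mention.lower().strip()
    # 1) Primary source: disorders annotated with CUIs in the training/devel data.
    if key in training_lexicon:
        return training_lexicon[key]
    # 2) Fall back to MetaMap / UMLS for mentions not in the lexicon.
    cui = metamap_lookup(mention)
    if cui is not None:
        return cui
    # 3) No CUI found: mark the disorder as CUI-less.
    return "CUI-less"

# Example usage with toy stand-ins.
lexicon = {"atrial fibrillation": "C0004238"}
print(normalize_to_cui("Atrial fibrillation", lexicon, lambda m: None))  # C0004238
print(normalize_to_cui("fresh blood", {}, lambda m: None))               # CUI-less
```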

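Similarly, the dependency-based rules of Section 2.3, such as the Negation Indicator rule above, can be made concrete with a short sketch. This is a hedged illustration only: the (relation, governor, dependent) triple format and the apply_rules helper are assumptions made for the example, not the system's implementation.

```python
# Minimal sketch of applying dependency-based rules such as the Negation Indicator
# rule ({relation = neg} {governor = blood} {dependent = no}) -> (blood: yes).
# Dependency parses are assumed to be given as (relation, governor, dependent) triples.

def apply_rules(dependencies, rules):
    """Return {disorder: normalized_value} for every rule matched by a triple."""
    values = {}
    for relation, governor, dependent in dependencies:
        for rule in rules:
            if (relation == rule["relation"]
                    and governor == rule["governor"]
                    and dependent == rule["dependent"]):
                values[governor] = rule["value"]
    return values

# Simplified subset of the parse of "Gastric lavage showed maroon/black but no fresh blood".
deps = [("nsubj", "showed", "lavage"),
        ("neg", "blood", "no"),
        ("amod", "blood", "fresh")]
rules = [{"relation": "neg", "governor": "blood", "dependent": "no", "value": "yes"}]
print(apply_rules(deps, rules))   # {'blood': 'yes'} -> Negation Indicator = yes
```

The Body Location attribute, described next, relies on the same rule machinery once its cue word candidates have been found.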
For the Body Location attribute type, the system determines the cue word candidates by searching the list of triggers and UMLS, and then uses the rule set to identify the cue word related to the disorder.

3 Results

We used the data provided by the organizers as training data for the system: 298 files (train) and 133 files (devel). The Organizing Committee provided test data consisting of 100 files (text) used to run Tasks 1 and 2b, and 100 files (pipe) used to run Task 2a. Task 2 has two subtasks. In Task 2a, the gold-standard disorder spans are given, and the participant has to fill the slots (including the CUI of the disorder). Task 2b is end-to-end: no gold-standard information is provided, and the participant has to (i) identify the disorders (span recognition) and (ii) fill the slots for those disorders (including the normalized disorders).

             Strict score   Relaxed score
Precision    0.680          0.711
Recall       0.633          0.662
F-score      0.656          0.685
Table 2. System results for Task 1.

Accuracy        0.195
F*Accuracy      0.195
Wt_Accuracy     0.576
F*Wt_Accuracy   0.576
Table 3. System results for Task 2a.

Accuracy        0.884
F*Accuracy      0.756
Wt_Accuracy     0.784
F*Wt_Accuracy   0.671
Table 4. System results for Task 2b.

Attribute type          Weighted Accuracy
Body Location           0.603
Disorder CUI            0.801
Conditional Class       0.725
Course Class            0.851
Generic Class           0.904
Negation Indicator      0.935
Severity Class          0.843
Subject Class           0.931
Uncertainty Indicator   0.802
Table 5. Results for the attribute types in Task 2b.

When assessing the Task 2a results, we found that we had made a mistake in filling in the default values for the disorder slots in the results submitted to the Organizing Committee. The Task 2a results are therefore very low (see Table 3) and do not reflect the effectiveness of our system.

The following metrics are computed with the F-measure for span identification: a true positive disorder span is defined as any overlap with a gold-standard span. If several predicted spans overlap with one gold-standard span, only one of them (the longest span) is counted as a true positive, and the other predicted spans are counted as false positives.

#TP         5078
#FP         644
#FN         1070
Precision   88.7%
Recall      82.6%
F-score     85.6%
Table 6. The F-measure for span identification.

Table 6 shows the results obtained with the F-measure for span identification. Inspecting the results, many predicted spans contain tokens that are not part of the disorder; if these tokens were removed, the span identification results would be more accurate.
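As a quick arithmetic check, the precision, recall, and F-score in Table 6 follow directly from the reported TP/FP/FN counts; the short sketch below simply recomputes them.

```python
# Quick check of the span-identification figures in Table 6.
tp, fp, fn = 5078, 644, 1070

precision = tp / (tp + fp)          # 5078 / 5722 ~= 0.887
recall    = tp / (tp + fn)          # 5078 / 6148 ~= 0.826
f_score   = 2 * precision * recall / (precision + recall)  # ~= 0.856

print(f"P={precision:.3f} R={recall:.3f} F={f_score:.3f}")
# P=0.887 R=0.826 F=0.856, matching Table 6.
```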

4 Discussion

The disorder identification task poses many challenges in the clinical domain, as shown by the results of CLEF 2013 (Suominen et al., 2013), SemEval 2014 (Pradhan et al., 2014), and SemEval 2015.

Figure 6. Different representable forms of disorders.

In Section 2.1 we presented three representable forms of disorders in clinical text. Figure 6 illustrates further representable forms, as well as different disorders sharing a word within a sentence; for example, sentence 2 in Figure 6 contains three disorders that all include the word Allergies. The diversity and complexity of disorder representations in clinical documents make concept extraction in the clinical domain a major challenge.

5 Conclusion

We have described a system that performs recognition, normalization, and template filling of disorders in clinical documents, using both rule-based and machine learning-based approaches. The results of the system provide a good foundation for our further research and for enhancements that improve concept extraction. Specifically, we will study more appropriate label sets for the different representable forms of disorders presented in Sections 2.1 and 4, and we will conduct further research to add new features for disorder identification. In addition, we plan to propose a solution for removing tokens that are not part of a disorder.

References

Hanna Suominen, Sanna Salanterä, Sumithra Velupillai, Wendy W. Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R. South, Danielle L. Mowery, Gareth J. F. Jones, Johannes Leveling, Liadh Kelly, Lorraine Goeuriot, David Martinez, and Guido Zuccon (2013). Overview of the ShARe/CLEF eHealth Evaluation Lab 2013. In Proceedings of the ShARe/CLEF eHealth Evaluation Labs.

Huu Nghia Huynh and Bao Quoc Ho (2014). A Rule-based Approach for Relation Extraction from Clinical Documents. In Proceedings of the Asian Conference 2014 on Information Systems, pp. 314-317.

Huu Nghia Huynh, Son Lam Vu, and Bao Quoc Ho (2014). ShARe/CLEF eHealth: A Hybrid Approach for Task 2. In Working Notes for the CLEF 2014 Conference, Sheffield, UK, pp. 103-110.

Liadh Kelly, Lorraine Goeuriot, Hanna Suominen, Tobias Schreck, Gondy Leroy, Danielle L. Mowery, Sumithra Velupillai, Wendy W. Chapman, David Martinez, Guido Zuccon, and João Palotti (2014). Overview of the ShARe/CLEF eHealth Evaluation Lab 2014. In Information Access Evaluation. Multilinguality, Multimodality, and Interaction, Lecture Notes in Computer Science, Volume 8685, pp. 172-191.

Olivier Bodenreider and Alexa T. McCray (2003). Exploring Semantic Groups through Visual Approaches. Journal of Biomedical Informatics, 36, pp. 414-432.

Sameer Pradhan, Noémie Elhadad, Wendy Chapman, Suresh Manandhar, and Guergana Savova (2014). SemEval-2014 Task 7: Analysis of Clinical Text. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pp. 54-62.

Willie Rogers (2013). Installing and Running the Public Version of MetaMap.