
Improving Alzheimer's Disease Clinical Research with Bracket

An ebook on the State of Alzheimer's Disease Clinical Trials and How Bracket Is Improving Outcomes with Technology

BRACKETGLOBAL.COM

CONTENTS

Introduction
Bracket's eCOA Experience in Alzheimer's Disease
Alzheimer's Disease Clinical Trials Outlook & Challenges
A Path Forward: What Bracket Is Doing
  Investing in New Technology
  Expanding Pool of Researchers
  Evaluating Clinical Trial Data
  Training and Qualifying Raters
Published Data: Bracket's Demonstrated Results
  Initial Pilot Validation of an Electronic Alzheimer's Disease Assessment Scale
  Intelligent Clinical Interviews for Alzheimer's Disease
  Establishing Equivalence of Electronic Clinician-Reported Outcome Measures
Conclusion
Addendum: References

Introduction

Alzheimer's disease is deadly and difficult, and its impact is enormous. Every year, the Alzheimer's Association compiles a report that attempts to quantify the public health impact of the disease; it is an essential resource for helping every stakeholder understand the magnitude of the problem. Even so, the scope of how the disease affects people is difficult to grasp. Disease prevalence is increasing, and there is a huge unmet need for successful Alzheimer's Disease trials and new therapies. With the highest trial failure rate of any therapeutic area, at 99.6%, it is critical that we continue to invest in new and better approaches. Bracket is doing so by investing in new technology, expanding its pool of researchers, evaluating clinical trial data, and training qualified raters. This ecosystem must be supported, grown, and coordinated to improve the success of Alzheimer's Disease trials and the development of desperately needed new therapies.

Dr. David S. Miller
Clinical Vice President, Bracket

Bracket's eCOA Experience in Alzheimer's Disease

- 5,000 subjects have been evaluated using Rater Station
- 45,000 unique scale administrations
- 14,000 subject visits recorded across all Alzheimer's studies
- 25 unique clinical outcome assessments

Alzheimer's Disease Clinical Trials Outlook & Challenges

The Alzheimer's Association and Bracket have been at the forefront of advocating for better care for patients and their caregivers, as well as for more and better research into the disease. Only limited treatments are available for patients with the disease, and the R&D track record has been unproductive for the last 15 years. Yet investment in trials investigating new treatments is at an all-time high, and encouraging new approaches are now being tested.

The top reasons Alzheimer's Disease clinical trials fail:
1. Difficult disease targets
2. Hard-to-identify patients
3. Many therapeutic approaches have been shown to be ineffective
4. Poor signal detection

A Path Forward: What Bracket Is Doing

At Bracket, we advocate for the use of innovative new technologies to streamline data collection, improve patient engagement, and deliver real ROI. We are proud to bring eCOA and Quality Assurance tools to Alzheimer's Disease research initiatives.

INITIATIVE 1: Investing in new technologies to help diagnose and evaluate patients better

Bracket has more experience in Alzheimer's Disease than any other eCOA provider. Our Rater Station eCOA platform is built on years of experience working with these difficult, subjective, clinician-reported outcomes and is used to collect patient and caregiver data. Our tools are built in collaboration with researchers and investigators who work directly with patients and caregivers every day. Published research has shown that our tools can improve the quality of the data in an Alzheimer's Disease trial, especially compared to traditional paper-and-pencil methods.

INITIATIVE 2: Expanding the pool of researchers qualified to conduct Alzheimer's Disease research

Bracket works to ensure that the research centers and clinicians who conduct Alzheimer's Disease research are the best professionals in their field. We do this by relying on our extensive database of past studies and data analytics to identify individuals who are best positioned to succeed in future research programs. We also work alongside the Alzheimer's Association as a member of its Research Roundtable initiative, which seeks to facilitate the development and implementation of new treatments for Alzheimer's disease by collectively addressing obstacles to research and development, clinical care, and public health education. The Roundtable meets semi-annually, gathering R&D leaders from across academia and industry to organize new approaches to conducting this research and to publish reports on their progress.

INITIATIVE 3: Carefully evaluating clinical trial data to eliminate as much noise as possible

Data surveillance programs are essential to ensuring that patients are appropriately evaluated in a clinical trial. With surveillance experience in dozens of trials, Bracket has developed big-data algorithms that help prevent bad data from propagating by identifying and remediating it early in trials. There are several good reasons to expand worksheet and ratings reviews to more rigorous audio and video reviews of patient interviews, informant interviews, and cognitive testing. Bracket recently released new data on how audio reviews can supplement other, more passive quality assurance measures. Outcomes that rely on subjective patient interviews remain at risk of noise or incorrect data being entered into an EDC. Adding audio reviews of scale administrations enables detection of additional errors that are not apparent on worksheet review alone, thereby improving data quality during a trial. Additionally, our Blinded Data Analytics and Quality Assurance programs combine the statistical analysis of raw and derived data with rigorous clinical assessment of the results to identify areas of concern, as sketched below.
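To make the idea concrete, here is a minimal, hypothetical sketch of one statistical check of the kind a blinded analytics program might run: flagging visits whose total score changes implausibly fast between visits. The data layout, scale, and threshold are illustrative assumptions, not Bracket's actual algorithms.

```python
# Illustrative blinded-data check: flag large between-visit changes in a
# total score for clinical review. Threshold and data are assumptions.
from collections import defaultdict

visits = [  # (subject_id, visit_number, total_score)
    ("S01", 1, 24), ("S01", 2, 23), ("S01", 3, 12),  # 11-point drop: flagged
    ("S02", 1, 20), ("S02", 2, 19), ("S02", 3, 18),
]

THRESHOLD = 6  # illustrative: a total score rarely moves this much between visits

by_subject = defaultdict(list)
for sid, visit, score in visits:
    by_subject[sid].append((visit, score))

for sid, records in sorted(by_subject.items()):
    records.sort()
    for (v1, s1), (v2, s2) in zip(records, records[1:]):
        if abs(s2 - s1) >= THRESHOLD:
            print(f"Flag {sid}: score changed {s1} -> {s2} between visits {v1} and {v2}")
```

Flagged visits would then go to a calibrated clinician for review rather than being corrected automatically.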

INITIATIVE 4: Training raters and certifying that they are qualified to participate in research programs

Bracket's training and certification programs have been in use for 15 years and have qualified more than 16,000 clinicians to participate in Alzheimer's Disease trials. Our approach focuses on ensuring that clinical outcome measures are implemented in a valid and reliable way. Clear and concise training for investigators and raters on how patient information should be collected is essential. Distinguishing between patient and proxy versions is important, as is outlining how the sponsor wants caregivers to provide the information. These nuances are incorporated into our customized training programs. Through effective training and remediation, researchers can better prepare for these issues when they arise with their patients in a clinical trial.

Published Data: Demonstrated Results

- Initial Pilot Validation of an Electronic Alzheimer's Disease Assessment Scale Cognitive Subscale (eADAS-Cog): Rationale & Methods
- Intelligent Clinical Interviews for Alzheimer's Disease: How the Addition of Audio Reviews to eCOA Scale Administration Results in Improved Data Quality
- Establishing Equivalence of Electronic Clinician-Reported Outcome Measures

Initial Pilot Validation of an Electronic Alzheimer's Disease Assessment Scale Cognitive Subscale (eADAS-Cog): Rationale & Methods

Background

The Alzheimer's Disease Assessment Scale (ADAS) was developed by Richard Mohs, Wilma Rosen, and colleagues to provide a measure of change in the cognitive and behavioral functions known to be impaired by Alzheimer's disease 1,2. While the original ADAS included both cognitive and noncognitive (behavioral) subscales, most controlled clinical trials employing the ADAS have relied exclusively on the cognitive subscale (ADAS-Cog) to assess the effects of novel interventions on cognitive function 3. Over the last 20 years, the ADAS-Cog has become the de facto gold standard for assessing the efficacy of putative anti-dementia treatments, serving as the primary or co-primary outcome for nearly all phase 2 and phase 3 drug development trials 4,5,6. Given its importance to therapeutic development, there has been increasing interest in greater standardization and administration consistency of the scale 7. Recently, specific guidelines have been proposed for establishing equivalency between electronically administered versions of clinician-reported outcome (eClinRO) scales such as the ADAS-Cog and their paper versions 9, and computerized versions of the ADAS-Cog (eADAS-Cog) have begun to be utilized in clinical trials 8. While the eADAS-Cog has been purported to be equivalent to paper in terms of validity, to date there has been no prospective trial comparing the electronic and paper versions of the ADAS-Cog in a clinical population.

Methods

The goal of the study is to accumulate valid and reliable data to determine, using a prospective methodology, whether scores derived from Bracket's eADAS-Cog assessment are equivalent to those derived from a standard paper ADAS-Cog in a population of individuals with clinical diagnoses of mild to moderate dementia of the Alzheimer's type 10. A single-center, randomized, counterbalanced prospective trial of the eADAS-Cog in comparison to the paper version of the ADAS-Cog in men and women ages 50-90 (inclusive) will be employed. For the duration of the study, all consecutive new patients referred to the study site for a clinical evaluation will have the opportunity to participate, with the intention of enrolling 25 subjects. Each subject's participation in the study will last approximately 1 month. Participants will be screened for eligibility and provide informed consent before undertaking any study-related measures. If eligible, participants will be randomized to one of two study conditions defining the order in which they receive the study measures (see Figure 1, below). An investigator, sub-investigator, or trained researcher who has previously been trained and certified on both the paper and electronic versions of the scale will administer each of the study measures [MMSE (screening only), eADAS-Cog or paper ADAS-Cog] prior to the subject's initial clinical evaluation. Once all study assessments have been administered, the subject will receive a standard clinical evaluation by clinic staff. Clinical staff participating in the clinical evaluation and diagnosis of the subject will be blinded to the subject's performance on the study measures; performance on the study measures will not be considered when formulating the clinical diagnosis. Study subjects who meet inclusion criteria will return to the site for two subsequent visits at 2-week intervals, at which time they will repeat the study assessments (see Table 1, below).
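As an illustration of the counterbalanced design, here is a minimal sketch, in Python, of a blocked two-arm randomization of 25 subjects. The arm sequences follow Figure 1; the block size, seed, and subject IDs are illustrative assumptions, not the study's actual randomization system.

```python
# Illustrative counterbalanced randomization for the two administration
# orders shown in Figure 1. Not the study's actual randomization system.
import random

ORDERS = {
    "A": ["pADAS-Cog", "eADAS-Cog", "pADAS-Cog"],  # Baseline, Visit 2, Visit 3
    "B": ["eADAS-Cog", "pADAS-Cog", "eADAS-Cog"],
}

def randomize(subject_ids, seed=2024):
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(subject_ids), 2):  # blocks of 2 keep arms balanced
        block = subject_ids[start:start + 2]
        labels = rng.sample(["A", "B"], k=len(block))
        assignments.update(zip(block, labels))
    return assignments

schedule = randomize([f"S{i:02d}" for i in range(1, 26)])
print("S01 ->", schedule["S01"], ORDERS[schedule["S01"]])
```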

Results

We will investigate the concurrent validity of the eADAS-Cog versus the paper version using intraclass correlation coefficients (ICC), Pearson r correlation coefficients, and individual t-tests, as sketched below. Longitudinal test-retest reliability between study administrations will be evaluated separately for each mode of administration.

TABLE 1

Procedure                | Screening | Baseline | Visit 2 | Visit 3
Informed Consent         | X         |          |         |
Randomization            |           | X        |         |
MMSE                     | X         |          |         |
Clinical Assessment      |           | X        |         |
eADAS-Cog Administration |           | X*       | X*      | X*
pADAS-Cog Administration |           | X*       | X*      | X*

* Subjects will receive either the eADAS-Cog or the paper ADAS-Cog at each visit, per the randomization schedule.
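For concreteness, here is a minimal sketch, assuming hypothetical paired scores, of the three planned statistics: an ICC (the two-way random-effects, absolute-agreement form ICC(2,1) is shown as one common choice), Pearson r, and a paired t-test. It is illustrative only, not the study's analysis code.

```python
# Illustrative equivalence statistics on hypothetical paired scores:
# ICC(2,1), Pearson r, and a paired t-test. Not the study's analysis code.
import numpy as np
from scipy import stats

e_scores = np.array([18.0, 22.5, 30.0, 15.5, 27.0, 21.0, 19.5, 25.0])  # eADAS-Cog
p_scores = np.array([19.0, 21.5, 29.0, 16.0, 28.0, 20.5, 20.0, 24.0])  # paper

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-mode means
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

r, r_p = stats.pearsonr(e_scores, p_scores)
t, t_p = stats.ttest_rel(e_scores, p_scores)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([e_scores, p_scores])):.3f}")
print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")
print(f"paired t = {t:.3f} (p = {t_p:.3f})")
```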

Conclusions

As electronic versions of validated paper measures become more widely utilized as primary outcome measures in clinical trials, establishing concurrent validity between modalities remains of paramount importance.

FIGURE 1. Study flow: Screening (N = 25), then Baseline/Randomization (N = 25), with subjects randomized to one of two counterbalanced arms (N = 10 each). One arm receives the pADAS-Cog at Baseline, the eADAS-Cog at Visit 2 (Day 14), and the pADAS-Cog at Visit 3 (Day 30); the other arm receives the alternate sequence (eADAS-Cog, pADAS-Cog, eADAS-Cog).

Intelligent Clinical Interviews for Alzheimer's Disease: How the Addition of Audio Reviews to eCOA Scale Administration Results in Improved Data Quality

Background

Prior research has shown that employing enhanced electronic versions of the ADAS-Cog and MMSE in Alzheimer's disease clinical trials significantly reduces rater error rates compared to paper versions of these scales. 1 Additionally, a customized in-study data quality program can further improve study data quality by monitoring scale administration and scoring to identify instances where raters deviate from proper administration guidelines and/or scoring conventions. 2 When deviations occur, raters are remediated, resulting in a decrease in error rates over the course of the trial. Data quality programs that reduce rater error are vital to clinical trials, especially Alzheimer's Disease trials. Given the high failure rate in Alzheimer's Disease drug development over the past 10+ years 3, ensuring optimal data quality is paramount. One potential means of addressing data quality in global Alzheimer's Disease trials is an enhanced electronic version of the ADAS-Cog and MMSE (among other scales) coupled with an in-study data quality program. In this preliminary analysis, we evaluated whether, and to what extent, adding audio reviews of scale administration to the standard review of electronic scale data resulted in additional identification of rater error and improvement in data quality.

Methods

ADAS-Cog and MMSE ratings were evaluated from a multi-national Alzheimer's Disease clinical trial. Raters were trained and certified on proper scale administration and scoring. Initial submissions of scale data from the electronic scales administered to each subject were reviewed along with the corresponding audio. The electronic versions of both the ADAS-Cog and MMSE were augmented with internal logic, standardized instructions, and scoring conventions taken directly from the respective scale manuals and the study-specific training curriculum. All raters' scoring performance was assessed by a calibrated clinician.
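As an illustration of what "internal logic" can mean in an enhanced electronic scale, here is a minimal hypothetical sketch of two checks a paper worksheet cannot enforce: automatic item totals and score range validation. The item names and scoring are illustrative assumptions, not the actual Rater Station implementation.

```python
# Hypothetical internal logic for an electronic scale: automatic totals
# plus range checks. Illustrative only; not the Rater Station codebase.
def score_orientation(responses: dict) -> int:
    """One point per correct orientation answer, totaled automatically."""
    return sum(responses.values())

def validate_score(score: int, max_score: int) -> int:
    """Reject out-of-range values at entry time instead of at data review."""
    if not 0 <= score <= max_score:
        raise ValueError(f"score {score} outside allowed range 0-{max_score}")
    return score

answers = {"day": True, "date": False, "month": True, "year": True, "season": False}
print("Orientation:", validate_score(score_orientation(answers), max_score=5), "/ 5")
```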

Results

On evaluation of the electronic scale data alone, 20% (19/96) of the initial ADAS-Cog submissions contained errors. When the corresponding audio was reviewed, an additional 13% (12/96) of ADAS-Cog administrations were found to contain errors; 39% of all ADAS-Cog errors were identified on audio review. For the MMSE, 18% (30/170) of the initial electronic scale data submissions contained errors, and an additional 4% (7/170) were detected upon audio review. Audio review of the MMSE administrations thus identified an additional 19% of all MMSE errors.

TABLE 1

Scale    | % Electronic Scale Data with Errors | % Audio Reviews with Errors | % All Errors Identified via Audio Review
ADAS-Cog | 20%                                 | 13%                         | 39%
MMSE     | 18%                                 | 4%                          | 19%

TABLE 2 (chart). Breakdown of all identified errors by review source (%), for the ADAS-Cog and MMSE: the share of total errors identified by audio review versus by electronic data review.
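The percentages above follow directly from the reported counts; a few lines of Python reproduce them (shown to one decimal place before the rounding used in the tables).

```python
# Recomputing the reported percentages from the raw counts above.
for scale, n, edc_errors, audio_errors in [("ADAS-Cog", 96, 19, 12),
                                           ("MMSE", 170, 30, 7)]:
    total = edc_errors + audio_errors
    print(f"{scale}: {edc_errors / n:.1%} with errors in electronic data, "
          f"{audio_errors / n:.1%} additional on audio review, "
          f"{audio_errors / total:.1%} of all errors found via audio")
# ADAS-Cog: 19.8% with errors in electronic data, 12.5% additional on audio review, 38.7% of all errors found via audio
# MMSE: 17.6% with errors in electronic data, 4.1% additional on audio review, 18.9% of all errors found via audio
```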

Conclusions

Any methodology that could maximize data quality in Alzheimer's Disease clinical trials should be considered. Electronic versions of scales that are enhanced beyond their respective paper-and-pencil versions, coupled with an in-study data quality program, have been shown to reduce rater error. Adding audio reviews of scale administrations enables detection of additional errors that are not apparent on worksheet review alone, thereby further improving data quality over the course of a trial.

Establishing Equivalence of Electronic Clinician-Reported Outcome Measures

Background

The use of electronic devices to collect patient-reported outcome (ePRO) measures is becoming increasingly widespread. Studies have shown that ePROs offer advantages over their paper counterparts, including reduced missing data, fewer transcription errors, and increased compliance. 1 Because of the increased acceptance and widespread usage of ePROs, processes to evaluate the equivalence of paper and ePRO scales have been developed: the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) ePRO Task Force developed recommendations for establishing the equivalence of ePROs with their original paper-based PRO versions. Dementia clinical research does not rely heavily on PRO measures; it instead utilizes mostly clinician-reported outcome (ClinRO) assessments, in which scale ratings are centered on direct assessment of, or interviews with, patients and/or caregivers. Increasingly, ClinROs are administered in electronic rather than original paper format. 3 Although ample evidence supports electronic data capture of multiple types (e.g., case report forms 4 and PROs 5), limited research demonstrates equivalence between the electronic and paper versions of clinician-administered measures. 6,7 Like ePROs, eClinROs offer many benefits, including eliminating duplication of data, reducing transcription errors, facilitating data entry, assisting in remote data monitoring, enabling real-time access for data review, and promoting accurate and complete data collection. Clear guidelines have been established for other aspects of electronic data used in clinical research (i.e., electronic case report forms and ePROs). These guidelines have contributed to the increasing use of electronic capture of individual patient data on case report forms in industry research 9 and resolved much of the debate regarding validation procedures. However, in the absence of industry standards for evaluating the equivalence of eClinROs versus paper, questions regarding the equivalence and validity of electronic versions persist. The purpose of this poster is to propose recommendations to industry, based on the existing recommendations for ePROs, for establishing equivalence between electronic and paper-based ClinROs.

Methods

The methods listed below are derived from the guidelines proposed by the ISPOR ePRO Task Force. 2 These guidelines provide a framework for assessing changes to ePROs and go beyond the requirements set forth by the FDA PRO Guidance.

Levels of Modification

There are 3 levels of modification defined in the ePRO guidance:

MINOR: Can be justified on the basis of logic and/or existing literature. There is no change in content or meaning. Example: changing the font from italics to bold.

MODERATE: Based on the current empirical literature, the modification cannot be justified as minor. There may be a change in content or meaning. Example: requiring the clinician to scroll to see all anchors, without any instructions indicating the need to do so.

SUBSTANTIAL: There is no existing empirical support for the equivalence of the modification, and the modification clearly changes the content or meaning. Example: removing questions, changing anchors, or significantly rewording questions.

Scale Comparison

Two tiers of scale comparison are recommended; modifications noted should be categorized as described above.

Content Comparison: a word-for-word comparison between the original paper scale and the Master Scale Questionnaire, a spreadsheet that contains all of the electronic scale content (e.g., question text) that will be displayed on the device. Items should be classified as:
- Added: not included on the paper scale and added to the electronic scale;
- Removed: included on the paper scale and removed from the electronic scale;
- Changed: included on both the paper and electronic scales but modified from the original paper text; or
- No change: the paper scale text and electronic scale text are identical.
A sketch of this classification appears after the tables below.

Format Comparison: examination of the scale as it appears on the device to identify any differences between the paper and the electronic scale.

TABLE 1: Hypothetical Memory Scale Item Content Comparison

Item #        | Discrepancy | Appearance (Paper)
Instruction   | Added       | Not present
1             | Changed     | Please circle the employment level that best matches your current situation. You may circle more than one.
3             | Changed     | Orientation to Time: What is the day today? (Response) Score 0 or 1
Page 2 header | Removed     | Page 2 Memory Questions
5             | Changed     | Please write the words recalled below.
7             | Changed     | Please copy the design (attach drawing to document)

TABLE 2: Hypothetical Memory Scale Format Comparison

Tab # | Item(s)             | Comments
1     | Orientation to Time | All items display on a single screen
2     | Orientation to Time | State and Country items display on a single screen, after which scrolling is necessary to see the remaining items
3     | Recall              | All items display on a single screen
4     | Naming              | All items display on a single screen
5     | Repetition          | All items display on a single screen
7     | Comprehension       | All items display on a single screen
8     | Drawing             | All items display on a single screen; scrolling required to see instructions to clinician to upload drawing
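For illustration, the content-comparison classification can be expressed in a few lines of code. This is a hypothetical helper (the item dictionaries and function are assumptions, not Bracket's tooling) that labels each item Added, Removed, Changed, or No change.

```python
# Hypothetical content-comparison helper: label each item by comparing
# paper text against the Master Scale Questionnaire text.
def classify_items(paper: dict, electronic: dict) -> dict:
    labels = {}
    for item in sorted(paper.keys() | electronic.keys()):
        if item not in paper:
            labels[item] = "Added"
        elif item not in electronic:
            labels[item] = "Removed"
        elif paper[item] != electronic[item]:
            labels[item] = "Changed"
        else:
            labels[item] = "No change"
    return labels

paper = {"1": "What is the day today?", "2": "Underline the item."}
electronic = {"1": "What is the day today?", "2": "Select the item.", "3": "Attach drawing."}
print(classify_items(paper, electronic))
# {'1': 'No change', '2': 'Changed', '3': 'Added'}
```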

Adaptation

Two categories of classification have been added to account for differences attributable to adaptation to an electronic device:

Functionality: a change that allows the item to be administered in an electronic format, for example, the use of radio buttons to indicate a response versus circling a response on paper.

Instruction: a change that incorporates information from the scale manual that might not be included on the paper version of the scale, to increase the likelihood that scale administrators adhere to the established scale conventions.

Neither functionality nor instruction adaptations are believed to alter item interpretability or item scoring, but they may improve reliability.

Levels of Evidence

Once the scale comparison is completed and the levels of modification have been determined, the next step is to provide an adequate level of evidence to support equivalence between the paper and electronic versions of the scale. There are 3 accepted forms of measurement equivalence testing: cognitive debriefing, usability testing, and equivalence testing. Full psychometric testing should be used only when the level of modification is so great that the scale must be treated as essentially a new scale.

Cognitive debriefing: a technique used to examine the process by which a target population understands and responds to a questionnaire. This requires 5 to 10 target users; if extensive changes are required, a second round may be necessary.

Usability and feasibility testing: usability testing determines whether the target user is able to navigate the device and its software successfully. A simple device may require 5 to 10 target users for testing, while a more complex system may require 20 or more. Feasibility testing differs from usability testing in that the target user has the opportunity to field test the scale(s) and provide feedback on ease of use.

Equivalence testing: measures the similarity between the scores derived from an eClinRO assessment and those obtained using the paper version. The goal is to assess whether there is a statistically significant difference in scores between the two versions.

Results

An important goal with eClinROs is to collect data with reliability equal to or higher than that produced by the original paper scale. As there are no existing guidelines on establishing measurement equivalence for eClinROs, we have set forth guidance for documentation and a route for establishing equivalence of paper and eClinROs (Table 3), based on FDA regulations for the creation of PROs and ISPOR recommendations for equivalence evaluation of paper and ePROs.

TABLE 3: eClinRO Equivalence: Instrument Adaptation and Recommended Evidence

Classification | Rationale | Examples | Level of Evidence
Functionality adaptation | Change is made solely for adaptation to electronic format | Nonsubstantive changes in instructions (use of radio buttons rather than circling a response; addition of comment boxes to capture information) | Usability testing
Instruction adaptation | Addition of instructions from manuals or study-specific conventions not included in the paper scale | Addition of previously established guidelines (instruction from the scale manual informing the clinician to read the question verbatim) | Usability testing
Minor modification | The modification can be justified on the basis of logic and/or existing literature. No change in content or meaning. | 1. Minor changes in format (use of bold vs. italic). 2. Minor changes in wording that do not alter interpretability (using "select item" instead of "underline item"). | Cognitive debriefing; usability testing
Moderate modification | Based on the current empirical literature, the modification cannot be justified as minor. May change content or meaning. | Changes in item wording or presentation that are more significant and might alter interpretability | Equivalence testing; usability testing
Substantial modification | There is no existing empirical support for the equivalence of the modification, and the modification clearly changes content or meaning. | 1. Substantial changes in response options. 2. Substantial changes in item wording. | Full psychometric testing; usability testing

Conclusions

We have set forth initial good practice recommendations for documentation and a route for evaluating equivalence of paper and eClinROs. Further discussions with key stakeholders are needed to fully define and reinforce best practice for all types of Clinical Outcome Assessments (e.g., eClinRO, eObsRO, and ePerfO) and to identify areas of commonality and difference in equivalency evaluation.

For help with your next Alzheimer's Disease clinical trial, contact Bracket.
(610) 225.5900 | bracketglobal.com
AMERICAS | ASIA PACIFIC | EUROPE | MIDDLE EAST | AFRICA

Addendum: References

Initial Pilot Validation of an Electronic Alzheimer's Disease Assessment Scale Cognitive Subscale (eADAS-Cog): Rationale & Methods

[1] Mohs RC, Cohen L. The Alzheimer's Disease Assessment Scale: an instrument for assessing treatment efficacy. Psychopharmacol Bull 19:445-450 (1983).
[2] Rosen WG, Mohs RC, Davis KL. A new rating scale for Alzheimer's disease. Am J Psychiatry 141:1356-1364 (1984).
[3] Zec RF, Landreth ES, Vicari SK, Belman J, Feldman E, Andrise A, et al. Alzheimer Disease Assessment Scale: a subtest analysis. Alzheimer Dis Assoc Disord 6(3):164-181 (1992).
[4] Doraiswamy PM, Bieber F, Kaiser L, Krishnan KR, Reuning-Scherer J, Gulanski B. The Alzheimer's Disease Assessment Scale: patterns and predictors of baseline cognitive performance in multicenter Alzheimer's disease trials. Neurology 48:1511-1517 (1997).
[5] Ferris SH, Lucca U, Mohs R, Dubois B, Wesnes K, Erzigkeit H, Geldmacher D, Bodick N. Objective psychometric tests in clinical trials of dementia drugs. Position paper from the International Working Group on Harmonization of Dementia Drug Guidelines. Alzheimer Dis Assoc Disord 11(Suppl 3):34-38 (1997).
[6] Winblad B, Brodaty H, Gauthier S, et al. Pharmacotherapy of Alzheimer's disease: is there a need to redefine treatment success? Int J Geriatr Psychiatry 16(7):653-666 (2001).
[7] O'Halloran JP, et al. Psychometric comparison of standard and computerized administration of the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog). Current Alzheimer Research 8(3):323-328 (2011).
[8] Fuller RLM, et al. Establishing equivalence of electronic clinician-reported outcome measures. Therapeutic Innovation & Regulatory Science (2015): 2168479015618693.
[9] US Food and Drug Administration, Office of the Commissioner. Guidance for Industry: Computerized Systems Used in Clinical Investigations. Rockville, MD: US Department of Health and Human Services; 2007.
[10] Jack CR, et al. An operational approach to National Institute on Aging-Alzheimer's Association criteria for preclinical Alzheimer disease. Annals of Neurology 71(6):765-775 (2012).

Intelligent Clinical Interviews for Alzheimer's Disease: How the Addition of Audio Reviews to eCOA Scale Administration Results in Improved Data Quality

[1] Miller D, Feaster T, Hernandez A, Abi-Saab D, Kott A, Millward W. The impact of electronic administration of the ADAS-Cog, MMSE and CDR on clinical trial data quality. Poster presented at: Clinical Trials on Alzheimer's Disease Conference (CTAD); November 5-7, 2015; Barcelona, Spain.
[2] Miller DS, McNamara C, Samuelson P, Mulder D, Young A. Is an in-study surveillance program effective at reducing error rates for both experienced and novice raters? Poster presented at: 7th Annual International Society for CNS Clinical Trials and Methodology (ISCTM) Scientific Meeting; February 21-23, 2011; Washington, DC.
[3] Cummings JL, Morstorf T, Zhong K. Alzheimer's disease drug development pipeline: few candidates, frequent failures. Alzheimers Res Ther. 2014;6(4):37.

Establishing Equivalence of Electronic Clinician-Reported Outcome Measures

[1] Gwaltney CJ, Shields AL, Shiffman S. Equivalence of electronic and paper-and-pencil administration of patient-reported outcome measures: a meta-analytic review. Value Health. 2008;11:322-333. doi:10.1111/j.1524-4733.2007.00231.x.
[2] Coons SJ, Gwaltney CJ, Hays RD, et al. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force report. Value Health. 2009;12(4):419-429.
[3] US Food and Drug Administration, Office of the Commissioner. Guidance for Industry: Computerized Systems Used in Clinical Investigations. Rockville, MD: US Department of Health and Human Services; 2007.
[4] Ene-Iordache B, Carminati S, Antiga L, et al. Developing regulatory-compliant electronic case report forms for clinical trials: experience with the DEMAND trial. J Am Med Inform Assoc. 2009;16:404-408. doi:10.1197/jamia.m2787.
[5] Center for Drug Evaluation and Research (US); Center for Biologics Evaluation and Research (US); Center for Devices and Radiological Health (US). Guidance for Industry: Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. Rockville, MD: US Department of Health and Human Services; 2009.
[6] O'Halloran JP, Kemp AS, Salmon DP, Tariot PN, Schneider LS. Psychometric comparison of standard and computerized administration of the Alzheimer's Disease Assessment Scale: Cognitive Subscale (ADAS-Cog). Curr Alzheimer Res. 2011;8:323-328.
[7] Moore RC, Harmell AL, Ho J, et al. Schizophr Res. 2013;144:87-92. doi:10.1016/j.schres.2012.12.028.
[8] US Food and Drug Administration. Guidance for Industry: Electronic Source Data in Clinical Investigations. Rockville, MD: US Department of Health and Human Services; 2013.
[9] Le Jeannic A, Quelen C, Alberti C, Durand-Zaleski I. Comparison of two data collection processes in clinical studies: electronic and paper case report forms. BMC Med Res Methodol. 2014;14:7. http://www.biomedcentral.com/1471-2288/14/7.