Assessment of lung damages from CT images using machine learning methods.


DEGREE PROJECT IN MEDICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2018

Assessment of lung damages from CT images using machine learning methods.
Bedömning av lungskador från CT-bilder med maskininlärning metoder.

QUENTIN CHOMETON

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES IN CHEMISTRY, BIOTECHNOLOGY AND HEALTH


Abstract

Lung cancer is the most commonly diagnosed cancer in the world and its finding is mainly incidental. New technologies, and more specifically artificial intelligence, have lately attracted great interest in the medical field, as they can automate tasks or bring new information to the medical staff. Much research has been done on the detection or classification of lung cancer, but these works operate on local regions of interest and only a few of them look at a full CT scan. The aim of this thesis was to assess lung damages from CT images using new machine learning methods. First, single predictors were learned by a 3D ResNet architecture: cancer, emphysema, and opacities. Emphysema was learned by the network, reaching an AUC of 0.79, whereas cancer and opacity predictions were not really better than chance (AUC = 0.61 and AUC = ). Secondly, a multi-task network was used to predict the factors together. Training with no prior knowledge and a transfer learning approach using self-supervision were compared. The transfer learning approach showed similar results in the multi-task setting for emphysema, with AUC = 0.78 vs 0.60 without pre-training, and for opacities with AUC = 0.61. Moreover, the pre-training approach enabled the network to reach the same performance as each single-factor predictor, but with only one multi-task network, which saves a lot of computational time. Finally, a risk score can be derived from the training to use this information in a clinical context.

KEYWORDS: Deep Learning, Artificial Neural Networks, Lung damages, CT-Scans, Multi-task learning, Transfer learning

Acknowledgment

First and foremost, I would like to thank Carlos Arteta, my supervisor and guide during my project at Optellum Ltd. He helped me a lot with my work by bringing new ideas and always took time to answer all my questions. I would also like to thank the whole Optellum team for their welcome and their help; I learned a lot by being part of this team and having the opportunity to discuss with them. I would like to thank Dr. Chunliang Wang, my supervisor at KTH, and Dr. Dmitry Grishenkov for guiding us through this master thesis project. I would also like to thank Antoine Broyelle, my co-intern at Optellum: all our discussions and his ideas helped my project move further, and all our lunch times made the British weather bearable. I would also like to thank Lottie Woodward, who had the incredible patience to proofread a report written in a French style. The author thanks the National Cancer Institute for access to NCI's data collected by the National Lung Screening Trial (NLST). The statements contained herein are solely those of the authors and do not represent or imply concurrence or endorsement by NCI.

Nomenclature

AUC   Area Under the Curve
CNN   Convolutional Neural Network
CT    Computed Tomography
DL    Deep Learning
HU    Hounsfield Unit
LR    Learning Rate
ML    Machine Learning
NLST  National Lung Screening Trial
ROC   Receiver Operating Characteristic

Contents

1 Introduction
2 Materials and methods
  2.1 Clinical Data
  2.2 Image Annotations (Emphysema; Opacities; Consolidation; Cancer)
  2.3 Preprocessing (Normalization; Data Augmentation)
  2.4 Evaluation metrics (Loss and Accuracy; ROC and confusion matrix)
  2.5 Experiments
    Single factor prediction (Network; Training)
    Combination of factors (Network; Full training; Self-supervision; Training with Pre-training)
3 Results
  3.1 Single factor prediction (Cancer; Emphysema; Opacities)
  3.2 Combining factors (Training with Pre-training; Risk score)
4 Discussion (Main findings; General impact; Comparison to other methods; Limitations; Future work)

5 Conclusion
References
A State of the Art
  A.1 Clinical Background
    A.1.1 Lung Cancer
    A.1.2 Nodules
    A.1.3 Risk factors for nodule malignancy (Nodule size [21]; Nodule Morphology [15]; Nodule Location [21]; Multiplicity [21]; Growth Rate [21]; Age, Sex, Race [21]; Tobacco [21])
    A.1.4 Problem of detection
  A.2 Engineering Background
    A.2.1 Deep Learning
    A.2.2 How does a deep learning network learn?
    A.2.3 Transfer learning (Feature Extractor; Fine-tuning)
    A.2.4 Challenges with deep learning (Architecture selection in transfer learning; Number and quality of data: pre-processing; Time-processing (CPU and GPU); Overfitting; Performance evaluation)


1 Introduction

The incidence of new cancer cases annually was per 100,000 people in 2016 [14]. Additionally, lung cancer represents around 20% of deaths due to cancer. Cancer, in general, is not a well-understood disease and most cases are discovered too late to be treated. The main challenge nowadays is to detect and predict cancer as soon as possible, in order to treat it in the best possible way. For the past 3-4 years, artificial intelligence has been developing in the medical field in order to provide useful tools for better detection and prediction. Lung cancer screening and incidental findings are the two main ways lung cancer is detected in a patient. Screening should normally be the main way of finding lung cancer, as it is for breast cancer. But most countries, such as the UK, have no national screening programme, and most cases are incidental findings. An incidental finding means that lung nodules are found while performing an exam not targeted at the lungs; for example, nodules can be found during a heart or liver CT scan. The main problem is that radiologists are not trained to sort these nodules and determine whether they are cancerous or benign. Such patients should be referred to a pulmonologist, who will in most cases ask for a chest CT scan, which leads to more radiation exposure for the patient. In the worst case, the patient's report never reaches the pulmonologist and is lost, resulting in their cancer never being found or being found too late. This thesis, written in association with Optellum Ltd, tries to address this issue. Their vision is to redefine cancer care from early diagnosis to treatment, by enabling every clinician to choose the optimal diagnostic and treatment pathway for their patients. This is done by using machine learning on vast medical image repositories.
This thesis is part of this vision and focuses on: "the assessment of lung damages from CT images using machine learning methods." It looks at how to assess any kind of lung damage using deep learning methods while considering only the global scale: the entire CT scan. This work does not focus on finding nodules (local scale), as has been done in [36, 18], but on the global assessment of lung damages (global scale). It will then be possible to combine these two scales of features to better predict cancer.

2 Materials and methods

2.1 Clinical Data

Machine Learning (ML) and even more so Deep Learning (DL) require large quantities of data. The data here are medical images, more particularly CT scans from the NLST dataset. NLST is a screening trial in which around heavy smokers aged between 55 and 74 received either a chest X-ray or a chest CT scan at three different time points where possible. This thesis extracted the CT scans of patients from this study. Counting the different time points, a total of images were collected. The CT scans come from a large number of sites as well as manufacturers. All the CT scans have 512x512 pixel dimensions in the axial plane but vary widely in resolution (cf. part 2.3). The entire set of clinical data is split 70:30 into a training and a validation set. If a scan from a patient is assigned to one set, all the other CTs of this patient are included in the same set.

2.2 Image Annotations

Approximately 500 different metadata fields were recorded for each scan in the NLST trial. These data are divided into categories such as: demographic, lung cancer, smoking, death, or follow-up. Only a few of these metadata fields are relevant to this master thesis. After a long analysis of the dictionaries summarizing them, three were chosen, plus one created by members of the company:

Emphysema: Emphysema is defined as the "abnormal permanent enlargement of the airspaces distal to the terminal bronchioles accompanied by destruction of the alveolar wall and without obvious fibrosis" [23]. A patient presenting emphysema is classified as 1 (fig 1a).

Opacities: Opacities are the result of a decrease in the ratio of gas to soft tissue (blood, lung parenchyma and stroma) in the lung (fig 1b).

Consolidation: Consolidation of the lung is a solidification of lung tissue due to liquid or solid accumulation in the air spaces (fig 1c).

Cancer: The cancer markup was produced by trained doctors hired by the company. Thanks to their expertise, they were able to differentiate cancerous nodules from benign ones. A patient is marked as cancer as soon as they have at least one cancerous nodule in one of their available scans. For example, if patient X has no cancer on the CT at time-point 0 but one cancerous nodule on the CT at time-point 1, both images of the patient receive the label cancer.

(a) Emphysema (b) Opacities (c) Consolidation
Figure 1: Visible diseases in the lung

All the metadata but the cancer annotation were created during the NLST trial and are subject to human error. The proportion of positive patients for each factor is summarized in table 1 and fig 2. The proportion of consolidation in the dataset is too low to be used for training later. The data are unbalanced. This problem is tackled during the training phase by using the weighted sampler method in PyTorch: by giving a weight to each class, it ensures that each batch contains a 50:50 mix of images with and without the disease, so the diseased cases are shown more often on average.

Class          Positive Proportion
Cancer          %
Emphysema       %
Opacities       %
Consolidation   %

Table 1: Proportion of each factor in the NLST database
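The weighted-sampling idea described above can be sketched as follows (a minimal plain-Python stand-in for PyTorch's WeightedRandomSampler; the function name and toy labels are illustrative):

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Give every sample a weight inversely proportional to its class
    frequency, so each class carries equal total sampling mass and a
    rare class (e.g. the ~6% cancer cases) is drawn ~50:50 per batch."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Toy example: 6 negatives, 2 positives.
labels = [0, 0, 0, 0, 0, 0, 1, 1]
weights = balanced_sample_weights(labels)
pos_mass = sum(w for w, y in zip(weights, labels) if y == 1)
neg_mass = sum(w for w, y in zip(weights, labels) if y == 0)
# Each class carries the same total weight, so positive and negative
# samples are equally likely to be drawn.
```

In practice these per-sample weights would be passed to torch.utils.data.WeightedRandomSampler, which then feeds the DataLoader.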

2.3 Preprocessing

Normalization

Figure 2: Distribution of NLST data

As previously noted, the CT scans vary a lot in terms of resolution and intensity due to the variability of devices. In order to generalize better, a normalization step is necessary for all images before using them for training.

Resampling: The first normalization is to rescale the images to the same spatial resolution, 1x1x2 mm (x;y;z), to remove zoom or slice-thickness variance. Indeed, a scan may have a spatial resolution of 2x2x2.5 mm, meaning that the distance between slices is 2.5 mm. The resampling is done using nearest-neighbour interpolation.

Standardization: CT scan intensity is measured in Hounsfield Units (HU) and represents radiodensity. The CT scans in our database range from to . The interesting values for lung images are around 0, which represents water, and around , which represents air. As shown in figure 3a, these values are the most common in a lung CT scan. Data in machine learning and deep learning are commonly standardized, meaning the mean value of the dataset is subtracted and the result divided by the standard deviation of the dataset; values then mostly range between -1 and 1. The mean and standard deviation values for the NLST dataset are given in table 2, and the distribution of HU before and after standardization is shown in figures 3a and 3b:

x_standard = (x - mean_dataset) / σ_dataset    (1)
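Eq. (1), with the dataset statistics given in table 2, can be sketched as follows (an illustrative snippet; the function and constant names are not from the thesis):

```python
DATASET_MEAN_HU = -440.0   # dataset mean from table 2
DATASET_STD_HU = 480.0     # dataset standard deviation from table 2

def standardize(voxels_hu):
    """Apply eq. (1): subtract the dataset mean and divide by the dataset
    standard deviation, mapping typical lung HU values roughly into [-1, 1]."""
    return [(v - DATASET_MEAN_HU) / DATASET_STD_HU for v in voxels_hu]

# Air, the dataset mean, and water map to roughly -1.17, 0 and 0.92.
out = standardize([-1000.0, -440.0, 0.0])
```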

Mean dataset: -440 HU
Standard deviation dataset (σ_dataset): 480 HU
Table 2: Dataset mean and standard deviation

(a) Before (b) After
Figure 3: HU distribution before and after standardization

Fixed-size: The network has to be fed with same-sized inputs; however, after resampling, the CT scans still have different numbers of pixels per slice and different numbers of slices per scan, for example 280x280 or 400x400 (x;y). The adopted strategy is to use an input of 320x320x32 (x;y;z), meaning 32 slices of 320x320 pixels. To achieve this, images are either cropped to 320x320 or zero-padded on the sides to reach the 320x320 input size. A random crop could leave a tumor out of the volume, but in our study this is not a problem. The main idea here is that the network should learn global patterns inside the lung and not local information such as a tumor, so omitting the tumor does not change the global pattern of the lung and should not change what the network learns from the CT scan. In order to reach the target of 32 slices, several methods are computed and used in both the single-factor and multi-task experiments:

- Choice of 32 slices in the CT with a pre-defined step between two consecutive slices (typically 3)
- Random choice of 32 slices
- Minimum Intensity Projection (MinIP) over 3 slices: takes the minimum value over the z-axis for 3 consecutive slices. This method emphasizes dark areas and helps to detect emphysema
- Maximum Intensity Projection (MaxIP) over 3 slices: takes the maximum value over the z-axis for 3 consecutive slices. This method emphasizes light areas and helps to detect nodules
- Average Intensity Projection (AIP) over 3 slices: takes the average value over the z-axis for 3 consecutive slices

Data Augmentation

In order to generalize better and avoid overfitting, data augmentation is used to enlarge the dataset. The first augmentation is a random crop of the images at fixed size: slightly different parts of the same image are shown, which helps the network learn further. The second augmentation is a random flip over the x or y axis during training, i.e. a reflection of the slices across the middle of the axis. The last augmentation is a random rescale of the intensity histogram of each image. As shown in fig 3a, the normal distribution of an image in HU lies within the range [-1000:600]. To rescale the image's intensity, a random minimum between and -850 and a random maximum between 100 and 1300 are chosen, and the histogram is rescaled from the initial range to the [min:max] range. The intensity of the image thus changes each time the network sees it. This is important, as the intensity mainly depends on the machine, and spurious correlations could otherwise be learned by the network. Indeed, in the NLST dataset, patients had their CTs in different hospitals, and the rates of cancer or emphysema can differ substantially from one hospital to another. Image intensity is an intrinsic parameter of the machine used, depending on its calibration, the protocol followed, and the reconstruction filters used by the manufacturer, as shown in [20]. Without intensity augmentation, the network could therefore learn a simple correlation between emphysema cases and the average intensity of the scan.
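The intensity-rescale augmentation can be sketched as below. The lower bound of the random-minimum range is partially elided in the text, so both ranges are left as caller-supplied parameters; all names are illustrative:

```python
import random

def random_intensity_rescale(voxels, lo_range, hi_range, src=(-1000.0, 600.0)):
    """Linearly remap intensities from the source HU window to a randomly
    drawn [min, max] window, as in the augmentation described above.

    lo_range / hi_range: (low, high) tuples to draw the new minimum and
    maximum from. The thesis draws the maximum from [100, 1300]; the
    minimum's lower bound is elided in the text, so pass your own.
    """
    new_min = random.uniform(*lo_range)
    new_max = random.uniform(*hi_range)
    s0, s1 = src
    scale = (new_max - new_min) / (s1 - s0)
    return [new_min + (v - s0) * scale for v in voxels]

random.seed(0)
# -1000 maps to the drawn minimum, 600 to the drawn maximum, and the
# midpoint -200 stays the midpoint of the new range (the map is linear).
vol = random_intensity_rescale([-1000.0, -200.0, 600.0],
                               lo_range=(-1200.0, -850.0),   # placeholder low bound
                               hi_range=(100.0, 1300.0))
```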
2.4 Evaluation metrics

During this thesis, many models are trained using different networks, training methods and hyperparameters (parameters set by hand before training, such as learning rate, momentum, etc.). In order to compare all these models, evaluation metrics must be set before starting the training. The metrics used to evaluate a model and to compare two models are:

Loss and Accuracy: Both loss and accuracy are computed on the training and validation sets (cf. Annex A). A loss function (also called error or cost function) maps the network parameters to a scalar value which specifies how wrong this set of parameters is. The main task is then to minimize the loss function

by updating the network's parameters. A loss function is computed during the forward pass of the network. As this is a classification problem, the cross-entropy loss function is used in all experiments (unless otherwise noted). This loss function is a combination of the Log Soft Max function and the Negative Log Likelihood function:

NLL(x, class) = -x[class]    (2)

LSM(x, class) = log( exp(x[class]) / Σ_j exp(x[j]) )    (3)

For the same experiment with different hyperparameters, the model reaching the lowest loss on the training and validation sets can be defined as the best model. The accuracy is defined as the percentage of well-classified elements in a classification task. The higher the accuracy on the validation set, the better the model.

ROC and confusion matrix: The two other metrics are the ROC (and AUC) and the confusion matrix, defined in Annex A. In the ROC, the closer the curve is to the top-left corner, the better the model. This can also be measured by comparing the Area Under the Curve (AUC): the higher the AUC, the better the model. The confusion matrix allows us to compute different metrics: accuracy, precision, recall, etc. These are important to understand what the network misclassifies and to compare with other models. The decision to keep one model rather than another is made based on these evaluation metrics.

2.5 Experiments

Single factor prediction

The first set of experiments determines the ability of the network to learn different disease factors: cancer, emphysema and opacities. These are trained separately but use the same network.

Network

The base architecture used to train the different factors is a 3D version of ResNet18, which will be called ResNet3D (fig 5) [12]. The input is a set of 32 slices, or projections of slices, of 320x320 pixels. The first layer is a 3D convolution with a 3x5x5 kernel and 32 channels; the large filter is used to detect larger components in the image (shapes, blobs, etc.).
After a 3D batch normalization and

max pooling, the network is composed of four similar blocks (cf fig 4). In each block, the input is used twice: it passes through a succession of layers (left part of fig 4) and is also added to the output of this succession (right part of fig 4). The output of one block then feeds the input of the next block. The classifier part of the network, which determines whether the factor is present or not, is composed of a 3D convolution with a 1x1x1 kernel and a 3D spatial average pooling. Through this convolution, the 256 channels are mapped to a 2-class output: disease present or not.

Figure 4: ResNet 3D block: the input of a 3D block passes through the succession of 3D convolution, BN, ReLU activation, 3D convolution and BN (left part of the diagram) and is also added to the output of the last BN (right part)

Training

Implementation is done using the PyTorch framework [27] and the network is trained on an NVIDIA GPU. The training runs for 40 epochs, i.e. iterating 40 times over the dataset, with a batch size of 16 (the maximum fitting in memory), a momentum of 0.99, a weight decay of 1e-4 and an initial learning rate of .

Figure 5: ResNet 3D used for cancer prediction, for example

The learning rate decay uses the plateau method: if the validation loss does not decrease during 3 consecutive epochs, the current learning rate is divided by a factor of 10. The learning rate (LR) is the amount of change applied to the model; LR decay is used in order to be more and more accurate and reach a global minimum.

Cancer: The cancer training uses the cancer metadata. The input is a volume of 32 slices of 320x320 pixels, obtained with the average intensity projection. As only 5.9% of the images are marked as cancer, the classes are balanced during training using a sampler. All the described pre-processing and data augmentation steps are performed. The goal is to see the ability of the network to detect cancer from a CT volume.

Emphysema: The emphysema training is similar to the cancer one, with the emphysema metadata as output. The minimum intensity projection is used, as it reveals the darker parts of the image, which correspond to emphysema. All the described preprocessing and data augmentation steps are performed. The goal is to see the ability of the network to detect emphysema from a CT volume.

Opacities: The opacity training is the same as for emphysema and cancer, but with the opacity metadata as output. The maximum intensity projection is used this time: as opacity appears as a brighter area on the CT, the MaxIP emphasizes its presence. All the described preprocessing and data augmentation steps are performed. The goal is to see the ability of the network to detect opacity from a CT volume.
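The plateau decay described above can be sketched as a small scheduler (a plain-Python sketch; with PyTorch this corresponds to torch.optim.lr_scheduler.ReduceLROnPlateau with factor=0.1 and patience=3). The initial learning rate below is a placeholder, as the value is elided in the text:

```python
class PlateauDecay:
    """Minimal sketch of the plateau schedule: if the validation loss has
    not improved for `patience` consecutive epochs, divide the learning
    rate by `factor` (10 in the thesis)."""

    def __init__(self, lr, patience=3, factor=10.0):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr /= self.factor  # divide the LR by 10 on a plateau
                self.bad_epochs = 0
        return self.lr

sched = PlateauDecay(lr=0.1)          # 0.1 is a placeholder initial LR
sched.step(1.0)                       # first epoch always "improves" on inf
for _ in range(3):                    # three epochs without improvement
    sched.step(1.0)
```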

Combination of factors

Figure 6: Multi-task Network

Once the single-factor trainings are done and the ability of the network is understood, the goal is to combine the detection of all these factors in a single network. The next experiments use the emphysema, cancer and opacities annotations and try to predict all three with only one multi-task network. First, the network is trained with no prior knowledge; then self-supervised learning is used before fine-tuning the network.

Network

For this prediction, the multi-task network shown in fig 6 is used. The network is separated into two parts. The first, the feature-extraction part, is the ResNet 3D already used for the single factor prediction; the number of channels is the same at each layer, and only the final classifier part is removed. The second part of the network performs the factor predictions: the output of the ResNet 3D is divided into three subparts, one per predictor. During training, a loss is computed for each classifier and backpropagated through its own classifier and the ResNet 3D. In total three losses are computed, one per factor classifier, and the backward phase (update of the weights by gradient descent) is then computed according to the three losses.

Full training

The full training initializes the convolution weights with a Gaussian distribution centered at zero and with a standard deviation of sqrt(2/n),

where n is the number of inputs to the neuron. The batch normalization layers are initialized with a weight of 1, while the biases are set to 0. The training runs for 50 epochs with a batch size of 16, a momentum of 0.99, a weight decay of 1e-4 and an initial learning rate of . The learning rate is lowered using the plateau function.

Self-supervision

Figure 7: Siamese Network

Another approach for training this complex multi-task network is to first pre-train the feature extractor on a specific task before using transfer learning and fine-tuning it on the multi-task objective. Different methods exist to pre-train a network; self-supervision is used here in order to force the network to learn global features, not only features specific to the main task. This method uses a Siamese network and was first applied to face recognition [6].

Siamese network: Here, a Siamese network is trained to distinguish between pairs of images from the same patient at two different time points and pairs from two different patients. The Siamese network is two ResNet 3D networks in parallel sharing the same weights (see fig 7). The input is a pair of images which each go through the ResNet 3D to compute a 256-dimensional vector. The vectors from the two inputs are then used to compute the contrastive loss (cf. next paragraph). The distance between two images is computed as the absolute difference between the two output vectors.

Training of the Siamese network

To build the pairs of images, only patients with 2 or more CT-scan time points are kept in the NLST database. This corresponds to 5,085 patients, or pairs of images. Once a patient is chosen as an input, a random choice determines whether the other image should come from the same patient or a different one; if different, a CT scan is chosen at random among all the remaining CTs. All the data augmentation is performed. The random rescaling of the image's intensity is really important here, so that the network does not learn machine specificities while differentiating similar from different patients. In order to improve the results of the Siamese network, the adaptive margin loss function described by Wang et al. [32] is chosen. Normally, Siamese networks are trained with the contrastive loss function [6] described in equation (4):

(1 - label) · (1/2) D_w² + label · (1/2) {max(m - D_w, 0)}²    (4)

where D_w represents the Euclidean distance between the two output vectors and m a fixed margin. The main issue with this method is finding the right margin, which is why the adaptive margin loss function is chosen: its margins depend on the inputs. Within a batch, all the distances between same-patient pairs must be smaller than an adaptive up-margin M_p, and all the distances between different-patient pairs must be larger than an adaptive down-margin M_n, defined in equation (5):

M_p = (1/µ) (1 - exp(-µd)),    M_n = (1/γ) log(1 + exp(γs))    (5)

where s is the mean same-patient distance, d the mean different-patient distance, and µ and γ two hyperparameters set to 8 and 2 respectively. From equation (5), M_τ and M_c are defined by M_p = M_τ - M_c and M_n = M_τ + M_c. Finally the loss function is defined in (6), with label ∈ {-1; 1}:

Loss = Σ_batch max{M_c - label(M_τ - D_w), 0}    (6)

Training with Pre-training

The training is the same as in part , the only difference being the initialization of the weights.
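Equations (5) and (6) can be sketched as follows. The sign convention (label +1 for same-patient pairs, -1 for different-patient pairs) is an assumption, chosen so that eq. (6) penalizes same-patient distances above M_p and different-patient distances below M_n; all names are illustrative:

```python
import math

def adaptive_margins(mean_same_d, mean_diff_d, mu=8.0, gamma=2.0):
    """Adaptive margins from eq. (5): s is the mean same-patient distance,
    d the mean different-patient distance in the batch. Returns the
    derived (M_tau, M_c) with M_p = M_tau - M_c and M_n = M_tau + M_c."""
    m_p = (1.0 / mu) * (1.0 - math.exp(-mu * mean_diff_d))
    m_n = (1.0 / gamma) * math.log(1.0 + math.exp(gamma * mean_same_d))
    return 0.5 * (m_p + m_n), 0.5 * (m_n - m_p)

def adaptive_margin_loss(distances, labels, m_tau, m_c):
    """Eq. (6), summed over the batch; labels are +1 (same) or -1 (different)."""
    return sum(max(m_c - y * (m_tau - d), 0.0)
               for d, y in zip(distances, labels))

# With m_tau=1.0, m_c=0.5 we get M_p=0.5 and M_n=1.5: a same pair at
# distance 0.2 and a different pair at distance 2.0 both incur zero loss,
# while a same pair at distance 0.8 is penalized by 0.3.
```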
In this case of transfer learning (see A.2.3), the weights from the pre-trained network are used to initialize the multi-task network, and fine-tuning of the entire network is then performed.
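The weight-transfer step can be sketched as a name-matched copy (a toy stand-in for PyTorch's load_state_dict(..., strict=False) pattern; the parameter dictionaries here are plain name-to-list mappings for illustration):

```python
def init_from_pretrained(target_params, pretrained_params):
    """Copy every pre-trained weight whose name and size match into the
    multi-task network's parameters; layers unique to the new prediction
    heads keep their fresh initialization. Returns the names loaded."""
    loaded = []
    for name, value in pretrained_params.items():
        if name in target_params and len(target_params[name]) == len(value):
            target_params[name] = list(value)
            loaded.append(name)
    return loaded

# The shared feature extractor ("conv1") is transferred; the new head
# and the Siamese-only layer are left untouched.
target = {"conv1.weight": [0.0, 0.0], "head.weight": [0.0]}
pretrained = {"conv1.weight": [1.0, 2.0], "embed.weight": [9.0]}
loaded = init_from_pretrained(target, pretrained)
```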

3 Results

The following sections present the results obtained for the different sets of experiments, starting with the single factor predictors and then moving to the multi-task prediction.

3.1 Single factor prediction

Cancer: Figure 8 shows the loss and accuracy curves for the training and validation sets. The training loss decreases, while the validation loss is first unstable before remaining close to constant (after 10 epochs). The ROC and confusion matrix in fig 9 show that the network has not learned the cancer prediction well, as the ROC is close to random prediction (AUC = 0.51). The confusion matrix shows an inability to classify cancer cases correctly: most cases are classified as benign. The unbalanced validation set gives a rather high validation accuracy, approximately 75%, but this figure must be read knowing that the network mainly predicts benign and the majority class is benign.

(a) Loss ResNet 3D (b) Accuracy ResNet 3D
Figure 8: Loss and Accuracy evolution for training and validation phase, Cancer prediction

Emphysema: For the second experiment, the prediction of emphysema, the ROC and confusion matrix show that the network is able to learn some aspects of emphysema. Indeed the AUC is 0.78, which is much better than random. The confusion matrix (fig 11b) shows that both the presence and the absence of emphysema are mostly classified accurately. In this case, the validation accuracy of 72% is meaningful, as both classes are mostly classified correctly. Moreover, the training and validation loss curves (fig 10a) are as expected from a network

which has learned something. Emphysema can thus be classified accurately by the network from a CT volume.

(a) ROC Cancer (b) Confusion matrix Cancer
Figure 9: ROC Curve and Confusion Matrix for Cancer prediction

(a) Loss ResNet 3D (b) Accuracy ResNet 3D
Figure 10: Loss and Accuracy evolution for training and validation phase, Emphysema prediction

Opacities: Concerning the opacity prediction, the confusion matrix (fig 13b) shows the network is reasonably accurate at recognizing opacities when present but has more trouble with their absence: there are many more false positives than false negatives. The number of false positives is too high to conclude significance, and thus it cannot be assumed that the network has learned.

3.2 Combining factors

Combining factors is first done by training the network from random initialization. The ROC curves and histogram distributions for emphysema, cancer and opacities in fig 17 and 18 (left column) show that the network does not

(a) ROC Emphysema (b) Confusion matrix Emphysema
Figure 11: ROC Curve and Confusion Matrix for Emphysema prediction

(a) Loss ResNet 3D (b) Accuracy ResNet 3D
Figure 12: Loss and Accuracy evolution for training and validation phase, Opacities prediction

(a) ROC Opacities (b) Confusion matrix Opacities
Figure 13: ROC Curve and Confusion Matrix for Opacities prediction

really learn: there is no separation between the classes, and the histogram distributions look like a Gaussian centered on 0.5, meaning the network has become

confused for most of the cases.

(a) ROC Siamese (b) Confusion matrix Siamese
Figure 14: ROC Curve and Confusion Matrix for self-supervision

For the self-supervision task, the maximum validation accuracy is 89.25% while the AUC is . In order to determine whether the network classifies two patients accurately, the distance between the two output vectors of the network is computed (fig 16a). The best threshold is found in fig 16b: 89.25% accuracy for a threshold of . If the distance is lower than the threshold, the images are classified as a same-patient pair, otherwise as a different-patient pair. The two classes are well separated on the distance graph (fig 16a). The filters of the first convolution (which correspond to the weights of this layer) are shown in fig 15; they indicate that the network has learned some patterns, as shapes are distinguishable in the filters. More importantly, the filters have changed significantly from the random initialization, so it can be concluded that the network is now able to recognize specific shapes through these filters.

(a) Random initialization of the filters (b) Filters after training
Figure 15: 32 filters of the first convolutional layer
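The threshold search of fig 16b can be sketched as follows (illustrative names and toy distances; the thesis's actual threshold value is elided in the text):

```python
def best_threshold(dists_same, dists_diff, candidates):
    """Pick the distance threshold that best separates same-patient from
    different-patient pairs: pairs below the threshold are predicted
    'same', pairs at or above it 'different'."""
    best_t, best_acc = None, -1.0
    n = len(dists_same) + len(dists_diff)
    for t in candidates:
        correct = (sum(d < t for d in dists_same)
                   + sum(d >= t for d in dists_diff))
        acc = correct / n
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy, well-separated distances: a mid-range threshold classifies
# every pair correctly.
t, acc = best_threshold([0.1, 0.2, 0.3], [0.8, 0.9, 1.0],
                        candidates=[0.05, 0.5, 0.95])
```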

(a) Distance distribution on the validation set for same-patient and different-patient image pairs (b) Evolution of validation accuracy vs distance threshold
Figure 16: Siamese network, validation set: distance and threshold

Training with Pre-training

When adding pre-training to the network, it can be noted that the network becomes better at classifying the different diseases, especially emphysema, where an AUC of 0.78 is reached (fig 17). Moreover, the histograms representing the distribution of the probability of having the disease for each class are no longer Gaussians roughly centered at 0.5; for emphysema, the two classes are well separated (fig 18).

Risk score

The final task is to link these results to a possible clinical application. From the histogram distributions, a risk score for each disease can be created: for a given predicted probability of having the disease, the risk score is the proportion of patients with that probability who actually have the disease. The risk-score curves are displayed in fig 19 and the formula is:

rs(prob = p) = disease(p) / total(p)    (7)
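Eq. (7) can be sketched as a binned empirical estimate (an illustrative implementation; the thesis does not specify the binning, so the bin count here is an assumption):

```python
def risk_score(probs, has_disease, bins=10):
    """Empirical risk score from eq. (7): for each predicted-probability
    bin, the fraction of patients in that bin who actually have the
    disease. Bins with no patients yield None."""
    totals = [0] * bins
    positives = [0] * bins
    for p, y in zip(probs, has_disease):
        b = min(int(p * bins), bins - 1)  # clamp p == 1.0 into the last bin
        totals[b] += 1
        positives[b] += y
    return [positives[b] / totals[b] if totals[b] else None
            for b in range(bins)]

# Toy scores: half the low-probability patients and all the
# high-probability patients have the disease.
rs = risk_score([0.05, 0.05, 0.95, 0.95], [0, 1, 1, 1])
```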

(a) ROC Cancer (b) ROC Cancer Pre-Trained (c) ROC Emphysema (d) ROC Emphysema Pre-Trained (e) ROC Opacities (f) ROC Opacities Pre-Trained
Figure 17: ROC curves: left column shows without pre-training, right column shows with pre-training

(a) Cancer (b) Cancer Pre-Trained (c) Emphysema (d) Emphysema Pre-Trained (e) Opacities (f) Opacities Pre-Trained
Figure 18: Distribution of probability of having the disease for each class: left column shows without pre-training, right column shows with pre-training

Figure 19: Risk score curve for each disease

4 Discussion

4.1 Main findings

Throughout this thesis it is shown first that there is valuable information in a whole CT scan. Indeed, from the single-factor predictors, it can be noted that it is possible to assess emphysema or opacities better than chance, even if the task is not fully learned. However, in the case of cancer, the network does not learn any features that enable it to predict cancer. In the case of emphysema, which was successful, the network is able to diagnose emphysema cases correctly without any prior knowledge of what emphysema is.

The task of detecting cancer by training on a full CT scan seems to be too complicated for the network. This can be explained by the fact that cancer is nowadays detected through nodules, and there is no real pattern inside the lung helping to predict it. Another reason could be that the dataset used represents a high lung-cancer-risk population, as all patients were heavy smokers. Emphysema detection works much better because the task is more visual and a pattern clearly exists in the whole lung.

The second aim of this thesis was to predict the three diseases with a single network, based on the assumption that the features for detecting cancer, emphysema, and opacities should be the same. When training from no prior knowledge, the results are not comparable, in terms of ROC, to the ones obtained with single-factor prediction. However, by using self-supervision as pre-training, the results are improved and the ROCs get closer to the ones obtained by training each factor separately.

Here two results are important. The first is the impact of pre-training a network and using a transfer learning approach. In our case, the filters obtained with the pre-training (fig 15b) show patterns such as horizontal and vertical stripes or blobs, which are well-known patterns for finding edges, shapes, or texture in an image. The difference in results between the non-pre-trained and pre-trained networks is shown in figs 17 and 18.
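The multi-task setup described above, a single shared trunk with one sigmoid output head per disease, can be sketched in plain Python. The trunk below is a toy stand-in for the 3D ResNet used in the thesis, and all weights are illustrative:

```python
import math

def shared_features(x):
    """Stand-in for the shared trunk (the 3D ResNet in the thesis):
    any mapping from a scan to a feature vector."""
    return [math.tanh(v) for v in x]

def head(features, weights, bias):
    """One sigmoid head per disease on top of the shared features."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def multi_task_loss(x, targets, heads):
    """Sum of per-disease binary cross-entropies, so one backward
    pass would update the shared trunk for all predictions at once."""
    f = shared_features(x)
    loss = 0.0
    for (w, b), t in zip(heads, targets):
        p = head(f, w, b)
        loss += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss

# Toy input and three heads for (cancer, emphysema, opacities).
x = [0.5, -1.2, 0.3]
heads = [([0.1, 0.2, -0.1], 0.0)] * 3
print(multi_task_loss(x, [0, 1, 0], heads))
```

Minimizing the summed loss is what lets the three tasks share one set of trunk features instead of training three separate networks.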
The AUCs increase for emphysema and opacities, and the histogram distributions clearly show the impact of pre-training. For emphysema, it enables a clear separation of the classes. Moreover, pre-training a network ensures that it learns visual features and not only machine-related features such as the resolution or the intensity of the CT scan.

The second result is that it is possible to use a multi-task network to learn multiple predictions at the same time. Indeed, with the multi-task network, results similar to those of the single-factor predictions are achieved. The result is nearly identical in the case of emphysema: AUC = 0.79 in single prediction versus AUC = 0.78 in multi-task prediction. This supports the idea that combining learning can help a network find the most interesting features and obtain better results on some tasks, whereas when learning only one prediction the network can become too specific and learn only local features.

Finally, from these experiments, a risk score for each disease can be calculated. In order to predict a global risk score, further clinical analysis is required. A simple way could be to average the different scores in order to obtain a global one.

4.2 General impact

Nowadays, images are essential in the detection of lung cancer. However, the use of these images is still empirical, as doctors only look at them through display software. Much research exists on the automatic detection or classification of nodules, but none of it tries to assess the information contained in the whole CT scan. In this thesis, it was demonstrated, despite many limitations, that it is possible to find information by only looking at a global scale (the full CT scan) rather than at a local scale, a specific area, or a specific feature. This information could be a great help for doctors in the future: with further development, it might be possible to automatically give a risk score to each patient going through a CT scan exam, and thus draw doctors' attention to patients at risk.

The work done in this thesis could also be used to curate big databases. For example, if 2000 emphysema patients must be found for a retrospective study, taking the 2000 patients with the highest probability of having emphysema will save a researcher a lot of time compared to choosing them by hand.

Nowadays, pneumologists use the BROCK score to evaluate lung cancer risk [22]. This score includes risk factors such as emphysema or the presence of nodules. By combining this work with local-feature work on nodule detection, it might be possible to calculate the BROCK score from the CT scan alone.

4.3 Comparison to other methods

The main use of 3D chest CT is lung nodule detection, as has been done in the Kaggle challenge with the LUNA dataset, or in the LUNA challenge itself [34, 9].
These methods reach high FROC scores (the FROC being a derivative of the ROC curve), far better than the AUC obtained with the global-feature approach. However, they focus on the specific task of nodule detection, using small patches (containing a nodule or just background) during training, while our method uses the full CT scan. Moreover, detection is a famous and well-studied task in deep learning, with many different approaches. The two methods could therefore be complementary, and their outputs could be combined in order to better predict lung cancer.

A study closer to what is done in this thesis is CheXNet [28]. In this study, transfer learning is used on a 2D network to predict 14 different lung diseases from 2D X-ray images. Even if some concerns have been raised about this dataset and study [24], the presented results are good: an AUC of 0.92 for emphysema and 0.78 for nodules. These results can be explained by the use of a pre-trained network, DenseNet, pre-trained on ImageNet. In this thesis, the inputs are 3D volumes, and no 3D pre-trained network has been released yet.

4.4 Limitations

Several limitations apply to this project, and most of them are due to the availability and consistency of the data. First of all, all training was done on the NLST dataset, whose inclusion criteria require a smoking history of more than 30 pack-years. This of course biases the training of the network and impacts its robustness. This issue raises the problem of obtaining clinical data for healthy patients; such data would considerably improve all artificial intelligence applications in the medical field. Indeed, in our case, some patients are not diagnosed with cancer, for example, but their lungs have many damages due to their smoking history, and presenting such a patient to the network as healthy (absence of cancer) might confuse it.

The second limitation is the noise in the metadata. While analyzing the results, many mistakes in the markup of emphysema or opacities were found, which surely confused the network in its learning process. Remarkably, the emphysema prediction was sometimes better than the human markup: for most of the cases with a very high predicted probability of emphysema while the human markup said no emphysema, checking the scan revealed that the patient did indeed have emphysema, as shown in fig 20.

A more technical limitation is GPU memory. 3D scans are heavy files and take a lot of GPU memory.
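One common way to emulate a larger effective batch under such memory limits is gradient accumulation: averaging gradients over several small micro-batches before a single weight update. This is a generic technique, not one used in the thesis; a minimal sketch with a made-up gradient function:

```python
def accumulate_gradients(micro_batches, grad_fn):
    """Average gradients over several micro-batches before one weight
    update, emulating a batch larger than GPU memory allows.
    `grad_fn` is a hypothetical function returning a gradient vector
    for one micro-batch."""
    acc = None
    for batch in micro_batches:
        g = grad_fn(batch)
        acc = g if acc is None else [a + gi for a, gi in zip(acc, g)]
    return [a / len(micro_batches) for a in acc]

# Toy gradient function: the mean of each batch as a 1-element gradient.
grad_fn = lambda batch: [sum(batch) / len(batch)]
print(accumulate_gradients([[1.0, 2.0], [3.0, 5.0]], grad_fn))  # [2.75]
```

Two micro-batches of 8 scans would then behave, for the weight update, like one batch of 16.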
A direct consequence is the small batch size used for training: 16, compared to the batch size of 128 used in most other deep learning training. This can influence the training, as backpropagation is more likely to be sensitive to batch-specific particularities when the batch is small.

4.5 Future work

A lot of new work can follow from this promising project. First of all, repeating the same experiments on a different dataset would be the first thing to do to validate the work done. The validity of the metadata would have to be ensured while collecting it.

Figure 20: False positive emphysema

Another idea would be to work with new metadata. Many diseases or infectious patterns can appear in the lung, and training the network on all of them at the same time would give a more robust network and thus better results. In a more technical approach, the self-supervision training can certainly be improved by using another task or another loss function such as the histogram loss. Testing new networks, or combining networks, would also be interesting; for example, working on a texture network. Finally, it is also possible to apply this idea of working on a full scan to other parts of the body: for example, breast cancer or liver cancer could find a use for such a damage score.

5 Conclusion

Different approaches to assess lung damages have been explored in this thesis on a large dataset of patients. First, the single-predictor approach shows that a network is able to learn some diseases, such as emphysema or opacities, with a certain degree of confidence by only looking at a global scale. Moreover, the work done here shows that it is possible to predict multiple diseases using only one network, and thus that the same features are relevant for detecting different diseases. This thesis also emphasizes the importance of pre-training. The self-supervision method used here enabled the network to be initialized with visual features useful for the multi-task learning. The importance of transfer learning has been shown, as the results are better with pre-training and fine-tuning. Even if the work has limitations, such as a biased database and noisy metadata, the overall results show that it is indeed possible to retrieve useful information by looking at a full CT scan, and that deep neural networks are able to learn at a large scale. The results are encouraging, and this work can lead to many other experiments on different kinds or locations of images.

References

[1] IARC. GLOBOCAN: Data visualization tools that present current national estimates of cancer incidence, mortality, and prevalence. URL: http://gco.iarc.fr/today/online-analysis-multi-bars?mode=cancer&mode_population=continents&population=900&sex=0&cancer=29&type=0&statistic=0&prevalence=0&color_palette=default.
[2] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[3] Isabel Bush. Lung nodule detection and classification. Technical report, Stanford Computer Science.
[4] M. E. J. Callister et al. British Thoracic Society guidelines for the investigation and management of pulmonary nodules: accredited by NICE. Thorax, 70(Suppl 2):ii1-ii54, 2015.
[5] Wanqing Chen et al. Cancer statistics in China. CA: A Cancer Journal for Clinicians, 66(2), 2016.
[6] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2005.
[7] Ciro Donalek. Supervised and unsupervised learning. In Astronomy Colloquia, USA.
[8] Jacques Ferlay et al. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. International Journal of Cancer, 2015.
[9] grt123. URL: solution-grt123-team.pdf.
[10] Duc M. Ha and Peter J. Mazzone. Pulmonary nodules. Age, 30, 2014.
[11] Mohammad Havaei et al. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35, 2017.
[12] Kaiming He et al. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[13] R. T. Heelan et al. Non-small-cell lung cancer: results of the New York screening program. Radiology, 1984.
[14] National Cancer Institute. Cancer statistics. URL: gov/about-cancer/understanding/statistics.
[15] Shingo Iwano et al. Computer-aided diagnosis: a shape classification of pulmonary nodules imaged by high-resolution CT. Computerized Medical Imaging and Graphics, 29(7), 2005.
[16] Michael T. Jaklitsch et al. The American Association for Thoracic Surgery guidelines for lung cancer screening using low-dose computed tomography scans for lung cancer survivors and other high-risk groups. The Journal of Thoracic and Cardiovascular Surgery, 2012.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[18] Devinder Kumar, Alexander Wong, and David A. Clausi. Lung nodule classification using deep features in CT images. In Conference on Computer and Robot Vision (CRV), IEEE, 2015.
[19] Geert Litjens et al. A survey on deep learning in medical image analysis. arXiv preprint, 2017.
[20] Dennis Mackin et al. Measuring CT scanner variability of radiomics features. Investigative Radiology, 2015.
[21] Heber MacMahon et al. Guidelines for management of incidental pulmonary nodules detected on CT images: from the Fleischner Society 2017. Radiology, 2017.
[22] Annette McWilliams et al. Probability of cancer in pulmonary nodules detected on first screening CT. New England Journal of Medicine, 2013.
[23] K. M. Venkat Narayan et al. Report of a National Heart, Lung, and Blood Institute workshop: heterogeneity in cardiometabolic risk in Asian Americans in the US. Journal of the American College of Cardiology, 2010.
[24] Luke Oakden-Rayner. CheXNet: an in-depth review. URL: https://lukeoakdenrayner.wordpress.com/2018/01/24/chexnet-an-in-depth-review/.
[25] Maxime Oquab et al. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[26] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010.
[27] Adam Paszke et al. Automatic differentiation in PyTorch. 2017.
[28] Pranav Rajpurkar et al. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint, 2017.
[29] Ali Sharif Razavian et al. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014.
[30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint, 2014.
[31] Stanford University. Transfer learning. URL: io/transfer-learning/.
[32] Jiayun Wang et al. Deep ranking model by large adaptive margin learning for person re-identification. Pattern Recognition, 74, 2018.
[33] W. L. Watson and A. J. Conte. Lung cancer and smoking. The American Journal of Surgery, 89(2), 1955.
[34] Julian de Wit. URL: .
[35] Jason Yosinski et al. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
[36] Wentao Zhu et al. DeepLung: 3D deep convolutional nets for automated pulmonary nodule detection and classification. arXiv preprint, 2017.

A State of the Art

A.1 Clinical Background

A.1.1 Lung Cancer

According to the GLOBOCAN series published in 2012, lung cancer is the most commonly diagnosed cancer in the world, with around 1.82 million new cases each year. In 2012, 1.6 million people died from lung cancer, which represents 19.4% of cancer deaths. Lung cancer incidence is higher in developed countries in America, Europe, and Asia [5, 8]. Moreover, significant upward trends are visible in these countries, especially for Asian females [5]. Lung cancer is mainly related to smoking history [33], but not exclusively, as the number of lung cancers in the Asian female population is increasing while this population has a very limited smoking background [5].

Figure 21: Number of incident cases worldwide in 2014 [1]

Like every cancer, lung cancer appears when abnormal cells grow in one or both lungs, later forming what is called a tumor. A tumor can be benign or malignant. Lung cancer is divided into two types:

Primary lung cancer: cancer which originates in the lung, itself divided into two main types:
- Non-small-cell lung cancer (NSCLC): 80% of cases, divided into 4 sub-categories (squamous cell, adenocarcinoma, bronchioalveolar carcinoma, large-cell undifferentiated carcinoma)
- Small-cell lung cancer (SCLC): 20% of cases; small cells which multiply quickly.

Secondary lung cancer: cancer which starts in another part of the body and metastasizes to the lung.

Lung cancer is mainly discovered from incidental findings (cardiovascular computed tomography (CT) scan, liver CT) and from screening programs like in the US [16]. CT images are a series of X-ray images taken from many different rotations, producing cross-sectional images reconstructed by a computer. The use of digital geometry processing allows creating a 3D volume from the series of 2D images. This is an expensive technology which provides detailed information about body structures, and about lung structure in the case of chest CT scans. According to the guidelines of these screening programs, a chest low-dose computed tomography (LDCT) must be performed, from which nodules, benign or malignant, can be revealed and a decision made according to the diagnosis.

A.1.2 Nodules

A pulmonary nodule is a small, round or egg-shaped lesion in the lungs which results in a radiographic opacity [10]. Nodules are considered to be less than 30 mm in size. They are differentiated into three main categories: solid nodules, part-solid nodules, and pure ground-glass nodules [4, 21] (fig 22). Depending on the category, the guidelines for the management of pulmonary nodules vary [4, 21]. Figure 23 summarises the guidelines from the Fleischner Society. Sub-solid nodules have a higher likelihood of malignancy [22]. However, many factors increasing the risk of malignancy exist, and it will be useful to keep them in mind while choosing and training the network.

Figure 22: Types of nodule: (a) solid nodule, (b) part-solid nodule, (c) ground-glass opacity nodule

Table 3: Nodule type and malignancy [22]

Nodule type    Characteristics                                              Benign   Malignant
Solid          Obscures the underlying bronchovascular structure.           98.9%    1.1%
Ground-glass   Opacification greater than that of the background, but
               through which the underlying vascular structure is visible.  98.1%    1.9%
Part-solid     Mix of the two previous types of nodules.                    93.4%    6.6%

Figure 23: Fleischner Society 2017 guidelines for management of incidentally detected pulmonary nodules in adults [21]

A.1.3 Risk factors for nodule malignancy

The assessment of nodule malignancy is a true challenge nowadays for better prevention of lung cancer. Indeed, the sooner a nodule is detected as malignant, the better the treatment will be. Many risk factors for malignancy have been reported in the literature, which nowadays help doctors

to determine which nodule management to follow:

Nodule size [21]
The main risk factor is the size of the nodule. Nodule sizes are divided into three categories: <6 mm (<100 mm3), 6-8 mm (100-250 mm3), and >8 mm (>250 mm3). The smaller nodules are more likely to be benign and do not require any follow-up in most cases, whereas the biggest ones require a close follow-up (3 to 6 months).

Nodule morphology [15]
Spiculated nodules have been associated with malignancy for many years [21] and are thus classified as high-risk nodules.

Figure 24: Different nodule morphologies

Nodule location [21]
An upper-lobe nodule location is a high-risk factor [22].

Multiplicity [21]
High multiplicity is a low-risk factor. The presence of 5 or more nodules likely results from an infection, and such nodules are then benign. Having between 1 and 4 nodules increases the risk of malignancy.

Growth rate [21]
The growth rate is estimated by the volume doubling time (VDT), which corresponds to the number of days in which the nodule doubles in volume. A VDT < 400 days is a high-risk factor.

Age, sex, race [21]
Lung cancer is really unusual before 40 years old. However, lung cancer incidence increases with each added decade. Women are more likely to develop

lung cancer than men, and the incidence of lung cancer is much higher in the black population than in the white population.

Tobacco [21]
A smoking history increases the risk of having lung cancer by 10 to 35 times compared to non-smokers.

A.1.4 Problem of detection

Pulmonary nodules are detected by radiologists by considering the shape, size, and brightness of the unknown mass in the lung. Studies have shown that only 68% of nodules are found with this visual human detection [13]. The early classification of nodules remains a challenge in order to reduce the aggressiveness of the follow-up and treatment of patients. Computer-aided detection (CAD) therefore has a large role to play in nodule detection and is a topic of high interest [3, 18]. In particular, the new deep learning architectures have been promising since their appearance less than 10 years ago and are the main topic of this master thesis. We will now focus on the engineering approach to the problem, by first presenting briefly what deep learning is.

A.2 Engineering Background

A.2.1 Deep Learning

As described previously, new technologies have a more and more important role to play in medicine. Machine learning, one of these new technologies, is a branch of artificial intelligence in which a system has the ability to learn and improve by itself from experience. In a simple definition, machine learning uses algorithms to parse data, learn from them, and output a prediction for a particular task. Machine learning consists of many sub-categories, such as decision tree learning, rule-based machine learning, or deep learning.

Figure 25: Example of a neural network with one hidden layer (in the center)

Deep learning is a newer field of machine learning based on artificial neural networks but using deeper architectures (fig 26). A network is composed of nodes, which are linked together by weights (fig 25). When an input is sent, only a few nodes fire in order to produce an output. The goal is to adapt the weights by changing their values in order to get the right nodes firing. Deep learning was first inspired by the functioning and structure of the human brain and how information is delivered from one neuron to another. The advantage of deep learning is that each layer produces a certain representation of the input data, which is also used as input for the next level of representation. Passing through many different layers, it is then possible to combine all these representations in order to perform any kind of task, depending on the chosen network. For example, in the medical world, deep learning architectures are now used to perform [2]: image, object, or lesion classification [19], detection, segmentation [11], registration, and image generation and enhancement. Moreover, deep learning is a hot topic in the medical field, with an enormous increase in the number of papers published within the last two years [2] (fig 27).
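The layered representation described above, where each layer's output becomes the next layer's input, can be sketched as a minimal forward pass (all weights here are made up for illustration):

```python
import math

def layer(inputs, weights, biases):
    """One fully-connected layer: weighted sums of the inputs followed
    by a nonlinearity, producing the next level of representation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, network):
    """Pass the input through every layer in turn; each layer's
    output representation feeds the next layer."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Toy 2-3-1 network: one hidden layer of 3 nodes, one output node.
net = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.7, -0.5, 0.2]], [0.05]),                                # output layer
]
print(forward([1.0, 2.0], net))  # a single output in (-1, 1)
```

Training would consist of adjusting the weight and bias values so that the output matches the desired one, which is what the next section describes.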

Figure 26: Example of a deep learning architecture: GoogLeNet

Figure 27: a) Number of papers including machine learning techniques (CNN = convolutional neural network); b) number of papers depending on the task

During the training phase, the weights and biases are updated at each step in order to reach the best model. The learning can be supervised or unsupervised [7]. Supervised learning includes labels, which represent the desired output for an input. Thus, every time we show a new input to the network (a CT image in our case), we also provide the output it should return (benign or malignant in our case); then, thanks to a specific optimizer, the weights and biases are automatically updated to reach the best performance. For example, in the case of nodule classification, every time we show a new nodule to the network, we have to provide the desired output, which is the type of this nodule (solid, sub-solid, ground-glass). In unsupervised learning, no labels are provided, and the network makes its own decision about how to classify the data.

In the validation phase, the goal is to assess the performance of the network by showing it input data that it has never seen before. We can then assess how well the network behaves with new and unknown data. This allows engineers to find the best model.
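The three phases require three disjoint subsets of the images. A minimal sketch of such a split (the 70/15/15 ratios here are illustrative, not the thesis's):

```python
import random

def split_dataset(items, train=0.7, val=0.15, seed=0):
    """Shuffle once, then carve the data into the training,
    validation, and testing subsets described above."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train)
    n_val = round(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

Keeping the three subsets disjoint is what makes the validation and test performance an honest estimate of behaviour on unseen data.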

The testing phase is the last phase: once the best model has been chosen through the validation phase, it evaluates the general performance of the model.

A.2.3 Transfer learning

As said above, deep learning enables us to work with features which are computed from the database rather than imposed by a model or selected by a human. Such techniques offer the advantage of selecting optimal features for the task and enable a higher number of degrees of freedom for the classifier than a model ever would, but the training of such systems and the management of a large number of degrees of freedom become a challenge. Transfer learning is therefore a method more and more used in the field. The main idea is that the features learned by one system are reused and adapted to another system. It enables better convergence of the system for complex tasks, or for tasks where the amount of available data is too low [26, 25, 29, 35]. For example, it is possible to use the features of a network which has learned to classify goldfish, giant schnauzers, tiger cats, etc. (AlexNet trained on ImageNet [17]) in order to classify medical images, as many features are shared by every kind of image (for instance, edges). Using transfer learning saves a lot of computational time compared to training a network from no prior knowledge, initialized with random weights. Several methods exist to adapt these existing models to a specific application [31]:

Feature extractor
This consists of removing the last fully-connected layer of a pre-trained network (fig 28). Pre-trained means that we keep the weights learned from a previous training run on a set of general images. The remaining part of the network is then considered as a feature extractor. The last fully-connected layer is replaced by a linear classifier trained with the specific set of images.
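The feature-extractor idea, and more generally freezing all but the last few layers of a pre-trained network, can be sketched with a toy one-weight-per-layer network (the function and its parameters are illustrative, not a real framework API):

```python
def finetune_step(layers, grads, lr, n_trainable):
    """One gradient step that updates only the last `n_trainable`
    layers; earlier layers keep their pre-trained weights.
    n_trainable == 1 mimics the feature-extractor setting, while
    n_trainable == len(layers) mimics full fine-tuning."""
    frozen = len(layers) - n_trainable
    return [w if i < frozen else w - lr * g
            for i, (w, g) in enumerate(zip(layers, grads))]

# Toy network with one weight per layer: only the last layer moves.
layers = [1.0, 2.0, 3.0]
grads = [0.5, 0.5, 0.5]
print(finetune_step(layers, grads, lr=0.1, n_trainable=1))  # last layer updated only
```

In a real framework such as PyTorch the same effect is obtained by disabling gradient computation for the frozen parameters.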
The network thus learns how to classify specific images (for instance medical images) based on features learned from general images.

Fine-tuning
The second approach consists of replacing a larger part of the existing pre-trained network and fine-tuning the weights (fig 28). This is done with backpropagation on the replaced layers. It is possible to fine-tune the entire network; this takes more computational time but remains shorter than training from no prior knowledge, as the weights are not distributed randomly. This method is based on the principle that a network becomes more and more specific with depth; then, if we retrain a

sufficient number of layers, it is possible to erase the specificity learned from a previous dataset and train the network on the new dataset.

A.2.4 Challenges with deep learning

Deep learning is a promising and powerful tool, but using and understanding it remains tricky. An engineer has to face many challenges in order to get results from these networks. The following are the main challenges that I had to face during this master thesis, and that every engineer needs to think about while working with deep learning.

Architecture selection in transfer learning
Since the revolution of deep learning in 2012 with the creation of the AlexNet network, which won the ImageNet competition (a classification challenge over 1000 classes), numerous and varied network architectures have been created. Each model has its own specificities, and thus advantages and drawbacks. A good comprehension of them enables a user to choose the right one depending on the task to be achieved. Some comparisons have been done on the same dataset, as seen in fig 29. Here are some of the well-known, broadly used architectures:

Figure 28: a) AlexNet network; b) AlexNet network with feature extractor; c) AlexNet network with fine-tuning of the three last layers

Figure 29: Top-1 accuracy vs. operations, size & parameters

AlexNet [17]
AlexNet is the network which changed the vision and use of deep neural networks by being the first at such a large scale. The network is used for the classification of images: from a 256x256 image, it gives a probability of belonging to one of 1000 classes. It uses large convolutions in order to extract spatial features from the images. The main breakthrough of AlexNet is the use of GPUs for the first time to perform the training, which reduces it considerably.

VGG [30]
The main difference between VGG, developed in Oxford, and AlexNet is the use of a series of smaller 3x3 spatial convolutions. The number of parameters, and thus the power of the network, increases a lot, but so does the computation time.

GoogLeNet
GoogLeNet is a more recent network based on the concept of inception. An inception module is a parallel combination of different operations (convolutions, for example) done with a smaller number of parameters. It has been shown that parallelizing these operations leads to equivalent results with a reduced computation time.

Figure 30: Inception module

ResNet [12]
Finally, one of the most famous networks is ResNet. Its differentiation is based on the idea that one output should feed the input of not only one but two successive layers.

Figure 31: One output feeds two different inputs

The choice of network is thus important and challenging, but the right network without the right data will never learn the desired output.

Number and quality of data: pre-processing
In deep learning, and more generally in machine learning, the number of data and their quality have a high importance for the performance of a network. Indeed, like the human brain, the more data the network sees, the more experienced it becomes, and thus the more accurate it will be on a particular task. The number of data is then a key point, especially in the medical field, where collecting a large amount of data is challenging. Pre-processing is a key step in the success of a network on a larger scale. Indeed, CT scans are performed following different protocols depending on the machine, the hospital, and also the user. All these differences result in differences in the images: resolution, centring of the region of interest, different levels of contrast. But the network is trained on one set of images and the goal is to use


More information

arxiv: v2 [cs.cv] 8 Mar 2018

arxiv: v2 [cs.cv] 8 Mar 2018 Automated soft tissue lesion detection and segmentation in digital mammography using a u-net deep learning network Timothy de Moor a, Alejandro Rodriguez-Ruiz a, Albert Gubern Mérida a, Ritse Mann a, and

More information

Highly Accurate Brain Stroke Diagnostic System and Generative Lesion Model. Junghwan Cho, Ph.D. CAIDE Systems, Inc. Deep Learning R&D Team

Highly Accurate Brain Stroke Diagnostic System and Generative Lesion Model. Junghwan Cho, Ph.D. CAIDE Systems, Inc. Deep Learning R&D Team Highly Accurate Brain Stroke Diagnostic System and Generative Lesion Model Junghwan Cho, Ph.D. CAIDE Systems, Inc. Deep Learning R&D Team Established in September, 2016 at 110 Canal st. Lowell, MA 01852,

More information

Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System

Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System T.Manikandan 1, Dr. N. Bharathi 2 1 Associate Professor, Rajalakshmi Engineering College, Chennai-602 105 2 Professor, Velammal Engineering

More information

Multi-attention Guided Activation Propagation in CNNs

Multi-attention Guided Activation Propagation in CNNs Multi-attention Guided Activation Propagation in CNNs Xiangteng He and Yuxin Peng (B) Institute of Computer Science and Technology, Peking University, Beijing, China pengyuxin@pku.edu.cn Abstract. CNNs

More information

arxiv: v2 [cs.cv] 19 Dec 2017

arxiv: v2 [cs.cv] 19 Dec 2017 An Ensemble of Deep Convolutional Neural Networks for Alzheimer s Disease Detection and Classification arxiv:1712.01675v2 [cs.cv] 19 Dec 2017 Jyoti Islam Department of Computer Science Georgia State University

More information

DIAGNOSTIC CLASSIFICATION OF LUNG NODULES USING 3D NEURAL NETWORKS

DIAGNOSTIC CLASSIFICATION OF LUNG NODULES USING 3D NEURAL NETWORKS DIAGNOSTIC CLASSIFICATION OF LUNG NODULES USING 3D NEURAL NETWORKS Raunak Dey Zhongjie Lu Yi Hong Department of Computer Science, University of Georgia, Athens, GA, USA First Affiliated Hospital, School

More information

Convolutional Neural Networks (CNN)

Convolutional Neural Networks (CNN) Convolutional Neural Networks (CNN) Algorithm and Some Applications in Computer Vision Luo Hengliang Institute of Automation June 10, 2014 Luo Hengliang (Institute of Automation) Convolutional Neural Networks

More information

Deep CNNs for Diabetic Retinopathy Detection

Deep CNNs for Diabetic Retinopathy Detection Deep CNNs for Diabetic Retinopathy Detection Alex Tamkin Stanford University atamkin@stanford.edu Iain Usiri Stanford University iusiri@stanford.edu Chala Fufa Stanford University cfufa@stanford.edu 1

More information

Improved Intelligent Classification Technique Based On Support Vector Machines

Improved Intelligent Classification Technique Based On Support Vector Machines Improved Intelligent Classification Technique Based On Support Vector Machines V.Vani Asst.Professor,Department of Computer Science,JJ College of Arts and Science,Pudukkottai. Abstract:An abnormal growth

More information

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,

More information

Differentiating Tumor and Edema in Brain Magnetic Resonance Images Using a Convolutional Neural Network

Differentiating Tumor and Edema in Brain Magnetic Resonance Images Using a Convolutional Neural Network Original Article Differentiating Tumor and Edema in Brain Magnetic Resonance Images Using a Convolutional Neural Network Aida Allahverdi 1, Siavash Akbarzadeh 1, Alireza Khorrami Moghaddam 2, Armin Allahverdy

More information

LUNG CANCER continues to rank as the leading cause

LUNG CANCER continues to rank as the leading cause 1138 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 24, NO. 9, SEPTEMBER 2005 Computer-Aided Diagnostic Scheme for Distinction Between Benign and Malignant Nodules in Thoracic Low-Dose CT by Use of Massive

More information

INTRODUCTION TO MACHINE LEARNING. Decision tree learning

INTRODUCTION TO MACHINE LEARNING. Decision tree learning INTRODUCTION TO MACHINE LEARNING Decision tree learning Task of classification Automatically assign class to observations with features Observation: vector of features, with a class Automatically assign

More information

Copyright 2007 IEEE. Reprinted from 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, April 2007.

Copyright 2007 IEEE. Reprinted from 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, April 2007. Copyright 27 IEEE. Reprinted from 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, April 27. This material is posted here with permission of the IEEE. Such permission of the

More information

LUNG NODULE SEGMENTATION IN COMPUTED TOMOGRAPHY IMAGE. Hemahashiny, Ketheesan Department of Physical Science, Vavuniya Campus

LUNG NODULE SEGMENTATION IN COMPUTED TOMOGRAPHY IMAGE. Hemahashiny, Ketheesan Department of Physical Science, Vavuniya Campus LUNG NODULE SEGMENTATION IN COMPUTED TOMOGRAPHY IMAGE Hemahashiny, Ketheesan Department of Physical Science, Vavuniya Campus tketheesan@vau.jfn.ac.lk ABSTRACT: The key process to detect the Lung cancer

More information

arxiv: v1 [cs.cv] 21 Jul 2017

arxiv: v1 [cs.cv] 21 Jul 2017 A Multi-Scale CNN and Curriculum Learning Strategy for Mammogram Classification William Lotter 1,2, Greg Sorensen 2, and David Cox 1,2 1 Harvard University, Cambridge MA, USA 2 DeepHealth Inc., Cambridge

More information

Object Detectors Emerge in Deep Scene CNNs

Object Detectors Emerge in Deep Scene CNNs Object Detectors Emerge in Deep Scene CNNs Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba Presented By: Collin McCarthy Goal: Understand how objects are represented in CNNs Are

More information

Cancer Cells Detection using OTSU Threshold Algorithm

Cancer Cells Detection using OTSU Threshold Algorithm Cancer Cells Detection using OTSU Threshold Algorithm Nalluri Sunny 1 Velagapudi Ramakrishna Siddhartha Engineering College Mithinti Srikanth 2 Velagapudi Ramakrishna Siddhartha Engineering College Kodali

More information

[Kiran, 2(1): January, 2015] ISSN:

[Kiran, 2(1): January, 2015] ISSN: AN EFFICIENT LUNG CANCER DETECTION BASED ON ARTIFICIAL NEURAL NETWORK Shashi Kiran.S * Assistant Professor, JNN College of Engineering, Shimoga, Karnataka, India Keywords: Artificial Neural Network (ANN),

More information

A convolutional neural network to classify American Sign Language fingerspelling from depth and colour images

A convolutional neural network to classify American Sign Language fingerspelling from depth and colour images A convolutional neural network to classify American Sign Language fingerspelling from depth and colour images Ameen, SA and Vadera, S http://dx.doi.org/10.1111/exsy.12197 Title Authors Type URL A convolutional

More information

Early Detection of Lung Cancer

Early Detection of Lung Cancer Early Detection of Lung Cancer Aswathy N Iyer Dept Of Electronics And Communication Engineering Lymie Jose Dept Of Electronics And Communication Engineering Anumol Thomas Dept Of Electronics And Communication

More information

Lung Region Segmentation using Artificial Neural Network Hopfield Model for Cancer Diagnosis in Thorax CT Images

Lung Region Segmentation using Artificial Neural Network Hopfield Model for Cancer Diagnosis in Thorax CT Images Automation, Control and Intelligent Systems 2015; 3(2): 19-25 Published online March 20, 2015 (http://www.sciencepublishinggroup.com/j/acis) doi: 10.11648/j.acis.20150302.12 ISSN: 2328-5583 (Print); ISSN:

More information

Deep-Learning Based Semantic Labeling for 2D Mammography & Comparison of Complexity for Machine Learning Tasks

Deep-Learning Based Semantic Labeling for 2D Mammography & Comparison of Complexity for Machine Learning Tasks Deep-Learning Based Semantic Labeling for 2D Mammography & Comparison of Complexity for Machine Learning Tasks Paul H. Yi, MD, Abigail Lin, BSE, Jinchi Wei, BSE, Haris I. Sair, MD, Ferdinand K. Hui, MD,

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the

More information

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION 1 R.NITHYA, 2 B.SANTHI 1 Asstt Prof., School of Computing, SASTRA University, Thanjavur, Tamilnadu, India-613402 2 Prof.,

More information

ImageCLEF2018: Transfer Learning for Deep Learning with CNN for Tuberculosis Classification

ImageCLEF2018: Transfer Learning for Deep Learning with CNN for Tuberculosis Classification ImageCLEF2018: Transfer Learning for Deep Learning with CNN for Tuberculosis Classification Amilcare Gentili 1-2[0000-0002-5623-7512] 1 San Diego VA Health Care System, San Diego, CA USA 2 University of

More information

Deep learning and non-negative matrix factorization in recognition of mammograms

Deep learning and non-negative matrix factorization in recognition of mammograms Deep learning and non-negative matrix factorization in recognition of mammograms Bartosz Swiderski Faculty of Applied Informatics and Mathematics Warsaw University of Life Sciences, Warsaw, Poland bartosz_swiderski@sggw.pl

More information

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns 1. Introduction Vasily Morzhakov, Alexey Redozubov morzhakovva@gmail.com, galdrd@gmail.com Abstract Cortical

More information

Skin cancer reorganization and classification with deep neural network

Skin cancer reorganization and classification with deep neural network Skin cancer reorganization and classification with deep neural network Hao Chang 1 1. Department of Genetics, Yale University School of Medicine 2. Email: changhao86@gmail.com Abstract As one kind of skin

More information

Comparison of Two Approaches for Direct Food Calorie Estimation

Comparison of Two Approaches for Direct Food Calorie Estimation Comparison of Two Approaches for Direct Food Calorie Estimation Takumi Ege and Keiji Yanai Department of Informatics, The University of Electro-Communications, Tokyo 1-5-1 Chofugaoka, Chofu-shi, Tokyo

More information

arxiv: v1 [cs.cv] 9 Oct 2018

arxiv: v1 [cs.cv] 9 Oct 2018 Automatic Segmentation of Thoracic Aorta Segments in Low-Dose Chest CT Julia M. H. Noothout a, Bob D. de Vos a, Jelmer M. Wolterink a, Ivana Išgum a a Image Sciences Institute, University Medical Center

More information

Retinopathy Net. Alberto Benavides Robert Dadashi Neel Vadoothker

Retinopathy Net. Alberto Benavides Robert Dadashi Neel Vadoothker Retinopathy Net Alberto Benavides Robert Dadashi Neel Vadoothker Motivation We were interested in applying deep learning techniques to the field of medical imaging Field holds a lot of promise and can

More information

arxiv: v2 [cs.cv] 22 Mar 2018

arxiv: v2 [cs.cv] 22 Mar 2018 Deep saliency: What is learnt by a deep network about saliency? Sen He 1 Nicolas Pugeault 1 arxiv:1801.04261v2 [cs.cv] 22 Mar 2018 Abstract Deep convolutional neural networks have achieved impressive performance

More information

arxiv: v2 [cs.cv] 7 Jun 2018

arxiv: v2 [cs.cv] 7 Jun 2018 Deep supervision with additional labels for retinal vessel segmentation task Yishuo Zhang and Albert C.S. Chung Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering,

More information

arxiv: v1 [cs.ai] 28 Nov 2017

arxiv: v1 [cs.ai] 28 Nov 2017 : a better way of the parameters of a Deep Neural Network arxiv:1711.10177v1 [cs.ai] 28 Nov 2017 Guglielmo Montone Laboratoire Psychologie de la Perception Université Paris Descartes, Paris montone.guglielmo@gmail.com

More information

Classification of breast cancer histology images using transfer learning

Classification of breast cancer histology images using transfer learning Classification of breast cancer histology images using transfer learning Sulaiman Vesal 1 ( ), Nishant Ravikumar 1, AmirAbbas Davari 1, Stephan Ellmann 2, Andreas Maier 1 1 Pattern Recognition Lab, Friedrich-Alexander-Universität

More information

Shu Kong. Department of Computer Science, UC Irvine

Shu Kong. Department of Computer Science, UC Irvine Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge and philosophy 4. Fine-grained classification with

More information

POC Brain Tumor Segmentation. vlife Use Case

POC Brain Tumor Segmentation. vlife Use Case Brain Tumor Segmentation vlife Use Case 1 Automatic Brain Tumor Segmentation using CNN Background Brain tumor segmentation seeks to separate healthy tissue from tumorous regions such as the advancing tumor,

More information

arxiv: v2 [cs.cv] 3 Jun 2018

arxiv: v2 [cs.cv] 3 Jun 2018 S4ND: Single-Shot Single-Scale Lung Nodule Detection Naji Khosravan and Ulas Bagci Center for Research in Computer Vision (CRCV), School of Computer Science, University of Central Florida, Orlando, FL.

More information

COMPUTERIZED SYSTEM DESIGN FOR THE DETECTION AND DIAGNOSIS OF LUNG NODULES IN CT IMAGES 1

COMPUTERIZED SYSTEM DESIGN FOR THE DETECTION AND DIAGNOSIS OF LUNG NODULES IN CT IMAGES 1 ISSN 258-8739 3 st August 28, Volume 3, Issue 2, JSEIS, CAOMEI Copyright 26-28 COMPUTERIZED SYSTEM DESIGN FOR THE DETECTION AND DIAGNOSIS OF LUNG NODULES IN CT IMAGES ALI ABDRHMAN UKASHA, 2 EMHMED SAAID

More information

A comparative study of machine learning methods for lung diseases diagnosis by computerized digital imaging'"

A comparative study of machine learning methods for lung diseases diagnosis by computerized digital imaging' A comparative study of machine learning methods for lung diseases diagnosis by computerized digital imaging'" Suk Ho Kang**. Youngjoo Lee*** Aostract I\.' New Work to be 1 Introduction Presented U Mater~al

More information

Shu Kong. Department of Computer Science, UC Irvine

Shu Kong. Department of Computer Science, UC Irvine Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge 4. Fine-grained classification with holistic representation

More information

arxiv: v1 [cs.cv] 30 May 2018

arxiv: v1 [cs.cv] 30 May 2018 A Robust and Effective Approach Towards Accurate Metastasis Detection and pn-stage Classification in Breast Cancer Byungjae Lee and Kyunghyun Paeng Lunit inc., Seoul, South Korea {jaylee,khpaeng}@lunit.io

More information

arxiv: v1 [cs.lg] 4 Feb 2019

arxiv: v1 [cs.lg] 4 Feb 2019 Machine Learning for Seizure Type Classification: Setting the benchmark Subhrajit Roy [000 0002 6072 5500], Umar Asif [0000 0001 5209 7084], Jianbin Tang [0000 0001 5440 0796], and Stefan Harrer [0000

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

Automatic Hemorrhage Classification System Based On Svm Classifier

Automatic Hemorrhage Classification System Based On Svm Classifier Automatic Hemorrhage Classification System Based On Svm Classifier Abstract - Brain hemorrhage is a bleeding in or around the brain which are caused by head trauma, high blood pressure and intracranial

More information

Introduction to Machine Learning. Katherine Heller Deep Learning Summer School 2018

Introduction to Machine Learning. Katherine Heller Deep Learning Summer School 2018 Introduction to Machine Learning Katherine Heller Deep Learning Summer School 2018 Outline Kinds of machine learning Linear regression Regularization Bayesian methods Logistic Regression Why we do this

More information

arxiv: v3 [cs.cv] 28 Mar 2017

arxiv: v3 [cs.cv] 28 Mar 2017 Discovery Radiomics for Pathologically-Proven Computed Tomography Lung Cancer Prediction Devinder Kumar 1*, Audrey G. Chung 1, Mohammad J. Shaifee 1, Farzad Khalvati 2,3, Masoom A. Haider 2,3, and Alexander

More information

LUNG NODULE DETECTION SYSTEM

LUNG NODULE DETECTION SYSTEM LUNG NODULE DETECTION SYSTEM Kalim Bhandare and Rupali Nikhare Department of Computer Engineering Pillai Institute of Technology, New Panvel, Navi Mumbai, India ABSTRACT: The Existing approach consist

More information

Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images

Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images Sachin Mehta 1, Ezgi Mercan 1, Jamen Bartlett 2, Donald Weaver 2, Joann G. Elmore 1, and Linda Shapiro 1 1 University

More information

Automatic Lung Cancer Detection Using Volumetric CT Imaging Features

Automatic Lung Cancer Detection Using Volumetric CT Imaging Features Automatic Lung Cancer Detection Using Volumetric CT Imaging Features A Research Project Report Submitted To Computer Science Department Brown University By Dronika Solanki (B01159827) Abstract Lung cancer

More information

Convolutional Neural Networks for Text Classification

Convolutional Neural Networks for Text Classification Convolutional Neural Networks for Text Classification Sebastian Sierra MindLab Research Group July 1, 2016 ebastian Sierra (MindLab Research Group) NLP Summer Class July 1, 2016 1 / 32 Outline 1 What is

More information

Image-Based Estimation of Real Food Size for Accurate Food Calorie Estimation

Image-Based Estimation of Real Food Size for Accurate Food Calorie Estimation Image-Based Estimation of Real Food Size for Accurate Food Calorie Estimation Takumi Ege, Yoshikazu Ando, Ryosuke Tanno, Wataru Shimoda and Keiji Yanai Department of Informatics, The University of Electro-Communications,

More information

Learning Objectives. 1. Identify which patients meet criteria for annual lung cancer screening

Learning Objectives. 1. Identify which patients meet criteria for annual lung cancer screening Disclosure I, Taylor Rowlett, DO NOT have a financial interest /arrangement or affiliation with one or more organizations that could be perceived as a real or apparent conflict of interest in the context

More information

arxiv: v1 [cs.cv] 13 Jul 2018

arxiv: v1 [cs.cv] 13 Jul 2018 Multi-Scale Convolutional-Stack Aggregation for Robust White Matter Hyperintensities Segmentation Hongwei Li 1, Jianguo Zhang 3, Mark Muehlau 2, Jan Kirschke 2, and Bjoern Menze 1 arxiv:1807.05153v1 [cs.cv]

More information

Convolutional Neural Networks for Estimating Left Ventricular Volume

Convolutional Neural Networks for Estimating Left Ventricular Volume Convolutional Neural Networks for Estimating Left Ventricular Volume Ryan Silva Stanford University rdsilva@stanford.edu Maksim Korolev Stanford University mkorolev@stanford.edu Abstract End-systolic and

More information

arxiv: v3 [cs.cv] 26 May 2018

arxiv: v3 [cs.cv] 26 May 2018 DeepEM: Deep 3D ConvNets With EM For Weakly Supervised Pulmonary Nodule Detection Wentao Zhu, Yeeleng S. Vang, Yufang Huang, and Xiaohui Xie University of California, Irvine Lenovo AI Lab {wentaoz1,ysvang,xhx}@uci.edu,

More information

Auto-Encoder Pre-Training of Segmented-Memory Recurrent Neural Networks

Auto-Encoder Pre-Training of Segmented-Memory Recurrent Neural Networks Auto-Encoder Pre-Training of Segmented-Memory Recurrent Neural Networks Stefan Glüge, Ronald Böck and Andreas Wendemuth Faculty of Electrical Engineering and Information Technology Cognitive Systems Group,

More information

Background Information

Background Information Background Information Erlangen, November 26, 2017 RSNA 2017 in Chicago: South Building, Hall A, Booth 1937 Artificial intelligence: Transforming data into knowledge for better care Inspired by neural

More information

NMF-Density: NMF-Based Breast Density Classifier

NMF-Density: NMF-Based Breast Density Classifier NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.

More information

Elad Hoffer*, Itay Hubara*, Daniel Soudry

Elad Hoffer*, Itay Hubara*, Daniel Soudry Train longer, generalize better: closing the generalization gap in large batch training of neural networks Elad Hoffer*, Itay Hubara*, Daniel Soudry *Equal contribution Better models - parallelization

More information

Detection of suspicious lesion based on Multiresolution Analysis using windowing and adaptive thresholding method.

Detection of suspicious lesion based on Multiresolution Analysis using windowing and adaptive thresholding method. Detection of suspicious lesion based on Multiresolution Analysis using windowing and adaptive thresholding method. Ms. N. S. Pande Assistant Professor, Department of Computer Science and Engineering,MGM

More information

Big Image-Omics Data Analytics for Clinical Outcome Prediction

Big Image-Omics Data Analytics for Clinical Outcome Prediction Big Image-Omics Data Analytics for Clinical Outcome Prediction Junzhou Huang, Ph.D. Associate Professor Dept. Computer Science & Engineering University of Texas at Arlington Dept. CSE, UT Arlington Scalable

More information

MEM BASED BRAIN IMAGE SEGMENTATION AND CLASSIFICATION USING SVM

MEM BASED BRAIN IMAGE SEGMENTATION AND CLASSIFICATION USING SVM MEM BASED BRAIN IMAGE SEGMENTATION AND CLASSIFICATION USING SVM T. Deepa 1, R. Muthalagu 1 and K. Chitra 2 1 Department of Electronics and Communication Engineering, Prathyusha Institute of Technology

More information

Predicting Breast Cancer Survivability Rates

Predicting Breast Cancer Survivability Rates Predicting Breast Cancer Survivability Rates For data collected from Saudi Arabia Registries Ghofran Othoum 1 and Wadee Al-Halabi 2 1 Computer Science, Effat University, Jeddah, Saudi Arabia 2 Computer

More information

Copyright 2008 Society of Photo Optical Instrumentation Engineers. This paper was published in Proceedings of SPIE, vol. 6915, Medical Imaging 2008:

Copyright 2008 Society of Photo Optical Instrumentation Engineers. This paper was published in Proceedings of SPIE, vol. 6915, Medical Imaging 2008: Copyright 2008 Society of Photo Optical Instrumentation Engineers. This paper was published in Proceedings of SPIE, vol. 6915, Medical Imaging 2008: Computer Aided Diagnosis and is made available as an

More information

MRI Image Processing Operations for Brain Tumor Detection

MRI Image Processing Operations for Brain Tumor Detection MRI Image Processing Operations for Brain Tumor Detection Prof. M.M. Bulhe 1, Shubhashini Pathak 2, Karan Parekh 3, Abhishek Jha 4 1Assistant Professor, Dept. of Electronics and Telecommunications Engineering,

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Literature Survey Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is

More information

Identification of Tissue Independent Cancer Driver Genes

Identification of Tissue Independent Cancer Driver Genes Identification of Tissue Independent Cancer Driver Genes Alexandros Manolakos, Idoia Ochoa, Kartik Venkat Supervisor: Olivier Gevaert Abstract Identification of genomic patterns in tumors is an important

More information

PMR5406 Redes Neurais e Lógica Fuzzy. Aula 5 Alguns Exemplos

PMR5406 Redes Neurais e Lógica Fuzzy. Aula 5 Alguns Exemplos PMR5406 Redes Neurais e Lógica Fuzzy Aula 5 Alguns Exemplos APPLICATIONS Two examples of real life applications of neural networks for pattern classification: RBF networks for face recognition FF networks

More information

2D-Sigmoid Enhancement Prior to Segment MRI Glioma Tumour

2D-Sigmoid Enhancement Prior to Segment MRI Glioma Tumour 2D-Sigmoid Enhancement Prior to Segment MRI Glioma Tumour Pre Image-Processing Setyawan Widyarto, Siti Rafidah Binti Kassim 2,2 Department of Computing, Faculty of Communication, Visual Art and Computing,

More information

BREAST CANCER EARLY DETECTION USING X RAY IMAGES

BREAST CANCER EARLY DETECTION USING X RAY IMAGES Volume 119 No. 15 2018, 399-405 ISSN: 1314-3395 (on-line version) url: http://www.acadpubl.eu/hub/ http://www.acadpubl.eu/hub/ BREAST CANCER EARLY DETECTION USING X RAY IMAGES Kalaichelvi.K 1,Aarthi.R

More information

B657: Final Project Report Holistically-Nested Edge Detection

B657: Final Project Report Holistically-Nested Edge Detection B657: Final roject Report Holistically-Nested Edge Detection Mingze Xu & Hanfei Mei May 4, 2016 Abstract Holistically-Nested Edge Detection (HED), which is a novel edge detection method based on fully

More information

Supplementary Online Content

Supplementary Online Content Supplementary Online Content Ting DS, Cheung CY-L, Lim G, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic

More information

Sleep Staging with Deep Learning: A convolutional model

Sleep Staging with Deep Learning: A convolutional model Sleep Staging with Deep Learning: A convolutional model Isaac Ferna ndez-varela1, Dimitrios Athanasakis2, Samuel Parsons3 Elena Herna ndez-pereira1, and Vicente Moret-Bonillo1 1- Universidade da Corun

More information

International Journal of Advances in Engineering Research. (IJAER) 2018, Vol. No. 15, Issue No. IV, April e-issn: , p-issn:

International Journal of Advances in Engineering Research. (IJAER) 2018, Vol. No. 15, Issue No. IV, April e-issn: , p-issn: SUPERVISED MACHINE LEARNING ALGORITHMS: DEVELOPING AN EFFECTIVE USABILITY OF COMPUTERIZED TOMOGRAPHY DATA IN THE EARLY DETECTION OF LUNG CANCER IN SMALL CELL Pushkar Garg Delhi Public School, R.K. Puram,

More information

Primary Level Classification of Brain Tumor using PCA and PNN

Primary Level Classification of Brain Tumor using PCA and PNN Primary Level Classification of Brain Tumor using PCA and PNN Dr. Mrs. K.V.Kulhalli Department of Information Technology, D.Y.Patil Coll. of Engg. And Tech. Kolhapur,Maharashtra,India kvkulhalli@gmail.com

More information

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Ritu Verma, Sujeet Tiwari, Naazish Rahim Abstract Tumor is a deformity in human body cells which, if not detected and treated,

More information

Clustering of MRI Images of Brain for the Detection of Brain Tumor Using Pixel Density Self Organizing Map (SOM)
