IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING


Spatio-Temporal Representations of Rapid Visual Target Detection: A Single Trial EEG Classification Algorithm

Galit Fuhrmann Alpert a,1, Ran Manor b,1, Assaf B. Spanier e, Leon Y. Deouell a,c, and Amir B. Geva d

a Dept. of Psychology, The Hebrew University of Jerusalem, Israel, Galit.Fuhrmann@mail.huji.ac.il. b Dept. of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel, ran.manor@gmail.com. c Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel, msleon@mscc.huji.ac.il. d Dept. of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel, geva@ee.bgu.ac.il. e The Selim and Rachel Benin School of Engineering, The Hebrew University, Jerusalem, Israel, assaf.spanier@mail.huji.ac.il. 1 These authors contributed equally.

Abstract—Brain Computer Interface applications, developed for both healthy and clinical populations, critically depend on decoding brain activity in single trials. The goal of the present study was to detect distinctive spatio-temporal brain patterns within a set of event related responses. We introduce a novel classification algorithm, the Spatially Weighted FLD-PCA (SWFP), which is based on a two-step linear classification of event-related responses, using a Fisher Linear Discriminant (FLD) classifier and principal component analysis (PCA) for dimensionality reduction. As a benchmark algorithm, we consider the Hierarchical Discriminant Component Analysis (HDCA), introduced by Parra et al. We also consider a modified version of the HDCA, namely the Hierarchical Discriminant Principal Component Analysis algorithm (HDPCA). We compare single-trial classification accuracies of all three algorithms, each applied to detect target images within a rapid serial visual presentation (RSVP, 10 Hz) of images from five different object categories, based on single trial brain responses. We find a systematic superiority of our classification algorithm in the tested paradigm. Additionally, HDPCA significantly increases classification accuracies compared to the HDCA. Finally, we show that presenting several repetitions of the same image exemplars improves accuracy, and thus may be important in cases where high accuracy is crucial.

Index Terms—Brain computer interface (BCI), electroencephalography (EEG), rapid serial visual presentation (RSVP), classification

I. INTRODUCTION

RECENT advances in neuroscience have led to an emerging interest in Brain Computer Interface (BCI) applications for both disabled and healthy populations. These applications critically depend on online decoding of brain activity in response to single events (trials), as opposed to delineation of the average response frequently studied in basic research. Electroencephalography (EEG), a noninvasive recording technique, is one of the commonly used systems for monitoring brain activity. EEG data is simultaneously collected from a multitude of channels at a high temporal resolution, yielding high dimensional data matrices for the representation of single trial brain activity. In addition to its unsurpassed temporal resolution, EEG is non-invasive, wearable, and more affordable than other neuroimaging techniques, and is thus a prime choice for any type of practical BCI.
The two other noninvasive technologies used for decoding brain activity, namely functional MRI and MEG, require cumbersome, expensive, and non-mobile instrumentation, and although they maintain their position as highly valuable research tools, they are unlikely to be useful for routine use of BCIs. Invasive recording methods, such as electrocorticography (ECoG; intracranial EEG), also exist. These methods provide a greater signal to noise ratio and increased spatial resolution compared to EEG, and may prove to be of great importance for specific patient populations. Obviously, these methods are not applicable for healthy populations, in contrast to EEG. Traditionally, EEG data has been averaged over trials to characterize task-related brain responses despite the on-going, task independent noise present in single trial data. However, in order to allow flexible real-time feedback or interaction, task-related brain responses need to be identified in single trials, and categorized into the associated brain states. Most classification methods use machine-learning algorithms to classify single-trial spatio-temporal activity matrices based on statistical properties of those matrices [1], [2], [3], [4], [5]. These methods are based on two main components: a feature extraction mechanism for effective dimensionality reduction, and a classification algorithm. Typical classifiers use sample data to learn a mapping rule by which other test data can be classified into one of two or more categories. Classifiers can be roughly divided into linear and non-linear methods. Non-linear classifiers, such as Neural Networks, Hidden Markov Models and k-nearest neighbors [1], [5], can approximate a wide range of functions, allowing discrimination of complex data structures. While non-linear classifiers have the potential to capture complex discriminative functions, their complexity can also cause overfitting and carry heavy computational demands, making them less suitable for real-time applications. Linear classifiers, on the other hand, are less complex and are thus more robust to data overfitting. Naturally, linear classifiers perform particularly well on data that can be linearly separated. Fisher Linear Discriminant (FLD), linear Support Vector Machine (SVM) and Logistic Regression (LR) are popular examples [6], [3], [7], [8], [9]. FLD finds a linear combination of features that maps the data of two classes onto a separable projection axis.

The criterion for separation is defined as the ratio of the squared distance between the class means to the variance within the classes. SVM finds a separating hyper-plane that maximizes the margin between the two classes. LR, as its name suggests, projects the data onto a logistic function. All linear classifiers offer a fast solution for data discrimination, and are thus most commonly applied in classification algorithms used for real-time BCI applications. Whether linear or non-linear, most classifiers require a prior stage of feature extraction. Selecting these features has become a crucial issue, as one of the main challenges in deciphering brain activity from single trial data matrices is the high dimensional space in which they are embedded, and the relatively small sample sizes the classifiers can rely on in their learning stage. Feature extraction is in essence a dimensionality reduction procedure, mapping the original data onto a lower dimensional space. A successful feature extraction procedure will pull out task-relevant information and attenuate irrelevant information. Some feature extraction approaches use prior knowledge, such as frequency bands known to be relevant to the experiment (e.g., for motor imagery experiments [2]) or brain locations most likely to be involved in the specific classification problem. For instance, the literature has robustly pointed out parietal scalp regions as displaying high amplitude signals in target detection paradigms; this target-related response, maximal at parietal regions and known as the P3 component, has been repeatedly observed approximately 300-500 ms post-stimulus [10]. Such prior-knowledge based algorithms, in particular P3 based systems, are commonly used for a variety of BCI applications [11], [12], [13], [14]. In contrast, other methods construct an automatic process to pull out relevant features based on supervised or unsupervised learning from training data sets. Some approaches for automatic feature extraction include Common Spatial Patterns (CSP), autoregressive models (AR) and Principal Component Analysis (PCA). CSP extracts spatial weights to discriminate between two classes [3], by maximizing the variance of one class while minimizing the variance of the second class. AR is mostly used to model temporal correlations in a signal, although spatial autoregressive models (SAR) also exist. Discriminative AR coefficients can be selected using a linear classifier [2]. Other methods search for spectral features to be used for classification [4]. PCA is used for unsupervised feature extraction, by mapping the data onto a new, uncorrelated space where the axes are ordered by the variance of the projected data samples along the axes, and only those axes reflecting most of the variance are maintained. The result is a new representation of the data that retains maximal information about the original data yet provides effective dimensionality reduction. PCA is used in the current study and is further elaborated in the following sections. Such methodologies of single-trial EEG classification have been implemented for a variety of BCI applications, using different experimental paradigms. Most commonly, single-trial EEG classification has been used for movement-based and P3-based applications [11], [14]. Movement tasks, both imagined and real, have been studied for their potential use with disabled subjects [15].
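To make the two building blocks used throughout this paper concrete, the following toy sketch (ours, not from the original study; NumPy and scikit-learn assumed) applies PCA for unsupervised dimensionality reduction and then an FLD (LDA) projection that separates two classes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic two-class data: 200 samples, 50 correlated features.
n, p = 200, 50
mixing = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ mixing
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5  # shift class 1 so the classes are partly separable

# Unsupervised dimensionality reduction: keep the axes carrying most variance.
pca = PCA(n_components=10).fit(X)
X_red = pca.transform(X)
print("variance explained:", pca.explained_variance_ratio_.sum())

# Supervised step: FLD (LDA) finds the projection maximizing the ratio of
# between-class distance to within-class variance.
fld = LinearDiscriminantAnalysis().fit(X_red, y)
print("training accuracy:", fld.score(X_red, y))
```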
P3 applications, based on visual or auditory oddball experiments [11], were originally aimed at providing BCI-based communication devices for locked-in patients [9], [16], [12], and can also be used for a variety of applications for healthy individuals [17]. Emotion assessment, for example, attempts to classify emotions into categories (negative, positive and neutral) using a combination of EEG and other physiological signals [18], offering a potential tool for behavior prediction and monitoring. Here we aim at implementing a BCI framework in order to sort large image databases into one of two categories (Target images; Non-Targets). We use EEG patterns as markers for target-image appearance during rapid visual presentation [6], [7], [8], [19]. Subjects are instructed to search for target images (a given category out of five) within a rapid serial visual presentation (RSVP; 10 Hz). In this case, the methodological goal of the classification algorithm is to automatically identify, within a set of event related responses, single trial spatio-temporal brain responses that are associated with target-image detection. In addition to the common challenges faced by single-trial classification algorithms for noisy EEG data, specific challenges are introduced by the RSVP task, due to the fast presentation of stimuli and the ensuing overlap between consecutive event-related responses. Some methods have thus been constructed specifically for the RSVP task. One such method, developed by Bigdely-Shamlo et al. [6] specifically for single-trial classification of RSVP data, used spatial Independent Component Analysis (ICA) to extract a set of spatial weights and obtain maximally independent spatio-temporal sources. A parallel ICA step was performed in the frequency domain to learn spectral weights for independent time-frequency components. Principal Component Analysis (PCA) was used separately on the spatial and spectral sources to reduce the dimensionality of the data. Each feature set was classified separately using Fisher Linear Discriminants and then combined using naïve Bayes fusion (i.e., multiplication of posterior probabilities). A more general framework was proposed by Parra et al. [19] for single trial classification, and was also implemented specifically for the RSVP task. The suggested framework uses a bilinear spatial-temporal projection of event-related data on both temporal and spatial axes. These projections can be implemented in many ways. The spatial projection can be implemented, for example, as a linear transformation of EEG scalp recordings into underlying source space [2] or as ICA. The temporal projection can be thought of as a filter. The dual projections are implemented on non-overlapping time windows of the single-trial data matrix, resulting in a scalar representing a score per window. The window scores are then summed [19] or classified [7] to provide a classification score for the entire single trial. In addition to the choice of projections, this framework can support additional constraints on the structure of the projection matrices. One option is, for example, to learn the optimal time window for each channel separately and then train the spatial terms [21]. Another alternative is to learn the spatial and temporal weights simultaneously under different smoothness constraints [22]. Other variations of projections and classifiers may also be considered under this framework [17]. Within a similar framework, we present here a novel two-step classification algorithm.

We compare the classification performance of our algorithm to that of the basic algorithm suggested by Parra et al. [19], as well as to a modified version of it, based on Gerson et al. [7]. We also present the spatio-temporal distribution of the discriminative activity, which may indicate the underlying neural networks involved.

II. MATERIALS AND METHODS

A. Subjects

Fifteen subjects participated in the main experiment (Exp1; eight females and seven males, mean age 26 years, standard deviation 5 years). Three subjects were excluded from the analysis due to excessive recording noise. The first excluded subject had excessive eye blink artifacts resulting in a loss of nearly 60% of the data. The second subject had technical problems with the electrodes causing repeated recording failures. The third excluded subject felt very uncomfortable, which led to many motion artifacts, and the subject was therefore released halfway through the experiment. Four subjects participated in a second experiment (Exp2; two females and two males, mean age 23 years, standard deviation 2 years). All subjects were students of the Hebrew University of Jerusalem, without any previous training in the task. All subjects had normal or corrected to normal vision, with no neurological problems, and were free of psychoactive medications at the time of the experiment. Subjects were paid for their participation. The experiment was approved by the local ethics committee at the Hebrew University of Jerusalem.

B. Stimuli

Stimuli were 36x36 pixel (6.5° x 6.5° of visual angle at a viewing distance of 100 cm) grayscale images of 5 different categories, including 145 exemplars each of faces, cars, painted eggs, watches and planes, presented at the center of a CRT monitor (Viewsonic model g57f, refresh rate 100 Hz, resolution 1024 x 768) on a gray background (Fig. 1). The images were preprocessed to have the same mean luminance and contrast.

C. Experimental Procedure

Subjects were seated in a dimly lit, sound attenuated chamber, supported by a chin and forehead rest. Subjects were instructed to count the occurrences of images of a pre-defined target category, presented within a rapid serial visual presentation (RSVP). Each image exemplar was presented several times during the experiment. Eye position was monitored using an EyeLink eye tracker (SR Research, Kanata, ON, Canada) at 1000 Hz. Presentation was briefly paused every 8-12 trials and the subject was asked to report how many targets appeared in the last run, and thereafter to restart the count. This was done to avoid the working memory load of accumulating large numbers. Two experiments were conducted. The main experiment consisted of five categories of images (cars, painted eggs, faces, planes, or clock faces). Images were presented in 4 blocks, with a different target category in each block (clock faces were not used as targets). The order of blocks was counterbalanced across subjects. Each block consisted of an RSVP of 65 images, presented without inter-stimulus intervals, every 90-110 ms (i.e., ~10 Hz).

Fig. 1: Experimental Paradigm: Images of five different categories (Cars; Faces; Planes; Clock faces; Eggs) are presented every 90-110 ms. In each presentation block a different object category is defined as the Target category, and subjects are instructed to count the number of image occurrences of the target category (e.g. planes, marked here by arrows).
In each block, 20% of the images were targets, randomly distributed within the block. The experimental paradigm is depicted in Fig. 1. The second experiment (Experiment 2) consisted of only two image categories. In this task, Targets were always images of cars, and non-target images were noise images produced by scrambling the same car images. We ran this significantly easier, pop-out detection task in order to compare performance to similar studies reported in the literature (e.g. [19], [7]).

D. Data Collection and Pre-Processing

EEG recordings were acquired with an ActiveTwo system (BioSemi, The Netherlands) using 64 sintered Ag/AgCl electrodes, at a sampling rate of 200 Hz with an online lowpass filter of 51 Hz to prevent aliasing of high frequencies. Seven additional electrodes were placed as follows: two on the mastoid processes, two horizontal EOG channels positioned at the outer canthi of the left and right eyes (HEOGL and HEOGR, respectively), two vertical EOG channels, one below (infraorbital, VEOGI) and one above (supraorbital, VEOGS) the right eye, and a channel on the tip of the nose. All electrodes were referenced to the average of the entire electrode set, excluding the EOG channels. Offline, a bipolar vertical EOG (VEOG) channel was calculated as the difference between VEOGS and VEOGI. Similarly, a bipolar horizontal EOG channel (HEOG) was calculated as the difference between HEOGL and HEOGR. A high-pass filter of 0.1 Hz was applied offline to remove slow drifts. The data was segmented into one-second event-related segments, starting 100 ms prior to and ending 900 ms after the onset of each image presentation, yielding, for each subject, large spatio-temporal data matrices for the representation of single trial brain activity. Each single trial matrix consisted of 64 rows of channels and 200 columns of time samples. Baseline correction was performed by subtracting the mean activity averaged over the 100 ms prior to stimulus onset, for each trial and channel independently. Blinks were removed by rejecting trials in which the VEOG bipolar channel exceeded ±100 µV. The same criterion was also applied to all other channels to reject occasional recording artifacts and gross eye movements.
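As a rough illustration of the preprocessing described above (epoching, baseline correction, and amplitude-based artifact rejection), the sketch below uses the segment length, baseline window, sampling rate, and rejection threshold as given in the text; the NumPy implementation and function names are ours:

```python
import numpy as np

FS = 200                 # sampling rate [Hz], per the description above
PRE, POST = 0.1, 0.9     # epoch window: 100 ms before to 900 ms after onset [s]
REJECT_UV = 100.0        # amplitude rejection threshold [µV]

def epoch(raw, onsets_s):
    """raw: channels x samples continuous EEG; onsets_s: image onsets in seconds.
    Returns trials x channels x samples epochs, baseline-corrected per channel."""
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = []
    for t in onsets_s:
        s = int(t * FS)
        seg = raw[:, s - pre:s + post]                         # 1-second segment
        seg = seg - seg[:, :pre].mean(axis=1, keepdims=True)   # subtract pre-stimulus mean
        epochs.append(seg)
    return np.stack(epochs)

def reject_artifacts(epochs):
    """Drop trials in which any channel exceeds the amplitude threshold."""
    keep = np.abs(epochs).max(axis=(1, 2)) < REJECT_UV
    return epochs[keep], keep

# usage with fake data: 64 channels, 60 s of recording, ~10 Hz RSVP onsets
raw = np.random.randn(64, 60 * FS) * 10
onsets = np.arange(1.0, 50.0, 0.1)
X, kept = reject_artifacts(epoch(raw, onsets))
print(X.shape)  # (n_trials, 64, 200)
```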

E. Classification Algorithms

We consider three different algorithms for single-trial EEG classification. Each of the methods is explained below in detail. In all cases, we represent the n-th single trial by the spatio-temporal activity matrix X_n of size D × T, containing the raw event-related signals recorded from all EEG channels at all time points, locked to the onset of an image presentation. D is the number of EEG channels, and T is the number of time points per trial. y_n = f(X_n) is the binary decision (target/non-target trial) of the classification algorithm for the single-trial spatio-temporal data matrix X_n. We use several measures of performance. First, random subsampling cross-validation is used: a random 80% of the trials are used for training and the remaining 20% of trials are used for testing. This procedure is repeated 3 times. Classification performance is measured by the percent correct classification of single trials in the test group, as well as by the hit rate (percent of Target trials correctly classified as Targets), the false alarm rate (percent of Non-Target trials falsely classified as Targets), and the corresponding d′ measure of discrimination of target images (d′ = Z(hit rate) − Z(false alarm rate), where Z is the inverse of the cumulative Gaussian distribution [23]). To systematically compare the different classification methods considered in this study, we run the three algorithms on the exact same train/test permutations. We also consider another performance measure, implementing a variation of the leave-one-out procedure: the classifier is trained using the complete data set excluding all instances of one image exemplar (e.g. the image of a specific face; recall that each image is presented several times per experimental block). We then test the classifier on each of the repetitions of the excluded image exemplar. The final label of the exemplar (Target/Non-target) is decided by majority vote over the repetitions' labels. For example, in the case of Experiment 1, each of the 4 experimental blocks consisted of up to 11 presentations/repetitions of the same image exemplar. Some trials were removed during artifact rejection preprocessing, leaving N trials available for analysis. For each available image exemplar presentation trial, we compute the classifier's target/non-target decision, and accuracy is defined as the number of trials in which the classifier provided the correct response, divided by the total number of available trials (N). Total performance for N trials is computed per subject as the mean correct labelling over all available trials for a single image exemplar, averaged over all image exemplars. Additionally, to test performance as a function of the number of repetitions used for the leave-one-out voting, we compute performance following the same procedure, using only subsets of N trial repetitions for voting per image. For each considered N, performance per image is computed as the average accuracy for voting over 3 possible subsets of N repetitions, randomly chosen out of all combinations. For example, given an image exemplar that was presented a total of 9 times throughout the experiment, there are 126 possible combinations from which to choose subsets of N = 5 trial repetitions. Only 3 of those combinations, randomly chosen, are used to compute performance. Total performance for N repetitions is computed per subject as the mean performance per image, averaged over all individual image exemplars.
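The two performance measures described above can be illustrated with the following minimal sketch (ours; NumPy and SciPy assumed): d′ computed from hit and false alarm rates, and the majority vote over the single-trial decisions for repetitions of an image exemplar:

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """d' = Z(hit rate) - Z(false alarm rate), Z = inverse cumulative Gaussian."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def majority_vote(trial_labels):
    """Final Target (1) / Non-target (0) label for an image exemplar, decided by
    majority vote over its repetitions' single-trial decisions (ties -> Non-target)."""
    trial_labels = np.asarray(trial_labels)
    return int(trial_labels.sum() > len(trial_labels) / 2)

print(d_prime(0.83, 0.04))             # ~2.7
print(majority_vote([1, 0, 1, 1, 0]))  # -> 1 (Target)
```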
We now turn to describe the details of each of the three tested algorithms. We start with our new algorithm, and then describe the comparison algorithms. Note that all three algorithms start with a first step of computing spatial (channel) weights by applying linear classifiers to the data. In Algorithm I this is performed over the whole time series, while in Algorithms II and III spatial weights are computed after splitting the time series into independent time windows. The main difference between the algorithms is in the way this information is used in the subsequent steps.

1) Algorithm I: Spatially Weighted FLD-PCA (SWFP): Our algorithm is based on a two-step linear classification. In both classification steps we use Fisher Linear Discriminant (FLD) analysis [24].

1) Step I:
a) Classify time points independently to compute a spatio-temporal matrix of discriminating weights (U). To implement this, take each column vector x_{n,t} of the input matrix X_n; each column represents the spatial distribution of EEG signals at time t, and at this step of the analysis all time points are treated independently. Train a separate FLD classifier for each time point t = 1...T, based on all n = 1...N trials in the training set, to obtain a spatial weight vector w_t for each time point t. Set these weight vectors as the columns of the spatio-temporal weighting matrix U. The dimensions of U are the same as the dimensions of X_n.
b) Use this weighting matrix U to amplify the original spatio-temporal data matrix X_n by the discriminating weights at each spatio-temporal point, creating the spatially-weighted matrix X_n^w. To implement this amplification, compute the Hadamard product of the trial input matrix X_n and the weighting matrix U, i.e., the element-wise multiplication of the two matrices (MATLAB notation .*):

X_n^w = U ∘ X_n, i.e., (X_n^w)_{d,t} = (U)_{d,t} (X_n)_{d,t}, d = 1...D, t = 1...T   (1)

c) For dimensionality reduction of X_n^w, use PCA on the temporal domain, for each spatial channel d independently, to represent the time series data as a linear combination of only K components. PCA is applied independently to each row vector x^d, d = 1...D, of the spatially weighted matrix X_n^w, following mean subtraction. For each row d, this provides a projection matrix A_d of size T × K, which is used to project the time series data of channel d onto the first K principal components, thus reducing dimensions from T to K per channel. K = 6 was empirically chosen to explain > 70% of the variance in Experiment 1. The resulting matrix X̂_n is of size D × K, where each row d holds the PCA coefficients of the K principal temporal projections:

x̂_n^d = x^d A_d   (2)
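A minimal sketch of Step I, written by us for illustration (NumPy and scikit-learn assumed; the per-time-point FLD, the Hadamard amplification of eq. (1), and the per-channel temporal PCA of eq. (2) follow the description above, while details such as the shrinkage regularization are our own choices). Step II, described next, simply concatenates the resulting D × K features and trains a second FLD on them.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA

def swfp_step1_fit(X, y, K=6):
    """X: trials x D channels x T samples, y: binary labels.
    Returns the weighting matrix U (D x T) and the per-channel PCA models."""
    n_trials, D, T = X.shape
    U = np.zeros((D, T))
    for t in range(T):                        # Step I.a: one FLD per time point
        fld = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
        fld.fit(X[:, :, t], y)                # spatial distribution at time t
        U[:, t] = fld.coef_.ravel()
    Xw = X * U[None, :, :]                    # Step I.b: Hadamard amplification, eq. (1)
    pcas = []
    for d in range(D):                        # Step I.c: temporal PCA per channel
        pca = PCA(n_components=K).fit(Xw[:, d, :])   # PCA centers (mean-subtracts) the data
        pcas.append(pca)
    return U, pcas

def swfp_step1_transform(X, U, pcas):
    """Project trials onto the D x K feature matrices X_hat of eq. (2)."""
    Xw = X * U[None, :, :]
    feats = [pca.transform(Xw[:, d, :]) for d, pca in enumerate(pcas)]
    return np.stack(feats, axis=1)            # trials x D x K
```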

2) Step II:
a) Concatenate the rows of the matrix X̂_n to create a feature representation vector z_n, representing the temporally approximated, spatially weighted activity of the single trial n:

z_n = [x̂_n^1 ... x̂_n^D]   (3)

b) Train and run an FLD classifier on the feature vectors {z_n}, n = 1...N, to classify the single trial matrices X_n into one of two classes (using zero as the decision boundary):

y_n = f(z_n)   (4)

We define the most discriminative response latency as the latency t for which the highest percent correct classification was achieved in step 1a. The associated discriminative spatial activation pattern is given by U(t).

2) Algorithm II: Hierarchical Discriminant Component Analysis (HDCA): To evaluate the performance of our algorithm against existing methods, we implemented the HDCA classification algorithm as introduced by Parra et al. [19], [7]. The algorithm is implemented here without any changes, and is used as a benchmark for comparing performance. The algorithm uses the following model:

y_n = Σ_{k=1}^{K} u_k^T X_n v_k   (5)

where u_k and v_k are the spatial and temporal projection vectors for time window k out of K non-overlapping time windows of the single trial. These projections can be resolved in several ways, such as ICA, temporal filtering and source space transformation. Specifically, the projection vectors can be learned using classification, separately for the spatial and temporal domains, to find the most discriminative projection vectors [19]. The algorithm is described in the following steps (for details see the original papers [19], [7]):

1) Choose a set of K non-overlapping time windows. We define X_{k,n} as the time samples of X_n which belong to window k, i.e. the cropped time series that includes only time points within window k. In the current study we used a window size of 100 ms, as suggested in [19], thus K was set to 9. For each time window k independently:
a) Train an FLD classifier to compute a spatial projection vector u_k, based on the data from all training trials, taken within the window (X_{k,n}).
b) Use the spatial projection vector to map the window matrix from all channels onto a single global signal:

s_{k,n} = u_k^T X_{k,n}   (6)

c) Sum the elements of each temporal vector s_{k,n} into a single scalar value representing a global signal of time window k:

r_{k,n} = Σ_i (s_{k,n})_i   (7)

2) Concatenate the K scalar representations of the global signals from all time windows {r_{k,n}} into a single combined global signal r_n, representing the single trial signals from all channels at all time points. The vector r_n can be thought of as a feature representation of the single trial data matrix X_n, where each time-windowed global signal r_{k,n} is a feature:

r_n = [r_{1,n} ... r_{K,n}]   (8)

3) Use a linear classifier to classify the set of {r_n} feature vectors:

z_n = c^T r_n = Σ_{k=1}^{K} c_k r_{k,n}   (9)

{c_k} are the weights per window k (c is their concatenated form), assigned by the classifier to compute the scalar score z_n of the single trial. y_n is the binary decision using a zero threshold over z_n.
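The HDCA pipeline of eqs. (5)-(9) can be sketched as follows (our illustration, not the authors' code; NumPy and scikit-learn assumed, with an FLD used both for the per-window spatial weights u_k and for the final window-combining weights c; treating each time sample within a window as a training sample for the spatial FLD is our own implementation choice):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as FLD

def hdca_fit(X, y, fs=200, win_ms=100):
    """X: trials x D x T, y: binary labels. Learns spatial weights u_k per
    time window and the window-combining classifier (eqs. 5-9)."""
    n_trials, D, T = X.shape
    w = int(win_ms * fs / 1000)
    K = T // w
    spatial = []
    R = np.zeros((n_trials, K))
    for k in range(K):
        Xk = X[:, :, k * w:(k + 1) * w]                      # window k
        samples = Xk.transpose(0, 2, 1).reshape(-1, D)       # time points as samples (our choice)
        labels = np.repeat(y, w)
        u_k = FLD(solver='lsqr', shrinkage='auto').fit(samples, labels).coef_.ravel()
        spatial.append(u_k)
        s = np.einsum('d,ndt->nt', u_k, Xk)                  # eq. (6): global signal per trial
        R[:, k] = s.sum(axis=1)                              # eq. (7): sum over the window
    final = FLD(solver='lsqr', shrinkage='auto').fit(R, y)   # eq. (9): weights c over windows
    return spatial, final, w, K

def hdca_predict(X, spatial, final, w, K):
    R = np.zeros((X.shape[0], K))
    for k, u_k in enumerate(spatial):
        Xk = X[:, :, k * w:(k + 1) * w]
        R[:, k] = np.einsum('d,ndt->nt', u_k, Xk).sum(axis=1)
    return final.predict(R)
```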
3) Algorithm III: Hierarchical Discriminant Principal Component Analysis (HDPCA): We also consider a version of HDCA (Algorithm II above; [7], [19]) with two in-house modifications, which we predicted would improve performance. After applying the spatial projection vector (in (6)), we apply PCA on the temporal axis for dimensionality reduction. The data is projected onto the first principal components that together explain 99% of the variance. This first modification leaves out components with near zero variance that do not contribute significant information, while making sure that discriminative information is not lost. The second modification is that instead of summing the temporal representations (the PCA coefficients in this case), we classify them using another FLD classifier, which is trained separately on the data. This step selects the best PCA components for discrimination, as has been suggested in [19]. Overall, the modified algorithm is the same as Algorithm II described above up to step 1a, with the following steps replacing steps 1b-1c:

b) Project the single trial matrix X_{k,n} onto a single representative global temporal signal, using the spatial projection vector u_k, as described in Algorithm II.
c) Apply PCA to the global temporal signals to extract the first C_comp principal components, which together explain 99% of the variance. The result of the PCA is a projection matrix A_k of size T × C_comp:

x_{n,k}^pca = u_k^T X_{k,n} A_k   (10)

d) Use the PCA coefficients x_{n,k}^pca as feature vectors for a linear classifier. The classifier training gives a weight vector v_k which discriminates the PCA coefficients into target/non-target samples:

r_{n,k} = x_{n,k}^pca v_k   (11)

e) Perform steps 2 and 3 as in Algorithm II.
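The two HDPCA modifications amount to replacing the inner window step of the HDCA sketch above with something along the following lines (again our illustration; the 99% variance criterion is from the text, the remaining details are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as FLD

def hdpca_window_scores(X_k, y, u_k):
    """X_k: trials x D x w (one time window), u_k: spatial weights from step 1a.
    Returns one discriminative score r_{n,k} per trial (eqs. 10-11)."""
    s = np.einsum('d,ndt->nt', u_k, X_k)                 # global temporal signal per trial
    pca = PCA(n_components=0.99, svd_solver='full').fit(s)   # keep 99% of the variance
    x_pca = pca.transform(s)                             # eq. (10)
    v_k = FLD(solver='lsqr', shrinkage='auto').fit(x_pca, y)
    return v_k.decision_function(x_pca)                  # eq. (11): per-trial window score
```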

To summarize, all three algorithms start with a first step of linear classification, which is used to determine the discrimination-based spatial projection of the data. Yet, while HDCA and its modified version (HDPCA) use the spatial projection vector (u_k) to collapse data from all channel locations onto a single representative global channel (s_k) prior to temporal manipulation of the data, SWFP uses the concatenated information from all channels. Moreover, SWFP amplifies the original data matrix by the spatio-temporal discriminating weights prior to PCA dimensionality reduction; channels and times that are most important for classification are given even greater impact further in the process.

III. RESULTS

A. Event Related Responses (Targets versus Non-Targets)

Due to the low signal to noise ratio (SNR) of EEG data, the standard approach to analyzing event-related responses is to study the mean Event Related Potential (ERP), averaged over repeated trials of the same stimulus condition. Fig. 2 (Top) depicts a butterfly plot of ERPs elicited by Target (red) and Non-Target (blue) images, collapsed over blocks of the main paradigm (Experiment 1) for a single sample subject. Each line represents a single channel. Note that on average, despite the rapid sequence of events and the overlapping responses, the main divergence between Target and Non-Target ERPs occurs between 200-600 ms post-image presentation, in line with the N2-P3 target detection literature [10]. The same can be observed with single trial responses (Fig. 2, Bottom), though evidently the responses are more variable, setting the challenge of detecting Target related activity in single trials.

Fig. 2: Top: Mean Event Related Responses (ERPs) to Target (red) and Non-Target (blue) images, at each of the 64 recorded channels (one line per channel; taken from a single sample subject). Green: Channel CPz (solid: Target; dashed: Non-Target). Bottom: Single trial event related responses to Target (left) and Non-Target (right) images, recorded at CPz (traces shown after baseline correction). The vertical black line marks the time of image onset.

B. Single Trial Classification Analysis and Summary of Performance

Fig. 3a shows the Receiver Operating Characteristic (ROC) curves and Area Under Curve (AUC) values for all subjects using the SWFP algorithm. While the classification accuracy varies across subjects, even the worst case is far from the chance performance denoted by the diagonal. Fig. 3b summarizes the comparison of performance between the three single trial classification algorithms for Experiment 1 (trials from all blocks collapsed for analysis, regardless of the different targets). Across subjects, the proposed SWFP algorithm correctly classified between 66-82% (mean: 72.89%; std: 4.74%) of trials. For comparison, the performance of the HDCA (Algorithm II) ranged between 57-70% correct (mean: 62.37%; std: 3.41%), and the performance of HDPCA (Algorithm III) ranged between 66-81% (mean: 71.58%; std: 4.68%). In comparison with the other algorithms considered in this study, we consistently find increased performance of our proposed SWFP algorithm, compared to both the benchmark HDCA algorithm and its modified version (HDPCA; Algorithm III). Indeed, non-parametric Wilcoxon signed rank tests (p < .05) show that the SWFP does significantly better than the HDCA benchmark algorithm in terms of a higher correct classification rate, an increased hit rate and decreased false alarms. We also find that our small modification to the HDCA algorithm (HDPCA; Algorithm III) boosts performance almost to the level of the SWFP. Specifically, hit rates are not found to be significantly different between the two methods. Yet, HDPCA has a higher false alarm rate compared to the SWFP. Congruently, we find higher d′ values (combining hit rates and false alarms) using SWFP compared to both the HDCA and HDPCA algorithms, and significantly smaller d′ values for the benchmark HDCA method compared to its modified version HDPCA. While the SWFP out-performed the two other algorithms, we note that the classification performance on our experimental paradigm was not as high as that previously reported using the benchmark HDCA algorithm of Parra et al. (Algorithm II; [19], e.g. [7]). This might be due to the harder perceptual task we used, with widely varying stimuli and changing targets across blocks, compared to simpler detection of targets vs. backgrounds, which is a kind of pop-out perceptual phenomenon. Thus, we also tested the classification performance of all three algorithms on an easier task, in which targets were defined as images of a single category (Cars) and Non-Targets were scrambled noise images (Experiment 2). In this case, target images were significantly easier to perceptually detect than in our main paradigm, as targets visually pop out. Classification performance (N=4) for this task is summarized in Table I. Results indicate that in this experiment too, as in Experiment 1, the performance of both SWFP and HDPCA was increased compared to the benchmark HDCA algorithm, by an average of nearly 15% correct classification. The following analyses used the SWFP algorithm.
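The per-subject ROC/AUC values and the paired Wilcoxon comparisons reported above can be computed along these lines (our sketch; scikit-learn and SciPy assumed, with made-up arrays standing in for the real per-subject results):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
from scipy.stats import wilcoxon

# Hypothetical single-subject outputs: true labels and classifier scores.
y_true = np.random.randint(0, 2, size=500)
scores = y_true + np.random.randn(500)       # stand-in for SWFP decision values

fpr, tpr, thr = roc_curve(y_true, scores)    # ROC curve, as in Fig. 3a
print("AUC:", roc_auc_score(y_true, scores))

# Paired comparison of per-subject accuracies between two algorithms
# (12 analyzed subjects in Experiment 1), as in the Wilcoxon signed rank tests above.
acc_swfp = np.array([.66, .70, .71, .72, .73, .74, .75, .76, .77, .78, .80, .82])
acc_hdca = acc_swfp - 0.10 + np.random.randn(12) * 0.01    # made-up values
stat, p = wilcoxon(acc_swfp, acc_hdca)
print("Wilcoxon p-value:", p)
```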

Fig. 3: Top left: ROC curves for the SWFP algorithm. One line per subject. Continuous lines: Experiment 1 subjects. Dashed lines: Experiment 2 subjects. Green dots indicate the false alarm and hit rates obtained at the actual threshold in use. The calculated Areas Under the Curve (AUC) per subject are indicated on the right. The remaining panels show performance per subject for each of the considered algorithms (SWFP, HDCA, and HDPCA) in Experiment 1. Top right: Percent correct classification. Bottom left: Hit rate. Bottom right: False alarm rate. Data is based on all recording blocks per subject. Error bars indicate standard deviations across 3 permutations.

TABLE I: Performance comparison on Experiment 2.
         Correct    Hits    False Alarms    AUC
HDCA     82.7%      -       -               -
HDPCA    97.9%      -       -               -
SWFP     97.8%      -       -               -

C. Spatio-Temporal Maps of Discriminating Brain Responses

Using the SWFP classifier allowed us not only to label each trial, but also to investigate the networks most involved in the target detection task, by observing the spatio-temporal distribution of the most discriminative activity. To this end, we study performance as a function of time post-stimulus presentation, as computed at Step 1a of SWFP (see Methods). The temporal dependence is shown for a sample subject in Fig. 4. For each train/test permutation, performance is presented as a function of time post-image presentation. Clearly, the highest classification accuracy is achieved around 250-450 ms, suggesting that brain-recorded activity at these latencies is the most informative about the original label of the image (Non-Target/Target). This is observed both as an increased hit rate for Target images and as reduced false alarms for Non-Target images. Discriminating activity decays back to baseline values around 750 ms post-image presentation. Fig. 5 shows the temporal dependence of correct classification for all subjects at each cross validation permutation. Note that at different permutations the temporal dependence may vary to some extent, but there is a high degree of consistency within subject, across cross validation permutations. The specific pattern of temporal dependence of performance varies across subjects, however, highlighting the somewhat idiosyncratic yet stable pattern of brain responses that usually escapes notice when grand averages are used. For each cross validation permutation, we defined the most discriminative response latency (T_best) as the post-stimulus latency at which the highest percent correct classification is achieved. Fig. 6 shows the probability distributions of the best latencies, calculated over all cross validation permutations. Evidently, different subjects have different preferred latencies for Target/Non-Target discrimination, but almost all are around 300-500 ms post-image presentation (see also Fig. 5), roughly overlapping the latencies of the classic N2-P3 complex observed in mean ERP analyses.
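As a small illustration of how the best latency and its topography are read off the Step I weights (our sketch; the accuracy array and U below are hypothetical placeholders for the outputs of a Step I implementation such as the one sketched earlier):

```python
import numpy as np

# Hypothetical inputs: per-time-point classification accuracy for each
# cross-validation permutation (n_perms x T) and the weighting matrix U (D x T).
n_perms, D, T = 30, 64, 200
accuracy_per_timepoint = np.random.rand(n_perms, T)
U = np.random.randn(D, T)

t_best = accuracy_per_timepoint.argmax(axis=1)     # best latency index per permutation
t_best_ms = t_best * 1000 / 200 - 100              # sample index -> ms (200 Hz, 100 ms baseline)
t_best_med = int(np.median(t_best))                # subject's median best latency
topography = U[:, t_best_med]                      # spatial weights at that latency
topography = topography / np.linalg.norm(topography)   # normalized map, as in Fig. 9
```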

The nature of the SWFP algorithm also allows us to investigate the spatial topography of Target/Non-Target discriminating patterns, as depicted by the classifier's weights at each scalp location. As described in step 1a of the SWFP algorithm, the spatial distribution of the classification weights at time t is given by U(t). Note that these spatial maps represent the classifier's optimal spatial filters, rather than directly depicting the underlying discriminative brain patterns [25], [26]. Therefore, non-zero weights are not only the result of discriminative brain activity but could also be affected by non-discriminative noise reduction effects on the spatial filters/weights of the trained classifier. Fig. 7 depicts, for a sample subject, the mean spatial topography of Target/Non-Target classification weights, averaged over all cross validation permutations, at different time points post-stimulus presentation. It is clear that dominant classification weights build up towards 300 ms post-stimulus presentation, and that for this subject classification weights are maximal around CPz, over the central-parietal area. Spatial weights were quite stable across permutations. Fig. 8 shows the spatial mapping of the classification weights, U(T_best), at the best latencies (T_best), as determined at individual cross validation permutations. For each single permutation, a circle is drawn at each electrode location, with the diameter of the circle reflecting the discrimination weight at that location. Thus, if the weights are similar across permutations, the circles around an electrode should overlap and the line should be thin. In contrast, variance across permutations would show up as concentric rings or as a thicker line. It is clear that, while there are some differences in the spatial weights at different permutations, they are relatively small. That is, spatial locations with large classification weights are consistently large across the different permutations. For further analysis we therefore investigate the corresponding mean spatial distribution of classification weights, averaged at the subject's median best latency across validation permutations (T_best^med). Fig. 9 summarizes the spatial distribution of dominant classification weights for all subjects, each at their best latency for Target/Non-Target classification, U(T_best^med). While peak classification weights at best latencies tend to be at central, parietal or occipital electrodes, the exact location of the classification signal is subject specific. In a few subjects dominant weights were found to be less localized and to also involve frontal regions.

Fig. 4: Temporal Dependence of SWFP Classification Accuracy for a sample subject. Top: Percent correct (left), hit rate (middle), false alarms (right). Performance as a function of time from image presentation (time 0), for all 3 cross-validation train/test permutations. Each row represents a single permutation; the temporal dependence is computed at Step 1a of SWFP (see Methods). Bottom: Mean temporal performance, computed as the average over all 3 permutations. Red: % correct; blue: hit rate; black: false alarms (sample subject).

D. Image Repetitions Improve Classification Performance

In an attempt to increase single image classification accuracy, we tested whether multiple presentations of the same image exemplar improve performance, using a leave-one-out procedure (see Methods). For each image exemplar in its turn, the classifier is trained on trials from all experimental blocks, excluding trials of the considered image exemplar. The classifier is then tested on each of the repetitions of the excluded image exemplar, and each repetition is thus independently classified as Target or Non-Target. The final label of the image exemplar (Target/Non-Target) is decided by majority vote over the repetitions' labels. In the specific experiment considered here, the same image exemplar could appear in different blocks as either a Target or a Non-Target stimulus (e.g. face images were Targets in a Face-Target block, and Non-Targets in all other blocks). Testing was thus performed separately for the image-target blocks and the image-non-target blocks. Performance was computed for each image exemplar, and averaged over all image exemplars in the experiment. We found that this leave-one-out voting procedure over repetitions of each image exemplar dramatically improves image classification performance, by an average of 16.5% correct classification (12.5-20% for different subjects), to near perfect classification in some subjects (mean 89.4%). Specifically, it increases Target hit rates by an average of 20% (17-27% for different subjects), and reduces false alarms by an average of 22% (16-25% for the different subjects), resulting in hit rates of 75-91% (mean 83% hits) and false alarm rates approaching zero (0-9%; mean 4%). Finally, we also investigated the dependence of the leave-one-out voting performance on the number of repetitions used for voting (see Methods for details). Results are presented in Fig. 10. We find that, compared to single trial analysis, the leave-one-out classification accuracy converges to a 2-5% increase in accuracy (reaching nearly 100% correct in some subjects) by using only 6-8 repetitions for voting. Using more than 8 trials for voting hardly affects accuracy.

IV. DISCUSSION

Despite considerable advances in computer vision, the capabilities of the human visuo-perceptual system still surpass even the best artificial intelligence systems, especially in terms of flexibility, learning capacity, and robustness to variable viewing conditions.

Fig. 5: Temporal Dependence of SWFP Classification Accuracy. Each color-coded image represents a different subject. Each row in an image represents a single cross-validation permutation for that subject; color codes percent correct. For each permutation, highly discriminative latencies are those at which percent correct classification is high (red).

Fig. 6: Probability distributions of the best latencies of Target/Non-Target discrimination, evaluated for each subject over the different permutations of train/test cross validation.

Yet when it comes to sorting through large volumes of images, such as micro- and macroscopic medical images or satellite aerial images, humans are generally accurate, but too slow. The bottleneck does not stem mainly from perceptual processes, which are quite fast, but from the time it takes to register the decision, be it orally, in writing, or by a button press. To overcome this impediment, observers can be freed from the need to overtly report their decision, while a computerized algorithm sorts the patterns of their single trial brain responses as images are presented at a very high rate [6], [7], [19]. In this study we re-examined the feasibility of this approach, and provided improvements over previously suggested algorithms. EEG is characterized by a low signal to noise ratio (SNR), and is therefore traditionally analyzed by averaging out the noise over many repetitions of the same stimulus presentation. However, for the purpose of using EEG to label single images in real time (or nearly so), the algorithm must be able to deal with single trials. This was complicated in the present implementation by the need to present the images rapidly (at 10 Hz), such that brain responses to subsequent image presentations overlapped. That is, the response to one image presentation has not yet decayed before the next stimulus is presented. This requires special consideration when selecting a classification algorithm.

Classification performance is clearly sensitive to the algorithm in use, and is based on nontrivial feature extraction, statistical characterization and classification of the data. Some algorithms have been constructed specifically for the RSVP task. In particular, a general framework was proposed by Parra et al. [19] for single trial classification, and implemented specifically for the RSVP task. We have therefore chosen to use it here as a benchmark algorithm to test classification performance on the RSVP data collected using our object-category target detection paradigm. Performance depends not only on the classification algorithm and features in use, but also on other factors, including the particular experimental paradigm of the study. P3 speller tasks, for example, in which the goal is to decipher which letter the observer has in mind, are reported to achieve near 76-80% single trial classification accuracy [27], [28]. Performance in those cases is commonly increased up to 100% by averaging across trials [29]. Correct single trial left/right hand movement classification for motor tasks may be as high as 80%-90% in single trials [3], [2]. Classification performance in several RSVP tasks has also been reported. For example, expert imagery analysts participated in a study by Poolman et al. [2], detecting targets (missile sites, artillery sites and helipads) in satellite images that were presented at 10 Hz. In another RSVP study, by Bigdely-Shamlo et al. [6], satellite images were presented to naïve observers at 12 Hz. Each presented image consisted of a focal center within a blurred surround, and target images contained a clear image of a plane pasted over the focal center. In the study by Parra et al. [19], where the benchmark HDCA algorithm was presented, the focus was on triaging images. Trained image analysts were instructed to identify helipads in monochromatic aerial images. The task was performed twice: once with images presented in random order, and once after the images had been triaged by the EEG classifier. The RSVP task presented the images at either 5 Hz or 10 Hz, and images were reordered by the classifier scores for target detection. Gerson et al. [7] used a similar algorithm to that of Parra et al. [19] to triage a continuous 10 Hz sequence of natural scenes, with target images defined as natural scenes containing at least one person. The subjects' task was to detect the appearance of a person over a background scene. Overall, these paradigms seem to be easier than the one considered in our main Experiment 1, because they only require detection of targets vs. backgrounds. All these cases likely create a pop-out perceptual phenomenon, facilitating target detection. In order to compare performance to the studies reported in the literature, we also introduced Experiment 2. In this paradigm, where subjects were asked to detect target images of cars out of meaningless noise images, we find similar or better performance than previously reported. Specifically, the paper by Poolman et al. [2] reports an average AUC of 0.78 with a 79% hit rate and 46% false alarms, compared to an AUC of 0.98, a 91% hit rate and 1.5% false alarms using SWFP in our study. Correct classification in Bigdely-Shamlo et al. [6] was measured only by AUC, and reported as 0.84 on average across all subjects, compared to an AUC of 0.98 using SWFP in our study (Experiment 2).
Unfortunately, classification numbers in the benchmark study of Parra et al. [19] were not provided, but target detection was shown to improve significantly after the EEG triage. Gerson et al. [7] reported hit rates only slightly lower than our SWFP hit rate, but they did not report the false alarm rate. Since each experiment used different stimulation, it may be difficult to compare these performance measures directly. Using our own data from Experiment 2, the new SWFP outperformed the HDCA even in this simpler paradigm, achieving near perfect (97-99% correct) performance. The HDPCA algorithm achieved better classification than the original HDCA, although not as high as the SWFP. In our main paradigm (Experiment 1), subjects were asked to identify images from a predefined object category out of five. This is a more challenging task, as it requires discrimination between object categories and not merely detection of the presence or absence of a certain object. Performance accuracies are thus expected to be lower. Nevertheless, we find that our proposed SWFP algorithm reaches 66-82% correct performance (mean: 72.89%; std: 4.74%). Moreover, we find that, as in the simpler case of Experiment 2, it outperforms the benchmark HDCA algorithm [19], by 10% correct classification, a difference that is statistically significant. Statistical analysis also demonstrates that SWFP performs more accurately than HDCA both in terms of increased hit rates and decreased false alarms. Additionally, our HDPCA modification of the original benchmark HDCA algorithm [19] boosts performance almost to the level of SWFP, although we find that HDPCA is somewhat less conservative, in that it produces significantly more false alarms than SWFP, resulting in reduced total accuracy. We note that the performance of the benchmark HDCA is sensitive to the window size used for analysis (see Methods: Step 1 of Algorithm II). Decreasing the window size from 100 ms, as reported in the original benchmark study [19], to a 50 ms time window, as implemented in Gerson et al. [7], raises performance by an average of 7% correct in Experiment 1 and 6% in Experiment 2. Even with the smaller window, SWFP somewhat outperforms HDCA in terms of d′ (Wilcoxon signed rank test, p < .05), mainly because of a lower false alarm rate. This may be considered another advantage of using SWFP over both HDCA and HDPCA, as SWFP does not depend on the choice of a specific window size for analysis, but rather takes all time points into account. HDPCA differs from HDCA by the addition of a discrimination-based weighting vector for the time windows, such that time windows that are more important for target/non-target discrimination have a greater effect on classification (see also [7]), as well as by the use of PCA for dimensionality reduction, resulting in noise reduction prior to final classification. In fact, we find that PCA, implemented in both SWFP and HDPCA on the time series from individual channels, performs a kind of spectral decomposition, where the first few principal components turn out to represent the lowest frequency components of the original signal (Supplementary Fig. I). Noise reduction is thus an outcome of choosing the first few principal components, or rather a combination of the lowest frequency components, explaining the original variability in the signal.

Fig. 7: Topography of target/non-target classification weights (U(t)) at different time points post-image presentation for a sample subject. Color indicates the value of a weight, as determined in Step I of SWFP. Dark colors (red/blue) correspond to high classification weights. The maps were averaged across permutations. Weights are normalized by the norm of the mean of the weights across all time samples. The green rectangle highlights times surrounding the subject's best latency for Target/Non-target discrimination (285 ms).

The main difference of SWFP from the other two algorithms (HDCA and HDPCA) is that, while HDCA and its modified version (HDPCA) use the spatial projection vector (u_k) generated in the first step to collapse data from all channel locations onto a single representative global channel (s_k) prior to temporal manipulation of the data, SWFP uses the concatenated information from all channels. Therefore both HDCA and HDPCA discard potentially relevant information, which indeed turns out to be useful for single trial classification. In addition, SWFP amplifies the original data matrix by the spatio-temporal classification weights prior to PCA dimensionality reduction; channels and times that are most important for classification are given even greater impact further in the process. As a result, dimensionality reduction is more effective in extracting relevant features for classification. We designed our main experimental paradigm to consist of several repetitions of each image exemplar. We tested whether voting across the classification decisions for multiple presentations of the same image exemplar improves performance. We used a leave-one-out procedure, by which the classifier is trained on a large database of EEG recorded brain responses, and tested separately on all presentations of a single image exemplar, which was excluded from the training set. We found that a majority-based decision rule per image exemplar dramatically improves image classification performance, by an average of 16.5% correct classification, reaching over 95% correct performance in some subjects. Specifically, it increased Target hit rates by an average of 20%, to exceed 90% hits in some subjects, and reduced false alarms by an average of 22%, to near zero false alarms in some subjects. The important implication for BCI applications is that performance can be improved dramatically for single stimulus exemplars by presenting only six to eight repetitions of each stimulus, resulting in near perfect classification. Thus, in cases where high accuracy is crucial, the extra cost of time required to present every image several times might be justified. Finally, we were also interested in the brain activations supporting single trial classification.

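Returning to the amplification step discussed earlier (the element-wise weighting of the channels-by-time data matrix before PCA), the sketch below conveys the general idea only: it is our own simplified rendering, the weight matrix W and the PCA basis are random placeholders rather than the FLD-derived weights and learned components of SWFP, and all names are assumptions.

    import numpy as np

    # Rough sketch (illustrative only) of weighting a channels x time epoch by a
    # spatio-temporal weight matrix before dimensionality reduction.
    def weight_and_reduce(epoch, W, basis):
        # epoch, W: (n_channels, n_samples); basis: (n_samples, k)
        amplified = epoch * W            # emphasise discriminative channels/time points
        reduced = amplified @ basis      # project each channel's time course onto k components
        return reduced.ravel()           # concatenate features across all channels

    n_channels, n_samples, k = 64, 100, 6
    epoch = np.random.randn(n_channels, n_samples)
    W = np.abs(np.random.randn(n_channels, n_samples))        # placeholder weight matrix
    basis = np.linalg.qr(np.random.randn(n_samples, k))[0]    # placeholder orthonormal basis
    print(weight_and_reduce(epoch, W, basis).shape)           # (384,)

Because every channel's weighted time course is projected and then concatenated, no channel is collapsed away before dimensionality reduction, in contrast to the single global channel used by HDCA and HDPCA.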
Fig. 8: Spatial distribution maps of classification weights at the best latency of target/non-target discrimination (sample subject), shown for different cross-validation permutations. Circle size represents the absolute classification weight for target/non-target discrimination at each electrode location; large circles indicate spatial locations that are important for classification. At each location there are three circles, one per train/test permutation (each in a different shade of yellow-orange-red), so that the spread of circle sizes represents the variance across permutations.

Finally, we were also interested in the brain activations supporting single-trial classification. We therefore analyzed the spatio-temporal distributions of discriminating activity, without any prior assumptions about the times and spatial locations most important for target detection. Specifically, in this study we found that the best time and topography for target/non-target discrimination were reminiscent of the P300 target-detection ERPs reported in the literature [10], namely central-posterior regions, most informative starting around 300-400 ms post-image presentation. This provides a proof of concept for the SWFP algorithm, suggesting that it can be used in the future to automatically explore the spatio-temporal activations underlying other discrimination tasks. Interestingly, we found variance across subjects in the exact time and location of the dominant classification weights, indicating individual differences. For example, in a few subjects the dominant classification weights were less localized and also involved frontal scalp regions contributing to target/standard discrimination. While the pattern of classification weights in some subjects differs from the classical spatio-temporal distribution of P300 ERPs, we find the patterns to be reliable and informative across trials; in fact, some subjects with a non-classical spatio-temporal distribution of weights show the highest single-trial classification performance. This suggests that the spatio-temporal brain representation of target detection is subject-specific, yet for each subject it is consistent across trials.

V. ACKNOWLEDGMENT

The study was supported by a grant from the Israeli Defense Ministry. We thank Shani Shalgi and Einat Wigderson for their help in running the experiments and analyzing the data.

REFERENCES

[1] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, p. R1, 2007.

Fig. 9: Topography of classification weights at the best latency of target/non-target discrimination, shown for each subject (each panel is labeled with that subject's best latency, approximately 280-490 ms post-image presentation). The maps were averaged across permutations, and each subject's map is normalized by its Euclidean norm. Color represents the classification weight for target/non-target discrimination at each spatial location; dark colors (red/blue) indicate spatial locations that are important for classification.

[2] G. Pfurtscheller, C. Neuper, A. Schlogl, and K. Lugger, "Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters," IEEE Transactions on Rehabilitation Engineering, vol. 6, 1998.
[3] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Muller, "Optimizing spatial filters for robust EEG single-trial analysis," IEEE Signal Processing Magazine, vol. 25, p. 41, 2007.
[4] T. Felzer and B. Freisleben, "Analyzing EEG signals using the probability estimating guarded neural classifier," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, 2003.
[5] K.-R. Muller, C. W. Anderson, and G. E. Birch, "Linear and nonlinear methods for brain-computer interfaces," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, 2003.
[6] N. Bigdely-Shamlo, A. Vankov, R. Ramirez, and S. Makeig, "Brain activity-based image classification from rapid serial visual presentation," IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2008.
[7] A. Gerson, L. Parra, and P. Sajda, "Cortically coupled computer vision for rapid image search," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, 2006.
[8] Y. Huang, D. Erdogmus, S. Mathan, and M. Pavel, "Large-scale image database triage via EEG evoked responses," IEEE, 2008.
[9] M. Kaper, P. Meinicke, U. Grossekathoefer, T. Lingner, and H. Ritter, "BCI Competition 2003 - Data set IIb: Support vector machines for the P300 speller paradigm," IEEE Transactions on Biomedical Engineering, vol. 51, 2004.
[10] E. Donchin, W. Ritter, and W. McCallum, "Cognitive psychophysiology: The endogenous components of the ERP," in Event-Related Brain Potentials in Man, 1978.
[11] E. Donchin, K. Spencer, and R. Wijesinghe, "The mental prosthesis: assessing the speed of a P300-based brain-computer interface," IEEE Transactions on Rehabilitation Engineering, vol. 8, 2000.
[12] E. Sellers and E. Donchin, "A P300-based brain-computer interface: Initial tests by ALS patients," Clinical Neurophysiology, vol. 117, 2006.
[13] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, 2002.
[14] J. Wolpaw and D. McFarland, "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans," Proceedings of the National Academy of Sciences of the United States of America, vol. 101, 2004.
[15] J. Muller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, "Designing optimal spatial filters for single-trial EEG classification in a movement task," Clinical Neurophysiology, vol. 110, 1999.
[16] K.-R. Müller, M. Tangermann, G. Dornhege, M. Krauledat, G. Curio, and B.
Blankertz, "Machine learning for real-time single-trial EEG analysis: From brain-computer interfacing to mental state monitoring," Journal of Neuroscience Methods, 2008.
