RBF Based Responsive Stimulators to Control Epilepsy

RBF Based Responsive Stimulators to Control Epilepsy

by

Siniša Čolić

A thesis submitted in conformity with the requirements of the degree of Master of Applied Science
Department of Electrical and Computer Engineering
University of Toronto

Copyright by Siniša Čolić 2009

RBF Based Responsive Stimulators to Control Epilepsy

Siniša Čolić
Master of Applied Science
Department of Electrical and Computer Engineering
University of Toronto
2009

Abstract

Deep Brain Stimulation (DBS) has received attention in the scientific community for its potential to suppress epileptic seizures. To date, DBS has achieved only marginal positive results. We believe that a highly complex possibly chaotic (HPC), biologically inspired stimulation is superior to periodic stimulation. Using Radial Basis Functions (RBFs), we modeled interictal and postictal time series based on electroencephalograms (EEGs) of rat hippocampus slices under low Mg2+/high K+. We then compared the RBF based interictal and postictal stimulations to periodic stimulation using a Cognitive Rhythm Generator (CRG) model of spontaneous Seizure-Like Events (SLEs). The result was a significant improvement in seizure suppression with the HPC stimulators at lower gains compared to the periodic signal. This suggests that biologically inspired HPC stimulators can achieve better results while confining the stimulation to a narrower region of the brain.

Acknowledgements

I would like to thank Berj Bardakjian for his guidance, understanding and abundant optimism that would have me leaving each Thursday lab meeting ready to take on the world. I would like to thank my group members and friends, Marija Cotic, Osbert Zalay, Eunji Kang, Demitre Serletis, Josh Dian, Angela Lee and Dave Stanley, for the helpful discussions and advice. I would specifically like to thank Eunji for providing me with the experimental recordings; Osbert for providing me with the complexity analysis program and for developing the CRG stimulation protocol and the ROC evaluation methodology; and Josh for the quick fixes and helpful suggestions. I would also like to thank Dave and Berj for proofreading my thesis. Finally, I would like to thank my parents for always being there through the good and the bad.

Table of Contents

1 Introduction and Motivation
   1.1 Stimulation Literature Review
      1.1.1 Continuous Stimulation
      1.1.2 Responsive Stimulation
   1.2 Outline
   1.3 Hypothesis
2 Chaos and the Brain
   2.1 Chaos and Complexity
      2.1.1 Lyapunov Exponent
      2.1.2 Correlation Dimension
   2.2 The Brain and Chaos
3 Modeling Highly Complex Possibly Chaotic Time Series
   3.1 Time Series Modeling
   3.2 RBF Model
      3.2.1 RBF Architecture
      3.2.2 Recurrent RBF
      3.2.3 RBF Training Techniques (Gradient Descent, Regression Tree, Forward Selection)
   Application to Henon Map (Henon Map, Data Preprocessing, RBF Training of Henon Map)
   Application to Non-Ictal Time Series (Low Mg2+/High K+ Animal Data, Data Preprocessing, RBF Training of Non-Ictal Time Series: Gradient Descent, Forward Selection, Tree Regression)

4 CRGs and Modeling Spontaneous Seizure-Like Events
   Literature Review
   CRG Based Spontaneous Seizure-Like Model
5 Controlling Seizures
   Application of Stimulation to CRGSLE Model
   Periodic Stimulator Frequency Selection
   Results of RBF Stimulation
   ROC Measurements (ROC Curve Construction, ROC Curve Comparison, Area Under ROC Curve)
6 Discussion and Future Work
   RBF Model Captures Complexity
   Complex RBF Stimulation Outperforms Periodic
   Low Gain More Successful in Complex Stimulation
   Future Work
Conclusion
Bibliography

List of Tables

Table 3.1: Henon map gradient descent training parameters and results
Table 3.2: Complexity of interictal and postictal time series
Table 3.3: Interictal gradient descent training parameters and results
Table 5.1: Determination of the ROC cases
Table: Gain ROC area significance

List of Figures

Figure 1.1: Extracellular recording of seizure time series
Figure 3.1: Radial Basis Function model
Figure 3.2: Comparison of non-chaotic and chaotic Henon map time series
Figure 3.3: Comparing RBF Henon map model to chaotic time series
Figure 3.4: RBF interictal model after gradient descent training
Figure 3.5: Results of interictal RBF training with forward selection
Figure 3.6: RBF interictal model after training with forward selection
Figure 3.7: Results of interictal RBF training with tree regression
Figure 3.8: RBF interictal model after training with tree regression
Figure 3.9: Results of postictal RBF training with tree regression
Figure 3.10: RBF postictal training with tree regression

Figure 4.1: CRGSLE model
Figure 4.2: CRGSLE model output waveforms produced
Figure 4.3: Comparison of the CRGSLE seizures to the actual seizures being modeled
Figure 5.1: Stimulation setup
Figure 5.2: FFT comparison of RBF stimulator and 12 Hz periodic stimulator
Figure 5.3: Stimulation of the CRGSLE model with interictal, postictal and periodic stimulation models
Figure 5.4: ROC comparison of the periodic, interictal and postictal stimulation
Figure 5.5: ROC area under the curve comparison of the periodic, interictal and postictal stimulations
Figure 5.6: ROC area for different gains of the stimulation models, 50 reinitializations
Figure 5.7: ROC area for different gains of the stimulation models, 500 reinitializations
Figure 5.8: ROC area for different gains and different periodic frequencies

List of Abbreviations

ANN     Artificial Neural Network
CRG     Cognitive Rhythm Generator
CRGSLE  Cognitive Rhythm Generator Seizure-Like Event Model
DBS     Deep Brain Stimulation
EEG     Electroencephalogram
EMG     Electromyogram
exthr   Stimulation Threshold
FPGA    Field Programmable Gate Array
FS      Forward Selection
GCV     Generalized Cross-Validation Error
HPC     Highly Complex Possibly Chaotic
LPR     Low Complexity Possibly Rhythmic
Lmax    Maximum Lyapunov Exponent
MSE     Mean Square Error
NRE     Neural Rhythm Extractor
RBF     Radial Basis Function
ROC     Receiver Operating Characteristic
SLE     Seizure-Like Event
STLmax  Short Time Maximum Lyapunov Exponent
TR      Tree Regression
VNS     Vagus Nerve Stimulator

CHAPTER 1
INTRODUCTION AND MOTIVATION

Epilepsy is a serious neurological disorder often accompanied by seizure, or ictal, events. Seizures are characterized as a transition from normal, highly complex possibly chaotic (HPC) activity to low complexity possibly regular (LPR) activity [1][2][3]. The majority of epileptics (approx. 80%) can be treated with anticonvulsive drug therapies, which inhibit the channel transport mechanisms [4]. Of the remaining 20%, some resort to surgery, which carries many risks. Those who are not candidates for surgery have turned to a newer form of treatment known as Deep Brain Stimulation (DBS). Although still in the early stages of epilepsy research, DBS has shown promising results in treating patients with intractable epilepsy [5]. DBS is a crude stimulation technique that consists of implanting electrodes around the seizure focus and applying high voltage periodic stimulation to counteract seizures [5]. These DBS stimulators are applied for fixed durations or continuously, whether or not the patient still needs the stimulation. This lack of responsiveness is a major shortcoming of the DBS

treatment. Here we propose a new responsive, highly complex possibly chaotic stimulation technique inspired by biological time series recordings.

A seizure time series can be broken down into three main regions, which we refer to as the interictal, ictal and postictal (see figure 1.1). The ictal region is where the characteristics of a seizure are present; the interictal region occurs just prior to the ictal, and the postictal region just after it. From now on we will use the term non-ictal to refer to the interictal and postictal regions of a seizure time series. As stated earlier, the work of [1][2][3] shows that normal non-ictal brain activity is highly complex possibly chaotic (HPC), whereas ictal activity is of lower complexity, possibly rhythmic. Our goal is to provide a stimulation technique which sustains the brain in the highly complex state, preventing the transition to low complexity seizure activity. The stimulation is only to be applied in the presence of a seizure. To this end we have constructed a responsive model based on the non-ictal brain activity.

The model chosen to represent the healthy non-ictal activity was the Radial Basis Function (RBF). It was chosen for its success in modeling highly complex time series in the financial sector, its natural generalization tendencies, and its low processing requirements [6][7]. The RBF model was trained on extracellular recording samples of seizure-like events (SLEs) accumulated from multiple slices of the rat hippocampus under the in-vitro low Mg2+ epilepsy model.

Due to the chaotic nature of the brain signal, we never intended to make perfect predictions from the time series data. Instead, we opted to create models of non-ictal activity from the interictal and postictal regions of an SLE whose wave shape and complexity match the original training data. To assess the feasibility of our model for seizure control, we employed our stimulation paradigm on a coupled oscillator model of SLEs [8]. The goal was to achieve an improvement in seizure reduction with our biologically inspired HPC stimulation over the presently used periodic stimulation.

Figure 1.1: Extracellular recording of seizure time series. The data was sampled at 2 kHz from rat hippocampal slices under the influence of low Mg2+. We have further broken the data into three regions, referred to as the interictal, ictal and postictal.

1.1 Stimulation Literature Review

The use of stimulation to control seizures has been around for many years. The most common, and only FDA approved, implantable device for the treatment of epilepsy is the Vagus Nerve Stimulator (VNS) [9][10]. The vagus nerve stimulator applies periodic electrical pulses to the left vagal nerve, which then make their way to the brain. Recent studies have shown that only 30-40% of patients undergoing the treatment experienced a 50% seizure reduction [11]. An alternative option known as Deep Brain Stimulation (DBS) has recently become a popular technique to control epilepsy [9]. In the past DBS has been fairly successful in treating disorders

such as Parkinson's and depression, and it is believed that the same level of success can be achieved in treating epilepsy. DBS uses periodic stimulation, which can be described as two square pulses applied one after the other, one positive and the other negative. The DBS treatment is highly dependent on the placement of the electrodes and the type of stimulation used. There are two main styles of DBS stimulation. The first, and the one most often used, is continuous stimulation with periodic waveforms [9][12][13][14]. The other is responsive stimulation, which is beginning to gain notice, although it is much harder to achieve as it requires that seizures be detected as early as possible [3][9].

1.1.1 Continuous Stimulation

Continuous stimulation, as the name implies, is a continuous application of the stimulation whether the subject is experiencing a seizure or not [9]. The stimulation can also be applied on a timer basis, where it turns on and off on a fixed interval (e.g. on for one minute, off for two minutes). Many human trials have been performed, with varying results [9]. One recent study used periodic stimulation over a range of frequencies to treat temporal lobe epilepsy in the hippocampus [13][14]. It showed remarkable results, with most subjects experiencing a 50% reduction in seizure frequency; a significant number of patients experienced a 90% reduction and became completely seizure free. A subsequent study by a Canadian group that tried to reproduce these results found an improvement of only 15% [15]. The recurring story of DBS is that the results are not repeatable. As well, many of the patients who became seizure free only remained so for a short time; after a couple of years the symptoms returned [12].

1.1.2 Responsive Stimulation

Responsive stimulation differs from continuous stimulation in that the stimulation is only applied when needed [9]. Determining when the stimulation is needed is a much more difficult task and requires some way to detect the approaching seizure. There are two ways in which responsive stimulation can be applied. The first is to apply stimulation once the seizure is observed, although this may often be too late to stop the seizure [16][17]. The second is to use a predictive system that warns of an impending seizure event and applies the stimulation prior to the event, in the hope that the seizure will not occur at all [3]. In a recent study by Fountas et al., eight patients had an external Responsive Neurostimulation (eRNS) system implanted [16][17]. The eRNS system detected the occurrence of a seizure and applied periodic pulses ranging in frequency from 1-333 Hz, with amplitudes in the milliampere range. Of the 8 patients, 7 had 45% less seizure activity and 2 had more than a 75% reduction in seizure activity.

There have been numerous DBS trials with marginal success rates, and often the results are not repeatable. DBS employs a periodic stimulation model, whereas the brain has been shown to be highly complex possibly chaotic (HPC). It is for that very reason that we constructed an RBF based stimulator using the highly complex features found in the interictal and postictal regions of the brain. In the following chapters we explain how the CRGSLE model was configured to compare the common periodic DBS stimulation to the highly complex interictal and postictal based stimulations.

1.2 Outline

This thesis outlines the initial steps in a long process towards viable human treatment of epileptic seizures. The first step is the creation of a stimulation model and its application to an in-silico model of spontaneous seizure-like events. The subsequent steps are introduced in the future work section of this thesis.

Having introduced the problem and motivation for the thesis in Chapter 1, we move on to Chapter 2, where we provide the background necessary to define chaos, show how it is quantified, and discuss its relevance to the brain and epilepsy.

Chapter 3 focuses on the RBF. There we defend the use of the RBF in modeling brain complexity from the extracellular time series data. Further, we explain in detail the structure of the RBF and the training techniques used to model the complexity of the brain. Chapter 3 concludes with the training results using the different training methods and the verification of the model selected.

The next two chapters focus on the generation and control of Seizure-Like Events (SLEs). In Chapter 4 we describe the SLE model created from the Cognitive Rhythm Generator (CRGSLE). Then in Chapter 5 we show how the CRGSLE model was modified to test the control efficacy of our RBF stimulations. Chapter 5 concludes with the results of stimulating with the interictal and postictal RBF models compared to the periodic stimulation commonly used in the DBS literature.

Chapter 6 concludes the work with a discussion of the results and the future work planned based on them.

1.3 Hypothesis

Radial Basis Functions (RBFs) will capture the highly complex possibly chaotic (HPC) features present across multiple slices of rat hippocampus from the non-ictal extracellular time series. Stimulation of the CRGSLE model with the HPC RBF generated non-ictal signals will achieve better results in terms of suppressing ictal events than those achieved with the periodic signal used in Deep Brain Stimulation (DBS).

CHAPTER 2
CHAOS AND THE BRAIN

In this chapter we introduce the concept of chaos and how it is measured. We then proceed to provide evidence for the existence of HPC activity in a normally functioning brain and of lower complexity, possibly rhythmic activity in a seizing brain.

2.1 Chaos and Complexity

Chaos is long-term aperiodic behaviour in a nonlinear deterministic system that exhibits sensitive dependence on initial conditions [18]. There are three important characteristics in this statement that separate chaotic systems from others. First, they produce behaviour that never repeats, not even after long-term observation. Second, chaotic systems are not driven by random inputs; their irregularity arises from the nonlinear evolution of trajectories. Finally, the most important distinguishing characteristic is that chaotic systems are highly sensitive to initial

conditions. This means that two trajectories starting close to each other will diverge exponentially with time, often referred to as the butterfly effect [18][19]. If the system's equations are known, the chaotic behaviour of the system can be computed analytically. In general, however, the system equations are not known, and many times the only thing available is the time series of some variable in the system (e.g. a voltage). Over the years many methods have been developed to quantify how chaotic or complex a system is. Here we present two of the commonly used methods.

2.1.1 Lyapunov Exponents

The most important distinguishing feature of a chaotic system is the sensitive dependence on initial conditions, in the sense that neighbouring trajectories separate exponentially fast [18][20]. A common way to quantify this property is to use Lyapunov exponents. Consider an n-dimensional sphere of initial conditions in n-dimensional state space. As it evolves, the sphere deforms into an infinitesimal ellipsoid, where the growth of each principal axis k of the ellipsoid can be described by

\delta_k(t) \sim \delta_k(0)\, e^{\lambda_k t},   (2.1)

where \delta_k(0) represents a finite initial separation between two trajectories. The start and end separations are related exponentially through the exponents \lambda_k, which are known as the Lyapunov exponents. A positive Lyapunov exponent indicates the presence of chaos, while a negative or zero Lyapunov
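The exponential divergence in equation 2.1 can be illustrated with a short numerical sketch. This example is not from the thesis: the chaotic logistic map stands in for the unknown system, simply to show two nearby trajectories separating.

```python
# Illustrative sketch (not from the thesis): the exponential divergence
# of equation 2.1, i.e. the "butterfly effect". The chaotic logistic map
# x_{n+1} = 4 x_n (1 - x_n) stands in for the unknown system; the thesis
# itself works with EEG time series.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x_a, x_b = 0.3, 0.3 + 1e-10   # two trajectories, initially 1e-10 apart
seps = []
for _ in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    seps.append(abs(x_a - x_b))

# The tiny initial separation is amplified to order one within a few
# dozen iterations, after which it saturates at the attractor's size.
print(seps[0], max(seps))
```

The saturation at the attractor size is why the renormalization step in Wolf's method, discussed below, is needed when estimating \lambda from data.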

exponent means the system is non-chaotic. For a system to be considered chaotic, only one of the Lyapunov exponents needs to be positive; in other words, the maximum Lyapunov exponent is greater than 0 in chaotic systems [18][19][20].

A successful method for calculating the Lyapunov exponent from a time series is known as Wolf's method [20][21]. Wolf's method takes a reference trajectory and follows the divergence of neighbouring trajectories from it. To ensure that the separation between two trajectories does not diverge to infinity or to extremely large values, it is often necessary to renormalize: a new neighbouring point is picked every time a threshold value is exceeded, and the process continues. The divergence rates are then averaged, and from this average the maximum Lyapunov exponent is obtained. The drawback of Wolf's method is that it requires many time series points to calculate the divergence. Often the data is nonstationary and may contain multiple regions of chaotic and non-chaotic behaviour. Therefore, other short-time techniques such as the short-time maximum Lyapunov exponent (STLmax) [22][23] and Rosenstein's method [24] are used. In this thesis we opted for the STLmax method, which is based closely on Wolf's algorithm; the details are outlined in Iasemidis et al., 1990 [22].

2.1.2 Correlation Dimension

Much like the Lyapunov exponent, the correlation dimension tries to quantify the chaotic behaviour of a system. The correlation dimension is a geometrical quantity that characterizes the minimal number of variables needed to fully describe the dynamics of motion [21]. The larger
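When the system's equations are known, the largest Lyapunov exponent of a one-dimensional map reduces to the trajectory average of ln|f'(x)|, which makes for a compact sketch. This is the easy case the text contrasts with EEG data, where Wolf's or the STLmax method must be used instead; the example system here is an assumption, not from the thesis.

```python
import math

# Sketch of a direct largest-Lyapunov-exponent estimate, assuming the
# system's equations are known (a 1-D map). For such a map, lambda_max
# is the trajectory average of ln|f'(x_n)|.

def lyapunov_logistic(x0=0.2, n=100_000, burn=1_000):
    f = lambda x: 4.0 * x * (1.0 - x)        # logistic map, r = 4
    x = x0
    for _ in range(burn):                     # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        deriv = abs(4.0 * (1.0 - 2.0 * x))    # |f'(x)|
        acc += math.log(max(deriv, 1e-300))   # guard against log(0)
        x = f(x)
    return acc / n

lam = lyapunov_logistic()
print(lam)   # the analytic value is ln 2 ~ 0.693: positive, hence chaotic
```

A positive estimate such as this one is exactly the signature of chaos described above; a periodic signal would yield a non-positive value.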

the number of variables needed, the more chaotic the system is. Grassberger and Procaccia devised an efficient way to estimate this quantity, which has become the standard method for calculating the correlation dimension [18][25]. The Grassberger and Procaccia method works by fixing a point x on the attractor A and letting N_x(\epsilon) denote the number of points of A within a ball of radius \epsilon centred on x. The number of points is then measured as the radius is increased. As the radius grows, the number of points inside the ball centred at x grows according to the power law

N_x(\epsilon) \sim \epsilon^{d},   (2.2)

where d is the correlation dimension. Generally the result varies with the selection of the fixed point x. To get a more accurate result, many different fixed points are used in the calculation and their average is taken to find the correlation dimension [18].

2.2 The Brain and Chaos

The brain is composed of billions of neurons joined by an enormous number of synaptic connections. These neurons join together to form the different systems of the brain, such as the cerebellum, neocortex, amygdala and hippocampus, to name a few. No man-made system in existence can match the complexity of the brain. Still, the question arises whether or not the brain is chaotic.
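The power law of equation 2.2 can be sketched numerically by counting pairs of points within a radius \epsilon and reading the dimension off the log-log slope. The data set below is an assumption for illustration only: points on a circle, a one-dimensional curve, so the estimated dimension should come out near 1.

```python
import math, random

# Sketch of the Grassberger-Procaccia idea (equation 2.2): count pairs
# within radius eps and estimate d from the power-law slope. A circle
# (a 1-D curve) is used as a stand-in attractor.

random.seed(0)
pts = [(math.cos(t), math.sin(t))
       for t in (random.uniform(0, 2 * math.pi) for _ in range(1500))]

def corr_sum(eps):
    """Fraction of point pairs closer than eps."""
    n, count = len(pts), 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < eps * eps:
                count += 1
    return 2.0 * count / (n * (n - 1))

e1, e2 = 0.05, 0.2
d = math.log(corr_sum(e2) / corr_sum(e1)) / math.log(e2 / e1)
print(d)   # slope of log C(eps) vs log eps -> near 1 for a curve
```

Averaging over all pairs, as done here, is the usual practical variant of averaging over many fixed points x.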

Electroencephalogram (EEG) readings, which measure the variability of the electric field in time and space due to the firing of neuronal populations, may have provided evidence for the existence of highly complex possibly chaotic neurodynamics in the brain. Some of the early work by Babloyantz et al. used correlation dimension measurements to assess the complexity of the different stages of the sleep cycle [26]. They measured a correlation dimension greater than 4 for the different stages and concluded that the brain possesses chaotic dynamics in the sleep state. Further, Fell et al. provided evidence for the existence of chaotic behaviour in the brain [27]: they applied Wolf's algorithm to time series gathered from the different stages of sleep and obtained a positive Lyapunov exponent. Lastly, Babloyantz et al. compared the correlation dimension of a patient in the sleep state to an epileptic state and found that the sleep state had a correlation dimension of 4.05, while the epileptic state had a correlation dimension of 2.05 [28]. The drop in dimension supports the view that during a seizure a patient is trapped in a lower-dimensional, less chaotic state, and only when the state returns to a higher complexity can normal brain function resume.

Much of the work in this thesis relies on the assumption of the existence of HPC neurodynamics in normal non-ictal brain activity. Likewise, we assume the existence of lower complexity, possibly rhythmic neurodynamics during the ictal region. This assumption is well established in the literature [2][3][20][28].

CHAPTER 3
MODELING HIGHLY COMPLEX POSSIBLY CHAOTIC TIME SERIES

In this chapter the challenges of modeling complex time series are outlined in detail. The choice of the RBF model is defended using references from the literature. The RBF model is then decomposed into its architecture and its learning techniques. The chapter concludes with a validation of the model's ability to produce time series that match the characteristics of the highly complex non-ictal recordings measured in the brain.

3.1 Time Series Modeling

We modeled the non-ictal time series from the extracellular readings of rat hippocampal slices. As explained in section 2.2, this type of signal is non-stationary and HPC. The chaotic nature of the time series meant that the model would not simply be performing pattern recognition. Rather, it would have to generalize to underlying features that are not clearly visible but highly relevant. These features are the key to finding the right stimulation to prevent seizure propagation. There are also multiple sources of noise embedded in the system that

need to be avoided (e.g. noise from the setup and recording instruments, such as the 60 Hz harmonic). The model used has to be able to generalize easily and avoid the traps caused by the presence of noise. This problem turns out to be analogous to forecasting stock market trends: stock market time series are chaotic and highly noisy [7]. Using stock market time series prediction as a starting point, it was found that radial basis functions (RBFs) are very successful at time series prediction [6][7]. RBFs are very similar to ANNs, with the distinguishing difference that the inputs are first passed through a non-linear transformation and then combined by a linear summation, whereas Artificial Neural Networks (ANNs) first combine the inputs through a linear summation and then perform a non-linear transformation on those sums. The non-linear transformations in ANNs are static, whereas in RBFs the non-linear transformation of the inputs is dynamic during training, because the parameters of the RBFs are updated. To compensate for the static non-linear transformation, ANNs have multiple layers, which add higher complexity but further cost in training time. RBFs, in contrast, have only one layer, and the relationship between the weights and the output is linear; the hardest part of training therefore lies in finding the parameters of the non-linear RBF transformation.

3.2 RBF Model

The RBF model can be described in two parts: first the architecture, and second the learning techniques. In the following sections we first describe the standard RBF architecture, followed by the slight modification that makes the RBF function in recurrent mode. The training of the RBF consisted of three main learning techniques. The first and standard technique of RBF training is the gradient descent method, based on previous work in the group by Courville [19]. The other learning techniques used were Tree Regression (TR) and Forward Selection (FS), applied through an RBF Matlab training function created by the group based at the University of Edinburgh, Scotland [29][30].

3.2.1 RBF Architecture

The radial basis function (RBF) model defines an output y_n as a linear expansion of radial functions of the input \mathbf{x}_n, as shown by

y_n = w_0 + \sum_{k=1}^{N} w_k \phi_k(\mathbf{x}_n),   (3.1)

where \phi_k(\mathbf{x}_n) is the output of the kth radial basis function given the input vector \mathbf{x}_n of dimension m, N is the number of RBFs, and the weight w_k is the influence associated with the kth RBF. The RBF in our model was chosen to be the Gaussian RBF, as shown by

\phi_k(\mathbf{x}_n) = \exp\!\left( -\sum_{j=1}^{m} \frac{(x_{nj} - c_{kj})^2}{r_{kj}^2} \right),   (3.2)

where the vector \mathbf{c}_k is the centre, or mean, of the kth RBF and the vector \mathbf{r}_k represents its radius. The coefficient term of the Gaussian was omitted, as it only modifies the scale and adds no further complexity. A visual representation of the RBF model is shown in figure 3.1 below.

Figure 3.1: Radial Basis Function model. The RBF is composed of three layers. The input layer takes in an m-dimensional input from the time series recording. The hidden layer contains N RBFs, each of which outputs a value based on the proximity of the input vector to its Gaussian centre. Finally, the output layer is the sum of all the RBF outputs multiplied by their weight factors w_k.

3.2.2 Recurrent RBF

In its standard mode the RBF model is used to make a prediction based on a given input. There is an alternative mode of operation, known as recurrent mode, where the model is initialized with a single input sample and then allowed to generate predictions indefinitely based on that first input [31]. To achieve this recurrent mode of operation, the model output y_n is fed back to form the next input, as shown by

x_{n+1} = w_0 + \sum_{k=1}^{N} w_k \phi_k(\mathbf{x}_n),   (3.3)

where x_{n+1} is the next point in the time series generated when the RBF functions in recurrent mode. It is then incorporated into the next input vector to make the prediction of x_{n+2}. The recurrent mode of operation is a better validation of the model's predictive capabilities [32]. If the model is able to capture the intrinsic features of the training time series, then it should be able to maintain activity indefinitely without converging to zero. This was one of the main criteria used in our evaluation and selection of RBF models.

3.2.3 RBF Training Techniques

Training the RBF consists of finding good parameters for modeling the training time series. These parameters are the Gaussian centres (c) and radii (r), along with the weights (w). Further parameters optimized are the embedding dimension (m) and the number of radial basis functions (N) needed to accurately model the given time series. To this end we used three learning techniques:

1. Gradient descent
2. Regression tree
3. Forward selection
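Equations 3.1-3.3 can be sketched end to end as follows. None of the specifics below come from the thesis: the embedding is one-dimensional, the centres are fixed on a grid, a single shared radius is used, the weights are fit by ordinary least squares rather than the training techniques listed above, and the training signal is a logistic map rather than hippocampal recordings.

```python
import numpy as np

# Minimal Gaussian-RBF sketch of equations 3.1-3.3 (assumptions: m = 1,
# grid centres, shared radius, least-squares weights, toy chaotic data).

x = np.empty(500); x[0] = 0.234
for n in range(499):                     # generate a chaotic series
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])
X, y = x[:-1], x[1:]                     # one-step training pairs

C = np.linspace(0.0, 1.0, 15)            # centres c_k
r = 0.1                                  # common radius r_k
Phi = np.exp(-((X[:, None] - C[None, :]) ** 2) / r ** 2)   # eq. 3.2
Phi = np.hstack([np.ones((len(X), 1)), Phi])               # bias (w_0)
w = np.linalg.lstsq(Phi, y, rcond=None)[0]                 # eq. 3.1 weights

def predict(xn):
    phi = np.exp(-((xn - C) ** 2) / r ** 2)
    return w[0] + phi @ w[1:]            # eq. 3.1

# Recurrent mode (eq. 3.3): seed with a single sample, then feed each
# output back in as the next input.
z, traj = 0.3, []
for _ in range(200):
    z = float(predict(z))
    traj.append(z)

# A model that captured the dynamics keeps producing bounded, lively
# activity instead of collapsing to zero or blowing up.
print(min(traj), max(traj))
```

The final check mirrors the selection criterion stated above: a recurrent run that neither diverges nor decays to a constant is evidence that the model generalized to the underlying dynamics.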

In the following subsections the algorithms of all three learning techniques are summarized.

Gradient Descent

Gradient descent is the standard learning technique for optimization problems and was the starting point for the training of the RBFs. The gradient descent method uses the gradient of the error with respect to each of the three parameters mentioned above (c, r, w) to find how the error changes as those parameters increase or decrease. The error is calculated by equation 3.4,

E = \frac{1}{D} \sum_{n=1}^{D} (\hat{y}_n - y_n)^2,   (3.4)

where \hat{y}_n is the predicted time series value and D is the length of the training time series data. After many training epochs the parameters and the error slowly converge to one of a number of possible error minimums, also known as local minimums. Using the previous work done by Courville [19] as a guide, the gradients were calculated by differentiating the error function (equation 3.4) with respect to the three parameters (c, r, w), where c and r are of size [N x m] and w is of size [N+1], with an extra entry for the bias. The gradients are shown below,

\frac{\partial E}{\partial w_k} = \frac{2}{D} \sum_{n=1}^{D} (\hat{y}_n - y_n)\, \phi_k(\mathbf{x}_n), \qquad \phi_0(\mathbf{x}_n) \equiv 1,   (3.5)

\frac{\partial E}{\partial c_{kj}} = \frac{4}{D} \sum_{n=1}^{D} (\hat{y}_n - y_n)\, w_k\, \phi_k(\mathbf{x}_n)\, \frac{x_{nj} - c_{kj}}{r_{kj}^2},   (3.6)

\frac{\partial E}{\partial r_{kj}} = \frac{4}{D} \sum_{n=1}^{D} (\hat{y}_n - y_n)\, w_k\, \phi_k(\mathbf{x}_n)\, \frac{(x_{nj} - c_{kj})^2}{r_{kj}^3},   (3.7)

The gradients are then used to update the parameters such that after each training epoch a discrete step is taken down the error surface towards a local error minimum. The standard way to do this is simply to use a predetermined learning rate to control the step size. This is where we diverged from [19]. Instead, the step size was calculated using the conjugate gradient method described by Shewchuk [33]. It requires only a function that outputs the error and gradient information; it then uses the conjugate gradient technique to find the best step direction efficiently, and updates the parameters recursively until the desired stopping criterion is met. Training with gradient descent can be stopped in many ways. One way is to stop when the error reaches a low enough value; another is to stop when the error reduction between epochs n and n+1 is smaller than a predetermined value. In our setup we used a predetermined number of training epochs as the stopping criterion.
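The update loop built from equations 3.4-3.7 can be sketched as follows. Two simplifications relative to the thesis: a plain fixed learning rate stands in for the conjugate-gradient step size of Shewchuk's method, and the training data is a toy one-dimensional signal rather than the hippocampal recordings.

```python
import numpy as np

# Sketch of the gradient updates in equations 3.4-3.7 with a fixed
# learning rate (an assumption; the thesis uses conjugate gradients).

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))          # inputs, m = 1
y = np.sin(3 * X[:, 0])                        # target values
D, N = len(X), 8

c = rng.uniform(-1, 1, size=(N, 1))            # centres [N x m]
r = np.full((N, 1), 0.5)                       # radii   [N x m]
w = np.zeros(N + 1)                            # weights [N+1], w[0] = bias

def forward():
    phi = np.exp(-np.sum(((X[:, None, :] - c) / r) ** 2, axis=2))
    return phi, w[0] + phi @ w[1:]             # eqs. 3.2 and 3.1

e0 = np.mean((forward()[1] - y) ** 2)          # eq. 3.4 before training
lr = 0.02
for _ in range(1000):
    phi, yhat = forward()
    err = yhat - y
    gw = np.concatenate([[2 * err.mean()],     # eq. 3.5 (phi_0 = 1)
                         (2 / D) * (err @ phi)])
    gc = (4 / D) * np.einsum('n,k,nk,nkj->kj', err, w[1:], phi,
                             (X[:, None, :] - c) / r ** 2)       # eq. 3.6
    gr = (4 / D) * np.einsum('n,k,nk,nkj->kj', err, w[1:], phi,
                             (X[:, None, :] - c) ** 2 / r ** 3)  # eq. 3.7
    w -= lr * gw
    c -= lr * gc
    r = np.maximum(r - lr * gr, 0.05)          # keep radii positive

e1 = np.mean((forward()[1] - y) ** 2)
print(e0, e1)   # the training error drops as the parameters descend
```

The positivity floor on the radii is a practical guard for this fixed-step variant; with a line search, as in the conjugate gradient approach, it is less of a concern.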

Regression Tree

Another learning technique used was the regression tree (RT), based on the work of Orr et al. [29]. Much like unsupervised learning techniques, RT takes the initial [d x p] data matrix to compute a regression tree, where d signifies the dimensionality of the data and p is the number of data patterns used in the training. The first node, also referred to as the root, is then initialized. The training algorithm orders the training data values along each dimension from least to greatest and considers breaking the data at each point from nmin to p-nmin, where nmin is the minimum number of data values that have to remain in each branch after a break. The boundary chosen is the one that creates the lowest error, where the error is determined using

E_L = \sum_{i \in S_L} (y_i - \bar{y}_L)^2,   (3.8)

E_R = \sum_{i \in S_R} (y_i - \bar{y}_R)^2,   (3.9)

E_{k,b} = E_L + E_R,   (3.10)

where k represents the dimension, b is the boundary choice, \bar{y}_L is the average of the output for the left branch, \bar{y}_R is the average of the output for the right branch, and S_L and S_R are the samples on the left and right branches.

After the greedy search through the input space for the branching corresponding to the lowest error, we can calculate the centre and radius parameters starting from the root, or parent, node. The calculation of the radii and centres is done by

r_k = \alpha\, \frac{u_k - l_k}{2},   (3.11)

c_k = \frac{u_k + l_k}{2},   (3.12)

where r_k and c_k stand for the radius and centre of the kth node in the regression tree, u_k and l_k are the upper and lower bounds of that node's region along each dimension, and \alpha is a scaling factor. The RT produces a large selection of radii and centres which are well suited to modeling the training data. From here we can create a fairly good model of the training data; however, the model would be very large, as it contains almost as many parameter choices as the number of data samples, and not all of the parameters contribute significantly to a reduction in error. By using a pruning method known as Forward Selection (FS) we can reduce the number of parameters and still maintain a high degree of model performance.

Forward Selection

Forward selection (FS) finds the subset of model parameters that creates the greatest reduction in output error. As opposed to backward selection, FS initializes the model with the single RBF, from the matrix representing all RBFs, that has the greatest influence on error reduction. It then builds on that
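The split-selection rule of equations 3.8-3.10 for a single node can be sketched as follows. The function name, variables and one-dimensional toy data are illustrative, not from the thesis.

```python
# Sketch of the split rule in equations 3.8-3.10 for one tree node
# (1-D toy data; the thesis applies this to [d x p] embedded EEG data).

def best_split(xs, ys, nmin=2):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = None
    for s in range(nmin, len(xs) - nmin + 1):   # candidate boundaries
        left, right = ys[:s], ys[s:]
        yl = sum(left) / len(left)               # mean of left branch
        yr = sum(right) / len(right)             # mean of right branch
        e = (sum((v - yl) ** 2 for v in left)    # eq. 3.8
             + sum((v - yr) ** 2 for v in right))  # eq. 3.9 / 3.10
        if best is None or e < best[0]:
            best = (e, 0.5 * (xs[s - 1] + xs[s]))  # midpoint boundary
    return best

# Step data: the cheapest split should land near the jump at x = 0.5.
xs = [i / 20 for i in range(21)]
ys = [0.0 if v < 0.5 else 1.0 for v in xs]
err, boundary = best_split(xs, ys)
print(err, boundary)
```

Equations 3.11-3.12 then place an RBF at the midpoint of each resulting node, with a radius proportional to the node's half-width.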

by recursively searching through the remaining RBFs for the one that creates the next greatest reduction in error. This continues until the generalized cross-validation (GCV) error stops decreasing, at which point the addition of further RBFs will not contribute to reducing the error.

FS can be significantly improved by combining it with orthogonal least squares [34]. This is a Gram-Schmidt orthogonalisation process which ensures that each new parameter added is orthogonal to all the previously added parameters [35]. It improves the calculation by making the sum-squared-error term easier to compute. Orthogonal least squares springs from the idea that any matrix can be factored into the product of an orthogonal matrix H and an upper triangular matrix U,

F = H U, (3.13)

H = [h_1  h_2  ...  h_M],   h_i^T h_j = 0 for i \neq j, (3.14)

where F is the design matrix, H is the orthogonal matrix and U is the upper triangular matrix. Using this idea of orthogonality, forward selection proceeds by computing the projection of each remaining column of the design matrix F acquired from the RT onto the space orthogonal to the columns already selected,

\tilde{h}_j = f_j - \sum_{i=1}^{m} \frac{h_i^T f_j}{h_i^T h_i} h_i, (3.15)

where \tilde{h}_j is the projection of the jth candidate column f_j and h_i is the ith column of the orthogonal design matrix H being constructed. The projection is computed for each of the remaining columns of F, and for each candidate the mean-squared-error is calculated,

MSE = \frac{1}{p} (y - \hat{y})^T (y - \hat{y}), (3.16)

where y are the output values and \hat{y} is the model prediction. The candidate that produces the lowest mean-squared-error is added to the design matrix H. The process is repeated until either all parameters have been exhausted or until the generalized cross-validation (GCV) error begins to increase, indicating that the addition of any further parameters to the design matrix will have no further benefit as it leads to over-fitting. The GCV is calculated by

GCV = \frac{p \, y^T P^2 y}{(\mathrm{trace} \, P)^2}, (3.17)

P = I_p - H (H^T H)^{-1} H^T, (3.18)

where P is the projection matrix onto the space orthogonal to the selected columns. Once the design matrix has been found, it is a straightforward process to calculate the optimal weights, as the RBF output is linearly dependent on the basis functions through the weight vector. The weights are obtained from equations (3.19)-(3.21),

g = (H^T H)^{-1} H^T y, (3.19)

\hat{y} = H g = H (H^T H)^{-1} H^T y, (3.20)

U w = g, (3.21)

where U is the upper triangular matrix introduced in the transformation to orthogonality, g is the weight vector in the orthogonalized basis, and w is the weight vector of the original RBF model, recovered by back-substitution.

3.3 Application to Henon Map

The Henon map is a 2-dimensional dynamical system that has been well studied due to its ability to exhibit chaotic behaviour for certain parameters. This makes it a good model with which to test the learning techniques introduced. The dynamics of the Henon map are assumed to be significantly less complex than those of the recorded time series, making the Henon map a good starting place to verify the RBF model's abilities. In what follows we applied the gradient descent training technique to the Henon map to see if the gradient descent RBF model can capture its chaotic behaviour.

3.3.1 Henon Map

The Henon map is defined by

x_{n+1} = 1 - a x_n^2 + y_n, (3.22)

y_{n+1} = b x_n, (3.23)

where a and b are two parameters that can be preset to make the map exhibit chaotic behaviour. Figure 3.2 below shows the difference between the Henon map running in non-chaotic mode, with parameters a = 1.25 and b = 0.3, and in chaotic mode, with a = 1.4 and b = 0.3. The non-chaotic mode shown in figure 3.2a is periodic and can easily be predicted well in advance. The chaotic mode shown in figure 3.2b follows a chaotic pattern which cannot be predicted in advance.

Figure 3.2 Comparison of non-chaotic and chaotic Henon map time series. a) Non-chaotic Henon map time series with a = 1.25 and b = 0.3. b) Chaotic Henon map time series with a = 1.4 and b = 0.3.
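Equations 3.22-3.23 can be iterated directly; a minimal sketch (in Python here, whereas the thesis work used Matlab), with the parameter values from figure 3.2:

```python
def henon(a, b, n, x0=0.0, y0=0.0):
    """Iterate the Henon map (eqs. 3.22-3.23):
       x[n+1] = 1 - a*x[n]^2 + y[n],  y[n+1] = b*x[n]."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        xs.append(1.0 - a * x * x + y)
        ys.append(b * x)
    return xs, ys

# chaotic regime used in figure 3.2b
xs, ys = henon(1.4, 0.3, 1000)
```

Starting from the origin, the chaotic orbit (a = 1.4, b = 0.3) remains bounded on the attractor while never repeating.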

3.3.2 Data Preprocessing

In order to model the Henon map it was necessary to arrange the time series into training data. Four thousand time series samples were generated in Matlab for training the RBF to model the Henon map. The data was then arranged into samples of varying time embedding, where the time embedding refers to the number of past time points used to make a prediction; it can also be thought of as the input length. Training samples were created with time embeddings of 1, 2, 5, 10 and 20.

3.3.3 RBF Training of Henon Map

The training of the Henon map model was done with the gradient descent method mentioned earlier. The initial centre parameters c were selected from the training data, the r, or variance, was set to 0.1, and the weights were randomized from a Gaussian distribution. The training variables used in the gradient descent method are shown in Table 3.1. The error calculation was done in two steps. First, during training, the error was calculated using the MSE in the regular non-recurrent mode. Afterwards, the model was verified using the MSE of the recurrent-mode generation compared with the actual Henon map time series,

MSE = \frac{1}{D} \sum_{i=1}^{D} (y_{m+i} - \hat{y}_{m+i})^2, (3.24)

where m is the embedding of the model used to make the prediction and D is the number of sample points used in the training. Since in recurrent mode the models diverge very rapidly (see figure 3.3a), it was decided that the MSE alone was not enough to validate the model, so complexity was further used as a way to confirm the model selection. To calculate the maximum Lyapunov exponent we used STLmax [22][23]. To calculate the correlation dimension we used a Matlab program based on Grassberger and Procaccia [25] written by Zalay, who is a member of our group. The complexity was compared with the complexity of the training data, calculated using 8000 samples, a time constant of 2 and an embedding dimension of 7. The result was 0.99 for the maximum Lyapunov exponent and 1.33 for the correlation dimension.

Table 3.1 Henon map gradient descent training parameters and results. Columns: Model, Embedding (m), RBFs (N), Training Epochs, MSE (non-recurrent), MSE (recurrent), Max Lyapunov Exponent, Correlation Dimension.

* NaN refers to Not A Number and commonly results when the correlation dimension is unable to be calculated; in this case it is because Model 1 and Model 2 produced a steady constant value in recurrent mode.
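The time-embedding arrangement described in the preprocessing above (m past samples as input, the next sample as target) can be sketched as:

```python
import numpy as np

def embed(series, m):
    """Build (input, target) training pairs with time embedding m:
    each input is m consecutive samples, the target is the next one."""
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    t = np.array(series[m:])
    return X, t
```

For example, `embed([1, 2, 3, 4, 5], 2)` yields inputs [[1, 2], [2, 3], [3, 4]] with targets [3, 4, 5].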

The trained RBFs were run in recurrent mode to produce time series of length 8000 as a means to compare the effectiveness of the training. The results are shown in Table 3.1. The lowest training MSE (non-recurrent) was 4.55e-7, for model 2 with an embedding of 2 and 10 RBFs. Model 2 also shared the lowest recurrent MSE of 1.12e-2 with models 3, 5, 7 and 8. Model 2 produced a maximum Lyapunov exponent of 0.87 and a correlation dimension of 1.27, which closely matched the values found on the original data, 0.99 and 1.33 respectively. Model 7, also with an embedding of 2 but with 20 RBFs, produced similar complexity to the training data, but it required more RBFs. Model 2 was therefore selected. Further, by simple observation of figure 3.3a it was verified that model 2 matches the characteristics of the Henon map. Figure 3.3b plots the training MSE over the first 200 epochs to show how the model slowly converged to the error of 4.55e-7. The RBF with the gradient descent learning method was sufficient for modeling the Henon map; it was not necessary to use any of the other training techniques.
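The two evaluation modes used throughout, non-recurrent one-step prediction during training and free-running recurrent generation afterwards, can be sketched as follows. Here `model` is any callable mapping the last m samples to a prediction; the helper names are illustrative:

```python
import numpy as np

def one_step_mse(model, series, m):
    """Non-recurrent mode: every prediction uses true past samples."""
    preds = [model(series[i - m:i]) for i in range(m, len(series))]
    err = np.array(series[m:]) - np.array(preds)
    return float(np.mean(err ** 2))

def generate_recurrent(model, seed, n):
    """Recurrent mode: predictions are fed back as inputs, so the
    model runs free after the initial seed of m samples."""
    out = list(seed)
    for _ in range(n):
        out.append(model(out[-len(seed):]))
    return out
```

A model can score a very low one-step MSE yet diverge or collapse to a constant in recurrent mode, which is why both tests (plus the complexity measures) are applied.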

Figure 3.3 Comparing RBF Henon map model to chaotic time series. a) Comparison of the chaotic Henon map time series (a = 1.4, b = 0.3) to RBF-generated data in recurrent mode, with RBF parameters m = 2 and N = 10. b) Mean squared error plot with respect to epoch of gradient descent training for the same model.

3.4 Application to Non-ictal Time Series Data

As mentioned earlier, the non-ictal data is highly complex and possibly chaotic (HPC). Moreover, it is non-stationary and embedded with noise. Modeling this complex time series is far more difficult than modeling the Henon map. In what is to follow we will describe the process of

modeling the non-ictal time series data, starting with the acquisition of the data, followed by its preprocessing, and concluding with the training results and verification of complexity.

3.4.1 Low Mg2+/High K+ Animal Data Acquisition

Training seizure data was collected independently by Eunji, a member of our group. Eight slices from the hippocampus of male Wistar rats aged days were obtained. The slices were bathed in a low Mg2+/high K+ solution and electrodes were placed in the CA1 region of the hippocampus. After roughly minutes the slices begin to exhibit spontaneous seizures due to the presence of the low Mg2+/high K+. The seizing activity was recorded by electrodes at a sampling frequency of 2 kHz. The whole process is described in further detail in the paper by Chiu et al., with the exception that we sampled at 2 kHz whereas Chiu et al. sampled at 10 kHz [3].

Once all the data had been collected it was only a matter of separating the ictal regions from the non-ictal regions. The separation of the ictal and interictal regions was the most difficult, as there is a steady increase in spiking as the seizure develops. To avoid overlapping the regions we selected interictal data as far away from the ictal region as possible. The postictal region was selected after the last ictal spike was observed (see figure 1.1).

3.4.2 Data Preprocessing

The non-ictal time series data is susceptible to many noise sources, ranging from the external environment, to electromyogram (EMG) interference from muscles, to simple artifacts in the

measuring instrumentation. Preprocessing therefore consisted of filtering out the noise, trimming outliers from the time series recording and scaling the signal to lie within the range -1 to 1. The signal was low-pass filtered at 50 Hz, followed by light high-pass filtering at 0.5 Hz to remove some low-frequency oscillations which interfere with the training of the RBF. After filtering, the training data was downsampled by a factor of 20. The trimming of the signal was done in such a way that only the occasional outliers were removed while the remainder of the signal fell just under the trimming threshold. Following the trimming, the data was scaled such that the maximum and minimum values lie within -1 to 1.

3.4.3 RBF Training of Non-Ictal Time Series

The above mentioned training methodologies were applied in three different sequences to train our model:

1. Gradient descent
2. Forward selection
3. Regression tree with forward selection
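The preprocessing pipeline above can be sketched in plain NumPy. This is only a stand-in: moving averages substitute for the actual 50 Hz low-pass and 0.5 Hz high-pass filters (whose types and orders are not specified above), the window lengths assume the 2 kHz sample rate, and the clip threshold is a free parameter:

```python
import numpy as np

def preprocess(x, down=20, clip=None):
    """Sketch of the pipeline: crude low-pass (short moving average
    standing in for the 50 Hz filter), crude high-pass (subtracting a
    long moving average, standing in for the 0.5 Hz filter),
    downsampling by 20, outlier trimming, and scaling into [-1, 1]."""
    def moving_avg(s, w):
        return np.convolve(s, np.ones(w) / w, mode="same")
    x = moving_avg(x, 20)            # low-pass stand-in (10 ms at 2 kHz)
    x = x - moving_avg(x, 2000)      # high-pass stand-in (removes drift)
    x = x[::down]                    # downsample by 20: 2 kHz -> 100 Hz
    if clip is not None:
        x = np.clip(x, -clip, clip)  # trim the occasional outliers
    return x / np.max(np.abs(x))     # scale into [-1, 1]
```

In practice proper IIR or FIR filters (e.g. zero-phase Butterworth filtering) would replace the moving averages; the structure of the pipeline is the point here.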

Similarly to the Henon map, the training of the RBF models was done using the MSE defined by equation 3.24. To verify that the models sufficiently resemble the properties of the non-ictal extracellular time series, we tested each model by running it in recurrent mode initialized with ictal data. The final test of the models was to verify their complexity and how well it resembled that of the actual non-ictal time series data used in training. The complexity was calculated by finding the maximum Lyapunov exponent and the correlation dimension, which were introduced earlier. Table 3.2 below shows the complexities of the interictal and postictal training data after downsampling to a 100 Hz sample rate. To calculate the maximum Lyapunov exponent we used STLmax [22][23]. To calculate the correlation dimension we used a program based on Grassberger and Procaccia [16] written by Zalay, a member of our group. The results were calculated using 6 different samples of length 8000, a time constant of 2 and an embedding dimension of 7. The calculation yielded a maximum Lyapunov exponent and correlation dimension of 1.67 and 5.66 respectively for the interictal time series data. The postictal time series data yielded a maximum Lyapunov exponent and correlation dimension of 1.64 and 6.33 respectively. These complexity results were later compared with the complexity found from the RBF models generating in recurrent mode.

Table 3.2 Complexity of interictal and postictal training time series. Columns: Model, Maximum Lyapunov Exponent, Standard Error, Correlation Dimension, Standard Error; rows: Interictal Time Series, Postictal Time Series.
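The STLmax and correlation-dimension programs used above are not reproduced here, but the quantity underlying the correlation dimension can be illustrated with a generic Grassberger-Procaccia correlation sum; the dimension estimate is the slope of log C(r) versus log r over small r (the distance metric and the lack of a Theiler window are simplifying assumptions):

```python
import numpy as np

def correlation_sum(series, m, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    pairs of m-dimensional delay vectors closer than r."""
    V = np.array([series[i:i + m] for i in range(len(series) - m + 1)])
    n = len(V)
    count = 0
    for i in range(n - 1):
        # Chebyshev (max-coordinate) distance to all later vectors
        d = np.max(np.abs(V[i + 1:] - V[i]), axis=1)
        count += int(np.sum(d < r))
    return 2.0 * count / (n * (n - 1))
```

For a signal confined to a line, C(r) grows linearly with r, giving a correlation dimension near 1; for the interictal data above the estimated dimension was much higher (5.66), reflecting its high complexity.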

Gradient Descent

Initially the training of the non-ictal data was done with the gradient descent method, starting with the interictal model. The centres c were selected initially from the training data, the variance r was set to 0.1 and the weights w were selected from a normalized Gaussian distribution. The embedding and number of RBFs were swept through a variety of choices, from fairly simple to very complex (see Table 3.3).

The gradient descent training worked in reducing the error on predictions of the training data, although it failed to produce anything resembling the interictal extracellular time series when operated in recurrent mode. The MSE results from both training (non-recurrent) and recurrent modes are shown in Table 3.3, along with the complexity calculations. None of the models succeeded in capturing the characteristics of the interictal time series. The best result was achieved with model 2, which had an embedding of 10 and 20 RBFs; its training and recurrent-mode MSE values are given in Table 3.3. Its maximum Lyapunov exponent was close to 0 and its correlation dimension was 0, thus lacking any sort of complexity. Figure 3.4 shows the results of model 2. From figure 3.4c it can be seen that in recurrent mode the model produced oscillations until it converged to a constant close to 0. From the above results it was determined that gradient descent based methods were not going to succeed in modeling the chaotic interictal activity, so we proceeded to try out the other learning methods.
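The gradient descent setup described above (centres drawn from the training data, fixed radius, Gaussian initial weights) can be sketched as follows. This is a simplification that adapts only the output weights by batch gradient descent; the full method may also adapt centres and radii, and the learning rate and epoch count are assumptions:

```python
import numpy as np

def rbf_predict(X, centres, r, w):
    """Gaussian RBF network output for inputs X (one row per sample)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / r ** 2) @ w

def train_gradient_descent(X, y, centres, r=0.1, lr=0.05, epochs=500, seed=0):
    """Batch gradient descent on the output weights only.
    Weights start from a Gaussian draw, as in the setup above."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=len(centres))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / r ** 2)           # fixed design matrix
    for _ in range(epochs):
        err = Phi @ w - y
        w -= lr * Phi.T @ err / len(y)   # gradient of the MSE w.r.t. w
    return w
```

When the target is itself generated by an RBF network with the same centres, the trained weights reproduce it closely, confirming that the descent itself works; the failure reported above lies in recurrent-mode generation, not in one-step fitting.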

Table 3.3 Interictal gradient descent training parameters and results. Columns: Model, Embedding (m), RBFs (N), Training Epochs, MSE (non-recurrent), MSE (recurrent), Maximum Lyapunov Exponent, Correlation Dimension.

* NaN refers to Not A Number and commonly results when the correlation dimension is unable to be calculated; in this case it is because Models 3-10 produced a steady constant value in recurrent mode.

Figure 3.4 RBF interictal model after gradient descent training. a) Interictal training data. b) Prediction of the RBF after gradient descent on the training data; the embedding of the model is equal to 10 and the number of RBFs used is 20. c) Result of RBF prediction in recurrent mode. d) MSE error curve with respect to number of training epochs.
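The forward selection procedure described earlier, greedy addition of the RBF giving the greatest error reduction, Gram-Schmidt orthogonalization against already-selected columns (equation 3.15), and GCV-based stopping, can be sketched as follows. The GCV estimate used here is a simplified form (trace P taken as p - m), and the tolerance values are assumptions:

```python
import numpy as np

def forward_select(F, y, max_terms=None):
    """Greedy forward selection over candidate design matrix F.
    Each round the candidate column (orthogonalized against the
    selected columns) that most reduces the sum-squared error is
    added; selection stops when the GCV error starts to rise."""
    p, M = F.shape
    if max_terms is None:
        max_terms = M
    selected, H = [], []              # chosen indices, orthogonal columns
    residual = y.astype(float).copy()
    best_gcv = np.inf
    for _ in range(max_terms):
        best = (None, None, -np.inf)
        for j in range(M):
            if j in selected:
                continue
            h = F[:, j].astype(float).copy()
            for hq in H:              # Gram-Schmidt vs. selected columns
                h -= (hq @ h) / (hq @ hq) * hq
            hh = h @ h
            if hh < 1e-12:            # candidate already in the span
                continue
            gain = (h @ residual) ** 2 / hh   # SSE reduction if added
            if gain > best[2]:
                best = (j, h, gain)
        if best[0] is None:
            break
        j, h, _ = best
        new_resid = residual - (h @ residual) / (h @ h) * h
        m = len(selected) + 1
        gcv = p * (new_resid @ new_resid) / (p - m) ** 2  # simplified GCV
        if gcv > best_gcv:
            break                     # further terms begin to over-fit
        best_gcv, residual = gcv, new_resid
        selected.append(j)
        H.append(h)
    return selected, residual
```

On a target that is an exact combination of two candidate columns plus an irrelevant noise column, the procedure picks out the two informative columns and drives the residual to numerical zero.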

Forward Selection

The forward selection (FS) learning technique uses a non-gradient-based learning method which may avoid getting trapped in local minima. An advantage of training with FS is that it does not require much parameter selection prior to training. The main parameter that was controlled was the embedding of the time series; the embeddings used for training were 5, 10, 20, 30, 40, 50, 60, 80, 100, 120 and 140. As before, we trained on the interictal training data to see if forward selection learning could capture the features of the interictal region. After training, the models were tested in recurrent mode.

Figure 3.5 shows the MSE and complexity results for the different RBF models. The lowest error achieved was 0.19, for embedding 5, although that model failed to produce any complexity. Only the models with embeddings 40 and 50 produced complexity in both the Lyapunov exponent and the correlation dimension, and even so the Lyapunov complexity fell far short of the 1.67 goal for the interictal time series. The model with embedding 50 seemed slightly superior to the other models and its results are further decomposed in figure 3.6. Under an embedding of 50, the model produced 3591 RBFs. The training is shown in figure 3.6b, where the GCV error decreased as RBFs were added. The addition of further RBFs stops once the GCV error has not changed significantly over the past 5 RBF additions, at which point the selection process backtracks to the point 5 RBFs before and takes that as the model. This occurred after 3591 RBFs were included. With such a large number of RBFs it is likely the training attempted to select one RBF for each of the training time series points, which negates any real learning. In figure 3.6a we compare the recurrent RBF model time series generation to that of the interictal training data. The result is significantly better than that of the gradient descent training. Simple visual observation shows the two


IEEE SIGNAL PROCESSING LETTERS, VOL. 13, NO. 3, MARCH A Self-Structured Adaptive Decision Feedback Equalizer

IEEE SIGNAL PROCESSING LETTERS, VOL. 13, NO. 3, MARCH A Self-Structured Adaptive Decision Feedback Equalizer SIGNAL PROCESSING LETTERS, VOL 13, NO 3, MARCH 2006 1 A Self-Structured Adaptive Decision Feedback Equalizer Yu Gong and Colin F N Cowan, Senior Member, Abstract In a decision feedback equalizer (DFE),

More information

Seizure onset can be difficult to asses in scalp EEG. However, some tools can be used to increase the seizure onset activity over the EEG background:

Seizure onset can be difficult to asses in scalp EEG. However, some tools can be used to increase the seizure onset activity over the EEG background: This presentation was given during the Dianalund Summer School on EEG and Epilepsy, July 24, 2012. The main purpose of this introductory talk is to show the possibilities of improved seizure onset analysis

More information

Intracranial Studies Of Human Epilepsy In A Surgical Setting

Intracranial Studies Of Human Epilepsy In A Surgical Setting Intracranial Studies Of Human Epilepsy In A Surgical Setting Department of Neurology David Geffen School of Medicine at UCLA Presentation Goals Epilepsy and seizures Basics of the electroencephalogram

More information

^Department of Clinical Neurophysiology, Charing Cross Hospital, Charing Cross & Westminster Medical School, University of

^Department of Clinical Neurophysiology, Charing Cross Hospital, Charing Cross & Westminster Medical School, University of Relation between singular values and graph dimensions of deterministic epileptiform EEG signals V. Cabukovski,* N. de M. Rudolph,* N. Mahmood* ^Institute of Informatics, Faculty of Sciences, University

More information

arxiv: v1 [cs.lg] 4 Feb 2019

arxiv: v1 [cs.lg] 4 Feb 2019 Machine Learning for Seizure Type Classification: Setting the benchmark Subhrajit Roy [000 0002 6072 5500], Umar Asif [0000 0001 5209 7084], Jianbin Tang [0000 0001 5440 0796], and Stefan Harrer [0000

More information

Brain Rhythms and Mathematics

Brain Rhythms and Mathematics Brain Rhythms and Mathematics Christoph Börgers Mathematics Department Tufts University April 21, 2010 Oscillations in the human brain In an EEG, voltages are recorded on a person s scalp. One gets traces

More information

Parametric Optimization and Analysis of Adaptive Equalization Algorithms for Noisy Speech Signals

Parametric Optimization and Analysis of Adaptive Equalization Algorithms for Noisy Speech Signals IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE) e-issn: 2278-1676, p-issn: 2320 3331, Volume 4, Issue 6 (Mar. -Apr. 2013), PP 69-74 Parametric Optimization and Analysis of Adaptive Equalization

More information

Artificial Neural Networks to Determine Source of Acoustic Emission and Damage Detection

Artificial Neural Networks to Determine Source of Acoustic Emission and Damage Detection Artificial Neural Networks to Determine Source of Acoustic Emission and Damage Detection Mehrdad Shafiei Dizaji 1, Farzad Shafiei Dizaji 2 1 Former graduate, Department of Civil Engineering, Sharif University

More information

EEG ANALYSIS: ANN APPROACH

EEG ANALYSIS: ANN APPROACH EEG ANALYSIS: ANN APPROACH CHAPTER 5 EEG ANALYSIS: ANN APPROACH 5.1 INTRODUCTION The analysis of EEG signals using ANN deals with developing a network in order to establish a relation between input and

More information

Saichiu Nelson Tong ALL RIGHTS RESERVED

Saichiu Nelson Tong ALL RIGHTS RESERVED 2017 Saichiu Nelson Tong ALL RIGHTS RESERVED MODELING CLASSIFICATION NETWORK OF ELECTROENCEPHALOGRAPHIC ARTIFACTS AND SIGNALS ASSOCIATED WITH DEEP BRAIN STIMULATION By SAICHIU NELSON TONG A thesis submitted

More information

CHAPTER 5 WAVELET BASED DETECTION OF VENTRICULAR ARRHYTHMIAS WITH NEURAL NETWORK CLASSIFIER

CHAPTER 5 WAVELET BASED DETECTION OF VENTRICULAR ARRHYTHMIAS WITH NEURAL NETWORK CLASSIFIER 57 CHAPTER 5 WAVELET BASED DETECTION OF VENTRICULAR ARRHYTHMIAS WITH NEURAL NETWORK CLASSIFIER 5.1 INTRODUCTION The cardiac disorders which are life threatening are the ventricular arrhythmias such as

More information

Effects of Flashing Lights and Beeping Tones on Subjective Time Estimation

Effects of Flashing Lights and Beeping Tones on Subjective Time Estimation ME 224 EXPERIMENTAL ENGINEERING Effects of Flashing Lights and Beeping Tones on Subjective Time Estimation ME 224 Final Project Report Aamir Habib, Yoke Peng Leong, Yuchen Yang 12/3/2011 Contents Abstract...

More information

Database of paroxysmal iceeg signals

Database of paroxysmal iceeg signals POSTER 2017, PRAGUE MAY 23 1 Database of paroxysmal iceeg signals Ing. Nikol Kopecká 1 1 Dept. of Circuit Theory, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic kopecnik@fel.cvut.cz

More information

A model of the interaction between mood and memory

A model of the interaction between mood and memory INSTITUTE OF PHYSICS PUBLISHING NETWORK: COMPUTATION IN NEURAL SYSTEMS Network: Comput. Neural Syst. 12 (2001) 89 109 www.iop.org/journals/ne PII: S0954-898X(01)22487-7 A model of the interaction between

More information

Examination of Multiple Spectral Exponents of Epileptic ECoG Signal

Examination of Multiple Spectral Exponents of Epileptic ECoG Signal Examination of Multiple Spectral Exponents of Epileptic ECoG Signal Suparerk Janjarasjitt Member, IAENG, and Kenneth A. Loparo Abstract In this paper, the wavelet-based fractal analysis is applied to analyze

More information

TIME SERIES MODELING USING ARTIFICIAL NEURAL NETWORKS 1 P.Ram Kumar, 2 M.V.Ramana Murthy, 3 D.Eashwar, 4 M.Venkatdas

TIME SERIES MODELING USING ARTIFICIAL NEURAL NETWORKS 1 P.Ram Kumar, 2 M.V.Ramana Murthy, 3 D.Eashwar, 4 M.Venkatdas TIME SERIES MODELING USING ARTIFICIAL NEURAL NETWORKS 1 P.Ram Kumar, 2 M.V.Ramana Murthy, 3 D.Eashwar, 4 M.Venkatdas 1 Department of Computer Science & Engineering,UCE,OU,Hyderabad 2 Department of Mathematics,UCS,OU,Hyderabad

More information

A Learning Method of Directly Optimizing Classifier Performance at Local Operating Range

A Learning Method of Directly Optimizing Classifier Performance at Local Operating Range A Learning Method of Directly Optimizing Classifier Performance at Local Operating Range Lae-Jeong Park and Jung-Ho Moon Department of Electrical Engineering, Kangnung National University Kangnung, Gangwon-Do,

More information

Biceps Activity EMG Pattern Recognition Using Neural Networks

Biceps Activity EMG Pattern Recognition Using Neural Networks Biceps Activity EMG Pattern Recognition Using eural etworks K. Sundaraj University Malaysia Perlis (UniMAP) School of Mechatronic Engineering 0600 Jejawi - Perlis MALAYSIA kenneth@unimap.edu.my Abstract:

More information

Basics of Computational Neuroscience: Neurons and Synapses to Networks

Basics of Computational Neuroscience: Neurons and Synapses to Networks Basics of Computational Neuroscience: Neurons and Synapses to Networks Bruce Graham Mathematics School of Natural Sciences University of Stirling Scotland, U.K. Useful Book Authors: David Sterratt, Bruce

More information

Brain and Cognitive Sciences 9.96 Experimental Methods of Tetrode Array Neurophysiology IAP 2001

Brain and Cognitive Sciences 9.96 Experimental Methods of Tetrode Array Neurophysiology IAP 2001 Brain and Cognitive Sciences 9.96 Experimental Methods of Tetrode Array Neurophysiology IAP 2001 An Investigation into the Mechanisms of Memory through Hippocampal Microstimulation In rodents, the hippocampus

More information

SUPPLEMENTARY INFORMATION. Supplementary Figure 1

SUPPLEMENTARY INFORMATION. Supplementary Figure 1 SUPPLEMENTARY INFORMATION Supplementary Figure 1 The supralinear events evoked in CA3 pyramidal cells fulfill the criteria for NMDA spikes, exhibiting a threshold, sensitivity to NMDAR blockade, and all-or-none

More information

Theme 2: Cellular mechanisms in the Cochlear Nucleus

Theme 2: Cellular mechanisms in the Cochlear Nucleus Theme 2: Cellular mechanisms in the Cochlear Nucleus The Cochlear Nucleus (CN) presents a unique opportunity for quantitatively studying input-output transformations by neurons because it gives rise to

More information

International Journal of Research in Science and Technology. (IJRST) 2018, Vol. No. 8, Issue No. IV, Oct-Dec e-issn: , p-issn: X

International Journal of Research in Science and Technology. (IJRST) 2018, Vol. No. 8, Issue No. IV, Oct-Dec e-issn: , p-issn: X CLOUD FILE SHARING AND DATA SECURITY THREATS EXPLORING THE EMPLOYABILITY OF GRAPH-BASED UNSUPERVISED LEARNING IN DETECTING AND SAFEGUARDING CLOUD FILES Harshit Yadav Student, Bal Bharati Public School,

More information

Neural Network based Heart Arrhythmia Detection and Classification from ECG Signal

Neural Network based Heart Arrhythmia Detection and Classification from ECG Signal Neural Network based Heart Arrhythmia Detection and Classification from ECG Signal 1 M. S. Aware, 2 V. V. Shete *Dept. of Electronics and Telecommunication, *MIT College Of Engineering, Pune Email: 1 mrunal_swapnil@yahoo.com,

More information

Automatic Definition of Planning Target Volume in Computer-Assisted Radiotherapy

Automatic Definition of Planning Target Volume in Computer-Assisted Radiotherapy Automatic Definition of Planning Target Volume in Computer-Assisted Radiotherapy Angelo Zizzari Department of Cybernetics, School of Systems Engineering The University of Reading, Whiteknights, PO Box

More information

Classıfıcatıon of Dıabetes Dısease Usıng Backpropagatıon and Radıal Basıs Functıon Network

Classıfıcatıon of Dıabetes Dısease Usıng Backpropagatıon and Radıal Basıs Functıon Network UTM Computing Proceedings Innovations in Computing Technology and Applications Volume 2 Year: 2017 ISBN: 978-967-0194-95-0 1 Classıfıcatıon of Dıabetes Dısease Usıng Backpropagatıon and Radıal Basıs Functıon

More information

EPILEPTIC SEIZURE PREDICTION

EPILEPTIC SEIZURE PREDICTION EPILEPTIC SEIZURE PREDICTION Submitted by Nguyen Van Vinh Department of Electrical & Computer Engineering In partial fulfillment of the requirements for the Degree of Bachelor of Engineering National University

More information

Functional MRI of the dynamic brain: quasiperiodic patterns, brain states, and trajectories. Shella Keilholz BME, Emory/Georgia Tech 20 March 2018

Functional MRI of the dynamic brain: quasiperiodic patterns, brain states, and trajectories. Shella Keilholz BME, Emory/Georgia Tech 20 March 2018 Functional MRI of the dynamic brain: quasiperiodic patterns, brain states, and trajectories Shella Keilholz BME, Emory/Georgia Tech 20 March 2018 Resting State fmri No stimulus Looks at spontaneous BOLD

More information

Unit 1 Exploring and Understanding Data

Unit 1 Exploring and Understanding Data Unit 1 Exploring and Understanding Data Area Principle Bar Chart Boxplot Conditional Distribution Dotplot Empirical Rule Five Number Summary Frequency Distribution Frequency Polygon Histogram Interquartile

More information

Spectrograms (revisited)

Spectrograms (revisited) Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

More information

Shock-induced termination of cardiac arrhythmias

Shock-induced termination of cardiac arrhythmias Shock-induced termination of cardiac arrhythmias Group members: Baltazar Chavez-Diaz, Chen Jiang, Sarah Schwenck, Weide Wang, and Jinglei Zhang Cardiac arrhythmias, also known as irregular heartbeat, occur

More information

Sample Lab Report 1 from 1. Measuring and Manipulating Passive Membrane Properties

Sample Lab Report 1 from  1. Measuring and Manipulating Passive Membrane Properties Sample Lab Report 1 from http://www.bio365l.net 1 Abstract Measuring and Manipulating Passive Membrane Properties Biological membranes exhibit the properties of capacitance and resistance, which allow

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

1. Introduction

1. Introduction 965. Automatic artifacts removal from epileptic EEG using a hybrid algorithm Jing Wang, Qing Zhang, Yizhuo Zhang, Guanghua Xu 965. AUTOMATIC ARTIFACTS REMOVAL FROM EPILEPTIC EEG USING A HYBRID ALGORITHM.

More information

Learning Classifier Systems (LCS/XCSF)

Learning Classifier Systems (LCS/XCSF) Context-Dependent Predictions and Cognitive Arm Control with XCSF Learning Classifier Systems (LCS/XCSF) Laurentius Florentin Gruber Seminar aus Künstlicher Intelligenz WS 2015/16 Professor Johannes Fürnkranz

More information

An Automated Method for Neuronal Spike Source Identification

An Automated Method for Neuronal Spike Source Identification An Automated Method for Neuronal Spike Source Identification Roberto A. Santiago 1, James McNames 2, Kim Burchiel 3, George G. Lendaris 1 1 NW Computational Intelligence Laboratory, System Science, Portland

More information

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns 1. Introduction Vasily Morzhakov, Alexey Redozubov morzhakovva@gmail.com, galdrd@gmail.com Abstract Cortical

More information

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 2, Issue 10, April 2013

ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 2, Issue 10, April 2013 ECG Processing &Arrhythmia Detection: An Attempt M.R. Mhetre 1, Advait Vaishampayan 2, Madhav Raskar 3 Instrumentation Engineering Department 1, 2, 3, Vishwakarma Institute of Technology, Pune, India Abstract

More information

Modeling of Hippocampal Behavior

Modeling of Hippocampal Behavior Modeling of Hippocampal Behavior Diana Ponce-Morado, Venmathi Gunasekaran and Varsha Vijayan Abstract The hippocampus is identified as an important structure in the cerebral cortex of mammals for forming

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

Reactive agents and perceptual ambiguity

Reactive agents and perceptual ambiguity Major theme: Robotic and computational models of interaction and cognition Reactive agents and perceptual ambiguity Michel van Dartel and Eric Postma IKAT, Universiteit Maastricht Abstract Situated and

More information

The connection between sleep spindles and epilepsy in a spatially extended neural field model

The connection between sleep spindles and epilepsy in a spatially extended neural field model The connection between sleep spindles and epilepsy in a spatially extended neural field model 1 2 3 Carolina M. S. Lidstrom Undergraduate in Bioengineering UCSD clidstro@ucsd.edu 4 5 6 7 8 9 10 11 12 13

More information

Data mining for Obstructive Sleep Apnea Detection. 18 October 2017 Konstantinos Nikolaidis

Data mining for Obstructive Sleep Apnea Detection. 18 October 2017 Konstantinos Nikolaidis Data mining for Obstructive Sleep Apnea Detection 18 October 2017 Konstantinos Nikolaidis Introduction: What is Obstructive Sleep Apnea? Obstructive Sleep Apnea (OSA) is a relatively common sleep disorder

More information

Epileptic Seizure Classification using Statistical Features of EEG Signal

Epileptic Seizure Classification using Statistical Features of EEG Signal International Conference on Electrical, Computer and Communication Engineering (ECCE), February 6-8,, Cox s Bazar, Bangladesh Epileptic Seizure Classification using Statistical Features of EEG Signal Md.

More information

Seizure Prediction Through Clustering and Temporal Analysis of Micro Electrocorticographic Data

Seizure Prediction Through Clustering and Temporal Analysis of Micro Electrocorticographic Data Seizure Prediction Through Clustering and Temporal Analysis of Micro Electrocorticographic Data Yilin Song 1, Jonathan Viventi 2, and Yao Wang 1 1 Department of Electrical and Computer Engineering, New

More information

Principals of Object Perception

Principals of Object Perception Principals of Object Perception Elizabeth S. Spelke COGNITIVE SCIENCE 14, 29-56 (1990) Cornell University Summary Infants perceive object by analyzing tree-dimensional surface arrangements and motions.

More information

Predicting Breast Cancer Survival Using Treatment and Patient Factors

Predicting Breast Cancer Survival Using Treatment and Patient Factors Predicting Breast Cancer Survival Using Treatment and Patient Factors William Chen wchen808@stanford.edu Henry Wang hwang9@stanford.edu 1. Introduction Breast cancer is the leading type of cancer in women

More information

Spatiotemporal clustering of synchronized bursting events in neuronal networks

Spatiotemporal clustering of synchronized bursting events in neuronal networks Spatiotemporal clustering of synchronized bursting events in neuronal networks Uri Barkan a David Horn a,1 a School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel Abstract in vitro

More information

The current issue and full text archive of this journal is available at

The current issue and full text archive of this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm COMPEL 26,5 1276 Received October 2005 Revised November 2006 Accepted November 2006 Epileptic

More information

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network International Journal of Electronics Engineering, 3 (1), 2011, pp. 55 58 ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network Amitabh Sharma 1, and Tanushree Sharma 2

More information

Removal of Baseline wander and detection of QRS complex using wavelets

Removal of Baseline wander and detection of QRS complex using wavelets International Journal of Scientific & Engineering Research Volume 3, Issue 4, April-212 1 Removal of Baseline wander and detection of QRS complex using wavelets Nilesh Parihar, Dr. V. S. Chouhan Abstract

More information

Evolutionary Programming

Evolutionary Programming Evolutionary Programming Searching Problem Spaces William Power April 24, 2016 1 Evolutionary Programming Can we solve problems by mi:micing the evolutionary process? Evolutionary programming is a methodology

More information

Business Statistics Probability

Business Statistics Probability Business Statistics The following was provided by Dr. Suzanne Delaney, and is a comprehensive review of Business Statistics. The workshop instructor will provide relevant examples during the Skills Assessment

More information