CHAPTER 2

LITERATURE SURVEY

Research on hand gestures is classified into three categories. The first category is glove-based analysis, which employs sensors, either mechanical or optical, attached to a glove that converts finger movements into electrical signals for determining the hand posture. The second category is vision-based analysis. It is based on the way human beings perceive information about their surroundings and is the most difficult to implement in an acceptable way. The third category is the analysis of drawing gestures, which usually involves the use of a stylus as an input device. The details of the various methodologies, algorithms, etc. used for Sign Language Recognition systems are discussed further below.

2.1 LITERATURE SURVEY

Gesture recognition was first proposed by Myron W. Krueger as a new form of interaction between human and computer in the mid-seventies [12]. It has become a very important research area with the rapid development of computer hardware and vision systems in recent years. Hand gesture recognition is a user-friendly and intuitive means for humans to interact with computers or intelligent machines (e.g. robots, vehicles, etc.), and the research has gained more and more attention in the field of human-machine interaction. Currently, several techniques are available for hand gesture recognition, based either on sensing devices or on computer vision. A typical widespread device-based example is the data glove, developed by Zimmerman in 1987 [13]. In this system, the user wears a data glove that is linked to the computer. The glove can measure the bending of the fingers and the position and orientation of the hand in 3-D space, and is thus able to capture the richness of a hand's gesture. A successful example is real-time American Sign Language recognition [14]. However, this approach does not meet the actual requirements of human-vehicle interaction in an outdoor environment, and it is cumbersome to apply to vehicles in a public transportation context. Vision-based gesture recognition has the advantages of being spontaneous, requiring no specialized hardware, and being independent of hand size. In vision-based gesture recognition, hand shape segmentation is one of the toughest problems in a dynamic environment.

It can be simplified by using visual markings on the hands, and some researchers have implemented sign language and pointing gesture recognition based on different marking modes [15]. For fingerspelling recognition, most proposed methods rely on instrumented gloves, owing to the hard problem of discriminating complex hand configurations with vision-based methods. Lamar and Bhuiyant [16] achieved letter recognition rates ranging from 70% to 93% using coloured gloves and neural networks. More recently, Rebollar et al. [17] used a more sophisticated glove to classify 21 out of 26 letters with 100% accuracy; the worst case, the letter U, achieved 78% accuracy.

Shadows, the main cue used in our work, have already been exploited for gesture recognition and interactive applications. Segen and Kumar [18] describe a system which uses shadow information to track the user's hand in 3D; they demonstrated applications in object manipulation and computer games. Leibe et al. [19] presented the concept of a perceptive workbench, where shadows are exploited to estimate 3D hand position and pointing direction. Their method used infrared lighting and was demonstrated in augmented reality gaming and terrain navigation applications. In contrast, Rogerio Feris et al. [20] considered light sources with a small baseline distance from the camera, allowing them to be built into a self-contained device no larger than existing digital cameras. They proposed a novel method for recognition of isolated fingerspelling gestures based on depth edge features. This method relies on a simple and inexpensive modification of the capture setup: a multi-flash camera is used, with flashes strategically positioned to cast shadows along depth discontinuities in the scene, allowing efficient and accurate extraction of depth edges. The use of a shift- and scale-invariant shape descriptor for fingerspelling recognition demonstrated a great improvement over methods that rely on features acquired by traditional edge detection and segmentation algorithms.

Iwan Njoto Sandjaja et al. [21] worked on a sign language number system for recognizing Filipino Sign Language numbers, using a colour-glove-based recognition system. The system extracts important features from the video using a multicolour tracking algorithm and uses a Hidden Markov Model (HMM) for the training and testing phases. The feature extraction could track 92.3% of all objects, and the recognizer could recognize Filipino Sign Language numbers with 85.52% average accuracy.
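
As an illustration of the colour-marking idea behind [16], [21] and [22], the sketch below segments coloured markers in HSV space and returns their centroids, which a tracker such as the multicolour tracking algorithm of [21] could then follow across frames. The colour ranges are placeholders, not values from any of the cited systems; a real setup would calibrate them per glove and per lighting condition.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for two markers; these are assumptions and
# would need per-glove, per-lighting calibration in a real system.
MARKER_RANGES = {
    "red":  ((0, 120, 80), (10, 255, 255)),
    "blue": ((100, 120, 80), (130, 255, 255)),
}

def locate_markers(frame_bgr):
    """Return the centroid of each coloured marker found in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centroids = {}
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        # Remove speckle noise before looking for the marker blob.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:
            centroids[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids
```

Tracking the sequence of these centroids over time would produce the per-marker trajectories on which the HMM of [21] operates.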

Sawant Pramada et al. [22] proposed an intelligent sign language recognition system using image processing. In their project they introduced an efficient and fast algorithm for identifying the number of fingers opened in a gesture representing an alphabet of the Binary Sign Language. The system did not require the hand to be perfectly aligned to the camera. The project used an image processing system to identify, in particular, the English alphabetic sign language used by deaf people to communicate, and the method was applied to signs indicated by open fingers. They used a colour coding technique, applying a different colour below the tip of each finger; during pre-processing this coloured part was retained while all other parts of the hand were removed. By checking the sequence of these colours, known as coordinate mapping, and their positions in 2D space, the gesture was recognized. The system was implemented only for the Binary Sign Language.

The system developed by R. E. Kahn [23] applies a variety of techniques (e.g. motion, colour, edge detection) to segment a person's hand and can be used to recognize the pointing gesture. This system requires a static background and relies on off-board computation, which causes delays in gesture recognition.

In their paper, Qi Wang et al. [90] propose a novel viewpoint-invariant method for sign language recognition, in which the recognition task is converted to a verification task. The Dempster-Shafer theory was applied to improve the robustness of the geometry model.

Shikha Singhal et al. [92] extracted depth data for five different gestures, corresponding to the alphabets Y, V, L, S and I, obtained from an online database. Each segmented gesture was represented by its time-series curve, and a feature vector was extracted from it. To recognize the class of a noisy input hand shape, a distance metric for hand dissimilarity called the Finger-Earth Mover's Distance (FEMD) was used. As it matches only the fingers and not the complete hand shape, it distinguishes hand gestures with slight differences better.

Chang-Yi Kao et al. [24] developed an efficient mechanism for real-time hand gesture recognition based on the trajectory of hand motion and a hidden Markov model classifier. In this system, gestures were divided into single- and both-hand gestures; one hand was defined with four basic types of directive gesture (moving upward, downward, leftward and rightward), while two hands gave twenty-four kinds of combination gesture. However, they applied the most natural and simple way to define eight kinds of gestures in their human-machine interaction control system, so that users can easily operate the robot. Experimental results revealed that the face tracking rate was more than 97% in general situations and over 94% when the face suffered from temporal occlusion.
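
The time-series curve used by the FEMD approach of [92] is essentially a radial-distance signature of the hand contour. The following is a minimal sketch of that representation, assuming a binary hand mask as input; the sampling resolution and normalization are illustrative choices, not the exact procedure of [92].

```python
import cv2
import numpy as np

def time_series_curve(mask, n_samples=360):
    """Radial-distance signature of the largest contour in a binary mask:
    distance from the hand centre to the contour as a function of angle."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = contour.reshape(-1, 2).astype(float)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    angles = np.arctan2(dy, dx)
    radii = np.hypot(dx, dy)
    order = np.argsort(angles)
    # Resample onto a fixed angular grid and normalize by the maximum
    # radius so the curve is scale invariant.
    grid = np.linspace(-np.pi, np.pi, n_samples)
    curve = np.interp(grid, angles[order], radii[order], period=2 * np.pi)
    return curve / curve.max()
```

Peaks of this curve correspond to extended fingers, which is why a finger-only matcher such as FEMD can separate hand shapes that differ only slightly.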

David Mace et al. [25] came up with accelerometer-based hand gesture recognition using weighted naive Bayesian classifiers and dynamic time warping. They compared two approaches, naive Bayesian classification with feature separability weighting and dynamic time warping; algorithms based on these two approaches were introduced and their results compared. They evaluated both algorithms with four gesture types and five samples from each of five different people. The gesture identification accuracies for Bayesian classification and dynamic time warping were 97% and 95%, respectively. For this implementation they used a TI eZ430-Chronos watch as the accelerometer data provider; the watch contains a VTI CMA accelerometer with a measurement range of 2g, 8-bit resolution and a 100 Hz sampling rate. They used an ASUS TF300T Android tablet to run the algorithms; the tablet received the accelerometer data from the watch through an RF receiver with a USB interface, recognized as a serial port inside Android.

Much work has also been done on Arabic Sign Language recognition. Aliaa A. A. Youssif et al. [26] designed an Arabic Sign Language (ArSL) recognition system using HMMs. A large set of samples was used to recognize 20 isolated words from the standard Arabic Sign Language, and the proposed system was signer-independent. Experiments were conducted using real ArSL videos of deaf people in different clothes and with different skin colours. The image features, together with information about their relative orientation, position and scale, were used to define a subtle but discriminating view-based object model. For hand tracking, the contours of all detected skin regions in the binary image of each extracted frame were obtained using connected component analysis. The features considered include the position of the head, the coordinates of the centre of the hand region, and the direction angle of the hand region. Other features representing the shape of the hand were also considered, extracted from changes of image intensities known as image motion. A total of 8 features per frame were extracted, and an HMM was used as the classifier. The system achieved an overall recognition rate of up to 82.22%.

Mohamed S. Abdalla et al. [27] presented dynamic hand gesture recognition of Arabic Sign Language using hand motion trajectory features. Their system took a dynamic gesture (video stream) as input, extracted the hand area and computed hand motion features, then used these features to recognize the gesture. The system identified the hand blob using the YCbCr colour space to detect the skin colour of the hand, and classified the input pattern based on a correlation-coefficient matching technique. The significance of the system lay in its simplicity and its ability to recognize gestures independently of the skin colour and physical structure of the performers. The experimental results showed a gesture recognition rate of 85.67% for 20 different signs performed by 8 different signers.
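
Dynamic time warping, the stronger-performing of the two template-matching families compared in [25], aligns two sequences of unequal length before measuring their distance. The sketch below is a minimal textbook DTW in pure NumPy, not the authors' implementation; the Euclidean local cost over (x, y, z) accelerometer samples is an assumption.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature
    vectors, e.g. arrays of (x, y, z) accelerometer samples."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warp steps.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# A template library would then be matched by nearest DTW distance:
# label = min(templates, key=lambda t: dtw_distance(sample, templates[t]))
```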

Mingyu Chen et al. [28] performed feature processing and modelling for 6-dimensional motion gesture (6DMG) recognition. A 6D motion gesture is represented by a 3D spatial trajectory augmented by three further dimensions of orientation. Using different tracking technologies, the motion can be tracked explicitly with the position and orientation, or implicitly with the acceleration and angular speed. In their work they addressed the problem of motion gesture recognition for command-and-control applications. Their main contribution was to investigate the relative effectiveness of various feature dimensions for motion gesture recognition in both user-dependent and user-independent cases. They introduced a statistical feature-based classifier as the baseline and proposed an HMM-based recognizer, which offered more flexibility in feature selection and achieved better recognition accuracy than the baseline system. The 6D motion gesture database contained 20 distinct gestures totalling 5600 gesture samples performed by 28 subjects, and recorded comprehensive motion data including position, orientation, acceleration and angular speed. Thus, 6DMG was used as a common ground to compare the recognition performance of different tracking signals and methods. The study also gave an insight into the attainable recognition rate with different tracking devices, which is valuable for a system designer choosing the proper tracking technology. In the user-dependent case, both approaches worked well with either implicit or explicit 6D data. The user-independent case was more challenging due to the large in-class variations between users. In light of the inherent variation in scale and speed across users, these two factors should be minimized as differentiating features in the definition of any gesture. For the HMM-based recognizer they proposed a normalization procedure to alleviate this problem and proved its effectiveness. Unfortunately, some of the statistical features prevented them from applying the same normalization concept, and they let the statistical nature take its course in handling the in-class variations. Overall, the statistical feature-based linear classifier achieved 85.2% and 93.5% accuracy with implicit and explicit 6D data, respectively; the HMM-based recognizer had higher recognition rates of 91.9% and 96.9%, respectively. In addition to its better performance, the HMM-based recognizer also worked with more flexible feature combinations and in general kept the accuracy above 96%, which meant flexibility in choosing the tracking technologies. They concluded that motion gesture recognition benefits from the complete 6D motion information, and that robust motion gesture recognition is achievable even in the challenging user-independent case.
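
Several of the recognizers surveyed here ([21], [24], [28]) share the same HMM pattern: fit one model per gesture class and classify a new sequence by maximum log-likelihood. The sketch below shows that pattern using the hmmlearn package as a stand-in; the state count, covariance type and feature layout are placeholders, not values from any of the cited papers.

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data, n_states=5):
    """Fit one Gaussian-emission HMM per gesture class.
    training_data maps a class label to a list of (T_i, D) arrays,
    one per training sequence of that gesture."""
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                 # stacked observations
        lengths = [len(s) for s in sequences]    # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(sequence))
```

The per-user scale and speed normalization proposed in [28] would be applied to each sequence before training and scoring.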

Heung-Il Suk et al. [29] proposed hand gesture recognition based on a Dynamic Bayesian Network (DBN) framework. In this method, DBN-based inference was preceded by steps of skin extraction and modelling, and motion tracking. They then developed a gesture model for one- or two-hand gestures, which was used to define a cyclic gesture network for modelling a continuous gesture stream. They also developed a dynamic programming (DP)-based real-time decoding algorithm for continuous gesture recognition. With 10 isolated gestures, they obtained a recognition rate of up to 99.59% with cross validation. In the case of recognizing a continuous stream of gestures, they recorded 84% with a precision of 80.77% for the spotted gestures. The proposed DBN-based hand gesture model and the design of the gesture network model were believed to have strong potential for successful application to related problems such as sign language recognition, although that is somewhat more complicated, requiring analysis of hand shapes.

Dominique Uebersax et al. [30] proposed real-time sign language letter and word recognition from depth data, presenting a system for recognizing letters and fingerspelled words of the American Sign Language (ASL) in real time. The system segmented the hand and estimated the hand orientation from captured depth data. The letter classification was based on average neighbourhood margin maximization and relied on the segmented depth data of the hands. For word recognition, the letter confidences were aggregated. Furthermore, the word recognition was used to improve the letter recognition by updating the training examples of the letter classifiers on-line.

A similar method, but for real-time static hand gesture recognition for American Sign Language (ASL) against a complex background, was proposed by Jayashree R. Pansare et al. [31]. The experimental setup used a fixed-position low-cost web camera with 10-megapixel resolution, mounted on top of the computer monitor, which captured snapshots in the Red Green Blue (RGB) colour space from a fixed distance. The work was divided into four stages: image pre-processing, region extraction, feature extraction and feature matching. The first stage converted the captured RGB image into a binary image using the gray-threshold method, with noise removed using median and Gaussian filters, followed by morphological operations. The second stage extracted the hand region using blob detection, and cropping was applied to obtain the region of interest; Sobel edge detection was then applied to the extracted region. The third stage produced a feature vector consisting of the centroid and the area of the edge map, which was compared against the feature vectors of a training dataset of gestures using Euclidean distance in the fourth stage. The least Euclidean distance identified the best-matching gesture, for display of the ASL alphabet and meaningful words using file handling. They experimented on 26 static hand gestures corresponding to the letters A-Z. The training dataset consisted of 100 samples of each ASL symbol under different lighting conditions and with different sizes and shapes of hand. This gesture recognition system reliably recognized single-hand gestures in real time and achieved a 90.19% recognition rate against complex backgrounds with a minimum of constraints.
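
The four-stage pipeline of [31] maps naturally onto a few OpenCV calls. The following is a loose sketch of those stages, not the authors' code: Otsu thresholding stands in for their gray-threshold step, and the kernel sizes and the use of edge-pixel count as the area feature are assumptions.

```python
import cv2
import numpy as np

def extract_features(bgr_image):
    """Binary segmentation followed by edge-based centroid/area features,
    loosely following the pre-processing and feature stages of [31]."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                       # noise removal
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    edges = cv2.Sobel(binary, cv2.CV_8U, 1, 1, ksize=3)  # edge map
    m = cv2.moments(edges)
    cx = m["m10"] / (m["m00"] + 1e-9)                    # edge centroid
    cy = m["m01"] / (m["m00"] + 1e-9)
    area = float(np.count_nonzero(edges))
    return np.array([cx, cy, area])

def nearest_gesture(features, training_set):
    """training_set maps a label to its stored feature vector; the
    least Euclidean distance gives the matched gesture, as in [31]."""
    return min(training_set,
               key=lambda lbl: np.linalg.norm(features - training_set[lbl]))
```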

As gestures involve dynamic and complex motion, multiview observation and recognition are desirable. For a better representation of gestures, one needs to know from which views a gesture should be observed; furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, Toshiyuki Kirishima et al. [32] proposed real-time multiview recognition of human gestures using distributed image processing. They proposed a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation they used the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark, together with a Japanese Sign Language (JSL) image database containing 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, they identified gestures that exhibited view dependency and gestures that did not, and found that the view dependency itself varied depending on the target gesture set. When the recognition results of different views were integrated, swarm-based integration provided more robust and better recognition performance than individual fixed-view recognition agents.

Ulrich Von Agris et al. [33] presented a paper on recent developments in visual sign language recognition, describing a comprehensive approach to robust visual sign language recognition that reflects developments in this field. The proposed recognition system aimed at signer-independent operation and utilized a single video camera for data acquisition to ensure user friendliness. In order to cover all aspects of sign languages, sophisticated algorithms were developed that robustly extracted manual and facial features, including in uncontrolled environments. The classification stage was designed for recognition of isolated signs as well as continuous sign language. To overcome the problem of high interpersonal variance, dedicated adaptation methods known from speech recognition were implemented and modified to account for the specifics of sign languages. Remarkable recognition performance was achieved for signer-dependent classification and medium-sized vocabularies. Furthermore, the presented recognition system was suitable for signer-independent real-world applications where small vocabularies suffice, e.g. for controlling interactive devices. Breaking signs down into smaller subunits allowed the extension of an existing vocabulary without the need for large amounts of training data. This constituted a key feature in the development of sign language recognition systems supporting large vocabularies. Methods for signer adaptation resulted in significant performance improvements: while the modified maximum likelihood linear regression approach served for rapid adaptation to unknown signers, the combined maximum a posteriori estimation resulted in high accuracy for larger sets of adaptation data.

Joyeeta Singha et al. [34] proposed a method for recognition of Indian Sign Language in a live video stream, in which various alphabets of Indian Sign Language are recognized from continuous video sequences of the signs. The proposed system comprised three stages: pre-processing, feature extraction and classification. The pre-processing stage included skin filtering and histogram matching; eigenvalues and eigenvectors were used in the feature extraction stage; and finally an eigenvalue-weighted Euclidean distance based classification was used to recognize the sign. It dealt with bare hands, allowing the user to interact with the system in a natural way. The dataset used for training the recognition system consisted of 24 ISL signs from 20 people. They tested the system with 20 videos and attained a success rate of 96.25%. Features such as good accuracy, use of bare hands, recognition of both single- and two-hand gestures, and operation on video were achieved.

Another Sign Language Recognition (SLR) system for deaf and dumb people was proposed by Sakshi Goyal et al. [35]. They proposed an algorithm for an application that helps in recognizing the different signs of Indian Sign Language. The images are of the palm side of the right and left hand of a single signer and are loaded at runtime. The real-time images were captured first and stored in a directory, and feature extraction was performed using the SIFT (Scale Invariant Feature Transform) algorithm to identify which sign had been articulated by the user. Comparisons were then performed, and the result was produced according to the key-points matched between the input image and the image already stored in the directory or database for a specific letter. Of the 26 signs in Indian Sign Language, one for each alphabet, the proposed algorithm provided 95% accurate results for 9 alphabets, with their images captured at every possible angle and distance.
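
SIFT-based matching, as used in [35] (and later in [54], [55] and [56]), amounts to counting reliable keypoint correspondences between a query image and each stored template. A minimal sketch with OpenCV follows; the ratio-test threshold is a conventional value, not necessarily the one used in the cited systems.

```python
import cv2

sift = cv2.SIFT_create()

def matched_keypoints(query_img, template_img, ratio=0.75):
    """Count SIFT matches that survive Lowe's ratio test, given two
    grayscale images; the template with the most matches wins."""
    _, des_q = sift.detectAndCompute(query_img, None)
    _, des_t = sift.detectAndCompute(template_img, None)
    if des_q is None or des_t is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_q, des_t, k=2)
    # Keep a match only when it is clearly better than the runner-up.
    return sum(1 for m, n in pairs if m.distance < ratio * n.distance)

# classification: label = max(templates,
#                             key=lambda t: matched_keypoints(img, templates[t]))
```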

Nicolas Pugeault et al. [36] designed a real-time ASL fingerspelling recognition system. The system made use of a Microsoft Kinect device to collect appearance and depth images, and the OpenNI+NITE framework for hand detection and tracking. Hand shapes corresponding to letters of the alphabet were characterized using appearance and depth images and classified using random forests. They used over 500 samples of each sign, recorded from 4 different persons (non-native to sign language), amounting to a total of 48,000 samples. Half of this data was kept for validation and half was used for training the random forest. The subjects were asked to make the sign facing the Kinect device and to move their hand around while keeping the hand shape fixed, in order to collect a good variety of backgrounds and viewing angles. The hand shape detection worked in real time and was integrated in an interactive user interface that allowed the signer to select between ambiguous detections, together with an English dictionary for efficient writing. The overall performance was recorded when using appearance (intensity) only, depth only, and a combined feature vector. The best performance was obtained using the combined vector (mean precision 75%), followed by appearance (mean precision 73%) and depth (mean precision 69%). The slightly lower performance of depth was compensated by a greater robustness to environmental circumstances such as lighting. They compared classification using appearance and depth images and showed that the combination of both led to the best results.

Trong-Nguyen Nguyen et al. [37] used principal component analysis with an artificial neural network for static hand gesture recognition. Images were collected from open datasets, and videos were recorded from a fixed webcam with a simple background and stable lighting. The videos were recorded by five different persons, each performing a set of gestures, and were transferred to AVI (Audio Video Interleave) format and tested. To segment the hand, a skin colour filter was used; the image was then converted to gray scale, a median filter was applied to remove noise, and the result was converted to a binary image. For feature extraction, the eigenvectors of the covariance matrix of the image were obtained. A 3-layer multilayer feed-forward neural network with the back-propagation algorithm was then used for training. Each gesture was recognized with 100 images, corresponding to a set of 2400 gestures. The average recognition rate reached 94.3% with 3600 training images.

Rajeshree S. Rokade et al. [91] proposed a novel system for recognition of static hand gestures from video, based on a Growing Neural Gas (GNG) network. They proposed an algorithm to separate out key frames containing correct gestures from a video sequence, and segmented hand images from complex and non-uniform backgrounds. Features were extracted by applying GNG to the key frames, and recognition was performed.
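
The eigenvector-based feature extraction of [37] is the classical PCA-then-neural-network pattern. The sketch below reproduces that pattern with scikit-learn as a stand-in for the authors' implementation; the number of components and the hidden-layer size are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def build_recognizer(images, labels, n_components=50):
    """images: (N, H*W) array of flattened binary hand images.
    PCA plays the role of the covariance-eigenvector feature
    extraction in [37]; the MLP stands in for their 3-layer
    feed-forward back-propagation network."""
    model = make_pipeline(
        PCA(n_components=n_components),
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    )
    model.fit(images, labels)
    return model

# prediction for one image:
# label = model.predict(test_image.reshape(1, -1))[0]
```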

In sign language recognition, it is desirable to use a shape representation technique that sufficiently describes the shape of the hand while also being capable of fast computation, enabling recognition in real time. It is also desirable for the technique to be invariant to translation, rotation and scaling, and a method that allows for easy matching is beneficial. In [38], Zhang performed comprehensive tests comparing some of the more prominent contour-based and region-based descriptors. His studies compared the shape descriptors in terms of retrieval accuracy, compactness of features, generality of application, computational complexity, robustness of retrieval performance, and hierarchical coarse-to-fine representation. His tests involved matching shapes from the MPEG-7 database that had undergone changes in translation, orientation and scale. His research concluded that, among contour-based descriptors, the Fourier descriptor was the best of the techniques tested; among region-based shape descriptors, the Zernike moment descriptor and the generic Fourier descriptor were the best approaches with regard to the aforementioned aspects. Since the interior of the hand is important for distinguishing between many hand shapes, especially closed-fist hand shapes, a region-based shape descriptor is preferable to a contour-based one for hand shape recognition, because region-based descriptors encapsulate information about the interior of objects. Among the region-based shape descriptors, the generic Fourier descriptor is one of the more promising, due to its ability to describe shape accurately and its speed and ease of computation.

Sanjay Meena [3] used the Canny edge detection technique to find the boundary of the hand gesture in an image. A contour tracking algorithm is applied to track the contour in the clockwise direction. The contour of a gesture is represented by a Localized Contour Sequence, whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centred on each contour pixel. These extracted features are applied as input to a classifier. A linear classifier discriminates the images based on the dissimilarity between two images; a Multi-Class Support Vector Machine (MCSVM) and a Least Squares Support Vector Machine (LSSVM) were also implemented for classification.
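
To make Zhang's conclusion concrete, the sketch below computes the basic contour Fourier descriptor he found to be the strongest contour-based technique. It is a generic textbook construction, not [38]'s exact experimental setup; the number of retained harmonics is an arbitrary choice.

```python
import cv2
import numpy as np

def fourier_descriptor(mask, n_coeffs=16):
    """Contour-based Fourier descriptor of a binary hand mask.
    Dropping the DC term gives translation invariance, dividing by the
    first harmonic magnitude gives scale invariance, and keeping only
    magnitudes discards the phase that encodes rotation/start point."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    z = pts[:, 0] + 1j * pts[:, 1]        # contour as a complex signal
    spectrum = np.fft.fft(z)
    mags = np.abs(spectrum[1:n_coeffs + 1])
    return mags / mags[0]
```

The generic Fourier descriptor favoured for hand shapes differs in that it applies a 2-D Fourier transform over the polar-sampled region, so it also encodes the interior of the hand rather than the outline alone.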

Abhinandan Julka et al. [39] designed a static hand gesture recognition system based on the Local Contour Sequence (LCS). Their technique provided a human hand interface with the computer that can recognize static gestures from American Sign Language; 24 static alphabet gestures from ASL were recognized. Their system worked only offline and was mainly dependent on a database. The OTSU algorithm was used for segmentation. Problems arising from improper capture of clear pictures by the camera, owing to the position or orientation of the hand gesture and its distance from the camera, were overcome by using the LCS technique, which is invariant to all these translations. It calculated the start point by searching for the topmost non-zero pixel, and from that pixel it numbered the contour sequentially in the clockwise direction; a change in orientation results in a circular shift of the sequence. They created the database while wearing a black cloth around the arm from shoulder to wrist. Their system worked efficiently, with an average recognition rate of 97.4%.

Single-handed static gestures pose more recognition complexity due to the high degree of shape ambiguity. Rohit Sharma et al. [40] presented a gesture recognition setup capable of recognizing the most ambiguous static single-handed gestures. The performance of the proposed scheme was tested on the alphabets of ASL. Segmentation of hand contours from the image background was carried out using two different strategies: skin colour as the detection cue, with the RGB and YCbCr colour spaces, and thresholding of gray-level intensities. A rotation- and size-invariant contour tracing descriptor was used to describe the gesture contours generated by each segmentation technique. The performance of the k-Nearest Neighbour (k-NN) and multiclass Support Vector Machine (SVM) classification techniques was evaluated for classifying a particular gesture. Gray-level segmented contour traces classified by the multiclass SVM achieved accuracy up to 80.8% on the most ambiguous gestures of the ASL alphabet, with an overall accuracy of 90.1%.

Vijay Kumar et al. [41] presented the importance of statistical measures in digital image processing, giving a comprehensive study of various statistical measures and their applications in digital image processing at root level. They simulated the majority of the statistical measures and reviewed their existing applications, and also explored and proposed their importance in further research areas of digital image processing. They performed a comparative analysis with the help of MATLAB simulation to ease the selection of statistical parameters for specific image processing techniques such as image enhancement, de-noising, restoration and edge detection. They concluded that the proposed statistical model could be used as a pre-processing model for various digital image processing techniques, to improve the effectiveness of complex image processing techniques at subsequent levels.
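
The classifier comparison carried out in [40] can be reproduced generically once the contour-trace descriptors are available. The sketch below, using scikit-learn, shows the shape of such an evaluation; the hyperparameters and the 5-fold protocol are illustrative defaults, not the paper's settings.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y):
    """X: (N, D) contour-trace descriptors, y: gesture labels.
    Mirrors the k-NN versus multiclass SVM comparison of [40]."""
    knn = KNeighborsClassifier(n_neighbors=5)
    svm = SVC(kernel="rbf", decision_function_shape="ovr")
    return {
        "k-NN": cross_val_score(knn, X, y, cv=5).mean(),
        "SVM": cross_val_score(svm, X, y, cv=5).mean(),
    }
```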

A neural network based static sign gesture recognition system was proposed by Parul Chaudhary et al. [42]. The system used the L*a*b colour space for skin region detection using a thresholding technique. L*a*b is a colour space defined by the CIE (the International Commission on Illumination), based on one channel for luminance (lightness) (L) and two colour channels (a and b). The input RGB image was first converted to the L*a*b* colour space to separate the intensity information into a single plane of the image, and the local range in each layer was then calculated. The region of interest (the hand) was cropped and converted into a binary image for feature extraction. The height, area, centroid, and distance of the centroid from the origin (top-left corner) of the image were then used as features. Finally, each set of feature vectors was used to train a feed-forward back-propagation network. The image database consisted of only four static sign gestures of ASL in .jpg format. Experimental results showed successful recognition of static sign gestures, with an average recognition accuracy of 85% on a typical set of test images.

Md. Atiqur Rahman et al. [43] presented a system for recognizing static alphabet gestures of American Sign Language (ASL) using an artificial neural network (ANN). The required images for the selected alphabets were obtained using a digital camera of 8-megapixel resolution. The colour images were cropped, resized and converted to binary images. The height, area, centroid, and Euclidean distance of the centroid from the origin (top-left corner) of the image were then used as features. Finally, the extracted features were used to train a back-propagation neural network. This recognition system did not use any gloves or visual markings, requiring only images of the bare hand against a uniform background. Experimental results showed that the system was able to recognize the 26 selected ASL alphabets with an average accuracy of %.

P. V. V. Kishore et al. [44] proposed a video-based Indian Sign Language Recognition system (INSLR) using the wavelet transform and fuzzy logic. The system integrated various image processing and computational intelligence techniques in order to deal with sentence recognition. A wavelet-based video segmentation technique was proposed that detected the shapes of various hand signs and head movements in a video-based setup. Shape features of hand gestures were extracted using elliptical Fourier descriptors, which greatly reduced the feature vector for an image. Principal component analysis (PCA) was used to further minimize the feature vector for a particular gesture video, and the features were not affected by scaling or rotation of gestures within a video, which made the system more flexible. The features generated using these techniques made the feature vector unique for a particular gesture. Recognition of gestures from the extracted features was performed using a Sugeno-type fuzzy inference system with linear output membership functions. Finally, the INSLR system employed an audio system to play back the recognized gestures along with text output. The system was tested using a dataset of 80 words and sentences from 10 different signers, and the experimental results showed a recognition rate of 96%.
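
Both [42] and [43] use the same compact geometric feature vector: height, area, centroid, and the distance of the centroid from the image origin. A minimal sketch of that extraction from a binary hand image follows; it is a plausible reading of the papers' feature definitions rather than their exact code.

```python
import numpy as np

def geometric_features(binary):
    """Height, area, centroid, and Euclidean distance of the centroid
    from the image origin (top-left corner), as used in [42] and [43].
    binary: 2-D boolean/0-1 array with the hand as foreground."""
    ys, xs = np.nonzero(binary)
    height = ys.max() - ys.min() + 1      # vertical extent of the hand
    area = len(xs)                        # foreground pixel count
    cx, cy = xs.mean(), ys.mean()         # centroid
    dist = np.hypot(cx, cy)               # distance from (0, 0)
    return np.array([height, area, cx, cy, dist], dtype=float)
```

The resulting five-dimensional vectors would then be fed to the feed-forward back-propagation networks described in both papers.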

Ali Karami et al. [45] worked on Persian Sign Language (PSL) recognition using the wavelet transform and neural networks. The required images for the selected alphabets were obtained using a digital camera. The colour images were cropped, resized and converted to grayscale, after which the discrete wavelet transform (DWT) was applied and features were extracted. They used the approximation coefficients at the 6th level of wavelet decomposition, along with the diagonal and horizontal detail coefficients at levels 6 and 7 and the vertical detail coefficients at level 6, as the feature vector. Finally, the extracted features were used to train a Multi-Layer Perceptron (MLP) neural network. The recognition system did not use any gloves or visual markings and required only images of the bare hand. The system was implemented and tested using a dataset of 640 samples of Persian sign images, 20 images for each sign. Experimental results showed that the system was able to recognize the 32 selected PSL alphabets with an average classification accuracy of 94.06%.

Alaa Barkoky [46] proposed a method to recognize image-based numbers of Persian Sign Language (PSL) using a thinning method on the segmented image. In this approach, after cleaning the thinned image, the real endpoints are used for recognition. The method is capable of real-time recognition and is not affected by hand rotation or scaling. Experimentation on 300 images gave an average recognition rate of 96.6%.

Sign language recognition using a thinning algorithm was also proposed by S. N. Omkar et al. [47]. This endeavour was yet another approach to the interpretation of human hand gestures. The first step of the work was background subtraction, achieved by the Euclidean distance threshold method. Morphological operations such as dilation and erosion were performed next, with the help of a structuring element, to further process the images. At this stage the thinning algorithm was applied to produce a thinned image of the hand for further analysis. Different feature points, including terminating points and curved edges, were extracted for recognition of the different signs. The tips of the fingers are the terminating points that were extracted, with a maximum of five end points obtained for each sign; in cases where no fingers were open, the curved edges were computed. The extracted points were used to calculate the distances between them, and by comparing these distances the required interpretation of the gesture was obtained. The input was taken from video data of a human hand gesturing all the signs of the American Sign Language. The efficiency of the proposed method was calculated for five different video sets, each containing all the signs for the numbers (1 to 10) and the alphabets (A to Z) of the American Sign Language.
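
The endpoint extraction at the heart of the thinning-based methods of [46] and [47] can be sketched with scikit-image and SciPy: thin the mask to a one-pixel skeleton, then mark skeleton pixels with exactly one skeleton neighbour as fingertips. The neighbour-count test below is a standard construction, an assumption rather than the cited papers' exact cleaning procedure.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_endpoints(binary):
    """Thin the hand mask to one pixel width, then return endpoints:
    skeleton pixels with exactly one skeleton neighbour (fingertips)."""
    skel = skeletonize(binary.astype(bool))
    kernel = np.ones((3, 3), dtype=int)    # counts the pixel plus neighbours
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    endpoints = skel & (neighbours == 2)   # itself + exactly one neighbour
    return np.argwhere(endpoints)          # (row, col) of each fingertip
```

Pairwise distances between these endpoints give the distance features that [47] compares to interpret the gesture.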

Shekhar Singh et al. [48] proposed a method for recognizing and interpreting sign language gestures for human-robot interaction. They described a sign language gesture based recognition, interpretation and imitation learning system using Indian Sign Language for performing human-robot interaction in real time. The classification, recognition, learning and interpretation were carried out by extracting features from Indian Sign Language (ISL) gestures. A database of 21 ISL gestures was collected from 10 different persons, with 10 samples of each gesture for training and 10 separate samples of each gesture for testing. Chain code and the Fisher score were used as the feature vector for the classification and recognition process, which was performed with two statistical approaches, the Hidden Markov Model (HMM) and a feed-forward back-propagation neural network (FNN), in order to achieve satisfactory recognition accuracy. The sensitivity, specificity and accuracy were found to be 98.60%, 97.64% and 97.52%, respectively. It was concluded that the FNN gave fast and accurate recognition and works as a promising tool for the recognition and interpretation of sign language gestures for human-computer interaction. The overall accuracy of recognition and interpretation of the proposed system was 95.34%.

As the pixel information of a depth image is derived from distance information, when implementing the SURF algorithm with a Kinect sensor for static sign language recognition there can be mismatched pairs in the palm area. Zhang Fang Hu et al. [49] proposed a modified SURF algorithm combined with the SVM method for recognition of the 26 ASL alphabets. They proposed a feature point selection algorithm that filters the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between point pairs. This not only greatly improved the recognition rate, but also ensured robustness to environmental factors such as skin colour, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results showed that the improved SURF algorithm can effectively improve the recognition rate compared to the previous SURF algorithm, with an average recognition rate of 97.7%.

However, approaches that require the placement of markers on the hands are not popularly accepted. Object recognition and tracking are classic tasks in machine vision, which may also be used for hand gesture recognition.

Y. Cui [50] presented a system that was able to recognize different hand gestures against a complex background. It reached 93.1% correct recognition for 28 different gestures, but the system is not user-independent and has a relatively slow segmentation speed.

Jonathan C. Rupe [51] developed a system to identify hand shapes commonly found in American Sign Language using the region-based generic Fourier descriptor. He introduced an approach to obtain image-based hand features that accurately describe hand shapes commonly found in ASL. A hand recognition system capable of identifying 31 ASL hand shapes was developed to identify hand shapes in a given input image or video sequence. An appearance-based approach with a single camera was used to recognize the hand shape. A region-based shape descriptor, the generic Fourier descriptor, invariant to translation, scale and orientation, was implemented to describe the shape of the hand, and a wrist detection algorithm was developed to remove the forearm from the hand region before the features are extracted. The recognition of the hand shapes was performed with a multi-class Support Vector Machine. Testing provided a recognition rate of approximately 84% based on a widely varying testing set of approximately 1,500 images and a training set of about 2,400 images; with a larger training set of approximately 2,700 images and a testing set of approximately 1,200 images, the recognition rate increased to about 88%.

Qutaishat Munib et al. [52] performed American Sign Language (ASL) recognition based on the Hough transform and neural networks. Their system did not rely on any gloves or visual markings to achieve the recognition task; instead, it dealt with images of bare hands, which allowed the user to interact with the system in a natural way. The feature vector is the Hough transform obtained from the Canny edge-detected images of the signs. The extracted features were not affected by rotation, scaling or translation of the gesture within the image, which made the system more flexible. A neural network was then used as the classifier. The system was implemented and tested using a dataset of 300 samples of hand sign images, 15 images for each sign; altogether, only 20 different signs of alphabets and numbers were used. The performance of the system was checked by varying the threshold level for Canny edge detection and the number of samples used for each sign. The experiments revealed that the system was able to recognize the selected 20 ASL signs with an accuracy of 92.3% for the chosen threshold value.
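
The Canny-plus-Hough feature step of [52] can be sketched as follows with OpenCV: detect edges, run the standard Hough line transform, and bin the detected line orientations into a fixed-length vector for the neural network. The thresholds and the orientation-histogram binning are assumptions for illustration, not the values or exact feature layout of [52].

```python
import cv2
import numpy as np

def hough_feature_vector(gray, canny_lo=50, canny_hi=150, n_bins=64):
    """Canny edges followed by a standard Hough line transform; the
    orientations (theta) of the detected lines are binned into a
    fixed-length, normalized feature vector."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=30)
    hist = np.zeros(n_bins)
    if lines is not None:
        thetas = lines[:, 0, 1]            # orientation of each line, [0, pi)
        idx = np.minimum((thetas / np.pi * n_bins).astype(int), n_bins - 1)
        np.add.at(hist, idx, 1)
    # Normalizing makes the vector independent of the number of lines.
    return hist / (hist.sum() + 1e-9)
```

Because the Hough accumulator aggregates evidence over the whole image, such features change little under translation of the gesture, which matches the invariance the authors report.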

Bahare Jalilian et al. [53] also proposed Persian Sign Language recognition using radial distance and the Fourier transform. Accurate hand segmentation is the first and most important step in sign language recognition systems, and they proposed a method for hand segmentation that helps to build a better vision-based sign language recognition system. The proposed method was based on the YCbCr colour space, a single Gaussian model and the Bayes rule for segmentation, and it detected the hand region against complex backgrounds and under non-uniform illumination. Hand gesture features were extracted by radial distance and the Fourier transform. They first applied the Sobel edge detector to the segmented hand image, then extracted features from the edges of the hand region. Next, they used the radial distance model to obtain a 1D functional representation of the boundary shapes (signatures) and to build feature vectors; the radial distance technique is based on the distance from the centroid of the shape to selected boundary edge pixels as a function of angle. Finally, the Euclidean distance was used to compute the similarity between the input signs and all training feature vectors in the database. The system was tested on 480 posture images of PSL, 15 images for each of the 32 signs. Experimental results showed that their method was capable of recognizing all 32 PSL alphabets with a 95.62% recognition rate.

Hee-Deok Yang et al. [89] proposed robust sign language recognition by combining manual (i.e. fingerspelling or hand-made) and non-manual (i.e. facial expression) features, based on a conditional random field and a support vector machine. This is carried out in three steps. In the first step, candidate segments of manual signals are detected using a hierarchical conditional random field. In the second step, the hand shapes of the segmented signs are verified using the BoostMap embedding method to recognize fingerspelling. Finally, the facial expressions, as non-manual signals, are recognized using a support vector machine; this final step is taken when there is ambiguity in the previous two steps. The method recognized sign language at an 84% rate on utterance data.

A point pattern matching algorithm for recognition of 36 ASL gestures was used by Deval Patel [54]. She used pattern recognition to recognize ASL symbols based on features extracted by the SIFT algorithm, with the MK-RoD algorithm used to find the validity ratio. This operation was performed to determine the similar pattern of the matched key-points relative to the centre of the matched key-points: points for which the absolute difference fell below a given threshold were treated as valid matched key-points. Its performance was compared with widely used methods such as PCA and template matching. Testing one image of each type resulted in 77.7% recognition and an 8.33% false rejection rate.
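
Skin-colour segmentation in YCbCr, used by both [27] and [53], exploits the fact that skin clusters tightly in the chrominance (Cb, Cr) plane regardless of brightness. The sketch below uses a fixed chrominance box as a simple stand-in for the single-Gaussian Bayes model of [53]; the bounds are rough values from the literature, not theirs.

```python
import cv2
import numpy as np

# Rough Cb/Cr skin bounds from the literature; [53] instead fits a
# single Gaussian skin model and applies Bayes' rule per pixel.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(bgr):
    """Binary hand/skin mask from chrominance thresholds in YCbCr."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)           # OpenCV channel order: Y, Cr, Cb
    mask = ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))
    mask = mask.astype(np.uint8) * 255
    # Fill small holes so the largest component is the hand region.
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                            np.ones((5, 5), np.uint8))
```

Ignoring the luminance channel Y is what gives such segmentation a degree of robustness to non-uniform illumination.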

Sansanee Auephanwiriyakul et al. [55] proposed Thai sign language translation using the Scale Invariant Feature Transform and Hidden Markov Models. They developed an automatic Thai sign language translation system that was able to translate sign language other than fingerspelling. In particular, they utilized the Scale Invariant Feature Transform (SIFT) to match a test frame with observation symbols from key-point descriptors collected in a signature library. These key-point descriptors were computed from several key frames recorded at different times of day, over several days, from five subjects. Hidden Markov Models (HMMs) were then used to translate the observation sequences into words. They also collected Thai sign language videos from 20 subjects for testing. The system achieved approximately 86-95% accuracy on average for the signer-dependent case, 79.75% on average for the signer-semi-independent case (same subjects used in the HMM training only), and 76.56% on average for the signer-independent case. These results were from the constrained system, in which each signer wore a long-sleeved shirt in front of a dark background. The unconstrained system, in which each signer did not wear a long-sleeved shirt and stood in front of various natural backgrounds, gave a good result of around 74% on average in the signer-independent experiment. An important feature of the proposed system was its consideration of the shapes and positions of the fingers, in addition to hand information, which gave the system the ability to recognize sign words with similar gestures.

Nachamai M. [56] proposed alphabet recognition of American Sign Language using the SIFT algorithm. The dataset comprised all 26 alphabets, with 10 images of each sign differing in lighting and orientation. SIFT is space, size, illumination and rotation invariant. Pre-processing started with noise filtering, followed by image adjustment, histogram equalization and image normalization. After background subtraction, simple Sobel edge detection was applied to track the hand object on the screen. Features such as the hand shape, the position of the hand, the orientation and the movement (if any) were extracted, and the maximum depth of the image was calculated and stored. The approach worked well with both the standard American Sign Language (ASL) database and a homemade database, and the system also made a qualitative attempt at recognizing real-time images.

Klimis Symeonidis [57] used an orientation histogram of the image to develop a simple and fast algorithm for extracting features from a static image, for comparison and recognition of static ASL signs using a neural network. The orientation histogram has the advantage of being robust to lighting changes: orientation analysis gives robustness to illumination changes, while histogramming offers translational invariance. This method worked only for the limited set of ASL signs whose identical gestures mapped to similar orientation histograms and whose different gestures mapped to substantially different histograms; not all ASL gestures could be identified using this method.
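
An orientation histogram of the kind used in [57] is simple to compute: take image gradients, bin their orientations, and weight each vote by the gradient magnitude. The sketch below is a generic version of that descriptor; the bin count and normalization are illustrative choices rather than the values used in [57].

```python
import cv2
import numpy as np

def orientation_histogram(gray, n_bins=36):
    """Histogram of local gradient orientations, magnitude weighted,
    in the spirit of [57]. gray: single-channel image."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)               # orientations in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(),
                       minlength=n_bins)
    # Normalizing makes the descriptor insensitive to overall contrast,
    # which is the source of its robustness to lighting changes.
    return hist / (hist.sum() + 1e-9)
```

Because the histogram discards pixel positions, it is translation invariant, which is exactly the property, and the limitation, that [57] reports: different gestures with similar edge-orientation statistics collide.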

Static hand gesture recognition for sign language alphabets using an edge-oriented histogram and a multi-class SVM was proposed by S. Nagarajan et al. [58]. The edge histogram counts of the input sign language alphabets were extracted as features and applied to a multiclass SVM for classification. The image database contained 720 images in total for 24 classes of American Sign Language alphabets, each class containing 30 images. The images were captured with different signers, under different lighting conditions and with different orientations. In the pre-processing step, the input image was first resized and converted to gray scale, and global thresholding was used to segment the hand region. As the segmented hand image still contained noise, image morphological operations such as erosion and dilation were performed on it. Edge detection was applied to significantly reduce the amount of data and to filter out useless information while preserving the structural properties of the image; the Canny operator was used to detect the edges of the filtered image. After finding the edges of the hand image with Canny edge detection, the Edge Oriented Histogram features were extracted and used for SVM-based classification. The average accuracy of the system was compared for different numbers of features, and the experimental findings demonstrated that the proposed method has a success rate of 93.75% with 64 features. Some alphabets were misclassified due to the orientation of the hand in front of the camera, as well as the similarity between certain gestures such as A, E and S.

Arindam Misra et al. [59] presented a hand gesture recognition system using the Histogram of Oriented Gradients and Partial Least Squares regression. In their work they proposed a real-time hand gesture recognition system that employed techniques developed for pedestrian detection to recognize a small vocabulary of human hand gestures. The database was developed by capturing images of the seven hand gestures to be recognized against uniformly coloured backgrounds and chroma-keying the images onto various backgrounds obtained from Google Images. As the classifier was intended to be used mainly indoors, in home or office spaces, they collected mostly office and home backgrounds. Image sets with varying degrees of positional variation were captured and used to generate the training images for the classifiers: sets with no positional variation, slight positional variation and large positional variation. For generating the test images, separate sets with similar attributes and different


More information

PAPER REVIEW: HAND GESTURE RECOGNITION METHODS

PAPER REVIEW: HAND GESTURE RECOGNITION METHODS PAPER REVIEW: HAND GESTURE RECOGNITION METHODS Assoc. Prof. Abd Manan Ahmad 1, Dr Abdullah Bade 2, Luqman Al-Hakim Zainal Abidin 3 1 Department of Computer Graphics and Multimedia, Faculty of Computer

More information

Recognition of sign language gestures using neural networks

Recognition of sign language gestures using neural networks Recognition of sign language gestures using neural s Peter Vamplew Department of Computer Science, University of Tasmania GPO Box 252C, Hobart, Tasmania 7001, Australia vamplew@cs.utas.edu.au ABSTRACT

More information

AVR Based Gesture Vocalizer Using Speech Synthesizer IC

AVR Based Gesture Vocalizer Using Speech Synthesizer IC AVR Based Gesture Vocalizer Using Speech Synthesizer IC Mr.M.V.N.R.P.kumar 1, Mr.Ashutosh Kumar 2, Ms. S.B.Arawandekar 3, Mr.A. A. Bhosale 4, Mr. R. L. Bhosale 5 Dept. Of E&TC, L.N.B.C.I.E.T. Raigaon,

More information

Recognition of Tamil Sign Language Alphabet using Image Processing to aid Deaf-Dumb People

Recognition of Tamil Sign Language Alphabet using Image Processing to aid Deaf-Dumb People Available online at www.sciencedirect.com Procedia Engineering 30 (2012) 861 868 International Conference on Communication Technology and System Design 2011 Recognition of Tamil Sign Language Alphabet

More information

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand

More information

Detection and Recognition of Sign Language Protocol using Motion Sensing Device

Detection and Recognition of Sign Language Protocol using Motion Sensing Device Detection and Recognition of Sign Language Protocol using Motion Sensing Device Rita Tse ritatse@ipm.edu.mo AoXuan Li P130851@ipm.edu.mo Zachary Chui MPI-QMUL Information Systems Research Centre zacharychui@gmail.com

More information

Sign Language to Number by Neural Network

Sign Language to Number by Neural Network Sign Language to Number by Neural Network Shekhar Singh Assistant Professor CSE, Department PIET, samalkha, Panipat, India Pradeep Bharti Assistant Professor CSE, Department PIET, samalkha, Panipat, India

More information

Keywords Fuzzy Logic, Fuzzy Rule, Fuzzy Membership Function, Fuzzy Inference System, Edge Detection, Regression Analysis.

Keywords Fuzzy Logic, Fuzzy Rule, Fuzzy Membership Function, Fuzzy Inference System, Edge Detection, Regression Analysis. Volume 6, Issue 3, March 2016 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Modified Fuzzy

More information

International Journal of Multidisciplinary Approach and Studies

International Journal of Multidisciplinary Approach and Studies A Review Paper on Language of sign Weighted Euclidean Distance Based Using Eigen Value Er. Vandana Soni*, Mr. Pratyoosh Rai** *M. Tech Scholar, Department of Computer Science, Bhabha Engineering Research

More information

Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields Sensors 2015, 15, 135-147; doi:10.3390/s150100135 Article OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

More information

Implementation of image processing approach to translation of ASL finger-spelling to digital text

Implementation of image processing approach to translation of ASL finger-spelling to digital text Rochester Institute of Technology RIT Scholar Works Articles 2006 Implementation of image processing approach to translation of ASL finger-spelling to digital text Divya Mandloi Kanthi Sarella Chance Glenn

More information

International Journal for Science and Emerging

International Journal for Science and Emerging International Journal for Science and Emerging ISSN No. (Online):2250-3641 Technologies with Latest Trends 8(1): 7-13 (2013) ISSN No. (Print): 2277-8136 Adaptive Neuro-Fuzzy Inference System (ANFIS) Based

More information

Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor

Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor Analysis of Recognition System of Japanese Sign Language using 3D Image Sensor Yanhua Sun *, Noriaki Kuwahara**, Kazunari Morimoto *** * oo_alison@hotmail.com ** noriaki.kuwahara@gmail.com ***morix119@gmail.com

More information

Skin color detection for face localization in humanmachine

Skin color detection for face localization in humanmachine Research Online ECU Publications Pre. 2011 2001 Skin color detection for face localization in humanmachine communications Douglas Chai Son Lam Phung Abdesselam Bouzerdoum 10.1109/ISSPA.2001.949848 This

More information

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE) Vol 5, Issue 3, March 2018 Gesture Glove

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE) Vol 5, Issue 3, March 2018 Gesture Glove Gesture Glove [1] Kanere Pranali, [2] T.Sai Milind, [3] Patil Shweta, [4] Korol Dhanda, [5] Waqar Ahmad, [6] Rakhi Kalantri [1] Student, [2] Student, [3] Student, [4] Student, [5] Student, [6] Assistant

More information

Automated Brain Tumor Segmentation Using Region Growing Algorithm by Extracting Feature

Automated Brain Tumor Segmentation Using Region Growing Algorithm by Extracting Feature Automated Brain Tumor Segmentation Using Region Growing Algorithm by Extracting Feature Shraddha P. Dhumal 1, Ashwini S Gaikwad 2 1 Shraddha P. Dhumal 2 Ashwini S. Gaikwad ABSTRACT In this paper, we propose

More information

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation The Computer Assistance Hand Gesture Recognition system For Physically Impairment Peoples V.Veeramanikandan(manikandan.veera97@gmail.com) UG student,department of ECE,Gnanamani College of Technology. R.Anandharaj(anandhrak1@gmail.com)

More information

INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS

INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo Department of Computer Science and Engineering, School of Engineering and Technology,

More information

EECS 433 Statistical Pattern Recognition

EECS 433 Statistical Pattern Recognition EECS 433 Statistical Pattern Recognition Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1 / 19 Outline What is Pattern

More information

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,

More information

Hand Gestures Recognition System for Deaf, Dumb and Blind People

Hand Gestures Recognition System for Deaf, Dumb and Blind People Hand Gestures Recognition System for Deaf, Dumb and Blind People Channaiah Chandana K 1, Nikhita K 2, Nikitha P 3, Bhavani N K 4, Sudeep J 5 B.E. Student, Dept. of Information Science & Engineering, NIE-IT,

More information

OFFLINE CANDIDATE HAND GESTURE SELECTION AND TRAJECTORY DETERMINATION FOR CONTINUOUS ETHIOPIAN SIGN LANGUAGE

OFFLINE CANDIDATE HAND GESTURE SELECTION AND TRAJECTORY DETERMINATION FOR CONTINUOUS ETHIOPIAN SIGN LANGUAGE OFFLINE CANDIDATE HAND GESTURE SELECTION AND TRAJECTORY DETERMINATION FOR CONTINUOUS ETHIOPIAN SIGN LANGUAGE ABADI TSEGAY 1, DR. KUMUDHA RAIMOND 2 Addis Ababa University, Addis Ababa Institute of Technology

More information

Edge Detection Techniques Using Fuzzy Logic

Edge Detection Techniques Using Fuzzy Logic Edge Detection Techniques Using Fuzzy Logic Essa Anas Digital Signal & Image Processing University Of Central Lancashire UCLAN Lancashire, UK eanas@uclan.a.uk Abstract This article reviews and discusses

More information

Error Detection based on neural signals

Error Detection based on neural signals Error Detection based on neural signals Nir Even- Chen and Igor Berman, Electrical Engineering, Stanford Introduction Brain computer interface (BCI) is a direct communication pathway between the brain

More information

Blood Vessel Segmentation for Retinal Images Based on Am-fm Method

Blood Vessel Segmentation for Retinal Images Based on Am-fm Method Research Journal of Applied Sciences, Engineering and Technology 4(24): 5519-5524, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 23, 2012 Accepted: April 30, 2012 Published:

More information

Image processing applications are growing rapidly. Most

Image processing applications are growing rapidly. Most RESEARCH ARTICLE Kurdish Sign Language Recognition System Abdulla Dlshad, Fattah Alizadeh Department of Computer Science and Engineering, University of Kurdistan Hewler, Erbil, Kurdistan Region - F.R.

More information

Edge Detection Techniques Based On Soft Computing

Edge Detection Techniques Based On Soft Computing International Journal for Science and Emerging ISSN No. (Online):2250-3641 Technologies with Latest Trends 7(1): 21-25 (2013) ISSN No. (Print): 2277-8136 Edge Detection Techniques Based On Soft Computing

More information

7 Grip aperture and target shape

7 Grip aperture and target shape 7 Grip aperture and target shape Based on: Verheij R, Brenner E, Smeets JBJ. The influence of target object shape on maximum grip aperture in human grasping movements. Exp Brain Res, In revision 103 Introduction

More information

Gender Based Emotion Recognition using Speech Signals: A Review

Gender Based Emotion Recognition using Speech Signals: A Review 50 Gender Based Emotion Recognition using Speech Signals: A Review Parvinder Kaur 1, Mandeep Kaur 2 1 Department of Electronics and Communication Engineering, Punjabi University, Patiala, India 2 Department

More information

Artificial Intelligence Lecture 7

Artificial Intelligence Lecture 7 Artificial Intelligence Lecture 7 Lecture plan AI in general (ch. 1) Search based AI (ch. 4) search, games, planning, optimization Agents (ch. 8) applied AI techniques in robots, software agents,... Knowledge

More information

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Abstract In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied

More information

International Journal of Software and Web Sciences (IJSWS)

International Journal of Software and Web Sciences (IJSWS) International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0063 ISSN (Online): 2279-0071 International

More information

TURKISH SIGN LANGUAGE RECOGNITION USING HIDDEN MARKOV MODEL

TURKISH SIGN LANGUAGE RECOGNITION USING HIDDEN MARKOV MODEL TURKISH SIGN LANGUAGE RECOGNITION USING HIDDEN MARKOV MODEL Kakajan Kakayev 1 and Ph.D. Songül Albayrak 2 1,2 Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey kkakajan@gmail.com

More information

Contour-based Hand Pose Recognition for Sign Language Recognition

Contour-based Hand Pose Recognition for Sign Language Recognition Contour-based Hand Pose Recognition for Sign Language Recognition Mika Hatano, Shinji Sako, Tadashi Kitamura Graduate School of Engineering, Nagoya Institute of Technology {pia, sako, kitamura}@mmsp.nitech.ac.jp

More information

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Electrical and Computer Engineering Department Volgenau School of Engineering George

More information

Cancer Cells Detection using OTSU Threshold Algorithm

Cancer Cells Detection using OTSU Threshold Algorithm Cancer Cells Detection using OTSU Threshold Algorithm Nalluri Sunny 1 Velagapudi Ramakrishna Siddhartha Engineering College Mithinti Srikanth 2 Velagapudi Ramakrishna Siddhartha Engineering College Kodali

More information

R Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology

R Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology ISSN: 0975-766X CODEN: IJPTFI Available Online through Research Article www.ijptonline.com FACIAL EMOTION RECOGNITION USING NEURAL NETWORK Kashyap Chiranjiv Devendra, Azad Singh Tomar, Pratigyna.N.Javali,

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction 1.1 Motivation and Goals The increasing availability and decreasing cost of high-throughput (HT) technologies coupled with the availability of computational tools and data form a

More information

Speech recognition in noisy environments: A survey

Speech recognition in noisy environments: A survey T-61.182 Robustness in Language and Speech Processing Speech recognition in noisy environments: A survey Yifan Gong presented by Tapani Raiko Feb 20, 2003 About the Paper Article published in Speech Communication

More information

Quality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE

Quality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE Quality Assessment of Human Hand Posture Recognition System Er. ManjinderKaur M.Tech Scholar GIMET Amritsar, Department of CSE mkwahla@gmail.com Astt. Prof. Prabhjit Singh Assistant Professor, Department

More information

Image Enhancement and Compression using Edge Detection Technique

Image Enhancement and Compression using Edge Detection Technique Image Enhancement and Compression using Edge Detection Technique Sanjana C.Shekar 1, D.J.Ravi 2 1M.Tech in Signal Processing, Dept. Of ECE, Vidyavardhaka College of Engineering, Mysuru 2Professor, Dept.

More information

Hierarchical Convolutional Features for Visual Tracking

Hierarchical Convolutional Features for Visual Tracking Hierarchical Convolutional Features for Visual Tracking Chao Ma Jia-Bin Huang Xiaokang Yang Ming-Husan Yang SJTU UIUC SJTU UC Merced ICCV 2015 Background Given the initial state (position and scale), estimate

More information

Sign Language in the Intelligent Sensory Environment

Sign Language in the Intelligent Sensory Environment Sign Language in the Intelligent Sensory Environment Ákos Lisztes, László Kővári, Andor Gaudia, Péter Korondi Budapest University of Science and Technology, Department of Automation and Applied Informatics,

More information

HAND GESTURE RECOGNITION USING ADAPTIVE NETWORK BASED FUZZY INFERENCE SYSTEM AND K-NEAREST NEIGHBOR. Fifin Ayu Mufarroha 1, Fitri Utaminingrum 1*

HAND GESTURE RECOGNITION USING ADAPTIVE NETWORK BASED FUZZY INFERENCE SYSTEM AND K-NEAREST NEIGHBOR. Fifin Ayu Mufarroha 1, Fitri Utaminingrum 1* International Journal of Technology (2017) 3: 559-567 ISSN 2086-9614 IJTech 2017 HAND GESTURE RECOGNITION USING ADAPTIVE NETWORK BASED FUZZY INFERENCE SYSTEM AND K-NEAREST NEIGHBOR Fifin Ayu Mufarroha

More information

Brain Tumor Detection using Watershed Algorithm

Brain Tumor Detection using Watershed Algorithm Brain Tumor Detection using Watershed Algorithm Dawood Dilber 1, Jasleen 2 P.G. Student, Department of Electronics and Communication Engineering, Amity University, Noida, U.P, India 1 P.G. Student, Department

More information

Automatic Classification of Breast Masses for Diagnosis of Breast Cancer in Digital Mammograms using Neural Network

Automatic Classification of Breast Masses for Diagnosis of Breast Cancer in Digital Mammograms using Neural Network IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 11 May 2015 ISSN (online): 2349-784X Automatic Classification of Breast Masses for Diagnosis of Breast Cancer in Digital

More information

ERA: Architectures for Inference

ERA: Architectures for Inference ERA: Architectures for Inference Dan Hammerstrom Electrical And Computer Engineering 7/28/09 1 Intelligent Computing In spite of the transistor bounty of Moore s law, there is a large class of problems

More information

Improved Intelligent Classification Technique Based On Support Vector Machines

Improved Intelligent Classification Technique Based On Support Vector Machines Improved Intelligent Classification Technique Based On Support Vector Machines V.Vani Asst.Professor,Department of Computer Science,JJ College of Arts and Science,Pudukkottai. Abstract:An abnormal growth

More information

Communication Interface for Mute and Hearing Impaired People

Communication Interface for Mute and Hearing Impaired People Communication Interface for Mute and Hearing Impaired People *GarimaRao,*LakshNarang,*Abhishek Solanki,*Kapil Singh, Mrs.*Karamjit Kaur, Mr.*Neeraj Gupta. *Amity University Haryana Abstract - Sign language

More information

3. MANUAL ALPHABET RECOGNITION STSTM

3. MANUAL ALPHABET RECOGNITION STSTM Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012 Kuching, Malaysia, November 21-24, 2012 JAPANESE MANUAL ALPHABET RECOGNITION FROM STILL IMAGES USING A NEURAL NETWORK MODEL

More information

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns

An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns 1. Introduction Vasily Morzhakov, Alexey Redozubov morzhakovva@gmail.com, galdrd@gmail.com Abstract Cortical

More information

Facial Expression Recognition Using Principal Component Analysis

Facial Expression Recognition Using Principal Component Analysis Facial Expression Recognition Using Principal Component Analysis Ajit P. Gosavi, S. R. Khot Abstract Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However,

More information

NAILFOLD CAPILLAROSCOPY USING USB DIGITAL MICROSCOPE IN THE ASSESSMENT OF MICROCIRCULATION IN DIABETES MELLITUS

NAILFOLD CAPILLAROSCOPY USING USB DIGITAL MICROSCOPE IN THE ASSESSMENT OF MICROCIRCULATION IN DIABETES MELLITUS NAILFOLD CAPILLAROSCOPY USING USB DIGITAL MICROSCOPE IN THE ASSESSMENT OF MICROCIRCULATION IN DIABETES MELLITUS PROJECT REFERENCE NO. : 37S0841 COLLEGE BRANCH GUIDE : DR.AMBEDKAR INSTITUTE OF TECHNOLOGY,

More information

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Ritu Verma, Sujeet Tiwari, Naazish Rahim Abstract Tumor is a deformity in human body cells which, if not detected and treated,

More information

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction

Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Hand Gesture Recognition and Speech Conversion for Deaf and Dumb using Feature Extraction Aswathy M 1, Heera Narayanan 2, Surya Rajan 3, Uthara P M 4, Jeena Jacob 5 UG Students, Dept. of ECE, MBITS, Nellimattom,

More information

Natural Scene Statistics and Perception. W.S. Geisler

Natural Scene Statistics and Perception. W.S. Geisler Natural Scene Statistics and Perception W.S. Geisler Some Important Visual Tasks Identification of objects and materials Navigation through the environment Estimation of motion trajectories and speeds

More information

Design and Implementation study of Remote Home Rehabilitation Training Operating System based on Internet

Design and Implementation study of Remote Home Rehabilitation Training Operating System based on Internet IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Design and Implementation study of Remote Home Rehabilitation Training Operating System based on Internet To cite this article:

More information

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis International Journal of Innovative Research in Computer Science & Technology (IJIRCST) ISSN: 2347-5552, Volume-2, Issue-6, November-2014 Classification and Statistical Analysis of Auditory FMRI Data Using

More information

Real Time Hand Gesture Recognition System

Real Time Hand Gesture Recognition System Real Time Hand Gesture Recognition System 1, 1* Neethu P S 1 Research Scholar, Dept. of Information & Communication Engineering, Anna University, Chennai 1* Assistant Professor, Dept. of ECE, New Prince

More information

Automatic Classification of Perceived Gender from Facial Images

Automatic Classification of Perceived Gender from Facial Images Automatic Classification of Perceived Gender from Facial Images Joseph Lemley, Sami Abdul-Wahid, Dipayan Banik Advisor: Dr. Razvan Andonie SOURCE 2016 Outline 1 Introduction 2 Faces - Background 3 Faces

More information

Automated Detection Of Glaucoma & D.R From Eye Fundus Images

Automated Detection Of Glaucoma & D.R From Eye Fundus Images Reviewed Paper Volume 2 Issue 12 August 2015 International Journal of Informative & Futuristic Research ISSN (Online): 2347-1697 Automated Detection Of Glaucoma & D.R Paper ID IJIFR/ V2/ E12/ 016 Page

More information

Recognizing Scenes by Simulating Implied Social Interaction Networks

Recognizing Scenes by Simulating Implied Social Interaction Networks Recognizing Scenes by Simulating Implied Social Interaction Networks MaryAnne Fields and Craig Lennon Army Research Laboratory, Aberdeen, MD, USA Christian Lebiere and Michael Martin Carnegie Mellon University,

More information

7.1 Grading Diabetic Retinopathy

7.1 Grading Diabetic Retinopathy Chapter 7 DIABETIC RETINOPATHYGRADING -------------------------------------------------------------------------------------------------------------------------------------- A consistent approach to the

More information

HAND GESTURE RECOGNITION FOR HUMAN COMPUTER INTERACTION

HAND GESTURE RECOGNITION FOR HUMAN COMPUTER INTERACTION e-issn 2455 1392 Volume 2 Issue 5, May 2016 pp. 241 245 Scientific Journal Impact Factor : 3.468 http://www.ijcter.com HAND GESTURE RECOGNITION FOR HUMAN COMPUTER INTERACTION KUNIKA S. BARAI 1, PROF. SANTHOSH

More information

Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech

Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech Hand Sign to Bangla Speech: A Deep Learning in Vision based system for Recognizing Hand Sign Digits and Generating Bangla Speech arxiv:1901.05613v1 [cs.cv] 17 Jan 2019 Shahjalal Ahmed, Md. Rafiqul Islam,

More information

VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS

VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS Samuele Martelli, Alessio Del Bue, Diego Sona, Vittorio Murino Istituto Italiano di Tecnologia (IIT), Genova

More information

A Hierarchical Artificial Neural Network Model for Giemsa-Stained Human Chromosome Classification

A Hierarchical Artificial Neural Network Model for Giemsa-Stained Human Chromosome Classification A Hierarchical Artificial Neural Network Model for Giemsa-Stained Human Chromosome Classification JONGMAN CHO 1 1 Department of Biomedical Engineering, Inje University, Gimhae, 621-749, KOREA minerva@ieeeorg

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the

More information

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 18: Visual Pre-Processing. Chapters TMB Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 18: Visual Pre-Processing. Reading Assignments: Chapters TMB2 3.3. 1 Low-Level Processing Remember: Vision as a change in representation. At the low-level,

More information

LSA64: An Argentinian Sign Language Dataset

LSA64: An Argentinian Sign Language Dataset LSA64: An Argentinian Sign Language Dataset Franco Ronchetti* 1, Facundo Quiroga* 1, César Estrebou 1, Laura Lanzarini 1, and Alejandro Rosete 2 1 Instituto de Investigación en Informática LIDI, Facultad

More information

10CS664: PATTERN RECOGNITION QUESTION BANK

10CS664: PATTERN RECOGNITION QUESTION BANK 10CS664: PATTERN RECOGNITION QUESTION BANK Assignments would be handed out in class as well as posted on the class blog for the course. Please solve the problems in the exercises of the prescribed text

More information

Local Image Structures and Optic Flow Estimation

Local Image Structures and Optic Flow Estimation Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk

More information

SAPOG Edge Detection Technique GUI using MATLAB

SAPOG Edge Detection Technique GUI using MATLAB SAPOG Edge Detection Technique GUI using MATLAB Poonam Kumari 1, Sanjeev Kumar Gupta 2 Software Engineer, Devansh Softech Consultancy Services Pvt. Ltd., Agra, India 1 Director, Devansh Softech Consultancy

More information

Learning Classifier Systems (LCS/XCSF)

Learning Classifier Systems (LCS/XCSF) Context-Dependent Predictions and Cognitive Arm Control with XCSF Learning Classifier Systems (LCS/XCSF) Laurentius Florentin Gruber Seminar aus Künstlicher Intelligenz WS 2015/16 Professor Johannes Fürnkranz

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information