Filipino Sign Language Recognition using Manifold Learning


Ed Peter Cabalfin
Computer Vision & Machine Intelligence Group
Department of Computer Science
College of Engineering
University of the Philippines Diliman

Rowena Cristina L. Guevara
Digital Signal Processing Laboratory
Electrical and Electronics Engineering Institute
College of Engineering
University of the Philippines Diliman

Prospero C. Naval, Jr.
Computer Vision & Machine Intelligence Group
Department of Computer Science
College of Engineering
University of the Philippines Diliman

ABSTRACT

Sign language is at the core of a progressive view of deafness as a culture and of deaf people as a cultural and linguistic minority. An in-depth study of Filipino Sign Language (FSL) is crucial in understanding the Deaf communities and the social issues surrounding them. Computer-aided recognition of sign language can help bridge the gap between signers and non-signers. In this paper, we propose Isomap manifold learning for the automatic recognition of FSL signs. Videos of isolated signs are converted into manifolds and compiled into a library of known FSL signs. Dynamic Time Warping (DTW) is then used to match the nearest library manifold to the query manifold of an unknown FSL sign.

1. INTRODUCTION

The World Health Organization (WHO) defines hearing impairment as total or partial loss of hearing in one or both ears; the level of impairment can be mild, moderate, severe or profound. WHO defines deafness as the complete loss of the ability to hear in one or both ears [17]. The 2000 Census on disability [13] reported 121,000 Filipinos with total or partial hearing loss, a fraction of an estimated one million Filipinos with disabilities in the Philippines. A primary binding force for the Filipino Deaf is the language they use: Filipino Sign Language (FSL). Sign language is at the core of the progressive view of deafness as a culture, and of deaf people as a cultural and linguistic minority [1].
Over half of the Deaf respondents in a study done by the National Sign Language Committee declared Filipino Sign Language as their mode of communication [3]. Unfortunately, many people do not know that there is a natural sign language used by the Deaf communities ([4], [2]). Interpreting organizations or programs in the 15 regions of the Philippines are very difficult to find, and there is an unequal distribution of education programs that use sign language [3]. Additional challenges come from the lack of documentation of regional variations of the signs ([14], [15], [11], [2]). FSL is a key component in understanding the Deaf communities and the social issues surrounding them. Automatic analysis of FSL will make linguistic research easier, and computer-aided interpretation will help bridge the gap between signers and non-signers. It is hoped that this research can contribute to these goals.

2. FILIPINO SIGN LANGUAGE

Sign language is the natural language of the Deaf; users of sign language are called signers. Sign language is a visual language: signers use their hands, arms, shoulders, torso, neck and face to communicate [12]. One misconception about sign language is that there is only one universal, international sign language. This is incorrect: there are at least a hundred recognized sign languages in the world [8]. This study focuses on the sign language used by the Deaf communities in the Philippines. Much like spoken languages, numerous variations of FSL have been observed in the field. To reduce the scope of work, only traditional signs were used and only native signers from Metro Manila were considered. Traditional signs are defined as signs that are used by a large part of the communities and have been around for decades. In contrast, emerging signs are defined as signs that have come into use only in the last five years or so [14]. It is only recently that documentation of indigenous signs and their origins has started [14], [2].
In spoken languages, the basic unit of utterance is called a phoneme. The basic unit in sign language is also called a phoneme, even though it is not based on sound [7]. Although signs are often decomposed into five parameters (see 2.2), there is no consensus yet on sign language phonemes [16]. In addition, during conversations, non-sign gestures may be mixed in freely with signs, and facial expressions and body posture also play a large role [16], [12]. Signs are labeled with words called glosses. A gloss is a word borrowed from a written or spoken language to designate a particular sign; it is a linguistic tool, and while the word used is often the closest meaning of the sign, it is not a direct translation. When a phrase is used as a gloss, hyphens are inserted in place of spaces. In the sign linguistics literature, the gloss is often capitalized to distinguish it from regular use of the word [14], [11]. This paper follows that convention.

2.1 The Signing Space

The signing space is a three-dimensional space from about the mid-torso to just above the head, extending forward from the chest to about one arm's length away, and extending about half an arm's length on both sides. Previous sign language research has established that during most signs, the hands and arms do not go beyond this space [14].

Figure 1: Signing Space. (a) front view (b) quarter view

Movement can be grouped into two general categories: gross arm movement (tracking the path of the arms) and internal movement (changes in hand shape). An initial inventory of Filipino Sign Language observed over ninety hand shapes, approximately twenty hand locations, and six palm orientations [14]. Liddell and Johnson further grouped these parameters into segments; one or more parameters occurring together form one segment. Movement segments (M) are portions of the sign where the hands and arms are in motion, or where the hand shape is in transition. Hold segments (H) are portions of the sign where there is no motion or where the hand shape is in a steady state. Signs are then composed of one or more segments [14]. For example, HMH means a Hold segment followed by a Movement segment followed by another Hold segment. Segment forms observed in FSL include H, M, MH, HMH, and MHMH [14].

One or both hands may be used in signing, depending on the sign and the sign language. In two-handed signs where only one hand is moving, the moving hand is called the dominant hand (DH) and the stationary hand is called the non-dominant hand (NDH) or the passive hand.
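The observed segment forms (H, M, MH, HMH, MHMH) make up a small, fixed vocabulary, so membership can be checked mechanically. A minimal sketch of such a check (the helper name and its use are our illustration, not part of the paper):

```python
import re

# Segment forms observed in FSL (Hold = H, Movement = M) [14].
OBSERVED_FORMS = re.compile(r"H|M|MH|HMH|MHMH")

def is_observed_form(segments: str) -> bool:
    """Return True if a sign's segment string matches a form observed in FSL."""
    return OBSERVED_FORMS.fullmatch(segments) is not None

print(is_observed_form("HMH"))   # True
print(is_observed_form("HH"))    # False
```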
Two-handed signs where both hands move along the same path and use the same hand shapes are sometimes called symmetrical signs. There are no left-handed or right-handed signs; one-handed signs may be performed with either the left or the right hand, and either hand may be used as the dominant hand in two-handed signs. In practice, right-handed people usually use their right hand for one-handed signs, finger-spelling, and as the DH in two-handed signs; left-handed people usually do the reverse.

2.2 Internal Structure of Sign Language

Liddell and Johnson model sign language with five parameters [14]:

1. hand shape (or HS): described by which fingers and/or thumb are selected, extended or flexed
2. palm orientation (or just orientation): described by where the palm is facing
3. hand location (or just location): described by the position of the hand relative to the face, head, shoulders, arm and torso
4. movement: described by the motion of the fingers, thumb, hands and arms
5. non-manual signals (NMS): which include facial expressions and body posture

Figure 2: Examples of signs showing facial expressions and body posture. (a) GIVE-HARD-WORK (b) COOK

3. MANIFOLD LEARNING

3.1 Dimensionality Reduction

Many applications in computer science deal with complex data sets with many factors, variables or features. Analysis of data with many dimensions (features) is difficult and, past a certain point, algorithms fail to work. Reducing the number of dimensions is often done to simplify analysis and to reduce computational effort. Of course, we would like to preserve the underlying patterns and interactions of the variables as much as possible. The goal of dimensionality reduction, then, is to find a good approximation of the data with fewer dimensions. Principal Components Analysis (PCA) is one popular algorithm. Unfortunately, a major limitation of PCA is the requirement that the data lie on a linear subspace. Manifold methods do not have this limitation [19], [18].
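As a concrete illustration of dimensionality reduction, the sketch below runs PCA (via SVD of the centered data) on synthetic points lying near a line in 3-D; the data and the specific numbers are our illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points near a 1-D line embedded in 3-D, plus small noise.
t = rng.uniform(-1, 1, size=(100, 1))
X = t @ np.array([[2.0, -1.0, 0.5]]) + 0.01 * rng.normal(size=(100, 3))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
print(explained)                  # first component dominates for this data

# 1-D approximation of the 3-D data: project onto the first component.
X1 = Xc @ Vt[0]
```

Because the points truly lie near a one-dimensional linear subspace, one component captures almost all the variance; PCA fails in exactly this way to summarize data lying on a *curved* low-dimensional shape, which is the gap manifold methods fill.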
3.2 Manifolds and Manifold Learning

A manifold is a mathematical structure that, even when embedded in a high-dimensional space, can be approximated locally by a low-dimensional one. To illustrate, take the globe as an example: while a globe is a three-dimensional object (a sphere), maps are two-dimensional (planes), so we can approximate a three-dimensional shape using two-dimensional shapes. The goal of manifold learning, then, given a data set described by many variables (high dimensions), is to look for a smaller set of variables (low dimensions) that can approximate the original data set.

Figure 3: (A) geodesic distance in blue, (B) shortest path in red, (C) Isomap of the data

One well-known manifold learning algorithm is Isomap [18].

3.3 Isomap

The Isomap algorithm extracts the embedded lower-dimensional subspaces by extending classical Multidimensional Scaling (MDS). Fig 3 helps illustrate the algorithm, which can be summarized as follows:

1. First, a neighborhood graph is constructed: each data point is connected to its nearest neighbors by edges weighted by the Euclidean distance between the points. Neighbors are chosen either by a maximum-distance threshold or as the k nearest neighbors; edges to points beyond the threshold are removed.

2. Second, the shortest paths between all pairs of points are computed; in effect, the geodesic distance is approximated by the shortest-path distance. Both Floyd's algorithm and Dijkstra's algorithm have been used to find the shortest-path distances, depending on the application [18].

3. Lastly, classical MDS is applied to the shortest-path distances to extract the embedded lower-dimensional space [18]. In the case of Fig 3, the embedded subspace is a two-dimensional surface.

Figure 4: PANGIT sign and Isomap

As it turns out, while human motion is complex and multidimensional, Isomap has been used successfully to simplify the analysis and classification of human motion [5], [6]. See Fig 4 and 5 for examples of FSL signs and their corresponding Isomap manifolds. We have now reduced complex, multidimensional signs into something easier to work with.

4. DYNAMIC TIME WARPING

When FSL signs are performed, there is considerable variation between samples, even when performed by the same person. Without affecting the meaning, signs can be performed quickly or slowly, and parts of a sign may be performed at varying speeds. How can we compare data sets that vary in length and in which portions may be slightly faster or slower? Dynamic Time Warping (DTW) deals with exactly these issues.
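The three Isomap steps described in 3.3 can be sketched in a compact, unoptimized form. This is an illustrative sketch, not the authors' implementation: it is pure NumPy, uses the k-nearest-neighbor variant of step 1, and uses a cubic-time Floyd pass (as in [18]) for clarity rather than speed:

```python
import numpy as np

def isomap(X, k=10, d_out=2):
    """Minimal Isomap: k-NN graph -> shortest paths (Floyd) -> classical MDS."""
    n = len(X)
    # Pairwise Euclidean distances between all data points.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Step 1: neighborhood graph -- keep edges only to the k nearest neighbors.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    nearest = np.argsort(D, axis=1)[:, 1:k + 1]   # column 0 is the point itself
    for i in range(n):
        G[i, nearest[i]] = D[i, nearest[i]]
        G[nearest[i], i] = D[i, nearest[i]]       # keep the graph symmetric

    # Step 2: approximate geodesics with Floyd's all-pairs shortest paths.
    for m in range(n):
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

    # Step 3: classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (G ** 2) @ J                   # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:d_out]             # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Example: points on a 1-D curve (a helix) embedded in 3-D.
t = np.linspace(0, 2 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X, k=6, d_out=2)
print(Y.shape)  # (60, 2)
```

In practice the neighborhood graph must be connected (otherwise some shortest-path distances stay infinite), which is why the choice of k or of the distance threshold matters.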
Figure 5: BRAVE sign and Isomap

DTW is a non-linear mapping from one time series to another, aligning two similar but locally out-of-phase data sets. It has been successfully applied to various tasks such as classification and anomaly detection in time-series data, speech recognition and data mining [9], [10]. The DTW algorithm can be summarized as follows:

1. Given two time series, Q and C, we construct a distance matrix: the distance between every pair of points is calculated and stored.

2. Starting from time t = 0, a contiguous path of elements through the distance matrix is found that minimizes the accumulated distance. Specifically, the warping cost is minimized:

DTW(Q, C) = min sqrt( sum_{k=1}^{K} w_k )

where w_k is the k-th element of the warping path (see Fig 6). The warping path can be found using dynamic programming to evaluate the accumulated distance.

5. METHODOLOGY

5.1 Data Collection

Three native FSL signers were recorded individually while performing FSL signs. Each signer performed sixty 2-handed signs and fifty-seven 1-handed signs, for a total of 117 unique FSL signs. Only traditional signs were used. Signers were seated in front of a plain, black background. The video camera was placed on a tripod approximately 160 cm away from the signer, with the zoom adjusted so that the signing space was captured. Two lights were placed approximately 160 cm away on either side to reduce shadows and uniformly illuminate the signers. All signers wore plain black, short-sleeved shirts. The video camera was set to record in full color at 640x480 pixels and 30 frames per second (fps). Each sign was performed in isolation, that is, with no context and not as part of a sentence or discourse. Each FSL sign was performed as close to the citation form as possible. We define the neutral position (Fig 8) as arms on the side and hands on the lap, with a blank facial expression, facing forward. Signers begin at the neutral position, perform the sign, and then return to the neutral position.
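The warping-cost minimization of Section 4 can be sketched with the standard dynamic-programming recurrence. This is a minimal sketch: the squared-distance choice for w_k, the optional band parameter, and the toy sine signals are our assumptions for illustration:

```python
import numpy as np

def dtw(Q, C, band=None):
    """DTW(Q, C) = min over warping paths of sqrt(sum of w_k), where each w_k
    is a squared point-to-point distance along the path. `band` is an optional
    Sakoe-Chiba band half-width (in matrix cells)."""
    n, m = len(Q), len(C)
    acc = np.full((n + 1, m + 1), np.inf)   # accumulated-distance matrix
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if band is not None and abs(i - j) > band:
                continue                     # cells outside the band are excluded
            w = (Q[i - 1] - C[j - 1]) ** 2   # element of the distance matrix
            acc[i, j] = w + min(acc[i - 1, j],      # contiguous moves only:
                                acc[i, j - 1],      # insertion, deletion,
                                acc[i - 1, j - 1])  # or diagonal match
    return np.sqrt(acc[n, m])

t = np.linspace(0, 2 * np.pi, 60)
Q = np.sin(t)
C_similar = np.sin(t - 0.3)   # same shape, locally out of phase
C_other = np.cos(t)           # genuinely different signal
print(dtw(Q, C_similar, band=6) < dtw(Q, C_other, band=6))  # True
```

The band both speeds up the computation and prevents pathological warpings, which is why constrained DTW is the usual choice in practice [9].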
Figure 6: (A) Q and C are similar but slightly out of phase (B) DTW matching the data sets (C) the warping path through the distance matrix

To reduce computational effort, constraints are placed on the distance matrix; only elements of the distance matrix falling within the constraint are considered in the warping path. The two most common constraints are the Sakoe-Chiba Band and the Itakura Parallelogram [9].

Figure 7: DTW Constraints. (a) Sakoe-Chiba Band (b) Itakura Parallelogram

The signs were recorded in groups of 10, with about 3 seconds of the neutral position between signs. Thirteen groups of signs were recorded, with some signs appearing in more than one group.

Figure 8: The neutral position

5.2 Pre-Processing and Editing

Video was scaled to 160x120 pixels and converted to grayscale. Converting the video to grayscale simplifies the representation: each pixel carries only intensity information. Simple background subtraction was done by setting any pixel below a threshold to black. This removes most of the background and foreground (the shirt of the signer), leaving only the head and arms.

5.3 Training and Testing

The Isomap manifolds of the signs were generated and stored. The manifolds were zero-centered about the mean and normalized by the standard deviation to simplify comparisons. Input to Isomap was either an individual sign (a 3-5 second clip) or a group of signs (a 60-70 second clip). For individual signs, the original video was edited to contain only one sign.
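The pre-processing of 5.2 can be sketched per frame as below. The array shapes match the paper's 160x120 frames, but the grayscale weights and the threshold value are our assumptions for illustration:

```python
import numpy as np

def preprocess(frame_rgb, threshold=40):
    """RGB frame -> grayscale intensity -> crude background subtraction."""
    # Grayscale: keep only intensity information (ITU-R BT.601 luma weights).
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])
    # Background subtraction: pixels darker than the threshold become black,
    # removing the black backdrop and the signer's black shirt.
    gray[gray < threshold] = 0
    return gray

# A random stand-in for one 160x120 video frame (height x width x RGB).
frame = np.random.randint(0, 256, size=(120, 160, 3)).astype(float)
out = preprocess(frame)
print(out.shape)  # (120, 160)
```

After this step every surviving pixel is either black or brighter than the threshold, so only the head and arms remain for Isomap to work on.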

Figure 9: Image Pre-processing. (a) original (b) pre-processed

For a group of signs, the original video was edited to contain 10 signs in sequence, each sign separated by approximately 2 seconds of the signer in the neutral position. Isomap can use either a k-nearest-neighbor or an epsilon neighborhood; we chose k-nearest neighbors, with the value k = 10 obtained through experiments. These manifolds are then used as input to DTW for matching. We used the Sakoe-Chiba Band, one of the most commonly used constraints in DTW, with a 10% constraint as suggested by the literature [9]. The accumulated distance (S) is normalized over the length of the warping path; low values of S indicate a close match, with a value of zero (0) indicating an exact match. Fig 11 shows the S values of the LOLO manifold compared to other manifolds. The labels MJ, RM and RW indicate which of the three native signers performed the sign.

Figure 10: Example warping path for BRAVE match

Figure 11: Sample S values for LOLO

6. RESULTS

For the same FSL sign performed by different signers, the average S value from DTW is 1.11 with σ = 0.67, with a maximum of 1.52 and a minimum of
For different FSL signs performed by different signers, the average S value from DTW is 1.28 with σ = 0.03, with a maximum of 1.33 and a minimum of
Our result mirrors the problem discovered by other sign language recognition research: sign language recognition across different signers is error-prone. This is partially explained by the dataset used, a large portion of which consists of minimal pairs. In sign language linguistics, a minimal pair is a pair of signs that differ in only one parameter (see 2.2). Minimal pairs are, by definition, already very similar, possibly to the point where they lead to false positives.

7. CONCLUSION

In this paper, we described a recognition system for FSL based on Isomap manifolds. Our significant finding is that Isomap is good at discriminating large arm and body movements and weak at detecting hand shape against the large movement of the arms and body. This implies that Isomap manifold-based recognition requires additional processing for the analysis of hand shape and facial expression.

Figure 12: FSL recognition flowchart

8. REFERENCES

[1] Rafaelito M. Abat and Liza B. Martinez, "The History of Sign Language in the Philippines: Piecing Together the Puzzle," In 9th Philippine Linguistics Congress, 2006, Diliman, Quezon City.
[2] Yvette S. Apurado and Rommel L. Agravante, "The Phonology and Regional Variation of Filipino Sign Language: Considerations for Language Policy," In 9th Philippine Linguistics Congress, 2006, Diliman, Quezon City.
[3] Julius Andrada and Raphael Domingo, "Key Findings for Language Planning from the National Sign Language Committee (Status Report on the Use of Sign Language in the Philippines)," In 9th Philippine Linguistics Congress, 2006, Diliman, Quezon City.
[4] Marie Therese A.P. Bustos and Rowella B. Tanjusay, "Filipino Sign Language in Deaf Education: Deaf and Hearing Perspectives," In 9th Philippine Linguistics Congress, 2006, Diliman, Quezon City.
[5] Jaron Blackburn and Eraldo Ribeiro, "Human Motion Recognition Using Isomap and Dynamic Time Warping," In Second Workshop on Human Motion, Oct 2007, Rio de Janeiro, Brazil, Lecture Notes in Computer Science 4814, pp.
[6] Heeyoul Choi, Brandon Paulson, and Tracy Hammond, "Gesture Recognition Based on Manifold Learning," Lecture Notes in Computer Science 5342, pp.
[7] Philippe Dreuw, Carol Neidle, Vassilis Athitsos, Stan Sclaroff, and Hermann Ney, "Benchmark Databases for Video-Based Automatic Sign Language Recognition," In International Conference on Language Resources and Evaluation, May 2008, Marrakech, Morocco, dreuw/database.php.
[8] Raymond G. Gordon Jr. (editor), Ethnologue: Languages of the World, 15th ed., SIL International, 2005, Dallas, Texas.
[9] Chotirat Ann Ratanamahatana and Eamonn Keogh, "Three Myths about Dynamic Time Warping," In SIAM International Conference on Data Mining, April 2005, Newport Beach, CA.
[10] Eamonn Keogh and Michael Pazzani, "Scaling up Dynamic Time Warping to Massive Datasets," In 3rd European Conference on Principles and Practice of Knowledge Discovery in Databases, 1999, Prague, Czech Republic.
[11] Liza B. Martinez, Personal Communication, June 2008.
[12] Sylvie C.W. Ong and Surendra Ranganath, "Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning," IEEE Trans. Pattern Analysis & Machine Intelligence, June 2005, vol. 27, no. 6, pp.
[13] Phil. National Statistics Office, "Persons with Disability Comprised 1.23 Percent of the Total Population," Special Release No. 150, March.
[14] Phil. Deaf Resource Center and Phil. Federation of the Deaf, "Part 1: Understanding Structure," An Introduction to Filipino Sign Language, 2004, Phil. Deaf Resource Center.
[15] Phil. Deaf Resource Center and Phil. Federation of the Deaf, "Part 2: Traditional and Emerging Signs," An Introduction to Filipino Sign Language, 2004, Phil. Deaf Resource Center.
[16] Christian Philipp Vogler, "American Sign Language Recognition: Reducing the Complexity of the Task with Phoneme-Based Modeling and Parallel Hidden Markov Models," PhD thesis, University of Pennsylvania, 2003.
[17] World Health Organization, "Deafness and Hearing Impairment," Fact Sheet N300, March 2006, World Health Organization.
[18] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, Dec 2000, vol. 290, no. 5500, pp.
[19] Sam T. Roweis and Lawrence K. Saul, "Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds," Science, Dec 2000, vol. 290, no. 5500, pp.


More information

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE) Vol 5, Issue 3, March 2018 Gesture Glove

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE) Vol 5, Issue 3, March 2018 Gesture Glove Gesture Glove [1] Kanere Pranali, [2] T.Sai Milind, [3] Patil Shweta, [4] Korol Dhanda, [5] Waqar Ahmad, [6] Rakhi Kalantri [1] Student, [2] Student, [3] Student, [4] Student, [5] Student, [6] Assistant

More information

Available online at ScienceDirect. Procedia Technology 24 (2016 )

Available online at   ScienceDirect. Procedia Technology 24 (2016 ) Available online at www.sciencedirect.com ScienceDirect Procedia Technology 24 (2016 ) 1068 1073 International Conference on Emerging Trends in Engineering, Science and Technology (ICETEST - 2015) Improving

More information

Accuracy and validity of Kinetisense joint measures for cardinal movements, compared to current experimental and clinical gold standards.

Accuracy and validity of Kinetisense joint measures for cardinal movements, compared to current experimental and clinical gold standards. Accuracy and validity of Kinetisense joint measures for cardinal movements, compared to current experimental and clinical gold standards. Prepared by Engineering and Human Performance Lab Department of

More information

Bapuji Institute of Engineering and Technology, India

Bapuji Institute of Engineering and Technology, India Volume 4, Issue 1, January 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Segmented

More information

3. MANUAL ALPHABET RECOGNITION STSTM

3. MANUAL ALPHABET RECOGNITION STSTM Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012 Kuching, Malaysia, November 21-24, 2012 JAPANESE MANUAL ALPHABET RECOGNITION FROM STILL IMAGES USING A NEURAL NETWORK MODEL

More information

Gesture Recognition using Marathi/Hindi Alphabet

Gesture Recognition using Marathi/Hindi Alphabet Gesture Recognition using Marathi/Hindi Alphabet Rahul Dobale ¹, Rakshit Fulzele², Shruti Girolla 3, Seoutaj Singh 4 Student, Computer Engineering, D.Y. Patil School of Engineering, Pune, India 1 Student,

More information

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE

EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE EARLY STAGE DIAGNOSIS OF LUNG CANCER USING CT-SCAN IMAGES BASED ON CELLULAR LEARNING AUTOMATE SAKTHI NEELA.P.K Department of M.E (Medical electronics) Sengunthar College of engineering Namakkal, Tamilnadu,

More information

Microphone Input LED Display T-shirt

Microphone Input LED Display T-shirt Microphone Input LED Display T-shirt Team 50 John Ryan Hamilton and Anthony Dust ECE 445 Project Proposal Spring 2017 TA: Yuchen He 1 Introduction 1.2 Objective According to the World Health Organization,

More information

IDENTIFICATION OF REAL TIME HAND GESTURE USING SCALE INVARIANT FEATURE TRANSFORM

IDENTIFICATION OF REAL TIME HAND GESTURE USING SCALE INVARIANT FEATURE TRANSFORM Research Article Impact Factor: 0.621 ISSN: 2319507X INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK IDENTIFICATION OF REAL TIME

More information

EECS 433 Statistical Pattern Recognition

EECS 433 Statistical Pattern Recognition EECS 433 Statistical Pattern Recognition Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1 / 19 Outline What is Pattern

More information

Teacher/Class: Ms. Brison - ASL II. Standards Abilities Level 2. Page 1 of 5. Week Dates: Oct

Teacher/Class: Ms. Brison - ASL II. Standards Abilities Level 2. Page 1 of 5. Week Dates: Oct Teacher/Class: Ms. Brison - ASL II Week Dates: Oct. 10-13 Standards Abilities Level 2 Objectives Finger Spelling Finger spelling of common names and places Basic lexicalized finger spelling Numbers Sentence

More information

A Survey on Hand Gesture Recognition for Indian Sign Language

A Survey on Hand Gesture Recognition for Indian Sign Language A Survey on Hand Gesture Recognition for Indian Sign Language Miss. Juhi Ekbote 1, Mrs. Mahasweta Joshi 2 1 Final Year Student of M.E. (Computer Engineering), B.V.M Engineering College, Vallabh Vidyanagar,

More information

PDF hosted at the Radboud Repository of the Radboud University Nijmegen

PDF hosted at the Radboud Repository of the Radboud University Nijmegen PDF hosted at the Radboud Repository of the Radboud University Nijmegen The following full text is a publisher's version. For additional information about this publication click this link. http://hdl.handle.net/2066/86496

More information

Facial Expression Recognition Using Principal Component Analysis

Facial Expression Recognition Using Principal Component Analysis Facial Expression Recognition Using Principal Component Analysis Ajit P. Gosavi, S. R. Khot Abstract Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However,

More information

Analyzing Well-Formedness of Syllables in Japanese Sign Language

Analyzing Well-Formedness of Syllables in Japanese Sign Language Analyzing Well-Formedness of Syllables in Japanese Sign Language Satoshi Yawata Makoto Miwa Yutaka Sasaki Daisuke Hara Toyota Technological Institute 2-12-1 Hisakata, Tempaku-ku, Nagoya, Aichi, 468-8511,

More information

Figure 1: The relation between xyz and HSV. skin color in HSV color space from the extracted skin regions. At each frame, our system tracks the face,

Figure 1: The relation between xyz and HSV. skin color in HSV color space from the extracted skin regions. At each frame, our system tracks the face, Extraction of Hand Features for Recognition of Sign Language Words Nobuhiko Tanibata tanibata@cv.mech.eng.osaka-u.ac.jp Yoshiaki Shirai shirai@cv.mech.eng.osaka-u.ac.jp Nobutaka Shimada shimada@cv.mech.eng.osaka-u.ac.jp

More information

Extraction of Blood Vessels and Recognition of Bifurcation Points in Retinal Fundus Image

Extraction of Blood Vessels and Recognition of Bifurcation Points in Retinal Fundus Image International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 5, August 2014, PP 1-7 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) Extraction of Blood Vessels and

More information

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 9 (September 2016), PP.67-72 Development of novel algorithm by combining

More information

Monster Walk Stand with your feet slightly closer than shoulder-width apart in an athletic stance. Loop an elastic band around your ankles.

Monster Walk Stand with your feet slightly closer than shoulder-width apart in an athletic stance. Loop an elastic band around your ankles. Off-season Lower-Body Tennis Exercises Research conducted on elite tennis players shows that lower-body strength is the same on both the left and right sides. Therefore, lower-body training for tennis

More information

Tumor cut segmentation for Blemish Cells Detection in Human Brain Based on Cellular Automata

Tumor cut segmentation for Blemish Cells Detection in Human Brain Based on Cellular Automata Tumor cut segmentation for Blemish Cells Detection in Human Brain Based on Cellular Automata D.Mohanapriya 1 Department of Electronics and Communication Engineering, EBET Group of Institutions, Kangayam,

More information

Teacher/Class: Ms. Brison - ASL II Week Dates: March Standards Abilities Level 2. Page 1 of 5

Teacher/Class: Ms. Brison - ASL II Week Dates: March Standards Abilities Level 2. Page 1 of 5 Teacher/Class: Ms. Brison - ASL II Week Dates: March 6-10 Standards Abilities Level 2 Objectives Finger Spelling Finger spelling of common names and places Basic lexicalized finger spelling Numbers Sentence

More information

ABSTRACT I. INTRODUCTION

ABSTRACT I. INTRODUCTION 2018 IJSRSET Volume 4 Issue 2 Print ISSN: 2395-1990 Online ISSN : 2394-4099 National Conference on Advanced Research Trends in Information and Computing Technologies (NCARTICT-2018), Department of IT,

More information

Translation of Sign Language Into Text Using Kinect for Windows v2

Translation of Sign Language Into Text Using Kinect for Windows v2 Translation of Sign Language Into Text Using Kinect for Windows v2 Preeti Amatya, Kateryna Sergieieva, Gerrit Meixner UniTyLab Heilbronn University Heilbronn, Germany emails: preetiamatya@gmail.com, kateryna.sergieieva@hs-heilbronn.de,

More information

Analysis of Visual Properties in American Sign Language

Analysis of Visual Properties in American Sign Language Analysis of Visual Properties in American Sign Language Rain G. Bosworth Karen R. Dobkins Dept of Psychology University of California, San Diego La Jolla, CA Charles E. Wright Dept of Cognitive Science

More information

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand

More information

Teacher/Class: Ms. Brison - ASL II. Standards Abilities Level 2. Page 1 of 5. Week Dates: Aug 29-Sept2

Teacher/Class: Ms. Brison - ASL II. Standards Abilities Level 2. Page 1 of 5. Week Dates: Aug 29-Sept2 Teacher/Class: Ms. Brison - ASL II Week Dates: Aug 29-Sept2 Standards Abilities Level 2 Objectives Finger Spelling Finger spelling of common names and places Basic lexicalized finger spelling Numbers Sentence

More information

JOAN NASH DIMITRIS METAXAS

JOAN NASH DIMITRIS METAXAS Proceedings of the Language and Logic Workshop, Formal Approaches to Sign Languages, European Summer School in Logic, Language, and Information (ESSLLI '09), Bordeaux, France, July 20-31, 2009. A Method

More information

Brain Tumor segmentation and classification using Fcm and support vector machine

Brain Tumor segmentation and classification using Fcm and support vector machine Brain Tumor segmentation and classification using Fcm and support vector machine Gaurav Gupta 1, Vinay singh 2 1 PG student,m.tech Electronics and Communication,Department of Electronics, Galgotia College

More information

A STATISTICAL PATTERN RECOGNITION PARADIGM FOR VIBRATION-BASED STRUCTURAL HEALTH MONITORING

A STATISTICAL PATTERN RECOGNITION PARADIGM FOR VIBRATION-BASED STRUCTURAL HEALTH MONITORING A STATISTICAL PATTERN RECOGNITION PARADIGM FOR VIBRATION-BASED STRUCTURAL HEALTH MONITORING HOON SOHN Postdoctoral Research Fellow ESA-EA, MS C96 Los Alamos National Laboratory Los Alamos, NM 87545 CHARLES

More information

Director of Testing and Disability Services Phone: (706) Fax: (706) E Mail:

Director of Testing and Disability Services Phone: (706) Fax: (706) E Mail: Angie S. Baker Testing and Disability Services Director of Testing and Disability Services Phone: (706)737 1469 Fax: (706)729 2298 E Mail: tds@gru.edu Deafness is an invisible disability. It is easy for

More information

Learning Utility for Behavior Acquisition and Intention Inference of Other Agent

Learning Utility for Behavior Acquisition and Intention Inference of Other Agent Learning Utility for Behavior Acquisition and Intention Inference of Other Agent Yasutake Takahashi, Teruyasu Kawamata, and Minoru Asada* Dept. of Adaptive Machine Systems, Graduate School of Engineering,

More information

A Kinematic Assessment of Knee Prosthesis from Fluoroscopy Images

A Kinematic Assessment of Knee Prosthesis from Fluoroscopy Images Memoirs of the Faculty of Engineering, Kyushu University, Vol. 68, No. 1, March 2008 A Kinematic Assessment of Knee Prosthesis from Fluoroscopy Images by Mohammad Abrar HOSSAIN *, Michihiko FUKUNAGA and

More information

Early Detection of Lung Cancer

Early Detection of Lung Cancer Early Detection of Lung Cancer Aswathy N Iyer Dept Of Electronics And Communication Engineering Lymie Jose Dept Of Electronics And Communication Engineering Anumol Thomas Dept Of Electronics And Communication

More information

Sign Language Recognition using Webcams

Sign Language Recognition using Webcams Sign Language Recognition using Webcams Overview Average person s typing speed Composing: ~19 words per minute Transcribing: ~33 words per minute Sign speaker Full sign language: ~200 words per minute

More information

COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION. University of Southern California (USC), Los Angeles, CA, USA

COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION. University of Southern California (USC), Los Angeles, CA, USA COMBINING CATEGORICAL AND PRIMITIVES-BASED EMOTION RECOGNITION M. Grimm 1, E. Mower 2, K. Kroschel 1, and S. Narayanan 2 1 Institut für Nachrichtentechnik (INT), Universität Karlsruhe (TH), Karlsruhe,

More information

Skin color detection for face localization in humanmachine

Skin color detection for face localization in humanmachine Research Online ECU Publications Pre. 2011 2001 Skin color detection for face localization in humanmachine communications Douglas Chai Son Lam Phung Abdesselam Bouzerdoum 10.1109/ISSPA.2001.949848 This

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the

More information

Comparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate

Comparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate , pp.57-61 http://dx.doi.org/10.14257/astl.2015.107.14 Comparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate Yong-Ki Kim 1, Jong Gwan Lim 2, Mi-Hye Kim *3

More information

LSA64: An Argentinian Sign Language Dataset

LSA64: An Argentinian Sign Language Dataset LSA64: An Argentinian Sign Language Dataset Franco Ronchetti* 1, Facundo Quiroga* 1, César Estrebou 1, Laura Lanzarini 1, and Alejandro Rosete 2 1 Instituto de Investigación en Informática LIDI, Facultad

More information

Visual Recognition of Isolated Swedish Sign Language Signs

Visual Recognition of Isolated Swedish Sign Language Signs Visual Recognition of Isolated Swedish Sign Language Signs 1 CVAP/CAS, Saad Akram1 Jonas Beskow2 Hedvig Kjellstro m1 2 KTH, Stockholm, Sweden Speech, Music and Hearing, KTH, Stockholm, Sweden saadua,beskow,hedvig@kth.se

More information

Principals of Object Perception

Principals of Object Perception Principals of Object Perception Elizabeth S. Spelke COGNITIVE SCIENCE 14, 29-56 (1990) Cornell University Summary Infants perceive object by analyzing tree-dimensional surface arrangements and motions.

More information

Sign Language Recognition System Using SIFT Based Approach

Sign Language Recognition System Using SIFT Based Approach Sign Language Recognition System Using SIFT Based Approach Ashwin S. Pol, S. L. Nalbalwar & N. S. Jadhav Dept. of E&TC, Dr. BATU Lonere, MH, India E-mail : ashwin.pol9@gmail.com, nalbalwar_sanjayan@yahoo.com,

More information

INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS

INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS AND KNN CLASSIFIERS Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo Department of Computer Science and Engineering, School of Engineering and Technology,

More information

Question 2. The Deaf community has its own culture.

Question 2. The Deaf community has its own culture. Question 1 The only communication mode the Deaf community utilizes is Sign Language. False The Deaf Community includes hard of hearing people who do quite a bit of voicing. Plus there is writing and typing

More information

Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation

Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation International Journal of Engineering and Advanced Technology (IJEAT) ISSN: 2249 8958, Volume-5, Issue-5, June 2016 Detection of Glaucoma and Diabetic Retinopathy from Fundus Images by Bloodvessel Segmentation

More information

Facial Expression Biometrics Using Tracker Displacement Features

Facial Expression Biometrics Using Tracker Displacement Features Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,

More information

A Smart Texting System For Android Mobile Users

A Smart Texting System For Android Mobile Users A Smart Texting System For Android Mobile Users Pawan D. Mishra Harshwardhan N. Deshpande Navneet A. Agrawal Final year I.T Final year I.T J.D.I.E.T Yavatmal. J.D.I.E.T Yavatmal. Final year I.T J.D.I.E.T

More information

OBJECTIVE BAKE ASSESSMENT USING IMAGE ANALYSIS AND ARTIFICIAL INTELLIGENCE

OBJECTIVE BAKE ASSESSMENT USING IMAGE ANALYSIS AND ARTIFICIAL INTELLIGENCE OBJECTIVE BAKE ASSESSMENT USING IMAGE ANALYSIS AND ARTIFICIAL INTELLIGENCE L. G. C. Hamey 1,2, J. C-H. Yeh 1,2 and C. Ng 1,3 1 Cooperative Research Centre for International Food Manufacture and Packaging

More information

CHAPTER 6 HUMAN BEHAVIOR UNDERSTANDING MODEL

CHAPTER 6 HUMAN BEHAVIOR UNDERSTANDING MODEL 127 CHAPTER 6 HUMAN BEHAVIOR UNDERSTANDING MODEL 6.1 INTRODUCTION Analyzing the human behavior in video sequences is an active field of research for the past few years. The vital applications of this field

More information

arxiv: v1 [cs.lg] 4 Feb 2019

arxiv: v1 [cs.lg] 4 Feb 2019 Machine Learning for Seizure Type Classification: Setting the benchmark Subhrajit Roy [000 0002 6072 5500], Umar Asif [0000 0001 5209 7084], Jianbin Tang [0000 0001 5440 0796], and Stefan Harrer [0000

More information

Cancer Cells Detection using OTSU Threshold Algorithm

Cancer Cells Detection using OTSU Threshold Algorithm Cancer Cells Detection using OTSU Threshold Algorithm Nalluri Sunny 1 Velagapudi Ramakrishna Siddhartha Engineering College Mithinti Srikanth 2 Velagapudi Ramakrishna Siddhartha Engineering College Kodali

More information

Two Themes. MobileASL: Making Cell Phones Accessible to the Deaf Community. Our goal: Challenges: Current Technology for Deaf People (text) ASL

Two Themes. MobileASL: Making Cell Phones Accessible to the Deaf Community. Our goal: Challenges: Current Technology for Deaf People (text) ASL Two Themes MobileASL: Making Cell Phones Accessible to the Deaf Community MobileASL AccessComputing Alliance Advancing Deaf and Hard of Hearing in Computing Richard Ladner University of Washington ASL

More information

Hand Gestures Recognition System for Deaf, Dumb and Blind People

Hand Gestures Recognition System for Deaf, Dumb and Blind People Hand Gestures Recognition System for Deaf, Dumb and Blind People Channaiah Chandana K 1, Nikhita K 2, Nikitha P 3, Bhavani N K 4, Sudeep J 5 B.E. Student, Dept. of Information Science & Engineering, NIE-IT,

More information

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations

Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Unsupervised MRI Brain Tumor Detection Techniques with Morphological Operations Ritu Verma, Sujeet Tiwari, Naazish Rahim Abstract Tumor is a deformity in human body cells which, if not detected and treated,

More information

NMF-Density: NMF-Based Breast Density Classifier

NMF-Density: NMF-Based Breast Density Classifier NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.

More information

Evaluation of Interactive Systems. Anthropometry & Anatomy

Evaluation of Interactive Systems. Anthropometry & Anatomy Evaluation of Interactive Systems Anthropometry & Anatomy Caroline Appert - 2018/2019 References and Inspirations [1] Halla Olafsdottir s slides and discussions [2] http://researchguides.library.tufts.edu/

More information

BTF. Bi-directional Texture Function (BTF) BTF BTF BTF BTF

BTF. Bi-directional Texture Function (BTF) BTF BTF BTF BTF BTF ( ) ( ) ( ) ( ) Bi-directional Texture Function (BTF) BTF BTF BTF BTF 3 2 2 BTF Summary We developed a efficient BTF compression and reproducing technique based on a dichromatic reflection model. For

More information

Sign Language MT. Sara Morrissey

Sign Language MT. Sara Morrissey Sign Language MT Sara Morrissey Introduction Overview Irish Sign Language Problems for SLMT SLMT Data MaTrEx for SLs Future work Introduction (1) Motivation SLs are poorly resourced and lack political,

More information

HANDSHAPE ASSIMILATION IN ASL FINGERSPELLING

HANDSHAPE ASSIMILATION IN ASL FINGERSPELLING HANDSHAPE ASSIMILATION IN ASL FINGERSPELLING ULNAR DIGIT FLEXION AND SELECTED FINGERS Jonathan Keane, Diane Brentari, Jason Riggle University of Chicago Societas Linguistica Europa 2012, 29 August 01 September

More information

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION 1 R.NITHYA, 2 B.SANTHI 1 Asstt Prof., School of Computing, SASTRA University, Thanjavur, Tamilnadu, India-613402 2 Prof.,

More information

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation The Computer Assistance Hand Gesture Recognition system For Physically Impairment Peoples V.Veeramanikandan(manikandan.veera97@gmail.com) UG student,department of ECE,Gnanamani College of Technology. R.Anandharaj(anandhrak1@gmail.com)

More information

Sign Language Interpretation Using Pseudo Glove

Sign Language Interpretation Using Pseudo Glove Sign Language Interpretation Using Pseudo Glove Mukul Singh Kushwah, Manish Sharma, Kunal Jain and Anish Chopra Abstract The research work presented in this paper explores the ways in which, people who

More information