ANALYSIS OF MOTION-BASED CODING FOR SIGN LANGUAGE VIDEO COMMUNICATION APPLICATIONS
Benjaporn Saksiri*, Supavadee Aramvith**, Chai Phongphanphanee*

*Information System Engineering Laboratory, Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand, Tel: (662), Fax: (662)
**Department of Electrical Engineering, Chulalongkorn University, Bangkok 10330, Thailand, Tel: (662), Fax: (662)

Abstract

Digital video communications have grown exponentially over the last decades. This continuous growth has driven the evolution of several new applications such as video conferencing, mobile video, video streaming over the Internet, and sign language video communication. To transmit video over the Internet or wireless networks, such applications need video coding algorithms so that the video can be compressed to meet bandwidth requirements. In this paper, we focus on the study of motion estimation and compensation in sign language video coding. We discuss and identify approaches for utilizing the motion information of sign language video for efficient sign language video communication applications.

Key Words: Motion Estimation, Motion Compensation, Video Coding, Sign Language Video Communication

1. Introduction

Communication is an important means that people use to convey or exchange messages, news, information, thoughts, and feelings. Good communication helps achieve better understanding and establishes strong relationships; it enables people to live together in society peacefully. Communication is therefore a crucial component of human life: it is fundamental to human survival and instrumental for people to achieve their objectives of working and living together in society.
However, for the hearing-impaired it is difficult to communicate with others through the conventional telephone system, because their hearing disabilities are a barrier to spoken conversation. Telecommunications technology has advanced rapidly, but unfortunately it has been designed primarily for hearing people, not for the hearing-impaired. They are thus barred from achieving a successful life as defined by the Independent Living Philosophy [1]. Sign language is a visual language used by hearing-impaired people to communicate. A common device used by hearing-impaired people to communicate is the text telephone. Unfortunately, the speed of text conversation is limited by typing ability and is at least 10 times slower than sign language [2]. There exist video conferencing tools such as Microsoft NetMeeting [3], but they are not designed to be easily used by the hearing-impaired. There are several difficulties in sending video information over standard telephone lines, because video signals contain much more information than other kinds of signals. Thus, two options exist: either send the video through a higher-bandwidth medium or reduce the size of the transmitted video data. Compression is mainly achieved by exploiting the redundancy present in the data and is defined in several video coding standards, e.g., H.263 [4] and MPEG-4 [5]. The nature of sign language video differs from typical video sequences in that the motion is concentrated on the signer's hands and arms. Thus, to apply video coding to sign language video, one needs to identify the signer's hands, arms, and possibly face regions so that those regions can be coded with higher quality than other regions. Previous research in sign language video coding includes the identification of important regions of sign language video sequence images [6]. S.
Akyol et al. [7] use image cues of color and motion for fast detection of the signing person's hands. In this paper, we focus on the study of motion estimation and compensation in sign language video coding. We discuss and identify approaches for utilizing the motion information of sign language video for efficient sign language video communication applications. The rest of the paper is organized as follows. In Section 2, we briefly introduce motion information in sign language video coding. In Section 3, we describe
the experimental system and results. The conclusions and future work are then given in Section 4.

2. Motion Information in Sign Language Video Coding

To exploit motion information in a sign language video codec, the key elements are motion estimation and motion compensation. A video sequence consists of a series of frames. To achieve compression, the temporal redundancy between adjacent frames can be exploited: a frame is selected as a reference, and subsequent frames are predicted from that reference using a technique known as motion estimation. Motion estimation (ME) is the estimation of the parameters of a model that describes the temporal variations, usually between consecutive frames. To encode the differences between frames, the encoder needs to take the motion of the frames into account; motion estimation is usually done on a block basis. A sequence of images contains an intrinsic, intuitive, and simple form of redundancy: two successive images are very similar. This simple concept is called temporal redundancy. The search for a proper scheme to exploit temporal redundancy is what separates the compression of still pictures from the compression of image sequences. The displacement of objects between successive frames is estimated (motion estimation), and the resulting motion information is then exploited for efficient interframe coding (motion compensation). Consequently, the prediction error as well as the motion representation are transmitted instead of the frame itself. Interframe coding can be considered a special case of predictive coding where the prediction is based on pixel values from the previous frame. For instance, in portions of a scene with small motion, pixels are precisely predicted from the pixels at the same location in the previous frame. This no longer holds in scenes with large motion.
In this case, pixels in the previous frame, spatially displaced by the appropriate motion vector, are much more efficient for prediction. This prediction is called motion-compensated (MC) prediction. Temporal redundancy can clearly be exploited to perform video compression, and this has resulted in many motion-based video compression strategies. Simple frame-differencing strategies assume the average motion is small and simply compress the pixel-by-pixel difference between two frames. The motion estimation problem, in fact, consists of two related sub-problems:

- identifying the moving object boundaries, so-called motion segmentation
- estimating the motion parameters of each moving object, so-called motion estimation in the strict sense

Most motion estimation algorithms assume the following conditions:

- objects are rigid bodies, so object deformation can be neglected for at least a few nearby frames
- objects move in a translational movement for at least a few frames
- illumination is spatially and temporally uniform, so the observed object intensities are unchanged under movement
- occlusion of one object by another and uncovered background are neglected

The purpose of motion estimation differs with the application. In image sequence analysis, the motion information is used to extract useful features of the image sequence. In image sequence interpolation and restoration, adaptive filtering exploits the motion information to avoid blurring moving objects. Finally, in video coding, motion information is used to reduce the temporal redundancy between successive frames. For each block (of N×N pixels) in the current frame, we need to search for the best-matching block in the reference frame.
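The simple frame-differencing strategy described above can be sketched directly (an illustrative NumPy example with toy data, not the paper's implementation):

```python
import numpy as np

def frame_difference(current, reference):
    """Pixel-by-pixel prediction error when the previous frame itself is
    used as the prediction, i.e. a zero-motion assumption."""
    return current.astype(np.int32) - reference.astype(np.int32)

# Two toy 4x4 frames: a bright "object" (value 200) shifts right by one pixel.
ref = np.zeros((4, 4), np.uint8); ref[1:3, 0:2] = 200
cur = np.zeros((4, 4), np.uint8); cur[1:3, 1:3] = 200
err = frame_difference(cur, ref)
# Even this one-pixel translation leaves large residual energy to encode:
print(int((err ** 2).mean()))  # -> 10000
```

The large residual of even a one-pixel shift is exactly what motion-compensated prediction removes.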
The block-matching algorithm (BMA), the most commonly used in the standardized block-based coding schemes, searches for the best match to a block of the current frame among the candidate blocks inside a search window (M×M pixels) with search range ±d in the previous frame. The full search (FS) algorithm finds the optimal matching block by exhaustively checking all candidate blocks within the search window; however, its computational cost is high. To overcome this problem, many fast block-matching algorithms have been developed, such as 2-D logarithmic search (LOGS) [8], three-step search (TSS) [9], new three-step search (NTSS) [10], four-step search (FSS) [11], diamond search (DS) [12], and block-based gradient descent search (BBGDS) [13]. These fast algorithms use different search patterns and search strategies to find the motion vector with far fewer search points than the FS algorithm. In this study, we perform experiments based on full search and TSS only. The distortion measures usually used to determine the best match are the mean square error (MSE) and the mean absolute error (MAE), defined in eq. (1) and eq. (2), respectively. Figure 1 shows the related parameters of the block-matching algorithm in motion estimation.

MSE(dx, dy) = (1/N^2) * Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} [C(x+i, y+j) - R(x+i+dx, y+j+dy)]^2    (1)

MAE(dx, dy) = (1/N^2) * Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} |C(x+i, y+j) - R(x+i+dx, y+j+dy)|    (2)

where C is the current frame, R is the reference frame, (x, y) is the top-left corner of the current block, and (dx, dy) is the candidate displacement.
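A minimal full-search block matcher using the MAE criterion of eq. (2) might look as follows (an illustrative NumPy sketch with assumed toy frames, not the VCDEMO implementation):

```python
import numpy as np

def full_search(cur, ref, x, y, N=8, d=7):
    """Return the motion vector (dx, dy) minimising MAE, eq. (2), for the
    NxN block of `cur` at top-left (x, y), searching +/-d in `ref`."""
    block = cur[y:y+N, x:x+N].astype(np.int16)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + N > ref.shape[0] or rx + N > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[ry:ry+N, rx:rx+N].astype(np.int16)
            cost = np.abs(block - cand).mean()  # MAE of eq. (2)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv

# Toy frames: the current block is a copy of a reference block displaced
# by 2 pixels right and 1 down, so the estimated vector points back to it.
ref = np.random.default_rng(0).integers(0, 256, (32, 32), np.uint8)
cur = np.zeros_like(ref)
cur[9:17, 10:18] = ref[8:16, 8:16]
print(full_search(cur, ref, 10, 9))  # -> (-2, -1)
```

With d = 7 this evaluates (2d+1)^2 = 225 candidate positions per block, which is the exhaustive cost the fast algorithms try to avoid.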
3. Experimental System and Results

We use the standard sequence "silent" and a Thai Sign Language (TSL) sequence in this experiment. The silent sequence is in QCIF (Quarter Common Intermediate Format), a videoconferencing format of 176 pixels by 144 lines at 30 frames/second. The TSL sequence is QCIF coded at 15 fps. The software tool is VCDEMO [14]. The input image sequence must be converted to the YUV video format. Our experiments use the full search and three-step search techniques. Motion is estimated on all frames of the sign language sequence.

Figure 1 Related parameters of the block-matching algorithm in motion estimation.
Figure 2 Example video frames (frames 1, 16, 23) from the silent sequence demonstrating the motion of sign language.
Figure 3 Example video frames (frames 3, 39, 162) from the TSL sequence demonstrating the motion of sign language.
Figure 4 Motion information of the silent sequence: a) original image, b) frame difference, c) motion field, d) motion-compensated frame difference.
Figure 5 Motion information of the TSL sequence: a) original image, b) frame difference, c) motion field, d) motion-compensated frame difference.
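The three-step search used in the experiments can be sketched as follows (an illustrative NumPy sketch of the TSS pattern [9] with assumed toy frames, not the VCDEMO implementation):

```python
import numpy as np

def block_mae(cur, ref, x, y, dx, dy, N):
    """MAE between the current block and the displaced reference block."""
    b = cur[y:y+N, x:x+N].astype(np.int32)
    c = ref[y+dy:y+dy+N, x+dx:x+dx+N].astype(np.int32)
    return np.abs(b - c).mean()

def three_step_search(cur, ref, x, y, N=8, step=4):
    cx, cy = 0, 0                        # current centre of the search
    while step >= 1:
        best = (block_mae(cur, ref, x, y, cx, cy, N), (cx, cy))
        for dy in (-step, 0, step):      # the 8 neighbours at this step
            for dx in (-step, 0, step):
                if (dx, dy) == (0, 0):
                    continue
                nx, ny = cx + dx, cy + dy
                if (0 <= y + ny and 0 <= x + nx and
                        y + ny + N <= ref.shape[0] and x + nx + N <= ref.shape[1]):
                    cost = block_mae(cur, ref, x, y, nx, ny, N)
                    if cost < best[0]:
                        best = (cost, (nx, ny))
        (cx, cy) = best[1]
        step //= 2                       # halve the step: 4 -> 2 -> 1
    return (cx, cy)

# Toy frames: the content shifts 4 pixels right, found in the first round.
ref = np.random.default_rng(0).integers(0, 256, (32, 32), np.int16)
cur = np.zeros_like(ref)
cur[:, 4:] = ref[:, :-4]
print(three_step_search(cur, ref, 12, 8))  # -> (-4, 0)
```

TSS evaluates at most 1 + 3 × 8 = 25 positions per block versus 225 for full search with d = 7, which is the complexity reduction discussed with figure 7, at the cost of possibly missing the true minimum.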
Figure 6 Comparison between the variance of the frame difference and the motion-compensated frame difference using full search for the a) silent and b) TSL sequences.

Figures 4 and 5 show the original image, motion field, frame difference, and motion-compensated frame difference for the silent and TSL sequences, respectively. The results indicate that the amount of information needed for coding and transmission of the motion-compensated frame difference is much lower than for the frame difference. Thus, it is more efficient to utilize motion information to code sign language video. This conclusion is confirmed quantitatively in figures 6 and 7. Figure 6 shows the comparison between the variance of the frame difference and the motion-compensated frame difference using full search for the silent and TSL sequences. The same settings are used for the sequences shown in figure 7 using three-step search. Since the variance of the frame difference represents the signal energy level, and the variance of the motion-compensated frame difference represents the signal energy level after motion compensation, the variance can be used as an indicator of the number of bits needed to encode that frame. The complexity of the block-matching algorithm is reduced when using three-step search, while the accuracy is also reduced, as can be seen from the increased variance values, i.e., an implied increase in the number of bits used, for the motion-compensated frame difference in figure 7 compared to the full search algorithm in figure 6.

4. Conclusions and Future Works

In this paper, we focus on the study of motion estimation and compensation in sign language video coding.
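The variance comparison of figures 6 and 7 can be reproduced in miniature (an illustrative NumPy sketch; the frames and the exactly known motion are assumptions, not the experimental data):

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32)).astype(np.int32)
cur = np.roll(ref, shift=3, axis=1)       # the whole frame moves 3 px right

fd = cur - ref                            # plain frame difference
mc = cur - np.roll(ref, shift=3, axis=1)  # perfectly motion-compensated

# The residual energy (variance) drops to zero when the motion is
# compensated exactly, which is why variance proxies the bits needed.
print(fd.var() > mc.var())  # -> True
```

In practice the estimated vectors are imperfect, so the MC variance is small but nonzero, and a weaker estimator such as TSS yields a larger MC variance than full search, as figure 7 shows.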
The experimental results indicate that using motion information in sign language video coding, i.e., motion estimation and motion compensation, can significantly reduce the bit rate, because the motion of a sign language sequence is concentrated in a well-defined region around the signer's hands and arms. Thus, the motion information can be used to indicate the region of interest and serve as an input to a bit allocation scheme that allocates more bits to improve the spatial and temporal quality around the face, hands, and arms, improving the subjective quality of sign language video communication. Future work includes the implementation and evaluation of a priority and bit allocation scheme for sign language video coding. It is also necessary to optimize complexity in terms of computation, storage, and memory requirements.

5. Acknowledgement

We thank the Ratchasuda foundation, which supported the education and research of Thai sign language video communication.

Figure 7 Comparison between the variance of the frame difference and the motion-compensated frame difference using three-step search for the a) silent and b) TSL sequences.
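The motion-driven bit allocation proposed as future work could, as a purely hypothetical sketch (the function, QP values, and threshold are all assumptions, not the paper's scheme), assign a lower quantiser, and hence more bits, to high-motion blocks:

```python
import numpy as np

def qp_map(mv_magnitudes, base_qp=28, roi_qp=20, thresh=1.0):
    """Per-block quantisation parameters: blocks whose motion-vector
    magnitude exceeds `thresh` (signer's hands/arms) get the lower QP."""
    mv = np.asarray(mv_magnitudes, dtype=float)
    return np.where(mv > thresh, roi_qp, base_qp)

# 3x3 grid of blocks: the centre blocks move, the border blocks are static.
mags = [[0, 0, 0],
        [0, 4, 3],
        [0, 0, 0]]
print(qp_map(mags))
```

Here the moving blocks receive QP 20 and the static background QP 28, so the region of interest is coded with higher quality at the same total bit budget.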
References

[1] L. Theeratorn, "Behavior and satisfaction of people with hearing disabilities toward telecommunication technology," M.A. Thesis, Ratchasuda College, Mahidol University, 2002.
[2] H. Nariman, "Automatic Segmentation of the Face and Hands in Sign Language Video Sequences," Technical Report, Department of Electrical and Electronic Engineering, Adelaide University, SA 5005, Australia, July.
[3] Microsoft NetMeeting.
[4] ITU-T Draft Recommendation H.263, "Video coding for low bit-rate communication," May.
[5] ISO/IEC JTC1/SC29/WG11 Doc. N4668, "MPEG-4 Overview V.21," Jeju, March.
[6] L. J. Muir, "Video Telephony for the Deaf: Analysis and Development of an Optimised Video Compression Product," ACM Multimedia Conference, Juan Les Pins, France, 1-6 December.
[7] S. Akyol and P. Alvarado, "Finding Relevant Image Content for mobile Sign Language Recognition."
[8] J. Jain and A. Jain, "Displacement measurement and its application in interframe image coding," IEEE Trans. Commun., vol. COM-29, Dec.
[9] T. Koga, K. Iinuma, A. Hirano, Y. Iijima, and T. Ishiguro, "Motion compensated interframe coding for video conferencing," Proc. Nat. Telecommun. Conf., New Orleans, LA, Nov. 29-Dec.
[10] R. Li, B. Zeng, and M. L. Liou, "A new three-step search algorithm for block motion estimation," IEEE Trans. Circuits Syst. Video Technol., vol. 4, Aug.
[11] L. M. Po and W. C. Ma, "A novel four-step search algorithm for fast block motion estimation," IEEE Trans. Circuits Syst. Video Technol., vol. 6, June.
[12] S. Zhu and K. K. Ma, "A new diamond search algorithm for fast block-matching motion estimation," IEEE Trans. Image Processing, vol. 9, Feb.
[13] L. K. Liu and E. Feig, "A block-based gradient descent search algorithm for block motion estimation in video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 6, Aug.
[14] VcDemo Manual and Exercises, Delft University of Technology, Information and Communication Theory Group.
Comparative Analysis of Canny and Prewitt Edge Detection Techniques used in Image Processing Nisha 1, Rajesh Mehra 2, Lalita Sharma 3 PG Scholar, Dept. of ECE, NITTTR, Chandigarh, India 1 Associate Professor,
More informationSummary Table Voluntary Product Accessibility Template. Supporting Features. Supports. Supports. Supports. Supports
Date: March 31, 2016 Name of Product: ThinkServer TS450, TS550 Summary Table Voluntary Product Accessibility Template Section 1194.21 Software Applications and Operating Systems Section 1194.22 Web-based
More informationSummary Table Voluntary Product Accessibility Template. Not Applicable
PLANTRONICS VPAT 14 Product: Wireless Hearing Aid Sub-Compatible (HAS-C) Headsets Summary Table Voluntary Product Accessibility Template Section 1194.21 Software Applications and Operating Systems Section
More informationNote: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.
Date: 28 SEPT 2016 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s SoundStation Duo against the criteria described in Section
More informationInternational Journal of Advance Engineering and Research Development EARLY DETECTION OF GLAUCOMA USING EMPIRICAL WAVELET TRANSFORM
Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 5, Issue 1, January -218 e-issn (O): 2348-447 p-issn (P): 2348-646 EARLY DETECTION
More informationAn Avatar-Based Weather Forecast Sign Language System for the Hearing-Impaired
An Avatar-Based Weather Forecast Sign Language System for the Hearing-Impaired Juhyun Oh 1, Seonggyu Jeon 1, Minho Kim 2, Hyukchul Kwon 2, and Iktae Kim 3 1 Technical Research Institute, Korean Broadcasting
More informationImplementation of Spectral Maxima Sound processing for cochlear. implants by using Bark scale Frequency band partition
Implementation of Spectral Maxima Sound processing for cochlear implants by using Bark scale Frequency band partition Han xianhua 1 Nie Kaibao 1 1 Department of Information Science and Engineering, Shandong
More informationSUMMARY TABLE VOLUNTARY PRODUCT ACCESSIBILITY TEMPLATE
Date: 2 November 2010 Updated by Alan Batt Name of Product: Polycom CX600 IP Phone for Microsoft Lync Company contact for more Information: Ian Jennings, ian.jennings@polycom.com Note: This document describes
More informationA Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning
A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning Fatima Al Dhaen Ahlia University Information Technology Dep. P.O. Box
More informationTitle. Author(s)Aoki, Naofumi. Issue Date Doc URL. Type. Note. File Information. A Lossless Steganography Technique for G.
Title A Lossless Steganography Technique for G.711 Telepho Author(s)Aoki, Naofumi Proceedings : APSIPA ASC 29 : Asia-Pacific Signal Citationand Conference: 27477 Issue Date 294 Doc URL http://hdl.handle.net/2115/3969
More informationAVR Based Gesture Vocalizer Using Speech Synthesizer IC
AVR Based Gesture Vocalizer Using Speech Synthesizer IC Mr.M.V.N.R.P.kumar 1, Mr.Ashutosh Kumar 2, Ms. S.B.Arawandekar 3, Mr.A. A. Bhosale 4, Mr. R. L. Bhosale 5 Dept. Of E&TC, L.N.B.C.I.E.T. Raigaon,
More informationANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES
ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES P.V.Rohini 1, Dr.M.Pushparani 2 1 M.Phil Scholar, Department of Computer Science, Mother Teresa women s university, (India) 2 Professor
More informationInternational Journal of Advance Engineering and Research Development. Gesture Glove for American Sign Language Representation
Scientific Journal of Impact Factor (SJIF): 4.14 International Journal of Advance Engineering and Research Development Volume 3, Issue 3, March -2016 Gesture Glove for American Sign Language Representation
More informationREVIEW ON ARRHYTHMIA DETECTION USING SIGNAL PROCESSING
REVIEW ON ARRHYTHMIA DETECTION USING SIGNAL PROCESSING Vishakha S. Naik Dessai Electronics and Telecommunication Engineering Department, Goa College of Engineering, (India) ABSTRACT An electrocardiogram
More informationCONTACTLESS HEARING AID DESIGNED FOR INFANTS
CONTACTLESS HEARING AID DESIGNED FOR INFANTS M. KULESZA 1, B. KOSTEK 1,2, P. DALKA 1, A. CZYŻEWSKI 1 1 Gdansk University of Technology, Multimedia Systems Department, Narutowicza 11/12, 80-952 Gdansk,
More informationSignWave: Human Perception of Sign Language Video Quality as Constrained by Mobile Phone Technology
SignWave: Human Perception of Sign Language Video Quality as Constrained by Mobile Phone Technology Anna Cavender, Erika A. Rice, Katarzyna M. Wilamowska Computer Science and Engineering University of
More informationDeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation
DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation Biyi Fang Michigan State University ACM SenSys 17 Nov 6 th, 2017 Biyi Fang (MSU) Jillian Co (MSU) Mi Zhang
More informationOn the feasibility of speckle reduction in echocardiography using strain compounding
Title On the feasibility of speckle reduction in echocardiography using strain compounding Author(s) Guo, Y; Lee, W Citation The 2014 IEEE International Ultrasonics Symposium (IUS 2014), Chicago, IL.,
More informationSign Language Interpretation in Broadcasting Service
ITU Workshop on Making Media Accessible to all: The options and the economics (Geneva, Switzerland, 24 (p.m.) 25 October 2013) Sign Language Interpretation in Broadcasting Service Takayuki Ito, Dr. Eng.
More informationAvailable online at ScienceDirect. Procedia Technology 24 (2016 )
Available online at www.sciencedirect.com ScienceDirect Procedia Technology 24 (2016 ) 1068 1073 International Conference on Emerging Trends in Engineering, Science and Technology (ICETEST - 2015) Improving
More informationDesigning Caption Production Rules Based on Face, Text and Motion Detections
Designing Caption Production Rules Based on Face, Text and Motion Detections C. Chapdelaine *, M. Beaulieu, L. Gagnon R&D Department, Computer Research Institute of Montreal (CRIM), 550 Sherbrooke West,
More informationA dynamic approach for optic disc localization in retinal images
ISSN 2395-1621 A dynamic approach for optic disc localization in retinal images #1 Rutuja Deshmukh, #2 Karuna Jadhav, #3 Nikita Patwa 1 deshmukhrs777@gmail.com #123 UG Student, Electronics and Telecommunication
More informationSummary Table Voluntary Product Accessibility Template. Supporting Features. Not Applicable- Supports with Exception. Not Applicable.
Voyager 3200 UC, Voyager 5200 UC Summary Table Section 1194.21 Software Applications and Operating Systems Section 1194.22 Web-based internet information and applications Section 1194.23 Telecommunications
More informationAND9020/D. Adaptive Feedback Cancellation 3 from ON Semiconductor APPLICATION NOTE INTRODUCTION
Adaptive Feedback Cancellation 3 from ON Semiconductor APPLICATION NOTE INTRODUCTION This information note describes the feedback cancellation feature provided in ON Semiconductor s latest digital hearing
More informationSUMMARY TABLE VOLUNTARY PRODUCT ACCESSIBILITY TEMPLATE
Date: 1 August 2009 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s Polycom CX200, CX700 Desktop IP Telephones against
More informationKeywords Fuzzy Logic, Fuzzy Rule, Fuzzy Membership Function, Fuzzy Inference System, Edge Detection, Regression Analysis.
Volume 6, Issue 3, March 2016 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Modified Fuzzy
More informationSign Language to English (Slate8)
Sign Language to English (Slate8) App Development Nathan Kebe El Faculty Advisor: Dr. Mohamad Chouikha 2 nd EECS Day April 20, 2018 Electrical Engineering and Computer Science (EECS) Howard University
More informationSummary Table Voluntary Product Accessibility Template. Supports. Not Applicable. Not Applicable- Not Applicable- Supports
PLANTRONICS VPAT 1 Product: Telephony Call Center Hearing Aid Compatible (HAC) Headsets Summary Table Section 1194.21 Software Applications and Operating Systems Section 1194.22 Web-based internet information
More informationAccess to Internet for Persons with Disabilities and Specific Needs
Access to Internet for Persons with Disabilities and Specific Needs For ITU WCG (Resolution 1344) Prepared by Mr. Kyle Miers Chief Executive Deaf Australia 15 January 2016 Page 1 of 5 EXECUTIVE SUMMARY
More informationIncorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011
Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011 I. Purpose Drawing from the profile development of the QIBA-fMRI Technical Committee,
More informationJitter-aware time-frequency resource allocation and packing algorithm
Jitter-aware time-frequency resource allocation and packing algorithm The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As
More informationTTY/TDD Minimum Performance Specification
GPP C.S00-B Version.0 May 0 TTY/TDD Minimum Performance Specification 0 GPP GPP and its Organizational Partners claim copyright in this document and individual Organizational Partners may copyright and
More informationSummary Table Voluntary Product Accessibility Template. Criteria Supporting Features Remarks and explanations
Plantronics VPAT 6 Product: Non-Adjustable Gain Hearing Aid Compatible (HAC) Handsets Summary Table Voluntary Product Accessibility Template Section 1194.21 Software Applications and Operating Systems
More informationHEARING LOSS TECHNOLOGY
1 HEARING LOSS TECHNOLOGY Where have we been? Where are we headed? Laura E. Plummer, MA, CRC, ATP Sr. Rehabilitation Technologist / Wistech Director Stout Vocational Rehabilitation Institute UW Stout 2
More informationVIDEO SALIENCY INCORPORATING SPATIOTEMPORAL CUES AND UNCERTAINTY WEIGHTING
VIDEO SALIENCY INCORPORATING SPATIOTEMPORAL CUES AND UNCERTAINTY WEIGHTING Yuming Fang, Zhou Wang 2, Weisi Lin School of Computer Engineering, Nanyang Technological University, Singapore 2 Department of
More information1971: First National Conference on Television for the Hearing Impaired is held in Nashville, Tennessee. The Captioning Center, now Media Access Group
Captioning Timeline The majority of these captioning timeline highlights were found at and copied from the Described and Captioned Media Program website. In 1927, when sound was introduced to the silent
More informationSonic Spotlight. Binaural Coordination: Making the Connection
Binaural Coordination: Making the Connection 1 Sonic Spotlight Binaural Coordination: Making the Connection Binaural Coordination is the global term that refers to the management of wireless technology
More informationiclicker+ Student Remote Voluntary Product Accessibility Template (VPAT)
iclicker+ Student Remote Voluntary Product Accessibility Template (VPAT) Date: May 22, 2017 Product Name: iclicker+ Student Remote Product Model Number: RLR15 Company Name: Macmillan Learning, iclicker
More information