4 User-friendly information presentation


The start of NHK's Hybridcast is one step in our work to enhance broadcasting services. At the same time, it is important to develop the means for all viewers, including people with disabilities, the elderly, and non-native Japanese speakers, to receive and enjoy its content. We are conducting research on using information technology to deliver the benefits of broadcasting to all people in forms that are appropriate for them. In our research on user-friendly presentation of information, we studied tactile and haptic presentation methods for visually impaired people: we evaluated a method to facilitate understanding of 2D information and a method to present 3D shape information by giving multipoint stimuli to the fingers. For hearing-impaired people, we upgraded our Japanese sign-language CG translation system for weather reports. We refined the speech recognition technology used for closed captioning of information programs, improving the recognition rate to 95.9%. We also developed an algorithm that uses news scripts to automatically correct recognition errors so that closed-caption services can be offered economically by local broadcasting stations. In our research on speech and audio signal processing for the elderly, we upgraded the equipment that adjusts program sound at the receiver. We refined our speech synthesis technology toward automatic reading services for broadcast scripts and data broadcasting texts and started research on speech processing technology to enable various emotional expressions. In our research on natural language processing for barrier-free services, we developed technologies to improve the efficiency of the rewriting process in the news conversion assistance system, which converts news into easy Japanese for non-native Japanese speakers in Japan, and continued our research on technology to automatically convert programs into easy Japanese.
We also developed a natural language analysis system to categorize viewers' opinions on broadcast programs. In our research on content retrieval and recommendation, we built a content map connecting programs through various relations by analyzing program description texts and studied a method to give reasons for retrieving and recommending programs. The program retrieval technology has been incorporated into a Hybridcast application. We also researched "Video bank" to assist in the retrieval and processing of raw video footage by adding metadata useful for video production to the footage. The analysis technology we developed was used for managing raw video footage at a local station. In our research on estimating the psychological state of viewers, we conducted experiments to clarify the cognitive characteristics of, and measure brain activities related to, the sensation of depth caused by high-resolution video such as Super Hi-Vision. We also continued our research on analyzing psychological states while watching video by measuring the brain activities of viewers, and on advanced technologies to prevent negative effects of video on the human body.

4.1 User-friendly information presentation

NHK STRL is researching user-friendly information media that will allow people with vision or hearing impairments to get more information from broadcasts.

Haptic display technology for nonverbal information

We are researching technologies to convey information that is difficult to convey in words, such as diagrams and the shapes of artworks, to people with visual impairments by using 2D and 3D information that can be apprehended through the tactile and kinesthetic senses. Our work on tactile presentation of 2D information includes a method to convey an understanding of diagrams and graphs to people with visual impairments by vibrating a tactile display.
With this method, we found that vibration per line segment, in addition to vibration per facet, can reduce the time required to grasp spatial positions in diagrams and line charts. For the powered mechanical leading method, which conveys the overall composition of a diagram by guiding the fingers, we derived conditions on the leading speed that reduce errors in spatial position recognition. We also developed a method to present 3D information using a multi-point force presentation method that gives stimuli to the fingers (Figure 1). We conducted subjective evaluations with visually impaired people and objective evaluations using the length of finger tracks as an evaluation indicator, in order to identify how the number of stimulus points affects cognition of the ridge line. The results showed that using four stimulus points is preferable (1). We also evaluated how the degrees of freedom of the stimulus points, when force is applied in the x, y and z directions, affect tactile cognition of curved surfaces. The results showed that at least one stimulus point needs to have three directional degrees of freedom. This result can be used as a design guideline for building presentation equipment. Part of this research was conducted in cooperation with the University of Tokyo.

Figure 1. Multi-point force presentation method

22 NHK STRL ANNUAL REPORT 2013

Sign-language CG translation technology for weather information

We are researching technology to automatically translate Japanese weather reports (text) into sign-language computer graphics. Continuing our work from FY 2012 on improving translation accuracy through example-based and statistical machine translation of clauses and phrases, we upgraded our Japanese sign-language CG translation system. We prototyped a translation system in which the presentation speed of the sign-language CG is adjustable so that the CG can be played in synchronization with the original weather report. The system was exhibited at the NHK STRL Open House. We also developed a system to release the Japanese and sign-language dictionary used in our translation technologies on NHK Online, allowing anyone to access the service (Figure 2). This system will make the results of our studies helpful for people learning sign language and will make it easier for us to gather opinions on the quality of the sign-language CGs and use them to make improvements (2). Part of this study was conducted in cooperation with Kogakuin University.

Figure 2. Sign-language CG website

(1) T. Handa, T. Sakai, T.
Shimizu: Recognition of Three-dimensional Geometry Using Multipoint Force Feedback on a Fingertip, IEICE Technical Report, vol. 113, no. 347, WIT, (2013)
(2) NHK Sign Language CG Website,

4.2 Speech recognition for closed captioning

We are researching speech recognition to expand the range of programs covered by real-time closed captioning so that more people, including the elderly and those with hearing difficulties, can enjoy TV programs.

Figure 1. Reduction of recognition errors in re-spoken speech for an information program (recognition error rate (%) after adding 150 million words, a discriminative acoustic model, search optimization with a topic-adapting language model, and elimination of restatements)

Closed captioning for information programs

The re-speak method, in which automatic speech recognition is applied to a speaker in a quiet environment who repeats the speech of a program, is used to produce closed captions for sports programs such as sumo wrestling as well as information programs such as Asaichi. Closed captioning for information programs covering a wide range of topics requires a language model whose word-sequence probabilities change daily with each new topic, and an acoustic model that tracks the changing frequency distributions of the re-speaker's vowels and consonants; these distributions may change when the speaker becomes fatigued during the re-speak task. In FY 2013, we trained the language model by adding about 150 million words, including closed-caption texts for information programs with a broad range of topics. We also incorporated a discriminative acoustic model trained to take advantage of complementary searches with the language model and optimized the search method. We developed a method of training the language model by adapting it to topics (1). In an experiment, these efforts reduced the recognition error rate of re-spoken speech selected from the Asaichi program from 7.3% to 4.1%.
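The topic-adaptation idea can be illustrated with a minimal sketch: interpolating a baseline language model with one estimated from the day's topic texts. This is a generic technique, not NHK's constrained-NMF method (1); the model is a toy unigram model and all data is invented.

```python
from collections import Counter

def unigram_model(tokens):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(base, topic, lam=0.7):
    """Linear interpolation: P(w) = lam*P_base(w) + (1-lam)*P_topic(w)."""
    vocab = set(base) | set(topic)
    return {w: lam * base.get(w, 0.0) + (1 - lam) * topic.get(w, 0.0)
            for w in vocab}

base = unigram_model("the market rose and the market fell".split())
topic = unigram_model("typhoon warning heavy rain typhoon landfall".split())
adapted = interpolate(base, topic)
# Today's topic words now receive probability mass without discarding
# the general-purpose base model.
```

In a real system the same interpolation would be applied to n-gram probabilities and the topic corpus would be refreshed daily.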
Closed captioning for news programs

We helped to install speech-recognition closed-captioning systems for news programs (2) at the NHK Osaka, Nagoya, Fukuoka and Sendai stations. These systems are being used for closed captioning of local programs. We also developed an algorithm to automatically correct recognition errors by using news scripts. This algorithm will be implemented in a closed-captioning system at smaller local stations (3).

(1) Y. Fujita, T. Oku, A. Kobayashi, S. Sato: Language Model Adaptation by Constrained Non-negative Matrix Factorization, Spring Meeting of the Acoustical Society of Japan, (2013) (in Japanese)
(2) A. Kobayashi, Y. Fujita, T. Oku, S. Sato, S. Homma, T. Arai, T. Imai: Live Closed-Captioning System Using Hybrid Automatic Speech Recognition for Broadcast News, NAB Proceedings, pp. (2013)
(3) S. Sato, K. Onoe, A. Kobayashi, T. Oku, Y. Fujita, M. Ichiki: An Error Correction Algorithm Using a WFST Built from News Scripts, Spring Meeting of the Acoustical Society of Japan, (2014) (in Japanese)

4.3 Speech and audio signal processing for the elderly

To provide easy-to-hear audio services for everyone, including the elderly, we are researching technologies to adjust program sound at the receiver and to synthesize good-quality speech. Earlier, we had developed a technology to adjust the balance of speech and background sound arbitrarily (1). In FY 2013, we made the algorithm less computationally intensive in order to make it more practical and developed equipment that incorporates the algorithm and speech rate conversion technology. Experimental evaluations showed the effectiveness of this equipment (Figure 1). In our research on high-quality speech synthesis and processing technologies, we had previously developed a method to synthesize high-quality speech for arbitrary sentences by using massive amounts of speech data recorded from news programs. We had also developed automatic reading equipment and used it in broadcasts of the Stock Market Report program on NHK Radio 2.
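The script-based error correction described in 4.2 can be approximated in a few lines: match each recognized sentence against the day's news scripts and substitute the closest one when it is similar enough. The actual algorithm builds a WFST from the scripts (3); `difflib` here is a simple stand-in, and the sentences are invented.

```python
import difflib

def correct_with_scripts(hypothesis, scripts, cutoff=0.8):
    """Replace a recognition hypothesis with the closest script sentence,
    if one is similar enough; otherwise keep the hypothesis unchanged."""
    match = difflib.get_close_matches(hypothesis, scripts, n=1, cutoff=cutoff)
    return match[0] if match else hypothesis

scripts = [
    "the prime minister visited osaka today",
    "heavy rain is expected in kyushu tomorrow",
]
fixed = correct_with_scripts("the prime minister visit it osaka today", scripts)
unrelated = correct_with_scripts("completely different sentence", scripts)
# "fixed" now reads from the script; "unrelated" is left untouched.
```

Because news programs largely follow their scripts, even this crude matching corrects a useful share of errors, which is what makes the approach economical for small local stations.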
In FY 2013, we improved the speech synthesis technology and began studying speech processing technology to convey emotional impressions and thereby expand the range of expression of synthesized speech. To improve the performance of the speech synthesis technology, we modified our previous database method, which directly handles the acoustic and linguistic features of speech data. By changing the search unit to strengthen the connection of unvoiced consonants and silences, which have less auditory influence, we increased the sequentiality of the phoneme strings segmented from text (2). To stabilize the synthesized sound, we studied a way to switch to statistical models built by machine learning of features when no matching phoneme string exists in the database. In our work on speech processing technology, we prepared an emotional voice database for adding emotional expressions to individual synthesized speakers. We collected speech judged to convey certain emotions and devised rules for adding emotional expressions to speech by analyzing the differences between the acoustic features of speech conveying no emotion and those of emotional speech (3). We also developed a speech conversion technology that applies these rules. Experimental evaluations verified that the emotional expression rules could correctly alter unemotional speech to a particular emotion 60% of the time. We incorporated our results into the system for automatically reading stock market reports. This lowered the minimum unit of nominal prices from yen to sen (1/100 of one yen), extended the maximum readable number from less than 10 million to less than 100 million, and allowed a market overview to be read. We also worked on an automatic weather report reading system. To avoid reductions in sound quality due to speech rate conversion, we manually checked all the waveform extension and contraction units obtained from a prior analysis of the speech data and corrected errors.
The improved automatic stock market reading system began operations at the end of March (Figure 2).

Figure 1. Evaluation experiments
Figure 2. Automatic reading system

(1) Komori, Imai, Seiyama, Takou, Takagi, Oikawa: Development of a Volume Balance Adjustment Device for Voices and Background Sounds within Programs, for Elderly People, AES 135th Convention Paper 9010 (2013)
(2) Segi: Search-unit Selection for Concatenative Speech Synthesis Systems, Proceedings of the 2014 Spring Meeting of the Acoustical Society of Japan
(3) Seiyama, Segi, Imai, Takagi: Evaluation of a Speech Database for Emotional Speech Synthesis, ITE Winter Annual Convention 13-4
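Once the speech and background tracks are available separately (the separation itself is the hard part of the balance-adjustment technology and is not shown), the receiver-side adjustment amounts to a per-sample remix with user-chosen gains. A minimal sketch with invented sample values:

```python
def remix(speech, background, speech_gain=1.0, bg_gain=1.0):
    """Mix separated speech and background samples with independent gains."""
    return [speech_gain * s + bg_gain * b for s, b in zip(speech, background)]

speech = [0.20, -0.10, 0.30]
background = [0.05, 0.05, -0.05]

# An elderly listener might raise the speech and lower the background sound:
clearer = remix(speech, background, speech_gain=1.2, bg_gain=0.4)
```

Keeping the adjustment at the receiver means each viewer can choose a balance suited to their own hearing rather than accepting a single broadcast mix.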

4.4 Language processing for barrier-free services

Japanese translation/conversion assistance technology

We are researching technology to assist in the task of converting news program scripts into easy Japanese for non-native Japanese speakers in Japan. The NEWSWEB EASY test site, started in FY 2012 to provide regular news text rewritten into easy Japanese, began full-scale operation, with five news articles released per day. To shorten the rewriting time, we developed a method to automatically assign the detailed supporting information: furigana readings displayed above kanji, dictionary data displayed for difficult words, and highlighting of proper names, achieving an accuracy of 95% or higher (1). In FY 2013, we also began research on methods to divide a news-script sentence into several short sentences and to convert modifier phrases into independent sentences (2), in order to automatically convert news scripts into easy Japanese. We prepared the text data required for this study, such as examples of rewritten news scripts.

Opinion analysis technology

We are researching technology to analyze the opinions of a program's viewers in order to utilize them in program production. In FY 2013, we developed an algorithm to automatically detect program names in Twitter messages about programs and built a system that uses the algorithm to analyze tweets in a structured and exhaustive way (Figure 1). The official name of a program, or a hashtag indicating it, is not explicitly mentioned in most tweets about TV programs, so we developed a method to identify which program is mentioned in a tweet. We focused on three elements: the program name, including its abbreviations and variant spellings; the program description; and expressions indicating viewing of the program. We then devised a method to combine these three elements.
This enabled detection of the program mentioned in tweets with 77% accuracy, even when the official program name or hashtag is not included (3). The program-name detection algorithm requires handmade rules to cover abbreviations and variant spellings of program names. Because these rules are combined with the algorithms that detect program descriptions and expressions of viewing status, setting rules only for the names of regular program series is enough to detect individual program names. This makes the rules easy to maintain and reduces the workload of continuous feedback analysis.

(1) H. Kumano, H. Tanaka: Online Learning of Tagging to Japanese Documents with Dependent Dirichlet Process, 20th Annual Convention of ANLP, pp. , (2014) (in Japanese)
(2) I. Goto, H. Kumano, H. Tanaka: Analysis of Manual Rewriting from Normal Japanese News to Easy Japanese News, 20th Annual Convention of ANLP, pp. 15-18, (2014) (in Japanese)
(3) M. Hirano, K. Kanbe, T. Kobayakawa: Automatic Detection of TV Programs Mentioned in Tweets - for Overall and Continuous Detection -, ITE Annual Convention 2013, 3-7, (2013) (in Japanese)

Figure 1. Analysis system (results of structured and exhaustive analysis of 1,322 tweets: tweet counts by program genre, e.g. documentary/culture, cartoon, movie, news/report, variety, drama, music; and classification of each tweet as positive, negative, neutral, request, recommend, stand-by, viewing, plan, or other)
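The three-element combination described above can be sketched as a rule-based scorer. The rule table, weights, and threshold below are illustrative inventions, not NHK's actual algorithm (3).

```python
import re

# Illustrative rule table; real rules cover far more names and variants.
PROGRAMS = {
    "Asaichi": {
        "names": ["asaichi", "asa-ichi"],             # name + variant spellings
        "keywords": {"morning", "recipe", "studio"},  # description words
    },
    "News 7": {
        "names": ["news 7", "news7"],
        "keywords": {"anchor", "headline", "weather"},
    },
}
VIEWING = re.compile(r"\b(watching|now on|tuned in)\b")

def score(tweet, prog):
    t = tweet.lower()
    s = 0
    if any(name in t for name in prog["names"]):
        s += 2                                            # cue 1: program name
    s += len(prog["keywords"] & set(re.findall(r"\w+", t)))  # cue 2: description
    if VIEWING.search(t):
        s += 1                                            # cue 3: viewing expression
    return s

def detect(tweet, threshold=2):
    """Return the best-scoring program, or None below the threshold."""
    best = max(PROGRAMS, key=lambda name: score(tweet, PROGRAMS[name]))
    return best if score(tweet, PROGRAMS[best]) >= threshold else None
```

Combining the cues is what lets a tweet with no explicit name, only description words plus a viewing expression, still clear the threshold.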

4.5 Content retrieval and recommendation technology

Content utilization technology using language information

We are researching technology that uses text information to manage, retrieve, and recommend broadcasting content. In our study on managing large amounts of content, we developed a method to identify the topics of a program and infer the relations between the topics and the program by using program descriptions in the Electronic Program Guide (EPG) (1). Using the obtained topics and relations, together with the semantic relations between words we had developed earlier, we built a content map connecting programs (Figure 1). The content map can manage programs with relation names, for example, "treatment" and "prevention" for a program on the topic of high blood pressure. We also developed a technology to extract relations between programs from viewer feedback about programs. Experiments with actual feedback demonstrated that the method can identify many useful relations not included in the program descriptions; for example, it can determine whether the same singer's songs are used in two different programs. In our studies on retrieving and recommending content, we devised a method to extract, from the program description, words indicating the reason for retrieving and recommending a program (2). Its effectiveness was experimentally demonstrated in tests on a week's worth of upcoming programs. The results of these studies are being utilized in the Hybridcast application "Minogashi, natsukashi" launched in December 2013 (Figure 2).

Video bank for utilizing raw video footage

To make use of raw video footage stored in video archives, we are researching a video asset management system, called Video bank, which uses physical-sensor and video-analysis technologies to automatically add metadata useful for video production.
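The content-map idea in 4.5 can be sketched minimally: link programs whose EPG descriptions share topic words. In the actual system the edge labels come from a semantic-relation dictionary; here the shared words themselves label the edge, and the EPG entries are invented.

```python
from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "and", "for"}

def topics(description):
    """Crude topic extraction: content words from an EPG description."""
    return {w for w in description.lower().split() if w not in STOPWORDS}

def content_map(epg):
    """Connect every pair of programs that share at least one topic word."""
    edges = {}
    for (p1, d1), (p2, d2) in combinations(sorted(epg.items()), 2):
        shared = topics(d1) & topics(d2)
        if shared:
            edges[(p1, p2)] = shared
    return edges

epg = {
    "Health Today": "treatment of high blood pressure",
    "Doctor's Advice": "prevention of high blood pressure",
    "Dinner Ideas": "easy recipes for busy evenings",
}
edges = content_map(epg)
# The two health programs are linked through "high blood pressure";
# the cooking program stays unconnected.
```

A retrieval application can then walk these edges to surface related programs and show the shared topic words as the reason for the recommendation.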
We developed a hybrid sensor consisting of small attitude and distance sensors created using semiconductor manufacturing technology to acquire the camera motion information (camera parameters) required for video synthesis. We improved its operability by adding error-compensation functionality that uses pattern recognition. We also developed a method to acquire more robust and precise camera parameters that combines the hybrid sensor with a camera-parameter estimation method that works by analyzing video footage. In our work on video processing technology, we developed a method to interactively extract a specified object region in video. We improved the usability of the object extraction function by preprocessing the region segmentation, which has a high calculation cost, and by devising an extraction algorithm that takes into account the size of the pre-segmented regions to make sure an appropriate region matching the user's directions is extracted. In our work on video retrieval technology, we made the discrimination of objects more precise by analyzing numerous feature points, the relations between these points, and their positions within a region of the image (3). We devised a technology for extracting key-frame images containing an entire object by using the region segmentation method and the image features of individual regions. We also studied video retrieval technology based on the similarity of images. Here, we improved the retrieval accuracy of cuts, the minimum shooting unit, by evaluating the similarity of the main object regions. We then developed a technology to extract a series of cuts shot at the same place (i.e., scenes) by using the appearance frequency of similar pictures, together with a technology to retrieve scenes. We used these video retrieval and object extraction technologies to improve our earthquake disaster metadata system.
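Cut retrieval by image similarity can be sketched with the simplest possible feature, a grey-level histogram compared by intersection. The actual system compares main object regions with richer features; the pixel values below are invented.

```python
def histogram(pixels, bins=4):
    """Normalized histogram of grey levels in 0..255."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = sum(h) or 1
    return [v / total for v in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

frame_a = histogram([200, 210, 220, 230])   # bright frame
frame_b = histogram([205, 215, 225, 235])   # similar bright frame
frame_c = histogram([10, 20, 30, 40])       # dark frame

same_cut = intersection(frame_a, frame_b)
different_cut = intersection(frame_a, frame_c)
```

Grouping cuts whose frames repeatedly score high against each other is the essence of the scene-extraction step described above.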
We also built an experimental system with functions for retrieving and processing video from archive tapes by adding and correcting metadata. The system has been used at the NHK Fukushima station since the end of September 2013, where it automatically added metadata enabling retrieval-by-shot to more than 10,000 video tapes related to earthquake disasters in only three months (Figure 3).

Figure 1. Content map
Figure 2. The "Minogashi, natsukashi" Hybridcast application
Figure 3. Earthquake disaster metadata complementary system (retrieval PC, processing PC, LTO drive and data storage disk) being tested at NHK Fukushima station

Part of this research was commissioned to NHK Engineering System, Inc. The research was conducted in cooperation with the Shimizu Corporation.

(1) K. Miura, I. Yamada, T. Miyazaki, N. Kato, H. Tanaka: Generating a TV Program Map Using Semantic Relations between Words, In Proceedings of the 76th National Convention of IPSJ, 5C-4 (2014) (in Japanese)
(2) T. Miyazaki, A. Matsui, I. Yamada, N. Kato, M. Naemura, H. Sumiyoshi: ICA-based Keyword Extraction for TV Program Recommendation, In Proceedings of the ITE Annual Convention 2013, 12-4 (2013) (in Japanese)
(3) Y. Kawai, M. Fujii: Semantic Concept Detection Based on Spatial Pyramid Matching and Semi-supervised Learning, ITE Transactions on Media Technology and Applications, vol. 1, no. 2, pp. (2013)

4.6 Viewers' mental state estimation technology

To understand how viewers watch TV programs and how they are psychologically influenced by them, we are researching ways of estimating their mental state from subjective evaluations and brain activity. In FY 2013, we evaluated the sensation of the depth of objects in natural (i.e., not artificially shaded) scenes of varying resolution to explore the psychological effects of high-resolution images such as those of Super Hi-Vision. The results showed that the depth sensation increases as the resolution becomes higher, corroborating the results of our FY 2012 experiments on artificially shaded images (1). To see what neural mechanism underlies the depth sensation, we measured brain activity while viewers judged the depths of objects in images of varying resolution. The results demonstrated that a specific brain area well known to be specialized for spatial and motion perception can be associated with the judgment of monocular depth (2).
In our research on analyzing the mental states of people while they watch video, we measured the brain activity of subjects viewing a program using functional magnetic resonance imaging (fMRI) equipment and analyzed it with machine learning in order to estimate changes in mental state over time. An experiment on people watching comedy TV programs showed that viewers are in a mental state of expecting the development of the program before they actually find it funny, and that such a mental state is measurable (3) (Figure 1). To prevent video from having undesirable effects on the human body, we developed a method to estimate the degree of unpleasantness of shakiness, flicker and striped patterns in video and experimentally demonstrated its effectiveness.

(1) K. Komine, Y. Tsushima, N. Hiruma: Higher-resolution Image Enhances Subjective Depth Sensation in Natural Scenes, Perception, Vol. 42, supplement, p. 119 (2013)
(2) Y. Tsushima, K. Komine, N. Hiruma: Cortical Area MT+ Plays a Role in Monocular Depth Perception, Perception, Vol. 42, supplement, p. 149 (2013)
(3) Y. Sawahata, K. Komine, T. Morita, N. Hiruma: Decoding Humor Experiences from Brain Activity of People Viewing Comedy Movies, PLoS One 8: e81009 (2013)

Figure 1. Prediction of humor experiences from brain activity analysis (program video, continuous slider evaluation of the intensity of perceived humor, and fMRI brain activity used to predict a future humor experience)
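The decoding step, mapping brain-activity features to a labeled mental state with machine learning, can be sketched with a nearest-centroid classifier over two invented features; the actual study (3) used fMRI voxel patterns and more sophisticated models.

```python
def centroid(samples):
    """Mean feature vector of a list of equally sized samples."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def nearest_state(features, centroids):
    """Return the state whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda s: dist2(features, centroids[s]))

# Invented training data: two features per time point, two labeled states.
train = {
    "expecting": [[0.9, 0.1], [0.8, 0.2]],
    "amused":    [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {state: centroid(vecs) for state, vecs in train.items()}
```

Applying such a decoder to each time point of a recording is what yields the trajectory of mental states, e.g. "expecting" preceding "amused", described above.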


Broadcasting live and on demand relayed in Japanese Local Parliaments. Takeshi Usuba (Kaigirokukenkyusho Co.,Ltd) Provision of recording services to Local parliaments Parliament and in publishing. proper equipment and so on, much like any other major stenography company in Japan. that occurred in stenography since

More information

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information:

Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: Date: April 19, 2017 Name of Product: Cisco Spark Board Contact for more information: accessibility@cisco.com Summary Table - Voluntary Product Accessibility Template Criteria Supporting Features Remarks

More information

Panopto: Captioning for Videos. Automated Speech Recognition for Individual Videos

Panopto: Captioning for Videos. Automated Speech Recognition for Individual Videos Panopto: Captioning for Videos Automated Speech Recognition (ASR) is a technology used to identify each word that is spoken in a recording. Once identified, the words are time stamped and added to a search

More information

Summary Table Voluntary Product Accessibility Template. Supporting Features. Supports. Supports. Supports. Supports

Summary Table Voluntary Product Accessibility Template. Supporting Features. Supports. Supports. Supports. Supports Date: March 31, 2016 Name of Product: ThinkServer TS450, TS550 Summary Table Voluntary Product Accessibility Template Section 1194.21 Software Applications and Operating Systems Section 1194.22 Web-based

More information

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures.

Note: This document describes normal operational functionality. It does not include maintenance and troubleshooting procedures. Date: 18 Nov 2013 Voluntary Accessibility Template (VPAT) This Voluntary Product Accessibility Template (VPAT) describes accessibility of Polycom s C100 and CX100 family against the criteria described

More information

Virtual Sensors: Transforming the Way We Think About Accommodation Stevens Institute of Technology-Hoboken, New Jersey Katherine Grace August, Avi

Virtual Sensors: Transforming the Way We Think About Accommodation Stevens Institute of Technology-Hoboken, New Jersey Katherine Grace August, Avi Virtual Sensors: Transforming the Way We Think About Accommodation Stevens Institute of Technology-Hoboken, New Jersey Katherine Grace August, Avi Hauser, Dave Nall Fatimah Shehadeh-Grant, Jennifer Chen,

More information

Summary Table Voluntary Product Accessibility Template. Supports. Please refer to. Supports. Please refer to

Summary Table Voluntary Product Accessibility Template. Supports. Please refer to. Supports. Please refer to Date Aug-07 Name of product SMART Board 600 series interactive whiteboard SMART Board 640, 660 and 680 interactive whiteboards address Section 508 standards as set forth below Contact for more information

More information

OSEP Leadership Conference

OSEP Leadership Conference OSEP Leadership Conference Presenter Guidelines Prepared by: 1000 Thomas Jefferson Street NW Washington, DC 20007-3835 202.403.5000 www.air.org Copyright. All rights reserved. Contents OSEP Leadership

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) (VPAT) Date: Product Name: Product Version Number: Organization Name: Submitter Name: Submitter Telephone: APPENDIX A: Suggested Language Guide Summary Table Section 1194.21 Software Applications and Operating

More information

Making Sure People with Communication Disabilities Get the Message

Making Sure People with Communication Disabilities Get the Message Emergency Planning and Response for People with Disabilities Making Sure People with Communication Disabilities Get the Message A Checklist for Emergency Public Information Officers This document is part

More information
