WEB ACCESSIBILITY FOR THE HEARING IMPAIRED


WEB ACCESSIBILITY FOR THE HEARING IMPAIRED

by

Simone Pasmore

A Thesis Submitted to the Faculty of The College of Engineering and Computer Science in Partial Fulfillment of the Requirements for the Degree of Master of Science

Florida Atlantic University
Boca Raton, Florida
December 2008

Copyright by Simone A. Pasmore 2008

WEB ACCESSIBILITY FOR THE HEARING IMPAIRED

by Simone Pasmore

This thesis was prepared under the direction of the candidate's thesis advisor, Dr. Shihong Huang, Department of Computer Science and Engineering, and has been approved by the members of her supervisory committee. It was submitted to the faculty of the College of Engineering and Computer Science and was accepted as partial fulfillment of the requirement for the degree of Master of Science.

SUPERVISORY COMMITTEE:
Shihong Huang, Ph.D., Thesis Advisor
Sam Hsu, Ph.D.
Oge Marques, Ph.D.
Borko Furht, Ph.D., Chair, Computer Science and Engineering
Karl K. Stevens, Ph.D., P.E., Dean, College of Engineering and Computer Science
Barry T. Rosson, Ph.D., Dean, Graduate College

Date

ACKNOWLEDGMENTS

I am grateful to all my professors for their guidance and help. In particular, I would like to thank Dr. Shihong Huang for guiding me throughout the steps of this thesis. She has motivated me and constantly increased my confidence as a researcher as well as a writer. In addition, I would like to thank Dr. Sam Hsu and Dr. Oge Marques for their valuable support and time. I would also like to thank Dr. Gustavo Rossi and the collaborators from the Universidad de la Plata, in Argentina. Further, I would like to thank Paula Sargeant, Christie Cohn, and Nadine Fordham from Miami Dade College's Interpretation Program for facilitating the development of my research, and Schott Communities for aiding in the evaluation process. Thanks to my family and friends for their support during this period of my academic career. Thank you all for your invaluable assistance.

ABSTRACT

Author: Simone Pasmore
Title: Web Accessibility for the Hearing Impaired
Institution: Florida Atlantic University
Thesis Advisor: Dr. Shihong Huang
Degree: Master of Science
Year: 2008

With the exponential increase of Internet usage and the embedding of multimedia content on the Web, some Internet resources remain inaccessible to people with disabilities. In particular, people who are deaf or Hard of Hearing (HOH) encounter inaccessible Web sites due to a lack of Closed Captioning (CC) for multimedia content on the Web, the absence of sign language equivalents for Web content, and an insufficient evaluation framework for determining whether a Web page is accessible to the Hearing Impaired community. These barriers to accessing content need to be rectified in order for the Hearing Impaired community to enjoy the full benefits of the information repository on the Internet. The research contributions of this thesis address some of the Web accessibility problems faced by the Hearing Impaired community. These objectives

are to create automated CC for multimedia content on the Web, to embed sign language equivalents for content available on the Web, to create a framework to evaluate Web accessibility for the Hearing Impaired community, and to create a social network for the Deaf community. To demonstrate the feasibility of fulfilling these objectives, several prototypes were implemented. These prototypes have been used in real-life scenarios in order to obtain an objective evaluation of the proposed framework. Further, the implemented prototypes have had an impact on both the academic community and the industry.

DEDICATION

To Jeanine Jones

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Chapter 1 Introduction
  1.1 Sectors of the Hearing Impaired Community
  1.2 Problem Statement
  1.3 Research Objectives
  1.4 Research Contribution
    1.4.1 Defining an Evaluation Framework
    1.4.2 Providing the Architecture to Produce Automatic Closed Captioning
    1.4.3 Embedding Sign Content
    1.4.4 Creating a Social Network for the Deaf Community
  1.5 Research Approach
    1.5.1 The Approach for the Hard of Hearing Sector
    1.5.2 The Approach for the Deaf Community
  1.6 Thesis Organization
Chapter 2 Background & Related Work
  2.1 Background
  2.2 American Sign Language
  2.3 Web Accessibility
    Web Accessibility for the Visually Impaired
    Web Accessibility for Cognitive Disabilities
    Web Accessibility for Motor Disabilities
    Web Accessibility for the Hearing Impaired
    Web Accessibility Evaluation
  2.4 Approaches of Web Accessibility for the Hearing Impaired
    2.4.1 Avatars
    2.4.2 Gesture Recognition
    2.4.3 Signlinking
    2.4.4 Multi-person Type As You Chat Text
    2.4.5 Emotive Captioning
    2.4.6 Automated Captioning
  Summary
Chapter 3 Evaluation Framework
  Developers' Framework I: Hard of Hearing (HOH)
  Developers' Framework II: Deaf
  Summary
Chapter 4 Implementation
  Automatic Captioning
    Components for Automatic Closed Captioning
    Architecture for Automatic Closed Captioning
    Approach I: HOH
    Evaluation of Automatic Closed Captioning
    Survey of Automatic Closed Captioning
  Sign Translation Architecture
    Components for Sign Translation
    Architecture for Embedded Sign Videos
    Approach II: Deaf
    Evaluation of Embedded Sign Content
    Survey of Embedded Sign Language
  Summary of the Implementation
Chapter 5 Conclusion
  5.1 Research Summary
  Research Impact
    Impact to the Academic Community
    Impact to the Industry
  Future Work
Appendix A: Survey
Appendix B: Acronyms
References

LIST OF TABLES

Table 1: Web Accessibility Overview
Table 2: Developers' Requirements for the Hard of Hearing
Table 3: Interpreter Requirements for Deaf Framework
Table 4: Developer Requirements for Deaf Framework
Table 5: Results from Automatic Closed Captioning

LIST OF FIGURES

Figure 1: Deaf vs. Hard of Hearing [7]
Figure 2: Persons in the U.S. Facing Disabilities [7]
Figure 3: Distribution of Disabilities in the United States [7]
Figure 4: Venn Diagram of Disabilities [20]
Figure 5: Synthesized Sign Animation [26]
Figure 6: IBM's SiSi, Translates Speech into Sign Language [33]
Figure 7: Tools Employed for Gesture Recognition [21]
Figure 8: Signlinking, Hyperlinking with Signed Videos [10]
Figure 9: Multichat Illustration [34]
Figure 10: Emotive Captioning [8]
Figure 11: Framework Overview
Figure 12: Comparison of Interpreter Capture
Figure 13: Implementation of Closed Captioning
Figure 14: Caption Bar on YouTube Site
Figure 15: Implementation of Signed Translations on YouTube.com
Figure 16: Implementation of Sign Translation on SignTubeUs.com
Figure 17: Sign Translation of a Tutorial (Recipe)
Figure 18: Sign Translation of a Song
Figure 19: Sign Translation of a Story
Figure 20: Sign Translation of a Weather Alert
Figure 21: Login Page for SignTubeUs
Figure 22: Display of SignTubeUs
Figure 23: SignTubeUs Where Video is Not Available

Chapter 1 Introduction

"Kindness is a language which the deaf can hear and the blind can see." (Mark Twain)

Deaf and Hard of Hearing (HOH) users use the Internet and all its related resources as actively as those who do not face any hearing challenges. However, information presented on the Internet is not as easily accessible to the Hearing Impaired community. This thesis explains the difficulties faced by Deaf and HOH users of the Internet and presents possible solutions to remove the barriers faced by affected individuals. This chapter gives an overview of the thesis, explaining why change is required in current Web accessibility standards for the Hearing Impaired community. It proposes different implementation recommendations to improve Web accessibility for the Hearing Impaired and discusses the possibilities that promise to arise from the thesis investigations.

1.1 Sectors of the Hearing Impaired Community

Although the deaf community does not generally distinguish among degrees of deafness, it is imperative that the different categories be explained for the sake of Web development and the understanding of content among the Hearing Impaired community.

It is essential that the Deaf community be uniquely identified and its criteria appropriately explained. Similarly, for the HOH sector, the requirements should be distinctly explained in order to understand the need for different implementations.

Those who belong to the HOH sector of society are generally defined as those who may use devices such as hearing aids, which assist their hearing, or those who can hear some environmental sounds without technological aids. Generally, those classified as HOH have been able to learn both the spoken and written forms of language at normal levels. Persons in this category may or may not use sign languages, but have normal competency in the spoken and written language.

There is a related category of deaf individuals: those who are deaf but lost their hearing later in life. These persons have typically been brought up in mainstream schooling, where there is limited use of sign languages in daily activities. These individuals also have a firm grasp of spoken and written languages, and may or may not use sign languages to communicate. However, because the spoken language is their first acquired language, it is generally more natural for them to utilize the spoken language (lip reading) and textual formats as their primary choices for rendering audio and video material.

Finally, the Deaf community (with a capital "D") comprises those who identify as culturally Deaf. This may include HOH or deaf individuals whose first and primary language is a signed one, and who adopt the cultural norms of the Deaf community. Members of this sector generally have very limited understanding of spoken and written languages and rely primarily on sign languages as their preferred form of

communication. As shown in Figure 1, the Deaf sector is a very small fraction of the Hearing Impaired community, which has resulted in limited Web accessibility research targeting the Deaf community.

Figure 1: Deaf vs. Hard of Hearing [7]

It is critical that the Deaf and HOH be distinctly identified, as their needs vary greatly. The HOH sector relies on oral (lip reading) or written forms to render multimedia, while the Deaf sector's preferred method of communication is sign language.

Table 1: Web Accessibility Overview

Sector: HOH
  Preferred language: Written language; oral language
  Existing solutions: Textual equivalents for multimedia content
  Positives of existing solutions: Provides some information for multimedia content
  Negatives of existing solutions: Does not provide the emotional content of the original multimedia; does not relay all content information, such as pots dropping or music playing in the background

Sector: Deaf
  Preferred language: Sign language
  Existing solutions: Textual equivalents for multimedia content
  Positives of existing solutions: Provides some information for multimedia content
  Negatives of existing solutions: Does not relay the intent, purpose, and idea of the original content; does not relay emotional content (sarcasm, anger, happiness, etc.)

1.2 Problem Statement

Most of today's resources are available on the Internet. Employment opportunities, distance learning, government applications, news, e-commerce, and entertainment are a few of the resources that many people access daily. For those who are Deaf or HOH, retrieving different forms of information on the Web is not as simple a task as it is for those who are hearing. Much of the audio content is lost by both groups, and critical information may be misinterpreted by the Deaf community, who use sign language as their primary method of communicating;

similarly, for the HOH community, the textual forms of translation are missing pertinent information from the multimedia content.

Sign interpretation is essential for the Deaf community because understanding of the given content is required to achieve Web accessibility. Sign language is the preferred language of the Deaf community and therefore comes more naturally than spoken languages. Also, textual formats are not an equivalent representation of multimedia material. Much of the context is lost when the written language is read in place of signed languages, due to differing grammatical and syntactical structure. Further, emotional and rhythmic data can be portrayed in signed format, but are not achievable in a simple textual solution. This underscores the fact that Closed Captioning (CC) solutions, though conventionally recommended, are an insufficient remedy for rendering multimedia or contextual information for the Deaf.

However, it is essential that we not remove the use of CC and subtitling, as some members of the Hearing Impaired community do not utilize signed languages but rely on written languages. Therefore, CC for multimedia content posted on the Internet is still a necessity for those who consider themselves part of the HOH sector. However, CC does need to be further developed to portray the equivalent of the emotional, background, and rhythmic data produced by multimedia content. Moreover, the audio material available online has minimal CC, even though CC is the solution recommended by the W3C [48]. It is essential that developers, organizations, and authors include both CC and signed translations for rendering all aspects of

multimedia content. The scope of this project is a recommendation involving growing social networks, businesses, the Deaf / HOH community, and interpreters, to produce a snowball effect for translating information online, ensuring that the Deaf and HOH communities will have equal access to the information provided on the Internet.

The use of sign languages as the Deaf community's focal communication method is the reason that current research is vital in providing sufficient alternatives for much of the content available on the Web (which is not currently available in a signed form). Additionally, for users who are considered HOH and have a firm grasp of written and spoken languages, it is essential that CC be made available for all posted audio content, and further enhanced to capture all aspects of the given data. Both sign and textual translations need to be displayed where they are appropriate, ensuring that Deaf users do not have to rely on a foreign language and that the HOH group has access to the audio presented. A further deficiency is the lack of an applicable metric to assess Web pages with respect to accessibility for the varied sectors of the Hearing Impaired community.

1.3 Research Objectives

The original recommendation [48] for Web accessibility for both Deaf and HOH individuals required textual equivalents for audio content. However, textual content has been an insufficient solution for the Deaf and HOH community.

The first objective of this research is to provide an automatic method for producing textual equivalents for multimedia content on the Internet. Providing an

automatic method for rendering a textual version of audio material will accommodate current content providers who find the addition of CC a tedious task. Further, the automatic solution requires only a one-time setup, ensuring that additional posts of audio and video will add textual data in an automated fashion. As the process becomes simpler and less time consuming, producing an accessible Web will be more appealing to the average content provider.

The second objective is to embed sign translations of multimedia content on the Internet. By implementing signed translations for posted content, the Deaf community is able to attain a thorough understanding of the content on the Internet. The use of sign format is especially vital in educational scenarios, specifically distance learning, where closed captioning has been the traditional remedy.

Furthermore, another objective is to enhance the current guidelines for evaluating Web accessibility for the Hearing Impaired. The current guidelines provide only a broad requirement of providing textual equivalents. However, the Hearing Impaired community has different sectors, the HOH and the Deaf, which have different needs. The HOH prefer textual equivalents and the Deaf prefer sign language equivalents of content; therefore we need to develop guidelines to accommodate each sector's unique requirements.

Also, increasing sign translations online facilitates sign language acquisition and further promotes studies in Natural Language Processing (NLP), with the goals of enhancing automated software such as avatars, improving gesture recognition for the purpose of

allowing users to sign answers as opposed to writing or typing their responses, and further creating an automated evaluation tool for signing and machine translation systems. Automated tools within NLP may also include translations from spoken languages to sign languages, and vice versa, resulting in ease of access for both the users and the developers who render content on the Internet.

1.4 Research Contribution

The contributions of this research include:

- Defining an evaluation framework for developers to assess Web accessibility standards for the Hearing Impaired community.
- Providing the architecture to produce automatic CC for multimedia content on the Web.
- Embedding sign content directly into Web pages / Web sites.
- Creating a social network infrastructure for the Deaf community and all persons involved.

1.4.1 Defining an Evaluation Framework

Currently there are limited guidelines for Web developers, content providers, and related parties to ensure adequate Web accessibility for those who are Hearing Impaired. The current requirement of implementing synchronized textual equivalents does not distinguish varied methods for those who are Deaf or those who are HOH.

The guidelines for CC [48] need to be further developed to include information previously not portrayed in traditional CC, such as background effects, emotional elements, and rhythmic data. Furthermore, for the Deaf community there has been no framework for developers to embed interpreted sign language equivalents for text, audio, and video into Web pages / sites. The framework must give a list of essential guidelines for developers to fulfill an accessible Web for all Hearing Impaired users. Proposed in this thesis are guidelines for both Web developers and interpreters, where each entity accomplishes a set of tasks which together achieve the accessibility objectives.

1.4.2 Providing the Architecture to Produce Automatic Closed Captioning

Although automatic CC has been achieved in Japan for broadcasting systems [1], automatic CC has not been integrated into Web pages / sites. Integrating an automated way of producing textual equivalents of multimedia content on the Web should be of major concern, as the Web provides a wide variety of information that is currently not accessible to the Hearing Impaired community. The reason much of this information has not been accessible is a lack of CC from the majority of content providers: content providers either are not aware of CC capabilities or disregard the need for CC, as it can be a time-consuming and daunting task.

Automatic CC for content providers is a new and promising area of research. Any content provider may simply complete a one-time voice training session, and each

subsequent time the dictation will occur automatically, producing textual equivalents for the audio content the author provides. Further, automatic CC proves to be promising in an array of fields: news broadcasts, television programs, academic lectures, and conferences are just a few of the arenas where the institution of automated CC can make a difference for the HOH community.

1.4.3 Embedding Sign Content

Previously, no adequate signed translations have been instituted on a Web platform. The content, audio, and video files online have remained inaccessible to the Deaf community. The fact that the Deaf community utilizes signed languages, which have their own grammatical structure and syntactical rules (different from English, or written formats), makes reading a difficult task. Further, misunderstandings are very prevalent when signed forms are not the chosen method of communication. Therefore, it is imperative that provisions be made to provide the equivalent signed interpretation for all material online. Further, investigations in NLP toward an automated solution for signed translations may be more easily pursued with a larger data set of signed content. Moreover, the prospect of this application to embed signed content can also benefit those who are learning sign languages, and interpreters of sign languages, by providing a repository of multimedia with the equivalent sign videos.

1.4.4 Creating a Social Network for the Deaf Community

The Deaf community has had limited access to much of the multimedia material available on the Internet. Part of my contribution is creating a Web portal dedicated to the Deaf community (which also includes interpreters and students of sign languages) in the form of a social network. The concept is to provide a Web site where interpreters have the opportunity to upload interpreted sign videos so that the Deaf community can access these videos, and students of sign languages can improve their signing skills by having access to multiple videos, all in one specified location. A social network of signed videos may initiate a snowball effect, creating a multitude of sign language videos that were previously unavailable to the Deaf community. Sign language interpreters may also use this social network as an opportunity to market themselves to various signing agencies and to the Deaf community itself. Students, and anyone wishing to improve their signing abilities, may access the site, promoting the acquisition of new signs and improving their receptive skills. The contribution of a video social network for the Deaf community and all those involved is a novel approach that can also benefit studies in NLP for signed languages, improve the automation of avatars, and further enhance the framework for Web developers.
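One way to picture the portal's core records is a minimal data model linking an original resource, its signed translation, and the interpreter who produced it. This is an illustrative sketch only; the field names and URLs below are assumptions, not the actual SignTubeUs schema:

```python
from dataclasses import dataclass, field

@dataclass
class Interpreter:
    """An interpreter profile on the portal (fields are illustrative)."""
    name: str
    certifications: list = field(default_factory=list)

@dataclass
class SignVideo:
    """Pairs a piece of Web content with its signed translation."""
    source_url: str        # the original multimedia content
    sign_video_url: str    # the interpreter's signed translation
    sign_language: str     # e.g. "ASL"
    interpreter: Interpreter

# A Deaf user or sign language student browsing the network would
# query such records to find a signed version of a given resource.
video = SignVideo(
    source_url="http://example.com/lecture1",
    sign_video_url="http://example.com/lecture1_asl",
    sign_language="ASL",
    interpreter=Interpreter(name="Jane Doe"),
)
print(video.sign_language)
```

Keeping the interpreter as a first-class record supports the marketing aspect described above: agencies can browse interpreters by their uploaded work.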

1.5 Research Approach

In order to have a systematic approach for rendering content and videos for the Deaf and HOH, this thesis proposes several protocols to ensure that both communities are given access to all the information represented on the Web. The HOH communities will need to retrieve audio and video information in textual formats, while the Deaf sector will need to retrieve all information in signed format. This section presents the proposed methods for achieving textual and sign language equivalents on a Web platform.

1.5.1 The Approach for the Hard of Hearing Sector

For the HOH / non-signing sector, it is fundamental that CC be integrated into all forms of audio content presented on the Web. The utmost responsibility falls on developers and Web hosts to ensure that no information is withheld from any individual attempting to gain access to an Internet resource. The guideline proposed in this thesis includes critical elements that can be used to produce an adequate form of CC. It is equally important to convey information that is not necessarily conveyed in traditional CC, such as emotions, background effects, and music. The HOH community must have equal access to all the surrounding information in order to retrieve full details of the available content. While additional annotations to the traditional CC environment are outside the scope of this research, some previous research on emoticons [27] (discussed in section 2.4.5) might help facilitate the integration of available related tools to render all necessary criteria in producing a sufficient textual equivalent.
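The caption criteria above (speech plus background effects and emotional cues) can be illustrated with a small sketch. The generator below emits captions in the WebVTT format, one common Web caption format, carrying non-speech information in bracketed annotations. The segment data is hypothetical; a real pipeline would obtain it from a speech recognizer plus manual or automatic annotation:

```python
# Sketch: turn timed transcript segments into WebVTT captions.
# Segment timings and text below are hypothetical examples.

def vtt_timestamp(seconds: float) -> str:
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def to_webvtt(segments) -> str:
    """segments: iterable of (start, end, text) tuples, in seconds."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

segments = [
    (0.0, 2.5, "[soft piano music]"),                           # background effect
    (2.5, 5.0, "Welcome back, everyone."),                      # plain speech
    (5.0, 7.2, "(sarcastically) Great weather we're having."),  # emotional cue
]
print(to_webvtt(segments))
```

The bracketed and parenthesized annotations are exactly the kind of background and emotional data that the thesis argues traditional CC omits.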

1.5.2 The Approach for the Deaf Community

The implementation of sign language videos was one of the essential elements of this thesis. Ensuring that all the necessary applications and software were available was the first stage of the integration process. The second stage was to attain sign translations of videos from interpreters, and the final stage was to integrate the various programs into an adequate prototype of the presented solution. In order to achieve sufficient accessibility, this strategy needs to be adopted by major corporations, Web developers, and content providers, to ensure that adequate interpretation of the information on the Web is completed. By employing this strategy, all parties involved would also avoid possible lawsuits for disregarding the needs of the Deaf community.

Most important is the need to standardize protocols to ensure consistent accessibility practices among Web developers and uploaders. It is essential that videos are implemented using a set of criteria that ensures optimal resolution for the Deaf community. The framework chapter advises the interpreter, or group of interpreters, to follow essential guidelines for interpreting standards. These standards should be evaluated prior to the posting of applicable videos, so that developers may embed the videos appropriately. Further, the Web developer or Web designer must also follow the specific criteria detailed in the framework (Chapter 3) to ensure the accessibility of the videos for the end user.
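As a rough illustration of the embedding step a developer would perform, the snippet below generates an HTML fragment that pairs an original clip with its interpreted sign-language video and a caption track. The file names and class names are hypothetical placeholders, and the HTML5 track element used here is a later standard than the thesis prototype, shown only to make the pairing concrete:

```python
# Sketch: emit an HTML fragment placing a sign-language interpretation
# video next to the original clip. Names are illustrative only.

HTML_TEMPLATE = """<div class="accessible-media">
  <video class="original" src="{original}" controls>
    <track kind="captions" src="{captions}" srclang="en" label="English CC">
  </video>
  <video class="sign-translation" src="{sign_video}" controls></video>
</div>"""

def embed_with_sign(original: str, captions: str, sign_video: str) -> str:
    """Return markup pairing a clip with its CC file and sign video."""
    return HTML_TEMPLATE.format(
        original=original, captions=captions, sign_video=sign_video)

# e.g. the recipe tutorial case study: clip, its captions, its ASL video
fragment = embed_with_sign("recipe.mp4", "recipe.vtt", "recipe_asl.mp4")
print(fragment)
```

Serving both videos side by side reflects the thesis position that signed content supplements CC rather than replacing it.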

1.6 Thesis Organization

This thesis is organized as follows. Chapter 2 explores the background and related research being performed in the field of Web accessibility. It covers Web accessibility not only for the HOH and Deaf, but also for other groups, thus providing a more complete overview of the topic of accessibility. Chapter 3 discusses the theoretical approach that serves as the foundation of this research. It focuses on the criteria essential to evaluating Web accessibility for the Hearing Impaired community; in order to implement an updated solution to current accessibility trends, it is imperative to have an adequate metric for evaluating the deficits in current Web page development. Chapter 4 first introduces the implementation used in the prototype, and then reviews case studies that demonstrate the benefits of adding CC and embedding signed translations for all content available on the Internet. The case studies also exemplify different approaches for varied content, such as tutorials, newscasts, stories, and songs, and examine prospects for embedding content in other sites. Chapter 5 concludes with an overall summary, reviews the impact of this research on the academic community and the industry, and discusses future opportunities in this field of research.

Chapter 2 Background & Related Work

"It is a terrible thing to see and have no vision." (Helen Keller)

This section of the thesis discusses the background and related work of Web accessibility and American Sign Language (ASL). Web accessibility covers the various studies relating to the different categories of impairments and how people with such impairments interact with the Web. These categories include those who have visual, cognitive, motor, and hearing impairments. To ensure these categories of disabilities are addressed for the purpose of Web accessibility, several validation services [47] [3] [49] [45] [42] automatically check for errors, or suggest improvements, to verify that accessibility standards have been met. An instrumental infrastructure is the W3C Web Accessibility Initiative (WAI) [48], which provides guidelines to make the Web more accessible for people with disabilities. The W3C WAI has been one of the more commonly used frameworks for evaluating Web accessibility.

Also included in this section are studies directly relating to Human Computer Interaction (HCI) and Web accessibility standards for the Hearing Impaired. Some of the related research includes avatars, gesture recognition, signlinking, emotive captioning, and automated captioning. Details of how these research topics relate to Web

accessibility, and their intricate details, are unraveled within this section. Chapter 2 then concludes with a summary of the overall background and related work discussed.

2.1 Background

The Internet is an information repository with educational, recreational, and informational facilities available to all people with access to the World Wide Web. One challenge is how to provide equal access to the information posted on the Web: not everyone has the same access to this information repository. Persons with varied disabilities are frequently limited in their access to some of the resources, such as visual and audio content. People with visual, hearing, mobility, and cognitive impairments are among those who do not have access to certain material on the Internet.

Persons with visual impairments generally require speech mechanisms to render the data of a specified page; for example, page readers, braille readers, and speech recognition devices are often employed. Persons with mobility disabilities may also utilize speech devices along with other navigational tools to traverse a site in a hands-free manner. Persons with cognitive disabilities generally need a simplified means of accessing data, dependent on the user's ability. And people with hearing disabilities need either textual or signed representation of varied content, dependent on their preferred language. We will discuss these four categories of Web accessibility studies in detail in the following sections.

The research concerned with the previously discussed disabilities or impairments, and with how people who have these impairments access information on the Internet, is termed Web accessibility. Web accessibility may be defined as the ability of a person using any software or hardware that retrieves and renders Web content (including assistive technologies) to understand and fully interact with the content of a Web site [37] [41].

This thesis predominantly focuses on the accessibility of the Web for those who are Hearing Impaired and utilize differing modes of communication. Deaf users should be able to attain all information, tools, and resources available on the Web as successfully as any hearing individual. However, this has been unattainable in several areas, which include the loss of context in textual content due to cross-language barriers, and all multimedia that is not interpreted into the appropriate sign language. Also, those who are HOH have not been able to access much of the audio content due to limited CC by content providers.

With the exponential increase of technology, especially in cross-language interpretation, there should be no reason that a sign language interpretation of all content and audio material remains unattainable for the Deaf community. Deaf Internet users should have the opportunity to retrieve sign language content for any resource available on the Web. Similarly, HOH users should have access to CC for all posted audio and video content. Further, it is important to understand that signed content should not replace CC, as there are non-signers who are deaf and prefer the use of CC to attain data.
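In the spirit of the validation services mentioned earlier in this chapter, a rudimentary check for this particular barrier can be automated. The sketch below scans an HTML page for video and audio elements that carry neither a caption track nor a marked sign-language alternative; the data-sign-video attribute is an invented authoring convention used purely for illustration, and real validators perform far more thorough checks:

```python
from html.parser import HTMLParser

class MediaAccessibilityChecker(HTMLParser):
    """Flag <video>/<audio> elements lacking captions or a sign alternative."""

    def __init__(self):
        super().__init__()
        self.flagged = []       # sources of inaccessible media elements
        self._in_media = None   # src of the media element being parsed
        self._has_alt = False   # captions or sign video found so far

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("video", "audio"):
            self._in_media = attrs.get("src", "<unnamed>")
            # data-sign-video is a hypothetical authoring convention
            self._has_alt = "data-sign-video" in attrs
        elif tag == "track" and self._in_media:
            if attrs.get("kind") in ("captions", "subtitles"):
                self._has_alt = True

    def handle_endtag(self, tag):
        if tag in ("video", "audio") and self._in_media:
            if not self._has_alt:
                self.flagged.append(self._in_media)
            self._in_media = None

checker = MediaAccessibilityChecker()
checker.feed('<video src="a.mp4"></video>'
             '<video src="b.mp4"><track kind="captions" src="b.vtt"></video>')
print(checker.flagged)  # only the uncaptioned video is reported
```

Such a check addresses only the presence of alternatives, not their quality, which is exactly the gap the evaluation framework in Chapter 3 is meant to fill.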

Certain criteria will be discussed for the purpose of Web developers and others with similar interests, ensuring both solutions are adopted in the Web design architecture. Sign language content should be an outlet for relaying information to those who are predominantly signers and utilize sign language as their method of communicating. This thesis does not propose signed content as an option for Web development or design, but rather as a standard for Web accessibility.

2.2 American Sign Language

The term sign language has traditionally been used to refer to varied forms of sign communication. However, there is a unique distinction between ASL and other varieties of sign languages [25]. A few common misconceptions of ASL are: (1) that its grammatical formation is a literal word-for-word translation of written English, (2) that the construction of the language is simple gestural movements, and (3) that ASL is a universal language.

The grammatical structure of ASL is in fact quite complex, dependent on the message being conveyed. The syntax rules of ASL differ significantly from the written or spoken form of the English language. This difference in language syntax impacts the deaf community extensively: the reading skill of a deaf adult is approximately equal to the fourth-grade level of a hearing person [1]. However, the majority of deaf people who have English reading difficulty are fluent in ASL and therefore utilize ASL as their preferred means of communicating.

Additionally, ASL is no more a collection of gestural movements than spoken language is an arbitrary list of words. Word order, context and understanding are vital to the transmission of the information being conveyed. Moreover, ASL is not solely dependent on hand movements but also relies heavily on facial expressions and body movements, known as non-manuals, to convey the meaning of a word, a sentence or a phrase. ASL is indeed its own language. For the purpose of this research, ASL was used in the implementation to convey the significant benefit that the Deaf community can gain from improving Web accessibility with sign-interpreted videos. 2.3 Web Accessibility Web accessibility may be defined as the ability of a person using any software or hardware that retrieves and renders Web content (including assistive technologies) to understand and fully interact with the content of a Web site [37] [41]. Web accessibility has been studied with respect to four main categories of impairments based on user needs: visual, hearing, mobility, and cognitive impairments. Visual impairments require screen readers; mobility impairments require navigational techniques for accessing data; cognitive impairments require a systematic approach for rendering information to ensure thorough comprehension of all posted material; and hearing disabilities have traditionally required textual facilities for posted audio content. Figure 2 is a graphical representation of persons in the U.S. who face disabilities; 15% of the population has some form of impairment. This number indicates that a

huge percentage of the U.S. population stands to benefit from studies within Web accessibility. Figure 2: Persons in the U.S. Facing Disabilities [7] Figure 3 gives a breakdown of the number of persons within each category of impairment. One difficulty with the data retrieved is that physical impairment, for the scope of Web accessibility, generally refers to those who have limited ability to click or use conventional input devices, whereas the total given in [7] represents only wheelchair users; the number can therefore be used only as an approximate tally. This chart nonetheless demonstrates the need to survey each categorical impairment separately.

Figure 3: Distribution of Disabilities in the United States [7] Figure 4 represents the overlap of impairments, which is also insightful for the work to be completed in Web accessibility; specifically, each impairment may need multiple applications to render the data on the Internet. Figure 4: Venn Diagram of Disabilities [20]

2.3.1 Web Accessibility for the Visually Impaired Persons with visual impairments have difficulty accessing information on the Web, and therefore employ assistive technologies such as screen readers, which voice the material, or Braille displays, which also render the information for the user [9]. One of the difficulties faced by those with visual impairments is the navigation techniques that assistive technologies must employ based on the Web design itself. Screen readers read the content of the page using a synthesized voice, generally from top-left to bottom-right. Davide Bolchini et al. recommend an accessibility standard for aural Web sites [5]. They argue that the content of a Web site is insufficient if orientation is missing and the navigation is misleading. Aural applications need to consider the fact that users process information sequentially. Designers should follow the suggested requirements [5]: (1) Provide the user with an aural quick glance of the application, both at the beginning and whenever necessary. (2) Provide the user with an aural semantic map of the whole application. (3) Provide an executive summary of any list of items (especially long ones). (4) Define strategies to partition long lists into smaller meaningful chunks. (5) Provide a semantic back mechanism, emphasizing the history of visited pieces of content, rather than the sequence of physical pages.

(6) Provide a semantic navigation mechanism to go up to the last visited list of items. (7) Keep consistency across pages by creating aural page templates. (8) Minimize the number of templates. (9) Allow the user to quickly grasp how the page is organized by communicating its structure. (10) Read the first key message of the page (e.g. the content), and then the other sections. (11) Allow the user to access a section of interest directly at any time. (12) Allow the user to move forward and backward across page sections (according to the given reading strategy). (13) Allow the user to pause and resume the dialogue flow. (14) Allow the user to re-play an item or an entire section. The proposed requirements for aural Web sites were well received by users of the Munch Web site [5], and evaluations of AURA applications indicate a positive direction for aural research. While thorough research in the area of Web accessibility has been conducted for those who are visually impaired, there remain several opportunities for enhancements, including the above-mentioned requirements.
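Requirement (4) above, partitioning long lists into smaller meaningful chunks, can be illustrated with a short sketch. The function name, chunk size, and announcement format below are illustrative assumptions, not part of the cited AURA work:

```python
def chunk_list(items, chunk_size=5):
    # Partition a long list of links into smaller, sequentially
    # announced groups (aural-design requirement 4).
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

links = [f"Article {n}" for n in range(1, 13)]
chunks = chunk_list(links)
for number, chunk in enumerate(chunks, start=1):
    # An aural interface would announce a summary of each chunk
    # (requirement 3) before reading its items.
    print(f"Chunk {number} of {len(chunks)}: {len(chunk)} items")
```

A screen reader following this strategy would let the user skip whole chunks rather than listen to every item in sequence.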

2.3.2 Web Accessibility for Cognitive Disabilities People with Developmental Cognitive Disorder (DCD) also need to access and navigate information available on the Internet. However, many Web sites remain inaccessible to people with DCD. The most commonly used techniques for handling DCD recommend navigational aids such as visual displays [28]. Various studies investigated navigation aids such as tables of contents, indices, horizontal tabs, hierarchical chains [31], vertical lists of links [8], hierarchical menus [15], and other devices based on paper documents. Further studies suggest that more attention should focus on animation, graphics, audio, and video, as these options may be the most effective methods for improving communication for users with cognitive disabilities [4]. A pilot study [4] evaluated Web site navigation of W3C accessibility-compliant Web sites by persons with DCD [39]. The study examined how persons with mild to moderate DCD navigate through Web sites using a mixed method. Four determinants from cognitive studies were utilized: situation awareness, a person's momentary knowledge of his or her surroundings; spatial awareness, a person's awareness of how content is located in relation to navigational devices; task-switching awareness, a person's ability to move from one task to another; and anticipated system response, a person's perception of how the system should appropriately respond to a user's action. This study concluded that much of the previous research was lacking appropriate solutions for those with DCD, as the trial group was not able to easily navigate or access

information on the Internet without being prompted by an individual. The Web sites used for this research revealed unclear navigational confirmation, inconsistent navigation, non-standard interaction techniques, lack of perceived click-ability, limited user willingness to scroll pages, and limited user ability or willingness to read instructions. Therefore, more thorough studies are needed within the field of Web accessibility for those with cognitive impairments. Cognitive impairments are further categorized by the W3C [48] to include: persons with auditory and visual perception impairments, persons with attention deficit disorder, persons with memory impairments, persons with mental health disabilities, and those with seizure disorders. For persons with impairments relating to visual and audio perception, the W3C [48] states that multiple methods of rendering information may be necessary: persons having difficulty reading may use speech synthesizers to better comprehend the information, and persons with auditory processing problems may use captions to better understand audio formats. Those with attention deficit disorder may need less distracting information, and so may need to eliminate animations, video or audio that detract from the intended content. Persons experiencing intellectual disabilities may have problems comprehending the information. The W3C states that some of the barriers encountered by this category of individuals are complex language, lack of graphics, and lack of clear and consistent organization of Web sites.

Those who have memory impairments may need to rely on a consistent method of navigating through Web sites. For example, all pages within a Web site should be traversed from top-left to bottom-right; therefore, the navigational schema should not change. Persons with mental health disabilities may need to have visual or audio features turned off, and may need screen magnifiers, as they may experience hand tremors and blurred vision as side-effects of their prescribed medications. Also, seizures are usually triggered by visual flickering or audio of certain frequencies, so persons who experience seizures may also need to turn off audio or visual features on a Web site. Web accessibility research for those with cognitive impairments spans several areas of interest. Several methods, such as the strategies discussed above, are employed to achieve Web accessibility for the different areas of cognitive research, and understanding these varied opportunities is critical to developing an accessible Web site. 2.3.3 Web Accessibility for Motor Disabilities Persons with motor disabilities have difficulties with computer interaction. Simple tasks involving keyboard or mouse input, or other tasks requiring the use of software applications, might be difficult for persons with motor disabilities. Commonly, assistive technologies such as speech recognition devices and special keyboards have been employed to assist these users with computer interaction. However, the cost, complexity and availability of assistive technologies often result in frustration for those with motor disabilities. Research on adapting user interfaces to an individual's impairment was implemented [18] using SUPPLE [16] and SUPPLE++

[17], which aims to provide a more robust solution for computer interaction for those with motor challenges. Various tasks, including pointing, dragging, list selection, and multiple clicking, were used to elicit participants' motor abilities. The study demonstrated that participants with motor impairments were significantly faster, made fewer errors, and preferred automatically generated personal interfaces adapted to their individual capabilities using SUPPLE++ over traditional navigational aids for those with motor disabilities. The results of this study therefore indicate that assistive technologies that operate based on group assumptions are not adequate for individual users, and further investigation into adapting to each person's unique circumstances would be beneficial for those with motor impairments. The W3C [48] also indicates that persons facing motor impairments may employ alternate keyboards, switches, or software devices to render the information on the Web. A common Web accessibility technique for those with motor disabilities is the use of access keys, which allow Web authors to replace mouse clicks with key strokes rendered by the user's alternate device. The Web author needs to select which links and controls are important enough to receive a designated access key, assign a reserved access key to the relevant elements, and ensure there are no conflicts between the different access keys. Also, a user might not prefer access keys and might instead rely on icons, in which case the Web author needs to ensure the icons or buttons are consistent in the page design.
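The access-key conflict check just described can be automated. The following sketch, a hypothetical auditor rather than an existing tool, scans a page for duplicate accesskey assignments using Python's standard HTML parser:

```python
from html.parser import HTMLParser

class AccessKeyAuditor(HTMLParser):
    """Collects accesskey attributes and reports duplicates,
    which would make keyboard shortcuts ambiguous."""
    def __init__(self):
        super().__init__()
        self.seen = {}          # accesskey value -> list of tags using it

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "accesskey":
                self.seen.setdefault(value, []).append(tag)

    def conflicts(self):
        # Any key claimed by more than one element is a conflict.
        return {key: tags for key, tags in self.seen.items() if len(tags) > 1}

auditor = AccessKeyAuditor()
auditor.feed('<a accesskey="h" href="/">Home</a>'
             '<a accesskey="s" href="/search">Search</a>'
             '<button accesskey="h">Help</button>')
print(auditor.conflicts())   # {'h': ['a', 'button']}
```

Running such a check during authoring would catch the kind of access-key collisions the guideline warns against before the page is published.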

Speech-driven technologies could also facilitate those who have motor disabilities. Windows Speech Recognition allows both command-driven access and dictation access for those who face motor impairments [32]. For those with motor hindrances, several technologies such as those previously discussed have been investigated and implemented to facilitate ease of access, both for human-computer interaction and for Web accessibility. Further studies are underway to improve upon the current trends, tools and technologies, promising a more user-friendly means of accessing the Web. 2.3.4 Web Accessibility for the Hearing Impaired Current implementations to render Web pages for those who are Hearing Impaired have included CC, as recommended by the W3C [48]: any audio and video files on the Internet should have textual equivalents. Some applications available on the Internet include the option for content providers to add clickable text to audio and videos using the Veotag [46] Web site service. Users can see the veotags whenever they play the audio file on the Web. Another similar application for adding text to videos is the BubblePLY [6] Web site service, which also facilitates the addition of bubbles, images, emotions and symbols not previously integrated with traditional CC. Both Veotag and BubblePLY have synchronization techniques, ensuring that the author can add subtitles simultaneously with the auditory information. Synchronization is also a recommendation of the W3C and is necessary for conveying visual information coincident with text, to ensure less loss of context for the given scene.
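The synchronization these services perform amounts to pairing each caption with a start and end time on the media timeline. A minimal sketch of such timed captions, emitted in the common SubRip (SRT) subtitle format, is shown below; the helper names are illustrative and do not reflect how Veotag or BubblePLY are implemented:

```python
def format_timestamp(seconds):
    # SRT timestamps look like HH:MM:SS,mmm
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    millis = int(round((seconds - int(seconds)) * 1000))
    return f"{hours:02}:{minutes:02}:{secs:02},{millis:03}"

def to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples, already
    aligned with the audio by the author."""
    blocks = []
    for index, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{index}\n{format_timestamp(start)} --> {format_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Good evening."),
              (2.5, 5.0, "Tonight's top story follows.")]))
```

A player reading this file displays each caption only during its time window, which is exactly the coincidence of text and scene that the W3C recommendation calls for.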

The W3C also recommends the use of visual notifications when end-users receive messages or alerts that would generally be rendered in audio format. Implementations for the Hearing Impaired community span an array of research areas, for example avatars, emoticons, and gesture recognition studies. Related issues for the Hearing Impaired are deliberated in detail in section 2.4. 2.3.5 Web Accessibility Evaluation In conjunction with Web accessibility, there is a need for an evaluation framework for the given requirements, and it is therefore necessary to have accessibility evaluators. Some of the evaluation systems include: the W3C Validation Service [47], Bobby [3], Valet [45], Tidy [42], and the Web Accessibility Evaluation Tool (WAVE) [49]. The W3C Validation Service allows users to input a page's URL, upload a file, or directly input markup, to validate the markup validity of Web documents in HTML, XHTML, SMIL, MathML, etc. Completing the required corrections earns a W3C rating from the validation service, and the author is then able to embed the accessibility logo in the page. Bobby was one of the first validation tools that checked both W3C standards and accessibility requirements based on Section 508 of the Rehabilitation Act. The service Bobby offered was one of the outstanding accessibility validators and achieved several CAST awards. A Bobby report contained three sections based on priority. Priority one errors were those that affected the usability of the Web page, and fixing these problems would achieve a Bobby rating. Priority two errors were not as critical as

priority one elements, but correcting issues of priority one and two was considered the minimum for a Web page to be deemed accessible. Priority three included other errors that, once rectified, earned the page a AAA Bobby-approved rating. Bobby was sold to Watchfire in 2004, which was later acquired by IBM in 2007; unfortunately, Bobby is no longer available. Valet is another currently used Web accessibility evaluation site. The Valet tool analyzes pages and provides reports of various errors and warnings. Page Valet's accessibility checks are applied to the page elements and attributes. Accessibility warnings are linked to the elements, thereby drawing attention to the problems in the report. Tidy was built based on the W3C validator, initially designed to check HTML correctness [29]. Tidy is also available as an open source project and was designed for easy integration with other software. Tidy itself is able to correct many of the errors it finds; those that cannot be amended are logged as errors [48]. WAVE, a current accessibility evaluation system, is a free tool provided by WebAIM. WAVE takes a page's URL and evaluates accessibility based on W3C standards. WAVE is used to assist humans in the evaluation process. Unlike the previously mentioned evaluation tools, which give a complex report of errors, WAVE shows the original Web page with embedded icons and indicators that identify the errors, which might be preferable to authors during the evaluation process. Evaluation of Web pages is necessary to ensure that all users have access to the information being presented on the Web. Also, having evaluation services for

organizations that maintain Web pages helps ensure that the appropriate standards for textual and signed equivalents are followed, permitting access to consumers. 2.4 Approaches of Web Accessibility for the Hearing Impaired Several areas of research have been conducted for those who are Hearing Impaired, such as avatars, gesture recognition, signlinking, multi-person type-as-you-chat text, emotive captioning and automated captioning. The remainder of section 2.4 gives a detailed description of these varied areas of research and how they relate to Web accessibility and the Hearing Impaired community. 2.4.1 Avatars Avatars have been implemented for a myriad of reasons. As they relate to Web accessibility for Deaf users, avatars have been investigated mainly for machine translation purposes: a sign language output may be produced via an avatar that has been given some written or coded input. Although studies have attempted an adequate translation to sign languages, several obstacles in the grammatical and linguistic functionality of signed languages have made implementation quite difficult, with room for several enhancements. Limitations, such as dialogued communication, restrict the scope of interpretation skills of an automated tool, especially in the scenario where audio is the original source. Ensuring the correct word is applied for conceptual accuracy is also a very difficult task: where in English a word may have multiple meanings, in sign language the

word may not be conceptually appropriate for a given sentence, and a more suitable word or sign may convey the intended meaning. Providing signed content on the Internet by synthesized animation [10] does not aim to replace human interpreters, as animation cannot supply all of the vital information: animations cannot attain the level of expressiveness produced by human interpreters and, further, cannot distinguish among contextual scenarios. The synthesized signing animation is delivered to the end user through a browser plug-in, which contains an avatar and software to translate Signing Gesture Markup Language (SiGML) [38] into motion data. An example of delivery to the end user was completed on a Dutch government form [26], where clicking on a caption beside each item played the related sign animation, as displayed in Figure 5. Figure 5: Synthesized Sign Animation [26]

The evaluation of the signing system was conducted with users who utilized sign languages as their primary communication method. Comprehensibility of single signs achieved 70% in the UK and 75% in the Netherlands, and final improvements led to further comprehensibility of 95% in the Netherlands. However, comprehension of signed sentences and text chunks achieved 40.4% in Germany, 46% in the UK and 35% in the Netherlands. Again, with further updates in the Netherlands, comprehension increased to 58%, and the second test in Germany achieved 62%. Generally, misunderstandings were due to unclear signs, ambiguity, missing or unclear non-manuals (facial and body expressions), and missing or incorrect prosody (pauses). Work with avatars is promising for the Internet environment. Production of videos, cost of interpreters, and storage and downloading of videos would all be overheads eliminated for companies that need to implement sign translations into their Web pages or Web sites. However, developing the concept far enough to achieve an adequate level of comprehensibility is vital before presenting synthesized animation to the end-user. Another piece of avatar-based research is by UK IBM scientists. They translated speech to text, and then text to sign, using IBM's ViaVoice speech-to-text technology, thereby developing the avatar technology SiSi (say it, sign it) [33]. SiSi was also not developed to replace human interpreters, but rather to be used when a human interpreter is not available, or when confidentiality is a factor. Further, SiSi does not translate signs to voice, as human interpreters can, demonstrating that more research in avatars is needed to create an adequate sign translation application. SiSi was designed to

work with both Sign Supported English (SSE) and British Sign Language (BSL); however, further improvements are necessary to correct the syntax and grammatical structure of BSL. Figure 6 is a representation of the IBM prototype SiSi. IBM adds that there are no plans to license or sell SiSi, as it is still very much a prototype requiring further development. Figure 6: IBM's SiSi, Translates Speech into Sign Language [33] 2.4.2 Gesture Recognition Gesture recognition research attempts to analyze the motional movement of an individual. This generally requires a video capture device to evaluate the motion of various body parts, such as arm movement, facial movement, mouth movement, or the body as a whole. In relation to accessibility, gesture recognition has been researched to provide a method by which the Deaf can relate directly with hearing people: when a sign is formed, the equivalent textual form is rendered. In terms of Web accessibility,

gesture recognition could provide a solution where the Deaf community can sign information and the equivalent textual form is written, thereby facilitating duplex communication between Deaf and hearing participants (who do not generally know each other's language). Previous research within gesture recognition has proven quite difficult. Determining where one sign ends and another begins is certainly a challenge, and requires supplementary examination. The use of various devices to recognize body movements has also been part of this investigation: colored gloves to distinguish hand movements, eye goggles and a microphone to capture facial behavior, and accelerometers to account for speed and directional motion. Figure 7 depicts the devices used to capture gestural movements: an eye display, a blow switch, and the Acceleglove. Figure 7: Tools Employed for Gesture Recognition [21]

One example of gesture recognition research is the phraselator [21], which attempts portable ASL-to-English translation. This device was initially developed to recognize the ASL alphabet, with a two-link arm skeleton that detects hand location and movement with respect to the body. The device interprets finger-spelled words and hand gestures into voiced output. To enhance the system, predictors were introduced. Predictors work by searching for words beginning with the initially signed letter: if the signer begins with the letter A, the program immediately looks in a file of words beginning with A. The phrase predictor follows the same logic: once the user starts with a specific word, the phraselator begins to single out candidate phrases, so that the user may select the appropriate match. The ASL phraselator demonstrates that predictors improve the accuracy and speed at which information is relayed. Additionally, communication between hearing and Deaf individuals is facilitated with the use of gesture recognition software, hardware, and instruments, indicating that strides to improve dialog for ASL users are proceeding. 2.4.3 Signlinking Although the Web interface has been evolving, many aspects remain static, with text and images as the main elements of Web authoring. In order to support ASL on the Web for the Deaf community, a proposed solution enables Web designers to create links within video material (based on signs and gestures utilizing hyperlinks), allowing Web browsing without the use of written languages [10].
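Returning briefly to the phraselator described above, its word predictor is essentially a prefix lookup over a vocabulary file. A minimal sketch, with an invented vocabulary and function names chosen only for illustration:

```python
def make_predictor(vocabulary):
    """Returns a predictor that, given the letters fingerspelled so far,
    suggests matching words -- mirroring the phraselator's word predictor."""
    def predict(prefix):
        return sorted(word for word in vocabulary if word.startswith(prefix))
    return predict

predict = make_predictor({"apple", "applaud", "apply", "banana"})
print(predict("app"))   # ['applaud', 'apple', 'apply']
print(predict("b"))     # ['banana']
```

Each additional fingerspelled letter narrows the candidate set, which is what lets the signer stop spelling early and pick a completed word or phrase.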

Conventional hyperlinks consist of text or an image in a document embedded with a target URL; the user clicks on the hyperlink to load the target page. Signlinking provides a similar concept, but accomplished with videos and animation rather than text or images. The author flags links in the video for a specified time interval and indicates each link with a red rectangular box, ensuring that the box encompasses the signer's upper torso and hands to avoid visual distractions for the user. The associated case study included nine Deaf ASL users: five found Signlinking difficult to learn, three found it easy, and one participant was neutral. Seven of the participants confused the play and link controls. However, they all commented that the ASL Web was an innovative and enjoyable experience. Figure 8 further portrays the elements of Signlinking that facilitate a fluid hyperlink format. This research addresses previous barriers between ASL and the written languages.
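A signlink can be modeled as a time-stamped region of the video carrying a target URL. The following sketch of the lookup a player might perform is a hypothetical data model for illustration, not the implementation described in [10]:

```python
from dataclasses import dataclass

@dataclass
class Signlink:
    start: float   # seconds into the video when the link becomes active
    end: float     # seconds when it deactivates
    target: str    # URL loaded when the highlighted region is clicked

def active_links(links, playhead):
    # Return the signlinks the player should highlight (e.g. with a
    # red rectangle) at the current playback position.
    return [link.target for link in links if link.start <= playhead < link.end]

links = [Signlink(2.0, 6.0, "/contact.html"), Signlink(5.0, 9.0, "/news.html")]
print(active_links(links, 5.5))   # ['/contact.html', '/news.html']
```

Because the intervals may overlap, more than one link can be active at once, which is why the visible rectangle is needed to tell the user which region is clickable at any moment.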

Figure 8: Signlinking, Hyperlinking with Signed Videos [10] 2.4.4 Multi-person Type-as-you-chat Text In common face-to-face interactions, turn-taking (the alternating of communication among multiple people) may not have set rules, unlike classroom environments where students generally raise their hands and are selected to speak in turn. However, in situations where Deaf people are involved, turn-taking is essential, as attention needs to be focused on the interpreter or the other signer. Deaf individuals are often frustrated when attempting to communicate with hearing people, due to the fluidity of hearing people's conversations. A Multichat proposal [34], for conversations involving both deaf and hearing individuals, incorporates a Web site that facilitates multiple people typing as they speak.

This method of typing as you talk ensures the fluidity of the conversation and lessens the loss of material experienced by the Deaf community when attempting to hold discussions with hearing individuals. The multichat system offers a URL to the specified conversational participants, so each speaker can type as they speak and the Deaf participants have visual access to the conversation in textual format. Figure 9 shows a graphical layout of the multichat system. Figure 9: Multichat Illustration [34] The architecture the multichat researchers presented synthesizes Web technologies, Web page authoring and computer-mediated conversation, so that the Web itself becomes a platform for fluid conversation among all participants.

2.4.5 Emotive Captioning Emotive Captioning (EC) [27] was investigated for the purpose of integrating emotional effects not conveyed by traditional captioning. EC aims to portray the six common emotions: anger, fear, sadness, happiness, disgust and surprise. The main objective of the EC implementation was to combine graphics and text to represent emotions and sound effects in television captions. The new EC framework produced graphics, color and icons that more effectively conveyed information equivalent to that produced in the movie. The EC were then embedded into a movie as subtitles, as depicted in Figure 10. Figure 10: Emotive Captioning [8]
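The graphics-plus-text idea can be sketched as a lookup from the six emotions to display styles. The colors and icons below are invented for illustration only and are not the ones used in the EC study:

```python
# Hypothetical emotion -> presentation mapping covering the six
# common emotions the EC research targets.
EMOTION_STYLES = {
    "anger":     {"color": "red",    "icon": ">:("},
    "fear":      {"color": "purple", "icon": "!!"},
    "sadness":   {"color": "blue",   "icon": ":("},
    "happiness": {"color": "yellow", "icon": ":)"},
    "disgust":   {"color": "green",  "icon": "X("},
    "surprise":  {"color": "orange", "icon": ":O"},
}

def emotive_caption(text, emotion):
    # Fall back to a neutral style for unlabeled dialogue.
    style = EMOTION_STYLES.get(emotion, {"color": "white", "icon": ""})
    return f'[{style["icon"]}] {text} (rendered in {style["color"]})'

print(emotive_caption("I can't believe it!", "surprise"))
```

A renderer built this way keeps the caption text itself unchanged while layering the emotional channel on top, which matches the study's finding that HOH viewers valued the enhancement while Deaf viewers could still fall back on the plain captions.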

The EC were then presented to a group of individuals who completed a survey on their reception of the new form of captioning. The survey utilized six Deaf users and five hard of hearing users to evaluate the efficiency of EC. The surveys concluded that the needs of HOH participants differ greatly from those of Deaf participants: HOH participants were more positive about the caption positioning, colors and enhanced captioning produced by EC, while the Deaf group remained partial to traditional forms of captioning. 2.4.6 Automated Captioning Automated captioning is a need for the HOH community. An implementation has been achieved for broadcast news programs in Japan [1]. This project first identified the need for automation based on the speed of manual entry when handling ideographic characters. The research achieved recognition rates of 95%. The technique invoked Hidden Markov Models (HMMs) and word bigrams and trigrams to improve recognition rates. Implementation would further require that acoustic models be gender-dependent but speaker-independent, to ensure each broadcaster would not need to train the system individually. The achievements in automatic CC completed in this research could serve both for broadcasting on television and for Web-based outlets (as proposed by this thesis). 2.5 Summary The work surveyed covered each category of accessibility: visual, cognitive, motor, and hearing impairments. These categorical studies have demonstrated that Web

accessibility dilemmas remain prevalent and that more extensive work is needed to produce sophisticated approaches for achieving Web accessibility. In recognizing the four main categorical studies of accessibility, it is essential to note that no sector can be handled in isolation. For example, if a Web site is completely aural for those who are blind, it eliminates accessibility for those who are Hearing Impaired, and vice versa. Work particularly for the Deaf and HOH is underway. However, more work remains to be completed on each individual project, and as a whole, to ensure the Hearing Impaired community has access to any material they wish to retrieve. Integration of some of the completed work may also lead to more productivity and less redundancy. Nevertheless, all of the discussed work for the Hearing Impaired has specified roles, each of which can benefit sectors of the Hearing Impaired community tremendously and lead to further insight and prototypes.

Chapter 3 Evaluation Framework "The only disability in life is a bad attitude" (Scott Hamilton). Posting videos online has become increasingly common among news entities, individuals, and corporations. For example, YouTube [51] has approximately 65,000 new videos uploaded daily. Unfortunately, the majority of these videos have not been accessible to the Hearing Impaired community, due to the lack of CC or any other resource to accommodate those with hearing deficiencies. Content has also not been fully accessible to the Deaf community due to language barriers between written languages and sign languages. This chapter proposes a framework that presents guidelines for developers to ensure Web accessibility for all sectors of the Hearing Impaired community, as well as new criteria not implemented in past regulations. In order to effectively evaluate the adequacy of the Internet, this framework proposes guidelines for both the Deaf and HOH sectors, assuring that Web developers and the other necessary participants (such as interpreters) have a sufficient metric for evaluating development and providing satisfactory results for presenting an accessible Web interface.

Figure 11 provides an overview of the requirements for Web developers and interpreters. Figure 11: Framework Overview 3.1 Developers Framework I: Hard of Hearing (HOH) Daniel Berry et al. recommended that for sound-based interfaces, output must be in both sound and textual formats [2], thereby ensuring that both the visually impaired and the Hearing Impaired are able to successfully retrieve any data available on the Internet. It is therefore essential that a textual equivalent is always available for audio content. As recommended by the W3C, the text must further be synchronized with the visual content, so that HOH individuals are able to relate it to the essential visual information being portrayed (when video formats are present). Some of this essential visual information might include lip reading, facial expressions, and background movement

that might not be presented in the audio, demonstrating that traditional CC of the material omits pertinent information. It is also important that emotional content and background information be presented, so that no aspect of the material is eliminated from the data being communicated. As discussed in an earlier section, research regarding emoticons concluded that rendering emotions with icons was positively received by the HOH community, and verified the lack of ample CC facilities for the HOH sector in current Web application implementations. The guidelines of the proposed Framework I: HOH for developers require that the Web provide CC for audio and video content. Audio formats on the Internet include voice messages, music, videos, and newscasts, to name a few; these formats have been gaining popularity and, more importantly, may convey critical information. The integration of CC with emotional representation is essential to ensuring that no aspect of the Web is inaccessible to HOH users. Further, the synchronization of videos with captions is of vital importance, as it lets users link visual information concurrently with the textual form of the data. Without synchronization, HOH users will not have access equal to that of hearing users, and the video would be interpreted as a story rather than a visual display of the content. Emotional content is vital in presenting the aspects of video and audio material that have previously been disregarded: emotional portrayal enhances the textual data and gives HOH users an experience closer to that of hearing users for the given audio.
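To make the coupling of captions, synchronization, and emotional content concrete, the three requirements can be sketched as a small data structure and lookup. This is an illustrative sketch, not part of the thesis; the cue format and all names are assumptions.

```javascript
// Illustrative sketch (not part of the thesis prototype) of time-coded
// caption cues that carry emotional and background annotations alongside
// the text, so a player can render them in sync with the video.
var cues = [
  { start: 0.0, end: 2.5, text: "Good evening.", emotion: null },
  { start: 2.5, end: 5.0, text: "Run!", emotion: "screaming" },
  { start: 5.0, end: 7.0, text: "door slams", emotion: "sound effect" }
];

// Return the cue active at a given playback time, or null if none is.
function activeCue(cues, time) {
  for (var i = 0; i < cues.length; i++) {
    if (time >= cues[i].start && time < cues[i].end) return cues[i];
  }
  return null;
}

// Render the caption text, prefixing any emotional annotation so HOH
// viewers receive the non-verbal content as well.
function renderCaption(cue) {
  if (!cue) return "";
  return cue.emotion ? "(" + cue.emotion + ") " + cue.text : cue.text;
}
```

A player would call activeCue(cues, video.currentTime) on each time update, so the caption, its timing, and its emotional annotation stay coupled, as the guidelines require.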

It is important to note that each of the recommendations above is tightly coupled with the others. For example, providing CC without synchronization or emotional content does not achieve an adequate level of equality for the requested audio or video file; the intent, purpose, and idea must be equivalent to the original data. Table 2 describes the criteria to be fulfilled when evaluating Web accessibility for the HOH. The table specifies the guidelines for developers that ensure complete Web accessibility for the HOH sector.

Table 2: Developer Requirements for the Hard of Hearing

Closed Captioning (CC): CC must be complete for all online audio and video to be accessible to the HOH community. CC itself may not have a specified threshold for evaluation, but it must meet a level of comprehensibility so that the appropriate intent is conveyed in the textual format.

Synchronization: Synchronization of video material with its textual equivalent is vital in portraying the visual information the content provider is presenting.

Emotional content and background information: Emotional content such as sarcasm or screaming, and other background information such as sound effects present in the audio, should be integrated with CC to allow complete understanding and interpretation of the material presented.

3.2 Developers Framework II: Deaf

This framework for the Deaf sector depicts the requirements for embedding sign language content, where the term "embedded" is used loosely: for the purposes of this thesis, embedded content may appear on a newly directed page or on the same page as the original content. The embedded content should be an optional feature, so that users with

other disabilities are not burdened with redundant data, textual and then signed, which they have already received by using assistive technologies. Moreover, certain criteria must be met when producing and embedding video translations into signed languages. Developers Framework II: Deaf lists the essential guidelines, separated into two categories: guidelines for interpreters and guidelines for Web developers. The essential purpose of the interpreters' guidelines is an equivalent translation of the material presented. Interpreters also need to ensure the video is of sufficient quality to be rendered by the Web page, so that the user can view it without hindrance or obstruction. Interpreters' responsibilities include ensuring the accuracy of sign, such that an equivalent sign is produced for the textual or audio content being translated, and ensuring that the video to be published meets adequate standards for presentation to the Deaf community. Adequate video standards entail appropriate lighting conditions, appropriate resolution, and an unobstructive spatial reference for the signer within the video, so that facial expressions are clear and both hands are visible. Of equal importance are the criteria to be followed by the Web developers or designers, including the placement of the video in the embedded page (such that flashing material or other distracting information does not surround the video), a video size large enough for comfortable viewing, and, where applicable, (in

cases of video material) synchronization of the sign videos with the original videos, so that relevant visual information is not lost. Such relevant information could include pictorial information in scenes of the video, the location of the video, timelines, and similar elements. Moreover, synchronization reassures the deaf individual that the interpreter has not eliminated any aspect of the video, since the visual content can be compared against the signs being produced by the interpreter. Table 3 presents the guidelines that an interpreter should adhere to when producing sign language videos for a Web platform. Figure 12 displays a comparison of two videos, illustrating some of the problems that result when the interpreter guidelines are not considered in a video capture.

Table 3: Interpreter Requirements for Deaf Framework

Sign Accuracy: The interpreter must present the exact purpose, thought, and intent conveyed in the original message.

Video Quality for Capture: 1. The capture must be stable, without latency or jerks. 2. The resolution should be clear. 3. The lighting and background should be appropriate.

Interpreter Spatial Reference: The interpreter must be centered to ensure a clear view of hand, facial, and body movements.
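The Table 3 requirements could be encoded as a simple pre-publication checklist. The sketch below is hypothetical; the field names and the resolution threshold are illustrative assumptions, since the thesis does not prescribe concrete values.

```javascript
// Hypothetical encoding of the Table 3 interpreter requirements as a
// checklist a capture could be screened against before publication.
// All field names and the 480-pixel floor are assumptions.
function meetsCaptureRequirements(capture) {
  return capture.stable === true &&            // no latency or jerks, stable capture
         capture.heightPx >= 480 &&            // "clear resolution" (assumed floor)
         capture.lightingOk === true &&        // appropriate lighting and background
         capture.interpreterCentered === true; // face and both hands clearly visible
}
```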

Figure 12: Comparison of Interpreter Capture. Left, a poor interpreter capture: background clutter is distracting, lighting conditions are inappropriate, and the interpreter's spatial reference is off. Right, the preferred capture, which follows the guidelines.

Analysis of the comparison:

The background must be solid, as noted in Figure 12. A plain background ensures that the visual information relayed by the interpreter is not detracted from.

Appropriate lighting conditions are essential. Inappropriate lighting may create unnecessary distractions for users and may even make it impossible to view the interpreter successfully.

The spatial reference should be centered. In the left video, both hands cannot be seen by the user, resulting in a loss of the information being presented.

Table 4 discusses the elements a developer should adopt when integrating signed translations into Web pages.

Table 4: Developer Requirements for Deaf Framework

Video: 1. Size of video: the video must be of adequate size to relay the necessary information, but should not be so big that it detracts from the actual document. 2. Placement on the page: the video must be placed in a prominent location on the page, but should likewise not detract from the content of the page.

Synchronization with original video: where video is the content being interpreted, the signed video must be synchronized with the original video, to portray visual information that might not be available in the signed interpretation.

The framework proposed in this thesis provides guidelines on how to embed adequate videos for content available on the Internet. The strategy employed is to access the video and create a script that embeds it in the required page. Web developers may simply make this script available on the page; any user who wishes may click the script, and the video is then displayed. Once the script has been downloaded, there is no need to update it unless the content has changed.
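The synchronization requirement in Table 4 can be sketched as a small drift check. This is a minimal sketch under assumptions, not the thesis's implementation; the tolerance value is illustrative.

```javascript
// Minimal sketch of keeping an embedded sign-language video in step with
// the original video, as Table 4 requires. The half-second tolerance is
// an assumption, not a value from the thesis.
var MAX_DRIFT_SECONDS = 0.5;

// Pure drift check: has playback diverged beyond the tolerance?
function needsResync(originalTime, signTime, maxDrift) {
  return Math.abs(originalTime - signTime) > maxDrift;
}

// In a browser this would run from a "timeupdate" listener on the original
// video element; here the two players are represented as plain objects.
function syncSignVideo(original, sign) {
  if (needsResync(original.currentTime, sign.currentTime, MAX_DRIFT_SECONDS)) {
    sign.currentTime = original.currentTime; // snap the sign video back in step
  }
}
```

Keeping the check pure makes the tolerance easy to tune without touching the player wiring.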

3.3 Summary

Adequate frameworks are essential for evaluating Web accessibility for both the HOH and Deaf sectors. Since each sector of the Hearing Impaired community has different requirements, separate frameworks are essential and were therefore provided accordingly. Those in the HOH group may be comfortable with the written and spoken forms of language (the language depending on country of origin or residency), and therefore simply require textual equivalents. Textual equivalents of multimedia content should provide emotional and background content, and must also be synchronized to ensure adequate conveyance of all the information posted. Textual representations are, however, an insufficient translation of content, video, and audio for the Deaf community; therefore, signed translations must be available for the varied forms of content. Sign language translations must be appropriately interpreted by the signer, producing a sign language video of high quality. Developers have an equal responsibility in maintaining page standards, determining video sizing and placement according to the data on the page, and ensuring that synchronization with the original video is maintained.

Chapter 4 Implementation

"A community that excludes even one of its members is no community at all" Dan Wilkins

Two separate approaches were investigated to provide adequate solutions for Web accessibility for both the Deaf and HOH communities. The first endeavor attempted to achieve automatic CC on the Web by utilizing the Windows Vista Speech Recognition application, with a trained voice, for YouTube uploaders; this implementation facilitated the process of creating text for video content posted on YouTube. Automatic CC is geared towards the HOH sector, who are more comfortable with the written forms of language. The second venture implements sign video translations, embedded into the Web for both online content and multimedia, and targets improvements in Web accessibility for the Deaf sector, who use sign language as their primary method of communication.

4.1 Automatic Captioning

In the preliminary approach, an automated solution was proposed to resolve the lack of CC: speech recognition software would be used to translate audio into textual format, providing a solution for the HOH sector. The speech recognition software must be trained for a specific person's voice in order to obtain a textual representation, meaning that every potential speaker would need to complete a one-time, ten-minute training session. Speech recognition for single-user input achieved approximately eighty percent word accuracy; out of one hundred and ninety-six words, one hundred and fifty-seven were accurately written. However, the captions produced were insufficient to ensure conceptual accuracy, due to the ambiguity of spoken language and the speech recognition device's misinterpretations. This research concludes that further investigation of automatic CC could achieve recognition rates more effective for conceptual accuracy.

4.1.1 Components for Automatic Closed Captioning

Gathering the necessary tools and applications to aid in developing a prototype for automatic CC was the first element of implementation. The applications included: Firefox [1], Greasemonkey [19], Apache Tomcat [43], the Windows Vista Speech Recognition application [32], Youtube-dl [52], and a command line driven media player.

Firefox [1] was chosen as the browser on which to test the prototype. Firefox supports add-ons [12] that were used in the development process. Firebug is one such add-on; it allows users to inspect the Document Object Model (DOM) of a Web page and permits developers to edit, debug, and monitor HTML, JavaScript, and CSS code in any Web page. The other add-on vital to the prototype is Greasemonkey [11] [12] [19], a client-side Firefox plugin that allows dynamic rendering of Web pages using small pieces of JavaScript. Greasemonkey is intended to make sites more user-friendly and is also commonly used to better accommodate assistive technologies such as screen readers. Greasemonkey by itself does nothing; it requires user scripts, which provide the custom user-driven content. User scripts consist of JavaScript code that tells Greasemonkey where and when it should run; a script may target a specific page, a group of pages, or an entire site. Both Greasemonkey and the user scripts can be made available to any user who downloads them. In the prototype, Greasemonkey embeds a Caption Bar into the YouTube Web site. The Caption Bar serves as an input box for the text document dictated from the associated video: once the server returns the text document to the client, the text is inserted into the Caption Bar. Another vital application was Tomcat [43], which can be used as a Java servlet container and as a Web server. Basically, a Web server responds to client requests from

either static HTML pages or runs user programs in response to a request from a Web browser. The servlet that was created captures the URL and passes it as an argument to a batch file on the server computer, followed by the execution of a few instructions to complete the dictation from the video into a Notepad .txt file. The servlet then takes the output from the batch file and returns it to the Caption Bar, which is created by the Greasemonkey JavaScript file on the client side. The speech recognition tool for translating the audio into a textual document was Windows Vista Speech Recognition [32], an application developed for accepting Windows commands and for dictating to word processor applications. The speech recognition application requires a simple ten-minute voice training session and includes a learning tool that can add specified words via spelling and voice repetition. The Windows Speech Recognition application is used here to control the computer automatically, both as a command tool and as dictation software. The hosting site chosen was YouTube [51], because of its openly accessible nature. A video was created during this research and uploaded to the YouTube site. Youtube-dl [52] is a command line Python script that permits downloading videos from YouTube's Web site. Youtube-dl requires the Python interpreter and works in most Operating System (OS) environments. As YouTube saves videos in Flash Video format, the extension is .flv when downloaded by Youtube-dl, and the file can be played by Flash-capable players such as MPlayer and VLC. The command line invocation for Youtube-dl is

youtube-dl followed by the video URL or identifier. For example, the argument for a fictitious video would be c:\>youtube-dl followed by that video's URL; the video would be saved as foobar.flv to the local machine, where it becomes accessible for use. MPlayer is a blocking, command line driven media player developed for Linux. MPlayer plays most media file formats, including Flash videos. MPlayer is used in the prototype to ensure that further execution of the batch file does not continue until the media file has finished playing for the purpose of dictation. All of the above components work together to achieve automatic CC on the Web for user-created videos. While each individual component may have a better-suited alternative, these applications were chosen for the prototype because they were all freely available options that eliminated expenditure and proved the concept viable for the average content provider.

4.1.2 Architecture for Automatic Closed Captioning

Figure 13 gives a graphical overview of the implementation discussed in this section, describing both the server and client perspectives and the associated details of the implementation.
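The client-side pieces described in the components above (parsing the video URL, requesting the dictated text from the Tomcat servlet, and placing the Caption Bar) can be sketched as small functions. The element markup, servlet path, and parameter name below are assumptions for illustration, not the prototype's actual code.

```javascript
// Extract the YouTube video identifier from a watch-page URL.
function videoIdFromUrl(url) {
  var match = /[?&]v=([^&]+)/.exec(url);
  return match ? match[1] : null;
}

// Build the (hypothetical) Tomcat servlet request for the dictated text.
function captionRequestUrl(pageUrl) {
  var id = videoIdFromUrl(pageUrl);
  return id ? "/captioner/servlet?video=" + id : null;
}

// Markup for the Caption Bar the user script places beneath the player;
// the returned text file is later written into the textarea.
function captionBarHtml(videoId) {
  return '<div id="caption-bar-' + videoId + '">' +
         '<textarea id="caption-text-' + videoId + '" readonly></textarea></div>';
}
```

In Greasemonkey, the script would insert captionBarHtml(...) after the player element, fetch captionRequestUrl(location.href) (for example with GM_xmlhttpRequest), and copy the response into the Caption Bar.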

Figure 13: Implementation of Closed Captioning

First, on the client side, the Greasemonkey script embeds a Caption Bar directly beneath the video on the YouTube page. The Uniform Resource Locator (URL) is then parsed by the Greasemonkey script so that Tomcat can retrieve the required elements from the page on the server side. On the server side, a batch file is called to invoke the several applications required to complete the automatic CC. The first element of the batch file is an audio instruction, "start listening", which initiates the Windows Speech Recognition system: the start.wav file is played using Sounder [40], an audio application that plays two seconds of a .wav audio file when invoked from the command line. Once the recognition system is set to listening mode, the YouTube URL for the video is then inputted so that Youtube-dl can begin downloading the video with the

required input. The audio captured from Youtube-dl is copied to MPlayer and converted from .flv format to .mp3 format. The Notepad editor is then invoked directly from the batch file via the command line, c:\>start notepad c:\captioner\text.txt, which enables a Notepad .txt file to begin recording the dictation. The copied .mp3 audio file obtained from the video is then played using the MPlayer application (a blocking audio player), ensuring that the batch file does not continue execution until the audio file has finished playing; if the batch file continued execution while the audio was still playing, an incomplete dictated .txt file would be returned to the client. As MPlayer plays the audio, the dictation is written directly and simultaneously to the Notepad file. Finally, an audio file called save.wav, containing a set of instructions (initially recorded by the speaker) to control Notepad, is played to save the text.txt file; the instructions are "file", "close", "enter". Since Notepad was opened with a given file name, saving the document required no further instructions. The text file is then passed back to the client side via the Tomcat server, and the Greasemonkey script embeds the text into the Caption Bar previously placed below the video on the YouTube page.

4.1.3 Approach I: HOH

In order to manipulate the various programs and applications automatically, it is essential to create a few audio files of voice commands, which enable

the computer to open and close applications, including the Windows Speech Recognition device and a word processing tool (Notepad). An initial voice training session by Web posters (which should be initiated by hosting Web sites that facilitate video posting) should be conducted in order to offer automated CC for HOH end-users. This simple training session should also include the phrases needed to control the necessary computer software: initiating the speech recognition software (as previously mentioned) and opening and closing the appropriate word processing tool. This ensures a one-time voice instruction setup per speaker / content provider. Automatic CC becomes far more attainable once Web content providers realize that, after the initial training session has been completed, no further work is needed to provide CC for future video posts. Content providers would then be more amenable to applying CC to their multimedia content, facilitating an accessible Web with minimal time requirements from the video uploaders themselves. The primary result of such a system is the availability of material previously inaccessible to the HOH community.

4.1.4 Evaluation of Automatic Closed Captioning

According to Developers Framework I for the HOH community, it is essential to generate CC from the audio, synchronize the captions with the visual media, and render any emotional content conveyed in the presented multimedia.

The Windows Speech Recognition [32] software used in the prototype produces recognition rates of approximately 80% accuracy, counting words that are incorrectly placed and words that are misinterpreted. This standard does not by itself satisfy the need for conceptual accuracy, as certain words were taken out of context. Table 5 shows both the given audio and the dictated output, with incorrect translations highlighted in red text. In order to adequately evaluate the system, certain limitations of speech recognition devices need to be identified. One limitation is the loss of context due to human ambiguity, a difficult element to resolve that may only be rectified with manual correction or appropriate probabilistic methods. It is also essential to realize that speech recognition systems recognize certain words, such as "dot", as literal input rather than as the punctuation mark (the period); another example is the misrepresentation of homophones such as "there" and "their". A further limitation to be resolved is the synchronization of the captions with the video, which is customary for CC on Television (TV) systems and is the standard recommended by the W3C WAI [48]. As previously discussed in the framework chapter, synchronization is one of the essential criteria to be adhered to by developers in presenting an adequate remedy for the Hearing Impaired community overall, ensuring visual content is easily related to the textual information. Emotional content must also be integrated into CC to satisfy all the elements of Developers Framework I: HOH. Figure 14 shows the integration of the Caption Bar with the textual output on the YouTube site, demonstrating end-users' retrieval of automatic CC.
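The reported accuracy figure and the "dot" limitation can both be illustrated with a short sketch. The spoken-token rewrite is a hypothetical example, not part of the prototype; homophones such as "there"/"their" are deliberately left alone, since they require grammatical context that a mechanical rewrite cannot supply.

```javascript
// Word accuracy as reported in Section 4.1: 157 of 196 words were
// accurately written, which works out to roughly 80%.
function wordAccuracy(correct, total) {
  return correct / total;
}

// Mechanical post-processing for spoken punctuation: "CNN dot com" was
// dictated literally in Table 5, but "dot" between words can be rewritten.
function normalizeDictation(text) {
  return text.replace(/\s+dot\s+/gi, ".");
}
```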

Table 5: Results from Automatic Closed Captioning

Actual Audio: Weekly Status Report, Simone Pasmore, Date June 2, 2008, Activities met with two fellow colleagues to discuss Web plugin feasibility, in relation with the software and the integration. We decided on the Greasemonkey and MonkeyGrease for client server and server client. Read 3 related papers based on closed captioning and speech recognition. Mixed audio for input, so that audio may be captured directly from the computer, rather than the microphone. Downloaded software both SoundForge the trial version, and Audacity, to research feasibility of voice extraction from music background, implementing pauses for dictation to be captured, and to analyze voice patterns. Recorded several audio tracks from YouTube, news.bbc.co.uk and CNN.com to decipher specific voice regulations with dictation software. Issues and Problems, Voice patterns are recognizable for single user only attaining API or all utilized software was unattainable. Plans have a meeting with colleagues for a further update. Test the possibility of having multiple user recognition from the Windows Speech Recognition Software. Test the feasibility of having Windows Speech Recognition validate the software for online Web loaders automatically. Upload video for testing purposes, Integrate trial software process and investigate coding strategies to more efficiently, and effectively produce results.
Produced Output: Weekly status report Simone Pasmore date june in SAC and 2008 activities met with two fellow colleagues to discuss Web plugin feasibility in relation with the software and the integration the decided on grease monkey mom degrees for 10 server and server client wrench three related papers based on tools captioning and speech recognition mixed audio for input so that the body may be captured directly from the computer rather than microphone downloaded software will sound forge the trial version and a lot of city to research feasibility of voice extraction from music background implementing pauses for dictation to be capture of 10 to analyze voice patterns recorded several audio tracks from youtube news of BBC SCO the UK and CNN dot com to decipher specific voice regulations with dictation software issues and problems boards patterns or recognise of both for single user only a tiny garden for all utilized software was unattainable towns have a meeting with colleagues for a further update test the possibility of having multiple user recognition for the windows speech recognition software test the feasibility of having windows speech recognition validate the software for online Web loaders automatically of no big deal for testing purposes integrate trial software? and investigate coding strategies to more efficiently and effectively produce results

Figure 14: Caption Bar on YouTube Site

4.1.5 Survey of Automatic Closed Captioning

Four Deaf (ASL-using) participants were surveyed to evaluate the effectiveness of the automatic CC. Two participants rated the implementation as good and two rated it as very good. However, all four participants felt the red background with white text was hard to read. They further added that the textual equivalents lacked the feeling (emotion) and expression that they receive from ASL.

Due to limited resources, the research obtained only Deaf participants; future studies hope to include more participants, including HOH end-users, in the survey. A copy of the survey may be found in the appendix.

4.2 Sign Translation

Implementing a manual prototype of sign language video translations was less technically intricate. However, certain tactics and applications still needed to be utilized in order to embed the video directly into the hosting site or any other page(s). The audio content is translated into sign content by employing the services of a sign language interpreter.

4.2.1 Architecture Components for Sign Translation

Firefox [1] was the browser of choice for the prototype. Firefox houses add-ons [12], which may be used for information and entertainment and may also aid developers in the authoring process. Firebug was one of the add-ons used during development, as it permitted inspection of the DOM to examine the HTML, CSS, and JavaScript of the desired Web page. Further, Greasemonkey [11] [12] [19], a Firefox add-on, was utilized in the architectural process of embedding sign language videos, as it permits directly changing elements on any page with JavaScript code. Greasemonkey is used to directly enhance or add features to a page, group of pages, or Web site; the JavaScript file instructs Greasemonkey where and when to make the appropriate changes. Generally, Greasemonkey has been used for improving accessibility, supporting assistive technologies, and other page enhancements.

In the sign language video translation prototype, Greasemonkey captures the video from the host and embeds it into the requested site or page. Also, due to limited hosting resources, Greasemonkey was chosen because simple scripting techniques are available to retrieve content directly through YouTube's GData API. The GData API is an openly available developers' tool used to access information, make changes, and build new programs directly on top of the YouTube site. The Greasemonkey script looks through GData's XML feed to see whether there is a _SignSupport version of the original video; if one is found, it is embedded directly into the requested Web site. As the YouTube [51] site offers free access to any user who wishes to publish videos, the prototype uses it as a repository to host sign language videos for evaluation purposes; with direct access to GData, the methodology was simplified by utilizing the YouTube site. Another approach was to create a separate Web site for viewing the sign language translated videos: SignTubeUs.com, a separate resource that eliminated the surrounding textual content of the YouTube site and was designed specifically for the Deaf community.

4.2.2 Architecture for Embedded Sign Videos

Figure 15 gives an overview of the implementation of embedding sign videos on YouTube.com. The general design is as follows.

Figure 15: Implementation of Signed Translations on YouTube.com

The first stage in the architecture is to retrieve the name of the video from the YouTube site. Once the name / title, NameOfVideo, has been retrieved, the Greasemonkey script appends _SignSupport, resulting in NameOfVideo_SignSupport. The script then retrieves NameOfVideo_SignSupport through the YouTube GData API. If the video is available, it is embedded directly into the YouTube site; if it is unavailable, the user is given the option to upload a _SignSupport video.
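The stages above can be sketched as follows. The entry shape and the feed URL are assumptions modelled on the public GData search endpoint of the time, not the thesis's actual script.

```javascript
// Stage 1-2: form the companion title from the original video's title.
function signSupportTitle(title) {
  return title + "_SignSupport";
}

// Stage 3: build the (assumed) GData feed query used to search for it.
function gdataSearchUrl(title) {
  return "http://gdata.youtube.com/feeds/api/videos?q=" +
         encodeURIComponent(signSupportTitle(title));
}

// Stage 4: embed the sign video if a matching entry came back from the
// feed, otherwise prompt the user to upload a _SignSupport version.
function handleLookup(entries, videoTitle) {
  var wanted = signSupportTitle(videoTitle);
  for (var i = 0; i < entries.length; i++) {
    if (entries[i].title === wanted) {
      return { action: "embed", videoId: entries[i].id };
    }
  }
  return { action: "promptUpload", title: wanted };
}
```

The naming convention makes the lookup a pure string match, so no central registry linking original and translated videos is needed.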

Figure 16: Implementation of Sign Translation on SignTubeUs.com

SignTubeUs.com is an alternate site that implements the same architecture as on the YouTube site, as depicted in Figure 16. The main reason an alternate site was used was to eliminate the surrounding textual content of the YouTube site itself, presenting a site specifically for the Deaf community. The concept is to create a social network that allows interpreters to upload translated videos and gives the Deaf community direct access to a signing site dedicated to them, rendering any video the user searches for that has a _SignSupport version.

4.2.3 Approach II: Deaf

The sign videos are created by sign language interpreters. These sign videos need not be limited to ASL users, but may be expanded to other forms of sign language, depending on the geographical location or spoken language for which

the content needs to be rendered. In creating the video, essential guidelines need to be adhered to, as detailed further in Developers Framework II; video creation is the first stage in the architectural process. Secondly, YouTube is chosen to host the translated material as a central hub that can be easily accessed, owing to the open access provided by Google. YouTube's GData API is employed in the prototype as an open portal for hosting videos, providing a simple access point for the translated materials. Using YouTube is simply an experimental choice that demonstrates the feasibility of embedding the necessary translated videos in a non-obtrusive manner. Finally, the Greasemonkey script retrieves the requested video from the YouTube site, matches the sign video labels to access the interpreted content for the applicable material, and lastly embeds the video directly into the requested site. All online content that developers or organizations need to have interpreted must be completed by qualified or certified interpreters. Once interpreters have a video to interpret, they need to ensure that Developers Framework II: Deaf for Interpreters is strictly followed, such that signing accuracy is achieved and lighting conditions, background, and clothing are all appropriate to produce an adequate sign language video. Developers are further responsible for ensuring that the video is embedded efficiently, so that end-users can view it as needed, for providing sufficient synchronization with the original multimedia content, and for verifying the interpreter requirements.

4.2.4 Evaluation of Embedded Sign Content

The ability to translate varied categories of videos was essential in emphasizing the impact signed videos have on the Deaf community. There exists an array of information on the Web that has previously been inaccessible to the Deaf community. To demonstrate the efficiency and effectiveness of Developers Framework II: Deaf, and to evaluate the framework, four varied categories of content were chosen: a tutorial (recipe), a song, a story (for entertainment), and a weather forecast as a news item.

It is important to note that different interpreters may be better suited to different situations. For instance, interpreters who perform educational interpreting may not be suited for legal interpreting, and those who perform songs may not be suited for news broadcast interpreting. Interpreters are generally versatile and capable of interpreting in numerous circumstances; however, an interpreter may be better qualified for certain appointments than others, and this should be taken into account when selecting the information they choose to interpret.

The embedded videos used in the evaluation met all the criteria of Developers Framework II: conceptual accuracy of signs, synchronization with the original video, video requirements (size and placement), and the interpreter's spatial reference within the video frame. The research therefore accomplishes the task of conveying an equivalent of the original multimedia content.

The following sections present the four categories chosen to demonstrate the evaluation of embedding sign language for Web accessibility. The figures illustrate the original content on the left and the sign language video on the right, and demonstrate the necessity of following the requirements listed in the framework (Chapter 3), such as video placement and lighting conditions.

Category 1: Tutorial

A tutorial is generally a computer-assisted method of giving instructions; the tutorial chosen here is a cooking recipe. A tutorial was chosen because of its unique instructional format. For the purpose of the case study, it demonstrates that educational material is better rendered in a signed format for the Deaf community: the written forms of languages do not convey instructional methods as well as signed formats (specifically ASL) do. Figure 17 shows the step-by-step instructions being presented in both audio and ASL for the selected recipe video.

Figure 17: Sign Translation of a Tutorial (Recipe)

Category 2: Song

Songs have a unique structure in which rhythm, timing, and melody are carried in the audio. To render musical structure adequately, it is essential that the interpreter sign according to these structures. Here it is evident that textual formats do not convey the intricacies of music as sign language formats do. Sign equivalents of the original song give the Deaf community a comparable interpretation of the rhythmic and emotional content of that particular song. Figure 18 depicts the rhythmic structure of music portrayed in ASL.

Figure 18: Sign Translation of a Song

Category 3: Entertainment Content

Entertainment media may take several forms, such as stories, comedies, and other formats intended for enjoyment. In this case study a story is used to illustrate the unique nature of entertainment content. In the original video, the author uses pictorial representations of the characters in her story, while the interpreter represents the same characters using classifiers. The format of this story demonstrates the need to synchronize the original video with the interpreter. Figure 19 depicts the characters used by the author alongside the interpreter's presentation. In stories, emotional content is also better expressed in sign languages in order to convey the intent of the characters; the story chosen illustrates the degree of emotional content carried in narration.

Figure 19: Sign Translation of a Story

Category 4: Newscasts

Newscasts are ubiquitous in daily life and on the Internet, and various Web sites are used to update populations on different news items. In this illustration a weather report demonstrates the unique features of a newscast. Newscasts generally lack emotional content but may carry vital details that are not portrayed in textual form, such as urgency and level of impact. It should be mentioned that the audio of a newscast is generally rapid, as a multitude of information must be delivered in a short space of time. Figure 20 shows an ASL interpreter conveying the urgency of an approaching hurricane, giving details as the newscaster presents the data.

Figure 20: Sign Translation of a Weather Alert

Utilizing a separate Web site to access and load the desired Deaf-accessible content, as opposed to embedding videos directly into the YouTube site, was an alternate approach aimed at providing direct access for the Deaf community. Having a dedicated site promotes a growing social network of its own. The Deaf community (including interpreters and students of sign languages) has the benefit of directly accessing sign language content for the videos that have been interpreted and, moreover, has the opportunity to upload or author translated versions of any video of their choice. Figure 21 shows the home page of the dedicated Web site, aimed at providing a social network for all involved in the Deaf community. Figure 22 presents the rendition of the original content and the signed content in a more pleasant manner. Further, Figure 23 depicts the "add translation" button that appears when a _SignSupport version of the original video is not available; the button redirects the user to the YouTube upload page so that interpreters can upload a sign version of the video, thereby expanding the repository of _SignSupport videos.

Figure 21: Login page for SignTubeUs
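The fallback behavior just described, where an "add translation" button replaces the sign video when no _SignSupport version exists, can be sketched as follows. This is an illustrative assumption about how SignTubeUs could structure the decision, not its actual code; the element kinds and function name are hypothetical, while the YouTube upload URL is a real endpoint.

```javascript
// Sketch of SignTubeUs' fallback when no _SignSupport video is found.
// renderResult and the returned object shapes are hypothetical.

function renderResult(signVideoId) {
  if (signVideoId) {
    // A translated version exists: embed it beside the original content.
    return {
      kind: "video",
      embedUrl: "https://www.youtube.com/embed/" + signVideoId
    };
  }
  // No translated version: show an "add translation" button that sends
  // an interpreter to YouTube's upload page, growing the repository.
  return {
    kind: "button",
    href: "https://www.youtube.com/upload",
    label: "Add translation"
  };
}
```

The page would then render either an embedded player or the button, depending on which object `renderResult` returns for the search result.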

Figure 22: Display of SignTubeUs

Figure 23: SignTubeUs Where Video is Not Available


PROPOSED WORK PROGRAMME FOR THE CLEARING-HOUSE MECHANISM IN SUPPORT OF THE STRATEGIC PLAN FOR BIODIVERSITY Note by the Executive Secretary CBD Distr. GENERAL UNEP/CBD/COP/11/31 30 July 2012 ORIGINAL: ENGLISH CONFERENCE OF THE PARTIES TO THE CONVENTION ON BIOLOGICAL DIVERSITY Eleventh meeting Hyderabad, India, 8 19 October 2012 Item 3.2 of

More information

VPAT for Apple MacBook Air (mid 2013)

VPAT for Apple MacBook Air (mid 2013) VPAT for Apple MacBook Air (mid 2013) The following Voluntary Product Accessibility information refers to the Apple MacBook air (mid 2013). For more information on the accessibility features of Mac OS

More information

Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template

Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template Fujitsu LifeBook T Series TabletPC Voluntary Product Accessibility Template 1194.21 Software Applications and Operating Systems* (a) When software is designed to run on a system that This product family

More information

Apple emac. Standards Subpart Software applications and operating systems. Subpart B -- Technical Standards

Apple emac. Standards Subpart Software applications and operating systems. Subpart B -- Technical Standards Apple emac Standards Subpart 1194.21 Software applications and operating systems. 1194.22 Web-based intranet and internet information and applications. 1194.23 Telecommunications products. 1194.24 Video

More information

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning

A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning A Communication tool, Mobile Application Arabic & American Sign Languages (ARSL) Sign Language (ASL) as part of Teaching and Learning Fatima Al Dhaen Ahlia University Information Technology Dep. P.O. Box

More information

Chapter 3 - Deaf-Blindness

Chapter 3 - Deaf-Blindness Chapter 3 - Deaf-Blindness Definition under IDEA of Deaf-Blindness Deaf-blindness refers to concomitant hearing and visual impairments, the combination of which causes such severe communication and other

More information

The power to connect us ALL.

The power to connect us ALL. Provided by Hamilton Relay www.ca-relay.com The power to connect us ALL. www.ddtp.org 17E Table of Contents What Is California Relay Service?...1 How Does a Relay Call Work?.... 2 Making the Most of Your

More information

Simple Caption Editor User Guide. May, 2017

Simple Caption Editor User Guide. May, 2017 Simple Caption Editor User Guide May, 2017 Table of Contents Overview Type Mode Time Mode Submitting your work Keyboard Commands Video controls Typing controls Timing controls Adjusting timing in the timeline

More information

A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired

A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired PsychNology Journal, 2003 Volume 1, Number 4, 400-409 A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired *Miki Namatame, Makoto Kobayashi, Akira Harada Department of Design

More information

Hearing Impaired K 12

Hearing Impaired K 12 Hearing Impaired K 12 Section 20 1 Knowledge of philosophical, historical, and legal foundations and their impact on the education of students who are deaf or hard of hearing 1. Identify federal and Florida

More information

ODP Deaf Services Overview Lesson 2 (PD) (music playing) Course Number

ODP Deaf Services Overview Lesson 2 (PD) (music playing) Course Number (music playing) This webcast includes spoken narration. To adjust the volume, use the controls at the bottom of the screen. While viewing this webcast, there is a pause and reverse button that can be used

More information

Providing Equally Effective Communication

Providing Equally Effective Communication Providing Equally Effective Communication 4 th Annual Marin Disaster Readiness Conference June 19 th, 2012 What Do We Mean by Effective Communication? For non-english speakers; some individuals for whom

More information

Understanding Users. - cognitive processes. Unit 3

Understanding Users. - cognitive processes. Unit 3 Understanding Users - cognitive processes Unit 3 Why do we need to understand users? Interacting with technology involves a number of cognitive processes We need to take into account Characteristic & limitations

More information

Arts and Entertainment. Ecology. Technology. History and Deaf Culture

Arts and Entertainment. Ecology. Technology. History and Deaf Culture American Sign Language Level 3 (novice-high to intermediate-low) Course Description ASL Level 3 furthers the study of grammar, vocabulary, idioms, multiple meaning words, finger spelling, and classifiers

More information

Sound Interfaces Engineering Interaction Technologies. Prof. Stefanie Mueller HCI Engineering Group

Sound Interfaces Engineering Interaction Technologies. Prof. Stefanie Mueller HCI Engineering Group Sound Interfaces 6.810 Engineering Interaction Technologies Prof. Stefanie Mueller HCI Engineering Group what is sound? if a tree falls in the forest and nobody is there does it make sound?

More information

TIPS FOR TEACHING A STUDENT WHO IS DEAF/HARD OF HEARING

TIPS FOR TEACHING A STUDENT WHO IS DEAF/HARD OF HEARING http://mdrl.educ.ualberta.ca TIPS FOR TEACHING A STUDENT WHO IS DEAF/HARD OF HEARING 1. Equipment Use: Support proper and consistent equipment use: Hearing aids and cochlear implants should be worn all

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) Voluntary Product Accessibility Template (VPAT) Date: January 25 th, 2016 Name of Product: Mitel 6730i, 6731i, 6735i, 6737i, 6739i, 6753i, 6755i, 6757i, 6863i, 6865i, 6867i, 6869i, 6873i Contact for more

More information

Digital Accommodation Strategies for making BSU s website more accessible

Digital Accommodation Strategies for making BSU s website more accessible Digital Accommodation Strategies for making BSU s website more accessible Andre Cutair, Web Content Specialist University Relations and Marketing, Bowie State University 2016 Disability Defined 2 What

More information

Available online at ScienceDirect. Procedia Technology 24 (2016 )

Available online at   ScienceDirect. Procedia Technology 24 (2016 ) Available online at www.sciencedirect.com ScienceDirect Procedia Technology 24 (2016 ) 1068 1073 International Conference on Emerging Trends in Engineering, Science and Technology (ICETEST - 2015) Improving

More information

SPECIAL EDUCATION (SED) DeGarmo Hall, (309) Website:Education.IllinoisState.edu Chairperson: Stacey R. Jones Bock.

SPECIAL EDUCATION (SED) DeGarmo Hall, (309) Website:Education.IllinoisState.edu Chairperson: Stacey R. Jones Bock. 368 SPECIAL EDUCATION (SED) 591 533 DeGarmo Hall, (309) 438-8980 Website:Education.IllinoisState.edu Chairperson: Stacey R. Jones Bock. General Department Information Program Admission Requirements for

More information

Multimodal Interaction for Users with Autism in a 3D Educational Environment

Multimodal Interaction for Users with Autism in a 3D Educational Environment Multimodal Interaction for Users with Autism in a 3D Educational Environment Ing. Alessandro Trivilini Prof. Licia Sbattella Ing. Roberto Tedesco 1 Screenshots 2 Screenshots 3 Introduction Existing Projects

More information

Universal Usability. Ethical, good business, the law SE 444 R.I.T. S. Ludi/R. Kuehl p. 1 R I T. Software Engineering

Universal Usability. Ethical, good business, the law SE 444 R.I.T. S. Ludi/R. Kuehl p. 1 R I T. Software Engineering Universal Usability Ethical, good business, the law SE 444 S. Ludi/R. Kuehl p. 1 Topics Universal usability and software ethics Visually impaired Deaf and hard of hearing Dexterity and mobility impairments

More information

Re: Docket No. FDA D Presenting Risk Information in Prescription Drug and Medical Device Promotion

Re: Docket No. FDA D Presenting Risk Information in Prescription Drug and Medical Device Promotion 1201 Maryland Avenue SW, Suite 900, Washington, DC 20024 202-962-9200, www.bio.org August 25, 2009 Dockets Management Branch (HFA-305) Food and Drug Administration 5600 Fishers Lane, Rm. 1061 Rockville,

More information

Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and

Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere

More information

Creating YouTube Captioning

Creating YouTube Captioning Creating YouTube Captioning Created June, 2017 Upload your video to YouTube Access Video Manager Go to Creator Studio by clicking the option from your account icon located in the topright corner of the

More information