Voice In Text Out And Text In Voice Out Communication Device For Deaf-Mute

Authors

  • Govindan M

DOI:

https://doi.org/10.20894/IJMSR.117.004.001.003

Keywords:

communication aid, PIC18F, MPLAB.

Abstract

Communication between a deaf-mute person and a hearing person has always been a challenging task. This paper describes a way to reduce that communication barrier by developing an assistive device for deaf-mute persons. Although a number of assistive tools already exist, advances in embedded systems provide scope to design and develop a sign-language translator system for mute people. The main objective is to develop a real-time embedded device that helps the physically challenged communicate effectively. The work proposed in this paper implements a system without handheld gloves or sensors: a SpeakJet IC performs text-to-voice conversion, and a complementary voice-to-text module handles the reverse direction, so that a single handheld embedded device, together with its hardware setup, makes communication simpler for deaf and mute people.
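As a concrete starting point for the text-to-voice path described above, the following is a minimal sketch of a PIC18F microcontroller pushing phoneme codes to a SpeakJet IC over the on-chip EUSART. The specific device (PIC18F4520), the XC8 register names, the 8 MHz clock, the baud-rate value and the phoneme codes are assumptions for illustration only; the paper's actual MPLAB firmware may differ.

/*
 * Minimal sketch: driving a SpeakJet voice-synthesis IC from a PIC18F
 * over the on-chip EUSART. Register names follow the PIC18F4520
 * datasheet and the Microchip XC8 toolchain; the clock frequency,
 * baud-rate value and the phoneme codes below are illustrative
 * assumptions, not taken from the paper.
 */
#include <xc.h>
#include <stdint.h>

#define _XTAL_FREQ 8000000UL      /* assumed 8 MHz oscillator */

/* Configure the EUSART for 8N1 asynchronous transmission at 9600 baud
 * (assumed here as the SpeakJet's default serial rate). */
static void uart_init(void)
{
    TRISCbits.TRISC6 = 0;         /* RC6/TX pin as output        */
    SPBRG  = 51;                  /* 9600 baud @ 8 MHz, BRGH = 1 */
    TXSTAbits.BRGH = 1;           /* high-speed baud generator   */
    TXSTAbits.SYNC = 0;           /* asynchronous mode           */
    RCSTAbits.SPEN = 1;           /* enable serial port          */
    TXSTAbits.TXEN = 1;           /* enable transmitter          */
}

/* Wait until the transmit shift register is empty, then send one byte. */
static void uart_putc(uint8_t b)
{
    while (!TXSTAbits.TRMT)
        ;
    TXREG = b;
}

/* Send a sequence of SpeakJet phoneme/command codes. */
static void speakjet_say(const uint8_t *codes, uint8_t len)
{
    for (uint8_t i = 0; i < len; i++)
        uart_putc(codes[i]);
}

void main(void)
{
    /* Placeholder phoneme codes for a short greeting; real values
     * come from the phoneme chart in the SpeakJet datasheet. */
    const uint8_t hello[] = { 183, 7, 159, 146, 164 };

    uart_init();
    speakjet_say(hello, sizeof hello);

    for (;;)
        ;                          /* idle after speaking once */
}

In practice the phrase table would be built from the device's text input rather than hard-coded, but the UART framing and send loop stay the same.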


Author Biography

Govindan M

BE-ECE, Saveetha Engineering College, Chennai, India.


Published

2012-12-15

Section

Articles