Subject dependent speech verification approach for assistive special education

Bibliographic Details
Published in: Education and Information Technologies, Vol. 29, No. 13, pp. 16157-16175
Main Authors: Zeki, Umut; Karanfiller, Tolgay; Yurtkan, Kamil
Format: Journal Article
Language: English
Published: New York: Springer US, 01.09.2024 (Springer Nature B.V.)
Summary: The developmental characteristics and educational competencies of students who need special education develop more slowly than those of their peers, partly because their expressive language differs. To help overcome these challenges, assistive technologies can be used under the supervision of teachers. In this paper, a subject-dependent speech verification approach is proposed for special education students. The system verifies the speech of special education students, employing a Convolutional Neural Network (CNN) for the classification task. The dataset of audio signals was collected from real education centers involving special education students. A separate CNN is trained for each subject. The collected audio signals are transformed into the frequency domain and their spectrograms are computed; the spectrogram image of every audio sample is then fed as input to the CNN. In this way, better representations of the audio signals are achieved, since the spectrogram images of different subjects are discriminable, a consequence of each special education student's personal and unique speaking style. The proposed approach is tested on a dataset constructed from real subject recordings and achieves promising recognition accuracies of around 96%.
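The preprocessing pipeline described in the summary (audio signal, frequency-domain transform, spectrogram image, per-subject CNN) can be sketched minimally. The spectrogram step, assuming a Hann-windowed framed FFT with illustrative frame and hop sizes (the paper's exact parameters are not given in this record), might look like:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Compute a log-magnitude spectrogram via a Hann-windowed framed FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Keep only the non-negative frequency bins of each frame.
    mag = np.abs(np.fft.rfft(frames, axis=1))
    # Log scale, as commonly used when rendering spectrograms as images.
    return 20 * np.log10(mag + 1e-10)

# Example: 1 second of a synthetic 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(audio)
print(spec.shape)  # (n_frames, frame_len // 2 + 1) = (124, 129)
```

Each row of `spec` is one time frame; rendered as an image, it becomes the input to the subject's CNN. A subject-dependent system like the one described would train one such classifier per student on that student's own recordings.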
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ISSN: 1360-2357; 1573-7608
DOI: 10.1007/s10639-024-12474-9