Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20845
Title: APPLICATIONS OF END TO END AUTOMATIC SPEECH RECOGNITION
Authors: KUMAR, RUPESH
Keywords: SPEECH RECOGNITION; END TO END ASR; RNN; CNN
Issue Date: May-2024
Series/Report no.: TD-7381
Abstract: This project investigates applications of end-to-end (E2E) automatic speech recognition (ASR) for the English language, covering Transformer models and the combination of RNNs with CNNs trained with CTC loss. The primary goal is to evaluate these architectures on sequence-to-sequence tasks that require accurate temporal alignment and robust handling of variable-length input sequences, specifically in the context of speech recognition. We compared E2E ASR approaches using RNN-CNN models and Transformer models, training on the LJSpeech English-language dataset. The RNN-CNN model combines the strengths of CNNs for feature extraction with RNNs for sequential processing, enabling alignment-free training via CTC loss: the CNN component encodes local features, the RNN component captures temporal dependencies, and together they improve recognition accuracy. The second model uses a Transformer architecture, which relies on self-attention to capture long-range dependencies without recurrent connections. This design addresses the limitations of RNNs in handling long sequences and in parallel processing, allowing faster training and inference. Our experiments on LJSpeech, a commonly used English-language dataset, show significant performance differences: the Transformer model demonstrates higher scalability and efficiency on large datasets, and in a comparison of word error rate (WER) and computation time, the Transformer-based model outperformed the RNN-CNN model by 3% to 4% in WER.
Additionally, the Transformer-based model was about five times faster per epoch, although it required more epochs to train. The results indicate that RNN-CNN models are efficient for tasks with prominent local dependencies, whereas Transformers offer notable benefits in computational efficiency and in managing long-range dependencies. This makes Transformers a compelling option for large-scale English-language processing applications.
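The comparison above rests on word error rate (WER). As a minimal illustration (not code from the thesis), WER can be computed as the word-level Levenshtein distance between a reference transcript and a model hypothesis, divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # match or substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution ("sit" for "sat") and one deletion ("the") over 6 words:
print(wer("the cat sat on the mat", "the cat sit on mat"))  # 0.333...
```

In practice, ASR evaluations typically use an established implementation (e.g. the jiwer package), but the definition is the same as this sketch.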
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20845
Appears in Collections: M.E./M.Tech. Electronics & Communication Engineering
Files in This Item:

File | Description | Size | Format
---|---|---|---
RUPESH KUMAR M.Tech..pdf | | 908.3 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.