This project should place a major emphasis on deep learning techniques implemented in Python, with Jupyter notebooks (.ipynb files) used for visualizations.
Each utterance in a dialogue has been labelled with one of seven emotions: Neutral, Joyful, Peaceful, Powerful, Scared, Mad, and Sad.
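As an illustration, a minimal sketch of encoding the seven labels and visualizing their distribution in a notebook; it assumes the data ships as a CSV with "Utterance" and "Emotion" columns and uses a hypothetical file name, so adjust to the actual dataset layout:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file name and assumed column names: "Utterance" (text), "Emotion" (label)
df = pd.read_csv("train_emotions.csv")

# Map the seven emotion labels to integer class IDs for model training
EMOTIONS = ["Neutral", "Joyful", "Peaceful", "Powerful", "Scared", "Mad", "Sad"]
label_to_id = {label: i for i, label in enumerate(EMOTIONS)}
df["label"] = df["Emotion"].map(label_to_id)

# Quick class-distribution plot for the notebook report
df["Emotion"].value_counts().reindex(EMOTIONS).plot(kind="bar")
plt.title("Emotion label distribution")
plt.xlabel("Emotion")
plt.ylabel("Number of utterances")
plt.tight_layout()
plt.show()
```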
Question to answer: How can the detection, classification, and accuracy of speech recognition be improved?
Datasets and reference sources from similar projects:
Data and Python code source example: [login to view URL]
Reference paper: MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations:
[login to view URL]
Deep learning technique: recurrent neural network (RNN) for automatic speech recognition (audio and text files).
Deep learning model: convolutional neural network (CNN).
Deep learning architecture: long short-term memory (LSTM); a combined CNN + LSTM sketch is shown below.
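By way of illustration, a minimal Keras sketch of a combined CNN + LSTM classifier for the seven emotion classes, assuming utterances have already been converted to fixed-length MFCC feature sequences; the 100 x 40 input shape and the placeholder training arrays are assumptions, not part of the brief:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input shape: 100 frames x 40 MFCC features per utterance, 7 emotion classes
N_FRAMES, N_MFCC, N_CLASSES = 100, 40, 7

model = models.Sequential([
    layers.Input(shape=(N_FRAMES, N_MFCC)),
    # 1D convolutions extract local spectral/temporal patterns (CNN part)
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM models longer-range temporal dependencies across the utterance (RNN part)
    layers.LSTM(128),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Placeholder arrays standing in for real MFCC features and integer emotion labels
X_train = np.random.rand(32, N_FRAMES, N_MFCC).astype("float32")
y_train = np.random.randint(0, N_CLASSES, size=32)
model.fit(X_train, y_train, epochs=2, batch_size=8)
```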
Evaluation metrics: accuracy, F1 score, recall, or precision.
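A short sketch of how these metrics could be computed with scikit-learn on predicted versus true class IDs; the example label lists are placeholders only:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder true and predicted class IDs (0-6 for the seven emotions)
y_true = [0, 1, 6, 4, 5, 2, 0, 3]
y_pred = [0, 1, 6, 4, 5, 2, 1, 3]

# Weighted averaging accounts for the class imbalance typical of emotion datasets
print("Accuracy :", accuracy_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred, average="weighted"))
print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
```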
Deliverables: code plus report.