Introduction

Currently I'm a research student at Worcester Polytechnic Institute's CakeLab, and my research interests span the broad areas of computer systems and machine learning. You can see the projects I'm currently working on below. During my undergraduate study, I was a member of the Cognition and Affective Computing Lab at Tianjin Normal University, where my research focused on speech emotion recognition with deep learning.

Besides my research, I'm also a geek and a developer. I love exploring and playing with popular programming languages and frameworks, especially JavaScript and Swift, and I also create artwork. Here are several hackathon and open-source projects I have created.

System and Machine Learning

Still Working on it...

Affective Computing

Deep Spectrum Feature Representations for Speech Emotion Recognition

Automatically detecting emotional state from human speech, which plays an important role in human-machine interaction, has been a difficult task for machine learning algorithms. Previous work on emotion recognition has mostly focused on the extraction of carefully hand-crafted and tailored features. Recently, spectrogram representations of emotional speech have achieved competitive performance for automatic speech emotion recognition. In this work we propose a method for extracting deep features, herein denoted as deep spectrum features, from the spectrogram by leveraging attention-based bidirectional Long Short-Term Memory recurrent neural networks combined with fully convolutional networks. The learned deep spectrum features are then fed into a deep neural network (DNN) to predict the final emotion. The proposed model is evaluated on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset to validate its effectiveness. Promising results indicate that the deep spectrum representations extracted from the proposed model perform best, reaching 65.2% weighted accuracy and 68.0% unweighted accuracy, when compared to other existing methods. We then compare the performance of our deep spectrum features with two standard acoustic feature representations for speech-based emotion recognition. When combined with a support vector classifier, the extracted deep feature representations perform comparably to the conventional features. Moreover, we also investigate the impact of different frequency resolutions of the input spectrogram on system performance.
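To make the pipeline above more concrete, here is a minimal sketch in PyTorch of the kind of architecture the abstract describes: a fully convolutional front end over the spectrogram, an attention-based bidirectional LSTM over the time axis, and a small DNN classifier on the pooled "deep spectrum" feature. The framework choice, layer sizes, number of emotion classes, and attention formulation are all assumptions for illustration, not the exact configuration used in the paper.

```python
# Illustrative sketch only: layer sizes, 4 emotion classes, and additive
# attention are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn


class DeepSpectrumModel(nn.Module):
    """Fully convolutional front end + attention-based BiLSTM + DNN classifier."""

    def __init__(self, n_mels=128, hidden=128, n_classes=4):
        super().__init__()
        # Fully convolutional feature extractor over the spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                        # pool frequency, keep time
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # Bidirectional LSTM over the time axis of the convolutional features.
        self.bilstm = nn.LSTM(64 * (n_mels // 4), hidden,
                              batch_first=True, bidirectional=True)
        # Simple additive attention over time steps.
        self.attn = nn.Linear(2 * hidden, 1)
        # DNN classifier on the attention-pooled deep spectrum feature.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, spec):                  # spec: (batch, 1, n_mels, time)
        h = self.conv(spec)                   # (batch, 64, n_mels // 4, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)    # (batch, time, feat)
        h, _ = self.bilstm(h)                 # (batch, time, 2 * hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention weights over time
        feat = (w * h).sum(dim=1)             # deep spectrum feature: (batch, 2 * hidden)
        return self.classifier(feat), feat    # emotion logits + reusable feature


# Example: a batch of 8 mel spectrograms with 128 mel bins and 300 frames.
logits, features = DeepSpectrumModel()(torch.randn(8, 1, 128, 300))
```

The returned `features` tensor is the pooled deep spectrum representation, which in this sketch could also be exported and fed to a separate support vector classifier, mirroring the comparison with conventional acoustic features described in the abstract.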

Hackathon Projects

My Personal Blog yiqinzhao.me

TODO