“Human Behavior Understanding from Language through Unsupervised Modeling”
by Shao-Yen Tseng
December 2020
Identifying behavioral cues in organic conversational interactions is a challenging task even for humans and is the culmination of decades of social experience, knowledge that is unavailable to machine learning algorithms. While many problems have been solved by applying deep learning to “big data”, in the domain of behavior understanding data is often scarce and application-specific, leading to poor performance of automated methods. To address this, we propose multiple approaches for improving neural models by leveraging out-of-domain data through unsupervised learning. Our work not only focuses on improving current behavior models but also proposes methods for unsupervised representation learning on speech that can help models better identify human behavior. As such, we investigate how concepts of behavior and language use can be transferred between domains through machine learning methods such as contextual learning, online multitask learning, and multimodal approaches. We show that through these methods our proposed models can become increasingly adept at identifying behaviors for psychotherapy applications in real-world data.