Speech emotion recognition (SER) has long benefited from the adoption of deep learning methodologies. Deeper models, with more layers and more trainable parameters, are generally perceived as 'better' by the SER community. This raises the question: how much better are modern-era deep neural networks compared to their earlier iterations? Beyond that, the more important question of how to move forward remains as pertinent as ever. SER is far from a solved problem; therefore, identifying the most promising avenues of future research is of paramount importance. In the present contribution, we attempt a quantification of progress over the 15 years of research beginning with the introduction of the landmark 2009 INTERSPEECH Emotion Challenge. We conduct a large-scale investigation of model architectures, spanning both audio-based models that rely on speech inputs and text-based models that rely solely on transcriptions. Our results point towards diminishing returns and a plateau following the recent introduction of transformers. Moreover, we demonstrate how perceptions of progress are conditioned on the particular selection of models being compared. Our findings indicate that new methods should be compared against an adequate selection of standard baselines, with hyperparameters properly tuned so as not to disadvantage older methods. They also cast doubt on the premise that bigger models necessarily lead to improved performance.