洪滿珍、陳昭沂、陳品蓉、李龍豪。
In Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING'21), pages 380-384.
Abstract
We fine-tune MacBERT transformers for the ROCLING-2021 shared task using the CVAT and CVAS datasets. We compare the performance of MacBERT with two other transformers, BERT and RoBERTa, in the valence and arousal dimensions, respectively. Mean absolute error (MAE) and the correlation coefficient (r) were used as evaluation metrics. On the ROCLING-2021 test set, our MacBERT model achieves an MAE of 0.611 and an r of 0.904 in the valence dimension, and an MAE of 0.938 and an r of 0.549 in the arousal dimension.
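As a minimal illustration of the two evaluation metrics reported above, the sketch below computes MAE and Pearson's r with NumPy; the gold/predicted scores shown are hypothetical toy values, not shared-task data.

```python
import numpy as np

def mae(gold, pred):
    # Mean absolute error between gold and predicted scores
    gold, pred = np.asarray(gold, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.abs(gold - pred)))

def pearson_r(gold, pred):
    # Pearson correlation coefficient between gold and predicted scores
    gold, pred = np.asarray(gold, dtype=float), np.asarray(pred, dtype=float)
    return float(np.corrcoef(gold, pred)[0, 1])

# Toy example: hypothetical valence ratings on a 1-9 scale
gold = [6.0, 3.5, 7.2, 4.8]
pred = [5.5, 3.0, 7.0, 5.2]
print(mae(gold, pred))
print(pearson_r(gold, pred))
```

Lower MAE and higher r both indicate predictions closer to the human-annotated valence-arousal ratings.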