Ensemble Pre-trained Transformer Models for Writing Style Change Detection

Tzu-Mi Lin, Chao-Yi Chen, Yu-Wen Tzeng and Lung-Hao Lee.

In Proceedings of the Working Notes of CLEF 2022 – Conference and Labs of the Evaluation Forum, vol. 3180, pages 2565-2573.


Abstract

This paper describes our system design for the Style Change Detection (SCD) task of PAN at CLEF 2022. We propose a unified ensemble architecture of neural networks to solve all three SCD-2022 tasks. We fine-tune BERT, RoBERTa and ALBERT transformers, each with a connected classifier, to measure the similarity of two given paragraphs or sentences for authorship analysis. Each transformer model is regarded as a standalone method for detecting writing-style differences in each test pair, and the final prediction is obtained through a majority-voting ensemble. For SCD-2022 Task 1, which requires finding the single position at which the writing style changes at the paragraph level, our approach achieves a macro F1-score of 0.7540. For SCD-2022 Task 2, which requires attributing each paragraph to its actual author, our method achieves a macro F1-score of 0.5097, a diarization error rate (DER) of 0.1941 and a Jaccard error rate (JER) of 0.3095. For SCD-2022 Task 3, which requires locating writing style changes at the sentence level, our model achieves a macro F1-score of 0.7156. In summary, ours is the winning method among all intrinsic approaches.
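
The following is a minimal sketch of the majority-voting ensemble described in the abstract, using the Hugging Face transformers library for sequence-pair classification. The checkpoint paths are hypothetical placeholders for the three fine-tuned models; the paper's actual fine-tuning setup (classifier heads, hyperparameters, data preparation) is not reproduced here.

```python
# Sketch of the majority-voting ensemble over three fine-tuned transformers.
# The checkpoint paths below are hypothetical placeholders.
from collections import Counter

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINTS = [
    "checkpoints/bert-scd",     # fine-tuned BERT pair classifier (assumed path)
    "checkpoints/roberta-scd",  # fine-tuned RoBERTa pair classifier (assumed path)
    "checkpoints/albert-scd",   # fine-tuned ALBERT pair classifier (assumed path)
]


def predict_style_change(text_a: str, text_b: str) -> int:
    """Return 1 if the ensemble predicts a style change between the two
    paragraphs (or sentences), else 0, by majority vote over three models."""
    votes = []
    for path in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(path)
        model = AutoModelForSequenceClassification.from_pretrained(path)
        model.eval()
        # Encode the pair as a single sequence-pair input for the classifier.
        inputs = tokenizer(
            text_a,
            text_b,
            truncation=True,
            max_length=512,
            return_tensors="pt",
        )
        with torch.no_grad():
            logits = model(**inputs).logits
        votes.append(int(logits.argmax(dim=-1).item()))
    # Majority vote: the label predicted by at least two of the three models.
    return Counter(votes).most_common(1)[0][0]
```

With an odd number of binary classifiers, ties cannot occur, so the three-model vote always yields a unique label for each test pair.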