询问 posted on 2025-4-1 02:50:53
https://doi.org/10.1007/b138337

…or the KNN classifier, which performed poorly with a macro-average F1-score of 21%. The BERT classifier with the Ekman taxonomy, including the neutral emotion, had a macro-average precision of 55% and a sensitivity of 68%. This classifier also led on macro-average F1-score, at 61%. While the RoBE…

纤细 posted on 2025-4-1 09:53:30
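For reference, the macro-average F1-score cited above weights every emotion class equally, regardless of class frequency. A minimal sketch in plain Python (the toy labels are illustrative only, not data from the paper):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-class F1, then take the unweighted mean."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Toy 3-class example: per-class F1s are 0.5, 0.8, and 2/3
print(macro_f1([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0]))  # roughly 0.656
```

Because each class contributes equally to the mean, a classifier that ignores rare emotion classes is penalized more than under micro- or weighted averaging, which is why macro-F1 is the headline metric in imbalanced emotion datasets.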
http://reply.papertrans.cn/27/2631/263093/263093_62.png

pericardium posted on 2025-4-1 13:16:57
Multi-aspect Extraction in Indonesian Reviews Through Multi-label Classification Using Pre-trained BERT

…nships. In the experiment, we conducted tests with various Indonesian pre-trained BERT models to enhance the performance of multi-aspect extraction on Indonesian hotel reviews. Our findings indicate that the pre-trained model can improve classifier performance and achieve an impressive F1-score…
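Multi-label aspect extraction of this kind typically attaches one independent (sigmoid) output per aspect and thresholds each score separately, so a review can carry several aspects at once. A minimal sketch in plain Python; the aspect names, probabilities, and threshold here are hypothetical illustrations, not details from the paper:

```python
# Hypothetical hotel-review aspects; the paper's actual label set is not given here.
ASPECTS = ["cleanliness", "location", "service", "price"]

def extract_aspects(probs, threshold=0.5):
    """Multi-label decision rule: keep every aspect whose score clears the threshold.

    `probs` stands in for the per-aspect sigmoid outputs a fine-tuned
    BERT-style classifier head would produce for one review.
    """
    return [a for a, p in zip(ASPECTS, probs) if p >= threshold]

# One review may yield several aspects, or none at all.
print(extract_aspects([0.9, 0.2, 0.7, 0.4]))  # ['cleanliness', 'service']
```

The key contrast with single-label classification is that no softmax forces the aspects to compete: each label is an independent yes/no decision, which is what lets one review mention both cleanliness and service.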