询问 posted on 2025-4-1 02:50:53

https://doi.org/10.1007/b138337

…or the KNN classifier, which performed poorly with a macro-average F1-score of 21%. The BERT classifier with the Ekman taxonomy, including the neutral emotion, had a macro-average precision of 55% and a sensitivity (recall) of 68%. This classifier also achieved a macro-average F1-score of 61%. While the RoBE…
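The macro-average F1-score quoted in the snippet is the unweighted mean of the per-class F1 scores, so minority emotion classes count as much as frequent ones. A minimal sketch of the computation (the emotion labels and predictions below are hypothetical, not taken from the paper):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores (macro-average)."""
    f1s = []
    for c in labels:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Each class contributes equally, regardless of its support.
    return sum(f1s) / len(f1s)
```

For example, `macro_f1(["joy", "anger", "joy", "neutral"], ["joy", "joy", "anger", "neutral"], ["joy", "anger", "neutral"])` averages F1 over the three classes. In practice this is usually obtained via `sklearn.metrics.f1_score(..., average="macro")`.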

pericardium posted on 2025-4-1 13:16:57

Multi-aspect Extraction in Indonesian Reviews Through Multi-label Classification Using Pre-trained B… …nships. In the experiment, we conducted tests with various Indonesian pre-trained BERT models to enhance the performance of multi-aspect extraction on Indonesian hotel reviews. Our findings indicate that the … pre-trained model can improve classifier performance and achieve an impressive F1-scor…
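In multi-label aspect extraction, each aspect is scored independently (typically via a sigmoid over per-aspect logits from the fine-tuned BERT head), and every aspect whose score clears a threshold is predicted present. A minimal sketch of that final decision step, assuming a hypothetical aspect set and pre-computed logits (this is not the authors' implementation):

```python
import math

# Hypothetical aspect inventory for hotel reviews.
ASPECTS = ["cleanliness", "service", "location", "food"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def extract_aspects(logits, threshold=0.5):
    """Multi-label decision: independently threshold each aspect's
    sigmoid score, so a review may receive zero, one, or many aspects."""
    return [a for a, z in zip(ASPECTS, logits) if sigmoid(z) >= threshold]
```

For example, `extract_aspects([2.0, -1.0, 0.1, -3.0])` predicts both "cleanliness" and "location" for the same review, which is what distinguishes multi-label classification from single-label (softmax) classification.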
View full version: Titlebook: Data Science and Emerging Technologies; Proceedings of DaSET; Yap Bee Wah, Dhiya Al-Jumeily OBE, Michael W. Berry; Conference proceedings 2024