巩固 posted on 2025-3-23 21:27:10

s is expensive. Existing models address this issue by code-switched data augmentation or intermediate fine-tuning of multilingual pre-trained models. However, these models can only perform implicit alignment across languages. In this paper, we propose a novel model named Contrastive Learning for Cro
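The fragment above contrasts implicit alignment with an explicit contrastive-learning objective across languages. As a generic, hypothetical illustration of that idea only (not the paper's actual model), an in-batch InfoNCE loss that pulls parallel sentence embeddings together and pushes non-parallel ones apart can be sketched like this:

```python
import numpy as np

def info_nce_loss(src, tgt, temperature=0.1):
    """In-batch InfoNCE contrastive loss over paired embeddings.

    src, tgt: (batch, dim) arrays; row i of src and row i of tgt are a
    positive pair (e.g. a sentence and its translation), while every
    other row in the batch serves as an in-batch negative.
    """
    # L2-normalise so dot products become cosine similarities
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature           # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # loss = mean negative log-probability of the diagonal (positive) pairs
    return -np.mean(np.diag(log_probs))
```

The loss is near zero when each source embedding is most similar to its own translation, and grows as pairs are misaligned, which is the sense in which the objective makes cross-lingual alignment explicit rather than implicit.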

tackle posted on 2025-3-23 23:31:06

Minhao Xue, Li Wang, Jie Shen, Kangning Wang, Wanning Wu, Long Fu — has focused on predicting entity information after fusing visual and text features. However, pre-trained language models have already acquired vast amounts of knowledge during their pre-training process. To leverage this knowledge, we propose a prompt network for MNER tasks (P-MNER). To minimize t

Decline posted on 2025-3-24 13:18:55

Graph Network (GN) and Question Decomposition (QD) are currently the two common approaches. The former uses a "black-box" reasoning process to capture the potential relationships between entities and sentences, achieving good performance, while the latter provides a clear reasoning lo

conscience posted on 2025-3-24 15:46:39

Xingjian Zhang, Shaohuai Yu, Xinghua Li, Shuang Li, Zhenyu Tan — first. This paper analyzed English-Chinese MT errors from the perspective of the naming-telling clause (NT clause, hereafter). Two types of text were input to obtain the MT output: one was to input the whole original English sentences into an MT engine; the other was to parse English sentences into English

View full version: Titlebook: Image and Graphics; 12th International Conference; Huchuan Lu, Wanli Ouyang, Min Xu (eds.); conference proceedings, 2023; The Editor(s) (if applicable) and The A