一大块 posted on 2025-3-23 10:41:09
Chinese Personalized Commonsense Understanding and Reasoning Based on Curriculum Learning
…BERT, GPT2, and BART with different structures. The experimental results show that models trained with the curriculum-learning framework generate commonsense reasoning results that are more diverse and more compliant with the specified personality traits.
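The preview names a curriculum-learning training framework but gives no details of its schedule, so here is a minimal, hypothetical sketch of the general idea only: order training examples from easy to hard and gradually widen the pool a sampler draws from. The difficulty measure (sequence length) and all names below are assumptions for illustration, not the paper's.

import random

def difficulty(example: str) -> int:
    # Hypothetical difficulty proxy: longer sequences count as harder.
    # The paper's actual difficulty measure is not given in the preview.
    return len(example.split())

def curriculum_batches(examples, num_stages=3, batch_size=2, seed=0):
    # Easy-to-hard curriculum: at stage k the sampler may only draw from
    # the easiest k/num_stages fraction of the data, so early training
    # sees easy examples and later training sees everything.
    rng = random.Random(seed)
    ranked = sorted(examples, key=difficulty)  # easy -> hard
    for stage in range(1, num_stages + 1):
        pool = ranked[: max(batch_size, len(ranked) * stage // num_stages)]
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield stage, pool[i : i + batch_size]

toy_data = ["好", "你好 吗", "他 去 学校",
            "天气 很 好 我们 出去 散步 吧",
            "他 昨天 和 朋友 一起 去 学校 上课"]
for stage, batch in curriculum_batches(toy_data):
    print(stage, batch)

Under this reading, early-stage batches would be fed to the BERT, GPT2, or BART fine-tuning first, with harder data mixed in at later stages.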
只有 posted on 2025-3-24 00:36:11
ConFit: Contrastive Fine-Tuning of Text-to-Text Transformer for Relation Classification
…based on their context. The latest trend for this task resorts to pre-trained language models (PLMs): it reformulates discriminative RC as a linguistic problem and fully exploits the language knowledge that PLMs derive from pre-training. Despite the visible progress, existing approaches hand…
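The preview cuts off before describing how the contrastive fine-tuning works. A common instantiation of the name is a supervised contrastive (InfoNCE-style) loss that pulls same-relation sentence embeddings together and pushes different-relation ones apart; the sketch below is that generic loss on random tensors, shown as an assumption for illustration rather than as ConFit's actual objective.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    # Generic supervised contrastive objective: for each anchor, raise the
    # probability of same-label examples relative to all other examples.
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                 # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)

    # average negative log-probability of positives per anchor
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)
    return (per_anchor[has_pos] / pos_counts[has_pos]).mean()

emb = torch.randn(8, 16, requires_grad=True)      # stand-in PLM encodings
lab = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])      # relation labels
print(supervised_contrastive_loss(emb, lab))

Plugging real PLM encodings of relation mentions in place of the random embeddings gives the usual contrastive fine-tuning setup.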
micturition posted on 2025-3-24 18:01:51
An Iterative Framework for Document-Level Event Argument Extraction Assisted by Long Short-Term Memory
…structure is complex. Most current methods are entity-based classification or generative frameworks, and they face significant challenges when dealing with argument types that are not entities and with complex event types. In this paper, we propose an iterative extraction framework for DEAE…
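The preview stops right after announcing the iterative framework. As a toy illustration of what iterative extraction with a memory of earlier passes can mean in the broadest sense, here is a rule-based skeleton; every name is hypothetical, and the paper's extractor is presumably neural, with the memory encoded by the LSTM its title mentions.

def extract_pass(sents, memory):
    # Toy stand-in for a neural argument extractor. A real system would
    # encode 'memory' (e.g., with an LSTM) and condition on it.
    found = {}
    for sent in sents:
        if "Beijing" in sent and "place" not in memory:
            found["place"] = "Beijing"
        # Pretend 'time' can only be resolved once the place is known,
        # so it takes a second pass -- this is what iteration buys us.
        if "Monday" in sent and "place" in memory and "time" not in memory:
            found["time"] = "Monday"
    return found

def iterative_extract(sents, max_iters=5):
    memory = {}                        # argument roles extracted so far
    for _ in range(max_iters):
        new_args = extract_pass(sents, memory)
        if not new_args:               # converged: nothing new found
            break
        memory.update(new_args)        # later passes see earlier results
    return memory

doc = ["The meeting was held in Beijing.", "It took place on Monday."]
print(iterative_extract(doc))          # {'place': 'Beijing', 'time': 'Monday'}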
类型 posted on 2025-3-25 02:27:14
Prompt Debiasing via Causal Intervention for Event Argument Extraction
…scenarios. By formatting a fine-tuning task into a pre-training objective, prompt-based methods effectively alleviate the data-scarcity problem. However, previous research has seldom investigated the discrepancies among different prompt-formulation strategies. In this work, we compare two kinds of prompts…
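The preview truncates before naming the two kinds of prompts being compared, so the pair below is purely illustrative of what different prompt-formulation strategies for event argument extraction can look like (a cloze template versus a question template); these are generic examples, not the paper's actual prompts.

def cloze_prompt(context, trigger, role):
    # Cloze-style formulation: a masked LM fills the [MASK] slot.
    return f"{context} In the {trigger} event, the {role} is [MASK]."

def question_prompt(context, trigger, role):
    # Question-style formulation: the same role is posed as a QA query.
    return (f"{context} Question: who or what is the {role} "
            f"of the {trigger} event? Answer:")

ctx = "The company acquired the startup last year."
for build in (cloze_prompt, question_prompt):
    print(build(ctx, "acquisition", "buyer"))

Which formulation a masked or generative LM answers more reliably, and what biases each one introduces, is exactly the kind of discrepancy the post says the paper investigates.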