使委屈 posted on 2025-3-21 19:33:47

Book title: Natural Language Processing and Chinese Computing

Impact Factor: http://impactfactor.cn/2024/if/?ISSN=BK0669627
Impact Factor subject ranking: http://impactfactor.cn/2024/ifr/?ISSN=BK0669627
Online attention: http://impactfactor.cn/2024/at/?ISSN=BK0669627
Online attention subject ranking: http://impactfactor.cn/2024/atr/?ISSN=BK0669627
Times cited: http://impactfactor.cn/2024/tc/?ISSN=BK0669627
Times cited subject ranking: http://impactfactor.cn/2024/tcr/?ISSN=BK0669627
Annual citations: http://impactfactor.cn/2024/ii/?ISSN=BK0669627
Annual citations subject ranking: http://impactfactor.cn/2024/iir/?ISSN=BK0669627
Reader feedback: http://impactfactor.cn/2024/5y/?ISSN=BK0669627
Reader feedback subject ranking: http://impactfactor.cn/2024/5yr/?ISSN=BK0669627

生命层 posted on 2025-3-21 23:34:13

http://reply.papertrans.cn/67/6697/669627/669627_2.png

muffler posted on 2025-3-22 00:50:41

LasQ: Largest Singular Components Fine-Tuning for LLMs with Quantization

[abstract excerpt] …ge generation tasks. The experiments show that our method significantly outperforms existing methods with fewer training parameters. Compared with the LoftQ and QLoRA methods it achieves a 2%–15% improvement, and it can even match the results of LoRA fine-tuning and full-parameter fine-tuning.
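The excerpt doesn't spell out the method, but the title points at the same family of ideas as LoftQ and PiSSA: build trainable low-rank factors from the largest singular components of each weight matrix and freeze (and, in practice, quantize) the remainder. A minimal numpy sketch of that general idea — illustrative names and shapes only, not the paper's code:

```python
import numpy as np

def largest_singular_adapter(W, rank):
    """Split W into a low-rank part built from its largest singular
    components (kept trainable) and a frozen residual (quantized in
    practice). Illustrative only; the paper's exact recipe may differ."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Trainable factors capture the top-`rank` singular directions.
    A = U[:, :rank] * np.sqrt(S[:rank])           # shape (out, rank)
    B = np.sqrt(S[:rank])[:, None] * Vt[:rank]    # shape (rank, in)
    residual = W - A @ B                          # frozen remainder
    return A, B, residual

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))
A, B, R = largest_singular_adapter(W, rank=2)
# A @ B is the best rank-2 approximation of W, so A @ B + R recovers W.
assert np.allclose(A @ B + R, W)
```

During fine-tuning only `A` and `B` would be updated, which is why the parameter count stays far below full fine-tuning.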

Cpr951 posted on 2025-3-22 08:10:32

Sparse Mixture of Experts Language Models Excel in Knowledge Distillation

[abstract excerpt] …illation using MoE without the necessity of continued pretraining. Experimental results indicate that our approach enhances the model's capabilities compared to dense-model distillation, achieving superior performance across a multitude of tasks. We will release our code at ..
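The excerpt only names the setup (a sparse-MoE teacher distilled into a student). Whatever the paper's specific losses are, distillation from an MoE teacher needs nothing beyond its output logits, since expert routing is internal to the teacher. A sketch of the standard temperature-scaled KD objective, as a point of reference:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student), the classic KD loss.
    The teacher here would be the sparse-MoE model; only its logits are
    needed, so its routing decisions never appear in the objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# Identical distributions give zero loss; divergence grows as they differ.
same = distillation_loss([[1.0, 2.0, 3.0]], [[1.0, 2.0, 3.0]])
diff = distillation_loss([[3.0, 2.0, 1.0]], [[1.0, 2.0, 3.0]])
assert same < 1e-9 < diff
```

In practice this term is mixed with the ordinary cross-entropy on gold labels; the paper's exact combination is not given in this excerpt.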

隐语 posted on 2025-3-22 11:37:20

Evaluation and Analysis of the Chinese Semantic Dependency Understanding Ability of Large Language Models

[abstract excerpt] …derstanding of high-order semantic structure knowledge and semantic relation knowledge. Furthermore, our experiments reveal that while LLMs perform well on the in-domain (ID) test set via SFT, their generalization ability on the out-of-domain (OOD) test set remains inadequate.
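The excerpt doesn't say how understanding is scored, but Chinese semantic dependency work is conventionally evaluated with labeled precision/recall/F1 over (head, dependent, relation) arcs, computed separately on ID and OOD test sets. A sketch of that conventional metric — the relation labels below are illustrative, not the paper's scheme:

```python
def labeled_f1(gold_arcs, pred_arcs):
    """Labeled precision/recall/F1 over semantic dependency arcs, each a
    (head, dependent, relation) triple. Illustrative of the usual metric
    for this task, not necessarily the paper's exact protocol."""
    gold, pred = set(gold_arcs), set(pred_arcs)
    tp = len(gold & pred)                    # arcs correct in both ends and label
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [(0, 1, "Agt"), (1, 2, "Pat"), (2, 3, "Time")]
pred = [(0, 1, "Agt"), (1, 2, "Cont"), (2, 3, "Time")]  # one wrong label
p, r, f = labeled_f1(gold, pred)
assert abs(f - 2 / 3) < 1e-9
```

Comparing this score on ID versus OOD sentences is what exposes the generalization gap the excerpt describes.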

枯萎将要 posted on 2025-3-22 15:10:13

http://reply.papertrans.cn/67/6697/669627/669627_6.png

SUGAR posted on 2025-3-22 20:15:23

ISSN 0302-9743. [series excerpt] …ing and Chinese Computing, NLPCC 2024, held in Hangzhou, China, during November 2024. The 161 full papers and 33 evaluation workshop papers included in these proceedings were carefully reviewed and selected from 451 submissions. They deal with the following areas: Fundamentals of NLP; Information Ex…

CALL posted on 2025-3-22 21:13:47

Improving Causal Inference of Large Language Models with SCM Tools

[abstract excerpt] …ools, and combine the inference results of the causal inference tools to generate the final causal question answers. The experimental results show that the proposed method outperforms the best existing methods.
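The excerpt describes delegating causal questions from the LLM to structural causal model (SCM) tools and merging the tools' results into the final answer. The kernel of such a tool is evaluating interventional queries with the do-operator. A toy deterministic SCM illustrating that kernel — the graph, coefficients, and function names are invented for illustration, not taken from the paper:

```python
def simulate_scm(do_x=None):
    """Tiny linear SCM with graph Z -> X -> Y and Z -> Y, evaluated at the
    noise means for clarity. `do_x` implements the do-operator by severing
    X's structural dependence on Z. Illustrative only; the paper's SCM
    tools and how the LLM invokes them are not detailed in this excerpt."""
    z = 1.0                                   # exogenous Z (at its mean)
    x = 2.0 * z if do_x is None else do_x     # X := 2Z, unless intervened on
    y = 3.0 * x + 1.0 * z                     # Y := 3X + Z
    return y

# Observational vs. interventional: do(X=0) removes Z's influence on X,
# so Y falls from 3*2 + 1 = 7 to 3*0 + 1 = 1.
assert simulate_scm() == 7.0
assert simulate_scm(do_x=0.0) == 1.0
```

An LLM-plus-tools pipeline would parse the causal question, choose the query type (observational, interventional, counterfactual), call a routine like this, and phrase the numeric result as the answer.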

遭受 posted on 2025-3-23 04:53:26

http://reply.papertrans.cn/67/6697/669627/669627_9.png

壕沟 posted on 2025-3-23 06:02:42

http://reply.papertrans.cn/67/6697/669627/669627_10.png
View full version: Titlebook: Natural Language Processing and Chinese Computing; 13th National CCF Co… Derek F. Wong, Zhongyu Wei, Muyun Yang. Conference proceedings 2025. Th…