Title: Chinese Computational Linguistics; 18th China National Conference; Editors: Maosong Sun, Xuanjing Huang, Yang Liu; Conference proceedings 2019; Springer Nature Switzerland

Thread starter: 压缩
Posted on 2025-3-23 16:17:20
BB-KBQA: BERT-Based Knowledge Base Question Answering
…linguistic knowledge to obtain deep contextualized representations. Experimental results demonstrate that our model achieves state-of-the-art performance on the NLPCC-ICCPOL 2016 KBQA dataset, with an averaged F1 score of 84.12% (a 1.65% absolute improvement).
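The snippet doesn't spell out the architecture, but a common BERT-based KBQA setup is to score (question, candidate) pairs with a classification head over the [CLS] vector. A minimal sketch of that pattern, assuming HuggingFace `transformers`; the pairing scheme and the untrained `score_head` are illustrative assumptions, not the paper's exact model:

```python
# Minimal sketch: BERT pair-scoring for KB candidates.
# The pairing setup and score_head are assumptions, not the paper's design.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
score_head = torch.nn.Linear(bert.config.hidden_size, 1)  # hypothetical, untrained

def score_candidates(question, candidates):
    """Return one relevance score per KB candidate for the question."""
    enc = tokenizer([question] * len(candidates), candidates,
                    return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        cls = bert(**enc).last_hidden_state[:, 0]  # [CLS] vector per pair
    return score_head(cls).squeeze(-1).tolist()

print(score_candidates("姚明的身高是多少?", ["身高", "出生地", "国籍"]))
```

In practice the head would be fine-tuned on labeled question-candidate pairs; the snippet above only shows the scoring plumbing.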
Posted on 2025-3-24 12:02:25
…selected sentence by an abstractive decoder. Moreover, we apply the BERT pre-trained model as the document encoder, sharing the context representations with both decoders. Experiments on the CNN/DailyMail dataset show that the proposed framework outperforms both state-of-the-art extractive and abstractive models.
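The shared-encoder idea in this snippet (one document encoder feeding both an extractive head and an abstractive decoder) can be sketched in a few lines of PyTorch. The tiny Transformer standing in for BERT, the layer sizes, and the token-level extractive head are illustrative assumptions, not the paper's configuration:

```python
# Sketch of a shared encoder feeding two decoders: an extractive scorer
# and an abstractive generator. Dimensions are illustrative only.
import torch
import torch.nn as nn

class SharedEncoderSummarizer(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)  # stand-in for BERT
        self.extract_head = nn.Linear(d_model, 1)                      # extractive: score units
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)  # abstractive: rewrite
        self.generator = nn.Linear(d_model, vocab_size)

    def forward(self, doc_ids, summary_ids):
        memory = self.encoder(self.embed(doc_ids))          # shared context representations
        extract_scores = self.extract_head(memory).squeeze(-1)
        # (a real abstractive decoder would also apply a causal target mask)
        dec_out = self.decoder(self.embed(summary_ids), memory)
        return extract_scores, self.generator(dec_out)

model = SharedEncoderSummarizer()
scores, logits = model(torch.randint(0, 30522, (1, 64)), torch.randint(0, 30522, (1, 16)))
print(scores.shape, logits.shape)  # torch.Size([1, 64]) torch.Size([1, 16, 30522])
```

The point of the design is that both decoders read the same `memory`, so the encoder is trained by both objectives at once.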
Posted on 2025-3-24 18:30:29
Testing the Reasoning Power for NLI Models with Annotated Multi-perspective Entailment Dataset
…to explain the recognition ability of four NN-based models at a fine-grained level. The experimental results show that all the models perform worse on commonsense reasoning than on the other entailment categories; the highest accuracy difference is 13.22%.
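The fine-grained evaluation behind a number like the 13.22% gap reduces to grouping predictions by annotated entailment category and comparing per-category accuracy. A small sketch, with hypothetical category names standing in for the paper's taxonomy:

```python
# Per-category accuracy over a multi-perspective-annotated NLI set.
# Category names below are illustrative, not the paper's exact labels.
from collections import defaultdict

def accuracy_by_category(examples):
    """examples: iterable of (category, gold_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for cat, gold, pred in examples:
        total[cat] += 1
        correct[cat] += int(gold == pred)
    return {cat: correct[cat] / total[cat] for cat in total}

preds = [("lexical", "entail", "entail"),
         ("commonsense", "entail", "neutral"),
         ("commonsense", "contradict", "contradict"),
         ("logic", "neutral", "neutral")]
acc = accuracy_by_category(preds)
print(acc, "max gap:", max(acc.values()) - min(acc.values()))
```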
Posted on 2025-3-24 22:59:04
ERCNN: Enhanced Recurrent Convolutional Neural Networks for Learning Sentence Similarity
…and the interactive effects of keypoints in two sentences to learn sentence similarity. With less computational complexity, our model yields a state-of-the-art improvement over other baseline models on the paraphrase identification task on the Ant Financial competition dataset.
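The snippet names two ingredients: recurrent-convolutional sentence encoding and cross-sentence interaction of keypoints. The sketch below wires those together in PyTorch; the specific layers and the dot-product attention used for the interaction are assumptions, not the ERCNN design:

```python
# Sketch: recurrent + convolutional encoding with a cross-sentence
# interaction step for sentence-pair similarity. Sizes are illustrative.
import torch
import torch.nn as nn

class RecurrentConvSimilarity(nn.Module):
    def __init__(self, vocab_size=20000, d_emb=128, d_hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.gru = nn.GRU(d_emb, d_hid, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * d_hid, d_hid, kernel_size=3, padding=1)
        self.classifier = nn.Linear(4 * d_hid, 1)

    def encode(self, ids):
        h, _ = self.gru(self.embed(ids))              # recurrent context
        c = torch.relu(self.conv(h.transpose(1, 2)))  # convolutional keypoint features
        return h, c.max(dim=2).values                 # token states + pooled vector

    def forward(self, ids_a, ids_b):
        h_a, v_a = self.encode(ids_a)
        h_b, v_b = self.encode(ids_b)
        # interaction: attend from sentence A's states to sentence B's states
        attn = torch.softmax(h_a @ h_b.transpose(1, 2), dim=-1)
        inter = (attn @ h_b).mean(dim=1)
        feats = torch.cat([v_a, v_b, inter], dim=-1)
        return torch.sigmoid(self.classifier(feats)).squeeze(-1)

model = RecurrentConvSimilarity()
p = model(torch.randint(0, 20000, (2, 10)), torch.randint(0, 20000, (2, 12)))
print(p)  # one similarity probability per sentence pair
```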
Posted on 2025-3-25 02:18:06
Comparative Investigation of Deep Learning Components for End-to-end Implicit Discourse Relationship
…results show that, due to different linguistic features, the neural components have different effects in English and Chinese. Moreover, our models achieve state-of-the-art performance on the CoNLL-2016 English and Chinese datasets.
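A "comparative investigation of components" amounts to holding the task fixed and swapping encoder components in and out of the same argument-pair classifier. A hedged sketch of that experimental scaffold, with a BiLSTM and a CNN as illustrative stand-ins for the components compared, and the four PDTB-style top-level relations as the label set:

```python
# Sketch: implicit discourse relation classifier with swappable encoders,
# so different neural components can be compared under one setup.
import torch
import torch.nn as nn

class ArgPairClassifier(nn.Module):
    def __init__(self, encoder, d_enc, n_relations=4, vocab_size=20000, d_emb=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.encoder = encoder
        # labels: Comparison / Contingency / Expansion / Temporal
        self.out = nn.Linear(2 * d_enc, n_relations)

    def forward(self, arg1_ids, arg2_ids):
        v1 = self.encoder(self.embed(arg1_ids))
        v2 = self.encoder(self.embed(arg2_ids))
        return self.out(torch.cat([v1, v2], dim=-1))

class BiLSTMEncoder(nn.Module):
    def __init__(self, d_emb=100, d_hid=100):
        super().__init__()
        self.lstm = nn.LSTM(d_emb, d_hid, batch_first=True, bidirectional=True)
    def forward(self, x):
        h, _ = self.lstm(x)
        return h.mean(dim=1)  # (batch, 2 * d_hid)

class CNNEncoder(nn.Module):
    def __init__(self, d_emb=100, d_hid=200):
        super().__init__()
        self.conv = nn.Conv1d(d_emb, d_hid, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x.transpose(1, 2))).max(dim=2).values

for name, enc in [("bilstm", BiLSTMEncoder()), ("cnn", CNNEncoder())]:
    model = ArgPairClassifier(enc, d_enc=200)
    logits = model(torch.randint(0, 20000, (2, 15)), torch.randint(0, 20000, (2, 18)))
    print(name, logits.shape)  # (2, 4) relation logits per argument pair
```

Running the same training and evaluation loop over each encoder variant, on both English and Chinese data, is what lets component effects be compared across the two languages.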