闯入 posted on 2025-3-23 16:48:50
DAMGCN: Entity Linking in Visually Rich Documents with Dependency-Aware Multimodal Graph Convolution
… the public FUNSD and XFUND datasets show that our DAMGCN achieves competitive results, i.e., F1 scores of 0.8063 and 0.7303 on the entity linking task, respectively, while the model size is much smaller than that of the state-of-the-art models.
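For context, below is a minimal sketch of the general idea behind GCN-based entity linking in visually rich documents: one message-passing step over text-segment nodes, followed by pairwise link scoring. It assumes PyTorch; the class and layer names (GCNLinkScorer, key_proj, val_proj) are invented for illustration, and this is not the authors' DAMGCN code.

```python
# Illustrative sketch only, NOT the DAMGCN implementation.
import torch
import torch.nn as nn

class GCNLinkScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gcn = nn.Linear(dim, dim)       # neighbour-aggregation transform
        self.key_proj = nn.Linear(dim, dim)  # projects candidate "key" segments
        self.val_proj = nn.Linear(dim, dim)  # projects candidate "value" segments

    def forward(self, x, adj):
        # x:   (N, dim) fused text/layout/visual features, one row per segment
        # adj: (N, N) normalized adjacency, e.g. built from spatial neighbours
        h = torch.relu(self.gcn(adj @ x))    # one round of message passing
        return self.key_proj(h) @ self.val_proj(h).t()   # (N, N) link scores

# Usage with random features and a self-loop-only adjacency:
scores = GCNLinkScorer(32)(torch.randn(6, 32), torch.eye(6))  # scores[i, j]: i -> j
```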
Melatonin posted on 2025-3-24 21:37:21
CED: Catalog Extraction from Documents
… that our proposed method outperforms baseline systems and shows a good ability to transfer. We believe the CED task could fill the gap between raw text segments and information extraction tasks on extremely long documents. Data and code are available at …
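To make the task concrete, here is a toy sketch of what catalog extraction produces: nesting flat text segments into a catalog (table-of-contents) tree. The heading-depth tags and helper names are invented for illustration; this is not the CED model, which must predict this structure rather than receive it.

```python
# Toy illustration of the catalog structure targeted by catalog extraction.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)

def build_catalog(segments):
    """segments: list of (depth, text); depth 0 marks body text under the last heading."""
    root = Node("ROOT")
    stack = [(0, root)]                      # open headings, shallowest first
    for depth, text in segments:
        if depth == 0:                       # body text attaches to the current heading
            stack[-1][1].children.append(Node(text))
            continue
        while len(stack) > 1 and stack[-1][0] >= depth:
            stack.pop()                      # close same-level or deeper headings
        node = Node(text)
        stack[-1][1].children.append(node)
        stack.append((depth, node))
    return root

catalog = build_catalog([
    (1, "1 Introduction"), (0, "Long documents need structure ..."),
    (1, "2 Method"), (2, "2.1 Model"), (0, "Each segment is ..."),
])
```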