闯入 posted on 2025-3-23 16:48:50

DAMGCN: Entity Linking in Visually Rich Documents with Dependency-Aware Multimodal Graph Convolution. Experiments on the public FUNSD and XFUND datasets show that DAMGCN achieves competitive results, with F1 scores of 0.8063 and 0.7303 on the entity linking task, respectively, while its model size is much smaller than that of state-of-the-art models.

Melatonin posted on 2025-3-24 21:37:21

CED: Catalog Extraction from Documents. Experiments show that our proposed method outperforms baseline systems and transfers well. We believe the CED task could fill the gap between raw text segments and information extraction tasks on extremely long documents. Data and code are available at

View full version: Titlebook: Document Analysis and Recognition - ICDAR 2023; 17th International Conference; Gernot A. Fink, Rajiv Jain, Richard Zanibbi; Conference proceedings 2023