植物学 posted on 2025-3-26 23:08:22

[…]s on the same data, we find that the CNN-based Kraken model slightly outperforms the transformer-based TESTR model on character recognition accuracy and some object-level metrics, even though it lags behind on pixel-level metrics.

单色 posted on 2025-3-27 04:27:24

http://reply.papertrans.cn/29/2849/284816/284816_32.png

抵消 posted on 2025-3-27 08:13:39

http://reply.papertrans.cn/29/2849/284816/284816_33.png

粗鲁性质 posted on 2025-3-27 13:05:37

An Interpretable Deep Learning Approach for Morphological Script Type Analysis: […] prototypes, representative of letter morphology, and provide qualitative and quantitative tools for their comparison and analysis. We demonstrate our approach by applying it to the … script type and its two subtypes formalized by A. Derolez: Northern and Southern …

Aerophagia posted on 2025-3-27 15:42:52

MONSTERMASH: Multidirectional, Overlapping, Nested, Spiral Text Extraction for Recognition Models of… […]s on the same data, we find that the CNN-based Kraken model slightly outperforms the transformer-based TESTR model on character recognition accuracy and some object-level metrics, even though it lags behind on pixel-level metrics.
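For readers unfamiliar with the metric in this abstract: "character accuracy" for text recognition models such as Kraken or TESTR is conventionally reported as 1 − CER, where CER is the Levenshtein (edit) distance between the predicted and ground-truth transcriptions divided by the ground-truth length. A minimal sketch (function names are my own, not from the paper):

```python
# Sketch of character accuracy as 1 - CER (character error rate).
# CER = Levenshtein(pred, truth) / len(truth); names are illustrative.

def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def char_accuracy(pred: str, truth: str) -> float:
    """1 - CER; can go negative if the prediction is much longer than truth."""
    return 1.0 - levenshtein(pred, truth) / max(len(truth), 1)

print(round(char_accuracy("recogniton", "recognition"), 3))  # one missing char
```

Object-level and pixel-level metrics, by contrast, score the detected text regions (e.g. IoU of predicted vs. ground-truth polygons), which is why a model can lead on one family of metrics and trail on the other.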

改进 posted on 2025-3-27 20:55:39

http://reply.papertrans.cn/29/2849/284816/284816_36.png

MUTE posted on 2025-3-27 23:02:30

Ablation Study of a Multimodal GAT Network on Perfect Synthetic and Real-world Data to Investigate […] a perfect synthetic and an imperfect real-world dataset. The results of the study show the importance of language modules for semantic embeddings in multimodal invoice recognition and illustrate the impact of data annotation quality. We further contribute an adapted GAT model for German invoices.
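The GAT in this title is a graph attention network, where each node (e.g. a text field on an invoice) updates its features as an attention-weighted sum over its neighbours. A minimal single-head layer in NumPy, as a generic illustration only; the shapes and names (`W`, `a`) are assumptions, not the paper's model:

```python
import numpy as np

def gat_layer(X, A, W, a):
    """Single-head graph attention layer (Velickovic et al. style).
    X: (N, F) node features; A: (N, N) 0/1 adjacency with self-loops;
    W: (F, F') projection; a: (2F',) attention vector. Returns (N, F')."""
    H = X @ W                                    # project node features
    N = H.shape[0]
    e = np.zeros((N, N))
    for i in range(N):                           # e[i, j] = a . [h_i || h_j]
        for j in range(N):
            e[i, j] = np.concatenate([H[i], H[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -1e9)                 # mask non-edges before softmax
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over neighbours
    return alpha @ H                             # attention-weighted aggregation
```

In an invoice setting, the "multimodal" part means the input features `X` would combine visual, positional, and (per the abstract's finding) language-model text embeddings per node.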

VEST posted on 2025-3-28 05:36:46

http://reply.papertrans.cn/29/2849/284816/284816_38.png

AFFIX posted on 2025-3-28 07:26:44

http://reply.papertrans.cn/29/2849/284816/284816_39.png

脱毛 posted on 2025-3-28 11:46:00

http://reply.papertrans.cn/29/2849/284816/284816_40.png
View full version: Titlebook: Document Analysis and Recognition – ICDAR 2024 Workshops; Athens, Greece, August…; Harold Mouchère, Anna Zhu; Conference proceedings 2024; The Edi…