PON posted on 2025-3-26 23:36:34

http://reply.papertrans.cn/67/6636/663598/663598_31.png

晚间 posted on 2025-3-27 04:15:17

http://reply.papertrans.cn/67/6636/663598/663598_32.png

健谈的人 posted on 2025-3-27 08:04:48

A Spiking Neural Architecture for Vector Quantization and Clustering
…attain. Moreover, these architectures use rate codes, which require an implausibly high number of spikes and consequently a high energy cost. This paper presents, for the first time, an SNN architecture that uses temporal codes, more precisely a first-spike latency code, while performing compet…
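The core idea of a first-spike latency code is that each neuron emits a single spike whose *timing* carries the value: stronger inputs fire earlier. A minimal sketch (the linear mapping and the `t_max` window are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def first_spike_latency_encode(intensities, t_max=100.0):
    """Map input intensities in [0, 1] to first-spike times.
    Stronger inputs fire earlier, so one spike per neuron suffices,
    in contrast to rate codes that need many spikes per value."""
    x = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    # intensity 1.0 spikes at t = 0; intensity 0.0 spikes at t = t_max
    return t_max * (1.0 - x)

times = first_spike_latency_encode([0.0, 0.5, 1.0])
```

This is what makes the energy argument in the abstract concrete: one spike encodes what a rate code would need a whole spike train to express.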

烦忧 posted on 2025-3-27 10:05:58

A Survey of Graph Curvature and Embedding in Non-Euclidean Spaces
…ranging from social network graphs, brain images, and sensor networks to 3-dimensional objects. Understanding the underlying geometry and functions of such high-dimensional discrete data with non-Euclidean structure requires representations in non-Euclidean spaces. Recently, graph embedding…
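A common non-Euclidean embedding space for hierarchical graphs is hyperbolic space; a rough sketch of the distance function in the Poincaré ball model (the two-point form below is standard, but it is offered here only as an illustration of "non-Euclidean representation", not as this survey's method):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit ball in the
    Poincare ball model of hyperbolic space. Distances blow up near the
    boundary, which is what lets trees embed with low distortion."""
    nu = sum(a * a for a in u)            # squared norm of u
    nv = sum(b * b for b in v)            # squared norm of v
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    arg = 1.0 + 2.0 * duv / ((1.0 - nu) * (1.0 - nv))
    return math.acosh(arg)
```

Note how the denominator shrinks as points approach the boundary, so the same Euclidean gap costs more hyperbolic distance there.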

担忧 posted on 2025-3-27 15:32:46

A Tax Evasion Detection Method Based on Positive and Unlabeled Learning with Network Embedding Features
…labeled taxpayers who evade tax (positive samples) and a large number of unlabeled taxpayers who may or may not evade tax. This non-traditional dataset makes the problem difficult to address. In addition, the basic taxpayer features designed according to tax experts' domain knowledge…
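The positive-and-unlabeled (PU) setting described here is usually handled by training a classifier with the unlabeled set treated as negative and then correcting its scores. A minimal sketch of the classic Elkan–Noto correction (an illustration of PU learning in general, not necessarily the method this paper uses):

```python
def elkan_noto_correct(scores_unlabeled, scores_labeled_pos):
    """Elkan-Noto correction for PU learning. A classifier trained with
    unlabeled-as-negative estimates p(labeled | x); dividing by
    c = E[p(labeled | x) | y = positive], estimated on held-out labeled
    positives, recovers an estimate of p(y = positive | x)."""
    c = sum(scores_labeled_pos) / len(scores_labeled_pos)
    return [min(s / c, 1.0) for s in scores_unlabeled]

# Hypothetical scores from a non-traditional classifier:
probs = elkan_noto_correct([0.25, 0.6], [0.4, 0.6])
```

The correction matters because treating the unlabeled taxpayers as true negatives systematically underestimates evasion probability.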

衣服 posted on 2025-3-27 20:49:40

http://reply.papertrans.cn/67/6636/663598/663598_36.png

碎片 posted on 2025-3-28 01:51:35

http://reply.papertrans.cn/67/6636/663598/663598_37.png

AVID posted on 2025-3-28 02:46:57

http://reply.papertrans.cn/67/6636/663598/663598_38.png

审问 posted on 2025-3-28 08:43:55

AutoGraph: Automated Graph Neural Network
…some state-of-the-art GNN models have been proposed, e.g., Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs). Despite these successes, most GNNs have only a shallow structure, which limits their expressive power. To fully utilize the power of deep neural networks…
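For context on what gets stacked when a GNN is made deeper, here is a single GCN propagation step in NumPy, following the standard symmetric-normalization formulation H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). This sketches a generic GCN layer, not AutoGraph's searched architecture:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: add self-loops, symmetrically normalize the
    adjacency, propagate features, then apply a ReLU. Stacking many of
    these is what 'deep GNN' means, and where over-smoothing appears."""
    A_hat = A + np.eye(A.shape[0])                 # self-loops
    d = A_hat.sum(axis=1)                          # degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Two connected nodes, identity features and weights:
out = gcn_layer(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2), np.eye(2))
```

Each additional layer mixes in one more hop of neighborhood, which is why depth raises expressive power but also pushes node features toward each other.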

栏杆 posted on 2025-3-28 14:08:01

Automatic Curriculum Generation by Hierarchical Reinforcement Learning
…efficiency than traditional reinforcement learning algorithms, because curriculum learning enables agents to learn tasks in a meaningful order: from simple tasks to difficult ones. However, most curriculum learning in RL still relies on fixed, hand-designed sequences of tasks. We present a novel sche…
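The "simple tasks before difficult ones" idea can be sketched as a scheduler that advances only once the current task is mastered. This toy threshold rule is an assumption for illustration; the paper's hierarchical scheme generates the sequence automatically rather than checking a fixed list:

```python
def curriculum_step(tasks, success_rates, threshold=0.8):
    """Return the easiest task the agent has not yet mastered, where
    tasks are ordered from simple to difficult and mastery means the
    success rate meets the threshold. Falls back to the hardest task
    once everything is mastered."""
    for task, rate in zip(tasks, success_rates):
        if rate < threshold:
            return task
    return tasks[-1]

# Hypothetical manipulation curriculum:
current = curriculum_step(["reach", "grasp", "stack"], [0.9, 0.5, 0.1])
```

A hand-designed curriculum hard-codes both the task list and the thresholds; the abstract's point is that the sequence itself should be learned.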
View full version: Titlebook: Neural Information Processing; 27th International Conference. Haiqin Yang, Kitsuchart Pasupa, Irwin King. Conference proceedings, 2020, Springer Nature Switzerland.