不利 posted on 2025-3-25 06:34:48
TCMCoRep: Traditional Chinese Medicine Data Mining with Contrastive Graph Representation Learning
…TCM diagnosis in real life. Hybridization of homogeneous and heterogeneous graph convolutions preserves graph heterogeneity, preventing the possible damage from early augmentation and conveying strong samples for contrastive learning. Experiments conducted on practical datasets demonstrate our p…
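The excerpt above mentions contrastive learning over hybrid graph convolutions. As a rough illustration of the general idea only (not the TCMCoRep model itself), the sketch below contrasts node embeddings from two augmented views of a toy graph with an InfoNCE loss; the dense GCN layer, the feature-dropout augmentation, and the temperature are all assumptions.

```python
# Generic graph contrastive representation learning sketch (not TCMCoRep).
import torch
import torch.nn.functional as F

class DenseGCNLayer(torch.nn.Module):
    """One graph convolution over a dense, row-normalised adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0))        # add self-loops
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalise
        return F.relu(self.lin(adj @ x))          # aggregate, then transform

def info_nce(z1, z2, tau=0.5):
    """Contrast matching nodes across two views of the same graph."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                    # cosine similarities
    labels = torch.arange(z1.size(0))             # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy data: 6 nodes, 8 features, random edges (hypothetical).
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.6).float()
encoder = DenseGCNLayer(8, 16)

# Two stochastic views via simple feature dropout (a stand-in augmentation).
z1 = encoder(F.dropout(x, 0.2), adj)
z2 = encoder(F.dropout(x, 0.2), adj)
loss = info_nce(z1, z2)
loss.backward()
print(float(loss))
```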
Anonymous posted on 2025-3-25 12:49:56
PRACM: Predictive Rewards for Actor-Critic with Mixing Function in Multi-Agent Reinforcement Learning
…action space, PRACM uses Gumbel-Softmax. To promote cooperation among agents and to adapt to cooperative environments with penalties, predictive rewards are introduced. PRACM was evaluated against several baseline algorithms in "Cooperative Predator-Prey" and the challenging "SMAC" scenarios…
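Since the excerpt notes that PRACM uses Gumbel-Softmax for the discrete action space, here is a minimal, generic sketch of how a discrete actor can produce differentiable one-hot actions with the straight-through Gumbel-Softmax. This is not PRACM itself; the network sizes, temperature, and the toy critic term are assumptions.

```python
# Gumbel-Softmax for discrete actions in an actor-critic style setup (generic sketch).
import torch
import torch.nn.functional as F

class DiscreteActor(torch.nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, n_actions),
        )

    def forward(self, obs, tau=1.0):
        logits = self.net(obs)
        # hard=True returns one-hot actions in the forward pass while the
        # backward pass uses the soft relaxation (straight-through estimator).
        return F.gumbel_softmax(logits, tau=tau, hard=True)

actor = DiscreteActor(obs_dim=10, n_actions=4)
obs = torch.randn(32, 10)                    # a batch of observations
actions = actor(obs)                         # differentiable one-hot actions
# A toy critic score so gradients can flow back through the sampled actions.
critic_score = (actions * torch.randn(32, 4)).sum()
critic_score.backward()
print(actions.shape, actions.sum(dim=1))     # each row sums to 1 (one-hot)
```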
cyanosis posted on 2025-3-25 16:22:35
A Cybersecurity Knowledge Graph Completion Method for Scalable Scenarios
…matrix and a multi-head attention mechanism to explore the relationships between samples. To mitigate the catastrophic forgetting problem, a new self-distillation algorithm is designed to enhance the robustness of the trained model. We construct a knowledge graph based on cybersecurity data and conduct…
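The excerpt describes a self-distillation algorithm used against catastrophic forgetting. The sketch below shows only the generic pattern: a frozen snapshot of the earlier model acts as teacher, and a KL term regularises the updated model toward it. The stand-in linear scorer, temperature, and loss weight are assumptions, not the paper's algorithm.

```python
# Generic self-distillation against catastrophic forgetting (illustrative only).
import copy
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

model = torch.nn.Linear(16, 8)                # stand-in scoring model
teacher = copy.deepcopy(model).eval()         # frozen snapshot of the old model
for p in teacher.parameters():
    p.requires_grad_(False)

x, y = torch.randn(32, 16), torch.randint(0, 8, (32,))   # new training batch
student_logits = model(x)
with torch.no_grad():
    teacher_logits = teacher(x)

task_loss = F.cross_entropy(student_logits, y)
kd_loss = distillation_loss(student_logits, teacher_logits)
loss = task_loss + 0.5 * kd_loss              # 0.5 is an assumed weight
loss.backward()
print(float(task_loss), float(kd_loss))
```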
VEN posted on 2025-3-26 11:32:41
Importance-Based Neuron Selective Distillation for Interference Mitigation in Multilingual Neural Machine Translation
…the important ones representing general knowledge of each language and the unimportant ones representing individual knowledge of each low-resource language. Then, we prune the pre-trained model, retaining only the important neurons, and train the pruned model supervised by the original complete model…
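As a loose illustration of the prune-then-distill recipe in the excerpt, the sketch below scores hidden neurons by the L1 norm of their outgoing weights (an assumed importance measure, not necessarily the paper's), keeps the top half, and trains the pruned network to match the complete model's outputs. The tiny two-layer network and the MSE distillation loss are likewise assumptions.

```python
# Importance-based neuron pruning followed by distillation from the full model (sketch).
import torch
import torch.nn.functional as F

full = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
)

# Score each of the 32 hidden neurons by its outgoing L1 weight norm.
importance = full[2].weight.abs().sum(dim=0)             # shape: (32,)
keep = importance.topk(k=16).indices                     # keep the top half

# Build the pruned ("student") model by slicing the kept rows/columns.
pruned = torch.nn.Sequential(
    torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8)
)
with torch.no_grad():
    pruned[0].weight.copy_(full[0].weight[keep])
    pruned[0].bias.copy_(full[0].bias[keep])
    pruned[2].weight.copy_(full[2].weight[:, keep])
    pruned[2].bias.copy_(full[2].bias)

# Train the pruned model supervised by the original complete model's outputs.
x = torch.randn(64, 16)
with torch.no_grad():
    teacher_out = full(x)
student_out = pruned(x)
loss = F.mse_loss(student_out, teacher_out)              # simple distillation loss
loss.backward()
print(float(loss))
```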
opinionated posted on 2025-3-26 14:41:19
Are GPT Embeddings Useful for Ads and Recommendation?
…embedding aggregation, and as a pre-training task (EaaP) to replicate the capability of LLMs, respectively. Our experiments demonstrate that, by incorporating GPT embeddings, basic PLMs can improve their performance in both ads and recommendation tasks. Our code is available at…
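To make the "embeddings as features" idea in the excerpt concrete, the sketch below mean-pools pre-computed text embeddings and feeds them into a small click-through-rate head. The random placeholder vectors stand in for embeddings you would obtain from an LLM embedding model, and the embedding width, pooling choice, and head architecture are assumptions rather than the paper's setup.

```python
# LLM text embeddings as extra features for a simple CTR model (illustrative sketch).
import torch
import torch.nn.functional as F

EMB_DIM = 1536                                   # assumed embedding width

def aggregate(field_embeddings):
    """Mean-pool the embeddings of an item's text fields (title, description, ...)."""
    return torch.stack(field_embeddings).mean(dim=0)

class CTRHead(torch.nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(emb_dim * 2, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )

    def forward(self, user_emb, item_emb):
        return self.net(torch.cat([user_emb, item_emb], dim=-1)).squeeze(-1)

# Placeholder embeddings for a batch of 8 user/item pairs.
user_emb = torch.randn(8, EMB_DIM)
item_emb = torch.stack([aggregate([torch.randn(EMB_DIM), torch.randn(EMB_DIM)])
                        for _ in range(8)])
clicks = torch.randint(0, 2, (8,)).float()

model = CTRHead(EMB_DIM)
logits = model(user_emb, item_emb)
loss = F.binary_cross_entropy_with_logits(logits, clicks)
loss.backward()
print(float(loss))
```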