下级 发表于 2025-3-28 17:45:01

http://reply.papertrans.cn/63/6206/620513/620513_41.png

chapel 发表于 2025-3-28 19:48:31

Maximum Entropy Linear Manifold for Learning Discriminative Low-Dimensional Representation
…a particular low-dimensional representation which discriminates classes can not only enhance the classification procedure but also make it faster, and, contrary to high-dimensional embeddings, it can be used efficiently for visual exploratory data analysis. In this paper we propose Maximum Entropy Linear Manifold…
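The excerpt cuts off before the MELM construction itself, so nothing below is the paper's algorithm. As a hedged illustration of the general idea — learning a linear projection whose low-dimensional output separates classes while remaining usable for visualisation — here is a minimal numpy sketch that uses Fisher's discriminant criterion as a stand-in objective; `fisher_projection` and the toy data are illustrative assumptions.

```python
import numpy as np

def fisher_projection(X, y, n_components=2):
    """Learn a linear map W that pushes class means apart while keeping
    each class compact -- a stand-in for a discriminative low-dim embedding."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    # Solve Sw^{-1} Sb v = lambda v; regularize Sw slightly for stability.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)[:n_components]
    return evecs[:, order].real          # d x n_components projection

# toy usage: two Gaussian classes in 5-D, projected to 1-D for inspection
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
Z = X @ fisher_projection(X, y, n_components=1)
```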

Enrage 发表于 2025-3-28 23:28:47

http://reply.papertrans.cn/63/6206/620513/620513_43.png

修剪过的树篱 发表于 2025-3-29 03:45:58

Parameter Learning of Bayesian Network Classifiers Under Computational Constraints
…parameterizing the BNCs are represented by low bit-width fixed-point numbers. In contrast to previous work, we analyze the learning of these parameters using reduced-precision arithmetic only, which is important for computationally constrained platforms, e.g. embedded and ambient systems, as well as power-aware…
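The excerpt only states that the BNC parameters are stored as low bit-width fixed-point numbers and learned with reduced-precision arithmetic; the paper's actual learning procedure is not shown here. Below is a hedged sketch of the representation side: a toy naive-Bayes classifier (one of the simplest BNCs) whose log-parameters are quantized to an assumed 8-bit fixed-point format and then used with integer-only accumulation. `to_fixed_point`, the bit widths, and the toy data are all assumptions.

```python
import numpy as np

def to_fixed_point(values, int_bits=4, frac_bits=4):
    """Quantize real values to signed fixed-point with the given bit widths."""
    scale = 2 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))
    hi = 2 ** (int_bits + frac_bits - 1) - 1
    return np.clip(np.round(values * scale), lo, hi).astype(np.int32)

# toy naive-Bayes classifier over binary features (a very simple BNC)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))
y = (X[:, 0] | X[:, 1]).astype(int)

log_prior = np.log(np.bincount(y) / len(y))
# Laplace-smoothed P(feature = 1 | class)
p1 = np.array([(X[y == c].sum(0) + 1) / (np.sum(y == c) + 2) for c in (0, 1)])
log_p1, log_p0 = np.log(p1), np.log(1 - p1)

# reduced-precision copies of all parameters (8-bit fixed point here)
q_prior, q_p1, q_p0 = (to_fixed_point(v) for v in (log_prior, log_p1, log_p0))

def predict_fixed(x):
    """Score both classes using integer arithmetic only."""
    scores = q_prior + (q_p1 * x + q_p0 * (1 - x)).sum(axis=1)
    return int(np.argmax(scores))

accuracy = np.mean([predict_fixed(x) == t for x, t in zip(X, y)])
```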

evasive 发表于 2025-3-29 10:46:54

Predicting Unseen Labels Using Label Hierarchies in Large-Scale Multi-label Learning
…of learning underlying structures over labels is to project both instances and labels into the same space, where an instance and its relevant labels tend to have similar representations. In this paper, we present a novel method to learn a joint space of instances and labels by leveraging a hierarchy…
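The excerpt describes a joint space in which an instance sits close to its relevant labels, so even unseen labels can be ranked once they have an embedding (e.g. borrowed from the label hierarchy). The paper's training objective isn't shown, so the sketch below only illustrates the scoring mechanics with random embeddings; `W_instance`, `E_label`, and the dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_labels, dim = 20, 50, 8

# learned in practice; random here just to show the mechanics
W_instance = rng.normal(size=(n_features, dim))  # maps an instance into the joint space
E_label = rng.normal(size=(n_labels, dim))       # one embedding per label

def rank_labels(x):
    """Embed instance x and rank all labels by similarity in the joint space."""
    z = x @ W_instance                 # instance representation
    scores = E_label @ z               # dot-product relevance to every label
    return np.argsort(-scores)

x = rng.normal(size=n_features)
top5 = rank_labels(x)[:5]

# an unseen label could borrow its embedding from its hierarchy parents
parents = [3, 7]
e_unseen = E_label[parents].mean(axis=0)
```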

omnibus 发表于 2025-3-29 15:22:46

Regression with Linear Factored Functions
This paper introduces a novel regression algorithm that learns linear factored functions (LFF). This class of functions has structural properties that allow certain integrals to be solved analytically and point-wise products to be calculated. Applications like … and … can exploit these properties to break the curse of dimensionality and speed up computation.
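The excerpt says the function class permits analytic integrals and point-wise products. As a hedged sketch of why point-wise products stay inside such a factored class, here is a tiny `LFF` container whose components are per-dimension factor functions: the product of two LFFs multiplies weights and matching per-dimension factors. This is not the paper's learning algorithm; the class and names are illustrative.

```python
import numpy as np
from itertools import product

class LFF:
    """f(x) = sum_k w[k] * prod_d factors[k][d](x[d]) -- a factored function."""
    def __init__(self, weights, factors):
        self.w = np.asarray(weights, dtype=float)   # shape (K,)
        self.factors = factors                      # K lists of D callables

    def __call__(self, x):
        return sum(wk * np.prod([f(x[d]) for d, f in enumerate(fs)])
                   for wk, fs in zip(self.w, self.factors))

    def pointwise_product(self, other):
        """The product of two LFFs is again an LFF (K1*K2 components):
        weights multiply, and per-dimension factors multiply dimension-wise."""
        weights, factors = [], []
        for (wa, fa), (wb, fb) in product(zip(self.w, self.factors),
                                          zip(other.w, other.factors)):
            weights.append(wa * wb)
            factors.append([(lambda g, h: (lambda t: g(t) * h(t)))(g, h)
                            for g, h in zip(fa, fb)])
        return LFF(weights, factors)

# toy 2-D example: f(x) = 1.5 * sin(x0) * x1,  g(x) = 0.5 * cos(x0) * 1
f = LFF([1.5], [[np.sin, lambda t: t]])
g = LFF([0.5], [[np.cos, lambda t: 1.0]])
h = f.pointwise_product(g)           # h(x) == f(x) * g(x) for every x
x = np.array([0.3, 2.0])
assert np.isclose(h(x), f(x) * g(x))
```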

Platelet 发表于 2025-3-29 17:20:46

Ridge Regression, Hubness, and Zero-Shot Learning
…label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we…
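The excerpt advocates regressing label vectors into the example space and doing nearest-neighbor search there. A hedged sketch with scikit-learn's `Ridge` follows: fit a map from label-space vectors to example-space features, then classify a query by its nearest mapped label prototype. The dimensions, data, and helper `predict_class` are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, d_feat, d_label = 300, 30, 10

# training examples paired with their class-label embeddings
S_train = rng.normal(size=(n_train, d_label))            # label-space vectors
X_train = S_train @ rng.normal(size=(d_label, d_feat))   # example-space features
X_train += 0.1 * rng.normal(size=X_train.shape)

# map labels INTO the example space (the direction the excerpt argues for)
reg = Ridge(alpha=1.0).fit(S_train, X_train)

def predict_class(x, candidate_label_vectors):
    """Nearest-neighbor search happens in the example space, where hubs
    are reported to be less of a problem than in the label space."""
    prototypes = reg.predict(candidate_label_vectors)     # one point per class
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))

# zero-shot style query: 5 candidate classes unseen at training time
candidates = rng.normal(size=(5, d_label))
x_query = rng.normal(size=d_feat)                         # a query example
pred = predict_class(x_query, candidates)
```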

保留 发表于 2025-3-29 22:21:24

http://reply.papertrans.cn/63/6206/620513/620513_48.png

翻动 发表于 2025-3-30 01:18:01

Structured Regularizer for Neural Higher-Order Sequence Models
…sequence modelling. We show that this regularizer can be derived as a lower bound from a mixture of models sharing parts, e.g. neural sub-networks, and relate it to ensemble learning. Furthermore, it can be expressed explicitly as a regularization term in the training objective. We exemplify its effectiveness…
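The excerpt states the regularizer arises as a lower bound from a mixture of models that share parts. One hedged way to read that claim (not necessarily the paper's derivation) is via Jensen's inequality, which turns the log-likelihood of a mixture of shared-part sub-models into a weighted sum of sub-model log-likelihoods — an ensemble-style objective whose gap to the full model can be written out as an explicit regularization term:

```latex
% A hedged reading of the "lower bound from a mixture" claim; \pi_m are mixture
% weights over sub-models p_m that share parameters (e.g. neural sub-networks).
\log p(y \mid x) \;=\; \log \sum_{m} \pi_m \, p_m(y \mid x)
\;\ge\; \sum_{m} \pi_m \log p_m(y \mid x)
\quad \text{(Jensen's inequality)}
```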

FRAUD 发表于 2025-3-30 04:30:33

Versatile Decision Trees for Learning Over Multiple Contexts
…models can vary significantly when they are learned and deployed in different contexts with different data distributions. In the literature, this phenomenon is called dataset shift. In this paper, we address several important issues in the dataset shift problem. First, how can we automatically detect that…
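The fragment ends on the question of automatically detecting dataset shift; the paper's own detection mechanism is not visible here. As a hedged illustration of one common way to flag shift, the sketch below runs a per-feature two-sample Kolmogorov-Smirnov test between training-context and deployment-context data; `detect_shift`, the significance level, and the toy data are assumptions, not the paper's method.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(X_train, X_deploy, alpha=0.01):
    """Flag features whose marginal distribution differs significantly
    between the training context and the deployment context."""
    shifted = []
    for j in range(X_train.shape[1]):
        res = ks_2samp(X_train[:, j], X_deploy[:, j])
        if res.pvalue < alpha:
            shifted.append((j, res.statistic, res.pvalue))
    return shifted

# toy contexts: deployment shifts the mean of feature 0 only
rng = np.random.default_rng(0)
X_a = rng.normal(0, 1, size=(500, 4))
X_b = rng.normal(0, 1, size=(500, 4))
X_b[:, 0] += 1.5
print(detect_shift(X_a, X_b))   # expected to report feature 0
```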
Pages: 1 2 3 4 [5] 6 7
View full version: Titlebook: Machine Learning and Knowledge Discovery in Databases; European Conference, Annalisa Appice, Pedro Pereira Rodrigues, Alípio Jor… Conference p…