| Title | Algorithmic Learning Theory |
| Conference | 10th International Conference |
| Editors | Osamu Watanabe, Takashi Yokomori |
| Video | http://file.papertrans.cn/153/152974/152974.mp4 |
| Series | Lecture Notes in Computer Science |
| Note | Includes supplementary material |
| Type | Conference proceedings 1999 |
| 1 | Front Matter |

| 2 | Tailoring Representations to Different Requirements | Katharina Morik |

Abstract
Designing the representation languages for the input and output of a learning algorithm is the hardest task within machine learning applications. Transforming the given representation of observations into a well-suited language L_E may ease learning such that a simple and efficient learning algorithm can solve the learning problem. Learnability is defined with respect to the representation of the output of learning, L_H. If predictive accuracy is the only criterion for the success of learning, the choice of L_H means finding the hypothesis space with the most easily learnable concepts that contains the solution. Additional criteria for the success of learning, such as comprehensibility and embeddedness, may call for transformations of L_H such that users can easily interpret and other systems can easily exploit the learning results. Designing a language L_H that is optimal with respect to all the criteria is too difficult a task. Instead, we design families of representations, where each family member is well suited for a particular set of requirements, and implement transformations between the representations. In this paper, we discuss a representation family of Horn logic. Work on tail
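
The abstract's central move, transforming the observation language L_E so that a simpler learner suffices, can be pictured concretely. Below is a minimal Python sketch, under assumptions not taken from the paper (a toy attribute-value dataset and illustrative predicate names), of mapping attribute-value examples into ground Horn facts, the kind of representation a Horn-logic family would work over.

```python
# Hypothetical sketch: transforming an attribute-value example language L_E
# into ground Horn facts so that a relational (ILP-style) learner can be
# applied. The predicate names and the dataset are illustrative assumptions,
# not taken from the paper.

def to_horn_facts(example_id, attributes):
    """Map one attribute-value observation to a list of ground facts."""
    facts = [f"example({example_id})."]
    for attribute, value in attributes.items():
        # Each attribute-value pair becomes one binary ground atom.
        facts.append(f"{attribute}({example_id}, {value}).")
    return facts

observations = {
    "e1": {"shape": "square", "color": "red"},
    "e2": {"shape": "circle", "color": "blue"},
}

for ex_id, attrs in observations.items():
    for fact in to_horn_facts(ex_id, attrs):
        print(fact)
```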

| 3 | Theoretical Views of Boosting and Applications | Robert E. Schapire |

Abstract
Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting including analyses of AdaBoost’s training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for multiclass classification problems. Some empirical work and applications are also described.
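
As a bare-bones reading of the algorithm the talk surveys, here is a minimal AdaBoost sketch with decision stumps. It assumes numpy; the toy dataset and round count are illustrative, and this is not the paper's own code.

```python
# Minimal AdaBoost sketch with decision-stump weak learners.
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    """Predict +1/-1 by thresholding a single feature."""
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def adaboost_train(X, y, n_rounds=10):
    """Return a list of (weight, stump) pairs; y must be +1/-1 labels."""
    n = len(y)
    D = np.full(n, 1.0 / n)          # distribution over training examples
    ensemble = []
    for _ in range(n_rounds):
        # Exhaustively find the stump with lowest weighted training error.
        best = None
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1.0, -1.0):
                    pred = stump_predict(X, f, thr, pol)
                    err = np.sum(D[pred != y])
                    if best is None or err < best[0]:
                        best = (err, (f, thr, pol), pred)
        err, stump, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        # Reweight: misclassified examples get more mass next round.
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()
        ensemble.append((alpha, stump))
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    total = sum(a * stump_predict(X, *s) for a, s in ensemble)
    return np.sign(total)

# Tiny usage example on a separable toy set.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
model = adaboost_train(X, y, n_rounds=5)
print(adaboost_predict(model, X))   # expect [ 1.  1. -1. -1.]
```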

| 4 | Extended Stochastic Complexity and Minimax Relative Loss Analysis | Kenji Yamanishi |

Abstract
We are concerned with the problem of sequential prediction using a given hypothesis class of continuously many prediction strategies. An effective performance measure is the minimax relative cumulative loss (RCL), which is the minimum over prediction algorithms of the worst-case difference between the cumulative loss of the algorithm and that of the best assignment in the given hypothesis class. The purpose of this paper is to evaluate the minimax RCL for general continuous hypothesis classes under general losses. We first derive asymptotic upper and lower bounds on the minimax RCL, showing that they match (k/(2λ)) ln m within an error of o(ln m), where k is the dimension of the parameters for the hypothesis class, m is the sample size, and λ is a constant depending on the loss function. We thereby show that the cumulative loss attaining the minimax RCL asymptotically coincides with the extended stochastic complexity (ESC), which is an extension of Rissanen's stochastic complexity (SC) into the decision-theoretic scenario. We further derive non-asymptotic upper bounds on the minimax RCL for both parametric and nonparametric hypothesis classes. We apply the analysis to the regression problem to derive t
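
Transcribing the abstract's prose into formulas, under assumed notation not taken from the paper itself (prediction algorithm A, loss L, parametric class {f_θ : θ ∈ Θ ⊆ R^k}, sample size m, loss-dependent constant λ), the minimax RCL and its asymptotic evaluation read roughly as:

```latex
% Sketch of the quantities described in the abstract; the symbols
% A, L, \theta, k, m, \lambda are assumed notation, not the paper's own.
\[
  \mathrm{RCL}(m)
    \;=\; \min_{A}\,\max_{(x^m,\,y^m)}
      \Bigl[\, \sum_{t=1}^{m} L\bigl(y_t, A(x^t)\bigr)
        \;-\; \min_{\theta \in \Theta} \sum_{t=1}^{m} L\bigl(y_t, f_\theta(x_t)\bigr) \Bigr]
\]
\[
  \mathrm{RCL}(m) \;=\; \frac{k}{2\lambda}\,\ln m \;+\; o(\ln m)
\]
```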