信任 posted on 2025-3-23 12:52:23
http://reply.papertrans.cn/16/1530/152983/152983_11.png

tympanometry posted on 2025-3-23 16:24:25
The Complexity of Learning SUBSEQ(A)

…following inductive inference problem: given A(ε), A(0), A(1), A(00), …, learn, in the limit, a DFA for SUBSEQ(A). We consider this model of learning and the variants of it that are usually studied in inductive inference: anomalies, mind changes, and teams.

Hemiparesis posted on 2025-3-23 20:43:05
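A brief aside on the object in the SUBSEQ abstract above: SUBSEQ(L) is the set of all subsequences of strings in L, and by Higman's lemma it is always regular, hence recognizable by a DFA — which is what the learner must identify in the limit. A minimal Python sketch (function names are ours, not from the paper) of subsequence membership and the subsequence closure of a single word:

```python
# SUBSEQ(L): all strings obtainable by deleting symbols from some word in L.
# By Higman's lemma this language is always regular, so a DFA for it exists.

def is_subsequence(x, y):
    """True iff x arises from y by deleting zero or more symbols."""
    it = iter(y)
    return all(c in it for c in x)  # each symbol of x must appear, in order

def subseq_closure(word):
    """SUBSEQ({word}): the (finite) set of all subsequences of one word."""
    result = {""}
    for c in word:
        # every known subsequence may either skip c or append it
        result |= {s + c for s in result}
    return result

print(is_subsequence("01", "0110"))   # True
print(sorted(subseq_closure("01")))   # ['', '0', '01', '1']
```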
Mind Change Complexity of Inferring Unbounded Unions of Pattern Languages from Positive Data

…tive data with mind change bound between .. and ... We give a very tight bound on the mind change complexity based on the length of the constant segments and the size of the alphabet of the pattern languages. This is, to the authors’ knowledge, the first time a natural class of languages has been sho…

Myelin posted on 2025-3-24 01:44:14
http://reply.papertrans.cn/16/1530/152983/152983_14.png

temperate posted on 2025-3-24 02:23:17
Iterative Learning from Positive Data and Negative Counterexamples

…ture with a teacher (oracle) if it is a subset of the target language (and if it is not, then it receives a negative counterexample), and uses only limited long-term memory (incorporated in conjectures). Three variants of this model are compared: when a learner receives least negative counterexample…

假装是你 posted on 2025-3-24 10:04:08
http://reply.papertrans.cn/16/1530/152983/152983_16.png

蜈蚣 posted on 2025-3-24 11:56:45
Risk-Sensitive Online Learning

…he best trade-off between rewards and risk. Motivated by finance applications, we consider two common measures balancing returns and risk: the Sharpe ratio and the mean-variance criterion of Markowitz. We first provide negative results establishing the impossibility of no-regret algorithms under these measures, thus…

irritation posted on 2025-3-24 17:55:17
Leading Strategies in Competitive On-Line Prediction

…y prediction strategies admits a “leading prediction strategy”, which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates th…

GREEN posted on 2025-3-24 22:32:06
Solving Semi-infinite Linear Programs Using Boosting-Like Methods

…g. .=ℝ. In the finite case the constraints can be described by a matrix with . rows and . columns that can be used to directly solve the LP. In semi-infinite linear programs (SILPs) the constraints are often given in a functional form depending on . or implicitly defined, for instance by the outcome of another algorithm.

nettle posted on 2025-3-25 00:51:29
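The SILP abstract above concerns programs with infinitely many constraints. A standard way such programs are attacked, and boosting-like in spirit, is row generation: solve a restricted LP over a finite working set of constraints, search for a maximally violated constraint, add it, and repeat. A toy sketch under our own assumptions (one variable, constraint index t ∈ [0, 1], violation search by grid scan) — not the paper's algorithm:

```python
import math

# Toy SILP: minimize x subject to x >= f(t) for every t in [0, 1],
# with f(t) = sin(pi * t); the optimum is x* = max_t f(t) = 1.
def f(t):
    return math.sin(math.pi * t)

def solve_silp(grid_size=1000, tol=1e-9, max_iters=50):
    working = [0.0]  # finite working set of constraint indices
    x = f(0.0)
    for _ in range(max_iters):
        # restricted LP over the working set (trivial in one variable)
        x = max(f(t) for t in working)
        # row generation: scan a grid for the most violated constraint
        t_worst = max((i / grid_size for i in range(grid_size + 1)), key=f)
        if f(t_worst) <= x + tol:  # no constraint violated: optimal
            break
        working.append(t_worst)
    return x

print(solve_silp())  # ≈ 1.0
```

With more variables, each restricted problem would be an ordinary finite LP handed to a solver; only the violation search changes with the constraint family.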
http://reply.papertrans.cn/16/1530/152983/152983_20.png