Prostaglandins posted on 2025-3-25 10:46:07

Introduction and Organization of the Book
In this treatise we deal with optimization problems whose objective functions show a sequential structure and hence are amenable to sequential methods. The corresponding field mostly goes by the name dynamic programming; other names are … and …. In order to avoid possible confusion with programming in computer science, we speak of dynamic optimization.

规章 posted on 2025-3-25 16:37:29

Examples of Deterministic Dynamic Programs
In this chapter we explicitly solve the following: optimal routing of a freighter, a production-inventory problem with linear costs, allocation and linear-quadratic problems, and a scheduling problem. Then we discuss some further models: DPs with random length of periods and with random termination.
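To make the backward-induction scheme behind such examples concrete, here is a minimal Python sketch for a toy allocation problem: split a budget of B units over N stages, earning an assumed concave stage reward sqrt(a) for allocating a units. The horizon, budget and reward function are illustrative assumptions, not data from the book.

# Backward induction for a toy resource-allocation DP.
# V[n][s] = maximal reward obtainable from stage n onward with s units left.
import math

N, B = 3, 5                              # illustrative horizon and budget

V = [[0.0] * (B + 1) for _ in range(N + 1)]
policy = [[0] * (B + 1) for _ in range(N)]

for n in range(N - 1, -1, -1):           # backward over stages
    for s in range(B + 1):               # over remaining budget
        best_val, best_a = -math.inf, 0
        for a in range(s + 1):           # feasible allocations in stage n
            val = math.sqrt(a) + V[n + 1][s - a]
            if val > best_val:
                best_val, best_a = val, a
        V[n][s] = best_val
        policy[n][s] = best_a

# Recover an optimal allocation sequence by a forward pass through the policy
s, plan = B, []
for n in range(N):
    a = policy[n][s]
    plan.append(a)
    s -= a
print("optimal value:", round(V[0][B], 3), "allocation plan:", plan)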

萤火虫 posted on 2025-3-25 23:26:33

Absorbing Dynamic Programs and Acyclic Networks
We study the problem of maximizing the sum of discounted rewards, earned not over a fixed number of periods but until the decision process enters a given absorbing set. The basic theorem for absorbing DPs is derived. Moreover, we show how absorbing DPs can be used to find cost-minimal subpaths in acyclic networks.
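As an illustration of the acyclic-network application, the following Python sketch computes cost-minimal paths by backward induction over a topological order, treating the terminal node as the absorbing set. The small example graph is an assumption made for demonstration, not one taken from the book.

# Cost-minimal paths in an acyclic network via backward induction:
#   V(t) = 0,   V(i) = min over arcs (i, j) of [ c(i, j) + V(j) ].
import math

edges = {                      # node -> list of (successor, arc cost)
    "s": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("t", 6.0)],
    "b": [("t", 2.0)],
    "t": [],                   # absorbing terminal node
}
order = ["s", "a", "b", "t"]   # a topological order of the nodes

V = {"t": 0.0}                 # minimal cost-to-go from each node
best_succ = {}
for i in reversed(order[:-1]):
    V[i] = math.inf
    for j, c in edges[i]:
        if c + V[j] < V[i]:
            V[i], best_succ[i] = c + V[j], j

# Unroll the cost-minimal path from the source node
path, node = ["s"], "s"
while node != "t":
    node = best_succ[node]
    path.append(node)
print(V["s"], path)            # 5.0, ['s', 'a', 'b', 't']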

OFF posted on 2025-3-26 03:47:12

Concavity and Convexity of the Value Functions
Here we deal with questions of concavity and convexity of the value functions, assuming that the functions under consideration are defined on convex sets or on a non-degenerate discrete interval.

BLANK posted on 2025-3-26 15:16:12

Control Models with Disturbances
In this chapter we introduce control models with finite and i.i.d. disturbances. We prove the reward iteration and derive the basic solution techniques: value iteration and the optimality criterion.
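A minimal Python sketch of finite-horizon value iteration for such a control model is given below; the inventory-style transition function, reward function and disturbance distribution are illustrative assumptions, not the book's model.

# Finite-horizon value iteration for a toy control model with i.i.d.
# disturbances: next state x' = T(x, a, z), one-stage reward r(x, a, z),
# terminal value V_0 = 0.
states = range(0, 6)                 # inventory levels 0..5
actions = lambda x: range(0, 6 - x)  # feasible orders keep inventory <= 5
disturbances = [(0, 0.2), (1, 0.5), (2, 0.3)]  # (demand z, probability)

def T(x, a, z):                      # next state: inventory balance, clipped at 0
    return max(x + a - z, 0)

def r(x, a, z):                      # sales revenue minus ordering cost
    return 2.0 * min(x + a, z) - 1.0 * a

def value_iteration(N):
    V = {x: 0.0 for x in states}     # terminal value function V_0 = 0
    for _ in range(N):               # one backward step per period
        V_new, policy = {}, {}
        for x in states:
            best = max(
                (sum(p * (r(x, a, z) + V[T(x, a, z)]) for z, p in disturbances), a)
                for a in actions(x)
            )
            V_new[x], policy[x] = best
        V = V_new
    return V, policy                 # N-stage value function, first-stage policy

V, policy = value_iteration(N=4)
print(V[0], policy[0])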

易改变 posted on 2025-3-26 17:28:36

Markovian Decision Processes with Finite Transition Law
First we introduce MDPs with finite state spaces, prove the reward iteration and derive the basic solution techniques: value iteration and the optimality criterion. Then MDPs with finite transition law are considered; there the set of reachable states is finite.
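The following Python sketch illustrates N-stage value iteration and the greedy policy obtained from the optimality criterion for a small MDP with finite state space; the two-state transition law, rewards and discount factor are invented for illustration and do not come from the book.

# N-stage value iteration for a small MDP with finite state and action sets.
P = {   # transition law: P[x][a] = {next state: probability}
    0: {"stay": {0: 0.9, 1: 0.1}, "move": {0: 0.2, 1: 0.8}},
    1: {"stay": {1: 1.0},         "move": {0: 0.7, 1: 0.3}},
}
r = {   # one-stage rewards r[x][a]
    0: {"stay": 1.0, "move": 0.0},
    1: {"stay": 2.0, "move": 0.5},
}
beta = 0.9                      # assumed discount factor

def value_iteration(N):
    V = {x: 0.0 for x in P}     # start from V_0 = 0
    for _ in range(N):          # one backward step per period
        V = {
            x: max(r[x][a] + beta * sum(p * V[y] for y, p in P[x][a].items())
                   for a in P[x])
            for x in P
        }
    return V

def greedy_policy(V):           # action attaining the maximum in the optimality criterion
    return {
        x: max(P[x], key=lambda a: r[x][a]
               + beta * sum(p * V[y] for y, p in P[x][a].items()))
        for x in P
    }

V_N = value_iteration(N=20)
print(V_N, greedy_policy(V_N))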
View full version: Titlebook: Dynamic Optimization: Deterministic and Stochastic Models; Karl Hinderer, Ulrich Rieder, Michael Stieglitz; Textbook, 2016, Springer International Publishing AG