Abstract: This paper presents a framework, grounded in ideas from convex optimization and economics, for solving via index policies problems of optimal dynamic effort allocation to a discrete-state (finite or countable), binary-action (work/rest) semi-Markov restless bandit project, elucidating issues raised by previous work. Its contributions include: (i) the concept of a restless bandit's marginal productivity index (MPI), which characterizes optimal policies relative to general cost and work measures; (ii) the characterization of indexable restless bandits as those exhibiting diminishing marginal returns to work relative to a nested family of threshold policies; (iii) sufficient conditions for indexability via partial conservation laws (PCLs); (iv) the characterization of the MPI as an optimal marginal productivity rate relative to feasible active-state sets; (v) the application to semi-Markov bandits under several criteria, including a new mixed average-bias criterion; and (vi) PCL-indexability analyses and MPIs for the optimal service control of make-to-order/make-to-stock queues with convex holding costs, under discounted and average-bias criteria.
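
A rough sketch of contribution (iv), in notation assumed here for exposition rather than taken from the paper: let $f^S$ and $g^S$ denote the cost and work measures attained by the threshold-type policy that works exactly on the active-state set $S$. The marginal productivity rate of a state $i \notin S$ relative to $S$ can then be written as
\[
  \nu_i^S \;=\; \frac{f^{S} - f^{S \cup \{i\}}}{g^{S \cup \{i\}} - g^{S}},
\]
i.e., the marginal cost reduction per unit of additional work obtained by also working in state $i$. The MPI of state $i$ is then an optimal value of such rates over the feasible active-state sets $S$, which is consistent with the abstract's characterization of indexability as diminishing marginal returns to work along the nested family of threshold policies.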