Some decomposition methods for revenue management

William L. Cooper, Tito Homem-de-Mello

Research output: Contribution to journal › Article › peer-review

21 Scopus citations


Working within a Markov decision process (MDP) framework, we study revenue management policies that combine aspects of mathematical programming approaches and pure MDP methods by decomposing the problem by time, state, or both. The "time decomposition" policies employ heuristics early in the booking horizon and switch to a more-detailed decision rule closer to the time of departure. We present a family of formulations that yield such policies and discuss versions of the formulation that have appeared in the literature. Subsequently, we describe sampling-based stochastic optimization methods for solving a particular case of the formulation. Numerical results for two-leg problems suggest that the policies perform well. By viewing the MDP as a large stochastic program, we derive some structural properties of two-leg problems. We show that these properties cannot, in general, be extended to larger networks. For such larger networks we also present a "state-space decomposition" approach that partitions the network problem into two-leg subproblems, each of which is solved. The solutions of these subproblems are then recombined to obtain a booking policy for the network problem.
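To make the "time decomposition" idea concrete, here is a minimal sketch in a toy single-leg setting (the paper itself treats two-leg networks). The capacity, fares, arrival probabilities, protection level, and switch time below are all hypothetical choices for illustration, not values from the paper: an exact MDP value function is computed by backward induction, and the booking policy uses a crude static heuristic early in the horizon, switching to the detailed MDP accept/reject rule closer to departure.

```python
# Toy single-leg model (illustrative; the paper's setting is two-leg networks).
T = 20                          # booking periods, t = 0..T-1; departure at T
C = 10                          # seat capacity
fares = [0, 100, 250]           # class 0 = no arrival in the period
probs = [0.4, 0.45, 0.15]       # per-period arrival probabilities

# Exact MDP value function via backward induction:
# V[t][x] = expected revenue-to-go with x seats left at period t.
V = [[0.0] * (C + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for x in range(C + 1):
        v = 0.0
        for f, p in zip(fares, probs):
            accept = f + V[t + 1][x - 1] if (f > 0 and x > 0) else float("-inf")
            reject = V[t + 1][x]
            v += p * max(accept, reject)
        V[t][x] = v

def heuristic_accept(t, x, fare):
    # Crude static rule: protect a fixed number of seats for the high fare.
    protection = 3  # assumed protection level, chosen for illustration
    return fare == 250 or x > protection

def dp_accept(t, x, fare):
    # Detailed MDP rule: accept iff fare covers the marginal seat value.
    return x > 0 and fare >= V[t + 1][x] - V[t + 1][x - 1]

def time_decomposed_accept(t, x, fare, t_switch=12):
    # "Time decomposition": heuristic early, exact MDP rule near departure.
    rule = heuristic_accept if t < t_switch else dp_accept
    return x > 0 and fare > 0 and rule(t, x, fare)
```

The switch time `t_switch` is the design parameter: earlier switching gives a policy closer to the pure MDP solution at higher computational cost, while later switching leans more heavily on the heuristic.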

Original language: English
Pages (from-to): 332-353
Number of pages: 22
Journal: Transportation Science
Issue number: 3
State: Published - Aug 2007
Externally published: Yes


Keywords

  • Markov decision processes
  • Network revenue management
  • Stochastic optimization
  • Yield management


