Interval uncertainty propagation by a parallel Bayesian global optimization method

Chao Dang, Pengfei Wei, Matthias G.R. Faes, Marcos A. Valdebenito, Michael Beer

Research output: Contribution to journal › Article › peer-review

10 Scopus citations


This paper is concerned with approximating the scalar response of a complex computational model subjected to multiple interval input variables. Such a task is formulated as finding both the global minimum and the global maximum of a computationally expensive black-box function over a prescribed hyper-rectangle. On this basis, a novel non-intrusive method, called 'triple-engine parallel Bayesian global optimization', is proposed. The method begins by assuming a Gaussian process prior (which can also be interpreted as a surrogate model) over the response function. The main contribution lies in developing a novel infill sampling criterion, termed the triple-engine pseudo expected improvement strategy, which identifies multiple promising points for minimization and/or maximization from the past observations at each iteration. These identified points can then be evaluated on the real response function in parallel. A further benefit is that both the lower and upper bounds of the model response are obtained with a single run of the developed method. Four numerical examples of varying complexity are investigated to compare the proposed method against several existing techniques, and the results indicate that significant computational savings can be achieved by making full use of prior knowledge and parallel computing.
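To make the setting concrete, the sketch below runs a plain Bayesian global optimization loop with a NumPy-based Gaussian process and the standard expected improvement (EI) acquisition, alternating between a minimization-oriented and a maximization-oriented EI so that both response bounds come out of one run. This is a simplified stand-in, not the paper's triple-engine pseudo expected improvement criterion, and the 1-D test function `model` is purely illustrative of an expensive black-box response over an interval.

```python
import numpy as np
from math import erf

# Illustrative stand-in for the expensive model response; interval x in [0, 2].
def model(x):
    return np.sin(3.0 * x) + 0.5 * x

def rbf_kernel(A, B, ls=0.3):
    # Squared-exponential kernel on 1-D inputs.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and std at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def norm_cdf(z):
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def expected_improvement(mu, sd, best, maximize=False):
    # EI below the current minimum, or above the current maximum.
    imp = (mu - best) if maximize else (best - mu)
    z = imp / sd
    return imp * norm_cdf(z) + sd * norm_pdf(z)

# BGO loop: each iteration picks one infill point for the lower bound and
# one for the upper bound; both could be evaluated in parallel in practice.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, 4)       # initial design
y = model(X)
grid = np.linspace(0.0, 2.0, 400)  # candidate pool over the interval
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    x_min = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    x_max = grid[np.argmax(expected_improvement(mu, sd, y.max(), maximize=True))]
    X = np.concatenate([X, [x_min, x_max]])
    y = np.concatenate([y, [model(x_min), model(x_max)]])

lower, upper = y.min(), y.max()    # estimated response interval
print(lower, upper)
```

On this smooth test function, the loop closes in on the response interval (roughly [-0.23, 1.28] for this `model`) with a few dozen evaluations; the paper's pseudo-EI engines additionally discourage clustered infill points so that several points per engine can be proposed at once.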

Original language: English
Pages (from-to): 220-235
Number of pages: 16
Journal: Applied Mathematical Modelling
State: Published - Aug 2022
Externally published: Yes


  • Bayesian global optimization
  • Gaussian process
  • Infill sampling criterion
  • Interval uncertainty propagation
  • Parallel computing


