Publications can also be found at
my Google Scholar Profile.
Some papers are downloadable from the
arXiv.
Abstract
We consider two
classes of exponential dispersion models of discrete
probability distributions which are defined by specifying their variance
functions in their mean value parameterization.
These classes were considered in Bar-Lev and Ridder (2021) as models
of overdispersed zero-inflated distributions.
In this paper we analyse the application of these classes to
fitting count data having overdispersed and zero-inflated statistics.
To this end, we first elaborate on the computational aspects
of the probability distributions, before
we consider the data fitting with our models.
We carry out an extensive comparison with other recently proposed
statistical models, on both real and simulated data sets.
Our findings are that our framework is a flexible tool that
gives excellent results in a wide range of cases;
in particular, our models perform best when the data
also show large skewness and kurtosis.
DOI: [10.1080/03610918.2021.1934020]
The large arcsine exponential dispersion model - properties and
applications to count data and insurance risk
Shaul K. Bar-Lev, Ad Ridder
MATHEMATICS Volume 10 Issue 19 Article 3715, 2022
Abstract
The large arcsine exponential dispersion model (LAEDM) is a class of
three-parameter distributions on the non-negative integers.
These distributions show the specific characteristics of being leptokurtic,
zero-inflated, overdispersed, and skewed to the right.
Therefore, these distributions are well suited to fit count data with these properties.
Furthermore, recent studies in actuarial sciences argue for the consideration
of such distributions in the computation of risk factors. In this paper,
we provide a thorough analysis of the LAEDM by deriving
(a) the mean value parameterization of the LAEDM;
(b) exact expressions for its probability mass function at n=0,1,…;
(c) a simple bound for these probabilities that is sharp for large n;
(d) a simulation algorithm for sampling from the LAEDM.
We have implemented the LAEDM for statistical modeling of various
real count data sets. We assess its fitting performance by comparing
it with the performances of traditional counting models.
We use a simulation algorithm for computing tail probabilities
of the aggregated claim size in an insurance risk model.
DOI: [10.3390/math10193715]
Abstract
In this study we are interested in rare-event probabilities in
Markovian queues with time-varying arrival rates (nonhomogeneous Poisson arrivals)
and time-varying service rates.
DOI: [10.1007/s11134-022-09763-w]
PDF: [nonstationary.pdf]
Abstract
In their fundamental paper on cubic variance functions (VFs), Letac and Mora
(The Annals of Statistics, 1990) presented a systematic, rigorous
and comprehensive study of natural exponential families (NEFs) on the real
line, their characterization through their VFs and mean value
parameterization. They presented a section that has, for some reason,
gone unnoticed. This section deals with the construction of VFs associated
with NEFs of counting distributions on the set of nonnegative integers and
allows one to find the corresponding generating measures. As EDMs are based on
NEFs, we introduce in this paper two
new classes of EDMs based on their results.
For these classes, which are associated with simple VFs, we derive
their mean value parameterization and their associated generating measures.
We also prove that they have some desirable properties.
Both classes are
shown to be overdispersed and zero-inflated in ascending order,
making them competitive statistical models for those in use in both
statistical and actuarial modeling. To the best of our knowledge,
the classes of counting distributions we present in this paper have not been
introduced or discussed before in the literature.
To show that our classes can serve as
competitive statistical models for those in use (e.g., Poisson, negative
binomial),
we include a numerical example with real data,
in which we compare the performance of
our classes with relevant competing models.
DOI: [10.1051/ps/2021001]
arXiv: [2003.13849]
Minimizing bed occupancy variance by scheduling
patients under uncertainty
Anne van den Broek d'Obrenan, Dennis Roubos, Ad Ridder, Leen Stougie
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH Volume 286 Issue 1 Pages 336-349, 2020
Abstract
In this paper we consider the problem of scheduling patients in allocated
surgery blocks in a Master Surgical Schedule.
We pay attention to both the available surgery blocks
and the bed occupancy in the hospital wards.
More specifically,
large probabilities of overtime in each surgery block are undesirable
and costly, while large fluctuations in the number of
used beds require extra buffer capacity
and make staff planning more challenging.
The stochastic nature of surgery durations and lengths of stay on a ward
hinders the use of classical techniques, and transforming the stochastic
problem into a deterministic one does not result in practically feasible solutions.
In this paper we develop a technique to solve the stochastic scheduling
problem, whose primary objective is to
minimize the variation in the necessary bed capacity, while maximizing the
number of patients operated on, minimizing the maximum waiting time,
and guaranteeing a small probability of overtime in surgery blocks.
The method starts with solving an
Integer Linear Programming (ILP) formulation of the problem,
and then
simulation and local search techniques are
applied to guarantee small probabilities of overtime and to improve
upon the ILP solution. Numerical experiments applied to a Dutch
hospital show promising results.
DOI: [10.1016/j.ejor.2020.03.026]
VU RESEARCH PORTAL [minbedoccupancy]
Abstract
In this paper we consider the problem of computing tail probabilities
of the distribution of a random sum of positive random variables.
We assume that the individual variables follow a reproducible natural
exponential family (NEF) distribution, and that the random number has
a NEF counting distribution with a cubic variance function.
This specific modelling is supported by data of the aggregated claim
distribution of an insurance company. Large tail probabilities are
important as they reflect the risk of large losses; however,
analytic or numerical expressions for them are not available.
We propose several simulation algorithms which are based on an
asymptotic analysis of the distribution of the counting variable and
on the reproducibility property of the claim distribution.
The aggregated sum is simulated efficiently by importance
sampling using an exponential change of measure.
We conclude with numerical experiments of these algorithms.
DOI: [10.5539/ijsp.v8n3p54]
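The exponential change of measure used here can be illustrated on a much simpler model than the NEF claim sizes of the paper. The following sketch (model, parameters, and function name are illustrative choices, not taken from the paper) estimates the small tail probability P(X_1 + ... + X_n > t) for i.i.d. Exp(1) summands by tilting each summand so that the mean of the tilted sum equals the threshold:

```python
import math
import random

def is_tail_estimate(n=10, t=25.0, samples=20000, seed=1):
    """Estimate P(X1 + ... + Xn > t) for iid Exp(1) summands by
    importance sampling with an exponential change of measure.
    Under the tilt theta < 1 each summand is Exp(1 - theta), and the
    likelihood ratio of a sample (x1, ..., xn) with sum s is
    exp(-theta * s) / (1 - theta)**n."""
    rng = random.Random(seed)
    theta = 1.0 - n / t          # tilts the mean of the sum to t
    est = 0.0
    for _ in range(samples):
        s = sum(rng.expovariate(1.0 - theta) for _ in range(n))
        if s > t:
            est += math.exp(-theta * s) / (1.0 - theta) ** n
    return est / samples
```

For n = 10 and t = 25 the true value is about 2.2e-4; plain Monte Carlo with the same budget would see only a handful of hits, while the tilted estimator concentrates its samples near the rare event.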
Abstract
Increased computer speed and memory have
encouraged simulation analysts to develop ever
more realistic stochastic models. Despite these advancements
in computing hardware, the most significant gains in the speed of
stochastic simulation are still the result of methodological
advances and clever algorithmic design.
Here we survey a few of the most important methods for variance
reduction and speedup that will benefit any simulation,
no matter what the capabilities of the computing hardware.
DOI: [10.1002/9781118445112.stat07975]
A simulation tool for truck loading at fuel filling plants
Ben Cohen, Bart Mateman, Ad Ridder
In PROCEEDINGS WINTER SIMULATION CONFERENCE 2017
(eds. W.K.V. Chan, A. D'Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page),
Las Vegas, NV, USA, 3-6 December 2017, Pages 3102-3113.
Abstract
The various processes of truck loading at a fuel filling plant
are interrelated which makes it complex to analyze and
improve the performance of the filling plants.
This paper describes these processes and presents
a software tool developed in FlexSim for modelling
the drivers' activities, analyzing different decision
scenarios, and optimizing the filling processes at fuel terminals.
The simulation model has been used to identify
bottlenecks and to evaluate opportunities for
performance improvement.
This paper reports the
application of the model to a lubricant
filling plant for analyzing and supporting decision making
on the assignment of trucks to bays,
and on the availability of the lubricants
on the fuel arms at the loading bays.
DOI: [10.1109/WSC.2017.8248030]
URL: [wsc17papers track117]
PDF: [proceedings paper]
An M-estimator for rare-event probability estimation
Zdravko Botev, Ad Ridder
In PROCEEDINGS WINTER SIMULATION CONFERENCE
(eds. T.M.K. Roeder, P.I. Frazier, R.Szechtman, E. Zhou, T. Huschka, and S.E. Chick),
Arlington, VA, USA, 11-14 December 2016, Pages 359-369.
Abstract
We describe a maximum-likelihood type estimator, or M-estimator,
for Monte Carlo estimation of rare-event probabilities.
In this method, we first sample from the zero-variance measure using
Markov Chain Monte Carlo (MCMC), and then given the simulated data,
we compute a maximum-likelihood-type estimator. We show that the resulting
M-estimator is consistent, and that it subsumes as a special case the well-known
fixed-effort splitting estimator. We give a numerical example of estimating
accurately the tail distribution of the sum of log-normal random variables
under a Gaussian copula. The numerical results suggest that for this
example the method is competitive.
DOI: [10.1109/WSC.2016.7822103]
URL: [wsc16papers track105]
PDF: [proceedings paper]
This paper received the Winter Simulation Conference Best Paper Award 2016.
Abstract
The Cross Entropy method is a well-known adaptive importance
sampling method for rare-event
probability estimation, which requires estimating an optimal
importance sampling density within
a parametric class. In this article we estimate an optimal
importance sampling density
within a wider semiparametric class of distributions.
We show that this semiparametric version
of the Cross Entropy method frequently yields efficient estimators.
We illustrate the excellent
practical performance of the method with numerical experiments
and show that for the problems
we consider it typically outperforms alternative schemes by
orders of magnitude.
DOI: [10.1017/jpr.2016.31]
[TI Discussion Paper]
Abstract
In this paper we describe a Sequential Importance Sampling (SIS) procedure for counting
the number of vertex covers in general graphs. The performance of SIS depends
heavily on how close the SIS proposal distribution is to a uniform one over a
suitably restricted set. The proposed algorithm introduces a probabilistic relaxation
technique that uses Dynamic Programming in order to efficiently estimate
this uniform distribution. The numerical experiments show that the scheme
compares favorably with other existing methods. In particular, the method is
compared with Cachet, an exact model counter, and with the state-of-the-art
SampleSearch, which is based on belief networks and importance sampling.
DOI: [10.1007/s11222-015-9546-9]
[TI Discussion Paper]
Tail distribution of the maximum of correlated Gaussian random variables
Zdravko Botev, Michel Mandjes, Ad Ridder
In PROCEEDINGS WINTER SIMULATION CONFERENCE
(eds. L. Yilmaz, W. K. V. Chan, I. Moon, T. M. K. Roeder, C. Macal, and M. D. Rossetti),
Huntington Beach, 6-9 December 2015, Pages 633 - 642.
Cited in Wikipedia lemma on
Multivariate normal distribution
Abstract
In this article we consider the efficient
estimation of the tail distribution of the maximum of
correlated normal random variables. We show that the
currently recommended Monte Carlo estimator has difficulties
in quantifying its precision, because its sample variance
estimator is an inefficient estimator of the true variance.
We propose a simple remedy: to still use this estimator,
but to rely on an alternative quantification of its precision.
In addition, we consider a
completely new sequential importance sampling estimator of the desired tail
probability. Numerical experiments suggest that the sequential
importance sampling estimator can be significantly more efficient than its
competitor.
DOI: [10.1109/WSC.2015.7408202]
PDF: [proceedings paper]
Optimal control for an M^X/G/1/N+1 queue with two service modes
Rein Nobel, Ad Ridder
In COMPUTATIONAL INTELLIGENCE, CYBER SECURITY AND COMPUTATIONAL MODELS
(Proceedings of ICC3, 2013),
Volume 246 of the series Advances in Intelligent Systems and Computing
(eds. Krishnan, G.S.S., Anitha, R., Lekshmi, R.S., Kumar, M.S., Bonato, A., Grana),
Springer 2014, Pages 47-59.
Abstract
A finite-buffer queueing model is considered with batch Poisson input and
controllable service rate. A batch that upon arrival does not fit in the
unoccupied places of the buffer is partially rejected. A decision to change
the service mode can be made at service completion epochs only, and
vacation (switch-over) times are involved in preparing the new mode.
During a switch-over time, service is disabled. For the control of this model,
three optimization criteria are considered: the average number of jobs
in the buffer, the fraction of lost jobs, and the fraction of batches
not fully accepted. Using Markov decision theory, the optimal switching
policy can be determined for any of these criteria by the value-iteration
algorithm. In the calculation of the expected one-step costs and the
transition probabilities, an essential role is played by the discrete
fast Fourier transform.
DOI: [10.1007/978-81-322-1680-3_6]
Counting the number of Sudoku's by importance sampling simulation
Ad Ridder
In PROCEEDINGS of the RECREATIONAL MATHEMATICS COLLOQUIUM III
(ed. Jorge Nuno Silva),
Ponta Delgada, 3-6 April 2013,
Associacao Ludus, Lisboa 2013, Pages 85 - 92.
Abstract
Stochastic simulation can be applied to estimate the number of
feasible solutions in a combinatorial problem. This idea will be
illustrated to count the number of possible Sudoku grids. It will
be argued why this becomes a rare-event simulation, and how
an importance sampling algorithm resolves this difficulty.
Download the paper [full (PDF)].
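The same idea of turning a counting problem into a sequence of randomized choices can be shown on a toy problem (my own example, far simpler than Sudoku): counting binary strings of length n with no two adjacent 1s, whose exact count is the Fibonacci number F(n+2). At each position we sample uniformly from the allowed bits and multiply the weight by the number of allowed choices; the expected weight is the exact count.

```python
import random

def sis_count_no_adjacent_ones(n=10, samples=50000, seed=7):
    """Sequential importance sampling estimate of the number of
    binary strings of length n with no two adjacent 1s.
    At each position the allowed bits form a set A_i; we pick
    uniformly from A_i and multiply the weight by |A_i|.
    The expected weight equals the exact count, Fibonacci(n+2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        weight, prev = 1.0, 0
        for _ in range(n):
            allowed = [0] if prev == 1 else [0, 1]
            prev = rng.choice(allowed)
            weight *= len(allowed)
        total += weight
    return total / samples
```

For n = 10 the exact count is F(12) = 144, and the SIS average should land close to it. The rare-event difficulty in the Sudoku application comes from the proposal being far from uniform over the feasible set, which this toy proposal sidesteps.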
Fast Sequential Monte Carlo Methods for Counting and Optimization
Reuven Y. Rubinstein, Ad Ridder, Radislav Vaisman
Wiley 2014, Wiley Series in Probability and Statistics
(ISBN: 978-1-118-61226-2)
Description
This book presents the first comprehensive account of fast sequential
Monte Carlo (SMC) methods for counting and optimization
at an exceptionally accessible level. Written by authorities
in the field, it places great emphasis on cross-entropy,
minimum cross-entropy, splitting, and stochastic enumeration.
The overall aim is to make SMC methods accessible to readers
who want to apply them in their future studies or work, and to
accentuate the unifying and novel mathematical ideas behind SMC.
Wiley Page
Amazon Page
Mathematical Review
Variance reduction techniques in Monte Carlo methods
J. Kleijnen, A. Ridder, R. Rubinstein
ENCYCLOPEDIA OF OPERATIONS RESEARCH & MANAGEMENT SCIENCE 3RD EDITION
(eds S. Gass and M. Fu), Springer-Verlag 2013, pages 1598 - 1610.
Abstract
An overview of variance reduction techniques is given, notably
common random numbers, antithetic variates, control variates, conditioning,
stratified sampling, importance sampling, splitting techniques, and
quasi-Monte Carlo sampling.
CentER [Discussion Paper 2010-117]
SSRN [ssrn.com/abstract=1715474]
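As a minimal illustration of one of these techniques (my own example, not from the encyclopedia entry), the sketch below applies antithetic variates to estimate the integral of exp(x) over [0,1], whose exact value is e - 1. Pairing each uniform draw u with 1 - u induces negative correlation because exp is monotone, which reduces the estimator's variance relative to plain Monte Carlo with the same number of function evaluations.

```python
import math
import random

def antithetic_mc(samples=10000, seed=3):
    """Estimate the integral of exp(x) on [0,1] (exact value e - 1)
    with antithetic variates: average each pair exp(u), exp(1-u)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        u = rng.random()
        total += 0.5 * (math.exp(u) + math.exp(1.0 - u))
    return total / samples
```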
Probabilistic bounded relative error for rare event simulation learning techniques
A. Ridder, B. Tuffin
In PROCEEDINGS WINTER SIMULATION CONFERENCE
(eds. C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A. M. Uhrmacher),
Berlin, 9-12 December 2012, Pages 387 - 398.
Abstract
In rare-event simulation, we look for estimators whose relative accuracy
remains controlled as the event of interest becomes rarer.
Different robustness properties of estimators have been defined in the literature.
However, these properties are not adapted to estimators coming from a parametric
family for which the optimal parameter is random due to a learning algorithm.
These estimators have random
accuracy. For this reason, we motivate in this paper the need to define
probabilistic robustness properties.
We especially focus on the so-called probabilistic bounded relative error property.
We additionally provide
sufficient conditions, both in general and Markov settings,
to satisfy such a property, and hope that it will
foster discussions and new works in the area.
DOI: [10.1109/WSC.2012.6465041]
PDF: [proceedings paper]
Abstract
We apply the splitting method to three well-known counting
problems, namely 3-SAT, random graphs with
prescribed degrees, and binary contingency tables.
We present an enhanced version of the splitting method
based on the capture-recapture technique,
and show by experiments the superiority of this technique
for SAT problems in terms of variance of the associated estimators,
and speed of the algorithms.
DOI: [10.1080/15326349.2012.699761]
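The capture-recapture idea used to enhance the splitting estimator can be illustrated in its classical population-size form (a standalone toy example, not the paper's SAT setting): mark a first sample, draw a second, and scale by the recapture fraction, giving the Lincoln-Petersen estimator N ≈ n1*n2/m.

```python
import random

def capture_recapture(pop_size=1000, n1=200, n2=200, seed=5):
    """Lincoln-Petersen capture-recapture estimate of a population
    size: mark n1 individuals, draw a second sample of n2 without
    replacement, count the m recaptures, and estimate N by n1*n2/m."""
    rng = random.Random(seed)
    marked = set(rng.sample(range(pop_size), n1))
    second = rng.sample(range(pop_size), n2)
    m = sum(1 for x in second if x in marked)
    return n1 * n2 / m
```

With the parameters above the expected number of recaptures is 40, so the estimate concentrates around the true population size of 1000.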
Abstract
We present a method to obtain state- and time-dependent importance
sampling estimators by repeatedly solving a minimum cross-entropy (MCE)
program as the simulation progresses. This MCE-based approach lends a
foundation to the natural notion to stop changing the measure when it is no
longer needed. We use this method to obtain a state- and time-dependent
estimator for the one-tailed probability of a light-tailed i.i.d. sum
that is logarithmically efficient in general and strongly efficient when the
jumps are Gaussian. We go on to construct an estimator for the two-tailed
problem which is shown to be similarly efficient.
We consider minor variants of the algorithm obtained via MCE, and
present some numerical comparisons between our algorithms and others
from the literature.
DOI: [10.1007/s10479-009-0611-7]
Abstract
A sequence of real numbers (x_n) is Benford if the significands,
i.e. the fraction parts in the floating-point representation of
(x_n), are distributed logarithmically. Similarly, a discrete-time
irreducible and aperiodic finite-state Markov chain with probability
transition matrix P and limiting matrix P* is Benford if every
component of both sequences of matrices (P^n - P*) and
(P^{n+1} - P^n) is Benford or eventually zero.
Using recent tools
that established Benford behavior both for Newton's method and for
finite-dimensional linear maps, via the classical theories of uniform
distribution modulo 1 and Perron-Frobenius, this paper derives a
simple sufficient condition (nonresonant) guaranteeing that P,
or the Markov chain associated with it, is Benford. This result in
turn is used to show that almost all Markov chains are Benford, in
the sense that if the transition probabilities are chosen independently
and continuously, then the resulting Markov chain is Benford with
probability one. Concrete examples illustrate the various cases that
arise, and the theory is complemented with several simulations and
potential applications.
DOI: [10.1137/100789890]
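A quick way to see Benford behavior concretely (an illustrative aside, not the paper's Markov-chain setting) is the sequence 2^n, whose leading digits are provably Benford because log10(2) is irrational. The check below compares the empirical leading-digit frequencies with the Benford probabilities log10(1 + 1/d).

```python
import math

def benford_deviation(n_terms=1000):
    """Compare the leading-digit frequencies of 2^n, n = 1..n_terms,
    with the Benford probabilities log10(1 + 1/d); return the maximum
    absolute deviation over the digits d = 1..9."""
    counts = [0] * 10
    for n in range(1, n_terms + 1):
        counts[int(str(2 ** n)[0])] += 1
    return max(abs(counts[d] / n_terms - math.log10(1 + 1 / d))
               for d in range(1, 10))
```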
Abstract
A version of the classical secretary problem is studied, in which one is
interested in selecting one of the
b best out of a group of n differently ranked persons who are presented
one by one in a random order. It is assumed
that b≥1 is a preassigned number. It has long been known
that for the optimal policy one needs to
compute b position thresholds, for instance via backwards induction.
In this paper we study approximate policies,
that use just a single or a double position threshold, albeit in conjunction
with a level rank. We give exact and
asymptotic (as n→∞) results, which show that the double-level policy is
an extremely accurate approximation.
DOI: [10.1017/S026996481000032X]
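For the classical case b = 1, the single-threshold policy is easy to simulate (a sketch with my own parameter choices, not the paper's double-level policy): skip roughly n/e candidates, then accept the first one better than everything seen so far. The success probability tends to 1/e ≈ 0.368.

```python
import math
import random

def secretary_success_rate(n=100, trials=20000, seed=11):
    """Monte Carlo success probability of the classical single-
    threshold secretary policy (b = 1): observe the first ~n/e
    candidates without accepting, then accept the first candidate
    better than all those seen; success means picking the best."""
    rng = random.Random(seed)
    r = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))     # rank n-1 is the best candidate
        rng.shuffle(ranks)
        best_seen = max(ranks[:r])
        chosen = next((x for x in ranks[r:] if x > best_seen), None)
        wins += (chosen == n - 1)
    return wins / trials
```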
Abstract
The correspondence between the cross-entropy method
and the zero-variance approximation to simulate
a rare event problem in Markov chains is shown.
This leads to a sufficient condition that the cross-entropy
estimator is asymptotically optimal.
arXiv: [arXiv:1003.1950]
DOI: [10.1016/j.procs.2010.04.176]
The cross-entropy method with patching for rare-event simulation of large Markov chains
B. Kaynar and A. Ridder
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH Volume 207 Issue 3 Pages 1380 - 1397, 2010
Abstract
There are various importance sampling schemes to estimate rare event
probabilities in Markovian systems such as Markovian reliability models and
Jackson networks. In this work, we present a general state dependent importance
sampling method which partitions the state space and applies the cross-entropy
method to each partition. We investigate two versions of our algorithm
and apply them to several examples of reliability and queueing models.
In all these examples we compare our method with other importance sampling schemes.
The performance of the importance sampling schemes is measured by the
relative error of the estimator and by the efficiency of the algorithm.
The results from the experiments show considerable improvements both in
the running time of the algorithm and in the variance of the estimator.
DOI: [10.1016/j.ejor.2010.07.002]
Importance sampling algorithms for the fork-join queue
A. Ridder
In PROCEEDINGS 6-TH ST. PETERSBURG WORKSHOP ON SIMULATION
(Eds. S.M. Ermakov, V.B. Melas, A.N. Pepelyshev). VVM com. Ltd.,
St. Petersburg, Pages. 791 - 796, 2009
Abstract
In this paper we consider a rare-event problem in the fork-join
queue for which we develop an efficient importance sampling algorithm.
Download the paper [full (PDF)].
Minimum cross-entropy methods for rare-event simulation
A. Ridder and R. Rubinstein
SIMULATION-TRANSACTIONS OF THE SOCIETY FOR MODELING AND SIMULATION INTERNATIONAL Volume 83 Issue 11 Pages 769-784, 2007
Abstract
In this paper we apply the minimum cross-entropy method (MinxEnt) for
estimating rare-event probabilities for the sum of i.i.d. random variables.
MinxEnt is an analogue of the Maximum Entropy Principle in the
sense that the objective is to minimize a relative (or cross) entropy
of a target density h from an unknown density f
under suitable constraints.
The main idea is to use the solution to this optimization program
as the simulation density in importance sampling.
We shall see that some existing importance sampling methods
can be cast in a MinxEnt program,
such as the large deviations approach for light tails and
the hazard rate twisting for heavy tails.
As an extension we shall consider a correlated version
of this hazard rate twisted solution, which gives
better simulation results. The sample generation
is based on a Gibbs sampler algorithm.
DOI: [10.1177/0037549707087713]
Proceedings version in
Proceedings Korea-Netherlands Joint Conference on
Queueing Theory and its Applications to Telecommunication Systems
Seoul, June 2005.
Abstract
We develop a methodology for studying "large deviations type" questions. Our
approach does not require that the large deviations principle holds, and is
thus applicable to a large class of systems.
We study a system of queues with exponential servers, which share an arrival
stream. Arrivals are routed to the (weighted) shortest queue.
It is not known whether the large deviations principle holds for this system.
Using the tools developed here
we derive large deviations type estimates for the most likely
behavior, the most likely path to overflow and the probability of overflow.
The analysis applies to any finite number of queues.
We show via a counterexample that this system may exhibit unexpected behavior.
DOI: [10.1007/s00186-005-0037-1]
An extended version is published as
Tinbergen Institute Discussion Paper TI 2005-016/4.
Abstract
This paper reports simulation experiments, applying the cross-entropy
method as an importance sampling algorithm for efficient estimation
of rare-event probabilities in Markovian reliability systems.
The method is compared to various failure biasing schemes
that have been proved to give estimators with bounded relative
errors. The results from the experiments indicate a considerable
improvement in the performance of the importance sampling
estimators, where performance is measured by the relative error
of the estimate, by the relative error of the estimator,
and by the gain of the importance sampling simulation over
normal simulation.
DOI: [10.1007/s10479-005-5727-9]
Abstract
A fluid approximation gives the main term in the asymptotic
expression of the value function for a controllable stochastic network.
Policies whose value functions have the same asymptotics
as the value function of the optimal policy are called
asymptotically optimal.
We consider the problem of finding, within this set of asymptotically
optimal policies, a best one, in the sense that the next term of its
asymptotic expression is minimal. The analysis of this problem is closely connected
with large deviation problems for a random walk.
DOI: [10.1214/aoap/1069786504]
JSTOR: [1193181].
A large deviations analysis of the transient of a queue with many Markov fluid inputs: approximations and fast simulation
M. Mandjes and A. Ridder
ACM TRANSACTIONS ON MODELING AND COMPUTER SIMULATION Volume 12 Pages 1 -26, 2002
Abstract
This paper analyzes the transient buffer content distribution of
a queue fed by a large number of Markov fluid sources. We characterize the
probability of overflow at time t, given the current buffer level and the
number of sources in the on-state.
After scaling buffer and bandwidth resources by the number of sources n,
we can apply large deviations techniques. The transient overflow probability
decays exponentially in n. In case of exponential
on/off sources we derive an expression for the decay rate of the
rare event probability under consideration. For general Markov fluid sources
we present a plausible conjecture. We also provide the `most likely path'
from the initial state to overflow (at time t).
Knowledge of the decay rate and the most likely path to overflow leads to
(i) approximations of the transient overflow probability, and (ii) efficient
simulation methods of the rare event of buffer overflow. The simulation
methods, based on importance sampling, give a huge speed-up compared to
straightforward simulations. The approximations are of low computational
complexity, and accurate, as verified by means of simulation experiments.
DOI: [10.1145/511442.511443].
Proceedings version in
Proceedings 2nd International Workshop on Rare Event
Simulation, Twente, March 1999.
Abstract
This paper deals with the transient behavior of the Erlang loss model.
After scaling both arrival rate and number of trunks, an asymptotic
analysis of the blocking probability is given. Apart from that, the
most likely path to blocking is given.
Compared to results of Shwartz and Weiss, more explicit
expressions are obtained by using probabilistic arguments.
The computation method is applied to the problem of (real-time)
dimensioning of virtual paths in ATM networks,
and to the problem of integrating scheduled and switched
connections in a single network.
DOI: [10.1016/S0166-5316(00)00050-X].
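While the paper concerns transient asymptotics, the stationary blocking probability of the Erlang loss model is itself computed with a well-known stable recursion, sketched here for reference (a standard companion computation, not a result of the paper):

```python
def erlang_b(trunks, offered_load):
    """Blocking probability in the Erlang loss model M/M/c/c,
    computed with the standard stable recursion
    B(0, a) = 1,  B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a))."""
    b = 1.0
    for c in range(1, trunks + 1):
        b = offered_load * b / (c + offered_load * b)
    return b
```

For example, 10 trunks offered a load of 5 Erlangs block about 1.8% of calls; the recursion avoids the overflow problems of evaluating the closed-form ratio of factorial terms directly.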
Abstract
This paper describes a discrete-time retrial queue and shows
how importance sampling simulations can be applied for
estimating the probability of large orbit content and the
overflow fraction of primary calls.
Download the paper: [full (PDF)].
An analytic model for capacity planning of prisons in the Netherlands
R. Korporaal, R. Dekker, A. Ridder, P. Kloprogge
JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY Volume 51 Issue 11 Pages 1228-1237, 2000
Abstract
In this paper we describe a decision support system developed to help in
assessing the need for various types of prison cells. In particular we predict
the probability that a criminal has to be sent home because of a shortage of
cells. The problem is modelled through a queueing network with blocking
after service. The main objective of our study is to describe
our analytical method and an approximate algorithm to solve this
network. Through simulation studies we evaluate our method.
Both the analytic and the simulation tool are elements of
the decision support system.
DOI: [10.1057/palgrave.jors.2601021]
This paper received the
Goodeve Medal (2001) for being the best paper published
in any of the Operational Research Society's journals in 2000.
Abstract
We analyse the deviant behavior of a queue fed by
a large number of traffic streams, which models an ATM switch.
In particular, we explicitly give the
most likely trajectory (or optimal path) to buffer overflow, by applying
large deviations techniques. This is done for a broad class
of sources,
consisting of Markov fluid sources and periodic sources.
Apart from a number of ramifications of this result, we present
guidelines for the numerical evaluation of the optimal path.
DOI: [10.1023/A:1019154129708].
Abstract
Intuition may lead to the hypothesis that in stochastic
inventory systems a higher demand variability results in
larger variances and in an increase of total expected system costs.
In a recent paper, Song (1994) formally proved this
assertion to hold for a certain class of
inventory models (including the Newsboy Problem),
given a particular definition of variability.
Here we use stochastic dominance relations
in the Newsboy Problem to characterize
demand distributions for which the opposite effect may occur,
i.e., higher demand variability may result in
larger variances and lower costs.
In addition, we provide necessary and sufficient conditions under
which larger demand variances and lower costs occur
simultaneously.
DOI: [10.1287/opre.46.6.934]
JSTOR: [222946].
An extended version is published as
Management Report Series 265 Erasmus University Rotterdam, 1996.
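The Newsboy Problem referred to above has a classical closed-form solution, the critical-fractile rule: order F^{-1}(cu / (cu + co)), where cu and co are the per-unit underage and overage costs. The sketch below (function and parameter names are mine) computes the optimal order quantity for a given inverse demand CDF.

```python
def newsvendor_optimal_q(price, cost, demand_cdf_inv):
    """Newsboy (newsvendor) critical-fractile rule: the optimal
    order quantity is F^{-1}(cu / (cu + co)), where the underage
    cost is cu = price - cost and the overage cost is co = cost."""
    cu = price - cost
    co = cost
    return demand_cdf_inv(cu / (cu + co))

# Uniform(0, 100) demand, selling price 3, unit cost 1:
# the critical fractile is 2/3, so q* = 100 * 2/3.
q = newsvendor_optimal_q(3.0, 1.0, lambda p: 100.0 * p)
```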
Fast simulation of discrete time queues with Markov modulated batch arrivals and batch departures
A. Ridder
AEU-INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATIONS Volume 52 Issue 3 Pages 127 - 132, 1998
Abstract
In this paper we consider a slotted queueing
model with finite buffers and correlated input.
Arrivals are generated
by different sources and offered in batches at the
end of each slot. The batch sizes depend on the states
of the sources which change their states according to
Markov schemes.
When the buffer reaches
its limit, the excess is lost.
At the end of a slot a random batch is removed from the buffer.
We prove a large deviations result for the loss
probability in this model, and
apply importance sampling for estimation of this probability.
Download the paper: [full (PS)].
Abstract
In this paper we analyse an (s, Q) inventory model in which used
products can be remanufactured to new ones. We develop two
approximations for the average costs and compare their performance
with that of an approximation suggested by Muckstadt and Isaac.
Next we extend the model with the option to dispose of returned products,
and present a heuristic optimisation procedure which is checked
against full enumeration.
DOI: [10.1016/0925-5273(95)00020-8].
Proceedings version in
Proceedings 8th International Working Seminar on
Production Economics , Vol. 3, p. 19-23, 1994.
Abstract
In this study we treat two simple examples
that illustrate quality guarantees for
communication systems, namely a small probability
of corruption of transmitted messages and a small probability
of loss of parts of messages.
These probabilities can be estimated, for instance,
by running simulations.
Because we typically speak here of probabilities of order 10^{-6} and
smaller, it is desirable
to apply variance reduction techniques.
In the examples we show how
importance sampling
speeds up the simulations. To find an optimal new probability measure,
limit results from the theory of large deviations
are used.
Abstract
In this paper we study continuous flow finite buffer systems
with input rates modulated by Markov chains.
Discrete event simulations are applied for estimating
loss probabilities. The simulations are executed under
a twisted version of the original probability measure
(importance sampling).
We present a simple rule for determining a new measure,
then show that the new measure matches the `most likely'
empirical measure that we expect from large deviations
arguments, and finally prove optimality of the new measure.
DOI: [10.1017/s002190020010021x]
JSTOR: [3215359].
Proceedings version in
Proceedings B-ISDN teletraffic modelling
Symposium, Antwerpen, February 1995.
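The twisted-measure (exponentially tilted) importance sampling idea that recurs in these abstracts can be sketched on a toy rare-event problem. The example below is illustrative only and is not taken from the paper; the binomial tail event, the sample size, and the tilt q = a/n are assumptions chosen so that the large-deviations mean shift can be checked against the exact probability.

```python
import random

def is_estimate_tail(n=100, p=0.2, a=50, samples=50_000, seed=1):
    """Importance-sampling estimate of P(S >= a), S ~ Binomial(n, p),
    sampling under a twisted measure: Bernoulli trials with success
    probability q = a/n, the mean shift suggested by large deviations."""
    rng = random.Random(seed)
    q = a / n
    total = 0.0
    for _ in range(samples):
        s = sum(rng.random() < q for _ in range(n))
        if s >= a:
            # likelihood ratio dP/dQ of this sample path
            total += (p / q) ** s * ((1 - p) / (1 - q)) ** (n - s)
    return total / samples
```

Under the original measure the event S >= 50 has probability of order 10^-11, far beyond the reach of naive simulation, while the tilted estimator attains a few percent relative error with modest effort.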
Abstract
In this paper we study weak stochastic ordering on
multidimensional lattices.
The main objective of our study is to derive
comparison of multidimensional marginal distributions
of two Markov chains.
We are able to prove this when the
matrices of transition probabilities satisfy
monotonicity and comparability properties.
DOI: [10.1016/0167-6377(95)00045-3].
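The monotonicity property of transition matrices used in this abstract can be checked mechanically. The sketch below tests the standard notion of stochastic monotonicity for a one-dimensional state space (each row stochastically dominates its predecessor); it is a generic illustration under that assumption, not code from the paper.

```python
def is_stochastically_monotone(P, tol=1e-12):
    """Check whether transition matrix P (list of rows) is stochastically
    monotone: for every threshold k, the tail sum sum_{j>=k} P[i][j]
    must be nondecreasing in the row index i."""
    n = len(P)
    for i in range(n - 1):
        for k in range(n):
            if sum(P[i][k:]) > sum(P[i + 1][k:]) + tol:
                return False
    return True
```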
Abstract
This paper addresses characteristics of finite-buffer
Markov-modulated fluid processes, particularly
those related to their deviant behavior.
Our aim in this paper is to find rough asymptotics for the
probability of a loss cycle.
Apart from that, we derive some properties of the fluid
process in case of the buffer contents reaching a high level
(a process we call the conjugate of the original process).
Our main goal is to obtain practicable methods to find
the rate matrix of this conjugate process.
For this purpose we use large deviations techniques, but we consider the
governing eigensystem, as well, and we discuss the relation
between these two approaches.
We extend the analysis to the multiple source case.
Finally, we use the obtained results in simulation.
We examine variance reduction by importance sampling in a multiple source
example. The new statistical law of the fluid process
is based on the conjugate rate matrices.
DOI: [10.1017/S0269964800003879].
Admission control and routing in ATM networks using inferences from measured buffer occupancy
C. Courcoubetis, G. Kesidis, A. Ridder, J. Walrand, R. Weber
IEEE TRANSACTIONS ON COMMUNICATIONS Volume 43 Issue 2-4 Pages 1778-1784, 1995
Abstract
This paper addresses the issue of call acceptance and routing in ATM networks.
The goal is to design an algorithm that guarantees bounds on the fraction
of cells lost by a call. The method proposed for call acceptance and
routing does not require models describing the traffic. Each switch
estimates the additional fraction of cells that would be lost if new
calls were routed through the switch. The routing algorithm uses these
estimates. The estimates are obtained by monitoring the switch operations
and extrapolating to the situation where more calls are routed through
the switch. The extrapolation is justified by a scaling property.
To reduce the variance of the estimates, the switches calculate the
cell loss that would occur with virtual buffers. A way to choose the
sizes of the virtual buffers in order to minimize the variance is discussed.
Thus, the switches constantly estimate their spare capacity.
Simulations were performed using Markov fluid sources to test
the validity of the approach.
DOI: [10.1109/26.380228].
Preprint available.
On (r,Q) inventory systems with truncated normal lead times and Poisson demands
A. Ridder
In OPERATIONS RESEARCH PROCEEDINGS Volume 22 (eds. H. Dyckhoff, U. Derigs, M. Salomon
and H.C. Tijms) Springer-Verlag, Berlin, Pages 226-231, 1994.
Abstract
In this note we shall deal with a continuous review (r,Q) inventory
control system.
Demands for a single product item occur at epochs
generated by a Poisson process, and the replenishment lead time has a
truncated normal distribution.
We shall derive expressions for the demand probabilities during
lead times based on exact expressions for tail moments
of the standard normal distribution.
The optimization problem of finding an optimal reorder point r and an
optimal order quantity Q under service level constraints
is solved numerically.
Download the paper [full (PDF)].
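The demand-during-lead-time distribution described in this abstract can be approximated numerically by mixing the Poisson probabilities over the truncated normal lead time. The sketch below uses plain trapezoidal integration rather than the paper's exact tail-moment expressions; the parameter values and grid size are illustrative assumptions.

```python
import math

def demand_probability(n, lam, mu, sigma, grid=2000):
    """Approximate P(D = n), the probability of n Poisson(lam) demands
    during a lead time L with a normal(mu, sigma) distribution
    truncated to L >= 0, by numerical integration."""
    # normalizing constant of the truncation: P(N(mu, sigma) >= 0)
    norm = 0.5 * math.erfc(-mu / (sigma * math.sqrt(2)))
    # integrate the Poisson pmf against the truncated-normal density
    upper = mu + 8 * sigma
    h = upper / grid
    total = 0.0
    for i in range(grid + 1):
        t = i * h
        dens = math.exp(-0.5 * ((t - mu) / sigma) ** 2) \
            / (sigma * math.sqrt(2 * math.pi) * norm)
        pois = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
        w = 0.5 if i in (0, grid) else 1.0  # trapezoid weights
        total += w * dens * pois * h
    return total
```

A basic consistency check is that the probabilities sum to one and that the mean of D equals lam times the expected lead time.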
Abstract
Markov modulated fluid models are studied in this paper. When the input of the
fluid model is represented by one Markov chain, two approaches are given that result
in asymptotic expressions for the overflow probability. Both approaches are
based on large deviations theory. The equivalence of the expressions is proved.
When the input is represented by N similar Markov chains, a reduction property is derived.
DOI: [10.1017/S0269964800002722].
Adaptive control of admissions and routing in an ATM network
C. Courcoubetis, G. Kesidis, A. Ridder, J. Walrand, R. Weber
In LECTURE NOTES IN CONTROL AND INFORMATION SCIENCES (Springer-Verlag, Berlin) Volume 184 Pages 121-126, 1992
Abstract
A separable closed queueing network is decomposed into two subnetworks.
The rates of the servers in one of the subnetworks are controllable in order to maximize
the throughput of the other one. This problem of optimal flow control is transformed
to a linear programming problem. The optimal solution of the linear program yields the
structure of the optimal control. Conditions for the network are given to guarantee
the applicability of this approach.
DOI: [10.1109/9.21103].
Approximating sensitive queueing networks by reversible Markov chains
A. Hordijk, A. Ridder
In COMPUTER PERFORMANCE AND RELIABILITY (eds. G. Iazeolla, P.J. Courtois and O.J. Boxma).
Proceedings of the Second International MCPR Workshop, Rome, Italy, 25-29 May, 1987.
Elsevier Science, North-Holland, Amsterdam, Pages 105-117, 1988
Abstract
A general method will be presented to approximate networks of queues for which
the stationary distribution is not of the standard product form.
The approximation is made through reversible Markov chains.
This means that the obtained bounds for the stationary distribution are insensitive
with respect to service time distributions.
The results are derived by using the tree-form solution of the
stationary distribution of a Markov chain.
Download the paper [full (PDF)].
Abstract
A general method is developed to compute easy bounds of the
weighted stationary probabilities for networks of queues which do not
satisfy the standard product form. The bounds are obtained by constructing
approximating reversible Markov chains. Thus, the bounds are
insensitive with respect to service-time distributions.
A special representation, called the tree-form solution, of the
stationary distribution is used to derive the bounds.
The results are applied to an overflow model.
DOI: [10.2307/3214229],
JSTOR: [3214229].
Abstract
In this paper we define functional monotonicity and functional dominance for nonnegative
matrices and apply these properties to the blocks of a partitioned stochastic matrix.
In this way we obtain stochastic ordering of conditional steady-state distributions
of a Markov chain. The method will be applied to an overflow queueing model for
showing that an upper bound of the blocking probabilities is valid for general
service time distributions at one of the facilities.
DOI: [10.1080/15326348808807085].
Stochastic inequalities for queues
A. Ridder
Ph. D. Thesis, Rijksuniversiteit Leiden, 1987
Abstract
A general method to obtain insensitive upper and lower bounds for the
stationary distribution of queueing networks is sketched. It is applied to an
overflow model. The bounds are shown to be valid for service distributions
with decreasing failure rate. A characterization of phase-type distributions
with decreasing failure rate is given. An approximation method is proposed.
The methods are illustrated with numerical results.
DOI: [10.2307/3214100]
JSTOR: [3214100].
Stochastic inequalities for queueing networks
A. Hordijk and A. Ridder
In TELETRAFFIC ANALYSIS AND COMPUTER PERFORMANCE ANALYSIS
(eds. O.J. Boxma, J.W. Cohen and H.C. Tijms)
Elsevier Science, North-Holland, Amsterdam, Pages 489-497, 1986
Abstract
This paper summarizes some techniques for comparing stochastic processes.
We focus on (multi-dimensional) birth-death processes because in that case
conditions to establish monotonicity and stochastic inequalities become
very simple. The conditions concern only the transition rates.
A queueing system in computer communication is worked out to illustrate
the ideas.
Download the paper [full (PDF)].