dc.description.abstract | The problem considered is non-preemptive stochastic single-machine scheduling to minimize the total weighted completion time of a set of N jobs. This is a classical problem in the scheduling literature, with many applications in operations research. In this thesis, we study algorithms and models in which the underlying probability distributions are unknown and processing times can only be assessed via samples. Specifically, the thesis asks what can be done when only a single sample is available.
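In standard scheduling notation (the symbols w_j and C_j for job weights and completion times are conventional and are not introduced in this abstract), the objective described above can be written as

\[
\min \; \mathbb{E}\Big[\sum_{j=1}^{N} w_j C_j\Big],
\]

where C_j denotes the (random) completion time of job j under the chosen non-preemptive schedule.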
When the expected processing times are known exactly, it has been known since the 1960s that it is optimal to schedule jobs in decreasing order of weight over expected processing time. For the setting with access to only one sample of each processing time, we define SWOPS, the algorithm that Schedules by Weights Over Processing Time Samples. This algorithm is the most intuitive candidate for a better-than-random algorithm.
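To make the ordering rule concrete, the following minimal sketch (in Python) illustrates SWOPS; the function name and job representation are our own illustration and are not taken from the thesis, and the samples are assumed to be strictly positive.

def swops_order(weights, samples):
    """Return job indices sorted by decreasing weight-over-sample ratio
    (the SWOPS order); 'weights' and 'samples' are equal-length lists of
    positive numbers, with one observed processing time sample per job."""
    jobs = range(len(weights))
    return sorted(jobs, key=lambda j: weights[j] / samples[j], reverse=True)

For example, weights [3, 1, 2] with samples [2, 1, 4] give ratios 1.5, 1.0 and 0.5, so the jobs would be scheduled in the order 0, 1, 2.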
We first formalize the concept of comparing algorithm performance. This allows us to formulate our main research question: in which scenarios does SWOPS perform at least as well as R, the uniformly random algorithm? We show by counterexample that, in general, SWOPS can perform arbitrarily worse than R, which demonstrates that our research question is not trivial. We then show that comparing the relative performance of algorithms can be reduced to the case where the number of jobs N equals 2. We also introduce a novel concept called the Relative Optimality Gap (ROG) of an algorithm. The ROG rescales the algorithm's expected cost to lie in the interval [0,1], where a ROG of 0 corresponds to an optimal algorithm and a ROG of 1 means the expected performance is the worst possible. We then prove that establishing an upper bound of 1/2 on the ROG of an algorithm A is equivalent to showing that A performs at least as well as R.
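One way to formalize a quantity with these properties, consistent with the description above but not necessarily the exact definition used in the thesis, is

\[
\mathrm{ROG}(A) \;=\; \frac{\mathbb{E}[\mathrm{cost}(A)] - \mathrm{OPT}}{\mathrm{WORST} - \mathrm{OPT}},
\]

where OPT and WORST denote the smallest and largest expected total weighted completion time over all fixed job orders. Under such a definition, the uniformly random algorithm R has a ROG of exactly 1/2 when N equals 2, since its expected cost is the average of the costs of the two possible orders, which is consistent with the stated equivalence.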
We apply these methods to several scenarios with different types of processing time distributions and show that SWOPS performs well in these cases. The first scenario is when the processing time distributions are symmetric; the second, when all processing time distributions are translated versions of the same underlying distribution; the third, when all processing time distributions are linearly scaled versions of the same underlying distribution. Lastly, we consider several special sub-cases of exponentially distributed processing times.
Finally, we discuss what perspective these findings give on the performance of SWOPS. We also speculate about the broader implications of our results, in terms of what types of critical failure modes might exist when insufficient samples are available. | |