Performance of Sampling Methods on Deterministic Bayesian Networks
The performance of the recently introduced prune sampling algorithm is characterised for various types of Bayesian networks (BNs) and compared to conventional sampling methods such as Gibbs sampling, backward sampling, likelihood weighting, and SampleSearch. A procedure is devised to estimate the performance of sampling methods in the limit of infinite simulation time, extrapolated from relatively short simulations. This approach is used in an experimental analysis comparing the accuracy, rate of convergence, and time consumption of the sampling methods. It is shown that Markov chains created by prune sampling always converge to the desired posterior distribution, even for networks where conventional Gibbs sampling fails. In addition, it is demonstrated that prune sampling outperforms Gibbs sampling – arguably the most widely used MCMC inference technique – for at least one class of BNs. However, this advantage comes at a price: in the first implementation of prune sampling, the procedure that assigns an initial configuration to a BN is rather time-intensive. A solution to mitigate this drawback is implemented and reviewed. In total, 72 experiments are conducted on 12 different BNs. Our conclusion is that, although devised specifically to deal with determinism, prune sampling falls short of expectations, particularly in its accuracy on the deterministic class of Grid BNs. Prune sampling is consistently fast on all types of small BNs, but on these networks its accuracy is seriously inadequate. On all other BNs, prune sampling shows serious shortcomings in all three performance indicators. Hence, the overall conclusion is that prune sampling is not competitive with established (MCMC) sampling methods.
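The failure of Gibbs sampling under determinism mentioned above can be illustrated with a minimal sketch. The toy network below is a hypothetical example (not one of the 12 BNs from the study): two binary variables with a deterministic constraint B = A. Because a single-site Gibbs sampler must pass through a zero-probability state to move between the two legal configurations, the chain is reducible and never converges to the true distribution.

```python
import random

# Toy deterministic BN: A ~ Bernoulli(0.5), B = A (deterministic CPT).
# The joint puts mass 0.5 on (0,0) and 0.5 on (1,1); the states (0,1)
# and (1,0) have probability zero.

def joint(a, b):
    return 0.5 if a == b else 0.0

def gibbs(n_steps, seed=0):
    rng = random.Random(seed)
    a, b = 0, 0                      # a legal initial configuration
    visits = {(0, 0): 0, (1, 1): 0}
    for _ in range(n_steps):
        # Resample A given B: P(A=1 | B=b) is proportional to joint(1, b).
        p0, p1 = joint(0, b), joint(1, b)
        a = 1 if rng.random() < p1 / (p0 + p1) else 0
        # Resample B given A, analogously.
        p0, p1 = joint(a, 0), joint(a, 1)
        b = 1 if rng.random() < p1 / (p0 + p1) else 0
        visits[(a, b)] = visits.get((a, b), 0) + 1
    return visits

visits = gibbs(10_000)
# The chain never leaves (0,0): a single-variable flip always lands in a
# zero-probability state, so the Gibbs estimate of P(A=1) is 0 instead
# of the true 0.5.
print(visits)  # → {(0, 0): 10000, (1, 1): 0}
```

Prune sampling, by contrast, is designed so that its Markov chains remain irreducible on such deterministic networks, which is the convergence property the experiments confirm.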