Machine Learning Classical Spin Models
Summary
Machine learning has become increasingly popular as a computational tool across the sciences and in industry; however, its application as an additional tool in computational physics has only recently begun to gain traction. This thesis studies the application of supervised and unsupervised neural networks to problems in computational classical statistical physics, focusing on what the networks learn and whether they are a useful tool for these applications. Supervised neural networks are used to distinguish between the phases of four models: one with a second-order phase transition (PT), one with an infinite-order PT, one with both types of transition in close proximity to one another, and one without a PT but with frustration. Different variants of the restricted Boltzmann machine (RBM), a type of unsupervised neural network, are trained on the one- and two-dimensional Ising models. The networks are able to differentiate between the phases only if the PT is clearly discernible from the input configurations; they are too crude a tool to distinguish between two closely spaced PTs. It is concluded that applying neural networks to detect PTs in classical statistical physics models for which an intuition about the PT already exists holds no advantage over conventional computational methods. It is further concluded that restrictions placed on the RBM, such as enforcing translation invariance, still allow the restricted RBMs to learn the magnetisation very well, while two-spin correlations are learned less accurately. Surprisingly, block Gibbs sampling is better behaved for the restricted RBMs than for the unrestricted RBM, which can be explained by an analysis of the trained weight matrix.
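For reference, the standard conventions for the objects named above can be sketched as follows; the signs, unit conventions, and normalisations used in the thesis body may differ. The nearest-neighbour Ising Hamiltonian and the energy of an RBM with binary units \(v_i, h_j \in \{0,1\}\) read
\[
H(\{s\}) = -J \sum_{\langle i,j\rangle} s_i s_j , \qquad s_i \in \{-1,+1\},
\]
\[
E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j , \qquad p(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z},
\]
with block Gibbs sampling alternating between the conditional distributions
\[
p(h_j = 1 \mid \mathbf{v}) = \sigma\!\Big(b_j + \sum_i v_i W_{ij}\Big), \qquad
p(v_i = 1 \mid \mathbf{h}) = \sigma\!\Big(a_i + \sum_j W_{ij} h_j\Big),
\]
where \(\sigma(x) = 1/(1+e^{-x})\) is the logistic function.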