dc.description.abstract | A common problem when studying many-body systems is accurately sampling the distribution function. In 2019, Noé et al. (Science, 365, 6457 (2019)) introduced a novel sampling method: Boltzmann Generators (BGs). Motivated by this, we study how well these generators perform in different settings. A BG is essentially an invertible neural network that learns a transformation from a simple distribution (Gaussian) to a complex one (Boltzmann), so that samples from the Boltzmann distribution can be generated directly. This neural network is constructed as a ``flow of transformations'', i.e. the invertible transformation is decomposed into a sequence of simpler invertible pieces. Because the BG is invertible, it can be trained in both directions: forwards and backwards. We have studied the generators in two different applications. First, we used the generators to sample an artificial distribution: a smiley face consisting of three separate Gaussian distributions. We found that the results generally improve when more backwards training is used, especially in the early stages of training. Second, we tested whether BGs can predict the effective colloid-colloid potential in a colloid-polymer mixture, and found that BGs are a viable way to extract this potential. We conclude that BGs are a strong approach for sampling the distribution function of many-body systems, and therefore have potential applications in many branches of physics. | |