The course focuses on the design of numerical methods for molecular dynamics simulation, in both its deterministic and stochastic forms. The primary source of material is my book with Charlie Matthews, Molecular Dynamics, supplemented by the articles mentioned below. Copies of the book will be available onsite. The course is divided into four lectures. The lectures will be given using a data projector. Slides will be posted here as soon as they are available. It is recommended to obtain the free software MD.M (which works with MATLAB and Octave) and to perform some of the demo simulations on small systems, especially if this is your first exposure to molecular dynamics.
Lecture 1. Molecular dynamics models: goals and purposes, numerical methods by splitting, analysis using the Baker-Campbell-Hausdorff expansion, and examples.
Materials for Lecture 1. Chapters 1-3 of Molecular Dynamics.
Slides for Lecture 1 [uploaded 3.4.2017].
Lecture 2. Ensembles for molecular simulation (microcanonical, canonical), stochastic differential equations (Brownian/Langevin dynamics), numerical methods for Langevin dynamics (especially splitting methods), error analysis, examples in biomolecular dynamics.
Materials for Lecture 2. Chapters 6-7 of Molecular Dynamics, ref. [1].
Slides for Lecture 2 [uploaded 3.4.2017].
Lecture 3. The timestep problem in MD, holonomic constraints, SHAKE and RATTLE discretization, stochastic schemes, geodesic integration, isokinetic MD, examples.
Materials for Lecture 3. Chapters 4, 6 of Molecular Dynamics, ref. [2,3].
Slides for Lecture 3 [uploaded 5.4.2017].
Lecture 4. Extended systems, thermostats, Nosé dynamics, pairwise adaptive schemes and shear flows, and an application in Bayesian inference.
Materials for Lecture 4. Chapter 8 of Molecular Dynamics, ref. [4,5].
Benedict Leimkuhler
Studies
PhD 1988, MS 1986, University of Illinois
BS 1983, Purdue University
Career History
2006-  Chair of Applied Mathematics, University of Edinburgh
2000-2006  Professor of Applied Mathematics, University of Leicester
1996-1999  Associate Professor of Mathematics, University of Kansas
1990-1996  Assistant Professor of Mathematics, University of Kansas
1988-1990  Researcher, Helsinki University of Technology
1986  Researcher, Lawrence Livermore National Laboratory
1983-1988  Research Assistant, University of Illinois
Research Interests
My research is in the broad area of computational modelling of dynamical systems, with systems ranging in scale from the motion of atoms and molecules to celestial mechanics. This involves the development of appropriate numerical methods to solve the system of equations driving the dynamics. It is desirable to develop approximation schemes that preserve important qualitative features, so-called geometric integrators, and this has been the focus of much of my work. Of particular importance in this regard are the symplectic integrators for Hamiltonian systems, which preserve the symplectic structure of phase space and have superior stability properties, particularly for long-time computations.
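As a small illustration of this property (the oscillator, parameter values and function names below are mine, purely for demonstration), the Störmer-Verlet method applied to a harmonic oscillator keeps the energy error bounded over very long runs, rather than letting it drift:

```python
# A minimal sketch of a symplectic method: the Stormer-Verlet (leapfrog)
# scheme for a harmonic oscillator with H(q, p) = p^2/2 + q^2/2 and unit
# mass. The hallmark of geometric integrators is that the energy error
# oscillates but stays bounded over long simulations.
def verlet(q, p, force, dt, n_steps):
    f = force(q)
    energies = []
    for _ in range(n_steps):
        p += 0.5 * dt * f          # half kick
        q += dt * p                # full drift (unit mass)
        f = force(q)
        p += 0.5 * dt * f          # half kick
        energies.append(0.5 * p * p + 0.5 * q * q)
    return q, p, energies

# Harmonic potential U(q) = q^2/2, so the force is -q.
q, p, energies = verlet(1.0, 0.0, lambda x: -x, dt=0.1, n_steps=100000)
# The initial energy is 0.5; the deviation remains small for all 100000 steps.
print(max(abs(e - 0.5) for e in energies))
```

A non-symplectic method such as explicit Euler, run on the same problem, shows a systematic energy growth instead.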
While many of my articles are related to Hamiltonian systems and the development of geometry-preserving integration methods, I have lately been more focussed on stochastic differential equations. Results in this direction have been obtained on both the formulation and the numerical solution of SDE models for thermodynamic modelling, including proving the ergodicity of degenerate diffusion techniques and studying the perturbation of dynamics by stochastic methods. Most recently, I have focused on the design of Langevin dynamics integration strategies, including the construction of superconvergent Langevin dynamics methods for invariant measures relevant to molecular dynamics.
From 2009 to 2014 I was heavily involved with the EPSRC (Science and Innovation) Centre for Numerical Algorithms and Intelligent Software (NAIS), which links Edinburgh, Heriot-Watt and Strathclyde Universities. I was also part of the ExTASY Project (Extensible Toolkit for Advanced Sampling and analYsis), co-funded by the UK's EPSRC and the US NSF to develop advanced methods for the study of biological molecular energy surfaces.
In recent years I have been working more and more in the area of data science. I was involved with setting up the Alan Turing Institute and I am currently a Faculty Fellow of the ATI. This means that I spend part of my time at the new Alan Turing Institute headquarters in the British Library. The directions in data science that I am exploring include the use of stochastic differential equations for Bayesian parameterization of complicated distributions for data analysis. This work has potentially high impact in neural network training and applications in "big data". Much of my work on molecular simulation can be applied in data science through this approach.
I currently hold a new EPSRC grant in Data-Driven Coarse-Graining using Space-Time Diffusion Maps. This is an exciting project that bridges the data science and molecular modelling worlds. I also have an ERC-funded collaboration in Biological Modelling with V. Danos in Informatics.
Associations, Positions, Roles
2016-  Research Director, School of Mathematics
2016-  Co-Director, Maxwell Institute for Mathematical Sciences
2016-  Faculty Fellow, Alan Turing Institute
2015-2016  Science Committee, Alan Turing Institute
2014-15  IMA Leslie Fox Prize Adjudication Committee
2014  Fellowships subpanel, Royal Society of Edinburgh
2014  Fellow of the Royal Society of Edinburgh (FRSE)
2012  Fellow of the Institute of Mathematics and its Applications (FIMA)
2012-2014  Co-Director, Maxwell Institute for Mathematical Sciences
2012  Senior Research Fellow, Dutch Science Foundation 
2011  JT Oden Fellowship, University of Texas 
2009  Member, Steering Committee of the Centre for Numerical Algorithms (NAIS) 
2009-11  Director, NAIS
2009  SIAM Dahlquist Prize Selection Committee
2008-11  Deputy Director, Maxwell Institute for Mathematical Sciences
2007-2015  Member, Board of the International Centre for Mathematical Sciences (ICMS)
2007  Member, Programme Committee, ICMS
2004-5  Leverhulme Trust Research Fellow
2004-5  Visiting Researcher, Institute for Mathematics and Its Applications, Minneapolis
1998  Member, Mathematical Sciences Research Institute, Berkeley
1996-7  Visiting Researcher, Cambridge University
Editorial Boards
2014-  Proceedings of the Royal Society A
2013-  European Journal of Applied Mathematics
2012-  Journal of Computational Dynamics (AIMS)
2008-  IMA Journal of Numerical Analysis
2009-13  Nonlinearity
2009-13  SIAM Journal on Numerical Analysis
2001-7  SIAM Journal on Scientific Computing
Grants and Projects
2016-  EPSRC International Centre for Mathematical Sciences (co-I)
2016-  EPSRC Data-Driven Coarse-Graining using Space-Time Diffusion Maps (PI)
2014  Adaptive Collective Variables: Automatic Identification and Application of Multiresolution Modelling (co-I)
2014  Adaptive QM/MM simulations (co-I)
2014  Highly efficient time-domain quantum chemistry algorithms (co-I)
2013-2016  NSF-EPSRC SI2-CHE: ExTASY Extensible Tools for Advanced Sampling and analYsis (Edinburgh PI)
2013-2018  ERC Advanced Grant in Rule-Based Modelling for Biology (co-I)
2009-2014  EPSRC (S&I)/SFC Numerical Algorithms and Intelligent Software for the Evolving HPC Platform (co-I)
2008  2010  EPSRC Network (Bath, Bristol, Edinburgh, Warwick) “Mathematical Challenges of Molecular Dynamics” 
2004-2007  Australian RC Geometric Integration (co-I)
2004  US NIH Algorithms for Macromolecular Modelling (co-I)
2004-2005  EPSRC Algorithms for Macromolecular Modelling (PI)
2004-2007  EPSRC Developing an Efficient Method for Locating Periodic Orbits (co-I)
2003-2005  SRIF Advanced Computing Facility (HPC) (co-I)
2002-2004  Australian RC Geometric Numerical Integration (co-I)
2001-2004  EPSRC A Mixed Atomistic and Continuum Model for Crossing Multiple Length and Time Scales (co-I)
2001-2004  EPSRC Geometric Integrators for Switched and Multiple Timescale Dynamics (PI)
2000  2004  EU Research Training Network MASIE (Mechanics and Symmetry in Europe) 
1994-99  Multiple grants awarded whilst at the University of Kansas, mostly by the US National Science Foundation
Conferences and Workshops [examples]
Principal Organiser of the inaugural conference in 1994 of the series on Algorithms for Macromolecular Modelling (AM3) in Lawrence, Kansas, followed by service on the Organising Committee for subsequent meetings in the series (Berlin, 1997; New York City, 2000; Leicester, 2004, as Principal Organiser; Austin, 2009)
Organising Committee, LMS Durham Symposium, 2000
Co-organiser, Advanced Integration Methods for Molecular Dynamics, CECAM, Lyon, 2000
Scientific Committee, Prestissimo/DFG Conference on Molecular Simulation, Inst. Henri Poincaré, Paris, 2004
Co-organiser, Workshop on Astrophysical N-body Problems, Inst. for Pure and Applied Mathematics, UCLA, 2005
Co-organiser, The Interplay between Mathematical Theory and Applications, Newton Institute, 2007
Co-organiser, NSF-NAIS Workshop on Intelligent Software, Edinburgh, 2009
Co-organiser, Capstone Conference, EPSRC Conference on Challenges in Scientific Computing, Warwick, 2009
Principal Organiser, Multiscale Molecular Modelling, Edinburgh, 2010
Principal Organiser, State-of-the-art Algorithms for Molecular Dynamics, Edinburgh, 2012
Co-organiser, Complex Molecular Systems, Lorentz Center, Leiden, 2012
Organiser, Multiscale Computational Methods in Materials Modelling, Edinburgh, 2014
Co-organiser, three Alan Turing Institute "scoping workshops", 2015-16
Co-organiser, Stochastic numerical algorithms, multiscale modeling and high-dimensional data analytics, ICERM/Brown University, 2016
Co-organiser, Trends and advances in Monte Carlo sampling algorithms, SAMSI/Duke University, 2017
Books
Molecular Dynamics (Springer, 2015) is one of the first mathematical books on the subject.
Simulating Hamiltonian Dynamics (Cambridge University Press, 2005), co-authored with S. Reich (Potsdam), is an introduction to the subject of Geometric Integration for undergraduate and graduate students in mathematics and cognate disciplines.
Please see the separate "books" page for more details on these.
Research Articles
Most of my articles are uploaded to the arXiv. For complete, up-to-date lists of publications, please refer to my papers page or see my page in Google Scholar.
Molecular Dynamics
B. Leimkuhler and C. Matthews, Springer, Berlin, 2015. Link to Publisher's Website: Springer-Verlag

This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple timestepping, and the dissipative particle dynamics method.  
Contents 1. Introduction; 2. Numerical integrators; 3. Analyzing geometric integrators; 4. The stability threshold; 5. Phase space distributions and microcanonical averages; 6. The canonical distribution and stochastic differential equations; 7. Numerical methods for stochastic differential equations; 8. Extended variable methods. 

Reviews

Simulating Hamiltonian Dynamics
B. Leimkuhler and S. Reich, Cambridge University Press, 2005. Link to Publisher's Website: Cambridge University Press

Geometric integrators are timestepping methods, designed such that they exactly satisfy conservation laws, symmetries or symplectic properties of a system of differential equations. In this book the authors outline the principles of geometric integration and demonstrate how they can be applied to provide efficient numerical methods for simulating conservative models. Beginning from basic principles and continuing with discussions regarding the advantageous properties of such schemes, the book introduces methods for the N-body problem, systems with holonomic constraints, and rigid bodies. More advanced topics treated include high-order and variable-stepsize methods, schemes for treating problems involving multiple timescales, and applications to molecular dynamics and partial differential equations. The emphasis is on providing a unified theoretical framework as well as a practical guide for users. The inclusion of examples, background material and exercises enhances the usefulness of the book for self-instruction or as a text for a graduate course on the subject. • Thorough treatment of a relatively new subject, covers theory, applications and also gives practical advice on implementing the techniques • Emphasis on 'efficient' numerical methods • Large number of examples and exercises

Contents 1. Introduction; 2. Numerical methods; 3. Hamiltonian mechanics; 4. Geometric integrators; 5. The modified equations; 6. Higher order methods; 7. Constrained mechanical systems; 8. Rigid body dynamics; 9. Adaptive geometric integrators; 10. Highly oscillatory problems; 11. Molecular dynamics; 12. Hamiltonian PDEs.

Reviews

My group's research is in computational applied mathematics, including numerical analysis, mathematical modelling and software. We are interested in taking on challenging science or engineering problems and identifying efficient computational strategies to address them. We work with applications scientists in many different areas, as well as computer scientists, high performance computing specialists, and industry. We use methods from analysis (especially numerical analysis), probability, and statistics in our work, and we incorporate and rely on many concepts from statistical physics and theoretical chemistry. We also develop our own industrial-strength software for various applications, an increasingly demanding part of our work.
Mathematical modelling used to be about writing down a system of differential or algebraic equations. Today, it's as much about computing. A mathematical modeller needs to consider the "scalability" of algorithms with the dimension of the system (or level of resolution). Modern computing hardware comes in many different varieties (distributed parallel systems, cloud computers, graphics processing units...) and it is crucial to have a perspective on the development of computers to ensure that one's work remains relevant. Equations are important but the algorithm is just as important.
Moore's law is a curious empirical relationship between time and computing power. For years we have had this systematic doubling of computing power (measured by almost any metric) every 18 months or so. Originally the cause was the increasing miniaturization of the components on a chip, as measured by the 'chip density'. But chip densities have pretty much plateaued (there is a limit to how many transistors you can pack on a chip). For some time raw processor performance has kept increasing due to innovations in chip structure (multicore, superscalar processing, and multithreading), but it does seem like it might all be coming to an end now. What many people don't realize is that even as computer performance has steadily increased, algorithms have had their own Moore's law; in fact algorithm performance grew at a much faster rate! (See figure for an illustration in one context in numerical linear algebra, pointed out by D. Womble; others are similar.) Fortunately there's no end in sight for that one!
Our society is generating crazy amounts of new data all the time, whether from observations and sensors or from scientific experiment or through computation. It is therefore, increasingly, crucial to integrate data directly into modelling, or to find models for large data sets. Since statistics is the modelling of data, we have to develop bridges between applied/computational mathematics and statistics. More and more, our models incorporate data, from observation and experiment, and increasingly from simulation. You can use one model and algorithm to generate data which then gets processed to synthesise levels of understanding that are then used in subsequent modelling stages. In fact, it's models all the way down.
About data and modelling. There is a great quote attributed to John von Neumann: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." It raises a very interesting question: what is a model? If I have enough free parameters (deep neural nets have many, many parameters), I can get almost any sort of system to represent almost any specific situation. The really important question is what underpins the model. If it is a very good model it will be based on physical laws, or on empirical laws that reflect hard-earned wisdom, and then there is a good chance that it will "generalise" to be relevant for lots of different systems. Think about what a challenge it is to 'learn' to model Newtonian dynamics naively starting from data for the solar system. Herbert Simon's BACON program used rules for constructing algebraic propositions, candidates for simple 'laws', and he showed that you can discover the inverse square law by a systematic trial-and-error algorithm. Moreover, if you start with the general form of the power law, you can learn the power using a neural network, as my colleague Iain Murray once demonstrated.
But Newtonian dynamics generalises because it includes things like force additivity and Galilean frames, which is also what makes it interpretable. And trying to learn all that from data is a tall order, unless you know just what you are looking for to begin with. If you don't include all those things, you don't really have Newtonian dynamics. Learning mechanics is a real AI challenge like "logic learning". As a simpler task in this area, consider training a neural network to play the children's game FizzBuzz...it is already very difficult!
I think a lot of methods used in data science are sophisticated filing systems. That is a different thing from a model. Not to disparage AI's successes, but if you want to build something exciting in an area with a lot of accumulated domain knowledge you need to combine machine learning with other forms of modelling. Take Atlas Parkour, for example, which marries robot dynamics and kinematics with machine learning.
About choosing good research problems. Applied mathematicians sometimes take the position that "if I find a mathematical method or solve an equation, someone somewhere will need it".† I do not much care for this perspective. The reason is that the scope for doing mathematics is almost infinite but the useful mathematics is sparsely distributed within this vast expanse. So it is necessary to be very selective about where one concentrates one's effort or one ends up doing research that is of interest neither for pure mathematicians (as it is too applied) nor for scientists and engineers (as it is too obscure). How do I decide which problem to work on? I ask the scientist. So the work in my group is led by the science. We are willing to go into any appropriate area of mathematics to look for a solution, so in this sense my group is very wide in the scope of problems studied and in the toolset that we bring to examine them.
† The author of this quote shall remain nameless!
As examples of our recent successes in this area, I would highlight the understanding of theoretical foundations and the development of numerical methods for Langevin dynamics. In Langevin dynamics, the Newtonian dynamics model is supplemented by dissipative and random perturbations which capture the interactions with suppressed degrees of freedom or compensate for other limitations of the conservative description. The system of interest is thus a set of stochastic differential equations. We have considered the formulation of the equations of motion, the properties of those equations (e.g. questions of ergodicity), and the design of accurate and efficient numerical discretisation methods. Some of our work in this area is described in the textbook I have written with Charles Matthews (Molecular Dynamics, Springer 2015).
More recent work has generalised the algorithms to constrained systems and systems with memory (generalised Langevin), or looked at the redesign of the stochasticdynamical model to enhance exploration in the presence of extreme metastability due to energetic or entropic barriers. We also have an important stream of research on mesoscale particle models such as dissipative particle dynamics.
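To give a flavour of the splitting methods involved, here is a minimal sketch of one step of a "BAOAB"-ordered Langevin splitting scheme (one of the orderings analysed in the book), for a single particle in one dimension. The harmonic potential, parameter values and function names are purely illustrative, not taken from any production code.

```python
import numpy as np

def baoab_step(q, p, force, dt, gamma, kT, mass, rng):
    """One BAOAB step for Langevin dynamics.

    B: half momentum kick, A: half position drift,
    O: exact Ornstein-Uhlenbeck (friction + noise) update,
    then A and B again, in mirror order.
    """
    p = p + 0.5 * dt * force(q)                  # B: half kick
    q = q + 0.5 * dt * p / mass                  # A: half drift
    c1 = np.exp(-gamma * dt)                     # O: exact OU solve
    c2 = np.sqrt(kT * mass * (1.0 - c1 ** 2))
    p = c1 * p + c2 * rng.standard_normal()
    q = q + 0.5 * dt * p / mass                  # A: half drift
    p = p + 0.5 * dt * force(q)                  # B: half kick
    return q, p

# Illustrative use: harmonic potential U(q) = q^2/2, so force(q) = -q.
rng = np.random.default_rng(0)
q, p = 1.0, 0.0
qs = []
for _ in range(200000):
    q, p = baoab_step(q, p, lambda x: -x, dt=0.1, gamma=1.0,
                      kT=1.0, mass=1.0, rng=rng)
    qs.append(q)
# With kT = 1 the invariant configurational density is N(0, 1), so the
# long-run variance of q should be close to 1.
print(np.var(qs))
```

In a real molecular code the force evaluation dominates the cost; note that this sketch evaluates the force twice per step only for clarity.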
It is also worth mentioning our recent work on molecular dynamics software. In this domain there are some very large consortia which have produced the most important products, such as CHARMM, NAMD, Gromacs, Amber, Lammps. These packages generally involve combined contributions of dozens of authors from different disciplines. Because all classical molecular dynamics models involve compromises (quantum mechanics would be more accurate, but is computationally astronomically expensive due to the 'curse of dimensionality'), a lot of work is put into making the force fields as accurate as possible. This means fitting empirical force field models to functions of the pair distances, coordinates of local triples, etc. It's highly nontrivial. Other substantial work goes into making the codes run fast on the latest hardware, for example GPUs.
When we develop new algorithms, it can take a long time to get them implemented in these codes. There are often development queues and priorities that would delay implementation. So our algorithms languish on the shelf sometimes for many years before finally getting tested, which can be frustrating. For this reason, we have worked with scientists from the Edinburgh Parallel Computing Centre (EPCC) to create a software interface called MIST ("Molecular Integrator Software Tools") that allows us to very easily implement our algorithms. The software interface handles all the communication with various molecular simulation software packages such as NAMD, GROMACS, etc. In this way we can take advantage of the optimisations built into those codes while also having the ability to test our algorithms rapidly on "realworld" problems. MIST is freely available.
We have also been progressively exploring algorithms that can be used for Bayesian parameterisation of complex models. The model can be a neural network or a traditional mathematical model. Let us denote the model by $\Phi$ and suppose that it takes input $X$ to output $Y$, given parameters $\theta$:
$Y = \Phi(X,\theta)$.
Now in a common setup, one assumes there is stochastic uncertainty in the relationship, and so it is natural to consider a model in which the inputs and outputs are interpreted as random variables and the strict equality above is replaced by a statistical model, say
(1) $\pi\left(x,y \mid \theta\right) = Z^{-1}\exp\left(-\frac{\left[y-\Phi(x,\theta)\right]^2}{2\sigma^2}\right)$,
where, for simplicity, we take $\sigma$ fixed. This is just one (elementary) choice for the statistical model, but it will serve for this exposition. Now given a collection of samples (data) $D=\{(x_{\alpha},y_{\alpha})\}$, we may ask what is the best choice of the parameters $\theta$ to fit the given model to the given data. From the Bayesian perspective, the parameters themselves can be viewed as uncertain, with a distribution defined by Bayesian inversion:
$\pi\left(\theta \mid D\right) \propto \pi(D \mid \theta)\,\pi_0(\theta)$.
The factors here are all probability densities. $\pi_0$ is an assumed "prior" for the parameters which might, for example, restrict the parameters to be taken from a specified finite domain (or it may be Gaussian). The usual choice for the conditional distribution, $\pi(D \mid \theta)$, is based on the likelihood which is defined by the statistical model (1) under the assumption that all the data points are independent:
$\pi(D \mid \theta) = \prod_{\alpha=1}^{|D|} \pi(x_{\alpha},y_{\alpha} \mid \theta).$
It is now possible to treat the parameterization problem in various ways. The most usual approach is to optimise the parameters by finding the global maximum of the probability density $\pi(\theta \mid D)$. A more nuanced approach is to embrace the uncertainty in the choice of $\theta$ in the Bayesian perspective, and explore, i.e. "sample", the parameter distribution. The distribution is normally multimodal and we may attempt to find a number of the modes or to average the parameter values across the distribution. This is the approach we are currently taking in our data analytics work.
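A toy instance of this setup can be written in a few lines. Everything below is invented for illustration: the model is linear, $\Phi(x,\theta)=\theta x$, the likelihood is the Gaussian statistical model (1), the prior is a wide Gaussian, and a crude grid search stands in for a proper optimiser or sampler.

```python
import numpy as np

# Synthetic data from a linear model Phi(x, theta) = theta * x with
# Gaussian noise of standard deviation sigma (all values illustrative).
sigma = 0.5
rng = np.random.default_rng(1)
theta_true = 2.0
xs = rng.uniform(-1.0, 1.0, size=50)
ys = theta_true * xs + sigma * rng.standard_normal(50)

def log_likelihood(theta):
    # log pi(D | theta): sum over the data of the Gaussian log density (1),
    # dropping the theta-independent normalisation Z.
    return -np.sum((ys - theta * xs) ** 2) / (2.0 * sigma ** 2)

def log_posterior(theta):
    # log pi(theta | D) up to a constant: likelihood plus a N(0, 10^2) prior.
    return log_likelihood(theta) - theta ** 2 / (2.0 * 10.0 ** 2)

# Maximising on a grid (the "most usual approach" above) recovers a value
# near the parameter used to generate the data.
grid = np.linspace(-5, 5, 2001)
theta_map = grid[np.argmax([log_posterior(t) for t in grid])]
print(theta_map)
```

The "more nuanced approach" replaces the maximisation with sampling of $\pi(\theta \mid D)$, for example by the discretised Langevin dynamics described further down.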
Among the specific problems we have looked at/are looking at in this domain, I would mention the following. First, there is a basic problem with the simple technique I mentioned above in the context of large data sets. Every time we compute a new value of the density (or its gradient, as is often needed), we must compute a product of likelihoods across the entire data set, which will be very expensive if $D$ is large. One way to reduce the computational burden is to replace the full likelihood by an approximation obtained by subsampling the data set. The subsample might be just a small percentage of the full data set, so the properties of the data set, e.g. its concentration, play a critical role in determining when this approach will be successful in practice. Nonetheless it is widely used in the data community, where it goes under the name of "stochastic gradients". A very similar problem arises in multiscale modelling, when the forces are computed by some device which perturbs the conservative structure. This gradient noise creates an additional source of noise in the parameterisation process. It turns out that this noise can be neutralised by use of a "thermostat" method, something we studied extensively in the context of chemical/physical modelling. This remains very much an open problem and we are currently exploring it as part of other work we are doing.
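The subsampling idea can be sketched as follows, with a deliberately simple quadratic "log-likelihood" (the data and functions are illustrative, not from our codes): rescaling a minibatch sum by $N/b$ gives an unbiased, but noisy, estimate of the full gradient.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10000
data = rng.standard_normal(N)

def grad_full(theta):
    # Gradient of the full "log-likelihood" sum_i -(theta - data_i)^2 / 2,
    # which costs O(N) per evaluation.
    return np.sum(data - theta)

def grad_subsampled(theta, batch_size):
    # Stochastic-gradient estimate: a random minibatch sum, rescaled by
    # N / batch_size so that its expectation equals grad_full(theta).
    idx = rng.integers(0, N, size=batch_size)
    return (N / batch_size) * np.sum(data[idx] - theta)

# Each estimate is noisy, but the average over many minibatches
# approaches the full gradient.
estimates = [grad_subsampled(0.0, 100) for _ in range(5000)]
print(grad_full(0.0), np.mean(estimates))
```

The per-step cost drops from $O(N)$ to $O(b)$; the price is the extra gradient noise discussed above, which is exactly what a thermostat-style correction aims to neutralise.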
A natural tool for such sampling problems is the overdamped Langevin equation associated with a target density $\pi$:
$\textrm{d}x = \nabla \log \pi(x)\,\textrm{d}t + \sqrt{2}\,\textrm{d}W.$
This is a stochastic differential equation system. The system is "ergodic" with respect to $\pi$, meaning that the solution paths generated by solving the stochastic system allow us to compute averages of suitable functions with respect to $\pi$; that is, if $x(t)$ represents a solution path for any initial condition, and $\varphi$ is a $C^{\infty}$ function, then
$\lim_{\tau\rightarrow \infty} \tau^{-1} \int_{0}^{\tau} \varphi(x(t))\,\textrm{d}t = \int_{R^d} \varphi(x)\,\pi(x)\,\textrm{d}x$.
Now in practice we need to do our computations using a numerical method which replaces $x(t)$, representing a stochastic path, by a sequence $\{x_n\}$. A typical example of a rule to compute such a sequence is the Euler-Maruyama method
(EM) $x_{n+1} = x_n + {\delta t}\,\nabla \log \pi(x_n) + \sqrt{2\,\delta t}\,R_n$,
where $R_n$ represents a standard normal random variable, i.e. $R_n\sim N(0,1)$, and $\langle R_n R_m\rangle = \delta_{mn}$. We can think of this as approximating the solution of the SDE starting from $x_n$ a short time later.
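In code, the rule (EM) is only a few lines. Here it is applied to a standard normal target, for which $\nabla \log \pi(x) = -x$; the target, stepsize and run length are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def em_sample(grad_log_pi, x0, dt, n_steps):
    # Iterate the Euler-Maruyama rule (EM):
    # x_{n+1} = x_n + dt * grad log pi(x_n) + sqrt(2 dt) * R_n.
    x = x0
    path = np.empty(n_steps)
    for n in range(n_steps):
        x = x + dt * grad_log_pi(x) + np.sqrt(2.0 * dt) * rng.standard_normal()
        path[n] = x
    return path

# Standard normal target: pi(x) proportional to exp(-x^2/2), grad log pi = -x.
path = em_sample(lambda x: -x, x0=0.0, dt=0.05, n_steps=200000)
# The long-run mean and variance should approximate those of N(0, 1),
# up to the stepsize-dependent bias of the invariant measure.
print(path.mean(), path.var())
```

For this linear target the bias is visible in the variance, which settles slightly above 1 for any finite stepsize.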
The question then becomes: how well does the discrete average of an observable using the numerical method approximate the continuous path average obtained by using the stochastic differential equation, which, in turn, is exactly equal to the ensemble average due to ergodicity? A priori, we do not even know that the discrete process has a similar ergodic property to the SDE. It turns out that the Euler-Maruyama method does have a unique invariant distribution $\hat{\pi}_{\textrm{EM}}(x)$ which depends on the stepsize $\delta t$. It is not the same as $\pi$, but it is close, in the sense that, for suitable observables $\varphi$, we have
$\int_{R^d} \varphi(x)\, \hat{\pi}_{\textrm{EM}}(x)\, \textrm{d}x = \int_{R^d} \varphi(x)\, {\pi}(x)\, \textrm{d}x + O(\delta t).$
We could say that the Euler-Maruyama method has "ergodic order" one, since the error decreases as the first power of $\delta t$. Now what is really remarkable, and something we discovered for the first time, is that if you just alter the Euler-Maruyama method a little bit, replacing that method by the LM (short for Leimkuhler-Matthews) method, which is defined by:
(LM) $x_{n+1} = x_n + {\delta t}\,\nabla \log \pi(x_n) + \sqrt{\delta t/2}\,(R_n+R_{n+1})$,
then it is possible to show that, for this method, there is again a unique modified invariant distribution with density $\hat{\pi}_{\textrm{LM}}(x)$, and one has
$\int_{R^d} \varphi(x)\, \hat{\pi}_{\textrm{LM}}(x)\, \textrm{d}x = \int_{R^d} \varphi(x)\, {\pi}(x)\, \textrm{d}x + O(\delta t^2).$
Since the method (LM) just involves adding a couple of Gaussians, there is essentially no additional cost compared to the Euler-Maruyama method. It is an explicit method, just like Euler-Maruyama. Since the sum of Gaussians is again a Gaussian, it is not immediately clear why the LM method should be superior. The key is that the random variates introduced at each step are no longer independent: the successive ones have a nontrivial covariance. This is enough to change the ergodic order from one to two.
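For the standard normal target, the recursions (EM) and (LM) are linear with Gaussian noise, so their stationary variances can be worked out in closed form rather than estimated by simulation. (The Gaussian case is special: as the short derivation in the comments shows, the LM bias vanishes entirely here, rather than merely being $O(\delta t^2)$.) A quick check:

```python
# Closed-form stationary variances of (EM) and (LM) applied to the
# standard normal target, where grad log pi(x) = -x; the exact value is 1.
def stationary_variance_em(dt):
    # (EM): x_{n+1} = (1 - dt) x_n + sqrt(2 dt) R_n, with independent R_n,
    # so v = a^2 v + 2 dt, giving v = 2 dt / (1 - a^2) = 1 / (1 - dt/2):
    # a first-order, O(dt), bias.
    a = 1.0 - dt
    return 2.0 * dt / (1.0 - a ** 2)

def stationary_variance_lm(dt):
    # (LM): x_{n+1} = (1 - dt) x_n + sqrt(dt/2) (R_n + R_{n+1}).
    # R_n was already used in the previous step, so Cov(x_n, R_n) = sqrt(dt/2)
    # and v = a^2 v + 2 c^2 + 2 a c^2 with c^2 = dt/2, giving
    # v = 2 c^2 / (1 - a) = 1 exactly for this Gaussian target.
    a, c2 = 1.0 - dt, dt / 2.0
    return (2.0 * c2 + 2.0 * a * c2) / (1.0 - a ** 2)

for dt in (0.2, 0.1, 0.05):
    print(dt, stationary_variance_em(dt) - 1.0, stationary_variance_lm(dt) - 1.0)
```

The EM bias in the variance is roughly $\delta t/2$ and halves with the stepsize, while the LM variance is exact here; for general (non-Gaussian) targets the LM error is nonzero but second order.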
It's worth giving an example to illustrate how dramatic this improvement can be in practice. In the figure below you can see the approximation of the bimodal probability distribution associated with the double-well potential, using the overdamped Langevin dynamics SDE as discretised by the Euler-Maruyama method (EM) and the LM scheme mentioned above, for two different values of the stepsize. Considering that an identical amount of computational effort is needed for both simulations, it is very clear which should be recommended for sampling computations.