
Wang-Landau algorithm

The Wang-Landau algorithm, proposed by Fugao Wang and David Landau [1], is a Monte Carlo method designed to calculate the density of states of a system. The method performs a non-Markovian random walk that visits all possible states and builds up the density of states along the way. The Wang-Landau algorithm is an important method for obtaining the density of states required for multicanonical simulations.

The Wang-Landau algorithm can be applied to any system characterized by some parameter (for example, energy or volume). It has been used, for instance, for numerical integration [2] and for protein simulations [3][4].


Description

The Wang-Landau algorithm is an implementation of entropic sampling, in which the density of states is determined by a random walk in energy space that visits all energy levels with equal probability. The algorithm solves the problem of choosing suitable transition probabilities so that the energy levels are visited uniformly, as entropic sampling requires, and therefore makes it possible to obtain the density of states $\Omega(E)$.

Algorithm

Consider a system with phase space $\Omega$ and energy $E$ restricted to the range $E_{\min} \leq E \leq E_{\max}$. Suppose the system has a probability density $\rho(E) \sim \exp(S(E))$ that we need to calculate. Since the Wang-Landau algorithm works with a discrete energy spectrum, the range $E_{\min} \leq E \leq E_{\max}$ is split into a finite number $M$ of equal segments ("bins") of width $\Delta$, so that

$$M = \frac{E_{\max} - E_{\min}}{\Delta}.$$
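As an illustration, here is a minimal sketch of how an energy value could be mapped to a bin index under this discretization (the names `energy_bin`, `e_min`, `e_max`, and `n_bins` are illustrative, not taken from the text above):

    def energy_bin(E, e_min, e_max, n_bins):
        """Map an energy E in [e_min, e_max] to a bin index in 0..n_bins-1."""
        delta = (e_max - e_min) / n_bins      # bin width Delta
        i = int((E - e_min) / delta)          # index of the bin containing E
        return min(i, n_bins - 1)             # clamp E == e_max into the last bin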

Given this discrete spectrum, the algorithm starts from the following initial conditions:

  • $S(E_i) = 0$ for $i = 1, 2, \ldots, M$
  • $f = 1$ (used below as the increment added to the entropy $S(E_i)$ after each step)
  • a random configuration of the system $\vec{r} \in \Omega$ is chosen
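A minimal initialization sketch under these conditions (here `n_bins` stands for $M$ and is an assumed placeholder, as is the commented-out call that produces a random configuration):

    import numpy as np

    n_bins = 100                         # number of energy bins M (assumed value)
    entropy = np.zeros(n_bins)           # S(E_i) = 0 for every bin
    H = np.zeros(n_bins, dtype=int)      # visit histogram H(E_i), initially empty
    f = 1.0                              # initial entropy increment f
    # r = system.randomConfiguration()   # random initial configuration in Omega (system-specific)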

The algorithm performs a simulation in the multicanonical ensemble: a Metropolis-Hastings random walk in the phase space $\Omega$ with the probability distribution $P(\vec{r}) = 1/\rho(E(\vec{r})) = \exp(-S(E(\vec{r})))$ and with a proposal distribution $g(\vec{r} \rightarrow \vec{r}\,')$ for generating new states, which can be chosen arbitrarily (usually any state is proposed with equal probability). During the simulation, each visit to a bin is recorded in a histogram $H(E)$ (i.e., the value of $H(E)$ is increased by one). As in the Metropolis-Hastings algorithm, a new state is proposed and accepted as follows:

  1. a new state $\vec{r}\,' \in \Omega$ is generated according to the proposal distribution $g(\vec{r} \rightarrow \vec{r}\,')$;
  2. the new state is accepted or rejected as follows:

If the entropy of the new state is lower than that of the current state, it is accepted immediately. If the entropy has increased, the new state is accepted with probability $A = e^{S-S'} \frac{g(\vec{r}\,' \rightarrow \vec{r})}{g(\vec{r} \rightarrow \vec{r}\,')}$, where $S = S(E(\vec{r}))$ and $S' = S(E(\vec{r}\,'))$.

That is, the general formula is as follows:

$$A(\vec{r} \rightarrow \vec{r}\,') = \min\left(1,\; e^{S-S'}\, \frac{g(\vec{r}\,' \rightarrow \vec{r})}{g(\vec{r} \rightarrow \vec{r}\,')}\right).$$
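A sketch of this acceptance rule in Python, under the common assumption of a symmetric proposal distribution so that the ratio $g(\vec{r}\,' \rightarrow \vec{r})/g(\vec{r} \rightarrow \vec{r}\,')$ cancels (the function name and arguments are illustrative):

    import math
    import random

    def accept_move(S_current, S_proposed):
        """Wang-Landau acceptance for a symmetric proposal: A = min(1, exp(S - S'))."""
        if S_proposed <= S_current:   # moving to a bin of lower or equal entropy: always accept
            return True
        return random.random() < math.exp(S_current - S_proposed)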

In this way the entropy of the most frequently visited states increases, so they are subsequently visited less often, while the rarest states are visited more often. This is how an equiprobable visiting of all states is achieved.

After each proposal-acceptance step the system ends up in some state with energy $E_i$; the value $H(E_i)$ is increased by one, and the following update is performed:

$$S(E_i) \leftarrow S(E_i) + f$$
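A minimal sketch of this per-step bookkeeping, reusing the `entropy`, `H`, and `f` names from the initialization sketch above and assuming `i` is the bin index of the current energy:

    def record_visit(i, entropy, H, f):
        """Update the histogram and the running entropy estimate for the visited bin i."""
        H[i] += 1           # one more visit to this energy bin
        entropy[i] += f     # penalize this bin so it is accepted less often in the future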

This is an important step of the algorithm, and it is what makes the Wang-Landau algorithm non-Markovian: the random walk now depends on its own history. The next time a state with energy $E_i$ is proposed, it is more likely to be rejected; in this sense, the algorithm forces the system to visit all states with the same frequency. As a result, the histogram $H(E)$ becomes flatter and flatter. This uniformity, however, depends on how close the computed entropy is to the exact entropy, which in turn depends on $f$. To improve the approximation of the exact entropy (and thus the flatness of the histogram), $f$ is decreased after every $M$ proposal-acceptance steps:

$$f \leftarrow f/2$$
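For example, after $k$ such reductions the increment is $f = 2^{-k}$, so about 27 halvings are needed to bring $f$ below $10^{-8}$.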

It was later shown that refining $f$ by repeated halving can cause the error to saturate, so the algorithm may not converge [5]. A small modification of the Wang-Landau method avoids this: instead of being halved, $f$ is decreased in proportion to $1/t$, where $t$ is the simulation time (the number of steps taken).
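A sketch of how this $1/t$ schedule could replace the halving step (the crossover condition is the one usually associated with the $1/t$ variant of [5] and is an assumption here; `H` is assumed to be a NumPy array):

    def refine_f(f, t, H, histogram_is_flat):
        """Possible 1/t refinement schedule; t counts proposal-acceptance steps."""
        if f <= 1.0 / t:          # once the schedule reaches the 1/t curve, follow f = 1/t
            return 1.0 / t
        if histogram_is_flat:     # otherwise behave as in the standard algorithm
            H[:] = 0
            return f / 2.0
        return f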

As a result, the weights that define the transition probabilities are adjusted automatically, and at the same time they determine the density of states. At the end of the calculation the array $\Omega(E) = \exp S(E)$ is computed and normalized to one.
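A minimal sketch of this final step, working with the `entropy` array from the sketches above (subtracting the maximum is only a numerical-stability safeguard, an assumption beyond the text):

    import numpy as np

    def density_of_states(entropy):
        """Turn the accumulated entropy S(E) into a density of states normalized to one."""
        S = entropy - entropy.max()     # shift to avoid overflow in exp
        omega = np.exp(S)               # Omega(E) = exp S(E), up to a constant factor
        return omega / omega.sum()      # normalize so that the values sum to one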

Code Example

Below is sample Python code that assumes a symmetric proposal distribution $g$:

$$g(\boldsymbol{x}' \rightarrow \boldsymbol{x}) = g(\boldsymbol{x} \rightarrow \boldsymbol{x}')$$

    from math import exp
    from random import random

    # Assumed to be defined elsewhere: `system` (the model being simulated), the arrays
    # `entropy` and `H` indexed by the discretized energy (initially all zeros),
    # the initial increment f = 1.0, the target accuracy `epsilon`, and `isFlat`.

    currentEnergy = system.randomConfiguration()  # random initial state of the system

    while f > epsilon:
        system.proposeConfiguration()             # generate a new trial configuration
        proposedEnergy = system.proposedEnergy()  # energy of the proposed state

        if random() < exp(entropy[currentEnergy] - entropy[proposedEnergy]):
            # proposal accepted: update the current energy and the system state
            currentEnergy = proposedEnergy
            system.acceptProposedConfiguration()
        else:
            # proposal rejected: keep the current configuration
            system.rejectProposedConfiguration()

        H[currentEnergy] += 1          # record the visit to this energy
        entropy[currentEnergy] += f    # raise the entropy estimate of the visited energy

        if isFlat(H):                  # isFlat checks whether the histogram is flat enough (e.g. to 95%)
            H[:] = 0                   # reset the histogram
            f *= 0.5                   # refine the modification factor f
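The helper `isFlat` is not defined above; a minimal sketch of one common convention (every visited bin within, say, 95% of the mean count), assuming `H` is a NumPy array:

    import numpy as np

    def isFlat(H, threshold=0.95):
        """Return True when every visited bin is within `threshold` of the mean count."""
        visited = H[H > 0]          # ignore energy bins that were never reached
        if visited.size == 0:
            return False
        return visited.min() >= threshold * visited.mean()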

Notes

  1. ↑ Wang, Fugao & Landau, D. P. (Mar 2001). "Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States". Phys. Rev. Lett. (American Physical Society) 86 (10): 2050–2053.
  2. ↑ R. E. Belardinelli, S. Manzi & V. D. Pereyra (Dec 2008). "Analysis of the convergence of the 1/t and Wang-Landau algorithms in the calculation of multidimensional integrals". Phys. Rev. E (American Physical Society) 78 (6): 067701.
  3. ↑ P. Ojeda, M. Garcia, A. Londono & N. Y. Chen (Feb 2009). "Monte Carlo Simulations of Proteins in Cages: Influence of Confinement on the Stability of Intermediate States". Biophys. J. (Biophysical Society) 96 (3): 1076–1082. Bibcode:2009BpJ....96.1076O. doi:10.1529/biophysj.107.125369.
  4. ↑ P. Ojeda & M. Garcia (Jul 2010). "Electric Field-Driven Disruption of a Native beta-Sheet Protein Conformation and Generation of alpha-Helix-Structure". Biophys. J. (Biophysical Society) 99 (2): 595–599. doi:10.1016/j.bpj.2010.04.040. PMC 2905109. PMID 20643079.
  5. ↑ Belardinelli, R. E. & Pereyra, V. D. (2007). "Wang-Landau algorithm: A theoretical analysis of the saturation of the error". J. Chem. Phys. 127 (18): 184105. arXiv:cond-mat/0702414. Bibcode:2007JChPh.127r4105B. doi:10.1063/1.2803061.
Source - https://ru.wikipedia.org/w/index.php?title=Wang-Landau_algorithm&oldid=96757737

