mmorale3 edited this page Sep 27, 2017 · 38 revisions

Welcome to the miniAFQMC wiki!

How-To Guides

Prerequisites

Build miniAFQMC

cd build
cmake -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx ..
make 

CMake provides a number of optional variables that can be set to control the configure and build steps. When passed to CMake, these variables take precedence over environment and default variables. To set one, add -D FLAG=VALUE to the configure line between the cmake command and the path to the source directory.

  • General build options
CMAKE_C_COMPILER    Set the C compiler
CMAKE_CXX_COMPILER  Set the C++ compiler
CMAKE_BUILD_TYPE    A variable which controls the type of build (defaults to Release).
                    Possible values are:
                    None (Do not set debug/optimize flags, use CMAKE_C_FLAGS or CMAKE_CXX_FLAGS)
                    Debug (create a debug build)
                    Release (create a release/optimized build)
                    RelWithDebInfo (create a release/optimized build with debug info)
                    MinSizeRel (create an executable optimized for size)
CMAKE_C_FLAGS       Set the C flags.  Note: to prevent default debug/release flags
                    from being used, set the CMAKE_BUILD_TYPE=None
                    Also supported: CMAKE_C_FLAGS_DEBUG, CMAKE_C_FLAGS_RELEASE,
                                    CMAKE_C_FLAGS_RELWITHDEBINFO
CMAKE_CXX_FLAGS     Set the C++ flags.  Note: to prevent default debug/release flags
                    from being used, set the CMAKE_BUILD_TYPE=None
                    Also supported: CMAKE_CXX_FLAGS_DEBUG, CMAKE_CXX_FLAGS_RELEASE,
                                    CMAKE_CXX_FLAGS_RELWITHDEBINFO
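For example, a configure line that supplies its own optimization flags might look like the following (the compilers and flags shown are illustrative; substitute whatever your system provides):

```shell
# Run from the build directory. CMAKE_BUILD_TYPE=None prevents CMake from
# appending its own debug/release flags, so the flags given here are used
# verbatim.
cmake -D CMAKE_C_COMPILER=mpicc \
      -D CMAKE_CXX_COMPILER=mpicxx \
      -D CMAKE_BUILD_TYPE=None \
      -D CMAKE_CXX_FLAGS="-O3 -march=native" \
      ..
make
```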

Executable

The main executable is created in the ./bin folder.

miniafqmc              # runs a fake AFQMC calculation and reports the time spent in each component.

The miniafqmc executable requires an HDF5 input file and accepts various command-line options. The input file, whose default name is afqmc.h5, contains the data structures associated with the calculation, including trial wave functions, Hamiltonian matrix elements, problem dimensions (# electrons, # orbitals, ...), etc. Because generating this input is a complicated process, the miniapp cannot be executed without this file. Several sample files are supplied with the miniapp in the examples directory. For additional examples covering other regions of parameter space, contact the QMCPACK developers. Command-line options are available to control several parameters in the calculation, e.g. # walkers, # steps, etc., all with reasonable default values. If more control is needed, run with -h to print the available options.
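Since the input is a plain HDF5 file, its layout can be inspected with standard tools before running the miniapp. A minimal sketch using the h5py Python package (the dataset names inside afqmc.h5 are not documented here, so the helper simply lists whatever it finds):

```python
import h5py

def summarize_h5(path):
    """Return a dict mapping each dataset path in an HDF5 file to its shape."""
    datasets = {}

    def visit(name, obj):
        # visititems walks groups and datasets; record only the datasets.
        if isinstance(obj, h5py.Dataset):
            datasets[name] = obj.shape

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return datasets

# Example: summarize_h5("afqmc.h5") would list the trial wave function,
# Hamiltonian matrix elements, problem dimensions, etc. stored in the file.
```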

Output

This is an example of the output produced by miniafqmc executed on example/N32_M64/afqmc.h5 with default options. The miniapp reports (i) basic calculation parameters, (ii) the average energy at each step, and (iii) timing information for the main sections of the calculation.

***********************************************************
                         Summary                           
***********************************************************

  AFQMC info: 
    name: miniAFQMC
    # of molecular orbitals: 64
    # of up electrons: 32
    # of down electrons: 32

  Execution details: 
    nsteps: 10
    nsubsteps: 10
    nwalk: 16
    northo: 10
    verbose: false
    # Chol Vectors: 1031
    transposed Spvn: true
    Chol. Matrix Sparsity: 0.0912627
    Hamiltonian Sparsity: 0.0585666

***********************************************************
                     Beginning Steps                       
***********************************************************

# Step   Energy   
0   24.5761
1   24.4027
2   24.3019
3   24.2015
4   24.1474
5   24.1166
6   24.0866
7   24.0929
8   23.9847
9   23.9405

***********************************************************
                   Finished Calculation                    
***********************************************************

Stack timer profile
Timer                   Inclusive_time  Exclusive_time  Calls       Time_per_call
Total                              3.0442     0.0002              1       3.044162035
  Bias Potential                   0.7084     0.7084            100       0.007084172
  H-S Potential                    0.5992     0.5992            100       0.005991509
  Local Energy                     0.1485     0.1485             10       0.014847875
  Orthogonalization                0.0452     0.0452              9       0.005027586
  Other                            0.0004     0.0004            200       0.000001969
  Overlap                          0.1252     0.1252            109       0.001148493
  Propagation                      1.0463     1.0463            100       0.010463102
  Sigma                            0.0806     0.0806            100       0.000805798
  compact Mixed Density Matrix     0.2902     0.2902            100       0.002901709
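When comparing runs across versions or problem sizes, it can be handy to post-process the stack timer profile programmatically. A small sketch that parses a table like the one above into a dictionary (it assumes only the column layout shown here):

```python
def parse_timer_profile(text):
    """Parse the 'Stack timer profile' table into
    {timer_name: (inclusive, exclusive, calls, time_per_call)}."""
    timers = {}
    for line in text.splitlines():
        parts = line.split()
        # A data row ends in four numeric fields; everything before is the name.
        if len(parts) >= 5:
            try:
                incl, excl = float(parts[-4]), float(parts[-3])
                calls = int(parts[-2])
                per_call = float(parts[-1])
            except ValueError:
                continue  # header or other non-data line
            name = " ".join(parts[:-4])
            timers[name] = (incl, excl, calls, per_call)
    return timers
```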

Basics of Auxiliary-Field Quantum Monte Carlo

The Auxiliary-Field Quantum Monte Carlo (AFQMC) method is an orbital-based quantum Monte Carlo (QMC) method designed for the study of interacting many-body quantum systems. For a complete description of the method and its algorithms, we refer the reader to one of the review articles on the subject; see, for example, Chapter 15, "Auxiliary-Field Quantum Monte Carlo for Correlated Electron Systems", of the online book "Emergent Phenomena in Correlated Matter" (https://www.cond-mat.de/events/correl13/manuscripts/) and references therein. Here we give a very brief description of the main elements of the algorithm, with emphasis on the steps of the imaginary-time propagation process implemented in the miniAFQMC miniapp. The algorithm is defined in terms of a given single-particle basis, f1.

f2

f3

![f4]

![f5]

![f6]

![f7]

![f8]
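The local-energy expression above is a tensor contraction of the antisymmetrized two-electron integrals with two copies of the Green's function. A NumPy sketch of that contraction, using small random dummy tensors purely for illustration (this is not the miniAFQMC implementation, which works with sparse, compacted matrices):

```python
import numpy as np

def local_energy(V, G):
    """E_L = sum_{i,j,k,l} (<ij|kl> - <ij|lk>) G(i,k) G(j,l).

    V[i, j, k, l] holds the two-electron integral <ij|kl>;
    G is the one-body Green's function G(i, k)."""
    M = V - V.transpose(0, 1, 3, 2)       # antisymmetrize: <ij|kl> - <ij|lk>
    return np.einsum("ijkl,ik,jl->", M, G, G)

def local_energy_loops(V, G):
    """Same contraction with explicit loops (slow, for clarity only)."""
    n = G.shape[0]
    E = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    E += (V[i, j, k, l] - V[i, j, l, k]) * G[i, k] * G[j, l]
    return E
```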

miniAFQMC versions

master      # Base implementation. Serial execution with no explicit threading support.
mpi3_shm    # Distributed implementation based on MPI for internode communication and MPI-3 shared memory for intranode communication. Operations local to a node are distributed among core groups using an MPI-only, shared-memory-based framework.
cuda        # Implementation for GPU architectures based on CUDA.
kokkos      # Portable implementation based on Kokkos.  

Future versions of the miniapp (not yet available) include a distributed implementation based on ScaLAPACK and a distributed implementation based on MPI+OpenMP, similar in spirit to mpi3_shm but using OpenMP for intranode concurrency.

Base miniAFQMC implementation details

Alfredo, describe multi_array and your wrapper for blas/lapack, e.g. general syntax (product(A,B,C) ==> C = A*B), ma::T, etc. Current limitations, etc. Not too much detail, just enough for people to understand what is going on. Also, no need for a lengthy description of multi_array, send people to the appropriate documentation. Describe the goal of the library at high level and why we use it here, e.g. easy access to matrix views, etc.

Dense Matrix: Boost multi_array

Sparse Matrix representation/operations
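The sparse format used by miniAFQMC is not described here, but compressed sparse row (CSR) storage is the usual choice for matrices like the Cholesky and Hamiltonian matrices reported in the summary above (roughly 6-9% nonzeros). A self-contained CSR sparse matrix-vector product sketch, independent of the actual miniAFQMC classes:

```cpp
#include <cstddef>
#include <vector>

// Minimal CSR (compressed sparse row) matrix: for row i, the nonzero values
// are vals[row_ptr[i] .. row_ptr[i+1]) with column indices cols[...].
struct csr_matrix {
    std::size_t nrows;
    std::vector<std::size_t> row_ptr;  // size nrows + 1
    std::vector<std::size_t> cols;     // size nnz
    std::vector<double> vals;          // size nnz
};

// y = A * x
std::vector<double> spmv(const csr_matrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.nrows, 0.0);
    for (std::size_t i = 0; i < A.nrows; ++i)
        for (std::size_t p = A.row_ptr[i]; p < A.row_ptr[i + 1]; ++p)
            y[i] += A.vals[p] * x[A.cols[p]];
    return y;
}
```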

[f4]: http://chart.apis.google.com/chart?cht=tx&chl=E_L=\sum_{i,j,k,l}(<ij|kl>-<ij|lk>)G(i,k)G(j,l)=\sum_{ik,jl}G(ik)M(ik,jl)G(jl)=G\cdot{M}\cdot{G}
[f5]: http://chart.apis.google.com/chart?cht=tx&chl=M(ik,jl)=<ij|kl>-<ij|lk>
[f6]: http://chart.apis.google.com/chart?cht=tx&chl=E_L=G_c\cdot\tilde{M}\cdot{G_c}
[f7]: http://chart.apis.google.com/chart?cht=tx&chl=G_c(i,j)=\frac{\left<A_{trial}|c^{\dagger}_ic_j|W\right>}{\left<A_{trial}|W\right>}
[f8]: http://chart.apis.google.com/chart?cht=tx&chl=E_L=\sum_{i,j,k,l}(<ij|kl>-<ij|lk>)G(i,k)G(j,l)
