<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<!--Converted with LaTeX2HTML 2008 (1.71)
original version by: Nikos Drakos, CBLU, University of Leeds
* revised and updated by: Marcus Hennecke, Ross Moore, Herb Swan
* with significant contributions from:
Jens Lippmann, Marek Rouchal, Martin Wilck and others -->
<HTML>
<HEAD>
<TITLE>3.3 Parallelization levels</TITLE>
<META NAME="description" CONTENT="3.3 Parallelization levels">
<META NAME="keywords" CONTENT="user_guide">
<META NAME="resource-type" CONTENT="document">
<META NAME="distribution" CONTENT="global">
<META NAME="Generator" CONTENT="LaTeX2HTML v2008">
<META HTTP-EQUIV="Content-Style-Type" CONTENT="text/css">
<LINK REL="STYLESHEET" HREF="user_guide.css">
<LINK REL="next" HREF="node19.html">
<LINK REL="previous" HREF="node17.html">
<LINK REL="up" HREF="node15.html">
<LINK REL="next" HREF="node19.html">
</HEAD>
<BODY >
<!--Navigation Panel-->
<A NAME="tex2html367"
HREF="node19.html">
<IMG WIDTH="37" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="next" SRC="next.png"></A>
<A NAME="tex2html363"
HREF="node15.html">
<IMG WIDTH="26" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="up" SRC="up.png"></A>
<A NAME="tex2html357"
HREF="node17.html">
<IMG WIDTH="63" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="previous" SRC="prev.png"></A>
<A NAME="tex2html365"
HREF="node1.html">
<IMG WIDTH="65" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="contents" SRC="contents.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html368"
HREF="node19.html">3.4 Tricks and problems</A>
<B> Up:</B> <A NAME="tex2html364"
HREF="node15.html">3 Parallelism</A>
<B> Previous:</B> <A NAME="tex2html358"
HREF="node17.html">3.2 Running on parallel</A>
<B> <A NAME="tex2html366"
HREF="node1.html">Contents</A></B>
<BR>
<BR>
<!--End of Navigation Panel-->
<!--Table of Child-Links-->
<A NAME="CHILD_LINKS"><STRONG>Subsections</STRONG></A>
<UL>
<LI><UL>
<LI><A NAME="tex2html369"
HREF="node18.html#SECTION00043010000000000000">3.3.0.1 About communications</A>
<LI><A NAME="tex2html370"
HREF="node18.html#SECTION00043020000000000000">3.3.0.2 Choosing parameters</A>
<LI><A NAME="tex2html371"
HREF="node18.html#SECTION00043030000000000000">3.3.0.3 Massively parallel calculations</A>
</UL>
<BR>
<LI><A NAME="tex2html372"
HREF="node18.html#SECTION00043100000000000000">3.3.1 Understanding parallel I/O</A>
</UL>
<!--End of Table of Child-Links-->
<HR>
<H2><A NAME="SECTION00043000000000000000">
3.3 Parallelization levels</A>
</H2>
<P>
In Q<SMALL>UANTUM </SMALL>ESPRESSO several MPI parallelization levels are
implemented, in which both calculations
and data structures are distributed across processors.
Processors are organized in a hierarchy of groups,
which are identified by different MPI communicators.
The group hierarchy is as follows:
<UL>
<LI><B>world</B>: is the group of all processors (MPI_COMM_WORLD).
</LI>
<LI><B>images</B>: Processors can then be divided into different "images", each corresponding to a
different self-consistent or linear-response
calculation, loosely coupled to others.
</LI>
<LI><B>pools</B>: each image can be subpartitioned into
"pools", each taking care of a group of k-points.
</LI>
<LI><B>bands</B>: each pool is subpartitioned into
"band groups", each taking care of a group
of Kohn-Sham orbitals (also called bands, or
wavefunctions) (still experimental)
</LI>
<LI><B>PW</B>: orbitals in the PW basis set,
as well as charges and density in either
reciprocal or real space, are distributed
across processors.
This is usually referred to as "PW parallelization".
All linear-algebra operations on arrays of PW /
real-space grids are automatically and effectively parallelized.
3D FFT is used to transform electronic wave functions from
reciprocal to real space and vice versa. The 3D FFT is
parallelized by distributing planes of the 3D grid in real
space to processors (in reciprocal space, it is columns of
G-vectors that are distributed to processors).
</LI>
<LI><B>tasks</B>:
In order to allow good parallelization of the 3D FFT when
the number of processors exceeds the number of FFT planes,
FFTs on Kohn-Sham states are redistributed to
"task" groups so that each group
can process several wavefunctions at the same time.
</LI>
<LI><B>linear-algebra group</B>:
A further level of parallelization, independent of
PW or k-point parallelization, is the parallelization of
subspace diagonalization / iterative orthonormalization.
Both operations require the diagonalization of
arrays whose dimension is the number of Kohn-Sham states
(or a small multiple of it). All such arrays are distributed block-like
across the ``linear-algebra group'', a subgroup of the pool of processors,
organized in a square 2D grid. As a consequence the number of processors
in the linear-algebra group is given by <I>n</I><SUP>2</SUP>, where <I>n</I> is an integer;
<I>n</I><SUP>2</SUP> must be smaller than the number of processors in the PW group.
The diagonalization is then performed
in parallel using standard linear algebra operations.
(This diagonalization is used by, but should not be confused with,
the iterative Davidson algorithm). The preferred option is to use
ScaLAPACK; alternative built-in algorithms are also available.
</LI>
</UL>
Note however that not all parallelization levels
are implemented in all codes!
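<P>
As a concrete illustration of how these levels nest (a sketch with arbitrary numbers, not a recommendation), 64 MPI processes could be divided as follows:
<PRE>
64 processes = 2 images x 4 pools per image x 8 processes per pool
</PRE>
Within each pool, the 8 processes would then share the plane waves, the FFT grids, and (optionally) the task-group and linear-algebra work. A full command-line example is given below in the section on choosing parameters.
</P>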
<P>
<H4><A NAME="SECTION00043010000000000000">
3.3.0.1 About communications</A>
</H4>
Images and pools are loosely coupled and processors communicate
between different images and pools only once in a while, whereas
processors within each pool are tightly coupled and communications
are significant. This means that Gigabit Ethernet (typical for
cheap PC clusters) is adequate up to 4-8 processors per pool, but <EM>fast</EM>
communication hardware (e.g. Myrinet or comparable) is absolutely
needed beyond 8 processors per pool.
<P>
<H4><A NAME="SECTION00043020000000000000">
3.3.0.2 Choosing parameters</A>
</H4>
The number of processors in each group is controlled by the
command-line switches
<TT>-nimage</TT>, <TT>-npools</TT>, <TT>-nband</TT>,
<TT>-ntg</TT>, and <TT>-ndiag</TT> or <TT>-northo</TT>
(shorthands, respectively: <TT>-ni</TT>, <TT>-nk</TT>, <TT>-nb</TT>,
<TT>-nt</TT>, <TT>-nd</TT>).
As an example consider the following command line:
<PRE>
mpirun -np 4096 ./neb.x -ni 8 -nk 2 -nt 4 -nd 144 -i my.input
</PRE>
This executes a NEB calculation on 4096 processors, with 8 images (points in configuration
space, in this case) running at the same time, each of
which is distributed across 512 processors.
The k-points are distributed across 2 pools of 256 processors each,
3D FFT is performed using 4 task groups (64 processors each, so
the 3D real-space grid is cut into 64 slices), and the diagonalization
of the subspace Hamiltonian is distributed to a square grid of 144
processors (12x12).
<P>
Default values are <TT>-ni 1 -nk 1 -nt 1</TT>;
<TT>-nd</TT> is set to 1 if ScaLAPACK is not compiled,
otherwise it is set to the largest square integer smaller than or equal to half the number
of processors of each pool.
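<P>
As a smaller, hedged illustration (the processor counts and input file name <TT>pw.scf.in</TT> are purely indicative), a plain SCF run on 64 processors could use 4 k-point pools and a 3x3 linear-algebra grid:
<PRE>
mpirun -np 64 ./pw.x -nk 4 -nd 9 -i pw.scf.in
</PRE>
Each pool then contains 16 processors doing PW parallelization, and within each pool the subspace diagonalization is distributed over 9 of them, arranged as a 3x3 grid.
</P>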
<P>
<H4><A NAME="SECTION00043030000000000000">
3.3.0.3 Massively parallel calculations</A>
</H4>
For very large jobs (i.e. O(1000) atoms or more) or for very long jobs,
to be run on massively parallel machines (e.g. IBM BlueGene) it is
crucial to use in an effective way all available parallelization levels.
Without a judicious choice of parameters, large jobs will find a
stumbling block in either memory or CPU requirements. Note that I/O
may also become a limiting factor.
<P>
Since v.4.1, ScaLAPACK can be used to diagonalize block distributed
matrices, yielding better speed-up than the internal algorithms for
large (<!-- MATH
$> 1000\times 1000$
-->
> 1000 <TT>x</TT> 1000) matrices, when using a large number of processors
(> 512). You need to have <TT>-D__SCALAPACK</TT> added to DFLAGS
in <TT>make.inc</TT>, and LAPACK_LIBS set to something like:
<PRE>
LAPACK_LIBS = -lscalapack -lblacs -lblacsF77init -lblacs -llapack
</PRE>
The repeated <TT>-lblacs</TT> is not an error, it is needed!
<TT>configure</TT> tries to find a ScaLAPACK library, unless
<TT>configure -with-scalapack=no</TT> is specified.
If it doesn't find one, inquire with your system manager
about the correct way to link it.
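<P>
For orientation, the relevant lines of <TT>make.inc</TT> might then look as follows (a sketch only: the flags already present in DFLAGS depend on how <TT>configure</TT> detected your system, and the library names depend on your installation):
<PRE>
DFLAGS      = -D__FFTW -D__MPI -D__SCALAPACK
LAPACK_LIBS = -lscalapack -lblacs -lblacsF77init -lblacs -llapack
</PRE>
</P>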
<P>
A further possibility to expand scalability, especially on machines
like IBM BlueGene, is to use mixed MPI-OpenMP. The idea is to have
one (or more) MPI process(es) per multicore node, with OpenMP
parallelization within each node. This option is activated by <TT>configure -with-openmp</TT>,
which adds preprocessing flag <TT>-D__OPENMP</TT>
and one of the following compiler options:
<P>
<TABLE CELLPADDING=3>
<TR><TD ALIGN="LEFT">ifort</TD>
<TD ALIGN="LEFT"><TT>-openmp</TT></TD>
</TR>
<TR><TD ALIGN="LEFT">xlf</TD>
<TD ALIGN="LEFT"><TT>-qsmp=omp</TT></TD>
</TR>
<TR><TD ALIGN="LEFT">PGI</TD>
<TD ALIGN="LEFT"><TT>-mp</TT></TD>
</TR>
<TR><TD ALIGN="LEFT">ftn</TD>
<TD ALIGN="LEFT"><TT>-mp=nonuma</TT></TD>
</TR>
</TABLE>
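<P>
A typical (hedged) way to launch such a mixed run is to set the standard OpenMP environment variable OMP_NUM_THREADS to the number of cores to be used per MPI process and start correspondingly fewer MPI processes; the option needed to place one process per node depends on your MPI implementation and batch system. For example, on hypothetical 16-core nodes (the processor counts and input file name are illustrative):
<PRE>
export OMP_NUM_THREADS=16
mpirun -np 64 ./pw.x -nk 4 -i pw.scf.in
</PRE>
</P>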
<P>
OpenMP parallelization is currently implemented and tested for the following combinations of FFTs
and libraries:
<P>
<TABLE CELLPADDING=3>
<TR><TD ALIGN="LEFT">internal FFTW copy</TD>
<TD ALIGN="LEFT">requires <TT>-D__FFTW</TT></TD>
</TR>
<TR><TD ALIGN="LEFT">ESSL</TD>
<TD ALIGN="LEFT">requires <TT>-D__ESSL</TT> or <TT>-D__LINUX_ESSL</TT>, link
with <TT>-lesslsmp</TT></TD>
</TR>
</TABLE>
<P>
Currently, ESSL (when available) is faster than the internal FFTW.
<P>
<H3><A NAME="SECTION00043100000000000000">
3.3.1 Understanding parallel I/O</A>
</H3>
In parallel execution, each processor has its own slice of data
(Kohn-Sham orbitals, charge density, etc.), which has to be written
to temporary files during the calculation,
or to data files at the end of the calculation.
This can be done in two different ways:
<UL>
<LI>``distributed'': each processor
writes its own slice to disk in its internal
format to a different file.
</LI>
<LI>``collected'': all slices are
collected by the code to a single processor
that writes them to disk, in a single file,
using a format that doesn't depend upon
the number of processors or their distribution.
</LI>
</UL>
<P>
The ``distributed'' format is fast and simple,
but the data so produced is readable only by
a job running on the same number of processors,
with the same type of parallelization, as the
job that wrote the data, and only if all
files are on a file system that is visible to all
processors (i.e., you cannot use local scratch
directories: there is presently no way to ensure
that the distribution of processes across
processors will follow the same pattern
for different jobs).
<P>
Currently, <TT>CP</TT> uses the ``collected'' format;
<TT>PWscf</TT> uses the ``distributed'' format, but
has the option to write the final data file in
``collected'' format (input variable <TT>wf_collect</TT>)
so that it can be easily read by <TT>CP</TT> and by other
codes running on a different number of processors.
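<P>
For instance (a minimal sketch; only the relevant variable is shown), the ``collected'' format can be requested in the <TT>&amp;CONTROL</TT> namelist of the <TT>PWscf</TT> input:
<PRE>
&amp;CONTROL
   calculation = 'scf'
   wf_collect  = .true.
/
</PRE>
</P>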
<P>
In addition to the above, other restrictions to file
interoperability apply: e.g., <TT>CP</TT> can read only files
produced by <TT>PWscf</TT> for the <I>k</I> = 0 case.
<P>
The directory for data is specified by the input variables
<TT>outdir</TT> and <TT>prefix</TT> (the former can also be specified
via the environment variable ESPRESSO_TMPDIR):
<TT>outdir/prefix.save</TT>. A copy of pseudopotential files
is also written there. If some processor cannot access the
data directory, the pseudopotential files are read instead
from the pseudopotential directory specified in input data.
Unpredictable results may follow if those files
are not the same as those in the data directory!
<P>
<EM>IMPORTANT:</EM>
Avoid I/O to network-mounted disks (via NFS) as much as you can!
Ideally the scratch directory <TT>outdir</TT> should be a modern
Parallel File System. If you do not have any, you can use local
scratch disks (i.e. each node is physically connected to a disk
and writes to it) but you may run into trouble anyway if you
need to access your files that are scattered in an unpredictable
way across disks residing on different nodes.
<P>
You can use the input variable <TT>disk_io</TT> to reduce the
amount of I/O done by <TT>pw.x</TT>. Since v.5.1, the default value is
<TT>disk_io='low'</TT>, so the code will store wavefunctions
into RAM and not on disk during the calculation. Specify
<TT>disk_io='medium'</TT> only if you have too many k-points
and you run into trouble with memory; choose <TT>disk_io='none'</TT>
if you do not need to keep final data files.
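<P>
Putting the I/O-related variables together (a sketch; the directory and prefix names are hypothetical), the relevant part of a <TT>pw.x</TT> input could read:
<PRE>
&amp;CONTROL
   prefix  = 'mysystem'
   outdir  = '/scratch/myuser/'
   disk_io = 'low'
/
</PRE>
With these settings the data directory would be <TT>/scratch/myuser/mysystem.save</TT>.
</P>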
<P>
For very large <TT>cp.x</TT> runs, you may consider using
<TT>wf_collect=.false.</TT>, <TT>memory='small'</TT> and
<TT>saverho=.false.</TT> to reduce I/O to the strict minimum.
<P>
<HR>
<!--Navigation Panel-->
<A NAME="tex2html367"
HREF="node19.html">
<IMG WIDTH="37" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="next" SRC="next.png"></A>
<A NAME="tex2html363"
HREF="node15.html">
<IMG WIDTH="26" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="up" SRC="up.png"></A>
<A NAME="tex2html357"
HREF="node17.html">
<IMG WIDTH="63" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="previous" SRC="prev.png"></A>
<A NAME="tex2html365"
HREF="node1.html">
<IMG WIDTH="65" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="contents" SRC="contents.png"></A>
<BR>
<B> Next:</B> <A NAME="tex2html368"
HREF="node19.html">3.4 Tricks and problems</A>
<B> Up:</B> <A NAME="tex2html364"
HREF="node15.html">3 Parallelism</A>
<B> Previous:</B> <A NAME="tex2html358"
HREF="node17.html">3.2 Running on parallel</A>
<B> <A NAME="tex2html366"
HREF="node1.html">Contents</A></B>
<!--End of Navigation Panel-->
<ADDRESS>
Filippo Spiga
2016-10-04
</ADDRESS>
</BODY>
</HTML>