PICKSC

Particle-in-Cell and Kinetic Simulation Software Center
Funded by NSF and SciDAC

MPI

MPI Codes:

ppic2
ppic3
pbpic2
pbpic3
pdpic2
pdpic3

These codes illustrate how to use domain decomposition with message-passing (MPI). This is the dominant programming paradigm for PIC codes today.

The 2D codes use only a simple 1D domain decomposition; the algorithm is described in detail in Refs. [2-3]. With 256 MPI nodes, typical execution times for the particle part of these codes are about 140 ps/particle/time-step for the 2D electrostatic code, about 400 ps/particle/time-step for the 2-1/2D electromagnetic code, and about 1.15 ns/particle/time-step for the 2-1/2D Darwin code.
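For orientation, the sketch below illustrates the 1D decomposition idea in C with MPI: each rank owns a contiguous slab of grid rows and exchanges boundary-crossing particle counts with its two neighbors. This is a minimal sketch with illustrative names and sizes, not the actual ppic2 source.

/* Minimal sketch of a 1D domain decomposition (illustrative only;
   the real skeleton codes contain the full particle manager). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
   int rank, nprocs;
   int ny = 512;                 /* total grid points in y (example value) */

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

   /* uniform partition of rows: this rank owns [edges_lo, edges_hi) */
   int nyp = (ny - 1)/nprocs + 1;                /* rows per rank            */
   int edges_lo = nyp*rank;
   int edges_hi = (edges_lo + nyp > ny) ? ny : edges_lo + nyp;

   /* neighbors in the (periodic) y direction */
   int below = (rank - 1 + nprocs) % nprocs;
   int above = (rank + 1) % nprocs;

   /* in a real step: particles with y outside [edges_lo, edges_hi) are
      packed into send buffers, then counts and particle data are
      exchanged with the two neighbors, e.g. with MPI_Sendrecv */
   int nsend_up = 0, nrecv_dn = 0;               /* filled by the push       */
   MPI_Sendrecv(&nsend_up, 1, MPI_INT, above, 0,
                &nrecv_dn, 1, MPI_INT, below, 0,
                MPI_COMM_WORLD, MPI_STATUS_IGNORE);

   printf("rank %d owns rows [%d,%d), neighbors %d/%d\n",
          rank, edges_lo, edges_hi, below, above);
   MPI_Finalize();
   return 0;
}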

The 3D codes use a simple 2D domain decomposition. With 512 MPI nodes, typical execution times for the particle part of these codes are about 150 ps/particle/time-step for the 3D electrostatic code, about 360 ps/particle/time-step for the 3D electromagnetic code, and about 1.0 ns/particle/time-step for the 3D Darwin code.
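A 2D decomposition of this kind can be pictured with standard MPI Cartesian-topology calls, as in the sketch below (one decomposed plane split across ranks, the remaining direction kept local). The skeleton codes set up their own partition, so this is only an illustration of the partitioning idea; all sizes are illustrative.

/* Sketch of a 2D process grid for a 3D code (illustrative only). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
   int rank, nprocs;
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

   /* factor the ranks into a 2D grid, e.g. 512 -> 32 x 16 */
   int dims[2] = {0, 0};
   MPI_Dims_create(nprocs, 2, dims);

   /* periodic Cartesian communicator over the decomposed plane */
   int periods[2] = {1, 1};
   MPI_Comm cart;
   MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

   int coords[2];
   MPI_Comm_rank(cart, &rank);
   MPI_Cart_coords(cart, rank, 2, coords);

   /* each rank now owns the sub-domain given by its coordinates;
      particle exchange involves the neighboring ranks in both
      decomposed directions */
   printf("rank %d has coordinates (%d,%d) in a %d x %d grid\n",
          rank, coords[0], coords[1], dims[0], dims[1]);

   MPI_Comm_free(&cart);
   MPI_Finalize();
   return 0;
}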

The CPUs (2.67 GHz Intel Nehalem processors) were throttled down to 1.6 GHz for these benchmarks.

Electrostatic:
1. 2D Parallel Electrostatic Spectral code:  ppic2
2. 3D Parallel Electrostatic Spectral code:  ppic3

Electromagnetic:
3. 2-1/2D Parallel Electromagnetic Spectral code:  pbpic2
4. 3D Parallel Electromagnetic Spectral code:  pbpic3

Darwin:
5. 2-1/2D Parallel Darwin Spectral code:  pdpic2
6. 3D Parallel Darwin Spectral code:  pdpic3

Figures below: Performance of the 2-1/2D electromagnetic and the 3D electrostatic, electromagnetic, and Darwin MPI codes as a function of the number of cores. The dashed red line is ideal scaling, blue shows particle time, and black shows total time. The degradation of total time is due to the all-to-all transpose in the FFT, which for large numbers of cores is dominated by message latency. The particle time continues to scale well.

[Figures: fppic3, fpbpic3, fpdpic3]
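The latency-bound transpose can be pictured with a plain MPI_Alltoall call, as in the sketch below: each rank exchanges one block of field data with every other rank, and since the blocks shrink as 1/(number of ranks)^2, the per-message latency eventually dominates. This is illustrative only and does not reproduce the skeleton codes' actual transpose routines.

/* Sketch of the all-to-all transpose used in a distributed FFT
   (illustrative only; sizes and data are placeholders). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
   int rank, nprocs;
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

   /* each rank sends one block to every other rank; blocks shrink as
      1/nprocs^2, so at large core counts message latency dominates */
   int blok = 1024;                               /* example block size */
   float *sbuf = malloc(sizeof(float)*blok*nprocs);
   float *rbuf = malloc(sizeof(float)*blok*nprocs);
   for (int i = 0; i < blok*nprocs; i++)
      sbuf[i] = (float) rank;                     /* stand-in for field data */

   MPI_Alltoall(sbuf, blok, MPI_FLOAT, rbuf, blok, MPI_FLOAT,
                MPI_COMM_WORLD);

   /* after the exchange each rank holds the data it needs to perform
      local 1D FFTs along the formerly decomposed direction */
   free(sbuf);
   free(rbuf);
   MPI_Finalize();
   return 0;
}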

Want to contact the developer?

Send mail to Viktor Decyk – decyk@physics.ucla.edu 
