The GPU version of OSIRIS is fully operational in one, two, and three dimensions, with support for most OSIRIS features. Dynamic GPU load balancing/tuning is included, and the code is fully MPI-ready and capable of running on thousands of GPU nodes, with tailored support for the Fermi and Kepler GPU generations.
Inertial (laser-initiated) fusion energy (IFE) holds incredible promise as a source of clean and sustainable energy. However, significant obstacles remain to obtaining and harnessing IFE in a controllable manner, including the fact that self-sustained ignition has not yet been achieved in IFE experiments. This shortfall is attributed in large part to excessive laser-plasma instabilities (LPIs) encountered by the laser beams.
LPIs such as two-plasmon decay and stimulated Raman scattering can absorb, deflect, or reflect laser light, disrupting the fusion drive, and can also generate energetic electrons that threaten to preheat the target. Nevertheless, IFE schemes like shock ignition (where a high-intensity laser is introduced toward the end of the compression pulse) could potentially exploit LPIs to generate energetic particles that create a useful fusion-driving shock. Developing an understanding of LPIs will therefore be crucial to the success of any IFE scheme.
The physics involved in LPI processes is complex and highly nonlinear, involving both wave-wave and wave-particle interactions. Modeling it requires fully nonlinear kinetic computer models, such as fully explicit particle-in-cell (PIC) simulations, which are computationally intensive and thus limit how many spatial and temporal scales can be modeled.
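To make the structure of an explicit PIC cycle concrete, here is a minimal one-dimensional electrostatic sketch in Python. This is not OSIRIS code; the normalization (density in units of the background, periodic domain), the linear weighting, and all names are illustrative. Each step performs the canonical cycle: deposit charge, solve for the field, gather the field to the particles, and push them.

```python
import numpy as np

def pic_step(x, v, dt, ng, L, qm=-1.0):
    """One explicit electrostatic PIC cycle on a periodic 1D grid:
    deposit -> field solve -> gather -> leapfrog push.
    x, v: particle positions/velocities; ng: grid cells; L: domain length."""
    dx = L / ng
    # 1. Charge deposition with linear (cloud-in-cell) weighting.
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    counts = np.zeros(ng)
    np.add.at(counts, i, 1.0 - w)
    np.add.at(counts, (i + 1) % ng, w)
    # Electron density relative to a uniform neutralizing ion background.
    rho = 1.0 - counts / (len(x) / ng)
    # 2. Periodic field solve via FFT: phi_k = rho_k / k^2, E = -d(phi)/dx.
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2     # k=0 mode (mean) set to zero
    E = np.real(np.fft.ifft(-1j * k * phi_k))
    # 3. Gather the field to particle positions with the same weighting.
    Ep = (1.0 - w) * E[i] + w * E[(i + 1) % ng]
    # 4. Leapfrog push and periodic wrap.
    v = v + qm * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

A quiet start (particles placed uniformly, at rest) deposits a uniform density, so the field vanishes and the plasma stays at rest, which is a quick sanity check on the deposition and field solve.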
Using highly optimized PIC codes, however, researchers will focus on fully kinetic simulations of the key basic high-energy-density science directly relevant to IFE. The ultimate goal is to develop a hierarchy of kinetic, fluid, and other reduced-description approaches that can model the full space and time scales and close the gap between particle-based simulations and current experiments.
The UCLA Simulation of Plasmas Group and the OSIRIS Consortium have been given access to Blue Waters, one of the most powerful supercomputing machines in the world. Blue Waters is supported by the National Science Foundation and the University of Illinois at Urbana-Champaign, and it is managed by the National Center for Supercomputing Applications. The UCLA group hopes to use their access to investigate scientific questions about inertial fusion energy, plasma-based acceleration, energetic particle generation in the cosmos, and magnetotail substorms.
Dr. Tsung’s presentation for the group at the 2014 Blue Waters Symposium may be viewed here.
- A desire for a code of ethics regarding acknowledgment when code is reused. This could be an acknowledgment in a publication, references to papers, or co-authorship of a publication; the latter might be appropriate if a shared code enabled a new research capability.
- A desire for standard problems to verify or validate new or modified codes, including a database of physics benchmarks with standard inputs. One would like to reproduce the results of a paper with an independently developed code easily, in hours rather than months.
- Desire for common display formats.
- Interoperability of software may be enabled via middleware, with simple interfaces.
- Desire for workflow interoperability between different codes, using output of one code as input to another.
The second major topic was how to enable software interoperability. The attendees discussed and compared the units, data structures, and objects used in the various codes. Two languages were in common use in the community, Fortran and C/C++; scripting languages (often Python) were sometimes used to glue components together. Fortran 2003 provides standardized interoperability with C, which simplifies language interoperability. Two types of units were in common use, dimensionless units and SI. Dimensionless units are used by those who adhere to the philosophy that a single simulation represents many actual physical systems. Translating units is generally straightforward but can be tricky, since not everything is well documented. Some codes used public units for input/output but different units internally. Among object-oriented codes, there was a wide variety of classes with different dependencies, and it was felt that only simple objects could actually interoperate at this time. The different parallel domain decompositions used in the codes could also pose a problem, but this was not discussed extensively.
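Translating between dimensionless and SI units typically amounts to multiplying by reference scales derived from the electron plasma frequency. The sketch below shows one common convention (time in units of 1/ω_pe, length in units of the skin depth c/ω_pe); the function names are illustrative and any particular code may normalize differently.

```python
import math

# Physical constants (SI, CODATA values).
E_CHARGE = 1.602176634e-19    # electron charge [C]
E_MASS   = 9.1093837015e-31   # electron mass [kg]
EPS0     = 8.8541878128e-12   # vacuum permittivity [F/m]
C_LIGHT  = 2.99792458e8       # speed of light [m/s]

def plasma_frequency(n_e):
    """Electron plasma frequency omega_pe [rad/s] for density n_e [m^-3]."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

def to_si_time(t_norm, n_e):
    """Convert a time in units of 1/omega_pe to seconds."""
    return t_norm / plasma_frequency(n_e)

def to_si_length(x_norm, n_e):
    """Convert a length in units of the skin depth c/omega_pe to meters."""
    return x_norm * C_LIGHT / plasma_frequency(n_e)
```

For a density of 1e27 m^-3, for example, one normalized length unit corresponds to a skin depth of roughly 0.17 micrometers, which is why documenting the reference density is essential when exchanging data between codes.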
The third major topic was how to enable interoperability of algorithms. There was a consensus that each new algorithm should be accompanied by a simple unit test that compares the algorithm with some analytic solution and can be run independently of the actual PIC code. The use of skeleton codes (or mini-apps) to illustrate how a collection of algorithms interoperates was also discussed. There was a consensus that PICKSC can serve as a focal point for PIC codes, containing pointers to the various codes in the community.
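A sketch of the kind of standalone unit test discussed might look like the following: a leapfrog particle push checked against the analytic trajectory of a particle starting at rest in a uniform field, where the error should be at the level of floating-point roundoff because leapfrog is exact for constant acceleration. This is an illustration, not an actual PICKSC artifact; the names and tolerance are assumptions.

```python
def leapfrog_push(x, v, a, dt, nsteps):
    """Leapfrog integrator: on entry, v is staggered half a step behind x."""
    for _ in range(nsteps):
        v += a * dt
        x += v * dt
    return x, v

def test_uniform_field():
    """Compare against the analytic solution x(t) = a*t^2/2 for a particle
    starting at rest in a uniform accelerating field."""
    a, dt, nsteps = 2.0, 1e-3, 1000
    t = dt * nsteps
    # Initialize v at t = -dt/2 for a particle at rest at t = 0.
    x, v = leapfrog_push(0.0, -a * dt / 2.0, a, dt, nsteps)
    assert abs(x - 0.5 * a * t**2) < 1e-9
```

Because the test depends only on the pusher itself, it can live alongside the algorithm and run in any code base, which is exactly the independence the attendees called for.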