High-performance computers running analysis and modeling routines produce ever-larger volumes of output data, and that data has value only if someone can make sense of it.
Naturally, the best way to do this is to visualize the data. But that is often a problem: rendering such volumes of data into intelligible, and ideally manipulable, representations takes enormous computing resources.
Increasingly, dedicated systems of graphics processing units (GPUs) and new algorithms are being used to visualize data in novel and faster ways.
A perfect example of the elevated role of GPUs and computer graphics can be seen at the U.S. Department of Energy (DoE) Argonne National Laboratory. The lab has a graphics system nicknamed Eureka. The system includes 214 Intel Xeon quad-core CPUs and 212 NVIDIA Quadro GPUs, and serves as the primary visualization resource for the lab’s IBM Blue Gene/P supercomputer.
Specifically, Eureka works in conjunction with the IBM Blue Gene/P Intrepid computer in the lab’s Argonne Leadership Computing Facility (ALCF). The Intrepid, which is capable of a peak performance of 557 teraflops (557 trillion calculations per second), placed eighth in the most recent (November 2009) ranking of the world’s most powerful supercomputers. (A new list will be published in early June at the International Supercomputing Conference slated to be held in Hamburg, Germany.)
The Intrepid is used for DoE research and to support its annual Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which aims to advance scientific discovery.
The Intrepid’s output data from simulations and modeling is enormous, and the high-performance connection with Eureka can deliver nearly 80 billion bytes per second to and from disk. To visualize and analyze this data, scientists can choose from a variety of visualization software.
One example of the combined power of Intrepid and Eureka is the lab’s research to improve the efficiency of the fuel-burning process in new nuclear reactors. This research has great implications: If more of the fuel can be burned, then there is less waste, meaning that on-site storage facilities take longer to reach capacity and less used fuel is stored offsite.
Part of this work involves the study of hydrodynamics in liquid-metal-cooled reactor cores and examining coolant flow around fuel rod assemblies. The simulations give researchers a better understanding of the thermal mixing within advanced recycling reactor cores. In addition to the thermal hydrodynamics studies of coolant flow in and around fuel rod assemblies, another model looks at the neutronics of the core.
Simulation of the turbulent flow of coolant in an advanced recycling nuclear reactor (source: Argonne National Laboratory).
The Next Step
Taking visualization beyond what has traditionally been done, researchers at the lab and the University of Chicago have developed vl3, a volume-rendering toolkit that extends researchers’ real-time visualization capabilities beyond what OpenGL alone provides.
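The article does not describe vl3’s internals, but the core of most volume renderers, GPU-based or not, is ray marching with front-to-back alpha compositing: each ray through the volume accumulates color and opacity until it is effectively opaque. A minimal sketch of that accumulation step, using a deliberately simple, hypothetical transfer function (real tools let scientists tune the transfer function interactively):

```python
def transfer(sample):
    # Hypothetical transfer function for illustration: map a scalar
    # sample in [0, 1] to a (color, opacity) pair. Higher values are
    # brighter and more opaque.
    return sample, sample * 0.1

def composite(samples):
    """Front-to-back alpha compositing of scalar samples along one ray.

    Returns the accumulated (color, opacity) for the ray.
    """
    color_acc, alpha_acc = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        # Each new sample contributes only through the remaining
        # transparency (1 - alpha_acc) of what is already accumulated.
        color_acc += (1.0 - alpha_acc) * a * c
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= 0.99:
            # Early ray termination: once the ray is nearly opaque,
            # further samples cannot change the pixel visibly.
            break
    return color_acc, alpha_acc
```

A GPU renderer runs this loop in parallel for every pixel, sampling a 3D texture along each ray; the early-termination test is one of the standard optimizations that makes interactive rates possible on large volumes.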
The most recent vl3 enhancements allow researchers to stream high-resolution images created on Eureka to a remote site for examination on a high-resolution, tiled display. In a demonstration this past November at the SC09 conference in Portland, Ore., visualizations of a 4096 x 4096 x 4096 data volume were streamed using the DoE’s Energy Science Network (ESnet) from Eureka to a San Diego Supercomputer Center visualization cluster in a booth on the conference’s exhibit floor.
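To put that demonstration in scale: a 4096 x 4096 x 4096 volume contains roughly 68.7 billion voxels. Assuming, purely for illustration, one byte per voxel (the article does not state the sample precision), the raw volume is 64 GiB:

```python
# Back-of-envelope size of the 4096^3 volume streamed in the SC09 demo.
# The one-byte-per-voxel figure is an assumption for illustration only.
voxels = 4096 ** 3                  # 68,719,476,736 samples
size_gib = voxels * 1 / 2 ** 30     # bytes -> GiB at 1 byte per voxel
print(f"{voxels:,} voxels, {size_gib:.0f} GiB")  # 68,719,476,736 voxels, 64 GiB
```

At Eureka’s stated near-80-billion-bytes-per-second disk bandwidth, a volume of this size can be read in roughly a second; moving the raw data across the country would make the network the bottleneck, which is one motivation for rendering near the data and streaming only the resulting images, as in the demonstration.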
The demonstration was part of an effort to show end-to-end scientific workflows that leverage high-performance computing and visualization resources, high-speed networks, and advanced displays spread across the country. These workflows automatically create visualizations from simulation data as it is generated, and enable researchers to remotely view and control those visualizations on ultra-high-resolution displays.