Revolutionary Multicore Computing for Exascale

Keynote – Multicore World 2015

The challenge of achieving useful Exascale computing will demand innovations in multicore computer architecture, parallel programming models, and system software. But far more than the achievement of Petaflops computing five years ago, Exascale computing will break from the past, incorporating conceptual advances in execution models that enable dynamic adaptive control for dramatic improvements in efficiency and substantial increases in concurrency for scalability. It is likely that there will be two families of exascale computers: the first, prior to 2020, will be simple extrapolations of conventional heterogeneous architectures aimed at high Linpack benchmark ratings; the second, around 2023, will be a new class of architectures that supports runtime control dynamics within a global address space through message-driven computing. This revolutionary system class will provide active means of increasing reliability while reducing energy consumption, even as it favorably impacts application generality and user productivity. This presentation will describe the areas in which advances are anticipated and the concepts behind them, and will provide examples from experimental systems whose early results support this approach. The keynote will include current findings from the recently developed HPX-5 runtime system and the new XPI asynchronous programming interface. Questions and comments from the audience will be encouraged throughout the presentation.

——

HPC Supporting the Cloud Supporting HPC

Keynote – Workshop on HPC in the Cloud

The impact of cloud computing on many domains is dramatic and rapidly evolving. Its role in high performance computing is promising yet challenged. HPC spans a broad range of execution profiles, some of which, such as the SKA workload, can benefit from the throughput computing models likely to be supported by emerging cloud technologies and system architectures. Other forms of HPC computing may not be so well supported, in particular strong-scaled applications that demand tight coupling and well-articulated resource management and task scheduling. The uncertainty of asynchrony inherent in scaled-up systems inhabiting the cloud, combined with the variability of the subsystems themselves, makes a priori optimization nearly impossible, at least through conventional methods. Even outside the cloud, such problems pose a challenge, especially in the exascale arena expected in the next decade. As a consequence, new methods being explored now to address the increasing difficulties of future HPC may in fact contribute to the ability of cloud computing to support HPC. This presentation will discuss dynamic adaptive methods that may guide future HPC and may also contribute to new methods by which cloud computing can more fully serve the HPC workflow.

Thomas Sterling

School of Informatics and Computing – Pervasive Technology Institute – Indiana University

Brief Biography

Dr. Thomas Sterling holds the position of Professor of Informatics and Computing at the Indiana University (IU) School of Informatics and Computing and serves as Chief Scientist and Executive Associate Director of the Center for Research in Extreme Scale Technologies (CREST). Since receiving his Ph.D. from MIT in 1984 as a Hertz Fellow, Dr. Sterling has engaged in applied research in fields associated with parallel computing system structures, semantics, and operation in industry, government labs, and academia. He is best known as the “father of Beowulf” for his pioneering research in commodity/Linux cluster computing, for which he and his collaborators were awarded the Gordon Bell Prize in 1997. He was the PI of the HTMT Project, sponsored by NSF, DARPA, NSA, and NASA, which explored advanced technologies and their implications for high-end system architectures. Other research projects have included the DARPA DIVA PIM architecture project with USC-ISI, the Cray Cascade Petaflops architecture project sponsored by the DARPA HPCS Program, and the Gilgamesh high-density computing project at NASA JPL.

Thomas Sterling is currently engaged in research on the innovative ParalleX execution model for extreme-scale computing, which aims to establish the foundational principles guiding the co-design of future-generation Exascale computing systems by the end of this decade. ParalleX is the conceptual centerpiece of the XPRESS project, part of the DOE X-stack program, and has been demonstrated in proof of concept in the HPX runtime system software. Dr. Sterling is the co-author of six books and holds six patents. He was the recipient of the 2013 Vanguard Award and, in 2014, was named a Fellow of the American Association for the Advancement of Science.
