Parallel Worlds – Programming Models for Heterogeneous Parallel Systems
Mark Patane (NVIDIA), John Gustafson (A*STAR), Lance Brown (INTEL-Altera), Andrew Ensor (AUT)
Moderator: Peter Kogge (Notre Dame)
Monday 15 February, 14:45 – 15:20
In the data centre, multi-core architectures are common, but that homogeneity is changing: even low-priced single-board computers are now a hybrid mix of general-purpose multi-core CPUs, many-core accelerators and FPGAs. As these mixed-architecture computing platforms become more accessible, so does the pressure to write software against portable APIs and programming models. The many different architectures and component configurations have led to a plethora of open industry and de-facto standards efforts. One unified model is unlikely: no single model can cover the full spectrum of demands, from extreme scalability and optimal performance to ease of use. Instead, the panel will discuss the critical abstractions and directions we should be considering, and in turn identify which efforts we should pay attention to and why.
Concurrency – Patterns for Architecting Distributed Systems
Richard O’Keefe (Otago), Balazs Gerofi (RIKEN), Geoffrey C Fox (Indiana), Michael Barker (LMAX)
Moderator: Mark Moir (Oracle)
Tuesday 16 February, 14:40 – 15:20
The art of architecting and building distributed systems is about the composition of concurrent components. Patterns for architecting distributed systems can be implicit in the APIs used (e.g. MPI and OpenMP), explicit in larger system components such as messaging sub-systems, or directly supported by a language and its runtime (e.g. Erlang). Developing distributed concurrent systems is now commonplace. The panel will discuss trends, emergent paradigms, common patterns and their trade-offs for developing concurrent systems (distributed or not).
Making the SKA platform accessible for the science
Alex Szalay (Johns Hopkins), Markus Dolensky (ICRAR), Happy Sithole (CHPC), Chun-Yu Lin (NCHC)
Moderator: Nicolás Erdödy (Open Parallel)
Wednesday 17 February, 11:35 – 12:10
Most scientists are not software developers or engineers, nor are they interested in an in-depth understanding of the nitty-gritty of parallel architectures, distributed systems and writing production-quality code. They are, after all, interested in doing the science. The panel will discuss what is needed to make the SKA platform more accessible to scientists. How can and should it support the science? How might that be achieved? How should the platform operate, and how should the associated software be governed?
HPC role in Science and Economic Policy Development
Nicola Gaston (MacDiarmid), Happy Sithole (CHPC), Chun-Yu Lin (NCHC), Tshiamo Motshegwa (University of Botswana)
Moderator: Don Christie (Catalyst)
Wednesday 17 February, 14:50 – 15:20
Life, the biosphere, weather, finance, and our economic and social interactions are all complex systems. HPC has been pivotal in enabling dramatic advances in understanding complex networks through generative and agent-based simulations, machine learning, dynamical systems and network analysis, leading a Cambrian explosion of scientific advancement that is flowing into business and government. "Computational" is now often prefixed to traditional disciplines, highlighting the augmenting role of computing algorithms in experimentation, investigation and analysis. The panel will discuss the effect HPC is having on traditional disciplines in science, government and business. Are HPC Centres of Excellence needed for developing computational thinking and practice? What are the social, ethical and economic concerns implied by an algorithmic world?
Bridging HPC and Clouds – Physical Infrastructure, Networks, Storage and Applications
Peter Kogge (Notre Dame), Alex Szalay (Johns Hopkins), Geoffrey C. Fox (Indiana), John Gustafson (A*STAR)
Moderator: Robert O’Brien
Wednesday 17 February, 16:10 – 16:50
Today’s HPC is tomorrow’s general computing infrastructure. Data-intensive applications in finance, engineering and science are the tip of the iceberg: the functionality of many systems is increasingly driven by data. Trends in consumer applications in big data, machine learning (ML) and deep neural networks (DNN) are pushing at the multi-core CPU boundary, and the wider deployment of sensors, robotics and pervasive computing (IoT) will demand new approaches. All are harbingers of change. Speed, data volume and answers-per-watt metrics will drive demand for HPC clouds (IaaS and PaaS). The panel will discuss what is needed to enable HPC clouds. How should current models of architecting physical and software infrastructure change? How do networking and storage approaches factor into this? What opportunities do commercial off-the-shelf (COTS) components offer to take a different approach than is typical of HPC systems?