13 February 2016 – (another) UPDATE…!! 

Schedule_Multicore_World_2016_v4.1_13.2.16

PROGRAM

Day 1: Monday 15 February 2016

 

8:30 – 9:00 Opening Welcome – Setting the Scene

Nicolás Erdödy (Open Parallel)

— —

9:00 – 9:30 “Bridging Islands – You build it, you run it, and make it scale”

Robert O’Brien – New Zealand

 

Technology advances develop in islands of practice, driven by the demands of a particular market context. Multi-core CPUs and many-core GPUs are the norm in today’s hardware; tailored runtimes, associated software architectures and development practices are not. That is changing. Data-intensive applications, often with time-critical demands, are pushing that boundary. A challenge, therefore, is to match the workload demands of cloud computing with compute-intensive pipelines.

The scaling of cloud-based mobile services and big data is creating a renaissance in software development: distributed processing and distributed data, programming languages, software architecture, physical infrastructure and process improvement. This is exposing more developers to old ideas. The rush to do more, be more agile and deal with scale in the Cloud is also flowing into Enterprise practice. Likewise, practice is being redefined and refined as the Cloud extends out to the edge.

In this context, on-demand HPC is part of the infrastructure equation, in the data centre and at the edge. Understanding our world with more data-intensive applications requires bridging Cloud, Enterprise and IoT with HPC.

— —

 

9:30 – 10:30  “The past and future of multi-core at the high end”

Keynote – Prof Peter Kogge (University of Notre Dame) – USA

Moore’s Law has driven computing for almost 50 years, but perhaps the biggest architectural change occurred in 2004, with the rise of multi-core. This talk will discuss the fundamental reasons that forced the change, and then compare how the characteristics of cores have changed since then. In addition, the relationship between core characteristics and performance at the high end (through data from both TOP500 and GRAPH500) will be explored, with discussions of the current set of design issues architects are wrestling with. The talk will conclude with some projections on what a multi-core exascale machine of the future might look like, what new architectural features may emerge, and what might be the effect on software for such machines.

— —

11:00 – 11:55  “A Radical Approach to Computation with Real Numbers”

Keynote – Prof John Gustafson (A*STAR) – Singapore

If we are willing to give up compatibility with IEEE 754 floats and design a number format with goals appropriate for 2016, we can achieve several goals simultaneously: Extremely high energy efficiency and information-per-bit, no penalty for decimal operations instead of binary, rigorous bounds on answers without the overly pessimistic bounds produced by interval methods, and unprecedented high speed up to some precision. This approach extends the ideas of unum arithmetic introduced two years ago by breaking completely from the IEEE float-type format, resulting in fixed bit size values, fixed execution time, no exception values or “gradual underflow” issues, no wasted bit patterns, and no redundant representations (like “negative zero”). As an example of the power of this format, a difficult 12-dimensional nonlinear robotic kinematics problem that has defied solvers to date is quickly solvable with absolute bounds. Also unlike interval methods, it becomes possible to operate on arbitrary disconnected subsets of the real number line with the same speed as operating on a simple bound.
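
As a small, hedged illustration of two of the IEEE 754 issues the abstract alludes to (redundant representations such as “negative zero”, and the pessimism of naive interval bounds), the following Python sketch is illustrative only and is not material from the talk:

```python
import struct

# IEEE 754 has redundant representations: +0.0 and -0.0 compare equal
# yet occupy distinct bit patterns -- one of the redundancies the
# proposed format eliminates.
plus_zero = struct.pack('>d', 0.0).hex()    # '0000000000000000'
minus_zero = struct.pack('>d', -0.0).hex()  # '8000000000000000'
print(plus_zero, minus_zero, 0.0 == -0.0)

# Naive interval arithmetic gives rigorous but pessimistic bounds:
# the true range of x - x is exactly 0, yet subtracting the interval
# [1, 2] from itself yields [-1, 1] because correlation is forgotten.
lo, hi = 1.0, 2.0
diff_lo, diff_hi = lo - hi, hi - lo
print((diff_lo, diff_hi))   # (-1.0, 1.0), not (0.0, 0.0)
```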

— —

11:55 – 12:45  “SKA Signal Processing and Compute Island Designs”

Dr Andrew Ensor (AUT University) – New Zealand

The Square Kilometre Array is the largest mega-Science project of the next decade and represents numerous firsts for New Zealand. This talk will outline the project, its unprecedented computing challenges, and New Zealand’s key involvement. Design work for its eventual computer system includes prototyping many-core processors, GPUs and FPGAs, new compression and communication algorithms, next-generation Big Data boards, and advances in middleware and software development environments.

— —

13:40 – 13:50 Ask an Expert #1

13:50 – 14:05  Fireside Chat with Geoffrey C. Fox (Indiana) – Interviewer: Richard O’Keefe (Otago)

— —

14:05 – 14:45  “Methods of Power Optimization and Code Modernization for High Performance Computing Systems”

drs. Martin Hilgeman (Dell) – Netherlands

With all the advances in massively parallel and multi-core computing with CPUs and accelerators, it is often overlooked whether the computational work is being done in an efficient manner. This presentation drills down into the incompatibility between Moore’s Law and Amdahl’s Law, and the challenge this poses for a system builder trying to deliver energy-efficient parallel application performance. It also covers some of the latest trends in code modernization for multi-core systems, in order to unleash the performance potential contained in clusters of COTS components with a fast, low-latency network.
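
To make the Moore’s-Law-versus-Amdahl’s-Law tension concrete, here is a minimal, illustrative Python sketch (not from the talk) of Amdahl’s law for a workload that is 95% parallelizable:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work is
# parallelizable. Even a small serial fraction caps the benefit of
# adding cores, which is the tension with ever-growing core counts.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 8, 64, 512):
    # With 95% parallel work, 512 cores give less than a 20x speedup.
    print(cores, round(amdahl_speedup(0.95, cores), 1))
```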

— —

14:45 – 15:20  “Parallel Worlds – Programming Models for Heterogeneous Parallel Systems”

Panel Discussion: Mark Patane (NVIDIA), John Gustafson (A*STAR), Lance Brown (INTEL-Altera), Andrew Ensor (AUT). Moderator: Peter Kogge (Notre Dame)

In the data centre, multi-core architectures are now common, but that homogeneity is changing. To illustrate, even low-priced single-board computers are a hybrid mix of general-purpose multi-core CPUs, many-core accelerators and FPGAs. As these mixed-architecture computing platforms become more accessible, so does the pressure to write software against portable APIs and programming models. The many different architectures and component configurations have led to a plethora of open industry and de-facto standards efforts. A single unified model that covers the full spectrum of demands, from extreme scalability to optimal performance, tempered with ease of use, is unlikely. Instead the panel will discuss the critical abstractions and directions we should be considering, and in turn identify which efforts we should pay attention to and why.

— —

15:40 – 16:20  “Meeting High Memory Bandwidth Challenges in Heterogeneous Computing in an OpenCL Environment”

Lance Brown (Intel Corporation | Military, Aerospace, Government (MAG) Business Unit) – USA

When designing many HPC deployments, data movement between processing elements, along with on-chip bandwidth, is among the greatest challenges. FPGAs have inherent advantages for inter-element data movement, with rich and flexible transceiver options that CPU and GPU elements do not have. Intel Programmable Solutions Group (PSG) announced at SC ’15 the addition of the new Stratix 10 System in Package (SiP), allowing up to 8 GB of on-chip DRAM and 1 TB/s of memory bandwidth using Intel’s patented Embedded Multi-die Interconnect Bridge (EMIB) technology. This session will explore how Stratix 10 SiP, when introduced to CPU and GPU systems, can increase overall peak TFLOP/Watt performance, increase data bandwidth ingress/egress and lower overall total power, while maintaining a standard OpenCL programming environment. An example system performing Convolutional Neural Network learning will be highlighted.
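
For readers unfamiliar with what a “standard OpenCL programming environment” looks like from the host side, here is a minimal, hedged sketch using the pyopencl bindings; it is a generic vector-add, and FPGA-specific offline compilation flows (which differ per vendor) are not shown:

```python
import numpy as np
import pyopencl as cl

# Generic OpenCL host code: the same pattern targets CPUs, GPUs or
# (with vendor offline compilers) FPGA bitstreams.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, kernel_src).build()

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)  # enqueue the kernel
c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)                     # read back the result
assert np.allclose(c, a + b)
```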

— —

16:20 – 16:55  “Who would win in a multicore war: the “good guys” or the “bad guys”?”

Dr. Mark Moir (Oracle Labs) – USA & New Zealand

This talk will neither answer the question of its title, nor settle the question of exactly who these “good guys” and “bad guys” are.  Instead, it will include some observations about the role of multicore computing for various purposes that might be considered “good” or “bad”, depending on context and perspective.  Possible topics include: cryptocurrencies such as Bitcoin, denial of service attacks and defenses against them, encryption and privacy.
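
As one concrete, hedged example of the kind of workload the talk may touch on, Bitcoin-style proof-of-work is a brute-force hash search, which is why it parallelizes so naturally across cores (for “good guys” and “bad guys” alike). The toy Python sketch below is illustrative only and is not the speaker’s material:

```python
import hashlib

def proof_of_work(block: bytes, difficulty_bits: int = 16) -> int:
    """Search for a nonce whose double-SHA-256 hash has `difficulty_bits`
    leading zero bits -- a toy version of Bitcoin mining. Each nonce trial
    is independent, so the search splits trivially across many cores."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(proof_of_work(b"example block header"))
```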

— — — —

 

Day 2: Tuesday 16 February 2016

 

8:30 – 8:45   General Information and Recap

8:45 – 9:00  “Unums Implementations”

Prof John Gustafson (A*STAR) – Singapore

Short talk: Report on the progress of efforts to put unums into C, Julia and Python.

— —

9:00 – 9:30 “Weaving a Runtime Fabric: The Open Parallel Stack”

Dr. Richard O’Keefe (University of Otago) – New Zealand

The Open Parallel Stack (TOPS) is something we need but don’t have yet. The idea is to assemble a framework, from the operating system up, that enables testing and debugging of high performance programs at small to medium scale before deploying them to high-demand systems like the SKA.

The intention is not to do research or major development but to collect, preserve, and build on Open Source work.

— —

9:30 – 10:30 “Big Data HPC Convergence”

Keynote – Prof Geoffrey C. Fox (Indiana University) – USA

Two major trends in computing systems are the growth in high performance computing (HPC) with an international exascale initiative, and the big data phenomenon with an accompanying cloud infrastructure of well publicized, dramatic and increasing size and sophistication. In studying and linking these trends one needs to consider multiple aspects: hardware, software, applications/algorithms and even broader issues like business models and education. In this talk we study in detail a convergence approach for software and applications/algorithms and show what hardware architectures it suggests. We start by dividing applications into data plus model components and classifying each component (whether from Big Data or Big Simulations) in the same way. This leads to 64 properties divided into 4 views, which are Problem Architecture (Macro pattern); Execution Features (Micro patterns); Data Source and Style; and finally the Processing (runtime) View. We discuss convergence software built around HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) http://hpc-abds.org/kaleidoscope/ and show how one can merge Big Data and HPC (Big Simulation) concepts into a single stack. We give examples of data analytics running on HPC systems, including details on persuading Java to run fast. Some details can be found at http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf
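
As an illustrative (not authoritative) sketch of the classification idea, an application might be described by its data and model components across the four views named above; the application and property values below are invented placeholders, not the actual 64-property taxonomy:

```python
# Toy classification of one application (data + model) across the four
# views named in the abstract. The specific property values here are
# hypothetical placeholders, not the real 64-property scheme.
application = {
    "name": "astronomy image pipeline",
    "data": {
        "Data Source and Style": ["streaming", "archived files"],
        "Execution Features": ["petabyte volume", "regular access pattern"],
    },
    "model": {
        "Problem Architecture": ["pleasingly parallel map", "global reduction"],
        "Processing (runtime) View": ["bulk synchronous", "dataflow"],
    },
}

for component, views in application.items():
    if isinstance(views, dict):
        for view, properties in views.items():
            print(f"{component:>5} | {view:<28} | {', '.join(properties)}")
```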

— —

11:00 – 11:55  “Taming big data with micro-services”

Keynote – Giuseppe Maxia (VMware) – Spain

Presented by Robert O’Brien

Big data is a daunting concept, as it usually means more data than we can handle. When looked at more closely, though, big data is just a very large collection of small data pieces.

There are solutions for this problem, the most widespread of which uses clusters of computers to tackle a problem in parallel. Think of Hadoop, where thousands of nodes can analyze data concurrently and report a unified result. But the growing challenges of data collection make such a solution hard to adapt. What if your data requires a different kind of software that does not fit with Hadoop? What if you discovered a completely new algorithm that lets you get results more quickly and reliably, but you are stuck with a humongous cluster of inflexible software?

Open your eyes to the world of micro-services, where your software takes the shape of clusters in seconds rather than days or months. The old method of divide-and-conquer has never felt this exciting!
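
As a hedged illustration of the divide-and-conquer idea (not the speaker’s actual tooling), here is a small Python sketch that farms independent “small data pieces” out to a pool of workers, which is essentially what a cluster of micro-services does at larger scale:

```python
from concurrent.futures import ProcessPoolExecutor

def analyse(chunk):
    # Stand-in for whatever analysis a single small service would run
    # on its own piece of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with ProcessPoolExecutor() as pool:              # one worker per core
        partials = list(pool.map(analyse, chunks))   # scatter the pieces
    print(sum(partials))                             # gather a unified result
```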

— —

11:55 – 12:20  “Accelerating AI using GPUs”

Michael Wang (NVIDIA) – Australia

A brief overview of deep neural network (DNN) training on GPUs, from supercomputing and Big Data, to ultra-low power embedded platforms that enable smarter autonomous machines.

We will look at the latest developments in GPU architectures that enable these discoveries, as well as CUDA-accelerated software libraries that are accelerating machine learning research and rapid innovation.
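
For orientation, the core of DNN training is dense linear algebra plus elementwise non-linearities, which is exactly what GPUs accelerate. A minimal numpy sketch of a one-hidden-layer network trained with plain gradient descent follows; it is illustrative only and not code from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 100))           # a mini-batch of 64 samples
y = rng.standard_normal((64, 10))            # regression targets
w1 = rng.standard_normal((100, 256)) * 0.01
w2 = rng.standard_normal((256, 10)) * 0.01

for step in range(100):
    h = np.maximum(x @ w1, 0)                # hidden layer with ReLU
    pred = h @ w2
    loss = ((pred - y) ** 2).mean()
    # Backpropagation: these same matrix products dominate GPU runtime.
    grad_pred = 2 * (pred - y) / y.size
    grad_w2 = h.T @ grad_pred
    grad_h = grad_pred @ w2.T
    grad_w1 = x.T @ (grad_h * (h > 0))
    w1 -= 0.1 * grad_w1
    w2 -= 0.1 * grad_w2

print(round(float(loss), 4))
```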

— —

12:20 – 12:45  “A Laconic HPC with an Orgone Accumulator”

Lev Lafayette (University of Melbourne) – Australia

After four years of operation, the University of Melbourne’s high performance computer system, “Edward”, is being gradually decommissioned this year. Its replacement, named “Spartan”, has a novel design based on metrics of Edward’s usage, which showed a large proportion of single-core tasks.
In such circumstances the new HPC system will be smaller in terms of bare metal, but larger in the sense that it will connect to the NeCTAR (National eResearch Collaboration Tools and Resources project) OpenStack research cloud for single-core and shared-memory multi-core tasks. In addition to describing the architectural differences, there will also be some discussion of the efficiency and effectiveness of the two multicore implementations.

— —

13:40 – 14:10  “State of the Art: Apache Cassandra”

Aaron Morton (Apache Cassandra Committer) – New Zealand

Apache Cassandra provides a scalable, highly available, geographically distributed data store that has proven itself in production environments. It implements an Amazon Dynamo-style model for cluster management, with a Bigtable-inspired storage engine that uses a Staged Event Driven Architecture (SEDA). While very successful, there are alternatives: ScyllaDB launched in 2015 as a Cassandra-compatible platform underpinned by a Thread Per Core (TPC) architecture. Work is now under way to investigate a Thread Per Core architecture in Cassandra.

In this talk Aaron Morton, Co-Founder at The Last Pickle and Apache Cassandra Committer, will discuss the current Apache Cassandra architecture and possible future directions. The talk will examine the cluster design before diving down to the node level and comparing the SEDA and TPC approaches.
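
For context on what using Cassandra looks like from application code, here is a minimal, hedged sketch with the DataStax Python driver; the contact point, keyspace, table and column names are placeholders, not anything from the talk:

```python
from datetime import datetime
from cassandra.cluster import Cluster

# Connect to a (placeholder) local node; the driver discovers the rest
# of the ring and routes requests by partition key.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.events (
        sensor_id text, ts timestamp, reading double,
        PRIMARY KEY (sensor_id, ts))
""")

insert = session.prepare(
    "INSERT INTO demo.events (sensor_id, ts, reading) VALUES (?, ?, ?)")
session.execute(insert, ("s-42", datetime.utcnow(), 21.5))

for row in session.execute(
        "SELECT ts, reading FROM demo.events WHERE sensor_id = 's-42'"):
    print(row.ts, row.reading)
```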

— —

14:10 – 14:40  “Multi-core Systems in a Low Latency World”

Michael Barker (LMAX Exchange) – New Zealand

Building systems with low and predictable response times using modern software and hardware is notoriously tricky.  Unfortunately multi-core systems don’t provide direct benefit for reducing response time, but do provide some interesting opportunities for scaling up.  This talk will look at some of the ways that LMAX Exchange have addressed these challenges to hit typical latencies of 100µs and become one of the world’s fastest foreign currency exchanges. We will introduce an open source tool called the Disruptor that we use within the core of our Exchange.

This is a technically detailed talk; a basic understanding of concurrency primitives (e.g. locks, condition variables, memory barriers) will help the audience follow it.
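
The Disruptor itself is a Java library; purely as a hedged, much-simplified analogue of its central idea (a pre-allocated ring buffer coordinated by sequence counters rather than a lock-based queue), here is a single-producer/single-consumer sketch in Python:

```python
import threading

SIZE = 1024                      # power of two, so index = seq & (SIZE - 1)
ring = [None] * SIZE             # pre-allocated slots: no per-message allocation
published = -1                   # last sequence the producer has made visible
consumed = -1                    # last sequence the consumer has processed

def producer():
    global published
    for seq in range(10_000):
        while seq - consumed >= SIZE:   # wait while the ring is full
            pass
        ring[seq & (SIZE - 1)] = seq * seq
        published = seq                 # "publish" by advancing the sequence

def consumer():
    global consumed
    total = 0
    for seq in range(10_000):
        while published < seq:          # spin until the slot is published
            pass
        total += ring[seq & (SIZE - 1)]
        consumed = seq
    print(total)

# Note: this toy relies on CPython's GIL for memory visibility; the real
# Disruptor uses memory barriers and cache-friendly layout for its speed.
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start(); t1.join(); t2.join()
```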

— —

14:40 – 15:20 – “Concurrency – Patterns for Architecting Distributed Systems”

Panel Discussion: Richard O’Keefe (Otago), Balazs Gerofi (RIKEN), Geoffrey C Fox (Indiana), Michael Barker (LMAX). Moderator: Mark Moir (Oracle)

The art of architecting and building distributed systems is about the composition of concurrent components. Patterns for architecting distributed systems can be implicit in the APIs used (e.g. MPI and OpenMP), explicit in larger system components such as messaging sub-systems, or explicitly supported by a language and its runtime (e.g. Erlang). Developing distributed concurrent systems is now commonplace. The panel will discuss trends, emergent paradigms, common patterns and their trade-offs for developing concurrent systems (distributed or not).
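
As one small, hedged example of a pattern being implicit in the API used, the message-passing style of MPI looks like this with the mpi4py bindings (a two-rank ping-pong; illustrative only):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Point-to-point message passing: the concurrency pattern (SPMD with
# explicit sends and receives) is baked into the API itself.
if rank == 0:
    comm.send({"payload": list(range(5))}, dest=1, tag=0)
    reply = comm.recv(source=1, tag=1)
    print("rank 0 got:", reply)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send(sum(msg["payload"]), dest=0, tag=1)

# Run with: mpiexec -n 2 python ping_pong.py
```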

15:40 – 15:55  Fireside Chat with Alex Szalay (Johns Hopkins). Interviewer: Nicola Gaston (MacDiarmid Institute)

— —

15:55 – 16:30  “An Overview of RIKEN’s System Software Research and Development Targeting the Next Generation Japanese Flagship Supercomputer”

Dr. Balazs Gerofi (RIKEN) – Japan

RIKEN Advanced Institute for Computational Science has been appointed by the Japanese government as the main organization leading the development of Japan’s next generation flagship supercomputer, the successor to the K computer. Part of this effort is to design and develop a system software stack that suits the needs of future extreme-scale computing. In this talk, we first provide a high-level overview of RIKEN’s system software stack effort, covering various topics including operating systems, I/O and networking. We then narrow the focus to OS research and describe IHK/McKernel, our hybrid operating system framework. IHK/McKernel runs Linux with a light-weight kernel side-by-side on compute nodes, with the primary motivation of providing scalable, consistent performance for large scale HPC simulations while at the same time retaining a fully Linux-compatible execution environment. We detail the organization of the stack and provide preliminary performance results.

— — 

 

16:30 – 17:00  “Science, Scientific Infrastructure, and Individual Incentives”

Dr. Nicola Gaston (The MacDiarmid Institute for Advanced Materials and Nanotechnology) – New Zealand

The conduct of scientific research depends crucially on underlying technologies and infrastructure that enable measurement, calculation, and analysis. Using HPC as an example, I’ll discuss the progress of areas of fundamental research as a function of the development of underlying technologies, with particular reference to computational chemistry and physics. I’ll invite you to think about how technologies, and the cultures around them, enable scientific progress, with reference to the mobility of scientists, knowledge transfer, transparency in science communication, the commercialization of science, and the development of conversations about scientific ethics in times of culture change.

— — — —

 

 

Day 3: Wednesday 17 February 2016

 

8:30 – 8:45   General Information and Recap

8:45 – 9:15  “SKA’s Science Data Processor Design in Cloudy Regions”

Markus Dolensky (International Centre for Radio Astronomy Research – ICRAR) – Australia

Despite the SKA rebaselining and the inherent data parallelism, it is apparent that existing storage and processing systems do not scale well given a correlator output rate of 1 TByte/s. Data flow management is a key element of the solution. The presentation discusses design considerations relevant to the SKA Science Data Processor and reports on practical experiments in HPC and Cloud environments. Connectivity to regional compute infrastructure is another issue but, once resolved, is a means to overcome the SKA power limit.
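
To put the quoted correlator output rate in perspective, a back-of-the-envelope calculation (illustrative only):

```python
# 1 TByte/s of correlator output, if sustained, accumulates to roughly:
rate_tb_per_s = 1.0                  # TByte per second (figure from the abstract)
seconds_per_day = 86_400
pb_per_day = rate_tb_per_s * seconds_per_day / 1_000    # ~86.4 PB per day
eb_per_year = pb_per_day * 365 / 1_000                  # ~31.5 EB per year
print(f"{pb_per_day:.1f} PB/day, {eb_per_year:.1f} EB/year")
```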

— —

9:15 – 10:15  “Data Intensive Discoveries in Science: the Fourth Paradigm and Beyond”

Keynote – Prof Alex Szalay (Johns Hopkins University) – USA

The talk will describe how science is changing as a result of the vast amounts of data we are collecting, from gene sequencers to telescopes and supercomputers. This “Fourth Paradigm of Science”, predicted by Jim Gray, is moving at full speed and is transforming one scientific area after another. The talk will present various examples of the similarities among the emerging new challenges and how this vision is being realized by the scientific community. Scientists are increasingly limited by their ability to analyze the large amounts of complex data available. These data sets are generated not only by instruments but also by computational experiments; the sizes of the largest numerical simulations are on par with data collected by instruments, crossing the petabyte threshold this year. The importance of large synthetic data sets is increasingly significant, as scientists compare their experiments to reference simulations. All disciplines need a new “instrument for data” that can deal not only with large data sets but also with the cross product of large and diverse data sets. There are several multi-faceted challenges related to this conversion, e.g. how to move, visualize, analyze and, in general, interact with petabytes of data.

— —

10:45 – 11:25  “Building a Sustainable Model of HPC Services: The Challenges and Benefits.”

Dr Happy Sithole (Director, Centre for High Performance Computing) – South Africa

South Africa started looking at the development of HPC capabilities in 2006. In 2007, the Centre for High Performance Computing started production, providing HPC resources to the broader South African research community. Building the infrastructure and the community comes with a number of challenges, technological and political, which have to be addressed carefully in order to provide a sustainable system that supports both the economic and academic ambitions of a developing nation. In this presentation, we will discuss the South African experience of developing cyber-infrastructure and extend this to initiatives around the Southern African region.

— —

11:25 – 11:55  “Making the SKA platform accessible for the science”

Panel Discussion: Alex Szalay (Johns Hopkins), Markus Dolensky (ICRAR), Happy Sithole (CHPC), Duncan Hall (DIA). Moderator: Nicolás Erdödy (Open Parallel)

Most scientists are not software developers or engineers, nor are they interested in an in-depth understanding of the nitty-gritty of parallel architectures, distributed systems and writing production-quality code. They are, after all, interested in doing the science. The panel will discuss what is needed to make the SKA platform more accessible to scientists: how it can and should support the science, how that might be achieved, how the platform should operate, and how the associated software should be governed.

— —

11:55 – 12:30  “Informatics in New Zealand Forestry – Data challenges for 1 billion Radiata trees”

Bryan Graham (Science Leader, Forest Industry Informatics, Scion) – New Zealand

New Zealand is currently the world’s largest exporter of raw Radiata pine logs. Each log has to be individually managed and measured throughout its ~28-year lifespan, and then on through the value chain. Historically this has been done using manual measurements, estimation, and “gut feel”. As new technologies are implemented, the scale and complexity of the data being collected, processed, and used to drive decision-making is escalating. Coupled with increasingly demanding eResearch requirements, developing and delivering decision-making systems to industry is challenging. This presentation highlights some of the key challenges facing informatics at Scion, the main provider of forest research in New Zealand.

— —

12:30 – 12:45  Fireside Chat with Peter Kogge (Notre Dame). Interviewer: Nicolás Erdödy

— —

 

13:40 – 13:50  Ask an Expert #2

— —

13:50 – 14:25 “Developing HPC Research Infrastructure and Capacity Building for SKA Preparedness in a Developing Country”

Dr. Tshiamo Motshegwa (University of Botswana) – Botswana

This talk discusses education, infrastructure developments, research, policy developments and collaborations, together with the opportunities and challenges in developing HPC capacity in Botswana.

Botswana, represented by the University of Botswana, is participating in the SADC regional collaborative framework for High Performance Computing and Big Data – a framework that aims to develop regional capacity in HPC through infrastructure development, R&D, building strategic partnerships and governance.

Through this framework, the Department of Computer Science is executing aspects of the University’s research strategy and strategic plan by setting up an HPC data centre and infrastructure to provide HPC-as-a-Service to the university. There is an immediate need to support curriculum development and teaching in the department in the areas of Distributed Systems, Parallel and Concurrent Programming, and Computational Science and Engineering (CSE), to support a base for multidisciplinary teaching and potential research in the Faculty of Science in the areas of Computational Chemistry, Physics and Geology, Mathematical Modelling, Earth Observation and Geographical Information Systems, and in the Faculty of Engineering.

There is also an immediate need to develop technical and human capacity in HPC to support research for current MSc and PhD programmes, and so support the growth of a pipeline of researchers. There is a need to build infrastructure to support potential national and regional projects, and to prepare for Botswana’s participation in and commitment to global science projects such as the Square Kilometre Array (SKA) astrophysics project.

The infrastructure being developed is also part of a wider long-term national vision: for the university to provide national leadership in HPC, to grow this service organically, and ultimately to serve country stakeholders such as research institutes and the Botswana Innovation Hub science park tenants as a national service.

The Department of Computer Science is also working with the Department of Research Science and Technology (DSRT) from the Ministry of Science and Technology at a policy level in the formulation of the country’s Space Science and Technology strategy that covers development of supporting platforms such as HPC cyber-infrastructure.

— —

14:25 – 14:45  “What is OpenHPC?”

Balazs Gerofi (RIKEN) – Japan

An overview of this Linux Foundation collaborative project and its participants.

— —

14:45 – 15:20  “HPC role in Science and Economic Policy Development”

Panel Discussion: Nicola Gaston (MacDiarmid), Happy Sithole (CHPC), Chun-Yu Lin (NCHC), Tshiamo Motshegwa (University of Botswana). Moderator: Don Christie (Catalyst)

Life, the biosphere, weather, finance, and our economic and social interactions are all complex systems. HPC has been pivotal in enabling dramatic advances in understanding complex networks: generative and agent-based simulations, machine learning, dynamical systems and network analysis are leading a Cambrian explosion of scientific advancement that is flowing into business and government. “Computational” is now often prefixed to traditional disciplines, highlighting the augmenting role of computing algorithms in experimentation, investigation and analysis. The panel will discuss the effect HPC is having on traditional disciplines in science, government and business. Are HPC Centres of Excellence needed for developing computational thinking and practice? What are the social, ethical and economic concerns implied by an algorithmic world?

— —

15:40 – 16:10  “Innovative Applications, Scientific Research and simPlatform in NCHC”

Dr. Chun-Yu Lin (National Center for High-performance Computing -NCHC, National Applied Research Laboratories – NARLabs) – Taiwan

As the supercomputing center in Taiwan, NCHC plays an enabling role in bridging the gap between research and practice. Several innovative applications from collaborations with academia, government and industry are presented in this talk. We also introduce research related to bio-imaging and materials science that has great practical impact and places huge demands on computing methods and resources. Lastly, we introduce simPlatform, which aims to improve the ecosystem for computational sciences.

— —

16:10 – 16:50  “Bridging HPC and Clouds – Physical Infrastructure, Networks, Storage and Applications”

Panel Discussion: Peter Kogge (Notre Dame), Alex Szalay (Johns Hopkins), Geoffrey C. Fox (Indiana), John Gustafson (A*STAR). Moderator: Robert O’Brien

 

Today’s HPC is tomorrow’s general computing infrastructure. Data-intensive applications in finance, engineering and science are the tip of the iceberg; the functionality of many systems is increasingly driven by data. Trends in consumer applications in big data, machine learning (ML) and deep neural networks (DNN) are pushing at the multi-core CPU boundary, and the wider deployment of sensors, robotics and pervasive computing (IoT) will demand new approaches. All are harbingers of change. Speed, data volume and answers-per-watt metrics will drive demand for HPC Clouds (IaaS and PaaS). The panel will discuss what is needed to enable HPC Clouds: how things should change from current models of architecting physical and software infrastructure, how networking and storage approaches factor into this, and what opportunities exist with COTS (commercial off-the-shelf) components to take a different approach from that typical of HPC systems.

 

— —

16:50 – 17:15  Closing Remarks, Discussion and Feedback. Towards Multicore World 2017.

— —

Interested?

Register HERE – limited tickets still available.