Mark Moir

Can the world’s problems be solved with good abstractions? How about without?

Abstractions precisely specify how a system component (a concurrent data structure, for example) is allowed to behave, without dictating how this is to be enforced. Users need not know or understand details of the implementation, and implementors are free to innovate, provided they continue to guarantee behaviour consistent with the defined abstraction. The art of defining good abstractions that provide what users need while minimally constraining implementors is critical to successful development and evolution of computer systems.
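As a concrete illustration of this idea (an added sketch, not an example from the talk), consider a concurrent stack: the abstraction is the interface and its promised behaviour, while the lock-based class below is just one permissible implementation, which a lock-free one could later replace without users noticing.

```cpp
#include <mutex>
#include <optional>
#include <stack>

// The abstraction: what a concurrent stack is allowed to do.  It says
// nothing about locks, memory layout, or retry loops; implementors are
// free to innovate behind this boundary.
template <typename T>
class ConcurrentStack {
public:
    virtual ~ConcurrentStack() = default;
    virtual void push(T value) = 0;          // always succeeds
    virtual std::optional<T> pop() = 0;      // empty stack -> std::nullopt
};

// One legitimate implementation: a coarse-grained lock.  A lock-free
// version could be substituted tomorrow, provided it still behaves as
// the abstraction specifies.
template <typename T>
class LockedStack : public ConcurrentStack<T> {
public:
    void push(T value) override {
        std::lock_guard<std::mutex> guard(mutex_);
        items_.push(std::move(value));
    }
    std::optional<T> pop() override {
        std::lock_guard<std::mutex> guard(mutex_);
        if (items_.empty()) return std::nullopt;
        T top = std::move(items_.top());
        items_.pop();
        return top;
    }
private:
    std::mutex mutex_;
    std::stack<T> items_;
};
```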

This talk explores whether good abstractions may also be useful—or even necessary—in addressing social, legal and political issues that require careful balance of technical issues. We will touch on examples around issues such as privacy, security, anonymity, accountability, and freedom of expression. Rather than proposing specific solutions, this talk aims to stimulate discussion about ways in which technical people could potentially contribute to addressing some of these important issues.

Dr. Mark Moir

Bio:

Dr. Moir is an expert in concurrency abstractions with more than 20 years in Computer Science research. He received the B.Sc.(Hons.) degree in Computer Science from Victoria University of Wellington, New Zealand in 1988, and the Ph.D. degree in Computer Science from the University of North Carolina at Chapel Hill, USA in 1996. From August 1996 until June 2000, he was an assistant professor in the Department of Computer Science at the University of Pittsburgh. In June 2000, he joined Sun Labs. Moir later became the Principal Investigator of the Scalable Synchronization Research Group in Oracle Labs after Oracle acquired Sun in 2010.

Dr. Moir was named a Sun Microsystems Distinguished Engineer in 2009. His main research interests concern practical and theoretical aspects of concurrent, distributed, and real-time computing. His current research focuses on hardware and software mechanisms for making it easier to develop scalable, efficient, and correct concurrent programs for shared-memory multiprocessors.

Dr. Moir is the only speaker who has presented at every Multicore World since its creation in 2012.

Page at Multicore World 2014 – with links to 2013 and 2012.

Mark Moir – Principal Investigator, Scalable Synchronization Research Group, Oracle Labs (USA – New Zealand)


Presentation at III Multicore World

Adaptive Integration of Hardware and Software Lock Elision Techniques

Mark Moir, Oracle – USA / New Zealand

—————

Hardware Transactional Memory (HTM) has recently entered mainstream computing.  There has been significant research into ways to exploit HTM, ranging from supporting new transactional programming models, to supporting scalable concurrent data structures, to “transactional lock elision” (TLE) techniques, which use HTM to boost performance and scalability of applications based on traditional lock-based programming with no changes to application code.

In recent work, we have been exploring the use of software techniques that similarly aim to improve the scalability of lock-based programs. These techniques involve somewhat more programmer effort than TLE but work in the absence of HTM, and furthermore can provide benefits in cases in which HTM is available but not effective. Different combinations of these hardware and software techniques are most effective in different environments and for different workloads.

This talk introduces the Adaptive Lock Elision (ALE) library, which supports integration of these techniques and facilitates dynamic choices between them at runtime, guided by a pluggable policy. Results of preliminary evaluation on four different platforms—two of which support HTM—will be presented.  Our results illustrate the need for policies that adapt to the platform and workload, and evaluate preliminary work in this direction.
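To make the TLE pattern concrete, here is a minimal sketch using Intel's RTM intrinsics (an added illustration under stated assumptions: with_lock_elided and the toy spinlock are invented names, and this is not the ALE library's actual API; it assumes a TSX-capable toolchain, e.g. gcc with -mrtm).

```cpp
#include <atomic>
#include <immintrin.h>  // Intel RTM intrinsics: _xbegin / _xend / _xabort

std::atomic<bool> lock_held{false};  // a toy test-and-set spinlock

void acquire_lock() {
    while (lock_held.exchange(true, std::memory_order_acquire)) { /* spin */ }
}
void release_lock() { lock_held.store(false, std::memory_order_release); }

// Transactional lock elision: run the critical section as a hardware
// transaction without taking the lock; fall back to real locking if the
// transaction cannot commit.  A production implementation would add
// abort-cause inspection, backoff, and adaptive policies, which is
// exactly the gap the ALE library's pluggable policies address.
template <typename CriticalSection>
void with_lock_elided(CriticalSection cs) {
    for (int attempt = 0; attempt < 3; ++attempt) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            // Reading the lock word puts it in the transaction's read set,
            // so a real lock acquisition aborts us, preserving mutual
            // exclusion with the fallback path.
            if (lock_held.load(std::memory_order_relaxed)) _xabort(0xff);
            cs();        // runs speculatively, concurrently with other elisions
            _xend();     // commit
            return;
        }
        // Transaction aborted; retry a few times before giving up.
    }
    acquire_lock();      // software fallback path
    cs();
    release_lock();
}
```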

—————————————————————————————————————————–

Bio:

Mark Moir received the B.Sc.(Hons.) degree in Computer Science from Victoria University of Wellington, New Zealand in 1988, and the Ph.D. degree in Computer Science from the University of North Carolina at Chapel Hill, USA in 1996. From August 1996 until June 2000, he was an assistant professor in the Department of Computer Science at the University of Pittsburgh. In June 2000, he joined Sun Labs. Moir is now the Principal Investigator of the Scalable Synchronization Research Group in Oracle Labs, following Oracle's acquisition of Sun in 2010.

Dr. Moir was named a Sun Microsystems Distinguished Engineer in 2009. His main research interests concern practical and theoretical aspects of concurrent, distributed, and real-time computing. His current research focuses on hardware and software mechanisms for making it easier to develop scalable, efficient, and correct concurrent programs for shared-memory multiprocessors.

—————————————————————————————————————————-

Publications

Mark Moir at I Multicore World (Wellington – 2012) – “Concurrency and synchronization in an increasingly multicore world” – video

Mark Moir at II Multicore World (Wellington – 2013) – “Transactional memory hardware, algorithms, and C++ language features”

Panels 2017

Panels – 6th Multicore World

Draft v3.5 (subject to changes) – Updated 20th February 2017

General

  • Panels are motivated by 2-3 questions

  • Goal is to debate the topic freely in brainstorming mode -no lectures, and to trigger new thinking

  • Moderator will briefly introduce each panelist (30”)

  • Questions and panelists’ names will be shown on a slide on the screen

  • Each panelist will briefly (3’) introduce her/his position about the questions -no slides required

  • Open and dynamic discussion between the panel and with the audience

  • Controversial statements are encouraged -panel will not be broadcast

  • Ideal outcome is to reach some consensus but no conclusions are required


Day 1 – Monday 20th February 2017

11:30 – 12:15  Panel – From a Multicore World to the Exascale Era -and beyond! What could happen? What NEEDS to happen?

Around 2004, multicore hardware became common. Vendors pushed the new architectures to the market without asking if the software community was ready. Now that we envisage a new era towards exascale in the next decade -with massive challenges, do we have time to prepare ourselves? What do we need to do? What are the possible benefits, and are they worth the effort?

Prof. Satoshi Matsuoka (Japan) – Moderator

Prof. Michelle Y. Simmons (Australia)

Pete Beckman (US)

Dave Jaggar (NZ)

Prof. Michael Kelly (UK-NZ)

Prof. Tony Hey (UK)


3:30 – 4:15 Panel – BD / AI / ML / IoT / C@E / Deep Learning, etc, etc… Which is the Real Technology Behind These Trendy Buzzwords?

Artificial Intelligence (AI) is a hot topic today, but the term originated more than 60 years ago. Marketing experts “insert” buzzwords such as BD to reach a broad audience, even if that audience doesn’t understand them. Do these practices actually explain technical progress? Are we entering an era of accepting “magic boxes” full of data which, instead of advancing basic science and core technologies, are actually creating a monopoly of a few owners of knowledge? Will IBM’s Watson or Amazon’s Alexa emerge as the AI equivalents of Intel?

Pete Beckman (US) – Moderator

Paul Fenwick (Australia)

Prof Michael Kelly (UK-NZ)

Paul McKenney (US) 

John Gustafson (Singapore)

Prof Satoshi Matsuoka (Japan)


Day 2 – Tuesday 21st February 2017

11:45 – 12:25  Panel – Towards the SKA Tender: Challenges and Opportunities

The Square Kilometre Array radio-telescope project (SKA) will be the largest scientific instrument of the 21st century and the ultimate Big Data project. The construction budget for its first phase is capped at €674 million (2016 euros), and the tender process will start to be defined from 2018. The panel will discuss the opportunities for their countries and the potential for a collaborative approach towards the development of new computing technologies.

Nicolás Erdödy (NZ) – Moderator

Simon Rae (NZ)

Prof. Tony Hey (UK)

Dr. Happy Sithole (South Africa)

Dr. Andrew Ensor (NZ)

Juan Carlos Guzman (Australia)

Prof Andreas Wicenec (Australia)


3:30 – 4:15 Panel – Where is New Zealand’s ICT / High-Tech Ecosystem Heading?

New Zealand is known as a nation of innovators and early adopters of new technologies. We are an open and democratic society with modern regulations and laws about software and intellectual property, plus a vibrant start-up scene and top-ranked universities. Still, our economy is mostly based on primary industries with a classic development model. Overall, the primary sector contributes just over half of NZ’s total export earnings. Will the High-Tech sector gain traction and move the economy from a pastoral base to a knowledge-based one? Would it be a radical transformation or just an upgrade?

Victoria Maclennan (2016 ICT NZer of the year) – Moderator

Clare Curran, MP (Labour Party ICT Spokesperson)

Ralph Highnam (CEO, Volpara Technologies)

Guy Kloss (Qrious)

Mark Moir (Oracle)

Dave Jaggar (ex-ARM)


Day 3 – Wednesday 22nd February 2017

10:40 – 11:25 Panel – Does Big Science necessarily mean Big Budgets?

Big Science projects are becoming larger and larger in all facets: goals more ambitious than ever, hundreds -if not thousands- of participants distributed all over the globe, and budgets that invariably are in the range of tens to hundreds of millions of dollars. Are the days of the solitary researcher or garage inventor gone for good? Or could the HPC evolution through the cloud foster the emergence of a new breed of collective investigation -similar to the open source software development model? How “open” should science be to adapt to that model? And how would one of a scientist’s key assets -reputation- evolve in such a collective model?

Prof. Michael Kelly (UK-NZ) – Moderator.

Prof. Andreas Wicenec (Australia)

Dr. Happy Sithole (South Africa)

Dr. Pete Beckman (US)

Dr. John Gustafson (Singapore)

Prof Satoshi Matsuoka (Japan)


2:15 – 3:00 Panel – Enterprise Systems: How big is the gap to reach 21st century performance? How will legacy code and hardware be updated?

Legacy code is everywhere. Hardware replacement is always seen as a cost -not an investment. As an example, COBOL was written for mainframes created 10 years before man walked on the moon. Those same mainframes still operate some of the biggest institutionalised computing today. How are we going to “teach” these machines to be “intelligent”? How will these systems enter an exascale era in the next decade?

Mark Moir (Oracle, NZ-US) – Moderator

Victoria Maclennan (Optimal BI, NZ)

Paul McKenney (IBM, US)

Paul Fenwick (Perl, Australia)

John Gustafson (ex-Intel, AMD, Sun, Singapore)

Nathan DeBardeleben (LANL, US)


Abstracts 2017

Program – 6th Multicore World

Draft v4.0 (subject to changes) – Updated 16th February 2017

Speakers List is here

Buy Tickets here

Program MW17 v4.0 (PDF)

Panels list is here


Day 1 – Monday 20th February 2017

8:20 – 8:40 Opening Welcome – Setting the Scene

Nicolás Erdödy (Open Parallel)

8:40 – 9:10 Japanese plans for Open High Performance Computing and Big Data / Artificial Intelligence Infrastructure

Prof. Satoshi Matsuoka – Professor, Tokyo Institute of Technology and Fellow, Advanced Institute for Science and Technology, Japan

Abstract – Japanese investment into public, open-science HPC infrastructures for research and academia has had a long history, in fact longer than that of the United States. There is now also a focus on Big Data / Artificial Intelligence (BD/AI), with national-level AI centers: in particular, I lead a project facilitating one of the world’s largest BD/AI-focused open and public computing infrastructures. The performance of the machine is slated to be well above 130 Petaflops for machine learning, as well as acceleration in I/O and other properties desirable for accelerating BD/AI.

9:15 – 10:00 Keynote – The Revolution in Experimental and Observational Science: The Convergence of Data-Intensive and Compute-Intensive Infrastructure

Prof. Tony Hey – Chief Data Scientist, Science & Technology Facilities Council (STFC), UK

Abstract – The revolution in Experimental and Observational Science (EOS) is being driven by the new generation of facilities and instruments, and by dramatic advances in detector technology. In addition, the experiments now being performed at large-scale facilities, such as the Diamond Light Source in the UK and Argonne Advanced Photon Source in the US, are becoming increasingly complex, often requiring advanced computational modelling to interpret the results. There is also an increasing requirement for the facilities to provide near real-time feedback on the progress of an experiment as the data is being collected. A final complexity comes from the need to understand multi-modal data which combines data from several different experiments on the same instrument or data from several different instruments. All of these trends are requiring a closer coupling between data and compute resources.

10:00 – 10:30 – Morning Tea

10:30 – 11:15 Keynote – The Quantum Revolution: Computing, past, present and future

Prof. Michelle Y. Simmons – Scientia Professor of Physics, University of New South Wales, Australia. Director, Centre for Quantum Computation & Communication Technology, School of Physics, UNSW. Australian Research Council Laureate Fellow

Abstract – Down-scaling has been the leading paradigm of the semiconductor industry since the invention of the first transistor in 1947, producing faster and smaller computers every year. However device miniaturization will soon reach the atomic limit, set by the discreteness of matter, leading to intensified research in alternative approaches for creating logic devices of the future. In this talk I will describe the emerging field of quantum information. In particular I will focus on the development of quantum computing, where Australia is leading an international race to build a large-scale prototype in silicon.

11:20 – 12:00 Panel – From a Multicore World to the Exascale Era -and beyond! What could happen? What NEEDS to happen?

Prof. Satoshi Matsuoka (Japan) – Moderator, Prof. Michelle Y. Simmons (Australia), Pete Beckman (US), Dave Jaggar (NZ), Prof. Michael Kelly (UK-NZ), Prof. Tony Hey (UK)

12:10 – 1:10 Lunch

1:10 – 1:35 HPC/Cloud Hybrids for Efficient Resource Allocation and Throughput

Lev Lafayette – HPC Support & Training Officer, The University of Melbourne, Australia

Abstract – HPC systems running massively parallel jobs need a fairly static software operating environment running on bare metal hardware, a high speed interconnect to reach their full potential, and offer linear performance scaling for cleverly designed applications. Cloud computing, on the other hand, offers flexible virtual environments and can be used for pleasingly parallel workloads.

Two approaches have been undertaken to make HPC resources available in a dynamically reconfigurable hybrid HPC/Cloud system. Both can be achieved with few modifications to existing HPC/Cloud environments. The first approach, the “Nemo” system at the University of Freiburg, deploys a cloud client on the HPC compute nodes, so the HPC hardware can run cloud workloads to backfill free compute slots. The second approach, the “Spartan” system at the University of Melbourne, generates a consistent compute node operating system image with variation in the virtual hardware specification, which can be modified according to needs.

1:40 – 2:10 The ASKAP Science Data Processor system: A Computing precursor of the SKA

Juan Carlos Guzman – Head of the ATNF Software and Computing Group, Australia

Abstract – The ASKAP Science Data Processor software named ASKAPsoft has been in development since the ASKAP project started more than 10 years ago and is now processing and commissioning early science data with a third of the array. The software runs in a dedicated HPC platform located at the Pawsey Supercomputing Centre, processing radio interferometry data at a current rate sufficient to support the Early Science program. Software development and commissioning is ongoing to reach an order of magnitude better performance for the full ASKAP array.

The software system was originally designed to calibrate and image ASKAP science data, and over the last few months we have also started to look at expanding its usage to other instruments such as MWA and SKA1_LOW. This talk describes the unique features of ASKAPsoft for the ASKAP Science projects, shares a few “words of wisdom” gained (and still being gained) during development and commissioning, and describes our development roadmap to support the full ASKAP array and other instruments, in particular the SKA.

2:15 – 3:00 Keynote – Supercomputer Resilience – Then, Now, and Tomorrow

Nathan DeBardeleben – Senior Research Scientist in High Performance Computing Design at Los Alamos National Laboratory and Lead of the Ultrascale Systems Research Center, USA

Abstract – Supercomputer resilience, reliability, and fault-tolerance are increasingly challenging areas of extreme-scale computer research as government agencies and large companies strive to solve the most challenging scientific problems. Today’s supercomputers are so large that failures are a common occurrence and various tolerance and mitigation strategies are required by the system, middleware, and even application software. In this talk we discuss trends in this field and how data analytics and machine learning techniques are being applied to influence the design, procurement, and operation of some of the world’s largest supercomputers.

3:00 – 3:30 Afternoon Tea

3:30 – 4:15 Panel – BD / AI / ML / IoT / C@E / Deep Learning, etc, etc… Which is the Real Technology Behind These Trendy Buzzwords?

Pete Beckman (US) – Moderator, Paul Fenwick (Australia), Prof Michael Kelly (UK-NZ), John Gustafson (Singapore), Paul McKenney (US), Prof Satoshi Matsuoka (Japan)

4:15 – 5:00 Keynote – Parallel Computing at the Edge: Technology for Chicago Street Poles and for Exascale Systems

Pete Beckman – Co-Director, Northwestern-Argonne Institute for Science and Engineering, Argonne National Laboratory, USA. Leads Projects Argo Exascale Operating System, Waggle – Sensors for the Array of Things

Abstract – Sensors and embedded computing devices are being woven into buildings, roads, household appliances, and light bulbs. Most are as simple as possible, with low-power microprocessors that just push sensor values up to the cloud. However, another class of powerful, programmable sensor node is emerging. The Waggle (wa8.gl) platform supports parallel computing, machine learning, and computer vision for advanced intelligent sensing applications. Waggle is an open source and open hardware project at Argonne National Laboratory that has developed a novel wireless sensor system to enable a new breed of smart city research and sensor-driven environmental science. Leveraging machine learning tools such as Google’s TensorFlow and Berkeley’s Caffe and computer vision packages such as OpenCV, Waggle sensors can understand their surroundings while also measuring air quality and environmental conditions. Waggle is the core technology for the Chicago ArrayOfThings (AoT) project (arrayofthings.github.io). So how is this work related to Exascale operating systems and extreme-scale platforms for science? Join me as we explore the new world of parallel computing everywhere. The presentation will outline the current progress of designing and deploying new platforms and our progress on research topics in computer science, including parallel computing, operating system resilience, data aggregation, and HPC modelling and simulation.
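For a flavour of what "understanding their surroundings" can mean at the edge (an added sketch, not Waggle's actual code: the model file name and output handling are hypothetical), a node can run inference locally and push only a tiny derived value upstream instead of raw video:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>  // OpenCV's deep-neural-network module
#include <iostream>

int main() {
    cv::VideoCapture camera(0);                         // node-attached camera
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");  // hypothetical model file
    cv::Mat frame;
    while (camera.read(frame)) {
        // Preprocess the frame into the network's expected input blob.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0,
                                              cv::Size(224, 224));
        net.setInput(blob);
        cv::Mat scores = net.forward();                 // in-situ inference
        cv::Point best;
        cv::minMaxLoc(scores.reshape(1, 1), nullptr, nullptr, nullptr, &best);
        // Only this derived value, not the image, would go to the cloud.
        std::cout << "detected_class=" << best.x << std::endl;
    }
    return 0;
}
```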

5:00 – 5:15 Debate

5:15 – 7:00 Light Dinner


Day 2 – Tuesday 21st February 2017

8:30 – 8:40 – General Information and Recap

Nicolás Erdödy (Open Parallel)

8:45 – 9:25 – Keynote – OpenStack for HPC in Africa

Happy Sithole – Director, Centre for High Performance Computing, South Africa

Abstract – This presentation will cover the South African perspective on advancing HPC technologies in the region and how we plan to contribute to the global space. The focus will be on the technologies, what is required for a hub -looking at the opportunities and challenges, and how it will attract the attention of the world as a global hub. It will cover HPC developments in the region, the OpenStack initiative and the African Research Cloud perspective.

9:30 – 10:00 – Ministerial Address

The Honourable Paul Goldsmith, New Zealand’s Minister for Science and Innovation, Minister of Tertiary Education, Skills and Employment, and Minister for Regulatory Reform

10:00 – 10:30 Morning Tea

10:30 – 11:15 Keynote – Extreme Scale Multi-tasking using DALiuGE

Prof. Andreas Wicenec – Professor of Data Intensive Research, ICRAR, Perth, Australia. Task leader of the Data Layer for the SKA Science Data Processor

Abstract – The SKA processing will require launching, controlling and terminating up to some 100 million individual, but logically related, tasks for a single reduction run spanning around 6-12 hours. This translates to about 5,000 tasks/s on average, distributed across potentially thousands of compute nodes. Since the actual reduction components as well as the compute platform are not yet determined and will also change during the operational lifetime of the SKA, both the number of tasks and the number of compute nodes are likely to be quite variable. In addition, the size of a reduction deployment also depends on the actual scientific goals of a given experiment.

DALiuGE is a prototype framework designed and developed on the basis of these requirements. It uses many of the ideas published for other existing frameworks, such as Spark, Swift/T, TensorFlow or PaRSEC. Unlike those, it goes back to first principles in order to tackle the most relevant requirements of the SKA use case first and foremost. For scalability in particular, it follows a very strict architecture, enforcing completely asynchronous task execution as well as hierarchical task deployment and tear-down. DALiuGE also enables the usage of existing reduction components without any change, while also allowing the development of more optimised, dedicated components if required. The other unique feature is the complete separation of the logic of the reduction process from its deployment onto hardware at run-time. This also allows scientists to concentrate on the development of the logic, without having to deal with final deployment issues. In this talk we will present the architectural concepts as well as results from very large scale deployments.
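That separation of a logical task graph from its physical deployment is the key architectural idea here. The sketch below illustrates it generically (an added illustration in C++; DALiuGE itself is a different, far more capable framework, and none of these names come from it): the graph describes only tasks and dependencies, while a separate deployment step decides how they execute, here simply via asynchronous threads.

```cpp
#include <functional>
#include <future>
#include <map>
#include <string>
#include <vector>

// Logical graph: names, work, and dependencies only.  Nothing here says
// where or when a task runs; that is the deployment layer's job.
struct Task {
    std::string name;
    std::vector<std::string> deps;   // names of upstream tasks
    std::function<void()> work;
};

// A trivial deployment: launch each task asynchronously once its
// dependencies complete.  A real framework would map the same logical
// graph onto thousands of compute nodes instead of std::async.
void deploy(const std::vector<Task>& graph) {  // assumes topological order
    std::map<std::string, std::shared_future<void>> finished;
    for (const Task& t : graph) {
        std::vector<std::shared_future<void>> inputs;
        for (const std::string& d : t.deps) inputs.push_back(finished.at(d));
        finished[t.name] = std::async(std::launch::async, [t, inputs] {
            for (const auto& f : inputs) f.wait();  // dependency-driven start
            t.work();
        }).share();
    }
    for (auto& entry : finished) entry.second.wait();  // tear-down barrier
}
```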

11:15 – 11:40 Update on New Zealand SKA Alliance participation in the SKA project

Andrew Ensor – Director, New Zealand SKA Alliance

Abstract – The Square Kilometre Array is the largest mega-Science project of the next decade and represents numerous firsts for New Zealand. This talk will outline the project, its unprecedented computing challenges, New Zealand’s key involvements, and progress on the computer system design as the construction phase draws closer.

11:45 – 12:15 Panel – Towards the SKA 2018 Tender: Challenges and Opportunities

Simon Rae (NZ) – Moderator, Prof. Tony Hey (UK), Happy Sithole (South Africa), Andrew Ensor (NZ), Juan Carlos Guzman (Australia), Prof Andreas Wicenec (Australia)

12:20 – 1:20 Lunch

1:20 – 1:50 SKA – SDP Middleware: open and collaborative HPC in the SKA

Piers Harding – Senior Consultant, Catalyst IT, New Zealand

Abstract – The aspirations of the Science Data Processor (SDP) have many of the characteristics of a commodity ‘big data’ problem, even though the bulk of the known processing requirements are seemingly narrow, well defined and closed in nature. Everything apart from the processing time frames and power allocation is big: logging, messaging, storage, network, computation. Even the project structure is enormously collaborative, with many countries and institutions involved.

Within the SDP processing, the SKA could have opted for a closed solution for the rendering and delivery of data products, but the project has recognised an opportunity to do things differently by stipulating the guiding principles of using open standards, open source, and commodity computing. This framework will make it possible to bring the widest possible research audience closer to the processing frontier, giving them greater access to the telescope and more fine-grained control of their own observations. This talk walks through some of the opportunities and problems the SDP will face as the project attempts to realise the ambition of utilising commodity computing technology under stringent high performance computing and energy requirements, whilst reflecting on how this can take advantage of major solution architecture trends that will unfold over the coming years.

1:50 – 2:10 IHK/McKernel: A Lightweight Multi-kernel based Operating System for Extreme Scale HPC

Balazs Gerofi – Research Scientist in the System Software Research Team at RIKEN Advanced Institute for Computational Science (AICS) – Japan

Abstract – RIKEN Advanced Institute for Computational Science has been appointed by the Japanese government as the main organization for leading the development of Japan’s next-generation flagship supercomputer, the successor of the K computer. Part of this effort is to design and develop a system software stack that suits the needs of future extreme-scale computing. In this talk, we primarily focus on OS research and discuss IHK/McKernel, our multi-kernel based operating system framework. IHK/McKernel runs Linux side-by-side with a lightweight kernel on compute nodes, with the primary motivation of providing scalable, consistent performance for large-scale HPC simulations, while at the same time retaining a fully Linux-compatible execution environment. Lightweight multi-kernels and specialized OS kernels in general have been receiving a great deal of attention recently, not only in HPC but also in the context of cloud and embedded computing. We provide an update of the project and outline future research directions.

2:15 – 3:00 Keynote – Does RCU Really Work?

Paul McKenney – IBM Distinguished Engineer, IBM Linux Technology Center, USA

Abstract – Bugs will always be with us, and given that there are well over a billion instances of the Linux kernel running around the world, a hypothetical linux-kernel RCU race condition that happens once per million years of runtime will be happening several times per day across the installed base. Yet achieving even this level of robustness in a highly concurrent software artifact poses a severe challenge to the current software engineering state of the art. This presentation will give an overview of the techniques being used to start to meet this challenge, up to and including some exciting advances in formal verification, which have resulted in formal verification being added to the Linux-kernel RCU regression-test suite.
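The arithmetic behind that claim is sobering: one failure per million machine-years, multiplied across a billion machines, is on the order of a thousand failures per year, roughly three per day. For readers unfamiliar with RCU itself, here is a minimal sketch of the core pattern using the userspace RCU library, liburcu (an added illustration; the Linux-kernel API the talk concerns is analogous, and thread registration and error handling are omitted here):

```cpp
#include <urcu.h>  // userspace RCU, classic API; link with -lurcu
                   // (each reader thread must first call rcu_register_thread())

struct Config { int threshold; };
static Config* current_cfg = new Config{42};

// Reader: enters a cheap read-side critical section that never blocks
// writers, and reads a consistent snapshot of the shared pointer.
int read_threshold() {
    rcu_read_lock();
    Config* c = rcu_dereference(current_cfg);
    int value = c->threshold;
    rcu_read_unlock();
    return value;
}

// Updater: publishes a new version, then waits for a grace period so
// that no pre-existing reader can still hold the old one, then frees it.
void update_threshold(int v) {
    Config* old = current_cfg;
    rcu_assign_pointer(current_cfg, new Config{v});
    synchronize_rcu();  // returns only after all pre-existing readers finish
    delete old;         // now safe to reclaim
}
```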

3:00 – 3:30 Afternoon Tea

3:30 – 4:15 Panel – Where is New Zealand’s ICT / High-Tech Ecosystem Heading?

Victoria Maclennan (2016 ICT NZer of the year) – Moderator, Clare Curran, MP (Labour Party ICT Spokesperson), Ralph Highnam (CEO, Volpara Technologies), Guy Kloss (Qrious), Mark Moir (Oracle), Dave Jaggar (ex-ARM)

4:15 – 5:00 Keynote – The Future Is Awesome (and what you can do about it)

Paul Fenwick – Public speaker, open source authority, and science educator. Managing Director, Perl Training, Australia.

Abstract – Technology is advancing at a faster rate than society’s expectations, and can go from science fiction to consumer-available with very little in the way of discussion in between. But the questions these technologies raise are critically important: What happens when self-driving vehicles cause unemployment, when medical expert systems work on behalf of insurance agencies rather than patients, and when weapon platforms make their own lethal decisions?

5:00 – 5:15 Debate

5:15 – 7:00 Light Dinner


Day 3 – Wednesday 22nd February 2017

8:45 – 8:55 General Information and Recap

8:55 – 9:05 The Exascale Institute -and other projects from and for New Zealand

Nicolás Erdödy (Open Parallel)

9:10 – 10:00 Keynote – FLOPS to BYTES: Accelerating Beyond Moore’s Law

Satoshi Matsuoka – Professor, Tokyo Institute of Technology and Fellow, Advanced Institute for Science and Technology, Japan

Abstract – The so-called “Moore’s Law”, by which the performance of processors increases exponentially, by a factor of 4 every 3 years or so, is slated to end in a 10-15 year timeframe, due to the lithography of VLSIs reaching its limits around that time, combined with other physical factors. This is largely due to transistor power becoming largely constant; as a result, means to sustain continuous performance increase must be sought other than increasing the clock rate or the number of floating point units in the chips, i.e., increasing the FLOPS. The promising new parameter in place of the transistor count is the perceived increase in the capacity and bandwidth of storage, driven by device, architectural, as well as packaging innovations: DRAM-alternative Non-Volatile Memory (NVM) devices, 3-D memory and logic stacking evolving from VIAs to direct silicon stacking, as well as next-generation terabit optics and networks. The overall effect of this is that the trend of increasing computational intensity, as advocated today, will no longer result in performance increases; rather, exploiting the memory and bandwidth capacities will be the right methodology. However, such a shift in compute-vs-data tradeoffs would not exactly be a return to the old vector days, since other physical factors such as latency will not change when spatial communication is involved in X-Y directions. Such a conversion of performance metrics from FLOPS to BYTES could lead to disruptive alterations in how computing systems, both hardware and software, will evolve towards the future.
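The tradeoff Matsuoka describes can be made concrete with the well-known roofline model (an added illustration, not taken from the abstract): attainable performance is capped either by peak compute or by memory traffic.

```latex
% Roofline model: P_peak is peak compute, B_mem is memory bandwidth,
% and I is the arithmetic intensity of the code.
P_{\text{attainable}} = \min\left(P_{\text{peak}},\; B_{\text{mem}} \cdot I\right),
\qquad
I = \frac{\text{floating-point operations}}{\text{bytes moved}}
```

Once \(P_{\text{peak}}\) stops growing, the only way to raise attainable performance for bandwidth-bound codes is to raise \(B_{\text{mem}}\), which is precisely the FLOPS-to-BYTES shift the talk advocates.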

10:00 – 10:30 Morning Tea

10:30 – 11:15 Panel – Does Big Science necessarily mean Big Budgets?

Prof. Michael Kelly (UK-NZ) – Moderator, Prof. Andreas Wicenec (Australia), Dr. Happy Sithole (South Africa), Dr. Andrew Ensor (New Zealand), Dr. John Gustafson (Singapore), Pete Beckman (US), Prof Satoshi Matsuoka (Japan)

11:15 – 12:00 Keynote – Beating Floats at Their Own Game: Faster Hardware and Better Answers

John Gustafson – Visiting Scientist at A*STAR and Professor in the School of Computing at the National University of Singapore. He is a former Director at Intel Labs and the former Chief Product Architect at AMD.

Abstract – A new data type called a “posit” is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits. For any bit width, they have a larger dynamic range, higher accuracy, better closure under arithmetic operations, and simpler exception-handling. For example, posits never overflow to infinity or underflow to zero, and there is no “Not-a-Number” (NaN) value. Posits take up less space to implement in silicon than an IEEE float of the same size, largely because there is no “gradual underflow” or subnormal numbers. With fewer gate delays per operation as well as lower silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPs using similar hardware resources. GPU accelerators, in particular, could do more arithmetic per watt and per dollar yet deliver superior answer quality.

The “Accuracy on a 32-bit budget” benchmark compares how many decimals of accuracy can be produced for a set number of bits-per-value, using various number formats. Low-precision posits provide a better solution than “approximate computing” methods that try to tolerate decreases in answer quality. High-precision posits provide better answers (more correct decimals) than floats of the same size, suggesting that in some cases, a 32-bit posit may do a better job than a 64-bit float. In other words, posits beat floats at their own game.
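For reference, the decoding of a posit (an added note based on Gustafson and Yonemoto's published definition; m below denotes the number of fraction bits) combines a sign s, a regime value k, an exponent e of es bits, and a fraction f:

```latex
% Posit value: the regime scales by powers of useed, which grows
% doubly exponentially with the exponent field size es.
x = (-1)^{s} \times \mathrm{useed}^{\,k} \times 2^{e}
    \times \left(1 + \frac{f}{2^{m}}\right),
\qquad
\mathrm{useed} = 2^{2^{es}}
```

Because the regime field is run-length encoded, precision tapers gracefully: values near 1 get more fraction bits, while very large and very small magnitudes trade fraction bits for dynamic range.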

12:00 – 1:00 Lunch

1:00 – 1:30 Why are we still failing to attract and retain women in STEM?

Why are we still failing to attract and retain women in Science, Technology, Engineering and Mathematics (STEM) related areas and subjects -and what can every single one of us do to lead the change?

In a world where alternative facts have become an excuse for ignorance, we need to face up to the language, culture and environment that turn women off our education and employment systems. We can all become leaders of change. In this short talk Victoria will provide insight into how all of us can make an enduring, sustainable difference for women and diversity in STEM.

Victoria Maclennan – Managing Director, Optimal BI; 2016 New Zealand ICT Professional of the Year; Co-Chair NZ Rise.

1:30 – 2:00 Breast imaging analytics that improve clinical decision-making and the early detection of breast cancer

Ralph Highnam – CEO, Volpara Solutions, New Zealand

Abstract – Dr Highnam will talk about breast cancer screening and some of the challenges and opportunities it presents for advanced computing techniques. Dr Highnam was a major part of the UK eDiamond project and the EU MammoGrid project during his time at the University of Oxford; those projects sought to apply “the Grid” (an early Cloud) to improving breast cancer detection. Roll on 20 years, and Dr Highnam now leads an ASX-listed company based in Wellington which uses Azure to improve breast cancer detection.

2:05 – 2:50 Panel – Enterprise Systems: How big is the gap to reach 21st century performance? How will legacy code and hardware be updated?

Mark Moir (Oracle, NZ-US) – Moderator, Victoria Maclennan (Optimal BI, NZ), Paul McKenney (IBM, US), Paul Fenwick (Perl, Australia), John Gustafson (ex-Intel, AMD, Sun, Singapore), Nathan DeBardeleben (LANL, US)

2:50 – 3:15 Afternoon Tea

3:15 – 4:00 Keynote – How Might the Manufacturability of the Hardware at Device Level Impact on Exascale Computing?

Prof Michael J Kelly – MacDiarmid Institute for Advanced Materials and Nanomaterials, Victoria University of Wellington, New Zealand, and Department of Engineering, University of Cambridge, United Kingdom.

Abstract – The International Technology Roadmap for Semiconductors has been the main guide for scientists developing ongoing solutions to the miniaturisation of devices. Between 1990 and 2010 the main thrust was the continuation of Moore’s law applied to high performance computing. The last decade saw a rapid growth in the number of red boxes in the technology tables, indicating the absence of any solution, let alone a workable solution, to achieving key device parameters needed to keep Moore’s law on track. The introduction of ‘More than Moore’, in recognition of the expanding role of high-speed communications as the output of IT R&D, broadened the output of the Roadmap and took the immediate pressure off the red boxes. Now the Internet of Things, envisaging vast networks of interacting sensor nodes, is the newer and broader output. In the meantime, the limitations of CMOS and the as-yet unmanufacturability of the research devices needed to continue miniaturisation are slowing progress and will eventually stop it. Quantum tunnelling through thin barrier layers offers one route to very high-speed uncooled semiconductor devices, with 1.9THz speeds demonstrated on a one-off basis. The low-cost, high-volume manufacture of 0.2-0.3THz devices and circuits is still a challenge. I will focus on some recent progress in this latter space and attempt to draw wider implications from the present status.

4:05 – 4:50 Keynote – The ARM Architecture – From Sunk to Success

Dave Jaggar – ARM’s former Head of Architecture Design, New Zealand

Abstract – In the late 1980s Acorn, a British one-hit-wonder computer company, developed its own workstation microprocessor, the Acorn RISC Machine (ARM). By the end of 1990 Acorn was down a very dark financial alley, and the ARM processor, which may well have been the worst microprocessor ever designed, was practically extinct. Acorn’s VLSI design team, minus the two original CPU designers, were cast out to fend for themselves, provisioned with only 18 months of financial rations from Apple. When Dave Jaggar joined the new company in the summer of 1991, with the ink not quite dry on his Master’s thesis, he thought perhaps it was the worst move since Martin Luther journeyed to Rome. However, after twelve months he was given free rein to redefine the processor, mostly because the company couldn’t afford anyone better. The Advanced RISC Machine, as the company was renamed, had a completely new instruction set which sidestepped many of the problems inherited from the original. Over the next eight years Dave systematically defined the entire ARM architecture, enabling it to be a popular embedded controller for the digital revolution, with around 100 billion units shipped. Along the way he worked out a little bit about computer architecture, and then retired back to New Zealand to work out a bit more. Now that ARM is no longer independent, and Britain is about to be, Dave thinks it’s about time he explained his part in ARM’s downfall.

4:55 – 5:10 Conference Wrap-up. Feedback: Towards Multicore World 2018

Nicolás Erdödy (Open Parallel)

5:15 – 6:30 Drinks and Nibbles


Speakers 2017

6th Multicore World – Speakers

BUY TICKETS HERE

Click for abstract and biography

  • The Honourable Paul Goldsmith, New Zealand’s Minister for Science and Innovation, Minister of Tertiary Education, Skills and Employment, and Minister for Regulatory Reform – Ministerial Address.
  • Dave Jaggar – Retired; ARM’s former Head of Architecture Design, Cambridge, UK. Author of the ARM Architecture Reference Manual. New Zealand. During Dave’s nine years at ARM he took a British processor architecture, gave it a second integer instruction set which made it excellent for embedded control, added on-chip debug so it could be buried in a SoC, gave it a new floating point instruction set, and rebuilt the system architecture so it could run Unix-like OSes properly. Founding Director of the ARM Austin Design Center in Texas, where about half of the A-series chips for phones are designed. Oral History interview – Mountain View, CA, 2012. Full abstract & Bio.
  • Prof Satoshi Matsuoka – Professor, High Performance Computing Systems Group, Global Scientific Information and Computing Center (GSIC) & Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Japan. Leader of the TSUBAME series of supercomputers, recently #1 in the world for power efficiency on both the Green 500 and Green Graph 500 lists. He has written over 500 articles and chaired numerous ACM/IEEE conferences. A fellow of the ACM and European ISC, he won the ACM Gordon Bell Prize in 2011 and the 2014 IEEE-CS Sidney Fernbach Memorial Award. Full abstract & Bio.
  • Prof Michael Kelly, FREng, FRS Hon, FRSNZ – Prince Philip Professor of Technology at Department of Engineering, University of Cambridge, UK. Solid State Electronics and Nanoscale Science. Advanced electronic devices for very high speed operation, development of a new class of low-power-consumption devices and circuits -and their computing applications. Awarded the 2006 Hughes Medal “for his work in the fundamental physics of electron transport and the creation of practical electronic devices which can be deployed in advanced systems”. Wikipedia.
  • Pete Beckman – Co-Director, Northwestern Argonne Institute for Science and Engineering-Mathematics-Computer Science, Argonne National Laboratory, USA. Lead, Projects Argo (An exascale operating system),  Array of Things (smart city – 500 sensors installed in Chicago) & Waggle (open platform for intelligent sensors)
  • Prof Andreas Wicenec – Head of the Data Intensive Astronomy (DIA) program at ICRAR (International Centre for Radio Astronomy Research), Perth, Australia. Prof Wicenec leads the SKA Science Data Processor Data Layer design. MW2014
  • Paul Fenwick – Managing Director, Perl Training Australia. A high-profile software engineer and internationally acclaimed presenter at conferences and user groups worldwide -where he is well-known for his humour and off-beat topics, Paul won the 2013 O’Reilly Open Source award for outstanding contributions to the open source community.
  • Andrew Ensor – Director New Zealand SKA Alliance (NZA), Research Pathway Senior Lecturer, Auckland University of Technology (AUT), Auckland, New Zealand. The NZ SKA Alliance is the 4th largest group worldwide working on the SKA Computing Platform. MW2013-14-15-16
  • Nathan DeBardeleben – High Performance Computing Design (HPC-DES), UltraScale Systems Research Center Lead and Resilience Lead at Los Alamos National Laboratory (LANL), USA. Expert in resilience, fault-tolerance, dependable computing, HPC and general computer reliability on new computing platforms.
  • Juan Carlos Guzmán – Software & Computing Group Leader, Australia Telescope National Facility (ATNF) – CASS (CSIRO Astronomy and Space Science), a division of CSIRO, Australia‘s national science agency.
  • Mark Moir – Principal Investigator of the Scalable Synchronization Research Group at Oracle Labs, USA – New Zealand. Mark has given a talk at every Multicore World since its creation. He works on concurrent, distributed, and real-time systems, particularly hardware and software support for programming constructs that facilitate scalable synchronization in shared memory multiprocessors. MW2012-13-14-15-16.
  • Piers Harding, Senior Solution Architect, Catalyst IT, New Zealand – Australia – UK. With 250+ engineers and developers, Catalyst is the largest open source software company in Australasia, and a global leader in OpenStack (fully deployed in the Catalyst Cloud). As a member of the NZ SKA Alliance, Catalyst has been contributing to the design of the Computing Platform of the SKA project since 2014.
  • Simon Rae, Manager Innovation Policy, Ministry of Business, Innovation & Employment (MBIE), New Zealand
  • Lev Lafayette – HPC Support and Training Officer at The University of Melbourne, Australia. OpenStack for HPC -papers here. MW2012-13-14-16

Full abstracts list coming shortly; in the meantime check this page for a complete list of all the speakers.

Program 2017

Program – Multicore World 2017

Draft v4.1 (subject to changes) –  Updated 19th February 2017

Speakers List is here

Buy Tickets here

Program with Abstracts is here

Program MW17 v4.0 (PDF)

Panels list is here


Day 1 – Monday 20th February 2017

8:20 – 8:40 Opening Welcome – Setting the Scene

Nicolás Erdödy (Open Parallel)

8:40 – 9:10 Japanese plans for Open High Performance Computing and Big Data / Artificial Intelligence Infrastructure

Prof. Satoshi Matsuoka – Professor, Tokyo Institute of Technology and Fellow, Advanced Institute for Science and Technology, Japan

9:15 – 10:00 Keynote – The Revolution in Experimental and Observational Science: The Convergence of Data-Intensive and Compute-Intensive Infrastructure

Prof. Tony Hey – Chief Data Scientist, Science & Technology Facilities Council (STFC), UK

10:00 – 10:30 – Morning Tea

10:30 – 11:15 Keynote – The Quantum Revolution: Computing, past, present and future

Prof. Michelle Y. Simmons – Scientia Professor of Physics, University of New South Wales, Australia. Director, Centre for Quantum Computation & Communication Technology, School of Physics, UNSW. Australian Research Council Laureate Fellow

11:20 – 12:00 Panel – From a Multicore World to the Exascale Era -and beyond! What could happen? What NEEDS to happen?

Prof. Satoshi Matsuoka (Japan) – Moderator, Prof. Michelle Y. Simmons (Australia), Pete Beckman (US), Dave Jaggar (NZ), Prof. Michael Kelly (UK-NZ), Prof. Tony Hey (UK)

12:10 – 1:10 Lunch

1:10 – 1:35 HPC/Cloud Hybrids for Efficient Resource Allocation and Throughput

Lev Lafayette – HPC Support & Training Officer, The University of Melbourne, Australia

1:40 – 2:10 The ASKAP Science Data Processor system: A Computing precursor of the SKA

Juan Carlos Guzman – Head of the ATNF Software and Computing Group, Australia

2:15 – 3:00 Keynote – Supercomputer Resilience – Then, Now, and Tomorrow

Nathan DeBardeleben – Senior Research Scientist in High Performance Computing Design at Los Alamos National Laboratory and Lead of the Ultrascale Systems Research Center, USA

3:00 – 3:30 Afternoon Tea

3:30 – 4:15 Panel – BD / AI / ML / IoT / C@E / Deep Learning, etc, etc… Which is the Real Technology Behind These Trendy Buzzwords?

Pete Beckman (US) – Moderator, Paul Fenwick (Australia), Prof Michael Kelly (UK-NZ), John Gustafson (Singapore), Paul McKenney (US), Prof Satoshi Matsuoka (Japan)

4:15 – 5:00 Keynote – Parallel Computing at the Edge: Technology for Chicago Street Poles and for Exascale Systems

Pete Beckman – Co-Director, Northwestern-Argonne Institute for Science and Engineering, Argonne National Laboratory, USA. Leads Projects Argo Exascale Operating System, Waggle – Sensors for the Array of Things

5:00 – 5:15 Debate

5:15 – 7:00 Light Dinner


Day 2 – Tuesday 21st February 2017

8:30 – 8:40 – General Information and Recap

Nicolás Erdödy (Open Parallel)

8:45 – 9:25 – Keynote – OpenStack for HPC in Africa

Happy Sithole – Director, Centre for High Performance Computing, South Africa

9:30 – 10:00 – Ministerial Address

The Honourable Paul Goldsmith, New Zealand’s Minister for Science and Innovation, Minister of Tertiary Education, Skills and Employment, and Minister for Regulatory Reform

10:00 – 10:30 Morning Tea

10:30 – 11:15 Keynote – Extreme Scale Multi-tasking using DALiuGE

Prof. Andreas Wicenec – Professor of Data Intensive Research, ICRAR, Perth, Australia. Task leader of the Data Layer for the SKA Science Data Processor

11:15 – 11:40 Update on New Zealand SKA Alliance participation in the SKA project

Andrew Ensor – Director, New Zealand SKA Alliance

11:45 – 12:15 Panel – Towards the SKA Tender: Challenges and Opportunities

Simon Rae (NZ) – Moderator, Prof. Tony Hey (UK), Happy Sithole (South Africa), Andrew Ensor (NZ), Juan Carlos Guzman (Australia), Prof Andreas Wicenec (Australia)

12:20 – 1:20 Lunch

1:20 – 1:50 SKA – SDP Middleware: open and collaborative HPC in the SKA

Piers Harding – Senior Consultant, Catalyst IT, New Zealand

1:50 – 2:10 IHK/McKernel: A Lightweight Multi-kernel based Operating System for Extreme Scale HPC

Balazs Gerofi – Research Scientist in the System Software Research Team at RIKEN Advanced Institute for Computational Science (AICS) – Japan

2:15 – 3:00 Keynote – Does RCU Really Work?

Paul McKenney – IBM Distinguished Engineer, IBM Linux Technology Center, USA

3:00 – 3:30 Afternoon Tea

3:30 – 4:15 Panel – Where is New Zealand’s ICT / High-Tech Ecosystem Heading?

Victoria Maclennan (2016 ICT NZer of the year) – Moderator, Clare Curran, MP (Labour Party ICT Spokesperson), Ralph Highnam (CEO, Volpara Technologies), Guy Kloss (Qrious), Mark Moir (Oracle), Dave Jaggar (ex-ARM)

4:15 – 5:00 Keynote – The Future Is Awesome (and what you can do about it)

Paul Fenwick – Public speaker, open source authority, and science educator. Managing Director, Perl Training, Australia.

5:00 – 5:15 Debate

5:15 – 7:00 Light Dinner


Day 3 – Wednesday 22nd February 2017

8:45 – 8:55 General Information and Recap

8:55 – 9:05 The Exascale Institute -and other projects from and for New Zealand

Nicolás Erdödy (Open Parallel)

9:10 – 10:00 Keynote – FLOPS to BYTES: Accelerating Beyond Moore’s Law

Satoshi Matsuoka – Professor, Tokyo Institute of Technology and Fellow, Advanced Institute for Science and Technology, Japan

10:00 – 10:30 Morning Tea

10:30 – 11:15 Panel – Does Big Science necessarily mean Big Budgets?

Prof. Michael Kelly (UK-NZ) – Moderator, Prof. Andreas Wicenec (Australia), Dr. Happy Sithole (South Africa), Dr. Andrew Ensor (New Zealand), Dr. John Gustafson (Singapore), Pete Beckman (US), Prof Satoshi Matsuoka (Japan)

11:15 – 12:00 Keynote – Beating Floats at Their Own Game: Faster Hardware and Better Answers

John Gustafson – Visiting Scientist at A*STAR and Professor in the School of Computing at the National University of Singapore. He is a former Director at Intel Labs and the former Chief Product Architect at AMD.

12:00 – 1:00 Lunch

1:00 – 1:30 Why are we still failing to attract and retain women in STEM?

Victoria Maclennan – Managing Director, Optimal BI; 2016 New Zealand ICT Professional of the Year; Co-Chair NZ Rise.

1:30 – 2:00 Breast imaging analytics that improve clinical decision-making and the early detection of breast cancer

Ralph Highnam – CEO, Volpara Solutions, New Zealand

2:05 – 2:50 Panel – Enterprise Systems: How big is the gap to reach 21st century performance? How will legacy code and hardware be updated?

Mark Moir (Oracle, NZ-US) – Moderator, Victoria Maclennan (Optimal BI, NZ), Paul McKenney (IBM, US), Paul Fenwick (Perl, Australia), John Gustafson (ex-Intel, AMD, Sun, Singapore), Nathan DeBardeleben (LANL, US)

2:50 – 3:15 Afternoon Tea

3:15 – 4:00 Keynote – How Might the Manufacturability of the Hardware at Device Level Impact on Exascale Computing?

Prof Michael J Kelly – MacDiarmid Institute for Advanced Materials and Nanomaterials, Victoria University of Wellington, New Zealand, and Department of Engineering, University of Cambridge, United Kingdom.

4:05 – 4:50 Keynote – The ARM Architecture – From Sunk to Success

Dave Jaggar – ARM’s former Head of Architecture Design, New Zealand

4:55 – 5:10 Conference Wrap-up. Feedback: Towards Multicore World 2018

Nicolás Erdödy (Open Parallel)

5:15 – 6:30 Drinks and Nibbles


Multicore World 2017

Speakers

UPDATED 16th February 2017 – BUY TICKETS HERE 

The 6th Multicore World will be Monday 20th, Tuesday 21st and Wednesday 22nd February 2017

Venue: Multicore World will be at Shed 6 at Wellington’s waterfront

Program (here)


Speakers – Preliminary list (*)

  • The Honourable Paul Goldsmith, New Zealand’s Minister for Science and Innovation, Minister of Tertiary Education, Skills and Employment, and Minister for Regulatory Reform – Ministerial Address.
  • Prof Satoshi Matsuoka – Professor, High Performance Computing Systems Group, Global Scientific Information and Computing Center (GSIC) & Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Japan. Leader of the TSUBAME series of supercomputers, recently #1 in the world for power efficiency on both the Green 500 and Green Graph 500 lists. He has written over 500 articles and chaired numerous ACM/IEEE conferences. A fellow of the ACM and European ISC, he won the ACM Gordon Bell Prize in 2011 and the 2014 IEEE-CS Sidney Fernbach Memorial Award.
  • Prof Michael Kelly, FREng, FRS Hon, FRSNZ – Prince Philip Professor of Technology at Department of Engineering, University of Cambridge, UK. Solid State Electronics and Nanoscale Science. Advanced electronic devices for very high speed operation, development of a new class of low-power-consumption devices and circuits -and their computing applications. Awarded the 2006 Hughes Medal “for his work in the fundamental physics of electron transport and the creation of practical electronic devices which can be deployed in advanced systems”. Wikipedia.
  • Prof Tony Hey, CBE, FREng, FIET, FInstP, FBCS – Chief Data Scientist, Science and Technology Facilities Council (STFC), UK. Fmr Corporate VP, Microsoft Research. Author – The Fourth Paradigm: Data-Intensive Scientific Discovery. Wikipedia.
  • Pete Beckman – Co-Director, Northwestern Argonne Institute for Science and Engineering-Mathematics-Computer Science, Argonne National Laboratory, USA. Lead, Projects Argo (an exascale operating system), Array of Things (smart city – 500 sensors installed in Chicago) & Waggle (open platform for intelligent sensors)
  • Prof Andreas Wicenec – Head of the Data Intensive Astronomy (DIA) program at ICRAR (International Centre for Radio Astronomy Research), Perth, Australia. Prof Wicenec leads the SKA Science Data Processor Data Layer design. MW2014
  • Paul Fenwick – Managing Director, Perl Training Australia. A high-profile software engineer and internationally acclaimed presenter at conferences and user groups worldwide -where he is well-known for his humour and off-beat topics, Paul won the 2013 O’Reilly Open Source award for outstanding contributions to the open source community.
  • Dave Jaggar – Retired; ARM’s former Head of Architecture Design, Cambridge, UK. Author of the ARM Architecture Reference Manual. New Zealand. During Dave’s nine years at ARM he took a British processor architecture, gave it a second integer instruction set which made it excellent for embedded control, added on-chip debug so it could be buried in a SoC, gave it a new floating point instruction set, and rebuilt the system architecture so it could run Unix-like OSes properly. Founding Director of the ARM Austin Design Center in Texas, where about half of the A-series chips for phones are designed. Oral History interview – Mountain View, CA, 2012
  • Andrew Ensor – Director New Zealand SKA Alliance (NZA), Research Pathway Senior Lecturer, Auckland University of Technology (AUT), Auckland, New Zealand. The NZ SKA Alliance is the 4th largest group worldwide working on the SKA Computing Platform. MW2013-14-15-16
  • Nathan DeBardeleben – High Performance Computing Design (HPC-DES), UltraScale Systems Research Center Lead and Resilience Lead at Los Alamos National Laboratory (LANL), USA. Expert in resilience, fault-tolerance, dependable computing, HPC and general computer reliability on new computing platforms.
  • Juan Carlos Guzmán – Software & Computing Group Leader, Australia Telescope National Facility (ATNF) – CASS (CSIRO Astronomy and Space Science), a division of CSIRO, Australia‘s national science agency.
  • Mark Moir – Principal Investigator of the Scalable Synchronization Research Group at Oracle Labs, USA – New Zealand. Mark has given a talk at every Multicore World since its creation. He works on concurrent, distributed, and real-time systems, particularly hardware and software support for programming constructs that facilitate scalable synchronization in shared memory multiprocessors. MW2012-13-14-15-16.
  • Piers Harding, Senior Solution Architect, Catalyst IT, New Zealand – Australia – UK. With 250+ engineers and developers, Catalyst is the largest open source software company in Australasia, and a global leader in OpenStack (fully deployed in the Catalyst Cloud). As a member of the NZ SKA Alliance, Catalyst has been contributing to the design of the Computing Platform of the SKA project since 2014.
  • Simon Rae, Manager Innovation Policy, Ministry of Business, Innovation & Employment (MBIE), New Zealand
  • Lev Lafayette – HPC Support and Training Officer at The University of Melbourne, Australia. OpenStack for HPC -papers here. MW2012-13-14-16

 ———-

BUY TICKETS HERE 

Please direct all information requests to Nicolás Erdödy, Conference Director, nicolas@multicoreworld.com


Multicore World 2017 – SPONSORS

Catalyst – New Zealand

MBIE

Oracle Labs

NZ Rise

Pasquale


(*) IMPORTANT NOTICE: This is a preliminary list regularly updated.

The Organiser, Open Parallel Ltd (T/A Multicore World) cannot be held liable for changes in the program due to unforeseen circumstances. You should not register, buy airfare tickets, book accommodation or make any other arrangements based only on the current names on this list. Speakers confirm their attendance well in advance, but their personal and professional circumstances can change at the last minute. The Organiser will do its best to replace them while keeping the conference to the highest possible standards. Full terms and conditions are in the Registration Form.

Panels 2016

Schedule – Multicore World 2016 v4.1 (13.2.16)

Panel 1

Parallel Worlds – Programming Models for Heterogeneous Parallel Systems

Mark Patane (NVIDIA), John Gustafson (A*STAR), Lance Brown (INTEL-Altera), Andrew Ensor (AUT)

Moderator: Peter Kogge (Notre Dame)

Monday 15 February, 14:45 – 15:20

In the data-centre, multi-core architectures are pretty common. That homogeneity is changing: to illustrate, even low-priced single-board computers are a hybrid mix of general-purpose multi-core CPUs, many-core accelerators and FPGAs. As these mixed-architecture computing platforms become more accessible, so does the pressure to write software against portable APIs and programming models. The many different architectures and component configurations have led to a plethora of open industry and de-facto standards efforts. One unified model is unlikely: no single model covers the spectrum of demands, from extreme scalability to optimal performance, tempered with ease of use. Instead the panel will discuss the critical abstractions and directions we should be considering, and in turn identify what efforts we should pay attention to and why.

Panel 2

Concurrency – Patterns for Architecting Distributed Systems

Richard O’Keefe (Otago), Balazs Gerofi (RIKEN), Geoffrey C Fox (Indiana), Michael Barker (LMAX)

Moderator: Mark Moir (Oracle)

Tuesday 16 February, 14:40 – 15:20

The art of architecting and building distributed systems is about the composition of concurrent components. Patterns for architecting distributed systems can be implicit in the APIs used (e.g. MPI and OpenMP), explicit in larger system components such as messaging sub-systems, or directly supported by a language and its runtime (e.g. Erlang). Developing distributed concurrent systems is now commonplace. The panel will discuss trends, emergent paradigms, common patterns and their trade-offs for developing concurrent systems (distributed or not), as illustrated in the sketch below.
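To make the contrast concrete: the share-nothing, message-passing style that Erlang supports natively can be approximated on the JVM with a bounded queue between concurrent components. The sketch below is our own illustration of the pattern (the class and names are hypothetical), not code from any panellist:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A minimal message-passing pattern: two concurrent components that
// share nothing except a bounded "mailbox" queue, in the spirit of
// the messaging sub-systems the panel description mentions.
public class MessagePassing {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = mailbox.take()).equals("stop")) {
                    System.out.println("received: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        mailbox.put("hello");
        mailbox.put("world");
        mailbox.put("stop"); // "poison pill" shuts the consumer down
        consumer.join();
    }
}

The poison-pill message is one common shutdown idiom; a real messaging sub-system would add routing, back-pressure policy and supervision on top of this core hand-off.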

Panel 3

Making the SKA platform accessible for the science

Alex Szalay (Johns Hopkins), Markus Dolensky (ICRAR), Happy Sithole (CHPC), Chun-Yu Lin (NCHC)

Moderator: Nicolás Erdödy (Open Parallel)

Wednesday 17 February, 11:35 – 12:10

Most scientists are not software developers or engineers, nor are they interested in an in-depth understanding of the nitty-gritty of parallel architectures, distributed systems and writing production-quality code; they are, after all, interested in doing the science. The panel will discuss what is needed to make the SKA platform more accessible to scientists: how it can and should support the science, how that might be achieved, how the platform should operate, and how the associated software should be governed.

Panel 4

HPC role in Science and Economic Policy Development

Nicola Gaston (MacDiarmid), Happy Sithole (CHPC), Chun-Yu Lin (NCHC), Tshiamo Motshegwa (University of Botswana)

Moderator: Don Christie (Catalyst)

Wednesday 17 February, 14:50 – 15:20

Life, the biosphere, weather, finance, and our economic and social interactions are all complex systems. HPC has been pivotal in enabling dramatic advances in understanding complex networks: generative and agent-based simulations, machine learning, dynamical systems and network analysis are leading a Cambrian explosion of scientific advancement that is flowing into business and government. "Computational" is now often prefixed to traditional disciplines, highlighting the augmenting role of computing algorithms in experimentation, investigation and analysis. The panel will discuss the effect HPC is having on traditional disciplines in science, government and business. Are HPC centres of excellence needed for developing computational thinking and practice? What are the social, ethical and economic concerns implied by an algorithmic world?

Panel 5

Bridging HPC and Clouds – Physical Infrastructure, Networks, Storage and Applications

Peter Kogge (Notre Dame), Alex Szalay (Johns Hopkins), Geoffrey C. Fox (Indiana), John Gustafson (A*STAR)

Moderator: Robert O’Brien

Wednesday 17 February, 16:10 – 16:50

Today’s HPC is tomorrow’s general computing infrastructure. Data-intensive applications in finance, engineering and science are the tip of the iceberg: the functionality of many systems is increasingly driven by data. Trends in consumer applications in big data, machine learning (ML) and deep neural networks (DNN) are pushing at the multi-core CPU boundary, and the wider deployment of sensors, robotics and pervasive computing (IoT) will demand new approaches. All are harbingers of change. Speed, data volume and answers-per-watt metrics will drive demand for HPC clouds (IaaS and PaaS). The panel will discuss what is needed to enable HPC clouds: how things should change from current models of architecting physical and software infrastructure, how networking and storage approaches factor in, and what opportunities COTS (commercial off-the-shelf) components offer for taking a different approach from that typical of HPC systems.

 —

Program 2016

13 February 2016 – (another) UPDATE…!! 

Schedule_Multicore_World_2016_v4.1_13.2.16

PROGRAM

Day 1: Monday 15 February 2016

 

8:30 – 9:00 Opening Welcome – Setting the Scene

Nicolás Erdödy (Open Parallel)

— —

9:00 – 9:30 “Bridging Islands – You build it, you run it, and make it scale”

Robert O’Brien – New Zealand

 

Technology advancements develop in islands of practice, driven by the demands of a particular market context. Multi-core CPUs and many-core GPUs are the norm in today's hardware; tailored runtimes, associated software architectures and development practices are not. That is changing: data-intensive applications, often with time-critical demands, are pushing that boundary. A challenge, therefore, is to match the workload demands of cloud computing with compute-intensive pipelines.

The scaling of cloud-based mobile services and big data is creating a renaissance in software development, spanning distributed processing and distributed data, programming languages, software architecture, physical infrastructure and process improvement. This is exposing more developers to old ideas. The rush to do more, be more agile and deal with scale in the Cloud is also flowing into Enterprise practice. Likewise, practice is being redefined and refined as the Cloud extends out to the edge.

In this context, on-demand HPC is part of the infrastructure equation, in the data-centre and at the edge. Understanding our world through more data-intensive applications requires bridging Cloud, Enterprise and IoT with HPC.

— —

 

9:30 – 10:30  “The past and future of multi-core at the high end”

Keynote – Prof Peter Kogge (University of Notre Dame) – USA

Moore’s Law has driven computing for almost 50 years, but perhaps the biggest architectural change occurred in 2004, with the rise of multi-core. This talk will discuss the fundamental reasons that forced the change, and then compare how the characteristics of cores have changed since then. In addition, the relationship between core characteristics and performance at the high end (through data from both TOP500 and GRAPH500) will be explored, with discussions of the current set of design issues architects are wrestling with. The talk will conclude with some projections on what a multi-core exascale machine of the future might look like, what new architectural features may emerge, and what might be the effect on software for such machines.

— —

11:00 – 11:55  “A Radical Approach to Computation with Real Numbers”

Keynote – Prof John Gustafson (A*Star) – Singapore

If we are willing to give up compatibility with IEEE 754 floats and design a number format with goals appropriate for 2016, we can achieve several goals simultaneously: Extremely high energy efficiency and information-per-bit, no penalty for decimal operations instead of binary, rigorous bounds on answers without the overly pessimistic bounds produced by interval methods, and unprecedented high speed up to some precision. This approach extends the ideas of unum arithmetic introduced two years ago by breaking completely from the IEEE float-type format, resulting in fixed bit size values, fixed execution time, no exception values or “gradual underflow” issues, no wasted bit patterns, and no redundant representations (like “negative zero”). As an example of the power of this format, a difficult 12-dimensional nonlinear robotic kinematics problem that has defied solvers to date is quickly solvable with absolute bounds. Also unlike interval methods, it becomes possible to operate on arbitrary disconnected subsets of the real number line with the same speed as operating on a simple bound.
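As a hedged aside (our illustration, not material from the talk): two of the IEEE 754 behaviours the abstract pushes back against, decimal rounding error and redundant representations such as "negative zero", are visible in a few lines of Java:

// Illustration only: two well-known IEEE 754 quirks that unum-style
// formats aim to eliminate (rounding error and redundant encodings).
public class IeeeQuirks {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation, so their
        // sum is not exactly 0.3.
        System.out.println(0.1 + 0.2 == 0.3);   // false
        System.out.println(0.1 + 0.2);          // 0.30000000000000004

        // "Negative zero" compares equal to zero yet occupies a
        // distinct bit pattern: one of the redundant representations
        // the abstract mentions.
        System.out.println(-0.0 == 0.0);        // true
        System.out.println(Double.doubleToRawLongBits(-0.0)
                == Double.doubleToRawLongBits(0.0)); // false
    }
}

A format with no wasted or redundant bit patterns spends every encoding on a distinct value, which is part of the "information-per-bit" argument above.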

— —

11:55 – 12:45  “SKA Signal Processing and Compute Island Designs”

Dr Andrew Ensor (AUT University) – New Zealand

The Square Kilometre Array is the largest mega-Science project of the next decade and represents numerous firsts for New Zealand. This talk will outline the project, its unprecedented computing challenges, and New Zealand’s key involvement. Design work for its eventual computer system includes prototyping manycore processors, GPU and FPGA, new compression and communication algorithms, next generation Big Data boards, advances in middleware and software development environments.

— —

13:40 – 13:50 Ask an Expert #1

13:50 – 14:05  Fireside Chat with Geoffrey C. Fox (Indiana) – Interviewer: Richard O’Keefe (Otago)

— —

14:05 – 14:45  “Methods of Power Optimization and code modernization for High Performance Computing Systems”

drs. Martin Hilgeman (Dell) – Netherlands

With all the advances in massively parallel and multi-core computing with CPUs and accelerators, it is often overlooked whether the computational work is being done in an efficient manner. This presentation drills down into the incompatibility between Moore's Law and Amdahl's Law and the challenge they pose for a system builder trying to program for energy-efficient parallel application performance. It also covers some of the latest trends in code modernization for multi-core systems, in order to unleash the performance potential hidden in clusters of COTS components with a fast, low-latency network.
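To make the Moore's-Law-versus-Amdahl's-Law tension concrete (our worked example, not material from the talk): Amdahl's Law bounds the speedup of a program with parallel fraction p on n cores at S(n) = 1 / ((1 - p) + p/n), so adding cores yields rapidly diminishing returns unless the serial fraction is driven down. A minimal Java sketch:

// Amdahl's Law: S(n) = 1 / ((1 - p) + p / n).
// Even with 95% of the work parallelised, speedup is capped at 20x,
// no matter how many cores Moore's Law delivers.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.95; // parallel fraction of the workload
        for (int n : new int[] {1, 8, 64, 512, 4096}) {
            System.out.printf("%5d cores -> %.2fx%n", n, speedup(p, n));
        }
        // Output approaches the asymptote 1 / (1 - p) = 20x.
    }
}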

— —

14:45 – 15:20  “Parallel Worlds – Programming Models for Heterogeneous Parallel Systems”

Panel Discussion: Mark Patane (NVIDIA), John Gustafson (A*STAR), Lance Brown (INTEL-Altera), Andrew Ensor (AUT). Moderator: Peter Kogge (Notre Dame)

In the data-centre, multi-core architectures are now common, but that homogeneity is changing: even low-priced single-board computers are a hybrid mix of general-purpose multi-core CPUs, many-core accelerators and FPGAs. As these mixed-architecture computing platforms become more accessible, so does the pressure to write software against portable APIs and programming models. The many different architectures and component configurations have led to a plethora of open industry and de-facto standards efforts, yet one unified model is unlikely: no single model can cover the spectrum of demands, from extreme scalability to optimal performance, tempered with ease of use. Instead, the panel will discuss the critical abstractions and directions we should be considering, and in turn identify which efforts we should pay attention to and why.

— —

15:40 – 16:20  “Meeting High Memory Bandwidth Challenges in Heterogeneous Computing in an OpenCL Environment”

Lance Brown (Intel Corporation | Military, Aerospace, Government (MAG) Business Unit) – USA

When designing many HPC deployments, data movement between processing elements, along with on-chip bandwidth, is one of the greatest challenges. FPGAs have inherent advantages for inter-element data movement, with rich and flexible transceiver options that CPU and GPU elements do not have. Intel Programmable Solutions Group (PSG) announced at SC ’15 the new Stratix 10 System in Package (SiP), allowing up to 8 GB of on-chip DRAM and 1 TB/s of memory bandwidth using Intel’s patented Embedded Multi-die Interconnect Bridge (EMIB) technology. This session will explore how Stratix 10 SiP, when introduced to CPU and GPU systems, can increase overall peak TFLOP/Watt performance, increase data bandwidth ingress/egress and lower overall total power, while maintaining a standard OpenCL programming environment. An example system performing Convolutional Neural Network learning will be highlighted.

— —

16:20 – 16:55  “Who would win in a multicore war: the “good guys” or the “bad guys”?”

Dr. Mark Moir (Oracle Labs) – USA & New Zealand

This talk will neither answer the question of its title, nor settle the question of exactly who these “good guys” and “bad guys” are.  Instead, it will include some observations about the role of multicore computing for various purposes that might be considered “good” or “bad”, depending on context and perspective.  Possible topics include: cryptocurrencies such as Bitcoin, denial of service attacks and defenses against them, encryption and privacy.

— — — —

 

Day 2: Tuesday 16 February 2016

 

8:30 – 8:45   General Information and Recap

8:45 – 9:00  “Unums Implementations”

Prof John Gustafson (A*Star) – Singapore

Short talk: Report on the progress of efforts to put unums into C, Julia and Python.

— —

9:00 – 9:30 “Weaving a Runtime Fabric: The Open Parallel Stack”

Dr. Richard O’Keefe (University of Otago) – New Zealand

The Open Parallel Stack (TOPS) is something we need but don't have yet. The idea is to assemble a framework, from the operating system up, that enables testing and debugging of high-performance programs at small to medium scale before deploying them to high-demand systems like the SKA.

The intention is not to do research or major development but to collect, preserve, and build on Open Source work.

— —

9:30 – 10:30 “Big Data HPC Convergence”

Keynote – Prof Geoffrey C. Fox (Indiana University) – USA

Two major trends in computing systems are the growth in high performance computing (HPC), with an international exascale initiative, and the big data phenomenon, with an accompanying cloud infrastructure of well-publicized, dramatic and increasing size and sophistication. In studying and linking these trends one needs to consider multiple aspects: hardware, software, applications/algorithms, and even broader issues like business models and education. In this talk we study in detail a convergence approach for software and applications/algorithms and show what hardware architectures it suggests. We start by dividing applications into data plus model components and classifying each component (whether from Big Data or Big Simulations) in the same way. This leads to 64 properties divided into 4 views: Problem Architecture (macro patterns); Execution Features (micro patterns); Data Source and Style; and finally the Processing (runtime) view. We discuss convergence software built around HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) http://hpc-abds.org/kaleidoscope/ and show how one can merge Big Data and HPC (Big Simulation) concepts into a single stack. We give examples of data analytics running on HPC systems, including details on persuading Java to run fast. Some details can be found at http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf
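The remark about "persuading Java to run fast" hints at well-known JVM pitfalls for data analytics; one standard example (ours, not Prof Fox's) is the cost of boxed numbers versus primitive arrays:

import java.util.ArrayList;
import java.util.List;

// Boxed Double values scatter data across the heap; a primitive
// double[] keeps it contiguous and cache-friendly, one of the usual
// first steps in making JVM analytics code fast.
public class BoxedVsPrimitive {
    public static void main(String[] args) {
        int n = 5_000_000;

        List<Double> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) boxed.add((double) i);
        double[] primitive = new double[n];
        for (int i = 0; i < n; i++) primitive[i] = i;

        long t0 = System.nanoTime();
        double s1 = 0;
        for (Double d : boxed) s1 += d;     // unboxing on every step
        long t1 = System.nanoTime();
        double s2 = 0;
        for (double d : primitive) s2 += d; // straight-line arithmetic
        long t2 = System.nanoTime();

        System.out.printf("boxed: %d ms, primitive: %d ms (sums %.0f / %.0f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, s1, s2);
    }
}

On typical JVMs the primitive loop is several times faster; exact numbers vary with heap size and GC settings, so treat the timing as illustrative only.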

— —

11:00 – 11:55  “Taming big data with micro-services”

Keynote – Giuseppe Maxia (VMware) – Spain

Presented by Robert O’Brien

Big data is a daunting concept, as it usually means more data than we can handle. When looked at more closely, though, big data is just a very large collection of small data pieces.

There are solutions for this problem, the most widespread of which uses clusters of computers to tackle a problem in parallel. Think of Hadoop, where thousands of nodes can analyze data concurrently and report a unified result. But the growing challenges of data collection make such solutions hard to adapt. What if your data requires a different kind of software that does not fit with Hadoop? What if you discovered a completely new algorithm that allows you to get results more quickly and reliably, but you are stuck with a humongous cluster of inflexible software?

Open your eyes to the world of micro-services, where your software takes the shape of clusters in seconds rather than days or months. The old method of divide-and-conquer has never felt this exciting!

— —

11:55 – 12:20  “Accelerating AI using GPUs”

Michael Wang (NVIDIA) – Australia

A brief overview of deep neural network (DNN) training on GPUs, from supercomputing and Big Data, to ultra-low power embedded platforms that enable smarter autonomous machines.

We will look at the latest developments in GPU architectures that enable these discoveries, as well as CUDA-accelerated software libraries that are accelerating machine learning research and rapid innovation.

— —

12:20 – 12:45  “A Laconic HPC with an Orgone Accumulator”

Lev Lafayette (University of Melbourne) – Australia

After four years of operation, the University of Melbourne's high performance computer system, "Edward", is being gradually decommissioned this year. Its replacement, named "Spartan", has a novel design based on metrics of Edward's usage, which showed a large proportion of single-core tasks.
In these circumstances the new HPC system will be smaller in terms of bare metal, but larger in the sense that it will connect to the NeCTAR (National eResearch Collaboration Tools and Resources) OpenStack research cloud for single-core and shared-memory multi-core tasks. In addition to describing the architectural differences, there will also be some discussion of the efficiency and effectiveness of the two multicore implementations.

— —

13:40 – 14:10  “State of the Art: Apache Cassandra”

Aaron Morton (Apache Cassandra Committer) – New Zealand

Apache Cassandra provides a scalable, highly available, geographically distributed data store that has proven itself in production environments. It implements an Amazon Dynamo model for cluster management, with a Bigtable-inspired storage engine that uses a Staged Event Driven Architecture (SEDA). While Cassandra has been very successful, there are alternatives: Scylla DB launched in 2015 as a Cassandra-compatible platform underpinned by a Thread Per Core (TPC) architecture, and work is now under way to investigate a Thread Per Core architecture in Cassandra.

In this talk Aaron Morton, Co-Founder at The Last Pickle and Apache Cassandra Committer, will discuss the current Apache Cassandra architecture and possible future directions. The talk will examine the cluster design before diving down to the node level and comparing the SEDA and TPC approaches.
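The SEDA-versus-TPC distinction can be summarised as follows (our sketch, not code from Cassandra or The Last Pickle): in a staged design, each stage has its own queue and thread pool and a request hops between stages, whereas a thread-per-core design shards the data so one pinned thread owns each shard end to end.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// SEDA-style: each stage is a queue plus a thread pool, and a
// request is handed off between stages (here: parse, then write).
public class StagedPipeline {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService parseStage = Executors.newFixedThreadPool(2);
        ExecutorService writeStage = Executors.newFixedThreadPool(2);

        for (int i = 0; i < 4; i++) {
            final int req = i;
            parseStage.submit(() -> {
                String parsed = "parsed-" + req;          // stage 1: parse
                writeStage.submit(() ->                    // hand off
                        System.out.println("wrote " + parsed)); // stage 2: write
            });
        }

        parseStage.shutdown();
        parseStage.awaitTermination(5, TimeUnit.SECONDS);
        writeStage.shutdown();
        // A thread-per-core design would instead pin one thread per
        // core and route each request to the thread owning its data
        // shard, eliminating the cross-stage queuing shown here.
    }
}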

— —

14:10 – 14:40  “Multi-core Systems in a Low Latency World”

Michael Barker (LMAX Exchange) – New Zealand

Building systems with low and predictable response times using modern software and hardware is notoriously tricky.  Unfortunately multi-core systems don’t provide direct benefit for reducing response time, but do provide some interesting opportunities for scaling up.  This talk will look at some of the ways that LMAX Exchange have addressed these challenges to hit typical latencies of 100µs and become one of the world’s fastest foreign currency exchanges. We will introduce an open source tool called the Disruptor that we use within the core of our Exchange.

This is a technically detailed talk; a basic understanding of concurrency primitives (e.g. locks, condition variables, memory barriers) will help the audience follow along.
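For context, a minimal use of the open-source Disruptor mentioned above might look like the following. This is a sketch assuming the Disruptor 3.x API, not code from LMAX's exchange; the event class and values are hypothetical:

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorExample {
    // Event slot stored in the pre-allocated ring buffer
    static class ValueEvent {
        long value;
    }

    public static void main(String[] args) {
        // Ring buffer size must be a power of two
        Disruptor<ValueEvent> disruptor = new Disruptor<>(
                ValueEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Consumer runs on its own thread; no locks on the hot path
        disruptor.handleEventsWith(
                (ValueEvent event, long sequence, boolean endOfBatch) ->
                        System.out.println("consumed " + event.value));

        RingBuffer<ValueEvent> ring = disruptor.start();

        // Producer: claim a slot, write into it, then publish
        for (long i = 0; i < 5; i++) {
            long seq = ring.next();
            try {
                ring.get(seq).value = i;
            } finally {
                ring.publish(seq);
            }
        }
        disruptor.shutdown(); // drains outstanding events
    }
}

Pre-allocating the event slots and reusing them is central to the design: the hot path allocates nothing, which keeps garbage collection out of the latency profile.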

— —

14:40 – 15:20 – “Concurrency – Patterns for Architecting Distributed Systems”

Panel Discussion: Richard O’Keefe (Otago), Balazs Gerofi (RIKEN), Geoffrey C Fox (Indiana), Michael Barker (LMAX). Moderator: Mark Moir (Oracle)

The art of architecting and building distributed systems is about the composition of concurrent components. Patterns for architecting distributed systems can be implicit in the APIs used (e.g. MPI and OpenMP), explicit in larger system components such as messaging sub-systems, or directly supported by a language and its runtime (e.g. Erlang). Developing distributed concurrent systems is now commonplace. The panel will discuss trends, emergent paradigms, common patterns and their trade-offs for developing concurrent systems (distributed or not).

15:40 – 15:55  Fireside Chat with Alex Szalay (Johns Hopkins). Interviewer: Nicola Gaston (MacDiarmid Institute)

— —

15:55 – 16:30  “An Overview of RIKEN’s System Software Research and Development Targeting the Next Generation Japanese Flagship Supercomputer”

Dr. Balazs Gerofi (RIKEN) – Japan

The RIKEN Advanced Institute for Computational Science has been appointed by the Japanese government as the main organization leading the development of Japan's next-generation flagship supercomputer, the successor of the K computer. Part of this effort is to design and develop a system software stack that suits the needs of future extreme-scale computing. In this talk, we first provide a high-level overview of RIKEN's system software stack effort, covering various topics including operating systems, I/O and networking. We then narrow the focus to OS research and describe IHK/McKernel, our hybrid operating system framework. IHK/McKernel runs Linux side-by-side with a light-weight kernel on compute nodes, with the primary motivation of providing scalable, consistent performance for large-scale HPC simulations while at the same time retaining a fully Linux-compatible execution environment. We detail the organization of the stack and provide preliminary performance results.

— — 

 

16:30 – 17:00  “Science, Scientific Infrastructure, and Individual Incentives”

Dr. Nicola Gaston (The MacDiarmid Institute for Advanced Materials and Nanotechnology) – New Zealand

The conduct of scientific research depends crucially on underlying technologies and infrastructure that enable measurement, calculation, and analysis. Using HPC as an example, I'll discuss the progress of areas of fundamental research as a function of the development of underlying technologies, with particular reference to computational chemistry and physics. I'll invite you to think about how technologies, and the cultures around them, enable scientific progress, with reference to the mobility of scientists, knowledge transfer, transparency in science communication, the commercialization of science, and the development of conversations about scientific ethics in times of culture change.

— — — —

 

 

Day 3: Wednesday 17 February 2016

 

8:30 – 8:45   General Information and Recap

8:45 – 9:15  “SKA’s Science Data Processor Design in Cloudy Regions”

Markus Dolensky (International Centre for Radio Astronomy Research – ICRAR) – Australia

Despite the SKA rebaselining and the inherent data parallelism, it is apparent that existing storage and processing systems do not scale well given a correlator output rate of 1 TByte/s. Data flow management is a key element of the solution. The presentation discusses design considerations relevant to the SKA Science Data Processor and reports on practical experiments in HPC and Cloud environments. Connectivity to regional compute infrastructure is another issue, but once resolved it is a means to overcome the SKA power limit.

— —

9:15 – 10:15  “Data Intensive Discoveries in Science: the Fourth Paradigm and Beyond”

Keynote – Prof Alex Szalay (Johns Hopkins University) – USA

The talk will describe how science is changing as a result of the vast amounts of data we are collecting, from gene sequencers to telescopes and supercomputers. This "Fourth Paradigm of Science", predicted by Jim Gray, is moving at full speed and is transforming one scientific area after another. The talk will present various examples of the similarities among the emerging new challenges and how this vision is being realized by the scientific community. Scientists are increasingly limited by their ability to analyze the large amounts of complex data available. These data sets are generated not only by instruments but also by computational experiments; the sizes of the largest numerical simulations are on par with data collected by instruments, crossing the petabyte threshold this year. The importance of large synthetic data sets is increasingly significant, as scientists compare their experiments to reference simulations. All disciplines need a new "instrument for data" that can deal not only with large data sets but with the cross product of large and diverse data sets. There are several multi-faceted challenges related to this conversion, e.g. how to move, visualize, analyze and in general interact with petabytes of data.

— —

10:45 – 11:25  “Building a Sustainable Model of HPC Services: The Challenges and Benefits.”

Dr Happy Sithole (Director, Centre for High Performance Computing) – South Africa

South Africa started looking at developing HPC capabilities in 2006. In 2007, the Centre for High Performance Computing started production, providing HPC resources to the broader South African research community. Building the infrastructure and the community came with a number of challenges, technological and political, which had to be addressed carefully in order to provide a sustainable system supporting both the economic and academic ambitions of a developing nation. In this presentation, we will discuss the South African experience of developing cyber-infrastructure and extend this to initiatives around the Southern African region.

— —

11:25 – 11:55  “Making the SKA platform accessible for the science”

Panel Discussion: Alex Szalay (Johns Hopkins), Markus Dolensky (ICRAR), Happy Sithole (CHPC), Duncan Hall (DIA). Moderator: Nicolás Erdödy (Open Parallel)

Most scientists are not software developers or engineers, nor are they interested in an in-depth understanding of the nitty-gritty of parallel architectures, distributed systems and writing production-quality code; they are, after all, interested in doing the science. The panel will discuss what is needed to make the SKA platform more accessible to scientists: how it can and should support the science, how that might be achieved, how the platform should operate, and how the associated software should be governed.

— —

11:55 – 12:30  “Informatics in New Zealand Forestry – Data challenges for 1 billion Radiata trees”

Bryan Graham (Science Leader, Forest Industry Informatics, Scion) – New Zealand

New Zealand is currently the world's largest exporter of raw Radiata pine logs. Each log has to be individually managed and measured throughout its ~28-year lifespan, and then on through the value chain. Historically this has been done using manual measurements, estimation, and "gut feel". As new technologies are implemented, the scale and complexity of the data being collected, processed, and used to drive decision making is escalating. Coupled with increasingly demanding eResearch requirements, developing and delivering decision-making systems to industry is challenging. This presentation highlights some of the key challenges facing informatics at Scion, the main provider of forest research in New Zealand.

— —

12:30 – 12:45  Fireside Chat with Peter Kogge (Notre Dame). Interviewer: Nicolás Erdödy

— —

 

13:40 – 13:50  Ask an Expert #2

— —

13:50 – 14:25 “Developing HPC Research Infrastructure and Capacity Building for SKA Preparedness in a Developing Country”

Dr. Tshiamo Motshegwa (University of Botswana) – Botswana

This talk discusses education, infrastructure development, research, policy development and collaborations, together with the opportunities and challenges of developing HPC capacity in Botswana.

Botswana, represented by the University of Botswana, is participating in the SADC regional collaborative framework for High Performance Computing and Big Data – a framework that aims to develop regional HPC capacity through infrastructure development, R&D, the building of strategic partnerships, and governance.

Through this framework, the Department of Computer Science is executing aspects of the University's research strategy and strategic plan by setting up an HPC data centre and infrastructure to provide HPC-as-a-Service to the university. There is an immediate need to support curriculum development and teaching in the department in the areas of distributed systems, parallel and concurrent programming, and Computational Science and Engineering (CSE), to provide a base for multidisciplinary teaching and potential research in the Faculty of Science (in Computational Chemistry, Physics and Geology, Mathematical Modelling, Earth Observation and Geographical Information Systems) and in the Faculty of Engineering.

There is also an immediate need to develop technical and human capacity in HPC to support research in current MSc and PhD programmes, and thereby grow a pipeline of researchers. Infrastructure must also be built to support potential national and regional projects and to prepare for Botswana's participation in and commitment to global science projects such as the Square Kilometre Array (SKA) astrophysics project.

The infrastructure being developed is also part of a wider, long-term national vision: for the university to provide national leadership in HPC and to grow this service organically, ultimately serving country stakeholders such as research institutes and the tenants of the Botswana Innovation Hub science park as a national service.

The Department of Computer Science is also working at a policy level with the Department of Research Science and Technology (DSRT) of the Ministry of Science and Technology on the formulation of the country's Space Science and Technology strategy, which covers the development of supporting platforms such as HPC cyber-infrastructure.

— —

14:25 – 14:45  “What is OpenHPC?”

Balazs Gerofi (RIKEN) – Japan

An overview of this Linux Foundation collaborative project and its participants.

— —

14:45 – 15:20  “HPC role in Science and Economic Policy Development”

Panel Discussion: Nicola Gaston (MacDiarmid), Happy Sithole (CHPC), Chun-Yu Lin (NCHC), Tshiamo Motshegwa (University of Botswana). Moderator: Don Christie (Catalyst)

Life, the biosphere, weather, finance, and our economic and social interactions are all complex systems. HPC has been pivotal in enabling dramatic advances in understanding complex networks: generative and agent-based simulations, machine learning, dynamical systems and network analysis are leading a Cambrian explosion of scientific advancement that is flowing into business and government. "Computational" is now often prefixed to traditional disciplines, highlighting the augmenting role of computing algorithms in experimentation, investigation and analysis. The panel will discuss the effect HPC is having on traditional disciplines in science, government and business. Are HPC centres of excellence needed for developing computational thinking and practice? What are the social, ethical and economic concerns implied by an algorithmic world?

— —

15:40 – 16:10  “Innovative Applications, Scientific Research and simPlatform in NCHC”

Dr. Chun-Yu Lin (National Center for High-performance Computing -NCHC, National Applied Research Laboratories – NARLabs) – Taiwan

As the supercomputing center in Taiwan, NCHC plays an enabling role in bridging the gap between research and practice. Several innovative applications arising from collaborations with academia, government and industry are presented in this talk. We also introduce research related to bio-imaging and materials science that has great practical impact and places huge demands on computing methods and resources. Lastly, we introduce simPlatform, which aims to improve the ecosystem for the computational sciences.

— —

16:10 – 16:50  “Bridging HPC and Clouds – Physical Infrastructure, Networks, Storage and Applications”

Panel Discussion: Peter Kogge (Notre Dame), Alex Szalay (Johns Hopkins), Geoffrey C. Fox (Indiana), John Gustafson (A*STAR). Moderator: Robert O’Brien

 

Today’s HPC is tomorrow’s general computing infrastructure. Data-intensive applications in finance, engineering and science are the tip of the iceberg: the functionality of many systems is increasingly driven by data. Trends in consumer applications in big data, machine learning (ML) and deep neural networks (DNN) are pushing at the multi-core CPU boundary, and the wider deployment of sensors, robotics and pervasive computing (IoT) will demand new approaches. All are harbingers of change. Speed, data volume and answers-per-watt metrics will drive demand for HPC clouds (IaaS and PaaS). The panel will discuss what is needed to enable HPC clouds: how things should change from current models of architecting physical and software infrastructure, how networking and storage approaches factor in, and what opportunities COTS (commercial off-the-shelf) components offer for taking a different approach from that typical of HPC systems.

 

— —

16:50 – 17:15  Closing Remarks, Discussion and Feedback. Towards Multicore World 2017.

— —

Interested?

Register HERE – limited tickets still available.

 

 

Multicore World 2016


5th Multicore World

15, 16, 17 February 2016 – Wellington, New Zealand

Venue: Shed 6 – Wellington’s waterfront

BUY TICKETS HERE 

Program

Schedule_Multicore_World_2016_v4.1_13.2.16

 

Confirmed Keynotes, Speakers and Panelists

Prof Alex Szalay – Bloomberg Distinguished Professor at the Johns Hopkins University – USA

Prof Geoffrey C. Fox – Distinguished Professor of Computer Science, Informatics and Physics at Indiana University – USA

Dr. Happy Sithole – Director, Centre for High Performance Computing (CHPC) of the Council for Scientific and Industrial Research (CSIR) – Republic of South Africa

Prof Peter Kogge – IBM Fellow, University of Notre Dame – USA

Prof John Gustafson – Gustafson’s Law, A*Star – Agency for Science, Technology and Research – Singapore

Lance Brown – Sr. Strategic & Technical Marketing Mgr | CyberSecurity, Intelligence, Analytics, HPC Infrastructure. Intel Corporation | Military, Aerospace, Government (MAG) Business Unit – USA

Dr. Nicola Gaston – Deputy Director, The MacDiarmid Institute for Advanced Materials and Nanotechnology – New Zealand

Dr. Balazs Gerofi – RIKEN Advanced Institute for Computational Science – Japan

Dr. Mark Moir – Principal Investigator of the Scalable Synchronization Research Group, Oracle Labs – USA and New Zealand

drs. Martin Hilgeman – High Performance Computing consultant, Dell – The Netherlands

Dr. Tshiamo Motshegwa – HPC Lecturer / Researcher, University of Botswana – Botswana

Dr. Andrew Ensor – Director, New Zealand SKA Alliance, AUT University – New Zealand

Dr. Chun-Yu Lin – Associate Researcher, National Center for HPC, National Applied Research Laboratories (NARLabs) – Taiwan

Bryan Graham – Science Leader, SCION – New Zealand

Aaron Morton – Apache Cassandra Committer, The Apache Software Foundation – New Zealand

Markus Dolensky – Technical Leader, ICRAR – Australia

Don Christie – Director, Co-Owner, Catalyst – New Zealand

Robert O’Brien – New Zealand

Mark Patane – Country Manager, Australia / New Zealand, NVIDIA – Australia

Dr. Richard O’Keefe – University of Otago – New Zealand

Lev Lafayette – HPC support V3, NeCTAR, The University of Melbourne – Australia

Michael Barker – LMAX Exchange – New Zealand

Michael Wang – Solutions Architect, NVIDIA – Australia

Duncan Hall – Enterprise Architect, DIA – New Zealand

— —

Program 2016

Schedule 2016

Sponsors

Previous event – Multicore World 2015