Thursday 27th – Friday 28th February
Four key factors in computing for the Square Kilometre Array
Tim Cornwell, SKA architect – UK
The Square Kilometre Array is now in design and is due to move to construction in 2017. The array is in fact three telescopes, each of which will provide a different window into the universe, yielding insights into pressing astrophysical questions. This groundbreaking scientific instrument will be one of the jewels of 21st-century science. Such a system has multiple components, with the computing providing a framework for the control, monitoring, data flow, and data processing functions. I will concentrate on those issues connected with the data. There are four interrelated factors that work together tightly to constrain the final solution: data rate, computer architecture, algorithms, and software systems. An integrated solution must address all four factors in a combined architecture. I will describe this landscape as we see it now.
(via video conference)
A Persistence Layer for the SKA Science Data Processor
Prof. Andreas Wicenec, UWA – Australia
The SKA will require flexible persistent short-, mid- and long-term storage and data access infrastructures distributed across several locations on wide-area networks spanning hundreds of kilometres. Long-term archival storage and access will potentially be spread around the globe across several regional data and processing centres. Models similar to the Worldwide LHC Computing Grid (WLCG) could be adopted and adjusted to the SKA requirements, provided that we can attract similar scientific and political interest in certain regions of the world. Although the SKA requirements are significantly different at first glance, a more careful look reveals quite similar ‘valuable’ information content in SKA and LHC data. Filtering and preserving those valuable data items requires novel, automated data mining and a sophisticated data life-cycle management infrastructure, both largely unheard of in current astronomical data archives. In this paper we will present the basic design for the SKA data layer and discuss why we think this design and the development approach will be able to address the challenges of the SKA.
Reference: ‘The LHC’s worldwide computer’, CERN Courier, 2013, http://cerncourier.com/cws/article/cern/52744
PowerMX: a flexible platform specification for signal processing
Brent Carlson, NRC – Canada
SKA Phase 1 (SKA1) presents significant signal processing challenges, both in its real-time computational requirements and in a system lifetime that stretches from pre-construction through to operations. PowerMX is a specification that aims to provide a standard platform with enough commonality to allow sharing and plug-and-play of modules built to the specification, but with enough flexibility and performance to fuel innovation in varied uses and deployments, hopefully for many years to come. A preliminary hardware Base Specification (informed by considerable input from multiple organizations) now exists, and designs built to that specification are in progress. This talk will discuss the Base Specification, the current status of some designs, and examples of the uses and deployments currently being envisioned.
2nd presentation: “White board discussion of correlators and beamformers”
An open conversation with the audience.
The Revolutionary Impact of CUDA 6.0 on multicore programming
Alex St. John – United States – New Zealand
CUDA 6.0 enables the CPU and GPU to share access to the same memory simultaneously. This seemingly trivial advance in multicore programming APIs may actually have a revolutionary impact on heterogeneous programming. Having been privileged to participate in Nvidia’s early-access developer program for CUDA 6.0, I realized in the course of porting a large body of complex code to the new architecture that shared memory support in CUDA 6.0 is not merely incremental but potentially revolutionary for the way large-scale heterogeneous programs can be developed. In a body of sample code I recently released publicly, I demonstrated that with CUDA 6.0 it is now possible to create C++ objects that compile and run on the CPU or GPU dynamically, depending on which processor’s memory the object resides in. This new versatility in object-oriented heterogeneous software development has dramatically altered my perspective on how to approach efficiently crafting large-scale HPC applications. I will demonstrate why I believe the introduction of this capability will revolutionize heterogeneous software development.
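The mechanism described above is CUDA 6.0’s Unified Memory. A minimal sketch (the struct and names here are illustrative, not from the talk’s sample code) of one object touched by both processors through a single `cudaMallocManaged` allocation:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A plain C++ type whose methods compile for both host and device.
struct Accumulator {
    float value;
    __host__ __device__ void add(float x) { value += x; }
};

__global__ void addOnDevice(Accumulator *acc, float x) {
    acc->add(x);  // device code mutating the very same object
}

int main() {
    Accumulator *acc;
    // One managed allocation replaces the usual host-malloc /
    // cudaMalloc / cudaMemcpy triple: both processors see this pointer.
    cudaMallocManaged(&acc, sizeof(Accumulator));
    acc->value = 0.0f;

    acc->add(1.0f);                    // touched by the CPU...
    addOnDevice<<<1, 1>>>(acc, 2.0f);  // ...then by the GPU
    cudaDeviceSynchronize();           // required before the host reads again

    printf("%f\n", acc->value);
    cudaFree(acc);
    return 0;
}
```

The design point is that no explicit copies appear anywhere: the runtime migrates pages on demand, which is what makes it practical for a large object graph to live on either side.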
SKA – Scientific Nirvana or Data-led Revolution?
John Humphreys, Chair, ASKAIC – Australia
SKA can be seen as an excellent global project to extend the boundaries of scientific-only endeavour, or as a catalyst for ‘step-jump’ innovation that is led by the Big Data story and the realisation of associated non-astronomy opportunities. “What’s in it for industry” will be a significant segment of this talk.
SKA and the Economic Opportunities for New Zealand
Prof. John Bancroft, STFC, AUT. UK – New Zealand
The SKA Project is not only the largest and most powerful radio telescope ever contemplated, it is also the largest IT project yet undertaken. The computing requirements of the Project are extremely demanding: many new computing technologies will need to be developed, and many existing technologies stretched, in order for the telescope to be delivered and operated.
These technological developments constitute multiple, significant opportunities to create positive economic impact. The SKA is a good source of defined needs which, over the coming years, will be shared by many industries across the World.
Companies potentially involved in supplying the systems and technologies the telescope needs will, by joining in the work and helping to develop those technologies, be creating new offerings of use and value in many other applications and markets. For companies now trying to factor the swing to Cloud computing and e-infrastructure into their business planning for the coming years and decades, this is an opportunity to join the Project, have their needs worked into it, and so ensure the delivered outputs help them in their everyday businesses.
This talk will focus on examples of possible opportunities for both supply companies and potential new user communities to leverage the SKA project and its outputs to their own commercial advantage.
Hunting Pulsars and Fast Transients with the SKA
Ewan Barr – Swinburne University of Technology, Australia
Pulsars and fast radio transients are invaluable tools for probing the fundamental physics that governs our Universe. From binary pulsars facilitating incredible tests of gravity in the strong-field regime, to Fast Radio Bursts potentially providing a method by which the baryon content of the Universe can be measured, it is no surprise that these phenomena are vital to many of the SKA’s key science goals. However, to achieve these goals we must first expand the known populations of these objects. In particular, if the SKA is to revolutionise our view of the Universe by opening up a new observational window in gravitational waves, we will need a large population of accurately timed millisecond pulsars, the discovery of which poses large computational challenges. The extreme data rates of the SKA will require new approaches to searching for pulsars and fast transients, as traditional methods simply lack the performance required. These approaches are arriving in the form of new search algorithms that use the data-parallel model offered by Graphics Processing Units and Field Programmable Gate Arrays to vastly improve the speed of our searching capabilities and enable truly real-time analysis of our data. In this talk I will introduce pulsars and fast transients in the context of the SKA and look at the challenges that we face in the coming ‘Big Data’ era. I will also review several of the newly developed tools that may form the basis of the SKA’s pulsar and fast transient search pipelines.
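The kernel at the heart of such GPU search codes is incoherent dedispersion: summing a dynamic spectrum along the frequency-dependent delay curve for each trial dispersion measure. A hedged sketch of the brute-force form (the data layout and parameter names are assumptions, not the abstract’s actual pipeline; the 4.1488×10³ s MHz² pc⁻¹ cm³ dispersion constant is standard):

```cuda
// One thread per output time sample, one block row per DM trial.
// Assumes a time-major dynamic spectrum with channel 0 at the
// highest frequency, so all delays are non-negative.
__global__ void dedisperse(const float *spectra,    // [ntime][nchan]
                           float *timeseries,       // [ndm][ntime]
                           const float *dm_trials, int ndm,
                           const float *freqs_mhz, int nchan,
                           float tsamp_s, int ntime) {
    int t  = blockIdx.x * blockDim.x + threadIdx.x;  // output sample
    int dm = blockIdx.y;                             // DM trial index
    if (t >= ntime || dm >= ndm) return;

    float f0 = freqs_mhz[0];  // reference (highest) frequency
    float sum = 0.0f;
    for (int c = 0; c < nchan; ++c) {
        // Cold-plasma dispersion delay relative to f0, in seconds.
        float delay = 4.1488e3f * dm_trials[dm] *
                      (1.0f / (freqs_mhz[c] * freqs_mhz[c]) -
                       1.0f / (f0 * f0));
        int shifted = t + (int)(delay / tsamp_s);
        if (shifted < ntime)
            sum += spectra[shifted * nchan + c];
    }
    timeseries[dm * ntime + t] = sum;
}
```

The operation is embarrassingly parallel across time samples and DM trials, which is why it maps so naturally onto the data-parallel model the abstract describes.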
Extracting Value from Big Data
Melanie Johnston-Hollitt, Chair MWA Board – New Zealand
The SKA project has repeatedly been described as the largest computing project in the world, capable of generating massive amounts of data which all need to be processed efficiently in order to extract scientific value. Part of the challenge of extracting value from the SKA will be developing automated processes that can sift through the data and order and classify it into digestible pieces. The pipelines needed to deal with this ‘big data’ problem are the subject of part of the SKA design study. I will present an overview of this challenge and discuss some of the specifics of the problem for the SKA, and how these aspects might be further generalized to other ‘big data’ areas.
SKA Computing: Humans in the Loop?
Jasper Horrell, GM Innovation, SKA South Africa
The SKA telescopes have the potential to generate data at rates that force a paradigm shift away from traditional approaches to data analysis. The days of expert astronomers carefully examining each dataset and fine-tuning the processing parameters to extract the best science seem to be numbered. Approaches involving automated processing pipelines, incorporating machine learning into the processing chain, utilizing compute on demand with flexible, reconfigurable computing platforms, and new developments in algorithms are perhaps pointers to things to come. Drawing on the work on the MeerKAT SKA precursor and associated research programmes, we look at what is being built and investigated now, as well as emerging themes likely to play out during the construction of the SKA.
Computing for SKA: Cray’s perspective
Kent Winchell – Office of the CTO, Cray Inc. – USA
Abstract – TBA
Predictive data modelling of large and fast streams of spatio/spectro-temporal data using neuromorphic computation
N. Kasabov, R. Pears, R. Hartono, KEDRI/AUT
The talk first presents the main principles of neuromorphic computation and spiking neural networks (SNN), before introducing the NeuCube SNN architecture developed at KEDRI (www.kedri.aut.ac.nz). A methodology for using NeuCube for predictive data modelling of large and fast streams of data is presented and demonstrated on a small-scale example.
The implementation of NeuCube-based models on SpiNNaker, a high-performance SNN platform with millions of processing elements developed at the University of Manchester, is discussed, as is future work exploring NeuCube/SpiNNaker models and systems for SKA data.
The AUT University Radio Astronomical Observatory – A review of developments and status
Tim Natusch – Deputy Director Operations, IRASR
The AUT University Radio Observatory at Warkworth, New Zealand, operates two radio telescopes. I will discuss the instrumentation, the scientific programmes to which they contribute, and the implications these have for computing and data networking.