Christian Rolf (Open Parallel) – December 2012
Analysing the data produced by SKA requires a modern approach. The computational needs of SKA far exceed those of any other radio astronomy project. Extending existing software packages is more likely to increase the cost of computing hardware than to provide the necessary scalability.
GPU computing is a necessary, but not sufficient, step for analysing the data from SKA. GPUs provide excellent scalability for the easily parallelisable parts of the data analysis. However, dependencies between data sets inherently limit overall scalability.
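How hard that limit bites is captured by Amdahl's law (standard notation, not from the original text): if a fraction p of the work can be parallelised across N processors, the overall speed-up is S(N) = 1 / ((1 - p) + p/N), which can never exceed 1 / (1 - p) however large N grows. For example, even with 95% of the analysis parallelised, no number of GPUs yields more than a 20x speed-up.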
Modern programming paradigms, proven to provide the best scalability for cloud computing, are necessary to ensure cost-efficient performance for SKA. Functional programming can be parallelised efficiently and automatically, thanks to its syntax being mathematical in nature. Merely extending existing software packages is unlikely to provide cost-efficient scalability.
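As a minimal sketch of why this is cheap in practice, the Haskell fragment below parallelises a pure computation with an evaluation strategy; the function process and the input list are hypothetical stand-ins, not SKA code:

    import Control.Parallel.Strategies (parListChunk, rdeepseq, withStrategy)

    -- Hypothetical per-sample processing step. Because it is pure,
    -- evaluating the mapped list in parallel cannot change the result,
    -- so the runtime is free to schedule the work across all cores.
    process :: Double -> Double
    process x = sqrt (x * x + 1)

    main :: IO ()
    main = print (sum results)
      where
        -- Evaluate in 10,000-element chunks, one spark per chunk.
        results = withStrategy (parListChunk 10000 rdeepseq)
                               (map process [1 .. 1000000])

Compiled with ghc -threaded (using the parallel package) and run with +RTS -N, the only change from the sequential program is the evaluation strategy; no locks or explicit threads appear.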
Open Parallel is uniquely positioned in the region to ensure that SKA middleware scales at a low cost. Massively parallel execution of functional programs is part of Open Parallel's expertise portfolio, as is the parallelisation of legacy code, which may be useful in the early prototype stage.
——
Tim Cornwell (Head of Computing, SKA Organisation) – December 2012
Nicolás,
Your colleague’s comments about GPUs are well taken. The lead candidates must be Intel MIC or GPU devices.
Beyond computation, the problems are memory and memory bandwidth. Unlike LHC (Large Hadron Collider) workloads, SKA workloads require all the data to be in one place. This rules out a distributed resource like SETI@home, so we are probably forced towards a single computer in one place.
I enclose a SKA memo that discusses the low-level processing in some detail.
Analysis of Convolutional Resampling Algorithm Performance
B. Humphreys (CSIRO) and T. Cornwell (CSIRO) – January 2011
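For orientation, convolutional resampling (gridding) adds each visibility sample into a small neighbourhood of a regular uv-grid, weighted by a convolution kernel. The Haskell sketch below is illustrative only, assuming a toy 3x3 kernel and integer sample coordinates; real kernels (e.g. prolate spheroidal functions) are larger and samples fall at fractional positions, but the memory-access pattern is the same:

    import Data.Array (Array, accumArray)
    import Data.Complex (Complex(..))
    import Data.Ix (inRange)

    -- One visibility sample: integer (u, v) grid coordinates and a
    -- complex value (a toy representation, for illustration only).
    data Vis = Vis { uc :: Int, vc :: Int, val :: Complex Double }

    -- Toy 3x3 separable kernel; real anti-aliasing kernels are larger,
    -- but the grid access pattern is identical.
    kernel :: [((Int, Int), Double)]
    kernel = [((i, j), w i * w j) | i <- [-1 .. 1], j <- [-1 .. 1]]
      where
        w 0 = 0.5
        w _ = 0.25

    -- Gridding: every sample is accumulated into the grid under the
    -- kernel. Any sample may touch any region of the grid, so the whole
    -- grid must live in one memory space, which is the memory and
    -- bandwidth pressure described above.
    gridVis :: Int -> [Vis] -> Array (Int, Int) (Complex Double)
    gridVis n samples = accumArray (+) 0 b
      [ (ix, (k :+ 0) * val s)
      | s <- samples
      , ((i, j), k) <- kernel
      , let ix = (vc s + j, uc s + i)
      , inRange b ix ]  -- drop kernel taps that fall off the grid
      where
        b = ((0, 0), (n - 1, n - 1))

    main :: IO ()
    main = print (gridVis 8 [Vis 3 4 (1 :+ 0), Vis 5 2 (0 :+ 1)])

accumArray keeps the sketch pure and single-threaded; a production gridder updates a mutable grid in place, and its performance is typically dominated by memory traffic rather than arithmetic.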