How do we study the Universe?
The research done at the Institute for Computational Cosmology (ICC) centres on running very large cosmological simulations, which model how the Universe and the galaxies within it form and evolve over time. These simulations allow us to test our understanding of the processes driving this evolution by comparing the predicted properties of galaxies and the Universe with what we actually observe in the real Universe. By modifying the relative importance and behaviour of the ingredients that go into the simulations, we can refine our knowledge of how the Universe works.
In addition, simulations offer the possibility of viewing the Universe in a way that would otherwise be impossible. The processes driving the evolution of the Universe operate on timescales far longer than any single human life, which is why pictures of the cosmos appear to be ‘frozen’ in time. Make sure to check out the media page to see videos showing how galaxies form!
Running these types of simulations requires enormous amounts of computational power, which is why supercomputers are one of the tools we use on a day-to-day basis. A calculation that would take months (or even years) on an ordinary laptop can be completed on a supercomputer in just a few weeks.
The working principle behind such a speedup is the parallelisation of tasks, whereby the huge amount of number crunching that goes into a simulation is split among many CPU cores. An analogy would be building a skyscraper with just one person as opposed to several hundred: whereas the former would be virtually impossible to undertake, the latter becomes a manageable and realistic task.
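As a toy illustration of this principle, here is a minimal Python sketch (not actual simulation code, and far simpler than anything run on a real supercomputer) that splits a large sum across all available CPU cores, with each core working on its own chunk before the partial results are combined:

```python
# Toy illustration of parallelisation: summing ten million numbers
# by dividing the work among all available CPU cores.
from multiprocessing import Pool
import os

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one core's share of the work."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    n_cores = os.cpu_count() or 4        # number of workers available
    step = n // n_cores
    # Split the range [0, n) into one chunk per core.
    chunks = [(i * step, (i + 1) * step if i < n_cores - 1 else n)
              for i in range(n_cores)]
    with Pool(n_cores) as pool:
        # Each core sums its own chunk in parallel...
        partials = pool.map(partial_sum, chunks)
    # ...and the partial results are combined at the end.
    print(f"Total: {sum(partials)}, computed on {n_cores} cores")
```

The same divide-and-combine pattern, scaled up to thousands of cores and vastly more complex physics, is what makes simulating the Universe tractable.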
The ICC’s main workhorse is a Durham-based supercomputer known as the COSmology MAchine (COSMA), but do not let the name fool you! COSMA has applications beyond cosmology, such as simulating the planetary collision that gave rise to our Moon and, more recently, modelling how coronavirus could spread across the UK and testing which mitigation strategies would be most effective.
COSMA has been upgraded over the years; the current fully operational iteration, COSMA7, has almost 13,000 CPU cores, roughly the same as 3,000 laptops combined! Saving all the details of a simulation requires a vast amount of storage, which is why we have a total of 3.5 petabytes at our disposal. To put this in perspective, if it were filled just with films, it would take about 400 years to watch them all!
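For the curious, here is a quick back-of-the-envelope version of that ‘400 years’ figure in Python, assuming a typical film takes up about 2 GB and lasts around 2 hours (both rough, illustrative assumptions):

```python
# Back-of-the-envelope check of the "400 years of films" claim,
# assuming ~2 GB and ~2 hours per film (rough assumptions).
storage_bytes = 3.5e15           # 3.5 petabytes
film_size_bytes = 2e9            # ~2 GB per film
film_length_hours = 2            # ~2 hours per film

n_films = storage_bytes / film_size_bytes
total_hours = n_films * film_length_hours
years = total_hours / (24 * 365)
print(f"{n_films:,.0f} films, about {years:,.0f} years of non-stop viewing")
# -> 1,750,000 films, about 400 years
```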
The latest iteration of COSMA is COSMA8. A prototype became operational in October 2020 and was extended in early 2021 to form the initial COSMA8 installation; work will continue until October 2021, when the system becomes fully operational. The current installation has about 2,000 CPU cores, but this will grow to over 23,000. COSMA8 will be the first COSMA system to reach Petascale, meaning it will be able to perform over a quadrillion (a 1 followed by 15 zeros) floating-point calculations per second!
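To get a feel for what Petascale means in practice, the sketch below compares how long a hypothetical calculation of a billion billion (10^18) operations would take on an ordinary laptop versus a one-petaflop machine; both speeds are rough, illustrative assumptions rather than measured benchmarks:

```python
# Rough illustration of Petascale performance (assumed speeds,
# not measured benchmarks).
ops = 1e18                # a billion billion calculations
laptop_flops = 1e11       # assume ~100 billion operations/second for a laptop
petascale_flops = 1e15    # ~1 petaflop for a Petascale machine

seconds_per_day = 24 * 3600
print(f"Laptop:    {ops / laptop_flops / seconds_per_day:,.0f} days")
print(f"Petascale: {ops / petascale_flops / 60:,.1f} minutes")
# -> Laptop: ~116 days; Petascale: ~16.7 minutes
```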