
UCL for Code in Research
The companion podcast for courses on programming from the Advanced Research Computing Centre of University College London, UK.
9/10 - Distributed Memory and Parallel Computing
Peter Schmidt • Season 1 • Episode 9
In this episode, Marc Hartung and I discuss distributed memory in parallel computing, using tools like OpenMPI. We also cover some of the hardware aspects of HPC systems and how shared and distributed memory computations differ.
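To give a flavour of what "distributed memory" means in practice, here is a minimal sketch (not taken from the episode) of explicit message passing with MPI in C: each rank owns its private memory, and data only moves between ranks when it is sent and received explicitly. The file name and process counts are illustrative.

```c
/* Minimal distributed-memory sketch: rank 0 sends an integer to rank 1.
 * Build (assuming an MPI installation such as OpenMPI):
 *   mpicc hello_mpi.c -o hello_mpi
 * Run:
 *   mpirun -np 2 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int payload = 42;                  /* exists only in rank 0's memory */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0 of %d sent %d\n", size, payload);
    } else if (rank == 1) {
        int received = 0;
        MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 of %d received %d\n", size, received);
    }

    MPI_Finalize();
    return 0;
}
```

Contrast this with shared memory (e.g. OpenMP), where threads read and write the same address space and no explicit communication is needed.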
Links:
- https://www.open-mpi.org OpenMPI homepage
- https://docs.open-mpi.org/ the docs for OpenMPI
- https://www.mpi-forum.org The MPI Forum (who write the MPI standard)
- http://openshmem.org/site/ OpenSHMEM
- https://en.wikipedia.org/wiki/Distributed_memory summary page on distributed memory
- https://en.wikipedia.org/wiki/InfiniBand InfiniBand network solution
- https://www.nextplatform.com/2022/01/31/crays-slingshot-interconnect-is-at-the-heart-of-hpes-hpc-and-ai-ambitions/ Slingshot network solution
- https://en.wikipedia.org/wiki/Partitioned_global_address_space
- https://www.techtarget.com/whatis/definition/von-Neumann-bottleneck the bottleneck named after John von Neumann
- https://en.wikipedia.org/wiki/Floating_point_operations_per_second overview of FLOPS (floating point operations per second)
- https://www.openmp.org/wp-content/uploads/HybridPP_Slides.pdf OpenMP and OpenMPI working together in a hybrid solution (see the sketch after this list)
- https://blogs.fau.de/hager/hpc-book Georg Hager/Gerhard Wellein book on HPC for scientists and engineers
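The hybrid approach mentioned in the slides linked above combines both models: MPI ranks across nodes (distributed memory) and OpenMP threads within each node (shared memory). The sketch below is an illustrative example of that pattern, not code from the episode; the thread and rank counts are arbitrary.

```c
/* Hybrid MPI + OpenMP sketch: each MPI rank spawns a team of OpenMP threads.
 * Build (assuming OpenMPI and an OpenMP-capable compiler):
 *   mpicc -fopenmp hybrid.c -o hybrid
 * Run:
 *   mpirun -np 2 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request a threading level that allows OpenMP threads alongside MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Shared-memory parallelism inside each distributed-memory rank. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```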
This podcast is brought to you by the Advanced Research Computing Centre of University College London, UK.
Producer and Host: Peter Schmidt