Pär Persson Mattsson | March 20, 2014

One thing we haven’t talked much about so far in the Hybrid Modeling blog series is what speedup we can expect when adding more resources to our computations. Today, we consider some theoretical investigations that explain the limitations in parallel computing. We will also show you how to use the COMSOL software’s Batch Sweeps option, which is a built-in, embarrassingly parallel functionality for improving performance when you reach these limits.
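
The kind of limit meant here can be illustrated with Amdahl's law (a back-of-the-envelope sketch, not the post's full argument): if a fraction p of a solve parallelizes perfectly over N processes while the rest stays serial, the achievable speedup is

$$ S(N) = \frac{1}{(1-p) + p/N} \;\le\; \frac{1}{1-p}, $$

so with p = 0.95 no amount of extra hardware pushes the speedup past 20x. That is exactly the regime in which an embarrassingly parallel batch sweep over independent parameter cases can keep additional cores busy.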

Jan-Philipp Weiss | March 6, 2014

Previously in this blog series, my colleague Pär described parallel numerical simulations with COMSOL Multiphysics on shared and distributed memory platforms. Today, we discuss the combination of these two methods: hybrid computing. I will try to shed some light on the various aspects of hybrid computing and modeling, and show how COMSOL Multiphysics can use hybrid configurations to squeeze out the best performance on parallel platforms.
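
As a minimal sketch of what a hybrid configuration looks like at the code level (an illustrative MPI + OpenMP "hello world", not COMSOL's internals), each distributed-memory process spawns a team of shared-memory threads:

```c
/* Hybrid MPI + OpenMP sketch: one MPI process per node, several
 * OpenMP threads per process sharing that node's memory.
 * Compile with, e.g., mpicc -fopenmp hybrid.c */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* FUNNELED: only the main thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Shared-memory parallelism inside each rank. */
    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Launched with, say, two MPI ranks and OMP_NUM_THREADS=4, this gives 2 x 4 = 8 workers: processes across nodes, threads within each node.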

Pär Persson Mattsson | February 20, 2014

In the previous post in this Hybrid Modeling blog series, we discussed the basic principles behind shared memory computing — what it is, why we use it, and how the COMSOL software uses it in its computations. Today, we are going to discuss the other building block of hybrid parallel computing: distributed memory computing.
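
To make the contrast concrete, here is a minimal distributed-memory sketch (plain MPI in C, purely illustrative): every process owns its data in its own address space, and results are combined only through explicit messages.

```c
/* Distributed-memory sketch: each process computes a local partial
 * result; an explicit reduction combines the pieces on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = (double)(rank + 1);  /* data private to this process */
    double total = 0.0;

    /* No shared memory: the sum travels over the network. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes: %g\n", nranks, total);

    MPI_Finalize();
    return 0;
}
```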

Pär Persson Mattsson | February 6, 2014

A couple of weeks ago, we published the first blog post in the Hybrid Modeling series, about hybrid parallel computing and how it helps COMSOL Multiphysics solve models faster. Today, we are going to briefly discuss one of the building blocks that make up the hybrid version, namely shared memory computing. Before that, we need to consider what it means for an application to "run in parallel". You will also learn when and how to use shared memory with COMSOL.
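
For a concrete (and purely illustrative) picture of shared memory computing, consider this OpenMP sketch in C: several threads split one loop and write into a single array that lives in a common address space.

```c
/* Shared-memory sketch: threads update disjoint parts of one array
 * in the same address space. Compile with, e.g., cc -fopenmp shared.c */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];

    /* Loop iterations are divided among the threads; all of them
       see the very same array. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("a[N-1] = %g, computed by up to %d threads\n",
           a[N - 1], omp_get_max_threads());
    return 0;
}
```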

Pär Persson Mattsson | January 23, 2014

Twenty years ago, the TOP500 list was dominated by vector processing supercomputers equipped with up to a thousand processing units. Later on, these machines were replaced by clusters for massively parallel computing, which soon dominated the list and gave rise to distributed computing. The first clusters used a single dedicated single-core processor per compute node, but soon additional processors were placed on each node and had to share that node's memory. The capabilities of these shared-memory parallel machines heralded a sea change towards multicore […]
