How Much Memory Is Needed to Solve Large COMSOL Models?

October 24, 2014

One of the most common questions we get is: How large of a model can you solve in COMSOL Multiphysics? It turns out that this is quite tricky to answer decisively, so in this blog entry, we will talk about memory requirements, model size, and how you can predict the amount of memory you will need for solving large 3D finite element problems.

Let’s Look at Some Data

The plot below shows the amount of memory needed to solve various 3D finite element problems as a function of the number of degrees of freedom (DOF) in the model.

Graph depicting memory requirements with respect to degrees of freedom.
Memory requirements (with a second-order polynomial curve fit) with respect to degrees of freedom for various representative cases.

There are five different cases presented here:

  • Case 1: A heat transfer problem of a spherical shell. There is radiative heat transfer between all of the surfaces. The model is solved with the default iterative solver.
  • Case 2: A structural mechanics problem of a cantilevered beam, solved with the default direct solver.
  • Case 3: A wave electromagnetics problem solved with the default iterative solver.
  • Case 4: The same structural mechanics problem as Case 2, but using an iterative solver.
  • Case 5: A heat transfer problem of a block of material. Only conductive heat transfer is considered. The model is solved with the default iterative solver.

What you should see from this graph is that, with a computer that has 64 GB of random access memory (RAM), you can solve problems that range in size anywhere from ~26,000 DOF on the low end all the way up to almost 14 million DOF. So why such a wide range? Let’s look at how to interpret these data…

Degrees of Freedom, Explained

For most problems, COMSOL Multiphysics solves a set of governing partial differential equations via the finite element method, which takes your CAD model and subdivides its domains into elements, each defined by a set of nodes on the element boundaries.

At each node, there will be at least one unknown, and the number of these unknowns depends upon the physics that you are solving. For example, when solving for temperature, you have only a single unknown (called T, by default) at each node. When solving a structural problem, you are computing the displacements of each node in x-y-z space, so you are solving for three unknowns (u,v,w), from which the strains and resultant stresses are then derived.

For a turbulent fluid flow problem, you are solving for the fluid velocities (also called u,v,w by default) and pressure (p) as well as extra unknowns describing the turbulence. If you are solving a diffusion problem with many different species, you will have as many unknowns per node as you have chemical species. Additionally, different physics within the same model can have a different default discretization order, meaning there can be additional nodes along the element edges, as well as in the element interior.

Diagram of various elements.
A second-order tetrahedral element solving for the temperature field, T, will have a total of 10 unknowns per element, while a first-order element solving the laminar Navier-Stokes equations for velocity, \mathbf{u}=(u_x,u_y,u_z), and pressure, p, will have a total of 16 unknowns per element.
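As a back-of-the-envelope check, the per-element counts in the caption above can be reproduced with a few lines of arithmetic. This is a sketch in Python; the function name is ours for illustration, not a COMSOL API:

```python
def unknowns_per_element(order, fields_per_node):
    # Node counts for a tetrahedral Lagrange element: 4 corner nodes
    # at first order, plus 6 edge-midpoint nodes at second order.
    corner_nodes = 4
    edge_nodes = 6 if order == 2 else 0
    return (corner_nodes + edge_nodes) * fields_per_node

# Second-order tet solving for temperature T: one unknown per node.
print(unknowns_per_element(order=2, fields_per_node=1))  # 10

# First-order tet solving Navier-Stokes for (u_x, u_y, u_z, p):
# four unknowns per node.
print(unknowns_per_element(order=1, fields_per_node=4))  # 16
```

The same bookkeeping explains why the total DOF count of a model grows with mesh refinement, discretization order, and the number of fields solved for.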

COMSOL Multiphysics will use the information about the physics, material properties, boundary conditions, element type, and element shape to assemble a system of equations (a square matrix), which needs to be solved to get the answer to the finite element problem. The size of this matrix is the number of degrees of freedom (DOFs) of the model, where the number of DOFs is a function of the number of elements, the discretization order used in each physics, and the number of variables solved for.

These systems of equations are typically sparse, which means that most of the terms in the matrix are zero. For most types of finite element models, each node is only connected to the neighboring nodes in the mesh. Note that element shape matters; a mesh composed of tetrahedra will have different matrix sparsity from a mesh composed of hexahedra (brick) elements.

Some models will include non-local couplings between nodes, resulting in a relatively dense system matrix. Radiative heat transfer is a typical problem that will have a dense system matrix. There is radiative heat exchange between any surfaces that can see each other, so each node on the radiating surfaces is connected to every other node. The result of this is clearly seen in the plot I shared at the beginning of this blog post: the thermal model that includes radiation has much higher memory requirements than the thermal model without radiation.

You should see, at this point, that it is not just the number of DOFs, but also the sparsity of the system matrix that will affect the amount of memory needed to solve your COMSOL Multiphysics model. Let’s now take a look at how your computer manages memory.

How Your Operating System Manages Memory

COMSOL Multiphysics uses the memory management algorithms provided by the operating system (OS) that you are working with. The performance of these algorithms is quite similar across all of the latest OSs that we support.

The OS creates a virtual memory space, which the COMSOL software sees as a contiguous block of free memory. This contiguous block of virtual memory can actually map to different physical locations, so some of the data may be stored within RAM while other parts are stored on the hard disk. The OS manages where (in RAM or on disk) the data is actually stored, and by default you do not have any control over this. The amount of virtual memory is controlled by the OS, and it is not something that you usually want to change.

Under ideal circumstances, the data that COMSOL Multiphysics needs to store will fit entirely within RAM, but once there is no longer enough space, part of the data will spill over to the hard disk. When this happens, performance of all programs running on the computer will be noticeably degraded.

If too much memory space is requested by the COMSOL software, then the OS will determine that it can no longer manage memory efficiently (even via the hard disk) and will tell COMSOL Multiphysics that there is no more memory available. This is the point at which you will get an out-of-memory message and COMSOL Multiphysics will stop trying to solve the model.

Next, let’s take a look at what COMSOL Multiphysics is doing when you get this out-of-memory message and what you can do about it.

When Does COMSOL Use the Most Memory?

When you set up and solve a finite element problem, there are three memory intensive steps: Meshing, Assembly, and Solving.

  • Meshing: During the meshing step, the CAD geometry is subdivided into finite elements. The default meshing algorithm applies a free tetrahedral mesh over most of the modeling space. Free tetrahedral meshing of large complex structures will require a lot of memory. In fact, it can sometimes require more memory than actually solving the system of equations, so it is possible to run out of memory even at this step. If you do find that meshing is taking significant time and memory, then you should subdivide (or partition) your geometry into smaller sub-domains. Generally, the smaller the domains, the less memory intensive they are to mesh. By meshing in a sequence of operations, rather than all at once, you can reduce the memory requirements. Within the context of this blog entry, it is also assumed that there are no modeling simplifications (such as exploiting symmetry or using thin layer boundary conditions) that could be leveraged to simplify the model and reduce the mesh size.
  • Assembly: During the assembly step, COMSOL Multiphysics forms the system matrix as well as a vector describing the loads. Assembling and storing this matrix requires significant memory, sometimes more than the meshing step, but typically less than the solution step. If you run out of available memory here, you should increase the amount of RAM in your system.
  • Solving: During the solution step, COMSOL Multiphysics employs very general and robust algorithms capable of solving nonlinear problems, which can consist of arbitrarily coupled physics. At the very core of these algorithms, however, the software will always be solving a system of linear equations, and this can be done using either direct or iterative methods. So let’s look at these two methods from the point of view of when they should be used and how much memory they need.

Direct Solvers

Direct solvers are very robust and can handle essentially any problem that will arise during finite element modeling. The sparse matrix direct solvers used by COMSOL Multiphysics are the MUMPS, PARDISO, and SPOOLES solvers. There is also a dense matrix solver, which should only be used if you know the system matrix is fully populated.

The drawback to all of these solvers is that the memory and time required go up very rapidly as the number of DOFs and the matrix density increase; the scaling is very close to quadratic with respect to the number of DOFs.

As of this writing, both the MUMPS and PARDISO direct solvers in the COMSOL software come with an out-of-core option. This option overrides the OS’s memory management and lets COMSOL Multiphysics directly control how much data is stored in RAM and when and how to start writing data to the hard drive. Although this is superior to the OS’s memory management algorithm, it will still be slower than solving the problem entirely in RAM.

If you have access to a compute cluster, such as one running on the Amazon Web Services™ Amazon Elastic Compute Cloud™, you can also use the MUMPS solver to distribute the problem over many nodes of the cluster. Although this does allow you to solve much larger problems, it is also important to realize that solving on a cluster may be slower than solving on a single machine.

Due to their aggressive (approximately quadratic) scaling with problem size, the direct solvers are used as the default for only a few of the 3D physics interfaces (although they are almost always used for 2D models, for which their scaling is much better).

The most common case where the direct solver is used by default is for 3D structural mechanics problems. While this choice has been made for robustness, it is also possible to use an iterative solver for many structural mechanics problems. The method for switching the solver settings is demonstrated in the example model of the stresses in a wrench.

Iterative Solvers

Iterative solvers require much less memory than direct solvers, but they require more customization of their settings to work well.

With all of the predefined physics interfaces where it is reasonable to do so, we have provided default iterative solver suggestions that are selected for robustness. These settings are handled automatically and do not require any user interaction, so as long as you are using the built-in physics interfaces, you do not need to worry about them.

The memory and time needed by an iterative solver will be much less than for a direct solver on the same problem, so when an iterative solver can be used, it should be. Its scaling as the problem size increases is much closer to linear, as opposed to the quadratic scaling typical of direct solvers.
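The difference between these growth rates compounds quickly. Here is a toy comparison with made-up constants (both solvers pinned to 1 GB at 100,000 DOF, which is not representative of any particular model):

```python
def direct_gb(n_dof):
    # Hypothetical: memory grows roughly quadratically with DOFs.
    return (n_dof / 1e5) ** 2

def iterative_gb(n_dof):
    # Hypothetical: memory grows roughly linearly with DOFs.
    return n_dof / 1e5

for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} DOF: direct ~{direct_gb(n):,.0f} GB, "
          f"iterative ~{iterative_gb(n):,.0f} GB")
```

Even with both methods starting at the same memory footprint, a 100x increase in model size costs the hypothetical direct solver 10,000x the memory but the iterative solver only 100x, which is why the largest models in the plot at the top of this post were all solved iteratively.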

At the time of this writing, the iterative solvers do not offer an out-of-core option, so they must run on a computer that has enough RAM to hold the problem. If you get an out-of-memory message when using an iterative solver, you should upgrade the amount of RAM in your computer.

It is also possible to use an iterative solver on a cluster computer using Domain Decomposition methods. This class of iterative methods has recently been introduced into the software, so stay tuned for more details about this in the future.

Predicting Memory Requirements

Although the data shown above do provide an upper and lower bound on memory requirements, these bounds are quite wide. We’ve seen that a small change to a model, such as adding a non-local coupling like radiative heat transfer, can significantly change memory requirements. So let’s introduce a general recipe for predicting memory requirements.

Start with a representative model that contains the combination of physics you want to solve and approximates the true geometric complexity. Begin with as coarse a mesh as possible, and then gradually increase the mesh refinement. Alternatively, start with a smaller representative model and gradually increase the size.

Solve each model and monitor memory requirements. Observe the default solver being used. If it is a direct solver, use the out-of-core option in your tests, or consider if an iterative solver can be used instead. Fit a second-order polynomial to the data, and use this curve to predict the memory required at the size of the larger problem that you eventually want to solve. This is the most reliable way to predict the memory requirements of large, complex, 3D multiphysics models.
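The fitting step of that recipe can be sketched in a few lines with NumPy. The measurements below are invented for illustration; the real numbers come from your own test runs:

```python
import numpy as np

# Hypothetical measurements from a few coarse-to-fine test runs:
# model size in millions of DOFs vs. peak solver memory in GB.
dofs_mdof = np.array([0.2, 0.5, 1.0, 2.0])
mem_gb = np.array([1.1, 3.0, 7.5, 22.0])

# Fit a second-order polynomial, as suggested above.
coeffs = np.polyfit(dofs_mdof, mem_gb, deg=2)

# Extrapolate to the target model size, here 5 million DOFs.
predicted = np.polyval(coeffs, 5.0)
print(f"predicted memory at 5 MDOF: ~{predicted:.0f} GB")
```

A word of caution on extrapolation: the fit is only trustworthy if the larger model keeps the same physics, couplings, and solver as the test runs, for the reasons discussed earlier in this post.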

As we have now seen, the memory needed will depend upon (at least) the geometry, mesh, element types, combination of physics being solved, couplings between the physics, and the scope of any non-local model couplings. At this point, it should also be made clear that it is not generally possible to predict the memory requirements in all cases. You may need to repeat this procedure several times for variations of your model.

It is also fair to say that setting up and solving large models in the most efficient way possible is something that can require some deep expertise of not just the solver settings, but also of finite element modeling in general. If you do have a particular modeling concern, please contact your COMSOL Support Team for guidance.


You should now have an understanding of why the memory requirements for a COMSOL Multiphysics model can vary dramatically. You should also be able to estimate the memory requirements of your larger models and decide what kind of hardware is appropriate for your modeling challenges.

Amazon Web Services and Amazon Elastic Compute Cloud are trademarks of, Inc. or its affiliates in the United States and/or other countries.

Comments (5)

Ivar Kjelberg
October 25, 2014

Hi Walter
Thanks again for an interesting and clearly presented blog 🙂
There is only one point where I do not agree with your writing: for me, an iterative solver uses less memory, but generally quite a bit more time to solve, so the two do not decrease together; the total computing “energy” required, RAM*time, remains more or less constant.
A typical example is when you turn on the nonlinear geometry check box for structural mechanics: the time to solve increases greatly as the solver maps its way to a local minimum.


Walter Frei
October 28, 2014

Dear Ivar,
Iterative solvers of linear systems are almost always faster than direct solvers. There are some rare cases where iterative solvers can be slower, and this is indicative of a model that may have an ill-conditioned system matrix. But it is fair to say that for the vast majority of cases that you will ever encounter in practice for large 3D models, iterative solvers are faster.

However, it seems that there is some confusion here between using an iterative solver for solving a linear system of equations and using an iterative technique for solving a nonlinear problem (such as when considering geometric nonlinearities). To clearly understand the differences, I suggest the following blog series:

Robert Koslover
March 26, 2015

Thanks for the helpful essay. A couple of comments and questions:
1. In my 3D full-wave frequency-domain RF problems, which typically involve models of antennas and/or waveguides, I have found that iterative solvers (such as GMRES with SOR Vector preconditioning) use *much* less memory, but are also usually much slower than direct solvers (such as PARDISO). And the default iterative solver (BiCGStab) often fails to converge at all on these types of problems. Any comments on that?
2. Also, what are your thoughts about using fewer numbers of higher-order elements vs. greater numbers of lower-order elements? I usually (but not always) find that using a large number of linear elements is more effective (in terms of memory and speed) than using a smaller number of quadratic or higher-order elements. But I’d like to understand this better.

Walter Frei
March 26, 2015

Dear Robert,

Yes, as you correctly note, the iterative solvers will use less memory. They usually should not be slower, but they might be if:
– You have a mesh which is too coarse to resolve the wavelength well. This would be the most likely reason for non-convergence.
– You have two different materials with highly varying refractive indices (permittivity). This would be somewhat mathematically similar to saying that the finite element matrix is not very well-conditioned.
– There are highly anisotropic materials

Please note that in version 5.0 there are three different solver types that you can select (in the “Analysis Methodology” section of the Electromagnetic Waves interface) that let you choose between Robust/Intermediate/Fast.

With respect to different element orders: we would not recommend this. What you actually need to consider is the memory used vs. the accuracy achieved. You will find that, for EM wave problems, 2nd-order elements will typically be the best. This also has to do with the fact that Maxwell’s equations form a 2nd-order PDE.

If you do have a particular model file that you have questions about, you can always send it to the COMSOL Support Team as well:

Jungin Baek
November 16, 2017

Hi Walter.

I’m a Rescale BD in Korea. Your article has been a great help to me.
I will meet COMSOL users at the user conference in Korea (11/17).

Here, many customers use a single machine (workstation) and didn’t worry about memory limits, since their own machines have low memory. But that will change in the next CAE environment.
Using a cloud platform (like Rescale) can go beyond the memory limit, but I was wondering how best to guide users on this.