When setting up a simulation in COMSOL Multiphysics, you may want to seek out more information on the software as you go. Whether it’s learning about a node in the model tree, the settings for an operation you’re currently working in, or the differences between a set of options you are choosing from and what they will mean for your model, it’s helpful to have guidance available at your fingertips. This is the convenience that the Help window in COMSOL Multiphysics provides.
The Help window, displaying topic-based content for the Electric Potential boundary condition.
The Help window, accessed by clicking the Help button in the top right-hand corner of the software (the blue question mark) or by pressing the F1 key on your keyboard, enables you to promptly access information pertaining to the model tree node or window in which you are currently working. The displayed text updates automatically as you select items in the software or add settings to your model, so you can get help right when you need it.
Since this window appears in the COMSOL Desktop® when opened, you can access the information you need without having to compete for screen space with your simulation. Instead of having to fit multiple windows on your monitor, you are able to view the help content and Model Builder together.
Additionally, you can search and navigate the text in the Help window using the respective buttons.
In addition to receiving topic-based help, there may be times when you want to more easily access, navigate, and search all of the comprehensive COMSOL Multiphysics documentation. This includes the user guides and manuals for any modules for which you have a license. You can find this documentation in the Documentation window, which you can access either from within COMSOL Multiphysics, by going to File > Help, or externally from your computer in your COMSOL Multiphysics installation folder.
The Documentation window, with the AC/DC Module User’s Guide opened.
The Documentation window enables you to quickly and easily access your entire library of COMSOL Multiphysics documentation, all within a single window. When open, you can choose between the PDF or HTML version of any guide, manual, or handbook. Additionally, the sections of each individual document are hyperlinked and bookmarked. The sections are displayed on the left side of the window, as shown above. This enables you to quickly jump between different chapters and documents.
This resource also provides more options when it comes to searching the software documentation. You can search through the entire library, only within a specified set of documents that you have preselected, or exclusively through the Application Library Manual for all licensed products. Searching the Application Library Manual, in particular, enables you to find models and applications that demonstrate the use of specific physics, software features, and functionality.
Whereas the Help window provides quick access to documentation while modeling, the Documentation window serves as a more comprehensive resource when you need further clarification and more powerful search tools.
Now you know that you can access information about what you are working on. The ability to access modeling examples relevant to your work is equally important. These examples enable you to learn how to use the software, examine COMSOL Multiphysics models, and find guidance that you can apply to your own simulations. In the COMSOL® software, these modeling examples can be found in another valuable resource: the Application Libraries window.
The Application Libraries window, with the Thermal Actuator tutorial model selected and displayed.
The Application Libraries window, accessed by going to File > Application Libraries, contains hundreds of models and simulation applications spanning every module and engineering discipline. Using the Search field, you can find applications and models that cover the specific physics or feature you want to learn how to use. Each entry includes a brief summary of the model; the COMSOL Multiphysics model file; and a PDF document that provides a comprehensive introduction and a detailed, step-by-step overview of the model-building process. This gives you the logic behind how the model is built, why and how boundary conditions are applied, and other useful information that you can apply to the models you create.
By following along with any of the tutorial models available, you can experience building a model firsthand. In addition, relevant examples from the Application Libraries can be experimented with and expanded upon, serving as a starting point for your own designs.
Tip: The tutorial models and demo applications featured in the Application Libraries are also available online in the Application Gallery.
Now that we’ve introduced the help tools available in the COMSOL® software and the advantages they provide, watch our video tutorial covering it all. In the video, we demonstrate how to access and use each of the resources discussed above.
After you’ve finished watching the video, you’ll be ready to use all of the help tools and resources available to you in COMSOL Multiphysics.
If you have used simulation tools for any significant period of time, you may have found yourself creating new models faster than your computer can solve them. This is especially common if your models are quick to set up but take a fair amount of time to solve. Running multiple models at the same time on the same computer is not a good option, as they would compete for resources (RAM, in particular) and therefore take longer to run simultaneously than they would sequentially, or back to back.
So, what’s a modeler to do?
You could launch your first model in the graphical user interface (GUI) and wait for it to solve, launch the second model in the GUI and wait for it to solve, and so on. But who would want to return to the office after hours or on weekends just to launch their next model?
Fortunately, there is a solution: creating a shell script, or a batch file, which automatically launches your COMSOL Multiphysics simulations one after the other. I’ll explain how to do this step by step on a computer with the Windows® operating system, but these ideas also apply to the other supported platforms (macOS and the Linux® operating system).
Let’s start with a demonstration of how to run a single COMSOL Multiphysics model from the command line.
First, we create a model file in the COMSOL Multiphysics GUI, also known as the COMSOL Desktop. Since we’re going over how to use a new functionality, the smaller and less detailed the model, the better. This will allow us to understand the functionality and perform tests on it quickly. Once you are comfortable with this functionality, it can, of course, also be applied to sophisticated models that take a long time to solve.
At this stage, we check that the model is properly set up by running it with a relatively coarse mesh. This has the additional benefit of generating a default data set and one or two default plots in the Study branch of the model tree. Now that we’ve ensured that the model is properly set up, we can refine the mesh and save the file under the name Model1.mph in our working folder. In this example, that’s C:\Users\jf.
At this point, we can close the COMSOL Desktop.
Next, we open a Command Prompt window and navigate to our working folder. At the command line, we type:
cd C:\Users\jf
Then, we press the Enter key.
We are just about to call the COMSOL® software using the comsolbatch command. Before we can do that, we need to make sure that the Windows® operating system knows where to find that command. This is where, if we have not done so before, we add the path to the COMSOL® software executables to the Windows® path environment variable. On a computer running Windows® with a default installation, these executables are located in C:\Program Files\COMSOL\COMSOL52a\Multiphysics\bin\win64.
Now, drum roll, please!
Back at the command line, we type the following command and then press Enter:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
This command instructs Windows® to launch COMSOL Multiphysics® in Batch mode, that is, without a graphical user interface. As the syntax suggests, Model1.mph is used as the input and Model1_solved.mph is the file that will contain the solution. If we were to omit the “-outputfile Model1_solved.mph” part of the command above, the solution would be stored in the input file, Model1.mph.
As the software runs, some progress information is displayed at the command line. After a few moments, the run is done and we can open the output file, Model1_solved.mph, in the GUI. We can see that the model has indeed been solved and that we can postprocess the results interactively, just as if we had computed the solution in the COMSOL Desktop.
Now that we’ve figured out how to launch a COMSOL Multiphysics model from the command line, let’s see how to automate running two or more simulations in a row.
Let’s create a second model, check that it is properly set up, and save the file to our working folder under the name Model2.mph. With that done, we can close the COMSOL Desktop again.
Using a text editor like Notepad, we create a plain text file containing the following two lines:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
comsolbatch -inputfile Model2.mph -outputfile Model2_solved.mph
We then save this in our working folder as a plain text file with the .bat extension. Here, we named the file Batch_Commands_For_COMSOL.bat.
At the command prompt, still in our working folder, we launch Batch_Commands_For_COMSOL.bat. At the command line, we type:
Batch_Commands_For_COMSOL
Then, we press the Enter key.
COMSOL Multiphysics will run without the GUI open and solve the problem defined in the file Model1.mph. The COMSOL® software will then do the same for the problem defined in the file Model2.mph. Once the runs are finished, we can inspect the files Model1_solved.mph and Model2_solved.mph in the COMSOL Desktop to see that they indeed contain the solutions of these two analyses. On the other hand, if we open the files Model1.mph and Model2.mph in the GUI, we see that they have not changed: they still contain the problem definitions, but no solutions.
If we want to run more than two files sequentially, we can simply modify the .bat file accordingly, adding a line for each file that we wish to run.
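If there are many models, the .bat file can also be generated programmatically. Here is a minimal sketch in Python; the model file names and the count are illustrative assumptions that follow the Model1.mph, Model2.mph naming used above, not part of the original example:

```python
# Sketch: generate a Windows batch file with one comsolbatch line per model.
# Assumes model files named Model1.mph, Model2.mph, ... in the working folder.
num_models = 4  # illustrative; adjust to the number of models you have

lines = []
for i in range(1, num_models + 1):
    lines.append(
        f"comsolbatch -inputfile Model{i}.mph -outputfile Model{i}_solved.mph"
    )

# Write the commands to the batch file and show them for inspection
with open("Batch_Commands_For_COMSOL.bat", "w") as f:
    f.write("\n".join(lines) + "\n")

print("\n".join(lines))
```

Running this script once recreates the two-line batch file from above, extended to as many models as needed.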
By learning how to run your COMSOL Multiphysics simulations in Batch mode from the command line, you will be able to complete your projects more efficiently and with ease.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
macOS is a trademark of Apple Inc., in the U.S. and other countries.
Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.
Because they’re fun to watch! But seriously, animations are an engaging way of clearly conveying your simulation results to your audience. Let’s take a look at some scenarios where animations can be helpful, using simulation results featured in past blog posts as examples. Yes, this is partly an excuse to repost cool animations like this one made by two COMSOL Multiphysics® users:
The carangiform swimming pattern of a fish. The animation was created by M. Curatolo and L. Teresi and originally featured in the blog post “Studying the Swimming Patterns of Fish with Simulation”.
When presenting your simulation results to colleagues, clients, or customers, analytical results don’t always provide the whole picture. For instance, the animations featured in a previous blog post about the simulation of sloshing in vehicle fuel tanks help you better visualize the movement and impact of fluid in a tank as compared to a static plot alone.
Of course, analytical results are paramount when presenting the conclusions that you draw from a simulation. However, analytical results can sometimes be hard to understand or relate to. Animations give the viewer a better idea of the real-world effects that were simulated in your research.
Animations can often help explain a concept or idea. In a blog post on thermal ablation, animations are used to illustrate how this process is used for material removal. In that blog post, we explain the concept of thermal ablation and detail how to model the phenomenon in COMSOL Multiphysics. Toward the end, we share an animation that shows what thermal ablation looks like in a laser heating example. Here it is again:
In general, simulation software is often used to analyze phenomena that cannot be seen with the naked eye. This is true for physics related to acoustics, electromagnetic waves, MEMS, and more. Animations extend this idea by providing a graphical representation of the process or design that you aim to study. For example, the following animation shows the far-field radiation pattern for a monopole antenna array featured in an introductory antenna modeling blog post.
The reasons for creating animations from your simulation results don’t just apply to your COMSOL Multiphysics® models. You can also add the animation functionality to any simulation app that you create with the Application Builder. Depending on who is using your app and for what purpose, you can build it so that users can generate an animation of the results with the click of a button in the app’s user interface.
Several of the demo applications within the Application Library contain animations, such as the Biosensor Design app featured in a previous blog post.
Ready to make your own animations in COMSOL Multiphysics? We have a tutorial video to show you how. Animations can at first seem tricky and time-consuming to get just right. In the video, you will see a few best practices that will help minimize the amount of time you’ll spend producing animations.
After watching the video, you’ll be ready to generate and export animations of your own simulation results.
If you work with computationally large problems, the Domain Decomposition solver can increase efficiency by dividing the problem’s spatial domain into subdomains and computing the subdomain solutions either concurrently or sequentially, on the fly. We have already learned about using the Domain Decomposition solver as a preconditioner for an iterative solver and discussed how it can enable simulations that would otherwise be constrained by the available memory. Today, we will take a detailed look at how to use this functionality with a thermoviscous acoustics example.
Let’s start with the Transfer Impedance of a Perforate tutorial model, which can be found in the Application Library of the Acoustics Module. This example model uses the Thermoviscous Acoustics, Frequency Domain interface to model a perforate, a plate with a distribution of small perforations or holes.
A simulation of transfer impedance in a perforate.
For this complex simulation, we are interested in the velocity, temperature, and total acoustic pressure in the perforate model. Let’s see how we can use the Domain Decomposition solver to compute these quantities in situations where the required resolution exceeds the available memory.
Let’s take a closer look at how we can set up a Domain Decomposition solver for the perforate model. The original model uses a fully coupled solver combined with a GMRES iterative solver. As a preconditioner, two hybrid direct preconditioners are used; i.e., the preconditioners separate the temperature from the velocity and pressure. By default, the hybrid direct preconditioners are used with PARDISO.
As the mesh is refined, the amount of memory used continues to grow. An important parameter in the model is the minimum thickness of the viscous boundary layer (dvisc), which has a typical size of 50 μm, while the perforates are a few millimeters in size. The minimum mesh element size is taken to be dvisc/2. To refine the solution, we divide dvisc by the refinement factors r = 1, 2, 3, 5. We can insert the domain decomposition preconditioner by right-clicking the Iterative node and selecting Domain Decomposition. Below the Domain Decomposition node, we find the Coarse Solver and Domain Solver nodes.
To accelerate the convergence, we need to use the coarse solver. Since we do not want to use an additional coarse mesh, we set Coarse Level > Use coarse level to Algebraic in order to use an algebraic coarse grid correction. On the Domain Solver node, we add two Direct Preconditioners and enable the hybrid settings as they were used in the original model. For the coarse solver, we take the direct solver PARDISO. If we use a Geometric coarse grid correction instead, we can also apply a hybrid direct coarse solver.
Settings for the Domain Decomposition solver.
We can compare the default iterative solver with hybrid direct preconditioning to both the direct solver and the iterative solver with domain decomposition preconditioning on a single workstation. For the unrefined mesh with a refinement factor of r = 1, the model has 158,682 degrees of freedom, and all three solvers use around 5-6 GB of memory to find the solution for a single frequency. For r = 2 with 407,508 degrees of freedom and r = 3 with 812,238 degrees of freedom, the direct solver uses a little more memory than the two iterative solvers (12-14 GB for r = 2 and 24-29 GB for r = 3). For r = 5 and 2,109,250 degrees of freedom, the direct solver uses 96 GB and the iterative solvers use around 80 GB on a sequential machine.
As we will learn in the subsequent discussion, the Recompute and clear option for the Domain Decomposition solver gives a significant advantage with respect to the total memory usage.
Memory Usage, Nondistributed Case | Degrees of Freedom | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning with Recompute and clear enabled |
---|---|---|---|---|---|
Refinement r = 1 | 158,682 | 5.8 GB | 5.3 GB | 5.4 GB | 3.6 GB |
Refinement r = 2 | 407,508 | 14 GB | 12 GB | 13 GB | 5.5 GB |
Refinement r = 3 | 812,238 | 29 GB | 24 GB | 26 GB | 6.4 GB |
Refinement r = 5 | 2,109,250 | 96 GB | 79 GB | 82 GB | 12 GB |
Memory usage for the direct solver and the two iterative solvers in the nondistributed case.
On a cluster, the memory load per node can be much lower than on a single-node computer. Let us consider the model with a refinement factor of r = 5. The direct solver scales nicely with respect to memory, using 65 GB and 35 GB per node on 2 and 4 nodes, respectively. On a cluster with 4 nodes, the iterative solver with domain decomposition preconditioning with 4 subdomains only uses around 24 GB per node.
Memory Usage per Node on a Cluster | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning |
---|---|---|---|
1 node | 96 GB | 79 GB | 82 GB (with 2 subdomains) |
2 nodes | 65 GB | 56 GB | 47 GB (with 2 subdomains) |
4 nodes | 35 GB | 35 GB | 24 GB (with 4 subdomains) |
Memory usage per node on a cluster for the direct solver and the two iterative solvers for refinement factor r = 5.
On a single-node computer, the Recompute and clear option for the Domain Decomposition solver gives us the benefit we expect: reduced memory usage. However, it comes with the additional cost of decreased performance. For r = 5, the memory usage is around 41 GB for 2 subdomains, 25 GB for 4 subdomains, and 12 GB for 22 subdomains (the default settings result in 22 subdomains). For r = 3, we use around 15 GB of memory for 2 subdomains, 10 GB for 4 subdomains, and 6 GB for 8 subdomains (default settings).
Even on a single-node computer, the Recompute and clear option for the domain decomposition method gives a significantly lower memory consumption than the direct solver: 12 GB instead of 96 GB for refinement factor r = 5 and 6 GB instead of 30 GB for refinement factor r = 3. Despite the performance penalty, the Domain Decomposition solver with the Recompute and Clear option is a viable alternative to the out-of-core option for the direct solvers when there is insufficient memory.
Refinement Factor | r = 3 | r = 5 |
---|---|---|
Memory Usage | 30 GB | 96 GB |
Memory usage on a single-node computer with a direct solver for refinement factors r = 3 and r = 5.
Number of Subdomains | Recompute and clear Option | Refinement r = 3 | Refinement r = 5 |
---|---|---|---|
2 | Off | 24 GB | 82 GB |
2 | On | 15 GB | 41 GB |
4 | On | 10 GB | 25 GB |
8 | On | 6 GB | 20 GB |
22 | On | - | 12 GB |
Memory usage on a single-node computer with an iterative solver, domain decomposition preconditioning, and the Recompute and clear option enabled for refinement factors r = 3 and r = 5.
As demonstrated with this thermoviscous acoustics example, using the Domain Decomposition solver can greatly lower the memory footprint of your simulation. By this means, domain decomposition methods can enable the solution of large and complex problems. In addition, parallelism based on distributed subdomain processing is an important building block for improving computational efficiency when solving large problems.
The Domain Decomposition solver is based on a decomposition of the spatial domain into overlapping subdomains, where the subdomain solutions are less complex and more efficient in terms of memory usage and parallelization compared to the solution of the original problem.
In order to describe the basic idea of the iterative spatial Domain Decomposition solver, we consider an elliptic partial differential equation (PDE) over a domain D and a spatial partition {D_i} such that the whole domain D = ∪_i D_i is covered by the union of the subdomains D_i. Instead of solving the PDE on the entire domain at once, the algorithm iteratively solves a number of problems for each subdomain D_i.
For Schwarz-type domain decomposition methods, the subdomains encompass an overlap of their support in order to transfer the information. On the interfaces between the subdomains, the solutions of the neighboring subdomains are used to update the current subdomain solution. For instance, if the subdomain D_{i} is adjacent to a boundary, its boundary conditions are used. The iterative domain decomposition procedure is typically combined with a global solver on a coarser mesh in order to accelerate convergence.
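Schematically, one Schwarz update for a subdomain D_i solves the original PDE restricted to D_i, taking the latest values from the neighboring subdomains as Dirichlet data on the interior interfaces. This is a textbook sketch of the idea, not the exact formulation used by the software:

```latex
\begin{aligned}
\mathcal{L}\,u_i^{k+1} &= f && \text{in } D_i,\\
u_i^{k+1} &= u^{k} && \text{on } \partial D_i \setminus \partial D \quad \text{(data from neighboring subdomains)},\\
u_i^{k+1} &= g && \text{on } \partial D_i \cap \partial D \quad \text{(original boundary conditions)}.
\end{aligned}
```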
A 2D domain with a regular triangular mesh and its degrees of freedom decomposed into square subdomains.
To illustrate the spatial domain decomposition, consider a 2D regular triangular mesh. For simplicity, we consider linear finite element shape functions with degrees of freedom in the 3 nodes of each triangular element. The domain (more precisely, its degrees of freedom) is decomposed into square subdomains, each consisting of 25 degrees of freedom. All interior subdomains have 8 neighboring subdomains, and all degrees of freedom are unique to a single subdomain. The support of the linear element functions for a single subdomain overlaps with the support of its neighbors.
Support of the linear element functions of the blue subdomain.
To improve the convergence rate of the iterative procedure, we may need to include a larger number of degrees of freedom in order to have a larger overlap of the subdomain support. This may give a more efficient coupling between the subdomains and a lower iteration count until convergence of the iterative procedure. However, this benefit comes at the cost of additional memory usage and additional computations during the setup and solution phases because of the larger subdomain sizes.
If an additional overlap of width 1 is requested, we add an additional layer of degrees of freedom to the existing subdomain. In our example, 22 degrees of freedom (marked with blue rectangles) are added to the blue subdomain. The support of the blue subdomain is enlarged accordingly.
The same procedure is repeated for the red, green, and yellow subdomains. In the resulting subdomain configuration, some of the degrees of freedom are unique to a single subdomain, while others are shared by two, three, or even four subdomains. Dependencies arise for the shared degrees of freedom whenever one of the adjacent subdomains updates its solution.
Extended subdomain with 47 degrees of freedom and its support. The additional 22 degrees of freedom are shared with the neighboring subdomains.
It is known (Ref. 1) that the iterative solution of the set of subdomain problems on the subdomains D_i converges toward the solution of the original problem formulated over the whole domain D. Hence, the global solution can be found by iteratively solving each subdomain problem, with all other subdomains fixed, until the convergence criterion is met. The optional coarse grid problem can improve the convergence rate considerably. The coarse grid problem, which is solved on the entire domain D, gives an estimate of the solution on the fine grid on D and can transfer global information faster. The convergence rate of the method depends on the ratio between the size of the coarse grid mesh elements and the width of the overlap zone on the fine grid.
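In the standard notation of the domain decomposition literature, with R_i the restriction onto subdomain D_i, A_i the corresponding subdomain matrix, and A_0 the coarse problem, a two-level additive Schwarz preconditioner can be written as follows. This is a textbook sketch of how the subdomain solves and the coarse grid correction combine, not the software's exact implementation:

```latex
M^{-1} \;=\; \underbrace{R_0^{T} A_0^{-1} R_0}_{\text{coarse grid correction}} \;+\; \sum_{i=1}^{N} \underbrace{R_i^{T} A_i^{-1} R_i}_{\text{subdomain solves}}
```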
If we compute the solution on a particular subdomain D_i, the neighboring subdomains need to update their degrees of freedom adjacent to the support of D_i. In COMSOL Multiphysics, there are four options available for coordinating the subdomain overlap and the global coarse grid solution: the Solver selector in the domain decomposition settings can be set to Additive Schwarz, Multiplicative Schwarz (the default), Hybrid Schwarz, or Symmetric Schwarz. For Additive Schwarz methods, the affected degrees of freedom are updated after the solutions have been computed on all subdomains, without an intermediate data exchange. In this case, the order of the subdomain solutions is arbitrary and there are no dependencies between the subdomains during this solution phase.
In contrast, Multiplicative Schwarz methods update the affected degrees of freedom at the overlap of the support of neighboring subdomains after every subdomain solution. This typically speeds up the iterative solution procedure. However, there is an additional demand for prescribing an order of the subdomain solutions, which are no longer fully independent of each other.
The Hybrid Schwarz method updates the solution after the global solver problem is solved. The subdomain problems are then solved concurrently as in the Additive Schwarz solver case. The solution is then updated again and the global solver problem is solved a second time. The Symmetric Schwarz method solves the subdomain problems in a given sequence like the Multiplicative Schwarz solver, but in a symmetric way.
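To make the difference between the additive and multiplicative update orders concrete, here is a self-contained toy sketch in Python: a 1D Laplace problem solved with two overlapping subdomains. The problem sizes and subdomain layout are made up for illustration, and this is not COMSOL code. The multiplicative variant, which reuses freshly updated interface data within a sweep, typically converges in fewer sweeps:

```python
# Toy illustration of overlapping Schwarz iterations (not COMSOL code).
# Problem: -u'' = 0 on [0, 1], u(0) = 0, u(1) = 1; exact solution u(x) = x.
# The "direct subdomain solve" is exact here, since the solution of the
# Laplace equation is linear between the subdomain's boundary values.

N = 20                                  # grid intervals; nodes 0..N
exact = [i / N for i in range(N + 1)]

def subdomain_solve(u, lo, hi):
    """Exact solve on nodes lo..hi with Dirichlet data u[lo-1], u[hi+1]."""
    a, b = u[lo - 1], u[hi + 1]
    n = hi - lo + 2
    return [a + (b - a) * (k + 1) / n for k in range(hi - lo + 1)]

def schwarz(multiplicative, tol=1e-8, max_it=500):
    u = [0.0] * (N + 1)
    u[N] = 1.0                          # global Dirichlet data
    s1, s2 = (1, 12), (8, 19)           # subdomains overlap on nodes 8..12
    for it in range(1, max_it + 1):
        if multiplicative:
            # Gauss-Seidel-like: the second solve sees the updated iterate
            u[s1[0]:s1[1] + 1] = subdomain_solve(u, *s1)
            u[s2[0]:s2[1] + 1] = subdomain_solve(u, *s2)
        else:
            # Jacobi-like: both solves see the old iterate, then update,
            # averaging in the overlap where both subdomains contribute
            v1 = subdomain_solve(u, *s1)
            v2 = subdomain_solve(u, *s2)
            new = u[:]
            new[s1[0]:s1[1] + 1] = v1
            new[s2[0]:s2[1] + 1] = v2
            for i in range(s2[0], s1[1] + 1):
                new[i] = 0.5 * (v1[i - s1[0]] + v2[i - s2[0]])
            u = new
        err = max(abs(a - b) for a, b in zip(u, exact))
        if err < tol:
            return it, err
    return max_it, err

mult_sweeps, _ = schwarz(multiplicative=True)
add_sweeps, _ = schwarz(multiplicative=False)
print("multiplicative Schwarz sweeps:", mult_sweeps)
print("additive Schwarz sweeps:", add_sweeps)
```

In this toy setup, the multiplicative ordering roughly halves the number of sweeps, mirroring the convergence behavior described above, while the additive ordering allows all subdomain solves within a sweep to run concurrently.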
Direct linear solvers are typically more robust and require less tweaking of physics-dependent settings than iterative solvers with tuned preconditioners. Due to their memory requirements, however, direct solvers may become infeasible for larger problems. Iterative solvers are typically leaner in memory consumption, but some models still cannot be solved due to resource limitations. We discussed the memory requirements for solving large models in a previous blog post. Other preconditioners for iterative solvers may also fail due to specific characteristics of the system matrix. Domain decomposition is a preconditioner that, in many cases, requires less tuning than other preconditioners.
If the available memory is the limiting factor, we can move the solution process to a cluster that provides a larger amount of accumulated memory. We can consider the domain decomposition preconditioner, using a domain solver with settings that mimic the original solver settings, since the Domain Decomposition solver has the potential to do more concurrent work. As we will see, the Domain Decomposition solver can also be used in a Recompute and clear mode, which yields a significant memory reduction even on a workstation.
If we do not want to use an additional coarse mesh to construct the global solver, we can compute its solution using an algebraic method. This may come at the price of an increased number of GMRES iterations compared to setting the Use coarse level selector to Geometric, which is based on an additional coarser mesh. The advantage is that the algebraic method constructs the global solver from the finest-level system matrix rather than from an additional coarser mesh. With the Algebraic option, the generation of an additional coarse mesh, which might be costly or not even possible, can be avoided.
On a cluster, a subdomain problem can be solved on a single node (or on a small subset of the available nodes). The size of the subdomains, hence the memory consumption per node, can be controlled by the Domain Decomposition solver settings. For the Additive Schwarz solver, all subdomain problems can be solved concurrently on all nodes. The solution updates at the subdomain interfaces occur in the final stage of the outer solver iteration.
For the Multiplicative Schwarz solver, there are intermediate updates of the subdomain interface data. This approach can speed up the convergence of the iterative procedure, but it introduces additional dependencies for the parallel solution. We must use a subdomain coloring mechanism to identify sets of subdomains that can be processed concurrently. This may limit the degree of parallelism if there is a low number of subdomains per color. In general, the Multiplicative Schwarz and Symmetric Schwarz methods converge faster than the Additive Schwarz and Hybrid Schwarz methods, while the latter two can achieve better parallel speedup.
A subdomain coloring mechanism is used for multiplicative Schwarz-type domain decomposition preconditioning.
In the Domain Decomposition solver settings, there is a Use subdomain coloring checkbox for the Multiplicative Schwarz and Hybrid Schwarz methods. This option is enabled by default and takes care of grouping subdomains into sets — so-called colors — that can be handled concurrently. Let us consider a coloring scheme with four colors (blue, green, red, and yellow). All subdomains of the same color can compute their subdomain solution at the same time and communicate the solution at the subdomain overlap to their neighbors. For four colors, the procedure is repeated four times until the global solution can be updated.
Domain decomposition on a cluster with nine nodes. A subdomain coloring scheme is used to compute subdomain solutions simultaneously for each different color.
On a cluster, the subdomains can be distributed across the available compute nodes. Every color can be handled in parallel and all of the nodes compute their subdomain solutions for the current color at the same time and then proceed with the next color. The coloring scheme coordinates the order of the subdomain updates for the Multiplicative Schwarz and Symmetric Schwarz methods. Communication is required for updating the degrees of freedom across the compute node boundaries in between every color. No subdomain coloring scheme is required for the Additive Schwarz and Hybrid Schwarz methods.
The different Domain Decomposition solver types.
If the Domain Decomposition solver is run on a single workstation, all data needs to be set up in the same memory space, and there is no longer any benefit from storing only specific subdomain data. Due to the subdomain overlap, the memory consumption might even increase compared to the original problem. To overcome this limitation, the Domain Decomposition solver can be run in the Recompute and clear mode, where the data used by each subdomain is computed on the fly. This results in a significant memory reduction and makes it possible to solve larger problems without storing the data in virtual memory. These problems will take longer to compute due to the repeated setup of the subdomain problems.
This method is particularly useful when the solution uses a lot of virtual memory with disk swapping. If the Automatic option is used, the Recompute and clear mechanism is activated if there is an out-of-memory error during the setup phase. The setup is then repeated with Recompute and clear activated. The Recompute and clear option is comparable to the out-of-core option of the direct solvers. Both methods come with an additional penalty: either storing additional data to disk (Out-of-core) or recomputing specific parts of the data again and again (Recompute and clear). We can save even more memory by using the matrix-free format on top of the Recompute and clear option.
In the settings of the Domain Decomposition solver, we can specify the intended Number of subdomains (see the figure below). In addition, the Maximum number of DOFs per subdomain is specified. If this bound is exceeded, that is, if one of the subdomains would have to handle more degrees of freedom than specified, all subdomains are recreated using a larger number of subdomains.
Settings window for the Domain Decomposition solver.
The subdomains are created by means of the element and vertex lists taken from the mesh. We are able to choose from different subdomain ordering schemes. The Nested Dissection option creates a subdomain distribution by means of graph partitioning. This option typically gives a low number of colors and results in balanced subdomains with an approximately equal number of degrees of freedom, minimal subdomain interfaces, and a small overlap.
An alternative method that also avoids slim domains is to use the Preordering algorithm based on a Space-filling curve. If we select the option None for the Preordering algorithm, the subdomain ordering is based on the ordering of the mesh elements and degrees of freedom. This can result in slim domains. Detailed information about the applied subdomain configuration is given in the solver log if the Solver log on the Advanced node is set to Detailed.
When simulating problems with large memory requirements in the COMSOL® software, we are limited by the available hardware resources. An iterative solver with domain decomposition preconditioning should be considered as a memory-lean alternative to direct solvers. On a workstation, the Recompute and clear option for the Domain Decomposition solver is an alternative to the out-of-core mechanism for the direct solvers.
Memory-heavy simulations that fail on a single computer with insufficient memory can still be run on clusters. The direct solvers in COMSOL Multiphysics automatically use the distributed memory, leading to a memory reduction on each node. The Domain Decomposition solver is an additional option that takes advantage of the parallelization based on the spatial subdomain decomposition.
The Domain Decomposition solver, clusters, and a variety of the options discussed here will help you improve computational efficiency when working with large models in COMSOL Multiphysics. In an upcoming blog post, we will demonstrate using the domain decomposition preconditioner in a specific application scenario. Stay tuned!
A. Toselli and O. Widlund, “Domain Decomposition Methods — Algorithms and Theory,” Springer Series in Computational Mathematics, vol. 34, 2005.
We have already gone over the physical basis of the firing mechanism that generates the action potential in cells, and we studied the generation of such a waveform using the FitzHugh-Nagumo (FN) model.
The dynamics of the simple FitzHugh-Nagumo model, featured in a computational app.
Today, we will move beyond the FN model to a more rigorous mathematical model, the Hodgkin-Huxley (HH) model. Unlike the FitzHugh-Nagumo model, which works well as a proof of concept, the Hodgkin-Huxley model is grounded in cell physiology and its simulation results match experiments well.
In the HH model, the cell membrane contains gated and nongated channels that allow the passage of ions through them. The nongated channels are always open and the gated channels open under particular conditions. When the cell is at rest, the neurons allow the passage of sodium and potassium ions through the nongated channels. First, let us presume that only the potassium channels exist. For potassium, which is in excess inside the cell, the difference of concentration between the inside and outside of the cell acts as a driving force for the ions to migrate. This is the process of movement of ions by diffusion, or the chemical mechanism that initially drives potassium out of the cells.
This movement process cannot go on indefinitely. This is because the potassium ions are charged. Once they accumulate outside the cell, these ions establish an electrical gradient that drives some potassium ions into the cells. This is the second mechanism (the electrical mechanism) that affects the movement of ions. Eventually, these two mechanisms balance each other and the potassium efflux and influx balance. The potential at which the balance happens is known as the Nernst potential for that ion. In excitable cells, the Nernst potential for potassium, E_K, is about -77 mV, and for sodium ions, E_Na, it is around +50 mV.
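These values follow from the Nernst equation, E = (RT/zF) ln([C]_out/[C]_in). A quick sketch, using typical textbook squid-axon concentrations (the concentration values here are illustrative, not taken from the original post):

```python
import math

def nernst(c_out, c_in, z=1, T=293.15):
    """Nernst potential in mV for an ion of valence z at temperature T (K)."""
    R, F = 8.314, 96485.0        # gas constant J/(mol K), Faraday constant C/mol
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Approximate squid-axon concentrations in mM (illustrative values):
E_K = nernst(c_out=20.0, c_in=400.0)    # potassium, concentrated inside
E_Na = nernst(c_out=440.0, c_in=50.0)   # sodium, concentrated outside
print(f"E_K = {E_K:.1f} mV, E_Na = {E_Na:.1f} mV")
```

Potassium comes out near -76 mV and sodium near +55 mV, in line with the values quoted above.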
Now let us allow the presence of a few nongated sodium channels in the membrane. Because sodium ions abound in the extracellular region, an influx of sodium ions into the cell must occur. The incoming sodium ions reduce the electrical gradient, disturb the potassium equilibrium, and result in a net potassium efflux from the cell, until the cell reaches its resting potential at around -70 mV. It is important to mention here that the net efflux of potassium and net influx of sodium cannot go on forever; otherwise, the chemical gradients that cause the movement would eventually vanish. In normal conditions, ion pumps maintain the resting potential of the cell by bringing potassium back in and driving sodium out through active transport.
Let’s derive an equivalent circuit model of a cell in which we can imitate the effects of the different cellular mechanisms we just described by different commonly found circuit components, such as capacitors, resistors, and batteries. The voltage response of the circuit is the signal that corresponds to the action potential.
Overall, there are four currents that are important for the HH model: the capacitive current, the sodium current, the potassium current, and a leak current.
Schematic of the currents in a Hodgkin-Huxley model.
The four currents flow through parallel branches, with the membrane potential V as the driving force (see the figure above; the ground denotes extracellular potential). The cell membrane has a capacitive character, which allows it to store charge. In the figure above, this is the left-most branch, modeled with a capacitor of strength C_{m}. The other branches account for three ionic currents that flow through ion channels. In each branch, the effects of channels are modeled through conductance (shown as resistance in the diagram), and the effect of the concentration gradient is represented by the Nernst potential of the ions, which are represented as batteries.
Thus, when a current I is injected into the cell, it gets divided into four parts, and the conservation of charge leads us to the following balance equation:

I = C_{m} dV/dt + I_Na + I_K + I_L

Or, equivalently:

C_{m} dV/dt = I - g_Na (V - E_Na) - g_K (V - E_K) - g_L (V - E_L)
What is of paramount importance is that the sodium and potassium channel conductances are not constant; rather, they are functions of the cell potential. So how do we model them? Remember that some of the ion channels are gated and they can have multiple gates. Assume that there are voltage-dependent rate functions α_{ρ}(V) and β_{ρ}(V), which give us the rate constants of a gate going from a closed state to open and from open to closed, respectively. If ρ denotes the fraction of gates that are open, a simple balance law yields the following equation for the evolution of ρ:

dρ/dt = α_{ρ}(V)(1 - ρ) - β_{ρ}(V) ρ
Different gated channels are characterized by their gates. In the HH model, the potassium channel is hypothesized to be composed of four n-type gates. Since the channel conducts only when all four are open, the potassium conductance is modeled through the equation

g_K = ḡ_K n^4
For sodium, the situation is assumed to be more complicated. The sodium-gated channel also has four gates, but three m-type gates (activation-type gates that open when the cell depolarizes) and one h-type gate (a deactivation gate that closes when the cell depolarizes). Therefore, the sodium channel conductance is given by

g_Na = ḡ_Na m^3 h
In the above equations, ḡ_K and ḡ_Na are the maximum potassium and sodium conductances. The functional forms of α_{ρ}(V) and β_{ρ}(V) for ρ = n, m, h can be found in any standard reference.
The leak conductance g_L is assumed to be constant. Therefore, the HH model is completely described by the following set of equations:

C_{m} dV/dt = I - ḡ_Na m^3 h (V - E_Na) - ḡ_K n^4 (V - E_K) - g_L (V - E_L)

dρ/dt = α_{ρ}(V)(1 - ρ) - β_{ρ}(V) ρ, for ρ = n, m, h
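A direct time integration of this system reproduces the firing behavior. The sketch below uses forward Euler with the commonly published textbook rate functions and parameter values; the stimulus current of 10 µA/cm² is an illustrative choice, not necessarily the value used in the app:

```python
import math

# Standard Hodgkin-Huxley parameters (V in mV, time in ms)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3    # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4            # mV

# Textbook rate functions alpha_rho(V), beta_rho(V) for rho = n, m, h
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))

def simulate(I=10.0, t_end=50.0, dt=0.01):
    V = -65.0
    # Start the gates at their steady-state (asymptotic) values
    n = a_n(V) / (a_n(V) + b_n(V))
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    trace = []
    for _ in range(int(t_end / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
        V += dt * (I - I_ion) / C_m
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        trace.append(V)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

For this stimulus the membrane potential spikes well above 0 mV and then hyperpolarizes, the action-potential shape discussed in the following paragraphs.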
The key to understanding the Hodgkin-Huxley model lies in understanding the gate equations. We can recast the equations for the gates in the following form:

dρ/dt = (ρ_{∞}(V) - ρ) / τ_{ρ}(V)
with ρ_{∞} = α_{ρ}/(α_{ρ} + β_{ρ}) and τ_{ρ} = 1/(α_{ρ} + β_{ρ}).
This is a very well-known equation in electrical circuits. At a fixed voltage, it says that ρ asymptotically approaches ρ_{∞} as its final value, and that τ_{ρ}, the time constant, dictates the rate of approach: the smaller τ_{ρ}, the faster the approach. The following figure shows the values of these two quantities for ρ = n, m, and h.
The asymptotic values (left) and time constants (right) for the gate equations of the Hodgkin-Huxley model.
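At a clamped voltage the gate equation is just a first-order relaxation, so ρ decays exponentially toward ρ_∞ with time constant τ_ρ. A minimal numeric check (the values of ρ_∞ and τ_ρ here are arbitrary illustrative numbers):

```python
import math

# d(rho)/dt = (rho_inf - rho) / tau at fixed voltage has the exact solution
# rho(t) = rho_inf + (rho0 - rho_inf) * exp(-t / tau)
rho_inf, tau, rho0 = 0.8, 1.0, 0.0
dt, t_end = 1e-4, 5.0

rho = rho0
for _ in range(int(t_end / dt)):
    rho += dt * (rho_inf - rho) / tau    # forward Euler step

exact = rho_inf + (rho0 - rho_inf) * math.exp(-t_end / tau)
print(f"numeric: {rho:.4f}, exact: {exact:.4f}")
```

After five time constants, ρ has essentially reached ρ_∞, which is why the gates with the smallest τ_ρ (the m gates, as the figure shows) respond first during depolarization.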
It is easy to conclude from the figures above that n_{∞} and m_{∞} increase as the cell depolarizes, while h_{∞} decreases under similar conditions. From the second graph, we find that the time constant for the m gates is much smaller than the others, so the activation of sodium is much faster than the activation of potassium or the deactivation of sodium.
When depolarization starts, n_{∞} and m_{∞} increase and h_{∞} decreases. The governing equations demand that all of these quantities approach their steady-state values; therefore, n and m increase and h decreases. However, we should also remember the differences in the time constants of the gating variables. A comparison shows that the activation of the sodium gates happens much faster than their deactivation or the opening of the potassium channels. Therefore, there is an initial overall increase in the sodium conductance. This results in an increase of the sodium current, which raises the membrane potential and causes V to approach E_Na. This is how the HH model accounts for the rising part of the action potential.
However, as this process continues, h keeps decreasing. Once the value of h goes below a threshold, the sodium channels are effectively closed. Also, the approach of V toward E_Na kills the driving force for the sodium current. Meanwhile, the potassium channels, which have a slower time constant, open up to a large extent. This, coupled with the large driving force available for the potassium current, reverses the flow: the potassium ions move out of the cell, and eventually the membrane potential settles toward the hyperpolarized state.
We can build a computational simulation app to analyze the Hodgkin-Huxley model, which enables us to test various parameters without changing the underlying complex model. We can do this by designing a user-friendly app interface using the Application Builder in the COMSOL Multiphysics® software. As a first step, we create a model of the Hodgkin-Huxley equations using the Model Builder in the COMSOL software. After building the underlying model, we transform it into an app using the Application Builder. By building an app, we can restrict and control the various inputs and outputs of our model. We then pass the app to the end user, who doesn’t need to worry about the model setup process and can focus on extracting and analyzing the results of the simulation.
In our case, we implemented the underlying Hodgkin-Huxley model using the Global ODEs and DAEs interface in COMSOL Multiphysics. This interface is part of the Mathematics interfaces in the COMSOL software and is capable of solving systems of ordinary differential and differential-algebraic equations. It is often used to construct models for which the equations and their initial conditions are generic. In the interface, we can specify the equations and unknowns and add initial conditions. The interface, with the model equations, is shown below.
We also create the postprocessing elements, graphs, and animations in the Model Builder. Once the model is ready, we move on to the Application Builder again. We connect the elements of the model to the app’s user interface through various GUI options like input fields, control buttons, display panels, and some coded methods.
You can learn more about how to build and run simulation apps in this archived webinar.
Finally, we can design the user interface of the Hodgkin-Huxley app. With the Form Editor in the Application Builder, we can design a custom user interface with a number of different buttons, panels, and displays. This user interface features a Model Parameters section to input the different parameters of the HH model, such as the Nernst potential, maximum gate conductance, and membrane capacitance. We can also provide two types of excitation current to the model: a unit step current or an excitation train. As the parameters change, the app displays the action potential and excitation current, as well as the evolution of gate variables m, n, and h.
With the Reset and Compute buttons, it is easy to run multiple tests after changing the parameters. There are also graphical panels that display visualizations and plots of the model results and an Animate button that creates captivating animations. The Simulation Report button generates a summary of the simulation.
The user interface for the Hodgkin-Huxley Model simulation app.
Making buttons work in an app is a simple process. All we have to do is write a few methods using the Method Editor tool that comes with the Application Builder and connect them to the buttons properly. Let me illustrate with an example. We can design the Hodgkin-Huxley Model app so that when it launches, the Animate and Simulation Report buttons are inactive (see the figure below). This is because the app user will not need to use either of these buttons until after they perform a simulation.
The Simulation Report and Animate buttons are disabled at the start of the simulation.
To do so, we can write a method that instructs the app to execute certain functions during the launch.
A method that disables the Simulation Report and Animate buttons during launch.
Observe that we have disabled the Simulation Report and Animate buttons using the instructions in lines 7 and 8 of the method. If you are worried about coming up with the syntax for your methods, let me assure you that it is much simpler than it seems. If we want to record the code corresponding to a sequence of actions, we click the Record Code button in the Application Builder ribbon, go to the Model Builder, execute the actions, and, once done, click the Stop Recording button. The corresponding code is placed in the method, and we can then modify the instructions if necessary.
Once a simulation is complete, we would like these buttons to become active in the app. In another method associated with the Compute button, we insert the following code segment
We then ensure that this segment is executed if the solution is computed successfully. You will see that this enables both buttons.
To summarize, you can use a simulation app to easily compute and visualize parameter changes when working with a complex model that involves multiple equations and types of physics, such as the Hodgkin-Huxley model discussed here. This simulation app is just one example of how you can design the layout of an app and customize its input parameters to fit your needs. Use this app as inspiration to build your own app, whether you are analyzing the action potential in a cell with a mathematical model or teaching students about complicated math and engineering concepts. No matter what purpose your app serves, it will ensure that your simulation process is simple and intuitive.
Nerve cells are separated from the extracellular region by a lipid bilayer membrane. When the cells aren’t conducting a signal, there is a potential difference of about -70 mV across the membrane. This difference is known as the cell’s resting potential. Mineral ions, such as sodium and potassium, and negatively charged protein ions, contained within the cell, maintain the resting potential. When the cell receives an external stimulus, its potential spikes toward a positive value, a process known as depolarization, before falling off again to the resting potential, called repolarization.
Plot of a cell’s action potential.
In one example, the concentration of the sodium ions at rest is much higher in the extracellular region than it is within the cell. The membrane contains gated channels that selectively allow the passage of ions through them. When the cell is stimulated, the sodium channels open up and there is a rush of sodium ions into the cell. This sodium “current” raises the potential of the cell, resulting in depolarization. However, since the channel gates are voltage driven, the sodium gates close after a while. The potassium channels then open up and an outbound potassium current flows, leading to the repolarization of the cell.
Hodgkin and Huxley explained this mechanism of generating action potential through mathematical equations (Ref. 2). While this was a great success in the mathematical modeling of biological phenomena, the full Hodgkin-Huxley model is quite complicated. On the other hand, the FitzHugh-Nagumo model is relatively simple, consisting of fewer parameters and only two equations. One is for the quantity V, which mimics the action potential, and the other is for the variable W, which modulates V.
Today, we’ll focus on the FN model, while the HH model will be a topic of discussion for a later time.
The two equations in the FN model are

dV/dt = V - V^3/3 - W + I

and

dW/dt = ε(V + a - bW)
The parameter I corresponds to an excitation current, while a and b are the controlling parameters of the model. The evolution of W is slower than that of V because of the small parameter ε multiplying the right-hand side of the second equation. The fixed points of the FN model equations are the solutions of the following equation system:

V - V^3/3 - W + I = 0

and

V + a - bW = 0
The V-nullcline and W-nullcline are the curves dV/dt = 0 and dW/dt = 0, respectively, in the VW-plane. Note that the V-nullcline is a cubic curve in the VW-plane and the W-nullcline is a straight line. The slope of the line is controlled in such a way that the nullclines intersect at a single point, making it the system’s only fixed point.
The parameter I simply shifts the V-nullcline up or down. Thus, changing I modulates the position of the fixed point, so that different values of I place the fixed point on the left, middle, or right part of the V-nullcline.
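This sweep can be sketched numerically. With the classical FN coefficients a = 0.7 and b = 0.8 (illustrative values), the fixed point is the single root of V - V^3/3 + I - (V + a)/b = 0, and the knees of the cubic sit at V = ±1, delimiting the three regions:

```python
def fixed_point_V(I, a=0.7, b=0.8):
    """Bisection for the single root of V - V^3/3 + I - (V + a)/b = 0."""
    f = lambda V: V - V**3 / 3 + I - (V + a) / b
    lo, hi = -5.0, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # f'(V) = -0.25 - V^2 < 0 for these parameters, so f is strictly
        # decreasing and the root is unique; bracket accordingly.
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for I in (0.1, 1.0, 2.5):
    print(f"I = {I}: fixed point at V = {fixed_point_V(I):+.3f}")
```

Small I places the fixed point on the left branch (V < -1), an intermediate I in the middle (-1 < V < 1), and large I on the right branch (V > 1).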
To simulate what happens when the fixed point is in each region, we can use the Global ODEs and DAEs interface included in the base package of COMSOL Multiphysics.
The V-nullcline is shown in green in the figure below. In the region above this nullcline, dV/dt is negative, while in the region below it, dV/dt is positive. The W-nullcline is shown in red; in the region to the right of this straight line, dW/dt is positive, and to the left, dW/dt is negative.
Let’s first examine what happens if the fixed point is on the right side, Region 3, of the V-nullcline. We’ll say that when t, representing time, equals zero, both V and W are also at zero. In this case, both dV/dt and dW/dt are positive at and around the starting point, and thus both variables change as time progresses. But since V evolves faster than W, V increases rapidly while W remains virtually unchanged. In the figure, we can see that this results in a near-horizontal part of the V-W curve.
As the curve approaches the V-nullcline, the rate of change of V slows down and the evolution of W becomes more prominent. Since dW/dt is still positive, W must increase, and the curve moves upward. The fixed point then attracts the curve and the evolution ends at the fixed point.
Plot of the VW-plane when the fixed point is on the right side of the V-nullcline.
If the fixed point is in the middle, Region 2, then what we have discussed so far still holds true. The difference is that once the curve goes beyond the right knee of the V-nullcline, dV/dt becomes negative and V rapidly decays. While moving left, the curve crosses the red nullcline from right to left. From this point on, while both V and W diminish, the evolution of V dominates and the curve becomes horizontal once again.
This continues until the curve hits the left part of the V-nullcline. The curve begins to hug the V-nullcline and starts a slow downward journey. When it touches the left knee of the V-nullcline, it moves rapidly toward the right part of the V-nullcline. Note that this motion never hits the fixed point and therefore keeps repeating, which we can see in the plot below.
Plot of the VW-plane when the fixed point is in the middle region of the V-nullcline.
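We can confirm this sustained oscillation numerically with a forward Euler integration of the FN equations. The classical parameter choices a = 0.7, b = 0.8, ε = 0.08, and I = 1.0 (illustrative values that place the fixed point in the middle region) produce a relaxation oscillation:

```python
def simulate_fn(I=1.0, a=0.7, b=0.8, eps=0.08, dt=0.01, t_end=300.0):
    """Forward Euler integration of the FitzHugh-Nagumo equations."""
    V, W = 0.0, 0.0
    V_trace = []
    for _ in range(int(t_end / dt)):
        dV = V - V**3 / 3 - W + I
        dW = eps * (V + a - b * W)
        V, W = V + dt * dV, W + dt * dW
        V_trace.append(V)
    return V_trace

trace = simulate_fn()
late = trace[len(trace) // 2:]          # discard the initial transient
amplitude = max(late) - min(late)
print(f"late-time amplitude: {amplitude:.2f}")
```

The large late-time amplitude shows that the trajectory never settles at the fixed point: it keeps circulating around the limit cycle, exactly the repeating motion seen in the phase-plane plot.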
That leaves us with one last case to discuss — when the fixed point is on the left part, Region 1, of the V-nullcline. The results should look like the following plot. Note that the analyses we previously performed carry over.
Plot of the VW-plane when the fixed point is on the left side of the V-nullcline.
To explore the rich dynamics of the FN model described above, we need to repeatedly change various inputs without changing the underlying model. As such, it is desirable to have a user interface that lets us change the model parameters, run the simulation, and analyze the new results without navigating the Model Builder tree structure.
To accomplish this, we can turn to the Application Builder. This platform allows us to create an easy-to-use simulation app that exposes all of the essential aspects of the model, while keeping the rest behind the scenes. With this app, we can rapidly change the parameters via a user-friendly interface and study the results using both static figures and animations. The app also makes it easy for students to understand the FN model’s dynamics without having to worry about creating a model.
The important parameters of the FN model, i.e., a, b, ε, and I, are displayed in the app’s Model Parameters section. The graphical panels display various quantities of interest, such as the waveforms for V and W. We display the phase plane diagram in the top-right panel, along with the V- and W-nullclines. The position of the fixed point is easily identifiable from that plot. Once the simulation is complete, we can animate the time trajectories by choosing the animation option from the ribbon toolbar. To get a summary of the simulation parameters and results, we can select the Simulation Report button.
App showing the dynamics of the FitzHugh-Nagumo model when the fixed point is in Region 2.
We can easily reproduce the cases described in the previous section with our app. The image above, for example, shows what happens when the fixed point is in Region 2. We can easily move the fixed point to either Region 1 or 3 by making the current 0.1 or 2.5, respectively. Note that any other parameters in the app can also be changed to see if other interesting trends emerge.
The app that we’ve presented here is just one example of what you can create with the Application Builder. The design of your app, from its layout to the parameters that are included, is all up to you. The flexibility of the Application Builder enables you to add as much complexity as needed, in part thanks to the Method Editor for Java® methods. In a follow-up blog post, we’ll create an app to illustrate the dynamics of the more complicated HH model. Stay tuned!
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
In COMSOL Multiphysics, we can evaluate spatial integrals by using either integration component coupling operators or the integration tools under Derived Values in the Results section. While these integrals are always evaluated over a fixed region, we will sometimes want to vary the limits of integration and obtain results with respect to the new limits.
In a 1D problem, for example, the integration operators will normally calculate something like

∫_a^b f(x) dx

where a and b are fixed end points of a domain.
What we want to do instead, though, is to compute

∫_a^s f(x) dx

and obtain the result as a function of the variable upper limit of integration s.
Since the integration operators work over a fixed domain, let’s think about how to use them to obtain integrals over varying limits. The trick is to multiply the integrand by a function that is equal to one within the limits of integration and zero outside of them. That is, we define a kernel function

k(x, s) = 1 for x ≤ s and k(x, s) = 0 for x > s

and compute

∫_a^b k(x, s) f(x) dx = ∫_a^s f(x) dx
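A numerical sketch of this kernel trick: multiplying the integrand by the indicator (x ≤ s) and integrating over the full fixed domain reproduces the cumulative integral up to s. With f(x) = x on [0, 1], the exact integral up to s = 0.5 is 0.125:

```python
# Fixed-domain integral of f(x) * k(x, s) with the indicator kernel k = (x <= s)
def int_up_to(f, s, a=0.0, b=1.0, n=10000):
    dx = (b - a) / n
    # Midpoint rule over the whole fixed domain [a, b]; the kernel
    # zeroes out every contribution beyond the moving limit s.
    return sum(f(a + (i + 0.5) * dx) * dx
               for i in range(n) if a + (i + 0.5) * dx <= s)

value = int_up_to(lambda x: x, s=0.5)
print(f"integral of x up to 0.5: {value:.4f}")  # exact: 0.125
```

Sweeping s over the domain gives the cumulative integral as a function of s, which is what the intUpTox variable below produces inside COMSOL Multiphysics.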
As indicated in our previous blog post about integration methods in time and space, we can build this kernel function by using a logical expression or a step function.
We also need to know how the auxiliary variable s is specified in COMSOL Multiphysics. This is where the dest operator comes into play. The dest operator forces its argument to be evaluated on the destination point rather than on the source points. In our case, if we define the left-hand side of the above equation as a variable in the COMSOL software, we type dest(x) instead of s on the right-hand side.
Let’s demonstrate this with an example. In this case, a model simulates a steady-state exothermic reaction in a parallel plate reactor. There is a heating cylinder near the center and an inlet on the left, at x = 0. One of the compounds has a concentration cA.
What we want to do is to calculate the total moles per unit thickness between the inlet and a cross section at a distance of s from the inlet. We then plot the result with respect to the distance from the inlet.
First, we define an integration coupling operator for the whole domain, keeping the default integration operator name as intop1. If we evaluate intop1(cA), we get the integral for the entire domain. To vary the limit of integration horizontally, we build a kernel using a step function, which we’ll call step1. We then define a new variable, intUpTox.
Combining the integration coupling operator, dest operator, and new variable to evaluate an integral with moving limits.
Let’s see how the variable described in the image above works. As a variable, it is a distributed quantity with a value equal to what the integration operator returns. During the integration, x is evaluated at every point in the domain of integration, while dest(x) is evaluated only at the point where intUpTox is computed. To visualize the result, we plot intUpTox along a horizontal line that spans from the inlet all the way to the outlet.
Integrating concentration cA over a horizontally varying limit of integration.
If we instead plot intUpTox/intop1(cA)*100, we get a graph of the percentage of the total mass to the left of a given point with respect to the x-coordinate.
In the above integral, the limit of integration is given explicitly in terms of the x-coordinate. Sometimes, though, the limit may only be given by an implicit criterion, and it may not be straightforward to invert such a criterion to obtain explicit limits. For example, say that we want to know the percentage of the total moles within a certain radial distance from the center of the heating cylinder. Given a distance s from the center (x_{pos}, y_{pos}) of the cylinder, we want a kernel function equal to one inside the radial distance and zero outside of it. To do so, we can use the logical expression r ≤ s, where r = sqrt((x - x_{pos})^2 + (y - y_{pos})^2) is the radial distance from the center.
But how do we specify s? We again use the dest operator, s = dest(r), and the kernel becomes the logical expression (r <= dest(r)), with r written out as the distance expression above.
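The same indicator-kernel idea works for an implicit radial criterion. As a numerical sketch on a uniform unit square (rather than the reactor geometry), the fraction of a uniform density within radius s of the center approaches the analytic circle area π s²:

```python
import math

def fraction_within(s, cx=0.5, cy=0.5, n=400):
    """Fraction of a uniform density on the unit square lying within
    radius s of (cx, cy), computed via the radial indicator kernel r <= s."""
    h = 1.0 / n
    inside = total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            r = math.hypot(x - cx, y - cy)
            total += h * h
            if r <= s:                   # kernel: 1 inside the radius, 0 outside
                inside += h * h
    return inside / total

frac = fraction_within(0.3)
print(f"fraction within r = 0.3: {frac:.4f}")  # analytic: pi * 0.3^2
```

In the model, s is supplied by dest(r) along the cut line, so plotting the expression along that line sweeps the radius just as the loop above sweeps the grid.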
We implement this method by defining a Cut Line data set to obtain the horizontal line through the hole’s center and placing a graph of our integration expression over it. It is not necessary for the cut line to be horizontal; it just needs to traverse the full domain that the integration operator defines. Furthermore, s should vary monotonically over the cut line.
New data set made with a cut line passing through the center of the hole.
In the image below, we added an inset by zooming in on the bottom-left area of the graph. It shows that there is no result on the plot for a distance of less than 2 mm from the center of the heating hole, because that region is not in our computational domain. Since the hole has a radius of 2 mm, the ordinate starts at 0 at an abscissa of 2 mm.
Percentage of mass in a domain, which is within a radial distance from the fixed point. The radial distance is varied by using the dest operator in an implicit expression.
In the previous sections, we evaluated integrals where the integrands were given. But what do we do if we have the integral and want to solve for the integrand? An example of such a problem is the Fredholm equation of the first kind

g(s) = ∫_a^b K(s, t) u(t) dt

where we want to solve for the function u, given the function g and the kernel K. These types of integral equations often arise in inverse problems.
In integro-differential equations, both integrals and derivatives of the unknown function are involved, and we want to solve for u given all of the other functions.
In our Application Gallery, we have the following integro-partial differential equation:
where we solve for temperature T(x) and are given all of the other functions and parameters.
We can solve the above problem using the Heat Transfer in Solids interface. In this interface, we add the right-hand side of the above equation as a temperature-dependent heat source. The first source term is straightforward, but we need to add the integral in the second term using an integration coupling operator and the dest operator. With the integration operator named intop1, we can evaluate

∫ k(s, x) T(x)^4 dx

where s is the destination coordinate, with intop1(k(dest(x),x)*T^4).
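Numerically, this coupling is just a quadrature over the source points for each destination point. A sketch with a hypothetical kernel k(x, y) = exp(-|x - y|) and T ≡ 1 (both chosen for illustration only), for which the integral at x = 0.5 is 2(1 - e^{-1/2}):

```python
import math

def integral_term(T, k, x_dest, a=0.0, b=1.0, n=2000):
    """Evaluate the integral of k(x_dest, y) * T(y)^4 over [a, b] by the
    midpoint rule; x_dest plays the role of dest(x) and y runs over the
    source (integration) points."""
    dy = (b - a) / n
    return sum(k(x_dest, a + (i + 0.5) * dy) * T(a + (i + 0.5) * dy) ** 4 * dy
               for i in range(n))

k = lambda x, y: math.exp(-abs(x - y))   # hypothetical kernel for illustration
val = integral_term(lambda y: 1.0, k, x_dest=0.5)
print(f"integral term at x = 0.5: {val:.4f}")  # analytic: 2*(1 - exp(-0.5))
```

Inside a solver, this term would be reevaluated with the current iterate of T at every destination point, which is exactly what the intop1 expression does.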
For more details on the implementation and physical background of this problem, you can download the integro-partial differential equation tutorial model here. Please note that some integral equations tend to be singular and we need to use regularization to obtain solutions.
In today’s blog post, we’ve learned how to integrate over varying spatial limits. This is necessary for evaluating integrals in postprocessing or formulating integral and integro-differential equations. For more information, you can browse related content on the COMSOL Blog:
For a complete list of integration and other operators, please refer to the COMSOL Reference Manual.
If you have questions about the technique discussed here or with your COMSOL Multiphysics model, feel free to contact us.
Today’s example involves solving an axisymmetric heat conduction problem on a cylinder. This model is taken from the NAFEMS benchmark collection and solved in COMSOL Multiphysics using the Heat Transfer in Solids interface. We will take the dimensions and material properties from that model and reproduce the result using the General Form PDE interface. Essentially, we are benchmarking our manual PDE approach against the Heat Transfer in Solids interface, where the axial symmetry is automatically taken care of.
The equation for a stationary temperature distribution T on a rigid solid is

ρ C_{p} u · ∇T = ∇ · (κ ∇T) + Q
If the translational velocity u is uniformly zero, the heat capacity C_{p} has no effect and the only material property we have to specify is the thermal conductivity κ. Using the parameter kappa for thermal conductivity, we obtain the settings shown below with the Heat Transfer in Solids interface. The solution for a certain set of temperature and flux boundary conditions is also depicted.
An axisymmetric heat conduction problem solved with the Heat Transfer in Solids interface. This benchmark model is included in our Application Gallery.
Let’s now solve the same problem on the same finite element mesh using the General Form PDE interface. This interface is one of the several mathematics interfaces in COMSOL Multiphysics that facilitate solving custom equations. When the equation you want to solve is not built into one of the physics interfaces, you can use the mathematics interfaces to solve algebraic equations, ordinary differential equations, and partial differential equations. The new equations you add can be linear or nonlinear, and they can be solved either on their own or coupled with the prebuilt physics interfaces. The equations for heat transfer are a good starting point since those are already available in the built-in interfaces and we can easily compare the results with those where we have used our own PDE.
The General Form PDE interface has the template

$$e_a \frac{\partial^2 u}{\partial t^2} + d_a \frac{\partial u}{\partial t} + \nabla \cdot \Gamma = f$$
and we have to provide the mass coefficient e_{a}, the damping coefficient d_{a}, the flux Γ, and the source f to specify our mathematical model.
If we compare this template with the equation from the Heat Transfer in Solids interface, with the dependent variable u standing for temperature, our flux specification should be $\Gamma = -\kappa \nabla u$. To achieve this, we’ll specify the r- and z-components of the flux in the PDE interface as -kappa*ur and -kappa*uz, respectively. The mass and damping coefficients are not included in a stationary analysis.
As the image below indicates, the solution using the above settings differs from the solution we obtained using the Heat Transfer in Solids interface. What is the reason for this?
An axisymmetric heat conduction problem solved with the General Form PDE interface. The effect of a curvilinear coordinate system is not accounted for here.
The short explanation for the discrepancy is that the Heat Transfer in Solids interface understands $\nabla \cdot \Gamma$ as the divergence of the thermal flux, whereas the PDE interfaces understand $\nabla \cdot \Gamma$ as

$$\frac{\partial \Gamma_r}{\partial r} + \frac{\partial \Gamma_z}{\partial z}$$
This expression is not the divergence of Γ in the cylindrical coordinate system. While we’ll demonstrate how to fix this in a later section, we’ll first provide a more detailed explanation of what causes the discrepancy.
The equations that the various physics interfaces in COMSOL Multiphysics solve are mathematical abstractions of the laws of physics. Often stated as conservation laws or accounting principles, these laws describe how a certain quantity changes on account of activities on a domain and across the domain’s boundary. While conservation laws hold for all materials, the extent of a given material’s response to domain forces and boundary fluxes differs from one material to another. The material responses are specified by so-called constitutive equations or equations of state. We have to make sure that constitutive equations do not violate the second law of thermodynamics. Together, conservation laws and valid constitutive equations provide enough information to derive a well-posed mathematical model.
For instance, thermal conduction in a rigid solid object is governed by the law of conservation of thermal energy. The rate of change of thermal energy equals the rate at which heat is supplied by sources in the domain plus heat flux through the boundary. From empirical observations, the heat flux in solids is proportional to the temperature gradient and is directed from hotter areas to colder areas. This is the constitutive equation. From here, we can use multivariable calculus to write the heat transfer equation for an isotropic material, for example, as

$$\rho C_p \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right) = \nabla \cdot (\kappa \nabla T) + Q$$
where T is temperature, the primary variable whose evolution we want to track; ρ, C_p, and κ are material parameters describing density, heat capacity, and thermal conductivity, respectively; u is the translational velocity; and Q is the heat source per unit volume. Boundary conditions are also used to describe what happens on the boundary.
After deriving mathematical models like the above equation, the next step is to solve them for the primary dependent variable and other quantities of interest. In heat conduction, for example, we want to obtain the temperature in time and space. To decide how we will identify a point in space, we select a coordinate system. Making a wise choice for a coordinate system can facilitate our analysis, whereas a bad choice can make our work unnecessarily complicated. No matter which coordinate system we choose, it is important to make sure that the physical meaning of the equation stays the same.
The heat conduction equation, for instance, contains the gradient of temperature. In the Cartesian coordinate system, we have

$$\nabla T = \frac{\partial T}{\partial x} \mathbf{e}_x + \frac{\partial T}{\partial y} \mathbf{e}_y + \frac{\partial T}{\partial z} \mathbf{e}_z$$

whereas in the cylindrical coordinate system, we have

$$\nabla T = \frac{\partial T}{\partial r} \mathbf{e}_r + \frac{1}{r} \frac{\partial T}{\partial \varphi} \mathbf{e}_\varphi + \frac{\partial T}{\partial z} \mathbf{e}_z$$
In the first case, partial derivatives of a scalar, with respect to independent variables, provide components of the gradient. This is not the case in a curvilinear coordinate system like the cylindrical coordinate system. As referenced earlier, the physical meaning of the gradient of temperature, as the vector that points in the direction of the greatest increase of temperature with a magnitude equal to the rate of increase, should stay the same. To ensure such invariance, we have to use covariant derivatives instead of regular partial derivatives. In the Cartesian coordinate system, covariant derivatives and partial derivatives overlap; in curvilinear systems, they do not.
The other differential operator we have is the divergence operator $\nabla \cdot$. In the heat transfer equation, this operator acts on the flux $\Gamma = -\kappa \nabla T$. When using the Cartesian coordinate system, taking the divergence of the flux involves differentiating only the components $\Gamma_x$, $\Gamma_y$, and $\Gamma_z$ of the flux. The basis vectors e_{x}, e_{y}, and e_{z} remain the same from one point to another. For the cylindrical and other curvilinear coordinate systems, the basis changes and taking the divergence thus involves taking derivatives of the basis vectors as well. Covariant derivatives take this into account. Simple sums of partial derivatives, when used in curvilinear coordinate systems, lose the physical meaning of the divergence reflecting how much a vector spreads out from a given point.
Similarly, the dot product should always coincide with the product of the magnitudes of the two vectors and the cosine of the angle between them. In the Cartesian coordinate system, this is simply the sum of the products of the corresponding components of the two vectors. In curvilinear coordinate systems, the metric tensor of the coordinate system is added into the mix.
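To make the role of the metric concrete, here is a short Python sketch (independent of COMSOL Multiphysics) that checks the dot product computed with the cylindrical metric tensor, diag(1, r², 1) for coordinate-basis components, against the ordinary Cartesian dot product. The sample point and component values are arbitrary illustrations:

```python
import math

def cyl_basis(r, phi):
    """Coordinate basis vectors d/dr, d/dphi, d/dz at (r, phi), in Cartesian components."""
    return ((math.cos(phi), math.sin(phi), 0.0),
            (-r * math.sin(phi), r * math.cos(phi), 0.0),
            (0.0, 0.0, 1.0))

def to_cartesian(a, r, phi):
    """Convert coordinate-basis components (a_r, a_phi, a_z) to Cartesian components."""
    basis = cyl_basis(r, phi)
    return tuple(sum(a[i] * basis[i][k] for i in range(3)) for k in range(3))

def dot_metric(a, b, r):
    """Dot product using the cylindrical metric tensor g = diag(1, r**2, 1)."""
    return a[0] * b[0] + r**2 * a[1] * b[1] + a[2] * b[2]

r, phi = 2.0, math.pi / 6          # an arbitrary point
a = (1.0, 0.3, -0.5)               # arbitrary coordinate-basis components
b = (-0.7, 0.1, 2.0)

cart = sum(x * y for x, y in zip(to_cartesian(a, r, phi), to_cartesian(b, r, phi)))
print(abs(dot_metric(a, b, r) - cart) < 1e-12)  # True
```

The metric absorbs the fact that the azimuthal basis vector grows with r, so the coordinate-basis components alone no longer tell the whole story.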
Because of the mathematical complexity that arises in curvilinear coordinate systems, you might wonder why we would want to use anything other than the Cartesian coordinate system. But, as we’ll highlight here, there are some applications where curvilinear coordinate systems are particularly useful.
Consider a three-dimensional problem where the geometry, material properties, boundary conditions, and heat sources are symmetric about an axis. The solution is rotationally symmetric about that axis. If we use a coordinate system with the z direction along the symmetry axis, all partial derivatives with respect to φ vanish. What that leaves us with is a two-dimensional problem in the rz-plane. This results in an equation that is easier to solve than the one in the Cartesian coordinate system, where all three spatial partial derivatives remain in the equation. In the finite element modeling of such problems, using an axisymmetric formulation facilitates the use of 2D meshes rather than 3D meshes, which leads to significant savings for both memory and time.
Curvilinear coordinate systems are also useful when material properties, such as the thermal conductivity κ, are not isotropic. If the anisotropy occurs in certain directions, we can define coordinate systems that align with those preferential directions and simplify the material property input. Note that COMSOL Multiphysics features a Curvilinear Coordinates interface that can be used to generate nonstandard coordinate systems. You can learn more about how to use the Curvilinear Coordinates interface and how to solve anisotropic problems on curvilinear coordinate systems on the COMSOL Blog.
For our particular example, we’ll focus on using the standard cylindrical coordinate system.
In COMSOL Multiphysics, there are several physics-based interfaces that solve equations arising from one or more conservation laws. For instance, in the heat transfer in rigid solids example referenced above, we follow the conservation of thermal energy. In isothermal stress analysis problems, we follow the physical laws for conservation of mass, linear momentum, and angular momentum. When you use one of these physics interfaces, the software maintains the physical meanings of differential operators by using the corresponding expression for the Cartesian or curvilinear coordinates. That is, covariant derivatives are used instead of partial derivatives and, as a result, the coordinate system invariance is maintained.
But when you use the Coefficient Form PDE or General Form PDE interfaces, the software uses partial derivatives.
In other words, the physics interfaces understand $\nabla u$, $\nabla \cdot \Gamma$, and $\nabla \times \Gamma$ to be the gradient, divergence, and curl, respectively, of a physical scalar u or higher-order tensor Γ. In the PDE interfaces, on the other hand, these tensorial meanings no longer apply and the operators are replaced by sums of partial derivatives with respect to the independent variables. After all, the independent variables may not even represent physical coordinates.
Consider $\nabla \cdot \Gamma$. In the physics interfaces, this means the divergence of Γ. But in the PDE interfaces, this means $\frac{\partial \Gamma_x}{\partial x} + \frac{\partial \Gamma_y}{\partial y} + \frac{\partial \Gamma_z}{\partial z}$ in a 3D component and $\frac{\partial \Gamma_r}{\partial r} + \frac{\partial \Gamma_z}{\partial z}$ in a 2D axisymmetric component. The first coincides with the divergence in Cartesian coordinates, whereas the latter is not the divergence of Γ.
In a cylindrical coordinate system, the divergence is given by

$$\nabla \cdot \Gamma = \frac{1}{r} \frac{\partial (r \Gamma_r)}{\partial r} + \frac{1}{r} \frac{\partial \Gamma_\varphi}{\partial \varphi} + \frac{\partial \Gamma_z}{\partial z}$$

In an axisymmetric problem, the second term in this sum vanishes, leaving

$$\nabla \cdot \Gamma = \frac{\partial \Gamma_r}{\partial r} + \frac{\partial \Gamma_z}{\partial z} + \frac{\Gamma_r}{r} \qquad (1)$$
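You can verify identity (1) numerically. The Python sketch below (with an arbitrarily chosen flux component Γ_r; the z-part is identical in both forms and is omitted) compares the radial part of the true cylindrical divergence, (1/r)∂(rΓ_r)/∂r, against the plain partial derivative plus the compensation term Γ_r/r:

```python
import math

# An arbitrary smooth axisymmetric flux component Gamma_r(r, z), for illustration
def Gamma_r(r, z):
    return r**2 * math.exp(-z) + math.sin(r)

def d_dr(f, r, z, h=1e-6):
    """Central-difference derivative with respect to r."""
    return (f(r + h, z) - f(r - h, z)) / (2 * h)

r, z = 1.3, 0.4  # an arbitrary sample point

# Radial part of the true cylindrical divergence: (1/r) * d(r*Gamma_r)/dr
lhs = d_dr(lambda rr, zz: rr * Gamma_r(rr, zz), r, z) / r
# Plain partial derivative plus the compensation term Gamma_r/r from (1)
rhs = d_dr(Gamma_r, r, z) + Gamma_r(r, z) / r

print(abs(lhs - rhs) < 1e-6)  # True
```

This is just the product rule in disguise, but it is exactly the term the PDE interfaces leave out.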
The expression $\frac{\partial \Gamma_r}{\partial r} + \frac{\partial \Gamma_z}{\partial z}$ in the PDE interfaces does not contain the last term, as the $\nabla \cdot$ operator there is interpreted simply as a sum of partial derivatives. For the divergence of a physical quantity in cylindrical coordinates, we have to compensate for the last term in (1). We will use an example to highlight one approach for doing so using the source term.
Let’s now go back to our initial problem. What we want to solve is the stationary problem

$$\nabla \cdot (-\kappa \nabla u) = Q$$

where $\nabla \cdot$ is the divergence operator. For an axisymmetric problem, we have

$$\frac{\partial}{\partial r} \left( -\kappa \frac{\partial u}{\partial r} \right) + \frac{\partial}{\partial z} \left( -\kappa \frac{\partial u}{\partial z} \right) - \frac{\kappa}{r} \frac{\partial u}{\partial r} = Q$$

This is equivalent to

$$\frac{\partial}{\partial r} \left( -\kappa \frac{\partial u}{\partial r} \right) + \frac{\partial}{\partial z} \left( -\kappa \frac{\partial u}{\partial z} \right) = Q + \frac{\kappa}{r} \frac{\partial u}{\partial r}$$
The left-hand side of this equation is what the General Form PDE interface in an axisymmetric component understands to be . The right-hand side of the equation is added as an extra source term, as shown in the screenshot below. The solution now matches the solution that we obtained using the Heat Transfer in Solids interface.
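To see what the compensation term does to a solution, here is a small finite difference sketch in Python (not a COMSOL model; the geometry, temperatures, and conductivity are hypothetical). For purely radial conduction between two fixed temperatures, the compensated equation reproduces the logarithmic axisymmetric profile, while the plain sum-of-partials form yields a linear profile:

```python
import math

# Radial heat conduction between r1 and r2 with fixed temperatures T1, T2
# (hypothetical values). The exact axisymmetric solution is T = A*ln(r) + B.
r1, r2, T1, T2, kappa, n = 1.0, 2.0, 100.0, 20.0, 1.0, 200
h = (r2 - r1) / n
rs = [r1 + i * h for i in range(n + 1)]

def solve(compensate):
    """Finite differences for -kappa*T'' = 0, optionally adding the
    axisymmetric compensation term -(kappa/r)*T'."""
    a = [0.0] * (n + 1)  # sub-diagonal
    b = [0.0] * (n + 1)  # diagonal
    c = [0.0] * (n + 1)  # super-diagonal
    d = [0.0] * (n + 1)  # right-hand side
    b[0], d[0], b[n], d[n] = 1.0, T1, 1.0, T2
    for i in range(1, n):
        s = (kappa / rs[i]) / (2 * h) if compensate else 0.0
        a[i], b[i], c[i] = -kappa / h**2 + s, 2 * kappa / h**2, -kappa / h**2 - s
    for i in range(1, n + 1):  # Thomas algorithm: forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    T = [0.0] * (n + 1)
    T[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):  # back substitution
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T

exact = lambda r: T1 + (T2 - T1) * math.log(r / r1) / math.log(r2 / r1)
mid = n // 2
print(abs(solve(True)[mid] - exact(rs[mid])) < 0.05)   # True: matches the ln profile
print(abs(solve(False)[mid] - exact(rs[mid])) > 1.0)   # True: linear profile is off
```

The uncompensated solve corresponds to what the PDE interface would compute from the flux alone, which is exactly the mismatch shown in the earlier screenshot.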
An axisymmetric heat conduction problem solved with the General Form PDE interface. The effect of a curvilinear coordinate system is accounted for here.
Of course, the term added to the source is not a physical heat source. It simply compensates for the difference between the covariant differentiation that we need for the divergence and the partial differentiation that the PDE interface performs. A good practice is to add this compensation term in the PDE settings window and add physical heat sources to the Model Builder by right-clicking the PDE interface and selecting Source.
In this example, only the divergence operator was involved. For other differential operators, such as the curl, you should make similar compensations when solving a physical problem in a curvilinear coordinate system.
These adjustments are for curvilinear coordinate systems embedded in a Euclidean space. If the underlying space you are working with is not Euclidean, there are no Cartesian coordinates at all. In such cases, you need to be vigilant, even when using 2D or 3D components without axisymmetry. COMSOL Multiphysics features built-in physics interfaces for these types of problems. They include the Shell and Membrane interfaces in structural mechanics, the Thin-Film Flow interfaces in fluid mechanics, the Electric Currents, Shell interface in electromagnetics, and more. For a more extensive list, take a look at our product specification chart.
Another approach to equation-based modeling is to use a physics interface as a PDE template for a problem with a similar mathematical structure. For example, to solve a physical problem that has a convection-diffusion-reaction nature, we can use either the Heat Transfer or the Transport of Diluted Species interfaces. These interfaces keep the tensorial meanings of differential operators in an axisymmetric component. All you need to do is to use the boundary and domain conditions that mathematically match the items you want to add.
The only downside to this strategy is that the units for the variables may not match the units you have. In such cases, you can use nondimensionalization. Once you obtain the dimensionless equations, you can make your COMSOL model dimensionless by going to the root of the Model Builder and setting Unit System to None.
With nondimensionalization, you can use a physics interface to solve a different type of problem with a similar mathematical structure but different dimensions.
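As a minimal illustration of the idea (with hypothetical material values), nondimensionalizing steady 1D conduction collapses all of the physical inputs into a single dimensionless group:

```python
# Nondimensionalizing steady 1D conduction: -kappa*T'' = Q on 0 < x < L,
# with T(0) = T0 and T(L) = T0 + dT. Substituting x = L*xhat and
# T = T0 + dT*theta gives -theta'' = q, with the single dimensionless
# group q = Q*L**2/(kappa*dT). All values below are hypothetical.
kappa = 400.0          # thermal conductivity, W/(m*K)
L = 0.01               # slab thickness, m
Q = 1.0e6              # heat source, W/m^3
T0, dT = 300.0, 50.0   # boundary temperatures T0 and T0 + dT, K

q = Q * L**2 / (kappa * dT)
print(q)  # 0.005

# Dimensionless exact solution of -theta'' = q with theta(0) = 0, theta(1) = 1
def theta(xhat):
    return xhat + 0.5 * q * xhat * (1.0 - xhat)

# Recover the physical temperature at any x
def T(x):
    return T0 + dT * theta(x / L)

print(round(T(0.005), 3))  # 325.031
```

Once the dimensionless problem is solved, any physics interface with the same mathematical structure can host it, and the physical solution is recovered by undoing the scaling.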
With the Coefficient Form PDE and General Form PDE interfaces in COMSOL Multiphysics, you can implement partial differential equations to solve novel problems not yet built into the software. These partial differential equations may or may not be derived from a physical problem. Therefore, the differential operators in the PDE interfaces are by design kept simple and not converted to tensorial operators automatically. When solving a physical problem using curvilinear coordinate systems — say when you want to exploit axisymmetry — make sure that the items you enter in the software’s templates accurately represent your physical problem. In the physics-based interfaces, COMSOL Multiphysics takes care of this. But in the PDE interfaces, the software cannot attach any meaning to your equations, leaving the responsibility to you.
One way to address this is to add extra source terms to balance the difference between partial derivatives and covariant derivatives. Another way is to use an existing physics interface that has the same mathematical structure as your equation. If you have any questions about these strategies or other questions pertaining to this topic, please do not hesitate to contact us.
For more details on differential operators in curvilinear coordinate systems and partial differential equations on surfaces, you can turn to various books on tensor calculus or differential geometry. Here are some of my personal favorites, in no particular order:
The demands on a mesh generator can be quite extensive. The generated mesh, for instance, must conform to the geometry and contain elements of optimal sizes and shapes. Elements whose edges and angles are close to equal in size provide a greater chance of reaching solution convergence as well as more accurate results. Further, the generator may have to grade the element size over short distances to create very small elements in tight spaces and very large elements in more open regions without causing problems in the solution algorithms. Finally, the generator should ideally act automatically and work for all types of geometries.
Every meshing operation in COMSOL Multiphysics creates a mesh that conforms to the respective geometry. But the tetrahedral mesh generator, which operates under the Free Tetrahedral node in the Model Builder, is the only mesh generator in 3D that is fully automatic and can be applied to every geometry. And since it creates an unstructured mesh — that is, a mesh with irregular connectivity — it is well suited for complex-shaped geometries requiring a varying element size. Since tetrahedral meshes in COMSOL Multiphysics are used for a variety of physics, including multiphysics, the mesh generator needs to be very flexible. It should be possible to generate very fine meshes, very coarse meshes, meshes with fine resolution on curved boundaries, meshes with anisotropic elements in narrow regions, etc.
A tetrahedral mesh of a gas turbine.
Thanks to recent updates in COMSOL Multiphysics, you can now achieve improved quality in your tetrahedral meshing and thus advance the reliability of your simulation results. To demonstrate this, let’s walk through the steps of generating a tetrahedral mesh.
Most tetrahedral mesh generators fall into one of the following three classes: advancing front methods, octree-based methods, and Delaunay-based methods.
The tetrahedral mesh generator in COMSOL Multiphysics is a Delaunay-based mesh generator, and its meshing process can be divided into the five main steps described below. The third and fifth steps of the process have been significantly improved in COMSOL Multiphysics version 5.2a. To illustrate the different steps, we’ll use a very coarse mesh of the piston geometry, which is available in the meshing tutorials of the Application Library within COMSOL Multiphysics.
The geometry of the piston_mesh application.
If you monitor the Progress window when building a tetrahedral mesh, you can see that the first 35% of the progress is devoted to the generation of the boundary mesh. But creating a boundary mesh that is well suited for the subsequent steps of the tetrahedral mesh generation process is not a straightforward task, and several issues must be addressed along the way.
The boundary mesh for the piston_mesh application using the element size Extremely Coarse.
The next step is to create the Delaunay tetrahedralization of the boundary mesh points, which fills the convex hull of these points. A Delaunay tetrahedralization has some nice mathematical properties; for example, no point of the point set lies inside the circumsphere of any tetrahedron. In 2D, the Delaunay triangulation of a set of points maximizes the minimum angle over all triangles in the triangulation, although this property does not carry over to 3D. However, there is no guarantee that the edge and triangle elements of the boundary mesh exist as edges and triangles in the Delaunay tetrahedralization of the boundary mesh points, not even for a convex boundary mesh. We will deal with this in the following step.
The Delaunay tetrahedralization of the boundary mesh points forming the convex hull of the points.
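The empty-circumsphere property is easiest to see in 2D. Below is a short Python sketch using the standard incircle determinant predicate, which is positive exactly when a query point lies inside the circumcircle of a counterclockwise triangle; the triangle and test points are arbitrary:

```python
# Incircle determinant predicate: positive when d lies inside the
# circumcircle of the counterclockwise-oriented triangle (a, b, c)
def incircle(a, b, c, d):
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0])**2 + (p[1] - d[1])**2) for p in (a, b, c)]
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = rows
    return (ax * (by * cz - bz * cy)
            - ay * (bx * cz - bz * cx)
            + az * (bx * cy - by * cx))

a, b, c = (0.0, 0.0), (2.0, 0.0), (1.0, 1.5)  # a counterclockwise triangle

print(incircle(a, b, c, (1.0, 0.5)) > 0)  # True: inside the circumcircle
print(incircle(a, b, c, (5.0, 5.0)) > 0)  # False: outside it
```

A Delaunay triangulation is precisely one in which this predicate is nonpositive for every triangle and every other mesh point; the 3D insphere test used for tetrahedra is the analogous 4x4 determinant.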
So far, we have generated the final boundary mesh and a Delaunay tetrahedralization of the boundary mesh points. Here, we will enforce the edges and triangles of the boundary mesh into the tetrahedralization. This is the most demanding part of the entire meshing process.
Last year, we released a completely new algorithm for this step, and the algorithm was significantly improved in COMSOL Multiphysics version 5.2a. In earlier versions, when meshing complex geometries, you may have received error messages like “failed to respect boundary element edge on geometry face” or “internal error in boundary respecting”. Such failures originated from this part of the meshing process. With the new improvements, once all of the edges of the boundary mesh have been enforced into the tetrahedralization and all of the tetrahedra intersecting the triangles of the boundary mesh have been addressed, it is possible to safely remove all tetrahedra on the outside of the boundary.
The upper left part of the figure shows the boundary mesh in gray and the Delaunay tetrahedralization of the boundary mesh points in cyan. The zoomed view in the lower right part of the figure shows a few of the hundreds of tetrahedra that intersect edges and triangles of the boundary mesh. In this step, the tetrahedralization is modified such that no tetrahedron intersects the boundary mesh. Some additional points (so-called Steiner points) might be inserted to achieve a boundary conforming tetrahedralization.
We now have a tetrahedralization of the entire geometry, with the boundary mesh from the first step serving as its outer boundary, but it does not yet contain any interior points (except the Steiner points that may have been added previously). Our next task is to refine the tetrahedralization by inserting points in the interior until the specified element size is achieved everywhere. The points can easily be inserted using a regular Delaunay refinement scheme. Some special treatment is needed, though, since at this stage the tetrahedralization does not fulfill the Delaunay property everywhere.
The upper left part of the figure shows a cut through of the tetrahedralization after the third step, with only a few interior points inserted during that part of the meshing process. The lower right part of the figure shows the same cut through the tetrahedralization after the fourth step, where the tetrahedralization fulfills the element size specification also in the interior of the domain.
At this point, we are almost done. But before the mesh is returned to the user, we first need to improve the quality of the tetrahedra. Each tetrahedron can be assigned a quality value in the range of 0 to 1. A regular tetrahedron has a quality of 1 and a totally flat tetrahedron has a quality of 0. COMSOL Multiphysics version 5.2a delivers a new algorithm that further improves the quality of the meshes. The algorithm also features an option for reducing the risk of obtaining inverted curved elements as well as an option for avoiding the creation of elements that are too large.
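As an illustration of such a quality value, the Python sketch below implements one common volume-to-edge-length quality measure that equals 1 for a regular tetrahedron and approaches 0 as the tetrahedron flattens. This is a generic measure for illustration; COMSOL Multiphysics’ exact definition may differ:

```python
import math

def tet_quality(p0, p1, p2, p3):
    """Volume-to-edge-length quality: q = 72*sqrt(3)*V / (sum of squared
    edge lengths)**1.5. Equals 1 for a regular tetrahedron, 0 when flat."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    u, v, w = sub(p1, p0), sub(p2, p0), sub(p3, p0)
    vol = abs(dot(u, cross(v, w))) / 6.0
    pts = (p0, p1, p2, p3)
    ssq = sum(dot(sub(pts[i], pts[j]), sub(pts[i], pts[j]))
              for i in range(4) for j in range(i + 1, 4))
    return 72.0 * math.sqrt(3.0) * vol / ssq**1.5

# Regular tetrahedron (alternating cube corners): quality 1
regular = ((0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1))
print(round(tet_quality(*regular), 6))  # 1.0

# Nearly flat tetrahedron: quality close to 0
flat = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 1e-3))
print(tet_quality(*flat) < 0.01)  # True
```

Because the overall mesh quality is dictated by its worst elements, optimization algorithms target exactly these low-quality tetrahedra.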
In this step, we will increase the quality of the worst elements such that the quality of the mesh, which is largely dictated by the quality of its worst elements, is sufficiently good for a typical simulation. The new algorithm has a broader palette of operations for improving the quality of the mesh including point relocation (often referred to as smoothing) and topological modification operations such as edge and face swapping, edge collapsing, and vertex insertions. By applying these operations repeatedly, an infinite number of unique tetrahedralizations can be reached for a domain defined by its boundary mesh, which means a mesh with better quality will always exist. However, for a given tetrahedralization where the minimum element quality cannot be improved by smoothing, it’s not obvious which topological operation to perform to improve the quality. Sometimes even a series of operations must be applied — vertex insertion followed by edge flipping and smoothing — before you can reach an optimized mesh.
There are three optimization levels included in the algorithm: Basic, Medium, and High. These levels determine the amount of effort put into the optimization process. Say, for instance, you build your mesh using the Basic option (the default) and encounter problems with convergence when computing the solution or perhaps reduced accuracy in your results due to a poor quality mesh. In this case, you can rebuild your mesh with a higher optimization level to get a better chance at convergence with better results.
The quality improvement algorithm offers three levels of optimization (Basic, Medium, and High) that determine how much effort is put into the optimization process.
These mesh cut throughs show the elements with the lowest quality before optimization (upper left), with the optimization level set to Basic (middle), and with the optimization level set to High (lower right). The red tetrahedra have a quality value less than 0.1, while the yellow tetrahedra have a quality value between 0.1 and 0.25. The gray triangles define the mesh cut through’s boundary mesh.
The algorithm also offers two options for reducing the risk of obtaining inverted curved elements as well as reducing the size of the largest tetrahedra. Note that these selections come at the cost of a longer meshing time and slightly lower element quality.
If the geometry includes fillets or other curved regions with a relatively coarse mesh, and you solve with a geometry shape order higher than one, you can select the Avoid inverted curved elements check box. This will let the optimization algorithm try to reduce the number of mesh elements that become inverted when they are curved. If the computation is sensitive to mesh elements that are too large, you can select the Avoid too large elements check box in an effort to avoid generating tetrahedra that are larger than the specified element size.
Mesh generation is a concept that is rather easy to understand: It is all about partitioning a geometry into simply shaped pieces. There are, however, some difficulties to address when it comes to generating a tetrahedral mesh for simulation purposes. There will always be challenging geometries for which the generator fails or produces a mesh with lower-quality elements. But with recent improvements to the tetrahedral mesher in COMSOL Multiphysics, you can now better address such complex geometries and further advance your modeling processes.
When assigning materials to your model geometry, you may want to experiment with a few options and see how different materials affect your simulation results. In COMSOL Multiphysics, you can automate this process via the Material Sweep parametric study and Material Switch feature. As such, you do not need to add several materials one at a time and compute the corresponding solution each time. In addition to saving you time during model setup, this facilitates the comparison of results during postprocessing.
Screenshot from the material sweeps video, showcasing the ability to switch between different results based on the material.
The Material Switch node houses the materials that you want to sweep over and provides functionality to automatically switch materials while your model is solving.
In the five-minute tutorial video below, we outline the procedure for performing a material sweep in your model and then walk you through the steps for doing so. This includes adding a Material Switch node; specifying parts of the geometry that the material sweep will be applied to; selecting the materials to switch between; adding the Material Sweep parametric study; and finally postprocessing the sweep’s results. We also briefly discuss how you can customize the materials being swept over as well as how to easily toggle between different sets of results obtained from your material sweep.
As mentioned earlier, COMSOL Multiphysics features a large collection of built-in materials that are available regardless of which modules you hold a license for. Upon adding any of these materials to your model, you will notice that the material properties are provided with certain default values.
In some cases, material properties are constant. In other cases, they may vary in space or be dependent on a physics variable such as temperature. If you want to make a constant material property variable, or if the built-in variation is not what you want to use, you can define your own function. In COMSOL Multiphysics, there are three types of functions that you can use to define a material property: Interpolation, Analytic, and Piecewise functions.
Data table and plot for an Interpolation function.
Interpolation functions are used to define a material property by reading in data from a table or file that contains values of the function at discrete points. You can enter this data manually or import it from an external file, which is useful when your material properties are obtained from experiments. COMSOL Multiphysics automatically generates a function that fits the data you provide, and you can choose how the function interpolates between the measured values and extrapolates outside of your specified range of data.
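As a rough analogy for how an Interpolation function behaves, the Python sketch below implements piecewise-linear interpolation with two typical extrapolation choices, holding the end values constant or extending the end segments. The conductivity-versus-temperature data is hypothetical:

```python
from bisect import bisect_right

# Measured thermal conductivity vs. temperature (hypothetical sample data)
T_data = [273.15, 300.0, 400.0, 500.0]
k_data = [403.0, 401.0, 393.0, 386.0]

def interp(x, xs, ys, extrapolation="constant"):
    """Piecewise-linear interpolation: 'constant' holds the end values
    outside the data range; 'linear' extends the end segments."""
    if x <= xs[0]:
        if extrapolation == "constant":
            return ys[0]
        i = 0
    elif x >= xs[-1]:
        if extrapolation == "constant":
            return ys[-1]
        i = len(xs) - 2
    else:
        i = bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

print(interp(350.0, T_data, k_data))            # 397.0 (inside the data range)
print(interp(600.0, T_data, k_data))            # 386.0 (constant extrapolation)
print(interp(600.0, T_data, k_data, "linear"))  # 379.0 (linear extrapolation)
```

The choice of extrapolation can matter a great deal when the solver probes the function outside the measured range, which is why it is worth setting deliberately.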
Input fields and plot for an Analytic function.
Analytic functions are used to define a function using built-in mathematical functions or other user-defined functions. You can enter an expression, specify the input arguments, and define the value range for each of the arguments in your equation.
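The sketch below mimics the spirit of an Analytic function in plain Python: an expression with named arguments and a valid range for each argument. The expression, argument name, and limits are made up for illustration:

```python
import math

def make_analytic(expr, ranges):
    """Wrap an expression with named arguments and per-argument valid ranges."""
    def f(**kwargs):
        for name, (lo, hi) in ranges.items():
            if not lo <= kwargs[name] <= hi:
                raise ValueError(f"{name}={kwargs[name]} outside [{lo}, {hi}]")
        return expr(**kwargs)
    return f

# Hypothetical temperature-dependent conductivity, valid for 200 K <= T <= 800 K
k = make_analytic(lambda T: 400.0 * math.exp(-1e-4 * (T - 293.15)),
                  {"T": (200.0, 800.0)})

print(round(k(T=293.15), 3))  # 400.0
```

Restricting the argument range is a simple way to catch a solver evaluating the property far outside the region where the expression is trustworthy.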
Settings for a Piecewise function.
Piecewise functions are used to define a material property using different expressions over different intervals. The start and end point for each set of values, as well as the function applicable to that interval, can be entered manually or imported from an external file. The intervals that you define cannot overlap and there cannot be any holes between them. That way, the function is uniquely defined for every value of the independent variable within the overall range.
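The interval rules can be mirrored in a small Python sketch that rejects overlapping or disconnected intervals; the heat capacity expressions and values are hypothetical:

```python
def make_piecewise(intervals):
    """Intervals are (start, end, function) triples that must tile the
    overall range with no overlaps and no holes between them."""
    intervals = sorted(intervals, key=lambda t: t[0])
    for (s1, e1, _), (s2, e2, _) in zip(intervals, intervals[1:]):
        if s2 != e1:
            raise ValueError(f"gap or overlap between {e1} and {s2}")
    def f(x):
        for s, e, g in intervals:
            if s <= x <= e:
                return g(x)
        raise ValueError(f"{x} outside [{intervals[0][0]}, {intervals[-1][1]}]")
    return f

# Hypothetical heat capacity: linear below 400 K, constant above
cp = make_piecewise([
    (200.0, 400.0, lambda T: 350.0 + 0.5 * T),
    (400.0, 800.0, lambda T: 550.0),
])
print(cp(300.0))  # 500.0
print(cp(600.0))  # 550.0
```

Note that the two expressions agree at the shared endpoint 400 K, which keeps the property continuous across the interval boundary.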
In the following seven-minute tutorial video, we discuss how to create and define Interpolation, Analytic, and Piecewise functions for any material property in your model, the advantages of using each type, and best practices to keep in mind when creating them. We also go over the settings for each function type, demonstrate how the selection of options such as Extrapolation will change your data plot, and show how you can call out your function in the Material Contents table.
While creating a model in COMSOL Multiphysics, you will at some point need to identify the materials that your objects are made of. Normally, this requires completing a series of steps in which you open the Add Material or Material Browser windows; choose the material; select and add it to your component; and then go into the material node’s settings to select the parts of the geometry to which the material applies. You would then need to repeat this procedure for each unique material that you want to include in your simulation. In COMSOL Multiphysics, you can expedite the above process using global materials and material links.
Screenshot displaying use of the global materials and material links functionality.
When a material is added under the Global Materials node, it is available to use anywhere throughout the model. Further, global materials can be used for any geometric entity level, whether you assign them to domains, boundaries, edges, or points.
Material links are used locally under a component’s material node to refer to a global material. This is advantageous when you have a COMSOL Multiphysics file that contains multiple components that are made up of similar materials, as you only need to specify the material once under the Global Materials node and can then link to it under each individual component. It is also beneficial for models in which the same material is assigned to different geometric entity levels such as domains and boundaries. In this case, you would again only need to add the material once and could also add a separate Material Link node for each geometric entity type.
In the six-minute tutorial video below, we show you how to use the global materials and material links functionality. We begin by demonstrating how to add global materials to your model and discuss the differences between adding materials globally and locally. Then, we walk through the steps of how to add material links to your model components and assign them to the geometry. After watching this video, we encourage you to try out this functionality yourself and see firsthand the ease with which you can assign materials in a model that contains multiple components or when you want to use the same materials on multiple parts.
You can significantly expedite the process of assigning materials to your model geometry using the features and functionality discussed here. To complement these tools, we’ve created instructional videos to help you learn how to utilize them in your own simulations. Whether you have a model file that involves multiple components, need to define a complicated material property, or have to test different materials in your simulation, COMSOL Multiphysics features built-in tools that make this process simpler and more efficient for you.