On the shoreline, crashing waves and the continuous movement of the tides cause coastal erosion, a phenomenon that removes sediment from beaches and wears away land.
A rock formation affected by coastal erosion. Image by John Nuttall — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
Although coastal erosion has benefits, such as creating sand for beaches, it also damages seaside property and habitats. To help predict this damage, researchers can use shallow water equations to learn more about coastal erosion. These equations enable scientists to model oceanographic and atmospheric fluid flow and thus predict which areas will be affected by coastal erosion, as well as study related phenomena such as polar ice cap melting and pollution transport.
The shallow water equations are advantageous compared to the full Navier-Stokes equations, which can be problematic to apply depending on how free surfaces are resolved and on the scale of the modeling domain. Today, we highlight a tutorial that showcases how to solve shallow water equations using the power of equation-based modeling.
In this shallow water equation model, we can describe the physics by adding our own equations — a feature called equation-based modeling. We use the General Form PDE interface and two dependent variables to ensure that the modeling process is straightforward. This way, we can easily define expressions as model variables, which comes in handy when defining the initial wave profile.
This simple 1D model uses the Saint-Venant shallow water equations to study a wave settling over a variable bed as a function of time.
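For reference, one common conservative form of the 1D Saint-Venant equations reads as follows, where h = z_s - z_f is the local water depth, u is the depth-averaged horizontal velocity, and g is the gravitational acceleration (this is a general sketch; the exact formulation used in the tutorial may differ in its details):

```latex
\frac{\partial h}{\partial t} + \frac{\partial (h u)}{\partial x} = 0, \qquad
\frac{\partial (h u)}{\partial t} + \frac{\partial}{\partial x}\!\left( h u^2 + \tfrac{1}{2} g h^2 \right) = -g h \,\frac{\partial z_f}{\partial x}
```

The source term on the right-hand side is what couples the flow to the variable bed profile z_f.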
The 1D model featured here would require substantial work to convert into a 2D model for solving typical applications. This tutorial is therefore most useful as an example of the benefits of equation-based modeling.
A vertical section of the fluid domain. Here, z_{f} is the analytical expression for the seabed profile and z_{s} is the water surface profile.
The model, which investigates shallow water in a channel, has constraints at both ends and uses a wave profile as the initial condition. In order to easily alter parameters like the wave amplitude and bed shape, we can use mathematical relations to represent the initial wave and bed shapes. Please note that the model has a difference in scale between the x- and y-directions, as seen in the plots below.
Plots of the seabed profile (left) and a comparison of the initial water surface profile with the seabed profile (right).
We see that the flow develops hydraulic jump discontinuities over time, which can cause instability in the solution. To stabilize it, we can add an artificial viscosity chosen so that the cell Reynolds number is of order unity. The hydraulic jumps are then replaced with steep fronts that can be resolved on the grid.
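The scaling idea behind this stabilization can be sketched as follows (illustrative only; the exact expression used in the tutorial may differ): choose the artificial viscosity so that the cell Reynolds number, formed with the local velocity and the mesh element size, is of order one,

```latex
\mathrm{Re}_{\mathrm{cell}} = \frac{|u| \, \Delta x}{\nu_{\mathrm{art}}} \sim 1
\quad \Rightarrow \quad
\nu_{\mathrm{art}} \sim |u| \, \Delta x
```

so that sharp fronts are smeared over roughly one mesh element rather than appearing as unresolvable discontinuities.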
Switching gears, let’s take a look at our results. After running the simulation for 60 seconds, we obtain results showing the water surface and seabed profile at six different times, from the start of the simulation to 15 seconds later.
Plots of the seabed profile and water surface level at 3-second increments.
These results clearly indicate that the seabed topography influences the water surface elevation. This, in turn, affects the impact of coastal erosion.
We can share these custom results with others by creating and exporting an animation to help visualize our findings — something that is easy to do in COMSOL Multiphysics.
An animation of the simulation results.
As a next step, try this tutorial yourself by clicking on the button below.
The Open Recovery Files feature is useful for anyone running simulations with multiple parameters, namely:
While these simulations are running, a recovery file is created after the first solution is found and is then updated after each solved iteration. The recovery file is also updated following any of these three events:
If, at any point in your simulation, COMSOL Multiphysics closes unexpectedly, you can open the recovery file with the saved solutions. You can then continue running the simulation from where it left off.
The Open Recovery File window. You can view additional details by clicking on the Show/Hide Details button.
Please note that there are some limitations and subtleties to using this functionality. Currently, the main limitation is that a Parametric Sweep cannot be continued from a recovery file. Instead, you can either rerun the simulation from the beginning or manually run the Parametric Sweep with the remaining parameter values, storing those solutions in a new location so that the first part of the simulation is not overwritten. When a sweep is interrupted this way, the data is stored in the Parametric Solutions node as expected, but the set of solutions from the sweep is incomplete. To access the individual parametric solutions, you might need to point the datasets to the individual solutions.
To learn more about the ins and outs of this feature, watch the video at the top of this post, where we discuss everything you need to know about opening recovery files. First, we open a model and run a simulation. After waiting until the simulation is nearly done, we force quit the software so that we may reopen it and finish the simulation.
Topology optimization is a useful capability because it can help us find designs that we would likely not have thought of ourselves. When developing a design, however, this is only the first step: a particular design found through topology optimization may be too costly to produce or simply impossible to manufacture.
Topology optimization results for an MBB beam.
To address these concerns, we can come up with new designs that are based on the results of topology optimization, and then carry out further simulation analyses on them. But how do we do this? As it turns out, COMSOL Multiphysics makes it simple to create geometries from the 2D and 3D plots of your topology optimization results, which you can continue to work with directly in COMSOL Multiphysics or export to a wide range of CAD software platforms.
To view topology optimization results that are in 2D, we can create a contour plot. Let’s use the Minimizing the Flow Velocity in a Microchannel tutorial to demonstrate this process. The goal of the tutorial is to find an optimal distribution of a porous filling material to minimize the horizontal flow velocity in the center of a microchannel.
First, we open up the model file included in the tutorial and go to the Contour 1 plot feature under the Velocity (spf) plot group.
The horizontal velocity (surface plot) and velocity field (streamlines) after optimization. The black contours represent the filling material.
In the above plot, the black contour is where the design variable equals 0.5, indicating the border between the open channel and the filling material. This is the result that we would like to incorporate into the geometry. In other applications, the expression and exact level to plot may differ, but the principle is the same: find a contour that describes the limit between the solid and nonsolid materials (typically a fluid of some kind).
To create a geometry from this contour plot, we right-click the Contour feature node and choose Add Plot Data to Export. We need to make sure that we choose the data format as Sectionwise before we export the file.
The Sectionwise format describes the exported data using one section with the coordinates, one with the element connectivity, and another that includes the data columns. The middle section, which describes how the coordinates of the first section are connected, is what makes it possible to represent a contour plot consisting of several closed loops or open curves.
The Spreadsheet export format is not suited for this particular use for several reasons, most importantly because it will assume that all coordinates are connected one after the other. This means that if there is more than one isolated contour, it will not be possible to build the Interpolation Curve feature. Also, the coordinates are scrambled, so the curve in the next step (discussed below) will not be drawn in the same way as seen in the contour plot.
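The role of the connectivity section can be illustrated with a small sketch. The names and the simplified data layout below are hypothetical, not the exact sectionwise file syntax, but they show the key point: given a list of points and a list of segment connectivities, separate contours can be chained into separate polylines, which is impossible if the points alone are assumed to be connected one after the other.

```python
from collections import defaultdict

# Hypothetical, simplified stand-in for sectionwise data: a coordinate
# section (points) and a connectivity section (which points form segments).
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 0.0), (4.0, 0.0)]
segments = [(0, 1), (1, 2), (3, 4)]  # two isolated open curves

def chain_polylines(segments):
    """Group segments that share endpoints into separate open polylines."""
    adjacency = defaultdict(list)
    for a, b in segments:
        adjacency[a].append(b)
        adjacency[b].append(a)
    seen = set()
    polylines = []
    # Start from endpoints (nodes of degree 1) so each open curve is
    # walked from one end to the other.
    for start in [n for n, nbrs in adjacency.items() if len(nbrs) == 1]:
        if start in seen:
            continue
        line, current = [start], start
        seen.add(start)
        while True:
            unvisited = [n for n in adjacency[current] if n not in seen]
            if not unvisited:
                break
            current = unvisited[0]
            seen.add(current)
            line.append(current)
        polylines.append(line)
    return polylines

# The connectivity keeps the two contours separate; a plain list of
# coordinates would wrongly join point 2 to point 3.
print(chain_polylines(segments))  # → [[0, 1, 2], [3, 4]]
```

A spreadsheet-style export corresponds to throwing the connectivity away, which is why isolated contours can no longer be reconstructed from it.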
To create the new geometry, we choose Add Component from the Home toolbar and add a new 2D component. Then, we copy the geometry feature nodes from the original geometry and paste them into the geometry sequence of the new 2D component. After this, we add an Interpolation Curve from the More Primitives menu on the Geometry toolbar, set the type to Open Curve and the data format to Sectionwise, and use a tolerance of 2e-2.
A smaller tolerance will give a curve that is more true to the data, but the outcome might be an intricate or “wiggly” geometry. In turn, a higher tolerance may give a curve that is too simplified and quite far from the optimized result.
Geometry with the interpolation curves representing the results of the topology optimization.
The new geometry can now be used to run further simulations and to verify the optimized design within COMSOL Multiphysics.
The DXF format is a 2D format that most CAD software platforms can read. DXF also describes the higher-order polygons between the points, so it usually gives a better representation than exporting only the points.
To export the optimized topology from this geometry to a DXF file, we can follow the steps below. Please note that there is an optional step if you only want to include the shape of the optimized topology in your DXF file.
Now, let’s see what to do when working with topology optimization results that are in 3D.
After performing a topology optimization in 3D, we usually view the resulting shape by creating a plot of the design variable; for example, an isosurface plot. We can directly export such a plot to a format that is compatible with COMSOL Multiphysics and CAD software and can even be used directly for 3D printing. This file format is the STL format, where the surfaces from the results plot are saved as a collection of triangles. It is a common standard file format for 3D printing and 3D scans in general.
In COMSOL Multiphysics, it is possible to export an STL file from the following plot features:
The software also supports adding a Deformation node on the plot feature, in case we want to export a deformed plot. The volume and isosurface plots are the most commonly used plot types for topology optimization, so we will focus our discussion on these two options.
To create an isosurface plot, we first add a 3D plot group to which we add an Isosurface feature node. In the Expression field, we then enter the design variable name, set the entry method as Levels, and fill in an appropriate value of the design variable representing the interface between the solid and nonsolid materials.
To demonstrate this process, let’s look at the example of the bridge shown below, where the optimal material distribution takes the familiar shape of an arch bridge. The optimization algorithm maximizes the stiffness of the bridge subjected to a load to reach the displayed solution. To obtain the displayed isosurface plot, we use the value 0.1 for the level of the design variable.
An isosurface plot of the 3D topology optimization for a deck arch bridge.
As you can see in the screenshot above, isosurface plots are not necessarily capped or airtight, so an exported volume plot may be a better choice, especially if we want to run further simulation analyses in COMSOL Multiphysics.
We can create a suitable plot by adding a Volume feature node to a 3D plot group. Then, we add a Filter node under Volume and set a suitable expression for inclusion. In this example, we use the expression rho_design > 0.1.
A volume plot of the deck arch bridge.
Exporting the data into an appropriate file format is simple. We right-click the Volume or Isosurface feature node and select Add Plot Data to Export. In the settings window of the resulting Plot node, we then select STL Binary File (*.stl) or STL Text File (*.stl) from the Data format drop-down list.
The exported STL file can be read directly by most CAD software platforms. To continue with the simulation of the geometry, import the STL file into a new COMSOL Multiphysics model, a process that we discuss in a previous blog post.
If you want to compare actual CAD drawings with your optimized results, you need to export the data in a format that can be imported into the CAD software you are using. The DXF format (for 2D) and the STL format (for 3D) are widely used and can be imported into almost any software platform.
In this blog post, we have discussed the steps needed to export topology optimization results in the DXF and STL formats. This will enable you to more efficiently analyze your model geometries within COMSOL Multiphysics and CAD software.
When setting up a simulation in COMSOL Multiphysics, you may want to seek out more information on the software as you go. Whether it’s learning about a node in the model tree, the settings for an operation you’re currently working in, or the differences between a set of options you are choosing from and what they will mean for your model, it’s helpful to have guidance available at your fingertips. This is the convenience that the Help window in COMSOL Multiphysics provides.
The Help window, displaying topic-based content for the Electric Potential boundary condition.
The Help window, accessed by clicking the Help button in the top right-hand corner of the software (the blue question mark) or by pressing the F1 key, enables you to promptly access information pertaining to the model tree node or window in which you are currently working. The displayed text updates automatically as you select items in the software or add settings to your model, so you instantly get help right when you need it.
Since this window opens within the COMSOL Desktop®, you can access the information you need without competing for screen space with your simulation. Instead of having to fit multiple windows on your monitor, you can view the help content and the Model Builder together.
Additionally, you can search and navigate the text in the Help window using the respective buttons.
In addition to receiving topic-based help, there may be times when you want to more easily access, navigate, and search all of the comprehensive COMSOL Multiphysics documentation. This includes the user guides and manuals for any modules for which you have a license. You can find this documentation in the Documentation window, which you can access either from within COMSOL Multiphysics, by going to File > Help, or externally from your computer in your COMSOL Multiphysics installation folder.
The Documentation window, with the AC/DC Module User’s Guide opened.
The Documentation window enables you to quickly and easily access your entire library of COMSOL Multiphysics documentation, all within a single window. When open, you can choose between the PDF or HTML version of any guide, manual, or handbook. Additionally, the sections of each individual document are hyperlinked and bookmarked. The sections are displayed on the left side of the window, as shown above. This enables you to quickly jump between different chapters and documents.
This resource also provides more options when it comes to searching through the software documentation. This includes the ability to search through the entire library, only within a specified set of documents you have preselected, or exclusively through the Application Library Manual for all licensed products. Searching the Application Library Manual, in particular, enables you to find models and applications that demonstrate use of some specific physics, software features, and functionality.
Whereas the Help window provides quick access to documentation while modeling, the Documentation window serves as a more comprehensive resource when you need further clarification and more powerful search tools.
Now you know that you can access information about what you are working on. Being able to access modeling examples relevant to your work is equally important. These examples enable you to learn how to use the software, examine COMSOL Multiphysics models, and find guidance that you can apply to your own simulations. In the COMSOL® software, these modeling examples can be found in another valuable resource, the Application Libraries window.
The Application Libraries window, with the Thermal Actuator tutorial model selected and displayed.
The Application Libraries window, accessed by going to File > Application Libraries, contains hundreds of models and simulation applications, spanning every module and engineering discipline. Using the Search field, you can find applications and models that demonstrate the specific physics or features you want to learn how to use. Each entry includes a brief summary of the model; the COMSOL Multiphysics model file; and a PDF document that provides a comprehensive introduction and a detailed, step-by-step overview of the model-building process. This gives you the logic behind how the model is built, why and how boundary conditions are applied, and other useful information that offers insight for the models you create.
By following along with any of the tutorial models available, you can experience building a model firsthand. In addition, relevant examples from the Application Libraries can be experimented with and expanded upon, serving as a starting point for your own designs.
Tip: The tutorial models and demo applications featured in the Application Libraries are also available online in the Application Gallery.
Now that we’ve introduced the help tools available in the COMSOL® software and the advantages they provide, watch our video tutorial covering it all. In the video, we demonstrate how to access and use each of the resources discussed above.
After you’ve finished watching the video, you’ll be ready to use all of the help tools and resources available to you in COMSOL Multiphysics.
If you have used simulation tools for any significant period of time, you may have found yourself creating new models faster than your computer can solve them. This is especially common if your models are quick to set up but take a fair amount of time to solve. Running multiple models at the same time on the same computer is not a good option, as they would compete for resources (RAM, in particular) and therefore take longer to run simultaneously than they would sequentially, or back to back.
So, what’s a modeler to do?
You could launch your first model in the graphical user interface (GUI) and wait for it to solve, launch the second model in the GUI and wait for it to solve, and so on. But who would want to return to the office after hours or on weekends just to launch their next model?
Fortunately, there is a solution: creating a shell script, or a batch file, which automatically launches your COMSOL Multiphysics simulations one after the other. I’ll explain how to do this step by step on a computer with the Windows® operating system, but these ideas also apply to the other supported platforms (macOS and the Linux® operating system).
Let’s start with a demonstration of how to run a single COMSOL Multiphysics model from the command line.
First, we create a model file in the COMSOL Multiphysics GUI, also known as the COMSOL Desktop. Since we’re going over how to use a new functionality, the smaller and less detailed the model, the better. This will allow us to understand the functionality and perform tests on it quickly. Once you are comfortable with this functionality, it can, of course, also be applied to sophisticated models that take a long time to solve.
At this stage, we check that the model is properly set up by running it with a relatively coarse mesh. This presents the additional benefit of generating a default data set and one or two default plots in the Study branch of the model tree. Now that we’ve ensured that the model is properly set up, we can refine the mesh and save the file under the name Model1.mph in our working folder. In this example, that’s C:/Users/jf.
At this point, we can close the COMSOL Desktop.
Next, we open a Command Prompt window and, at the command line, change to our working folder by typing:
cd C:\Users\jf
Then, we press the Enter key.
We are just about to call the COMSOL® software using the comsolbatch command. Before we can do that, we need to make sure that the Windows® operating system knows where to find that command. This is where, if we have not done so before, we add the path to the COMSOL® software executables to the Windows® path environment variable. On a computer running Windows® with a default installation, these executables are located in C:\Program Files\COMSOL\COMSOL52a\Multiphysics\bin\win64.
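For the current Command Prompt session, this can be done with a command along the following lines (the folder shown assumes a default version 5.2a installation; adjust the path to match your own installation):

```shell
set PATH=%PATH%;C:\Program Files\COMSOL\COMSOL52a\Multiphysics\bin\win64
```

To make the change permanent, add the folder to the Path variable in the Environment Variables dialog box of the Windows® system settings instead.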
Now, drum roll, please!
Back at the command line, we type the following command and then press Enter:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
This command instructs Windows® to launch COMSOL Multiphysics® in Batch mode, hence without a graphical user interface. As the syntax suggests, Model1.mph is used as the input and Model1_solved.mph is the file with the solution. If we were to omit the -outputfile Model1_solved.mph part of the command above, the solution would be stored in the input file, Model1.mph.
As the software runs, some progress information is displayed at the command line. After a few moments, the run is done and we can open the output file, Model1_solved.mph, in the GUI. We can see that the model has indeed been solved and that we can postprocess the results interactively, just as if we had computed the solution in the COMSOL Desktop.
Now that we’ve figured out how to launch a COMSOL Multiphysics model from the command line, let’s see how to automate running two or more simulations in a row.
Let’s create a second model, check that it is properly set up, and save the file to our working folder under the name Model2.mph. With that done, we can close the COMSOL Desktop again.
Using a text editor like Notepad, we create a plain text file containing the following two lines:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
comsolbatch -inputfile Model2.mph -outputfile Model2_solved.mph
We then save this in our working folder as a plain text file with the .bat extension. Here, we named the file Batch_Commands_For_COMSOL.bat.
At the command prompt, still in our working folder, we launch Batch_Commands_For_COMSOL.bat. At the command line, we type:
Batch_Commands_For_COMSOL
Then, we press the Enter key.
COMSOL Multiphysics will run without the GUI open and solve the problem defined in the file Model1.mph. The COMSOL® software will then do the same for the problem defined in the file Model2.mph. Once the runs are finished, we can inspect the files Model1_solved.mph and Model2_solved.mph in the COMSOL Desktop to see that they indeed contain the solutions of these two analyses. On the other hand, if we open the files Model1.mph and Model2.mph in the GUI, we see that they have not changed: they still contain the problem definitions, but no solutions.
If we want to run more than two files sequentially, we can simply modify the .bat file, adding a line for each file that we wish to run.
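Instead of listing every model by hand, the batch file can also loop over the model files in the folder. Here is a sketch, assuming the models follow the Model*.mph naming used above and that comsolbatch is on the path:

```shell
@echo off
rem Solve every matching model in turn, writing each solution to a
rem separate <name>_solved.mph file (so the input files stay unchanged).
for %%f in (Model*.mph) do (
    comsolbatch -inputfile "%%f" -outputfile "%%~nf_solved.mph"
)
```

The %%~nf modifier expands to the file name without its extension, which is what builds the _solved.mph output names.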
By learning how to run your COMSOL Multiphysics simulations in Batch mode from the command line, you will be able to complete your projects more efficiently and with ease.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
macOS is a trademark of Apple Inc., in the U.S. and other countries.
Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.
Because they’re fun to watch! But seriously, animations are an engaging way of clearly conveying your simulation results to your audience. Let’s take a look at some scenarios where animations can be helpful, using simulation results featured in past blog posts as examples. Yes, this is partly an excuse to repost cool animations like this one made by two COMSOL Multiphysics® users:
The carangiform swimming pattern of a fish. The animation was created by M. Curatolo and L. Teresi and originally featured in the blog post “Studying the Swimming Patterns of Fish with Simulation“.
When presenting your simulation results to colleagues, clients, or customers, analytical results don’t always provide the whole picture. For instance, the animations featured in a previous blog post about the simulation of sloshing in vehicle fuel tanks help you better visualize the movement and impact of fluid in a tank as compared to a static plot alone.
Of course, analytical results are paramount when presenting the conclusions that you draw from a simulation. However, analytical results can sometimes be hard to understand or relate to. Animations give the viewer a better idea of the real-world effects that were simulated in your research.
Animations can often help explain a concept or idea. In a blog post on thermal ablation, animations are used to illustrate how this process is used for material removal. In that blog post, we explain the concept of thermal ablation and detail how to model the phenomenon in COMSOL Multiphysics. Towards the end, we shared an animation that shows what thermal ablation looks like with a laser heating example. Here it is again:
In general, simulation software is often used to analyze phenomena that cannot be seen with the naked eye. This is true for physics related to acoustics, electromagnetic waves, MEMS, and more. Animations extend this idea by providing a graphical representation of the process or design that you aim to study. For example, the following animation shows the far-field radiation pattern for a monopole antenna array featured in an introductory antenna modeling blog post.
The reasons for creating animations from your simulation results don’t just apply to your COMSOL Multiphysics® models. You can also add the animation functionality to any simulation app that you create with the Application Builder. Depending on who is using your app and for what purpose, you can build it so that users can generate an animation of the results with the click of a button in the app’s user interface.
Several of the demo applications within the Application Library contain animations, such as the Biosensor Design app featured in a previous blog post.
Ready to make your own animations in COMSOL Multiphysics? We have a tutorial video to show you how. Animations can at first seem tricky and time-consuming to get just right. In the video, you will see a few best practices that will help minimize the amount of time you’ll spend producing animations.
After watching the video, you’ll be ready to generate and export animations of your own simulation results.
If you work with computationally large problems, the Domain Decomposition solver can increase efficiency by dividing the problem’s spatial domain into subdomains and computing the subdomain solutions either concurrently or sequentially on the fly. We have already learned about using the Domain Decomposition solver as a preconditioner for an iterative solver and discussed how it enables simulations that are constrained by the available memory. Today, we will take a detailed look at how to use this functionality with a thermoviscous acoustics example.
Let’s start with the Transfer Impedance of a Perforate tutorial model, which can be found in the Application Library of the Acoustics Module. This example model uses the Thermoviscous Acoustics, Frequency Domain interface to model a perforate, a plate with a distribution of small perforations or holes.
A simulation of transfer impedance in a perforate.
For this complex simulation, we are interested in the velocity, temperature, and total acoustic pressure in the transfer impedance model of the perforate. Let’s see how we can use the Domain Decomposition solver to compute these quantities in situations where the required resolution exceeds the available memory.
Let’s take a closer look at how we can set up a Domain Decomposition solver for the perforate model. The original model uses a fully coupled solver combined with a GMRES iterative solver. As a preconditioner, two hybrid direct preconditioners are used; i.e., the preconditioners separate the temperature from the velocity and pressure. By default, the hybrid direct preconditioners are used with PARDISO.
As the mesh is refined, the amount of memory used continues to grow. An important parameter in the model is the minimum thickness of the viscous boundary layer (dvisc), which has a typical size of 50 μm, while the perforates are a few millimeters in size. The minimum mesh element size is taken to be dvisc/2. To refine the solution, we divide dvisc by the refinement factors r = 1, 2, 3, 5. We can insert the domain decomposition preconditioner by right-clicking the Iterative node and selecting Domain Decomposition. Below the Domain Decomposition node, we find the Coarse Solver and Domain Solver nodes.
To accelerate the convergence, we need to use the coarse solver. Since we do not want to use an additional coarse mesh, we set Coarse Level > Use coarse level to Algebraic in order to use an algebraic coarse grid correction. On the Domain Solver node, we add two Direct Preconditioners and enable the hybrid settings like they were used in the original model. For the coarse solver, we take the direct solver PARDISO. If we use a Geometric coarse mesh grid correction instead, we can also apply a hybrid direct coarse solver.
Settings for the Domain Decomposition solver.
We can compare the default iterative solver with hybrid direct preconditioning to both the direct solver and the iterative solver with domain decomposition preconditioning on a single workstation. The unrefined mesh, with a refinement factor of r = 1, has 158,682 degrees of freedom. All three solvers use around 5-6 GB of memory to find the solution for a single frequency. For r = 2, with 407,508 degrees of freedom, and r = 3, with 812,238 degrees of freedom, the direct solver uses a little more memory than the two iterative solvers (12-14 GB for r = 2 and 24-29 GB for r = 3). For r = 5, with 2,109,250 degrees of freedom, the direct solver uses 96 GB and the iterative solvers use around 80 GB on a sequential machine.
As we will learn in the subsequent discussion, the Recompute and clear option for the Domain Decomposition solver gives a significant advantage with respect to the total memory usage.
| Memory Usage, Nondistributed Case | Degrees of Freedom | Direct Solver | Iterative Solver with Hybrid Direct Preconditioning | Iterative Solver with Domain Decomposition Preconditioning | Iterative Solver with Domain Decomposition Preconditioning, Recompute and clear Enabled |
|---|---|---|---|---|---|
| Refinement r = 1 | 158,682 | 5.8 GB | 5.3 GB | 5.4 GB | 3.6 GB |
| Refinement r = 2 | 407,508 | 14 GB | 12 GB | 13 GB | 5.5 GB |
| Refinement r = 3 | 812,238 | 29 GB | 24 GB | 26 GB | 6.4 GB |
| Refinement r = 5 | 2,109,250 | 96 GB | 79 GB | 82 GB | 12 GB |
Memory usage for the direct solver and the two iterative solvers in the nondistributed case.
On a cluster, the memory load per node can be much lower than on a single-node computer. Let us consider the model with a refinement factor of r = 5. The direct solver scales nicely with respect to memory, using 65 GB and 35 GB per node on 2 and 4 nodes, respectively. On a cluster with 4 nodes, the iterative solver with domain decomposition preconditioning with 4 subdomains only uses around 24 GB per node.
| Memory Usage per Node on a Cluster | Direct Solver | Iterative Solver with Hybrid Direct Preconditioning | Iterative Solver with Domain Decomposition Preconditioning |
|---|---|---|---|
| 1 node | 96 GB | 79 GB | 82 GB (with 2 subdomains) |
| 2 nodes | 65 GB | 56 GB | 47 GB (with 2 subdomains) |
| 4 nodes | 35 GB | 35 GB | 24 GB (with 4 subdomains) |
Memory usage per node on a cluster for the direct solver and the two iterative solvers for refinement factor r = 5.
On a single-node computer, the Recompute and clear option for the Domain Decomposition solver gives us the benefit we expect: reduced memory usage. However, it comes with the additional cost of decreased performance. For r = 5, the memory usage is around 41 GB for 2 subdomains, 25 GB for 4 subdomains, and 12 GB for 22 subdomains (the default settings result in 22 subdomains). For r = 3, we use around 15 GB of memory for 2 subdomains, 10 GB for 4 subdomains, and 6 GB for 8 subdomains (default settings).
Even on a single-node computer, the Recompute and clear option for the domain decomposition method gives a significantly lower memory consumption than the direct solver: 12 GB instead of 96 GB for refinement factor r = 5 and 6 GB instead of 30 GB for refinement factor r = 3. Despite the performance penalty, the Domain Decomposition solver with the Recompute and clear option is a viable alternative to the out-of-core option for the direct solvers when there is insufficient memory.
Refinement Factor | r = 3 | r = 5 |
---|---|---|
Memory Usage | 30 GB | 96 GB |
Memory usage on a single-node computer with a direct solver for refinement factors r = 3 and r = 5.
Number of Subdomains | Recompute and clear Option | Refinement r = 3 | Refinement r = 5 |
---|---|---|---|
2 | Off | 24 GB | 82 GB |
2 | On | 15 GB | 41 GB |
4 | On | 10 GB | 25 GB |
8 | On | 6 GB | 20 GB |
22 | On | - | 12 GB |
Memory usage on a single-node computer with an iterative solver and domain decomposition preconditioning, with and without the Recompute and clear option, for refinement factors r = 3 and r = 5.
As demonstrated with this thermoviscous acoustics example, using the Domain Decomposition solver can greatly lower the memory footprint of your simulation. By this means, domain decomposition methods can enable the solution of large and complex problems. In addition, parallelism based on distributed subdomain processing is an important building block for improving computational efficiency when solving large problems.
The Domain Decomposition solver is based on a decomposition of the spatial domain into overlapping subdomains, where the subdomain solutions are less complex and more efficient in terms of memory usage and parallelization compared to the solution of the original problem.
In order to describe the basic idea of the iterative spatial Domain Decomposition solver, we consider an elliptic partial differential equation (PDE) over a domain D and a spatial partition {D_{i}}_{i}, such that the whole domain D = ∪_{i} D_{i} is covered by the union of the subdomains D_{i}. Instead of solving the PDE on the entire domain at once, the algorithm iteratively solves a number of smaller problems, one for each subdomain D_{i}.
For Schwarz-type domain decomposition methods, the subdomains overlap in order to transfer information between them. On the interfaces between the subdomains, the solutions of the neighboring subdomains are used to update the current subdomain solution. If a subdomain D_{i} is adjacent to the domain boundary, the boundary conditions of the original problem are used there. The iterative domain decomposition procedure is typically combined with a global solver on a coarser mesh in order to accelerate convergence.
A 2D domain with a regular triangular mesh and its degrees of freedom decomposed into square subdomains.
To illustrate the spatial domain decomposition, consider a 2D regular triangular mesh. For simplicity, we consider linear finite element shape functions with degrees of freedom in the 3 nodes of the triangular elements. The domain (more precisely, its degrees of freedom) is decomposed into square subdomains, each consisting of 25 degrees of freedom. All interior subdomains have 8 neighboring subdomains, and all degrees of freedom are unique to a single subdomain. The support of the linear element functions for a single subdomain overlaps with the support of its neighbors.
Support of the linear element functions of the blue subdomain.
To improve the convergence rate of the iterative procedure, we may need to include a larger number of degrees of freedom in order to have a larger overlap of the subdomain support. This may give a more efficient coupling between the subdomains and a lower iteration count until convergence of the iterative procedure. However, this benefit comes at the cost of additional memory usage and additional computations during the setup and solution phases because of the larger subdomain sizes.
If an additional overlap of width 1 is requested, we add an additional layer of degrees of freedom to the existing subdomain. In our example, 22 degrees of freedom (marked with blue rectangles) are added to the blue subdomain. The support of the blue subdomain is enlarged accordingly.
The same procedure is repeated for the red, green, and yellow subdomains. In the resulting subdomain configuration, some of the degrees of freedom are unique to a single subdomain, while others are shared by two, three, or even four subdomains. Dependencies therefore arise for the shared degrees of freedom whenever one of the subdomains sharing them updates its solution.
Extended subdomain with 47 degrees of freedom and its support. The additional 22 degrees of freedom are shared with the neighboring subdomains.
It is known (Ref. 1) that the iterative solution to the set of subdomain problems on the subdomains, D_{i}, converges toward the solution of the original problem formulated over the whole domain D. Hence, the global solution can be found by iteratively solving each subdomain problem with all other domains fixed until the convergence criterion is met. The optional coarse grid problem can improve the convergence rate considerably. The coarse grid problem, which is solved on the entire domain D, gives an estimate of the solution on the fine grid on D and can transfer global information faster. The convergence rate of the method depends on the ratio between the size of the coarse grid mesh elements and the width of the overlap zone on the fine grid.
If we compute the solution on a particular subdomain D_{i}, the neighboring subdomains need to update their degrees of freedom adjacent to the support of D_{i}. In COMSOL Multiphysics, there are four options available for the coordination of the subdomain overlap and the global coarse grid solution. The selector Solver in the domain decomposition settings can be set to Additive Schwarz, Multiplicative Schwarz (default), Hybrid Schwarz, and Symmetric Schwarz. For Additive Schwarz methods, the affected degrees of freedom are updated after all solutions have been computed on all subdomains without an intermediate data exchange. In this case, the order of the subdomain solutions is arbitrary and there are no dependencies between the subdomains during this solution phase.
In contrast, Multiplicative Schwarz methods update the affected degrees of freedom at the overlap of the support of neighboring subdomains after every subdomain solution. This typically speeds up the iterative solution procedure. However, there is an additional demand for prescribing an order of the subdomain solutions, which are no longer fully independent of each other.
The Hybrid Schwarz method updates the solution after the global solver problem is solved. The subdomain problems are then solved concurrently as in the Additive Schwarz solver case. The solution is then updated again and the global solver problem is solved a second time. The Symmetric Schwarz method solves the subdomain problems in a given sequence like the Multiplicative Schwarz solver, but in a symmetric way.
Direct linear solvers are typically more robust and require less tweaking of the physics-dependent settings than iterative solvers with tuned preconditioners. Due to their memory requirements, however, direct solvers may become unfeasible to use for larger problems. Iterative solvers are typically leaner in memory consumption, but some models still can’t be solved due to resource limitations. We discuss the memory requirements for solving large models in a previous blog post. Other preconditioners for iterative solvers may also fail due to specific characteristics of the system matrix. Domain decomposition is a preconditioner that in many cases requires less tuning than other preconditioners.
If the available memory is the limiting factor, we can move the solution process to a cluster that provides a larger amount of accumulated memory. We can consider the domain decomposition preconditioner, using a domain solver with settings that mimic the original solver settings, since the Domain Decomposition solver has the potential to do more concurrent work. As we will see, the Domain Decomposition solver can also be used in a Recompute and clear mode, which can give a significant memory reduction, even on a workstation.
If we do not want to use an additional coarse mesh to construct the global solver, we can compute its solution using an algebraic method. This may come at the price of an increased number of GMRES iterations compared to setting the Use coarse level selector to Geometric, which is based on an additional coarser mesh. The advantage is that the algebraic method constructs the global solver from the finest-level system matrix rather than by means of an additional coarser mesh. With the Algebraic option, the generation of an additional coarse mesh, which might be costly or not even possible, can be avoided.
On a cluster, a subdomain problem can be solved on a single node (or on a small subset of the available nodes). The size of the subdomains, hence the memory consumption per node, can be controlled by the Domain Decomposition solver settings. For the Additive Schwarz solver, all subdomain problems can be solved concurrently on all nodes. The solution updates at the subdomain interfaces occur in the final stage of the outer solver iteration.
For the Multiplicative Schwarz solver, there are intermediate updates of the subdomain interface data. This approach can speed up the convergence of the iterative procedure, but it introduces additional dependencies for the parallel solution. A subdomain coloring mechanism must be used in order to identify sets of subdomains that can be processed concurrently. This may limit the degree of parallelism if there is a low number of subdomains per color. In general, the Multiplicative Schwarz and Symmetric Schwarz methods converge faster than the Additive Schwarz and Hybrid Schwarz methods, while the latter two can achieve better parallel speedup.
A subdomain coloring mechanism is used for multiplicative Schwarz-type domain decomposition preconditioning.
In the Domain Decomposition solver settings, there is a Use subdomain coloring checkbox for the Multiplicative Schwarz and Hybrid Schwarz methods. This option is enabled by default and takes care of grouping subdomains into sets — so-called colors — that can be handled concurrently. Let us consider a coloring scheme with four colors (blue, green, red, and yellow). All subdomains of the same color can compute their subdomain solution at the same time and communicate the solution at the subdomain overlap to their neighbors. For four colors, the procedure is repeated four times until the global solution can be updated.
Domain decomposition on a cluster with nine nodes. A subdomain coloring scheme is used to compute subdomain solutions simultaneously for each different color.
On a cluster, the subdomains can be distributed across the available compute nodes. Every color can be handled in parallel and all of the nodes compute their subdomain solutions for the current color at the same time and then proceed with the next color. The coloring scheme coordinates the order of the subdomain updates for the Multiplicative Schwarz and Symmetric Schwarz methods. Communication is required for updating the degrees of freedom across the compute node boundaries in between every color. No subdomain coloring scheme is required for the Additive Schwarz and Hybrid Schwarz methods.
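The coloring step itself can be sketched with a simple greedy algorithm on the subdomain adjacency graph. This is an illustration of the idea, not COMSOL's internal implementation; the 3-by-3 grid layout below is an assumption matching the regular decomposition discussed earlier.

```python
# Greedy coloring of the subdomain adjacency graph: subdomains that share
# overlap DOFs (i.e., are neighbors) receive different colors, so all
# subdomains of one color can be solved concurrently.
def color_subdomains(neighbors):
    """neighbors: dict mapping subdomain id -> set of adjacent subdomain ids."""
    colors = {}
    for s in sorted(neighbors):                  # deterministic sweep order
        used = {colors[t] for t in neighbors[s] if t in colors}
        colors[s] = next(c for c in range(len(neighbors)) if c not in used)
    return colors

def grid_neighbors(nx, ny):
    """Adjacency of an nx-by-ny grid of subdomains, diagonals included."""
    nb = {}
    for i in range(nx):
        for j in range(ny):
            s = i * ny + j
            nb[s] = {ii * ny + jj
                     for ii in range(max(0, i - 1), min(nx, i + 2))
                     for jj in range(max(0, j - 1), min(ny, j + 2))} - {s}
    return nb

colors = color_subdomains(grid_neighbors(3, 3))  # a 3 x 3 grid of subdomains
```

For this regular layout, where every interior subdomain has 8 neighbors, the greedy sweep produces four colors, matching the four-color (blue, green, red, yellow) scheme described above.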
The different Domain Decomposition solver types.
If the Domain Decomposition solver is run on a single workstation, all data needs to be set up in the same memory space and there is no longer a benefit from storing only specific subdomain data. Due to the subdomain overlap, the memory consumption might even increase compared to the original problem. In order to overcome this limitation, the Domain Decomposition solver can be run in the Recompute and clear mode, where the data used by each subdomain is computed on the fly. This results in a significant memory reduction and makes it possible to solve larger problems without storing data in virtual memory. These problems take longer to compute, however, due to the repeated setup of the subdomain problems.
This method is particularly useful when the solution process would otherwise use a lot of virtual memory with disk swapping. If the Automatic option is used, the Recompute and clear mechanism is activated when an out-of-memory error occurs during the setup phase. The setup is then repeated with Recompute and clear activated. The Recompute and clear option is comparable to the out-of-core option of the direct solvers. Both methods incur an additional penalty: either storing additional data on disk (out of core) or recomputing specific parts of the data again and again (Recompute and clear). We can save even more memory by using the matrix-free format on top of the Recompute and clear option.
In the settings of the Domain Decomposition solver, we can specify the intended Number of subdomains (see the figure below). In addition, the Maximum number of DOFs per subdomain can be specified. If the latter bound is exceeded (i.e., one of the subdomains has to handle more degrees of freedom than specified), then all subdomains are recreated using a larger number of subdomains.
Settings window for the Domain Decomposition solver.
The subdomains are created by means of the element and vertex lists taken from the mesh. We are able to choose from different subdomain ordering schemes. The Nested Dissection option creates a subdomain distribution by means of graph partitioning. This option typically gives a low number of colors and results in balanced subdomains with an approximately equal number of degrees of freedom, minimal subdomain interfaces, and a small overlap.
An alternative method that also avoids slim subdomains is to use the Preordering algorithm based on a Space-filling curve. If we select the None option for the Preordering algorithm, the subdomain ordering is based on the ordering of the mesh elements and degrees of freedom, which can result in slim subdomains. Detailed information about the applied subdomain configuration is given in the solver log if the Solver log setting on the Advanced node is set to Detailed.
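Why a space-filling curve avoids slim subdomains can be seen with a small sketch. The Z-order (Morton) curve below is one common space-filling curve, used here purely for illustration; COMSOL's actual preordering may use a different curve.

```python
# Z-order (Morton) curve: interleaving the bits of the (x, y) indices and
# sorting by the resulting key groups nearby points together, so splitting
# the sorted list into equal chunks yields compact subdomains instead of
# the slim strips produced by plain lexicographic ordering.
def morton_key(x, y, bits=8):
    key = 0
    for b in range(bits):                      # interleave bits of x and y
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

points = [(x, y) for x in range(16) for y in range(16)]   # 16 x 16 grid
by_curve = sorted(points, key=lambda p: morton_key(*p))
chunks = [by_curve[k:k + 64] for k in range(0, 256, 64)]  # 4 subdomains
```

The first chunk along the curve is exactly the compact 8-by-8 quadrant of the grid, whereas the first 64 points in lexicographic order would form a slim 4-by-16 strip.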
When simulating problems with large memory requirements in the COMSOL® software, we are limited by the available hardware resources. An iterative solver with domain decomposition preconditioning should be considered as a memory-lean alternative to direct solvers. On a workstation, the Recompute and clear option for the Domain Decomposition solver is an alternative to the out-of-core mechanism for the direct solvers.
Although memory-heavy simulations can fail on computers with insufficient memory, we can make them feasible by running them on clusters. The direct solvers in COMSOL Multiphysics automatically use the distributed memory, leading to a memory reduction on each node. The Domain Decomposition solver is an additional option that takes advantage of parallelization based on the spatial subdomain decomposition.
The Domain Decomposition solver, clusters, and a variety of the options discussed here will help you improve computational efficiency when working with large models in COMSOL Multiphysics. In an upcoming blog post, we will demonstrate using the domain decomposition preconditioner in a specific application scenario. Stay tuned!
A. Toselli and O. Widlund, “Domain Decomposition Methods — Algorithms and Theory,” Springer Series in Computational Mathematics, vol. 34, 2005.
We have already gone over the physical basis of the firing mechanism that generates the action potential in cells, and we studied the generation of such a waveform using the FitzHugh-Nagumo (FN) model.
The dynamics of the simple FitzHugh-Nagumo model, featured in a computational app.
Today, we will convert the FN model study into a more rigorous mathematical model: the Hodgkin-Huxley (HH) model. Unlike the FitzHugh-Nagumo model, which works well as a proof of concept, the Hodgkin-Huxley model is based on cell physiology, and its simulation results match experiments well.
In the HH model, the cell membrane contains gated and nongated channels that allow the passage of ions through them. The nongated channels are always open and the gated channels open under particular conditions. When the cell is at rest, the neurons allow the passage of sodium and potassium ions through the nongated channels. First, let us presume that only the potassium channels exist. For potassium, which is in excess inside the cell, the difference of concentration between the inside and outside of the cell acts as a driving force for the ions to migrate. This is the process of movement of ions by diffusion, or the chemical mechanism that initially drives potassium out of the cells.
This movement process cannot go on indefinitely, because the potassium ions are charged. Once they accumulate outside the cell, these ions establish an electrical gradient that drives some potassium ions back into the cell. This is the second mechanism (the electrical mechanism) that affects the movement of ions. Eventually, these two mechanisms balance each other and the potassium influx and efflux cancel out. The potential at which this balance happens is known as the Nernst potential for that ion. In excitable cells, the Nernst potential for potassium, E_{K}, is -77 mV, and for sodium ions, E_{Na}, it is around 50 mV.
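For reference, the Nernst potential follows from E = (RT/zF) ln(c_out/c_in). The short sketch below reproduces the quoted values; the squid-axon concentrations used here are typical textbook numbers assumed for illustration, not values from the text.

```python
import math

# Nernst potential E = (R*T)/(z*F) * ln(c_out / c_in), converted to mV.
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.15     # temperature, K (25 degC)

def nernst_mV(c_out, c_in, z=1):
    """Nernst potential in millivolts for an ion of valence z."""
    return 1000.0 * R * T / (z * F) * math.log(c_out / c_in)

# Illustrative squid-axon concentrations (mM), assumed for this sketch:
E_K = nernst_mV(c_out=20.0, c_in=400.0)    # potassium: about -77 mV
E_Na = nernst_mV(c_out=440.0, c_in=50.0)   # sodium: about +56 mV
```

Because potassium is concentrated inside the cell, the logarithm is negative and E_{K} comes out near -77 mV; the reverse concentration ratio for sodium gives a positive potential in the 50 mV range.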
Now, let us allow the presence of a few nongated sodium channels in the membrane. Because sodium ions abound in the extracellular region, an influx of sodium ions into the cell must occur. The incoming sodium ions reduce the electrical gradient, disturb the potassium equilibrium, and result in a net potassium efflux from the cell until the cell reaches its resting potential at around -70 mV. It is important to mention here that the net efflux of potassium and net influx of sodium ions cannot go on forever; otherwise, the chemical gradients that cause the movement would eventually vanish. Ion pumps bring potassium back into the cell and drive sodium out through active transport, maintaining the resting potential of the cells under normal conditions.
Let’s derive an equivalent circuit model of a cell in which we can imitate the effects of the different cellular mechanisms we just described by different commonly found circuit components, such as capacitors, resistors, and batteries. The voltage response of the circuit is the signal that corresponds to the action potential.
Overall, there are four currents that are important for the HH model:

- The capacitive current through the membrane
- The sodium ion current
- The potassium ion current
- The leak current
Schematic of the currents in a Hodgkin-Huxley model.
The four currents flow through parallel branches, with the membrane potential V as the driving force (see the figure above; the ground denotes extracellular potential). The cell membrane has a capacitive character, which allows it to store charge. In the figure above, this is the left-most branch, modeled with a capacitor of strength C_{m}. The other branches account for three ionic currents that flow through ion channels. In each branch, the effects of channels are modeled through conductance (shown as resistance in the diagram), and the effect of the concentration gradient is represented by the Nernst potential of the ions, which are represented as batteries.
Thus, when a current I is injected into the cell, it is divided into four parts, and the conservation of charge leads us to the following balance equation:

I = I_{C} + I_{Na} + I_{K} + I_{L}

Or, equivalently, since the capacitive current is I_{C} = C_{m} (dV/dt),

C_{m} (dV/dt) = I − I_{Na} − I_{K} − I_{L}
What is of paramount importance is that the sodium and potassium channel conductances are not constant; rather, they are functions of the cell potential. So how do we model them? Remember that some of the ion channels are gated and they can have multiple gates. Assume that there are voltage-dependent rate functions α_{ρ}(V) and β_{ρ}(V), which give us the rate constants of a gate going from a closed state to an open state and from an open state to a closed state, respectively. If ρ denotes the fraction of gates that are open, a simple balance law yields the following equation for the evolution of ρ:

dρ/dt = α_{ρ}(V)(1 − ρ) − β_{ρ}(V)ρ
Different gated channels are characterized by their gates. In the HH model, the potassium channel is hypothesized to be composed of four n-type gates. Since the channel conducts only when all four are open, the potassium conductance is modeled through the equation

g_{K} = ḡ_{K} n^{4}
For sodium, the situation is assumed to be more complicated. The gated sodium channel also has four gates, but of two kinds: three m-type gates (activation gates that open when the cell depolarizes) and one h-type gate (an inactivation gate that closes when the cell depolarizes). Therefore, the sodium channel conductance is given by

g_{Na} = ḡ_{Na} m^{3} h
In the above equations, ḡ_{K} and ḡ_{Na} are the maximum potassium and sodium conductances, respectively. The functional forms of α_{ρ}(V) and β_{ρ}(V) for ρ = n, m, h can be found in any standard reference.
The leak conductance, g_{L}, is assumed to be a constant. Therefore, the HH model is completely described by the following set of equations:

C_{m} (dV/dt) = I − ḡ_{Na} m^{3} h (V − E_{Na}) − ḡ_{K} n^{4} (V − E_{K}) − g_{L} (V − E_{L})

dρ/dt = α_{ρ}(V)(1 − ρ) − β_{ρ}(V)ρ, for ρ = n, m, h
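As a concrete illustration, the full Hodgkin-Huxley system can be integrated with a plain forward-Euler time stepper. The sketch below uses the standard squid-axon parameter set from the literature (modern convention with a resting potential near -65 mV); the applied current and step size are choices made for illustration, and this is not the COMSOL implementation.

```python
import math

# Forward-Euler integration of the Hodgkin-Huxley equations.
C_m = 1.0                                   # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3           # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387       # reversal potentials, mV

# Standard voltage-dependent rate functions (1/ms, V in mV).
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))

V = -65.0                                   # start at rest
n = alpha_n(V) / (alpha_n(V) + beta_n(V))   # gates at their steady states
m = alpha_m(V) / (alpha_m(V) + beta_m(V))
h = alpha_h(V) / (alpha_h(V) + beta_h(V))

dt, I_app = 0.01, 10.0                      # step (ms), stimulus (uA/cm^2)
trace = []
for _ in range(int(50 / dt)):               # 50 ms of simulated time
    I_Na = g_Na * m**3 * h * (V - E_Na)     # sodium current
    I_K = g_K * n**4 * (V - E_K)            # potassium current
    I_L = g_L * (V - E_L)                   # leak current
    V += dt * (I_app - I_Na - I_K - I_L) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    trace.append(V)
```

With a sustained 10 uA/cm^2 stimulus, the membrane potential starts near rest and fires action potentials whose peaks rise well above 0 mV.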
The key to understanding the Hodgkin-Huxley model lies in understanding the gate equations. We can recast the equations for the gates in the following form:

dρ/dt = (ρ_{∞} − ρ)/τ_{ρ}

with ρ_{∞} = α_{ρ}/(α_{ρ} + β_{ρ}) and τ_{ρ} = 1/(α_{ρ} + β_{ρ}).
This is a very well-known equation in electrical circuits. If we assume ρ_{∞} is voltage independent, then the equation says that ρ asymptotically approaches ρ_{∞} as its final value, while τ_{ρ}, the time constant, dictates the rate of approach: the smaller the τ_{ρ}, the faster the approach. The following figure shows the values of these two quantities for ρ = n, m, h.
The asymptotic values (left) and time constants (right) for the gate equations of the Hodgkin-Huxley model.
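This exponential relaxation toward ρ_{∞} with time constant τ_{ρ} can be verified in a few lines of code. The rate constants below are arbitrary illustrative values, not the HH rate functions.

```python
import math

# First-order gate kinetics with V (hence alpha, beta) held fixed: the gate
# variable relaxes exponentially toward rho_inf with time constant tau_rho.
alpha, beta = 0.8, 0.2                     # assumed fixed rate constants, 1/ms
rho_inf = alpha / (alpha + beta)           # steady-state open fraction
tau = 1.0 / (alpha + beta)                 # time constant, ms

rho, dt = 0.0, 0.001                       # start fully closed
for _ in range(int(5 * tau / dt)):         # integrate for five time constants
    rho += dt * (alpha * (1 - rho) - beta * rho)

# Closed-form solution rho(t) = rho_inf + (rho_0 - rho_inf) * exp(-t / tau):
analytic = rho_inf + (0.0 - rho_inf) * math.exp(-5.0)
```

After five time constants the numerical value matches the analytic exponential and sits within a percent of ρ_{∞}, illustrating why gates with small τ_{ρ} (like the sodium m gate) respond almost immediately to a voltage step.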
It is easy to conclude from the figures above that n_{∞} and m_{∞} increase as the cell depolarizes, while h_{∞} decreases under the same conditions. From the second graph, we find that the sodium activation gates (m) respond much faster than the potassium activation gates (n) or the sodium inactivation gates (h).
When depolarization starts, n_{∞} and m_{∞} increase and h_{∞} decreases. The governing equations demand that all of these quantities approach their steady-state values; therefore, n and m increase and h decreases. However, we should also remember the differences in the time constants of the gating variables. A comparison shows that the activation of the sodium gates happens much faster than their inactivation or the opening of the potassium channels. Therefore, there is an initial overall increase in the sodium conductance. This results in an increase in the sodium current, which raises the membrane potential and causes V to approach E_{Na}. This is how the HH model accounts for the rising part of the action potential.
However, as this process continues, h keeps decreasing. Once the value of h goes below a threshold, the sodium channels are effectively closed. Also, the approach of V toward E_{Na} kills the driving force for the sodium current. Meanwhile, the potassium channels, which have a slower time constant, open up to a large extent. This, coupled with the large driving force available for the potassium current, forces the reverse flow. The potassium ions move out of the cell and eventually the membrane potential settles toward the hyperpolarized state.
We can build a computational simulation app to analyze the Hodgkin-Huxley model, which enables us to test various parameters without changing the underlying complex model. We can do this by designing a user-friendly app interface using the Application Builder in the COMSOL Multiphysics® software. As a first step, we create a model of the Hodgkin-Huxley equations using the Model Builder in the COMSOL software. After building the underlying model, we transform it into an app using the Application Builder. By building an app, we can restrict and control the various inputs and outputs of our model. We then pass the app to the end user, who doesn’t need to worry about the model setup process and can focus on extracting and analyzing the results of the simulation.
In our case, we implemented the underlying Hodgkin-Huxley model using the Global ODEs and DAEs interface in COMSOL Multiphysics. This interface is part of the Mathematics branch in the COMSOL software and is capable of solving systems of ordinary differential and differential-algebraic equations. It is often used to construct models for which the equations and their initial conditions are generic. In the interface, we can specify the equations and unknowns and add initial conditions. The interface, with the model equations, is shown below.
We also create the postprocessing elements, graphs, and animations in the Model Builder. Once the model is ready, we move on to the Application Builder again. We connect the elements of the model to the app’s user interface through various GUI options like input fields, control buttons, display panels, and some coded methods.
You can learn more about how to build and run simulation apps in this archived webinar.
Finally, we can design the user interface of the Hodgkin-Huxley app. With the Form Editor in the Application Builder, we can design a custom user interface with a number of different buttons, panels, and displays. This user interface features a Model Parameters section to input the different parameters of the HH model, such as the Nernst potential, maximum gate conductance, and membrane capacitance. We can also provide two types of excitation current to the model: a unit step current or an excitation train. As the parameters change, the app displays the action potential and excitation current, as well as the evolution of gate variables m, n, and h.
With the Reset and Compute buttons, it is easy to run multiple tests after changing the parameters. There are also graphical panels that display visualizations and plots of the model results and an Animate button that creates captivating animations. The Simulation Report button generates a summary of the simulation.
The user interface for the Hodgkin-Huxley Model simulation app.
Making buttons work in an app is a simple process. All we have to do is write a few methods using the Method Editor tool that comes with the Application Builder and connect them to the buttons properly. Let me illustrate with an example. We can design the Hodgkin-Huxley Model app so that when it launches, the Animate and Simulation Report buttons are inactive (see the figure below). This is because the app user will not need to use either of these buttons until after they perform a simulation.
The Simulation Report and Animate buttons are disabled at the start of the simulation.
To do so, we can write a method that instructs the app to execute certain functions during the launch.
A method that disables the Simulation Report and Animate buttons during launch.
Observe that we have disabled the Simulation Report and Animate buttons using the instructions in lines 7 and 8 of the method. If you are worried about coming up with the syntax for your methods, let me assure you that it is much simpler than it seems. If we want to record the code corresponding to a sequence of actions, we click the Record Code button in the Application Builder ribbon. Then, we go to the Model Builder, perform the actions, and click the Stop Recording button once we are done. The corresponding code is placed in the method. If necessary, we can then modify the instructions.
Once a simulation is complete, we would like these buttons to become active in the app. In another method, associated with the Compute button, we insert a short code segment that enables the two buttons again.
We then ensure that this segment is executed if the solution is computed successfully. You will see that this enables both buttons.
To summarize, you can use a simulation app to easily compute and visualize parameter changes when working with a complex model that involves multiple equations and types of physics, such as the Hodgkin-Huxley model discussed here. This simulation app is just one example of how you can design the layout of an app and customize its input parameters to fit your needs. Use this app as inspiration to build your own app, whether you are analyzing the action potential in a cell with a mathematical model or teaching students about complicated math and engineering concepts. No matter what purpose your app serves, it will ensure that your simulation process is simple and intuitive.
Nerve cells are separated from the extracellular region by a lipid bilayer membrane. When the cells aren’t conducting a signal, there is a potential difference of about -70 mV across the membrane. This difference is known as the cell’s resting potential. Mineral ions, such as sodium and potassium, and negatively charged protein ions, contained within the cell, maintain the resting potential. When the cell receives an external stimulus, its potential spikes toward a positive value, a process known as depolarization, before falling off again to the resting potential, called repolarization.
Plot of a cell’s action potential.
In one example, the concentration of the sodium ions at rest is much higher in the extracellular region than it is within the cell. The membrane contains gated channels that selectively allow the passage of ions through them. When the cell is stimulated, the sodium channels open up and there is a rush of sodium ions into the cell. This sodium “current” raises the potential of the cell, resulting in depolarization. However, since the channel gates are voltage driven, the sodium gates close after a while. The potassium channels then open up and an outbound potassium current flows, leading to the repolarization of the cell.
Hodgkin and Huxley explained this mechanism of generating action potential through mathematical equations (Ref. 2). While this was a great success in the mathematical modeling of biological phenomena, the full Hodgkin-Huxley model is quite complicated. On the other hand, the FitzHugh-Nagumo model is relatively simple, consisting of fewer parameters and only two equations. One is for the quantity V, which mimics the action potential, and the other is for the variable W, which modulates V.
Today, we’ll focus on the FN model, while the HH model will be a topic of discussion for a later time.
The two equations in the FN model (in one common normalization) are

dV/dt = V(V − a)(1 − V) − W + I

and

dW/dt = ε(V − bW)
The parameter I corresponds to an excitation, while a and b are the controlling parameters of the model. The evolution of W is slower than that of V due to the parameter ε multiplying everything on the right-hand side of the second equation. The fixed points of the FN model equations are the solutions of the following equation system,

V(V − a)(1 − V) − W + I = 0

and

V − bW = 0
The V-nullcline and W-nullcline are the curves dV/dt = 0 and dW/dt = 0, respectively, in the VW-plane. Note that the V-nullcline is a cubic curve in the VW-plane and the W-nullcline is a straight line. The slope of the line is controlled in such a way that the nullclines intersect at a single point, making it the system's only fixed point.
The parameter I simply shifts the V-nullcline up or down. Thus, changing I modulates the position of the fixed point, so that different values of I place the fixed point on the left, middle, or right part of the cubic curve.
To simulate what happens when the fixed point is in each region, we can use the Global DAE interface included in the base package of COMSOL Multiphysics.
The V-nullcline is shown in green in the figure below. In the region above this nullcline, \frac{dV}{dt} < 0, while in the region below it, \frac{dV}{dt} > 0. The W-nullcline is shown in red; in the region to the right of this straight line, \frac{dW}{dt} > 0, and to the left, \frac{dW}{dt} < 0.
Let’s first examine what happens if the fixed point is on the right side, Region 3, of the V-nullcline. We’ll say that at time t = 0, both V and W are zero. In this case, both \frac{dV}{dt} and \frac{dW}{dt} are positive at and around the starting point, and thus both variables change as time progresses. But since V evolves faster than W, V increases rapidly while W remains virtually unchanged. In the figure, we can see that this results in a near-horizontal part of the V-W curve.
As the curve approaches the V-nullcline, the rate of change of V slows down and the evolution of W becomes more prominent. Since \frac{dW}{dt} is still positive, W must increase, and the curve moves upwards. The fixed point then attracts this curve and the evolution ends at the fixed point.
Plot of the VW-plane when the fixed point is on the right side of the V-nullcline.
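This convergence can be reproduced with a few lines of numerical time integration. The forward-Euler sketch below is illustrative only: the parameter values and the form of the equations are assumptions chosen so that the fixed point sits in Region 3, near V ≈ 1.33:

```python
import numpy as np

# Forward-Euler sketch of the FN system with the fixed point in
# Region 3 (parameter values are illustrative assumptions):
#   dV/dt = V - V^3/3 - W + I
#   dW/dt = eps*(V + a - b*W)
a, b, eps, I = 0.7, 0.8, 0.08, 2.0
dt, n_steps = 0.01, 50_000

V, W = 0.0, 0.0                    # start at the origin, as in the text
Vs = np.empty(n_steps)
for i in range(n_steps):
    dV = V - V**3 / 3 - W + I
    dW = eps * (V + a - b * W)
    V, W = V + dt * dV, W + dt * dW
    Vs[i] = V

# V first rises rapidly (the near-horizontal part of the V-W curve),
# then the trajectory is attracted to the fixed point near V ~ 1.33.
print(Vs[-1])
```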
If the fixed point is in the middle, Region 2, then what we have discussed so far holds true. The difference is that once the curve goes beyond the right knee of the V-nullcline, \frac{dV}{dt} becomes negative and V rapidly decays. While moving left, the curve crosses the red nullcline from right to left. From this point on, while both V and W diminish, the evolution of V dominates and the curve becomes horizontal once again.
This continues until the curve hits the left part of the V-nullcline. The curve begins to hug the V-nullcline and starts a slow downward journey. When it touches the left knee of the V-nullcline, it moves rapidly toward the right part of the V-nullcline. Note that this motion never hits the fixed point and therefore keeps repeating, which we can see in the plot below.
Plot of the VW-plane when the fixed point is in the middle region of the V-nullcline.
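The repeating motion can also be checked numerically. In this forward-Euler sketch, the parameter values are illustrative assumptions chosen so that the fixed point lies in the middle region; counting how often V crosses zero on the way up confirms that the trajectory keeps cycling instead of settling:

```python
import numpy as np

# Forward-Euler sketch of the FN system with the fixed point in the
# middle region, so the motion never ends at the fixed point
# (parameter values are illustrative assumptions):
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, n_steps = 0.01, 50_000

V, W = 0.0, 0.0
Vs = np.empty(n_steps)
for i in range(n_steps):
    dV = V - V**3 / 3 - W + I
    dW = eps * (V + a - b * W)
    V, W = V + dt * dV, W + dt * dW
    Vs[i] = V

# Count upward zero crossings of V; repeated crossings indicate a
# sustained oscillation rather than convergence to the fixed point.
upward_crossings = int(np.sum((Vs[:-1] < 0) & (Vs[1:] >= 0)))
print(upward_crossings)
```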
That leaves us with one last case to discuss — when the fixed point is on the left part, Region 1, of the V-nullcline. The results should look like the following plot. Note that the analyses we previously performed carry over.
Plot of the VW-plane when the fixed point is on the left side of the V-nullcline.
To explore the rich dynamics of the FN model described above, we need to repeatedly change various inputs without changing the underlying model. As such, we want a user interface that lets us easily change the model parameters, run the simulation, and analyze the new results without having to navigate the Model Builder tree structure to perform these various actions.
To accomplish this, we can turn to the Application Builder. This platform allows us to create an easy-to-use simulation app that exposes all of the essential aspects of the model, while keeping the rest behind the scenes. With this app, we can rapidly change the parameters via a user-friendly interface and study the results using both static figures and animations. The app also makes it easy for students to understand the FN model’s dynamics without having to worry about creating a model.
The important parameters of the FN model, i.e. a, b, ε, and I, are displayed in the app’s Model Parameters section. The graphical panels display various quantities of interest such as the waveform for V and W. We display the phase plane diagram in the top-right panel along with the V- and W-nullclines. The position of the fixed point is easily identifiable from that plot. Once the simulation is complete, we can animate the time trajectories by choosing the animation option from the ribbon toolbar. To get a summary of the simulation parameters and results, we can select the Simulation Report button.
App showing the dynamics of the FitzHugh-Nagumo model when the fixed point is in Region 2.
We can easily reproduce the cases described in the previous section with our app. The image above, for example, shows what happens when the fixed point is in Region 2. We can move the fixed point to either Region 1 or 3 by setting the current I to 0.1 or 2.5, respectively. Note that any other parameters in the app can also be changed to see if other interesting trends emerge.
The app that we’ve presented here is just one example of what you can create with the Application Builder. The design of your app, from its layout to the parameters that are included, is all up to you. The flexibility of the Application Builder enables you to add as much complexity as needed, in part thanks to the Method Editor for Java® methods. In a follow-up blog post, we’ll create an app to illustrate the dynamics of the more complicated HH model. Stay tuned!
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
In COMSOL Multiphysics, we can evaluate spatial integrals by using either integration component coupling operators or the integration tools under Derived Values in the Results section. While these integrals are always evaluated over a fixed region, we will sometimes want to vary the limits of integration and obtain results with respect to the new limits.
In a 1D problem, for example, the integration operators will normally calculate something like

\int_a^b f(x)\, dx

where a and b are the fixed end points of a domain.
What we want to do instead, though, is to compute

F(s) = \int_a^s f(x)\, dx

and obtain the result with respect to the variable limit of integration s.
Since the integration operators work over a fixed domain, let’s think about how to use them to obtain integrals over varying limits. The trick is to change the integrand by multiplying it by a function that is equal to one within the limits of integration and zero outside of them. That is, we define a kernel function

k(x, s) = \begin{cases} 1, & x \le s \\ 0, & x > s \end{cases}

and compute

F(s) = \int_a^b k(x, s)\, f(x)\, dx
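A minimal numerical sketch of this kernel trick, assuming a simple integrand f(x) = x² on [0, 1] (the integrand and grid are illustrative, not part of the model in the post):

```python
import numpy as np

# Kernel trick on a fixed quadrature grid: integrate f(x) = x^2 over
# [0, s] by multiplying the integrand by an indicator that is 1 for
# x <= s and 0 beyond it, then summing over the whole interval [a, b].
a, b, n = 0.0, 1.0, 1_000_001
x = np.linspace(a, b, n)
dx = x[1] - x[0]
f = x**2

def int_up_to(s):
    kernel = x <= s               # plays the role of k(x, s)
    return (kernel * f).sum() * dx

print(int_up_to(0.5))             # close to the exact 0.5**3/3 ~ 0.0416667
```

The full-domain sum plays the role of the fixed integration operator; only the kernel depends on the varying limit s.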
As indicated in our previous blog post about integration methods in time and space, we can build this kernel function by using a logical expression or a step function.
We also need to know how the auxiliary variable s is specified in COMSOL Multiphysics. This is where the dest operator comes into play. The dest operator forces its argument to be evaluated on the destination point rather than on the source points. In our case, if we define the left-hand side of the above equation as a variable in the COMSOL software, we type dest(x) instead of s on the right-hand side.
Let’s demonstrate this with an example. In this case, a model simulates a steady-state exothermic reaction in a parallel plate reactor. There is a heating cylinder near the center and an inlet on the left, at x = 0. One of the compounds has a concentration cA.
What we want to do is to calculate the total moles per unit thickness between the inlet and a cross section at a distance of s from the inlet. We then plot the result with respect to the distance from the inlet.
First, we define an integration coupling operator for the whole domain, keeping the default integration operator name as intop1. If we evaluate intop1(cA), we get the integral for the entire domain. To vary the limit of integration horizontally, we build a kernel using a step function, which we’ll call step1. We then define a new variable, intUpTox.
Combining the integration coupling operator, dest operator, and new variable to evaluate an integral with moving limits.
Let’s see how the variable described in the image above works. As a variable, it is a distributed quantity with a value equal to what the integration operator returns. During the integration, x is evaluated at every point in the domain of integration, while dest(x) is evaluated only at the point where intUpTox is computed. We then plot intUpTox along a horizontal line that spans from the inlet all the way to the outlet.
Integrating concentration cA over a horizontally varying limit of integration.
If we instead plot intUpTox/intop1(cA)*100, we get a graph of the percentile mass to the left of a given point with respect to the x-coordinate.
In the above integral, the limit of integration is given explicitly in terms of the x-coordinate. Sometimes, though, the limit may only be given by an implicit criterion, and it may not be straightforward to invert such a criterion to obtain explicit limits. For example, say that we want to know the percentage of total moles within a certain radial distance from the center of the heating cylinder. Given a distance s from the center (x_{pos}, y_{pos}) of the cylinder, we want a kernel function equal to one inside the radial distance and zero outside of it. To do so, we can use:

\sqrt{(x - x_{pos})^2 + (y - y_{pos})^2} \le s

But how do we specify s? We again use the dest operator: s = \sqrt{(\mathrm{dest}(x) - x_{pos})^2 + (\mathrm{dest}(y) - y_{pos})^2}, and the kernel is

\sqrt{(x - x_{pos})^2 + (y - y_{pos})^2} \le \sqrt{(\mathrm{dest}(x) - x_{pos})^2 + (\mathrm{dest}(y) - y_{pos})^2}
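The implicit-limit kernel can be sketched numerically on a grid. Everything here — the stand-in concentration field, the grid, and the center point — is an assumption for illustration only:

```python
import numpy as np

# 2D analogue of the implicit-limit kernel: percentage of the total
# "mass" of a field c(x, y) lying within radial distance s of a
# center point (x_pos, y_pos). Grid, field, and center are stand-ins.
nx = 401
xs = np.linspace(-1.0, 1.0, nx)
X, Y = np.meshgrid(xs, xs)
c = np.exp(-(X**2 + Y**2))            # stand-in concentration field
x_pos, y_pos = 0.0, 0.0               # assumed center point
r = np.sqrt((X - x_pos)**2 + (Y - y_pos)**2)
total = c.sum()

def percent_within(s):
    kernel = r <= s                   # 1 inside the radius, 0 outside
    return c[kernel].sum() / total * 100.0

print(percent_within(0.5), percent_within(2.0))  # grows toward 100
```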
We implement this method by defining a Cut Line data set to obtain the horizontal line through the hole’s center and placing a graph of our integration expression over it. It is not necessary for the cut line to be horizontal; it just needs to traverse the full domain that the integration operator defines. Furthermore, s should vary monotonically over the cut line.
New data set made with a cut line passing through the center of the hole.
In the image below, we added an inset showing a zoomed-in view of the bottom-left area of the graph. The inset shows that there is no result on the plot for a distance of less than 2 mm from the center of the heating hole. This is because that region is not in our computational domain. Since the hole has a radius of 2 mm, the ordinate starts at 0 at an abscissa of 2 mm.
Percentage of mass in a domain, which is within a radial distance from the fixed point. The radial distance is varied by using the dest operator in an implicit expression.
In the previous sections, we evaluated integrals where the integrands were given. But what do we do if we have the integral and want to solve for the integrand? An example of such a problem is the Fredholm equation of the first kind

g(x) = \int_a^b K(x, s)\, u(s)\, ds

where we want to solve for the function u when given the function g and the kernel K. These types of integral equations often arise in inverse problems.
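First-kind Fredholm equations are typically ill-posed, which is why regularization is needed. The toy discretization below is a hedged sketch: the kernel, the grid, and the Tikhonov regularization parameter lam are all assumptions, not taken from any model in the post:

```python
import numpy as np

# Toy discretization of a Fredholm equation of the first kind,
# g(x) = integral of K(x, s) u(s) ds, solved for u with Tikhonov
# regularization. Kernel, grid, and lam are illustrative assumptions.
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)  # smooth kernel
A = K * dx                                            # quadrature weights
u_true = np.sin(np.pi * x)
g = A @ u_true                                        # synthetic, noise-free data

lam = 1e-6                                            # regularization parameter
u = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
rel_err = np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
print(rel_err)                                        # small for noise-free data
```

Without the lam * np.eye(n) term, the normal equations are severely ill-conditioned for smooth kernels; the regularization trades a small bias for stability.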
In integro-differential equations, both integrals and derivatives of the unknown function are involved. For example, say we have

\frac{du}{dx} + \int_a^b K(x, s)\, u(s)\, ds = g(x)

and want to solve for u given all of the other functions.
In our Application Gallery, we have the following integro-partial differential equation:

-\frac{d}{dx}\left(k_c \frac{dT}{dx}\right) = \sigma\left(T_{amb}^4 - T^4\right) + \int_0^L k(x, x')\, T^4(x')\, dx'

where we solve for the temperature T(x) and are given all of the other functions and parameters.
We can solve the above problem using the Heat Transfer in Solids interface. In this interface, we add the right-hand side of the above equation as a temperature-dependent heat source. The first source term is straightforward, but we need to add the integral in the second term using an integration coupling operator and the dest operator. For the integration operator named intop1, we can evaluate

\int_0^L k(x, x')\, T^4(x')\, dx'

with intop1(k(dest(x),x)*T^4).
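As a conceptual analogue, the dest-operator construction corresponds, after discretization, to a matrix-vector product in which each row of the kernel matrix is evaluated at one destination point. The kernel and temperature profile in this sketch are stand-in assumptions:

```python
import numpy as np

# Discrete analogue of intop1(k(dest(x),x)*T^4): for each destination
# point x_i, integrate k(x_i, x')*T(x')^4 over the source points x'.
# The kernel and temperature profile below are stand-in assumptions.
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
T = 300.0 + 50.0 * x                          # stand-in temperature profile

def k(x_dest, x_src):
    return np.exp(-np.abs(x_dest - x_src))    # stand-in kernel

# Row i corresponds to dest(x) = x[i]; columns run over the source x'.
Kmat = k(x[:, None], x[None, :])
integral = Kmat @ T**4 * dx                   # one value per destination point
print(integral.shape)
```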
For more details on the implementation and physical background of this problem, you can download the integro-partial differential equation tutorial model here. Please note that some integral equations tend to be singular, and we need to use regularization to obtain solutions.
In today’s blog post, we’ve learned how to integrate over varying spatial limits. This is necessary for evaluating integrals in postprocessing or formulating integral and integro-differential equations. For more information, you can browse related content on the COMSOL Blog:
For a complete list of integration and other operators, please refer to the COMSOL Reference Manual.
If you have questions about the technique discussed here or about your COMSOL Multiphysics model, feel free to contact us.