Let’s consider two objects labeled A and B, shown below. The three distances that we want to compute are:
Two objects, A and B, and the distances that we want to compute.
We can compute all of these various distances using a combination of the General Extrusion and Minimum component couplings in COMSOL Multiphysics. Let’s first look at how to use the General Extrusion component coupling. We name the operator A_b
and define its Source Selection to be the boundaries of object A. Within the Advanced section, we use the Mesh search method of Closest point. These settings are shown below. All other settings for this operator can be left at their defaults.
The settings for the General Extrusion component coupling used to compute the closest point distance. Note that the Mesh search method is set to Closest point.
We use this operator within the definition of a variable called d_A
, defined as:
sqrt((x-A_b(x))^2+(y-A_b(y))^2)
This variable is defined over the domains where we want to compute the distance field; in this case, just the surrounding domain. We can also compute the negative of the gradient of this distance field, −∇d_A. This gives us the components of a vector field that points toward the closest point on the boundary of A. We can use the Differentiation operator d(d_A,x)
and d(d_A,y)
to take the spatial derivatives, as shown in the screenshot below.
The variable definitions.
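Outside of COMSOL, the same construction is easy to prototype: pick the nearest point on a sampled boundary (the role of the A_b operator with its Closest point search), form the distance, and recover the direction as −∇d_A by finite differences. The circular boundary below is a made-up stand-in for object A:

```python
import numpy as np

# Hypothetical boundary of object A: points sampled on a unit circle.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
A_bnd = np.column_stack((np.cos(theta), np.sin(theta)))

def closest_point(p, boundary):
    """Return the boundary point nearest to p (mimics A_b with Closest point search)."""
    i = np.argmin(np.linalg.norm(boundary - np.asarray(p), axis=1))
    return boundary[i]

def d_A(p, boundary=A_bnd):
    """Distance to the closest boundary point: sqrt((x-A_b(x))^2 + (y-A_b(y))^2)."""
    q = closest_point(p, boundary)
    return np.hypot(p[0] - q[0], p[1] - q[1])

def direction(p, h=1e-5):
    """Unit vector toward the boundary: -grad(d_A), by central finite differences."""
    gx = (d_A((p[0] + h, p[1])) - d_A((p[0] - h, p[1]))) / (2 * h)
    gy = (d_A((p[0], p[1] + h)) - d_A((p[0], p[1] - h))) / (2 * h)
    return -np.array([gx, gy])
```

For the point (2, 0), the closest boundary point is (1, 0), so d_A is 1 and the direction vector points along −x, just as the arrows in the plot below do.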
We can use these variables anywhere that we want. For example, we can plot the distance field or make material properties dependent upon distance. The image below plots the contours of the distance and the direction vectors. Note that the distance is computed even in the region behind object B. We clearly get quite a bit of information here, but there is a substantial computational cost, since the shortest distance is computed at every point in the surrounding domain. There are also times when we don’t need all of this information and just want the distances between objects.
The distance field (contour lines) and shortest direction to the boundary of object A (arrows) in the domain surrounding the two objects.
Let’s make things a bit easier and only concern ourselves with the distance between two objects and not the direction. We use the same General Extrusion component coupling, but only need to define a variable on the boundary of object B to compute the distance.
The variable defining the distance between the objects.
While this is the same distance function we used before, we don’t need a mesh in the intermediate space. We don’t even need a mesh over domains A and B; there just needs to be a mesh on the boundaries of the objects. This approach takes much less time, but it gives us only the shortest distance from object A to every point on the boundary of object B. We cannot recover the direction vector. We can also flip all of these definitions around and compute the shortest distance from object B to every point on the boundary of object A. These distances, shown in the plot below, are available along the boundaries of the objects.
The distance from every point on the boundary of object B to the closest point on object A and vice versa.
Now, let’s find the line that describes the shortest distance between the two objects. In the previous section, we saw that we can compute two variables, d_AB
and d_BA
, which describe the shortest distances between A and B and vice versa. We now want to find the minimum distance between the boundaries of these domains. Thus, we set up two different Minimum component couplings: one for the boundary of object A and another for object B. We call these operators minA
and minB
, as shown in the screenshot below.
The definition of the Minimum component coupling over the boundary of object A.
We then call these Minimum component couplings to extract the minimum distance. We can also provide a second argument to the Minimum component coupling to find the coordinates at which the distance is at a minimum. For example, by defining the variable A_x
as the expression minA(d_BA,x)
, it takes on the value of the x-coordinate at which d_BA
is at a minimum over the boundary of A.
The definitions for the coordinates of the shortest line segment between two domains.
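The two Minimum couplings amount to an argmin over boundary points. A toy sketch with made-up boundary samplings, returning the shortest distance together with the coordinates on each boundary where it is attained (the analog of minA(d_BA,x) and friends):

```python
import numpy as np

# Hypothetical boundary samplings of objects A and B.
A_bnd = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
B_bnd = np.array([[2.0, 0.0], [2.0, 1.0], [3.0, 0.5]])

def pairwise_min(src, dst):
    """Shortest distance between two boundary point sets, plus the points
    on each boundary where the minimum is attained."""
    # d[i, j]: distance from dst point i to src point j
    d = np.linalg.norm(dst[:, None, :] - src[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return d[i, j], dst[i], src[j]

dist, p_B, p_A = pairwise_min(A_bnd, B_bnd)  # endpoints of the shortest segment
```

Here the segment runs from (0.5, 0.5) on A to (2, 0) on B, with length sqrt(2.5).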
We can use the variables defining these coordinates anywhere we want. For example, we can use the Cut Line feature to show the shortest line segment connecting the two objects, as seen in the following image. If we have a meshed domain and a solution between the two objects, then we can plot the fields just along the shortest line between the two.
The Cut Line feature, used to determine the shortest line between objects.
These techniques for determining distances can be used in any model. Although the examples presented here are in 2D, they can all be generalized to 3D as well. However, computing the 3D distance field does take a relatively long time, whereas calculating distances between boundaries and clearances is less intensive.
Computing the distance field around nonsmooth shapes also requires a bit more care. As shown in the figure below, the distance field around reentrant corners is nonsmooth, so the direction vector is undefined along the lines that are equidistant from two different parts of the boundary. Resolving this nonsmoothness of the distance field requires a finer mesh.
The distance field around and inside an object with reentrant corners on a coarse mesh (left) and a more refined mesh (right). The smoothness of the distance field is mesh dependent in such cases.
Once we have computed this distance field on an appropriately fine mesh, we treat it like any other variable in our model. For example, we can make material properties a function of distance from a surface. The image below shows such a representative material distribution.
A representative material distribution that is a function of distance to the surface.
It is also possible to use the distance function to help visualize our results. Suppose we are only interested in the part of the solution that is within a specific distance of the surface. In this case, we can use the Filter subfeature when making a volume plot. We then enter a logical expression to only display the results that are within a certain distance of the object’s surface, an example of which is shown below.
Using the distance function to plot only the solution within 5 mm of the surface.
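In spirit, the Filter expression is just a logical mask on the distance variable. A toy sketch with made-up sampled values of distance and a solution field:

```python
import numpy as np

# Hypothetical sampled fields: distance to the surface and a solution value
# (e.g., temperature) at the same sample points.
d = np.array([1.2, 4.9, 5.0, 7.3, 12.0])      # distance [mm]
T = np.array([300.0, 310.0, 320.0, 330.0, 340.0])

# The Filter logical expression "d < 5[mm]" keeps only the nearby points.
mask = d < 5.0
T_near = T[mask]
```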
We have demonstrated how to compute a distance field to a boundary within a model, the distances between boundaries, and the shortest line segment between two boundaries. This approach also works to calculate distance fields from edges and points in 3D models. The computed distances can be used anywhere within the setup, physics definitions, and results evaluation of a model. We’ve shared a couple of examples here, but now it’s your turn. We would love to hear what you come up with!
Implementing the Fourier transformation in a simulation can be useful in Fourier optics, signal processing (for use in frequency pattern extraction), and noise reduction and filtering via image processing. In Fourier optics, the Fresnel approximation is one of the approximation methods used for calculating the field near the diffracting aperture. Suppose a diffracting aperture is located in the plane at z = 0. The diffracted electric field in the plane at the distance z = d from the diffracting aperture is calculated as

E_d(x) = sqrt(j/(λd)) e^{−jkd} ∫ E_0(x′) e^{−jk(x−x′)²/(2d)} dx′

where λ is the wavelength and E_0 and E_d account for the electric field at the z = 0 plane and the z = d plane, respectively. (See Ref. 1 for more details.)
In this approximation formula, the diffracted field is calculated by Fourier transforming the incident field multiplied by the quadratic phase function e^{−jkx′²/(2d)}.
The sign convention of the phase function must follow the sign convention of the time dependence of the fields. In COMSOL Multiphysics, the time dependence of the electromagnetic fields is of the form e^{jωt}. So, the sign of the quadratic phase function is negative.
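As a rough numerical check of this formula (not part of the COMSOL model), the Fresnel integral can be evaluated by direct quadrature. The wavelength, distance, and slit aperture below are made-up illustrative values; the negative phase signs follow the e^{jωt} convention above.

```python
import numpy as np

# Illustrative values (not from the model): 1 um wavelength, 10 mm propagation.
lam = 1e-6                      # wavelength [m]
d = 0.01                        # propagation distance [m]
k = 2 * np.pi / lam

# Source plane z = 0: a uniformly illuminated slit of half-width a.
a = 0.2e-3
x_src = np.linspace(-0.5e-3, 0.5e-3, 2000)
dx = x_src[1] - x_src[0]
E0 = np.where(np.abs(x_src) <= a, 1.0, 0.0)

def fresnel(x_obs):
    """E_d(x) = sqrt(j/(lam*d)) e^{-jkd} * sum of E_0(x') e^{-jk(x-x')^2/(2d)} dx',
    i.e., the Fresnel integral with the e^{j*omega*t} sign convention."""
    quad = np.exp(-1j * k * (x_obs[:, None] - x_src[None, :]) ** 2 / (2 * d))
    return np.sqrt(1j / (lam * d)) * np.exp(-1j * k * d) * (quad @ E0) * dx

x_obs = np.linspace(-1e-3, 1e-3, 401)
E_d = fresnel(x_obs)  # a symmetric aperture gives a symmetric |E_d|
```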
Now, let’s take a look at an example of a Fresnel lens. A Fresnel lens is a regular plano-convex lens except that its curved surface is folded toward the flat side at every multiple of mλ/(n − 1) along the lens height, where m is an integer and n is the refractive index of the lens material. This is called an m^{th}-order Fresnel lens.
The shift of the surface by this particular height along the light propagation direction only changes the phase of the light by 2mπ (roughly speaking and under the paraxial approximation). Because of this, the folded lens fundamentally reproduces the same wavefront in the far field and behaves like the original unfolded lens. The main difference is the diffraction effect. Regular lenses basically don’t show any diffraction (if there is no vignetting by a hard aperture), while Fresnel lenses always show small diffraction patterns around the main spot due to the surface discontinuities and internal reflections.
When a Fresnel lens is designed digitally, the lens surface is made up of discrete layers, giving it a staircase-like appearance. This is called a multilevel Fresnel lens. Due to the flat part of the steps, the diffraction pattern of a multilevel Fresnel lens typically includes a zeroth-order background in addition to the higher-order diffraction.
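A small sketch of this digitization step (with made-up wavelength, refractive index, and lens curvature; the fold height mλ/(n − 1) follows the definition above):

```python
import numpy as np

# Illustrative values: lens index, wavelength, first-order folding.
n = 1.5
lam = 532e-9
m = 1
fold = m * lam / (n - 1)      # fold the sag at every multiple of m*lam/(n-1)

def multilevel_sag(sag, levels=16):
    """Fold a smooth lens sag profile modulo the Fresnel height, then
    quantize it into a staircase with the given number of levels."""
    folded = np.mod(sag, fold)
    step = fold / levels
    return np.floor(folded / step) * step

# Paraxial sag of a hypothetical plano-convex surface with radius R.
r = np.linspace(0.0, 1e-3, 5)
R = 5e-3
profile = multilevel_sag(r**2 / (2 * R))
```

Every point of the resulting profile sits on one of 16 discrete levels below the fold height, which is the staircase shape described above.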
A Fresnel lens in a lighthouse in Boston. Image by Manfred Schmidt — Own work. Licensed under CC BY-SA 4.0, via Wikimedia Commons.
Why are we using a Fresnel lens as our example? The reason is similar to why lighthouses use Fresnel lenses in their operations. A Fresnel lens is folded into a height of mλ/(n − 1), so it can be extremely thin and therefore lighter and smaller in volume, which is beneficial for the optics of lighthouses compared to a large, heavy, and thick lens of the conventional refractive type. Likewise, for our purposes, Fresnel lenses can be easier to simulate in COMSOL Multiphysics and the add-on Wave Optics Module because the number of elements is manageable.
The figure below depicts the optics layout that we are trying to simulate, demonstrating how the Fourier transformation can be applied to a solution computed by the Wave Optics, Frequency Domain interface.
Focusing 16-level Fresnel lens model.
This is a first-order Fresnel lens with surfaces that are digitized in 16 levels. A plane wave is incident on the incidence plane. At the exit plane at z = 0, the field diffracted by the Fresnel lens is E_0. This process can easily be modeled and simulated with the Wave Optics, Frequency Domain interface. Then, we calculate the field E_d at the focal plane at z = d by applying the Fourier transformation in the Fresnel approximation, as described above.
The figures below are the result of our computation, with the electric field component in the domains (top) and on the boundary corresponding to the exit plane (bottom). Note that the geometry is not drawn to scale in the vertical axis. We can clearly see the positively curved wavefront from the center and from every air gap between the saw teeth. Note that the reflection from the lens surfaces leads to some slight interference in the domain field result and ripples in the boundary field result, since no antireflective coating is modeled here.
The computed electric field component in the Fresnel lens and surrounding air domains (vertical axis is not to scale).
The computed electric field component at the exit plane.
Let’s move on to the Fourier transformation. In the previous example of an analytical function, we prepared two data sets: one for the source space and one for the Fourier space. The parameter names that were defined in the Settings window of the data set were the spatial coordinates in the source plane and the spatial coordinates in the image plane.
In today’s example, the source space is already created in the computed data set, Study 1/Solution 1 (sol1){dset1}, with the computed solutions. All we need to do is create a one-dimensional data set, Grid1D {grid1}, with parameters for the Fourier space; i.e., the spatial coordinate in the focal plane. We then relate it to the source data set, as seen in the figure below. Then, we define an integration operator intop1
on the exit plane.
Settings for the data set for the transformation.
The intop1
operator defined on the exit plane (vertical axis is not to scale).
Finally, we define the Fourier transformation in a 1D plot, shown below. It’s important to specify the data set we previously created for the transformation and to let COMSOL Multiphysics know that the Fourier-space coordinate is the destination independent variable by using the dest
operator.
Settings for the Fourier transformation in a 1D plot.
The end result is shown in the following plot. This is a typical image of the beam focused through a multilevel Fresnel lens in the focal plane (see Ref. 2). There is the main spot from the first-order diffraction in the center and a weaker background caused by the zeroth-order (nondiffracted) and higher-order diffraction.
Electric field norm plot of the focused beam through a 16-level Fresnel lens.
In this blog post, we learned how to implement the Fourier transformation for computed solutions. This functionality is useful for long-distance propagation calculation in COMSOL Multiphysics and extends electromagnetic simulation to Fourier optics.
On the shoreline, crashing waves and the continuous movement of the tides cause coastal erosion, a phenomenon that removes sediment from beaches and wears away land.
A rock formation affected by coastal erosion. Image by John Nuttall — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
Although coastal erosion has benefits, such as creating sand for beaches, it also causes damage to seaside property and habitats. To help predict this damage, researchers can use shallow water equations to learn more about coastal erosion. These equations enable scientists to model oceanographic and atmospheric fluid flow, thereby predicting what areas will be affected by coastal erosion and other issues, such as polar ice cap melting and pollution.
Shallow water equations are beneficial compared to the Navier-Stokes equations, which may be problematic depending on how free surfaces are resolved and the scale of the modeling domain. Today, we highlight a tutorial that showcases how to solve shallow water equations using the power of equation-based modeling.
In this shallow water equation model, we can describe the physics by adding our own equations — a feature called equation-based modeling. We use the General Form PDE interface and two dependent variables to ensure that the modeling process is straightforward. This way, we can easily define expressions as model variables, which comes in handy when defining the initial wave profile.
This simple 1D model uses the Saint-Venant shallow water equations to study a wave settling over a variable bed as a function of time.
The 1D model featured here would require substantial work to convert into a 2D model for solving typical applications. This tutorial is therefore most useful as an example of the benefits of equation-based modeling.
A vertical section of the fluid domain. Here, z_{f} is the analytical expression for the seabed profile and z_{s} is the water surface profile.
The model, which investigates shallow water in a channel, has constraints at both ends and uses a wave profile as the initial condition. In order to easily alter parameters like the wave amplitude and bed shape, we can use mathematical relations to represent the initial wave and bed shapes. Please note that the model has a difference in scale between the x- and y-directions, as seen in the plots below.
Plots of the seabed profile (left) and a comparison of the initial water surface profile with the seabed profile (right).
We see that the flow develops hydraulic jump discontinuities over time, which can cause instability in the solution. To stabilize the solution, we can add an artificial viscosity that makes the cell Reynolds number of order unity. The hydraulic jumps are replaced with steep fronts that can be resolved on a grid.
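The tuning rule is simply ν ≈ |u|h, so that the cell Reynolds number Re_c = |u|h/ν is of order unity. A toy sketch with hypothetical wave speeds and mesh size:

```python
import numpy as np

# Hypothetical discretization data: local wave speed estimate and element size.
u = np.array([0.1, 1.0, 2.5])   # per-cell speed estimate [m/s]
h = 0.05                        # element size [m]

def artificial_viscosity(u, h):
    """Pick nu so that the cell Reynolds number Re_c = |u| h / nu is unity."""
    return np.abs(u) * h

nu = artificial_viscosity(u, h)
Re_c = np.abs(u) * h / nu       # of order one by construction
```

In practice, a small multiplicative tuning factor is often applied, trading sharpness of the fronts against stability.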
Switching gears, let’s take a look at our results. After running the simulation for 60 seconds, we see results that show the water surface and seabed slope at 6 different times, from the start of the simulation to 15 seconds later.
Plots of the seabed profile and water surface level at 3-second increments.
These results clearly indicate that the seabed topography influences the water surface elevation. This, in turn, affects the impact of coastal erosion.
We can share these custom results with others by creating and exporting an animation to help visualize our findings — something that is easy to do in COMSOL Multiphysics.
An animation of the simulation results.
As a next step, try this tutorial yourself by clicking on the button below.
The Open Recovery Files feature is useful for anyone running simulations with multiple parameters, namely:
While these simulations are running, a recovery file is created after the first solution is found and is then updated after each solved iteration. The recovery file is also updated following any of these three events:
If, at any point in your simulation, COMSOL Multiphysics closes unexpectedly, you can open the recovery file with the saved solutions. You can then continue running the simulation from where it left off.
The Open Recovery File window. You can view additional details by clicking on the Show/Hide Details button.
Please note that there are some limitations and subtleties to using this functionality. Currently, the main limitation is that a Parametric Sweep can’t be continued — you can either rerun the simulation from the beginning or manually run the Parametric Sweep with the remaining values and store the solutions in a new location (without overwriting the first part of the simulation). When the simulation is terminated this way, the data is stored in the Parametric Solution as it should be, but the set of solutions from the sweep is incomplete. To access the individual parametric solutions, you might need to redirect the data sets to use individual solutions.
To learn more about the ins and outs of this feature, watch the video at the top of this post, where we discuss everything you need to know about opening recovery files. First, we open a model and run a simulation. After waiting until the simulation is nearly done, we force quit the software so that we may reopen it and finish the simulation.
Topology optimization is a useful capability because it can help us find designs that we would not have reasonably been able to think of ourselves. When developing a design, however, this is only the first step. It may not be reasonable or possible to construct a particular design found through topology optimization, either because the design is too costly to produce or it is simply not possible to manufacture.
Topology optimization results for an MBB beam.
To address these concerns, we can come up with new designs that are based on the results of topology optimization, and then carry out further simulation analyses on them. But how do we do this? As it turns out, COMSOL Multiphysics makes it simple to create geometries from the 2D and 3D plots of your topology optimization results, which you can continue to work with directly in COMSOL Multiphysics or export to a wide range of CAD software platforms.
To view topology optimization results that are in 2D, we can create a contour plot. Let’s use the Minimizing the Flow Velocity in a Microchannel tutorial to demonstrate this process. The goal of the tutorial is to find an optimal distribution of a porous filling material to minimize the horizontal flow velocity in the center of a microchannel.
First, we open up the model file included in the tutorial and go to the Contour 1 plot feature under the Velocity (spf) plot group.
The horizontal velocity (surface plot) and velocity field (streamlines) after optimization. The black contours represent the filling material.
In the above plot, the black contour is where the design variable equals 0.5. This indicates the border between the open channel and the filling material. This is the result that we would like to incorporate into the geometry. In other applications, the expression and the exact level to plot may differ, but the principle is the same: find a contour that describes the limit between the solid and nonsolid materials (typically a fluid of some kind).
To create a geometry from this contour plot, we right-click the Contour feature node and choose Add Plot Data to Export. We need to make sure that we choose the data format as Sectionwise before we export the file.
The Sectionwise format describes the exported data using one section with the coordinates, one with the element connectivity, and another with the data columns. It is important to note that the middle section, which describes how the coordinates in the first section are connected, is what makes it possible to represent a contour plot with several closed loops or open curves.
The Spreadsheet export format is not suited for this particular use for several reasons, most importantly because it assumes that all coordinates are connected one after the other. This means that if there is more than one isolated contour, the Interpolation Curve feature will fail to build. Also, the coordinates are scrambled, so the curve in the next step (discussed below) would not be drawn in the same way as seen in the contour plot.
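A sketch of why the connectivity section matters: with explicit index pairs, disjoint loops and open curves can be reassembled into separate polylines. The coordinates and elements below are made up for illustration:

```python
from collections import defaultdict

# Sectionwise-style data: a coordinate list plus index pairs connecting them.
coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0),   # a closed loop
          (3.0, 0.0), (4.0, 0.0)]               # a separate open curve
elements = [(0, 1), (1, 2), (2, 0), (3, 4)]     # index pairs into coords

def assemble_curves(coords, elements):
    """Group connected element chains into separate polylines (index lists)."""
    adj = defaultdict(list)
    for a, b in elements:
        adj[a].append(b)
        adj[b].append(a)
    seen, curves = set(), []
    for start in adj:
        if start in seen:
            continue
        # walk the chain/loop from this node until no unvisited neighbor remains
        curve, node, prev = [start], start, None
        seen.add(start)
        while True:
            nxt = [n for n in adj[node] if n != prev and n not in seen]
            if not nxt:
                break
            prev, node = node, nxt[0]
            seen.add(node)
            curve.append(node)
        curves.append(curve)
    return curves

curves = assemble_curves(coords, elements)  # two separate polylines
```

A flat spreadsheet of coordinates alone cannot recover this grouping, which is why the Interpolation Curve feature needs the Sectionwise data.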
To create the new geometry, we choose Add Component from the Home toolbar and choose a new 2D Component. Then, we copy the geometry feature nodes from the original geometry and paste them to the geometry sequence of the new 2D component. After this, we add an Interpolation Curve from the More Primitives menu on the Geometry toolbar and set the type as Open Curve, data format as Sectionwise, and a tolerance of 2e-2.
A smaller tolerance will give a curve that is more true to the data, but the outcome might be an intricate or “wiggly” geometry. In turn, a higher tolerance may give a curve that is too simplified and quite far from the optimized result.
Geometry with the interpolation curves representing the results of the topology optimization.
The geometry can now be used to run further simulations and to verify the created geometry within COMSOL Multiphysics.
The DXF format is a 2D format that most CAD software platforms can read. DXF also describes the higher-order polygons between the points, so it usually gives a better representation than exporting only the points.
To export the optimized topology from this geometry to a DXF file, we can follow the steps below. Please note that there is an optional step if you only want to include the shape of the optimized topology in your DXF file.
Now, let’s see what to do when working with topology optimization results that are in 3D.
After performing a topology optimization in 3D, we usually view the resulting shape by creating a plot of the design variable; for example, an isosurface plot. We can directly export such a plot to a format that is compatible with COMSOL Multiphysics and CAD software and can even be used directly for 3D printing. This file format is the STL format, where the surfaces from the results plot are saved as a collection of triangles. It is a common standard file format for 3D printing and 3D scans in general.
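For reference, the ASCII STL structure such an export produces is easy to sketch: each surface triangle becomes a facet record between solid/endsolid lines. This is a minimal hypothetical writer, not COMSOL's exporter; the triangle data is made up:

```python
import io

def write_stl(name, triangles, out):
    """Write triangles (each a tuple of three (x, y, z) vertices) as ASCII STL.
    Normals are left as (0, 0, 0); most importers recompute them."""
    out.write(f"solid {name}\n")
    for v0, v1, v2 in triangles:
        out.write("  facet normal 0 0 0\n    outer loop\n")
        for v in (v0, v1, v2):
            out.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        out.write("    endloop\n  endfacet\n")
    out.write(f"endsolid {name}\n")

buf = io.StringIO()
write_stl("bridge", [((0, 0, 0), (1, 0, 0), (0, 1, 0))], buf)
stl_text = buf.getvalue()
```

The binary STL variant stores the same triangle soup more compactly; both are accepted by CAD tools and 3D printers.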
In COMSOL Multiphysics, it is possible to export an STL file from the following plot features:
The software also supports adding a Deformation node on the plot feature, in case we want to export a deformed plot. The volume and isosurface plots are the most commonly used plot types for topology optimization, so we will focus our discussion on these two options.
To create an isosurface plot, we first add a 3D plot group to which we add an Isosurface feature node. In the Expression field, we then enter the design variable name, set the entry method as Levels, and fill in an appropriate value of the design variable representing the interface between the solid and nonsolid materials.
To demonstrate this process, let’s look at the example of the bridge shown below, where the optimal material distribution takes the familiar shape of an arch bridge. The optimization algorithm maximizes the stiffness of the bridge under a given load to reach the displayed solution. To obtain the displayed isosurface plot, we use a level of 0.1 for the design variable.
An isosurface plot of the 3D topology optimization for a deck arch bridge.
As you can see in the screenshot above, isosurface plots are not necessarily capped or airtight, so an exported volume plot may be a better choice, especially if we want to run further simulation analyses in COMSOL Multiphysics.
We can create a suitable plot by adding a Volume feature node to a 3D plot group. Then, we add a Filter node under Volume and set a suitable expression for inclusion. In this example, we use the expression rho_design > 0.1.
A volume plot of the deck arch bridge.
Exporting the data into an appropriate file format is simple. We right-click the Volume or Isosurface feature node and select Add Plot Data to Export. In the settings window of the resulting Plot node, we then select STL Binary File (*.stl) or STL Text File (*.stl) from the Data format drop-down list.
The exported STL file is readily readable by most CAD software platforms. To continue with the simulation of the geometry, import the STL file to a new COMSOL Multiphysics model, a process that we discuss in a previous blog post.
If you want to compare actual CAD drawings with your optimized results, you need to export the data in a format that can be imported into the CAD software you are using. The DXF format (for 2D) and the STL format (for 3D) are widely used formats and should be possible to import in almost any software platform.
In this blog post, we have discussed the steps needed to export topology optimization results in the DXF and STL formats. This will enable you to more efficiently analyze your model geometries within COMSOL Multiphysics and CAD software.
When setting up a simulation in COMSOL Multiphysics, you may want to seek out more information on the software as you go. Whether it’s learning about a node in the model tree, the settings for an operation you’re currently working in, or the differences between a set of options you are choosing from and what they will mean for your model, it’s helpful to have guidance available at your fingertips. This is the convenience that the Help window in COMSOL Multiphysics provides.
The Help window, displaying topic-based content for the Electric Potential boundary condition.
The Help window, accessed by clicking the Help button in the top right-hand corner of the software (the blue question mark) or pressing the F1 key on your keyboard, enables you to promptly access information pertaining to the model tree node or window in which you are currently working. The text that displays updates automatically as you select items in the software or add settings to your model. This enables you to instantly get help right when you need it.
Since this window appears in the COMSOL Desktop® when opened, you can access the information you need without having to compete for screen space with your simulation. Instead of having to fit multiple windows on your monitor, you are able to view the help content and Model Builder together.
Additionally, you can search and navigate the text in the Help window using the respective buttons.
In addition to receiving topic-based help, there may be times when you want to more easily access, navigate, and search all of the comprehensive COMSOL Multiphysics documentation. This includes the user guides and manuals for any modules for which you have a license. You can find this documentation in the Documentation window, which you can access either from within COMSOL Multiphysics, by going to File > Help, or externally from your computer in your COMSOL Multiphysics installation folder.
The Documentation window, with the AC/DC Module User’s Guide opened.
The Documentation window enables you to quickly and easily access your entire library of COMSOL Multiphysics documentation, all within a single window. When open, you can choose between the PDF or HTML version of any guide, manual, or handbook. Additionally, the sections of each individual document are hyperlinked and bookmarked. The sections are displayed on the left side of the window, as shown above. This enables you to quickly jump between different chapters and documents.
This resource also provides more options when it comes to searching through the software documentation. This includes the ability to search through the entire library, only within a specified set of documents you have preselected, or exclusively through the Application Library Manual for all licensed products. Searching the Application Library Manual, in particular, enables you to find models and applications that demonstrate use of some specific physics, software features, and functionality.
Whereas the Help window provides quick access to documentation while modeling, the Documentation window serves as a more comprehensive resource when you need further clarification and more powerful search tools.
Now you know that you can access information about what you are working on. The ability to access modeling examples relevant to your work is equally as important. These examples enable you to learn how to use the software, examine COMSOL Multiphysics models, and access guidance that you can apply to your own simulations. In the COMSOL® software, the modeling examples can be found in another valuable resource, the Application Libraries window.
The Application Libraries window, with the Thermal Actuator tutorial model selected and displayed.
The Application Libraries window, accessed by going to File > Application Libraries, contains hundreds of models and simulation applications, spanning every module and engineering discipline. Using the Search field, you can find applications and models that cover some specific physics or feature that you want to see how you can use. Each entry includes a brief summary of the model; the COMSOL Multiphysics model file; and a PDF document that provides a comprehensive introduction and detailed, step-by-step overview of the model-building process. This provides you with the logic behind how the model is built, why and how boundary conditions are applied, and other useful information that you can use as insight into the models you create.
By following along with any of the tutorial models available, you can experience building a model firsthand. In addition, relevant examples from the Application Libraries can be experimented with and expanded upon, serving as a starting point for your own designs.
Tip: The tutorial models and demo applications featured in the Application Libraries are also available online in the Application Gallery.
Now that we’ve introduced the help tools available in the COMSOL® software and the advantages they provide, watch our video tutorial covering it all. In the video, we demonstrate how to access and use each of the resources discussed above.
After you’ve finished watching the video, you’ll be ready to use all of the help tools and resources available to you in COMSOL Multiphysics.
If you have used simulation tools for any significant period of time, you may have found yourself creating new models faster than your computer can solve them. This is especially common if your models are quick to set up but take a fair amount of time to solve. Running multiple models at the same time on the same computer is not a good option, as they would compete for resources (RAM, in particular) and therefore take longer to run simultaneously than they would sequentially, or back to back.
So, what’s a modeler to do?
You could launch your first model in the graphical user interface (GUI) and wait for it to solve, launch the second model in the GUI and wait for it to solve, and so on. But who would want to return to the office after hours or on weekends just to launch their next model?
Fortunately, there is a solution: creating a shell script, or a batch file, which automatically launches your COMSOL Multiphysics simulations one after the other. I’ll explain how to do this step by step on a computer with the Windows® operating system, but these ideas also apply to the other supported platforms (macOS and the Linux® operating system).
Let’s start with a demonstration of how to run a single COMSOL Multiphysics model from the command line.
First, we create a model file in the COMSOL Multiphysics GUI, also known as the COMSOL Desktop. Since we’re going over how to use a new functionality, the smaller and less detailed the model, the better. This will allow us to understand the functionality and perform tests on it quickly. Once you are comfortable with this functionality, it can, of course, also be applied to sophisticated models that take a long time to solve.
At this stage, we check that the model is properly set up by running it with a relatively coarse mesh. This presents the additional benefit of generating a default data set and one or two default plots in the Study branch of the model tree. Now that we’ve ensured that the model is properly set up, we can refine the mesh and save the file under the name Model1.mph in our working folder. In this example, that’s C:/Users/jf.
At this point, we can close the COMSOL Desktop.
Next, we open a Command Prompt window and, at the command line, navigate to our working folder by typing:
cd C:\Users\jf
Then, we press the Enter key.
We are just about to call the COMSOL® software using the comsolbatch command. Before we can do that, we need to make sure that the Windows® operating system knows where to find that command. This is where, if we have not done so before, we add the path to the COMSOL® software executables to the Windows® path environment variable. On a computer running Windows® with a default installation, these executables are located in C:\Program Files\COMSOL\COMSOL52a\Multiphysics\bin\win64.
Now, drum roll, please!
Back at the command line, we type the following command and then press Enter:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
This command instructs Windows® to launch COMSOL Multiphysics® in Batch mode, hence without a graphical user interface. As the syntax suggests, Model1.mph is used as the input and Model1_solved.mph is the file with the solution. If we were to omit the “-outputfile Model1_solved.mph” part of the command above, the solution would be stored in the input file, Model1.mph.
As the software runs, some progress information is displayed at the command line. After a few moments, the run is done and we can open the output file, Model1_solved.mph, in the GUI. We can see that the model has indeed been solved and that we can postprocess the results interactively, just as if we had computed the solution in the COMSOL Desktop.
Now that we’ve figured out how to launch a COMSOL Multiphysics model from the command line, let’s see how to automate running two or more simulations in a row.
Let’s create a second model, check that it is properly set up, and save the file to our working folder under the name Model2.mph. With that done, we can close the COMSOL Desktop again.
Using a text editor like Notepad, we create a plain text file containing the following two lines:
comsolbatch -inputfile Model1.mph -outputfile Model1_solved.mph
comsolbatch -inputfile Model2.mph -outputfile Model2_solved.mph
We then save this in our working folder as a plain text file with the .bat extension. Here, we named the file Batch_Commands_For_COMSOL.bat.
At the command prompt, still in our working folder, we launch Batch_Commands_For_COMSOL.bat by typing:
Batch_Commands_For_COMSOL
Then, we press the Enter key.
COMSOL Multiphysics will run without the GUI open and solve the problem defined in the file Model1.mph. The COMSOL® software will then do the same for the problem defined in the file Model2.mph. Once the runs are finished, we can inspect the files Model1_solved.mph and Model2_solved.mph in the COMSOL Desktop to see that they indeed contain the solutions of these two analyses. On the other hand, if we open the files Model1.mph and Model2.mph in the GUI, we see that they have not changed: they still contain the problem definitions, but no solutions.
If we want to run more than two files sequentially, we can simply modify the .bat file accordingly, adding a line for each file that we wish to run.
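As an alternative to editing a .bat file by hand, the same sequential queue can be scripted. Here is a minimal Python sketch; the helper names are hypothetical, and it assumes the comsolbatch executable is on the system PATH:

```python
import subprocess
from pathlib import Path

def solved_name(mph):
    """Output file name convention: Model1.mph -> Model1_solved.mph."""
    mph = Path(mph)
    return mph.with_name(mph.stem + "_solved.mph")

def run_models_sequentially(model_files, comsolbatch="comsolbatch"):
    """Run each COMSOL model in Batch mode, one after the other.
    Assumes comsolbatch is on the system PATH (see above)."""
    for mph in model_files:
        subprocess.run(
            [comsolbatch, "-inputfile", str(mph),
             "-outputfile", str(solved_name(mph))],
            check=True,  # stop the queue if a run fails
        )

# Example (not executed here):
# run_models_sequentially(["Model1.mph", "Model2.mph"])
```

Using `check=True` makes the script stop at the first failed run instead of silently continuing, which is usually what you want for an overnight queue.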
By learning how to run your COMSOL Multiphysics simulations in Batch mode from the command line, you will be able to complete your projects more efficiently and with ease.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
macOS is a trademark of Apple Inc., in the U.S. and other countries.
Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.
Because they’re fun to watch! But seriously, animations are an engaging way of clearly conveying your simulation results to your audience. Let’s take a look at some scenarios where animations can be helpful, using simulation results featured in past blog posts as examples. Yes, this is partly an excuse to repost cool animations like this one made by two COMSOL Multiphysics® users:
The carangiform swimming pattern of a fish. The animation was created by M. Curatolo and L. Teresi and originally featured in the blog post “Studying the Swimming Patterns of Fish with Simulation”.
When presenting your simulation results to colleagues, clients, or customers, analytical results don’t always provide the whole picture. For instance, the animations featured in a previous blog post about the simulation of sloshing in vehicle fuel tanks help you better visualize the movement and impact of fluid in a tank as compared to a static plot alone.
Of course, analytical results are paramount when presenting the conclusions that you draw from a simulation. However, analytical results can sometimes be hard to understand or relate to. Animations give the viewer a better idea of the real-world effects that were simulated in your research.
Animations can often help explain a concept or idea. In a blog post on thermal ablation, animations are used to illustrate how this process is used for material removal. In that blog post, we explain the concept of thermal ablation and detail how to model the phenomenon in COMSOL Multiphysics. Toward the end, we share an animation that shows what thermal ablation looks like in a laser heating example. Here it is again:
In general, simulation software is often used to analyze phenomena that cannot be seen with the naked eye. This is true for physics related to acoustics, electromagnetic waves, MEMS, and more. Animations extend this idea by providing a graphical representation of the process or design that you aim to study. For example, the following animation shows the far-field radiation pattern for a monopole antenna array featured in an introductory antenna modeling blog post.
The reasons why you should create animations from your simulation results don’t just apply to your COMSOL Multiphysics® models. You can also add the animation functionality to any simulation app that you create with the Application Builder. Depending on who is using your app and for what purpose, you can build it so that users can easily generate an animation of the results at the click of a button in the app’s user interface.
Several of the demo applications within the Application Library contain animations, such as the Biosensor Design app featured in a previous blog post.
Ready to make your own animations in COMSOL Multiphysics? We have a tutorial video to show you how. Animations can at first seem tricky and time-consuming to get just right. In the video, you will see a few best practices that will help minimize the amount of time you’ll spend producing animations.
After watching the video, you’ll be ready to generate and export animations of your own simulation results.
If you work with computationally large problems, the Domain Decomposition solver can increase efficiency by dividing the problem’s spatial domain into subdomains and computing the subdomain solutions either concurrently or sequentially on the fly. We have already learned about using the Domain Decomposition solver as a preconditioner for an iterative solver and discussed how it can enable simulations that would otherwise be constrained by the available memory. Today, we will take a detailed look at how to use this functionality with a thermoviscous acoustics example.
Let’s start with the Transfer Impedance of a Perforate tutorial model, which can be found in the Application Library of the Acoustics Module. This example model uses the Thermoviscous Acoustics, Frequency Domain interface to model a perforate, a plate with a distribution of small perforations or holes.
A simulation of transfer impedance in a perforate.
For this complex simulation, we are interested in the velocity, temperature, and total acoustic pressure in the Transfer Impedance of a Perforate model. Let’s see how we can use the Domain Decomposition solver to compute these quantities in situations where the required resolution exceeds the available memory.
Let’s take a closer look at how we can set up a Domain Decomposition solver for the perforate model. The original model uses a fully coupled solver combined with a GMRES iterative solver. As a preconditioner, two hybrid direct preconditioners are used; i.e., the preconditioners separate the temperature from the velocity and pressure. By default, the hybrid direct preconditioners are used with PARDISO.
As the mesh is refined, the amount of memory used continues to grow. An important parameter in the model is the minimum thickness of the viscous boundary layer (dvisc), which has a typical size of 50 μm. The perforates are a few millimeters in size. The minimum mesh element size is taken to be dvisc/2. To refine the solution, we divide dvisc by the refinement factors r = 1, 2, 3, 5. We can insert the domain decomposition preconditioner by right-clicking on the Iterative node and selecting Domain Decomposition. Below the Domain Decomposition node, we find the Coarse Solver and Domain Solver nodes.
To accelerate the convergence, we need to use the coarse solver. Since we do not want to use an additional coarse mesh, we set Coarse Level > Use coarse level to Algebraic in order to use an algebraic coarse grid correction. On the Domain Solver node, we add two Direct Preconditioners and enable the hybrid settings like they were used in the original model. For the coarse solver, we take the direct solver PARDISO. If we use a Geometric coarse mesh grid correction instead, we can also apply a hybrid direct coarse solver.
Settings for the Domain Decomposition solver.
We can compare the default iterative solver with hybrid direct preconditioning to both the direct solver and the iterative solver with domain decomposition preconditioning on a single workstation. For the unrefined mesh with a refinement factor of r = 1, the model has 158,682 degrees of freedom. All three solvers use around 5-6 GB of memory to find the solution for a single frequency. For r = 2, with 407,508 degrees of freedom, and r = 3, with 812,238 degrees of freedom, the direct solver uses a little more memory than the two iterative solvers (12-14 GB for r = 2 and 24-29 GB for r = 3). For r = 5, with 2,109,250 degrees of freedom, the direct solver uses 96 GB and the iterative solvers use around 80 GB on a sequential machine.
As we will learn in the subsequent discussion, the Recompute and clear option for the Domain Decomposition solver gives a significant advantage with respect to the total memory usage.
Refinement | Degrees of Freedom | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning and Recompute and clear Enabled |
---|---|---|---|---|---|
Refinement r = 1 | 158,682 | 5.8 GB | 5.3 GB | 5.4 GB | 3.6 GB |
Refinement r = 2 | 407,508 | 14 GB | 12 GB | 13 GB | 5.5 GB |
Refinement r = 3 | 812,238 | 29 GB | 24 GB | 26 GB | 6.4 GB |
Refinement r = 5 | 2,109,250 | 96 GB | 79 GB | 82 GB | 12 GB |
Memory usage for the direct solver and the two iterative solvers in the nondistributed case.
On a cluster, the memory load per node can be much lower than on a single-node computer. Let us consider the model with a refinement factor of r = 5. The direct solver scales nicely with respect to memory, using 65 GB and 35 GB per node on 2 and 4 nodes, respectively. On a cluster with 4 nodes, the iterative solver with domain decomposition preconditioning with 4 subdomains only uses around 24 GB per node.
Number of Nodes | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning |
---|---|---|---|
1 node | 96 GB | 79 GB | 82 GB (with 2 subdomains) |
2 nodes | 65 GB | 56 GB | 47 GB (with 2 subdomains) |
4 nodes | 35 GB | 35 GB | 24 GB (with 4 subdomains) |
Memory usage per node on a cluster for the direct solver and the two iterative solvers for refinement factor r = 5.
On a single-node computer, the Recompute and clear option for the Domain Decomposition solver gives us the benefit we expect: reduced memory usage. However, it comes with the additional cost of decreased performance. For r = 5, the memory usage is around 41 GB for 2 subdomains, 25 GB for 4 subdomains, and 12 GB for 22 subdomains (the default settings result in 22 subdomains). For r = 3, we use around 15 GB of memory for 2 subdomains, 10 GB for 4 subdomains, and 6 GB for 8 subdomains (default settings).
Even on a single-node computer, the Recompute and clear option for the domain decomposition method gives a significantly lower memory consumption than the direct solver: 12 GB instead of 96 GB for refinement factor r = 5 and 6 GB instead of 30 GB for refinement factor r = 3. Despite the performance penalty, the Domain Decomposition solver with the Recompute and clear option is a viable alternative to the out-of-core option for the direct solvers when there is insufficient memory.
Refinement Factor | r = 3 | r = 5 |
---|---|---|
Memory Usage | 30 GB | 96 GB |
Memory usage on a single-node computer with a direct solver for refinement factors r = 3 and r = 5.
Number of Subdomains | Recompute and clear Option | Refinement r = 3 | Refinement r = 5 |
---|---|---|---|
2 | Off | 24 GB | 82 GB |
2 | On | 15 GB | 41 GB |
4 | On | 10 GB | 25 GB |
8 | On | 6 GB | 20 GB |
22 | On | - | 12 GB |
Memory usage on a single-node computer with an iterative solver, domain decomposition preconditioning, and the Recompute and clear option enabled for refinement factors r = 3 and r = 5.
As demonstrated with this thermoviscous acoustics example, using the Domain Decomposition solver can greatly lower the memory footprint of your simulation. By this means, domain decomposition methods can enable the solution of large and complex problems. In addition, parallelism based on distributed subdomain processing is an important building block for improving computational efficiency when solving large problems.
The Domain Decomposition solver is based on a decomposition of the spatial domain into overlapping subdomains, where the subdomain solutions are less complex and more efficient in terms of memory usage and parallelization compared to the solution of the original problem.
In order to describe the basic idea of the iterative spatial Domain Decomposition solver, we consider an elliptic partial differential equation (PDE) over a domain D and a spatial partition {D_i} such that the whole domain D = ∪_i D_i is covered by the union of the subdomains D_i. Instead of solving the PDE on the entire domain at once, the algorithm iteratively solves a number of problems for each subdomain D_i.
For Schwarz-type domain decomposition methods, the subdomains overlap so that information can be transferred between them. On the interfaces between the subdomains, the solutions of the neighboring subdomains are used to update the current subdomain solution. If a subdomain D_i is adjacent to the boundary of D, the boundary conditions of the original problem are used there. The iterative domain decomposition procedure is typically combined with a global solver on a coarser mesh in order to accelerate convergence.
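As a rough illustration of the alternating (multiplicative) Schwarz idea, here is a self-contained 1D sketch for -u'' = f on the unit interval with two overlapping subdomains. This is a toy model of the method, not COMSOL's implementation; the grid size, overlap width, and sweep count are arbitrary choices for demonstration.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: subdiagonal a, diagonal b,
    superdiagonal c, right-hand side d (a[0], c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_poisson(f, n=99, overlap=10, sweeps=50):
    """Alternating Schwarz for -u'' = f on (0,1), u(0) = u(1) = 0.
    Two overlapping subdomains; each local solve uses the current
    neighbor values as Dirichlet data on the artificial interface."""
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    u = [0.0] * n
    mid = n // 2
    doms = [(0, mid + overlap), (mid - overlap, n)]
    for _ in range(sweeps):
        for lo, hi in doms:
            m = hi - lo
            left = u[lo - 1] if lo > 0 else 0.0
            right = u[hi] if hi < n else 0.0
            a = [-1.0 / h**2] * m
            b = [2.0 / h**2] * m
            c = [-1.0 / h**2] * m
            d = [f(xs[lo + i]) for i in range(m)]
            d[0] += left / h**2    # interface / boundary value on the left
            d[-1] += right / h**2  # interface / boundary value on the right
            u[lo:hi] = thomas(a, b, c, d)
    return xs, u
```

With f(x) = π² sin(πx), the iterates converge to sin(πx); enlarging the overlap reduces the number of sweeps needed, which mirrors the trade-off discussed below.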
A 2D domain with a regular triangular mesh and its degrees of freedom decomposed into square subdomains.
To illustrate the spatial domain decomposition, consider a 2D regular triangular mesh. For simplicity, we consider linear finite element shape functions with degrees of freedom in the 3 nodes of the triangular elements. The domain (more precisely, its degrees of freedom) is decomposed into square subdomains, each consisting of 25 degrees of freedom. All interior subdomains have 8 neighboring subdomains and all degrees of freedom are unique to a single subdomain. The support of the linear element functions for a single subdomain overlaps with the support of its neighbors.
Support of the linear element functions of the blue subdomain.
To improve the convergence rate of the iterative procedure, we may need to include a larger number of degrees of freedom in order to have a larger overlap of the subdomain support. This may give a more efficient coupling between the subdomains and a lower iteration count until convergence of the iterative procedure. However, this benefit comes at the cost of additional memory usage and additional computations during the setup and solution phases because of the larger subdomain sizes.
If an additional overlap of width 1 is requested, we add an additional layer of degrees of freedom to the existing subdomain. In our example, 22 degrees of freedom (marked with blue rectangles) are added to the blue subdomain. The support of the blue subdomain is enlarged accordingly.
The same procedure is repeated for the red, green, and yellow subdomains. In the resulting subdomain configuration, some of the degrees of freedom are unique to a single subdomain, while others are shared by two, three, or even four subdomains. Dependencies arise for the shared degrees of freedom whenever one of the adjacent subdomains updates its solution.
Extended subdomain with 47 degrees of freedom and its support. The additional 22 degrees of freedom are shared with the neighboring subdomains.
It is known (Ref. 1) that the iterative solution to the set of subdomain problems on the subdomains, D_{i}, converges toward the solution of the original problem formulated over the whole domain D. Hence, the global solution can be found by iteratively solving each subdomain problem, with all other subdomains fixed, until the convergence criterion is met. The optional coarse grid problem can improve the convergence rate considerably. The coarse grid problem, which is solved on the entire domain D, gives an estimate of the solution on the fine grid on D and can transfer global information faster. The convergence rate of the method depends on the ratio between the size of the coarse grid mesh elements and the width of the overlap zone on the fine grid.
If we compute the solution on a particular subdomain D_{i}, the neighboring subdomains need to update their degrees of freedom adjacent to the support of D_{i}. In COMSOL Multiphysics, there are four options available for the coordination of the subdomain overlap and the global coarse grid solution. The selector Solver in the domain decomposition settings can be set to Additive Schwarz, Multiplicative Schwarz (default), Hybrid Schwarz, and Symmetric Schwarz. For Additive Schwarz methods, the affected degrees of freedom are updated after all solutions have been computed on all subdomains without an intermediate data exchange. In this case, the order of the subdomain solutions is arbitrary and there are no dependencies between the subdomains during this solution phase.
In contrast, Multiplicative Schwarz methods update the affected degrees of freedom at the overlap of the support of neighboring subdomains after every subdomain solution. This typically speeds up the iterative solution procedure. However, there is an additional demand for prescribing an order of the subdomain solutions, which are no longer fully independent of each other.
The Hybrid Schwarz method updates the solution after the global solver problem is solved. The subdomain problems are then solved concurrently as in the Additive Schwarz solver case. The solution is then updated again and the global solver problem is solved a second time. The Symmetric Schwarz method solves the subdomain problems in a given sequence like the Multiplicative Schwarz solver, but in a symmetric way.
Direct linear solvers are typically more robust and require less tweaking of physics-dependent settings than iterative solvers with tuned preconditioners. Due to their memory requirements, however, direct solvers may become infeasible for larger problems. Iterative solvers are typically leaner in memory consumption, but some models still can’t be solved due to resource limitations. We discuss the memory requirements for solving large models in a previous blog post. Other preconditioners for iterative solvers may also fail due to specific characteristics of the system matrix. Domain decomposition is a preconditioner that, in many cases, requires less tuning than other preconditioners.
In case of limitations by the available memory, we can move the solution process to a cluster that provides a larger amount of accumulated memory. We can consider the domain decomposition preconditioner, using a domain solver with settings that mimic the original solver settings, since the Domain Decomposition solver has the potential to do more concurrent work. As we will see, the Domain Decomposition solver can also be used in a Recompute and clear mode, where you can get a significant memory reduction, even on a workstation.
If we do not want to use an additional coarse mesh to construct the global solver, we can compute its solution using an algebraic method. This may come at the price of an increased number of GMRES iterations compared to when we set the Use coarse level selector to Geometric, which is based on an additional coarser mesh. The advantage is that the algebraic method constructs the global solver from the finest-level system matrix, and not by means of an additional coarser mesh. With the Algebraic option, the generation of an additional coarse mesh, which might be costly or not even possible, can be avoided.
On a cluster, a subdomain problem can be solved on a single node (or on a small subset of the available nodes). The size of the subdomains, hence the memory consumption per node, can be controlled by the Domain Decomposition solver settings. For the Additive Schwarz solver, all subdomain problems can be solved concurrently on all nodes. The solution updates at the subdomain interfaces occur in the final stage of the outer solver iteration.
For the Multiplicative Schwarz solver, there are intermediate updates of the subdomain interface data. This approach can speed up the convergence of the iterative procedure, but introduces additional dependencies for the parallel solution. We must use a subdomain coloring mechanism in order to identify a set of subdomains that can be processed concurrently. This may limit the degree of parallelism if there is a low number of subdomains per color. In general, the Multiplicative Schwarz and Symmetric Schwarz methods converge faster than the Additive Schwarz and Hybrid Schwarz methods, while the latter two can achieve better parallel speedup.
A subdomain coloring mechanism is used for multiplicative Schwarz-type domain decomposition preconditioning.
In the Domain Decomposition solver settings, there is a Use subdomain coloring checkbox for the Multiplicative Schwarz and Hybrid Schwarz methods. This option is enabled by default and takes care of grouping subdomains into sets — so-called colors — that can be handled concurrently. Let us consider a coloring scheme with four colors (blue, green, red, and yellow). All subdomains of the same color can compute their subdomain solution at the same time and communicate the solution at the subdomain overlap to their neighbors. For four colors, the procedure is repeated four times until the global solution can be updated.
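To make the coloring idea concrete, here is a generic greedy-coloring sketch, not the actual algorithm used by the software. For a 3-by-3 grid of square subdomains with 8-neighbor adjacency (as in the figure), it produces exactly four colors:

```python
def grid_adjacency(nx, ny):
    """8-neighbor adjacency for an nx-by-ny grid of square subdomains;
    diagonal neighbors share overlap too, so they conflict as well."""
    adj = {}
    for i in range(nx):
        for j in range(ny):
            nbrs = []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) != (0, 0) and 0 <= i + di < nx and 0 <= j + dj < ny:
                        nbrs.append((i + di, j + dj))
            adj[(i, j)] = nbrs
    return adj

def color_subdomains(adjacency):
    """Greedy coloring: neighboring subdomains get different colors, so
    all subdomains of one color can be solved concurrently."""
    colors = {}
    for node in sorted(adjacency):
        used = {colors[n] for n in adjacency[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors
```

Subdomains sharing a color have no overlap with each other, so their local solves can proceed in parallel; the colors are then processed one after the other.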
Domain decomposition on a cluster with nine nodes. A subdomain coloring scheme is used to compute subdomain solutions simultaneously for each different color.
On a cluster, the subdomains can be distributed across the available compute nodes. Every color can be handled in parallel and all of the nodes compute their subdomain solutions for the current color at the same time and then proceed with the next color. The coloring scheme coordinates the order of the subdomain updates for the Multiplicative Schwarz and Symmetric Schwarz methods. Communication is required for updating the degrees of freedom across the compute node boundaries in between every color. No subdomain coloring scheme is required for the Additive Schwarz and Hybrid Schwarz methods.
The different Domain Decomposition solver types.
If the Domain Decomposition solver is run on a single workstation, all data needs to be set up in the same memory space, so there is no benefit from storing only specific subdomain data on each node. Due to the subdomain overlap, the memory consumption might even increase compared to the original problem. In order to overcome this limitation, the Domain Decomposition solver can be run in the Recompute and clear mode, where the data used by each subdomain is computed on the fly. This results in a significant memory reduction and makes it possible to solve larger problems without storing the data in virtual memory. These problems take longer to compute due to the repeated setup of the subdomain problems.
This method is particularly useful when the solution would otherwise use a lot of virtual memory with disk swapping. If the Automatic option is used, the Recompute and clear mechanism is activated when there is an out-of-memory error during the setup phase; the setup is then repeated with Recompute and clear activated. The Recompute and clear option is comparable to the out-of-core option of the direct solvers. Both methods incur an additional penalty: either from storing additional data on disk (out of core) or from recomputing specific parts of the data again and again (Recompute and clear). We can save even more memory by using the matrix-free format on top of the Recompute and clear option.
In the settings of the Domain Decomposition solver, we can specify the intended Number of subdomains (see the figure below). In addition, the Maximum number of DOFs per subdomain is specified. If the latter bound is exceeded (i.e., one of the subdomains has to handle more degrees of freedom than specified), all subdomains are recreated with a larger number of subdomains.
Settings window for the Domain Decomposition solver.
The subdomains are created by means of the element and vertex lists taken from the mesh. We are able to choose from different subdomain ordering schemes. The Nested Dissection option creates a subdomain distribution by means of graph partitioning. This option typically gives a low number of colors and results in balanced subdomains with an approximately equal number of degrees of freedom, minimal subdomain interfaces, and a small overlap.
An alternative method that also avoids slim subdomains is to use the Preordering algorithm based on a Space-filling curve. If we select the option None for the Preordering algorithm, the subdomain ordering is based on the ordering of the mesh elements and degrees of freedom. This can result in slim subdomains. Detailed information about the applied subdomain configuration is given in the solver log if the Solver log on the Advanced node is set to Detailed.
When simulating problems with large memory requirements in the COMSOL® software, we are limited by the available hardware resources. An iterative solver with domain decomposition preconditioning should be considered as a memory-lean alternative to direct solvers. On a workstation, the Recompute and clear option for the Domain Decomposition solver is an alternative to the out-of-core mechanism for the direct solvers.
Although memory-heavy simulations can fail on computers with insufficient memory, we can enable them on clusters. The direct solvers in COMSOL Multiphysics automatically use the distributed memory, leading to a memory reduction on each node. The Domain Decomposition solver is an additional option that takes advantage of the parallelization based on the spatial subdomain decomposition.
The Domain Decomposition solver, clusters, and a variety of the options discussed here will help you improve computational efficiency when working with large models in COMSOL Multiphysics. In an upcoming blog post, we will demonstrate using the domain decomposition preconditioner in a specific application scenario. Stay tuned!
1. A. Toselli and O. Widlund, Domain Decomposition Methods — Algorithms and Theory, Springer Series in Computational Mathematics, vol. 34, 2005.
We have already gone over the physical basis of the firing mechanism that generates the action potential in cells and studied the generation of such a waveform using the Fitzhugh-Nagumo (FH) model.
The dynamics of the simple Fitzhugh-Nagumo model, featured in a computational app.
Today, we will convert the FH model study into a more rigorous mathematical model, the Hodgkin-Huxley (HH) model. Unlike the Fitzhugh-Nagumo model, which works well as a proof of concept, the Hodgkin-Huxley model is based on cell physiology and the simulation results match well with experiments.
In the HH model, the cell membrane contains gated and nongated channels that allow ions to pass through them. The nongated channels are always open, while the gated channels open only under particular conditions. When the cell is at rest, neurons allow the passage of sodium and potassium ions through the nongated channels. First, let us presume that only the potassium channels exist. For potassium, which is in excess inside the cell, the difference in concentration between the inside and outside of the cell acts as a driving force for the ions to migrate. This movement of ions by diffusion is the chemical mechanism that initially drives potassium out of the cell.
This movement process cannot go on indefinitely, because the potassium ions are charged. Once they accumulate outside the cell, these ions establish an electrical gradient that drives some potassium ions back into the cell. This is the second mechanism (the electrical mechanism) that affects the movement of ions. Eventually, these two mechanisms balance each other and the potassium efflux and influx cancel out. The potential at which this balance happens is known as the Nernst potential for that ion. In excitable cells, the Nernst potential for potassium, E_K, is -77 mV and for sodium ions, E_Na, is around 50 mV.
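Balancing the chemical and electrical gradients gives the Nernst equation, E = (RT/zF) ln(c_out/c_in). A minimal sketch of the computation follows; the concentrations are illustrative assumptions, since actual values vary with cell type and temperature:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(c_out, c_in, z=1, T=298.0):
    """Nernst potential in mV for an ion of valence z, given
    extracellular and intracellular concentrations (same units)."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative concentrations (mM); actual values depend on the cell.
E_K = nernst_mV(c_out=5.0, c_in=100.0)    # potassium: about -77 mV
E_Na = nernst_mV(c_out=150.0, c_in=15.0)  # sodium: about +59 mV
```

Because potassium is concentrated inside the cell, its Nernst potential is negative; sodium, concentrated outside, has a positive one.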
Now, we allow the presence of a few nongated sodium channels in the membrane. Because sodium ions abound in the extracellular region, an influx of sodium ions into the cell occurs. The incoming sodium ions reduce the electrical gradient, disturb the potassium equilibrium, and result in a net potassium efflux from the cell until the cell reaches its resting potential at around -70 mV. It is important to mention that the net efflux of potassium and net influx of sodium ions cannot go on forever; otherwise, the chemical gradients that cause the movement would eventually vanish. Ion pumps bring potassium back into the cell and drive sodium out through active transport, maintaining the resting potential of the cell under normal conditions.
Let’s derive an equivalent circuit model of a cell in which we can imitate the effects of the different cellular mechanisms we just described by different commonly found circuit components, such as capacitors, resistors, and batteries. The voltage response of the circuit is the signal that corresponds to the action potential.
Overall, there are four currents that are important for the HH model:

- The capacitive current through the cell membrane
- The sodium ion current
- The potassium ion current
- The leak current
Schematic of the currents in a Hodgkin-Huxley model.
The four currents flow through parallel branches, with the membrane potential V as the driving force (see the figure above; the ground denotes the extracellular potential). The cell membrane has a capacitive character, which allows it to store charge. In the figure above, this is the leftmost branch, modeled with a capacitor of capacitance C_{m}. The other branches account for three ionic currents that flow through ion channels. In each branch, the effect of the channels is modeled through a conductance (shown as a resistance in the diagram), and the effect of the concentration gradient is represented by the Nernst potential of the ion, represented as a battery.
Thus, when a current I_{inj} is injected into the cell, it divides among the four branches, and conservation of charge leads us to the following balance equation

I_{inj} = C_{m} dV/dt + I_{Na} + I_{K} + I_{L}

or, equivalently,

C_{m} dV/dt = I_{inj} - g_{Na}(V - E_{Na}) - g_{K}(V - E_{K}) - g_{L}(V - E_{L})
What is of paramount importance is that the sodium and potassium channel conductances are not constant; rather, they are functions of the cell potential. So how do we model them? Remember that some of the ion channels are gated and can have multiple gates. Assume that there are voltage-dependent rate functions α_{ρ}(V) and β_{ρ}(V), which give the rate constants of a gate going from the closed state to the open state and from open to closed, respectively. If ρ denotes the fraction of gates that are open, a simple balance law yields the following equation for the evolution of ρ

dρ/dt = α_{ρ}(V)(1 - ρ) - β_{ρ}(V)ρ
Different gated channels are characterized by their gates. In the HH model, the potassium channel is hypothesized to be composed of four n-type gates. Since the channel conducts only when all four are open, the potassium conductance is modeled through the equation

g_{K} = ḡ_{K}n^{4}
For sodium, the situation is assumed to be more complicated. The sodium channel also has four gates, but of two kinds: three m-type gates (activation gates that open when the cell depolarizes) and one h-type gate (a deactivation gate that closes when the cell depolarizes). Therefore, the sodium channel conductance is given by

g_{Na} = ḡ_{Na}m^{3}h
In the above equations, ḡ_{K} and ḡ_{Na} are the maximum potassium and sodium conductances, respectively. The functional forms of α_{ρ}(V) and β_{ρ}(V) for ρ = n, m, h can be found in any standard reference.
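For concreteness, one standard parameterization of these rate functions (the classic squid-axon fit, with V in mV and the resting potential near -65 mV; other shifted conventions exist) reads:

```latex
\begin{aligned}
\alpha_n(V) &= \frac{0.01\,(V+55)}{1 - e^{-(V+55)/10}}, &
\beta_n(V) &= 0.125\, e^{-(V+65)/80},\\[4pt]
\alpha_m(V) &= \frac{0.1\,(V+40)}{1 - e^{-(V+40)/10}}, &
\beta_m(V) &= 4\, e^{-(V+65)/18},\\[4pt]
\alpha_h(V) &= 0.07\, e^{-(V+65)/20}, &
\beta_h(V) &= \frac{1}{1 + e^{-(V+35)/10}}.
\end{aligned}
```

All rates are in ms^{-1}, so the gating variables evolve on millisecond time scales.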
The leak conductance g_{L} is assumed to be constant. Therefore, the HH model is completely described by the following set of equations

C_{m} dV/dt = I_{inj} - ḡ_{Na}m^{3}h(V - E_{Na}) - ḡ_{K}n^{4}(V - E_{K}) - g_{L}(V - E_{L})
dn/dt = α_{n}(V)(1 - n) - β_{n}(V)n
dm/dt = α_{m}(V)(1 - m) - β_{m}(V)m
dh/dt = α_{h}(V)(1 - h) - β_{h}(V)h
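As a sanity check, the HH system can be integrated numerically outside of any simulation software. Below is a minimal Python sketch using forward Euler and the classic squid-axon parameter values and rate functions (these specific numbers are the standard textbook values, not parameters taken from the model discussed in this post):

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (mV, mS/cm^2, uF/cm^2, ms)
C_m = 1.0                                # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # maximum conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4      # Nernst/reversal potentials

# Standard voltage-dependent rate functions alpha_rho(V), beta_rho(V)
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_inj=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations for a constant injected current."""
    n_steps = int(T / dt)
    V = -65.0                                  # start at the resting potential
    n = a_n(V) / (a_n(V) + b_n(V))             # gates start at their steady-state values
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    Vs = np.empty(n_steps)
    for i in range(n_steps):
        I_ion = (g_Na * m**3 * h * (V - E_Na)  # sodium current
                 + g_K * n**4 * (V - E_K)      # potassium current
                 + g_L * (V - E_L))            # leak current
        V += dt * (I_inj - I_ion) / C_m        # C_m dV/dt = I_inj - I_ion
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        Vs[i] = V
    return Vs

# A sustained 10 uA/cm^2 step should elicit action potentials that overshoot 0 mV
trace = simulate()
print(trace.max() > 0.0)
```

Plotting `trace` against time reproduces the familiar spike train: a fast sodium-driven upstroke, a potassium-driven downstroke, and an afterhyperpolarization near E_{K}.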
The key to understanding the Hodgkin-Huxley model lies in understanding the gate equations. We can recast the equations for the gates in the following form

dρ/dt = (ρ_{∞}(V) - ρ)/τ_{ρ}(V)

with ρ_{∞} = α_{ρ}/(α_{ρ} + β_{ρ}) and τ_{ρ} = 1/(α_{ρ} + β_{ρ}).
This is a very well-known equation in electrical circuits. If we hold the voltage (and hence ρ_{∞}) fixed, the equation says that ρ asymptotically approaches ρ_{∞} as its final value, and the time constant τ_{ρ} dictates the rate of approach: the smaller τ_{ρ}, the faster the approach. The following figure shows the values of these two quantities for ρ = n, m, h.
The asymptotic values (left) and time constants (right) for the gate equations of the Hodgkin-Huxley model.
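For a fixed voltage, the recast gate equation has the closed-form solution ρ(t) = ρ_{∞} + (ρ_{0} - ρ_{∞})e^{-t/τ_{ρ}}, which makes the role of the time constant explicit. A small sketch (with arbitrary illustrative numbers, not values from the model):

```python
import numpy as np

# Exact solution of d(rho)/dt = (rho_inf - rho)/tau for constant rho_inf and tau:
#   rho(t) = rho_inf + (rho_0 - rho_inf) * exp(-t / tau)
def gate(t, rho_0, rho_inf, tau):
    return rho_inf + (rho_0 - rho_inf) * np.exp(-t / tau)

# After one time constant, the remaining gap to rho_inf has shrunk by a factor of e
print(gate(0.0, 0.2, 0.8, 5.0))   # 0.2 (initial value)
print(gate(5.0, 0.2, 0.8, 5.0))   # 0.8 - 0.6/e, about 0.579
```

This is why a small τ_{ρ} means a fast gate: halving τ_{ρ} halves the time needed to close any given fraction of the gap to ρ_{∞}.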
It is easy to conclude from the figures above that n_{∞} and m_{∞} increase as the cell depolarizes, while h_{∞} decreases under similar conditions. From the second graph, we find that the activation of the sodium gates is much faster than the activation of the potassium gates or the deactivation of the sodium channels.
When depolarization starts, n_{∞} and m_{∞} increase and h_{∞} decreases. The governing equations demand that the gating variables approach these steady-state values; therefore, n and m increase and h decreases. However, we should also remember the differences in the time constants of the gating variables: the activation of the sodium gates happens much faster than their deactivation or the opening of the potassium channels. Therefore, there is an initial overall increase in the sodium conductance. This results in an increase of the sodium current, which raises the membrane potential and causes V to approach E_{Na}. This is how the HH model accounts for the rising part of the action potential.
However, as this process continues, h keeps decreasing. Once the value of h drops below a threshold, the sodium channels are effectively closed. Also, the approach of V toward E_{Na} removes the driving force for the sodium current. Meanwhile, the potassium channels, which have a slower time constant, open up to a large extent. This, coupled with the large driving force available for the potassium current, forces the reverse flow: potassium ions move out of the cell, and eventually the membrane potential settles toward the hyperpolarized state.
We can build a computational simulation app to analyze the Hodgkin-Huxley model, which enables us to test various parameters without changing the underlying complex model. We can do this by designing a user-friendly app interface using the Application Builder in the COMSOL Multiphysics® software. As a first step, we create a model of the Hodgkin-Huxley equations using the Model Builder in the COMSOL software. After building the underlying model, we transform it into an app using the Application Builder. By building an app, we can restrict and control the various inputs and outputs of our model. We then pass the app to the end user, who doesn’t need to worry about the model setup process and can focus on extracting and analyzing the results of the simulation.
In our case, we implemented the underlying Hodgkin-Huxley model using the Global ODEs and DAEs interface in COMSOL Multiphysics. This interface is part of the Mathematics features in the COMSOL software and is capable of solving systems of ordinary differential and differential-algebraic equations. It is often used to construct models whose equations and initial conditions are specified directly by the user. In the interface, we can specify the equations and unknowns and add initial conditions. The interface, with the model equations, is shown below.
We also create the postprocessing elements, graphs, and animations in the Model Builder. Once the model is ready, we move on to the Application Builder again. We connect the elements of the model to the app’s user interface through various GUI options like input fields, control buttons, display panels, and some coded methods.
You can learn more about how to build and run simulation apps in this archived webinar.
Finally, we can design the user interface of the Hodgkin-Huxley app. With the Form Editor in the Application Builder, we can design a custom user interface with a number of different buttons, panels, and displays. This user interface features a Model Parameters section to input the different parameters of the HH model, such as the Nernst potential, maximum gate conductance, and membrane capacitance. We can also provide two types of excitation current to the model: a unit step current or an excitation train. As the parameters change, the app displays the action potential and excitation current, as well as the evolution of gate variables m, n, and h.
With the Reset and Compute buttons, it is easy to run multiple tests after changing the parameters. There are also graphical panels that display visualizations and plots of the model results and an Animate button that creates captivating animations. The Simulation Report button generates a summary of the simulation.
The user interface for the Hodgkin-Huxley Model simulation app.
Making buttons work in an app is a simple process. All we have to do is write a few methods using the Method Editor tool that comes with the Application Builder and connect them to the buttons properly. Let me illustrate with an example. We can design the Hodgkin-Huxley Model app so that when it launches, the Animate and Simulation Report buttons are inactive (see the figure below). This is because the app user will not need to use either of these buttons until after they perform a simulation.
The Simulation Report and Animate buttons are disabled at the start of the simulation.
To do so, we can write a method that instructs the app to execute certain functions during the launch.
A method that disables the Simulation Report and Animate buttons during launch.
Observe that we have disabled the Simulation Report and Animate buttons using the instructions in lines 7 and 8 of the method. If you are worried about coming up with the syntax for your methods, let me assure you that it is much simpler than it seems. Most methods simply automate actions that you could perform manually. To record the code corresponding to these actions, we click the Record Code button in the Application Builder ribbon. Then, we go to the Model Builder, perform the actions, and, once done, click the Stop Recording button. The corresponding code is placed in the method, and we can modify the instructions afterward if necessary.
Once a simulation is complete, we would like these buttons to become active in the app. In another method associated with the Compute button, we insert the following code segment
We then ensure that this segment is executed if the solution is computed successfully. You will see that this enables both buttons.
To summarize, you can use a simulation app to easily compute and visualize parameter changes when working with a complex model that involves multiple equations and types of physics, such as the Hodgkin-Huxley model discussed here. This simulation app is just one example of how you can design the layout of an app and customize its input parameters to fit your needs. Use this app as inspiration to build your own app, whether you are analyzing the action potential in a cell with a mathematical model or teaching students about complicated math and engineering concepts. No matter what purpose your app serves, it will ensure that your simulation process is simple and intuitive.