If you work with computationally large problems, the Domain Decomposition solver can increase efficiency by dividing the problem's spatial domain into subdomains and computing the subdomain solutions either concurrently on distributed hardware or sequentially, on the fly, on a single machine. We have already learned about using the Domain Decomposition solver as a preconditioner for an iterative solver and discussed how it can enable simulations that would otherwise be constrained by the available memory. Today, we will take a detailed look at how to use this functionality with a thermoviscous acoustics example.
Let’s start with the Transfer Impedance of a Perforate tutorial model, which can be found in the Application Library of the Acoustics Module. This example model uses the Thermoviscous Acoustics, Frequency Domain interface to model a perforate, a plate with a distribution of small perforations or holes.
A simulation of transfer impedance in a perforate.
For this complex simulation, we are interested in the velocity, temperature, and total acoustic pressure fields in the perforate model. Let's see how we can use the Domain Decomposition solver to compute these quantities in situations where the required resolution would otherwise exceed the limits of the available memory.
Let’s take a closer look at how we can set up a Domain Decomposition solver for the perforate model. The original model uses a fully coupled solver combined with a GMRES iterative solver. As a preconditioner, two hybrid direct preconditioners are used; i.e., the preconditioners separate the temperature from the velocity and pressure. By default, the hybrid direct preconditioners are used with PARDISO.
As the mesh is refined, the amount of memory used continues to grow. An important parameter in the model is the minimum thickness of the viscous boundary layer (dvisc), which has a typical size of 50 μm, while the perforations are a few millimeters in size. The minimum mesh element size is taken to be dvisc/2. To refine the solution, we divide dvisc by the refinement factors r = 1, 2, 3, 5. We can insert the domain decomposition preconditioner by right-clicking the Iterative node and selecting Domain Decomposition. Below the Domain Decomposition node, we find the Coarse Solver and Domain Solver nodes.
To accelerate convergence, we use the coarse solver. Since we do not want to build an additional coarse mesh, we set Coarse Level > Use coarse level to Algebraic in order to use an algebraic coarse grid correction. On the Domain Solver node, we add two Direct Preconditioners and enable the hybrid settings as they were used in the original model. For the coarse solver, we take the direct solver PARDISO. If we use a Geometric coarse grid correction instead, we can also apply a hybrid direct coarse solver.
Settings for the Domain Decomposition solver.
We can compare the default iterative solver with hybrid direct preconditioning to both the direct solver and the iterative solver with domain decomposition preconditioning on a single workstation. For the unrefined mesh with refinement factor r = 1, the model has 158,682 degrees of freedom. All three solvers use around 5-6 GB of memory to find the solution for a single frequency. For r = 2 with 407,508 degrees of freedom and r = 3 with 812,238 degrees of freedom, the direct solver uses a little more memory than the two iterative solvers (12-14 GB for r = 2 and 24-29 GB for r = 3). For r = 5 and 2,109,250 degrees of freedom, the direct solver uses 96 GB and the iterative solvers use around 80 GB on a sequential machine.
As we will learn in the subsequent discussion, the Recompute and clear option for the Domain Decomposition solver gives a significant advantage with respect to the total memory usage.
Memory Usage, Nondistributed Case | Degrees of Freedom | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning with Recompute and clear enabled |
---|---|---|---|---|---|
Refinement r = 1 | 158,682 | 5.8 GB | 5.3 GB | 5.4 GB | 3.6 GB |
Refinement r = 2 | 407,508 | 14 GB | 12 GB | 13 GB | 5.5 GB |
Refinement r = 3 | 812,238 | 29 GB | 24 GB | 26 GB | 6.4 GB |
Refinement r = 5 | 2,109,250 | 96 GB | 79 GB | 82 GB | 12 GB |
Memory usage for the direct solver and the two iterative solvers in the nondistributed case.
On a cluster, the memory load per node can be much lower than on a single-node computer. Let us consider the model with a refinement factor of r = 5. The direct solver scales nicely with respect to memory, using 65 GB and 35 GB per node on 2 and 4 nodes, respectively. On a cluster with 4 nodes, the iterative solver with domain decomposition preconditioning with 4 subdomains only uses around 24 GB per node.
Memory Usage per Node on a Cluster | Memory Usage, Direct Solver | Memory Usage, Iterative Solver with Hybrid Direct Preconditioning | Memory Usage, Iterative Solver with Domain Decomposition Preconditioning |
---|---|---|---|
1 node | 96 GB | 79 GB | 82 GB (with 2 subdomains) |
2 nodes | 65 GB | 56 GB | 47 GB (with 2 subdomains) |
4 nodes | 35 GB | 35 GB | 24 GB (with 4 subdomains) |
Memory usage per node on a cluster for the direct solver and the two iterative solvers for refinement factor r = 5.
On a single-node computer, the Recompute and clear option for the Domain Decomposition solver gives us the benefit we expect: reduced memory usage. However, it comes with the additional cost of decreased performance. For r = 5, the memory usage is around 41 GB for 2 subdomains, 25 GB for 4 subdomains, and 12 GB for 22 subdomains (the default settings result in 22 subdomains). For r = 3, we use around 15 GB of memory for 2 subdomains, 10 GB for 4 subdomains, and 6 GB for 8 subdomains (default settings).
Even on a single-node computer, the Recompute and clear option for the domain decomposition method gives a significantly lower memory consumption than the direct solver: 12 GB instead of 96 GB for refinement factor r = 5 and 6 GB instead of 30 GB for refinement factor r = 3. Despite the performance penalty, the Domain Decomposition solver with the Recompute and clear option is a viable alternative to the out-of-core option for the direct solvers when there is insufficient memory.
Refinement Factor | r = 3 | r = 5 |
---|---|---|
Memory Usage | 30 GB | 96 GB |
Memory usage on a single-node computer with a direct solver for refinement factors r = 3 and r = 5.
Number of Subdomains | Recompute and clear Option | Refinement r = 3 | Refinement r = 5 |
---|---|---|---|
2 | Off | 24 GB | 82 GB |
2 | On | 15 GB | 41 GB |
4 | On | 10 GB | 25 GB |
8 | On | 6 GB | 20 GB |
22 | On | - | 12 GB |
Memory usage on a single-node computer with an iterative solver, domain decomposition preconditioning, and the Recompute and clear option enabled for refinement factors r = 3 and r = 5.
As demonstrated with this thermoviscous acoustics example, using the Domain Decomposition solver can greatly lower the memory footprint of your simulation. By this means, domain decomposition methods can enable the solution of large and complex problems. In addition, parallelism based on distributed subdomain processing is an important building block for improving computational efficiency when solving large problems.
The Domain Decomposition solver is based on a decomposition of the spatial domain into overlapping subdomains. The subdomain problems are smaller than the original problem and are therefore less demanding in terms of memory usage, and they lend themselves naturally to parallelization.
In order to describe the basic idea of the iterative spatial Domain Decomposition solver, we consider an elliptic partial differential equation (PDE) over a domain D and a spatial partition {D_{i}}_{i}, such that the whole domain D = ∪_{i} D_{i} is covered by the union of the subdomains D_{i}. Instead of solving the PDE on the entire domain at once, the algorithm iteratively solves a number of problems, one for each subdomain D_{i}.
For Schwarz-type domain decomposition methods, the subdomains overlap so that information can be transferred between them. On the interfaces between the subdomains, the solutions of the neighboring subdomains are used to update the current subdomain solution. If a subdomain D_{i} is adjacent to the domain boundary, the boundary conditions of the original problem are used there. The iterative domain decomposition procedure is typically combined with a global solver on a coarser mesh in order to accelerate convergence.
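To make the overlapping-subdomain idea concrete, here is a minimal sketch (plain NumPy, not COMSOL code) of the classical alternating Schwarz method for the 1D Poisson problem −u'' = f with two overlapping subdomains. Each sweep solves one subdomain problem using the latest values from the other subdomain as boundary conditions:

```python
import numpy as np

# Solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0; the exact solution is x(1-x)/2.
n = 101                          # number of grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)

def solve_subdomain(u, lo, hi):
    """Solve the tridiagonal Dirichlet problem on grid points lo..hi,
    using the current values u[lo] and u[hi] as boundary conditions."""
    m = hi - lo - 1              # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2         # boundary data from the neighboring subdomain
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

# Two overlapping subdomains: points 0..60 and 40..100 (21 shared points).
u = np.zeros(n)
for it in range(20):             # alternating (multiplicative) Schwarz sweeps
    solve_subdomain(u, 0, 60)    # the left solve sees the latest right data...
    solve_subdomain(u, 40, n - 1)  # ...and vice versa

exact = x * (1.0 - x) / 2.0
print(np.max(np.abs(u - exact)))  # small after a few sweeps
```

With an overlap of roughly 20% of the domain, the error contracts by a constant factor per sweep, so a couple dozen sweeps reach near machine-level accuracy for this toy problem; shrinking the overlap slows the convergence, which is exactly the trade-off discussed below.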
A 2D domain with a regular triangular mesh and its degrees of freedom decomposed into square subdomains.
To illustrate the spatial domain decomposition, consider a 2D regular triangular mesh. For simplicity, we use linear finite element shape functions with degrees of freedom at the three nodes of each triangular element. The domain (more precisely, its set of degrees of freedom) is decomposed into square subdomains, each consisting of 25 degrees of freedom. Every interior subdomain has 8 neighboring subdomains, and every degree of freedom is unique to a single subdomain. The support of the linear element functions of a given subdomain overlaps with the support of its neighbors.
Support of the linear element functions of the blue subdomain.
To improve the convergence rate of the iterative procedure, we may need to include a larger number of degrees of freedom in order to have a larger overlap of the subdomain support. This may give a more efficient coupling between the subdomains and a lower iteration count until convergence. However, this benefit comes at the cost of additional memory usage and additional computations during the setup and solution phases because of the larger subdomain sizes.
If an additional overlap of width 1 is requested, we add an additional layer of degrees of freedom to the existing subdomain. In our example, 22 degrees of freedom (marked with blue rectangles) are added to the blue subdomain. The support of the blue subdomain is enlarged accordingly.
The same procedure is repeated for the red, green, and yellow subdomains. In the resulting subdomain configuration, some of the degrees of freedom are unique to a single subdomain, while others are shared by two, three, or even four subdomains. Dependencies clearly arise for the shared degrees of freedom whenever one of their adjacent subdomains updates its solution.
Extended subdomain with 47 degrees of freedom and its support. The additional 22 degrees of freedom are shared with the neighboring subdomains.
It is known (Ref. 1) that the iterative solution to the set of subdomain problems on the subdomains D_{i} converges toward the solution of the original problem formulated over the whole domain D. Hence, the global solution can be found by iteratively solving each subdomain problem, with all other subdomains fixed, until the convergence criterion is met. The optional coarse grid problem can improve the convergence rate considerably. The coarse grid problem, which is solved on the entire domain D, gives an estimate of the solution on the fine grid on D and can transfer global information faster. The convergence rate of the method depends on the ratio between the size of the coarse grid mesh elements and the width of the overlap zone on the fine grid.
When we compute the solution on a particular subdomain D_{i}, the neighboring subdomains need to update their degrees of freedom adjacent to the support of D_{i}. In COMSOL Multiphysics, there are four options available for coordinating the subdomain overlap and the global coarse grid solution. The Solver selector in the domain decomposition settings can be set to Additive Schwarz, Multiplicative Schwarz (the default), Hybrid Schwarz, or Symmetric Schwarz. For Additive Schwarz methods, the affected degrees of freedom are updated only after the solutions have been computed on all subdomains, without any intermediate data exchange. In this case, the order of the subdomain solutions is arbitrary and there are no dependencies between the subdomains during this solution phase.
In contrast, Multiplicative Schwarz methods update the affected degrees of freedom at the overlap of the support of neighboring subdomains after every subdomain solution. This typically speeds up the iterative solution procedure. However, there is an additional demand for prescribing an order of the subdomain solutions, which are no longer fully independent of each other.
The Hybrid Schwarz method updates the solution after the global solver problem is solved. The subdomain problems are then solved concurrently as in the Additive Schwarz solver case. The solution is then updated again and the global solver problem is solved a second time. The Symmetric Schwarz method solves the subdomain problems in a given sequence like the Multiplicative Schwarz solver, but in a symmetric way.
Direct linear solvers are typically more robust and require less tweaking of physics-dependent settings than iterative solvers with tuned preconditioners. Due to their memory requirements, however, direct solvers may become infeasible for larger problems. Iterative solvers are typically leaner in memory consumption, but some models still can't be solved due to resource limitations. We discussed the memory requirements for solving large models in a previous blog post. Other preconditioners for iterative solvers may also fail due to specific characteristics of the system matrix. Domain decomposition is a preconditioner that, in many cases, requires less tuning than other preconditioners.
When we are limited by the available memory, we can move the solution process to a cluster that provides a larger amount of accumulated memory. We can consider the domain decomposition preconditioner, using a domain solver with settings that mimic the original solver settings, since the Domain Decomposition solver has the potential to do more concurrent work. As we will see, the Domain Decomposition solver can also be used in a Recompute and clear mode, which can give a significant memory reduction even on a workstation.
If we do not want to use an additional coarse mesh to construct the global solver, we can compute its solution using an algebraic method. This may come at the price of an increased number of GMRES iterations compared to when we set the Use coarse level selector to Geometric, which is based on an additional coarser mesh. The advantage is that the algebraic method constructs the global solver from the finest-level system matrix instead of from an additional coarser mesh. With the Algebraic option, the generation of an additional coarse mesh, which might be costly or not even possible, can be avoided.
On a cluster, a subdomain problem can be solved on a single node (or on a small subset of the available nodes). The size of the subdomains, hence the memory consumption per node, can be controlled by the Domain Decomposition solver settings. For the Additive Schwarz solver, all subdomain problems can be solved concurrently on all nodes. The solution updates at the subdomain interfaces occur in the final stage of the outer solver iteration.
For the Multiplicative Schwarz solver, there are intermediate updates of the subdomain interface data. This approach can speed up the convergence of the iterative procedure, but it introduces additional dependencies for the parallel solution. We must use a subdomain coloring mechanism in order to identify sets of subdomains that can be processed concurrently. This may limit the degree of parallelism if there is a low number of subdomains per color. In general, the Multiplicative Schwarz and Symmetric Schwarz methods converge faster, while the Additive Schwarz and Hybrid Schwarz methods can result in better parallel speedup.
A subdomain coloring mechanism is used for multiplicative Schwarz-type domain decomposition preconditioning.
In the Domain Decomposition solver settings, there is a Use subdomain coloring checkbox for the Multiplicative Schwarz and Hybrid Schwarz methods. This option is enabled by default and takes care of grouping subdomains into sets — so-called colors — that can be handled concurrently. Let us consider a coloring scheme with four colors (blue, green, red, and yellow). All subdomains of the same color can compute their subdomain solution at the same time and communicate the solution at the subdomain overlap to their neighbors. For four colors, the procedure is repeated four times until the global solution can be updated.
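The grouping itself can be illustrated with a simple greedy graph coloring, where subdomains are nodes and overlapping neighbors are edges, so that no two neighbors share a color. This is a generic sketch of the idea, not COMSOL's actual implementation:

```python
def color_subdomains(neighbors):
    """Greedy coloring: give each subdomain the smallest color
    not already used by any of its colored neighbors."""
    colors = {}
    for sd in sorted(neighbors):
        used = {colors[nb] for nb in neighbors[sd] if nb in colors}
        c = 0
        while c in used:
            c += 1
        colors[sd] = c
    return colors

def grid_neighbors(rows, cols):
    """3x3-style decomposition: each subdomain overlaps its up to
    8 surrounding neighbors, as in the regular grid described above."""
    nbrs = {}
    for r in range(rows):
        for c in range(cols):
            nbrs[(r, c)] = [(r + dr, c + dc)
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0)
                            and 0 <= r + dr < rows and 0 <= c + dc < cols]
    return nbrs

colors = color_subdomains(grid_neighbors(3, 3))
print(max(colors.values()) + 1)  # 4 colors suffice for an 8-neighbor grid
```

For a regular decomposition with 8-connected overlaps, four colors are enough, matching the four-color (blue, green, red, yellow) scheme described above.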
Domain decomposition on a cluster with nine nodes. A subdomain coloring scheme is used to compute subdomain solutions simultaneously for each different color.
On a cluster, the subdomains can be distributed across the available compute nodes. Every color can be handled in parallel and all of the nodes compute their subdomain solutions for the current color at the same time and then proceed with the next color. The coloring scheme coordinates the order of the subdomain updates for the Multiplicative Schwarz and Symmetric Schwarz methods. Communication is required for updating the degrees of freedom across the compute node boundaries in between every color. No subdomain coloring scheme is required for the Additive Schwarz and Hybrid Schwarz methods.
The different Domain Decomposition solver types.
If the Domain Decomposition solver is run on a single workstation, all data needs to be set up in the same memory space, and there is no longer any benefit from storing only subdomain-specific data. Due to the subdomain overlap, the memory consumption might even increase compared to the original problem. In order to overcome this limitation, the Domain Decomposition solver can be run in the Recompute and clear mode, where the data used by each subdomain is computed on the fly. This gives a significant memory reduction and makes it possible to solve larger problems without storing the data in virtual memory. These problems take longer to compute due to the repeated setup of the subdomain problems.
This method is particularly useful when the solution would otherwise use a lot of virtual memory with disk swapping. If the Automatic option is used, the Recompute and clear mechanism is activated when an out-of-memory error occurs during the setup phase; the setup is then repeated with Recompute and clear activated. The Recompute and clear option is comparable to the out-of-core option of the direct solvers. Both methods come with an additional penalty: storing additional data on disk (Out-of-core) or recomputing specific parts of the data again and again (Recompute and clear). We can save even more memory by using the matrix-free format on top of the Recompute and clear option.
In the settings of the Domain Decomposition solver, we can specify the intended Number of subdomains (see the figure below). In addition, the Maximum number of DOFs per subdomain is specified. If the latter bound is exceeded — i.e., one of the subdomains has to handle more degrees of freedom than specified — then all subdomains are recreated with a larger number of subdomains.
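The interplay of the two settings can be sketched with simple arithmetic. The logic and the 100,000-DOF bound below are assumptions for illustration, not COMSOL's documented algorithm: the subdomain count grows from the requested value until the average subdomain respects the per-subdomain DOF bound.

```python
import math

def choose_subdomain_count(total_dofs, requested, max_dofs_per_subdomain):
    """Illustrative logic (an assumption, not COMSOL's actual algorithm):
    grow the subdomain count from the requested value until the average
    subdomain size respects the per-subdomain DOF bound."""
    n = requested
    while math.ceil(total_dofs / n) > max_dofs_per_subdomain:
        n += 1
    return n

# For the r = 5 model (2,109,250 DOFs) and an assumed bound of 100,000
# DOFs per subdomain, this yields 22 subdomains.
print(choose_subdomain_count(2_109_250, 4, 100_000))  # 22
```

With these assumed numbers, the result happens to match the 22 subdomains reported for the default settings in the r = 5 benchmark above.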
Settings window for the Domain Decomposition solver.
The subdomains are created by means of the element and vertex lists taken from the mesh. We are able to choose from different subdomain ordering schemes. The Nested Dissection option creates a subdomain distribution by means of graph partitioning. This option typically gives a low number of colors and results in balanced subdomains with an approximately equal number of degrees of freedom, minimal subdomain interfaces, and a small overlap.
An alternative method that also avoids slim subdomains is to use the Preordering algorithm based on a Space-filling curve. If we select the None option for the Preordering algorithm, the subdomain ordering is based on the ordering of the mesh elements and degrees of freedom, which can result in slim subdomains. Detailed information about the applied subdomain configuration is given in the solver log if the Solver log setting on the Advanced node is set to Detailed.
When simulating problems with large memory requirements in the COMSOL® software, we are limited by the available hardware resources. An iterative solver with domain decomposition preconditioning should be considered as a memory-lean alternative to direct solvers. On a workstation, the Recompute and clear option for the Domain Decomposition solver is an alternative to the out-of-core mechanism for the direct solvers.
Although memory-heavy simulations can fail on computers with insufficient memory, we can enable them on clusters. The direct solvers in COMSOL Multiphysics automatically use the distributed memory, leading to a memory reduction on each node. The Domain Decomposition solver is an additional option that takes advantage of the parallelization based on the spatial subdomain decomposition.
The Domain Decomposition solver, clusters, and a variety of the options discussed here will help you improve computational efficiency when working with large models in COMSOL Multiphysics. In an upcoming blog post, we will demonstrate using the domain decomposition preconditioner in a specific application scenario. Stay tuned!
A. Toselli and O. Widlund, “Domain Decomposition Methods — Algorithms and Theory,” Springer Series in Computational Mathematics, vol. 34, 2005.
We have already gone over the physical basis of the firing mechanism that generates action potential in cells and we studied the generation of such a waveform using the Fitzhugh-Nagumo (FH) model.
The dynamics of the simple Fitzhugh-Nagumo model, featured in a computational app.
Today, we will convert the FH model study into a more rigorous mathematical model, the Hodgkin-Huxley (HH) model. Unlike the Fitzhugh-Nagumo model, which works well as a proof of concept, the Hodgkin-Huxley model is based on cell physiology and the simulation results match well with experiments.
In the HH model, the cell membrane contains gated and nongated channels that allow the passage of ions through them. The nongated channels are always open and the gated channels open under particular conditions. When the cell is at rest, the neurons allow the passage of sodium and potassium ions through the nongated channels. First, let us presume that only the potassium channels exist. For potassium, which is in excess inside the cell, the difference of concentration between the inside and outside of the cell acts as a driving force for the ions to migrate. This is the process of movement of ions by diffusion, or the chemical mechanism that initially drives potassium out of the cells.
This movement process cannot go on indefinitely, because the potassium ions are charged. Once they accumulate outside the cell, these ions establish an electrical gradient that drives some potassium ions back into the cell. This is the second mechanism (the electrical mechanism) that affects the movement of ions. Eventually, these two mechanisms balance each other, and the potassium efflux and influx balance. The potential at which this balance happens is known as the Nernst potential for that ion. In excitable cells, the Nernst potential for potassium, E_{K}, is about -77 mV, and for sodium ions, E_{Na}, it is around +50 mV.
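The balance point can be computed from the Nernst equation, E_ion = (RT/zF)·ln([ion]_out/[ion]_in), which equates the chemical and electrical driving forces. Here is a quick check with illustrative squid-axon-like concentrations and temperature; the specific numbers are assumptions for demonstration, not values from the text:

```python
import math

def nernst(c_out, c_in, T=279.45, z=1):
    """Nernst potential in mV; T in kelvin, z the ion valence."""
    R, F = 8.314, 96485.0        # J/(mol K), C/mol
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

# Illustrative concentrations (mM) for a squid axon at about 6.3 degrees C
E_K = nernst(c_out=20.0, c_in=400.0)    # potassium: concentrated inside
E_Na = nernst(c_out=440.0, c_in=50.0)   # sodium: concentrated outside
print(round(E_K), round(E_Na))          # roughly -72 mV and +52 mV
```

The results land close to the -77 mV and +50 mV quoted above; the small differences reflect the illustrative concentration and temperature choices.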
We allow the presence of a few nongated sodium channels in the membrane. Because the sodium ions abound in the extracellular region, an influx of sodium ions into the cell must occur. The incoming sodium ions reduce the electrical gradient, disturb the potassium equilibrium, and result in a net potassium efflux from the cell until the cell reaches its resting potential at around -70 mV. It is important to mention here that the net efflux of potassium and net influx of sodium ions cannot go on forever, otherwise the chemical gradient that causes the movement will eventually cease. Ion pumps bring potassium back into the cell and drive sodium out through active transport and maintain the resting potential of the cells in normal conditions.
Let’s derive an equivalent circuit model of a cell in which we can imitate the effects of the different cellular mechanisms we just described by different commonly found circuit components, such as capacitors, resistors, and batteries. The voltage response of the circuit is the signal that corresponds to the action potential.
Overall, there are four currents that are important for the HH model: the capacitive current across the membrane, the sodium current, the potassium current, and a leak current.
Schematic of the currents in a Hodgkin-Huxley model.
The four currents flow through parallel branches, with the membrane potential V as the driving force (see the figure above; the ground denotes the extracellular potential). The cell membrane has a capacitive character, which allows it to store charge. In the figure above, this is the left-most branch, modeled with a capacitor of capacitance C_{m}. The other branches account for the three ionic currents that flow through ion channels. In each branch, the effect of the channels is modeled through a conductance (shown as a resistance in the diagram), and the effect of the concentration gradient is represented by the Nernst potential of the ion, shown as a battery.
Thus, when a current I is injected into the cell, it divides into four parts, and conservation of charge leads to the following balance equation:

I = C_{m} dV/dt + I_{Na} + I_{K} + I_{L}

or, equivalently, writing each ionic current in terms of its conductance and Nernst potential,

C_{m} dV/dt = I − g_{Na}(V − E_{Na}) − g_{K}(V − E_{K}) − g_{L}(V − E_{L})
What is of paramount importance is that the sodium and potassium channel conductances are not constant; rather, they are functions of the cell potential. So how do we model them? Remember that some of the ion channels are gated and can have multiple gates. Assume that there are voltage-dependent rate functions α_{ρ}(V) and β_{ρ}(V), which give the rate constants for a gate going from the closed state to the open state and from open to closed, respectively. If ρ denotes the fraction of gates that are open, a simple balance law yields the following equation for the evolution of ρ:

dρ/dt = α_{ρ}(V)(1 − ρ) − β_{ρ}(V)ρ
Different gated channels are characterized by their gates. In the HH model, the potassium channel is hypothesized to be composed of four n-type gates. Since the channel conducts only when all four are open, the potassium conductance is modeled through the equation

g_{K} = ḡ_{K}n^{4}
For sodium, the situation is assumed to be more complicated. The sodium-gated channel also has four gates, but three are m-type gates (activation gates that open when the cell depolarizes) and one is an h-type gate (an inactivation gate that closes when the cell depolarizes). Therefore, the sodium channel conductance is given by

g_{Na} = ḡ_{Na}m^{3}h
In the above equations, ḡ_{K} and ḡ_{Na} are the maximum potassium and sodium conductances. The functional forms of α_{ρ}(V) and β_{ρ}(V) for ρ = n, m, h can be found in any standard reference.
The leak conductance g_{L} is assumed to be constant. Therefore, the HH model is completely described by the following set of equations:

C_{m} dV/dt = I − ḡ_{Na}m^{3}h(V − E_{Na}) − ḡ_{K}n^{4}(V − E_{K}) − g_{L}(V − E_{L})

dρ/dt = α_{ρ}(V)(1 − ρ) − β_{ρ}(V)ρ, for ρ = n, m, h
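As a sanity check outside COMSOL, the full system can be integrated in a few lines of Python with a simple forward Euler scheme. The rate functions and parameters below are the conventional Hodgkin-Huxley fits from the literature (V in mV, time in ms, rest near −65 mV); they are standard textbook values, not numbers taken from this post:

```python
import math

# Conventional HH parameters (mV, ms, mS/cm^2, uF/cm^2), from the literature.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def simulate(I=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations for a constant
    injected current I (uA/cm^2); returns the peak membrane potential."""
    V = -65.0
    n = a_n(V) / (a_n(V) + b_n(V))   # start gates at their resting values
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    V_peak = V
    for _ in range(int(t_end / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
        V += dt * (I - I_ion) / C_m
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        V_peak = max(V_peak, V)
    return V_peak

print(simulate())   # a spike overshoots well above 0 mV
```

A constant stimulus of 10 µA/cm² drives the membrane potential from rest up past 0 mV toward E_{Na}, reproducing the rising phase of the action potential, while an unstimulated cell stays near its resting potential.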
The key to understanding the Hodgkin-Huxley model lies in understanding the gate equations. We can recast the equations for the gates in the following form:

dρ/dt = (ρ_{∞} − ρ)/τ_{ρ}
with ρ_{∞} = α_{ρ}/(α_{ρ} + β_{ρ}) and τ_{ρ} = 1/(α_{ρ} + β_{ρ}).
This is a very well-known equation in electrical circuits. If we assume ρ_{∞} is voltage independent, the equation says that ρ asymptotically approaches ρ_{∞} as its final value, and the time constant τ_{ρ} dictates the rate of approach: the smaller τ_{ρ}, the faster the approach. The following figure shows the values of these two quantities for ρ = n, m, h.
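For instance, using the conventional fit for the sodium activation gate m (again, standard literature constants, not values from this post), we can tabulate ρ_{∞} and τ_{ρ} directly from the rate functions:

```python
import math

# Conventional HH rate functions for the sodium activation gate m
# (V in mV, rates in 1/ms); standard literature constants.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)

def gate_props(V):
    """Return (m_inf, tau_m) at membrane potential V."""
    a, b = a_m(V), b_m(V)
    return a / (a + b), 1.0 / (a + b)

for V in (-65.0, -30.0, 0.0):
    m_inf, tau_m = gate_props(V)
    print(f"V = {V:6.1f} mV: m_inf = {m_inf:.3f}, tau_m = {tau_m:.3f} ms")
```

m_∞ climbs from about 0.05 at rest toward 1 under depolarization, while τ_m stays well below a millisecond, which is the fast sodium activation visible in the figure.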
The asymptotic values (left) and time constants (right) for the gate equations of the Hodgkin-Huxley model.
It is easy to conclude from the figures above that n_{∞} and m_{∞} increase as the cell depolarizes, while h_{∞} decreases under similar conditions. From the second graph, we find that the activation of sodium (the m gates) is much faster than the activation of potassium (the n gates) or the inactivation of sodium (the h gate).
When depolarization starts, n_{∞} and m_{∞} increase and h_{∞} decreases. The governing equations demand that all of these quantities approach their steady-state values; therefore, n and m increase and h decreases. However, we should also remember the differences in the time constants of the gating variables. A comparison shows that the activation of the sodium gates happens much faster than their inactivation or the opening of the potassium channels. Therefore, there is an initial overall increase in the sodium conductance. This results in an increase of the sodium current, which raises the membrane potential and causes V to approach E_{Na}. This is how the HH model accounts for the rising part of the action potential.
However, as this process continues, h keeps falling. Once the value of h drops below a threshold, the sodium channels are effectively closed. Also, the approach of V toward E_{Na} kills the driving force for the sodium current. Meanwhile, the potassium channels, which have a slower time constant, open up to a large extent. This, coupled with the large driving force available for the potassium current, forces the reverse flow. The potassium ions move out of the cell, and eventually the membrane potential settles toward the hyperpolarized state.
We can build a computational simulation app to analyze the Hodgkin-Huxley model, which enables us to test various parameters without changing the underlying complex model. We can do this by designing a user-friendly app interface using the Application Builder in the COMSOL Multiphysics® software. As a first step, we create a model of the Hodgkin-Huxley equations using the Model Builder in the COMSOL software. After building the underlying model, we transform it into an app using the Application Builder. By building an app, we can restrict and control the various inputs and outputs of our model. We then pass the app to the end user, who doesn’t need to worry about the model setup process and can focus on extracting and analyzing the results of the simulation.
In our case, we implemented the underlying Hodgkin-Huxley model using the Global ODEs and DAEs interface in COMSOL Multiphysics. This interface is part of the Mathematics branch in the COMSOL software and is capable of solving systems of ordinary differential and differential-algebraic equations. It is often used to construct models for which the equations and their initial conditions are generic. In the interface, we can specify the equations and unknowns and add initial conditions. The interface, with the model equations, is shown below.
We also create the postprocessing elements, graphs, and animations in the Model Builder. Once the model is ready, we move on to the Application Builder again. We connect the elements of the model to the app’s user interface through various GUI options like input fields, control buttons, display panels, and some coded methods.
You can learn more about how to build and run simulation apps in this archived webinar.
Finally, we can design the user interface of the Hodgkin-Huxley app. With the Form Editor in the Application Builder, we can design a custom user interface with a number of different buttons, panels, and displays. This user interface features a Model Parameters section to input the different parameters of the HH model, such as the Nernst potential, maximum gate conductance, and membrane capacitance. We can also provide two types of excitation current to the model: a unit step current or an excitation train. As the parameters change, the app displays the action potential and excitation current, as well as the evolution of gate variables m, n, and h.
With the Reset and Compute buttons, it is easy to run multiple tests after changing the parameters. There are also graphical panels that display visualizations and plots of the model results and an Animate button that creates captivating animations. The Simulation Report button generates a summary of the simulation.
The user interface for the Hodgkin-Huxley Model simulation app.
Making buttons work in an app is a simple process. All we have to do is write a few methods using the Method Editor tool that comes with the Application Builder and connect them to the buttons properly. Let me illustrate with an example. We can design the Hodgkin-Huxley Model app so that when it launches, the Animate and Simulation Report buttons are inactive (see the figure below). This is because the app user will not need to use either of these buttons until after they perform a simulation.
The Simulation Report and Animate buttons are disabled at the start of the simulation.
To do so, we can write a method that instructs the app to execute certain functions during the launch.
A method that disables the Simulation Report and Animate buttons during launch.
Observe that we have disabled the Simulation Report and Animate buttons with the instructions on lines 7 and 8 of the method. If you are worried about coming up with the syntax for your methods, let me assure you that it is much simpler than it seems. If we want to record the code corresponding to a set of actions, we click the Record Code button in the Application Builder ribbon, go to the Model Builder, perform the actions, and, once done, click the Stop Recording button. The corresponding code is placed in the method, and we can then modify the instructions if necessary.
Once a simulation is complete, we would like these buttons to become active in the app. In another method, associated with the Compute button, we insert a short code segment that re-enables the two buttons and ensure that it is executed only if the solution is computed successfully. You will see that this enables both buttons.
To summarize, you can use a simulation app to easily compute and visualize parameter changes when working with a complex model that involves multiple equations and types of physics, such as the Hodgkin-Huxley model discussed here. This simulation app is just one example of how you can design the layout of an app and customize its input parameters to fit your needs. Use this app as inspiration to build your own app, whether you are analyzing the action potential in a cell with a mathematical model or teaching students about complicated math and engineering concepts. No matter what purpose your app serves, it will ensure that your simulation process is simple and intuitive.
Nerve cells are separated from the extracellular region by a lipid bilayer membrane. When the cells aren’t conducting a signal, there is a potential difference of about -70 mV across the membrane. This difference is known as the cell’s resting potential. Mineral ions, such as sodium and potassium, and negatively charged protein ions, contained within the cell, maintain the resting potential. When the cell receives an external stimulus, its potential spikes toward a positive value, a process known as depolarization, before falling off again to the resting potential, called repolarization.
Plot of a cell’s action potential.
In one example, the concentration of the sodium ions at rest is much higher in the extracellular region than it is within the cell. The membrane contains gated channels that selectively allow the passage of ions through them. When the cell is stimulated, the sodium channels open up and there is a rush of sodium ions into the cell. This sodium “current” raises the potential of the cell, resulting in depolarization. However, since the channel gates are voltage driven, the sodium gates close after a while. The potassium channels then open up and an outbound potassium current flows, leading to the repolarization of the cell.
Hodgkin and Huxley explained this mechanism of generating action potential through mathematical equations (Ref. 2). While this was a great success in the mathematical modeling of biological phenomena, the full Hodgkin-Huxley model is quite complicated. On the other hand, the FitzHugh-Nagumo model is relatively simple, consisting of fewer parameters and only two equations. One is for the quantity V, which mimics the action potential, and the other is for the variable W, which modulates V.
Today, we’ll focus on the FN model, while the HH model will be a topic of discussion for a later time.
The two equations in the FN model are

$$\frac{dV}{dt} = V - \frac{V^3}{3} - W + I$$

and

$$\frac{dW}{dt} = \varepsilon\,(V + a - bW)$$
The parameter I corresponds to an excitation current, while a and b are the controlling parameters of the model. The evolution of W is slower than that of V because the parameter ε multiplies the entire right-hand side of the second equation. The fixed points of the FN model equations are the solutions of the following equation system,

$$V - \frac{V^3}{3} - W + I = 0$$

and

$$\varepsilon\,(V + a - bW) = 0$$
The V-nullcline and W-nullcline are the curves $dV/dt = 0$ and $dW/dt = 0$, respectively, in the VW-plane. Note that the V-nullcline is a cubic curve in the VW-plane and the W-nullcline is a straight line. The slope of the line is controlled in such a way that the nullclines intersect at a single point, making it the system’s only fixed point.
The parameter I simply shifts the V-nullcline up or down. Thus, changing I modulates the position of the fixed point so that different values of I cause the fixed point to be on the left, middle, or right part of the cubic curve defined by $dV/dt = 0$.
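The location of the fixed point can be computed directly. The sketch below is plain Python rather than COMSOL code, and it assumes the common form of the FN equations, dV/dt = V - V³/3 - W + I and dW/dt = ε(V + a - bW), with the textbook values a = 0.7 and b = 0.8:

```python
# Hypothetical illustration (not COMSOL code): locate the fixed point of the
# FitzHugh-Nagumo system dV/dt = V - V^3/3 - W + I, dW/dt = eps*(V + a - b*W)
# and report which branch of the cubic V-nullcline it sits on. The parameter
# values a = 0.7, b = 0.8 are common textbook choices, assumed here.

def fixed_point_V(I, a=0.7, b=0.8):
    """Solve V - V^3/3 + I - (V + a)/b = 0 by bisection.

    The function is strictly decreasing in V (its derivative is
    1 - V^2 - 1/b < 0 for b < 1), so the root is unique.
    """
    f = lambda V: V - V**3 / 3.0 + I - (V + a) / b
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid        # f is decreasing, so the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def region(I):
    """The knees of the cubic W = V - V^3/3 + I are at V = -1 and V = +1."""
    V = fixed_point_V(I)
    if V < -1.0:
        return 1  # left branch of the cubic
    elif V > 1.0:
        return 3  # right branch
    return 2      # middle branch

print(region(0.1), region(0.5), region(2.5))  # expected: 1 2 3
```

With these assumed parameter values, I = 0.1 places the fixed point on the left branch, I = 0.5 on the middle branch, and I = 2.5 on the right branch.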
To simulate what happens when the fixed point is in each region, we can use the Global DAE interface included in the base package of COMSOL Multiphysics.
The V-nullcline is shown in green in the figure below. In the region above this nullcline, $dV/dt < 0$, while in the region below it, $dV/dt$ is positive. The W-nullcline is shown in red; in the region to the right of this straight line, $dW/dt > 0$, and to the left, $dW/dt < 0$.
Let’s first examine what happens if the fixed point is on the right side, Region 3, of the V-nullcline. We’ll say that when t, representing time, equals zero, both V and W are also at zero. In this case, both $dV/dt$ and $dW/dt$ are positive at and around the starting point, and thus both variables change as time progresses. But since V evolves faster than W, V increases rapidly while W remains virtually unchanged. In the figure, we can see that this results in a near-horizontal part of the V-W curve.
As the curve approaches the V-nullcline, the rate of change of V slows down and the change in W becomes more prominent. Since $dW/dt$ is still positive, W must increase, and the curve moves upward. The fixed point then attracts this curve and the evolution ends at the fixed point.
Plot of the VW-plane when the fixed point is on the right side of the V-nullcline.
If the fixed point is in the middle, Region 2, then what we have discussed so far holds true. The difference is that once the curve goes beyond the right knee of the V-nullcline, $dV/dt$ becomes negative and V rapidly decays. While moving left, the curve crosses the red nullcline from right to left. From this point on, while both V and W diminish, the evolution of V dominates and the curve becomes horizontal once again.
This continues until the curve hits the left part of the V-nullcline. The curve begins to hug the V-nullcline and starts a slow downward journey. When it touches the left knee of the V-nullcline, it moves rapidly toward the right part of the V-nullcline. Note that this motion never hits the fixed point and therefore keeps repeating, which we can see in the plot below.
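The trajectories described in this section can be reproduced with a small time integrator. The sketch below is plain Python, not COMSOL code, and again assumes the common FN form dV/dt = V - V³/3 - W + I, dW/dt = ε(V + a - bW) with the textbook values a = 0.7, b = 0.8, and ε = 0.08:

```python
# Hypothetical sketch (not COMSOL code): integrate the FitzHugh-Nagumo system
# dV/dt = V - V^3/3 - W + I, dW/dt = eps*(V + a - b*W) with a classical
# fourth-order Runge-Kutta scheme. With the assumed textbook parameters,
# I = 0.5 puts the fixed point on the middle branch (sustained oscillation),
# while I = 0.0 puts it on the left branch (trajectory settles to rest).

def simulate(I, a=0.7, b=0.8, eps=0.08, dt=0.05, t_end=400.0):
    def rhs(V, W):
        return V - V**3 / 3.0 - W + I, eps * (V + a - b * W)

    V, W, traj = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        k1v, k1w = rhs(V, W)
        k2v, k2w = rhs(V + 0.5 * dt * k1v, W + 0.5 * dt * k1w)
        k3v, k3w = rhs(V + 0.5 * dt * k2v, W + 0.5 * dt * k2w)
        k4v, k4w = rhs(V + dt * k3v, W + dt * k3w)
        V += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        W += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0
        traj.append((V, W))
    return traj

# Look at the second half of each run, after transients have died out.
late = lambda tr: [v for v, _ in tr[len(tr) // 2:]]
osc = late(simulate(0.5))   # middle-branch fixed point: motion keeps repeating
rest = late(simulate(0.0))  # left-branch fixed point: trajectory converges

print(max(osc) - min(osc))   # large swing: a sustained limit cycle
print(max(rest) - min(rest)) # near zero: settled at the rest state
```

With these assumptions, I = 0.5 yields the sustained oscillation of the middle-branch case, while I = 0 settles to the rest state of the left-branch case.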
Plot of the VW-plane when the fixed point is in the middle region of the V-nullcline.
That leaves us with one last case to discuss — when the fixed point is on the left part, Region 1, of the V-nullcline. The results should look like the following plot. Note that the analyses we previously performed carry over.
Plot of the VW-plane when the fixed point is on the left side of the V-nullcline.
To explore the rich dynamics of the FN model described above, we need to repeatedly change various inputs without changing the underlying model. What we want is a user interface that allows us to easily change the model parameters, perform the simulation, and analyze the new results, all without having to navigate the Model Builder tree structure.
To accomplish this, we can turn to the Application Builder. This platform allows us to create an easy-to-use simulation app that exposes all of the essential aspects of the model, while keeping the rest behind the scenes. With this app, we can rapidly change the parameters via a user-friendly interface and study the results using both static figures and animations. The app also makes it easy for students to understand the FN model’s dynamics without having to worry about creating a model.
The important parameters of the FN model, i.e. a, b, ε, and I, are displayed in the app’s Model Parameters section. The graphical panels display various quantities of interest such as the waveform for V and W. We display the phase plane diagram in the top-right panel along with the V- and W-nullclines. The position of the fixed point is easily identifiable from that plot. Once the simulation is complete, we can animate the time trajectories by choosing the animation option from the ribbon toolbar. To get a summary of the simulation parameters and results, we can select the Simulation Report button.
App showing the dynamics of the FitzHugh-Nagumo model when the fixed point is in Region 2.
We can easily reproduce the cases described in the previous section with our app. The image above, for example, shows what happens when the fixed point is in Region 2. We can easily move the fixed point to either Region 1 or 3 by making the current 0.1 or 2.5, respectively. Note that any other parameters in the app can also be changed to see if other interesting trends emerge.
The app that we’ve presented here is just one example of what you can create with the Application Builder. The design of your app, from its layout to the parameters that are included, is all up to you. The flexibility of the Application Builder enables you to add as much complexity as needed, in part thanks to the Method Editor for Java® methods. In a follow-up blog post, we’ll create an app to illustrate the dynamics of the more complicated HH model. Stay tuned!
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
In COMSOL Multiphysics, we can evaluate spatial integrals by using either integration component coupling operators or the integration tools under Derived Values in the Results section. While these integrals are always evaluated over a fixed region, we will sometimes want to vary the limits of integration and obtain results with respect to the new limits.
In a 1D problem, for example, the integration operators will normally calculate something like

$$\int_a^b f(x)\,dx$$

where a and b are fixed end points of a domain.
What we want to do instead, though, is to compute

$$F(s) = \int_a^s f(x)\,dx$$

and obtain the result with respect to the variable limit of integration s.
Since the integration operators work over a fixed domain, let’s think about how to use them to obtain integrals over varying limits. The trick is to change the integrand by multiplying it by a function that is equal to one within the limits of integration and zero outside of the limits. That is, we define a kernel function

$$k(x, s) = \begin{cases} 1, & x \le s \\ 0, & x > s \end{cases}$$

and compute

$$F(s) = \int_a^b k(x, s)\,f(x)\,dx = \int_a^s f(x)\,dx$$
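The equivalence between the kernel-weighted fixed-domain integral and the variable-limit integral is easy to verify numerically. In the illustrative Python sketch below (not COMSOL syntax), the kernel is 1 for x ≤ s and 0 otherwise, and with f(x) = x² on [0, 1] the exact variable-limit integral is s³/3:

```python
# Illustration of the kernel trick: integrating f over the whole fixed domain
# [a, b] with a kernel that is 1 for x <= s and 0 otherwise reproduces the
# variable-limit integral of f from a to s. Uses f(x) = x^2, so the exact
# answer is s^3 / 3.

def kernel_integral(f, a, b, s, n=200_000):
    """Midpoint-rule approximation of the kernel-weighted integral."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        k = 1.0 if x <= s else 0.0   # the kernel function
        total += k * f(x) * h
    return total

f = lambda x: x * x
for s in (0.25, 0.5, 0.75):
    approx = kernel_integral(f, 0.0, 1.0, s)
    exact = s**3 / 3.0
    print(round(approx, 6), round(exact, 6))
```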
As indicated in our previous blog post about integration methods in time and space, we can build this kernel function by using a logical expression or a step function.
We also need to know how the auxiliary variable s is specified in COMSOL Multiphysics. This is where the dest operator comes into play. The dest operator forces its argument to be evaluated at the destination point rather than at the source points. In our case, if we define the left-hand side of the above equation as a variable in the COMSOL software, we type dest(x) instead of s on the right-hand side.
Let’s demonstrate this with an example. In this case, a model simulates a steady-state exothermic reaction in a parallel plate reactor. There is a heating cylinder near the center and an inlet on the left, at x = 0. One of the compounds has a concentration cA.
What we want to do is to calculate the total moles per unit thickness between the inlet and a cross section at a distance of s from the inlet. We then plot the result with respect to the distance from the inlet.
First, we define an integration coupling operator for the whole domain, keeping the default integration operator name as intop1. If we evaluate intop1(cA), we get the integral for the entire domain. To vary the limit of integration horizontally, we build a kernel using a step function, which we’ll call step1. We then define a new variable, intUpTox.
Combining the integration coupling operator, dest operator, and new variable to evaluate an integral with moving limits.
Let’s see how the variable described in the image above works. As a variable, it is a distributed quantity whose value equals what the integration operator returns. During the integration, x is evaluated at every point in the domain of integration, while dest(x) is evaluated only at the point where intUpTox is computed. We then plot intUpTox along a horizontal line that spans from the inlet all the way to the outlet.
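The source/destination mechanics can be mimicked outside of COMSOL. In the hypothetical Python sketch below, intop plays the role of the fixed-domain integration operator, and for each destination point the kernel freezes dest(x) while x runs over all source points; the concentration profile cA(x) = 1 + x is made up for illustration:

```python
# Hypothetical sketch (not COMSOL syntax) of how intUpTox behaves: for each
# destination point x_dest on the plotting line, integrate the concentration
# over all source points x, weighted by a kernel (x <= x_dest). The
# concentration profile cA(x) = 1 + x is made up for illustration.

def intop(integrand, a=0.0, b=1.0, n=10_000):
    """Fixed-domain integration operator (midpoint rule)."""
    h = (b - a) / n
    return sum(integrand(a + (i + 0.5) * h) * h for i in range(n))

cA = lambda x: 1.0 + x

def intUpTox(x_dest):
    # dest(x) is frozen at the destination point; x still runs over the domain
    return intop(lambda x: cA(x) * (1.0 if x <= x_dest else 0.0))

total = intop(cA)                      # integral over the whole domain
for x_dest in (0.0, 0.5, 1.0):
    print(round(100.0 * intUpTox(x_dest) / total, 1))  # percentile mass
```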
Integrating concentration cA over a horizontally varying limit of integration.
If we instead plot intUpTox/intop1(cA)*100, we get a graph of the percentile mass to the left of a given point with respect to the x-coordinate.
In the above integral, the limit of integration is given explicitly in terms of the x-coordinate. Sometimes, though, the limit may only be given by an implicit criterion, and it may not be straightforward to invert that criterion to obtain explicit limits. For example, say that we want to know the percentage of total moles within a certain radial distance from the center of the heating cylinder. Given a distance s from the center $(x_{pos}, y_{pos})$ of the cylinder, we want a kernel function equal to one inside the radial distance and zero outside of it. To do so, we can use:

$$k = \begin{cases} 1, & \sqrt{(x - x_{pos})^2 + (y - y_{pos})^2} \le s \\ 0, & \text{otherwise} \end{cases}$$
But how do we specify s? We again use the dest operator, $s = \sqrt{(\mathrm{dest}(x) - x_{pos})^2 + (\mathrm{dest}(y) - y_{pos})^2}$, so the kernel compares the distance of each source point from the center with the distance of the destination point from the center.
We implement this method by defining a Cut Line data set to obtain the horizontal line through the hole’s center and placing a graph of our integration expression over it. It is not necessary for the cut line to be horizontal; it just needs to traverse the full domain on which the integration operator is defined. Furthermore, s should vary monotonically along the cut line.
New data set made with a cut line passing through the center of the hole.
In the image below, we added an inset by zooming in on the bottom-left area of the graph. This section shows that there is no result on the plot for a distance of less than 2 mm from the center of the heating hole because that region is not in our computational domain. Since the hole has a radius of 2 mm, the ordinate starts at 0 at an abscissa of 2 mm.
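To see how an implicit criterion behaves, here is a hypothetical Python sketch (not COMSOL syntax) that computes the percentage of mass within a radial distance s on a made-up square domain with a circular hole of radius 2 mm, mimicking the setup above. The kernel is 1 where the distance from the center is at most s and 0 elsewhere:

```python
# Illustration (not COMSOL syntax) of an implicit limit of integration: the
# fraction of total "mass" lying within a radial distance s of a point
# (x_pos, y_pos), computed with a kernel that is 1 where
# sqrt((x - x_pos)^2 + (y - y_pos)^2) <= s and 0 elsewhere. The geometry is a
# made-up 20 x 20 square domain with a circular hole of radius 2, mimicking
# the heating cylinder in the text; the density is uniform.

import math

def mass_fraction(s, x_pos=0.0, y_pos=0.0, hole=2.0, half=10.0, n=200):
    h = 2.0 * half / n
    inside = total = 0.0
    for i in range(n):
        for j in range(n):
            x = -half + (i + 0.5) * h
            y = -half + (j + 0.5) * h
            r = math.hypot(x - x_pos, y - y_pos)
            if r < hole:          # the hole is not part of the domain
                continue
            cell = h * h          # uniform density of 1 per unit area
            total += cell
            if r <= s:            # the implicit-criterion kernel
                inside += cell
    return 100.0 * inside / total

fracs = [mass_fraction(s) for s in (2.0, 5.0, 10.0, 15.0)]
print([round(f, 1) for f in fracs])
```

As in the plot above, the result starts at 0 at an abscissa equal to the hole radius and grows monotonically to 100 once s covers the whole domain.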
Percentage of mass in a domain, which is within a radial distance from the fixed point. The radial distance is varied by using the dest operator in an implicit expression.
In the previous sections, we evaluated integrals whose integrands were given. But what do we do if we know the integral and want to solve for the integrand? An example of such a problem is the Fredholm equation of the first kind

$$\int_a^b K(x, y)\,u(y)\,dy = g(x)$$

where we want to solve for the function u when given the function g and the kernel K. These types of integral equations often arise in inverse problems.
In integro-differential equations, both integrals and derivatives of the unknown function are involved. For example, say we have

$$\frac{du}{dx} + \int_a^b K(x, y)\,u(y)\,dy = g(x)$$

and want to solve for u given all of the other functions.
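To see how such a problem can be attacked numerically, here is a toy Python sketch (not COMSOL code) that solves a made-up member of this class, u'(x) + ∫₀¹ u(y) dy = g(x) with u(0) = 0, by fixed-point iteration on the integral term. With g(x) = 2x + 1/3, the exact solution is u(x) = x²:

```python
# Toy illustration (not COMSOL syntax): solve the integro-differential
# equation u'(x) + \int_0^1 u(y) dy = g(x), u(0) = 0, on [0, 1] by fixed-point
# iteration. With g(x) = 2x + 1/3 the exact solution is u(x) = x^2. The
# equation is a made-up example of the class discussed in the text.

n = 200
h = 1.0 / n
xs = [i * h for i in range(n + 1)]
g = [2.0 * x + 1.0 / 3.0 for x in xs]

def trapz(vals):
    """Composite trapezoidal rule on the uniform grid."""
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

u = [0.0] * (n + 1)                 # initial guess u = 0
for _ in range(60):
    I = trapz(u)                    # current value of the integral term
    # integrate u' = g - I from u(0) = 0 with the trapezoidal rule
    new_u = [0.0] * (n + 1)
    for i in range(1, n + 1):
        new_u[i] = new_u[i - 1] + 0.5 * h * ((g[i - 1] - I) + (g[i] - I))
    u = new_u

err = max(abs(u[i] - xs[i] ** 2) for i in range(n + 1))
print(err < 1e-4)
```

The iteration converges here because the integral term is a mild perturbation; stiffer kernels generally call for a direct discretization and, as noted later in the text, possibly regularization.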
In our Application Gallery, we have an integro-partial differential equation of the schematic form

$$\nabla\cdot(-\kappa\nabla T) = q(x, T) + \int_a^b k(x, y)\,T^4(y)\,dy$$

where we solve for the temperature T(x) and are given all of the other functions and parameters.
We can solve the above problem using the Heat Transfer in Solids interface. In this interface, we add the right-hand side of the above equation as a temperature-dependent heat source. The first source term is straightforward, but we need to add the integral in the second term using an integration coupling operator and the dest operator. For the integration operator named intop1, we can evaluate

$$\int_a^b k(s, x)\,T^4(x)\,dx$$

with intop1(k(dest(x),x)*T^4).
For more details on the implementation and physical background of this problem, you can download the integro-partial differential equation tutorial model here. Please note that some integral equations tend to be singular and we need to use regularization to obtain solutions.
In today’s blog post, we’ve learned how to integrate over varying spatial limits. This is necessary for evaluating integrals in postprocessing or formulating integral and integro-differential equations. For more information, you can browse related content on the COMSOL Blog:
For a complete list of integration and other operators, please refer to the COMSOL Reference Manual.
If you have questions about the technique discussed here or with your COMSOL Multiphysics model, feel free to contact us.
Today’s example involves solving an axisymmetric heat conduction problem on a cylinder. This model is taken from the NAFEMS benchmark collection and solved in COMSOL Multiphysics using the Heat Transfer in Solids interface. We will take the dimensions and material properties from that model and reproduce the result using the General Form PDE interface. Essentially, we are benchmarking our manual PDE approach against the Heat Transfer in Solids interface, where the axial symmetry is automatically taken care of.
The equation for a stationary temperature distribution T on a rigid solid is

$$\rho C_p\,\mathbf{u}\cdot\nabla T = \nabla\cdot(\kappa\nabla T) + Q$$
If the translational velocity u is uniformly zero, the heat capacity C_{p} has no effect and the only material property we have to specify is the thermal conductivity κ. Using the parameter kappa for thermal conductivity, we obtain the settings shown below with the Heat Transfer in Solids interface. The solution for a certain set of temperature and flux boundary conditions is also depicted.
An axisymmetric heat conduction problem solved with the Heat Transfer in Solids interface. This benchmark model is included in our Application Gallery.
Let’s now solve the same problem on the same finite element mesh using the General Form PDE interface. This interface is one of the several mathematics interfaces in COMSOL Multiphysics that facilitate solving custom equations. When the equation you want to solve is not built into one of the physics interfaces, you can use the mathematics interfaces to solve algebraic equations, ordinary differential equations, and partial differential equations. The new equations you add can be linear or nonlinear, and they can be solved either on their own or coupled with the prebuilt physics interfaces. The equations for heat transfer are a good starting point since those are already available in the built-in interfaces and we can easily compare the results with those where we have used our own PDE.
The General Form PDE interface has the template

$$e_a\frac{\partial^2 u}{\partial t^2} + d_a\frac{\partial u}{\partial t} + \nabla\cdot\Gamma = f$$

and we have to provide the mass coefficient $e_a$, the damping coefficient $d_a$, the flux Γ, and the source f to specify our mathematical model.

If we compare this template with the equation from the Heat Transfer in Solids interface, with the dependent variable u standing for temperature, our flux specification should be $\Gamma = -\kappa\nabla u$. To achieve this, we’ll specify the r- and z-components of the flux in the PDE interface as -kappa*ur and -kappa*uz, respectively. The mass and damping coefficients are not needed in a stationary analysis.
As the image below indicates, the solution using the above settings differs from the solution we obtained using the Heat Transfer in Solids interface. What is the reason for this?
An axisymmetric heat conduction problem solved with the General Form PDE interface. The effect of a curvilinear coordinate system is not accounted for here.
The short explanation for this discrepancy is that the Heat Transfer in Solids interface interprets $\nabla\cdot\Gamma$ as the divergence of the thermal flux, whereas the PDE interfaces interpret it as

$$\frac{\partial\Gamma_r}{\partial r} + \frac{\partial\Gamma_z}{\partial z}$$

This expression is not the divergence of Γ in the cylindrical coordinate system. While we’ll demonstrate how to fix this in a later section, we’ll first provide a more detailed explanation of what causes the discrepancy.
The equations that the various physics interfaces in COMSOL Multiphysics solve are mathematical abstractions of the laws of physics. Often stated as conservation laws or accounting principles, these laws describe how a certain quantity changes on account of activities on a domain and across the domain’s boundary. While conservation laws hold for all materials, the extent of a given material’s response to domain forces and boundary fluxes differs from one material to another. The material responses are specified by so-called constitutive equations or equations of state. We have to make sure that constitutive equations do not violate the second law of thermodynamics. Together, conservation laws and valid constitutive equations provide enough information to derive a well-posed mathematical model.
For instance, thermal conduction in a rigid solid object is governed by the law of conservation of thermal energy. The rate of change of thermal energy equals the rate at which heat is supplied by sources in the domain plus the heat flux through the boundary. From empirical observations, the heat flux in solids is proportional to the temperature gradient and is directed from hotter areas to colder areas. This is the constitutive equation. From here, we can use multivariable calculus to write the heat transfer equation for an isotropic material, for example, as

$$\rho C_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) = \nabla\cdot(\kappa\nabla T) + Q$$

where T is temperature, the primary variable whose evolution we want to track; ρ, $C_p$, and κ are material parameters describing density, heat capacity, and thermal conductivity, respectively; u is the translational velocity; and Q is the heat source per unit volume. Boundary conditions are also used to describe what happens on the boundary.
After deriving mathematical models like the above equation, the next step is to solve them for the primary dependent variable and other quantities of interest. In heat conduction, for example, we want to obtain the temperature in time and space. To decide how we will identify a point in space, we select a coordinate system. Making a wise choice for a coordinate system can facilitate our analysis, whereas a bad choice can make our work unnecessarily complicated. No matter which coordinate system we choose, it is important to make sure that the physical meaning of the equation stays the same.
The heat conduction equation, for instance, contains the gradient of temperature. In the Cartesian coordinate system, we have

$$\nabla T = \frac{\partial T}{\partial x}\mathbf{e}_x + \frac{\partial T}{\partial y}\mathbf{e}_y + \frac{\partial T}{\partial z}\mathbf{e}_z$$

whereas in the cylindrical coordinate system, we have

$$\nabla T = \frac{\partial T}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial T}{\partial \phi}\mathbf{e}_\phi + \frac{\partial T}{\partial z}\mathbf{e}_z$$
In the first case, partial derivatives of a scalar, with respect to independent variables, provide components of the gradient. This is not the case in a curvilinear coordinate system like the cylindrical coordinate system. As referenced earlier, the physical meaning of the gradient of temperature, as the vector that points in the direction of the greatest increase of temperature with a magnitude equal to the rate of increase, should stay the same. To ensure such invariance, we have to use covariant derivatives instead of regular partial derivatives. In the Cartesian coordinate system, covariant derivatives and partial derivatives overlap; in curvilinear systems, they do not.
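The coordinate invariance of the gradient can be checked numerically. In the illustrative Python snippet below (an identity check, not COMSOL code), the cylindrical gradient formula is evaluated for the test function T = r²z and mapped back to the Cartesian basis, where it matches the Cartesian gradient component by component:

```python
# Numeric check (illustrative Python) that the cylindrical gradient formula
# grad T = dT/dr e_r + (1/r) dT/dphi e_phi + dT/dz e_z gives the same
# physical vector as the Cartesian gradient. Test function:
# T = r^2 z = (x^2 + y^2) z.

import math

x, y, z = 0.6, 0.8, 1.3
r, phi = math.hypot(x, y), math.atan2(y, x)

# Cartesian gradient of T = (x^2 + y^2) z
cart = (2 * x * z, 2 * y * z, x * x + y * y)

# Cylindrical components: dT/dr = 2 r z, (1/r) dT/dphi = 0, dT/dz = r^2
g_r, g_phi, g_z = 2 * r * z, 0.0, r * r

# Express the cylindrical result in the Cartesian basis:
# e_r = (cos phi, sin phi, 0), e_phi = (-sin phi, cos phi, 0), e_z = (0, 0, 1)
cyl = (g_r * math.cos(phi) - g_phi * math.sin(phi),
       g_r * math.sin(phi) + g_phi * math.cos(phi),
       g_z)

print(all(abs(c1 - c2) < 1e-9 for c1, c2 in zip(cart, cyl)))
```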
The other differential operator we have is the divergence operator $\nabla\cdot$. In the heat transfer equation, this operator acts on the flux $\Gamma = -\kappa\nabla T$. When using the Cartesian coordinate system, taking the divergence of the flux involves differentiating only the components $\Gamma_x$, $\Gamma_y$, and $\Gamma_z$ of the flux. The basis vectors $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ remain the same from one point to another. For the cylindrical and other curvilinear coordinate systems, the basis changes from point to point, and taking the divergence thus involves taking derivatives of the basis vectors as well. Covariant derivatives take this into account. Simple sums of partial derivatives, when used in curvilinear coordinate systems, lose the physical meaning of the divergence as a measure of how much a vector field spreads out from a given point.
Similarly, the dot product should always coincide with the product of the magnitudes of the two vectors and the cosine of the angle between them. In the Cartesian coordinate system, this is simply the sum of the products of the corresponding components of the two vectors. In curvilinear coordinate systems, the metric tensor of the coordinate system is added into the mix.
Because of the mathematical complexity that arises in curvilinear coordinate systems, you might wonder why we would want to use anything other than the Cartesian coordinate system. But, as we’ll highlight here, there are some applications where curvilinear coordinate systems are particularly useful.
Consider a three-dimensional problem where the geometry, material properties, boundary conditions, and heat sources are symmetric about an axis. The solution is rotationally symmetric about that axis. If we use a cylindrical coordinate system with the z direction along the symmetry axis, all partial derivatives with respect to the azimuthal angle φ vanish. What that leaves us with is a two-dimensional problem in the rz-plane. This results in an equation that is easier to solve than the one in the Cartesian coordinate system, where all three spatial partial derivatives remain in the equation. In the finite element modeling of such problems, using an axisymmetric formulation facilitates the use of 2D meshes rather than 3D meshes, which leads to significant savings in both memory and time.
Curvilinear coordinate systems are also useful when material properties, such as the thermal conductivity κ, are not isotropic. If the anisotropy occurs in certain directions, we can define coordinate systems that align with those preferential directions and simplify the material property input. Note that COMSOL Multiphysics features a Curvilinear Coordinates interface that can be used to generate nonstandard coordinate systems. You can learn more about how to use the Curvilinear Coordinates interface and how to solve anisotropic problems on curvilinear coordinate systems on the COMSOL Blog.
For our particular example, we’ll focus on using the standard cylindrical coordinate system.
In COMSOL Multiphysics, there are several physics-based interfaces that solve equations arising from one or more conservation laws. For instance, in the heat transfer in rigid solids example referenced above, we follow the conservation of thermal energy. In isothermal stress analysis problems, we follow the physical laws for conservation of mass, linear momentum, and angular momentum. When you use one of these physics interfaces, the software maintains the physical meanings of differential operators by using the corresponding expression for the Cartesian or curvilinear coordinates. That is, covariant derivatives are used instead of partial derivatives and, as a result, the coordinate system invariance is maintained.
But when you use the Coefficient Form PDE or General Form PDE interfaces, the software uses partial derivatives.
In other words, the physics interfaces understand $\nabla u$, $\nabla\cdot\Gamma$, and $\nabla\times\Gamma$ to be the gradient, divergence, and curl, respectively, of a physical scalar u or higher-order tensor Γ. In the PDE interfaces, on the other hand, these tensorial meanings no longer apply and the operators are replaced by partial derivatives with respect to the independent variables. After all, the independent variables may not even represent physical coordinates.
Consider $\nabla\cdot\Gamma$. In the physics interfaces, this means the divergence. But in the PDE interfaces, it means $\frac{\partial\Gamma_x}{\partial x} + \frac{\partial\Gamma_y}{\partial y} + \frac{\partial\Gamma_z}{\partial z}$ in a 3D component and $\frac{\partial\Gamma_r}{\partial r} + \frac{\partial\Gamma_z}{\partial z}$ in a 2D axisymmetric component. The first expression coincides with the divergence in Cartesian coordinates, but the latter is not the divergence of Γ.
In a cylindrical coordinate system, the divergence is given by

$$\nabla\cdot\Gamma = \frac{1}{r}\frac{\partial (r\Gamma_r)}{\partial r} + \frac{1}{r}\frac{\partial \Gamma_\phi}{\partial \phi} + \frac{\partial \Gamma_z}{\partial z}$$

In an axisymmetric problem, the second term in this sum vanishes, leaving

$$\nabla\cdot\Gamma = \frac{\partial \Gamma_r}{\partial r} + \frac{\partial \Gamma_z}{\partial z} + \frac{\Gamma_r}{r} \qquad (1)$$

The expression in the PDE interfaces does not contain the last term, $\Gamma_r/r$, as the operator is interpreted simply as a sum of partial derivatives. For the divergence of a physical quantity in cylindrical coordinates, we have to compensate for the last term in (1). We will use an example to highlight one approach for doing so using the source term.
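The size of the missing term is easy to check numerically. For the purely radial test field Γ_r = r², the cylindrical divergence (1/r) ∂(rΓ_r)/∂r is 3r, the plain partial-derivative sum gives only ∂Γ_r/∂r = 2r, and the difference is exactly Γ_r/r = r. An illustrative Python check (not COMSOL syntax):

```python
# Quick numeric check (illustrative, not COMSOL syntax) that the plain
# partial-derivative sum misses the Gamma_r / r term of the cylindrical
# divergence. Test field: Gamma_r = r^2, Gamma_phi = Gamma_z = 0.

def gamma_r(r):
    return r * r

def ddr(f, r, h=1e-6):
    """Central finite difference."""
    return (f(r + h) - f(r - h)) / (2.0 * h)

r = 1.5
true_div = ddr(lambda rr: rr * gamma_r(rr), r) / r   # (1/r) d(r*Gamma_r)/dr
naive_sum = ddr(gamma_r, r)                          # d(Gamma_r)/dr only
missing = gamma_r(r) / r                             # the compensating term

print(round(true_div, 6), round(naive_sum, 6), round(missing, 6))  # 4.5 3.0 1.5
```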
Let’s now go back to our initial problem. What we want to solve is the stationary problem

$$\nabla\cdot(-\kappa\nabla u) = 0$$

where $\nabla\cdot$ is the divergence operator. For an axisymmetric problem, we have

$$\frac{\partial}{\partial r}\left(-\kappa\frac{\partial u}{\partial r}\right) + \frac{\partial}{\partial z}\left(-\kappa\frac{\partial u}{\partial z}\right) - \frac{\kappa}{r}\frac{\partial u}{\partial r} = 0$$

This is equivalent to

$$\frac{\partial}{\partial r}\left(-\kappa\frac{\partial u}{\partial r}\right) + \frac{\partial}{\partial z}\left(-\kappa\frac{\partial u}{\partial z}\right) = \frac{\kappa}{r}\frac{\partial u}{\partial r}$$

The left-hand side of this equation is what the General Form PDE interface in an axisymmetric component understands to be $\nabla\cdot\Gamma$. The right-hand side of the equation is added as an extra source term, as shown in the screenshot below. The solution now matches the solution that we obtained using the Heat Transfer in Solids interface.
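The effect of the compensating source term can be reproduced in a one-dimensional finite-difference sketch (plain Python, not COMSOL code, with a made-up radial conduction problem): between r = 1 and r = 2 with u(1) = 0 and u(2) = 1, the axisymmetric equation u'' + u'/r = 0 has the exact solution u = ln(r)/ln(2), while dropping the u'/r term, as the plain partial-derivative interpretation does, yields a straight line instead:

```python
# Illustrative 1D sketch (not COMSOL code) of the compensating source term.
# Radial heat conduction between r = 1 and r = 2 with u(1) = 0, u(2) = 1
# satisfies u'' + u'/r = 0, with exact solution u = ln(r)/ln(2). Solving the
# "naive" equation u'' = 0 (the partial-derivative interpretation) gives a
# straight line; keeping the u'/r term restores the correct answer.

import math

def solve(n=200, compensate=True):
    h = 1.0 / n
    r = [1.0 + i * h for i in range(n + 1)]
    # Central differences for the interior rows:
    # (u[i-1] - 2u[i] + u[i+1])/h^2 + c*(u[i+1] - u[i-1])/(2 h r[i]) = 0
    c = 1.0 if compensate else 0.0
    a = [0.0] * (n + 1)   # sub-diagonal
    b = [1.0] * (n + 1)   # diagonal (boundary rows: u = prescribed value)
    d = [0.0] * (n + 1)   # super-diagonal
    rhs = [0.0] * (n + 1)
    rhs[n] = 1.0          # boundary values u(1) = 0, u(2) = 1
    for i in range(1, n):
        a[i] = 1.0 / h**2 - c / (2.0 * h * r[i])
        b[i] = -2.0 / h**2
        d[i] = 1.0 / h**2 + c / (2.0 * h * r[i])
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * d[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (n + 1)
    u[n] = rhs[n] / b[n]
    for i in range(n - 1, -1, -1):
        u[i] = (rhs[i] - d[i] * u[i + 1]) / b[i]
    return r, u

r, u_good = solve(compensate=True)
_, u_bad = solve(compensate=False)
exact = [math.log(ri) / math.log(2.0) for ri in r]
err_good = max(abs(ug - ex) for ug, ex in zip(u_good, exact))
err_bad = max(abs(ub - ex) for ub, ex in zip(u_bad, exact))
print(err_good < 1e-4, err_bad > 0.01)
```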
An axisymmetric heat conduction problem solved with the General Form PDE interface. The effect of a curvilinear coordinate system is accounted for here.
Of course, the term added as a source is not a physical heat source. It simply compensates for the difference between the covariant differentiation that we need for the divergence and the partial differentiation that the PDE interface performs. A good practice is to add this term in the PDE settings window and add physical heat sources separately by right-clicking the PDE interface and selecting Source.
In this example, only the divergence operator was used. For other differential operators, such as the curl, you should make similar compensations when solving a physics-based problem.
These adjustments are for curvilinear coordinate systems embedded in a Euclidean space. If the underlying space you are working with is not Euclidean, there are no Cartesian coordinates at all. In such cases, you need to be vigilant, even when using 2D or 3D components without axisymmetry. COMSOL Multiphysics features built-in physics interfaces for these types of problems. They include the Shell and Membrane interfaces in structural mechanics, the Thin-Film Flow interfaces in fluid mechanics, the Electric Currents, Shell interface in electromagnetics, and more. For a more extensive list, take a look at our product specification chart.
Another approach to equation-based modeling is to use a physics interface as a PDE template for a problem with a similar mathematical structure. For example, to solve a physical problem that has a convection-diffusion-reaction nature, we can use either the Heat Transfer or the Transport of Diluted Species interfaces. These interfaces keep the tensorial meanings of differential operators in an axisymmetric component. All you need to do is to use the boundary and domain conditions that mathematically match the items you want to add.
The only downside to this strategy is that the units for the variables may not match the units you have. In such cases, you can use nondimensionalization. Once you obtain the dimensionless equations, you can make your COMSOL model dimensionless by going to the root of the Model Builder and setting Unit System to None.
With nondimensionalization, you can use a physics interface to solve a different type of problem with a similar mathematical structure but different dimensions.
With the Coefficient Form PDE and General Form PDE interfaces in COMSOL Multiphysics, you can implement partial differential equations to solve novel problems not yet built into the software. These partial differential equations may or may not be derived from a physical problem. Therefore, the differential operators in the PDE interfaces are by design kept simple and not converted to tensorial operators automatically. When solving a physical problem using curvilinear coordinate systems — say when you want to exploit axisymmetry — make sure that the items you enter in the software’s templates accurately represent your physical problem. In the physics-based interfaces, COMSOL Multiphysics takes care of this. But in the PDE interfaces, the software cannot attach any meaning to your equations, leaving the responsibility to you.
One way to address this is to add extra source terms to balance the difference between partial derivatives and covariant derivatives. Another way is to use an existing physics interface that has the same mathematical structure as your equation. If you have any questions about these strategies or other questions pertaining to this topic, please do not hesitate to contact us.
For more details on differential operators in curvilinear coordinate systems and partial differential equations on surfaces, you can turn to various books on tensor calculus or differential geometry. Here are some of my personal favorites, in no particular order:
The demands on a mesh generator can be quite extensive. The generated mesh, for instance, must conform to the geometry as well as consist of elements of optimal size and shape. Elements whose edges and angles are close to equal in size give a greater chance of reaching solution convergence as well as more accurate results. Further, the generator may have to grade element size over short distances, creating very small elements in tight spaces and very large elements in more open regions, without causing problems for the solution algorithms. Finally, the generator should preferably act automatically and work for all types of geometries.
Every meshing operation in COMSOL Multiphysics creates a mesh that conforms to the respective geometry. But the tetrahedral mesh generator, which operates under the Free Tetrahedral node in the Model Builder, is the only mesh generator in 3D that is fully automatic and can be applied to every geometry. And since it creates an unstructured mesh — that is, a mesh with irregular connectivity — it is well suited for complex-shaped geometries requiring a varying element size. Since tetrahedral meshes in COMSOL Multiphysics are used for a variety of physics, including multiphysics, the mesh generator needs to be very flexible. It should be possible to generate very fine meshes, very coarse meshes, meshes with fine resolution on curved boundaries, meshes with anisotropic elements in narrow regions, etc.
A tetrahedral mesh of a gas turbine.
Thanks to recent updates in COMSOL Multiphysics, you can now achieve improved quality in your tetrahedral meshing and thus advance the reliability of your simulation results. To demonstrate this, let’s walk through the steps of generating a tetrahedral mesh.
Most tetrahedral mesh generators fall into one of the following three classes: octree-based, advancing front, and Delaunay-based.
The tetrahedral mesh generator in COMSOL Multiphysics is a Delaunay-based mesh generator. As a Delaunay mesher, the process of generating a tetrahedral mesh can be divided into the five main steps described below. The third and fifth steps of the meshing process have been significantly improved with upgrades to COMSOL Multiphysics version 5.2a. To illustrate the different steps, we’ll use a very coarse mesh of the piston geometry, which is available in the meshing tutorials of the Application Library within COMSOL Multiphysics.
The geometry of the piston_mesh application.
If you monitor the Progress window when building a tetrahedral mesh, you can see that the first 35% of the progress is devoted to the generation of the boundary mesh. But creating a boundary mesh that is well suited for the subsequent steps of the tetrahedral mesh generation process is not a straightforward task. There are several issues that must be addressed, such as:
The boundary mesh for the piston_mesh application using the element size Extremely Coarse.
The next step is to create the Delaunay tetrahedralization of the boundary mesh points; this tetrahedralization forms the convex hull of these points. A Delaunay tetrahedralization has some nice mathematical properties, such as that no point of the point set lies inside the circumsphere of any tetrahedron. In 2D, the Delaunay triangulation of a point set maximizes the minimum angle over all triangles in the triangulation, although this property does not carry over to 3D. Moreover, there is no guarantee that the edge and triangle elements of the boundary mesh exist as edges and triangles in the Delaunay tetrahedralization of the boundary mesh points, not even for a convex boundary mesh. We will deal with this in the following step.
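The circumsphere property is easy to check numerically. The Python sketch below (using SciPy's Qhull-based Delaunay tetrahedralization and a random point set; this illustrates the mathematical property only and is unrelated to COMSOL's mesher) verifies that no input point lies strictly inside the circumsphere of any tetrahedron:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((15, 3))
tet = Delaunay(pts)

def circumsphere(p):
    # Center c is equidistant from all four vertices:
    # 2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2  for i = 1, 2, 3
    A = 2.0 * (p[1:] - p[0])
    b = np.einsum("ij,ij->i", p[1:], p[1:]) - p[0] @ p[0]
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p[0] - c)

ok = True
for simplex in tet.simplices:
    c, r = circumsphere(pts[simplex])
    d = np.linalg.norm(pts - c, axis=1)
    inside = d < r - 1e-9        # strictly inside, with a small tolerance
    inside[simplex] = False      # the tetrahedron's own vertices lie on the sphere
    ok = ok and not inside.any()
print(ok)  # True
```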
The Delaunay tetrahedralization of the boundary mesh points forming the convex hull of the points.
So far, we have generated the final boundary mesh and a Delaunay tetrahedralization of the boundary mesh points. Here, we will enforce the edges and triangles of the boundary mesh into the tetrahedralization. This is the most demanding part of the entire meshing process.
Last year, we released a completely new algorithm for this step, and the algorithm was significantly improved in COMSOL Multiphysics version 5.2a. In earlier versions, when meshing complex geometries, you may have received error messages like “failed to respect boundary element edge on geometry face” or “internal error in boundary respecting”. Such failures originated from this part of the meshing process. With the new improvements, it is possible to safely remove all tetrahedra on the outside of the boundary once all of the edges of the boundary mesh have been enforced into the tetrahedralization and all of the tetrahedra intersecting the triangles of the boundary mesh have been addressed.
The upper left part of the figure shows the boundary mesh in gray and the Delaunay tetrahedralization of the boundary mesh points in cyan. The zoomed view in the lower right part of the figure shows a few of the hundreds of tetrahedra that intersect edges and triangles of the boundary mesh. In this step, the tetrahedralization is modified such that no tetrahedron intersects the boundary mesh. Some additional points (so-called Steiner points) might be inserted to achieve a boundary conforming tetrahedralization.
Now that we have a tetrahedralization across the entire geometry, with the boundary mesh from the first step serving as an outer boundary, we still have a tetrahedralization that does not contain any interior points (except the Steiner points that may have been added previously). Our next task is to refine the tetrahedralization by inserting points in the interior until the specified element size is achieved everywhere. The points can easily be inserted using a regular Delaunay refinement scheme. Some special treatment is needed, though, as at this stage the tetrahedralization does not fulfill the Delaunay properties everywhere.
The upper left part of the figure shows a cut through of the tetrahedralization after the third step, with only a few interior points inserted during that part of the meshing process. The lower right part of the figure shows the same cut through the tetrahedralization after the fourth step, where the tetrahedralization fulfills the element size specification also in the interior of the domain.
At this point, we are almost done. But before we return the mesh to the user, we need to first improve the quality of the tetrahedra. Each tetrahedron can be assigned a quality value in the range of 0 to 1. A regular tetrahedron has a quality of 1 and a totally flat tetrahedron has a quality of 0. COMSOL Multiphysics version 5.2a delivers a new algorithm that further improves the quality of the meshes. The algorithm also features an option for reducing the risk of obtaining inverted curved elements as well as an option for avoiding the creation of elements that are too large.
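A common way to define such a quality value is a normalized volume-to-edge-length ratio, sketched below in Python. This is a standard textbook measure, not necessarily the exact formula COMSOL uses:

```python
import numpy as np

def tet_quality(p):
    # Normalized so that a regular tetrahedron scores 1 and a flat one scores 0.
    vol = abs(np.linalg.det(p[1:] - p[0])) / 6.0
    edges = [p[i] - p[j] for i in range(4) for j in range(i)]
    l_rms = np.sqrt(np.mean([e @ e for e in edges]))  # RMS edge length
    return 6.0 * np.sqrt(2.0) * vol / l_rms**3

regular = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
print(round(tet_quality(regular), 6))  # 1.0
print(tet_quality(flat))               # 0.0
```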
In this step, we will increase the quality of the worst elements such that the quality of the mesh, which is largely dictated by the quality of its worst elements, is sufficiently good for a typical simulation. The new algorithm has a broader palette of operations for improving the quality of the mesh, including point relocation (often referred to as smoothing) and topological modification operations such as edge and face swapping, edge collapsing, and vertex insertion. By applying these operations repeatedly, a vast number of distinct tetrahedralizations can be reached for a domain defined by its boundary mesh, so a mesh with better quality is usually within reach. However, for a given tetrahedralization where the minimum element quality cannot be improved by smoothing, it is not obvious which topological operation to perform to improve the quality. Sometimes even a series of operations must be applied, such as vertex insertion followed by edge flipping and smoothing, before an optimized mesh is reached.
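Of these operations, point relocation is the simplest to illustrate. Below is a minimal Laplacian smoothing sketch in Python; the mesh connectivity and relaxation factor are invented for illustration, and real mesh optimizers combine smoothing with topological operations and quality checks:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iters=30, relax=0.5):
    # Repeatedly move each free vertex toward the average of its neighbors.
    pts = points.copy()
    for _ in range(iters):
        new = pts.copy()
        for v, nbrs in neighbors.items():
            if v not in fixed:
                new[v] = (1 - relax) * pts[v] + relax * pts[list(nbrs)].mean(axis=0)
        pts = new
    return pts

# A square with a badly placed interior vertex connected to all four corners:
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.1]], float)
smoothed = laplacian_smooth(pts, {4: [0, 1, 2, 3]}, fixed={0, 1, 2, 3})
print(np.round(smoothed[4], 3))  # [0.5 0.5]
```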
There are three optimization levels included in the algorithm: Basic, Medium, and High. These levels determine the amount of effort put into the optimization process. Say, for instance, you build your mesh using the Basic option (the default) and encounter problems with convergence when computing the solution or perhaps reduced accuracy in your results due to a poor quality mesh. In this case, you can rebuild your mesh with a higher optimization level to get a better chance at convergence with better results.
The quality improvement algorithm offers three levels of optimization (Basic, Medium, and High) that determine how much effort is put into the optimization process.
These mesh cut throughs show the elements with the lowest quality before optimization (upper left), with the optimization level set to Basic (middle), and with the optimization level set to High (lower right). The red tetrahedra have a quality value less than 0.1, while the yellow tetrahedra have a quality value between 0.1 and 0.25. The gray triangles define the mesh cut through’s boundary mesh.
The algorithm also offers two options for reducing the risk of obtaining inverted curved elements as well as reducing the size of the largest tetrahedra. Note that these selections come at the cost of a longer meshing time and slightly lower element quality.
If the geometry includes fillets or other curved regions with a relatively coarse mesh, and you solve with a geometry shape order higher than one, you can select the Avoid inverted curved elements check box. This will let the optimization algorithm try to reduce the number of mesh elements that become inverted when they are curved. If the computation is sensitive to mesh elements that are too large, you can select the Avoid too large elements check box in an effort to avoid generating tetrahedra that are larger than the specified element size.
Mesh generation is a concept that is rather easy to understand: It is all about partitioning a geometry into pieces of linear shape. There are, however, some difficulties to address when it comes to generating a tetrahedral mesh for simulation purposes. There will always be challenging geometries for which the generator fails or gives a mesh with elements that are of a lower quality. But with recent improvements to the tetrahedral mesher in COMSOL Multiphysics, you can now better address such complex geometries and further advance your modeling processes for continued optimization.
When assigning materials to your model geometry, you may want to experiment with a few options and see how different materials affect your simulation results. In COMSOL Multiphysics, you can automate this process via the Material Sweep parametric study and Material Switch feature. As such, you do not need to add several materials one at a time and compute for the corresponding solution. In addition to saving you time during model setup, this facilitates the comparison of results during postprocessing.
Screenshot from the material sweeps video, showcasing the ability to switch between different results based on the material.
The Material Switch node houses the materials that you want to sweep over and provides functionality to automatically switch materials while your model is solving.
In the five-minute tutorial video below, we outline the procedure for performing a material sweep in your model and then walk you through the steps for doing so. This includes adding a Material Switch node; specifying parts of the geometry that the material sweep will be applied to; selecting the materials to switch between; adding the Material Sweep parametric study; and finally postprocessing the sweep’s results. We also briefly discuss how you can customize the materials being swept over as well as how to easily toggle between different sets of results obtained from your material sweep.
As mentioned earlier, COMSOL Multiphysics features a large collection of built-in materials that are available regardless of which modules you hold a license for. Upon adding any of these materials to your model, you will notice that the material properties are provided with certain default values.
In some cases, material properties are constant. In other cases, they may vary in space or be dependent on a physics variable such as temperature. If you want to make a constant material property variable, or if the built-in variation is not what you want to use, you can define your own function. In COMSOL Multiphysics, there are three types of functions that you can use to define a material property: Interpolation, Analytic, and Piecewise functions.
Data table and plot for an Interpolation function.
Interpolation functions are used to define a material property through reading in data from a table or file that contains values of the function at discrete points. You can enter this data manually or import it from an external file. This is useful when you have material properties that are obtained from experiments. COMSOL Multiphysics will automatically evaluate and then generate a function that fits the data you provide. Then, you can also choose how the function interpolates between the measured values or extrapolates outside of your specified range of data.
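A minimal Python analogue of an Interpolation function, using SciPy. The conductivity data below is invented for illustration, not taken from a material library:

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical measured thermal conductivity at discrete temperatures:
T = np.array([300.0, 400.0, 500.0, 600.0])  # K
k = np.array([14.9, 16.6, 18.3, 19.8])      # W/(m*K)

# Linear interpolation between data points, constant extrapolation outside them,
# mirroring two of the choices an Interpolation function offers.
k_of_T = interp1d(T, k, kind="linear", bounds_error=False,
                  fill_value=(k[0], k[-1]))

print(round(float(k_of_T(450.0)), 3))  # 17.45
print(float(k_of_T(250.0)))            # 14.9
```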
Input fields and plot for an Analytic function.
Analytic functions are used to define a function using built-in mathematical functions or other user-defined functions. You can enter an expression, specify the input arguments, and define the value range for each of the arguments in your equation.
Settings for a Piecewise function.
Piecewise functions are used to define a material property using different expressions over different intervals. The start and end point for each interval, as well as the expression applicable to that interval, can be entered manually or imported from an external file. The intervals that you define cannot overlap, and there cannot be any gaps between them. That way, the function is uniquely defined everywhere in terms of the independent variable.
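The same idea in a short Python sketch, with NumPy's piecewise standing in for the COMSOL feature. The intervals and expressions are invented, chosen so the pieces meet at T = 500 with no overlap or gap:

```python
import numpy as np

# Hypothetical heat capacity defined on two adjoining intervals:
def cp(T):
    T = np.asarray(T, dtype=float)
    return np.piecewise(
        T,
        [T < 500.0, T >= 500.0],
        [lambda t: 420.0 + 0.2 * t,              # first interval
         lambda t: 520.0 + 0.1 * (t - 500.0)])   # second interval

vals = cp(np.array([400.0, 500.0, 600.0]))
print(vals)  # [500. 520. 530.]
```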
In the following seven-minute tutorial video, we discuss how to create and define Interpolation, Analytic, and Piecewise functions for any material property in your model, the advantages of using each type, and best practices to keep in mind when creating them. We also go over the settings for each function type, demonstrate how the selection of options such as Extrapolation will change your data plot, and show how you can call out your function in the Material Contents table.
While creating a model in COMSOL Multiphysics, you will at some point need to identify the materials that your objects are made of. Normally, this requires completing a series of steps in which you open the Add Material or Material Browser windows; choose the material; select and add it to your component; and then go into the material node’s settings to select the parts of the geometry to which the material applies. You would then need to repeat this procedure for each unique material that you want to include in your simulation. In COMSOL Multiphysics, you can expedite the above process using global materials and material links.
Screenshot displaying use of the global materials and material links functionality.
When a material is added under the Global Materials node, it is available to use anywhere throughout the model. Further, global materials can be used for any geometric entity level, whether you assign them to domains, boundaries, edges, or points.
Material links are used locally under a component’s material node to refer to a global material. This is advantageous when you have a COMSOL Multiphysics file that contains multiple components that are made up of similar materials, as you only need to specify the material once under the Global Materials node and can then link to it under each individual component. It is also beneficial for models in which the same material is assigned to different geometric entity levels such as domains and boundaries. In this case, you would again only need to add the material once and could also add a separate Material Link node for each geometric entity type.
In the six-minute tutorial video below, we show you how to use the global materials and material links functionality. We begin by demonstrating how to add global materials to your model and discuss the differences between adding materials globally and locally. Then, we walk through the steps of how to add material links to your model components and assign them to the geometry. After watching this video, we encourage you to try out this functionality yourself and see firsthand the ease with which you can assign materials in a model that contains multiple components or when you want to use the same materials on multiple parts.
You can significantly expedite the process of assigning materials to your model geometry using the features and functionality discussed here. To complement these tools, we’ve created instructional videos to help you learn how to utilize them in your own simulations. Whether you have a model file that involves multiple components, need to define a complicated material property, or have to test different materials in your simulation, COMSOL Multiphysics features built-in tools that make this process simpler and more efficient for you.
The STL format, as mentioned previously on the blog, is one of the standard file formats for 3D printing, but it is also often used as the format for 3D scans. STL files contain only the triangulated surface, which we can also call a surface mesh, of a 3D object. COMSOL Multiphysics supports multiple objects in a single STL file, according to a widely used nonstandard extension of the format.
In its standard form, a text STL file begins with a line starting with "solid" and ends with a line starting with "endsolid". Many programs support text STL files that contain several of these sections in one file. Importing this type of file now results in multiple geometric objects, the number of which depends on how many such sections are found.
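Counting these sections is straightforward. The Python sketch below shows how the number of geometric objects is determined for a text STL file using the nonstandard multi-object extension; the file content is a made-up two-object example:

```python
def count_stl_solids(text):
    # Each "solid ... endsolid" section becomes one geometric object on import.
    return sum(1 for line in text.splitlines()
               if line.lstrip().lower().startswith("endsolid"))

two_parts = """solid part1
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
endsolid part1
solid part2
endsolid part2
"""
print(count_stl_solids(two_parts))  # 2
```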
An STL file of a vertebra imported into COMSOL Multiphysics. This sequence shows a number of the steps that can be taken to simplify the mesh and thus the simulation of this geometry: the imported surface mesh, the generated geometry, the geometry after partitioning and, finally, embedding this geometry into a surrounding volume and the final mesh that results from this. Image credit: Mark Yeoman, Continuum Blue Ltd., UK.
The NASTRAN® file format is the most common format for exchanging 3D meshes among simulation tools. This format supports hundreds of NASTRAN® file entries describing various mesh elements, loads, and materials, making it possible to define a complete finite element model. The COMSOL Multiphysics software supports the import of the mesh, selections, and materials.
If we compare these two formats from the point of view of a COMSOL Multiphysics user, we will find some similarities. Both formats contain a mesh, although the NASTRAN® format can contain both volume and boundary mesh in addition to all of the other information mentioned above. When importing these files into COMSOL Multiphysics, the most important concern is how to prepare them for simulation. What we need to do depends on the type of simulation as well as on the contents of the files. The mesh in a NASTRAN® file may already be of a good enough quality to suit our simulation; in this case, we don’t really need to do anything else but import and start defining the physics.
This blog post concerns all other cases, when we need to create a new mesh in COMSOL Multiphysics or modify the imported mesh in some way. Modifications can include creating a solid object from the imported surface mesh, adding a surrounding domain, or simply partitioning and deleting portions of what is imported. All of these operations, including the creation of a new mesh, require working with a geometry in COMSOL Multiphysics. To help address cases like these, let’s take a look at how to create a geometry using an imported STL or NASTRAN® file.
A couple of versions back, the steps to create a geometry from a mesh in COMSOL Multiphysics became more user friendly and intuitive. Now, when you import an STL file or a mesh in NASTRAN® file format as a geometry, the software automatically creates a Mesh Part that we can easily access by clicking the Go To Source button next to the Mesh drop-down list.
The Go to Source button in the Geometry Settings window will take you to the created Mesh Part.
In the Mesh Part, we can influence how the boundary of the mesh is partitioned by using the Boundary partitioning setting. Using a Minimal boundary partitioning creates as few faces as possible, sometimes only one. The Minimal setting is usually a good choice when the source is a 3D scan. Then there’s the default Automatic boundary partitioning, which is best suited for cases where the imported mesh has a natural face partitioning (i.e., when the mesh is from a CAD design). There’s also an option to manually set the parameters by choosing Detect faces.
Left: The Boundary partitioning settings of the mesh import. Right: An STL file imported with the Minimal boundary partitioning setting. Geometry image credit: Mark Yeoman, Continuum Blue Ltd., UK.
The Detect faces option is most useful when a mesh comes from a 3D scan and has one or more planar faces. In this case, we prefer to have as few faces as possible, but we want the planar faces to have their own boundaries. To accomplish this, we can set the Maximum boundary neighbor angle to 180 degrees as this produces the same behavior as the Minimal setting. To detect the planar faces, it is important to keep the Detect planar boundaries check box selected.
For times when changing the Maximum boundary neighbor angle is not satisfactory for our needs, we may also need to adjust the Minimum relative area and the Maximum neighbor angle. The Minimum relative area setting limits how small the detected planar faces can be relative to the total area of the surface mesh, while the Maximum neighbor angle setting sets the maximum tolerated angle between neighboring boundary elements in the same planar face.
For example, to detect the two highlighted boundaries in the example below, we decreased the Minimum relative area to 0.001 and increased the Maximum neighbor angle to 1 degree.
The Detect faces boundary partitioning feature (left) showing the exact settings used to create the geometry (right). Geometry image credit: Mark Yeoman, Continuum Blue Ltd., UK.
If we need a boundary at a specific position, we can use the different partition features found under the Mesh Part menu. Using a Ball, Box, or Cylinder partitioning operation of a specified size can create boundaries that are not automatically detected or act as a complement to the Minimal boundary partitioning setting.
The next step in the import process involves creating a geometry with smooth edges and faces. The import settings involved influence how easy it is to build a working geometry out of the mesh.
The geometry import settings highlighting the Simplify mesh settings.
To make meshing the resulting geometry more robust, keep the Simplify mesh check box selected. If the triangles in the imported mesh are isotropic and define a relatively smooth surface, we can lower the Relative simplification tolerance and the Defect removal factor so that less simplification is performed.
By setting a tighter Relative simplification tolerance, we reduce the degree to which the mesh simplification algorithm can modify the mesh. This tolerance is relative to the entire geometry, while the Defect removal factor is relative to the local feature size. The two together limit how much the imported mesh can be modified at a certain location before it converts to a geometry. Based on personal experience, lowering one or both of these factors is more commonly required with meshes in NASTRAN® file format than with STL files.
Left: Crankshaft mesh in NASTRAN® file format with fairly isotropic triangles and good representation of the shape. Middle: The created geometry with default settings. Right: The geometry with a lowered Relative simplification factor and Defect removal factor.
On the other hand, raising the Relative simplification tolerance and Defect removal factor settings helps when the mesh triangles are anisotropic and the surface mesh doesn’t accurately represent the surfaces (i.e., when the mesh comes from scanned data). Increasing one or both of these parameters fixes more issues in the mesh, although it may result in a less accurate representation of the imported surface mesh.
Left: Two anisotropic triangles among isotropic triangles. Middle: The geometry created with default import settings. Right: The geometry after increasing the Defect removal factor. Image credit: Mark Yeoman, Continuum Blue Ltd., UK.
If a face appears strange, as pictured in the middle image above, it is usually due to problems that occur when generating the rendering mesh, that is, the on-screen visualization of the face. This could indicate an underlying problem that will cause issues when, for example, partitioning the geometry, combining it with other objects, or even meshing. In this case, the geometry meshed without a problem, and increasing the Defect removal factor produced a more “well-behaved” face.
The last import setting is the Form solids from surface objects check box, which creates solid objects from surface meshes that enclose a ‘watertight’ region. This check box does not need to be selected when working with shells.
Another important note is that mesh files do not contain information about the Length unit, so this must be set manually in the settings window of the Mesh Part and the Geometry nodes.
It is important to remember to set the correct Length unit for the Mesh Part and Geometry.
After the mesh has been turned into a geometry, it is possible to add primitives, including blocks and spheres, and to combine the mesh-based object with other objects using Boolean operations. Note, however, that such operations can introduce intersecting surfaces.
As the geometry’s surfaces are interpolated, they are not exact. Due to this, we can’t assume that the surfaces of an STL sphere, for example, are perfectly spherical. It is also difficult to combine these objects when they are expected to contain perfectly matching faces that are supposed to touch. In this case, it may help to apply the Form an assembly option and define Identity pairs, either automatically or manually, rather than using the default Form Union method to form the geometry for meshing.
For a design created in a CAD software, we recommend exporting it in an MCAD format and importing it with the CAD Import Module. In an earlier blog post, we discussed which module to choose for CAD import and concluded that the STL format is best when the data is originating from a 3D scan or when exporting deformed geometries or plots from COMSOL Multiphysics.
While changing the import parameters is not an exact science and involves some trial and error, we have demonstrated some best practices for creating a geometry from an imported mesh. We hope that you find these tips to be helpful as you utilize STL and NASTRAN® files within your own modeling processes. For more information about this topic and related areas of modeling, check out the resources below:
If you have questions about this blog post or need help importing your meshes, please do not hesitate to contact us.
NASTRAN is a registered trademark of NASA.
COMSOL Multiphysics provides two ways for you to include only selected parts of a solution in your output. The first option is to define one or more selections, including the points, boundaries, or domains of interest, so you can restrict the output from the study to only include the fields in the parts of the geometry that those selections define. This method is straightforward and suitable if you are only interested in the simulation output within a certain part of the geometry, where you can perform postprocessing and access the fields and derived quantities as usual.
Note that if you store the solution for some boundaries or points, only the solution fields (the dependent variables) are available using this method. This means that derivatives and quantities that include derivatives, such as stresses and fluxes, are not available as they require that the solution is also available in the domain. If you are interested in storing some derived values such as stresses on just a few boundaries or points of interest, you will want to use our second option.
The second option is to add an ODE and DAE interface to define a new dependent variable, to which you can transfer the quantity of interest. The quantity of interest could be a global scalar quantity, such as the average or maximum of some field, or it could be the field value along some boundary. (In the latter case, using a selection provides an easier way of achieving the same result.) This method is useful in situations where the output of interest is a global scalar value that you transfer to a single degree of freedom as a variable within a simple algebraic equation. As mentioned above, it is also useful if the quantity of interest is a derived value based on derivatives, such as stresses and fluxes.
To only store the solution in a selected part of a geometry, create a named selection in the model component. To do so, right-click Definitions and choose a selection from the Selections submenu. An Explicit selection is simple to use if you want the selection to include some specific geometric entities — for example, domains, boundaries, or points. You can give the selection node a descriptive label like Surrounding Air or Substrate Contact. You can use several selection nodes to represent different parts of the geometry and combine them, either when determining what to store in the output or by creating another selection node that implements a Boolean operation, such as a Union or an Intersection selection node. Use one or more of these created selections in the study step settings to define the parts of the geometry for which the solution will be stored.
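The Boolean combination of selections behaves just like set operations on lists of geometric entities. As a plain Python sketch with invented domain numbers:

```python
# Explicit selections as sets of domain numbers (invented for illustration):
surrounding_air = {1, 2, 3}
substrate_contact = {3, 4}

union = surrounding_air | substrate_contact         # Union selection node
intersection = surrounding_air & substrate_contact  # Intersection selection node
print(sorted(union), sorted(intersection))  # [1, 2, 3, 4] [3]
```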
In a solid mechanics simulation, the value of the deflection (displacement) on one or more specific surfaces or points of interest can provide valuable information, which may be sufficient for you as the output from a simulation. The following plot, for instance, indicates the selected top surfaces on the model geometry for a feeder clamp within the Graphics window.
Selected surfaces for a solid mechanics simulation.
To store only the solution on those surfaces (boundaries), create an Explicit selection with the input entities set to the four top surfaces within the geometry where you want to store the solution. If a surface or point is not part of the created geometry, you can add extra Curve or Point nodes in the geometry sequence to divide a boundary or add a point (and a mesh node) at the desired location. The settings for the Explicit selection node appear as shown below.
Explicit selection node settings.
Here, the label has been changed to Deflection Surfaces to reflect what the selection contains.
You can now modify what to store in the output. In the Settings window of the Study step node, locate and expand the Values of Dependent Variables section. This section contains the Store fields in output settings, where you can choose For selections from the drop-down menu. You can then click the Add button to open a list of available selections. In this case, choose the Deflection Surfaces selection and click OK.
Settings for storing the deflection for the selected surfaces in the geometry.
Now you can run your simulation. The only output will be the solution on the top surfaces, which you can visualize using a Surface plot.
The displacement in the y direction on the selected surfaces in the geometry. For the rest of the geometry, there is no solution output.
For the solution at points, you can postprocess the output using a Point Evaluation node under Derived Values and display the total displacement, for example. You can also use a Point Graph node in a 1D Plot Group to plot the displacement versus time in a time-dependent simulation or a parameter value in a parametric sweep.
If you are interested in a scalar quantity, you only need a single degree of freedom (variable) that represents that quantity in the output. You can create such a variable as a simple algebraic equation using a global equation defined via the Global ODEs and DAEs interface. This interface and similar physics interfaces for defining ODEs and DAEs on domains or at points, for instance, are available under Mathematics > ODE and DAE Interfaces in the Add Physics window and in the Model Wizard. In the Settings window for the Global Equations node, define the name of the variable and the simple equation that sets it to the scalar quantity that you want to include in your simulation output.
Scalar coupling operators in COMSOL Multiphysics are also useful for this purpose. These operators create scalar quantities that are globally available within the model.
Say that you are primarily interested in the average temperature within the model geometry. To extract its value, add an Average Coupling Operator (aveop1, for example), defining it to be valid in the entire geometry (all domains) or in the domains of interest. You can then use this operator in a global algebraic equation to make it available in the simulation output as a scalar variable. If you want to store the maximum temperature instead, you can use a Maximum Coupling Operator.
The equation that the Global ODEs and DAEs interface solves for is f(u, ut, utt, t) = 0, where u is the state variable and ut and utt are its first and second time derivatives.
In this case, however, the equation should only set the variable equal to the average temperature. As such, it is sufficient to enter aveop1(T)-avtemp if you have called the variable in the algebraic equation avtemp. Remember that the Global ODEs and DAEs interface sets the expression to zero to form the equation, thus solving the simple equation avtemp = aveop1(T).
. Here, T represents the temperature field in the geometry, which is the dependent variable that you solve for but don’t want to include in the output. The following screenshot shows the Settings window for a Global Equations node in the Global ODEs and DAEs interface for this case.
Global Equations node settings for creating a variable to store.
Notice the Units section within the Settings window shown above. To avoid unit inconsistencies and to evaluate the created variable with the correct units, you should adjust the respective units for the dependent variable and for the source term.
Your final step is to set up the simulation so just the new scalar variable that you have defined is stored in the output. The following plot shows the average temperature in a geometry for a time-dependent simulation as a Global plot defined in a 1D Plot Group.
The mean (average) temperature versus time, computed as a single scalar output from a time-dependent simulation.
In the study that you want to perform, first make sure that you have access to the solver configurations by right-clicking the main Study node and choosing Show Default Solver. Then select Solver Configurations > Solution > Dependent Variables to access the settings for the dependent variables. In the cases above, there will be two nodes: a Field node (temperature) and a State node (average temperature). The first node represents the full temperature field in the geometry, while the second represents the variable for the average temperature.
Click the Temperature node to display its Settings window. At the bottom of the General section, clear the Store in output check box so that the solver no longer stores the temperature field in the output.
Clearing the Store in output check box within the Temperature node settings.
When you have computed the solution, the scalar quantity will be available as a variable that you can display in a table using a Global Evaluation node. You can also use a Global plot to show the deflection versus time or the mean temperature versus a swept parameter. Since there are no field solutions, any plots of field variables, or of quantities that use them, will appear with values of zero.
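To get a feel for the savings, here is a rough, COMSOL-independent analogue in plain Python (the field data are made up for illustration): storing one averaged scalar per time step in place of the full field shrinks the output by a factor equal to the number of nodal values.

```python
import random

# Hypothetical stand-in for a temperature field: many nodal values per
# stored time step. In COMSOL this would be the dependent variable T.
n_nodes, n_steps = 100_000, 50
random.seed(0)

stored_scalars = []
for step in range(n_steps):
    T = [293.15 + random.random() for _ in range(n_nodes)]  # full field
    stored_scalars.append(sum(T) / n_nodes)  # analogue of avtemp = aveop1(T)
    # the full field T goes out of scope here, mirroring a cleared
    # Store in output check box for the Temperature field

# one float is kept per step instead of n_nodes floats per step
reduction = n_nodes  # factor by which the stored output shrinks
```

The same bookkeeping applies to a parametric sweep: the output grows with the number of steps only, not with the mesh resolution.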
Let’s revisit our feeder clamp model to show how to use an algebraic equation to store the effective stress (von Mises stress) values on the top surfaces. To do so, add a Boundary ODEs and DAEs interface, available under Mathematics > ODE and DAE Interfaces in the Add Physics window and in the Model Wizard.
In the Settings window for the main Boundary ODEs and DAEs interface, use the same Deflection Surfaces selection for the top surfaces. Make sure to specify the units so that they are compatible with a stress quantity. You can do so by choosing Stress tensor as the quantity for both the new dependent variable and the source, which is the von Mises stress from the main Solid Mechanics interface.
The settings for the Boundary ODEs and DAEs interface, set up for a mechanical stress quantity.
In the Distributed ODE subnode, you define the algebraic equation that sets the new dependent variable bndstress (defined on the top surfaces only) equal to the predefined variable for the von Mises stress, solid.mises, from the Solid Mechanics interface: bndstress-nojac(solid.mises).
The nojac() operator is required, since we do not want this equation to contribute to the Jacobian (the system matrix) for the full model. You enter this expression in the source term and set all other coefficients to zero to solve the equation 0 = bndstress-nojac(solid.mises) (that is, bndstress = nojac(solid.mises)).
The settings in the Distributed ODE node for defining an algebraic equation for the boundary stress.
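As a loose analogue of what nojac() accomplishes, consider a small nonlinear problem in plain Python (the residuals and names here are invented for illustration, not the COMSOL API): the auxiliary variable never enters the Newton iteration's Jacobian and is simply evaluated from the converged primary unknown.

```python
import math

# Primary problem: solve f(x) = x**2 - 2 = 0 with Newton's method.
x = 1.0
for _ in range(20):
    f, dfdx = x * x - 2.0, 2.0 * x   # residual and its Jacobian entry
    x -= f / dfdx

# Auxiliary algebraic equation 0 = y - g(x), analogous to
# 0 = bndstress - nojac(solid.mises): g(x) contributes nothing to the
# Jacobian above, so y is just evaluated from the converged solution.
y = math.sqrt(x)   # stand-in for the nojac(...) right-hand side
```

Because the auxiliary equation adds no off-diagonal Jacobian entries, the primary system stays exactly as sparse and as cheap to factorize as it was without the extra variable.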
Before solving, you have to clear the Store in output check box in the Displacement field node under Dependent Variables in the solver sequence. You can then compute the solution and only store the stress values on the top surfaces, which you can plot using a Surface plot.
A plot of the von Mises stress, stored only for the top surfaces.
As we have shown here today, with just a few simple steps, you can set up a simulation where only part of the computed fields, or even just one or a few scalar quantities of interest, is stored in the output. These modeling techniques can significantly reduce your model file size and the time it takes to compute and display the quantities of interest. This is especially true if you plan to run simulations that include large parametric sweeps or long, detailed time-dependent studies.
To learn about other tools in COMSOL Multiphysics that are designed to optimize your simulation workflow, browse other posts within our Core Functionality category on the COMSOL Blog. If you have further questions relating to the modeling techniques presented here, please feel free to contact us.
Solid objects change their size, shape, and orientation in response to external loads. In classical linear elasticity, the change in geometry caused by this deformation is ignored in the problem formulation. As such, the equilibrium equations are formulated on the undeformed configuration. In many engineering problems, the deformations are so small that the deformed configuration is not appreciably different from the undeformed one. Ignoring the changing geometry therefore makes practical sense, as it yields a linear problem that is easier to solve.
On the other hand, for problems like metal forming, where the deformation is large, the equilibrium equations have to account for the changing geometry. Including this effect introduces a nonlinearity known as geometric nonlinearity.
Model geometry for a sheet metal forming process, where deformations can be rather large.
When geometric nonlinearity is included in structural analysis, the COMSOL Multiphysics® software automatically makes a distinction between material and spatial frames. The material frame corresponds to the undeformed configuration, while the spatial frame corresponds to the deformed configuration. The software allows us to make a new geometry out of the deformed configuration, which we refer to as remeshing a deformed configuration. We can use this geometry as part of a new geometry sequence: drill a hole in it, subtract it from a bounding object, or simply add other geometric objects, and then solve a new physics problem on the composite domain. The new physics can be applied in a different component of the same COMSOL Multiphysics model or in a different model. This is the first point that we will address.
If geometric nonlinearity is not included in the structural analysis, the software does not distinguish between the material and spatial frames. Does that mean that if you do not want to include the effects of geometric nonlinearity in the equilibrium equations, you cannot remesh a deformed configuration? The answer is no: you can split the two frames and still force linear strains in the equilibrium equations. This is the second item that we will address.
For three-dimensional problems, there is an additional option. Surface plots can be exported as STL files. These files can be imported and used for solid modeling. In this process, we do not need to split the material and spatial frames. This is the third and last item we will discuss in today’s blog post.
Please note that remeshing a deformed configuration means simply obtaining the deformed shape computed in structural analysis. When we use this deformed geometry for a later analysis, we are not considering residual stresses. If the second analysis is another structural analysis, keep in mind that the remeshed configuration is being used as a stress-free configuration for subsequent studies.
To consider the effects of finite deformation in structural analysis, we have to select the Include geometric nonlinearity check box in the Settings window of the study step. In some cases, COMSOL Multiphysics enables geometric nonlinearity automatically, such as when you include hyperelastic or other nonlinearly elastic materials, large-strain plastic or viscoelastic materials, or add contact boundary conditions.
After we complete our structural analysis, we use the Remesh Deformed Configuration command to get the deformed shape. This is done in the meshing section of the Model Builder. Finally, the deformed mesh can be exported and imported back as a geometry object.
We demonstrate the above steps in the following sections.
Let’s consider the problem of squeezing a circular pipe between two flat stiff indenters. Because of the large deformation involved, geometric nonlinearity is included in the structural analysis, as shown in the screenshot below. Because of symmetries, we consider only a quarter of the geometry.
The original geometry (outline) and the deformed geometry.
The next step is to remesh the deformed configuration. This can be done by right-clicking on the data set (Study 1/Solution 1 in this example) and choosing Remesh Deformed Configuration. Alternatively, we can use Results > Remesh Deformed Configuration from the menu while the data set is highlighted.
In either case, this adds a new mesh to the mesh sequence and opens the Deformed Configuration settings window. Next, we click on Update. Note that for parametric or time-dependent problems, we have to pick a parameter value or time step.
Each parameter of a parametrized data set has its own deformed configuration.
Finally, we go to the new mesh under Meshes > Deformed Configuration and build it.
Remeshing a deformed configuration creates a new meshing sequence.
One possibility is to reuse the deformed configuration in the same model file. To do so, we add another component and import the deformed mesh in the Geometry node of the new component, as highlighted in the screenshot below.
A deformed mesh from one component can be imported in the geometry sequence of another component.
We can now add more items to the geometry sequence. Let's cut a piece out of the bent pipe. We probably do not need the rigid indenter once we have used it to squeeze the pipe, so we will get rid of it. The result is shown in the screenshot below. New physics can be added to the second component.
Deformed objects resulting from structural analysis can be used as part of a new geometry sequence.
To use the deformed configuration in a different model file, export it to a separate file first.
If the Include geometric nonlinearity check box is cleared, the spatial frame stays the same as the material frame, so we cannot remesh the deformed configuration. If we do select the check box, COMSOL Multiphysics will include nonlinear terms in the strain tensor. What if we have a problem with infinitesimal strains and do not want to include expensive and unnecessary nonlinear strain terms in the equilibrium equations? The solution is to select the Include geometric nonlinearity check box in the study step, while ignoring the nonlinear strain terms by selecting the Force linear strains check box in the material model.
Splitting material and spatial frames while keeping only linear strains in the equilibrium equation.
The procedure for remeshing the deformed configuration remains the same as in the previous section.
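The distinction between linear and nonlinear strains is easy to check with a small, COMSOL-independent Python sketch. For a pure rigid rotation, which strains nothing, the Green-Lagrange strain E = 1/2 (F^T F - I) is exactly zero, while the linearized strain eps = 1/2 (F + F^T) - I is not:

```python
import math

theta = math.pi / 6  # rigid rotation by 30 degrees, no actual straining
c, s = math.cos(theta), math.sin(theta)
F = [[c, -s], [s, c]]  # deformation gradient of the rotation

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ft = [[F[j][i] for j in range(2)] for i in range(2)]  # transpose of F
I = [[1.0, 0.0], [0.0, 1.0]]

# Green-Lagrange strain E = 1/2 (F^T F - I): exactly zero for a rotation
FtF = matmul(Ft, F)
E = [[0.5 * (FtF[i][j] - I[i][j]) for j in range(2)] for i in range(2)]

# Linearized strain eps = 1/2 (F + F^T) - I: spurious nonzero diagonal
# entries appear even though the body has not been strained at all
eps = [[0.5 * (F[i][j] + Ft[i][j]) - I[i][j] for j in range(2)]
       for i in range(2)]
```

For small rotations the spurious entries scale as theta squared, which is why the linear formulation is harmless for infinitesimal deformations but breaks down when the geometry changes appreciably.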
The above method, including geometric nonlinearity and remeshing the deformed configuration, can be applied to both 2D and 3D problems. In 3D cases, we have an additional option via STL files. Any 3D surface plot can be exported as an STL file. This file can then be imported in the geometry sequence of another component or model file. By adding a Deformation node to a surface plot before exporting, we can get the deformed geometry. Do not include geometric nonlinearity, unless your problem is a large deformation problem.
Add a deformation to a 3D surface plot and export the surface plot in the STL format.
We can edit the X-, Y-, and Z-components of the displacements in the Deformation Settings window shown in the above screenshot to introduce anisotropic or nonuniform scaling of the displacements. In fact, these quantities do not need to be structural displacements: by typing any valid mathematical expression for the deformation components, we can subject the original geometry to arbitrary transformations.
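Such a transformation amounts to mapping every exported vertex through a user-defined function. Here is a hedged, COMSOL-independent Python sketch (the twist transformation and the trivial triangle mesh are invented for illustration; this is not the actual Deformation node or STL exporter):

```python
import math

def deform(vertex, scale=2.0, twist=math.pi / 4):
    # Arbitrary transformation: nonuniform scaling in z plus a twist
    # about the z-axis, the kind of expression one could type into the
    # X-, Y-, and Z-component fields of a Deformation node.
    x, y, z = vertex
    angle = twist * z
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, scale * z)

# apply the mapping to every vertex of a (trivial) surface mesh
triangle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
deformed = [deform(v) for v in triangle]
```

Vertices at z = 0 are untouched by the twist, while vertices higher up are rotated and stretched, exactly as the expression prescribes.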
To use the deformed geometry in a new file or component, the STL file generated in the above step can be imported in a geometry sequence, as shown below.
Importing an STL file to a geometry sequence.
COMSOL Multiphysics allows seamless coupling of different physics effects. If you want to couple structural analysis with another physics on the same domain, you will find built-in tools within our software that enable you to do so. The Moving Mesh and Deformed Geometry interfaces are often used together with physics interfaces to solve problems on evolving domains.
However, if you want to use the deformed configuration from a structural analysis as part of a new geometry sequence, where you add new objects to the deformed shape or include it in Boolean operations, you can apply the strategies demonstrated above.
As always, if you have any questions, please feel free to contact us.