We often want to stop a time-dependent or parametric solver when a certain condition is met or violated. However, we usually do not know the exact time or parameter value at which the stop condition will occur. In that case, we need to specify a solution time or parameter range large enough that we are reasonably certain the stop condition will be activated. We then add a stop condition to terminate the solver early.

Physical conditions or computational issues can trigger a stop. Examples of such physical conditions include allowable stresses, temperature limits, and species depletion in reactions. As for computational issues, one example is a very small time step size taken by the solver.

In COMSOL Multiphysics, a stop condition can be added to the following solvers:

- Time-dependent solver
- Frequency-domain solver
- Stationary solver with a parametric or auxiliary sweep

The first step is to define a scalar that we can use in a relational or logical expression. This scalar can either be a quantity defined at some point of interest or a global quantity, such as an integral, maximum value, minimum value, or average of a variable over domains or boundaries. In the software, we can define this using a point probe or a component coupling. The second step is adding a stop condition to the solver configuration. The condition has to be a statement that evaluates to true or false.
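The two steps can be sketched in plain Python (not the COMSOL API; the field values, heating rate, and threshold below are made up for illustration): reduce the solution field to a scalar, then evaluate a relational expression after each solver step.

```python
# Minimal sketch in plain Python (not the COMSOL API) of the two steps:
# (1) reduce the solution field to a scalar, (2) evaluate a relational
# expression that is either true or false after each solver step.
# The field values, heating rate, and threshold are made up for illustration.

def max_temperature(field):
    """Step 1: reduce the field to a scalar (here, its maximum)."""
    return max(field)

def stop_condition(scalar, threshold=250.0):
    """Step 2: a relational expression evaluating to True or False."""
    return scalar >= threshold

# Toy time loop standing in for the time-dependent solver.
field = [20.0, 25.0, 30.0]
t, dt, t_end = 0.0, 1.0, 60.0
while t < t_end:
    field = [T + 10.0 for T in field]  # placeholder for one solver step
    t += dt
    if stop_condition(max_temperature(field)):
        break

print(t)  # → 22.0, well before the requested end time of 60
```

The essential point is that the condition is re-evaluated after every step, so the solver never needs to know the stopping time in advance.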

To demonstrate how to set up a stop condition, let’s look at a time-dependent heat transfer problem. We will tell the software to stop the computation when the maximum temperature exceeds a threshold value. (We can also use different conditions, but the procedure remains the same.)

Consider a silicon wafer heated by a moving heat pulse and cooled via radiation to the surroundings. (For more details, check out the tutorial model.) In this case, we want to modify the model such that the computation stops when a threshold temperature is reached.

*The temperature distribution on a rotating wafer heated by a moving heat pulse.*

To start, we define a scalar by using integrals, averages, maximums, or minimums evaluated over geometric entities. If we are interested in what happens at a specific point, a point probe or an integration component coupling can be used. In this example, the scalar that we monitor is the maximum temperature. This can be obtained using the *Maximum* component coupling.

*Steps for adding the Maximum component coupling.*

When we compute the study for the first time, we use the *Show Default Solver* command to open the settings. Then, we can go to the *Time-Dependent Solver* node under *Solver Configurations* and add a *Stop Condition* node.

*A* Stop Condition *node can be added to the* Time-Dependent Solver *node in the solver configurations.*

The final step is to add the expression and conditions in the *Stop Condition* Settings window.

*The Stop Condition Settings window for a time-dependent solver.*

The stop condition above stops the solver when the maximum temperature is greater than or equal to 250°C.

The solver stops after 27.238 seconds because the stop condition has been satisfied (even though we asked it to compute up to 60 seconds in the study settings). We can see the cutoff time in the *Warnings* node that the solver adds in the solver sequence.

*By default, the solver adds a* Warnings *node if the computation is terminated by a stop condition.*

In the Stop Condition Settings window, we use the *comp1* name scope to identify both the maximum operator and the temperature variable *T*. These are items defined under *Component 1*, whereas the solver sequence is a global item. For example, if we have a second component in our model and redefine *maxop1* there, the solver sequence cannot tell which operator we are referring to. Thus, we must use the component identifier.

Note that we can choose whether the solver stops when the condition is true or when it is false. Additionally, in the *Output at Stop* section, we can decide whether to store a solution just before or after the stop condition is met.

If we have an *Event* interface in the Model Builder, we can use it as a stop condition by adding it under *Stop Events* in the *Stop Condition* Settings window.

The time-dependent solvers in COMSOL Multiphysics are adaptive. As such, the time steps are chosen based on user-specified error tolerances and computed local error estimates. When the error estimates are very high, the software takes smaller and smaller steps; this happens, for example, when the solution approaches a singularity. If we want to stop the solver when the time steps become too small, rather than chasing the singularity with ever-smaller steps, we can add a stop condition based on the reserved variable *timestep*.

*Since* timestep *is a predefined global variable, it is recognized without a name scope.*
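The idea behind a *timestep*-based stop condition can be illustrated with a toy adaptive loop (plain Python; the error estimator, the singularity at t = 1, and the tolerances are all invented and have nothing to do with COMSOL's actual error control):

```python
# Toy adaptive time-stepping loop in plain Python. The error estimator,
# the singularity at t = 1, and the tolerances are invented for
# illustration; this is not COMSOL's actual error control.

def local_error(t, dt):
    """Made-up error estimate that blows up near the singularity at t = 1."""
    return dt / max(1.0 - t, 1e-12)

t, dt = 0.0, 0.1
tol, min_timestep = 1e-2, 1e-6     # analogous to a condition on 'timestep'
while t < 2.0:
    if local_error(t, dt) > tol:
        dt *= 0.5                  # error too large: shrink the step
        if dt < min_timestep:
            break                  # stop condition on the step size
        continue
    t += dt                        # accept the step
    dt *= 1.2                      # cautiously grow the step again

print(t < 1.0, dt < min_timestep)  # → True True: we stop short of the singularity
```

Without the stop condition, the loop would keep shrinking the step forever as it approaches t = 1; with it, the computation terminates cleanly once the step size falls below the threshold.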

Stop conditions can be added to frequency-domain studies as well as parameterized stationary studies. Parameterized stationary studies can either be regular parametric sweeps or auxiliary sweeps. In all of these cases, the *Stop Condition* node should be added to the *Parametric* node under *Solver Configurations* > *Stationary Solver*.

*Stop conditions can be added in stationary analyses when performing an auxiliary or parametric sweep.*

Note that, as in the time-dependent problem, we need a scalar variable to monitor and must use the right variable scope in the stop condition.

To see how to use a stop condition in an auxiliary sweep to implement a nonstandard load ramping procedure, check out the example model of a postbuckling analysis of a shell structure.

Expressions used in the stop condition and other items in the solver configuration have to be global to be automatically recognized. Otherwise, we have to provide the component name as a prefix. This is the case for every variable (including physics variables like *T* for temperature) and functions inside a component. On the other hand, parameters defined under *Global Definitions* in the Model Builder are recognized without the component prefix.

For example, when referring to the built-in integration operator, we can simply use *integrate*. In contrast, when referring to the integration operator *myint* that is defined in component *comp1*, we have to use *comp1.myint*.

The following predefined constants, variables, functions, and operators can be used in the solver configuration without identifying a component:

- Physical constants, such as the gravitational acceleration *g_const*
- Mathematical constants, such as pi
- Built-in global variables, such as *t* for time and *freq* for frequency
- Built-in mathematical functions, such as trigonometric and exponential functions
- Built-in operators, such as differentiation and integration operators

See the *COMSOL Reference Manual* for a full list and syntax.

In this blog post, we have discussed how to add conditions that stop a time-dependent or parametric solver when one or more criteria are met. If you have any questions related to this topic or using COMSOL Multiphysics, contact us via the button below.


We sometimes get warnings and errors while meshing models. When this happens, we should inspect the entities listed in the *Warning* and *Error* nodes. Most warnings are caused by using mesh settings that are too coarse, preventing thin regions and short edges from being resolved properly.

To find these geometric entities, we can use the *Zoom to Selection* button in the *Warning* node. Toggling off the *Mesh Rendering* button and toggling on the *Wireframe* button for 3D meshes lets us easily see the entities reported inside the geometry. We can gain further insight into the issue by using the *Measure* button from the *Geometry* or *Mesh* toolbars for selected entities, for example, to get the length of edges or distance between points.

With our measurements and information from the *Warning* node, we can then set up *Virtual Operations*, or *CAD Defeaturing*, to eliminate the small geometric entities or reduce the mesh size if the features are important for the simulation.

*A mesh of an airplane (left), where some interior boundaries are indicated as being too narrow to be properly resolved by the current mesh settings. The same boundaries, highlighted in blue, after clicking the* Mesh Rendering *and* Wireframe *buttons (right).*

An *Error* node referring to a coordinate will have a button that enables us to zoom in on the coordinate. A small red sphere will appear around the coordinate so that the particular region can be studied in detail. A warning indicating that one or more low-quality elements have been generated requires special attention. We can check the *Minimum element quality* in the *Statistics* window and plot the mesh elements of the worst quality (explained in further detail later in this post).

If the mesh quality is negative or very close to zero, the reported mesh elements are inverted or nearly inverted. Note that inverted linear mesh elements, which we discuss here, are not the same phenomenon as the inverted higher-order elements that you may run into when solving. Inverted linear mesh elements must be avoided to achieve convergence and accurate results.

One way to quickly get an overview of the created mesh is to have a look at the statistics in the *Mesh Statistics* window, which we open by right-clicking the *Mesh* head node.

*The Mesh Statistics window, showing a wide variety of statistics for different selections and quality measures.*

It is possible to change the selection of domains, boundaries, or edges for which we display the numbers. For this, we use the *Geometric Entity Level* drop-down menu at the top of the window. The *Quality Measure* menu lets us choose from a list of quality measures, including:

- Skewness
- Maximum angle
- Volume versus circumradius
- Volume versus length
- Condition number
- Growth rate

The *Skewness* measure is suitable for most types of meshes; hence, it is the default. This quality measure is based on the equiangular skew, which penalizes elements with angles that are large or small compared to the angles in an ideal element. It is also the measure used when reporting bad element quality during mesh generation. With the *Maximum angle* measure, only elements with large angles are penalized, making this option particularly well suited for meshes where anisotropic elements are desired, such as boundary layer meshes.

*Volume versus circumradius* is based on a quotient of the element volume and the radius of the circumscribed sphere (or circle) of the element. This quality measure is sensitive to large angles, small angles, and anisotropy. For triangular meshes in 2D and tetrahedral meshes in 3D where isotropic elements are desired, *Volume versus circumradius* is a suitable measure. On the other hand, *Volume versus length* is based on a quotient of element edge lengths and element volume. This quality measure is primarily sensitive to anisotropy.

The *Condition number* quality measure is based on properties of the matrix transforming the actual element to an ideal element. Lastly, *Growth rate* is based on a comparison of the local element size to the sizes of neighboring elements in all directions.

For all quality measures, a quality of 1 is the best possible and it indicates an optimal element in the chosen quality measure. At the other end of the interval, 0 represents a degenerated element. Although the meshing algorithms in COMSOL Multiphysics try to avoid low-quality elements, it is not always possible to do so for all geometries. High geometric aspect ratios, small edges and faces, thin regions, and highly curved surfaces may all lead to poor-quality meshes. When the geometry does lead to a poor-quality mesh, the mesher returns the poor-quality mesh for examination, rather than no mesh at all.
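As a rough illustration, here is how two of these measures can be computed for a single 2D triangle. These are the standard textbook formulas, scaled so an ideal (equilateral) element scores 1 and a degenerate element approaches 0; COMSOL's exact normalizations may differ.

```python
import math

# Standard textbook versions of two element quality measures for a 2D
# triangle, scaled so an ideal (equilateral) element scores 1 and a
# degenerate element approaches 0. COMSOL's exact normalizations may differ.

def angles(a, b, c):
    """Interior angles of the triangle a-b-c, in degrees."""
    def ang(p, q, r):  # angle at vertex p
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0]*v2[0] + v1[1]*v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return ang(a, b, c), ang(b, a, c), ang(c, a, b)

def skewness(a, b, c):
    """Equiangular skew: penalizes angles far from the ideal 60 degrees."""
    th = angles(a, b, c)
    te = 60.0
    return 1.0 - max((max(th) - te) / (180.0 - te), (te - min(th)) / te)

def area_vs_circumradius(a, b, c):
    """Area over squared circumradius, normalized to 1 for an equilateral triangle."""
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))
    sa, sb, sc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    R = sa * sb * sc / (4.0 * area)  # circumradius
    return (area / R**2) / (3.0 * math.sqrt(3.0) / 4.0)

equilateral = ((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
sliver = ((0, 0), (1, 0), (0.5, 0.05))
print(skewness(*equilateral), skewness(*sliver))   # ~1.0 vs ~0.1
print(area_vs_circumradius(*sliver))               # also very low for the sliver
```

Both measures flag the flat "sliver" triangle as poor, but for different reasons: skewness because of its angles, and volume versus circumradius because its circumscribed circle is huge relative to its area.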

Depending on the quality measure used, the *Minimum element quality*, *Average element quality*, and the *Element Quality Histogram* sections will change accordingly. To get accurate results, it is important to know which *Minimum element quality* and *Average element quality* are sufficient for your particular application.

There are no absolute numbers for what the quality should be, as the physics and solvers used will have different requirements on the quality needed. In general, elements with a quality below 0.1 are considered poor quality for many applications. The mesher automatically presents a warning for elements with a quality below 0.01, as these are considered very low quality and should be avoided in most cases. In some cases, a couple of low-quality elements may be acceptable if they are located in a less important part of the model, while in other cases, a single low-quality element may lead to convergence problems.

The histogram in the Mesh Statistics window will give us a visual of the quality of the mesh, which can be a quick way to see if we need to change the overall mesh sizes in some way.

To understand where low-quality elements are positioned and which mesh size parameters to change, it can be a good idea to perform a plot of the mesh. We do this either by clicking the *Plot* button in the *Mesh* ribbon or by right-clicking the *Mesh* head node of the mesh we would like to plot and selecting *Plot*. This gives us a *Mesh* data set, available under *Results* > *Data Sets*, under which we can add *Selections* to narrow down the amount of entities shown in the plot. The *Mesh* plot feature can also be combined with other plot features.

We can gain a general understanding of how a specific mesh is set up by the different types of mesh elements. To do so, we set *Level* to *Volume*, choose an *Element Type* from the list, and set a uniform *Element Color* for this element type. We then duplicate the *Mesh* plot feature node, select another *Element Type* and *Element Color*, and repeat the process until we have colored all of the element types in the mesh. In the image below, the elements are shrunk by setting the *Element Scale Factor* to 0.8.

*A colorful representation of the different element types in a mesh. The tetrahedra are shown in cyan, the pyramids in magenta, and the prisms in gray. To make it easier to see how the elements are connected, they are shrunk by a factor of 0.8.*

As we already mentioned, it can be important to understand where the elements of poor quality are located. This will help us understand if the geometry needs to be changed in any way or if the mesh-size parameters need to be modified to better handle the problematic area.

We can start by setting *Level* to *Volume* and in the *Element Filter* section selecting the *Enable Filter* check box. Then, we enter a Boolean expression, which reflects the elements we want to check. In the image below, the elements with a *Skewness* that is below 0.05 are shown. We can use the *Replace Expression* feature to easily access the names of the different quality measures. These measures can be used to spot different weaknesses in the generated mesh, so we should make sure we check all of them to see which is best used for our particular meshes.

*The volume elements with a* Skewness *below 0.05 are displayed for the Shell-and-Tube Heat Exchanger model. In front of the Graphics window, the Replace Expression window gives easy access to the different quality measures.*
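Conceptually, the filter keeps only the elements whose Boolean expression evaluates to true, as in this toy sketch (hypothetical element data, not the COMSOL API):

```python
# Toy sketch (hypothetical element data, not the COMSOL API) of what the
# Element Filter does: keep only the elements whose Boolean expression is true.

elements = [
    {"id": 1, "skewness": 0.92, "y": 0.5},
    {"id": 2, "skewness": 0.04, "y": 0.2},
    {"id": 3, "skewness": 0.75, "y": -0.1},
    {"id": 4, "skewness": 0.03, "y": -0.3},
]

# Analogous to entering a quality expression such as "quality < 0.05".
poor = [e["id"] for e in elements if e["skewness"] < 0.05]

# The filter can also mix in the space dimensions, e.g. "quality < 0.05 && y > 0".
poor_upper = [e["id"] for e in elements if e["skewness"] < 0.05 and e["y"] > 0]

print(poor, poor_upper)  # → [2, 4] [2]
```

Combining a quality measure with a spatial condition, as in the second filter, is what lets us look at problem elements inside a particular region of the geometry.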

Among these quality measures, *Growth rate* is a bit different, as it reflects a relation between two mesh elements, whereas the other quality measures describe the quality of the shape of each single mesh element. The growth rate evaluates toward a maximum of 1 in regions of the mesh where the elements are constant in size, and it is lower in regions where the element size changes from one element to the next. The most interesting regions are often inside the mesh of a domain, so it can be useful to add a filter expression that includes the space dimensions. Here is one such example:

*The growth rate displayed for the mesh of the Biconical Antenna model. The plot shows that the boundary layer mesh elements in the PML domains are of similar size, while the growth rate varies more in the tetrahedral mesh in the middle domains. In this example, the mesh elements where* y *> 0.1 mm are shown by using the* Element filter *option. The slice plot shows the electric field norm (dB).*

We have discussed three different ways of inspecting a mesh, which can be used to spot regions with low-quality mesh elements. Now that we know how to find out where the low-quality mesh elements are, we can either manually adjust the mesh in these regions or address the issues with the underlying CAD geometry itself. To learn about modifying CAD geometries for meshing purposes, see the following blog posts:

- Working with Imported CAD Designs
- Using Virtual Operations to Simplify Your Geometry
- Improving Your Meshing with Partitioning

If you want to evaluate the meshing capabilities of COMSOL Multiphysics for your own modeling needs, please contact us.


Let’s say that we want to run a time-dependent simulation in two steps:

- From the starting time to an intermediate time
- From the intermediate time to an ending time

Further, let’s say that in the second time-dependent study step, we make some changes to the physics that represent a change in the conditions for the simulated device at the intermediate time. However, when postprocessing the results, we want to treat the output from the two study steps as a single continuous time-dependent solution.

We can achieve such a combined solution using the *Combine Solutions* study step, which is new in version 5.3 of COMSOL Multiphysics®. The Combine Solutions study step makes it possible to concatenate two solutions created in other study steps in the model. As an alternative to the concatenation of solutions (time-dependent or parametric solutions, for example), it is also possible to use summations to combine solutions (such as creating a solution that is the sum of various eigenmodes).

As an example of a concatenated solution, let’s look at the Axisymmetric Transient Heat Transfer example model. This model shows how a cylindrical object heats up when the temperature on its exterior boundaries changes from 0°C to 1000°C at the start of the simulation, which runs for 190 s.

Let’s see what happens if we add another time-dependent study step, which starts at 190 s, where the boundaries are instead thermally insulated. To combine the two time-dependent simulations, we add a Combine Solutions study step that concatenates the first and second study steps. (In the settings, the second study step is called the *Current* solution, as it’s the solution computed in the step just above Combine Solutions.)
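Conceptually, the concatenation amounts to appending the second solution to the first at the intermediate time, as in this sketch (plain Python lists with made-up temperature values, not COMSOL data structures):

```python
# Plain Python sketch (made-up temperature values, not COMSOL data structures)
# of concatenating two time-dependent solutions at the intermediate time 190 s.

# First study step: heating from 0 s to 190 s.
t1 = [0.0, 95.0, 190.0]
u1 = [0.0, 480.0, 700.0]

# Second study step: restarts at 190 s with changed physics (insulation).
t2 = [190.0, 250.0, 310.0]
u2 = [700.0, 690.0, 685.0]

# Concatenate, dropping the duplicated intermediate time from the second step.
t = t1 + t2[1:]
u = u1 + u2[1:]

print(t)  # → [0.0, 95.0, 190.0, 250.0, 310.0]
```

The result behaves like a single continuous time-dependent solution for postprocessing, even though it was computed in two steps with different physics.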

As a reference, we add a third time-dependent study step, in which the heating of the boundaries instead continues uninterrupted during the time span of the two combined solutions. The study now contains the study steps shown below.

*The study steps for a combined solution and a reference solution with only heating.*

In the Settings window for the Combine Solutions study step, we specify the study steps to concatenate.

*In the settings for the Combine Solutions study step, we can choose the type of solution operation and the solutions to combine.*

When we compute the study, the study steps create corresponding solvers and *Solution Store* nodes in the solver configuration and also solution data sets for analyzing the results from the solvers. The data sets and corresponding solver configuration are shown below.

*The data sets (left) and solver configuration (right). The top data set refers to the output from the third and final time-dependent study step.*

In addition, we add the following data sets:

- A Revolution 2D data set to visualize the 2D axisymmetric solution in a corresponding 3D cylindrical geometry
- Two Cut Point 2D data sets to evaluate and plot a graph of the temperature in a reference location:
  - One of these data sets refers to the concatenated solution
  - The other refers to the continuous solution
- A Join data set to evaluate and plot the difference between the concatenated solution and continuous solution

The following plot shows the temperature versus time for the concatenated solution, where heating is replaced by insulation after 190 s, and the continuous solution, where the heating continues during the time span of the entire simulation.

*The concatenated solution (pink, solid) and the continuous solution (black, dashed). As expected, the temperature increase slows down when insulation is added.*

Using a Join data set, we can plot the temperature difference between the two solutions. Until the point in time when insulation is added, the temperatures are the same, as expected.

*Temperature difference between the two solutions.*

The temperature distribution in the full 3D geometry is available using the Revolution 2D data set.

*Temperature distribution in a segment of the full 3D geometry at the end of the continuous simulation.*

Another case where it is of interest to manage multiple solutions is when we run a simulation multiple times (with some variation in the model or settings) and want to postprocess and analyze the results from each run.

After computing an initial instance of the model, we right-click the *Solver Configurations* node in the study and choose *Create Solution Copy* to make a *Solution – Copy* solution and a corresponding data set available for our current solution (pointing to the *Solution – Copy* solution node under *Solver Configurations*).

When we then recompute with changes to the model or solver settings, the new solution is available through the original Solution data set, so that we can postprocess and evaluate both solutions by pointing to one of the two data sets (we can extend this approach to create additional solutions). We can also use a Join data set to combine solutions from two different Solution data sets (as a difference, for example).

Another option for creating additional Solution data sets is to right-click a Solution data set and choose *Duplicate* (or press Ctrl+Shift+D). The difference between this and the Solution Copy operation is that a duplicated Solution data set does not create a new solution and, by default, refers to the same solution as the original data set. For instance, we can use a duplicated data set to add a *Selection* node, if it is of interest to postprocess the solution in only a part of the model geometry. As of the latest version of COMSOL Multiphysics, *Selection* nodes are also available for most plot types in the plot groups. This enables us to directly hide boundaries for specific plots without having to create a separate data set, for example.

To illustrate these concepts, let’s open the Stresses and Strains in a Wrench example model.

One option is to use a parametric sweep or load case to investigate the stresses and deflections in the wrench for various forces applied as a boundary load at the end of the wrench. But let’s see what happens if the load’s direction is reversed so that the wrench is pulled upward (instead of pressed downward as it is in the example model from the Application Gallery). We can then store the original solution using a solution copy so that both cases can be postprocessed and analyzed individually.

To verify that the difference in effective stress between the two cases is close to zero, we also add a Join data set that contains the difference between the two solutions. By duplicating the original Solution data set twice, we create two additional solution data sets, one for the wrench only and one for the bolt only, using Selection nodes. The model then contains the data sets shown below.

*The wrench model’s data sets for comparing the two cases and for postprocessing using only the wrench and only the bolt.*

The Solution data set on the left refers to the latest computed solution; that is, the case where the load is reversed. The other Solution data set contains a copy of the first solution; that is, the case with the original load direction.

*The solution with a load acting upward (left) and the original solution, with a load acting downward (right).*

The Join data set provides the difference between the two solutions above.

*The difference in effective stresses is close to zero, evaluated using a Join data set.*
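The vanishing difference can be pictured with a toy calculation (hypothetical effective-stress samples, not COMSOL data): for this linear problem, reversing the load flips the sign of the stress tensor but leaves the effective (von Mises) stress unchanged, so the Join difference is essentially zero.

```python
# Toy calculation (hypothetical effective-stress samples, not COMSOL data)
# of what the Join data set provides when configured as a difference.
# For a linear problem, reversing the load flips the sign of the stress
# tensor but leaves the effective (von Mises) stress unchanged.

mises_reversed_load = [110.0, 80.0, 35.0]  # latest computed solution
mises_original_load = [110.0, 80.0, 35.0]  # stored solution copy

difference = [a - b for a, b in zip(mises_reversed_load, mises_original_load)]
print(difference)  # → [0.0, 0.0, 0.0]
```

In practice the computed difference is not exactly zero but on the order of the numerical error, which is itself a useful consistency check.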

The last two Solution data sets restrict the original solution to the bolt only and to the wrench only, defined by selections of boundaries that represent the bolt and the wrench, respectively.

*Results for the wrench only (left) and the bolt only (right). These plots use data sets with applied selections.*

It is now possible to analyze and plot the results in a flexible way for both load cases: for the difference between those solutions and for individual parts of the model’s geometry.

These examples illustrate some of the flexible and powerful options that are available in version 5.3 of the COMSOL Multiphysics® software. The options for concatenating, using summations, and copying solutions make it possible to combine solutions for easier postprocessing as well as to store and analyze multiple solutions for different variants of a model. The options for duplicating and joining data sets make it possible to compare solutions and to visualize aspects of the same solution, perhaps for a certain part of the model. These tools are generally applicable and useful in many cases beyond what we have shown here.

- Learn more about Join data sets in this blog post: Solution Joining for Parametric, Eigenfrequency, and Time-Dependent Problems
- To get started with managing multiple solutions, download the tutorial models featured in this blog post:
- See other new features and functionality in COMSOL Multiphysics® version 5.3 on the Release Highlights page

We often want to know the maximum or minimum values of certain quantities. When the number of degrees of freedom is large, searching for extreme values can be computationally expensive. The cost is even higher if we use the extreme values as inputs, such as in boundary conditions or material properties. One example is the design of controllers.

If we had prior information about possible critical locations, we could focus the search on those locations. In time-dependent problems, we could also save memory by storing data only in such suspected areas.

The challenge is that for many problems, we do not have such information. But for some problems, we can prove that the extreme values lie on the boundaries. In the mathematical literature, such properties are called maximum principles.

COMSOL Multiphysics provides tools to look for extreme values just on boundaries. The software also lets us store solutions on a limited set of domains or boundaries.

Today, we will show you how to exploit maximum principles for efficient problem solving and postprocessing. First, we will discuss conditions under which maximum principles exist and sketch mathematical proofs. Next, we will demonstrate the tools that COMSOL Multiphysics provides to use these principles, including maximum and minimum operators on boundaries, physics interfaces defined on surfaces, and the boundary element method, among others. Finally, we will highlight common situations where the maximum principles do not hold.

Before we begin, note that while maximum principles are useful, we should only apply them in situations where their premises are valid. Otherwise, we will be led astray by the streetlight effect.

*The streetlight effect occurs when you can’t find something because you are only searching where it is easy to look. It comes from a well-known joke in which a person looks for their wallet under a streetlight, even though they know they left it in a nearby park.*

Let’s start with a simple one-dimensional problem. Consider the second-order ordinary differential equation

\frac{d^2u}{dx^2}=0

over an interval.

This equation can represent several 1D stationary boundary value problems such as heat conduction or chemical transport in the absence of advection or sources/sinks. Irrespective of the physics, the general solution is a straight line

u(x) = ax+b,

where the constants *a* and *b* can be determined from boundary conditions.

As the solution is a straight line, its maximum and minimum values will be at the left or right end of the domain (interval). As such, if we are interested in the extreme values, we just need to check the values at the left and right ends. If both boundary conditions are of the Dirichlet type, prescribing *u*, we do not even need to solve the equation to find its maximum and minimum: the smaller of the two Dirichlet values is the minimum and the larger is the maximum. This obviously saves us some time.
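A quick numerical check of this claim (a simple Jacobi iteration on an assumed uniform grid, not a COMSOL computation):

```python
# Quick numerical check (Jacobi iteration on an assumed uniform grid, not a
# COMSOL computation): for u'' = 0 with Dirichlet data, the extreme values
# of the solution are exactly the boundary values.

def solve_laplace_1d(u_left, u_right, n=11, sweeps=5000):
    """Iterate u_i <- (u_{i-1} + u_{i+1}) / 2, the discrete form of u'' = 0."""
    u = [u_left] + [0.0] * (n - 2) + [u_right]
    for _ in range(sweeps):
        u = [u[0]] + [(u[i-1] + u[i+1]) / 2 for i in range(1, n - 1)] + [u[-1]]
    return u

u = solve_laplace_1d(u_left=2.0, u_right=7.0)
# The converged solution is the straight line 2 + 5x, so:
print(min(u), max(u))  # → 2.0 7.0, the smaller and larger Dirichlet values
```

Every interior value ends up strictly between the two boundary values, exactly as the straight-line argument predicts.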

In the above analysis, we used our knowledge of the general form of the solution to make conclusions about the locations of extreme values. This does not generalize to a practical method for more complicated source terms or spatial dimensions higher than one. Instead of deriving a general form for the solution, let’s use properties of local maxima or minima from elementary calculus to make similar conclusions about the following linear partial differential equation.

-\kappa\Delta u + \mathbf{v}\cdot \nabla u = f, \quad f<0, \textrm{ in } \Omega

where Δ is the Laplace operator; ∇ is the gradient operator; *κ* is a positive scalar (diffusion or conduction) coefficient; *v* is a convective velocity; and *f* is a source term. Notice that the source term is negative everywhere in the domain.

Can the solution have a maximum at an interior point of the domain? From optimality conditions for functions of several variables, we know that if a function is *smooth*, the gradient should be zero at critical points. Additionally, at a local maximum, all of the second partial derivatives should be negative or zero. These two conditions result in a positive or zero left-hand side of the partial differential equation.

-\kappa\Delta u + \mathbf{v}\cdot \nabla u \ge 0 \textrm{ at an interior local maximum.}

This contradicts the source term

f<0

on the right-hand side of the same equation.

This means that the maximum of our solution can only be found on the boundary. What we just proved is called the *strong maximum principle*.
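We can verify the strong maximum principle numerically in 1D (a finite-difference sketch with v = 0 and an assumed grid and data, not a COMSOL model):

```python
# Finite-difference sketch (1D, v = 0, assumed grid and data; not a COMSOL
# model) of the strong maximum principle: with f < 0 everywhere and zero
# Dirichlet data, the maximum of u sits on the boundary.

n, kappa = 101, 1.0
h = 1.0 / (n - 1)
f = [-1.0] * n            # strictly negative source term
u = [0.0] * n             # Dirichlet conditions u = 0 at both ends

# Gauss-Seidel sweeps for -kappa * u'' = f:
# u_i = (u_{i-1} + u_{i+1}) / 2 + h^2 * f_i / (2 * kappa)
for _ in range(20000):
    for i in range(1, n - 1):
        u[i] = 0.5 * (u[i-1] + u[i+1]) + h * h * f[i] / (2.0 * kappa)

interior_max = max(u[1:-1])
boundary_max = max(u[0], u[-1])
print(interior_max < boundary_max)  # → True: no interior maximum
```

With f = -1 the exact solution is u = (x² - x)/2, which is negative everywhere in the interior, so the maximum (zero) is attained only on the boundary.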

There is also the *weak maximum principle*, which holds when the source term is also allowed to be zero; i.e., when the condition is *f* ≤ 0. In this case, the maximum is either on the boundary or the solution is constant throughout. Either way, if we are looking for the maximum, it is enough to search just the boundary. We skip the proof of the weak maximum principle here. Analogous principles can be derived for the minimum when *f* ≥ 0.

Finally, if there are no sources or sinks, we can combine the weak maximum and minimum principles to conclude that both the maximum and minimum of our solution should be on the boundary. It is customary to use the term maximum principles to refer to both maximum and minimum principles, with the distinction being understood from the context.

Notice that we require smoothness of the solution to make the above conclusions.

Our conclusions so far can be used for stationary convection-diffusion-type physical problems such as heat transfer, current flow, or chemical transport, provided the conductivity (diffusivity) is homogeneous and isotropic and the convection velocity *v* is homogeneous as well. Let’s lift those restrictions now.

Typically, a stationary convection-diffusion problem has the form

\nabla \cdot (-\mathcal{D} \nabla u + \mathbf{v}u) = f,

where *D* is a conductivity (diffusion) tensor, which can be anisotropic and heterogeneous; *v* is a potentially nonuniform convective velocity; and *f* is a heat (chemical/current) source.

The requirements for establishing a weak maximum principle for a smooth solution are as follows:

- The diffusion tensor *D* is positive definite
- The convective velocity *v* is divergence free; i.e., ∇ · *v* = 0
- The source term is nonpositive; i.e., *f* ≤ 0

Physically speaking, this covers transport problems with incompressible convective velocities. Heat transfer, reacting flows, and charge transport problems are some examples. Conversely, if we have *f* ≥ 0, we can conclude that the minimum should be on the boundary.

Consider the Joule heating problem (shown in detail in this video) in electric heaters, fuses, and other conductors where resistive losses in the electrical part lead to heating. The heat transfer in solids equation is given by

\nabla \cdot (-\kappa \nabla T ) = \sigma|\nabla V|^2,

where *κ* and *σ* are the thermal and electrical conductivity, and *T* and *V* are the temperature and electric potential.

The source term is always nonnegative. Based on our discussion so far, we can conclude that the minimum temperature has to be at the boundaries of our model. See the images below, where the locations of the volume and surface minimum have been annotated. The two locations overlap.

*Temperature distribution in a Joule heating problem (left), location of the minimum overall temperature (center), and location of the minimum boundary temperature (right).*
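A plain NumPy stand-in for this observation can be built in a few lines. The grid, conductivity, source strength, and boundary temperatures below are hypothetical, not taken from the Joule heating model; the point is only that with a nonnegative source the coldest point sits on the boundary.

```python
import numpy as np

# Weak minimum principle check: -k*Laplacian(T) = s with s >= 0 should put
# the minimum temperature on the boundary. All values are illustrative.
n = 41
h = 1.0 / (n - 1)
k, s = 1.0, 5.0                      # constant conductivity, uniform source
T = np.zeros((n, n))
T[0, :] = 1.0                        # hold one edge warmer than the others

# Jacobi iteration: T_ij = avg(neighbors) + s*h^2/(4k)
for _ in range(5000):
    T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                            + T[1:-1, :-2] + T[1:-1, 2:]) + s * h * h / (4 * k)

i, j = np.unravel_index(np.argmin(T), T.shape)
# The coldest point lies on an edge of the square, never strictly inside.
print(i in (0, n - 1) or j in (0, n - 1))   # True
```

Flipping the sign of the source (a heat sink everywhere) would instead guarantee that the *maximum* is on the boundary, mirroring the discussion above.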

Transient problems can broadly be divided into parabolic problems and hyperbolic problems. The first group contains diffusion-type phenomena such as heat transfer, chemical reactions, and groundwater flow. The second group contains wave-type problems such as acoustics, electromagnetic waves, and structural dynamics.

For parabolic problems, we can show that the overall maximum or minimum values of solutions are encountered either at the initial time or at the boundary. As in stationary problems, the sign of the source term plays a decisive role here. Hyperbolic problems in general do not obey maximum/minimum principles.

Consider the following parabolic equation

\frac{\partial u}{\partial t}+\nabla \cdot (-\mathcal{D} \nabla u +\mathbf{v}u) = f,

with appropriate initial and boundary conditions.

If the diffusion tensor *D* is positive definite, the convective velocity *v* is divergence free, and the source term is such that *f* ≤ 0, we can show that the maximum over all times and points is either at the boundaries or at the initial time. This is not a statement about the maximum at any given time; it is about the overall maximum.

To simplify the discussion here, let’s consider the strong maximum principle for a 1D version of the above parabolic equation.

\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}(-D \frac{\partial u}{\partial x} +vu) = f, \quad x \in [0,L], \quad t\in[0,T], u(x,0) = u_0(x).

Expand this equation with the product rule for derivatives to get

\frac{\partial u}{\partial t}-\frac{\partial D}{\partial x}\frac{\partial u}{\partial x} -D \frac{\partial^2 u}{\partial x^2} +v\frac{\partial u}{\partial x}+u\frac{\partial v}{\partial x} = f, \quad x \in [0,L], \quad t\in[0,T], u(x,0) = u_0(x).

To prove the strong maximum principle, we will assume the maximum value of *u* to be at an interior point of the space-time domain and show how that leads to a contradiction.

At an interior maximum, a smooth solution will have ∂u/∂t = 0 and ∂u/∂x = 0. Additionally, an incompressible advection velocity will have ∂v/∂x = 0 everywhere for a 1D problem. Thus, at an interior maximum, the above equation reduces to

- D \frac{\partial^2 u}{\partial x^2} = f.

Since *D* is positive and *f* is negative, this means that at an interior maximum, ∂²u/∂x² = -f/D > 0. This contradicts the usual requirement from calculus that at an interior maximum, ∂²u/∂x² ≤ 0.

This leaves us with the boundaries of the space-time domain as possible locations for the maximum solution. The boundary of this composite domain includes the initial time *t* = 0, the end time *t* = *T*, and the spatial boundaries at *x* = 0 and *x* = *L*.

*A space-time domain for a 1D time-dependent PDE.*

We narrowed down the location of the maximum solution to the four boundaries shown above. Our original statement was that the maximum is either at the spatial boundaries or at the initial condition. This excludes points that are spatially interior but at the terminal time *t* = *T*; i.e., the top of the rectangle in the above space-time domain. Let’s show why that is the case.

Say the maximum is at a spatially interior point at the temporal boundary *t* = *T*. If we go left or right from that point, i.e., move in space but not in time, the solution should not increase. Therefore, ∂u/∂x = 0 and ∂²u/∂x² ≤ 0. However, the first partial time derivative does not have to be zero, as we can only move in one direction in time from the terminal time. The restriction here is that for a maximum to happen at the terminal time, the time partial derivative cannot be negative there. That is, we need ∂u/∂t ≥ 0. Does this agree with the partial differential equation? Setting the first spatial derivatives to zero, we have

\frac{\partial u}{\partial t}- D \frac{\partial^2 u}{\partial x^2} = f \Rightarrow \frac{\partial u}{\partial t}= f+D \frac{\partial^2 u}{\partial x^2}.

From this, we can see that the starting assumption about the source term (*f* < 0) and the restriction on the second spatial partial derivative (∂²u/∂x² ≤ 0) render a negative time derivative. This conclusion, based on the PDE, contradicts the statement we made based on the requirements of a local maximum. Similar reasoning at the initial time, in contrast, shows that a negative time derivative is consistent with a maximum there, so an interior maximum at the initial time remains a possibility.

In computational work, this means that if we are looking for the maximum solution, we should limit our search to the boundaries, except at the initial time, where we should include interior points as well. Beware, however, that this statement only applies to the maximum over all times and points. It is still possible for the maximum at a given time to be at an internal point.
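The space-time claim can be probed with a short time-marching sketch. Here we take the pure diffusion case (f = 0, zero Dirichlet boundaries, an interior hump as initial data; all parameters are illustrative) and track whether the running maximum ever exceeds the initial one:

```python
import numpy as np

# Explicit finite-difference march of u_t = D*u_xx with f = 0:
# the overall maximum over the whole space-time domain should occur
# at the initial time or on the spatial boundary.
n, D = 101, 1.0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
dt = 0.4 * h * h / D                  # satisfies the explicit stability limit
u = np.exp(-100 * (x - 0.5) ** 2)     # interior hump as initial condition
u[0] = u[-1] = 0.0

overall_max, where = u.max(), "initial time"
for step in range(2000):
    u[1:-1] += dt * D * (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    if u.max() > overall_max:
        overall_max, where = u.max(), "later time"

print(where)   # prints "initial time": diffusion only flattens the hump
```

Note that the stable explicit update is a convex combination of neighboring values, so the discrete solution inherits the maximum principle here; with too large a time step, that discrete guarantee is lost even though the PDE still obeys it.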

Maximum principles are not generally available for equations with an order higher than two, such as the biharmonic equation or second-order systems such as two- and three-dimensional stress analysis. The basic idea in looking for maximum principles is to get some scalar function *P*(**u**) of the solution vector **u** to satisfy the Poisson equation. Such a scalar function is called a P-function, after L. E. Payne, who contributed significantly to this line of thinking. The challenge in engineering is to find a P-function that has practical significance.

Consider, for example, stationary stress analysis. The equilibrium equation is given by the system of equations

\sigma_{ij,j}+f_i = 0, i,j=1,2,3,

where *σ_{ij}* are the components of the stress tensor and *f_{i}* are the components of the body force, with a subscript after a comma denoting differentiation with respect to the corresponding coordinate.

In the case of linear elasticity, the stress and strain are related by the constitutive relation

\sigma_{ij}=\lambda \epsilon_{kk}\delta_{ij}+2\mu \epsilon_{ij},

where *ε_{ij}* are components of the strain tensor, which in turn are related to displacements *u _{i}* by the strain-displacement relationship

\qquad \epsilon_{ij}=\frac{1}{2}(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}).

If the material properties *λ* and *μ* are constant and the body force **f** is solenoidal (divergence free), as is the case with gravitational force in nonrelativistic applications, we can show that the volume change *ε_{kk}*, also called the dilatation, satisfies both the maximum and minimum principles.

We can show this by substituting the constitutive relation in the equilibrium equation to get

\lambda \epsilon_{kk,i}+2\mu\epsilon_{ij,j}=0, \quad i,j=1,2,3

and taking the divergence of the above vector equation to get

\lambda \epsilon_{kk,ii}+2\mu\epsilon_{ij,ji}=0, \quad i,j=1,2,3.

From the strain-displacement relationship and interchangeability of the order of differentiation, we see that ε_{ij,ji} = ε_{kk,ii}. This finally leads to

\epsilon_{kk,ii}=\Delta \epsilon_{kk} = 0.

Since the dilatation satisfies the Laplace equation, both its maximum and minimum must be achieved at the boundary, unless the dilatation is uniform over the object. Taking the trace of the constitutive relation reveals that the same is true for the mean stress under the assumption of linear elasticity, homogeneous material properties, and solenoidal body forces.

While the above result is theoretically appealing, engineers are rarely interested in finding the extreme values of the dilatation or mean stress. On the other hand, the maximum von Mises stress and the maximum shear stress are scalars of significant importance. For example, elastic limits (yield criteria) are often given in terms of either of these two quantities. Unfortunately, we do not have broadly applicable maximum principles for these and other quantities of practical interest. But we can still derive maximum principles for special cases.

A good example is the analysis of homogeneous beams under torsion. If we pick the *z*-axis to be the axis of the beam, the stress analysis problem can be reformulated in terms of the stress function *φ*(*x*, *y*), defined over the cross section Ω. In the absence of body forces, the stress function satisfies

\frac{\partial^2 \phi}{\partial x^2}+\frac{\partial^2 \phi}{\partial y^2} =-2, \quad (x,y) \in \Omega,

and the nonzero components of stress are

\sigma_{xz}=\mu\theta\frac{\partial \phi}{\partial y}, \quad \sigma_{yz}=-\mu\theta\frac{\partial \phi}{\partial x},

where *θ* is the angle of twist per unit length of the beam.

*Torsional stress distribution calculated with the* Beam Cross Section *interface.*

We are interested in the maximum shear stress. Note that since the torsion problem does not have normal stresses, the magnitude of the shear stress is proportional to the von Mises stress. For magnitude *τ* of the shear stress, we have

\tau = \sqrt{\sigma_{xz}^2+\sigma_{yz}^2} \Rightarrow \tau^2= \sigma_{xz}^2+\sigma_{yz}^2=(\mu\theta)^2|\nabla\phi|^2.

Let’s take the Laplacian of |∇φ|² to investigate the possibility of a maximum principle. Using the index notation,

\Delta(|\nabla\phi|^2)=\frac{\partial^2}{\partial x_j\partial x_j}(\frac{\partial \phi}{\partial x_i}\frac{\partial \phi}{\partial x_i})=2\frac{\partial}{\partial x_j}(\frac{\partial \phi}{\partial x_i}\frac{\partial^2\phi}{\partial x_i\partial x_j})=2\frac{\partial^2\phi}{\partial x_i\partial x_j}\frac{\partial^2\phi}{\partial x_i\partial x_j}+2\frac{\partial\phi}{\partial x_i}\frac{\partial^3\phi}{\partial x_j\partial x_i\partial x_j}.

If we exchange the order of differentiation in the last term of the above equation, we get

\Delta(|\nabla\phi|^2)=2\frac{\partial^2\phi}{\partial x_i\partial x_j}\frac{\partial^2\phi}{\partial x_i\partial x_j}+2\frac{\partial\phi}{\partial x_i}\frac{\partial}{\partial x_i}(\frac{\partial^2\phi}{\partial x_j\partial x_j}).

The term in parentheses is the Laplacian of the stress function. According to the equation we started from, this value is -2. Thus, the second right-hand term above is zero. The first right-hand term is nonnegative because it is a sum of squares of components of the Hessian (matrix of second partial derivatives) of φ. Additionally, it cannot be zero because the diagonals of the Hessian add up to a nonzero value, -2.

As a result, we have

\Delta(\tau^2)>0 \quad \forall (x,y) \in \Omega.

This means that the magnitude of the shear stress achieves its maximum value at the boundary of the cross section of the beam. Many textbooks on linear elasticity contain analytical solutions to the Saint-Venant torsion problem for simple cross sections and boundary conditions. The explicit solutions there show that the maximum shear stress is encountered at the boundary. What we have done here is arrive at that conclusion for arbitrary cross sections and boundary conditions, without explicitly solving the boundary value problem. See the comparison below for an I-beam subjected to torsion.

*Comparing the locations of the overall maximum shear stress (left) and boundary maximum shear stress (right) in a beam under torsion.*
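The same conclusion can be reproduced numerically without solving any particular textbook case. The sketch below solves the stress-function equation Δφ = -2 with φ = 0 on the boundary of a hypothetical square cross section (plain Jacobi iteration; all sizes are illustrative) and checks where |∇φ|, and hence the shear stress magnitude, peaks:

```python
import numpy as np

# Torsion stress function on a unit square: Laplacian(phi) = -2, phi = 0
# on the boundary. The shear stress magnitude is proportional to |grad(phi)|
# and should peak on the boundary of the cross section.
n = 61
h = 1.0 / (n - 1)
phi = np.zeros((n, n))

# Jacobi iteration for the discretized Poisson equation
for _ in range(15000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:] + 2.0 * h * h)

d0, d1 = np.gradient(phi, h)           # partial derivatives along the two axes
tau = np.hypot(d0, d1)                 # proportional to the shear stress magnitude
i, j = np.unravel_index(np.argmax(tau), tau.shape)
print(i in (0, n - 1) or j in (0, n - 1))   # True: the peak is on the boundary
```

For a square cross section, the classical result is that the maximum shear occurs at the midpoints of the edges, and that is where the discrete maximum lands here as well.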

Say you are interested in checking extreme values only on boundaries of your problem. This may be because you have a maximum principle for the quantity of interest or you simply want to focus on the boundary. What functionality does COMSOL Multiphysics have to help you do so?

If you go to *Results* > *Derived Values* from the Model Builder or the menu in the GUI, you find maximum and minimum operators defined on volumes, surfaces, or lines. When you are interested in extreme values on the boundaries, you can use the surface or line operators as shown below, depending on the spatial dimension of your problem.

*Maximum and minimum evaluations can be done on different geometric entity levels.*

The use of the above operators is limited to postprocessing. If you have to use the extreme values in your physics, say to implement a controller, you can define maximum or minimum component coupling operators in the *Definitions* section of your model. These operators also provide you with a choice of geometric entity between volumes, surfaces, curves, or points. After adding the operator, choose a geometric entity level in the settings window as shown below.

*Geometric entity selection for component coupling operators.*

In addition to saving time and memory because of fewer degrees of freedom, the boundary coupling operators disturb the structure of the matrices involved less than domain coupling operators do. When you know that the equation you are solving has a maximum or minimum principle and you need to use an extreme value in your problem formulation, COMSOL Multiphysics offers you an economical alternative: using boundary coupling operators instead of domain coupling operators.

In a time-dependent problem where you are interested in postprocessing only boundary data, it can be economical or necessary — when you have bigger models and memory constraints — to store solutions from only the critical parts of your geometry. There is more than one way to do this in COMSOL Multiphysics, which we discuss in a previous blog post.

If you are performing a fatigue analysis and you know that the critical behavior is at the boundaries of your geometry, COMSOL Multiphysics provides you with fatigue analysis features limited to just boundaries.

*Fatigue analysis on domains vs. boundaries.*

COMSOL Multiphysics is largely based on the finite element method. For some problems, it supports the boundary element method. In the latest version of the software (version 5.3), the boundary element method is available for electrostatics, corrosion, and user-defined equations.

Problems solved with the boundary element method have no source terms. As such, these problems are likely to obey both the maximum and minimum principles. Exploiting the maximum principles is not, by itself, a motivation for using the boundary element method, but when you can use the method, the problem most likely has its extreme values on the boundaries.

The maximum principles are very useful when applied to problems where their premises apply. However, we should refrain from applying them to problems where they are not applicable. Discontinuities and changes in the sign of the source terms, for example, are flagrant violations of these premises.

In our mathematical analysis, we assumed smoothness of the coefficients. If there are discontinuities, the principles do not apply to the whole domain. However, it is possible to subdivide the domain into regions with no discontinuities and apply the maximum principles to each region. Practically, this means you have to include internal boundaries as well as external boundaries. The interface between two materials with different material properties is one example.

We mentioned earlier that in COMSOL Multiphysics, fatigue analysis can be done either on domains or on surfaces. In many problems, fatigue failure starts at the surface. However, in contact mechanics, it is known that the fatigue failure starts in subsurface points. For more on this, including the implications of the finite element meshing strategy, please see the following blog post on modeling contact fatigue.

In problems with nonzero source terms, the maximum principles require nonnegativity or nonpositivity. As such, they do not apply if the source term changes signs in the domain. For example, in a Joule heating problem, the electromagnetic heat source is always positive. In a chemical reaction, on the other hand, the same chemical species can be produced in parts of the domain and consumed in other parts. Unless we are sure that the reaction will be producing a species everywhere or consuming a species everywhere, we cannot use the maximum principles.

Our derivation of the maximum principles uses the partial differential equation directly. It is possible that discretizations in the finite element or other numerical methods destroy this property. If a problem you expect to obey one of the maximum principles does not exhibit that when you solve it numerically, check if the numerical method obeys the so-called *discrete maximum principles*. Keep numerical errors in mind as well.

The maximum principles are traditionally used in the analysis of partial differential equations for existence/uniqueness type proofs, establishing error bounds, and other similar ends. In this blog post, we have shown how to use these results for efficient computational analyses.

Note that the mathematical discussion provided in the first section of this post was meant for a broader audience. For a more rigorous discussion, please refer to standard texts in the analysis of partial differential equations. Here are my personal favorites for this subject:

- Lawrence C. Evans, *Partial Differential Equations*, American Mathematical Society, 2010
- Rene Sperb, *Maximum Principles and Their Applications*, Academic Press, Inc., 1981

If you have any questions related to this topic or using COMSOL Multiphysics, please contact us.

With the Find tool, we can search within a model for a parameter, variable, or even general text. To open this tool, we either click the respective *Find* button on the *Quick Access Toolbar* (or the *Main Toolbar* for the Linux® operating system and macOS) or use the *Ctrl+F* keys.

*The* Find *button in the Windows® operating system version of the COMSOL Desktop®.*

The Find tool searches the entire model for every instance in which a search entry is used throughout the nodes of the model tree. We can use it to search all components and their sequences of operations, node names, identifiers, tags, and labels.

After opening the Find tool, we have several options with which we can conduct our search. Using the *All* tab enables us to search the model itself, while using the *Methods* tab lets us search within the Method Editor of the built-in Application Builder tool. There are also check boxes that we can use to obtain specific results, including:

- Exact matches
- A text string within a model, using the regular expression syntax
- Case-sensitive searches

In the tutorial video at the beginning of this blog post, you can get a detailed demonstration of how to search with the *All* tab. For more information on the other option for the Find tool — in the *Methods* section of the Application Builder — see the “Find and Replace” section of the *Introduction to Application Builder* documentation.

*The Find tool.*

After performing a search, a new tab appears in the *Messages/Progress/Log* window. This new *Find Results* tab is where we can find our search results in table form. The table lists every instance in which the searched term or terms are used throughout the model. We can double-click any row within the table to automatically be redirected to the node and section in which the parameter or variable is being used.

In the tutorial video, we show how the Settings window and node selected in the Model Builder update as we toggle between several of the rows in the search results table. Additionally, each column within the table lists properties such as the node it is being used under, the context of how it is being used within the node, and the text with which it appears through the *Node*, *Type*, and *Text* columns, respectively. In the *Text* column, if we search some terms used in an expression, for example, the software provides the other text and characters that are a part of that same expression. If we search text that appears in the description for a parameter, the remaining text part of that description column also appears.

Every time we complete a new search, a new *Find Results* tab opens. This means that we can always refer back to previous search results in the designated tabs. Additionally, as we continue to work on and adjust a model, we can refresh the table to repopulate the search results.

When you want to quickly recall and use parameters, variables, functions, and other definitions that you have created in your simulation, you can do so by using the Auto Completion tool.

To open this tool, we hold down the *Control* key and press the space bar while in the *Expression* field of any window.

*The Auto Completion tool, used to define a Heat Flux boundary condition.*

When using a second input language, an alternative shortcut is to hold down the *Control* key and press the *Forward Slash* key.

When we create a model and progress through each step of the modeling workflow, it can become cumbersome to have to return to the node and window section where a definition such as a parameter, variable, or function is originally defined. This is especially the case when working with large or complex models that use numerous definitions.

The Auto Completion tool enables us to quickly and easily formulate expressions and fill in the settings for almost any node in the model tree, such as when defining a physics boundary condition. Almost anywhere we enter an expression in the software, we are able to use this functionality.

In the tutorial video, we demonstrate how to use the Auto Completion tool to define a parameter, review several categories of definitions and operators available when defining a variable, and show how to define a boundary condition in a model.

After prompting the Auto Completion window to open, we can access not only the parameters, variables, and functions that we’ve defined, but also several other categories, including math operators, physical constants, and other operators. Keep in mind that the categories available for selection will vary depending on the node under which we work. Different options will be available when we enter an expression within a geometry node versus a mesh or physics node. For example, if we use the Auto Completion tool to define a parameter, we can use other parameters defined previously in the Parameters table. If we use it to define a variable, we can implement other variables as well as parameters in our expression.

No matter the simplicity or complexity of your simulation, the Find and Auto Completion tools are useful for your modeling workflow. Whether you are dealing with 5 or 50 parameters, being able to quickly and easily locate where a definition is used and access it to define other aspects of your simulation makes the process of setting up your model more efficient. To learn how to take further advantage of these features in COMSOL Multiphysics, watch the video at the top of this post.

- Learn about other modeling tools and resources on the COMSOL Blog
- Browse more tutorials on the core functionality and tools available in COMSOL Multiphysics in the Video Gallery

Let’s consider the model shown below of two current-carrying circular coils, each carrying a current of I_{0} = 0.25 mA. The magnetic flux density, the *B*-field, is plotted in the space around these primary coils. Suppose that we want to introduce a smaller pickup coil in the space between the larger primary coils. This pickup coil intercepts part of the magnetic flux and is defined by its outside perimeter curve *C* and enclosed area *A*.

*The magnetic flux density around a Helmholtz coil with a pickup coil inside. The enclosed area of the coil is shaded in gray. We want to recompute the mutual inductance as the pickup coil changes orientation.*

The mutual inductance between these primary and pickup coils is defined as:

L=\frac{1}{I_0}\int_A\mathbf{B \cdot n } dA

where *n* is a vector normal to the surface *A*.

Since the *B*-field is computed from the magnetic vector potential, **B** = ∇ × **A**, we can use Stokes’ theorem to see that the above surface integral is equivalent to the line integral:

L=\frac{1}{I_0}\oint_C\mathbf{A \cdot t } dC

where *t* is the tangent vector to the curve *C*.

This method of computing mutual inductance is also shown in the Application Gallery example of an RFID system.
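The equivalence of the two integrals is easy to sanity check outside of any FEM model. The sketch below uses the analytic vector potential of a point magnetic dipole (a stand-in field chosen for its closed-form flux, not the Helmholtz coil model) and compares the numerically evaluated line integral of **A** around a coaxial pickup circle with the known enclosed flux:

```python
import numpy as np

# Vector potential of a point dipole m*z_hat at the origin (SI units):
#   A_phi = mu0*m*rho / (4*pi*(rho^2 + z^2)^(3/2))
# For a coaxial circle of radius rho at height z, the enclosed flux is
#   Phi = mu0*m*rho^2 / (2*(rho^2 + z^2)^(3/2))
mu0, m = 4e-7 * np.pi, 1.0
rho, z = 0.1, 0.2                                # pickup-circle radius and height (m)

A_phi = mu0 * m * rho / (4 * np.pi * (rho**2 + z**2) ** 1.5)

# Line integral of A . t around the circle, midpoint rule
s = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
ds = 2 * np.pi / s.size
Ax, Ay = -A_phi * np.sin(s), A_phi * np.cos(s)   # A in Cartesian components
tx, ty = -np.sin(s), np.cos(s)                   # unit tangent of the circle
line_integral = np.sum(Ax * tx + Ay * ty) * rho * ds

flux = mu0 * m * rho**2 / (2 * (rho**2 + z**2) ** 1.5)
print(np.isclose(line_integral, flux))           # True: Stokes' theorem holds
```

Dividing either quantity by the drive current would give the mutual inductance, exactly as in the two COMSOL variables discussed below.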

We can place the pickup coil at any location and orientation around the primary coils, solve the model, and evaluate either of the above integrals. We can even add the pickup coil geometry features after solving the model by using the *Update Solution* functionality. This functionality remeshes the entire model geometry and maps the previously computed solution from the old mesh onto the new mesh. This is appropriate and easy to do if the changes to the geometry do not affect the solution and if we only want to try out a few different pickup coil locations.

Suppose that we want to try out many different locations and orientations for the pickup coil. Since the *A*-field doesn’t change, we don’t want to re-solve or remesh the entire model, but just want to move the pickup coil geometry around. We can achieve this by using a combination of multiple geometry components, the *General Extrusion* component coupling, and *Integration* component couplings.

We begin with the existing Helmholtz coil example and introduce another 3D component into our model. The geometry within this second component is used to define the pickup coil’s edges, cross-sectional surface, and orientation. The *Rotate* and *Move* geometry features enable us to reorient the coil into any position that we would like. For visualization purposes, we can also include the edges that define the primary coils, as shown in the screenshot below.

*The setup of a second component and geometry.*

The spatial coordinates of *Component 2* overlap exactly with *Component 1*, but otherwise there is no connection between the two. A mapping is introduced via the General Extrusion component coupling that is defined in *Component 1*. This coupling uses a *Source Selection* for all of the domains in *Component 1*. Whenever this coupling is evaluated at a point in space in *Component 2*, it takes the fields at the same point in space in *Component 1*.

The approach of using two components and mapping the solution between them is also introduced in the Submodeling Analysis of a Shaft tutorial model.

Within *Component 2*, we define two Integration component couplings, one defined over the edges of the pickup coil, named *intC*, and the other over the cross-sectional boundary, named *intA*. This allows us to compute the mutual inductance with either of the above approaches by defining two variables. These variables, named `L_C` and `L_A`, are defined via the equations:

intC(t1x*comp1.genext1(comp1.Ax)+t1y*comp1.genext1(comp1.Ay)+t1z*comp1.genext1(comp1.Az))/I0

and

intA(nx*comp1.genext1(comp1.mf.Bx)+ny*comp1.genext1(comp1.mf.By)+nz*comp1.genext1(comp1.mf.Bz))/I0

Here, t1x, t1y, and t1z are the components of the tangent vector to the pickup coil perimeter; nx, ny, and nz are the components of the normal vector to the pickup coil surface; and I_{0} = 0.25 mA is a global parameter.

Since there are multiple components in the model, we must use the full name of the component couplings and fields that reside within *Component 1*. Also, note that the normal vector and perimeter tangent vector can be oriented in one of two opposite directions, which results in a sign change that we need to be aware of.

*Integration component coupling defined over the pickup coil perimeter in the second component.*

We can also sweep over different positions and orientations of the pickup coil. We already have the solution for the magnetic fields computed in *Study 1*. We add a second study that includes a parametric sweep, but does not solve for any physics. Within the study step settings, we can specify that we want to use the existing solution for the magnetic field, as shown in the screenshot below.

*Study step settings showing how the solution from* Study 1 *is used in* Study 2*.*

This second study takes relatively little computational resources when compared to remeshing the entire model and re-solving for the magnetic fields. For each different position of the pickup coil, the software only needs to remesh the pickup coil surface. The solution from *Study 1* is then mapped onto this new pickup coil position and the two variables are evaluated.

This approach also works for nonplanar integration surfaces and multiturn integration curves, as demonstrated in the RFID example. Not only can the integration edges and surfaces be almost arbitrary, but they can also easily be reoriented into any position using the *Rotate* and *Move* geometry operations. Thus, this is a very general approach for evaluating fields over arbitrary geometries and locations.

Now that we’ve seen the most flexible approach, which enables us to perform an integration over an arbitrary shape, let’s look at a simpler case. Suppose that we are dealing with a planar integration surface and the surface edges can easily be defined in terms of the *x*- and *y*-coordinates relative to an origin point on that plane.

The first step in this approach is to take a slice (cut plane) through the entire modeling space. We can do this via the *Cut Plane* data set, as described in this blog post on computing the total normal flux on a planar surface. We can control the origin of the cut plane and the normal vector to the cut plane via the global parameters. Also, note that the cut plane defines local variables called *cpl1x* and *cpl1y* for the local *x* and *y* locations, respectively, as well as *cpl1nx*, *cpl1ny*, and *cpl1nz* for the components of the normal vector to the plane.

*Using the Cut Plane data set. The origin point and normal are defined in terms of global parameters. The advanced settings show the names of the local* x*- and* y*-coordinates and normal coordinates.*

We can now perform a surface integration over this entire cut plane, but we want to restrict ourselves to a subspace within this plane. We do this by using a space-dependent logical expression that evaluates to true (or 1) within our area of interest and false (or 0) elsewhere. This logical expression multiplies our integrand. In the screenshot below, for example, we see the surface integration performed over the cut plane is the expression:

-(sqrt(cpl1x^2+cpl1y^2)<0.1[m])*(cpl1nx*mf.Bx+cpl1ny*mf.By+cpl1nz*mf.Bz)/I0

which includes the logical expression `(sqrt(cpl1x^2+cpl1y^2)<0.1[m])` that evaluates to 1 within a circle with a 0.1-m radius centered at the origin.

The remainder of the expression evaluates the flux dotted with the cut plane normal vector, thus giving us the flux through a 0.1-m-radius circle centered at the cut plane origin.

*The evaluation of an integral over a subregion of a cut plane using logical expressions.*

The boundaries of the subregion within the cut plane are a bit jagged; however, this gets reduced with mesh refinement. As in the earlier approach, we use a second study with a parametric sweep to store all of the different orientations of the cut plane in a second data set. In this case, there isn't a second component or geometry that is getting reoriented and remeshed, so the evaluation is faster.
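The jaggedness and its decay under refinement can be reproduced with a plain grid in Python. The grid sizes, the circle radius, and the constant stand-in flux density below are illustrative; masking the integrand with a logical expression is the same trick as in the COMSOL integral above.

```python
import numpy as np

# Masked surface integration over a disk of radius R, on successively
# refined grids. A constant B_n = 1 stands in for the normal flux density.
R = 0.1                                   # radius of the integration circle (m)
for n in (100, 400, 1600):                # refining the "mesh"
    xy = np.linspace(-0.2, 0.2, n)
    X, Y = np.meshgrid(xy, xy)
    dA = (xy[1] - xy[0]) ** 2
    inside = np.sqrt(X**2 + Y**2) < R     # logical mask: 1 inside, 0 outside
    integral = np.sum(inside * 1.0) * dA  # integrate B_n = 1 over the disk
    print(n, integral)                    # approaches pi*R^2 as the grid refines
```

The error comes entirely from the stair-stepped boundary cells, so it shrinks roughly linearly with the grid spacing, which mirrors how mesh refinement smooths the subregion boundary in the cut-plane evaluation.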

Let's now look at an even simpler approach that is useful in a smaller set of cases. Suppose that we want to integrate along a curve that can be described via a parametric equation. One of the simplest curves to describe via a parametric equation is the unit circle on the xy-plane, which is defined by ⟨x, y⟩ = ⟨cos(s), sin(s)⟩ for s ∈ [0, 2π]. It's also straightforward to compute the tangent vector for any parametric curve. For a unit circle, the tangent vector components are:

<tx, ty> = <-sin(s), cos(s)>

We can use these simple equations within a *Parameterized Curve 3D* data set for a 0.1-m-radius circle lying in the xz-plane. The circle's centerpoint is offset from the global origin via a set of global parameters.

*Settings in a Parametric Curve 3D data set. The curve is shown in black and the tangent vector arrows in gray.*

We can create a second data set with another study and use a parametric sweep to evaluate many different origin points for the circle. We then perform a line integral over this new data set, as shown in the screenshot below. The integrand

(-sin(s)*Ax+0*Ay+cos(s)*Az)/I0

assumes a circle in the xz-plane and evaluates the *A*-field along the parametric curve.

*The line integration over a Parametric Curve data set.*
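The parametric line integral can be checked with a short numerical sketch. The vector field A = (-y/2, x/2) below is a hypothetical stand-in for the magnetic vector potential, chosen because its circulation around the unit circle equals the enclosed area, π.

```python
import numpy as np

# Line integral over a parameterized unit circle, using the analytic
# tangent <tx, ty> = <-sin(s), cos(s)>. Hypothetical field for the
# check: A = (-y/2, x/2), whose circulation equals the enclosed area.
s = np.linspace(0.0, 2.0 * np.pi, 10001)
x, y = np.cos(s), np.sin(s)         # points on the curve
tx, ty = -np.sin(s), np.cos(s)      # unit tangent of the curve
Ax, Ay = -y / 2.0, x / 2.0          # assumed vector field

integrand = Ax * tx + Ay * ty       # A dotted with the tangent
ds = np.diff(s)                     # arc length equals ds on a unit circle
circulation = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * ds)
print(circulation)                  # close to pi
```

Because the curve is parameterized exactly, there is no geometric discretization error here, which is why this approach is the most accurate of the three for a given mesh.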

Of the three approaches, this is the simplest and most accurate for a given mesh discretization, but also has the most limitations. Writing parametric equations for curves other than circles, ellipses, and squares aligned with the coordinate axes is more difficult.

In the figure below, we plot and compare the results for all of these approaches for a circular integration area moved back and forth along the coil axis.

*Comparison of the various approaches for computing mutual inductance.*

Here, we have shown three approaches for extracting surface and line integrals over subregions within a modeling space. The first approach is the most general; it allows integration over arbitrary (and even nonplanar) surfaces and curves. The first approach also allows arbitrary reorientation of the geometries being integrated over. While this approach is the most flexible, it also requires the most work to set up.

The second approach (using a Cut Plane data set) is applicable only to planar integration surfaces and is more limited in the shapes of integration surfaces that can be considered. The third approach (using a parameterized curve) is the quickest and simplest to implement, but is best suited for simple integration curves such as circles.

- Learn more about integration over subregions on the COMSOL Blog
- Find out how to use the General Extrusion component coupling in these blog posts:

Let’s consider two objects labeled A and B, shown below. The three distances that we want to compute are:

- Distance to object A as a distance field. In this case, we calculate the distance and direction from all of the points surrounding and inside object A to the closest point on its boundary (*d*_{A}).
- Distance from every point on the boundary of object B to the closest point on object A (*d*_{AB}).
- Endpoints of the line that is the shortest distance between objects A and B (*d*_{AB,min}).

*Two objects, A and B, and the distances that we want to compute.*

We can compute all of these various distances using a combination of the General Extrusion and Minimum component couplings in COMSOL Multiphysics. Let’s first look at how to use the General Extrusion component coupling. We name the operator `A_b` and define its *Source Selection* to be the boundaries of object A. Within the *Advanced* section, we use the *Mesh search method* of *Closest point*. These settings are shown below. All other settings for this operator can be left at their defaults.

*The settings for the General Extrusion component coupling used to compute the closest point distance. Note that the* Mesh search method *is set to* Closest point*.*

We use this operator within the definition of a variable called `d_A`, defined as:

sqrt((x-A_b(x))^2+(y-A_b(y))^2)

This variable is defined over the domains where we want to compute the distance field; in this case, just the surrounding domain. We can also compute the negative of the gradient of this distance field, -∇d_A. This gives us the components of a vector field that points toward the closest point on the boundary of A. We can use the differentiation operators `d(d_A,x)` and `d(d_A,y)` to take the spatial derivatives, as shown in the screenshot below.

*The variable definitions.*

We can use these variables anywhere that we want. For example, we can plot the distance field or make material properties dependent upon distance. The image below plots the contours of the distance and the direction vectors. Note that the distance is computed even in the region behind object B. We clearly get quite a bit of information here, but there is a substantial computational cost, since the shortest distance is computed at every point in the surrounding domain. There are also times when we don’t need all of this information and just want the distances between objects.

*The distance field (contour lines) and shortest direction to the boundary of object A (arrows) in the domain surrounding the two objects.*
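The closest-point idea behind this coupling is easy to sketch numerically. Below, object A is a hypothetical circle of radius 0.5 sampled along its boundary; the distance from any point is the minimum over the boundary samples, and finite differences reproduce the `d(d_A,x)` and `d(d_A,y)` derivatives.

```python
import numpy as np

# Sketch of the closest-point search behind the General Extrusion
# coupling. Hypothetical object A: a circle of radius 0.5 at the origin.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
bx, by = 0.5 * np.cos(theta), 0.5 * np.sin(theta)   # boundary samples of A

def d_A(px, py):
    """Distance from (px, py) to the closest sampled boundary point."""
    return np.min(np.hypot(px - bx, py - by))

# Exact distance from a point to this circle is | |p| - 0.5 |.
p = (1.2, 0.3)
exact = abs(np.hypot(*p) - 0.5)
print(d_A(*p), exact)

# Central finite differences stand in for d(d_A,x) and d(d_A,y);
# the negative gradient points from p toward the circle.
h = 1e-6
gx = (d_A(p[0] + h, p[1]) - d_A(p[0] - h, p[1])) / (2 * h)
gy = (d_A(p[0], p[1] + h) - d_A(p[0], p[1] - h)) / (2 * h)
print(-gx, -gy)
```

As in the model, evaluating this at every point of a surrounding domain is what makes the full distance field comparatively expensive.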

Let’s make things a bit easier and only concern ourselves with the distance between two objects and not the direction. We use the same General Extrusion component coupling, but only need to define a variable on the boundary of object B to compute the distance.

*The variable defining the distance between the objects.*

While this is the same distance function we used before, we don’t need a mesh in the intermediate space. We don’t even need a mesh over domains A and B; there just needs to be a mesh on the boundaries of the objects. This approach takes much less time, but it gives us only the shortest distance from object A to every point on the boundary of object B. We cannot recover the direction vector. We can also flip all of these definitions around and compute the shortest distance from object B to every point on the boundary of object A. These distances, shown in the plot below, are available along the boundaries of the objects.

*The distance from every point on the boundary of object B to the closest point on object A and vice versa.*

Now, let’s find the line that describes the shortest distance between the two objects. In the previous section, we saw that we can compute two variables, `d_AB` and `d_BA`, which describe the shortest distances between A and B and vice versa. We now want to find the minimum distance between the boundaries of these domains. Thus, we set up two different Minimum component couplings: one for the boundary of object A and another for object B. We call these operators `minA` and `minB`, as shown in the screenshot below.

*The definition of the Minimum component coupling over the boundary of object A.*

We then call these Minimum component couplings to extract the minimum distance. We can also provide a second argument to the Minimum component coupling to find the coordinates at which the distance is at a minimum. For example, by defining the variable `A_x` as the expression `minA(d_BA,x)`, it takes on the value of the *x*-coordinate at which `d_BA` is at a minimum over the boundary of A.

*The definitions for the coordinates of the shortest line segment between two domains.*

We can call the variables defining these coordinates anywhere we want. For example, we can use the *Cut Line* feature to show the shortest line segment connecting the two objects, as seen in the following image. If we have a meshed domain and a solution between the two objects, then we can plot the fields just along the shortest line between the two.

*The Cut Line feature, used to determine the shortest line between objects.*

These techniques for determining distances can be used in any model. Although the examples presented here are in 2D, they can all be generalized to 3D as well. However, computing the 3D distance field does take a relatively long time, whereas calculating distances between boundaries and clearances is less intensive.

Computing the distance field around nonsmooth shapes also requires a bit more care. As shown in the figure below, the distance field around reentrant corners is nonsmooth, so the direction vector is undefined along the lines that are equidistant from two different parts of the boundary. Resolving this nonsmoothness of the distance field requires a finer mesh.

*The distance field around and inside an object with reentrant corners on a coarse mesh (left) and a more refined mesh (right). The smoothness of the distance field is mesh dependent in such cases.*

Once we have computed this distance field on an appropriately fine mesh, we treat it like any other variable in our model. For example, we can make material properties a function of distance from a surface. The image below shows such a representative material distribution.

*A representative material distribution that is a function of distance to the surface.*

It is also possible to use the distance function to help visualize our results. Suppose we are only interested in the part of the solution that is within a specific distance of the surface. In this case, we can use the *Filter* subfeature when making a volume plot. We then enter a logical expression to only display the results that are within a certain distance of the object’s surface, an example of which is shown below.

*Using the distance function to plot only the solution within 5 mm of the surface.*

We have demonstrated how to compute a distance field to a boundary within a model, the distances between boundaries, and the shortest line segment between two boundaries. This approach also works to calculate distance fields from edges and points in 3D models. The computed distances can be used anywhere within the setup, physics definitions, and results evaluation of a model. We’ve shared a couple of examples here, but now it’s your turn. We would love to hear what you come up with!

- Find a variety of help resources for modeling in COMSOL Multiphysics:

Implementing the Fourier transformation in a simulation can be useful in Fourier optics, signal processing (for use in frequency pattern extraction), and noise reduction and filtering via image processing. In Fourier optics, the Fresnel approximation is one of the approximation methods used for calculating the field near the diffracting aperture. Suppose a diffracting aperture is located in the xy-plane at z = 0. The diffracted electric field in the uv-plane at the distance f from the diffracting aperture is calculated as

E(u,v,f) = \frac{-1}{i\lambda f}e^{-i2\pi f /\lambda} e^{-i\pi(u^2+v^2)/(\lambda f)} \iint_{-\infty}^{\infty} E(x,y,0)e^{-i \pi(x^2+y^2)/(\lambda f)}e^{i2 \pi (xu+yv)/(\lambda f)}dxdy,

where \lambda is the wavelength, and E(x,y,0) and E(u,v,f) account for the electric field at the z = 0 plane and the z = f plane, respectively. (See Ref. 1 for more details.)

In this approximation formula, the diffracted field is calculated by Fourier transforming the incident field multiplied by the quadratic phase function e^{-i\pi(x^2+y^2)/(\lambda f)}.

The sign convention of the phase function must follow the sign convention of the time dependence of the fields. In COMSOL Multiphysics, the time dependence of the electromagnetic fields is of the form e^{i\omega t}. So, the sign of the quadratic phase function is negative.
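The Fresnel integral can be sketched numerically in one dimension. Below, the constant prefactor is dropped, and the wavelength, focal distance, and aperture half width are illustrative values, not taken from the model. As a check, an incident field carrying the conjugate quadratic phase (an ideal thin lens) should focus to a sharp peak at u = 0.

```python
import numpy as np

# 1D analog of the Fresnel diffraction integral (prefactor dropped):
# E(u) = sum over x of E(x,0) * exp(-i*pi*x^2/(lam*f))
#                              * exp(i*2*pi*x*u/(lam*f)) * dx
lam = 0.5e-6    # wavelength, m (assumed)
f = 0.1         # focal distance, m (assumed)
a = 1.0e-3      # aperture half width, m (assumed)
x = np.linspace(-a, a, 4001)
dx = x[1] - x[0]

# Lens-like incident field: conjugate of the quadratic phase factor
E0 = np.exp(1j * np.pi * x**2 / (lam * f))

def E_focal(u):
    """Diffracted field at coordinate u in the focal plane."""
    kern = np.exp(-1j * np.pi * x**2 / (lam * f)) \
         * np.exp(1j * 2.0 * np.pi * x * u / (lam * f))
    return np.sum(E0 * kern) * dx

peak = abs(E_focal(0.0))
side = abs(E_focal(200e-6))   # 200 um off axis
print(peak, side)             # the on-axis value dominates
```

In the model itself, the same sum is expressed with an integration operator and the `dest` operator, as shown later in this post.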

Now, let’s take a look at an example of a Fresnel lens. A Fresnel lens is a regular plano-convex lens except for its curved surface, which is folded toward the flat side at every multiple of m\lambda/(n-1) along the lens height, where *m* is an integer and *n* is the refractive index of the lens material. This is called an *m*^{th}-order Fresnel lens.

The shift of the surface by this particular height along the light propagation direction only changes the phase of the light by 2\pi m (roughly speaking and under the paraxial approximation). Because of this, the folded lens fundamentally reproduces the same wavefront in the far field and behaves like the original unfolded lens. The main difference is the diffraction effect. Regular lenses basically don’t show any diffraction (if there is no vignetting by a hard aperture), while Fresnel lenses always show small diffraction patterns around the main spot due to the surface discontinuities and internal reflections.

When a Fresnel lens is designed digitally, the lens surface is made up of discrete layers, giving it a staircase-like appearance. This is called a multilevel Fresnel lens. Due to the flat part of the steps, the diffraction pattern of a multilevel Fresnel lens typically includes a zeroth-order background in addition to the higher-order diffraction.
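The folding and quantization described above can be sketched in a few lines. All parameter values below are illustrative, not taken from the model: the plano-convex sag is folded at every multiple of m·λ/(n − 1), each fold being a 2πm phase step, and the folded profile is then quantized into 16 levels.

```python
import numpy as np

# Sketch of a 16-level (multilevel) Fresnel lens surface profile.
lam = 0.5e-6     # wavelength, m (assumed)
n_idx = 1.5      # refractive index (assumed)
m = 1            # Fresnel lens order
f = 1e-3         # focal length, m (assumed)
levels = 16      # number of discrete surface levels

h_fold = m * lam / (n_idx - 1.0)             # fold height: 2*pi*m phase step
r = np.linspace(0.0, 50e-6, 1001)
sag = r**2 / (2.0 * (n_idx - 1.0) * f)       # paraxial plano-convex sag
folded = np.mod(sag, h_fold)                 # folded Fresnel profile
quantized = np.floor(folded / h_fold * levels) / levels * h_fold

print(h_fold, folded.max(), np.unique(quantized).size)
```

The flat tops of the quantized steps are what produce the zeroth-order background mentioned above.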

*A Fresnel lens in a lighthouse in Boston. Image by Manfred Schmidt — Own work. Licensed under CC BY-SA 4.0, via Wikimedia Commons.*

Why are we using a Fresnel lens as our example? The reason is similar to why lighthouses use Fresnel lenses in their operations. A Fresnel lens is folded into a height of m\lambda/(n-1), so it can be extremely thin and therefore of less weight and volume, which is beneficial for the optics of lighthouses compared to a large, heavy, and thick lens of the conventional refractive type. Likewise, for our purposes, Fresnel lenses can be easier to simulate in COMSOL Multiphysics and the add-on Wave Optics Module because the number of elements is manageable.

The figure below depicts the optics layout that we are trying to simulate to demonstrate how we can implement the Fourier transformation, applied to a computed solution solved for by the *Wave Optics, Frequency Domain* interface.

*Focusing 16-level Fresnel lens model.*

This is a first-order Fresnel lens with surfaces that are digitized in 16 levels. A plane wave is incident on the incidence plane. At the exit plane at z = 0, the field is diffracted by the Fresnel lens to be E(x,y,0). This process can be easily modeled and simulated by the *Wave Optics, Frequency Domain* interface. Then, we calculate the field at the focal plane at z = f by applying the Fourier transformation in the Fresnel approximation, as described above.

The figures below are the result of our computation, with the electric field component in the domains (top) and on the boundary corresponding to the exit plane (bottom). Note that the geometry is not drawn to scale in the vertical axis. We can clearly see the positively curved wavefront from the center and from every air gap between the saw teeth. Note that the reflection from the lens surfaces leads to some small interferences in the domain field result and ripples in the boundary field result. This is because there is no antireflective coating modeled here.

*The computed electric field component in the Fresnel lens and surrounding air domains (vertical axis is not to scale).*

*The computed electric field component at the exit plane.*

Let’s move on to the Fourier transformation. In the previous example of an analytical function, we prepared two data sets: one for the source space and one for the Fourier space. The parameter names that were defined in the Settings window of the data set were the spatial coordinates in the source plane and the spatial coordinates in the image plane.

In today’s example, the source space is already created in the computed data set, Study 1/Solution 1 *(sol1){dset1}*, with the computed solutions. All we need to do is create a one-dimensional data set, Grid1D *{grid1}*, with a parameter for the Fourier space; i.e., the spatial coordinate in the focal plane. We then relate it to the source data set, as seen in the figure below. Then, we define an integration operator `intop1` on the exit plane.

*Settings for the data set for the transformation.*

*The intop1 operator defined on the exit plane (vertical axis is not to scale).*

Finally, we define the Fourier transformation in a 1D plot, shown below. It’s important to specify the data set we previously created for the transformation and to let COMSOL Multiphysics know which coordinate is the destination independent variable by using the `dest` operator.

*Settings for the Fourier transformation in a 1D plot.*

The end result is shown in the following plot. This is a typical image of the focused beam through a multilevel Fresnel lens in the focal plane (see Ref. 2). There is the main spot by the first-order diffraction in the center and a weaker background caused by the zeroth-order (nondiffracted) and higher-order diffractions.

*Electric field norm plot of the focused beam through a 16-level Fresnel lens.*

In this blog post, we learned how to implement the Fourier transformation for computed solutions. This functionality is useful for long-distance propagation calculation in COMSOL Multiphysics and extends electromagnetic simulation to Fourier optics.

- Simulating Holographic Data Storage in COMSOL Multiphysics
- How to Simulate a Holographic Page Data Storage System
- How to Implement the Fourier Transformation in COMSOL Multiphysics

- J.W. Goodman, *Introduction to Fourier Optics*, The McGraw-Hill Company, Inc.
- D.C. O’Shea, *Diffractive Optics*, SPIE Press.

On the shoreline, crashing waves and the continuous movement of the tides cause *coastal erosion*, a phenomenon that removes sediment from beaches and wears away land.

*A rock formation affected by coastal erosion. Image by John Nuttall — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.*

Although coastal erosion has benefits, such as creating sand for beaches, it also causes damage to seaside property and habitats. To help predict this damage, researchers can use shallow water equations to learn more about coastal erosion. These equations enable scientists to model oceanographic and atmospheric fluid flow, thereby predicting what areas will be affected by coastal erosion and other issues, such as polar ice cap melting and pollution.

Shallow water equations are beneficial compared to the Navier-Stokes equations, which may be problematic depending on how free surfaces are resolved and the scale of the modeling domain. Today, we highlight a tutorial that showcases how to solve shallow water equations using the power of equation-based modeling.

In this shallow water equation model, we can describe the physics by adding our own equations — a feature called equation-based modeling. We use the *General Form PDE* interface and two dependent variables to ensure that the modeling process is straightforward. This way, we can easily define expressions as model variables, which comes in handy when defining the initial wave profile.

This simple 1D model uses the Saint-Venant shallow water equations to study a wave settling over a variable bed as a function of time.

The 1D model featured here would require substantial work to convert into a 2D model for solving typical applications. This tutorial is therefore most useful as an example of the benefits of equation-based modeling.

*A vertical section of the fluid domain. Here,* z_{f} *is the analytical expression for the seabed profile and* z_{s} *is the water surface profile.*

The model, which investigates shallow water in a channel, has constraints at both ends and uses a wave profile as the initial condition. In order to easily alter parameters like the wave amplitude and bed shape, we can use mathematical relations to represent the initial wave and bed shapes. Please note that the model has a difference in scale between the *x*- and *y*-directions, as seen in the plots below.

*Plots of the seabed profile (left) and a comparison of the initial water surface profile with the seabed profile (right).*

We see that the flow develops hydraulic jump discontinuities over time, which can cause instability in the solution. To stabilize the solution, we can add an artificial viscosity that makes the cell Reynolds number of order unity. The hydraulic jumps are replaced with steep fronts that can be resolved on a grid.
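A minimal sketch of such a solver helps make this concrete. The snippet below solves the 1D Saint-Venant equations on a flat bed with periodic boundaries using a Lax-Friedrichs step, whose built-in numerical diffusion plays the role of the artificial viscosity; none of these simplifying choices are taken from the tutorial itself.

```python
import numpy as np

# 1D Saint-Venant equations (flat bed, periodic boundaries):
#   h_t + (h u)_x = 0,   (h u)_t + (h u^2 + g h^2 / 2)_x = 0
g = 9.81
N = 400
L = 10.0
dx = L / N
xc = (np.arange(N) + 0.5) * dx

h = 1.0 + 0.1 * np.exp(-((xc - L / 2.0) ** 2))   # initial wave profile
q = np.zeros(N)                                   # momentum h*u

def step(h, q, dt):
    """One periodic Lax-Friedrichs step; its diffusion stabilizes fronts."""
    F1 = q
    F2 = q**2 / h + 0.5 * g * h**2
    h_new = 0.5 * (np.roll(h, -1) + np.roll(h, 1)) \
        - dt / (2.0 * dx) * (np.roll(F1, -1) - np.roll(F1, 1))
    q_new = 0.5 * (np.roll(q, -1) + np.roll(q, 1)) \
        - dt / (2.0 * dx) * (np.roll(F2, -1) - np.roll(F2, 1))
    return h_new, q_new

mass0 = np.sum(h) * dx
dt = 0.4 * dx / np.sqrt(g * 1.2)   # CFL-limited time step
for _ in range(200):
    h, q = step(h, q, dt)
print(np.sum(h) * dx, mass0)       # mass is conserved by the scheme
```

The steepening of the wave into near-discontinuous fronts, smoothed over a few grid cells by the added diffusion, mirrors what the tutorial achieves with its explicit artificial viscosity term.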

Switching gears, let’s take a look at our results. After running the simulation for 60 seconds, we see results that show the water surface and seabed slope at 6 different times, from the start of the simulation to 15 seconds later.

*Plots of the seabed profile and water surface level at 3-second increments.*

These results clearly indicate that the seabed topography influences the water surface elevation. This, in turn, affects the impact of coastal erosion.

We can share these custom results with others by creating and exporting an animation to help visualize our findings — something that is easy to do in COMSOL Multiphysics.

*An animation of the simulation results.*

As a next step, try this tutorial yourself by clicking on the button below.

The Open Recovery Files feature is useful for anyone running simulations with multiple parameters, namely:

- Time-dependent studies
- Frequency domain studies
- Auxiliary sweeps
- Parametric sweeps

While these simulations are running, a recovery file is created after the first solution is found and is then updated after each solved iteration. The recovery file is also updated following any of these three events:

- After solving for a specified output time in a time-dependent simulation
- After each parametric value in a parametric simulation
- After each iteration of a nonlinear stationary simulation

If, at any point in your simulation, COMSOL Multiphysics closes unexpectedly, you can open the recovery file with the saved solutions. You can then continue running the simulation from where it left off.

*The Open Recovery File window. You can view additional details by clicking on the Show/Hide Details button.*

Please note that there are some limitations and subtleties to using this functionality. Currently, the main limitation is that the *Parametric Sweep* feature can’t be continued; you can either rerun the simulation from the beginning or manually run the parametric sweep with the remaining values and store the solutions in a new place (without overwriting the first part of the simulation). When the simulation is terminated this way, the data is stored in the parametric solution as it should be, but the set of solutions from the sweep is incomplete. To access the individual parametric solutions, you might need to redirect the data sets to use individual solutions.

To learn more about the ins and outs of this feature, watch the video at the top of this post, where we discuss everything you need to know about opening recovery files. First, we open a model and run a simulation. After waiting until the simulation is nearly done, we force quit the software so that we may reopen it and finish the simulation.

- Learn more about the core functionality available in the COMSOL® software in these blog posts:
- Watch other tutorial videos in our Video Gallery

Topology optimization is a useful capability because it can help us find designs that we would not have reasonably been able to think of ourselves. When developing a design, however, this is only the first step. It may not be reasonable or possible to construct a particular design found through topology optimization, either because the design is too costly to produce or it is simply not possible to manufacture.

*Topology optimization results for an MBB beam.*

To address these concerns, we can come up with new designs that are based on the results of topology optimization, and then carry out further simulation analyses on them. But how do we do this? As it turns out, COMSOL Multiphysics makes it simple to create geometries from the 2D and 3D plots of your topology optimization results, which you can continue to work with directly in COMSOL Multiphysics or export to a wide range of CAD software platforms.

To view topology optimization results that are in 2D, we can create a contour plot. Let’s use the Minimizing the Flow Velocity in a Microchannel tutorial to demonstrate this process. The goal of the tutorial is to find an optimal distribution of a porous filling material to minimize the horizontal flow velocity in the center of a microchannel.

First, we open up the model file included in the tutorial and go to the *Contour 1* plot feature under the *Velocity (spf)* plot group.

*The horizontal velocity (surface plot) and velocity field (streamlines) after optimization. The black contours represent the filling material.*

In the above plot, the black contour is where the design variable equals 0.5. This indicates the border between the open channel and the filling material. This is the result that we would like to incorporate into the geometry. In other applications, the expression and exact level to plot may differ, but the principle is the same: to find a contour that describes the limit between the solid and nonsolid materials (typically a fluid of some kind).

To create a geometry from this contour plot, we right-click the *Contour* feature node and choose *Add Plot Data to Export*. We need to make sure that we choose the data format as *Sectionwise* before we export the file.

The *Sectionwise* format describes the exported data using one section with coordinates, one with the element connectivity, and another that includes the data columns. It is important to note that the middle section, which describes how the coordinates of the first section are connected, will allow a contour plot with several closed loops or open curves.

The *Spreadsheet* export format is not suited for this particular use for several reasons, most importantly because it will assume that all coordinates are connected one after the other. This means that if there is more than one isolated contour, it will not be possible to build the *Interpolation Curve* feature. Also, the coordinates are scrambled, so the curve in the next step (discussed below) will not be drawn in the same way as seen in the contour plot.
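The difference between the two formats can be illustrated with a short sketch. The arrays below are hypothetical stand-ins for the Coordinates and Elements sections of a *Sectionwise* export; the connectivity lets us separate disjoint contours, which a coordinate-order-only format like *Spreadsheet* cannot do.

```python
# Hypothetical Sectionwise data: coordinates plus element connectivity.
coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
elements = [(0, 1), (1, 2), (3, 4)]   # two disconnected contours

def chain(coords, elements):
    """Group connected segments into separate polylines."""
    adj = {}
    for a, b in elements:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, curves = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(coords[v])
            stack.extend(adj[v] - seen)
        curves.append(comp)
    return curves

curves = chain(coords, elements)
print(len(curves))   # 2 isolated contours recovered from the connectivity
```

Without the element section, the five points would be joined in storage order into a single zigzag curve, which is exactly the failure mode described above.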

To create the new geometry, we choose *Add Component* from the *Home* toolbar and choose a new *2D Component*. Then, we copy the geometry feature nodes from the original geometry and paste them to the geometry sequence of the new 2D component. After this, we add an *Interpolation Curve* from the *More Primitives* menu on the *Geometry* toolbar and set the type as *Open Curve*, data format as *Sectionwise*, and a tolerance of 2e-2.

A smaller tolerance will give a curve that is more true to the data, but the outcome might be an intricate or “wiggly” geometry. In turn, a higher tolerance may give a curve that is too simplified and quite far from the optimized result.

*Geometry with the interpolation curves representing the results of the topology optimization.*

The geometry can now be used to run further simulations and to verify the created geometry within COMSOL Multiphysics.

The DXF format is a 2D format that most CAD software platforms can read. DXF also describes the higher-order polygons between the points, so it usually gives a better representation than exporting only the points.

To export the optimized topology from this geometry to a DXF file, we can follow the steps below. Please note that one of the steps is optional; use it if you only want to include the shape of the optimized topology in your DXF file.

- Add a *Union* from the *Booleans and Partitions* menu on the *Geometry* toolbar
- Include all of the objects
- Use a *Delete Entities* feature to remove the unwanted domains (optional)
- Click the *Export* button on the *Geometry* toolbar to write to the DXF format for a 2D geometry

Now, let’s see what to do when working with topology optimization results that are in 3D.

After performing a topology optimization in 3D, we usually view the resulting shape by creating a plot of the design variable; for example, an isosurface plot. We can directly export such a plot to a format that is compatible with COMSOL Multiphysics and CAD software and can even be used directly for 3D printing. This file format is the STL format, where the surfaces from the results plot are saved as a collection of triangles. It is a common standard file format for 3D printing and 3D scans in general.

In COMSOL Multiphysics, it is possible to export an STL file from the following plot features:

- *Volume*
- *Isosurface*
- *Surface*
- *Slice*
- *Multislice*
- *Far Field*

The software also supports adding a *Deformation* node on the plot feature, in case we want to export a deformed plot. The volume and isosurface plots are the most commonly used plot types for topology optimization, so we will focus our discussion on these two options.

To create an isosurface plot, we first add a 3D plot group to which we add an *Isosurface* feature node. In the *Expression* field, we then enter the design variable name, set the entry method as *Levels*, and fill in an appropriate value of the design variable representing the interface between the solid and nonsolid materials.

To demonstrate this process, let’s look at the example of the bridge shown below, where the optimal material distribution takes the familiar shape of an arch bridge. The optimization algorithm is maximizing the stiffness of the bridge subjected to a load to reach the displayed solution. To obtain the displayed isosurface plot, we use the expression 0.1 for the level of the design variable.

*An isosurface plot of the 3D topology optimization for a deck arch bridge.*

As you can see in the screenshot above, isosurface plots are not necessarily capped or airtight, so an exported volume plot may be a better choice, especially if we want to run further simulation analyses in COMSOL Multiphysics.

We can create a suitable plot by adding a *Volume* feature node to a 3D plot group. Then, we add a *Filter* node under *Volume* and set a suitable expression for inclusion. In this example, we use the expression rho_design > 0.1.

*A volume plot of the deck arch bridge.*

Exporting the data into an appropriate file format is simple. We right-click the *Volume* or *Isosurface* feature node and select *Add Plot Data to Export*. In the settings window of the resulting *Plot* node, we then select *STL Binary File (*.stl)* or *STL Text File (*.stl)* from the *Data format* drop-down list.
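For reference, the structure of an STL text file is simple enough to sketch by hand. The snippet below writes one hypothetical triangle in the ASCII STL layout (a facet normal followed by three vertices), mirroring what the *STL Text File (*.stl)* option produces for each triangle of the plotted surface.

```python
import numpy as np

def facet(tri):
    """Return the ASCII STL facet record for one triangle (3 vertices)."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    nrm = np.cross(b - a, c - a)
    nrm = nrm / np.linalg.norm(nrm)
    lines = ["facet normal %g %g %g" % tuple(nrm), "  outer loop"]
    for v in (a, b, c):
        lines.append("    vertex %g %g %g" % tuple(v))
    lines += ["  endloop", "endfacet"]
    return "\n".join(lines)

def write_stl(triangles, name="exported"):
    """Assemble a complete ASCII STL solid from a triangle list."""
    body = "\n".join(facet(t) for t in triangles)
    return "solid %s\n%s\nendsolid %s\n" % (name, body, name)

stl = write_stl([[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
print(stl.splitlines()[0])
```

Because STL stores only disconnected triangles, the importing CAD tool (or COMSOL Multiphysics) reconstructs the connected surface on import, which is why the subsequent import step discussed below matters.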

The exported STL file is readily readable by most CAD software platforms. To continue with the simulation of the geometry, import the STL file to a new COMSOL Multiphysics model, a process that we discuss in a previous blog post.

If you want to compare actual CAD drawings with your optimized results, you need to export the data in a format that can be imported into the CAD software you are using. The DXF format (for 2D) and the STL format (for 3D) are widely used formats and should be possible to import in almost any software platform.

In this blog post, we have discussed the steps needed to export topology optimization results in the DXF and STL formats. This will enable you to more efficiently analyze your model geometries within COMSOL Multiphysics and CAD software.

- Learn more about topology optimization and exporting geometries on the COMSOL Blog: