At a point P_d in the destination entity, we want to compute a quantity that is a function of another quantity defined at the source entity. Thus, the latter quantity from a source point P_s needs to be copied to the destination entity. Extrusion operators are used to identify which point in the source entity corresponds to a point in the destination entity. In other words, the operators define the point-to-point map.
If the mapping is affine, it is sufficient to know how some points in the source correspond to points in the destination entity. From such source-destination pairs, one can infer the general mapping by superposition. However, in general, we need to write the mathematical expression for the mapping. This can be either an explicit definition of the source point P_s as a function of P_d or an implicit relation between P_d and P_s.
When using Linear Extrusion operators, we visually indicate the mappings for enough points (bases) and COMSOL Multiphysics figures out how to transform the remaining points. In the case of General Extrusion operators, we write out the mathematical description of the mapping for an arbitrary point in the destination.
To begin, let’s focus on how to replicate a Linear Extrusion operator with a General Extrusion operator. We can then consider examples in which the General Extrusion operator must be used.
For affine relations, General Extrusion operators can be used as an alternative to Linear Extrusion operators. When it comes to general nonlinear mappings, General Extrusion operators are necessary. To add a General Extrusion operator, we go to Definitions > Component Couplings > General Extrusion.
In our earlier blog post on Linear Extrusion operators, we considered an affine mapping that pairs up points 1, 4, and 2 in the source domain to points 1, 5, and 2 in the destination domain. Take a look at the figure below. The two circles in the geometry have centers at the origin and radii of 1.0 and 1.5.
Any affine transformation can be expressed as the sum of a linear transformation and a translation operation. Therefore, we have

x_s = a x_d + b y_d + c, \qquad y_s = d x_d + e y_d + f
Now we need to find the constants a, b, c, d, e, and f. Since source points (0, 0); (1.0, 0); and (0, 1.0) correspond respectively to destination points at (0, 0); (1.5, 0); and (0, 1.5), we get b = c = d = f = 0 and a = e = 2/3, so that

x_s = \frac{2}{3} x_d, \qquad y_s = \frac{2}{3} y_d
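The same constants can be recovered numerically. The sketch below (pure Python, no external libraries; the point pairs are the ones from the example above) assembles the two 3×3 systems, one for the x_s equation and one for the y_s equation, and solves each with Cramer's rule:

```python
# Solve for the six affine-map constants from the three source/destination
# point pairs: (0,0)->(0,0), (1.0,0)->(1.5,0), (0,1.0)->(0,1.5).

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def solve3(M, r):
    # Cramer's rule for the 3x3 system M @ u = r
    D = det3(M)
    sol = []
    for k in range(3):
        Mk = [[r[i] if j == k else M[i][j] for j in range(3)] for i in range(3)]
        sol.append(det3(Mk) / D)
    return sol

dest = [(0.0, 0.0), (1.5, 0.0), (0.0, 1.5)]   # destination points
src  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # corresponding source points

M = [[xd, yd, 1.0] for (xd, yd) in dest]
a, b, c = solve3(M, [p[0] for p in src])   # x_s = a*x_d + b*y_d + c
d, e, f = solve3(M, [p[1] for p in src])   # y_s = d*x_d + e*y_d + f

print(a, e)  # 2/3 each; the destination map entries are 2*x/3 and 2*y/3
```

The same assembly generalizes to any three non-collinear point pairs in 2D, which is essentially what a Linear Extrusion operator does behind the scenes.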
Now that we know how to find the corresponding coordinates of the source point, given any point (x,y) in the destination, we enter the right-hand side of the above equation (without the subscripts) in the destination map of the General Extrusion settings window.
A linear mapping built using a General Extrusion operator.
Let’s now explore how to use a General Extrusion operator to copy data from a 2D axisymmetric component to a 3D component, such that the source and destination points correspond to the same point in space. Consider thermal expansion with axisymmetric thermal boundary conditions and material properties. If the structural boundary conditions are not axisymmetric, we can save time by performing an axisymmetric thermal analysis in one component, and then mapping the temperature from the 2D axisymmetric domain to the 3D domain for structural analysis in another component.
When building the mapping, it is important to ask the following question: Given the coordinates of the destination point, how do we go to the source point? In this instance, that relationship is given by

r_s = \sqrt{x_d^2 + y_d^2}, \qquad z_s = z_d
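To make the map concrete, here is a small Python stand-in (not COMSOL syntax) for this destination map. Every 3D point on a circle about the axis resolves to the same 2D axisymmetric source point, which is exactly why an axisymmetric temperature field can feed the full 3D model:

```python
from math import sqrt

# Destination map from the 3D component back to the 2D axisymmetric source:
# a 3D destination point (x, y, z) reads data from the source point (r, z).
def source_point(x, y, z):
    return (sqrt(x**2 + y**2), z)   # r-expression: sqrt(x^2+y^2), z-expression: z

# Both 3D points lie on the circle of radius 2 at height 0.5, so both
# map to the same source point (2, 0.5) and receive the same value.
print(source_point(2.0, 0.0, 0.5), source_point(0.0, 2.0, 0.5))
```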
As in Example 1, we enter the expression on the right-hand side in the destination map.
Using a General Extrusion operator to copy data from the 2D axisymmetric domain to the corresponding 3D domain. Note that for axisymmetric components, variables can be viewed in 3D with a Revolution 2D data set in the Results node. However, if we want to use variables from a 2D axisymmetric component in the physics node of a 3D component (e.g., thermal expansion), we need to utilize General Extrusion operators.
The operator genext1 is not known inside the 3D component comp2; neither is T. If we want to use the temperature from the 2D axisymmetric component as an input in the 3D component, we have to use comp1.genext1(comp1.T). This approach helps avoid confusion if there is an extrusion or another operator also called genext1 or another variable called T in the second component.
Note that a Linear Extrusion operator cannot be used here. Because the source and destination objects have different dimensions, affine transformations are impossible.
In these first two examples, the Use source map check box in the Source section of the settings window has been left unchecked. COMSOL Multiphysics filled in x and y in the first case and r and z in the second case. When this check box is left unchecked, COMSOL Multiphysics assumes that we have explicit expressions for each coordinate of the source as functions of coordinates of the destination. Oftentimes, however, we may not have explicit expressions.
Next, we’ll look at how to use a General Extrusion operator to specify implicit relations.
A 2D parabolic curve given by \frac{y}{d} = \left(\frac{x}{d}\right)^2 is in a square domain of side d. Our task is to build an operator that maps data from this curve (represented in blue in the figure below) to different parts of the square. The parabola is the source. We want an operator that will copy from a point on the parabola to a point in the square, such that the distance of the destination point from the origin is equal to the length of the segment of the parabola between the origin and the source point.
A little calculus gives us the arc length of the parabola between the origin and the source point (x,y):

L = \frac{x}{2}\sqrt{1+4\left(\frac{x}{d}\right)^2}+\frac{d}{4}\ln\left(2\frac{x}{d}+\sqrt{1+4\left(\frac{x}{d}\right)^2}\right)
The relationship between the source and destination points is therefore

\sqrt{x_d^2 + y_d^2} = L(x_s)
If we want an explicit source-destination mapping of the form x_s = f(x_d, y_d), we first need to invert the expression L=\frac{x_s}{2}\sqrt{1+4(\frac{x_s}{d})^2}+\frac{d}{4}\ln(2\frac{x_s}{d}+\sqrt{1+4(\frac{x_s}{d})^2}) and write x_s in terms of L. That’s no fun at all!
This is exactly why COMSOL Multiphysics allows us to specify implicit relations between source and destination coordinates by using two mappings: the destination map and the source map. We need to provide T_d and T_s, such that

T_d(x_d, y_d) = T_s(x_s, y_s)
Using source and destination maps to define implicit relations between source and destination coordinates in a General Extrusion operator.
COMSOL Multiphysics will take care of T_s^{-1}(T_d(x_d,y_d)), a necessary step in identifying the source coordinates. Note that the source map needs to be one-to-one for the inverse to exist. In practice, COMSOL Multiphysics does not construct an analytic expression for the inverse of the source map. Instead, at every destination point, it first evaluates T_d(x_d,y_d) and then carries out a mesh search operation to find the point on the source where this evaluation matches T_s(x_s,y_s). A one-to-one source map makes the search return, at most, one source point for a given destination point.
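The search procedure can be emulated in a few lines of Python. This sketch (with d = 1 and a uniform sampling of the source curve standing in for the source mesh) tabulates T_s along the parabola and, for each destination point, picks the sample whose arc length best matches the destination's distance from the origin:

```python
from math import sqrt, log

d = 1.0  # side of the square domain

def T_s(x_s):
    # Source map: arc length of y = x^2/d between the origin and x_s
    s = sqrt(1 + 4*(x_s/d)**2)
    return x_s/2*s + d/4*log(2*x_s/d + s)

def T_d(x_d, y_d):
    # Destination map: distance of the destination point from the origin
    return sqrt(x_d**2 + y_d**2)

# Tabulate the source curve, mimicking a mesh of source points
samples = [(i/1000*d, T_s(i/1000*d)) for i in range(1001)]

def find_source(x_d, y_d):
    # Search for the source sample whose T_s value best matches T_d
    td = T_d(x_d, y_d)
    x = min(samples, key=lambda p: abs(p[1] - td))[0]
    return (x, x**2/d)

x_s, y_s = find_source(0.3, 0.4)   # a destination point at distance 0.5
```

No analytic inverse of the arc-length expression is ever formed; the lookup replaces it, just as the mesh search does in COMSOL Multiphysics.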
In the General Extrusion settings window shown above, the labels under Destination Map and Source read x^i-expression and y^i-expression rather than x-expression and y-expression. The reason is that x^i and y^i are indices for the first and second pairs of expressions used to define the source-destination relationship implicitly. They do not necessarily pertain to the x or y coordinates of the source or destination. These indices are, in a sense, coordinates of an intermediate mesh, and a General Extrusion operator matches source and destination points that have the same intermediate coordinates. In this example, one expression is sufficient to uniquely relate any destination point in the square domain to a source point on the parabolic curve. Thus, the second line, y^i-expression, is left blank.
To see how this General Extrusion operator maps variables, consider a plane stationary heat conduction problem with the left and right edges at temperatures of 300 K and 400 K, respectively. The top and bottom surfaces are thermally insulated, and there are no heat sources. The temperature will vary linearly with x. From the graph below, can you see why the plot of arcext(T) on the right shows a radial variation?
Left: Temperature varies linearly from left to right. Center: Temperature along the parabola. Right: Temperature mapped from the parabola to the domain. All points in the domain with the same distance from the origin copy temperature from the same point on the parabola.
To apply what we have learned thus far, let’s now build a diode model using the Electric Currents physics interface in COMSOL Multiphysics. Extrusion operators help us construct normal current density boundary conditions on each side of the ideal p-n junction. We can tag the different sides as 1 and 2, as illustrated in the figure below. The Shockley diode equation for the current-voltage (I-V) relation is used at the junction. The parameters J_s, q, k, and T represent the following, respectively: the saturation current density, the electronic charge, Boltzmann’s constant, and temperature.
Extrusion operators can be used to access the electric potential on the other side of a junction.
To implement the normal current boundary condition on side 1, we need access to the electric potential V_2 on side 2. Similarly, on side 2, we need access to the electric potential V_1 on the other side of the junction. Thus, two extrusion operators are required. Each side of the junction becomes a source entity in one of the extrusion operators, as depicted below.
Both cases involve mapping between points that share the same x-coordinate. Because the source entities are different, two operators are needed.
Now we will use the operators in the physics nodes to implement the boundary conditions. The boundary condition at the top side is illustrated below. Note that V refers to the electric potential at a point on the top side, while genext2(V) refers to the electric potential at the point vertically below it on the bottom side.
Using a General Extrusion operator to refer to the electric potential at a point on the other side of the junction.
A similar boundary condition is used on the bottom side of the junction. The corresponding normal current density for the Normal Current Density 2 node applied to edge 3 is Js*(exp((V-genext1(V))/kTbyq)-1). Here, V refers to the electric potential at a point on the bottom side, while genext1(V) refers to the electric potential at the point vertically above it on the top side.
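As a sanity check on the expression, the sketch below evaluates the Shockley relation in plain Python. The numerical values are illustrative assumptions, not taken from the model: Js is an arbitrary small saturation current density, and kTbyq is the thermal voltage at roughly room temperature.

```python
from math import exp

# Illustrative values only (assumed, not from the model):
Js = 1e-8          # saturation current density
kTbyq = 0.02585    # thermal voltage kT/q at ~300 K, in volts

def normal_current_density(V_here, V_other):
    # Shockley relation driven by the potential jump across the junction,
    # mirroring the COMSOL expression Js*(exp((V - genextN(V))/kTbyq) - 1)
    return Js * (exp((V_here - V_other) / kTbyq) - 1)

J_forward = normal_current_density(0.3, 0.0)   # forward bias: conducts strongly
J_reverse = normal_current_density(0.0, 0.3)   # reverse bias: saturates near -Js
```

The strong asymmetry between the forward and reverse values is the rectifying behavior the boundary conditions are meant to produce.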
In fact, a shortcut can be made by using the expression genext2(V)-genext1(V) for the voltage difference, regardless of the side on which it is applied. For clarity, we did not use this trick here.
With a voltage terminal at the bottom of the device and ground at the top of the device, the following results are obtained.
Extrusion operators can be used to make couplings between points in the same component or different components. Here, the p-n junction in a diode is represented by a thin gap in the geometry. The electric potential on one side of the gap is accessed from the other side by using an extrusion operator in order to compute the current density flowing across the gap.
Extrusion operators are used to construct pointwise relations between source and destination points. Sometimes, we may want to access an integral, average, maximum, or minimum over a source line, surface, or volume. In such cases, we can use projection, integration, average, maximum, or minimum component couplings. You can learn more about the use of projection operators in this previous blog post.
Today, we have discussed how to use General Extrusion operators to create mappings for copying variables from one part of a simulation domain to another. In addition to simply copying known quantities, these operators can be used to create nonlocal couplings between unknown variables, as illustrated in our p-n junction example. This approach is also useful in other analyses, including structural contact or surface-to-surface radiation in heat transfer. COMSOL Multiphysics includes built-in features pertaining to such physical effects.
If the nonlocal couplings you want to simulate are not included in the built-in features of COMSOL Multiphysics, you can use the strategies you’ve learned today to implement them. Please feel free to contact us if you have any questions!
To explore the use of General Extrusion operators in other types of situations, consult the following blog posts:
There are many practical situations in which mapping variables from one component, or part of a component, to another is needed. One instance is the linking of two submodels, for example, the generation of inlet boundary conditions for turbulent flow models. The boundary conditions at the inlets significantly affect the flow in the domain. However, the flow profiles at the inlets are not as easily defined as with laminar flow. To generate turbulent inlet boundary conditions, an auxiliary model with normal inflow can be used. The resulting velocity profile at the outlet then needs to be copied to the inlet of the main model.
Efficiency can be another reason to map variables between regions. Consider thermal expansion with axisymmetric thermal boundary conditions and material properties. If the structural boundary conditions are not axisymmetric, we can save time by performing an axisymmetric thermal analysis first, and then mapping the temperature from the 2D axisymmetric domain to the 3D domain for structural analysis.
Another common scenario is the implementation of periodic or other boundary conditions where a quantity at a point on a boundary is related to a quantity at a point on another boundary. For example, in a diode, the normal current density on one side of the p-n junction depends on the electric potential at the same point and the electric potential on the other side of the junction. While a variety of such boundary conditions are built into the appropriate physics interfaces in COMSOL Multiphysics, from time to time users may need to construct their own.
Such instances require pointwise mapping of variables from one domain or boundary to another. Today, we will show how these mappings can be constructed.
The idea of a mapping involves two geometric entities: the source where a quantity is known and the destination where the quantity will be used. We know a quantity, q_s, at the source and want to calculate another quantity, q_d, at the destination. The new quantity, q_d, can be a copy of q_s or a function of it.
We can break this problem down into these steps:

1. For each destination point x_d, find the corresponding source point x_s = T(x_d).
2. Evaluate the known quantity q_s at that source point.
3. Compute q_d at the destination point as a function of this value.

We end up with

q_d(x_d) = f\left(q_s(T(x_d))\right)
The focus of this blog post is on the transformation T : x_d \rightarrow x_s.
COMSOL Multiphysics offers two coupling operators to specify this mapping: Linear Extrusion operators and General Extrusion operators. Linear Extrusion operators are easier to build, but their utility is limited to affine transformations. General Extrusion operators are more general but take more work to define.
Here, we will discuss Linear Extrusion operators. In a later blog post, we will deal with General Extrusion operators.
When the source and destination points are related to each other by affine transformations such as translation, scaling, reflection, rotation, or shear, COMSOL Multiphysics provides a simple way of specifying the extrusion operator: the Linear Extrusion operator. To add a Linear Extrusion operator, we go to Definitions > Component Couplings > Linear Extrusion.
The basic idea of a Linear Extrusion operator is that an affine transformation between two lines can be defined if we know two corresponding pairs of points on the lines. Similarly, three pairs of noncollinear points and four pairs of nonplanar (with no more than two collinear) points are enough to describe affine mappings of 2D and 3D domains, respectively.
This is similar to linear system analysis in general. If we know the transformation of a sufficient number of base points/vectors, we can transform every point/vector using linear superposition. Think of the Linear Extrusion operator as a visual way of picking the basis and their transformations. From that information, COMSOL Multiphysics automatically derives the mapping that needs to be applied on an arbitrary point/vector.
We will illustrate this with a few examples.
The first operator is used for mapping data from the line segment with ends 1 and 4 to the line segment with ends 4 and 5, with the orientation preserved. All we need to do is indicate to COMSOL Multiphysics which point goes where, as shown in the image below.
A Linear Extrusion operator matching points 1 and 4 in the source to points 4 and 5 in the destination, respectively.
Note that even though we are in a 2D space, we are working with 1D objects (lines). Thus, it suffices to indicate the correspondence between two sets of vertices. What if we choose 5 for Destination vertex 1 and 4 for Destination vertex 2? In that case, in addition to the translation and stretching needed to take segment 1-4 to segment 4-5, there will be a flip. See the plot below.
The order of vertices under Source Vertices and Destination Vertices of a Linear Extrusion operator determines the orientation of the mapping.
Now, let’s increase the dimension of our objects and build the Linear Extrusion operator to copy data from the interior circle to the outer domain by radially stretching. All we need to do is add one more pair of vertices to the vertex pairing above. See the Linear Extrusion settings window below.
The higher the dimension of objects in the mapping, the more vertices in the geometry need to be paired up to define a Linear Extrusion operator.
The mapping matches points 1, 4, and 2 in the source domain to points 1, 5, and 3 in the destination domain, respectively. If we want to look at this in terms of basis vectors, segment 1-4 in the source corresponds to segment 1-5 in the destination. Similarly, segment 1-2 in the source corresponds to segment 1-3 in the destination. From these two linearly independent basis vectors, COMSOL Multiphysics gets enough information to construct the mapping that takes any destination point in the 2D domain to a source point. The figure below shows how a variable \phi defined on the interior circle is mapped by the Linear Extrusion operator linext2 that we just defined.
A variable defined over the interior circle (left) and mapped to all points using the Linear Extrusion operator (right).
What we have done up to now is build the infrastructure that will help us access variables. Now, let’s see how to use the tool we built.
If we look at any of the Linear Extrusion settings windows shown in the images above, we see Operator name at the top. That name is what we will use to access the mapping. If linext2 was the name of the extrusion operator, any time we are at a destination point and want to refer to a quantity, say u, from the corresponding source point, we use the expression linext2(u). If we want the variable w from the same source point, we use linext2(w). This explains why we call them operators. Once we build them, we can use them with any legitimate argument.
Instead of just using a variable, if we want a function of the variable, we can put the function either inside the operator or apply the function to the output of the operator. For example, linext2(w^2) is equivalent to linext2(w)^2. Does linext2(w)+u return the same value as linext2(w+u)? Generally, no. In the first case, u at a destination point is added to w evaluated at the corresponding source point. In the second case, both u and w are evaluated at the source point. The image below illustrates this point using the Linear Extrusion operator from Example 2.
The variable \phi is evaluated at the destination in the inner circle and at the source in the outer arc (left). All evaluations are at the source (right). Note that in general, the argument, such as \phi, can be a valid quantity at a destination point. In such cases, its value is generally different from the value returned by the extrusion operator.
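The evaluation rules above can be sketched with ordinary function composition. This toy 1D model is a stand-in, not COMSOL syntax: the mapping T and the fields w and u are hypothetical, chosen only to mirror how an extrusion operator evaluates its argument expression at the mapped source point.

```python
# Toy 1D stand-in for an extrusion operator: the operator evaluates its
# argument expression at the mapped source point, not at the destination.
T = lambda x_dest: x_dest / 1.5        # destination -> source mapping
w = lambda x: 3*x + 1                  # a field
u = lambda x: x**2                     # another field

def linext2(expr):
    return lambda x_dest: expr(T(x_dest))

xd = 1.5                               # a destination point; its source point is 1.0

# Function inside or outside the operator: same result
inside  = linext2(lambda x: w(x)**2)(xd)   # w(1)^2 = 16
outside = linext2(w)(xd)**2                # also 16

# linext2(w + u): BOTH fields are evaluated at the source point
both_src = linext2(lambda x: w(x) + u(x))(xd)   # w(1) + u(1) = 5
# linext2(w) + u: w at the source point, u at the destination point
mixed    = linext2(w)(xd) + u(xd)               # w(1) + u(1.5) = 6.25
```

The last two lines reproduce the distinction in the text: the operator changes the evaluation point only of what is passed inside it.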
Finally, when using an extrusion operator to map variables between different components, the operator should be added to the Definitions node of the component containing the source object. That component’s tag should be used when we use the operator in another component. To access a variable u defined in component 1 from component 2 using the Linear Extrusion operator linext2, the correct syntax is comp1.linext2(comp1.u). This avoids confusion if there is an extrusion or another operator also called linext2 or another variable called u in the second component.
The focus of this blog post is the construction of Linear Extrusion operators. For their use in a full modeling problem, please see the simulation of the backward-facing step. In this example, a Linear Extrusion operator was used to transfer accurate inlet velocity from an auxiliary analysis in a turbulent flow simulation.
Stay tuned for our upcoming blog post on how to build General Extrusion operators, where we will illustrate their use in a full modeling example. In the meantime, please feel free to contact us if you have any questions.
Canoeing is one of my favorite hobbies. In fact, I used to paddle dragon boats competitively on the beautiful Charles River here in Boston. If you’re ever in the city on a nice summer day, I strongly encourage you to rent a canoe and paddle down the river.
Dragon boats on the Charles River.
Boston’s weather, however, can change rather quickly. While it may be calm when you begin moving down the river, high winds can pick up and start blowing your canoe all over the place. Every time you raise your oar out of the water, it will catch the wind. As you swing your oar forward, you will hit the crests of the small waves that are being kicked up. Such movement leaves you wet and, even worse, throws off the balance of the boat. How might you change your paddle stroke to regain control of the situation?
An experienced paddler would start to use the Indian stroke, also called the Canadian J-stroke — just one of the many kinds of canoe paddle strokes. This stroke moves the paddle backward in the typical power stroke. Without lifting the paddle out of the water, the paddler then rotates their grip so that the paddle’s blade is parallel to the direction of travel on the return stroke.
The advantage of the Indian stroke, nicely demonstrated here, is that the paddle blade is never exposed to the wind and does not strike any of the wave crests. While easy to perform at slow speeds, this stroke is rarely used in races. It requires exceptionally strong control of the paddle as well as a solid understanding of how to move the paddle through the water.
With COMSOL Multiphysics, you can analyze and improve upon your paddle stroke. We’ll show you how.
Let’s simplify things a bit. We will consider a simplified paddle stroke and model it in the 2D plane. Although this assumes that there is no variation in the z-direction (the water depth), it is a good starting point for gaining insight. The model comprises a rectangular fluid domain that is filled with water, with a canoe-shaped cutout as well as a rectangular cutout that represents the paddle.
We want to model the paddle moving back and forth along a known path and rotating about its center. We can specify the fluid velocity at the paddle blade based on the known translational and rotational movement. The fluid velocity at the canoe wall is zero and open boundary conditions surround our model space.
The modeling domain considers the paddle translating and rotating next to the canoe. The boat moves from left to right.
Previously on the blog, we highlighted approaches for modeling general translations of domains as well as rotations and linear translations. A combination of these techniques can be used to model the deformation and rotation of the paddle. The animation below illustrates such techniques, showing the mesh through one full stroke. The mesh faces can slide relative to one another, yet still maintain continuity of the solution across these disjoint mesh interfaces.
An animation depicting moving mesh. A very coarse mesh is shown for illustrative purposes.
Since the translation and rotation of the paddle blade are now completely defined, there is only one thing left to solve: the Navier-Stokes equations on the moving domains. In this analysis, we will assume laminar flow. We will also assume that the canoe is quite heavy and is not yet moving. The animation below allows you to visualize the results for the first few strokes.
Flow during the paddle stroke.
Now that you can visualize the flow patterns around the paddle blade, albeit in this somewhat simplified case, how might you improve your own stroke? Once you determine your optimal stroke, you might even want to consider joining a competitive dragon boat team here in Boston, or elsewhere — a healthy and fun way to put your COMSOL Multiphysics modeling skills to use!
Can you think of other situations in which this approach could be applicable? We are happy to hear your feedback. If you are interested in using COMSOL Multiphysics to model your paddle, or perhaps another form of fluid-structure interaction, please contact us.
In a previous blog post, we discussed the modeling of objects translating inside of domains that are filled with a fluid, or just a vacuum. This initial approach introduced the use of the deformed mesh interfaces and the concept of quadrilateral (or triangular) deforming domains, where the deformation is defined via bilinear interpolation. Such a technique works well even for large deformations, as long as the regions around the moving object can be appropriately subdivided. This, however, is not always possible.
A solid object moving along a linear path inside of a complicated domain.
Consider the case shown above, where an object moves along a straight line path, defined by \mathbf x(t), through a domain with protrusions from the sides. In this situation, it would be quite difficult to implement the original approach. So what else can we do?
The solution involves four steps. They are:

1. Subdivide the modeling space into stationary geometry objects and objects that translate with the moving part, finalizing the geometry with the Form Assembly operation.
2. Mesh the domains so that the elements on either side of the resulting identity pairs are reasonably similar in size.
3. Prescribe the mesh motion: the known translation \mathbf x(t) in the moving domains and a deforming mesh in the adjacent domains.
4. Apply the Pairs > Continuity boundary condition on the identity pairs in each physics interface.
We can begin by dividing our original model space into two different geometry objects, as depicted in the figure below. Here, the red domains represent the stationary domains and the blue domains represent the regions in which our object is linearly translating. The subdivision process takes place in the geometry sequence, which is then finalized with the Form Assembly operation.
For a description of this functionality and steps on how to use it, watch this video.
Subdividing the modeling space into different geometry objects.
The Form Assembly step will allow the finite element meshes in the blue domains to slide relative to the meshes in the red domains. This step will also automatically introduce identity pairs that can be used to maintain continuity of the fields for which we will be solving. Let’s take a look at a representative mesh that may be appropriate in this case.
The subdivided domains with a representative mesh.
In the figure above, note that the dark red domains contain meshes that will not move at all. The dark blue domains, meanwhile, have translations completely defined by our known function, \mathbf x(t). The light blue domains are the regions in which the mesh will deform. We can simply use the previously introduced method of bilinear interpolation in these domains. You should also note that a Mapped mesh is used for these two rectangular domains and that the distribution of the elements is adjusted, such that they are reasonably similar in size or smaller than the adjacent nondeforming elements.
After the object has been linearly translated, the mesh in the light blue domains deforms.
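The bilinear interpolation used in the deforming domains can be sketched in a few lines. The corner displacement values below are illustrative assumptions (one edge follows the moving object, the opposite edge stays fixed), not values from the model:

```python
# Bilinear interpolation of the mesh displacement over one deforming
# quadrilateral, from its four corner displacement values.
def bilinear(d00, d10, d01, d11, xi, eta):
    # (xi, eta) in [0, 1] x [0, 1] are local coordinates on the quad
    return ((1 - xi)*(1 - eta)*d00 + xi*(1 - eta)*d10
            + (1 - xi)*eta*d01 + xi*eta*d11)

# Assumed corner displacements: left edge follows the object (0.2),
# right edge is attached to the stationary region (0.0)
corners = dict(d00=0.2, d10=0.0, d01=0.2, d11=0.0)
center = bilinear(xi=0.5, eta=0.5, **corners)   # halfway between the edges: 0.1
```

Interior nodes thus blend smoothly between the prescribed boundary displacements, which is why this simple scheme handles large translations as long as the deforming region stays a reasonable quadrilateral.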
From the previous images, you can clearly see that the meshes no longer line up between the moving and stationary domains. While COMSOL Multiphysics version 5.0 has introduced greater accuracy in the handling of noncongruent meshes between domains, there are some things that you should be aware of when using this functionality.
As a result of the Form Assembly geometric finalization step, COMSOL Multiphysics will automatically determine the identity pairs — the mating faces at which the mesh can be misaligned. We simply need to tell each physics interface within our model to maintain continuity at these boundaries. This can be accomplished via the Pairs > Continuity boundary condition, which is available within the boundary conditions for all of the physics interfaces.
Once this feature is added and applied to all of the identity pairs, the software will apply additional conditions at these interfaces to ensure that the solution is as smooth as possible over the mesh discontinuity. Each identity pair has a so-called source side and a destination side. The mesh on the destination side should be finer in all configurations of the mesh.
Assembly meshes with noncongruent meshes at the boundaries can be used in combination with most physics interfaces. There are, however, a few important exceptions. Whenever you are solving an electromagnetics problem involving a curl operator on a vector field, such a technique cannot be used. Common physics interfaces that fall into this category include the 3D Electromagnetic Waves interfaces, the 3D Magnetic Fields interfaces, and the 3D Magnetic and Electric Fields interfaces. This still, of course, leaves us with a wide range of physics.
Let’s look at one case of computing the temperature fields around our object, with differing temperatures for the object and the surrounding domain’s outer walls. The contour plots of the temperature fields shown below verify that the solution is quite smooth over the boundary where the mesh is not continuous.
The temperature fields over time are smooth across the Continuity boundary condition applied to the identity pair.
At this point, you can probably already see how this same technique can be applied to a rotating object. We simply create a circular domain around our rotating object and use all of the same techniques we have discussed here. Of course, if the object is only rotating, we no longer need the deforming mesh — making things even a bit simpler for us.
The figures below show the same object from before, except now the object is rotating.
An object rotating about a point.
An assembly composed of stationary and rotating objects as well as the mesh.
The rotated coordinates of the domain, (X_{r}, Y_{r}), can be expressed in terms of the angular frequency, \omega; the undeformed geometry coordinates, (X_{g}, Y_{g}); the point about which the object is rotating, (X_{0}, Y_{0}); and time, t. This gives us the following expressions:

X_r = X_0 + (X_g - X_0)\cos(\omega t) - (Y_g - Y_0)\sin(\omega t)
Y_r = Y_0 + (X_g - X_0)\sin(\omega t) + (Y_g - Y_0)\cos(\omega t)
where the prescribed deformation in the Deformed Geometry interface is quite simply:

(X_r - X_g, \; Y_r - Y_g)
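The rotation about a point and the resulting prescribed deformation can be checked numerically. A small sketch, using the standard rigid-rotation formula about a center point:

```python
from math import cos, sin, pi

def rotated_position(Xg, Yg, X0, Y0, omega, t):
    # Rigid rotation of the undeformed point (Xg, Yg) about (X0, Y0)
    ct, st = cos(omega*t), sin(omega*t)
    Xr = X0 + (Xg - X0)*ct - (Yg - Y0)*st
    Yr = Y0 + (Xg - X0)*st + (Yg - Y0)*ct
    return Xr, Yr

def prescribed_deformation(Xg, Yg, X0, Y0, omega, t):
    # The deformed-geometry input is a displacement, not a position
    Xr, Yr = rotated_position(Xg, Yg, X0, Y0, omega, t)
    return Xr - Xg, Yr - Yg

# A quarter turn (omega*t = pi/2) about the origin takes (1, 0) to (0, 1)
Xr, Yr = rotated_position(1.0, 0.0, 0.0, 0.0, pi/2, 1.0)
dX, dY = prescribed_deformation(1.0, 0.0, 0.0, 0.0, pi/2, 1.0)
```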
Seems easy, right? In fact, the technique outlined in this section is actually applied automatically in COMSOL Multiphysics when using the Rotating Machinery, Fluid Flow and the Rotating Machinery, Magnetic physics interfaces. This provides you with a behind-the-scenes look at what is going on within these interfaces!
We have now introduced methods for modeling the motion of solid objects inside fluid- or vacuum-filled domains. Although we have simply prescribed a displacement in all of these cases, the displacement of our solid objects could be computed and coupled to the field solutions in the surrounding regions. That is, however, a topic for another day.
Interested in learning more about using the deformed mesh interfaces for your modeling? Download the tutorial from our Application Gallery.
Suppose we want to set up a COMSOL Multiphysics model of a solid object moving around inside of a larger domain filled with fluid such as air, or even just a vacuum. To start, let’s assume that we know what path the object will take over time. We won’t worry about which physics we need to solve the model, but we’ll assume that we want to solve for some fields in both the moving domain and the surrounding domain. Of course, we will need a finite element mesh in both of these regions, but this finite element mesh will need to change.
A solid object moves freely around inside of a larger domain along a known path.
For situations like this, there are two options: the Deformed Geometry interface and the Moving Mesh interface. The actual use of these two interfaces is identical, but choosing between them depends on which other physics you want to solve, since the two interfaces treat those physics differently. Although we won't cover how to choose between these two interfaces in this blog post, the sections "Deformed Mesh Fundamentals" and "Handling Frames in Heat Transfer" in the COMSOL Multiphysics Reference Manual are a good starting point.
It is also worth mentioning that the Solid Mechanics interface cannot be combined with the Moving Mesh interface. The Solid Mechanics interface already computes the domain deformation via the balance of momentum. Other physics, such as heat transfer in solids, are solved on this deformed shape. On the other hand, it is reasonable to combine the Deformed Geometry interface with the Solid Mechanics interface if you want to study the change in stresses due to material removal, or if you want to perform a parametric sweep over a dimension without parameterizing the geometry, as described in this previous blog post.
Here, we look at the conceptual case of an object moving around inside of a larger domain with stationary boundaries, as shown in the figure above. The path of the object over time is known. We will look at how to set up the Deformed Geometry interface for this problem. But first, we need to take a quick look at which equations will be solved in COMSOL Multiphysics.
Our case of an object moving around inside of a domain is actually a boundary value problem. All boundaries have known displacements, and these boundary displacements can be used to define the deformation of the mesh within the interior of both domains.
There are four types of smoothing available for computing the deformation of the mesh within each domain: Laplace, Winslow, Hyperelastic, and Yeoh. Here, we will address only the simplest case, referred to as Laplace smoothing, and demonstrate how this approach is sufficient for most cases. The Laplace smoothing approach solves the following partial differential equation within the domain:

\frac{\partial^2 x}{\partial X^2} + \frac{\partial^2 x}{\partial Y^2} + \frac{\partial^2 x}{\partial Z^2} = 0

(and likewise for y and z), where lowercase (x,y,z) are the deformed positions of the mesh and uppercase (X,Y,Z) are the original, undeformed positions.
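To see what Laplace smoothing does, here is an illustrative Python sketch (not COMSOL code) that applies Jacobi iterations of the discrete Laplace equation to one coordinate field on a structured grid, with the boundary values held fixed:

```python
import numpy as np

def laplace_smooth(x, n_iter=500):
    """Jacobi iterations of the discrete Laplace equation on a structured grid.

    x: 2D array holding one deformed coordinate; the boundary rows and
    columns carry the prescribed boundary values and are kept fixed.
    """
    x = x.copy()
    for _ in range(n_iter):
        # Each interior node moves to the average of its four neighbors,
        # the discrete analogue of d2x/dX2 + d2x/dY2 = 0.
        x[1:-1, 1:-1] = 0.25 * (x[:-2, 1:-1] + x[2:, 1:-1] +
                                x[1:-1, :-2] + x[1:-1, 2:])
    return x

# Example: left edge displaced by 1, all other edges fixed at 0
grid = np.zeros((5, 5))
grid[:, 0] = 1.0
print(laplace_smooth(grid)[2, :])  # decays smoothly from 1 to 0
```

Just as in the text, the smoothed field is entirely determined by the boundary values; nothing prevents the interior nodes from bunching up near sharp, moving corners.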
Since the displacements at all boundaries are known, this is a well-posed problem, and theoretically, the solution to this equation will give us the deformation of the mesh. However, in practice, we may run into cases where the computed deformation field is not very useful. This is illustrated in the figure below, which shows the original mesh on the original domain and the deformed mesh as the part is moved along the diagonal. Observe the highlighted region and note that the mesh gets highly distorted around the moving part's edges, especially at sharp corners. This high distortion prevents the model from solving the above equation past a certain amount of deformation.
Original and deformed mesh. The region where the mesh gets highly distorted is highlighted.
In the above image, the deformation of the blue domain is completely described by its boundaries and can be prescribed. On the other hand, the deformation within the red region requires solving the above partial differential equation, and this leads to difficulties. What we want is an approach that allows us to model greater deformations while minimizing the mesh distortion.
If you have a mathematical background, you will recognize the above governing equation as Laplace's equation, and you might even know the solutions to it for a few simple cases. One of the simpler cases is the solution to Laplace's equation on a Cartesian domain with Dirichlet boundary conditions that vary linearly along each boundary and continuously around the perimeter. For this case, the solution within the domain is equal to bilinear interpolation between the boundary conditions given at the four corners. As it turns out, you can use bilinear interpolation to find the solution to Laplace's equation for any convex four-sided domain with straight boundaries.
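We can verify numerically that a bilinear function is indeed harmonic, i.e., that it satisfies Laplace's equation. Here is a small, self-contained Python check with illustrative corner values:

```python
def bilinear(x, y, c00, c10, c01, c11):
    """Bilinear interpolation on the unit square from the four corner values."""
    return (c00 * (1 - x) * (1 - y) + c10 * x * (1 - y)
            + c01 * (1 - x) * y + c11 * x * y)

# A bilinear function is harmonic: d2f/dx2 + d2f/dy2 = 0, since each pure
# second derivative vanishes. Check with a centered finite-difference Laplacian:
h = 1e-3
x0, y0 = 0.3, 0.7
f = lambda x, y: bilinear(x, y, 0.0, 1.0, 2.0, 5.0)
lap = (f(x0 + h, y0) + f(x0 - h, y0) + f(x0, y0 + h) + f(x0, y0 - h)
       - 4 * f(x0, y0)) / h**2
print(abs(lap) < 1e-6)  # True
```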
The first thing that we have to do is subdivide our complicated deforming domain into convex four-sided domains with straight boundaries. One such possible subdivision is shown below.
Subdividing the domain so that the deforming region (red) is composed of four-sided convex domains.
The deforming domain is divided into convex quadrilateral domains. In fact, we could have also divided it into triangular domains, since a triangle is simply a special case of a quadrilateral with two vertices at the same location — a so-called degenerate domain. We would only need to decompose the domain into triangles if it were not possible to split the domains into quadrilaterals.
Now that we have these additional boundaries, we need to completely define all of the boundary conditions for the deformation within the domain. The boundaries adjacent to the deforming domain are known and there is no deformation at the outside boundaries. But what about the boundaries connecting these? We have a straight line connecting two points where the deformation is known, so we could just apply linear interpolation along these lines to specify the deformation there as well.
And how can we easily compute this linear interpolation? As you might have already guessed, we can simply solve Laplace’s equation along these connecting lines!
A very general way of doing this is by adding a Coefficient Form Boundary PDE interface to our model to solve for two variables that describe the displacement along each of these four boundaries. This interface allows you to specify the coefficients of a partial differential equation to set up Laplace’s equation along a boundary. We know the displacements at the points on either end of the boundary, which gives us a fully defined and solvable boundary value problem for the displacements along the boundaries.
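The idea can be sketched in a few lines of Python: solving the 1D Laplace equation along a connecting line, with the known end displacements as boundary conditions, reproduces linear interpolation. This is an illustrative stand-in for the Coefficient Form Boundary PDE setup, not COMSOL code:

```python
import numpy as np

def boundary_displacement(d_left, d_right, n=11, n_iter=2000):
    """Solve the 1D Laplace equation u'' = 0 along a straight connecting
    boundary, with known end displacements, by Jacobi iteration."""
    u = np.zeros(n)
    u[0], u[-1] = d_left, d_right
    for _ in range(n_iter):
        # Interior nodes relax to the average of their two neighbors
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u

u = boundary_displacement(0.0, 1.0)
print(u)  # linearly interpolates from 0 to 1
```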
These new help variables completely define the deforming domains. The results are shown below and demonstrate that larger deformations of the mesh are possible. Of course, we still cannot move the object such that it collides with the boundary. That would imply that the topology of the domain would change; also, the elements cannot have zero area. We can, however, make the deformed domain very small and thin.
The undeformed and deformed mesh after adding the help variables for the Deformed Geometry along the interior boundaries.
You are probably thinking that the mesh shown above appears rather distorted, but keep in mind that all of these distorted elements still have straight-sided edges, which is good. In practice, you will often find that you can get good results even from what appear to be highly distorted elements.
However, we can observe that there are now very many small, distorted elements in one region and larger, stretched elements in other parts of our moving domain. The last piece of the puzzle is to use Automatic Remeshing, which will stop a transient simulation based on a mesh quality metric and remesh the current deformed shape.
The deformed geometry immediately before and after the Automatic Remeshing step.
We can see from the above images that Automatic Remeshing leads to a lower element count in the compressed region and adds elements in the stretched region, such that the elements are reasonably uniform. The total number of elements in the mesh stays about the same. There is also an added computational burden due to the remeshing, so this step is only warranted if the element distortion adversely affects the accuracy of the results.
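Automatic Remeshing is driven by a mesh quality metric. As an illustration of what such a metric measures, here is one common quality measure for triangles (not necessarily the exact definition COMSOL uses): a normalized ratio of area to the sum of squared edge lengths, which equals 1 for an equilateral triangle and tends to 0 for a degenerate sliver:

```python
import math

def triangle_quality(p1, p2, p3):
    """Quality q = 4*sqrt(3)*area / (sum of squared edge lengths).
    q = 1 for an equilateral triangle; q -> 0 as the element degenerates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    edges = ((x1 - x2) ** 2 + (y1 - y2) ** 2 +
             (x2 - x3) ** 2 + (y2 - y3) ** 2 +
             (x3 - x1) ** 2 + (y3 - y1) ** 2)
    return 4 * math.sqrt(3) * area / edges

print(triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # 1.0
print(triangle_quality((0, 0), (1, 0), (0.5, 0.01)))  # much less than 1: a sliver
```

A remeshing condition then amounts to checking whether the minimum quality over all elements has dropped below some threshold.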
We have looked at a case where we know exactly how our solid object will move around in our fluid domain. But what if there is an unknown deformation of the solid, such as due to some applied loads that are computed during the solution? A classic example of such a situation is a fluid-structure interaction analysis, where the deformation of the solid is due to the surrounding fluid flow.
In such situations, we can use the Integration Component Coupling operator, which makes the deformation at one point of a deforming solid structure available everywhere within the model space. The deformation of one or more points can then be used to control the deformation of the mesh. A good example of this technique is available in the Micropump Mechanism tutorial. The technique is visualized below.
When the actual deformation is unknown, an integration component coupling at a helper point can be used to control a helper line that defines the mesh deformation.
We can see in the image above that the modeling domain is actually not divided into convex quadrilaterals, and that the helper line is allowed to slide along the top boundary of the modeling domain. So this modeling approach is a little bit less strict, yet still allows the mesh to deform significantly. Hopefully it is clear that there is no single best approach to every situation. You may want to investigate a combination of techniques for your particular case.
We have described how to use the deformed mesh interfaces efficiently by decomposing the deforming domain into quadrilateral domains and introducing help variables along the boundaries. This approach makes the problem easier for the COMSOL Multiphysics software to solve. The addition of Automatic Remeshing is helpful if there is significant deformation. The approach outlined here can also be applied to 3D geometries. An example that uses both 2D and 3D cases is available here.
So far, we have only looked at translations of objects inside of relatively simple domains where it is easy to set up deforming domains. When we need to consider geometries that cannot easily be subdivided and cases with rotations of objects, we need to use a different approach. We will cover that topic in an upcoming blog post on this subject, so stay tuned!
Suppose you are tasked with computing the fluid flow through a network of pipes, as depicted below. You can see that there are many bends with long straight sections in between.
A piping network. Image by Hervé Cozanet, via Wikimedia Commons.
The geometry for a fluid flow model of just one pipe in this network might look like the image below.
A CAD model of a pipe volume for fluid flow analysis.
If you go ahead and mesh this geometry with just the default Physics-Controlled Mesh capability, you will obtain a mesh like that pictured below. Note that the boundary layer mesh is applied to the pipe walls and that the mesh is otherwise quite uniform in size within the long, straight sections of the pipe.
The default finite element mesh for this fluid flow problem includes a boundary layer mesh on all no-slip boundaries.
An experienced fluid flow analyst would immediately recognize that the flow field in the long, straight sections will be primarily parallel to the pipe and vary quite gradually along the axis. Meanwhile, the variation of the velocity along the cross section and around the bends will be significant. We can exploit this foreknowledge of the solution to partition the geometry into various domains.
The pipe domain is partitioned into several subdomains, which are shown in different colors.
Once the geometry is partitioned, we can apply a Free Tetrahedral mesh feature. This mesh should only be applied to a single domain along the length of the pipe: one that represents a bend (depicted below). Note that the Boundary Layers mesh feature is not yet applied.
A tetrahedral mesh is applied to only one of the domains.
From this one meshed domain, we can now use the Swept mesh functionality in the straight sections, as illustrated below. It is also possible to specify a Distribution subfeature to the Swept feature to explicitly control the element distribution and set up a nonuniform element size along the length. Since we anticipate that the flow will vary gradually along the length, the elements can be quite stretched in the axial direction.
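As an illustration of what a nonuniform Distribution can look like, the following Python sketch (our own helper, not COMSOL code) computes node positions along a swept edge where the element sizes grow geometrically and the last element is a specified multiple of the first:

```python
def swept_distribution(length, n, element_ratio):
    """Node positions for n elements along a swept edge where element sizes
    grow geometrically and the last element is element_ratio times the first
    (similar in spirit to a Distribution subfeature's settings)."""
    q = element_ratio ** (1.0 / (n - 1)) if n > 1 else 1.0
    sizes = [q ** i for i in range(n)]          # geometric size progression
    scale = length / sum(sizes)                 # normalize to the edge length
    nodes = [0.0]
    for s in sizes:
        nodes.append(nodes[-1] + scale * s)
    return nodes

nodes = swept_distribution(10.0, 5, 4.0)
print(nodes)  # last node at 10.0; last element 4x the first
```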
The swept mesh along the straight sections also has a nonuniform element distribution.
We can now apply a tetrahedral mesh to the two bent sections and sweep the remaining straight sections. The last step of the meshing sequence is to apply the Boundary Layers mesh feature.
The combination of a tetrahedral and swept mesh with the boundary layers applied at the walls.
From the above images, we can observe that the swept mesh can significantly reduce the size of the model for this fluid flow problem. Our Flow Through a Pipe Elbow tutorial is one example in which this swept meshing technique is used.
Shifting gears, let’s now consider an inductive coil similar to the one pictured below.
An inductive coil. Image by Spinningspark, via Wikimedia Commons.
This coil consists of a long wire with quite gradual bends. If tasked with computing the inductance, we would also need to consider the surrounding air and the magnetic core materials. The geometry for such a model and the default mesh might look like the image below.
A coil surrounding a magnetic core in an air domain.
The default Free Tetrahedral mesh feature is applied to the entire model.
You’ve probably already recognized that the coil itself is an excellent candidate for swept meshing. The coil is long and uniform in cross section. As such, we can begin with a triangular surface mesh at one end and then sweep it along the entire length of the coil to create triangular prismatic elements.
A triangular mesh (represented in blue) is applied to the crosssectional surface at one end of the coil and then swept along the entire length.
We do, however, still need a volumetric mesh of the surroundings. This surrounding volume is amenable to only tetrahedral meshing, not swept meshing. A volume that is to be meshed with tetrahedral elements can only have triangular surface elements on all of its boundaries. Thus, we must first add a Convert feature to the mesh sequence and apply it to the surfaces between the coil and its surroundings. The operation is designed to split the elements touching the boundaries such that triangular face elements are created.
The convert operation introduces triangular elements on the boundaries of the coil.
The remaining domains are meshed with tetrahedra.
From the above image, we can see that fewer elements are used to describe the coil than in the default mesh settings. A similar example is the Anisotropic Heat Transfer Through Woven Carbon Fibers tutorial, which considers a combination of swept meshing and tetrahedral meshing of the surroundings (albeit with different physics involved).
Finally, let us consider a microelectromechanical systems (MEMS) structure composed of microscale structural features that deflect. When different electric potentials are applied to different objects, the system deforms, and this perturbation of the structure is measurable through a change in capacitance. Such an effect is exploited in devices like comb drives, accelerometers, and gyroscopes.
A MEMS cantilever beam at resonance. Image by Pcflet01, via Wikimedia Commons.
A common characteristic of such MEMS structures is that they are composed of various thin planar layers that need to be meshed along with the surrounding air domain. The gaps between structures may also be quite slender. A simplified model for part of such a MEMS structure might appear similar to the model shown below, with interleaved fingers.
A simplified model representing part of a typical MEMS structure.
When using the default mesh settings, small elements will be inserted in the narrow air gaps between the parts (illustrated below). However, we do know that the fingers on either side will be at different potentials and that the gap between the straight sections of the fingers and the ground plane will have a uniform electric field.
The default mesh settings show smaller elements than those that are needed in regions where we know the electric field will be nearly uniform.
The present structure is not amenable to swept meshing as-is, since no domain in this model has a uniform cross section. But, if we introduce some partitioning planes, we can break this domain up into prismatic domains that are amenable to swept meshing. We will first introduce two partitioning planes — one at the top and one at the bottom surface of the fingers — that will partition both the air domain and the two solid domains. We add these planes as Work Plane features to the geometry sequence, and they are used as input by the two Partition Object features that divide the solids.
Two planes are introduced that partition both the air and the solid domains.
It is then possible to introduce additional partitioning planes, as shown below, to delineate the long, straight sections of the fingers. This is important because we know the electric fields and displacements will vary quite gradually in these regions.
Two additional planes divide the fingers into prismatic domains.
Now we can begin the meshing process using the Mapped mesh feature on the new rectangular surfaces introduced by the partitioning. The nonrectangular faces on the same plane can be meshed with triangular elements, as illustrated below.
A surface mesh applied to one of the partitioning planes.
The surface mesh can be used as the starting point for the swept mesh, which can be applied to the two layers of the thin domains — the fingers and the air gaps between the fingers and the ground. The air domain can be meshed with tetrahedral elements after a convert operation is applied to the adjacent rectangular element faces.
The final mesh consists of a combination of free and swept meshes.
We can observe that the number of total elements in the finite element model has been reduced. For an example demonstrating this technique of partitioning planes and swept meshing, please see our Surface Micromachined Accelerometer tutorial.
Swept meshing is a powerful technique for minimizing the computational complexity of many classes of COMSOL Multiphysics models. By using your engineering judgment and knowledge to address each problem, you can obtain highaccuracy results quickly and at relatively lower computational costs than with default mesh settings.
While you, of course, do not always need to use this approach, you should consider applying it to cases where your geometry has high aspect ratios, there are relatively thin or thick regions, and you are reasonably certain that the solution will be represented well by the swept mesh.
In conjunction with this topic, here are some additional blog posts to read:
Consider a one-dimensional domain on the x-axis with a source localized around x = 0. We can plot the strength of the source as a function of x and it may look like this:
Here, we have assumed that the strength has a constant value of 1/w within the interval [-w/2, w/2] and is zero everywhere else. This gives a rectangular shape of width w and height 1/w, as shown in the figure above. The function is often called a rectangular, top-hat, or, sometimes, disc function. The total strength of the source is given by the area of the rectangle, which is unity.
For linear systems, if we only care about what happens far away from the source, where \left| x \right| \gg w, then the actual shape of the source strength does not matter much, as long as the area beneath that shape is the same. Furthermore, we are free to make w progressively smaller and smaller: the width of the rectangle decreases while its height increases in such a way that the total area remains the same, as shown in the graph below.
The localized source represented by the blue curve is progressively made thinner and taller (the orange and green curves), while maintaining the integrated strength of unity.
Eventually, we arrive at a rectangle that is infinitesimally thin and infinitely tall, but still has a well-defined area of unity. This leads us to the so-called delta function \delta(x) and, correspondingly, the localized source now becomes an idealized point source of unit strength.
The delta function has some convenient properties. Its value is zero everywhere except at the origin:

\delta(x) = 0 \quad \textrm{for } x \neq 0

Integrating the product of a delta function and another function simply extracts the value of the latter function at the origin:

\int_{-\infty}^{\infty} f(x) \delta(x) \, dx = f(0)

A point source at a general position x=a can be obtained by a simple coordinate shift of the delta function, \delta(x-a). We have

\delta(x-a) = 0 \quad \textrm{for } x \neq a

and

\int_{-\infty}^{\infty} f(x) \delta(x-a) \, dx = f(a)

It is also easy to generalize the delta function and the corresponding point source to higher dimensions. For example, in 2D, we have

\delta(x-a, y-b) = 0 \quad \textrm{for } (x,y) \neq (a,b)

and

(1)

\iint f(x,y) \delta(x-a, y-b) \, dx \, dy = f(a,b)
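The sifting property can be checked numerically by approximating the delta function with a narrow top-hat, as in the figures above. Here is a short Python sketch with an illustrative test function:

```python
import numpy as np

def tophat(x, w):
    """Rectangular approximation to delta(x): height 1/w on [-w/2, w/2]."""
    return np.where(np.abs(x) <= w / 2, 1.0 / w, 0.0)

f = lambda x: np.cos(x)              # an arbitrary smooth test function

x = np.linspace(-1.0, 1.0, 200001)   # fine quadrature grid
dx = x[1] - x[0]
for w in (0.5, 0.1, 0.01):
    integral = np.sum(f(x) * tophat(x, w)) * dx
    print(w, integral)               # approaches f(0) = 1 as w shrinks
```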
This tutorial solves the Poisson equation on a unit disc with a point source at the origin. The equation reads

(2)

-\nabla^2 u = \delta(x, y)

where u is the dependent field variable to be solved for.
At first sight, it may not be obvious how to discretize this equation to be solved numerically. What value do we put at the origin for the source term on the right-hand side? The value of the delta function is infinite there, but computers don't like infinities!
Here, we will see that the weak formulation comes in handy. Recall that in this introductory blog post on the weak form, we multiply the differential equation to be solved by a test function and integrate over the entire domain (see Eq. (4) in that post). We can follow the same procedure here to solve Eq. (2). After multiplying by a test function \tilde{u}(x,y) and integrating over the unit disc domain, the right-hand side of Eq. (2) simply becomes

(3)

\iint \tilde{u}(x,y) \delta(x, y) \, dx \, dy = \tilde{u}(0,0)

by using the integration property of the delta function given in Eq. (1). This gives us something very easy to implement in COMSOL Multiphysics.
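Before setting this up in COMSOL Multiphysics, it is instructive to see the same idea in a 1D finite element sketch: for -u'' = \delta(x) on (-1, 1) with zero boundary values, the point source contributes the value of the test function at the origin, i.e., a single unit entry in the load vector. This Python example is a stand-alone illustration, not COMSOL code:

```python
import numpy as np

# 1D analogue: solve -u'' = delta(x) on (-1, 1) with u(-1) = u(1) = 0 using
# linear finite elements. In the weak form, the point source contributes
# test_function(0): a load vector entry of 1 at the node at x = 0.
n = 41                                # odd, so a node sits exactly at x = 0
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]

# Assemble the standard stiffness matrix for -u'' with linear elements
K = np.zeros((n, n))
for i in range(n - 1):
    K[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

# Point source of unit strength: the load is the test function at x = 0
F = np.zeros(n)
F[n // 2] = 1.0

# Dirichlet conditions u(-1) = u(1) = 0: solve on the interior nodes only
u = np.zeros(n)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

exact = 0.5 * (1.0 - np.abs(x))       # analytical solution
print(np.max(np.abs(u - exact)))      # ~0: the exact solution is piecewise linear
```

Because the exact solution is itself piecewise linear with a kink at the source, the finite element solution reproduces it to rounding error here; in 2D, a singularity remains near the source, as the plots below show.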
Start with a new 2D model with the Weak Form PDE physics interface and a Stationary study. Draw a unit circle centered at (0,0) and draw a point there as well. Set the Weak Expressions field under the default Weak Form PDE 1 feature to -test(ux)*ux-test(uy)*uy. This takes care of the left-hand side of Eq. (2) in exactly the same fashion as for the 1D case discussed in this previous post.
Now, for the point source on the right-hand side, \tilde{u}(0,0), we simply add a point Weak Contribution node and select the point at the origin. For the Weak expression, we enter test(u). It's that simple for the point source!
It may be worth noting that by entering test(u), we set the strength of the point source to unity. For any other source magnitude, simply multiply by a factor. For example, the expression 2*test(u) gives a point source of strength 2.
After finishing the setup with a Dirichlet boundary condition at the perimeter of the circle, we can solve the model and observe the same solution as seen in the point source tutorial mentioned above:
Also as seen in the tutorial, the numerical solution (blue curve) matches the analytical solution (green curve) very well, except near the origin, where a singularity occurs:
As mentioned earlier, the point source provides a convenient idealization of a localized source in situations where we only care about the solution far away from the source. We illustrate this point with the following graph, where we have added three more curves to the graph above. These three curves are numerical solutions to the same Poisson equation in the same unit disc domain, but with various sizes of top-hat, or disc, shaped sources replacing the point source. The integrated strength of each top-hat source is calibrated to unity by setting its height to one over its area, in the same fashion as in the 1D case shown in the image above. As we see clearly from the figure below, all solutions are indistinguishable from one another far away from the sources. (In this example, for x \gg 10 \, mm.)
Here, we have demonstrated the ease of creating point sources using the weak form. The numerical difficulty in the representation of the delta function is circumvented with a simple integration. In upcoming posts we will look at discontinuities and boundary conditions. Stay tuned!
Let’s begin by looking at a microfluidic device, as shown below. Such devices feature small channels that are filled with fluids carrying different chemical species. Within their design, a common goal is to achieve optimal mixing within a small surface area, hence the serpentine channel.
A typical microfluidic device. Image by IXfactory STK — Own work, via Wikimedia Commons.
The schematic below illustrates that there are two fluid inlets, both of which carry the same solvent (water) but a different solute. At the outlet, we want the species to be well mixed. To model such a situation, we want to solve the Navier-Stokes equations for the flow. This computed flow field can then be used as input for the convection-diffusion equation governing the species concentration. The Micromixer tutorial, available in our Application Gallery, is an example of such a model.
Now, if desired, it is possible to model the entire device shown above. However, if we neglect the structure near the inlet and the outlet, we can reasonably assume that the flow within the channel bends will be identical between the unit cells. Therefore, we can greatly reduce our model by solving only for the fluid flow within one unit cell and patterning this flow solution throughout the modeling domain for the convectiondiffusion problem.
Schematic of a microfluidic mixer that depicts the repeated unit cell and the inlet and outlet zones.
For such a unit cell model, the walls of the channels are set to the Wall, No Slip condition. The Periodic Flow condition sets the velocity to be identical at the inlet and outlet boundaries, allowing us to specify a pressure drop over a single unit cell. A pressure constraint at a single point is used to gauge-fix the otherwise undetermined pressure field. The working fluid is water with properties defined at room temperature and pressure. The flow solution on this unit cell is plotted below.
The periodic modeling domain and the fluid flow solution.
Now that we have the solution on one unit cell, we can use the General Extrusion component coupling to map the solution from this one unit cell onto the repeated domains. This will enable us to define the flow field in the entire serpentine section.
The General Extrusion feature is available in the model tree under Component > Definitions > Component Coupling. The settings for this feature are illustrated below. To map the solution from one domain into the other domains that are offset by a known displacement along the x-axis, the destination map uses the expression "x-Disp" for the x-expression. Thus, every point in the original domain is mapped along the positive x-direction by the specified displacement. Since there is no displacement in the y-direction, the y-expression is left at its default, "y".
The variable Disp is individually defined within each of the three domains, as shown in the figure below. Therefore, only a single operator is needed to map the velocity field into all of the domains. Within the original domain, a displacement of zero is used.
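Conceptually, the operator evaluates the source field at the shifted location. The following Python sketch mimics this mapping for a field that repeats over three unit cells (the field and helper names are illustrative, not the COMSOL API):

```python
import numpy as np

# Illustrative sketch (not COMSOL code): pattern a periodic unit-cell
# solution along x. The extrusion operator evaluates the source field
# at the shifted point (x - Disp, y).
cell_width = 1.0

def unit_cell_velocity(x, y):
    """Stand-in for the flow solution on the source unit cell, 0 <= x < 1."""
    return np.sin(2 * np.pi * x) + 0.1 * y

def disp(x):
    """Per-domain shift: domain k (k = 0, 1, 2) maps back by k*cell_width."""
    return np.floor(x / cell_width) * cell_width

def genext1(x, y):
    """General-Extrusion-style mapping: source field evaluated at (x - Disp, y)."""
    return unit_cell_velocity(x - disp(x), y)

# The mapped field repeats the unit-cell solution in each of the domains
print(genext1(0.25, 0.0), genext1(1.25, 0.0), genext1(2.25, 0.0))  # all equal
```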
The settings for the General Extrusion operator and the definitions of the variable in the three domains.
With the General Extrusion operator defined, we can now use it throughout the model. In this example, the operator is used by the Transport of Diluted Species interface to define the velocity field (illustrated below). The velocity field is given by u and v, the fluid velocity in the x- and y-directions, respectively. The components of this velocity field are now defined in all of the repeated domains via the General Extrusion operator: genext1(u) and genext1(v), respectively.
The General Extrusion operator is used to define the velocity field in all three periodic domains.
Now that the velocity field is defined throughout the modeling domain, the species concentration at the inlet is defined via the Inflow boundary condition. This applies a varying species concentration over the inlet boundary. An Outlet boundary condition is applied at the other end.
Although it is not strictly necessary to do so, the mesh is copied from the one domain used to solve for the fluid flow to all of the other domains. The Copy Domain mesh feature can copy the mesh exactly, thereby avoiding any interpolation of the flow solution between meshes.
The model is solved in two steps — first, the Laminar Flow physics interface is solved, and then the Transport of Diluted Species interface is solved. This is reasonable to do since it is assumed that the flow field is independent of the species concentration. The results of the analysis, including the concentration and the mapped velocity field, are depicted below.
The species concentration (shown in color) is solved in all three repeating domains. The periodic velocity field, indicated by the arrows, is solved in one domain and mapped into the others.
We have discussed how the General Extrusion component coupling can be used to set up a linear pattern of a periodic solution as part of a multiphysics analysis. For circular periodicity, a rotation matrix, not a linear shift, must be used in the destination map. An example of defining such a rotation matrix is detailed in this previous blog post.
The approach we have applied here is appropriate for any instance in which a spatially repeating solution needs to be utilized by other physics. Where might you use it in your multiphysics modeling?
Let’s start this conversation with a very simple problem — computing the capacitance of two parallel flat square metal plates, of side length L=1\:m, separated by a distance D=0.1\:m, and with a dielectric material of relative permittivity \epsilon_r= 2 sandwiched in between. Under the assumption that the fringing fields are insignificant (a rather severe assumption, but we will use it here to get started), we can write an analytic expression for the capacitance:

C = \frac{\epsilon_0 \epsilon_r L^2}{D}

where \epsilon_0 is the permittivity of free space.
We can easily differentiate this expression with respect to our three inputs to find the design sensitivities:

\frac{\partial C}{\partial L} = \frac{2 \epsilon_0 \epsilon_r L}{D}, \qquad \frac{\partial C}{\partial \epsilon_r} = \frac{\epsilon_0 L^2}{D}, \qquad \frac{\partial C}{\partial D} = -\frac{\epsilon_0 \epsilon_r L^2}{D^2}

Now let’s look at computing these same sensitivities using the functionality of COMSOL Multiphysics.
Schematic of a parallel plate capacitor model, neglecting the fringing fields.
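The hand-derived sensitivities of C = \epsilon_0 \epsilon_r L^2 / D are easy to cross-check against central finite differences, as in this illustrative Python sketch:

```python
eps0 = 8.854187817e-12   # vacuum permittivity, F/m

def C(L, D, er):
    """Parallel-plate capacitance, neglecting fringing fields."""
    return eps0 * er * L**2 / D

L, D, er = 1.0, 0.1, 2.0

# Analytic sensitivities from differentiating C = eps0*er*L^2/D
dC_dL  = 2 * eps0 * er * L / D
dC_dD  = -eps0 * er * L**2 / D**2
dC_der = eps0 * L**2 / D

# Cross-check against central finite differences
h = 1e-6
fd = lambda f, v: (f(v + h) - f(v - h)) / (2 * h)
print(abs(fd(lambda v: C(v, D, er), L) - dC_dL))   # ~0
print(abs(fd(lambda v: C(L, v, er), D) - dC_dD))   # ~0
print(abs(fd(lambda v: C(L, D, v), er) - dC_der))  # ~0
```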
We can start by building a model using the Electrostatics physics interface. Our domain will be a block of length L and height D with a relative permittivity of \epsilon_r. The boundary condition at the bottom is a Ground condition, and at the top, an Electric Potential condition sets the voltage to V_0=1\:V. The sides of the block have the default Zero Charge boundary condition, which is equivalent to neglecting the fringing fields. We can solve this model and find the voltage field between the two plates. Based on this solution, we can also calculate the system capacitance from the integral of the electric energy density, W_e, over the entire model:

C = \frac{2}{V_0^2} \int_{\Omega} W_e \, d\Omega

This equation assumes that one plate (or terminal) is held at a voltage V_0, while all other terminals in the model are grounded. The integral of the electric energy density over all domains is already computed via the built-in variable comp1.es.intWe, and we can use it to define an expression for the computed capacitance. Of course, we will want to compare this value and our computed sensitivities with the analytic values, so we define variables for these quantities. We can use the built-in differentiation operator, d(f(x),x), to evaluate the exact sensitivities, as shown in the screenshot below.
Variables are used to compute the model capacitance, as well as to compute the exact capacitance and its sensitivities. The built-in differentiation operator, for example d(C_exact,L), can be used to evaluate the exact sensitivities.
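For this idealized geometry, the energy-based route reproduces the analytic capacitance exactly, since the field between the plates is uniform. A short illustrative check in Python:

```python
eps0 = 8.854187817e-12   # vacuum permittivity, F/m
L, D, er, V0 = 1.0, 0.1, 2.0, 1.0

# Uniform field between the plates (fringing neglected): E = V0 / D
E = V0 / D
We_density = 0.5 * eps0 * er * E**2          # electric energy density
We_total = We_density * L**2 * D             # integrate over the block volume

C_from_energy = 2 * We_total / V0**2         # C = 2*We/V0^2
C_analytic = eps0 * er * L**2 / D
print(C_from_energy, C_analytic)             # the two values agree
```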
After solving our model, we can evaluate the computed system capacitance, compare it to the analytic value, and evaluate the design sensitivities analytically. Now let’s look at how to compute these sensitivities with COMSOL Multiphysics.
The parameters that we are considering affect both the material properties as well as the geometric dimensions of the model. When the design parameters affect the geometry, we need to use the Deformed Geometry interface, which lets us evaluate the sensitivities with respect to a geometric deformation.
The design parameters will effect a change in the geometry as shown. The hidden faces experience no displacement normal to the boundaries.
We introduce two new Global Parameters, dL and dD, which represent a change in L and D. These will be used in the Deformed Geometry interface, which has four relevant features. First, a Free Deformation feature is applied to the domain, which means that the computational domain can deform based on the applied boundary conditions. Next, Prescribed Mesh Displacement features are applied to the six faces of the domain. In the screenshot below, the deformation (dL) normal to the faces is prescribed as shown in the sketch above.
The Prescribed Mesh Displacement features are used to control the displacement normal to all domain boundaries.
Finally, to actually compute the sensitivities, we must add a Sensitivity node to the Study Sequence. This is shown in the screenshot below. You will want to enter the objective function expression, in this case C_computed, as well as all of the design parameters that you are interested in studying. Also, choose the values of the design parameters around which you want to evaluate the sensitivities. Since dL and dD represent an incremental change in the dimensions, we can leave them both at zero to compute the sensitivities for L=1\:m and D=0.1\:m. The parameter controlling the material permittivity needs no special handling, other than choosing it as one of the parameters in the Sensitivity study.
There are two options in the form shown below for the gradient method: the adjoint method, which computes the sensitivities with respect to all of the parameters at once, and the forward method, whose cost grows with the number of parameters.
A Sensitivity feature using the adjoint method is added to the study sequence, and the settings show the objective function and the parameters that are considered.
After solving, you will now be able to go to Results > Derived Values > Global Evaluation and enter the expressions fsens(dL), fsens(dD), and fsens(epsilon_r) to evaluate the sensitivity of the capacitance with respect to the design parameters. Of course, you can also compare these to the previously computed analytic sensitivities and observe agreement.
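Before trusting gradient output in a larger model, it can help to cross-check it against central finite differences from a parametric sweep. The sketch below uses a hypothetical closed-form capacitance as a stand-in for the solver; the function name and the parameter values are assumptions for illustration.

```python
# Generic cross-check: compare computed sensitivities against central
# finite differences of the objective. Here the "solver" is a stand-in
# closed-form capacitance; in practice it would be a parametric solve.
def capacitance(L, D, eps_r, eps0=8.854187817e-12):
    # hypothetical objective: square-plate capacitor, fringing neglected
    return eps0 * eps_r * L**2 / D

def central_diff(f, args, i, h=1e-6):
    """Central finite difference of f with respect to argument i."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

x0 = (1.0, 0.1, 2.0)  # L = 1 m, D = 0.1 m, eps_r = 2 (assumed values)
for i, name in enumerate(('L', 'D', 'eps_r')):
    print(f"dC/d{name} ~ {central_diff(capacitance, x0, i):.6e}")
```

The step size h trades truncation error against round-off; for a smooth objective like this one, values near 1e-6 of the parameter scale work well.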
                                           Analytic          Computed
\frac{\partial C}{\partial \epsilon_r}     88.542 nF         88.542 nF
\frac{\partial C}{\partial L}              354.17 nF/m       354.17 nF/m
\frac{\partial C}{\partial D}              -1770.8 nF/m      -1770.8 nF/m
Now that we have the basic idea down in terms of computing the sensitivity of the capacitance of this system, what else can we do? Certainly, we can move on to some more complicated geometries, but there are a few points that we need to keep in mind as we move beyond this example.
There are two conditions that must be fulfilled for sensitivity analysis to work. First, the objective function itself must be differentiable with respect to the solution field. This means that objective functions such as the maximum or minimum of a field are not possible. Second, the parameters must be continuous in real-number space. Thus, integer parameters (e.g., the number of spokes on a wheel) are not possible.
Sensitivity calculations are not currently available for eigenvalue problems, nor for ray tracing or particle tracing simulations.
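As noted above, a pointwise maximum is not differentiable, so it cannot serve directly as an objective. A common workaround in optimization practice (an aside here, not a claim about a specific COMSOL feature) is to replace the hard maximum with a smooth p-norm aggregate:

```python
def p_norm_max(values, p=20):
    """Smooth, differentiable surrogate for max(values) (all values > 0).
    Approaches the true maximum from above as p grows; for very large p,
    normalize by max(values) first to avoid floating-point overflow."""
    return sum(v ** p for v in values) ** (1.0 / p)

# Hypothetical sampled temperatures in kelvin, for illustration only
samples = [290.0, 305.0, 351.0, 349.5]
print(p_norm_max(samples), max(samples))
```

The surrogate slightly overestimates the true maximum, and the overshoot shrinks as p increases, at the cost of a stiffer, more nonlinear objective.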
The design parameters themselves are typically Global Parameters, but you can also use the Sensitivity interface to add a Control Variable Field defined over domains, boundaries, edges, and points as desired.
Objective functions are typically defined in terms of integrals of the solution over domains or boundaries. It is also possible to set up an objective function as a Probe at a particular location in the model space. Any derived quantity based on the solution field, such as the spatial gradients of the solution, can be used as part of the objective function.
Computing design sensitivities is helpful for determining which parameters affect our objective function the most and gives us an idea about which parameters we might want to focus on as we start to consider design changes. Some other examples that use this functionality include our tutorial model of an axial magnetic bearing and our sensitivity analysis of a communication mast detail. Of course, this method can be used in far more cases than we can describe at once.
This story continues when we start to use these sensitivities to improve our objective function, that is, when we optimize the design. This can be done with the Optimization Module, which we will cover in an upcoming blog post.
Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to two closely related topics of verification and validation. Let’s clarify what these two terms mean in the context of numerical simulations.
To numerically simulate a physical problem, we take two steps: first, we construct a mathematical model that describes the physical system; second, we solve that mathematical model numerically.
There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables. Validation is the process of making sure such errors are not introduced when constructing the mathematical model. Verification, on the other hand, ascertains that the mathematical model is solved accurately. Here, we ensure that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate.
In brief, during validation we ask if we posed the appropriate mathematical model to describe the physical system, whereas in verification we investigate if we are obtaining an accurate numerical solution to the mathematical model.
A comparison between the processes of validation and verification.
Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs).
How do we check if a simulation tool is accurately solving an IBVP?
One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the raison d’être of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming.
Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called qualification. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure.
The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas.
Verification models are available in the Application Libraries of COMSOL Multiphysics.
What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions.
The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems.
In the method of manufactured solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute this solution into the differential equations and obtain a consistent set of source terms, which usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain the initial conditions and the boundary conditions.
Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started.
Let us illustrate the steps with a simple example.
Consider a 1D heat conduction problem in a bar of length L, governed by

A_c \rho C_p \frac{\partial T}{\partial t} - \frac{\partial}{\partial x}\left(A_c k \frac{\partial T}{\partial x}\right) = Q, \quad 0 < x < L,

with initial condition T(x,0) = T_0(x) and fixed temperatures at the two ends given by T(0,t) = g_1(t) and T(L,t) = g_2(t). The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q.
Our goal is to verify the solution of this problem using the method of manufactured solutions.
First, we assume an explicit form for the solution. Let’s consider the temperature distribution
where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T.
Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term.
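The same symbolic workflow can be mirrored outside COMSOL with SymPy. The assumed solution below is a hypothetical stand-in, chosen only because it matches the stated initial and boundary values of 500 K; the blog's actual manufactured solution is not reproduced here.

```python
import sympy as sp

x, t = sp.symbols('x t')
A_c, rho, C_p, k, L, tau = sp.symbols('A_c rho C_p k L tau', positive=True)

# Hypothetical manufactured solution:
# u = 500 K + 1 K * (t/tau) * (x/L) * (1 - x/L)
# It satisfies u(x, 0) = 500 and u(0, t) = u(L, t) = 500, matching the
# stated conditions g1 = g2 = 500 K.
u = 500 + (t / tau) * (x / L) * (1 - x / L)

# Substitute u into A_c*rho*C_p*u_t - d/dx(A_c*k*u_x) = Q to manufacture
# the source term; this mirrors the d() operator in the Variables node.
Q = sp.simplify(A_c * rho * C_p * sp.diff(u, t)
                - sp.diff(A_c * k * sp.diff(u, x), x))
print(Q)

# Auxiliary conditions implied by u:
print(u.subs(t, 0))                 # initial condition (uniform 500)
print(u.subs(x, 0), u.subs(x, L))   # boundary values (both 500)
```

Once Q and the auxiliary conditions are in hand, they are entered into the physics interface exactly as any other source term and conditions would be.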
In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator d(), one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below.
The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives.
We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a handcalculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine.
Next, we compute the initial and boundary conditions. The initial condition is the assumed solution evaluated at t = 0, which gives a uniform initial temperature of 500 K.
The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K.
Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the Heat Transfer in Solids physics interface.
Add initial values, boundary conditions, and sources derived from the assumed solution.
For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second is obtained using quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default.
The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right).
The MMS gives us the flexibility to check different parts of the code. In the example given above, for the purpose of simplicity, we have intentionally left many parts of the IBVP unchecked. In practice, every item in the equation should be checked in its most general form. For example, to check if the code accurately handles nonuniform cross-sectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties.
A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., n\cdot(A_ck \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_ck}{L}\frac{t}{\tau}\cdot 1\:\textrm{K}.
In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution where the derivative vanishes on the left end. For example, we can use
Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations.
Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium!
As shown in the plots above, the solutions obtained with linear and quadratic elements converge as the mesh size is reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which provides a quantitative check of the numerical procedure.
For example, for the stationary version of the problem, the standard finite element error estimate for the error measured in the m-order Sobolev norm is

\| u - u_h \|_m \leq C h^{p+1-m} \| u \|_{p+1},

where u and u_h are the exact and finite element solutions, h is the maximum element size, and p is the order of the approximation polynomials (shape functions). For m = 0, this gives the error estimate

\| u - u_h \|_0 \leq C h^{p+1},

where C is a mesh-independent constant.
Returning to the method of manufactured solutions, this implies that the solution with linear elements (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error against mesh size on a log-log plot, the slope should asymptotically approach 2. If it does not, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties. As the figures below show, the numerical solution converges at the theoretically expected rate.
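The slope check itself is easy to script. The sketch below estimates the observed convergence order from (mesh size, error) pairs; the data values are hypothetical stand-ins for numbers exported from a Derived Values table.

```python
import math

def observed_order(hs, errors):
    """Least-squares slope of log(error) vs. log(h): the observed
    convergence rate on a sequence of uniformly refined meshes."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys))
    den = sum((xi - xbar) ** 2 for xi in xs)
    return num / den

# Hypothetical (h, L2-error) pairs; with linear elements the slope
# should sit near the theoretical value of 2.
hs = [0.1, 0.05, 0.025, 0.0125]
errors = [2.1e-3, 5.3e-4, 1.33e-4, 3.3e-5]
print(f"observed order ~ {observed_order(hs, errors):.2f}")
```

Fitting all refinement levels at once smooths out preasymptotic noise better than comparing only the two finest meshes.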
Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2 norm (m = 0) for linear elements, consistent with the theoretical prediction.
While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where a priori error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior.
In case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution.
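To illustrate the nonlinear case, the SymPy workflow only changes in one place: the conductivity must be evaluated at the assumed solution before the source term is derived. Both the manufactured solution and the linear k(T) law below are hypothetical choices for illustration.

```python
import sympy as sp

x, t = sp.symbols('x t')
A_c, rho, C_p, k0, beta, L, tau = sp.symbols(
    'A_c rho C_p k0 beta L tau', positive=True)

# Hypothetical manufactured solution (same form as before); now the
# conductivity depends on temperature, so k must be evaluated AT the
# assumed solution before deriving the source term.
u = 500 + (t / tau) * (x / L) * (1 - x / L)
k = k0 * (1 + beta * (u - 500))  # assumed linear k(T), for illustration

Q = sp.simplify(A_c * rho * C_p * sp.diff(u, t)
                - sp.diff(A_c * k * sp.diff(u, x), x))
print(Q)

# Sanity check on uniqueness assumptions: k should remain positive over
# the range the solution visits; here u - 500 peaks at t/(4*tau) >= 0.
print(sp.simplify(k.subs(x, L / 2)))
```

Setting beta to zero recovers the constant-conductivity source term, which is a handy consistency check on the derivation.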
Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation.
Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary conditions, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check in the case of isotropic uniform thermal conductivity, in the case of temperature-dependent or anisotropic conductivity, more thought should be given when manufacturing the solution so as not to violate such assumptions.
When using the method of manufactured solutions, the solution exists by construction. In addition, uniqueness proofs are available for a much larger class of problems than those for which we have exact analytical solutions. Thus, the method gives us more room to work with than starting from source terms and initial and boundary conditions and looking for exact solutions.
The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us!
When modeling a manufacturing process, such as the heating of an object, it is possible for irreversible damage to occur due to a change in temperature. This may even be a desired step in the process. With the Previous Solution operator, we can model such damage in COMSOL Multiphysics. Here, we will look at the “baking off” of a thin coating on a wafer heated by a laser.
Let’s consider a wafer of silicon with a very thin layer of material coated on the surface. This thin film may have been introduced in a previous processing step, and we now want to quickly “bake off” this material by heating the wafer with a laser. The wafer is mounted on a spinning stage while the laser heat source traverses back and forth over the surface.
We will consider a layer of material that is very thin compared to the wafer thickness. We can thus assume that the film does not contribute to the thermal mass of the system, nor will it provide any additional conductive heat path. However, this coating will affect the surface emissivity. If the coating is undamaged, the emissivity is 0.8. Once the coating has baked off, the emissivity of that region of the wafer will change to 0.6. This will alter both the amount of heat absorbed from the laser heat source and the heat radiated from the wafer to the surroundings.
A laser beam traversing over a rotating wafer ablates a thin surface coating when the temperature is high enough.
We will not concern ourselves too greatly with the process by which the coating is removed from the wafer. Although the actual process may include phase change, melting, boiling, ablation, and chemical reactions, we are, in this case, dealing with a very thin layer of material. Thus, we can simply say that once the temperature of the wafer surface exceeds 60°C, the coating immediately disappears. Under the assumption of very fast dynamics of the material removal process relative to the heating of the wafer, this is a valid approach.
We will begin with the previously developed model of a rotating wafer exposed to a moving heat source and add one additional boundary equation to the existing model. Because the coating state travels with the material, we need to model this equation in a coordinate system that moves with the rotation of the wafer.
The equation we add will track the surface emissivity on the top boundary of the wafer. The Previous Solution operator is used since we simply want to change the surface emissivity once the temperature gets above the specified value and leave it otherwise unchanged. We have already introduced the use of the Previous Solution operator in a previous blog entry. We will now focus more specifically on modeling the removal of the film from the wafer surface.
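The irreversible update rule can be sketched in a few lines. The threshold, the two emissivity values, and the if(T > T_ablate, 0.6, previous value) logic follow the setup described in this post; the per-element temperature history below is invented for illustration.

```python
# Minimal sketch of the Previous Solution idea: the emissivity in each
# surface element is a state that can only switch irreversibly from the
# coated value (0.8) to the bare value (0.6) once the local temperature
# exceeds the ablation threshold.
T_ABLATE = 60.0 + 273.15        # ablation threshold, K
EPS_COATED, EPS_BARE = 0.8, 0.6

def update_emissivity(eps_prev, T):
    """emissivity = if(T > T_ablate, 0.6, previous value)."""
    return [EPS_BARE if (T_e > T_ABLATE or e == EPS_BARE) else EPS_COATED
            for e, T_e in zip(eps_prev, T)]

# Three elements over three time steps: element 1 crosses the threshold
# at step 2 and stays "damaged" even after it cools back down.
eps = [EPS_COATED] * 3
for T in ([300.0, 310.0, 300.0],
          [320.0, 340.0, 310.0],
          [310.0, 320.0, 305.0]):
    eps = update_emissivity(eps, T)
print(eps)  # -> [0.8, 0.6, 0.8]
```

The key point is that the new state depends on the previous state, which is exactly what the Previous Solution operator provides within the time stepper.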
The settings for the Boundary ODEs and DAEs interface. Note the shape function settings.
The domain settings and initial values settings for the Boundary ODEs and DAEs interface, which models the emissivity of the surface of the wafer.
The settings for the Boundary ODEs and DAEs interface are shown above. Note that a Constant Discontinuous Lagrange discretization is used to solve for the field “emissivity”, the surface emissivity. This discretization is equivalent to saying that the emissivity will have a constant value over each element and that the field can be discontinuous across different elements. We are assuming that the film is either present or not present, so the surface emissivity will have two discrete states. The initial value of the field variable is the undamaged value of the surface emissivity.
The settings for the Heat Flux boundary condition use the computed surface emissivity.
The settings for the Diffuse Surface boundary condition use the computed surface emissivity.
The computed surface emissivity is used in two places within the Heat Transfer in Solids interface, as shown above. The applied Heat Flux boundary condition and the radiation of ambient temperature via the Diffuse Surface boundary condition both reference the emissivity field.
Since the surface emissivity is constant across each element, a finite element mesh size of 0.3 mm is used to obtain a smoother representation of the damage field. Also, the relative solver tolerance is set to 1e-6.
The results of the simulation are depicted in the animation below. As the temperature rises, certain portions of the wafer surface rise above the ablation temperature and the surface emissivity changes. The process is complete once the entirety of the wafer surface has been heated above the desired temperature.
An animation of the temperature field of the laser heating the rotating wafer (left). The dark gray color indicates the damage zone (right).
We have demonstrated how to model an irreversible change in the state of a material. In this case, we have analyzed the removal of a thermodynamically negligible thin layer of material from the surface of a wafer and modified the resultant surface emissivity as a consequence. The technique outlined here for using the Previous Solution operator can also be used in many other cases. What comes to your mind?
If you are interested in downloading the model related to this article, it is available in our Application Gallery.