Whenever an alternating electric current (or a direct current, for that matter) is applied to living tissue, there will be heat generation and temperature rise due to Joule heating. The ability to target this heat to specific localized tissue areas is a key advantage of the radiofrequency tissue ablation technique.
In one of many medical applications, a cancerous tumor is a localized target. The temperature of the targeted area is raised to kill the cancer cells. Alternating current is used (rather than direct current) because, at a high enough frequency, it does not directly stimulate nerve cells and cause pain.
To understand how we can model this process, let’s examine the figures below, which show some of the key concepts of this technique.
A tumor within healthy tissue. Capillaries perfuse blood through the tissue and tumor.
When an undesirable tissue mass is identified, such as a tumor, a doctor can use either a monopolar or bipolar applicator to inject current into and around the tumor. The current comes from a generator and varies sinusoidally in time. Frequencies of 300 to 500 kHz are common, although the procedure can use much lower frequencies.
There are a wide variety of electrode configurations ranging from flat plates and single needles to a cluster of needles, depending on the desired shape of the heated domain and how the doctor will access the tissue. One common class of applicator is deployed through the circulatory system by using a long, flexible catheter and then extending a set of needles from the distal end into the tissue to be heated.
A monopolar applicator is made up of a needle and patch applicator, whereas a bipolar applicator consists of two needle electrodes. More than two applicators and other applicator configurations are also possible. By convention, one electrode is called the ground, or reference, electrode. The voltage applied at the other electrode is with respect to this ground.
A monopolar radiofrequency applicator and a patch electrode on the skin’s surface.
A bipolar applicator primarily heats the region between the electrodes.
An engineer designing one of these devices has a complicated problem to solve. The shape of the heated tissue depends on the shape and number of electrodes; which parts are insulated and which are not; and, ultimately, the thermal energy absorption distribution in the nearby tissue over time.
The sharp, pointed ends of the needle electrodes complicate the design process, since they lead to high current densities and thus uneven temperature rise along the needle. For the cancerous tumor application, the goal is to kill the undesirable tissue mass and leave the surrounding healthy tissue unharmed. For shrinking collagen, the goal is still to heat tissue, but to avoid any possibility of damaging cells. COMSOL Multiphysics simulation streamlines and shortens this process.
To properly model this procedure, we must build a model of the electric current flow through the tissue as well as the heat generation and temperature rise. Let’s explore these steps.
We begin by examining the typical material properties of both the applicator and living tissue and discuss how these materials behave at an operating frequency of 500 kHz. The table below shows the representative electrical conductivity, \sigma; relative permittivity, \epsilon_r; skin depth, \delta; and complex-valued conductivity, (\sigma+j\omega \epsilon_0 \epsilon_r), at 500 kHz.
Although the electrical conductivity and relative permittivity vary between different tissue types, for the purposes of this discussion, we will approximate the human body as having the properties of a weak saline solution. The actual properties of tissue do not vary by much more than one order of magnitude from these values, while the conductivities of the electrode and insulator are over five orders of magnitude higher and lower, respectively.
| Material | Electrical Conductivity (S/m) | Relative Permittivity | Skin Depth at 500 kHz (m) | Complex Conductivity at 500 kHz (S/m) |
| --- | --- | --- | --- | --- |
| Metal Electrode | 10^{6} | 1 | ~10^{-4} | 10^{6} + j 4 x 10^{-6} |
| Polymer Insulator | 10^{-12} | 2 | ~10^{10} | 10^{-12} + j 9 x 10^{-6} |
| "Average" Human Tissue | 0.5 | 65 | ~1 | 0.5 + j 0.0003 |
We compute the skin depth to decide if we need to compute the magnetic fields and any heating due to induced currents. At 500 kHz, the electrical skin depth of the human body is on the order of one meter, while the heated regions have a typical size on the order of a centimeter. Hence, we can make the approximation that heating due to induced currents in the tissue is negligible and need not be calculated. Note that this approximation will not be valid if some small pieces of metal exist within the tissue, such as a stent within a nearby blood vessel.
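As a quick numerical check on these order-of-magnitude arguments, the skin depth and free-space wavelength can be evaluated directly. This is a standalone sketch using the representative conductivity values from the table above:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m
C0 = 3e8              # speed of light, m/s

def skin_depth(sigma, f, mu_r=1.0):
    # classical skin depth: delta = sqrt(2 / (omega * mu * sigma))
    omega = 2 * math.pi * f
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

f = 500e3
delta_tissue = skin_depth(0.5, f)  # ~1 m for "average" tissue
delta_metal = skin_depth(1e6, f)   # well under a millimeter for the electrode
wavelength = C0 / f                # 600 m, far larger than the device

# A ~1 cm heated region is tiny compared to delta_tissue, so induced
# (eddy) currents in the tissue can safely be neglected.
```

Since the centimeter-scale heated region is a hundred times smaller than the tissue skin depth, the magnetostatic contribution to heating can indeed be dropped.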
We can also see from the magnitude of the complex conductivity in the above table that the electrodes are essentially perfect conductors when compared to tissue. Similarly, the polymer insulators can be well approximated as perfect insulators when compared to human tissue.
This information lets us choose the form of our governing equation. Under the assumption that magnetic fields and induction currents are negligible, and that we are operating at a constant frequency, we can solve the frequency-domain form of the electric currents equation. Further assuming that the human body itself does not generate any significant currents, the governing equation is:

\nabla \cdot \left( (\sigma+j\omega \epsilon_0 \epsilon_r) \nabla V \right) = 0

which solves for the voltage field, V, throughout the modeling domain. The electric field is computed from the gradient of the voltage: \mathbf{E} = -\nabla V. The total current density is \mathbf{J} = (\sigma+j\omega \epsilon_0 \epsilon_r) \mathbf{E} and the cycle-averaged Joule heating is Q = \frac{1}{2} \Re (\mathbf{J}^* \cdot \mathbf{E}).
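For a quick check of the cycle-averaged heating expression, here is a small sketch; the field magnitude |E| is purely an illustrative test value:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cycle_averaged_heating(sigma, eps_r, f, E_mag):
    # J = (sigma + j*omega*eps0*eps_r) * E;  Q = 0.5 * Re(J* . E)
    omega = 2 * math.pi * f
    sigma_c = complex(sigma, omega * EPS0 * eps_r)
    J = sigma_c * E_mag  # phasor current density, A/m^2
    return 0.5 * (J.conjugate() * E_mag).real

# tissue properties at 500 kHz; |E| = 1e4 V/m is an arbitrary test value
Q = cycle_averaged_heating(0.5, 65, 500e3, 1e4)
# the displacement-current part drops out of the real part, so
# Q reduces to 0.5 * sigma * |E|^2 = 2.5e7 W/m^3 here
```

As the comment notes, only the conduction part of the complex conductivity contributes to the time-averaged heating.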
Since the conductors are essentially perfectly conducting compared to the tissue, we can omit these domains from our electrical model. That is, we can assume that all surfaces of the metal electrodes are equipotential. This is reasonable if the equivalent free-space wavelength (\lambda = c_0/f = 600\ m) is much larger than the model size. When using the AC/DC Module, we can use the Terminal boundary condition to fix the voltage on all surfaces of the electrode. The Terminal boundary condition can specify the applied voltage, total current, or total power fed into the boundaries.
It is reasonable to ask why the conductor is omitted, for there is indeed some finite heat loss within the electrode itself. The heating within the electrode, however, is many orders of magnitude lower than in the surrounding tissue. Although the currents in the conductor can be quite high, the electric field (the variation in the voltage along the electrode) is quite small, hence the heating is negligible.
Similarly, since the insulators are essentially perfect, these domains can also be eliminated from the electrical model. In the insulators, the electric fields may be quite high, but the current is essentially zero, which again means negligible heating. The Electric Insulation boundary condition, \mathbf{n} \cdot \mathbf{J} = 0, can be applied on the boundaries of the insulators and implies that no current (neither conduction nor displacement currents) passes through these boundaries. There is one caveat to this: If the electrodes are completely enclosed within the insulators, then there will be significant displacement currents in the insulators and these domains should be included in the model.
On the exposed surface of the skin, the Electric Insulation boundary condition is also appropriate. However, if there is an external electrode patch applied to the skin’s surface, then current can pass through the skin to the electrode. The conductivity of skin is lower than that of the underlying tissue, and this should be modeled. However, we may not want to model the skin explicitly as a separate domain. In such cases, the Distributed Impedance boundary condition applies the condition \mathbf{n} \cdot \mathbf{J} = Z_s^{-1}(V-V_0), where V_0 is the external electrode voltage and Z_s is the equivalent computed impedance of the skin.
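The equivalent impedance of a thin resistive layer can be sketched in the same spirit. The layer properties below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def skin_layer_impedance(thickness, sigma, eps_r, f):
    # thin-layer equivalent impedance (ohm*m^2):
    # Z_s = t / (sigma + j*omega*eps0*eps_r)
    omega = 2 * math.pi * f
    return thickness / complex(sigma, omega * EPS0 * eps_r)

# hypothetical skin layer: 2 mm thick, sigma = 0.1 S/m, eps_r = 100
Zs = skin_layer_impedance(2e-3, 0.1, 100, 500e3)

# normal current density for a 10 V drop between tissue and patch
# electrode, per n.J = (V - V0)/Z_s
Jn = 10.0 / Zs
```

At these values the layer is almost purely resistive, so |Z_s| is close to the DC estimate t/\sigma = 0.02 ohm·m².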
A schematic of such a model is shown below, with representative material properties and boundary conditions. Now that the electrical model is addressed, let’s move on to the thermal model.
A schematic of an electrical model of radiofrequency tissue ablation. Representative material properties are shown on the left. The modeling domain and governing equations are shown to the right.
The objective of the thermal model is quite straightforward: to compute the rise in tissue temperature over time due to the electrical heating and predict the size of the ablated region. The governing equation for temperature, T, is the Pennes bioheat equation:

\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + Q + \rho_b C_{p,b} \omega_b (T_b - T) + Q_{met}

where \rho, C_p, and k are the density, specific heat, and thermal conductivity of the tissue; Q is the resistive heating computed above; and \rho_b and C_{p,b} are the density and specific heat of the blood perfusing through the tissue at a rate of \omega_b. T_b is the arterial blood temperature and Q_{met} is the metabolic heat rate of the tissue itself. This equation is implemented within the Heat Transfer Module. If the last two terms are omitted, then the above equation reduces to the standard transient heat transfer equation.
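To get a feel for the perfusion term, a zero-dimensional (spatially uniform) version of the bioheat balance can be time stepped; the conduction term then vanishes, and all numbers below are illustrative rather than taken from the post:

```python
rho, Cp = 1050.0, 3600.0      # tissue density (kg/m^3), specific heat (J/(kg*K))
rho_b, Cp_b = 1000.0, 4200.0  # blood density and specific heat
omega_b = 0.004               # perfusion rate, 1/s
Tb, Q_met = 37.0, 400.0       # arterial temperature (degC), metabolic heat (W/m^3)
Q = 5.0e4                     # applied resistive heating, W/m^3

T, dt = 37.0, 0.1
for _ in range(int(600 / dt)):  # ten minutes of forward-Euler time stepping
    dTdt = (Q + rho_b * Cp_b * omega_b * (Tb - T) + Q_met) / (rho * Cp)
    T += dt * dTdt

# perfusion caps the temperature rise: the steady state is
# Tb + (Q + Q_met) / (rho_b * Cp_b * omega_b)
T_ss = Tb + (Q + Q_met) / (rho_b * Cp_b * omega_b)
```

The blood perfusion term acts as a volumetric heat sink, which is why a given applied power produces a bounded, not unbounded, temperature rise.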
It is also necessary to specify boundary conditions on the exterior of the modeling domain. The most conservative condition would be the Thermal Insulation boundary condition, which implies that the body is perfectly insulated. This would lead to the fastest rise in temperature over time. A more physically realistic boundary condition would be the Convective Heat Flux condition:

-\mathbf{n} \cdot (-k \nabla T) = h (T_{ext} - T)

with a heat transfer coefficient of h = 5{-}10\ W/(m^2 \cdot K) and an external temperature of T_{ext}=20{-}25\ ^{\circ}C. This reasonably approximates the free convective cooling from uncovered skin to ambient conditions.
Along with the change in temperature, we also want to compute the tissue damage. The Heat Transfer Module offers two different methods for evaluating this: a temperature threshold analysis, in which tissue is considered damaged once it spends a specified time above a critical temperature, and an energy absorption analysis based on an Arrhenius-type damage integral.
Along with these predefined damage integrals, it is also possible to implement a user-defined equation for damage analysis via the equation-based modeling capabilities of COMSOL Multiphysics.
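The Arrhenius-type damage integral is easy to sketch. The kinetic parameters A and dE below are representative liver-tissue values from the literature, not numbers from this post:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def necrotic_fraction(temps_K, dt, A=7.39e39, dE=2.577e5):
    # damage integral alpha = integral of A*exp(-dE/(R*T)) dt over the
    # temperature history; damaged tissue fraction = 1 - exp(-alpha)
    alpha = sum(A * math.exp(-dE / (R * T)) * dt for T in temps_K)
    return 1.0 - math.exp(-alpha)

# one minute at 60 degC kills essentially all tissue in this model,
# while one minute at body temperature does almost nothing
hot = necrotic_fraction([333.15] * 600, 0.1)
body = necrotic_fraction([310.15] * 600, 0.1)
```

The strong exponential temperature dependence is the point: a few degrees of extra temperature rise shortens the required exposure time dramatically.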
Representative radiofrequency ablation results from a 2D axisymmetric model. Two insulated applicators are inserted into a tumor within the body to heat and kill the diseased tissue. The plotted results include the voltage field (top left), resistive heating (bottom left), and the temperature and size of the completely damaged tissue at two different times (right).
We have now developed a model that is a combination of a frequency-domain electromagnetics problem and a transient thermal problem. COMSOL Multiphysics solves this coupled problem using a so-called frequency-transient study type. The frequency-domain problem is a linear stationary equation, since it is reasonable to assume that the electrical properties are linear with respect to electric field strength over one period of oscillation. Thus, COMSOL Multiphysics first solves for the voltage field using a stationary solver and then computes the resistive heating. This resistive heating term is then passed over to the transient thermal problem, which is solved with a time-dependent solver that computes the change in temperature over time.
The frequency-transient study type automatically accounts for material properties that change with temperature and the tissue damage fraction. If the temperature rise or tissue damage causes the material properties to change sufficiently to alter the magnitude of the resistive heating, then the electrical problem is automatically recomputed with updated material properties. This can also be described as a segregated approach to solving a multiphysics problem.
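The segregated logic can be caricatured in a few lines: march the thermal problem forward, and repeat the (here, lumped and entirely schematic) electrical solve only when the temperature has drifted enough to change the conductivity appreciably. Every number below is invented for illustration:

```python
def heating_power(T, P0=2.0, alpha=0.02, T0=37.0):
    # pretend the conductivity, and hence the heating at a fixed applied
    # voltage, rises roughly 2% per kelvin of temperature rise
    return P0 * (1.0 + alpha * (T - T0))

C, h, T_amb = 50.0, 0.5, 37.0  # lumped heat capacity, loss coefficient, ambient
T, dt = 37.0, 0.1
T_last_solve = T
P = heating_power(T)           # initial "electrical" solve
n_solves = 1

for _ in range(int(300 / dt)):
    if abs(T - T_last_solve) > 0.5:  # properties drifted: redo electrical step
        P = heating_power(T)
        T_last_solve = T
        n_solves += 1
    T += dt * (P - h * (T - T_amb)) / C  # transient thermal step
```

The electrical problem is solved only a handful of times even though the thermal problem takes thousands of time steps, which is exactly the economy the segregated approach buys.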
In such thermal ablation processes, it is also common to vary the magnitude of the applied electrical heating to pulse the load on and off at known times. In such situations, the Explicit Events interface can be used, as described in our earlier blog post on modeling periodic heat loads. If you instead want to model the heat load changing as a function of the solution itself, then the Implicit Events interface can be used to implement feedback, as described in our earlier blog post on implementing a thermostatic controller.
If you are interested in studying radiofrequency tissue ablation, there are several other resources worth exploring. If your electrodes have sharp edges and you are concerned about localized heating near these edges, consider adding fillets to your model, since a sharp edge leads to a locally inaccurate result for the heating. Keep in mind, however, that the total global heating will still be quite accurate even with a sharp edge, and since the temperature field is relatively insensitive to such local variations in the heat source, the filleting of sharp edges is not always necessary.
If there are any relatively thin layers of materials that have relatively higher or lower electrical conductivity compared to their surroundings, consider using the Electric Shielding or Contact Impedance boundary conditions for the electrical problem. There are similar boundary conditions available for thin layers in thermal models as well.
If you are interested in modeling at much higher frequencies, such as in the microwave regime, then you need to consider an electromagnetic wave propagating through the tissue. In such cases, look to the RF Module and the Conical Dielectric Probe for Skin Cancer Diagnosis example in the Application Gallery. At even higher frequencies in the optical regime, a range of modeling approaches are possible, as described in our blog post on modeling laser-material interactions.
The heat source for your problem need not even be electrical. High-intensity focused ultrasound is another ablation technique and can be modeled, as described in the Focused Ultrasound Induced Heating in Tissue Phantom tutorial in the Application Gallery.
In closing, we have shown that COMSOL Multiphysics, in conjunction with the AC/DC Module and Heat Transfer Module, gives you the capability and flexibility to model radiofrequency tissue ablation procedures.
If you are interested in using COMSOL Multiphysics for this type of modeling, or have any other questions, please contact us.
As we have already mentioned on the blog, fractals have some interesting engineering applications. The Koch snowflake is a fractal that is notable due to its very simple iterative construction process:

1. Start with an equilateral triangle.
2. Divide every edge into three equal segments.
3. Draw an outward-pointing equilateral triangle on the middle segment of each edge, then remove that middle segment.
4. Repeat steps 2 and 3 on every edge of the resulting shape.
This procedure is illustrated in the figure below for the first four iterations of the snowflake.
The first four iterations of a Koch snowflake. Image by Wxs — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
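The bookkeeping behind these iterations is simple: every iteration multiplies the edge count by four and divides the edge length by three. A short sketch:

```python
def koch_counts(k, side=1.0):
    # edge count, edge length, and perimeter after k iterations
    edges = 3 * 4**k
    length = side / 3**k
    return edges, length, edges * length  # perimeter grows as 3*(4/3)^k
```

So the perimeter diverges as k grows, even though the enclosed area stays bounded, which is part of what makes the snowflake interesting.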
Now that we know what algorithm to use, let’s look at how to create such a structure with the Application Builder and COMSOL Multiphysics. We begin with a new file and create a 2D geometry part within the Global Definitions node. This part takes five inputs: the side length of an equilateral triangle; the x- and y-locations of the midpoint of the base; and the components of the normal vector, pointing from the midpoint of the base to the far vertex, as shown in the images below.
The five parameters that are used to control the size, position, and orientation of an equilateral triangle.
The input parameters for the geometry part are defined.
A polygon primitive is used to define an equilateral triangle.
The part is rotated about the center of the bottom edge.
The part is moved from the origin.
Now that we have defined the geometry part, we can call a single instance of the part within the Geometry branch. This single triangle is equivalent to the zeroth iteration of the Koch snowflake, and we are now ready to use the Application Builder to create more complex snowflakes.
The app has a very simple user interface. It has only two features with which the user can interact: a Slider (labeled 1 in the image below) that specifies the number of iterations to take to produce the snowflake, and a Button (labeled 2) that, when clicked, creates and displays the resultant geometry. There is also a Text Label (label 3) and a Data Display (label 4) that show the number of iterations that are taken, as well as a Graphics window (label 5) in which the resultant geometry is plotted.
The app has a single form with five features.
There are two Declarations within the app. The first defines an integer value, named Iterations, which is zero by default but changed by the user. The second is a 1D array of double-precision numbers, named Center, with a single element whose value is 0.5. It is used to find the centerpoint of each edge and is never changed.
The settings for the two declarations.
The slider feature within the user interface controls the value of the integer Iterations. The screenshot below shows the slider’s settings; the values are specified to be integers between 0 and 5. The same source is selected for the Data Display feature to display the number of iterations to take. We limit the user to five iterations, as we use an algorithm that is not very efficient but is quite simple to implement.
The settings for the slider feature.
Next, we can look at the settings for our button, as shown in the screenshot below. Two commands run when the button is pressed. First, the method CreateSnowFlake is called. Second, the resultant geometry is plotted in the graphics window.
The button settings.
We have now looked over the user interface of our app and can see that all of the geometry creation for the snowflake must happen within the method. Let’s take a look at the code of this method, with line numbers added on the left and text strings highlighted in red:
1   model.geom("geom1").feature().clear();
2   model.geom("geom1").create("pi1", "PartInstance");
3   model.geom("geom1").run("fin");
4   for (int iter = 1; iter <= Iterations; iter++) {
5     String[] UnionList = new String[model.geom("geom1").getNEdges()+1];
6     UnionList[0] = "pi" + iter;
7     for (int edge = 1; edge <= model.geom("geom1").getNEdges(); edge++) {
8       String newPartInstance = "pi" + iter + edge;
9       model.geom("geom1").create(newPartInstance, "PartInstance").set("part", "part1");
10      with(model.geom("geom1").feature(newPartInstance));
11        setEntry("inputexpr", "Length", toString(Math.pow(1.0/3.0, iter)));
12        setEntry("inputexpr", "px", model.geom("geom1").edgeX(edge, Center)[0][0]);
13        setEntry("inputexpr", "py", model.geom("geom1").edgeX(edge, Center)[0][1]);
14        setEntry("inputexpr", "nx", model.geom("geom1").edgeNormal(edge, Center)[0][0]);
15        setEntry("inputexpr", "ny", model.geom("geom1").edgeNormal(edge, Center)[0][1]);
16      endwith();
17      UnionList[edge] = newPartInstance;
18    }
19    model.geom("geom1").create("pi"+(iter+1), "Union").selection("input").set(UnionList);
20    model.geom("geom1").feature("pi"+(iter+1)).set("intbnd", "off");
21    model.geom("geom1").run("fin");
22  }
Let’s go over what each line of code does:
- Line 1: Clears any existing features from the geometry sequence.
- Line 2: Creates the first instance of the triangle part, with its default inputs, named pi1.
- Line 3: Builds the geometry so that its edges can be queried.
- Line 4: The outer loop over the snowflake iterations, with the integer declaration Iterations as the stop condition.
- Line 5: Declares an array of strings, UnionList. Each element of the array contains the tag identifier of a different geometry object. The length of this array equals the number of edges in the last iteration, plus one.
- Line 6: Fills the first element of the UnionList array. This is the tag identifier of the result of the previous iteration. Keep in mind that the zeroth iteration is already created on lines 1–3. The integer value of iter is automatically converted to a string and appended to the string "pi".
- Line 7: The inner loop over every edge of the current geometry.
- Line 8: Defines a unique tag for the new part instance. The integers iter and edge are each sequentially appended to the string pi, the part instance tag identifier.
- Line 9: Creates a new instance of the triangle part.
- Line 10: Opens a with()/endwith() statement, which shortens the following lines.
- Line 11: Sets the side length of the new triangle. The toString() function is needed to cast the floating point value into a string.
- Lines 12–13: Set the location of the midpoint of the base to the midpoint of the current edge. The edgeX method is documented in the COMSOL Programming Reference Manual. Recall that Center is set to be 0.5.
- Lines 14–15: Set the components of the normal vector using the edgeNormal method, which is documented in the COMSOL Programming Reference Manual.
- Line 16: Closes the with()/endwith() statement.
- Line 17: Stores the tag of the new part instance in the UnionList array.
- Lines 18 and 22: Close the inner and outer loops.
- Line 19: Creates a Boolean union of all objects in UnionList, named piN, where N is the next iteration number. The parentheses are needed around (iter+1) such that the incremented value of iter is converted to a string.
- Line 20: Turns off the interior boundaries of the union.
- Line 21: Rebuilds the geometry.

And with that, we have covered everything that goes into our app. Let’s look at some results!
Our simple Koch snowflake application.
We could expand our app to write out the geometry to a file, or even to perform additional analyses directly. For example, we could design a fractal antenna. If you’re interested in antenna design, take a look at our example of a Sierpinski fractal antenna, or even make one from scratch.
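As an aside, the same edge-subdivision algorithm is easy to prototype outside of COMSOL. A minimal Python sketch (not the app’s Java method) that generates the snowflake vertices directly:

```python
import math

def koch_snowflake(iterations, side=1.0):
    # counterclockwise equilateral triangle as the zeroth iteration
    h = side * math.sqrt(3) / 2
    pts = [(0.0, 0.0), (side, 0.0), (side / 2, h)]
    cos60, sin60 = 0.5, math.sqrt(3) / 2
    for _ in range(iterations):
        new_pts = []
        n = len(pts)
        for i in range(n):
            (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            ax, ay = x1 + dx, y1 + dy          # one-third point
            bx, by = x1 + 2 * dx, y1 + 2 * dy  # two-thirds point
            # rotate (b - a) by -60 degrees to place the new tip outward
            tx = ax + dx * cos60 + dy * sin60
            ty = ay - dx * sin60 + dy * cos60
            new_pts += [(x1, y1), (ax, ay), (tx, ty), (bx, by)]
        pts = new_pts
    return pts
```

Each iteration replaces every edge with four edges of one-third the length, so the vertex count grows as 3·4^k, matching the part instances the app creates.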
If you want to build this app yourself and you haven’t started using the Application Builder yet, you will find the following resources helpful:
Once you’ve gone through that material, you’ll see how this app can be extended to change the snowflake size, export the created geometry, evaluate the area and perimeter, and much more.
What kind of app would you like to create with COMSOL Multiphysics? Contact us for help.
As engineers, researchers, and scientists, we are always striving to come up with improved designs. Optimization is the idea of altering model inputs, such as part dimensions and material properties, with the objective of improving some metric, while also considering a set of constraints. The Optimization Module in COMSOL Multiphysics is a useful tool for addressing such problems.
Dimensional optimization is one of the more common optimization techniques. The approach involves changing CAD dimensions directly to minimize mass, as illustrated in our Multistudy Optimization of a Bracket tutorial. In the bracket example, we use so-called gradient-free techniques to adjust dimensions and consider constraints on the relationships between the dimensions, the peak stress, and the lowest natural eigenfrequency. These techniques are very flexible in the type of objective functions and constraints that can be addressed. However, one drawback to these techniques is having to remesh the part repeatedly to numerically approximate the sensitivities of the objective function and constraints with respect to the design variables.
As we have previously discussed on the blog, it is also possible to analytically compute the design sensitivities due to geometric changes when using the Deformed Geometry interface. Further, the gradient-based solvers can use the sensitivities to optimize the dimensions of a part without remeshing — an element that we highlighted in the design of a capacitor. It is helpful to review the two blog posts referenced here to understand the functionality that we will use today.
Shape optimization is an extension of the previously developed concepts, and it considers not just straightforward dimensional changes, but general changes in shape as well. The shape of the structure is controlled via a set of design parameters that use a set of basis functions, which can describe quite arbitrary shapes. Let’s take a look at an example.
We begin with a classical shape optimization problem: adjusting the thickness of a cantilevered beam to minimize the mass, while maintaining a constraint on the peak deflection of the free end. The beam of initially uniform thickness has a nonuniform load distributed over its top surface, as shown in the diagram below.
A cantilevered beam with a nonuniform load applied. Point A should not deflect more than a specified value. The mesh is also shown.
First, we want to choose our design variables. Both the length of the beam and the thickness at the cantilevered end are fixed. What we can vary is the thickness of the beam along the length. It is somewhat simpler to work with the change in the thickness from the initial configuration, so we introduce a function, DT(X), which is initially zero along the length.
The optimization problem studies a change in the thickness of the beam.
Here, we choose to represent the change in thickness via a set of Bernstein polynomials of the fourth order:

DT(\bar X) = T_0 \sum_{i=0}^{4} C_i \binom{4}{i} \bar X^i (1-\bar X)^{4-i}

expressed in terms of the normalized dimension \bar X = X/L_0. The function is scaled by the initial thickness, T_0, such that the polynomial coefficients have an order of magnitude near unity.
Since the thickness of the beam at X=0 is specified, DT(0)=0, fixing C_0=0 so that the term can be omitted. At the far end, we add the constraint that the beam cannot become too thin: C_4<0.9.
In the intermediate region, we would also like to add some constraints to further limit the design space. We could add the constraint that 0 < DT(\bar X) < 0.9\,T_0. This constraint, however, has a drawback: It would allow the thickness of the beam to oscillate, and from first principles, we know that this is not reasonable, since there is no advantage to having the thickness of the beam increase along the length. We can instead add a constraint on the derivative: DT^\prime(\bar X) > 0. This forces the thickness of the beam to change monotonically along the length and has the added advantage of naturally satisfying the constraint 0 < DT(\bar X) < 0.9\,T_0.
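The expansion and its endpoint behavior are easy to check numerically. This sketch assumes the fourth-order Bernstein form given above, with C_0 fixed to zero:

```python
from math import comb

def bernstein(i, n, x):
    # Bernstein basis polynomial B_{i,n}(x) = C(n,i) * x^i * (1-x)^(n-i)
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def delta_thickness(xbar, C, T0=1.0):
    # DT(xbar) = T0 * sum_i C_i * B_{i,4}(xbar), with C_0 fixed to zero
    coeffs = [0.0] + list(C)  # C = [C1, C2, C3, C4]
    return T0 * sum(c * bernstein(i, 4, xbar) for i, c in enumerate(coeffs))
```

Since the basis forms a partition of unity, DT(0) = 0 automatically and DT(1) = T_0 C_4, which is exactly why the bound C_4 < 0.9 keeps the tip thickness positive.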
There is one more constraint to consider: the displacement of the point at the end of the beam. We want the magnitude of the displacement of point A to be less than some specified value, u_{max}. With such information, we now have a complete optimization problem:

\min_{C_1, \dots, C_4} \frac{M}{M_0} \quad \textrm{such that} \quad \frac{|\mathbf{u}_A|}{u_{max}} \le 1, \quad DT^\prime(\bar X) \ge 0, \quad C_4 \le 0.9
Here, the normalization of the objective function with respect to the initial mass of the beam, M_0, is done to scale the objective function so that it is of order unity. Similarly, the magnitude of the displacement of the beam tip, \mathbf{u}_A, is normalized with respect to the peak permissible displacement, u_{max}. The normalized displacement should be less than one. Let’s now look at implementing such a problem in COMSOL Multiphysics using the Optimization Module.
We can begin with our initial design, simply a beam of fixed length and uniform thickness. The design is cantilevered at one end, with a nonuniform load across the top face that varies as \bar X^4(1-\bar X). We want to first introduce the change in the thickness function. The polynomial function described earlier is the variable DT, as shown in the screenshot below. The expression Xg refers to the x-dimension of the original, undeformed geometry. The derivative of this function, with respect to the normalized x-direction, is the variable dDTdX. Two Global Parameters, L0 and T0, define the length and maximum thickness.
A screenshot showing the change in the thickness function and its derivative.
The change in the thickness variable is used within the Deformed Geometry interface to define how the entire volume of the beam is altered with a change in thickness. Since it is only the thickness that changes, a simple linear mapping can be used, as illustrated below.
The displacement within the beam is completely prescribed.
We can now set up the optimization problem via the Optimization interface. The interface provides an easy way of setting up more complicated optimization problems with several constraints. The relevant settings are shown in the screenshots below, starting with the objective function. The Integral Objective feature integrates the material density over the modeling domain and normalizes with respect to the initial part mass.
The optimization objective is to minimize the mass.
The settings for the Global Control Variables feature are shown below. The four variables, C1, C2, C3, and C4, have an initial value of zero, which is equivalent to the initial beam shape. The constraint on C4 is imposed as an upper bound and the scaling of all variables is unity.
The definitions of the control variables, their bounds, and scaling.
Next, we apply the Pointwise Inequality Constraint feature to the bottom boundary of the domain. This feature enforces that the derivative of the displacement function remains positive at every point, thereby ensuring a monotonically increasing function.
The constraint on the derivative along the length of the beam is enforced via a pointwise inequality constraint.
Finally, the peak displacement of the point at the far end of the beam is constrained so that it is below a maximum specified value. This value is set via the Point Sum Inequality Constraint feature.
The implementation of the constraint on the normalized peak displacement.
Our optimization problem is now almost entirely set up. The only remaining step is to add an Optimization feature to the study sequence and to select the gradient-based SNOPT solver, which proves to be the fastest approach to our problem. All other settings can be left at their default values. The objective function and constraints are automatically taken from the Optimization interface.
The relevant optimization solver settings.
The results are depicted in the image below. The optimal shape within this basis has been identified. The displacement at the tip is at its maximum value, with the thickness monotonically changing along the length. Due to the expected deformation of the geometry, a mapped mesh was used.
The optimized shape of the beam, which minimizes mass for the applied nonuniform load and constraints. The displacement field is plotted along with the applied load distribution and the mesh.
We may ask ourselves how we know that the above structure is truly optimized. There is always the urge to perform a mesh refinement study, trying out finer and finer meshes to see how the solution converges. It is also reasonable to study convergence with respect to basis functions. We can use a higher-order Bernstein basis function and compare the results. This, however, can lead to a problem known as Runge’s phenomenon, along with slow convergence.
We can circumvent such issues by subdividing the original interval into multiple subintervals, using different lower-order shape functions within each interval (a piecewise polynomial). Other basis functions beyond the Bernstein basis can also be applied, such as the Chebyshev polynomials and the Fourier basis. The Optimizing the Shape of a Horn tutorial, available in our Application Gallery, features an example of the latter instance.
The cases discussed here include quite simple deformations. When considering more complicated deformations, you will need to put more effort into defining the deformation mapping. For very complicated deformations, it is also useful to add helper equations in order to compute the deformation.
If you have any questions about using these shape optimization techniques or are interested in adding the Optimization Module to your suite of modeling tools, please contact us.
Let’s begin by referring back to a previous blog post on the computation of design sensitivities that shows how we can use the Deformed Geometry interface and the Sensitivity study to analytically evaluate the design sensitivities of a parallel plate capacitor. For that problem, we computed the change in the capacitance with respect to geometric and material property variations. We also computed the design sensitivities to geometric changes without altering the geometry or performing any remeshing. We now want to use that same framework to change the geometry of our capacitor with the objective of minimizing a particular quantity by using the functionality of the Optimization Module.
We start with a simple extension to the previous example: a parallel plate capacitor of side length L=1\ m with two dielectrics, \epsilon_{r1}=2, \epsilon_{r2}=4, each of the same initial thickness, T_0=0.1\ m. Further, we will neglect all fringing fields. This lets us model only the region between two parallel plates so that our computational model looks just like the figure shown below.
Schematic of a parallel plate capacitor with two dielectrics between the plates. Fringing fields are ignored.
This model can be built by sketching two geometric blocks of dimensions as described above. The Electrostatics physics interface allows us to apply a voltage and a Ground condition at the top and bottom faces as well as apply material dielectric properties. It is possible to compute the capacitance by integrating the electric energy density, as described in the previously mentioned blog post, and get a value of C_{computed}=118\ pF.
Now, let’s suppose that we want to design a 100 pF capacitor by changing the thicknesses of the two layers, without altering the overall dimensions of the device. This can be posed as an optimization problem:

\min_{dT} \left( \frac{C_{computed}}{100\ pF} - 1 \right)^2 \quad \text{such that} \quad -T_0 < dT < T_0
That is, we want to get the capacitance to be as close to 100 pF as possible by varying the change in the dielectric thicknesses within limits such that neither dielectric is of zero thickness. The design parameter dT is the change in the thicknesses of the two layers, as shown above. The objective function itself is formulated such that its absolute magnitude is on the order of unity. For numerical reasons, this form is preferred over (C_{computed}-100\ pF)^2 or the absolute value function, |C_{computed}-100\ pF|.
We can begin by defining the variation of the thickness of the dielectric layers using the Deformed Geometry interface. The Deformed Geometry interface is necessary because we want to compute the analytic sensitivities without having to remesh the geometry as we change the dimensions. Since we will be changing the sizes of the two dielectrics, we want to define these deformations as completely as possible. We will do this with a Prescribed Deformation domain feature, as shown in the screenshot below.
The capacitor itself is originally sketched such that it is centered at the origin, so the original, undeformed part has a coordinate system: (Xg,Yg,Zg). For this simple Cartesian geometry, we can use this coordinate system to directly define the deformation as the thicknesses of the dielectric layers are changed. The deformations of the bottom and top layer are dT*(1+Zg/T0) and dT*(1-Zg/T0), respectively, where dT and T0 are Global Parameters.
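As a quick sketch of how these expressions behave (assuming the bottom layer spans Zg from -T0 to 0 and the top layer spans Zg from 0 to T0), the deformation vanishes at the fixed plates and equals dT at the interface between the dielectrics:

```python
# Sketch of the prescribed Z-deformation of the two dielectric layers.
# Assumed coordinates: bottom layer Zg in [-T0, 0], top layer Zg in [0, T0].
T0 = 0.1    # original layer thickness, m
dT = 0.03   # an arbitrary test value of the design parameter, m

def dz_bottom(Zg):
    return dT * (1 + Zg / T0)   # 0 at the bottom plate, dT at the interface

def dz_top(Zg):
    return dT * (1 - Zg / T0)   # dT at the interface, 0 at the top plate

print(dz_bottom(-T0), dz_bottom(0.0))   # -> 0.0 0.03
print(dz_top(0.0), dz_top(T0))          # -> 0.03 0.0
```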
The change in the thicknesses of the dielectric layers is controlled with a Prescribed Deformation feature.
Next, let’s look at the optimization user interface. For this simple problem, we can just add an Optimization feature to our study sequence, as shown in the screenshot below.
This minimization problem statement and scaling can be implemented entirely within the Optimization study node, as shown in the screenshot below. The relevant settings are the Objective Function, which is the expression (C_computed/100[pF]-1)^2, and the Control Parameter, dT, which has an initial value of 0[m]. The upper and lower limits are specified to prevent zero, or negative, thicknesses. Lastly, we apply a scaling to dT, the design parameter, based upon the original thickness, T0, such that the optimized value will have an order of magnitude near unity.
The optimization solver settings.
The SNOPT method is used to solve the optimization problem. Both the SNOPT and MMA methods use the analytically computed sensitivities, but SNOPT converges the fastest to within the default tolerance of 1E-6. The resultant device capacitance is 100 pF and the thicknesses of the dielectrics are D_{1} = 0.1542 m and D_{2} = 0.0458 m. The voltage fields in the original model and the optimized structure are shown below, along with the finite element mesh. Observe that the finite element mesh is stretched and compressed, but that no remeshing has occurred.
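The reported optimum can also be checked by hand. Keeping the total thickness fixed at 0.2 m and solving the series-capacitor formula (fringing neglected, an idealization consistent with the model) for the thickness that yields 100 pF reproduces the same values:

```python
# Hand-check of the optimized design: with T1 + T2 fixed at 0.2 m, solve
# eps0*A/(T1/eps_r1 + T2/eps_r2) = 100 pF for T1 (fringing neglected).
eps0 = 8.854e-12
A = 1.0                       # plate area, m^2
eps_r1, eps_r2 = 2.0, 4.0
T_total = 0.2                 # total dielectric thickness, m
C_target = 100e-12            # target capacitance, F

# Linear in T1: T1/eps_r1 + (T_total - T1)/eps_r2 = eps0*A/C_target
T1 = (eps0 * A / C_target - T_total / eps_r2) / (1 / eps_r1 - 1 / eps_r2)
T2 = T_total - T1
print(f"T1 = {T1:.4f} m, T2 = {T2:.4f} m")   # -> T1 = 0.1542 m, T2 = 0.0458 m
```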
The original and final structure. The voltage field and mesh are shown.
We’ve looked at a fairly straightforward example of shape optimization, although with a little more effort, we could have found the solution to this problem by hand or by performing a parametric sweep. The geometric deformation demonstrated here is also quite simple. As you consider more complex geometries and more complex geometric changes, you will not always be able to directly use the undeformed spatial coordinates to define the deformation. In such cases, you will need to add equations to the model to help define the deformation. Of course, you may also want to consider more complex deformations, not just simple dimensional changes. We will cover this topic in an upcoming blog post on optimization.
In the meantime, if you have any questions about this technique and would like to use the Optimization Module for your design needs, please contact us.
]]>
I recently wrote a blog post about exploiting periodicity to reuse the flow solution in the modeling of a microfluidic device. To quickly refresh our memories, a microfluidic device may feature small serpentine channels, as shown in the image below. Two inlets introduce different solutes in the same solvent, and good mixing at the outlet is desired.
A typical microfluidic device. Image by IXfactory STK — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
We know that it is possible to compute the fluid flow field in only one of the repeating unit cells, and to pattern this flow solution along the repeating structure to define the flow field along the entire device. The Transport of Diluted Species problem can then be solved in the repeated section, as shown in the figure below.
One approach is to compute the flow field on one unit subcell. This flow solution can be used by the chemical species transport problem, solved over the entire domain.
Shortly after finishing my earlier blog post on this topic, one of my colleagues challenged me to come up with an even simpler way of modeling the same situation, which made me think of the following problem.
It may not be immediately obvious from the above image that this case has a very high Péclet number, meaning that the transport of the species due to the motion of the fluid is far greater than the transport of the species via diffusion. Stated another way, the solution downstream does not affect the solution upstream.
Now, if the Péclet number is high, then we do not necessarily need to solve the Transport of Diluted Species problem on the entire domain. We can solve it on the same unit cell used to compute the fluid flow field, but we need to come up with a way to map the species distribution at the output boundary back to the input boundary and rerun the simulation. Let’s find out how to do this.
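For a feel of the numbers, a rough estimate of the Péclet number, Pe = UL/D, can be made. All of the values below are illustrative assumptions for a water-based microfluidic channel, not parameters taken from this model:

```python
# Rough Péclet-number estimate, Pe = U*L/D (all values are assumed,
# representative microfluidic numbers, not model inputs).
U = 1e-3    # mean flow speed, m/s
L = 1e-4    # channel width, m
D = 1e-9    # diffusivity of a small molecule in water, m^2/s

Pe = U * L / D
print(f"Pe = {Pe:.0f}")   # -> Pe = 100
```

When Pe is much greater than one, convection dominates diffusion and the downstream solution cannot influence the upstream one, which is what licenses the unit-cell approach.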
The modeling procedure we should follow is sketched out in the figure below. We can reduce our entire modeling domain down to one unit cell. We will compute the flow field in this unit cell using laminar inflow and outflow conditions. This computed flow field will be used as the transport term in the Transport of Diluted Species interface, which additionally requires a concentration profile at the inlet.
We can start by assuming a particular species concentration at the inlet and solve for the concentration field throughout the modeling domain. Then, we evaluate the concentration profile at the outlet; map that profile back to the inlet boundary, where it can be applied as a new inlet condition; and solve the model again. Each time we repeat this process, we are essentially solving for the concentration profile in the next (downstream) unit cell of our microfluidic device.
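The solve-map-repeat loop can be sketched in a toy 1D model. Here, the "unit-cell solve" is replaced by a few explicit transverse-diffusion steps on the inlet profile, and the outlet profile is fed straight back as the next inlet; everything in the snippet, from grid size to step counts, is an illustrative assumption rather than part of the actual model:

```python
import numpy as np

# Toy sketch of the unit-cell iteration: a transverse concentration profile
# enters a cell, diffuses across the channel as it convects through, and the
# outlet profile becomes the inlet profile of the next cell downstream.
n = 101
x = np.linspace(-1.0, 1.0, n)
c = (1 + np.tanh(x / 0.1)) / 2            # assumed inlet profile, 0 to 1

def solve_unit_cell(c, steps=200, r=0.2):
    """Stand-in for the transport solve: explicit diffusion over one cell."""
    c = c.copy()
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], c[-2]         # no-flux channel walls
    return c

spreads = []
for cell in range(3):                     # like Index = 0, 1, 2 in the sweep
    c = solve_unit_cell(c)                # outlet mapped back to the inlet
    spreads.append(c.std())
    print(f"cell {cell}: spread = {c.std():.3f}")
```

The spread of the profile shrinks with each pass, mirroring the progressive mixing from one unit cell to the next.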
The solution procedure for modeling a repeated microfluidic device with a single unit cell.
The modeling implementation begins with a General Extrusion component coupling, named genext1, defined at the outlet boundary. This coupling simply maps the fields at the outlet boundary back to the inlet boundary by specifying a displacement along the x-axis, similar to the method shown in an earlier blog post.
The General Extrusion component coupling maps the solution on one boundary to another boundary by the specified offset of 3 mm.
Next, a Boundary ODEs and DAEs interface is added to the inlet boundary. The settings for this interface are shown below. The variable is named c_b and the discretization is set to Lagrange — Linear to match the discretization of the Transport of Diluted Species interface. The Source Term for the Distributed ODE is (c_b-genext1(c))[m^3/mol], which sets the value of c_b at the inlet equal to the species concentration computed at the outlet, mapped back to the inlet via the General Extrusion operator.
The Boundary ODEs and DAEs interface. Relevant settings are highlighted.
Next, let’s look at where the variable c_b is used. The screenshot below shows the Inflow boundary condition for the Transport of Diluted Species interface.
The inlet condition for the Transport of Diluted Species interface.
The expression entered into this field is if(Index,c_b,(1+tanh(x/0.1[mm]))/2), which uses the if() statement to set up a different inlet concentration based upon a Global Parameter, Index. The expression (1+tanh(x/0.1[mm]))/2 is the assumed species concentration at the inlet of the device (this concentration is arbitrarily set to range from zero to one) and is only applied if Index is equal to zero. For any other values of Index, the species concentration at the inlet is actually taken from the computed species concentration at the outlet.
But how do we specify that we want the previously computed outlet concentration to be used as the inlet concentration? For that, we need to modify our Study settings. The Study is composed of two sequential Stationary Steps. The first step solves the Laminar Flow interface alone. The resulting steadystate fluid velocity field is automatically passed along to the second step, which solves for both the Transport of Diluted Species and Boundary ODEs and DAEs interfaces.
The second Stationary Step includes an Auxiliary Sweep, as shown in the screenshot below. Note that the Index parameter is swept over the values 0, 1, and 2, which represent the three unit cells of the system that we want to model. Also note that Run continuation for is set to No parameter (since there is no benefit to using load ramping or nonlinearity ramping in this case) and that Reuse solution for previous step is set to Yes.
The Stationary study step with the Auxiliary sweep enabled.
There is one more modification that needs to be made to the Solver Configurations. We need to manually add a Previous Solution node to the Parametric node and specify that the variable c_b should be accessed at the previous step in the parameter sweep. The settings are shown below.
The Previous Solution node settings.
With these study settings, we repeat the simulation for each one of the three unit cells, passing the concentration field from the outlet back to the inlet before computing the concentration field. The results can be combined into a single plot showing the concentration profile throughout the entire domain of interest.
The concentration solution plotted for three unit cells, solved sequentially.
The approach shown in this blog post is valid for models of chemical species transport in periodic structures where the Péclet number is high. The process is certainly valid for solving other transport-dominant problems in COMSOL Multiphysics as well. Even though we assume here that the flow is periodic and the properties of the fluid are invariant, this approach can be extended to mapping the flow field from the outlet back to the inlet.
The problem shown here could also be addressed via LiveLink™ for MATLAB®, which provides us with a scripting interface that allows us to extract data, remap it, and rerun solutions with different inputs. Still, it is nice to see that we can build such models in the graphical user interface (GUI) as well.
The computational advantage here will grow with the number of unit cells that we have to analyze. If there are N unit cells, the memory requirements and the solution times for this approach will be approximately N times smaller than those for a model of the entire device.
If you have questions about this, and are interested in using COMSOL Multiphysics for your modeling needs, please contact us.
]]>
Canoeing is one of my favorite hobbies. In fact, I used to paddle dragon boats competitively on the beautiful Charles River here in Boston. If you’re ever in the city on a nice summer day, I strongly encourage you to rent a canoe and paddle down the river.
Dragon boats on the Charles River.
Boston’s weather, however, can change rather quickly. While it may be calm when you begin moving down the river, high winds can pick up and start blowing your canoe all over the place. Every time you raise your oar out of the water, it will catch the wind. As you swing your oar forward, you will hit the crests of the small waves that are being kicked up. Such movement leaves you wet and, even worse, throws off the balance of the boat. How might you change your paddle stroke to regain control of the situation?
An experienced paddler would start to use the Indian stroke, also called the Canadian J-stroke — just one of the many kinds of canoe paddle strokes. This stroke begins by moving the paddle backward in the typical power stroke. Then, without lifting the paddle out of the water, the paddler rotates their grip such that the paddle’s blade is parallel to the direction of travel on the return stroke.
The advantage of the Indian stroke, nicely demonstrated here, is that the paddle blade is never exposed to the wind and does not strike any of the wave crests. While easy to perform at slow speeds, this stroke is rarely used in races. It requires exceptionally strong control of the paddle as well as a solid understanding of how to move the paddle through the water.
With COMSOL Multiphysics, you can analyze and improve upon your paddle stroke. We’ll show you how.
Let’s simplify things a bit. We will consider a simplified paddle stroke and model it in the 2D plane. Although it is assumed that there is no variation in the z-direction (the water depth), we can begin here to gain some insight. The model under consideration consists of a rectangular fluid domain that is filled with water. The example includes a canoe-shaped cutout as well as a rectangular cutout that represents the paddle.
We want to model the paddle moving back and forth along a known path and rotating about its center. We can specify the fluid velocity at the paddle blade based on the known translational and rotational movement. The fluid velocity at the canoe wall is zero and open boundary conditions surround our model space.
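The prescribed wall velocity on the blade follows directly from rigid-body kinematics: the velocity of the blade's center plus a rotational contribution about it. The snippet below is only a sketch of that decomposition, with made-up numbers rather than values from the model:

```python
# Sketch of the fluid velocity prescribed at the paddle blade (2D rigid-body
# motion): v = v_center + omega x r. All numbers are illustrative assumptions.
def blade_velocity(x, y, xc, yc, vc, omega):
    """Velocity of the blade point (x, y) given the center motion."""
    vx = vc[0] - omega * (y - yc)
    vy = vc[1] + omega * (x - xc)
    return vx, vy

# A point 0.1 m from the center, blade translating at -1 m/s in x,
# rotating at 2 rad/s:
vx, vy = blade_velocity(0.1, 0.0, 0.0, 0.0, (-1.0, 0.0), 2.0)
print(vx, vy)   # -> -1.0 0.2
```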
The modeling domain considers the paddle translating and rotating next to the canoe. The boat moves from left to right.
Previously on the blog, we highlighted approaches for modeling general translations of domains as well as rotations and linear translations. A combination of these techniques can be used to model the deformation and rotation of the paddle. The animation below illustrates such techniques, showing the mesh through one full stroke. The mesh faces can slide relative to one another, yet still maintain continuity of the solution across these disjoint mesh interfaces.
An animation depicting moving mesh. A very coarse mesh is shown for illustrative purposes.
Since the translation and rotation of the paddle blade are now completely defined, there is only one thing left to solve: the Navier-Stokes equations on the moving domains. In this analysis, we will assume laminar flow. We will also assume that the canoe is quite heavy and is not yet moving. The animation below allows you to visualize the results for the first few strokes.
Flow during the paddle stroke.
Now that you can visualize the flow patterns around the paddle blade, albeit in this somewhat simplified case, how might you improve your own stroke? Once you determine your optimal stroke, you might even want to consider joining a competitive dragon boat team here in Boston, or elsewhere — a healthy and fun way to put your COMSOL Multiphysics modeling skills to use!
Can you think of other situations in which this approach could be applicable? We are happy to hear your feedback. If you are interested in using COMSOL Multiphysics to model your paddle, or perhaps another form of fluid-structure interaction, please contact us.
]]>
In a previous blog post, we discussed the modeling of objects translating inside of domains that are filled with a fluid, or just a vacuum. This initial approach introduced the use of the deformed mesh interfaces and the concept of quadrilateral (or triangular) deforming domains, where the deformation is defined via bilinear interpolation. Such a technique works well even for large deformations, as long as the regions around the moving object can be appropriately subdivided. This, however, is not always possible.
A solid object moving along a linear path inside of a complicated domain.
Consider the case shown above, where an object moves along a straight line path, defined by \mathbf x(t), through a domain with protrusions from the sides. In this situation, it would be quite difficult to implement the original approach. So what else can we do?
The solution involves four steps. They are:

1. Subdivide the modeling space into separate geometry objects for the stationary regions and the regions containing the moving object.
2. Finalize the geometry with the Form Assembly operation, which creates identity pairs at the boundaries where the meshes can slide relative to each other.
3. Prescribe the known translation of the moving domains and the deformation of the mesh in the adjacent deforming domains.
4. Apply a Continuity condition on the identity pairs within each physics interface to maintain continuity of the fields.
We can begin by dividing our original model space into two different geometry objects, as depicted in the figure below. Here, the red domains represent the stationary domains and the blue domains represent the regions in which our object is linearly translating. The subdivision process takes place in the geometry sequence, which is then finalized with the Form Assembly operation.
For a description of this functionality and steps on how to use it, watch this video.
Subdividing the modeling space into different geometry objects.
The Form Assembly step will allow the finite element meshes in the blue domains to slide relative to the meshes in the red domains. This step will also automatically introduce identity pairs that can be used to maintain continuity of the fields for which we will be solving. Let’s take a look at a representative mesh that may be appropriate in this case.
The subdivided domains with a representative mesh.
In the figure above, note that the dark red domains contain meshes that will not move at all. The dark blue domains, meanwhile, have translations completely defined by our known function, \mathbf x(t). The light blue domains are the regions in which the mesh will deform. We can simply use the previously introduced method of bilinear interpolation in these domains. You should also note that a Mapped mesh is used for these two rectangular domains and that the distribution of the elements is adjusted, such that they are reasonably similar in size or smaller than the adjacent nondeforming elements.
After the object has been linearly translated, the mesh in the light blue domains deforms.
From the previous images, you can clearly see that the meshes no longer line up between the moving and stationary domains. While COMSOL Multiphysics version 5.0 has introduced greater accuracy in the handling of noncongruent meshes between domains, there are some things that you should be aware of when using this functionality.
As a result of the Form Assembly geometric finalization step, COMSOL Multiphysics will automatically determine the identity pairs — the mating faces at which the mesh can be misaligned. We simply need to tell each physics interface within our model to maintain continuity at these boundaries. This can be accomplished via the Pairs > Continuity boundary condition, which is available within the boundary conditions for all of the physics interfaces.
Once this feature is added and applied to all of the identity pairs, the software will apply additional conditions at these interfaces to ensure that the solution is as smooth as possible over the mesh discontinuity. Each identity pair has a socalled source side and a destination side. The mesh on the destination side should be finer in all configurations of the mesh.
Assembly meshes with noncongruent meshes at the boundaries can be used in combination with most physics interfaces. There are, however, a few important exceptions. Whenever you are solving an electromagnetics problem involving a curl operator on a vector field, such a technique cannot be used. Common physics interfaces that fall into this category include the 3D Electromagnetic Waves interfaces, the 3D Magnetic Fields interfaces, and the 3D Magnetic and Electric Fields interfaces. This still, of course, leaves us with a wide range of physics.
Let’s look at one case of computing the temperature fields around our object, with differing temperatures for the object and the surrounding domain’s outer walls. The contour plots of the temperature fields shown below verify that the solution is quite smooth over the boundary where the mesh is not continuous.
The temperature fields over time are smooth across the Continuity boundary condition applied to the identity pair.
At this point, you can probably already see how this same technique can be applied to a rotating object. We simply create a circular domain around our rotating object and use all of the same techniques we have discussed here. Of course, if the object is only rotating, we no longer need the deforming mesh — making things even a bit simpler for us.
The figures below show the same object from before, except now the object is rotating.
An object rotating about a point.
An assembly composed of stationary and rotating objects as well as the mesh.
The position of a point in the rotating domain, (X_{r}, Y_{r}), can be expressed in terms of the angular frequency, \omega; the undeformed geometry coordinates, (X_{g}, Y_{g}); the point about which the object is rotating, (X_{0}, Y_{0}); and time, t. This gives us the following expressions:

X_r = X_0 + (X_g - X_0)\cos(\omega t) - (Y_g - Y_0)\sin(\omega t)
Y_r = Y_0 + (X_g - X_0)\sin(\omega t) + (Y_g - Y_0)\cos(\omega t)
where the prescribed deformation in the Deformed Geometry interface is quite simply:

(dX,\ dY) = (X_r - X_g,\ Y_r - Y_g)
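This rotation mapping can be checked numerically. The sketch below rotates a test point a quarter turn about the rotation center and confirms that the prescribed deformation is just the difference between the rotated and undeformed coordinates (all values are test assumptions):

```python
import math

# Rigid rotation of a point (Xg, Yg) about (X0, Y0) through the angle
# omega*t; the prescribed deformation is (Xr - Xg, Yr - Yg).
def rotated(Xg, Yg, X0, Y0, omega, t):
    a = omega * t
    Xr = X0 + (Xg - X0) * math.cos(a) - (Yg - Y0) * math.sin(a)
    Yr = Y0 + (Xg - X0) * math.sin(a) + (Yg - Y0) * math.cos(a)
    return Xr, Yr

X0, Y0 = 1.0, 2.0
omega = 2 * math.pi            # one revolution per second (test value)
Xg, Yg = 1.5, 2.0              # a point 0.5 m from the rotation center

Xr, Yr = rotated(Xg, Yg, X0, Y0, omega, 0.25)   # a quarter turn
dx, dy = Xr - Xg, Yr - Yg                        # prescribed deformation
print(f"dx = {dx:.3f}, dy = {dy:.3f}")           # -> dx = -0.500, dy = 0.500
```

Note that the distance from the point to the rotation center is preserved, as it must be for a rigid rotation.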
Seems easy, right? In fact, the technique outlined in this section is actually applied automatically in COMSOL Multiphysics when using the Rotating Machinery, Fluid Flow and the Rotating Machinery, Magnetic physics interfaces. This provides you with a behindthescenes look at what is going on within these interfaces!
We have now introduced methods for modeling the motion of solid objects inside fluid- or vacuum-filled domains. Although we have simply prescribed a displacement in all of these cases, the displacement of our solid objects could be computed and coupled to the field solutions in the surrounding regions. That is, however, a topic for another day.
Interested in learning more about using the deformed mesh interfaces for your modeling? Download the tutorial from our Application Gallery.
]]>
Suppose we want to set up a COMSOL Multiphysics model of a solid object moving around inside of a larger domain filled with fluid such as air, or even just a vacuum. To start, let’s assume that we know what path the object will take over time. We won’t worry about which physics we need to solve the model, but we’ll assume that we want to solve for some fields in both the moving domain and the surrounding domain. Of course, we will need a finite element mesh in both of these regions, but this finite element mesh will need to change.
A solid object moves freely around inside of a larger domain along a known path.
For situations like this, there are two options: The Deformed Geometry interface and the Moving Mesh interface. These two interfaces actually work identically, but are meant to be used in different situations.
The actual use of these two interfaces is identical, but choosing between them depends on which other physics you want to solve, as the interfaces handle each type of physics differently. Although we won’t cover how to choose between these two interfaces in this blog post, it is worth reading the sections “Deformed Mesh Fundamentals” and “Handling Frames in Heat Transfer” in the COMSOL Multiphysics Reference Manual as a starting point.
It is also worth mentioning that the Solid Mechanics interface cannot be combined with the Moving Mesh interface. The Solid Mechanics interface already computes the domain deformation via the balance of momentum. Other physics, such as heat transfer in solids, are solved on this deformed shape. On the other hand, it is reasonable to combine the Deformed Geometry interface with the Solid Mechanics interface if you want to study the change in stresses due to material removal, or if you want to perform a parametric sweep over a dimension without parameterizing the geometry, as described in this previous blog post.
Here, we look at the conceptual case of an object moving around inside of a larger domain with stationary boundaries, as shown in the figure above. The path of the object over time is known. We will look at how to set up the Deformed Geometry interface for this problem. But first, we need to take a quick look at which equations will be solved in COMSOL Multiphysics.
Our case of an object moving around inside of a domain is actually a boundary value problem. All boundaries have known displacements, and these boundary displacements can be used to define the deformation of the mesh within the interior of both domains.
There are four types of approaches for computing the deformation of the mesh within each domain: Laplace, Winslow, Hyperelastic, and Yeoh smoothing types. Here, we will address only the simplest case, referred to as a Laplace smoothing, and demonstrate how this approach is sufficient for most cases. The Laplace smoothing approach solves the following partial differential equation within the domain:

\frac{\partial^2 x}{\partial X^2}+\frac{\partial^2 x}{\partial Y^2}+\frac{\partial^2 x}{\partial Z^2} = 0

with analogous equations for y and z,
where lowercase (x,y,z) are the deformed positions of the mesh and uppercase (X,Y,Z) are the original, undeformed positions.
Since the displacements at all boundaries are known, this is a wellposed problem, and theoretically, the solution to this equation will give us the deformation of the mesh. However, in practice, we may run into cases where the computed deformation field is not very useful. This is illustrated in the figure below, which shows the original mesh on the original domain and the deformed mesh as the part is moved along the diagonal. Observe the highlighted region and note that the mesh gets highly distorted around the moving part edges, especially at sharp corners. This high distortion prevents the model from solving the above equation past a certain amount of deformation.
Original and deformed mesh. The region where the mesh gets highly distorted is highlighted.
In the above image, the deformation of the blue domain is completely described by its boundaries and can be prescribed. On the other hand, the deformation within the red region requires solving the above partial differential equation, and this leads to difficulties. What we want is an approach that allows us to model greater deformations while minimizing the mesh deformation.
If you have a mathematical background, you will recognize the above governing equation as Laplace’s equation and you might even know the solutions to it for a few simple cases. One of the simpler cases is the solution to Laplace’s equation on a Cartesian domain with Dirichlet boundary conditions that vary linearly along each boundary and continuously around the perimeter. For this case, the solution within the domain is equal to bilinear interpolation between the boundary conditions given at the four corners. As it turns out, you can use bilinear interpolation to find the solution to Laplace’s equation for any convex four-sided domain with straight boundaries.
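This equivalence is easy to verify numerically. The sketch below solves the discrete Laplace equation on a unit square by Jacobi iteration, with boundary data that varies linearly between four arbitrary corner values, and compares the result against direct bilinear interpolation (the grid size and corner values are arbitrary assumptions):

```python
import numpy as np

# Solve Laplace's equation on a unit square with boundary data that varies
# linearly between four corner values, then compare against bilinear
# interpolation of those corners. The two should agree.
n = 21
s = np.linspace(0.0, 1.0, n)
c00, c01, c10, c11 = 0.0, 1.0, 2.0, 5.0   # arbitrary corner values

u = np.zeros((n, n))
u[0, :]  = c00 * (1 - s) + c01 * s        # edge i = 0
u[-1, :] = c10 * (1 - s) + c11 * s        # edge i = n-1
u[:, 0]  = c00 * (1 - s) + c10 * s        # edge j = 0
u[:, -1] = c01 * (1 - s) + c11 * s        # edge j = n-1

for _ in range(5000):                     # Jacobi iteration on the interior
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])

S, T = np.meshgrid(s, s, indexing="ij")
bilinear = (c00 * (1 - S) * (1 - T) + c01 * (1 - S) * T
            + c10 * S * (1 - T) + c11 * S * T)
err = np.abs(u - bilinear).max()
print(f"max difference: {err:.1e}")       # essentially zero
```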
The first thing that we have to do is subdivide our complicated deforming domain into convex four-sided domains with straight boundaries. One such possible subdivision is shown below.
Subdividing the domain so that the deforming region (red) is composed of four-sided convex domains.
The deforming domain is divided into convex quadrilateral domains. In fact, we could have also divided it into triangular domains since that would simply be a special case of a quadrilateral with two vertices at the same location — a so-called degenerate domain. We would only need to decompose the domain into triangles if it were not possible to split the domains into quadrilaterals.
Now that we have these additional boundaries, we need to completely define all of the boundary conditions for the deformation within the domain. The boundaries adjacent to the deforming domain are known and there is no deformation at the outside boundaries. But what about the boundaries connecting these? We have a straight line connecting two points where the deformation is known, so we could just apply linear interpolation along these lines to specify the deformation there as well.
And how can we easily compute this linear interpolation? As you might have already guessed, we can simply solve Laplace’s equation along these connecting lines!
A very general way of doing this is by adding a Coefficient Form Boundary PDE interface to our model to solve for two variables that describe the displacement along each of these four boundaries. This interface allows you to specify the coefficients of a partial differential equation to set up Laplace’s equation along a boundary. We know the displacements at the points on either end of the boundary, which gives us a fully defined and solvable boundary value problem for the displacements along the boundaries.
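In 1D, this is simply linear interpolation: solving d²u/ds² = 0 along a line with the displacements fixed at the two endpoints recovers the straight-line profile. A small sketch, with arbitrary endpoint values:

```python
import numpy as np

# Laplace's equation along a helper line: with the displacements fixed at the
# two endpoints, the solution is linear interpolation between them.
n = 11
u = np.zeros(n)
u[0], u[-1] = 0.2, 0.8                  # known endpoint displacements
for _ in range(2000):
    u[1:-1] = 0.5 * (u[:-2] + u[2:])    # discrete d2u/ds2 = 0 (Jacobi)

print(np.allclose(u, np.linspace(0.2, 0.8, n)))   # -> True
```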
These new help variables completely define the deforming domains. The results are shown below and demonstrate that larger deformations of the mesh are possible. Of course, we still cannot move the object such that it collides with the boundary. That would imply that the topology of the domain would change; also, the elements cannot have zero area. We can, however, make the deformed domain very small and thin.
The undeformed and deformed mesh after adding the help variables for the Deformed Geometry along the interior boundaries.
You are probably thinking that the mesh shown above appears rather distorted, but keep in mind that all of these distorted elements still have straight-sided edges, which is good. In practice, you will often find that you can get good results even from what appear to be highly distorted elements.
However, we can observe that there are now very many small, distorted elements in one region and larger, stretched elements in other parts of our moving domain. The last piece of the puzzle is to use Automatic Remeshing, which will stop a transient simulation based on a mesh quality metric and remesh the current deformed shape.
The deformed geometry immediately before and after the Automatic Remeshing step.
We can see from the above images that Automatic Remeshing leads to a lower element count in the compressed region and adds elements in the stretched region, such that the elements are reasonably uniform. The total number of elements in the mesh stays about the same. There is also an added computational burden due to the remeshing, so this step is only warranted if the element distortion adversely affects the accuracy of the results.
We have looked at a case where we know exactly how our solid object will move around in our fluid domain. But what if there is an unknown deformation of the solid, such as due to some applied loads that are computed during the solution? A classic example of such a situation is a fluid-structure interaction analysis, where the deformation of the solid is due to the surrounding fluid flow.
In such situations, we can use the Integration Component Coupling operator, which makes the deformation at one point of a deforming solid structure available everywhere within the model space. The deformation of one or more points can then be used to control the deformation of the mesh. A good example of this technique is available in the Micropump Mechanism tutorial. The technique is visualized below.
When the actual deformation is unknown, an integration component coupling at a helper point can be used to control a helper line that defines the mesh deformation.
We can see in the image above that the modeling domain is actually not divided into convex quadrilaterals, and that the helper line is allowed to slide along the top boundary of the modeling domain. So this modeling approach is a little bit less strict, yet still allows the mesh to deform significantly. Hopefully it is clear that there is no single best approach to every situation. You may want to investigate a combination of techniques for your particular case.
We have described how to use the deformed mesh interfaces efficiently by decomposing the deforming domain into quadrilateral domains and introducing help variables along the boundaries. This approach makes the problem easier for the COMSOL Multiphysics software to solve. The addition of Automatic Remeshing is helpful if there is significant deformation. The approach outlined here can also be applied to 3D geometries. An example that uses both 2D and 3D cases is available here.
So far, we have only looked at translations of objects inside of relatively simple domains where it is easy to set up deforming domains. When we need to consider geometries that cannot easily be subdivided and cases with rotations of objects, we need to use a different approach. We will cover that topic in an upcoming blog post on this subject, so stay tuned!
Suppose you are tasked with computing the fluid flow through a network of pipes, as depicted below. You can see that there are many bends with long straight sections in between.
A piping network. Image by Hervé Cozanet, via Wikimedia Commons.
The geometry for a fluid flow model of just one pipe in this network might look like the image below.
A CAD model of a pipe volume for fluid flow analysis.
If you go ahead and mesh this geometry with just the default Physics-Controlled Mesh capability, you will obtain a mesh like that pictured below. Note that the boundary layer mesh is applied to the pipe walls and that the mesh is otherwise quite uniform in size within the long, straight sections of the pipe.
The default finite element mesh for this fluid flow problem includes a boundary layer mesh on all no-slip boundaries.
An experienced fluid flow analyst would immediately recognize that the flow field in the long, straight sections will be primarily parallel to the pipe and vary quite gradually along the axis. Meanwhile, the variation of the velocity along the cross section and around the bends will be significant. We can exploit this foreknowledge of the solution to partition the geometry into various domains.
The pipe domain is partitioned into several subdomains, which are shown in different colors.
Once the geometry is partitioned, we can apply a Free Tetrahedral mesh feature. This mesh should only be applied to one of the domains along the length of the pipe: a domain that represents a bend (depicted below). Note that the Boundary Layers mesh feature is not yet applied.
A tetrahedral mesh is applied to only one of the domains.
From this one meshed domain, we can now use the Swept mesh functionality in the straight sections, as illustrated below. It is also possible to specify a Distribution subfeature to the Swept feature to explicitly control the element distribution and set up a nonuniform element size along the length. Since we anticipate that the flow will vary gradually along the length, the elements can be quite stretched in the axial direction.
The swept mesh along the straight sections also has a nonuniform element distribution.
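To see why a nonuniform distribution pays off, here is a small Python sketch (with illustrative numbers only, not taken from any COMSOL setting) that builds a geometric sequence of element lengths along a straight section, packing short elements near the bend and letting them stretch toward the middle of the pipe:

```python
# Geometric element distribution along a straight pipe section:
# short elements near the bend, progressively longer ones away from it.
n_elements = 10
growth = 1.4          # each element is 40% longer than the previous one
length = 2.0          # total length of the straight section (arbitrary units)

raw = [growth**i for i in range(n_elements)]
scale = length / sum(raw)
element_lengths = [scale * h for h in raw]

print(element_lengths[0])        # smallest element, adjacent to the bend
print(element_lengths[-1])       # largest element, far from the bend
print(sum(element_lengths))      # sums to the full section length
```

This is essentially what a Distribution subfeature with a specified growth rate produces: fine resolution where the flow changes rapidly and stretched elements where it varies gradually.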
We can now apply a tetrahedral mesh to the two bent sections and sweep the remaining straight sections. The last step of the meshing sequence is to apply the Boundary Layers mesh feature.
The combination of a tetrahedral and swept mesh with the boundary layers applied at the walls.
From the above images, we can observe that the swept mesh can significantly reduce the size of the model for this fluid flow problem. Our Flow Through a Pipe Elbow tutorial is one example in which this swept meshing technique is used.
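A back-of-the-envelope estimate shows where the savings come from (the dimensions and element sizes below are made up for illustration, not taken from the tutorial). With swept elements, the axial element size can be much larger than the cross-sectional size:

```python
import math

# Rough element-count estimate for a straight pipe section.
length = 1.0       # pipe section length, m
radius = 0.05      # pipe radius, m
h_cross = 0.005    # element size across the section, m
h_axial = 0.05     # element size along the axis (10x stretched), m

area = math.pi * radius**2

# Uniform (isotropic) meshing: element size h_cross in every direction.
n_uniform = (area / h_cross**2) * (length / h_cross)

# Swept meshing: same cross-sectional resolution, stretched axially.
n_swept = (area / h_cross**2) * (length / h_axial)

print(round(n_uniform / n_swept))  # -> 10, the axial stretch factor
```

The element count along the pipe axis drops by exactly the stretch factor, while the cross-sectional resolution, where the velocity actually varies, is unchanged.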
Shifting gears, let’s now consider an inductive coil similar to the one pictured below.
An inductive coil. Image by Spinningspark, via Wikimedia Commons.
This coil consists of a long wire with quite gradual bends. If tasked with computing the inductance, we would also need to consider the surrounding air and the magnetic core materials. The geometry for such a model and the default mesh might look like the image below.
A coil surrounding a magnetic core in an air domain.
The default Free Tetrahedral mesh feature is applied to the entire model.
You’ve probably already recognized that the coil itself is an excellent candidate for swept meshing. The coil is long and uniform in cross section. As such, we can begin with a triangular surface mesh at one end and then sweep it along the entire length of the coil to create triangular prismatic elements.
A triangular mesh (represented in blue) is applied to the cross-sectional surface at one end of the coil and then swept along the entire length.
We do, however, still need a volumetric mesh of the surroundings. This surrounding volume is amenable to only tetrahedral meshing, not swept meshing. A volume that is to be meshed with tetrahedral elements can only have triangular surface elements on all of its boundaries. Thus, we must first add a Convert feature to the mesh sequence and apply it to the surfaces between the coil and its surroundings. The operation is designed to split the elements touching the boundaries such that triangular face elements are created.
The convert operation introduces triangular elements on the boundaries of the coil.
The remaining domains are meshed with tetrahedra.
From the above image, we can see that fewer elements are used to describe the coil than in the default mesh settings. A similar example is the Anisotropic Heat Transfer Through Woven Carbon Fibers tutorial, which considers a combination of swept meshing and tetrahedral meshing of the surroundings (albeit with different physics involved).
Finally, let us consider a microelectromechanical system (MEMS) structure that is composed of microscale structural features that deflect. A change in the applied electric potentials deforms the structure, and the resulting deflection can be measured as a change in capacitance. Such effects are exploited in devices like comb drives, accelerometers, and gyroscopes.
A MEMS cantilever beam at resonance. Image by Pcflet01, via Wikimedia Commons.
A common characteristic of such MEMS structures is that they are composed of various thin planar layers that need to be meshed along with the surrounding air domain. The gaps between structures may also be quite slender. A simplified model for part of such a MEMS structure might appear similar to the model shown below, with interleaved fingers.
A simplified model representing part of a typical MEMS structure.
When using the default mesh settings, small elements will be inserted in the narrow air gaps between the parts (illustrated below). However, we do know that the fingers on either side will be at different potentials and that the gap between the straight sections of the fingers and the ground plane will have a uniform electric field.
The default mesh settings produce smaller elements than are needed in regions where we know the electric field will be nearly uniform.
This present structure is actually not amenable to swept meshing, as there are no domains in this model that have a uniform cross section. But, if we introduce some partitioning planes, we can break this domain up into prismatic domains that are amenable to swept meshing. We will first introduce two partitioning planes — one at the top and one at the bottom surface of the fingers — that will partition both the air domain and the two solid domains. We add these planes as Work Plane features to the geometry sequence and they are used as input by the two Partition Object features that divide the solids.
Two planes are introduced that partition both the air and the solid domains.
It is then possible to introduce additional partitioning planes, as shown below, to delineate the long, straight sections of the fingers. This is important because we know the electric fields and displacements will vary quite gradually in these regions.
Two additional planes divide the fingers into prismatic domains.
Now we can begin the meshing process using the Mapped mesh feature on the new rectangular surfaces introduced by the partitioning. The nonrectangular faces on the same plane can be meshed with triangular elements, as illustrated below.
A surface mesh applied to one of the partitioning planes.
The surface mesh can be used as the starting point for the swept mesh, which can be applied to the two layers of the thin domains — the fingers and the air gaps between the fingers and the ground. The air domain can be meshed with tetrahedral elements after a convert operation is applied to the adjacent rectangular element faces.
The final mesh consists of a combination of free and swept meshes.
We can observe that the total number of elements in the finite element model has been reduced. For an example demonstrating this technique of partitioning planes and swept meshing, please see our Surface Micromachined Accelerometer tutorial.
Swept meshing is a powerful technique for minimizing the computational complexity of many classes of COMSOL Multiphysics models. By applying your engineering judgment and knowledge to each problem, you can obtain high-accuracy results quickly and at a lower computational cost than with the default mesh settings.
While you, of course, do not always need to use this approach, you should consider applying it to cases where your geometry has high aspect ratios, there are relatively thin or thick regions, and you are reasonably certain that the solution will be represented well by the swept mesh.
Let’s begin by looking at a microfluidic device, as shown below. Such devices feature small channels that are filled with fluids carrying different chemical species. Within their design, a common goal is to achieve optimal mixing within a small surface area, hence the serpentine channel.
A typical microfluidic device. Image by IXfactory STK — Own work, via Wikimedia Commons.
The schematic below illustrates that there are two fluid inlets, both of which carry the same solvent (water) but a different solute. At the outlet, we want the species to be well mixed. To model such a situation, we want to solve the Navier-Stokes equations for the flow. This computed flow field can then be used as input for the convection-diffusion equation governing the species concentration. The Micromixer tutorial, available in our Application Gallery, is an example of such a model.
Now, if desired, it is possible to model the entire device shown above. However, if we neglect the structure near the inlet and the outlet, we can reasonably assume that the flow within the channel bends will be identical between the unit cells. Therefore, we can greatly reduce our model by solving only for the fluid flow within one unit cell and patterning this flow solution throughout the modeling domain for the convection-diffusion problem.
Schematic of a microfluidic mixer that depicts the repeated unit cell and the inlet and outlet zones.
For such a unit cell model, the walls of the channels are set to the Wall, No Slip condition. The Periodic Flow condition is used to set the velocity so it is identical at the inlet and outlet boundaries, allowing us to specify a pressure drop over a single unit cell. A pressure constraint at a single point is used to gauge fix the pressure field. The working fluid is water with properties defined at room temperature and pressure. The flow solution on this unit cell is also plotted, as shown below.
The periodic modeling domain and the fluid flow solution.
Now that we have the solution on one unit cell, we can use the General Extrusion component coupling to map the solution from this one unit cell onto the repeated domains. This will enable us to define the flow field in the entire serpentine section.
The General Extrusion feature is available in the model tree under Component > Definitions > Component Coupling. The settings for this feature are illustrated below. To map the solution from one domain into the other domains that are offset by a known displacement along the x-axis, the destination map uses the expression "x-Disp" for the x-expression. Thus, every point in the original domain is mapped along the positive x-direction by the specified displacement. Since there is no displacement in the y-direction, the y-expression is kept at its default, "y".
The variable Disp is individually defined within each of the three domains, as shown in the figure below. Therefore, only a single operator is needed to map the velocity field into all of the domains. Within the original domain, a displacement of zero is used.
The settings for the General Extrusion operator and the definitions of the variable in the three domains.
With the General Extrusion operator defined, we can now use it throughout the model. In this example, the operator is used by the Transport of Diluted Species interface to define the velocity field (illustrated below). The velocity field is given by u and v, the fluid velocity in the x- and y-directions, respectively. The components of this velocity field are now defined in all of the repeated domains via the General Extrusion operator as genext1(u) and genext1(v).
The General Extrusion operator is used to define the velocity field in all three periodic domains.
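The effect of the destination map can be mimicked with plain NumPy (the velocity field and unit-cell width below are made up purely to illustrate the coordinate shift, not taken from the tutorial):

```python
import numpy as np

# A made-up field solved on one unit cell, defined for 0 <= x < W:
W = 1.0
def u_unit_cell(x, y):
    return np.sin(2 * np.pi * x / W) * (1 - y**2)

# Mimic genext1(u): for a point in any repeated domain, the destination
# map "x - Disp" pulls the evaluation point back into the source cell.
def genext_u(x, y):
    disp = W * np.floor(x / W)   # Disp = 0, W, 2W, ... in successive domains
    return u_unit_cell(x - disp, y)

# The mapped field repeats with period W:
print(np.isclose(genext_u(0.25, 0.0), genext_u(2.25, 0.0)))  # True
```

In the COMSOL model, the same pullback is achieved by defining the variable Disp per domain and writing x-Disp in the operator's destination map; the sketch above just makes the coordinate arithmetic explicit.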
Now that the velocity field is defined throughout the modeling domain, the species concentration at the inlet is defined via the Inflow boundary condition. This applies a varying species concentration over the inlet boundary. An Outlet boundary condition is applied at the other end.
Although it is not strictly necessary to do so, the mesh is copied from the one domain used to solve for the fluid flow to all of the other domains. The Copy Domain mesh feature can copy the mesh exactly, thereby avoiding any interpolation of the flow solution between meshes.
The model is solved in two steps — first, the Laminar Flow physics interface is solved, and then the Transport of Diluted Species interface is solved. This is reasonable to do since it is assumed that the flow field is independent of the species concentration. The results of the analysis, including the concentration and the mapped velocity field, are depicted below.
The species concentration (shown in color) is solved in all three repeating domains. The periodic velocity field, indicated by the arrows, is solved in one domain and mapped into the others.
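The one-way coupling can be illustrated with a toy 1D problem (a pure illustration with invented numbers, unrelated to the actual tutorial geometry): the velocity is obtained first, then held fixed while the convection-diffusion equation for the concentration is solved:

```python
import numpy as np

# Step 1: "solve" the flow. In this 1D toy problem, the velocity is
# simply a known constant, standing in for the computed flow field.
u = 0.001        # m/s, assumed known from the flow solution
Dc = 1e-6        # m^2/s, species diffusivity
L = 0.01         # m, channel length
n = 101
x = np.linspace(0, L, n)
h = x[1] - x[0]

# Step 2: steady convection-diffusion, u*c' = Dc*c'', c(0)=1, c(L)=0,
# discretized with central differences into a linear system.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0; b[0] = 1.0      # inlet concentration
A[-1, -1] = 1.0; b[-1] = 0.0   # outlet concentration
for i in range(1, n - 1):
    A[i, i - 1] = -Dc / h**2 - u / (2 * h)
    A[i, i]     =  2 * Dc / h**2
    A[i, i + 1] = -Dc / h**2 + u / (2 * h)
c = np.linalg.solve(A, b)

print(c[0], c[-1])  # the boundary values 1.0 and 0.0 are reproduced
```

Because the flow does not depend on the concentration, nothing is lost by solving the two problems sequentially, and the flow solve never has to be repeated.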
We have discussed how the General Extrusion component coupling can be used to set up a linear pattern of a periodic solution as part of a multiphysics analysis. For circular periodicity, a rotation matrix, not a linear shift, must be used in the destination map. An example of defining such a rotation matrix is detailed in this previous blog post.
The approach we have applied here is appropriate for any instance in which a spatially repeating solution needs to be utilized by other physics. Where might you use it in your multiphysics modeling?
Let’s start this conversation with a very simple problem: computing the capacitance of two parallel flat square metal plates, of side length L=1\:m, separated by a distance D=0.1\:m, and with a dielectric material of relative permittivity \epsilon_r = 2 sandwiched in between. Under the assumption that the fringing fields are insignificant (a rather severe assumption, but we will use it here to get started), we can write an analytic expression for the capacitance:

C = \frac{\epsilon_0 \epsilon_r L^2}{D}

where \epsilon_0 is the permittivity of free space.
We can easily differentiate this expression with respect to our three inputs to find the design sensitivities:

\frac{\partial C}{\partial \epsilon_r} = \frac{\epsilon_0 L^2}{D}, \qquad \frac{\partial C}{\partial L} = \frac{2 \epsilon_0 \epsilon_r L}{D}, \qquad \frac{\partial C}{\partial D} = -\frac{\epsilon_0 \epsilon_r L^2}{D^2}
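These analytic sensitivities are easy to sanity check numerically. The short Python script below (entirely independent of COMSOL) compares one of them against a central finite difference of the capacitance formula:

```python
eps0 = 8.854187817e-12  # permittivity of free space, F/m

def C(eps_r, L, D):
    # Parallel plate capacitance, neglecting fringing fields.
    return eps0 * eps_r * L**2 / D

eps_r, L, D = 2.0, 1.0, 0.1
h = 1e-6  # finite difference step, m

# Central finite difference of C with respect to L...
dC_dL_fd = (C(eps_r, L + h, D) - C(eps_r, L - h, D)) / (2 * h)
# ...versus the analytic derivative.
dC_dL = 2 * eps0 * eps_r * L / D

print(abs(dC_dL_fd - dC_dL) < 1e-15)  # the two derivatives agree
```

The same check works for the other two parameters; this is essentially what comparing fsens results against the analytic expressions accomplishes later in the model.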
Now let’s look at computing these same sensitivities using the functionality of COMSOL Multiphysics.
Schematic of a parallel plate capacitor model, neglecting the fringing fields.
We can start by building a model using the Electrostatics physics interface. Our domain will be a block of length L and height D with a relative permittivity of \epsilon_r. The boundary condition at the bottom is a Ground condition, and at the top, an Electric Potential condition sets the voltage to V_0=1\:V. The sides of the block have the default Zero Charge boundary condition, which is equivalent to neglecting the fringing fields. We can solve this model and find the voltage field between the two plates. Based on this solution, we can also calculate the system capacitance from the integral of the electric energy density, W_e, over the entire model:

C = \frac{2 W_e}{V_0^2}

This equation assumes that one plate (or terminal) is held at a voltage V_0 while all other terminals in the model are grounded. The integral of the electric energy density over all domains is already computed via the built-in variable comp1.es.intWe, and we can use it to define an expression for the computed capacitance. Of course, we will want to compare this value and our computed sensitivities to the analytic values, so we can define some variables for these quantities. We can use the built-in differentiation operator, d(f(x),x), to evaluate the exact sensitivities, as shown in the screenshot below.
Variables are used to compute the model capacitance, as well as to compute the exact capacitance and its sensitivities. The built-in differentiation operator, for example d(C_exact,L), can be used to evaluate the exact sensitivities.
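For this idealized geometry, the energy integral can be evaluated by hand, which makes a nice sanity check. Assuming a uniform field E = V_0/D between the plates (the same no-fringing assumption as above), a few lines of Python recover the analytic capacitance from C = 2W_e/V_0^2:

```python
eps0 = 8.854187817e-12  # permittivity of free space, F/m
eps_r, L, D, V0 = 2.0, 1.0, 0.1, 1.0

E = V0 / D                                   # uniform field between the plates, V/m
We = 0.5 * eps0 * eps_r * E**2 * (L**2 * D)  # energy density times plate volume, J
C_from_energy = 2 * We / V0**2
C_analytic = eps0 * eps_r * L**2 / D

print(abs(C_from_energy - C_analytic) < 1e-24)  # the two expressions agree
```

In the actual model, the field is computed by the finite element solution and the integral comp1.es.intWe replaces the hand calculation, but the relationship between stored energy and capacitance is the same.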
After solving our model, we can evaluate the computed system capacitance, compare it to the analytic value, and evaluate the design sensitivities analytically. Now let’s look at how to compute these sensitivities with COMSOL Multiphysics.
The parameters that we are considering affect both the material properties and the geometric dimensions of the model. When the design parameters affect the geometry, we need to use the Deformed Geometry interface, which lets us evaluate the sensitivities with respect to a geometric deformation.
The design parameters will effect a change in the geometry as shown. The hidden faces experience no displacement normal to the boundaries.
We introduce two new Global Parameters, dL and dD, which represent a change in L and D. These will be used in the Deformed Geometry interface, which has four relevant features. First, a Free Deformation feature is applied to the domain, which means that the computational domain can deform based on the applied boundary conditions. Next, Prescribed Mesh Displacement features are applied to the six faces of the domain. In the screenshot below, the deformation (dL) normal to the faces is prescribed as shown in the sketch above.
The Prescribed Mesh Displacement features are used to control the displacement normal to all domain boundaries.
Finally, to actually compute the sensitivities, we must add a Sensitivity node to the study sequence, as shown in the screenshot below. You will want to enter the objective function expression, in this case C_computed, as well as all of the design parameters that you are interested in studying. Also, choose the values of the design parameters around which you want to evaluate the sensitivities. Since dL and dD represent an incremental change in the dimensions, we can leave them both at zero to compute the sensitivities at L=1\:m and D=0.1\:m. The parameter controlling the material permittivity needs no special handling, other than choosing it as one of the parameters in the Sensitivity study.
There are two options in the form shown below for the gradient method: the Adjoint method, which computes the sensitivities with respect to all of the design parameters in a single additional linearized solution step, and the Forward method, which requires one additional linearized solution per design parameter.
A Sensitivity feature using the adjoint method is added to the study sequence, and the settings show the objective function and the parameters that are considered.
After solving, you will now be able to go to Results > Derived Values > Global Evaluation and enter the expressions fsens(dL), fsens(dD), and fsens(epsilon_r) to evaluate the sensitivity of the capacitance with respect to the design parameters. Of course, you can also compare these to the previously computed analytic sensitivities and observe agreement.
| Sensitivity | Analytic | Computed |
| --- | --- | --- |
| \frac{\partial C}{\partial \epsilon_r} | 88.542 pF | 88.542 pF |
| \frac{\partial C}{\partial L} | 354.17 pF/m | 354.17 pF/m |
| \frac{\partial C}{\partial D} | -1770.8 pF/m | -1770.8 pF/m |
Now that we have the basic idea down in terms of computing the sensitivity of the capacitance of this system, what else can we do? Certainly, we can move on to some more complicated geometries, but there are a few points that we need to keep in mind as we move beyond this example.
There are two conditions that must be fulfilled for sensitivity analysis to work. First, the objective function itself must be differentiable with respect to the solution field. This means that objective functions such as the maximum or minimum of a field are not possible. Second, the parameters must be continuous in the real-number space. Thus, integer parameters (e.g., the number of spokes on a wheel) are not possible.
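When the quantity of interest really is a maximum, a common workaround (a general modeling trick for gradient-based analysis, not a specific COMSOL feature) is to replace it with a differentiable aggregate such as a p-norm, which approaches the true maximum as p grows:

```python
# A p-norm is a smooth, differentiable stand-in for max(field).
# Assumes all field values are nonnegative.
def p_norm(values, p):
    return sum(v**p for v in values) ** (1.0 / p)

field = [0.2, 0.9, 1.0, 0.5]  # hypothetical sampled field values

print(p_norm(field, 8))    # already close to the true maximum of 1.0
print(p_norm(field, 64))   # approaches the true maximum even more closely
```

The trade-off is that very large p makes the objective poorly scaled, so moderate values are typically used in practice.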
Sensitivity calculations are not currently available for eigenvalue problems, ray tracing, or particle tracing.
The design parameters themselves are typically Global Parameters, but you can also use the Sensitivity interface to add a Control Variable Field defined over domains, boundaries, edges, and points as desired.
Objective functions are typically defined in terms of integrals of the solution over domains or boundaries. It is also possible to set up an objective function as a Probe at a particular location in the model space. Any derived quantity based on the solution field, such as the spatial gradients of the solution, can be used as part of the objective function.
Computing design sensitivities is helpful for determining which parameters affect our objective function the most and gives us an idea about which parameters we might want to focus on as we start to consider design changes. Some other examples that use this functionality include our tutorial model of an axial magnetic bearing and our sensitivity analysis of a communication mast detail. Of course, this method can be used in far more cases than we can describe at once.
This story continues when we start to use these sensitivities to improve our objective function — where we optimize the design. This can be done with the Optimization Module, which we will cover in an upcoming blog post.