Suppose you are tasked with computing the fluid flow through a network of pipes, as depicted below. You can see that there are many bends with long straight sections in between.
A piping network. Image by Hervé Cozanet, via Wikimedia Commons.
The geometry for a fluid flow model of just one pipe in this network might look like the image below.
A CAD model of a pipe volume for fluid flow analysis.
If you go ahead and mesh this geometry with just the default Physics-Controlled Mesh capability, you will obtain a mesh like that pictured below. Note that the boundary layer mesh is applied to the pipe walls and that the mesh is otherwise quite uniform in size within the long, straight sections of the pipe.
The default finite element mesh for this fluid flow problem includes a boundary layer mesh on all no-slip boundaries.
An experienced fluid flow analyst would immediately recognize that the flow field in the long, straight sections will be primarily parallel to the pipe and vary quite gradually along the axis. Meanwhile, the variation of the velocity along the cross section and around the bends will be significant. We can exploit this foreknowledge of the solution to partition the geometry into various domains.
The pipe domain is partitioned into several subdomains, which are shown in different colors.
Once the geometry is partitioned, we can apply a Free Tetrahedral mesh feature. This mesh should be applied to only one of the domains along the length of the pipe: a domain that represents a bend (depicted below). Note that the Boundary Layers mesh feature is not yet applied.
A tetrahedral mesh is applied to only one of the domains.
From this one meshed domain, we can now use the Swept mesh functionality in the straight sections, as illustrated below. It is also possible to specify a Distribution subfeature to the Swept feature to explicitly control the element distribution and set up a nonuniform element size along the length. Since we anticipate that the flow will vary gradually along the length, the elements can be quite stretched in the axial direction.
The swept mesh along the straight sections also has a nonuniform element distribution.
We can now apply a tetrahedral mesh to the two bent sections and sweep the remaining straight sections. The last step of the meshing sequence is to apply the Boundary Layers mesh feature.
The combination of a tetrahedral and swept mesh with the boundary layers applied at the walls.
From the above images, we can observe that the swept mesh can significantly reduce the size of the model for this fluid flow problem. Our Flow Through a Pipe Elbow tutorial is one example in which this swept meshing technique is used.
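To make "significantly reduce the size of the model" concrete, here is a rough back-of-envelope estimate in plain Python (not a COMSOL computation; the radius, length, element size, and stretch factor are all assumed for illustration) of the element-count savings in the straight sections:

```python
import math

# Back-of-envelope estimate (plain Python, not a COMSOL computation) of the
# element-count savings from sweeping the straight pipe sections. The radius,
# length, element size, and stretch factor are all assumed for illustration.
R = 0.05        # pipe radius [m]
straight = 2.0  # total length of the straight sections [m]
h = 0.01        # target cross-sectional element size [m]
stretch = 5.0   # axial stretch allowed by the gradual flow variation

volume = math.pi * R**2 * straight

# Isotropic tetrahedra of size h (regular-tet volume as a rough proxy):
tet_vol = h**3 / (6.0 * math.sqrt(2.0))
n_tet = volume / tet_vol

# Swept triangular prisms: size h in cross section, stretch*h axially:
prism_vol = (math.sqrt(3.0) / 4.0) * h**2 * (stretch * h)
n_prism = volume / prism_vol

print(f"isotropic tets: {n_tet:,.0f}")
print(f"swept prisms  : {n_prism:,.0f}")
print(f"reduction     : {n_tet / n_prism:.1f}x")
```

Even this crude count ignores the boundary layer mesh, but it shows where the savings come from: the axial stretch factor multiplies directly into the element count.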
Shifting gears, let’s now consider an inductive coil similar to the one pictured below.
An inductive coil. Image by Spinningspark, via Wikimedia Commons.
This coil consists of a long wire with quite gradual bends. If tasked with computing the inductance, we would also need to consider the surrounding air and the magnetic core materials. The geometry for such a model and the default mesh might look like the image below.
A coil surrounding a magnetic core in an air domain.
The default Free Tetrahedral mesh feature is applied to the entire model.
You’ve probably already recognized that the coil itself is an excellent candidate for swept meshing. The coil is long and uniform in cross section. As such, we can begin with a triangular surface mesh at one end and then sweep it along the entire length of the coil to create triangular prismatic elements.
A triangular mesh (represented in blue) is applied to the cross-sectional surface at one end of the coil and then swept along the entire length.
We do, however, still need a volumetric mesh of the surroundings. This surrounding volume is amenable to only tetrahedral meshing, not swept meshing. A volume that is to be meshed with tetrahedral elements can only have triangular surface elements on all of its boundaries. Thus, we must first add a Convert feature to the mesh sequence and apply it to the surfaces between the coil and its surroundings. The operation is designed to split the elements touching the boundaries such that triangular face elements are created.
The convert operation introduces triangular elements on the boundaries of the coil.
The remaining domains are meshed with tetrahedra.
From the above image, we can see that fewer elements are used to describe the coil than in the default mesh settings. A similar example is the Anisotropic Heat Transfer Through Woven Carbon Fibers tutorial, which considers a combination of swept meshing and tetrahedral meshing of the surroundings (albeit with different physics involved).
Finally, let us consider a microelectromechanical system (MEMS) structure that is composed of microscale structural features that deflect. If different electric potentials are applied to different objects, the resulting deflection of the structure is measurable through a change in capacitance; conversely, a change in the applied potentials can deform the system. Such an effect is exploited in devices like comb drives, accelerometers, and gyroscopes.
A MEMS cantilever beam at resonance. Image by Pcflet01, via Wikimedia Commons.
A common characteristic of such MEMS structures is that they are composed of various thin planar layers that need to be meshed along with the surrounding air domain. The gaps between structures may also be quite slender. A simplified model for part of such a MEMS structure might appear similar to the model shown below, with interleaved fingers.
A simplified model representing part of a typical MEMS structure.
When using the default mesh settings, small elements will be inserted in the narrow air gaps between the parts (illustrated below). However, we do know that the fingers on either side will be at different potentials and that the gap between the straight sections of the fingers and the ground plane will have a uniform electric field.
The default mesh settings produce smaller elements than are needed in regions where we know the electric field will be nearly uniform.
The present structure is actually not amenable to swept meshing, as there are no domains in this model that have a uniform cross section. But, if we introduce some partitioning planes, we can break this domain up into prismatic domains that are amenable to swept meshing. We will first introduce two partitioning planes — one at the top and one at the bottom surface of the fingers — that will partition both the air domain and the two solid domains. We add these planes as Work Plane features to the geometry sequence, and they are used as input by the two Partition Object features that divide the solids.
Two planes are introduced that partition both the air and the solid domains.
It is then possible to introduce additional partitioning planes, as shown below, to delineate the long, straight sections of the fingers. This is important because we know the electric fields and displacements will vary quite gradually in these regions.
Two additional planes divide the fingers into prismatic domains.
Now we can begin the meshing process using the Mapped mesh feature on the new rectangular surfaces introduced by the partitioning. The nonrectangular faces on the same plane can be meshed with triangular elements, as illustrated below.
A surface mesh applied to one of the partitioning planes.
The surface mesh can be used as the starting point for the swept mesh, which can be applied to the two layers of the thin domains — the fingers and the air gaps between the fingers and the ground. The air domain can be meshed with tetrahedral elements after a convert operation is applied to the adjacent rectangular element faces.
The final mesh consists of a combination of free and swept meshes.
We can observe that the number of total elements in the finite element model has been reduced. For an example demonstrating this technique of partitioning planes and swept meshing, please see our Surface Micromachined Accelerometer tutorial.
Swept meshing is a powerful technique for minimizing the computational complexity of many classes of COMSOL Multiphysics models. By applying your engineering judgment and knowledge to each problem, you can obtain high-accuracy results quickly and at a lower computational cost than with the default mesh settings.
While you, of course, do not always need to use this approach, you should consider applying it to cases where your geometry has high aspect ratios, there are relatively thin or thick regions, and you are reasonably certain that the solution will be represented well by the swept mesh.
Consider a one-dimensional domain on the x-axis with a source localized around x = 0. We can plot the strength of the source as a function of x and it may look like this:
Here, we have assumed that the strength has a constant value of 1/w within the interval [-w/2, w/2] and is zero everywhere else. This gives a rectangular shape of width w and height 1/w, as shown in the figure above. The function is often called a rectangular, top-hat, or, sometimes, a disc function. The total strength of the source is given by the area of the rectangle, which is unity.
For linear systems, if we only care about what happens far away from the source, where |x| \gg w, then the actual shape of the source strength does not matter much, as long as the area beneath that shape is the same. Furthermore, we are free to make w progressively smaller and smaller: the width of the rectangle decreases while its height increases in such a way that the total area remains the same, as shown in the graph below.
The localized source represented by the blue curve is progressively made thinner and taller (the orange and green curves), while maintaining the integrated strength of unity.
Eventually, we arrive at a rectangle that is infinitesimally thin and infinitely tall, but still has a well-defined area of unity. This leads us to the so-called delta function \delta(x) and, correspondingly, the localized source now becomes an idealized point source of unit strength.
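This limiting process is easy to check numerically. The short Python sketch below (the test function and the widths are arbitrary choices for illustration) integrates a smooth function against narrower and narrower unit-area top-hats and watches the result approach the value at the origin:

```python
import math

# A numerical look (a sketch, not COMSOL) at the limiting process: integrating
# a smooth function against ever-narrower top-hats of unit area converges to
# the value of the function at the origin -- the "sifting" property of delta(x).
def tophat_integral(f, w, n=2000):
    """Midpoint-rule integral of f(x) * (1/w on [-w/2, w/2])."""
    dx = w / n
    return sum(f(-w / 2 + (i + 0.5) * dx) * (1.0 / w) * dx for i in range(n))

f = math.cos   # arbitrary smooth test function, with f(0) = 1
for w in (1.0, 0.1, 0.01):
    print(f"w = {w:>4}: integral = {tophat_integral(f, w):.6f}")
# As w -> 0, the integrals approach f(0) = 1.
```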
The delta function has some convenient properties. Its value is zero everywhere except at the origin:

\delta(x) = 0 \quad \textrm{for} \quad x \neq 0
Integrating the product of a delta function and another function just extracts the value of the latter function at the origin:

\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)
A point source at a general position x=a can be obtained by a simple coordinate shift of the delta function \delta(x-a). We have

\delta(x-a) = 0 \quad \textrm{for} \quad x \neq a
and

\int_{-\infty}^{\infty} f(x)\,\delta(x-a)\,dx = f(a)
It is also easy to generalize the delta function and the corresponding point source to higher dimensions. For example, in 2D, we have

\delta(x-a,\,y-b) = \delta(x-a)\,\delta(y-b) = 0 \quad \textrm{for} \quad (x,y) \neq (a,b)
and

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x-a)\,\delta(y-b)\,dx\,dy = f(a,b) \qquad (1)
This tutorial solves the Poisson equation on a unit disc with a point source at the origin. The equation reads
-\nabla^2 u = \delta(x)\,\delta(y) \qquad (2)
where u is the dependent field variable to be solved.
At first sight, it may not be obvious how to discretize this equation to be solved numerically. What value do we put at the origin for the source term on the right-hand side? The value of the delta function is infinite there, but computers don’t like infinities!
Here, we will see that the weak formulation comes in handy. Recall that in this introductory blog post on the weak form, we multiply the differential equation to be solved by a test function and integrate over the entire domain (see Eq. (4) in that post). We can follow the same procedure here to solve Eq. (2). After multiplying by a test function \tilde{u}(x,y) and integrating over the unit disc domain, the right-hand side of Eq. (2) simply becomes
\int_\Omega \tilde{u}(x,y)\,\delta(x)\,\delta(y)\,dx\,dy = \tilde{u}(0,0) \qquad (3)
by using the integration property of the delta function given in Eq. (1). This gives us something very easy to implement in COMSOL Multiphysics.
Start with a new 2D model with the Weak Form PDE physics interface and a Stationary study. Draw a unit circle centered at (0,0) and draw a point there as well. Set the Weak Expressions field under the default Weak Form PDE 1 feature to -test(ux)*ux-test(uy)*uy. This takes care of the left-hand side of Eq. (2) in exactly the same fashion as for the 1D case discussed in this previous post.
Now, for the point source on the right-hand side, \tilde{u}(0,0), we simply add a point Weak Contribution node and select the point at the origin. For the Weak expression, we enter test(u). It’s that simple for the point source!
It may be worth noting that by entering test(u), we set the strength of the point source to unity. For any other source magnitude, simply multiply by a factor. For example, the expression 2*test(u) gives a point source of strength 2.
After finishing the setup with a Dirichlet boundary condition at the perimeter of the circle, we can solve the model and observe the same solution as seen in the point source tutorial mentioned above:
Also as seen in the tutorial, the numerical solution (blue curve) matches the analytical solution (green curve) very well, except near the origin, where a singularity occurs:
As mentioned earlier, the point source provides a convenient idealization of a localized source in situations where we only care about the solution far away from the source. We illustrate this point with the following graph, where we have added three more curves to the graph above. These three curves are numerical solutions to the same Poisson equation in the same unit disc domain, but with various sizes of top-hat, or disc, shaped sources replacing the point source. The integrated strength of each top-hat source is calibrated to unity by setting its height to one over its area, in the same fashion as in the 1D case shown in the image above. As we see clearly from the figure below, all solutions are indistinguishable from one another far away from the sources. (In this example, for |x| \gg 10\:\textrm{mm}.)
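The way the delta source enters the discrete equations can also be reproduced in a few lines outside COMSOL. The sketch below solves the 1D analogue -u'' = \delta(x) on (-1, 1) with linear finite elements; just as with the point Weak Contribution, the source contributes the value of the test function at the source node, i.e., a single 1 in the load vector:

```python
import numpy as np

# 1D analogue of the point source setup (a sketch, not the COMSOL model):
# solve -u'' = delta(x) on (-1, 1) with u(-1) = u(1) = 0 using linear finite
# elements. In the weak form, the delta source contributes test(u) evaluated
# at the origin, i.e. a single entry of 1 in the load vector at that node.
n = 40                            # number of elements (even, so a node sits at x = 0)
x = np.linspace(-1.0, 1.0, n + 1)
h = x[1] - x[0]

# Stiffness matrix for linear elements on a uniform grid (interior nodes only).
m = n - 1
K = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h

# Load vector: the weak contribution test(u) puts a 1 at the node at x = 0.
b = np.zeros(m)
b[m // 2] = 1.0                   # interior-node index of x = 0

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, b)

# Exact solution of -u'' = delta(x) with these BCs: u(x) = (1 - |x|)/2,
# which linear elements reproduce exactly at the nodes.
exact = (1.0 - np.abs(x)) / 2.0
print("max nodal error:", np.abs(u - exact).max())
```

Away from the kink at the origin, the solution is smooth, which is the 1D version of the singular-at-the-source, well-behaved-far-away behavior seen above.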
Here, we have demonstrated the ease of creating point sources using the weak form. The numerical difficulty in the representation of the delta function is circumvented with a simple integration. In upcoming posts we will look at discontinuities and boundary conditions. Stay tuned!
Let’s begin by looking at a microfluidic device, as shown below. Such devices feature small channels that are filled with fluids carrying different chemical species. Within their design, a common goal is to achieve optimal mixing within a small surface area, hence the serpentine channel.
A typical microfluidic device. Image by IXfactory STK — Own work, via Wikimedia Commons.
The schematic below illustrates that there are two fluid inlets, both of which carry the same solvent (water) but a different solute. At the outlet, we want the species to be well mixed. To model such a situation, we want to solve the Navier-Stokes equations for the flow. This computed flow field can then be used as input for the convection-diffusion equation governing the species concentration. The Micromixer tutorial, available in our Application Gallery, is an example of such a model.
Now, if desired, it is possible to model the entire device shown above. However, if we neglect the structure near the inlet and the outlet, we can reasonably assume that the flow within the channel bends will be identical between the unit cells. Therefore, we can greatly reduce our model by solving only for the fluid flow within one unit cell and patterning this flow solution throughout the modeling domain for the convectiondiffusion problem.
Schematic of a microfluidic mixer that depicts the repeated unit cell and the inlet and outlet zones.
For such a unit cell model, the walls of the channels are set to the Wall, No Slip condition. The Periodic Flow condition is used to set the velocity so it is identical at the inlet and outlet boundaries, allowing us to specify a pressure drop over a single unit cell. A pressure constraint at a single point is used to gauge fix the pressure field. The working fluid is water with properties defined at room temperature and pressure. The flow solution on this unit cell is also plotted, as shown below.
The periodic modeling domain and the fluid flow solution.
Now that we have the solution on one unit cell, we can use the General Extrusion component coupling to map the solution from this one unit cell onto the repeated domains. This will enable us to define the flow field in the entire serpentine section.
The General Extrusion feature is available in the model tree under Component > Definitions > Component Coupling. The settings for this feature are illustrated below. To map the solution from one domain into the other domains that are offset by a known displacement along the x-axis, the destination map uses the expression “x-Disp” for the x-expression. Thus, every point in the original domain is mapped along the positive x-direction by the specified displacement. Since there is no displacement in the y-direction, the y-expression is set at its default “y”.
The variable Disp is individually defined within each of the three domains, as shown in the figure below. Therefore, only a single operator is needed to map the velocity field into all of the domains. Within the original domain, a displacement of zero is used.
The settings for the General Extrusion operator and the definitions of the variable in the three domains.
With the General Extrusion operator defined, we can now use it throughout the model. In this example, the operator is used by the Transport of Diluted Species interface to define the velocity field (illustrated below). The velocity field is given by u and v, the fluid velocity in the x- and y-directions, respectively. The components of this velocity field are now defined in all of the repeated domains via the General Extrusion operator: genext1(u) and genext1(v), respectively.
The General Extrusion operator is used to define the velocity field in all three periodic domains.
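Conceptually, the destination map acts as a pullback of each repeated cell onto the source cell. The Python sketch below mimics this behavior (it is not the COMSOL API; the cell width W and the placeholder velocity profile are assumptions for illustration):

```python
import numpy as np

# Conceptual sketch of what the General Extrusion destination map does here
# (this is NOT the COMSOL API). The velocity is known on one unit cell of
# width W; a point in any repeated cell is pulled back to the source cell by
# subtracting that cell's offset Disp, just like the "x-Disp" destination
# expression. W and the velocity profile below are assumptions for illustration.
W = 1.0   # unit-cell width

def u_cell(x, y):
    """Placeholder velocity field defined only on the source cell 0 <= x <= W."""
    return np.sin(np.pi * y) * (1.0 + 0.1 * np.cos(2.0 * np.pi * x / W))

def u_mapped(x, y):
    """Velocity anywhere along the channel, via the per-cell offset Disp."""
    disp = W * np.floor(x / W)      # offset of the cell containing x
    return u_cell(x - disp, y)      # analogue of genext1(u)

# The same local point in cell 0 and in cell 2 sees an identical velocity:
print(u_mapped(0.3, 0.5), u_mapped(2.3, 0.5))
```

In the model itself, Disp is not computed with a floor function but is simply defined as a constant within each of the three domains, which amounts to the same thing.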
Now that the velocity field is defined throughout the modeling domain, the species concentration at the inlet is defined via the Inflow boundary condition. This applies a varying species concentration over the inlet boundary. An Outlet boundary condition is applied at the other end.
Although it is not strictly necessary to do so, the mesh is copied from the one domain used to solve for the fluid flow to all of the other domains. The Copy Domain mesh feature can copy the mesh exactly, thereby avoiding any interpolation of the flow solution between meshes.
The model is solved in two steps — first, the Laminar Flow physics interface is solved, and then the Transport of Diluted Species interface is solved. This is reasonable to do since it is assumed that the flow field is independent of the species concentration. The results of the analysis, including the concentration and the mapped velocity field, are depicted below.
The species concentration (shown in color) is solved in all three repeating domains. The periodic velocity field, indicated by the arrows, is solved in one domain and mapped into the others.
We have discussed how the General Extrusion component coupling can be used to set up a linear pattern of a periodic solution as part of a multiphysics analysis. For circular periodicity, a rotation matrix, not a linear shift, must be used in the destination map. An example of defining such a rotation matrix is detailed in this previous blog post.
The approach we have applied here is appropriate for any instance in which a spatially repeating solution needs to be utilized by other physics. Where might you use it in your multiphysics modeling?
Let’s start this conversation with a very simple problem — computing the capacitance of two parallel flat square metal plates, of side length L=1\:m, separated by a distance D=0.1\:m, and with a dielectric material of relative permittivity \epsilon_r = 2 sandwiched in between. Under the assumption that the fringing fields are insignificant (a rather severe assumption, but we will use it here to get started), we can write an analytic expression for the capacitance:

C = \frac{\epsilon_0 \epsilon_r L^2}{D}

where \epsilon_0 is the permittivity of free space.
We can easily differentiate this expression with respect to our three inputs to find the design sensitivities:

\frac{\partial C}{\partial \epsilon_r} = \frac{\epsilon_0 L^2}{D}, \quad \frac{\partial C}{\partial L} = \frac{2 \epsilon_0 \epsilon_r L}{D}, \quad \frac{\partial C}{\partial D} = -\frac{\epsilon_0 \epsilon_r L^2}{D^2}
Now let’s look at computing these same sensitivities using the functionality of COMSOL Multiphysics.
Schematic of a parallel plate capacitor model, neglecting the fringing fields.
We can start by building a model using the Electrostatics physics interface. Our domain will be a block of length L and height D with a relative permittivity of \epsilon_r. The boundary condition at the bottom is a Ground condition, and at the top, an Electric Potential condition sets the voltage to V_0=1\:V. The sides of the block have the default Zero Charge boundary condition, which is equivalent to neglecting the fringing fields. We can solve this model and find the voltage field between the two plates. Based on this solution, we can also calculate the system capacitance from the integral of the electric energy density, W_e, throughout the entire model:

C = \frac{2}{V_0^2} \int_\Omega W_e \, d\Omega
This equation does assume that one plate (or terminal) is held at a voltage V_0, while all other terminals in the model are grounded. The integral of the electric energy density over all domains is already computed via the built-in variable comp1.es.intWe, and we can use it to define an expression for the computed capacitance. Of course, we will want to compare this value and our computed sensitivities to the analytic values, so we can define some variables for these quantities. We can use the built-in differentiation operator, d(f(x),x), to evaluate the exact sensitivities, as shown in the screenshot below.
Variables are used to compute the model capacitance, as well as to compute the exact capacitance and its sensitivities. The built-in differentiation operator, for example d(C_exact,L), can be used to evaluate the exact sensitivities.
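As a quick sanity check outside COMSOL, the analytic sensitivities can also be verified with central finite differences of the closed-form capacitance (a plain Python sketch; the parameter values are those of this example):

```python
# Quick cross-check (outside COMSOL) of the analytic sensitivities of
# C = eps0 * eps_r * L^2 / D via central finite differences, using this
# example's values: eps_r = 2, L = 1 m, D = 0.1 m.
eps0 = 8.854187817e-12   # permittivity of free space [F/m]

def C(eps_r, L, D):
    return eps0 * eps_r * L**2 / D

p = dict(eps_r=2.0, L=1.0, D=0.1)

def fd(name, h=1e-6):
    """Central finite difference of C with respect to one parameter."""
    hi, lo = dict(p), dict(p)
    hi[name] += h
    lo[name] -= h
    return (C(**hi) - C(**lo)) / (2.0 * h)

print(f"C         = {C(**p) * 1e12:9.3f} pF")
print(f"dC/deps_r = {fd('eps_r') * 1e12:9.3f} pF")     # analytic: eps0*L^2/D
print(f"dC/dL     = {fd('L') * 1e12:9.3f} pF/m")       # analytic: 2*eps0*eps_r*L/D
print(f"dC/dD     = {fd('D') * 1e12:9.3f} pF/m")       # analytic: -eps0*eps_r*L^2/D^2
```

The finite-difference values agree with the analytic expressions to several digits, which is the same kind of agreement we are about to reproduce with the Sensitivity study.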
After solving our model, we can evaluate the computed system capacitance, compare it to the analytic value, and evaluate the design sensitivities analytically. Now let’s look at how to compute these sensitivities with COMSOL Multiphysics.
The parameters that we are considering affect both the material properties and the geometric dimensions of the model. When the design parameters affect the geometry, we need to use the Deformed Geometry interface, which lets us evaluate the sensitivities with respect to a geometric deformation.
The design parameters effect a change in the geometry as shown. The hidden faces experience no displacement normal to the boundaries.
We introduce two new Global Parameters, dL and dD, which represent a change in L and D. These will be used in the Deformed Geometry interface, which has four relevant features. First, a Free Deformation feature is applied to the domain, which means that the computational domain can deform based on the applied boundary conditions. Next, Prescribed Mesh Displacement features are applied to the six faces of the domain. In the screenshot below, the deformation (dL) normal to the faces is prescribed as shown in the sketch above.
The Prescribed Mesh Displacement features are used to control the displacement normal to all domain boundaries.
Finally, to actually compute the sensitivities, we must add a Sensitivity node to the study sequence. This is shown in the screenshot below. You will want to enter the objective function expression, in this case C_computed, as well as all of the design parameters that you are interested in studying. Also, choose the value for the design parameters around which you want to evaluate the sensitivities. Since dL and dD represent an incremental change in the dimensions, we can leave these both at zero to compute the sensitivities for L=1\:m and D=0.1\:m. The parameter controlling the material permittivity needs no special handling, other than to choose it as one of the parameters in the Sensitivity study.
There are two options in the form shown below for the gradient method: the adjoint method, which computes the sensitivities with respect to all of the design parameters in a single additional linearized solve, and the forward method, which requires one additional solve per design parameter.
A Sensitivity feature using the adjoint method is added to the study sequence, and the settings show the objective function and the parameters that are considered.
After solving, you will now be able to go to Results > Derived Values > Global Evaluation and enter the expressions fsens(dL), fsens(dD), and fsens(epsilon_r) to evaluate the sensitivity of the capacitance with respect to the design parameters. Of course, you can also compare these to the previously computed analytic sensitivities and observe agreement.
|  | Analytic | Computed |
| --- | --- | --- |
| \frac{\partial C}{\partial \epsilon_r} | 88.542 pF | 88.542 pF |
| \frac{\partial C}{\partial L} | 354.17 pF/m | 354.17 pF/m |
| \frac{\partial C}{\partial D} | -1770.8 pF/m | -1770.8 pF/m |
Now that we have the basic idea down in terms of computing the sensitivity of the capacitance of this system, what else can we do? Certainly, we can move on to some more complicated geometries, but there are a few points that we need to keep in mind as we move beyond this example.
There are two conditions that must be fulfilled for sensitivity analysis to work. First, the objective function itself must be differentiable with respect to the solution field. This means that objective functions such as the maximum and minimum of a field are not possible. Second, the parameters must be continuous in the real-number space. Thus, integer parameters (e.g., the number of spokes on a wheel) are not possible.
Sensitivity calculations are not currently available for eigenvalue problems, nor for ray tracing or particle tracing simulations.
The design parameters themselves are typically Global Parameters, but you can also use the Sensitivity interface to add a Control Variable Field defined over domains, boundaries, edges, and points as desired.
Objective functions are typically defined in terms of integrals of the solution over domains or boundaries. It is also possible to set up an objective function as a Probe at a particular location in the model space. Any derived quantity based on the solution field, such as the spatial gradients of the solution, can be used as part of the objective function.
Computing design sensitivities is helpful for determining which parameters affect our objective function the most and gives us an idea about which parameters we might want to focus on as we start to consider design changes. Some other examples that use this functionality include our tutorial model of an axial magnetic bearing and our sensitivity analysis of a communication mast detail. Of course, this method can be used in far more cases than we can describe at once.
This story continues when we start to use these sensitivities to improve our objective function — where we optimize the design. This can be done with the Optimization Module, which we will cover in an upcoming blog post.
Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to two closely related topics of verification and validation. Let’s clarify what these two terms mean in the context of numerical simulations.
To numerically simulate a physical problem, we take two steps:

1. Construct a mathematical model of the physical system, typically a set of differential equations together with initial and boundary conditions.
2. Solve the mathematical model numerically.
There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables. Validation is the process of making sure such errors are not introduced when constructing the mathematical model. Verification, on the other hand, is to ascertain that the mathematical model is accurately solved. Here, we are ensuring that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate.
In brief, during validation we ask if we posed the appropriate mathematical model to describe the physical system, whereas in verification we investigate if we are obtaining an accurate numerical solution to the mathematical model.
A comparison between the processes of validation and verification.
Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs).
How do we check if a simulation tool is accurately solving an IBVP?
One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the raison d’être of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming.
Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called qualification. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure.
The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas.
Verification models are available in the Application Libraries of COMSOL Multiphysics.
What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions.
The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems.
In the Method of Manufactured Solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute this solution into the differential equations and obtain a consistent set of source terms, initial conditions, and boundary conditions. This usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain the initial and boundary conditions.
Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started.
Let us illustrate the steps with a simple example.
Consider a 1D heat conduction problem in a bar of length L:

A_c \rho C_p \frac{\partial T}{\partial t} - \frac{\partial}{\partial x}\left(A_c k \frac{\partial T}{\partial x}\right) = Q, \quad 0 < x < L

with initial condition

T(x, 0) = T_0(x)

and fixed temperatures at the two ends given by

T(0, t) = g_1(t), \quad T(L, t) = g_2(t)

The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q.
Our goal is to verify the solution of this problem using the method of manufactured solutions.
First, we assume an explicit form for the solution. Let’s consider the temperature distribution

u(x, t) = 500\:\textrm{K} + 100\:\textrm{K}\,\sin\left(\frac{\pi x}{L}\right)\cos\left(\frac{2\pi t}{\tau}\right)

where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T.
Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term.
In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator d(), one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below.
The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives.
We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a handcalculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine.
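To see what this derivation involves, here is a sketch of the same manipulation in SymPy instead of COMSOL Multiphysics. The solution form used below, u = 500 K + (t/τ)(x/L)(1 − x/L)·1 K, is a hypothetical choice that is merely consistent with the 500 K boundary values in this example and is not necessarily the exact expression used in the model; units are suppressed in the code.

```python
import sympy as sp

x, t, L, tau, A_c, rho, C_p, k = sp.symbols("x t L tau A_c rho C_p k", positive=True)

# Hypothetical manufactured solution (units suppressed):
# u = 500 + (t/tau) * (x/L) * (1 - x/L)
u = 500 + (t / tau) * (x / L) * (1 - x / L)

# Substitute u into A_c*rho*C_p*dT/dt = d/dx(A_c*k*dT/dx) + Q
# and solve for the consistent source term Q
Q = sp.simplify(A_c * rho * C_p * sp.diff(u, t) - sp.diff(A_c * k * sp.diff(u, x), x))
print(Q)
```

The printed expression can then be compared, term by term, against a hand calculation to cross-check the symbolic algebra, exactly as suggested above.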
Next, we compute the initial and boundary conditions. The initial condition is the assumed solution evaluated at t = 0.
The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K.
Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the Heat Transfer in Solids physics interface.
Add initial values, boundary conditions, and sources derived from the assumed solution.
For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second is obtained using quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default.
The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right).
The MMS gives us the flexibility to check different parts of the code. In the example given above, for the purpose of simplicity we have intentionally left many parts of the IBVP unchecked. In practice, every item in the equation should be checked in the most general form. For example, to check if the code accurately handles nonuniform crosssectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties.
A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., n \cdot (A_c k \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_c k}{L}\frac{t}{\tau}\cdot 1\,\mathrm{K}.
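This flux evaluation can also be sketched symbolically. The code below uses a hypothetical manufactured solution of the form u = 500 + (t/τ)(x/L)(1 − x/L) (units suppressed; the form is merely consistent with the 500 K boundary values, not necessarily the one used in the model) and evaluates the conductive flux magnitude A_c k ∂u/∂x at the left end, up to the sign convention implied by the outward normal:

```python
import sympy as sp

x, t, L, tau, A_c, k = sp.symbols("x t L tau A_c k", positive=True)

# Hypothetical manufactured solution (units suppressed)
u = 500 + (t / tau) * (x / L) * (1 - x / L)

# Magnitude of the conductive flux at the left end, x = 0:
# proportional to t/tau, as quoted in the text
flux_left = sp.simplify((A_c * k * sp.diff(u, x)).subs(x, 0))
print(flux_left)
```

The result is A_c k t/(L τ), matching the expression quoted above.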
In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution where the derivative vanishes on the left end. For example, we can use
Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations.
Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium!
As shown in the graph above, the solutions obtained by the linear element and the quadratic element converged as the mesh size was reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which will provide a quantitative check of the numerical procedure.
For example, for the stationary version of the problem, the standard finite element error estimate for the error measured in the m-order Sobolev norm is
where u and u_h are the exact and finite element solutions, h is the maximum element size, and p is the order of the approximation polynomials (shape functions). For m = 0, this gives the error estimate
where C is a mesh-independent constant.
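Written out, these two standard textbook estimates take the form (with \|\cdot\|_m denoting the m-order Sobolev norm):

```latex
\| u - u_h \|_m \le C\, h^{p+1-m} \| u \|_{p+1},
\qquad
\| u - u_h \|_0 \le C\, h^{p+1} \| u \|_{p+1} \quad (m = 0).
```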
Returning to the method of manufactured solutions, this implies that the solution with linear elements (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error against mesh size on a log-log plot, the slope should asymptotically approach 2. If this does not happen, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties. As the figures below show, the numerical solution converges at the theoretically expected rate.
Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2-norm (m = 0) for linear elements, which is consistent with the theoretical prediction.
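Extracting the observed convergence rate from a mesh refinement study is a one-line fit. The sketch below uses synthetic (h, error) pairs that follow the expected O(h²) behavior for p = 1, m = 0; in practice, the numbers would come from a sequence of refined meshes, and the constant 3.7 is made up.

```python
import numpy as np

# Synthetic (mesh size, error) pairs following the expected O(h^2)
# behavior for linear elements; 3.7 plays the role of the constant C.
h = np.array([0.2, 0.1, 0.05, 0.025])
err = 3.7 * h**2

# The observed convergence rate is the slope of log(err) versus log(h)
slope, _ = np.polyfit(np.log(h), np.log(err), 1)
print(round(slope, 3))   # → 2.0
```

A slope that asymptotically approaches 2 confirms the theoretical rate; a lower slope signals a problem with the code or with the regularity of the inputs.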
While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where a priori error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior.
In the case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, the thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution.
Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation.
Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary conditions, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check in the case of isotropic, uniform thermal conductivity, for temperature-dependent or anisotropic conductivity more care must be taken when manufacturing the solution so as not to violate such assumptions.
When using the method of manufactured solutions, the solution exists by construction. In addition, uniqueness proofs are available for a much larger class of problems than the class for which exact analytical solutions are known. Thus, the method gives us more room to work with than searching for exact solutions starting from given source terms and initial and boundary conditions.
The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us!
When modeling a manufacturing process, such as the heating of an object, it is possible for irreversible damage to occur due to a change in temperature. This may even be a desired step in the process. With the Previous Solution operator, we can model such damage in COMSOL Multiphysics. Here, we will look at the “baking off” of a thin coating on a wafer heated by a laser.
Let’s consider a wafer of silicon with a very thin layer of material coated on the surface. This thin film may have been introduced in a previous processing step, and we now want to quickly “bake off” this material by heating the wafer with a laser. The wafer is mounted on a spinning stage while the laser heat source traverses back and forth over the surface.
We will consider a layer of material that is very thin compared to the wafer thickness. We can thus assume that the film does not contribute to the thermal mass of the system, nor will it provide any additional conductive heat path. However, this coating will affect the surface emissivity. If the coating is undamaged, the emissivity is 0.8. Once the coating has baked off, the emissivity of that region of the wafer will change to 0.6. This will alter both the amount of heat absorbed from the laser heat source and the heat radiated from the wafer to the surroundings.
A laser beam traversing over a rotating wafer ablates a thin surface coating when the temperature is high enough.
We will not concern ourselves too greatly with the process by which the coating is removed from the wafer. Although the actual process may include phase change, melting, boiling, ablation, and chemical reactions, we are, in this case, dealing with a very thin layer of material. Thus, we can simply say that once the temperature of the wafer surface exceeds 60°C, the coating immediately disappears. Under the assumption of very fast dynamics of the material removal process relative to the heating of the wafer, this is a valid approach.
We will begin with the previously developed model of a rotating wafer exposed to a moving heat source. An additional boundary equation will be added to our existing model. Since the coating is fixed to the wafer, we need to model this equation in a coordinate system that moves with the rotation of the wafer.
The equation we add will track the surface emissivity on the top boundary of the wafer. The Previous Solution operator is used since we simply want to change the surface emissivity once the temperature gets above the specified value and leave it otherwise unchanged. We have already introduced the use of the Previous Solution operator in a previous blog entry. We will now focus more specifically on modeling the removal of the film from the wafer surface.
The settings for the Boundary ODEs and DAEs interface. Note the shape function settings.
The domain settings and initial values settings for the Boundary ODEs and DAEs interface, which models the emissivity of the surface of the wafer.
The settings for the Boundary ODEs and DAEs interface are shown above. Note that a Constant Discontinuous Lagrange discretization is used to solve for the field “emissivity”, the surface emissivity. This discretization is equivalent to saying that the emissivity will have a constant value over each element and that the field can be discontinuous across different elements. We are assuming that the film is either present or not present, so the surface emissivity will have two discrete states. The initial value of the field variable is the undamaged value of the surface emissivity.
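The irreversible switch itself reduces to a few lines of logic. The sketch below mimics it in plain Python at a single surface point; the emissivity values (0.8 coated, 0.6 bare) and the 60°C threshold follow the text, while the temperature history is made up for illustration. As with the Previous Solution operator, the decision is based on the previous time step, and the change is one-way.

```python
T_ABLATE = 60.0                  # ablation threshold, degC
EPS_COATED, EPS_BARE = 0.8, 0.6  # surface emissivity with/without the coating

def update_emissivity(eps_prev, T_prev):
    """Switch permanently to the bare value once the previous-step
    temperature has exceeded the threshold; otherwise keep the state."""
    return EPS_BARE if (T_prev > T_ABLATE or eps_prev == EPS_BARE) else EPS_COATED

eps = EPS_COATED                          # initial value: undamaged coating
for T in [20.0, 45.0, 63.0, 58.0, 40.0]:  # made-up history; cools down later
    eps = update_emissivity(eps, T)
print(eps)   # → 0.6 (stays baked off even after cooling)
```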
The settings for the Heat Flux boundary condition use the computed surface emissivity.
The settings for the Diffuse Surface boundary condition use the computed surface emissivity.
The computed surface emissivity is used in two places within the Heat Transfer in Solids interface, as shown above. The applied Heat Flux boundary condition and the radiation of ambient temperature via the Diffuse Surface boundary condition both reference the emissivity field.
Since the surface emissivity is constant across each element, a finite element mesh size of 0.3 mm is used to obtain a smoother representation of the damage field. Also, the relative solver tolerance is set to 1e-6.
The results of the simulation are depicted in the animation below. As the temperature rises, certain portions of the wafer surface rise above the ablation temperature and the surface emissivity changes. The process is complete once the entirety of the wafer surface has been heated above the desired temperature.
An animation of the temperature field of the laser heating the rotating wafer (left). The dark gray color indicates the damage zone (right).
We have demonstrated how to model an irreversible change in the state of a material. In this case, we have analyzed the removal of a thermodynamically negligible thin layer of material from the surface of a wafer and modified the resultant surface emissivity as a consequence. The technique outlined here for using the Previous Solution operator can also be used in many other cases. What comes to your mind?
If you are interested in downloading the model related to this article, it is available in our Application Gallery.
After obtaining a solution, we often want to zoom in and take a closer look at a smaller region. For example, after we run an analysis for the structural mechanics of a loaded spring, we may want to not only visualize the solved variables on the entire spring, but also plot the data in some local regions. The figure below shows such an area along the midplane of the curved geometry.
Elastic strain energy density of a loaded spring plotted on all surfaces (left) versus on a vertical midplane (right). Note that this midplane does not exist in the original geometry when the structural mechanics analysis is performed.
Another example is when we are interested in visualizing data in a particular 3D shape that does not exist in the original model. This comes after performing a structural mechanics analysis of a wrench. See the figure below.
First principal strain of a wrench plotted on an arbitrary 3D domain. Again, the 3D domain does not exist in the original geometry.
If this region of interest is an existing, separate geometry domain, then the data extraction and visualization can be easily done via a selection of that domain. However, we do not usually know where to look before seeing the results, and it could be difficult to partition the whole geometry into separate domains a priori. On the other hand, once we have solved the physics, it is impractical to modify the geometry, mesh, and solve the model all over again only for postprocessing purposes. How should we deal with this challenge?
One of the component coupling operators, General Extrusion, gives us the solution. The key idea is to build the region of interest as a new geometry in a new component and then use the General Extrusion operator to map the solution data from the original component onto this new component. This approach avoids remeshing and re-solving the original component. Another advantage is that the new component can be of arbitrary shape (but not larger than the original component) and space dimension (i.e., equal to or lower than that of the original component).
For a step-by-step tutorial on postprocessing local data, we will use the wrench tutorial model as an example, found via File > Application Libraries > COMSOL Multiphysics > Structural Mechanics. Following the steps below, you will be able to quickly master this trick.
COMSOL Multiphysics is an extremely flexible software platform. Besides what is discussed in this blog post, you can use component couplings to perform submodeling, implement a controller, and perform model rotation. Meanwhile, the COMSOL support team would be more than happy to assist you as you explore even more possibilities.
Whenever we want to solve a modeling problem involving Maxwell’s equations under the assumption that:
and
we can treat it as a frequency-domain problem. When the electromagnetic field solutions are wavelike, such as for resonant structures, radiating structures, or any problem where the effective wavelength is comparable to the sizes of the objects we are working with, the problem can be treated as a wave electromagnetics problem.
COMSOL Multiphysics has a dedicated physics interface for this type of modeling — the Electromagnetic Waves, Frequency Domain interface. Available in the RF and Wave Optics modules, it uses the finite element method to solve the frequency domain form of Maxwell’s equations. Here’s a guide for when to use this interface:
The wave electromagnetic modeling approach is valid in the regime where the object sizes range from approximately \lambda/100 to 10 \lambda, regardless of the absolute frequency. Below this size, the Low Frequency regime is appropriate. In the Low Frequency regime, the object will not be acting as an antenna or resonant structure. If you want to build models in this regime, there are several different modules and interfaces that you could use. For details, please see this blog post.
The upper limit of \sim 10 \lambda comes from the memory requirements for solving large 3D models. Once your modeling domain size is greater than \sim 10\lambda in each direction, corresponding to a domain size of (10\lambda)^3 or 1000 cubic wavelengths, you will start to need significant computational resources to solve your models. For more details about this, please see this previous blog post. On the other hand, 2D models have far more modest memory requirements and can solve much larger problems.
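To put the ~10λ guideline in perspective, here is a back-of-envelope element count. It assumes roughly five elements per wavelength, a common rule of thumb for second-order elements; the exact resolution requirement is problem dependent.

```python
# Rough element count for a (10*lambda)^3 domain, assuming
# ~5 elements per wavelength (a common rule of thumb; assumed here)
wavelengths_per_side = 10
elements_per_wavelength = 5

n_elements = (wavelengths_per_side * elements_per_wavelength) ** 3
print(n_elements)   # → 125000 elements for 1000 cubic wavelengths
```

Each 3D vector element carries many degrees of freedom, so memory use grows quickly with domain size, which is why 2D models remain so much cheaper.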
For problems where the objects being modeled are much larger than the wavelength, there are two options:
If you are interested in X-ray frequencies and above, then the electromagnetic wave will interact with and scatter from the atomic lattice of materials. This type of scattering is not appropriate to model with the wave electromagnetics approach, since that approach assumes that within each modeling domain the material can be treated as a continuum.
So now that we understand what is meant by wave electromagnetics problems, let's further classify the most common application areas of the Electromagnetic Waves, Frequency Domain interface and look at some examples of its usage. We will only look at a few representative examples here that are good starting points for learning the software. These applications are selected from the RF Module and Wave Optics Module Application Libraries, as well as from the online Application Gallery.
An antenna is any device that radiates electromagnetic radiation for the purposes of signal (and sometimes power) transmission. There is an almost infinite number of ways to construct an antenna, but one of the simplest is a dipole antenna. On the other hand, a patch antenna is more compact and used in many applications. Quantities of interest include the S-parameters, antenna impedance, losses, and far-field patterns, as well as the interactions of the radiated fields with any surrounding structures, as seen in our Car Windshield Antenna Effect on a Cable Harness tutorial model.
Whereas an antenna radiates into free space, waveguides and transmission lines guide the electromagnetic wave along a predefined path. It is possible to compute the impedance of transmission lines and the propagation constants and S-parameters of both microwave and optical waveguides.
Rather than transmitting energy, a resonant cavity is a structure designed to store electromagnetic energy of a particular frequency within a small space. Such structures can be either closed cavities, such as a metallic enclosure, or an open structure like an RF coil or Fabry-Perot cavity. Quantities of interest include the resonant frequency and the Q-factor.
Conceptually speaking, the combination of a waveguide with a resonant structure results in a filter or coupler. Filters are meant to either block or pass certain frequencies propagating through a structure, while couplers are meant to allow certain frequencies to pass from one waveguide to another. A microwave filter can be as simple as a series of connected rectangular cavities, as seen in our Waveguide Iris Bandpass Filter tutorial model.
A scattering problem can be thought of as the opposite of an antenna problem. Rather than finding the radiated field from an object, an object is modeled in a background field coming from a source outside of the modeling domain. The far-field scattering of the electromagnetic wave by the object is computed, as demonstrated in the benchmark example of a perfectly conducting sphere in a plane wave.
Some electromagnetics problems can be greatly simplified in complexity if it can be assumed that the structure is quasi-infinite. For example, it is possible to compute the band structure of a photonic crystal by considering a single unit cell. Structures that are periodic in one or two directions, such as gratings and frequency-selective surfaces, can also be analyzed for their reflection and transmission.
Whenever there is a significant amount of power transmitted via radiation, any object that interacts with the electromagnetic waves can heat up. The microwave oven in your kitchen is a perfect example of where you would need to model the coupling between electromagnetic fields and heat transfer. Another good introductory example is RF heating, where the transient temperature rise and temperature-dependent material properties are considered.
Applying a large DC magnetic bias to a ferrimagnetic material results in a relative permeability that is anisotropic for small (with respect to the DC bias) AC fields. Such materials can be used in microwave circulators. The nonreciprocal behavior of the material provides isolation.
You should now have a general overview of the capabilities and applications of the RF and Wave Optics modules for frequency domain wave electromagnetics problems. The examples listed above, as well as the other examples in the Application Gallery, are a great starting point for learning to use the software, since they come with documentation and stepbystep modeling instructions.
Please also keep in mind that the RF and Wave Optics modules include other functionality and formulations not described here, including transient electromagnetic wave interfaces for modeling material nonlinearities, such as second harmonic generation, and for modeling signal propagation time. The RF Module additionally includes a circuit modeling tool for connecting a finite element model of a system to a circuit model, as well as an interface for modeling the transmission line equations.
As you delve deeper into COMSOL Multiphysics and wave electromagnetics modeling, please also read our other blog posts on meshing and solving options; various material models that you are able to use; as well as the boundary conditions available for modeling metallic objects, waveguide ports, and open boundaries. These posts will provide you with the foundation you need to model wave electromagnetics problems with confidence.
If you have any questions about the capabilities of using COMSOL Multiphysics for wave electromagnetics and how it can be used for your modeling needs, please contact us.
When you are solving a transient model, the COMSOL software by default uses an implicit time-stepping algorithm with adaptive time step size. This has the advantage of being unconditionally stable for many classes of problems, and it lets the software choose the optimal time step size for the specified solver tolerances, thereby reducing the computational cost of the solution.
Two classes of time-stepping algorithms are available: the backward differentiation formula (BDF) and the generalized-alpha method. These algorithms use the solutions at several previous time steps (up to five) to numerically approximate the time derivatives of the fields and to predict the solution at the next time step.
However, these previous solutions are not by default accessible within the model. The Previous Solution operator makes the solution at the previous time step available as a field variable in the model. This Previous Solution operator is available for both transient as well as stationary problems solved using the continuation method. Let us take a look at how you can implement and use this Previous Solution operator in a transient model in COMSOL Multiphysics.
Using the Previous Solution operator requires only two additional features within the model tree. You must add an ODE and DAE interface to store the fields that you are interested in and you must add the Previous Solution feature to the Time-Dependent Solver. Let us take a look at the implementation in terms of a transient heat transfer example: The laser heating of a wafer with a moving heat load, solved on a rotating coordinate system.
The first step is to add a Domain ODEs and DAEs interface to the model, since we will be interested in tracking the solution at the previous time step throughout the volume of the part. If we were only interested in the previous solution across a boundary, edge, or point, or in some global quantity, we could also use a Boundary, Edge, Point, or Global ODEs and DAEs interface.
The Domain ODEs and DAEs interface for tracking the solution at the previous time step.
The screenshot above shows the relevant settings for the Domain ODEs and DAEs interface. Note that the units of both the dependent variable and the source are set to Temperature. It is a good modeling practice to appropriately set units. The discretization is set to a Lagrange Quadratic, which matches the discretization used by the Heat Transfer in Solids interface. You will always want to make sure that you are using the appropriate discretization. The name of the field variable here is left at the default “u”, although you can rename it to anything you would like.
The equation being solved by the Domain ODEs and DAEs interface.
The screenshot above shows the equation that stores the temperature solution at the previous time step. This equation can be read as:
u - nojac(T) = 0
The nojac() operator is needed, since we do not want this equation to contribute to the Jacobian (the system matrix). Lastly, we need to specify that this equation should be evaluated at the previous time step. This is done within the Solver Configurations.
The Previous Solution feature added to the Solver Configuration.
The screenshot above shows the Previous Solution feature added to the TimeDependent Solver. Once you add this feature, simply select the appropriate field variable to be evaluated at the previous time step. It will also be faster (although not necessary) to use the Segregated Solver rather than the Fully Coupled solver.
And that is all there is to it. You can now solve the model just as you usually do and you will be able to evaluate the temperature at the previous computational time step.
Of course, having the solution at the previous time step isn’t really all that interesting in itself, but we can do quite a bit more than just store this solution. For example, we can apply logical expressions directly with the ODEs and DAEs equation interface. Consider the equation:
u - nojac(if(T>u, T, u)) = 0
This equation can be read as: "If the temperature at the previous time step is greater than u, set u equal to that temperature. Otherwise, leave u unchanged."
That is, it stores the maximum temperature reached through the previous time step at every point in the modeling domain. You can now evaluate the variables T and u at any point in the model to get both the temperature over time and the maximum temperature attained. To get the maximum temperature, you will want to take the maximum of the temperature at the previous time step and the temperature at the current time step, so you can introduce a variable in the model:
MaxTemp = max(T,u)
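In plain code, the same running-maximum logic looks like the following sketch, evaluated at a single point; the temperature history is made up for illustration, and u plays the role of the previous-step field while MaxTemp = max(T, u) includes the current step.

```python
# Made-up temperature history at one point, K
temperatures = [300.0, 340.0, 390.0, 370.0, 410.0, 350.0]

u = temperatures[0]          # initial value: temperature at t = 0
max_temp_trace = []
for T in temperatures:
    max_temp = max(T, u)     # MaxTemp = max(T, u)
    max_temp_trace.append(max_temp)
    u = max_temp             # becomes the "previous solution" next step

print(max_temp_trace)   # → [300.0, 340.0, 390.0, 390.0, 410.0, 410.0]
```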
This will return the maximum temperature up to that time as shown in the plot below.
Temperature at a point plotted over time. The variable MaxTemp is also plotted and shows the maximum temperature reached up to that instant in time.
We have shown here the implementation of the newly introduced Previous Solution operator for time-dependent models. The three steps to use this functionality appropriately are:
1. Add an ODEs and DAEs interface, with an appropriate discretization, to store the fields that you are interested in.
2. Write its governing equation using the nojac() operator so that the equation does not contribute to the Jacobian.
3. Add the Previous Solution feature to the Time-Dependent Solver and select the field to be evaluated at the previous time step.
We have shown how to evaluate the maximum temperature in this example, but there is a great deal more that can be done with this functionality, so stay tuned!
While many different types of laser light sources exist, they are all quite similar in terms of their outputs. Laser light is very nearly single frequency (single wavelength) and coherent. Typically, the output of a laser is also focused into a narrow collimated beam. This collimated, coherent, and single frequency light source can be used as a very precise heat source in a wide range of applications, including cancer treatment, welding, annealing, material research, and semiconductor processing.
When laser light hits a solid material, part of the energy is absorbed, leading to localized heating. Liquids and gases (and plasmas), of course, can also be heated by lasers, but the heating of fluids almost always leads to significant convective effects. Within this blog post, we will neglect convection and concern ourselves only with the heating of solid materials.
Solid materials can be either partially transparent or completely opaque to light at the laser wavelength. Depending upon the degree of transparency, different approaches for modeling the laser heat source are appropriate. Additionally, we must concern ourselves with the relevant length scales as compared to the wavelength of light. If the laser is very tightly focused, then a different approach is needed compared to a relatively wide beam. If the material interacting with the beam has geometric features that are comparable to the wavelength, we must additionally consider exactly how the beam will interact with these small structures.
Before starting to model any lasermaterial interactions, you should first determine the optical properties of the material that you are modeling, both at the laser wavelength and in the infrared regime. You should also know the relative sizes of the objects you want to heat, as well as the laser wavelength and beam characteristics. This information will be useful in guiding you toward the appropriate approach for your modeling needs.
In cases where the material is opaque, or very nearly so, at the laser wavelength, it is appropriate to treat the laser as a surface heat source. This is most easily done with the Deposited Beam Power feature (shown below), which is available with the Heat Transfer Module as of COMSOL Multiphysics version 5.1. It is, however, also quite easy to manually set up such a surface heat load using only the COMSOL Multiphysics core package, as shown in the example here.
A surface heat source assumes that the energy in the beam is absorbed over a negligibly small distance into the material relative to the size of the object that is heated. The finite element mesh only needs to be fine enough to resolve the temperature fields as well as the laser spot size. The laser itself is not explicitly modeled, and it is assumed that the fraction of laser light that is reflected off the material is never reflected back. When using a surface heat load, you must manually account for the absorptivity of the material at the laser wavelength and scale the deposited beam power appropriately.
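As a sketch of such a manually defined surface heat load, the snippet below evaluates a hypothetical Gaussian absorbed flux, q(r) = αP/(2πσ²)·exp(−r²/(2σ²)), and checks that integrating it over the surface recovers the absorbed power αP. The absorptivity, beam power, and spot size are all assumed values for illustration.

```python
import numpy as np

alpha = 0.7       # absorptivity at the laser wavelength (assumed)
P = 100.0         # total beam power, W (assumed)
sigma = 0.5e-3    # Gaussian beam radius parameter, m (assumed)

# Absorbed surface flux q(r), W/m^2
r = np.linspace(0.0, 10 * sigma, 20001)
q = alpha * P / (2 * np.pi * sigma**2) * np.exp(-r**2 / (2 * sigma**2))

# Sanity check: integrate q over the surface in polar coordinates,
# total = ∫ q(r) * 2*pi*r dr, which should equal alpha * P
f = q * 2 * np.pi * r
absorbed = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
print(round(absorbed, 3))   # → 70.0
```

This is the same bookkeeping the Deposited Beam Power feature handles for you: the deposited power must be scaled by the material absorptivity.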
The Deposited Beam Power feature in the Heat Transfer Module is used to model two crossed laser beams. The resultant surface heat source is shown.
In cases where the material is partially transparent, the laser power will be deposited within the domain, rather than at the surface, and any of the different approaches may be appropriate based on the relative geometric sizes and the wavelength.
If the heated objects are much larger than the wavelength, but the laser light itself is converging and diverging through a series of optical elements and is possibly reflected by mirrors, then the functionality in the Ray Optics Module is the best option. In this approach, light is treated as a ray that is traced through homogeneous, inhomogeneous, and lossy materials.
As the light passes through lossy materials (e.g., optical glasses) and strikes surfaces, some power deposition will heat up the material. The absorption within domains is modeled via a complexvalued refractive index. At surfaces, you can use a reflection or an absorption coefficient. Any of these properties can be temperature dependent. For those interested in using this approach, this tutorial model from our Application Gallery provides a great starting point.
A laser beam focused through two lenses. The lenses heat up due to the high-intensity laser light, shifting the focal point.
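The relation between the complex-valued refractive index and volumetric absorption is compact enough to sketch in a few lines of Python (the numbers below are hypothetical, not taken from the tutorial model):

```python
import math

def absorption_coefficient(kappa, wavelength):
    """Intensity absorption coefficient [1/m] from the imaginary part kappa
    of the refractive index n - i*kappa.

    The intensity decays as I(z) = I0 * exp(-alpha * z), with
    alpha = 4 * pi * kappa / lambda0, where lambda0 is the
    free-space wavelength.
    """
    return 4.0 * math.pi * kappa / wavelength

# Hypothetical optical glass with kappa = 1e-6 at a 1064 nm laser line:
alpha = absorption_coefficient(1e-6, 1064e-9)  # roughly 11.8 1/m
```

Even a seemingly tiny imaginary part can therefore produce appreciable heating over a centimeter-scale lens.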
If the heated objects and the spot size of the laser are much larger than the wavelength, then it is appropriate to use the Beer-Lambert law to model the absorption of the light within the material. This approach assumes that the laser beam is perfectly parallel and unidirectional.
When using the Beer-Lambert law approach, the absorption coefficient of the material and the reflection at the material surface must be known. Both of these material properties can be functions of temperature. The appropriate way to set up such a model is described in our earlier blog entry “Modeling Laser-Material Interactions with the Beer-Lambert Law”.
You can use the Beer-Lambert law approach if you know the incident laser intensity and if there are no reflections of the light within the material or at the boundaries.
Laser heating of a semitransparent solid modeled with the Beer-Lambert law.
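In equation form, the Beer-Lambert law deposits a volumetric heat source Q(z) = αI(z), with I(z) = (1 − R)I₀ exp(−αz). A minimal Python sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical values for a semitransparent solid.
I0 = 1.0e7     # incident laser intensity [W/m^2]
R = 0.05       # reflectance at the illuminated surface [-]
alpha = 500.0  # absorption coefficient [1/m]

def intensity(z):
    """Beam intensity [W/m^2] at depth z [m] below the surface:
    I(z) = (1 - R) * I0 * exp(-alpha * z)."""
    return (1.0 - R) * I0 * np.exp(-alpha * z)

def heat_source(z):
    """Volumetric heat deposition [W/m^3]: Q(z) = -dI/dz = alpha * I(z)."""
    return alpha * intensity(z)
```

Integrating Q over the depth recovers the transmitted power density (1 − R)I₀, so energy is conserved by construction.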
If the heated domain is large, but the laser beam is tightly focused within it, neither the ray optics nor the Beer-Lambert law modeling approach can accurately solve for the fields and losses near the focus. These techniques do not directly solve Maxwell’s equations, but instead treat light as rays. The beam envelope method, available within the Wave Optics Module, is the most appropriate choice in this case.
The beam envelope method solves the full Maxwell’s equations under the assumption that the field envelope varies slowly. The approach is appropriate whenever the direction in which the light travels, and hence the wave vector, is approximately known throughout the modeling domain. This is the case when modeling focused laser light as well as waveguide structures such as a Mach-Zehnder modulator or a ring resonator. Since the beam direction is known, the finite element mesh can be very coarse in the propagation direction, thereby reducing computational costs.
A laser beam focused in a cylindrical material domain. The intensity at the incident side and within the material are plotted, along with the mesh.
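The idea behind the slowly varying envelope can be illustrated with a few lines of Python (purely schematic, with hypothetical numbers): the full field oscillates on the wavelength scale, while the envelope that the method actually solves for varies on the scale of the geometry:

```python
import numpy as np

wavelength = 1.0e-6           # hypothetical free-space wavelength [m]
k = 2.0 * np.pi / wavelength  # known wave number along the beam
L = 1.0e-3                    # propagation length [m], about 1000 wavelengths

z = np.linspace(0.0, L, 11)   # only 11 coarse sample points along the beam
E1 = np.exp(-(z / L) ** 2)    # smooth, slowly varying envelope (schematic)
E = E1 * np.exp(-1j * k * z)  # full field: ~1000 oscillations over L

# The envelope E1 is well represented by 11 points; resolving the full
# field E directly would require several points per wavelength, i.e.,
# thousands of points over the same distance.
```

This is exactly the saving the Beam Envelopes formulation exploits: the known phase factor is handled analytically, and only the envelope is discretized.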
The beam envelope method can be combined with the Heat Transfer in Solids interface via the Electromagnetic Heat Source multiphysics couplings. These couplings are automatically set up when you add the Laser Heating interface under Add Physics.
The Laser Heating interface adds the Beam Envelopes and the Heat Transfer in Solids interfaces and the multiphysics couplings between them.
Finally, if the heated structure has dimensions comparable to the wavelength, it is necessary to solve the full Maxwell’s equations without assuming any propagation direction of the laser light within the modeling space. Here, we need to use the Electromagnetic Waves, Frequency Domain interface, which is available in both the Wave Optics Module and the RF Module. Additionally, the RF Module offers a Microwave Heating interface (similar to the Laser Heating interface described above) and couples the Electromagnetic Waves, Frequency Domain interface to the Heat Transfer in Solids interface. Despite the nomenclature, the RF Module and the Microwave Heating interface are appropriate over a wide frequency band.
The full-wave approach requires a finite element mesh that is fine enough to resolve the wavelength of the laser light. Since the beam may scatter in all directions, the mesh must be reasonably uniform in size. A good example of using the Electromagnetic Waves, Frequency Domain interface is modeling the losses in a gold nanosphere illuminated by a plane wave, as illustrated below.
Laser light heating a gold nanosphere. The losses in the sphere and the surrounding electric field magnitude are plotted, along with the mesh.
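To get a feel for why the full-wave approach is reserved for wavelength-scale structures, here is a rough Python estimate based on a common rule of thumb (about five second-order elements per wavelength in the material; the specific numbers are hypothetical):

```python
import math

def min_elements_per_direction(domain_size, wavelength, n=1.0, elems_per_wl=5):
    """Rule-of-thumb element count along one direction for a full-wave model.

    A common guideline is about five second-order elements per wavelength
    *in the material*, i.e., per (wavelength / n), where n is the
    refractive index.
    """
    material_wavelength = wavelength / n
    return math.ceil(elems_per_wl * domain_size / material_wavelength)

# Hypothetical case: a 10 um domain, 532 nm light, refractive index 1.5.
n_elems = min_elements_per_direction(10e-6, 532e-9, n=1.5)
# About 141 elements per direction, i.e., millions of elements in 3D.
```

The cubic growth of the element count with domain size is what makes the ray optics and beam envelope approaches attractive for larger geometries.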
You can use any of the previous five approaches to model the power deposition from a laser source in a solid material. Modeling the temperature rise and heat flux within and around the material additionally requires the Heat Transfer in Solids interface. Available in the core COMSOL Multiphysics package, this interface is suitable for modeling heat transfer in solids and features fixed temperature, insulating, and heat flux boundary conditions. The interface also includes various boundary conditions for modeling convective heat transfer to the surrounding atmosphere or fluid, as well as modeling radiative cooling to ambient at a known temperature.
In some cases, you may expect that there is also a fluid that provides significant heating or cooling to the problem and cannot be approximated with a boundary condition. For this, you will want to explicitly model the fluid flow using the Heat Transfer Module or the CFD Module, which can solve for both the temperature and flow fields. Both modules can solve for laminar and turbulent fluid flow. The CFD Module, however, has certain additional turbulent flow modeling capabilities, which are described in detail in this previous blog post.
For instances where you are expecting significant radiation between the heated object and any surrounding objects at varying temperatures, the Heat Transfer Module has the additional ability to compute gray body radiative view factors and radiative heat transfer. This is demonstrated in our Rapid Thermal Annealing tutorial model. When you expect the temperature variations to be significant, you may also need to consider the wavelength-dependent surface emissivity.
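For the simplest case, a gray surface exchanging radiation only with a large ambient, the net flux reduces to q = εσ(T⁴ − T_amb⁴). A short Python sketch with hypothetical values shows why radiation dominates at annealing-like temperatures (the general view-factor computation that the Heat Transfer Module performs is more involved than this):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiative_flux(T_surface, T_ambient, emissivity):
    """Net radiative heat flux [W/m^2] from a gray, diffuse surface that
    sees only a large ambient at T_ambient:
    q = eps * sigma * (Ts^4 - Tamb^4)."""
    return emissivity * SIGMA * (T_surface ** 4 - T_ambient ** 4)

# Hypothetical conditions: 1200 K surface, 300 K ambient, emissivity 0.8.
# The T^4 dependence makes this on the order of 1e5 W/m^2, often far
# larger than natural convection losses at the same temperature.
q = radiative_flux(1200.0, 300.0, 0.8)
```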
If the materials under consideration are transparent to laser light, it is likely that they are also partially transparent to thermal (infrared-band) radiation. This infrared light will be neither coherent nor collimated, so we cannot use any of the above approaches to describe the re-radiation within semitransparent media. Instead, we can use the radiation in participating media approach. This technique is suitable for modeling heat transfer within a material, where there is significant heat flux inside the material due to radiation. An example of this approach from our Application Gallery can be found here.
In this blog post, we have looked at the various modeling techniques available in the COMSOL Multiphysics environment for modeling the laser heating of a solid material. Surface heating and volumetric heating approaches are presented, along with a brief overview of the heat transfer modeling capabilities. Thus far, we have only considered the heating of a solid material that does not change phase. The heating of liquids and gases — and the modeling of phase change — will be covered in a future blog post. Stay tuned!
One helpful technique (which we’ve occasionally shown in other posts, but haven’t explicitly named) is to layer different surfaces together in a single plot group. This allows you to view several different results at once. For instance, you can combine plots that show the temperature and fluid flow for a CFD model, or show both the stress and deformation in a simulation of fluid-structure interaction. Let’s take the example of an aluminum heat sink, which was shown in this blog post on using surface, volume, and line plots. The plot below shows the temperature in the heat sink and the surrounding domain:
Adding a second plot simply requires right-clicking on the plot group node and choosing the next plot type you want. The new plot will be added on top of the first one and shown in the results tree. For this example, we can also add an arrow plot showing the fluid flow field past the heat sink:
After adding a second plot, the plot group node looks like this (and the arrow volume contains a color expression, not shown):
We can also add, for instance, a contour plot to see the temperature variation more clearly around the heat sink:
That addition makes the plot group tree look like this:
Note that there is no limit to the number of plots that can be added to a plot group, although it’s important to make sure that the plot doesn’t get too crowded. In some cases, plots will simply obscure each other: for example, it wouldn’t make sense to place a surface plot of the fluid flow velocity directly on top of the temperature plot, because it isn’t possible to see both at once. Even in this example, the contour and arrow plots together already make the image look a little busy.
In the previous example, you might have noticed that the contour plot showing the temperature lines around the base of the heat sink has a different color scheme than the surface plot showing the temperature in the heat sink and surrounding domain. This could become confusing, especially since the rainbow color table is used in the arrow plot, which is showing fluid flow, not temperature. To avoid being misleading, we can change the two temperature plots to have the same color scheme (and make sure it’s different from the color scheme of plots showing other physics).
The obvious way to do this is simply to change the color table in the settings for each plot. In this case, we can change the contour plot to the thermal color table:
However, for a case where there are many subplots or there might be frequent changes made to style choices, it’s more beneficial to use the Inherit Style options. These enable you to inherit the style settings from a previous plot. We can set the contour plot to inherit its style from the temperature surface, locking the color settings in the contour plot unless we reset the inherit options:
There are several checkboxes (shown above) that allow you to choose exactly which aspects of the plot’s style to keep. This is especially helpful for certain plot types where you have manually adjusted a setting — for instance, where a manual scale factor has been introduced and you want to make sure that all the successive plots maintain the scale.
Choosing to inherit the style from a previous plot also means that any changes made to the “parent” (the plot the settings are inherited from) will cause automatic changes in any plots relying on its style settings.
For COMSOL Multiphysics version 5.1, new buttons have been added to the graphics window toolbar that allow quick enabling and disabling of the grid, the coordinate axis symbol, and the color legends:
This means that it’s no longer necessary to go into the View node, and you can easily turn them off and on temporarily as needed.
Another helpful tool (again shown in previous posts, but seldom discussed) is the Wireframe rendering option in the Coloring and Style tab. This checkbox allows you to display the mesh elements in a surface plot.
In many cases, it’s preferable to overlay two surface plots (as discussed earlier) so that there is a “background” for the mesh. For instance, in the heat sink, we might use a temperature surface on the heat sink, and then include a black mesh on top:
The mesh doesn’t have to be a uniform color (and will show an appropriate color gradient if a physics expression is used for the surface plot), but this feature is normally used with a gray or black mesh on top of a surface of a different color.
COMSOL Multiphysics contains several options for exporting results such as images, animations, and reports. These are quite flexible and may be adapted to your specific needs.
For exporting images from the graphics window, use the Image Snapshot button on the toolbar:
You can adjust the size and resolution, and control which aspects of the visible graphic are included in the image:
To create animations, right-click on the Export node and choose Animation. This will create a subnode where you can adjust the settings for a video to be exported to your chosen file type:
Among the settings are fields for controlling the number of frames, size, and speed of the video. Like the graphics export options, there are also choices for resolution and layout.
Note: The Player node is very similar to the Animation node (also accessible through the ribbon or by right-clicking on the Export node), except that it doesn’t save a file. Rather, this feature enables you to play and watch the animation directly in the graphics window.
Finally, the Reports node allows you to generate a report of your results as an HTML or Microsoft® Word file. Several options are available for different amounts of detail, as well as a Custom Report option that lets you choose exactly which aspects of the simulation you’d like to include.
A complete report will include everything in the Model Builder, from the parameters table to the final results plots.
Lastly, a little-known fact about the COMSOL Desktop is that you can rearrange the windows using the drag-and-drop method. All of the windows below the ribbon can be moved and dropped into different sections. Release the mouse over one of the blue boxes to set it in the corresponding position (shown below):
If you ever want to reset the Desktop layout, the Reset Desktop button on the ribbon will revert the windows to the default arrangement. This feature also allows you to choose between two different layouts, depending on the size of your window and monitor:
This completes our overview of using additional tricks and tools for postprocessing in COMSOL Multiphysics. We hope that you’ll find these techniques useful in your own work for visualizing, understanding, and sharing your simulation results. Thanks for reading!