Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to two closely related topics of *verification* and *validation*. Let’s clarify what these two terms mean in the context of numerical simulations.

To numerically simulate a physical problem, we take two steps:

- Construct a mathematical model of the physical system. This is where we account for all of the factors (inputs) that influence observed behavior (outputs) and postulate the governing equations. The result is often a set of implicit relations between inputs and outputs. This is frequently a system of partial differential equations with initial and boundary conditions that collectively are referred to as an *initial boundary value problem* (IBVP).
- Solve the mathematical model to obtain the outputs as explicit functions of the inputs. However, such closed-form solutions are not available for most problems of practical interest. In this case, we use numerical methods to obtain approximate solutions, often with the help of computers to solve large systems of generally nonlinear algebraic equations and inequalities.

There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables. *Validation* is the process of making sure such errors are not introduced when constructing the mathematical model. *Verification*, on the other hand, is to ascertain that the mathematical model is accurately solved. Here, we are ensuring that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate.

In brief, during *validation* we ask if we posed the appropriate mathematical model to describe the physical system, whereas in *verification* we investigate if we are obtaining an accurate numerical solution to the mathematical model.

*A comparison between the processes of validation and verification.*

Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs).

How do we check if a simulation tool is accurately solving an IBVP?

One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the *raison d’être* of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming.

Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called *qualification*. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure.

The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas.

*Verification models are available in the Application Libraries of COMSOL Multiphysics.*

What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions.

The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems.

In the Method of Manufactured Solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute the solution into the differential equations and obtain a consistent set of source terms, initial conditions, and boundary conditions. This usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain the initial and boundary conditions.

Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started.

Let us illustrate the steps with a simple example.

Consider a 1D heat conduction problem in a bar of length L:

A_c\rho C_p \frac{\partial T}{\partial t} + \frac{\partial}{\partial x}(-A_ck\frac{\partial T}{\partial x}) = A_cQ, t \in (0,t_f), x \in (0,L)

with initial condition

T(x,0) = f(x)

and fixed temperatures at the two ends given by

T(0,t) = g_1(t), \quad T(L,t) = g_2(t).

The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q.

Our goal is to verify the solution of this problem using the method of manufactured solutions.

First, we assume an explicit form for the solution. Let’s consider the temperature distribution

u(x,t) = 500 K + \frac{x}{L}(\frac{x}{L}-1)\frac{t}{\tau} K,

where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T.

Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term.
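
For readers who prefer to see the derivation outside COMSOL, the same symbolic step can be sketched in a few lines of Python with SymPy. The symbol names are ours and the units of K are suppressed; for a uniform bar, the cross-sectional area A_c cancels from the equation:

```python
import sympy as sp

# Symbol names are ours; units of K are suppressed throughout
x, t, L, tau, rho, Cp, k = sp.symbols('x t L tau rho C_p k', positive=True)

# Manufactured temperature field u(x, t)
u = 500 + (x/L)*(x/L - 1)*(t/tau)

# Substitute u into the governing equation to recover the source term:
# rho*C_p*du/dt + d/dx(-k*du/dx) = Q   (A_c cancels for a uniform bar)
Q = sp.simplify(rho*Cp*sp.diff(u, t) + sp.diff(-k*sp.diff(u, x), x))

# The initial and boundary conditions follow by direct evaluation
f = u.subs(t, 0)                       # f(x) = 500
g1, g2 = u.subs(x, 0), u.subs(x, L)    # g1 = g2 = 500
```

Hand calculation gives the same result: Q = ρC_p (x/L)(x/L − 1)/τ − 2kt/(L²τ), which is exactly what the symbolic derivation returns.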

In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator **d()**, one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below.

*The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives.*

We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a hand-calculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine.

Next, we compute the initial and boundary conditions. The initial condition is the assumed solution evaluated at t = 0.

f(x) = u(x,0) = 500 K.

The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K.

Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the *Heat Transfer in Solids* physics interface.

*Add initial values, boundary conditions, and sources derived from the assumed solution.*

For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second is obtained using quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default.

*The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right).*

The MMS gives us the flexibility to check different parts of the code. In the example given above, for the purpose of simplicity we have intentionally left many parts of the IBVP unchecked. In practice, every item in the equation should be checked in the most general form. For example, to check if the code accurately handles nonuniform cross-sectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties.

A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., -n\cdot(-A_ck \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_ck}{L}\frac{t}{\tau}\,\mathrm{K}.

In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution where the derivative vanishes on the left end. For example, we can use

u(x,t) = 500 K + (\frac{x}{L})^2\frac{t}{\tau} K.

Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations.

Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium!

As shown in the graph above, the solutions obtained with linear and quadratic elements converge as the mesh size is reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which provides a quantitative check of the numerical procedure.

For example, for the stationary version of the problem, the standard finite element error estimate for error measured in the m-order Sobolev norm is

||u-u_h||_m \leq C h^{p+1-m}||u||_{p+1},

where u and u_h are the exact and finite element solutions, h is the maximum element size, and p is the order of the approximation polynomials (shape functions). For m = 0, this gives the error estimate

||e|| = ||u-u_h|| = (\int_{\Omega}(u-u_h)^2d\Omega)^{\frac{1}{2}} \leq C h^{p+1}||u||_{p+1},

where C is a mesh independent constant.

Returning to the method of manufactured solutions, this implies that the solution with linear elements (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error against mesh size on a log-log plot, the slope should asymptotically approach 2. If this does not happen, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties. As the figures below show, the numerical solution converges at the theoretically expected rate.
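
The convergence-rate check itself is easy to script. The sketch below is a finite difference analogue rather than COMSOL's finite element solution, but the logic is identical: manufacture u = sin(πx) for a stationary problem, solve on two meshes, and estimate the observed order from the log-log slope:

```python
import numpy as np

def solve_fd(n):
    """Solve -u'' = pi^2*sin(pi*x) on (0,1), u(0)=u(1)=0, with
    second-order central differences on n interior points."""
    h = 1.0/(n + 1)
    x = np.linspace(h, 1 - h, n)
    # Tridiagonal matrix from the standard 3-point stencil
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    Q = np.pi**2 * np.sin(np.pi*x)       # manufactured source term
    u_h = np.linalg.solve(A, Q)
    err = np.sqrt(h*np.sum((u_h - np.sin(np.pi*x))**2))  # discrete L2 error
    return h, err

(h1, e1), (h2, e2) = solve_fd(40), solve_fd(80)
order = np.log(e1/e2)/np.log(h1/h2)
print(f"observed order of convergence: {order:.2f}")  # should approach 2
```

If the observed order falls short of the theoretical one, that is a strong hint of a bug in the discretization or in the derived source term.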

*Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2-norm (m = 0) for linear elements, which is consistent with theoretical prediction.*

While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where *a priori* error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior.

In case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution.
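
As a sketch, consider a hypothetical conductivity law k(T) = k_0(1 + βT) (our own choice for illustration). The source term now picks up extra terms through the chain rule, which the symbolic derivation handles automatically:

```python
import sympy as sp

x, t, L, tau, rho, Cp, k0, beta = sp.symbols(
    'x t L tau rho C_p k_0 beta', positive=True)
u = 500 + (x/L)*(x/L - 1)*(t/tau)   # same manufactured solution as before
k = k0*(1 + beta*u)                 # hypothetical conductivity law k(T)

# Because k depends on x and t through u, the flux divergence expands as
# d/dx(k*du/dx) = k0*beta*(du/dx)^2 + k*d2u/dx2
Q = sp.simplify(rho*Cp*sp.diff(u, t) - sp.diff(k*sp.diff(u, x), x))
```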

Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation.

Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check in the case of isotropic, uniform thermal conductivity, in the case of temperature-dependent conductivity or anisotropy, more thought should be given when manufacturing the solution so as not to violate such assumptions.

When using the method of manufactured solutions, the solution exists by construction. In addition, uniqueness proofs are available for a much larger class of problems than we have exact analytical solutions. Thus, the method gives us more room to work with than looking for exact solutions starting with source terms and initial and boundary conditions.

The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us!

- For an extensive discussion of the method of manufactured solutions, including relative strengths and limitations, see this report from Sandia National Laboratories. The report details a set of blind tests in which one author planted a series of code mistakes unbeknownst to the second author, who had to mine-sweep using the method described in this blog post.
- For a broader discussion of verification and validation in the context of scientific computing, check out W. J. Oberkampf and C. J. Roy, *Verification and Validation in Scientific Computing*, Cambridge University Press, 2010.
- Standard error estimates for the finite element method are available in texts such as Thomas J. R. Hughes, *The Finite Element Method: Linear Static and Dynamic Finite Element Analysis*, Dover Publications, 2000, and B. Daya Reddy, *Introductory Functional Analysis: With Applications to Boundary Value Problems and Finite Elements*, Springer-Verlag, 1997.

When modeling a manufacturing process, such as the heating of an object, it is possible for irreversible damage to occur due to a change in temperature. This may even be a desired step in the process. With the *Previous Solution* operator, we can model such damage in COMSOL Multiphysics. Here, we will look at the “baking off” of a thin coating on a wafer heated by a laser.

Let’s consider a wafer of silicon with a very thin layer of material coated on the surface. This thin film may have been introduced in a previous processing step, and we now want to quickly “bake off” this material by heating the wafer with a laser. The wafer is mounted on a spinning stage while the laser heat source traverses back and forth over the surface.

We will consider a layer of material that is very thin compared to the wafer thickness. We can thus assume that the film does not contribute to the thermal mass of the system, nor will it provide any additional conductive heat path. However, this coating will affect the surface emissivity. If the coating is undamaged, the emissivity is 0.8. Once the coating has baked off, the emissivity of that region of the wafer will change to 0.6. This will alter both the amount of heat absorbed from the laser heat source and the heat radiated from the wafer to the surroundings.

*A laser beam traversing over a rotating wafer ablates a thin surface coating when the temperature is high enough.*

We will not concern ourselves too greatly with the process by which the coating is removed from the wafer. Although the actual process may include phase change, melting, boiling, ablation, and chemical reactions, we are, in this case, dealing with a very thin layer of material. Thus, we can simply say that once the temperature of the wafer surface exceeds 60°C, the coating immediately disappears. Under the assumption of very fast dynamics of the material removal process relative to the heating of the wafer, this is a valid approach.
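
The irreversible switch can be sketched independently of COMSOL. In the toy update rule below (NumPy, with the threshold and emissivity values of this example, and our own function names), a point that ever exceeds the ablation temperature keeps the bare-wafer emissivity even after it cools back down:

```python
import numpy as np

T_ABLATION = 60.0 + 273.15   # ablation threshold, 60 degC in kelvin
EPS_COATED, EPS_BARE = 0.8, 0.6

def update_emissivity(eps_prev, T):
    """Irreversible state change: where T exceeds the threshold the
    emissivity drops to the bare value; elsewhere the previous value
    is kept, so the change never reverses on cooling."""
    return np.where(T > T_ABLATION, EPS_BARE, eps_prev)

# Three surface points over two time steps: only the hot middle point
# ablates, and it stays ablated after everything cools again.
eps = np.full(3, EPS_COATED)
eps = update_emissivity(eps, np.array([300.0, 350.0, 320.0]))
eps = update_emissivity(eps, np.array([300.0, 310.0, 320.0]))
print(eps)   # [0.8 0.6 0.8]
```

This is exactly the role the *Previous Solution* operator plays in the model: the field at the previous time step is the `eps_prev` argument of the update.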

We will begin with the previously developed model of a rotating wafer exposed to a moving heat source. An additional boundary equation will be added to our existing model; since the coating is fixed to the wafer surface, this equation must be posed in a coordinate system that moves with the rotation of the wafer.

The equation we add will track the surface emissivity on the top boundary of the wafer. The *Previous Solution* operator is used since we simply want to change the surface emissivity once the temperature gets above the specified value and leave it otherwise unchanged. We have already introduced the use of the *Previous Solution* operator in a previous blog entry. We will now focus more specifically on modeling the removal of the film from the wafer surface.

*The settings for the* Boundary ODEs and DAEs *interface. Note the shape function settings.*

*The domain settings and initial values settings for the* Boundary ODEs and DAEs *interface, which models the emissivity of the surface of the wafer.*

The settings for the *Boundary ODEs and DAEs* interface are shown above. Note that a *Constant Discontinuous Lagrange* discretization is used to solve for the field “emissivity”, the surface emissivity. This discretization is equivalent to saying that the emissivity will have a constant value over each element and that the field can be discontinuous across different elements. We are assuming that the film is either present or not present, so the surface emissivity will have two discrete states. The initial value of the field variable is the undamaged value of the surface emissivity.

*The settings for the Heat Flux boundary condition use the computed surface emissivity.*

*The settings for the Diffuse Surface boundary condition use the computed surface emissivity.*

The computed surface emissivity is used in two places within the *Heat Transfer in Solids* interface, as shown above. The applied Heat Flux boundary condition and the radiation of ambient temperature via the Diffuse Surface boundary condition both reference the emissivity field.

Since the surface emissivity is constant across each element, a finite element mesh size of 0.3 mm is used to obtain a smoother representation of the damage field. Also, the relative solver tolerance is set to 1e-6.

The results of the simulation are depicted in the animation below. As the temperature rises, certain portions of the wafer surface rise above the ablation temperature and the surface emissivity changes. The process is complete once the entirety of the wafer surface has been heated above the desired temperature.

*An animation of the temperature field of the laser heating the rotating wafer (left). The dark gray color indicates the damage zone (right).*

We have demonstrated how to model an irreversible change in the state of a material. In this case, we have analyzed the removal of a thermodynamically negligible thin layer of material from the surface of a wafer and modified the resultant surface emissivity as a consequence. The technique outlined here for using the *Previous Solution* operator can also be used in many other cases. What comes to your mind?

If you are interested in downloading the model related to this article,

it is available in our Application Gallery.

After obtaining a solution, we often want to zoom in and take a closer look at a smaller region. For example, after we run an analysis for the structural mechanics of a loaded spring, we may want to not only visualize the solved variables on the entire spring, but also plot the data in some local regions. The figure below shows such an area along the midplane of the curved geometry.

*Elastic strain energy density of a loaded spring plotted on all surfaces (left) versus on a vertical midplane (right). Note that this midplane does not exist in the original geometry when the structural mechanics analysis is performed.*

Another example is when we are interested in visualizing data in a particular 3D shape that does not exist in the original model. This comes after performing a structural mechanics analysis of a wrench. See the figure below.

*First principal strain of a wrench plotted on an arbitrary 3D domain. Again, the 3D domain does not exist in the original geometry.*

If this region of interest is an existing, separate geometry domain, then the data extraction and visualization can be easily done via a selection of that domain. However, we do not usually know where to look before seeing the results, and it could be difficult to partition the whole geometry into separate domains *a priori*. On the other hand, once we have solved the physics, it is impractical to modify the geometry, mesh, and solve the model all over again only for postprocessing purposes. How should we deal with this challenge?

One of the component coupling operators, General Extrusion, gives us the solution. The key idea is to build the region of interest as a new geometry in a new component and then use the General Extrusion operator to map the solution data from the original component onto this new component. This approach avoids remeshing and re-solving of the original component. Another advantage is that the new component can be of arbitrary shape (but not larger than the original component) and space dimension (i.e., equal to or lower than that of the original component).

For a step-by-step tutorial to postprocessing local data, we will use the wrench tutorial model as an example, also found via File > Application Libraries > COMSOL Multiphysics > Structural Mechanics. Following the steps below, you will be able to quickly master this trick.

- Create a *General Extrusion* operator in the existing component, under Component 1 > Definitions. Select *All domains* as its source.

- Create a new component in the model tree. Note that the new component must have the same or lower space dimension than the original component. Next, build the new geometry according to the way you want to extract data. Make sure that the new geometry fits inside the original component.

- Right-click on the *Study* node and select *Update Solution*. This step ensures that the solution data becomes accessible in the new component.

- Create a dummy Solution node in Results > Data Sets and point this solution to the new component, Component 2 in this case.
- It’s time to plot the data. Remember to plot from the newly created data set and define the expression using the General Extrusion operator we defined earlier,
*comp1.genext1*. This means that the plot will show results on the geometry of Component 2 (the block), using the results extruded from the study on Component 1 (the wrench). For example,*solid.ep1*stands for the first principal strain in Component 1. To extrude the first principal strain from Component 1 to Component 2, we need to type*comp1.genext1(solid.ep1)*in the Expression input field instead.
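
Outside COMSOL, the mapping step that General Extrusion performs can be mimicked with scattered-data interpolation. The sketch below (SciPy, with a synthetic field standing in for *solid.ep1*) evaluates a field known on one set of mesh points at the points of a new, smaller region:

```python
import numpy as np
from scipy.interpolate import griddata

# "Component 1": solution known at scattered mesh nodes of the original
# geometry (a synthetic field on random points in the unit square)
rng = np.random.default_rng(0)
nodes = rng.random((2000, 2))
field = np.sin(np.pi*nodes[:, 0]) * nodes[:, 1]

# "Component 2": a new evaluation geometry that must lie inside the
# original one -- here a small block of grid points
gx, gy = np.meshgrid(np.linspace(0.3, 0.6, 20), np.linspace(0.3, 0.6, 20))
block = np.column_stack([gx.ravel(), gy.ravel()])

# The mapping step: evaluate the source field at the destination points
mapped = griddata(nodes, field, block, method='linear')
assert not np.isnan(mapped).any()   # block fits inside the source region
```

The caveat in the text shows up here too: if the new region pokes outside the original geometry, the mapping has nothing to evaluate (here, `griddata` would return NaN).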

COMSOL Multiphysics is an extremely flexible software platform. Besides what is discussed in this blog post, you can use component couplings to perform submodeling, implement a controller, and perform model rotation. Meanwhile, the COMSOL support team would be more than happy to assist you as you explore even more possibilities.


Whenever we want to solve a modeling problem involving Maxwell’s equations under the assumptions that:

- All material properties are constant with respect to field strength
- The fields change sinusoidally in time at a known frequency or range of frequencies

we can treat the problem as a *frequency domain* problem. When the electromagnetic field solutions are wave-like, such as for resonant structures, radiating structures, or any problem where the effective wavelength is comparable to the sizes of the objects we are working with, then the problem can be treated as a *wave electromagnetic* problem.

COMSOL Multiphysics has a dedicated physics interface for this type of modeling — the *Electromagnetic Waves, Frequency Domain* interface. Available in the RF and Wave Optics modules, it uses the finite element method to solve the frequency domain form of Maxwell’s equations. Here’s a guide for when to use this interface:

The wave electromagnetic modeling approach is valid in the regime where the object sizes range from approximately \lambda/100 to 10 \lambda, regardless of the absolute frequency. Below this size, the Low Frequency regime is appropriate. In the Low Frequency regime, the object will not be acting as an antenna or resonant structure. If you want to build models in this regime, there are several different modules and interfaces that you could use. For details, please see this blog post.
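
The size-to-wavelength rule of thumb is easy to encode. The helper below (our own naming, assuming the free-space wavelength λ = c/f) classifies a problem from the object size and the operating frequency:

```python
def em_regime(size_m, freq_hz):
    """Rough modeling-regime guide from object size in wavelengths,
    following the lambda/100 .. 10*lambda rule of thumb."""
    wavelength = 3.0e8 / freq_hz      # free-space wavelength, c/f
    ratio = size_m / wavelength
    if ratio < 0.01:
        return 'low-frequency (AC/DC-style) modeling'
    if ratio <= 10:
        return 'full-wave (Electromagnetic Waves, Frequency Domain)'
    return 'beam envelopes or ray optics'

# A 10 cm object at 3 GHz (lambda = 10 cm) sits squarely in the
# full-wave regime
print(em_regime(0.1, 3e9))
```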

The upper limit of \sim 10 \lambda comes from the memory requirements for solving large 3D models. Once your modeling domain size is greater than \sim 10\lambda in each direction, corresponding to a domain size of (10\lambda)^3 or 1000 cubic wavelengths, you will start to need significant computational resources to solve your models. For more details about this, please see this previous blog post. On the other hand, 2D models have far more modest memory requirements and can solve much larger problems.

For problems where the objects being modeled are much larger than the wavelength, there are two options:

- The beam envelopes formulation is appropriate if the device being simulated has relatively gradual variations in the structure — and magnitude of the electromagnetic fields — in the direction of beam propagation compared to the transverse directions. For details about this, please see this post.
- The Ray Optics Module formulation treats light as rays rather than waves. In terms of the above plot, there is a wide region of overlap between these two regimes. For an introduction to the ray optics approach, please see our introduction to the Ray Optics Module.

If you are interested in X-ray frequencies and above, then the electromagnetic wave will interact with and scatter from the atomic lattice of materials. This type of scattering is not appropriate to model with the wave electromagnetics approach, since it is assumed that within each modeling domain the material can be treated as a continuum.

So now that we understand what is meant by wave electromagnetics problems, let’s further classify the most common application areas of the *Electromagnetic Waves, Frequency Domain* interface and look at some examples of its usage. We will only look at a few representative examples here that are good starting points for learning the software. These examples are drawn from the RF Module and Wave Optics Module Application Libraries, as well as the online Application Gallery.

An antenna is any device that radiates electromagnetic waves for the purposes of signal (and sometimes power) transmission. There is an almost infinite number of ways to construct an antenna, but one of the simplest is a dipole antenna. A patch antenna, on the other hand, is more compact and used in many applications. Quantities of interest include the S-parameters, antenna impedance, losses, and far-field patterns, as well as the interactions of the radiated fields with any surrounding structures, as seen in our Car Windshield Antenna Effect on a Cable Harness tutorial model.

Whereas an antenna radiates into free space, waveguides and transmission lines guide the electromagnetic wave along a predefined path. It is possible to compute the impedance of transmission lines and the propagation constants and S-parameters of both microwave and optical waveguides.

Rather than transmitting energy, a resonant cavity is a structure designed to store electromagnetic energy of a particular frequency within a small space. Such structures can be either closed cavities, such as a metallic enclosure, or an open structure like an RF coil or Fabry-Perot cavity. Quantities of interest include the resonant frequency and the Q-factor.

Conceptually speaking, the combination of a waveguide with a resonant structure results in a filter or coupler. Filters are meant to either block or pass certain frequencies propagating through a structure, while couplers are meant to allow certain frequencies to pass from one waveguide to another. A microwave filter can be as simple as a series of connected rectangular cavities, as seen in our Waveguide Iris Bandpass Filter tutorial model.

A scattering problem can be thought of as the opposite of an antenna problem. Rather than finding the radiated field from an object, an object is modeled in a background field coming from a source outside of the modeling domain. The far-field scattering of the electromagnetic wave by the object is computed, as demonstrated in the benchmark example of a perfectly conducting sphere in a plane wave.

Some electromagnetics problems can be greatly simplified in complexity if it can be assumed that the structure is quasi-infinite. For example, it is possible to compute the band structure of a photonic crystal by considering a single unit cell. Structures that are periodic in one or two directions such as gratings and frequency selective surfaces can also be analyzed for their reflection and transmission.

Whenever there is a significant amount of power transmitted via radiation, any object that interacts with the electromagnetic waves can heat up. The microwave oven in your kitchen is a perfect example of where you would need to model the coupling between electromagnetic fields and heat transfer. Another good introductory example is RF heating, where the transient temperature rises and temperature-dependent material properties are considered.

Applying a large DC magnetic bias to a ferrimagnetic material results in a relative permeability that is anisotropic for small (with respect to the DC bias) AC fields. Such materials can be used in microwave circulators. The nonreciprocal behavior of the material provides isolation.

You should now have a general overview of the capabilities and applications of the RF and Wave Optics modules for frequency domain wave electromagnetics problems. The examples listed above, as well as the other examples in the Application Gallery, are a great starting point for learning to use the software, since they come with documentation and step-by-step modeling instructions.

Please also keep in mind that the RF and Wave Optics modules offer other functionality and formulations not described here, including transient electromagnetic wave interfaces for modeling material nonlinearities, such as second harmonic generation, and for modeling signal propagation time. The RF Module additionally includes a circuit modeling tool for connecting a finite element model of a system to a circuit model, as well as an interface for modeling the transmission line equations.

As you delve deeper into COMSOL Multiphysics and wave electromagnetics modeling, please also read our other blog posts on meshing and solving options; various material models that you are able to use; as well as the boundary conditions available for modeling metallic objects, waveguide ports, and open boundaries. These posts will provide you with the foundation you need to model wave electromagnetics problems with confidence.

If you have any questions about the capabilities of using COMSOL Multiphysics for wave electromagnetics and how it can be used for your modeling needs, please contact us.


When you are solving a transient model, the COMSOL software by default uses an implicit time-stepping algorithm with adaptive time step size. This has the advantage of being unconditionally stable for many classes of problems and it lets the software choose the optimal time step size for the specified solver tolerances, thereby reducing the computational cost of the solution.

Two classes of time-stepping algorithms are available: a backward difference formula (BDF) and a generalized-alpha method. These algorithms use the solutions at several previous time steps (up to five) to numerically approximate the time derivatives of the fields and to predict the solution at the next time step.
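To make the idea of a backward difference formula concrete, here is a minimal fixed-step BDF2 integrator in Python for the test equation dy/dt = -y. This is only an illustrative sketch under simplifying assumptions: COMSOL's solver is adaptive in both order and step size, and the function and values below are invented for the example.

```python
import numpy as np

def bdf2(a, y0, dt, n_steps):
    """Fixed-step BDF2 for the linear test problem dy/dt = a*y.

    BDF2 advances using the solutions at the two previous time steps;
    the very first step uses backward Euler since only one history
    point exists. Illustrative sketch only.
    """
    y = [y0]
    # Startup step (no second history point yet): backward Euler
    y.append(y[0] / (1.0 - dt * a))
    for _ in range(n_steps - 1):
        # BDF2: (3*y_{n+1} - 4*y_n + y_{n-1}) / (2*dt) = a*y_{n+1}
        y.append((4.0 * y[-1] - y[-2]) / (3.0 - 2.0 * dt * a))
    return np.array(y)

sol = bdf2(-1.0, 1.0, 0.01, 100)    # integrate dy/dt = -y to t = 1
print(abs(sol[-1] - np.exp(-1.0)))  # small error vs. the exact exp(-t)
```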

However, these previous solutions are not accessible within the model by default. The *Previous Solution* operator makes the solution at the previous time step available as a field variable in the model. This operator is available both for transient problems and for stationary problems solved using the continuation method. Let us take a look at how you can implement and use it in a transient model in COMSOL Multiphysics.

Using the *Previous Solution* operator requires only two additional features within the model tree: an *ODE and DAE* interface to store the fields that you are interested in, and the *Previous Solution* feature in the Time-Dependent Solver. Let us take a look at the implementation in terms of a transient heat transfer example: the laser heating of a wafer with a moving heat load, solved on a rotating coordinate system.

The first step is to add a *Domain ODEs and DAEs* interface to the model, since we will be interested in tracking the solution at the previous time step throughout the volume of the part. If we were only interested in the previous solution across a boundary, edge, or point, or in some global quantity, we could also use a *Boundary*, *Edge*, *Point*, or *Global ODEs and DAEs* interface.

*The* Domain ODEs and DAEs *interface for tracking the solution at the previous time step.*

The screenshot above shows the relevant settings for the *Domain ODEs and DAEs* interface. Note that the units of both the dependent variable and the source are set to Temperature; it is good modeling practice to set units appropriately. The discretization is set to Lagrange Quadratic, which matches the discretization used by the *Heat Transfer in Solids* interface; you will always want to make sure that these discretizations match. The name of the field variable is left at the default “u”, although you can rename it to anything you would like.

*The equation being solved by the* Domain ODEs and DAEs *interface.*

The screenshot above shows the equation that stores the temperature solution at the previous time step. This equation can be read as:

`u-nojac(T)=0`

The `nojac()` operator is needed, since we do not want this equation to contribute to the Jacobian (the system matrix). Lastly, we need to specify that this equation should be evaluated at the previous time step. This is done within the Solver Configurations.

*The Previous Solution feature added to the Solver Configuration.*

The screenshot above shows the *Previous Solution* feature added to the *Time-Dependent Solver*. Once you add this feature, simply select the appropriate field variable to be evaluated at the previous time step. It will also be faster (although not necessary) to use the Segregated Solver rather than the Fully Coupled solver.

And that is all there is to it. You can now solve the model just as you usually do and you will be able to evaluate the temperature at the previous computational time step.

Of course, having the solution at the previous time step isn’t really all that interesting in itself, but we can do quite a bit more than just store this solution. For example, we can apply logical expressions directly with the *ODEs and DAEs equation* interface. Consider the equation:

`u-nojac(if(T>u,T,u))=0`

This equation can be read as: “If the temperature T at the previous time step is greater than the stored value u, set u equal to T. Otherwise, leave u unchanged.”

That is, it stores the maximum temperature reached at the previous time step at every point in the modeling domain. You can now evaluate the variables T and u at any point in the model to get both the temperature over time and the maximum temperature attained. To get the maximum temperature, you will want to take the maximum of the temperature at the previous time step and the temperature at the current time step, so you can introduce a variable in the model:

`MaxTemp = max(T,u)`

This will return the maximum temperature up to that time as shown in the plot below.
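The logic of the stored-maximum equation and the `MaxTemp` variable can be sketched in plain Python, with an invented temperature history standing in for the computed field at one point:

```python
# Sketch of the logic behind u - nojac(if(T>u,T,u)) = 0 and
# MaxTemp = max(T,u): u carries the running maximum of the temperature
# at the previous time step, and MaxTemp combines it with the current T.
# The temperature history below is invented for illustration.

T_history = [300.0, 350.0, 420.0, 390.0, 360.0]  # hypothetical temperatures [K]

u = T_history[0]   # initial value of the stored field
max_temp = []
for T in T_history:
    max_temp.append(max(T, u))  # MaxTemp = max(T, u) at the current step
    u = max(T, u)               # what the ODE stores for the next step

print(max_temp)  # [300.0, 350.0, 420.0, 420.0, 420.0]
```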

*Temperature at a point plotted over time. The variable MaxTemp is also plotted and shows the maximum temperature reached up to that instant in time.*

We have shown here the implementation of the newly introduced *Previous Solution* operator for time-dependent models. The three steps to use this functionality appropriately are:

- Choose the appropriate
*ODEs and DAEs*interface and discretization. - Enter an appropriate equation.
- Add the
*Previous Solution*feature to the Solver Configurations.

We have shown how to evaluate the maximum temperature in this example, but there is a great deal more that can be done with this functionality, so stay tuned!


While many different types of laser light sources exist, they are all quite similar in terms of their outputs. Laser light is very nearly single frequency (single wavelength) and coherent. Typically, the output of a laser is also focused into a narrow collimated beam. This collimated, coherent, and single frequency light source can be used as a very precise heat source in a wide range of applications, including cancer treatment, welding, annealing, material research, and semiconductor processing.

When laser light hits a solid material, part of the energy is absorbed, leading to localized heating. Liquids and gases (and plasmas), of course, can also be heated by lasers, but the heating of fluids almost always leads to significant convective effects. Within this blog post, we will neglect convection and concern ourselves only with the heating of solid materials.

Solid materials can be either partially transparent or completely opaque to light at the laser wavelength. Depending upon the degree of transparency, different approaches for modeling the laser heat source are appropriate. Additionally, we must concern ourselves with the relative scale as compared to the wavelength of light. If the laser is very tightly focused, then a different approach is needed compared to a relatively wide beam. If the material interacting with the beam has geometric features that are comparable to the wavelength, we must additionally consider exactly how the beam will interact with these small structures.

Before starting to model any laser-material interactions, you should first determine the optical properties of the material that you are modeling, both at the laser wavelength and in the infrared regime. You should also know the relative sizes of the objects you want to heat, as well as the laser wavelength and beam characteristics. This information will be useful in guiding you toward the appropriate approach for your modeling needs.

In cases where the material is opaque, or very nearly so, at the laser wavelength, it is appropriate to treat the laser as a surface heat source. This is most easily done with the *Deposited Beam Power* feature (shown below), which is available with the Heat Transfer Module as of COMSOL Multiphysics version 5.1. It is, however, also quite easy to manually set up such a surface heat load using only the COMSOL Multiphysics core package, as shown in the example here.

A surface heat source assumes that the energy in the beam is absorbed over a negligibly small distance into the material relative to the size of the object that is heated. The finite element mesh only needs to be fine enough to resolve the temperature fields as well as the laser spot size. The laser itself is not explicitly modeled, and it is assumed that the fraction of laser light that is reflected off the material is never reflected back. When using a surface heat load, you must manually account for the absorptivity of the material at the laser wavelength and scale the deposited beam power appropriately.
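As a sketch of that bookkeeping, the snippet below builds a Gaussian surface heat flux scaled by an assumed absorptivity and checks that it deposits the expected total power. The beam power, spot radius, and absorptivity values are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

# Illustrative surface heat source for an opaque material: a Gaussian beam
# profile scaled by the material's absorptivity at the laser wavelength.
P0 = 50.0           # total beam power [W] (assumed)
absorptivity = 0.8  # fraction absorbed at the laser wavelength (assumed)
w = 1e-3            # Gaussian beam radius [m] (assumed)

def q_surface(r):
    """Absorbed heat flux [W/m^2] at radial distance r from the beam axis."""
    return absorptivity * (2.0 * P0 / (np.pi * w**2)) * np.exp(-2.0 * r**2 / w**2)

# Sanity check: integrating the flux over the surface should recover
# absorptivity * P0, i.e., the reflected fraction never comes back.
r = np.linspace(0.0, 5 * w, 2000)
g = q_surface(r) * 2.0 * np.pi * r                  # annular integrand
total = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r)) # trapezoidal rule
print(total)  # close to absorptivity * P0 = 40 W
```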

*The Deposited Beam Power feature in the Heat Transfer Module is used to model two crossed laser beams. The resultant surface heat source is shown.*

In cases where the material is partially transparent, the laser power will be deposited within the domain, rather than at the surface, and any of the different approaches may be appropriate based on the relative geometric sizes and the wavelength.

If the heated objects are much larger than the wavelength, but the laser light itself is converging and diverging through a series of optical elements and is possibly reflected by mirrors, then the functionality in the Ray Optics Module is the best option. In this approach, light is treated as a ray that is traced through homogeneous, inhomogeneous, and lossy materials.

As the light passes through lossy materials (e.g., optical glasses) and strikes surfaces, some power deposition will heat up the material. The absorption within domains is modeled via a complex-valued refractive index. At surfaces, you can use a reflection or an absorption coefficient. Any of these properties can be temperature dependent. For those interested in using this approach, this tutorial model from our Application Gallery provides a great starting point.

*A laser beam focused through two lenses. The lenses heat up due to the high-intensity laser light, shifting the focal point.*

If the heated objects and the spot size of the laser are much larger than the wavelength, then it is appropriate to use the Beer-Lambert law to model the absorption of the light within the material. This approach assumes that the laser light beam is perfectly parallel and unidirectional.

When using the Beer-Lambert law approach, the absorption coefficient of the material and the reflection at the material surface must be known. Both of these material properties can be functions of temperature. The appropriate way to set up such a model is described in our earlier blog entry “Modeling Laser-Material Interactions with the Beer-Lambert Law”.

You can use the Beer-Lambert law approach if you know the incident laser intensity and if there are no reflections of the light within the material or at the boundaries.
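As a numerical sketch of the Beer-Lambert law, the snippet below computes the intensity decay and the resulting volumetric heat source with depth; the incident intensity, reflectance, and absorption coefficient are illustrative assumed values.

```python
import numpy as np

# Beer-Lambert law sketch: a collimated beam entering a semitransparent
# solid decays exponentially with depth; the local absorbed power density
# is the volumetric heat source. All values below are assumed.
I0 = 1e7       # incident intensity [W/m^2]
R = 0.3        # surface reflectance (could be temperature dependent)
alpha = 100.0  # absorption coefficient [1/m] (could be temperature dependent)

z = np.linspace(0.0, 0.05, 200)          # depth into the material [m]
I = I0 * (1.0 - R) * np.exp(-alpha * z)  # intensity vs. depth
Q = alpha * I                            # volumetric heat source [W/m^3]

# Energy check: total absorbed power per unit area equals I(0) - I(z_max)
absorbed = np.sum(0.5 * (Q[1:] + Q[:-1]) * np.diff(z))  # trapezoidal rule
print(absorbed, I[0] - I[-1])  # the two values should agree closely
```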

*Laser heating of a semitransparent solid modeled with the Beer-Lambert law.*

If the heated domain is large, but the laser beam is tightly focused within it, neither the ray optics nor the Beer-Lambert law modeling approach can accurately solve for the fields and losses near the focus. These techniques do not directly solve Maxwell’s equations, but instead treat light as rays. The beam envelope method, available within the Wave Optics Module, is the most appropriate choice in this case.

The beam envelope method solves the full Maxwell’s equations in the case where the field envelope is slowly varying. The approach is appropriate whenever the wave vector, that is, the direction in which the light is traveling, is approximately known throughout the modeling domain. This is the case when modeling focused laser light as well as waveguide structures like a Mach-Zehnder modulator or a ring resonator. Since the beam direction is known, the finite element mesh can be very coarse in the propagation direction, thereby reducing computational costs.

*A laser beam focused in a cylindrical material domain. The intensity at the incident side and within the material are plotted, along with the mesh.*

The beam envelope method can be combined with the *Heat Transfer in Solids* interface via the *Electromagnetic Heat Source* multiphysics couplings. These couplings are automatically set up when you add the *Laser Heating* interface under *Add Physics*.

*The* Laser Heating *interface adds the* Beam Envelopes *and the* Heat Transfer in Solids *interfaces and the multiphysics couplings between them.*

Finally, if the heated structure has dimensions comparable to the wavelength, it is necessary to solve the full Maxwell’s equations without assuming any propagation direction of the laser light within the modeling space. Here, we need to use the *Electromagnetic Waves, Frequency Domain* interface, which is available in both the Wave Optics Module and the RF Module. Additionally, the RF Module offers a *Microwave Heating* interface (similar to the *Laser Heating* interface described above) that couples the *Electromagnetic Waves, Frequency Domain* interface to the *Heat Transfer in Solids* interface. Despite the nomenclature, the RF Module and the *Microwave Heating* interface are appropriate over a wide frequency band.

The full-wave approach requires a finite element mesh that is fine enough to resolve the wavelength of the laser light. Since the beam may scatter in all directions, the mesh must be reasonably uniform in size. A good example of using the *Electromagnetic Waves, Frequency Domain* interface is modeling the losses in a gold nanosphere illuminated by a plane wave, as illustrated below.

*Laser light heating a gold nanosphere. The losses in the sphere and the surrounding electric field magnitude are plotted, along with the mesh.*

You can use any of the previous five approaches to model the power deposition from a laser source in a solid material. Modeling the temperature rise and heat flux within and around the material additionally requires the *Heat Transfer in Solids* interface. Available in the core COMSOL Multiphysics package, this interface is suitable for modeling heat transfer in solids and features fixed temperature, insulating, and heat flux boundary conditions. The interface also includes various boundary conditions for modeling convective heat transfer to the surrounding atmosphere or fluid, as well as modeling radiative cooling to ambient at a known temperature.

In some cases, you may expect that there is also a fluid that provides significant heating or cooling to the problem and cannot be approximated with a boundary condition. For this, you will want to explicitly model the fluid flow using the Heat Transfer Module or the CFD Module, which can solve for both the temperature and flow fields. Both modules can solve for laminar and turbulent fluid flow. The CFD Module, however, has certain additional turbulent flow modeling capabilities, which are described in detail in this previous blog post.

For instances where you are expecting significant radiation between the heated object and any surrounding objects at varying temperatures, the Heat Transfer Module has the additional ability to compute gray body radiative view factors and radiative heat transfer. This is demonstrated in our Rapid Thermal Annealing tutorial model. When you expect the temperature variations to be significant, you may also need to consider the wavelength-dependent surface emissivity.

If the materials under consideration are transparent to laser light, it is likely that they are also partially transparent to thermal (infrared-band) radiation. This infrared light will be neither coherent nor collimated, so we cannot use any of the above approaches to describe the reradiation within semitransparent media. Instead, we can use the radiation in participating media approach. This technique is suitable for modeling heat transfer within a material, where there is significant heat flux inside the material due to radiation. An example of this approach from our Application Gallery can be found here.

In this blog post, we have looked at the various modeling techniques available in the COMSOL Multiphysics environment for modeling the laser heating of a solid material. Surface heating and volumetric heating approaches are presented, along with a brief overview of the heat transfer modeling capabilities. Thus far, we have only considered the heating of a solid material that does not change phase. The heating of liquids and gases — and the modeling of phase change — will be covered in a future blog post. Stay tuned!


One helpful technique (which we’ve occasionally shown in other posts, but haven’t explicitly named) is to layer different surfaces together in a single plot group. This allows you to view several different results at once. For instance, you can combine plots that show the temperature and fluid flow for a CFD model, or show both the stress and deformation in a simulation of fluid-structure interaction. Let’s take the example of an aluminum heat sink, which was shown in this blog post on using surface, volume, and line plots. The plot below shows the temperature in the heat sink and the surrounding domain:

Adding a second plot simply requires right-clicking on the *plot group* node and choosing the next plot type you want. The new plot will be added on top of the first one and shown in the results tree. For this example, we can also add an arrow plot showing the fluid flow field past the heat sink:

After adding a second plot, the plot group node looks like this (and the arrow volume contains a color expression, not shown):

We can also add, for instance, a contour plot to see the temperature variation more clearly around the heat sink:

That addition makes the plot group tree look like this:

Note that there is no limit to the number of plots that can be added to a plot group, although it’s important to make sure that the plot doesn’t get too crowded. In certain cases, plots may interfere with each other — for example, it wouldn’t make sense to include a surface plot of the fluid flow velocity directly on top of the temperature plot, because it isn’t possible to see both at once. Even in this case, the contour and the arrow plots together make it look a little too busy.

In the previous example, you might have noticed that the *contour* plot showing the temperature lines around the base of the heat sink has a different color scheme than the *surface* plot showing the temperature in the heat sink and surrounding domain. This could become confusing, especially since the rainbow color table is used in the arrow plot — which is showing fluid flow, not temperature. So as not to be misleading, we can change the two temperature plots to have the same color scheme (and make sure it’s different than the color scheme for plots showing other physics).

The obvious way to do this is simply to change the color table in the settings for each plot. In this case, we can change the contour plot to the thermal color table:

However, for a case where there are many subplots or there might be frequent changes made to style choices, it’s more beneficial to use the *Inherit Style* options. These enable you to inherit the style settings from a previous plot. We can set the contour plot to inherit its style from the temperature surface, locking the color settings in the contour plot unless we reset the inherit options:

There are several checkboxes (shown above) that allow you to choose exactly which aspects of the plot’s style to keep. This is especially helpful for certain plot types where you have manually adjusted a setting — for instance, where a manual scale factor has been introduced and you want to make sure that all the successive plots maintain the scale.

Choosing to inherit the style from a previous plot also means that any changes made to the “parent” (the plot the settings are inherited from) will cause automatic changes in any plots relying on its style settings.

For COMSOL Multiphysics version 5.1, new buttons have been added to the graphics window toolbar that allow quick enabling and disabling of the grid, the coordinate axis symbol, and the color legends:

This means that it’s no longer necessary to go into the View node, and you can easily turn them off and on temporarily as needed.

Another helpful tool (again shown in previous posts, but seldom discussed) is the *Wireframe* rendering option in the Coloring and Style tab. This checkbox allows you to display the mesh elements in a surface plot.

In many cases, it’s preferable to overlay two surface plots (as discussed earlier) so that there is a “background” for the mesh. For instance, in the heat sink, we might use a temperature surface on the heat sink, and then include a black mesh on top:

The mesh doesn’t have to be a uniform color (and will show an appropriate color gradient if a physics expression is used for the surface plot), but this feature is normally used with a gray or black mesh on top of a surface of a different color.

COMSOL Multiphysics contains several options for exporting results such as images, animations, and reports. These are quite flexible and may be adapted to your specific needs.

For exporting images from the graphics window, use the *Image Snapshot* button on the toolbar:

You can adjust the size and resolution, and control which aspects of the visible graphic are included in the image:

To create animations, right-click on the *Export* node and choose *Animation*. This will create a subnode where you can adjust the settings for a video to be exported to your chosen file type:

Among the settings are fields for controlling the number of frames, size, and speed of the video. Like the graphics export options, there are also choices for resolution and layout.

Note: The Player node is very similar to the Animation node (also accessible through the ribbon or by right-clicking on the Export node), except that it doesn’t save a file. Rather, this feature enables you to play and watch the animation directly in the graphics window.

Finally, the *Reports* node allows you to generate a report of your results as an HTML or Microsoft® Word file. Several options are available for different amounts of detail, as well as a *Custom Report* option that lets you choose exactly which aspects of the simulation you’d like to include.

A complete report will include everything in the Model Builder, from the parameters table to the final results plots.

Lastly, a little-known fact about the COMSOL Desktop is that you can rearrange the windows using the drag-and-drop method. All of the windows below the ribbon can be moved and dropped into different sections. Release the mouse over one of the blue boxes to set it in the corresponding position (shown below):

If you ever want to reset the Desktop layout, the Reset Desktop button on the ribbon will revert the windows to the default arrangement. This feature also allows you to choose between two different layouts, depending on the size of your window and monitor:

This completes our overview of using additional tricks and tools for postprocessing in COMSOL Multiphysics. We hope that you’ll find these techniques useful in your own work for visualizing, understanding, and sharing your simulation results. Thanks for reading!

- Postprocessing handbook
- Postprocessing webinar
- Read more from our Postprocessing blog series

A *deformation* refers to a node that can be added to a plot to deform the visual results according to a chosen vector quantity. In structural mechanics applications, you might want to show the displacement of a part. In acoustics models, which we’ll discuss most extensively here, you can visualize the actual shape of the wave.

Let’s consider the example of a condenser microphone, which can be found under *File > Application Libraries > Acoustics Module > Electroacoustic Transducers > condenser microphone*, or downloaded here. For those of you who do not have the Acoustics Module installed, note that you can still open the tutorial model to investigate the settings and run postprocessing.

The condenser microphone tutorial model studies the deformation of a membrane (a diaphragm) in the microphone that transforms mechanical displacement into an AC signal. An existing plot in the solved tutorial model called *3D Membrane deformation* already shows the deformation of the diaphragm:

The surface node plots the displacement field in the vertical direction at a chosen frequency. The plot below illustrates the maximum frequency in the simulation, with the deformation disabled (to disable a node, right-click and choose *Disable*):

To add a deformation from scratch, right-click the appropriate plot node and choose *Deformation*:

Deformations can be applied to most 2D and 3D plot types — arrow, isosurface, contour, streamline, surface, slice, and volume. In the settings, you can choose the relevant vector quantity (in this case, the displacement field):

With the deformation plot re-enabled, we can see the changes in the shape of the membrane for this frequency:

If you take another quick look at the settings, you’ll notice that there is a very large scale factor:

Deformations are often scaled by a large factor in order to make microscale changes and deflections visible, or to shrink enormous deformations so they don’t obscure important parts of the model. In very small devices such as MEMS hardware or in the case of this microphone, real deformations are not always visible to the naked eye.

*Height expressions* are a special kind of deformation that, rather than being based on a vector quantity, plot a single variable. This type of deformation allows you to transform a 2D plot into a 3D plot. In another acoustics simulation (found under *File > Application Libraries > Acoustics Module > Piezoelectric Devices > piezoacoustic transducer* or downloaded here), one plot uses a height expression to demonstrate the acoustic pressure field generated by a piezoacoustic transducer.

The pressure field is originally plotted on a surface in two dimensions:

A height expression, however, will turn this surface into a 3D plot, showing the height of the peaks and troughs:

This particular height expression pulls its settings from the parent node; however, it can also be set to use an expression selected by the user. In this tutorial model, the default expression for the height data bases the height of the surface at each point on the magnitude of the acoustic pressure field:

Height expressions also have an offset toggle bar (shown above) that allows you to manually shift the entire deformed structure in the *z*-direction. The plot below indicates the results, with an offset of 1.5.

Finally, we’ll take a look at 2D deformations that are very useful for creating *periodic arrays* — arrangements where an object or pattern is repeated over and over, but usually only a single cell or slice of the device has been modeled.

To demonstrate this, we’ll turn to the plasmonic wire grating tutorial model. The tutorial model can be found under *File > Application Libraries > RF Module > Demo Tutorials > plasmonic wire grating* or downloaded here. This example computes transmission and reflection coefficients for a planar electromagnetic wave incident on a wire grating. Rather than modeling the entire device, a unit cell representing only one bar of the grating is used. However, the tutorial model contains a periodic condition to indicate that, in the real structure, the cell is repeated on either side of itself.

The results for the plasmonic wire grating tutorial model show the electric field norm on the grating for a chosen angle of incidence. Although the solved tutorial model in the Application Libraries contains an array data set, we have generated a new surface plot that demonstrates the creation of an array using deformations:

In some cases, a 2D array data set is faster and easier to use than a deformation; however, individual deformations enable you to control the exact placement of different copies of a solution. Here, we have translated a second surface by *d* nanometers (nm), which is the width of the unit cell, in the positive *x*-direction:

The scaling factor is set to *1* to ensure that the surface moves by the correct distance. Notice that the title is now duplicated as well. The plot title can be disabled in the node for each surface plot in order to avoid repeats. The same can be done for color legends.

We can duplicate this surface and simply change the expression for the *x*-component in each plot, and thus line up several cells next to each other:

The figure below illustrates the results with four translated copies of the surface. The original plot is in the center, with the outline of the data set shown:

While we won’t go into detail here about other cases of using deformations, you might also want to show displacements in many structural mechanics and fluid models. For example, the plot below shows the displacement of a reciprocating engine in motion, with the outlines of the original position:

In this tutorial model of a micromirror, a deformation shows the displacement field (with *u*, *v*, and *w* components) to depict the mirror’s response to different levels of prestress. The outline of the original position (flat) is shown underneath:

Deformations can also be used to creatively illustrate fluid flow so the meaning becomes clearer. In the figure below, a line plot is deformed according to the velocity of air flowing across a heat sink:

That wraps up our blog post on deformations. In the next blog post of our postprocessing series, we will focus on various tips and tricks designed to advance your postprocessing techniques!


Recall our simple example of 1D heat transfer at steady state with no heat source, where the temperature T is a function of the position x in the domain defined by the interval 1 \le x \le 5. We imposed the Neumann boundary condition such that the outgoing flux should be 2 at the left boundary (x=1) and the Dirichlet boundary condition such that the temperature should be 9 at the right boundary (x=5). After discretizing the weak form equation, we obtained this matrix equation:

\left( \begin{array}{cccccc}
1 & -1 & 0 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 & 0 \\
0 & -1 & 2 & -1 & 0 & 0 \\
0 & 0 & -1 & 2 & -1 & 0 \\
0 & 0 & 0 & -1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array} \right)
\left( \begin{array}{c} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ \lambda_2 \end{array} \right)
= \left( \begin{array}{c} -2 \\ 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{array} \right)

where a_1, a_2, \cdots , a_5 are unknown temperature values at the nodal points (x=1, 2, \cdots, 5) and \lambda_2 is an unknown heat flux at the right boundary (x=5). The matrix on the left-hand side is customarily called the *stiffness matrix* and the vector on the right is called the *load vector*, due to the application of this technique in structural mechanics.

The steps for implementing the weak form equation in COMSOL Multiphysics have been discussed in this earlier blog entry, thus we will not repeat them here. To view the stiffness matrix and load vector, right-click *Solution 1* in the model tree → *Other* → *Assemble*, as shown in the screenshot below:

Then, in the settings window for the *Assemble 1* node, we can select the matrices of interest by checking the corresponding checkbox for each item. After solving, we can evaluate the matrices by right-clicking *Derived Values* and then selecting *System Matrix*, as illustrated below:

In the corresponding settings window, we can select *Stiffness matrix* from the *Matrix* drop-down menu and evaluate it in a table. As indicated in the screenshot below, we obtain exactly the same 6×6 matrix as the one in Eq. (1).

The load vector on the right-hand side of Eq. (1) can be evaluated and verified by the same procedure.

We mentioned before that it is sometimes desirable not to solve for the Lagrange multiplier \lambda_2 — for example, to save computation resources. To do so, we right-click the *Weak Form PDE* main node → *More* → *Pointwise Constraint*, as depicted below:

Then, in the settings window, we enter 9-T for the *Constraint expression* input field. After solving, we obtain exactly the same solution, but a smaller 5×5 stiffness matrix:

If we look at this matrix closely, we find that it matches the upper left-hand part of the 6×6 matrix that we observed earlier. This should not be too surprising, as we are solving exactly the same problem (represented by Eq. (1)), just in a slightly different way. We will briefly discuss this next.

When we implement the Dirichlet boundary condition using the *Weak Contribution* feature with a Lagrange multiplier \lambda_2 (as shown in this previous blog post), we effectively ask that the entire matrix equation (1) be solved to yield the coefficients a_1, a_2, \cdots , a_5 as well as the multiplier \lambda_2.

On the other hand, when we implement the fixed boundary condition using the *Pointwise Constraint* feature as shown above, we essentially ask that the same matrix equation (1) be solved without explicitly solving for the Lagrange multiplier \lambda_2. The software effectively segregates Eq. (1) into the following form:

\left(\begin{array}{cc}
K & N_F \\
N & 0 \end{array}\right)
\left(\begin{array}{c}
U \\
\Lambda \end{array}\right)
=
\left(\begin{array}{c}
L \\
M \end{array}\right)
\qquad (2)

where the stiffness matrix K is the 5×5 matrix shown above, the constraint Jacobian matrix N is (0 \, 0 \, 0 \, 0 \, 1), the constraint force Jacobian matrix N_F is (0 \, 0 \, 0 \, 0 \, 1)^T, the solution vector U is formed by the coefficients a_1, a_2, \cdots , a_5, the (one-element) vector \Lambda is formed by the Lagrange multiplier \lambda_2, the load vector L is (-2 \, 0 \, 0\, 0\, 0)^T, and the (one-element) constraint vector M is (9).
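For our small example, we can mimic this segregation directly in a few lines of NumPy (again a check outside COMSOL) by assembling the blocks of Eq. (2) and confirming that the block system reproduces the solution of the full 6×6 system:

```python
import numpy as np

# Blocks of Eq. (2), as listed in the text
K  = np.array([[ 1, -1,  0,  0,  0],
               [-1,  2, -1,  0,  0],
               [ 0, -1,  2, -1,  0],
               [ 0,  0, -1,  2, -1],
               [ 0,  0,  0, -1,  1]], dtype=float)
N  = np.array([[0, 0, 0, 0, 1]], dtype=float)  # constraint Jacobian
NF = N.T                                       # constraint force Jacobian
L  = np.array([-2, 0, 0, 0, 0], dtype=float)
M  = np.array([9.0])

# Assemble [[K, NF], [N, 0]] (U; Lambda) = (L; M) and solve
A   = np.block([[K, NF], [N, np.zeros((1, 1))]])
rhs = np.concatenate([L, M])
sol = np.linalg.solve(A, rhs)
U, Lam = sol[:5], sol[5]   # exact values: U = (1, 3, 5, 7, 9), Lam = -2
```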

This segregation of Eq. (1) is shown graphically below:

Of course, in reality, when the *Pointwise Constraint* feature is specified, the software does not bother to assemble the full 6×6 matrix in Eq. (1). Instead, it assembles K, N, and N_F, which requires less computational resources than assembling the full matrix.

Eq. (2) can be written as a system of two matrix equations:

K \, U + N_F \, \Lambda = L \qquad (3)

N \, U = M \qquad (4)

where the first one (3) contains the part of the discretized weak form equation with heat flux boundary conditions and the second one (4) contains the constraint imposed by the Dirichlet boundary condition.

The equation system can be solved in two steps. In the first step, the constraint equation (4) is solved for the degree(s) of freedom involved with the Dirichlet boundary condition. In the second step, the solution from the first step is substituted into Eq. (3) to solve for the remaining degrees of freedom.

The resulting solution vector U is then given as the sum of the contributions from the two steps:

U = U_d + Null \, U_n \qquad (5)

The first term U_d is the solution vector of the constraint equation (4) solved in the first step. The second term is obtained from the second step in the form of the product of a matrix Null and a solution vector U_n. The columns of the matrix Null are formed by the basis vectors spanning the null space of the constraint Jacobian matrix N. So, we have

N \, Null \equiv 0

The solution vector U_n is the solution to the *eliminated* matrix equation

K_c \, U_n = L_c \qquad (6)

where the *eliminated stiffness matrix* K_c and the *eliminated load vector* L_c are given by

\begin{align}
K_c &= Nullf^T \, K \, Null \\
L_c &= Nullf^T \, (L - K \, U_d)
\end{align}

and the columns of the matrix Nullf are formed by the basis vectors spanning the null space of the transpose of the constraint force Jacobian matrix N_F. So, we have

Nullf^T \, N_F \equiv 0

The term *eliminated* indicates that the degree(s) of freedom involved in the Dirichlet boundary condition have been removed from the matrix equation (6). In our example, the size of the eliminated stiffness matrix K_c is 4×4.

Notice that the solution procedure described above does not involve the Lagrange multiplier vector \Lambda. Indeed, the value of the Lagrange multiplier \lambda_2 is left unsolved by this procedure. The advantage of this method is that the required computational resources are reduced. For our simple example, the size of the matrix is reduced from 6×6 to 4×4 (plus an even smaller one for the constraint equation).
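The whole elimination procedure is compact enough to sketch in NumPy for our example. Note that the null-space bases are written out by hand below, which is only practical because the constraint is so simple; this is a toy illustration of the procedure, not how the software constructs them:

```python
import numpy as np

K   = np.array([[ 1, -1,  0,  0,  0],
                [-1,  2, -1,  0,  0],
                [ 0, -1,  2, -1,  0],
                [ 0,  0, -1,  2, -1],
                [ 0,  0,  0, -1,  1]], dtype=float)
N   = np.array([[0, 0, 0, 0, 1]], dtype=float)
L   = np.array([-2, 0, 0, 0, 0], dtype=float)

# Step 1: a particular solution U_d of the constraint N U = M = (9)
U_d = np.array([0, 0, 0, 0, 9], dtype=float)

# Null-space bases: columns spanning null(N); since N_F = N^T here,
# Nullf = Null (both are simply e_1..e_4 for this one-point constraint)
Null  = np.eye(5)[:, :4]
Nullf = Null

# Step 2: form and solve the eliminated system of Eq. (6)
K_c = Nullf.T @ K @ Null          # 4x4 eliminated stiffness matrix
L_c = Nullf.T @ (L - K @ U_d)     # eliminated load vector
U_n = np.linalg.solve(K_c, L_c)

U = U_d + Null @ U_n              # Eq. (5); exact result is (1, 3, 5, 7, 9)
```

As the text notes, \lambda_2 is never computed, and the system actually solved is only 4×4.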

COMSOL Multiphysics allows us to evaluate and view any of the matrices and vectors mentioned above. All we have to do is to follow the same procedure outlined in the previous section on viewing the stiffness matrix and load vector. However, we quickly find out that we will spend a lot of time clicking on each System Matrix node in the Model Builder and then clicking on the “Evaluate” button in the settings window. Also, we can only see one matrix or vector at a time, making it tedious to examine all the matrices and vectors.

This represents one of the many situations in which a COMSOL application can help enhance the modeling experience and productivity (learn about application webinars below). The core package of COMSOL Multiphysics can be used to build a COMSOL app based on any COMSOL model by wrapping a user interface (UI) around it. The UI is completely customizable and can easily be configured to suit individual modeling needs.

For an introduction to COMSOL applications, see the following webinars: Intro to COMSOL Multiphysics® 5.0 and the Application Builder (with a focus on the second half) and How to Build and Run Simulation Apps with COMSOL Server™.

The following screenshots show just one of the essentially infinite number of ways that a UI can be arranged to serve as a convenient tool for investigating the matrices and vectors. The app is set up to switch among several different kinds of boundary conditions via checkboxes. All matrices and vectors are evaluated and displayed by a single click of the “Compute” button. Here is a screenshot of the case in which the full 6×6 matrix is used to solve:

We see that N, N_F, and M are empty and K_c remains the full size of 6×6. In contrast, here is the screenshot for the second case where the eliminated 4×4 matrix is used to solve:

Using our simple heat transfer example, we have explored the two different ways to solve the matrix equation obtained from discretizing the weak form equation. We found that it can be much simpler to evaluate and inspect the various matrices and vectors involved in the solution process by setting up a COMSOL application. In the next blog posts, we will continue to use COMSOL apps to help us investigate more complex examples.

As a general-purpose development environment for multiphysics simulation, the COMSOL Desktop® environment must be structured with a limited arrangement of UI elements. A COMSOL app frees us from such limitations, providing the power to build specialized UIs for unique requirements. Additionally, the built-in coding functionality allows for much more versatile computations, and a COMSOL Server™ license enables the deployment of apps for worldwide access by coworkers as well as clients.

We hope that you will use these powerful features in COMSOL Multiphysics to boost your own work efficiency!


When light is incident upon a semi-transparent material, some of the energy will be absorbed by the material itself. If we can assume that the light is single wavelength, collimated (such as from a laser), and experiences very minimal refraction, reflection, or scattering within the material itself, then it is appropriate to model the light intensity via the *Beer-Lambert law*. This law can be written in differential form for the light intensity I as:

\frac{\partial I }{\partial z} = \alpha(T) I

where *z* is the coordinate along the beam direction and \alpha(T) is the temperature-dependent absorption coefficient of the material. Because this temperature can vary in space and time, we must also solve the governing partial differential equation for temperature distribution within the material:

\rho C_p \frac{\partial T }{\partial t}-\nabla \cdot (k \nabla T)= Q = \alpha(T) I

where the heat source term, Q, equals the absorbed light. These two equations present a bidirectionally coupled multiphysics problem that is well suited for modeling within the core architecture of COMSOL Multiphysics. Let’s find out how…
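Before building the model, it is instructive to integrate the intensity equation by itself. Here is a short, purely illustrative Python sketch (the numbers are assumptions chosen for this post) that marches the Beer-Lambert decay along the beam path at a fixed temperature and compares the result with the closed-form exponential solution:

```python
import math

alpha = 50.0    # absorption coefficient at 300 K, 1/m
I0 = 3.0e6      # incident intensity, W/m^2 (i.e., 3 W/mm^2)
depth = 0.025   # beam path through 25 mm of material
n = 10000
ds = depth / n  # s is the coordinate along the propagation direction

I = I0
for _ in range(n):       # explicit march of dI/ds = -alpha * I
    I -= alpha * I * ds

exact = I0 * math.exp(-alpha * depth)   # Beer-Lambert in closed form
print(I / I0, exact / I0)               # both ~0.2865
```

With these values, roughly 29% of the light reaches the far face; the rest is absorbed along the way and becomes the heat source in the thermal problem.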

We will consider the problem shown above, which depicts a solid cylinder of material (20 mm in diameter and 25 mm in length) with a laser incident on the top. To reduce the model size, we will exploit symmetry and consider only one quarter of the entire cylinder. We will also partition the domain into two volumes. These volumes represent the same material, but we will only solve the Beer-Lambert law on the inside domain, the only region that the beam is heating up.

To implement the Beer-Lambert law, we will begin by adding the *General Form PDE* interface with the *Dependent Variables* and *Units* settings, as shown in the figure below.

*Settings for implementing the Beer-Lambert law. Note the Units settings.*

Next, the equation itself is implemented via the *General Form PDE* interface, as illustrated in the following screenshot. Aside from the source term, f, all terms within the equation are set to zero; thus, the equation being solved is f=0. The source term is set to **Iz-(50[1/m]*(1+(T-300[K])/40[K]))*I**, where the partial derivative of light intensity with respect to the *z*-direction is **Iz**, and the absorption coefficient is **(50[1/m]*(1+(T-300[K])/40[K]))**, which introduces a temperature dependency for illustrative purposes. This one line implements the Beer-Lambert law for a material with a temperature-dependent absorption coefficient, assuming that we will also solve for the temperature field, **T**, in our model.

*Implementation of the Beer-Lambert law with the* General Form PDE *interface.*

Since this equation is linear and stationary, the *Initial Values* do not affect the solution for the intensity variable. The *Zero Flux* boundary condition is appropriate on most faces, with the exception of the illuminated face. We will assume that the incident laser light intensity follows a Gaussian distribution with respect to distance from the origin. At the origin, and immediately above the material, the incident intensity is 3 W/mm^{2}. Some of the laser light will be reflected at the dielectric interface, so the intensity of light at the surface of the material is reduced to 0.95 of the incident intensity. This condition is implemented with a *Dirichlet Boundary Condition*. At the face opposite to the incident face, the Zero Flux boundary condition simply means that any light reaching that boundary will leave the domain.

*The Dirichlet Boundary Condition sets the incident light intensity within the material.*

With the settings described above, the problem of temperature-dependent light absorption governed by the Beer-Lambert law has been implemented. It is, of course, also necessary to solve for the temperature variation in the material over time. We will consider an arbitrary material with a thermal conductivity of 2 W/m/K, a density of 2000 kg/m^{3}, and a specific heat of 1000 J/kg/K that is initially at 300 K and subject to a volumetric heat source.

The heat source itself is simply the absorption coefficient times the intensity, or equivalently, the derivative of the intensity with respect to the propagation direction, which can be entered as shown below.

*The heat source term is the absorbed light.*
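To see the two-way coupling in action, here is a deliberately crude 1D finite-difference sketch. All discretization choices (grid, explicit time step, insulated ends, roughly one second of heating) are assumptions for illustration; this is not the 3D COMSOL model. Each step marches the intensity down the column using the current temperatures, deposits the absorbed light as the heat source Q = \alpha(T) I, and then updates the temperature:

```python
# Illustrative 1D sketch of the coupled Beer-Lambert / heat transfer problem.
# Material data from the text; all discretization choices are assumptions.
k_th, rho, cp = 2.0, 2000.0, 1000.0   # W/m/K, kg/m^3, J/kg/K
I0 = 3.0e6                            # surface intensity, W/m^2
nz, depth = 100, 0.025                # 25 mm column
dz = depth / nz
dt = 0.4 * rho * cp * dz**2 / k_th    # stable explicit time step
T = [300.0] * (nz + 1)                # initial temperature, K

def alpha(T):
    """Temperature-dependent absorption coefficient from the text, 1/m."""
    return 50.0 * (1.0 + (T - 300.0) / 40.0)

for step in range(40):                # roughly 1 s of heating
    # March the intensity down the column using the current temperatures
    I = [I0]
    for i in range(nz):
        I.append(I[i] * (1.0 - alpha(T[i]) * dz))
    # Update the temperature: conduction plus the absorbed light Q = alpha*I
    Tn = T[:]
    for i in range(1, nz):
        lap = (T[i-1] - 2.0 * T[i] + T[i+1]) / dz**2
        Q = alpha(T[i]) * I[i]
        Tn[i] = T[i] + dt * (k_th * lap + Q) / (rho * cp)
    Tn[0], Tn[nz] = Tn[1], Tn[nz - 1]  # insulated ends for simplicity
    T = Tn

print(max(T) - 300.0)  # peak temperature rise, near the illuminated face
```

Even this toy version shows the feedback loop: as the material heats up, \alpha(T) grows, so absorption concentrates closer to the illuminated face, which heats that region faster still.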

Most other boundaries can be left at the default *Thermal Insulation*, which will also be appropriate for implementing the symmetry of the temperature field. However, at the illuminated boundary, the temperature will rise significantly and radiative heat loss can occur. This can be modeled with the *Diffuse Surface* boundary condition, which takes the ambient temperature of the surroundings and the surface emissivity as inputs.

*Thermal radiation from the top face to the surroundings is modeled with the Diffuse Surface boundary condition.*

It is worth noting that using the Diffuse Surface boundary condition implies that the object radiates as a gray body. However, the gray body assumption would imply that this material is opaque. So how can we reconcile this with the fact that we are using the Beer-Lambert law, which is appropriate for semi-transparent materials?

We can resolve this apparent discrepancy by noting that the material absorptivity is highly wavelength-dependent. At the wavelength of incident laser light that we are considering in this example, the penetration depth is large. However, when the part heats up, it will re-radiate primarily in the long-infrared regime. At long-infrared wavelengths, we can assume that the penetration depth is very small, and thus the assumption that the material bulk is opaque for emitted radiation is valid.

It is possible to solve this model either for the steady-state solution or for the transient response. The figure below shows the temperature and light intensity in the material over time, as well as the finite element mesh that is used. Although it is not necessary to use a swept mesh in the absorption direction, applying this feature provides a smooth solution for the light intensity with relatively fewer elements than a tetrahedral mesh. The plot of light intensity and temperature with respect to depth at the centerline illustrates the effect of the varying absorption coefficient due to the rise in temperature.

*Plot of the mesh (on the far left) and the light intensity and temperature at different times.*

*Light intensity and temperature as a function of depth along the centerline over time.*

Here, we have highlighted how the *General Form PDE* interface, available in the core COMSOL Multiphysics package, can be used for implementing the Beer-Lambert law to model the heating of a semi-transparent medium. This approach is appropriate if the incident light is collimated and at a wavelength where the material is semi-transparent.

Although this approach has been presented in the context of laser heating, the incident light only needs to be collimated for this approach to be valid. The light needs to be neither coherent nor single wavelength. A wide-spectrum source can be broken down into a sum of several wavelength bands over which the material absorption coefficient is roughly constant, with each band solved using a separate *General Form PDE* interface.

In the approach presented here, the material itself is assumed to be completely opaque to ambient thermal radiation. It is, however, possible to model thermal re-radiation within the material using the *Radiation in Participating Media* physics interface available within the Heat Transfer Module.

The Beer-Lambert law does assume that the incident laser light is perfectly collimated and propagates in a single direction. If you are instead modeling a focused laser beam with gradual variations in the intensity along the optical path, then the *Beam Envelopes* interface in the Wave Optics Module is more appropriate.

In future blog posts, we will introduce these as well as alternate approaches for modeling laser-material interactions. Stay tuned!


Consider a drum head constructed by stretching a membrane over a stiff frame that encloses a flat 2D domain. The vibration of the membrane is described by the wave equation (Helmholtz equation) with the Dirichlet boundary condition at the periphery of the domain, where the membrane is constrained by the stiff frame. In this case, there is a set of discrete solutions to the wave equation, called *normal modes* or *eigenmodes*, each of which vibrates at a characteristic frequency, called an *eigenfrequency*.

The lowest eigenfrequency defines the fundamental tone, which for instance could be concert pitch A (440 Hz). The set of higher eigenfrequencies, or *overtones* in musical terms, gives rise to the tone color or timbre of the vibrating membrane. Kac’s lecture drew our attention to the eigenfrequencies: Is it possible to construct two drum heads with different shapes that share a set of eigenfrequencies? The idea was that if the two drums have an identical set of eigenfrequencies (being *isospectral*), then they would have the same timbre and sound the same to the ear, even though their shapes are different.

Kac commented on the asymptotic behavior of the eigenfrequencies in the limit of very high frequencies and made connections to various branches of physics and mathematics to provide a ground for intuitive understanding. The uniqueness question (in 2D flat space) remained unsolved until over two decades later, when Gordon, Webb, and Wolpert finally constructed two polygons with an identical set of eigenvalues (see “One cannot hear the shape of a drum” and “Isospectral plane domains and surfaces via Riemannian orbifolds”).

The eigenvalues of the two polygons can be computed numerically, which is shown in this Isospectral Drums model in our Model Gallery.
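For a shape whose spectrum is known in closed form, this kind of eigenvalue computation is easy to reproduce with a few lines of Python. The sketch below (a toy substitute for the Model Gallery model, with the grid size chosen arbitrarily) discretizes the Dirichlet Laplacian on a unit square drum with finite differences; the exact eigenvalues are \pi^2(m^2 + n^2):

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

n = 60                          # interior grid points per side (assumed)
h = 1.0 / (n + 1)
# 1D second-difference operator with Dirichlet (clamped) ends
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
Lap = -(kron(identity(n), D2) + kron(D2, identity(n)))

# Four lowest eigenvalues of the membrane (shift-invert about zero)
vals = np.sort(eigsh(Lap, k=4, sigma=0, which='LM',
                     return_eigenvectors=False))
print(vals / np.pi**2)          # ~ [2, 5, 5, 8]
```

The degenerate pair at 5\pi^2 corresponds to the (1, 2) and (2, 1) modes of the square; the isospectral polygons are remarkable precisely because two *different* geometries produce one and the same list.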

The image below shows the first three normal modes of the two polygons that share the same set of eigenfrequencies:

In Gordon and Webb’s easy-to-read introductory article on this subject (“You can’t hear the shape of a drum”, *American Scientist*, vol. 84 (1996), pp. 46–55), they commented that such isospectral drums with different shapes are expected to be the exception, not the rule. In other words, they expected that, in general, one *can* hear the shape of a drum, unless the shape of the drum is specially constructed to be isospectral with another shape, like the two polygons depicted above.

In the following discussion, we will take a closer look at such special shapes by considering various physical mechanisms involved in the sound production and detection. We will find that when we include relevant physical effects, we actually *can* tell two drums apart by the sound, even if they are specially constructed to share the same set of eigenfrequencies.

The first effect we will examine is the excitation of the vibrational modes in the membrane. Since the timbre is determined by the set of relative amplitudes of the normal modes, it is not enough to just have an identical set of eigenfrequencies for the two drums to sound the same. They also need to have the same relative amplitude for each eigenmode, which may not be trivial to achieve.

Let’s take, for example, the same two polygonal drums from above and hit them with a drum stick at a few arbitrary places, one at a time, as such:

Each location of striking is somewhere in the middle of the drum, where a child may instinctively choose to hit if given such a drum and a drum stick. We use COMSOL Multiphysics simulation software to calculate the frequency response of each of the locations and plot the results in the graphs below.

We first focus on just one drum, say, the one on the left. Here is a plot showing the left drum’s frequency response:

As we hinted at earlier, the drum sounds different depending on the location where it is struck by the drum stick. We see a different energy distribution among the first three eigenmodes, which will result in a different timbre. This is, of course, a well-known fact to percussionists and is the result of the same principle that enables a single bell to ring in two distinct tones, as demonstrated by this ancient set of bells from over two thousand years ago.

Now we know we can’t even make one drum sound the same unless we have a perfect aim of the drum stick. Is there any hope that we can make the two different drums sound the same?

In the graph below, we’ve added the frequency response of the second drum (the dashed curves). As we examine the graph, it becomes evident that none of the dashed curves match the solid curves in all three of the eigenmodes. In other words, the two drums do sound different, even though they are isospectral, sharing the same set of eigenfrequencies.

Of course, we haven’t done an exhaustive search of all the possible combinations of striking locations. However, this simple example illustrates that it is not an easy job to make the drums sound the same, due to the different coupling strengths of energy from the drum stick to the various vibrational modes of the membrane.

The magic of mathematics never ceases to amaze us. Not long after the two isospectral polygons were published, Buser, Conway, Doyle, and Semmler constructed a pair of domains that are not only isospectral (sharing the same set of eigenfrequencies), but also *homophonic*: having a special point in the domain such that “corresponding normalized Dirichlet eigenfunctions take equal values at the distinguished points” (“Some planar isospectral domains”). In other words, if the special point of each drum is hit by a drum stick, then each corresponding pair of eigenmodes of the two drums will be excited with the same amplitude and the two drums will sound the same.

Shown below are the first few normal mode shapes computed numerically:

The special point of each domain is marked with a blue square in the schematic below:

In the following graph, we plot the computed frequency response of the two drums to a narrow Gaussian area load centered on each of the special points:

Isn’t it amazing how well the two frequency response curves (solid blue curve and green circles) lie on top of each other? With such a perfectly matched vibrational energy spectrum, wouldn’t the two drums sound exactly the same? Let’s continue our journey to explore more physical effects and find out.

Our ears do not sense the vibration of the membrane directly. Rather, the sensing is mediated by the acoustic wave in the air. Let’s set up the two homophonic drums outdoors, where the sound is allowed to propagate away from the drums without significant reflection. In this case, we can easily compute the frequency spectrum of the sound wave using COMSOL Multiphysics to find out what we really hear with our ears.

Let’s take a look at the three vibrational modes with the highest energies at about 111, 146, and 184 Hz as shown in the spectral graph above. For convenience, we will call them the first, second, and third mode, with the understanding that there are other modes in between being neglected since they are much less energetic.

The polar graph below compares the computed sound pressure level (in dB) in the plane of each of the two drums, a few meters away from each drum.

We see that the sound pressure field produced by the first mode is more or less independent of direction (solid and dashed blue curves). This is not surprising, since the mode shape of each drum looks pretty much like a monopole source:

On the other hand, the directionality of the sound field from the second or the third mode of each of the drums is quite pronounced and also quite different between the two drums. For example, for the second mode, the sound field from Drum 1 looks like a dipole field (solid red curve), while the one from Drum 2 is more complex (dashed red curve). This observation again matches what we see in the mode shapes of the two drums:

What really determines the perceived timbre is the ratio of the amplitudes of the higher modes (the overtones) to the lowest mode (the fundamental tone). So, in the next graph, we plot the amplitude ratios of the second and the third modes to the first mode, at a sampling of directions:

The blue square points are from Drum 1 and the red round points from Drum 2. The graph can be viewed as a map of timbre — if two points on the graph are near each other, then they sound similar; on the other hand, if two points on the graph are far away from each other, then they have very distinct timbre. As qualitatively illustrated by the green dashed boundaries, each drum can produce a range of timbre that the other cannot, in some range of directions.

As long as a listener is allowed to move around each drum, perhaps blindfolded, he or she will hear distinct ranges of timbre that tell the two drums apart. Therefore, even though the two “homophonic” drums share the same energy spectrum in their vibration modes, due to the difference in the mode shape and to the difference in energy transfer to the sound field in the air, the acoustic energy spectrum in some range of directions can be quite different. This is what would cause the two drums to sound differently to our ears.

In the previous analysis, we ignored the reaction force exerted on the membrane by the air, the so-called *air loading effect*. It turns out that this effect is very significant for a real drum since, after all, the entire area of the membrane participates in the pushing and pulling of the air around it.

We can simulate this effect using the *Acoustic-Structure Boundary* Multiphysics coupling feature of COMSOL Multiphysics. We find, for example, that the eigenfrequency of the second mode that we were discussing shifts from 146 Hz down to about 86 Hz. In addition, the magnitudes of the shifts differ between the two drums: the eigenfrequency of one drum shifted down to 85.6 Hz, while that of the other shifted to 86.8 Hz. This amounts to a pitch difference of about 24 cents (1200·log₂(86.8/85.6)), which is very audible in a side-by-side comparison.

Therefore, not only do the two drums differ in timbre (in some range of directions), they also differ in absolute pitch when we take the air loading effect into account.

The graph below shows the frequency response of the two drums around this mode. The difference in the resonant frequency is clearly seen, as well as the difference in the width of the resonance. There should be no doubt in our mind that with such different frequency responses, the two drums will produce easily distinguishable sounds.

It is a great achievement in mathematics to invent the isospectral drums that share the same set of eigenfrequencies and the homophonic drums that share the same power spectrum of the vibrational modes when excited at a special point. However, these phenomena only happen in vacuum, where there is no sound. Once we put the drums to test in air, they start to sound differently due to the air loading effect and the directionality of the energy transfer from the membrane to the sound wave.

In his lecture, Kac told the early 20^{th}-century story of Lorentz calling for mathematicians’ attention to the eigenvalue problem involved in the theory of black body radiation and Weyl answering the call with the proof of the theorem of the asymptotic behavior of eigenvalues at very high frequencies.

Here, we could use the help of our mathematician friends again, even though the subject matter may not be as grand as black body radiation and quantum mechanics. Is it possible to construct homophonic drums with different shapes that sound the same when including directionality and air loading effects? It may be possible to pose this as an optimization problem to solve numerically for a solution with a finite set of audible frequencies.

However, the computational cost will be high and the result will be approximate. An elegant analytical solution similar to those shown in the papers mentioned above would be much nicer. I hope this will arouse the interest of mathematicians who are reading.
