Here, we will speak about the frequency-domain form of Maxwell’s equations in the *Electromagnetic Waves, Frequency Domain* interface available in the RF Module and the Wave Optics Module. The information presented here also applies to the *Electromagnetic Waves, Beam Envelopes* formulation in the Wave Optics Module.

Under the assumption that the material response is linear with field strength, we formulate Maxwell’s equations in the frequency domain, so the governing equation can be written as:

\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right)-\frac{\omega^2}{c_0^2} \left( \epsilon_r -\frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E}= 0

This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f (c_0 is the speed of light in vacuum). The other inputs are the material properties \mu_r, the relative permeability; \epsilon_r, the relative permittivity; and \sigma , the electrical conductivity. All of these material inputs can be positive or negative, real or complex-valued numbers, and they can be scalar or tensor quantities. These material properties can vary as a function of frequency as well, though it is not always necessary to consider this variation if we are only looking at a relatively narrow frequency range.

Let us now explore each of these material properties in detail.

The *electrical conductivity* quantifies how well a material conducts current — it is the inverse of the electrical resistivity. The material conductivity is measured under steady-state (DC) conditions, and we can see from the above equation that as the frequency increases, the effective resistivity of the material increases. We typically assume that the conductivity is constant with frequency, and later on we will examine different models for handling materials with frequency-dependent conductivity.

Any material with non-zero conductivity will conduct current in an applied electric field and dissipate energy as a resistive loss, also called *Joule heating*. This will often lead to a measurable rise in temperature, which will alter the conductivity. You can enter any function or tabular data for variation of conductivity with temperature, and there is also a built-in model for linearized resistivity.

*Linearized Resistivity* is a commonly used model for the variation of conductivity with temperature, given by:

\sigma = \frac{1}{\rho_0 (1 + \alpha ( T-T_{ref} )) }

where \rho_0 is the reference resistivity, T_{ref} is the reference temperature, and \alpha is the resistivity temperature coefficient. The spatially-varying temperature field, T, can either be specified or computed.
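As a quick numerical check, the linearized resistivity model can be evaluated directly. A minimal sketch follows; the copper-like values for \rho_0, \alpha, and T_{ref} are illustrative assumptions, not data from the text:

```python
# Linearized resistivity: sigma(T) = 1 / (rho0 * (1 + alpha*(T - Tref)))
def conductivity(T, rho0, alpha, Tref):
    """Electrical conductivity [S/m] from the linearized resistivity model."""
    return 1.0 / (rho0 * (1.0 + alpha * (T - Tref)))

# Illustrative, copper-like values (assumptions):
rho0 = 1.7e-8   # reference resistivity [ohm*m]
alpha = 3.9e-3  # resistivity temperature coefficient [1/K]
Tref = 293.15   # reference temperature [K]

print(conductivity(Tref, rho0, alpha, Tref))         # ~5.9e7 S/m at Tref
print(conductivity(Tref + 100, rho0, alpha, Tref))   # lower at higher T
```

Note that the conductivity decreases as the temperature rises above the reference, which is the behavior expected of a metal.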

Conductivity is entered as a real-valued number, but it can be anisotropic, meaning that the material’s conductivity varies in different coordinate directions. This is an appropriate approach if you have, for example, a laminated material in which you do not want to explicitly model the individual layers. You can enter a homogenized conductivity for the composite material, which would be either experimentally determined or computed from a separate analysis.

Within the RF Module, there are two other options for computing a homogenized conductivity: Archie’s Law for computing effective conductivity of non-conductive porous media filled with conductive liquid and a Porous Media model for mixtures of materials.

*Archie’s Law* is a model typically used for soils saturated with seawater or crude oil — fluids whose conductivity is high compared to that of the soil itself.

*Porous Media* refers to a model that has three different options for computing an effective conductivity for a mixture of up to five materials. First, the *Volume Average, Conductivity* formulation is:

\sigma_{eff}=\sum_{i=1}^n \theta_i \sigma_i

where \theta is the volume fraction of each material. This model is appropriate if the material conductivities are similar. If the conductivities are quite different, the *Volume Average, Resistivity* formulation is more appropriate:

\frac{1}{\sigma_{eff}} = \sum_{i=1}^n \frac{\theta_i}{\sigma_i}

Lastly, the *Power Law* formulation will give a conductivity lying between the other two formulations:

\sigma_{eff} = \prod_{i=1}^n \sigma_i^{\theta_i}

These models are all appropriate only if the length scale over which the material properties change is much smaller than the wavelength.
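The three averaging rules above are simple enough to sketch numerically. The volume fractions and conductivities below are made-up values for illustration; the sketch also demonstrates the claim that the Power Law result lies between the other two:

```python
import math

# Effective conductivity of a mixture, per the three Porous Media options.
# theta: volume fractions (summing to 1); sigma: conductivities [S/m].
def volume_average_conductivity(theta, sigma):
    return sum(t * s for t, s in zip(theta, sigma))

def volume_average_resistivity(theta, sigma):
    return 1.0 / sum(t / s for t, s in zip(theta, sigma))

def power_law(theta, sigma):
    return math.prod(s ** t for t, s in zip(theta, sigma))

# Illustrative two-material mixture (assumed values):
theta = [0.5, 0.5]
sigma = [1.0, 100.0]
print(volume_average_conductivity(theta, sigma))  # 50.5
print(volume_average_resistivity(theta, sigma))   # ~1.98
print(power_law(theta, sigma))                    # 10.0
```

When the two conductivities differ by two orders of magnitude, as here, the three rules give very different answers — which is why the choice of formulation matters.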

The *relative permittivity* quantifies how well a material is polarized in response to an applied electric field. It is typical to call any material with \epsilon_r>1 a *dielectric material*, though even vacuum (\epsilon_r=1) can be called a dielectric. It is also common to use the term *dielectric constant* to refer to a material’s relative permittivity.

A material’s relative permittivity is often given as a complex-valued number, where the negative imaginary component represents the loss in the material as the electric field changes direction over time. Any material experiencing a time-varying electric field will dissipate some of the electrical energy as heat. Known as *dielectric loss*, this results from the change in shape of the electron clouds around the atoms as the electric fields change. Dielectric loss is conceptually distinct from the resistive loss discussed earlier; however, from a mathematical point of view, they are actually handled identically — as a complex-valued term in the governing equation. Keep in mind that COMSOL Multiphysics follows the convention that a negative imaginary component (a positive-valued electrical conductivity) will lead to loss, while a positive imaginary component (a negative-valued electrical conductivity) will lead to gain within the material.

There are seven different material models for the relative permittivity. Let’s take a look at each of these models.

*Relative Permittivity* is the default option for the RF Module. A real- or complex-valued scalar or tensor value can be entered. The same Porous Media models described above for the electrical conductivity can be used for the relative permittivity.

*Refractive Index* is the default option for the Wave Optics Module. You separately enter the real and imaginary parts of the refractive index, called n and k, and the relative permittivity is \epsilon_r=(n-jk)^2. This material model assumes zero conductivity and unit relative permeability.

*Loss Tangent* involves entering a real-valued relative permittivity, \epsilon_r', and a scalar loss angle, \delta. The relative permittivity is computed via \epsilon_r=\epsilon_r'(1-j \tan \delta), and the material conductivity is zero.

*Dielectric Loss* is the option for entering the real and imaginary components of the relative permittivity \epsilon_r=\epsilon_r'-j \epsilon_r''. Be careful to note the sign: Entering a positive-valued real number for the imaginary component \epsilon_r'' when using this interface will lead to loss, since the multiplication by -j is done within the software. For an example of the appropriate usage of this material model, please see the Optical Scattering off of a Gold Nanosphere tutorial.
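Since the Refractive Index, Loss Tangent, and Dielectric Loss options are just three parameterizations of the same complex permittivity, the conversions can be sketched in a few lines. This is a minimal illustration of the formulas in the text, using the convention \epsilon_r = \epsilon_r' - j\epsilon_r'' (negative imaginary part means loss); the numerical values are assumptions:

```python
# Converting the scalar material-model inputs to a complex relative permittivity.
def from_refractive_index(n, k):
    return (n - 1j * k) ** 2

def from_loss_tangent(eps_real, tan_delta):
    return eps_real * (1 - 1j * tan_delta)

def from_dielectric_loss(eps_real, eps_imag):
    return eps_real - 1j * eps_imag

# Illustrative values (assumptions):
print(from_refractive_index(1.5, 0.01))   # (2.2499-0.03j)
print(from_loss_tangent(2.1, 0.001))      # (2.1-0.0021j)
print(from_dielectric_loss(2.25, 0.03))   # (2.25-0.03j)
```

In each case the imaginary part comes out negative, consistent with the sign convention for lossy materials described above.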

The *Drude-Lorentz Dispersion* model is a material model that was developed based upon the Drude free electron model and the Lorentz oscillator model. The Drude model (when \omega_0=0) is used for metals and doped semiconductors, while the Lorentz model describes resonant phenomena such as phonon modes and interband transitions. With the sum term, the combination of these two models can accurately describe a wide array of solid materials. It predicts the frequency-dependent variation of complex relative permittivity as:

\epsilon_r=\epsilon_{\infty}+\sum_{k=1}^M \frac{f_k\omega_p^2}{\omega_{0k}^2-\omega^2+i\Gamma_k \omega}

where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \omega_p is the plasma frequency, f_k is the oscillator strength, \omega_{0k} is the resonance frequency, and \Gamma_k is the damping coefficient. Since this model computes a complex-valued permittivity, the conductivity inside of COMSOL Multiphysics is set to zero. This approach is one way of modeling frequency-dependent conductivity.
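The Drude-Lorentz sum is straightforward to evaluate. The sketch below uses a single Drude term (\omega_{0k}=0) with made-up, metal-like parameters — they are illustrative assumptions, not fitted material data:

```python
# Drude-Lorentz relative permittivity (sketch with assumed parameters).
def drude_lorentz(omega, eps_inf, omega_p, f, omega0, gamma):
    """f, omega0, gamma are lists holding the oscillator strength, resonance
    frequency, and damping coefficient for each of the M terms."""
    eps = eps_inf
    for fk, w0k, gk in zip(f, omega0, gamma):
        eps += fk * omega_p**2 / (w0k**2 - omega**2 + 1j * gk * omega)
    return eps

# Single Drude term (omega0 = 0): metal-like response below the plasma frequency.
eps = drude_lorentz(omega=1.0e15, eps_inf=1.0, omega_p=1.2e16,
                    f=[1.0], omega0=[0.0], gamma=[1.0e14])
print(eps)  # real part is strongly negative below the plasma frequency
```

The strongly negative real part of the permittivity below the plasma frequency is exactly the behavior that makes a metal highly reflective in that regime.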

The *Debye Dispersion* model is a material model that was developed by Peter Debye and is based on polarization relaxation times. The model is primarily used for polar liquids. It predicts the frequency-dependent variation of complex relative permittivity as:

\epsilon_r=\epsilon_{\infty}+\sum_{k=1}^M \frac{\Delta \epsilon_k}{1+i\omega \tau_k}

where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \Delta \epsilon_k is the contribution to the relative permittivity, and \tau_k is the relaxation time. Since this model computes a complex-valued permittivity, the conductivity is assumed to be zero. This is an alternate way to model frequency-dependent conductivity.

The *Sellmeier Dispersion* model is available in the Wave Optics Module and is typically used for optical materials. It assumes zero conductivity and unit relative permeability and defines the relative permittivity in terms of the operating wavelength, \lambda, rather than frequency:

\epsilon_r=1+\sum_{k=1}^M \frac{B_k \lambda^2}{\lambda^2-C_k}

where the coefficients B_k and C_k determine the relative permittivity.
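As a sketch, the Sellmeier sum can be evaluated at a given wavelength and converted to a refractive index. The two-term coefficients below are hypothetical, glass-like values for illustration only, with C_k carrying units of wavelength squared:

```python
import math

# Sellmeier model: relative permittivity as a function of wavelength.
def sellmeier(lam_um, B, C):
    """lam_um: wavelength [um]; B dimensionless; C in [um^2]."""
    lam2 = lam_um ** 2
    return 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))

# Hypothetical glass-like coefficients (assumptions):
B = [1.0, 0.2]
C = [0.01, 100.0]   # [um^2]
eps_r = sellmeier(1.0, B, C)
n = math.sqrt(eps_r)
print(eps_r, n)
```

Because the model assumes zero conductivity and real coefficients, the result here is a purely real permittivity, i.e., a lossless optical material.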

The choice between these seven models will be dictated by the way the material properties are available to you in the technical literature. Keep in mind that, mathematically speaking, they enter the governing equation identically.

The *relative permeability* quantifies how a material responds to a magnetic field. Any material with \mu_r>1 is typically referred to as a magnetic material. The most common magnetic material on Earth is iron, but pure iron is rarely used for RF or optical applications. It is more typical to work with materials that are ferrimagnetic. Such materials exhibit strong magnetic properties with an anisotropy that can be controlled by an applied DC magnetic field. Unlike iron, ferrimagnetic materials have a very low conductivity, so high-frequency electromagnetic fields are able to penetrate into and interact with the bulk material. This tutorial demonstrates how to model ferrimagnetic materials.

There are two options available for specifying relative permeability: The *Relative Permeability* model, which is the default for the RF Module, and the *Magnetic Losses* model. The Relative Permeability model allows you to enter a real- or complex-valued scalar or tensor value. The same Porous Media models described above for the electrical conductivity can be used for the relative permeability. The Magnetic Losses model is analogous to the Dielectric Loss model described above in that you enter the real and imaginary components of the relative permeability as real-valued numbers. An imaginary-valued permeability will lead to a magnetic loss in the material.

In any electromagnetic modeling, one of the most important things to keep in mind is the concept of *skin depth*, the distance into a material over which the fields fall off to 1/e of their value at the surface. Skin depth is defined as:

\delta=\left[ \operatorname{Re} \left( \sqrt{j \omega \mu_0 \mu_r (\sigma + j \omega \epsilon_0 \epsilon_r)} \right) \right] ^{-1}

where we have seen that relative permittivity and permeability can be complex-valued.
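The general skin-depth expression is easy to evaluate numerically as a sanity check. In the sketch below, the conductivity is an assumed, copper-like value rather than a tabulated one:

```python
import cmath

mu0 = 4e-7 * cmath.pi   # vacuum permeability [H/m]
eps0 = 8.854e-12        # vacuum permittivity [F/m]

# delta = 1 / Re( sqrt( j*w*mu0*mu_r * (sigma + j*w*eps0*eps_r) ) )
def skin_depth(f, mu_r, eps_r, sigma):
    """mu_r and eps_r may be complex-valued; sigma in S/m; f in Hz."""
    w = 2 * cmath.pi * f
    k = cmath.sqrt(1j * w * mu0 * mu_r * (sigma + 1j * w * eps0 * eps_r))
    return 1.0 / k.real

# Copper-like conductor at 1 GHz (sigma = 6.0e7 S/m is an assumption):
print(skin_depth(1e9, 1.0, 1.0, 6.0e7))  # on the order of 2 micrometers
```

A micrometer-scale skin depth on a centimeter-scale part is exactly the situation where modeling the metal domain as a boundary condition becomes attractive.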

You should always check the skin depth and compare it to the characteristic size of the domains in your model. If the skin depth is much smaller than the object, you should instead model the domain as a boundary condition as described here: “Modeling Metallic Objects in Wave Electromagnetics Problems“. If the skin depth is comparable to or larger than the object size, then the electromagnetic fields will penetrate into the object and interact significantly within the domain.

*A plane wave incident upon objects of different conductivities and hence different skin depths. When the skin depth is smaller than the wavelength, a boundary layer mesh is used (right). The electric field is plotted.*

If the skin depth is smaller than the object, it is advised to use boundary layer meshing to resolve the strong variations in the fields in the direction normal to the boundary, with a minimum of one element per skin depth and a minimum of three boundary layer elements. If the skin depth is larger than the effective wavelength in the medium, it is sufficient to resolve the wavelength in the medium itself with five elements per wavelength, as shown in the left figure above.

In this blog post, we have looked at the various options available for defining the material properties within your electromagnetic wave models in COMSOL Multiphysics. We have seen that the material models for defining the relative permittivity are appropriate even for metals over a certain frequency range. On the other hand, we can also define metal domains via boundary conditions, as previously highlighted on the blog. Along with earlier blog posts on modeling open boundary conditions and modeling ports, we have now covered almost all of the fundamentals of modeling electromagnetic waves. There are, however, a few more points that remain. Stay tuned!


When approaching the question of what a metal is, we can do so from the point of view of the governing Maxwell’s equations that are solved for electromagnetic wave problems. Consider the frequency-domain form of Maxwell’s equations:

\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right)-\frac{\omega^2}{c_0^2} \left( \epsilon_r -\frac{i \sigma}{\omega \epsilon_0} \right) \mathbf{E}= 0

The above equation is solved in the *Electromagnetic Waves, Frequency Domain* interface available in the RF Module and the Wave Optics Module. This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f. The other inputs are the material properties: \mu_r is the relative permeability, \epsilon_r is the relative permittivity, and \sigma is the electrical conductivity.

For the purposes of this discussion, we will say that a metal is any material that is both lossy and has a relatively small skin depth. A *lossy material* is any material that has a complex-valued permittivity or permeability or a non-zero conductivity. That is, a lossy material introduces an imaginary-valued term into the governing equation. This will lead to electric currents within the material, and the *skin depth* is a measure of the distance into the material over which this current flows.

At any non-zero operating frequency, inductive effects will drive any current flowing in a lossy material towards the boundary. The skin depth is the distance into the material within which approximately 63% of the current flows. It is given by:

\delta=\left[ \operatorname{Re} \left( \sqrt{i \omega \mu_0 \mu_r (\sigma + i \omega \epsilon_0 \epsilon_r)} \right) \right] ^{-1}

where both \mu_r and \epsilon_r can be complex-valued.

At very high frequencies, approaching the optical regime, we are near the material plasma resonance and do in fact represent metals via a complex-valued permittivity. But when modeling metals below these frequencies, we can say that the permittivity is unity, the permeability is real-valued, and the electrical conductivity is very high. So the above equation reduces to:

\delta=\sqrt{\frac{2}{\omega \mu_0 \mu_r \sigma }}
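The good-conductor form lends itself to a quick back-of-the-envelope estimate before any modeling. Here is a minimal sketch; the conductivity is an assumed, copper-like value:

```python
import math

mu0 = 4e-7 * math.pi   # vacuum permeability [H/m]

# Good-conductor skin depth: delta = sqrt(2 / (w * mu0 * mu_r * sigma))
def skin_depth_conductor(f, mu_r, sigma):
    """f in Hz, sigma in S/m; valid when sigma >> w*eps0*eps_r."""
    return math.sqrt(2.0 / (2 * math.pi * f * mu0 * mu_r * sigma))

# Assumed, copper-like conductivity of 6.0e7 S/m:
for f in (1e6, 1e9):
    print(f, skin_depth_conductor(f, 1.0, 6.0e7))
# skin depth shrinks as 1/sqrt(f): tens of micrometers at 1 MHz,
# a couple of micrometers at 1 GHz
```

Comparing these numbers against the part thickness is precisely the check described next.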

Before you even begin your modeling in COMSOL Multiphysics, you should compute or have some rough estimate of the skin depth of all of the materials you are modeling. The skin depth, along with your knowledge of the dimensions of the part, will determine if it is possible to use the Impedance boundary condition or the Transition boundary condition.

Now that we have the skin depth, we will want to compare this to the *characteristic size*, L_c, of the object we are simulating. There are different ways of defining L_c. Depending on the situation, the characteristic size can be defined as the ratio of volume to surface area or as the thickness of the thinnest part of the object being simulated.

Let’s consider an object in which L_c \gg \delta. That is, the object is much larger than the skin depth. Although there are currents flowing inside of the object, the skin effect drives these currents to the surface. So, from a modeling point of view, we can treat the currents as flowing *on* the surface. In this situation, it is appropriate to use the Impedance boundary condition, which treats any material “behind” the boundary as being infinitely large. From the point of view of the electromagnetic wave, this is true, since L_c \gg \delta means that the wave does not penetrate through the object.

*The Impedance boundary condition is appropriate if the skin depth is much smaller than the object.*

With the Impedance boundary condition (IBC), we are able to avoid modeling Maxwell’s equations in the interior of any of the model’s metal domains by assuming that the currents flow entirely on the surface. Thus, we can avoid meshing the interior of these domains and save significant computational effort. Additionally, the IBC computes losses due to the finite conductivity. For an example of the appropriate usage of the IBC and a comparison with analytic results, please see the Computing Q-Factors and Resonant Frequencies of Cavity Resonators tutorial.

The IBC becomes increasingly accurate as L_c / \delta \rightarrow \infty, and it remains accurate even when L_c / \delta > 10 for smooth objects like spheres. Sharp-edged objects such as wedges will have some inaccuracy at the corners, but this is a local effect and also an inherent issue whenever a sharp corner is introduced into the model, as discussed in this previous blog post.

Now, what if we are dealing with an object that has one dimension that is much smaller than the others, perhaps a thin film of material like aluminum foil? In that case, the skin depth in one direction may actually be comparable to the thickness, so the electromagnetic fields will partially penetrate through the material. Here, the IBC is not appropriate. We will instead want to use the Transition boundary condition.

The Transition boundary condition (TBC) is appropriate for a layer of conductive material with a thickness much smaller than the characteristic size, and radius of curvature, of the objects being modeled. The TBC can be used even if the thickness is many times greater than the skin depth.

The TBC takes the material properties as well as the thickness of the film as inputs, computing an impedance through the thickness of the film as well as a tangential impedance. These are used to relate the current flowing on the surface of either side of the film. That is, the TBC will lead to a drop in the transmitted electric field.

From a computational point of view, the number of degrees of freedom on the boundary is doubled to compute the electric field on either surface of the TBC, as shown below. Additionally, the total losses through the thickness of the film are computed. For an example of using this boundary condition, see the Beam Splitter tutorial, which models a thin layer of silver via a complex-valued permittivity.

*The Transition boundary condition computes a surface current on either side of the boundary.*

So far, with both the TBC and the IBC, we have assumed that the surfaces are perfect. A planar boundary is assumed to be geometrically perfect. Curved boundaries will be resolved to within the accuracy of the finite element mesh used, the geometric discretization error, as discussed here.

*Rough surfaces impede current flow compared to smooth surfaces.*

All real surfaces, however, have some roughness, which may be significant. Imperfections in the surface prevent the current from flowing purely tangentially and effectively reduce the conductivity of the surface (illustrated in the figure above). With COMSOL Multiphysics version 5.1, this effect can be accounted for with the *Surface Roughness* feature that can be added to the IBC and TBC conditions.

For the IBC, the input is the Root Mean Square (RMS) roughness of the surface height. For the TBC, the input is instead given in terms of the RMS of the thickness variation of the film. The magnitude of this roughness should be greater than the skin depth, but much smaller than the characteristic size of the part. The effective conductivity of the surface is decreased as the roughness increases, as described in “Accurate Models for Microstrip Computer-Aided Design” by E. Hammerstad and O. Jensen. There is a second roughness model available, known as the *Snowball model*, which uses the relationships described in *The Foundation of Signal Integrity* by P. G. Huray.

It is also worth looking at the idealized situation — the Perfect Electric Conductor (PEC) boundary condition. For many applications in the radio and microwave regime, the losses at metallic boundaries are quite small relative to the other losses within the system. In microwave circuits, for example, the losses in the dielectric substrate typically far exceed the losses at any metallization.

The PEC boundary condition is a surface without loss; it will reflect 100% of any incident wave. This boundary condition is good enough for many modeling purposes and can be used early in your model-building process. It is also sometimes interesting to see how well your device would perform without any material losses.

Additionally, the PEC boundary condition can be used as a symmetry condition to simplify your modeling. Depending on your foreknowledge of the fields, you can use the PEC boundary condition, as well as its complement — the Perfect Magnetic Conductor (PMC) boundary condition — to enforce symmetry of the electric fields. The Computing the Radar Cross Section of a Perfectly Conducting Sphere tutorial illustrates the use of the PEC and PMC boundary conditions as symmetry conditions.

Lastly, COMSOL Multiphysics also includes Surface Current, Magnetic Field, and Electric Field boundary conditions. These conditions are provided primarily for mathematical completeness, since the currents and fields at a surface are almost never known ahead of time.

In this blog post, we have highlighted how the Impedance, Transition, and Perfect Electric Conductor boundary conditions can be used for modeling metallic surfaces, helping to identify situations in which each should be used. But, what if you cannot use any of these boundary conditions? What if the characteristic size of the parts you are simulating is similar to the skin depth? In that case, you cannot use a boundary condition. You will have to model the metal domain explicitly, just as you would for any other material. This will be the next topic we focus on in this series, so stay tuned.


The entire COMSOL® software product suite is built on top of the general-purpose software platform, COMSOL Multiphysics. This platform contains all of the core preprocessing, meshing, solving, and postprocessing capabilities, as well as several physics interfaces. (See our Specification Chart for complete details about what is available in each product.)

With COMSOL Multiphysics®, you can import 2D DXF™ files and 3D STL and 3D VRML files. You can use the 2D DXF™ file format to import profiles and extrude, revolve, or sweep them along a path to create 3D geometry.

The STL and VRML formats are best suited for simple shapes; complex CAD data does not transfer reliably since these formats lack the sophistication of modern CAD file formats. To work with STL files containing 3D scans, we recommend that you import those as a mesh and use the built-in functionality to convert the imported mesh to geometry. Depending on the complexity and quality of the 3D scan, the resulting geometry can then be combined with other geometric objects that are either imported or created in COMSOL Multiphysics.

Also part of the core COMSOL Multiphysics capabilities, the Virtual Operations approximate the CAD data for meshing purposes and are useful for cleaning up all imported CAD data, or even COMSOL native geometry.

The LiveLink™ products allow you to work with the data directly from your CAD program. Supported CAD packages include SOLIDWORKS® software, Inventor® software, Autodesk® AutoCAD® software, PTC® Creo® Parametric™ software, PTC® Pro/ENGINEER® software, Solid Edge® software, and the building information modeling (BIM) software Autodesk® Revit® Architecture software. Both LiveLink™ *for* SOLIDWORKS® and LiveLink™ *for* Inventor® offer the *One Window* interface, which directly embeds the COMSOL® modeling environment within the CAD software user interface. The list of version compatibility with these products is maintained here.

When using these LiveLink™ tools, you must have both COMSOL Multiphysics and the CAD program installed and running on the computer you are using. The CAD data as well as materials definitions and other selections will be bidirectionally synchronized between your CAD package and COMSOL Multiphysics, with full associativity. You can read more about that here. This means that any modifications that you make within the CAD package will be available within the COMSOL environment, and you can use COMSOL Multiphysics to change any of the dimensions within your CAD file. The functionality of each of these modules is described here.

Since the data is transferred with associativity, as you change the dimensions in your CAD program to reshape the part, the COMSOL software will track these changes and appropriately re-map all of the boundary conditions and other geometry- and selection-based settings. To see a demonstration of this, please watch the relevant videos in our Video Gallery. You will find this functionality useful when you want to perform parametric sweeps over the dimensions in your CAD file or perform dimensional optimization using the COMSOL Optimization Module.

In addition to synchronizing CAD data between a CAD software and COMSOL, the LiveLink™ products also include support for file import of the full range of CAD file formats supported by the CAD Import Module. If you are solving problems where you actually want to model the volume *inside* of the CAD domain (such as for fluid flow models), you can also use the Cap Faces command to create enclosed volumes based upon an existing geometry, as described here. You will also be able to perform repair and geometric clean-up (defeaturing) operations on your CAD data and write out the resultant geometry, or any geometry you create in COMSOL Multiphysics®, to the Parasolid® software or ACIS® software file format.

The LiveLink™ products are the best option for you if you can have your CAD software and COMSOL software installed on the same computer and you want to take advantage of the benefits offered by the included integration. However, if you are working with CAD data that is coming from someone else and don’t have their CAD software on your computer, then you may want to use the CAD Import Module or the Design Module instead.

The CAD Import Module and the Design Module will allow you to import a wide range of CAD file formats. You can find the complete list of formats and versions that are importable here.

If you are planning to make many design iterations, then the relative drawback of both the CAD Import Module and the Design Module compared to the LiveLink™ products is that the data import is one-way and there is no associativity that is maintained between the CAD data and the COMSOL model. That is, if you make a modification to the CAD file and have to re-import the geometry, the physics features and other geometry-based settings in the COMSOL model may not reflect these changes. You will need to manually check all settings and re-apply them to the modified geometry. Additionally, the dimensional data in the original CAD file is not accessible, so you will not be able to perform parametric sweeps or optimization.

It is possible to work around this limitation as described in the “Parameterizing the Dimensions of Imported CAD Files” blog post, but this technique is usually only practical for simpler geometries.

The Design Module provides additional functionality for creating geometry. It includes all of the capabilities of the CAD Import Module, but also provides extra geometric modeling tools. The Parasolid® software Kernel functionality is used to provide 3D Fillet, 3D Chamfer, Loft, Midsurface, and Thicken operations. You can learn more about these operations in this introduction to the Design Module.

The CAD Import Module is recommended only if you are certain that you will never be using COMSOL Multiphysics in conjunction with any of the CAD packages for which there is a LiveLink™ product and if you do not want to create any complex CAD geometries within the COMSOL environment. The Design Module is recommended over the CAD Import Module since it provides all of the same functionality, but will also allow you to create more complex CAD geometries within COMSOL Multiphysics. These geometries can then be exported to the Parasolid® software or ACIS® software file formats. Both modules include the full suite of defeaturing operations as well as the Cap Faces operation.

In addition to the products mentioned here, there is also the File Import *for* CATIA® V5 Module, which can import CATIA® V5 software files and is an add-on to any of the LiveLink™ products for CAD packages, the CAD Import Module, or the Design Module.

The ECAD Import Module is used for the import of ECAD data, which are files that are typically meant to describe the layout of an integrated circuit (IC), micro-electro-mechanical systems (MEMS) device, or printed circuit board (PCB) and thus contain planar layouts, and in some cases thickness and elevation information.

While the data transfer with this module is without associativity, you can take advantage of selections created by the import functionality to preserve model settings after importing a changed file. Also, the layered structure of the generated geometric objects makes the use of coordinate-based selections for model settings especially suitable to automate model set-up with imported ECAD files. Look out for a future blog post on how to do this.

We recommend the LiveLink™ products if you have your CAD software and COMSOL simulation software installed on the same computer. The Design Module or the CAD Import Module can be used if you only want to import files, and the Design Module is preferred since it has enhanced functionality. The add-on File Import *for* CATIA® V5 Module is only needed for that specific file type. Finally, to incorporate geometry from ECAD layout files into your simulations, you will need the ECAD Import Module.

If you have any other questions about how best to interact with your CAD data, please contact us.

*ACIS is a registered trademark of Spatial Corporation.*

Autodesk, the Autodesk logo, AutoCAD, DXF, Inventor, and Revit are registered trademarks or trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries.

CATIA is a registered trademark of Dassault Systèmes or its subsidiaries in the US and/or other countries.

Parasolid and Solid Edge are trademarks or registered trademarks of Siemens Product Lifecycle Management Software Inc. or its subsidiaries in the United States and in other countries.

PTC, Creo, Parametric, and Pro/ENGINEER are trademarks or registered trademarks of PTC Inc. or its subsidiaries in the U.S. and in other countries.

*SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.*

When light is incident upon a semi-transparent material, some of the energy will be absorbed by the material itself. If we can assume that the light is monochromatic, collimated (such as from a laser), and experiences minimal refraction, reflection, or scattering within the material itself, then it is appropriate to model the light intensity via the *Beer-Lambert law*. This law can be written in differential form for the light intensity I as:

\frac{\partial I }{\partial z} = \alpha(T) I

where *z* is the coordinate along the beam direction and \alpha(T) is the temperature-dependent absorption coefficient of the material. Because this temperature can vary in space and time, we must also solve the governing partial differential equation for temperature distribution within the material:

\rho C_p \frac{\partial T }{\partial t}-\nabla \cdot (k \nabla T)= Q = \alpha(T) I

where the heat source term, Q, equals the absorbed light. These two equations present a bidirectionally coupled multiphysics problem that is well suited for modeling within the core architecture of COMSOL Multiphysics. Let’s find out how…

We will consider the problem shown above, which depicts a solid cylinder of material (20 mm in diameter and 25 mm in length) with a laser incident on the top. To reduce the model size, we will exploit symmetry and consider only one quarter of the entire cylinder. We will also partition the domain into two volumes. These volumes represent the same material, but we will only solve the Beer-Lambert law in the inner domain, the only region heated by the beam.

To implement the Beer-Lambert law, we will begin by adding the *General Form PDE* interface with the *Dependent Variables* and *Units* settings, as shown in the figure below.

*Settings for implementing the Beer-Lambert law. Note the Units settings.*

Next, the equation itself is implemented via the *General Form PDE* interface, as illustrated in the following screenshot. Aside from the source term, f, all terms within the equation are set to zero; thus, the equation being solved is f=0. The source term is set to **Iz-(50[1/m]*(1+(T-300[K])/40[K]))*I**, where the partial derivative of light intensity with respect to the *z*-direction is **Iz**, and the absorption coefficient is **(50[1/m]*(1+(T-300[K])/40[K]))**, which introduces a temperature dependency for illustrative purposes. This one line implements the Beer-Lambert law for a material with a temperature-dependent absorption coefficient, assuming that we will also solve for the temperature field, **T**, in our model.

*Implementation of the Beer-Lambert law with the* General Form PDE *interface.*

Since this equation is linear and stationary, the *Initial Values* do not affect the solution for the intensity variable. The *Zero Flux* boundary condition is appropriate on most faces, with the exception of the illuminated face. We will assume that the incident laser light intensity follows a Gaussian distribution with respect to distance from the origin. At the origin, and immediately above the material, the incident intensity is 3 W/mm^{2}. Some of the laser light will be reflected at the dielectric interface, so the intensity of light at the surface of the material is reduced to 0.95 of the incident intensity. This condition is implemented with a *Dirichlet Boundary Condition*. At the face opposite to the incident face, the Zero Flux boundary condition simply means that any light reaching that boundary will leave the domain.

*The Dirichlet Boundary Condition sets the incident light intensity within the material.*
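The incident profile applied by this Dirichlet condition can be sketched in a few lines of Python. The peak intensity (3 W/mm^2) and the 0.95 transmission factor come from the text; the Gaussian beam radius `w0` is *not* given there and is an assumed value for illustration only.

```python
import numpy as np

# Sketch of the Dirichlet value applied on the illuminated face.
# I_peak and the transmission factor are from the text; the Gaussian
# beam radius w0 is an assumed, illustrative value.

I_peak = 3.0            # W/mm^2, incident intensity at the origin
transmission = 0.95     # fraction of the light entering the material
w0 = 2.0                # mm, assumed Gaussian beam radius

def surface_intensity(r_mm):
    """Intensity just inside the material at radial distance r from the axis."""
    return transmission * I_peak * np.exp(-(r_mm / w0) ** 2)

# On the beam axis, the applied value is 0.95 * 3 = 2.85 W/mm^2.
```

The same expression, written in COMSOL syntax with the appropriate units, is what goes into the Dirichlet Boundary Condition field.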

With the settings described above, the problem of temperature-dependent light absorption governed by the Beer-Lambert law has been implemented. It is, of course, also necessary to solve for the temperature variation in the material over time. We will consider an arbitrary material with a thermal conductivity of 2 W/m/K, a density of 2000 kg/m^{3}, and a specific heat of 1000 J/kg/K that is initially at 300 K and subject to the volumetric heat source.
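As a qualitative sanity check of this coupled system, here is a minimal one-dimensional finite-difference sketch in Python, entirely independent of the COMSOL implementation. It uses the material data above and an absorption coefficient of the same form as in the PDE source term; the grid, time step, heating duration, and insulated end conditions are all illustrative assumptions, and the radiative surface loss discussed later is omitted.

```python
import numpy as np

# 1D sketch of the coupled Beer-Lambert / heat conduction problem.
# Material data (rho, Cp, k) and the surface intensity (0.95 * 3 W/mm^2)
# are taken from the text; everything else is an illustrative assumption.

rho, Cp, k = 2000.0, 1000.0, 2.0        # kg/m^3, J/kg/K, W/m/K
L, n = 0.025, 251                       # 25 mm depth, number of grid points
s = np.linspace(0.0, L, n)              # depth along the beam
ds = s[1] - s[0]
I0 = 0.95 * 3e6                         # transmitted surface intensity, W/m^2

def alpha(T):
    """Temperature-dependent absorption coefficient, 1/m (same form as the model)."""
    return 50.0 * (1.0 + (T - 300.0) / 40.0)

T = np.full(n, 300.0)                   # initial temperature, K
dt = 0.4 * rho * Cp * ds**2 / k         # stable explicit time step
for step in range(500):                 # roughly 2 s of heating
    # Beer-Lambert: march the intensity into the material, dI/ds = -alpha(T) I
    I = np.empty(n)
    I[0] = I0
    for i in range(1, n):
        I[i] = I[i-1] * np.exp(-alpha(T[i-1]) * ds)
    # Heat equation with source Q = alpha(T) I (radiative surface loss omitted)
    Q = alpha(T) * I
    T[1:-1] += dt * (k * (T[2:] - 2.0*T[1:-1] + T[:-2]) / ds**2 + Q[1:-1]) / (rho * Cp)
    T[0], T[-1] = T[1], T[-2]           # insulated ends
```

As the surface heats up, alpha increases there, so the beam deposits its energy over a progressively shallower depth. This is exactly the coupling effect that the COMSOL plots later in this post illustrate.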

The heat source itself is simply the absorption coefficient times the intensity, or equivalently, the derivative of the intensity with respect to the propagation direction, which can be entered as shown below.

*The heat source term is the absorbed light.*

Most other boundaries can be left at the default *Thermal Insulation*, which will also be appropriate for implementing the symmetry of the temperature field. However, at the illuminated boundary, the temperature will rise significantly and radiative heat loss can occur. This can be modeled with the *Diffuse Surface* boundary condition, which takes the ambient temperature of the surroundings and the surface emissivity as inputs.

*Thermal radiation from the top face to the surroundings is modeled with the Diffuse Surface boundary condition.*

It is worth noting that using the Diffuse Surface boundary condition implies that the object radiates as a gray body. However, the gray body assumption would imply that this material is opaque. So how can we reconcile this with the fact that we are using the Beer-Lambert law, which is appropriate for semi-transparent materials?

We can resolve this apparent discrepancy by noting that the material absorptivity is highly wavelength-dependent. At the wavelength of incident laser light that we are considering in this example, the penetration depth is large. However, when the part heats up, it will re-radiate primarily in the long-infrared regime. At long-infrared wavelengths, we can assume that the penetration depth is very small, and thus the assumption that the material bulk is opaque for emitted radiation is valid.

It is possible to solve this model either for the steady-state solution or for the transient response. The figure below shows the temperature and light intensity in the material over time, as well as the finite element mesh that is used. Although it is not necessary to use a swept mesh in the absorption direction, applying this feature provides a smooth solution for the light intensity with relatively fewer elements than a tetrahedral mesh. The plot of light intensity and temperature with respect to depth at the centerline illustrates the effect of the varying absorption coefficient due to the rise in temperature.

*Plot of the mesh (on the far left) and the light intensity and temperature at different times.*

*Light intensity and temperature as a function of depth along the centerline over time.*

Here, we have highlighted how the *General Form PDE* interface, available in the core COMSOL Multiphysics package, can be used for implementing the Beer-Lambert law to model the heating of a semi-transparent medium. This approach is appropriate if the incident light is collimated and at a wavelength where the material is semi-transparent.

Although this approach has been presented in the context of laser heating, the incident light only needs to be collimated for this approach to be valid; it does not need to be coherent or monochromatic. A wide-spectrum source can be broken down into a sum of several wavelength bands over which the material absorption coefficient is roughly constant, with each band solved for using a separate *General Form PDE* interface.

In the approach presented here, the material itself is assumed to be completely opaque to ambient thermal radiation. It is, however, possible to model thermal re-radiation within the material using the *Radiation in Participating Media* physics interface available within the Heat Transfer Module.

The Beer-Lambert law does assume that the incident laser light is perfectly collimated and propagates in a single direction. If you are instead modeling a focused laser beam with gradual variations in the intensity along the optical path, then the *Beam Envelopes* interface in the Wave Optics Module is more appropriate.

In future blog posts, we will introduce these as well as alternate approaches for modeling laser-material interactions. Stay tuned!

Let’s start by giving a very conceptual introduction to how a 3D CAD geometry is meshed when you use the default mesh settings in COMSOL Multiphysics. The default mesh settings will always use a Free Tetrahedral mesh to discretize an arbitrary volume into smaller elements. Tetrahedral elements (tets) are the default element type because any geometry, no matter how topologically complex, can be subdivided and approximated as tets. Within this article, we will only discuss free tetrahedral meshing, although there are situations when other types of meshes can be more appropriate, as discussed here.

*A cylinder (left) is meshed with triangular elements (grey) on the surface and the tetrahedral meshing algorithm subdivides the volume with tets (cyan). The ends are omitted for clarity.*

At a conceptual level, the tetrahedral meshing algorithm begins by applying a triangular mesh to all of the faces of the volume that you want to mesh. The volume is then subdivided into tetrahedra, such that each triangle on the boundary is respected and the size and shape of the tetrahedra inside the volume meets the specified size and growth criteria. If you get the error message “Failed to respect boundary element edge on geometry face” or similar, it is because the shape of the tetrahedra became too distorted during this process.

Of course, the true algorithm can only be stated mathematically, not in words. There are, however, cases that can cause this algorithm some difficulties, and these cases can be understood without resorting to any equations. The free tetrahedral meshing algorithm can have difficulties if:

- The part is extremely complex, with very finely detailed regions mixed in with much coarser ones.
- The aspect ratios of the edges and boundaries defining the domain are very large.

Let’s take a look at some examples of each case and how partitioning can help us.

To get us started, let us consider a modestly complex geometry: the *Helix* geometry primitive. You can certainly think of more complex geometries than this, but we can illustrate many concepts starting with this case.

Go ahead and open a new COMSOL Model file and create a helix with ten turns, and then mesh it with the default settings, as shown below.

*A ten-turn helix primitive with the corresponding default tetrahedral mesh.*

When you were meshing this relatively simple part, you may have noticed that the meshing step took a relatively long time. So let’s look at how partitioning can simplify this geometry. Add a *Work plane* to your geometry sequence that bisects the length of the helix and then add a *Partition* feature, using the Work plane as the partitioning object.

*A Work plane is used to partition the helix.*

As you can see from the image above, the resultant ten-turn helix object is now composed of twenty different domains, each representing a half-turn of the helix. When you re-mesh this model, you will find that the meshing time is reduced, which is good. Each domain represents a much easier meshing problem than the original problem, and, furthermore, the domains can be meshed in parallel on a multi-core computer.

However, you’re probably also thinking to yourself that we now have twenty different domains, and that we’ve subdivided the six surfaces of this helix into one hundred two surfaces, including the internal boundaries, which are dividing up the domain. Although this geometry now meshes a lot faster, we have added many more domains and boundaries that can be a distraction as we apply material properties and boundary conditions. What we actually want is to use the partitioned geometry for the mesh, but ignore the partitioning during the set-up of the physics.

What you’ll want to do next is add a *Virtual Operation*: the *Mesh Control Domains* operation. This feature will take, as input, all twenty domains defining the helix. The output will appear to be our original helix, and when we apply material properties and physics settings, there will be only one domain and six boundaries.

*The Mesh Control Domains will specify that these are different domains only for the purposes of meshing.*

When you now mesh this geometry, you’ll observe that you have the best of both worlds: the meshing takes relatively little time, and the physics settings are easy to apply. If you haven’t already, try this out on your own!

We have only looked at one example geometry here, but there are many other cases where you’ll want to use this type of partitioning. Domains that look like combs or serpentines or objects that have many holes, cutouts, or domains embedded within them all present situations in which you should consider partitioning. Also, keep in mind that you don’t need to partition with planes; you can create and use other objects for partitioning. We’ll take a look at such an example next.

The CAD geometries you are working with can often contain some edges or surfaces that have vastly different sizes relative to the other edges and surfaces defining a domain. We often want to avoid such situations, since small features on a large domain may not be that important for our analysis objectives.

We’ve already looked at how we can ignore these small features using Virtual Operations to Simplify the Geometry, but what if these small features are important? Let’s examine how partitioning can help us in terms of the example geometry shown below.

*A flow domain to be meshed. Three small inlets, with even smaller fillets, protrude from the main pipe.*

The geometry that you see above has a large pipe with three smaller pipes protruding from it. The small fillets that round the transition between the two have dimensions over one hundred times smaller than those of the main pipe. If we mesh this domain with the default mesh settings, the same settings will be used throughout. However, we will almost certainly want to have smaller mesh sizes around the inlets.

The default mesh will use one setting for all elements within the model. That will not be very useful here. We could just add additional *Size* features to the mesh, and apply these features to all of the faces around the small pipes to adjust the element sizes at these boundaries, but this is not quite optimal. It’s a lot of work and might not give us exactly what we want.

We can also use partitioning to define a small volume within which we will want to have different mesh settings. In the figure below, additional cylinders have been included that surround each of the smaller pipes and extend some distance into the pipe.

*Additional domains (wireframe) which will be used for partitioning of the blue domain.*

*Results of the partitioning operation.*

These additional cylinder objects can be used to partition the original modeling domain, as shown above. Using the Mesh Control Domains, it will again be possible to simplify this geometry down to a single domain for the purposes of physics and materials settings. Once you get to the meshing step, however, it is possible to add a Size feature to the Mesh sequence that will set the element size settings of these newly partitioned domains. This gives us control over the element sizes in these domains and makes things a little bit easier for the mesher.

*Different size features can be applied to each partitioned geometry.*

The geometries that we have looked at here can be meshed with minimal effort or modification to the default meshing settings, but this is not always the case. It is relatively easy to come up with a geometry that no meshing algorithm will ever be able to mesh in a reasonable amount of time. What can we do in that situation?

The answer (as I’m sure you’ve already guessed) is partitioning along with one other concept: divide and conquer. When confronted with a domain that does not mesh, use partitioning to divide it into two domains. Try to individually mesh each one. If one of the domains does not mesh, keep partitioning each half. Using this approach, you’ll very quickly zoom in on the problematic region of the original domain. You can then decide if you want to simplify the problematic parts of the geometry via the usage of Virtual Operations, or you can use the techniques we’ve outlined here and mesh sub-domain by sub-domain, or you can even use some combination of the two.

Another technique that you can use is to apply a Free Triangular mesh on all of the boundaries of the imported geometry. Surface meshing is much faster than volume meshing and will almost always succeed. Visually inspect the resultant surface mesh. It will then often be immediately apparent where in the model the small features and problematic areas are. Once you know where the issues are, delete the Free Triangular mesh, since the free tetrahedral meshing algorithm will typically want to adjust the mesh on the boundaries, but will not do so if there is already a surface mesh defined.

Along with the Virtual Operations that we have already mentioned for simplifying the geometry for meshing, you can also use the Repair and Defeaturing functionality to clean up CAD data originating from another source. The Virtual Operations simply create an abstraction of the CAD geometry that can be used only inside the COMSOL software. The Repair and Defeaturing operations, by contrast, modify the CAD representation directly, creating a modified geometry that can be written out from COMSOL Multiphysics to other software packages.

We have now looked at two different representative cases where the default mesh settings are not optimal — a domain that is very complex as well as a domain with extreme aspect ratios. In both cases, we can use partitioning along with the Mesh Control Domains Virtual Operations feature to simplify the meshing operations.

We have also presented some strategies for handling cases in which your geometry will not mesh with the default settings. It is also worth saying that such situations arise most often when working with imported CAD geometry that was meant for manufacturing, rather than analysis purposes. If you are given a CAD file with many features that are cosmetic rather than functional or that you are reasonably certain will not affect the physics of the problem, consider removing these features in the originating CAD package, before they even get to COMSOL Multiphysics.

In future blog posts, we will also look at combining partitioning with swept meshing, which is another powerful technique in your toolkit as you use COMSOL Multiphysics. Stay tuned!

Let’s take a look at some sample experimental data in the plot below. Observe that the data is noisy and that the sampling is nonuniform in the *x*-axis. This experimental data may represent a material property. If the material property is dependent upon the solution variable (such as a temperature-dependent thermal conductivity), then we would usually not want to use this data directly in our analyses. Such noisy input data can often cause solver convergence difficulties, for the reasons discussed here. If we instead approximate the data with a smooth curve, then model convergence can often be improved and we will also have a simple function to represent the material property.

*Experimental data that we would like to approximate with a simpler function.*

What we would like to do is find a function, F(x), that fits the experimental data, D(x), as closely as possible. We will define the “best-fit” function as the function that minimizes the square of the difference between the experimental data and our fitting function, integrated over the entire data range. That is, our objective is to minimize:

\int_a^b (D(x)- F(x))^2 dx

So the first thing that we need to do is to make some decisions about what type of function we would like to fit. We have a great deal of flexibility about what *type* of functions to use, but we should choose a fitting function that results in a problem which will be numerically well-conditioned. Although we will not go into the details about why, for maximum robustness we will choose to fit this function:

F(x) = c_0\left(\frac{b-x}{b-a}\right)^3 + c_1 \left(\frac{x-a}{b-a}\right)\left(\frac{b-x}{b-a}\right)^2 + c_2 \left(\frac{x-a}{b-a}\right)^2\left(\frac{b-x}{b-a}\right) + c_3 \left(\frac{x-a}{b-a}\right)^3

which in this case, for a=0, b=1, simplifies to:

F(x) = c_0(1-x)^3 + c_1 x(1-x)^2 + c_2 x^2(1-x) + c_3 x^3

Now we need to find the four coefficients that will minimize:

R(c_0,c_1,c_2,c_3,x)= \int_a^b (D(x)- F(c_0,c_1,c_2,c_3,x))^2 dx

Although this may sound like an optimization problem, we do not have any constraints on our coefficients and we will assume that the above function has a single minimum. This minimum will correspond to the point where the derivatives, with respect to the coefficients, are zero. That is, to find the best fit function, we must find the values of the coefficients at which:

\frac{\partial R} {\partial c_0} = \frac{\partial R} {\partial c_1} = \frac{\partial R} {\partial c_2} =\frac{\partial R} {\partial c_3} = 0
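Because R is quadratic in the coefficients, the four conditions above form a *linear* system, the classic normal equations. A brief Python sketch shows the discrete analog, where the integral is approximated by a sum over the sample points; the synthetic noisy data stands in for the measurements, which are an assumption for illustration only.

```python
import numpy as np

# Discrete sketch of the normal equations: setting dR/dc_i = 0 for the
# quadratic residual R yields a linear system for the coefficients.
# The data D(x) is synthetic; the blog's experimental data is not reproduced.

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 80))                     # nonuniform sampling on [0, 1]
D = np.exp(-2.0 * x) + 0.05 * rng.standard_normal(x.size)  # noisy "measurements"

# Basis from the post (a = 0, b = 1): cubic Bernstein-like functions
A = np.column_stack([(1 - x)**3, x * (1 - x)**2, x**2 * (1 - x), x**3])

# dR/dc = 0  =>  (A^T A) c = A^T D
c = np.linalg.solve(A.T @ A, A.T @ D)
F = A @ c                                                  # fitted curve at the samples

rms = np.sqrt(np.mean((F - D)**2))   # should be on the order of the added noise
```

This is the same minimization that the *Global Equations* interface performs below, except that COMSOL evaluates the derivative conditions on the continuous integral rather than a discrete sum.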

It turns out that we can solve this problem with the core capabilities of COMSOL Multiphysics. Let’s find out how…

We start by creating a new file containing a 1D component. We will use the *Global ODEs and DAEs* physics interface to solve for our coefficients and we will use the Stationary Solver. For simplicity, we will use a dimensionless length unit, as shown in the screenshot below.

*Start out with a 1D component and set Unit system to None.*

Next, create the geometry. Our geometry should contain our interval (in this case, the range of our sample points is from 0 to 1) as well as a set of points along the *x*-direction for every sample point. You can simply copy and paste this range of points from a spreadsheet into the *Point* feature, as shown.

*Add points over the interval at every data sample point.*

Read in the experimental data using the *Interpolation* function. Give your data a reasonable name (we simply use *D* in the screenshot below), check the *Use spatial coordinates as arguments* box, and make sure to use the default *Linear* interpolation between data points.

*The settings for importing the experimental data.*

Define an Integration Operator over all domains. You can use the default name: *intop1*. This feature will be used to take the integral described above.

*The Integration Operator is defined over all domains.*

Now define two variables. One will be your function, F, and the other will be the function that we want to minimize, R. Since the *Geometric Entity Level* is set to *Entire Model*, F will be defined everywhere and spatially varying as a function of x. On the other hand, R is scalar valued everywhere and also available throughout the entire model. As shown in the screenshot below, we can enter F as a function of c_0,c_1,c_2,c_3 and will define these coefficients later.

*The definition of our fitting function and the quantity we wish to minimize.*

Next, we can use the *Global Equations* interface to define the four equations that we want to satisfy for our four coefficients. Recall that we want the derivative of R with respect to each coefficient to be zero. Using the differentiation operator, **d(f(x),x)**, we can enter this as shown below:

*The Global Equations that are used to solve for the coefficients of our fitting function.*

Finally, we need to have an appropriate mesh on our 1D domain. Recall that earlier we placed a geometric point at each data sampling point. Using the *Distribution* subfeature on the *Edge Mesh* feature, we can ensure that there is one element between each data point. We do not need any *more* elements than this, since we are assuming linear interpolation between data points, but we do not want *fewer* than this, because then we would miss some of the experimental data points.

*There should be one element over each data interval.*

We can now solve this stationary problem for the numerical values of our coefficients and plot the results. From the plot below, we can see the data points with the linear interpolation between them, as well as the computed fitting function. We have minimized the square of the difference between these two curves, integrated over the interval, and now have a smooth and simple function that approximates our data quite well.

*The experimental data (black, with linear interpolation) and the fitted function (red).*

Now, what we’ve done so far is actually fairly straightforward and you could compute similar types of curve fits in a spreadsheet program or any number of other software tools. But there is much more that we can do with this approach. We are not limited to this particular fitting function. You are free to choose any function you want, but it is best to use a function that is a sum of a set of orthogonal functions. Try out, for example:

F(x) = c_0 + c_1 \sin ( \pi x /4 ) + c_2 \cos ( \pi x /4 ) + c_3 \sin ( \pi x /2 ) + c_4 \cos ( \pi x /2 )

Be aware, however, that you will only want to solve for the linear coefficients on the various terms within the fitting function. You would not want to use nonlinear fitting coefficients, as in F(x) = c_0 + c_1 \sin ( \pi x /c_3 ) + c_2 \cos ( \pi x /c_4 ), since such a problem might be too highly nonlinear to converge.
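The trigonometric basis above is still linear in its coefficients, so it leads to the same kind of linear least-squares problem; only the basis columns change. Here is a short Python sketch, again with synthetic data as a stand-in for the measurements:

```python
import numpy as np

# Fitting the trigonometric basis is still a linear problem in c_0..c_4.
# The data is synthetic and only illustrates the mechanics.

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 100))
D = 1.0 + 0.5 * np.sin(np.pi * x / 2) + 0.02 * rng.standard_normal(x.size)

A = np.column_stack([
    np.ones_like(x),
    np.sin(np.pi * x / 4), np.cos(np.pi * x / 4),
    np.sin(np.pi * x / 2), np.cos(np.pi * x / 2),
])
c, *_ = np.linalg.lstsq(A, D, rcond=None)   # linear least-squares solution

# If a coefficient appeared inside an argument, e.g. sin(pi*x/c3),
# the problem would become nonlinear and far harder to solve robustly.
```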

And what if you have a 2D or 3D data set? You can actually apply the exact same approach as we’ve outlined here. The only difference is that you will need to set up a 2D or a 3D domain. The domains need not be Cartesian and you can even switch to a different coordinate system.

Let’s take a quick look at some sample data points measured over the region shown below:

*Sampled data in a 2D region. We want a best fit surface to the heights of these points.*

Since the data is sampled over this annular region and seems to have variations with respect to the radial and circumferential directions (r,\theta), rather than the Cartesian directions, we can try to fit the function:

F(r,\theta) = c_0 + c_1 r \cos ( \theta ) + c_2 r \sin ( \theta ) + c_3(2r^2-1) + c_4 r^2 \cos ( 2\theta ) + c_5 r^2 \sin ( 2\theta )

We can follow the exact same procedure as before. The only difference is that we need to integrate over a 2D domain rather than a line and write our expression using a cylindrical coordinate system.

*The computed best-fit surface to the data shown above.*
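The same least-squares machinery carries over directly to two dimensions. Below is a Python sketch that fits the polar basis from the text to synthetic heights sampled on an assumed annulus; the measured data from the figure is not reproduced here.

```python
import numpy as np

# 2D version of the fit on an annulus. The annulus radii, sample count,
# and "height" data are all synthetic, illustrative assumptions.

rng = np.random.default_rng(2)
r = rng.uniform(0.5, 1.0, 200)               # assumed annulus, 0.5 <= r <= 1
theta = rng.uniform(0.0, 2 * np.pi, 200)
z = (0.3 + 0.2 * r * np.cos(theta) + 0.1 * (2 * r**2 - 1)
     + 0.01 * rng.standard_normal(r.size))   # synthetic measured heights

# Polar basis from the text, evaluated at the sample points
A = np.column_stack([
    np.ones_like(r),
    r * np.cos(theta), r * np.sin(theta),
    2 * r**2 - 1,
    r**2 * np.cos(2 * theta), r**2 * np.sin(2 * theta),
])
c, *_ = np.linalg.lstsq(A, z, rcond=None)
# c should approximately recover (0.3, 0.2, 0, 0.1, 0, 0).
```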

You can see that the core COMSOL Multiphysics package has very flexible capabilities for finding a best-fit curve to data in 1D, 2D, or 3D using the methods shown here.

There can be cases where you might want to go beyond a simple curve-fit and want to consider some additional constraints. In that case, you would want to use the capabilities of the Optimization Module, which can also perform these types of curve fits and much, much more. For an introduction to the Optimization Module for curve fitting and the related topic of parameter estimation, please also see these models:

When modeling electromagnetic structures (e.g., antennas, waveguides, cavities, filters, and transmission lines), we can often limit our analysis to one small part of the entire system. Consider, for example, a coaxial splitter as shown here, which splits the signal from one coaxial cable (coax) equally into two. We know that the electromagnetic fields in the incoming and outgoing cables will have a certain form and that the energy is propagating in the direction normal to the cross section of the coax.

There are many other such cases where we know the form (but not the magnitude or phase) of the electromagnetic fields at some boundaries of our modeling domain. These situations call for the use of the Lumped Port and the Port boundary conditions. Let us look at what these boundary conditions mean and when they should be used.

We can begin our discussion of the Lumped Port boundary condition by looking at the fields in a coaxial cable. A coaxial cable is a waveguide composed of an inner and outer conductor with a dielectric in between. Over its range of operating frequencies, a coax operates in the Transverse Electromagnetic (TEM) mode, meaning that the electric and the magnetic field vectors have no component in the direction of wave propagation along the cable. That is, the electric and magnetic fields both lie entirely in the cross-sectional plane. Within COMSOL Multiphysics, we can compute these fields and the impedance of a coax, as illustrated here.

However, there also exists an analytic solution for this problem. This solution shows that the electric field drops off proportional to 1/r between the inner and outer conductor. So, since we know the shape of the electric field at the cross section of a coax, we can apply this as a boundary condition using the *Lumped Port, Coaxial* boundary condition. The excitation can be specified in terms of a cable impedance along with an applied voltage and phase; in terms of an applied current; or as a connection to an externally defined circuit. Regardless of which of these three options is chosen, the electric field will always vary as 1/r times a complex-valued number that represents the sum of the (user-specified) incoming and the (unknown) outgoing wave.

*The electric field in a coaxial cable.*
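For reference, both the 1/r field shape and the coax characteristic impedance have simple closed forms, sketched below in Python. The cable dimensions and dielectric constant are illustrative values (roughly those of an RG-58-like cable), not taken from the text.

```python
import numpy as np

# Closed-form TEM coax quantities:
#   E(r) = V / (r * ln(b/a))
#   Z0   = eta0 / (2 * pi * sqrt(eps_r)) * ln(b/a)
# The radii and dielectric constant below are assumed, illustrative values.

eta0 = 376.730313668        # impedance of free space, ohms
a, b = 0.45e-3, 1.47e-3     # inner/outer conductor radii, m (assumed)
eps_r = 2.25                # solid polyethylene dielectric (assumed)

Z0 = eta0 / (2 * np.pi * np.sqrt(eps_r)) * np.log(b / a)

def E_field(r, V=1.0):
    """Radial electric field magnitude at radius r for applied voltage V."""
    return V / (r * np.log(b / a))

# For these dimensions Z0 comes out near 47 ohms, and E falls off as 1/r.
```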

For a coaxial cable, we always need to apply the boundary condition at an annular face, but we can also use the Lumped Port boundary condition in other cases. There are also a Uniform and a User-Defined option for the Lumped Port condition. The Uniform option can be used if you have a geometry as shown below: a surface bridging the gap between two electrically conductive faces. The electric field is assumed to be uniform in magnitude between the bounding faces, and the software automatically computes the height and width of the Lumped Port face, which should always be much smaller than the wavelength in the surrounding material. Uniform Lumped Ports are commonly used to excite striplines and coplanar waveguides, as discussed in detail here.

*A typical Uniform Lumped Port geometry.*

The User-Defined option allows you to manually enter the height and width of the feed, as well as the direction of the electric field vector. This option is appropriate for geometries like the one shown below, as demonstrated in this example of a dipole antenna.

*An example of a User-Defined Lumped Port geometry.*

Another use of the Lumped Port condition is to model a small electrical element such as a resistor, capacitor, or inductor bonded onto a microwave circuit. The Lumped Port can be used to specify an effective impedance between two conductive boundaries within the modeling domain. There is an additional Lumped Element boundary condition that is identical in formulation to the Lumped Port, but has a customized user interface and different postprocessing options. The example of a Wilkinson power divider demonstrates this functionality.

Once the solution of a model using Lumped Ports is computed, COMSOL Multiphysics will also automatically postprocess the S-parameters, as well as the impedance at each Lumped Port in the model. The impedance can be computed for TEM mode waveguides only. It is also possible to compute an approximate impedance for a structure that is very nearly TEM, as shown here. But once there is a significant electric or magnetic field in the direction of propagation, then we can no longer use the Lumped Port condition. Instead, we must use the Port boundary condition.
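As a small aside on what these postprocessed quantities mean for a TEM port: in the simplest transmission-line picture, the reflection coefficient (S11) at a port follows directly from the impedance mismatch. The 50 and 75 ohm values below are illustrative assumptions, not from the text.

```python
import numpy as np

# Reflection coefficient at a TEM port, transmission-line picture:
#   Gamma = (Z_load - Z_port) / (Z_load + Z_port)
# Port and load impedances below are illustrative values.

def s11(Z_load, Z_port=50.0):
    """Reflection coefficient seen at a port of reference impedance Z_port."""
    return (Z_load - Z_port) / (Z_load + Z_port)

gamma = s11(75.0)                      # a 75-ohm load on a 50-ohm port
s11_db = 20 * np.log10(abs(gamma))     # magnitude in dB

# A matched load (Z_load = 50 ohm) gives Gamma = 0, i.e., no reflection.
```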

To begin discussing the Port boundary condition, let’s examine the fields within a rectangular waveguide. Again, there are analytic solutions for the propagating fields in a rectangular waveguide. These solutions are classified as either Transverse Electric (TE) or Transverse Magnetic (TM), meaning there is no electric or magnetic field in the direction of propagation, respectively.

Let’s examine a waveguide supporting TE modes only, which can be modeled in the 2D plane. The geometry we will consider consists of two straight sections of different cross-sectional area. At the operating frequency, the wider section supports both the TE10 and TE20 modes, while the narrower section supports only the TE10 mode. The waveguide is excited with a TE10 mode in the wider section. As the wave propagates down the waveguide and hits the junction, part of the wave will be reflected back towards the source as a TE10 mode, part will continue into the narrower section as a TE10 mode, and part will be converted to a TE20 mode that propagates back towards the source boundary. We want to properly model this and compute the split into these various modes.
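Which TE modes each section supports follows from the analytic cutoff frequencies, f_c = c m / (2a) for the TEm0 modes of a section of width a. The Python sketch below checks a hypothetical pair of section widths and an operating frequency (the text gives no dimensions) that reproduce the scenario described above.

```python
import numpy as np

# TEm0 cutoff frequencies in a rectangular waveguide: f_c = c * m / (2 * a).
# The section widths and operating frequency are assumed, illustrative values.

c0 = 299792458.0                      # speed of light, m/s

def cutoff(m, a):
    """Cutoff frequency of the TEm0 mode in a waveguide of width a."""
    return c0 * m / (2.0 * a)

a_wide, a_narrow = 40e-3, 20e-3       # assumed section widths, m
f = 9e9                               # assumed operating frequency, Hz

wide_modes = [m for m in (1, 2, 3) if cutoff(m, a_wide) < f]
narrow_modes = [m for m in (1, 2, 3) if cutoff(m, a_narrow) < f]
# With these numbers, the wide section carries TE10 and TE20,
# while the narrow section carries only TE10, matching the scenario above.
```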

The Port boundary conditions are formulated slightly differently from the Lumped Port boundary conditions in that you can add multiple types of ports to the same boundary. That is, the Port boundary conditions each *contribute to* (as opposed to the Lumped Ports, which *override*) other boundary conditions. The Port boundary conditions also specify the magnitude of the incoming wave in terms of the power in each mode.

*Sketch of the waveguide system being considered.*

The image below shows the solution to the above model with three Port boundary conditions, along with the analytic solution for the TE10 and TE20 electric field mode shapes. Computing the correct solution to this problem does require adding all three of these ports. After computing the solution, the software also makes the S-parameters available for postprocessing, which indicate the relative split and phase shift between the incoming and outgoing signals.

*Solution showing the different port modes and the computed electric field.*

The Port boundary condition also supports Circular and Coaxial waveguide shapes, since these cases have analytic solutions. Most waveguide cross sections, however, do not. In such cases, the Numeric Port boundary condition must be used. This condition can be applied to an arbitrary waveguide cross section, although solving a model with a Numeric Port requires first solving for the mode shape of the fields at the port boundary. For examples of this modeling technique, please see this example first, which compares against a semi-analytic case, followed by this example, which can only be solved by numerically computing the field shape at the ports.

*Rectangular, Coaxial, and Circular Ports are predefined.*

*Numeric Ports can be used to define arbitrary waveguide cross sections.*

The last case for the Port boundary condition is the modeling of plane waves incident upon quasi-infinite periodic structures such as diffraction gratings. In this case, we know that any incoming and outgoing waves must be plane waves. The outgoing plane waves travel in many different directions (the different diffraction orders), and we can determine these directions ahead of time, albeit not their relative magnitudes. In such instances, you can use the Periodic Port boundary condition, which allows you to specify the polarization and direction of the incoming plane wave. The software will then automatically compute the directions of the various diffracted orders and how much power goes into each of them.

For an extensive discussion of the Periodic Port boundary condition, please read this previous blog post on periodic structures. For a quick introduction to the use of these boundary conditions, please see this model of plasmonic wire grating.

We have introduced the Lumped Port and the Port boundary conditions for modeling boundaries at which an electromagnetic wave can pass without reflection and where we know something about the shape of the fields. Alternative options for modeling non-reflecting boundaries when we do not know the shape of the fields can be found here.

The Lumped Port boundary condition is available solely in the RF Module, while the Port boundary condition is available in the *Electromagnetic Waves* interface in the RF Module and the Wave Optics Module as well as the Beam Envelopes formulation in the Wave Optics Module. This previous blog post provides an extensive description of the differences between these modules.

But what about those boundaries that are not transparent, such as the conductive walls of the waveguide we have looked at today? These boundaries will reflect almost all of the wave and require a different set of boundary conditions, which we will look at next.


Let’s consider a thermostat similar to the one that you have in your home. Although there are many different types of thermostats, most of them use the same control scheme: A sensor that monitors temperature is placed somewhere within the system, usually some distance away from the heater. When the sensed temperature falls below a desired lower setpoint, the thermostat switches the heater on. As the temperature rises above a desired upper setpoint, the thermostat switches the heater off. This is known as a *bang-bang controller*. In practice, you typically only have a single setpoint, and there is an offset, or lag, which is used to define the upper and lower setpoints.

The objective of having different upper and lower setpoints is to minimize the switching of the heater state. If the upper and lower setpoints are the same, the thermostat would constantly be cycling the heater, which can lead to premature component failure. If you do want to implement such a control, you only need to know the current temperature of the sensor. This can be modeled in COMSOL Multiphysics quite easily, as we have highlighted in this previous blog post.

On the other hand, the bang-bang controller is a bit more complex since it does need to know something about the history of the system; the heater changes its state as the temperature rises above or below the setpoints. In other words, the controller provides *hysteresis*. In COMSOL Multiphysics, this can be implemented using the *Events* interface.
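Stripped of the thermal model, the control law itself fits in a few lines. Here is a minimal sketch in Python, with illustrative setpoints of 45°C and 55°C as defaults:

```python
def bang_bang(T, heater_on, T_low=45.0, T_high=55.0):
    """Bang-bang control with hysteresis: switch the heater off above the
    upper setpoint, on below the lower setpoint, and otherwise keep the
    previous state. The dependence on heater_on is the hysteresis."""
    if T > T_high:
        return False
    if T < T_low:
        return True
    return heater_on
```

Note that the function needs the previous heater state as an input: between the two setpoints, the output depends on the history of the system, which is exactly why implementing this in COMSOL Multiphysics requires the *Events* interface rather than a simple expression.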

When using COMSOL Multiphysics to solve time-dependent models, the *Events* interface is used to stop the time-stepping algorithms at a particular point and offer the possibility of changing the values of variables. The times at which these events occur can be specified either explicitly or implicitly. An *explicit event* should be used when we know the point in time when something about the system changes. We’ve previously written about this topic on the blog in the context of modeling a periodic heat load. An *implicit event*, on the other hand, occurs at an unknown point in time and thus requires a bit more setup. Let’s take a look at how this is done within the context of the thermal model shown below.

*Sketch of the thermal system under consideration.*

Consider a simple thermal model of a lab-on-a-chip device modeled in a 2D plane. A one-millimeter-thick glass slide has a heater on one side and a temperature sensor on the other. We will treat the heater as a 1 W heat load distributed across part of the bottom surface, and we will assume that there is a very small, thermally insignificant temperature sensor on the top surface. There is also free convective cooling from the top of the slide to the surroundings, which is modeled with a heat flux boundary condition. The system is initially at 20°C, and we want to keep the sensor between 45°C and 55°C.

*A Component Coupling is used to define the Variable, T_s, the sensor temperature.*

The first thing we need to do — before using the *Events* interface — is define the temperature at the sensor point via an Integration Component Coupling and a Variable, as shown above. The reason why this is done is to make the temperature at this point, T_s, available within the *Events* interface.

The *Events* interface itself is added like any other physics interface within COMSOL Multiphysics. It is available within the *Mathematics > ODE and DAE interfaces* branch.

*The *Discrete States* feature is used to define the state of the heater. Initially, the heater is on.*

First, we use the *Events* interface to define a set of *discrete variables*, variables which are discontinuous in time. These are appropriate for modeling on/off conditions, as we have here. The *Discrete States* feature shown above defines a variable, *HeaterState*, which is multiplied by the applied heat load in the *Heat Transfer in Solids* problem. The variable can be either one or zero, depending upon the system’s temperature history. The initial condition is one, meaning we are starting our simulation with the heater on. It is important that we set the appropriate initial condition here. It is this *HeaterState* variable that will be changed depending upon the sensor temperature during the simulation.

*Two *Indicator States* in the *Events* interface depend upon the sensor temperature.*

To trigger a change in the *HeaterState* variable, we first need to introduce two *Indicator States*. The objective of the *Indicator States* is to define variables that indicate when an event will occur. Two indicator variables are defined. The *Up* indicator variable is defined as:

`T_s - 55[degC]`

which goes smoothly from negative to positive as the sensor temperature rises above 55°C. Similarly, the *Down* indicator variable will go smoothly from negative to positive at 45°C. We will want to trigger a change in the *HeaterState* variable as these indicator variables change sign.

*The *HeaterState* variable is reinitialized within the *Events* interface.*

We use *Implicit Event* features, since we do not know ahead of time when these events will occur, but we do know under what conditions we want to change the state of the heater. As shown above, two *Implicit Event* features are used to reinitialize the state of the heater to either zero or one, depending upon whether the *Up* indicator variable becomes greater than zero or the *Down* indicator variable becomes less than zero, respectively. The event is triggered when the logical condition becomes true. Once this happens, the transient solver will stop and restart with the newly initialized *HeaterState* variable, which is used to control the applied heat, as illustrated below.

*The *HeaterState* variable controls the applied heat.*
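The same pattern, an adaptive transient solver halted and restarted by zero crossings of indicator functions, can be sketched with SciPy's `solve_ivp`, whose terminal event functions play the role of the *Indicator States*. The lumped thermal model and every numerical value below are illustrative stand-ins, not taken from the model above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped model: the sensor temperature T relaxes toward T_heat
# when the heater is on and toward T_amb when it is off.
T_amb, T_heat, tau = 20.0, 80.0, 60.0
heater = 1  # initial state: heater on (cf. the Discrete States initial value)

def rhs(t, T):
    target = T_heat if heater else T_amb
    return [(target - T[0]) / tau]

# Indicator functions: an event fires when these cross zero (cf. Up and Down)
up = lambda t, T: T[0] - 55.0    # crosses zero as T rises above 55 degC
down = lambda t, T: T[0] - 45.0  # crosses zero as T falls below 45 degC
up.terminal, up.direction = True, 1
down.terminal, down.direction = True, -1

t, T, t_end, times, temps = 0.0, 20.0, 1800.0, [], []
while t < t_end:
    sol = solve_ivp(rhs, (t, t_end), [T], events=[up, down], max_step=5.0)
    times.extend(sol.t)
    temps.extend(sol.y[0])
    t, T = sol.t[-1], sol.y[0][-1]
    if sol.status == 1:          # an event fired: reinitialize the state
        heater = 1 - heater
```

Each restart of `solve_ivp` corresponds to the transient solver in COMSOL Multiphysics stopping at the event and restarting with the reinitialized *HeaterState* variable.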

When solving this model, we can make some changes to the solver settings to ensure that we have good accuracy and keep only the most important results. We will want to solve this model for a total time of 30 minutes, and we will store the results only at the time steps that the solver takes. These settings are depicted below.

*The study settings for the Time-Dependent Solver set the total solution time from 0-30 minutes, with a relative tolerance of 0.001.*

We will need to make some changes within the settings for the Time-Dependent Solver. These changes can be made prior to the solution by first right-clicking on the *Study* branch, choosing “Show Default Solver”, and then making the two changes shown below.

*Modifications to the default solver settings. The event tolerance is changed to 0.001 and the output times to store are set to the steps taken by the solver.*

Of course, as with any finite element simulation, we will want to study the convergence of the solution as the mesh is refined and the solver tolerances are made tighter. Representative simulation results are highlighted below and demonstrate how the sensor temperature is kept between the upper and lower setpoints. Also, observe that the solver takes smaller time steps immediately after each event, but larger time steps when the solution varies gradually.

*The heater switches on and off to keep the sensor temperature between the setpoints.*

We have demonstrated here how implicit events can be used to stop and restart the solver as well as change variables that control the model. This enables us to model systems with hysteresis, such as thermostats, and perform simulations with minimal computational cost.


First, let’s take a (very) brief conceptual look at the implicit time-stepping algorithms used when you are solving a time-dependent problem in COMSOL Multiphysics. These algorithms choose a time step based upon a user-specified tolerance. While this allows the software to take very large time steps when there are gradual variations in the solution, the drawback is that too loose a tolerance can cause the solver to skip over certain transient events.

To understand this, consider the ordinary differential equation:

\frac{d u}{d t} = -u + f(t)

where the forcing function f(t) is a square unit pulse starting at t_s and ending at t_e. Given an initial condition, u_0, we can solve this problem for any length of time, either analytically or numerically. Here is the analytic solution for u_0=1:

In the above plot, we can observe the exponential decay and rise as the forcing function switches between zero and one. Let’s now look at the numerical solution to this problem for two different user-specified tolerances:

*The numeric solution (red dots) is shown for a relative tolerance of 0.2 and 0.01 and is compared to the analytical result (grey line).*

We can see from the plot above that a very loose relative tolerance of 0.2 does not accurately capture the switching of the load. At a tighter relative tolerance of 0.01 (the solver default), the solution is reasonably well resolved. We can also observe that the spacing of the points shows the varying time steps used by the solver. It is apparent that the solver takes larger time steps where the solution changes slowly and finer time steps when the heat load switches on and off.

However, if the tolerance is set too loosely, the solver may skip over the heat load change entirely when the width of the heat load pulse gets very small. That is, if t_s and t_e move very close together, the pulse’s contribution to the solution becomes smaller than the specified tolerance allows the solver to resolve. We can of course mitigate this by using tighter tolerances, but a better option exists.

We can avoid having to tighten the tolerances by using *Explicit Events*, which are a way of letting the solver know that it should evaluate the solution at a specified point in time. From that point in time forward, the solver will continue as before until the next event is reached. Let’s look at the numeric solution to the above problem, with *Explicit Events* at t_s and t_e and solved with a relative tolerance of 0.2 (a very loose tolerance):

*When using *Explicit Events*, the numerical solution — even with a very loose relative tolerance of 0.2 — compares quite well with the analytical result. Away from the events, large time steps are taken.*

The above plot illustrates that the *Explicit Events* force a solution evaluation when the load switches on or off. The loose relative tolerance allows the solver to take large time steps when the solution varies gradually. Small time steps are taken immediately after the events to give good resolution of the variation in the solution. Thus, we have both good resolution of the heat load switching on or off and we take large time steps to minimize the overall computational cost.
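The effect of an explicit event can be mimicked with any adaptive ODE solver by restarting the integration at t_s and t_e, which forces a solution evaluation at exactly those times. A minimal sketch with SciPy (the values of t_s, t_e, and u_0 are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

t_s, t_e, u0 = 1.0, 1.5, 1.0   # pulse start/end and initial condition

def rhs(t, u, f):
    # du/dt = -u + f(t), with f constant within each segment
    return -u + f

def solve_with_events(t_end, rtol=1e-3):
    """Integrate segment by segment, restarting at t_s and t_e (the
    'explicit events'), so the solver cannot step over the pulse."""
    u = u0
    for a, b, f in [(0.0, t_s, 0.0), (t_s, t_e, 1.0), (t_e, t_end, 0.0)]:
        sol = solve_ivp(rhs, (a, b), [u], args=(f,), rtol=rtol)
        u = sol.y[0, -1]
    return u

def analytic(t_end):
    """Piecewise-exponential analytic solution for comparison."""
    u = u0 * np.exp(-t_s)                        # decay on [0, t_s]
    u = 1.0 + (u - 1.0) * np.exp(-(t_e - t_s))   # rise toward 1 on [t_s, t_e]
    return u * np.exp(-(t_end - t_e))            # decay on [t_e, t_end]
```

Because the discontinuities in f(t) now coincide with restart points, even a loose tolerance cannot miss the pulse; within each segment, the solver is free to take large steps.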

Now that we’ve introduced the concepts, we will take a look at implementing these *Explicit Events*.

We will begin with an existing example from the COMSOL Multiphysics Model Library, the Laser Heating of a Silicon Wafer, and modify it slightly to include a periodic heat load and the *Events* interface. In this model, the laser is represented as a distributed heat source moving back and forth across the surface of a spinning silicon wafer.

The laser heat source itself traverses back and forth over the wafer along the centerline with a period of 10 seconds. To minimize the temperature variation over the wafer during the heating process, we want to turn the laser off periodically, while the heat source is in the center of the wafer. To model this, we will introduce an *Analytic* function, pulse(x), that uses the Boolean expression:

`(x<2)||(x>3)`

to evaluate pulse(t) as zero between t = 2 and t = 3 seconds, and as one otherwise. The *Periodic Extension* option is used to repeat this pattern every five seconds, as shown in the screenshot below.

*The settings used to define a periodic function, as plotted.*
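In Python terms, this function together with its *Periodic Extension* behaves as follows (the argument names are illustrative):

```python
def pulse(t, period=5.0, t_off=2.0, t_on=3.0):
    """1 while the laser is on, 0 while it is off. The Boolean expression
    (x<2)||(x>3) is applied to x = t mod period, mimicking the Analytic
    function with its Periodic Extension repeating every 5 seconds."""
    x = t % period
    return 1.0 if (x < t_off) or (x > t_on) else 0.0
```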

We can use this function to modify the applied heat flux representing the laser heat source, as illustrated below:

*The settings for the applied heat flux boundary condition.*

The last thing that we need to do is to add the *Events* interface. This physics interface is found within *Mathematics > ODE and DAE interfaces* when using the *Add Physics* browser. Within the *Events* interface, add two *Explicit Events* with the settings shown below to define a periodic event starting at two and three seconds and repeating every five seconds.

*The *Explicit Events* settings. The second of these events starts at 3 s.*

No other changes are needed, but we can take a quick look at the solver settings:

*The settings for the time-dependent solver.*

Note that the entries in the *Times* field are the output times. These settings do not directly control the actual time steps taken by the solver. The *Relative Tolerance* field (default value of 0.01) along with the *Events* — if they are in the model — control these time steps.

*A comparison of unpulsed (left) and pulsed (right) heat loads.*

You can compare the results of this simulation to the original model to see the differences in temperature across the wafer. With a periodic heat load, the temperature rise is more gradual and the temperature variations at any point in time are smaller.

We have looked at using the *Events* interface for modeling a periodic heat load over time and introduced why it provides a good combination of accuracy and low computational requirements. There is a great deal more that you can do with the *Events* interface — if you would like to learn more, we encourage you to consult the documentation. An extended demonstration of the usage of the *Events* interface is featured in the Capacity Fade of a Li-ion Battery example from the Model Library.

On the other hand, when dealing with problems that are either convection dominated or wave-type problems (e.g., fluid flow models or transient structural response, respectively), then we would not want to introduce instantaneous changes in the loads. The reasons behind that — and alternative modeling techniques for such situations — will be the topic of an upcoming blog. Stay tuned!


We are often interested in modeling a radiating object, such as an antenna, in free space. We may be building this model to simulate an antenna on a satellite in deep space or, more often, an antenna mounted in an anechoic test chamber.

*An antenna in infinite free space. We only want to model a small region around the antenna.*

Such models can be built using the *Electromagnetic Waves, Frequency Domain* formulation in the RF Module or the Wave Optics Module. These modules provide similar interfaces for solving the frequency domain form of Maxwell’s equations via the finite element method. (For a description of the key differences between these modules, please see my previous blog post, titled “Computational Electromagnetics Modeling, Which Module to Use?”)

Let’s limit ourselves in this blog post to considering only 2D problems, where the electromagnetic wave is propagating in the *x-y* plane, with the electric field polarized in the *z*-direction. We will additionally assume that our modeling domain is purely vacuum, so that the frequency domain Maxwell’s equations reduce to:

\nabla \cdot \left( \mu_r^{-1} \nabla E_z \right) + k_0^2 \epsilon_r E_z = 0

where E_z is the electric field, the relative permeability and permittivity are \mu_r = \epsilon_r = 1 in vacuum, and k_0 is the free-space wavenumber.

Solving the above equation via the finite element method requires that we have a finite-sized modeling domain, as well as a set of boundary conditions. We want to use boundary conditions along the outside that are transparent to any radiation. Doing so will let our truncated domain be a reasonable approximation of free space. We also want this truncated domain to be as small as possible, since keeping our model size down reduces our computational costs.

Let’s now look at two of the options available within the COMSOL Multiphysics simulation environment for truncating your modeling domain: the scattering boundary condition and the perfectly matched layer.

One of the first transparent boundary conditions formulated for wave-type problems was the Sommerfeld radiation condition, which, for 2D fields, can be written as:

\lim_{ r \to \infty} \sqrt r \left( \frac{\partial E_z}{\partial r} + i k_0 E_z \right) = 0

where r is the radial coordinate.

This condition is exactly non-reflecting when the boundaries of our modeling domain are infinitely far away from our source, but of course an infinitely large modeling domain is impossible. So, although we cannot apply the Sommerfeld condition exactly, we *can* apply a reasonable approximation of it.

Let’s now consider the boundary condition:

\mathbf{n} \cdot (\nabla E_z) + i k_0 E_z = 0

You can clearly see the similarities between this condition and the Sommerfeld condition. This boundary condition is more formally called the *first-order scattering boundary condition (SBC)* and is trivial to implement within COMSOL Multiphysics. In fact, it is nothing other than a Robin boundary condition with a complex-valued coefficient.

If you would like to see an example of a 2D wave equation implemented from scratch along with this boundary condition, please see the example model of diffraction patterns.

Now, there is a significant limitation to this condition. It is only non-reflecting if the incident radiation is exactly normally incident to the boundary. Any wave incident upon the SBC at a non-normal incidence will be partially reflected. The reflection coefficient for a plane wave incident upon a first-order SBC at varying incidence is plotted below.

*Reflection of a plane wave at the first-order SBC with respect to angle of incidence.*

We can observe from the above graph that as the incoming plane wave approaches grazing incidence, the wave is almost completely reflected. At a 60° incident angle, the reflection is around 10%, so we would clearly like to have a better boundary condition.

COMSOL Multiphysics also includes (as of version 4.4) the second-order SBC:

\mathbf{n} \cdot (\nabla E_z) + i k_0 E_z -\frac{i }{2 k_0} \nabla_t^2 E_z= 0

This condition adds a term involving the second tangential derivative of the electric field along the boundary. It, too, is quite easy to implement within the COMSOL software architecture.
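For reference, substituting incident and reflected plane waves into the two boundary conditions yields closed-form amplitude reflection coefficients. The expressions below follow the standard derivation (they are not taken from the COMSOL documentation, and their signs depend on the time-harmonic convention); the reflected power is |R|²:

```python
import numpy as np

def R_first_order(theta):
    """Amplitude reflection coefficient of the first-order SBC for a plane
    wave incident at angle theta (radians) from the boundary normal."""
    c = np.cos(theta)
    return (c - 1.0) / (c + 1.0)

def R_second_order(theta):
    """Amplitude reflection coefficient of the second-order SBC; the extra
    tangential-derivative term contributes the sin^2(theta)/2 corrections."""
    c, s2 = np.cos(theta), np.sin(theta) ** 2
    return (c - 1.0 + s2 / 2.0) / (c + 1.0 - s2 / 2.0)
```

At 60° incidence, this gives a power reflection of 1/9, about 11%, for the first-order condition, consistent with the roughly 10% quoted above, and about 1.2% for the second-order condition.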

Let’s compare the reflection coefficient of the first- and second-order SBC:

*Reflection of a plane wave at the first- and second-order SBC with respect to angle of incidence.*

We can see that the second-order SBC is uniformly better. We can now get to a ~75° incident angle before the reflection is 10%. This is better, but still not the best we can achieve. Let’s now turn our attention away from boundary conditions and look at perfectly matched layers.

Recall that we are trying to simulate a situation such as an antenna in an anechoic test chamber, a room with pyramidal wedges of radiation absorbing material on the walls that will minimize any reflected signal. This can be our physical analogy for the perfectly matched layer (PML), which is not a boundary condition, but rather a domain that we add along the exterior of the model that should absorb all outgoing waves.

Mathematically speaking, the PML is simply a domain that has an anisotropic and complex-valued permittivity and permeability. For a complete derivation of these tensors, please see *Theory and Computation of Electromagnetic Fields*. Although PMLs are theoretically non-reflecting, they do exhibit some reflection due to the numerical discretization: the mesh. To minimize this reflection, we want to use a mesh in the PML that aligns with the anisotropy in the material properties. The appropriate PML meshes are shown below, for 2D circular and 3D spherical domains. Cartesian and spherical PMLs and their appropriate usage are also discussed within the product documentation.

*Appropriate meshes for 2D circular and 3D spherical PMLs.*

In COMSOL Multiphysics 5.0, these meshes can be automatically set up for 3D problems using the Physics-Controlled Meshing, as demonstrated in this video.

Let’s now look at the reflection from a PML with respect to incident angle as compared to the SBCs:

*Reflection of a plane wave at the first- and second-order SBC and the PML with respect to angle of incidence.*

We can see that the PML reflects the least amount across the widest range. There is still reflection as the wave is propagating almost exactly parallel to the boundary, but such cases are luckily rather rare in practice. An additional feature of the PML, which we will not go into detail about for now, is that it absorbs not only the propagating wave, but also any evanescent field. So, from a physical point of view, the PML truly can be thought of as a material with almost perfect absorption.

Clearly, the PML is the best of the approaches described here. However, the PML does use more memory as compared to the SBC.

So, if you are early in the modeling process and want to build a model that is a bit less computationally intensive, the second-order SBC is a good option. You can also use it in situations where you have a strong reason to believe that any reflections at the SBC won’t greatly affect the results you are interested in.

The first-order SBC is currently the default for compatibility with previous versions of the software, but with COMSOL Multiphysics version 4.4 or greater, you should use the second-order SBC. We have only introduced the plane-wave form of the SBC here, but cylindrical-wave and spherical-wave (in 3D) forms of the first- and second-order SBCs are also available. Although they use less memory, all of these conditions exhibit more reflection than the PML.

The SBC and the PMLs are appropriate conditions for open boundaries where you do not know much about the fields at the boundaries *a priori*. If, on the other hand, you want to model an open boundary where the fields are known to have a certain form, such as a boundary representing a waveguide, the Port and Lumped Port boundary conditions are more appropriate. We will discuss those conditions in an upcoming blog post.

There are many situations in which a rotating object is exposed to loads. For example, think of a rotisserie chicken or a kebab. Meat on a rotating spit is exposed to a heat load, usually a radiative heat source such as coals. Rotation is a simple way to distribute the applied heat. It keeps any regions from getting too hot or too cold and is an easy way to promote uniform cooking.

Now that I’ve got you licking your chops, let’s look at a slightly simpler case.

Today, we will look at the laser heating of a spinning silicon wafer. Although it isn’t quite as delicious to think about as rotating food, I’m sure you will find it equally informative.

As you may know, we already have an example of this in our Model Library and online Model Gallery. The existing example considers a wafer mounted on a rotating stage and heated by a laser traversing back and forth over the surface. The problem is solved in a stationary coordinate system. (Just think of yourself standing outside the process chamber and watching the wafer spinning on the stage.) We will call this the *global coordinate system*.

The laser is modeled as a heat source that moves back and forth along the global *x*-axis, while the wafer rotates about the global *z*-axis. The rotation of the wafer is modeled via the *Translational Motion* feature within the *Heat Transfer in Solids* physics interface, which adds a convective term to the governing transient heat transfer equation:

\rho C_p \frac{\partial T} {\partial t} -\nabla \cdot ( k \nabla T) = -\rho C_p \mathbf{u} \cdot \nabla T

The right-hand side of the above equation accounts for the rotation of the wafer as \mathbf{u}, the velocity vector. This velocity vector can be interpreted as material entering and leaving each element in the finite element mesh — that is, we are solving a problem on an Eulerian frame. Since the geometry is a uniform disk and the applied velocity vector describes a rotation about the axis of the disk, this is a valid approach.

The drawback, however, arises when you want to add more physics to the model. The Translational Motion feature is only available within the Heat Transfer physics interfaces, and there are many other physics that we do not want to solve on an Eulerian frame.

Instead of solving this problem on an Eulerian frame in the global coordinate system, we can solve this problem on a Lagrangian frame, with a rotating coordinate system that moves with the material rotation of the wafer. (Think of yourself as a tiny person standing on the surface of the wafer. The surroundings will appear to be rotating, whereas the wafer will appear stationary.)

The right-hand side of the above governing heat transfer equation becomes zero, but we now need to consider a heat load that not only moves back and forth along the global *x*-axis but also rotates around the *z*-axis of our rotating coordinate system. Although this may sound complicated, it is quite straightforward to implement.

*An observer in the global coordinate system sees a spinning wafer with a laser heat source traversing back and forth along the *x*-axis (left). An observer in a coordinate system rotating with the wafer sees the wafer as stationary, but the heat source moves in a complicated path in the *x*-*y* plane (right).*

The *General Extrusion* operators provide a mechanism for transforming fields from one coordinate system to another. Some applications that we have already written about include submodeling, coupling different physics interfaces, and evaluating results at a moving point.

Here, we will use the General Extrusion operators to apply a rotational transformation to the applied loads. Our loads are applied in the rotating coordinate system via a coordinate transform from the global coordinate system given by the rotation matrix:

\left\{ \begin{array}{c} x' \\ y' \\ z' \end{array} \right\} = \left[ \begin{array}{ccc} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{array} \right] \left\{ \begin{array}{c} x \\ y \\ z \end{array} \right\}

We can start with the existing Laser Heating of a Silicon Wafer example and simply remove the existing Translational Motion feature. We then have to add a General Extrusion operator, which implements the above transformation, as shown in the screenshot below. We will also want to implement a second operator that applies the reverse transform, which is done by switching the sign of the rotation.

*The general extrusion operation applies a rotational transform.*

The applied heat load is described via a user-defined function, hf(x,y,t), that describes how the laser heat load moves back and forth along the *x*-axis in the global coordinate system. This moving load is then transformed into the rotating coordinate system via the General Extrusion operator, as shown in the screenshot below.

*The applied heat load in the rotating coordinate system, defined via the global coordinate system and the rotational transform.*
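In code, the operator pair amounts to evaluating the global-frame load at rotated coordinates. The sketch below assumes a hypothetical Gaussian spot hf and rotation rate omega; the sense of the rotation is an assumption, and the reverse transform simply flips the sign of the angle:

```python
import numpy as np

def hf(x, y, t, period=10.0, amplitude=0.04, sigma=0.005):
    """Hypothetical laser spot sweeping back and forth along the global
    x-axis (a stand-in for the model's actual heat source)."""
    x0 = amplitude * np.sin(2.0 * np.pi * t / period)
    return np.exp(-((x - x0) ** 2 + y ** 2) / (2.0 * sigma ** 2))

def rotate(x, y, theta):
    """The rotation matrix from above applied to (x, y); z is unchanged."""
    return (np.cos(theta) * x - np.sin(theta) * y,
            np.sin(theta) * x + np.cos(theta) * y)

def hf_rotating(xp, yp, t, omega=2.0 * np.pi / 3.0):
    """Heat load seen at point (xp, yp) of the rotating wafer frame: map the
    point to global coordinates with theta = omega*t (assumed sense of
    rotation), then evaluate the global-frame source there."""
    x, y = rotate(xp, yp, omega * t)
    return hf(x, y, t)
```

At t = 0, the two frames coincide and the rotating-frame load equals the global one; at later times, the spot traces the complicated looping path sketched in the right-hand figure above.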

That’s it — you can solve the model just as before.

The results will now be with respect to the rotating coordinate system. It can be more practical for us to plot the temperature solution with respect to the global coordinate system by using the General Extrusion operator that applies the *reverse* transformation. This will give us a visualization of the temperature field as if we were standing outside of the process chamber and were watching the spinning wafer with a thermal camera.

*The second general extrusion operator is used to rotate the results back to the global coordinate system.*

The results of the simulation of the temperature field over time will be identical regardless of whether you use the Translational Motion feature or the General Extrusion operator. Although the General Extrusion operator requires more effort to implement — and does take a bit longer to solve — it is needed if you are interested in more than just the thermal solution.

For example, if you also need to compute a temperature-driven chemical diffusion and reaction process or the evolution of thermal stresses during the wafer heating, these problems should be solved on a coordinate system that rotates with the wafer.

There are of course many other applications where you could use the General Extrusion operator, but I hope I’ve satisfied your appetite for today!
