Whenever we want to solve a modeling problem involving Maxwell’s equations under the assumptions that all material properties are linear with respect to field strength and that all fields vary sinusoidally in time at a single, known frequency, we can treat the problem in the frequency domain. When the electromagnetic field solutions are wave-like, such as for resonant structures, radiating structures, or any problem where the effective wavelength is comparable to the sizes of the objects we are working with, then the problem can be treated as a wave electromagnetics problem.
COMSOL Multiphysics has a dedicated physics interface for this type of modeling — the Electromagnetic Waves, Frequency Domain interface. Available in the RF and Wave Optics modules, it uses the finite element method to solve the frequency domain form of Maxwell’s equations. Here’s a guide for when to use this interface:
The wave electromagnetic modeling approach is valid in the regime where the object sizes range from approximately \lambda/100 to 10 \lambda, regardless of the absolute frequency. Below this size, the Low Frequency regime is appropriate. In the Low Frequency regime, the object will not be acting as an antenna or resonant structure. If you want to build models in this regime, there are several different modules and interfaces that you could use. For details, please see this blog post.
The upper limit of \sim 10 \lambda comes from the memory requirements for solving large 3D models. Once your modeling domain size is greater than \sim 10\lambda in each direction, corresponding to a domain size of (10\lambda)^3 or 1000 cubic wavelengths, you will start to need significant computational resources to solve your models. For more details about this, please see this previous blog post. On the other hand, 2D models have far more modest memory requirements and can solve much larger problems.
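The scaling above can be made concrete with a quick element-count estimate. This is a rough sketch, assuming a uniform grid with five mesh cells per wavelength in each direction (the function name and the rule of thumb are illustrative; actual memory use depends on element order and solver choice):

```python
def wave_mesh_cells(domain_size_in_wavelengths, cells_per_wavelength=5, dims=3):
    """Rough count of mesh cells needed to resolve a wave-type problem.

    Assumes a uniform grid with `cells_per_wavelength` cells along each
    direction (a common rule of thumb; the real count depends on element
    order, geometry, and local refinement).
    """
    per_direction = domain_size_in_wavelengths * cells_per_wavelength
    return per_direction ** dims

# A (10*lambda)^3 domain at 5 cells per wavelength:
print(wave_mesh_cells(10))          # 125000 cells in 3D
# The same domain size in 2D is far cheaper:
print(wave_mesh_cells(10, dims=2))  # 2500 cells
```

Going from 10 to 20 wavelengths per side multiplies the 3D cell count by eight, which is why domain size in wavelengths, rather than absolute frequency, is the practical limit.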
For problems where the objects being modeled are much larger than the wavelength, there are two options: the ray tracing approach of the Ray Optics Module, which treats light as rays rather than solving Maxwell’s equations directly, and the beam envelope method of the Wave Optics Module, which solves Maxwell’s equations under the assumption that the field envelope is slowly varying.
If you are interested in X-ray frequencies and above, then the electromagnetic wave will interact with and scatter from the atomic lattice of materials. This type of scattering is not appropriate to model with the wave electromagnetics approach, since it is assumed that within each modeling domain the material can be treated as a continuum.
So now that we understand what is meant by wave electromagnetics problems, let’s further classify the most common application areas of the Electromagnetic Waves, Frequency Domain interface and look at some examples of its usage. We will only look at a few representative examples here that are good starting points for learning the software. These applications are selected from the RF Module and Wave Optics Module Application Libraries, as well as the online Application Gallery.
An antenna is any device that radiates electromagnetic waves for the purposes of signal (and sometimes power) transmission. There is an almost infinite number of ways to construct an antenna, but one of the simplest is a dipole antenna. A patch antenna, on the other hand, is more compact and used in many applications. Quantities of interest include the S-parameters, antenna impedance, losses, and far-field patterns, as well as the interactions of the radiated fields with any surrounding structures, as seen in our Car Windshield Antenna Effect on a Cable Harness tutorial model.
Whereas an antenna radiates into free space, waveguides and transmission lines guide the electromagnetic wave along a predefined path. It is possible to compute the impedance of transmission lines and the propagation constants and S-parameters of both microwave and optical waveguides.
Rather than transmitting energy, a resonant cavity is a structure designed to store electromagnetic energy of a particular frequency within a small space. Such structures can be either closed cavities, such as a metallic enclosure, or an open structure like an RF coil or Fabry-Perot cavity. Quantities of interest include the resonant frequency and the Q-factor.
Conceptually speaking, the combination of a waveguide with a resonant structure results in a filter or coupler. Filters are meant to either block or pass certain frequencies propagating through a structure, while couplers are meant to allow certain frequencies to pass from one waveguide to another. A microwave filter can be as simple as a series of connected rectangular cavities, as seen in our Waveguide Iris Bandpass Filter tutorial model.
A scattering problem can be thought of as the opposite of an antenna problem. Rather than finding the radiated field from an object, an object is modeled in a background field coming from a source outside of the modeling domain. The far-field scattering of the electromagnetic wave by the object is computed, as demonstrated in the benchmark example of a perfectly conducting sphere in a plane wave.
Some electromagnetics problems can be greatly simplified in complexity if it can be assumed that the structure is quasi-infinite. For example, it is possible to compute the band structure of a photonic crystal by considering a single unit cell. Structures that are periodic in one or two directions such as gratings and frequency selective surfaces can also be analyzed for their reflection and transmission.
Whenever a significant amount of power is transmitted via radiation, any object that interacts with the electromagnetic waves can heat up. The microwave oven in your kitchen is a perfect example of where you would need to model the coupling between electromagnetic fields and heat transfer. Another good introductory example is RF heating, where the transient temperature rise and temperature-dependent material properties are considered.
Applying a large DC magnetic bias to a ferrimagnetic material results in a relative permeability that is anisotropic for small (with respect to the DC bias) AC fields. Such materials can be used in microwave circulators. The nonreciprocal behavior of the material provides isolation.
You should now have a general overview of the capabilities and applications of the RF and Wave Optics modules for frequency domain wave electromagnetics problems. The examples listed above, as well as the other examples in the Application Gallery, are a great starting point for learning to use the software, since they come with documentation and step-by-step modeling instructions.
Please also keep in mind that the RF and Wave Optics modules include other functionality and formulations not described here, such as transient electromagnetic wave interfaces for modeling material nonlinearities, like second harmonic generation, and for modeling signal propagation time. The RF Module additionally includes a circuit modeling tool for connecting a finite element model of a system to a circuit model, as well as an interface for modeling the transmission line equations.
As you delve deeper into COMSOL Multiphysics and wave electromagnetics modeling, please also read our other blog posts on meshing and solving options; various material models that you are able to use; as well as the boundary conditions available for modeling metallic objects, waveguide ports, and open boundaries. These posts will provide you with the foundation you need to model wave electromagnetics problems with confidence.
If you have any questions about the capabilities of using COMSOL Multiphysics for wave electromagnetics and how it can be used for your modeling needs, please contact us.
When you are solving a transient model, the COMSOL software by default uses an implicit time-stepping algorithm with adaptive time step size. This has the advantage of being unconditionally stable for many classes of problems and it lets the software choose the optimal time step size for the specified solver tolerances, thereby reducing the computational cost of the solution.
Two classes of time-stepping algorithms are available: a backward difference formula (BDF) and a generalized-alpha method. These algorithms use the solutions at several previous time steps (up to five) to numerically approximate the time derivatives of the fields and to predict the solution at the next time step.
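To see how a multistep formula uses previous solutions, here is a minimal BDF2 integrator for the test equation du/dt = -u (a sketch only; COMSOL’s solvers adapt both the order, up to five, and the step size automatically):

```python
import math

def bdf2_decay(u0, h, n_steps):
    """Integrate du/dt = -u with the two-step BDF2 formula.

    BDF2 approximates the time derivative from the two previous
    solutions: (3*u_new - 4*u_prev + u_prev2) / (2*h) = f(u_new).
    The first step is bootstrapped with backward Euler (BDF1).
    """
    u_prev2 = u0
    u_prev = u0 / (1.0 + h)          # backward Euler for step 1
    for _ in range(n_steps - 1):
        u_new = (4.0 * u_prev - u_prev2) / (3.0 + 2.0 * h)
        u_prev2, u_prev = u_prev, u_new
    return u_prev

h = 0.01
approx = bdf2_decay(1.0, h, 100)     # integrate to t = 1
exact = math.exp(-1.0)
print(abs(approx - exact))           # small discretization error
```

Note how each new value is computed from the two stored previous solutions, u_prev and u_prev2; this is exactly the history that the solver normally discards once a step is accepted.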
However, these previous solutions are not accessible within the model by default. The Previous Solution operator makes the solution at the previous time step available as a field variable in the model. This operator is available both for transient problems and for stationary problems solved using the continuation method. Let us take a look at how you can implement and use the Previous Solution operator in a transient model in COMSOL Multiphysics.
Using the Previous Solution operator requires only two additional features within the model tree. You must add an ODEs and DAEs interface to store the fields that you are interested in, and you must add the Previous Solution feature to the Time-Dependent Solver. Let us take a look at the implementation in terms of a transient heat transfer example: the laser heating of a wafer with a moving heat load, solved on a rotating coordinate system.
The first step is to add a Domain ODEs and DAEs interface to the model, since we will be interested in tracking the solution at the previous time step throughout the volume of the part. If we were only interested in the previous solution across a boundary, edge, or point, or in some global quantity, we could also use a Boundary, Edge, Point, or Global ODEs and DAEs interface.
The Domain ODEs and DAEs interface for tracking the solution at the previous time step.
The screenshot above shows the relevant settings for the Domain ODEs and DAEs interface. Note that the units of both the dependent variable and the source are set to Temperature. It is a good modeling practice to appropriately set units. The discretization is set to a Lagrange Quadratic, which matches the discretization used by the Heat Transfer in Solids interface. You will always want to make sure that you are using the appropriate discretization. The name of the field variable here is left at the default “u”, although you can rename it to anything you would like.
The equation being solved by the Domain ODEs and DAEs interface.
The screenshot above shows the equation that stores the temperature solution at the previous time step. This equation can be read as:
u-nojac(T)=0
The nojac() operator is needed, since we do not want this equation to contribute to the Jacobian (the system matrix). Lastly, we need to specify that this equation should be evaluated at the previous time step. This is done within the Solver Configurations.
The Previous Solution feature added to the Solver Configuration.
The screenshot above shows the Previous Solution feature added to the Time-Dependent Solver. Once you add this feature, simply select the appropriate field variable to be evaluated at the previous time step. It will also be faster (although not necessary) to use the Segregated Solver rather than the Fully Coupled solver.
And that is all there is to it. You can now solve the model just as you usually do and you will be able to evaluate the temperature at the previous computational time step.
Of course, having the solution at the previous time step isn’t really all that interesting in itself, but we can do quite a bit more than just store this solution. For example, we can apply logical expressions directly with the ODEs and DAEs equation interface. Consider the equation:
u-nojac(if(T>u,T,u))=0
This equation can be read as: “If the temperature at the previous time step is greater than u, set u equal to temperature. Otherwise, leave u unchanged.”
That is, it stores the maximum temperature reached at the previous time step at every point in the modeling domain. You can now evaluate the variables T and u at any point in the model to get both the temperature over time and the maximum temperature attained. To get the maximum temperature, you will want to take the maximum of the temperature at the previous time step and the temperature at the current time step, so you can introduce a variable in the model:
MaxTemp = max(T,u)
This will return the maximum temperature up to that time as shown in the plot below.
Temperature at a point plotted over time. The variable MaxTemp is also plotted and shows the maximum temperature reached up to that instant in time.
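The pattern above can be mimicked in a few lines of ordinary code, which may make the behavior of the stored variable easier to see. Here, u plays the role of the additional field variable and is only ever updated from values at the previous step (a plain Python sketch, not COMSOL syntax):

```python
def track_running_max(temperatures):
    """Mimic the Previous Solution pattern: u stores the maximum
    temperature over all *previous* steps, and MaxTemp = max(T, u)
    includes the current step as well."""
    u = temperatures[0]          # initial value of the stored field
    history = []
    for T in temperatures:
        max_temp = max(T, u)     # MaxTemp = max(T, u)
        history.append(max_temp)
        u = max(T, u)            # u <- if(T > u, T, u) for the next step
    return history

# A temperature trace that rises, then falls:
trace = [20.0, 80.0, 150.0, 120.0, 90.0]
print(track_running_max(trace))  # [20.0, 80.0, 150.0, 150.0, 150.0]
```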
We have shown here the implementation of the newly introduced Previous Solution operator for time-dependent models. The three steps to use this functionality appropriately are:

1. Add an ODEs and DAEs interface (Domain, Boundary, Edge, Point, or Global) with units and discretization matching the field you want to store.
2. Enter the equation defining the stored variable, wrapping the source term in the nojac() operator so that it does not contribute to the Jacobian.
3. Add the Previous Solution feature to the Time-Dependent Solver and select the field variable to be evaluated at the previous time step.
We have shown how to evaluate the maximum temperature in this example, but there is a great deal more that can be done with this functionality, so stay tuned!
While many different types of laser light sources exist, they are all quite similar in terms of their outputs. Laser light is very nearly single frequency (single wavelength) and coherent. Typically, the output of a laser is also focused into a narrow collimated beam. This collimated, coherent, and single frequency light source can be used as a very precise heat source in a wide range of applications, including cancer treatment, welding, annealing, material research, and semiconductor processing.
When laser light hits a solid material, part of the energy is absorbed, leading to localized heating. Liquids and gases (and plasmas), of course, can also be heated by lasers, but the heating of fluids almost always leads to significant convective effects. Within this blog post, we will neglect convection and concern ourselves only with the heating of solid materials.
Solid materials can be either partially transparent or completely opaque to light at the laser wavelength. Depending upon the degree of transparency, different approaches for modeling the laser heat source are appropriate. Additionally, we must concern ourselves with the relative scale as compared to the wavelength of light. If the laser is very tightly focused, then a different approach is needed compared to a relatively wide beam. If the material interacting with the beam has geometric features that are comparable to the wavelength, we must additionally consider exactly how the beam will interact with these small structures.
Before starting to model any laser-material interactions, you should first determine the optical properties of the material that you are modeling, both at the laser wavelength and in the infrared regime. You should also know the relative sizes of the objects you want to heat, as well as the laser wavelength and beam characteristics. This information will be useful in guiding you toward the appropriate approach for your modeling needs.
In cases where the material is opaque, or very nearly so, at the laser wavelength, it is appropriate to treat the laser as a surface heat source. This is most easily done with the Deposited Beam Power feature (shown below), which is available with the Heat Transfer Module as of COMSOL Multiphysics version 5.1. It is, however, also quite easy to manually set up such a surface heat load using only the COMSOL Multiphysics core package, as shown in the example here.
A surface heat source assumes that the energy in the beam is absorbed over a negligibly small distance into the material relative to the size of the object that is heated. The finite element mesh only needs to be fine enough to resolve the temperature fields as well as the laser spot size. The laser itself is not explicitly modeled, and it is assumed that the fraction of laser light that is reflected off the material is never reflected back. When using a surface heat load, you must manually account for the absorptivity of the material at the laser wavelength and scale the deposited beam power appropriately.
The Deposited Beam Power feature in the Heat Transfer Module is used to model two crossed laser beams. The resultant surface heat source is shown.
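When setting up such a surface heat load manually, scaling by absorptivity simply means multiplying the beam’s intensity profile by the absorbed fraction. Below is a sketch for a single, normally incident Gaussian beam (function and variable names are illustrative, not COMSOL syntax):

```python
import math

def surface_heat_flux(x, y, power, spot_radius, absorptivity):
    """Gaussian surface heat flux [W/m^2] for a normally incident beam.

    `power` is the total beam power [W], `spot_radius` the 1/e^2 beam
    radius [m], and `absorptivity` the fraction of the incident power
    absorbed at the laser wavelength.
    """
    r2 = x * x + y * y
    peak = 2.0 * absorptivity * power / (math.pi * spot_radius ** 2)
    return peak * math.exp(-2.0 * r2 / spot_radius ** 2)

# Numerically verify the flux integrates to absorptivity * power:
P, w, A = 100.0, 1e-3, 0.4
n, L = 400, 5e-3
dxy = (2 * L / n) ** 2
total = sum(
    surface_heat_flux(-L + (i + 0.5) * 2 * L / n,
                      -L + (j + 0.5) * 2 * L / n, P, w, A) * dxy
    for i in range(n) for j in range(n))
print(total)  # close to 40.0 W
```

The quick midpoint-rule integration confirms that the deposited power equals the absorptivity times the beam power, which is the scaling the text describes.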
In cases where the material is partially transparent, the laser power will be deposited within the domain, rather than at the surface, and one of several different approaches may be appropriate, depending on the relative geometric sizes and the wavelength.
If the heated objects are much larger than the wavelength, but the laser light itself is converging and diverging through a series of optical elements and is possibly reflected by mirrors, then the functionality in the Ray Optics Module is the best option. In this approach, light is treated as a ray that is traced through homogeneous, inhomogeneous, and lossy materials.
As the light passes through lossy materials (e.g., optical glasses) and strikes surfaces, some power deposition will heat up the material. The absorption within domains is modeled via a complex-valued refractive index. At surfaces, you can use a reflection or an absorption coefficient. Any of these properties can be temperature dependent. For those interested in using this approach, this tutorial model from our Application Gallery provides a great starting point.
A laser beam focused through two lenses. The lenses heat up due to the high-intensity laser light, shifting the focal point.
If the heated objects and the spot size of the laser are much larger than the wavelength, then it is appropriate to use the Beer-Lambert law to model the absorption of the light within the material. This approach assumes that the laser light beam is perfectly parallel and unidirectional.
When using the Beer-Lambert law approach, the absorption coefficient of the material and the reflection at the material surface must be known. Both of these material properties can be functions of temperature. The appropriate way to set up such a model is described in our earlier blog entry “Modeling Laser-Material Interactions with the Beer-Lambert Law”.
You can use the Beer-Lambert law approach if you know the incident laser intensity and if there are no reflections of the light within the material or at the boundaries.
Laser heating of a semitransparent solid modeled with the Beer-Lambert law.
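The Beer-Lambert law itself is simple enough to sketch directly. In this illustration, constant properties are assumed, although in a real model both the absorption coefficient and the surface reflectance could be temperature dependent (all names are illustrative):

```python
import math

def beer_lambert_source(z, intensity_in, alpha, reflectance):
    """Volumetric heat source Q(z) [W/m^3] from the Beer-Lambert law.

    Below the surface, the intensity decays as
    I(z) = (1 - R) * I0 * exp(-alpha * z),
    and the deposited power density is Q(z) = -dI/dz = alpha * I(z).
    """
    intensity = (1.0 - reflectance) * intensity_in * math.exp(-alpha * z)
    return alpha * intensity

# Power absorbed in a slab of thickness L equals the drop in intensity:
I0, alpha, R, L = 1e6, 500.0, 0.3, 0.01
n = 10000
dz = L / n
absorbed = sum(beer_lambert_source((k + 0.5) * dz, I0, alpha, R) * dz
               for k in range(n))
expected = (1.0 - R) * I0 * (1.0 - math.exp(-alpha * L))
print(absorbed, expected)  # the two agree closely
```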
If the heated domain is large, but the laser beam is tightly focused within it, neither the ray optics nor the Beer-Lambert law modeling approach can accurately solve for the fields and losses near the focus. These techniques do not directly solve Maxwell’s equations, but instead treat light as rays. The beam envelope method, available within the Wave Optics Module, is the most appropriate choice in this case.
The beam envelope method solves the full Maxwell’s equations under the assumption that the field envelope is slowly varying. The approach is appropriate whenever the wave vector, that is, the direction in which the light travels, is approximately known throughout the modeling domain. This is the case when modeling a focused laser beam as well as waveguide structures like a Mach-Zehnder modulator or a ring resonator. Since the beam direction is known, the finite element mesh can be very coarse in the propagation direction, thereby reducing computational costs.
A laser beam focused in a cylindrical material domain. The intensity at the incident side and within the material are plotted, along with the mesh.
The beam envelope method can be combined with the Heat Transfer in Solids interface via the Electromagnetic Heat Source multiphysics couplings. These couplings are automatically set up when you add the Laser Heating interface under Add Physics.
The Laser Heating interface adds the Beam Envelopes and the Heat Transfer in Solids interfaces and the multiphysics couplings between them.
Finally, if the heated structure has dimensions comparable to the wavelength, it is necessary to solve the full Maxwell’s equations without assuming any propagation direction of the laser light within the modeling space. Here, we need to use the Electromagnetic Waves, Frequency Domain interface, which is available in both the Wave Optics Module and the RF Module. Additionally, the RF Module offers a Microwave Heating interface (similar to the Laser Heating interface described above) that couples the Electromagnetic Waves, Frequency Domain interface to the Heat Transfer in Solids interface. Despite the nomenclature, the RF Module and the Microwave Heating interface are appropriate over a wide frequency band.
The full-wave approach requires a finite element mesh that is fine enough to resolve the wavelength of the laser light. Since the beam may scatter in all directions, the mesh must be reasonably uniform in size. A good example of using the Electromagnetic Waves, Frequency Domain interface is modeling the losses in a gold nanosphere illuminated by a plane wave, as illustrated below.
Laser light heating a gold nanosphere. The losses in the sphere and the surrounding electric field magnitude are plotted, along with the mesh.
You can use any of the previous five approaches to model the power deposition from a laser source in a solid material. Modeling the temperature rise and heat flux within and around the material additionally requires the Heat Transfer in Solids interface. Available in the core COMSOL Multiphysics package, this interface is suitable for modeling heat transfer in solids and features fixed temperature, insulating, and heat flux boundary conditions. The interface also includes various boundary conditions for modeling convective heat transfer to the surrounding atmosphere or fluid, as well as modeling radiative cooling to ambient at a known temperature.
In some cases, you may expect that there is also a fluid that provides significant heating or cooling to the problem and cannot be approximated with a boundary condition. For this, you will want to explicitly model the fluid flow using the Heat Transfer Module or the CFD Module, which can solve for both the temperature and flow fields. Both modules can solve for laminar and turbulent fluid flow. The CFD Module, however, has certain additional turbulent flow modeling capabilities, which are described in detail in this previous blog post.
For instances where you are expecting significant radiation between the heated object and any surrounding objects at varying temperatures, the Heat Transfer Module has the additional ability to compute gray body radiative view factors and radiative heat transfer. This is demonstrated in our Rapid Thermal Annealing tutorial model. When you expect the temperature variations to be significant, you may also need to consider the wavelength-dependent surface emissivity.
If the materials under consideration are transparent to laser light, it is likely that they are also partially transparent to thermal (infrared-band) radiation. This infrared light will be neither coherent nor collimated, so we cannot use any of the above approaches to describe the reradiation within semitransparent media. Instead, we can use the radiation in participating media approach. This technique is suitable for modeling heat transfer within a material, where there is significant heat flux inside the material due to radiation. An example of this approach from our Application Gallery can be found here.
In this blog post, we have looked at the various modeling techniques available in the COMSOL Multiphysics environment for modeling the laser heating of a solid material. Surface heating and volumetric heating approaches are presented, along with a brief overview of the heat transfer modeling capabilities. Thus far, we have only considered the heating of a solid material that does not change phase. The heating of liquids and gases — and the modeling of phase change — will be covered in a future blog post. Stay tuned!
COMSOL Multiphysics uses the finite element method to solve for the electromagnetic fields within the modeling domains. Under the assumption that the fields vary sinusoidally in time at a known angular frequency \omega = 2 \pi f and that all material properties are linear with respect to field strength, the governing Maxwell’s equations in three dimensions reduce to:

\nabla \times \mu_r^{-1} \left( \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0
where the material properties are \mu_r, the relative permeability; \epsilon_r, the relative permittivity; and \sigma , the electrical conductivity.
With the speed of light in vacuum, c_0, the above equation is solved for the electric field, \mathbf{E}=\mathbf{E}(x,y,z), throughout the modeling domain, where \mathbf{E} is a vector with components \mathbf{E}=<E_x, E_y, E_z>. All other quantities (such as magnetic fields, currents, and power flow) can be derived from the electric field. It is also possible to reformulate the above equation as an eigenvalue problem, where a model is solved for the resonant frequencies of the system, rather than the response of the system at a particular frequency.
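As a small illustration of deriving other quantities from the electric field, consider a plane wave E_x(z) = E_0 e^{-jkz}. Faraday’s law gives \mathbf{H} = \frac{j}{\omega \mu_0} \nabla \times \mathbf{E}, and the ratio of electric to magnetic field recovers the impedance of free space, \eta_0 = \sqrt{\mu_0/\epsilon_0} \approx 376.7\ \Omega. A quick numerical check (a sketch, not COMSOL output):

```python
import math, cmath

# Vacuum constants (SI)
eps0 = 8.8541878128e-12
mu0 = 4.0e-7 * math.pi
c0 = 1.0 / math.sqrt(eps0 * mu0)

f = 1e9                      # 1 GHz plane wave
omega = 2.0 * math.pi * f
k = omega / c0

def Ex(z, E0=1.0):
    """x-component of a plane wave traveling in +z (e^{j*omega*t} convention)."""
    return E0 * cmath.exp(-1j * k * z)

# H_y = (j / (omega * mu0)) * dEx/dz, via a central difference:
z, dz = 0.123, 1e-6
dEx_dz = (Ex(z + dz) - Ex(z - dz)) / (2.0 * dz)
Hy = 1j / (omega * mu0) * dEx_dz

eta = Ex(z) / Hy             # wave impedance E/H
print(abs(eta))              # ~376.73 ohm, the impedance of free space
```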
The above equation is solved via the finite element method. For a conceptual introduction to this method, please see our blog series on the weak form, and for a more in-depth reference, which will explain the nuances related to electromagnetic wave problems, please see The Finite Element Method in Electromagnetics by Jian-Ming Jin. From the point of view of this blog post, however, we can break down the finite element method into these four steps:
Let’s now look at each one of these steps in more detail and describe the options available at each step.
The governing equation shown above is the frequency domain form of Maxwell’s equations for wave-type problems in its most general form. However, this equation can be reformulated for several special cases.
Let us first consider the case of a modeling domain in which there is a known background electric field and we wish to place some object into this background field. The background field can be a linearly polarized plane wave, a Gaussian beam, or any general user-defined beam that satisfies Maxwell’s equations in free space. Placing an object into this field will perturb the field and lead to scattering of the background field. In such a situation, you can use the Scattered Field formulation, which solves the above equation, but makes the following substitution for the electric field:

\mathbf{E} = \mathbf{E}_{background} + \mathbf{E}_{relative}
where the background electric field is known and the relative field is the field that, once added to the background field, gives the total field that satisfies the governing Maxwell’s equations. Rather than solving for the total field, it is the relative field that is being solved. Note that the relative field is not the scattered field.
For an example of the usage of this Scattered Field formulation, which considers the radar scattering off of a perfectly electrically conductive sphere in a background plane wave and compares it to the analytic solution, please see our Computing the Radar Cross Section of a Perfectly Conducting Sphere tutorial model.
Next, let’s consider modeling in a 2D plane, where we solve for \mathbf{E}=\mathbf{E}(x,y) and can additionally simplify the modeling by considering an electric field that is polarized either In-Plane or Out-of-Plane. The In-Plane case will assume that E_z=0, while the Out-of-Plane case assumes that E_x=E_y=0. These simplifications reduce the size of the problem being solved, compared to solving for all three components of the electric field vector.
For modeling in the 2D axisymmetric plane, we solve for \mathbf{E}=\mathbf{E}(r,z), where the vector \mathbf{E} has the components < E_r, E_\phi, E_z> and we can again simplify our modeling by considering the In-Plane and Out-of-Plane cases, which assume E_\phi=0 and E_r=E_z=0, respectively.
When using either the 2D or the 2D axisymmetric In-Plane formulations, it is also possible to specify an Out-of-Plane Wave Number. This is appropriate to use when there is a known out-of-plane propagation constant or a known number of azimuthal modes. For 2D problems, the electric field can be rewritten as:

\mathbf{E}(x,y,z) = \mathbf{E}(x,y) e^{-j k_z z}
and for 2D axisymmetric problems, the electric field can be rewritten as:

\mathbf{E}(r,\phi,z) = \mathbf{E}(r,z) e^{-j m \phi}
where k_z or m, the out-of-plane wave number, must be specified.
This modeling approach can greatly simplify the computational complexity for some types of models. For example, a structurally axisymmetric horn antenna will have a solution that varies in 3D but is composed of a sum of known azimuthal modes. It is possible to recover the 3D solution from a set of 2D axisymmetric analyses by solving for these out-of-plane modes at a much lower computational cost, as demonstrated in our Corrugated Circular Horn Antenna tutorial model.
Whenever solving a wave electromagnetics problem, you must keep the mesh resolution in mind. Any wave-type problem must have a mesh that is fine enough to resolve the wavelengths in all media being modeled. This idea is fundamentally similar to the concept of the Nyquist frequency in signal processing: The sampling size (the finite element mesh size) must be smaller than one-half of the wavelength being resolved.
By default, COMSOL Multiphysics uses second-order elements to discretize the governing equations. A minimum of two elements per wavelength is necessary to solve the problem, but such a coarse mesh would give quite poor accuracy. At least five second-order elements per wavelength are typically used to resolve a wave propagating through a dielectric medium. First-order and third-order discretizations are also available, but these are generally of more academic interest, since second-order elements tend to be the best compromise between accuracy and memory requirements.
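The five-elements-per-wavelength rule translates directly into a maximum element size for each material, keeping in mind that the wavelength is shorter inside a dielectric (a small sketch; the helper name is illustrative):

```python
def max_element_size(frequency, refractive_index=1.0,
                     elements_per_wavelength=5, c0=299792458.0):
    """Largest mesh element size [m] that still resolves the wave, using
    the rule of thumb of five second-order elements per wavelength in
    the medium, where wavelength = c0 / (n * f)."""
    wavelength = c0 / (refractive_index * frequency)
    return wavelength / elements_per_wavelength

# 10 GHz in vacuum: wavelength ~30 mm, so elements of about 6 mm
print(max_element_size(10e9))
# The same frequency in a dielectric with n = 2 needs half that size:
print(max_element_size(10e9, refractive_index=2.0))
```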
The meshing of domains to fulfill the minimum criterion of five elements per wavelength in each medium is now automated within the software, as shown in this video, which shows not only the meshing of different dielectric domains, but also the automated meshing of Perfectly Matched Layer domains. The new automated meshing capability will also set up an appropriate periodic mesh for problems with periodic boundary conditions, as demonstrated in this Frequency Selective Surface, Periodic Complementary Split Ring Resonator tutorial model.
With respect to the type of elements used, tetrahedral (in 3D) or triangular (in 2D) elements are preferred over hexahedral and prismatic (in 3D) or rectangular (in 2D) elements due to their lower dispersion error. This is a consequence of the fact that the maximum distance within a tetrahedral element is approximately the same in all directions, while for a perfect cubic element, the longest line that fits within the element (the body diagonal) is \sqrt{3} times longer than the shortest (the edge). This leads to greater error when resolving the phase of a wave traveling diagonally through a hexahedral element.
It is only necessary to use hexahedral, prismatic, or rectangular elements when you are meshing a perfectly matched layer or have some foreknowledge that the solution is strongly anisotropic in one or two directions. When resolving a wave that is decaying due to absorption in a material, such as a wave impinging upon a lossy medium, it is additionally necessary to manually resolve the skin depth with the finite element mesh, typically using a boundary layer mesh, as described here.
Manual meshing is still recommended, and usually needed, for cases when the material properties will vary during the simulation. For example, during an electromagnetic heating simulation, the material properties can be made functions of temperature. This possible variation in material properties should be considered before the solution, during the meshing step, as it is often more computationally expensive to remesh during the solution than to start with a mesh that is fine enough to resolve the eventual variations in the fields. This can require a manual and iterative approach to meshing and solving.
When solving over a wide frequency band, you can consider one of three options:
It is difficult to determine ahead of time which of the above three options will be the most efficient for a particular model.
Regardless of the initial mesh that you use, you will also always want to perform a mesh refinement study. That is, re-run the simulation with progressively finer meshes and observe how the solution changes. As you make the mesh finer, the solution will become more accurate, but at a greater computational cost. It is also possible to use adaptive mesh refinement if your mesh is composed entirely of tetrahedral or triangular elements.
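A mesh refinement study can be quantified by estimating the observed convergence order from three successively halved meshes. The sketch below uses synthetic data that converges at second order (Richardson’s idea; the names are illustrative):

```python
import math

def observed_order(u_h, u_h2, u_h4):
    """Estimate the observed convergence order from solutions on three
    successively halved meshes (h, h/2, h/4):
    p = log2( (u_h - u_h2) / (u_h2 - u_h4) )."""
    return math.log2(abs(u_h - u_h2) / abs(u_h2 - u_h4))

# Synthetic data: u(h) = u_exact + C * h^2 (second-order convergence)
u_exact, C = 1.0, 0.5
u1 = u_exact + C * 0.1 ** 2
u2 = u_exact + C * 0.05 ** 2
u3 = u_exact + C * 0.025 ** 2
print(observed_order(u1, u2, u3))  # close to 2.0
```

If the estimated order falls well below the expected value, the mesh is likely still too coarse somewhere, or a geometric singularity is limiting convergence.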
Once you have properly defined the problem and meshed your domains, COMSOL Multiphysics will take this information and form a system of linear equations, which is solved using either a direct or an iterative solver. From the point of view of the results, the two are equivalent; they differ in their memory requirements and solution time. Since 3D electromagnetics models often require a lot of RAM to solve, there are several options that can make your modeling more efficient.
The direct solvers require more memory than the iterative solvers. They are used for eigenvalue problems, for all 2D models, and for problems with periodic boundary conditions; the last of these require a direct solver, and the software will select one automatically in such cases.
Eigenvalue problems will solve faster when using a direct solver as compared to using an iterative solver, but will use more memory. For this reason, it can often be attractive to reformulate an eigenvalue problem as a frequency domain problem excited over a range of frequencies near the approximate resonances. By solving in the frequency domain, it is possible to use the more memory-efficient iterative solvers. However, for systems with high Q-factors it becomes necessary to solve at many points in frequency space. For an example of reformulating an eigenvalue problem as a frequency domain problem, please see these examples of computing the Q-factor of an RF coil and the Q-factor of a Fabry-Perot cavity.
The iterative solvers used for frequency-domain simulations come with three different options defined by the Analysis Methodology settings of Robust (the default), Intermediate, or Fast, which can be changed within the physics interface settings. These different settings alter the type of iterative solver being used and the convergence tolerance. Most models will solve with any of these settings, and it can be worth comparing them to observe the differences in solution time and accuracy and then choosing the option most appropriate for your needs. Models containing materials with very large contrasts in dielectric constant (~100:1) will need the Robust setting and may even require the direct solver if the iterative solver converges very slowly.
Once you’ve solved your model, you will want to extract data from the computed electromagnetic fields. COMSOL Multiphysics will automatically produce a slice plot of the magnitude of the electric field, but there are many other postprocessing visualizations you can set up. Please see the Postprocessing & Visualization Handbook and our blog series on Postprocessing for guidance and to learn how to create images such as those shown below.
Attractive visualizations can be created by plotting combinations of the solution fields, meshes, and geometry.
Of course, good-looking images are not enough — we also want to extract numerical information from our models. COMSOL Multiphysics will automatically make available the S-parameters whenever using Ports or Lumped Ports, as well as the Lumped Port current, voltage, power, and impedance. For a model with multiple Ports or Lumped Ports, it is also possible to automatically set up a Port Sweep, as demonstrated in this tutorial model of a Ferrite Circulator, and write out a Touchstone file of the results. For eigenvalue problems, the resonant frequencies and Q-factors are automatically computed.
For models of antennas or for scattered field models, it is additionally possible to compute and plot the far-field radiated pattern, the gain, and the axial ratio.
Far-field radiation pattern of a Vivaldi antenna.
You can also integrate any derived quantity over domains, boundaries, and edges to compute, for example, the heat dissipated inside of lossy materials or the total electromagnetic energy within a cavity. Of course, there is a great deal more that you can do, and here we have just looked at the most commonly used postprocessing features.
We’ve looked at the various different formulations of the governing frequency domain form of Maxwell’s equations as applied to solving wave electromagnetics problems and when they should be used. The meshing requirements and capabilities have been discussed as well as the options for solving your models. You should also have a broad overview of the postprocessing functionality and where to go for more information about visualizing your data in COMSOL Multiphysics.
This information, along with the previous blog posts on defining the material properties, setting up metallic and radiating boundaries, and connecting the model to other devices should now give you a reasonably complete picture of what can be done with frequency domain electromagnetic wave modeling in the RF and Wave Optics modules. The software documentation, of course, goes into greater depth about all of the features and capabilities within the software.
If you are interested in using the RF or Wave Optics modules for your modeling needs, please contact us.
Each year, the COMSOL Conference gathers together researchers, engineers, and scientists from around the world for an event that is designed to highlight the power of simulation. As part of the event, a series of minicourses are available, describing the theory, functionality, and concepts behind COMSOL Multiphysics software.
A minicourse from the COMSOL Conference 2014 Cambridge.
As in previous years, the COMSOL Conference 2015 in Boston and Grenoble — the first two stops on the conference tour — will include three days filled with minicourses taught by COMSOL staff, as well as a course taught by one of our sponsors, Simpleware Ltd. Each minicourse will last for an hour and a half and be taught in a lecture format.
This year’s conference offers a new way for attendees to put what they have learned into practice — hands-on sessions. Led by COMSOL staff, the hands-on track consists of three 30-minute sessions. They are designed to emphasize the use of the software, supplementing the material taught in the minicourses and running in parallel with them. This guided, first-hand experience allows you to work directly with the software and explore how its features and capabilities can be applied to your own modeling processes.
In Boston and Grenoble, the COMSOL Conference 2015 will kick off with a one-hour “Introduction to COMSOL Multiphysics” minicourse that will focus on the use and workflow of the software — a great place to begin your conference experience. This course is meant to be an introduction to the software for new users and a refresher for those who have not used the software in a while.
From there, you can choose from a number of available educational options. We have divided up the courses into six different branches — Core Fundamentals, Electrical, Fluid, Mechanical, Chemical, and Interfacing. Let’s explore these areas of focus and their individual sessions in greater detail.
We will once again offer a series of courses that are designed to address the fundamental functionality of COMSOL Multiphysics. The Core Fundamentals series will show you how to utilize the software regardless of the physics involved. These are some of the most highly attended minicourses at the conference.
You can begin this series by attending the “CAD & Geometry” minicourse. This course will review what you need to know when using external CAD data or creating your CAD geometry within COMSOL Multiphysics using the new Design Module. Next up is the “Meshing” course, which introduces techniques that offer a more efficient approach to creating a finite element mesh — a particularly important skill if you are creating complex models. The “Solvers” minicourse, meanwhile, highlights the fundamental theory and concepts behind solving linear and nonlinear stationary problems.
Moving along, the “Equation-Based Modeling” minicourse will show you how to set up your own governing equations from scratch, while also introducing important concepts behind the underlying finite element architecture. From there, the “Optimization” course will teach you how to improve your designs within the COMSOL® software environment.
Through the “Postprocessing” minicourse, you will learn how to efficiently extract results from your models and how to present these results in the best light. The series wraps up with the “Application Builder” course, guiding you through the process of quickly deploying your COMSOL models as apps to colleagues and customers.
Under the Electrical branch, minicourses are divided up by modules, beginning with the “Electromagnetics I: Low-Frequency Simulations” session. This minicourse is designed for those interested in simulating Maxwell’s equations in the low-frequency regime when there is no power transfer through radiation. The initial course covers the core package as well as the AC/DC Module. As a follow-up, the “Electromagnetics II: High-Frequency Simulations” session will address the modeling of propagating electromagnetic fields using the RF Module and the Wave Optics Module.
The series continues with the “Ray Optics” minicourse, where we will discuss the capabilities of the Ray Optics Module for modeling light via ray tracing. On the last conference day (Friday), attendees will have the opportunity to sit in on the “MEMS” minicourse. The focus of this course is how to model microelectromechanical systems. Lastly, the “Particle Tracing” course will provide an overview of how to model charged particles in electric and magnetic fields as well as particles in fluid flow.
Christopher Boucher leads a minicourse at the COMSOL Conference 2014 Cambridge.
Within the Fluid branch, minicourses are organized by application areas. The “Fluid-Solid Interactions” minicourse will start off this series, explaining how to model the interactions between moving fluids and solids. Meanwhile, the “Porous Media Flow” session will emphasize the modeling of fluid flow through a porous solid material.
On the second conference day (Thursday), we will once again be offering our very popular “CFD I: Laminar & Microfluidic Flow” course, followed by our “CFD II: Turbulent & High Mach Number Flow” course. This combination of sessions provides attendees with a broad understanding of the range of fluid flow modeling capabilities in COMSOL Multiphysics.
The Mechanical branch interweaves a series of sessions on structural and thermal modeling. In the “Structural Mechanics I: Statics & Dynamics” minicourse, we will cover the analysis of structures under static and dynamic loads. Continuing the series, the “Structural Mechanics II: Nonlinearity & Fatigue” session addresses various material and failure models, with an additional focus on contact modeling.
For those interested in learning about thermal modeling, the “Heat Transfer I: Conduction & Convection” and “Heat Transfer II: Radiation” minicourses are a great choice, offering a comprehensive introduction to this modeling approach. To wrap up, the “Acoustics & Vibrations” session will discuss the vibrations of structures and the modeling of sound, while almost exclusively addressing the innovative features and capabilities of the Acoustics Module.
Attendees sit in on a minicourse at the COMSOL Conference 2014 Boston.
The Chemical branch features two minicourses — the “Chemical Reaction Engineering I: Chemical Reaction Engineering” course and the “Chemical Reaction Engineering II: Electrochemistry” course. The initial session will focus exclusively on the Chemical Reaction Engineering Module, while the second session in the series will focus on the closely interrelated capabilities of the Batteries & Fuel Cells, Electrodeposition, Corrosion, and Electrochemistry modules.
Simpleware Ltd., a sponsor for this year’s event, will lead a minicourse designed to highlight the use of Simpleware software for meshing from image and scan data. For those looking to learn more about scripting via the MATLAB® environment, the Interfacing series will also feature a minicourse on using LiveLink™ for MATLAB®.
Conference attendees come from a number of different physics backgrounds and range from brand new to quite experienced users. We design our minicourses to cater to our wide audience, focusing on a variety of application areas and highlighting both fundamental and more advanced simulation techniques.
For those users who are new to COMSOL Multiphysics, we encourage you to attend the introduction at the beginning of the conference. From there, you can choose to attend the hands-on sessions or the minicourses (listed in the table below). We recommend choosing sessions within the Core Fundamentals branch, as these are the courses that will be most beneficial to you. Our more experienced users also find these fundamental courses helpful, as they have told us that they pick up many useful tidbits along the way.
When choosing from the list of minicourses, attend those that are of greatest interest to you. Experienced users are encouraged to attend as many of these sessions as possible. Designed to supplement the lecture material within the minicourses, the hands-on sessions are a valuable opportunity to reinforce the concepts and techniques that you have learned during each course.
| | Wednesday, Session 1 | Wednesday, Session 2 | Wednesday, Session 3 | Thursday, Session 1 | Thursday, Session 2 | Friday, Session 1 | Friday, Session 2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
Core Fundamentals | CAD & Geometry | Meshing | Solvers | Equation-Based Modeling | Optimization | Postprocessing | Application Builder |
Electrical | Electromagnetics I: Low-Frequency Simulations | Electromagnetics II: High-Frequency Simulations | Ray Optics | MEMS | Particle Tracing | ||
Fluid | Fluid-Solid Interactions | Porous Media Flow | CFD I: Laminar & Microfluidic Flow | CFD II: Turbulent & High Mach Number Flow | |||
Mechanical | Structural Mechanics I: Statics & Dynamics | Heat Transfer I: Conduction & Convection | Acoustics & Vibrations | Heat Transfer II: Radiation | Structural Mechanics II: Nonlinearity & Fatigue | ||
Chemical | Chemical Reaction Engineering I: Chemical Reaction Engineering | Chemical Reaction Engineering II: Electrochemical Engineering | |||||
Interfacing | Simpleware | LiveLink™ for MATLAB® | |||||
Hands-on Sessions | Heat Transfer | CFD | Electromagnetics | Structural Mechanics | Acoustics | Core Functionality | Chemical Engineering |
One last piece of advice: Take the opportunity to meet with the instructors. Many of the staff leading these courses are the developers behind the functionality that you are using every day. They are always interested in hearing your thoughts and feedback.
We look forward to seeing you at the COMSOL Conference 2015!
MATLAB is a registered trademark of The MathWorks, Inc.
Here, we will speak about the frequency-domain form of Maxwell’s equations in the Electromagnetic Waves, Frequency Domain interface available in the RF Module and the Wave Optics Module. The information presented here also applies to the Electromagnetic Waves, Beam Envelopes formulation in the Wave Optics Module.
Under the assumption that the material response is linear with field strength, we formulate Maxwell’s equations in the frequency domain, and the governing equation can be written as:

\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0
This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f (c_0 is the speed of light in vacuum). The other inputs are the material properties \mu_r, the relative permeability; \epsilon_r, the relative permittivity; and \sigma , the electrical conductivity. All of these material inputs can be positive or negative, real or complex-valued numbers, and they can be scalar or tensor quantities. These material properties can vary as a function of frequency as well, though it is not always necessary to consider this variation if we are only looking at a relatively narrow frequency range.
Let us now explore each of these material properties in detail.
The electrical conductivity quantifies how well a material conducts current — it is the inverse of the electrical resistivity. The material conductivity is measured under steady-state (DC) conditions, and we can see from the above equation that as the frequency increases, the effective resistivity of the material increases. We typically assume that the conductivity is constant with frequency, and later on we will examine different models for handling materials with frequency-dependent conductivity.
Any material with non-zero conductivity will conduct current in an applied electric field and dissipate energy as a resistive loss, also called Joule heating. This will often lead to a measurable rise in temperature, which will alter the conductivity. You can enter any function or tabular data for variation of conductivity with temperature, and there is also a built-in model for linearized resistivity.
Linearized Resistivity is a commonly used model for the variation of conductivity with temperature, given by:

\sigma = \frac{1}{\rho_0 \left( 1 + \alpha \left( T - T_{ref} \right) \right)}
where \rho_0 is the reference resistivity, T_{ref} is the reference temperature, and \alpha is the resistivity temperature coefficient. The spatially-varying temperature field, T, can either be specified or computed.
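As a quick numerical illustration outside the software, the linearized resistivity model can be evaluated directly. The default values below are copper-like textbook numbers (\rho_0 \approx 1.72 \times 10^{-8}\ \Omega\cdot m at 293.15 K, \alpha \approx 0.0039 1/K) chosen for the sketch; they are not defaults taken from COMSOL Multiphysics.

```python
def linearized_conductivity(T, rho0=1.72e-8, T_ref=293.15, alpha=3.9e-3):
    """Conductivity from the linearized resistivity model:
    sigma = 1 / (rho0 * (1 + alpha*(T - T_ref))).

    rho0  -- reference resistivity [ohm*m]
    T_ref -- reference temperature [K]
    alpha -- resistivity temperature coefficient [1/K]
    """
    return 1.0 / (rho0 * (1.0 + alpha * (T - T_ref)))

# At the reference temperature, the conductivity is simply 1/rho0.
sigma_ref = linearized_conductivity(293.15)
# 100 K hotter, the resistivity rises by ~39%, so the conductivity drops.
sigma_hot = linearized_conductivity(393.15)
```

In a multiphysics model, the temperature field computed by a heat transfer interface would be fed into this expression at every point in the conductor.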
Conductivity is entered as a real-valued number, but it can be anisotropic, meaning that the material’s conductivity varies in different coordinate directions. This is an appropriate approach if you have, for example, a laminated material in which you do not want to explicitly model the individual layers. You can enter a homogenized conductivity for the composite material, which would be either experimentally determined or computed from a separate analysis.
Within the RF Module, there are two other options for computing a homogenized conductivity: Archie’s Law for computing effective conductivity of non-conductive porous media filled with conductive liquid and a Porous Media model for mixtures of materials.
Archie’s Law is a model typically used for soils saturated with seawater or crude oil — fluids whose conductivity is high relative to that of the soil itself.
Porous Media refers to a model that has three different options for computing an effective conductivity for a mixture of up to five materials. First, the Volume Average, Conductivity formulation is:

\sigma_{eff} = \sum_k \theta_k \sigma_k

where \theta_k is the volume fraction of each material. This model is appropriate if the material conductivities are similar. If the conductivities are quite different, the Volume Average, Resistivity formulation is more appropriate:

\frac{1}{\sigma_{eff}} = \sum_k \frac{\theta_k}{\sigma_k}

Lastly, the Power Law formulation will give a conductivity lying between the other two formulations:

\sigma_{eff} = \prod_k \sigma_k^{\theta_k}
All of these models are appropriate only if the length scale over which the material properties change is much smaller than the wavelength.
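The three mixing formulations can be cross-checked outside the software. The sketch below implements the formulas as described in the text (a volume average of the conductivities, a volume average of the resistivities, and a power law); it is an illustration, not COMSOL's implementation.

```python
from math import prod

def mix_conductivity(sigmas, thetas, formulation="volume_average_conductivity"):
    """Effective conductivity of a mixture via three common formulations.

    sigmas -- constituent conductivities [S/m]
    thetas -- volume fractions (should sum to 1)
    """
    if formulation == "volume_average_conductivity":
        # sigma_eff = sum(theta_k * sigma_k)
        return sum(t * s for t, s in zip(thetas, sigmas))
    if formulation == "volume_average_resistivity":
        # 1/sigma_eff = sum(theta_k / sigma_k)
        return 1.0 / sum(t / s for t, s in zip(thetas, sigmas))
    if formulation == "power_law":
        # sigma_eff = prod(sigma_k ** theta_k)
        return prod(s**t for t, s in zip(thetas, sigmas))
    raise ValueError(formulation)

# 50/50 mixture of two materials with a 100:1 conductivity contrast:
s_avg = mix_conductivity([1.0, 100.0], [0.5, 0.5])
s_res = mix_conductivity([1.0, 100.0], [0.5, 0.5], "volume_average_resistivity")
s_pow = mix_conductivity([1.0, 100.0], [0.5, 0.5], "power_law")
```

For this contrast, the three formulations give roughly 50.5, 2.0, and 10.0 S/m, which shows how strongly the choice of mixing rule matters when the constituents differ.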
The relative permittivity quantifies how well a material is polarized in response to an applied electric field. It is typical to call any material with \epsilon_r>1 a dielectric material, though even vacuum (\epsilon_r=1) can be called a dielectric. It is also common to use the term dielectric constant to refer to a material’s relative permittivity.
A material’s relative permittivity is often given as a complex-valued number, where the negative imaginary component represents the loss in the material as the electric field changes direction over time. Any material experiencing a time-varying electric field will dissipate some of the electrical energy as heat. Known as dielectric loss, this results from the change in shape of the electron clouds around the atoms as the electric fields change. Dielectric loss is conceptually distinct from the resistive loss discussed earlier; however, from a mathematical point of view, they are handled identically — as a complex-valued term in the governing equation. Keep in mind that COMSOL Multiphysics follows the convention that a negative imaginary component (a positive-valued electrical conductivity) will lead to loss, while a positive imaginary component (a negative-valued electrical conductivity) will lead to gain within the material.
There are seven different material models for the relative permittivity. Let’s take a look at each of these models.
Relative Permittivity is the default option for the RF Module. A real- or complex-valued scalar or tensor value can be entered. The same Porous Media models described above for the electrical conductivity can be used for the relative permittivity.
Refractive Index is the default option for the Wave Optics Module. You separately enter the real and imaginary part of the refractive index, called n and k, and the relative permittivity is \epsilon_r=(n-jk)^2. This material model assumes zero conductivity and unit relative permeability.
Loss Tangent involves entering a real-valued relative permittivity, \epsilon_r', and a scalar loss tangent, \delta. The relative permittivity is computed via \epsilon_r=\epsilon_r'(1-j \tan \delta), and the material conductivity is zero.
Dielectric Loss is the option for entering the real and imaginary components of the relative permittivity \epsilon_r=\epsilon_r'-j \epsilon_r''. Be careful to note the sign: Entering a positive-valued real number for the imaginary component \epsilon_r'' when using this interface will lead to loss, since the multiplication by -j is done within the software. For an example of the appropriate usage of this material model, please see the Optical Scattering off of a Gold Nanosphere tutorial.
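The first four material models are simply different parameterizations of the same complex number, using the relations given above. The sketch below makes that explicit and also shows the sign convention, where a negative imaginary part means loss.

```python
def eps_from_nk(n, k):
    """Refractive Index model: eps_r = (n - j*k)**2 (zero conductivity assumed)."""
    return (n - 1j * k) ** 2

def eps_from_loss_tangent(eps_prime, tan_delta):
    """Loss Tangent model: eps_r = eps' * (1 - j*tan(delta))."""
    return eps_prime * (1.0 - 1j * tan_delta)

def eps_from_dielectric_loss(eps_prime, eps_double_prime):
    """Dielectric Loss model: eps_r = eps' - j*eps''.
    Enter eps'' as a positive real number for a lossy material."""
    return eps_prime - 1j * eps_double_prime

# A weakly absorbing optical material: n = 1.5, k = 0.01
e1 = eps_from_nk(1.5, 0.01)   # 2.2499 - 0.03j; negative imaginary part => loss
```

Whichever input form you use, the solver ultimately sees the same complex permittivity in the governing equation.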
The Drude-Lorentz Dispersion model is a material model that was developed based upon the Drude free electron model and the Lorentz oscillator model. The Drude model (when \omega_0=0) is used for metals and doped semiconductors, while the Lorentz model describes resonant phenomena such as phonon modes and interband transitions. With the sum term, the combination of these two models can accurately describe a wide array of solid materials. It predicts the frequency-dependent variation of complex relative permittivity as:

\epsilon_r(\omega) = \epsilon_{\infty} + \sum_k \frac{f_k \omega_p^2}{\omega_{0k}^2 - \omega^2 + j \Gamma_k \omega}
where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \omega_p is the plasma frequency, f_k is the oscillator strength, \omega_{0k} is the resonance frequency, and \Gamma_k is the damping coefficient. Since this model computes a complex-valued permittivity, the conductivity inside of COMSOL Multiphysics is set to zero. This approach is one way of modeling frequency-dependent conductivity.
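Assuming the standard Drude-Lorentz form with the symbols defined above, the dispersion can be evaluated numerically. The parameter values below are round-number placeholders, not fitted data for any real material.

```python
def drude_lorentz(omega, eps_inf, omega_p, oscillators):
    """eps_r(omega) = eps_inf
                      + sum_k f_k*omega_p^2 / (omega_0k^2 - omega^2 + j*Gamma_k*omega)

    oscillators -- list of (f_k, omega_0k, Gamma_k) tuples [rad/s units].
    Setting omega_0k = 0 recovers the pure Drude (free-electron) term.
    """
    return eps_inf + sum(
        f * omega_p**2 / (w0**2 - omega**2 + 1j * g * omega)
        for f, w0, g in oscillators
    )

# Pure Drude metal (single term, omega_0 = 0): well below the plasma
# frequency the real part of eps_r is strongly negative, the hallmark of
# metallic reflection. Placeholder numbers, not a fitted material.
eps = drude_lorentz(omega=1.0e15, eps_inf=1.0, omega_p=1.0e16,
                    oscillators=[(1.0, 0.0, 1.0e14)])
```

Note that the imaginary part comes out negative, which is the lossy sign under the document's e^{+j\omega t} convention.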
The Debye Dispersion model is a material model that was developed by Peter Debye and is based on polarization relaxation times. The model is primarily used for polar liquids. It predicts the frequency-dependent variation of complex relative permittivity as:

\epsilon_r(\omega) = \epsilon_{\infty} + \sum_k \frac{\Delta \epsilon_k}{1 + j \omega \tau_k}
where \epsilon_{\infty} is the high-frequency contribution to the relative permittivity, \Delta \epsilon_k is the contribution to the relative permittivity, and \tau_k is the relaxation time. Since this model computes a complex-valued permittivity, the conductivity is assumed to be zero. This is an alternate way to model frequency-dependent conductivity.
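The Debye expression can likewise be evaluated directly. The single-pole example below uses water-like values (static permittivity near 78, relaxation time near 8.3 ps); treat these numbers as illustrative assumptions rather than reference data.

```python
import math

def debye(omega, eps_inf, terms):
    """eps_r(omega) = eps_inf + sum_k delta_eps_k / (1 + j*omega*tau_k)

    terms -- list of (delta_eps_k, tau_k) tuples.
    """
    return eps_inf + sum(de / (1.0 + 1j * omega * tau) for de, tau in terms)

# Water-like single-pole model evaluated at a microwave-oven-like frequency.
f = 2.45e9                                          # [Hz]
eps = debye(2.0 * math.pi * f, eps_inf=5.2, terms=[(73.0, 8.3e-12)])
# Real part stays near the static value; the negative imaginary part is the
# dielectric loss that makes microwave heating of water so effective.
```

At zero frequency the model returns \epsilon_{\infty} + \Delta\epsilon, the static permittivity, as expected.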
The Sellmeier Dispersion model is available in the Wave Optics Module and is typically used for optical materials. It assumes zero conductivity and unit relative permeability and defines the relative permittivity in terms of the operating wavelength, \lambda, rather than frequency:

n^2(\lambda) = 1 + \sum_k \frac{B_k \lambda^2}{\lambda^2 - C_k}
where the coefficients B_k and C_k determine the relative permittivity.
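As a sketch, the Sellmeier relation can be evaluated with the widely quoted three-term coefficients for fused silica (due to Malitson); the coefficient values below are reproduced from the literature from memory, so double-check them before relying on the numbers.

```python
import math

def sellmeier_n(lam_um, B, C):
    """Refractive index from n^2(lambda) = 1 + sum_k B_k*lam^2/(lam^2 - C_k).

    lam_um -- wavelength in microns; C_k in microns^2.
    """
    lam2 = lam_um**2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

# Commonly quoted fused-silica coefficients (assumed values, verify before use):
B = [0.6961663, 0.4079426, 0.8974794]
C = [0.0684043**2, 0.1162414**2, 9.896161**2]   # [um^2]

n_1550 = sellmeier_n(1.55, B, C)   # index at the telecom wavelength, ~1.444
```

This kind of offline evaluation is a useful sanity check before entering coefficients into the material node.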
The choice between these seven models will be dictated by the way the material properties are available to you in the technical literature. Keep in mind that, mathematically speaking, they enter the governing equation identically.
The relative permeability quantifies how a material responds to a magnetic field. Any material with \mu_r>1 is typically referred to as a magnetic material. The most common magnetic material on Earth is iron, but pure iron is rarely used for RF or optical applications. It is more typical to work with materials that are ferrimagnetic. Such materials exhibit strong magnetic properties with an anisotropy that can be controlled by an applied DC magnetic field. Unlike iron, ferrimagnetic materials have a very low conductivity, so that high-frequency electromagnetic fields are able to penetrate into and interact with the bulk material. This tutorial demonstrates how to model ferrimagnetic materials.
There are two options available for specifying relative permeability: The Relative Permeability model, which is the default for the RF Module, and the Magnetic Losses model. The Relative Permeability model allows you to enter a real- or complex-valued scalar or tensor value. The same Porous Media models described above for the electrical conductivity can be used for the relative permeability. The Magnetic Losses model is analogous to the Dielectric Loss model described above in that you enter the real and imaginary components of the relative permeability as real-valued numbers. An imaginary-valued permeability will lead to a magnetic loss in the material.
In any electromagnetic modeling, one of the most important things to keep in mind is the concept of skin depth, the distance into a material over which the fields fall off to 1/e of their value at the surface. Skin depth is defined as:

\delta = \left[ \omega \left| \mathrm{Im} \left( \sqrt{ \mu_r \mu_0 \left( \epsilon_r \epsilon_0 - \frac{j \sigma}{\omega} \right) } \right) \right| \right]^{-1}
where we have seen that relative permittivity and permeability can be complex-valued.
You should always check the skin depth and compare it to the characteristic size of the domains in your model. If the skin depth is much smaller than the object, you should instead model the domain as a boundary condition as described here: “Modeling Metallic Objects in Wave Electromagnetics Problems“. If the skin depth is comparable to or larger than the object size, then the electromagnetic fields will penetrate into the object and interact significantly within the domain.
A plane wave incident upon objects of different conductivities and hence different skin depths. When the skin depth is smaller than the wavelength, a boundary layer mesh is used (right). The electric field is plotted.
If the skin depth is smaller than the object, it is advised to use boundary layer meshing to resolve the strong variations in the fields in the direction normal to the boundary, with a minimum of one element per skin depth and a minimum of three boundary layer elements. If the skin depth is larger than the effective wavelength in the medium, it is sufficient to resolve the wavelength in the medium itself with five elements per wavelength, as shown in the left figure above.
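The skin depth check and the wavelength-resolution rule from the text can be turned into quick sizing estimates before any meshing is done. The sketch below uses the good-conductor limit \delta = \sqrt{2/(\omega \mu_0 \mu_r \sigma)} and the five-elements-per-wavelength guideline; the copper conductivity is a textbook value, not taken from the software.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]
C0 = 299792458.0              # speed of light in vacuum [m/s]

def skin_depth_conductor(f, sigma, mu_r=1.0):
    """Good-conductor skin depth: delta = sqrt(2 / (omega*mu0*mu_r*sigma)).

    Valid when sigma/(omega*eps) >> 1, i.e. well below optical frequencies.
    """
    omega = 2.0 * math.pi * f
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

def max_element_size(f, n_medium=1.0, elements_per_wavelength=5):
    """Bulk mesh size from the five-elements-per-wavelength rule of thumb."""
    return C0 / (n_medium * f) / elements_per_wavelength

# Copper at 1 GHz (sigma ~ 5.8e7 S/m): delta is about 2 microns, so a
# millimeter-scale copper part is thousands of skin depths thick -- a clear
# candidate for a boundary-condition treatment rather than a meshed domain.
delta_cu = skin_depth_conductor(1.0e9, 5.8e7)
h_max = max_element_size(1.0e9)   # ~60 mm bulk element size in air at 1 GHz
```

Comparing \delta to both the object size and the intended boundary-layer element thickness before solving avoids the most common under-resolution mistakes.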
In this blog post, we have looked at the various options available for defining the material properties within your electromagnetic wave models in COMSOL Multiphysics. We have seen that the material models for defining the relative permittivity are appropriate even for metals over a certain frequency range. On the other hand, we can also define metal domains via boundary conditions, as previously highlighted on the blog. Along with earlier blog posts on modeling open boundary conditions and modeling ports, we have now covered almost all of the fundamentals of modeling electromagnetic waves. There are, however, a few more points that remain. Stay tuned!
When approaching the question of what a metal is, we can do so from the point of view of the governing Maxwell’s equations that are solved for electromagnetic wave problems. Consider the frequency-domain form of Maxwell’s equations:

\nabla \times \left( \mu_r^{-1} \nabla \times \mathbf{E} \right) - \frac{\omega^2}{c_0^2} \left( \epsilon_r - \frac{j \sigma}{\omega \epsilon_0} \right) \mathbf{E} = 0
The above equation is solved in the Electromagnetic Waves, Frequency Domain interface available in the RF Module and the Wave Optics Module. This equation solves for the electric field, \mathbf{E}, at the operating (angular) frequency \omega = 2 \pi f. The other inputs are the material properties: \mu_r is the relative permeability, \epsilon_r is the relative permittivity, and \sigma is the electrical conductivity.
For the purposes of this discussion, we will say that a metal is any material that is both lossy and has a relatively small skin depth. A lossy material is any material that has a complex-valued permittivity or permeability or a non-zero conductivity. That is, a lossy material introduces an imaginary-valued term into the governing equation. This will lead to electric currents within the material, and the skin depth is a measure of the distance into the material over which this current flows.
At any non-zero operating frequency, inductive effects will drive any current flowing in a lossy material towards the boundary. The skin depth is the distance into the material within which approximately 63% of the current flows. It is given by:

\delta = \left[ \omega \left| \mathrm{Im} \left( \sqrt{ \mu_r \mu_0 \left( \epsilon_r \epsilon_0 - \frac{j \sigma}{\omega} \right) } \right) \right| \right]^{-1}
where both \mu_r and \epsilon_r can be complex-valued.
At very high frequencies, approaching the optical regime, we are near the material plasma resonance and do in fact represent metals via a complex-valued permittivity. But when modeling metals below these frequencies, we can say that the permittivity is unity, the permeability is real-valued, and the electrical conductivity is very high. So the above equation reduces to:

\delta = \sqrt{\frac{2}{\omega \mu_0 \mu_r \sigma}}
Before you even begin your modeling in COMSOL Multiphysics, you should compute or have some rough estimate of the skin depth of all of the materials you are modeling. The skin depth, along with your knowledge of the dimensions of the part, will determine if it is possible to use the Impedance boundary condition or the Transition boundary condition.
Now that we have the skin depth, we will want to compare this to the characteristic size, L_c, of the object we are simulating. There are different ways of defining L_c. Depending on the situation, the characteristic size can be defined as the ratio of volume to surface area or as the thickness of the thinnest part of the object being simulated.
Let’s consider an object in which L_c \gg \delta. That is, the object is much larger than the skin depth. Although there are currents flowing inside of the object, the skin effect drives these currents to the surface. So, from a modeling point of view, we can treat the currents as flowing on the surface. In this situation, it is appropriate to use the Impedance boundary condition, which treats any material “behind” the boundary as being infinitely large. From the point of view of the electromagnetic wave, this is true, since L_c \gg \delta means that the wave does not penetrate through the object.
The Impedance boundary condition is appropriate if the skin depth is much smaller than the object.
With the Impedance boundary condition (IBC), we are able to avoid modeling Maxwell’s equations in the interior of any of the model’s metal domains by assuming that the currents flow entirely on the surface. Thus, we can avoid meshing the interior of these domains and save significant computational effort. Additionally, the IBC computes losses due to the finite conductivity. For an example of the appropriate usage of the IBC and a comparison with analytic results, please see the Computing Q-Factors and Resonant Frequencies of Cavity Resonators tutorial.
The IBC becomes increasingly accurate as L_c / \delta \rightarrow \infty; however, it is accurate even when L_c / \delta > 10 for smooth objects like spheres. Sharp-edged objects such as wedges will have some inaccuracy at the corners, but this is a local effect and also an inherent issue whenever a sharp corner is introduced into the model, as discussed in this previous blog post.
Now, what if we are dealing with an object that has one dimension that is much smaller than the others, perhaps a thin film of material like aluminum foil? In that case, the skin depth in one direction may actually be comparable to the thickness, so the electromagnetic fields will partially penetrate through the material. Here, the IBC is not appropriate. We will instead want to use the Transition boundary condition.
The Transition boundary condition (TBC) is appropriate for a layer of conductive material whose thickness is small relative to the characteristic size and radius of curvature of the objects being modeled. The TBC can be used even if the thickness is many times greater than the skin depth.
The TBC takes the material properties as well as the thickness of the film as inputs, computing an impedance through the thickness of the film as well as a tangential impedance. These are used to relate the current flowing on the surface of either side of the film. That is, the TBC will lead to a drop in the transmitted electric field.
From a computational point of view, the number of degrees of freedom on the boundary is doubled to compute the electric field on either surface of the TBC, as shown below. Additionally, the total losses through the thickness of the film are computed. For an example of using this boundary condition, see the Beam Splitter tutorial, which models a thin layer of silver via a complex-valued permittivity.
The Transition boundary condition computes a surface current on either side of the boundary.
So far, with both the TBC and the IBC, we have assumed that the surfaces are geometrically perfect. A planar boundary is treated as exactly flat, while curved boundaries are resolved only to within the accuracy of the finite element mesh (the geometric discretization error), as discussed here.
Rough surfaces impede current flow compared to smooth surfaces.
All real surfaces, however, have some roughness, which may be significant. Imperfections in the surface prevent the current from flowing purely tangentially and effectively reduce the conductivity of the surface (illustrated in the figure above). With COMSOL Multiphysics version 5.1, this effect can be accounted for with the Surface Roughness feature that can be added to the IBC and TBC conditions.
For the IBC, the input is the Root Mean Square (RMS) roughness of the surface height. For the TBC, the input is instead given in terms of the RMS of the thickness variation of the film. The magnitude of this roughness should be greater than the skin depth, but much smaller than the characteristic size of the part. The effective conductivity of the surface is decreased as the roughness increases, as described in “Accurate Models for Microstrip Computer-Aided Design” by E. Hammerstad and O. Jensen. There is a second roughness model available, known as the Snowball model, which uses the relationships described in The Foundation of Signal Integrity by P. G. Huray.
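The Hammerstad-Jensen model referenced above scales the surface resistance by a correction factor that depends on the ratio of the RMS roughness to the skin depth and that saturates at a value of 2 for very rough surfaces. A minimal sketch of that correction factor follows; the specific roughness and skin depth values are illustrative assumptions, and the exact way COMSOL maps this factor onto an effective conductivity is described in its documentation:

```python
import math

def hammerstad_jensen_factor(rms_roughness, skin_depth):
    """Hammerstad-Jensen correction factor for surface resistance due to
    roughness. Tends to 1 for smooth surfaces and saturates at 2 when the
    roughness is much larger than the skin depth."""
    return 1.0 + (2.0 / math.pi) * math.atan(1.4 * (rms_roughness / skin_depth) ** 2)

# Example: 1.5 um RMS roughness with a 0.66 um skin depth
# (roughly copper at 10 GHz); both values are assumptions
K = hammerstad_jensen_factor(1.5e-6, 0.66e-6)
print(f"resistance correction factor: {K:.2f}")
```

Because the factor saturates, this model predicts at most a doubling of the surface resistance, which is one motivation for the alternative Snowball (Huray) model mentioned above.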
It is also worth looking at the idealized situation — the Perfect Electric Conductor (PEC) boundary condition. For many applications in the radio and microwave regime, the losses at metallic boundaries are quite small relative to the other losses within the system. In microwave circuits, for example, the losses in the dielectric substrate typically far exceed the losses at any metallization.
The PEC boundary condition is a surface without loss; it will reflect 100% of any incident wave. This boundary condition is good enough for many modeling purposes and can be used early in your model-building process. It is also sometimes interesting to see how well your device would perform without any material losses.
Additionally, the PEC boundary condition can be used as a symmetry condition to simplify your modeling. Depending on your foreknowledge of the fields, you can use the PEC boundary condition, as well as its complement — the Perfect Magnetic Conductor (PMC) boundary condition — to enforce symmetry of the electric fields. The Computing the Radar Cross Section of a Perfectly Conducting Sphere tutorial illustrates the use of the PEC and PMC boundary conditions as symmetry conditions.
Lastly, COMSOL Multiphysics also includes Surface Current, Magnetic Field, and Electric Field boundary conditions. These conditions are provided primarily for mathematical completeness, since the currents and fields at a surface are almost never known ahead of time.
In this blog post, we have highlighted how the Impedance, Transition, and Perfect Electric Conductor boundary conditions can be used for modeling metallic surfaces, helping to identify situations in which each should be used. But, what if you cannot use any of these boundary conditions? What if the characteristic size of the parts you are simulating is similar to the skin depth? In that case, you cannot use a boundary condition. You will have to model the metal domain explicitly, just as you would for any other material. This will be the next topic we focus on in this series, so stay tuned.
]]>
The entire COMSOL® software product suite is built on top of the general-purpose software platform, COMSOL Multiphysics. This platform contains all of the core preprocessing, meshing, solving, and postprocessing capabilities, as well as several physics interfaces. (See our Specification Chart for complete details about what is available in each product.)
With COMSOL Multiphysics®, you can import 2D DXF™ files and 3D STL and 3D VRML files. You can use the 2D DXF™ file format to import profiles and extrude, revolve, or sweep them along a path to create 3D geometry.
The STL and VRML formats are best suited for simple shapes; complex CAD data does not transfer reliably since these formats lack the sophistication of modern CAD file formats. To work with STL files containing 3D scans, we recommend that you import those as a mesh and use the built-in functionality to convert the imported mesh to geometry. Depending on the complexity and quality of the 3D scan, the resulting geometry can then be combined with other geometric objects that are either imported or created in COMSOL Multiphysics.
Also part of the core COMSOL Multiphysics capabilities, the Virtual Operations approximate the CAD data for meshing purposes and are useful for cleaning up all imported CAD data, or even COMSOL native geometry.
The LiveLink™ products allow you to work with the data directly from your CAD program. Supported CAD packages include SOLIDWORKS® software, Inventor® software, Autodesk® AutoCAD® software, PTC® Creo® Parametric™ software, PTC® Pro/ENGINEER® software, Solid Edge® software, and the building information modeling (BIM) software Autodesk® Revit® Architecture software. Both LiveLink™ for SOLIDWORKS® and LiveLink™ for Inventor® offer the One Window interface, which directly embeds the COMSOL® modeling environment within the CAD software user interface. The list of version compatibility with these products is maintained here.
When using these LiveLink™ tools, you must have both COMSOL Multiphysics and the CAD program installed and running on the computer you are using. The CAD data as well as materials definitions and other selections will be bidirectionally synchronized between your CAD package and COMSOL Multiphysics, with full associativity. You can read more about that here. This means that any modifications that you make within the CAD package will be available within the COMSOL environment, and you can use COMSOL Multiphysics to change any of the dimensions within your CAD file. The functionality of each of these modules is described here.
Since the data is transferred with associativity, as you change the dimensions in your CAD program to reshape the part, the COMSOL software will track these changes and appropriately re-map all of the boundary conditions and other geometry- and selection-based settings. To see a demonstration of this, please watch the relevant videos in our Video Gallery. You will find this functionality useful when you want to perform parametric sweeps over the dimensions in your CAD file or perform dimensional optimization using the COMSOL Optimization Module.
In addition to synchronizing CAD data between a CAD software and COMSOL, the LiveLink™ products also include support for file import of the full range of CAD file formats supported by the CAD Import Module. If you are solving problems where you actually want to model the volume inside of the CAD domain (such as for fluid flow models), you can also use the Cap Faces command to create enclosed volumes based upon an existing geometry, as described here. You will also be able to perform repair and geometric clean-up (defeaturing) operations on your CAD data and write out the resultant geometry, or any geometry you create in COMSOL Multiphysics®, to the Parasolid® software or ACIS® software file format.
The LiveLink™ products are the best option for you if you can have your CAD software and COMSOL software installed on the same computer and you want to take advantage of the benefits offered by the included integration. However, if you are working with CAD data that is coming from someone else and don’t have their CAD software on your computer, then you may want to use the CAD Import Module or the Design Module instead.
The CAD Import Module and the Design Module will allow you to import a wide range of CAD file formats. You can find the complete list of formats and versions that are importable here.
If you are planning to make many design iterations, then the relative drawback of both the CAD Import Module and the Design Module compared to the LiveLink™ products is that the data import is one-way and there is no associativity that is maintained between the CAD data and the COMSOL model. That is, if you make a modification to the CAD file and have to re-import the geometry, the physics features and other geometry-based settings in the COMSOL model may not reflect these changes. You will need to manually check all settings and re-apply them to the modified geometry. Additionally, the dimensional data in the original CAD file is not accessible, so you will not be able to perform parametric sweeps or optimization.
It is possible to work around this limitation as described in the “Parameterizing the Dimensions of Imported CAD Files” blog post, but this technique is usually only practical for simpler geometries.
The Design Module provides additional functionality for creating geometry. It includes all of the capabilities of the CAD Import Module, but also provides extra geometric modeling tools. The Parasolid® software Kernel functionality is used to provide 3D Fillet, 3D Chamfer, Loft, Midsurface, and Thicken operations. You can learn more about these operations in this introduction to the Design Module.
The CAD Import Module is recommended only if you are certain that you will never be using COMSOL Multiphysics in conjunction with any of the CAD packages for which there is a LiveLink™ product and if you do not want to create any complex CAD geometries within the COMSOL environment. The Design Module is recommended over the CAD Import Module since it provides all of the same functionality, but will also allow you to create more complex CAD geometries within COMSOL Multiphysics. These geometries can then be exported to the Parasolid® software or ACIS® software file formats. Both modules include the full suite of defeaturing operations as well as the Cap Faces operation.
In addition to the products mentioned here, there is also the File Import for CATIA® V5 Module, which can import CATIA® V5 software files and is an add-on to any of the LiveLink™ products for CAD packages, the CAD Import Module, or the Design Module.
The ECAD Import Module is used for the import of ECAD data, which are files that are typically meant to describe the layout of an integrated circuit (IC), micro-electro-mechanical systems (MEMS) device, or printed circuit board (PCB) and thus contain planar layouts, and in some cases thickness and elevation information.
While the data transfer with this module is without associativity, you can take advantage of selections created by the import functionality to preserve model settings after importing a changed file. Also, the layered structure of the generated geometric objects makes the use of coordinate-based selections for model settings especially suitable to automate model set-up with imported ECAD files. Look out for a future blog post on how to do this.
We recommend the LiveLink™ products if you have your CAD software and COMSOL simulation software installed on the same computer. The Design Module or the CAD Import Module can be used if you only want to import files, and the Design Module is preferred since it has enhanced functionality. The add-on File Import for CATIA® V5 Module is only needed for that specific file type. Finally, to incorporate geometry from ECAD layout files into your simulations, you will need the ECAD Import Module.
If you have any other questions about how best to interact with your CAD data, please contact us.
ACIS is a registered trademark of Spatial Corporation.
Autodesk, the Autodesk logo, AutoCAD, DXF, Inventor, and Revit are registered trademarks or trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries.
CATIA is a registered trademark of Dassault Systèmes or its subsidiaries in the US and/or other countries.
Parasolid and Solid Edge are trademarks or registered trademarks of Siemens Product Lifecycle Management Software Inc. or its subsidiaries in the United States and in other countries.
PTC, Creo, Parametric, and Pro/ENGINEER are trademarks or registered trademarks of PTC Inc. or its subsidiaries in the U.S. and in other countries.
SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.
]]>
When light is incident upon a semi-transparent material, some of the energy will be absorbed by the material itself. If we can assume that the light is single wavelength, collimated (such as from a laser), and experiences very minimal refraction, reflection, or scattering within the material itself, then it is appropriate to model the light intensity via the Beer-Lambert law. This law can be written in differential form for the light intensity I as:
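In its standard differential form, with z measured along the propagation direction, the law reads (if the geometry's z-axis points against the beam, the sign flips accordingly):

```latex
\frac{\partial I}{\partial z} = -\alpha(T)\, I
```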
where z is the coordinate along the beam direction and \alpha(T) is the temperature-dependent absorption coefficient of the material. Because this temperature can vary in space and time, we must also solve the governing partial differential equation for temperature distribution within the material:
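This governing equation is the standard transient heat conduction equation:

```latex
\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + Q
```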
where the heat source term, Q, equals the absorbed light. These two equations present a bidirectionally coupled multiphysics problem that is well suited for modeling within the core architecture of COMSOL Multiphysics. Let’s find out how…
We will consider the problem shown above, which depicts a solid cylinder of material (20 mm in diameter and 25 mm in length) with a laser incident on the top. To reduce the model size, we will exploit symmetry and consider only one quarter of the entire cylinder. We will also partition the domain into two volumes. These volumes represent the same material, but we will only solve the Beer-Lambert law on the inside domain, the only region that the beam is heating up.
To implement the Beer-Lambert law, we will begin by adding the General Form PDE interface with the Dependent Variables and Units settings, as shown in the figure below.
Settings for implementing the Beer-Lambert law. Note the Units settings.
Next, the equation itself is implemented via the General Form PDE interface, as illustrated in the following screenshot. Aside from the source term, f, all terms within the equation are set to zero; thus, the equation being solved is f=0. The source term is set to Iz-(50[1/m]*(1+(T-300[K])/40[K]))*I, where the partial derivative of light intensity with respect to the z-direction is Iz, and the absorption coefficient is (50[1/m]*(1+(T-300[K])/40[K])), which introduces a temperature dependency for illustrative purposes. This one line implements the Beer-Lambert law for a material with a temperature-dependent absorption coefficient, assuming that we will also solve for the temperature field, T, in our model.
Implementation of the Beer-Lambert law with the General Form PDE interface.
Since this equation is linear and stationary, the Initial Values do not affect the solution for the intensity variable. The Zero Flux boundary condition is appropriate on most faces, with the exception of the illuminated face. We will assume that the incident laser light intensity follows a Gaussian distribution with respect to distance from the origin. At the origin, and immediately above the material, the incident intensity is 3 W/mm^{2}. Some of the laser light will be reflected at the dielectric interface, so the intensity of light at the surface of the material is reduced to 0.95 of the incident intensity. This condition is implemented with a Dirichlet Boundary Condition. At the face opposite to the incident face, the Zero Flux boundary condition simply means that any light reaching that boundary will leave the domain.
The Dirichlet Boundary Condition sets the incident light intensity within the material.
With the settings described above, the problem of temperature-dependent light absorption governed by the Beer-Lambert law has been implemented. It is, of course, also necessary to solve for the temperature variation in the material over time. We will consider an arbitrary material with a thermal conductivity of 2 W/m/K, a density of 2000 kg/m^{3}, and a specific heat of 1000 J/kg/K that is initially at 300 K and heated by a volumetric heat source.
The heat source itself is simply the absorption coefficient times the intensity, or equivalently, the derivative of the intensity with respect to the propagation direction, which can be entered as shown below.
The heat source term is the absorbed light.
Most other boundaries can be left at the default Thermal Insulation, which will also be appropriate for implementing the symmetry of the temperature field. However, at the illuminated boundary, the temperature will rise significantly and radiative heat loss can occur. This can be modeled with the Diffuse Surface boundary condition, which takes the ambient temperature of the surroundings and the surface emissivity as inputs.
Thermal radiation from the top face to the surroundings is modeled with the Diffuse Surface boundary condition.
It is worth noting that using the Diffuse Surface boundary condition implies that the object radiates as a gray body. However, the gray body assumption would imply that this material is opaque. So how can we reconcile this with the fact that we are using the Beer-Lambert law, which is appropriate for semi-transparent materials?
We can resolve this apparent discrepancy by noting that the material absorptivity is highly wavelength-dependent. At the wavelength of incident laser light that we are considering in this example, the penetration depth is large. However, when the part heats up, it will re-radiate primarily in the long-infrared regime. At long-infrared wavelengths, we can assume that the penetration depth is very small, and thus the assumption that the material bulk is opaque for emitted radiation is valid.
It is possible to solve this model either for the steady-state solution or for the transient response. The figure below shows the temperature and light intensity in the material over time, as well as the finite element mesh that is used. Although it is not necessary to use a swept mesh in the absorption direction, applying this feature provides a smooth solution for the light intensity with fewer elements than a tetrahedral mesh would require. The plot of light intensity and temperature with respect to depth at the centerline illustrates the effect of the varying absorption coefficient due to the rise in temperature.
Plot of the mesh (on the far left) and the light intensity and temperature at different times.
Light intensity and temperature as a function of depth along the centerline over time.
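The coupled behavior along the centerline can also be sketched in one dimension with a simple explicit scheme. This is a simplified illustration, not the COMSOL implementation: z here is measured along the beam from the illuminated surface, the material values and absorption expression follow the text, and radiative surface loss is neglected:

```python
import numpy as np

# 1D sketch of Beer-Lambert absorption coupled to transient heating
L, N = 25e-3, 250                  # 25 mm depth, number of grid points
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]
k, rho, cp = 2.0, 2000.0, 1000.0   # W/m/K, kg/m^3, J/kg/K
I0 = 0.95 * 3e6                    # 3 W/mm^2 incident, 95% transmitted (W/m^2)
T = np.full(N, 300.0)              # initial temperature, K

def alpha(T):
    # temperature-dependent absorption coefficient from the text, 1/m
    return 50.0 * (1.0 + (T - 300.0) / 40.0)

dt = 0.4 * dz**2 * rho * cp / k    # below the explicit stability limit
for step in range(250):            # about one second of heating
    # integrate dI/dz = -alpha(T) I from the illuminated surface
    tau = np.concatenate(([0.0], np.cumsum(alpha(T[:-1]) * dz)))
    I = I0 * np.exp(-tau)
    Q = alpha(T) * I               # absorbed power density, W/m^3
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    lap[0] = 2.0 * (T[1] - T[0]) / dz**2      # insulated ends
    lap[-1] = 2.0 * (T[-2] - T[-1]) / dz**2
    T = T + dt * (k * lap + Q) / (rho * cp)

print(f"peak temperature after heating: {T.max():.0f} K")
```

As the material heats up, alpha grows, so the intensity decays more steeply near the surface and the heating becomes more concentrated there, the same feedback that the centerline plots illustrate.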
Here, we have highlighted how the General Form PDE interface, available in the core COMSOL Multiphysics package, can be used for implementing the Beer-Lambert law to model the heating of a semi-transparent medium. This approach is appropriate if the incident light is collimated and at a wavelength where the material is semi-transparent.
Although this approach has been presented in the context of laser heating, the incident light needs only to be collimated for this approach to be valid. The light does not need to be coherent nor single wavelength. A wide spectrum source can be broken down into a sum of several wavelength bands over which the material absorption coefficient is roughly constant, with each solved using a separate General Form PDE interface.
In the approach presented here, the material itself is assumed to be completely opaque to ambient thermal radiation. It is, however, possible to model thermal re-radiation within the material using the Radiation in Participating Media physics interface available within the Heat Transfer Module.
The Beer-Lambert law does assume that the incident laser light is perfectly collimated and propagates in a single direction. If you are instead modeling a focused laser beam with gradual variations in the intensity along the optical path, then the Beam Envelopes interface in the Wave Optics Module is more appropriate.
In future blog posts, we will introduce these as well as alternate approaches for modeling laser-material interactions. Stay tuned!
]]>
Let’s start by giving a very conceptual introduction to how a 3D CAD geometry is meshed when you use the default mesh settings in COMSOL Multiphysics. The default mesh settings will always use a Free Tetrahedral mesh to discretize an arbitrary volume into smaller elements. Tetrahedral elements (tets) are the default element type because any geometry, no matter how topologically complex, can be subdivided and approximated as tets. Within this article, we will only discuss free tetrahedral meshing, although there are situations when other types of meshes can be more appropriate, as discussed here.
A cylinder (left) is meshed with triangular elements (grey) on the surface and the tetrahedral meshing algorithm subdivides the volume with tets (cyan). The ends are omitted for clarity.
At a conceptual level, the tetrahedral meshing algorithm begins by applying a triangular mesh to all of the faces of the volume that you want to mesh. The volume is then subdivided into tetrahedra, such that each triangle on the boundary is respected and the size and shape of the tetrahedra inside the volume meets the specified size and growth criteria. If you get the error message “Failed to respect boundary element edge on geometry face” or similar, it is because the shape of the tetrahedra became too distorted during this process.
Of course, the true algorithm can only be stated mathematically, not in words. There are, however, cases that can cause this algorithm some difficulties, and these cases can be understood without resorting to any equations. The free tetrahedral meshing algorithm can have difficulties if:
Let’s take a look at some examples of each case and how partitioning can help us.
To get us started, let us consider a modestly complex geometry: the Helix geometry primitive. You can certainly think of more complex geometries than this, but we can illustrate many concepts starting with this case.
Go ahead and open a new COMSOL Model file and create a helix with ten turns, and then mesh it with the default settings, as shown below.
A ten-turn helix primitive with the corresponding default tetrahedral mesh.
When you were meshing this relatively simple part, you may have noticed that the meshing step took a relatively long time. So let’s look at how partitioning can simplify this geometry. Add a Work plane to your geometry sequence that bisects the length of the helix and then add a Partition feature, using the Work plane as the partitioning object.
A Work plane is used to partition the helix.
As you can see from the image above, the resultant ten-turn helix object is now composed of twenty different domains, each representing a half-turn of the helix. When you re-mesh this model, you will find that the meshing time is reduced, which is good. Each domain represents a much easier meshing problem than the original problem, and, furthermore, the domains can be meshed in parallel on a multi-core computer.
However, you’re probably also thinking to yourself that we now have twenty different domains and that we’ve subdivided the six surfaces of this helix into one hundred two surfaces, including the internal boundaries that divide up the domain. Although this geometry now meshes a lot faster, we have added many more domains and boundaries that can be a distraction as we apply material properties and boundary conditions. What we actually want is to use the partitioned geometry for the mesh, but ignore the partitioning during the setup of the physics.
What you’ll want to do next is to add a Virtual Operation, the Mesh Control Domains operation. This feature will take, as input, all twenty domains defining the helix. The output will appear to be our original helix, and when we apply material properties and physics settings, there will be only one domain and six boundaries.
The Mesh Control Domains will specify that these are different domains only for the purposes of meshing.
When you now mesh this geometry, you’ll observe that you get the best of both worlds: the meshing takes relatively little time, and the physics settings are easy to apply. If you haven’t already, try this out on your own!
We have only looked at one example geometry here, but there are many other cases where you’ll want to use this type of partitioning. Domains that look like combs or serpentines or objects that have many holes, cutouts, or domains embedded within them all present situations in which you should consider partitioning. Also, keep in mind that you don’t need to partition with planes; you can create and use other objects for partitioning. We’ll take a look at such an example next.
The CAD geometries you are working with can often contain some edges or surfaces that have vastly different sizes relative to the other edges and surfaces defining a domain. We often want to avoid such situations, since small features on a large domain may not be that important for our analysis objectives.
We’ve already looked at how we can ignore these small features using Virtual Operations to Simplify the Geometry, but what if these small features are important? Let’s examine how partitioning can help us in terms of the example geometry shown below.
A flow domain to be meshed. Three small inlets, with even smaller fillets, protrude from the main pipe.
The geometry that you see above has a large pipe with three smaller pipes protruding from it. The small fillets that round the transition between the two have dimensions over one hundred times smaller than those of the main pipe. If we mesh this geometry with the default mesh settings, the same settings will be used throughout. However, we will almost certainly want to have smaller mesh sizes around the inlets.
The default mesh will use one setting for all elements within the model. That will not be very useful here. We could just add additional Size features to the mesh, and apply these features to all of the faces around the small pipes to adjust the element sizes at these boundaries, but this is not quite optimal. It’s a lot of work and might not give us exactly what we want.
We can also use partitioning to define a small volume within which we will want to have different mesh settings. In the figure below, additional cylinders have been included that surround each of the smaller pipes and extend some distance into the pipe.
Additional domains (wireframe) which will be used for partitioning of the blue domain.
Results of the partitioning operation.
These additional cylinder objects can be used to partition the original modeling domain, as shown above. Using the Mesh Control Domains, it will again be possible to simplify this geometry down to a single domain for the purposes of physics and materials settings. Once you get to the meshing step, however, it is possible to add a Size feature to the Mesh sequence that will set the element size settings of these newly partitioned domains. This gives us control over the element sizes in these domains and makes things a little bit easier for the mesher.
Different size features can be applied to each partitioned geometry.
The geometries that we have looked at here can be meshed with minimal effort or modification to the default meshing settings, but this is not always the case. It is relatively easy to come up with a geometry that no meshing algorithm will ever be able to mesh in a reasonable amount of time. What can we do in that situation?
The answer (as I’m sure you’ve already guessed) is partitioning along with one other concept: divide and conquer. When confronted with a domain that does not mesh, use partitioning to divide it into two domains. Try to individually mesh each one. If one of the domains does not mesh, keep partitioning each half. Using this approach, you’ll very quickly zoom in on the problematic region of the original domain. You can then decide if you want to simplify the problematic parts of the geometry via the usage of Virtual Operations, or you can use the techniques we’ve outlined here and mesh sub-domain by sub-domain, or you can even use some combination of the two.
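The divide-and-conquer loop described above can be sketched abstractly. In this sketch, mesh_ok and bisect are hypothetical stand-ins for "attempt to mesh this domain" and "partition it in half"; nothing here is a COMSOL API:

```python
def find_problem_regions(domain, mesh_ok, bisect, min_size=1e-3):
    """Recursively bisect a domain until the regions where meshing
    fails are isolated. Returns a list of problematic sub-domains."""
    if mesh_ok(domain):
        return []                          # this half meshes fine
    lo, hi = domain
    if hi - lo <= min_size:
        return [domain]                    # small enough: this is a culprit
    left, right = bisect(domain)
    return (find_problem_regions(left, mesh_ok, bisect, min_size)
            + find_problem_regions(right, mesh_ok, bisect, min_size))

# Toy example: a 1D "domain" whose unmeshable feature sits at x = 0.3
def mesh_ok(d):
    return not (d[0] <= 0.3 <= d[1])

def bisect(d):
    mid = 0.5 * (d[0] + d[1])
    return (d[0], mid), (mid, d[1])

bad = find_problem_regions((0.0, 1.0), mesh_ok, bisect, min_size=0.01)
print(bad)
```

Each bisection halves the search region, so even a single problematic feature in a large imported geometry is located in a handful of partition-and-mesh attempts.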
Another technique that you can use is to apply a Free Triangular mesh on all of the boundaries of the imported geometry. Surface meshing is much faster than volume meshing and will almost always succeed. Visually inspect the resultant surface mesh. It will then often be immediately apparent where in the model the small features and problematic areas are. Once you know where the issues are, delete the Free Triangular mesh, since the free tetrahedral meshing algorithm will typically want to adjust the mesh on the boundaries, but will not do so if there is already a surface mesh defined.
Along with the Virtual Operations that we have already mentioned for simplifying the geometry for meshing, you can also use the Repair and Defeaturing functionality to clean up CAD data originating from another source. The Virtual Operations simply create an abstraction of the CAD geometry that can only be used inside the COMSOL software, whereas the Repair and Defeaturing operations modify the CAD directly, creating a modified CAD representation that can be written out from COMSOL Multiphysics to other software packages.
We have now looked at two different representative cases where the default mesh settings are not optimal — a domain that is very complex as well as a domain with extreme aspect ratios. In both cases, we can use partitioning along with the Mesh Control Domains Virtual Operations feature to simplify the meshing operations.
We have also presented some strategies for handling cases in which your geometry will not mesh with the default settings. It is also worth saying that such situations arise most often when working with imported CAD geometry that was meant for manufacturing, rather than analysis purposes. If you are given a CAD file with many features that are cosmetic rather than functional or that you are reasonably certain will not affect the physics of the problem, consider removing these features in the originating CAD package, before they even get to COMSOL Multiphysics.
In future blog posts, we will also look at combining partitioning with swept meshing, which is another powerful technique in your toolkit as you use COMSOL Multiphysics. Stay tuned!
]]>
Let’s take a look at some sample experimental data in the plot below. Observe that the data is noisy and that the sampling is nonuniform in the x-axis. This experimental data may represent a material property. If the material property is dependent upon the solution variable (such as a temperature-dependent thermal conductivity), then we would usually not want to use this data directly in our analyses. Such noisy input data can often cause solver convergence difficulties, for the reasons discussed here. If we instead approximate the data with a smooth curve, then model convergence can often be improved and we will also have a simple function to represent the material property.
Experimental data that we would like to approximate with a simpler function.
What we would like to do is find a function, F(x), that fits the experimental data, D(x), as closely as possible. We will define the “best-fit” function as the function that minimizes the square of the difference between the experimental data and our fitting function, integrated over the entire data range. That is, our objective is to minimize:
So the first thing that we need to do is make some decisions about what type of function we would like to fit. We have a great deal of flexibility about what type of functions to use, but we should choose a fitting function that results in a problem which will be numerically well-conditioned. Although we will not go into the details about why, for maximum robustness we will choose to fit a cubic polynomial in the normalized coordinate (x-a)/(b-a):

F(x) = c_0 + c_1 \left( \frac{x-a}{b-a} \right) + c_2 \left( \frac{x-a}{b-a} \right)^2 + c_3 \left( \frac{x-a}{b-a} \right)^3

which in this case, for a=0, b=1, simplifies to:

F(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3
Now we need to find the four coefficients, c_0, c_1, c_2, c_3, that will minimize:

R = \int_0^1 \left( F(x) - D(x) \right)^2 dx
Although this may sound like an optimization problem, we do not have any constraints on our coefficients and we will assume that the function above has a single minimum. This minimum will correspond to the point where the derivatives of R, with respect to the coefficients, are all zero. That is, to find the best-fit function, we must find the values of the coefficients at which:

\frac{\partial R}{\partial c_i} = 0, \quad i = 0, 1, 2, 3
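Because the coefficients enter the fitting function linearly, these four conditions reduce to a small linear system. As a rough numerical sketch of the same idea outside of COMSOL Multiphysics — with made-up noisy samples standing in for D(x), and the trapezoidal rule standing in for the integral over the linearly interpolated data — the system can be assembled and solved like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical noisy, nonuniformly sampled data standing in for D(x).
x = np.sort(rng.uniform(0.0, 1.0, 40))
d = np.sin(np.pi * x) + 0.05 * rng.normal(size=x.size)

def integrate(y, x):
    # Trapezoidal rule: exact for linear interpolation between samples.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Basis of a cubic fit F(x) = c0 + c1*x + c2*x^2 + c3*x^3.
basis = [x**k for k in range(4)]

# Setting dR/dc_i = 0 for R = integral of (F - D)^2 gives the linear
# system A c = b with A_ij = ∫ x^i x^j dx and b_i = ∫ x^i D(x) dx.
A = np.array([[integrate(bi * bj, x) for bj in basis] for bi in basis])
b = np.array([integrate(bi * d, x) for bi in basis])
c = np.linalg.solve(A, b)

F = sum(ck * bk for ck, bk in zip(c, basis))  # fitted values at the samples
```

This is only a sketch of the underlying math; in the model itself, the assembly and solve are handled by the Global Equations and the stationary solver, as shown in the steps that follow.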
It turns out that we can solve this problem with the core capabilities of COMSOL Multiphysics. Let’s find out how…
We start by creating a new file containing a 1D component. We will use the Global ODEs and DAEs physics interface to solve for our coefficients and we will use the Stationary Solver. For simplicity, we will use a dimensionless length unit, as shown in the screenshot below.
Start out with a 1D component and set Unit system to None.
Next, create the geometry. Our geometry should contain our interval (in this case, the range of our sample points is from 0 to 1) as well as a set of points along the x-direction for every sample point. You can simply copy and paste this range of points from a spreadsheet into the Point feature, as shown.
Add points over the interval at every data sample point.
Read in the experimental data using the Interpolation function. Give your data a reasonable name (we simply use D in the screenshot below), check the “Use spatial coordinates as arguments” box, and make sure to use the default Linear interpolation between data points.
The settings for importing the experimental data.
Define an Integration Operator over all domains. You can use the default name: intop1. This feature will be used to take the integral described above.
The Integration Operator is defined over all domains.
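In Python terms — a sketch of the behavior, not COMSOL’s implementation — applying this operator to a piecewise-linear quantity on the mesh we will build below amounts to the trapezoidal rule over the sample points. The name intop1 here simply mirrors the operator name in the model:

```python
import numpy as np

def intop1(values, x):
    """Integrate a piecewise-linear quantity over the 1D domain:
    the trapezoidal rule over the mesh points at positions x."""
    values = np.asarray(values, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(x)))
```

For example, integrating a constant value of 1 over the unit interval returns the domain length, 1.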
Now define two variables. One is our fitting function, F, and the other is the quantity that we want to minimize, R. Since the Geometric Entity Level is set to Entire Model, F is defined everywhere and varies spatially as a function of x, while R is scalar valued and likewise available throughout the entire model. As shown in the screenshot below, we enter F in terms of the coefficients c_0, c_1, c_2, c_3, which we will define later.
The definition of our fitting function and the quantity we wish to minimize.
Next, we can use the Global Equations interface to define the four equations that we want to satisfy for our four coefficients. Recall that we want the derivative of R with respect to each coefficient to be zero. Using the differentiation operator, d(f(x),x), we can enter this as shown below:
The Global Equations that are used to solve for the coefficients of our fitting function.
Finally, we need an appropriate mesh on our 1D domain. Recall that earlier we placed a geometric point at each data sampling point. Using the Distribution subfeature on the Edge mesh feature, we can ensure that there is exactly one element between adjacent data points. We do not need any more elements than this, since we are assuming linear interpolation between data points, but we do not want fewer, because then we would miss some of the experimental data points.
There should be one element over each data interval.
We can now solve this stationary problem for the numerical values of our coefficients and plot the results. From the plot below, we can see the data points with the linear interpolation between them, as well as the computed fitting function. We have minimized the square of the difference between these two curves, integrated over the interval, and now have a smooth and simple function that approximates our data quite well.
The experimental data (black, with linear interpolation) and the fitted function (red).
Now, what we’ve done so far is actually fairly straightforward and you could compute similar types of curve fits in a spreadsheet program or any number of other software tools. But there is much more that we can do with this approach. We are not limited to this fitting function. You are free to choose any function you want, but it is best to use a function that is a sum of orthogonal functions. Try out, for example:

F(x) = c_0 + c_1 \sin ( \pi x ) + c_2 \cos ( \pi x )
Be aware, however, that you will only want to solve for the linear coefficients on the various terms within the fitting function. You would not want to use nonlinear fitting coefficients, as in F(x) = c_0 + c_1 \sin ( \pi x / c_3 ) + c_2 \cos ( \pi x / c_4 ), since such a problem might be too highly nonlinear to converge.
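To make the contrast concrete, here is a small sketch in plain NumPy (using a discrete sum of squares rather than COMSOL’s integral form, with synthetic data whose true coefficients are known). Because the coefficients of a sine/cosine basis enter linearly, the fit reduces to a single linear solve, with no nonlinear iteration at all:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical samples of a signal with known coefficients (0.3, 0.5, -0.2).
x = np.sort(rng.uniform(0.0, 1.0, 60))
d = (0.3 + 0.5 * np.sin(np.pi * x) - 0.2 * np.cos(np.pi * x)
     + 0.05 * rng.normal(size=x.size))

# F(x) = c0 + c1*sin(pi x) + c2*cos(pi x) is linear in c0, c1, c2,
# so the best fit is one linear least-squares solve.
B = np.column_stack([np.ones_like(x), np.sin(np.pi * x), np.cos(np.pi * x)])
c, *_ = np.linalg.lstsq(B, d, rcond=None)
```

Had the unknowns appeared inside the sine and cosine arguments instead, no such single solve would exist and an iterative nonlinear solver would be required.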
And what if you have a 2D or 3D data set? You can apply the exact same approach as we’ve outlined here. The only difference is that you will need to set up a 2D or a 3D domain. The domain need not be Cartesian; you can even switch to a different coordinate system.
Let’s take a quick look at some sample data points measured over the region shown below:
Sampled data in a 2D region. We want a best fit surface to the heights of these points.
Since the data is sampled over this annular region and seems to vary with respect to the radial and circumferential directions (r,\theta) rather than the Cartesian directions, we can try to fit a function expressed in terms of r and \theta.
We can follow the exact same procedure as before. The only difference is that we need to integrate over a 2D domain rather than a line and write our expression using a cylindrical coordinate system.
The computed best-fit surface to the data shown above.
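The same linear least-squares idea carries over directly to 2D. As an illustration — with an assumed fitting function and synthetic data, since the actual data set is not reproduced here — a fit over an annulus in terms of (r, \theta) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical samples scattered over an annulus 1 <= r <= 2.
r = rng.uniform(1.0, 2.0, 200)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
d = 1.0 + 0.4 * r + 0.2 * np.cos(theta) + 0.05 * rng.normal(size=r.size)

# Assumed form F(r, theta) = c0 + c1*r + c2*cos(theta) + c3*sin(theta);
# it is still linear in the coefficients, so moving to 2D changes
# nothing about the solve itself.
B = np.column_stack([np.ones_like(r), r, np.cos(theta), np.sin(theta)])
c, *_ = np.linalg.lstsq(B, d, rcond=None)
```

The recovered coefficients approximate the ones used to generate the data; the dimensionality of the domain only changes how the samples are laid out, not the structure of the problem.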
You can see that the core COMSOL Multiphysics package has very flexible capabilities for finding a best-fit curve to data in 1D, 2D, or 3D using the methods shown here.
There can be cases where you might want to go beyond a simple curve fit and consider some additional constraints. In that case, you would want to use the capabilities of the Optimization Module, which can also perform these types of curve fits and much, much more. For an introduction to using the Optimization Module for curve fitting and the related topic of parameter estimation, please also see the example models in the Application Gallery.