Let’s start by giving a very conceptual introduction to how a 3D CAD geometry is meshed when you use the default mesh settings in COMSOL Multiphysics. The default mesh settings will always use a Free Tetrahedral mesh to discretize an arbitrary volume into smaller elements. Tetrahedral elements (tets) are the default element type because any geometry, no matter how topologically complex, can be subdivided and approximated as tets. Within this article, we will only discuss free tetrahedral meshing, although there are situations when other types of meshes can be more appropriate, as discussed here.
At a conceptual level, the tetrahedral meshing algorithm begins by applying a triangular mesh to all of the faces of the volume that you want to mesh. This surface mesh is then used as the “seed” from which tetrahedral elements “grow” inwards. As the elements grow together and fill in the domain, the meshing algorithm attempts to adjust the element sizes such that they are reasonably isotropic (have similar edge lengths) and have gradual transitions between small and large elements. This is visualized in the image below.
A cylinder (left) is meshed with triangular elements (grey) on the surface and the tetrahedral meshing algorithm fills in the volume with tets (cyan) starting from the boundaries. The ends are omitted for clarity.
Of course, the true algorithm can only be stated mathematically, not in words, so we won’t try to do so here. There are, however, a few cases that might cause this algorithm some difficulties, and these can be stated without resorting to any equations. The free tetrahedral meshing algorithm can have difficulties if:

- The domain is highly non-convex
- The domain contains edges or faces of vastly different characteristic sizes
Let’s take a look at some examples of each case and how partitioning can help us.
Simply put, a convex geometry is a domain within which it is possible to draw a straight line between any two points in the domain and have that line lie entirely within the domain. A non-convex geometry is a domain in which at least one pair of points has a connecting straight line that passes outside of the domain, intersecting its boundaries. A good example of such a non-convex geometry is the Helix geometry primitive.
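To make the definition concrete, convexity of a simple 2D polygon can be checked numerically. The sketch below is plain Python for illustration only, not part of any COMSOL workflow: a polygon is convex exactly when the cross products of all consecutive edge vectors share one sign, meaning the boundary always turns the same way.

```python
def is_convex(polygon):
    """Return True if a simple 2D polygon (list of (x, y) vertices in
    order) is convex: all cross products of consecutive edge vectors
    must have the same sign, i.e., the boundary always turns one way."""
    n = len(polygon)
    signs = set()
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        x3, y3 = polygon[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1  # mixed turn directions mean non-convex

# A square is convex; an L-shaped domain is not.
print(is_convex([(0, 0), (1, 0), (1, 1), (0, 1)]))                  # True
print(is_convex([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))  # False
```

The same idea extends to 3D, where the helix fails the test badly: a line between points on different turns leaves the domain entirely.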
Go ahead and open a new COMSOL Model file and create a helix with ten turns, and then mesh it with the default settings, as shown below.
A ten-turn helix primitive with the corresponding default tetrahedral mesh.
When you were meshing this relatively simple part, you may have noticed that the meshing step took a relatively long time. So let’s look at how partitioning can simplify this geometry. Add a Work plane to your geometry sequence that bisects the length of the helix and then add a Partition feature, using the Work plane as the partitioning object.
A Work plane is used to partition the helix.
As you can see from the image above, the resultant ten-turn helix object is now composed of twenty different domains, each representing a half-turn of the helix. Although still non-convex, these subdomains are far less non-convex than the entire helix domain. When you re-mesh this model, you will find that the meshing time is significantly reduced, which is good.
However, you’re probably also thinking to yourself that we now have twenty different domains and that we’ve subdivided the six surfaces of this helix into 102 surfaces, including the internal boundaries that divide up the domain. Although this geometry now meshes a lot faster, we have added many more domains and boundaries that can be a distraction as we apply material properties and boundary conditions. What we actually want is to use the partitioned geometry for the mesh, but ignore the partitioning during the setup of the physics.
What you’ll want to do next is to add a Virtual Operation, the Mesh Control Domains operation. This feature will take, as input, all twenty domains defining the helix. The output will appear to be our original helix, and when we apply material properties and physics settings, there will be only one domain and six boundaries.
The Mesh Control Domains will specify that these are different domains only for the purposes of meshing.
When you now mesh this geometry, you’ll observe that you have the best of both worlds: the meshing takes relatively little time, and the physics settings are easy to apply. If you haven’t already, try this out on your own!
We have only looked at one example of a non-convex geometry here, but there are many other cases where you’ll want to use this type of partitioning. Domains that look like combs or serpentines or objects that have many holes, cutouts, or domains embedded within them all present situations in which you should consider partitioning. Also, keep in mind that you don’t need to partition with planes; you can create and use other objects for partitioning. We’ll take a look at such an example next.
The CAD geometries you are working with can often contain some edges or surfaces that have vastly different sizes relative to the other edges and surfaces defining a domain. We often want to avoid such situations, since small features on a large domain may not be that important for our analysis objectives.
We’ve already looked at how we can ignore these small features using Virtual Operations to Simplify the Geometry, but what if these small features are important? Let’s examine how partitioning can help us in terms of the example geometry shown below.
A flow domain to be meshed. Three small inlets, with even smaller fillets, protrude from the main pipe.
The geometry that you see above has a large pipe with three smaller pipes protruding from it. The small fillets that round the transition between the two have dimensions over one hundred times smaller than those of the main pipe. If we mesh this domain with the default mesh settings, the same settings will be used throughout. However, we will almost certainly want to have smaller mesh sizes around the inlets.
The default mesh will use one setting for all elements within the model. That will not be very useful here. We could just add additional Size features to the mesh, and apply these features to all of the faces around the small pipes to adjust the element sizes at these boundaries, but this is not quite optimal. It’s a lot of work and might not give us exactly what we want.
We can also use partitioning to define a small volume within which we will want to have different mesh settings. In the figure below, additional cylinders have been included that surround each of the smaller pipes and extend some distance into the pipe.
Additional domains (wireframe) which will be used for partitioning of the blue domain.
Results of the partitioning operation.
These additional cylinder objects can be used to partition the original modeling domain, as shown above. Using the Mesh Control Domains, it will again be possible to simplify this geometry down to a single domain for the purposes of physics and materials settings. Once you get to the meshing step, however, it is possible to add a Size feature to the Mesh sequence that will set the element size settings of these newly partitioned domains. This gives us control over the element sizes in these domains and makes things a little bit easier for the mesher.
Different size features can be applied to each partitioned geometry.
The geometries that we have looked at here can be meshed with minimal effort or modification to the default meshing settings, but this is not always the case. It is relatively easy to come up with a geometry that no meshing algorithm will ever be able to mesh in a reasonable amount of time. What can we do in that situation?
The answer (as I’m sure you’ve already guessed) is partitioning along with one other concept: divide and conquer. When confronted with a domain that does not mesh, use partitioning to divide it into two domains. Try to individually mesh each one. If one of the domains does not mesh, keep partitioning each half. Using this approach, you’ll very quickly zoom in on the problematic region of the original domain. You can then decide if you want to simplify the problematic parts of the geometry via the usage of Virtual Operations, or you can use the techniques we’ve outlined here and mesh sub-domain by sub-domain, or you can even use some combination of the two.
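The divide-and-conquer search can be sketched as a short recursive routine. This is illustrative Python, not a COMSOL API call: meshes_ok and bisect are placeholders for whatever meshing and partitioning operations your workflow provides, and the domain is reduced to a 1D interval purely for demonstration.

```python
def locate_problem_regions(domain, meshes_ok, bisect, min_width):
    """Recursively bisect a domain that fails to mesh until the failing
    regions are smaller than min_width; return the list of culprits."""
    if meshes_ok(domain):
        return []               # this piece meshes fine: nothing to report
    a, b = domain
    if b - a <= min_width:
        return [domain]         # small and still failing: a culprit
    left, right = bisect(domain)
    return (locate_problem_regions(left, meshes_ok, bisect, min_width)
            + locate_problem_regions(right, meshes_ok, bisect, min_width))

# Toy stand-in: pretend any piece containing the point x = 0.3 fails to mesh.
meshes_ok = lambda d: not (d[0] <= 0.3 <= d[1])
bisect = lambda d: ((d[0], (d[0] + d[1]) / 2), ((d[0] + d[1]) / 2, d[1]))
print(locate_problem_regions((0.0, 1.0), meshes_ok, bisect, 0.1))
```

Each bisection halves the search region, so the problematic feature is cornered in logarithmically many meshing attempts rather than by inspecting the whole geometry at once.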
Another technique that you can use is to apply a Free Triangular mesh on all of the boundaries of the imported geometry. Surface meshing is much faster than volume meshing and will almost always succeed. Visually inspect the resultant surface mesh. It will then often be immediately apparent where in the model the small features and problematic areas are. Once you know where the issues are, delete the Free Triangular mesh, since the free tetrahedral meshing algorithm will typically want to adjust the mesh on the boundaries, but will not do so if there is already a surface mesh defined.
Along with the Virtual Operations that we have already mentioned for simplifying the geometry for meshing, you can also use the Repair and Defeaturing functionality to clean up CAD data originating from another source. The Virtual Operations will simply create an abstraction of the CAD geometry that can only be used inside of the COMSOL software, as compared to the Repair and Defeaturing operations, which will modify the CAD directly and create a modified CAD representation that can be written out from COMSOL Multiphysics to other software packages.
We have now looked at two different representative cases where the default mesh settings are not optimal — a domain that is very non-convex as well as a domain with extreme aspect ratios. In both cases, we can use partitioning along with the Mesh Control Domains Virtual Operations feature to simplify the meshing operations.
We have also presented some strategies for handling cases in which your geometry will not mesh with the default settings. It is also worth noting that such situations arise most often when working with imported CAD geometry that was meant for manufacturing, rather than analysis purposes. If you are given a CAD file with many features that are cosmetic rather than functional or that you are reasonably certain will not affect the physics of the problem, consider removing these features in the originating CAD package, before they even get to COMSOL Multiphysics.
In future blog posts, we will also look at combining partitioning with swept meshing, which is another powerful technique in your toolkit as you use COMSOL Multiphysics. Stay tuned!
Let’s take a look at some sample experimental data in the plot below. Observe that the data is noisy and that the sampling is nonuniform in the x-axis. This experimental data may represent a material property. If the material property is dependent upon the solution variable (such as a temperature-dependent thermal conductivity), then we would usually not want to use this data directly in our analyses. Such noisy input data can often cause solver convergence difficulties, for the reasons discussed here. If we instead approximate the data with a smooth curve, then model convergence can often be improved and we will also have a simple function to represent the material property.
Experimental data that we would like to approximate with a simpler function.
What we would like to do is find a function, F(x), that fits the experimental data, D(x), as closely as possible. We will define the “best-fit” function as the function that minimizes the square of the difference between the experimental data and our fitting function, integrated over the entire data range. That is, our objective is to minimize:
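Written out, with the experimental data defined over an interval [a, b], the quantity to minimize is:

```latex
R = \int_a^b \left( F(x) - D(x) \right)^2 \, dx
```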
So the first thing that we need to do is to make some decisions about what type of function we would like to fit. We have a great deal of flexibility about what type of functions to use, but we should choose a fitting function that results in a problem which will be numerically well-conditioned. Although we will not go into the details about why, for maximum robustness we will choose to fit this function:
which in this case, for a=0, b=1, simplifies to:
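One four-coefficient fitting function of this kind (an illustrative choice here, consistent with the trigonometric forms discussed later in this post) is, over a data interval [a, b]:

```latex
F(x) = c_0 + c_1 \sin\left(\frac{\pi (x-a)}{b-a}\right)
           + c_2 \cos\left(\frac{\pi (x-a)}{b-a}\right)
           + c_3 \sin\left(\frac{2\pi (x-a)}{b-a}\right)
```

which for a = 0, b = 1 reduces to F(x) = c_0 + c_1 \sin(\pi x) + c_2 \cos(\pi x) + c_3 \sin(2\pi x).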
Now we need to find the four coefficients that will minimize:
Although this may sound like an optimization problem, we do not have any constraints on our coefficients and we will assume that the above function has a single minimum. This minimum will correspond to the point where the derivatives, with respect to the coefficients, are zero. That is, to find the best fit function, we must find the values of the coefficients at which:
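That is, the four stationarity conditions are:

```latex
\frac{\partial R}{\partial c_i} = 0, \qquad i = 0, 1, 2, 3
```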
It turns out that we can solve this problem with the core capabilities of COMSOL Multiphysics. Let’s find out how…
We start by creating a new file containing a 1D component. We will use the Global ODEs and DAEs physics interface to solve for our coefficients and we will use the Stationary Solver. For simplicity, we will use a dimensionless length unit, as shown in the screenshot below.
Start out with a 1D component and set Unit system to None.
Next, create the geometry. Our geometry should contain our interval (in this case, the range of our sample points is from 0 to 1) as well as a set of points along the x-direction for every sample point. You can simply copy and paste this range of points from a spreadsheet into the Point feature, as shown.
Add points over the interval at every data sample point.
Read in the experimental data using the Interpolation function. Give your data a reasonable name (we simply use D in the screenshot below), check the “Use spatial coordinates as arguments” box, and make sure to use the default Linear interpolation between data points.
The settings for importing the experimental data.
Define an Integration Operator over all domains. You can use the default name: intop1. This feature will be used to take the integral described above.
The Integration Operator is defined over all domains.
Now define two variables. One will be your function, F, and the other will be the quantity that we want to minimize, R. Since the Geometric Entity Level is set to Entire Model, F will be defined everywhere and will vary spatially as a function of x. R, on the other hand, is a scalar value that is likewise available throughout the entire model. As shown in the screenshot below, we can enter F as a function of the coefficients c_0, c_1, c_2, c_3, which we will define later.
The definition of our fitting function and the quantity we wish to minimize.
Next, we can use the Global Equations interface to define the four equations that we want to satisfy for our four coefficients. Recall that we want the derivative of R with respect to each coefficient to be zero. Using the differentiation operator, d(f(x),x), we can enter this as shown below:
The Global Equations that are used to solve for the coefficients of our fitting function.
Finally, we need to have an appropriate mesh on our 1D domain. Recall that earlier we placed a geometric point at each data sampling point. Using the Distribution subfeature on the Edge Mesh feature, we can ensure that there is one element between each data point. We do not need any more elements than this, since we are assuming linear interpolation between data points, but we do not want fewer than this, because then we would miss some of the experimental data points.
There should be one element over each data interval.
We can now solve this stationary problem for the numerical values of our coefficients and plot the results. From the plot below, we can see the data points with the linear interpolation between them, as well as the computed fitting function. We have minimized the square of the difference between these two curves, integrated over the interval, and now have a smooth and simple function that approximates our data quite well.
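Outside of COMSOL Multiphysics, the same continuous least-squares idea can be sketched in a few lines of NumPy. This is an illustration of the math, not of the COMSOL setup: approximating the integral with trapezoidal weights at the sample points turns the stationarity conditions into a small linear system for the coefficients. The basis functions below are a hypothetical choice for demonstration.

```python
import numpy as np

def continuous_lsq_fit(x, d, basis):
    """Minimize the trapezoidal approximation of the integral of
    (F - D)^2, where F(t) = sum_i c_i * basis[i](t) and (x, d) are the
    sample points. Setting the derivatives with respect to each c_i to
    zero yields the weighted normal equations solved here."""
    w = np.zeros_like(x, dtype=float)  # trapezoidal quadrature weights
    dx = np.diff(x)
    w[:-1] += dx / 2
    w[1:] += dx / 2
    A = np.column_stack([b(x) for b in basis])  # design matrix
    AtW = A.T * w                               # weight each sample
    return np.linalg.solve(AtW @ A, AtW @ d)

# Recover a known linear trend from nonuniformly spaced samples.
x = np.array([0.0, 0.3, 0.7, 1.0])
d = 2.0 + 3.0 * x
c = continuous_lsq_fit(x, d, [np.ones_like, lambda t: t])
print(c)  # close to [2. 3.]
```

The point of doing this inside COMSOL instead is that the fitted function is then immediately available to the rest of the model, for example as a smooth material property expression.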
The experimental data (black, with linear interpolation) and the fitted function (red).
Now, what we’ve done so far is actually fairly straightforward, and you could compute similar types of curve fits in a spreadsheet program or any number of other software tools. But there is much more that we can do with this approach. We are not limited to the fitting function used above. You are free to choose any function you want, but it is best to use a function that is a sum of a set of orthogonal functions. Try out, for example:
Be aware, however, that you will only want to solve for the linear coefficients on the various terms within the fitting function. You would not want to use nonlinear fitting coefficients, as in F(x) = c_0 + c_1 \sin(\pi x / c_3) + c_2 \cos(\pi x / c_4), since such a problem might be too highly nonlinear to converge.
And what if you have a 2D or 3D data set? You can actually apply the exact same approach as we’ve outlined here. The only difference is that you will need to set up a 2D or a 3D domain. The domains need not be Cartesian and you can even switch to a different coordinate system.
Let’s take a quick look at some sample data points measured over the region shown below:
Sampled data in a 2D region. We want a best fit surface to the heights of these points.
Since the data is sampled over this annular region and seems to have variations with respect to the radial and circumferential directions (r,\theta), rather than the Cartesian directions, we can try to fit the function:
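One such fitting function (an illustrative assumption on our part; any sum of terms that enter linearly in the coefficients and are periodic in \theta would work) combines a radial trend with the first circumferential harmonics:

```latex
F(r, \theta) = c_0 + c_1 r + c_2 \cos\theta + c_3 \sin\theta
```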
We can follow the exact same procedure as before. The only difference being that we need to integrate over a 2D domain rather than a line and write our expression using a cylindrical coordinate system.
The computed best-fit surface to the data shown above.
You can see that the core COMSOL Multiphysics package has very flexible capabilities for finding a best-fit curve to data in 1D, 2D, or 3D using the methods shown here.
There can be cases where you might want to go beyond a simple curve-fit and want to consider some additional constraints. In that case, you would want to use the capabilities of the Optimization Module, which can also perform these types of curve fits and much, much more. For an introduction to the Optimization Module for curve fitting and the related topic of parameter estimation, please also see these models:
When modeling electromagnetic structures (e.g., antennas, waveguides, cavities, filters, and transmission lines), we can often limit our analysis to one small part of the entire system. Consider, for example, a coaxial splitter as shown here, which splits the signal from one coaxial cable (coax) equally into two. We know that the electromagnetic fields in the incoming and outgoing cables will have a certain form and that the energy is propagating in the direction normal to the cross section of the coax.
There are many other such cases where we know the form (but not the magnitude or phase) of the electromagnetic fields at some boundaries of our modeling domain. These situations call for the use of the Lumped Port and the Port boundary conditions. Let us look at what these boundary conditions mean and when they should be used.
We can begin our discussion of the Lumped Port boundary condition by looking at the fields in a coaxial cable. A coaxial cable is a waveguide composed of an inner and an outer conductor with a dielectric in between. Over its range of operating frequencies, a coax operates in the Transverse Electromagnetic (TEM) mode, meaning that the electric and the magnetic field vectors have no component in the direction of wave propagation along the cable. That is, the electric and magnetic fields both lie entirely in the cross-sectional plane. Within COMSOL Multiphysics, we can compute these fields and the impedance of a coax, as illustrated here.
However, there also exists an analytic solution for this problem. This solution shows that the electric field drops off proportional to 1/r between the inner and outer conductor. So, since we know the shape of the electric field at the cross section of a coax, we can apply this as a boundary condition using the Lumped Port, Coaxial boundary condition. The excitation can be specified in terms of a cable impedance along with an applied voltage and phase, in terms of an applied current, or as a connection to an externally defined circuit. Regardless of which of these three options is used, the electric field will always vary as 1/r times a complex-valued number that represents the sum of the (user-specified) incoming and the (unknown) outgoing wave.
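For reference, the analytic TEM-mode field referred to above can be written down directly. With inner conductor radius r_i, outer conductor radius r_o, and voltage V between the conductors (symbols chosen here for illustration), the radial electric field is:

```latex
E_r(r) = \frac{V}{r \, \ln(r_o / r_i)}, \qquad r_i \le r \le r_o
```

This is the 1/r dependence that the Lumped Port, Coaxial condition imposes at the boundary.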
The electric field in a coaxial cable.
For a coaxial cable, we always need to apply the boundary condition at an annular face, but we can also use the Lumped Port boundary condition in other cases. There are also a Uniform and a User-Defined option for the Lumped Port condition. The Uniform option can be used if you have a geometry as shown below: a surface bridging the gap between two electrically conductive faces. The electric field is assumed to be uniform in magnitude between the bounding faces, and the software automatically computes the height and width of the Lumped Port face, which should always be much smaller than the wavelength in the surrounding material. Uniform Lumped Ports are commonly used to excite striplines and coplanar waveguides, as discussed in detail here.
A typical Uniform Lumped Port geometry.
The User-Defined option allows you to manually enter the height and width of the feed, as well as the direction of the electric field vector. This option is appropriate when these quantities cannot be inferred automatically from the geometry, such as in the geometry shown below and as demonstrated in this example of a dipole antenna.
An example of a User-Defined Lumped Port geometry.
Another use of the Lumped Port condition is to model a small electrical element such as a resistor, capacitor, or inductor bonded onto a microwave circuit. The Lumped Port can be used to specify an effective impedance between two conductive boundaries within the modeling domain. There is an additional Lumped Element boundary condition that is identical in formulation to the Lumped Port, but has a customized user interface and different postprocessing options. The example of a Wilkinson power divider demonstrates this functionality.
Once the solution of a model using Lumped Ports is computed, COMSOL Multiphysics will also automatically postprocess the S-parameters, as well as the impedance at each Lumped Port in the model. The impedance can be computed for TEM mode waveguides only. It is also possible to compute an approximate impedance for a structure that is very nearly TEM, as shown here. But once there is a significant electric or magnetic field in the direction of propagation, then we can no longer use the Lumped Port condition. Instead, we must use the Port boundary condition.
To begin discussing the Port boundary condition, let’s examine the fields within a rectangular waveguide. Again, there are analytic solutions for the propagating fields in a rectangular waveguide. These solutions are classified as either Transverse Electric (TE) or Transverse Magnetic (TM), meaning that there is no electric or no magnetic field in the direction of propagation, respectively.
Let’s examine a waveguide with TE modes only, which can be modeled in the 2D plane. The geometry we will consider is of two straight sections of different cross-sectional area. At the operating frequency, the wider section supports both TE10 and TE20 modes, while the narrower supports only the TE10 mode. The waveguide is excited with a TE10 mode on the wider section. As the wave propagates down the waveguide and hits the junction, part of the wave will be reflected back towards the source as a TE10 mode, part will continue along into the narrower section again as a TE10 mode, and part will be converted to a TE20 mode, and then propagate back towards the source boundary. We want to properly model this and compute the split into these various modes.
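The mode content of each section follows from the waveguide cutoff frequencies. For an air-filled rectangular waveguide of width a (a is an illustrative symbol here), the TE_{m0} modes have cutoff frequencies:

```latex
f_{c,m0} = \frac{m c}{2 a}
```

where c is the speed of light. A mode propagates only above its cutoff, so at the operating frequency the wider section is above both the TE10 and TE20 cutoffs, while the narrower section is above only the TE10 cutoff.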
The Port boundary conditions are formulated slightly differently from the Lumped Port boundary conditions in that you can add multiple types of ports to the same boundary. That is, the Port boundary conditions each contribute to (as opposed to the Lumped Ports, which override) other boundary conditions. The Port boundary conditions also specify the magnitude of the incoming wave in terms of the power in each mode.
Sketch of the waveguide system being considered.
The image below shows the solution to the above model with three Port boundary conditions, along with the analytic solution for the TE10 and TE20 modes for the electric field shape. Computing the correct solution to this problem does require adding all three of these ports. After computing the solution, the software also makes the S-parameters available for postprocessing, which indicates the relative split and phase shift of the incoming to outgoing signals.
Solution showing the different port modes and the computed electric field.
The Port boundary condition also supports Circular and Coaxial waveguide shapes, since these cases have analytic solutions. However, most waveguide cross sections do not. In such cases, the Numeric Port boundary condition must be used. This condition can be applied to an arbitrary waveguide cross section. When solving a model with a Numeric Port, it is also necessary to first solve for the fields at the boundary. For examples of this modeling technique, please see this example first, which compares against a semi-analytic case, followed by this example, which can only be solved by numerically computing the field shape at the ports.
Rectangular, Coaxial, and Circular Ports are predefined.
Numeric Ports can be used to define arbitrary waveguide cross sections.
The last case in which the Port boundary condition is appropriate is the modeling of plane waves incident upon quasi-infinite periodic structures such as diffraction gratings. In this case, we know that any incoming and outgoing waves must be plane waves. The outgoing plane waves will travel in many different directions (different diffraction orders), and we can determine these directions ahead of time, albeit not the relative magnitudes. In such instances, you can use the Periodic Port boundary condition, which allows you to specify the incoming plane wave polarization and direction. The software will then automatically compute the directions of the various diffracted orders and how much power goes into each diffracted order.
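The claim that the outgoing directions can be determined ahead of time is just the grating equation: for a grating of period d, illuminated at angle of incidence \theta_i with free-space wavelength \lambda, the in-plane diffracted angles \theta_m satisfy:

```latex
\sin\theta_m = \sin\theta_i + \frac{m \lambda}{d}, \qquad m = 0, \pm 1, \pm 2, \ldots
```

Only the orders with |\sin\theta_m| \le 1 propagate; the rest are evanescent.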
For an extensive discussion of the Periodic Port boundary condition, please read this previous blog post on periodic structures. For a quick introduction to the use of these boundary conditions, please see this model of plasmonic wire grating.
We have introduced the Lumped Port and the Port boundary conditions for modeling boundaries at which an electromagnetic wave can pass without reflection and where we know something about the shape of the fields. Alternative options for the modeling of boundaries that are non-reflecting in cases where we do not know the shape of the fields can be found here.
The Lumped Port boundary condition is available solely in the RF Module, while the Port boundary condition is available in the Electromagnetic Waves interface in the RF Module and the Wave Optics Module as well as the Beam Envelopes formulation in the Wave Optics Module. This previous blog post provides an extensive description of the differences between these modules.
But what about those boundaries that are not transparent, such as the conductive walls of the waveguide we have looked at today? These boundaries will reflect almost all of the wave and require a different set of boundary conditions, which we will look at next.
Let’s consider a thermostat similar to the one that you have in your home. Although there are many different types of thermostats, most of them use the same control scheme: A sensor that monitors temperature is placed somewhere within the system, usually some distance away from the heater. When the sensed temperature falls below a desired lower setpoint, the thermostat switches the heater on. As the temperature rises above a desired upper setpoint, the thermostat switches the heater off. This is known as a bang-bang controller. In practice, you typically only have a single setpoint, and there is an offset, or lag, which is used to define the upper and lower setpoints.
The objective of having different upper and lower setpoints is to minimize the switching of the heater state. If the upper and lower setpoints were the same, the thermostat would constantly be cycling the heater, which can lead to premature component failure. If you do want to implement such a simple single-setpoint control, you only need to know the current temperature of the sensor. This can be modeled in COMSOL Multiphysics quite easily, as we have highlighted in this previous blog post.
On the other hand, the bang-bang controller is a bit more complex since it does need to know something about the history of the system; the heater changes its state as the temperature rises above or below the setpoints. In other words, the controller provides hysteresis. In COMSOL Multiphysics, this can be implemented using the Events interface.
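The hysteresis logic itself is tiny. As a language-neutral illustration (plain Python with hypothetical names, not anything from the COMSOL API), a bang-bang controller with lower and upper setpoints can be sketched as:

```python
class BangBangController:
    """Bang-bang control with hysteresis: the heater switches off above
    the upper setpoint, on below the lower setpoint, and in between it
    keeps its previous state, which is what limits the switching rate."""

    def __init__(self, lower, upper, initially_on=True):
        assert lower < upper
        self.lower, self.upper = lower, upper
        self.on = initially_on

    def update(self, temperature):
        if temperature >= self.upper:
            self.on = False
        elif temperature <= self.lower:
            self.on = True
        # between the setpoints: keep the previous state (hysteresis)
        return self.on

ctrl = BangBangController(lower=45.0, upper=55.0)
print(ctrl.update(50.0))  # True  (still heating on the way up)
print(ctrl.update(56.0))  # False (crossed the upper setpoint)
print(ctrl.update(50.0))  # False (hysteresis: stays off until 45)
print(ctrl.update(44.0))  # True  (fell below the lower setpoint)
```

The middle branch is the history dependence discussed above: the output at 50°C depends on whether the temperature is rising or falling.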
When using COMSOL Multiphysics to solve time-dependent models, the Events interface is used to stop the time-stepping algorithms at a particular point and offer the possibility of changing the values of variables. The times at which these events occur can be specified either explicitly or implicitly. An explicit event should be used when we know the point in time when something about the system changes. We’ve previously written about this topic on the blog in the context of modeling a periodic heat load. An implicit event, on the other hand, occurs at an unknown point in time and thus requires a bit more set-up. Let’s take a look at how this is done within the context of the thermal model shown below.
Sketch of the thermal system under consideration.
Consider a simple thermal model of a lab-on-a-chip device modeled in a 2D plane. A one-millimeter-thick glass slide has a heater on one side and a temperature sensor on the other. We will treat the heater as a 1 W heat load distributed across part of the bottom surface, and we will assume that there is a very small, thermally insignificant temperature sensor on the top surface. There is also free convective cooling from the top of the slide to the surroundings, which is modeled with a heat flux boundary condition. The system is initially at 20°C, and we want to keep the sensor between 45°C and 55°C.
A Component Coupling is used to define the Variable, T_s, the sensor temperature.
The first thing we need to do — before using the Events interface — is define the temperature at the sensor point via an Integration Component Coupling and a Variable, as shown above. The reason why this is done is to make the temperature at this point, T_s, available within the Events interface.
The Events interface itself is added like any other physics interface within COMSOL Multiphysics. It is available within the Mathematics > ODE and DAE interfaces branch.
The Discrete States interface is used to define the state of the heater. Initially, the heater is on.
First, we use the Events interface to define a set of discrete variables, that is, variables that are discontinuous in time. These are appropriate for modeling on/off conditions, as we have here. The Discrete States interface shown above defines a variable, HeaterState, which multiplies the applied heat load in the Heat Transfer in Solids problem. The variable can be either one or zero, depending upon the system’s temperature history. The initial condition is one, meaning that we are starting our simulation with the heater on; it is important that we set the appropriate initial condition here. It is this HeaterState variable that will be changed, depending upon the sensor temperature, during the simulation.
Two Indicator States in the Events interface depend upon the sensor temperature.
To trigger a change in the HeaterState variable, we need to first introduce two Indicator States. The objective of the Indicator States is to define variables that will indicate when an event will occur. There are two indicator variables defined. The Up indicator variable is defined as:
T_s - 55[degC]
which goes smoothly from negative to positive as the sensor temperature rises above 55°C. Similarly, the Down indicator variable will go smoothly from negative to positive at 45°C. We will want to trigger a change in the HeaterState variable as these indicator variables change sign.
The HeaterState variable is reinitialized within the Events interface.
We use Implicit Event features, since we do not know ahead of time when these events will occur, but we do know under what conditions we want to change the state of the heater. As shown above, two Implicit Event features are used to reinitialize the state of the heater to either zero or one, depending upon whether the Up and Down indicator variables become greater than or less than zero, respectively. The event is triggered when the logical condition becomes true. Once this happens, the transient solver stops and restarts with the newly initialized HeaterState variable, which is used to control the applied heat, as illustrated below.
The HeaterState variable controls the applied heat.
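The switching logic that these event features implement can be sketched in ordinary Python. This is a conceptual sketch of the hysteresis behavior only, not COMSOL's event handling, and the function name is ours:

```python
def update_heater_state(T_s, heater_on):
    """Thermostat hysteresis: the heater state changes only when an
    indicator variable crosses zero in the triggering direction."""
    if heater_on and T_s > 55.0:        # Up = T_s - 55[degC] turns positive
        return False                    # reinitialize HeaterState to 0
    if not heater_on and T_s < 45.0:    # Down indicator triggers below 45[degC]
        return True                     # reinitialize HeaterState to 1
    return heater_on                    # between setpoints: state is unchanged
```

Between 45°C and 55°C, the returned state depends on the temperature history, which is exactly the hysteresis of a thermostat.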
When solving this model, we can make some changes to the solver settings to ensure that we have good accuracy and keep only the most important results. We will want to solve this model for a total time of 30 minutes, and we will store the results only at the time steps that the solver takes. These settings are depicted below.
The study settings for the Time-Dependent Solver set the total solution time from 0-30 minutes, with a relative tolerance of 0.001.
We will need to make some changes within the settings for the Time-Dependent Solver. These changes can be made prior to the solution by first right-clicking on the Study branch, choosing “Show Default Solver”, and then making the two changes shown below.
Modifications to the default solver settings. The event tolerance is changed to 0.001 and the output times to store are set to the steps taken by the solver.
Of course, as with any finite element simulation, we will want to study the convergence of the solution as the mesh is refined and the solver tolerances are made tighter. Representative simulation results are highlighted below and demonstrate how the sensor temperature is kept between the upper and lower setpoints. Also, observe that the solver takes smaller time steps immediately after each event, but larger time steps when the solution varies gradually.
The heater switches on and off to keep the sensor temperature between the setpoints.
We have demonstrated here how implicit events can be used to stop and restart the solver as well as change variables that control the model. This enables us to model systems with hysteresis, such as thermostats, and perform simulations with minimal computational cost.
First, let’s take a (very) brief conceptual look at the implicit time-stepping algorithms used when solving a time-dependent problem in COMSOL Multiphysics. These algorithms choose a time step based upon a user-specified tolerance. While this allows the software to take very large time steps when there are gradual variations in the solution, the drawback is that too loose a tolerance can cause the solver to skip over certain transient events.
To understand this, consider the ordinary differential equation:

\frac{\partial u}{\partial t} + u = f(t)

where the forcing function f(t) is a square unit pulse starting at t_s and ending at t_e. Given an initial condition, u_0, we can solve this problem for any length of time, either analytically or numerically. Here is the analytic solution for u_0=1:
In the above plot, we can observe the exponential decay and rise as the forcing function is zero or one. Let’s now look at the numerical solution to this problem for two different user-specified tolerances:
The numeric solution (red dots) is shown for a relative tolerance of 0.2 and 0.01 and is compared to the analytical result (grey line).
We can see from the plot above that a very loose relative tolerance of 0.2 does not accurately capture the switching of the load. At a tighter relative tolerance of 0.01 (the solver default), the solution is reasonably well resolved. We can also observe that the spacing of the points shows the varying time steps used by the solver. It is apparent that the solver takes larger time steps where the solution changes slowly and finer time steps when the heat load switches on and off.
However, if the tolerance is set too loosely, the solver may skip over the heat load change entirely when the width of the heat load gets very small. That is, if t_s and t_e move very close to each other, the heat load’s contribution to the solution becomes too small to be detected at the specified tolerance. We can of course mitigate this by using tighter tolerances, but a better option exists.
We can avoid having to tighten the tolerances by using Explicit Events, which are a way of letting the solver know that it should evaluate the solution at a specified point in time. From that point in time forward, the solver will continue as before until the next event is reached. Let’s look at the numeric solution to the above problem, with Explicit Events at t_s and t_e and solved with a relative tolerance of 0.2 (a very loose tolerance):
When using Explicit Events, the numerical solution — even with a very loose relative tolerance of 0.2 — compares quite well with the analytical result. Away from the events, large time steps are taken.
The above plot illustrates that the Explicit Events force a solution evaluation when the load switches on or off. The loose relative tolerance allows the solver to take large time steps when the solution varies gradually. Small time steps are taken immediately after the events to give good resolution of the variation in the solution. Thus, we have both good resolution of the heat load switching on or off and we take large time steps to minimize the overall computational cost.
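The same idea can be mimicked outside of COMSOL by restarting a generic ODE integrator at the known switching times. The sketch below assumes the equation du/dt + u = f(t) discussed above and uses SciPy, with illustrative values for t_s, t_e, and a deliberately loose tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

t_s, t_e, T = 2.0, 3.0, 4.0          # pulse on over [t_s, t_e], solve to t = T
u = np.array([1.0])                  # initial condition u_0 = 1

def rhs(t, u, load):
    return load - u                  # du/dt + u = f(t), with f constant per segment

# Restart the solver at each "event" so the discontinuity is never stepped over
for (a, b), load in [((0.0, t_s), 0.0), ((t_s, t_e), 1.0), ((t_e, T), 0.0)]:
    sol = solve_ivp(rhs, (a, b), u, args=(load,), rtol=0.2)  # very loose tolerance
    u = sol.y[:, -1]

# Analytic value at t = T, for comparison
u_exact = (1.0 + (np.exp(-t_s) - 1.0) * np.exp(-(t_e - t_s))) * np.exp(-(T - t_e))
```

Because the forcing function is smooth within each segment, even the loose tolerance resolves the solution well, just as the Explicit Events do in COMSOL.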
Now that we’ve introduced the concepts, we will take a look at implementing these Explicit Events.
We will begin with an existing example from the COMSOL Multiphysics Model Library and modify it slightly to include a periodic heat load and the Events interface. We will look at an example of the Laser Heating of a Silicon Wafer, where a laser is modeled as a distributed heat source moving back and forth across the surface of a spinning silicon wafer.
The laser heat source itself traverses back and forth along the centerline of the wafer with a period of 10 seconds. To minimize the temperature variation over the wafer during the heating process, we want to periodically turn the laser off while the heat source is in the center of the wafer. To model this, we will introduce an Analytic function, pulse(x), that uses the Boolean expression:
(x<2)||(x>3)
to evaluate pulse(t) to zero between t = 2 and t = 3 seconds, and to one otherwise. The Periodic Extension option is used to repeat this pattern every five seconds, as shown in the screenshot below.
The settings used to define a periodic function, as plotted.
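In ordinary code, the behavior of this periodic function could be written as follows. This is a sketch of the function's values, not COMSOL's Analytic function syntax; the 5 s periodic extension is equivalent to taking the argument modulo 5:

```python
def pulse(t):
    # COMSOL Boolean expressions evaluate to 1 (true) or 0 (false);
    # here the laser is off during [2, 3] s of each 5 s cycle
    tau = t % 5.0
    return 1.0 if (tau < 2.0) or (tau > 3.0) else 0.0
```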
We can use this function to modify the applied heat flux representing the laser heat source, as illustrated below:
The settings for the applied heat flux boundary condition.
The last thing that we need to do is to add the Events interface. This physics interface is found within Mathematics > ODE and DAE interfaces when using the Add Physics browser. Within the Events interface, add two Explicit Events with the settings shown below to define a periodic event starting at two and three seconds and repeating every five seconds.
The Explicit Events settings. The second of these events starts at 3 s.
No other changes are needed, but we can take a quick look at the solver settings:
The settings for the time-dependent solver.
Note that the entries in the Times field are the output times; these settings do not directly control the actual time steps taken by the solver. The time steps are controlled by the Relative Tolerance field (default value of 0.01) along with the Events, if they are present in the model.
A comparison of unpulsed (left) and pulsed (right) heat loads.
You can compare the results of this simulation to the original model to see the differences in temperature across the wafer. With a periodic heat load, the temperature rise is more gradual and the temperature variations at any point in time are smaller.
We have looked at using the Events interface for modeling a periodic heat load over time and introduced why it provides a good combination of accuracy and low computational requirements. There is a great deal more that you can do with the Events interface — if you would like to learn more, we encourage you to consult the documentation. An extended demonstration of the usage of the Events interface is featured in the Capacity Fade of a Li-ion Battery example from the Model Library.
On the other hand, when dealing with problems that are either convection dominated or wave-type problems (e.g., fluid flow models or transient structural response, respectively), then we would not want to introduce instantaneous changes in the loads. The reasons behind that — and alternative modeling techniques for such situations — will be the topic of an upcoming blog. Stay tuned!
We are often interested in modeling a radiating object, such as an antenna, in free space. We may be building this model to simulate an antenna on a satellite in deep space or, more often, an antenna mounted in an anechoic test chamber.
An antenna in infinite free space. We only want to model a small region around the antenna.
Such models can be built using the Electromagnetic Waves, Frequency Domain formulation in the RF Module or the Wave Optics Module. These modules provide similar interfaces for solving the frequency domain form of Maxwell’s equations via the finite element method. (For a description of the key differences between these modules, please see my previous blog post, titled “Computational Electromagnetics Modeling, Which Module to Use?“)
Let’s limit ourselves in this blog post to considering only 2D problems, where the electromagnetic wave is propagating in the x-y plane, with the electric field polarized in the z-direction. We will additionally assume that our modeling domain is purely vacuum, so that the frequency domain Maxwell’s equations reduce to the scalar Helmholtz equation:

\nabla^2 E_z + k_0^2 E_z = 0

where E_z is the electric field, the relative permeability and permittivity \mu_r = \epsilon_r = 1 in vacuum, and k_0 is the wavenumber.
Solving the above equation via the finite element method requires that we have a finite-sized modeling domain, as well as a set of boundary conditions. We want to use boundary conditions along the outside that are transparent to any radiation. Doing so will let our truncated domain be a reasonable approximation of free space. We also want this truncated domain to be as small as possible, since keeping our model size down reduces our computational costs.
Let’s now look at two of the options available within the COMSOL Multiphysics simulation environment for truncating your modeling domain: the scattering boundary condition and the perfectly matched layer.
One of the first transparent boundary conditions formulated for wave-type problems was the Sommerfeld radiation condition, which, for 2D fields, can be written as:

\lim_{r \to \infty} \sqrt{r} \left( \frac{\partial E_z}{\partial r} + i k_0 E_z \right) = 0

where r is the radial axis.
This condition is exactly non-reflecting when the boundaries of our modeling domain are infinitely far away from our source, but of course an infinitely large modeling domain is impossible. So, although we cannot apply the Sommerfeld condition exactly, we can apply a reasonable approximation of it.
Let’s now consider the boundary condition:

\mathbf{n} \cdot \nabla E_z + i k_0 E_z = 0
You can clearly see the similarities between this condition and the Sommerfeld condition. This boundary condition is more formally called the first-order scattering boundary condition (SBC) and is trivial to implement within COMSOL Multiphysics. In fact, it is nothing other than a Robin boundary condition with a complex-valued coefficient.
If you would like to see an example of a 2D wave equation implemented from scratch along with this boundary condition, please see the example model of diffraction patterns.
Now, there is a significant limitation to this condition. It is only non-reflecting if the incident radiation is exactly normally incident to the boundary. Any wave incident upon the SBC at a non-normal incidence will be partially reflected. The reflection coefficient for a plane wave incident upon a first-order SBC at varying incidence is plotted below.
Reflection of a plane wave at the first-order SBC with respect to angle of incidence.
We can observe from the above graph that as the incoming plane wave approaches grazing incidence, the wave is almost completely reflected. At a 60° incident angle, the reflected power is around 10%, so we would clearly like to have a better boundary condition.
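The quoted number follows from the textbook amplitude reflection coefficient of a first-order absorbing condition, R = (1 - cos θ)/(1 + cos θ). This closed form is used here purely for illustration; the curve in the figure is computed numerically:

```python
import numpy as np

def R1(theta):
    """Amplitude reflection coefficient of a first-order absorbing
    boundary condition for a plane wave at incidence angle theta (radians)."""
    return (1.0 - np.cos(theta)) / (1.0 + np.cos(theta))

theta = np.deg2rad(60.0)
reflected_power = R1(theta) ** 2   # fraction of incident power reflected
# At 60 degrees: R1 = 1/3, so about 11% of the power is reflected
```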
COMSOL Multiphysics also includes (as of version 4.4) the second-order SBC:

\mathbf{n} \cdot \nabla E_z + i k_0 E_z + \frac{i}{2 k_0} \frac{\partial^2 E_z}{\partial \tau^2} = 0

where \tau is the tangential direction along the boundary.
This equation adds a second term, which takes the second tangential derivative of the electric field along the boundary. This is also quite easy to implement within the COMSOL software architecture.
Let’s compare the reflection coefficient of the first- and second-order SBC:
Reflection of a plane wave at the first- and second-order SBC with respect to angle of incidence.
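Closed-form estimates for both curves can be sketched as follows. These are standard textbook expressions for ideal first- and second-order absorbing conditions, shown for illustration rather than as COMSOL's exact implementation:

```python
import numpy as np

def R1(theta):
    # First-order condition: amplitude reflection coefficient
    return (1.0 - np.cos(theta)) / (1.0 + np.cos(theta))

def R2(theta):
    # Second-order condition: the amplitude reflection is R1 squared
    return R1(theta) ** 2

angles = np.deg2rad(np.linspace(1.0, 89.0, 89))
assert np.all(R2(angles) < R1(angles))   # uniformly less reflection

# Reflected power at 75 degrees incidence
p1 = R1(np.deg2rad(75.0)) ** 2           # roughly 35% for the first-order SBC
p2 = R2(np.deg2rad(75.0)) ** 2           # roughly 12% for the second-order SBC
```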
We can see that the second-order SBC is uniformly better. We can now get to a ~75° incident angle before the reflected power reaches 10%. This is better, but still not the best we can achieve. Let’s now turn our attention away from boundary conditions and look at perfectly matched layers.
Recall that we are trying to simulate a situation such as an antenna in an anechoic test chamber, a room with pyramidal wedges of radiation absorbing material on the walls that will minimize any reflected signal. This can be our physical analogy for the perfectly matched layer (PML), which is not a boundary condition, but rather a domain that we add along the exterior of the model that should absorb all outgoing waves.
Mathematically speaking, the PML is simply a domain that has an anisotropic and complex-valued permittivity and permeability. For a sample of a complete derivation of these tensors, please see Theory and Computation of Electromagnetic Fields. Although PMLs are theoretically non-reflecting, they do exhibit some reflection due to the numerical discretization: the mesh. To minimize this reflection, we want to use a mesh in the PML that aligns with the anisotropy in the material properties. The appropriate PML meshes are shown below, for 2D circular and 3D spherical domains. Cartesian and spherical PMLs and their appropriate usage are also discussed within the product documentation.
Appropriate meshes for 2D and 3D spherical PMLs.
In COMSOL Multiphysics 5.0, these meshes can be automatically set up for 3D problems using the Physics-Controlled Meshing, as demonstrated in this video.
Let’s now look at the reflection from a PML with respect to incident angle as compared to the SBCs:
Reflection of a plane wave at the first- and second-order SBC and the PML with respect to angle of incidence.
We can see that the PML reflects the least amount across the widest range. There is still reflection as the wave is propagating almost exactly parallel to the boundary, but such cases are luckily rather rare in practice. An additional feature of the PML, which we will not go into detail about for now, is that it absorbs not only the propagating wave, but also any evanescent field. So, from a physical point of view, the PML truly can be thought of as a material with almost perfect absorption.
Clearly, the PML is the best of the approaches described here. However, the PML does use more memory as compared to the SBC.
So, if you are early in the modeling process and want to build a model that is a bit less computationally intensive, the second-order SBC is a good option. You can also use it in situations where you have a strong reason to believe that any reflections at the SBC won’t greatly affect the results you are interested in.
The first-order SBC is currently the default, for reasons of compatibility with previous versions of the software, but if you are using COMSOL Multiphysics version 4.4 or greater, the second-order SBC is usually the better choice. We have only introduced the plane-wave form of the SBC here, but cylindrical-wave and spherical-wave (in 3D) forms of the first- and second-order SBCs are also available. Although they use less memory, they all exhibit more reflection than the PML.
The SBC and the PMLs are appropriate conditions for open boundaries where you do not know much about the fields at the boundaries a priori. If, on the other hand, you want to model an open boundary where the fields are known to have a certain form, such as a boundary representing a waveguide, the Port and Lumped Port boundary conditions are more appropriate. We will discuss those conditions in an upcoming blog post.
There are many situations in which a rotating object is exposed to loads. For example, think of a rotisserie chicken or a kebab. Meat on a rotating spit is exposed to a heat load, usually a radiative heat source such as coals. Rotation is a simple way to distribute the applied heat. It keeps any regions from getting too hot or too cold and is an easy way to promote uniform cooking.
Now that I’ve got you licking your chops, let’s look at a slightly simpler case.
Today, we will look at the laser heating of a spinning silicon wafer. Although it isn’t quite as delicious to think about as rotating food, I’m sure you will find it equally informative.
As you may know, we already have an example of this in our Model Library and online Model Gallery. The existing example considers a wafer mounted on a rotating stage and heated by a laser traversing back and forth over the surface. The problem is solved in a stationary coordinate system. (Just think of yourself standing outside the process chamber and watching the wafer spinning on the stage.) We will call this the global coordinate system.
The laser is modeled as a heat source that moves back and forth along the global x-axis, while the wafer rotates about the global z-axis. The rotation of the wafer is modeled via the Translational Motion feature within the Heat Transfer in Solids physics interface, which adds a convective term to the governing transient heat transfer equation:

\rho C_p \frac{\partial T}{\partial t} - \nabla \cdot (k \nabla T) = -\rho C_p \mathbf{u} \cdot \nabla T
The right-hand side of the above equation accounts for the rotation of the wafer as \mathbf{u}, the velocity vector. This velocity vector can be interpreted as material entering and leaving each element in the finite element mesh — that is, we are solving a problem on an Eulerian frame. Since the geometry is a uniform disk and the applied velocity vector describes a rotation about the axis of the disk, this is a valid approach.
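For reference, the velocity field of a rigid rotation about the z-axis has the familiar form u = (-ωy, ωx). A minimal sketch follows; the function name is ours, and in COMSOL these components would be entered as expressions such as -omega*y and omega*x:

```python
import numpy as np

def rotation_velocity(x, y, omega):
    """In-plane velocity of a rigid rotation about the z-axis: u = omega x r."""
    return np.array([-omega * y, omega * x])
```

For example, a point at (1, 0) with ω = 2 rad/s moves in the +y direction at 2 m/s.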
The drawback, however, arises when you want to add more physics to the model. The Translational Motion feature is only available within the Heat Transfer physics interfaces, and many other physics interfaces should not be solved on an Eulerian frame.
Instead of solving this problem on an Eulerian frame in the global coordinate system, we can solve this problem on a Lagrangian frame, with a rotating coordinate system that moves with the material rotation of the wafer. (Think of yourself as a tiny person standing on the surface of the wafer. The surroundings will appear to be rotating, whereas the wafer will appear stationary.)
The right-hand side of the above governing heat transfer equation becomes zero, but we now need to consider a heat load that not only moves back and forth along the global x-axis but also rotates around the z-axis of our rotating coordinate system. Although this may sound complicated, it is quite straightforward to implement.
An observer in the global coordinate system sees a spinning wafer with a laser heat source traversing back and forth along the x-axis (left). An observer in a coordinate system rotating with the wafer sees the wafer as stationary, but the heat source moves in a complicated path in the x-y plane (right).
The General Extrusion operators provide a mechanism for transforming fields from one coordinate system to another. Some applications that we have already written about include submodeling, coupling different physics interfaces, and evaluating results at a moving point.
Here, we will use the General Extrusion operators to apply a rotational transformation to the applied loads. Our loads are applied in the rotating coordinate system via a coordinate transform from the global coordinate system given by the rotation matrix:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos(\omega t) & \sin(\omega t) \\ -\sin(\omega t) & \cos(\omega t) \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
We can start with the existing Laser Heating of a Silicon Wafer example and simply remove the existing Translational Motion feature. We then have to add a General Extrusion operator, which implements the above transformation, as shown in the screenshot below. We will also want to implement a second operator that applies the reverse transform, which is done by switching the sign of the rotation.
The general extrusion operation applies a rotational transform.
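In plain Python, the forward and reverse transforms differ only in the sign of the rotation angle. Here, ω is the wafer's angular velocity; the function names are ours, and the sign convention is illustrative:

```python
import numpy as np

def rotate(x, y, angle):
    """Rotate the point (x, y) by the given angle about the origin."""
    c, s = np.cos(angle), np.sin(angle)
    return c * x - s * y, s * x + c * y

def to_rotating_frame(x, y, omega, t):
    # Map global coordinates into the frame rotating with the wafer
    return rotate(x, y, -omega * t)

def to_global_frame(x, y, omega, t):
    # Reverse transform: switch the sign of the rotation
    return rotate(x, y, omega * t)
```

Applying one transform and then the other returns the original point, which is exactly how the pair of General Extrusion operators is used.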
The applied heat load is described via a user-defined function, hf(x,y,t), that describes how the laser heat load moves back and forth along the x-axis in the global coordinate system. This moving load is then transformed into the rotating coordinate system via the General Extrusion operator, as shown in the screenshot below.
The applied heat load in the rotating coordinate system, defined via the global coordinate system and the rotational transform.
That’s it — you can solve the model just as before.
The results will now be with respect to the rotating coordinate system. It can be more practical for us to plot the temperature solution with respect to the global coordinate system by using the General Extrusion operator that applies the reverse transformation. This will give us a visualization of the temperature field as if we were standing outside of the process chamber and were watching the spinning wafer with a thermal camera.
The second general extrusion operator is used to rotate the results back to the global coordinate system.
The results of the simulation of the temperature field over time will be identical regardless of whether you use the Translational Motion feature or the General Extrusion operator. Although the General Extrusion operator requires more effort to implement — and does take a bit longer to solve — it is needed if you are interested in more than just the thermal solution.
For example, if you also need to compute a temperature-driven chemical diffusion and reaction process or the evolution of thermal stresses during the wafer heating, these problems should be solved on a coordinate system that rotates with the wafer.
There are of course many other applications where you could use the General Extrusion operator, but I hope I’ve satisfied your appetite for today!
Capacitive sensors, like those found in touchscreen devices, consist of multiple conductive electrodes embedded within a transparent dielectric material (a glass or even a sapphire screen). The electrodes themselves are very thin, made of a nearly completely transparent material, and are invisible to the naked eye.
Let’s begin by considering a very elementary configuration that consists of two arrays of electrodes positioned at 90° to each other, as shown in the figure below.
Note that actual touchscreens are much more complex than what we will show here, but the modeling techniques will essentially be the same.
Simplified schematic of the key parts of a capacitive touchscreen sensor (not to scale).
An electrostatic field results whenever a voltage differential is applied between any two or more of the electrodes. Although the field is highest in the region between and around the electrodes, it does extend some distance away. When a conductive object (such as a finger) comes close to this region, the fields are altered and it becomes possible to sense the resultant capacitance change between the two active electrodes. It is this difference in capacitance that can be used to sense the position of a finger touching the screen.
While a subset of the electrodes has a potential difference applied, the other electrodes will either each be individually electrically isolated, or they will all be joined to each other while remaining electrically isolated from the excited electrodes. In either case, they will sit at a constant but unknown potential.
Correctly modeling these electrodes, as well as the surrounding metallic housings and other dielectric objects, is key to computing the capacitance changes. Let’s take a look at how to do this using the capabilities of the AC/DC Module.
For this relatively small device, we can reasonably model the entire structure; the sensor is only 20 x 30 millimeters in size and the spacing between the electrodes is 1 millimeter. For larger touchscreens, it would be more reasonable to consider just a small subsection of the entire screen.
A capacitive sensor is embedded within the glass watch face (clear). The wrist band and watch case are only for visualization purposes.
As shown in the following figures, the modeling domain is a cylindrical region. This region encompasses the glass screen, the finger, and an air volume around the watch. The effect of truncating the surrounding air volume drops off rapidly as its size increases, so a modest volume is sufficient.
Here, the boundaries of the air volume are set to a zero charge condition, mimicking a boundary to free space. Moreover, two of the parallel electrodes are set to the Ground boundary condition, fixing the voltage field to zero. The Terminal boundary condition is applied to two of the perpendicular electrodes, which fixes them to a constant voltage. The Terminal boundary condition will also automatically compute the capacitance. All of the other boundaries are modeled via the Floating Potential boundary condition.
Visualization of the finite element model. The finger (grey), electric shielding (orange), and all unexcited electrodes (red and green) are modeled with floating potential boundary conditions. Two electrodes (white and black) have a potential difference applied. The watch face (cyan) is partially hidden. Electric insulation boundary conditions (blue) are used on all other faces. The air and the watch face are volume meshed. For clarity, the mesh is only shown on some surfaces.
The Floating Potential boundary condition is used to represent a set of surfaces over which charge can freely redistribute itself. The condition simulates the boundaries of an object that sits at a constant but unknown electric potential as a consequence of an externally applied electrostatic field.
Several groups of faces use this Floating Potential boundary condition, such as the bottom face of the watch, which represents the electric shielding underneath the glass cover. The electrodes that are not currently being excited are part of a single Floating Potential boundary condition (under the assumption that they are all electrically connected). Note that it’s possible to use the Floating Potential Group option to allow each physically separate boundary to float to a different constant voltage. It is also possible to electrically connect any set of electrodes simply by making them part of the same Group.
The boundaries of the finger (when it’s included in the model) also have the Floating Potential boundary condition. This is under the assumption that the human body is a relatively good conductor in comparison to the air and dielectric layers.
There are just two different materials being used here. The built-in Air material is applied to most domains and sets the permittivity to unity. The built-in Quartz Glass material is used to assign a higher permittivity to the screen.
Although the screen itself is a sandwich of different materials, we can assume that all layers have the same material properties. Hence, we do not need to explicitly model the boundaries between them; all the different layers are treated as a single domain.
Color visualization of the log of the magnitude of the electric field. Since the finger is treated as a floating potential, the field inside is omitted.
Accurate results depend on having a finite element mesh that’s fine enough to resolve the spatial variations in the voltage field. Although we do not know ahead of time where the strong variations in the field will be, we can use adaptive mesh refinement to let the software determine where the smaller elements are needed.
Several levels of adaptive mesh refinement are used, and the results are presented in the table below. They were generated on an eight core Xeon system running at 3.7 GHz, with 64 GB of RAM:
| Degrees of Freedom (millions) | Memory Used (GB) | Solution Time, Excluding Remeshing (seconds) | Percentage Difference in Measured Capacitance |
| --- | --- | --- | --- |
| 0.125 (default “Normal” mesh setting) | 1.7 | 10 | 28% |
| 0.6 (after 1st adaptive mesh refinement) | 2.2 | 20 | 6% |
| 2.3 (2nd refinement) | 4.8 | 84 | 2% |
| 7.7 (3rd refinement) | 14 | 711 | 0.6% |
| 24.4 (4th refinement) | 47 | 2,960 | N/A |
From the table above, we can see that we can start with a very coarse mesh and use adaptive mesh refinement to obtain a more accurate value of the capacitance, at the cost of increased memory usage and solution time. The percentage difference in capacitance is computed relative to the most refined case.
So far, we have just looked at computing the capacitance between two of the electrodes in the array. In practice, we want to compute the capacitance between all of the electrodes, the Capacitance Matrix. This square symmetric matrix defines the relationship between applied voltage and charge on the electrodes in the system. For a system composed of n electrodes and one ground, the matrix is:

\begin{bmatrix} Q_1 \\ \vdots \\ Q_n \end{bmatrix} = \begin{bmatrix} C_{11} & \cdots & C_{1n} \\ \vdots & \ddots & \vdots \\ C_{n1} & \cdots & C_{nn} \end{bmatrix} \begin{bmatrix} V_1 \\ \vdots \\ V_n \end{bmatrix}

The diagonal components of this matrix are computed by taking the integral of the electric energy density over all domains:

C_{ii} = \frac{2 W_{e,i}}{V_i^2}

where W_{e,i} = \int_{\Omega} \frac{1}{2} \epsilon_0 \epsilon_r |\mathbf{E}|^2 \, d\Omega is the electric energy when only terminal i is excited at voltage V_i. The off-diagonal terms are given by:

C_{ij} = \frac{W_{e,ij}}{V_i V_j} - \frac{1}{2} \left( C_{ii} \frac{V_i}{V_j} + C_{jj} \frac{V_j}{V_i} \right)

where W_{e,ij} is the electric energy when terminals i and j are both excited.
These diagonal and off-diagonal terms are computed automatically by the software — but more on that in a later blog post.
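Those energy relations can be sketched in a few lines of generic code. Here, W_single and W_pair hold made-up energy values for illustration, and the function and its inputs are ours, not a COMSOL API:

```python
import numpy as np

def capacitance_matrix(W_single, W_pair, V=1.0):
    """Assemble a capacitance matrix from electrostatic energies.

    W_single[i]    : energy with only terminal i excited at voltage V
    W_pair[(i, j)] : energy with terminals i and j both excited at voltage V
    """
    n = len(W_single)
    C = np.zeros((n, n))
    for i in range(n):
        C[i, i] = 2.0 * W_single[i] / V**2        # diagonal from single-terminal energy
    for (i, j), W_ij in W_pair.items():
        C_ij = W_ij / V**2 - 0.5 * (C[i, i] + C[j, j])
        C[i, j] = C[j, i] = C_ij                  # symmetric off-diagonal terms
    return C

# Hypothetical energies (joules) for a two-electrode system at V = 1 V
C = capacitance_matrix([1.0, 0.5], {(0, 1): 1.0})
```

Note that the off-diagonal entries come out negative, as expected for this form of the capacitance matrix.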
We’ve looked at an example of a capacitive touchscreen device that was solved using the electrostatic modeling capabilities of the AC/DC Module. Although the geometry had been simplified for presentation purposes, the techniques outlined here can be used for more complex structures.
Whenever solving such finite element models, it will always be important to study the convergence of the desired quantities (in this case, usually the capacitance with respect to mesh refinement). The adaptive mesh refinement functionality greatly automates this model validation step.
When solving such large models, you can also benefit from using a distributed memory parallel solver for faster solution times. There is, of course, much more that you can do with COMSOL Multiphysics and the AC/DC Module than what is covered here. If you are interested in learning more, please contact us.
When we use the term CAD geometry, we are referring to a set of data structures that provide a very precise method for describing the shapes of parts. This method is called boundary representation, or B-rep. A B-rep model for solids consists of topological entities (faces, edges, and vertices) and their geometrical representation (surfaces, curves, and points). A face is a bounded portion of a surface, an edge is a bounded segment of a curve, and a vertex lies at a point.
In the B-rep data structures, surfaces are often represented by Non-Uniform Rational B-Splines, or NURBS. The B-rep model of a part is used as the basis for other operations, such as generating tooling paths in Computer Aided Manufacturing software, creating Rapid Prototyping files, and — most importantly — for your COMSOL Multiphysics modeling, generating the finite element mesh.
Your first choice in terms of element type will usually be the tetrahedral mesh for 3D models or a triangular mesh in 2D models. Any 3D geometry can be meshed with tetrahedral (“tet”) elements and any 2D geometry can be meshed with triangles. Additionally, these are the only elements that support Adaptive Mesh Refinement.
For the rest of this blog post, we will focus on the 3D case, since it is the most computationally challenging. At a very conceptual level, the COMSOL tetrahedral meshing algorithm first applies a triangular mesh to all of the surfaces of an object. This surface mesh is then used as the "seed" from which the tetrahedral elements "grow" inwards. As these tetrahedral elements grow together, their sizes are adjusted with the objective of keeping the elements as isotropic (similar edge lengths and included angles) as possible and of having reasonably gradual transitions between smaller and larger elements.
An issue that you can run into with this algorithm is that the meshing is done based upon the underlying topological entities. There is no way for the meshing algorithm to insert larger elements if the underlying entities are small. As we saw in the previous blog post “Working with Imported CAD Designs,” we can use the CAD repair and defeaturing tools to simplify the geometry.
However, when these algorithms attempt to remove topological entities, they often need to modify the underlying NURBS surfaces and are therefore somewhat limited. An alternative in COMSOL Multiphysics software is to use Virtual Operations, which can keep the existing geometrical representations as a basis for constructing a new alternative topological structure purely for the purposes of meshing and defining the physics.
Let us take a look at the virtual operations and see what you can do with them through a series of examples. The first ten options in the Virtual Operations menu actually only represent five unique capabilities, but they can be used in different ways.
The Virtual Operations menu.
Let’s look at a quick example for each of these five.
The below image demonstrates the Ignore Vertices feature (top) and the Form Composite Edges feature (bottom), which result in the same geometry.
Below is a demonstration of the Ignore Edges feature (top) and the Form Composite Faces feature (bottom), which result in the same geometry.
The following image demonstrates that the Ignore Faces feature (top) can be used to ignore any faces that lie between two adjacent domains, resulting in a single domain. The Form Composite Domains feature (bottom) will also combine multiple domains into a single domain.
As shown next, the Collapse Edges feature (top) and the Merge Vertices feature (bottom) will result in the same geometry. The Merge Vertices feature gives the additional option of choosing which vertex to remove and which one to keep.
The Collapse Faces command (top) and the Merge Edges command (bottom) stand out, since they have been designed to work even in those cases where the faces are not continuous. A useful application for these commands is to get rid of slivers resulting from the union of components that are slightly misaligned or do not fit for other reasons.
Lastly, the Mesh Control Points, Edges, Faces, and Domains features will hide points, edges, faces, or domains during the set-up of the physics; however, these geometric entities will still be present during the meshing step. By using these operations, you can gain greater control over the meshing process by designating geometric entities for the control of the mesh size and distribution. The physics set-up is kept simple by excluding the control entities. A typical area of application is in CFD simulations, where regions of steep gradients in a volume need a high mesh density.
It appears that we have a lot of options here, and you may wonder which of these features you should be using. In practice, the Form Composite Faces feature can usually be your first choice. Almost all of the issues that you will typically run into, with the exception of forming composite domains, can be handled with this feature.
Let’s look at a case from the COMSOL Multiphysics Model Library: the stresses and strains in a wrench. This is a structural model of a combination wrench. The provided CAD geometry has some relatively complex sculpted surfaces, fillets, and blends, which result in small faces in some parts of the model. These small faces force the tet mesher to use smaller elements, but, as we will see, the virtual operations can be used to avoid this.
A detailed view of a CAD file shows that small faces result in a fine mesh. Using the Virtual Operations allows larger elements in these regions.
We can use the Form Composite Faces feature to abstract whole sets of faces. You can simply select all of the faces and then deselect those faces that you do not want to abstract. This is acceptable and recommended if you know you do not need high fidelity of the mesh in certain regions where there are many small faces.
Virtual Operations can be used to combine sets of surfaces and significantly simplify some parts of the geometry.
We have now seen why you would want to use these Virtual Operations and the many ways in which they can be used. If you want to see a step-by-step guide for using these features to simplify your geometry, please see the Model Library example on using Virtual Operations on a Wheel Rim Geometry.
COMSOL Multiphysics has three add-on products for electromagnetic wave propagation: the Ray Optics Module, the Wave Optics Module, and the RF Module. Let’s take a look at the differences.
The RF Module and the Wave Optics Module both offer an Electromagnetic Waves, Frequency Domain interface, which solves the full-wave form of Maxwell’s equations via the finite element method (FEM). This requires a finite element mesh that is fine enough to resolve the electromagnetic waves, as shown in the figure below.
Full-wave simulation of scattering off of a metallic sphere. The variations in the magnitude of the electric field require a fine mesh everywhere.
This approach is appropriate when the solutions we are interested in have significant variations in all directions and are on a length scale comparable to the wavelength.
The Wave Optics Module also includes the Electromagnetic Waves, Beam Envelopes interface, which solves a modified version of the full-wave Maxwell’s equations, again via the finite element method. The Beam Envelopes formulation requires, as input, an approximate and slowly varying wave vector. Rather than solving for the electromagnetic fields themselves, this formulation solves for the slowly varying electric field amplitude.
Beam envelopes simulation of a directional coupler. The gradual variation in the field magnitude allows for a very coarse mesh in that direction.
The advantage of the Beam Envelopes formulation is that a very coarse mesh can be used in the direction of propagation. The limitation is that the wave vector field must be approximately uniform or slowly varying throughout the modeling domain. However, this is indeed the case for a range of important optical devices such as optical fibers or directional couplers.
The Ray Optics Module includes the Geometrical Optics interface, which treats electromagnetic waves as rays. It does not use the finite element method; instead, it traces the rays through the modeling domain by solving a set of ordinary differential equations for the position and wave vector. Although the domains through which the rays travel must be meshed, the mesh can be very coarse. Only at curved surfaces must the mesh be refined.
Geometrical optics simulation of a plane wave scattering from a cylinder. The rays diverge after being reflected by the curved surface, so the ray intensity decreases. A very coarse mesh can be used, except on the curved boundaries.
The Ray Optics Module traces rays of light propagating through different media and can consider many different behaviors of the rays at boundaries. The wavelength-dependence of the refractive indices of the media can be considered. It is also possible to compute the intensity, the phase, and the polarization of light and how these vary as the ray goes through different media and across boundaries.
Let’s now take a deeper look at the various physical phenomena that can be modeled.
Refraction and reflection at a dielectric interface.
A ray of light propagating through a medium of uniform refractive index will travel in a straight line. When the ray encounters an interface between materials of different refractive indices, the ray will be partially reflected and partially refracted. This behavior is governed by Snell’s Law and the Fresnel equations and is handled automatically, by the Ray Optics Module, at interfaces between different materials.
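As an illustration of the underlying relations (not of the module's implementation), the sketch below evaluates Snell's law and the polarization-averaged Fresnel power reflectance at a single planar interface; the refractive indices and angles are illustrative inputs:

```python
import math

def snell_fresnel(n1, n2, theta_i):
    """Return (refraction angle, reflectance) for unpolarized light
    crossing a planar interface from index n1 into index n2."""
    s = n1 * math.sin(theta_i) / n2          # Snell's law: n1 sin(i) = n2 sin(t)
    if abs(s) > 1.0:
        return None, 1.0                     # total internal reflection
    theta_t = math.asin(s)
    # Fresnel amplitude coefficients for s- and p-polarization
    rs = (n1*math.cos(theta_i) - n2*math.cos(theta_t)) / \
         (n1*math.cos(theta_i) + n2*math.cos(theta_t))
    rp = (n2*math.cos(theta_i) - n1*math.cos(theta_t)) / \
         (n2*math.cos(theta_i) + n1*math.cos(theta_t))
    return theta_t, 0.5 * (rs**2 + rp**2)    # average over the two polarizations

# Air-to-glass at normal incidence: R = ((1 - 1.5)/(1 + 1.5))^2 = 4%
theta_t, R = snell_fresnel(1.0, 1.5, 0.0)
print(round(R, 3))   # -> 0.04
```

The same function also flags total internal reflection when the ray approaches a less dense medium too obliquely.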
A light ray bends as it passes through a graded index material.
Light propagating through a medium with a non-uniform refractive index will bend in the direction of a relatively higher refractive index. Such graded index behavior can be modeled simply by defining the refractive index as a smooth, spatially varying function. The Ray Optics Module inherits the powerful tools of COMSOL Multiphysics for creating spatially varying materials.
For instance, the Luneburg Lens example model available in the Model Library of the Ray Optics Module defines the refractive index simply as sqrt(2-(x^2+y^2+z^2)). Alternatively, you can define spatially distributed media as a look-up table from a file, or more spectacularly, as a function of another physics field quantity such as n = f(T(x,y,z)), where n is the refractive index, f is some function, and T(x,y,z) is a spatially varying temperature field computed by a heat transfer simulation in COMSOL Multiphysics. More on this in a blog post coming soon.
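The bending described above can be sketched numerically. The following is a minimal illustration, not the module's actual solver: it integrates the textbook ray equation d/ds (n dr/ds) = grad(n) through the classical Luneburg profile n = sqrt(2 - r^2) (for r <= 1) with a crude first-order Euler scheme; the step size and launch point are arbitrary choices:

```python
import math

def luneburg_n(x, y):
    """Luneburg lens index profile in 2D: n = sqrt(2 - r^2) inside the unit disk."""
    r2 = x*x + y*y
    return math.sqrt(2.0 - r2) if r2 < 1.0 else 1.0

def grad_n(x, y, h=1e-6):
    # central-difference approximation of the index gradient
    return ((luneburg_n(x + h, y) - luneburg_n(x - h, y)) / (2*h),
            (luneburg_n(x, y + h) - luneburg_n(x, y - h)) / (2*h))

def trace(x, y, dx, dy, ds=1e-3, max_steps=20000):
    n = luneburg_n(x, y)
    px, py = n*dx, n*dy              # optical momentum p = n * (unit tangent)
    entered = False
    for _ in range(max_steps):
        if x*x + y*y < 1.0:
            entered = True
        elif entered:
            break                    # the ray has passed through and left the lens
        gx, gy = grad_n(x, y)
        px += gx*ds; py += gy*ds     # dp/ds = grad(n)
        n = luneburg_n(x, y)
        x += px/n*ds; y += py/n*ds   # dr/ds = p/n
    return x, y

# A ray entering parallel to the x-axis bends toward the higher-index center
# and exits near the focal point (1, 0) on the opposite rim
print(trace(-1.2, 0.4, 1.0, 0.0))
```

The characteristic property of the Luneburg lens, that all incoming parallel rays focus at a single point on the opposite rim, falls out of this integration.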
Specular (left) and diffuse (right) reflection of a ray of light.
At boundaries, the ray can propagate through unimpeded as if the boundary were completely transparent, it can be completely absorbed, or it can be reflected. Reflections occur at surfaces of materials through which light cannot pass and will be either specular, diffuse, or a mixture of the two. Specular reflection occurs on highly polished metal surfaces, whereas most other surfaces reflect more diffusely.
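The specular rule itself is just a vector formula: the reflected direction is d - 2(d·n)n for incident direction d and unit surface normal n. A minimal sketch (a diffuse, Lambertian bounce would instead draw a random direction from the hemisphere around n):

```python
def reflect(d, n):
    """Specular reflection of direction d about unit surface normal n."""
    dot = sum(di*ni for di, ni in zip(d, n))
    return tuple(di - 2.0*dot*ni for di, ni in zip(d, n))

# A ray heading down onto a horizontal mirror bounces back up:
# the tangential component is kept, the normal component flips sign
print(reflect((0.5, 0.0, -0.5), (0.0, 0.0, 1.0)))   # -> (0.5, 0.0, 0.5)
```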
Reflection and transmission through a (possibly multi-layer) thin dielectric film.
It is also possible to model structures composed of thin layers of different materials, such as dielectric mirrors or anti-reflective coatings. These can be modeled by adding one or more Thin Dielectric Film nodes to a boundary. The effective reflection and transmission coefficients through the multi-layer stack are then computed without explicitly modeling each layer. This is demonstrated in the Anti-Reflective Coating, Multilayer model.
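As a rough illustration of what such a boundary condition computes, here is a sketch of the standard characteristic-matrix (transfer-matrix) method for a thin-film stack at normal incidence. This is the textbook algorithm, not COMSOL's implementation, and the single quarter-wave layer below is an invented example:

```python
import cmath, math

def stack_reflectance(n0, ns, layers, wavelength):
    """Power reflectance of a film stack at normal incidence.
    layers: list of (refractive index, thickness), from incidence side down.
    """
    B, C = 1.0 + 0j, ns + 0j                 # start from the substrate side
    for n, d in reversed(layers):
        delta = 2.0 * math.pi * n * d / wavelength   # phase thickness
        m11, m12 = cmath.cos(delta), 1j * cmath.sin(delta) / n
        m21, m22 = 1j * n * cmath.sin(delta), cmath.cos(delta)
        B, C = m11*B + m12*C, m21*B + m22*C  # multiply in this layer's matrix
    r = (n0*B - C) / (n0*B + C)              # effective amplitude reflection
    return abs(r)**2

lam = 550e-9                                 # design wavelength
n0, ns = 1.0, 2.25                           # air over a high-index substrate
n1 = math.sqrt(n0 * ns)                      # ideal single-layer AR index
R = stack_reflectance(n0, ns, [(n1, lam / (4*n1))], lam)
print(f"{R:.4f}")                            # -> 0.0000 at the design wavelength
```

With an empty layer list, the function reduces to the bare Fresnel reflectance of the substrate, which makes it easy to compare coated and uncoated cases.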
Reflection and transmission into various diffraction orders from an optical grating.
On the other hand, structures with periodic wavelength-scale variation in the plane of the boundary can be modeled with the Grating boundary condition. Diffraction gratings have periodic variations in their structure and can split and diffract a ray into several different rays, which are termed diffraction orders. It is also possible to compute the characteristics of the grating via the full-wave formulation and use this as an input, as demonstrated in the Diffraction Grating model.
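The allowed orders follow from the classical grating equation, sin(theta_m) = sin(theta_i) + m*lambda/d. The sketch below, with invented wavelength and pitch, lists the orders that can propagate:

```python
import math

def propagating_orders(wavelength, pitch, theta_i):
    """Return {order m: diffraction angle in degrees} for propagating orders."""
    orders = {}
    for m in range(-10, 11):                  # generous range of candidate orders
        s = math.sin(theta_i) + m * wavelength / pitch
        if abs(s) <= 1.0:                     # |sin| > 1 means the order is evanescent
            orders[m] = math.degrees(math.asin(s))
    return orders

# 550 nm light at normal incidence on a grating with 1200 nm pitch:
# orders -2 through +2 propagate
print(propagating_orders(550e-9, 1200e-9, 0.0))
```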
The polarization of a ray of light changes as it goes through various optical elements.
Lastly, boundary conditions can be used to manipulate the polarization of the ray. Linear polarizers, linear and circular wave retarders, ideal depolarizers, and optical components with arbitrary Mueller matrices can all be represented as boundary conditions. These conditions are demonstrated in the Linear Wave Retarder model.
The rays themselves can be launched into the model from domains, boundaries, and any user-specified points. The rays can have a spherical, hemispherical, or conical distribution. It is also possible to model illumination from the sun by specifying a position on Earth. Along with the path of the ray, the intensity, polarization, and phase can also be computed, if desired. This makes it possible to compute both optical intensity on surfaces and interference patterns. Examples of this include modeling a solar dish and computing the interference pattern of a Michelson interferometer.
The Ray Optics Module does not directly consider interactions with structures that have size comparable to the wavelength.
For example, consider a plane wave scattering off of a diamond-shaped metallic object as shown below. If the wavelength is comparable to the object size, there will be significant diffraction around the object and the region behind it will get illuminated. Similarly, a plane wave incident upon a wavelength-scale slit will experience significant diffraction and broadening. Modeling either of these effects requires a full-wave approach using the Wave Optics Module or the RF Module.
A diamond-shaped object scatters an electromagnetic wave in all directions (left). There is significant illumination behind the scatterer. A plane wave incident upon a slit (right) will spread out. The color in both plots indicates the electric field norm.
The Geometrical Optics approach, on the other hand, does not consider these diffractive phenomena. Rays representing a plane wave will be reflected specularly from the surfaces and will not illuminate the region behind the object. Rays passing through a slit will not spread out. These are both valid approximations if the wavelength of light is much smaller than the object’s size.
A diamond-shaped object in a plane wave using the Geometrical Optics approach (left) and a plane wave passing through a slit (right). Neither case shows any diffraction.
Currently, the Ray Optics Module also does not consider refractive indices that are dependent upon the intensity of light. However, such problems can be addressed with the Beam Envelopes formulation in the Wave Optics Module, as demonstrated in the example of Self Focusing in BK-7 Glass.
The complete capabilities of the Ray Optics Module are demonstrated by the Model Library examples, available within the software and on our online Model Gallery.
If you are interested in using the Ray Optics Module for any of your modeling needs, please contact us.
The plot below shows the amount of memory needed to solve various 3D finite element problems in terms of the number of degrees of freedom (DOF) in the model.
Memory requirements (with a second-polynomial curve fit) with respect to degrees of freedom for various representative cases.
There are five different cases presented here:
What you should see from this graph is that, with a computer that has 64 GB of random access memory (RAM), you can solve problems that range in size anywhere from ~26,000 DOF on the low end all the way up to almost 14 million degrees of freedom. So why this wide range of numbers? Let’s look at how to interpret these data…
For most problems, COMSOL Multiphysics solves a set of governing partial differential equations via the finite element method, which takes your CAD model and subdivides the domains into elements, which are defined by a set of nodes on the boundaries.
At each node, there will be at least one unknown, and the number of these unknowns is based upon the physics that you are solving. For example, when solving for temperature, you only have a single unknown (called T, by default) at each node. When solving a structural problem, you are instead computing strains and the resultant stresses, thus you are solving for three unknowns (u,v,w), which are the displacements of each node in the x-y-z space.
For a turbulent fluid flow problem, you are solving for the fluid velocities (also called u,v,w by default) and pressure (p) as well as extra unknowns describing the turbulence. If you are solving a diffusion problem with many different species, you will have as many unknowns per node as you have chemical species. Additionally, different physics within the same model can have a different default discretization order, meaning there can be additional nodes along the element edges, as well as in the element interior.
A second-order tetrahedral element solving for the temperature field, T, will have a total of 10 unknowns per element, while a first-order element solving the laminar Navier-Stokes equations for velocity, \mathbf{u}=(u_x,u_y,u_z), and pressure, p, will have a total of 16 unknowns per element.
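These per-element counts are easy to verify with a little arithmetic. The sketch below simply multiplies the node count of a Lagrange tetrahedron (4 vertices for first order; 4 vertices plus 6 edge midpoints, 10 nodes, for second order) by the number of fields solved for at each node:

```python
def unknowns_per_tet(order, fields_per_node):
    """Unknowns carried by a single Lagrange tetrahedral element."""
    nodes = {1: 4, 2: 10}[order]   # vertices, plus edge midpoints at order 2
    return nodes * fields_per_node

# Second-order tet, temperature only (T): 10 nodes x 1 field
print(unknowns_per_tet(order=2, fields_per_node=1))   # -> 10

# First-order tet, laminar Navier-Stokes (u_x, u_y, u_z, p): 4 nodes x 4 fields
print(unknowns_per_tet(order=1, fields_per_node=4))   # -> 16
```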
COMSOL Multiphysics will use the information about the physics, material properties, boundary condition, element type, and element shape to assemble a system of equations (a square matrix), which need to be solved to get the answer to the finite element problem. The size of this matrix is the number of degrees of freedom (DOFs) of the model, where the number of DOFs is a function of the number of elements, the discretization order used in each physics, and the number of variables solved for.
These systems of equations are typically sparse, which means that most of the terms in the matrix are zero. For most types of finite element models, each node is only connected to the neighboring nodes in the mesh. Note that element shape matters; a mesh composed of tetrahedra will have different matrix sparsity from a mesh composed of hexahedra (brick) elements.
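To see why sparsity matters so much for memory, compare the storage needed for a sparse versus a fully dense system matrix. The sketch below assumes roughly 15 nonzeros per matrix row, an illustrative figure for a tetrahedral mesh with one unknown per node, not a COMSOL-reported value:

```python
def matrix_memory_gb(dofs, nnz_per_row=15, bytes_per_entry=8):
    """Storage (in GB) for the nonzero entries of a sparse matrix
    versus a fully dense matrix of the same size."""
    sparse = dofs * nnz_per_row * bytes_per_entry
    dense = dofs * dofs * bytes_per_entry
    return sparse / 1e9, dense / 1e9

sparse_gb, dense_gb = matrix_memory_gb(1_000_000)
print(f"sparse: {sparse_gb:.2f} GB, dense: {dense_gb:.0f} GB")
```

For a million DOF, the sparse matrix occupies a small fraction of a gigabyte, while a dense matrix of the same size would be thousands of gigabytes, which is why non-local couplings are so costly.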
Some models will include non-local couplings between nodes, resulting in a relatively dense system matrix. Radiative heat transfer is a typical problem that will have a dense system matrix. There is radiative heat exchange between any surfaces that can see each other, so each node on the radiating surfaces is connected to every other node. The result of this is clearly seen in the plots I shared at the beginning of this blog post: the thermal model that includes radiation has much higher memory requirements than the thermal model without radiation.
You should see, at this point, that it is not just the number of DOFs, but also the sparsity of the system matrix that will affect the amount of memory needed to solve your COMSOL Multiphysics model. Let’s now take a look at how your computer manages memory.
COMSOL Multiphysics uses the memory management algorithms provided by the operating system (OS) that you are working with. The performance of these algorithms is quite similar across all of the latest OSs that we support.
The OS creates a Virtual Memory Stack, which the COMSOL software sees as a continuous space of free memory. This continuous block of virtual memory can actually map to different physical locations, so some part of the data may be stored within RAM while other parts are stored on the hard disk. The OS manages where (in RAM or on disk) the data is actually stored, and by default you do not have any control over this. The amount of virtual memory is controlled by the OS, and it is not something that you usually want to change.
Under ideal circumstances, the data that COMSOL Multiphysics needs to store will fit entirely within RAM, but once there is no longer enough space, part of the data will spill over to the hard disk. When this happens, performance of all programs running on the computer will be noticeably degraded.
If too much memory space is requested by the COMSOL software, then the OS will determine that it can no longer manage memory efficiently (even via the hard disk) and will tell COMSOL Multiphysics that there is no more memory available. This is the point at which you will get an out-of-memory message and COMSOL Multiphysics will stop trying to solve the model.
Next, let’s take a look at what COMSOL Multiphysics is doing when you get this out-of-memory message and what you can do about it.
When you set up and solve a finite element problem, there are three memory-intensive steps: Meshing, Assembly, and Solving.
Direct solvers are very robust and can handle essentially any problem that will arise during finite element modeling. The sparse matrix direct solvers used by COMSOL Multiphysics are the MUMPS, PARDISO, and SPOOLES solvers. There is also a dense matrix solver, which should only be used if you know the system matrix is fully populated.
The drawback to all of these solvers is that the memory and time required go up very rapidly as the number of DOFs and the matrix density increase; the scaling is very close to quadratic with respect to the number of DOFs.
As of writing this, both the MUMPS and PARDISO direct solvers in the COMSOL software come with an out-of-core option. This option overrides the OS’s memory management and lets COMSOL Multiphysics directly control how much data will be stored in RAM and when and how to start writing data to the hard drive. Although this is superior to the OS’s memory management algorithm, it will be slower than solving the problem entirely in RAM.
If you have access to a cluster supercomputer, such as the Amazon Elastic Compute Cloud™ from Amazon Web Services™, you can also use the MUMPS solver to distribute the problem over many nodes of the cluster. Although this does allow you to solve much larger problems, it is also important to realize that solving on a cluster may be slower than solving on a single machine.
Due to their aggressive (approximately quadratic) scaling with problem size, the direct solvers are only used as the default for a few of the 3D physics interfaces (although they are almost always used for 2D models, for which their scaling is much better).
The most common case where the direct solver is used by default is for 3D structural mechanics problems. While this choice has been made for robustness, it is also possible to use an iterative solver for many structural mechanics problems. The method for switching the solver settings is demonstrated in the example model of the stresses in a wrench.
Iterative solvers require much less memory than the direct solvers, but they require more customization of their settings to work well.
With all of the predefined physics interfaces where it is reasonable to do so, we have provided default iterative solver suggestions that are selected for robustness. These settings are handled automatically and do not require any user interaction, so as long as you are using the built-in physics interfaces, you do not need to worry about these settings.
The memory and time needed by an iterative solver will be much less than for a direct solver on the same problem, so when they can be used, they should be. Their scaling as the problem size increases is much closer to linear, as opposed to the quadratic scaling typical of the direct solvers.
At the time of writing this, the iterative solvers should only be used on a computer that has enough RAM to solve the problem; if you get an out-of-memory message when using an iterative solver, you should upgrade the amount of RAM on your computer.
It is also possible to use an iterative solver on a cluster computer using Domain Decomposition methods. This class of iterative methods has recently been introduced into the software, so stay tuned for more details about this in the future.
Although the data shown above do provide an upper and lower bound of memory requirements, these bounds are quite wide. We’ve seen that introducing a small change to a model, such as introducing a non-local coupling like radiative heat transfer, can significantly change memory requirements. So let’s introduce a general recipe for how you can predict memory requirements.
Start with a representative model that contains the combination of physics you want to solve and approximates the true geometric complexity. Begin with as coarse a mesh as possible, and then gradually increase the mesh refinement. Alternatively, start with a smaller representative model and gradually increase the size.
Solve each model and monitor memory requirements. Observe the default solver being used. If it is a direct solver, use the out-of-core option in your tests, or consider if an iterative solver can be used instead. Fit a second-order polynomial to the data, and use this curve to predict the memory required by the size of the larger problem that you eventually want to solve. This is the most reliable way to predict the memory requirements of large, complex, 3D multiphysics models.
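As a sketch of this recipe, the snippet below passes a quadratic through three measured (DOF, memory) points and evaluates it at a larger target size. The measurements are invented for illustration, not benchmark data:

```python
def quadratic_through(points):
    """Return a callable quadratic interpolating three (x, y) points
    (Lagrange form), for extrapolating memory usage versus DOF."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def p(x):
        l0 = (x - x1)*(x - x2) / ((x0 - x1)*(x0 - x2))
        l1 = (x - x0)*(x - x2) / ((x1 - x0)*(x1 - x2))
        l2 = (x - x0)*(x - x1) / ((x2 - x0)*(x2 - x1))
        return y0*l0 + y1*l1 + y2*l2
    return p

# memory (GB) measured at three mesh refinements of a small test model
measured = [(1e5, 2.0), (2e5, 5.0), (4e5, 14.0)]
predict = quadratic_through(measured)
print(f"{predict(2e6):.0f} GB")   # memory estimate at the 2M-DOF target size
```

With more than three measurements, a least-squares quadratic fit would be the natural generalization of the same idea.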
As we have now seen, the memory needed will depend upon (at least) the geometry, mesh, element types, combination of physics being solved, couplings between the physics, and the scope of any non-local model couplings. At this point, it should also be made clear that it is not generally possible to predict the memory requirements in all cases. You may need to repeat this procedure several times for variations of your model.
It is also fair to say that setting up and solving large models in the most efficient way possible is something that can require some deep expertise of not just the solver settings, but also of finite element modeling in general. If you do have a particular modeling concern, please contact your COMSOL Support Team for guidance.
You should now have an understanding of why the memory requirements for a COMSOL Multiphysics model can vary dramatically. You should also be able to predict with confidence the memory requirements of your larger models and decide what kind of hardware is appropriate for your modeling challenges.
Amazon Web Services and Amazon Elastic Compute Cloud are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.