One of the first physical laws that we learn as engineers is Ohm’s law: The current through a device equals the applied voltage difference divided by the device’s electrical resistance, or *I* = *V/R _{e}*, where *R _{e}* is the electrical resistance.

Shortly after learning that law, we probably also learned about the dissipated power within the device, which equals the current times the voltage difference, or *Q* = *IV*, which we could also write as *Q* = *I ^{2}R_{e}* or *Q* = *V ^{2}/R_{e}*.

We start our discussion from this point and consider a completely lumped model of a device. (Yes, we’re starting so simple that we don’t even need to use the COMSOL Multiphysics® software for this part!) Let’s consider a lumped device with electrical resistance of *R _{e}* = 1 Ω and thermal resistance of *R _{t}* = 1 K/W, so that the lumped device temperature is *T* = *T _{ambient}* + *QR _{t}*.

We choose an ambient temperature of 300 K, or 27°C, which is about room temperature. Let’s now plot out the device lumped temperature as a function of increasing voltage (from 0 to 10 V) and current (from 0 to 10 A), as shown in the image below. Unsurprisingly, we see a quadratic increase in temperature.

*Device temperature as a function of applied voltage (left) and applied current (right), assuming constant properties.*
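These constant-property curves are easy to reproduce by hand. The minimal Python sketch below evaluates the lumped relation *T* = *T _{ambient}* + *QR _{t}* for both drive modes, with *R _{e}* = 1 Ω and a thermal resistance of *R _{t}* = 1 K/W (the value consistent with the 20 V → 700 K arithmetic discussed next):

```python
# Lumped device: T = T_ambient + Q * R_t, with the dissipated power
# Q = V^2 / R_e (voltage-driven) or Q = I^2 * R_e (current-driven).
T_AMBIENT = 300.0  # K
R_E = 1.0          # ohm, electrical resistance
R_T = 1.0          # K/W, thermal resistance

def temperature_voltage_driven(V):
    """Device temperature for an applied voltage V, constant properties."""
    return T_AMBIENT + (V**2 / R_E) * R_T

def temperature_current_driven(I):
    """Device temperature for an applied current I, constant properties."""
    return T_AMBIENT + (I**2 * R_E) * R_T

for V in (0.0, 5.0, 10.0):
    print(f"V = {V:5.1f} V  ->  T = {temperature_voltage_driven(V):6.1f} K")
```

Both curves are the same quadratic for these parameter values, which is why the two plots look alike.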

We might think that we can use the curve to predict a wider range of operating conditions. Suppose we want to drive the device up to its failure temperature, where the material melts or vaporizes. Let’s say that this material will vaporize when its temperature gets up to 700 K (427°C). Based on this curve, some simple math would indicate that the maximum voltage is 20 V and the maximum current is 20 A, but this is quite wrong!

At this point, you’re probably ready to point out the simple mistake that we’ve made: Electrical resistance is not constant with temperature. For most metals, the electrical conductivity goes down with increasing temperature, and since resistivity is the inverse of conductivity, the device resistivity goes up. So, let’s introduce a temperature dependence for the resistivity:

R_e = \rho_0(1+\alpha_e(T-T^e_0))

This is known as a linearized resistivity model, where *ρ*_{0} is the reference resistivity at the reference temperature *T ^{e}_{0}*, and *α _{e}* is the temperature coefficient of electrical resistivity.

Let’s choose *ρ*_{0} = 1 Ω, *T ^{e}_{0}* = 300 K, and *α _{e}* = 1/200 K. Now, the resistance is 1 Ω at a device temperature of 300 K and 2 Ω at a temperature 200 K above the reference temperature. The equations for lumped temperature as a function of voltage and current now become:

T = T_{ambient} + \frac{V^2}{\rho_0(1+\alpha_e(T-T^e_0))} R_t

and

T = T_{ambient} + I^2 \rho_0(1+\alpha_e(T-T^e_0)) R_t

These equations are a bit more complicated (the first is a quadratic equation in terms of *T*) but still possible to solve by hand. The plots of temperature as a function of increasing voltage and current are displayed below.

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical resistivity as a function of temperature.*
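With *R _{e}* now a function of temperature, the implicit equations above can still be solved in closed form: the voltage-driven case is a quadratic in the temperature rise, and the current-driven case is linear in it. A short Python sketch, using the parameter values from the text:

```python
import math

# Temperature-dependent electrical resistance, constant thermal resistance:
#   R_e(T) = rho0 * (1 + alpha_e * (T - T0)),  R_t = const.
T0 = 300.0           # K, reference temperature
T_AMB = 300.0        # K, ambient temperature
RHO0 = 1.0           # ohm, reference resistance
ALPHA_E = 1.0 / 200  # 1/K, temperature coefficient
R_T = 1.0            # K/W, thermal resistance

def T_voltage(V):
    # dT * (1 + alpha_e * dT) = V^2 * R_t / rho0  (quadratic in dT)
    a, b, c = ALPHA_E, 1.0, -(V**2) * R_T / RHO0
    dT = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return T_AMB + dT

def T_current(I):
    # dT = I^2 * rho0 * R_t * (1 + alpha_e * dT)  (linear in dT)
    q = I**2 * RHO0 * R_T
    return T_AMB + q / (1.0 - q * ALPHA_E)

print(f"10 V: T = {T_voltage(10.0):.1f} K (below the constant-property 400 K)")
print(f"10 A: T = {T_current(10.0):.1f} K (above the constant-property 400 K)")
```

Note that the current-driven expression already blows up when *q·α _{e}* approaches 1, a first hint of the runaway behavior discussed later.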

For the voltage-driven problem, as the temperature rises, the resistance rises. Since the resistance appears in the denominator of the temperature expression, a higher resistance lowers the temperature, so the temperature will be *lower* than in the constant resistivity case. If we drive the device with constant current, the temperature-dependent resistance appears in the numerator, so as we increase the current, the resistive heating will be *higher* than in the constant resistivity case.

We might be tempted at this point to compute the maximum voltage or current that this device can sustain, but you are probably already realizing the second mistake we’ve made: We also need to incorporate the temperature dependence of the thermal resistance. For metals, it’s reasonable to assume that the electrical and thermal conductivity will show the same trends. Thus, let’s use a linearized expression similar to the one we used before:

R_t = r_0(1+\alpha_t(T-T^t_0))

Now, our voltage-driven and current-driven equations for temperature become:

T = T_{ambient} + \frac{V^2\, r_0(1+\alpha_t(T-T^t_0))}{\rho_0(1+\alpha_e(T-T^e_0))}

and

T = T_{ambient} + I^2\, \rho_0(1+\alpha_e(T-T^e_0))\, r_0(1+\alpha_t(T-T^t_0))

Although only slightly different from before, these nonlinear equations are now quite a bit more difficult to solve. Simulation software is starting to look more attractive! Once we do solve these equations (let’s set *r*_{0} = 1 K/W, *α _{t}* = 1/400 K, and *T ^{t}_{0}* = 300 K), we can plot the device temperature, as shown below.

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical and thermal resistivity as a function of temperature.*
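With both resistances temperature-dependent, a hand solution is tedious, but a few lines of bisection reproduce the behavior just described. This is only a sketch: the 700 K failure temperature from the text serves as the upper search bracket, and a missing sign change inside that bracket is how the sketch flags a current with no steady-state solution (thermal runaway):

```python
# Both resistances temperature-dependent (parameter values from the text):
#   R_e(T) = rho0 * (1 + alpha_e * (T - T0))
#   R_t(T) = r0   * (1 + alpha_t * (T - T0))
# Voltage-driven: T = T_amb + V^2 * R_t(T) / R_e(T)
# Current-driven: T = T_amb + I^2 * R_e(T) * R_t(T)
T0, T_AMB = 300.0, 300.0
RHO0, ALPHA_E = 1.0, 1.0 / 200
R0, ALPHA_T = 1.0, 1.0 / 400

def R_e(T): return RHO0 * (1.0 + ALPHA_E * (T - T0))
def R_t(T): return R0 * (1.0 + ALPHA_T * (T - T0))

def solve(residual, lo=300.0, hi=700.0, n=200):
    """Bisection for residual(T) = 0 on [lo, hi]. Returns None when there is
    no sign change, i.e., no steady solution below 700 K (thermal runaway)."""
    if residual(lo) * residual(hi) > 0:
        return None
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T_v10 = solve(lambda T: T - T_AMB - 10.0**2 * R_t(T) / R_e(T))  # 10 V
T_i8  = solve(lambda T: T - T_AMB - 8.0**2 * R_e(T) * R_t(T))   # 8 A
T_i10 = solve(lambda T: T - T_AMB - 10.0**2 * R_e(T) * R_t(T))  # 10 A

print(f"10 V: T = {T_v10:.1f} K")
print(f" 8 A: T = {T_i8:.1f} K")
print(f"10 A: {T_i10}  (no steady solution below 700 K)")
```

The 10 V result lands between the electrical-nonlinearity-only answer and the constant-property answer, while 10 A already has no steady solution: the two nonlinearities pull the voltage-driven temperature in opposite directions but reinforce each other in the current-driven case.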

Observe that for the current-driven case, the temperature rises asymptotically. Since both the electrical and the thermal resistance increase with an increasing temperature, the device temperature rises very sharply as the current is increased. Beyond a critical current, the temperature grows without bound and the steady-state problem has no solution at all. This is actually entirely expected; in fact, this is how your basic car fuse works. Now, if we were solving this problem in COMSOL Multiphysics, we could also solve this as a transient model (incorporating the thermal mass due to the device density and specific heat) and predict the time that it takes for the device temperature to rise to its failure point.

Things are luckily a bit simpler for the voltage-driven case. Here, we also see a predictable behavior: The rising thermal resistivity drives the temperature higher than when we only considered a temperature-dependent electrical conductivity. Now, the interesting point here is the temperature is still lower than for the constant resistivity case. This also sometimes confuses people, but just keep in mind that one of these nonlinearities is driving the temperature *down* while the other is driving the temperature *up*. In general, for a more complicated model (such as one you would build in COMSOL Multiphysics), you don’t know which nonlinearity will predominate.

What other mistake might we have made here? We have used a *positive* temperature coefficient of thermal resistivity. This is certainly true for most metals, but it turns out that the opposite is true for some insulators, glass being a common example. Usually, the total device thermal resistance is mostly a function of the insulators rather than the electrically conductive domains. In addition, the device’s thermal resistance should incorporate the effects of the cooling to the surrounding ambient environment. So, the effects of free convection (which increases with the temperature difference) and radiation (which depends on the difference of the fourth powers of the absolute temperatures) could also be lumped into this single thermal resistance. For now, though, let’s keep the problem (relatively) simple and just switch the sign of the temperature coefficient of thermal resistivity, *α _{t}* = -1/400 K, and directly compare the voltage- and current-driven cases for driving voltages up to 100 V and currents up to 100 A.

*Device temperature as a function of applied voltage (pink) and applied current (blue) with a negative temperature coefficient of thermal resistivity.*

We now see some results that are quite different. Observe that for both the voltage- and current-driven cases, the temperature increases approximately quadratically at low loads, but at higher loads, the temperature increase starts to flatten out due to the decreasing thermal resistivity. The slope, although always positive, decreases in magnitude. The current-driven case starts to asymptotically approach *T* = 700 K, but the voltage-driven case stays significantly below the failure temperature.
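The same bisection idea confirms these numbers. With *α _{t}* = -1/400 K and the other values from the text, the current-driven solution at 100 A sits just below 700 K, while the voltage-driven solution at 100 V stays noticeably lower (a sketch, not a COMSOL calculation):

```python
# Same lumped model, but with a *negative* temperature coefficient of
# thermal resistivity (values from the text): alpha_t = -1/400.
T0, T_AMB = 300.0, 300.0
RHO0, ALPHA_E = 1.0, 1.0 / 200
R0, ALPHA_T = 1.0, -1.0 / 400

def bisect(f, lo, hi, n=200):
    # Assumes f(lo) and f(hi) have opposite signs.
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def T_voltage(V):
    f = lambda T: T - T_AMB - V**2 * R0 * (1 + ALPHA_T * (T - T0)) / (RHO0 * (1 + ALPHA_E * (T - T0)))
    return bisect(f, 300.0, 699.9)

def T_current(I):
    f = lambda T: T - T_AMB - I**2 * RHO0 * (1 + ALPHA_E * (T - T0)) * R0 * (1 + ALPHA_T * (T - T0))
    return bisect(f, 300.0, 699.9)

print(f"100 V: T = {T_voltage(100.0):.1f} K (well below 700 K)")
print(f"100 A: T = {T_current(100.0):.1f} K (approaching 700 K)")
```

The vanishing thermal resistance near 700 K is what caps both curves: the closer the device gets to the failure temperature, the more easily it sheds heat in this model.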

This is an important result and highlights another common mistake. The nonlinear material models we used here for electrical and thermal resistivity are *approximations* that start to become invalid if we get too close to 700 K. If we anticipate operating in this regime, we should go back to the literature and find a more sophisticated material model. Although our existing nonlinear material models did solve, we always need to check that they are still valid at the computed operating temperature. Of course, if we are not close to these operating conditions, we can use the linearized resistivity model (one of the built-in material models within COMSOL Multiphysics). Then, our model will be valid.

We can hopefully now see from all of this data that the temperature has a very complicated relationship with respect to the driving voltage or current. When nonlinear materials are considered, the temperature might be higher or lower than when using constant properties, and the slope of the temperature response can switch from quite steep to quite shallow just based on the operating conditions.

Have these results thoroughly confused you yet? What if we went back and changed one of the coefficients in the resistance expressions? Certain materials have negative temperature coefficients of electrical and thermal resistivity. What if we used an even more complicated nonlinearity? Would you feel confident in saying anything about the expected temperature variations in even this simple lumped device case, or would you rather check it against a rigorous calculation?

What about the case of a real-world device? One that has a combination of many different materials, different electrical and thermal conductivities as a function of temperature, and complex shapes? Would you model this under steady-state conditions only or in the time domain, to find out how long it takes for the temperature to rise? Maybe — in fact, most likely — there will also be nonlinear boundary conditions such as radiation and free convection that we don’t want to approximate via a single lumped thermal resistance. What can you expect then? Almost anything! And how do you analyze it? Well, with COMSOL Multiphysics, of course!

Evaluate how COMSOL Multiphysics can help you meet your multiphysics modeling and analysis goals. Contact us via the button below.


One of the most common problems solved with the COMSOL Multiphysics® software is that of electromagnetic heating, combining the solution to Maxwell’s equations, which solves for the current flow and resultant losses, and the solution to the heat transfer equation, which solves for the temperature distribution.

As mentioned in a previous blog post, when solving for the electromagnetic fields, sharp reentrant corners lead to locally nonconvergent electric fields and current density. Electromagnetic losses are the product of the electric field and current density, so the peak losses at a sharp corner will similarly go to infinity with mesh refinement.

However, the integral over the losses around the sharp corner will be convergent with respect to mesh refinement. This is one of the strengths of the finite element method, which solves the governing equation in the so-called “weak form”, which satisfies the governing partial differential equations in an integral sense: minimizing the total error in the model, but allowing (possibly infinite!) local errors.

*A schematic of a simple electromagnetic heating problem with a singularity at the inside sharp corner.*

Let’s review this concept with a simple example, as shown in the image above. A rectangular domain with a sharp notch has an electric potential difference applied, leading to current flow and resistive losses in the material.

Below, we see a color plot of the resistive losses and the mesh used for different levels of mesh refinement at the sharp inside corner. At the highest level of mesh refinement, the losses appear very localized around the sharp corner.

*The electromagnetic losses on several different meshes.*

At this sharp inside corner, the electric fields are actually theoretically infinitely large, since this geometry and the boundary conditions imply that the current must instantaneously change direction at a point. Also note that the sharp outside corners do not lead to singularities. As a consequence of the geometry and boundary conditions, the electric currents are not forced to change direction instantaneously at these points.

*The resistive losses in log scale, plotted along a cut line, and a table of the integral of the losses for different meshes.*

If we plot the losses at a cross-sectional line, as shown above, we can observe that the losses at the sharp point get larger and larger as the mesh is refined. However, the integral of the losses over the domain (roughly speaking, the area underneath the plotted curves) converges very quickly with mesh refinement.
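A back-of-the-envelope model shows why the integral converges even though the peak does not. Near a reentrant corner, potential theory gives a loss density behaving like *q*(*r*) ∝ *r*^{2λ-2} with λ < 1 set by the corner angle; λ = 2/3, used below, corresponds to a 270° corner and is an assumption about the geometry in the figure. Shrinking the inner cutoff (a stand-in for the element size) makes the peak blow up while the integral settles down:

```python
import math

# Model loss density near a reentrant corner: q(r) = r**(2*lam - 2).
# lam = 2/3 is the Laplace-equation exponent for a 270-degree corner
# (an assumption about the geometry, not a value from the text).
LAM = 2.0 / 3.0
THETA = 1.5 * math.pi   # opening angle of the material sector, rad
R_OUT = 1.0             # outer radius of the integration region

def q(r):
    return r ** (2 * LAM - 2)   # ~ r**(-2/3): unbounded as r -> 0

def integrated_losses(r_in, n=100_000):
    # Midpoint rule for the sector integral of q(r) * r dr dtheta.
    h = (R_OUT - r_in) / n
    total = 0.0
    for i in range(n):
        r = r_in + (i + 0.5) * h
        total += q(r) * r * h
    return THETA * total

for r_in in (1e-2, 1e-4, 1e-6):
    print(f"cutoff {r_in:8.0e}: peak q = {q(r_in):12.1f}, "
          f"integral = {integrated_losses(r_in):.6f}")
```

The peak grows without bound as the cutoff shrinks, but the integral approaches the analytic limit (3/4)·θ·R^{4/3}, mirroring the table of integrated losses above.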

Now let’s make this a multiphysics problem by additionally solving the heat transfer equation for the temperature distribution under steady-state conditions. These temperature fields are plotted below for several levels of mesh refinement, as well as the temperature at the sharp point.

*The temperature field and a table of the temperature evaluated at the sharp corner for different levels of mesh refinement.*

We can see from these results that the temperature at the singular point — and, of course, everywhere else — is also very insensitive to the mesh refinement. This is for two reasons. First, as we just saw, the total resistive losses are quite insensitive to the mesh. Second, the diffusive nature of the steady-state heat transfer governing equation will return very similar temperature solutions as long as the total heat loads are similar. The transient temperature solution, on the other hand, can predict very high local temperatures if the heat load is very high, but this is also a local and relative effect, albeit in time. That is, spikes in the heat load distribution in space will be smoothed out over time, and in the limit of very long simulation times, the transient solution will approach the steady state solution.

What can we conclude from all of this information? If you are solving an electromagnetic heating problem and are only interested in computing the total electromagnetic losses and temperature distribution, you can usually avoid adding fillets to your model.

The advantages here are twofold. You do not need to go through the CAD modeling effort of adding any fillets to your geometry and you do not need an overly refined mesh in the sharp corners, which can save you the most valuable resource: time!


*This post was originally published in 2013. It has since been updated to include all of the turbulence models currently available with the CFD Module as of version 5.3 of the COMSOL® software.*

Let’s start by considering the fluid flow over a flat plate, as shown in the figure below. The uniform velocity profile hits the leading edge of the flat plate, and a laminar boundary layer begins to develop. The flow in this region is very predictable. After some distance, small chaotic oscillations begin to develop in the boundary layer and the flow begins to transition to turbulence, eventually becoming fully turbulent.

The transition between these three regions can be defined in terms of the Reynolds number, Re = *ρUL/μ*, where *ρ* is the fluid density; *U* is the velocity; *L* is the characteristic length (in this case, the distance from the leading edge); and *μ* is the fluid’s dynamic viscosity. We will assume that the fluid is *Newtonian*, meaning that the viscous stress is directly proportional to the shear rate, with the dynamic viscosity as the constant of proportionality. This is true, or very nearly so, for a wide range of fluids of engineering importance, such as air or water. Density can vary with respect to pressure, although it is here assumed that the fluid is only weakly compressible, meaning that the Mach number is less than about 0.3. The weakly compressible flow option for the fluid flow interfaces in COMSOL Multiphysics neglects the influence of pressure waves on the flow and pressure fields.
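As a concrete example (with illustrative property values, not taken from the text), the Reynolds number is a one-line calculation:

```python
# Reynolds number for flow over a flat plate: Re = rho * U * L / mu.
def reynolds(rho, U, L, mu):
    return rho * U * L / mu

# Illustrative values: water near room temperature, 1 m/s free stream,
# 0.1 m downstream of the leading edge.
Re = reynolds(rho=1000.0, U=1.0, L=0.1, mu=1e-3)
print(f"Re = {Re:.3g}")
# Transition on a smooth flat plate is commonly quoted to begin near
# Re_x ~ 5e5, so this station would still be laminar.
```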

In the laminar regime, the fluid flow can be completely predicted by solving the Navier-Stokes equations, which gives the velocity and the pressure fields. Let us first assume that the velocity field does not vary with time. An example of this is outlined in The Blasius Boundary Layer tutorial model. As the flow begins to transition to turbulence, oscillations appear in the flow, despite the fact that the inlet flow rate does not vary with time. It is then no longer possible to assume that the flow is invariant with time. In this case, it is necessary to solve the time-dependent Navier-Stokes equations, and the mesh used must be fine enough to resolve the size of the smallest eddies in the flow. Such a situation is demonstrated in the Flow Past a Cylinder tutorial model. Note that the flow is unsteady, but still laminar in this model. Steady-state and time-dependent laminar flow problems do not require any modules and can be solved with COMSOL Multiphysics alone.

As the flow rate — and thus also the Reynolds number — increases, the flow field exhibits small eddies and the spatial and temporal scales of the oscillations become so small that it is computationally unfeasible to resolve them using the Navier-Stokes equations, at least for most practical cases. In this flow regime, we can use a Reynolds-averaged Navier-Stokes (RANS) formulation, which is based on the observation that the flow field (u) over time contains small, local oscillations (u’) and can be treated in a time-averaged sense (U). For one- and two-equation models, additional transport equations are introduced for turbulence variables, such as the turbulence kinetic energy (k in k-ε and k-ω).
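The Reynolds decomposition is easy to demonstrate on a synthetic signal. The sketch below builds u(t) = U + u′(t) from random fluctuations (the mean velocity and rms level are arbitrary assumptions), recovers U by time averaging, and estimates the turbulence kinetic energy k under an isotropy assumption:

```python
import random

# Reynolds decomposition u(t) = U + u'(t) on a synthetic velocity signal.
random.seed(1)
U_MEAN = 10.0   # m/s, imposed mean velocity (assumed)
SIGMA = 0.8     # m/s, rms of the fluctuating component (assumed)
N = 200_000

samples = [U_MEAN + random.gauss(0.0, SIGMA) for _ in range(N)]

U = sum(samples) / N                 # time-averaged velocity (the RANS "U")
uprime = [u - U for u in samples]    # fluctuating part (the RANS "u'")

# Turbulence kinetic energy k = 0.5 * sum over components of <u_i'^2>;
# with one measured component and assumed isotropy, k = 1.5 * <u'^2>.
k = 1.5 * sum(v * v for v in uprime) / N

print(f"U ~ {U:.3f} m/s, mean(u') ~ {sum(uprime)/N:.2e}, k ~ {k:.3f} m^2/s^2")
```

The fluctuation averages to zero by construction, which is exactly why RANS models must carry additional variables such as k to retain any information about the turbulence.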

In algebraic models, algebraic equations that depend on the velocity field — and, in some cases, on the distance from the walls — are introduced in order to describe the turbulence intensity. From the estimates for the turbulence variables, an eddy viscosity that adds to the molecular viscosity of the fluid is calculated. The momentum that would be transferred by the small eddies is instead translated to a viscous transport. Turbulence dissipation usually dominates over viscous dissipation everywhere, except for in the viscous sublayer close to solid walls. Here, the turbulence model has to continuously reduce the turbulence level, such as in low Reynolds number models. Or, new boundary conditions have to be computed using wall functions.

The term “low Reynolds number model” sounds like a contradiction, since flows can only be turbulent if the Reynolds number is high enough. The notation “low Reynolds number” does not refer to the flow on a global scale, but to the region close to the wall where viscous effects dominate; i.e., the viscous sublayer in the figure above. A low Reynolds number model is a model that correctly reproduces the limiting behaviors of various flow quantities as the distance to the wall approaches zero. So, a low Reynolds number model must, for example, predict that *k*~*y*^{2} as *y*→0. Correct limiting behavior means that the turbulence model can be used to model the whole boundary layer, including the viscous sublayer and the buffer layer.

Most ω-based models are low Reynolds number models by construction. But the standard k-ε model and other commonly encountered k-ε models are *not* low Reynolds number models. Some of them can, however, be supplemented with so-called damping functions that give the correct limiting behavior. They are then known as low Reynolds number k-ε models.

Low Reynolds number models often give a very accurate description of the boundary layer. The sharp gradients close to walls do, however, require very high mesh resolutions and that, in turn, means that the high accuracy comes at a high computational cost. This is why alternative methods to model the flow close to walls are often employed for industrial applications.

The turbulent flow near a flat wall can be divided into four regions. At the wall, the fluid velocity is zero, and in a thin layer above this, the flow velocity is linear with distance from the wall. This region is called the *viscous sublayer*, or laminar sublayer. Further away from the wall is a region called the *buffer layer*. In the buffer region, turbulence stresses begin to dominate over viscous stresses and it eventually connects to a region where the flow is fully turbulent and the average flow velocity is related to the log of the distance to the wall. This is known as the *log-law region*. Even further away from the wall, the flow transitions to the *free-stream region*. The viscous and buffer layers are very thin and if the distance to the end of the buffer layer is *δ*, then the log-law region will extend about 100*δ* away from the wall.
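These regions can be made concrete with the classic nondimensional velocity profiles: u⁺ = y⁺ in the viscous sublayer and u⁺ = (1/κ)ln y⁺ + B in the log-law region. The sketch below uses the textbook constants κ ≈ 0.41 and B ≈ 5.0 (assumed values, not taken from the text) and finds where the two profiles cross, often quoted as the nominal edge of the viscous sublayer:

```python
import math

KAPPA, B = 0.41, 5.0   # textbook log-law constants (assumed values)

def u_plus_viscous(y_plus):
    return y_plus                         # linear profile in the viscous sublayer

def u_plus_log(y_plus):
    return math.log(y_plus) / KAPPA + B   # log-law profile

# Bisection for the crossing point of the two profiles.
lo, hi = 5.0, 30.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if u_plus_viscous(mid) - u_plus_log(mid) < 0:
        lo = mid
    else:
        hi = mid
y_cross = 0.5 * (lo + hi)
print(f"profiles cross at y+ ~ {y_cross:.1f}")
```

The crossing lands near y⁺ ≈ 11; in reality, the buffer layer smoothly bridges the two profiles in this neighborhood rather than joining them at a kink.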

It is possible to use a RANS model to compute the flow field in all four of these regions. However, since the thickness of the buffer layer is so small, it can be advantageous to use an approximation in this region. Wall functions ignore the flow field in the buffer region and analytically compute a nonzero fluid velocity at the wall. By using a wall function formulation, you assume an analytic solution for the flow in the viscous layer and the resultant models will have significantly lower computational requirements. This is a very useful approach for many practical engineering applications.

If you need a level of accuracy beyond what the wall function formulations provide, then you will want to consider a turbulence model that solves the entire flow regime as described for the low Reynolds number models above. For example, you may want to compute lift and drag on an object or compute the heat transfer between the fluid and the wall.

The automatic wall treatment functionality, which is new in COMSOL Multiphysics version 5.3, combines benefits from both wall functions and low Reynolds number models. Automatic wall treatment adapts the formulation to the mesh available in the model so that you get both robustness and accuracy. For instance, for a coarse boundary layer mesh, the feature will utilize a robust wall function formulation. However, for a dense boundary layer mesh, the automatic wall treatment will use a low Reynolds number formulation to resolve the velocity profile completely to the wall.

Going from a low Reynolds number formulation to a wall function formulation is a smooth transition. The software blends the two formulations in the boundary elements. It then calculates the wall distance of the boundary elements’ grid points (in viscous units, this distance is given by a liftoff) and applies the combined formulation in the boundary conditions.

All turbulence models in COMSOL Multiphysics, except the k-ε model, support automatic wall treatment. This means that the low Reynolds number models can be used for industrial applications and that their low Reynolds number modeling capability is only invoked when the mesh is fine enough.

The eight RANS turbulence models differ in how they model the flow close to walls, the number of additional variables solved for, and what these variables represent. All of these models augment the Navier-Stokes equations with an additional turbulence eddy viscosity term, but they differ in how it is computed.

The L-VEL and algebraic yPlus turbulence models compute the eddy viscosity using algebraic expressions based only on the local fluid velocity and the distance to the closest wall. They do not solve any additional transport equations. These models solve for the flow everywhere and are the most robust and least computationally intensive of the eight turbulence models. While they are generally the least accurate models, they do provide good approximations for internal flow, especially in electronic cooling applications.

The Spalart-Allmaras model adds a single additional variable for an undamped kinematic eddy viscosity. It is a low Reynolds number model and can resolve the entire flow field down to the solid wall. The model was originally developed for aerodynamics applications and is advantageous in that it is relatively robust and has moderate resolution requirements. Experience shows that this model does not accurately compute fields that exhibit shear flow, separated flow, or decaying turbulence. Its advantage is that it is quite stable and shows good convergence.

The k-ε model solves for two variables: k, the turbulence kinetic energy; and ε (epsilon), the rate of dissipation of turbulence kinetic energy. Wall functions are used in this model, so the flow in the buffer region is not simulated. The k-ε model has historically been very popular for industrial applications due to its good convergence rate and relatively low memory requirements. It does not very accurately compute flow fields that exhibit adverse pressure gradients, strong curvature to the flow, or jet flow. It does perform well for external flow problems around complex geometries. For example, the k-ε model can be used to solve for the airflow around a bluff body.

The turbulence models listed below are all more nonlinear than the k-ε model and they can often be difficult to converge unless a good initial guess is provided. The k-ε model can be used to provide a good initial guess. Just solve the model using the k-ε model and then use the new *Generate New Turbulence Interface* functionality, available in the CFD Module with COMSOL Multiphysics version 5.3.

The k-ω model is similar to the k-ε model, but it solves for ω (omega) — the specific rate of dissipation of kinetic energy. It is a low Reynolds number model, but it can also be used in conjunction with wall functions. It is more nonlinear, and thereby more difficult to converge than the k-ε model, and it is quite sensitive to the initial guess of the solution. The k-ω model is useful in many cases where the k-ε model is not accurate, such as internal flows, flows that exhibit strong curvature, separated flows, and jets. A good example of internal flow is flow through a pipe bend.

The low Reynolds number k-ε model is similar to the k-ε model, but does not need wall functions: it can solve for the flow everywhere. It is a logical extension of the k-ε model and shares many of its advantages, but generally requires a denser mesh; not only at walls, but everywhere its low Reynolds number properties kick in and dampen the turbulence. It can sometimes be useful to use the k-ε model to first compute a good initial condition for solving the low Reynolds number k-ε model. An alternative way is to use the automatic wall treatment and start with a coarse boundary layer mesh to get wall functions and then refine the boundary layer at the interesting walls to get the low Reynolds number models.

With the low Reynolds number k-ε model, lift and drag forces and heat fluxes can be computed with higher accuracy than with the standard k-ε model. The model has also been shown to predict separation and reattachment quite well for a number of cases.

The SST model is a combination of the k-ε model in the free stream and the k-ω model near the walls. It is a low Reynolds number model and something of a “go-to” model for industrial applications. It has similar resolution requirements to the k-ω model and the low Reynolds number k-ε model, but its formulation eliminates some weaknesses displayed by pure k-ω and k-ε models. In a tutorial model example, the SST model solves for flow over a NACA 0012 Airfoil. The results are shown to compare well with experimental data.

Close to wall boundaries, the fluctuations of the velocity are usually much larger in the directions parallel to the wall than in the direction perpendicular to it; the velocity fluctuations are said to be anisotropic. Further away from the wall, the fluctuations are of the same magnitude in all directions, and the velocity fluctuations become isotropic.

The v2-f turbulence model describes the anisotropy of the turbulence intensity in the turbulent boundary layer using two new equations, in addition to the two equations for turbulence kinetic energy (k) and dissipation rate (ε). The first equation describes the transport of turbulent velocity fluctuations normal to the streamlines. The second equation accounts for nonlocal effects such as the wall-induced damping of the redistribution of turbulence kinetic energy between the normal and parallel directions.

You should use this model for enclosed flows over curved surfaces, for example, to model cyclones.

Solving for any kind of fluid flow problem — laminar or turbulent — is computationally intensive. Relatively fine meshes are required and there are many variables to solve for. Ideally, you would have a very fast computer with many gigabytes of RAM to solve such problems, but simulations can still take hours or days for larger 3D models. Therefore, we want to use as simple a mesh as possible, while still capturing all of the details of the flow.

Referring back to the figure at the top of this blog post, we can observe that for the flat plate (and for most flow problems), the velocity field changes quite slowly in the direction tangential to the wall, but quite rapidly in the normal direction, especially if we consider the buffer layer region. This observation motivates the use of a boundary layer mesh. Boundary layer meshes (which are the default mesh type on walls when using our physics-based meshing) insert thin rectangles in 2D or triangular prisms in 3D at the walls. These high-aspect-ratio elements will do a good job of resolving the variations in the flow speed normal to the boundary, while reducing the number of calculation points in the direction tangential to the boundary.

*The boundary layer mesh (magenta) around an airfoil and the surrounding triangular mesh (cyan) for a 2D mesh.*

*The boundary layer mesh (magenta) around a bluff body and the surrounding tetrahedral mesh (cyan) for a 3D volumetric mesh.*
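The sizing of such a boundary layer mesh is usually described by a first-layer thickness and a stretching factor, with layer heights growing geometrically away from the wall. The numbers below are purely illustrative, not COMSOL defaults:

```python
# Boundary layer mesh sizing sketch: n layers whose heights grow
# geometrically away from the wall.
h1 = 1e-5        # first layer thickness, m (illustrative)
s = 1.2          # stretching factor between successive layers (illustrative)
n = 8            # number of layers (illustrative)

heights = [h1 * s**i for i in range(n)]
total = sum(heights)               # total thickness of the layered stack
tangential = 1e-3                  # assumed tangential element size, m
aspect = tangential / heights[0]   # aspect ratio of the first-layer elements

print(f"layer heights: {heights[0]:.2e} .. {heights[-1]:.2e} m")
print(f"stack thickness: {total:.2e} m, first-layer aspect ratio ~ {aspect:.0f}")
```

Even this modest stack gives first-layer elements with an aspect ratio of about 100, which is exactly the point: resolution where the gradients are, economy where they are not.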

Once you’ve used one of these turbulence models to solve your flow simulation, you will want to verify that the solution is accurate. Of course, as you do with any finite element model, you can simply run it with finer and finer meshes and observe how the solution changes with increasing mesh refinement. Once the solution does not change to within a value you find acceptable, your simulation can be considered converged with respect to the mesh. However, there are additional values you need to check when modeling turbulence.

When using wall function formulations, you will want to check the *wall resolution viscous units* (this plot is generated by default). This value tells you how far into the boundary layer your computational domain starts and should not be too large. You should consider refining your mesh in the wall normal direction if there are regions where the wall resolution exceeds several hundred. The second variable that you should check when using wall functions is the *wall liftoff* (in length units). This variable is related to the assumed thickness of the viscous layer and should be small relative to the surrounding dimensions of the geometry. If it is not, then you should refine the mesh in these regions as well.
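When setting up the mesh, it is also common to estimate in advance how thick the first cell must be to hit a target value in viscous units. The sketch below does this with the standard flat-plate skin-friction correlation *C_f* = 0.0592 Re_x^{-1/5} (a textbook estimate and an assumption here, not a COMSOL formula):

```python
# Estimate the wall-normal height of the first mesh cell needed to reach a
# target y+ value, using a textbook flat-plate skin-friction correlation.
def first_cell_height(rho, mu, U, x, y_plus_target=1.0):
    Re_x = rho * U * x / mu
    Cf = 0.0592 * Re_x ** -0.2        # turbulent flat-plate estimate (assumption)
    u_tau = U * (Cf / 2.0) ** 0.5     # friction velocity
    return y_plus_target * mu / (rho * u_tau)

# Air-like properties, 10 m/s, 0.5 m downstream of the leading edge (illustrative)
h = first_cell_height(rho=1.2, mu=1.8e-5, U=10.0, x=0.5)
print(f"first cell height for y+ = 1: {h * 1e6:.1f} micrometers")
```

Such an estimate is only a starting point; after solving, the plotted wall resolution (or dimensionless distance to cell center) is still the quantity to check.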

*The maximum wall liftoff in viscous units is less than 100, so there is no need to refine the boundary layer mesh.*

When solving a model using low Reynolds number wall treatment, check the *dimensionless distance to cell center* (also generated by default). This value should be of order unity everywhere for the algebraic models and less than 0.5 for all two-equation models and the v2-f model. If it is not, then refine the mesh in these regions.

In this blog post, we have discussed the various turbulence models available in COMSOL Multiphysics, highlighting when and why you should use each one of them. The real strength of the COMSOL® software is when you want to combine your fluid flow simulations with other physics, such as finding stresses on a solar panel in high winds, forced convection modeling in a heat exchanger, or mass transfer in a mixer, among other possibilities.

If you are interested in using the COMSOL® software for your CFD and multiphysics simulations, or if you have a question that isn’t addressed here, please contact us.


As we saw in a previous blog post on creating randomized geometries, you can use the *Record Method* functionality to record a series of operations that you’re performing within the COMSOL Multiphysics graphical user interface (GUI) and then replay that method to reproduce those same steps. Of course, this doesn’t do us any good if we have already created the file — we don’t want to go back and rerecord the entire file. As it turns out, though, COMSOL Multiphysics automatically keeps a history of everything that you’ve done in a model file as Java® code. We can just extract the relevant operations directly from this code and insert them into a new model method.

*The* Compact History *option.*

To extract the full history of operations within a file, there are a few steps you need to take. First, go to the *File* menu and choose the *Compact History* option. We do this because COMSOL Multiphysics keeps a history of all commands, but we only want the minimum set of commands that were used to generate the existing model. Next, go to *File* > *Save As* and save as the *Model File for Java* file type. You now have a text file that contains Java® code. Try this out yourself and open the resulting file in a text editor. This file always has lines of code at the beginning and end that are similar to what is shown below:

/* example_model.java */

import com.comsol.model.*;
import com.comsol.model.util.*;

public class example_model {

  public static Model run() {
    Model model = ModelUtil.create("Model");

    model.modelPath("C:\\Temp");
    model.label("example_model.mph");
    model.comments("This is an example model");

    ...
    /* Lines of code describing the model contents */
    ...

    return model;
  }

  public static void main(String[] args) {
    run();
  }
}

The above code snippet shows us what we can remove. Only the code between `Model model = ModelUtil.create("Model");` and `return model;` is used to define all of the features within the model. In fact, we can also remove the `model.modelPath();`, `model.label();`, and `model.comments();` lines. Go ahead and remove all of these lines of code in your text editor and you are left with just the set of commands needed to reproduce the model in a model method.

Next, open a new blank model file, go to the Application Builder, and create a new model method. Copy all of the lines from your edited Java® file into this new model method. Then, switch back to the Model Builder, go to the *Developer* tab, and choose *Run Model Method* to run this code. Running this model method reproduces all of the steps from your original file, including solving the model. Solving the model may take a long time, so we often want to trim our model method.

*A model method within the Application Builder.*

There are two approaches that you can take for trimming down the code. The first is to manually edit the Java® code itself, pruning out any code that you don’t want to rerun. It’s helpful to have the *COMSOL Programming Reference Manual* handy if you’re going to do this, because you may need to know what every line does before you delete it. The second, simpler approach is to delete the features directly within the COMSOL Multiphysics GUI. Start with a copy of your original model file and delete everything that you don’t want to appear within the method. You can delete the geometry sequence, mesh, study steps, results visualizations, and anything else that you don’t want to reproduce.

Let’s take a look at a quick example of this. Suppose that you’ve built a model that simulates thermal curing and you want to include this thermal curing simulation in other existing models that already have the heat transfer simulations set up.

As we saw in a previous blog post, modeling thermal curing in addition to heat transfer requires three steps:

- Defining a set of material parameters
- Adding a *Domain ODE* interface to model the evolution of the cure over time
- Coupling the heat of reaction from the curing into the thermal problem

We can build a model in the GUI that contains just these steps and then write out the Java® file. Of course, we still need to do some manual editing, and it’s also helpful to go through the *Application Programming Guide* to get an introduction to the basics. But once you’re comfortable with all of the syntax, you’ll see that the above three steps within the GUI can be written in the model method shown here:

model.param().set("H_r", "500[kJ/kg]", "Total Heat of Reaction");
model.param().set("A", "200e3[1/s]", "Frequency Factor");
model.param().set("E_a", "150[kJ/mol]", "Activation Energy");
model.param().set("n", "1.4", "Order of Reaction");
model.component("comp1").physics("ht").create("hsNEW", "HeatSource");
model.component("comp1").physics("ht").feature("hsNEW").selection().all();
model.component("comp1").physics("ht").feature("hsNEW").set("Q0", "-ht.rho*H_r*d(alpha,t)");
model.component("comp1").physics().create("dode", "DomainODE", "geom1");
model.component("comp1").physics("dode").field("dimensionless").field("alpha");
model.component("comp1").physics("dode").field("dimensionless").component(new String[]{"alpha"});
model.component("comp1").physics("dode").prop("Units").set("SourceTermQuantity", "frequency");
model.component("comp1").physics("dode").feature("dode1").set("f", "A*exp(-E_a/R_const/T)*(1-alpha)^n");

The first four lines of this code snippet define an additional set of global parameters. The next three lines add a *Heat Source* domain feature to an existing *Heat Transfer* interface (with the tag *ht*), apply the heat source to all domains, and define the heat source term. The last five lines set up a *Domain ODE* interface that is applied by default to all domains in the model and set the variable name, the units, and the equation to solve.

*Running the model method from the* Developer *tab.*

We can run the above model method in a file that already has a heat transfer analysis set up. For example, try adding and running this model method in the Axisymmetric Transient Heat Transfer tutorial, available in the Application Library in COMSOL Multiphysics. Then, just re-solve the model to solve for both temperature and degree of cure.

Now, there are a few assumptions in the above code snippet:

- We want to model curing in all of the domains in our model
- There is already a component with the tag *comp1* to which we can add physics interfaces
- There is not already a *Domain ODE* interface with the tag *dode* in that component
- The temperature variable is defined as *T*, which we can use in the *Domain ODE* interface
- A heat transfer physics interface with the tag *ht* already exists, to which we can add a feature with the tag *hsNEW*

Of course, as you develop your own model methods, you need to be able to recognize and address these kinds of general logical issues.

From this simple example, you can also see that you can create a model method that acts as a reusable template for any part of the modeling process in COMSOL Multiphysics. You might want to run such a template model method in every new file you create, possibly to load in a set of custom material properties, set up a complicated physics interface, or define a complicated set of expressions. You might also want to reuse the same model method in an existing file to set up a particular customized study type, modify solver settings, or define a results visualization that you plan to reuse over and over again.

Once you get comfortable with the basics of this workflow, you’ll find yourself saving lots of time, which we hope you’ll appreciate!

- How to Generate Random Surfaces in COMSOL Multiphysics®
- How to Model Gearbox Vibration and Noise in COMSOL Multiphysics®
- How to Create a Randomized Geometry Using Model Methods

*Oracle and Java are registered trademarks of Oracle and/or its affiliates.*

Before we get to the rough surface, let’s start with something simple: a thin uniform layer of gold coating on top of optically flat glass, as shown in the image below. Such a model exhibits negligible structural variation in the plane of glass. In addition, it can be modeled quite simply in the COMSOL Multiphysics® software by considering a small two-dimensional unit cell that has a width much smaller than the wavelength.

This computational model is based on the Fresnel equation example, one of the verification models in the Application Gallery, but is modified to include a layer of gold with a wavelength-dependent refractive index. This type of index requires that we manually adjust the mesh size based on the minimum wavelength in each material as well as the skin depth, as described in a previous blog post.

*Light incident on a metal coating on top of a glass substrate is reflected, transmitted, and absorbed.*

The model includes Floquet periodic boundary conditions on the left and right sides of the modeling domain and a Port boundary condition at the top and bottom. The Port boundary condition at the top launches a plane wave at a specified angle of incidence and computes the reflected light, while the one at the bottom calculates the transmitted light. We can integrate the losses within the metal layer to compute the absorbance within the gold layer.

*The computational model that calculates the optical properties of a metal film on glass.*

If we are interested in computing incident light at off-normal incident angles, then we also have to concern ourselves with the height of the modeling domain — the distance between the material interfaces and the Port boundary conditions. This distance must be large enough such that any evanescent field drops off to approximately zero within the modeling domain.

The reason for this has to do with the Port boundary conditions, which can only consider the propagating component of the electromagnetic field. Any evanescent component of the field that reaches the Port boundary condition is artificially reflected, so we must place the port boundary far enough away from the material interfaces. In the most general cases, it is difficult to determine how far the evanescent field extends. A simple rule of thumb is to place the Port boundary conditions at least half a wavelength away from the material interfaces and to check if making the domain larger alters the results.

The sample results below show the transmitted, reflected, and absorbed light as well as their total — which should always add up to one. If these do not add up to one, then we must carefully check our model setup.

*The transmittance, reflectance, and absorbance of light normally incident on a flat glass surface with a metal coating as a function of wavelength.*

*The transmittance, reflectance, and absorbance of 550-nm light at various angles of incidence.*

Let’s now make things a little bit more complicated and introduce a periodic structural variation: a sinusoidal ripple. Clearly, we now need to consider a larger unit cell that considers a single ripple.

*A surface with periodic variations reflects and transmits light into several different diffraction orders.*

We can still apply the same domain properties and all of the same boundary conditions. However, if the spacing is large enough, then we can have higher-order diffraction. In other words, light can be reflected and transmitted into several different directions. To properly compute the reflection and transmission, we need to add several diffraction order ports. The software computes the appropriate number of ports based on the domain width, material properties, and specified incident angle. If we are studying a range of incident angles, we must make sure to compute all of the diffraction orders present at the limits of the angular sweep.
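To get a feel for when extra orders appear, we can evaluate the grating equation directly. The following plain Java® sketch (an illustration of the underlying physics, not the algorithm the software uses) counts the orders *m* for which sin(θ) + mλ/(nd) still corresponds to a propagating wave in a medium of refractive index *n*:

```java
// Count the propagating diffraction orders for a periodic structure of
// width d, illuminated at angle theta (radians) by free-space
// wavelength lambda0 in a medium of refractive index n. An order m
// propagates when |sin(theta) + m*lambda0/(n*d)| <= 1.
public class DiffractionOrders {

    static int count(double lambda0, double d, double n, double theta) {
        int count = 0;
        // m = 0 always propagates; scan a generous symmetric range of orders.
        int mMax = (int) Math.ceil(2.0 * n * d / lambda0) + 1;
        for (int m = -mMax; m <= mMax; m++) {
            if (Math.abs(Math.sin(theta) + m * lambda0 / (n * d)) <= 1.0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 550 nm light in air at normal incidence:
        System.out.println(count(550e-9, 400e-9, 1.0, 0.0));  // subwavelength spacing
        System.out.println(count(550e-9, 1000e-9, 1.0, 0.0)); // wider spacing
    }
}
```

For a 400 nm period, only the zeroth order propagates; widening the period to 1000 nm admits the m = ±1 orders as well, illustrating why the number of required ports grows with domain width.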

*There can be multiple diffraction orders present, depending on the ratio of wavelength to domain width, refractive index, and incident angles.*

The conditions under which higher-order diffraction appears, and the appropriate modeling procedure, are presented in depth in the example of a plasmonic wire grating, so let’s not go into it at length here. In short, the wider the computational domain relative to the wavelengths in the materials above and below, the more diffraction orders can be present (the number of diffraction orders varies with the incident angle). The results shown below plot the total transmittance and reflectance; i.e., all of the light reflected into the different diffraction orders is added up, as is all of the transmitted light.

*The transmittance, reflectance, and absorbance of light normally incident on a rippled glass surface with a metal coating.*

*The transmittance, reflectance, and absorbance of 550-nm light at various angles of incidence.*

Let’s now move on to the most computationally difficult case: a surface with many random variations in the surface height. To model the randomness, we must model several different domains of increasing widths and different subsets of the rough profile. As the domain width increases — and as different subsets of the surface are sampled — the average behavior computed from these different models converges. That is, we generate a set of statistics by sampling the rough surface. Rather than going into detail on how to calculate these statistics, let’s focus on how to model one domain that approximates a rough surface by defining the height variation as the sum of different sinusoids with random height and phase, as described here.

*A rough surface with random variations reflects and transmits light in random directions. The computational model must sample a statistically significant subset of the roughness profile.*

Our computational domain must now be very wide, many times longer than the wavelength. As we still want to model a plane wave incident at various angles on the structure, we use the Floquet periodic boundary conditions, which require that we have an identical mesh on the periodic boundaries. Practically speaking, this means that we may need to slightly alter the geometry of our domain to ensure that the boundaries on the left and right side are identical. If we do use a sum of sine functions, as described here, then the profile will automatically be periodic.
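As an illustration of why such a profile is periodic by construction, consider a height function built from sinusoids that each complete an integer number of periods across the domain width *L*. This plain Java® sketch is a simplified stand-in for the approach linked above, with assumed amplitude and phase distributions:

```java
import java.util.Random;

// A rough-surface height profile built as a sum of sinusoids with
// random amplitudes and phases. Because every term completes an
// integer number of periods across the domain width L, the profile is
// automatically periodic, which suits Floquet boundary conditions.
public class RoughProfile {

    final double[] amp, phase;
    final double length;

    RoughProfile(int terms, double domainLength, double maxAmp, long seed) {
        Random rng = new Random(seed);
        length = domainLength;
        amp = new double[terms];
        phase = new double[terms];
        for (int m = 0; m < terms; m++) {
            amp[m] = rng.nextDouble() * maxAmp / (m + 1); // damp higher frequencies
            phase[m] = rng.nextDouble() * 2.0 * Math.PI;
        }
    }

    // Surface height at position x.
    double height(double x) {
        double h = 0.0;
        for (int m = 0; m < amp.length; m++) {
            h += amp[m] * Math.cos(2.0 * Math.PI * (m + 1) * x / length + phase[m]);
        }
        return h;
    }

    public static void main(String[] args) {
        RoughProfile p = new RoughProfile(20, 10.0, 0.5, 42L);
        // The two ends of the domain have identical height, as required
        // for identical meshes on the periodic boundaries.
        System.out.println(p.height(0.0) + " vs " + p.height(10.0));
    }
}
```

The 1/(m + 1) amplitude damping is just one assumed way to keep high-frequency content from dominating; the actual spectral content should come from the roughness statistics you want to reproduce.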

We still want to launch the wave with a Port boundary condition. However, it is no longer practical to use diffraction order ports to monitor the reflected and transmitted light, as this can result in hundreds (or thousands) of diffraction orders. Furthermore, since this model represents a statistical sampling, the relative fraction of light scattered into these different orders is not of interest; we’re only interested in the sum of the total reflected and transmitted light. That is, this modeling approach computes the total integrated scatter plus the specular reflection and transmission of the surface.

*The computational domain for a model of a rough surface. Light is launched from the interior port toward the material interface. Light reflected back toward this port passes though it and is absorbed in the PML, as is the transmitted light. Two additional boundaries are introduced to monitor the total reflectance and transmittance.*

Thus, we introduce an alternative modeling strategy that does not use ports to compute reflection and transmission. Instead, we use a perfectly matched layer (PML) above and below to absorb all reflected and transmitted light as well as probes to compute reflection and transmission. PMLs absorb any fields incident upon them, as described in this blog post on using PMLs for wave electromagnetics problems.

The PML absorbs both propagating and evanescent components of the field, but we only want it to absorb the propagating component. Thus, we again need to ensure that we place the PMLs far enough away from the material interfaces. We use the same rule of thumb as before, placing the PML at least half a wavelength away from the material interfaces.

As we approach grazing angles of incidence, even the PML domain does not, by default, absorb all of the light. At nearly grazing angles, the effective wavelength in the absorbing direction is very long, and we need to modify the default wavelength in the PML settings (shown below). This change to the settings is only necessary if we are interested in angles of incidence greater than ~75°.
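The underlying geometry is simple: for a plane wave at angle θ from the normal, the wavelength measured along the PML's absorbing (normal) direction stretches to λ/cos(θ). A minimal sketch, assuming this simple plane-wave picture:

```java
// The effective wavelength along the PML's absorbing direction grows
// as lambda/cos(theta), which is why a default PML tuned to lambda
// under-absorbs at near-grazing incidence.
public class PmlWavelength {

    static double effectiveWavelength(double lambda, double thetaDeg) {
        return lambda / Math.cos(Math.toRadians(thetaDeg));
    }

    public static void main(String[] args) {
        for (double theta : new double[]{0.0, 60.0, 75.0, 85.0}) {
            System.out.println(theta + " deg -> "
                + effectiveWavelength(550e-9, theta) * 1e9 + " nm");
        }
    }
}
```

At 85° the effective wavelength is more than ten times the free-space value, which is consistent with the advice that the PML settings only need adjusting beyond roughly 75°.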

*The PML settings modified to account for grazing angles of incidence.*

Since our domain is now bounded by PMLs above and below, the port that launches the wave must now be placed within the modeling domain. To do this, we use the *Slit Condition* option to define an interior port that is backed by a domain. This means that the port now launches a wave in one direction, emanating from this interior boundary. Any light reflected back toward the boundary passes through unimpeded and then gets absorbed by the PML.

Although this is a good way to launch the wave, we will no longer use the Port boundary condition to compute how much light is reflected, since we would have to add hundreds of diffraction ports, and similarly, we’d need hundreds of ports to compute the total transmittance.

To monitor the total transmitted and reflected light, we instead introduce two additional interior boundaries to the model, placed just in front of the PML domains (shown in the schematic above). At these two boundaries, we integrate the power flux in the upward and downward directions, normalized by the incident power, which gives us the total reflectance and transmittance. To more accurately determine the integral of the power flux at these boundaries, we also introduce a boundary layer mesh composed of a single layer of elements much smaller than the wavelength.

On the incident side, we place this monitoring boundary above the interior port. The launching port introduces a plane wave propagating toward the material interface. The light reflected at the interface passes through this interior port, then moves through the boundary at which we monitor reflectance, and is absorbed in the PML.

The plots below show sample results of the transmittance, reflectance, and absorbance. They are notably different from the smooth surface and periodically varying surface results. Note that the sweep over the angle of incidence terminates at 85° off normal. Of course, these plots will look slightly different for each random geometry case that we run.

*The transmittance, reflectance, and absorbance of light normally incident on a rough glass surface.*

*The transmittance, reflectance, and absorbance of 550-nm light at angles of incidence up to 85° off normal.*

Here, we have introduced a modeling approach that is appropriate for computing the optical transmission and reflection from a rough surface. This method contrasts with the approach for modeling a uniform optically flat surface as well as the one for modeling surfaces with periodic variations. The modeling method for rough surfaces can also be used for the modeling of periodic structures that have a very long period, such as when the scattering into different diffraction orders is not of interest.

Modeling truly random surfaces does require some care, as the geometry needs to be altered to ensure that it is periodic. Furthermore, the domain size and number of different random geometries studied must be large enough to give statistically meaningful results. Since this requires solving many different variations of the same model and postprocessing the results, it is helpful to use the Application Builder, LiveLink™ *for* MATLAB®, or LiveLink™ *for* Excel® in our modeling workflow.

- See how to create a rough surface as a sum of sine functions
- See how to create a random geometry using model methods
- Check out other blog posts related to simulating wave optics:

*MATLAB is a registered trademark of The MathWorks, Inc. Microsoft and Excel are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.*

Determining the best cheese in the world is a hotly contested task, but I’ll go ahead and add my opinion: a good Emmentaler cheese is hard to beat. A master cheesemaker might joke that it’s really the holes that add the flavor, so if we’re going to build a good COMSOL Multiphysics model of a wheel of cheese, we need to include the holes.

*A model of Emmentaler cheese, with randomly positioned and sized holes.*

It turns out that the reasons for the holes in Swiss cheese are quite complicated, so we aren’t going to try to model the hole formation itself. Instead, we will simply set up a model of the cheese, as shown in the image above. We want to include a randomly distributed set of holes within the cheese, with a random hole radius between some upper and lower limit on the radius. We can build this randomized geometry in COMSOL Multiphysics version 5.3 using the new *Model Method* functionality. Let’s find out how…

When you’re running COMSOL Multiphysics® version 5.3 on the Windows® platform and working with the Model Builder, you will now see a *Developer* tab in the ribbon, as shown in the screenshot below. One of the options is *Record Method*. When clicked, this option prompts you to enter a new method *Name* and *Method type*. You can enter any string for the method name, while the method type can either be *Application method* or *Model method*.

An *Application method* can be used within a COMSOL app — a process introduced in this tutorial video. A *Model method* can be used within the underlying COMSOL Multiphysics model and can operate on (and add information to) the existing model data.

*The* Developer *tab, showing the* Record Method *and* Run Model Method *buttons.*

After you click the *OK* button in the *Record Method* dialog box, you can see a red highlight around the entire graphical user interface. All operations performed are recorded in this method until you click the *Stop Recording* button. You can then switch to the Application Builder and view your recorded method. The screenshot below shows the Application Builder and the method after we record the creation of a single geometry object. The object is a cylinder with the tag `cyl1`, a radius of 40 cm, and a height of 20 cm — a good starting approximation for a wheel of cheese.

*The Application Builder showing code for a model method used to create a geometry.*

When we’re working with the Model Builder, we can call this model method within any other model file (as long as it doesn’t already have an existing object with tag `cyl1` within the geometry sequence) via the *Run Model Method* button in the *Developer* tab. Of course, this simple model method just creates a cylinder. If we want to model the holes, we need to introduce a bit of randomness into our method. Let’s look at that next.

Within a model method, you can call standard Java® library functionality, such as the `Math.random()` method, which returns a double-precision number greater than or equal to 0.0 and less than 1.0. We want to use this method, along with a little bit of extra code, to set up a specified number of randomly positioned and sized holes within the model of the wheel of cheese.
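Mapping the [0.0, 1.0) output of `Math.random()` onto an arbitrary range is a one-line scaling. Here is a plain Java® sketch of that scaling on its own (not COMSOL API code), using illustrative range limits:

```java
// Map Math.random()'s [0.0, 1.0) output onto an arbitrary [min, max).
public class RandomRange {

    // Returns a uniformly distributed value in [min, max).
    static double sample(double min, double max) {
        return Math.random() * (max - min) + min;
    }

    public static void main(String[] args) {
        // For example, random radii between 0.1 cm and 1.0 cm:
        for (int i = 0; i < 5; i++) {
            System.out.println(sample(0.1, 1.0));
        }
    }
}
```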

Let’s say that we want 1000 holes randomly distributed throughout the cheese that each have a random radius between 0.1 cm and 1 cm. We also need to keep in mind that Emmentaler cheese has a natural rind within which no holes form. So, we need to add a bit of logic to make sure that our 1000 holes are actually inside the cheese. The complete model method below (with line numbers added and text strings in red) shows how to do this.

1  int NUMBER_OF_HOLES = 1000;
2  int ind = 0;
3  double hx, hy, hz, hr = 0.0;
4  double CHEESE_HEIGHT = 20.0;
5  double CHEESE_RADIUS = 40.0;
6  double RIND_THICKNESS = 0.2;
7  double HOLE_MIN_RADIUS = 0.1;
8  double HOLE_MAX_RADIUS = 1.0;
9  model.component("comp1").geom("geom1").lengthUnit("cm");
10 model.component("comp1").geom("geom1").selection().create("csel1", "CumulativeSelection");
11 while (ind < NUMBER_OF_HOLES) {
12   hx = (2.0*Math.random()-1.0)*CHEESE_RADIUS;
13   hy = (2.0*Math.random()-1.0)*CHEESE_RADIUS;
14   hz = Math.random()*CHEESE_HEIGHT;
15   hr = Math.random()*(HOLE_MAX_RADIUS-HOLE_MIN_RADIUS)+HOLE_MIN_RADIUS;
16   if ((Math.sqrt(hx*hx+hy*hy)+hr) > CHEESE_RADIUS-RIND_THICKNESS) { continue; }
17   if (((hz-hr) < RIND_THICKNESS) || ((hz+hr) > CHEESE_HEIGHT-RIND_THICKNESS)) { continue; }
18   model.component("comp1").geom("geom1").create("sph"+ind, "Sphere");
19   model.component("comp1").geom("geom1").feature("sph"+ind).set("r", hr);
20   model.component("comp1").geom("geom1").feature("sph"+ind).set("pos", new double[]{hx, hy, hz});
21   model.component("comp1").geom("geom1").feature("sph"+ind).set("contributeto", "csel1");
22   ind++;
23 }
24 model.component("comp1").geom("geom1").create("cyl1", "Cylinder");
25 model.component("comp1").geom("geom1").feature("cyl1").set("r", CHEESE_RADIUS);
26 model.component("comp1").geom("geom1").feature("cyl1").set("h", CHEESE_HEIGHT);
27 model.component("comp1").geom("geom1").create("dif1", "Difference");
28 model.component("comp1").geom("geom1").feature("dif1").selection("input").set("cyl1");
29 model.component("comp1").geom("geom1").feature("dif1").selection("input2").named("csel1");
30 model.component("comp1").geom("geom1").run();

Let’s go through this model method line by line:

1. Initialize and define the total number of holes that we want to put in the cheese.

2. Initialize and define an index counter to use later.

3. Initialize a set of double-precision numbers that holds the *xyz*-position and radius of each hole.

4–8. Initialize and define a set of numbers for the cheese height, radius, rind thickness, and the minimum and maximum possible hole radius, in centimeters.

9. Set the length unit of the geometry to centimeters.

10. Create a new selection with the tag `csel1` and type `CumulativeSelection`. Note that if a selection with this tag already exists, the method fails at this point. You could modify the method to account for this if you want to run the method repeatedly in the same file.

11. Initialize a while loop to create the specified number of holes.

12–14. Define the *xyz*-position of the holes by calling the random method and scaling the output such that the *xyz*-position of the holes lies within the outer Cartesian bounds of the cheese.

15. Define the hole radius to lie between the specified limits.

16–17. Check if the hole position and size are such that the hole is actually outside of the cheese. If so, continue to the next iteration of the while loop without executing any of the remaining code in the loop. This check can be done in a single line or split into three lines, depending on your preference of programming style.

18. Create a sphere with a name based on the current index value.

19–20. Set the radius and position of the newly created sphere. Although the radius can be passed in directly as a double, the position must be specified as an array of doubles.

21. Specify that this sphere feature is part of (contributes to) the selection set with the tag `csel1`.

22–23. Increment the index, indicating that a sphere has been created, and close the while loop.

24–26. Create a cylinder primitive that represents the wheel of cheese.

27–29. Set up a Boolean difference operation. The object to add is the cylinder primitive, while the object to subtract is the selection of all of the spheres.

30. Run the entire geometry sequence, which cuts all of the spheres out of the cylinder, forming the wheel of cheese.
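As a sanity check on the rejection logic in lines 16–17, the same two tests can be written as a small standalone function in plain Java® (not COMSOL API code) and exercised with a few hand-picked holes:

```java
// Plain Java version of the containment checks from lines 16-17: a
// candidate hole is rejected if it would intersect the rind at the
// cylindrical side wall or at the top or bottom faces.
public class HoleCheck {

    static boolean fitsInsideCheese(double hx, double hy, double hz, double hr,
                                    double cheeseRadius, double cheeseHeight,
                                    double rindThickness) {
        // Hole pokes through the cylindrical side wall.
        if (Math.sqrt(hx*hx + hy*hy) + hr > cheeseRadius - rindThickness) {
            return false;
        }
        // Hole pokes through the bottom or top rind.
        if (hz - hr < rindThickness || hz + hr > cheeseHeight - rindThickness) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A 1 cm hole at the center of the wheel fits; one centered
        // 39 cm from the axis would break through the side rind.
        System.out.println(fitsInsideCheese(0, 0, 10, 1, 40, 20, 0.2));
        System.out.println(fitsInsideCheese(39, 0, 10, 1, 40, 20, 0.2));
    }
}
```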

We can run this method in a new (and empty) model file to create a model of a wheel of cheese. Each time we rerun the method, we will get a different model. The geometry sequence in the model file contains all of the spheres and the cylinder primitives as well as the Boolean operation.

If we want to, we could also add some additional code to our model method to write out a geometry file of just the final geometry: the cheese. This geometry file can be written in the COMSOL Multiphysics native or STL file format. We could also write out to Parasolid® software or ACIS® software file formats with any of the optional modules that include the Parasolid® software kernel. Working with just the final geometry after it has been exported and reimported is faster than working with the complete geometry sequence.

We can see the final results of our efforts below. Delicious!

*A model of a wheel of Emmentaler cheese, ready to be eaten.*

We’ve looked at a simple example of how to use model methods to create a geometry with randomly placed and sized features. There are some questions that we haven’t addressed here, such as how to ensure that the holes are not overlapping and how to come up with a close-packed arrangement, but these turn out to be difficult mathematical questions that are fields in their own right.

Of course, there is a lot more that you can do with model methods, which we will save for another day. There are also other ways to create a random geometry, such as by parametrically defining a surface.

- Read about another use of model methods: generating simulation results that you can see and hear
- Check out the new features in COMSOL Multiphysics® version 5.3 on the Release Highlights page

*ACIS is a registered trademark of Spatial Corporation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Parasolid is a trademark or registered trademark of Siemens Product Lifecycle Management Software Inc. or its subsidiaries in the United States and in other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.*

Let’s start by considering a model of the electrical heating of a busbar, shown below. You may recognize this as an introductory example to COMSOL Multiphysics, but if you haven’t already modeled it, we encourage you to review this model by going through the *Introduction to COMSOL Multiphysics* PDF booklet.

*Electric currents (arrow plot) flowing through a metal busbar lead to resistive heating that raises the temperature (color surface plot).*

In this example, we model electric current flowing through a busbar. This leads to resistive heating, which in turn causes the temperature of the busbar to rise. We assume that there is only heat transfer to the surrounding air, neglecting any conductive heat transfer through the bolts and radiative heat transfer. The example also initially assumes that there isn’t any fan forcing air over the busbar. Thus, the transfer of heat to the air is via natural, or free, convection.

As the part heats the surrounding air, the air gets hotter. As the air gets hotter, its density decreases, causing the hot air to rise relative to the cooler surrounding air. These free convective air currents increase the rate of heat transfer from the part to the surrounding air. The air currents depend on the temperature variations as well as the geometry of the part and its surroundings. Convection can, of course, also happen in any other gas or liquid, such as water or transformer oil, but we will center this discussion primarily around convection in air.

We can classify the surrounding airspace into one of two categories: *Internal* or *External*. Internal means that there is a finite-sized cavity (such as an electrical junction box) around the part within which the air is reasonably well contained, although it might have known air inlets and outlets to an external space. We then assume that the thermal boundary conditions on the outside of the cavity and at the inlets and outlets are known. On the other hand, External implies that the object is surrounded by what is essentially an infinitely large volume of air. We then assume that the air temperature far away from the object is a constant, known value.

*The settings for a constant heat transfer coefficient.*

The introductory busbar example assumes free convective heat transfer to an external airspace. This is modeled using the following boundary condition for the heat flux:

q=h \left(T_{ext}-T \right)

where the external air temperature is *T _{ext}* = 25°C and *h* is the heat transfer coefficient.

This single-valued heat transfer coefficient represents an approximate average of all of the local variations in the air currents. Even for this simple system, any value within the typical free convection range for air, roughly *h* = 5–25 W/(m²·K), could be an appropriate heat transfer coefficient, and it’s worth trying out the bounding cases and comparing results.
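For instance, we can quickly compare the heat flux predicted by this boundary condition at the bounding coefficients. The sketch below assumes a hypothetical surface temperature and typical textbook free-convection values of *h*; none of these numbers come from the busbar model itself:

```python
# Convective heat flux q = h*(T_ext - T) at two bounding heat transfer
# coefficients. The h values are typical textbook bounds for free
# convection in air; the surface temperature is hypothetical.
def convective_flux(h, T_surface, T_ext):
    """Heat flux (W/m^2) leaving the surface, positive outward."""
    return h * (T_surface - T_ext)

T_surface = 80.0   # hypothetical busbar surface temperature, degC
T_ext = 25.0       # external air temperature, degC

for h in (5.0, 25.0):  # W/(m^2*K), typical free-convection bounds in air
    q = convective_flux(h, T_surface, T_ext)
    print(f"h = {h:5.1f} W/(m^2*K) -> q = {q:6.1f} W/m^2")
```

Even this trivial comparison shows a factor-of-five spread in the predicted heat loss, which is exactly why trying both bounds is worthwhile.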

If we instead know that there is a fan blowing air over this structure, then due to the faster air currents, we use a larger heat transfer coefficient, typically in the range of 25–250 W/(m²·K) for air, to represent the enhanced heat transfer.

If the surrounding fluid is a liquid such as water, then the ranges of free and forced heat transfer coefficients are much wider. For free convection in a liquid, roughly 50–1000 W/(m²·K) is the typical range. For forced convection, the range is even wider: roughly 100–20,000 W/(m²·K).

Clearly, entering a single-valued heat transfer coefficient for free or forced convection is an oversimplification, so why do we do it? First, it is simple to implement and easy to compare the best and worst cases. Also, this boundary condition can be applied with the core COMSOL Multiphysics package. However, there are some more sophisticated approaches available within the Heat Transfer Module and CFD Module, so let’s look at those next.

A *convective correlation* is an empirical relationship that has been developed for common geometries. When using the Heat Transfer Module or CFD Module, these correlations are available within the Heat Flux boundary condition, shown in the screenshot below.

*The Heat Flux boundary condition with the external natural convection correlation for a vertical wall.*

Using these correlations requires that you enter the part’s characteristic dimensions. For example, with our busbar model, we use the *External natural convection, Vertical wall* correlation and choose a wall height of 10 cm to model the free convective heat flux off of the busbar’s vertical faces. We also need to specify the external air temperature and pressure. These values can be loaded from the ASHRAE database, a process we describe in a previous blog post.

The table below shows schematics for all of the available correlations. They take the information about the surface geometry and use a Nusselt number correlation to compute a heat transfer coefficient. For the horizontally aligned faces of the busbar, for example, we use the *Horizontal plate, Upside* and *Horizontal plate, Downside* correlations.

When using the Forced Convection correlations, you must also enter the air velocity. These convective correlations have the advantage of being a more accurate representation of reality, since they are based on well-established experimental data. These correlations lead to a nonlinear boundary condition, but this usually results in only slightly longer computation times than when using a constant heat transfer coefficient. The disadvantage is that they are only appropriate to use when there is an empirical relationship that is reasonable for the part geometry.
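The exact correlations built into the software are documented in the product manuals, but to illustrate the kind of relationship they encode, the sketch below evaluates the well-known Churchill–Chu correlation for free convection from a vertical wall. The air properties are rough room-temperature values, and this is not necessarily the exact expression used internally:

```python
# Churchill-Chu correlation for free convection from a vertical wall:
#   Nu = {0.825 + 0.387*Ra^(1/6) / [1 + (0.492/Pr)^(9/16)]^(8/27)}^2
# and then h = Nu*k/H. Air properties near 300 K are rough assumptions.
g = 9.81          # m/s^2
beta = 1.0 / 300  # 1/K, ideal-gas expansion coefficient at ~300 K
nu = 1.6e-5       # m^2/s, kinematic viscosity of air
alpha = 2.2e-5    # m^2/s, thermal diffusivity of air
k = 0.026         # W/(m*K), thermal conductivity of air
Pr = nu / alpha   # Prandtl number

def h_vertical_wall(H, dT):
    """Average heat transfer coefficient, W/(m^2*K), for a wall of
    height H (m) at a temperature difference dT (K) above ambient."""
    Ra = g * beta * dT * H**3 / (nu * alpha)
    Nu = (0.825 + 0.387 * Ra**(1/6)
          / (1 + (0.492 / Pr)**(9/16))**(8/27))**2
    return Nu * k / H

# 10 cm wall, 55 K above ambient (a hypothetical operating point)
print(f"h = {h_vertical_wall(0.10, 55.0):.1f} W/(m^2*K)")
```

Note that the resulting coefficient lands inside the typical free-convection range for air and, unlike a constant *h*, it rises with the temperature difference, which is what makes the boundary condition nonlinear.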

| | Free Convection | Forced Convection |
|---|---|---|
| External | | |
| Internal | | |

*The available* Convective Correlation *boundary conditions.*

Note that all of the above convective correlations, even those classified as Internal, assume the presence of an infinite external reservoir of fluid; e.g., the ambient airspace. The heat carried away from the surfaces goes into this ambient airspace without changing its temperature, and the ambient air coming in is at a known temperature. If, however, we are dealing with convection in a completely enclosed container, then none of these correlations are appropriate and we must move to a different modeling approach.

Let’s consider a rectangular air-filled cavity. If this cavity is heated on one of the vertical sides and cooled on the other, then there will be a regular circulation of the air. Similarly, there will be air circulation if the cavity is heated from below and cooled from above. These cases are shown in the images below, which were generated by solving for both the temperature distribution and the air flow.

*Free convective currents in vertically and horizontally aligned rectangular cavities.*

Solving for the free convective currents is fairly involved. See, for example, this blog post on modeling natural convection. Therefore, we might like to find a simpler alternative. Within the Heat Transfer Module, there is the option to use the *Equivalent conductivity for convection* feature. When using this feature, the effective thermal conductivity of the air is increased based upon correlations for the horizontal and vertical rectangular cavity cases, as shown in the screenshot below.

*The Equivalent conductivity for convection feature and settings.*

The air domain is still explicitly modeled using the *Fluid* domain feature within the *Heat Transfer* interface, but the air flow fields are not computed and the velocity term is simply neglected. The thermal conductivity is increased by an empirical correlation factor that depends on the cavity dimensions and the temperature variation across the cavity. The dimensions of the cavity must be entered, but the software can automatically determine and update the temperature difference across the cavity.

*Temperature distribution in vertically and horizontally aligned cavities using the Equivalent conductivity for convection feature. The free convective air currents are not computed. Instead, the thermal conductivity of the air is increased.*
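To get a feel for the size of the enhancement, the sketch below evaluates one commonly cited correlation for a tall vertical rectangular cavity (the Berkovsky–Polevikov correlation, valid for aspect ratios of roughly 2 to 10). Both the correlation choice and the air properties are assumptions for illustration, not necessarily what the feature uses internally:

```python
# Effective thermal conductivity k_eff = Nu * k for a vertical rectangular
# cavity, using the Berkovsky-Polevikov correlation (illustrative only):
#   Nu = 0.22 * (Pr/(0.2 + Pr) * Ra_L)^0.28 * (H/L)^(-1/4)
# Air properties near 300 K are rough assumptions.
g, beta = 9.81, 1.0 / 300
nu, alpha, k = 1.6e-5, 2.2e-5, 0.026
Pr = nu / alpha

def k_eff_vertical_cavity(L, H, dT):
    """Effective conductivity, W/(m*K), for cavity width L, height H (m),
    and hot-to-cold wall temperature difference dT (K)."""
    Ra = g * beta * dT * L**3 / (nu * alpha)
    Nu = 0.22 * (Pr / (0.2 + Pr) * Ra)**0.28 * (H / L)**(-0.25)
    return max(Nu, 1.0) * k  # conduction floor: Nu cannot drop below 1

# A 2 cm wide, 10 cm tall cavity with a 20 K temperature difference
print(f"k_eff = {k_eff_vertical_cavity(0.02, 0.10, 20.0):.4f} W/(m*K)")
```

For this made-up cavity the effective conductivity is roughly double that of still air, which is the kind of correction the feature applies automatically as the computed temperature difference changes.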

This approach for approximating free convection in a completely closed cavity requires us to mesh the air domain and solve for the temperature field in the air, but this usually adds only a small computational cost. The disadvantage of this approach is that it is not very applicable for nonrectangular geometries.

Next, let’s consider a completely sealed enclosure, but with a fan or blower inside that actively mixes the air. We can reasonably assume that well-mixed air is at a constant temperature throughout the cavity. In this case, it is appropriate to use the *Isothermal Domain* feature, which is available with the Heat Transfer Module when the *Isothermal domain* option is selected in the Settings window.

*The settings associated with using the Isothermal Domain interface.*

A well-mixed air domain can be explicitly modeled using the Isothermal Domain feature. In the model, the temperature of the entire domain is a constant value. The temperature of the air is computed based upon the balance of heat entering and leaving the domain via the boundaries. The Isothermal Domain boundaries can be set as one of the following options:

- *Thermally Insulated*: no heat transfer across the boundary
- *Continuity*: continuity of temperature across the boundary
- *Ventilation*: a known mass flow of fluid, of known temperature, into or out of the isothermal domain
- *Convective Heat Flux*: a user-specified heat transfer coefficient, as described earlier
- *Thermal Contact*: a specified thermal resistance

Of all of these boundary condition options, the *Convective Heat Flux* is the most appropriate for well-mixed air in an enclosed cavity.

*Representative results when using an Isothermal Domain feature. The well-mixed air domain is a constant temperature and there is heat transfer to the surrounding solid domains via a specified heat transfer coefficient.*

The most computationally expensive approach, but also the most general, is to explicitly model the airflow. We can model both forced and free convection as well as simulate an internal or external flow. This type of modeling can be done with either the Heat Transfer Module or CFD Module.

*An example of computing air flow and temperature within an enclosure.*

If you finished the *Introduction to COMSOL Multiphysics* booklet, you have already solved one example of an internal forced convection model. You can learn more about explicitly modeling airflow in the resources mentioned at the end of this post.

We will finish up this topic by addressing the question: When can free convection in air be ignored and how can we model these cases? When a cavity’s dimensions are very small, such as a thin gap between parts or a very thin tube, we run into the possibility that viscous damping will exceed any buoyancy forces. This balance of buoyancy to viscous forces is characterized by the nondimensional Rayleigh number. The onset of free convection can be quite varied depending on boundary conditions and geometry. A good rule of thumb is that for dimensions less than 1 mm, there will likely not be any free convection, but once the dimensions of the cavity get larger than 1 cm, there likely will be free convective currents.

So how can we model heat transfer through these small gaps? If there is no air flow, then these air-filled regions can simply be modeled as either a solid or a fluid with no convective term. This is demonstrated in the Window and Glazing Thermal Performances tutorial. It is also appropriate to model the air as a solid within any microscale enclosed structure.

If these thin gaps are very small compared to the other dimensions of the system being analyzed, you can further simplify the gaps by modeling them via the Thin Layer boundary condition with a *Thermally thick approximation* layer type. This boundary condition introduces a jump in temperature across interior boundaries based on the specified thickness and thermal conductivity.

*The Thin Layer boundary condition can model a thin air gap between parts.*
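The temperature jump introduced by this boundary condition is simply one-dimensional conduction across the layer, Δ*T* = *q·d/k*. A quick sketch with an assumed air gap:

```python
# Temperature jump across a thermally thick thin layer: dT = q * d / k_layer.
# The values below are hypothetical, for illustration only.
def thin_layer_jump(q, d, k_layer):
    """Temperature jump (K) for heat flux q (W/m^2), layer thickness d (m),
    and layer thermal conductivity k_layer (W/(m*K))."""
    return q * d / k_layer

q = 500.0        # W/m^2, heat flux through the gap (hypothetical)
d = 0.2e-3       # m, a 0.2 mm air gap
k_air = 0.026    # W/(m*K), still air near room temperature

print(f"dT = {thin_layer_jump(q, d, k_air):.1f} K")  # ~3.8 K jump
```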

We can use the previous two approaches within the core COMSOL Multiphysics package. In the Heat Transfer Module, there are additional options for the Thin Layer condition to consider more general and multilayer boundaries, which can be composed of several layers of materials.

Before closing out this discussion, we should also quickly address the question of radiative heat transfer. Although we haven’t discussed radiation here, an engineer must always take it into consideration. Surfaces exposed to ambient conditions will radiate heat to the surroundings and be heated by the sun. The magnitude of radiative heating from the sun is significant — about 1000 watts per square meter — and should not be neglected. For details on modeling radiative heat transfer to ambient conditions, read this previous blog post.

There will also be radiative heat transfer between interior surfaces. Radiative heat flux between surfaces is a function of the difference of the fourth powers of the surface temperatures. Keep in mind that the radiative heat transfer between two surfaces at 20°C and 50°C will be 200 watts per square meter at most, but rises to about 1000 watts per square meter for surfaces at 20°C and 125°C. To correctly compute the radiative heat transfer between surfaces, it is also important to compute the view factors with the Heat Transfer Module.
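The quoted magnitudes are easy to verify from the Stefan–Boltzmann law, assuming black surfaces and a unit view factor:

```python
# Net radiative flux between two black surfaces at T1 and T2 (with unit
# view factor): q = sigma * (T1^4 - T2^4). This reproduces the rough
# magnitudes quoted in the text.
SIGMA = 5.670e-8  # W/(m^2*K^4), Stefan-Boltzmann constant

def radiative_flux(T1_C, T2_C):
    """Net flux (W/m^2) from a black surface at T1_C to one at T2_C (degC)."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return SIGMA * (T1**4 - T2**4)

print(f"50 degC vs 20 degC:  {radiative_flux(50.0, 20.0):6.0f} W/m^2")
print(f"125 degC vs 20 degC: {radiative_flux(125.0, 20.0):6.0f} W/m^2")
```

Real surfaces with emissivity below one and partial view factors will exchange less than this, which is why the quoted values are upper bounds.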

Today we looked at several approaches for modeling convection, starting from the simplest approach of using a constant convective heat transfer coefficient. We then discussed using an Empirical Convective Correlation boundary condition before going over how to use an effective thermal conductivity within a domain and an isothermal domain feature, approaches with higher accuracy and only a slightly greater computational cost. The most computationally intensive approach — explicitly computing the flow field — is, of course, the most general. We also touched on when it is appropriate to neglect free convection entirely and how to model such situations. You should now have a greater understanding of the available options and trade-offs for modeling free and forced convection. Happy modeling!

- Learn about explicitly modeling air flow and heat transfer on the COMSOL Blog
- Get an introduction to simulating heat transfer in an archived webinar

Let’s suppose that we are tasked with designing a coil such that the magnetic field along part of the centerline is as close to a target value as possible. As we saw in an earlier blog post, we can achieve this by adjusting the current through each turn of the coil to be different. However, this requires that we include a separate current control for each turn in our physical design. Instead, we can use a single current control for the entire coil and adjust the physical spacing of the coils along the axial direction.

*A ten-turn axisymmetric coil. The objective is to alter the magnetic field at the centerline (green).*

The case that we will consider here is shown in the image above. A ten-turn axisymmetric coil is driven by a single current source; that is, the same current flows through each turn. The initial coil design spaces the 1-cm-diameter coil turns a distance of *S _{0}* = 4 cm apart. Since the coil is axisymmetric (and we are only interested in solutions that are symmetric about the *z* = 0 plane), we only need to model five of the ten turns. The design is then described by the coil current and the axial displacements of these five turns.

*The computational model. We want to change the five coil positions and the coil current.*

Our optimization objective is to get the *B _{z}* field as close as possible to a desired value, *B _{0}*, over a length *L _{0}* along the centerline by adjusting the coil current, *I*, and the five turn displacements, Δ*Z _{1}*, …, Δ*Z _{5}*.

More formally, these statements can be written as:

\begin{aligned}
& \underset{I, \Delta Z_1, \ldots ,\Delta Z_5}{\text{minimize:}}
& & \frac{1}{L_0} \int_0^{L_0} \left( \frac{B_z}{B_0} -1 \right) ^2 d l \\
& \text{subject to:}
& & -(S_0-G_0)/2 \le \Delta Z_1 \le \Delta Z_{max}\\
& & & -\Delta Z_{max} \le \Delta Z_2, \ldots ,\Delta Z_5 \le \Delta Z_{max}\\
& & & G_0 \le (Z_5-Z_4) \\
& & & G_0 \le (Z_4-Z_3) \\
& & & G_0 \le (Z_3-Z_2) \\
& & & G_0 \le (Z_2-Z_1) \\
& & & 0 \le I \le I_{max}
\end{aligned}

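To make the objective and the spacing constraints concrete, here is a small numeric sketch of how they could be evaluated outside of the software. The field profile and turn positions below are made up for illustration:

```python
import numpy as np

# Sketch of the normalized objective, (1/L0) * integral of (Bz/B0 - 1)^2 dl,
# evaluated with the trapezoidal rule, and of the minimum-gap constraints
# G0 <= Z_{k+1} - Z_k. The Bz profile and positions are hypothetical.
def objective(z, Bz, B0):
    f = (Bz / B0 - 1.0)**2
    integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(z))
    return integral / (z[-1] - z[0])

def gaps_ok(Z, G0):
    """True if every pair of neighboring turn positions is at least G0 apart."""
    return bool(np.all(np.diff(Z) >= G0))

L0, B0 = 0.10, 1.0e-3                  # optimization length (m), target field (T)
z = np.linspace(0.0, L0, 101)
Bz = B0 * (1.0 + 0.05 * np.cos(2 * np.pi * z / L0))   # made-up 5% ripple

print(f"objective = {objective(z, Bz, B0):.2e}")      # ~0.05^2 / 2
print(gaps_ok(np.array([0.0, 0.04, 0.08, 0.12, 0.16]), G0=0.02))
```

The normalizations by *L _{0}* and *B _{0}* keep the objective near unity in magnitude, which is the same scaling idea used for the control variables below.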

We solve this problem with a combination of parameter and shape optimization by using the *Optimization* and *Deformed Geometry* interfaces in COMSOL Multiphysics.

We can begin our implementation by reviewing the model developed here, which optimizes for a particular field value. We start with the same *Optimization* interface and *Integral Objective* feature introduced in the previous blog post. Two *Global Control Variable* features are then used. The first sets up the displacements of the five coils, using *Control Variables Scaling* to scale the optimization variables close to unity. The second *Global Control Variables* feature similarly defines and constrains the current.

*The definitions of the variables that control the five coils’ positions.*

The five *Control Variables* shown in the screenshot above define the displacements of the coils, as well as a small square region of space around each coil, which is shown as the green domains in the illustration below. As these green domains move up and down, the surrounding yellow domains must stretch and shrink to accommodate, while the surrounding blue domain is fixed. Since we know the displacements of the green domains, we can specify a linear variation of displacement along all of the red edges. This linear displacement variation is computed using a *Coefficient Form Boundary PDE* interface, as described in an earlier blog post on modeling translational motion.

*The definitions of the deformations for the various domains in the model.*

This information about the specified displacements of the various domains is set up using the *Deformed Geometry* interface, as shown in the screenshot below. The *Prescribed Deformation* domain features move the green domains and the yellow domains are allowed to deform due to the *Free Deformation* domains. The *Prescribed Mesh Displacement* boundary features apply to the black and red edges and completely define the deformations of the yellow domains.

*The control over the coil turn displacement via the Prescribed Deformation feature in the* Deformed Geometry *interface.*

As a consequence of setting up the *Deformed Geometry* interface in this way, the five control variables for the positions of the coils now represent a shape optimization problem. Previously, we have discussed shape optimization in a more general case from structural mechanics. Shape optimization takes advantage of the ability of COMSOL Multiphysics to compute design sensitivities with respect to changes in the shape of the geometry.

We also need to define a set of *Global Inequality Constraints* to prevent the green domains surrounding the coils from getting too close to each other and intersecting. The screenshot below shows this implementation. Note that the constraint is scaled with respect to the gap size *G _{0}* so that the constraint equation is also close to one in magnitude.

*One of the four constraints that keep the coils from getting too close to each other.*

Due to the large deformations that can occur in the domains around the coils that stretch and contract, it is also helpful to use a mapped mesh.

*A mapped mesh is used in the deforming domains around the coils. The infinite element domain also has a mapped mesh.*

We can then solve this problem using a gradient-based optimization solver (SNOPT), taking advantage of the analytically computed gradients. The current through the coil and the coil positions are adjusted to minimize the above objective function. The results of the optimization are shown in the figure below.

*The magnetic flux density’s* z*-component along the centerline for the optimized coil.*

*The optimized coil positions.*

We have introduced a model that uses a combination of shape and parameter optimization to adjust the coil current and spacing between the coils in a 2D axisymmetric coil. By taking advantage of the *Optimization* and *Deformed Geometry* interfaces, we are able to analytically compute the derivatives for this problem and converge to an optimum very quickly.

- Read a blog post on modeling a linear electromagnetic plunger
- Browse the Optimization category on the COMSOL Blog

The problem we will look at today is the optimization of a ten-turn axisymmetric coil structure, as shown in the image below. Each of the five turns on either side of the *xy*-plane is symmetrically but independently driven.

*A ten-turn coil with five independently driven coil pairs. The objective is to alter the magnetic field at the centerline (green highlight).*

The coil is both rotationally symmetric and symmetric about the *z* = 0 plane, so we can reduce the computational model to a 2D axisymmetric model, as shown in the schematic below. Our modeling domain is truncated with an *infinite element* domain. We use the Perfect Magnetic Conductor boundary condition to exploit symmetry about the *z* = 0 plane. Thus, our model reduces to a quarter-circle domain with five independent coils that are modeled using the *Coil Domain* feature.

*A schematic of the computational model.*

If all of the coils are driven with the same current of 10 A, we can plot the *z*-component of the magnetic flux density along the centerline, as shown in the image below. It is this field distribution along a part of the centerline that we want to change via optimization.

*The magnetic field distribution along the coil centerline. We want to adjust the magnetic field within the optimization zone.*

From the image above, we see the magnetic field along a portion of the centerline due to a current of 10 A through each coil. It is this field distribution that we want to change by adjusting the current flowing through the coils. Our design variables are the five unique coil currents: *I _{1}*, …, *I _{5}*. These design variables have bounds: −*I _{max}* ≤ *I _{k}* ≤ *I _{max}*. That is, the current cannot be too great in magnitude, otherwise the coils will overheat.

We will look at three different optimization problem statements:

- To have the magnetic field at the centerline be as close to a desired target value as possible
- To minimize the power needed to drive the coil, along with a constraint on the field minimum at several points
- To minimize the gradient of the magnetic field along the centerline, along with a constraint on the field at one point

Let’s state these optimization problems a bit more formally. The first optimization problem can be written as:

\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{L_0} \int_0^{L_0} \left( \frac{B_z}{B_0} -1 \right) ^2 d l \\
& \text{subject to:}
& & -I_{max} \le I_1, \ldots ,I_5 \le I_{max}
\end{aligned}


The objective here is to minimize the difference between the computed *B _{z}*-field and the desired field, *B _{0}*, over the optimization zone of length *L _{0}* along the centerline.

Now, let’s look at the implementation of this problem within COMSOL Multiphysics. We begin by adding an *Optimization* interface to our model, which contains two features. The first feature is the *Global Control Variables*, as shown in the screenshot below. We can see that five control variables are set up: `I1,...,I5`. These variables are used to specify the current flowing through the five *Coil* features in the *Magnetic Fields* interface.

The *Initial Value*, *Upper Bound*, and *Lower Bound* of these variables are specified by two *Global Parameters*, `I_init` and `I_max`. Also note that the *Scale factor* is set such that the optimization variables have a magnitude close to one. We will use this same setup for the control variables in all three examples.

*Setting up the Global Control Variables feature, which specifies the coil currents.*

Next, the objective function is defined via the *Integral Objective* feature over a boundary, as shown in the screenshot below. Note that the *Multiply by 2πr* option is toggled off.

*The implementation of the objective function to achieve a desired field along one boundary.*

We include an *Optimization* step in the Study, as shown in the screenshot below. Since our objective function can be analytically differentiated with respect to the design variables, we can use the SNOPT solver. This solver takes advantage of the analytically computed gradient and solves the optimization problem in a few seconds. All of the other solver settings can be left at their defaults.

*The Optimization study step.*

After solving, we can plot the fields and results. The figure below shows that the *B _{z}*-field matches the target value very well.

*Results of optimizing for a target value of magnetic flux along the centerline.*
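It is worth noting why this problem is solved so quickly: the magnetostatic problem is linear in the coil currents, so the centerline field is a superposition *B _{z}*(*z*) = Σ_{k} *I _{k}* *b _{k}*(*z*), where *b _{k}* is the field per unit current of the *k*th coil pair. Ignoring the current bounds, the first optimization problem is then an ordinary linear least-squares fit. The sketch below demonstrates this with made-up per-unit-current field profiles, not fields from the actual model:

```python
import numpy as np

# Least-squares fit of five coil currents so that sum_k I_k * b_k(z) matches
# a target B0 along the centerline. The unit-current profiles b_k below are
# made-up bump functions, for illustration only.
a = 0.05                                        # hypothetical coil radius, m
centers = np.array([0.02, 0.06, 0.10, 0.14, 0.18])  # hypothetical pair positions, m

def b_unit(z, zc):
    """Made-up unit-current axial field of a symmetric coil pair at +/- zc."""
    return 1e-5 * (a**2 / (a**2 + (z - zc)**2)**1.5
                   + a**2 / (a**2 + (z + zc)**2)**1.5)

z = np.linspace(-0.1, 0.1, 201)                 # centerline sample points, m
A = np.column_stack([b_unit(z, zc) for zc in centers])   # 201 x 5 design matrix
B0 = 1.0e-3                                     # target field, T

I, *_ = np.linalg.lstsq(A, np.full_like(z, B0), rcond=None)
Bz = A @ I
print("currents (A):", np.round(I, 2))
print(f"max relative field error: {np.max(np.abs(Bz / B0 - 1)):.1%}")
```

A gradient-based solver such as SNOPT handles the additional current bounds, but the underlying least-squares structure is why convergence takes only seconds.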

Our second optimization problem is to minimize the total power needed to drive the coil and to include a constraint on the field minimum at several points along the centerline. This can be expressed as:

\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{P_o}\sum_{k=1}^{5} P_{coil}^k \\
& \text{subject to:}
& & -I_{max} \le I_1, \ldots ,I_5 \le I_{max}\\
& & & 1 \le B_z^i/B_{0}, \quad i=1, \ldots, M
\end{aligned}


where *P _{o}* is the initial total power dissipated in all of the coils and *P _{coil}^{k}* is the power dissipated in the *k*th coil.

We further want to constrain the fields at *M* number of points on the centerline to be above a value of *B*_{0}.
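Before looking at the implementation, a quick numeric sketch of this second problem’s ingredients can be helpful: the objective is the normalized total dissipated power, and each point constraint simply requires *B _{z}*/*B _{0}* ≥ 1. All of the values below are hypothetical:

```python
import numpy as np

# Sketch of the second problem's normalized objective and constraints.
# The coil powers are modeled here as I_k^2 * R_k, which is an assumption;
# the resistances, currents, and field values are all hypothetical.
R = np.full(5, 1.0e-3)        # ohm, assumed per-coil-pair resistance
P0 = 1.0                      # W, assumed initial total dissipated power

def objective(I):
    """Normalized total dissipated power, sum_k I_k^2 * R_k / P0."""
    return np.sum(I**2 * R) / P0

def constraints(Bz_points, B0):
    """Normalized field constraints Bz_i/B0 at the M sample points.
    All returned values must be >= 1 for a feasible design."""
    return Bz_points / B0

I = np.array([10.0, 8.0, 6.0, 8.0, 10.0])        # hypothetical currents, A
Bz_points = np.array([1.02, 1.01, 1.00, 1.01, 1.02]) * 1e-3  # T, made up

print(f"objective = {objective(I):.3f}")
print("feasible:", bool(np.all(constraints(Bz_points, 1.0e-3) >= 1.0)))
```

Both quantities are normalized to be near unity, mirroring the scaling used in the model so that the optimizer treats all terms on an equal footing.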

The implementation of this problem uses the same Global Control Variables feature as before. The objective of minimizing the total dissipated coil power is implemented via the *Global Objective* feature, as shown in the screenshot below. The built-in variables for the dissipated power in each Coil feature (`mf.PCoil_1,...,mf.PCoil_5`) can be used directly. The objective is normalized with respect to the initial total power so that it is close to unity.

*Implementation of the objective to minimize total power.*

The constraint on the field minimum has to be implemented at a set of discrete points within the model. In this case, we introduce five points evenly distributed over the optimization zone. Each of these constraints has to be introduced with a separate *Point Sum Inequality Constraint* feature, as shown below. We again apply a normalization such that this constraint has a magnitude of one. Note that the *Multiply by 2πr* option is toggled off, since these points lie on the centerline.

*The implementation of the constraint on the field minimum at a point.*

We can solve this problem using the same approach as before. The results are plotted below. It is interesting to note that the minimal dissipated power solution does not result in a very uniform field distribution over the target zone.

*Results of optimizing for a minimum power dissipation with a constraint on the field minimum.*

Finally, let’s consider minimizing the gradient of the field along the optimization zone, with a constraint on the field at the centerpoint. This can be expressed as:

\begin{aligned}
& \underset{I_1, \ldots ,I_5}{\text{minimize:}}
& & \frac{1}{L_0 B_{0}} \int_0^{L_0} \left( \frac{\partial B_z}{\partial z } \right) ^2 d l \\
& \text{subject to:}
& & -I_{max} \le I_1, \ldots ,I_5 \le I_{max}\\
& & & B_z(r=0,z=0) = B_0
\end{aligned}


The constraint here fixes the field at the centerpoint of the coil. Although the *Optimization* interface does not have an explicit equality constraint, we can achieve the same result with an inequality constraint with equal upper and lower bounds. We again apply a normalization such that our constraint is actually 1 ≤ *B _{z}*/*B _{0}* ≤ 1, as shown in the image below.

*The implementation of an equality constraint.*

The objective of minimizing the gradient of the field within the target zone is implemented via the Integral Objective feature (shown below). The gradient of the *B _{z}*-field with respect to the *z*-coordinate is evaluated with the expression `d(mf.Bz,z)`.

*The objective of minimizing the gradient of the field.*

We can use the same solver settings as before. The results for this case are shown below. The field within the optimization zone is quite uniform and matches the target at the centerpoint.

*Results of optimizing for a minimum field gradient with a constraint on the field at a point.*

Although the field here appears almost identical to the first case, the solution in terms of the coil currents is quite different, which raises an interesting point. There are multiple combinations of coil currents that will give nearly identical solutions in terms of minimizing the field difference or gradient. Another way of saying this is that the objective function has multiple local minimum points.

The SNOPT optimization solver uses a type of gradient-based approach and will approach different local minima for different initial conditions for the coil currents. Although a gradient-based solver will converge to a local minimum, there is no guarantee that this is in fact the global minimum. In general (unless we perform an exhaustive search of the design space), it is never possible to guarantee that the optimized solution is a global minimum.

Furthermore, if we were to increase the number of coils in this problem, we can get into the situation where multiple combinations of coil currents are nearly equivalently optimal. That is, there is no single optimal point, but rather an “optimal line” or “optimal surface” in the design space (the combination of coil currents) that is nearly equivalent. The optimization solver does not provide direct feedback of this, but will tend to converge more slowly in such cases.

We have shown three different ways to optimize the currents flowing through the different turns of a coil. These three cases introduce different types of objective functions and constraints and can be adapted for a variety of other cases. Depending upon the overall goals and objectives of your coil design problem, you may want to use any one of these or even an entirely different objective function and constraint set. These examples show the power and flexibility of the Optimization Module in combination with the AC/DC Module.

Of course, there is even more that can be done with these problems. In an upcoming blog post, we will look at adjusting the locations of the coil turns — stay tuned.

*Editor’s note: We have published the follow-up post in this blog series. Read it here: “How to Optimize the Spacing of Electromagnetic Coils“.*

- Browse the COMSOL Blog for more information on modeling electromagnetic coils:

Let’s consider the model shown below of two current-carrying circular coils, each carrying a current of I_{0} = 0.25 mA. The magnetic flux density, the *B*-field, is plotted in the space around these primary coils. Suppose that we want to introduce a smaller pickup coil in the space between the larger primary coils. This pickup coil intercepts part of the magnetic flux and is defined by its outside perimeter curve *C* and enclosed area *A*.

*The magnetic flux density around a Helmholtz coil with a pickup coil inside. The enclosed area of the coil is shaded in gray. We want to recompute the mutual inductance as the pickup coil changes orientation.*

The mutual inductance between these primary and pickup coils is defined as:

L=\frac{1}{I_0}\int_A\mathbf{B \cdot n } dA

where *n* is a vector normal to the surface *A*.

Since the *B*-field is computed from the magnetic vector potential, **B** = ∇ × **A**, we can use Stokes’ theorem to see that the above surface integral is equivalent to the line integral:

L=\frac{1}{I_0}\oint_C\mathbf{A \cdot t } dC

where *t* is the tangent vector to the curve *C*.

This method of computing mutual inductance is also shown in the Application Gallery example of an RFID system.
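As a sanity check on the equivalence of the two integrals, consider a circular pickup coil of radius *a* in a uniform field **B** = *B*₀**ẑ**, for which one valid vector potential is **A** = (*B*₀/2)(−*y*, *x*, 0). The sketch below evaluates both integrals numerically; the field, current, and radius are hypothetical values, not those of the Helmholtz coil model:

```python
import numpy as np

# Numerical check that the line integral of A.t around the pickup coil
# perimeter equals the flux integral of B.n over its area, for a uniform
# field B = B0*z_hat with vector potential A = (B0/2)*(-y, x, 0).
# The loop radius, field, and current are hypothetical.
B0 = 1.0e-6      # T, uniform field (hypothetical)
I0 = 0.25e-3     # A, primary coil current from the example
a = 0.01         # m, pickup coil radius (hypothetical)

# Line integral of A.t around the loop perimeter (dC = a * dtheta)
theta = np.linspace(0.0, 2 * np.pi, 2001)
x, y = a * np.cos(theta), a * np.sin(theta)
Ax, Ay = -B0 / 2 * y, B0 / 2 * x
tx, ty = -np.sin(theta), np.cos(theta)          # unit tangent vector
integrand = Ax * tx + Ay * ty
line = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(theta)) * a

flux = B0 * np.pi * a**2                        # B.n integrated over the area

print(f"L from line integral: {line / I0:.6e} H")
print(f"L from flux integral: {flux / I0:.6e} H")
```

The two values agree to numerical precision, which is exactly what the `L_C` and `L_A` variables in the model should do (up to the orientation-dependent sign noted below).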

We can place the pickup coil at any location and orientation around the primary coils, solve the model, and evaluate either of the above integrals. We can even add the pickup coil geometry features after solving the model by using the *Update Solution* functionality. This functionality remeshes the entire model geometry and maps the previously computed solution from the old mesh onto the new mesh. This is appropriate and easy to do if the changes to the geometry do not affect the solution and if we only want to try out a few different pickup coil locations.

Suppose that we want to try out many different locations and orientations for the pickup coil. Since the *A*-field doesn’t change, we don’t want to re-solve or remesh the entire model, but just want to move the pickup coil geometry around. We can achieve this by using a combination of multiple geometry components, the *General Extrusion* component coupling, and *Integration* component couplings.

We begin with the existing Helmholtz coil example and introduce another 3D component into our model. The geometry within this second component is used to define the pickup coil’s edges, cross-sectional surface, and orientation. The *Rotate* and *Move* geometry features enable us to reorient the coil into any position that we would like. For visualization purposes, we can also include the edges that define the primary coils, as shown in the screenshot below.

*The setup of a second component and geometry.*

The spatial coordinates of *Component 2* overlap exactly with *Component 1*, but otherwise there is no connection between the two. A mapping is introduced via the General Extrusion component coupling that is defined in *Component 1*. This coupling uses a *Source Selection* for all of the domains in *Component 1*. Whenever this coupling is evaluated at a point in space in *Component 2*, it takes the fields at the same point in space in *Component 1*.

The approach of using two components and mapping the solution between them is also introduced in the Submodeling Analysis of a Shaft tutorial model.

Within *Component 2*, we define two Integration component couplings, one defined over the edges of the pickup coil, named *intC*, and the other over the cross-sectional boundary, named *intA*. This allows us to compute the mutual inductance with either of the above approaches by defining two variables. These variables, named `L_C` and `L_A`, are defined via the equations:

intC(t1x*comp1.genext1(comp1.Ax)+t1y*comp1.genext1(comp1.Ay)+t1z*comp1.genext1(comp1.Az))/I0

and

intA(nx*comp1.genext1(comp1.mf.Bx)+ny*comp1.genext1(comp1.mf.By)+nz*comp1.genext1(comp1.mf.Bz))/I0

Here, t1x, t1y, and t1z are the components of the tangent vector to the pickup coil perimeter; nx, ny, and nz are the components of the normal vector to the pickup coil surface; and *I _{0}* = 0.25 mA is a global parameter.

Since there are multiple components in the model, we must use the full name of the component couplings and fields that reside within *Component 1*. Also, note that the normal vector and perimeter tangent vector can be oriented in one of two opposite directions, which results in a sign change that we need to be aware of.

*Integration component coupling defined over the pickup coil perimeter in the second component.*
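As a quick sanity check outside of COMSOL, we can verify that the two integrals agree with a short numerical sketch. The setup here is an assumption chosen for easy hand verification, not the Helmholtz coil model itself: a uniform field *B* = *B _{0}* ẑ, for which a valid vector potential is *A* = ½ *B* × *r*, and a circular pickup coil of radius *a* in the *xy*-plane. Both integrals should give the same flux, *B _{0}*π*a*².

```python
import numpy as np

# Illustrative check (not COMSOL code): for a uniform field B = (0, 0, B0),
# the vector potential A = 0.5 * B x r links the same flux through a circle
# whether we use the line integral of A or the surface integral of B.
B0 = 2e-3     # T, assumed uniform flux density
I0 = 0.25e-3  # A, excitation current (matches the model's global parameter)
a = 0.1       # m, pickup-coil radius

s = np.linspace(0, 2*np.pi, 2000, endpoint=False)
x, y = a*np.cos(s), a*np.sin(s)
Ax, Ay = -0.5*B0*y, 0.5*B0*x        # A = 0.5 * B x r for B = (0, 0, B0)
tx, ty = -np.sin(s), np.cos(s)      # unit tangent along the perimeter
ds = 2*np.pi*a/len(s)               # arc-length element

L_C = np.sum(Ax*tx + Ay*ty)*ds / I0   # line integral of A, divided by I0
L_A = B0*np.pi*a**2 / I0              # surface integral of B.n (exact here)
print(L_C, L_A)
```

The two estimates agree to machine precision for this uniform-field case; in the actual model, the agreement is limited only by the mesh resolution.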

We can also sweep over different positions and orientations of the pickup coil. We already have the solution for the magnetic fields computed in *Study 1*. We add a second study that includes a parametric sweep, but does not solve for any physics. Within the study step settings, we can specify that we want to use the existing solution for the magnetic field, as shown in the screenshot below.

*Study step settings showing how the solution from* Study 1 *is used in* Study 2*.*

This second study takes relatively little computational resources when compared to remeshing the entire model and re-solving for the magnetic fields. For each different position of the pickup coil, the software only needs to remesh the pickup coil surface. The solution from *Study 1* is then mapped onto this new pickup coil position and the two variables are evaluated.

This approach also works for nonplanar integration surfaces and multiturn integration curves, as demonstrated in the RFID example. Not only can the integration edges and surfaces be almost arbitrary, but they can also easily be reoriented into any position using the *Rotate* and *Move* geometry operations. Thus, this is a very general approach for evaluating fields over arbitrary geometries and locations.

Now that we’ve seen the most flexible approach, which enables us to perform an integration over an arbitrary shape, let’s look at a simpler case. Suppose that we are dealing with a planar integration surface and the surface edges can easily be defined in terms of the *x*- and *y*-coordinates relative to an origin point on that plane.

The first step in this approach is to take a slice (cut plane) through the entire modeling space. We can do this via the *Cut Plane* data set, as described in this blog post on computing the total normal flux on a planar surface. We can control the origin of the cut plane and the normal vector to the cut plane via the global parameters. Also, note that the cut plane defines local variables called *cpl1x* and *cpl1y* for the local *x* and *y* locations, respectively, as well as *cpl1nx*, *cpl1ny*, and *cpl1nz* for the components of the normal vector to the plane.

*Using the Cut Plane data set. The origin point and normal are defined in terms of global parameters. The advanced settings show the names of the local* x*- and* y*-coordinates and normal coordinates.*

We can now perform a surface integration over this entire cut plane, but we want to restrict ourselves to a subspace within this plane. We do this by using a space-dependent logical expression that evaluates to true (or 1) within our area of interest and false (or 0) elsewhere. This logical expression multiplies our integrand. In the screenshot below, for example, we see the surface integration performed over the cut plane is the expression:

-(sqrt(cpl1x^2+cpl1y^2)<0.1[m])*(cpl1nx*mf.Bx+cpl1ny*mf.By+cpl1nz*mf.Bz)/I0

which includes the logical expression `(sqrt(cpl1x^2+cpl1y^2)<0.1[m])`

that evaluates to 1 within a circle with a 0.1-m radius centered at the origin.

The remainder of the expression evaluates the flux dotted with the cut plane normal vector, thus giving us the flux through a 0.1-m-radius circle centered at the cut plane origin.

*The evaluation of an integral over a subregion of a cut plane using logical expressions.*
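The masking idea itself is easy to reproduce numerically. The sketch below is an assumption-laden illustration, not COMSOL code: a uniform normal flux density *B _{n}* = *B _{0}* is assumed on the cut plane, so the exact flux through the 0.1-m circle is *B _{0}*π(0.1)², and the boolean mask plays the role of the logical expression.

```python
import numpy as np

# Illustrative masked surface integration over a plane (not COMSOL code).
# A uniform normal field B_n = B0 is assumed, so the exact linked flux is
# B0*pi*0.1^2; the boolean mask restricts the integral to the circle.
B0 = 1e-3     # T, assumed uniform normal flux density
I0 = 0.25e-3  # A, excitation current

n = 1000
xs = np.linspace(-0.15, 0.15, n)
cell = (xs[1] - xs[0])**2            # area of one grid cell
cplx, cply = np.meshgrid(xs, xs)     # local in-plane coordinates

mask = np.sqrt(cplx**2 + cply**2) < 0.1   # logical expression: 1 inside, 0 outside
flux = np.sum(mask * B0) * cell           # masked surface integral of B.n
L = flux / I0

print(L, B0*np.pi*0.1**2/I0)   # numerical vs. exact
```

Just as in the model, the jagged pixelated boundary of the masked region introduces a small error that shrinks as the grid (or mesh) is refined.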

The boundaries of the subregion within the cut plane are a bit jagged; however, this gets reduced with mesh refinement. As in the earlier approach, we use a second study with a parametric sweep to store all of the different orientations of the cut plane in a second data set. In this case, there isn't a second component or geometry that is getting reoriented and remeshed, so the evaluation is faster.

Let's now look at an even simpler approach that is useful in a smaller set of cases. Suppose that we want to integrate along a curve that can be described via a parametric equation. One of the simplest curves to describe via a parametric equation is the unit circle in the xy-plane, which is defined by <x,y> = <cos(s),sin(s)> for 0 ≤ s ≤ 2π. It's also straightforward to compute the tangent vector for any parametric curve. For a unit circle, the tangent vector components are:

<tx,ty> = <-sin(s),cos(s)>

We can use these simple equations within a *Parameterized Curve 3D* data set for a 0.1-m-radius circle lying in the xz-plane. The circle's centerpoint is offset from the global origin via a set of global parameters.

*Settings in a Parameterized Curve 3D data set. The curve is shown in black and the tangent vector arrows in gray.*

We can create a second data set with another study and use a parametric sweep to evaluate many different origin points for the circle. We then perform a line integral over this new data set, as shown in the screenshot below. The integrand

(-sin(s)*Ax+0*Ay+cos(s)*Az)/I0

assumes a circle in the xz-plane and evaluates the *A*-field along the parametric curve.

*The line integration over a Parameterized Curve data set.*
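The sign caveat mentioned earlier shows up naturally in this approach. The sketch below is again an illustrative assumption, not the model: a uniform field *B* = *B _{0}* ŷ (normal to the xz-plane circle), with vector potential *A* = ½ *B* × *r* = ½*B _{0}*(z, 0, −x). The line integral returns −*B _{0}*π*a*², with the negative sign coming purely from the chosen tangent orientation.

```python
import numpy as np

# Illustrative line integral over a parametric circle in the xz-plane
# (not COMSOL code). Assumed field: B = B0*yhat, so A = 0.5*B x r =
# 0.5*B0*(z, 0, -x). The exact linked flux is -B0*pi*a^2; the sign
# reflects the orientation of the tangent vector <-sin(s), 0, cos(s)>.
B0 = 1e-3     # T, assumed
I0 = 0.25e-3  # A
a = 0.1       # m, circle radius

s = np.linspace(0, 2*np.pi, 2000, endpoint=False)
x, z = a*np.cos(s), a*np.sin(s)
Ax, Az = 0.5*B0*z, -0.5*B0*x
ds = 2*np.pi*a/len(s)                 # arc-length element

L = np.sum(-np.sin(s)*Ax + np.cos(s)*Az)*ds / I0
print(L, -B0*np.pi*a**2/I0)
```

Reversing the direction of travel around the curve (s → −s) flips the tangent vector and hence the sign of the result.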

Of the three approaches, this is the simplest and most accurate for a given mesh discretization, but also has the most limitations. Writing parametric equations for curves other than circles, ellipses, and squares aligned with the coordinate axes is more difficult.

In the figure below, we plot and compare the results for all of these approaches for a circular integration area moved back and forth along the coil axis.

*Comparison of the various approaches for computing mutual inductance.*

Here, we have shown three approaches for extracting surface and line integrals over subregions within a modeling space. The first approach is the most general; it allows integration over arbitrary (and even nonplanar) surfaces and curves. The first approach also allows arbitrary reorientation of the geometries being integrated over. While this approach is the most flexible, it also requires the most work to set up.

The second approach (using a Cut Plane data set) is applicable only to planar integration surfaces and is more limited in the shapes of integration surfaces that can be considered. The third approach (using a parameterized curve) is the quickest and simplest to implement, but is best suited for simple integration curves such as circles.

- Learn more about integration over subregions on the COMSOL Blog
- Find out how to use the General Extrusion component coupling in these blog posts:

Let’s consider two objects labeled A and B, shown below. The three distances that we want to compute are:

- Distance to object A as a distance field. In this case, we calculate the distance and direction from all of the points surrounding and inside object A to the closest point on its boundary (*d _{A}*).
- Distance from every point on the boundary of object B to the closest point on object A (*d _{AB}*).
- Endpoints of the line that is the shortest distance between objects A and B (*d _{AB,min}*).

*Two objects, A and B, and the distances that we want to compute.*

We can compute all of these various distances using a combination of the General Extrusion and Minimum component couplings in COMSOL Multiphysics. Let’s first look at how to use the General Extrusion component coupling. We name the operator `A_b` and define its *Source Selection* to be the boundaries of object A. Within the *Advanced* section, we use the *Mesh search method* of *Closest point*. These settings are shown below. All other settings for this operator can be left at their defaults.

*The settings for the General Extrusion component coupling used to compute the closest point distance. Note that the* Mesh search method *is set to* Closest point*.*

We use this operator within the definition of a variable called `d_A`, defined as:

sqrt((x-A_b(x))^2+(y-A_b(y))^2)

This variable is defined over the domains where we want to compute the distance field; in this case, just the surrounding domain. We can also compute the negative of the gradient of this distance field, −∇*d _{A}*. This gives us the components of a vector field that points toward the closest point on the boundary of A. We can use the differentiation operators `d(d_A,x)` and `d(d_A,y)` to take the spatial derivatives, as shown in the screenshot below.

*The variable definitions.*

We can use these variables anywhere that we want. For example, we can plot the distance field or make material properties dependent upon distance. The image below plots the contours of the distance and the direction vectors. Note that the distance is computed even in the region behind object B. We clearly get quite a bit of information here, but there is a substantial computational cost, since the shortest distance is computed at every point in the surrounding domain. There are also times when we don’t need all of this information and just want the distances between objects.

*The distance field (contour lines) and shortest direction to the boundary of object A (arrows) in the domain surrounding the two objects.*
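What the closest-point mapping computes can be mimicked with a brute-force numerical sketch. Everything here is an illustrative assumption (object A taken as a circle of radius 0.3 m at the origin, sampled boundary points, a regular evaluation grid), not COMSOL code:

```python
import numpy as np

# Illustrative closest-point distance field (mimicking the General Extrusion
# coupling with the Closest point search method). Assumed geometry: object A
# is a circle of radius 0.3 m centered at the origin.
theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
bx, by = 0.3*np.cos(theta), 0.3*np.sin(theta)     # sampled boundary of A

xs = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# d_A: for every grid point, distance to the closest boundary sample
D = np.sqrt((X[..., None] - bx)**2 + (Y[..., None] - by)**2)
d_A = D.min(axis=-1)

# Direction toward the boundary: the negative gradient of the distance field
h = xs[1] - xs[0]
gx, gy = np.gradient(d_A, h)        # axis 0 is x because of indexing="ij"
dir_x, dir_y = -gx, -gy

# Spot check: at (0.8, 0), the nearest boundary point is (0.3, 0)
i, j = np.argmin(np.abs(xs - 0.8)), np.argmin(np.abs(xs - 0.0))
print(d_A[i, j])
```

This brute-force pairing of every grid point with every boundary point is exactly why computing the full distance field is the most expensive of the three calculations.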

Let’s make things a bit easier and only concern ourselves with the distance between two objects and not the direction. We use the same General Extrusion component coupling, but only need to define a variable on the boundary of object B to compute the distance.

*The variable defining the distance between the objects.*

While this is the same distance function we used before, we don’t need a mesh in the intermediate space. We don’t even need a mesh over domains A and B; there just needs to be a mesh on the boundaries of the objects. This approach takes much less time, but it gives us only the shortest distance from object A to every point on the boundary of object B. We cannot recover the direction vector. We can also flip all of these definitions around and compute the shortest distance from object B to every point on the boundary of object A. These distances, shown in the plot below, are available along the boundaries of the objects.

*The distance from every point on the boundary of object B to the closest point on object A and vice versa.*

Now, let’s find the line that describes the shortest distance between the two objects. In the previous section, we saw that we can compute two variables, `d_AB` and `d_BA`, which describe the shortest distance from each point on one object's boundary to the other object. We now want to find the minimum distance between the boundaries of these domains. Thus, we set up two different Minimum component couplings: one for the boundary of object A and another for object B. We call these operators `minA` and `minB`, as shown in the screenshot below.

*The definition of the Minimum component coupling over the boundary of object A.*

We then call these Minimum component couplings to extract the minimum distance. We can also provide a second argument to the Minimum component coupling to find the coordinates at which the distance is at a minimum. For example, by defining the variable `A_x` as the expression `minA(d_BA,x)`, it takes on the value of the *x*-coordinate at which `d_BA` is at a minimum over the boundary of A.

*The definitions for the coordinates of the shortest line segment between two domains.*
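The combination of the distance variables and the Minimum couplings can be mimicked numerically. The geometry below is an illustrative assumption (objects A and B taken as circles of radius 0.2 m centered at (−0.5, 0) and (0.5, 0)), so the expected shortest segment runs from (−0.3, 0) to (0.3, 0) with length 0.6 m:

```python
import numpy as np

# Illustrative shortest-segment computation between two boundaries,
# mimicking the d_AB variable and the minA/minB couplings. Assumed
# geometry: circles of radius 0.2 m at (-0.5, 0) and (0.5, 0).
theta = np.linspace(0, 2*np.pi, 1440, endpoint=False)
ax, ay = -0.5 + 0.2*np.cos(theta), 0.2*np.sin(theta)   # boundary of A
bx, by =  0.5 + 0.2*np.cos(theta), 0.2*np.sin(theta)   # boundary of B

# d_AB: for every point on B, distance to the closest point on A
D = np.sqrt((bx[:, None] - ax)**2 + (by[:, None] - ay)**2)
d_AB = D.min(axis=1)

# Analogue of minB(d_AB, x): coordinates at which d_AB is minimal
jB = np.argmin(d_AB)    # index on B of the closest approach
jA = np.argmin(D[jB])   # matching index on A

print(d_AB[jB], (ax[jA], ay[jA]), (bx[jB], by[jB]))
```

Note that only boundary samples are involved here, which mirrors why this calculation needs no mesh in the space between or inside the objects.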

We can use the variables defining these coordinates anywhere we want. For example, we can use the *Cut Line* feature to show the shortest line segment connecting the two objects, as seen in the following image. If we have a meshed domain and a solution between the two objects, then we can plot the fields just along the shortest line between the two.

*The Cut Line feature, used to determine the shortest line between objects.*

These techniques for determining distances can be used in any model. Although the examples presented here are in 2D, they can all be generalized to 3D as well. However, computing the 3D distance field does take a relatively long time, whereas calculating distances between boundaries and clearances is less intensive.

Computing the distance field around nonsmooth shapes also requires a bit more care. As shown in the figure below, the distance field around reentrant corners is nonsmooth, so the direction vector will be undefined along those lines that are equidistant from two different parts of the boundary. Resolving this nonsmoothness of the distance field requires a finer mesh.

*The distance field around and inside an object with reentrant corners on a coarse mesh (left) and a more refined mesh (right). The smoothness of the distance field is mesh dependent in such cases.*

Once we have computed this distance field on an appropriately fine mesh, we treat it like any other variable in our model. For example, we can make material properties a function of distance from a surface. The image below shows such a representative material distribution.

*A representative material distribution that is a function of distance to the surface.*
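As a concrete (and entirely hypothetical) example of such a distance-dependent property, one might define a conductivity that decays away from the surface. The names `sigma0` and `delta` and their values are illustrative assumptions, not taken from any model:

```python
import numpy as np

# Hypothetical distance-dependent material property: a conductivity that
# decays from its surface value over an assumed 5 mm length scale.
sigma0 = 6e7    # S/m, assumed surface conductivity
delta = 5e-3    # m, assumed decay length

def sigma(d):
    """Conductivity as a function of distance d (in meters) from the surface."""
    return sigma0 * np.exp(-d / delta)

print(sigma(0.0), sigma(5e-3))   # surface value, and value one decay length away
```

In COMSOL Multiphysics, the equivalent would be entering an expression such as `sigma0*exp(-d_A/delta)` directly into the material property field, with `d_A` being the distance variable defined earlier.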

It is also possible to use the distance function to help visualize our results. Suppose we are only interested in the part of the solution that is within a specific distance of the surface. In this case, we can use the *Filter* subfeature when making a volume plot. We then enter a logical expression to only display the results that are within a certain distance of the object’s surface, an example of which is shown below.

*Using the distance function to plot only the solution within 5 mm of the surface.*

We have demonstrated how to compute a distance field to a boundary within a model, the distances between boundaries, and the shortest line segment between two boundaries. This approach also works to calculate distance fields from edges and points in 3D models. The computed distances can be used anywhere within the setup, physics definitions, and results evaluation of a model. We’ve shared a couple of examples here, but now it’s your turn. We would love to hear what you come up with!

- Find a variety of help resources for modeling in COMSOL Multiphysics: