Imagine that you are modeling an electronic device and are interested in its temperature distribution during operation. After testing a few set-ups, you discover that the heat flux boundary condition you applied is not a well-suited approximation for your model. You realize that a fluid flow simulation is required in order to achieve more accurate results. The only problem is that you have used almost all of your laptop’s 4 GB of RAM to model the heat transfer simulation, based on the heat flux approximation. You require two-way coupling, and including the simulation of fluid flow will only add even more degrees of freedom to your computation — and require even more RAM.

What now? You need more computer power.

Now imagine, instead, that you are analyzing the mechanics of a structural component with many small details for your customer. In order to optimize the design, you are required to run the analysis for a large number of different design dimensions. Since you have only one processor locally, and each run takes quite a bit of time, you realize that you will not meet your customer’s deadline.

The solution? You would need to run these simulations in parallel on multiple processors.

Finally, let’s look at an application independent of the physics involved, but still reliant on the analysis performed. You have set up your model using the physics interface of your choice, but it’s the end of the day and you just want to get the computed solution to your model as quickly and easily as possible, overnight. A direct solver requires little manipulation of solver settings and the like, but its RAM requirements grow steeply with the number of degrees of freedom in your model.

What’s the fix this time? You need a bigger computer.

What if there were another solution to all three cases…

This is where cloud computing comes into the picture. Compute clouds are services that make computing power available to those who need it, when they need it.

The service has several advantages, especially if you don’t have the time, money, or experience to invest in a traditional cluster or server rack. You might also not need a cluster available 24/7, but only need that extra compute power during certain periods of time, such as for a one-off analysis or a task that needs to be completed quickly.

*An organization can access COMSOL Multiphysics® and the hardware resources of cloud computing to run many different analyses at the time they require, utilizing the resources they require.*

Utilizing cloud computing will have a positive impact on your workflow. The ability to add more compute power directly when you need it will enable you to be more agile in your day-to-day COMSOL Multiphysics® simulation work. You won’t have to worry about the lack of adequate hardware on-site and you can go about your daily business with the certainty that you can expand into the cloud whenever the situation calls for it.

Traditionally, using cloud computing services required expertise in the network and hardware technology involved, as well as in the operating system and software the cloud service uses to support running your application. An example workflow would have had you register with the cloud service, research the specifications of their machines, rent a machine, and then connect it to your network to allow access to your license server. Then came the easy part: install COMSOL Multiphysics and run your model.

However, since HPC is becoming more and more important in the CAE community, we at COMSOL have made it as simple as possible for you to take the step into the cloud. To achieve this, we now provide a simpler way to launch remote COMSOL® software installations for use with the Amazon Elastic Compute Cloud™, or Amazon EC2™ for short.

Note: COMSOL Multiphysics has been able to utilize remote computing resources for a long time, either through batch jobs started from the user interface or the command line, or on-the-fly through client-server technology. For this, you only need a Floating Network License (FNL) for COMSOL Multiphysics®.

To simplify launching your virtual computers onto Amazon EC2™, Amazon provides a tool called AWS CloudFormation™. This reads from a template file containing information about the structure of the cloud resources that you are about to launch. COMSOL supplies you with such a template, thus reducing the number of steps that you need to go through to launch and send your computations to the cloud.

After a few initial steps, such as registering to Amazon Web Services™, all you need to do is run the AWS CloudFormation™ tool with our templates to launch a license server on Amazon EC2™ (or forward your local license server) and then run COMSOL Multiphysics®. You can choose to either run COMSOL® software on a single cloud-based machine or on a cloud-based cluster. To connect to the cloud, you just use the COMSOL Multiphysics® client-server functionality and then work as if on your own local computer, but using the Amazon EC2™ cloud-based compute power.

To help you with these steps, we have released a new User’s Guide on how to run COMSOL Multiphysics® with Amazon Elastic Compute Cloud™. The guide will take you through every step necessary to get started with COMSOL Multiphysics and cloud computing. And if you are having difficulty, please do not hesitate to contact our technical support team.

*Amazon Web Services, the “Powered by Amazon Web Services” logo, Amazon Elastic Compute Cloud, Amazon EC2, and AWS CloudFormation are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.*

Let’s consider a thermostat similar to the one that you have in your home. Although there are many different types of thermostats, most of them use the same control scheme: A sensor that monitors temperature is placed somewhere within the system, usually some distance away from the heater. When the sensed temperature falls below a desired lower setpoint, the thermostat switches the heater on. As the temperature rises above a desired upper setpoint, the thermostat switches the heater off. This is known as a *bang-bang controller*. In practice, you typically only have a single setpoint, and there is an offset, or lag, which is used to define the upper and lower setpoints.

The objective of having different upper and lower setpoints is to minimize the switching of the heater state. If the upper and lower setpoints are the same, the thermostat would constantly be cycling the heater, which can lead to premature component failure. If you do want to implement such a control, you only need to know the current temperature of the sensor. This can be modeled in COMSOL Multiphysics quite easily, as we have highlighted in this previous blog post.

On the other hand, the bang-bang controller is a bit more complex since it does need to know something about the history of the system; the heater changes its state as the temperature rises above or below the setpoints. In other words, the controller provides *hysteresis*. In COMSOL Multiphysics, this can be implemented using the *Events* interface.
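The on/off logic with hysteresis is simple enough to sketch in ordinary code. Below is a minimal illustration using a lumped, hypothetical thermal model, not the COMSOL implementation; the heating and cooling rates are assumed values, and only the 45°C/55°C setpoints come from the example:

```python
def step(T, heater_on, T_low=45.0, T_high=55.0):
    """Bang-bang control: change state only at a setpoint crossing."""
    if heater_on and T > T_high:      # crossed the upper setpoint: switch off
        return False
    if not heater_on and T < T_low:   # crossed the lower setpoint: switch on
        return True
    return heater_on                  # otherwise keep the current state

# Forward-Euler simulation of a lumped thermal model (illustrative rates)
T, heater_on, dt = 20.0, True, 0.1
history = []
for _ in range(2000):
    heater_on = step(T, heater_on)
    # heating power minus convective loss to a 20 degC ambient
    dTdt = (10.0 if heater_on else 0.0) - 0.1 * (T - 20.0)
    T += dt * dTdt
    history.append(T)
```

Because the state changes only at setpoint crossings, the heater switches at most once per traversal of the 45-55°C band rather than chattering at every step.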

When using COMSOL Multiphysics to solve time-dependent models, the *Events* interface is used to stop the time-stepping algorithms at a particular point and offer the possibility of changing the values of variables. The times at which these events occur can be specified either explicitly or implicitly. An *explicit event* should be used when we know the point in time when something about the system changes. We’ve previously written about this topic on the blog in the context of modeling a periodic heat load. An *implicit event*, on the other hand, occurs at an unknown point in time and thus requires a bit more set-up. Let’s take a look at how this is done within the context of the thermal model shown below.

*Sketch of the thermal system under consideration.*

Consider a simple thermal model of a lab-on-a-chip device modeled in a 2D plane. A one millimeter thick glass slide has a heater on one side and a temperature sensor on the other. We will treat the heater as a 1W heat load distributed across part of the bottom surface, and we will assume that there is a very small, thermally insignificant temperature sensor on the top surface. There is also free convective cooling from the top of the slide to the surroundings, which is modeled with a heat flux boundary condition. The system is initially at 20°C, and we want to keep the sensor between 45°C and 55°C.

*A Component Coupling is used to define the Variable, T_s, the sensor temperature.*

The first thing we need to do — before using the *Events* interface — is define the temperature at the sensor point via an Integration Component Coupling and a Variable, as shown above. The reason why this is done is to make the temperature at this point, T_s, available within the *Events* interface.

The *Events* interface itself is added like any other physics interface within COMSOL Multiphysics. It is available within the *Mathematics > ODE and DAE interfaces* branch.

*The *Discrete States* interface is used to define the state of the heater. Initially, the heater is on.*

First, we use the *Events* interface to define a set of *discrete variables*, variables which are discontinuous in time. These are appropriate for modeling on/off conditions, as we have here. The *Discrete States* interface shown above defines a variable, *HeaterState*, which is multiplied by the applied heat load in the *Heat Transfer in Solids* problem. The variable can be either one or zero, depending upon the system’s temperature history. The initial condition is one, meaning we are starting our simulation with the heater on. It is important that we set the appropriate initial condition here. It is this *HeaterState* variable that will be changed depending upon the sensor temperature during the simulation.

*Two *Indicator States* in the *Events* interface depend upon the sensor temperature.*

To trigger a change in the *HeaterState* variable, we need to first introduce two *Indicator States*. The objective of the *Indicator States* is to define variables that will indicate when an event will occur. There are two indicator variables defined. The *Up* indicator variable is defined as:

`T_s - 55[degC]`

which goes smoothly from negative to positive as the sensor temperature rises above 55°C. Similarly, the *Down* indicator variable will go smoothly from negative to positive at 45°C. We will want to trigger a change in the *HeaterState* variable as these indicator variables change sign.

*The HeaterState variable is reinitialized within the Events interface.*

We use the *Implicit Events* interface, since we do not know ahead of time when these events will occur, but we do know under what conditions we want to change the state of the heater. As shown above, two *Implicit Event* features are used to reinitialize the state of the heater to either zero or one, depending upon when the *Up* and *Down* indicator variables become greater than or less than zero, respectively. The event is triggered when the logical condition becomes true. Once this happens, the transient solver will stop and restart with the newly initialized *HeaterState* variable, which is used to control the applied heat, as illustrated below.

*The HeaterState variable controls the applied heat.*

When solving this model, we can make some changes to the solver settings to ensure that we have good accuracy and keep only the most important results. We will want to solve this model for a total time of 30 minutes, and we will store the results only at the time steps that the solver takes. These settings are depicted below.

*The study settings for the Time-Dependent Solver set the total solution time from 0-30 minutes, with a relative tolerance of 0.001.*

We will need to make some changes within the settings for the Time-Dependent Solver. These changes can be made prior to the solution by first right-clicking on the *Study* branch, choosing “Show Default Solver”, and then making the two changes shown below.

*Modifications to the default solver settings. The event tolerance is changed to 0.001 and the output times to store are set to the steps taken by the solver.*

Of course, as with any finite element simulation, we will want to study the convergence of the solution as the mesh is refined and the solver tolerances are made tighter. Representative simulation results are highlighted below and demonstrate how the sensor temperature is kept between the upper and lower setpoints. Also, observe that the solver takes smaller time steps immediately after each event, but larger time steps when the solution varies gradually.

*The heater switches on and off to keep the sensor temperature between the setpoints.*

We have demonstrated here how implicit events can be used to stop and restart the solver as well as change variables that control the model. This enables us to model systems with hysteresis, such as thermostats, and perform simulations with minimal computational cost.


First, let’s take a (very) brief conceptual look at the implicit time-stepping algorithms used when you are solving a time-dependent problem in COMSOL Multiphysics. These algorithms choose a time step based upon a user-specified tolerance. While this allows the software to take very large time steps when there are gradual variations in the solution, the drawback is that using too loose of a tolerance can skip over certain transient events.

To understand this, consider the ordinary differential equation:

\frac{d u}{d t} = -u + f(t)

where the forcing function f(t) is a square unit pulse starting at t_s and ending at t_e. Given an initial condition, u_0, we can solve this problem for any length of time, either analytically or numerically. Here is the analytic solution for u_0=1:
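The analytic solution can be written down piecewise: pure exponential decay before the pulse, exponential rise toward 1 during it, and decay again afterward. A short sketch (the pulse times t_s = 2 and t_e = 3 are assumed for illustration, since the text leaves them general):

```python
import math

t_s, t_e, u0 = 2.0, 3.0, 1.0   # assumed pulse times; u_0 = 1 as in the text

def u_exact(t):
    """Piecewise analytic solution of du/dt = -u + f(t) for a unit square pulse."""
    if t < t_s:                                    # before the pulse: pure decay
        return u0 * math.exp(-t)
    u_s = u0 * math.exp(-t_s)                      # value when the pulse switches on
    if t <= t_e:                                   # during the pulse: rise toward 1
        return 1.0 + (u_s - 1.0) * math.exp(-(t - t_s))
    u_e = 1.0 + (u_s - 1.0) * math.exp(-(t_e - t_s))
    return u_e * math.exp(-(t - t_e))              # after the pulse: decay again
```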

In the above plot, we can observe the exponential decay and rise as the forcing function is zero or one. Let’s now look at the numerical solution to this problem for two different user-specified tolerances:

*The numeric solution (red dots) is shown for a relative tolerance of 0.2 and 0.01 and is compared to the analytical result (grey line).*

We can see from the plot above that a very loose relative tolerance of 0.2 does not accurately capture the switching of the load. At a tighter relative tolerance of 0.01 (the solver default), the solution is reasonably well resolved. We can also observe that the spacing of the points shows the varying time steps used by the solver. It is apparent that the solver takes larger time steps where the solution changes slowly and finer time steps when the heat load switches on and off.

However, if the tolerance is set too loosely, the solver may skip over the heat load change entirely when the width of the heat load gets very small. That is, if t_s and t_e move very close to each other, the pulse’s effect on the solution becomes too small to register within the specified tolerance. We can of course mitigate this by using tighter tolerances, but a better option exists.

We can avoid having to tighten the tolerances by using *Explicit Events*, which are a way of letting the solver know that it should evaluate the solution at a specified point in time. From that point in time forward, the solver will continue as before until the next event is reached. Let’s look at the numeric solution to the above problem, with *Explicit Events* at t_s and t_e and solved with a relative tolerance of 0.2 (a very loose tolerance):

*When using Explicit Events, the numerical solution compares quite well with the analytical result, even with a very loose relative tolerance of 0.2. Away from the events, large time steps are taken.*

The above plot illustrates that the *Explicit Events* force a solution evaluation when the load switches on or off. The loose relative tolerance allows the solver to take large time steps when the solution varies gradually. Small time steps are taken immediately after the events to give good resolution of the variation in the solution. Thus, we have both good resolution of the heat load switching on or off and we take large time steps to minimize the overall computational cost.
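The effect of an explicit event can be mimicked with any adaptive time stepper by restarting it at the known switching times. The sketch below uses a toy adaptive Heun integrator in plain Python, not COMSOL's algorithms; the pulse times t_s = 2, t_e = 3, and the tolerances are assumed for illustration:

```python
t_s, t_e = 2.0, 3.0   # pulse on/off times (assumed for illustration)

def f(t):
    """Unit square pulse forcing."""
    return 1.0 if t_s <= t < t_e else 0.0

def rhs(t, u):
    return -u + f(t)

def integrate(u0, a, b, tol):
    """Toy adaptive integrator: Heun steps with an Euler-based error estimate."""
    t, u, h = a, u0, min(1.0, b - a)
    while t < b - 1e-12:
        h = min(h, b - t)
        k1 = rhs(t, u)
        k2 = rhs(t + h, u + h * k1)
        err = abs(h * (k2 - k1) / 2.0)      # Euler vs. Heun difference
        if err <= tol * max(abs(u), 1e-3) or h < 1e-8:
            u += h * (k1 + k2) / 2.0        # accept the Heun step
            t += h
            h = min(1.5 * h, 1.0)           # grow the step, capped for stability
        else:
            h /= 2.0                        # reject; retry with a smaller step
    return u

# Straight through at a loose tolerance: the stepper may leap over the pulse.
u_plain = integrate(1.0, 0.0, 10.0, tol=0.2)

# "Explicit events" at t_s and t_e: restarting at the switching times
# guarantees they are resolved, at the same loose tolerance.
u_event = 1.0
for a, b in [(0.0, t_s), (t_s, t_e), (t_e, 10.0)]:
    u_event = integrate(u_event, a, b, tol=0.2)
```

The restart resets the step size at each switching time, which is the essential behavior the *Explicit Events* feature provides.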

Now that we’ve introduced the concepts, we will take a look at implementing these *Explicit Events*.

We will begin with an existing example from the COMSOL Multiphysics Model Library and modify it slightly to include a periodic heat load and the *Events* interface. We will look at an example of the Laser Heating of a Silicon Wafer, where a laser is modeled as a distributed heat source moving back and forth across the surface of a spinning silicon wafer.

The laser heat source itself traverses back and forth over the wafer with a period of 10 seconds along the centerline. To minimize the temperature variation over the wafer during the heating process, we want to turn the laser off periodically, while the heat source is in the center of the wafer. To model this, we will introduce an *Analytic* function, pulse(x), that uses the Boolean expression:

`(x<2)||(x>3)`

to evaluate pulse(t) to zero between t = 2 and t = 3 seconds, and one otherwise. The *Periodic Extension* option is used to repeat this pattern every five seconds, as shown in the screenshot below.
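The same gating function is easy to sketch in ordinary code; the values below (off between 2 s and 3 s, repeating every 5 s) are taken from the text:

```python
def pulse(t):
    """1 while the laser is on; 0 during the 2-3 s window of each 5 s period."""
    t = t % 5.0                                   # periodic extension, period 5 s
    return 1.0 if (t < 2.0) or (t > 3.0) else 0.0
```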

*The settings used to define a periodic function, as plotted.*

We can use this function to modify the applied heat flux representing the laser heat source, as illustrated below:

*The settings for the applied heat flux boundary condition.*

The last thing that we need to do is to add the *Events* interface. This physics interface is found within *Mathematics > ODE and DAE interfaces* when using the *Add Physics* browser. Within the *Events* interface, add two *Explicit Events* with the settings shown below to define a periodic event starting at two and three seconds and repeating every five seconds.

*The Explicit Events settings. The second of these events starts at 3 s.*

No other changes are needed, but we can take a quick look at the solver settings:

*The settings for the time-dependent solver.*

Note that the entries in the *Times* field are the output times. These settings do not directly control the actual time steps taken by the solver. The *Relative Tolerance* field (default value of 0.01) along with the *Events* — if they are in the model — control these time steps.

*A comparison of unpulsed (left) and pulsed (right) heat loads.*

You can compare the results of this simulation to the original model to see the differences in temperature across the wafer. With a periodic heat load, the temperature rise is more gradual and the temperature variations at any point in time are smaller.

We have looked at using the *Events* interface for modeling a periodic heat load over time and introduced why it provides a good combination of accuracy and low computational requirements. There is a great deal more that you can do with the *Events* interface — if you would like to learn more, we encourage you to consult the documentation. An extended demonstration of the usage of the *Events* interface is featured in the Capacity Fade of a Li-ion Battery example from the Model Library.

On the other hand, when dealing with problems that are either convection dominated or wave-type problems (e.g., fluid flow models or transient structural response, respectively), then we would not want to introduce instantaneous changes in the loads. The reasons behind that — and alternative modeling techniques for such situations — will be the topic of an upcoming blog. Stay tuned!


Recall our simple example of 1D heat transfer at steady state with no heat source, where the temperature T is a function of the position x in the domain defined by the interval 1\le x\le 5. With the boundary conditions that the outgoing flux should be 2 at the left boundary (x=1) and the temperature should be 9 at the right boundary (x=5), the weak form equation reads:

(1)

\int_1^5 \partial_x T(x) \partial_x \tilde{T}(x) \,dx = -2 \tilde{T}_1 -\lambda_2 \tilde{T}_2 -\tilde{\lambda}_2 (T_2-9)

We now attempt to find a way to solve this equation numerically.

To solve Eq. (1) numerically, we first divide the domain 1\le x\le 5 into four evenly spaced sub-intervals, or *mesh elements*, bounded by five *nodal points* x = 1, 2, \cdots, 5. Then, we can define a set of basis functions, or *shape functions*, \psi_{1L}(x), \psi_{1R}(x), \psi_{2L}(x), \psi_{2R}(x), \cdots, \psi_{4R}(x), as shown in the graph below, where there are two shape functions in each mesh element, represented by a solid line and a dashed line.

For example, in the first element (1 \le x \le 2), we have

(2)

\begin{equation*}
\psi_{1L}(x) = \left\{ \begin{array}{ll}
2-x & \mbox{for } 1 \le x \le 2,\\
0 & \mbox{elsewhere}
\end{array} \right. \quad \mbox{(solid red line)}
\end{equation*}

\begin{equation*}
\psi_{1R}(x) = \left\{ \begin{array}{ll}
x-1 & \mbox{for } 1 \le x \le 2, \\
0 & \mbox{elsewhere}
\end{array} \right. \quad \mbox{(dashed red line)}
\end{equation*}

We observe that each shape function is a simple linear function ranging from 0 to 1 within a mesh element, and vanishes outside of that mesh element.

Note: Of course, COMSOL Multiphysics allows shape functions formed with higher-order polynomials, not just linear functions. The choice of linear shape functions here is for visual clarity.

With this set of shape functions, we can approximate any arbitrary function defined in the domain 1\le x\le 5 by a simple linear combination of them:

(3)

u(x) \approx a_{1L} \psi_{1L}(x) + a_{1R} \psi_{1R}(x) + a_{2L} \psi_{2L}(x) + a_{2R} \psi_{2R}(x) + \cdots

where a_{1L}, a_{1R}, a_{2L}, a_{2R}, \cdots are some constant coefficients, one for each shape function. In the graph below, the arbitrary function u(x) is represented by the black curve. The cyan curve represents the approximation by the superposition of the shape functions (3). Each term on the right-hand side of Eq. (3) is plotted using the same color and line style as the graph above.

We see that in general the approximation (represented by the cyan curve) can be discontinuous across the boundary between adjacent mesh elements. In practice, many physical systems, including our simple example of heat conduction, are expected to have continuous solutions. For this reason, the default shape functions for most physics interfaces are *Lagrange elements*, in which the shape function coefficients are constrained so that the solution is continuous across boundaries between adjacent elements. In this case, the approximation is simplified, as shown in the figure below,

where the cyan curve has been made continuous by setting the coefficients on each side of a mesh boundary to be equal: a_{1R} = a_{2L}, a_{2R} = a_{3L}, a_{3R} = a_{4L}. We also renamed the coefficients for brevity:

\begin{align*}
a_1 &\equiv a_{1L}\\
a_2 &\equiv a_{1R} = a_{2L}\\
a_3 &\equiv a_{2R} = a_{3L}\\
a_4 &\equiv a_{3R} = a_{4L}\\
a_5 &\equiv a_{4R}
\end{align*}

We see that the continuity condition requires pairs of shape functions to share the same coefficient in making the approximation (3), which can now in turn be simplified by combining those pairs of shape functions into a new set of basis functions \phi_1(x), \phi_2(x), \cdots, \phi_5(x), with each function localized around a nodal point:

(4)

\begin{align*}
\phi_1(x) \equiv \psi_{1L}(x) &= \left\{ \begin{array}{ll}
2-x & \mbox{for } 1 \le x \le 2,\\
0 & \mbox{elsewhere}
\end{array} \right.\\
\phi_2(x) \equiv \psi_{1R}(x) + \psi_{2L}(x) &= \left\{ \begin{array}{ll}
x-1 & \mbox{for } 1 \le x \le 2, \\
3-x & \mbox{for } 2 < x \le 3, \\
0 & \mbox{elsewhere}
\end{array} \right.\\
\phi_3(x) \equiv \psi_{2R}(x) + \psi_{3L}(x) &= \left\{ \begin{array}{ll}
x-2 & \mbox{for } 2 \le x \le 3, \\
4-x & \mbox{for } 3 < x \le 4, \\
0 & \mbox{elsewhere}
\end{array} \right.\\
&\;\;\vdots
\end{align*}

As shown in the graph below, each new basis function is essentially a triangular-shaped, piecewise-linear function centered around a nodal point. Its value varies between 1 and 0 within the mesh element(s) adjacent to the nodal point, and vanishes everywhere else.

As discussed above, by choosing this new set of basis functions, we constrain the solution to be continuous across the boundary between adjacent mesh elements. Most physical systems satisfy this continuity constraint, including our simple heat transfer example here.

Now, with this new set of basis functions, the approximation (3) is simplified to

(5)

u(x) \approx a_1 \phi_1(x) + a_2 \phi_2(x) + \cdots + a_5 \phi_5(x)

In the graph below, the arbitrary function u(x) is represented by the black curve. The cyan curve represents the approximation by the superposition of the new basis functions. Each term on the right-hand side of Eq. (5) is plotted using the same color scheme as the graph above.
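For readers who prefer code to figures, the hat basis and the superposition in Eq. (5) can be sketched in a few lines. This is an illustrative implementation, not COMSOL code, and the coefficients a_i below are arbitrary example values:

```python
nodes = [1.0, 2.0, 3.0, 4.0, 5.0]   # the five nodal points

def phi(i, x):
    """Hat function: 1 at nodes[i], falling linearly to 0 at the neighboring
    nodes, and 0 elsewhere. Intended for evaluation on the domain [1, 5]."""
    xi = nodes[i]
    left = nodes[i - 1] if i > 0 else xi - 1.0
    right = nodes[i + 1] if i < len(nodes) - 1 else xi + 1.0
    if left <= x <= xi:
        return (x - left) / (xi - left)
    if xi < x <= right:
        return (right - x) / (right - xi)
    return 0.0

def u_approx(x, a):
    """Superposition of hat functions, as in Eq. (5)."""
    return sum(a[i] * phi(i, x) for i in range(len(nodes)))

a = [0.5, 1.8, 1.2, 2.0, 0.7]   # arbitrary illustrative coefficients
```

Note the interpolation property: at a nodal point only one basis function is nonzero, so u_approx there equals the corresponding coefficient.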

As an aside, if the black curve represents the exact solution to some real modeling problem, then we see that the approximation is not very good, due to the coarseness of the mesh. Also, in general, the nodal point values a_1, a_2, \cdots are not required to lie on the exact solution, unless one is constrained to a known solution value (shown in a_5 as an example in the figure above). The discrepancy between the black and the cyan curves we see here represents the discretization error of the solution. In 2D and 3D models, there will also be a discretization error of the geometry. In my colleague Walter Frei’s blog post on meshing considerations, both types of errors are discussed in some detail. Due to these potential errors, a mesh refinement study is necessary to ensure the accuracy of modeling results.

We note that the approximation given by Eq. (5) (the cyan curve) is piecewise-linear. Thus, it’s impossible to evaluate its second derivatives numerically. As we have mentioned before, the weak formulation provides numerical benefits by reducing the order of differentiation in the equation system. In this case, only the first derivative is needed and it can be readily evaluated numerically. In a future blog entry, we will discuss an example of discontinuity in the material property that also benefits from the reduced order of differentiation.

With the new set of basis functions defined above, we proceed to discretize the weak form equation (1) in two steps. First, the temperature function, T(x), can be approximated by the set of basis functions in the same way as in Eq. (5):

(6)

T(x) = a_1 \phi_1(x) + a_2 \phi_2(x) + \cdots + a_5 \phi_5(x)

where a_1, a_2, \cdots , a_5 are unknown coefficients to be determined.

Substituting the expression for T(x) (6) into the weak form equation (1), we obtain

(7)

\begin{array}{l}
a_1 \int_1^5 \partial_x \phi_1(x) \partial_x \tilde{T}(x) \,dx + a_2 \int_1^5 \partial_x \phi_2(x) \partial_x \tilde{T}(x) \,dx + \cdots + a_5 \int_1^5 \partial_x \phi_5(x) \partial_x \tilde{T}(x) \,dx \\
= -2 \tilde{T}_1 -\lambda_2 \tilde{T}_2 -\tilde{\lambda}_2 (a_5 -9)
\end{array}

where the temperature at the right boundary x=5, T_2, has been evaluated using the expression (6) and the fact that the basis functions are localized, leading to only one term, a_5 \phi_5(x=5) = a_5, contributing to T(x=5).

We see that there are six unknowns in the discretized version of the weak form equation (7): The five coefficients a_1, a_2, \cdots , a_5 and the one flux \lambda_2 at the right boundary. It is customary to call the unknowns *degrees of freedom*. For example, here we say the (discretized) problem has “six degrees of freedom”.

To solve for the six unknowns, we need six equations. This leads to the second step of discretization. Recall from our first blog post that the role of the test functions is to sample the equation locally to clamp down the solution everywhere within the domain. Now we already have a set of localized functions, our basis functions \phi_1, \cdots, \phi_5, so we can just substitute them into the test function \tilde{T} in Eq. (7) to obtain the six equations we need.

Here is a table showing the six substitutions that will generate the six equations for us:

| \tilde{T}(x) | \tilde{\lambda}_2 |
|---|---|
| \phi_1(x) | 0 |
| \phi_2(x) | 0 |
| \phi_3(x) | 0 |
| \phi_4(x) | 0 |
| \phi_5(x) | 0 |
| 0 | 1 |

Since each of the basis functions is localized, each substitution yields an equation with a small number of terms. For example, the first substitution gives

\begin{array}{l}
a_1 \int_1^5 \partial_x \phi_1(x) \partial_x \phi_1(x) \,dx + a_2 \int_1^5 \partial_x \phi_2(x) \partial_x \phi_1(x) \,dx + \cdots + a_5 \int_1^5 \partial_x \phi_5(x) \partial_x \phi_1(x) \,dx \\
= -2 \phi_1(x=1) -\lambda_2 \phi_1(x=5) - 0 \cdot (a_5 -9)
\end{array}

We note that \phi_1 has non-trivial overlap only with itself and \phi_2. Therefore, only the first two terms on the left-hand side are non-zero. Also, \phi_1 is localized near the left boundary (x=1), so only the first term on the right-hand side remains. The equation now becomes

(8)

a_1 -a_2 = -2

where we have evaluated the definite integrals on the left-hand side:

\begin{align*}
\int_1^5 \partial_x \phi_1(x) \partial_x \phi_1(x) \,dx &= 1\\
\int_1^5 \partial_x \phi_2(x) \partial_x \phi_1(x) \,dx &= -1
\end{align*}

and used the definition of \phi_1 on the right-hand side: \phi_1(x=1) = 1.
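These two integrals are easy to verify numerically. A quick sketch using midpoint-rule quadrature, where the derivative expressions follow from the unit-spaced hat functions defined above (\phi_1' = -1 on its single element, \phi_2' = \pm 1 on its two elements):

```python
def dphi1(x):
    """Derivative of the hat function phi_1 (support on [1, 2])."""
    return -1.0 if 1.0 <= x <= 2.0 else 0.0

def dphi2(x):
    """Derivative of the hat function phi_2 (support on [1, 3])."""
    if 1.0 <= x <= 2.0:
        return 1.0
    if 2.0 < x <= 3.0:
        return -1.0
    return 0.0

def quad(f, a, b, n=4000):
    """Midpoint-rule quadrature on [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

I11 = quad(lambda x: dphi1(x) * dphi1(x), 1.0, 5.0)   # ~ 1
I21 = quad(lambda x: dphi2(x) * dphi1(x), 1.0, 5.0)   # ~ -1
```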

Similarly, the remaining five substitutions listed in the table above yield these equations:

(9)

\begin{align*}
-a_1 + 2 a_2 - a_3 &= 0\\
-a_2 + 2 a_3 - a_4 &= 0\\
-a_3 + 2 a_4 - a_5 &= 0\\
-a_4 + a_5 &= -\lambda_2\\
0 &= -(a_5 - 9)
\end{align*}

We now have six equations for our six unknowns and it is straightforward to verify that the solution matches what we have obtained using COMSOL Multiphysics software in the previous post. For example, the last equation immediately gives us a_5 = 9, and using the expression (6) for the temperature, we obtain its value at the right boundary:

\begin{equation*}

\begin{align}

T(x=5) &= a_1 \phi_1(x=5) + a_2 \phi_2(x=5) + \cdots + a_5 \phi_5(x=5)\\

&= a_1 \cdot 0 + a_2 \cdot 0 + \cdots + a_5 \cdot 1\\

&= 9

\end{align}

\end{equation*}

This agrees with the fixed boundary condition that the temperature should be 9 at the right boundary. It is also easy to see that it is the term associated with the test function \tilde{\lambda}_2 that gives rise to the equation (0 = a_5 -9), as we would expect.

It is convenient to write the discretized system of equations (in our simple example, there are six equations given in (8) and (9)) in terms of matrices and vectors:

(10)

\left(

\begin{array}{cccccc}

1 & -1 & 0 & 0 & 0 & 0 \\

-1 & 2 & -1 & 0 & 0 & 0 \\

0 & -1 & 2 & -1 & 0 & 0 \\

0 & 0 & -1 & 2 & -1 & 0 \\

0 & 0 & 0 & -1 & 1 & 1 \\

0 & 0 & 0 & 0 & 1 & 0

\end{array}

\right)

\left(

\begin{array}{c} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ \lambda_2 \end{array}

\right)

= \left(

\begin{array}{c} -2 \\ 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{array}

\right)


The matrix on the left-hand side is customarily called the *stiffness matrix* and the vector on the right is called the *load vector*, due to the application of this technique in structural mechanics.

We notice two interesting facts about this matrix equation. First, there are a lot of zeros in the matrix (a so-called *sparse* matrix). In a practical model where there are many more mesh elements than our four elements here, we can envision that most of the elements in the matrix will be zero. This is a direct benefit of choosing localized shape functions, and it lends itself to very efficient numerical methods to solve the equation system.

Second, the Lagrange multiplier \lambda_2 appears only in one equation (the last column of the matrix has only one non-zero element). The remaining five equations involve only the five unknown coefficients a_1, a_2, \cdots , a_5. Therefore, we can choose to solve for a_1, a_2, \cdots , a_5 using the five equations, without ever needing to solve for \lambda_2. As we briefly mentioned in the previous entry, in general, it is possible to choose not to solve for the Lagrange multiplier(s) in order to gain computation speed.
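Both observations are easy to reproduce outside the FEM software. The sketch below (assuming NumPy and SciPy are available) solves system (10) with a sparse solver and then solves it again with the Lagrange multiplier eliminated:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Stiffness matrix and load vector from Eq. (10); the unknown vector is
# (a_1, ..., a_5, lambda_2).
K = csr_matrix(np.array([
    [ 1, -1,  0,  0,  0, 0],
    [-1,  2, -1,  0,  0, 0],
    [ 0, -1,  2, -1,  0, 0],
    [ 0,  0, -1,  2, -1, 0],
    [ 0,  0,  0, -1,  1, 1],
    [ 0,  0,  0,  0,  1, 0],
], dtype=float))
b = np.array([-2, 0, 0, 0, 0, 9], dtype=float)

sol = spsolve(K, b)
a, lam2 = sol[:5], sol[5]
print(a)      # the linear profile T(x) = 2x - 1, i.e. 1, 3, 5, 7, 9
print(lam2)   # lambda_2 = -2, the outgoing flux at x = 5

# Alternatively, drop the Lagrange multiplier: rows 1-4 and 6 involve only
# the a_i, so solve that 5x5 system and recover lambda_2 afterwards.
rows = [0, 1, 2, 3, 5]
a5 = np.linalg.solve(K.toarray()[np.ix_(rows, list(range(5)))], b[rows])
lam2_post = a5[3] - a5[4]   # from row 5: -a_4 + a_5 + lambda_2 = 0
```

Eliminating the multiplier and recovering it in postprocessing gives the same answer, which is exactly the trade-off discussed above.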

Today, we reviewed the basic procedure for discretizing the weak form equation using our simple example. We took advantage of a set of localized shape functions in two steps:

- Using them as a basis to approximate the real solution
- Substituting them one by one into the weak form equation to obtain the discretized system of equations

The resulting matrix equation is sparse, so it can be solved efficiently on a computer.

In the previous blog post, when we implemented the weak form equation using COMSOL Multiphysics, the discretization was done under the hood without needing the user’s help. Next, we will show you how to inspect the stiffness matrix and load vector, as well as how to choose to solve for — or not to solve for — the Lagrange multiplier by using the *Weak Form PDE* interface in the software.

*Polar plots* are exactly what you remember from math class. They use polar coordinates *r* and $\theta$ to describe a pattern, usually an acoustic or electromagnetic field. These plots are very useful for getting a localized or top-down view of the radiation pattern of your device. For instance, to ensure that a loudspeaker’s design is optimized for the most even sound distribution, you would examine a polar plot to see how the sound waves propagate from the speaker and what the range is.

To demonstrate this, I’ll show a polar plot from a conical antenna model. (If you have the RF Module installed, you can find this simulation under *File > Model Libraries > RF Module > Antennas*.) This model analyzes the impedance and radiation patterns of a conical antenna. The geometry of the antenna is shown below:

This antenna directs an electromagnetic wave propagating in a coaxial cable. Propagation occurs in the *z*-direction, and the model results show the radiation pattern close to the antenna for different frequencies. Below is a polar plot describing the near-field pattern changing with the elevation angle, $\theta$:

In contrast to the plots just shown, a *far-field plot* describes a wave pattern (electromagnetic or acoustic) at distances far from the source. For instance, the far-field pattern of the conical antenna looks like this:

Comparing this to the near-field plot shown earlier, we can see how the radiation pattern changes with large jumps in distance from the antenna.

Far-field plots, however, are often shown in three dimensions, so these are not limited to the polar plot type. Variables are plotted for a chosen number of angles on a circle (in two dimensions) or a sphere (in three dimensions), where you can specify the angle interval as well as the origin and radius of the circle or sphere. This plots the radiation pattern by deforming the circle or sphere in the radial direction for each evaluation point specified. This means that the distance from the center becomes equal to the value of the expression on the evaluation point. (One advantage to this plot type is that the circle or sphere used for defining the plot directions is not part of the model geometry, so the number of plotting directions is unrelated to the solution domain.)
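The radial-deformation construction described above can be sketched in a few lines of NumPy; the dipole-like expression used here is only an illustrative stand-in for a computed far-field norm:

```python
import numpy as np

# Sample plotting directions on a circle, evaluate the field expression in
# each direction, and push each point out radially by that value. The
# |cos(theta)| pattern is an assumed stand-in expression for illustration.
theta = np.linspace(0, 2 * np.pi, 181)   # plotting directions, 2-degree steps
pattern = np.abs(np.cos(theta))          # assumed far-field norm expression
r0, scale = 0.0, 1.0                     # base radius and deformation scale

# Deformed curve: the distance from the center equals the expression value.
x = (r0 + scale * pattern) * np.cos(theta)
y = (r0 + scale * pattern) * np.sin(theta)
print(np.hypot(x, y).max())              # peak lobe radius = 1.0
```

Because the circle of plotting directions is independent of the model geometry, the angular resolution here (181 samples) can be chosen freely.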

A 3D plot of the far field for the conical antenna is highlighted below. You’ll notice that this gives a very indicative view of the wave pattern, providing a sort of “big-picture” feel in contrast to the localized data in the polar plots.

Lastly, there are a few plot types that are unique to particle tracing applications. These are the *particle tracing plots*, *Poincaré maps*, and *phase portraits* in COMSOL Multiphysics. To illustrate each of these plot types, I’ll use a model of a laminar mixer. If you have the Particle Tracing Module installed, this model can be found under *File > Model Libraries > Particle Tracing > Fluid Flow*.

The simulation analyzes fluid being pumped through a pipe with stationary blades. This type of mixing is suited for laminar flow because it generates only small pressure losses. The model evaluates the mixing performance by tracking the trajectories of particles suspended in the mixer.

The particle tracing plot type shows the actual trajectories of a specified number of particles or a specified density and diameter. In this case, a species is dissolved in room temperature water. In the results shown below, a line tracks each particle five seconds after being released into the mixer. The color expression is based on the shear rate (1/s) of the flow.

Another 3D plot group included in this model is a Poincaré map, or a first recurrence map. These are 2D plots that are created from a cut plane defined on the particle data set. This is used for visualizing the position of particles in a section transversal to their trajectories. The Poincaré map represents the particle trajectories in a space dimension that is one dimension lower than the original particle space. The plot shows a dot on the cut plane everywhere that a particle has crossed the plane (including multiple locations for the same particle if it crosses the plane multiple times). In the plot below, the dot color indicates the velocity of the particle when it crosses the plane, five seconds after release.

One nice trick to showing Poincaré maps is to include several of them in a 3D plot group. For instance, the results plot below depicts maps at different locations along the mixer, with the outlines of the blades underneath. In this case, the color of the dots indicates the level of *dissolution* (how well-mixed the species is in the water):

The very last plot type we’ll discuss here is a phase portrait. Like Poincaré maps, phase portraits show the locations of particles on a 2D plot. However, they are not necessarily on a plane oriented transversally to the direction of particle travel. Phase portraits are typically used to plot particle position and velocity together, where the position is taken as the distance from the origin. Below is a phase portrait depicting particle positions at t=5 seconds:

That’s all for the application-specific plot types. Hopefully you’ve enjoyed this introduction to some unique ways to show results and gather data in RF, acoustic, and particle tracing simulations. Stay tuned for our next blog post on creating deformations in your results plots!

We are often interested in modeling a radiating object, such as an antenna, in free space. We may be building this model to simulate an antenna on a satellite in deep space or, more often, an antenna mounted in an anechoic test chamber.

*An antenna in infinite free space. We only want to model a small region around the antenna.*

Such models can be built using the *Electromagnetic Waves, Frequency Domain* formulation in the RF Module or the Wave Optics Module. These modules provide similar interfaces for solving the frequency domain form of Maxwell’s equations via the finite element method. (For a description of the key differences between these modules, please see my previous blog post, titled “Computational Electromagnetics Modeling, Which Module to Use?”)

Let’s limit ourselves in this blog post to considering only 2D problems, where the electromagnetic wave is propagating in the *x-y* plane, with the electric field polarized in the *z*-direction. We will additionally assume that our modeling domain is purely vacuum, so that the frequency domain Maxwell’s equations reduce to:

\nabla \cdot \left( \mu_r^{-1} \nabla E_z \right) + k_0^2 \epsilon_r E_z = 0

where E_z is the out-of-plane electric field, \mu_r and \epsilon_r are the relative permeability and permittivity (both equal to 1 in vacuum), and k_0 is the free-space wavenumber.

Solving the above equation via the finite element method requires that we have a finite-sized modeling domain, as well as a set of boundary conditions. We want to use boundary conditions along the outside that are transparent to any radiation. Doing so will let our truncated domain be a reasonable approximation of free space. We also want this truncated domain to be as small as possible, since keeping our model size down reduces our computational costs.

Let’s now look at two of the options available within the COMSOL Multiphysics simulation environment for truncating your modeling domain: the scattering boundary condition and the perfectly matched layer.

One of the first transparent boundary conditions formulated for wave-type problems was the Sommerfeld radiation condition, which, for 2D fields, can be written as:

\lim_{ r \to \infty} \sqrt r \left( \frac{\partial E_z}{\partial r} + i k_0 E_z \right) = 0

where r is the radial coordinate.

This condition is exactly non-reflecting when the boundaries of our modeling domain are infinitely far away from our source, but of course an infinitely large modeling domain is impossible. So, although we cannot apply the Sommerfeld condition exactly, we *can* apply a reasonable approximation of it.

Let’s now consider the boundary condition:

\mathbf{n} \cdot (\nabla E_z) + i k_0 E_z = 0

You can clearly see the similarities between this condition and the Sommerfeld condition. This boundary condition is more formally called the *first-order scattering boundary condition (SBC)* and is trivial to implement within COMSOL Multiphysics. In fact, it is nothing other than a Robin boundary condition with a complex-valued coefficient.

If you would like to see an example of a 2D wave equation implemented from scratch along with this boundary condition, please see the example model of diffraction patterns.

Now, there is a significant limitation to this condition. It is only non-reflecting if the incident radiation is exactly normally incident to the boundary. Any wave incident upon the SBC at a non-normal incidence will be partially reflected. The reflection coefficient for a plane wave incident upon a first-order SBC at varying incidence is plotted below.

*Reflection of a plane wave at the first-order SBC with respect to angle of incidence.*

We can observe from the above graph that as the incoming plane wave approaches grazing incidence, the wave is almost completely reflected. At a 60° incident angle, the reflection is around 10%, so we would clearly like to have a better boundary condition.

COMSOL Multiphysics also includes (as of version 4.4) the second-order SBC:

\mathbf{n} \cdot (\nabla E_z) + i k_0 E_z -\frac{i }{2 k_0} \nabla_t^2 E_z= 0

This equation adds a second term, which takes the second tangential derivative of the electric field along the boundary. This is also quite easy to implement within the COMSOL software architecture.

Let’s compare the reflection coefficient of the first- and second-order SBC:

*Reflection of a plane wave at the first- and second-order SBC with respect to angle of incidence.*

We can see that the second-order SBC is uniformly better. We can now get to a ~75° incident angle before the reflection is 10%. This is better, but still not the best we can achieve. Let’s now turn our attention away from boundary conditions and look at perfectly matched layers.
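As a quick numerical aside, the standard plane-wave analysis for a flat truncation boundary gives an amplitude reflection coefficient of (1 − cos θ)/(1 + cos θ) for the first-order SBC and the square of that for the second-order SBC; squaring once more gives reflected power. The snippet below sketches those textbook expressions (not the solver's internal formulas) and reproduces the roughly 10% figures quoted above:

```python
import numpy as np

# Amplitude reflection coefficients for a plane wave hitting a flat
# truncation boundary at angle theta from the normal. These closed forms
# come from the standard plane-wave analysis and are a sketch of the trend,
# not the exact expressions used internally by any particular solver.
theta = np.deg2rad(np.array([0, 30, 60, 75, 85]))
r1 = (1 - np.cos(theta)) / (1 + np.cos(theta))   # first-order SBC
r2 = r1**2                                       # second-order SBC

# Power reflection, which is what a reflectance plot typically shows.
R1, R2 = r1**2, r2**2
print(np.round(R1, 3))   # first order reaches ~0.111 (11%) already at 60 deg
print(np.round(R2, 3))   # second order reaches ~0.12 only near 75 deg
```

Note that the second-order power curve is the first-order curve squared, which is why it is uniformly better at every angle.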

Recall that we are trying to simulate a situation such as an antenna in an anechoic test chamber, a room with pyramidal wedges of radiation absorbing material on the walls that will minimize any reflected signal. This can be our physical analogy for the perfectly matched layer (PML), which is not a boundary condition, but rather a domain that we add along the exterior of the model that should absorb all outgoing waves.

Mathematically speaking, the PML is simply a domain that has an anisotropic and complex-valued permittivity and permeability. For a sample of a complete derivation of these tensors, please see *Theory and Computation of Electromagnetic Fields*. Although PMLs are theoretically non-reflecting, they do exhibit some reflection due to the numerical discretization: the mesh. To minimize this reflection, we want to use a mesh in the PML that aligns with the anisotropy in the material properties. The appropriate PML meshes are shown below, for 2D circular and 3D spherical domains. Cartesian and spherical PMLs and their appropriate usage are also discussed within the product documentation.

*Appropriate meshes for 2D circular and 3D spherical PMLs.*

In COMSOL Multiphysics 5.0, these meshes can be automatically set up for 3D problems using the Physics-Controlled Meshing, as demonstrated in this video.
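One way to understand the PML's complex-valued material properties is as a complex stretching of the spatial coordinate, which turns a propagating wave into a decaying one inside the layer. The sketch below uses an assumed quadratic absorption profile purely for illustration (the profile, peak value, and sign conventions are not COMSOL's internal parameterization):

```python
import numpy as np

# A PML can be viewed as an analytic continuation of the coordinate: inside
# the layer, x is replaced by x' = x - (i/k0) * integral(sigma dx), so the
# outgoing wave exp(-i*k0*x) picks up an exp(-integral(sigma)) decay.
k0 = 2 * np.pi       # wavenumber for a unit wavelength (assumed)
L = 1.0              # PML thickness (assumed)
sigma0 = 12.0        # peak absorption, a free tuning parameter (assumed)

x = np.linspace(0, L, 201)              # depth into the PML
sigma = sigma0 * (x / L)**2             # smoothly ramped absorption profile
S = np.cumsum(np.concatenate(
    [[0], 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x)]))   # integral of sigma
x_stretched = x - 1j * S / k0

wave = np.exp(-1j * k0 * x_stretched)   # outgoing wave inside the PML
print(abs(wave[0]), abs(wave[-1]))      # 1.0 at entry, strongly damped at exit
```

Ramping the absorption smoothly from zero, as done here, is what keeps the interface between the physical domain and the layer reflectionless in the continuous picture; the residual reflection in practice comes from the discretization.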

Let’s now look at the reflection from a PML with respect to incident angle as compared to the SBCs:

*Reflection of a plane wave at the first- and second-order SBC and the PML with respect to angle of incidence.*

We can see that the PML reflects the least amount across the widest range. There is still reflection as the wave is propagating almost exactly parallel to the boundary, but such cases are luckily rather rare in practice. An additional feature of the PML, which we will not go into detail about for now, is that it absorbs not only the propagating wave, but also any evanescent field. So, from a physical point of view, the PML truly can be thought of as a material with almost perfect absorption.

Clearly, the PML is the best of the approaches described here. However, the PML does use more memory than the SBC.

So, if you are early in the modeling process and want to build a model that is a bit less computationally intensive, the second-order SBC is a good option. You can also use it in situations where you have a strong reason to believe that any reflections at the SBC won’t greatly affect the results you are interested in.

The first-order SBC is currently the default for reasons of compatibility with previous versions of the software, but if you are running COMSOL Multiphysics version 4.4 or greater, use the second-order SBC. We have only introduced the plane-wave form of the SBC here, but cylindrical-wave and spherical-wave (in 3D) forms of the first- and second-order SBCs are also available. Although they do use less memory, they all exhibit more reflection as compared to the PML.

The SBC and the PMLs are appropriate conditions for open boundaries where you do not know much about the fields at the boundaries *a priori*. If, on the other hand, you want to model an open boundary where the fields are known to have a certain form, such as a boundary representing a waveguide, the Port and Lumped Port boundary conditions are more appropriate. We will discuss those conditions in an upcoming blog post.

Before you can fully benefit from a product and get the most out of it, you of course need to learn how to use it first. Without this knowledge, frustration usually follows.

Do you recall your first experience with a smartphone, for example? The technology most likely seemed foreign at the time. These days, however, most would agree that smartphones are actually rather intuitive. We’re confident you will find that this is also the case with COMSOL Multiphysics. Reading the manual and following instructions for any advanced tool is usually the best method for quickly learning how to use it.

To aid you in your simulation learning process, we have created a video tutorial that shows you the basics of setting up a model and running a simulation in COMSOL Multiphysics.

In the video, we use an electrical-thermal-structural coupling to demonstrate the concept of *multiphysics*, which is one of the key advantages of using the COMSOL software. That said, the steps for setting up a model are the same no matter what physics you are modeling. In other words, this video is beneficial for anyone interested in learning the basics of simulating with COMSOL Multiphysics.

*A screenshot of the COMSOL Multiphysics user interface. The model shown includes an electrical-thermal-structural coupling, with Joule heating and thermal expansion.*

My colleague, Amelia Halliday, will begin by presenting the screen you see when you first open COMSOL Multiphysics. She will then continue through the process of creating and running a simulation to investigate the electrical and thermal properties of a thermal actuator. After running a successful simulation, she will continue to add complexity by including thermal expansion. At the end of the video, you will have a basic understanding of how to create and run your own simulations. Links below the video will allow you to jump back to certain modeling steps in order to review them again.

Do not be deterred if this is not your application area; the process for creating a model is the same no matter the physics being modeled.

You will always follow the same workflow when using COMSOL Multiphysics:

- Set up the model environment
- Create geometrical objects
- Specify material properties
- Define physics boundary conditions
- Create the mesh
- Run the simulation
- Postprocess the results

- After watching the full video, get more modeling tips from the “How To” section of our Video Gallery
- If you have any questions while using COMSOL Multiphysics, please contact support

There are many situations in which a rotating object is exposed to loads. For example, think of a rotisserie chicken or a kebab. Meat on a rotating spit is exposed to a heat load, usually a radiative heat source such as coals. Rotation is a simple way to distribute the applied heat. It keeps any regions from getting too hot or too cold and is an easy way to promote uniform cooking.

Now that I’ve got you licking your chops, let’s look at a slightly simpler case.

Today, we will look at the laser heating of a spinning silicon wafer. Although it isn’t quite as delicious to think about as rotating food, I’m sure you will find it equally informative.

As you may know, we already have an example of this in our Model Library and online Model Gallery. The existing example considers a wafer mounted on a rotating stage and heated by a laser traversing back and forth over the surface. The problem is solved in a stationary coordinate system. (Just think of yourself standing outside the process chamber and watching the wafer spinning on the stage.) We will call this the *global coordinate system*.

The laser is modeled as a heat source that moves back and forth along the global *x*-axis, while the wafer rotates about the global *z*-axis. The rotation of the wafer is modeled via the *Translational Motion* feature within the *Heat Transfer in Solids* physics interface, which adds a convective term to the governing transient heat transfer equation:

\rho C_p \frac{\partial T} {\partial t} -\nabla \cdot ( k \nabla T) = -\rho C_p \mathbf{u} \cdot \nabla T

The right-hand side of the above equation accounts for the rotation of the wafer through \mathbf{u}, the velocity vector. This velocity vector can be interpreted as material entering and leaving each element in the finite element mesh — that is, we are solving the problem on an Eulerian frame. Since the geometry is a uniform disk and the applied velocity vector describes a rotation about the axis of the disk, this is a valid approach.

The drawback, however, arises when you want to add more physics to the model. The Translational Motion feature is only available within the Heat Transfer physics interfaces, and there are many other physics that we do not want to solve on an Eulerian frame.

Instead of solving this problem on an Eulerian frame in the global coordinate system, we can solve this problem on a Lagrangian frame, with a rotating coordinate system that moves with the material rotation of the wafer. (Think of yourself as a tiny person standing on the surface of the wafer. The surroundings will appear to be rotating, whereas the wafer will appear stationary.)

The right-hand side of the above governing heat transfer equation becomes zero, but we now need to consider a heat load that not only moves back and forth along the global *x*-axis but also rotates around the *z*-axis of our rotating coordinate system. Although this may sound complicated, it is quite straightforward to implement.

*An observer in the global coordinate system sees a spinning wafer with a laser heat source traversing back and forth along the x-axis (left). An observer in a coordinate system rotating with the wafer sees the wafer as stationary, but the heat source moves in a complicated path in the x-y plane (right).*

The *General Extrusion* operators provide a mechanism for transforming fields from one coordinate system to another. Some applications that we have already written about include submodeling, coupling different physics interfaces, and evaluating results at a moving point.

Here, we will use the General Extrusion operators to apply a rotational transformation to the applied loads. Our loads are applied in the rotating coordinate system via a coordinate transform from the global coordinate system given by the rotation matrix:

\left\{ \begin{array}{c} x' \\ y' \\ z' \end{array} \right\} = \left[ \begin{array}{ccc} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \left\{ \begin{array}{c} x \\ y \\ z \end{array} \right\}

We can start with the existing Laser Heating of a Silicon Wafer example and simply remove the existing Translational Motion feature. We then have to add a General Extrusion operator, which implements the above transformation, as shown in the screenshot below. We will also want to implement a second operator that applies the reverse transform, which is done by switching the sign of the rotation.

*The general extrusion operation applies a rotational transform.*
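In ordinary code, the two operators amount to applying the rotation matrix with opposite signs of the angle. A minimal sketch, where the spin rate and point coordinates are made-up values:

```python
import numpy as np

# Forward and reverse rotational transforms, as a plain sketch of what the
# two General Extrusion operators do (all numbers here are illustrative).
def rotate(points, theta):
    # Apply the rotation matrix about the z-axis to an array of (x, y, z) rows.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ R.T

omega = 2 * np.pi    # assumed wafer spin rate, rad/s
t = 0.125            # assumed evaluation time, s
theta = omega * t

p_global = np.array([[1.0, 0.0, 0.0]])   # laser spot seen in the lab frame
p_rotating = rotate(p_global, -theta)    # the same point seen on the wafer

# The reverse operator (opposite sign of rotation) takes us back.
back = rotate(p_rotating, theta)
print(np.allclose(back, p_global))       # True: the round trip is exact
```

The round trip is what lets us define loads in one frame and plot results in the other without losing anything.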

The applied heat load is described via a user-defined function, hf(x,y,t), that describes how the laser heat load moves back and forth along the *x*-axis in the global coordinate system. This moving load is then transformed into the rotating coordinate system via the General Extrusion operator, as shown in the screenshot below.

*The applied heat load in the rotating coordinate system, defined via the global coordinate system and the rotational transform.*

That’s it — you can solve the model just as before.

The results will now be with respect to the rotating coordinate system. It can be more practical for us to plot the temperature solution with respect to the global coordinate system by using the General Extrusion operator that applies the *reverse* transformation. This will give us a visualization of the temperature field as if we were standing outside of the process chamber and were watching the spinning wafer with a thermal camera.

*The second general extrusion operator is used to rotate the results back to the global coordinate system.*

The results of the simulation of the temperature field over time will be identical regardless of whether you use the Translational Motion feature or the General Extrusion operator. Although the General Extrusion operator requires more effort to implement — and does take a bit longer to solve — it is needed if you are interested in more than just the thermal solution.

For example, if you also need to compute a temperature-driven chemical diffusion and reaction process or the evolution of thermal stresses during the wafer heating, these problems should be solved on a coordinate system that rotates with the wafer.

There are of course many other applications where you could use the General Extrusion operator, but I hope I’ve satisfied your appetite for today!


Recall that in the previous entry, we studied a simple example of 1D heat transfer at steady state with no heat source, where the temperature T is a function of the position x in the domain defined by the interval 1\le x\le 5.

The weak formulation turns the differential equation for the heat transfer physics into an integral equation, with a test function \tilde{T}(x) acting as a localized sampling function within the integrand that probes the solution. Integrating the weak form by parts provides the numerical benefit of reduced differentiation order. It also provides a natural way to specify boundary conditions in terms of the heat flux. For fixed boundary conditions specified in terms of the temperature, the weak formulation uses the same mechanism of test functions and natural boundary conditions to construct additional terms in the equation system.

In the end, we arrived at an exemplary equation that looks like this:

(1)

\int_1^5 \partial_x T(x) \partial_x \tilde{T}(x) \,dx = -2 \tilde{T}_1 -\lambda_2 \tilde{T}_2 -\tilde{\lambda}_2 (T_2-9)

Here, the integrand on the left-hand side involves only the first derivative of the temperature, the first term on the right-hand side defines that the outgoing flux should be 2 at the left boundary (x=1), and the other two terms on the right-hand side together specify that the temperature should be 9 at the right boundary (x=5).

To implement Eq. (1) in COMSOL Multiphysics, we use the Model Wizard to create a new 1D model with a *Weak Form PDE (w)* interface (under *Mathematics > PDE Interfaces*) and a *Stationary* study. The dependent variable can be set to `T` to match the notation in our equation. For the geometry, we make an *Interval* between 1 and 5. The weak expression under the default “Weak Form PDE 1” node reads `-test(Tx)*Tx+1[m^-2]*test(T)`, where the first term corresponds to the integrand in our Eq. (1) and the second term corresponds to a heat source, which is not in our simple example and should be removed from the input field.

The weak expression now reads `-test(Tx)*Tx`, where `Tx` is the COMSOL Multiphysics notation for \partial_x T(x), the first derivative of the temperature, and `test(Tx)` is the first derivative of the test function \partial_x \tilde{T}(x). The negative sign comes from the convention that the input field assumes that the expression is on the right-hand side of the equal sign (as seen in the “Equation” section of the settings window), while the integral in our equation is on the left-hand side.

To implement the weak form terms on the right-hand side of Eq. (1) for the boundary conditions, right-click the *Weak Form PDE (w)* node. We see that there are built-in boundary features such as the *Dirichlet Boundary Condition* item, which is available in the pop-up menu for your convenience. However, since here we are interested in entering the equation ourselves, we hover the mouse over the item *More* in the pop-up menu and click on the item *Weak Contribution* in the next pop-up menu.

In the settings window for the “Weak Contribution 1” node under *Boundary Selection*, we select boundary 1 at the left end of the domain (at x=1). We then enter the weak expression `-2*test(T)` under the section *Weak Contribution* in the same settings window. This takes care of the first term on the right-hand side of Eq. (1), which specifies the outgoing flux to be 2 at the boundary x=1.

For the fixed boundary condition at x=5, where the last two terms on the right-hand side of Eq. (1) together specify that T=9, we create another “Weak Contribution” node at boundary 2 at the right end of the domain and an *Auxiliary Dependent Variable* subnode under it.

We enter `lambda2` for the *Field variable* name in the subnode and then enter the weak expression as the two terms in Eq. (1): `-lambda2*test(T)-test(lambda2)*(T-9)`

The COMSOL software discretizes the domain by creating a mesh. Let’s right-click the *Mesh 1* node and select *Edge* and then right-click *Edge 1* and select *Distribution*. Then, we set the “Number of elements” to 4 and click *Build All*. We intentionally keep the number of elements small to make it easier when we discuss the discretization in more detail later.

Also, under the *Discretization* section in the settings window for the *Weak Form PDE (w)* interface node, we set the “Element order” to Linear (click on the *Show* button under Model Builder and then the item *Discretization* in the pop-up menu to enable the *Discretization* section):

Now we are ready to click *Compute* and check whether the solution makes sense.

The solution gives a straight line within the domain, which is consistent with the temperature profile at steady state with no heat source. The slope of the line is 2, which is consistent with the boundary condition that the outgoing flux is 2 at x=1. The temperature is 9 at x=5, as specified by the fixed boundary condition. Since there is no heat source, the total heat flux going out of the domain should sum up to zero in the steady state. Thus, the outgoing flux should be -2 at x=5.

We readily verify this by making a point evaluation of the heat flux variable `lambda2`, as shown in the screenshot below:
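For readers who want to see what the discretization does behind the scenes, here is a minimal stand-alone sketch of Eq. (1), assuming NumPy is available: assemble linear elements on the four-element mesh, add the two weak boundary contributions, and solve. It is a pure illustration, not COMSOL's internal code, but it reproduces the straight line with slope 2 and the flux value of -2:

```python
import numpy as np

# Assemble Eq. (1) on the 4-element mesh of [1, 5] with linear elements,
# plus one extra unknown for the Lagrange multiplier lambda2.
n_el, h = 4, 1.0
n = n_el + 1                       # 5 temperature nodes
K = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)

ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness matrix
for e in range(n_el):
    K[e:e+2, e:e+2] += ke

b[0] = -2.0            # flux contribution  -2*test(T)             at x = 1
K[n - 1, n] += 1.0     # contribution from  -lambda2*test(T)       at x = 5
K[n, n - 1] += 1.0     # contribution from  -test(lambda2)*(T-9)   at x = 5
b[n] = 9.0

sol = np.linalg.solve(K, b)
print(sol[:n])   # nodal temperatures 1, 3, 5, 7, 9: a straight line, slope 2
print(sol[n])    # lambda2 = -2, the outgoing flux at x = 5
```

The two off-diagonal entries added at the end are exactly the weak contributions entered in the boundary nodes above.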

Some readers may wonder whether it is always necessary to solve for the auxiliary variable `lambda2`, the so-called *Lagrange multiplier*, especially if it is not needed by the modeler and solving for it inevitably requires more computation. As we will see in the following posts, COMSOL Multiphysics provides alternative features and allows the user to decide whether or not to solve for the Lagrange multiplier.

Today, we refreshed the concept of the weak formulation and implemented an exemplary weak form equation (1) in COMSOL Multiphysics. The resulting numerical solution behaves as expected from simple physical arguments.

In future blog posts, we will take a look “under the hood” to see how the weak form equations, such as Eq. (1), are discretized and solved numerically. We will see how the same problem can be solved in different ways and how different boundary conditions can be set up for different types of problems.

Stay tuned!


Streamline plots show curves that are tangent everywhere to an instantaneous vector field. For this reason, they are often used to visualize fluid flow. While they are best known for their wavy shapes, significant care and intention can go into the creation of good streamline plots. We’ll return to a familiar model for this demonstration: the Flow in a Pipe Elbow model. This model is suitable for a demonstration because it can be applied to a range of more complicated devices and processes such as mixers, entire pipe networks, water filtration, and the cooling of electronics.
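Under the hood, a streamline is simply a curve integrated through the velocity field: starting from a seed point, you repeatedly step in the direction of the local velocity. Here is a minimal sketch of that idea using a hypothetical 2D solid-body-rotation field and a fixed-step Runge-Kutta integrator — not COMSOL's actual algorithm, just the underlying concept:

```python
import math

# Hypothetical 2D velocity field: solid-body rotation about the origin.
# Its streamlines are circles, which makes the result easy to check.
def velocity(x, y):
    return -y, x

def rk4_step(x, y, h):
    """One fixed-size Runge-Kutta 4 step along the instantaneous field."""
    k1 = velocity(x, y)
    k2 = velocity(x + h/2*k1[0], y + h/2*k1[1])
    k3 = velocity(x + h/2*k2[0], y + h/2*k2[1])
    k4 = velocity(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def streamline(seed, h=0.01, steps=628):
    """Trace a streamline from a seed point by repeated integration steps."""
    pts = [seed]
    for _ in range(steps):
        pts.append(rk4_step(*pts[-1], h))
    return pts

line = streamline((1.0, 0.0))   # seed on the unit circle
# For this rotational field the curve should stay on the unit circle:
drift = max(abs(math.hypot(x, y) - 1.0) for x, y in line)
print(drift)                    # tiny numerical drift, well below 1e-6
```

A real postprocessor adds seeding strategies, step-size control, and termination at domain boundaries, but the core is this integration loop.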

If you’d like to follow along and have the CFD Module installed, the pipe elbow model can be found under File > Model Libraries > CFD Module > Single-Phase Benchmarks. This model simulates the flow of water at 90°C through a 90-degree bend. Because of the pipe diameter and the high Reynolds number, a turbulence model is used in the simulation.

In the solved model, there is an existing 3D plot group called *Pressure* (*spf2*) that already contains a streamline plot. Let’s take a look:

Only half of the symmetric pipe geometry is modeled, with the inlet at the bottom and the outlet at the top right in this view. The streamlines in this plot group show the instantaneous fluid velocity as water flows toward, through, and away from the bend. We can see a fully developed turbulent flow profile at first; then, a separation zone and turbulent vortices appear after the bend. The color expression in the streamline plot indicates fluid velocity (in m/s).

There are several different ways to arrange streamline plots. In the settings window, the Streamline Positioning tab contains different options for the position. We’ll touch on just a few of them.

In the figure previously shown, the positioning is set to *On selected boundaries*, meaning that a single boundary has been selected for the curves to start at — in this case, the inlet of the pipe — and the user specifies how many streamlines to plot. Currently, twenty individual curves are released from this boundary, as specified in the *Number* field:

Some of the other positioning tools have slightly different effects. I’ll now set the positioning to *Uniform density*. With this option, the user controls the distance between each curve rather than specifying the number of lines.

This is also where, if you’re not careful, you might end up with a very dense array of streamlines, which can take some time to compute during postprocessing:

The presentation of the streamline density can be adjusted using the tube settings under the Coloring and Style tab. We can either shrink the tube radius (currently set to 0.001 m) or choose a different line style from the *Line type* drop-down list.

For a case like the dense uniformity shown in the previous figure, the line style will show the plot more clearly than the tubes with the radius shown above:

The ribbon style shows the streamlines as flat ribbons. Like the tube style, these are controlled by a user-defined width and scale factor. Both styles share the risk of the same density issue if they are packed together too tightly. For the figure below, I increased the separating distance from 0.01 to 0.02, thereby decreasing the number of ribbon streamlines plotted:

One nice trick you can do with the ribbons, especially for RANS CFD applications, is to set the ribbon width to a variable. I have set the width expression (scaled down) to *k2*, representing the turbulent kinetic energy.

This can be used to help identify vortices in the flow. Below, we can see that the ribbons are much thinner before the bend. They thicken near the bottom of the pipe after the elbow, where the turbulent kinetic energy has increased:

We’ve now seen a few different ways to shape the streamlines, but what are they really representing? Often, one of the best uses for streamlines is to depict the flow velocity as fluid moves through a domain — in this case, the pipe bend. Each of the earlier examples shows a plot that, by default, contains a color expression. Disabling this color expression from the streamline plot will show a very different look. The result is still useful as it shows the turbulent flow in the pipe, but it doesn’t provide the same information (and this is why coloring is so significant):

Let’s take a look at the Color Expression settings. Like most plots, the color expression will be based on a variable of your choice. The variable you choose depends on your application. For this model, it’s important to know the velocity and fluid pressure of the water. I’ll set the expression to show pressure:

Below, you can see how the coloring looks different than it did earlier when it was set to show velocity. This image also shows the extent of the pressure drop brought on by the fluid being forced through a 90-degree bend:

There you have it. We have provided you with a brief example of how streamline plots can be particularly helpful in depicting fluid flow — which could mean air, water, or a chemical mixture — through any domain. I hope you’ll have the opportunity to apply this technique in your future postprocessing efforts.

- Download the Flow in a Pipe Elbow model
- Read the previous blog entry in this postprocessing series

Any field variable *u = u(x,y,z)* in a 3D model could have a different maximum value in various *xy* planes, i.e., *(u _{max})_{i}* at each plane location *z = z _{i}*:

*A schematic showing different field maximum values on parallel sections.*

The first approach would be to use the built-in postprocessing features such as the Cut Plane data set and the Surface Maximum feature. You could create a Cut Plane data set at a particular *z*-location. The maximum value of any variable at this plane can be evaluated using the Surface Maximum feature (under Derived Values > Maximum > Surface Maximum). With this approach, the cross-sectional maximum can be evaluated at a single *z*-location. This procedure must be repeated for different axial locations to obtain maximum values at each of these cross sections.
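The manual procedure amounts to a loop over *z*-locations, each iteration taking a 2D maximum over the corresponding plane. On sampled or exported data, the same bookkeeping looks like this — a generic Python sketch using a made-up analytic field, not tied to the COMSOL API:

```python
from math import exp

# Sketch of the per-plane maximum idea on a sampled grid, using a made-up
# field u(x, y, z) = exp(-z) * (1 - x^2 - y^2) on [-1, 1]^2 x [0, 1].
# Each z-slice has its own maximum (here exp(-z), attained at x = y = 0).
def u(x, y, z):
    return exp(-z) * (1 - x*x - y*y)

nx = ny = 21
xs = [-1 + 2*i/(nx - 1) for i in range(nx)]
ys = [-1 + 2*j/(ny - 1) for j in range(ny)]
zs = [k/10 for k in range(11)]          # the "cut plane" locations

# One maximum per plane: the analog of repeating Cut Plane + Surface Maximum.
umax = [max(u(x, y, z) for x in xs for y in ys) for z in zs]
print(umax[0])   # 1.0: the slice maximum at z = 0
```

The sections below show how the same loop can be expressed inside COMSOL Multiphysics itself, first via LiveLink™ *for* MATLAB® and then via the sweep features.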

The manual procedure outlined in the above section can be automated using LiveLink™ *for* MATLAB® through a simple ‘for’ loop. The built-in wrapper function ‘mphmax’ enables the maximum calculation for a Cut Plane data set. Here is the sample code for evaluating the maximum cross-sectional values (the *u _{max}* vector with ‘n’ values) at ‘n’ *z*-locations:

```matlab
z = [z1 z2 z3 ... zn];    % axial locations of the cut planes
umax = zeros(size(z));    % preallocate the result vector
for i = 1:length(z)
    % Move the Cut Plane data set to the current z-location
    model.result.dataset('cpl1').set('quickz', z(i));
    % Evaluate the surface maximum of u on the cut plane
    umax(i) = mphmax(model, 'u', 2, 'dataset', 'cpl1');
end
plot(z, umax)
```

The Parametric Sweep or Auxiliary Sweep features are available to run COMSOL Multiphysics models for multiple values of any given parameter (used for defining geometry, mesh, and physics settings). Because these sweep features are essentially a ‘for’ loop, we can even utilize the features for a postprocessing analysis such as this.

Let’s look at the various steps needed to create the cross-sectional maximum value plot for the velocity magnitude in the Laminar Static Mixer model. The slice plot below shows the velocity field magnitude at various *z*-locations. Our objective is to obtain the maximum velocity magnitude on each of these slices (cross sections) and plot these slice maximums with respect to the *z*-coordinate.

*Velocity slice plot for the Laminar Static Mixer model.*

The first step is to add a parameter corresponding to the axial coordinate. For this example model, we add a parameter *zp*, as shown in the screenshot below.

*Adding a parameter in the Parameters table.*

Next, we need to add a second study (with a stationary step) to run the Auxiliary/Parametric sweep with respect to the parameter *zp*. The sole purpose of using the second study step is to restructure the available solution data in order to achieve our postprocessing objective. Therefore, in this step, we do not solve for any physics interfaces and simply pass along the solution from Study 1 to this study.

Using the Stationary step settings mentioned above (and also noted in the snapshot below), we then need to compute the second study step.

*Stationary step settings for running the Auxiliary Sweep feature without solving for physics interfaces.*

We now need to add a Parameterized Surface data set (right-click on *Data Sets* under the Results node and select “Parameterized Surface” from the More Data Sets submenu). In the Parameterized Surface settings, we will use Study 2/Solution 1 as the data set and parameter *zp* as the *z*-coordinate expression. This is illustrated in the image below.

*Settings for creating the Parameterized Surface data set.*

In the next step, we need to add a Surface Maximum node to the model (right-click on *Derived Values* under the Results node and choose “Surface Maximum” from the Maximum submenu). We utilize the Parameterized Surface as the data set for evaluating the Surface Maximum, as shown in the Surface Maximum settings below. Now, click on *Evaluate* to create a table with plane maximum values (velocity magnitude) for each of the *z*-coordinate values.

*Settings for evaluating the Surface Maximum using the Parameterized Surface data set.*

Lastly, we can click on the *Table Graph* button (shown in the figure below) to plot the table data. The 1D plot indicates the maximum velocity field magnitude plotted versus the *z*-coordinate location.

*1D plot showing the cross-sectional maximum values for the velocity field magnitude at different z-coordinates.*

This post demonstrates a very simple — yet powerful — postprocessing trick for automating the maximum (or the minimum, average, or integration) calculation for any variable on parallel planes. The key concept we have highlighted here is how to use the sweep functionality within COMSOL Multiphysics to restructure the available solution data for postprocessing purposes. We have also presented a scripting solution using LiveLink™ *for* MATLAB® that would require only a few lines of code in order to complete this postprocessing task.

Stay tuned for the next entry in this blog series for additional tips and tricks!

*MATLAB is a registered trademark of The MathWorks, Inc. All other trademarks are the property of their respective owners. For a list of such trademark owners, see http://www.comsol.com/tm. COMSOL AB and its subsidiaries and products are not affiliated with, endorsed by, sponsored by, or supported by these trademark owners.*