Spirals, or helices, are widely observed in nature and utilized in many engineering designs. As an electrical engineer, for instance, you may wind inductive coils in spiral patterns and design helical antennas. As a mechanical engineer, you may use spirals when designing springs, helical gears, or even the watch mechanism highlighted below.

*An example of an Archimedean spiral used in a clock mechanism. Image by Greubel Forsey. Licensed by CC BY-SA 3.0, via Wikimedia Commons.*

Here, we’ll focus on a specific type of spiral, the one featured in the mechanism shown above: an Archimedean spiral. An *Archimedean spiral* is a type of spiral that has a fixed distance between its successive turns. This property enables its wide use in the design of flat coils and springs.

We can describe an Archimedean spiral with the following equation in polar coordinates:

r=a+b\theta

where a and b are parameters that define the spiral's initial radius and its growth rate; the distance between successive turns is equal to 2 \pi b. Note that an Archimedean spiral is also sometimes referred to as an *arithmetic spiral*. This name derives from the arithmetic progression of the distances from the origin to the points where the spiral crosses any fixed ray.
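The constant-spacing property follows directly from the equation: each full turn adds exactly 2 \pi b to the radius. Here is a minimal Python sketch of this (the values of a and b are arbitrary examples, not taken from any particular model):

```python
import math

# Hypothetical example values for the spiral parameters
a, b = 1.0, 0.5

def radius(theta):
    """Radius of an Archimedean spiral: r = a + b*theta."""
    return a + b * theta

# Sample the radius along the positive x-axis after 0, 1, and 2 full turns
r0 = radius(0.0)
r1 = radius(2 * math.pi)
r2 = radius(4 * math.pi)

# Successive turns are separated by the same fixed distance 2*pi*b,
# which is why the radii form an arithmetic progression
print(r1 - r0, r2 - r1)
```

With b = 0.5, both differences equal 2 \pi b, confirming the arithmetic progression mentioned above.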

Now that we’ve introduced Archimedean spirals, let’s take a look at how to parameterize and create such a design for analysis in COMSOL Multiphysics.

*An Archimedean spiral can be described in both polar and Cartesian coordinates.*

To begin, we need to convert the spiral equations from a polar to a Cartesian coordinate system and express each equation in a parametric form:

\begin{align*}
x_{component}=r\cos(\theta) \\
y_{component}=r\sin(\theta)
\end{align*}

This transformation allows us to rewrite the Archimedean spiral’s equation in a parametric form in the Cartesian coordinate system:

\begin{align*}
x_{component}=(a+b\theta)\cos(\theta) \\
y_{component}=(a+b\theta)\sin(\theta)
\end{align*}

In COMSOL Multiphysics, we need to decide on the set of parameters that will define the spiral geometry. These parameters are the spiral's initial radius a_{initial}, the spiral's final radius a_{final}, and the desired number of turns n. The spiral growth rate b can then be expressed as:

b=\frac{a_{final}-a_{initial}}{2 \pi n}

Further, we need to decide on the spiral's start angle \theta_0 and end angle \theta_f. Let's begin with the values \theta_0=0 and \theta_f=2 \pi n. With this information, we are able to define a set of parameters for the spiral geometry.

*The parameters used to build the spiral geometry.*

To build this spiral, we’ll start with a *3D Component* and create a *Work Plane* in the *Geometry* branch. In the *Work Plane* geometry, we then add a *Parametric Curve* and use the parametric equations referenced above with a varying angle to draw a 2D version of the Archimedean spiral. These equations can be directly entered into the parametric curve’s *Expression* field, or we can first define each equation in a new *Analytic* function as:

\begin{align*}
X_{fun}=(a+bs)\cos(s) \\
Y_{fun}=(a+bs)\sin(s)
\end{align*}

*The X-component of the Archimedean spiral equation defined in the Analytic function.*

The *Analytic* function can be used in the expressions for the Parametric Curve. In this Parametric Curve, we vary the parameter s from the initial angle of the spiral, \theta_0, to the final angle of the spiral, \theta_f=2 \pi n.

*The settings for the Parametric Curve feature.*

The parametric spiral equations used in the Parametric Curve feature will result in a spiral represented by a curve. Let’s now build upon this geometry, adding thickness to it in order to create a 2D solid object.

Up to this point, our spiral has been parameterized in terms of the initial radius a_{initial}, final radius a_{final}, and desired number of turns n. Now, we must incorporate thickness as another control parameter in the spiral equation.

Let’s begin with the main property of the spiral, which states that the distance between the spiral turns is equal to 2 \pi b. This is also equivalent to \frac{a_{final}-a_{initial}}{n}. To incorporate thickness, we represent the distance between each successive turn of the spiral as a sum of the spiral thickness and the remaining gap between turns, thick+gap.

*The distance between spiral turns is defined in terms of the spiral thickness and gap parameters.*

To control the thickness while keeping the distance between turns identical, the distance can be expressed as:

\begin{align*}
distance=\frac{a_{final}-a_{initial}}{n} \\
gap=distance-thick
\end{align*}

After defining thickness and expressing the gap between turns in terms of thickness and constant distance between centerlines of the spiral, we can rewrite the spiral growth parameter in terms of thickness as:

\begin{align*}
distance=2\pi b \\
b=\frac{gap+thick}{2\pi}
\end{align*}

We will also want to express the final angle of the spiral in terms of its initial and final radii:

\begin{align*}
\theta_{final}=2 \pi n \\
a_{final}=\text{total distance}+a_{initial} \\
a_{final}=2 \pi bn+a_{initial} \\
n=\frac{a_{final}-a_{initial}}{2 \pi b} \\
\theta_{final}=\frac{2 \pi (a_{final}-a_{initial})}{2 \pi b} \\
\theta_{final}=\frac{a_{final}-a_{initial}}{b}
\end{align*}
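We can verify numerically that the two routes to the final angle agree. This quick check uses hypothetical parameter values and is not part of the COMSOL model itself:

```python
import math

# Hypothetical values consistent with the parameterization above
a_initial, a_final, n = 1.0, 5.0, 4
b = (a_final - a_initial) / (2 * math.pi * n)

theta_from_turns = 2 * math.pi * n             # theta_final = 2*pi*n
theta_from_radii = (a_final - a_initial) / b   # theta_final = (a_final - a_initial)/b

# Both expressions give the same final angle
assert math.isclose(theta_from_turns, theta_from_radii)
print(theta_from_radii)  # 8*pi, about 25.13
```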

Want to start the spiral from an angle other than zero? If so, you will need to add this initial angle to your final angle in the expression for the parameter: \theta_f=\frac{a_{final}-a_{initial}}{b}+\theta_0.

Duplicating the existing spiral curve twice and placing these curves with an offset of -\frac{thick}{2} and +\frac{thick}{2} with respect to the initial spiral curve allows us to build the spiral with thickness. To position the upper and lower spirals correctly, we must make sure that the offset spirals are normal to the initial spiral curve. This can be achieved by multiplying the offset distance \pm\frac{thick}{2} by the unit vector normal to the spiral curve. The equations of the normal vectors to a curve in parametric form are:

n_x=-\frac{dy}{ds} \quad \text{and} \quad n_y=\frac{dx}{ds}

where s is the parameter used in the Parametric Curve feature. To get a unit normal, we need to divide these expressions by the length of the normal:

\sqrt{(dx/ds)^2+(dy/ds)^2 }

Our updated parametric equations for the Archimedean spiral with a half-thickness shift are:

\begin{align*}
x_{component}=(a+bs)\cos(s)-\frac{dy/ds}{\sqrt{(dx/ds)^2+(dy/ds)^2}}\frac{thick}{2} \\
y_{component}=(a+bs)\sin(s)+\frac{dx/ds}{\sqrt{(dx/ds)^2+(dy/ds)^2}}\frac{thick}{2}
\end{align*}
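To make the normal-offset construction concrete, here is a short Python sketch of one offset point, using the analytic derivatives dx/ds = b\cos(s)-(a+bs)\sin(s) and dy/ds = b\sin(s)+(a+bs)\cos(s). All parameter values are hypothetical:

```python
import math

a, b, thick = 1.0, 0.25, 0.1   # hypothetical spiral and thickness values

def offset_point(s, side):
    """Spiral point shifted by side*thick/2 along the unit normal."""
    r = a + b * s
    x, y = r * math.cos(s), r * math.sin(s)
    # Derivatives of the parametric equations with respect to s
    dx = b * math.cos(s) - r * math.sin(s)
    dy = b * math.sin(s) + r * math.cos(s)
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length   # unit normal to the curve
    return x + side * nx * thick / 2, y + side * ny * thick / 2

# The two offset curves straddle the centerline at distance thick/2,
# so their midpoint recovers the centerline point
upper = offset_point(math.pi, +1)
lower = offset_point(math.pi, -1)
print((upper[0] + lower[0]) / 2, (upper[1] + lower[1]) / 2)
```

The distance between the two offset points is exactly `thick`, which is what makes this construction produce a strip of uniform thickness.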

Writing out these equations in the parametric curve’s expression fields can be rather time consuming. As such, we introduce the following notation:

\begin{align*}
N_x=-\frac{dy/ds}{\sqrt{(dx/ds)^2+(dy/ds)^2}} \\
N_y=\frac{dx/ds}{\sqrt{(dx/ds)^2+(dy/ds)^2}}
\end{align*}

where each N_x and N_y is defined via the *Analytic* function in COMSOL Multiphysics, similar to how we defined X_{fun} and Y_{fun} for the first parametric curve. Within the function, we use the differentiation operator, `d(f(x),x)`, to take the derivative, as depicted in the following screenshot.

*Examples of the derivative operator used in the Analytic function.*

The functions X_{fun}, Y_{fun}, N_x, and N_y can then be used directly in the parametric curve’s expressions for the curve on one side:

\begin{align*}
x_{lower}=X_{fun}(s)+N_x(s)\frac{thick}{2} \\
y_{lower}=Y_{fun}(s)+N_y(s)\frac{thick}{2}
\end{align*}

The functions can also be used for the curve on the other side:

\begin{align*}
x_{upper}=X_{fun}(s)-N_x(s)\frac{thick}{2} \\
y_{upper}=Y_{fun}(s)-N_y(s)\frac{thick}{2}
\end{align*}

*Equations for the second of the two offset parametric curves.*

To join the ends of the two curves, we add two more parametric curves using a slight modification of the equations mentioned above. For the curve that joins the center of the spiral, we evaluate X_{fun}, Y_{fun}, N_x, and N_y at the starting value of the angle, \theta_0. For the curve that joins the outer side of the spiral, we evaluate them at the final value, \theta_f. Therefore, the joining curve in the center is:

\begin{align*}
X_{fun}(\theta_0)+s\cdot N_x(\theta_0)\cdot\frac{thick}{2} \\
Y_{fun}(\theta_0)+s\cdot N_y(\theta_0)\cdot\frac{thick}{2}
\end{align*}

The outer joining curve, meanwhile, is:

\begin{align*}
X_{fun}(\theta_f)+s\cdot N_x(\theta_f)\cdot\frac{thick}{2} \\
Y_{fun}(\theta_f)+s\cdot N_y(\theta_f)\cdot\frac{thick}{2}
\end{align*}

In both of the above equations, s goes from -1 to +1, as shown in the screenshot below.
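The end caps can be sketched the same way; with s running from -1 to +1, each segment spans the full thickness of the strip. Again, the parameter values below are hypothetical:

```python
import math

a, b, thick = 1.0, 0.25, 0.1   # hypothetical parameter values
theta_0 = 0.0                  # start angle of the spiral

def unit_normal(theta):
    """Unit normal to the spiral centerline at angle theta."""
    r = a + b * theta
    dx = b * math.cos(theta) - r * math.sin(theta)
    dy = b * math.sin(theta) + r * math.cos(theta)
    length = math.hypot(dx, dy)
    return -dy / length, dx / length

def cap_point(s, theta):
    """Joining segment across the spiral end; s runs from -1 to +1."""
    r = a + b * theta
    x0, y0 = r * math.cos(theta), r * math.sin(theta)
    nx, ny = unit_normal(theta)
    return x0 + s * nx * thick / 2, y0 + s * ny * thick / 2

# At s = 0 the cap passes through the centerline point; its endpoints
# (s = -1 and s = +1) meet the two offset curves
print(cap_point(-1.0, theta_0), cap_point(0.0, theta_0), cap_point(1.0, theta_0))
```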

*Equations for the curve that joins one end of the spiral.*

We now have five curves that define the centerline of the spiral and all four sides of the profile. We can disable (or even delete) the curve describing the centerline since it isn’t truly necessary, leaving just the spiral outline. With the outline of our spiral defined, the *Convert to Solid* operation can be used to create a single geometry object. This 2D spiral can finally be extruded into 3D via the *Extrude* operation.

*The full geometry sequence and extruded 3D spiral geometry.*

We have walked you through the steps of creating a fully parameterized Archimedean spiral. With this spiral geometry, you can change any of the parameters and experiment with different designs, or even use them as parameters in an optimization study. We encourage you to utilize this technique in your own modeling processes, advancing the analysis of your particular spiral-based engineering design.

- To explore further applications of simulation in the design of spiral models, try out these tutorial models:
- Read a related user story: “Analysis of Spiral Resonator Filters”

If you look up to the sky on a clear night, you may see the inspiration for our new color tables: the cosmos. These new color tables are a tribute to space research and science fiction. In order to discover their individual inspirations, let’s take a small tour through space.

The *Twilight* color table, as its name implies, is inspired by NASA images of the rising sun. In these moments, the sunlight creates multicolored layers on polar mesospheric clouds, as shown below. We used this inspiration to create a color table with ten color levels, which the software interpolates between.

*A NASA Johnson Space Center photo showing the Endeavor and sunlight illuminating polar mesospheric clouds (left) and simulation results showing the electrodeposition of a car door with the Twilight color table (right).*

The *Aurora Australis* and *Aurora Borealis* color tables are both named after a well-known natural phenomenon in which solar winds traveling through space interact with Earth’s magnetic field. This results in an awe-inspiring light show that appears to paint the sky a rainbow of different colors. The vibrant colors of the Aurora Borealis (northern latitudes) and Aurora Australis (southern latitudes) are reflected in these color tables. COMSOL Multiphysics version 5.2a offers eight color levels for the Aurora Australis color table and ten for the Aurora Borealis color table.

*The Aurora Borealis (left). Image by Carsten Frenzl — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons. A pressure field around an Ahmed body, visualized with the Aurora Borealis color table (right).*

Venturing out further, we find the inspiration for our next color table, *Jupiter Aurora Borealis*. These colors are influenced by ultraviolet images of the largest planet in our solar system, Jupiter. One such photo, shown below, depicts auroras on Jupiter’s surface. This ultraviolet visualization guided the creation of nine color levels for the Jupiter Aurora Borealis color table.

*Auroras on Jupiter’s surface (left), taken by NASA’s Hubble Space Telescope. Image Credit: NASA/STScl. The flow of droplets around an inkjet nozzle, using the Jupiter Aurora Borealis color table (right).*

Finally, let’s take a look at our *Heat Camera* and *Heat Camera Light* color tables, which draw inspiration from a certain famous creature from science fiction movies: the Predator. In the *Predator* films, extraterrestrial hunters chase their prey using thermal vision, that is, they see infrared radiation. The Predators can also switch color tables, further enabling them to search their landscape for prey. Our own Heat Camera color tables resemble this fictional thermal vision as well as the color tables of actual heat camera technology. Eight color levels are included in the Heat Camera color table.

*An infrared camera (left). Image taken by NASA/John Bethune. An increase in logarithmic temperature in a SAM phantom, shown with the Heat Camera color table (right).*

Color tables offer much more than just visual appeal — they also expand the visualization capabilities of COMSOL Multiphysics. We’ll take a look at some concrete examples in the next section.

While all of our color tables are multihue (include multiple colors), they differ in other ways. For instance, these color tables include both sequential color scales, which transition monotonically from lighter to darker colors in one direction, and diverging color scales, which place the lightest or darkest color in the middle and become darker or lighter (respectively) toward each side. When picking the right color table for the job, you may need to choose between sequential and diverging color tables, with each option offering different benefits.

For instance, sequential color tables are preferable for visualizing heat transfer and CFD simulations. My colleague Ed Fontes mentions the colors in the Heat Transfer Module as an example. Here, the default colors are red and yellow. Such colors evoke certain associations, like the glow of iron that is about to melt or a burning stove top. The instinctual nature of these associations makes your simulation results easy to interpret.

Let’s look again at the new Jupiter Aurora Borealis color table, which, unlike the other aurora color tables, has a relatively small color variation consisting of a range of blues. Blue colors like these are commonly associated with water, with darker colors signifying deep water and lighter hues representing more shallow water. As such, this color table would be a good choice for fluid flow simulations. The visual below and to the left uses the Jupiter Aurora Borealis color table to illustrate the topology of a wave in a microchannel, with the color variation indicating the distance to the water’s surface.

*CFD simulations showing a wave formed by water flowing through a narrow channel. Images created using the new sequential Jupiter Aurora Borealis color table (left) and the diverging Twilight color table (right).*

Sequential color scales are not necessarily the best way to display your results. It all depends on what’s right for your unique simulation needs. Although most of our new color tables are sequential, there is one notable exception: the Twilight color table, which is classified as diverging. Diverging color tables, as Ed Fontes explains, help to highlight a significant element in your design by using a brighter or darker color in the middle of the scale. Such color tables are often used to show a transition level of a field.

To learn more, let’s return to our fluid flow example from before. Unlike the Jupiter Aurora Borealis color table, the Twilight color table clearly defines the water and air phases by using different colors on either end of the scale, as seen in the image above and to the right. It also utilizes the bright value in the middle of the scale to highlight a transitional phase, enabling you to differentiate between three different aspects of your simulation results.

Further use of the diverging Twilight color table is highlighted in the following image of a simulation of electromagnetic waves propagating through an elbow waveguide. With the Twilight color table, the positive (blue) and negative (tan) colors of the *z* component of the electric field are defined in a straightforward manner.

*Electromagnetic wave propagation in an elbow waveguide, using the Twilight color table.*

You can also use multiple color tables at once to observe details that would not be noticeable with a single color table. For example, two color tables can help you visualize the magnetic flux around a permeable magnetic sphere that is subjected to a background magnetic field. In this case, we selected the Jupiter Aurora Borealis color table to show the magnetic flux at the outer boundaries and the Aurora Borealis color table to show the magnetic flux density within the modeling domain and the permeable sphere at the core. As another example, we used the Aurora Australis color table to visualize the electron temperature and the Jupiter Aurora Borealis color table to visualize magnetic vector potential streamlines, with color representing the intensity of the electric field.

*Results using the Jupiter Aurora Borealis (left scale) and Aurora Borealis (right scale) color tables to help visualize results from a magnetic permeable sphere simulation.*

*Results using both the Aurora Australis (left scale) and the Jupiter Aurora Borealis (right scale) color tables.*

Postprocessing is a powerful tool for better understanding and communicating simulation results. The six new color tables highlighted here aim to extend the visualization capabilities of COMSOL Multiphysics, combining eye-catching results with useful applications. Browse the different options to find the color table that best suits your simulation needs…

- Learn how to manually change the color range in a color table
- Browse additional postprocessing tips and tricks
- See what else is new in version 5.2a of COMSOL Multiphysics® in our Release Highlights

*The National Aeronautics and Space Administration (NASA) does not endorse the COMSOL Multiphysics® software.*

The typical home thermostat for your furnace or heater has two temperature setpoints:

- *On*, at which the heat is turned on at the maximum possible rate
- *Off*, at which the heat is turned off

In practice, you may only directly control one of these setpoints, with the other automatically set a few degrees away. Such a control scheme can be modeled with the approach developed in a previous blog post on thermostats. Today, we will add a time delay between heater state switches to our thermostat simulation.

A delay can be implemented for many reasons, but one of the main reasons is to prevent *short-cycling* of the heater. Turning a device on and off repeatedly can lead to excessive wear on the system components, which will incur repair costs that we want to avoid.

There are various different delay schemes that we can use to prevent short-cycling. One common approach is to maintain the heater in either the on or off state for some defined minimum time before switching. Let’s look at how this control scheme is implemented within COMSOL Multiphysics.

Let’s suppose that you have a thermal model of a house or any other system wherein you want to control a heater using a thermostat with a delay. Within such a model, you can choose one point as the location of the thermostat temperature sensor, as described in our earlier blog post on modeling thermostats. As previously mentioned, we will use the *Events* interface to control the on/off state of the heater in our model.

*A thermal model of a house. The thermostat monitors the sensor temperature and controls the heater’s on/off state.*

Before we get to the implementation, let’s write out exactly how we want our thermostat to work. We have a thermostat with an upper and lower setpoint and we want the heater to maintain its current state, either on or off, for a specified delay time. We can write this as two separate statements:

- If the heater has been in the off state for more than the specified delay time and if the temperature is below the lower setpoint, then turn the heater on.
- If the heater has been in the on state for more than the specified delay time and if the temperature is above the upper setpoint, then turn the heater off.

These two statements can be implemented within the *Events* interface. We simply need to introduce a way to track the time that has elapsed since the last switching of the thermostat. The screenshots below show how to set up these conditions.
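Outside of COMSOL Multiphysics, the two switching rules can be sketched in a few lines of Python. All setpoint and delay values here are hypothetical, and the function merely illustrates the event logic, not the software's actual implementation:

```python
def step_thermostat(t, T, state, time_of_switch,
                    T_low=18.0, T_high=21.0, delay=300.0):
    """Apply the two switching rules; returns (new_state, new_switch_time).

    state is 1 for heater on, 0 for off; temperatures in deg C, times in s.
    """
    ok_to_switch = (t - time_of_switch) > delay
    if ok_to_switch and state == 0 and T < T_low:
        return 1, t   # off long enough and too cold: turn the heater on
    if ok_to_switch and state == 1 and T > T_high:
        return 0, t   # on long enough and too warm: turn the heater off
    return state, time_of_switch   # otherwise, hold the current state

# The heater stays on above the setpoint while the delay has not elapsed...
print(step_thermostat(t=100.0, T=22.0, state=1, time_of_switch=0.0))
# ...but switches off once the delay has passed
print(step_thermostat(t=400.0, T=22.0, state=1, time_of_switch=0.0))
```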

*The Discrete States interface defines two discrete variables that determine the heater state.*

The *Events* interface contains several features, starting with the *Discrete States* interface shown in the screenshot above. Two discrete variables are defined: `HeaterState` and `TimeOfSwitch`. The `HeaterState` variable is initially one, meaning that at the start of the simulation, the heater is turned on. This variable is used within the *Heat Transfer in Solids* interface as a multiplier on the applied heat load. To turn the heater off, the `HeaterState` variable is set to zero instead. The `TimeOfSwitch` variable stores the time of the last switching event. For this problem, the initial value is zero, which means that at `t=0`, the heater state was switched.

These two *Discrete State* variables are only changed when an event is triggered, and for this to happen, we need to check the elapsed time since the last switching event, and we need to check the temperature of the sensor against the upper and lower setpoints. This is done with the *Indicator States* interface, as shown in the screenshot below.

*The Indicator States interface defines three different indicators that will trigger an event.*

We can see in the above screenshot that three different indicators are tracked during the simulation. First, the `TurnOn` indicator evaluates the expression `T_bot-T_s`, which will go from negative to positive when we want to turn the heater on, where `T_bot` is a *Global Parameter*. The variable `T_s` is the sensor temperature and is defined using a *Point Integration Coupling* and a *Variable Definition*, as described in our earlier blog post on thermostats. Similarly, the `TurnOff` indicator evaluates `T_s-T_top` and goes from negative to positive when we want to turn the heater off.

The `OkToSwitch` indicator evaluates `t-TimeOfSwitch-Delay` and goes from negative to positive after the heater has been in either the on or off state for longer than the specified delay time, where `Delay` is also a *Global Parameter*. These discrete and indicator states are used to trigger two *Implicit Events*, as shown in the following screenshots.

*The first Implicit Event controls when the heater is turned on.*

In the above screenshot, the first *Implicit Event* is used to turn on the heater. The *Event Condition* is `(OkToSwitch>0)&&(HeaterState==0)&&(TurnOn>0)`. This logical condition can be read as: If the time since the last switching event is greater than the specified delay time, and if the heater is currently off, and if the sensor temperature is below the turn-on setpoint, trigger an implicit switching event. When this event is triggered, the solver is restarted and reinitializes `HeaterState` to one, which turns on the heater. Additionally, `TimeOfSwitch` is reinitialized to `t`, the current time. These two variables will remain unchanged until another implicit event is triggered.

*The second Implicit Event controls when the heater is turned off.*

The second *Implicit Event*, shown above, is almost identical to the first, but instead triggers a turn-off event. When the condition `(OkToSwitch>0)&&(HeaterState==1)&&(TurnOff>0)` is satisfied, this event reinitializes `HeaterState` to zero and `TimeOfSwitch` to the current time.

You may ask yourself why, in these two implicit events, we check if the heater is off before triggering it to turn on (and similarly check if the heater is on before we trigger it to turn off). This additional check is done to prevent events from being triggered and reinitializing `TimeOfSwitch` too often. The system may naturally (due to some change or fluctuation in the other boundary conditions) get into a state where the sensor temperature goes above the upper setpoint without any heating. Or, the sensor temperature could stay below the lower setpoint even though the heater is on. We do not want such cases to result in any events being triggered, hence these additional checks.

With these features as just described, we have implemented our thermostat with a delay between switching. Before solving, make certain that the initial values of the *Discrete States* variables, `HeaterState` and `TimeOfSwitch`, are appropriate for the initial heater state and the last switching event. Also keep in mind that you may need to adjust the *Event Tolerance*, as demonstrated in our previous blog post on thermostats.

Let’s now look at some results.

*The thermostat with delay can switch only every five minutes. The horizontal dashed lines represent the thermostat setpoints.*

In the above plot, we can observe the thermostat delay behavior. The sensor temperature is initially below both setpoints, and the heater is switched on at time zero. It takes about seven minutes for the sensor temperature to reach the upper setpoint, at which time the heater turns off. The system then starts to cool down quite rapidly, but the thermostat cannot switch back on until five minutes have elapsed, so the sensor temperature dips below the lower setpoint before the heater switches back on. After the heater is switched on, the system temperature rises above the upper setpoint, again due to the five-minute switching delay. We can observe this over- and undershoot behavior repeating in time.

The technique we’ve outlined here enables you to implement a thermostat model with a delay between switching events and is representative of many real control schemes. The key difference between a thermostat with a delay and one without is the introduction of an additional *Discrete State* variable that keeps track of the thermostat switching time, along with an indicator that compares the elapsed time to the specified delay.

Of course, this approach is useful far beyond just temperature control. Within COMSOL Multiphysics, we can use the *Events* interface whenever we solve a transient problem.

- If you have additional questions about this technique, or would like to use COMSOL Multiphysics for your modeling needs, please don’t hesitate to contact us
- Read these related blog posts for more information on the
*Events*interface and thermal modeling of a house:

Creating and solving a simulation requires access to a range of functionality. In COMSOL Multiphysics, the COMSOL Desktop® environment serves as the UI that encompasses all of these elements, with various menus, tools, and windows included in the mix.

*The COMSOL Desktop® environment.*

Not only does the COMSOL Desktop house all of these important components in one place, but it also organizes them in a way that complements the model creation process. The ribbon across the top of the desktop, as well as the nodes in the Model Builder window, provide you with a clear outline of all the steps that should be taken to build your model from start to finish. These steps include choosing your spatial dimension, creating definitions, building your geometry, assigning materials, defining the physics, creating the mesh, running the simulation, and postprocessing your results.

COMSOL Multiphysics, as we’ve noted, is a modeling environment that is designed to be both intuitive and user-friendly. In light of this, we have created a short video that provides a guided tour of the COMSOL Desktop to help you get started. Along with showing you how to navigate through the desktop, we also review all of the main windows and toolbars in the software, explaining how they relate to one another and update depending on the operation that is performed.

Within the COMSOL Desktop, you will notice the *Graphics* window. Serving as an integral part of the UI, this window allows you to visualize your simulation’s geometry, mesh, and results.

The *Graphics* window itself houses the visual results of your model. The *Graphics* window toolbar, meanwhile, offers functionality that lets you easily customize your view. You can do so via a row of buttons, each of which includes a capability represented by a distinct icon. Together, they can be used to achieve different types of graphics, views, images, and perspectives.

*Graphics window displaying the geometry for the busbar tutorial model.*

In a short tutorial video on the *Graphics* window, we show you all of the mouse movements that you can use to move around and rotate your geometry, as well as zoom in and zoom out. Further, we review the *Graphics* window toolbar buttons that enable you to perform each of the aforementioned actions, among others, including changing the view of your model as well as hiding, showing, and selecting geometric entities. This video also highlights the dynamic nature of the toolbar and how it updates automatically depending on the spatial dimension of your model component, as well as the node you currently have open.

One of the predominant, and most important, ways to interact with the visuals in the *Graphics* window is by selecting geometric entities. This takes place in virtually every step of the simulation workflow. In COMSOL Multiphysics, you will find a plethora of tools that can help you select geometric entities, regardless of the simplicity or complexity of your model’s geometry.

*Screenshot of the video showcasing the mouse scroll wheel functionality.*

Being aware of each of the selection tools available in the software can be an asset while modeling, as each caters to specific modeling cases. The following tutorial video discusses each option for selecting geometric entities, noting the advantages of the different methods. Some examples include using the hover-and-click method in the *Graphics* window for simple geometries or individual geometric entities; the mouse scroll wheel button to reach interior geometric entities; and the *Selection List* window for complex geometries.

Helping users like you quickly get models up and running in COMSOL Multiphysics is one of our primary goals. We hope that the videos shown above make you more aware of and knowledgeable about the features and functionality available in the software, all of which are designed with the user in mind. By making the simulation environment in which you run your studies easy to use, we aim to help you focus on what’s most important: the simulation study at hand.

- To learn about additional features and functionality available in COMSOL Multiphysics, as well as how to use them, head over to the Video Gallery today and browse the Core Functionality topics.

In the mathematical treatment of partial differential equations, you will encounter boundary conditions of the *Dirichlet*, *Neumann*, and *Robin* types. With a Dirichlet condition, you prescribe the variable for which you are solving. A Neumann condition, meanwhile, is used to prescribe a flux, that is, a gradient of the dependent variable. A Robin condition is a mixture of the two previous boundary condition types, where a relation between the variable and its gradient is prescribed.

The following table features some examples from various physics fields that show the corresponding physical interpretation.

Physics | Dirichlet | Neumann | Robin |
---|---|---|---|
Solid Mechanics | Displacement | Traction (stress) | Spring |
Heat Transfer | Temperature | Heat flux | Convection |
Pressure Acoustics | Acoustic pressure | Normal acceleration | Impedance |
Electric Currents | Fixed potential | Fixed current | Impedance |

Within the context of the finite element method, these types of boundary conditions will have different influences on the structure of the problem that is being solved.

The Neumann conditions are “loads” and appear in the right-hand side of the system of equations. In COMSOL Multiphysics, you can see them as weak contributions in the *Equation View*. As the Neumann conditions are purely additive contributions to the right-hand side, they can contain any function of variables: time, coordinates, or parameter values.
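To make these structural differences concrete, here is a minimal sketch of a hypothetical 1D Poisson problem (not a model from this post) showing how a Dirichlet condition removes equations while a Neumann condition only adds to the right-hand side:

```python
import numpy as np

def assemble_1d(n, L=1.0, k=1.0, f=0.0):
    """Assemble stiffness K and load F for -k*u'' = f on [0, L], n linear elements."""
    h = L / n
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):
        K[e:e + 2, e:e + 2] += (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        F[e:e + 2] += f * h / 2
    return K, F

n = 10
K, F = assemble_1d(n)

# Neumann condition at x = 1: a flux g is a pure load and only adds to the RHS.
g = 2.0
F[-1] += g

# A Robin condition would instead add to BOTH the matrix and the RHS, e.g.:
#   K[-1, -1] += alpha;  F[-1] += alpha * u_ext   (matrix structure unchanged)

# Dirichlet condition at x = 0: prescribe u(0) = 1 and eliminate that equation,
# which changes the structure of the system.
u0 = 1.0
u = np.empty(n + 1)
u[0] = u0
u[1:] = np.linalg.solve(K[1:, 1:], F[1:] - K[1:, 0] * u0)

# With f = 0 and k = 1, the exact solution is u = u0 + g*x.
print(u[-1])
```

Linear elements reproduce the linear exact solution here, so the computed end value matches u0 + g exactly.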

Let’s consider a heat transfer problem where a circular heat source with a radius R is traveling in the *x* direction with a velocity v. Its intensity has a parabolic distribution with a peak value q_0. A mathematical description of this load could be

q(x,y,t)=q_0\left(1-\left(\frac{r}{R}\right)^2\right), \quad r=\sqrt{(x-vt)^2+y^2}, \quad r < R

For a traveling load, it is obviously not possible to have domain boundaries, or even a mesh, that fits the load distribution at all times.

The load distribution itself can be entered in a straightforward manner. Since the radial coordinate r will be used in two places, it is a good idea to define it as a variable. The entire input for the moving heat source is shown below.

*Parameters describing the moving heat source.*

*The variable describing the local radial coordinate from the current center of the traveling heat source.*

*Input of the heat flux.*
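As a cross-check, the same load expression is easy to evaluate outside the software. The parameter values below are purely illustrative, since the post does not list specific numbers:

```python
import numpy as np

# Illustrative values; the post does not give specific parameter numbers.
q0 = 1e5   # peak heat flux, W/m^2
R = 0.01   # spot radius, m
v = 0.05   # travel speed, m/s

def q(x, y, t):
    """Parabolic flux distribution of a circular spot traveling in x."""
    r = np.sqrt((x - v * t) ** 2 + y ** 2)
    return np.where(r < R, q0 * (1.0 - (r / R) ** 2), 0.0)

# At t = 1 s the spot center sits at x = v*t = 0.05 m, where the flux peaks;
# outside the radius R the flux is identically zero.
print(float(q(0.05, 0.0, 1.0)), float(q(0.2, 0.0, 1.0)))
```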

The results from a time-dependent simulation using such data are depicted in the following animation. Symmetry is assumed in the *yz*-plane, so the load is actually applied on a moving semicircle.

*Animation of the temperature distribution as the heat source travels along the bar.*

Where a Dirichlet condition is given, the dependent variable is prescribed, so there is no need to solve for it. Equations for such degrees of freedom can thus be eliminated from the problem. Dirichlet conditions therefore change the structure of the stiffness matrix. When looking in the *Equation View* of COMSOL Multiphysics, these conditions will appear as constraints.

Assume that you want the traveling spot to prescribe the temperature as exactly 450 K. This may be a bit artificial, but it displays an important difference between the Neumann condition and the Dirichlet condition. If you were to add a *Temperature* node and enter a similar expression (`if(r < R,450[K],0)`), it would mean setting the temperature to absolute zero on the part of the boundary that is not covered by the hot spot.

The intention is, however, to switch off the Dirichlet condition outside of the hot spot. There’s a small trick for doing so. If you instead enter `if(r < R,450[K],ht.Tvar)` as the prescribed value, you will get the intended behavior (shown in the following animation).

*Settings for a conditional Dirichlet condition.*

*Animation of the temperature distribution as the prescribed temperature spot travels along the bar.*

In order to understand how this works, enable the *Equation View*, and look at the implementation of the Dirichlet condition (in this case, a prescribed temperature):

*Enabling the* Equation View*.*

*The* Equation View *for the* Temperature *node.*

The constraint is formulated as `ht.T0-ht.Tvar`, which implicitly means `ht.T0-ht.Tvar = 0`. The first term is the prescribed temperature, which you enter as input. The second term is just the temperature degree of freedom cast into a variable. This constrains the temperature to be equal to the given value, unless the given value happens to be the string `ht.Tvar`. In that case, the symbolic algebra during assembly will reduce the expression to `ht.Tvar-ht.Tvar`, and further to zero. And with the constraint expression being `0`, there is no constraint.

In COMSOL Multiphysics, there are actually two possible implementations of a Dirichlet condition. The default case is the *pointwise constraint*, as referenced above, but you can also use a *weak constraint*. In the weak constraint, equations are added rather than removed. The heat fluxes needed to enforce the prescribed values of the temperature are then added as extra degrees of freedom (*Lagrange multipliers*).

You can use essentially the same trick to make a weak constraint conditional, with just a small twist. Using weak constraints requires that you first enable the *Advanced Physics Options*.

*Enabling the* Advanced Physics Options*.*

When weak constraints have been selected in a node within a physics interface, there will be extra degrees of freedom for the Lagrange multiplier. In our case, the degree of freedom has the name `T_lm`.

If the same expression for the temperature as that shown above is applied, the extra degree of freedom will not get any stiffness matrix contribution on the part of the boundary where the Dirichlet condition is switched off. The stiffness matrix will thus become singular. To avoid this situation, change `if(r < R,450[K],ht.Tvar)` to `if(r < R,450[K],ht.Tvar-T_lm*1e-2)`. The multiplier used for `T_lm` differs between models and physics fields, and it may require some tuning.

The reason this works, as a textbook might note, is “left as an exercise to the reader”.

*Settings for a conditional Dirichlet condition when using weak constraints.*

The Robin conditions generally contribute to both the stiffness matrix and the right-hand side. The structure of the stiffness matrix is not affected, but values are added to existing positions. The Robin conditions also appear as weak contributions in the *Equation View*. Turning these conditions into functions of time, space, and other variables is no different than doing so for Neumann conditions.

It is, however, interesting that by selecting appropriate values, you can actually morph Robin conditions into acting as approximate Dirichlet or Neumann conditions. This is especially important for cases where you want to switch between the two boundary condition types during a simulation.

To create a Dirichlet condition, you assign a high value of the “stiffness”, for instance, a spring constant or heat transfer coefficient. In mathematical terms, this is actually a *penalty* implementation of the Dirichlet condition. The higher the stiffness, the greater the accuracy of the prescribed value for the degree of freedom. But there is a caveat: A very high stiffness will harm the numerical conditioning of the stiffness matrix. For a heat transfer problem, a starting point for choosing a “high” heat transfer coefficient \alpha could be

\alpha=1000 \frac{\lambda}{h}

where \lambda is the thermal conductivity and h is a characteristic element size.

The same idea can be applied to other physics fields by replacing \lambda with the appropriate material property (e.g., Young’s modulus in solid mechanics). The factor 1000 is just a suggestion and can be replaced by 10^{4} or 10^{5}.
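To see why a large but finite "stiffness" approximately enforces the prescribed value, and why more stiffness means more accuracy, consider this scalar toy model (all numbers are illustrative, not from the post):

```python
# Scalar toy model: conductance k with load F, plus a penalty term
# alpha*(u0 - u) that tries to enforce u = u0. Numbers are illustrative.
k, F, u0 = 1.0, 5.0, 450.0

for alpha in (1e1 * k, 1e3 * k, 1e5 * k):
    u = (F + alpha * u0) / (k + alpha)   # solve (k + alpha)*u = F + alpha*u0
    print(alpha, u0 - u)                 # the error shrinks roughly as 1/alpha
```

The error in the prescribed value is (k*u0 - F)/(k + alpha), so each factor of 10 in the penalty stiffness buys roughly one more digit of accuracy, at the cost of worse matrix conditioning in a real FEM system.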

If you were to use convection to model the moving 450 K spot from the previous example, you could utilize the settings shown below. The built-in variable `h` for the element size is applied in the expression.

*Using a convection condition to prescribe the temperature.*

In the good old days, when I first began using finite element analysis, it was sometimes not possible to prescribe nonzero displacements in finite element programs for structural mechanics. The limitation was imposed by the added complexity of the programming. If this was the case, the best option was to use the penalty method by adding a predeformed stiff spring. You wouldn’t want it to be too stiff though; in those days, single precision arithmetic was still in use!

Let’s turn our attention toward approximating a Neumann condition. We want a heat flux that is independent of the surface temperature. In the case of heat transfer, the Robin condition states that the inward heat flux q is

q=\alpha(T_{\textrm{ext}}-T)

where \alpha is the heat transfer coefficient, T is the temperature at the boundary, and T_{\textrm{ext}} is the external temperature.

So if T_{\textrm{ext}} is much larger than the expected temperature on the surface, then q \approx \alpha T_{\textrm{ext}}. The strategy is then to select an arbitrary, very large T_{\textrm{ext}} and compute a suitable heat transfer coefficient, as highlighted below.

*Using a convection condition to prescribe the heat flux.*
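A small sketch of this strategy, with illustrative numbers: pick a T_ext far above any physical temperature, then compute the heat transfer coefficient so that \alpha T_{\textrm{ext}} equals the desired flux.

```python
# Illustrative numbers: choose T_ext much larger than any expected surface
# temperature and back out alpha so that alpha*T_ext matches the target flux.
q_target = 1e4      # desired heat flux, W/m^2
T_ext = 1e8         # arbitrary, very large external temperature, K
alpha = q_target / T_ext

for T in (293.0, 450.0, 1000.0):
    q = alpha * (T_ext - T)
    print(T, q)     # the flux barely depends on the surface temperature
```

Over this whole surface temperature range, the delivered flux deviates from the target by only about one part in 10^5.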

Designers actually use this principle to introduce a fixed force into real-life mechanical components: a prestressed long weak spring. If the predeformation of the spring is much larger than the displacement of the parts to which the spring is connected, the force in the spring will be almost constant.

When a boundary condition is limited by a Boolean expression like `if(r < R,...`, then it is more than likely that the border of the region to which it is applied will not follow the edges of the mesh elements. This will introduce discretization errors.

For a Neumann or Robin condition, a numerical integration is performed over each finite element. The value of the function is evaluated at a number of discrete Gauss points in the element. If the size of the mesh elements is large in comparison to the geometrical size of the load, then the exact number of Gauss points covered by the load can significantly affect the total load. As such, there should be several elements on the patch covered by the load at any instant.

*A small change in the location of the load may alter the number of contributing integration points. (In reality, the number of integration points is larger.)*
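The effect can be reproduced with a small quadrature experiment (a hypothetical 1D setup, not the model from the post): as a load patch moves over a coarse mesh, Gauss points enter and leave it, and the integrated total load jumps.

```python
import numpy as np

# Hypothetical 1D setup: a unit load patch of half-width R moving over a mesh
# with element size h, integrated with a 2-point Gauss rule per element.
h, R = 1.0, 0.3                               # element size comparable to load
nodes = np.arange(0.0, 10.0 + h, h)
gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)     # Gauss points on [-1, 1]

def total_load(center):
    """Quadrature approximation of the integral of a unit load of width 2R."""
    total = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        x = 0.5 * (a + b) + 0.5 * (b - a) * gp   # mapped Gauss points
        w = 0.5 * (b - a)                         # mapped weight per point
        total += np.sum(w * np.where(np.abs(x - center) < R, 1.0, 0.0))
    return total

# The exact integral is 2R = 0.6 for every position, but the quadrature total
# jumps as Gauss points enter and leave the load patch:
print([total_load(c) for c in (4.0, 4.2, 4.5)])
```

With several elements inside the load patch, the relative size of these jumps shrinks, which is why the text recommends resolving the patch with multiple elements.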

Dirichlet conditions, at least in the pointwise sense, are instead applied to the mesh nodes. In the figure below, the temperature distribution and heat flux are shown for a certain time when simulating the moving circular spot with a prescribed temperature of 450 K. In front of the hot spot, a darker shade at 260 K is visible. Since the initial and ambient temperatures in the simulation are 293 K, this is not expected. It is a numerical artifact related to the fact that not all nodes on each element have a Dirichlet condition. At a discontinuity in Dirichlet conditions, there will be singularities. This is a topic of discussion in a previous blog post. Refining the mesh will reduce such an effect.

The green arrows in the following figure represent the nodes at which an influx of heat is created as a reaction to prescribing the temperature. With the mesh density in the model, the approximation of a semicircle will be rather rough.

*Temperature distribution and heat flux around the semicircular prescribed temperature.*

There are many ways in which the solution can enter your boundary conditions. This will generally introduce nonlinearities, which are automatically detected by COMSOL Multiphysics.

As an example, let’s look at a beam featuring a support that is placed slightly below it, inhibiting further movement after a certain deflection. This can be implemented with a conditional Dirichlet condition via a *Prescribed Displacement/Rotation* node in the *Beam* interface.

*Beam with a deflection, controlling support and distributed load.*

*Settings prescribing that the beam should stop at a deflection of 2 cm.*

The analysis shows the expected behavior. At lower loads, the deflection shape is symmetric, whereas at higher load levels, the point on the beam where the extra support is located will stop moving. At the final load level, the beam will even undergo a change of sign in the curvature. This is visible in the deformation plot, but it is shown more clearly in a bending moment graph.

*The beam displacement at the support point stops at 2 cm.*

*Bending moment along the beam for various load levels.*

The approach highlighted here is rather crude and the iterative solution may not have good convergence properties. A more stable implementation is to use a highly nonlinear spring at the support point, so that the reaction force is a continuously differentiable function of the displacement. This is actually similar to how penalty contact is implemented in the *Solid Mechanics* interface.

COMSOL Multiphysics gives you access to very powerful mechanisms for prescribing nonstandard boundary conditions. Today, we have provided you with a few examples of what you can do with these conditions.

For those who are interested in analyzing a model with a traveling load, take a look at the Traveling Load tutorial model, available in our Application Gallery.

If you have additional questions on how to prescribe nonstandard boundary conditions within your own modeling processes, please contact us today.


The ability to implement the Fourier transformation in a simulation is a useful functionality for a variety of applications. Besides Fourier optics, we use Fourier transformation in Fraunhofer diffraction theory, signal processing for frequency pattern extraction, and image processing for noise reduction and filtering.

In this example, we calculate an image of the light from a traffic light passing through a mesh curtain, shown below. To simplify the model, we assume the electric field of the lights is a plane wave of uniform intensity; for instance, 1 V/m. Let the mesh geometry be measured by the local coordinates x and y in a plane perpendicular to the direction of the light propagation, and let the image pattern be measured by the local coordinates u and v near the eye in a plane parallel to the mesh plane.

*A Fraunhofer diffraction pattern as a Fourier transform of a square aperture in a mesh curtain.*

According to the Fraunhofer diffraction theory, then, we can calculate the image above simply by Fourier transforming the light transmission function, which is a periodic rectangular function if the mesh is square. Let’s consider a simplified case of a single mesh whose transmission function is a single rectangular function. We will discuss the case of a periodic transmission function later on.

We are interested in the light hitting one square of the mesh and getting diffracted by the sharp edges of the fabric while transmitting in the center of the mesh. In this case, the light transmission function is described by a 2D rectangular function. By implementing a Fourier transformation into a COMSOL Multiphysics simulation, we can more fully understand this process.

In order to learn how to implement Fourier transformation, let’s first discuss the concept of *data sets*, or multidimensional matrices that store numbers. There are two possible types of data sets in COMSOL Multiphysics: *Solution* and *Grid*. For any computation, the COMSOL software creates a data set, which is placed under the *Results* > *Data Sets* node.

The Solution data set consists of an unstructured grid and is used to store solution data. To make use of this data set, we specify the data to which each column and row corresponds. If we specify *Solution 1 (sol1)*, the matrix dimension corresponds to that of the model in Study 1. If it is a time-dependent problem, for example, the data set has a three-dimensional array, which may be written as T(i,j,k) with i=1,\cdots, N_t, \ j=1, \cdots, N_n, \ k = 1, \cdots, N_s . Here, N_t is the number of stored time steps, N_n is the number of nodes, and N_s is the number of the space dimension. Similarly, the data set for a time-dependent parametric study consists of a 4D array. Again, note that the spatial data (other than the time and parameter data) links with the nodal position on the mesh, not necessarily on the regular grid.

On the other hand, the Grid data set is equipped with a regular grid and is provided for functions and all other general purpose uses. All numbers stored in the Grid data set link to the grid defined in the Settings window. This data set is automatically created when a function is defined in the *Definitions* node and by clicking on *Create Plot*. This creates a 1D Grid data set in the *Data Sets* node.

You also need to specify the range and the resolution of your independent variables. By default, the resolution for a 1D Grid data set is set to 1000. If the independent variable (e.g., *x*) ranges from 0 to 1, the Grid data set prepares data series of 0, 0.001, 0.002, …, 0.999, and 1. The default resolution is 100 for 2D and 30 for 3D. For Fourier transformation, we use the Grid data set. We can also use this data set as an independent tool for our calculation, as it does not point to a solution.

To begin our simulation, let’s define the built-in 1D rectangular function, as shown in the image below.

*Defining the built-in 1D rectangular function.*

Then, we click on the *Create Plot* button in the Settings window to create a separate 1D plot group in the *Results* node.

*A plot of the built-in 1D rectangular function.*

Let’s look at the Settings window of the plot. We expand the *1D Plot Group 1* node and click on *Line Graph 1* to see the data set pointing to *Grid 1D*. In the *Grid 1D* node settings, we see that the data set is associated with a function `rect1`.

*Settings for the built-in 1D rectangular function.*

*Settings for the 1D Grid data set.*

We can create a 2D rectangular function by defining an analytic function in the *Definitions* node as `rect1(x)*rect1(y)`. For learning purposes, we will create and define a 2D Grid data set and plot it manually instead of automatically. The results are shown in the following series of images.

In the Grid 2D settings, we choose *All* for *Function* because the 2D rectangular function uses another function, `rect1`. We also assign x and y as independent variables, which we previously defined as the curtain’s local coordinates, and set the resolution to 64 for quicker testing. To plot our results, we choose the 2D grid data, renamed to Grid 2D (source space), for the data set in the Plot Group settings window.

*Defining the function in the Grid 2D settings.*

*Creating and defining a 2D data set.*

*Setting the 2D plot group for the 2D rectangular function.*

*A 2D plot of the 2D rectangular function.*

Now, let’s implement a Fourier transform of this function by calculating:

g(u,v) = \iint_{-\infty}^\infty {\rm rect}(x,y) \exp (-2 \pi i(xu+yv) ) dxdy.

Here, u and v represent the destination space (Fourier/frequency space) independent variables, as we previously discussed.
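Before setting this up in COMSOL, it can help to verify the 1D analogue with plain numerical quadrature; since the 2D rect is separable, its transform is just a product of two such 1D integrals. This standalone sketch uses a midpoint rule:

```python
import numpy as np

def rect(x, w=1.0):
    """Unit-height rectangular function of width w."""
    return np.where(np.abs(x) < w / 2.0, 1.0, 0.0)

def ft_rect(u, n=2000, half=1.0):
    """Midpoint-rule quadrature of the integral rect(x)*exp(-2*pi*i*x*u) dx."""
    x, dx = np.linspace(-half, half, n, endpoint=False, retstep=True)
    x = x + dx / 2.0
    return np.sum(rect(x) * np.exp(-2j * np.pi * x * u)) * dx

# Analytically the result is sinc(u) = sin(pi*u)/(pi*u); check two values:
print(abs(ft_rect(0.0)), abs(ft_rect(1.5)))
```

The quadrature reproduces the analytic sinc to within the integration error, which mirrors what the `integrate` operator does below, just with Gaussian quadrature instead of a midpoint rule.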

Since we already created a 2D data set for x and y, now we can create a Grid 2D data set, renamed to Grid 2D (Destination space), for u and v (shown below). We choose *Function* from *Source* and *All* from *Function* because the `rect` function calls the `rect1` function as well. We can change the resolution to 64 here, as we did for the 2D data set, for quicker calculation.

*Settings for the Grid 2D data set for the Fourier space.*

Now, we are at the stage in our simulation where we can type in the equations by using the `integrate` operator.

*Entering the equation for the Fourier transform of the 2D rectangular function.*

We finally obtain the resulting Fourier transform, as shown in the figure below. Compare this (more accurately, the square of this) to each twinkling colored light in the photograph of the mesh curtain. Strictly speaking, this image is not yet what is actually seen: to calculate the image on its final destination, the retina of the eye, we would need to apply the Fourier transformation one more time.

*The Fourier transform of the 2D rectangular function.*

In COMSOL Multiphysics, you can use the data set feature and `integrate` operator as a convenient standalone calculation tool and as a preprocessing and postprocessing tool before or after your main computation. Note that the Fourier transformation discussed here is *not* the discrete Fourier transform (DFT), commonly computed via the FFT. We still use discrete math, but we carry out the integration numerically by using Gaussian quadrature, the same method used for the finite element integrals in COMSOL Multiphysics, whereas the discrete Fourier transform operates on number sequences. As a result, we don’t need to be concerned with the aliasing problem, Fourier space resolution issue, or Fourier space shift issue.

There is more to discuss on this subject, but let’s comment on the two cases that we simplified earlier. We calculated for a single mesh. In practice, the mesh curtain is made of a finite number of periodic square openings. It sounds like we have to redo our calculation for the periodic case, but fortunately, the end result differs only by an envelope function of the periodicity. For details, Hecht’s *Optics* outlines this topic very well.

The second simplification was that we assumed a sharp rectangular function for the mesh transmission function. In COMSOL Multiphysics, all functions other than user-defined functions are smoothed to some extent for numerical stability and accuracy reasons. You may have noticed that our rectangular function had small slopes. This is arguably a complication rather than a simplification: the simplest case is a rectangular function with no slopes, and we used a smoothed rectangular function instead of a sharp one.

The Fourier transforms of the two extreme cases are known; i.e., a rectangular function with no slopes is transformed to a sinc function (`sin(x)/x`) and a Gaussian function to another Gaussian function. A sinc function has ripples around the center representing a diffraction effect, while a Gaussian function decays without any ripples. Our smoothed rectangular function is somewhere between these two extremes, so its Fourier transform is also somewhere between a sinc function and a Gaussian function. As we previously mentioned, the curtain fabric can’t have sharp edges, so our results may be more accurate for this example case anyway.

- Check out these blog posts about simulating holographic data storage systems:
- Find more information in these introductory books on optics:
  - J.W. Goodman, *Introduction to Fourier Optics*, W. H. Freeman, 2004.
  - E. Hecht, *Optics*, Pearson Education Limited, 2014.

When you hear the name Alan Turing, morphogenesis may not be the first thought that comes to mind. Turing, a famous British mathematician, is more widely recognized for his contributions to the fields of cryptology and computer science. During the Second World War, Turing helped break a complicated German cipher called the Enigma code — an achievement that has recently become more well-known due to it being featured in a movie.

Turing is also credited with creating an artificial intelligence test called the Turing test as well as the first formal computer algorithm concept. Such achievements have led many to consider him to be the father of artificial intelligence and theoretical computer science.

*Alan Turing’s theory of morphogenesis offers a prediction behind the development of stripes on a tiger. Image by J. Patrick Fischer — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

It wasn’t until later in his life that Turing entered into the realm of mathematical biology. In 1952, he published a paper on his theory of morphogenesis titled “The Chemical Basis of Morphogenesis”. Here, he used a mathematical modeling system to explain how nonuniformity, in the form of repeating patterns, can spring from a natural, homogeneous state. In simpler terms, Turing’s model suggests that such patterns result from the diffusion of two morphogens: an activator and an inhibitor.

To better understand his theory, we can look at the example of how stripes develop on a tiger. Turing’s theory predicts that morphogens produce patterns of chemically different cells by activating some cells and inhibiting others. In the case of a tiger, the activator creates the dark stripes and the inhibitor prevents the dark color in the area surrounding the stripes. This reaction diffuses across a tiger’s cells, generating a repetitive pattern of stripes.
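The activator-inhibitor mechanism can be sketched with a minimal 1D reaction-diffusion simulation. This uses a Gray-Scott system with standard textbook parameters as a stand-in; it is not Turing's original equations or the models discussed below:

```python
import numpy as np

# Gray-Scott activator-inhibitor system in 1D; the parameter values are
# standard pattern-forming choices, purely illustrative.
n, steps = 200, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060

u = np.ones(n)                       # substrate (inhibitor-like role)
v = np.zeros(n)                      # autocatalytic, activator-like species
u[90:110], v[90:110] = 0.50, 0.25    # a local perturbation seeds the pattern

def lap(a):
    """Periodic 1D Laplacian with unit grid spacing."""
    return np.roll(a, 1) + np.roll(a, -1) - 2.0 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1.0 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# The initially near-uniform state develops spatial structure.
print(v.min(), v.max())
```

The key Turing ingredient is visible in the coefficients: the two species diffuse at different rates (Du versus Dv) while one feeds the other's growth, so a homogeneous state can break into a repeating pattern.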

Modern research continues to further validate Turing’s theory. For example, in a study from the Centre for Genomic Regulation, researchers found evidence that fingers and toes are patterned by a Turing mechanism. Such conclusions were drawn from computational predictions, with physical experiments confirming these findings. A team at King’s College London also produced experimental evidence validating Turing’s speculative theory. They identified specific morphogens used in Turing’s model and observed the effects predicted by this theory. Further experiments from researchers at Brandeis University and the University of Pittsburgh illustrated similar patterns arising from once identical structures.

The findings from these and other studies on Turing’s theory of morphogenesis can be applied to a range of applications, from biomedical uses to growing soft robots with specific patterns and shapes. In the next section, we’ll see how a team of researchers further investigated this theory with the help of COMSOL Multiphysics.

Modeling the complex biological processes in morphogenesis and organogenesis (the process by which tissues develop into complex organs) can be difficult for two reasons: the computational models must be parameterized, and several of the parameters cannot feasibly be measured directly.

To account for such challenges, a team of researchers from ETH Zurich and the University of Nice-Sophia Antipolis turned to COMSOL Multiphysics. Using the simulation software, they looked to evaluate different methods for parameter optimization and explore the possibility of using experimental data as a means to quantitatively study, parameterize, and discretize computational models.

The first step was to create a Turing-type model based on receptor-ligand signaling. This model, as the group previously proposed, can be used to model branching morphogenesis in both the lungs and kidneys.

*A simple Turing-type model. Images by D. Menshykau et al. and taken from their COMSOL Conference poster submission.*

Two parameter optimization methods were tested with this model: a gradient-based optimization solver (SNOPT) and a gradient-free optimization solver (Coordinate Search).

*Convergence of the two optimization solvers. Points depict the initial values of the optimization solver that lead to convergence, while crosses depict a failure to converge. Different colors represent the value of the objective function when the optimization ends. Images by D. Menshykau et al. and taken from their COMSOL Conference poster submission.*

The results from the study show that SNOPT and Coordinate Search share a drawback: For both of these methods, you can only recover the correct parameter values from a confined region of the parameter space. Therefore, the research team developed a three-step strategy to optimize the parameters of Turing-type models:

- Evenly sample the relevant parameter space
- Pick points that have minimal cost-function value
- Have the local optimization solver (SNOPT or Coordinate Search) use these points as a starting condition
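The three steps amount to a multi-start local optimization. Here is a hedged sketch using SciPy, with Himmelblau's function as a generic stand-in for the model-data mismatch (the authors' actual objective and solvers differ):

```python
import numpy as np
from scipy.optimize import minimize

# Toy multi-modal cost function standing in for the model/data mismatch.
def cost(p):
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2   # Himmelblau's function

# 1) Evenly sample the relevant parameter space.
grid = [(x, y) for x in np.linspace(-5, 5, 11) for y in np.linspace(-5, 5, 11)]

# 2) Pick the points with the smallest cost-function values.
starts = sorted(grid, key=cost)[:5]

# 3) Use them as starting points for a local gradient-based solver.
best = min((minimize(cost, s, method="BFGS") for s in starts),
           key=lambda r: r.fun)
print(best.x, best.fun)
```

Starting the local solver only from the best-sampled points avoids the drawback noted above, that a local solver converges only from a confined region of the parameter space.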

Next, the researchers investigated the parameterization of image-based models, a common method for reproducing experimental data in simulations. 2D time-lapse movies of embryonic kidneys in vitro were used for the image-based model of branching morphogenesis. Through this approach, the researchers tested the ability of their mechanism to accurately predict growth areas throughout the process.

*The process of image-based modeling. a) Time-lapse movie screenshot. b) Enlarged image of the kidney explant and calculated displacement field. c) Deviation for points in Turing space (black), in between (green), and outside the Turing space (red). d) The growth field and the distribution of U^2V (the ligand-receptor complex) on the epithelium-mesenchyme border. Images by D. Menshykau et al. and taken from their COMSOL Conference poster submission.*

Findings showed that the Turing-type model adequately reproduced the observed growth areas of embryonic kidneys. As such, it was concluded that image-based data can be used when studying the mechanisms in charge of organ development and for solving, parameterizing, and discretizing models.

Turing’s theory of morphogenesis is still evolving today, with researchers continuously finding new ways to model and utilize it. Simulation studies, such as those presented here, can help offer a better understanding of the mechanisms controlling morphogenesis while furthering the design of computational models for studying morphogenesis and organogenesis.

Along with the studies highlighted today, researchers at ETH Zurich and the University of Nice-Sophia Antipolis have used COMSOL Multiphysics to perform additional analyses pertaining to organogenesis. One study, for instance, helped address the fact that cell boundaries, and possibly subcellular compartmentalization, can greatly affect signaling networks during organogenesis. In a more recent study, researchers explored techniques for image-based modeling using the example of limb bud development. You can learn all about these studies and more in the section below.

- Read the full paper: “Simulating Organogenesis in COMSOL Multiphysics®: Parameter Optimization for PDE-based Models“
- Explore additional research on organogenesis:
- Browse other blog posts relating to simulation’s role in analyzing natural phenomena:

You can use the `residual` operator, new with COMSOL Multiphysics version 5.2, to evaluate and plot your model’s algebraic residual in order to troubleshoot convergence issues. This blog post demonstrates the use of the `residual` operator for visualizing and understanding the convergence properties of a turbulent flow simulation.

When solving a finite element model, it is important to know how accurate the results are. Depending on the mesh (the discretization) and the nature of the equations for which you solve, the algebraic residual is only one of several error sources. The following sources contribute to the computation error:

- The *truncation error* (also called the *Galerkin error* for the finite element method).
- The *quadrature error* made by using numerical methods to approximate the finite element integrals.
- The *geometrical approximation error* made by representing the actual geometry by a polynomial representation, which is a sort of integration error for elements adjacent to or on a curved boundary.
- The *algebraic error* obtained by terminating the solvers prematurely (or by using a sloppy tolerance). This is the error that you can access using the `residual` operator.

Normally, the algebraic error is (and should be) much smaller than the truncation error. However, if you run into convergence issues, the algebraic residual is not small and can reveal where the cause of the issue resides in your model.

In contrast to the related information that COMSOL Multiphysics already provides, such as convergence numbers, which are scaled global numbers, the `residual` operator provides an unscaled residual value for each variable in a model. It also shows a spatial distribution of the residual for that quantity. The residual is the latest-computed residual vector from the finite element discretization, but interpreted as a continuous spatial field quantity. The spatial distribution helps you pinpoint where in the modeling domain the residual is relatively large and therefore will hamper convergence of the solution. Possible problems in the model could be:

- Insufficient mesh resolution in that part of the modeling domain
- Sharp corners in the geometry
- Inappropriate or incompatible boundary conditions

Let’s see exactly how to plot and evaluate the algebraic residual in your model with the `residual` operator.

The `residual` operator is available for stationary and time-dependent simulations. To add the `residual` operator to the model, you need to activate it using a setting in the *Advanced* subnode of a stationary or time-dependent solver. By default, the *Store latest residual* setting is `Off`. To compute the residual, choose *While solving*; to compute it and also store it in the output for postprocessing, choose *While solving and in output*. Storing the residual in the output so that you can use it in plots and for general postprocessing requires additional memory resources; if you only want to plot the residual while solving, this is not necessary.

*Settings for specifying the storage of the last residual values.*

The `residual` Operator in COMSOL Multiphysics: A Fluid Dynamics Example

To demonstrate how the `residual` operator can provide insight into how numerical disturbances can enter and propagate in a finite element model, consider the Turbulent Flow Over a Backward Facing Step example model from the Application Gallery. This example uses the k-ɛ turbulence model. The model features a classic 2D geometry of a backward facing step with a corner that causes a recirculation zone in a turbulent fluid flow simulation.

This version of the model includes plotting the residual in the momentum equations (the velocity field) while solving. To do so, a Velocity plot group contains a *Surface* plot node with the expression `residual(spf2.U)`, where `spf2.U` is the velocity magnitude.

*Part of the Surface plot node’s Settings window, with the residual operator as the expression to plot.*

The updates are computed for each segregated iteration. In the *Segregated* solver node, pseudo time stepping is used for stabilization and acceleration, and the Velocity plot group is selected as the plot to show while solving. For such a plot, it is sufficient to set the *Store latest residual* setting to the *While solving* option. Also note that if you choose *Residual* as the termination criterion for the segregated solver, then you get the same residual values as those provided by the `residual` operator, which determine the convergence. However, for the default solution or residual criterion, the `residual` operator also provides important information.

*The Segregated solver settings with the plot of the residual, specified under Results While Solving.*

This example is a nice illustration of what happens when you solve transport problems with some sort of stationary approach — in this case, pseudo time stepping. The residual caused by the “backstep disturbance” needs to be transported out of the region. The residual does not drop until the “disturbances” have been transported out of the channel and the simulation can finish. The screenshots below show how the area where the residual is relatively large moves from the inlet area toward the outlet.

*The residual for the velocity field at the beginning and toward the end of the solution.*

The convergence plot confirms that the initially sluggish convergence changes to a quick convergence in just a few iterations once the large residual has disappeared. This behavior is consistent with the fact that perturbations and errors evolve according to the approximated equation — in this case, an approximation of time-dependent, convection-dominated flow equations. The errors therefore need time, which corresponds to iterations for pseudo time stepping, to be transported out of the domain.

*The convergence plot for the Turbulent Flow Over a Backward Facing Step tutorial model. Notice the fast convergence toward the end.*

By plotting and evaluating the algebraic residual, you can troubleshoot models that do not converge or converge slowly, helping you obtain your simulation results as quickly and efficiently as possible.

- Read this blog post to learn how to use viscosity ramping, another method to improve convergence for your CFD models
- Browse the COMSOL Blog for additional posts on how to make full use of the solving capabilities of COMSOL Multiphysics

Suppose you are working on a COMSOL Multiphysics model and performing thermal analyses of systems. You’re working with a team of people performing other types of analyses. Your colleagues aren’t quite as lucky as you and don’t get to use COMSOL Multiphysics, but they still need access to your modeling results. Fortunately, you can quickly convert your analysis data into a very specific text file format that can be read by the software tool that your colleagues are using.

*A simple COMSOL Multiphysics thermal model that has different element types.*

Now, let’s suppose that you’re performing steady-state thermal analyses of models containing all kinds of different elements, including tetrahedral, pyramidal, prismatic, and hexahedral (brick) types, as shown above. Let’s further suppose that the software tool we will be reading the data into requires a linear discretization, meaning that we need to write out a temperature at each of the nodes (vertices) defining the corners of the elements, and a linear interpolation of temperature will be used to compute the temperature fields within the elements. Thus, we need to write out all of the vertex locations, the temperature data at each of these vertices, and a description of which vertices define each element, and how.

There are, of course, many different ways in which this kind of information can be written. For the purposes of today’s blog post, we will assume a simple comma-delimited format, a generic sample of which is shown below.

```
N, 1, 0.0, 0.0, 0.0
...
N, 1000, 10.0, 10.0, 10.0
D, 1, 332.0
...
D, 1000, 343.0
TET, 1, 2, 4, 6, 3
...
TET, 100, 42, 43, 41, 45
PYR, 101, 47, 48, 41, 40, 44
...
PRISM, 201, 66, 67, 65, 72, 74, 73
...
HEX, 301, 81, 82, 83, 84, 91, 92, 93, 94
...
```

Let’s take a look at what these various lines mean. All lines begin with a text string denoting what kind of information is on that line, and the information is delimited by commas into various *fields*. On the first line, we have vertex (nodal) location information:

N, 1, 0.0, 0.0, 0.0

The first field after the `N` character is the node number, which is arbitrary, and the second, third, and fourth fields are the *x*-, *y*-, and *z*-locations of that node, so our first node is at the global origin. In the above sample, there are one thousand nodes in all.

Next, the computed temperature data at each node location is written on a separate line:

D, 1, 332.0

where the first field after the `D` character is the node number and the second field is the temperature at that node location.

The remaining lines give information about the elements and how they are defined by the nodes. Let’s start by looking at the first element definition:

TET, 1, 2, 4, 6, 3

This line defines a tetrahedral (`TET`) element. The first field is the element number, and the next four fields tell us which nodes define the tetrahedron. Here, nodes 2, 4, 6, and 3 are nodes 1 through 4 of the element. Such information is also known as the *element connectivity*. To understand this, let’s look at an illustration of our element.

*A tetrahedral element is defined by four nodes.*

For our tetrahedral element example, nodes 2, 4, and 6 are the first three nodes. If we use the right-hand rule and follow these three nodes around in order, we get a vector that points in the direction of the fourth node, node number 3. The element ordering for all of the three-dimensional element types is shown below.
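The right-hand-rule convention can be checked numerically: if the first three nodes are ordered so that the right-hand rule gives a vector pointing toward the fourth node, the signed volume of the tetrahedron is positive. A short Python sketch (the coordinates here are made up for illustration):

```python
import numpy as np

def signed_volume(p1, p2, p3, p4):
    """Signed volume of a tetrahedron; positive when the right-hand rule
    applied to p1 -> p2 -> p3 yields a vector pointing toward p4."""
    return np.dot(np.cross(p2 - p1, p3 - p1), p4 - p1) / 6.0

# A correctly ordered unit tetrahedron:
p1, p2, p3, p4 = (np.array(v, dtype=float) for v in
                  [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(signed_volume(p1, p2, p3, p4))   # positive for this ordering
# Swapping two of the first three nodes flips the orientation:
print(signed_volume(p1, p3, p2, p4))   # negative
```

A check like this is handy when debugging connectivity exports: a negative signed volume means the element's node ordering violates the convention.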

*Element number conventions for the four different types of 3D elements.*

Now that we understand the data format that we need to write out to our text file, let’s look at how this can be done with the Application Builder.

We begin our app development with an existing 3D model that already has a steady-state temperature solution computed, as shown earlier. To develop the app, switch to the Application Builder, where we will define the graphical user interface and write the data processing code behind our app. Our interface will be very simple, with just a *button* that calls a *method* and a *message log* that will display some information. The app will do only one thing: When the user clicks on the button, the data format described above is written to a text file and a summary of what was written will be shown in the message log.

*Our app has a button (1) that calls a method (2), which writes the mesh and solution data to a file and writes some statistics to the message log (3).*

The method contains all of the data processing code and is shown below with line numbers added.

```
1  StringBuffer FileB = new StringBuffer();
2  double[][] d_Vtx = model.mesh("mesh1").getVertex();
3  String[][] s_Vtx = toString(d_Vtx);
4  for (int m = 0; m < s_Vtx[0].length; m++) {
5    FileB.append("N, "+(m+1)+", "+s_Vtx[0][m]+", "+s_Vtx[1][m]+", "+s_Vtx[2][m]+"\n");
6  }
7  model.result().numerical().create("interp", "Interp").setInterpolationCoordinates(d_Vtx);
8  model.result().numerical("interp").set("expr", "T");
9  double[][][] AllData = model.result().numerical("interp").getData();
10 model.result().numerical().remove("interp");
11 for (int m = 0; m < AllData[0][0].length; m++) {
12   FileB.append("D, "+(m+1)+", "+AllData[0][0][m]+"\n");
13 }
14 int[][] Ei;
15 int numTets = model.mesh("mesh1").getNumElem("tet");
16 if (numTets > 0) {
17   Ei = model.mesh("mesh1").getElem("tet");
18   for (int m = 0; m < numTets; m++) {
19     FileB.append("TET, "+(m+1)+", "+(Ei[0][m]+1)+", "+(Ei[1][m]+1)+", "+(Ei[2][m]+1)+", "+(Ei[3][m]+1)+"\n");
20   }
21 }
22 int numPyrs = model.mesh("mesh1").getNumElem("pyr");
23 if (numPyrs > 0) {
24   Ei = model.mesh("mesh1").getElem("pyr");
25   for (int m = 0; m < numPyrs; m++) {
26     FileB.append("PYR, "+(m+1+numTets)+", "+(Ei[0][m]+1)+", "+(Ei[1][m]+1)+", "+(Ei[2][m]+1)+", "+(Ei[3][m]+1)+", "+(Ei[4][m]+1)+"\n");
27   }
28 }
29 int numPrisms = model.mesh("mesh1").getNumElem("prism");
30 if (numPrisms > 0) {
31   Ei = model.mesh("mesh1").getElem("prism");
32   for (int m = 0; m < numPrisms; m++) {
33     FileB.append("PRISM, "+(m+1+numTets+numPyrs)+", "+(Ei[0][m]+1)+", "+(Ei[1][m]+1)+", "+(Ei[2][m]+1)+", "+(Ei[3][m]+1)+", "+(Ei[4][m]+1)+", "+(Ei[5][m]+1)+"\n");
34   }
35 }
36 int numHexes = model.mesh("mesh1").getNumElem("hex");
37 if (numHexes > 0) {
38   Ei = model.mesh("mesh1").getElem("hex");
39   for (int m = 0; m < numHexes; m++) {
40     FileB.append("HEX, "+(m+1+numTets+numPyrs+numPrisms)+", "+(Ei[0][m]+1)+", "+(Ei[1][m]+1)+", "+(Ei[2][m]+1)+", "+(Ei[3][m]+1)+", "+(Ei[4][m]+1)+", "+(Ei[5][m]+1)+", "+(Ei[6][m]+1)+", "+(Ei[7][m]+1)+"\n");
41   }
42 }
43 writeFile("user:///output.txt", FileB.toString());
44 message("Data written to file output.txt in the user directory.");
45 message(s_Vtx[0].length+" Nodes\n"+numTets+" Tetrahedral Elements\n"+numPyrs+" Pyramid Elements\n"+numPrisms+" Prismatic Elements\n"+numHexes+" Hexahedral Elements\n");
```

Let’s go through this method line-by-line.

1. Creates a string buffer into which we will store the data that will get written to the file.
2. Extracts all of the mesh vertex locations from the model and puts the data into a 2D array of doubles.
3. Converts the mesh location numerical data into string data, since it will be written out to a text file.
4. Starts a for-loop that will iterate over all node locations.
5. Appends a line for each node to the string buffer, with the node index and *xyz*-locations.
6. Closes the for-loop over nodes.
7. Sets up an interpolation feature to extract the data at the previously extracted node point locations.
8. Sets the expression to evaluate at these node points. In this case, the variable “T” means that we are extracting temperature.
9. Using the interpolation feature, extracts all of the temperature data. This gets stored in a 3D array of doubles.
10. Removes the interpolation feature, since it is no longer needed.
11. Sets up a for-loop over all of the extracted data. Since we are only extracting one field (temperature) and are assuming only one solution set exists in our model, we only need to index over the last dimension of our array.
12. Appends a data line to the string buffer, with the node number and the value of temperature at that node.
13. Closes the for-loop over all of the output data.
14. Initializes an empty 2D array that stores the element index data.
15. Extracts the number of tetrahedral elements from the model.
16. Checks if there are any tetrahedral elements to write out.
17. Extracts the tetrahedral element connectivity data from the model.
18. Sets up a for-loop over all of the tetrahedral elements.
19. Appends a line for each tetrahedral element to the string buffer, with the element number and the node numbers.
20. Closes the loop over all tetrahedral elements.
21. Closes the if-statement that checks if there are any tetrahedral elements.

Lines 22 through 42 simply repeat the functionality of lines 15 through 21 for the other element types. Note that the element numbers are incremented based upon all previous element numbers. Also, throughout this method, the indices of all nodes are incremented by one. This is because COMSOL Multiphysics internally starts all indices at zero, but in our desired output format, we want to start all node and element indices at one. The entire string buffer is converted to a string and written to the file on line 43. On lines 44 and 45, some information is printed to the message log.
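Outside of COMSOL, the same export logic is easy to prototype. The following Python sketch is a hypothetical stand-in, with plain lists replacing what `getVertex()`, the interpolation feature, and `getElem()` return; it reproduces the node lines, data lines, per-type connectivity lines, continuous element numbering, and the shift from zero-based to one-based indices:

```python
def write_mesh_file(vertices, temps, elems_by_type):
    """Build the comma-delimited text format described above.
    vertices: list of (x, y, z); temps: one value per vertex;
    elems_by_type: dict mapping "TET"/"PYR"/"PRISM"/"HEX" to lists of
    zero-based node-index tuples (zero-based, as COMSOL stores internally)."""
    lines = []
    for i, (x, y, z) in enumerate(vertices):
        lines.append(f"N, {i + 1}, {x}, {y}, {z}")
    for i, t in enumerate(temps):
        lines.append(f"D, {i + 1}, {t}")
    elem_no = 0  # element numbers continue across element types
    for kind in ("TET", "PYR", "PRISM", "HEX"):
        for nodes in elems_by_type.get(kind, []):
            elem_no += 1
            ids = ", ".join(str(n + 1) for n in nodes)  # shift to one-based
            lines.append(f"{kind}, {elem_no}, {ids}")
    return "\n".join(lines)

# A tiny stand-in mesh: one tetrahedron with four nodes.
text = write_mesh_file(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)],
    temps=[300.0, 301.0, 302.0, 303.0],
    elems_by_type={"TET": [(0, 1, 2, 3)]})
print(text)
```

This is only a sketch of the file format, not of the COMSOL API; in the app, the method shown above does the equivalent work against the live model object.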

The file that is written out is called `output.txt` and is found in the user directory. The location of the directory on disk is specified in the software preferences, under *File Menu > Preferences > Files > Application files*, as shown in the screenshot below. You can change the directory as desired.

*The location of the output files is specified in the software preferences.*

And with that, our method is complete. In the screenshot below, we can see our application in action.

*The application reports what has been written to the file.*

We have demonstrated how to create a very simple app that writes out the mesh and results from a COMSOL Multiphysics steady-state thermal simulation. You can simply copy the data from the app and paste it into a text file or spreadsheet. There is, of course, a lot more sophistication that we could build into this app, including:

- Writing out multiple data sets that might represent different load cases or different times from a transient simulation.
- Formatting the data into a fixed-format file type.
- Writing out higher-order element types and interpolation schemes.
- Writing out vector data, or data that is discontinuous between elements.

Of course, we won’t address all of these cases right now, but if you’re interested in adding such modifications into your own customized app, here are some resources to get you started:

- Download the *Introduction to Application Builder* manual
- Watch videos designed to help you learn the basics of using the Application Builder and COMSOL Server™
- Browse our blog posts to see how simulation apps are used in a range of applications

Seeking more information on how to build a specific type of app or have other modeling inquiries? Contact us.


A material with thermal hysteresis will exhibit a solidification temperature that is different from the melting temperature. Such materials have applications in heat sinks and thermal storage systems and are even used by living organisms, such as fish and insects living in cold climates. We won’t concern ourselves here with the exact physical mechanisms by which thermal hysteresis happens, but rather focus on how to model it.

We will begin by considering a representative material with hysteresis that is incompressible and plot out the enthalpy of the material as a function of temperature, as shown below. When the material is in the solid state and is being heated, the enthalpy is given by the bottom curve or path. As the material passes the melting temperature, it becomes completely liquid. When this material is subsequently cooled in the liquid state, it will follow the upper path, thus the material remains liquid at temperatures below the melting temperature. Once the freezing temperature is reached, the material becomes completely solid. If the material is then heated back up, it will follow the bottom path, and so on. In the completely molten or completely solid state, the two enthalpy curves overlap. The latent heat of melting and solidification is the jump in these curves.

*Enthalpy versus temperature for an idealized incompressible material.*

Now, the above curve represents a bit of an idealized case that would only occur in the real world if we had a perfectly pure material. It is also a bit impractical for computational modeling purposes since this immediate transition between states represents a discontinuity that is quite difficult to solve numerically.

However, if we introduce a small transition zone over which the enthalpy varies smoothly, then we have a model that is much more amenable to numerical analysis. The physical interpretation of this is that the material changes phase over some finite temperature and in the intermediate range, the material is a mixture of both solid and liquid. Only once the material is fully outside of the transition zone will it switch over to following the other curve.

Note that we have centered the smoothing around the nominal melting and freezing temperatures, so the fully molten state is at a temperature slightly higher than the nominal melting temperature and the fully solid state is slightly below the freezing temperature. The plot below shows a gradual smoothing, but this transitional zone can be made very narrow to better approximate the behavior if we really did have a perfectly pure material.

*A smoothed enthalpy curve is more amenable to numerical analysis.*

Since the material is assumed to be incompressible, the enthalpy depends only on the temperature. The above plot will also give us the specific heat, which is the derivative of enthalpy with respect to temperature. The specific heat is constant except for a small region around the melting and freezing temperatures.

*The specific heat is the derivative of the enthalpy with respect to temperature and is different if the material is heated or cooled.*
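The relationship between the smoothed enthalpy and the specific heat can be sketched numerically. The Python snippet below builds one smoothed enthalpy step with `tanh` (the parameter values are arbitrary placeholders, not the model's, and COMSOL's *Step* function uses its own polynomial smoothing) and recovers the specific heat as the derivative dH/dT:

```python
import numpy as np

# Illustrative parameters (placeholders, not tied to any real material):
cp_base = 2000.0   # J/(kg*K), constant specific heat away from the transition
L = 2.0e5          # J/kg, latent heat (the jump between the enthalpy curves)
T_melt = 350.0     # K, nominal melting temperature
dT = 2.0           # K, width of the smoothing zone

def enthalpy(T, T_transition):
    """Sensible heat plus a smoothed latent-heat step centered on T_transition."""
    step = 0.5 * (1.0 + np.tanh(4.0 * (T - T_transition) / dT))
    return cp_base * T + L * step

T = np.linspace(330.0, 370.0, 2001)
H = enthalpy(T, T_melt)
cp = np.gradient(H, T)  # specific heat = dH/dT
print(cp.min(), cp.max())  # ~cp_base far from T_melt, a sharp peak at T_melt
```

The second, cooling-path curve is the same function centered on the freezing temperature instead; narrowing `dT` sharpens the peak and approaches the idealized discontinuous case.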

This temperature-dependent specific heat data can be put directly into the governing equation for heat transfer and, along with an appropriate set of boundary conditions, can be solved in COMSOL Multiphysics. In fact, the existing Application Gallery example, Cooling and Solidification of Metal, makes use of such a temperature-dependent specific heat, albeit without hysteresis. The only additional requirement for modeling thermal hysteresis is to introduce a switch to determine which path to follow. Let’s now look at how to implement this in COMSOL Multiphysics.

Here, we will look at a simple example model of a phase-change material within a thin-walled container. One side wall is perfectly insulated and the wall on the other side is held at a known temperature that varies periodically over time. A schematic of this is shown below. We are interested in computing the temperature as a function of time and position through the thickness and can reduce this to a one-dimensional model to get started.

*Schematic of the thermal model. A time-varying temperature is applied at one side.*

Our modeling begins by setting up some physical constants via the *Global Parameters*, which define the melting and freezing temperatures and the smoothing to apply to the enthalpy functions. The two smoothed enthalpy functions plotted earlier are implemented as shown in the screenshot below. Here, we can take advantage of the built-in *Step* function, which includes an option to apply user-defined smoothing.

*Implementation of the enthalpy functions shown above, using the smoothed step function. Note that the units have been defined.*

The geometry of our model is simply a 1D interval representing the phase-change material region. The *Heat Transfer in Solids* interface is used, since we are assuming that there is no fluid flow. The material properties are as shown below.

*Screenshot showing the material properties definitions in the phase-change material.*

The thermal conductivity and the density are constants. The specific heat (the heat capacity at constant pressure) is defined as:

`SorL*d(H_StoL(T),T)+(1-SorL)*d(H_LtoS(T),T)`

where the differentiation operator takes the derivatives of the two different enthalpy functions with respect to temperature and the `SorL` variable defines the local material behavior as either *Solid* or *Liquid*. The `SorL` variable can be either zero or one and can be different in each element.

A *Domain ODEs and DAEs* interface is used to define this variable, with interface settings as illustrated below. Note that the dependent variable and the source term are both dimensionless and the shape function is of the type *Discontinuous Lagrange — Constant*, meaning that the `SorL` variable will take on a different constant value within each element.

*Settings for the Domain ODEs and DAEs interface that tracks the material state.*

*The Source Term settings.*

The above screenshot depicts the equation that is being solved for in the *Domain ODEs and DAEs* interface. Let’s examine in detail the *Source Term* equation used:

`SorL-nojac(if(T>T_top,0,if(T<T_bot,1,SorL)))`

This equation is evaluated at the centroid of each element and implements the following in a single line:

- If the current temperature is greater than the temperature of complete melting (the nominal melting temperature plus half the smoothing temperature), then the material has passed its solid-to-liquid phase-change temperature, so set `SorL = 0`. This means that the liquid-to-solid enthalpy curve is followed.
- If the current temperature is less than the temperature of complete solidification (the nominal freezing temperature minus half the smoothing temperature), then the material has passed its liquid-to-solid phase-change temperature, so set `SorL = 1`. This means that the solid-to-liquid enthalpy curve is followed.
- Otherwise, when neither of the previous two conditions is satisfied, leave the `SorL` variable at its previous value, meaning that there is no change of path while in the intermediate zone.

The `nojac()` operator tells the software to exclude the enclosed expression from the Jacobian computation; it does not try to differentiate the enclosed expression but merely evaluates it at each time step.
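The switching logic is just a state machine with memory. A small Python sketch (the temperatures and thresholds below are illustrative placeholders) reproduces the hysteresis: the state changes only once the temperature fully crosses the transition zone, and is otherwise carried over from the previous step:

```python
def update_state(T, sorl_prev, T_top, T_bot):
    """Mimics the if() logic inside nojac():
    0 = follow the liquid-to-solid curve, 1 = follow the solid-to-liquid curve."""
    if T > T_top:
        return 0          # fully molten: switch to the cooling path
    if T < T_bot:
        return 1          # fully solid: switch to the heating path
    return sorl_prev      # in the transition zone: keep the previous path

# Heat past complete melting, then cool back down through the zone:
T_top, T_bot = 352.0, 338.0  # placeholder thresholds
sorl = 1  # the material starts solid
history = []
for T in [330, 345, 355, 345, 341, 335]:
    sorl = update_state(T, sorl, T_top, T_bot)
    history.append(sorl)
print(history)  # [1, 1, 0, 0, 0, 1]
```

Note how the two visits to 345 K give different states (1 on the way up, 0 on the way down); that memory is exactly what the *Previous Solution* operator supplies to the COMSOL implementation.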

The `SorL` variable is used in one place in the model: in the definition of the specific heat in the phase-change material domain, as shown earlier. It is also important to set the initial value of the variable appropriately. If the initial temperature of the phase-change material is above the temperature of complete melting or below the temperature of complete solidification, then this choice is unambiguous; otherwise, you must choose the initial state of the material. In the example here, we will consider the initial temperature of the system to be below the freezing temperature, so the material will initially follow the solid-to-liquid path. Therefore, the initial value of the `SorL` variable is set to one.

*The solver settings showing the usage of the Previous Solution operator.*

In terms of solving the model, we only need to keep in mind that the `SorL` variable needs to be evaluated at the previous time step using the *Previous Solution* operator in the solver sequence, as shown in the screenshot above. We can also use a segregated solver and, of course, we should investigate tightening the time-dependent solver tolerances and the scaling of the dependent variables, and study the convergence of the solution with mesh refinement.

Let’s now look at some results. In the plots below, we see the temperature through the thickness of our modeling domain for the heating and cooling of the phase-change material. Observe that the slope of the temperature as a function of position changes as the material passes through the melting and freezing points. This is due to extra heat that must be added or removed as the material changes phase.

*Temperature over time in the phase-change material during heating. The blue line is the melting temperature.*

*Temperature over time in the material during cooling. The red line is the freezing temperature.*

*An animation showing the combined temperature profile over time during heating and cooling. The blue and red lines are the melting and freezing temperatures.*

Today, we have introduced an approach for modeling materials that exhibit thermal hysteresis, under the assumptions that the density is constant and that the material must pass completely above the melting temperature, or completely below the freezing temperature, to change phase. This modeling approach makes use of the *Previous Solution* operator and equation-based modeling. For more details on the usage of the *Previous Solution* operator and further examples, please see:

- Using the Previous Solution Operator in Transient Modeling
- Tracking Material Damage with the Previous Solution Operator

The approach shown here is a bit simplified for the sake of explanation. If you are interested in the modeling of heat transfer with phase change, either with or without hysteresis, we recommend that you look to the Heat Transfer Module, which has a built-in interface for modeling heat transfer with phase change, as introduced here.

If you are instead interested in the modeling of irreversible changes in phase, then you may also want to take a look at our previous blog post on thermal curing as well as tracking material damage with the *Previous Solution* operator.

Looking to model thermal hysteresis in COMSOL Multiphysics or have other questions about this process? Please contact us.


Thermosets are a class of polymer materials that undergo an irreversible chemical reaction, causing the polymer chains to cross-link and form a rigid material. This chemical reaction can be due to heat, light, or the addition of a chemical catalyst. Bakelite, one of the first thermosets, is often credited with kicking off the polymer industry. Bakelite is a very hard material that is resistant to many chemicals, is a good electrical insulator, and has an attractive surface finish. The material was used in a variety of early consumer products, such as telephones and radio cabinets.

*A Bakelite radio cabinet. Image by Joe Haupt — Own work. Licensed under CC BY-SA 2.0, via Wikimedia Commons.*

Bakelite and other thermosets come in various precursor forms, such as powders and thick viscous liquids. These precursors are put into a mold and heated under high pressure. Additional filler materials are often added to improve the properties of the final product. Carbon fiber and fiberglass composites, for example, bond relatively strong but flexible fibers together using a relatively rigid thermoset matrix.

Now, depending upon the exact manufacturing process, the precursor material might not move around or flow significantly during the curing step. If this is so, then you can develop a very simple model to predict the curing based upon the temperature. Let’s now look at how to implement such a model in COMSOL Multiphysics.

We will look at simulating curing during a transfer molding process, wherein the material is loaded into a mold and then heated, as shown in the schematic below. During heating and curing, the material does not move around inside the mold, and for simplicity, we won’t consider any filler materials. A thin-walled part, such as the radio cabinet shown earlier, can be reasonably modeled with a one-dimensional model through the thickness. Since the material is heated uniformly and at a known rate on both sides, we can exploit symmetry to only model one half of the material.

*Schematic of a mold with a thermoset curing inside and the equivalent model for temperature and degree of cure.*

Our model will compute the variation in time of the temperature, T, and the degree of cure, \alpha, of the thermoset from the centerline to the mold wall. Assuming no flow, the equation governing heat transfer in the thermoset precursor is:

(1)

\rho C_p \frac{\partial T}{\partial t} + \nabla \cdot (-k \nabla T) = -\rho H_r \frac{\partial \alpha}{\partial t}

where \rho, C_p, and k are the density, specific heat, and thermal conductivity of the material.

The degree of cure is \alpha; as the material cures, it absorbs heat, so there is a negative volumetric heat source proportional to H_r, the heat of reaction. The rate of change of the degree of cure is often described by:

(2)

\frac{\partial \alpha}{\partial t} = A e^{-E_a/RT}(1-\alpha)^n

where the Arrhenius equation defines the temperature-dependent reaction rate, with A the frequency factor, E_a the activation energy, R the universal gas constant, and n the order of the reaction.
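Equation 2 can be integrated directly to get a feel for the kinetics. The Python sketch below uses explicit Euler time stepping at a constant temperature; the kinetic parameter values are representative placeholders, not measured data for any particular thermoset:

```python
import numpy as np

# Placeholder kinetic parameters, for illustration only:
A = 2.0e5        # 1/s, frequency factor
E_a = 6.0e4      # J/mol, activation energy
R = 8.314        # J/(mol*K), universal gas constant
n = 1.4          # order of the reaction

def cure_history(T, t_end, dt=0.1):
    """Explicit-Euler integration of d(alpha)/dt = A*exp(-Ea/(R*T))*(1-alpha)^n
    at a constant temperature T, starting from the uncured state alpha = 0."""
    alpha = 0.0
    t = 0.0
    while t < t_end:
        alpha += dt * A * np.exp(-E_a / (R * T)) * (1.0 - alpha) ** n
        alpha = min(alpha, 1.0)  # keep the degree of cure physical
        t += dt
    return alpha

# A hotter mold cures further in the same ten minutes:
print(cure_history(T=420.0, t_end=600.0))
print(cure_history(T=450.0, t_end=600.0))
```

In the full model the temperature field evolves simultaneously, which is why the degree-of-cure ODE is solved together with the heat equation rather than at a fixed T as in this sketch.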

Let’s now look at how to set up this model in COMSOL Multiphysics, starting with the definitions of a few *Global Parameters* defining the properties of our representative thermoset material.

*The Global Parameters define a set of representative material properties of a thermoset.*

Our modeling domain is simply a 5-mm-long 1D interval, with the material properties as shown above. The *Heat Transfer in Solids* interface solves for the temperature distribution over time, starting with the specified initial temperature along with a Thermal Insulation boundary condition on one end. A Heat Flux boundary condition at the other end of the domain applies 10 kW/m^{2} due to the heating of the mold.

*The absorption of heat due to the material curing is modeled via the Heat Source feature.*

The endothermic effect of the curing is accounted for via a volumetric heat source, `-rho0*H_r*d(alpha,t)`, as shown above. This feature implements the right-hand term from Equation 1 based upon the time derivative of the degree of cure.

We now need to add one more interface to solve for the degree of cure, and this is done via the *Domain ODEs and DAEs* interface, as shown below. Note that the field name is `alpha`. Pay special attention to how the units are set up.

*Settings for the Domain ODEs and DAEs interface, which solves for the degree of cure.*

Lastly, looking at the settings for the *Distributed ODE* feature, we see that the *Source Term* is `A*exp(-E_a/R_const/T)*(1-alpha)^n` and the *Damping* term is unity, while the *Mass Coefficient* is zero, thus giving us Equation 2. An initial condition of zero means that the material is modeled starting from the uncured state.

*Settings for the Distributed ODE feature that solves for the degree of cure.*

And that’s all there is to it: We can solve our model for a ten-minute curing time and plot the temperature and degree of cure through the thickness, as well as at the center and side of the material, as shown below. Here, we apply a constant heat load at one side, so we will want to check the maximum temperature and the degree of cure through the thickness.

*The temperature increases through the thickness of the material over time. Darker lines indicate increasing time.*

*The degree of cure through the material over time. Darker lines indicate increasing time.*

*The degree of cure at the center (blue) and side (green) of the thermoset material.*

We have shown how to quickly set up a thermal curing model entirely within the core capabilities of COMSOL Multiphysics. Of course, you can use a similar approach if you want to model the curing of other materials, such as concrete. If the material curing is due to light, such as in a photopolymerization process, you may also want to look over the various ways of modeling the interaction of light with materials, and in particular the modeling of light being absorbed within the volume of a solid as governed by the Beer-Lambert law.

The model presented here can be easily extended in many ways, including adding temperature nonlinearities to all of the materials properties, incorporating the effect of a filler material, and solving these equations in a 3D model. If you would like to see work that includes these examples, please read:

- T. Behzad and M. Sain, “Finite element modeling of polymer curing in natural fiber reinforced composites”, *Composites Science and Technology*, vol. 67, pp. 1666–1673, 2007.

If you have other questions or are interested in using COMSOL Multiphysics for your thermal curing modeling needs, please contact us.
