One of the first physical laws that we learn as engineers is Ohm’s law: The current through a device equals the applied voltage difference divided by the device’s electrical resistance, or *I* = *V/R _{e}*, where *R _{e}* is the electrical resistance of the device.

Shortly after learning that law, we probably also learned about the dissipated power within the device, which equals the current times the voltage difference, or *Q* = *IV*, which we could also write as *Q* = *I ^{2}R_{e}* or *Q* = *V ^{2}/R_{e}*.

We start our discussion from this point and consider a completely lumped model of a device. (Yes, we’re starting so simple that we don’t even need to use the COMSOL Multiphysics® software for this part!) Let’s consider a lumped device with an electrical resistance of *R _{e}* = 1 Ω and a thermal resistance of *R _{t}* = 1 K/W.

We choose an ambient temperature of 300 K, or 27°C, which is about room temperature. Let’s now plot out the device lumped temperature as a function of increasing voltage (from 0 to 10 V) and current (from 0 to 10 A), as shown in the image below. Unsurprisingly, we see a quadratic increase in temperature.

*Device temperature as a function of applied voltage (left) and applied current (right), assuming constant properties.*

We might think that we can use the curve to predict a wider range of operating conditions. Suppose we want to drive the device up to its failure temperature, where the material melts or vaporizes. Let’s say that this material will vaporize when its temperature gets up to 700 K (427°C). Based on this curve, some simple math would indicate that the maximum voltage is 20 V and the maximum current is 20 A, but this is quite wrong!
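As a quick sanity check, the constant-property model can be reproduced in a few lines of Python (the device values are those from the text; the “maximum” ratings are exactly the naive extrapolation just described):

```python
# Lumped constant-property model: T = T_ambient + Q * R_t,
# with Q = V^2 / R_e (voltage-driven) or Q = I^2 * R_e (current-driven).
import math

R_E = 1.0       # electrical resistance (ohm)
R_T = 1.0       # thermal resistance (K/W)
T_AMB = 300.0   # ambient temperature (K)
T_FAIL = 700.0  # failure (vaporization) temperature (K)

def temp_voltage_driven(V):
    return T_AMB + (V**2 / R_E) * R_T

def temp_current_driven(I):
    return T_AMB + (I**2 * R_E) * R_T

# Naive extrapolation of the failure point (valid only for constant properties):
V_max = math.sqrt((T_FAIL - T_AMB) * R_E / R_T)    # 20 V
I_max = math.sqrt((T_FAIL - T_AMB) / (R_E * R_T))  # 20 A
```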

At this point, you’re probably ready to point out the simple mistake that we’ve made: Electrical resistance is not constant with temperature. For most metals, the electrical conductivity goes down with increasing temperature, and since resistivity is the inverse of conductivity, the device resistivity goes up. So, let’s introduce a temperature dependence for the resistivity:

R_e = \rho_0(1+\alpha_e(T-T^e_0))

This is known as a linearized resistivity model, where *ρ*_{0} is the reference resistivity at the reference temperature, *T ^{e}_{0}*, and *α _{e}* is the temperature coefficient of electrical resistivity.

Let’s choose *ρ*_{0} = 1 Ω, *T ^{e}_{0}* = 300 K, and *α _{e}* = 1/200 K. Now, the resistance is 1 Ω at a device temperature of 300 K and 2 Ω at a temperature 200 K above the reference temperature. The equations for lumped temperature as a function of voltage and current now become:

T = T_{ambient} + (V^2 /\rho_0(1+\alpha_e(T-T^e_0))) R_t

and

T = T_{ambient} + I^2 \rho_0(1+\alpha_e(T-T^e_0)) R_t

These equations are a bit more complicated (the first is a quadratic equation in terms of *T*) but still possible to solve by hand. The plots of temperature as a function of increasing voltage and current are displayed below.
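Since the current-driven equation is linear in *T* and the voltage-driven one is quadratic, both admit closed-form solutions. A short Python sketch, assuming (as in the text) that the ambient and reference temperatures coincide at 300 K:

```python
import math

RHO0, ALPHA_E = 1.0, 1/200  # reference resistivity (ohm), temp. coefficient (1/K)
R_T = 1.0                   # thermal resistance (K/W), still constant here
T_AMB = 300.0               # ambient = reference temperature (K)

def temp_voltage_driven(V):
    # theta * rho0 * (1 + alpha_e*theta) = V^2 * R_t, quadratic in theta = T - T_amb
    a, b, c = RHO0 * ALPHA_E, RHO0, -(V**2) * R_T
    theta = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)
    return T_AMB + theta

def temp_current_driven(I):
    # theta = I^2 * rho0 * (1 + alpha_e*theta) * R_t, linear in theta
    denom = 1.0 - I**2 * RHO0 * ALPHA_E * R_T
    return T_AMB + I**2 * RHO0 * R_T / denom
```

At 10 V this gives about 373 K (lower than the constant-property 400 K), while at 10 A it gives 500 K (higher), matching the plotted trends.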

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical resistivity as a function of temperature.*

For the voltage-driven problem, as the temperature rises, the resistance rises. Since the resistance occurs in the denominator of the temperature expression, higher resistance lowers the temperature and we see that the temperature will be *lower* than that for the constant resistivity case. If we drive the device with constant current, the temperature-dependent resistance appears in the numerator. As we increase the current, the resistive heating will be *higher* than that for the constant resistivity case.

We might be tempted at this point to compute the maximum voltage or current that this device can sustain, but you are probably already realizing the second mistake we’ve made: We also need to incorporate the temperature dependence of the thermal resistance. For metals, it’s reasonable to assume that the electrical and thermal conductivity will show the same trends. Thus, let’s use a nonlinear expression that is similar to what we used before:

R_t = r_0(1+\alpha_t(T-T^t_0))

Now, our voltage-driven and current-driven equations for temperature become:

T = T_{ambient} + V^2 r_0(1+\alpha_t(T-T^t_0))/\rho_0(1+\alpha_e(T-T^e_0))

and

T = T_{ambient} + I^2 \rho_0(1+\alpha_e(T-T^e_0))/r_0(1+\alpha_t(T-T^t_0))

Although only slightly different from before, these nonlinear equations are now quite a bit more difficult to solve. Simulation software is starting to look more attractive! Once we do solve these equations (let’s set *r*_{0} = 1 K/W, *α _{t}* = 1/400 K, and *T ^{t}_{0}* = 300 K), we can plot the device temperature, as shown below.
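These coupled nonlinearities are a natural fit for a simple iterative solver. The sketch below uses plain successive substitution (a simple alternative to Newton’s method, chosen here for brevity) and, conveniently, fails to converge exactly where the physical model has no steady-state solution:

```python
RHO0, ALPHA_E = 1.0, 1/200   # electrical: reference resistivity, temp. coefficient
R0,   ALPHA_T = 1.0, 1/400   # thermal: reference resistance, temp. coefficient
T_AMB = 300.0                # ambient = reference temperature (K)

def solve_fixed_point(update, theta0=0.0, tol=1e-10, max_iter=100_000):
    """Successive substitution on theta = T - T_amb; returns None if it
    fails to converge, which is what happens past thermal runaway."""
    theta = theta0
    for _ in range(max_iter):
        new = update(theta)
        if abs(new - theta) < tol:
            return new
        theta = new
    return None

def temp_voltage_driven(V):
    rhs = lambda th: V**2 * R0*(1 + ALPHA_T*th) / (RHO0*(1 + ALPHA_E*th))
    th = solve_fixed_point(rhs)
    return None if th is None else T_AMB + th

def temp_current_driven(I):
    rhs = lambda th: I**2 * RHO0*(1 + ALPHA_E*th) * R0*(1 + ALPHA_T*th)
    th = solve_fixed_point(rhs)
    return None if th is None else T_AMB + th
```

For example, `temp_current_driven(8)` converges to about 465 K, while `temp_current_driven(9)` returns `None`: past roughly 8.3 A the lumped balance has no steady solution at all, which is the thermal runaway behavior discussed next.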

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical and thermal resistivity as a function of temperature.*

Observe that for the current-driven case, the temperature rises asymptotically. Since both the electrical and the thermal resistance increase with an increasing temperature, the device temperature rises very sharply as the current is increased. As the temperature rises to infinity, the problem becomes unsolvable. This is actually entirely expected; in fact, this is how your basic car fuse works. Now, if we were solving this problem in COMSOL Multiphysics, we could also solve this as a transient model (incorporating the thermal mass due to the device density and specific heat) and predict the time that it takes for the device temperature to rise to its failure point.

Things are luckily a bit simpler for the voltage-driven case. Here, we also see a predictable behavior: The rising thermal resistivity drives the temperature higher than when we only considered a temperature-dependent electrical conductivity. Now, the interesting point here is that the temperature is still lower than for the constant resistivity case. This also sometimes confuses people, but just keep in mind that one of these nonlinearities is driving the temperature *down* while the other is driving the temperature *up*. In general, for a more complicated model (such as one you would build in COMSOL Multiphysics), you don’t know which nonlinearity will predominate.

What other mistake might we have made here? We have used a *positive* temperature coefficient of thermal resistivity. This is certainly true for most metals, but it turns out that the opposite is true for some insulators, glass being a common example. Usually, the total device thermal resistance is mostly a function of the insulators rather than the electrically conductive domains. In addition, the device’s thermal resistance should incorporate the effects of the cooling to the surrounding ambient environment. So, the effects of free convection (which increases with the temperature difference) and radiation (which depends on the difference of the fourth powers of the temperatures) could also be lumped into this single thermal resistance. For now, though, let’s keep the problem (relatively) simple and just switch the sign of the temperature coefficient of thermal resistivity, *α _{t}* = -1/400 K, and directly compare the voltage- and current-driven cases for driving voltage up to 100 V and current up to 100 A.

*Device temperature as a function of applied voltage (pink) and applied current (blue) with a negative temperature coefficient of thermal resistivity.*

We now see some results that are quite different. Observe that for both the voltage- and current-driven cases, the temperature increases approximately quadratically at low loads, but at higher loads, the temperature increase starts to flatten out due to the decreasing thermal resistivity. The slope, although always positive, decreases in magnitude. The current-driven case starts to asymptotically approach *T* = 700 K, but the voltage-driven case stays significantly below the failure temperature.
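The 700 K asymptote can be checked numerically. With *α _{t}* = -1/400 K, the thermal resistance vanishes at a 400 K rise, so the current-driven temperature can approach but never reach 700 K. A bisection sketch with the same lumped values as before:

```python
RHO0, ALPHA_E = 1.0, 1/200
R0,   ALPHA_T = 1.0, -1/400  # note the negative sign of the thermal coefficient
T_AMB = 300.0

def temp_current_driven(I):
    # Solve theta = I^2 * rho0*(1 + a_e*theta) * r0*(1 + a_t*theta) by bisection.
    # The physical branch lies in [0, 400): the thermal resistance, and hence
    # the temperature rise it can sustain, vanishes at theta = -1/a_t = 400 K.
    f = lambda th: th - I**2 * RHO0*(1 + ALPHA_E*th) * R0*(1 + ALPHA_T*th)
    lo, hi = 0.0, 400.0 - 1e-12  # f(lo) < 0, f(hi) > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return T_AMB + 0.5 * (lo + hi)
```

At 10 A this gives about 412 K; even at 100 A the solution only creeps up to roughly 695 K, always below the 700 K asymptote.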

This is an important result and highlights another common mistake. The nonlinear material models we used here for electrical and thermal resistivity are *approximations* that start to become invalid if we get too close to 700 K. If we anticipate operating in this regime, we should go back to the literature and find a more sophisticated material model. Although our existing nonlinear material models did solve, we always need to check that they are still valid at the computed operating temperature. Of course, if we are not close to these operating conditions, we can use the linearized resistivity model (one of the built-in material models within COMSOL Multiphysics). Then, our model will be valid.

We can hopefully now see from all of this data that the temperature has a very complicated relationship with respect to the driving voltage or current. When nonlinear materials are considered, the temperature might be higher or lower than when using constant properties, and the slope of the temperature response can switch from quite steep to quite shallow just based on the operating conditions.

Have these results thoroughly confused you yet? What if we went back and changed one of the coefficients in the resistance expressions? Certain materials have negative temperature coefficients of electrical and thermal resistivity. What if we used an even more complicated nonlinearity? Would you feel confident in saying anything about the expected temperature variations in even this simple lumped device case, or would you rather check it against a rigorous calculation?

What about the case of a real-world device? One that has a combination of many different materials, different electrical and thermal conductivities as a function of temperature, and complex shapes? Would you model this under steady-state conditions only or in the time domain, to find out how long it takes for the temperature to rise? Maybe — in fact, most likely — there will also be nonlinear boundary conditions such as radiation and free convection that we don’t want to approximate via a single lumped thermal resistance. What can you expect then? Almost anything! And how do you analyze it? Well, with COMSOL Multiphysics, of course!

Evaluate how COMSOL Multiphysics can help you meet your multiphysics modeling and analysis goals. Contact us via the button below.


The important boundary condition that we will discuss here is called the *Inflow* boundary condition. It is available at boundaries that are exterior to a fluid domain and is equivalent to having a virtual channel “upstream”. The *Inflow* boundary condition is used to define a heat flux at the inlet boundary that brings the same energy to the fluid domain as if you had modeled the virtual channel as a real CFD domain. The virtual channel can be seen as a long insulated channel with given thermal properties at the inlet, and with the same velocity profile as defined in the settings for the *Inflow* boundary condition.

*Representation of the virtual domain corresponding to an* Inflow *boundary condition.*

From a mathematical point of view, the boundary condition is formulated as a flux condition:

(1)

-\mathbf{n} \cdot \mathbf{q} = \rho \Delta H \bf{u}\cdot \mathbf{n}

where the enthalpy variation is defined as:

(2)

\Delta H = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d}T}+\int_{p_{\mathrm{upstream}}}^{p}{\frac{1-\alpha_p T}{\rho}\mathrm{d}p}

where we can designate the two terms:

{\Delta H}_T = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d}T}

and

{\Delta H}_p = \int_{p_{\mathrm{upstream}}}^{p}{\frac{1-\alpha_p T}{\rho}\mathrm{d}p}

so that we can write:

\Delta H ={\Delta H}_T + {\Delta H}_p

This expression contains two terms. The first, Δ*H _{T}*, depends on the temperature difference, while the second, Δ*H _{p}*, depends on the pressure difference.

Eq. (1) expresses the fact that the normal conductive heat flux, -**n** · **q**, at the inflow boundary is proportional to the flow rate and to the enthalpy variation between the upstream conditions and inlet conditions.

As shown in Eq. (2), the enthalpy variation depends in general both on the difference in temperature and in pressure. However, the pressure contribution to the enthalpy, Δ*H _{p}*, is neglected when the work due to pressure changes is not included in the energy equation.

In the COMSOL Multiphysics® software, this is controlled in the *Nonisothermal Flow* multiphysics feature using the corresponding check box:

There is another classical case where this term cancels out: when the fluid is modeled as an ideal gas. Indeed, in this case, *α _{p}* = 1/*T*, so the integrand 1 - *α _{p}T* is identically zero.

First, let’s assume that the pressure contribution to the enthalpy is null. (We have seen above that this is actually quite often the case.) Then, the boundary condition reads:

(3)

k\nabla T \cdot \mathbf{n} = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d} T} \: \rho\mathbf{u} \cdot \mathbf{n}

When advective heat transfer dominates at the inlet (large flow rates), the temperature gradient, and hence the heat transfer by conduction, in the direction normal to the inlet boundary is very small. So in this case, Eq. (3) imposes that the enthalpy variation is close to zero. As *C _{p}* is positive, the *Inflow* boundary condition requires *T* ≈ *T*_{upstream} to be fulfilled. So, when advective heat transfer dominates at the inlet, the *Inflow* boundary condition is almost equivalent to a *Dirichlet* boundary condition that prescribes the upstream temperature at the inlet.

Conversely, when the flow rate is low or in the presence of large heat sources or sinks next to the inlet, the conductive heat flux cannot be neglected. In addition, the inlet temperature has to be adjusted to balance the energy brought by the flow at the inlet and the energy transferred by conduction from the interior, as described by Eq. (3).

This makes it possible to observe a realistic upstream feedback due to thermal conduction from the inlet surroundings.
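This high-Péclet versus low-Péclet behavior can be seen in a 1D analog that admits a closed-form solution. The sketch below is an illustration of this blog post’s argument, not something from the post itself: it assumes constant properties, a unit-length domain, and a prescribed temperature at the downstream end standing in for the cooling region.

```python
import math

def inflow_inlet_temperature(T_up, T_end, Pe):
    """Inlet temperature T(0) for steady 1D advection-diffusion,
    u*T' = D*T'' on [0, 1], with the Inflow (Danckwerts-type) condition
    D*T'(0) = u*(T(0) - T_up) at x = 0 and T(1) = T_end prescribed.
    The general solution is T = A + B*exp(Pe*x) with Pe = u/D; the inflow
    condition forces A = T_up, and T(1) = T_end gives B = (T_end - T_up)*exp(-Pe).
    """
    return T_up + (T_end - T_up) * math.exp(-Pe)

for Pe in (0.1, 1.0, 10.0, 100.0):
    print(f"Pe = {Pe:6.1f}: T(0) = {inflow_inlet_temperature(303.0, 283.0, Pe):.2f} K")
```

At Pe = 100 the inlet temperature is indistinguishable from the upstream value (Dirichlet-like behavior), while at Pe = 0.1 it has drifted most of the way toward the downstream temperature: the upstream feedback described above.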

Keeping the assumption that the enthalpy only depends on the temperature and that, in addition, the heat capacity is constant, Eq. (1) reads:

(4)

k \nabla T \cdot \mathbf{n} = (T-T_\mathrm{upstream})C_p \rho\mathbf{u} \cdot \mathbf{n}

which corresponds to a *Danckwerts* boundary condition that is used in, for example, the *Transport of Diluted Species* interface.

In practice, there are many models where the heat capacity is nearly constant, so the *Inflow* boundary condition behaves like a *Danckwerts* boundary condition with an averaged heat capacity. Interestingly, if this is not the case, the *Inflow* boundary condition automatically accounts for an incoming flux that corresponds to the enthalpy and cannot be expressed by simply using a *Danckwerts* boundary condition.

Let’s discuss a general case. In Eq. (2), the enthalpy variation depends both on the difference in temperature and in pressure.

Considering that the *Inflow* boundary condition models a virtual channel feeding the inlet, we expect pressure losses between the virtual channel inlet and the boundary where the condition is defined. This explains why the upstream pressure is different from the inlet pressure. While the fluid flows through the channel, it is subject to pressure work that results in a temperature change between the virtual channel inlet and the boundary where the *Inflow* boundary condition is defined. This is what is described by the pressure-dependent term in Eq. (2). Note that the viscous dissipation in the virtual channel is not accounted for.

In practical situations, the pressure contribution, Δ*H _{p}*, is often zero (for ideal gases or when work done by pressure changes is neglected) or small in the sense that a very small difference between the upstream temperature and the inflow temperature is enough to balance it. To illustrate this, consider two common fluids:

- Air: Its density is defined from the ideal gas law in the Material Library, hence the pressure contribution to the enthalpy, Δ*H _{p}*, is zero.
- Water: The order of magnitude of *C _{p}* is 1000 J/K/kg, while the order of magnitude of 1/*ρ* is 0.001 m^{3}/kg. A pressure difference of 1 bar (= 10^{5} Pa) and a temperature difference of 0.1 K induce Δ*H _{p}* ≈ 100 J/kg and Δ*H _{T}* ≈ 100 J/kg, respectively; two contributions with the same order of magnitude in Δ*H*.
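The water estimate can be written out explicitly (order-of-magnitude values only, with the *α _{p}T* correction to the pressure term dropped for this rough comparison):

```python
# Order-of-magnitude check of the two enthalpy contributions for water.
cp = 1000.0  # J/(kg*K), order of magnitude of the heat capacity
v = 0.001    # m^3/kg, order of magnitude of the specific volume 1/rho
dT = 0.1     # K, temperature difference
dp = 1e5     # Pa, pressure difference (1 bar)

dH_T = cp * dT  # temperature contribution: ~100 J/kg
dH_p = v * dp   # pressure contribution (alpha_p*T term neglected): ~100 J/kg
print(dH_T, dH_p)
```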

To illustrate how the *Inflow* boundary condition behaves compared to a *Temperature* boundary condition, we can study the stationary temperature profile in a long channel in 2D, which actually represents a flow between two plates. Beyond a certain point, the channel is cooled by a convective heat flux on both sides. The channel height is 1 cm and the part exposed to the convective heat flux is 10 cm long. The channel is filled with air (the density follows the ideal gas law).

At the inlet located at some distance from the cooling area, the average velocity is U_{in} and the temperature is T_{hot} = 30°C. The convective heat flux is defined as h(T_{cold}-T), with h = 100 W/m^{2}/K and T_{cold} = 10°C.

Most of the temperature variations occur beyond the point where the heat flux is applied, so we can choose to reduce the computational domain by modeling only a fraction of the channel before the cooling area. The image below contains two sketches. The one at the top has a section of length *L*_{inlet} = 2 cm before the cooling area, while in the one at the bottom, the inlet coincides with the beginning of the cooling area (*L*_{inlet} = 0).

*Representation of the geometry with a section before the area exposed to the heat flux (top) and with the inlet at the beginning of the area exposed to the heat flux (bottom).*

Now we solve the model using either a *Temperature* or *Inflow* boundary condition at the inlet. We vary two parameters in the model:

- Inlet velocity, *U*_{in}: 1 cm/s and 10 cm/s
- Length of the channel before the area exposed to the heat flux, *L*_{inlet}: 0, 0.2, 1, and 2 cm

The goal of these simulations is to determine the values of *L*_{inlet} for which we are able to set accurate thermal boundary conditions using *Temperature* and *Inflow* boundary conditions, respectively.

Let’s comment on the results for U_{in} = 10 cm/s. In the left part of the figure below, we see the temperature profile using the *Temperature* boundary condition (top) and the *Inflow* boundary condition (bottom). The two graphs look very similar and it is difficult to draw any conclusion from them, but the graph on the right gives more details.

The graph to the right shows the temperature profile along the vertical line located at the beginning of the cooling zone. (It coincides with the inlet boundary, when *L*_{inlet} = 0. Let’s call it “reference line” in the rest of this blog post.) The solid lines represent the results obtained using an *Inflow* boundary condition and the dotted lines correspond to the *Temperature* boundary condition. The different colors correspond to different values of *L*_{inlet}.

Let’s first check the results obtained using the *Temperature* boundary condition (dotted line). We see that as *L*_{inlet} increases, the temperature profile along the reference line converges to a given profile. The results for *L*_{inlet} = 2 cm show no improvement; they coincide with the results obtained for *L*_{inlet} = 1 cm, so we can consider that there is no need to further extend the channel.

For *L*_{inlet} = 0, the temperature profile is quite different from the converged profile. This illustrates a classical issue using a *Temperature* boundary condition: As the temperature profile is not known in advance along the reference line, the best option is to prescribe a reasonable temperature; here, the upstream temperature.

When an *Inflow* boundary condition is used, if the value of *L*_{inlet} is increased, the temperature profile along the reference line converges to the same profile as when a *Temperature* boundary condition is used.

We notice that especially with *L*_{inlet} = 0, the solution is much closer to the converged profile than when using the *Temperature* boundary condition.

*Left: Temperature field in the channel using the* Temperature *boundary condition (top) and* Inflow *boundary condition (bottom) for* L_{inlet} *= 0 and* U_{in} *= 10 cm/s. Right: A comparison of the temperature along the reference line with the* Inflow *and* Temperature *boundary conditions for different values of* L_{inlet}*.*

It is important to keep in mind that in many projects, the geometry contains inlets that are fed by channels that are not represented in the geometry. While for simple geometries — like here — it is easy to modify it to include a part of the channel before the inlet, it can be challenging for advanced geometries. Even with *L*_{inlet} = 0, the *Inflow* boundary condition gives a decent prediction of the temperature profile at the inlet.

When the channel before the inlet can be extended a sufficient distance, the temperature profile on the inlet boundary obtained using *Inflow* and *Temperature* boundary conditions coincide. This is in agreement with the analysis made before stating that when the advective heat transfer dominates and an ideal gas model is used, the *Inflow* boundary condition is similar to a *Temperature* boundary condition. It is interesting to mention here that from a numerical point of view, the two conditions behave similarly in this case. (For example, the number of iterations taken by the nonlinear solver is identical for both conditions.)

Apart from the temperature profile, another quantity that should be monitored is the heat rate induced by the heat flux. The table below contains this heat rate for the different values of *L*_{inlet}. One column contains the value for the *Inflow* boundary condition and the other for the *Temperature* boundary condition.

*The heat rate tabulated for the case with highest inlet velocity.*

When the *Inflow* boundary condition is used, the heat rate is almost constant. When using a *Temperature* boundary condition, the heat rate is affected by the value of *L*_{inlet}.

Now, let’s look at the results for *U*_{in} = 1 cm/s. Because the velocity is lower in this case, the advective effects no longer dominate. The image below to the left shows the temperature field obtained using the *Temperature* boundary condition (top) and *Inflow* boundary condition (bottom). Although the two plots look similar, a closer look at them reveals that at the end of the inlet boundary, there is a difference between the two temperature profiles.

The graph to the right shows the temperature profile along the reference line. As before, the solid lines represent the results obtained using an *Inflow* boundary condition, the dotted lines correspond to the *Temperature* boundary condition, and the different colors correspond to different values of *L*_{inlet}.

Again, for *L*_{inlet} = 0, the *Temperature* boundary condition prescribes a constant temperature along the reference line. This temperature profile is far from the solution obtained with the largest values of *L*_{inlet}. As before, we see that as *L*_{inlet} increases, the temperature converges to a given profile. However, here, the convergence is slower, compared to the case with *U*_{in} = 10 cm/s. Comparing the solution obtained using the *Inflow* boundary condition and the *Temperature* boundary condition, we notice that for any value of *L*_{inlet}, the solution obtained using the *Inflow* boundary condition is always closer to the converged profile.

*Left: Temperature field in the channel using the* Temperature *boundary condition (top) and* Inflow *boundary condition (bottom) for* L_{inlet} *= 0 and* U_{in} *= 1 cm/s. Right: A comparison of the temperature along the reference line with the* Inflow *and* Temperature *boundary conditions for different values of* L_{inlet}*.*

The table below again shows the heat rate for the two boundary conditions.

*The heat rate tabulated for the case with the lowest inlet velocity.*

The trend is similar to the first case, but when a *Temperature* boundary condition is used, the influence of *L*_{inlet} on the heat rate is much larger. Using a *Temperature* boundary condition with *L*_{inlet} = 0, the value of the heat rate is overestimated by a factor of almost 2 compared to the solution obtained with a long inlet. Using an *Inflow* boundary condition, the heat rate is correctly predicted for any value of *L*_{inlet}.

These results show that when *L*_{inlet} is small (especially when *L*_{inlet} = 0), the temperature profile and the heat flux are more realistic using an *Inflow* boundary condition rather than a uniform *Temperature* boundary condition. This can be explained by the fact that at the inlet, a uniform temperature profile is not realistic. In practical situations, the temperature is not controlled exactly at the inlet but, for example, in a tank located at some distance.

While in many configurations, the *Temperature* and *Inflow* features describe similar conditions and lead to similar simulation results, there are a number of configurations (especially for slow flow and small dimensions) where the conductive effects are not dominated by the advective effects and where the *Inflow* boundary condition usually leads to a temperature profile that is closer to the reality than a *Temperature* boundary condition. In addition, a *Temperature* boundary condition could enforce an erroneous temperature value that induces large heat fluxes that are not realistic.

As the *Inflow* boundary condition is simple to use and usually induces no additional numerical cost, it ought to be the first choice to define a heat transfer condition at the flow inlet. The vast majority of model examples in the Application Libraries use it.

Learn more about all of the functionality available for heat transfer modeling in COMSOL Multiphysics by clicking the button below.

*C _{p}*: Heat capacity (SI unit: J/K/kg)

*h*: Heat transfer coefficient (SI unit: W/m^{2}/K)

**n**: Boundary normal vector (dimensionless)

*k*: Thermal conductivity (SI unit: W/K/m)

*p*: Pressure (SI unit: Pa)

**q**: Heat flux (SI unit: W/m^{2})

*p*_{upstream}: Pressure of the upstream (SI unit: Pa)

*T*: Temperature (SI unit: K)

*T*_{cold}, *T*_{hot}: Cold and hot temperatures (SI unit: K)

*T*_{upstream}: Temperature of the upstream (SI unit: K)

*U*_{in}: Inlet velocity (SI unit: m/s)

*α _{p}*: Coefficient of thermal expansion (SI unit: 1/K)

*ρ*: Density (SI unit: kg/m^{3})

**u**: Velocity (SI unit: m/s)

Δ*H*: Enthalpy change vs. reference enthalpy (SI unit: J/kg)

Thermosiphons have been used for keeping houses warm since the 1800s. These devices use central heaters and pipe networks that carry water and steam to different rooms. The cool part (figuratively) is that no pump is needed for fluid transport — convective currents induced by the heater located at the bottom of an installation are enough. Let’s discuss modeling thermosiphons using a “pseudofluid” with temperature-dependent properties.

From their initial applications in large-scale heating, thermosiphons have since been used in various industries that rely on efficient heat transfer in small spaces. Today, thermosiphons are found in a wide range of applications: collecting heat from solar panel arrays, heating water and food, cooling IC engines, and even cooling electronic ICs.

*An example of a thermosiphon. Image by Gilabrand at English Wikipedia. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

One reason why thermosiphons can be very efficient is that they can operate near the phase change temperature of the transport fluid. This means that the fluid, while carrying heat from point A to point B, uses that heat not only to raise its temperature, but also to change its phase from liquid to vapor.

There are two reasons why phase change in a thermosiphon can be a significant advantage. First, a phase change gives a much greater change in density than a rise in temperature. Fluid transport would be set up much more easily here.

Also, thermosiphon fluids typically need as much heat to change phase as they would to raise the temperature by hundreds of degrees (Celsius). For instance, water has a specific latent heat of vaporization of 2264.7 kJ/kg, whereas the specific heat of water is 4.186 kJ/kgK. This means that the amount of heat absorbed by water, while changing to steam, is 541 times the amount it would need to raise its temperature by 1°C. This means that a lot more heat can be absorbed from a source at a specific temperature if the fluid is changing phase instead of rapidly heating up.
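The 541 figure is just the ratio of the two property values quoted above:

```python
L_vap = 2264.7  # kJ/kg, specific latent heat of vaporization of water
cp = 4.186      # kJ/(kg*K), specific heat of liquid water

# Latent heat expressed in "degrees of sensible heating" of liquid water:
ratio = L_vap / cp
print(round(ratio))  # 541
```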

The heat transfer from one body to another is proportional to the temperature difference between them. A fluid that stays at a certain temperature during a larger heat transfer would have the advantage of maintaining the same temperature difference for a longer time. This means that the heat transfer rate would stay high for a longer time, instead of dropping as the temperature difference between the heat source and the fluid reduces. However, this very source of efficiency can make modeling the thermosiphon a challenge.

Modeling a flow that involves phase change can be computationally demanding. A usual phase-change fluid flow model involves:

- Two separate domains (one liquid, one vapor)
- An “interface tracking” approach between the two domains that requires a moving mesh on both domains

Another drawback is that this method doesn’t allow topological changes in the interface between the two domains. The creation or merging of bubbles of vapor, for instance, would not be permissible.

Since the interface between the domains is a surface, there would be no modeling of the “slushy”, part-liquid–part-vapor transitional situations. Modeling a thermosiphon with this approach would create an approximation that has a single boundary between the liquid and vapor, which moves as the fluid undergoes a phase change.

A different approach to modeling this kind of fluid flow problem involves using a single domain of what we’ll call a *pseudofluid*. This pseudofluid is a material with properties defined as a function of temperature. The properties change from those of the liquid to those of the vapor, over a small region known as the *phase transition window*. In the figures below, we see how a cross-phase density function is defined to indicate the transition of state from liquid to vapor.
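As an illustration of what such a cross-phase property function might look like, here is a minimal sketch using a tanh-smoothed step between liquid and vapor densities. The smoothing function, the 1 K window half-width, and the water/steam values are illustrative assumptions; the post does not specify them, and the pressure dependence of the vapor density is omitted here for simplicity:

```python
import math

RHO_LIQ = 958.0  # kg/m^3, saturated liquid water near 100 degC (illustrative)
RHO_VAP = 0.6    # kg/m^3, saturated steam near 100 degC (illustrative)
T_SAT = 373.15   # K, phase change temperature
DT_WINDOW = 1.0  # K, half-width of the phase transition window (assumed)

def vapor_fraction(T):
    """Smoothed phase indicator: 0 in the liquid, 1 in the vapor,
    transitioning over roughly +/- DT_WINDOW around T_SAT."""
    return 0.5 * (1.0 + math.tanh((T - T_SAT) / DT_WINDOW))

def density(T):
    """Pseudofluid density: blends liquid and vapor values across the window."""
    phi = vapor_fraction(T)
    return (1.0 - phi) * RHO_LIQ + phi * RHO_VAP
```

Narrowing `DT_WINDOW` sharpens the transition toward a true phase change, at the cost of a stiffer problem for the solver, which is exactly the trade-off discussed below.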

Note: A similar modeling philosophy is used in the *Phase Change Material* node in the Heat Transfer Module. Though the phase transition could also be modeled using this node, along with the *Nonisothermal Flow* multiphysics coupling option, the pseudofluid approach allows flexibility in the definition of the phase transition function. The pseudofluid material models the phase transition based on two parameters: temperature and pressure. Accurate modeling of the pressure-dependent variation of the density of steam is critical for ensuring mass conservation. The fluid flow and heat transfer equations are solved using the *Single-Phase Flow, Laminar Flow* and *Heat Transfer in Fluids* physics nodes available in the COMSOL Multiphysics® software. These two nodes collectively solve the equations for conservation of mass, momentum, and energy.

This modeling approach enables apparent topological changes between phases, since there aren’t any domain boundaries to deal with. This overcomes one of the major approximations of the interface tracking approach. Our solution could now have plenty of pockets of fluid transitioning from one phase to another, which is in line with our everyday observations of fluids brought to a boil, for instance.

There are two approximations inherent in the pseudofluid approach. It doesn’t take surface tension forces into account; so even though topological changes are handled, a big contributing factor in bubble formation during boiling is still left out. Also, phase transition occurs over a small range of temperatures instead of at a specific value. The smaller this range is, the more accurately the phase change phenomenon is captured. Ideally, we would choose a range that represents the intermediate slushy stage well. However, a smaller range causes more difficulties in convergence of the solutions.

The pseudofluid approach is successful in setting up convection currents and representing the phase change process of the fluid. In the video below, we see a simple vertical container with a fluid being heated, modeled with this approach. Initially, the top portion of the container is filled with vapor, while the bottom portion contains liquid. The container is heated from its bottom surface, and we can see the gradual phase change to convert the entire contents into vapor.

The image below shows the formation of different phases of fluid (represented by their density), as well as the local velocity of the convection currents in a tilted tube, which represents the thermosiphon flask.

*Comparing the quantitative performance of the pseudofluid model of the thermosiphon to experimental data — both from in-house experiments and literature (Ref. 1). Steady-state temperatures are compared with data from reference literature at different locations of a vertical thermosiphon. Temperature variation with respect to time is compared with experimental data for a tilted thermosiphon.*

The model seems to perform well, and the deviations in performance are within acceptable limits.

We developed and applied this pseudofluid modeling technique to optimize a real-world thermosiphon application. Once the fluid flow model is set up, the computation time is considerably reduced compared to the interface tracking approach. This frees up computational resources to optimize many other parameters of the thermosiphon.

One objective is to maximize the heat transfer rate of the apparatus. Using a fluid near its phase transition temperature greatly reduces the size of the thermosiphon needed for a certain heat transfer rate.

Another important parameter is the mass of fluid stored in the apparatus. With too much fluid, the heat input needed to vaporize it would be very high; it is even possible that the available heat input would never vaporize the fluid at steady state, which would greatly reduce the efficiency of the thermosiphon. With too little fluid, the opposite problem appears: unless enough heat is drawn from the thermosiphon, the small charge remains fully vaporized in the steady state, once again losing the increased efficiency that comes with phase change.

Building on the fluid flow model discussed in this blog post, we can also optimize the thermosiphon’s dimensions, angle of slant, and design of the heat-absorbing surfaces.

It is relatively easy to imagine applying the pseudofluid modeling technique to problems concerning fluids that transition between gels and free-flowing liquids. A question worth asking: Can an improved pseudofluid model actually be used as a universal mechanics model? In other words, can this model include the whole spectrum of phase, from brittle solids like rock to free-flowing vapors?

Modeling pseudofluids and phase-transitioning material properties could help in unifying different physics models. A mathematical model that handles these transitions well could even change the way we think about “phases”. Traditional phases may well end up being thought of as approximate descriptions on a continuous spectrum of material states. Although the mathematics to accurately describe these phase changes hasn’t quite been perfected yet, we at Noumenon Multiphysics may have some developments soon!

Note that aside from the two approximations for this approach mentioned earlier, there are some other limitations. This model, though capable of predicting turbulent flows, may become computationally expensive in such scenarios. (For spatial resolution of turbulence, the mesh would need to be refined in the entire domain. For temporal resolution of turbulence, smaller time steps would be required for obtaining converged solutions. So, the number of mesh elements and computational time would both increase. On the other hand, while using a turbulence model along with this pseudofluid material model is also possible, it adds extra equations to the model.) This limitation seems acceptable as far as thermosiphons are concerned, since a turbulent thermosiphon would be greatly inefficient anyway. It is worth noting, however, that different turbulent flow models could be added to the fluid flow model for different applications.

The accuracy of the pseudofluid model depends greatly on the quality of data available about the fluid involved for different conditions of temperature and pressure. Most significantly, the change in density with respect to change in pressure needs to be accurately known to create a useful pseudofluid material model. This is relatively easy for water and steam, but collecting similar data for other fluids may be a more difficult task.
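As a first approximation, the pressure and temperature dependence of the vapor density can be sketched with the ideal gas law; a real pseudofluid material would rely on tabulated steam data (e.g., an IAPWS formulation) instead.

```python
# Ideal-gas estimate of steam density as a function of pressure and
# temperature -- a rough first approximation only; accurate pseudofluid
# models need measured or tabulated rho(p, T) data.
M_WATER = 0.018015   # kg/mol, molar mass of water
R_GAS = 8.314        # J/(mol*K), universal gas constant

def steam_density(p, T):
    """rho = p*M/(R*T) for water vapor treated as an ideal gas."""
    return p * M_WATER / (R_GAS * T)
```

At atmospheric pressure and 373 K this gives roughly 0.59 kg/m³, close to the tabulated value for saturated steam, but the approximation degrades quickly at higher pressures.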

Mandar Gadgil is an associate engineer at Noumenon Multiphysics, where he has played an important role in solving numerous challenging modeling and simulation problems for the engineering industry. He has worked on multiphase, multispecies fluid flow models; models of electric flow for biomedical applications; electromagnetism; fluid-structure interaction; battery modeling; and much more. Mandar completed his Master of Technology (M.Tech.), with a specialization in modeling and simulation, in the Department of Applied Mathematics at the Defence Institute of Advanced Technology, India.

- F. Bandar, L.C. Wrobel, and H. Jouhara, “Numerical modelling of the temperature distribution in a two-phase closed thermosyphon,” *Applied Thermal Engineering*, 2013.

Additive manufacturing is the process of creating a 3D object by adding one or more materials on top of each other layer by layer. To learn more about this type of manufacturing, we reached out to Professor Frédéric Roger of the Mines Telecom Institute (IMT), Lille-Douai Center. IMT is a French public institution dedicated to higher education, research, and innovation in engineering and digital technologies.

Professor Roger says that, in a sense, additive manufacturing is a bit like sewing or weaving. In both processes, a heterogeneous finished product is created by controlling how different raw materials are consolidated. In weaving, the materials are usually thread and yarn; however, additive manufacturing can use many materials, including polymers, metal alloys, ceramics, and composites.

*Choosing the right materials is important for creating an ideal finished product, be it a warm blanket (left, woven by my grandmother) or a customized aerospace part (right). Right image in the public domain in the United States, via Wikimedia Commons.*

This wide range of materials means that additive manufacturing can be used to design a wide variety of unique objects across many industries. For instance, Roger mentions that by using the right materials and thermodynamic conditions, engineers can make objects that withstand or adapt to severe environmental conditions. Such objects could even adapt to certain temperatures or chemical conditions by changing their shape or releasing chemical species (like drugs) that are trapped in a matrix. A transformation over time would add another dimension to the printed part, resulting in “4D printing”.

*Sometimes, additive manufacturing parts are inspired by natural forms, like the bio-inspired example pictured here. Image courtesy Frédéric Roger.*

According to Roger, the many opportunities that come with additive manufacturing make it “an unavoidable manufacturing process,” as it “offers new opportunities to develop optimized structures with advanced materials.” However, before engineers can create these structures, they have to improve the additive manufacturing process.

Since additive manufacturing is a complex process, it can be difficult to study. This technique varies based on the materials involved and the specific type of additive manufacturing. Studying this process also requires accounting for many different effects, such as:

- Multiple phase transformations
- Transfer of energy, mass, and momentum
- Sintering
- Photopolymerization
- Drying
- Crystallization
- Deformation
- Stress

To account for these factors, engineers can use the COMSOL Multiphysics® software, which Roger mentions is “a unique software that has great advantages in the simulation of additive manufacturing.” The software helps engineers to not only “optimize the additive manufacturing process but also to predict the mechanical and microstructural consequences on the product.” Through this, engineers can include all of the relevant physics and determine the ideal manufacturing conditions and part geometries that balance the needs of stiffness, weight reduction, and heat dissipation.

*Left: An example of the additive manufacturing process, which involves many different physics. Image by Les Pounder — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons. Right: Example of an additive manufacturing part created with two materials and filled with a honeycomb inner structure. Image courtesy Frédéric Roger.*

A challenge is that analyzing the additive manufacturing process while coupling the relevant physics can result in large model sizes and long computational times. To overcome this issue, Roger implements several different simulation strategies, such as activating mesh properties, using adaptive remeshing, and performing sequential simulations.

By taking a sequential approach, Roger is able to better analyze the succession of thermodynamic states that a material experiences during additive manufacturing. At the same time, this approach helps to reduce the complexity of the multiphysics couplings by dissociating them over time. As such, sequential simulations provide a way to comprehensively model and optimize the additive manufacturing process while reducing computational costs.

For their simulations, Roger and his team focused on fused-deposition modeling (FDM®), a common additive manufacturing technique that is both affordable and enables control over process parameters. The aim of the study was to optimize the internal and external geometry of a printed thermoplastic part and achieve the best possible performance. To accomplish these goals in an efficient manner, the team split their analysis into three parts, discussed below.

For more information about this study, check out the researchers’ paper.

In the first part of the study, the researchers wanted to minimize the total weight of a printed structure while maintaining a material distribution that maximizes stiffness. To do so, they used topological optimization and structural mechanics analysis to study a mechanical structure exposed to a tensile load.

*Original geometry and boundary conditions (left) and the Young’s modulus distribution that defines the optimal shape by color contrast (right). Left image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble presentation. Right image courtesy Frédéric Roger.*

Through the studies, they found an optimal shape for the part, determining that the middle of the shape has the highest stress levels. As such, the researchers divided the structure into domains based on the stress concentration field: a high-stress middle area surrounded by two low-stress areas. In the following study, they used this information to apply specific manufacturing conditions to the high-stress zone.

*The stress fields in the optimized geometry. Image courtesy Frédéric Roger.*

In the second study, the researchers aimed to increase the stability of the high-stress zone in their part by testing two possible infill strategies:

- Heterogeneous filling with variable densities
- Multimaterial filling

In the heterogeneous case, the team created a more resistant domain in the high-stress middle area by using a higher density of infill. At the same time, they minimized the weight of the external areas by using less material. The results indicated that the ideal geometry contains 60% material in the high-stress region and 20% material in the low-stress regions.

*Printing an optimized part using one material with varying densities. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

As shown below, the multimaterial case involved using red ABS plastic on the ends of the part and black conductive ABS with improved mechanical properties in the middle. The team found that they could replace the conductive ABS with ABS-like materials containing reinforcing fillers to increase stiffness.

*Printing an optimized part using two materials. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

After optimizing the inner and outer designs of the 3D-printed part, the researchers modeled the fused thermoplastic deposition process and evaluated manufacturing parameters. The resulting simulations helped them to accurately predict thermal history, wetting conditions, polymer crystallization, interactions between filaments, and residual stresses and strains. One example is shown below, depicting the plastic strain during the heating and cooling process.

*The fusion and solidification of a disk that is irradiated by a laser beam as well as the resulting plastic strain evolution. This analysis takes Newtonian fluid flow and solid thermomechanical properties into account. Animation courtesy Frédéric Roger.*

The study also investigated the heat and mass transfer within the first two layers of a thin-walled tube. The researchers were then able to analyze the plastic droplet deposition process and identify areas where the filaments reached fusion temperature. The animations of the material deposition study are shown below. They depict a heat source moving along a deposition pattern and heating the filaments up to fusion temperature, ~230°C for ABS droplets. The extruder path domain in the simulations is premeshed and the meshes are continuously activated depending on the extruder’s position.
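The activation idea can be sketched in a few lines: each premeshed path element keeps near-zero properties until the extruder has passed its position, at which point the true material properties switch on. The positions and property values below are illustrative, not taken from the model.

```python
# Sketch of the "activated mesh" idea used for the deposition simulation:
# elements along the extruder path stay effectively "switched off" (near-zero
# properties) until the moving heat source reaches them.
def element_conductivity(x, x_extruder, k_material=0.17, k_void=1e-4):
    """Thermal conductivity [W/(m*K)] of a path element at position x [m].
    k_material approximates ABS; k_void keeps inactive elements inert."""
    activated = x <= x_extruder  # the extruder has already passed x
    return k_material if activated else k_void

# As the extruder advances along a 1 m path, successive elements pick up the
# real conductivity while the rest of the premeshed path stays deactivated.
profile = [element_conductivity(x / 10.0, x_extruder=0.5) for x in range(11)]
```

The same switch would be applied to stiffness and other properties, which is why the deactivated mesh appears blue (near-zero properties) in the animations.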

*Two-layer circular deposition (top). The moving heat source represents the hot ABS deposition. The thermal expansion of the two layers (amplified by a factor of five), showing the moving heat source activating the properties of the material (bottom). Here, blue indicates a nonactivated mesh and the physical properties (thermal conductivity and stiffness) are close to zero. Animations courtesy Frédéric Roger.*

Using these simulations, Roger and his team predicted the temperature field between the filaments during the deposition process, an important factor that affects filament adhesion. Similar analyses could help researchers compare different additive manufacturing conditions and determine the best deposition strategy for a specific application.

Roger says that these simulations enabled his team to “define an additive manufacturing part whose internal and external architectures give it the best possible industrial performance.” Of course, this is only the start of what can be achieved by combining additive manufacturing and multiphysics simulation.

If you have any tips for using COMSOL Multiphysics to study the additive manufacturing process, be sure to let us know in the comments below!

- Read more about the researchers’ work in their paper: “Optimal Design of Fused Deposition Modeling Structures Using COMSOL Multiphysics”

*FDM is a registered trademark of Stratasys, Inc.*

Compared to other heat exchangers, compact heat exchangers have a much larger heat transfer area per volume, usually thanks to dense arrays of plates or tubes. This attribute makes these heat exchangers lighter and more compact than classical heat exchangers. One disadvantage of the smaller heat exchangers is that they have higher pressure drops, which limits the flow rate and thus the amount of heat they can transfer.

*An illustration of a plate-and-frame heat exchanger, a common type of compact heat exchanger.*

In Reference 1, researchers explored whether they could improve the performance of compact heat exchangers by adding a dynamic wall. When the wall deforms, it generates oscillations that help mix the fluid and lessen the thermal boundary layers. As a result, the heat exchanger is able to transfer more heat. In addition, the oscillations generate a pumping effect similar to that of a peristaltic pump. This makes up for pressure losses, increasing the efficiency of the heat exchanger.

Oscillation might be a useful way to enhance the performance of compact heat exchangers. Using COMSOL Multiphysics, we can test this idea by easily creating and examining a model of the dynamic wall heat exchanger…

We start by modeling a static heat exchanger without a dynamic wall. This way, we can compare the results of both heat exchanger designs.

The static heat exchanger geometry consists of an upper wall, bottom wall, and channel. Fluid (water in this case) moves through the channel, steadily increasing in temperature due to a heat flux applied to the bottom wall. At this wall, we set the delivered heat rate to 125 W. Probes at the outlet determine the temperature and mass flow rate of the water when it exits the exchanger.

*The geometry of a static heat exchanger.*

Next, we prescribe a deformation on the upper wall based on the following parameters:

- Time
- Channel height
- Channel length
- Oscillation frequency
- Oscillation amplitude
- Number of waves in the channel length direction
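A plausible form for such a prescribed deformation is a traveling sine wave built from the listed parameters. This is an assumption for illustration, not the exact expression used in the model, and all default values below are placeholders.

```python
import math

def wall_displacement(x, t, L=0.1, H=0.005, A=0.9, n=3, f=50.0):
    """
    Vertical displacement [m] of the upper wall at position x [m], time t [s].
    L: channel length, H: channel height, A: amplitude as a fraction of H,
    n: number of waves along the channel, f: oscillation frequency [Hz].
    A traveling wave moving toward the outlet; the (sin - 1) term keeps the
    wall deflecting into the channel, like a peristaltic pump.
    """
    return 0.5 * A * H * (math.sin(2.0 * math.pi * (n * x / L - f * t)) - 1.0)
```

Because the wave travels along the channel, it both mixes the fluid and pushes it toward the outlet, which is the pumping effect described above.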

*Animation showing the deformation of the dynamic wall.*

For the full details of how to model the dynamic wall heat exchanger, go to the Application Gallery, where you can download the model documentation and MPH-file.

To simulate the heat transfer and oscillation, we couple two built-in features. The first is the *Conjugate Heat Transfer* multiphysics coupling, which enables us to account for the heat transport between the exchanger and the water. We combine that coupling with the *Moving Mesh* feature, which simulates the deformation of the wall and channel.

Let’s look at the results for the static analysis of the heat exchanger. When the upper wall remains flat, we get a mass flow rate of 5.5 g/s and a heat transfer coefficient of 2900 W/(m^{2}·K).

*The temperature profile in the channel for the static heat exchanger.*

Next, let’s look at the time-dependent analysis for the dynamic wall heat exchanger. The oscillation reaches a pseudoperiodic state after around 0.6 seconds. After it enters this regime, the average mass flow rate is 10.5 g/s, nearly double the rate at static conditions. As expected, the heat transfer coefficient is also higher: about 19,000 W/(m^{2}·K) for an oscillation amplitude of 90%.
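These flow rates can be sanity-checked against the 125 W heat input with the bulk energy balance Q = ṁ·c_p·ΔT; the doubled mass flow rate roughly halves the outlet temperature rise. The specific heat value is an assumption for liquid water near room temperature.

```python
# Back-of-envelope check of the bulk water temperature rise implied by the
# 125 W heat input and the reported mass flow rates: dT = Q / (mdot * cp).
CP_WATER = 4186.0  # J/(kg*K), specific heat of water (approximate)
Q_IN = 125.0       # W, heat rate delivered at the bottom wall

def outlet_temperature_rise(mdot):
    """Bulk temperature rise [K] for a mass flow rate mdot [kg/s]."""
    return Q_IN / (mdot * CP_WATER)

dT_static = outlet_temperature_rise(5.5e-3)    # flat wall: about 5.4 K
dT_dynamic = outlet_temperature_rise(10.5e-3)  # oscillating wall: about 2.8 K
```

Note that this only checks the bulk energy balance; the much larger jump in the heat transfer coefficient reflects the thinning of the thermal boundary layers, which a lumped estimate like this cannot capture.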

*Left: The variations in temperature and flow rate. Right: The temperature profile in the channel of the dynamic wall heat exchanger.*

With simulation, it’s possible to analyze and optimize heat exchanger designs for maximum performance and efficiency.


- P. Kumar, K. Schmidmayer, F. Topin, and M. Miscevic, “Heat transfer enhancement by dynamic corrugated heat exchanger wall: Numerical study,” *Journal of Physics: Conference Series*, vol. 745, 2016.

The process of natural convection, also called buoyancy flow or free convection, involves temperature and density gradients that cause a fluid (like air) to move, leading to the transport of heat. Unlike forced convection, no fans or external sources are needed to generate fluid flow — just differences in temperature and density.

Natural convection in air has a wide range of applications in various industries. In the electronics field, this phenomenon dissipates heat in devices, which helps prevent them from overheating. Additionally, structures like solar chimneys and Trombe walls take advantage of this heat transport method to heat and cool buildings. The agricultural industry also depends on natural convection, which helps in the drying and storage of various products.

*Natural convection of air through vertical circuit boards.*

With the COMSOL Multiphysics® software, it is possible to study natural convection in air for both 2D and 3D models. Let’s take a look at one example…

The Buoyancy Flow in Air tutorial shows how to model natural convection in air for two geometries:

- 2D square
- 3D cube

In both cases, all of the edges are insulated except for the left and right sides, which are set to a low and high temperature, respectively. The temperature difference (around 10 K) leads to density gradients in the air, generating buoyancy flow. Note that the cube has front and back walls that the square lacks, which influences how the air flows.

To simplify the model setup, there are a couple of built-in features in COMSOL Multiphysics that we can use. First up is the predefined *Nonisothermal Flow* interface, which couples fluid dynamics and heat transfer in the model. We can also use the Material Library to easily determine the thermophysical properties of air.

Next, we can estimate the flow regime by computing the Grashof, Rayleigh, and Prandtl numbers. The Grashof and Rayleigh numbers suggest that the flow is laminar, with a velocity of around 0.2 m/s. As for the Prandtl number, it indicates that viscosity doesn’t influence the buoyancy of the air and that the shear layer thickness is about 3 mm.
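This estimate can be reproduced in a few lines, using rough property values for air at about 300 K and an assumed 0.1 m cavity size (the property values and cavity dimension are approximations, not taken from the model files).

```python
import math

# Estimating the flow regime for buoyant air with the Grashof, Rayleigh,
# and Prandtl numbers. Properties are rough figures for air near 300 K.
g = 9.81           # m/s^2, gravitational acceleration
beta = 1.0 / 300   # 1/K, thermal expansion coefficient (ideal gas at ~300 K)
nu = 1.6e-5        # m^2/s, kinematic viscosity of air
alpha = 2.2e-5     # m^2/s, thermal diffusivity of air
L = 0.1            # m, cavity side length (assumed)
dT = 10.0          # K, temperature difference between the side walls

Gr = g * beta * dT * L**3 / nu**2   # buoyancy vs. viscous forces
Pr = nu / alpha                     # momentum vs. thermal diffusion
Ra = Gr * Pr                        # cavity flow stays laminar below ~1e9
U = math.sqrt(g * beta * dT * L)    # buoyant velocity scale [m/s]
```

With these numbers, Ra is on the order of 10^6, well below the laminar-turbulent transition, and the velocity scale U comes out near 0.2 m/s, matching the estimate quoted above.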

For more details on estimating the flow regime, download the model documentation from the Application Gallery.

*Note: The Buoyancy Flow in Water tutorial model demonstrates a similar model setup with water instead of air.*

Let’s take a look at the results, starting with the velocity magnitude of air in the 2D square. In the left image below, we see that the velocity increases as the air nears the left and right edges, with a maximum velocity of 0.05 m/s. While this is a bit lower than the estimated velocity calculated using the Grashof and Rayleigh numbers, it is still in the same order of magnitude. Further, the shear layer thickness (3 mm) corresponds with the estimate from the Prandtl number.

*The velocity magnitude (left) and velocity profile (right) of air in the 2D square.*

As shown below, the results for the velocity magnitude in the 3D cube are similar to those for the 2D square.

*Velocity magnitude in the cube.*

Next up, let’s look at the temperature results for the 2D geometry. A single convective cell fills the square, with the air flowing around the edges. We see that the flow of air is faster at the left and right sides, where the temperature differences are the greatest.

*The temperature field in the square.*

The 3D results show a slightly different scenario. There are small convective cells in the cube at the corners of a vertical plane perpendicular to the heated sides. As mentioned, this difference is likely due to how the front and back sides in the cube affect the airflow.

*The temperature and velocity fields in the 3D cube.*

The model geometries in the Buoyancy Flow in Air tutorial are rather simple, but the example provides you with a solid foundation for modeling natural convection in more detailed models that represent real-world applications.

For more details about this example, go to the Application Gallery via the button above. From there, you can download the MPH-file and step-by-step instructions on how to build the model.


During the TPV cell energy production process, fuel burns within an emitting device that intensely radiates heat. Photovoltaic (PV) cells capture this radiation and convert it into electricity, with an efficiency of 1–20%. The required efficiency depends on the intended application of the cell. For example, efficiency is not a major factor when TPVs are used to cogenerate electricity within heat generators. On the other hand, efficiency is critical when TPVs are used as electric power sources for vehicles.

*Left: Simplified schematic depicting the electricity generation process of a TPV. Right: An image of a prototype TPV system. Right image courtesy Dr. D. Wilhelm, Paul Scherrer Institute, Switzerland.*

To improve the efficiency of TPV systems, engineers need to maximize radiative heat transfer, but this comes with a catch: only part of the radiation reaching the PV cell is converted to electric power. The unconverted radiation, along with conductive heat transfer, raises the temperature of the PV cell. If the temperature increases too much, it can exceed the operating temperature range of the PV cell, causing it to stop functioning.
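As a rough illustration of why the emitter temperature matters so strongly, the net radiative exchange between two surfaces scales with the fourth power of temperature. The sketch below uses the simplest gray-surface form with placeholder emissivity and view factor, not values from the model.

```python
SIGMA = 5.670e-8  # W/(m^2*K^4), Stefan-Boltzmann constant

def net_radiative_flux(T_emitter, T_cell, emissivity=0.9, view_factor=1.0):
    """
    Net radiative flux [W/m^2] from emitter to PV cell in the simplest
    gray-surface form, q = eps * F * sigma * (Te^4 - Tc^4). The emissivity
    and view factor are illustrative placeholders.
    """
    return emissivity * view_factor * SIGMA * (T_emitter**4 - T_cell**4)
```

The T^4 dependence means a few hundred kelvins of extra emitter temperature multiplies the flux onto the cell, which is why the operating window between maximum power and cell overheating is so narrow.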

One option for increasing the operation temperature of a TPV system is to use high-efficiency semiconductor materials, which can withstand temperatures up to 1000°C. Since these materials tend to be expensive, engineers can reduce costs by combining smaller-area PV cells with mirrors that focus radiation onto the cells. Of course, there is a limit to how much the beams can be focused, since the cells overheat if the radiation intensity gets too high.

Engineers designing TPV devices need to find optimal system geometries and operating conditions that maximize performance, minimize material costs, and ensure that the device temperature stays within the operating range. Heat transfer simulation can help achieve these design goals.

This example uses the Heat Transfer Module and the *Surface-to-Surface Radiation* interface to determine how operating conditions (e.g., the flame temperature) affect the efficiency of a normal TPV system as well as the temperature of the system’s components. The goal is to maximize surface-to-surface radiative heat fluxes while minimizing conductive heat fluxes. In this model, the effects of geometry changes are also evaluated.

The model geometry includes an emitter, mirrors, insulation, and a PV cell that is cooled by water on its back side. For details on setting up this model — including how to add conduction, surface-to-surface radiation, and convective cooling — take a look at the TPV cell model documentation.

*The TPV system model geometry.*

To minimize the computational costs of the simulation, we use sector symmetry and reflection to reduce the computational domain to one sixteenth of the original geometry. When modeling the surface-to-surface radiation, we expand this view to account for the presence of all of the surfaces in the full geometry.

First, let’s check the voltaic efficiency of the PV cell for a range of cell temperatures. In doing so, we see that the efficiency decreases as the temperature increases. When the temperature of the cell exceeds 1600 K, the efficiency is 0. As such, the maximum operational temperature for the PV cell design is 1600 K.

*Plotting PV cell voltaic efficiency versus temperature.*

In the next plots, we see how the temperature of the emitter affects the temperature of the PV cell and the electric output power. The cell temperature plot (left image below) indicates that the emitter temperature must be under ~1800 K to keep the PV cell below its maximum operating temperature of 1600 K.

Keeping this in mind, let’s take a look at the electric power output results (right image below). From the results, we conclude that the maximum electric power is achieved when the emitter temperature is ~1600 K.

*Plotting PV cell temperature (left) and electric output power (right) against operating temperature.*

Moving on, let’s examine the temperature distribution in the PV cell for the optimal operating condition (left image below) and compare it to a temperature that exceeds this operating temperature (right image below). The two plots highlight how the device’s temperature distribution varies due to operating conditions.

*The stationary temperature distribution in the full TPV system when the emitter temperature is 1600 K (left) and 2000 K (right).*

Looking closer at the plot of the optimal emitter temperature of 1600 K, we see that the PV cells are heated to a sustainable temperature of slightly above 1200 K. It is important to note that the outside part of the insulation reaches a temperature of 800 K, indicating that a large amount of heat is transferred to the surrounding air. In addition, the irradiative flux significantly varies around the PV cell circumference and insulation jacket.

To determine the cause of this variation, we generate a plot of the irradiative flux for a single sector of symmetry at a temperature of 1600 K. The graph indicates that the variation is caused by shadowing and is related to the mirror positions. Using this plot, we could optimize the cell size and placement of the mirrors for a PV design.

*The irradiation flux at the TPV cell, insulation inner surface, mirrors, and emitter.*

Using models like the one discussed here, engineers can efficiently find optimal operating conditions for TPV devices, minimizing prototype development and testing.

To try this TPV cell example yourself, download the model files above.


Modeling the transport of heat and moisture through porous materials, or from the surface of a fluid, often involves including the surrounding media in the model in order to get accurate estimates of the conditions at the material surfaces. In the investigations of hygrothermal behavior of building envelopes, food packaging, and other common engineering problems, the surrounding medium is probably moist air (air with water vapor).

*Moist air is the surrounding medium for applications such as building envelopes (illustration, left) and solar food drying (right). Right image by ArianeCCM — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

When considering porous media, the moisture transport process, which includes capillary flow, bulk flow, and binary diffusion of water vapor in air, depends on the nature of the material. In moist air, moisture is transported by diffusion and advection, where the advecting flow field in most cases is turbulent.

Computing heat and moisture transport in moist air requires the resolution of three sets of equations:

- The Navier-Stokes equations, to compute the airflow velocity field and pressure
- The energy equation, to compute the temperature
- The moisture transport equation, to compute the relative humidity

These equations are coupled together through the pressure, temperature, and relative humidity, which are used to evaluate the properties of air (density, viscosity, thermal conductivity, and heat capacity) and the molecular diffusivity of water vapor, and through the velocity field used for convective transport.
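A minimal sketch of how such a mixture evaluation can look for the density, treating dry air and water vapor as an ideal-gas mixture, with the Magnus formula approximating the saturation pressure. This is a simplified stand-in for the mixture formulas the software evaluates internally, not the exact implementation.

```python
import math

# Mixture density of moist air from dry-air and water-vapor partial
# pressures, as a function of pressure, temperature, and relative humidity.
R_AIR = 287.0   # J/(kg*K), specific gas constant of dry air
R_VAP = 461.5   # J/(kg*K), specific gas constant of water vapor

def saturation_pressure(T):
    """Approximate saturation pressure of water [Pa] (Magnus formula)."""
    T_c = T - 273.15
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

def moist_air_density(p, T, phi):
    """Density [kg/m^3] at pressure p [Pa], temperature T [K],
    relative humidity phi (0 to 1)."""
    p_v = phi * saturation_pressure(T)  # water-vapor partial pressure
    p_a = p - p_v                       # dry-air partial pressure
    return p_a / (R_AIR * T) + p_v / (R_VAP * T)
```

Since water vapor is lighter than dry air, the density decreases slightly with relative humidity, and it is exactly this humidity dependence of the fluid properties that couples the moisture transport equation back into the flow.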

With the addition of the *Moisture Flow* multiphysics interface in version 5.3a, COMSOL Multiphysics defines all three of these equations in a few steps, as shown in the figure below.

*Single-physics interfaces and multiphysics couplings for the coupled resolution of single-phase flow, heat transfer, and moisture transport in building materials and moist air.*

Whenever studying the flow of moist air, two questions should be asked:

- Does the flow depend on moisture distribution?
- Does the nature of the flow require the use of a turbulence model?

If the answer is “yes” for at least one of these questions, then you should consider using the *Moisture Flow* multiphysics interfaces, found under the *Chemical Species Transport* branch.

*The* Moisture Flow *group under the* Chemical Species Transport *branch of the* Physics Wizard*, with the single-physics interfaces and coupling node added with each version of the* Moisture Flow *predefined multiphysics interface.*

The *Laminar Flow* version of the multiphysics interface combines the *Moisture Transport in Air* interface with the *Laminar Flow* interface and adds the *Moisture Flow* coupling. Similarly, each version under *Turbulent Flow* combines the *Moisture Transport in Air* interface and the corresponding *Turbulent Flow* interface and adds the *Moisture Flow* coupling.

Besides providing a user-friendly way to define the coupled set of equations of the moisture flow problem, the multiphysics interfaces for turbulent flow handle the moisture-related turbulence variables required for the fluid flow computation.

One advantage of using the *Moisture Flow* multiphysics interface is its usability. When adding the *Moisture Flow* node through the predefined interface, an automatic coupling of the Navier-Stokes equations is defined for the fluid flow and the moisture transport equations by the software (center screenshot in the image below) by using the following variables:

- The density and dynamic viscosity in the Navier-Stokes equations, which depend on the relative humidity variable from the *Moisture Transport* interface through a mixture formula based on dry air and pure steam properties (left screenshot below)
- The velocity field and absolute pressure variables from the *Single-Phase Flow* interface, which are used in the moisture transport equation (right screenshot below)

*User interfaces of the* Moisture Flow *coupling,* Fluid Properties *feature (*Single-Phase Flow *interface), and* Moist Air *feature (*Moisture Transport in Air *interface).*
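To give a feel for what such a mixture formula involves, here is a minimal sketch of the moist air density as a function of temperature, pressure, and relative humidity, assuming ideal gas behavior and a Buck-type saturation pressure. It is an illustration of the idea, not COMSOL's exact implementation:

```python
import math

R = 8.314        # J/(mol*K), universal gas constant
M_a = 0.028964   # kg/mol, molar mass of dry air
M_v = 0.018015   # kg/mol, molar mass of water vapor

def saturation_pressure(T):
    """Approximate saturation pressure of water vapor in Pa (Buck equation).
    T in kelvin; valid roughly between 0 and 100 degC."""
    Tc = T - 273.15
    return 611.21 * math.exp(Tc * (18.678 - Tc / 234.5) / (Tc + 257.14))

def moist_air_density(T, p, phi):
    """Mixture density from partial pressures: rho = (p_a*M_a + p_v*M_v)/(R*T)."""
    p_v = phi * saturation_pressure(T)  # vapor partial pressure
    p_a = p - p_v                       # dry-air partial pressure
    return (p_a * M_a + p_v * M_v) / (R * T)

# Humid air is slightly lighter than dry air at the same T and p,
# because water vapor has a lower molar mass than dry air.
rho_dry = moist_air_density(300.0, 101325.0, 0.0)
rho_humid = moist_air_density(300.0, 101325.0, 0.8)
print(rho_dry, rho_humid)
```

This dependence of density on relative humidity is exactly what makes the flow and moisture transport equations two-way coupled.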

The performance of the *Moisture Flow* multiphysics interface is especially attractive when dealing with a turbulent moisture flow.

For turbulent flows, the turbulent mixing caused by the eddy diffusivity in the moisture convection is automatically accounted for by the COMSOL® software by enhancing the moisture diffusivity with a correction term based on the turbulent Schmidt number. The Kays-Crawford model is the default choice for the evaluation of the turbulent Schmidt number, but a user-defined value or expression can also be entered directly in the graphical user interface.

*Selection of the model for the computation of the turbulent Schmidt number in the user interface of the* Moisture Flow *coupling.*
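As a sketch of what this looks like, the snippet below evaluates a Kays-Crawford-type correlation. The correlation is usually stated for the turbulent Prandtl number; here the mass-transfer analogue is assumed, with the Schmidt number in place of the Prandtl number. Treat it as an illustration of the correlation's limiting behavior, not as COMSOL's implementation:

```python
import math

def schmidt_turbulent_kays_crawford(nu_t_over_nu, Sc, Sc_T_inf=0.85):
    """Turbulent Schmidt number from a Kays-Crawford-type correlation
    (mass-transfer analogue of the heat-transfer form, an assumption here).
    nu_t_over_nu is the ratio of turbulent to molecular kinematic viscosity."""
    Pe_t = nu_t_over_nu * Sc  # turbulent Peclet number for species transport
    a = 0.3 * Pe_t
    inv = (1.0 / (2.0 * Sc_T_inf)
           + a / math.sqrt(Sc_T_inf)
           - a**2 * (1.0 - math.exp(-1.0 / (a * math.sqrt(Sc_T_inf)))))
    return 1.0 / inv

def effective_diffusivity(D, nu_t, Sc_T):
    """Moisture diffusivity enhanced by turbulent mixing: D_eff = D + nu_t/Sc_T."""
    return D + nu_t / Sc_T

# Far from walls (large nu_t) the correlation approaches Sc_T_inf = 0.85;
# at vanishing turbulence it approaches 2*Sc_T_inf = 1.7.
print(schmidt_turbulent_kays_crawford(100.0, 0.6))
print(schmidt_turbulent_kays_crawford(0.01, 0.6))
```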

In addition, for coarse meshes that may not be suitable for resolving the thin boundary layer close to walls, *Wall functions* can be selected or automatically applied by the software. The wall functions assume that the computational domain starts at a distance from the wall, the so-called lift-off position, corresponding to the distance from the wall where the logarithmic layer meets the viscous sublayer (or would meet it if there were no buffer layer in between). The moisture flux at the lift-off position, which accounts for the flux to and from the wall, is automatically defined by the *Moisture Flow* interface, based on the relative humidity.

*Approximation of the flow field and the moisture flux close to walls when using wall functions in the turbulence model for fluid flow.*

Note that the *Low-Reynolds* and *Automatic* options for *Wall Treatment* are also available for some of the RANS models.

For more information, read this blog post on choosing a turbulence model.

By using the *Moisture Flow* interface, mass conservation is properly enforced in the fluid flow problem by the *Screen* and *Interior Fan* boundary conditions. A continuity condition is also applied to the vapor concentration at boundaries where the *Screen* feature is applied. For the *Interior Fan* condition, the mass flow rate is conserved in an averaged sense and the vapor concentration is homogenized at the fan outlet, as shown in the figure below.

*Average mass flow rate conservation across a boundary with the* Interior Fan *condition.*

Let’s consider evaporative cooling at the water surface of a glass of water placed in a turbulent airflow. The *Turbulent Flow, Low Reynolds k-ε* interface, the *Moisture Transport in Air* interface, and the *Heat Transfer in Moist Air* interface are coupled through the *Nonisothermal Flow*, *Moisture Flow*, and *Heat and Moisture* coupling nodes. These couplings compute the nonisothermal airflow passing over the glass, the evaporation from the water surface with the associated latent heat effect, and the transport of both heat and moisture away from this surface.

By using the *Automatic* option for *Wall treatment* in the *Turbulent Flow, Low Reynolds k-ε* interface, wall functions are used if the mesh resolution is not fine enough to fully resolve the velocity boundary layer close to the walls. Convective heat and moisture fluxes at lift-off position are added by the *Nonisothermal Flow* and *Moisture Flow* couplings. The temperature and relative humidity solutions after 20 minutes are shown below, along with the streamlines of the airflow velocity field.

*Temperature (left) and relative humidity (right) solutions with the streamlines of the velocity field after 20 minutes.*

The temperature and relative humidity fields strongly resemble each other here, which is natural, since the fields are strongly coupled and both transport processes have similar boundary conditions in this case. In addition, heat transfer occurs by conduction and advection, while mass transfer occurs by diffusion and advection. The two transport processes originate from the same physical phenomena: conduction and diffusion arise from molecular interactions in the gas phase, while advection is driven by the bulk motion of the fluid. Also, the contributions of the eddy diffusivity to the turbulent thermal conductivity and to the turbulent diffusivity originate from the same physical phenomenon, which adds further to the similarity of the temperature and moisture fields.

Learn more about the key features and functionality included with the Heat Transfer Module, an add-on to COMSOL Multiphysics:

Read the following blog posts to learn more about heat and moisture transport modeling:

- How to Model Heat and Moisture Transport in Porous Media with COMSOL®
- How to Model Heat and Moisture Transport in Air with COMSOL®

Get a demonstration of the *Nonisothermal Flow* and *Heat and Moisture* couplings in these tutorial models:

Canadian Nuclear Laboratories (CNL) aims to improve nuclear fuel, since the fuel currently in use limits the efficiency of power generation in nuclear reactors. As Andrew Prudil said in his keynote talk: “If we can increase the power rating of the reactors, that’s worth millions of dollars per day.” Optimized nuclear fuel also enables more green energy on a power grid and reduces the risk of nuclear accidents. Plus, the improved fuel can be used in existing reactors to enhance their performance.

Before engineers can develop improved nuclear fuel, they have to understand its behavior. This is no simple matter, as nuclear fuel experiences multiple physical phenomena during a fission reaction, including high temperatures, radiation, mechanical loading, thermal expansion, the creation of fission products such as xenon and krypton, and more.

To learn more about nuclear fuel behavior during a reaction, in which “everything depends on everything else,” CNL turned to the COMSOL Multiphysics® software.

First, Prudil discussed a multiphysics model — created for his PhD thesis — that studies the behavior of nuclear fuel (or pellets, in this case). The Fuel and Sheath Modeling Tool (FAST) simulates a long row of pellets separated by small gaps inside a metal sheath. Each part of the model involves multiple types of physics. For instance, sheaths in nuclear reactors typically use zirconium-based alloys, which consist of anisotropic crystal structures. For accurate results, the model must account for how the crystals behave when pulled in different directions.

*From the video: Results for the FAST simulations.*

The simulations show how the ends of the pellets push outward to make room for the hot material at the center. The “hourglassing” phenomenon causes the ends of the pellets to create a wavy pattern in the cladding (exaggerated in the image above). FAST can also plot the radial displacement and various stress and strain fields, such as the hydrostatic pressure, von Mises stress, and axial creep. Prudil noted that the results show “very interesting, very rich spatial fields.”

With FAST, it’s possible to look at how nuclear fuel behaves in a continuum — both in terms of a temperature gradient and mechanical loading.

Prudil then discussed a model created at Canadian Nuclear Laboratories that simulates how fission gas forms bubbles on the boundary of a single grain of uranium oxide, a process involving fission gas products such as xenon and krypton. At the grain boundary, these insoluble gases relieve pressure by forming bubbles. The bubbles grow larger and larger and eventually let the gases escape.

The CNL model simulates this process for individual bubbles. Instead of using the traditional phase field method, which can be computationally expensive, they created the included phase technique to model the phase interface.

*Simulation results for the included phase technique. Animation courtesy of Andrew Prudil; it can be found in the paper “A novel model of third phase inclusions on two phase boundaries”.*

Initially, the simulations show a random distribution of bubbles on the grain boundary. As time progresses, the bubbles combine to minimize the surface energy before collecting at the edges and vertices. CNL validated their approach, determining that they could control the contact angle of a single bubble on an infinite plane.

Wrapping up, Prudil mentioned that COMSOL Multiphysics could also be used to investigate other interesting multiphysics phenomena (e.g., columnar grain growth). With these capabilities, engineers can learn more about nuclear fuel and continue to advance the field.

To learn more about how CNL uses multiphysics modeling to understand the behavior of nuclear fuel, watch the keynote video at the top of this post.


Before we do anything else, let’s pour some 90°C coffee into a vacuum flask and consider the material properties of the model.

Materials involved:

- The coffee is represented using material properties of water
- The screw topper and insulation ring are both made of nylon
- The flask is made up of two stainless steel walls with a plastic foam filler in between (the gap in vacuum flasks is usually evacuated, but it may also contain foam)

All material properties except for the foam filler can be pulled directly from the Material Library in the COMSOL Multiphysics® software. As always, when using COMSOL Multiphysics, you can add special material properties manually into the software. In the case of the foam in this example, you would enter the following values:

- Conductivity: 0.03 W/(m·K)
- Density: 60 kg/m^{3}
- Heat capacity: 200 J/(kg·K)

Tip: The modeling approaches mentioned here are both covered in the Natural Convection Cooling of a Vacuum Flask tutorial model. Please refer to the tutorial MPH-file and accompanying documentation to see exactly how to set up and solve this model, because we won’t go into detail in this blog post.

For a quick and simple model, you can describe the thermal dissipation using predefined heat transfer coefficients. This method should help us determine how the coffee cools over time inside the vacuum flask. It’s simple because it *won’t* tell us anything about the flow behavior of the air around the flask, and it’s useful because it *will* show us the cooling power over time.

Instead of computing heat transfer and flow velocity in the fluid domain, you would simply model the heat flux on the external boundary of the vacuum flask, defined from the heat transfer coefficient, the surface temperature, and the ambient temperature (25°C; a little warmer than standard room temperature):

*q* = *h*(*T*_{∞} − *T*)

There are many predefined cases where *h* is known with high accuracy. The Heat Transfer Module (an add-on to COMSOL Multiphysics) includes a library of heat transfer coefficients for easy access.

Another time-saver with this method is the fact that you can avoid predicting whether the flow is turbulent or laminar, because many correlations are valid for a wide range of flow regimes. As long as you use the appropriate *h* correlations, you can typically arrive at accurate results at a very low computational cost with this method.
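To make the first approach concrete, here is a minimal sketch of what such a lumped model amounts to: an energy balance for the coffee with a constant effective heat transfer coefficient. The property values are rounded, illustrative assumptions, not the calibrated parameters of the tutorial model:

```python
import math

# Illustrative, assumed values -- not the COMSOL tutorial's parameters.
h = 1.0        # W/(m^2*K), effective heat transfer coefficient of the flask
A = 0.05       # m^2, outer surface area
m = 0.5        # kg of coffee (modeled as water)
cp = 4186.0    # J/(kg*K), heat capacity of water
T_inf = 25.0   # degC, ambient temperature
T0 = 90.0      # degC, initial coffee temperature

def coffee_temperature(t_seconds):
    """Lumped energy balance m*cp*dT/dt = -h*A*(T - T_inf); its closed-form
    solution is T(t) = T_inf + (T0 - T_inf)*exp(-t/tau) with tau = m*cp/(h*A)."""
    tau = m * cp / (h * A)  # time constant in seconds
    return T_inf + (T0 - T_inf) * math.exp(-t_seconds / tau)

print(coffee_temperature(10 * 3600))  # coffee temperature after 10 hours
```

With these assumed numbers, the coffee stays in the 50-60°C range for many hours; the tutorial's calibrated coefficients are what make the prediction quantitative.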

What about the second approach? It’s worth considering how the cooling power is distributed on the flask surface as the coffee cools down. To do so, you need to include surrounding fluid flow in the model.

To get a more complete picture of what’s going on with our precious java (seriously, when can I drink it?), we could create a more detailed model of the convective airflow outside the vacuum flask.

Taking the second approach calls for using the *Gravity* feature available in the *Single-Phase Flow* interface with the Heat Transfer Module or the CFD Module, which allows you to include buoyancy forces in the model. Typically, you would first need to figure out whether the flow is laminar or turbulent before following this modeling approach. For the sake of brevity here, let’s skip ahead: we know from the tutorial model documentation that the flow is laminar in this case.

The detailed model shows that the warm flask drives vertical air currents along its walls. The currents eventually combine in a thermal plume above the flask and air in the surrounding area is pulled toward the flask, feeding into the vertical flow. (This flow is weak enough that there are no significant changes in dynamic pressure.)

The vortex that forms above the flask’s lid reduces the cooling in that region — something you can’t tell from the first method. In essence, the fluid flow model is better at describing local cooling power than the simple method with the approximated heat transfer coefficient.

So how long will the coffee stay warm in the vacuum flask? Many coffee drinkers like to stay within the range of 50–60°C (roughly 120–140°F), because it’s supposedly when the “coffee notes shine.” Both methods suggest that after 10 hours inside the flask, the coffee will be about 54°C, which is still within the enjoyable range. Of course, if we were to bring the flask outside in cooler temperatures than the assumed 25°C, the coffee would cool down quicker.

*A plot of the coffee temperature over time for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.*

Though both modeling approaches give very similar results in terms of the coffee temperature over time, it’s a different story when looking at the flask surface’s cooling power:

*A plot of the heat transfer coefficient for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.*

For fast *and* accurate results in the long run, you can combine the two approaches. After setting up the more detailed model, you can create and calibrate functions for heat transfer coefficients to use later, via the simpler approach for solving large-scale and time-dependent models.

We saw that there are two different ways to model the convective cooling of coffee inside a vacuum flask over time. The detailed approach is more computationally demanding, as it combines heat transfer and fluid flow, but it’s also more accurate in the sense that it accounts for local effects. By combining both methods, you can save time in the future.

Try it yourself by downloading the tutorial model from the online Application Gallery or within the Application Library inside the COMSOL Multiphysics software. If you have any questions about this model or the COMSOL Multiphysics software, please contact us.


Active thermal control systems in buildings help keep people comfortable in extreme weather conditions. For example, these systems can maintain a steady temperature indoors in places that are “summer dominant”, which are often very warm during the day and quite cool at night. However, active thermal control systems consume a lot of energy. They are also expensive to run around the clock, making them a less-than-ideal solution for keeping a steady interior temperature in such climates.

Passive thermal control systems are more efficient. One advantage of these systems is that they can run on less energy. They can also work with active thermal control systems, which minimizes the energy needed to maintain an even temperature. Further, passive thermal control systems can be included in a building during construction, helping to lower energy consumption costs over the lifetime of the structure.

*A bedroom in the process of being remodeled. The walls are covered in a standard plaster.*

Using phase change materials (PCMs) in building elements is one method of passive thermal control. As PCMs change phase, they absorb or release latent heat within a small temperature range, depending on whether they are melting or solidifying. In doing so, they increase the thermal inertia of a building. Since PCMs can provide or store heat as needed, using them in buildings is particularly advantageous in areas with large daily temperature changes. This ability is less useful in climates that are constantly hot, as the PCM then simply stays hot.

To optimize PCMs for use in buildings, a research team from the Frederick Research Center, Cyprus, and University of Cyprus used numerical simulation.

To study the thermal performance of a PCM in a building element, the researchers created a 3D model consisting of a concrete block with a layer of plaster. They added a PCM to the plaster in different weight amounts:

- 5%
- 10%
- 20%

It is important that the PCM can adjust to the temperature changes in hot climates. For this reason, Micronal 5038X was used as the PCM, as it has a melting temperature of 26°C. To see how effective the PCM is in managing temperature, the model also includes a reference plaster to act as a comparison.

*The geometry of the building element. Image courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich paper.*

The model captures the behavior of the PCM with the *Heat Transfer with Phase Change* interface. The interface helps predict how these types of materials behave when they change phase. In this case, the researchers computed the temperature evolution in the plaster with Micronal 5038X throughout the simulation.
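The core idea behind such a formulation is the apparent heat capacity method: the latent heat is smeared over a small transition interval around the melting point, so the solver sees a single temperature-dependent heat capacity. The sketch below illustrates the idea with a smooth phase indicator; the property values are assumptions for illustration, not the paper's data or COMSOL's exact implementation:

```python
import math

def apparent_heat_capacity(T, T_melt=26.0, dT=2.0,
                           cp_solid=2000.0, cp_liquid=2200.0, L=110e3):
    """Effective heat capacity in J/(kg*K) with a smooth latent-heat peak.
    L is the latent heat of fusion, distributed over the interval dT."""
    # Liquid fraction ramps smoothly from 0 to 1 across the transition interval
    alpha = 0.5 * (1.0 + math.tanh(4.0 * (T - T_melt) / dT))
    cp_sensible = (1.0 - alpha) * cp_solid + alpha * cp_liquid
    # d(alpha)/dT integrates to 1 over T, so the peak adds exactly L of heat
    dalpha_dT = (2.0 / dT) / math.cosh(4.0 * (T - T_melt) / dT) ** 2
    return cp_sensible + L * dalpha_dT

# Far below the melting point the material behaves like the solid,
# far above it like the liquid, with a sharp capacity peak near 26 degC.
print(apparent_heat_capacity(10.0), apparent_heat_capacity(26.0), apparent_heat_capacity(40.0))
```

The large peak near the melting temperature is exactly what gives the PCM-enhanced plaster its extra thermal inertia over a narrow temperature band.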

The exterior temperatures in the model are based on measured temperatures averaged per hour. Using real temperatures increases the accuracy of the simulation, providing a more realistic look at the thermal performance of the PCM in a specific climate.

Note that you can access meteorological data for over 6000 different areas in COMSOL Multiphysics. To learn how, check out this blog post on thermal modeling of the airflow in and around a house.

Let’s look at how the simulation results compare with experimental data. As we can see below, the temperature peaks and their timing for the reference plaster and the plaster with 5% PCM both show good agreement. Based on these results, the models of the plasters with 10% and 20% PCM can also be considered validated.

*Comparison of the simulation and experimental results for the reference plaster and plaster with 5% PCM. Image courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich paper.*

When looking at the thermal performance of the different plasters, we can see that the higher the percentage of PCM in the plaster, the better the result. Using Micronal 5038X minimizes the temperature variation inside a building over a 24-hour period. Further, it takes longer for the building to reach its maximum and minimum temperatures.

Another useful aspect of the PCM is that it can adjust to different daytime and nighttime temperatures. In 24 hours, the PCM is able to change phase twice.

*Left: The thermal performance of plasters containing different percentages of PCM. Right: The temperature distribution in a building element that uses a PCM-enhanced plaster. Images courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich presentation.*

Based on the researchers’ simulation results, validated with experimental data, the novel PCM-enhanced plaster can be optimized for use in buildings located in hot climates.

- Read the researchers’ full paper: Numerical Heat Transfer Analysis of a Phase Change Material (PCM) – Enhanced Plaster
- Check out these blog posts on how to model phase change: