Cam-follower mechanisms are categorized based on the input and output motion of their configurations. The different types of cam-follower mechanisms are described below.

When a follower moves along a guide while a cam rotates, the motion is categorized as a rotating cam and translating follower. This is further categorized based on the motion of the follower. If the motion is along the axis passing through the center of rotation of the cam, then it is called a radial inline follower, whereas if the motion is along an offset from the axis, it is called a radial offset follower.

*Animation showing the displacement and velocity plot of the radial inline follower.*

*Animation showing the displacement and velocity plot of the radial offset follower.*

When the rotary motion of the cam is converted into oscillatory motion, the configuration is known as a rotating cam and oscillatory follower.

*Animation showing an example of an oscillatory cam and follower.*

In a wedge cam configuration, both the cam and follower exhibit translational motion. This means that the motion of the follower is due to the profile height of the translating cam.

*Animation showing a displacement plot of a wedge cam.*

There are cases in which the cam is stationary and the follower traces the profile of the cam. This type of arrangement is classified as a stationary cam and moving follower.

*Animation showing displacement plot of a stationary cam and moving follower.*

A point follower is a mechanism in which a pin on a follower slides in a slot. The slot can be of any profile.

*Animation showing a displacement plot of a pin moving in a curved slot.*

There are many ways to transform one type of motion into another. The variety of the cam-follower mechanism is limited only by your own imagination. For example, you can use a combination of the above configurations to generate a combined effect. A few common examples are barrel cams and end cams, which are used to convert rotary motion into translational and oscillatory motion.

*Animation showing the combined translational and oscillatory motion of a barrel cam.*

The *Cam-Follower* joint type, available with the Multibody Dynamics Module as of COMSOL Multiphysics® software version 5.3a, is used to model applications in which a point follows a surface. In other words, the contact is unique and occurs at a single point.

Usually, the active component is called the *cam* and the passive component is called the *follower*, but in the COMSOL® software, this feature is built in such a way that you can model both components as either active or passive.

With the default settings, the cam and follower will always remain in contact, which means that no chattering is allowed. To model intermittent contact, the *Activation Conditions* feature under the *Cam-Follower* joint type can be used. In addition, the cam and follower can both be modeled as rigid or flexible components.

In order to ensure that the point on the follower always follows the specified cam surface, an offset from the cam boundary is defined such that the gap distance is always zero. The implementation of these constraints follows this procedure:

- Based on the selected cam surface, a search algorithm is used to find the closest point on the cam surface with respect to the specified follower point
- An internal gap variable is defined from the difference of the absolute position vectors of the follower point and the closest point on the cam, and this variable is constrained to a predefined offset

*The* Equation *and* Sketch *for the* Cam-Follower *feature in COMSOL Multiphysics with the add-on Structural Mechanics Module and Multibody Dynamics Module.*

If you are not interested in the details of the formulation, you can skip to the next section.

To understand the formulation, let’s take a look at an example of a cam and roller configuration in which both the cam and roller have been modeled as rigid components. For a rigid body, a point on the body can be located from the origin of a body-fixed (local) coordinate system (X, Y) by a vector *S*. Since the point is fixed to the body, the elements of the vector are constant in the body-fixed coordinate system, while in the global coordinate system, they vary with the rotation. The transformation from the local to the global orientation is represented in terms of a rotation matrix:

S_{Global}=[R] S_{Local}

*Schematic of the cam and roller configuration.*

Now, the absolute position of point (P) on body (i) can be represented as follows:

r_p^i=r_c^i+[R] S_{Local}

Here, the position of point (c) is computed using the center of rotation of the rigid body and it also serves as the origin for the body-fixed coordinate system.

- *x* = Global *x*-axis
- *y* = Global *y*-axis
- *r ^{A}* = Absolute position of the center of rotation of body A
- *s ^{A}* = Global position vector for point (g) on body A with respect to the center of rotation
- *r ^{B}* = Absolute position of the center of rotation of body B
- *s ^{B}* = Global position vector for point (h) on body B with respect to the center of rotation
- *R ^{A}* = Absolute position of point (g) with respect to the spatial coordinate system
- *R ^{B}* = Absolute position of point (h) with respect to the spatial coordinate system

So,

R^A=r^A+s^A=r^A+[R](s^A)_{Local}

R^B=r^B+s^B=r^B+[R](s^B)_{Local}

As the common contact point is not fixed with respect to the bodies (A) and (B), a constraint needs to be defined between the two bodies to maintain continuous contact.

\vec{d}=R^A-R^B

In COMSOL Multiphysics, the constraint is defined on the vector d such that its magnitude in the normal direction of the closest point on the cam surface is equal to the offset value.

\vec{d}\boldsymbol{\cdot}\vec{n}-x_{offset}=0
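For a simple circular cam, the closest-point search and the gap constraint above can be written down analytically. The following Python sketch is purely illustrative (it is not COMSOL's internal implementation, and the function name and circular-cam assumption are ours): it finds the closest point on the cam circle, takes the outward normal there, and evaluates the residual d·n − x_offset, which is zero when contact is maintained.

```python
import math

def cam_follower_gap(px, py, cx, cy, cam_radius, offset=0.0):
    """Gap residual d.n - offset for a circular cam (illustrative sketch).

    For a circle, the closest point to an external point P lies on the line
    from the cam center to P, so the search step is analytic here.
    """
    # Vector from the cam center to the follower point
    dx, dy = px - cx, py - cy
    dist = math.hypot(dx, dy)
    nx, ny = dx / dist, dy / dist            # outward normal at the closest point
    # Closest point on the cam circle
    qx, qy = cx + cam_radius * nx, cy + cam_radius * ny
    # d = R^A - R^B, projected on the normal
    d_dot_n = (px - qx) * nx + (py - qy) * ny
    return d_dot_n - offset

# Follower point 0.5 units above a unit-radius cam surface:
print(cam_follower_gap(0.0, 1.5, 0.0, 0.0, 1.0))  # → 0.5
```

With a nonzero offset, the residual vanishes when the follower point sits exactly one offset distance above the cam surface, which is the constraint the joint enforces.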

For our example, let’s consider a spring-loaded valve-opening mechanism that has a rocker arm and a radial cam. In this mechanism, the cam rotation is prescribed and a spring is attached to the valve to restrict its motion.

*Geometry of a valve-opening mechanism with a rocker arm and radial cam.*

The objective of this analysis is twofold:

- To compute the displacement, velocity, and acceleration of the follower for a given cam shaft RPM
- To compute the cam-follower connection force as well as the torque required to rotate the cam shaft at a given RPM for different values of valve spring stiffness

One of the main objectives behind mounting the spring on the valve is to force the follower to follow the cam profile and to avoid intermittent contacts between the cam and follower. Hence, the optimal value of the valve spring stiffness should enforce the contact between the cam and follower at all times, while simultaneously requiring the least torque to rotate the cam shaft.

*Animation showing a displacement plot of the radial cam.*

The follower velocity first increases when the follower comes in contact with the opening flank region of the cam profile. Later, it decreases and becomes zero at the tip of the nose region. Similarly, it increases in the reverse direction and becomes zero when the follower comes in contact with the heel region of the cam profile. In COMSOL Multiphysics, the sign convention for the joint’s velocity and acceleration is decided by the specified joint axis: the value is positive if the destination attachment moves along the positive direction of the joint axis. In this case, the joint axis is defined along the *z*-axis, so the velocity is positive when the follower moves upward and negative when it moves downward.

*Variation of the follower velocity (left) and follower acceleration (right) with cam rotation.*

You can see that the acceleration values are negative in the range of 60° to 120° of cam rotation. This is the region where the follower has a tendency to leave the contact with the cam profile, which depends on the valve spring stiffness value for a given camshaft RPM. By plotting the connection force vs. cam rotation, you can check which spring ensures continuous contact. The contact force sign convention is positive if the cam and follower are no longer in contact, while the negative value shows that the contact is still maintained.

*Variation of the cam-follower connection force with cam rotation (left) and torque required to rotate the cam shaft (right) for different values of the valve spring stiffness.*

Out of the four valve stiffness values, only 20 kN/m and 30 kN/m enforce continuous contact between the cam and follower. To choose the optimal value of the valve stiffness, we can look at the required torque plot, which shows that the required torque is lower for 20 kN/m. Hence, this is the optimal valve stiffness among the values considered in this analysis.

One common problem in cam design is determining the cam profile that generates a desired follower motion. In COMSOL Multiphysics, you can easily create a geometry based on the follower rise function, which is defined as the displacement of the follower as a function of the cam rotation. The first step is to express the cam radius in terms of the follower rise function. This relationship can be established analytically when the follower rise function is known in advance. The analytical approach is simpler than the graphical approach of constructing the cam surface, because the follower rise function is usually a combination of elementary functions. Moreover, in many cases, the desired output motion of the follower is already known, which makes it straightforward to generate the cam profile from the known follower rise function.

*Schematic representing the variation of the cam profile with the cam angle.*

- *R _{min}* = Radius of the base circle
- *θ* = Angle of rotation of points on the cam surface
- *r* = Radius of points on the cam surface
- *h* = Height of the follower above its initial position

From the above image, it is clear that the radius (*r*) and follower height (*h*) are functions of the angle of rotation and the follower rise function is simply a difference of cam profile from the base circle. The relation between them is defined as follows:

h=r-R_{min}

r=h+R_{min}

Now, if the follower rise function is known, the variation of the radius (*r*) will represent a circle for each value of *θ*. So, it will give a family of curves. In order to generate the cam profile, we need to plot the envelope of the curves. This can be easily done in COMSOL Multiphysics using the *Parametric Curve* option.

x=r\cos(\theta)\hspace{0.5cm};\hspace{0.5cm}y=r\sin(\theta)

Let’s take a simple example to illustrate this concept. Consider a simple knife edge radial follower with a known follower rise and a cam angle rotation. The rise function is such that there is outstroke during 60° of cam rotation, dwell for the next 30° of cam rotation, return stroke during the next 60° of cam rotation, and dwell for the remaining 210° of cam rotation.

*Follower height as a function of the cam angle. (Follower rise function.)*

First, you need to create an interpolation function and enter the data for *h* vs. *θ*: the follower rise function. Thanks to the flexibility of the COMSOL® software, you can directly import the follower rise function. After importing it, the cam surface can be generated using a parametric curve. To do so, first determine the base circle radius and then express the radius as a function of the follower rise function and the base circle radius. The parametric form is similar to that of a circle; the only difference is that the radius is now a function of *θ*. In COMSOL Multiphysics, you can use the interpolation function to define the follower rise function and then use it in the parametric curve to define *r* (the radius of the cam surface). Since the data is usually a piecewise curve, it is good practice to create separate profile curves for each section of outstroke, return stroke, and dwell. Finally, use the *Convert to solid* option to generate the cam profile.
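The same construction can be sketched outside of the COMSOL environment. The Python snippet below is an illustration only: the simple harmonic rise law, the base circle radius, and the lift height are assumed values, not taken from the model above. It builds a piecewise rise function with the 60°/30°/60°/210° segments and evaluates the parametric curve x = r cos(θ), y = r sin(θ) with r = R_min + h(θ).

```python
import math

R_MIN = 25.0   # base circle radius [mm] -- assumed value
H_MAX = 10.0   # total follower rise [mm] -- assumed value

def rise(theta_deg):
    """Follower rise h(theta): SHM outstroke 0-60 deg, dwell 60-90 deg,
    SHM return 90-150 deg, dwell 150-360 deg (rise law assumed)."""
    t = theta_deg % 360.0
    if t <= 60.0:                       # outstroke
        return H_MAX * 0.5 * (1 - math.cos(math.pi * t / 60.0))
    if t <= 90.0:                       # dwell at full lift
        return H_MAX
    if t <= 150.0:                      # return stroke
        return H_MAX * 0.5 * (1 + math.cos(math.pi * (t - 90.0) / 60.0))
    return 0.0                          # dwell on the base circle

def cam_profile(n=360):
    """Parametric curve x = r*cos(theta), y = r*sin(theta), r = R_min + h."""
    pts = []
    for i in range(n):
        th = math.radians(i * 360.0 / n)
        r = R_MIN + rise(math.degrees(th))
        pts.append((r * math.cos(th), r * math.sin(th)))
    return pts

profile = cam_profile()  # list of (x, y) points tracing the cam surface
```

Writing each stroke as its own branch mirrors the good practice mentioned above of creating a separate profile curve per section.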

*Schematic of the generated cam profile.*

*A plot of the follower rise as a function of the cam rotation, after performing a simulation with the generated cam profile.*

There are also cases in which the motion of the follower is a combination of different analytical expressions, such as uniform motion, parabolic motion, simple harmonic motion, cycloidal motion, or general polynomial motion. In these cases, the cam profile can be easily created using the *Analytic Function* feature with a combination of different motions over the full cam rotation. The analytic function accepts a symbolic expression, so you can directly write it as a function of *θ*.

In order to get a smoother surface representation, it is useful to increase the shape function order for the displacement to Quadratic. This applies both when the cam is rigid and when it is flexible.

If possible, use a fine mesh on the cam boundary to improve the accuracy of the mesh normal used in the *Cam-Follower* connection node.

The Structural Mechanics Module contains advanced tools for mechanical analyses. See what other types of analysis are possible by clicking the button below.

Note: The *Cam-Follower* feature also requires the Multibody Dynamics Module, an add-on to the Structural Mechanics Module.

One of the first physical laws that we learn as engineers is Ohm’s law: The current through a device equals the applied voltage difference divided by the device electrical resistance, or *I* = *V/R _{e}*, where the electrical resistance, *R _{e}*, is assumed for now to be constant.

Shortly after learning that law, we probably also learned about the dissipated power within the device, which equals the current times the voltage difference, or *Q* = *IV*, which we could also write as *Q* = *I ^{2}R_{e}* or *Q* = *V ^{2}/R_{e}*.

We start our discussion from this point and consider a completely lumped model of a device. (Yes, we’re starting so simple that we don’t even need to use the COMSOL Multiphysics® software for this part!) Let’s consider a lumped device with an electrical resistance of *R _{e}* = 1 Ω and a thermal resistance of *R _{t}* = 1 K/W, so that the device temperature is *T* = *T _{ambient}* + *QR _{t}*.

We choose an ambient temperature of 300 K, or 27°C, which is about room temperature. Let’s now plot out the device lumped temperature as a function of increasing voltage (from 0 to 10 V) and current (from 0 to 10 A), as shown in the image below. Unsurprisingly, we see a quadratic increase in temperature.

*Device temperature as a function of applied voltage (left) and applied current (right), assuming constant properties.*

We might think that we can use the curve to predict a wider range of operating conditions. Suppose we want to drive the device up to its failure temperature, where the material melts or vaporizes. Let’s say that this material will vaporize when its temperature gets up to 700 K (427°C). Based on this curve, some simple math would indicate that the maximum voltage is 20 V and the maximum current is 20 A, but this is quite wrong!
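The constant-property arithmetic can be checked with a few lines of Python. This is just a sketch of the lumped relation T = T_ambient + QR_t with the stated values, nothing more:

```python
R_E = 1.0      # electrical resistance [ohm]
R_T = 1.0      # thermal resistance [K/W]
T_AMB = 300.0  # ambient temperature [K]

def temp_voltage_driven(V):
    # T = T_amb + Q*R_t with Q = V^2/R_e
    return T_AMB + (V**2 / R_E) * R_T

def temp_current_driven(I):
    # T = T_amb + Q*R_t with Q = I^2*R_e
    return T_AMB + I**2 * R_E * R_T

print(temp_voltage_driven(10))  # → 400.0
print(temp_voltage_driven(20))  # → 700.0, the assumed failure temperature
```

Driving 20 V (or 20 A) indeed lands exactly on the 700 K failure temperature under the constant-property assumption, which is precisely the prediction that turns out to be wrong once the properties vary with temperature.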

At this point, you’re probably ready to point out the simple mistake that we’ve made: Electrical resistance is not constant with temperature. For most metals, the electrical conductivity goes down with an increasing temperature and since resistivity is the inverse of conductivity, the device resistivity goes up. So, let’s introduce a temperature dependence for the resistivity:

R_e = \rho_0(1+\alpha_e(T-T^e_0))

This is known as a linearized resistivity model, where *ρ*_{0} is the reference resistivity at the reference temperature, *T ^{e}_{0}*, and *α _{e}* is the temperature coefficient of electrical resistivity.

Let’s choose *ρ*_{0} = 1 Ω, *T ^{e}_{0}* = 300 K, and *α _{e}* = 1/200 K. Now, the resistance is 1 Ω at a device temperature of 300 K and 2 Ω at a temperature 200 K above the reference temperature. The equations for lumped temperature as a function of voltage and current now become:

T = T_{ambient} + \frac{V^2}{\rho_0(1+\alpha_e(T-T^e_0))} R_t

and

T = T_{ambient} + I^2 \rho_0(1+\alpha_e(T-T^e_0)) R_t

These equations are a bit more complicated (the first is a quadratic equation in terms of *T*) but still possible to solve by hand. The plots of temperature as a function of increasing voltage and current are displayed below.

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical resistivity as a function of temperature.*
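The two equations above can still be solved in closed form. The following Python sketch is our own rearrangement of those equations with the stated parameter values: the voltage-driven case becomes a quadratic in the temperature rise (take the positive root), while the current-driven case is actually linear in the rise, with a runaway threshold where the denominator vanishes.

```python
import math

RHO0 = 1.0         # reference resistance [ohm] at the reference temperature
ALPHA_E = 1/200.0  # temperature coefficient of resistivity [1/K]
R_T = 1.0          # constant thermal resistance [K/W]
T0 = 300.0         # reference (and ambient) temperature [K]

def temp_voltage_driven(V):
    # dT*(1 + a*dT) = V^2*R_t/rho0: quadratic in dT, keep the positive root
    q = V**2 * R_T / RHO0
    dT = (-1 + math.sqrt(1 + 4 * ALPHA_E * q)) / (2 * ALPHA_E)
    return T0 + dT

def temp_current_driven(I):
    # dT = I^2*rho0*R_t*(1 + a*dT)  =>  dT = I^2*rho0*R_t / (1 - a*I^2*rho0*R_t)
    q = I**2 * RHO0 * R_T
    if ALPHA_E * q >= 1.0:
        return math.inf            # no steady solution: thermal runaway
    return T0 + q / (1 - ALPHA_E * q)

print(temp_current_driven(10))     # → 500.0, higher than the constant-R 400 K
```

At 10 V, the voltage-driven temperature works out to about 373 K, lower than the 400 K of the constant-resistance case, which matches the trend discussed next.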

For the voltage-driven problem, as the temperature rises, the resistance rises. Since the resistance appears in the denominator of the temperature expression, a higher resistance lowers the temperature, so the temperature will be *lower* than that for the constant resistivity case. If we drive the device with constant current, the temperature-dependent resistance appears in the numerator. As we increase the current, the resistive heating will be *higher* than that for the constant resistivity case.

We might be tempted at this point to compute the maximum voltage or current that this device can sustain, but you are probably already realizing the second mistake we’ve made: We also need to incorporate the temperature dependence of the thermal resistance. For metals, it’s reasonable to assume that the electrical and thermal conductivity will show the same trends. Thus, let’s use a nonlinear expression that is similar to what we used before:

R_t = r_0(1+\alpha_t(T-T^t_0))

Now, our voltage-driven and current-driven equations for temperature become:

T = T_{ambient} + \frac{V^2\, r_0(1+\alpha_t(T-T^t_0))}{\rho_0(1+\alpha_e(T-T^e_0))}

and

T = T_{ambient} + I^2 \rho_0(1+\alpha_e(T-T^e_0))\, r_0(1+\alpha_t(T-T^t_0))

Although only slightly different from before, these nonlinear equations are now quite a bit more difficult to solve. Simulation software is starting to look more attractive! Once we do solve these equations (let’s set *r*_{0} = 1 K/W, *α _{t}* = 1/400 K, and *T ^{t}_{0}* = 300 K), we can plot the device temperature, as shown below.

*Device temperature as a function of applied voltage (left) and applied current (right) with the electrical and thermal resistivity as a function of temperature.*
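With both resistances temperature dependent, a hand solution is tedious, but a few lines of Python will do. The sketch below (our own fixed-point iteration on the temperature rise, with the stated parameter values; not how COMSOL solves it) handles both the voltage-driven and current-driven cases, and reports infinity when the iteration runs away and no steady state exists.

```python
import math

RHO0, R0 = 1.0, 1.0           # reference electrical [ohm] / thermal [K/W] resistances
A_E, A_T = 1/200.0, 1/400.0   # temperature coefficients [1/K]
T_AMB = 300.0                 # ambient = reference temperature [K]

def solve_dT(update, max_iter=10000, tol=1e-10):
    """Fixed-point iteration dT <- update(dT); inf if it runs away."""
    dT = 0.0
    for _ in range(max_iter):
        new = update(dT)
        if new > 1e7:
            return math.inf   # thermal runaway: no steady solution
        if abs(new - dT) < tol:
            return new
        dT = new
    return dT

def temp_voltage_driven(V):
    # dT = V^2 * R_t(dT) / R_e(dT)
    upd = lambda dT: V**2 * R0 * (1 + A_T*dT) / (RHO0 * (1 + A_E*dT))
    return T_AMB + solve_dT(upd)

def temp_current_driven(I):
    # dT = I^2 * R_e(dT) * R_t(dT)
    upd = lambda dT: I**2 * RHO0 * (1 + A_E*dT) * R0 * (1 + A_T*dT)
    return T_AMB + solve_dT(upd)

print(math.isinf(temp_current_driven(10)))  # → True
```

At 10 A the current-driven iteration diverges, while at 10 V the voltage-driven case settles near 385 K: above the electrical-nonlinearity-only result but still below the constant-property 400 K.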

Observe that for the current-driven case, the temperature rises asymptotically. Since both the electrical and the thermal resistance increase with an increasing temperature, the device temperature rises very sharply as the current is increased. As the temperature rises to infinity, the problem becomes unsolvable. This is actually entirely expected; in fact, this is how your basic car fuse works. Now, if we were solving this problem in COMSOL Multiphysics, we could also solve this as a transient model (incorporating the thermal mass due to the device density and specific heat) and predict the time that it takes for the device temperature to rise to its failure point.

Things are luckily a bit simpler for the voltage-driven case. Here, we also see a predictable behavior: The rising thermal resistivity drives the temperature higher than when we only considered a temperature-dependent electrical conductivity. Now, the interesting point here is the temperature is still lower than for the constant resistivity case. This also sometimes confuses people, but just keep in mind that one of these nonlinearities is driving the temperature *down* while the other is driving the temperature *up*. In general, for a more complicated model (such as one you would build in COMSOL Multiphysics), you don’t know which nonlinearity will predominate.

What other mistake might we have made here? We have used a *positive* temperature coefficient of thermal resistivity. This is certainly true for most metals, but it turns out that the opposite is true for some insulators, glass being a common example. Usually, the total device thermal resistance is mostly a function of the insulators rather than the electrically conductive domains. In addition, the device’s thermal resistance should incorporate the effects of the cooling to the surrounding ambient environment. So, the effects of free convection (which increases with the temperature difference) and radiation (which has a fourth-order dependence on temperature difference) could also be lumped into this single thermal resistance. For now, though, let’s keep the problem (relatively) simple and just switch the sign of the temperature coefficient of thermal resistivity, *α _{t}* = -1/400 K, and directly compare the voltage- and current-driven cases for driving voltage up to 100 V and current up to 100 A.

*Device temperature as a function of applied voltage (pink) and applied current (blue) with a negative temperature coefficient of thermal resistivity.*

We now see some results that are quite different. Observe that for both the voltage- and current-driven cases, the temperature increases approximately quadratically at low loads, but at higher loads, the temperature increase starts to flatten out due to the decreasing thermal resistivity. The slope, although always positive, decreases in magnitude. The current-driven case starts to asymptotically approach *T* = 700 K, but the voltage-driven case stays significantly below the failure temperature.

This is an important result and highlights another common mistake. The nonlinear material models we used here for electrical and thermal resistivity are *approximations* that start to become invalid if we get too close to 700 K. If we anticipate operating in this regime, we should go back to the literature and find a more sophisticated material model. Although our existing nonlinear material models did solve, we always need to check that they are still valid at the computed operating temperature. Of course, if we are not close to these operating conditions, we can use the linearized resistivity model (one of the built-in material models within COMSOL Multiphysics). Then, our model will be valid.

We can hopefully now see from all of this data that the temperature has a very complicated relationship with respect to the driving voltage or current. When nonlinear materials are considered, the temperature might be higher or lower than when using constant properties, and the slope of the temperature response can switch from quite steep to quite shallow just based on the operating conditions.

Have these results thoroughly confused you yet? What if we went back and changed one of the coefficients in the resistance expressions? Certain materials have negative temperature coefficients of electrical and thermal resistivity. What if we used an even more complicated nonlinearity? Would you feel confident in saying anything about the expected temperature variations in even this simple lumped device case, or would you rather check it against a rigorous calculation?

What about the case of a real-world device? One that has a combination of many different materials, different electrical and thermal conductivities as a function of temperature, and complex shapes? Would you model this under steady-state conditions only or in the time domain, to find out how long it takes for the temperature to rise? Maybe — in fact, most likely — there will also be nonlinear boundary conditions such as radiation and free convection that we don’t want to approximate via a single lumped thermal resistance. What can you expect then? Almost anything! And how do you analyze it? Well, with COMSOL Multiphysics, of course!

Evaluate how COMSOL Multiphysics can help you meet your multiphysics modeling and analysis goals. Contact us via the button below.


In a recent video on YouTube from standupmaths, science enthusiasts Matt Parker and Hugh Hunt discuss and demonstrate the “mystery” of a tuning fork. When you strike a tuning fork and hold it against a tabletop, it seems to double in frequency. As it turns out, the explanation behind this mystery can be boiled down to nonlinear solid mechanics.

When you hold a vibrating tuning fork in your hand, the bending motion of the prongs sets the air around them in motion. The pressure waves in the air propagate as sound. You can hear it, but it is not a very efficient conversion of the mechanical vibration into acoustic pressure.

When you hold the stem of the tuning fork to a table, an axial motion in the stem connects to the tabletop. The motion is much smaller than the transverse motion of the prongs, but it has the potential to set the large flat tabletop in motion — a surface that is a far better emitter of sound than the thin prongs of a tuning fork. The tabletop surface will act as a large loudspeaker diaphragm.

*Our tuning fork.*

To investigate this interesting behavior, we created a solid mechanics computational model of a tuning fork. The model is based on a tuning fork that one of my colleagues keeps in her handbag. The tone of the device is a reference A4 (440 Hz), the material is stainless steel, and the total length is about 12 cm.

First, let’s have a look at the displacement as the tuning fork is vibrating in its first eigenmode:

*The mode shape for the fundamental frequency of the tuning fork.*

If we study the displacements in detail, it turns out that even though the overall motion of the prongs is in the transverse direction (the *x* direction in the picture), there are also some small vertical components (in the *z* direction), consisting of two parts:

- The bending of the prongs is accompanied with an up-down motion that varies linearly over the prong cross section
- The stem has an essentially rigid axial motion, which is necessary for keeping the center of mass in a fixed position, as required by Newton’s second law

The displacements are shown in the figures below. The mode is normalized so that the maximum total displacement is 1. The peak axial displacement is 0.03 and the displacement in the stem is 0.01.

*Total displacement vectors in the first eigenmode.*

*Axial displacements only. Note that the scales differ between figures. The center of gravity is indicated by the blue sphere.*

Now, let’s turn to the sound emission. By adding a boundary element representation of the acoustic field to the model, the sound pressure level in the surrounding air can be computed. The amplitude of the vibration at the prong tips is set to 1 mm. This is approximately the maximum feasible value if the tuning fork is not to be overloaded from a stress point of view.

As can be seen in the figure below, the intensity of the sound decreases rather fast with the distance from the tuning fork, and also has a large degree of directionality. Actually, if you turn a tuning fork around its axis beside your ear, the near-silence in the 45-degree directions is striking.

*Sound pressure level (dB) and radiation pattern (inset) around the tuning fork.*

We now add a 2-cm-thick wooden table surface to the model. It measures 1 by 1 m and is supported at the corners. The stem of the tuning fork is in contact with a point at the center of the table. As can be seen below, the sound pressure levels are quite significant in a large portion of the air domain above and outside the table.

*Sound pressure levels above the table when the stem of the tuning fork is attached to the table.*

For comparison, we plot the sound pressure level for the same air domain when the tuning fork is held up. The difference is quite stunning with very low sound pressure levels in all parts of the air above the table except for in the vicinity of the tuning fork. This matches our experience with tuning forks as shown in the original YouTube video.

*Sound pressure levels for the tuning fork when held up.*

So far, we have not touched on the original question: Why does the frequency double when the tuning fork is placed on the table? One possible explanation could be that there *is* such a natural frequency, which has a motion that is more prominent in the vertical direction. For a vibrating string, for example, the natural frequencies are integer multiples of the fundamental frequency.

This is not the case for a tuning fork. If the prongs are approximated as cantilever beams in bending, the lowest natural frequency is given by the expression

f_1 = \dfrac{1.875^2}{2 \pi L^2}\sqrt{\dfrac{EI}{\rho A}}

The quantities in this expression are:

- Length of the prong, L
- Young’s modulus, E; usually around 200 GPa for steel
- Mass density, ρ; approximately 7800 kg/m^{3}
- Area moment of inertia of the prong cross section, I
- Cross-sectional area of the prong, A

For our tuning fork, this evaluates to 435 Hz, so the formula provides a good approximation.
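As a sanity check, the formula can be evaluated numerically. The cross-sectional dimensions below are assumed values chosen to be plausible for a fork of this size (the actual geometry is not given here); with them, the fundamental frequency lands close to the 435 Hz quoted above.

```python
import math

E = 200e9      # Young's modulus [Pa], typical steel
RHO = 7800.0   # mass density [kg/m^3]
L = 0.080      # prong length [m] (the value used later in the post)
B, T = 0.0045, 0.0034   # assumed rectangular cross section [m]: width x thickness

def cantilever_freq(beta_l):
    """f = (beta*L)^2 / (2*pi*L^2) * sqrt(E*I/(rho*A)) for a cantilever."""
    I = B * T**3 / 12.0       # area moment about the bending axis
    A = B * T
    return beta_l**2 / (2 * math.pi * L**2) * math.sqrt(E * I / (RHO * A))

f1 = cantilever_freq(1.875)   # ≈ 435 Hz with these assumed dimensions
```

The coefficient 4.694 gives the second bending mode, and the ratio f2/f1 = (4.694/1.875)^2 ≈ 6.27 is independent of the cross section, which is the key fact used below.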

The second natural frequency of a cantilever beam is

f_2 = \dfrac{4.694^2}{2 \pi L^2}\sqrt{\dfrac{EI}{\rho A}}

This frequency is a factor 6.27 higher than the fundamental frequency. It cannot be involved in the frequency doubling. However, there are other mode shapes besides those with symmetric bending. Could one of them be involved in the frequency doubling?

This is unlikely for two reasons. The first reason is that the frequency doubling phenomenon can be observed for tuning forks with different geometries, and it would be too much of a coincidence if all of them have an eigenmode with exactly twice the fundamental natural frequency. The second reason is that nonsymmetrical eigenmodes have a significant transverse displacement at the stem, where the tuning fork is clenched. Such eigenmodes will thus be strongly damped by your hand, and have an insignificant amplitude. One such mode, with a natural frequency of 1242 Hz, is shown in the animation below.

*The tuning fork’s first eigenmode at 440 Hz, an out-of-plane mode with an eigenfrequency of 1242 Hz, and the second bending mode with an eigenfrequency of 2774 Hz.*

Let’s summarize what we know about the frequency-doubling phenomenon. Since it is only experienced when we press the tuning fork to the table, the double frequency vibration has a strong axial motion in the stem. Also, we can see from a spectrum analyzer (you can download such an app on a smartphone) that the level of vibration at the double frequency decays relatively quickly. There is a transition back to the fundamental frequency as the dominant one.

The dependency on the amplitude suggests a nonlinear phenomenon. The axial movement of the stem indicates that the stem compensates for a change in the location of the center of mass of the prongs.

Without going into details with the math, it can be shown that for the bending cantilever, the center of mass shifts down by a distance relative to the original length *L*, which is

\dfrac{\delta Z}{L} = \beta \left ( \dfrac{a}{L} \right)^2

Here, *a* is the transverse motion at the tip and the coefficient β ≈ 0.2.

The important observation is that the vertical movement of the center of mass is proportional to the square of the vibration amplitude. Also, the center of mass will be at its lowest position twice per cycle (both when the prong bends inward and when it bends outward), thus the double frequency.

With *a* = 1 mm and a prong length of *L* = 80 mm, the maximum shift in the position of the center of mass of the prongs can be estimated to

\delta Z = 0.2 \left( \frac{1}{80} \right)^2 \times 80\ \mathrm{mm} = 0.0025\ \mathrm{mm}

The stem has a significantly smaller mass than the prongs, so it has to move even more for the total center of gravity to maintain its position. The stem displacement amplitude can thus be estimated to 0.005 mm. This should be seen in relation to what we know from the numerical experiments above. The linear (440 Hz) part of the axial motion is of the order of *a*/100; in this example, 0.01 mm.
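The back-of-the-envelope numbers above can be reproduced directly. In this sketch, the factor of 2 between the prongs' center-of-mass shift and the stem amplitude is the mass-ratio assumption from the estimate above, not a computed quantity.

```python
BETA = 0.2      # coefficient from the cantilever bending analysis
A_TIP = 1.0     # prong tip amplitude [mm]
L = 80.0        # prong length [mm]

# Downward shift of the prongs' center of mass; it occurs twice per
# cycle, which is the origin of the 880 Hz component:
dZ = BETA * (A_TIP / L)**2 * L          # ≈ 0.0025 mm

# The lighter stem must move more to keep the total center of mass
# fixed; a factor-of-2 mass ratio is assumed here:
stem_880 = 2 * dZ                        # ≈ 0.005 mm

# The linear 440 Hz axial stem motion is about a/100:
stem_440 = A_TIP / 100.0                 # 0.01 mm
```

Since the 880 Hz amplitude scales with the *square* of the tip amplitude while the 440 Hz amplitude scales linearly, halving the strike amplitude cuts the second-order term by a factor of four, which is why the effect fades as the vibration decays.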

In reality, the tuning fork is a more complex system than a pure cantilever beam, and the connection region between the stem and the prongs will affect the results. For the tuning fork analyzed here, the second-order displacements are actually less than half of the back-of-the-envelope predicted 0.005 mm.

Still, the axial displacement caused by the second-order moving mass effect is significant. Furthermore, when it comes to emitting sound, it is the velocity, not the displacement, that is important. So, if displacement amplitudes are equal at 440 Hz and 880 Hz, the velocity at the double frequency is twice that at the fundamental frequency.

Since the amplitude of the axial vibration at 440 Hz is proportional to the prong amplitude *a*, while the amplitude of the 880-Hz vibration is proportional to *a*^{2}, we must strike the tuning fork hard enough to experience the frequency-doubling effect. As the vibration decays, the relative importance of the nonlinear term decreases. This is clearly seen on the spectrum analyzer.

The behavior can be investigated in detail by performing a geometrically nonlinear transient dynamic analysis. The tuning fork is set in motion by a symmetric impulse applied horizontally on the prongs, and is then left free to vibrate. It can be seen that the horizontal prong displacement is almost sinusoidal at 440 Hz, while the stem moves up and down in a clearly nonlinear manner. The stem displacement is highly nonsymmetrical, since the 440 Hz contribution is synchronous with the prong displacement, while the 880-Hz term always gives an additional upward displacement.

Due to the nonlinearity of the system, the vibration is not completely periodic. Even the prong displacement amplitude can vary from one cycle to another.

*The blue line shows the transverse displacement at the prong tip, and the green line shows the vertical displacement at the bottom of the stem.*

If the frequency spectrum of the stem displacement plotted above is computed using FFT, there are two significant peaks at 440 Hz and 880 Hz. There is also a small third peak around the second bending mode.

*Frequency spectrum of the vertical stem displacement.*

To actually see the second-order term at 880 Hz in action, we can subtract the part of the stem vibration that is in phase with the prong bending from the total stem displacement. This displacement difference is seen in the graph below as the red curve.

*The total axial stem displacement (blue), the prong bending proportional stem displacement (dashed green), and the remaining second-order displacement (red).*

How did we perform this calculation? Well, we know from the eigenfrequency analysis that the amplitude of the axial stem vibration is about 1% of the transverse prong displacement (actually 0.92%). In the graph above, the dashed green curve is 0.0092 times the current displacement of the prong tip (not shown in the graph). This curve can be considered as showing the linear 440 Hz term — a more or less pure sine wave. That value is then subtracted from the total stem displacement, and what is left is the red curve. The second-order displacement is zero when the prong is straight, and peaks both when the prong has its maximum inward bending and when it has its maximum outward bending.

Actually, the red curve looks very much like it has a time variation proportional to sin^{2}(ωt). It should, since that displacement, according to the analysis above, is proportional to the square of the prong displacement. Using the well-known trigonometric identity sin^{2}(ωt) = (1 − cos(2ωt))/2, the squared term contains a component at twice the original frequency. Enter the double frequency!
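This is easy to verify numerically: squaring a 440 Hz sine leaves only a DC offset and an 880 Hz component. A short NumPy sketch (the sampling parameters are illustrative):

```python
import numpy as np

# sin^2(wt) = (1 - cos(2wt))/2: the squared signal contains only
# a DC offset and the double frequency.
fs = 44100                           # sampling rate, Hz
t = np.arange(fs) / fs               # exactly 1 s -> 1 Hz FFT bins
prong = np.sin(2 * np.pi * 440 * t)  # linear prong motion
second_order = prong ** 2            # ~ the nonlinear stem term

spectrum = np.abs(np.fft.rfft(second_order))
peak_hz = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
print(peak_hz)  # 880
```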

Commenters on the original video from standupmaths have noticed that some tuning forks work better than others, and with some tuning forks, it is difficult to see the frequency doubling at all. As discussed above, the first criterion is that you hit it hard enough in order to get into the nonlinear regime. But there are also geometrical differences influencing the ratio between the amplitude of the two types of vibration.

For instance, prongs that are heavy relative to the stem will cause large double-frequency displacements, since the stem must move more in order to maintain the center of gravity. Slender prongs can have a larger amplitude–length (*a*/*L*) ratio, thus increasing the nonlinear term.

The design of the region where the prongs meet the stem is important. If it is stiff, then the amplitude of the fundamental frequency vibration in the stem will be reduced, and the relative importance of the double-frequency vibration is larger.

The cross section of the prongs will also have an influence. If we return to the expression for the natural frequency

f_1 = \dfrac{1.875^2}{2 \pi L^2}\sqrt{\dfrac{EI}{\rho A}}

it can be seen that the moment of inertia of the cross section plays a role. A prong with a square cross section with side *d* has

I = \dfrac{d^4}{12}

while a prong with a circular cross section with diameter *d* has

I = \dfrac{\pi d^4}{64}

Thus, for two tuning forks that look the same when viewed from the side, the one with a square profile must have prongs that are a factor 1.14 longer to give the same fundamental frequency. If we assume the same maximum bending stress in the two tuning forks, the one with the square profile can have a transverse displacement amplitude that is a factor 1.14^{2} larger than the circular one because of its higher load-carrying capacity. In addition, if the stem is kept at a fixed size, it becomes proportionally lighter compared to the longer prongs. All these contributions add up to a 70% increase in vertical stem vibration amplitude when moving from a circular profile to a square profile.
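As a quick numerical check, the factor 1.14 quoted above can be recovered as the fourth root of the ratio of the two moments of inertia (a sketch; it compares the bending stiffness terms only):

```python
import math

d = 1.0                            # common side/diameter (arbitrary units)
I_square = d**4 / 12               # square cross section
I_circle = math.pi * d**4 / 64     # circular cross section

# f1 scales as sqrt(I)/L^2, so at a fixed frequency the prong
# length scales as I^(1/4).
length_factor = (I_square / I_circle) ** 0.25
amplitude_factor = length_factor ** 2
print(round(length_factor, 2), round(amplitude_factor, 2))  # 1.14 1.3
```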

In addition, tuning forks with a circular cross section usually have a design that is more flexible at the connection between the prongs and the stem, and thus a higher level of vibration at the fundamental frequency.

The conclusion is that a tuning fork with a square cross section is more likely to exhibit the frequency-doubling behavior than one with a circular cross section.

In most cases, the answer is “no.” The fundamental frequency is still there, even though it may have a lower amplitude than the one with the double frequency. But the way our senses work, we hear the fundamental frequency, although with a different timbre. It is difficult, but not impossible, to strike the tuning fork so hard that the sound level of the double frequency is significantly dominant.

The frequency doubling occurs due to a nonlinear phenomenon, where the stem of the tuning fork must move upward, in order to compensate for the small lowering of the center of mass of the prongs as they approach the outermost positions of their bending motion.

Note that it is not the fact that the tuning fork is connected to the table that causes the frequency doubling. The reason that we measure it in that case is that the sound emitted by the resonating table surface is caused by the axial stem motion, whereas the sound we hear from the tuning fork that is held up is dominated by the prong bending. The motion is the same in both cases, as long as the impedance of the table is ignored. In fact, you can measure the doubled frequency with a tuning fork when held up as well, but it is 30 dB or so below the fundamental frequency.

- Watch the original videos from standupmaths on YouTube:
- Read more about the intersection of tuning forks and simulation on the COMSOL Blog:

BEM functionality is available in the Acoustics Module as the *Pressure Acoustics, Boundary Elements* interface. The interface can solve 2D and 3D acoustics problems that have constant-valued material properties within each domain. The fluid model can include dissipation by using complex-valued material data. Furthermore, the BEM interface’s implementation as a scattered field formulation means that it can handle scattering problems (see the image below). As we will see below, the introduction of BEM allows users to solve a new category of problems that were not possible before.

*Classical BEM benchmark model of a spherical scatterer for which the results are compared to an analytical solution. The left image shows the sound pressure level in two cut planes at 500 Hz, while the right image shows a comparison of the scattered field at 1400 Hz. Images from the Spherical Scatterer: BEM Benchmark tutorial model.*

An important feature is the ability to couple the BEM-based interface with FEM-based interfaces. For example, by using the *Acoustic-Structure Boundary* multiphysics coupling feature, you can couple the acoustics BEM interface to vibrating structures based on FEM. In addition, BEM and FEM acoustic domains can be combined by using the *Acoustic BEM-FEM Boundary* multiphysics coupling.

This flexibility allows BEM and FEM to be used where they are best suited and this is all done within the same user interface, as with all other physics couplings in COMSOL Multiphysics. For instance, you can use FEM to model a vibrating structure’s interior, like a closed air domain, as this method can include more general material properties, and BEM to model the exterior domain, as this method is better for modeling large and infinite domains. This is the case in the loudspeaker model depicted below.

*User interface of COMSOL Multiphysics when setting up a multiphysics model of a loudspeaker that includes BEM and FEM acoustics as well as the* Solid Mechanics *and* Shell *interfaces. The physics are coupled with the built-in multiphysics couplings. Image from the Vibroacoustic Loudspeaker Simulation: Multiphysics with BEM-FEM tutorial model.*

With BEM, you only need to mesh the surfaces next to the modeling domain. This means that there’s less need to create large volumetric meshes (necessary for FEM), making interfaces based on BEM particularly helpful for models that involve radiation and scattering and have detailed CAD geometries. The interface also has built-in conditions to set up an infinite sound hard boundary (wall) or an infinite sound soft boundary. These conditions are very useful when modeling, for example, underwater acoustics, where the ocean surface can be modeled as an infinite sound soft boundary.

Typically, it is advantageous to use interfaces based on BEM for problems with large fluid domains for which a large FEM-based volumetric mesh would otherwise be required (i.e., cases that would run out of memory due to the large 3D mesh). For cases like this one, using BEM can even extend the class of problems that COMSOL Multiphysics can handle. Some examples of these problems include:

- Models with an infinite wall or infinite sound soft boundary that is far (in terms of wavelengths) from the radiating objects
- Models that include the interaction of scattering and radiating objects that are far from each other
- Radiation problems from complex noncompact geometries, where it is difficult to snugly fit a radiation condition or a perfectly matched layer (PML) when using FEM

*An example of a transducer array located far from a scattering object. This type of problem is very hard or even impossible to solve with a pure FEM-based approach due to the large memory requirement. Using BEM, the model can be solved (moving the sphere further away does not cost more on the computational side). Image from the Tonpilz Transducer Array for Sonar Systems tutorial model.*

While BEM is more computationally demanding than FEM for an equal number of degrees of freedom (DOFs), BEM usually requires far fewer DOFs than FEM to obtain the same accuracy. The fully populated, dense system matrices generated by BEM require dedicated numerical methods that differ from those used for FEM. A FEM-based interface, such as the *Pressure Acoustics, Frequency Domain* interface, is usually faster than BEM for solving small- and medium-sized acoustics models.
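A rough memory comparison illustrates why dense BEM matrices need dedicated methods such as fast summation. The DOF counts and the sparse fill-in figure below are assumptions for illustration only:

```python
# Dense complex BEM matrix vs. sparse FEM matrix (illustrative numbers).
M = 20_000            # boundary (BEM) DOFs -- assumed
N = 1_000_000         # volume (FEM) DOFs -- assumed
nnz_per_row = 30      # typical FEM sparsity -- assumed
bytes_complex = 16    # complex double precision

dense_bem_gb = M * M * bytes_complex / 1e9        # fully populated
sparse_fem_gb = N * nnz_per_row * bytes_complex / 1e9
print(dense_bem_gb, sparse_fem_gb)  # 6.4 GB vs. 0.48 GB
```

Even with 50 times fewer DOFs, the fully assembled BEM matrix is the larger of the two, which is why fast summation methods avoid assembling it explicitly.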

According to the user’s guide for the Acoustics Module, the BEM used in the *Pressure Acoustics, Boundary Elements* interface is based on the direct method with Costabel’s symmetric coupling. To solve the resulting linear system, the adaptive cross approximation (ACA) fast summation method is used. This method uses partial assembly of the matrices where the effect of the matrix vector multiplication is calculated. As for the default iterative solver, it is GMRES. With the built-in multiphysics couplings, it is easy and seamless to set up problems that combine FEM- and BEM-based physics. When solving these coupled models, the default approach is to use hybridization with the ACA for BEM and an appropriate preconditioner for the FEM part of the problem (direct or multigrid).

As already mentioned, the *Pressure Acoustics, Boundary Elements* interface seamlessly couples to the finite-element-based interfaces like the *Pressure Acoustics, Frequency Domain* interface and the *Solid Mechanics* interface. This coupling makes it possible to easily set up hybrid FEM-BEM models that take advantage of the strengths of each formulation where needed and where they are best applied.

BEM is not meant to replace finite elements in acoustics but should be seen as a complement. The general rule of thumb is to use BEM where large fluid domains would otherwise require a very fine mesh when running a FEM-based model, and to otherwise couple BEM to FEM-based physics where they are best used. Some applications and examples include:

- Modeling transducers and radiation problems with complex geometries
  - Model the transducer (piezo or electromagnetic) with FEM and the exterior acoustics with BEM
- Combining interior and exterior problems
  - Use FEM in narrow regions and resonant volumes and use BEM for the radiation part
  - Remember that acoustics models based on BEM and FEM can easily be coupled using the *Acoustic BEM-FEM Boundary* multiphysics coupling

Remember that smaller models that fit in memory are typically faster with FEM. Use the traditional approach with a radiation condition or a PML to model open radiation domains.

The *Pressure Acoustics, Boundary Elements* interface can be used to replace a FEM-based radiation condition or PML and the far-field calculation feature. See, for example, the model example below.

*In the Bessel Panel tutorial model, the* Pressure Acoustics, Boundary Elements *interface is used to model the open space. The BEM interface is effectively replacing a radiation condition (or a PML) and the far-field calculation feature that was previously necessary. This image shows the sound pressure level on the surface of the FEM domain (several point sources are located inside this domain) and in three cut planes, with a given extent, in the exterior BEM region.*

When solving a problem with the BEM interface, the resulting solution consists of the dependent variables (equivalently, the unknown fields) on boundaries. This includes the pressure *p* and its normal derivative; i.e., the normal flux variable `pabe.pbam1.bemflux`. Evaluating the solution in a domain is based on an integral kernel evaluation, which is at the heart of BEM. On boundaries, a dedicated boundary variable is defined. This variable has different definitions on exterior and interior boundaries; it is equal to the dependent variable on exterior boundaries. Up and down pressure-dependent variables (`pabe.p_up` and `pabe.p_down`) are defined on interior boundaries because the pressure is discontinuous there; for example, at an *Interior Sound Hard Wall* boundary. Moreover, on all boundaries, predefined postprocessing variables exist that combine the properties of the boundary variables, when needed, with variables based on the kernel evaluation.

These variables and all other postprocessing variables are found in the *Replace Expressions* list in the plots, as shown in the image below.

*The user interface with a list of some of the predefined postprocessing variables.*

When postprocessing the BEM solution within the domains, the pressure field has to be reconstructed using the aforementioned BEM integral kernel evaluation. Dedicated data sets are available for easy visualization of the BEM solution by automating the kernel evaluation on a grid. The paragraphs below discuss data sets that can be used to plot acoustics results.

The *Grid 3D* and *Grid 2D* data sets are specially designed for evaluating the solution within domains where there is no mesh. These data sets set up a regular grid of points where the solution is evaluated. The size and bounds of the grid can be modified as well as the resolution (the grid spacing). When visualizing wave problems, it is important to have an adequate spatial resolution. However, the resolution should not be too large, as it will increase the rendering time.
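A simple way to pick the grid spacing is to resolve the wavelength with a handful of points; the figure of six points per wavelength below is a common visualization choice, not a COMSOL requirement:

```python
# Grid spacing estimate for visualizing an acoustic wave solution.
c = 343.0    # speed of sound in air, m/s
f = 1400.0   # frequency, Hz (as in the scatterer benchmark above)
points_per_wavelength = 6  # assumed visualization rule of thumb

wavelength = c / f                                # 0.245 m
max_spacing = wavelength / points_per_wavelength  # ~41 mm
print(wavelength, max_spacing)
```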

A grid data set can, for example, be selected as the input data set for a slice or a surface plot. A grid data set and a multislice plot are automatically generated and used in the default plots when a BEM model is solved. The grid data set can also be used as input to a cut plane, cut line, or cut point.

Parameterized curves and surfaces can be used directly to evaluate the BEM solution as long as the option *Only evaluate globally defined expressions* is selected.

The dedicated acoustics plots can be used directly with the BEM variables as input. Examples include the *Far Field* plot, used for plotting the spatial response (not necessarily in the far field but, as a matter of fact, at any distance), and the *Directivity* plot. For example, the sound pressure level variable `pabe.Lp` can be used as the expression.

*Screenshots of the user interface for some of the different data sets mentioned above. The important settings are highlighted.*

The screenshots above are taken from the Loudspeaker Radiation: BEM Acoustics tutorial model. This model solves a radiation problem and has most of the common plots and results visualization set up.

The image below shows the sound pressure level depicted in three slices through the grid on the speaker surface. To illustrate the generality of the postprocessing and visualization tools, the sound pressure level is also shown along a parameterized spiral curve created using a *Parametric Curve 3D* data set.

*Sound pressure level depicted in different ways in the Loudspeaker Radiation: BEM Acoustics tutorial model.*

Next, I want to discuss two cases that require special consideration when using BEM.

Many acoustics applications involve a situation in which a transducer is located in an infinite baffle and is radiating into a half-space. In most cases, this setup is not possible using boundary elements, at least not if the baffle has to be infinite. A noninfinite baffle can be set up using, for example, the *Interior Sound Hard Wall* boundary condition.

Typically, we would want to use the *Infinite Sound Hard Boundary* feature. However, this condition cannot “have a hole in it”, like when a loudspeaker driver sits in a baffle. Since the BEM formulation is based on the full-space Green’s function, an infinite symmetry plane or infinite wall condition is truly infinite and cannot have an opening in it. Basically, all boundaries that have a selection in the physics interface and are active must be located on the same side of the infinite condition or lie on it. If this is not the case, the results will be unphysical.

My general recommendation for the infinite baffle setups is to use the FEM-based physics interface together with the far-field calculation feature and a PML or radiation condition. For an example, see the Lumped Loudspeaker Driver model. This setup will typically be much faster!

*User interface of the* Pressure Acoustics, Boundary Elements *interface. The infinite conditions are found at the top physics level (highlighted here). Once a condition is selected, the resulting plane is depicted in the Graphics window.*

Interior problems — especially problems with sharp resonances where no or little loss is present — can be challenging to solve with BEM. This is not because of the method itself but because an iterative solver is used to efficiently solve the underlying matrix system. The same problem is also found for a FEM-based model that uses an iterative solver.

Near a sharp resonance, any small change results in variations in the pressure that are hard to capture to ensure convergence. If possible, use FEM in these situations together with a direct solver or make sure to add realistic boundary conditions with losses, such as an impedance condition.

BEM is a very useful complement to FEM in the COMSOL Multiphysics environment. Many engineers in the acoustics modeling community have been looking forward to the addition of this functionality. We hope that you will enjoy this latest addition to the Acoustics Module.

See what’s possible with the specialized acoustics modeling features available in the Acoustics Module add-on product by clicking the button below.

Try it yourself: Download one of the tutorial models featured in this blog post. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file.


The important boundary condition that we will discuss here is called the *Inflow* boundary condition. It is available at boundaries that are exterior to a fluid domain and is equivalent to having a virtual channel “upstream”. The *Inflow* boundary condition is used to define a heat flux at the inlet boundary that brings the same energy to the fluid domain as if you had modeled the virtual channel as a real CFD domain. The virtual channel can be seen as a long insulated channel with given thermal properties at the inlet, and with the same velocity profile as defined in the settings for the *Inflow* boundary condition.

*Representation of the virtual domain corresponding to an* Inflow *boundary condition.*

From a mathematical point of view, the boundary condition is formulated as a flux condition:

(1)

-\mathbf{n} \cdot \mathbf{q} = \rho \Delta H \, \mathbf{u} \cdot \mathbf{n}

where the enthalpy variation is defined as:

(2)

\Delta H = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d}T}+\int_{p_{\mathrm{upstream}}}^{p}{\frac{1-\alpha_p T}{\rho}\mathrm{d}p}

where we can designate the two terms:

{\Delta H}_T = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d}T}

and

{\Delta H}_p = \int_{p_{\mathrm{upstream}}}^{p}{\frac{1-\alpha_p T}{\rho}\mathrm{d}p}

so that we can write:

\Delta H ={\Delta H}_T + {\Delta H}_p

This expression contains two terms. The first, ΔH_{T}, depends on the temperature difference, while the second, ΔH_{p}, depends on the pressure difference.

Eq. (1) expresses the fact that the normal conductive heat flux (−**n** ⋅ **q**) at the inflow boundary is proportional to the flow rate and the enthalpy variation between the upstream conditions and the inlet conditions.

As shown in Eq. (2), the enthalpy variation depends in general both on the difference in temperature and in pressure. However, the pressure contribution to the enthalpy, ΔH_{p}, is neglected when the work due to pressure changes is not included in the energy equation.

In the COMSOL Multiphysics® software, this is controlled in the *Nonisothermal Flow* multiphysics feature using the corresponding check box:

There is another classical case where this term cancels out: when the fluid is modeled as an ideal gas. Indeed, in this case, α_{p} = 1/T, so the factor 1 − α_{p}T vanishes and ΔH_{p} = 0.
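A small numerical sanity check confirms this: for an ideal gas, the coefficient of thermal expansion equals 1/T, so the pressure term drops out (the molar mass of air used below is approximate):

```python
# For an ideal gas, alpha_p = -(1/rho) * d(rho)/dT = 1/T,
# so the pressure term 1 - alpha_p*T in the enthalpy is zero.
R = 8.314      # J/(mol*K)
M = 0.029      # kg/mol, approximate molar mass of air

def rho(p, T):
    return p * M / (R * T)   # ideal gas law

p, T, dT = 101325.0, 293.15, 1e-3
drho_dT = (rho(p, T + dT) - rho(p, T - dT)) / (2 * dT)  # central difference
alpha_p = -drho_dT / rho(p, T)
print(1 - alpha_p * T)   # ~0 up to round-off
```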

First, let’s assume that the pressure contribution to the enthalpy is null. (We have seen above that this is actually quite often the case.) Then, the boundary condition reads:

(3)

k\nabla T \cdot \mathbf{n} = \int_{T_{\mathrm{upstream}}}^{T}{C_p \mathrm{d} T} \: \rho\mathbf{u} \cdot \mathbf{n}

When advective heat transfer dominates at the inlet (large flow rates), the temperature gradient, and hence the heat transfer by conduction, in the normal direction to the inlet boundary is very small. So in this case, Eq. (3) imposes that the enthalpy variation is close to zero. As C_{p} is positive, the *Inflow* boundary condition then requires T ≈ T_{upstream} to be fulfilled. So, when advective heat transfer dominates at the inlet, the *Inflow* boundary condition is almost equivalent to a *Dirichlet* boundary condition that prescribes the upstream temperature at the inlet.

Conversely, when the flow rate is low or in the presence of large heat sources or sinks next to the inlet, the conductive heat flux cannot be neglected. In addition, the inlet temperature has to be adjusted to balance the energy brought by the flow at the inlet and the energy transferred by conduction from the interior, as described by Eq. (3).

This makes it possible to observe a realistic upstream feedback due to thermal conduction from the inlet surroundings.

Keeping the assumption that the enthalpy only depends on the temperature and that, in addition, the heat capacity is constant, Eq. (1) reads:

(4)

k \nabla T \cdot \mathbf{n} = (T-T_\mathrm{upstream})C_p \rho\mathbf{u} \cdot \mathbf{n}

which corresponds to a *Danckwerts* boundary condition that is used in, for example, the *Transport of Diluted Species* interface.
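With a constant heat capacity, the right-hand side of Eq. (4) is straightforward to evaluate; a minimal sketch with illustrative values (the helper function name is ours, not a COMSOL API):

```python
def danckwerts_flux(T, T_upstream, Cp, rho, u_n):
    """Prescribed normal conductive flux k*grad(T).n of Eq. (4), W/m^2."""
    return (T - T_upstream) * Cp * rho * u_n

# Air entering the domain (u.n < 0 for inflow with an outward normal),
# with the interior 2 K warmer than the upstream conditions:
flux = danckwerts_flux(T=295.0, T_upstream=293.0, Cp=1006.0, rho=1.2, u_n=-0.1)
print(flux)  # negative: conduction carries heat toward the inlet
```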

In practice, there are many models where the heat capacity is nearly constant, so the *Inflow* boundary condition behaves like a *Danckwerts* boundary condition with an averaged heat capacity. Interestingly, if this is not the case, the *Inflow* boundary condition automatically accounts for an incoming flux that corresponds to the enthalpy and cannot be expressed by simply using a *Danckwerts* boundary condition.

Let’s discuss a general case. In Eq. (2), the enthalpy variation depends both on the difference in temperature and in pressure.

Considering that the *Inflow* boundary condition models a virtual channel feeding the inlet, we expect pressure losses between the virtual channel inlet and the boundary where the condition is defined. This explains why the upstream pressure is different from the inlet pressure. While the fluid flows through the channel, it is subject to pressure work that results in a temperature change between the virtual channel inlet and the boundary where the *Inflow* boundary condition is defined. This is what is described by the pressure-dependent term in Eq. (2). Note that the viscous dissipation in the virtual channel is not accounted for.

In practical situations, the pressure contribution, ΔH_{p}, is often zero (for ideal gases or when work done by pressure changes is neglected) or small in the sense that a very small difference between the upstream temperature and the inflow temperature is enough to balance it. To illustrate this, consider two common fluids:

- Air: Its density is defined from the ideal gas law in the Material Library, hence the pressure contribution to the enthalpy, ΔH_{p}, is zero.
- Water: The order of magnitude of C_{p} is 1000 J/K/kg, while the order of magnitude of 1/ρ is 0.001 m^{3}/kg. A pressure difference of 1 bar (= 10^{5} Pa) and a temperature difference of 0.1 K induce ΔH_{p} ≈ 100 J/kg and ΔH_{T} ≈ 100 J/kg, respectively: two contributions with the same order of magnitude in ΔH.

To illustrate how the *Inflow* boundary condition behaves compared to a *Temperature* boundary condition, we can study the stationary temperature profile in a long channel in 2D, which actually represents a flow between two plates. Beyond a certain point, the channel is cooled by a convective heat flux on both sides. The channel height is 1 cm and the part exposed to the convective heat flux is 10 cm long. The channel is filled with air (the density follows the ideal gas law).

At the inlet located at some distance from the cooling area, the average velocity is U_{in} and the temperature is T_{hot} = 30°C. The convective heat flux is defined as h(T_{cold}-T), with h = 100 W/m^{2}/K and T_{cold} = 10°C.

Most of the temperature variations occur beyond the point where the heat flux is applied, so we can choose to reduce the computational domain by modeling only a fraction of the channel before the cooling area. The image below contains two sketches. The one on top has a section of length *L*_{inlet} = 2 cm before the cooling area, while in the one at the bottom, the inlet coincides with the beginning of the cooling area (*L*_{inlet} = 0).

*Representation of the geometry with a section before the area exposed to the heat flux (top) and with the inlet at the beginning of the area exposed to the heat flux (bottom).*

Now we solve the model using either a *Temperature* or *Inflow* boundary condition at the inlet. We vary two parameters in the model:

- Inlet velocity, *U*_{in}: 1 cm/s and 10 cm/s
- Length of the channel before the area exposed to the heat flux, *L*_{inlet}: 0, 0.2, 1, and 2 cm

The goal of these simulations is to determine the values of *L*_{inlet} for which we are able to set accurate thermal boundary conditions using the *Temperature* and *Inflow* boundary conditions, respectively.

Let’s comment on the results for U_{in} = 10 cm/s. In the left part of the figure below, we see the temperature profile using the *Temperature* boundary condition (top) and the *Inflow* boundary condition (bottom). The two graphs look very similar and it is difficult to draw any conclusion from them, but the graph on the right gives more details.

The graph to the right shows the temperature profile along the vertical line located at the beginning of the cooling zone. (It coincides with the inlet boundary, when *L*_{inlet} = 0. Let’s call it “reference line” in the rest of this blog post.) The solid lines represent the results obtained using an *Inflow* boundary condition and the dotted lines correspond to the *Temperature* boundary condition. The different colors correspond to different values of *L*_{inlet}.

Let’s first check the results obtained using the *Temperature* boundary condition (dotted line). We see that as *L*_{inlet} increases, the temperature profile along the reference line converges to a given profile. The results for *L*_{inlet} = 2 cm show no improvement; they coincide with the results obtained for *L*_{inlet} = 1 cm, so we can consider that there is no need to further extend the channel.

For *L*_{inlet} = 0, the temperature profile is quite different from the converged profile. This illustrates a classical issue using a *Temperature* boundary condition: As the temperature profile is not known in advance along the reference line, the best option is to prescribe a reasonable temperature; here, the upstream temperature.

When an *Inflow* boundary condition is used, if the value of *L*_{inlet} is increased, the temperature profile along the reference line converges to the same profile as when a *Temperature* boundary condition is used.

We notice that especially with *L*_{inlet} = 0, the solution is much closer to the converged profile than when using the *Temperature* boundary condition.

*Left: Temperature field in the channel using the* Temperature *boundary condition (top) and* Inflow *boundary condition (bottom) for* L*_{inlet} = 0 and U_{in} = 10 cm/s. Right: A comparison of the temperature along the reference line with the* Inflow *and* Temperature *boundary conditions for different values of* L*_{inlet}.*

It is important to keep in mind that in many projects, the geometry contains inlets that are fed by channels that are not represented in the geometry. While for a simple geometry like this one it is easy to include a part of the channel before the inlet, doing so can be challenging for advanced geometries. Even with *L*_{inlet} = 0, the *Inflow* boundary condition gives a decent prediction of the temperature profile at the inlet.

When the channel before the inlet is extended a sufficient distance, the temperature profiles on the inlet boundary obtained using the *Inflow* and *Temperature* boundary conditions coincide. This agrees with the earlier analysis stating that when advective heat transfer dominates and an ideal gas model is used, the *Inflow* boundary condition is similar to a *Temperature* boundary condition. It is interesting to mention that, from a numerical point of view, the two conditions behave similarly in this case. (For example, the number of iterations taken by the nonlinear solver is identical for both conditions.)

Apart from the temperature profile, another quantity that should be monitored is the heat rate induced by the heat flux. The table below contains this heat rate for the different values of *L*_{inlet}. One column contains the value for the *Inflow* boundary condition and the other for the *Temperature* boundary condition.

*The heat rate tabulated for the case with highest inlet velocity.*

When the *Inflow* boundary condition is used, the heat rate is almost constant. When using a *Temperature* boundary condition, the heat rate is affected by the value of *L*_{inlet}.

Because the velocity is lower in this case, the advective effects no longer dominate. The image below to the left shows the temperature field obtained using the *Temperature* boundary condition (top) and *Inflow* boundary condition (bottom). Although the two plots look similar, a closer look at them reveals that at the end of the inlet boundary, there is a difference between the two temperature profiles.

The graph to the right shows the temperature profile along the reference line. As before, the solid lines represent the results obtained using an *Inflow* boundary condition, the dotted lines correspond to the *Temperature* boundary condition, and the different colors correspond to different values of *L*_{inlet}.

Again, for *L*_{inlet} = 0, the *Temperature* boundary condition prescribes a constant temperature along the reference line. This temperature profile is far from the solution obtained with the largest values of *L*_{inlet}. As before, we see that as *L*_{inlet} increases, the temperature converges to a given profile. However, here, the convergence is slower, compared to the case with *U*_{in} = 10 cm/s. Comparing the solution obtained using the *Inflow* boundary condition and the *Temperature* boundary condition, we notice that for any value of *L*_{inlet}, the solution obtained using the *Inflow* boundary condition is always closer to the converged profile.

*Left: Temperature field in the channel using the* Temperature *boundary condition (top) and* Inflow *boundary condition (bottom) for* L*_{inlet} = 0 and the lowest inlet velocity. Right: Temperature profile along the reference line for different values of* L*_{inlet}.*

The table below again shows the heat rate for the two boundary conditions.

*The heat rate tabulated for the case with the lowest inlet velocity.*

The trend is similar to the first case, but when a *Temperature* boundary condition is used, the influence of *L*_{inlet} on the heat rate is much larger. Using a *Temperature* boundary condition with *L*_{inlet} = 0, the value of the heat rate is overestimated by a factor of almost 2 compared to the solution obtained with a long inlet. Using an *Inflow* boundary condition, the heat rate is correctly predicted for any value of *L*_{inlet}.

These results show that when *L*_{inlet} is small (especially when *L*_{inlet} = 0), the temperature profile and the heat flux are more realistic using an *Inflow* boundary condition rather than a uniform *Temperature* boundary condition. This can be explained by the fact that at the inlet, a uniform temperature profile is not realistic. In practical situations, the temperature is not controlled exactly at the inlet but, for example, in a tank located at some distance.

While in many configurations, the *Temperature* and *Inflow* features describe similar conditions and lead to similar simulation results, there are a number of configurations (especially for slow flow and small dimensions) where the conductive effects are not dominated by the advective effects and where the *Inflow* boundary condition usually leads to a temperature profile that is closer to the reality than a *Temperature* boundary condition. In addition, a *Temperature* boundary condition could enforce an erroneous temperature value that induces large heat fluxes that are not realistic.
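A quick way to judge whether advection dominates is the thermal Péclet number, Pe = ρc_pUL/k. The sketch below uses typical room-temperature air properties and an assumed channel height; these values are illustrative and not taken from the model:

```python
# Estimate the thermal Peclet number, which compares advective to
# conductive heat transport: Pe = rho * c_p * U * L / k.
# When Pe >> 1, advection dominates and the Inflow and Temperature
# boundary conditions give nearly identical results; when Pe is of
# order 1 or smaller, upstream conduction matters and the two
# conditions diverge. The property values below are typical for air
# at room temperature (illustrative assumptions, not from the article).

def peclet(rho, c_p, U, L, k):
    """Thermal Peclet number for velocity U and length scale L."""
    return rho * c_p * U * L / k

rho = 1.2      # density (kg/m^3)
c_p = 1005.0   # specific heat capacity (J/(kg*K))
k = 0.026      # thermal conductivity (W/(m*K))
L = 0.01       # channel height (m), assumed

for U in (0.10, 0.001):  # a fast and a slow inlet velocity (m/s)
    print(f"U = {U} m/s -> Pe = {peclet(rho, c_p, U, L, k):.1f}")
```

With the fast velocity, Pe is well above 1 and the two boundary conditions should agree; with the slow one, Pe drops below 1 and the *Inflow* condition becomes the safer choice.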

As the *Inflow* boundary condition is simple to use and usually induces no additional numerical cost, it ought to be the first choice for defining a heat transfer condition at a flow inlet. The vast majority of model examples in the Application Libraries use it.

Learn more about all of the functionality available for heat transfer modeling in COMSOL Multiphysics by clicking the button below.

: Heat capacity (SI unit: J/K/kg)

: Heat transfer coefficient (SI unit: W/m^{2}/K)

: Boundary normal vector (dimensionless)

: Thermal conductivity (SI unit: W/m/K)

: Pressure (SI unit: Pa)

: Heat flux (SI unit: W/m^{2})

: Pressure of the upstream (SI unit: Pa)

: Temperature (SI unit: K)

, : Cold and hot temperatures (SI unit: K)

: Temperature of the upstream (SI unit: K)

: Inlet temperature (SI unit: K)

: Coefficient of thermal expansion (SI unit: 1/K)

: Density (SI unit: kg/m^{3})

: Velocity (SI unit: m/s)

: Enthalpy change vs. reference enthalpy (SI unit: J/kg)

Thermosiphons have been used for keeping houses warm since the 1800s. These devices use central heaters and pipe networks that carry water and steam to different rooms. The cool part (figuratively) is that no pump is needed for fluid transport — convective currents induced by the heater located at the bottom of an installation are enough. Let’s discuss modeling thermosiphons using a “pseudofluid” with temperature-dependent properties.

From their initial applications in large-scale heating, thermosiphons have since been used in various industries that rely on efficient heat transfer in small spaces. Today, thermosiphons are found in a wide range of applications: collecting heat from solar panel arrays, heating water and food, cooling IC engines, and even cooling electronic ICs.

*An example of a thermosiphon. Image by Gilabrand at English Wikipedia. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

One reason why thermosiphons can be very efficient is that they can operate near the phase change temperature of the transport fluid. This means that the fluid, while carrying heat from point A to point B, uses that heat not only to raise its temperature, but also to change its phase from liquid to vapor.

There are two reasons why phase change in a thermosiphon can be a significant advantage. First, a phase change produces a much greater change in density than a rise in temperature does, so the convective currents that drive fluid transport are established far more easily.

Also, thermosiphon fluids typically need as much heat to change phase as they would to raise the temperature by hundreds of degrees (Celsius). For instance, water has a specific latent heat of vaporization of 2264.7 kJ/kg, whereas the specific heat of water is 4.186 kJ/kgK. This means that the amount of heat absorbed by water, while changing to steam, is 541 times the amount it would need to raise its temperature by 1°C. This means that a lot more heat can be absorbed from a source at a specific temperature if the fluid is changing phase instead of rapidly heating up.

The heat transfer from one body to another is proportional to the temperature difference between them. A fluid that stays at a certain temperature during a larger heat transfer would have the advantage of maintaining the same temperature difference for a longer time. This means that the heat transfer rate would stay high for a longer time, instead of dropping as the temperature difference between the heat source and the fluid reduces. However, this very source of efficiency can make modeling the thermosiphon a challenge.
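The figures quoted above can be checked directly:

```python
# Reproduce the comparison above: the heat absorbed by water while
# vaporizing versus the heat needed to raise its temperature by 1 degC.
# The values are the ones quoted in the text.

L_vap = 2264.7   # specific latent heat of vaporization (kJ/kg)
c_p = 4.186      # specific heat of liquid water (kJ/(kg*K))

ratio = L_vap / c_p   # equivalent degrees of sensible heating
print(f"Vaporizing 1 kg of water absorbs as much heat as raising "
      f"its temperature by {ratio:.0f} degC")
# -> about 541 degC, matching the factor quoted above
```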

Modeling a flow that involves phase change can be computationally demanding. A usual phase-change fluid flow model involves:

- Two separate domains (one liquid, one vapor)
- An “interface tracking” approach between the two domains that requires a moving mesh on both domains

Another drawback is that this method doesn’t allow topological changes in the interface between the two domains. The creation or merging of bubbles of vapor, for instance, would not be permissible.

Since the interface between the domains is a surface, there would be no modeling of the “slushy”, part-liquid–part-vapor transitional situations. Modeling a thermosiphon with this approach would create an approximation that has a single boundary between the liquid and vapor, which moves as the fluid undergoes a phase change.

A different approach to modeling this kind of fluid flow problem involves using a single domain of what we’ll call a *pseudofluid*. This pseudofluid is a material with properties defined as a function of temperature. The properties change from those of the liquid to those of the vapor, over a small region known as the *phase transition window*. In the figures below, we see how a cross-phase density function is defined to indicate the transition of state from liquid to vapor.
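As a minimal illustration of this idea, the cross-phase density can be sketched as a smooth blend between a liquid value and a pressure-dependent ideal-gas vapor value. The tanh step, the window width, and all numerical values below are illustrative assumptions, not the authors' actual transition function:

```python
import math

# Minimal sketch of a "pseudofluid" density: a single function of
# temperature and pressure that transitions smoothly from liquid to
# vapor properties over a small phase transition window. The smoothed
# tanh step and the window half-width are illustrative assumptions.

RHO_LIQ = 958.0      # liquid water density near 100 degC (kg/m^3)
R_SPEC = 461.5       # specific gas constant of steam (J/(kg*K))
T_SAT = 373.15       # saturation temperature at 1 atm (K)
DT = 5.0             # phase transition window half-width (K), assumed

def vapor_fraction(T):
    """Smoothed step from 0 (liquid) to 1 (vapor) across T_SAT."""
    return 0.5 * (1.0 + math.tanh((T - T_SAT) / DT))

def density(T, p):
    """Cross-phase density: liquid value blended with ideal-gas steam.
    The pressure dependence of the vapor branch is what keeps mass
    conserved as the gas expands and compresses."""
    rho_vap = p / (R_SPEC * T)   # ideal-gas steam density
    x = vapor_fraction(T)
    return (1.0 - x) * RHO_LIQ + x * rho_vap

p_atm = 101325.0
print(density(T_SAT - 50, p_atm))   # ~958 kg/m^3, essentially liquid
print(density(T_SAT + 50, p_atm))   # ~0.52 kg/m^3, essentially vapor
```

Shrinking `DT` sharpens the transition toward a true phase boundary, at the cost of the convergence difficulties discussed below.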

Note: A similar modeling philosophy is used in the *Phase Change Material* node in the Heat Transfer Module. Though the phase transition could also be modeled using this node, along with the *Nonisothermal Flow* multiphysics coupling option, the pseudofluid approach allows flexibility in the definition of the phase transition function. The pseudofluid material models the phase transition based on two parameters: temperature and pressure. Accurate modeling of the pressure-dependent variation of the density of steam is critical for ensuring mass conservation. The fluid flow and heat transfer equations are solved using the single-phase *Laminar Flow* and *Heat Transfer in Fluids* physics nodes available in the COMSOL Multiphysics® software. These two nodes collectively solve the equations for conservation of mass, momentum, and energy.

This modeling approach enables apparent topological changes between phases, since there aren’t any domain boundaries to deal with. This overcomes one of the major approximations of the interface tracking approach. Our solution could now have plenty of pockets of fluid transitioning from one phase to another, which is in line with our everyday observations of fluids brought to a boil, for instance.

There are two approximations inherent in the pseudofluid approach. First, it doesn’t take surface tension forces into account; so even though topological changes are handled, a major contributor to bubble formation during boiling is still left out. Second, the phase transition occurs over a small range of temperatures instead of at a single value. The smaller this range, the more accurately the phase change is captured. Ideally, we would choose a range that represents the intermediate slushy stage well. However, a smaller range makes it more difficult for the solution to converge.

The pseudofluid approach is successful in setting up convection currents and representing the phase change process of the fluid. In the video below, we see a simple vertical container with a fluid being heated, modeled with this approach. Initially, the top portion of the container is filled with vapor, while the bottom portion contains liquid. The container is heated from its bottom surface, and we can see the gradual phase change to convert the entire contents into vapor.

The image below shows the formation of different phases of fluid (represented by their density), as well as the local velocity of the convection currents in a tilted tube, which represents the thermosiphon flask.

*Comparing the quantitative performance of the pseudofluid model of the thermosiphon to experimental data — both from in-house experiments and literature (Ref. 1). Steady-state temperatures are compared with data from reference literature at different locations of a vertical thermosiphon. Temperature variation with respect to time is compared with experimental data for a tilted thermosiphon.*

The model seems to perform well, and the deviations in performance are within acceptable limits.

We developed and applied this pseudofluid modeling technique to optimize a real-world thermosiphon application. Once the fluid flow model is set up, the computation time is considerably reduced compared to the interface tracking approach. This frees up computational resources to optimize many other parameters of the thermosiphon.

One objective is to maximize the heat transfer rate of the apparatus. Using a fluid near its phase transition temperature greatly reduces the size of the thermosiphon needed for a certain heat transfer rate.

Another important parameter is the mass of fluid stored in the apparatus. With too much fluid, the heat input needed to vaporize it would be very high. It is even possible that the steady-state heat output would prevent the fluid from vaporizing at all, which would greatly reduce the efficiency of the thermosiphon. Not enough fluid would mean that very little heat would vaporize it. If the heat drawn from the thermosiphon is not high enough, the fluid remains vaporized in the steady state — once again losing the increased efficiency that comes with phase change.

Building on the fluid flow model discussed in this blog post, we can also optimize the thermosiphon’s dimensions, angle of slant, and design of the heat-absorbing surfaces.

It is relatively easy to imagine applying the pseudofluid modeling technique to problems concerning fluids that transition between gels and free-flowing liquids. A question worth asking: Can an improved pseudofluid model actually be used as a universal mechanics model? In other words, can this model include the whole spectrum of phase, from brittle solids like rock to free-flowing vapors?

Modeling pseudofluids and phase-transitioning material properties could help in unifying different physics models. A mathematical model that handles these transitions well could even change the way we think about “phases”. Traditional phases may well end up being thought of as approximate descriptions on a continuous spectrum of material states. Although the mathematics to accurately describe these phase changes hasn’t quite been perfected yet, we at Noumenon Multiphysics may have some developments soon!

Note that aside from the two approximations for this approach mentioned earlier, there are some other limitations. This model, though capable of predicting turbulent flows, may become computationally expensive in such scenarios. (For spatial resolution of turbulence, the mesh would need to be refined in the entire domain. For temporal resolution of turbulence, smaller time steps would be required for obtaining converged solutions. So, the number of mesh elements and computational time would both increase. On the other hand, while using a turbulence model along with this pseudofluid material model is also possible, it adds extra equations to the model.) This limitation seems acceptable as far as thermosiphons are concerned, since a turbulent thermosiphon would be greatly inefficient anyway. It is worth noting, however, that different turbulent flow models could be added to the fluid flow model for different applications.

The accuracy of the pseudofluid model depends greatly on the quality of data available about the fluid involved for different conditions of temperature and pressure. Most significantly, the change in density with respect to change in pressure needs to be accurately known to create a useful pseudofluid material model. This is relatively easy for water and steam, but collecting similar data for other fluids may be a more difficult task.

Mandar Gadgil is an associate engineer at Noumenon Multiphysics. At Noumenon Multiphysics, Mandar has played an important role in solving numerous challenging problems in modeling and simulation for the engineering industry. He has worked on multiphase, multispecies fluid flow models, models of electric flow for biomedical applications, electromagnetism, fluid-structure interaction, battery modeling, and much more. Mandar has completed his Master of Technology (M.Tech.) with a specialization in modeling and simulation from the Department of Applied Mathematics at the Defence Institute of Advanced Technology, India.

1. F. Bandar, L.C. Wrobel, and H. Jouhara, “Numerical modelling of the temperature distribution in a two-phase closed thermosyphon,” *Applied Thermal Engineering*, 2013.

Additive manufacturing is the process of creating a 3D object by adding one or more materials on top of each other layer by layer. To learn more about this type of manufacturing, we reached out to Professor Frédéric Roger of the Mines Telecom Institute, Lille-Douai Center. (IMT is a French public institution dedicated to higher education, research, and innovation in engineering and digital technologies.)

Professor Roger says that, in a sense, additive manufacturing is a bit like sewing or weaving. In both processes, a heterogeneous finished product is created by controlling how different raw materials are consolidated. In weaving, the materials are usually thread and yarn; however, additive manufacturing can use many materials, including polymers, metal alloys, ceramics, and composites.

*Choosing the right materials is important for creating an ideal finished product, be it a warm blanket (left, woven by my grandmother) or a customized aerospace part (right). Right image in the public domain in the United States, via Wikimedia Commons.*

This wide range of materials means that additive manufacturing can be used to design a large number of unique objects across many industries. For instance, Roger mentions that by using the right materials and thermodynamic conditions, engineers can make objects that withstand or adapt to severe environmental conditions. Such objects could even adapt to certain temperatures or chemical conditions by changing their shape or releasing chemical species (like drugs) that are trapped in a matrix. A transformation over time would add another dimension to the printed part, resulting in “4D printing”.

*Sometimes, additive manufacturing parts are inspired by natural forms, like the bio-inspired example pictured here. Image courtesy Frédéric Roger.*

According to Roger, the many opportunities that come with additive manufacturing make it “an unavoidable manufacturing process,” as it “offers new opportunities to develop optimized structures with advanced materials.” However, before engineers can create these structures, they have to improve the additive manufacturing process.

Since additive manufacturing is a complex process, it can be difficult to study. This technique varies based on the materials involved and the specific type of additive manufacturing. Studying this process also requires accounting for many different effects, such as:

- Multiple phase transformations
- Transfer of energy, mass, and momentum
- Sintering
- Photopolymerization
- Drying
- Crystallization
- Deformation
- Stress

To account for these factors, engineers can use the COMSOL Multiphysics® software, which Roger mentions is “a unique software that has great advantages in the simulation of additive manufacturing.” The software helps engineers to not only “optimize the additive manufacturing process but also to predict the mechanical and microstructural consequences on the product.” Through this, engineers can include all of the relevant physics and determine the ideal manufacturing conditions and part geometries that balance the needs of stiffness, weight reduction, and heat dissipation.

*Left: An example of the additive manufacturing process, which involves many different physics. Image by Les Pounder — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons. Right: Example of an additive manufacturing part created with two materials and filled with a honeycomb inner structure. Image courtesy Frédéric Roger.*

A challenge is that analyzing the additive manufacturing process while coupling the relevant physics can result in large model sizes and long computational times. To overcome this issue, Roger implements several different simulation strategies, such as activating mesh properties, using adaptive remeshing, and performing sequential simulations.

By taking a sequential approach, Roger is able to better analyze the succession of thermodynamic states that a material experiences during additive manufacturing. At the same time, this approach helps to reduce the complexity of the multiphysics couplings by dissociating them over time. As such, sequential simulations provide a way to comprehensively model and optimize the additive manufacturing process while reducing computational costs.

For their simulations, Roger and his team focused on fused-deposition modeling (FDM®), a common additive manufacturing technique that is both affordable and enables control over process parameters. The aim of the study was to optimize the internal and external geometry of a printed thermoplastic part and achieve the best possible performance. To accomplish these goals in an efficient manner, the team split their analysis into three parts, discussed below.

For more information about this study, check out the researchers’ paper.

In the first part of the study, the researchers wanted to minimize the total weight of a printed structure while maintaining a material distribution that maximizes stiffness. To do so, they used topological optimization and structural mechanics analysis to study a mechanical structure exposed to a tensile load.

*Original geometry and boundary conditions (left) and the Young’s modulus distribution that defines the optimal shape by color contrast (right). Left image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble presentation. Right image courtesy Frédéric Roger.*

Through the studies, they found an optimal shape for the part, determining that the middle of the shape has the highest stress levels. As such, the researchers divided the structure into domains based on the stress concentration field: a high-stress middle area surrounded by two low-stress areas. In the following study, they used this information to apply specific manufacturing conditions to the high-stress zone.

*The stress fields in the optimized geometry. Image courtesy Frédéric Roger.*

In the second study, the researchers aimed to increase the stability of the high-stress zone in their part by testing two possible infill strategies:

- Heterogeneous filling with variable densities
- Multimaterial filling

In the heterogeneous case, the team created a more resistant domain in the high-stress middle area by using a higher density of infill. At the same time, they minimized the weight of the external areas by using less material. The results indicated that the ideal geometry contains 60% material in the high-stress region and 20% material in the low-stress regions.

*Printing an optimized part using one material with varying densities. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

As shown below, the multimaterial case involved using red ABS plastic on the ends of the part and black conductive ABS with improved mechanical properties in the middle. The team found that they could replace the conductive ABS with similar ABS-based materials reinforced with fibers to increase stiffness.

*Printing an optimized part using two materials. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

After optimizing the inner and outer designs of the 3D-printed part, the researchers modeled the fused thermoplastic deposition process and evaluated manufacturing parameters. The resulting simulations helped them to accurately predict thermal history, wetting conditions, polymer crystallization, interactions between filaments, and residual stresses and strains. One example is shown below, depicting the plastic strain during the heating and cooling process.

*The fusion and solidification of a disk that is irradiated by a laser beam as well as the resulting plastic strain evolution. This analysis takes Newtonian fluid flow and solid thermomechanical properties into account. Animation courtesy Frédéric Roger.*

The study also investigated the heat and mass transfer within the first two layers of a thin-walled tube. The researchers were then able to analyze the plastic droplet deposition process and identify areas where the filaments reached fusion temperature. The animations of the material deposition study are shown below. They depict a heat source moving along a deposition pattern and heating the filaments up to fusion temperature, ~230°C for ABS droplets. The extruder path domain in the simulations is premeshed and the meshes are continuously activated depending on the extruder’s position.

*Two-layer circular deposition (top). The moving heat source represents the hot ABS deposition. The thermal expansion of the two layers (amplified by a factor of five), showing the moving heat source activating the properties of the material (bottom). Here, blue indicates a nonactivated mesh and the physical properties (thermal conductivity and stiffness) are close to zero. Animations courtesy Frédéric Roger.*
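The activation strategy described above, where premeshed elements along the extruder path keep near-zero material properties until the heat source reaches them, can be sketched in simplified 1D form. All numbers here are illustrative assumptions, not the actual COMSOL implementation:

```python
# Simplified sketch of property activation in a deposition simulation:
# the extruder path is meshed in advance, and each element's material
# properties stay near zero until the moving heat source (the extruder)
# has passed over it. This 1D illustration uses made-up numbers.

N_ELEM = 10     # premeshed elements along the extruder path
SPEED = 2.0     # extruder speed (elements per second), assumed
K_ABS = 0.17    # thermal conductivity of solid ABS (W/(m*K))
K_VOID = 1e-6   # near-zero conductivity for inactive elements

def conductivity(t):
    """Per-element conductivity at time t: an element is activated
    once the extruder has reached its position along the path."""
    extruder_pos = SPEED * t
    return [K_ABS if i < extruder_pos else K_VOID for i in range(N_ELEM)]

print(conductivity(0.0))   # nothing deposited yet
print(conductivity(2.5))   # first five elements activated
```

In the actual simulations, stiffness and the other material properties are switched on the same way, which is why the nonactivated (blue) mesh in the animations carries essentially zero conductivity and stiffness.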

Using these simulations, Roger and his team predicted the temperature field between the filaments during the deposition process, an important factor that affects filament adhesion. Similar analyses could help researchers compare different additive manufacturing conditions and determine the best deposition strategy for a specific application.

Roger says that these simulations enabled his team to “define an additive manufacturing part whose internal and external architectures give it the best possible industrial performance.” Of course, this is only the start of what can be achieved by combining additive manufacturing and multiphysics simulation.

If you have any tips for using COMSOL Multiphysics to study the additive manufacturing process, be sure to let us know in the comments below!

- Read more about the researchers’ work in their paper: “Optimal Design of Fused Deposition Modeling Structures Using COMSOL Multiphysics”

*FDM is a registered trademark of Stratasys, Inc.*

Triaxial testing is a method used to determine the stress-strain properties of soils by subjecting soil samples to constant lateral pressure while increasing vertical pressure. This test measures stresses in three mutually perpendicular directions.

The mechanical behavior of a rock, sand, or soil sample can be complicated to analyze, depending on the specimen. Even if the soil seems stable at first, the construction process can subject it to shifts and uneven settlements later on. Just look at what happened to the Tower of Pisa. This popular tourist destination was built on softer ground (clay, sand, and shells), which shifted over time and caused the tower to lean off-kilter.

*The Leaning Tower of Pisa, Italy. Image by Alkarex Malin äger — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

A triaxial test obtains the shear strength parameter measurements needed for a construction project. One reason why triaxial testing is so common is its versatility. During the procedure, you are able to control drainage; measure pore water pressures, stress, and strain; increase loads; and observe deflections until sample failure.

In a typical triaxial test, the soil sample is placed inside a rubber membrane and then axially compressed while maintaining a constant radial pressure. If the soil is cohesive, you can prepare test specimens directly from saturated, compacted samples. If the soil lacks cohesion, you can use a mold to keep the shape needed for the test.

There are three main triaxial tests:

- Consolidated – Drained (CD): The sample is consolidated and sheared in compression slowly, allowing the pore pressures to dissipate and the sample to adjust to the surrounding stresses.
- Consolidated – Undrained (CU): The sample is not allowed to drain and is assumed to be fully saturated. Pore pressures are measured to approximate the consolidated-drained strength.
- Unconsolidated – Undrained (UU): The sample is compressed at a constant rate, and loads are applied quickly, giving the sample no chance to consolidate.

Triaxial tests have a wide variety of application areas. For instance, triaxial testing is used in the oil and gas industry to determine the properties of shale cores and predict how soil responds during natural gas extraction. Also, triaxial shear tests are used for building dams and embankments. For applications such as underground expansion in cities as well as commercial and residential construction, triaxial testing is performed before excavation.

One advantage of triaxial testing is the adaptable machinery design, which includes many modification options. If you are testing high-strength rock, for instance, you can change the cylinder sleeve material from rubber to a thin metal sheeting. You can also vary the design of the triaxial testing apparatus to account for large loads or different methods of compression.

*A triaxial rock test system. Image by Jingyi Cheng, Zhijun Wan, Yidong Zhang, Wenfeng Li, Syd S. Peng, Peng Zhang — Own work. Licensed under CC BY 4.0, via Wikimedia Commons.*

How does triaxial testing apply to other environmental conditions that are harder to predict, like seismic shifts? During an earthquake, for instance, the mechanical behavior of soil can change considerably. The type of change it undergoes depends on several variables, such as:

- Duration and intensity of the quake
- Location of the quake
- Water table depth of the structure

In addition, earthquakes can initiate soil liquefaction. This type of quicksand can continue to cause damage long after the earthquake is over. Triaxial testing is often used to help predict structural damage in the event of a natural disaster.

*Liquefaction after the earthquake in Niigata, Japan, in 1964. Image in the public domain in the United States, via Wikimedia Commons.*

The Triaxial Earthquake and Shock Simulator (TESS) is an advanced example of a triaxial testing method for earthquake preparation. This device is operated by the U.S. Army Engineer Research and Development Center. TESS is able to independently control three axes at the same time, which provides more realistic earthquake conditions for engineers who want to test facilities and equipment for vulnerabilities.

Using the Geomechanics Module, an add-on to the Structural Mechanics Module and COMSOL Multiphysics, you can model a triaxial testing apparatus to examine its loading and unloading curves.

In this example, the apparatus consists of a cylinder that presses the soil sample from the top. A flexible membrane controls the surrounding pressure, which allows changes in the radial forces.

In the schematic below, you see how the different boundaries of the testing apparatus can be defined. Because the soil is subjected to loading at the top of the apparatus, the *Prescribed Displacement* boundary condition is used to slowly increase the vertical displacement.

*Dimensions, boundary conditions, and boundary load for the triaxial apparatus.*

This model is a starting point for testing the soil’s model parameters. The soil properties are taken from a standard clay material. This example uses the *Soil Plasticity* feature in the COMSOL® software with the *Drucker-Prager criterion*.
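As background, the Drucker-Prager criterion can be evaluated directly from the principal stresses. The sketch below uses one common sign convention and assumed material constants, not the parameters of this model:

```python
import math

# Sketch of the Drucker-Prager yield criterion: yielding occurs when
#     sqrt(J2) + alpha * I1 - k = 0,
# where I1 is the first stress invariant, J2 the second deviatoric
# stress invariant, and alpha, k are material constants. This uses one
# common sign convention (tensile stress positive); the clay-like
# parameter values below are illustrative assumptions.

def invariants(s1, s2, s3):
    """First invariant I1 and second deviatoric invariant J2
    of a principal stress state (Pa)."""
    i1 = s1 + s2 + s3
    j2 = ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 6.0
    return i1, j2

def drucker_prager(s1, s2, s3, alpha, k):
    """Yield function value: negative = elastic, zero = on the
    yield surface, positive = inadmissible (plastic correction)."""
    i1, j2 = invariants(s1, s2, s3)
    return math.sqrt(j2) + alpha * i1 - k

alpha, k = 0.2, 10e3   # assumed material constants (k in Pa)

# Triaxial state: equal confinement on two axes plus extra axial load.
p_conf = -20e3         # 20 kPa confinement pressure (compression)
print(drucker_prager(p_conf - 30e3, p_conf, p_conf, alpha, k))
# negative value: the sample is still elastic under this load
```

Note how the confinement enters through I1: with this convention, more compression makes αI1 more negative, so a larger deviatoric stress is needed to reach yield, which is exactly why the collapse load in the model grows with confinement pressure.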

After applying a vertical displacement and a confinement pressure to the specimen sample, you can study the results of the static response and collapse load for various confinement pressures.

After loading the soil sample, you can look at the effective plastic strain. Most of the sample suffers from plastic deformation (shown in red), with only a small part in the elastic region (shown in blue).

From here, you can test different confinement pressures by plotting the additional loading stress on the soil sample caused by the prescribed top displacement. Below is the extra loading stress for three different confinement pressures on the porous matrix:

Depending on the loading and draining conditions of the soil samples, you can conduct one or more of the three triaxial testing methods.

To get started with modeling a triaxial testing apparatus, click the button below. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file for detailed instructions.


Topology optimization helps engineers design applications in an optimized manner with respect to certain *a priori* objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses for microacoustics topology optimization.

A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.

The governing equation is the standard wave equation with material parameters given in terms of the density \rho and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via a variable, \epsilon. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.

*Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.*
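The interpolation behind Figure 1 can be sketched in a few lines. The material values and penalization exponent below are illustrative assumptions, not the ones used in the post.

```python
import numpy as np

def simp(eps, val_air, val_solid, p=3.0):
    """SIMP-style interpolation between air (eps = 0) and solid (eps = 1).
    The exponent p penalizes intermediate values of the design variable,
    pushing the optimizer toward a binary air/solid layout."""
    return val_air + eps**p * (val_solid - val_air)

# Illustrative material values (not the ones behind Figure 1)
rho_air, rho_solid = 1.2, 2700.0   # density, kg/m^3
K_air, K_solid = 1.42e5, 7.0e10    # bulk modulus, Pa

eps = np.linspace(0.0, 1.0, 5)
print(simp(eps, rho_air, rho_solid))  # ramps monotonically from 1.2 to 2700
```

The same function interpolates the bulk modulus; the penalization makes intermediate "gray" material unattractive to the optimizer.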

Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the *Narrow Region Acoustics* feature in the *Pressure Acoustics, Frequency Domain* interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.

For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.

*Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.*

An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.

The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.

Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. These equations are implemented in the *Thermoviscous Acoustics* physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this full formulation is not suited for topology optimization; there, certain simplifying assumptions can be exploited instead. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

\vec{v}=\Psi_{v} \frac{\nabla{p}}{ik{\rho_0}c}

where \Psi_{v}, the viscous field, is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
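A quick consistency check of this expression: in the bulk, where \Psi_{v} = 1, it must reduce to the velocity given by Euler's equation, as stated above. The frequency and gradient amplitude below are arbitrary choices for illustration.

```python
import numpy as np

# Bulk-limit check of v = Psi_v * grad(p) / (i*k*rho0*c): with Psi_v = 1
# (far from walls) this reduces to Euler's equation, v = grad(p)/(i*omega*rho0).
rho0, c = 1.2, 343.0      # density of air (kg/m^3) and speed of sound (m/s)
omega = 2 * np.pi * 1e3   # angular frequency at 1 kHz (chosen for illustration)
k = omega / c             # acoustic wave number

grad_p = 100.0            # pressure-gradient amplitude (Pa/m), arbitrary
psi_v = 1.0               # bulk conditions: no viscous shear

v = psi_v * grad_p / (1j * k * rho0 * c)
v_euler = grad_p / (1j * omega * rho0)
print(np.isclose(v, v_euler))  # True
```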

In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as

T=\Psi_{h} \frac{p}{{\rho_0}{C_p}}

where \Psi_{h}, the thermal field, is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.

As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.

For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.

For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be familiar to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as

(1)

\Psi_{v}+ k_v^{-2} \Delta_{cd} \Psi_{v}=1

where \Delta_{cd} is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the *Narrow Region Acoustics* feature in the *Pressure Acoustics, Frequency Domain* interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
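To make Equation 1 concrete, here is a finite-difference solution across a 1D slit cross section with no-slip walls. The sign convention k_v² = −iωρ₀/μ assumes an exp(iωt) time dependence; the slit width and frequency are illustrative choices, not values from the post.

```python
import numpy as np

# 1D finite-difference solution of Eq. (1), Psi_v + k_v^-2 * Psi_v'' = 1,
# across a slit of width h with no-slip walls (Psi_v = 0 at both ends).
rho0, mu = 1.2, 1.8e-5        # air density (kg/m^3) and dynamic viscosity (Pa*s)
omega = 2 * np.pi * 1000.0    # angular frequency at 1 kHz
kv = np.sqrt(-1j * omega * rho0 / mu)   # viscous wave number (exp(i*w*t) convention)

h, N = 1e-3, 201              # slit width (m) and number of grid points
y = np.linspace(0.0, h, N)
dy = y[1] - y[0]

# Assemble (I + kv^-2 * D2) Psi = 1, with Dirichlet rows at the walls
A = np.zeros((N, N), dtype=complex)
b = np.ones(N, dtype=complex)
for j in range(1, N - 1):
    A[j, j - 1] = A[j, j + 1] = kv**-2 / dy**2
    A[j, j] = 1.0 - 2.0 * kv**-2 / dy**2
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0            # no slip: Psi_v = 0 on the walls

psi = np.linalg.solve(A, b)
print(abs(psi[N // 2]))       # close to 1: bulk conditions at mid-channel
```

At 1 kHz the viscous boundary layer (roughly 70 µm in air) is much thinner than the 1 mm slit, so the field transitions from 0 at the walls to about 1 in the bulk, exactly the behavior described above.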

In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are

\Psi_{v} = 0 \quad \textrm{(no slip)}

and

\nabla_{cd}\Psi_{v} = 0 \quad \textrm{(slip)}

These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:

a_{v} \Psi_{v}+k_{v}^{-2}\Delta_{cd}\Psi_{v}=f_{v}

We already know that for air domains, (a_{v},f_{v}) = (1,1), since that gives us the original equation (1). If we instead set a_{v} to a large value, so that the Laplacian term becomes insignificant, and set f_{v} to zero, we get

a_{v} \Psi_{v} = 0

This corresponds exactly to the boundary condition for no-slip boundaries, just as a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_{v},f_{v}) should have values of (“large”,0). Thus, we have established our interpolation extremes:

a_{v}(\epsilon)= \left\{ \begin{array}{ll}1\ \textrm{for}\ \epsilon=0\ \textrm{(air)} \\ \textrm{large}\ \textrm{for}\ \epsilon=1\ \textrm{(solid)} \end{array} \right.

and

f_{v}(\epsilon)= \left\{ \begin{array}{ll}1\ \textrm{for}\ \epsilon=0\ \textrm{(air)} \\ 0\ \textrm{for}\ \epsilon=1\ \textrm{(solid)} \end{array} \right.
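One way to connect these extremes with a smooth interpolation is a RAMP-style scheme; the sketch below is an assumption for illustration, and the value chosen for "large" is hypothetical, not taken from the post.

```python
def ramp(eps, q=5.0):
    """RAMP interpolation of eps in [0, 1]; q controls the penalization."""
    return eps * (1.0 + q) / (1.0 + q * eps)

def a_v(eps, large=1e4):
    """Coefficient a_v: 1 for air (eps = 0), 'large' for solid (eps = 1).
    The value of 'large' is a hypothetical choice, not from the post."""
    return 1.0 + ramp(eps) * (large - 1.0)

def f_v(eps):
    """Source term f_v: 1 for air (eps = 0), 0 for solid (eps = 1)."""
    return 1.0 - ramp(eps)

print(a_v(0.0), f_v(0.0))   # 1.0 1.0      (air extreme)
print(a_v(1.0), f_v(1.0))   # 10000.0 0.0  (solid extreme)
```

At the extremes, the governing equation reduces to the original air equation or to the no-slip condition, exactly as the derivation above requires.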

I carried out a comparison between the explicit boundary conditions and interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas on the adjacent domains on the right, the suggested values of a_{v} and f_{v} are input.

*Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.*

The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.

*Figure 4: The resulting field with contours for the setup in Figure 3.*

The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via the relations given earlier. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.

Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular area with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology so that we obtain a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:

*Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.*

A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.

*Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.*

The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.

*Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.*

For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.

This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics, in both universities and industry, to keep improving topology optimization. I hope to see many advances in this area in the future.

- W.R. Kampinga, Y.H. Wijnant, A. de Boer, “An Efficient Finite Element Model for Viscothermal Acoustics,” *Acta Acustica united with Acustica*, vol. 97, pp. 618–631, 2011.
- M.P. Bendsøe, O. Sigmund, *Topology Optimization: Theory, Methods, and Applications*, Springer, 2003.

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.


Compared to other heat exchangers, compact heat exchangers have a much larger heat transfer area per volume, usually thanks to dense arrays of plates or tubes. This attribute makes these heat exchangers lighter and more compact than classical heat exchangers. One disadvantage of the smaller heat exchangers is that they have higher pressure drops, which limits the flow rate and thus the amount of heat they can transfer.

*An illustration of a plate-and-frame heat exchanger, a common type of compact heat exchanger.*

In Reference 1, researchers explored whether they could improve the performance of compact heat exchangers by adding a dynamic wall. When the wall deforms, it generates oscillations that help mix the fluid and disrupt the thermal boundary layers. As a result, the heat exchanger is able to transfer more heat. In addition, the oscillations generate a pumping effect similar to that of a peristaltic pump. This makes up for pressure losses, increasing the efficiency of the heat exchanger.

Oscillation might be a useful way to enhance the performance of compact heat exchangers. Using COMSOL Multiphysics, we can test this idea by easily creating and examining a model of the dynamic wall heat exchanger…

We start by modeling a static heat exchanger without a dynamic wall. This way, we can compare the results of both heat exchanger designs.

The static heat exchanger geometry consists of an upper wall, bottom wall, and channel. Fluid (water in this case) moves through the channel, steadily increasing in temperature due to a heat flux applied to the bottom wall. At this wall, we set the delivered heat rate to 125 W. Probes at the outlet determine the temperature and mass flow rate of the water when it exits the exchanger.
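The outlet probes can be sanity-checked with a simple energy balance: at steady state, the delivered heat rate must equal the mass flow rate times the specific heat times the temperature rise. The specific heat of water below is an approximate textbook value, not taken from the model.

```python
# Steady-state energy balance for the channel: Q = m_dot * cp * (T_out - T_in).
Q = 125.0          # delivered heat rate at the bottom wall (W), from the post
m_dot = 5.5e-3     # mass flow rate (kg/s), static-case result from the post
cp = 4180.0        # specific heat of water (J/(kg*K)), approximate

dT = Q / (m_dot * cp)   # expected temperature rise across the channel
print(round(dT, 2))     # about 5.4 K
```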

*The geometry of a static heat exchanger.*

Next, we prescribe a deformation on the upper wall based on the following parameters:

- Time
- Channel height
- Channel length
- Oscillation frequency
- Oscillation amplitude
- Number of waves in the channel length direction

*Animation showing the deformation of the dynamic wall.*
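One plausible way to combine the listed parameters into a deformation is a traveling sine wave, as sketched below. The exact expression is in the model documentation; everything here, including the parameter values, is an assumed illustration.

```python
import numpy as np

def wall_deformation(x, t, H=1e-3, L=0.1, f=50.0, A=0.9, n=3):
    """Hypothetical traveling-wave position of the upper wall, built from
    the listed parameters: channel height H, channel length L, oscillation
    frequency f, relative amplitude A, and n waves along the channel.
    The wall position stays between (1 - A)*H and H."""
    return H * (1.0 - 0.5 * A * (1.0 + np.sin(2.0 * np.pi * (n * x / L - f * t))))

x = np.linspace(0.0, 0.1, 5)
print(wall_deformation(x, t=0.0))  # bounded between (1 - A)*H and H
```

Because the wave travels along the channel, the constriction sweeps from inlet to outlet, which is the source of the peristaltic pumping effect mentioned above.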

For the full details of how to model the dynamic wall heat exchanger, go to the Application Gallery, where you can download the model documentation and MPH-file.

To simulate the heat transfer and oscillation, we couple two built-in features. The first is the *Conjugate Heat Transfer* multiphysics coupling, which enables us to account for the heat transport between the exchanger and the water. We combine that coupling with the *Moving Mesh* feature, which simulates the deformation of the wall and channel.

Let’s look at the results for the static analysis of the heat exchanger. When the upper wall remains flat, we get a mass flow rate of 5.5 g/s and a heat transfer coefficient of 2900 W/(m^{2}·K).

*The temperature profile in the channel for the static heat exchanger.*

Next, let’s look at the time-dependent analysis for the dynamic wall heat exchanger. The oscillation reaches a pseudoperiodic state after around 0.6 seconds. After it enters this regime, the average mass flow rate is 10.5 g/s, nearly double the rate at static conditions. As expected, the heat transfer coefficient is also higher: about 19,000 W/(m^{2}·K) for an oscillation amplitude of 90%.

*Left: The variations in temperature and flow rate. Right: The temperature profile in the channel of the dynamic wall heat exchanger.*

With simulation, it’s possible to analyze and optimize heat exchanger designs for maximum performance and efficiency.

- Check out these related blog posts:

- P. Kumar, K. Schmidmayer, F. Topin, and M. Miscevic, “Heat transfer enhancement by dynamic corrugated heat exchanger wall: Numerical study,”
*Journal of Physics: Conference Series*, vol. 745, 2016.

What if there was a bright side to hitting a pothole? Innovations in vehicle suspension technology could make this possible. Potential developments include a method for converting kinetic energy into electrical energy to power vehicles, software-driven shocks that can mitigate potholes, and mechanical suspension settings that adjust with voice commands.

Enhanced suspension systems are not possible without first developing a strong foundation. The suspension system in any vehicle, after all, needs to adapt to load variations, absorb dips and bumps in the road, and more. If not, common suspension problems arise, such as poor wheel alignment, wearing springs, and damaged dampers.

*An example of a chassis with a suspension system. Image by Christopher Ziemnowicz — Own work. Licensed under CC BY-SA 2.5, via Wikimedia Commons.*

By setting up a simplified lumped model in the COMSOL Multiphysics® software, you can analyze and optimize vehicle suspension system designs.

Available as of version 5.3a of COMSOL Multiphysics®, the *Lumped Mechanical System* interface can be used for modeling discrete mechanical systems, such as masses, springs, and dampers, in a nongraphical format. You have the option of connecting these systems to a 2D or 3D *Multibody Dynamics* interface. Both interfaces are available in the Multibody Dynamics Module.

In this tutorial, the lumped model of the vehicle suspension system has three main components:

- Wheels
- Seats
- Body

*The lumped model of a vehicle suspension system with three main components.*

Each wheel has one degree of freedom (DOF) and is represented by a green circle in the image above. Each seat is represented by a blue circle and also has one DOF. At the center of gravity, the body has three DOFs, accounting for its rotation and vertical motion:

- Roll
- Pitch
- Heave

You can use a *Rigid Domain* node and *Prescribed Displacement/Rotation* subnode in the *Multibody Dynamics* interface to restrict the number of DOFs for the body.

To model the wheel and seat, you use the *Mass*, *Spring*, and *Damper* nodes within the *Lumped Mechanical System* interface. The full vehicle model includes all four wheels and four seats, and each component is defined as a subsystem.

In the schematic below, the mass (m), spring (k), and damper (c) are shown. The lumped model of the wheel accounts for its mass and stiffness, as well as the stiffness and damping of the vehicle suspension. The lumped model of the seat accounts for its stiffness and damping, as well as the mass of the passenger.

*The lumped model of a wheel and seat.*

The *Lumped Mechanical System* interface enables you to model the vehicle body as an *External Source* in the lumped mechanical system. This helps to connect the suspension system with the vehicle body at the wheel-body and body-seat points.

Through a transient analysis, you can compute both the vehicle motion and seat vibration levels for a given road profile. In this scenario, the bump height for the road is 4 cm and the width is 7.5 cm. The vehicle is assumed to be moving with a constant speed of 40 km/h. The road profile is modeled by assuming a series of bumps on the road, but only the left wheels of the vehicle are assumed to be moving over the bumps.
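The tutorial models the full vehicle, but the same idea can be sketched on a quarter-car (one wheel plus one body corner) integrated directly in Python. The masses, stiffnesses, and damping values below are illustrative guesses; only the bump size and vehicle speed come from the post.

```python
import numpy as np

# Quarter-car sketch of the lumped suspension: sprung mass (body corner),
# unsprung mass (wheel), suspension spring/damper, and tire stiffness.
ms, mw = 250.0, 40.0           # sprung / unsprung masses (kg), assumed
ks, cs = 16e3, 1.2e3           # suspension stiffness (N/m) and damping (N*s/m)
kt = 160e3                     # tire stiffness (N/m), assumed

v = 40.0 / 3.6                 # vehicle speed: 40 km/h in m/s
bump_h, bump_w = 0.04, 0.075   # bump height and width from the post (m)

def road(t):
    """Half-sine bump traversed at speed v, starting at t = 0."""
    x = v * t
    return bump_h * np.sin(np.pi * x / bump_w) if 0.0 <= x <= bump_w else 0.0

# Semi-implicit Euler integration of the 2-DOF equations of motion
dt, T = 1e-4, 2.0
zs = zu = vs = vu = 0.0        # body/wheel displacements and velocities
for i in range(int(T / dt)):
    t = i * dt
    fs = ks * (zu - zs) + cs * (vu - vs)   # suspension force on the body
    ft = kt * (road(t) - zu)               # tire force on the wheel
    vs += dt * fs / ms
    vu += dt * (ft - fs) / mw
    zs += dt * vs
    zu += dt * vu

print(abs(zs) < 0.01)  # True: the body settles back after the bump
```

With positive damping the transient decays, so the body returns to its rest position; tracking zs, zu, and the spring forces over time gives exactly the kind of time histories discussed below.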

Let’s take a look at the time history of the vehicle’s roll, pitch, and heave. These results could be useful for designing shocks that intuitively reduce the amount of roll, pitch, and heave after hitting a pothole.

As shown below, the roll rotation is larger than the pitch rotation for the given road excitation as the left side of the vehicle is moving over the bumps given in the road profile. You can also see the corresponding velocities for the roll, pitch, and heave motions in the velocity plot below on the right. Two different frequencies — low and high — correspond to the natural frequencies for the components of the system.

*Vehicle roll, pitch, and heave motions at the center of gravity (left) and corresponding vehicle velocities (right).*

If you want to harness the kinetic energy induced by hitting a pothole, for example, you need to determine how the vehicle moves and the rate at which it moves. In this case, you can analyze the time history of displacement and acceleration at all four seat locations. The seat displacement results show that the left side of the vehicle has a much larger displacement because this side goes over the bumps in the road, whereas the right side does not.

*Time history of seat displacements (left) and seat accelerations (right).*

Finally, to determine how soft or hard the suspension is and modify it accordingly, we want to find out what the forces are in the springs. The results show that the force magnitude in the spring and damper of a wheel is much larger than that of a seat. This is because the force is absorbed by the inertia of the wheels and the vehicle body, so only a fraction of the force is transmitted from the wheel to the seat. Additionally, the frequency of vibration is much lower for the forces in the seat compared to the forces in the wheel — making for a smoother ride.

*Forces in the springs and damper of the front-left wheel (left) and front-left seat (right).*

This simplified model provides a solid foundation for analyzing vehicle suspension, which you can then compare to data from experiments. With verified results, you can enhance suspension system designs for real-world performance.

Try the Lumped Model of a Vehicle Suspension System tutorial yourself via the button above. From there, you can download the MPH-file if you have a COMSOL Access account and a valid software license.
