When evaluating loudspeaker performance, dips and peaks in the on-axis sound pressure level can result from an unfortunate distribution of phase components. To overcome this, we use a phase decomposition technique that splits the total surface vibration into three components according to how they contribute to the sound pressure at an arbitrary observation point: each component either adds to, subtracts from, or does not contribute to the pressure.

A commonly used metric of loudspeaker performance is the sound pressure level as a function of frequency at an on-axis observation point. If at some frequency the overall displacement is decreased compared to adjacent frequencies, the resulting sound pressure level will have a corresponding dip at that frequency. However, the converse is not necessarily true. In other words, a dip in sound pressure is not always the result of the overall displacement being low; it may be that part of the vibrating surface contributes negatively to the resulting sound pressure.

By applying phase decomposition to the vibration, the underlying nature of the vibration can be exposed. The decomposition splits the total displacement into three parts, each of which either adds to, subtracts from, or has no influence on the sound pressure at an observation point of choice. The technique is available in at least one software package, but there the intended input is measured data from a laser vibrometer rather than simulated vibrations.

Third-party software is available to enable the use of exported displacement data from COMSOL Multiphysics as input for the scanning software, but here I will show you how the entire analysis can instead be carried out using only COMSOL Multiphysics and the Acoustics Module.

As a starting point, we assume that the vibrating surface is flat and placed in an infinitely large baffle. Since only the normal displacement contributes to the sound generation, we align the surface normal \mathbf{n} with the z-axis and denote the corresponding displacement phasor \mathbf{w}. This is in accordance with the COMSOL Multiphysics convention.

*The flat radiation surface area shown in gray is assumed to be placed in an infinite baffle.*

For this situation of a flat vibrating surface, the sound pressure can be calculated via the so-called *Rayleigh integral* (see any standard acoustics textbook; for a more advanced treatment, see, for example, *Fourier Acoustics* by E. G. Williams):

\pmb{p}(P) = \frac{-\omega^2\rho}{2\pi} \int_{S} \mathbf{w}(Q) \frac{e^{-ikR}}{R}dS

Here, \pmb{p}(P) is the pressure phasor at the observation point P, \omega is the angular frequency, \rho is the density of the fluid medium, \mathbf{w}(Q) is the displacement phasor at a point Q on the vibrating surface S, k is the wave number, and R is the distance from the point Q on the radiating surface to the observation point P.
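To make the integral concrete, here is a minimal numerical sketch of the Rayleigh integral for a baffled rigid piston evaluated on-axis, a case with a known closed-form solution to check against. All parameters (piston radius, frequency, velocity amplitude) are illustrative assumptions, not values from the examples later in this post.

```python
import numpy as np

# Minimal sketch: Rayleigh integral for a baffled rigid piston, evaluated
# at an on-axis observation point. All parameter values are assumptions.
rho = 1.2       # air density [kg/m^3]
c = 343.0       # speed of sound [m/s]
f = 1000.0      # frequency [Hz]
a = 0.05        # piston radius [m]
z = 0.5         # on-axis observation distance [m]
v0 = 1e-3       # uniform normal velocity phasor [m/s]

omega = 2 * np.pi * f
k = omega / c

# Axisymmetry reduces the surface integral to a radial one:
# p = j*omega*rho * int_0^a v0 * exp(-j*k*R)/R * r dr,  R = sqrt(z^2 + r^2)
r = np.linspace(0.0, a, 2001)
R = np.sqrt(z**2 + r**2)
integrand = v0 * np.exp(-1j * k * R) / R * r
# Trapezoidal rule over the radial grid
p_num = 1j * omega * rho * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

# Closed-form on-axis pressure of a baffled piston, used as a sanity check
p_ref = rho * c * v0 * (np.exp(-1j * k * z) - np.exp(-1j * k * np.sqrt(z**2 + a**2)))

print(abs(p_num), abs(p_ref))  # the two magnitudes agree closely
```

For a nonuniform simulated displacement, the same quadrature is simply carried out over a 2D grid of surface points instead of a single radial line.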

If air loading is low, that is, the forces exerted on the vibrating surface by the fluid are small enough that their effect on the surface vibration is negligible, there is no need to include an acoustic domain. In that case, the sound pressure can be determined from a purely structural simulation.

Assuming that the geometry in question is flat enough for the Rayleigh integral approach to be acceptable, the surface vibration can be split into three components:

- In-phase component, which contributes positively to the sound pressure
- Anti-phase component, which contributes negatively to the sound pressure
- Quadrature, or out-of-phase, component, which neither adds to nor subtracts from the sound pressure in the chosen observation point

The phase components can be determined by looking at a phasor diagram relating the phase of the total displacement at a point Q on the vibrating surface to the phase of the pressure at the observation point P.

*The phasor relationship between the displacement at a point Q and the pressure at the observation point P.*

Since the pressure here is found via the Rayleigh integral, the sign difference between the displacement and the pressure is first accounted for by a phase shift of π radians, indicated by the red arrow. Now, consider what an in-phase displacement component means: The phase of the in-phase displacement component should lead the phase of the pressure by a phase shift exactly matching the distance traveled by the sound wave from the local point on the surface to the observation point. This phase difference of kR is indicated by the blue arrow.

The phase of the displacement \arg(\mathbf{w}(Q)) is shown for a situation where it is not aligned with the in-phase axis. By projecting \mathbf{w}(Q) onto the in-phase axis, we can determine its in-phase component. We can find the out-of-phase and anti-phase projections in a similar way; the former is in quadrature with the in-phase component and the latter is offset by π radians from it. By visual inspection, we can observe that there is a nonzero out-of-phase projection, larger than the in-phase projection, but no anti-phase component for the surface point and observation point in question.

The analysis is carried out over the entire surface in order to obtain the three vibration components. Each component can subsequently be fed back into the Rayleigh integral to calculate its respective sound pressure component. By definition, the out-of-phase surface vibration has no corresponding sound pressure contribution; this displacement simply cancels out acoustically at the observation point in question.
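The projection logic can be sketched for a single surface point. The snippet below is a simplified illustration rather than the software's actual implementation; the `decompose` function and all numeric values are hypothetical.

```python
import numpy as np

def decompose(w, p_phase, kR):
    """Split a displacement phasor w into in-phase, anti-phase, and
    quadrature (out-of-phase) parts relative to the pressure phase at P."""
    # Reference direction: the pressure phase, shifted by pi (the sign in
    # the Rayleigh integral) plus the propagation phase kR.
    phi_ref = p_phase + np.pi + kR
    ref = np.exp(1j * phi_ref)
    proj = np.real(w * np.conj(ref))   # signed projection on in-phase axis
    quad = np.imag(w * np.conj(ref))   # projection on the quadrature axis
    w_in = max(proj, 0.0) * ref        # in-phase part
    w_anti = min(proj, 0.0) * ref      # anti-phase part
    w_quad = 1j * quad * ref           # out-of-phase part
    return w_in, w_anti, w_quad

# Example: a displacement that is purely in-phase plus some quadrature.
w = (2.0 + 1.0j) * np.exp(1j * (0.3 + np.pi + 1.2))
w_in, w_anti, w_quad = decompose(w, p_phase=0.3, kR=1.2)
print(w_in + w_anti + w_quad)  # sums back to the total displacement w
```

Repeating this at every surface point, with kR evaluated per point, yields the three component fields discussed above.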

Let’s review a couple of simulation examples: one of a vibrating disk and another of a loudspeaker.

First, we will illustrate the phase decomposition technique with a vibrating disk as a test case, where the individual phase components can be found by visual inspection.

We’ve chosen an on-axis observation point several radii away from the disk. Let’s consider the total plate vibration shown in the first of the four figures below. A larger part of the plate vibrates with one particular phase, whereas a smaller part vibrates with the opposite phase. The larger part of the displacement must be in-phase for an on-axis observation point. This can be realized by considering the “extreme” case where the entire vibration has only one phase: Such a vibration must contribute entirely positively to the sound pressure at an observation point on-axis and away from the plate surface.

We’ve applied the phase decomposition technique to the total displacement and found the expected in-phase component. Since the remaining displacement is in opposite phase to the in-phase component, it must be an anti-phase component. The analysis in COMSOL Multiphysics confirms this.

Lastly, since the total displacement is made up entirely of an in-phase and an anti-phase component, the out-of-phase component must be zero. This is also what we find via the phase decomposition.

*Total*

*In-phase*

*Anti-phase*

*Out-of-phase*

*Displacement components of the vibrating disk for an on-axis observation point.*

Note that if we choose another observation point — for example, off-axis and/or very close to the plate — we will get different displacement components than those shown above for the same total displacement.

Next, we use the phase decomposition technique for a 3-inch driver placed in a baffle. We carried out a complete 2D axisymmetric vibroacoustic simulation for a wide frequency range. The electromagnetic system was included in a lumped fashion, so that an input voltage could be applied directly.

The surface displacement components are illustrated for a frequency of 4.5 kHz. The total displacement pattern seems fairly simple, but visual inspection cannot reveal the individual phase components.

*Total*

*In-phase*

*Anti-phase*

*Out-of-phase*

*Displacement components of the loudspeaker surface at 4.5 kHz for an on-axis observation point.*

Just as with the previous example, we chose an observation point on-axis and several radii away from the surface. The in-phase component is concentrated around the inner part of the surface, the cone, whereas there is no in-phase displacement at the outer part of the surface, the so-called *surround*. The anti-phase component is concentrated around the surround.

This means that the surround is the part to investigate (material and/or topology) if the anti-phase displacement is found to be unacceptably large. The surround is also the sole contributor to the out-of-phase displacement at this particular frequency.

I should note that the phase-decomposed displacement components have no radial components, since the analysis assumes a flat vibrating surface. Therefore, they do not sum exactly to the total vibration for this case, since the total vibration has both axial and radial components. However, the analysis still provides insight into the vibration pattern and how the individual components affect the resulting sound pressure in the chosen observation point.

By feeding the displacement components back into the Rayleigh integral, we can find the individual pressure contributions.

*The sound pressure level components for the loudspeaker driver.*

We can see that at low frequencies, the total displacement is dominated by in-phase motion, but above approximately 4 kHz, the anti-phase component subtracts from the pressure. The insight provided by the phase decomposition technique can aid engineering decisions and, in some cases, design changes may be warranted.

The phase decomposition technique is, of course, not limited to loudspeaker analysis. Any vibrating structure that is fairly flat can be analyzed. In fact, it’s advantageous to apply the Rayleigh integral on its own in order to have an estimate of the radiated sound without having to include an acoustic domain, especially if the air loading on the vibrating surface is negligible. The phase decomposition can then be added as a further layer in the analysis.

Special thanks to Mads Herring Jensen, the technical product manager for the Acoustics Module at COMSOL A/S, for help with the implementation.

René Christensen has been working with vibroacoustics for about a decade for a handful of companies such as DELTA, Oticon A/S, and iCapture ApS. He holds a PhD in hearing aid acoustics with focus on viscothermal effects. René recently joined the R&D department at the Danish loudspeaker company Dynaudio A/S where his main responsibilities are development and optimization of drivers for the “Automotive” and “Home” lines, design of waveguides and cabinets, and conceptual work for future products.

Stirling engines, or heat pumps, are systems that can operate on remarkably small temperature differences. In fact, some types of Stirling engines only need human body heat in order to operate. Here, we explore the dynamics of this interesting machine that you can build at home and demonstrate how to model it using COMSOL Multiphysics.

Let’s begin by taking a step back into the history of the Stirling engine. Dubbed the “engine of the future,” the Stirling engine was first developed by Robert Stirling nearly 200 years ago, in 1816. While the technology never became dominant, this type of heat engine has found extensive use in many modern applications. For example, the solar Stirling engine is used to directly transform solar heat into mechanical energy, which is in turn applied to powering a generator and producing electrical power. Additionally, analogous approaches exist that are based on geothermal energy or use industrial waste heat. The most astonishing modern application of the Stirling engine may be its operation in Swedish submarines — the absence of air is not an issue for Stirling-based propulsion.

We’ve touched upon some of the applications of Stirling engines, but how does this machine operate? In a Stirling engine, heat is converted into mechanical work (or the other way around) in a cyclic process. This can be realized in different ways, but the principle remains the same: The engine cycles through the four processes of cooling, compression, heating, and expansion. A gas is used to transport heat from the hot side to the cold side. The efficiency of the engine is bounded by that of the Carnot cycle.
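As a quick illustration of that Carnot bound, consider a hand-warmed engine; the temperatures below are assumed values, not from any specific design.

```python
# Carnot limit on efficiency: eta = 1 - T_cold / T_hot.
# Illustrative temperatures for a hand-warmed engine (assumed values).
T_hot = 310.0   # skin temperature [K]
T_cold = 293.0  # ambient air [K]
eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.3f}")  # about 0.055, i.e., at most ~5.5%
```

Even a perfect engine running between these temperatures could convert only a few percent of the transported heat into work, which is why low-temperature-difference Stirling engines produce so little power.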

In contrast to conventional engines, Stirling engines do not need to reach high temperatures to operate. Some Stirling engines only need a temperature difference of a few kelvin between the hot side and the cold side. Furthermore, the sound level and the associated energy losses are very low because there are no explosions and no exhaust. However, Stirling engines are most suitable for applications where constant power is required, since dynamic power regulation is difficult to implement. This is likely the most prominent reason why there are still no cars powered by Stirling engines.

*A Stirling engine operates using the heat from a human hand. (“A Stirling engine that works solely with the energy taken from the temperature-difference from the surrounding air and the palm of the hand” by Arsdell — Own work. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons).*

For those of you with some experience in handcrafting, it is possible to build your own Stirling engine at home, without the need for professional tools or experience. Several video tutorials are available on YouTube that include instructions designed to guide you through the building process. The easiest of these examples is probably the version consisting of a cola can and some domestic odds and ends.

While easy to build, this model of a Stirling engine is likely not optimized from an efficiency point of view. Creating a numerical model of the engine offers a better solution.

With a numerical model of a Stirling engine, we are able to find and test sets of materials as well as parameter adjustments. The physics involved are heat transfer and fluid flow, and the mechanical process can be simplified by solving the equation of motion as an additional ODE.

The 2D axisymmetric model consists of a main cylinder that contains the working fluid (air) and the displacer. The small cylinder on top contains the power piston. The displacer and the power piston are both connected to a crankshaft, 90° out of phase; the crankshaft itself is not featured in the model. The whole setup corresponds to a gamma-type Stirling engine.

*A model of a Stirling heat pump.*

Here, the heat transport within the working gas is solved. The mechanical part is realized via a moving mesh (ALE) approach: The displacer and the power piston are free to move in the *z*-direction. The displacement is prescribed here, corresponding to a heat pump: Mechanical work is used to move thermal energy opposite to the direction of spontaneous heat flow. The reverse case — a Stirling engine — can also be modeled by applying a heat source and solving for the resulting pressure forces at the power piston and the displacer. In either case, the system runs through a chain of processes that can be divided into the four steps of the Stirling cycle:

*Thermodynamic processes that are acting on the working fluid.*
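The prescribed kinematics driving the moving mesh can be sketched as two sinusoidal displacements 90° apart on the crankshaft. The amplitudes and crank frequency below are made-up illustrative values, not the model's settings.

```python
import numpy as np

# Sketch of the prescribed displacer and power piston motion: two sinusoids
# with a 90-degree phase offset. All values are illustrative assumptions.
f = 2.0            # crankshaft frequency [Hz]
A_disp = 0.02      # displacer stroke amplitude [m]
A_pist = 0.005     # power piston stroke amplitude [m]

def z_displacer(t):
    return A_disp * np.sin(2 * np.pi * f * t)

def z_piston(t):
    # 90 degrees (pi/2) behind the displacer on the crankshaft
    return A_pist * np.sin(2 * np.pi * f * t - np.pi / 2)

t = np.linspace(0, 1 / f, 5)  # one full crank revolution
print(z_displacer(t), z_piston(t))
```

The 90° offset is what sequences the gas through cooling, compression, heating, and expansion as the crank turns.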

While its efficiency is nowhere near that of the theoretical Carnot cycle, the resulting pressure/volume graph shown below does correspond to experimental results.

*A pressure/volume graph of the Stirling cycle.*

The real advantage of the model is that we can analyze the physics within the heat pump. For example, the animation below shows the velocity distribution during the operation of the heat pump.

*Velocity distribution during the heat pump’s operation.*

Since the piston supplies the mechanical energy needed to pump heat, we can also investigate the dynamic temperature distribution within the heat pump during operation.

*An animation depicting temperature distribution.*

If you are looking to improve the efficiency of the Stirling engine, the goal is to maximize the enclosed area in the pressure/volume graph. This area corresponds to the work that is done by the engine. The engine’s overall efficiency can be improved in multiple ways. If we choose a working gas that features a high specific gas constant (i.e., a low molar mass), this will maximize the isothermal expansion work of the engine. This is why hydrogen and helium are preferred as working gases. Another possibility is to maximize the heat that is transported by the displacer by using a porous medium as a regenerating displacer (see this paper).
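The working gas comparison can be made concrete by computing the specific gas constant R_s = R/M for each candidate, using standard molar masses:

```python
# Specific gas constant R_s = R / M: a higher R_s (lower molar mass) gives
# more isothermal expansion work per unit gas mass, which is why hydrogen
# and helium are preferred working gases.
R = 8.314  # universal gas constant [J/(mol K)]
molar_mass = {"H2": 2.016e-3, "He": 4.003e-3, "air": 28.97e-3}  # [kg/mol]
R_s = {gas: R / M for gas, M in molar_mass.items()}
for gas, val in R_s.items():
    print(f"{gas}: {val:.0f} J/(kg K)")
```

Hydrogen's specific gas constant is roughly 14 times that of air, and helium's about 7 times, which is the quantitative basis for the preference stated above.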


For our example, we will use a model that couples the Navier-Stokes equations and the heat transfer equations to model natural convection in a square cavity with a heated wall. The temperatures on the left and right walls are 293 K and 294 K, respectively. The top and bottom walls are insulated. The fluid is air and the side length of the cavity is 10 cm.

We will use this model to compare the computational cost of three different modeling approaches:

- Solving the full Navier-Stokes equations (Approach 1)
- Solving the full Navier-Stokes equations with pressure shift (Approach 2)
- Using the Boussinesq approximation with pressure shift (Approach 3)

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \nabla \cdot ( \mu (\nabla \mathbf{u} + (\nabla \mathbf{u})^{T}) -\frac{2}{3} \mu (\nabla \cdot \mathbf{u})\mathbf{I}) + \rho \mathbf{g}

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla P + \nabla \cdot ( \mu (\nabla \mathbf{u} + (\nabla \mathbf{u})^{T})- \frac{2}{3} \mu (\nabla \cdot \mathbf{u})\mathbf{I}) + (\rho-\rho_0)\mathbf{g}

\rho_{0} \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla P + \mu \nabla^2 \mathbf{u} -\rho_0\frac{T-T_0}{T_0}\,\mathbf{g}

Each of these three approaches and their variables are defined here.

In COMSOL Multiphysics, the model is solved with a stationary study using the *Laminar Flow* and *Heat Transfer in Fluids* interfaces, and the *Non-Isothermal Flow* multiphysics coupling:

While setting up the model, it is important to check whether the flow is laminar or turbulent. For a natural convection problem, this is done by calculating the Grashof number, *Gr*. For an ideal gas, it is defined as

\mathrm{Gr} = \frac{g \left(T_\mathrm{hot}-T_\mathrm{cold}\right) L^3}{\nu^2\,T_\mathrm{cold}}

The Grashof number is the ratio of buoyancy to viscous forces. A value below 10^8 indicates that the flow is laminar, while a value above 10^9 indicates that the flow is turbulent. In this case, the Grashof number is around 1.5 \times 10^5, meaning that the flow is laminar.
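Plugging the cavity values into the formula above, with a typical kinematic viscosity for room-temperature air (an assumed property value), reproduces the quoted order of magnitude:

```python
# Grashof number for the cavity, Gr = g*(T_hot - T_cold)*L^3 / (nu^2 * T_cold).
# Wall temperatures and side length are from the model; the kinematic
# viscosity of air is an assumed typical value.
g = 9.81                      # gravitational acceleration [m/s^2]
T_hot, T_cold = 294.0, 293.0  # wall temperatures [K]
L = 0.1                       # cavity side length [m]
nu = 1.5e-5                   # kinematic viscosity of air [m^2/s] (assumed)

Gr = g * (T_hot - T_cold) * L**3 / (nu**2 * T_cold)
print(f"Gr = {Gr:.2e}")  # about 1.5e5, well below 1e8 -> laminar
```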

When using the full Navier-Stokes equation, we set the buoyancy force to \rho \mathbf{g}:

The buoyancy term is added using a volume force feature. The terms *nitf1.rho* and *g_const* represent the temperature- and pressure-dependent density, \rho, and the gravitational acceleration, \mathbf{g}, respectively.

When using the Navier-Stokes equations with pressure shift, we have to make three changes.

First, we need to change the definition of the volume force to (\rho-\rho_0)\mathbf{g}, as such:

The term rho0 refers to the reference density \rho_0.

Next, we evaluate the reference density \rho_0 and the reference viscosity \mu from the material properties in a table of variables:

Here, pA and T0 represent the reference pressure and temperature, respectively.

The air viscosity is set to the constant \mu_{0}:

Finally, when using the Boussinesq approximation, we need to set the buoyancy force to -\rho_0\frac{T-T_0}{T_0}\,\mathbf{g}:

As with Approach 2, we also evaluate the reference density and viscosity from the material properties. A third and final step with Approach 3 is to set the fluid density to the constant reference density \rho_{0} (the Boussinesq approximation states that the density is constant except in the buoyancy term).

Note: If your model includes a pressure boundary condition (open domain), set the pressure to the hydrostatic pressure -rho0*g_const*y for Approach 1 or to 0[Pa] for Approach 2 and Approach 3. The boundary conditions for models including gravitational forces are also discussed here.

The mesh is made of 15,000 triangular elements and 1,200 boundary layer elements. These are first-order elements.

The resulting velocity magnitude and streamlines are nearly identical for all three approaches. The maximum temperature difference between Approaches 1 and 2 is less than 2 \times 10^{-6} K and the maximum temperature difference between Approaches 1 and 3 is around 5 \times 10^{-4} K. The only thing that differs is the simulation time.

*Velocity magnitude and streamlines.*

Because of the short running time of this 2D simulation (around 30 seconds), we look at the computational load by comparing the number of iterations it takes the solver to converge to the steady-state solution. The number of iterations, in this case, is nearly proportional to the CPU time.

The table below compares the number of iterations across all three approaches.

| | Approach 1 | Approach 2 | Approach 3 |
|---|---|---|---|
| Number of Iterations | 39 | 55 | 55 |

These results are very surprising!

While the Boussinesq approximation is supposed to reduce the nonlinearity of the model and the number of iterations required for convergence, the full Navier-Stokes equations (39 iterations) are solved faster than the Boussinesq approximation (55 iterations). We also note that the use of the Navier-Stokes equations with a pressure shift leads to the same number of iterations as the Boussinesq approximation.

To better understand these results, we can run a second set of simulations after disabling the *pseudo time-stepping algorithm*. Pseudo time stepping is used to stabilize the convergence toward steady state in transport problems. It relies on an adaptive feedback regulator that controls a Courant–Friedrichs–Lewy (CFL) number. Pseudo time stepping is often necessary to get a model to converge; in this particular case, however, it is not needed.

Here’s a look at the COMSOL Multiphysics settings window for the *default* solver settings with pseudo time stepping:

The following snapshot shows the *updated* solver settings without pseudo time stepping. We recommend that you always keep pseudo time stepping switched on, unless you feel comfortable tuning the solver settings.

Note on the solver settings for natural convection:

Due to the very strong coupling between the laminar flow and heat transfer physics in natural convection modeling, always use a fully coupled solver. The COMSOL software automatically switches to a fully coupled solver when a volume force is added in the laminar flow physics, meaning that you are modeling natural convection.

This second table shows the number of iterations without pseudo time stepping:

| | Approach 1 | Approach 2 | Approach 3 |
|---|---|---|---|
| Number of Iterations | 9 | 7 | 7 |

These results make more sense than the previous ones with pseudo time stepping. This is because Approach 3, the most linear problem, now converges faster than Approach 1. What is surprising is that Approach 2 and Approach 3 converge with the same number of iterations.

Comparing these results with the first set of results, a speed-up by a factor of about 8 (from 55 to 7 iterations) is observed for the third approach — the Boussinesq approximation. These results also indicate that the number of iterations in the first set of results depends not only on the linearity of the problem, but also on the tuning of the pseudo time-stepping algorithm.

Here, we have discussed the implementation and benefits of the Boussinesq approximation as well as the pressure shift method. The results show that, for this particular model, there are no real benefits in terms of computational time to using the Boussinesq approximation, regardless of whether pseudo time stepping is enabled. This is generally the case, since the Boussinesq approximation is only valid when the nonlinearity is small. A much shorter computational time for the Boussinesq approximation with respect to the full Navier-Stokes equations would indicate that the Boussinesq approximation might not be valid.

Because of the small speed-up observed with the Boussinesq approximation and the fact that it is not always easy to know *a priori* whether the Boussinesq approximation is valid, we generally recommend solving the full Navier-Stokes equations. Implementing the pressure shift (Approaches 2 and 3), however, does avoid round-off errors and simplifies the implementation of time-dependent problems as well as models with open boundaries. This will be the subject of a future blog entry.

Using Approach 3 (Boussinesq approximation with pressure shift) involves more implementation steps and does not reduce the number of iterations as compared with Approach 2 (Navier-Stokes equations with pressure shift). The final simulation time might be slightly shorter for Approach 3, since it does not require the evaluation of the temperature- and pressure-dependent density and the temperature-dependent viscosity, but this speed-up might not be noticeable.

The number of iterations is reduced by a factor of 4 to 8, depending on the chosen approach, by disabling the pseudo time-stepping algorithm. Please keep in mind, however, that most problems will not converge without pseudo time stepping or other load ramping or nonlinearity ramping strategies.

You can set up and solve this model using the CFD Module or the Heat Transfer Module. If you have any questions about the models that I’ve presented here, contact our Technical Support team. If you are not yet a COMSOL Multiphysics user and would like to learn more about our software, please contact us via this form — we’d love to connect with you.


One of the most common effects associated with an earthquake is shaking. Depending on the size and magnitude of the seismic waves, this shaking can result in various levels of destruction. In the case of buildings, such waves can produce instability or, in more extreme cases, cause structures to collapse.

Seismic control is an important consideration in the design of buildings. This is particularly relevant to taller structures, which pose a greater risk to human life during earthquakes. Such control can be achieved through various techniques, one of which includes a passive control approach where an external energy source isn’t required.

A *base isolation system* is an example of such a passive control method. As indicated by its name, a base isolation system isolates a structure’s base from its foundation with the use of a bearing. The bearing, which acts as an isolator, deflects and absorbs seismic waves, helping to protect the structure from the force of the vibrations.

*Fixed base and isolated base systems. Image by R. Sugumar, C.S. Kumar, and T.K. Datta, and taken from their presentation submission*.

Base isolation systems are valued for their simple construction and installation as well as their ability to be retrofitted into existing buildings. Such systems can be used effectively in multistory structures with up to about 20 stories.

Using the Nonlinear Structural Materials Module in COMSOL Multiphysics, a team of researchers from India set out to investigate the efficiency of a base isolation system as well as how to optimize its performance. They presented their findings from the study at the COMSOL Conference 2014 Bangalore.

In the analysis, a laminated rubber bearing (LRB) was used as the isolator. Consisting of steel shims between rubber layers, the LRB uses its flexibility to deflect seismic waves and, through plastic deformation, absorbs the energy from the vibrations. Additionally, its lead core assists in further dissipating the energy. Here, the steel component of the LRB was treated as an elastoplastic material, the rubber as a hyperelastic material, and the lead as an elastic, perfectly plastic material.

*A schematic depicting the cross section of the LRB. Image by R. Sugumar, C.S. Kumar, and T.K. Datta, and taken from their presentation submission*.

For the structure, the research team chose a two-story, single-bay frame composed of mild steel (i.e., low-carbon steel). In this study, two frames were analyzed: one bare frame and one frame with a base isolation system.

An eigenmode analysis was performed for the structures, and their damping properties were determined from the first two eigenfrequencies, assuming damping ratios of 0.02. These damping properties were then incorporated into a transient analysis. The force exerted on the structures was derived from a pre-recorded earthquake time history, as illustrated in the plot below.

*Pre-recorded earthquake acceleration. Image by R. Sugumar, C.S. Kumar, and T.K. Datta, and taken from their presentation submission*.
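One common way to turn two eigenfrequencies and target damping ratios into transient-analysis inputs is Rayleigh damping, solving ζ_i = α/(2ω_i) + βω_i/2 for the coefficients α and β. Whether the authors used exactly this form is not stated in the source, and the eigenfrequencies below are placeholders, not values from the paper.

```python
import numpy as np

# Sketch: Rayleigh damping coefficients alpha and beta from two
# eigenfrequencies with damping ratio 0.02 for both modes, via
# zeta_i = alpha/(2*omega_i) + beta*omega_i/2.
f1, f2 = 2.5, 8.0   # first two eigenfrequencies [Hz] (placeholder values)
zeta = 0.02         # damping ratio for both modes
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2

# Two equations, two unknowns: solve the 2x2 linear system.
A = np.array([[1 / (2 * w1), w1 / 2],
              [1 / (2 * w2), w2 / 2]])
alpha, beta = np.linalg.solve(A, np.array([zeta, zeta]))
print(alpha, beta)
```

The resulting mass-proportional (α) and stiffness-proportional (β) terms reproduce the target damping ratio exactly at the two chosen eigenfrequencies and approximate it in between.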

The responses of the bare frame and the base isolation system frame to the applied earthquake acceleration were then compared. The results showed that the presence of the isolator reduced the frame’s response to the vibrations, highlighting the effectiveness of this approach in providing structures with seismic control. Furthermore, researchers noted that modifying the material properties and the dimensions of the isolator could further enhance the LRB’s ability to control vibrations.

*Comparing the response of the bare frame (uncontrolled) and the frame with the base isolation system (controlled). Image by R. Sugumar, C.S. Kumar, and T.K. Datta, and taken from their presentation submission*.

From this research, we can observe the effective nature of base isolation systems using laminated rubber bearings as a means of seismic control for structures. Simulation can help address how different parameters impact the performance of the isolator and advance its ability to stabilize vibrations within buildings. The analyses performed here can also be applied to other types of bearings, emphasizing the extensive capabilities and applications of the nonlinear material models and simulations available in the COMSOL Multiphysics FEA software.

- Access the presentation, poster, and abstract: “Seismic Control of a Structure Using Laminated Rubber Bearings“

Consider a drum head constructed by stretching a membrane over a stiff frame that encloses a flat 2D domain. The vibration of the membrane is described by the wave equation (in the time-harmonic case, the Helmholtz equation) with a Dirichlet boundary condition at the periphery of the domain, where the membrane is constrained by the stiff frame. In this case, there is a set of discrete solutions to the wave equation, called *normal modes* or *eigenmodes*, each of which vibrates at a characteristic frequency, called an *eigenfrequency*.
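For shapes where the spectrum is known in closed form, such as a rectangular membrane with fixed edges, the eigenfrequencies are f_mn = (c/2)√((m/a)² + (n/b)²). Here is a quick sketch with an assumed wave speed and assumed dimensions:

```python
import numpy as np

# Eigenfrequencies of a rectangular membrane with fixed (Dirichlet) edges:
# f_mn = (c/2) * sqrt((m/a)^2 + (n/b)^2). Wave speed and dimensions are
# illustrative assumptions.
c = 100.0          # membrane wave speed [m/s]
a, b = 1.0, 0.5    # side lengths [m]

def f_mn(m, n):
    return (c / 2) * np.sqrt((m / a)**2 + (n / b)**2)

modes = sorted(f_mn(m, n) for m in range(1, 4) for n in range(1, 4))
print([round(f, 1) for f in modes[:3]])  # → [111.8, 141.4, 180.3]
```

Note that the overtones are not integer multiples of the fundamental, which is part of what gives a drum its characteristic, non-harmonic timbre.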

The lowest eigenfrequency defines the fundamental tone, which for instance could be concert pitch A (440 Hz). The set of higher eigenfrequencies, or *overtones* in musical terms, gives rise to the tone color or timbre of the vibrating membrane. Kac’s lecture drew our attention to the eigenfrequencies: Is it possible to construct two drum heads with different shapes that share a set of eigenfrequencies? The idea was that if the two drums have an identical set of eigenfrequencies (being *isospectral*), then they would have the same timbre and sound the same to the ear, even though their shapes are different.

Kac commented on the asymptotic behavior of the eigenfrequencies in the limit of very high frequencies and made connections to various branches of physics and mathematics to provide a ground for intuitive understanding. The uniqueness question (in 2D flat space) remained unsolved until over two decades later when Gordon, Webb, and Wolpert finally constructed two polygons with an identical set of eigenvalues (see “One cannot hear the shape of a drum” and “Isospectral plane domains and surfaces via Riemannian orbifolds“).

The eigenvalues of the two polygons can be computed numerically, which is shown in this Isospectral Drums model in our Model Gallery.

The image below shows the first three normal modes of the two polygons that share the same set of eigenfrequencies:

In Gordon and Webb’s easy-to-read introductory article on this subject (“You can’t hear the shape of a drum”, *American Scientist*, vol. 84 (1996), pp. 46–55), they commented that such isospectral drums with different shapes are expected to be the exception, not the rule. In other words, they expected that, in general, one *can* hear the shape of a drum, unless the shape of the drum is specially constructed to be isospectral with another shape, like the two polygons depicted above.

In the following discussion, we will take a closer look at such special shapes by considering various physical mechanisms involved in the sound production and detection. We will find that when we include relevant physical effects, we actually *can* tell two drums apart by the sound, even if they are specially constructed to share the same set of eigenfrequencies.

The first effect we will examine is the excitation of the vibrational modes in the membrane. Since the timbre is determined by the set of relative amplitudes of the normal modes, it is not enough to just have an identical set of eigenfrequencies for the two drums to sound the same. They also need to have the same relative amplitude for each eigenmode, which may not be trivial to achieve.

Let’s take, for example, the same two polygonal drums from above and hit them with a drum stick at a few arbitrary places, one at a time, as such:

Each location of striking is somewhere in the middle of the drum, where a child may instinctively choose to hit if given such a drum and a drum stick. We use COMSOL Multiphysics simulation software to calculate the frequency response of each of the locations and plot the results in the graphs below.

We first focus on just one drum, say, the one on the left. Here is a plot showing the left drum’s frequency response:

As we hinted at earlier, the drum sounds different depending on the location where it is struck by the drum stick. We see a different energy distribution among the first three eigenmodes, which will result in a different timbre. This is, of course, a well-known fact to percussionists and is the result of the same principle that enables a single bell to ring in two distinct tones, as demonstrated by this ancient set of bells from over two thousand years ago.

Now we know we can’t even make one drum sound the same unless we have a perfect aim of the drum stick. Is there any hope that we can make the two different drums sound the same?

In the graph below, we’ve added the frequency response of the second drum (the dashed curves). As we examine the graph, it becomes evident that none of the dashed curves match the solid curves in all three of the eigenmodes. In other words, the two drums do sound different, even though they are isospectral, sharing the same set of eigenfrequencies.

Of course, we haven’t done an exhaustive search of all the possible combinations of striking locations. However, this simple example illustrates that it is not an easy job to make the drums sound the same, due to the different coupling strengths of energy from the drum stick to the various vibrational modes of the membrane.

The magic of mathematics never ceases to amaze us. Not long after the two isospectral polygons were published, Buser, Conway, Doyle, and Semmler constructed a pair of domains that are not only isospectral (sharing the same set of eigenfrequencies), but also *homophonic*: having a special point in the domain such that “corresponding normalized Dirichlet eigenfunctions take equal values at the distinguished points” (“Some planar isospectral domains”). In other words, if the special point of each drum is hit by a drum stick, then each corresponding pair of eigenmodes of the two drums will be excited with the same amplitude and the two drums will sound the same.

Shown below are the first few normal mode shapes computed numerically:

The special point of each domain is marked with a blue square in the schematic below:

In the following graph, we plot the computed frequency response of the two drums to a narrow Gaussian area load centered on each of the special points:

Isn’t it amazing how well the two frequency response curves (solid blue curve and green circles) lie on top of each other? With such a perfectly matched vibrational energy spectrum, wouldn’t the two drums sound exactly the same? Let’s continue our journey to explore more physical effects and find out.

Our ears do not sense the vibration of the membrane directly. Rather, the sensing is mediated by the acoustic wave in the air. Let’s set up the two homophonic drums outdoors, where the sound is allowed to propagate away from the drums without significant reflection. In this case, we can easily compute the frequency spectrum of the sound wave using COMSOL Multiphysics to find out what we really hear with our ears.

Let’s take a look at the three vibrational modes with the highest energies at about 111, 146, and 184 Hz as shown in the spectral graph above. For convenience, we will call them the first, second, and third mode, with the understanding that there are other modes in between being neglected since they are much less energetic.

The polar graph below compares the computed sound pressure level (in dB) in the plane of each of the two drums, a few meters away from each drum.

We see that the sound pressure field produced by the first mode is more or less independent of direction (solid and dashed blue curves). This is not surprising, since the mode shape of each drum looks pretty much like a monopole source:

On the other hand, the directionality of the sound field from the second or the third mode of each of the drums is quite pronounced and also quite different between the two drums. For example, for the second mode, the sound field from Drum 1 looks like a dipole field (solid red curve), while the one from Drum 2 is more complex (dashed red curve). This observation again matches what we see in the mode shapes of the two drums:
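
The monopole/dipole distinction can be illustrated with the textbook far-field directivity patterns. This sketch is not taken from the simulation: an ideal monopole radiates uniformly in all directions, while an ideal dipole follows a figure-eight |cos θ| pattern with nulls to the sides.

```python
# Ideal far-field directivities (illustrative only, not from the model):
# monopole = uniform radiation, dipole = figure-eight with side nulls.
import math

def directivity_db(theta, kind):
    """Directivity relative to the on-axis maximum, in dB."""
    if kind == "monopole":
        d = 1.0
    elif kind == "dipole":
        d = abs(math.cos(theta))
    else:
        raise ValueError(f"unknown source type: {kind}")
    return -math.inf if d == 0.0 else 20.0 * math.log10(d)

for deg in (0, 30, 60, 90):
    th = math.radians(deg)
    print(f"{deg:3d} deg: monopole {directivity_db(th, 'monopole'):7.1f} dB, "
          f"dipole {directivity_db(th, 'dipole'):7.1f} dB")
```

The real drums radiate something in between these ideal patterns, which is exactly why the polar plots above differ between the two drums.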

What really determines the perceived timbre is the ratio of the amplitudes of the higher modes (the overtones) to the lowest mode (the fundamental tone). So, in the next graph, we plot the amplitude ratios of the second and the third modes to the first mode, at a sampling of directions:

The blue square points are from Drum 1 and the red round points from Drum 2. The graph can be viewed as a map of timbre — if two points on the graph are near each other, then they sound similar; on the other hand, if two points on the graph are far away from each other, then they have very distinct timbre. As qualitatively illustrated by the green dashed boundaries, each drum can produce a range of timbre that the other cannot, in some range of directions.

As long as a listener is allowed to move around each drum, perhaps blindfolded, he or she will hear distinct ranges of timbre that tell the two drums apart. Therefore, even though the two “homophonic” drums share the same energy spectrum in their vibration modes, due to the difference in the mode shape and to the difference in energy transfer to the sound field in the air, the acoustic energy spectrum in some range of directions can be quite different. This is what would cause the two drums to sound differently to our ears.

In the previous analysis, we ignored the reaction force exerted on the membrane by the air, the so-called *air loading effect*. It turns out that this effect is very significant for a real drum, since, after all, the entire area of the membrane participates in the pushing and pulling of the air around it.

We can simulate this effect using the *Acoustic-Structure Boundary* Multiphysics coupling feature of COMSOL Multiphysics. We find, for example, that the eigenfrequency of the second mode discussed above shifts from 146 Hz down to about 86 Hz. In addition, the magnitude of the shift differs between the two drums: the eigenfrequency of one drum shifts down to 85.6 Hz, while that of the other drum shifts to 86.8 Hz. This difference corresponds to a pitch difference of about 23 cents, which is very audible in a side-by-side comparison.
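
The pitch difference can be checked with the standard cents formula, 1200·log₂(f₂/f₁). With the rounded frequencies quoted above it evaluates to roughly 24 cents; the figure of about 23 cents presumably comes from the unrounded eigenfrequencies:

```python
import math

def cents(f1, f2):
    """Musical interval between two frequencies in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f2 / f1)

# Rounded air-loaded eigenfrequencies of the second mode of the two drums:
print(round(cents(85.6, 86.8), 1))  # ~24 cents with these rounded values
```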

Therefore, not only do the two drums differ in timbre (in some range of directions), they also differ in absolute pitch when we take the air loading effect into account.

The graph below shows the frequency response of the two drums around this mode. The difference in the resonant frequency is clearly seen, as well as the difference in the width of the resonance. There should be no doubt in our mind that with such different frequency responses, the two drums will produce easily distinguishable sounds.

It is a great achievement in mathematics to invent the isospectral drums that share the same set of eigenfrequencies and the homophonic drums that share the same power spectrum of the vibrational modes when excited at a special point. However, these phenomena only occur in vacuum, where there is no sound. Once we put the drums to the test in air, they start to sound different due to the air loading effect and the directionality of the energy transfer from the membrane to the sound wave.

In his lecture, Kac told the early 20^{th}-century story of Lorentz calling for mathematicians’ attention to the eigenvalue problem involved in the theory of black body radiation and Weyl answering the call with the proof of the theorem of the asymptotic behavior of eigenvalues at very high frequencies.

Here, we could use the help of our mathematician friends again, even though the subject matter may not be as grand as black body radiation and quantum mechanics. Is it possible to construct homophonic drums with different shapes that sound the same when including directionality and air loading effects? It may be possible to pose this as an optimization problem to solve numerically for a solution with a finite set of audible frequencies.

However, the computational cost would be high and the result approximate. An elegant analytical solution, similar to those presented in the papers mentioned above, would be much nicer. I hope this will pique the interest of the mathematicians who are reading.


A vehicle’s dashboard provides valuable information for a driver, from indicating the speed of the car to gauging its fuel levels. What’s as important as the instruments themselves is the manner in which they are installed. In many cases, fasteners known as snap hooks are used in the design of a car’s control panel, ensuring that the different components are securely fixed.

When inserting a snap hook into its slot, an important consideration is the force that needs to be applied to insert the hook into the slot as well as the force that is required to remove it. With COMSOL Multiphysics FEA software, you can study these forces and the resulting stresses and strains in the hook.

In the Snap Hook model, we leverage the snap hook’s symmetry to analyze only half of its original geometry in an effort to decrease the size of the model. The snap hook is assumed to be composed of an elastoplastic material featuring isotropic hardening and a constant tangent hardening modulus. Meanwhile, the lock is assumed to be rigid when compared to the hook, with the space behind the lock representing the slot into which the hook should lock.

*Model geometry.*

Several boundary conditions are applied to the model, as shown in the schematic below. For boundaries in the symmetry plane, a symmetry boundary condition is used. A fixed boundary condition is implemented for the face of the lock, where it is attached to the remainder of the geometry of the locking mechanism (not modeled here). Lastly, a prescribed boundary condition is applied where the face of the hook meets the rest of the geometry.

*The applied boundary conditions.*

We first examine the effective stress levels in the hook during insertion. As indicated in the following plot, the maximum effective stress occurs at parameter step 0.84, which represents the point right before the hook enters the slot. Note that when passing over the edge, the hook is bent upwards. The elastic forces will tend to press the hook into the slot before it “fits.” If you were holding the hook, you would be pushing up until this point; beyond it, the hook would actually be pulled away from your hand.

*The hook’s effective stress levels prior to entering the slot.*

The next graph depicts the degree of force needed to insert and remove the fastener as a function of the parameter step. When the parameter value varies from 0 to 1, the hook is moved inwards at a constant rate to eventually sit in the slot. Then, between the parameter values 1 and 2, the hook is pulled back out of the slot.

At the parameter value 0.2, the hook first comes into contact with the fixed locking mechanism. The force rises steeply, while the hook tip is forced upwards. In reality, the hook would snap into place after reaching the peak force of around 2.5 N at the parameter value 0.23. Since we control the displacement in the simulation, we can follow the force throughout the entire process. Between the parameter values 0.7 and 0.9, the hook slides down on the back side. The change in the sign of the force indicates that the hook is actually pulled into the slot by a combination of the geometry and the elastic forces.

When trying to pull the hook back out of the slot (at parameter values greater than 1), we must apply a load that is three times higher — about 7.5 N — to remove the hook from the slot (at a parameter value of about 1.12). This is a desirable feature of a hook designed for a locking mechanism.

*Force required for the insertion and removal of the snap hook. The first positive peak in the graph can be attributed to the elastic forces pushing the hook into the slot before it “fits.” Following retraction, the hook hits a steep surface, which results in the second positive peak in the graph. After passing the corner (represented by parameter 1.2), the hook is pressed out by itself, as indicated by the negative force.*

Upon its removal from the slot, the hook is shown to have a volume in which there are plastic strains, as illustrated in the plot below. Thus, we can conclude that after inserting the hook into the slot, it is permanently deformed.

*The hook’s effective plastic strain following its removal from the slot.*

In this blog post, we have explored the role of simulation in addressing the forces behind the insertion and removal of a snap hook into a slot. By analyzing such forces, you can enhance the design of snap hooks to ensure that they provide continuous security while also being able to remove them without causing damage to the fastener. This is particularly relevant within the automotive industry in cases where certain parts of a vehicle’s control panel need to be repaired or replaced.

- Model download: Snap Hook
- Related blog post: Why All These Stresses and Strains?

Not everyone has access to a laundry room or the ability (or time!) to go to a laundromat. You may not want to spend your change on coin-operated machines or leave the warmth of your apartment to go to a laundromat. Portable washing machines are a solution to these problems. They do have their own setbacks, though. The lightness of these machines coupled with an unbalanced distribution of clothes causes them to become unstable.

When the laundry is in a spin cycle, it generates a centrifugal force — causing the machine to destabilize. This problem could be solved by making the washing machine heavier, but since this machine is meant to be portable, this isn’t an ideal option. To learn more, we turn to simulation.

For this problem, we chose to model a simplified horizontal-axis front-load washing machine (the horizontal model has more severe instability than the vertical version). We used this model to figure out how walking instability affects the washing machine during a spin cycle. In order to try to remove the instability, our model makes use of an active balancing method.

In order to simplify our model, we made a few assumptions.

Assumptions about the drum and washer:

- Both are rigid.
- The rotation around the axis of the drum is the sole relative motion between the drum and the washer.
- The laundry spins at the same speed as the drum. We assume this because the RPM of the drum is high enough to produce substantial centrifugal forces.

Assumptions about the machine’s interaction with the surrounding environment:

- The washing machine cannot tip over. It remains connected to the ground throughout the simulation.
- The Coulomb friction model, with a constant friction coefficient, is used to simulate friction between the washer and the ground.

When setting up the model, we have to pay attention to where everything is placed. Ideally, the balancing mass should be placed on both the front and the back side of the drum. However, for ease of modeling, we place the balancing mass only on the front side of the drum, making sure that its center of mass coincides with the center of mass of the drum. When calculating mass, we need to remember that the mass of the drum adds to the clothing mass.

The drum is connected to the washer and the slot through hinge joints, and a prismatic joint connects the slot and the balancing mass. Both the hinge joint connecting the drum to the slot and the prismatic joint are necessary for active balancing, as we will learn in the next section. (Please note that in our model the prismatic joint is always locked since it isn’t used, while the drum/slot joint is only locked when there is no balancing in the system.)

Planar joints function as the washer’s four supports, connecting the washer to the ground at four separate points. Because the joints have elasticity, we can analyze the joint forces independently at each of them.

The instability of our washing machine model could result in multiple kinds of slip. For our needs, though, we are focusing on rotational slip and instability, since it happens at a lower critical speed than translational slip.

We measure our model’s stability using the slip margin, defined as the difference between the maximum possible friction force and the actual friction force. In the case of our example, it determines the critical operational speed needed to avoid walking instability. Walking instability occurs when the slip margin reaches zero; our machine slips when three or more of the supports have a zero slip margin.
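
The slip criterion described above can be sketched in a few lines of code. The support loads and friction coefficient below are hypothetical, not taken from the model:

```python
def slip_margin(normal_force, friction_force, mu):
    """Maximum available Coulomb friction minus the friction actually demanded.

    A margin of zero (or less) at a support means that support slips.
    """
    return mu * normal_force - abs(friction_force)

def machine_slips(margins, min_slipping=3):
    """The machine walks when three or more supports have a zero slip margin."""
    return sum(1 for m in margins if m <= 0.0) >= min_slipping

# Hypothetical support loads (N) and demanded friction forces (N), mu = 0.3:
margins = [slip_margin(n, f, 0.3)
           for n, f in [(250, 60), (250, 76), (240, 72), (240, 80)]]
print([round(m, 1) for m in margins], machine_slips(margins))
```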

In order to make our machine more stable, first we have to eliminate what is making our washing machine unstable. In this case, the culprit is the net unbalanced centrifugal force acting on the rotating clothing. To combat this, we can apply an equal and opposite force. This is where active balancing comes into play.

To balance the forces within our machine, we adjust the angular and radial positions of the balancing mass. An angular correction fixes the direction of the centrifugal force and is achieved by rotating the slot-balancing mass assembly relative to the drum. A radial correction fixes the magnitude of the centrifugal force through the translation of the balancing mass in the slot. In our simulation, only the angular correction is required because the radial position of the balancing mass is already set based on the weight of the laundry.
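
The correction logic can be sketched compactly. Since the centrifugal force scales as mω²r, the drum speed drops out, and the correction reduces to placing the balancing mass opposite the imbalance with a matching mass-radius product. The masses and radii below are hypothetical, not values from the model:

```python
import math

def balancing_correction(m_u, r_u, phase_u, m_b):
    """Radial and angular placement of a balancing mass m_b that cancels an
    imbalance m_u at radius r_u and angular position phase_u (radians).

    Centrifugal force scales as m * omega**2 * r, so omega cancels out:
    we only need m_b * r_b = m_u * r_u, pointed the opposite way.
    """
    r_b = m_u * r_u / m_b                          # radial correction (magnitude)
    phase_b = (phase_u + math.pi) % (2 * math.pi)  # angular correction (direction)
    return r_b, phase_b

# Hypothetical: 0.5 kg of laundry lumped at 0.25 m, balanced by a 1 kg mass
r_b, phase_b = balancing_correction(0.5, 0.25, math.radians(30), 1.0)
print(round(r_b, 3), round(math.degrees(phase_b), 1))  # 0.125 m at 210.0 deg
```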

Our model makes it easy to use active balancing. Since we know the angular drum speed and acceleration rate of our model, we can activate our balancing system at a specific time instead of waiting for the slip margin to get close to zero.

After taking all the previously mentioned factors into account, we ran our simulations. At first we looked at our washing machine without the aid of active balancing:

*Left: The washer rotation (magnified by a factor of 100). Right: The drum rotation and friction force at the washer supports.*

Next, we saw what happens when we add active balancing to our simulation by viewing the total imbalance of our washing machine in two ways.

Our first plot shows the imbalance in the rotating frame and displays a clear correlation between active balancing and reduced imbalance. A similar effect is viewed in the fixed frame plot.

*Left: Total imbalance in the rotating frame. Right: Total imbalance in the fixed frame.*

Another area we looked into was the slip margins of our model. First, we analyzed the differences between the individual supports without active balancing. We compared the slip margin of a support in the front of the washing machine (support 1) and a support in the back (support 3). The plot to the left below reveals that the front support has a lower slip margin and is therefore more prone to slipping than the back support.

Widening our view to the total slip margin of the model, we saw that it does become zero for short periods. This means that the washing machine will experience walking instability during these moments. The right-side plot shows that active balancing increases the total slip margin upon activation.

*Left: The slip margins of supports 1 and 3 in the absence of active balancing. Right: The washing machine’s total slip margin with and without balancing.*

Active balancing is further shown as helpful when looking at the movement of the washer around a vertical axis. Rotational instability is eliminated with the balancing mechanism.

*The Z-axis rotation of washer with and without balancing.*

We modeled other aspects of our portable washing machine as well. When looking into the revolutions per minute (RPM) of the drum and correction motor, we found that the correction motor works efficiently by starting when needed and stopping when the system is stabilized. We also calculated the appropriate correction angle for stabilizing the system.

*Left: RPM of the correction motor and the drum. Right: The necessary correction angle in an active balancing system.*

Creating an active balancing system helped our model avoid the effects of walking instability and rotational slip during its spin cycle. Now it’s your turn to try it out yourself — download the model via the link below to get started.

- Model download: Walking Instability in a Washing Machine
- Related blog post: Simulating Vibration and Noise in a Washing Machine

Many of today’s motor vehicles rely on reciprocating piston engines as their source of power. In an internal combustion reciprocating engine, fuel combines with an oxidizer in a combustion chamber. The combustion causes the gases to expand, applying pressure to the engine’s piston, pushing it out of the chamber. The linear movement of the piston is converted to a rotating motion by way of a connecting rod, which joins the piston to the crankshaft. This continual motion exerts a great deal of stress on the connecting rod — a force that becomes greater with increasing engine speeds.

Within reciprocating engines, it is crucial that each component is analyzed critically, since one part’s failure often means replacing the entire engine. To optimize the design of this engine and ensure a long operational lifetime, we can analyze the connecting rods from the fatigue viewpoint.

The High-Cycle Fatigue of a Reciprocating Piston Engine model is based on an example of a three-cylinder reciprocating engine from the Multibody Dynamics Module. In this engine, a flywheel is mounted on the crankshaft, with the assembly supported on both ends by journal bearings. The model also features three sets of cylinders, pistons, and connecting rods that are identical. Hinge joints are used to connect the bottom ends of the connecting rods to the common crankshaft as well as to connect the pistons to the top ends of the rods. A prismatic joint is used to connect each of the cylinders to a piston.

*Reciprocating engine geometry.*

Aside from the flexible central connecting rod, the engine components are assumed to be rigid. The cylinders are fixed, and the other parts of the engine are free to move in space. The engine as a whole operates at 1,000 RPM, and the material data is derived from structural steel, which has a fatigue limit of 210 MPa.

Our analysis begins with the stress history in the connecting rod fillet, as a stress concentration is expected in this area due to the geometrical change. After a few revolutions, the engine reaches steady-state behavior. From the third cycle onward, the stress history essentially repeats itself each cycle, as shown in the plot below. The third principal stress dominates the connecting rod’s stress history, as the part is in compression during this time. Because the first and second principal stresses are small compared to the third one, we can consider the stress state at the fillet to be uniaxial. Since the von Mises stress is better suited to *multiaxial* loading, we use the principal stress as the amplitude stress in the Basquin relation.

*The stress history in a connecting rod fillet.*

The following plot addresses fatigue life prediction in the connecting rod. Here, the point of focus is the fillet near the top end of the rod. According to the Basquin model, the fatigue life is predicted to be more than twenty-five billion cycles, a notably long operational lifetime. Although the endurance limit is not defined in the Basquin model, the relation can be used to back-calculate the fatigue life at the endurance stress, giving 245 million cycles. Since the model predicts a life greater than this back-calculated life at the endurance limit, we can conclude that the stress in the engine’s assembly is beneath the endurance limit, which, as previously noted, is 210 MPa for this material, and that the connecting rod has an infinite operational life.

*The connecting rod’s fatigue life prediction.*

The initial stress history plot also indicates that the rod is designed for infinite life. With a principal stress range around 110 MPa, the stress amplitude is near 55 MPa, which is lower than the material’s endurance limit.
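
As a rough sketch of the back-calculation, the Basquin relation σ_a = σ'_f (2N)^b can be inverted for the number of cycles N. The coefficients below are illustrative values for a steel, *not* the ones used in the COMSOL model, so the numbers will not reproduce the figures quoted above:

```python
def basquin_life(stress_amplitude, sigma_f=800.0, b=-0.09):
    """Cycles to failure N from the Basquin relation:
        sigma_a = sigma_f * (2*N)**b  =>  N = 0.5 * (sigma_a / sigma_f)**(1/b)

    sigma_f (MPa) and b are illustrative coefficients for a steel, not the
    values used in the model.
    """
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

# Back-calculated life at the 210 MPa endurance limit vs. the (much longer)
# life at the ~55 MPa amplitude seen in the fillet:
print(f"{basquin_life(210.0):.3g} cycles at 210 MPa")
print(f"{basquin_life(55.0):.3g} cycles at 55 MPa")
```

Because b is negative, a lower stress amplitude always gives a longer life, which is why an amplitude below the endurance limit implies infinite life in practice.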

- Download the model: High-Cycle Fatigue of a Reciprocating Piston Engine

By design, cranes offer a mechanical advantage, lifting and lowering heavy materials that require strength beyond that of a human. In many applications of this machine — from construction to electric line maintenance — a favorable feature is mobility. Truck-mounted cranes are free to move in various directions as well as travel on highways, which can help avoid the need for additional transport equipment.

*An example of a truck-mounted crane. (“A truck-mounted crane from Palfinger (Austria). The concrete component (build in Germany) is a small sewage treatment plant for a house with up to four residents.” by TM — Own work. Licensed under Creative Commons Attribution-Share Alike 2.0 Germany, via Wikimedia Commons.)*

In these types of cranes, several hydraulic cylinders control the motion of the crane as well as many other mechanisms. When handling heavy loads, the components are subjected to large forces. Through simulation, we can explore the impact of these forces during the machine’s operating cycle, determining ways to enhance its performance by building a more efficient design.

Combining the Multibody Dynamics Module with the Structural Mechanics Module, the Truck Mounted Crane model analyzes the forces on the cylinders and hinges of a crane during an operating cycle. The crane geometry, which is imported from a CAD model, comprises 14 parts that move in relation to one another.

*Geometry of the truck-mounted crane.*

The figure below provides a more detailed overview of the crane link mechanisms, followed by a table defining the individual components.

Part | Color
---|---
Base | Blue
Inner boom | Green
Outer boom | Yellow
Telescopic extensions | Cyan, Magenta, Gray
Boom lift cylinders | Red, Gray
Boom lift pistons | Yellow, Magenta
Inner link mechanism | Magenta, Black
Outer link mechanism | Cyan, Blue

In this example, there are two loads applied — self weight in the negative *z*-direction and a payload of 1,000 kg at the crane’s tip. The operating cycle consists of lifting the payload from a position that is far away and placing it below the crane. The load is initially moved upwards and then drawn inwards to a position that is close to the crane. The plot below depicts the crane tip’s trajectory during the operating cycle.

*Crane tip trajectory during operating cycle.*

In reality, the crane is operated by controlling three cylinder lengths — the inner cylinder, the outer cylinder, and the extension cylinders. The inner cylinder raises the inner boom, the outer cylinder regulates the angle between the inner boom and the outer boom, and the extension cylinders determine the reach of the extensions. Here, the angles of the booms are used as parameters rather than the cylinder lengths, as this method is more convenient.

The image below illustrates the 9^{th} position of the operating cycle, which features an inner boom angle of 45° to the horizontal, a -30° angle between the inner and outer booms, and a total extension of 1.5 m.
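
To see how this angle parameterization fixes the tip position, here is a minimal planar forward-kinematics sketch. The boom lengths are hypothetical; only the angles and the extension correspond to the 9^{th} position described above:

```python
import math

def crane_tip(theta_inner_deg, theta_rel_deg, extension,
              inner_len=3.0, outer_len=2.5):
    """Planar tip position (horizontal reach y, height z) of the boom system.

    inner_len and outer_len (m) are hypothetical boom lengths; the model
    itself parameterizes the crane by the inner boom angle, the relative
    inner/outer boom angle, and the total telescopic extension.
    """
    t1 = math.radians(theta_inner_deg)
    t2 = math.radians(theta_inner_deg + theta_rel_deg)  # outer boom vs. horizontal
    reach = outer_len + extension                       # outer boom + telescope
    y = inner_len * math.cos(t1) + reach * math.cos(t2)
    z = inner_len * math.sin(t1) + reach * math.sin(t2)
    return y, z

# The 9th position of the operating cycle: 45 deg, -30 deg, 1.5 m extension
y, z = crane_tip(45.0, -30.0, 1.5)
print(round(y, 2), round(z, 2))
```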

*The crane during the 9 ^{th} position of the operating cycle. The color shows the total displacement of the crane components.*

We can now address the impact of the forces on various parts of the crane. In each of the following graphs, the solution number correlates to the position of the crane. Initially, the crane picks up a load in an extended position and then, at the final solution, releases the load close to its own position.

Let’s begin with the forces in the cylinders controlling the boom. Here, the compressive forces are positive. As can be expected, when the payload is far from the crane base, the cylinder forces are greater. The maximum force during the operating cycle determines the required cylinder capacity.

*Forces in the cylinders controlling the boom.*

The next graph highlights the forces in the extension cylinders. As in the previous case, a compressive force is defined as positive. Because they have to carry the weight of extension segments a further distance, the inner cylinders endure greater forces.

*Forces in the extension cylinders.*

Finally, we can observe the forces acting on the hinges between the main parts of the crane. This same tactic can be used to analyze the forces within the connections between any parts of the crane. The results below are a valuable resource in the structural dimensioning of such details.

*Forces on the hinges.*

We can now take things a step further and use the Optimization Module to enhance the crane link mechanism. This can be accomplished through the Optimization of a Crane Link Mechanism model, a continuation of the Truck Mounted Crane model. In this case, the focus is on reducing the cylinder force that is necessary to haul a particular payload in a worst-case load cycle scenario.

*A detailed look at the link mechanisms.*

The table below identifies each of the parts and their colors considered in this model.

Part | Color
---|---
Base | Blue
Inner boom | Green
Boom lift cylinder | Red
Boom lift piston | Yellow
Link mechanism | Magenta, Black

Since this example is designed to test for the worst-case scenario, the operating cycle is chosen so that the link mechanism will experience as much force as possible. To ensure this, the inner boom is raised to its highest position, the telescopic extensions are extended as far out as possible, and the angle of the outer boom is selected to ensure that the crane tip is as far out as it can be. The same loads from the original model are applied.

Within this optimization problem, the positions of three axles can be changed. These include the axle that connects the first link arm to the base, the axle that connects the second link arm to the boom, and the axle that connects the two link arms and the hydraulic cylinder’s piston.

Now let’s compare our results. The first graph below shows the variation of the cylinder force during the operating cycle. Of particular interest is the maximum cylinder force, which determines the required capacity of the hydraulic cylinder.

In comparison to the original design, the optimized version enables a reduction in the maximum force from 597 kN to 413 kN — that’s a 31% decrease, a sizable improvement! With this enhancement, the allowed payload can become greater; the decreased forces will enable the link mechanism to meet stress criteria more easily.

*Comparing the cylinder forces in the optimized and original designs.*

The second plot illustrates the *y*- and *z*-components and the magnitude of the force acting on the axle that forms the hinge between the base and the boom. As indicated by the results below, the total force in the original design is greater than in the optimized design.

*Forces acting on the axle in the optimized and original designs.*

With COMSOL Multiphysics version 5.0, we introduced two new models designed to analyze the interactions between different components of a truck-mounted crane and evaluate the role of optimization methods in enhancing these mechanisms. These examples demonstrate how simulation can be used to investigate the impact of loads on such complex mechanical systems and how this knowledge can be applied to developing a stronger design.

Download the models featured here:

One last thing… We’re in the process of creating an app based on this model. Stay tuned for that.

When inside a room — a conference room, concert hall, or even a car — everyone has an opinion about whether the “acoustics” are good or bad. In *room acoustics*, we want to study this notion of sound quality in a quantitative way. In short, room acoustics is concerned with assessing the acoustics of enclosed spaces. The Acoustics Module of COMSOL Multiphysics has several tools for simulating the acoustics of rooms and other confined spaces, and I will present them here.

When sound is emitted inside a room, a listener will perceive the sound as a combination of direct sound from the source as well as sound reflected off the walls. At the walls, the sound is reflected, absorbed, and scattered.

Since all of these processes are frequency dependent, a poorly designed meeting room can, for example, be highly reverberant in a frequency band that is important for speech. The room could also have a strong modal behavior (standing waves) at certain critical frequencies that are easily excited. These are things you want to avoid and be able to predict when designing a room.

Architects and civil engineers want to control the sound field by placing absorbers, diffusers, and reflectors in appropriate locations. In concert halls, you want to maximize the listening experience where the audience is located. In office spaces, you want to avoid anything that can seem noisy and disturb the concentration of employees. In classrooms and lecture halls, you want to ensure clear perception of speech. The sound environment is important for various reasons, which is why there are national standards and regulations for the sound environment in many cases.

Refurbishing a badly designed room can be very expensive, so you do not want to rely only on scale-model measurements or on measurements done after the fact. Modeling the room acoustic behavior beforehand is essential in order to optimize and perfect the design. Simulation models and measurements need to relate architectural aspects (geometry) to subjective observations using physical measures (metrics). This is done by calculating a long list of room acoustic measures, such as the reverberation time, early decay time, clarity, and many other standardized parameters.

The modeling approach you want to adopt depends on the studied frequency (the wavelength compared to geometrical features of the room). In the Acoustics Module of the COMSOL suite of FEA software, we essentially offer three approaches packaged in three physics interfaces. The *Pressure Acoustics* interface can model the modal behavior in rooms. The *Ray Acoustics* interface and the *Acoustic Diffusion Equation* interface cover the high frequency limit or reverberant behavior (geometrical acoustics). I discuss the interfaces and their applicability in the sections below.

*Animation of the ray front positions as the rays are released inside a small concert hall. The color scale gives an impression of the ray intensity on a logarithmic scale.*

As mentioned above, room acoustics is typically divided into three categories, depending on the studied frequency. Or, more specifically, depending on the wavelength compared to the characteristic geometric features of the room in question.

In the low-frequency range, the room resonances dominate. This is known as the *modal region*. At the other end of the scale, in the high-frequency limit, the wavelength becomes smaller than the characteristic geometrical features of the room. Here, we deal with the reverberant region or the *geometrical acoustics limit*. Between the modal region and the high-frequency limit, there is a so-called *transition zone*. Note that there is no clear-cut definition of this zone.

Classical room acoustics theory provides some tools that enable a back-of-the-envelope assessment of the behavior of a room. For a given room, the Schroeder frequency, f_\textrm{s}, predicts the limiting frequency between the modal behavior and the high-frequency reverberant behavior of the room.

The Schroeder frequency is given by:

(1)

f_\textrm{s} = 2000 (\textrm{m}/\textrm{s})^{3/2} \sqrt{\frac{T_{60}}{V}}

where V is the room volume and T_{60} is the reverberation time.

The equation is based on the criterion (suggested by Schroeder) that at the limit, three eigenfrequencies fall into one resonance half-width. The reverberation time (or decay time), T_{60}, is the time required for the sound pressure level (created by an impulse source) to decay 60 dB. A first simple approximate measure of the reverberation time is given by the well-established Sabine formula:

(2)

T_{60} = \frac{55.3 V}{c A}, \qquad A = \Sigma S_i \alpha_i

Here, c is the speed of sound and A is the total absorption, where S_i and \alpha_i are the surface area and absorption coefficient of the i^{th} surface, respectively.

This is possibly the best-known formula in room acoustics. The equation stems from a classical statistical room acoustics analysis assuming a pure diffuse sound field. In a diffuse sound field, the sound pressure level is uniform and the reflected sound dominates. This phenomenon is also known as a *reverberant sound field*. In such a field, the damping constant (related to the overall absorption) can be approximated and relates to the reverberation time.
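Equations 1 and 2 are simple enough to evaluate directly. Here is a minimal sketch; the surface areas and absorption coefficients are illustrative numbers, while the volume and reverberation time correspond to the small concert hall example discussed in this post:

```python
import math

def sabine_t60(volume, surfaces, c=343.0):
    """Sabine reverberation time, Equation 2: T60 = 55.3 V / (c A),
    where A = sum(S_i * alpha_i) is the total absorption."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 55.3 * volume / (c * absorption)

def schroeder_frequency(t60, volume):
    """Schroeder frequency, Equation 1: f_s = 2000 (m/s)^{3/2} sqrt(T60/V)."""
    return 2000.0 * math.sqrt(t60 / volume)

# Illustrative surface list: (area in m^2, absorption coefficient)
walls = [(250.0, 0.1), (120.0, 0.3), (60.0, 0.7)]
print(f"Sabine T60: {sabine_t60(430.0, walls):.2f} s")

# With the rounded values V = 430 m^3 and T60 = 1.3 s, the formula
# gives roughly 110 Hz, close to the ~115 Hz quoted for the hall.
print(f"Schroeder frequency: {schroeder_frequency(1.3, 430.0):.0f} Hz")
```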

The modal behavior of rooms and enclosed spaces is best analyzed by solving the Helmholtz equation or the scalar wave equation using the finite element method. In the reverberant or high-frequency limit, at frequencies above the Schroeder frequency, you may use two different approaches. Your choice depends on the assumptions that can be made and the desired level of detail.

The *Acoustic Diffusion Equation* interface may be used in the purely diffuse sound field limit, neglecting all direct sound. This is a fast method to assess reverberation times and sound pressure level distributions in systems of coupled rooms. The ray tracing capabilities of the *Ray Acoustics* interface provide a much more detailed picture including the direct sound and early reflections. With this interface, you also have the ability to reconstruct an impulse response.

Up to the Schroeder frequency, the modal behavior of rooms is important: standing waves dominate over the reverberant nature of the sound field. Inside a car, the transition may lie between several hundred hertz and 1000 Hz. In a small office, it may be up to 200 Hz, while in large concert halls, the transition is typically below 50 Hz. In the small concert hall model shown below, the Schroeder frequency is about 115 Hz (the reverberation time is about 1.3 s and the volume is 430 m^{3}). The modal behavior is important for subwoofer systems in cinemas, for instance.

The modal behavior as well as the room eigenfrequencies are best analyzed using the *Pressure Acoustics* interface. A frequency domain study can reproduce a transfer function for the bass system. You can also use it to analyze dead regions or find eigenfrequencies. A transient study is interesting when, for example, looking at bass build-up transients inside a car cabin.

Models of interest here are:

*Pressure distribution for the first eigenmode inside a small room. From the Eigenmodes of a Room model.*

If you want to compute the trajectories, phase, and intensity of acoustic rays, you should choose the *Ray Acoustics* interface. Ray acoustics is a good choice when working in the high-frequency limit, where the acoustic wavelength is smaller than the characteristic geometric features. The interface is not limited to modeling acoustics in closed spaces, like rooms and concert halls, but can also be used in outdoor environments. At exterior boundaries, you can assign various wall conditions, such as combinations of specular and diffuse reflections. The wall impedance and absorption may depend on the frequency, intensity, and direction of the incident rays.

Below are two figures from the Small Concert Hall model found in the Model Gallery for the Acoustics Module.

The figure to the left depicts the ray paths for a selected number of rays emitted from a source located on the small stage. The figure to the right depicts the energy response as measured in the center of the room. The dots represent the simulated ray response (5,000 rays are released) and the green and red curves represent decay curves based on simple Sabine-like estimates of the reverberation time T_{60}. The cyan curve is a so-called *Schroeder integration* of the energy response, yielding the energy-decay curve. All four agree well when the response is measured in the center of the room.

*Left: Ray path for a selected number of rays emitted from a source located on a small stage. Right: The energy impulse response compared with two simple decay measures and the energy decay curve.*
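The Schroeder integration mentioned above amounts to a backward integration of the energy impulse response. A minimal sketch follows, using a synthetic exponential response with a known T60 in place of simulated ray data; the evaluation range of -5 to -25 dB for the line fit is an assumption (a T20-style range):

```python
import numpy as np

def schroeder_decay(energy_response):
    """Schroeder (backward) integration of an energy impulse response:
    E(t) = integral from t to infinity of the energy, computed here as a
    reversed cumulative sum, normalized and returned in dB."""
    e = np.cumsum(energy_response[::-1])[::-1]
    return 10.0 * np.log10(e / e[0])

def t60_from_decay(decay_db, dt, lo=-5.0, hi=-25.0):
    """Estimate T60 from a line fit to the decay curve between lo and hi
    dB, extrapolated to a 60 dB drop."""
    t = np.arange(len(decay_db)) * dt
    mask = (decay_db <= lo) & (decay_db >= hi)
    slope, _ = np.polyfit(t[mask], decay_db[mask], 1)
    return -60.0 / slope

# Synthetic exponential energy response with a known T60 of 1.3 s
dt, t60 = 1e-3, 1.3
t = np.arange(0.0, 3.0, dt)
energy = 10.0 ** (-6.0 * t / t60)   # 60 dB decay over t60 seconds
decay = schroeder_decay(energy)
print(f"estimated T60: {t60_from_decay(decay, dt):.2f} s")
```

For a real simulated response, the input would be the (noisy) ray energy arrivals binned in time rather than a smooth exponential.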

With the *Ray Acoustics* interface, the response can be measured at any point in the concert hall. The properties of absorbers and diffusers can be both frequency dependent and angle-of-incidence dependent. Thus, the listening environment can be well described, analyzed, and optimized. The simple estimates, by contrast, are not accurate everywhere in a room, nor for complex room geometries.

The *Acoustic Diffusion Equation* interface solves a diffusion equation for the acoustic energy density distribution. The method is also sometimes referred to as *energy finite elements*. It is an extension of the principles used to calculate the Sabine reverberation time in Equation 2. This particular interface is applicable for high-frequency acoustics when the acoustic fields are diffuse. The diffusion of the acoustic energy density depends on the mean free acoustic path and thus on the room geometry. Absorption may be applied at walls and a transmission loss may be applied when coupling rooms. Increased diffusion due to room fitting can be added. Material properties and sources may be specified in frequency bands.

Compared to a ray acoustics simulation, this interface does not include any phase information, direct sound, or early reflections. The interface supports stationary studies for modeling a steady-state sound energy or sound pressure level distribution. You can use a time-dependent study to determine energy decay curves and reverberation times. You can use an eigenvalue study to determine the reverberation time of coupled and uncoupled rooms. The eigenvalue is directly related to the exponential decay time and so the reverberation time.
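As a sketch of that last relation: if the energy density from an eigenvalue study decays as exp(λt) with λ < 0 (the sign convention assumed here), requiring a 60 dB drop in level gives the reverberation time directly:

```python
import math

def t60_from_eigenvalue(lam):
    """If the acoustic energy density decays as exp(lam * t) with
    lam < 0, the level in dB drops as 10 * log10(e) * lam * t, so a
    60 dB drop takes T60 = 6 ln(10) / |lam| ~ 13.8 / |lam| seconds."""
    return 6.0 * math.log(10.0) / abs(lam)

# An eigenvalue of about -10.6 1/s corresponds to a T60 of roughly 1.3 s
print(f"T60 = {t60_from_eigenvalue(-10.6):.2f} s")
```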

We utilized all three study types in the One-Family House Acoustics model, which studies the acoustics in a single-family home with a noise source located in the living room.

*Energy flux and SPL distribution inside a two-story single-family house.*

Check back on the COMSOL blog this spring for specific blog posts about the *Acoustic Diffusion Equation* and *Ray Acoustics* physics interfaces.

In the meantime, here is a list of suggested reading material:

- H. Kuttruff, *Room Acoustics*, CRC Press, Fifth Edition, 2009.
- A. D. Pierce, *Acoustics: An Introduction to Its Physical Principles and Applications*, Acoustical Society of America, 1991.
- ISO 3382 standard, Measurement of room acoustic parameters.
- M. R. Schroeder, *New Method of Measuring Reverberation Time*, J. Acoust. Soc. Am., 37 (1965).
- M. R. Schroeder, *Integrated-Impulse Method Measuring Sound Decay without Using Impulses*, J. Acoust. Soc. Am., 66 (1979).

We often get requests of the type “I would like to just enter my measured stress-strain curve directly into COMSOL Multiphysics”. In this new blog series, we will take a detailed look at how you can process and interpret material data from tests. We will also explain why it is not a good idea to just enter a simple stress-strain curve as input.

All material models are mathematical approximations of true physical behavior. However, material models cannot always be derived from physical principles, like mass conservation or the equations of equilibrium. They are phenomenological by nature and based on measurements. The laws of physics do, however, enforce limits on the mathematical structure of material models and on the possible values of material properties.

It is well known, even from everyday life, that different materials exhibit completely different behavior. A material can be very brittle, like glass, or very elastic, like rubber. Choosing a material model is not only determined by the material as such, but also by the operating conditions. If you immerse a piece of rubber into liquid nitrogen, it will become as brittle as glass — a popular educational experiment. Also, if you heat up glass, it will start to creep and show viscoelastic behavior.

When analyzing structural mechanics behavior in COMSOL Multiphysics, you can choose between about 50 built-in material models, many of them featuring several options for their settings. You can also set up and define your own material models, or combine several of the material models to, for example, describe a material exhibiting both creep and plasticity at the same time.

Some of the available classes of materials are:

- Linear elastic
- Hyperelastic
- Nonlinear elastic
- Plasticity
- Creep
- Concrete

Without going into details about how you should actually come to the correct decision about an appropriate material model, here are some questions you should ask yourself before you start modeling:

- How large are the stress and strain ranges?
- Will the loading speed be important?
- What is the operating temperature and will it be constant?
- Is there a predefined material model targeted specifically at my material, such as concrete or soil plasticity?
- Is the load constant, monotonically increasing, or cyclic?
- Is the stress state predominantly uniaxial or is it fully three-dimensional?

Based on these considerations, you can then make a choice of a suitable material model. Determining the correct parameters to use in this material model will then be more or less difficult.

On one end of the spectrum, there are common materials (such as steel at room temperature) where many engineers know the material data by heart (E = 210 GPa, *ν* = 0.3, *ρ* = 7850 kg/m^{3}) and where data is easily found in the literature or through a simple web search.

On the other end of the spectrum, finding the high temperature creep data for a cast iron to be used in an exhaust manifold can be a major project in itself. Many tests at different load levels and at different temperatures are required. A complete test program for this may take half a year and have a price tag of several hundred thousand dollars.

*Tensile testing equipment. “Inspekt desk 50kN IMGP8563″ by Smial. Original uploader was Smial at de.wikipedia — Transferred from de.wikipedia; transferred to Commons by User: Smial using CommonsHelper. (Original text: eigenes Foto). Licensed under CC BY-SA 2.0 de via Wikimedia Commons.*

Before starting your simulation with COMSOL Multiphysics, it is not enough to import the geometry of the specimen, select the material model, and apply the loads and other boundary conditions; you should also provide the parameters for the chosen material model in the operating stress-strain and temperature range. These parameters are typically obtained from one or more tests.

The most fundamental test is the *uniaxial tensile test*. This is also what most engineers in daily life refer to when they state that they have a “Stress-Strain curve.” If you look at the list of questions above, it is evident that even this seemingly simple test can leave many loose ends:

- A material may exhibit time dependence even at constant loads, giving creep or viscoelastic effects. Many tests, often at different temperature and stress levels, are needed to give reliable data.
- Material parameters obtained from an ordinary tensile test at low speed may not be representative of the material behavior at high strain rates. A crash analysis might show strain rates as high as 10 s^{-1}, while conventional uniaxial testing machines may be limited to strain rates as low as 10^{-3} s^{-1}.
- Is the material isotropic, or would tests in several directions be required?
- If you only have a tension test, what would happen in compression? With a single curve, you cannot really tell.
- A tensile test will supply stress versus strain in the tested direction, but it will not always contain data about the deformations in the transverse direction. Without that data, you have no information at all about the cross-coupling between the directions in the 3D case.
- When curve fitting experimental measurements, perhaps not all data should be given equal weight. It may so be that the response in a certain strain range has a larger impact on your simulation results.

Some materials, like concrete, have little or no capacity to carry loads in tension. Here, the *uniaxial compression test* is the most fundamental test. It has many properties in common with the tensile test.

Other materials, like steel and rubber, can also be tested in compression. It is actually a good idea to do so, as we will demonstrate later in this blog post.

When using only uniaxial testing (whether in tension, compression, or both), you cannot, however, obtain the full picture of a material's properties. You will need to combine the test data with additional assumptions, such as isotropy or incompressibility. For many materials, though, such assumptions are well justified by experience.

We have illustrated how the range of a test will affect your conception of the material behavior in the animation below.

- If you only perform the loading part, it is not possible to discriminate between elastic and plastic behavior.
- By unloading, you can distinguish plastic from elastic behavior, but until the specimen is in a state of significant compression, it is not possible to determine whether an isotropic or a kinematic hardening model would give the best representation.

It is significantly more difficult to design testing equipment that can create a homogeneous biaxial stress state. *Biaxial testing* is often used for materials that are only available in thin sheets, like fabrics, for instance. By controlling the ratio between the loads in two perpendicular directions, it is possible to extract much more information than from a uniaxial test.

For soils, which generally need to be confined, *triaxial compression* is a common test. Triaxial compression tests could in principle be applied to a block of any material, but the testing equipment is difficult to design. The low compressibility of most solid materials also makes triaxial testing less attractive, since the measured displacements will be small when the material is compressed in all directions.

The Triaxial Test model shows a finite element model of a triaxial compression test.

The *torsion test*, where a cylindrical test specimen is twisted, is a rather simple test that generates a non-uniaxial stress state. The stress state is, however, not homogeneous throughout the rod. Therefore, some extra processing is needed to translate the moment-versus-angle results into stress-strain results.

In an upcoming blog post in this series, we will demonstrate in depth how to fit measured data to a number of different hyperelastic material models. In the example here, we will assume that the curve fitting has already been carried out. The raw data consists of two measurements: one in uniaxial tension and another in equibiaxial tension, as shown below.

The nominal stress (force divided by original area) is plotted against stretch (current length divided by original length).

*Measured stress-strain curves by Treloar.*
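These definitions translate directly into code. The sketch below also includes the conversion to true (Cauchy) stress, which holds for an incompressible material; the specimen dimensions and load are illustrative numbers, not taken from the Treloar data:

```python
def nominal_stress(force, original_area):
    """Nominal (engineering) stress: force over the original area."""
    return force / original_area

def stretch(current_length, original_length):
    """Stretch: current length over the original length."""
    return current_length / original_length

def true_stress(nominal, lam):
    """For an incompressible material the current cross-sectional area
    is A0 / lambda, so true stress = nominal stress * stretch."""
    return nominal * lam

# A specimen with A0 = 20 mm^2 loaded to 50 N at 1.5x its original length
P = nominal_stress(50.0, 20e-6)               # 2.5 MPa nominal
lam = stretch(0.15, 0.10)                     # stretch of 1.5
print(P, lam, true_stress(P, lam))
```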

Since the data covers a wide range of stretches, the experimental results are clearly nonlinear. The simplest hyperelastic models with one or two parameters will probably not be sufficient to fit the experimental data. The Ogden model with three terms is a popular model for rubber, and it is the model we used here.

A least squares fit will give the results below when assigning equal weights to both data sets. As we can see in the graph, it is possible to fit both experiments very well with a single set of material parameters.

*Fitted material parameters using a three-term Ogden model.*
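The weighted least-squares fit described above can be sketched in a few lines. For brevity this uses a one-term Ogden model (the post uses three terms) and synthetic data generated from known parameters in place of the Treloar measurements; the parameter values, stretch range, and weights are all illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Nominal stress for an incompressible one-term Ogden model
def p_uniaxial(lam, mu, alpha):
    return mu * (lam ** (alpha - 1.0) - lam ** (-alpha / 2.0 - 1.0))

def p_equibiaxial(lam, mu, alpha):
    return mu * (lam ** (alpha - 1.0) - lam ** (-2.0 * alpha - 1.0))

def residuals(params, lam_u, p_u, lam_b, p_b, w_u=1.0, w_b=1.0):
    """Stack the weighted residuals of both experiments, so that a
    single parameter set is fit to the combined data."""
    mu, alpha = params
    return np.concatenate([
        w_u * (p_uniaxial(lam_u, mu, alpha) - p_u),
        w_b * (p_equibiaxial(lam_b, mu, alpha) - p_b),
    ])

# Synthetic "measurements" from known parameters, standing in for data
mu_true, alpha_true = 0.63e6, 1.3        # Pa, dimensionless
lam = np.linspace(1.05, 6.0, 30)
p_u = p_uniaxial(lam, mu_true, alpha_true)
p_b = p_equibiaxial(lam, mu_true, alpha_true)

fit = least_squares(residuals, x0=[1e6, 2.0],
                    args=(lam, p_u, lam, p_b), x_scale=[1e6, 1.0])
print(fit.x)   # recovers mu and alpha
```

Dropping the second block of residuals reproduces the "uniaxial data only" situation discussed next: the fit to the remaining data improves, but the predictive power for other stress states suffers.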

But what if the biaxial test had not been available? Fitting only the uniaxial data will give a different set of material parameters, which will of course fit that set of experimental data even more closely, but it would deviate from the biaxial results. This is shown below.

*Analytical results for uniaxial and biaxial tension when only the uniaxial data was used to fit the model parameters.*

Clearly, the prediction for an equibiaxial stress state differs between the two parameter sets. As we can see, the error in stress in the biaxial curve is more than 20% at some stretch levels.

What about other stress states? Two stress states that can be simulated in a simple finite element model are uniaxial compression and pure torsion. The uniaxial stress-strain curve over a wide range of stretches is shown below. The results on the tensile side are not as sensitive to the data set used for obtaining the material parameters as the compressive side is. This is not surprising as tensile data is used for parameter fitting in both cases, whereas neither of the experiments contain any information about the compressive behavior.

*Uniaxial response ranging from compression to tension. The scale on the *x*-axis is logarithmic.*

Note that rubber parts, such as seals, often operate under predominantly compressive stress states. If the data sets used for parameter fitting contain only tension data, this may be a source of inaccuracy when modeling multiaxial stress states.

Finally, let’s have a look at a simulation where a circular bar is twisted. The same type of discrepancies between the results from two sets of material parameters as above can be seen below.

*Computed torque as function of the twist angle.*

Finally, it should be noted that many hyperelastic models are only conditionally stable. This means that even though the estimated material parameters are perfectly valid within a certain strain range, a unique and continuous stress-strain relation may not exist for other strain combinations. We often come across such problems in support cases. Unfortunately, this is rather difficult to detect *a priori*, since it would require a full search of all possible strain combinations.

Measured data must be processed and analyzed before being used as input for simulations. For material models other than the simpler linear elastic model, it is a good idea to make small examples with a unit cube to assess the behavior under different loading states before using the material model in a large-scale simulation.

So the answer to the request “I would like to just enter my measured stress-strain curve directly into COMSOL Multiphysics” is that such an approach is *not* recommended. It would turn the software into a black box, when the user really must make a number of active decisions in order to obtain meaningful results.

Up next in our Structural Materials series: We will discuss nonlinear elasticity and plasticity.
