In a recent video on YouTube from standupmaths, science enthusiasts Matt Parker and Hugh Hunt discuss and demonstrate the “mystery” of a tuning fork. When you strike a tuning fork and hold it against a tabletop, it seems to double in frequency. As it turns out, the explanation behind this mystery can be boiled down to nonlinear solid mechanics.
When you hold a vibrating tuning fork in your hand, the bending motion of the prongs sets the air around them in motion. The pressure waves in the air propagate as sound. You can hear it, but it is not a very efficient conversion of the mechanical vibration into acoustic pressure.
When you hold the stem of the tuning fork to a table, an axial motion in the stem connects to the tabletop. The motion is much smaller than the transverse motion of the prongs, but it has the potential to set the large flat tabletop in motion — a surface that is a far better emitter of sound than the thin prongs of a tuning fork. The tabletop surface will act as a large loudspeaker diaphragm.
Our tuning fork.
To investigate this interesting behavior, we created a solid mechanics computational model of a tuning fork. The model is based on a tuning fork that one of my colleagues keeps in her handbag. The tone of the device is a reference A4 (440 Hz), the material is stainless steel, and the total length is about 12 cm.
First, let’s have a look at the displacement as the tuning fork is vibrating in its first eigenmode:
The mode shape for the fundamental frequency of the tuning fork.
If we study the displacements in detail, it turns out that even though the overall motion of the prongs is in the transverse direction (the x direction in the picture), there are also some small vertical components (in the z direction), consisting of two parts: an axial displacement of the prongs and an axial displacement of the stem.
The displacements are shown in the figures below. The mode is normalized so that the maximum total displacement is 1. The peak axial displacement in the prongs is 0.03, and the displacement in the stem is 0.01.
Total displacement vectors in the first eigenmode.
Axial displacements only. Note that the scales differ between figures. The center of gravity is indicated by the blue sphere.
Now, let’s turn to the sound emission. By adding a boundary element representation of the acoustic field to the model, the sound pressure level in the surrounding air can be computed. The amplitude of the vibration at the prong tips is set to 1 mm. This is approximately the maximum feasible value if the tuning fork is not to be overloaded from a stress point of view.
As can be seen in the figure below, the intensity of the sound decreases rather quickly with distance from the tuning fork and is also strongly directional. Actually, if you rotate a tuning fork around its axis beside your ear, the near-silence in the 45-degree directions is striking.
Sound pressure level (dB) and radiation pattern (inset) around the tuning fork.
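The near-silence at 45 degrees can be illustrated with an idealized model: the two prongs moving in antiphase radiate approximately as a linear quadrupole, whose near-field pressure amplitude varies as |cos 2θ|. This is a simplified sketch of the directivity, not part of the boundary element computation above:

```python
import numpy as np

# Idealized tuning fork directivity: the antiphase prong motion is
# approximated as a linear quadrupole with amplitude ~ |cos(2*theta)|,
# where theta is measured from the plane of the prong motion.
theta = np.radians([0.0, 45.0, 90.0])
amplitude = np.abs(np.cos(2.0 * theta))
for deg, amp in zip((0, 45, 90), amplitude):
    print(f"theta = {deg:2d} deg -> relative amplitude = {amp:.3f}")
# The amplitude vanishes at 45 degrees, matching the near-silence heard
# when rotating the fork beside the ear.
```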
We now add a 2-cm-thick wooden table surface to the model. It measures 1 by 1 m and is supported at the corners. The stem of the tuning fork is in contact with a point at the center of the table. As can be seen below, the sound pressure levels are quite significant in a large portion of the air domain above and outside the table.
Sound pressure levels above the table when the stem of the tuning fork is attached to the table.
For comparison, we plot the sound pressure level for the same air domain when the tuning fork is held up. The difference is quite stunning, with very low sound pressure levels everywhere in the air above the table except in the vicinity of the tuning fork. This matches our experience with tuning forks, as shown in the original YouTube video.
Sound pressure levels for the tuning fork when held up.
So far, we have not touched on the original question: Why does the frequency double when the tuning fork is placed on the table? One possible explanation could be that there is such a natural frequency, which has a motion that is more prominent in the vertical direction. For a vibrating string, for example, the natural frequencies are integer multiples of the fundamental frequency.
This is not the case for a tuning fork. If the prongs are approximated as cantilever beams in bending, the lowest natural frequency is given by the expression

f_1 = (1.875^2 / (2πL^2)) √(EI/(ρA))

The quantities in this expression are:

- E, the Young's modulus of the material
- I, the area moment of inertia of the cross section
- ρ, the mass density
- A, the cross-sectional area
- L, the length of the prong
For our tuning fork, this evaluates to 435 Hz, so the formula provides a good approximation.
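As a rough cross-check of the cantilever formula, the script below evaluates it with standard stainless steel data. Only the 80 mm prong length comes from the text; the square cross-section side d is an assumed value chosen to land near the reference tone:

```python
import math

# Fundamental frequency of a prong modeled as a cantilever beam:
# f1 = (1.875^2 / (2*pi*L^2)) * sqrt(E*I/(rho*A))
E = 200e9        # Young's modulus of stainless steel (Pa)
rho = 7800.0     # density (kg/m^3)
L = 0.080        # prong length (m), from the text
d = 3.44e-3      # assumed square cross-section side (m)
I = d**4 / 12.0  # area moment of inertia of a square section
A = d**2         # cross-sectional area
f1 = (1.875**2 / (2.0 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
print(f"f1 = {f1:.0f} Hz")  # close to the 440 Hz reference tone
```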
The second natural frequency of a cantilever beam is

f_2 = (4.694^2 / (2πL^2)) √(EI/(ρA))
This frequency is a factor 6.27 higher than the fundamental frequency. It cannot be involved in the frequency doubling. However, there are other mode shapes besides those with symmetric bending. Could one of them be involved in the frequency doubling?
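The factor 6.27 follows from the cantilever characteristic equation cos(λ)cosh(λ) = −1, whose first two roots set the ratio of the natural frequencies. A short sketch using plain bisection:

```python
import math

# Roots of the cantilever characteristic equation cos(x)*cosh(x) + 1 = 0.
# The n-th natural frequency scales with the square of the n-th root.
def char_eq(x):
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(f, a, b, tol=1e-12):
    # Simple bisection; assumes f changes sign on [a, b].
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

lam1 = bisect(char_eq, 1.0, 3.0)  # ~1.8751
lam2 = bisect(char_eq, 4.0, 5.0)  # ~4.6941
print(f"f2/f1 = {(lam2 / lam1) ** 2:.2f}")  # ~6.27
```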
This is unlikely for two reasons. The first reason is that the frequency doubling phenomenon can be observed for tuning forks with different geometries, and it would be too much of a coincidence if all of them have an eigenmode with exactly twice the fundamental natural frequency. The second reason is that nonsymmetrical eigenmodes have a significant transverse displacement at the stem, where the tuning fork is clenched. Such eigenmodes will thus be strongly damped by your hand, and have an insignificant amplitude. One such mode, with a natural frequency of 1242 Hz, is shown in the animation below.
The tuning fork’s first eigenmode at 440 Hz, an out-of-plane mode with an eigenfrequency of 1242 Hz, and the second bending mode with an eigenfrequency of 2774 Hz.
Let’s summarize what we know about the frequency-doubling phenomenon. Since it is only experienced when we press the tuning fork to the table, the double frequency vibration has a strong axial motion in the stem. Also, we can see from a spectrum analyzer (you can download such an app on a smartphone) that the level of vibration at the double frequency decays relatively quickly. There is a transition back to the fundamental frequency as the dominant one.
The dependency on the amplitude suggests a nonlinear phenomenon. The axial movement of the stem indicates that the stem compensates for a change in the location of the center of mass of the prongs.
Without going into the mathematical details, it can be shown that for the bending cantilever, the center of mass shifts down by a distance

δ = β a^2 / L

Here, a is the transverse motion at the tip, L is the original length, and the coefficient β ≈ 0.2.
The important observation is that the vertical movement of the center of mass is proportional to the square of the vibration amplitude. Also, the center of mass will be at its lowest position twice per cycle (both when the prong bends inward and when it bends outward), thus the double frequency.
With a = 1 mm and a prong length of L = 80 mm, the maximum shift in the position of the center of mass of the prongs can be estimated to be

δ ≈ 0.2 × (1 mm)^2 / 80 mm ≈ 0.0025 mm
The stem has a significantly smaller mass than the prongs, so it has to move even more for the total center of gravity to maintain its position. The stem displacement amplitude can thus be estimated to be 0.005 mm. This should be seen in relation to what we know from the numerical experiments above: the linear (440 Hz) part of the axial motion is of the order of a/100; in this example, 0.01 mm.
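Putting the numbers together in a short script (the factor of 2 between the stem motion and the prong center-of-mass shift is an assumed effective mass ratio, chosen here to reproduce the 0.005 mm estimate):

```python
# Back-of-the-envelope estimate of the second-order axial motion,
# using delta = beta * a^2 / L with beta ~ 0.2.
beta = 0.2
a = 1.0    # prong tip amplitude (mm)
L = 80.0   # prong length (mm)

delta_prongs = beta * a**2 / L   # shift of the prongs' center of mass
delta_stem = 2.0 * delta_prongs  # stem moves more; assumed mass ratio of 2
linear_axial = a / 100.0         # linear 440 Hz axial stem motion ~ a/100

print(f"prong CoM shift:   {delta_prongs:.4f} mm")  # 0.0025 mm
print(f"stem displacement: {delta_stem:.4f} mm")    # 0.0050 mm
print(f"linear axial term: {linear_axial:.4f} mm")  # 0.0100 mm
```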
In reality, the tuning fork is a more complex system than a pure cantilever beam, and the connection region between the stem and the prongs will affect the results. For the tuning fork analyzed here, the second-order displacements are actually less than half of the back-of-the-envelope predicted 0.005 mm.
Still, the axial displacement caused by the second-order moving mass effect is significant. Furthermore, when it comes to emitting sound, it is the velocity, not the displacement, that is important. So, if displacement amplitudes are equal at 440 Hz and 880 Hz, the velocity at the double frequency is twice that at the fundamental frequency.
Since the amplitude of the axial vibration at 440 Hz is proportional to the prong amplitude a, and the amplitude of the 880-Hz vibration is proportional to a^{2}, it is necessary that we strike the tuning fork hard enough to experience the frequency-doubling effect. As the vibration decays, the relative importance of the nonlinear term decreases. This is clearly seen on the spectrum analyzer.
The behavior can be investigated in detail by performing a geometrically nonlinear transient dynamic analysis. The tuning fork is set in motion by a symmetric impulse applied horizontally on the prongs, and is then left free to vibrate. It can be seen that the horizontal prong displacement is almost sinusoidal at 440 Hz, while the stem moves up and down in a clearly nonlinear manner. The stem displacement is highly nonsymmetrical, since the 440 Hz contribution is synchronous with the prong displacement, while the 880-Hz term always gives an additional upward displacement.
Due to the nonlinearity of the system, the vibration is not completely periodic. Even the prong displacement amplitude can vary from one cycle to another.
The blue line shows the transverse displacement at the prong tip, and the green line shows the vertical displacement at the bottom of the stem.
If the frequency spectrum of the stem displacement plotted above is computed using FFT, there are two significant peaks at 440 Hz and 880 Hz. There is also a small third peak around the second bending mode.
Frequency spectrum of the vertical stem displacement.
To actually see the second-order term at 880 Hz in action, we can subtract the part of the stem vibration that is in phase with the prong bending from the total stem displacement. This displacement difference is seen in the graph below as the red curve.
The total axial stem displacement (blue), the prong bending proportional stem displacement (dashed green), and the remaining second-order displacement (red).
How did we perform this calculation? Well, we know from the eigenfrequency analysis that the amplitude of the axial stem vibration is about 1% of the transverse prong displacement (actually 0.92%). In the graph above, the dashed green curve is 0.0092 times the current displacement of the prong tip (not shown in the graph). This curve can be considered to show the linear 440 Hz term — a more or less pure sine wave. That value is then subtracted from the total stem displacement, and what is left is the red curve. The second-order displacement is zero when the prong is straight, and peaks both when the prong has its maximum inward bending and when it has its maximum outward bending.
Actually, the red curve looks very much like it has a time variation proportional to sin^{2}(ωt). It should, since that displacement, according to the analysis above, is proportional to the square of the prong displacement. Using a well-known trigonometric identity, sin^{2}(ωt) = (1 − cos(2ωt))/2. Enter the double frequency!
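Both the identity sin²(ωt) = (1 − cos 2ωt)/2 and the resulting 880 Hz peak are easy to verify numerically:

```python
import numpy as np

# Verify sin^2(w*t) = (1 - cos(2*w*t))/2, and show that a signal
# proportional to the square of a 440 Hz sine has its spectral peak
# at 880 Hz.
f0 = 440.0
fs = 44100.0
t = np.arange(0, 1.0, 1.0 / fs)
s = np.sin(2.0 * np.pi * f0 * t)

identity_error = np.max(np.abs(s**2 - (1.0 - np.cos(4.0 * np.pi * f0 * t)) / 2.0))
print(f"max identity error: {identity_error:.2e}")

spectrum = np.abs(np.fft.rfft(s**2))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spectrum[0] = 0.0  # drop the DC term coming from the constant 1/2
peak = freqs[np.argmax(spectrum)]
print(f"spectral peak at {peak:.0f} Hz")  # 880 Hz
```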
Commenters on the original video from standupmaths have noticed that some tuning forks work better than others, and with some tuning forks, it is difficult to see the frequency doubling at all. As discussed above, the first criterion is that you strike the fork hard enough to get into the nonlinear regime. But there are also geometrical differences that influence the ratio between the amplitudes of the two types of vibration.
For instance, prongs that are heavy relative to the stem will cause large double-frequency displacements, since the stem must move more in order to maintain the center of gravity. Slender prongs can have a larger amplitude–length (a/L) ratio, thus increasing the nonlinear term.
The design of the region where the prongs meet the stem is important. If it is stiff, then the amplitude of the fundamental frequency vibration in the stem will be reduced, and the relative importance of the double-frequency vibration is larger.
The cross section of the prongs will also have an influence. If we return to the expression for the natural frequency,

f_1 = (1.875^2 / (2πL^2)) √(EI/(ρA))

it can be seen that the moment of inertia of the cross section plays a role. A prong with a square cross section with side d has

I = d^4/12

while a prong with a circular cross section with diameter d has

I = πd^4/64
Thus, for two tuning forks that look the same when viewed from the side, the one with a square profile must have prongs that are a factor 1.14 longer to give the same fundamental frequency. If we assume the same maximum bending stress in the two tuning forks, the one with the square profile can have a transverse displacement amplitude that is a factor 1.14^{2} larger than the circular one because of its higher load-carrying capacity. In addition, if the stem is kept at a fixed size, it becomes proportionally lighter compared to the longer prongs. All these contributions add up to a 70% increase in vertical stem vibration amplitude when moving from a circular to a square profile.
In addition, tuning forks with a circular cross section usually have a design that is more flexible at the connection between the prongs and the stem, and thus a higher level of vibration at the fundamental frequency.
The conclusion is that a tuning fork with a square cross section is more likely to exhibit the frequency-doubling behavior than one with a circular cross section.
So, do we actually hear a tone at the doubled frequency? In most cases, the answer is "no." The fundamental frequency is still there, even though it may have a lower amplitude than the double-frequency component. But given the way our senses work, we hear the fundamental frequency, although with a different timbre. It is difficult, but not impossible, to strike the tuning fork so hard that the sound level of the double frequency is significantly dominant.
The frequency doubling occurs due to a nonlinear phenomenon, where the stem of the tuning fork must move upward, in order to compensate for the small lowering of the center of mass of the prongs as they approach the outermost positions of their bending motion.
Note that it is not the fact that the tuning fork is connected to the table that causes the frequency doubling. The reason that we measure it in that case is that the sound emitted by the resonating table surface is caused by the axial stem motion, whereas the sound we hear from a tuning fork held up in the air is dominated by the prong bending. The motion is the same in both cases, as long as the impedance of the table is ignored. In fact, you can measure the doubled frequency with a tuning fork held up in the air as well, but it is 30 dB or so below the fundamental frequency.
BEM functionality is available in the Acoustics Module as the Pressure Acoustics, Boundary Elements interface. The interface can solve 2D and 3D acoustics problems that have constant-valued material properties within each domain. The fluid model can include dissipation by using complex-valued material data. Furthermore, the BEM interface's implementation as a scattered field formulation means that it can handle scattering problems (see the image below). As we will see, the introduction of BEM allows users to solve a category of problems that could not be solved before.
Classical BEM benchmark model of a spherical scatterer for which the results are compared to an analytical solution. The left image shows the sound pressure level in two cut planes at 500 Hz, while the right image shows a comparison of the scattered field at 1400 Hz. Images from the Spherical Scatterer: BEM Benchmark tutorial model.
An important feature is the ability to couple the BEM-based interface with FEM-based interfaces. For example, by using the Acoustic-Structure Boundary multiphysics coupling feature, you can couple the acoustics BEM interface to vibrating structures based on FEM. In addition, BEM and FEM acoustic domains can be combined by using the Acoustic BEM-FEM Boundary multiphysics coupling.
This flexibility allows BEM and FEM to be used where each is best suited, all within the same user interface, as with all other physics couplings in COMSOL Multiphysics. For instance, you can use FEM to model a vibrating structure's interior, like a closed air domain, as this method can include more general material properties, and BEM to model the exterior domain, as this method is better suited for large and infinite domains. This is the case in the loudspeaker model depicted below.
User interface of COMSOL Multiphysics when setting up a multiphysics model of a loudspeaker that includes BEM and FEM acoustics as well as the Solid Mechanics and Shell interfaces. The physics are coupled with the built-in multiphysics couplings. Image from the Vibroacoustic Loudspeaker Simulation: Multiphysics with BEM-FEM tutorial model.
With BEM, you only need to mesh the surfaces next to the modeling domain. This means that there’s less need to create large volumetric meshes (necessary for FEM), making interfaces based on BEM particularly helpful for models that involve radiation and scattering and have detailed CAD geometries. The interface also has built-in conditions to set up an infinite sound hard boundary (wall) or an infinite sound soft boundary. These conditions are very useful when modeling, for example, underwater acoustics, where the ocean surface can be modeled as an infinite sound soft boundary.
Typically, it is advantageous to use interfaces based on BEM for problems with large fluid domains for which a large FEM-based volumetric mesh would otherwise be required (i.e., cases that would run out of memory due to the large 3D mesh). For cases like this one, using BEM can even extend the class of problems that COMSOL Multiphysics can handle. Some examples of these problems include:
An example of a transducer array located far from a scattering object. This type of problem is very hard or even impossible to solve with a pure FEM-based approach due to the large memory requirement. Using BEM, the model can be solved (moving the sphere farther away does not increase the computational cost). Image from the Tonpilz Transducer Array for Sonar Systems tutorial model.
While BEM is more computationally demanding than FEM for an equal number of degrees of freedom (DOFs), BEM usually requires far fewer DOFs than FEM to obtain the same accuracy. The fully populated, dense system matrices generated by BEM require dedicated numerical methods that differ from those used for FEM. A FEM-based interface, such as the Pressure Acoustics, Frequency Domain interface, is usually faster than BEM for solving small- and medium-sized acoustics models.
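A back-of-the-envelope memory comparison illustrates why dense BEM matrices need dedicated methods such as ACA; all sizes below are assumed, order-of-magnitude numbers, not taken from any particular model:

```python
# Rough memory comparison: a FEM volume mesh with many DOFs but a
# sparse matrix, versus a BEM surface mesh with fewer DOFs but a
# fully populated matrix.
bytes_per_complex = 16  # one double-precision complex entry

n_fem = 2_000_000       # volumetric DOFs (assumed)
nnz_per_row = 30        # typical sparse stencil width (assumed)
fem_bytes = n_fem * nnz_per_row * bytes_per_complex

n_bem = 50_000          # surface DOFs (assumed)
bem_bytes = n_bem**2 * bytes_per_complex  # dense, fully populated

print(f"FEM sparse matrix: ~{fem_bytes / 1e9:.1f} GB")
print(f"BEM dense matrix:  ~{bem_bytes / 1e9:.1f} GB")
# Despite having 40x fewer DOFs, the dense BEM matrix would dominate
# memory if assembled explicitly, which is why fast summation methods
# such as ACA avoid forming it in full.
```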
According to the user's guide for the Acoustics Module, the BEM used in the Pressure Acoustics, Boundary Elements interface is based on the direct method with Costabel's symmetric coupling. To solve the resulting linear system, the adaptive cross approximation (ACA) fast summation method is used. This method partially assembles the matrices and computes the effect of the matrix-vector multiplication directly. The default iterative solver is GMRES. With the built-in multiphysics couplings, it is easy to set up problems that combine FEM- and BEM-based physics. When solving these coupled models, the default approach is to use hybridization, with the ACA for the BEM part and an appropriate preconditioner (direct or multigrid) for the FEM part of the problem.
As already mentioned, the Pressure Acoustics, Boundary Elements interface seamlessly couples to the finite-element-based interfaces like the Pressure Acoustics, Frequency Domain interface and the Solid Mechanics interface. This coupling makes it possible to easily set up hybrid FEM-BEM models that take advantage of the strengths of each formulation where needed and where they are best applied.
BEM is not meant to replace finite elements in acoustics but should be seen as a complement. The general rule of thumb is to use BEM where large fluid domains would otherwise require a very fine mesh when running a FEM-based model, and to otherwise couple BEM to FEM-based physics where they are best used. Some applications and examples include:
Remember that smaller models that fit in memory are typically faster with FEM. Use the traditional approach with a radiation condition or a PML to model open radiation domains.
The Pressure Acoustics, Boundary Elements interface can be used to replace a FEM-based radiation condition or PML and the far-field calculation feature. See, for example, the model example below.
In the Bessel Panel tutorial model, the Pressure Acoustics, Boundary Elements interface is used to model the open space. The BEM interface is effectively replacing a radiation condition (or a PML) and the far-field calculation feature that was previously necessary. This image shows the sound pressure level on the surface of the FEM domain (several point sources are located inside this domain) and in three cut planes, with a given extent, in the exterior BEM region.
When solving a problem with the BEM interface, the resulting solution consists of the dependent variables (the unknown fields) on the boundaries. This includes the pressure p and its normal derivative, i.e., the normal flux variable pabe.pbam1.bemflux. Evaluating the solution in a domain is based on an integral kernel evaluation, which is at the heart of BEM.
On boundaries, a dedicated boundary variable is defined. This variable has different definitions on exterior and interior boundaries: it is equal to the dependent variable on exterior boundaries, while up and down pressure variables (pabe.p_up and pabe.p_down) are defined on interior boundaries, because the pressure is discontinuous there; for example, at an Interior Sound Hard Wall boundary. Moreover, on all boundaries, predefined postprocessing variables exist that combine the properties of the boundary variables, when needed, with variables based on the kernel evaluation.
These variables and all other postprocessing variables are found in the Replace Expressions list in the plots, as shown in the image below.
The user interface with a list of some of the predefined postprocessing variables.
When postprocessing the BEM solution within the domains, the pressure field has to be reconstructed using the aforementioned BEM integral kernel evaluation. Dedicated data sets are available for easy visualization of the BEM solution by automating the kernel evaluation on a grid. The paragraphs below discuss data sets that can be used to plot acoustics results.
The Grid 3D and Grid 2D data sets are specially designed for evaluating the solution within domains where there is no mesh. These data sets set up a regular grid of points where the solution is evaluated. The size and bounds of the grid can be modified, as well as the resolution (the grid spacing). When visualizing wave problems, it is important to have an adequate spatial resolution. However, the resolution should not be too fine, as this will increase the rendering time.
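A quick way to pick the grid resolution is to aim for a handful of evaluation points per wavelength; the frequency and the rule of thumb below are illustrative assumptions, not values from the models in this post:

```python
# Choosing a visualization grid spacing from the acoustic wavelength:
# a common rule of thumb is 5-6 evaluation points per wavelength.
c = 343.0   # speed of sound in air (m/s)
f = 2000.0  # highest frequency of interest (Hz), assumed
points_per_wavelength = 6

wavelength = c / f
dx = wavelength / points_per_wavelength
n_points = int(1.0 / dx) + 1  # points needed across a 1 m wide grid

print(f"wavelength = {wavelength * 1e3:.1f} mm, spacing = {dx * 1e3:.1f} mm")
print(f"about {n_points} points per meter")
```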
A grid data set can, for example, be selected as the input data set for a slice or a surface plot. A grid data set and a multislice plot are automatically generated and used in the default plots when a BEM model is solved. The grid data set can also be used as input to a cut plane, cut line, or cut point.
Parameterized curves and surfaces can be used directly to evaluate the BEM solution as long as the option Only evaluate globally defined expressions is selected.
The dedicated acoustics plots can be used directly with the BEM variables as input. Examples include the Far Field plot, used for plotting the spatial response (not necessarily in the far field; in fact, at any distance), and the Directivity plot. For example, the sound pressure level variable pabe.Lp can be used as the expression.
Screenshots of the user interface for some of the different data sets mentioned above. The important settings are highlighted.
The screenshots above are taken from the Loudspeaker Radiation: BEM Acoustics tutorial model. This model solves a radiation problem and has most of the common plots and results visualization set up.
The image below shows the sound pressure level depicted in three slices through the grid on the speaker surface. To illustrate the generality of the postprocessing and visualization tools, the sound pressure level is also shown along a parameterized spiral curve created using a Parametric Curve 3D data set.
Sound pressure level depicted in different ways in the Loudspeaker Radiation: BEM Acoustics tutorial model.
Next, I want to discuss two cases that require special consideration when using BEM.
Many acoustics applications involve a situation in which a transducer is located in an infinite baffle and is radiating into a half-space. In most cases, this setup is not possible using boundary elements, at least not if the baffle has to be infinite. A finite baffle can be set up using, for example, the Interior Sound Hard Wall boundary condition.
Typically, we would want to use the Infinite Sound Hard Boundary feature. However, this condition cannot "have a hole in it," as when a loudspeaker driver sits in a baffle. Since the BEM formulation is based on the full-space Green's function, an infinite symmetry plane or infinite wall condition is truly infinite and cannot have an opening in it. Basically, all boundaries that have a selection in the physics interface and are active must be located on the same side of the infinite condition or lie on it. If this is not the case, the results will be unphysical.
My general recommendation for the infinite baffle setups is to use the FEM-based physics interface together with the far-field calculation feature and a PML or radiation condition. For an example, see the Lumped Loudspeaker Driver model. This setup will typically be much faster!
User interface of the Pressure Acoustics, Boundary Elements interface. The infinite conditions are found at the top physics level (highlighted here). Once a condition is selected, the resulting plane is depicted in the Graphics window.
Interior problems — especially problems with sharp resonances where little or no loss is present — can be challenging to solve with BEM. This is not because of the method itself but because an iterative solver is used to efficiently solve the underlying matrix system. The same problem is also found for a FEM-based model that uses an iterative solver.
Near a sharp resonance, any small change results in variations in the pressure that are hard to capture to ensure convergence. If possible, use FEM in these situations together with a direct solver or make sure to add realistic boundary conditions with losses, such as an impedance condition.
BEM is a very useful complement to FEM in the COMSOL Multiphysics environment. Many engineers in the acoustics modeling community have been looking forward to the addition of this functionality. We hope that you will enjoy this latest addition to the Acoustics Module.
See what’s possible with the specialized acoustics modeling features available in the Acoustics Module add-on product by clicking the button below.
Try it yourself: Download one of the tutorial models featured in this blog post. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file.
Topology optimization helps engineers design applications that are optimal with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also applied in thermal, electromagnetics, and acoustics applications. One physics area that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation with material parameters given in terms of the density ρ and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via an interpolation variable that ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
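As a sketch of what such an interpolation can look like, here is a generic SIMP-style ramp with assumed material values and an assumed penalization exponent; note that acoustic formulations often interpolate the inverse quantities 1/ρ and 1/K in practice, so this is an illustration of the scheme, not the exact curves of Figure 1:

```python
# Generic SIMP-style interpolation between air (xi = 0) and solid
# (xi = 1). The penalization exponent p and the material values are
# assumed, illustrative numbers.
def simp(x0, x1, xi, p=3.0):
    """Interpolate from x0 (air) to x1 (solid), penalized by xi**p."""
    return x0 + xi**p * (x1 - x0)

rho_air, rho_solid = 1.2, 2700.0  # density (kg/m^3)
K_air, K_solid = 1.42e5, 6.9e10   # bulk modulus (Pa)

for xi in (0.0, 0.5, 1.0):
    rho = simp(rho_air, rho_solid, xi)
    K = simp(K_air, K_solid, xi)
    print(f"xi = {xi:.1f}: rho = {rho:10.1f}, K = {K:.3e}")
# Intermediate xi values are penalized toward the air properties,
# pushing the optimizer toward near-binary (0/1) designs.
```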
Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
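The extent of these near-wall regions is set by the viscous and thermal boundary layer thicknesses, δ_v = √(2μ/(ρω)) and its thermal analogue. The air data below are standard textbook values; at 100 Hz the viscous layer in air is roughly 0.22 mm thick:

```python
import math

# Viscous and thermal acoustic boundary layer thicknesses in air.
mu = 1.81e-5  # dynamic viscosity (Pa*s)
rho = 1.2     # density (kg/m^3)
k = 0.0257    # thermal conductivity (W/(m*K))
Cp = 1005.0   # specific heat at constant pressure (J/(kg*K))

def delta_visc(f):
    return math.sqrt(2.0 * mu / (rho * 2.0 * math.pi * f))

def delta_therm(f):
    return math.sqrt(2.0 * k / (rho * Cp * 2.0 * math.pi * f))

for f in (100.0, 1000.0, 10000.0):
    print(f"{f:7.0f} Hz: viscous = {delta_visc(f) * 1e3:.3f} mm, "
          f"thermal = {delta_therm(f) * 1e3:.3f} mm")
# The layers shrink with increasing frequency, so thermoviscous losses
# matter most when device dimensions approach these thicknesses.
```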
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier–Stokes equations with quiescent background conditions. These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

u = −(Ψ_v / (iωρ_0)) ∇p
where the viscous field Ψ_v is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity of the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between the temperature variation and the acoustic pressure can be written in a general form (Ref. 1) as

T = (α_0 T_0 / (ρ_0 C_p)) Ψ_h p
where the thermal field Ψ_h is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be familiar to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as

Δ_{cd}Ψ_{v} + k_{v}^{2}Ψ_{v} = k_{v}^{2}    (1)

where Δ_{cd} is the Laplacian in the cross-sectional direction only and k_{v} is the viscous wave number. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
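As a sanity check for a numerical implementation, the viscous field in a narrow slit has a simple closed-form solution in the Low Reduced Frequency framework. The sketch below evaluates it; the functional form Ψ_{v} = 1 − cosh(a·y)/cosh(a·h) and the air viscosity value are my assumptions for illustration, not taken from this post:

```python
import numpy as np

def viscous_field_slit(y, h, f, nu=1.5e-5):
    """Analytic viscous field in a slit of half-height h [m].

    Psi_v -> 1 in the bulk and 0 on the no-slip walls, using the
    assumed closed-form solution Psi_v = 1 - cosh(a*y)/cosh(a*h)
    with a = sqrt(i*omega/nu), nu being the kinematic viscosity.
    """
    omega = 2 * np.pi * f
    a = np.sqrt(1j * omega / nu)   # complex boundary-layer wave number
    return 1 - np.cosh(a * y) / np.cosh(a * h)

h = 100e-6                          # 100 um half-height (illustrative)
y = np.linspace(-h, h, 5)           # wall, interior points, wall
psi = viscous_field_slit(y, h, f=1000.0)
# On the walls |Psi_v| vanishes; toward the center it approaches
# the bulk value as the boundary layer thins relative to h.
```

This kind of 1D reference solution is useful for verifying that a cross-sectional field computation reproduces the correct boundary layer profile.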
In standard acoustic topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To obtain a similar interpolation scheme for thermoviscous acoustic topology optimization, I came up with a heuristic approach in which the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are

Ψ_{v} = 0

and

n·∇Ψ_{v} = 0
These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:

Δ_{cd}Ψ_{v} + a_{v}k_{v}^{2}Ψ_{v} = a_{v}k_{v}^{2}f_{v}
We already know that for air domains, (a_{v},f_{v}) = (1,1), since that recovers the original Equation (1). If we instead set a_{v} to a large value, so that the gradient term becomes insignificant in comparison, and set f_{v} to zero, we get

Ψ_{v} = 0

This corresponds exactly to the no-slip boundary condition at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_{v},f_{v}) should have the values (“large”,0). Thus, we have established our interpolation extremes:
(a_{v},f_{v})_{air} = (1,1)

and

(a_{v},f_{v})_{solid} = (“large”,0)
I carried out a comparison between the explicit boundary conditions and interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas on the adjacent domains on the right, the suggested values of a_{v} and f_{v} are input.
Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated at a frequency for which the boundary layer is thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In this sense, they are comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as in standard acoustic topology optimization. The viscous field, as well as the thermal field, is linked to the acoustic pressure variable via the governing equations. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
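To make the interpolation concrete, here is a minimal sketch of how a RAMP-type scheme could map a design variable χ ∈ [0,1] onto the two extremes. The penalization factor, the finite stand-in for “large”, and the function names are my illustrative assumptions, not the exact scheme from Ref. 2:

```python
import numpy as np

def ramp(chi, q=5.0):
    """RAMP interpolation: maps 0 -> 0 and 1 -> 1, penalizing
    intermediate densities (q is an illustrative choice)."""
    return chi * (1 + q) / (1 + q * chi)

def interp_coefficients(chi, a_solid=1e4):
    """chi = 0 (air) -> (a_v, f_v) = (1, 1);
    chi = 1 (solid) -> (a_v, f_v) = (a_solid, 0),
    where a_solid is a finite stand-in for 'large'."""
    t = ramp(chi)
    a_v = 1 + t * (a_solid - 1)
    f_v = 1 - t
    return a_v, f_v

a0, f0 = interp_coefficients(0.0)   # air extreme
a1, f1 = interp_coefficients(1.0)   # solid extreme
```

In an optimization loop, χ would be the element-wise design variable updated by the optimizer, with the resulting (a_{v},f_{v}) fed into the modified field equation.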
Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular area with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology that gives a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where the pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. Continued focus on topology optimization, in both academia and industry, would be very advantageous for those of us working with microacoustics. I hope to see many advances in this area in the future.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Picture a micromirror as a single string on a guitar. The string is so light and thin that when you pluck it, the surrounding air dampens the string’s motion, bringing it to a standstill.
Because this damping effect is important to many MEMS devices, micromirrors have a wide variety of potential applications. For instance, these mirrors can be used to control optic elements, an ability that makes them useful in the microscopy and fiber optics fields. Micromirrors are found in scanners, heads-up displays, medical imaging, and more. Additionally, MEMS systems sometimes use integrated scanning micromirror systems for consumer and telecommunications applications.
Close-up view of an HDTV micromirror chip. Image by yellowcloud — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
When developing a micromirror actuator system, engineers need to account for its dynamic vibrating behavior and damping, both of which greatly affect the operation of the device. Simulation provides a way to analyze these factors and accurately predict system performance in a timely and cost-efficient manner.
To perform an advanced MEMS analysis, you can combine features in the Structural Mechanics Module and Acoustics Module, two add-on products to the COMSOL Multiphysics simulation platform. Let’s take a look at frequency-domain (time-harmonic) and transient analyses of a vibrating micromirror.
We model an idealized system that consists of a vibrating silicon micromirror — which is 0.5 by 0.5 mm with a thickness of 1 μm — surrounded by air. A key parameter in this model is the penetration depth; i.e., the thickness of the viscous and thermal boundary layers. In these layers, energy dissipates via viscous drag and thermal conduction. The thickness of the viscous and thermal layers is characterized by the following penetration depth scales:
δ_{v} = sqrt(μ/(πfρ)) and δ_{t} = sqrt(k/(πfρC_{p})) = δ_{v}/sqrt(Pr)

where f is the frequency, ρ is the fluid density, μ is the dynamic viscosity, k is the coefficient of thermal conduction, C_{p} is the heat capacity at constant pressure, and Pr = μC_{p}/k is the nondimensional Prandtl number.
For air, when the system is excited at a frequency of 10 kHz (which is typical for this model), the viscous and thermal scales are approximately 22 µm and 25 µm, respectively. These are comparable to the geometric scales, like the mirror thickness, meaning that thermal and viscous losses must be included. Moreover, in real systems, the mirrors may be located near surfaces or in close proximity to each other, creating narrow regions where the damping effects are accentuated.
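The boundary layer thicknesses can be reproduced with a few lines of Python. The air property values below are standard room-temperature approximations, so the results may differ slightly from the numbers used in the model:

```python
import numpy as np

# Approximate room-temperature properties of air (my assumed values)
mu = 1.81e-5     # dynamic viscosity [Pa s]
rho = 1.2        # density [kg/m^3]
k = 0.026        # thermal conductivity [W/(m K)]
Cp = 1005.0      # heat capacity at constant pressure [J/(kg K)]

def delta_visc(f):
    """Viscous penetration depth: sqrt(mu / (pi * f * rho))."""
    return np.sqrt(mu / (np.pi * f * rho))

def delta_therm(f):
    """Thermal penetration depth: sqrt(k / (pi * f * rho * Cp))."""
    return np.sqrt(k / (np.pi * f * rho * Cp))

dv = delta_visc(10e3)    # ~2.2e-5 m at 10 kHz
dt = delta_therm(10e3)   # ~2.6e-5 m at 10 kHz
```

Both layers come out at a few tens of micrometers at 10 kHz, which is indeed comparable to the 1 µm mirror thickness and the gaps around it.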
The frequency-domain analysis provides insight into the frequency response of the system, including the location of the resonance frequencies, Q-factor of the resonance, and damping of the system.
The micromirror model geometry, showing the symmetry plane, fixed constraint, and torquing force components.
In this example, we use three separate interfaces:
By modeling the detailed thermoviscous acoustics and using the Thermoviscous Acoustics, Frequency Domain interface, we can explicitly include thermal and viscous damping while solving the full linearized Navier-Stokes, continuity, and energy equations. In doing so, we accomplish one of the main goals for this model: accurately calculating the damping experienced by the mirror.
To set up and combine the three interfaces, we use the Acoustics-Thermoviscous Acoustics Boundary and Thermoviscous-Acoustics-Structure Boundary multiphysics couplings. We then solve the model using a frequency-domain sweep and an eigenfrequency study. These analyses enable us to study the resonance frequency of the mirror under a torquing load in the frequency domain.
Let’s take a look at the displacement of the micromirror for a frequency of 10 kHz and when exposed to the torquing force. In this scenario, the displacement mainly occurs at the edges of the device. To view displacement in a different way, we also plot the response at the tip of the micromirror over a range of frequencies.
Micromirror displacement at 10 kHz for phase 0 (left) and the absolute value of the z-component of the displacement field at the micromirror tip (right).
Next, let’s view the acoustic temperature variations (left image below) and acoustic pressure distribution (right image below) in the micromirror for a frequency of 11 kHz. As we can see, the maximum and minimum temperature fluctuations occur opposite to one another and there is an antisymmetric pressure distribution. The temperature fluctuations are closely related to the pressure fluctuations through the equation of state. Note that the temperature fluctuations fall to zero at the surface of the mirror, where an isothermal condition is applied. The temperature gradient near the surface gives rise to the thermal losses.
Temperature fluctuation field within the thermoviscous acoustics domain (left) and the pressure isosurfaces (right).
The two animations below show a dynamic extension of the frequency-domain data using the time-harmonic nature of the solution. Both animations depict the mirror movement in a highly exaggerated manner, with the first one showing an instantaneous velocity magnitude in a cross section and the second showing the acoustic temperature fluctuations. These results indicate that there are high-velocity regions close to the edge of the micromirror. We determine the extent of this region into the air via the scale of the viscous boundary layer (viscous penetration depth). We can also identify the thermal boundary layer or penetration depth using the same method.
Animation of the time-harmonic variation in the local velocity.
Animation of the time-harmonic variation in the acoustic temperature fluctuations.
When the problem is formulated in the frequency domain, eigenmodes or eigenfrequencies can also be identified. From the eigenfrequency study (also performed in the model), we can determine the vibrating modes, shown in the animation below (only half the mirror is shown as symmetry applies). Our results show that the fundamental mode is around 10.5 kHz, with higher modes at 13.1 kHz and 39.5 kHz. The complex value of the eigenfrequency is related to the Q-factor of the resonance and thus the damping. (This relationship is discussed in detail in the Vibrating Micromirror model documentation.)
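The relation between a complex eigenfrequency and the Q-factor can be sketched in a few lines. The conversion Q = |f|/(2·Im(f)) is the commonly used definition; the numerical eigenfrequency below is an illustrative placeholder, not a result from the model:

```python
import numpy as np

def q_factor(f_complex):
    """Quality factor from a complex eigenfrequency.

    Uses Q = |f| / (2 * |Im(f)|); the sign convention for the
    imaginary part can differ between tools, hence the abs().
    """
    return abs(f_complex) / (2 * abs(np.imag(f_complex)))

# Hypothetical damped eigenfrequency near the ~10.5 kHz fundamental:
f1 = 10500 + 75j        # illustrative damping, not a computed value
Q1 = q_factor(f1)       # ~70 for this made-up imaginary part
```

A large imaginary part relative to the real part means strong damping and a low Q, which is exactly what the air loading of the thin mirror produces.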
Animation of the first three vibrating modes of the micromirror.
As of version 5.3a of the COMSOL® software, a different take on this example solves for the transient behavior of the micromirror. Using the same geometry, we extend the frequency-domain analysis into a transient analysis. To achieve this, we swap the frequency-domain interfaces with their corresponding transient interfaces and adjust the settings of the transient solver. In the simulation, the micromirror is actuated for a short time and exhibits damped vibrations.
The resulting model includes some of the most advanced air and gas damping mechanisms that COMSOL Multiphysics has to offer. For instance, the Thermoviscous Acoustics, Transient interface generates the full details for the viscous and thermal damping of the micromirror from the surrounding air.
In addition, by coupling the transient perfectly matched layer capabilities of pressure acoustics to the thermoviscous acoustics domain, we can create efficient nonreflecting boundary conditions (NRBCs) for this model in the time domain.
Let’s start with the displacement results. The 3D results (left image below) visualize the displacement of the micromirror and the pressure distribution at a given time. We also generate a plot (right image below) to illustrate the damped vibrations caused by thermal and viscous losses. The green curve represents the undamped response of the micromirror when the surrounding air is not coupled to the mirror movement. The time-domain simulations make it possible to study transients of the system, like the decay time, and the response of the system to an anharmonic forcing.
Micromirror displacement and pressure distribution (left) and the transient evolution of the mirror displacement (right).
We can also examine the acoustic temperature variations surrounding the micromirror. The isothermal condition at the micromirror surface produces an acoustic thermal boundary layer. As with the frequency-domain example, the highest and lowest temperatures are located opposite to one another.
In addition, by calculating the acoustic velocity variations of the micromirror, we see that a no-slip condition at the micromirror surface results in a viscous boundary layer.
Acoustic temperature variations (left) as well as acoustic velocity variations for the x-component (center) and z-component (right).
These examples demonstrate that we can analyze micromirrors using advanced modeling features available in the Acoustics Module in combination with the Structural Mechanics Module. For more details on modeling micromirrors, check out the tutorials below.
Fluid-filled pipes, also referred to as fluid-carrying structures, have a large number of industrial applications, such as gas pipelines, automotive mufflers, aircraft fuselage, and underwater pipelines. The size of a pipeline system can range from centimeters to kilometers.
Common applications of fluid-filled pipes. Left: A submerged pipeline. Image by Grand Canyon National Park. Licensed under CC BY 2.0, via Flickr Creative Commons. Center: A model of an aircraft fuselage. Right: An automotive muffler. Image by lw5315us. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
Large pipe systems are difficult to model with simulation software, and because the acoustic and elastic modes don’t exist independently, analyzing them individually doesn’t make it easier. Therefore, we need to account for the effect of fluid loading on the response of the pipe.
At low frequencies, the fluid loading term tends to be small, so the response of the system is dominated by the dynamics of the structure/pipe (Ref. 2). Fluid loading changes the vibrational characteristics of the structure in contact, and consequently, the acoustic radiation. Fluid-loading effects are exhibited strongest by structures in contact with denser fluids, since the fluid forces are proportional to the mean density of the fluid.
Generally, systems can be described by distributed mass and stiffness. A continuous system has infinitely many degrees of freedom (DOFs) and therefore infinitely many modes. Within a finite frequency range, however, there is a finite number of modes, which can be analyzed individually using modal decomposition. The motion of such continuous systems is described by partial differential equations (PDEs) derived from force/acceleration and force/deformation relations. Examples of such systems are strings, rods, and shafts (second-order PDEs) as well as beams (fourth-order PDEs) and fluid-filled pipes.
The solution to such equations can be visualized using two approaches:
Suppose we’re interested in modeling the dynamics of a large system at higher frequencies using the finite element method (FEM). To capture the behavior, the wavelength must be discretized with a sufficient number of elements, which can result in a large number of DOFs and high memory and time requirements. We can tackle this issue by using wave modes, i.e., representing the system in terms of guided waves, since the waves travel long distances before they decay.
Wave properties are another advantage of a wave-based approach. They are important for studying structure-borne sound, frequency response of finite-length waveguides, and computing the energy transmission through structures. You represent these wave modes through dispersion curves, which provide a relationship between the wave number and frequency.
Dispersion curves are basically separate lines that each represent an individual mode. The only prerequisite for the wave-based method is that the cross section of the system is constant (there is no limit to the length). For modeling long systems, such as pipes carrying fluid, beams, or rail track, the wave-based approach is very useful.
Waves propagate in time and space. The spatial variation is described by a quantity representing phase change per unit distance and is equal to ω/c. This is the wave number, denoted by k. One wavelength corresponds to an x-dependent phase difference of 2π: kλ = 2π.
When a system is excited with a force at one end, a large number of waves start to propagate toward the other side. Each wave travels at its phase velocity, which may be independent of frequency (e.g., for longitudinal and shear waves) or frequency dependent (e.g., for bending waves). All of the waves travel together under an envelope. The speed at which the energy is transported is given by the group velocity, the velocity of the envelope, given by c_{g} = ∂ω/∂k.
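The definition c_{g} = ∂ω/∂k lends itself to a quick numerical check: sample a dispersion curve and differentiate it. The sketch below uses an illustrative bending-type relation ω ∝ k² (my own example, not data from this model), for which the group velocity should come out as exactly twice the phase velocity:

```python
import numpy as np

# Sampled dispersion curve: omega = B * k^2 (bending-type, illustrative)
k = np.linspace(1.0, 50.0, 500)   # wave numbers [1/m]
B = 1.0                            # (EI/(rho*A))^(1/2) stand-in, illustrative
omega = B * k**2

c_phase = omega / k                # phase velocity omega/k
c_group = np.gradient(omega, k)    # numerical group velocity d(omega)/dk

# For omega ~ k^2, theory gives c_group = 2 * c_phase:
ratio = c_group[250] / c_phase[250]
```

The same finite-difference trick works on numerically computed dispersion curves, where no analytic ω(k) is available.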
Schematic of a dispersion curve.
Dispersion curves explain the dynamics of a coupled system. In a fluid-filled pipe where waves can travel in fluid as well as in the pipe wall, the dispersion curves provide a common wave number or wave mode that propagates into the system as a whole. Dispersion curves also provide insight into what happens inside the system at different frequencies. Let’s see how to compute dispersion curves analytically.
Consider a linear conservative system that is uniform and unbounded in one direction (z). The equation of free vibration can be written as:
μ(z)(∂^{2}w/∂t^{2}) + L(z)w = 0    (1)

where μ(z) is the mass density and L(z) is the stiffness operator, a spatial differential operator involving ∂/∂z, ∂^{2}/∂z^{2}, and so on.
The exact form varies. In general, w might be a function of 1, 2, or 3 space variables depending on the problem (such as a beam, plate, or acoustic cavity). Under the passage of a time-harmonic wave, the solution of Eq. (1) is w(z,t) = We^{i(ωt – kz)}, where W is the amplitude of the wave, ω is the circular frequency, and k is the wave number.
Substituting w(z,t) in Eq. (1) provides the dispersion/characteristic equation. The solutions are wave numbers, which come in pairs and represent waves traveling in the ±z direction. Wave numbers can be characterized as:
We may want to obtain basic wave modes (such as longitudinal, shear, and bending) of a structure analytically. Systems considered here have a constant cross section and wave propagation in the positive x direction. For computing longitudinal motion, consider a uniform elastic bar with density ρ and Young’s modulus E. The equation of motion for free vibration is given by ρ(∂^{2}u/∂t^{2}) = E(∂^{2}u/∂x^{2}). Using the same principle as for time-harmonic motion, we get the dispersion relation k_{L} = ω sqrt(ρ/E), with phase velocity c_{L} = sqrt(E/ρ) and group velocity c_{g} = c_{L}. Since c_{L} is independent of ω and k, all harmonic waves travel at the same speed. The dispersion relation for shear waves is of the form k_{S} = ω sqrt(ρ/G), where G is the shear modulus of the material.
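As a quick numerical illustration, the nondispersive longitudinal and shear speeds follow directly from these relations. The property values are the steel data used later in the pipe example:

```python
import numpy as np

# Steel properties from the pipe example below
E, nu, rho = 2e11, 0.3, 7800.0
G = E / (2 * (1 + nu))          # shear modulus from E and Poisson's ratio

c_L = np.sqrt(E / rho)          # longitudinal (thin-rod) speed, ~5060 m/s
c_S = np.sqrt(G / rho)          # shear speed, ~3140 m/s

def k_longitudinal(f):
    """Nondispersive longitudinal wave number, k = omega / c_L."""
    return 2 * np.pi * f / c_L
```

Because both speeds are frequency independent, these modes appear as straight lines through the origin in a dispersion diagram.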
To compute the bending waves, we consider the Euler-Bernoulli and Timoshenko theories, which are based on certain assumptions. The Euler-Bernoulli theory assumes that the cross section of the beam remains plane and perpendicular to the neutral axis during bending, ignoring rotary inertia and shearing effects. This simplifies many terms and yields a fourth-order partial differential equation, EI(∂^{4}w/∂x^{4}) + ρA(∂^{2}w/∂t^{2}) = 0, which can be easily solved.
The only problem with this assumption is that it is not valid at high frequencies, when the wavelength becomes comparable to the thickness of the structure. The dispersion relation is given by k_{b} = (ω^{2}ρA/EI)^{1/4}, corresponding to phase velocity c_{b} = ω/k_{b} and group velocity c_{gb} = 2c_{b}. The phase speed depends on frequency, so bending waves are dispersive: a wave packet spreads out because its higher-frequency components propagate faster.
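The bending-wave dispersion relation can likewise be evaluated numerically. In the sketch below, the rectangular steel cross section is an arbitrary illustrative choice; the point is only that the phase speed scales with the square root of frequency:

```python
import numpy as np

# Euler-Bernoulli bending wave number for an illustrative steel beam
E, rho = 2e11, 7800.0
b, h = 0.02, 0.005              # rectangular cross section [m], illustrative
A = b * h                        # area
I = b * h**3 / 12                # second moment of area

def k_bending(f):
    """k_b = (omega^2 * rho * A / (E * I))^(1/4)."""
    w = 2 * np.pi * f
    return (w**2 * rho * A / (E * I)) ** 0.25

# Phase speeds at 100 Hz and 400 Hz: quadrupling the frequency
# should double the phase speed (c_b ~ sqrt(omega)).
c100 = 2 * np.pi * 100.0 / k_bending(100.0)
c400 = 2 * np.pi * 400.0 / k_bending(400.0)
```

This square-root frequency dependence is what makes the bending branch curve upward in a dispersion diagram, in contrast to the straight longitudinal and shear branches.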
Other theories, such as Timoshenko, incorporate shear effects and provide more accurate behavior at higher frequencies. For complicated cross sections, analytical solutions are not feasible.
The acoustic pressure field inside a cylindrical duct, which satisfies the acoustic wave equation, is given by:

p(r,θ,z) = P_{n}J_{n}(k_{r}r)cos(nθ)e^{−ik_{z}z}

where n is the circumferential mode order, P_{n} is the amplitude coefficient, J_{n}(k_{r}r) is the Bessel function of the first kind, k_{z} is the out-of-plane wave number, and θ is the circumferential angle.
The radial wave number k_{r} is determined by the boundary condition for a rigid wall; i.e., J_{n}′(k_{r}r)|_{r=a} = 0, where J_{n}′(k_{r}r) is the derivative of the Bessel function with respect to r. For a given n, this condition has multiple solutions, i.e., multiple radial modes. Correspondingly, the out-of-plane wave number is computed using the relation k_{z}^{2} + k_{r}^{2} = k^{2}.
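Since the roots of J_{n}′ are available in SciPy, the duct cutoff frequencies (where k_{z} turns from evanescent to propagating, i.e., where k = k_{r}) can be estimated in a few lines. The radius below assumes the duct radius equals the inner radius of the pipe in the later example (outer radius minus wall thickness); that reading is my assumption:

```python
import numpy as np
from scipy.special import jnp_zeros  # zeros of J_n'(x)

c = 343.0                 # speed of sound in air [m/s]
a = 0.05 - 0.0025         # assumed duct radius: r_o minus wall thickness [m]

def cutoff_freqs(n, count):
    """First `count` cutoff frequencies of circumferential order n:
    f_cut = x_nm * c / (2 * pi * a), with x_nm the zeros of J_n'."""
    return jnp_zeros(n, count) * c / (2 * np.pi * a)

f_10 = cutoff_freqs(1, 1)[0]   # first spinning mode, ~2.1 kHz
f_20 = cutoff_freqs(2, 1)[0]   # second order, ~3.5 kHz
```

These estimates land close to the first two cut-on frequencies read off the duct dispersion curves later in the post (around 2000 and 3500 Hz).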
Our fluid-filled pipe is linearly elastic and homogeneous. The fluid is purely acoustic, which means it’s compressible, inviscid, and barotropic. The pipe’s modes are computed individually. For the numerical example, the pipe material is steel and the fluid is air. Material properties are given by:
| Material Property | Value |
| --- | --- |
| Young's modulus, E | 2×10^{11} N/m^{2} |
| Density of steel, ρ_{s} | 7800 kg/m^{3} |
| Poisson's ratio, ν | 0.3 |
| Density of air, ρ_{f} | 1.25 kg/m^{3} |
| Speed of sound in air, c | 343 m/s |
| Outer radius, r_{o} | 0.05 m |
| Wall thickness, t | 0.0025 m |
We use the Solid Mechanics interface and the Pressure Acoustics, Frequency Domain interface to solve the model. We also use the Mode Analysis study type, where the modes or out-of-plane wave numbers are computed at each frequency. Mode analysis assumes that the mode is harmonic in space; i.e., u(x,y,z) = u(x,y)e^{ik_{z}z}. For free vibrations, this equation can be solved at a given frequency for the out-of-plane wave numbers, k_{z}.
Certain discrete values — eigenvalues — correspond to the wave numbers of the propagating or evanescent modes. The mode analysis study step triggers the solver that can find these wave numbers and the corresponding mode shapes. A parametric sweep of frequency computes the wave numbers at different frequencies.
Settings for computing out-of-plane wave numbers.
The real values of the wave numbers are plotted, since they correspond to propagating wave modes. The cross-sectional mode shapes are also plotted in terms of total displacement. To read the dispersion curves, we follow the individual lines (see below). Comparing the bending, shear, and longitudinal modes with analytical solutions helps to easily identify them. We also see that a mode cuts on at around 6000 Hz and propagates from there. Sometimes, the behavior of a mode changes at high frequencies (a bending mode can convert into a shear, longitudinal, or extensional mode). Such behavior can be easily captured with dispersion curves.
Dispersion curves for a hollow cylindrical pipe (left) and rigid-walled acoustic duct (right).
Pipe cross-sectional mode shapes.
The dispersion curves for a cylindrical rigid-walled duct can be analyzed using the same analogy. The first acoustic modes (see graph above to the right) cut on at around 2000, 3500, 4300, 4800, and 6100 Hz. The modes are compared with the analytical solutions, and the cross-sectional shapes are plotted for the cylindrical duct, also showing the pressure distribution across the duct cross section.
Pressure distribution at different modes.
The wave numbers computed using the COMSOL Multiphysics® software are compared with the analytical wave numbers of the hollow pipe and rigid-walled cylindrical duct, respectively. Results show good agreement, but there are clear differences observed for the bending mode. Since the analytical theory is based on assumptions, it cannot be used for high frequencies. The reliability of numerical results lies in the proper discretization of the domain under study.
Note that a sufficient number of elements per wavelength (~6–8 quadratic elements) must be used to capture the wavelength accurately. Another advantage of the numerical approach is that analytical solutions are difficult to obtain for complex cross sections (such as multilayered pipes or complex-shaped cross sections). In the plot above to the left, apart from the regular wave modes (i.e., bending, longitudinal, and shear), many other modes are observed in the numerical solutions. Their number increases with the frequency range. These “extra” modes (such as the ring mode) also have physical significance, and they are extremely difficult to obtain via analytical solutions. The system’s overall dynamic response is the superposition of all of the modes.
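The rule of thumb of ~6 quadratic elements per wavelength translates directly into a maximum element size. For example, for the air domain up to 6000 Hz (a back-of-the-envelope estimate, not a setting taken from the model):

```python
# Maximum mesh element size from the elements-per-wavelength rule
c = 343.0            # speed of sound in air [m/s]
f_max = 6000.0       # highest frequency of interest [Hz]
n_elem = 6           # quadratic elements per wavelength (rule of thumb)

lam_min = c / f_max          # shortest acoustic wavelength, ~0.057 m
h_max = lam_min / n_elem     # maximum element size, ~9.5 mm
```

For the elastic domain, the same estimate should be repeated with the slowest structural wave speed in the frequency band, since bending waves can be much shorter than acoustic waves.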
At higher frequencies, the system’s behavior becomes more complex. Modes overlap with one another, and it’s extremely difficult to understand the behavior of each mode. Again, dispersion curves come to the rescue.
Now, we compute the wave number for the coupled system for the fluid-filled pipe. Using the method described earlier, the wave modes are computed using the mode analysis solver in COMSOL Multiphysics for both air and water as the internal fluid.
Dispersion curves for an air-filled (left) and water-filled (right) steel pipe.
The results for the air-filled steel pipe are compared with the uncoupled acoustic and elastic modes. Since the fluid is light, it has minimal effect on the vibrations of the coupled system.
The ring mode can be seen at low frequencies where the pipe resonates as a ring. However, due to Poisson’s effect, there is a slight coupling between the elastic and acoustic parts. As the frequency increases, the motions of the elastic and acoustic parts become strongly coupled, highlighted by a rapid increase in radial vibrations. For instance, branch 1 corresponds to the longitudinal mode and branch 2 corresponds to the coupled mode. Although the coupling between air and steel is weak, at 6000 Hz, the extensional mode converts into an acoustic mode.
Cross-sectional mode shapes for the coupled system are plotted below. They correspond to the displacements in the pipe and pressure field in the fluid domain.
Strong coupling behavior is seen in the plot above for a water-filled pipe. Branch 1 corresponds to the acoustic wave in a rigid-walled cylindrical duct (a purely acoustic mode). Considering branch 2, the pipe behaves as if in vacuo at low frequencies. At high frequencies, the fluid and pipe motion become strongly coupled, and the mode converts into a second acoustic mode. Branch 3 originates (cuts on) at around 10,000 Hz. The mode seems to follow the trend of an extensional structural mode, and at high frequencies, it again converts into a rigid-walled acoustic mode. We can analyze the other branches similarly.
Coupled elastoacoustic wave mode shapes, with air as the internal fluid.
Furthermore, dynamic analysis of the system using dispersion curves can be carried out at high frequencies. For finite-length systems, these propagation constants or wave numbers can be used to compute the forced response with significant computational efficiency.
Suppose you want to reduce the noise radiation from your system. A few easy techniques can be employed, such as using a multilayered/sandwich pipe made of soft rubber material enclosed by two stiff skins or a complicated cross section (maybe elliptical). Such complex configurations can easily be tested using dispersion curves.
However, the analysis must have:
In this blog post, we have discussed how dispersion curves are computed for an infinite-length multiphysics system and how they can be further analyzed for structural mechanics and pressure acoustics. The analysis is performed using the mode analysis solver. In an upcoming blog post, we will demonstrate how to use wave modes to compute the forced response of finite length waveguides.
C.R. Fuller and F.J. Fahy, “Characteristics of wave propagation and energy distributions in cylindrical elastic shells filled with fluid,” Journal of Sound and Vibration, vol. 81, no. 4, pp. 501–518, 1982.
Imagine that you’re at a busy cocktail party on New Year’s Eve. Music and laughter fill the air in a cacophony of sound. You and a friend are chatting in the middle of the crowd, waiting in anticipation for the countdown to begin.
Now, close your eyes and think about trying to listen to your friend.
How did you pick your friend’s voice out from the mixture of noises around you?
The answer to this question lies within the cocktail party effect, a concept popularized by Colin Cherry in 1953. The cocktail party problem involves hearing and focusing on a sound of interest, like a speech signal, in an environment with competing sounds.
To do so, you need to overcome two challenges:
These challenges are exacerbated when the party becomes larger and there are more competing sound sources. As a result, it is difficult to determine the speech signal of interest, recover it from the blending of sounds around you, and then pay attention to it. Despite the challenge, many people are able to naturally solve this problem without thinking much about it.
So how do we do it? Let’s take a look…
According to this source, a main element at play here is that our brains are able to use grouping cues to determine which sounds go together. For instance, individual sounds often have common amplitude changes across their different frequencies. This means that when we come across sounds at multiple frequencies that stop and start at the same time, our brains interpret these as belonging to the same sound source. Additionally, when frequencies in a sound mix have a harmonic relationship, they are often heard as one sound, since it is likely that they are related to one another.
Fluctuations in natural sounds also make it easier to differentiate between the sounds. Although different sounds can obscure each other at times, when they fluctuate, we get a glimpse of the underlying sounds in the noisy environment. Our auditory system can then fill in the blanks for the obscured sounds by accurately grouping the obscured bits.
Press play to be transported to a noisy cocktail party. At first, you can only hear a melange of sound. Then, you run into an old friend, who starts talking to you. As you focus on what your friend is saying, you are eventually able to filter out the other sounds of the party, effectively turning them into background noise.
Another helpful way our brains solve this problem is by using our understanding of various classes of sounds. Going back to our cocktail party example, if your friend is speaking, you’ll have a better chance of hearing them if they are forming coherent sentences than if they are speaking gibberish. In addition, your perception of sound is more accurate if your friend has an accent that is familiar to you.
Localization and visual cues also help us direct our attention to the correct auditory source. If a target sound is in a different location than undesired sounds, for example, we can more easily differentiate it using our spatial hearing, and as a result, the rest becomes background noise.
While people with normal hearing are typically able to solve the cocktail party problem on their own, people with impaired hearing may struggle in loud situations. To learn more, we reached out to Abigail Kressner of the Technical University of Denmark. Kressner mentions that one generally accepted theory on why hearing-impaired people struggle in loud situations is that it is due to a “combination of audibility (i.e., whether the signals are loud enough for the hearing-impaired person to hear them) and reduced temporal resolution.”
Kressner elaborates by saying that these issues may “influence a hearing-impaired listener’s ability to segregate different streams of sound within a complex acoustic scene like a cocktail party and that they also may have reduced attentional segregation.” Those who are hearing impaired are also less able to “listen in the dips” between fluctuations of competing noise sources. As we touched on earlier, these fluctuations in the noise provide glimpses of the target speech sound for those with normal hearing, and therefore, they provide clues for understanding the speech. Replicating this ability in machine algorithms for hearing aids is a challenge for hearing aid designers.
The first objective in designing hearing aids is, of course, to make sounds audible for hearing-aid users. But after meeting that requirement, there are many additional features that can be added, including:
Kressner notes that these approaches both encounter the challenge of distinguishing between sound signals and finding the one the listener wants to hear. For instance, you may want to listen to a friend talking in front of you or someone on the other side of the room who has just called your name.
A hearing aid. Image by Udo Schröter — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
How will the hearing aid device know which signal the user wants to listen to? The COCOHA project thinks brain signals (EEG signals) are the answer. This solution still has a lot of work ahead of it, though, including more research into decoding cognitive attention and then using this information to adjust the device and suppress unwanted signals.
Let’s move away from our imaginary cocktail party and instead take a walk through a dense forest. Here, on warm spring evenings, you may hear a chorus of Cope’s gray treefrogs. While each individual call is similar, fitter males give off faster and longer calls. The females listen for these calls, tuning out extra noises and tuning in to the calls of interest. Research into how these frogs achieve this feat and the difference between their ears and human ears could assist in improving the design of both hearing aids and speech recognition systems.
Finding inspiration for improving hearing aid designs in nature; a photo of a Cope’s gray treefrog. Image by Fredlyfish4 — Own work. Licensed under CC BY-SA 4.0, via Wikimedia Commons.
So far, a lot of research into designing hearing aids that account for the cocktail party problem “has been acquired via very controlled, yet unrealistic laboratory experiments,” Kressner notes. This isn’t ideal, because there is “a disconnect between what we see in the laboratory and what we see in the real world.” To move forward and close this gap, Kressner suggests that it could be possible to use, for example, numerical modeling or more realistic psychoacoustic reproduction techniques to better understand what is happening in the real world.
Finding inspiration in simulation; a probe tube microphone, which can be used in association with hearing aids, simulated with the COMSOL Multiphysics® software.
Say you just ordered a new loudspeaker. You probably have expectations about the product: It should survive the trip home; withstand falls; and, above all, it should work. As Richard Little said in his keynote talk: “Our customers expect that things just kind of work when [they come] out of the box and they don’t really have to worry about it, because that’s what consumer products are normally expected to do.”
Upholding these performance conditions is the job of Sonos engineers working to create powered wireless loudspeakers. To do so, they have to ensure the performance and durability of the many competing components in a loudspeaker. Little’s team focuses on just one of these components: audio transducers, which convert input electrical signals into sound. In his talk, Little discussed maximizing the durability of transducers via a predictive design process by accounting for:
Little and his group use simulation to effectively analyze the durability of their transducer designs. This enables them to create virtual prototypes, improve the accuracy of their physical prototypes, and reduce time to market.
First, Little discussed a transducer component involving nonmoving parts that need to withstand handling-related stress: the basket. The basket of a transducer is its weakest part. As such, Little’s team works to improve transducer baskets by studying their materials and geometry. Little spoke about finding a type and grade of steel that can prevent a basket from deforming, while still minimizing material costs. This is accomplished by evaluating the basket’s structural integrity with time-dependent mechanical simulations.
From the video: Simulation results for the transducer basket with <130 MPa stress.
The results Little shared indicate that 130 MPa is the targeted yield stress. This is seen as the lowest acceptable yield strength level to use when choosing a grade of steel for a design. Of course, there are other options for improving the design’s robustness, including using thicker steel or a different plastic material for the basket. However, these design choices have implications with regard to cost, acoustic performance, and manufacturing requirements.
Switching gears, Little discussed a moving component example involving a flat speaker that is typically placed beneath a television. Due to its design, the speaker’s woofer is shallow and the diaphragm is mostly flat. The voice coil of the woofer is subjected to a great deal of stress where it is attached to the lower diaphragm surface, because the flat diaphragm offers no geometric reinforcement.
Simulation confirms that high stresses exist in this specific location, with some areas having high enough stress to eventually fatigue and fail. The Sonos team addressed this challenge by reinforcing the area with the highest stress by gluing on a small secondary ring. This design modification relieves the stress concentration, while negligibly impacting cost and acoustic performance.
From the video: The woofer diaphragm design was modified by adding a second ring, reducing stress.
With simulation, Richard Little and the Sonos team managed to accurately analyze stress on audio transducers and improve their designs. “This is a great way of investigating the durability of your product, as opposed to just designing for performance,” Little said. “It’s something that has been very important for us. We want our products to last 10 years out there in the field under normal usage.”
Want to learn more about Sonos’ acoustic simulations and loudspeaker designs? Watch the video at the top of this post.
I’m going to assume that the engineers, physicists, scientists, and researchers who read the COMSOL Blog don’t hold much stock in the paranormal. Even so, hearing a rattling window in the middle of the night or a whispering noise in an empty house is enough to frighten even a seasoned analytical mind.
When a scientist debunks a suspected poltergeist (a supernatural entity that causes physical disturbances), it is known as a false poltergeist. Researchers in this line of work often refer to Occam’s razor to explain these occurrences, as it states that the simplest explanation for something is likely the most valid. For instance, the Roswell UFO incident in 1947 can be explained most simply by a weather balloon that fell out of the sky, not a flying saucer flown by little gray aliens.
Weather balloon or aliens: Which explanation do you think is the most valid? (Photo from my 2016 visit to the International UFO Museum and Research Center in Roswell, New Mexico.)
In the article “Things that Go Bump in the Night: The Physics of ‘False Poltergeists’” from a past issue of Sound & Vibration magazine, Roman Vinokur discusses common vibroacoustic phenomena that are mistaken for ghosts and supernatural entities. Let’s put on our ghost hunting/acoustician hats and take a look at some of the examples featured in the article.
If you ever wake in the middle of the night to a rumbling or groaning sound, think about Helmholtz resonance before burning sage or calling a medium. The most basic example of a Helmholtz resonator is a glass bottle with a narrow opening. When you hold the bottle up to your lips and blow across its opening, it makes a humming sound.
Helmholtz resonance in action (turn up your sound!)
A room with an open window or door can also act as a Helmholtz resonator. When turbulent airflow passes through an opening in the room, it excites the Helmholtz resonance, whose natural frequency depends on the room’s volume, the thickness of the walls, and the area of the opening. If this frequency falls within the infrasound range (below 20 Hz), the resonance itself is inaudible, but it can produce creepy secondary effects in the audible range, perhaps leading to a suspected poltergeist.
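The natural frequency mentioned above can be estimated with the classic Helmholtz resonator formula, f = (c/2π)·√(A/(V·L_eff)). The short sketch below applies it to a hypothetical room; the room volume, window area, and wall thickness are made-up values for illustration, and the end correction is a common approximation for a circular opening.

```python
import math

def helmholtz_frequency(c, opening_area, volume, neck_length):
    """Resonance frequency of a Helmholtz resonator.

    c: speed of sound [m/s], opening_area [m^2], volume [m^3],
    neck_length [m] (here, the wall thickness at the opening).
    An end correction of ~1.7*r is added for a circular opening.
    """
    r = math.sqrt(opening_area / math.pi)   # equivalent circular radius
    l_eff = neck_length + 1.7 * r           # neck length + end correction
    return (c / (2 * math.pi)) * math.sqrt(opening_area / (volume * l_eff))

# Hypothetical room: 40 m^3 volume, 0.5 m^2 open window, 0.2 m thick wall
f = helmholtz_frequency(c=343.0, opening_area=0.5, volume=40.0, neck_length=0.2)
print(f"Resonance frequency: {f:.1f} Hz")  # well below 20 Hz -> infrasound
```

With these assumed dimensions, the resonance lands well below 20 Hz, squarely in the infrasound range discussed above.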
Infrasound can even vibrate our internal organs. This explains why people who recount paranormal experiences often describe feelings of nausea; anxiety; and most commonly, coldness.
Helmholtz resonance can be excited by sound waves propagating from an internal or external noise source. For example, thunder can reverberate in small rooms, which can be perceived as something more malicious than weather. The Sound & Vibration article mentions a building that was rumored to be haunted. In actuality, the only thing haunting the building was revenge. The workers who built the apartment building were scammed by the owner who hired them. To get revenge, the workers embedded empty glass bottles in the building’s roof. The bottles acted as Helmholtz resonators, and wind passing through their openings at night caused tenants to hear roaring sounds at a frequency of 100 Hz.
Besides causing scary sounds, Helmholtz resonators also reduce noise in a wide range of applications. For instance, resonators are used in car exhaust systems because they can attenuate a specific and narrow frequency band. When a mean flow enters a typical exhaust system, a Helmholtz resonator attenuates the sound that is generated (similar to our bottle example above, but with the opposite effect).
An animation showing the pressure distribution for a Helmholtz resonator under certain operating conditions. Automotive designers often turn to acoustics modeling and analysis to evaluate how the presence of flow affects the Helmholtz resonator’s performance.
Learn about modeling aeroacoustics applications with the COMSOL Multiphysics® software in a previous blog post by my colleague Mads Jensen.
Watch any film about a haunted house (if you’re not sure where to start, I can recommend a few!) and there is a scene with creaking floorboards, rattling windows, doors that open and shut on their own, or some combination of these phenomena. In the movies, a ghost is to blame, but the actual cause of such movement and noise isn’t as insidious: It is often simply mechanical resonance.
When the frequency spectrum of the source of a vibration lies in the infrasound range, it is inaudible (or barely audible) to the human ear. However, the movement caused by the vibration source is easy to hear. Basically, you can sometimes hear the effect of a vibration, but not the cause. This discrepancy is where ghost stories are born.
These roommates have very different explanations for what’s causing the rattling noises between the first and second floors.
Going back to the example of a multistory building, rooms often contain equipment that moves or vibrates. Objects ranging in size from vacuum cleaners to air conditioning units to treadmills can cause noise on another level of a building. If a person hears the noise produced by vibrations but is too far away to hear the cause, they could suspect that a paranormal entity is afoot.
The way skyscrapers are arranged in cities can sometimes form street canyons, also called urban canyons, which alter how sound propagates. Neglecting sound absorption in the air and at solid surfaces, a sound wave propagating in a canyon does not follow the usual distance law valid for open spaces (6 dB of attenuation for each doubling of the distance, corresponding to a spherical wavefront). In a canyon, the pressure amplitude instead decreases inversely with the square root of the distance from the noise source, so for every doubling of the distance traveled, the level drops by only 3 dB (a cylindrical wavefront). Thus, sound can propagate over longer distances in canyons with less attenuation than in open environments.
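The two distance laws above follow directly from how the pressure amplitude decays with distance. As a quick sanity check, not tied to any specific model in this post, the level drop per doubling of distance can be computed for both wavefront types:

```python
import math

def level_drop_per_doubling(exponent):
    """SPL decrease when distance doubles, for pressure ~ 1/r**exponent."""
    return 20 * math.log10(2 ** exponent)

spherical = level_drop_per_doubling(1.0)    # open space: p ~ 1/r
cylindrical = level_drop_per_doubling(0.5)  # street canyon: p ~ 1/sqrt(r)
print(f"Open space:    {spherical:.1f} dB per doubling of distance")
print(f"Street canyon: {cylindrical:.1f} dB per doubling of distance")
```

The canyon's 3 dB per doubling, half the open-space value, is why a quiet conversation can carry to an upper floor.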
Let’s say our multistory building has a wall canyon. (You can picture a wall canyon as the cutout center of a U-shaped building, sometimes called a courtyard building.) If a conversation is happening on a lower level of the building — in front of open windows, of course — the canyon effect causes the sound to propagate to a higher floor. The person on the upper level hears the conversation as if it is happening close to them, but doesn’t see anyone talking. Therefore, the wall canyon causes the perceived effect of the hushed whispers of a ghost.
Due to the canyon effect, the conversation in front of an open window loses little of its original level by the time it reaches the open window a floor above.
Interestingly, a temperature inversion can also produce something similar to the canyon effect. At night, the ground cools faster than the air above it. This causes sound to propagate over longer distances, as the waves are refracted downward and repeatedly reflected from the ground. What could possibly be scary about this effect, you ask? Perhaps hearing an owl hoot when there are no owls or trees around for miles…
This acoustic effect can be studied using ray acoustics and the propagation in graded media functionality. It is commonly studied in the field of underwater acoustics, where waves propagate in underwater sound channels generated by temperature or salinity gradients in the water column.
Owls are often seen as ominous, but aren’t they cute?
Let’s go back to Occam’s razor: The simplest explanation is usually the truth. As we’ve discussed, supernatural and paranormal experiences can be explained simply via acoustics and vibrations. But maybe the explanation is even simpler.
Say you’re alone in the house and hear footsteps on the floor above you. Is it a ghost? Alien? Infrasound from the layout of the room? It could just be a roommate or family member who had a change in their schedule and happens to be walking around upstairs when you don’t expect them to. What about noticing a quiet pitter patter while taking a walk outside? It’s probably not a ghost. It could be sound carried farther than usual by the canyon effect. Or, it’s simply a cat out exploring the neighborhood — although if it’s a black cat, I’d still take precautions.
COMSOL’s main office is located north of Boston in Massachusetts. In the southeastern corner of the state, there is an area known to paranormal enthusiasts as the Bridgewater Triangle (a reference to the area’s epicenter, Bridgewater, MA; and the Bermuda Triangle). The area is a hotbed of reported ghosts and UFO sightings as well as other bizarre forms of paranormal activity, like the pukwudgies, tiny creatures who supposedly live in the woods and play tricks on hikers.
Although I used to be wary of venturing to the Bridgewater Triangle and bumping into a ghost, learning about how vibroacoustic effects can cause seemingly supernatural noises has me ready to explore the area.
Here, we discuss different quantities for gauging the performance of mufflers. One important parameter is the thickness of the muffler’s casing. By performing acoustic-structure interaction simulations, we can see how shell thickness affects muffler performance.
Using the same model setup that was defined in the preceding blog post, we perform a parameterized study to observe the effect of varying shell thickness on the muffler. We start at a base thickness of 1 mm, which is the original shell thickness used in the previous studies. Then, we halve and double the base thickness, giving cases of 0.5 mm and 2 mm.
The acoustic domain (see below) surrounding the muffler model provides a good means to assess the sound emission into the atmosphere for the different shell thicknesses.
Figure 1. Cross-sectional and isometric views of the muffler model and surrounding acoustic domain.
The transmission loss (TL) from the muffler inlet to the muffler outlet, as defined in the original blog post, is

TL = 10 log_{10}(P_{in}/P_{out})
where P_{in} is the acoustic power at the muffler inlet and P_{out} is the acoustic power at the muffler outlet. The variables P_{in} and P_{out} are dependent on the pressure at the inlet, p_{in}, and outlet, p_{out}, respectively.
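As a minimal illustration of this definition, the TL in decibels can be computed from the two powers directly. The power values below are arbitrary placeholders, not results from the muffler model:

```python
import math

def transmission_loss(p_in, p_out):
    """Transmission loss in dB between inlet and outlet acoustic power [W]."""
    return 10 * math.log10(p_in / p_out)

# Hypothetical powers: 1 mW in, 10 uW out -> a factor of 100 in power
tl = transmission_loss(1e-3, 1e-5)
print(f"TL = {tl:.1f} dB")
```

A power ratio of 100 corresponds to 20 dB of transmission loss.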
The TL from the inlet to the outlet is computed in this study for the simulation cases with shell thicknesses of 0.5 mm and 2 mm. These TL curves are compared in Figure 2 below, along with the case for the 1-mm shell thickness.
Figure 2. Transmission loss from the muffler inlet to the outlet for shell thickness, t, of 0.5 mm, 1 mm, and 2 mm.
The shell mode noted at 172 Hz for a shell thickness of 1 mm (from the previous studies) is found to occur at 180 Hz for the model with a shell thickness of 0.5 mm. In the vicinity of 180 Hz, the peak and the dip in the curve for the 0.5-mm thickness are far more pronounced than those for the 1-mm thickness at this eigenmode.
For the 0.5-mm case, the difference in the TL at this mode from the peak to the dip is approximately 18 dB, with a frequency spread of 8 Hz and the dip occurring at 188 Hz. This is expected, as the pressure pulses exciting the shell plates have a greater impact on plates of smaller thickness. Accordingly, for the largest computed shell thickness of 2 mm, the curve is smooth in the region where this spike occurs for the 0.5-mm and 1-mm cases.
The behavior of the TL for the 2-mm case is close to that of a pure pressure acoustics simulation, where the boundaries of the muffler are defined as sound hard boundaries. Similarly, the shell mode noted at 342 Hz for the 1-mm shell thickness case is present at 338 Hz for the 0.5-mm shell thickness case, but it is not visible in the TL curve for the 2-mm shell thickness case.
The resonating acoustic mode at 386 Hz is present for all three cases, as indicated by a sharp dip in all three curves at this frequency.
The next notable peak present in all three curves lies between 610 Hz and 640 Hz. As the shell thickness increases, the position of the peak shifts to the right: The peaks occur at 614 Hz, 632 Hz, and 638 Hz for shell thicknesses of 0.5 mm, 1 mm, and 2 mm, respectively. This is consistent with the muffler structure becoming stiffer with increasing thickness, which raises the frequency of this eigenmode.
Despite the right-shift in frequency for increasing thickness, the amplitude of the peak is greater for 1-mm thickness than 2-mm thickness. It would be expected that a structure with a larger shell thickness would produce a better TL than a structure with a smaller thickness. However, an acoustic eigenfrequency noted in the pressure acoustics case from the original blog post is present in the vicinity of the eigenmode for the 1-mm shell thickness case. This acoustic mode could be in phase with the shell eigenmode for the 1-mm shell thickness, which in turn results in a greater peak in TL at this mode than for the other shell thickness cases.
The final peak observed in all three cases within the computed frequency range occurs in the vicinity of 700 Hz. The frequency spacing of this mode across the different shell thicknesses is minute in comparison to that of the preceding eigenmode. The peaks occur at 696 Hz, 702 Hz, and 700 Hz in the TL curves for the 0.5-mm, 1-mm, and 2-mm shell thicknesses, respectively. It can therefore be deduced that the frequency of this eigenmode is insensitive to the variation in shell thickness. It is likely an acoustic eigenmode, in which the stiffness of the shell does not affect the air contained inside the muffler.
The transmission loss from the muffler inlet to the acoustic domain boundary was defined in the previous blog post and is also computed in this study for the muffler model with shell thicknesses of 0.5 mm and 2 mm (as plotted in the figure below). The two curves (solid orange and solid gray) are plotted, along with the TL curves from the previous graph, which account for the shell thicknesses of 0.5 mm and 2 mm (the dashed orange line and dashed gray line).
Figure 3. Transmission loss from the inlet to the outlet compared to the transmission loss from the inlet to the acoustic domain boundary, for shell thicknesses (t) of 0.5 mm and 2 mm.
It is evident that the solid gray curve is smoother than the solid orange curve, with fewer and less sharp peaks and dips. Further, the solid gray curve has a higher TL than the orange curve for most of the computed frequency range. These differences in the solid curves are expected, considering that the muffler shell is stiffer with a thickness of 2 mm as compared with 0.5 mm. A stiffer shell makes the structural response to the interaction with the air volume in the muffler less pronounced, causing less shell noise to be emitted into the surrounding atmosphere.
Comparisons can also be made between the curves for the two types of TL for each thickness. For most of the computed frequency range, the two orange curves (0.5-mm shell) coincide far more closely than the two gray curves (2-mm shell), which sit farther apart from each other. For the orange curves, the TL from the muffler inlet to the acoustic domain boundary drops below the TL from the inlet to the outlet in the vicinity of the 180-Hz shell eigenmode. This indicates that, at this mode, more sound is emitted into the surrounding atmosphere than passes through the muffler outlet.
A more acoustics-specific comparison of the transmission loss from the muffler inlet to the acoustic domain boundary for the three shell thicknesses is provided in the plot below, by arranging the data in 1/3 octave bands.
Figure 4. Transmission loss from the muffler inlet to the acoustic boundary, plotted in 1/3 octave bands for the three thicknesses.
Representing the transmission loss for different shell thicknesses by binning the TL in fractional octaves is akin to what is done with empirical data obtained from acoustic measurements to meet established standards. It can be clearly noted from the graph above that the muffler with a shell thickness of 2 mm performs best in most bands, except for the last two bands. This can be validated by looking at the solid gray curve in the line graph discussed at the beginning of this section, where it starts to dip after 600 Hz.
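For readers who want to reproduce this kind of plot from exported narrowband data, the sketch below bins a TL curve into base-2 1/3-octave bands (standards such as IEC 61260 also define base-10 band centers, and the exact binning used for Figure 4 may differ). The narrowband data here are placeholders, not model results:

```python
import math

def third_octave_bands(f_min, f_max):
    """Base-2 1/3-octave band center frequencies covering [f_min, f_max]."""
    bands = []
    n = -30  # start well below audio range; centers are 1000 * 2**(n/3)
    while True:
        fc = 1000.0 * 2 ** (n / 3)
        if fc > f_max * 2 ** (1 / 6):
            break
        if fc >= f_min / 2 ** (1 / 6):
            bands.append(fc)
        n += 1
    return bands

def bin_in_bands(freqs, values, centers):
    """Average `values` over each band [fc/2^(1/6), fc*2^(1/6));
    empty bands yield NaN."""
    binned = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        in_band = [v for f, v in zip(freqs, values) if lo <= f < hi]
        binned.append(sum(in_band) / len(in_band) if in_band else float("nan"))
    return binned

# Hypothetical narrowband TL data, 10-750 Hz in 10 Hz steps
freqs = list(range(10, 751, 10))
tl = [20.0 + 0.01 * f for f in freqs]  # placeholder TL curve
centers = third_octave_bands(10, 750)
banded = bin_in_bands(freqs, tl, centers)
print(f"{len(centers)} bands, from {centers[0]:.1f} Hz to {centers[-1]:.1f} Hz")
```

Energy-based averaging (summing powers rather than dB values within a band) is another common choice; the arithmetic mean here keeps the sketch simple.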
Aside from the transmission loss, an additional measure for gauging muffler performance is the muffler efficiency, which is defined as

η = 1 - P_{out}/P_{in}
where P_{in} and P_{out} are the acoustic power at the muffler inlet and outlet, respectively.
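Assuming the efficiency is the fraction of incident acoustic power that does not reach the outlet, η = 1 - P_out/P_in, it relates directly to the transmission loss. A small sketch under that assumption (the TL values are illustrative, not model results):

```python
def muffler_efficiency(tl_db):
    """Muffler efficiency from transmission loss, assuming
    eta = 1 - P_out/P_in = 1 - 10**(-TL/10)."""
    return 1.0 - 10.0 ** (-tl_db / 10.0)

for tl in (4.0, 20.0, 40.0):
    print(f"TL = {tl:4.1f} dB -> efficiency = {muffler_efficiency(tl):.4f}")
```

Note how quickly the efficiency saturates: a TL of only 4 dB already gives roughly 60% efficiency, which matches the low-frequency behavior described below.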
The muffler efficiency for the three shell thicknesses is plotted below, and it can be seen that the efficiency for each case is quite similar over the computed frequency range.
Figure 5. Muffler efficiency for the muffler inlet to the outlet for the different shell thicknesses.
The muffler performs at almost 100% efficiency from approximately 200 Hz onward for all three cases. The only exception in all cases is at the resonating acoustic mode of 386 Hz, when a sharp dip is observed. The muffler efficiency for computed frequencies below 85 Hz is less than 60%, and the poor performance of the muffler in the low-frequency range is also evident in the TL from the inlet to the outlet, shown at the beginning of the blog post.
A third means to quantify muffler performance is the normalized radiated sound power at the acoustic domain boundary, which is defined as

P*_{out_domain} = P_{out_domain}/P_{in}
where P_{out_domain} is the acoustic power at the acoustic domain boundary. This variable is dependent on p_{out_domain}, the pressure at the acoustic domain boundary.
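Interpreting P*_{out_domain} as the radiated power normalized by the incident power, it is simply the TL to the domain boundary mapped out of decibels. A brief sketch with an illustrative TL value:

```python
def normalized_radiated_power(tl_domain_db):
    """Fraction of incident power radiated through the domain boundary,
    assuming P* = P_out_domain / P_in = 10**(-TL/10)."""
    return 10.0 ** (-tl_domain_db / 10.0)

# A TL of ~13 dB to the domain boundary corresponds to ~5% radiated power
print(f"P* = {normalized_radiated_power(13.0):.3f}")
```

This conversion is useful for reading the "more than 5% of the incident power" figures quoted below directly off a TL curve.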
The computed P*_{out_domain} for each of the three cases with different shell thicknesses is plotted in Figure 6 below.
Figure 6. Normalized radiated sound power at the acoustic domain boundary for the shell thicknesses.
As expected, for most of the computed frequency range below 600 Hz, the muffler with the 0.5-mm shell thickness has the highest sound radiation into the acoustic domain and the muffler with the 2-mm thickness has the lowest. The sharp drop in the solid orange curve at 188 Hz in Figure 2 appears as a large spike in the solid orange curve in Figure 6 (above). Therefore, the muffler with a shell thickness of 0.5 mm radiates more than 5% of the incident power into the atmosphere at the eigenmode occurring between 180 Hz and 188 Hz.
Although other peaks are present in the three curves, particularly at frequencies close to eigenmodes, these peaks are minute in comparison to the peak at 188 Hz for the 0.5-mm case, with less than 1% of incident power radiated into the surrounding domain.
The sound pressure level at the peak of normalized radiated sound power for each of the three shell thicknesses is plotted (as isosurfaces) below.
Figure 7. Sound pressure level at 188 Hz, t = 0.5 mm.
Figure 8. Sound pressure level at 342 Hz, t = 1 mm.
Figure 9. Sound pressure level at 634 Hz, t = 2 mm.
It has been shown that the shell thickness drastically influences the performance of a muffler. Naturally, the greater the thickness, the stiffer the structure. Thus, with increasing thickness, the transmission loss curve approaches that of a pure pressure acoustics analysis with sound hard boundaries (compare Figure 2 to the results from the previous blog post).
Further, the peak sound power radiating into the surrounding air is reduced from more than 5% to less than 1%, merely by increasing the shell thickness from 0.5 mm to 1 mm.
In addition to the reduction of the maximum radiated sound power, it is interesting to note the transmission loss curves in Figure 3. The results exemplify the complexity of the stated problem: The location of greater transmission loss is not constant, but rather a function of the frequency and shell thickness. For example, where the two 0.5-mm curves intersect, the (total) transmission loss into the surrounding air drops below that at the muffler outlet. As we might expect, the greatest difference in transmission loss for increasing shell thickness generally occurs in the surrounding air. However, at certain frequencies (around 630 Hz), the transmission loss for the 2-mm shell thickness analysis even drops below that of the corresponding 0.5-mm case.
In conclusion, the COMSOL Multiphysics® software provides a remarkably simple way to investigate the influence of the interaction between structural elements and gases or fluids. This enables acoustics engineers to easily determine suitable materials and/or structural parameters to obtain the desired behavior of the component. Common applications include analyses with regards to vibration, fatigue properties, and component noise evaluation.
Linus Fagerberg of Lightness by Design is an experienced consultant working with simulation-supported product development. He holds a PhD from KTH Royal Institute of Technology and is specialized in the structural mechanics of composites, stability, and optimization. Linus believes that numerical simulation is a great tool to consistently deliver high-quality products, improve performance, and mitigate risks. Lightness by Design is a COMSOL Certified Consultant based in Stockholm, Sweden.
In recent years, the European Union has introduced stricter noise emission limits for road vehicles. For those designing mufflers, these limits make it important to create more efficient ways of developing and assessing the performance of their designs. At Lightness by Design, we’ve developed a novel approach that accomplishes this goal.
A 2016 blog post illustrates the impact of including structural effects in a pure acoustics model through the use of an automotive muffler geometry in the COMSOL Multiphysics® software. The effect on the predicted transmission loss for the muffler modeled with pure pressure acoustics was compared with the multiphysics model.
Figure 1. The muffler model contained in the acoustic domain with a surrounding perfectly matched layer.
Lightness by Design has extended the acoustic-structure coupling for the muffler model to evaluate the sound leakage from the muffler into the surrounding atmosphere. To facilitate this evaluation, a cylinder-shaped acoustic domain with a radius of 0.35 m and length of 1.4 m is added to encompass the muffler, with the domain’s center placed at the center of the muffler (shown in Figure 1). The external domain layer, with a thickness of 50 mm, enables the definition of a perfectly matched layer (PML), which represents a nonreflecting condition.
The muffler geometry from the previous study is retained, and the material properties and boundary conditions applied to it are kept as well. Therefore, the surfaces of the extruded inlet and outlet pipe sections of the muffler going through the acoustic domain are modeled as sound hard boundaries, as indicated in the image below. A Plane Wave Radiation boundary condition is applied at both ends of the pipe and a 1 Pa incident plane wave is applied at the inlet face of the muffler. For a visual description, see Figure 2.
Figure 2. The muffler model showing the applied boundary conditions.
The acoustic domain is modeled with the acoustic properties of air at an ambient temperature of 20°C. These properties are identical to the acoustic properties of air inside the muffler volume.
The Plane Wave Radiation condition lets outgoing pressure waves leave the modeling domain with minimal reflection, thus replicating an unbounded or “infinite” pipe. The same mesh size settings, as defined and applied to the muffler geometry in the previous study, are applied to the muffler and acoustic domains of interest here. The external PML region is swept with six elements through the thickness. The acoustic-shell multiphysics coupling has a similar setup to the one in the prior study.
The transmission loss is a good quantity by which to gauge the performance of a muffler. The transmission loss, TL, from the muffler inlet to the muffler outlet was defined in the previous study as

TL = 10 log_{10}(P_{in}/P_{out})
where P_{in} is the acoustic power at the muffler inlet and P_{out} is the acoustic power at the muffler outlet.
For the model at hand, not only is the transmission loss from the inlet of the muffler to its outlet of interest, but the transmission loss from the inlet to the boundary of the acoustic domain is also important to evaluate (Figure 3 indicates these boundaries). The latter provides a means to numerically evaluate the sound leakage from the muffler into the surrounding atmosphere. The radiated power is found by integrating the acoustic intensity on the exterior physical surface (on the inside of the PML).
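Conceptually, this surface integration can be sketched as a discrete sum of intensity times area over surface patches. The snippet below uses the plane-wave intensity estimate I = |p|²/(2ρc); COMSOL evaluates the true normal component of the intensity vector, so this is only a rough stand-in, and the pressure amplitude and patch areas are made-up values:

```python
import math

RHO = 1.2  # air density [kg/m^3] at ~20 degrees C
C = 343.0  # speed of sound in air [m/s]

def radiated_power(pressure_amplitudes, areas):
    """Approximate radiated power by summing intensity * area over
    surface patches, using the plane-wave estimate I = |p|^2 / (2*rho*c)."""
    return sum(abs(p) ** 2 / (2 * RHO * C) * dA
               for p, dA in zip(pressure_amplitudes, areas))

# Hypothetical: a uniform 0.01 Pa amplitude over a cylinder of radius
# 0.35 m and length 1.4 m (total area = 2*pi*r*l + 2*pi*r^2)
r, l = 0.35, 1.4
area = 2 * math.pi * r * l + 2 * math.pi * r ** 2
power = radiated_power([0.01], [area])
print(f"Radiated power ~ {power:.2e} W")
```

In the real model, the pressure varies over the surface, so the sum runs over many patches with the local complex pressure amplitude on each.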
Figure 3. The muffler model and acoustic domain. The boundaries included in the transmission loss calculation are shown.
A harmonic analysis for a frequency range of 10 to 750 Hz and a shell thickness of 1 mm has been conducted for the model at hand. Figure 4, below, contains the transmission loss curves from the previous study (dotted orange line and dashed gray line) along with the transmission loss curve computed in this study (solid orange line).
Figure 4. The transmission loss from the muffler inlet to the outlet for a shell thickness of 1 mm.
As expected, the dashed gray curve coincides well with the solid orange curve. The small difference derives from air being present on both sides of the shell. The transmission loss is calculated from the muffler inlet to the muffler outlet, and the only difference between the two models is the inclusion of the acoustic domain. This indicates that the coupling to the surrounding air domain is essentially one way: The exterior air load on the muffler does not significantly influence the transmission loss. If the outside acoustic domain were stiffer or heavier, its influence on the transmission loss would be more significant. Figure 5 shows the two types of transmission loss computed in this study.
Figure 5. The transmission loss from the muffler inlet to the outlet compared with the transmission loss from the muffler inlet to the acoustic domain boundary.
It is interesting to note that the transmission loss curve from the muffler inlet to the acoustic boundary (solid gray curve) peaks at the lowest computed frequency of 10 Hz and remains high throughout the low-frequency range below 100 Hz. This implies that the sound leakage into the surrounding domain is lower in this range than in the rest of the computed frequency range.
However, the solid orange curve in Figure 5 shows that the muffler performs poorly below 100 Hz, with a very low transmission loss relative to the rest of the calculated range. Together, these observations indicate that the sound passes through the muffler with little attenuation and without exciting the muffler shell excessively, resulting in very low sound emission into the surrounding domain.
Sharp dips in the solid gray curve are noted at 172 Hz and 342 Hz, where shell eigenmodes were found in the previous study. More sound is therefore transmitted into the surrounding domain at these frequencies, especially at the 342 Hz mode, where the solid gray curve shows a lower transmission loss than the solid orange curve. In other words, more sound is emitted into the surrounding acoustic domain than passes through the muffler outlet.
The third notable dip in the solid gray curve is at 386 Hz, where an acoustic eigenfrequency was found in the previous study. Interestingly, at 386 Hz there is almost no transmission loss from the muffler inlet to the outlet: The orange curve dips close to the y = 0 axis, while the transmission loss in the gray curve at 386 Hz remains above its value at 342 Hz. This implies that the acoustic mode at 386 Hz is a resonating mode, with the air volume moving back and forth in the muffler cavity without significantly exciting the muffler shell or causing high sound emission into the surroundings.
To obtain better insight into how the eigenmodes at the two low dips in the solid gray curve (at 172 Hz and 386 Hz) affect the sound radiated from the muffler, isosurface plots of the sound pressure level (SPL) are created for half of the acoustic domain, displayed below in Figure 6.
Figure 6. Surface and volume plots for the computed model at 172 Hz (left) and 386 Hz (right).
The figure on the left, for the shell mode at 172 Hz, shows the total displacement of the muffler shell along with isosurfaces of the SPL in the acoustic domain. The maximum shell displacement at 172 Hz occurs at both short ends of the muffler cavity, which creates an almost symmetric SPL distribution about the z-axis. The figure on the right shows isosurfaces of the SPL in the acoustic domain together with the SPL of the air contained inside the muffler for the resonating mode at 386 Hz. The air in the muffler volume clearly moves back and forth, creating a standing wave. This standing wave creates an uneven sound emission into the acoustic domain about the z-axis, due to the higher SPL at the right end of the muffler.
An eigenfrequency study only indicates the frequency at which an eigenmode exists. Determining the response of the structure at a specific eigenmode, the behavior of the air in the muffler volume at an eigenfrequency of interest, or the interaction of the acoustic and shell modes requires a harmonic analysis, which in turn yields a transmission loss curve. The transmission loss from the muffler inlet to the outlet obtained in this study and the prior one fulfills this need. Furthermore, the newly defined transmission loss from the muffler inlet to the acoustic domain boundary improves one's understanding of the muffler's performance by predicting the sound that is leaked into the surrounding atmosphere.
The work presented here advances the investigation from the previous blog post by coupling the muffler model to a surrounding acoustic domain. It also introduces a new quantity for assessing the performance of the muffler, namely the transmission loss from the muffler inlet to the surrounding atmosphere. The technique described here enables muffler designers to better predict external noise generation and thereby meet mandatory noise emission standards.
In an upcoming blog post, this model will be used to evaluate how varying the shell thickness affects the performance of the muffler. Stay tuned!
Note that shell stiffening can be analyzed in other ways than by simply varying the shell thickness. Another approach is to change the topology of the shell by embossing it and then compare the embossed shell's performance to that of the original muffler geometry.
Linus Fagerberg from Lightness by Design is an experienced consultant working with simulation-supported product development. He holds a PhD from KTH Royal Institute of Technology and is specialized in the structural mechanics of composites, stability, and optimization. Linus believes that numerical simulation is a great tool to consistently deliver high-quality products, improve performance, and mitigate risks. Lightness by Design is a COMSOL Certified Consultant based in Stockholm, Sweden.
Ultrasound focusing is widely used in various industrial applications, such as nondestructive testing (NDT) and medical imaging. For clinical applications, high-intensity focused ultrasound (HIFU) is a specific aspect of this technology where most of the power provided by the probe is carried to a targeted zone to coagulate biological tissues. This blog post discusses ultrasound focusing simulation.
Ultrasound has a great advantage: It can reach a volume inside a piece of metal, a human organ, or biological tissue without the need to cut a path through the material between the source and the target. Unlike a surgeon's scalpel, ultrasound leaves no scarring on the patient's skin and can still treat the targeted zone with good accuracy, while limiting the risk of damage to the surrounding healthy tissue. Focused ultrasound is used, or has the potential to be used, to treat diseases like prostate and breast cancer, hypertension, and even glaucoma.
There are several ways to focus ultrasound using different transducer designs, and the COMSOL Multiphysics® software is a very good tool to simulate and optimize them. Designing a transducer that will effectively produce an ultrasound field that reaches a targeted zone can be a difficult task. It depends on the frequency and power of the emitted signal; the attenuation and absorption of the medium in which the ultrasound propagates; and, of course, the position and dimensions of the transducer itself.
Figure 1: Schematic of the acoustic field generated by an ultrasound transducer.
Here are a few important aspects of an ultrasound transducer used for clinical applications (see figure above):
Two alternatives can be used to focus the signal from the transducer: shaping the transducer itself so that it is geometrically focused, or driving an array of piezoelectric elements with individual phase delays (a phased array).
Figure 2: Schematic of an ultrasonic probe with an array of piezo transducers (phased array) used to focus the acoustic signal. The transducer consists of a backing material, the piezo elements, and a matching layer to the tested sample (here tissue).
COMSOL Multiphysics has been used to look at these two alternatives. Besides the ability to model ultrasound propagation, it is also very interesting to couple this simulation with a heat transfer simulation and even a damage law for biological tissues. In this way, we can quickly visualize whether the focusing effect treats the right amount of tissue and check the location and volume of coagulation, all within the same modeling interface.
Ultrasound can be focused directly by the way the emitting transducer is shaped. A tutorial available with the Acoustics Module provides a very good example of this phenomenon coupled with heat transfer. A few assumptions are made in the acoustics simulation, such as neglecting nonlinear effects and shear waves, but it still provides very valuable information about the sensitivity of the focal zone to the probe parameters.
This tutorial can be adapted to most device configurations and used as a starting point for simulations. For instance, before running the heat transfer part of the simulation, we can check how the frequency modifies the size of the focal zone, and hence the energy that is delivered to this zone. In the example below, three frequencies are computed: 0.5 MHz, 0.7 MHz, and 1 MHz. Figures 3–5 show the shape of the ultrasound pressure wave, the size of the focal zone with the max(SPL) – 6 dB criterion, and the corresponding energy used to heat and coagulate the tissue, respectively.
Figure 3: Simulated ultrasound waves (the blue and red signal pattern) are emitted and focused by a curved transducer (the surface with orange arrows at the bottom). They travel through the tissue and produce a peak of intensity in the focal zone, which results in an elevation of temperature due to the absorbed energy.
When the transducer diameter and curvature are kept constant, increasing the frequency will reduce the size of the focal zone. We clearly see the smaller wavelength at a higher frequency, as well as its effect on focusing.
Figure 4: The size of the focal zone is visualized with the max(SPL) – 6 dB criterion. It confirms what can be seen from the pressure plot above. Note that the dB scale is not the same for the three visualized frequencies.
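Applied to sampled field data, the max(SPL) – 6 dB criterion reduces to a simple threshold test. The sketch below uses a hypothetical 1D SPL profile along the beam axis for illustration; in the actual model, the criterion is evaluated on the computed 3D field.

```python
# Hypothetical SPL samples along the beam axis (dB), for illustration only
spl = [120.0, 131.0, 140.0, 146.0, 150.0, 147.0, 141.0, 132.0, 121.0]

# A point belongs to the focal zone if its SPL is within 6 dB of the maximum
threshold = max(spl) - 6.0

focal_zone = [i for i, level in enumerate(spl) if level >= threshold]
print(focal_zone)  # -> [3, 4, 5]
```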
Figure 5: The acoustic intensity is plotted for all three frequencies on the same color scale in W/cm^{2}. The maximum deposited intensity is more than 10 times higher at 1 MHz than at 0.5 MHz, with all other parameters kept equal. Although the focal zone shrinks when the frequency increases, more energy is transmitted to this zone, allowing for higher temperatures in the tissue.
Another way to focus ultrasound is to use an array of piezoelectric elements and control the voltage input to each element with a phase delay. This phase delay must be calculated for each array configuration, since it depends on the frequency; the size and position of the piezoelectric elements; and, of course, the focal distance.
For a linear array of elements, a quick method is to consider the distance d_{i} between the center of each element i and the focal point and to apply the phase as:
\varphi_i = \frac{2\pi f}{c} \left( \max_j d_j - d_i \right) \qquad (3)

where f is the excitation frequency and c is the speed of sound in the medium.
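This phase law can be sketched numerically in a few lines of Python. The snippet is an illustration only: the element pitch, focal depth, frequency, and sound speed are assumed values, not parameters taken from the model in this post. The phase is referenced to the farthest (outermost) element so that the side elements get zero delay.

```python
import math

# Assumed illustrative parameters (not taken from the model in this post)
c = 1540.0            # speed of sound in tissue, m/s
f = 1.5e6             # drive frequency, Hz
pitch = 0.5e-3        # element center-to-center spacing, m
n = 16                # number of array elements
focal_depth = 30e-3   # focal point depth on the array axis, m

# Element centers along the array, symmetric about x = 0
xs = [(i - (n - 1) / 2.0) * pitch for i in range(n)]

# Distance d_i from each element center to the focal point
d = [math.hypot(x, focal_depth) for x in xs]

# Phase of each element relative to the farthest one:
# the outermost elements get 0 and the center elements the largest phase
d_max = max(d)
phases_deg = [math.degrees(2.0 * math.pi * f * (d_max - di) / c) for di in d]

print([round(p, 1) for p in phases_deg])
```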
To illustrate this, let us build the geometry of a 16-element array probe and use the Acoustics Module and Heat Transfer Module with COMSOL Multiphysics to couple several physics interfaces.
The geometry is shown in a 2D cross section in Figure 6 below, with matching and backing layers in front of and behind the piezoelectric elements, respectively. The backing layer is used to prevent excessive vibrations. The matching layer is an intermediate material between the piezoelectric material and the biological tissue that is necessary for ultrasonic waves to efficiently enter the tissue. It has the same function as the gel used by a doctor between the probe and the skin for an echography.
Figure 6 also shows the phase delay calculated from (3), plotted as a color and deformation plot on each element, going from 0 on the side elements to 434° on the center elements.
When the voltage is applied on these elements, the piezoelectric material vibrates and creates an ultrasound wave that will focus at the desired focal distance due to the phase delay.
As with the geometrically focused probe, this simulation can then be coupled to the heat transfer and damage law simulations to assess the temperature elevation and the coagulated volume in the biological tissue. The heat source from the acoustic signal, given in the plane-wave limit, is calculated as:
Q = 2 \alpha_{abs} I_{ac} \qquad (4)
where α_{abs} is the acoustic absorption coefficient of the tissue and I_{ac} is the acoustic intensity magnitude.
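As a sanity check of the magnitudes involved, equation (4) can be evaluated directly. The absorption coefficient below is a rough assumed value for soft tissue, not a parameter from the model.

```python
def heat_source(alpha_abs, i_ac):
    """Plane-wave-limit heat source Q = 2 * alpha_abs * I_ac, in W/m^3.

    alpha_abs: acoustic absorption coefficient of the tissue, Np/m
    i_ac: acoustic intensity magnitude, W/m^2
    """
    return 2.0 * alpha_abs * i_ac

# Assumed values: alpha_abs ~ 8.5 Np/m for soft tissue, and an intensity of
# 100 W/cm^2 = 1e6 W/m^2 in the focal zone
print(heat_source(8.5, 1.0e6))  # -> 17000000.0 (i.e., 1.7e7 W/m^3)
```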
The absorption of energy, represented by α_{abs}, varies significantly between tissue types. It is therefore also important to check whether the calculated focused signal damages other tissues between the array probe and the focal zone. If those tissues are not supposed to be damaged, the focus should be modified. In this case, the simulation allows us to quickly modify the design and operation parameters of the array probe and to validate or discard an array configuration.
Figures 7 and 8 show the shape of the ultrasound pressure wave and the corresponding energy that is focused, respectively.
Figure 6: The phase delay, calculated as a function of the frequency, the focal distance, and the size and position of the transducer elements.
Figure 7: The wave pattern at a frequency of 1.5 MHz. One can decide to modify the geometric design, the phase delay, or even the device frequency if the ultrasound is not focused enough.
Figure 8: The acoustic intensity is plotted in W/cm^{2}. Here, the 16 piezoelectric transducer elements provide a low intensity that is spread out over a few millimeters around the focal zone. At this stage, the heat transfer and damage simulation could be run to decide whether the temperature elevation due to the nonnegligible intensity between the focal zone and the transducers (a few W/cm^{2}) is too high or whether it could be tolerated during the medical treatment.
Thomas Clavet is a mechanical engineer from Arts et Métiers Paris Tech and KTH University in Stockholm. He has previously worked as a stress engineer in the nuclear industry and as an application engineer for COMSOL Ltd. in the U.K. and Ireland, where he met and trained several COMSOL Multiphysics users in the fields of fluid flow, heat transfer, acoustics, and structural mechanics simulations.
Thomas founded EMC3 Consulting in 2014, in the south of France, to provide his expertise in the use of COMSOL Multiphysics as a COMSOL Certified Consultant and in the fields of CFD, heat transfer, acoustics, and structural mechanics simulations.
Learn more about how EMC3 Consulting is helping companies to design better products with COMSOL Multiphysics by visiting www.emc3-consulting.com.