A transducer converts a signal of one energy form (input signal) to a signal of another energy form (output signal). For a loudspeaker, which is an electroacoustic transducer, the input signal is the electric voltage that, in the case of a moving coil loudspeaker, drives its voice coil. The output signal is the acoustic pressure that the human ear perceives as sound. Distortion occurs when the output signal differs quantitatively and/or qualitatively from the input signal.
Schematic representation of a moving coil loudspeaker.
The distortion can be divided into two principal types: linear and nonlinear.
The term linear distortion, which might sound rather confusing, implies that the output signal has the same frequency content as the input signal. In this distortion, it is the amplitude and/or phase of the output signal that is distorted. In contrast, the term nonlinear distortion suggests that the output signal contains frequency components that are absent in the input signal. This means that the energy is transferred from one frequency at the input to several frequencies at the output.
Input and output signals in linear and nonlinear transducers.
Let an input sinusoidal signal, u(t) = U_0 sin(2πf_0t), be applied to a transducer with a nonlinear transfer function. The frequency content of the output signal will then have more than one frequency. Apart from the fundamental portion, which corresponds to the frequency f_0, there will be a distorted portion. Its spectrum usually (but not always) consists of the frequencies nf_0, which are multiples of the fundamental frequency f_0, in which n = 2, 3, …. These frequencies, called overtones, are present in the sound, and it is the overtones that make musical instruments sound different: A note played on a violin sounds different from the same note played on a guitar. The same happens with the sound emitted from a loudspeaker.
The distortion is a relative quantity that can be described by the value of the total harmonic distortion (THD). This value is calculated as the ratio of the amplitude of the distorted portion of the signal to that of the fundamental part:

THD = √(U_2^{2} + U_3^{2} + U_4^{2} + …) / U_1 × 100%,

where U_n is the amplitude of the n^{th} harmonic of the output signal.
The profile of a signal with a higher THD visibly differs from the pure sinusoidal.
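For illustration, the THD ratio defined above can be evaluated numerically from a signal's spectrum. The following is a minimal sketch using a synthetic signal with an arbitrary 5% second harmonic, not data from the loudspeaker model:

```python
import numpy as np

fs = 48_000          # sampling rate (Hz)
f0 = 440.0           # fundamental frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

# "Output" of a slightly nonlinear transducer: fundamental + 5% second harmonic
signal = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

fundamental = amplitude_at(f0)
harmonics = [amplitude_at(n * f0) for n in range(2, 10)]

# Ratio of the distorted portion to the fundamental
thd = np.sqrt(sum(h**2 for h in harmonics)) / fundamental
print(f"THD = {thd * 100:.2f}%")   # THD = 5.00%
```

Because the test tone spans an integer number of cycles here, the harmonics land exactly on FFT bins; a real measurement would need windowing and peak interpolation.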
Unfortunately, the value of the THD of the output signal itself might not be enough to judge the quality of the loudspeaker. A signal with a lower THD may sound worse than a signal with a higher THD. The reason is that the human ear perceives various overtones differently.
The distortion can be represented as a set of individual even-order (2f_0, 4f_0, …) and odd-order (3f_0, 5f_0, …) components. The former are due to asymmetric nonlinearities of the transducer, while the latter are due to symmetric nonlinearities. Notably, sound containing even-order harmonics is perceived as “sweet” and “warm”. This can be explained by the fact that there are octave multiples of the fundamental frequency among them. The odd-order harmonics sound “harsh” and “gritty”. That is quite alright for a guitar distortion pedal, but not for a loudspeaker. What matters is, of course, not just the presence of those harmonics, but rather their level in the output signal.
Another interesting effect, called intermodulation, occurs when the input signal contains more than one frequency component. The corresponding output signals start to interact with each other, producing frequency components absent in the input signal. In practice, if a two-tone sine wave such as u(t) = U_1 sin(2πf_1t) + U_2 sin(2πf_2t) (in which f_1 < f_2) is applied to the input, the system nonlinearities will result in the modulation of the higher-frequency component by the lower one. That is, the frequencies f_2 ± f_1, f_2 ± 2f_1, and so on will appear in the frequency spectrum of the output signal. The quantitative measure of the intermodulation that corresponds to the frequencies f_2 ± nf_1, in which n = 1, 2, …, is the n^{th}-order intermodulation distortion (IMD) coefficient. It is defined as:

IMD_n = [U(f_2 + nf_1) + U(f_2 − nf_1)] / U(f_2) × 100%,

where U(f) denotes the amplitude of the output spectrum at the frequency f.
In practice, using an input signal containing three or more frequencies for the IMD analysis is not advisable, as the results become harder to interpret.
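A two-tone IMD measurement of this kind can be sketched in a few lines. The tone frequencies, the amplitudes, and the quadratic "transducer" nonlinearity below are illustrative assumptions, not values from the loudspeaker model:

```python
import numpy as np

fs = 48_000
f1, f2 = 70.0, 1000.0          # low and high tone (f1 < f2), assumed values
t = np.arange(0, 1.0, 1.0 / fs)

# Two-tone input and a mildly nonlinear transfer function y = x + 0.01*x^2,
# which creates sidebands around the high-frequency carrier
x = 4.0 * np.sin(2 * np.pi * f1 * t) + 1.0 * np.sin(2 * np.pi * f2 * t)
y = x + 0.01 * x**2

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1.0 / fs)

def amp(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

# Second-order IMD: sidebands at f2 +/- f1 relative to the carrier at f2
imd2 = (amp(f2 + f1) + amp(f2 - f1)) / amp(f2)
print(f"2nd-order IMD = {imd2 * 100:.1f}%")   # 2nd-order IMD = 8.0%
```

The quadratic term turns the product of the two tones into sum and difference frequencies at 1070 Hz and 930 Hz, exactly the f_2 ± f_1 components described above.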
To summarize, the linear analysis of the loudspeaker, though a powerful tool for a designer, might not be sufficient. The loudspeaker can only be completely described if an additional nonlinear analysis is carried out. The nonlinear analysis should answer questions such as: How much distortion (THD and IMD) appears in the output signal, and which nonlinearities of the driver are responsible for it?
From the simulation point of view, there is both bad and good news. The bad news is that the full nonlinear analysis cannot be performed in the frequency domain. It requires the transient simulation of the loudspeaker, which is more demanding and time consuming than the frequency-domain analysis. The good news is that the effect of certain nonlinearities is only significant at low frequencies.
For example, the voice coil displacement is greater at lower frequencies, and therefore the finite strain theory must be used to model the mechanical parts of the motor. Using the finite strain theory is redundant at higher frequencies, where the infinitesimal strain theory is applicable. The figures below show the results for the transient loudspeaker tutorial, driven by an input voltage of the same amplitude:
Voice coil motion in the air gap of the loudspeaker driver for a single-tone input voltage signal: 70 Hz on the left and 140 Hz on the right.
Acoustic pressure at the listening point for a single-tone input voltage. The blue curves correspond to the nonlinear time-domain analysis, while the red curves correspond to the frequency-domain analysis: 70 Hz on the left and 140 Hz on the right.
The animations above depict the magnetic field in the voice coil gap and the motion of the former and the spider (both in pink) as well as the voice coil (in orange). As expected, the displacements, as well as the spider deformation, are higher at the lower frequency. The spider deformation obeys the large strain theory and therefore the linear approximation is inaccurate in this case. This is confirmed by the output signal plots. These plots depict the acoustic pressure at the listening point located about 14.5 cm in front of the speaker dust cap tip.
The acoustic pressure profile obtained from the nonlinear time-domain modeling for the 70-Hz input signal deviates from the sinusoidal shape to a certain extent, which means that higher-order harmonics start playing a definite role. This is not visible for the input signal at 140 Hz: There’s only a slight difference in the amplitude between the linear frequency-domain and nonlinear time-domain simulation results. The THD value of the output signal drops from 4.3% in the first case to 0.9% in the second case. The plots below show how the harmonics contribute to the sound pressure level (SPL) at the listening point.
Frequency spectra of the SPL at the listening point: single-tone input voltage (70 Hz on the left and 140 Hz on the right).
The IMD analysis of the loudspeaker is carried out in a similar way. What’s different is the input signal applied to the voice coil, which contains two harmonic components:

u(t) = U_1 sin(2πf_1t) + U_2 sin(2πf_2t),

whose amplitudes, U_1 and U_2, usually correlate as U_1 : U_2 = 4 : 1, which corresponds to 12 dB.
The example below studies the IMD of the same test loudspeaker driver. A dual-frequency input voltage with the lower tone at 70 Hz serves as the input signal. The SPL plot on the left shows how the second- and third-order harmonics arising in the low-frequency part of the output signal generate a considerable level of the corresponding order IMDs in the high-frequency part. The IMD level becomes considerably lower if the lower tone frequency is increased to 140 Hz. This is seen in the right plot below.
Frequency spectra of the SPL at the listening point for a two-tone input voltage.
Since transient nonlinear simulations tend to be demanding, the loudspeaker driver model should not be overcomplicated. The 2D axisymmetric formulation is a good starting approach and was used for the tutorial examples in the previous section. After that, it’s important to estimate which effects are more important than others. This will help you set up an adequate multiphysics model of a loudspeaker.
The system nonlinearities include, but are not limited to, the following: the nonlinear magnetization (BH) behavior of the iron parts, the large deformations of the suspension, and the change of the electromagnetic coupling as the voice coil moves in the air gap.
In the language of lumped parameters, this means that they are no longer constants like the Thiele-Small parameters, but functions of the voice coil position, x, and the input voltage, V. The above-mentioned nonlinearities will be reflected in the nonlinear inductance, L(x); compliance, C(x); and dynamic force factor, BL(x). For instance, the tutorial example shows that the nonlinear behavior of the force factor is more distinct at 70 Hz, whereas it is almost flat (that is, closer to linear) at 140 Hz.
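The link between a symmetric nonlinearity and odd-order harmonics can be sketched numerically. The parabolic force-factor profile and all numbers below are assumed for illustration; they are not the tutorial model's actual BL(x) curve:

```python
import numpy as np

fs = 48_000
f0 = 70.0
t = np.arange(0, 1.0, 1.0 / fs)

BL0, xmax = 5.0, 2e-3                  # N/A and m, assumed values
x = 1e-3 * np.sin(2 * np.pi * f0 * t)  # voice coil excursion (m)
i = np.sin(2 * np.pi * f0 * t)         # drive current (A), in phase for simplicity

# Lorentz force with a symmetric (even) force-factor curve BL(x)
force = BL0 * (1 - (x / xmax) ** 2) * i

spectrum = np.abs(np.fft.rfft(force))
freqs = np.fft.rfftfreq(len(force), 1.0 / fs)
amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]

# A symmetric nonlinearity generates odd harmonics (here 3*f0 = 210 Hz),
# consistent with the even/odd discussion above
print(amp(3 * f0) / amp(f0))   # ~0.077
```

Expanding sin^{3} via trigonometric identities shows where the third harmonic comes from: the (x/xmax)^{2} term multiplied by the drive signal contains sin^{3}(ωt) = (3 sin(ωt) − sin(3ωt))/4.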
Nonlinear (left) and almost linear (right) behavior of the dynamic force factor: 70 Hz on the left and 140 Hz on the right.
With the following steps, the discussed nonlinearities can be incorporated into the model. First, the nonlinear magnetic effects are taken into account through the constitutive relation for the corresponding material. In the test example, the BH curve option is chosen for the iron pole piece. Next, the Include geometric nonlinearity option available under the Study Settings section forces the structural parts of the model to obey the finite strain theory. Lastly, the changing geometry is captured by the Moving Mesh feature. Whenever applied, the feature ensures that the mesh element nodes move together with the moving parts of the system. Since the displacements can be quite high, it is likely that the mesh element distortion reaches extreme levels and the numerical model becomes unstable. The Automatic Remeshing option is used as a remedy against highly distorted mesh elements.
All in all, the nonlinear time-domain analysis of the loudspeaker requires much more effort and patience than the linear frequency-domain study. This is especially relevant when the model includes the Moving Mesh feature with the Automatic Remeshing option activated. Investing some time in the geometry and mesh preprocessing will pay off, as the moving mesh is very sensitive to the mesh quality. That is, highly distorted mesh elements and near-zero angles between the geometric entities have to be avoided. A proper choice of the Condition for Remeshing option may also require some trial and error.
The loudspeaker design discussed here might not be considered “good” by most standards. The odd-order harmonics prevail in the frequency content of the output signal.
To perform your own nonlinear distortion analysis of a loudspeaker, click on the button below. This will take you to the Application Gallery, where you can find the MPH-files for this model together with detailed modeling instructions. (Note: You must have a COMSOL Access account and valid software license.)
The original version of this post was written by Alexandra Foley and published on July 15, 2013. It has since been revised with additional details, animations, and an updated version of the featured model.
One of the most common ways we experience the Doppler effect in action is the change in pitch caused by either a sound source moving past a stationary observer or an observer moving past a stationary sound source. When the sound source is stationary, the sound that we hear is at the same pitch as the sound emitted from the source.
Sound waves propagating from a stationary sound source in a uniform flow (this corresponds to the source moving at constant speed).
When the sound source moves, the sound we perceive changes. Going back to the ambulance example, when an ambulance drives past us, the siren sounds different than it would if we were standing right next to it. The pitch of the moving ambulance’s siren differs as it approaches, when it is closest to us, and as it passes us and drives away.
As the ambulance moves toward us, each successive sound wave is emitted from a closer position than that of the previous wave. Because of this change in position, each sound wave takes less time to reach us than the one before. The distance between wave crests (the wavelength) is thereby reduced, meaning that the perceived frequency of the wave increases and the sound is perceived to be of a higher pitch. Conversely, as a sound source moves away, waves are emitted from a source that is farther and farther away. This creates an increased wavelength, a decreased perceived frequency, and a lower pitch.
The situation is mirrored when we drive by the siren of an ambulance that is parked. In this instance, the observers (us) move toward the source (the siren) and the sound waves reach us from closer and closer positions as we move.
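The perceived frequencies in both situations follow from the textbook Doppler formulas. As a quick sketch, assuming a speed of sound of 343 m/s and an arbitrary 440 Hz siren tone:

```python
c = 343.0      # assumed speed of sound in air (m/s)
f_src = 440.0  # emitted frequency (Hz), arbitrary siren tone
v = 50.0       # source or observer speed (m/s)

# Moving source, stationary observer: wavelength is compressed/stretched
f_approach = f_src * c / (c - v)   # higher pitch while approaching
f_recede = f_src * c / (c + v)     # lower pitch while receding

# Stationary source, moving observer: wave crests are met more/less often
f_toward = f_src * (c + v) / c
f_away = f_src * (c - v) / c

print(round(f_approach, 1), round(f_recede, 1))   # 515.1 384.0
```

Note that the two cases are not exactly symmetric: a source moving at v and an observer moving at v give slightly different shifts, although both raise the pitch on approach and lower it on recession.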
Another example of the Doppler effect that is easy to visualize involves waves on a water surface. For instance, a bug rests on the surface of a puddle. When the bug is stationary, it moves its legs to stay afloat. These disturbances propagate outward from the bug in spherical waves.
When the bug starts moving across the water, the water flow around the bug changes. The waves appear closer together when we look at the bug swimming toward us (eek!) and farther apart as it swims away (phew!). The animation above shows the principle for waves (ripples) on water, which move much slower than the speed of sound. The slower speed is why, in this instance, the Doppler effect can be seen with the naked eye.
By using the COMSOL Multiphysics® software and the add-on Acoustics Module, you can simulate the Doppler effect and measure the change in frequency for a source moving at a certain velocity. Let’s assume that the air surrounding the sound source (the ambulance, in this case) is moving with a velocity of V = 50 m/s in the negative z direction. We also assume that the observer of the sound is standing 1 m from the ambulance as it passes by. In the figure below, we can see the change in the pressure as the ambulance approaches and passes an observer.
In this plot, the distance of the ambulance from the observer is represented on the x-axis. The solid line represents the pressure perceived by the observer of an approaching ambulance and the dashed line shows the pressure as the ambulance gets farther away.
From this plot, we can see that the amplitude of the wave (or pressure) drops off at a faster rate when the ambulance is moving away from an observer than when it approaches. In other words, the rate at which the siren becomes quieter as the ambulance recedes is much faster than the rate at which it becomes louder as the ambulance approaches.
To look at this effect in a different way, we can visualize the sound pressure level around the sound source (remember, the source is effectively moving in the positive z direction).
The sound pressure level around the sound source is represented by colors and contour lines. You can see how the outermost contour runs from well inside the physical domain to the perfectly matched layer, showing that the sound is greater below the source than above it.
The Doppler effect is apparent in many other phenomena. One common example is Doppler radar, in which a radar beam is fired at a moving target. The time it takes for the beam to bounce off the target and return to the transmitter can provide information about a target’s velocity. Doppler radar is used by police to identify people driving faster than the speed limit.
The Doppler effect is also used in the field of astronomy to determine the direction and rate at which a star, planet, or galaxy moves relative to Earth. By measuring the shift in the wavelength of electromagnetic waves — called redshift or blueshift — an astronomer can determine a celestial body’s radial velocity. If a distant galaxy’s light is redshifted, it is moving away from Earth — a visible sign that the universe is expanding!
Other applications that take advantage of the Doppler effect include meteorological forecasts, sonar, medical imaging, blood flow measurement, and satellite communication.
Click the button below to try simulating the Doppler effect. With a COMSOL Access account and valid software license, you will be able to download the MPH-file for the example featured in this blog post.
Lennart Moheit, a PhD student at TUM, notes that “acoustics and vibration is a wide scientific field that has a lot of practical applications.” The study of acoustics is important in many areas, such as civil engineering, fluid mechanics, and thermodynamics, and has just as many (if not more) uses, like improving room acoustics, analyzing anechoic coatings, and even explaining false poltergeists.
One way to teach students about acoustics and the underlying theories is to use traditional experiments. However, some experiments are expensive and have difficult and time-consuming setup procedures. Additionally, Moheit notes that certain experiments need special laboratory conditions and rooms that might not be available, such as anechoic chambers and reverberant rooms.
Simulation of the sound pressure distribution inside a room.
To address these limitations and create another way to teach acoustics, Moheit designs customized apps for educators. The idea came to him after attending a COMSOL workshop in which he was introduced to apps. Moheit says, “I immediately thought of using acoustics apps for educational purposes because I was experienced in working with COMSOL Multiphysics® before and had a really clear idea of the possibilities of such a concept. Especially in acoustics and vibration, there are a lot of phenomena that can be simulated and visualized very well using the finite element method (FEM).” To further expand on what can be simulated with this software, the Acoustics Module, an add-on product to the COMSOL Multiphysics® software, contains other dedicated numerical methods as well. These include the boundary element method (BEM), discontinuous Galerkin finite element method (dG-FEM), and ray tracing methods.
A major benefit of using apps for this purpose is that they clearly demonstrate different phenomena, including more complex theories, without the need for extensive experimental setup. Apps are particularly helpful in acoustics because they have the ability to “visualize things we can’t see but only hear in reality,” Moheit notes.
Specialized apps, such as this one for analyzing gratings and speakers, visualize phenomena — like interference — that can’t be seen in real life. Image courtesy Lennart Moheit.
In addition, incorporating these “virtual experiments for everyone” into lectures helps keep students engaged. Moheit explains that apps “provide a completely new way of understanding physical phenomena, because they are interactive and not only passive.”
Simulation apps run detailed calculations in the background, where unnecessary details are hidden from the user via a simplified user interface (UI). As a result, students don’t have to be simulation experts to use apps. They can simply change predetermined parameters by making adjustments via the app UI and receive results based on their choices.
To deploy and share apps with students, educators can use the COMSOL Server™ product. COMSOL Server™ enables users to access and run apps through web browsers using computers, tablets, and even smartphones. This makes apps easily accessible — both in the classroom and at home.
Drawing on this inspiration, Moheit worked with the chair of the Vibroacoustics of Vehicles and Machines group at TUM to create a virtual learning platform called the “App Server” that provides freely accessible, interactive apps. As a note, the App Server and some of the apps mentioned in this blog post are still being worked on and improved.
The TUM App Server. Image courtesy Lennart Moheit.
Let’s take a quick look at a few of the apps Moheit helped build.
One app Moheit created is for simulating room acoustics, similar to the one mentioned at the beginning of this blog post. In this case, the model is of a Munich subway car, easily recognizable to TUM students. This app visualizes a mode shape with high (red) and low (blue) sound pressures at a certain frequency. Using these results, students can locate loud areas in a subway car at a specific frequency. In the loud areas, passengers can experience an unpleasant humming noise that can make it more difficult to hold conversations — practical concerns that the students can easily understand.
Moheit mentions that, in the future, this app will be expanded to include typical room acoustics calculations, such as reverberant time, sound wave propagation, and reflection and absorption at the walls.
The simulation results of the room acoustics app, showing a mode shape inside a subway train. Image courtesy Lennart Moheit.
An app featuring a wine glass helps students understand the structural dynamics of different designs, making it simple to alter the shape and material of the glass as well as the fluid volume (deciding if the glass is half full or half empty is up to the user). The results show the musical note of the natural frequencies of the glass and can be compared to measurement data. As a bonus, this app can also provide hints as to why glasses break when they are excited at one of their natural frequencies, that is, when they are driven into resonance.
UI and results of the wine glass app. Image courtesy Lennart Moheit.
Next up is a bell-themed app made to investigate the structural dynamics of different bell designs. Using this app, students can modify the geometry and materials of a bell, eventually finding the natural frequencies of the vibrating bell as well as the corresponding musical notes. Users can also visualize and animate bell vibrations. This app aims to enhance the students’ knowledge of how bells create sound, from the initial vibrations to when the sound reaches a listener’s ear.
Left: The UI and results of the bell app. Right: A photo of the bell app in use. Images courtesy Lennart Moheit.
Other virtual experiments within the TUM App Server include an app for analyzing the operation of a trombone and an app of an impedance tube.
Left: A trombone app is the only musical instrument app available at TUM at the moment. Right: An impedance tube/duct app, which is a common academic problem. Images courtesy Lennart Moheit.
In the future, Moheit notes that acoustics apps could be used by people of all ages and learning stages. For young learners in particular, “the digital, interactive learning concept might be understood as a kind of game and can help make the content more accessible for people who don’t like physics yet because it seems to be too theoretical to them or they didn’t understand the reason for all of the equations,” Moheit explains. Acoustics is a good topic for younger students, since it is a wide scientific field with many practical applications that can be easily understood.
As apps become more widespread, Moheit says that future students may be able to use apps to check their homework, for example, when solving basic mechanical problems. Students can also make their own apps. Moheit imagines a “course or workshop where students solve and understand a physical problem using COMSOL Multiphysics and build an app for that problem to provide their own insight to everyone else.”
Moheit has ideas for other apps that can be added to the TUM App Server, although they may not all be possible to create. His ideas for apps include:
Of course, these are only some of the possibilities of educational acoustics apps. If you have any ideas for using apps to teach acoustics, be sure to let us know in the comments below!
In a recent video on YouTube from standupmaths, science enthusiasts Matt Parker and Hugh Hunt discuss and demonstrate the “mystery” of a tuning fork. When you strike a tuning fork and hold it against a tabletop, it seems to double in frequency. As it turns out, the explanation behind this mystery can be boiled down to nonlinear solid mechanics.
When you hold a vibrating tuning fork in your hand, the bending motion of the prongs sets the air around them in motion. The pressure waves in the air propagate as sound. You can hear it, but it is not a very efficient conversion of the mechanical vibration into acoustic pressure.
When you hold the stem of the tuning fork to a table, an axial motion in the stem connects to the tabletop. The motion is much smaller than the transverse motion of the prongs, but it has the potential to set the large flat tabletop in motion — a surface that is a far better emitter of sound than the thin prongs of a tuning fork. The tabletop surface will act as a large loudspeaker diaphragm.
Our tuning fork.
To investigate this interesting behavior, we created a solid mechanics computational model of a tuning fork. The model is based on a tuning fork that one of my colleagues keeps in her handbag. The tone of the device is a reference A4 (440 Hz), the material is stainless steel, and the total length is about 12 cm.
First, let’s have a look at the displacement as the tuning fork is vibrating in its first eigenmode:
The mode shape for the fundamental frequency of the tuning fork.
If we study the displacements in detail, it turns out that even though the overall motion of the prongs is in the transverse direction (the x direction in the picture), there are also some small vertical components (in the z direction), consisting of two parts: an axial displacement of the prong tips caused by the bending and an axial displacement of the stem.
The displacements are shown in the figures below. The mode is normalized so that the maximum total displacement is 1. The peak axial displacement is 0.03 and the displacement in the stem is 0.01.
Total displacement vectors in the first eigenmode.
Axial displacements only. Note that the scales differ between figures. The center of gravity is indicated by the blue sphere.
Now, let’s turn to the sound emission. By adding a boundary element representation of the acoustic field to the model, the sound pressure level in the surrounding air can be computed. The amplitude of the vibration at the prong tips is set to 1 mm. This is approximately the maximum feasible value if the tuning fork is not to be overloaded from a stress point of view.
As can be seen in the figure below, the intensity of the sound decreases rather fast with the distance from the tuning fork, and also has a large degree of directionality. Actually, if you turn a tuning fork around its axis beside your ear, the near-silence in the 45-degree directions is striking.
Sound pressure level (dB) and radiation pattern (inset) around the tuning fork.
We now add a 2-cm-thick wooden table surface to the model. It measures 1 by 1 m and is supported at the corners. The stem of the tuning fork is in contact with a point at the center of the table. As can be seen below, the sound pressure levels are quite significant in a large portion of the air domain above and outside the table.
Sound pressure levels above the table when the stem of the tuning fork is attached to the table.
For comparison, we plot the sound pressure level for the same air domain when the tuning fork is held up. The difference is quite stunning, with very low sound pressure levels in all parts of the air above the table except in the vicinity of the tuning fork. This matches our experience with tuning forks as shown in the original YouTube video.
Sound pressure levels for the tuning fork when held up.
So far, we have not touched on the original question: Why does the frequency double when the tuning fork is placed on the table? One possible explanation could be that the tuning fork has a natural frequency at twice the fundamental, with a motion that is more prominent in the vertical direction. For a vibrating string, for example, the natural frequencies are integer multiples of the fundamental frequency.
This is not the case for a tuning fork. If the prongs are approximated as cantilever beams in bending, the lowest natural frequency is given by the expression

f_1 = (1.875^{2} / (2πL^{2})) √(EI/(ρA))

The quantities in this expression are: E, the Young’s modulus; I, the area moment of inertia of the cross section; ρ, the mass density; A, the cross-section area; and L, the length of the prong.
For our tuning fork, this evaluates to 435 Hz, so the formula provides a good approximation.
The second natural frequency of a cantilever beam is

f_2 = (4.694^{2} / (2πL^{2})) √(EI/(ρA))
This frequency is a factor 6.27 higher than the fundamental frequency. It cannot be involved in the frequency doubling. However, there are other mode shapes besides those with symmetric bending. Could one of them be involved in the frequency doubling?
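The cantilever estimate can be checked numerically. The material data and, in particular, the prong cross section below are assumptions chosen to reproduce a 440-Hz-class fork; the post only specifies stainless steel and a total length of about 12 cm:

```python
import math

E = 200e9        # Young's modulus of stainless steel (Pa), assumed
rho = 7850.0     # density (kg/m^3), assumed
L = 0.080        # prong length (m), assumed
d = 0.00394      # prong diameter (m), assumed circular cross section

r = d / 2
I = math.pi * r**4 / 4       # area moment of inertia
A = math.pi * r**2           # cross-section area

lam1, lam2 = 1.8751, 4.6941  # cantilever eigenvalue constants

f1 = lam1**2 / (2 * math.pi * L**2) * math.sqrt(E * I / (rho * A))
f2 = lam2**2 / (2 * math.pi * L**2) * math.sqrt(E * I / (rho * A))

print(round(f1), round(f2 / f1, 2))   # ~435 Hz and the factor 6.27
```

The ratio f_2/f_1 = (4.6941/1.8751)^{2} ≈ 6.27 is a property of the cantilever mode shapes and is independent of the material and cross section, which is why it holds for any fork geometry of this type.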
This is unlikely for two reasons. The first reason is that the frequency doubling phenomenon can be observed for tuning forks with different geometries, and it would be too much of a coincidence if all of them have an eigenmode with exactly twice the fundamental natural frequency. The second reason is that nonsymmetrical eigenmodes have a significant transverse displacement at the stem, where the tuning fork is clenched. Such eigenmodes will thus be strongly damped by your hand, and have an insignificant amplitude. One such mode, with a natural frequency of 1242 Hz, is shown in the animation below.
The tuning fork’s first eigenmode at 440 Hz, an out-of-plane mode with an eigenfrequency of 1242 Hz, and the second bending mode with an eigenfrequency of 2774 Hz.
Let’s summarize what we know about the frequency-doubling phenomenon. Since it is only experienced when we press the tuning fork to the table, the double frequency vibration has a strong axial motion in the stem. Also, we can see from a spectrum analyzer (you can download such an app on a smartphone) that the level of vibration at the double frequency decays relatively quickly. There is a transition back to the fundamental frequency as the dominant one.
The dependency on the amplitude suggests a nonlinear phenomenon. The axial movement of the stem indicates that the stem compensates for a change in the location of the center of mass of the prongs.
Without going into details with the math, it can be shown that for the bending cantilever, the center of mass shifts down by a distance δ relative to the original length L, which is

δ = β a^{2}/L

Here, a is the transverse motion at the tip and the coefficient β ≈ 0.2.
The important observation is that the vertical movement of the center of mass is proportional to the square of the vibration amplitude. Also, the center of mass will be at its lowest position twice per cycle (both when the prong bends inward and when it bends outward), thus the double frequency.
With a = 1 mm and a prong length of L = 80 mm, the maximum shift in the position of the center of mass of the prongs can be estimated to

δ = 0.2 × (1 mm)^{2} / 80 mm ≈ 0.0025 mm
The stem has a significantly smaller mass than the prongs, so it has to move even more for the total center of gravity to maintain its position. The stem displacement amplitude can thus be estimated to 0.005 mm. This should be seen in relation to what we know from the numerical experiments above. The linear (440 Hz) part of the axial motion is of the order of a/100; in this example, 0.01 mm.
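The back-of-the-envelope estimate can be reproduced in a few lines. The factor of 2 between the center-of-mass shift and the stem amplitude is an assumption implied by the mass argument above, not a value from the simulation:

```python
beta = 0.2   # geometric coefficient for a bending cantilever
a = 1e-3     # prong tip amplitude (m)
L = 0.080    # prong length (m)

# Downward shift of the prongs' center of mass, proportional to a^2
delta_com = beta * a**2 / L

# The lighter stem must move farther to keep the total center of mass fixed;
# a factor of 2 is assumed here for the estimate
stem_amplitude = 2 * delta_com

print(f"{delta_com * 1e3:.4f} mm, {stem_amplitude * 1e3:.4f} mm")  # 0.0025 mm, 0.0050 mm
```

The quadratic dependence on a is the key point: halving the strike amplitude cuts the 880-Hz stem motion by a factor of four, which is why the effect fades as the vibration decays.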
In reality, the tuning fork is a more complex system than a pure cantilever beam, and the connection region between the stem and the prongs will affect the results. For the tuning fork analyzed here, the second-order displacements are actually less than half of the back-of-the-envelope predicted 0.005 mm.
Still, the axial displacement caused by the second-order moving mass effect is significant. Furthermore, when it comes to emitting sound, it is the velocity, not the displacement, that is important. So, if displacement amplitudes are equal at 440 Hz and 880 Hz, the velocity at the double frequency is twice that at the fundamental frequency.
Since the amplitude of the axial vibration at 440 Hz is proportional to the prong amplitude a, and the amplitude of the 880-Hz vibration is proportional to a^{2}, it is necessary that we strike the tuning fork hard enough to experience the frequency-doubling effect. As the vibration decays, the relative importance of the nonlinear term decreases. This is clearly seen on the spectrum analyzer.
The behavior can be investigated in detail by performing a geometrically nonlinear transient dynamic analysis. The tuning fork is set in motion by a symmetric impulse applied horizontally on the prongs, and is then left free to vibrate. It can be seen that the horizontal prong displacement is almost sinusoidal at 440 Hz, while the stem moves up and down in a clearly nonlinear manner. The stem displacement is highly nonsymmetrical, since the 440 Hz contribution is synchronous with the prong displacement, while the 880-Hz term always gives an additional upward displacement.
Due to the nonlinearity of the system, the vibration is not completely periodic. Even the prong displacement amplitude can vary from one cycle to another.
The blue line shows the transverse displacement at the prong tip, and the green line shows the vertical displacement at the bottom of the stem.
If the frequency spectrum of the stem displacement plotted above is computed using FFT, there are two significant peaks at 440 Hz and 880 Hz. There is also a small third peak around the second bending mode.
Frequency spectrum of the vertical stem displacement.
To actually see the second-order term at 880 Hz in action, we can subtract the part of the stem vibration that is in phase with the prong bending from the total stem displacement. This displacement difference is seen in the graph below as the red curve.
The total axial stem displacement (blue), the prong bending proportional stem displacement (dashed green), and the remaining second-order displacement (red).
How did we perform this calculation? Well, we know from the eigenfrequency analysis that the amplitude of the axial stem vibration is about 1% of the transverse prong displacement (actually 0.92%). In the graph above, the dashed green curve is 0.0092 times the current displacement of the prong tip (not shown in the graph). This curve can be considered as showing the linear 440 Hz term — a more or less pure sine wave. That value is then subtracted from the total stem displacement, and what is left is the red curve. The second-order displacement is zero when the prong is straight, and peaks both when the prong has its maximum inward bending and when it has its maximum outward bending.
Actually, the red curve looks very much like it has a time variation proportional to sin^{2}(ωt). It should, since that displacement, according to the analysis above, is proportional to the square of the prong displacement. Using a well-known trigonometric identity, sin^{2}(ωt) = (1 − cos(2ωt))/2. Enter the double frequency!
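The frequency content described above can be reproduced with a minimal numerical sketch. The linear coefficient 0.0092 comes from the discussion above; the second-order coefficient is an illustrative value, not a simulation result:

```python
import numpy as np

fs = 44100                      # sampling rate, Hz; one full second of signal
t = np.arange(fs) / fs

# Stem motion modeled as a linear 440 Hz term plus a small term
# proportional to the square of the prong displacement.
prong = np.sin(2 * np.pi * 440 * t)
stem = 0.0092 * prong + 0.005 * prong**2   # second coefficient is illustrative

spectrum = np.abs(np.fft.rfft(stem))
freqs = np.fft.rfftfreq(len(stem), 1 / fs)

# The squared term contributes a DC offset and an 880 Hz component,
# since sin^2(wt) = (1 - cos(2wt)) / 2.
peaks = sorted(freqs[np.argsort(spectrum)[-3:]])
print(peaks)  # three strongest bins: ~[0.0, 440.0, 880.0]
```

Because the quadratic term scales as the square of the prong amplitude, shrinking `prong` by a factor of ten shrinks the 880 Hz peak by a factor of one hundred, which is exactly the decay behavior seen on the spectrum analyzer.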
Commenters on the original video from standupmaths have noticed that some tuning forks work better than others, and with some tuning forks, it is difficult to see the frequency doubling at all. As discussed above, the first criterion is that you hit it hard enough in order to get into the nonlinear regime. But there are also geometrical differences influencing the ratio between the amplitude of the two types of vibration.
For instance, prongs that are heavy relative to the stem will cause large double-frequency displacements, since the stem must move more in order to maintain the center of gravity. Slender prongs can have a larger amplitude–length (a/L) ratio, thus increasing the nonlinear term.
The design of the region where the prongs meet the stem is important. If it is stiff, then the amplitude of the fundamental frequency vibration in the stem will be reduced, and the relative importance of the double-frequency vibration is larger.
The cross section of the prongs will also have an influence. If we return to the expression for the natural frequency of a cantilever beam,

f_1 = (β_1^{2}/2π)·√(EI/(ρAL^{4})), with β_1L ≈ 1.875,

it can be seen that the moment of inertia of the cross section plays a role. A prong with a square cross section with side d has

I = d^{4}/12

while a prong with a circular cross section with diameter d has

I = πd^{4}/64
Thus, for two tuning forks that look the same when viewed from the side, the one with a square profile must have prongs that are a factor 1.14 longer to give the same fundamental frequency. If we assume the same maximum bending stress in the two tuning forks, the one with the square profile can have a transverse displacement amplitude that is 1.14^{2} times larger than that of the circular one because of its higher load-carrying capacity. In addition, if the stem is kept at a fixed size, it will become proportionally lighter compared to the longer prongs. All these contributions end up in a 70% increase in vertical stem vibration amplitude when moving from a circular profile to a square profile.
In addition, tuning forks with a circular cross section usually have a design that is more flexible at the connection between the prongs and the stem, and thus a higher level of vibration at the fundamental frequency.
The conclusion is that a tuning fork with a square cross section is more likely to exhibit the frequency-doubling behavior than one with a circular cross section.
In most cases, the answer is “no.” The fundamental frequency is still there, even though it may have a lower amplitude than the double-frequency component. But due to the way our senses work, we hear the fundamental frequency, although with a different timbre. It is difficult, but not impossible, to strike the tuning fork so hard that the sound level of the double frequency dominates.
The frequency doubling occurs due to a nonlinear phenomenon, where the stem of the tuning fork must move upward, in order to compensate for the small lowering of the center of mass of the prongs as they approach the outermost positions of their bending motion.
Note that it is not the fact that the tuning fork is connected to the table that causes the frequency doubling. The reason that we measure it in that case is that the sound emitted by the resonating table surface is caused by the axial stem motion, whereas the sound we hear from the tuning fork that is held up is dominated by the prong bending. The motion is the same in both cases, as long as the impedance of the table is ignored. In fact, you can measure the doubled frequency with a tuning fork when held up as well, but it is 30 dB or so below the fundamental frequency.
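The 30 dB figure translates to a linear amplitude ratio as follows (a quick back-of-the-envelope sketch; the function name is ours):

```python
import math

def db_to_amplitude_ratio(delta_db):
    """Convert a level difference in dB to a linear amplitude (pressure) ratio."""
    return 10 ** (delta_db / 20)

# A component 30 dB below the fundamental has roughly 1/32 of its amplitude
# (and about 1/1000 of its power, since power scales as 10^(dB/10)).
print(round(db_to_amplitude_ratio(30), 1))  # -> 31.6
```

So for a hand-held tuning fork, the doubled frequency is present but at roughly one thirtieth of the fundamental's pressure amplitude.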
BEM functionality is available in the Acoustics Module as the Pressure Acoustics, Boundary Elements interface. The interface can solve 2D and 3D acoustics problems that have constant-valued material properties within each domain. The fluid model can include dissipation by using complex-valued material data. Furthermore, the BEM interface’s implementation as a scattered field formulation means that it can handle scattering problems (see the image below). As we will see, the introduction of BEM allows users to solve a new category of problems that could not be handled before.
Classical BEM benchmark model of a spherical scatterer for which the results are compared to an analytical solution. The left image shows the sound pressure level in two cut planes at 500 Hz, while the right image shows a comparison of the scattered field at 1400 Hz. Images from the Spherical Scatterer: BEM Benchmark tutorial model.
An important feature is the ability to couple the BEM-based interface with FEM-based interfaces. For example, by using the Acoustic-Structure Boundary multiphysics coupling feature, you can couple the acoustics BEM interface to vibrating structures based on FEM. In addition, BEM and FEM acoustic domains can be combined by using the Acoustic BEM-FEM Boundary multiphysics coupling.
This flexibility allows BEM and FEM to be used where each is best suited, and this is all done within the same user interface, as with all other physics couplings in COMSOL Multiphysics. For instance, you can use FEM to model a vibrating structure and its interior, like a closed air domain, as this method can include more general material properties, and BEM to model the exterior domain, as this method is better suited for large and infinite domains. This is the case in the loudspeaker model depicted below.
User interface of COMSOL Multiphysics when setting up a multiphysics model of a loudspeaker that includes BEM and FEM acoustics as well as the Solid Mechanics and Shell interfaces. The physics are coupled with the built-in multiphysics couplings. Image from the Vibroacoustic Loudspeaker Simulation: Multiphysics with BEM-FEM tutorial model.
With BEM, you only need to mesh the surfaces next to the modeling domain. This means that there’s less need to create large volumetric meshes (necessary for FEM), making interfaces based on BEM particularly helpful for models that involve radiation and scattering and have detailed CAD geometries. The interface also has built-in conditions to set up an infinite sound hard boundary (wall) or an infinite sound soft boundary. These conditions are very useful when modeling, for example, underwater acoustics, where the ocean surface can be modeled as an infinite sound soft boundary.
Typically, it is advantageous to use interfaces based on BEM for problems with large fluid domains for which a large FEM-based volumetric mesh would otherwise be required (i.e., cases that would run out of memory due to the large 3D mesh). In such cases, using BEM can even extend the class of problems that COMSOL Multiphysics can handle. Some examples of these problems include:
An example of a transducer array located far from a scattering object. This type of problem is very hard or even impossible to solve with a pure FEM-based approach due to the large memory requirement. Using BEM, the model can be solved (moving the sphere further away does not cost more on the computational side). Image from the Tonpilz Transducer Array for Sonar Systems tutorial model.
While BEM is more computationally demanding than FEM for an equal number of degrees of freedom (DOFs), BEM usually requires far fewer DOFs than FEM to obtain the same accuracy. The fully populated, dense system matrices generated by BEM require dedicated numerical methods that differ from those used for FEM. A FEM-based interface, such as the Pressure Acoustics, Frequency Domain interface, is usually faster than BEM for solving small- and medium-sized acoustics models.
According to the user’s guide for the Acoustics Module, the BEM used in the Pressure Acoustics, Boundary Elements interface is based on the direct method with Costabel’s symmetric coupling. The resulting linear system is solved with the adaptive cross approximation (ACA) fast summation method, which partially assembles the matrices and computes the effect of the matrix-vector multiplication. The default iterative solver is GMRES. With the built-in multiphysics couplings, it is easy and seamless to set up problems that combine FEM- and BEM-based physics. When solving these coupled models, the default approach is to use hybridization, with the ACA for the BEM part and an appropriate preconditioner (direct or multigrid) for the FEM part of the problem.
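The reason an iterative Krylov solver such as GMRES pairs well with fast summation methods is that it only ever needs the matrix-vector product, never the assembled matrix. The sketch below illustrates this with SciPy on a toy dense, complex system; it is not COMSOL's implementation, and the shifted random matrix is just a well-conditioned stand-in for a BEM operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 500

# Toy stand-in for a BEM system: dense, complex, and diagonally dominant
# (real BEM matrices come from a discretized boundary integral operator).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n).astype(complex)

# Fast summation methods such as ACA never store A explicitly; they only
# supply the action x -> A @ x, which is all that GMRES requires.
op = LinearOperator((n, n), matvec=lambda x: A @ x, dtype=complex)

x, info = gmres(op, b)
print(info)  # 0 means GMRES converged to the default tolerance
```

Swapping the `matvec` for a compressed (e.g., ACA-style) representation changes nothing on the solver side, which is why the dense BEM matrices never need to be formed in full.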
As already mentioned, the Pressure Acoustics, Boundary Elements interface seamlessly couples to the finite-element-based interfaces like the Pressure Acoustics, Frequency Domain interface and the Solid Mechanics interface. This coupling makes it possible to easily set up hybrid FEM-BEM models that take advantage of the strengths of each formulation where needed and where they are best applied.
BEM is not meant to replace finite elements in acoustics but should be seen as a complement. The general rule of thumb is to use BEM where large fluid domains would otherwise require a very fine mesh when running a FEM-based model, and to otherwise couple BEM to FEM-based physics where they are best used. Some applications and examples include:
Remember that smaller models that fit in memory are typically faster with FEM. Use the traditional approach with a radiation condition or a PML to model open radiation domains.
The Pressure Acoustics, Boundary Elements interface can be used to replace a FEM-based radiation condition or PML and the far-field calculation feature. See, for example, the model example below.
In the Bessel Panel tutorial model, the Pressure Acoustics, Boundary Elements interface is used to model the open space. The BEM interface is effectively replacing a radiation condition (or a PML) and the far-field calculation feature that was previously necessary. This image shows the sound pressure level on the surface of the FEM domain (several point sources are located inside this domain) and in three cut planes, with a given extent, in the exterior BEM region.
When solving a problem with the BEM interface, the resulting solution consists of the dependent variables (the unknown fields) on the boundaries. This includes the pressure p and its normal derivative, i.e., the normal flux variable pabe.pbam1.bemflux. Evaluating the solution in a domain is based on an integral kernel evaluation, which is at the heart of BEM.
On boundaries, a dedicated boundary variable is defined. This variable has different definitions on exterior and interior boundaries: It is equal to the dependent variable on exterior boundaries, while up and down pressure variables (pabe.p_up and pabe.p_down) are defined on interior boundaries, because the pressure is discontinuous there; for example, across an Interior Sound Hard Wall boundary. Moreover, on all boundaries, predefined postprocessing variables exist that combine the properties of the boundary variables, when needed, with variables based on the kernel evaluation.
These variables and all other postprocessing variables are found in the Replace Expressions list in the plots, as shown in the image below.
The user interface with a list of some of the predefined postprocessing variables.
When postprocessing the BEM solution within the domains, the pressure field has to be reconstructed using the aforementioned BEM integral kernel evaluation. Dedicated data sets are available for easy visualization of the BEM solution by automating the kernel evaluation on a grid. The paragraphs below discuss data sets that can be used to plot acoustics results.
The Grid 3D and Grid 2D data sets are specially designed for evaluating the solution within domains where there is no mesh. These data sets set up a regular grid of points where the solution is evaluated. The size and bounds of the grid can be modified, as well as the resolution (the grid spacing). When visualizing wave problems, it is important to have adequate spatial resolution. However, the resolution should not be higher than necessary, as a very fine grid increases the rendering time.
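A common rule of thumb for choosing the grid resolution is a handful of evaluation points per wavelength. The helper below is our own sketch of that rule, not a COMSOL setting:

```python
def grid_spacing(f_hz, c=343.0, points_per_wavelength=6):
    """Suggested grid spacing for visualizing a wave solution.

    A rule of thumb is 5-6 evaluation points per wavelength; the speed
    of sound c defaults to air at room temperature.
    """
    wavelength = c / f_hz
    return wavelength / points_per_wavelength

# At 1400 Hz in air the wavelength is ~0.245 m, so a grid spacing of
# roughly 4 cm resolves the wave pattern without excessive rendering time.
print(round(grid_spacing(1400), 4))  # -> 0.0408
```

Doubling the frequency halves the wavelength, so the spacing must be halved as well, which is why the rendering cost of grid data sets grows quickly at high frequencies.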
A grid data set can, for example, be selected as the input data set for a slice or a surface plot. A grid data set and a multislice plot are automatically generated and used in the default plots when a BEM model is solved. The grid data set can also be used as input to a cut plane, cut line, or cut point.
Parameterized curves and surfaces can be used directly to evaluate the BEM solution as long as the option Only evaluate globally defined expressions is selected.
The dedicated acoustics plots can be used directly with the BEM variables as input. Examples include the Far Field plot, used for plotting the spatial response (not necessarily in the far field; in fact, at any distance), and the Directivity plot. For example, the sound pressure level variable pabe.Lp can be used as the expression.
Screenshots of the user interface for some of the different data sets mentioned above. The important settings are highlighted.
The screenshots above are taken from the Loudspeaker Radiation: BEM Acoustics tutorial model. This model solves a radiation problem and has most of the common plots and results visualization set up.
The image below shows the sound pressure level depicted in three slices through the grid on the speaker surface. To illustrate the generality of the postprocessing and visualization tools, the sound pressure level is also shown along a parameterized spiral curve created using a Parametric Curve 3D data set.
Sound pressure level depicted in different ways in the Loudspeaker Radiation: BEM Acoustics tutorial model.
Next, I want to discuss two cases that require special consideration when using BEM.
Many acoustics applications involve a situation in which a transducer is located in an infinite baffle and is radiating into a half-space. In most cases, this setup is not possible using boundary elements, at least not if the baffle has to be infinite. A noninfinite baffle can be set up using, for example, the Interior Sound Hard Wall boundary condition.
Typically, we would want to use the Infinite Sound Hard Boundary feature. However, this condition cannot “have a hole in it,” as when a loudspeaker driver sits in a baffle. Since the BEM formulation is based on the full-space Green’s function, an infinite symmetry plane or an infinite wall condition is truly infinite and cannot have an opening in it. Basically, all boundaries that have a selection in the physics interface and are active must be located on the same side of the infinite condition or lie on it. If this is not the case, the results will be unphysical.
My general recommendation for the infinite baffle setups is to use the FEM-based physics interface together with the far-field calculation feature and a PML or radiation condition. For an example, see the Lumped Loudspeaker Driver model. This setup will typically be much faster!
User interface of the Pressure Acoustics, Boundary Elements interface. The infinite conditions are found at the top physics level (highlighted here). Once a condition is selected, the resulting plane is depicted in the Graphics window.
Interior problems — especially problems with sharp resonances where no or little loss is present — can be challenging to solve with BEM. This is not because of the method itself but because an iterative solver is used to efficiently solve the underlying matrix system. The same problem is also found for a FEM-based model that uses an iterative solver.
Near a sharp resonance, any small change results in large variations in the pressure, which makes it hard for the iterative solver to converge. If possible, use FEM together with a direct solver in these situations, or make sure to add realistic boundary conditions with losses, such as an impedance condition.
BEM is a very useful complement to FEM in the COMSOL Multiphysics environment. Many engineers in the acoustics modeling community have been looking forward to the addition of this functionality. We hope that you will enjoy this latest addition to the Acoustics Module.
See what’s possible with the specialized acoustics modeling features available in the Acoustics Module add-on product by clicking the button below.
Try it yourself: Download one of the tutorial models featured in this blog post. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file.
Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics area that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation with material parameters given in terms of the density ρ and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via an interpolation variable that ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent background conditions. These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

v = −(Ψ_v/(iωρ_0)) ∇p

where the viscous field Ψ_v is a scalar, nondimensional field that describes the variation between bulk conditions (Ψ_v → 1, recovering Euler’s equation) and boundary conditions (Ψ_v = 0 at a no-slip wall).
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity of the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between the temperature variation and the acoustic pressure can be written in a general form (Ref. 1) as

T = (Ψ_h/(ρ_0 C_p)) p

(here written for an ideal gas), where the thermal field Ψ_h is a scalar, nondimensional field that describes the variation between bulk conditions (Ψ_h → 1) and boundary conditions (Ψ_h = 0 at an isothermal wall).
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as

Δ_cd Ψ_v − (iω/ν) Ψ_v = −(iω/ν)    (1)

where Δ_cd is the Laplacian in the cross-sectional direction only and ν is the kinematic viscosity. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
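For a circular cross section, the analytic solution mentioned above is the classical Zwikker–Kosten profile, which can be sketched numerically. Treat the expression Ψ_v(r) = 1 − J_0(k_v r)/J_0(k_v R), with k_v = √(−iω/ν), as an illustration of the field's behavior under the Low Reduced Frequency assumptions rather than the exact form used in the product:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_n, accepts complex arguments

def viscous_field_circular(r, R, f, nu=1.5e-5):
    """Viscous field Psi_v(r) in a circular tube of radius R (LRF model).

    Zwikker-Kosten solution: Psi_v = 1 - J0(k r) / J0(k R), where
    k = sqrt(-i * omega / nu); nu defaults to the kinematic viscosity of air.
    """
    omega = 2 * np.pi * f
    k = np.sqrt(-1j * omega / nu)
    return 1 - jv(0, k * r) / jv(0, k * R)

R = 1e-3                          # 1 mm tube radius
r = np.linspace(0, R, 50)
psi = viscous_field_circular(r, R, 1000.0)

print(abs(psi[-1]))               # ~0 at the wall: the no-slip condition
print(abs(psi[0] - 1))            # ~0 at the center: bulk (Euler) behavior
```

At lower frequencies, the boundary layer thickens and Ψ_v never reaches 1 anywhere in the cross section, which is exactly the regime where the viscous losses dominate.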
In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscous acoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are

Ψ_v = 0 (no slip, at a solid wall)

and

∂Ψ_v/∂n = 0 (symmetry)

These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:

Δ_cd Ψ_v − a_v (iω/ν) Ψ_v = −(iω/ν) f_v

We already know that for air domains, (a_v, f_v) = (1,1), since that gives us the original equation (1). If we instead set a_v to a large value, so that the gradient term becomes insignificant, and set f_v to zero, we get

Ψ_v = 0

This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_v, f_v) should have values of (“large”, 0). Thus, we have established our interpolation extremes:

(a_v, f_v) = (1, 1) for air

and

(a_v, f_v) = (“large”, 0) for solids
I carried out a comparison between the explicit boundary conditions and interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas on the adjacent domains on the right, the suggested values of a_{v} and f_{v} are input.
Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as in standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure via equations. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonal cross section has a certain acoustic loss due to viscous effects. Each side of the hexagon is approximately 1.1 mm long, which gives an area equivalent to that of a circle with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek an optimal topology that gives a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
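The equivalent-area statement above is easy to verify: a regular hexagon with the same area as a circle of radius 1 mm has a side length of about 1.1 mm. The helper below is a hypothetical function written for this check, not part of the model:

```python
import math

def hexagon_side_for_circle_area(radius):
    """Side length of a regular hexagon whose area equals that of a circle.

    Regular hexagon area = (3 * sqrt(3) / 2) * s^2; circle area = pi * r^2.
    """
    return math.sqrt(2 * math.pi / (3 * math.sqrt(3))) * radius

s = hexagon_side_for_circle_area(1.0)   # radius 1 mm -> side length in mm
print(round(s, 3))  # -> 1.1
```

Matching the cross-sectional areas makes the loss comparison between the two geometries fair, since both carry the same volume flow for a given bulk velocity.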
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where the pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics if both universities and industry continue to improve topology optimization. I hope to see many advances in this area in the future.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Picture a micromirror as a single string on a guitar. The string is so light and thin that when you pluck it, the surrounding air dampens the string’s motion, bringing it to a standstill.
Because this damping effect is important to many MEMS devices, micromirrors have a wide variety of potential applications. For instance, these mirrors can be used to control optic elements, an ability that makes them useful in the microscopy and fiber optics fields. Micromirrors are found in scanners, heads-up displays, medical imaging, and more. Additionally, MEMS systems sometimes use integrated scanning micromirror systems for consumer and telecommunications applications.
Close-up view of an HDTV micromirror chip. Image by yellowcloud — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
When developing a micromirror actuator system, engineers need to account for its dynamic vibrating behavior and damping, both of which greatly affect the operation of the device. Simulation provides a way to analyze these factors and accurately predict system performance in a timely and cost-efficient manner.
To perform an advanced MEMS analysis, you can combine features in the Structural Mechanics Module and Acoustics Module, two add-on products to the COMSOL Multiphysics simulation platform. Let’s take a look at frequency-domain (time-harmonic) and transient analyses of a vibrating micromirror.
We model an idealized system that consists of a vibrating silicon micromirror — which is 0.5 by 0.5 mm with a thickness of 1 μm — surrounded by air. A key parameter in this model is the penetration depth; i.e., the thickness of the viscous and thermal boundary layers. In these layers, energy dissipates via viscous drag and thermal conduction. The thickness of the viscous and thermal layers is characterized by the following penetration depth scales:

δ_v = √(μ/(πfρ)),  δ_t = √(k/(πfρC_p))

where f is the frequency, ρ is the fluid density, μ is the dynamic viscosity, k is the coefficient of thermal conduction, C_p is the heat capacity at constant pressure, and Pr = μC_p/k is the nondimensional Prandtl number relating the two scales.
For air, when the system is excited at a frequency of 10 kHz (which is typical for this model), the viscous and thermal scales are 22 µm and 18 µm, respectively. These are comparable to the geometric scales, like the mirror thickness, meaning that thermal and viscous losses must be included. Moreover, in real systems, the mirrors may be located near surfaces or in close proximity to each other, creating narrow regions where the damping effects are accentuated.
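The viscous penetration depth quoted above can be checked numerically. The material values below are approximate properties of air at room temperature, not parameters taken from the model:

```python
import math

def viscous_penetration_depth(f, mu=1.81e-5, rho=1.2):
    """Viscous boundary layer thickness: delta_v = sqrt(mu / (pi * f * rho)).

    Defaults are approximate properties of air at room temperature
    (dynamic viscosity in Pa*s, density in kg/m^3).
    """
    return math.sqrt(mu / (math.pi * f * rho))

# At the 10 kHz excitation frequency used for the micromirror model:
print(round(viscous_penetration_depth(1e4) * 1e6, 1), "um")  # ~22 um
```

Since δ_v scales as 1/√f, the boundary layer is ten times thicker at 100 Hz than at 10 kHz, which is why the relative importance of viscous damping grows toward low frequencies.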
The frequency-domain analysis provides insight into the frequency response of the system, including the location of the resonance frequencies, Q-factor of the resonance, and damping of the system.
The micromirror model geometry, showing the symmetry plane, fixed constraint, and torquing force components.
In this example, we use three separate interfaces:
By modeling the detailed thermoviscous acoustics and using the Thermoviscous Acoustics, Frequency Domain interface, we can explicitly include thermal and viscous damping while solving the full linearized Navier-Stokes, continuity, and energy equations. In doing so, we accomplish one of the main goals for this model: accurately calculating the damping experienced by the mirror.
To set up and combine the three interfaces, we use the Acoustics-Thermoviscous Acoustics Boundary and Thermoviscous-Acoustics-Structure Boundary multiphysics couplings. We then solve the model using a frequency-domain sweep and an eigenfrequency study. These analyses enable us to study the resonance frequency of the mirror under a torquing load in the frequency domain.
Let’s take a look at the displacement of the micromirror for a frequency of 10 kHz and when exposed to the torquing force. In this scenario, the displacement mainly occurs at the edges of the device. To view displacement in a different way, we also plot the response at the tip of the micromirror over a range of frequencies.
Micromirror displacement at 10 kHz for phase 0 (left) and the absolute value of the z-component of the displacement field at the micromirror tip (right).
Next, let’s view the acoustic temperature variations (left image below) and acoustic pressure distribution (right image below) in the micromirror for a frequency of 11 kHz. As we can see, the maximum and minimum temperature fluctuations occur opposite to one another and there is an antisymmetric pressure distribution. The temperature fluctuations are closely related to the pressure fluctuations through the equation of state. Note that the temperature fluctuations fall to zero at the surface of the mirror, where an isothermal condition is applied. The temperature gradient near the surface gives rise to the thermal losses.
Temperature fluctuation field within the thermoviscous acoustics domain (left) and the pressure isosurfaces (right).
The two animations below show a dynamic extension of the frequency-domain data using the time-harmonic nature of the solution. Both animations depict the mirror movement in a highly exaggerated manner, with the first one showing an instantaneous velocity magnitude in a cross section and the second showing the acoustic temperature fluctuations. These results indicate that there are high-velocity regions close to the edge of the micromirror. We determine the extent of this region into the air via the scale of the viscous boundary layer (viscous penetration depth). We can also identify the thermal boundary layer or penetration depth using the same method.
Animation of the time-harmonic variation in the local velocity.
Animation of the time-harmonic variation in the acoustic temperature fluctuations.
When the problem is formulated in the frequency domain, eigenmodes or eigenfrequencies can also be identified. From the eigenfrequency study (also performed in the model), we can determine the vibrating modes, shown in the animation below (only half the mirror is shown as symmetry applies). Our results show that the fundamental mode is around 10.5 kHz, with higher modes at 13.1 kHz and 39.5 kHz. The complex value of the eigenfrequency is related to the Q-factor of the resonance and thus the damping. (This relationship is discussed in detail in the Vibrating Micromirror model documentation.)
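The link between a complex eigenfrequency and the Q-factor can be sketched under one common convention: for f = f_re + i·f_im with time dependence e^{i2πft}, a positive imaginary part means exponential decay and Q = f_re/(2·f_im). The numerical values below are hypothetical, for illustration only, not the model's results:

```python
import math

# Hypothetical complex eigenfrequency of a damped mode [Hz]
f_re, f_im = 10.5e3, 25.0

# Quality factor under the convention Q = f_re / (2 * f_im)
Q = f_re / (2 * f_im)

# Amplitude decay time constant of the mode [s]
tau = 1 / (2 * math.pi * f_im)

print(f"Q = {Q:.0f}, decay time = {tau * 1e3:.2f} ms")
```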
Animation of the first three vibrating modes of the micromirror.
As of version 5.3a of the COMSOL® software, a different take on this example solves for the transient behavior of the micromirror. Using the same geometry, we extend the frequency-domain analysis into a transient analysis. To achieve this, we swap the frequency-domain interfaces with their corresponding transient interfaces and adjust the settings of the transient solver. In the simulation, the micromirror is actuated for a short time and exhibits damped vibrations.
The resulting model includes some of the most advanced air and gas damping mechanisms that COMSOL Multiphysics has to offer. For instance, the Thermoviscous Acoustics, Transient interface generates the full details for the viscous and thermal damping of the micromirror from the surrounding air.
In addition, by coupling the transient perfectly matched layer capabilities of pressure acoustics to the thermoviscous acoustics domain, we can create efficient nonreflecting boundary conditions (NRBCs) for this model in the time domain.
Let’s start with the displacement results. The 3D results (left image below) visualize the displacement of the micromirror and the pressure distribution at a given time. We also generate a plot (right image below) to illustrate the damped vibrations caused by thermal and viscous losses. The green curve represents the undamped response of the micromirror when the surrounding air is not coupled to the mirror movement. The time-domain simulations make it possible to study transients of the system, like the decay time, and the response of the system to an anharmonic forcing.
Micromirror displacement and pressure distribution (left) and the transient evolution of the mirror displacement (right).
We can also examine the acoustic temperature variations surrounding the micromirror. The isothermal condition at the micromirror surface produces an acoustic thermal boundary layer. As with the frequency-domain example, the highest and lowest temperatures are located opposite to one another.
In addition, by calculating the acoustic velocity variations of the micromirror, we see that a no-slip condition at the micromirror surface results in a viscous boundary layer.
Acoustic temperature variations (left) as well as acoustic velocity variations for the x-component (center) and z-component (right).
These examples demonstrate that we can analyze micromirrors using advanced modeling features available in the Acoustics Module in combination with the Structural Mechanics Module. For more details on modeling micromirrors, check out the tutorials below.
Fluid-filled pipes, also referred to as fluid-carrying structures, have a large number of industrial applications, such as gas pipelines, automotive mufflers, aircraft fuselage, and underwater pipelines. The size of a pipeline system can range from centimeters to kilometers.
Common applications of fluid-filled pipes. Left: A submerged pipeline. Image by Grand Canyon National Park. Licensed under CC BY 2.0, via Flickr Creative Commons. Center: A model of an aircraft fuselage. Right: An automotive muffler. Image by lw5315us. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
Large pipe systems are difficult to model with simulation software, and because the acoustic and elastic modes do not exist independently of one another, analyzing either part on its own does not make the task easier. We therefore need to account for the effect of fluid loading on the response of the pipe.
At low frequencies, the fluid loading term tends to be small, so the response of the system is dominated by the dynamics of the structure/pipe (Ref. 2). Fluid loading changes the vibrational characteristics of the structure in contact with the fluid and, consequently, the acoustic radiation. Fluid-loading effects are strongest for structures in contact with denser fluids, since the fluid forces are proportional to the mean density of the fluid.
Generally, systems can be described by distributed mass and stiffness. There are infinite degrees of freedom (DOFs) for a continuous system, which results in infinite modes. For a finite frequency range, there is a finite number of modes that can be analyzed individually using modal decomposition. The motion of such continuous systems is described by partial differential equations (PDEs) from force/acceleration and force/deformation relations. Examples of such systems are strings, rods, and shafts (second-order PDEs) as well as beams (fourth-order PDEs) and fluid-filled pipes.
The solution to such equations can be visualized using two approaches:
Suppose we’re interested in modeling the dynamics of a large system at higher frequencies using the finite element method (FEM). To capture its behavior, each wavelength must be discretized with a sufficient number of elements, which can lead to a very large number of DOFs and, hence, high memory and time requirements. We can tackle this issue by using wave modes — representing the system as a guided-wave problem — since the waves travel long distances before they decay.
Access to the wave properties is another advantage of a wave-based approach. These properties are important for studying structure-borne sound and the frequency response of finite-length waveguides, as well as for computing the energy transmission through structures. The wave modes are represented through dispersion curves, which provide the relationship between wave number and frequency.
Dispersion curves are basically separate lines that each represent an individual mode. The only prerequisite for the wave-based method is that the cross section of the system is constant (there is no limit to the length). For modeling long systems, such as pipes carrying fluid, beams, or rail track, the wave-based approach is very useful.
Waves propagate in time and space. The spatial variation is described by a quantity representing phase change per unit distance and is equal to ω/c. This is the wave number, denoted by k. One wavelength corresponds to an x-dependent phase difference of 2π: kλ = 2π.
When a system is excited with a force at one end, a large number of waves start to propagate toward the other side. Each wave travels at its phase velocity, which can be independent of ω (e.g., longitudinal and shear waves) or frequency dependent (e.g., bending waves). All of the waves travel together under an envelope. The speed at which the energy is transported is given by the group velocity — the velocity of the envelope — defined as c_{g} = ∂ω/∂k.
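The relation c_g = ∂ω/∂k can be checked numerically. For a quadratic, bending-type dispersion relation ω = a·k², the group velocity comes out as exactly twice the phase velocity. The coefficient a below is an arbitrary illustrative value, not a material property:

```python
# Numerical check that c_g = dω/dk equals 2× the phase velocity ω/k
# for a quadratic (bending-type) dispersion relation ω = a * k².
a = 5.0e-3  # hypothetical dispersion coefficient [m^2/s]

def omega(k):
    return a * k**2

k0 = 200.0   # wave number of interest [rad/m]
dk = 1e-4    # step for the central finite difference

c_phase = omega(k0) / k0                                # phase velocity
c_group = (omega(k0 + dk) - omega(k0 - dk)) / (2 * dk)  # group velocity dω/dk

print(c_phase, c_group)
```

For the nondispersive case ω = c·k, the same check would give c_group = c_phase, which is why longitudinal and shear waves do not spread out.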
Schematic of a dispersion curve.
Dispersion curves explain the dynamics of a coupled system. In a fluid-filled pipe where waves can travel in fluid as well as in the pipe wall, the dispersion curves provide a common wave number or wave mode that propagates into the system as a whole. Dispersion curves also provide insight into what happens inside the system at different frequencies. Let’s see how to compute dispersion curves analytically.
Consider a linear conservative system that is uniform and unbounded in one direction (z). The equation of free vibration can be written as:

μ(z) ∂²w/∂t² + L(z)w = 0    (1)

where μ(z) is the mass density and L(z) is the stiffness operator, which depends on the spatial derivatives ∂/∂z, ∂²/∂z², and so on.
The exact form varies. In general, w might be a function of 1, 2, or 3 space variables depending on the problem (such as a beam, plate, or acoustic cavity). Under the passage of a time-harmonic wave, the solution of Eq. (1) is w(z,t) = We^{i(ωt – kz)}, where W is the amplitude of the wave, ω is the circular frequency, and k is the wave number.
Substituting w(z,t) in Eq. (1) provides the dispersion/characteristic equation. The solution is wave numbers, which come in pairs and represent waves traveling in the ±z direction. Wave numbers can be characterized as:
We may want to obtain basic wave modes (such as longitudinal, shear, and bending) of a structure analytically. The systems considered here have a constant cross section and wave propagation in the positive x direction. For computing longitudinal motion, consider a uniform elastic bar with density ρ and Young’s modulus E. The equation of motion for free vibration is given by ρ ∂²u/∂t² = E ∂²u/∂x². Assuming time-harmonic motion, as before, we get the dispersion relation k = ω/c_{L}, with phase velocity c_{L} = √(E/ρ) and group velocity c_{g} = c_{L}. Since c_{L} is independent of ω and k, all harmonic waves travel at the same speed. The dispersion relation for shear waves is of the same form, k = ω/c_{S} with c_{S} = √(G/ρ), where G is the shear modulus of the material.
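As a minimal sketch, these nondispersive speeds can be evaluated with the steel properties used later in this post (E = 2e11 N/m², ν = 0.3, ρ = 7800 kg/m³):

```python
import math

# Nondispersive wave speeds in a uniform steel bar
E = 2e11       # Young's modulus [Pa]
nu = 0.3       # Poisson's ratio
rho = 7800.0   # density [kg/m^3]

G = E / (2 * (1 + nu))    # shear modulus from E and nu [Pa]

c_L = math.sqrt(E / rho)  # longitudinal (bar) wave speed
c_S = math.sqrt(G / rho)  # shear wave speed

print(f"c_L = {c_L:.0f} m/s, c_S = {c_S:.0f} m/s")
```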
To compute the bending waves, we consider the Euler-Bernoulli and Timoshenko theories, each based on certain assumptions. The Euler-Bernoulli theory assumes that the cross section of the beam remains plane and perpendicular to the neutral axis during bending, ignoring rotary inertia and shearing effects. This simplifies many terms and yields a fourth-order partial differential equation, EI ∂⁴w/∂x⁴ + ρA ∂²w/∂t² = 0, which can be easily solved.
The problem with this assumption is that it breaks down at high frequencies, when the wavelength becomes comparable to the thickness of the structure. The dispersion relation is given by k⁴ = ω²ρA/(EI), corresponding to phase velocity c_{b} = √ω (EI/(ρA))^{1/4} and group velocity c_{gb} = 2c_{b}. The phase speed depends on frequency, so bending waves are dispersive: The wave packet spreads out because its higher-frequency components propagate faster.
Other theories, such as Timoshenko, incorporate shear effects and provide more accurate behavior at higher frequencies. For complicated cross sections, analytical solutions are not feasible.
The acoustic pressure field inside a cylindrical duct, which satisfies the acoustic wave equation, is given by:

p(r, θ, z) = Σ_{n} P_{n} J_{n}(k_{r}r) cos(nθ) e^{−ik_{z}z}

where n is the circumferential mode order, P_{n} is the amplitude coefficient, J_{n}(k_{r}r) is the Bessel function of the first kind, k_{z} is the out-of-plane wave number, and θ is the circumferential angle.
The radial wave number k_{r} is determined by the boundary condition for a rigid wall; i.e., J_{n}‘(k_{r}r)|_{r=a} = 0, where a is the duct radius and J_{n}‘ denotes the derivative of the Bessel function with respect to its argument. For a given n, this equation has multiple solutions, one for each radial mode. Correspondingly, the out-of-plane wave number is computed using the relation k_{z}^{2} + k_{r}^{2} = k^{2}.
Our fluid-filled pipe is linearly elastic and homogeneous. The fluid is purely acoustic, which means it’s compressible, inviscid, and barotropic. The pipe’s modes are computed individually. For the numerical example, the pipe material is steel and the fluid is air. Material properties are given by:
Material properties:
E = 2 × 10^11 N/m^{2} (Young’s modulus of steel)
ρ_{s} = 7800 kg/m^{3} (density of steel)
ν = 0.3 (Poisson’s ratio)
ρ_{f} = 1.25 kg/m^{3} (density of air)
c = 343 m/s (speed of sound in air)
r_{o} = 0.05 m (pipe outer radius)
t = 0.0025 m (pipe wall thickness)
We use the Solid Mechanics interface and the Pressure Acoustics, Frequency Domain interface to solve the model, together with the mode analysis study type, where the modes or out-of-plane wave numbers are computed at each frequency. Mode analysis assumes that the mode is harmonic in space; i.e., u(x,y,z) = u(x,y)e^{−ik_{z}z}. This equation is solved for free vibrations at a given frequency, yielding the out-of-plane wave numbers, k_{z}.
Certain discrete values — eigenvalues — correspond to the wave numbers of the propagating or evanescent modes. The mode analysis study step triggers the solver that can find these wave numbers and the corresponding mode shapes. A parametric sweep of frequency computes the wave numbers at different frequencies.
Settings for computing out-of-plane wave numbers.
The real values of the wave numbers are plotted, since they correspond to propagating wave modes. The cross-sectional shapes are also plotted in terms of total displacement. Each line in the dispersion curve (see below) represents an individual mode, and comparing the bending, shear, and longitudinal modes with the analytical solutions makes them easy to identify. We also see a mode that cuts on at around 6000 Hz and propagates from there. Sometimes, the behavior of a mode changes at high frequencies (a bending mode can convert into a shear, longitudinal, or extensional mode). Such behavior is easily captured with dispersion curves.
Dispersion curves for a hollow cylindrical pipe (left) and rigid-walled acoustic duct (right).
Pipe cross-sectional mode shapes.
The dispersion curves for a cylindrical rigid-walled duct can be analyzed using the same approach. Apart from the plane-wave mode, which propagates at all frequencies, the first higher-order acoustic modes (see graph above to the right) cut on at around 2000, 3500, 4300, 4800, and 6100 Hz. The modes are compared with the analytical solutions, and the cross-sectional shapes are plotted for the cylindrical duct, also showing the pressure distribution across the duct cross section.
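These cut-on frequencies can be roughly reproduced analytically from f_c = c·j′_{nm}/(2πa), where j′_{nm} are the zeros of the Bessel-function derivative J_n′. The zeros below are standard tabulated values; taking the inner radius as a = r_o − t is an assumption here:

```python
import math

# Cut-on frequencies of a rigid-walled cylindrical duct: f_c = c * j'_nm / (2*pi*a)
c = 343.0           # speed of sound in air [m/s]
a = 0.05 - 0.0025   # assumed inner radius a = r_o - t [m]

# (n, m) -> first zeros of J_n' (standard tabulated values;
# the trivial zero of J_0' at the origin is excluded)
jprime = {(1, 1): 1.8412, (2, 1): 3.0542, (0, 1): 3.8317,
          (3, 1): 4.2012, (4, 1): 5.3176}

for (n, m), j in sorted(jprime.items(), key=lambda kv: kv[1]):
    f_c = c * j / (2 * math.pi * a)
    print(f"mode (n={n}, m={m}): cut-on ~ {f_c:.0f} Hz")
```

With these values, the lowest cut-on lands near 2100 Hz and the fifth near 6100 Hz, in line with the graph.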
Pressure distribution at different modes.
The wave numbers computed using the COMSOL Multiphysics® software are compared with the analytical wave numbers of the hollow pipe and rigid-walled cylindrical duct, respectively. Results show good agreement, but there are clear differences observed for the bending mode. Since the analytical theory is based on assumptions, it cannot be used for high frequencies. The reliability of numerical results lies in the proper discretization of the domain under study.
Note that a sufficient number of elements per wavelength (~6–8 quadratic elements) must be used to capture the wavelength accurately. Another advantage of the numerical approach is that analytical solutions are difficult to obtain for complex cross sections (such as multilayered pipes or complex cross-sectional shapes). In the plot above to the left, apart from the regular wave modes (i.e., bending, longitudinal, and shear), many other modes are observed in the numerical solutions, and their number increases with the frequency range. These “extra” modes (such as the ring mode) also have physical significance, and they are extremely difficult to obtain via analytical solutions. The system’s overall dynamic response is the superposition of all of the modes.
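The rule of thumb translates directly into a maximum element size, h_max = c/(N·f). A quick sketch (the maximum frequency below is illustrative):

```python
# Rule-of-thumb maximum element size for resolving an acoustic wavelength
# with N quadratic elements (N = 6, per the ~6-8 guideline above)
c = 343.0       # speed of sound in air [m/s]
f_max = 6000.0  # highest frequency of interest [Hz], illustrative
N = 6           # elements per wavelength

wavelength = c / f_max   # shortest acoustic wavelength to resolve [m]
h_max = wavelength / N   # maximum element size [m]

print(f"lambda = {wavelength * 1e3:.1f} mm, h_max = {h_max * 1e3:.1f} mm")
```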
At a higher frequency range, the system’s behavior becomes more complex. Modes overlap with each other, and it’s extremely difficult to understand the behavior of each mode. Again, dispersion curves come to the rescue.
Now, we compute the wave number for the coupled system for the fluid-filled pipe. Using the method described earlier, the wave modes are computed using the mode analysis solver in COMSOL Multiphysics for both air and water as the internal fluid.
Dispersion curves for an air-filled (left) and water-filled (right) steel pipe.
The results for the air-filled steel pipe are compared with the uncoupled acoustic and elastic modes. Since the fluid is light, it has minimal effect on the vibrations of the coupled system.
The ring mode can be seen at low frequencies where the pipe resonates as a ring. However, due to Poisson’s effect, there is a slight coupling between the elastic and acoustic parts. As the frequency increases, the motions of the elastic and acoustic parts become strongly coupled, highlighted by a rapid increase in radial vibrations. For instance, branch 1 corresponds to the longitudinal mode and branch 2 corresponds to the coupled mode. Although the coupling between air and steel is weak, at 6000 Hz, the extensional mode converts into an acoustic mode.
Cross-sectional mode shapes for the coupled system are plotted below. They correspond to the displacements in the pipe and pressure field in the fluid domain.
Strong coupling behavior is seen in the plot above for a water-filled pipe. Branch 1 corresponds to the acoustic wave in a rigid-walled cylindrical duct (a purely acoustic mode). Considering branch 2, the pipe behaves as if in vacuo at low frequencies. At high frequencies, the fluid and pipe motion become strongly coupled and the mode converts into a second acoustic mode. Branch 3 originates (cuts on) at around 10,000 Hz. This mode seems to follow the trend of an extensional structural mode and, at high frequencies, it again converts into a rigid-walled acoustic mode. We can analyze the other branches similarly.
Coupled elastoacoustic wave mode shapes, with air as the internal fluid.
Further, dynamic analysis of the system using dispersion curves can be done at high frequencies. For finite-length systems, these propagation constants or wave numbers can be used to compute the forced response with significant computational efficiency.
Suppose you want to reduce the noise radiation from your system. A few easy techniques can be employed, such as using a multilayered/sandwich pipe made of soft rubber material enclosed by two stiff skins or a complicated cross section (maybe elliptical). Such complex configurations can easily be tested using dispersion curves.
However, the analysis must have:
In this blog post, we have discussed how dispersion curves are computed for an infinite-length multiphysics system and how they can be analyzed further for structural mechanics and pressure acoustics. The analysis is performed using the mode analysis solver. In an upcoming blog post, we will demonstrate how to use wave modes to compute the forced response of finite-length waveguides.
C.R. Fuller and F.J. Fahy, “Characteristics of wave propagation and energy distributions in cylindrical elastic shells filled with fluid,” Journal of Sound and Vibration, vol. 81, no. 4, pp. 501–518, 1982.
Imagine that you’re at a busy cocktail party on New Year’s Eve. Music and laughter fill the air in a cacophony of sound. You and a friend are chatting in the middle of the crowd, waiting in anticipation for the countdown to begin.
Now, close your eyes and think about trying to listen to your friend.
How did you pick your friend’s voice out from the mixture of noises around you?
The answer to this question lies within the cocktail party effect, a concept popularized by Colin Cherry in 1953. The cocktail party problem involves hearing and focusing on a sound of interest, like a speech signal, in an environment with competing sounds.
To do so, you need to overcome two challenges:
These challenges are exacerbated when the party becomes larger and there are more competing sound sources. As a result, it is difficult to determine the speech signal of interest, recover it from the blending of sounds around you, and then pay attention to it. Despite the challenge, many people are able to naturally solve this problem without thinking much about it.
So how do we do it? Let’s take a look…
According to this source, a main element at play here is that our brains are able to use grouping cues to determine which sounds go together. For instance, individual sounds often have common amplitude changes across their different frequencies. This means that when we come across sounds at multiple frequencies that stop and start at the same time, our brains interpret these as belonging to the same sound source. Additionally, when frequencies in a sound mix have a harmonic relationship, they are often heard as one sound, since it is likely that they are related to one another.
Fluctuations in natural sounds also make it easier to differentiate between the sounds. Although different sounds can obscure each other at times, when they fluctuate, we get a glimpse of the underlying sounds in the noisy environment. Our auditory system can then fill in the blanks for the obscured sounds by accurately grouping the obscured bits.
Press play to be transported to a noisy cocktail party. At first, you can only hear a melange of sound. Then, you run into an old friend, who starts talking to you. As you focus on what your friend is saying, you are eventually able to filter out the other sounds of the party, effectively turning them into background noise.
Another helpful way our brains solve this problem is by using our understanding of various classes of sounds. Going back to our cocktail party example, if your friend is speaking, you’ll have a better chance of hearing them if they are forming coherent sentences than if they are speaking gibberish. In addition, your perception of sound is more accurate if your friend has an accent that is familiar to you.
Localization and visual cues also help us direct our attention to the correct auditory source. If a target sound is in a different location than undesired sounds, for example, we can more easily differentiate it using our spatial hearing, and as a result, the rest becomes background noise.
While most people are typically able to solve the cocktail party problem on their own, those with impaired hearing may struggle in loud situations. To learn more, we reached out to Abigail Kressner of the Technical University of Denmark. Kressner mentions that one generally accepted theory on why hearing-impaired people struggle in loud situations is that it is due to a “combination of audibility (i.e., whether the signals are loud enough for the hearing-impaired person to hear them) and reduced temporal resolution.”
Kressner elaborates by saying that these issues may “influence a hearing-impaired listener’s ability to segregate different streams of sound within a complex acoustic scene like a cocktail party and that they also may have reduced attentional segregation.” Those who are hearing impaired are also less able to “listen in the dips” between fluctuations of competing noise sources. As we touched on earlier, these fluctuations in the noise provide glimpses of the target speech sound for those with normal hearing, and therefore, they provide clues for understanding the speech. Replicating this ability in machine algorithms for hearing aids is a challenge for hearing aid designers.
The first objective in designing hearing aids is, of course, to make sounds audible for hearing-aid users. But after meeting that requirement, there are a great deal of additional features that can be added, including:
Kressner notes that these approaches both encounter the challenge of distinguishing between sound signals and finding the one the listener wants to hear. For instance, you may want to listen to a friend talking in front of you or someone on the other side of the room who has just called your name.
A hearing aid. Image by Udo Schröter — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
How will the hearing aid device know which signal the user wants to listen to? The COCOHA project thinks brain signals (EEG signals) are the answer. This solution still has a lot of work ahead of it, though, including more research into decoding cognitive attention and then using this information to adjust the device and suppress unwanted signals.
Let’s move away from our imaginary cocktail party and instead take a walk through a dense forest. Here, on warm spring evenings, you may hear a chorus of Cope’s gray treefrogs. While each individual call is similar, fitter males give off faster and longer calls. The females listen for these calls, tuning out extra noises and tuning in to the calls of interest. Research into how these frogs achieve this feat and the difference between their ears and human ears could assist in improving the design of both hearing aids and speech recognition systems.
Finding inspiration for improving hearing aid designs in nature; a photo of a Cope’s gray treefrog. Image by Fredlyfish4 — Own work. Licensed under CC BY-SA 4.0, via Wikimedia Commons.
So far, a lot of research into designing hearing aids that account for the cocktail party problem “has been acquired via very controlled, yet unrealistic laboratory experiments,” Kressner notes. This isn’t ideal, because there is “a disconnect between what we see in the laboratory and what we see in the real world.” To move forward and close this gap, Kressner suggests that it could be possible to use, for example, numerical modeling or more realistic psychoacoustic reproduction techniques to better understand what is happening in the real world.
Finding inspiration in simulation; a probe tube microphone, which can be used in association with hearing aids, simulated with the COMSOL Multiphysics® software.
Say you just ordered a new loudspeaker. You probably have expectations about the product: It should survive the trip home; withstand falls; and, above all, it should work. As Richard Little said in his keynote talk: “Our customers expect that things just kind of work when [they come] out of the box and they don’t really have to worry about it, because that’s what consumer products are normally expected to do.”
Upholding these performance conditions is the job of Sonos engineers working to create powered wireless loudspeakers. To do so, they have to ensure the performance and durability of the many competing components in a loudspeaker. Little’s team focuses on just one of these components: audio transducers, which convert input electrical signals into sound. In his talk, Little discussed maximizing the durability of transducers via a predictive design process by accounting for:
Little and his group use simulation to effectively analyze the durability of their transducer designs. This enables them to create virtual prototypes, improve the accuracy of their physical prototypes, and reduce time to market.
First, Little discussed a transducer component involving nonmoving parts that need to withstand handling-related stress: the basket. The basket of a transducer is its weakest part. As such, Little’s team works to improve transducer baskets by studying their materials and geometry. Little spoke about finding a type and grade of steel that can prevent a basket from deforming, while still minimizing material costs. This is accomplished by evaluating the basket’s structural integrity with time-dependent mechanical simulations.
From the video: Simulation results for the transducer basket with <130 MPa stress.
The results Little shared indicate that 130 MPa is the targeted yield stress. This is seen as the lowest acceptable yield strength when choosing a grade of steel for the design. Of course, there are other options for improving the design’s robustness, including using thicker steel or a different plastic material for the basket. However, these design choices have implications with regard to cost, acoustic performance, and manufacturing requirements.
Switching gears, Little discussed a moving component example involving a flat speaker that is typically placed beneath a television. Due to its design, the speaker’s woofer is shallow and the diaphragm is mostly flat. The voice coil of the woofer is subjected to a great deal of stress where it is attached to the lower diaphragm surface, because the flat diaphragm offers no geometric reinforcement.
Simulation confirms that high stresses exist in this specific location, with some areas experiencing stress high enough to eventually cause fatigue and failure. The Sonos team addressed this challenge by reinforcing the area of highest stress with a small glued-on secondary ring. This design modification relieves the stress concentration while having a negligible impact on cost and acoustic performance.
From the video: The woofer diaphragm design was modified by adding a second ring, reducing stress.
With simulation, Richard Little and the Sonos team managed to accurately analyze stress on audio transducers and improve their designs. “This is a great way of investigating the durability of your product, as opposed to just designing for performance,” Little said. “It’s something that has been very important for us. We want our products to last 10 years out there in the field under normal usage.”
Want to learn more about Sonos’ acoustic simulations and loudspeaker designs? Watch the video at the top of this post.
I’m going to assume that the engineers, physicists, scientists, and researchers who read the COMSOL Blog don’t hold much stock in the paranormal. Even so, hearing a rattling window in the middle of the night or a whispering noise in an empty house is enough to frighten even a seasoned analytical mind.
When a scientist debunks a suspected poltergeist (a supernatural entity that causes physical disturbances), it is known as a false poltergeist. Researchers in this line of work often refer to Occam’s razor to explain these occurrences, as it states that the simplest explanation for something is likely the most valid. For instance, the Roswell UFO incident in 1947 can be explained most simply by a weather balloon that fell out of the sky, not a flying saucer flown by little gray aliens.
Weather balloon or aliens: Which explanation do you think is the most valid? (Photo from my 2016 visit to the International UFO Museum and Research Center in Roswell, New Mexico.)
In the article “Things that Go Bump in the Night: The Physics of ‘False Poltergeists’” from a past issue of Sound & Vibration magazine, Roman Vinokur discusses common vibroacoustic phenomena that are mistaken for ghosts and supernatural entities. Let’s put on our ghost hunting/acoustician hats and take a look at some of the examples featured in the article.
If you ever wake in the middle of the night to a rumbling or groaning sound, think about Helmholtz resonance before burning sage or calling a medium. The most basic example of a Helmholtz resonator is a glass bottle with a narrow opening. When you hold the bottle up to your lips and blow across its opening, it makes a humming sound.
Helmholtz resonance in action (turn up your sound!)
A room with an open window or door can also act as a Helmholtz resonator. When turbulent airflow passes through an opening in the room, it excites Helmholtz resonance. The natural frequency of Helmholtz resonance depends on the room’s volume, thickness of the walls, and area of the opening. If this value falls within the range of infrasound (below 20 Hz), it can cause creepy sounds in the audible range — perhaps leading to a suspected poltergeist.
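The dependence on room volume, wall thickness, and opening area can be made concrete with the standard Helmholtz resonator formula, \( f = \frac{c}{2\pi}\sqrt{A/(VL_{\mathrm{eff}})} \), where the wall thickness plays the role of the neck length. The sketch below (not from the article; the room dimensions and the end-correction factor are illustrative assumptions) shows how an ordinary room with an open window can resonate well below 20 Hz:

```python
import math

def helmholtz_frequency(c, opening_area, volume, neck_length):
    """Helmholtz resonance frequency: f = (c / 2*pi) * sqrt(A / (V * L_eff)).

    An end correction lengthens the effective neck; a common approximation
    adds about 0.85 * opening radius at each open end (assumed here).
    """
    radius = math.sqrt(opening_area / math.pi)
    l_eff = neck_length + 2 * 0.85 * radius  # approximate end correction
    return (c / (2 * math.pi)) * math.sqrt(opening_area / (volume * l_eff))

# Hypothetical room: 40 m^3 volume, 0.2 m^2 window opening, 0.3 m thick wall
f = helmholtz_frequency(c=343.0, opening_area=0.2, volume=40.0, neck_length=0.3)
print(f"Resonance frequency: {f:.1f} Hz")  # a few Hz -- infrasound
```

With these (assumed) dimensions, the resonance lands at only a few hertz, squarely in the infrasound range the article describes.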
Infrasound can even vibrate our internal organs. This explains why people who recount paranormal experiences often describe feelings of nausea, anxiety, and, most commonly, coldness.
Helmholtz resonance can be excited by sound waves propagating from an internal or external noise source. For example, thunder can reverberate in small rooms, which can be perceived as something more malicious than weather. The Sound & Vibration article mentions a building that was rumored to be haunted. In actuality, the only thing haunting the building was revenge. The workers who built it were scammed by the owner who hired them. To get revenge, the workers embedded empty glass bottles in the building's roof. The bottles acted as Helmholtz resonators, and wind passing over their openings at night caused tenants to hear roaring sounds at a frequency of 100 Hz.
Besides causing scary sounds, Helmholtz resonators also reduce noise in a wide range of applications. For instance, resonators are used in car exhaust systems because they can attenuate a specific and narrow frequency band. When a mean flow enters a typical exhaust system, a Helmholtz resonator attenuates the sound that is generated (similar to our bottle example above, but with the opposite effect).
An animation showing the pressure distribution for a Helmholtz resonator under certain operating conditions. Automotive designers often turn to acoustics modeling and analysis to evaluate how the presence of flow affects the Helmholtz resonator’s performance.
Learn about modeling aeroacoustics applications with the COMSOL Multiphysics® software in a previous blog post by my colleague Mads Jensen.
Watch any film about a haunted house (if you're not sure where to start, I can recommend a few!) and there is a scene with creaking floorboards, rattling windows, doors that open and shut on their own, or some combination of these phenomena. In the movies, a ghost is to blame, but the actual cause of such movement and noise isn't as insidious: It's often mechanical resonance.
When the frequency spectrum of the source of a vibration lies in the infrasound range, it is inaudible (or barely audible) to the human ear. However, the movement caused by the vibration source is easy to hear. Basically, you can sometimes hear the effect of a vibration, but not the cause. This discrepancy is where ghost stories are born.
These roommates have very different explanations for what’s causing the rattling noises between the first and second floors.
Going back to the example of a multistory building, rooms often contain equipment that moves or vibrates. Objects ranging in size from vacuum cleaners to air conditioning units to treadmills can cause noise on another level of a building. If a person hears the noise produced by vibrations but is too far away to hear the cause, they could suspect that a paranormal entity is afoot.
The way skyscrapers are arranged in cities can sometimes form street canyons, also called urban canyons, which alter how sound propagates. Neglecting sound absorption in the air and at solid surfaces, a sound wave propagating in a canyon does not follow the usual distance law valid for open spaces (a spherical wavefront, with 6 dB of attenuation for each doubling of the distance). In a canyon, the pressure amplitude instead decreases inversely with the square root of the distance from the noise source, so for every doubling of the distance traveled, the level drops by only 3 dB (a cylindrical wavefront). Thus, sound can propagate over longer distances in canyons with less attenuation than in open environments.
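The two distance laws can be compared directly: for a spherical wavefront the pressure falls as \(1/r\) (20 log scaling), while for a cylindrical wavefront it falls as \(1/\sqrt{r}\) (10 log scaling). A minimal sketch of this comparison, with an assumed 1 m reference distance:

```python
import math

def level_drop_db(r, r_ref=1.0, geometry="spherical"):
    """Level drop (dB) relative to r_ref for ideal, lossless spreading.

    Spherical wavefront (open space): p ~ 1/r       -> 6 dB per doubling.
    Cylindrical wavefront (canyon):   p ~ 1/sqrt(r) -> 3 dB per doubling.
    """
    if geometry == "spherical":
        return 20 * math.log10(r / r_ref)
    return 10 * math.log10(r / r_ref)

for r in (2, 4, 8, 16):
    print(f"{r:>2} m  spherical: {level_drop_db(r, geometry='spherical'):5.1f} dB"
          f"  cylindrical: {level_drop_db(r, geometry='cylindrical'):5.1f} dB")
```

Each doubling of distance adds about 6 dB of drop in the open-space case but only about 3 dB in the canyon case, which is why conversations carry so much farther between facing walls.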
Let’s say our multistory building has a wall canyon. (You can picture a wall canyon as the cutout center of a U-shaped building, sometimes called a courtyard building.) If a conversation is happening on a lower level of the building — in front of open windows, of course — the canyon effect causes the sound to propagate to a higher floor. The person on the upper level hears the conversation as if it is happening close to them, but doesn’t see anyone talking. Therefore, the wall canyon causes the perceived effect of the hushed whispers of a ghost.
Due to the canyon effect, the conversation in front of an open window loses little of its original level by the time it reaches the open window a floor above.
Interestingly, a temperature inversion can also produce something similar to the canyon effect. At night, the ground cools faster than the air above it, so the speed of sound increases with height. This refracts sound back down toward the ground, where it reflects and travels onward, letting it propagate over long distances through repeated downward refraction and ground reflections. What could possibly be scary about this effect, you ask? Perhaps hearing an owl hoot when there are no owls or trees around for miles…
This acoustic effect can be studied using ray acoustics and the propagation in graded media functionality. It is commonly studied in the field of underwater acoustics, where waves propagate in underwater sound channels generated by temperature or salinity gradients in the water column.
Owls are often seen as ominous, but aren’t they cute?
Let’s go back to Occam’s razor: The simplest explanation is usually the truth. As we’ve discussed, supernatural and paranormal experiences can be explained simply via acoustics and vibrations. But maybe the explanation is even simpler.
Say you’re alone in the house and hear footsteps on the floor above you. Is it a ghost? Alien? Infrasound from the layout of the room? It could just be a roommate or family member who had a change in their schedule and happens to be walking around upstairs when you don’t expect them to. What about noticing a quiet pitter patter while taking a walk outside? It’s probably not a ghost. It could be sound carried farther than usual by the canyon effect. Or, it’s simply a cat out exploring the neighborhood — although if it’s a black cat, I’d still take precautions.
COMSOL’s main office is located north of Boston in Massachusetts. In the southeastern corner of the state, there is an area known to paranormal enthusiasts as the Bridgewater Triangle (a reference to the area’s epicenter, Bridgewater, MA; and the Bermuda Triangle). The area is a hotbed of reported ghosts and UFO sightings as well as other bizarre forms of paranormal activity, like the pukwudgies, tiny creatures who supposedly live in the woods and play tricks on hikers.
Although I used to be wary of venturing to the Bridgewater Triangle and bumping into a ghost, learning about how vibroacoustic effects can cause seemingly supernatural noises has me ready to explore the area.