My parents love each other to death, but their habits can sometimes clash. My mom enjoys watching late-night television talk shows, while my dad prefers a good night’s sleep whenever he gets the chance. As they will eventually need to downsize, I decided to help them plan a home where they could stay friends.

From past experience, I knew there was no sense in trying to optimize the location of every plant, rug, or bookcase. My dad is constantly moving furniture around to, as he says, “feel the space.” My mom, meanwhile, tends to pull the couch closer and closer to the television rather than admitting that she needs glasses.

In short, they are bound to mess around with the input data and thereby remove digit after digit from the precision of any *a priori* sound level estimate. Luckily, accuracy was not a primary concern. I just needed to establish that my dad would get his beauty sleep.

The obvious start for a “quick and dirty” model of the acoustics in an apartment is the *Acoustic Diffusion Equation* interface. This interface is very easy to use and, in most situations, it is a lot faster than the more accurate *Pressure Acoustics* or *Ray Acoustics* interfaces.

For my simulation, I created a simple drawing of the relevant rooms and included the larger pieces of furniture. With the drawing complete, I set out to create my first acoustic diffusion model. This was a breeze — two power sources representing the stereo speakers connected to the television, absorption coefficients assigned to the walls and the furniture, and…that was it.

Approximate absorption coefficients for common materials are easy to find online. If you want to be more thorough, you can use different values in different frequency bands, or even specify them as arbitrary functions of the frequency. I selected a constant low value for the walls (including the floor and the ceiling) and a higher one for the soft parts of the furniture. To compensate for the lack of carpets and the relatively sparse decoration, I then nudged the wall coefficient slightly in the upward direction. If you decide on a similar time-saving measure, I suggest that you be open about it. My parents understand that acoustic diffusion is not an exact science, and they appreciate my honesty.
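If you want to do the bookkeeping explicitly, an area-weighted average is the natural way to combine the coefficients. A minimal sketch, where the surfaces, areas, and single-band values are rough assumptions rather than measured data:

```python
# Area-weighted average absorption coefficient. The surfaces, areas, and
# coefficient values below are rough assumptions, not measured data.
absorption = {                # surface: (area [m^2], alpha at 1 kHz)
    "walls":   (62.0, 0.05),
    "floor":   (20.0, 0.07),
    "ceiling": (20.0, 0.05),
    "sofa":    (4.0,  0.45),
}

total_area = sum(area for area, _ in absorption.values())
alpha_avg = sum(area * alpha for area, alpha in absorption.values()) / total_area
print(f"average absorption coefficient: {alpha_avg:.3f}")
```

Nudging the wall value upward, as I did, simply means editing one entry rather than redoing the whole estimate.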

*Distribution of the sound pressure level (dB) without a door between the rooms. The red dots indicate my mom’s viewing position and my dad’s head while he is trying to sleep.*

My first solution shows a sound pressure level decreasing by a quite modest 11 dB between the living room couch and the bedroom. Luckily, two important elements that would increase the difference were still missing.

The first element is a door between the rooms. If your door manufacturer cites a transmission loss in dB, make sure to check whether it concerns only transmission through the door itself, or if it was measured with the door in its frame. This makes a difference because a significant amount of sound may sneak through the space between the door and the floor unless you install a fitting. If you have access to a drawing of the door and know the material that was used to make it, you can, of course, run an acoustic-structure interaction analysis to get a second opinion. It is trivial to include a door with a specified transmission loss in an acoustic diffusion simulation.
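The effect of that gap can be estimated with the standard composite transmission loss formula, in which the power transmission coefficients of the leaf and the gap are area-weighted. The door dimensions, the 30 dB rating, and the gap height below are all assumptions for the sake of illustration:

```python
import math

def tl_to_tau(tl_db):
    """Transmission loss (dB) -> power transmission coefficient."""
    return 10.0 ** (-tl_db / 10.0)

# Assumed numbers: a 0.8 x 2.0 m door leaf rated at 30 dB, with a
# 10 mm air gap underneath (the gap transmits essentially everything).
S_door, S_gap = 0.8 * 2.0, 0.8 * 0.01
tau = (tl_to_tau(30.0) * S_door + 1.0 * S_gap) / (S_door + S_gap)
tl_effective = -10.0 * math.log10(tau)
print(f"effective transmission loss: {tl_effective:.1f} dB")
```

Even this small gap drags the effective loss well below the rated 30 dB, which is why a fitting matters.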

The second element is the direct sound. The acoustic diffusion equation only deals with the part of the sound that has already struck the walls or the furniture and has become diffuse. With my mom sitting directly in front of the television, there is also a significant direct sound reaching her. By approximating the sound sources as points and neglecting shadowing from the table, it is quite simple to add the direct sound as an analytical expression in terms of the emitted sound power and the local coordinates.
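As a rough illustration of such an analytical expression, here is a free-field point-source estimate of the direct contribution. The radiated power, the distance, and the conversion from energy density to dB are all assumptions made for this sketch:

```python
import math

c0, rho0 = 343.0, 1.2          # speed of sound [m/s], air density [kg/m^3]
P_source = 0.1                 # assumed radiated sound power [W]

def direct_energy_density(r):
    """Free-field energy density of a point source at distance r [m]."""
    # Intensity I = P / (4 pi r^2); energy density w = I / c.
    return P_source / (4.0 * math.pi * r**2 * c0)

def spl_from_w(w, p_ref=20e-6):
    """SPL from energy density via the plane-wave relation p^2 = w*rho*c^2."""
    p_sq = w * rho0 * c0**2
    return 10.0 * math.log10(p_sq / p_ref**2)

# Direct contribution 2 m from a speaker; a diffuse w from the model
# could simply be added to the direct w before converting to dB.
print(f"direct SPL at 2 m: {spl_from_w(direct_energy_density(2.0)):.1f} dB")
```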

In my second — and final — simulation, I added the direct sound hitting the couch and put up a door between the rooms. The total loss between the couch and the bed was now a much more acceptable 23 dB. I provided my parents with a nice printed report and gave them the thumbs up to move into the home.

*Sound pressure level distribution (dB) with a door added and an approximation of the direct sound included in the living room.*

Diffusion is often discussed as a description of the motion of particles in a gas. The particles travel in straight lines except when, at random intervals, they bounce off the gas molecules. The diffusion coefficient is a function of the mean free path between two consecutive collisions.

*Mean free path between collisions in a gas (left) and for sound particles in a room (right).*

The acoustic diffusion equation deals with conceptual “sound particles” with a density proportional to the local sound energy. These particles do not bounce off the air molecules, but rather off the walls of the room. The mean free path \lambda and, with it, the diffusion coefficient D relate to the proportions of the room. It holds that \lambda = 4V/S, where V is the volume of the room and S is the total surface area of the room’s walls, floor, and ceiling. In turn, D = \lambda c/3, where c is the speed of sound.
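To get a concrete feel for these relations, here is a quick calculation for a rectangular room of assumed dimensions:

```python
# Mean free path and diffusion coefficient for a rectangular room.
# The room dimensions are assumptions for illustration.
L, W, H = 5.0, 4.0, 2.5                # room proportions [m]
c = 343.0                              # speed of sound [m/s]

V = L * W * H                          # volume
S = 2 * (L * W + L * H + W * H)        # total bounding surface area
lam = 4.0 * V / S                      # mean free path, lambda = 4V/S
D = lam * c / 3.0                      # diffusion coefficient, D = lambda*c/3

print(f"lambda = {lam:.2f} m, D = {D:.1f} m^2/s")
```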

The implementation of the acoustic diffusion equation in COMSOL Multiphysics is

\frac{\partial{w}}{\partial{t}}+\nabla \cdot (-D_t \nabla w) + c m_a w = q(\textbf{x},t)

The equation is solved for the acoustic energy density w, from which you can derive the sound pressure level and other important measurables. If you drop the time derivative, you get the stationary form. The volume absorption coefficient m_a accounts for the air dissipation, which is often negligible but sometimes important in very large spaces. D_t = D is the diffusion coefficient and q is an optional volumetric sound source. With the alternative formulation

\frac{\partial{w}}{\partial{t}}+\nabla \cdot (-D_t \nabla w) + c (m_a + \frac { \alpha_f } {\lambda_f}) w = q(\textbf{x},t), ~ ~ D_t = \frac {D_f D}{D_f+D}

you can also include an averaged description of the furnishing. Here, \alpha_f is the average absorption coefficient of the furniture (the fittings). The diffusion coefficient D_f and the mean free path \lambda_f derive from the number density and the average cross section of the furniture.
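A back-of-the-envelope version of this combination, using the common estimate \lambda_f = 1/(n_f q_f) for the mean free path between fittings (the furniture number density and cross section below are made up):

```python
# Effective diffusion coefficient with fittings, D_t = Df*D/(Df + D).
# Furniture statistics (number density n_f, mean cross section q_f) are
# assumptions for illustration.
c = 343.0             # speed of sound [m/s]
lam = 2.35            # empty-room mean free path [m] (assumed)
D = lam * c / 3.0

n_f, q_f = 0.1, 0.8   # fittings per m^3 and average cross section [m^2]
lam_f = 1.0 / (n_f * q_f)   # mean free path between fittings
D_f = lam_f * c / 3.0

D_t = D_f * D / (D_f + D)   # harmonic-mean-like combination
print(f"D = {D:.0f}, D_f = {D_f:.0f}, D_t = {D_t:.0f} m^2/s")
```

As expected, the fittings always reduce the effective diffusion coefficient below the empty-room value.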

Say, for example, that my parents had wanted to invest in a furniture store. In that case, I would have used this formulation rather than draw each individual item.

The boundary conditions include various ways of specifying the local absorption coefficient and applying sound sources. Point sources are also available.

Like ray acoustics, the acoustic diffusion equation does not account for low-frequency behaviors, such as standing waves or diffraction around corners. These are chiefly important below the *Schroeder frequency*, which you can learn more about in the blog post “Modeling Room Acoustics with COMSOL Multiphysics”. In my parents’ new living room and bedroom, the Schroeder frequencies are 167 Hz and 183 Hz, respectively.
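If you want to estimate the Schroeder frequency for your own rooms, the standard estimate f_S = 2000 \sqrt{T_{60}/V} combined with Sabine’s reverberation formula gets you most of the way. The volume, surface area, and absorption below are assumptions, not my parents’ actual rooms:

```python
import math

# Schroeder frequency estimate, f_S = 2000*sqrt(T60/V), with the
# reverberation time from Sabine's formula. All numbers are assumptions.
V = 50.0                        # room volume [m^3]
S = 85.0                        # total surface area [m^2]
alpha = 0.15                    # average absorption coefficient
T60 = 0.161 * V / (alpha * S)   # Sabine reverberation time [s]
f_schroeder = 2000.0 * math.sqrt(T60 / V)
print(f"T60 = {T60:.2f} s, Schroeder frequency = {f_schroeder:.0f} Hz")
```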

For events to turn into statistics, you need to monitor your system for a while. Compared to ray acoustics, the main limitation of the acoustic diffusion equation is that it does not include the early sound. This means that it will systematically underestimate the sound pressure level in the vicinity of the sound sources, which is exactly the case for my mom’s viewing position right in front of the television. You can often at least partially compensate for this limitation, as I did, by calculating the direct sound analytically and adding it to the diffuse solution. It can, however, become difficult or impossible to do so if there are obstacles near the sources that reflect or absorb the sound.

While it can be argued that acoustic diffusion is the least accurate of the three acoustics analysis methods available in COMSOL Multiphysics, acoustic diffusion is easier to set up and is often orders of magnitude faster to solve than other methods. The solution time required to produce the plots presented here was approximately 2.5 seconds on a regular desktop computer. For a ray tracing model with a high number of rays, obtaining good statistics would take at the very least a few minutes — and possibly hours — to solve. Pressure acoustics is the only game in town for the low-frequency, resonance-dominated range. But, for frequencies much greater than the Schroeder frequency, this approach would be out of the question due to the quickly increasing solution time and memory usage.

At the end of the day, if your parents ask you for an opinion on the sound environment in their living quarters, I’d wholeheartedly recommend that you run an acoustic diffusion simulation for them. Alternatively, if you are in the early stages of designing a concert hall or an office space, acoustic diffusion can still be a great tool for obtaining an initial assessment of the high-frequency sound distribution. You can then add ray acoustics to predict the early sound and get a more accurate result, as well as pressure acoustics to investigate the low-frequency behavior.

The simulation referenced in this blog post is available for download here. To learn more about room acoustics, I also encourage you to download the One-Family House Acoustics tutorial from our Application Gallery.

In an earlier blog post, we considered the computation of acoustic radiation force using a perturbation approach. This method has the advantage of being both robust and fast; however, it relies heavily on the theoretical evaluation of correct perturbation terms. The idea behind the method presented here is to solve the problem by deducing the radiation force from the solution of the full nonlinear set of Navier-Stokes equations, interacting with a solid, elastic microparticle.

The problem that we want to solve involves the motion of a fluid due to the acoustic wave, which in turn exerts force on a solid particle. The particle responds to the applied force by deformation and net motion and also applies reaction forces on the fluid. This means that in addition to solving the equations describing the fluid dynamics, you would also need to account for the deformation of the particle and the resulting deformation of the space occupied by the fluid. The pre-built *Fluid-Structure Interaction* physics interface in COMSOL Multiphysics® software allows you to solve this problem by coupling fluid flow, structural stress analysis, and mesh deformation.

The acoustic radiation force is a nonlinear effect, where the nonlinearity is inherent to the flow and stems from the convective term (\mathbf u \cdot \nabla )\mathbf u in the Navier-Stokes equation rather than from the material nonlinearity. To support acoustic waves, the fluid has to be compressible. The fluid compressibility can be introduced by modifying the constitutive relation of the fluid. We assume a linear elastic fluid where p = c_0^2(\rho-\rho_0) and, for water, we put c_0 = 1500 m/s and \rho_0 = 1000 kg/m^{3}. Because we initially want to compare the method with classical, analytical models that neglect the effect of viscosity, we assume an arbitrary, small viscosity value.

The acoustic radiation force is much higher in the standing wave fields, and most practical applications utilizing this effect involve standing waves. Let us examine this case.

Because the problem is nonlinear, it must be solved in the time domain. Since the solution can be quite time-consuming, we solve it in a 2D axisymmetric geometry. To create a standing wave in a time-dependent solution, we consider a resonant box two wavelengths high and initiate the wave with the corresponding initial conditions. In such a box, the standing wave solution is p(r,z,t) = p_0 cos(k_0 z) cos(\omega t), so at time t = 0 the initial conditions should read p(r,z,t=0) = p_0 cos(k_0 z) and \mathbf u(r,z,t=0) = 0. This is illustrated below for a simulation box that is 3 mm high with a wavelength of \lambda = 1.5 mm (the corresponding frequency is 1 MHz).
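The initial conditions can be sketched numerically as a 1D profile along z, matching the expressions above:

```python
import numpy as np

# 1D profile of the initial conditions along z:
# p(z, 0) = p0*cos(k0*z) and u(z, 0) = 0 in a box two wavelengths high.
c0, f0, p0 = 1500.0, 1.0e6, 1.0e5       # water, 1 MHz, 0.1 MPa amplitude
lam = c0 / f0                           # wavelength, 1.5 mm
k0 = 2.0 * np.pi / lam

z = np.linspace(0.0, 2.0 * lam, 201)    # 3 mm high simulation box
p_init = p0 * np.cos(k0 * z)            # pressure antinodes at both ends
u_init = np.zeros_like(z)               # fluid initially at rest

print(f"lambda = {lam * 1e3:.2f} mm")
```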

As noted in the previous blog post, we expect the nonlinear force term to be a few orders of magnitude smaller than the linear force, such that the particle will appear to be bouncing up and down in the acoustic field. However, every time the particle moves, it will go a little further in one direction than the other. This is a result of the nonlinear force component that does not change its direction when the field changes its sign. Because the nonlinear force is so small, it is very hard to extract the value of the nonlinear force from the time-domain solution, unless a very simple trick is used.

The essence of this trick is to utilize the very fact that the nonlinear force does not change direction when the excitation changes sign. Let us denote x_\textrm p (t) as the average displacement of the particle obtained when p_0 is positive and x_\textrm m (t) when p_0 is negative. The nonlinear displacement component will be given by the combination of the two, x_\textrm{nl}(t) = 1/2 \left [ x_\textrm p (t) +x_\textrm m (t) \right]. This method was first used here. The solution corresponding to the pressure amplitude of 0.1 MPa and a nylon particle of 100-μm radius is depicted in the plot below. You can see that x_\textrm p (t) and x_\textrm m (t) have opposite signs and otherwise appear identical; however, their sum is non-zero, albeit very small in comparison.
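The cancellation at the heart of the trick is easy to demonstrate with synthetic signals. The amplitudes below are invented purely for illustration: a large oscillatory part that flips sign with p_0, plus a tiny drift that does not.

```python
import numpy as np

# Synthetic demonstration of the sign-reversal trick: a linear displacement
# that flips sign with p0 and a small nonlinear drift that does not.
t = np.linspace(0.0, 10e-6, 1001)
omega = 2.0 * np.pi * 1.0e6

def displacement(sign):
    linear = sign * 1.0e-9 * np.sin(omega * t)   # flips with p0 -> -p0
    nonlinear = 0.5e-12 * t / t[-1]              # drift, independent of sign
    return linear + nonlinear

x_p, x_m = displacement(+1), displacement(-1)
x_nl = 0.5 * (x_p + x_m)                         # linear parts cancel

print(f"max |x_nl| = {x_nl.max():.2e} m")        # recovers the drift alone
```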

*Here, the particle is shown in motion.*

The method outlined above considers nonlinear acoustic phenomena. Therefore, the usual rules of acoustic modeling apply. They are:

- The mesh has to be sufficiently fine to resolve the shortest wavelength. Here, the basic wavelength is that of the externally imposed standing wave. Shorter wavelengths, however, are produced by the particle vibrating at its resonant frequency. These wavelengths will be captured by the model because it is solved in the time domain.
- A suitable time progression method when resolving wave phenomena in COMSOL Multiphysics is the generalized alpha method. To make sure that the solution is stable, a Courant–Friedrichs–Lewy (CFL) criterion has to be manually imposed by setting a manual step size with \delta t < 0.5\ h_\textrm{min} / c_0.
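For example, the manual step size bound works out as follows (the minimum mesh element size is an assumption):

```python
# CFL-type bound on the manual time step: dt < 0.5 * h_min / c0.
# The minimum mesh element size is an assumed value.
c0 = 1500.0          # speed of sound in water [m/s]
h_min = 10e-6        # smallest mesh element [m] (assumed)
dt_max = 0.5 * h_min / c0
print(f"use a fixed time step below {dt_max:.2e} s")
```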

The last step of the analysis is computing the force from the average nonlinear displacement given by the finite element model. We can observe the effect of the acoustic radiation force as a small offset of the displacement from the otherwise perfect oscillatory motion of the particle. Judging by the difference in the computed linear and nonlinear components of the displacements, the difference in force components is about three orders of magnitude. So, the effect of the acoustic radiation force is negligible during one acoustic cycle and, to evaluate it correctly, around five to ten acoustic cycles must be computed.

With this data, we can export the results to Excel® spreadsheet software and find the average acceleration by fitting a second-degree polynomial to the displacement curve and computing the force as F = m \ddot x_\textrm{nl}. The graph below shows the results of such an analysis.
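Outside of Excel®, the same fit can be done with a least-squares polynomial. The displacement samples below are synthetic stand-ins for the exported data, with an assumed constant acceleration plus noise:

```python
import numpy as np

# Fit x_nl(t) ~ x0 + v0*t + (a/2)*t^2 and read off F = m*a.
# The samples are synthetic (assumed constant acceleration plus noise),
# standing in for the exported COMSOL data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10e-6, 50)
a_true = 2.0e3                                  # [m/s^2], assumed
x_nl = 0.5 * a_true * t**2 + 1e-15 * rng.standard_normal(t.size)

ts = t * 1e6                                    # microseconds, for conditioning
coeffs = np.polyfit(ts, x_nl, 2)                # [a/2 (scaled), v0, x0]
a_fit = 2.0 * coeffs[0] * 1e12                  # back to m/s^2

m = 4.0 / 3.0 * np.pi * (100e-6) ** 3 * 1140.0  # 100 um sphere, ~nylon density
F = m * a_fit
print(f"a = {a_fit:.3g} m/s^2, F = {F:.3g} N")
```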

Here, the maximum force, at a distance of \lambda/8 from the pressure node of the standing wave, is evaluated as a function of the particle’s radius. Good agreement is obtained in the small-particle limit k_0R_0 \ll 1, for which the analytical model is valid, and the deviation appears to increase as we depart from this limit.

We have outlined a direct method for evaluating nonlinear acoustic radiation force. The same approach can be applied to other nonlinear acoustic effects such as acoustic streaming. The advantage of this method is that it does not rely on any theoretical models (e.g., the perturbation method) to express the nonlinear terms. Note that because the fluid-structure interaction approach requires the model to be solved in the time domain, it is much slower than the perturbation method.

*Excel is a registered trademark of Microsoft Corporation in the United States and/or other countries.*

When evaluating loudspeaker performance, dips and/or peaks in the on-axis sound pressure level can be the result of an unfortunate distribution of phase components. To overcome this, we use a phase decomposition technique that splits the total surface vibration into three components depending on how they contribute to the sound pressure at an arbitrary observation point: adding to, subtracting from, or not contributing to the pressure.

A commonly used metric of loudspeaker performance is the sound pressure level as a function of frequency at an on-axis observation point. If at some frequency the overall displacement is decreased compared to adjacent frequencies, the resulting sound pressure level will have a corresponding dip at that frequency. However, the converse is not necessarily true. In other words, a dip in sound pressure may not always be a result of the overall displacement being low; rather, it may be that part of the vibrating surface contributes negatively to the resulting sound pressure level.

By applying phase decomposition to the vibration, the underlying nature of the vibration can be exposed. The decomposition splits the total displacement into three parts, each of which either adds to, subtracts from, or has no influence on the sound pressure at an observation point of choice. The technique is available in at least one software package, but there the intended input is measured data from a laser vibrometer rather than simulated vibrations.

Third-party software is available to enable the use of exported displacement data from COMSOL Multiphysics as input for the scanning software, but here I will show you how the entire analysis can instead be carried out using only COMSOL Multiphysics and the Acoustics Module.

As a starting point, we assume that the vibrating surface is flat and placed in an infinitely large baffle. Since only the normal displacement contributes to the sound generation, we align the surface normal \mathbf{n} with the z-axis and denote the normal displacement phasor by \mathbf{w}. This is in accordance with the COMSOL Multiphysics convention.

*The flat radiation surface area shown in gray is assumed to be placed in an infinite baffle.*

For this situation of a flat vibrating surface, the sound pressure can be calculated via the so-called *Rayleigh integral* (see any standard acoustics textbook; for a more advanced text on the subject, see, for example, *Fourier Acoustics* by E. G. Williams):

\pmb{p}(P) = \frac{-\omega^2\rho}{2\pi} \int_{S} \mathbf{w}(Q) \frac{e^{-ikR}}{R}dS

Here, \pmb{p}(P) is the pressure phasor at the observation point P, \omega is the angular frequency, \rho is the density of the fluid medium, \mathbf{w}(Q) is the displacement phasor at point Q on the vibrating surface S, k is the wave number, and R is the distance from a point Q on the radiating surface to the observation point P.
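A brute-force numerical evaluation of this integral is straightforward. The sketch below does it for a baffled circular piston at an on-axis point, where the integral reduces to a radial sum and a closed-form solution exists for comparison; an off-axis point would simply require a 2D surface grid. All parameters are illustrative assumptions:

```python
import numpy as np

# Rayleigh integral for a baffled circular piston, evaluated on axis and
# checked against the closed-form on-axis solution. Radius, displacement,
# and observation distance are assumed values.
rho, c, f = 1.2, 343.0, 1000.0
omega = 2.0 * np.pi * f
k = omega / c
a, w0, z = 0.05, 1e-6, 0.5              # piston radius [m], displacement, distance

# Midpoint quadrature over annular rings of the piston
n = 200
r = (np.arange(n) + 0.5) * (a / n)      # ring mid-radii
dS = 2.0 * np.pi * r * (a / n)          # ring areas
R = np.sqrt(r**2 + z**2)                # distance surface point -> axis point

p_num = -omega**2 * rho / (2.0 * np.pi) * np.sum(w0 * np.exp(-1j * k * R) / R * dS)

# Closed-form on-axis pressure of a baffled piston with displacement w0
p_exact = 1j * rho * c * omega * w0 * (np.exp(-1j * k * z)
                                       - np.exp(-1j * k * np.sqrt(a**2 + z**2)))
print(abs(p_num), abs(p_exact))
```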

If air loading is low, that is, the forces exerted on the vibrating surface by the fluid are small enough that their effect on the surface vibration is negligible, there is no need to include an acoustic domain. In that case, the sound pressure can be determined from a purely structural simulation.

Assuming that the geometry in question is flat enough for the Rayleigh integral approach to be acceptable, the surface vibration can be split into three components:

- In-phase component, which contributes positively to the sound pressure
- Anti-phase component, which contributes negatively to the sound pressure
- Quadrature, or out-of-phase, component, which neither adds to nor subtracts from the sound pressure in the chosen observation point

The phase components can be determined by looking at a phasor diagram relating the phase of the total displacement at a point Q on the vibrating surface to the phase of the pressure at the observation point P.

*The phasor relationship between the displacement in a point Q and the pressure in observation point P.*

Since the pressure here is found via the Rayleigh integral, the sign difference between the displacement and the pressure is first accounted for by a phase shift of π radians, indicated by the red arrow. Now, consider what an in-phase displacement component means: The phase of the in-phase displacement component should lead the phase of the pressure by a phase shift exactly matching the distance traveled by the sound wave from the local point on the surface to the observation point. This phase difference of kR is indicated by the blue arrow.

The phase of the displacement \arg(\mathbf{w}(Q)) is shown for a situation where it is not aligned with the in-phase axis. By projecting \mathbf{w}(Q) onto the in-phase axis, we can determine its in-phase component. We can find the out-of-phase and anti-phase projections in a similar way; the former being in quadrature with the in-phase component and the latter being π radians offset to the in-phase component. By visual inspection, we can observe that there will be a nonzero out-of-phase projection, which is larger than the in-phase projection, but no anti-phase component for the surface point and observation point in question.
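The projections can be written compactly in phasor form. The sketch below decomposes a single (assumed) displacement phasor for one pair of points Q and P; the sign convention follows the phasor diagram described above, but you should check it against your own conventions:

```python
import numpy as np

# Decompose a displacement phasor w(Q) into in-phase, anti-phase, and
# quadrature parts relative to the pressure at observation point P.
# The in-phase axis combines the pi sign flip (red arrow) with the
# propagation phase k*R (blue arrow). w, k, and R are assumed values.
def decompose(w, k, R):
    e_in = np.exp(1j * (np.pi + k * R))    # unit phasor along the in-phase axis
    proj = w * np.conj(e_in)               # rotate w into that frame
    w_in = max(proj.real, 0.0) * e_in      # positive projection: in-phase
    w_anti = min(proj.real, 0.0) * e_in    # negative projection: anti-phase
    w_quad = 1j * proj.imag * e_in         # remainder: out-of-phase (quadrature)
    return w_in, w_anti, w_quad

w = 1e-6 * np.exp(1j * 0.3)                # assumed displacement phasor [m]
w_in, w_anti, w_quad = decompose(w, k=18.3, R=0.5)
print(abs(w_in), abs(w_anti), abs(w_quad))
```

By construction, the three parts sum back to the total phasor, and at most one of the in-phase and anti-phase parts is nonzero at a given point.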

The analysis is carried out over the entire surface in order to obtain the three vibration components. Each component can subsequently be fed back into the Rayleigh integral to calculate its respective sound pressure component. By definition, the out-of-phase surface vibration has no corresponding sound pressure contribution; this displacement simply cancels out acoustically at the observation point in question.

Let’s review a couple of simulation examples: one of a vibrating disk and another of a loudspeaker.

First, we will illustrate the phase decomposition technique with a vibrating disk as a test case, where the individual phase components can be found by visual inspection.

We’ve chosen an on-axis observation point several radii away from the disk. Let’s consider the total plate vibration shown in one of the four figures below. A larger part of the vibration has one particular phase, whereas a smaller part of the plate has an opposite phase. The larger part of the displacement must be in-phase for an on-axis observation point. This can be realized by considering the “extreme” case that the entire vibration has only one phase. Such a vibration must contribute entirely positively to the sound pressure in an observation point on-axis and away from the plate surface.

We’ve applied the phase decomposition technique to the total displacement and found the expected in-phase component. Since the remaining displacement is in opposite phase of the in-phase component, it must then be an anti-phase component. The analysis in COMSOL Multiphysics confirms this.

Lastly, since the total displacement is made up entirely of an in-phase and an anti-phase component, the out-of-phase component must be zero. This is also what we find via the phase decomposition.

*Total*

*In-phase*

*Anti-phase*

*Out-of-phase*

*Displacement components of the vibrating disk for an on-axis observation point.*

Note that if we choose another observation point — for example, off-axis and/or very close to the plate — we will get different displacement components than those shown above for the same total displacement.

Next, we use the phase decomposition technique for a 3-inch driver placed in a baffle. We carried out a complete 2D axisymmetric vibroacoustic simulation for a wide frequency range. The electromagnetic system was included in a lumped fashion, so that an input voltage could be applied directly.

The surface displacement components are illustrated for a frequency of 4.5 kHz. The total displacement pattern seems fairly simple, but visual inspection cannot reveal the individual phase components.

*Total*

*In-phase*

*Anti-phase*

*Out-of-phase*

*Displacement components of the loudspeaker surface at 4.5 kHz for an on-axis observation point.*

Just as with the previous example, we chose an observation point on-axis and several radii away from the surface. The in-phase component is concentrated around the inner topology, or the cone, whereas there is no in-phase displacement at the outer part of the surface, the so-called *surround*. The anti-phase component is concentrated around the surround part of the surface.

This means that the surround is the part to investigate (material and/or topology) if the anti-phase displacement is found to be unacceptably large. The surround is also the sole contributor to the out-of-phase displacement at this particular frequency.

I should note that the phase-decomposed displacement components have no radial components, since the analysis assumes a flat vibrating surface. Therefore, they do not sum exactly to the total vibration for this case, since the total vibration has both axial and radial components. However, the analysis still provides insight into the vibration pattern and how the individual components affect the resulting sound pressure in the chosen observation point.

By feeding the displacement components back into the Rayleigh integral, we can find the individual pressure contributions.

*The sound pressure level components for the loudspeaker driver.*

We can see that at low frequencies, the total displacement is dominated by in-phase motion, but above approximately 4 kHz, the anti-phase component subtracts from the pressure. The insight provided by the phase decomposition technique can aid engineering decisions and, in some cases, design changes may be warranted.

The phase decomposition technique is, of course, not limited to loudspeaker analysis. Any vibrating structure that is fairly flat can be analyzed. In fact, it’s advantageous to apply the Rayleigh integral on its own in order to have an estimate of the radiated sound without having to include an acoustic domain, especially if the air loading on the vibrating surface is negligible. The phase decomposition can then be added as a further layer in the analysis.

Special thanks to Mads Herring Jensen, the technical product manager for the Acoustics Module at COMSOL A/S, for help with the implementation.

René Christensen has been working with vibroacoustics for about a decade for a handful of companies such as DELTA, Oticon A/S, and iCapture ApS. He holds a PhD in hearing aid acoustics with focus on viscothermal effects. René recently joined the R&D department at the Danish loudspeaker company Dynaudio A/S where his main responsibilities are development and optimization of drivers for the “Automotive” and “Home” lines, design of waveguides and cabinets, and conceptual work for future products.


Consider a drum head constructed by stretching a membrane over a stiff frame that encloses a flat 2D domain. The vibration of the membrane is described by the wave equation (the Helmholtz equation, in its time-harmonic form) with the Dirichlet boundary condition at the periphery of the domain, where the membrane is constrained by the stiff frame. In this case, there is a set of discrete solutions to the wave equation, called *normal modes* or *eigenmodes*, each of which vibrates at a characteristic frequency, called an *eigenfrequency*.

The lowest eigenfrequency defines the fundamental tone, which for instance could be concert pitch A (440 Hz). The set of higher eigenfrequencies, or *overtones* in musical terms, gives rise to the tone color or timbre of the vibrating membrane. Kac’s lecture drew our attention to the eigenfrequencies: Is it possible to construct two drum heads with different shapes that share a set of eigenfrequencies? The idea was that if the two drums have an identical set of eigenfrequencies (being *isospectral*), then they would have the same timbre and sound the same to the ear, even though their shapes are different.
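For an ideal circular membrane, the eigenfrequencies follow from the zeros of the Bessel functions, f_{mn} = j_{mn} c / (2\pi a). A quick calculation shows why the overtones are inharmonic; the radius below is an assumption, and the wave speed is tuned so that the fundamental lands at concert pitch A:

```python
import math

# Eigenfrequencies of an ideal circular membrane: f_mn = j_mn * c / (2*pi*a),
# where j_mn are zeros of the Bessel functions J_m. The radius is assumed and
# the wave speed is chosen to put the fundamental at 440 Hz.
j01, j11, j21 = 2.404826, 3.831706, 5.135622   # first zeros of J0, J1, J2
a = 0.15                                       # membrane radius [m] (assumed)
c = 440.0 * 2.0 * math.pi * a / j01            # wave speed giving f01 = 440 Hz

for name, j in [("f01", j01), ("f11", j11), ("f21", j21)]:
    f = j * c / (2.0 * math.pi * a)
    print(f"{name} = {f:.1f} Hz")
```

Unlike a string, the overtones are not integer multiples of the fundamental, which is part of what gives a drum its characteristic timbre.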

Kac commented on the asymptotic behavior of the eigenfrequencies in the limit of very high frequencies and made connections to various branches of physics and mathematics to provide a ground for intuitive understanding. The uniqueness question (in 2D flat space) remained unsolved until over two decades later when Gordon, Webb, and Wolpert finally constructed two polygons with an identical set of eigenvalues (see “One cannot hear the shape of a drum” and “Isospectral plane domains and surfaces via Riemannian orbifolds“).

The eigenvalues of the two polygons can be computed numerically, which is shown in this Isospectral Drums model in our Model Gallery.

The image below shows the first three normal modes of the two polygons that share the same set of eigenfrequencies:

In their easy-to-read introductory article on the subject (“You can’t hear the shape of a drum”, *American Scientist*, vol. 84 (1996), pp. 46–55), Gordon and Webb commented that such isospectral drums with different shapes are expected to be the exception, not the rule. In other words, they expected that, in general, one *can* hear the shape of a drum, unless the shape is specially constructed to be isospectral with another shape, like the two polygons depicted above.

In the following discussion, we will take a closer look at such special shapes by considering various physical mechanisms involved in the sound production and detection. We will find that when we include relevant physical effects, we actually *can* tell two drums apart by the sound, even if they are specially constructed to share the same set of eigenfrequencies.

The first effect we will examine is the excitation of the vibrational modes in the membrane. Since the timbre is determined by the set of relative amplitudes of the normal modes, it is not enough to just have an identical set of eigenfrequencies for the two drums to sound the same. They also need to have the same relative amplitude for each eigenmode, which may not be trivial to achieve.

Let’s take, for example, the same two polygonal drums from above and hit them with a drum stick at a few arbitrary places, one at a time, as such:

Each location of striking is somewhere in the middle of the drum, where a child may instinctively choose to hit if given such a drum and a drum stick. We use COMSOL Multiphysics simulation software to calculate the frequency response of each of the locations and plot the results in the graphs below.

We first focus on just one drum, say, the one on the left. Here is a plot showing the left drum’s frequency response:

As we hinted at earlier, the drum sounds different depending on the location where it is struck by the drum stick. We see a different energy distribution among the first three eigenmodes, which results in a different timbre. This is, of course, a well-known fact to percussionists and is the result of the same principle that enables a single bell to ring in two distinct tones, as demonstrated by this ancient set of bells from over two thousand years ago.

Now we know we can’t even make one drum sound the same unless we have a perfect aim of the drum stick. Is there any hope that we can make the two different drums sound the same?

In the graph below, we’ve added the frequency response of the second drum (the dashed curves). As we examine the graph, it becomes evident that none of the dashed curves match the solid curves across all three eigenmodes. In other words, the two drums do sound different, even though they are isospectral and share the same set of eigenfrequencies.

Of course, we haven’t done an exhaustive search of all the possible combinations of striking locations. However, this simple example illustrates that it is not an easy job to make the drums sound the same, due to the different coupling strengths of energy from the drum stick to the various vibrational modes of the membrane.

The magic of mathematics never ceases to amaze us. Not long after the two isospectral polygons were published, Buser, Conway, Doyle, and Semmler constructed a pair of domains that are not only isospectral (sharing the same set of eigenfrequencies), but also *homophonic*: having a special point in the domain such that “corresponding normalized Dirichlet eigenfunctions take equal values at the distinguished points” (“Some planar isospectral domains“). In other words, if the special point of each drum is hit by a drum stick, then each corresponding pair of eigenmodes of the two drums will be excited with the same amplitude and the two drums will sound the same.

Shown below are the first few normal mode shapes computed numerically:

The special point of each domain is marked with a blue square in the schematic below:

In the following graph, we plot the computed frequency response of the two drums to a narrow Gaussian area load centered on each of the special points:

Isn’t it amazing how well the two frequency response curves (solid blue curve and green circles) lie on top of each other? With such a perfectly matched vibrational energy spectrum, wouldn’t the two drums sound exactly the same? Let’s continue our journey to explore more physical effects and find out.

Our ears do not sense the vibration of the membrane directly. Rather, the sensing is mediated by the acoustic wave in the air. Let’s set up the two homophonic drums outdoors, where the sound is allowed to propagate away from the drums without significant reflection. In this case, we can easily compute the frequency spectrum of the sound wave using COMSOL Multiphysics to find out what we really hear with our ears.

Let’s take a look at the three vibrational modes with the highest energies at about 111, 146, and 184 Hz, as shown in the spectral graph above. For convenience, we will call them the first, second, and third mode, with the understanding that there are other, much less energetic modes in between that we neglect.

The polar graph below compares the computed sound pressure level (in dB) in the plane of each of the two drums, a few meters away from each drum.

We see that the sound pressure field produced by the first mode is more or less independent of direction (solid and dashed blue curves). This is not surprising, since the mode shape of each drum looks pretty much like a monopole source:

On the other hand, the directionality of the sound field from the second or the third mode of each of the drums is quite pronounced and also quite different between the two drums. For example, for the second mode, the sound field from Drum 1 looks like a dipole field (solid red curve), while the one from Drum 2 is more complex (dashed red curve). This observation again matches what we see in the mode shapes of the two drums:

What really determines the perceived timbre is the ratio of the amplitudes of the higher modes (the overtones) to the lowest mode (the fundamental tone). So, in the next graph, we plot the amplitude ratios of the second and the third modes to the first mode, at a sampling of directions:

The blue square points are from Drum 1 and the red round points from Drum 2. The graph can be viewed as a map of timbre — if two points on the graph are near each other, then they sound similar; on the other hand, if two points on the graph are far away from each other, then they have very distinct timbre. As qualitatively illustrated by the green dashed boundaries, each drum can produce a range of timbre that the other cannot, in some range of directions.
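To make the timbre-map idea concrete, here is a small sketch that measures how far apart two drums sit on such a map. The amplitude-ratio pairs below are hypothetical illustrative values, not results from the model:

```python
import math

# Hypothetical (overtone2/fundamental, overtone3/fundamental) amplitude ratios
# sampled at a few listening directions -- illustrative values only.
drum1 = [(0.8, 0.3), (1.1, 0.5), (0.6, 0.2)]
drum2 = [(0.4, 0.9), (0.5, 1.2), (0.3, 0.7)]

def timbre_distance(a, b):
    """Euclidean distance between two points on the 'timbre map'."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Smallest distance between any Drum 1 point and any Drum 2 point: if even the
# closest pair is far apart, the two ranges of timbre do not overlap.
closest = min(timbre_distance(a, b) for a in drum1 for b in drum2)
print(f"closest pair on the timbre map: {closest:.2f}")
```

With these made-up ratios, the closest pair is still well separated, which is the situation sketched by the green dashed boundaries in the graph.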

As long as a listener is allowed to move around each drum, perhaps blindfolded, he or she will hear distinct ranges of timbre that tell the two drums apart. Therefore, even though the two “homophonic” drums share the same energy spectrum in their vibration modes, the differences in mode shape and in the energy transfer to the sound field in the air mean that the acoustic energy spectrum in some range of directions can be quite different. This is what causes the two drums to sound different to our ears.

In the previous analysis, we ignored the reaction force exerted on the membrane by the air, the so-called *air loading effect*. It turns out that this effect is very significant for a real drum, since, after all, the entire area of the membrane participates in the pushing and pulling of the air around it.

We can simulate this effect using the *Acoustic-Structure Boundary* Multiphysics coupling feature of COMSOL Multiphysics. We find, for example, that the eigenfrequency of the second mode that we were discussing shifts from 146 Hz down to about 86 Hz. In addition, the magnitudes of the shifts differ between the two drums: the eigenfrequency of one drum shifted down to 85.6 Hz, while that of the other drum shifted to 86.8 Hz. This corresponds to a pitch difference of about 23 cents, which is very audible in a side-by-side comparison.
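The quoted pitch difference follows from the standard cents formula. As a quick check with the rounded eigenfrequencies above (the rounding nudges the result to roughly 24 cents, close to the figure quoted from the unrounded model values):

```python
import math

f_drum1 = 85.6  # Hz, air-loaded eigenfrequency of the mode for Drum 1 (from the text)
f_drum2 = 86.8  # Hz, the same mode for Drum 2

# 1200 cents per octave, so the interval in cents is 1200 * log2(f2 / f1).
cents = 1200.0 * math.log2(f_drum2 / f_drum1)
print(f"pitch difference: {cents:.1f} cents")
```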

Therefore, not only do the two drums differ in timbre (in some range of directions), they also differ in absolute pitch when we take the air loading effect into account.

The graph below shows the frequency response of the two drums around this mode. The difference in the resonant frequency is clearly seen, as well as the difference in the width of the resonance. There should be no doubt in our mind that with such different frequency responses, the two drums will produce easily distinguishable sounds.

It is a great achievement in mathematics to have constructed isospectral drums that share the same set of eigenfrequencies and homophonic drums that share the same power spectrum of vibrational modes when excited at a special point. However, these properties only hold in vacuum, where there is no sound. Once we put the drums to the test in air, they start to sound different due to the air loading effect and the directionality of the energy transfer from the membrane to the sound wave.

In his lecture, Kac told the early 20^{th}-century story of Lorentz calling for mathematicians’ attention to the eigenvalue problem involved in the theory of black body radiation and Weyl answering the call with the proof of the theorem of the asymptotic behavior of eigenvalues at very high frequencies.

Here, we could use the help of our mathematician friends again, even though the subject matter may not be as grand as black body radiation and quantum mechanics. Is it possible to construct homophonic drums with different shapes that sound the same when including directionality and air loading effects? It may be possible to pose this as an optimization problem to solve numerically for a solution with a finite set of audible frequencies.

However, the computation cost will be high and the result will be approximate. An elegant analytical solution similar to those shown in the papers mentioned above would be much nicer. I hope this will arouse the interest of mathematicians who are reading.


When sound is emitted inside a room, a listener will perceive the sound as a combination of direct sound from the source as well as sound reflected off the walls. At the walls, the sound is reflected, absorbed, and scattered.

Since all of these processes are frequency dependent, a poorly designed meeting room can, for example, be highly reverberant in a frequency band that is important for speech. The room could also have a strong modal behavior (standing waves) at certain critical frequencies that are easily excited. These are things you want to avoid and be able to predict when designing a room.

Architects and civil engineers want to control the sound field by placing absorbers, diffusers, and reflectors in appropriate locations. In concert halls, you want to maximize the listening experience where the audience is located. In office spaces, you want to avoid anything that can seem noisy and disturb the concentration of employees. In classrooms and lecture halls, you want to ensure clear perception of speech. The sound environment is important for various reasons, which is why there are national standards and regulations for the sound environment in many cases.

Refurbishing a badly designed room can be very expensive, so you do not want to rely only on measurements on scale models or measurements done after the fact. Modeling the room acoustic behavior beforehand is essential in order to optimize and perfect the design. Simulation models and measurements need to relate architectural aspects (geometry) to subjective observations using physical measures (metrics). This is done by calculating a wide range of room acoustic measures, such as the reverberation time, early decay time, clarity, and many other standardized parameters.

The modeling approach you want to adopt depends on the studied frequency (the wavelength compared to geometrical features of the room). In the Acoustics Module of the COMSOL suite of FEA software, we essentially offer three approaches packaged in three physics interfaces. The *Pressure Acoustics* interface can model the modal behavior in rooms. The *Ray Acoustics* interface and the *Acoustic Diffusion Equation* interface cover the high frequency limit or reverberant behavior (geometrical acoustics). I discuss the interfaces and their applicability in the sections below.


*Animation of the ray front positions as the rays are released inside a small concert hall. The color scale gives an impression of the ray intensity on a logarithmic scale.*

As mentioned above, room acoustics is typically divided into three categories, depending on the studied frequency. Or, more specifically, depending on the wavelength compared to the characteristic geometric features of the room in question.

In the low-frequency range, the room resonances dominate. This is known as the *modal region*. At the other end of the scale, in the high-frequency limit, the wavelength becomes smaller than the characteristic geometrical features of the room. Here, we deal with the reverberant region or the *geometrical acoustics limit*. Between the modal and the high-frequency limit, there is a so-called *transition zone*. Note that there is no clear-cut definition of this zone.

Classical room acoustics theory provides some tools that enable a back-of-the-envelope assessment of the behavior of a room. For a given room, the Schroeder frequency, f_\textrm{s}, predicts the limiting frequency between the modal behavior and the high-frequency reverberant behavior of the room.

The Schroeder frequency is given by:

(1)

f_\textrm{s} = 2000 (\textrm{m}/\textrm{s})^{3/2} \sqrt{\frac{T_{60}}{V}}

where V is the room volume and T_{60} is the reverberation time.

The equation is based on the criterion (suggested by Schroeder) that at the limit, three eigenfrequencies fall into one resonance half-width. The reverberation time (or decay time), T_{60}, is the time required for the sound pressure level (created by an impulse source) to decay 60 dB. A first simple approximate measure of the reverberation time is given by the well-established Sabine formula:

(2)

T_{60} = \frac{55.3 V}{c A}, \qquad A = \Sigma S_i \alpha_i

Here, c is the speed of sound and A is the total absorption, where S_i and \alpha_i are the surface area and absorption coefficient of the i^{th} surface, respectively.

This is possibly the best-known formula in room acoustics. The equation stems from a classical statistical room acoustics analysis assuming a pure diffuse sound field. In a diffuse sound field, the sound pressure level is uniform and the reflected sound dominates. This phenomenon is also known as a *reverberant sound field*. In such a field, the damping constant (related to the overall absorption) can be approximated and relates to the reverberation time.
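To get a feel for these two formulas, the sketch below evaluates the Sabine reverberation time and then the Schroeder frequency for a hypothetical small hall. The surface areas and absorption coefficients are assumed values, chosen only for illustration:

```python
import math

c = 343.0  # speed of sound in air, m/s

def sabine_t60(volume, surfaces):
    """Sabine reverberation time; surfaces is a list of (area m^2, absorption coeff)."""
    A = sum(S * alpha for S, alpha in surfaces)  # total absorption A = sum(S_i * alpha_i)
    return 55.3 * volume / (c * A)

def schroeder_frequency(t60, volume):
    """Limiting frequency between modal and reverberant behavior, Hz."""
    return 2000.0 * math.sqrt(t60 / volume)

# Illustrative small hall: 430 m^3, mostly reflective surfaces plus a more
# absorptive audience area (all surface data assumed, not from a model).
V = 430.0
surfaces = [(350.0, 0.1), (80.0, 0.3)]  # (area, alpha) pairs

T60 = sabine_t60(V, surfaces)
fs = schroeder_frequency(T60, V)
print(f"T60 = {T60:.2f} s, Schroeder frequency = {fs:.0f} Hz")
```

With these assumed numbers, the hall sits around a second of reverberation and a Schroeder frequency near 100 Hz, which is the typical order of magnitude for small halls discussed later in this post.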

The modal behavior of rooms and enclosed spaces is best analyzed by solving the Helmholtz equation or the scalar wave equation using the finite element method. In the reverberant or high-frequency limit, at frequencies above the Schroeder frequency, you may utilize two different approaches. Your choice depends on the assumptions that can be made and the desired level of detail.

The *Acoustic Diffusion Equation* interface may be used in the purely diffuse sound field limit, neglecting all direct sound. This is a fast method to assess reverberation times and sound pressure level distributions in systems of coupled rooms. The ray tracing capabilities of the *Ray Acoustics* interface provide a much more detailed picture including the direct sound and early reflections. With this interface, you also have the ability to reconstruct an impulse response.

Up to the Schroeder frequency, the modal behavior of rooms is important: standing waves dominate over the reverberant field. Inside a car, the transition may lie anywhere from several hundred hertz up to 1000 Hz. In a small office, it may be up to 200 Hz, while in large concert halls, the transition is typically below 50 Hz. In the small concert hall model shown below, the Schroeder frequency is 115 Hz (the reverberation time is about 1.3 s and the volume is 430 m^{3}). The modal behavior is important for subwoofer systems in cinemas, for instance.

The modal behavior as well as the room eigenfrequencies are best analyzed using the *Pressure Acoustics* interface. A frequency domain study can reproduce a transfer function for the bass system. You can also use it to analyze dead regions or find eigenfrequencies. A transient study is interesting when, for example, looking at bass build-up transients inside a car cabin.

Models of interest here are:

*Pressure distribution for the first eigenmode inside a small room. From the Eigenmodes of a Room model.*

If you want to compute the trajectories, phase, and intensity of acoustic rays, you should choose the *Ray Acoustics* interface. Ray acoustics is a good choice when working in the high-frequency limit, where the acoustic wavelength is smaller than the characteristic geometric features. The interface is not limited to modeling acoustics in closed spaces, like rooms and concert halls, but can also be used in outdoor environments. At exterior boundaries, you can assign various wall conditions, such as combinations of specular and diffuse reflections. The wall impedance and absorption may depend on the frequency, intensity, and direction of the incident rays.

Below are two figures from the Small Concert Hall model found in the Model Gallery for the Acoustics Module.

The figure to the left depicts the ray paths for a selected number of rays emitted from a source located on the small stage. The figure to the right depicts the energy response as measured in the center of the room. The dots represent the simulated ray response (5,000 rays are released) and the green and red curves represent decay curves based on simple Sabine-like estimates of the reverberation time T_{60}. The cyan curve is a so-called *Schroeder integration* of the energy response, yielding the energy-decay curve. All four agree well when the response is measured in the center of the room.

*Left: Ray path for a selected number of rays emitted from a source located on a small stage. Right: The energy impulse response compared with two simple decay measures and the energy decay curve.*
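The Schroeder integration mentioned above can be sketched in a few lines: the energy-decay curve is obtained by integrating the squared impulse response backward in time. The synthetic, exponentially decaying response below is an assumption standing in for measured or simulated data:

```python
import math
import random

sample_rate = 1000.0  # Hz
T60 = 1.3             # reverberation time of the synthetic response, s (assumed)
rate = 3 * math.log(10) / T60  # amplitude decay rate so energy drops 60 dB in T60

random.seed(0)
n = int(2 * sample_rate)  # two seconds of response
# Synthetic impulse response: exponentially decaying Gaussian noise.
h = [random.gauss(0.0, 1.0) * math.exp(-rate * i / sample_rate) for i in range(n)]

# Schroeder backward integration: EDC(t) = integral from t to infinity of h^2.
edc = []
acc = 0.0
for sample in reversed(h):
    acc += sample * sample
    edc.append(acc)
edc.reverse()

# Normalized energy-decay curve in dB.
edc_db = [10.0 * math.log10(e / edc[0]) for e in edc]

# Estimate T60 from the -5 dB to -25 dB range (T20, extrapolated to 60 dB).
t5 = next(i for i, db in enumerate(edc_db) if db <= -5.0) / sample_rate
t25 = next(i for i, db in enumerate(edc_db) if db <= -25.0) / sample_rate
t60_est = 3.0 * (t25 - t5)
print(f"estimated T60 = {t60_est:.2f} s")
```

The backward integration smooths out the noisy raw energy response, which is why the cyan decay curve in the figure looks so much cleaner than the dotted ray response.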

With the *Ray Acoustics* interface, the response can be measured at any point in the concert hall. The properties of absorbers and diffusers can be both frequency-dependent and angle-of-incidence dependent. Thus, the listening environment can be well described, analyzed, and optimized. The simple estimates are not accurate everywhere in a room and not for complex room geometries.

The *Acoustic Diffusion Equation* interface solves a diffusion equation for the acoustic energy density distribution for room acoustics. The method is also sometimes referred to as *energy finite elements*. This method is an extension of the principles used to calculate the Sabine reverberation time in Equation 2. This particular interface is applicable for high-frequency acoustics when the acoustic fields are diffuse. The diffusion of the acoustic energy density depends on the mean free acoustic path and thus on the room geometry. Absorption may be applied at walls and a transmission loss may be applied when coupling rooms. Increased diffusion due to room fitting can be added. Material properties and sources may be specified in frequency bands.

Compared to a ray acoustics simulation, this interface does not include any phase information, direct sound, or early reflections. The interface supports stationary studies for modeling a steady-state sound energy or sound pressure level distribution. You can use a time-dependent study to determine energy decay curves and reverberation times. You can use an eigenvalue study to determine the reverberation time of coupled and uncoupled rooms, since the eigenvalue is directly related to the exponential decay time and thus to the reverberation time.
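The last point can be made concrete. If an eigenvalue study returns a decay eigenvalue \lambda such that the energy density decays as e^{\lambda t} (sign and normalization conventions vary between setups, so treat this as a sketch), the reverberation time follows from requiring a 60 dB drop:

```python
import math

def t60_from_eigenvalue(lam):
    """Reverberation time from a decay eigenvalue, assuming the energy density
    decays as exp(lam * t) with Re(lam) < 0. Conventions vary, so check the
    sign and whether the eigenvalue refers to energy or amplitude decay."""
    rate = abs(lam.real if isinstance(lam, complex) else lam)
    # 60 dB drop in energy: 10*log10(exp(-rate*t)) = -60  =>  t = 6*ln(10)/rate
    return 6.0 * math.log(10) / rate

# A decay rate of about 10.6 1/s corresponds to roughly 1.3 s of reverberation.
print(f"T60 = {t60_from_eigenvalue(-10.6):.2f} s")
```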

We utilized all three study types in the One-Family House Acoustics model, which studies the acoustics in a single-family home with a noise source located in the living room.

*Energy flux and SPL distribution inside a two-story single-family house.*

Check back on the COMSOL blog this spring for specific blog posts about the *Acoustic Diffusion Equation* and *Ray Acoustics* physics interfaces.

In the meantime, here is a list of suggested reading material:

- H. Kuttruff, *Room Acoustics*, CRC Press, Fifth Edition, 2009.
- A. D. Pierce, *Acoustics: An Introduction to Its Physical Principles and Applications*, Acoustical Society of America, 1991.
- ISO 3382 Standard, Measurement of room acoustic parameters.
- M. R. Schroeder, “New Method of Measuring Reverberation Time”, J. Acoust. Soc. Am., 37 (1965).
- M. R. Schroeder, “Integrated-Impulse Method Measuring Sound Decay Without Using Impulses”, J. Acoust. Soc. Am., 66 (1979).

Acoustic radiation force is an important nonlinear acoustic phenomenon that manifests itself as a nonzero time-averaged force exerted by acoustic fields on particles. It drives acoustophoresis, that is, the movement of objects by sound. One interesting example of this force in action is the acoustic particle levitation discussed in this previous blog post. Today, we shall examine the nature of this force and show how it can be computed using COMSOL Multiphysics.

To understand the nature of the acoustic radiation force, let’s first consider a simple example of a particle in a standing wave pressure field (here assumed to be lossless).

The force on the particle arises as a result of the particle’s finite size: the gradients in the pressure field result in a greater force being exerted on one side of the particle than on the other. However, if we are considering a harmonic pressure wave, then the force is expected to behave as a harmonic function, which can be expressed as F_\text{harmonic} = F_0\sin (2\pi f_0 t+\phi). I’ve shown this as a black arrow in the animation below.

If time-averaged, the total contribution goes to zero. So, where does the observed nonzero force come from?

This question was first addressed by L. V. King back in 1934 (“On the acoustic radiation pressure on spheres“). In order to understand King’s results, we must take a step back to examine how the governing equations of acoustics are derived.

We will find out that they emerge from the Navier-Stokes equations as a result of a linearization procedure, which is normally carried out in two steps.

First, a very small time-varying perturbation in pressure and velocity is assumed on top of a stationary background field. When time derivatives are applied, the stationary terms drop out and what is left only includes the time-dependent perturbation terms. The remaining expression will contain both linear and nonlinear contributions. The latter appear in the form of products of two or more linear perturbation terms, and they result from convective and inertial terms in the original Navier-Stokes equation.

But, in the simplest acoustic limit, the contribution of nonlinear terms can be neglected because the amplitudes of perturbations considered are very small. For example, 0.01^{2} is much smaller than 0.01 and can therefore be neglected. So, in the second step of the linearization procedure, all the nonlinear terms are neglected and the linear wave equation is obtained.

What King showed was that, in order to understand and evaluate the effect of the acoustic radiation force, the nonlinear terms must somehow be retained in the equations.

Keeping the terms up to second order, the pressure field appears as a combination of two terms, p = p_1 + p_2, where, in a simplistic form, p_1 = \rho_0 c_0 v is a linear function of the perturbation velocity v and p_2 = 1/2 \rho_0 v^2 is a nonlinear function of v. Since, in the acoustic limit, we only consider cases in which v \ll c_0, where c_0 is the adiabatic speed of sound, we conclude that p_2 \ll p_1.

At this point, we are ready to answer the first question: Where does the acoustic radiation force come from?

Going back to the example of a particle in a standing wave pressure field, let’s examine the linear and nonlinear components of the pressure and the forces produced by these components. In this case, p_1 will be a time-harmonic function p_1 = P_1 \cos(kx)\sin(2\pi f t) and p_2 will be an anharmonic function p_2 = P_2\cos^2(kx)[1-\cos(4\pi f t)] resulting from the nonlinear contribution.

These terms are visualized by the waveforms in the animation above. The forces resulting from these pressure terms are indicated by arrows. The linear force (black arrow) changes both in magnitude and direction so its cycle-averaged contribution is zero, whereas the nonlinear term (red arrow) only changes in magnitude and on average exerts a nonzero force.
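The cycle averages described above are easy to verify numerically. The sketch below evaluates the p_1 and p_2 expressions from the text at a fixed position and averages them over one period:

```python
import math

f = 100.0           # drive frequency, Hz (arbitrary)
n = 10000           # samples per period
P1, P2 = 1.0, 0.01  # amplitudes; P2 << P1 in the acoustic limit
kx = 0.3            # fixed observation point, k*x in radians

# Cycle average of the linear term p1 = P1*cos(kx)*sin(2*pi*f*t).
p1_avg = sum(P1 * math.cos(kx) * math.sin(2 * math.pi * (i + 0.5) / n)
             for i in range(n)) / n

# Cycle average of the nonlinear term p2 = P2*cos^2(kx)*[1 - cos(4*pi*f*t)].
p2_avg = sum(P2 * math.cos(kx) ** 2 * (1 - math.cos(4 * math.pi * (i + 0.5) / n))
             for i in range(n)) / n

print(f"<p1> = {p1_avg:.2e}, <p2> = {p2_avg:.2e}")
# The linear term averages to zero; the nonlinear one leaves a steady
# offset P2*cos^2(kx), which is the origin of the nonzero average force.
```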

The simple analysis above demonstrates the main mechanism underlying the acoustic radiation force phenomenon. Intuitively, we realize that no force will appear if the particle has the same acoustic properties as the surrounding medium. In other words, the radiation force should be a function of not only the size of a particle and the amplitude of the acoustic field, but also of the particle’s *acoustic contrast* (the ratio of the material properties of the particle relative to the surrounding fluid).

Due to the acoustic contrast, the field incident on the particle will be reflected from its surface, and the radiation force will result from the combination of incident and reflected waves. This makes the problem quite difficult to solve analytically. A solution in closed analytic form was only given for some limiting cases by a number of authors, starting with King. He considered rigid spherical particles with dimensions much smaller than the wavelength of the incident wave, but much larger than the viscous and thermal skin depths. It was the second assumption that allowed the viscous and thermal terms to be neglected.

King’s results have been extended to include compressible particles as in “Acoustic radiation pressure on a compressible sphere“. The results from this study were later confirmed by L. P. Gor’kov in 1962 in “On the forces acting on a small particle in an acoustical field in an ideal fluid”. Viscous and thermal effects become important when the size of the particles becomes comparable to the acoustic boundary layers (thermal and viscous). Results including viscosity were recently presented in 2012 by M. Settnes and H. Bruus.

Gor’kov developed an elegant approach to expressing the radiation force in terms of the time-averaged kinetic and potential energies of a stationary acoustic field of arbitrary geometry. His results, when applied to small compressible fluid particles, give the force as the negative gradient of a potential function U_\text{rad}:

(1)

\mathbf{F} = -\nabla U_\text{rad},

The potential function U_\text{rad} is expressed using the acoustic pressure and velocity as:

(2)

U_\text{rad} = V_p \left [ f_1\frac{1}{2\rho c^2}\langle p^2 \rangle - f_2\frac{3}{4}\rho\langle v^2 \rangle\right],

where V_p is the volume of the particle and the scattering coefficients are given by:

(3)

f_1 = 1 -\frac{K_0}{K_p},\ \ \ \ \ f_2 = \frac{2(\rho_p-\rho)}{2\rho_p+\rho},

where K_i are the bulk moduli. The scattering coefficients f_1 and f_2 represent the monopole and dipole coefficients, respectively. This approach, which is based on the scattering theory, is only valid for particles that are small compared to the wavelength \lambda in the limit a/\lambda \ll 1, where a is the radius of the particle.
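For a rough numeric feel, we can evaluate the scattering coefficients of Eq. (3) for a nylon particle in water. The material properties below are assumed textbook values, not taken from the model, and the combination \Phi = f_1/3 + f_2/2 is the standard acoustophoretic contrast factor for a plane standing wave:

```python
# Gor'kov scattering coefficients from Eq. (3) for a nylon particle in water.
# All material properties are assumed textbook values (not from the model).
rho_w, c_w = 998.0, 1481.0  # water density (kg/m^3) and sound speed (m/s)
rho_p = 1140.0              # nylon density, kg/m^3 (assumed)
K_p = 5.0e9                 # nylon bulk modulus, Pa (assumed)

K_w = rho_w * c_w ** 2      # bulk modulus of water, ~2.2 GPa

f1 = 1.0 - K_w / K_p                                 # monopole coefficient
f2 = 2.0 * (rho_p - rho_w) / (2.0 * rho_p + rho_w)   # dipole coefficient

# Contrast factor for a plane standing wave; Phi > 0 means the particle
# is driven toward the pressure nodes.
phi = f1 / 3.0 + f2 / 2.0
print(f"f1 = {f1:.3f}, f2 = {f2:.3f}, Phi = {phi:.3f}")
```

Both coefficients come out positive here, so a nylon particle in water is pushed toward the pressure nodes of a standing wave, consistent with common acoustofluidic focusing experiments.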

The v and p terms that appear in Eq. (2) are the first-order terms that can be obtained by solving a linear acoustic problem. Results in this form are typically obtained using a perturbation method, which is widely practiced in physics. A thorough review and examples of this method applied to nonlinear problems in acoustics and microfluidics can be found in the textbook *Theoretical Microfluidics* by Professor Henrik Bruus.

Eq. (1) is coded in the COMSOL Multiphysics *Particle Tracing for Fluid Flow* interface to evaluate the acoustic radiation force on particles. But, as mentioned above, it only applies to acoustically small particles and neglects thermoviscous effects. An example can be seen in the Acoustic Levitator model. Knowing the radiation force is important when modeling and simulating systems that handle particles using this phenomenon. This can be, for instance, microfluidic systems that sort and handle cells and other particles. An example of this is discussed in the blog post Acoustofluidic Multiphysics Problem: Microparticle Acoustophoresis.

To extend the theory beyond the limit of acoustically small particles, a numerical approach is required. We will consider that next.

In general, all forces can be expressed using momentum fluxes as \mathbf F = \int_S T \mathbf{n} d\mathbf{a}, where the surface of integration, S, is the external surface of the particle.

Gor’kov used this fact to obtain a closed-form analytical expression for the force acting on a particle in an arbitrary acoustic field. To compute the nonlinear acoustic radiation force, the momentum flux due to the acoustic field has to be evaluated up to second-order terms. The main appeal of his result is that, as mentioned earlier, the second-order terms can be expressed using the solution of a linear problem.

To implement his method, all we need to do is solve the acoustic problem, use the results to compute the second-order momentum flux, and substitute the solution into the flux integral.

H. Bruus showed that, neglecting thermoviscous effects, the second-order flux term is:

(4)

T = -\frac{1}{2\rho c^2}\langle p^2 \rangle + \frac{1}{2}\rho\langle v^2 \rangle

The integral should be taken over the surface of a particle moving in response to the applied force, which means that the surface of integration is a function of time, S = S(t). To overcome this difficulty, Yosioka and Kawasima showed that the integration can be transferred to an equilibrium surface S_0 that encloses the particle. Compensating for the error with the addition of a convective momentum flux term, the total force becomes:

(5)

\mathbf F = \int_{S_0} T \mathbf{n}\, d\mathbf{a} - \int_{S_0}\rho \langle(\mathbf{v}\cdot\mathbf{n})\,\mathbf{v}\rangle\, d\mathbf a

All that is left to do now is solve the acoustics problem to obtain the acoustic pressure and velocity and substitute them into the integral in Eq. (5). In contrast to the approach used in Eqs. (1)–(3), the force expression given in Eq. (5) is valid for all particle sizes as long as the stress T is given. This approach was recently implemented in COMSOL Multiphysics by a group of researchers from the University of Southampton.

It should be noted that the expression in Eq. (4) is only true when viscous and thermal effects are neglected. If these losses are included, the integration surface S_0 should be taken outside of the boundary layers, or a correct full stress expression for T should be used on the particle surface. A first-principles perturbation approach including thermal and viscous losses was presented at the 2013 ICA-ASA conference by M. J. Herring Jensen and H. Bruus, titled “First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves”. A detailed derivation of the governing equations up to second order, in a form suited for implementation in COMSOL Multiphysics, is given in the paper “Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels“.

To benchmark the method presented by Glynne-Jones et al., let’s compute the acoustic radiation force exerted by a standing wave on a spherical nylon particle immersed in water. We assume a frequency of 1 MHz and a pressure amplitude of 1 bar and implement the model using the *Acoustic-Structure Interaction* interface in a 2D axisymmetric geometry. The box in the model is four wavelengths high and two wavelengths wide.

Let’s excite a standing wave in this box using a Background Pressure Field condition, set up in such a way that the particle is at a distance of \lambda/8 from the pressure node.

The integrals in Eq. (5) are computed by setting up integration coupling operators in the *Component 1 > Definitions* node. We need to make sure that the integral is calculated in the revolved geometry by checking the appropriate box and selecting the boundaries of the particle to define the surface of integration.

It is worth noting that, due to the conservation of flux, the computed force is independent of the surface of integration, as long as the surface is located outside the particle. In fact, using a surface at a larger distance is numerically more accurate, simply because there are more points available for the numerical evaluation of the integral. To perform this integration, we can add another surface external to the particle.

Finally, new flux variables are introduced under the *Component 1 > Definitions* node as *Variables 1a*. They are used as arguments for the integration operators to compute the total force.

We are now ready to compare the perturbation approach to an analytical solution.

As expected, they compare reasonably well for small particle radii, where the analytical solution considered is valid. Analytical models that include higher harmonics in the scattered-field decomposition offer solutions that agree with the outlined numerical approach for both large and small spherical particles (as in the paper by T. Hasegawa, “Comparison of two solutions for acoustic radiation pressure on a sphere”).

A small discrepancy for small particle radii between analytical and numerical methods may be attributed to the fact that the theoretical models assume that the particle is plastic, whereas in this example, we have considered an elastic particle with bulk modulus of 0.4.

The perturbation method has a number of advantages.

First, it exploits the linear acoustics method to evaluate nonlinear second-order force effects. This allows the analysis to be easily extended to 3D for particles of arbitrary shapes and material composition. For example, we can extend it to simulate acoustic radiation forces on biological cells or microbubbles.

Second, because the acoustic equations are solved in the frequency domain where very efficient numerical methods are well established, the solution time in COMSOL Multiphysics is quite fast even in 3D.

Meanwhile, the disadvantage of this method is that it is driven by theoretical results that rely on a set of simplifying assumptions, and it can only be validated in a limited number of cases. What we would like to have is a numerical method that allows the problem to be solved directly.

We shall see how this can be achieved in the next blog post. Stay tuned!

- Model: Acoustic Levitator
- L. V. King, “On the acoustic radiation pressure on spheres”, (1934)
- M. Settnes and H. Bruus, “Forces acting on a small particle in an acoustical field in a viscous fluid”, Phys. Rev. E 85, 016327 1-12 (2012)
- P. Glynne-Jones, P. P. Mishra, R. J. Boltryk, and M. Hill, “Efficient finite element modeling of radiation forces on elastic particles of arbitrary size and geometry”, J. Acoust. Soc. Am., 133, 1885-93 (2013)
- P.B. Muller, R. Barnkob, M.J.H. Jensen, and H. Bruus, “A numerical study of microparticle acoustophoresis driven by acoustic radiation forces and streaming-induced drag forces”, Lab Chip 12, 4617-4627 (2012)
- P.B. Muller and H. Bruus, “Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels”, Phys. Rev. E 90, 043016 1-12 (2014)

The piezoelectric modeling interface seeks to:

- Make the modeling workflow more transparent and flexible
- Enable you to debug the models more easily

This will allow you to successfully simulate piezoelectric devices as well as easily extend the simulation by coupling it with any other physics.

You may already be familiar with the three different modules that can be used for simulating piezoelectric materials: the Acoustics Module, the MEMS Module, and the Structural Mechanics Module.

Each of these modules gives you a predefined *Piezoelectric Devices* interface that you can use for modeling systems that include both piezoelectric and other structural materials. The Acoustics Module offers two predefined interfaces, namely the *Acoustic-Piezoelectric Interaction, Frequency Domain* interface and the *Acoustic-Piezoelectric Interaction, Transient* interface. These two allow you to model how piezoelectric acoustic transducers interact with the fluid media surrounding them.

*The* Piezoelectric Devices *interface is available in the list of structural mechanics physics interfaces.*

*The* Acoustic-Piezoelectric Interaction, Frequency Domain *and* Acoustic-Piezoelectric Interaction, Transient *interfaces are available in the list of acoustics physics interfaces.*

These predefined multiphysics interfaces couple the relevant physics governing equations via constitutive laws or boundary conditions. Thus, they offer a good starting point for setting up more complex multiphysics problems involving piezoelectric materials. The new piezoelectric interfaces in COMSOL Multiphysics version 5.0 provide a transparent workflow to visualize the constituent physics interfaces. There is also a separate Multiphysics node that lists how the constituent physics interfaces are connected to each other.

Let’s find out how these multiphysics interfaces are structured.

Upon selecting the *Piezoelectric Devices* multiphysics interface, you see the constituent physics: *Solid Mechanics* and *Electrostatics*. You also see the *Piezoelectric Effect* branch listed under the Multiphysics node, which controls the connection between *Solid Mechanics* and *Electrostatics*.

*Part of the model tree showing the physics interfaces and multiphysics couplings that appear upon selecting the* Piezoelectric Devices *interface.*

By default, all modeling domains are assumed to be made of piezoelectric material. If that is not the case, you can deselect the non-piezo structural domains from the branch *Solid Mechanics > Piezoelectric Material*. These domains then get automatically assigned to the *Solid Mechanics > Linear Elastic Material* branch. This process ensures that all parts of the geometry are marked as either piezoelectric or non-piezo structural materials and that nothing is accidentally left undefined.

If you are working with other material models that are available with the Nonlinear Structural Materials Module, such as hyperelasticity, you can add that as a branch under *Solid Mechanics* and assign the relevant parts of your modeling geometry to this branch. The Solid Mechanics node gives us full flexibility to set up a model that involves not only piezoelectric material but also linear and nonlinear structural materials. The best part is that if these materials are geometrically touching each other, the COMSOL software will automatically take care of displacement compatibility across them.

If some parts of the model are not solid at all, like an air gap, you can deselect them in the Solid Mechanics node.

From the Solid Mechanics node, you will also assign any sort of mechanical loads and constraints to the model.

The Electrostatics node allows you to group together all the information related to electrical inputs to the model. This would include, for example, any electrical boundary conditions such as voltage and charge sources. By default, any geometric domain that has been assigned to the *Solid Mechanics > Piezoelectric Material* branch also gets assigned to the *Electrostatics > Charge Conservation, Piezoelectric* branch. If you have any other dielectric materials in the model that are not piezoelectric, you could assign them to the *Electrostatics > Charge Conservation* branch.

The *Multiphysics > Piezoelectric Effect* branch ensures that the structural and electrostatics equations are solved in a coupled fashion within the domains that are assigned to the *Solid Mechanics > Piezoelectric Material* (and also the *Electrostatics > Charge Conservation, Piezoelectric*) branch.

The multiphysics coupling is implemented using the well-known coupled constitutive law for piezoelectric materials. Note that the *Electrostatics > Charge Conservation, Piezoelectric* branch is mainly used as a placeholder for assigning geometric domains that belong to the piezoelectric material model. This helps the *Multiphysics > Piezoelectric Effect* branch understand whether a domain assigned to the *Electrostatics* interface is piezoelectric or not.
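Schematically, and leaving aside COMSOL's exact notation, the stress-charge form of this constitutive law couples the stress T and electric displacement D to the strain S and electric field E through the elasticity matrix c_E, the coupling matrix e, and the permittivity matrix \varepsilon_S:

T = c_{E} S - e^{T} E

D = e S + \varepsilon_{S} E

An equivalent strain-charge form exists as well; the two are related by standard matrix transformations.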

Note: For an example of working with the *Piezoelectric Devices* interface, check out the tutorial on modeling a Piezoelectric Shear Actuated Beam.

It is also possible to add effects of damping or other material losses in dynamic simulations. You can do so by adding one or more of the following subnodes under the *Solid Mechanics > Piezoelectric Material* branch:

*Damping and losses that can be added to a piezoelectric material.*

| Subnode Name | When to Use the Subnode |
|---|---|
| Mechanical Damping | Allows you to add purely structural damping. Choose between the Loss Factor (frequency domain only) and Rayleigh damping (both frequency and time domains) models. |
| Coupling Loss | Allows you to add electromechanical coupling loss. Choose between the Loss Factor (frequency domain only) and Rayleigh damping (both frequency and time domains) models. |
| Dielectric Loss | Allows you to add dielectric or polarization loss. Choose between the Loss Factor (frequency domain only) and Dispersion (both frequency and time domains) models. |
| Conduction Loss (Time-Harmonic) | Allows you to add electrical energy dissipation due to electrical resistance in a harmonically vibrating piezoelectric material (frequency domain only). |

Note: For an example of adding damping to piezoelectric models, check out the tutorial on modeling a Thin Film BAW Composite Resonator.

Additional damping also arises from the interaction between a piezoelectric device and its surroundings. This can be modeled in greater detail using the Acoustic-Piezoelectric Interaction interfaces.

Upon selecting one of the Acoustic-Piezoelectric Interaction interfaces, you see the constituent physics: *Pressure Acoustics*, *Solid Mechanics* and *Electrostatics*. You also see the *Acoustic-Structure Boundary* and *Piezoelectric Effect* branches listed under the Multiphysics node.

*Part of the model tree showing the physics interfaces and multiphysics couplings that appear when selecting the* Acoustic-Piezoelectric Interaction, Frequency Domain *and the* Acoustic-Piezoelectric Interaction, Transient *interfaces.*

By default, all modeling domains are assigned to the *Pressure Acoustics* interface as well as the *Solid Mechanics > Piezoelectric Material* and *Electrostatics > Charge Conservation, Piezoelectric* branches. Note that the *Pressure Acoustics* interface is designed to simulate acoustic waves propagating in fluid media.

Since COMSOL Multiphysics cannot know *a priori* which parts of the modeling geometry belong to the fluid domain and which ones are solids, you are expected to provide that information by deselecting the solid domains from the *Pressure Acoustics, Frequency Domain* (or *Pressure Acoustics, Transient*) branch and deselecting the fluid domains from the *Solid Mechanics* and *Electrostatics* branches.

Once you do that, the boundaries at the interface between the solid and fluid domains are detected and assigned to the *Multiphysics > Acoustic-Structure Boundary* branch. This branch controls the coupling between the *Pressure Acoustics* and *Solid Mechanics* physics interfaces. It does so by considering the acoustic pressure of the fluid to be acting as a mechanical load on the solid surfaces, while the component of the acceleration vector that is normal (perpendicular) to the same surfaces acts as a sound source that produces pressure waves in the fluid.
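With n denoting the outward normal of the fluid domain, these two coupling conditions can be written schematically as follows (the exact signs depend on the orientation convention chosen for the normal):

\sigma \cdot n = -p \, n

\frac{\partial p}{\partial n} = -\rho_0 \, (n \cdot a)

The first condition applies the fluid pressure p as a traction on the solid surface; the second uses the normal component of the structural acceleration a as a source for the pressure field in the fluid of density \rho_0.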

Note: For an example of Acoustic-Piezoelectric Interaction, check out the tutorial on modeling a Tonpilz Transducer.

The transparency in the workflow as discussed above also paves the way for adding more physics and creating your own multiphysics couplings.

For example, let’s say there is some heat source within your piezoelectric device that produces nonuniform temperature distribution within the device. In order to model this, you can add another physics interface called *Heat Transfer in Solids* in the model tree and prescribe appropriate heat sources and sinks to find out the temperature profile. You could then add a *Thermal Expansion* branch under the Multiphysics node to compute additional strains in different parts of the device as a result of the temperature variation.

The *Multiphysics > Thermal Expansion* branch couples the *Heat Transfer in Solids* and the *Solid Mechanics* interfaces. It might also be possible that the piezoelectric material properties have a temperature dependency. You could represent these properties as functions of temperature and let the *Multiphysics > Temperature Coupling* branch pass on the information related to temperature distribution in the modeling geometry to the *Solid Mechanics* or even the *Electrostatics* branches, thereby producing additional multiphysics couplings.

*Part of the model tree showing the physics interfaces and multiphysics couplings that you can use to combine piezoelectric modeling with thermal expansion and temperature-dependent material properties.*

Similar to adding more physics and multiphysics couplings, it is also possible to disable one or more multiphysics couplings — or even any of the physics interfaces shown in the model tree. This could be immensely helpful for debugging large and complex models.

*The model tree on the left shows a scenario where the* Piezoelectric Effect *multiphysics coupling is disabled. The model tree on the right shows a scenario where the* Electrostatics *physics interface is disabled.*

For example, you can disable the *Multiphysics > Piezoelectric Effect* branch and solve for the *Solid Mechanics* and *Electrostatics* physics interfaces in an uncoupled sense. You could also solve a model by disabling either the *Solid Mechanics* or the *Electrostatics* interface.

Running such case studies could help in evaluating how the device would respond to certain inputs if there were no piezoelectric material in place. This approach could also be used to evaluate equivalent structural stiffness or equivalent capacitance of the piezoelectric material.

You could also start by adding only one of the constituent physics, say *Solid Mechanics*, and after performing some initial structural analysis, go ahead and add the *Electrostatics* physics interface to the model tree once you are ready to add the effect of a piezoelectric material.

In that case, when you add the *Electrostatics* physics on top of the existing *Solid Mechanics* physics in the model tree, the COMSOL software will automatically add the Multiphysics node. From there, you can manually add the *Piezoelectric Effect* branch. Note that if you take this approach of adding the constituent physics interfaces and multiphysics effect manually, you would also have to manually add the piezoelectric modeling domains to the *Solid Mechanics > Piezoelectric Material*, the *Electrostatics > Charge Conservation, Piezoelectric*, and the *Multiphysics > Piezoelectric Effect* branches.

In a similar fashion, you can continue to add more physics interfaces and multiphysics couplings to your model based on your needs.

To learn more about modeling piezoelectric devices in the COMSOL software environment, you are encouraged to refer to these resources:

- Piezoelectric Features Overview
- Acoustics Module User’s Guide
- MEMS Module User’s Guide
- Structural Mechanics Module User’s Guide

In particle tracing and ray tracing simulations, we often need to use the particle or ray properties to change a variable that is defined on a set of domains or boundaries. For example, solid particles in a fluid might exert a significant force on the surrounding fluid, and they may also erode the surfaces they hit.

In previous blog posts, I’ve discussed two other cases in greater detail: divergence of an electron beam due to self-potential and thermal deformation of lenses in a high-powered laser system. Each of these phenomena can be modeled using Accumulators or the specialized features that are derived from them.

An Accumulator is a physics feature that communicates information from particles or rays to the underlying finite element mesh. For each Accumulator feature in a model, a corresponding dependent variable, called an *accumulated variable*, is declared. These accumulated variables can be defined either within a set of domains or on a set of boundaries, and they can represent any physical quantity, making them extremely flexible.

The Accumulator features can be added to any of the physics interfaces of the Particle Tracing Module. They can also be used in the *Geometrical Optics* interface, available with the Ray Optics Module, and the *Ray Acoustics* interface, available with the Acoustics Module.

Depending on the physics interface, more specialized versions of the Accumulator may be available for computing specific types of physical quantities. For example, the *Particle Tracing for Fluid Flow* interface includes a dedicated *Erosion* boundary condition that includes several built-in models for computing the rate of erosive wear on a surface.

The Accumulators can be divided into three broad categories, which function in the following ways:

- Accumulators on boundaries increment a variable defined on a boundary element whenever a particle hits it.
- Accumulators on domains project information from each particle to the mesh elements the particle passes through.
- Nonlocal accumulators communicate information from a particle’s current position to the location where it was originally released.

We will now investigate each of these varieties in greater detail.

When particles or rays strike a surface, they can affect that surface in a wide variety of ways. For example, a laser can cause a boundary to heat up, sediment particles can erode their surroundings, and sputtering can occur when high-velocity ions strike a wafer in a process chamber. All of these effects require the same basic modeling procedure; we define a variable on the boundary and change its value when particles or rays interact with the boundary.

To begin, let’s consider a simple case in which we want to count the number of times a boundary is hit. We first define a variable, called `rpd`, for example, which can have a distinct value in every boundary mesh element. Initially, this variable is set to zero in all elements. Every time a particle hits a mesh element on this boundary, we would like to increment the variable on that element by 1.

The values of the accumulated variable on the boundary elements (illustrated as triangles) after one collision are shown below:

To implement this in COMSOL Multiphysics, we first set up the particle tracing model, then add a “Wall” node to the boundary for which we want to count collisions. In this case, let’s specify that particles are reflected at this surface by selecting the Bounce wall condition. We then add the Accumulator node as a subnode to this Wall.

The settings shown in the following screenshot cause the accumulated variable (called `rpb`) to be incremented by 1 (the expression in the Source edit field) every time a particle hits the wall.

I have created an animation that demonstrates how the number of collisions with each boundary element is counted over the course of the study. Check it out:

By changing the expression in the Source edit field, it is possible to increment the accumulated variable using any combination of variables that exist on the particle and on the boundary. For example, the accumulated variable may increase by a different amount based on the velocity or mass of incoming particles. The dependent variable need not be dimensionless. In fact, it can represent any physical quantity.
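The bookkeeping behind such a boundary accumulator can be sketched in a few lines of plain Python. This is a toy 2D particle model, not COMSOL API code; all names and numbers are illustrative:

```python
import numpy as np

def count_boundary_hits(positions, velocities, n_elements, wall_y=1.0, dt=1e-3, n_steps=1500):
    """Toy boundary accumulator: particles bounce off a wall at y = wall_y,
    and each wall mesh element (x in [0, 1) split into n_elements pieces)
    keeps its own hit counter, incremented by the source value 1 per hit."""
    hits = np.zeros(n_elements)              # the accumulated variable, one value per element
    pos = positions.astype(float)
    vel = velocities.astype(float)
    for _ in range(n_steps):
        pos += vel * dt
        crossed = pos[:, 1] >= wall_y        # particles that reached the wall this step
        if crossed.any():
            elem = np.clip((pos[crossed, 0] * n_elements).astype(int), 0, n_elements - 1)
            np.add.at(hits, elem, 1.0)       # increment the accumulated variable by the source (here: 1)
            vel[crossed, 1] *= -1.0          # "Bounce" wall condition: specular reflection
            pos[crossed, 1] = 2.0 * wall_y - pos[crossed, 1]
    return hits
```

Replacing the constant source 1 by, say, the incident normal momentum of each particle turns the same loop into a crude load or erosion model.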

In addition to the generic Accumulator subnode — which can represent anything — dedicated accumulator-based features are available in the different physics interfaces, including the following:

- In the *Charged Particle Tracing* physics interface:
  - *Etch* (Use this to model physical sputtering of a surface by energetic ions.)
  - *Current Density*
  - *Heat Source*
  - *Surface Charge Density*
- In the *Particle Tracing for Fluid Flow* physics interface:
  - *Erosion* (For computing the total mass removed from the surface or the rate of erosive wear.)
  - *Mass Deposition*
  - *Boundary Load*
  - *Mass Flux*
- In the *Geometrical Optics* physics interface:
  - *Deposited Ray Power* (For computing a boundary heat source using the power of incident rays.)

We may also want to transfer information from particles to all of the mesh elements they pass through, not just the boundary elements they touch. We can do so by adding an Accumulator node to the physics interface directly, instead of adding it as a subnode to a Wall or other boundary condition.

For example, we can use an Accumulator to reconstruct the number density of particles within a domain. This technique is used in a benchmark model of free molecular flow through an s-bend in which the *Free Molecular Flow* interface is used to compute the number density of molecules in a rarefied gas.

Here is the geometry of the s-bend:

The settings window for the Accumulator is shown below.

The expression in the Source edit field is a bit more complicated than in the previous case. The source term R is defined as

(1)

R = \frac{J_{\textrm{in}} L}{N_{p}}

where J_{\textrm{in}} (SI unit: 1/(m^2 s)) is the molecular flux at the inlet, L (SI unit: m) is the length of the inlet, and N_{p} (dimensionless) is the number of model particles.

Physically, we can interpret R as the number of real molecules per unit time, per unit length in the out-of-plane direction, that are represented by each model particle. Because this source term acts on the time derivative of the accumulated variable, each particle leaves behind a “trail” in the mesh elements it passes through, which contributes to the number density in those elements.
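A stripped-down 1D version of this bookkeeping might look as follows, with every name and number purely illustrative. Each trajectory deposits R·dt molecules into whichever mesh cell it currently occupies:

```python
import numpy as np

def number_density(paths, dt, j_in, l_inlet, n_cells, domain=(0.0, 1.0)):
    """Toy 1D domain accumulator: estimate a number density from particle
    trajectories. R = j_in*l_inlet/N_p is the number of real molecules per
    unit time represented by each model particle (cf. Eq. 1)."""
    n_p = len(paths)
    r = j_in * l_inlet / n_p
    x0, x1 = domain
    dx = (x1 - x0) / n_cells                    # cell size, the 1D element "volume"
    density = np.zeros(n_cells)
    for path in paths:                          # path: particle positions sampled every dt
        cells = np.clip(((np.asarray(path) - x0) / dx).astype(int), 0, n_cells - 1)
        np.add.at(density, cells, r * dt / dx)  # leave a "trail" in the visited cells
    return density
```

The statistical noise mentioned below falls off as the number of model particles grows relative to the number of mesh cells.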

I have created a second animation in which the number density of molecules is computed using the Accumulator (bottom) and the result is compared to the result of the *Free Molecular Flow* interface (top). Here it is:

We do see some noise in the particle tracing solution because each particle can only make a uniform contribution to the mesh element it is currently in. However, when the number of particles is large compared to the number of mesh elements, it is still possible to obtain an accurate solution.

In addition to the generic Accumulator node, which can represent anything, dedicated accumulator-based features are available in the different physics interfaces, including the following:

- In the *Charged Particle Tracing* physics interface:
  - *Particle-Field Interaction* computes the charge density of particles, which can then be used as a source term to compute the self-potential of a beam of ions or electrons. It is also possible to compute the current density, which can create a significant magnetic field if the beam is relativistic.
- In the *Particle Tracing for Fluid Flow* physics interface:
  - *Fluid-Particle Interaction* computes the body load exerted by particles on the surrounding fluid.
- In the *Geometrical Optics* physics interface:
  - *Deposited Ray Power* generates a heat source term based on the amount of power absorbed by the medium when rays propagate through an absorbing medium.

The third variety of Accumulator is a bit more advanced than the previous two. A *Nonlocal Accumulator* is used to communicate information from a particle’s current position to the initial position from which it was released. The Nonlocal Accumulator can be added to an “Inlet” node, causing it to declare an accumulated variable on the mesh elements on the Inlet boundary.

The Nonlocal Accumulator can be used in some advanced models of surface-to-surface radiation. In many cases, the *Surface-to-Surface Radiation* physics interface (available with the Heat Transfer Module) can be used to efficiently and accurately model radiative heat transfer. However, the *Surface-to-Surface Radiation* interface relies on the assumption that all surfaces reflect radiation diffusely. That is, the direction of reflected radiation is completely independent of the direction of incident radiation. It cannot be used, for example, if some of the radiation undergoes specular reflection at smooth, polished, metallic surfaces.

One approach to modeling radiative heat transfer with a combination of specular and diffuse radiation is to use the *Mathematical Particle Tracing* interface, as demonstrated in the example of mixed diffuse and specular reflection between two parallel plates.

The incident heat flux on each plate is computed by releasing particles from the plate surface, querying the temperature of each surface the particles hit, and communicating this information back to the point at which the particles were initially released. The image below shows the temperature distribution between the two plates, where the top plate is heated by an external Gaussian source.

We have seen that Accumulators can be used to model interactions between particles or rays and any field that is defined on the surrounding domains or boundaries. The accumulated variables can represent any physical quantity. The Accumulator is the basic building block that allows for sophisticated one-way or two-way coupling between a particle- or ray-based physics interface and any of the other products in the COMSOL product suite.

The Accumulators and related physics features have too many settings and applications to discuss in detail in a single blog post. To learn more about the many options available, please refer to the User’s Guide for the Particle Tracing Module (for particle tracing physics interfaces), the Ray Optics Module (for the *Geometrical Optics* interface), or the Acoustics Module (for the *Ray Acoustics* interface).

If you are interested in learning more about any of these products, please contact us.


Whether on your way to the airport or simply passing by one, you have likely experienced watching a plane fly right overhead as it prepares to land. It is often a sight at which we marvel, to see a plane flying so low to the ground, but another element that can be quite captivating is the sound that the aircraft makes. While our experience lasts only for a short moment, imagine what it is like for those residents living in close proximity to the airport, hearing that sound periodically throughout the day. Taking this perspective, it is easy to see why addressing aircraft noise has become such a prevalent area of concern.

Since becoming a public issue in the 1960s, new regulations and research have initiated the development of quieter aircraft. One design element that has proven successful within this movement is a high-bypass turbofan engine. Used in the majority of airliners, these engines feature a fan that captures incoming air. As the air passes through the fan, a portion enters the combustion chamber, while the remainder bypasses the engine. Compared to its predecessor, the *turbojet*, in which all air passes through the gas generator, the turbofan engine creates less aircraft engine noise while also offering greater thrust at lower speeds.

*A CFM56 turbofan engine. (“CFM56 P1220759” by David Monniaux — Own work. Licensed under Creative Commons Attribution Share-Alike 3.0, via Wikimedia Commons).*

With this improved engine technology, the next step becomes analyzing the acoustical field of the turbofan engine in an effort to optimize its design. For this, we can turn to simulation.

To analyze aircraft engine noise, we can use the Flow Duct model in COMSOL Multiphysics. This model features an axially symmetric duct within a turbofan engine. It is an approximate model of the inlet section of a turbofan in the CFM56 series (which is quite common among airliners). In this example, the airflow is assumed to be compressible, irrotational, inviscid, and of constant entropy. The acoustic field is modeled as perturbations on top of the background flow using the linearized potential flow equations. With this formulation, the pressure and the velocity field are directly related to, and derived from, a so-called velocity potential.
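In schematic terms, the velocity field is the gradient of the velocity potential, and the linearized momentum equation then gives the acoustic pressure perturbation on top of a background flow with velocity v_0 and mean density \rho_0:

v = \nabla \phi

p = -\rho_0 \left( \frac{\partial \phi}{\partial t} + v_0 \cdot \nabla \phi \right)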

*The geometry of the duct.*

In this model, *z* = 0 is referred to as the *source plane* and represents the fan’s location in the actual geometry of the engine. A noise source is introduced at this boundary. Meanwhile, *z* = *L* represents the engine’s fore end and is known as the *inlet plane*. Variables R_{1} and R_{2} indicate the spinner and duct-wall profiles.

In this study, we model cases with and without a compressible, irrotational background flow, with Mach number *M* = -0.5 (flow in the negative *z*-direction) and *M* = 0 (no flow), respectively. The analysis also compares the use of hard and lined walls within the duct.

The model first solves the background flow, which is assumed to be stationary. Then, a suitable acoustic source (a given propagating eigenmode) is derived. Finally, the acoustic field is found.

For the case with a mean background flow (*M* = -0.5), the velocity potential was found to be uniform beyond the terminal plane (contour lines in the plot below). Additionally, the deviations in the mean density value (due to the compressible nature of the air flow) were most prevalent in nonuniform areas of the duct’s geometry, such as the tip of the spinner. These deviations are highlighted by the red and blue colors in the figure below.

*Plot of the mean-flow field for* M *= -0.5. The color surfaces correspond to the background density and the contours to the velocity potential.*

Using these results, we can then calculate eigenmodes for the acoustic field at the source of the noise. These can be shown to represent a certain component of the engine noise source at a given frequency. The graph below shows the resulting velocity-potential profile for the first axial boundary mode at the source plane in the case of *M* = -0.5 and *M* = 0.

*Graph showing the resulting velocity-potential profile for the lowest mode.*

With the background flow and a source at hand, the acoustic field can now be solved for. The results (seen below) can be compared to results for a similar system presented by Rienstra and Eversman (2001).

In the cases without a background flow, the acoustic pressure distributions for both hard and lined walls were in good agreement with the simulation results in the paper. With the mean flow, the results for the hard-wall case also agreed well with the reference solutions. In the case of the lined wall, however, there were some notable differences, specifically near the source plane. These differences can be explained by the discrepancy in the noise source definition: in this model, the source mode was derived for the hard duct wall, while the compared simulation used a noise source adapted to the acoustic lining.

*Pressure distribution in the acoustic field for hard (top) and lined (bottom) duct wall in cases with no mean flow (*M* = 0).*

*Pressure distribution in the acoustic field for hard (top) and lined (bottom) duct wall in cases with a mean flow (*M* = -0.5).*

The model presented here is very conceptual, but it could potentially be extended for more complex situations. By modeling these systems, it is possible to optimize the shape of certain engine duct parts and the lining properties in order to reduce the sound emission. Such optimization should, of course, go hand-in-hand with a control of the flow properties in order to not degrade the engine’s performance.

- Try it yourself: Download the Flow Duct model now.
- S.W. Rienstra and W. Eversman, “A Numerical Comparison Between the Multiple-Scales and Finite-Element Solution for Sound Propagation in Lined Flow Ducts,” J. Fluid Mech., vol. 437, pp. 367–384, 2001.

The researchers at Argonne National Lab (Argonne) turned to multiphysics simulation and trial-and-error prototyping to optimize the effectiveness of their acoustic levitator. When we want to move an object, sound may not be the tool we would typically reach for. So how does it have the power to float or levitate objects in a lab setting? It’s all about combining forces in just the right way to create lift.

When sound vibrations travel through a medium like air, the resulting compression is measurable and real. The combination of acoustophoretic force, gravity, and drag produces just enough pressure not only to lift a material like liquid medicine, but also to let the operator position, rotate, and move it as needed.

*Pressure pockets created by waves between the transducers of the acoustic levitator do the heavy lifting on a particle scale.*

By keeping the droplets in a steady rotation, researchers are able to work on the chemical reactions while the medicine stays liquid and amorphous. This is key for creating a safe, steady environment where medicine will form correctly.

Every material and measurement in the acoustic levitator affects both whether the final design works and how finely it can be adjusted to the needs of the scientists who use it.

The geometry of the device includes two small piezoelectric transducers that stand like trumpets above and below the working area where medicine is created, like this:

*The acoustic levitator’s wave patterns are controlled by pieces of Gaussian profile foam located on evenly-spaced transducers.*

Possibly the most important part of the design is the Gaussian profile foam, which consists of polystyrene and coats the ends of each transducer. This foam works to remove acoustic waves that fall outside the required range. It acts as a filter to maintain even, well-defined standing waves.

Using COMSOL Multiphysics together with the Acoustics Module, CFD Module, and Particle Tracing Module, the team at Argonne modeled the acoustic levitator. With simulation, they were able to narrow down the shape of the acoustic field and the locations of the floating droplets.

*The simulation above shows that droplets have formed from the particles at T = 0.75 seconds. On the left, the simulation shows the expected particle distribution; on the right, a photograph depicts the actual distribution of the droplets.*

As advances in acoustic levitation expand, the ability to work with finer and finer chemical reactions will allow members of the pharmaceutical science community to expand their reach, perhaps discovering many new medicines with life-saving qualities.

- Learn more about how Argonne improved their acoustic levitation technology.

Sound Navigation and Ranging (SoNaR, more commonly written in all lowercase as “sonar”) technology can be used for investigating and communicating underwater. To improve a sonar system, you need to optimize its design at the component level. A major component of sonar is the *electro-acoustic transducer*.

Sonar technology can be used for various applications for the main purpose of detecting objects in water. Some specific applications include mapping the ocean floor for nautical charts, finding hazardous or lost objects, communicating with other vessels, detecting enemy submarines, navigating the seas (both on and underwater), and more.

*Towed sonar. Image credit: http://www.netmarine.net/ via Wikipedia.*

A recent real-world example is the search effort for Malaysia Airlines Flight 370 in April this year. After weeks of employing other search methods, officials decided to use sonar to attempt to locate the missing aircraft. The black box of any airplane comes equipped with underwater locator beacons for this very reason. The search team’s sonar did detect signals, but they were unfortunately unable to confirm that these came from Flight 370.

There are two types of sonar: active and passive. Active sonar means that the sonar device itself emits sound and then “listens” for the echo to return. The sound signals are created from electrical pulses that are converted into sound via the piezoelectric or magnetostrictive material in the center of a transducer. By transmitting sound and then timing how long it takes for the echo to bounce back, the device can measure how far away the object in question is.
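The range calculation behind this echo timing is simple: the pulse travels out and back, so the one-way distance is half the sound speed times the round-trip time. A minimal sketch, assuming a nominal sound speed in seawater of about 1500 m/s (the actual value varies with temperature, salinity, and depth):

```python
# Estimate target range from an active sonar echo delay.
# The default sound speed is a nominal seawater value, not a measured one.

def echo_range(round_trip_time_s, sound_speed_m_s=1500.0):
    """Return the one-way distance to the target in meters."""
    # The pulse covers the distance twice (out and back), so halve the path.
    return sound_speed_m_s * round_trip_time_s / 2.0

# Example: an echo returning after 0.4 s implies a target about 300 m away.
print(echo_range(0.4))  # 300.0
```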

Passive sonar simply involves listening to sounds made by other objects or beings, such as the locator beacons in the case of Flight 370. Both passive and active sonar systems are able to listen to incoming sounds by converting these into electrical signals, again via the piezoelectric or magnetostrictive material in the transducer.

A sonar system’s performance is only as good as its components. The component responsible for sending and receiving signals is the *electro-acoustic transducer*. For efficiency, you’ll often have a multitude of these transducers arranged in arrays. There are various designs to choose from, including tonpilz, ring, and flextensional transducers. Here, we will focus on the tonpilz piezo transducer.

A tonpilz piezo transducer contains a stack of active piezoceramic rings sandwiched between a light head mass and a heavy tail mass. This assembly allows the transducer to act as either a source or a receiver. Additionally, the stack can be pre-stressed by a central bolt.

*Tonpilz piezo transducer.*

When designing a tonpilz transducer, you will need to consider several elements. The design is based on multiphysics couplings between acoustics and structural mechanics as well as between piezoelectricity and structural mechanics. We want to understand how the device deforms and where stresses concentrate; what the sound pressure level and radiated pressure field are; and the sound beam’s pattern, transmitting voltage response (TVR) curve, and directivity index (DI).
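As an aside, the TVR quantifies the far-field pressure the transducer produces at 1 m per volt of drive, expressed in dB re 1 µPa/V. A back-of-the-envelope sketch with illustrative numbers (not values from the model):

```python
import math

# Transmitting voltage response (TVR): far-field pressure at 1 m per volt
# of drive voltage, in dB re 1 uPa/V. The example numbers are illustrative.

def tvr_db(p_at_1m_pa, drive_voltage_v):
    """Return the TVR in dB re 1 uPa/V."""
    # Convert the pressure-per-volt sensitivity from Pa/V to uPa/V,
    # then take 20*log10 for a pressure-like (field) quantity.
    p_per_volt_upa = (p_at_1m_pa / drive_voltage_v) * 1e6
    return 20.0 * math.log10(p_per_volt_upa)

# e.g., 1 Pa at 1 m for 1 V of drive corresponds to 120 dB re 1 uPa/V.
print(tvr_db(1.0, 1.0))  # 120.0
```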

Due to the multiphysics nature of the component, I would suggest you model it using COMSOL Multiphysics and the Acoustics Module. The Acoustics Module comes with the *Acoustic-Piezoelectric Interaction, Frequency Domain* interface, which contains all the necessary multiphysics couplings for modeling the transducer.

If we open the solved Tonpilz Piezo Transducer model from the Model Gallery, we can study the performance of a transducer with a bolt that is not pre-stressed. Below, you can see four of the most important plots.

Note: My colleague Mads Herring Jensen recently updated the model entry with files for COMSOL Multiphysics 4.4. It is available in the Model Gallery for those of you who want to download the model MPH-file and the accompanying PowerPoint® presentation.

*The sensitivity, or rather the TVR (Transmitting Voltage Response), of the transducer plotted between 1 and 40 kHz.*

*The transducer directivity (depicted as a 3D polar plot) evaluated at a distance of 10 m in front of the transducer for all the modeled frequencies. The normalized directivity is shown in the figure below.*

*Here, the spatial sensitivity of the transducer is depicted in the* xz*-plane at a distance of 10 m. The patterns are normalized to 0 dB in front. Evaluating this data at any desired distance is a simple postprocessing task. Based on the far-field data, you can also readily calculate the directivity index (DI). This is done in the model.*
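The directivity index itself follows a simple recipe: the peak far-field intensity divided by the intensity averaged over the full sphere, in decibels. A minimal numerical sketch for an axisymmetric pattern, using a made-up forward-lobe beam rather than data from the tonpilz model:

```python
import numpy as np

# Directivity index (DI) from an axisymmetric far-field beam pattern:
# DI = 10*log10(peak intensity / intensity averaged over the sphere).
# The beam pattern below is a hypothetical example, not COMSOL output.

def directivity_index(theta, p):
    """theta: polar angles in radians from 0 to pi; p: |pressure| samples."""
    intensity = np.abs(p) ** 2
    # Spherical average for an axisymmetric pattern:
    # (1/2) * integral of |p(theta)|^2 sin(theta) dtheta (trapezoidal rule).
    f = intensity * np.sin(theta)
    avg = 0.5 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))
    return 10.0 * np.log10(intensity.max() / avg)

theta = np.linspace(0.0, np.pi, 721)
# Hypothetical pattern: strong forward lobe, low level behind the transducer.
p = np.where(theta < np.pi / 2, np.cos(theta) ** 2, 0.01)
print(f"DI = {directivity_index(theta, p):.1f} dB")  # roughly 10 dB here
```

A uniform (omnidirectional) pattern gives DI = 0 dB, which is a quick sanity check for the averaging.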

*Specific acoustic impedance at the surface of the transducer.*

From these plots, we can conclude that the transducer is very versatile: its directivity can be controlled (with the directivity index varying from -7 dB to +8 dB) while the TVR remains almost constant in the frequency range of 10 to 30 kHz.

- Download the Tonpilz Piezo Transducer model
- Watch how to build the model in the Tonpilz Piezo Transducer Tutorial video
- Learn what you can simulate with the Acoustics Module
- COMSOL Conference papers: