When inside a room — a conference room, concert hall, or even a car — everyone has an opinion about whether the “acoustics” are good or bad. In *room acoustics*, we want to study this notion of sound quality in a quantitative way. In short, room acoustics is concerned with assessing the acoustics of enclosed spaces. The Acoustics Module of COMSOL Multiphysics has several tools for simulating the acoustics of rooms and other confined spaces. I will present them here.

When sound is emitted inside a room, a listener will perceive the sound as a combination of direct sound from the source as well as sound reflected off the walls. At the walls, the sound is reflected, absorbed, and scattered.

Since all of these processes are frequency dependent, a poorly designed meeting room can, for example, be highly reverberant in a frequency band that is important for speech. The room could also have a strong modal behavior (standing waves) at certain critical frequencies that are easily excited. These are things you want to avoid and be able to predict when designing a room.

Architects and civil engineers want to control the sound field by placing absorbers, diffusers, and reflectors in appropriate locations. In concert halls, you want to maximize the listening experience where the audience is located. In office spaces, you want to avoid anything that can seem noisy and disturb the concentration of employees. In classrooms and lecture halls, you want to ensure clear perception of speech. The sound environment is important for various reasons, which is why there are national standards and regulations for the sound environment in many cases.

Refurbishing a badly designed room can be very expensive, so you do not want to rely only on measurements on scale models or measurements done after the fact. Modeling the room acoustic behavior beforehand is essential in order to optimize and perfect the design. Simulation models and measurements need to relate architectural aspects (geometry) to subjective observations using physical measures (metrics). This is done by calculating a wide range of room acoustic measures, such as the reverberation time, early decay time, clarity, and many other standardized parameters.

The modeling approach you want to adopt depends on the studied frequency (the wavelength compared to geometrical features of the room). In the Acoustics Module of the COMSOL suite of FEA software, we essentially offer three approaches packaged in three physics interfaces. The *Pressure Acoustics* interface can model the modal behavior in rooms. The *Ray Acoustics* interface and the *Acoustic Diffusion Equation* interface cover the high frequency limit or reverberant behavior (geometrical acoustics). I discuss the interfaces and their applicability in the sections below.

*Animation of the ray front positions as the rays are released inside a small concert hall. The color scale gives an impression of the ray intensity on a logarithmic scale.*

As mentioned above, room acoustics is typically divided into three categories, depending on the studied frequency. Or, more specifically, depending on the wavelength compared to the characteristic geometric features of the room in question.

In the low-frequency range, the room resonances dominate. This is known as the *modal region*. At the other end of the scale, in the high-frequency limit, the wavelength becomes smaller than the characteristic geometrical features of the room. Here, we deal with the reverberant region or the *geometrical acoustics limit*. Between the modal and the high-frequency limit, there is a so-called *transition zone*. Note that there is no clear-cut definition of this zone.

Classical room acoustics theory provides some tools that enable a back-of-the-envelope assessment of the behavior of a room. For a given room, the Schroeder frequency, f_\textrm{s}, predicts the limiting frequency between the modal behavior and the high-frequency reverberant behavior of the room.

The Schroeder frequency is given by:

(1)

f_\textrm{s} = 2000 (\textrm{m}/\textrm{s})^{3/2} \sqrt{\frac{T_{60}}{V}}

where V is the room volume and T_{60} is the reverberation time.

The equation is based on the criterion (suggested by Schroeder) that at the limit, three eigenfrequencies fall into one resonance half-width. The reverberation time (or decay time), T_{60}, is the time required for the sound pressure level (created by an impulse source) to decay 60 dB. A first simple approximate measure of the reverberation time is given by the well-established Sabine formula:

(2)

T_{60} = \frac{55.3 V}{c A}, \qquad A = \Sigma S_i \alpha_i

Here, c is the speed of sound and A is the total absorption, where S_i and \alpha_i are the surface area and absorption coefficient of the i^{th} surface, respectively.

This is possibly the best-known formula in room acoustics. The equation stems from a classical statistical room acoustics analysis assuming a pure diffuse sound field. In a diffuse sound field, the sound pressure level is uniform and the reflected sound dominates. This phenomenon is also known as a *reverberant sound field*. In such a field, the damping constant (related to the overall absorption) can be approximated and relates to the reverberation time.
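The two formulas above are easy to evaluate numerically. The following sketch (plain Python, not COMSOL code) computes the Sabine reverberation time and the resulting Schroeder frequency for a hypothetical room; the volume and all surface data are made-up example values:

```python
from math import sqrt

# Illustrative sketch: Sabine reverberation time, Eq. (2), and
# Schroeder frequency, Eq. (1), for a hypothetical rectangular room.
c = 343.0   # speed of sound in air [m/s]
V = 430.0   # room volume [m^3]

# (surface area [m^2], absorption coefficient) per wall group -- example values
surfaces = [(200.0, 0.3), (150.0, 0.1), (120.0, 0.05)]
A = sum(S * alpha for S, alpha in surfaces)  # total absorption [m^2]

T60 = 55.3 * V / (c * A)       # Sabine formula
f_s = 2000.0 * sqrt(T60 / V)   # Schroeder frequency [Hz]

print(f"A = {A:.1f} m^2, T60 = {T60:.2f} s, f_s = {f_s:.0f} Hz")
```

Note how T_{60} scales inversely with the total absorption A, while the Schroeder frequency only scales with the square root of T_{60}/V.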

The modal behavior of rooms and enclosed spaces is best analyzed by solving the Helmholtz equation or the scalar wave equation using the finite element method. In the reverberant or high-frequency limit, at frequencies above the Schroeder frequency, you may utilize two different approaches. Your choice depends on the assumptions that can be made and the desired level of detail.

The *Acoustic Diffusion Equation* interface may be used in the purely diffuse sound field limit, neglecting all direct sound. This is a fast method to assess reverberation times and sound pressure level distributions in systems of coupled rooms. The ray tracing capabilities of the *Ray Acoustics* interface provide a much more detailed picture including the direct sound and early reflections. With this interface, you also have the ability to reconstruct an impulse response.

Up to the Schroeder frequency, the modal behavior of rooms is important: standing waves dominate over the reverberant nature of the sound field. Inside a car, the transition may lie anywhere from several hundred hertz up to 1000 Hz. In a small office, it may be up to 200 Hz, while in large concert halls, the transition is typically below 50 Hz. In the small concert hall model shown below, the Schroeder frequency is 115 Hz (the reverberation time is about 1.3 s and the volume is 430 m^{3}). The modal behavior is important for subwoofer systems in cinemas, for instance.

The modal behavior as well as the room eigenfrequencies are best analyzed using the *Pressure Acoustics* interface. A frequency domain study can compute the transfer function of a bass system. You can also use it to analyze dead regions or find eigenfrequencies. A transient study is interesting when, for example, looking at bass build-up transients inside a car cabin.


*Pressure distribution for the first eigenmode inside a small room. From the Eigenmodes of a Room model.*

If you want to compute the trajectories, phase, and intensity of acoustic rays, you should choose the *Ray Acoustics* interface. Ray acoustics is a good choice when working in the high-frequency limit, where the acoustic wavelength is smaller than the characteristic geometric features. The interface is not limited to modeling acoustics in closed spaces, like rooms and concert halls, but can also be used in outdoor environments. At exterior boundaries, you can assign various wall conditions, such as combinations of specular and diffuse reflections. The wall impedance and absorption may depend on the frequency, intensity, and direction of the incident rays.

Below are two figures from the Small Concert Hall model found in the Model Gallery for the Acoustics Module.

The figure to the left depicts the ray paths for a selected number of rays emitted from a source located on the small stage. The figure to the right depicts the energy response as measured in the center of the room. The dots represent the simulated ray response (5,000 rays are released) and the green and red curves represent decay curves based on simple Sabine-like estimates of the reverberation time T_{60}. The cyan curve is a so-called *Schroeder integration* of the energy response, yielding the energy-decay curve. All four agree well when the response is measured in the center of the room.

*Left: Ray path for a selected number of rays emitted from a source located on a small stage. Right: The energy impulse response compared with two simple decay measures and the energy decay curve.*

With the *Ray Acoustics* interface, the response can be measured at any point in the concert hall. The properties of absorbers and diffusers can be both frequency dependent and angle-of-incidence dependent. Thus, the listening environment can be well described, analyzed, and optimized. The simple estimates, by contrast, are not accurate everywhere in a room, nor for complex room geometries.

The *Acoustic Diffusion Equation* interface solves a diffusion equation for the acoustic energy density distribution for room acoustics. The method is also sometimes referred to as *energy finite elements*. This method is an extension of the principles used to calculate the Sabine reverberation time in Equation 2. This particular interface is applicable for high-frequency acoustics when the acoustic fields are diffuse. The diffusion of the acoustic energy density depends on the mean free acoustic path and thus on the room geometry. Absorption may be applied at walls and a transmission loss may be applied when coupling rooms. Increased diffusion due to room fitting can be added. Material properties and sources may be specified in frequency bands.

Compared to a ray acoustics simulation, this interface does not include any phase information, direct sound, or early reflections. The interface supports stationary studies for modeling a steady-state sound energy or sound pressure level distribution. You can use a time-dependent study to determine energy decay curves and reverberation times. You can use an eigenvalue study to determine the reverberation time of coupled and uncoupled rooms. The eigenvalue is directly related to the exponential decay time and thus to the reverberation time.

We utilized all three study types in the One-Family House Acoustics model, which studies the acoustics in a single-family home with a noise source located in the living room.

*Energy flux and SPL distribution inside a two-story single-family house.*

Check back on the COMSOL blog this spring for specific blog posts about the *Acoustic Diffusion Equation* and *Ray Acoustics* physics interfaces.

In the meantime, here is a list of suggested reading material:

- H. Kuttruff, *Room Acoustics*, CRC Press, Fifth Edition, 2009.
- A. D. Pierce, *Acoustics: An Introduction to Its Physical Principles and Applications*, Acoustical Society of America, 1991.
- ISO 3382 Standard, Measurement of room acoustic parameters.
- M. R. Schroeder, *New Method of Measuring Reverberation Time*, J. Acoust. Soc. Am., 37 (1965).
- M. R. Schroeder, *Integrated-Impulse Method Measuring Sound Decay Without Using Impulses*, J. Acoust. Soc. Am., 66 (1979).

When modeling acoustics phenomena using the *Thermoacoustics* interface, there are several things to be aware of. First off, the physics have to be set up correctly and the mesh has to resolve the viscous and thermal boundary layers. It is also important to note that solving a thermoacoustic model involves solving for the pressure, velocity field (for example, 3 components in 3D), and temperature. This means that the model can become computationally expensive and involve many degrees of freedom (DOFs).

Erroneous specification of the coefficients of thermal expansion and compressibility is a problem that I often see in support cases. If these coefficients are wrong, or even evaluate to zero, the result is a model where acoustic waves (pressure or compressibility waves) propagate at the wrong speed of sound or do not propagate at all. The speed of sound is related to both of these coefficients.

A detailed description of both of these coefficients and how to define them is given in the Acoustics Module’s *User’s Guide* (under The *Thermoacoustics, Frequency Domain* Interface in the section *Thermoacoustics Model*). The model Vibrating Particle in Water: Correct Thermoacoustic Material Parameters, which can be found in the Model Library, also discusses these issues. A simple check is to plot the parameters `ta.betaT` (isothermal compressibility) and `ta.alpha0` (thermal expansion) after solving the model to ensure that they have the correct values.

When meshing a thermoacoustics model, it is important to properly resolve the acoustic boundary layer to capture the physics correctly. In order to do this and avoid too many mesh elements, there are a few tricks you can use:

- *Create parameters to control your mesh.* For example, create a parameter for the analysis frequency, say `f0`, and then also create a parameter for the viscous (or thermal) boundary layer thickness at this frequency. In air, we know that the viscous boundary layer thickness at 100 Hz is 0.22 mm and, in general, you can write the thickness as `dvisc = 0.22[mm]*sqrt(100[Hz]/f0)`. If you perform a frequency sweep, you can create parameters for the thickest and thinnest values of the boundary layer. Having these parameters at hand can help you build a good mesh.
- *Use boundary layers.* This will keep the number of mesh elements constant for all studied frequencies, which is especially important in 3D. If you simply prescribe a maximum element size on the walls, the number of mesh elements will explode as the boundary layer thickness decreases.
- *Use logical expressions when defining the mesh.* For example, use `min(,)` when defining the maximum element size or the thickness of a boundary layer. In the figure below, an example is given of a circular duct with a diameter 2a = 2 mm. The overall “Maximum element size” is set to a/3. A boundary layer mesh is used with five layers and a thickness of `min(a/30,0.3*dvisc)`. This ensures a constant mesh thickness up to around 500 Hz (keeping the mesh in the middle of the pipe of good quality); above that, the thickness decreases with `dvisc` as the frequency parameter `f0` increases.
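The parameter logic just described can be sketched in a few lines of plain Python (this is not COMSOL syntax; the 0.22 mm reference value for air at 100 Hz is taken from the text, and the duct radius matches the example above):

```python
from math import sqrt

a = 1e-3  # duct radius [m] (diameter 2a = 2 mm)

def dvisc(f0):
    """Viscous boundary layer thickness in air [m]: 0.22 mm * sqrt(100 Hz / f0)."""
    return 0.22e-3 * sqrt(100.0 / f0)

def bl_thickness(f0):
    """Boundary layer mesh thickness: min(a/30, 0.3*dvisc)."""
    return min(a / 30.0, 0.3 * dvisc(f0))

for f0 in (100.0, 500.0, 2000.0):
    print(f"f0 = {f0:6.0f} Hz: dvisc = {dvisc(f0)*1e3:.4f} mm, "
          f"mesh thickness = {bl_thickness(f0)*1e3:.4f} mm")
```

With these numbers, the fixed a/30 cap is active at low frequencies, and the 0.3·dvisc branch takes over once the boundary layer has shrunk enough, which for this 2 mm duct happens at a few hundred hertz.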

In general, when solving a model using the Frequency Domain study step, it is not possible to have the mesh depend on the frequency variable `freq`. This is what you would like for this type of model. However, it is possible to achieve this when performing a parametric sweep. One workaround is therefore to use a Parametric Sweep around the Frequency Domain study step: sweep over the parameter `f0` and set `f0` to be the frequency in the Frequency Domain step.

Note that when doing this, the COMSOL software will re-mesh every time a parameter in the mesh changes, which may slow down the computation a bit. On the other hand, you can set up a more intelligent mesh in this way and still save time.

A final option is to prepare several meshes, maybe one mesh for each chunk of 1000 Hz, and then use several studies with these meshes selected for a restricted frequency range.

*Example of a mesh that captures the effects in the acoustic boundary layer, here shown at four different frequencies. The color represents the RMS velocity for a wave traveling in an infinite circular duct with a diameter of 2 mm.*

Since it is computationally expensive to solve thermoacoustics models, it is often advantageous to do so only in the parts of your system where thermoacoustics is relevant. These simulations can then be combined with simulations based on less complex physics that describe the rest of your system. Here are some ideas on how this can be done:

- *Couple the thermoacoustics model to pressure acoustics where relevant.* In models where large differences exist in the geometry scale, only use thermoacoustics in the narrow regions and pressure acoustics in the larger domains. The *Thermoacoustics* interface is a multiphysics interface that can automatically be coupled to the *Pressure Acoustics* interface. This is exemplified in the Generic 711 Coupler model (located in both the Model Library within the software and the Model Gallery on our website).
- *Use submodels and lumped models.* For instance, extract a transfer impedance from a detailed thermoacoustic model and use it in a pressure acoustics model. A nice example of this is the Acoustic Muffler with Thermoacoustic Impedance Lumping model, in which the transfer impedance of a perforated plate is analyzed and used in a pressure acoustics model.
- *Switch to pressure acoustics at high frequencies.* As frequency increases, the acoustic boundary layer decreases in size and relevance. This means that above a certain frequency, the boundary layer losses can be considered negligible, and you can switch to solving the model as a pressure acoustics problem.
- *Use the Narrow Region Acoustics models in structures of constant cross section.* These models of the *Pressure Acoustics* interface are homogenized fluid models where the boundary layer losses are smeared over the fluid domain. They provide a good first approximation of a system’s response without the cost of solving a full thermoacoustic model.

The documentation for the *Thermoacoustics* interface contains some tips and tricks on how to use different solver approaches if the model becomes very large. See: Acoustics Module User’s Guide > The Thermoacoustics Branch > Theory Background for the Thermoacoustics Branch > Solver Suggestions for Large Thermoacoustic Models.

The most important points when modeling acoustics using the *Thermoacoustics* interface are:

- Solve only for thermoacoustics where and when necessary; investigate whether the viscous and/or thermal boundary layer thicknesses are comparable to the geometric scale (this depends on the frequency range and geometry scales).
- Check material parameters to be sure that both compressibility and thermal expansion are non-zero.
- Check the mesh size at boundaries and compare it to the viscous and thermal boundary layer thickness.

Examples of systems where the use of thermoacoustics is important are listed below.

Electroacoustic transducers are a good example of true multiphysics models where it is essential to include both thermal and viscous losses:

- Blog post: Thermoacoustics Simulation for More Robust Microphone Analysis
- Model downloads:
- B&K 4134 Condenser Microphone, results compared with measurements
- Tutorial model on a simplified 2D axisymmetric condenser microphone model

- COMSOL News article about the use of COMSOL Multiphysics to model hearing aids, “Simulation-Based Design of New Implantable Hearing Device“

- Blog post about using COMSOL Multiphysics to model MEMS microphones

- An example of a vibrating micromirror, which solves for thermoacoustics in order to model Fluid Structure Interaction (FSI) in the frequency domain

A thermoacoustics submodel is solved to find the transfer impedance of a perforated plate in a muffler system. The impedance is subsequently used as a transfer impedance condition in a Pressure Acoustics model:

- Model download: Acoustic Muffler with Thermoacoustic Impedance Lumping

Modeling the response of an Ear Canal Simulator, the so-called 711 coupler. The model results are compared to IEC standard curves and to a lossless model. The results clearly show the necessity to include thermal and viscous losses.

- Model download: Generic 711 Coupler an Occluded Ear Canal Simulator

An advanced application using the *Thermoacoustic* interface to model photoacoustics.

- Model download: Photoacoustic Resonator

- A thermoacoustic tutorial model describing the importance of setting up the compressibility and thermal expansion material parameters correctly

- COMSOL Documentation: Acoustics Module User’s Guide.
- COMSOL Documentation: Acoustics Module User’s Guide > The Thermoacoustics Branch.

It is, for example, necessary to include the thermal and viscous losses when modeling the response of small transducers, like condenser microphones, MEMS microphones, and miniature loudspeakers (i.e. receivers). Other applications include analyzing feedback in hearing aids and in mobile devices, or studying the damped vibrations of MEMS structures.

A good example to investigate here, relating to an engineering application, is the transfer impedance of the standard IEC 60318-4 occluded ear canal simulator (sometimes referred to as the 711 coupler), as depicted in the figure below. In the graph to the right, the response is modeled both including and excluding thermoacoustic losses. Comparing the curves to the standard simulator’s data, it is evident that these losses need to be included in order to capture the correct behavior.

*The pressure distribution inside an occluded ear canal simulator at 7850 Hz, complying with the IEC 60318-4 standard, is depicted to the left. The modeled transfer impedance of the coupler (in blue, including thermal and viscous losses) is shown together with the prescribed standard curves (in red), and the curve resulting from a pure lossless model (in green).*

The thermoacoustic effect is typically seen and is most pronounced at resonances, which are rounded and shift down in frequency. To model these effects, it is necessary to include thermal conduction effects and viscous losses explicitly in the governing equations, solving the momentum (Navier-Stokes), mass (continuity), and energy conservation equations. This is achieved by solving the thermoacoustics equations in the *Thermoacoustic* interface, included in the Acoustics Module. The equations are also known as the thermo-viscous acoustics, visco-thermal acoustics, and linearized Navier-Stokes equations.

Here, we will present the physical background for the thermoacoustics equations along with the characteristic boundary layer length scale. We will also provide a short description of the material parameters necessary for describing the fluid media.

Acoustic waves are the propagation of small linear fluctuations in pressure on top of a background stationary (atmospheric) pressure. The governing equations for the fluctuations (the wave equation or Helmholtz’s equation) are derived by perturbing, or *linearizing*, the fundamental governing equations of fluid mechanics — the Navier-Stokes equation, the continuity equation, and the energy equation. Doing this results in the conservation equations for momentum, mass, and energy for any small (acoustic) perturbation.

For many applications simulating acoustics, a series of assumptions are then made to simplify these equations: the system is assumed lossless and isentropic (adiabatic and reversible). Yet, if you retain both the viscous and heat conduction effects, you will end up with the equations for thermoacoustics that solve for the acoustic perturbations in pressure, velocity, and temperature.

The procedure to derive the governing equations in the frequency domain is to assume small harmonic oscillations about the steady background properties. The dependent variables take the form:

p = p_0+p' e^{i\omega t}, \quad \mathbf{u} = \mathbf{u}_0+\mathbf{u}' e^{i\omega t}, \quad T = T_0 + T' e^{i\omega t}

where p is the pressure, \mathbf{u} is the velocity field, T is the temperature, and \omega is the angular frequency. Primed (') variables are the acoustic variables, while variables with the subscript 0 represent the background mean values.

In thermoacoustics, the background fluid is assumed to be quiescent so that \mathbf{u}_0=\mathbf{0}. The background pressure p_0 and background temperature T_0 need to be specified (they can be functions of space). Inserting the above equation into the governing equations and only retaining terms linear in the first-order variables yields the governing equations for the propagation of acoustic waves including viscous and thermal losses.

Note: Details on this can be found in the User’s Guide of the Acoustics Module in the “Theory Background for the Thermoacoustic Branch” section.

The governing equations in the *Thermoacoustic* interface, in the frequency domain, are the continuity equation (omitting primes from the acoustic variables):

i\omega\rho =-\rho_0 (\nabla\cdot\mathbf{u})

where \rho_0 is the background density; the momentum equation:

i\omega\rho_0 \mathbf{u} = \nabla\cdot \left(-p\mathbf{I}+\mu ( \nabla\mathbf{u}+(\nabla\mathbf{u})^T )+\left(\mu_\textrm{B}-\frac{2}{3}\mu \right)(\nabla\cdot\mathbf{u})\mathbf{I} \right)

where \mu is the dynamic viscosity and \mu_\textrm{B} is the bulk viscosity, and the term on the right-hand side represents the divergence of the stress tensor; the energy conservation equation:

i\omega (\rho_0 C_p T - T_0 \alpha_0 p) = -\nabla\cdot(-\textrm{k}\nabla T) + Q

where C_p is the heat capacity at constant pressure, \textrm{k} is the thermal conductivity, \alpha_0 is the coefficient of thermal expansion (isobaric), and Q is a possible heat source; and finally, the linearized equation of state relating variations in pressure, temperature, and density:

\rho = \rho_0 (\beta_T p -\alpha_0 T)

where \beta_T is the isothermal compressibility.

The left-hand sides of the governing equations represent the conserved quantities: mass, momentum, and energy (actually entropy). In the frequency domain, multiplication with i\omega corresponds to differentiation with respect to time. The terms on the right-hand sides represent the processes that locally change or modify the respective conserved quantity. In two of the equations, diffusive loss terms are present, due to viscous shear and thermal conduction. Viscous losses are present when there are gradients in the velocity field, while thermal losses are present when there are gradients in the temperature.

When sound waves propagate in a fluid bounded by walls, so-called *viscous* and *thermal boundary layers* are created at the solid surfaces. At the wall, the no-slip condition applies to the velocity field, \mathbf{u} = 0, along with an isothermal condition for the temperature, T = 0. The isothermal condition is a very good approximation, as thermal conduction is typically orders of magnitude higher in solids than in fluids. These two conditions give rise to the *acoustic boundary layer*, which consists of a viscous and a thermal boundary layer. Within this layer, the flow transitions from the nearly lossless, isentropic (adiabatic) conditions of the bulk to the conditions imposed at the wall.

The problem of a time-harmonic wave propagating in the horizontal plane along a wall (this could be waves propagating in a small section of a pipe) is illustrated in the figures below. The left figure shows the velocity amplitude and the right figure the fluid’s temperature, from the wall towards the bulk, while the middle figure shows the velocity magnitude as well as an animation indicating the velocity vector over a harmonic period.

*Velocity amplitude (left) and fluid temperature (right), from the wall to the bulk, of an acoustics wave propagating in the horizontal plane (bottom). The viscous and thermal boundary layer thicknesses are indicated by the red dotted lines closest to the wall. The upper dotted lines represent 2 \pi times the boundary layer thickness, in each case. The animation indicates the acoustic velocity components, while the color plot shows velocity amplitude.*

The viscous and thermal boundary layers are clearly visible. Because gradients are large in the boundary layer, losses are large here too. This means that in systems of relatively small dimensions, the losses associated with the boundary layer become important. In many engineering applications (miniature transducers, mobile devices, etc.), including the losses associated with the boundary layer is essential in order to model the correct physical behavior and response.

The viscous characteristic length is shown as a red dotted line in the velocity and temperature plots shown above, together with 2 \pi times the value (known as the viscous/thermal wavelength). The two characteristic lengths are related by the dimensionless Prandtl number Pr:

\textrm{Pr} = \frac{C_p \mu}{\textrm{k}} \qquad \delta_\textrm{visc} = \sqrt{\textrm{Pr}} \: \delta_\textrm{therm}

which gives a measure of the ratio of the viscous to thermal losses in a system. For air, this number is 0.7, while it is around 7.1 for water. In air, the thermal and viscous effects are roughly equal in importance, while for water (and most other fluids), the thermal losses play only a minor role. The viscous and thermal boundary layer thicknesses exist as predefined variables for use in postprocessing in the Acoustics Module, denoted `ta.d_visc` and `ta.d_therm`. The Prandtl number is denoted `ta.Pr`.

The plane wave problem can be solved analytically, and expressions for the viscous (\delta_\textrm{visc}) and thermal (\delta_\textrm{therm}) boundary layer thicknesses can subsequently be derived. They are given by:

\delta_\textrm{visc} = \sqrt{\frac{2\mu}{\omega\rho_0}} \qquad \delta_\textrm{therm} = \sqrt{\frac{2 \textrm{k}}{\omega\rho_0 C_p}}

The value of \delta_\textrm{visc} is 0.22 mm for air and 0.057 mm for water at 100 Hz, 20°C, and 1 atm. The viscous and thermal boundary layer thicknesses can be plotted over a range of frequencies, as in the figures below:

*The viscous (\delta_\textrm{visc}) and thermal (\delta_\textrm{therm}) boundary layer thicknesses as functions of frequency for (left) air and (right) water.*

This shows the diminishing effect of viscous and thermal losses as the acoustic frequency increases. Finally, another important effect captured when modeling with the Thermoacoustic interface is the transition from adiabatic to isothermal acoustics at low frequencies in small devices. This effect occurs when the thermal boundary layer stretches over the full device and is important in, for example, condenser microphones, such as the B&K 4133 condenser microphone. Under isothermal conditions, the speed of sound drops to the isothermal speed of sound.
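The boundary layer expressions above are straightforward to evaluate. Here is a small sketch (plain Python, not COMSOL code) using approximate textbook material data for air and water at 20°C and 1 atm; it reproduces the 0.22 mm and 0.057 mm values quoted above as well as the Prandtl numbers of roughly 0.7 and 7:

```python
from math import pi, sqrt

def delta_visc(f, mu, rho0):
    """Viscous boundary layer thickness [m], sqrt(2*mu/(omega*rho0))."""
    return sqrt(2.0 * mu / (2.0 * pi * f * rho0))

def delta_therm(f, k, rho0, Cp):
    """Thermal boundary layer thickness [m], sqrt(2*k/(omega*rho0*Cp))."""
    return sqrt(2.0 * k / (2.0 * pi * f * rho0 * Cp))

# fluid data: (mu [Pa*s], rho0 [kg/m^3], k [W/(m*K)], Cp [J/(kg*K)])
air   = (1.81e-5, 1.20, 0.0257, 1005.0)
water = (1.00e-3, 998.0, 0.598, 4182.0)

f = 100.0  # frequency [Hz]
for name, (mu, rho0, k, Cp) in (("air", air), ("water", water)):
    dv = delta_visc(f, mu, rho0)
    dt = delta_therm(f, k, rho0, Cp)
    Pr = Cp * mu / k  # note that delta_visc = sqrt(Pr) * delta_therm
    print(f"{name}: d_visc = {dv*1e3:.3f} mm, d_therm = {dt*1e3:.3f} mm, Pr = {Pr:.2f}")
```

Both thicknesses scale as 1/sqrt(f), which is exactly the diminishing high-frequency effect described above.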

It is important to note that viscous and thermal losses also exist in the bulk of the fluid. These losses typically matter when acoustic signals propagate over long distances and are attenuated; one example of this is sonar signals. In air, the bulk losses only become dominant at very high frequencies (they can be neglected at audio frequencies). The bulk losses are, of course, also described by the governing equations for thermoacoustics, as they include all the physics. However, modeling large domains with the thermoacoustics equations is very computationally expensive. In the Acoustics Module, you should instead use the Pressure Acoustics interface and select one of the available fluid models: *Viscous*, *Thermally conducting*, or *Thermally conducting and viscous*.

Solving a full thermoacoustic model involves defining several material parameters:

- Dynamic viscosity \mu:
- The dynamic viscosity measures the fluid’s resistance to shearing; it is the constant that relates stress to velocity gradients. The dynamic viscosity is related to the kinematic viscosity \nu by the relation \mu = \rho_0 \: \nu. The symbol \eta is also sometimes used for the dynamic viscosity.

- Bulk viscosity \mu_\textrm{B}:
- The bulk viscosity is also known as the volume viscosity, the second viscosity, or the expansive viscosity. It is related to losses that appear due to the compression and expansion of the fluid. \mu_\textrm{B} appears in the stress tensor term of the momentum equation, multiplying the dilation (\nabla\cdot\mathbf{u}) of the bulk fluid. This factor is difficult to measure and is often found to depend on the frequency.

- Heat capacity at constant pressure (specific) C_p:
- This material parameter gives a measure of how much energy is required to change the temperature of the fluid (at constant pressure).

- Coefficient of thermal conduction \textrm{k}:
- The coefficient of proportionality between the temperature gradient and the heat flux in Fourier’s heat conduction law.

- Coefficient of thermal expansion (isobaric) \alpha_0:
- This is the volumetric thermal expansion of the fluid and expresses the ability of the fluid to expand when its temperature rises.

- Isothermal compressibility \beta_T:
- Important parameter in the equation of state of the fluid. It relates changes in pressure to changes in volume in the fluid. The isothermal compressibility is related to the usual (isentropic) compressibility through the ratio of specific heats by \beta_T = \gamma \beta_s.

Now that you know the theory behind thermoacoustics and the associated equations, we can move on to tips and tricks for setting up a thermoacoustic model using COMSOL Multiphysics and the Acoustics Module. We will discuss that as well as examples and applications in the next blog post of this series.

- COMSOL Documentation: Acoustics Module User’s Guide
- COMSOL Documentation: Acoustics Module User’s Guide > The Thermoacoustics Branch
- D. T. Blackstock, “Fundamentals of Physical Acoustics”, John Wiley and Sons, 2000
- S. Temkin, “Elements of Acoustics”, Acoustical Society of America, 2001
- B. Lautrup, “Physics of Continuous Matter”, Second Edition, CRC Press, 2011
- P. M. Morse and K. U. Ingard, “Theoretical Acoustics”, Princeton University Press
- A. D. Pierce, “Acoustics: An Introduction to Its Physical Principles and Applications”, Acoustical Society of America, 1989
- A. S. Dukhin and P. J. Goetz, “Bulk viscosity and compressibility measurements using acoustic spectroscopy”, J. Chem. Phys. 130, 124519 (2009)

*The Knowles SPU0409LE5FH MEMS condenser microphone with dimensions 3.76 x 3 x 1.1 mm³. Photo courtesy of Knowles Electronics*.

A MEMS microphone is a condenser microphone that comprises a MEMS die and a complementary metal-oxide-semiconductor (CMOS) die combined in an acoustic housing. The CMOS die often includes both a preamplifier and an analog-to-digital (AD) converter. Because of this, and because of its small size, the microphone is well suited for integration in digital mobile devices, smartphones, headsets, and hearing aids. The housing with the acoustic port is depicted in the image above. The condenser, or variable capacitor, consists of a highly compliant diaphragm in close proximity to a perforated, rigid backplate. The perforations permit the air between the diaphragm and backplate to escape. The diaphragm and backplate pair is referred to as the motor (shown in the figure farther down below). The microphone works by first polarizing (charging) the condenser with a DC voltage. This voltage also results in a static deformation and tensioning of the diaphragm and, to a much lesser extent, the backplate. When an acoustic signal reaches the diaphragm through the acoustic port, the diaphragm is set in motion. This mechanical deformation in turn results in an AC voltage across the microphone. These effects combine to form a true multiphysics problem well suited for analysis in COMSOL Multiphysics. The sensitivity of a microphone is expressed as the ratio of the measured voltage to the incident pressure, on the dB scale.
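To make that last definition concrete, the sensitivity in dB re 1 V/Pa can be computed as follows. The 12.6 mV output figure is a made-up example for illustration, not a value from the article:

```python
import math

def sensitivity_db(voltage_rms, pressure_rms, ref=1.0):
    """Microphone sensitivity in dB relative to ref (default 1 V/Pa)."""
    return 20.0 * math.log10((voltage_rms / pressure_rms) / ref)

# Hypothetical example: 12.6 mV output for a 1 Pa (94 dB SPL) input
print(f"{sensitivity_db(12.6e-3, 1.0):.1f} dB re 1 V/Pa")  # about -38 dB
```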

The MEMS microphone model includes a description of the electrical, mechanical, and acoustical properties of the transducer. The acoustic description includes thermal and viscous losses explicitly, solving the linearized continuity, Navier-Stokes, and energy equations, that is, *thermoacoustics*. The mechanics of the diaphragm were also modeled, including electrostatic attraction forces and acoustic loads, or *electromechanics*. A submodel was also implemented to analyze the interplay between the vibrating diaphragm and the small perforations in the microphone backplate. The model had no free fitting parameters, and it predicted both the static mechanical behavior of the MEMS motor (the diaphragm and backplate system) and the dynamic frequency response. The model results showed good agreement with measured data.

Because the geometrical dimensions of this system are so small, the vibrations of the diaphragm are highly damped by the air. The air and acoustics therefore need to be treated including both thermal conduction and viscous losses. The viscous penetration depth (the thickness of the acoustic viscous boundary layer) is, for example, 55 µm at 100 Hz and 5.5 µm at 10 kHz, larger than or comparable to the distance between the backplate and diaphragm, which is only 4 µm. The *Thermoacoustics* interface of the Acoustics Module is the natural first choice for modeling these effects. This interface also results in the correct modeling of the transition from adiabatic to isothermal behavior at low frequencies. The complex combined mechanics and electrostatics effects are all included in the *Electromechanics* interface of the MEMS Module. The two physics are fully coupled at the fluid-structure boundary by requiring continuity in the displacement/velocity field.

Because a MEMS microphone is a complex system, we faced several challenges when trying to model it in detail. Some of these included:

- During the clean room MEMS fabrication process, the diaphragm is released and bends slightly, so it is important to give a correct description of its initial shape and stress distribution.
- The geometry of the microphone is complex and involves many different aspect ratios and small length scales, so careful thinking about the mesh is important.
- Because the system is complex and involves many different physics, the resulting model can easily become too large to solve, so reducing the model by using symmetries and lumped approximations also needs to be addressed.

A classical condenser microphone, like the B&K 4134 from the Model Gallery, in essence works the same way as the MEMS microphone and involves solving the same physics. Modeling the MEMS microphone, however, involves the specific challenges mentioned above: they stem primarily from the complex fabrication process and lie in describing the initial static state and in handling the complexity of the geometry.

*Sketch of the MEMS microphone motor (not to scale). The diaphragm has a thickness of 1 µm, the gap between the backplate and the diaphragm is 4 µm, the diameter of the perforations in the backplate is 10 µm, and the thickness of the backplate is 2 µm. The distance across the motor from support post to support post is 590 µm. Sketch courtesy of Knowles Electronics*.

As a first step when planning the modeling process, we decided to focus on validating the initial stationary description of the model. One direct way to measure the stationary shape of the microphone is to record the DC capacitance as a function of the polarization voltage. The measurements are compared to the model results in the figure below, and as you can see, the two curves show good agreement. At about 15.8 V, the measured curve jumps; this corresponds to the point where the diaphragm bends so much, due to the electrostatic forces, that it touches the backplate.
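The jump can be understood from the classic one-degree-of-freedom pull-in instability of a parallel-plate electrostatic actuator. The sketch below uses hypothetical lumped parameters (stiffness, electrode area), not the actual Knowles device values, to show the mechanism:

```python
import math

eps0 = 8.854e-12  # vacuum permittivity [F/m]

# Hypothetical lumped parameters -- NOT the Knowles device values
k = 10.0         # effective diaphragm stiffness [N/m]
d = 4.0e-6       # rest gap [m] (the article quotes a 4 um gap)
A = (250e-6)**2  # effective electrode area [m^2]

# Classic 1-DOF result: the electrostatic force softens the effective
# spring and equilibrium is lost at x = d/3, the pull-in voltage being
V_pi = math.sqrt(8.0 * k * d**3 / (27.0 * eps0 * A))

def capacitance(V):
    """Equilibrium capacitance below pull-in, via fixed-point iteration
    of the force balance k*x = eps0*A*V^2 / (2*(d - x)^2)."""
    x = 0.0
    for _ in range(200):
        x = eps0 * A * V**2 / (2.0 * k * (d - x)**2)
    return eps0 * A / (d - x)

print(f"Pull-in voltage: {V_pi:.1f} V")
print(f"C(0 V)  = {capacitance(0.0)*1e12:.3f} pF")
print(f"C(10 V) = {capacitance(10.0)*1e12:.3f} pF")
```

The capacitance grows with voltage as the gap closes, and beyond V_pi no stable equilibrium exists, which is the lumped analogue of the diaphragm collapsing onto the backplate.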

*Simulation results of the microphone static capacitance as a function of the DC polarization voltage. The green curve represents measurements and the blue curve the modeled capacitance including a constant offset accounting for the constant parasitic capacitance present when performing measurements (0.23 pF). Measurements courtesy of Knowles Electronics*.

The electric potential in slices through a 30 degree cut-out of the microphone motor is depicted in the image below. The field is seen to have very strong gradients in the region where the electrodes are located, while it drops off outside of this region. The holes in the backplate are clearly seen to influence the field. The full dynamic behavior of the microphone was also analyzed solving for the structural displacement, the electric field, and the thermoacoustic fields (pressure, velocity, and temperature) in the frequency domain. This is a fully coupled multiphysics model in a complex geometry and therefore required up to 60 GB of RAM to solve. The resulting sensitivity also showed good agreement with measurements.

*Stationary electric potential depicted in slices through the MEMS microphone motor.*

A unit cell of the diaphragm and backplate system is shown in the animations below. The model represents one hole and the air gap (thin air film) between the vibrating diaphragm and the backplate (here fixed). This system is analyzed as a unit cell using symmetries. The detailed coupled acoustic behavior, including viscous and thermal losses, is again captured using the *thermoacoustics* interface. The two animations show the behavior of the instantaneous acoustic velocity distribution and temperature distribution at 10 kHz over one period. The model also solves for the pressure (not shown here). The build-up and decay of both the thermal and viscous boundary layers can be seen in the animations. At 10 kHz, the thickness of both is about 5 µm and is comparable to the air gap height of 4 µm.

*Dynamic analysis of one “unit cell” of the diaphragm and backplate system, here modeled at 10 kHz. Shown here are the instantaneous acoustic velocity magnitude (color) and velocity field (vectors).*

*Dynamic analysis of one “unit cell” of the diaphragm and backplate system, here modeled at 10 kHz. The acoustic temperature variations are depicted in this animation.*

- M. J. H. Jensen, W. Conklin, and J. Schultz, *Characterization of a microelectromechanical microphone using the finite element method*, J. Acoust. Soc. Am. **134**, pp. 4122 (2013) (conference abstract)
- B&K 4134 Condenser Microphone model in the Model Gallery
- Knowles Electronics
- Acoustics Module
- MEMS Module
- P. Loeppert and S. Lee, *Sisonic – the first commercialized MEMS microphone*, Solid-State Sensors, Actuators and Microsystems Workshop, Hilton Head Island, South Carolina, pp. 27–30 (2006)

As the Technical Product Manager for the Acoustics Module here at COMSOL, this was an event that I simply could not miss. This particular acoustics conference is a good opportunity to catch up on the latest research in many areas of acoustics, and also to see what is being done with simulation. This year, for example, there was a special session on Computational Methods in Transducer Design, Modeling, Simulation, and Optimization.

As I walked around the conference and listened to presentations in many different areas of acoustics, it was nice to see so many applications using COMSOL Multiphysics. A quick search of the conference proceedings reveals that COMSOL is mentioned in around 30 presentations. The areas of research where it is being used span acoustical oceanography, architectural acoustics, thermoacoustic phenomena, nonlinear acoustics, transducers, noise in hearing aids, the modeling of speech production, and much more. All the abstracts of the conference can be found in volume 133 of the Journal of the Acoustical Society of America. Links to the conference proceedings will be available on the International Commission for Acoustics homepage.

I was fortunate enough to be invited to present in two of the sessions (one of which was the aforementioned simulation session). The first talk was about modeling acoustic radiation forces on microparticles, work I have done together with Prof. H. Bruus from the Physics Department at DTU. My second talk was about virtual prototyping of condenser microphones, work done together with E. S. Olsen from Brüel & Kjær Sound and Vibration Measurement A/S. I will go into more detail on both presentations below.

My first talk was on “First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves” [1], a continuation of the work I mentioned in the blog entry on microparticle acoustophoresis. The aim of the work was to build a model that could predict the acoustic radiation force on a microparticle from first-principle simulation. Theoretically, the acoustic radiation force is described by complex, nonlinear governing equations that are sensitive to the detailed boundary conditions, making quantitative predictions hard to obtain. Here, COMSOL Multiphysics is a very valuable tool, as the user has full control over the solved equations as well as the solver sequence. The simulation results are compared to the analytical results that exist for spherical particles in the adiabatic case. The current model extends the known analytical results to also include thermal effects, and it includes the analysis of the radiation force exerted on ellipsoidal particles (see figure below).

*Streaming velocity amplitude around an ellipsoid microparticle.*

The second work I presented at the acoustics conference was on “Virtual prototyping of condenser microphones using the finite element method for detailed electric, mechanic, and acoustic characterization” [2]. The aim of this paper was to present a full virtual prototype of a Brüel and Kjær type 4134 microphone that could reproduce all relevant characteristic curves, that is, the sensitivity for different vent configurations, the microphone impedance, and the microphone capacitance. All of these were compared to actual measurements and showed very good agreement. The model of a microphone is a true multiphysics application that involves acoustics, membrane models, and electrostatics. The model can be used to gain insight into the detailed physical processes governing the microphone behavior. In the future, it can be used to optimize existing microphones and help develop new prototypes. Below is an animation that shows the diaphragm displacement for an oblique incident acoustic wave at 25 kHz:

*Movement of the microphone diaphragm in a BK type 4134 microphone for an oblique incident plane wave (the motion is over one period and is highly exaggerated).*

- Brüel and Kjaer 4134 Condenser Microphone model in the COMSOL Model Gallery
- Acoustofluidic Multiphysics Problem: Microparticle Acoustophoresis
- ICA-ASA Conference homepage
- Proceedings of the International Congress on Acoustics
- Acoustical Society of America
- Abstract reference 1: M. J. Herring Jensen and H. Bruus, “First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves (A)”, J. Acoust. Soc. Am. Volume 133, Issue 5, pp. 3236-3236 (2013)
- Abstract reference 2: M. J. Herring Jensen and E. S. Olsen, “Virtual prototyping of condenser microphones using the finite element method for detailed electric, mechanic, and acoustic characterization (A)”, J. Acoust. Soc. Am. Volume 133, Issue 5, pp. 3359-3359 (2013)

Damping is generally achieved by a combination of three processes: resonances that trap the acoustic energy at given frequencies, absorption of acoustic energy by different porous and fibrous lining materials, and losses in perforated plates, also known as perforates. COMSOL Multiphysics offers a multitude of techniques to model both the damping in porous materials and the acoustic properties of perforates.
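As a small illustration of the first mechanism, the resonance frequency of a Helmholtz resonator, a common reactive building block in mufflers, can be estimated with a lumped formula. The end-correction factor and the example dimensions below are my assumptions for illustration:

```python
import math

def helmholtz_frequency(c, S, V, L, d):
    """Resonance frequency of a Helmholtz resonator.
    c: speed of sound [m/s], S: neck cross-section area [m^2],
    V: cavity volume [m^3], L: neck length [m], d: neck diameter [m].
    """
    L_eff = L + 0.85 * d  # a common end-correction estimate (assumed here)
    return (c / (2.0 * math.pi)) * math.sqrt(S / (V * L_eff))

# Hypothetical resonator: 2 cm diameter neck, 3 cm long, 1 liter cavity
d = 0.02
S = math.pi * (d / 2.0)**2
f = helmholtz_frequency(343.0, S, 1.0e-3, 0.03, d)
print(f"Resonance: {f:.0f} Hz")
```

Near this frequency, the resonator traps and dissipates acoustic energy, which is exactly the trapping effect exploited in reactive muffler chambers.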

*Adding an equivalent fluid model for a porous domain is easy in COMSOL Multiphysics. Simply add a new pressure acoustics model and activate it in the domain with the porous material, then select the fluid model. The “Macroscopic empirical porous models” comprise the Delany-Bazley and the Miki models, while the “Biot equivalents” model is the Johnson-Champoux-Allard (JCA) model in a rigid or limp porous matrix frame configuration.*

As with many modeling approaches, several degrees of detail are possible in this type of analysis. You may model the system in great detail, for example, modeling every hole in a perforated plate. This approach may yield detailed results, but it is often impossible in practice due to the computational cost. You can also choose a form of model reduction, meaning you determine which approximation can be used to simplify the model. This could come in the form of lumping (describing part of your model using characteristics like stiffness, damping, and mass), employing sub-models (using the results from a detailed analysis of a small section of your model as an input to the larger model), or homogenization (treating a heterogeneous domain as a homogeneous domain with certain compensating properties). The typical approach for a porous material, for example, is to describe the wave propagation in a homogenized way, spreading out the losses by using an equivalent fluid model. Similarly, perforated plates are most often modeled using a homogenized transfer impedance approach, where the losses due to the presence of the holes are spread out over the boundary where the holes are located.

To implement these types of models in COMSOL Multiphysics, you could use some of the built-in functionality, such as selecting one of the many fluid models or using a perforated plate boundary condition. You also have the flexibility to customize the model based on your needs by defining your preferred equivalent fluid model, defining your own transfer impedance model, or defining your material properties based on measured data you import into COMSOL Multiphysics.

As I mentioned in the beginning of this blog post, a good description of damping materials and perforated plates is important when designing systems containing mufflers. It is, of course, also important in other application areas, for instance, when modeling wall lining and sound absorption for rooms (absorbers and diffusers), the acoustics of a car interior, or a loudspeaker system. In all of these cases, one has to decide how to treat and model sound propagation through materials such as textiles, foams, and porous materials.

When simulating a system that contains a porous material, you will need to decide on which modeling approach to take, as well as how your material can be described based on that approach. The damping properties of porous and fibrous materials can be included in one of the following forms in COMSOL Multiphysics:

- A surface impedance may be added at the boundary between the air domain and the porous material, so that the porous domain *itself* is not modeled, but only its “boundary” influence is. This impedance can be based on an analytical model or on measured data from an impedance tube measurement, for example.
- The porous domain can be modeled as a fluid with a given attenuation coefficient; this can be, and often is, frequency dependent. In COMSOL Multiphysics, you can enter the attenuation coefficient of a material directly, either as an analytical expression or as a frequency-dependent function based on measured material property data you import into COMSOL Multiphysics.
- The porous domain can be modeled as an equivalent fluid: a homogenized model where the porous domain is treated as a “fluid” with damping properties. COMSOL Multiphysics contains many equivalent fluid models, including the well-known Delany-Bazley, Miki, and Johnson-Champoux-Allard (JCA) models.
- Finally, it is possible to use the *Poroelastic Waves* interface to solve a detailed model of the interaction between the elastic porous matrix and the saturating fluid, based on the equations of Biot's theory. This interface can be used to model any porous material, and it solves for both the displacement of the porous matrix and the pressure in the saturating fluid.

Your choice of an approach from the above list will depend on what data is available for the porous material and on the desired level of detail. Keep in mind that in all cases it is possible to use a sub-model approach. For example, you could create a detailed model of a piece of porous lining and extract the surface impedance, then add that impedance as a boundary condition in a model of the full system.

When perforated plates are present in a muffler system, they typically contain many holes, and it is not generally expedient to model them all in detail. Moreover, in order to get the correct viscous and thermal damping of the acoustic waves (when they pass through the holes) you would need to model the acoustics in the holes using the *Thermoacoustic* interface, which will increase the computational complexity of the model. The best approach, therefore, is to model a perforated plate using a transfer impedance boundary condition; this boundary condition is available in COMSOL Multiphysics and is based on certain characteristics of your perforations such as plate thickness and hole diameter. See, for example, the Acoustic Muffler with Thermoacoustic Impedance Lumping model. This model also shows how to use a sub-model approach: here a detailed thermoacoustic model of a single hole is used to determine a transfer impedance.
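For orientation, a crude transfer impedance estimate for a perforate can be sketched from classic Crandall/Ingard-type formulas. This is a simplified stand-in, not the model built into COMSOL Multiphysics, and the plate parameters below are assumed:

```python
import math

def perforate_impedance(f, t, d, phi, rho0=1.204, mu=1.81e-5):
    """Rough transfer impedance of a perforated plate (sketch only,
    after classic Crandall/Ingard-type estimates).
    f: frequency [Hz], t: plate thickness [m], d: hole diameter [m],
    phi: porosity (hole area fraction)."""
    omega = 2 * math.pi * f
    # Viscous resistance in the holes plus surface resistance
    R = math.sqrt(2.0 * rho0 * mu * omega) * (1.0 + t / d) / phi
    # Mass reactance of the air plug with a simple end correction
    X = omega * rho0 * (t + 0.85 * d) / phi
    return complex(R, X)

# Assumed plate: 1 mm thick, 2 mm holes, 5% porosity, at 1 kHz
Z = perforate_impedance(1000.0, 1e-3, 2e-3, 0.05)
print(f"Z = {Z:.1f} Pa s/m")
```

Note how both terms scale with 1/phi: fewer holes force the same volume flow through less area, raising both the viscous resistance and the effective mass of the oscillating air plugs.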

The radiation forces act directly on the particles and occur because of momentum transfer when the acoustic field scatters off the particles (this force depends on the contrast in mechanical properties between the particle and the suspending fluid). The streaming-induced drag occurs because the acoustic field interacts with the fluid and creates a stationary bulk flow. Both effects are nonlinear and scale differently with particle size (see the difference in the results in the figures below). The processes are modeled and included in the COMSOL Multiphysics simulations presented in a paper I co-wrote with my former PhD thesis supervisor, Professor H. Bruus, and two of his PhD students, P. B. Muller and R. Barnkob, titled “A numerical study of microparticle acoustophoresis driven by acoustic radiation forces and streaming-induced drag forces”, in *Lab on a Chip* 12, 4617–4627 (2012). Professor Bruus' group works on theoretical aspects of fluid flow at the micrometer scale (microfluidics); a subset of this is acoustofluidics.
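For a small spherical particle, the radiation force in the standing-wave limit is governed by the Gor'kov contrast factors. Here is a quick sketch using typical textbook values for polystyrene in water; the material numbers are my assumptions, not the paper's parameters:

```python
# Gor'kov contrast factors for a small sphere, the standard
# small-particle limit of the acoustic radiation force.
# Typical textbook values (assumed) for polystyrene in water:
rho_p, rho_0 = 1050.0, 998.0   # densities [kg/m^3]
c_p, c_0 = 2350.0, 1483.0      # speeds of sound [m/s]

kappa_p = 1.0 / (rho_p * c_p**2)  # particle compressibility
kappa_0 = 1.0 / (rho_0 * c_0**2)  # fluid compressibility

f1 = 1.0 - kappa_p / kappa_0                        # monopole factor
f2 = 2.0 * (rho_p - rho_0) / (2.0 * rho_p + rho_0)  # dipole factor

# Acoustic contrast factor for a 1D standing wave; Phi > 0 means the
# particles are pushed toward the pressure nodes
Phi = f1 / 3.0 + f2 / 2.0
print(f"f1 = {f1:.3f}, f2 = {f2:.3f}, Phi = {Phi:.3f}")
```

The positive contrast factor explains why polystyrene particles in water collect at the pressure nodes of the standing wave.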

As we started working on how to solve the combined radiation and streaming problem, it occurred to me that this was an ideal problem to solve in COMSOL as a true multiphysics problem. The problem is handled by using functionality from the Acoustics, CFD, and Particle Tracing Modules.

First, the acoustic field is solved using the *Thermoacoustic Interface*. It is crucial to include viscosity and thermal conduction explicitly, as we needed to model and resolve the thin acoustic boundary layer in detail; it is in this micrometer-thick layer that some of the nonlinear effects are strongest. The *Thermoacoustic Interface* solves the linearized Navier-Stokes, continuity, and energy equations for a compressible fluid.

Second, products of the (first order) acoustic field are used as source terms in the *Single-phase Flow Interface*. Two sources emerge from the equations: one corresponds to a volumetric force and the other to a mass source.

Finally, the *Particle Tracing Interface* is used to track and model the movement of particles. Here we include both the acoustic radiation force (derived from the acoustic field) and the viscous drag from the streaming-induced background flow.
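A back-of-envelope balance of the radiation force against Stokes drag gives a feel for the resulting drift velocities. All numbers below are assumed, typical acoustofluidics values, not the parameters used in the paper:

```python
import math

# Assumed typical acoustofluidics numbers (not the paper's parameters)
Phi  = 0.22     # acoustic contrast factor, polystyrene in water
E_ac = 10.0     # acoustic energy density [J/m^3]
f    = 2.0e6    # ultrasound frequency [Hz]
c0   = 1483.0   # speed of sound in water [m/s]
mu   = 1.0e-3   # water viscosity [Pa s]
a    = 2.5e-6   # particle radius [m] (5 um diameter)

k = 2 * math.pi * f / c0
# Peak 1D standing-wave radiation force (Gor'kov small-particle limit)
F_rad = 4 * math.pi * Phi * k * a**3 * E_ac
# Quasi-static drift velocity from the Stokes drag balance
v = F_rad / (6 * math.pi * mu * a)

print(f"F_rad = {F_rad*1e12:.1f} pN")
print(f"drift = {v*1e6:.0f} um/s")
```

Since F_rad scales with a^3 while the Stokes drag scales with a, the radiation-driven drift grows as a^2. This is the size scaling behind the figures below, where large particles follow the radiation force and small particles follow the streaming-induced drag.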

Our microparticle acoustophoresis paper clearly showcases how COMSOL can be used to solve and couple virtually any physical phenomena described by partial differential equations and systems of ordinary differential equations. The open (non-black-box) nature of COMSOL allowed us to edit and modify the existing physics to fit the equations of this advanced application. This makes COMSOL an ideal choice for researchers solving non-standard problems.

*Figure (click on images to animate): The motion of polystyrene particles in water in a 360 µm by 160 µm microchannel cross section. Small 0.5 µm diameter particles where the motion is governed by the streaming-induced drag (top), and large 5 µm diameter particles where the motion is governed by the acoustic radiation force (bottom).*

- “A numerical study of microparticle acoustophoresis driven by acoustic radiation forces and streaming-induced drag forces”, in Lab on a Chip, 12, 4617–4627, P. B. Muller, R. Barnkob, M. J. Herring Jensen, and H. Bruus (2012)
- A tutorial in 23 papers on acoustofluidics in Lab on a Chip
- The work was also presented at the COMSOL conference in Milan in 2012

Inter-Noise is a great opportunity to meet researchers and stay up to date with the latest in the field of acoustics, noise control, and noise propagation. There will also be plenty of exhibition booths to check out at the vendor exposition. I’m the Technical Product Manager for the Acoustics Module, so this year I will be representing COMSOL at Inter-Noise. Before joining COMSOL, I worked for five years in the hearing aid industry as an acoustic finite element expert (using COMSOL Multiphysics), studying, among other things, the performance of miniature directional microphones and analyzing feedback problems in hearing aids. Now that you know why I’m looking forward to attending this event, you might be curious about the Acoustics Module. In relation to the topics covered at the conference, the Acoustics Module may, for example, be used for detailed modeling of sound-absorbing materials in panels and absorbers, and for modeling diffusers (optimizing the acoustical response and determining lumped parameters). Noise control systems can be improved through simulations of noise propagation and responses. Inter-Noise has become *the* conference where the latest in calibration of transducers, such as measurement microphones, and in measurement techniques is presented. Detailed modeling of acoustics in small devices is an area where the Acoustics Module is especially powerful with its Thermoacoustics user interface.

I’m excited to represent COMSOL at this year’s acoustics conference in NYC, and I hope to see you there too!
