A gearbox assembly generally consists of gears, shafts, bearings, and housing. When operated, a gearbox radiates noise into its surroundings for two main reasons:

- Transmission of undesired lateral and axial forces on the bearings and housing while transmitting power from one shaft to another
- Flexibility in the different parts of the gearbox, such as the gear mesh, bearings, and housing

Out of all of the components in a gearbox, the primary source of vibration or noise is the gear mesh. A typical path followed by the structural vibration, seen as the noise radiation in the surrounding area, can be illustrated like this:

The noise generated due to gear meshing can be classified into two types: *gear whine* and *gear rattle*.

Gear whine is one of the most common types of noise in a gearbox, especially when it runs under a loaded condition. Gear whine is caused by gear vibration arising from transmission error in the mesh as well as from the varying mesh stiffness. This type of noise occurs at the gear mesh frequency and typically ranges from 50 to 90 dB SPL when measured at a distance of 1 m.
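Since gear whine appears at the gear mesh frequency and its harmonics, it is easy to estimate where the tones will show up in a measured spectrum. A minimal sketch, with an assumed tooth count (the actual tooth counts of this gearbox are not given here):

```python
# Hypothetical example: gear mesh frequency and harmonics, where whine
# tones typically appear. The tooth count and speed are assumed values.

def mesh_frequency(n_teeth: int, shaft_rpm: float) -> float:
    """Fundamental gear mesh frequency in Hz."""
    return n_teeth * shaft_rpm / 60.0

f1 = mesh_frequency(n_teeth=23, shaft_rpm=5000)  # ~1917 Hz
harmonics = [k * f1 for k in (1, 2, 3)]          # tones to look for in the spectrum
```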

Gear rattle is observed mostly when a gearbox is running under an unloaded condition. Typical examples are diesel engine vehicles such as buses and trucks at idle speed. A gear rattle is an impact-induced noise caused by the unloaded gear pairs of the gearbox. Backlash, required for lubrication purposes, is one of the gear parameters that directly impact the gear rattle noise. If possible, simply adjusting the amount of backlash can reduce gear rattle.

We know that transmission error is the main cause of gear whine, but what exactly is it? When two rigid gears have a perfect involute profile, the rotation of the output gear is a function of the input rotation and the gear ratio. A constant rotation of the input shaft results in a constant rotation of the output shaft. There can be various unintended and intended reasons for modifying the gear tooth profile, such as gear runout, misalignment, and tooth tip and root relief. These geometrical errors or modifications introduce an error in the rotation of the output gear, known as the *transmission error* (TE). Under dynamic loading, the gear tooth deflection also adds to the transmission error. The combined error is known as the *dynamic transmission error* (DTE).
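To make the definition concrete, here is a minimal sketch of how a static transmission error can be evaluated, expressed as a displacement along the line of action. The function and all numbers below are purely illustrative:

```python
import math

# Illustrative sketch only: transmission error (TE) as a displacement along
# the line of action. The gear ratio and base radius are assumed values.

def transmission_error(theta_in, theta_out, gear_ratio, base_radius_out):
    """TE in meters; zero for ideal rigid involute gears."""
    ideal_out = theta_in / gear_ratio   # rotation the output *should* have
    return base_radius_out * (theta_out - ideal_out)

# Ideal pair: the output rotation matches the input divided by the ratio
assert abs(transmission_error(math.radians(10), math.radians(5), 2.0, 0.04)) < 1e-12

# A 0.01 deg deviation of the output gear maps to ~7 um along the line of action
te = transmission_error(math.radians(10), math.radians(5.01), 2.0, 0.04)
```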

Reducing gear whine or rattle to an acceptable level is a big challenge, especially for modern complex gearboxes, which consist of many gears meshing simultaneously. By accurately simulating these complex behaviors, we can design a quieter gearbox. COMSOL Multiphysics gives designers the ability to accurately identify problems and propose realistic solutions within the allowable design constraints. With such a tool, we can optimize existing designs to reduce noise problems and gain insight into new designs earlier in the process, well before the production stage.

*A gearbox model in the COMSOL Desktop®.*

Let’s consider a five-speed synchromesh gearbox of a manual-transmission vehicle in order to study the vibration and radiation of gear whine noise to the surrounding area. The gearbox is mounted in a car, where it transfers power from the engine to the wheels.

*Geometry of a five-speed synchromesh gearbox of a manual transmission vehicle.*

In order to numerically simulate the entire phenomenon of gearbox vibration and noise, we perform two analyses:

- Multibody analysis
- Acoustic analysis

In the multibody analysis, we compute the dynamics of the gears and the vibrations of the housing in the time domain at the specified engine speed and output torque. For the acoustic analysis, we compute the sound pressure levels outside the gearbox for a range of frequencies, using the normal acceleration of the housing as the noise source.

First, we look into the gear arrangement in the synchromesh gearbox. Here, helical gears are used to transfer the power from the input end of the drive shaft to the counter shaft and further from the counter shaft to the output end of the drive shaft.

*The gear arrangement in the five-speed synchromesh gearbox, excluding the synchronizing rings that connect the gears with the main shaft.*

The gears used in the model have the following properties:

Property | Value |
---|---|
Pressure angle | 25 [deg] |
Helix angle | 30 [deg] |
Gear mesh stiffness | 1e8 [N/m] |
Contact ratio | 1.25 |

All of the gears on the counter shaft are fixed to the shaft, whereas the gears on the drive shaft can rotate freely. Only one gear at a time is fixed on the shaft. In real life, this is achieved with the help of synchronizing rings. In the model, hinge joints with an activation condition are used to conditionally engage or disengage gears with the drive shaft.

The shafts are assumed rigid and rest on the housing through hinge joints, whereas the housing is assumed flexible, mounted on the ground, and connected to the engine at one end. The driving conditions considered for the simulation in terms of engine speed, load torque, and the engaged gear are as follows:

Input | Value |
---|---|
Engine speed | 5000 [rpm] |
Load torque | 1000 [N·m] |
Engaged gear | 5 |

With these settings, it is possible to run a multibody analysis and compute the housing vibrations as shown in this animation:

*The von Mises stress distribution in the housing together with the speed of different gears.*

In order to have a better understanding of the variation of normal acceleration as a function of time, we can choose any point on the gearbox housing. The time history of the normal acceleration at that point is shown below. Let’s transform this result to the frequency domain using the FFT solver. In this way, we can find the frequency content of the vibration. It is clear from the frequency response plot that the normal acceleration of the housing contains more than one dominant frequency. The frequency band in which the housing vibration is dominant is 1000–3000 Hz.

*Time history and frequency spectrum of the normal acceleration at one of the points on the gearbox housing.*
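The transformation described above can be sketched outside COMSOL as well. This hypothetical example builds a synthetic two-tone acceleration signal and uses an FFT to recover its dominant frequency content:

```python
import numpy as np

# Synthetic stand-in for the housing acceleration: two tones at 1.5 kHz
# and 2.5 kHz (assumed values), sampled at 20 kHz.
fs = 20000.0
t = np.arange(2000) / fs                       # 0.1 s of data
accel = np.sin(2*np.pi*1500*t) + 0.5*np.sin(2*np.pi*2500*t)

# One-sided amplitude spectrum, as the FFT solver would produce
spectrum = np.abs(np.fft.rfft(accel)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1/fs)

dominant = freqs[np.argmax(spectrum)]          # strongest component, ~1500 Hz
```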

Once we have simulated the vibrations in a gearbox, let’s see how to model the noise radiation in COMSOL Multiphysics. To begin, we create an air domain outside the gearbox to simulate the noise radiation into the surroundings.

In order to couple the multibody dynamics and acoustics, we assume a one-way coupling: the vibrations of the gearbox housing affect the surrounding fluid, whereas the feedback from the acoustic waves to the structure is neglected. Because the exterior fluid is air, which is much lighter than the structure, this is a good assumption.

The acoustic analysis is performed for a range of frequencies. As the multibody analysis is solved in the time domain, the FFT solver is used to convert the housing accelerations from the time domain to the frequency domain.

*The air domain enclosing the gearbox for acoustic analysis. The two microphones placed to measure noise levels are shown.*

As a source of noise, the normal acceleration of the gearbox housing is applied on the interior boundaries of the acoustics domain. In order to avoid any reflections from the exterior boundaries of the surrounding domain, we apply a spherical wave radiation condition. With these settings, we can solve for the acoustic analysis and look at the sound pressure level in the near field as well as on the surface of the gearbox housing at different frequencies. For a better understanding of the directivity of the noise radiation, we can create far-field plots in different planes at different frequencies.

*The sound pressure level in the near field (left) and at the surface of the gearbox (right).*

*The far-field sound pressure level at a distance of 1 m in the* xy*-plane (left) and* xz*-plane (right).*

After visualizing the sound pressure level in the outside field, it is interesting to find out the variation of sound pressure with frequency at a particular location. For this purpose, two microphones are placed in specific locations.

Microphone | Placement | Position |
---|---|---|
1 | Side of the gearbox | (0, -0.5 m, 0) |
2 | Top of the gearbox | (0, 0, 0.75 m) |

These microphone locations are defined in the *Parameters* node in the results and can be changed without recomputing the solution.

*The frequency spectrum of the pressure magnitude at the two microphone locations.*

The pressure response plot at the microphone locations gives a good idea of the frequency content present in the noise. However, wouldn’t it be nice if we could actually listen to the noise recorded at the microphone, just like in a physical experiment? This is possible by writing Java® code in a model method using the magnitude and phase information of the pressure as a function of frequency.
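The actual model method is written in Java® code inside COMSOL, but the idea can be sketched in a few lines of Python: sum the frequency components, using their magnitudes and phases, into a time signal and write it to a WAV file. All frequencies, magnitudes, and phases below are made-up placeholders:

```python
import wave
import numpy as np

# Made-up spectral data standing in for the computed pressure response
fs = 44100
t = np.arange(fs) / fs                         # 1 s of audio
components = [(1000.0, 1.0, 0.0),              # (frequency, magnitude, phase)
              (2000.0, 0.5, np.pi / 4)]

signal = np.zeros_like(t)
for f, mag, phase in components:
    signal += mag * np.cos(2 * np.pi * f * t + phase)

# Normalize to 16-bit integers and write a mono WAV file
samples = np.int16(signal / np.max(np.abs(signal)) * 32767)
with wave.open("gearbox_noise.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(samples.tobytes())
```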

Let’s listen to the sound files corresponding to the noise received at the two microphones…

We have already looked at the acoustics results for various frequencies. It would also be nice to see them in the time domain. Let’s transform the results from the frequency domain to the time domain using the FFT solver so that we can visualize the transient wave propagation in the surrounding area of the gearbox.

*Animation showing the transient acoustic pressure wave propagation in the surrounding area of the gearbox.*

The above approach describes a technique for coupling a multibody analysis with an acoustics simulation in order to accurately compute the noise radiation from a gearbox. This technique can be used early in the design process to improve the gearbox so that noise radiation is minimal in the range of operating speeds. Additionally, model methods — new functionality as of version 5.3 of the COMSOL Multiphysics® software — enable us to actually hear the noise generated by the gearbox, bringing the simulation one step closer to a physical experiment.

- Learn how to evaluate gear mesh stiffness
- Get started simulating gears with these tutorial models:

To develop better hearing aids, engineers continuously improve their designs to enhance sound quality, increase output, reduce feedback problems, and provide new features to help users. For instance, future versions of hearing aids may contain brain-computer interfaces to enable the hard of hearing to more easily listen to individual conversations or sounds while ignoring background noise.

*A behind-the-ear (BTE) hearing aid. Image by Udo Schröter — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

By analyzing hearing aids with simulation, engineers can develop innovative devices. For this type of study, they need to analyze how the transducer interacts with the system as a whole. This can be computationally expensive because some studies, such as a vibration isolation analysis of the transducer’s elastic mounting, require fully detailed multiphysics models that include details on the transducer’s inner workings.

For other studies, such as evaluating the electroacoustic response of a hearing aid, engineers can instead use lumped-parameter modeling. In this case, they can create a lumped parameter model of the transducer (or use a model provided by the manufacturer) and couple it to a multiphysics model that represents the rest of the system. This lumped parameter transducer model serves as an electroacoustic analogy, similar to those used in SPICE.
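As a rough illustration of what such a lumped electroacoustic analogy looks like (this is not Knowles' network), a tube feeding a small cavity can be represented by an acoustic mass (the analog of an inductor) and an acoustic compliance (the analog of a capacitor). The tube bore area below is an assumed value:

```python
import math

# Air properties at room temperature
rho, c = 1.2, 343.0          # density (kg/m^3), speed of sound (m/s)

def acoustic_mass(length, area):
    return rho * length / area          # kg/m^4 -- the analog of an inductor

def acoustic_compliance(volume):
    return volume / (rho * c * c)       # m^3/Pa -- the analog of a capacitor

# 50 mm tube with an assumed 1 mm^2 bore driving the 0.4 cc coupler volume
M = acoustic_mass(0.050, 1.0e-6)
C = acoustic_compliance(0.4e-6)

# Helmholtz-type resonance of this two-element network, ~390 Hz
f0 = 1.0 / (2.0 * math.pi * math.sqrt(M * C))
```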

Let’s discuss a multiphysics model of a Knowles ED-23146 balanced armature receiver (also known as a miniature loudspeaker), which is based on data provided by Knowles, IL, USA. This model requires both the Acoustics Module and AC/DC Module — add-on products to the COMSOL Multiphysics® software.

*A Knowles ED-23146 balanced armature receiver. Image courtesy of Knowles, IL, USA.*

We model the receiver, seen on the left-hand side of the image below, as a lumped SPICE network. This lumped receiver model is connected to a test setup that includes a 50-mm earmold tube as well as a generic simplified 0.4-cc measurement coupler (this is a standardized ear canal simulator). We use the test setup to represent the receiver in a BTE hearing aid.

*The modeled system includes a receiver, tube, coupler, and measurement microphone. Everything in blue is modeled with finite elements.*

To account for the viscous and thermal losses occurring in the narrow tube, we use the *Narrow Region Acoustics* feature in the *Pressure Acoustics, Frequency Domain* interface. We don’t include the losses caused by the impedance jump from the tube to the coupler.

Note that while narrow region acoustics models have a lower computational cost, it’s better to use full thermoviscous acoustics models when working with complicated geometries. The narrow region acoustics models are valid only for waveguides with constant cross sections. More information can be found in the *Acoustics Module User’s Guide*.

By comparing the simulation results with existing measurements, we can see that our model generates good predictions across a broad frequency band.

For example, let’s look at the response at the microphone’s location in the coupler. The image below compares the results from the full model with known measurements and those from a model without viscous and thermal acoustic losses. The full model agrees well with the known measurements, but the results don’t match at frequencies above 14 kHz. This is because the wavelength becomes comparable to the length scales of structures missing from the simplified model, such as the microphone’s protective mesh. Further, at these high frequencies, the lumped parameter model is imprecise. It is also evident that including the thermoviscous losses is important to get correct results.

*A comparison of the microphone response for a model that includes thermal and viscous losses, a model without these losses, and existing measurements. Measurement data is provided by Knowles, IL, USA.*

Next, let’s examine the frequency dependency of the transducer’s electric input impedance (real and imaginary). The results indicate that values from the simulation and measurements are in good agreement.

*The electric input impedance (both real and imaginary) as a function of the frequency. The model results are compared to existing measurements. Measurement data is provided by Knowles, IL, USA.*

We can also analyze the pressure and sound pressure level distribution within the tube and coupler system for three frequencies (1200, 3200, and 4600 Hz). The model’s evaluated frequencies correspond to the response’s first three peaks. Specifically, they relate to the tube and coupler system’s quarter-, half-, and three-quarter-wave resonances, respectively.

*The pressure distribution (left) and sound pressure level distribution (right) at three different frequencies.*

- See how impedance boundary conditions benefit acoustics models
- Read other blog posts related to acoustics modeling:

The new *Convected Wave Equation, Time Explicit* interface builds on the functionality of the Acoustics Module. The technology behind this interface comes from the discontinuous Galerkin (DG) method, also called DG-FEM, which relies on a solver that is time explicit and very memory lean. Using the *Convected Wave Equation, Time Explicit* interface enables you to efficiently solve large transient linear acoustics problems that contain many wavelengths in a stationary background flow. It also includes absorbing layers (sponge layers) that can act as effective nonreflecting boundary conditions.

*A model of an ultrasound flow meter that uses the* Convected Wave Equation, Time Explicit *interface. A turbulent flow model is also present to calculate the background flow through the flow meter.*

With the absorbing layer technology to truncate the computational domain and the memory-efficient formulation of DG-FEM, you can set up and solve very large problems in the time domain — measured in terms of the number of wavelengths that can be resolved. This makes the *Convected Wave Equation, Time Explicit* interface suited for modeling the propagation of linear acoustic signals that span large distances in relation to the wavelength.

This interface is useful for linear ultrasound applications, such as ultrasound flow meters and ultrasound sensors in which the time-of-flight parameter is considered. It is also useful for nonultrasound applications, such as the transient propagation of audio pulses in rooms and car cabins.

The governing equations solved by the *Convected Wave Equation, Time Explicit* interface are the linearized Euler equations. These equations assume an adiabatic equation of state (see Ref. 1 and Ref. 2). The mass and momentum conservation equations read:

\begin{align}
& \frac{\partial \rho}{\partial t}+\nabla\cdot(\rho \mathbf{u}_0 + \rho_0 \mathbf{u})=0 \\
& \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}_0\cdot\nabla)\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}_0 + \frac{1}{\rho_0}\nabla p -\frac{\rho}{\rho_0^2}\nabla p_0 = 0 \\
& p=c_0^2 \rho
\end{align}


The acoustic pressure *p* and the acoustic velocity perturbation *u* are the dependent variables. The speed of sound is *c*_{0}, and the steady-state mean background flow variables are defined with a subscript 0: the density *ρ*_{0}, the velocity *u*_{0}, and the pressure *p*_{0}.

The background flow can be a stationary flow with a velocity gradient that ranges from small to moderate. When the background velocity is set to zero, the equations reduce to the classical wave equation. Note that there are no physical loss mechanisms in the interface and that the above equations are stated in conservative form.

The *Convected Wave Equation, Time Explicit* interface is, as mentioned, based on the DG method, a time-explicit formulation that is memory efficient. With this method, it isn’t necessary to invert a full system matrix when stepping forward in time. In contrast, time-implicit methods require inverting this matrix, which consumes a lot of memory when solving large problems. In DG-FEM, only a few mass matrices are inverted for a reference mesh element (making them small in size) before evolving in time. The method is also quadrature free. In the DG method, computing the local flux vector and the divergence of the flux is a time-consuming step, but it can be carried out efficiently with BLAS level 3 operations.

Implicit methods are sometimes thought to be faster on small to medium problems that fit in the RAM. This is not always true. When doing a comparison, it’s important to look at the error level. When using an implicit method, it’s tempting to use a time step that is too large, which introduces errors from the time method. On the other hand, due to the discontinuous elements, the DG method is more accurate for the same polynomial order and mesh.

The FEM-based physics interfaces, such as the *Pressure Acoustics, Transient* or *Linearized Navier-Stokes, Transient* interfaces, require the use of a time-implicit method. The challenge is that the RAM consumption for implicit methods grows rapidly when increasing the model size or frequency. The latter is due to the fact that the smallest wavelength in the system must be resolved with a certain number of mesh elements. Still, FEM methods are more flexible and can easily be coupled in multiphysics applications.

In its default formulation, the *Convected Wave Equation, Time Explicit* interface uses quartic (fourth-order) shape functions, a sweet spot for speed and efficiency in wave problems solved with the DG method. This allows us to use a mesh with an element size that is about half of the wavelength for the highest-frequency component that needs to be resolved. This in turn simplifies meshing for large problems.
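In practice, the λ_min/2 rule gives a quick estimate of the largest usable mesh element. A sketch for an ultrasound case, assuming propagation in water at an assumed carrier frequency:

```python
# Estimate the largest usable mesh element from the highest frequency that
# must be resolved, using the lambda_min/2 rule for fourth-order elements.
c = 1481.0          # speed of sound in water, m/s (assumed medium)
f_max = 2.5e6       # highest frequency component, Hz (assumed carrier)

lambda_min = c / f_max
h_max = lambda_min / 2.0       # ~0.3 mm maximum element size
```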

As an example, the ultrasound flow meter model presented below consists of 7.6 million degrees of freedom (DOF) and can be solved on a desktop computer using 9.5 GB of RAM. With an implicit formulation, solving a model of this size on a desktop is inconceivable. The solution time depends less on the RAM available and more on the processor speed and number of cores available, as the code runs fully parallelized. The DG-FEM formulation is very well suited for parallelization.

In the Acoustics Module, COMSOL plans to use the DG-FEM formulation more in the future, as we believe it is truly effective for solving large wave propagation problems. We are also continuously working on improving and fine-tuning the method and solvers.

As mentioned above, the *Convected Wave Equation, Time Explicit* interface comes with an *Absorbing Layer* feature. This is a type of sponge layer similar to the perfectly matched layer (PML) that already exists in many frequency domain interfaces. The difference lies in the technique that the absorbing layer uses — it combines a scaling system, filtering, and a simple low-reflecting impedance condition.

Inside the layer domain, a coordinate scaling effectively reduces the speed of the propagating waves and bends them toward the outer boundary, so that they hit it at close to normal incidence. A filter then attenuates the high-frequency components that the scaling generates. At the layer’s outer boundary, a simple plane-wave impedance condition removes the remaining waves, since normal incidence has been ensured. The animation below, created using the Gaussian Pulse in 2D Uniform Flow: Convected Wave Equation and Absorbing Layers benchmark tutorial, shows absorbing layers in action.

*The time evolution of a Gaussian pulse in a uniform background flow. The flow moves toward the right side at Mach 0.5. The absorbing layers absorb the outgoing waves.*

As an application example for the *Convected Wave Equation, Time Explicit* interface, let’s take a look at an ultrasound flow meter set with a generic, wetted time-of-flight configuration. Wetted ultrasound flow meters have a dedicated signal tube for the ultrasound signal and the whole device is mounted into the tubing where the flow is measured. To estimate the speed of the main flow, we calculate the difference in arrival times for two signals that simultaneously traverse the flow upstream and downstream.

In our model, water fills the flow meter and the main flow tube has a diameter of 5 mm. The image below shows the background flow field when one symmetry condition is used. The average velocity in the main channel is 10 m/s. (Note that simulating the flow requires the CFD Module.) The signal tube is the small side duct sitting at a 45° angle to the main channel.

To learn more about this model, you can find step-by-step instructions in the Application Library.

*Background mean flow inside the ultrasound flow meter.*

The animation below shows the propagation of the acoustic pulse downstream in the signal tube, its interaction with the background flow, and the diffraction effects. The signal is a harmonic carrier at 2.5 MHz with a Gaussian pulse envelope. There are absorbing layers placed at the inlet and outlet of the main duct. The animation only shows the pressure signal magnitude on the symmetry plane of the system. As previously noted, this acoustics model involves solving 7.6 million DOF and can be run on a desktop using 9.5 GB of RAM.

*Propagation of the acoustic pulse downstream through the signal tube in the ultrasound flow meter.*

The figure below shows the received signals for the pulse propagating upstream (green) and downstream (blue). We measure the arrival time difference between the two signals to be 49 ns and use it to estimate the mean flow velocity, getting a value of 10.75 m/s. The actual value is 10 m/s; the difference is due to the flow profile correction factor (FPCF), an important parameter for this model. Using simulation, we can calculate the value of the FPCF, as the flow field is known *a priori* from simulations. We can also optimize the flow meter geometry and test different detection signals.

*Pressure signal profiles for the signals moving upstream and downstream in the ultrasound flow meter. The arrival time difference is used to predict the mean flow velocity in an ultrasound flow meter.*
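The time-of-flight estimate itself follows from the upstream/downstream travel times t± = L/(c ∓ v cos θ), which gives Δt ≈ 2Lv cos θ / c². A hedged sketch with an assumed acoustic path length (the model's real geometry differs):

```python
import math

# Assumed values: wave speed in water and an illustrative acoustic path
# length; the raw estimate still needs the FPCF correction described above.
c = 1481.0                 # speed of sound in water, m/s
theta = math.radians(45)   # angle between signal tube and main flow
L = 0.0071                 # acoustic path length exposed to the flow, m (assumed)
dt = 49e-9                 # measured arrival time difference, s

v_est = c**2 * dt / (2.0 * L * math.cos(theta))   # ~10.7 m/s
```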

When using the *Convected Wave Equation, Time Explicit* interface, based on the DG-FEM formulation, there are certain general modeling considerations that are good to know. Some practices differ from those associated with the FEM-based interfaces in the Acoustics Module. These guidelines are also available in the *Acoustics Module User’s Guide* under the “Modeling with the Convected Wave Equation Interface” section.

The absorbing layer is set up from the *Definitions* node in the model tree just like a PML — by adding the absorbing layer to the geometric entity that represents the layer. (As for the PML, it is good practice to use the *Layers* option when creating the geometry of your model.) Once the absorbing layer is set up, we need to place an Acoustic Impedance boundary condition on the outermost boundary of the layer. For advanced users, while the default values usually work well, the filter parameters for the absorbing layer can be modified at the topmost physics level once we enable *Advanced Physics Options* to be shown.

*Settings for the Absorbing Layer feature and the Acoustic Impedance boundary condition.*

Meshing a model using the *Convected Wave Equation, Time Explicit* interface is slightly different than with most other physics interfaces in the Acoustics Module. Since the default settings are for using fourth-order shape functions, we can usually achieve proper spatial resolution with a mesh that has an element size set to anything between λ_{min}/2 and λ_{min}/1.5. Note that the internal time-stepping size of a time-explicit method is strictly controlled by the CFL condition and thus the mesh size. This means that the smallest mesh elements in a model control the time step, so avoiding small mesh elements is a good idea if possible. The internal time step used for the *Convected Wave Equation, Time Explicit* interface is automatically selected, based on the mesh and physics, by the COMSOL Multiphysics® software.
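The CFL constraint mentioned above can be sketched as follows; the stability constant and mesh size are assumed values, and COMSOL selects the actual internal time step automatically:

```python
# The stability constant and mesh size below are assumed values; COMSOL
# selects the actual internal time step automatically from mesh and physics.
c = 1481.0        # speed of sound, m/s
u0 = 10.0         # background flow speed, m/s
h_min = 3.0e-4    # smallest mesh element size, m

CFL = 0.2         # assumed stability constant for fourth-order elements
dt_max = CFL * h_min / (c + u0)    # largest stable time step, ~40 ns
```

Because the smallest element sets this bound for the entire model, a single sliver element can slow down the whole simulation, which is why avoiding unnecessarily small elements pays off.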

When solving large transient models that include millions of DOF, the amount of output data can be very large, resulting in stored files of many GB. A good strategy for reducing the file size is to store data only on the geometric entities needed for postprocessing; for example, on a symmetry plane, along a line, or at a point. We can easily accomplish this in COMSOL Multiphysics by using the *Store fields in output* functionality (located under the *Values of Dependent Variables* section) in the study step settings. Here, we can choose the selections on which data is stored. Note that the *Times* specified in the *Study Settings* section are the times when the solution is stored — they are not related to the internal time stepping.

*Reducing the size of stored files can be done by setting the times where the solution is stored and saving data only in the selections where you need it.*

When analyzing the results from a simulation run with the *Convected Wave Equation, Time Explicit* interface, we need to remember that fourth-order elements discretize the dependent variables. This means that within a mesh element, the shape function has a lot of freedom and can contain a lot of spatial detail. We can view this detail by setting a high *Resolution* in the *Quality* section for the plots. The default plots generated when solving a model already have this option selected, with a custom resolution and the *Element refinement* set to six. When adding more user-defined plots, we must set this resolution manually.

*Setting a custom resolution with the* Element refinement *set to six ensures good spatial representation of solutions.*

An example of the difference between the custom resolution set for a default plot and the default resolution for an added user-defined plot is displayed in the figures below. At first glance, it looks like the solution on the left is incorrect. However, once the correct resolution is selected in the *Quality* section, it reveals the true wave nature of the solution.

*The acoustic velocity with an incorrect resolution (left) and the correctly configured plot with a high resolution (right).*

- Download the models highlighted in this blog post:
- Find out how to model other acoustics applications on the COMSOL Blog
- Check out other updates to the Acoustics Module on the Release Highlights page

- A. D. Pierce, *Acoustics: An Introduction to Its Physical Principles and Applications*, Acoustical Society of America (1991).
- A. D. Pierce, “Wave equation for sound in fluids with unsteady inhomogeneous flow,” *The Journal of the Acoustical Society of America*, vol. 87, p. 2292 (1990).

Can you make sound out of light? In his presentation, Carl Meinhart answers this question by starting small, with photons and phonons. The idea is that when an infrared photon interacts with matter in some manner, it could create a Stokes-shifted photon with a lower energy level. Simultaneously, the excess energy from the shift could generate an acoustic phonon. In this way, light can generate acoustics. But, as Meinhart notes in the keynote video, “it’s kind of a chicken-and-egg [scenario]; you need the acoustics and this scattered light to create each other, so they have to exist simultaneously.”

*From the video: Carl Meinhart discusses a theory behind converting light into acoustics.*

While the idea was originally predicted in the 1920s as *Brillouin scattering*, it wasn’t observed until the 1960s. Modern researchers can now turn to the COMSOL® software to analyze this theory and all of the relevant multiphysics phenomena. For a specific photonics example, Meinhart examines an innovative design from the Vahala Research Group at Caltech, a pioneer in this field. The Vahala Research Group designed an optical ring that uses whispering gallery modes for the ring instead of guided waveguides. Meinhart explains that when simulating this kind of device, “it’s very important to design the optics and the acoustics simultaneously,” a task that can be achieved with multiphysics simulation.

Through their research, the team found that their design has a very high Q factor. Research like this indicates that very sensitive high-Q resonators can be built by combining photons, phonons, and the concept of Brillouin scattering.

To try this sort of simulation yourself, download the example Meinhart mentions in his presentation, the Optical Ring Resonator Notch Filter tutorial.

Next, Meinhart turns to an industry example: maximizing the speed of a microfluidic valve. When looking to increase speed, a researcher’s first move is often to decrease inertia by making their design light and small. However, physical prototypes of small devices like microfluidic valves are expensive and time consuming to create and difficult to measure experimentally.

Instead, to analyze microfluidic devices, researchers can use the COMSOL Multiphysics® software, which Meinhart states is “an invaluable tool for this process” because “the only way you can really visualize what’s going on is through numerical simulation.”

*From the video: Carl Meinhart shares the example of a magnetically actuated microfluidic valve (left) and its approximate real-world size (right).*

For a concrete example, Meinhart considers a microfluidic valve being commercialized by Owl Biomedical, Inc. To increase their microvalve’s speed, the group tried using magnetic materials and thin silicon, which bends well and is a high-Q material. The resulting magnetically actuated device can be evaluated by importing the complicated geometry into COMSOL Multiphysics® using a product like LiveLink™ *for* SOLIDWORKS®. Then, researchers can analyze the design by combining nonlinear magnetics, fluid-structure interaction, and particle tracing simulation studies.

Initial results revealed that this microvalve design contained nonoptimal flow patterns. But, by using simulation to modify the shape over many iterations, researchers can balance the spring forces and optimize the flow and opening and closing speeds. The result? An incredibly fast microfluidic valve design that, when used to create a cell sorter, can sort 55,000 cells in 1 second or 200 million cells per hour. This optimized design has the potential to revolutionize cell sorting through Owl Biomedical’s cell sorter.

To learn more about how Carl Meinhart uses multiphysics simulation to study transport processes in photonics and microfluidics, watch the video at the top of this post.

*SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.*

Echologics provides specialized services in water loss management, leak detection, and pipe condition assessment. They developed a permanent leak detection system for pipe networks, using acoustic technology. With this solution, Sebastien says, “the pipes can talk to you.”

The location of a leak is measured using the time delay between signals captured with two sensors placed on the pipe. The time delay is determined using the correlation function. This technique also requires knowledge of the mechanical behavior of the pipe and the propagation speed of acoustic waves to accurately locate the leak. To solve this problem, Sebastien created an app using the Application Builder, a built-in tool in the COMSOL Multiphysics® software, to find the exact location of pipe leaks.
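The localization step described above reduces to simple geometry once the time delay is known. Here is a minimal sketch of the idea in Python; the function names, numbers, and brute-force correlation are mine for illustration, not Echologics’ implementation:

```python
def cross_correlation_delay(sig1, sig2, sample_rate):
    """Delay (s) of sig2 relative to sig1, via brute-force cross-correlation."""
    n1, n2 = len(sig1), len(sig2)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n2 - 1), n1):
        # Correlate sig1 shifted by `lag` against sig2.
        val = sum(sig1[i + lag] * sig2[i]
                  for i in range(n2) if 0 <= i + lag < n1)
        if val > best_val:
            best_lag, best_val = lag, val
    return -best_lag / sample_rate

def locate_leak(delay, sensor_distance, wave_speed):
    """Distance (m) from sensor 1 to the leak.

    A leak at distance d1 from sensor 1 travels an extra (D - d1) - d1
    to reach sensor 2, so delay = (D - 2 * d1) / c.
    """
    return (sensor_distance - wave_speed * delay) / 2.0
```

For example, sensors 100 m apart on a pipe with a 1200 m/s propagation speed (a made-up value; in practice this is what a speed prediction step supplies) and a 33.3 ms delay put the leak about 30 m from the first sensor.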

He explains that the app is advantageous for Echologics because its user interface is designed for ease of use in the field. This includes app dimensions that could easily fit on a tablet device when accessed with the COMSOL Server™ product, for instance. This is particularly useful for Echologics, as their field engineers travel extensively.

With apps, engineers at Echologics can easily run and rerun analyses. For example, an engineer can predict a leak location in a pipe using the app and contact the client to tell them where the leak is located. If the client recently replaced that segment of the pipe with a different material, for example, the engineer can rerun the analysis through the app and provide the exact leak location based on the new information. This enables them to quickly respond to the customer with an updated location.

During his keynote talk, Sebastien discussed how Echologics designed their app so that users can easily navigate its interface. By separating the app into five tabs, field engineers only have to calculate the information they need. For example, if an engineer using the app has already measured the speed of sound in a certain pipe segment, they don’t need to use the *Speed Prediction* tab in the app. Instead, they can simply input the measured speed in the *Leak Location* tab that calculates the results.

*From the video: Sebastien Perrier demonstrates the custom app built by Echologics for predicting the location of a pipe leak.*

After all of the information is entered into the app, it reports the leak’s location in relation to the two closest sensors. Echologics’ app also includes a *Visualization* tab so that the app users can see their results. For Sebastien, the beauty of this app is that he can “visualize and confirm” when each sensor detects the leak.

Watch Sebastien Perrier give a demonstration of this app in the keynote video at the top of this post.


To avoid detection by sonar during World War II, the German Navy covered their U-boats in rubber sheets with air holes drilled at regular intervals. The same basic technology of embedding periodic patterns in spongy coatings is still in use, although the specifics are evolving. Finding the pattern and material properties that will minimize the echo for a desired range of frequencies is not an easy task, but one that lends itself very well to modeling.

Let’s find out how you can set up a model of an anechoic coating using the COMSOL Multiphysics® software. For our demonstration, we’ll consider a coating discussed in Ref. 1. The authors of this paper propose a quadratic array of tiny cylindrical holes stamped into a thin polydimethylsiloxane (PDMS) film. The film is placed on the submarine hull with the holes facing the steel. Hence, the holes form air bubbles, even when the vessel is submerged in water. Despite having a thickness of only 0.2 mm, this setup results in less than 10% reflectance for most of the frequency range between 1 and 2.8 MHz, and less than 50% reflectance all the way up to 5 MHz.

When setting up models with periodic geometries, the first thing you want to figure out is how far you can reduce the size of the model geometry. The figure below shows the periodic pattern of air cavities. The blue dashed-line square indicates an obvious and completely general choice of unit cell. Flanked by periodic *Floquet* boundary conditions, this geometry would allow for incident radiation from an arbitrary angle. See our Porous Absorber model for an example of oblique incidence on a periodic structure.

*Top view of the periodic pattern with two candidate unit cells.*

By assuming perpendicular plane wave incidence, we can exploit not only the periodicity, but also the geometric mirror symmetries. After establishing the *x*- and *y*-plane symmetries, it can be easy to forget that there is one mirror plane left, forming a 45-degree angle with both the *x*- and *y*-axes. This leaves us with the green solid-line triangle in the illustration, constituting 1/8 of the full periodic unit cell. Keep in mind, of course, that failing to notice and use a symmetry is not the end of the world — it merely makes the model more expensive than necessary to run.

Here is what the resulting geometry looks like, with water above the PDMS and steel below it:

*Model geometry produced in COMSOL Multiphysics® with the add-on Acoustics Module.*

We will take both the steel and the water to continue indefinitely beyond the modeled geometry. While this is clearly a good assumption for the water, it may seem like a less than obvious choice for the steel. Outer submarine hulls can be just a few millimeters thick, and omitting the other side of the hull means neglecting any reflections that might occur on the inside.

However, the transmission into the steel is small because of the high acoustic impedance contrast between the PDMS and the steel. Also, much of the reflected sound would likely be absorbed by the coating. Therefore, including the full thickness of the steel domain is left as an exercise for the curious reader. If you try this, please tell us about it in the comments section!

Materials that go on “forever” can be modeled either with various low-reflecting boundary conditions or with *perfectly matched layers* (PMLs). The former work optimally under the assumption of perpendicular plane waves. PMLs are more general, making them the preferred choice in nonperiodic, open geometries. For more information on PMLs, see our blog post on perfectly matched layers for wave electromagnetics problems — the considerations and conclusions are similar in pressure acoustics and structural mechanics.

So, can we expect only perpendicular plane waves at the ends of our geometry? To know for sure, we need a primer on diffraction theory.

The transmitted and reflected waves caused by a plane wave incident on a periodic pattern can be described as a sum of plane waves propagating in a finite number of discrete diffraction angles. In the immediate vicinity of the pattern, you will, of course, also have some arbitrarily shaped evanescent fields. Nevertheless, the propagating waves are all plane.

Typically, most of the acoustic energy will end up in the “zeroth diffraction order”, which is just the refraction and mirror reflection of the incident wave. Reflected higher diffraction orders occur at angles where the path distance between radiation traveling in the same direction from two neighboring unit cells is an integer number of wavelengths. This happens according to the equation

mc_i=fd(\sin(\theta_i)+\sin(\theta_{r,m}))

Here, *m* = 0, ±1, ±2, … is the diffraction order; *c*_{i} is the pressure speed of sound in the incident medium; *f* is the frequency; *d* is the width of the repeating unit cell; *θ*_{i} is the angle of incidence; and *θ*_{r,m} is the angle of the *m*th order reflected diffracted wave.

Similarly, for the transmitted diffraction orders, we have

mc_i=fd(\sin(\theta_i)+c_i/c_t\sin(\theta_{t,m}))

with *c*_{t} being the pressure wave speed of sound in the final medium and *θ*_{t,m} the angle of the *m*th order transmitted diffracted wave.

Let us now look at the anechoic coating model, with *θ*_{i} = 0. For an *m*th order reflected diffracted wave to exist, we need

-1<\frac{mc_i}{fd}<1

So, if *f* < *c*_{i}/*d*, we have no reflected diffracted waves. In the same manner, provided *f* < *c*_{t}/*d*, we have no transmitted diffraction orders. The pressure speed of sound is higher in steel than in water, so diffraction would arise in the reflected waves first. With *d* = 120 µm and *c*_{i} = 1481 m/s, we can finally conclude that there is no diffraction at frequencies below 12.3 MHz.
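This cutoff is easy to verify numerically. A quick sketch using the inequality above with the values quoted in the text:

```python
def diffraction_cutoff(speed, cell_width):
    """Frequency below which the first (m = 1) diffraction order cannot
    propagate, from |m * c / (f * d)| < 1 at normal incidence."""
    return speed / cell_width

# Values from the text: c_i = 1481 m/s in water, unit cell d = 120 um.
f_cutoff = diffraction_cutoff(1481.0, 120e-6)  # ~12.3 MHz
```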

Having decided that PMLs are not required in the relevant frequency spectrum, we need only leave a sufficient depth of water and steel in the model so that most of the evanescent wave content will have died out before reaching the exterior boundaries. For boundary conditions, we use a *Low-Reflecting Boundary* in the steel and the pressure acoustics counterpart, *Plane Wave Radiation*, in the water.

Speaking of *Pressure Acoustics*, that interface applies both in the water and in the air cavities. When modeling small confined spaces, the *Thermoviscous Acoustics* interface can be worth considering as a potentially more accurate option. However, it is only needed if the thermal and/or viscous boundary layers have a significant thickness. At the frequencies that we are concerned with here, these layers do remain much thinner than the dimensions of the cavity.

The steel and PDMS domains are modeled with *Solid Mechanics*. If you select *Acoustic-Solid Interaction, Frequency Domain* in the COMSOL Multiphysics® *Model Wizard*, you get the two relevant interfaces and an *Acoustic-Structure Boundary* automatically connecting them together.

The model is excited with an incident perpendicular wave added to the plane wave radiation condition. To find out the transmission, reflection, and absorption coefficients, you need to extract what fraction of the energy is passing through, being reflected, and being absorbed, respectively.

The transmitted power is simple to compute. The outward mechanical energy flux is automatically available as solid.nI, so all you need to do is integrate it over the low-reflecting boundary terminating the steel domain. Divide that by the incident power, which for a plane wave has a known analytical expression, and you obtain the transmission coefficient.

The net acoustic intensity comes as a vector (acpr.Ix, acpr.Iy, acpr.Iz). To get the reflected power, take the negative of the *z*-component and subtract its integral over the inlet from the incident power. Divide by the incident power again and you have the reflection coefficient. Finally, the absorption coefficient is most conveniently achieved from the condition that all three coefficients sum up to 1.
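The bookkeeping in the last two paragraphs amounts to two divisions and an energy balance. A minimal sketch, with variable names of my own standing in for the COMSOL integration results:

```python
def power_coefficients(w_incident, w_transmitted, w_reflected):
    """Transmission, reflection, and absorption coefficients from
    integrated powers; absorption follows from T + R + A = 1."""
    T = w_transmitted / w_incident
    R = w_reflected / w_incident
    A = 1.0 - T - R  # energy balance closes the system
    return T, R, A
```

Because the absorption coefficient is obtained from T + R + A = 1 rather than computed independently, any numerical error in the two integrals lands in A.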

The plot below shows the resulting transmission, reflection, and absorption coefficients. The results are generally in good agreement with those in the paper (referenced at the end of this post).

- Look at other acoustics models and apps with periodic geometries:
- Read related blog posts:
- Learn about the transfer impedance of a perforate
- Another blog post on modeling an RF anechoic chamber discusses similar techniques applied to electromagnetic waves

- V. Leroy, A. Strybulevych, M. Lanoy, F. Lemoult, A. Tourin, and J. H. Page, *Superabsorption of acoustic waves with bubble metascreens*, Phys. Rev. B 91, 020301(R), 2015.

Topology optimization is a powerful tool that enables engineers to find optimal solutions to problems related to their applications. Here, we’ll take a closer look at topology optimization as it relates to acoustics and how we optimally distribute acoustic media to obtain a desired response. Several examples will further illustrate the potential of this optimization technique.

Many engineering tasks revolve around optimizing an existing design or a future design for a certain application. Best practices and experiences derived from years of working within a given industry are of great importance when it comes to improving designs. However, optimization problems are often so complex that it is impossible to know if design iterations are pushing things in the right direction. This is where *optimization* as a mathematical discipline comes into play.

Before we proceed, let’s review some important terminology. In optimization — be it parameter optimization, shape optimization, or in our case topology optimization — there is always at least one so-called *objective function*. Typically, we want to minimize this function. For acoustic problems, we may want to minimize the sound pressure in a certain region, whereas for structural mechanics problems, we may want to minimize the stresses in a part of a structure. We state this objective as

\min_{\chi} F (\chi)

with *F* being the objective function. A *design variable*, *χ*, is varied throughout the optimization process to reach an optimal solution. It is varied within a *design domain*, denoted *Ω _{d}*, which generally does not make up the entire finite element domain:

*The design domain is generally a subset of the entire finite element domain.*

Note that since the design variable varies as a function of space over the finite element discretized design domain, it is strictly speaking a vector of values. For this particular case, we will simply address it as a variable.

The optimization problem may have more than one objective function, and so it will be up to the engineer to decide how large of a weight each of these objectives should carry. Note that because the objectives may oppose each other during the optimization, special care should be taken when setting up the problem.

In addition to the objective function(s), there will usually be some *constraints* associated with the optimization problem. These constraints reflect some inherent size and/or weight limitations for the problem in question. With the *Optimization* interface in COMSOL Multiphysics, we can input the design variable, the objective function(s), and the constraints in a systematic way.

With topology optimization, we have an iterative process where the design variable is varied throughout the design domain. The design variable is continuous throughout the domain and takes on values from zero to one over the domain:

0 \leq \chi \leq 1\ \forall\ (x, y) \in \Omega_d.

Ideally, we want the design variable to settle near values of either zero or one. In this way, we get a near discrete design, with two distinct (binary) states distributed over the design domain. The interpretation of these two states will depend on the physics related to our optimization. Since most literature addresses topology optimization within the context of structural mechanics, we will first look at this type of physics and address its acoustics counterpart in the next section.

Topology optimization in COMSOL Multiphysics for static structural mechanics was a previous topic of discussion on the COMSOL Blog. To give a brief overview: A so-called MBB beam is investigated with the objective of maximizing the stiffness by minimizing the total strain energy for a given load and boundary conditions. The design domain makes up the entire finite element domain. A constraint is applied to the total mass of the structure. In the design space, Young’s modulus is interpolated via the design variable as

E(\chi) = \left\{ \begin{array}{ll}E_0\ \textrm{for}\ \chi=1\\0\ \textrm{for}\ \chi=0 \end{array} \right..

To help the binary design, we can use a so-called solid isotropic material with penalization (SIMP) interpolation

E (\chi) = \chi^p E_0

where *p* is the penalization factor, typically taking on a value in the range of three to five. With this interpolation (and an implicit linear interpolation of the density), intermediate values of *χ* are avoided by the solver, as they provide less favorable stiffness-to-weight ratios. I have recreated the resulting MBB beam topology from the previous blog post below.

*Recreation of the optimized MBB beam.*

In this figure, black indicates a material with a user-defined Young’s modulus of *E _{0}*. Meanwhile, white corresponds to zero stiffness, indicating that there should be no material.
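To see why the SIMP exponent drives the design toward black and white, compare the stiffness and mass fractions at an intermediate density. A small sketch of the interpolation above:

```python
def simp_stiffness(chi, e0, p=3):
    """SIMP interpolation: E(chi) = chi**p * E0."""
    return chi ** p * e0

# With p = 3, a point at chi = 0.5 keeps 50% of its mass (density is
# linear in chi) but only 0.5**3 = 12.5% of its stiffness -- a poor
# stiffness-to-weight trade, which is why the solver avoids it.
half_density_stiffness = simp_stiffness(0.5, 1.0)  # 0.125 * E0
```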

Let’s now move on to our discussion of acoustic topology optimization, where we have a frequency-dependent solution with wave propagation in an acoustic media. The design variable is now related to the physics of acoustics. Instead of having a binary *void-material* distribution of material, our goal is to have a binary *air-solid* distribution, where “solid” refers to a fluid with a high density and bulk modulus, which emulates a solid structure.

We define four parameters that describe the inertial and compressional behavior of the standard medium and the “solid” medium: Air is given a density of *ρ _{1}* and a bulk modulus of *K _{1}*, and the “solid” medium has a higher density of *ρ _{2}* and a higher bulk modulus of *K _{2}*:

\rho(\chi) = \left\{ \begin{array}{ll}\rho_2\ \textrm{for}\ \chi=1 \\ \rho_1\ \textrm{for} \ \chi=0 \end{array} \right.

and

K(\chi) = \left\{ \begin{array}{ll}K_2\ \textrm{for}\ \chi=1\\K_1\ \textrm{for}\ \chi=0 \end{array} \right..

The easiest way to obtain these characteristics is by linear interpolation between the two extreme values. This is not necessarily the best approach, since intermediate values of *χ* will not be penalized, so the optimal design may not be binary and would therefore not be feasible to manufacture. Alternative interpolation schemes are given in the literature. In the cases presented here, the so-called rational approximation of material properties (RAMP) interpolation is used (see Ref. 1).
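For illustration, here is one common form of RAMP interpolation. The exact parameterization used in Ref. 1 may differ, and the penalization parameter q below is an arbitrary choice of mine:

```python
def ramp(chi, v0, v1, q=5.0):
    """RAMP interpolation from v0 (chi = 0, air) to v1 (chi = 1, "solid").

    Unlike linear interpolation, the denominator penalizes intermediate
    chi values, which helps push the optimized design toward a binary
    air/solid distribution.
    """
    return v0 + (v1 - v0) * chi / (1.0 + q * (1.0 - chi))
```

At χ = 0.5 with q = 5, the interpolated value sits only 1/7 of the way between the two extremes instead of halfway.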

Just as with structural optimization, we define a design domain where the material distribution can take place while simultaneously satisfying the constraints. Area or volume constraints can be defined via the design variable. For example, an area constraint on the design domain can be stated as an *inequality constraint*

\int^{}_{\Omega_d} \chi d \Omega_d \leq S_r

where *S _{r}* is an area ratio between the area of the design that is assigned solid properties and the entire design domain.

Let’s first take a look at a silencer (or “muffler”) example. For simplicity, we limit ourselves to a 2D domain. A typical measure used when characterizing a silencer is the so-called transmission loss, denoted *TL*, which is a measure of power input to power output:

TL = 10 \log_{10} \left(\frac{W_i}{W_o} \right).

The transmission loss is calculated using the so-called three-point method (see Ref. 2). We use this as our objective function, seeking to maximize it at a single frequency (in this case 420 Hz):

\max_{\chi} TL (420 \text {Hz}).

Two design domains are defined above and below a tubular section. The design domain is constrained in such a way that a maximum of 5% of the 2D area is the structure and thus 95% must be air:

\int^{}_{\Omega_d} \chi d \Omega_d \leq 0.05.

The initial state for the design domain is 100% air, i.e., *χ* = 0. The animation below shows the evolution from the initial state to the resulting topology.

*An animation depicting the evolution from the initial state to the optimized silencer topology.*

The optimized structure takes on a “double expansion chamber” (see Ref. 3) silencer topology. The transmission loss has increased by approximately 14 dB at the target frequency, as illustrated in the plot below. However, at all frequencies other than the target frequency, the transmission loss has also changed, which may be of great importance for the specific application. Therefore, a single-frequency optimization may not be the best choice for the typical design problem.
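For a sense of scale, the TL definition above translates the reported 14 dB improvement into a power ratio. A minimal sketch:

```python
import math

def transmission_loss(w_in, w_out):
    """Transmission loss in dB: TL = 10 * log10(W_in / W_out)."""
    return 10.0 * math.log10(w_in / w_out)

# An extra 14 dB of TL means the transmitted power drops by a further
# factor of 10**(14 / 10), i.e., roughly 25x.
```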

*Transmission loss for the initial state and optimized silencer.*

Shifting gears, let’s now look at how to optimize for two objective functions and two frequencies. Here, we again consider a 2D room with three hard walls and a pressure input at the left side of the room. The room also includes two objective areas, *Ω _{1}* and *Ω _{2}*. The objectives are to:

- Minimize the sound pressure level in *Ω _{1}* at a frequency *f*_{1}
- Minimize the sound pressure level in *Ω _{2}* at a frequency *f*_{2} = 1.5 *f*_{1}

with the circular design domain *Ω _{d}* and an area constraint of 10% structure. The initial state is *χ* = 0, making the design domain 100% air.

*A square 2D room with a circular design domain and two objective domains.*

With more than one objective function, we must make some choices regarding the relative weights, or importance, of the different objectives. In this case, the two objectives are given equal weight, and the problem is stated as a so-called *min-max* problem:

\begin{align}

\min_{\chi} \max_{f_1, f_2} SPL_i (\chi, f_i) \\

\text {subject to} \int^{}_{\Omega_d} \chi d\Omega_d \leq 0.1.

\end{align}

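The min-max formulation above scores each candidate design by its worst frequency, which prevents the optimizer from sacrificing one objective for the other. A toy illustration with made-up SPL values:

```python
def minmax_objective(spl_values):
    """Score a design by its worst (largest) SPL over the target frequencies."""
    return max(spl_values)

# Hypothetical SPL values (dB) at f1 and f2 for two candidate designs:
design_a = [62.0, 75.0]  # excellent at f1, poor at f2
design_b = [68.0, 69.0]  # balanced at both frequencies
best = min([design_a, design_b], key=minmax_objective)  # picks design_b
```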

The figures below show the optimized topology (blue) along with the sound pressure for both frequencies using the same pressure scale. Note how the optimized topology results in a low-pressure zone (green) appearing in the upper-right corner at the first frequency. At the same time, this optimized topology ensures a similar low-pressure zone in the lower-right corner at the second frequency. This would certainly be a challenging task if trial-and-error was the only choice.

*Sound pressure for frequency* f_{1} *(left) and for frequency* f_{2} *(right). The optimized topology is shown in blue.*

As a third and final example, we’ll optimize a single objective over a frequency range. A sound source is radiating into a 2D domain, where we initially have a cylindrical sound field. Two square design domains are present, but since there is symmetry, we only consider one half of the geometry in the simulation. In this case, we want a constant magnitude of the on-axis sound pressure, *p* _{obj}, at a point 0.4 m in front of the sound source. The optimization is carried out in a frequency range of 4,000 to 4,200 Hz (50 Hz steps, a total of five frequencies). We can accomplish this via the Global Least-Squares Objective functionality in COMSOL Multiphysics, with the problem being stated as:

\begin{align}

\min_{\chi} \sum_{i=1}^{5} (\mid p_i (\chi, f_i, 0, 0.4) \mid -\overline{p}_{obj})^2 \\

\text {subject to} \int^{}_{\Omega_d} \chi d\Omega_d \leq 0.1.

\end{align}

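The objective above is just a sum of squared deviations of the pressure magnitude from the target over the five sampled frequencies. As a sketch (names and numbers are mine):

```python
def least_squares_objective(p_magnitudes, p_target):
    """Sum of squared deviations of |p_i| from the target pressure p_obj."""
    return sum((p - p_target) ** 2 for p in p_magnitudes)
```

A design that hits the target magnitude at every sampled frequency scores zero; the optimizer drives this sum down subject to the 10% area constraint.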

The initial state is again *χ* = 0. The optimized topology is shown below, along with the sound field for both the initial state and the optimized state.

*Sound pressure for the initial state (left) and optimized state (right) at 4 kHz, with the optimized topology shown in blue within the square design domains.*

Since the sound pressure magnitude in the observation point of the initial state is lower than the objective pressure, the topology optimization results in the creation of a reflector that focuses the on-axis sound. The sound pressure magnitudes before and after the optimization are shown below. The pressure magnitude is close to the desired objective pressure in the frequency range following the optimization.

*The pressure magnitude divided by* p_{obj} *for the initial and optimized topology.*

Acoustic topology optimization offers great potential for helping acoustic engineers come up with innovative designs. As I have demonstrated today, you can effectively use this technique in COMSOL Multiphysics. With proper formulations of objectives and constraints, it is possible to construct applications with new and innovative topologies — topologies that would most likely not have been found using traditional methods.

I would like to give special thanks to Niels Aage, an associate professor at the Technical University of Denmark, for several fruitful discussions on the topic of optimization.

To learn more about using acoustic topology optimization in COMSOL Multiphysics, we encourage you to download the following example from our Application Gallery: Topology Optimization of Acoustic Modes in a 2D Room.

- M.P. Bendsøe and O. Sigmund, *Topology Optimization: Theory, Methods, and Applications*, Springer, 2003.
- T.W. Wu and G.C. Wan, “Muffler performance studies using a direct mixed-body boundary element method and a three-point method for evaluating transmission loss”, Trans. ASME: *J. Vib. Acoust.*, vol. 118, pp. 479–484, 1996.
- Z. Tao and A.F. Seybert, “A review of current techniques for measuring muffler transmission loss”, *SAE International*, 2003.

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN ReSound A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN ReSound as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.


Saying that the world’s oceans are large is an understatement. Oceans cover around 71% of Earth’s surface and the deepest known point, the Challenger Deep in the Mariana Trench, extends down for about 36,000 feet (almost 11 km). To study this massive environment, researchers need powerful, far-reaching tools.

*The depth of the Challenger Deep compared to the size of Mount Everest. Image by Nomi887 — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

Ocean acoustic tomography, which involves deep-water, low-frequency sound sources, is one option for measuring the temperature of oceans. This system measures the time it takes sound signals to travel between two instruments at known locations, a sound source and a receiver. Because sound travels faster in warmer water, you can use this measurement to extract the average temperature over the distance between the source and the receiver.
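The measurement principle reduces to two steps: the travel time over a known distance gives the average sound speed, and an empirical speed-temperature relation is inverted for temperature. The sketch below uses Medwin's simplified equation for the speed of sound in seawater (a standard textbook formula, not something specified in this post), with salinity and depth held fixed; inversion by bisection works because speed increases monotonically with temperature over the oceanographic range:

```python
def medwin_speed(temp_c, salinity_ppt=35.0, depth_m=0.0):
    """Medwin's simplified equation for sound speed in seawater (m/s)."""
    t = temp_c
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (salinity_ppt - 35.0)
            + 0.016 * depth_m)

def average_temperature(distance_m, travel_time_s, lo=-2.0, hi=35.0):
    """Average temperature along the path, by bisecting the speed relation."""
    c_measured = distance_m / travel_time_s
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if medwin_speed(mid) < c_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In a real tomography system the relation would also account for the salinity and depth profiles along the path; this sketch only shows the inversion idea.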

To get these measurements, long-range ocean acoustic tomography must be able to use low-frequency signals to cover a broad frequency band, something that often requires a high-power sound source. Therefore, creating a system that can successfully cover a large frequency band, while reducing power consumption via a highly efficient design, is ideal. One particular focus in this field is on resonators, since saving energy in a resonator helps increase overall transducer efficiency in cases where the wavelength is larger than its dimension.

In response to this, Andrey K. Morozov at Teledyne Webb Research (TWR) developed a sound resonator design that is highly efficient and has a tunable resonator. While previous research involved a high-Q resonant organ pipe operating at a frequency band of 200-300 Hz, this study revolves around a new high-frequency sound source that operates at an octave band of 500-1000 Hz. Further, the new high-Q resonant organ pipe design can keep a system in resonance when the transmitted signal has a changing instantaneous frequency. With its small size, this design is helpful for shallow water experiments.

In this design, a digitally synthesized frequency sweep signal is transmitted by a sound projector. The projector and high-Q resonator tune the organ pipe so that it matches a reference signal’s frequency and phase. This resonant tube can operate at any depth, but before it was ready to hit the seas, Morozov studied its design using the COMSOL Multiphysics® software.

As we can see in the schematic below, the organ pipe device consists of slotted resonator tubes (or pipes) that are driven by a symmetrical Tonpilz transducer. The Tonpilz driver’s piezoceramic stacks move pistons and thereby vary the volume. The two symmetrical pipes, coupled through the Tonpilz transducer, function like a half-wave resonator driven by a volume velocity source.

*Image of a tunable resonant sound source and Tonpilz driver. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*
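For a rough feel for the dimensions involved, an ideal half-wave resonator satisfies f = c/(2L). This back-of-the-envelope sketch ignores end corrections, the slots, and the driver, and assumes a nominal 1481 m/s speed of sound in water, so it is not the paper's model:

```python
def half_wave_resonance(length_m, speed=1481.0):
    """Fundamental frequency of an ideal half-wave resonator: f = c / (2 L)."""
    return speed / (2.0 * length_m)

def length_for_frequency(freq_hz, speed=1481.0):
    """Pipe length needed for a half-wave fundamental at freq_hz."""
    return speed / (2.0 * freq_hz)

# A 500 Hz fundamental in water corresponds to a pipe roughly 1.5 m long.
nominal_length = length_for_frequency(500.0)
```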

Let’s focus on how these resonator tubes include slots or vents. In order to achieve smooth control of the resonance frequency, an electromechanical actuator moves two sleeves axially along the resonator tubes, maintaining a small gap in between the sleeve and pipe. Through this action, the slots are covered and the actuator can tune the organ pipe in a large frequency range. When the sleeves’ positions relative to the slot change, the equivalent acoustic impedance of the slots also changes, altering the resonance frequency of the entire resonator.

In the next section, we’ll see how simulation was used to further improve the design of the tunable organ pipe.

Morozov reduced the thickness of the resonator’s walls to make them lighter, which caused them to vibrate and store a large amount of acoustical energy. To prevent acoustical coupling between the main resonator and a mechanical part of the system, he used shock mounts to attach the main resonator pipe to the backbone rail. This design change did not completely avoid unwanted resonance effects in the tuning mechanics, so Morozov turned to simulation for further optimization.

The plot below and to the left represents the sound pressure level at resonance. Here, the vents in the main resonator pipe open and sound energy leaves the organ pipe through the resulting gap. In a low-frequency design, rounded edges in the sleeve cylinder help to prevent dual resonances in this position, but this isn’t a complete solution for a high-frequency resonator.

To learn more, the researcher studied the resonance curves for different sleeve positions, as seen below and to the right, shifting each position in 1 cm intervals.

*Left: Simulation results of a tunable organ pipe, performed for a standard spherical driver. Right: Results showing the different sleeve positions and their correlating frequency responses. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

His results showed that the vibrations in the main pipe and the resonating water beneath the sleeve can disturb the main resonance curve. Although both simulation results and experimental tests agree that this problem can be alleviated by increasing wall thickness, the resulting pipe design is too heavy.

To address this issue, Morozov easily tested different design configurations with simulation. He discovered that the tunable mechanism can be improved by ensuring that the gap between the sleeve and the main pipe is only from one of the orifice’s sides. Using this improved design as a basis, he completed additional studies, including investigating the optimal frequency, particle velocity, and sound pressure of the device, which we’ll focus on next.

*Comparing sound pressure levels and frequency in the improved design for various sleeve positions. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

In this new design, the pipe first functions as a half-wavelength resonator and radiates through its main orifices. At the end of the frequency band, the sound is mostly radiated through the completely open tuning vents, as seen in the following images. The transition between these two states is continuous.

*Absolute sound pressure when the slots are completely closed at the starting frequency range of 500 Hz (left) and when the slots are completely open at the maximum resonance frequency of 1000 Hz (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

To conclude, these simulations enabled Morozov to successfully visualize the structural acoustics of a new high-Q resonant organ pipe with an octave band of 500 to 1000 Hz and investigate important details, including the optimal profile of the opening slots.

Finally, a physical organ pipe was constructed out of aluminum using the exact dimensions of the model. The initial test pool results were similar to the simulation results and achieved the expected frequency range, although the resonance frequencies were slightly lower in these tests. This is likely explained by the elliptical shape of the pipe and the limited pool dimensions, both of which contributed to the decreased resonance frequency.

Based on these results, Morozov altered his experiment by cutting the pipes and performed another test at the Woods Hole Oceanographic Institution dock.

*The altered sound source system (left), tested at the Woods Hole Oceanographic Institution (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

The new experiment indicated that while the simulation could efficiently predict resonance frequencies, the model’s Q-factor is larger than in the experimental results. This difference is expected because real losses are hard to predict. Also, there were slight variations between the model and the realized design.

Designing a tunable resonant system is challenging because you need to precisely adjust parameters and ensure that the system achieves the necessary frequency range. Using COMSOL Multiphysics, Morozov managed to achieve the octave frequency range in his tunable sound source design before performing a large number of water tests. He found that the physical sound source parameters reasonably matched the simulation.

This improved design can help scientists measure long-range sound propagation and temperature over large distances in the ocean, allowing them to study everything from small-scale temperature fluctuations to overarching oceanic climate change.

- Download the original paper: “Simulation and Test of Tunable Organ Pipe for Ocean Acoustic Tomography”
- Head this way for an overview of the top papers and posters from the COMSOL Conference 2016 Boston
- Learn more about organ pipe design in this blog post: Hear the Sound of an Organ Pipe Design with a Simulation App

Let’s say, for example, that you’re almost done drawing the geometry of a muffler model. It’s nothing too fancy, just a couple of cylinders representing the pipe and an extruded oval forming the main chamber. Then it strikes you: Some of the sections in the pipes and the baffles separating the chambers need to be perforated — and not just by placing a hole here or there. In this case, you could use the *Array* tool to draw an arbitrarily large collection of holes, but meshing and solving the resulting geometry would take too much time and memory.

*Detail of muffler geometry, including perforated sections consisting of a few thousand holes.*

Fear not — there are better solutions. The simplest and most convenient approach is to draw the contours of the perforated regions and apply an *Interior Perforated Plate* condition. We then supply properties, such as the hole diameter and plate thickness, and get a partially transparent surface that represents the perforate.

In many cases, the accuracy of this method meets our simulation needs. However, the accuracy can be compromised if we push the limits of validity for the underlying engineering relation. For example, holes that are very small, lie too close to each other, or have a noncircular shape produce less reliable results.

*The same muffler geometry with the perforations now replaced by boundaries for applying a* Perforated Plate *or* Interior Impedance *condition.*

A more general alternative to the *Interior Perforated Plate* condition is the *Interior Impedance* condition. This allows us to specify a complex-valued transfer impedance, which represents the ratio between the pressure drop across the perforate and the normal particle velocity through it. The *Interior Perforated Plate* condition is a special predefined version of the *Interior Impedance* condition. The value of the impedance can be based on imported measurement data or an analytical expression. If we lack trustworthy measurements of the transfer impedance or a good analytical expression, the impedance can be based on numerical results. This method, as described below, makes it simple to model the impedance numerically.
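If we do extract the impedance from numerical results, the post-processing step itself is simple: divide the complex pressure drop by the surface-averaged normal velocity. A minimal sketch, with hypothetical phasor values standing in for simulation output:

```python
# Nominal characteristic impedance of air (assumed values)
RHO_AIR = 1.2   # density, kg/m^3
C_AIR = 343.0   # speed of sound, m/s

def transfer_impedance(p_up, p_down, v_normal):
    """Complex transfer impedance Z_tr = (p_up - p_down) / v_n.

    p_up, p_down: surface-averaged complex pressure phasors (Pa) on
    either side of the perforate; v_normal: surface-averaged complex
    normal-velocity phasor (m/s).
    """
    return (p_up - p_down) / v_normal

def normalized(z):
    """Specific impedance, normalized by the characteristic impedance rho*c."""
    return z / (RHO_AIR * C_AIR)

# Hypothetical phasors: the real part of Z_tr represents viscous
# (resistive) losses in the hole, while the imaginary part reflects the
# inertia of the oscillating air plug.
z = transfer_impedance(1.0 + 0.0j, 0.2 - 0.3j, 0.002 + 0.001j)
print(z, normalized(z))
```

The same two averages (pressure difference and normal velocity) are exactly what the numerical model described below provides.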

The figure below shows the considered perforate in such a model, described in our Transfer Impedance of a Perforate tutorial. The principle behind this model is rather simple: We send a plane wave towards the hole and then calculate the transfer impedance from the resulting pressure difference across it and the average velocity through it.

*Representation of the perforate with the model domain colored according to the local acoustic velocity field.*

The example model takes advantage of the available symmetries, including only a quarter of the hole and half of the distance between neighboring holes. The holes in this case have a diameter of 1 mm, which is small enough that we must consider the thermoviscous boundary layers and losses. To account for these factors, we can use the *Thermoviscous Acoustics* interface.
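The rule of thumb behind this choice can be checked with the standard boundary layer expressions, the viscous thickness δ_v = √(2μ/ρω) and the thermal thickness δ_t = δ_v/√Pr. The sketch below assumes nominal air properties at room temperature:

```python
import math

# Nominal properties of air at 20 degrees C (assumed values)
MU = 1.81e-5    # dynamic viscosity, Pa*s
RHO = 1.2       # density, kg/m^3
PRANDTL = 0.71  # Prandtl number

def viscous_bl_thickness(freq_hz):
    """Viscous acoustic boundary layer: delta_v = sqrt(2*mu / (rho*omega))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * MU / (RHO * omega))

def thermal_bl_thickness(freq_hz):
    """Thermal acoustic boundary layer: delta_t = delta_v / sqrt(Pr)."""
    return viscous_bl_thickness(freq_hz) / math.sqrt(PRANDTL)

# At 1 kHz the viscous layer is roughly 70 microns, a sizable fraction
# of the 0.5 mm hole radius, so lossless pressure acoustics is no
# longer adequate inside the hole.
print(viscous_bl_thickness(1000.0), thermal_bl_thickness(1000.0))
```

Whenever these layer thicknesses are not negligible compared with the smallest geometric dimension, a thermoviscous formulation is warranted.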

To send the plane wave, we utilize the *Background Acoustic Fields* node, a staple of the *Pressure Acoustics* interface that has recently been added to the *Thermoviscous Acoustics* interface. Its *Plane wave* option sends a pressure, velocity, and temperature distribution corresponding to a plane wave with viscous and thermal attenuation. We cap the model by adding perfectly matched layers above and below the model domain.

For even more on transfer impedances in perforated mufflers, we recommend the Thermoviscous Acoustic Impedance Lumping model. In addition to computing the transfer impedance of a perforate, this example demonstrates how to use the result in a model of an entire muffler, thereby summing up and driving home the message of this blog post.

- Try out the highlighted tutorial: Transfer Impedance of a Perforate
- Read our Thermoviscous Acoustics blog series to learn more about the theory of thermoviscous acoustics and how to model it in COMSOL Multiphysics
- Check out all of the blog posts in our Acoustics category

If you’ve ever flown on a commercial airplane, it’s likely that your flight was powered by a turbofan engine. Turbofan engines function by capturing air and sending part of it into a compressor. The compressed air then enters a combustion chamber, where it is mixed with fuel and ignited, and the expanding exhaust gases propel the plane forward.

*Left: A turbofan engine schematic. Image by K. Aainsqatsi — Own Work. Licensed under CC BY-SA 3.0, via Wikimedia Commons. Right: A real-world turbofan engine. Image by Sanjay Acharya — Own Work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.*

In recent years, the design of turbofan engines has vastly improved, with a particular emphasis on noise reduction. To understand why, consider once again being a passenger on a flight: it can be rather unpleasant to listen to a loud engine. And for people who live near airports, loud noise from planes as they land and take off can disturb sleep patterns. Reducing the noise generated by airplanes and their engines has therefore been a key point of focus in the aviation industry.

Reducing the excess fan noise that comes from turbofan aeroengines offers one potential solution to this issue. In the COMSOL Multiphysics® software, you can analyze and optimize the radiated noise from a turbofan engine to meet such goals. To learn more, let’s take a look at our simplified tutorial model of a jet pipe.

To analyze a turbofan aeroengine, we can focus on specific elements of its design. In this case, we’ll investigate the radiation of fan noise generated by a turbofan aeroengine’s annular duct. Let’s start by looking at our axisymmetric model geometry, which has a symmetry axis at the engine’s centerline. The model geometry mimics the outlet nozzle of the jet engine (see the schematic above). The gray area in the following schematic represents the interior of the engine in the nozzle. The model uses a deliberately simplified geometry and focuses on the physical principles and model setup.

*Turbofan motor geometry. The gray zone indicates the internal machinery of the engine. Air flows through the jet (M_{1}) as well as around the jet (M_{0}).*

In this model, air flows inside and outside of the duct as uniform mean flows with Mach numbers of M_{1} = 0.45 inside and M_{0} = 0.25 outside. This corresponds to the red and pink regions in the initial schematic of the turbofan engine. Since the air surrounding the engine moves at a slower speed than the air inside the jet, a vortex sheet (indicated by the dashed lines in the image above) forms in the jet stream, separating the two air flows along the extension of the duct’s wall. Using our model, we can calculate the near-field flow on both sides of the vortex sheet.
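To get a feel for why the two streams matter, consider how a uniform mean flow convects a plane wave: the axial wavenumber shrinks downstream and grows upstream. A simplified sketch (plane waves only, with an assumed sound speed, not the model's actual potential flow solution):

```python
import math

C0 = 343.0  # nominal speed of sound in air, m/s (assumed value)

def axial_wavenumber(freq_hz, mach, downstream=True):
    """Axial wavenumber of a plane wave in a uniform mean flow.

    Propagation with the flow stretches the wavelength, k = k0 / (1 + M);
    propagation against it compresses it, k = k0 / (1 - M).
    """
    k0 = 2.0 * math.pi * freq_hz / C0
    return k0 / (1.0 + mach) if downstream else k0 / (1.0 - mach)

# Inside the jet (M1 = 0.45) versus the outer stream (M0 = 0.25): the
# downstream wavelengths on the two sides of the vortex sheet differ,
# which is part of what shapes the radiated field.
k_inside = axial_wavenumber(1000.0, 0.45)
k_outside = axial_wavenumber(1000.0, 0.25)
print(k_inside, k_outside)
```

Because the convected wavenumbers differ across the sheet, the phase of the field cannot simply be matched there, which motivates the dedicated boundary condition discussed next.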

When solving our jet pipe model, we used the *Linearized Potential Flow, Frequency Domain* interface in the Acoustics Module to describe acoustic waves within a moving fluid. It’s important to note, however, that the field equations are valid only for an irrotational velocity field. Since this is not the case across a vortex sheet, the sheet has a discontinuous velocity potential. To model this discontinuity, we applied the built-in Vortex Sheet boundary condition on the interior boundaries. As for the acoustic field within the duct, we described it as the sum of the eigenmodes propagating within the duct and then radiating into free space. This is a common approach when setting up sources in this type of simulation.

For our study, we utilized a boundary mode analysis to find the inlet sources. The first step was to investigate circumferential wave numbers *(m = 4, 17, and 24)* and generate various eigenmodes that correspond to different radial mode numbers. The second step was to use three eigenmodes as incident waves inside the duct: *(m,n) = (4, 0), (17, 1), and (24, 1)*. The results indicate that the largest eigenvalue for a given *m* corresponds to the radial mode *n = 0*. The smallest eigenvalue, meanwhile, corresponds to *n = 1*.

*Plot of the eigenmodes featuring circumferential mode shapes m = 4, 17, and 24 and radial modes n = 0 and 1.*
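For a hard-walled *circular* duct (a simplification of the annular duct in the model, whose eigenvalues involve both J_m and Y_m Bessel functions), the radial eigenvalues of mode (m, n) are the zeros of J_m'. The sketch below finds the first zero for m = 4 from the Bessel power series and converts it to a cutoff frequency for an assumed duct radius:

```python
import math

def bessel_j(m, x, terms=40):
    """Bessel function J_m(x) from its power series (fine for moderate x)."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (math.factorial(k) * math.factorial(k + m)) * (x / 2.0) ** (2 * k + m)
    return total

def bessel_jprime(m, x):
    """J_m'(x) = (J_{m-1}(x) - J_{m+1}(x)) / 2 (valid for m >= 1)."""
    return 0.5 * (bessel_j(m - 1, x) - bessel_j(m + 1, x))

def first_jprime_zero(m, lo, hi, tol=1e-10):
    """Bisection for a zero of J_m' bracketed in [lo, hi]."""
    flo = bessel_jprime(m, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * bessel_jprime(m, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
            flo = bessel_jprime(m, lo)
    return 0.5 * (lo + hi)

# Cutoff frequency of mode (m, n) = (4, 0): below f_c the mode decays
# exponentially instead of propagating. Radius and sound speed here are
# assumed values, not taken from the actual model.
C0, A = 343.0, 0.5                    # m/s, m
kmn = first_jprime_zero(4, 4.0, 6.0)  # first extremum of J_4, ~5.318
f_cutoff = C0 * kmn / (2.0 * math.pi * A)
print(kmn, f_cutoff)
```

This is only an order-of-magnitude illustration of how mode numbers map to cut-on behavior; the boundary mode analysis in the model computes the true annular duct eigenmodes, including the mean flow.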

As part of our analysis, we also investigated the source velocity potential. As depicted in the plot below, we used a revolved geometry that included the circumferential wave number contribution to see its spatial shape.

*Model showing a boundary mode of (m, n) = (4, 0).*

To gain further confidence in the results of our analyses, we compared our simulation findings to the results presented in the paper “Theoretical Model for Sound Radiation from Annular Jet Pipes: Far- and Near-Field Solutions” (see Ref. 1 in the model documentation). The plots below, for instance, showcase the near-field pressure from different source eigenmodes in our simulation study. All of the results are solved for a Mach number of M_{1} = 0.45 inside the pipe and M_{0} = 0.25 outside of the pipe.

*From left to right: The near-field solution for (m, n) = (4, 0), (17, 1), and (24, 1).*

Further, we analyzed the near-field sound pressure level and the revolved geometry’s near-field pressure. The results from these studies are highlighted in the plots below, respectively.

*Left: Near-field sound pressure level for (m, n) = (24, 1). Right: Near-field pressure shown in the revolved geometry for (m, n) = (4, 0).*

By comparing our findings to the established literature highlighted above, we were able to further confirm the validity of our results. Such accuracy speaks to the benefits of using COMSOL Multiphysics to help reduce noise pollution in turbofan engine designs and thus facilitate important advancements within the aviation industry.

- Try the Jet Pipe tutorial model that was presented here
- Read more aeroacoustics posts on the COMSOL Blog:

Brüel & Kjær, an industry leader in sound and vibration measurement for over 40 years, caters to customers like Airbus, NASA, Ferrari, and more. Their microphones range from working standard microphones to ones that are custom-made for specific applications. They cover a range of frequencies as well, from infrasonic to ultrasonic. For each desired application and frequency, there are multiple factors in a microphone’s design that affect its performance.

*A 4134 microphone including the protective grid covering the diaphragm.*

When sound enters a microphone, the sound pressure waves cause the diaphragm to vibrate, and these vibrations are then converted into an electrical signal. This process means that modeling a microphone requires accounting for mechanical, electrical, and acoustic phenomena in a tightly coupled setup — something that could only be achieved with a multiphysics simulation tool. To see whether a microphone’s design is consistent and reliable, Brüel & Kjær use COMSOL Multiphysics® software to test the precision of their devices and verify new designs.

The Brüel & Kjær Type 4134 condenser microphone, shown below, is a popular prototype for developing condenser microphones. Simulating condenser microphones requires modeling the diaphragm’s movement, membrane deformations, resonance frequency, and viscous and thermal acoustic losses. Due to a microphone’s small dimensions and large aspect ratios, the thermal and viscous losses affect its performance considerably. All of this means that the model needs to contain a lot of detail in order to be accurate.

*Geometry of the Type 4134 microphone showing the mesh used in the reduced sector geometry.*
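A first estimate of the diaphragm resonance that such a model must capture comes from the ideal clamped circular membrane formula, f = (α₀₁/2πa)·√(T/σ). The values below are illustrative only (not actual Type 4134 parameters), and the in-vacuo estimate ignores the air loading and backplate damping that the full model accounts for:

```python
import math

ALPHA_01 = 2.4048  # first zero of the Bessel function J_0

def membrane_fundamental(radius_m, tension_n_per_m, surface_density):
    """In-vacuo fundamental of an ideal clamped circular membrane:
    f = (alpha_01 / (2*pi*a)) * sqrt(T / sigma).
    """
    wave_speed = math.sqrt(tension_n_per_m / surface_density)
    return ALPHA_01 * wave_speed / (2.0 * math.pi * radius_m)

# Illustrative values only (NOT actual Type 4134 parameters): a 4.5 mm
# radius metal diaphragm, 3000 N/m tension, and a surface density of
# 0.0445 kg/m^2 place the resonance in the low tens of kilohertz.
f0 = membrane_fundamental(4.5e-3, 3000.0, 0.0445)
print(f0)
```

The usefulness of such a hand formula ends roughly here: the coupled air film, vent, and electrostatic effects all shift the response, which is precisely why the full multiphysics model is needed.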

To reduce the calculation time while maintaining accuracy, the researchers took advantage of model symmetry to compute thermal stress and resonance frequency. Sound pressure can be simulated with this method as well, but only when the sound is at a normal incidence to the diaphragm. When the sound wave is at a non-normal incidence, a nonsymmetric boundary condition can be used.

After verifying the simulation of the Type 4134 microphone, the researchers modeled other types with parameters that could not be observed in practice. For example, they studied how an air vent affects a microphone’s ability to measure low-frequency sounds. Simulation allowed Brüel & Kjær to test innovative designs and make changes as needed. They can even create custom devices for customers on a case-by-case basis.

In addition to improving their microphones, engineers at Brüel & Kjær also use multiphysics simulation to optimize and test their vibration transducer designs. They aim to create one with a high built-in resistance to withstand harsh environments. To accomplish this, the engineers must create a device that doesn’t have a resonant frequency in the vibration range that it would measure. Resonating in the desired vibration range would compromise the accuracy of the measurement.

*Simulation results of a suspended piezoelectric vibration transducer.*

To design a device that produces a flat response, researchers tried different combinations of materials and geometry. By adding a mechanical filter, they designed a vibration transducer with an error range of no more than 10 to 12%, which is well within the acceptable limits.
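The link between resonance placement and measurement error can be illustrated with the textbook single-degree-of-freedom transducer response (an idealization, not B&K's actual filtered design): below resonance, |H| = 1/(1 − (f/f_n)²), so an amplitude-error budget fixes the usable fraction of the resonance frequency:

```python
import math

def usable_frequency_fraction(error_tol):
    """Highest f/f_n at which an undamped single-DOF transducer's
    magnitude response |H| = 1 / (1 - (f/f_n)^2) stays within
    `error_tol` (e.g., 0.10 for 10%) of its flat low-frequency value.

    Setting 1 / (1 - r^2) = 1 + e and solving gives r = sqrt(e / (1 + e)).
    """
    return math.sqrt(error_tol / (1.0 + error_tol))

# A 10-12% amplitude-error budget keeps the usable band below roughly a
# third of the resonance frequency, which is why the transducer's
# resonance must sit well above the vibration range it measures.
print(usable_frequency_fraction(0.10), usable_frequency_fraction(0.12))
```

A mechanical filter effectively reshapes this response near resonance, extending the band over which the error stays inside the budget.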

No device can be perfect, but simulation provides a way to get as close to perfect as possible. Engineers at Brüel & Kjær can quickly and efficiently test new designs in different scenarios, getting results that they couldn’t determine experimentally. This provides them with unique knowledge to help them create innovative designs and stay ahead of the competition.

- Download models similar to those shown here:
- Want to learn how to model microphones and transducers in COMSOL Multiphysics? Check out these blog posts:
- See how other people are using COMSOL Multiphysics in *COMSOL News* 2015