When molten metal solidifies, grains begin to form. These affect the physical properties of the solid metal; for instance, a smaller grain size makes the metal stronger. The grain of a metal is affected by many factors, such as temperature and cooling time. Engineers can also adjust grain morphology during the metal solidification process by using the physical phenomenon of acoustic streaming (AS) to induce drag on particles.
Molten metal being processed. Image by Goodwin Steel Castings — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
In AS, a sonotrode (an oscillating sound emitter) is placed in a liquid, where it generates a steady fluid motion. To produce a significant effect, the sound waves need a high amplitude and frequency, often in the ultrasound range, which is why the technique is typically applied as an ultrasonic treatment.
To further improve and develop AS, engineers rely on costly physical experiments. Simulation provides an alternative, enabling those in the metal processing industry to build models to test AS treatments that use different materials and fluids. These models can then be validated via experimental testing.
Let’s take a look at how researchers from the Institute of Thermal and Fluid Engineering and Institute of Product and Production Engineering at the University of Applied Sciences Northwestern Switzerland investigated this possibility.
The research team’s goal was to build and experimentally validate an AS model that can analyze a variety of fluids by altering the parameters. The resulting 2D axisymmetric model represents an experimental setup of an oscillating sonotrode placed into a fluid and generating a harmonic acoustic pressure field. They simplified this model by assuming isothermal behavior, neglecting cavitation, and using an averaged stationary flow.
Acoustic streaming sample geometry. The numbered dots represent the boundary positions and the colored dots show the locations of the three tracing massless particles. Image by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
Since AS is a multiphysics phenomenon, the researchers accounted for two physics phenomena:

- Acoustics, to compute the harmonic pressure and velocity fields generated by the oscillating sonotrode
- Fluid flow, to compute the steady streaming motion driven by the acoustic field
Note that accounting for the acoustics in the high-frequency domain requires a compressible fluid description, due to the coupling of density and pressure perturbations.
The researchers solved the model equations with three study steps:

1. A frequency-domain study for the acoustic field
2. A stationary study for the induced fluid flow
3. A time-dependent study for tracing the massless particles
The first study shows that the sonotrode acceleration causes a sharp rise in acoustic particle velocity. Using the acoustic velocity field found here, the force term for the second study is generated.
Frequency-domain results, showing the acoustic velocity field. Image by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
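The idea behind generating the force term for the second study can be illustrated with a small numerical sketch. As a simplified 1D illustration (not the paper's actual implementation), the time-averaged streaming body force from a harmonic velocity phasor u can be written as F = −ρ₀/2 · Re(u du*/dx), using the identity that the time average of a product of two harmonic signals is half the real part of one phasor times the conjugate of the other. The material values below are rough numbers for an aluminum melt:

```python
import numpy as np

# Hypothetical 1D illustration: time-averaged acoustic streaming force
# density from a first-order (harmonic) acoustic velocity field.
# For phasors a and b, the time average of Re(a e^{iwt}) * Re(b e^{iwt})
# is 0.5 * Re(a * conj(b)), so a 1D streaming body force reduces to
#   F = -rho0 * 0.5 * Re( u * conj(du/dx) )

rho0 = 2375.0                     # density of molten aluminum, kg/m^3 (approximate)
k = 2 * np.pi * 20e3 / 4600.0     # wavenumber at 20 kHz, assuming c ~ 4600 m/s

x = np.linspace(0.0, 0.1, 2001)   # 10 cm fluid column
u = 0.5 * np.sin(k * x) + 0.0j    # standing-wave velocity phasor, m/s

dudx = np.gradient(u, x)                        # numerical derivative of the phasor
F = -rho0 * 0.5 * np.real(u * np.conj(dudx))    # streaming force density, N/m^3

print(F.max())
```

The resulting force field would then be fed into a stationary flow study as a volume force, which mirrors the sequential study-step structure described above.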
The fluid flow study investigates the flow pattern, beginning with the axial jet exiting from the actuating sonotrode tip. The jet continues to the bottom wall, where it is deflected and creates vortices in the bottom corners. The generated flow is almost at a standstill in the open interface zones and has the highest value beneath the sonotrode.
Left: Stationary velocity field for an aluminum melt with a frequency of 20 kHz and 30 µm amplitude. Right: Comparison of the velocities of the three massless particles, with the colors matching those in the sample geometry. Images by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
When looking at the dispersal of the three separate tracing particles, the model demonstrates that the particle beneath the sonotrode (depicted as the black line in the right plot above) undergoes a high acceleration, which increases with the number of cycles.
To experimentally validate their simulations, the researchers created a small-scale laboratory model involving an aluminum sonotrode partially submerged in a fluid-filled crucible. Next, they performed experiments at a frequency of 20 kHz and at three different amplitudes: 10, 20, and 30 µm. To track the fluorescent particles used in the experiment, the team relied on a combination of a high-speed camera, diode laser, and lasersheet. They then determined the correlated velocity field via particle image velocimetry.
The experimental setup. Original image by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
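The core of particle image velocimetry can be sketched in a few lines: the displacement of the particle pattern between two frames is found from the peak of their cross-correlation, computed efficiently with FFTs. The frames and the imposed shift below are synthetic; real PIV adds interrogation windows, subpixel peak fitting, and outlier rejection:

```python
import numpy as np

# Minimal sketch of the core of particle image velocimetry (PIV):
# estimate the displacement of a particle pattern between two frames
# from the peak of their FFT-based cross-correlation. Frame data and
# the shift are synthetic stand-ins for real camera images.

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))                   # synthetic particle image
shift = (3, -5)                                  # known displacement in pixels
frame_b = np.roll(frame_a, shift, axis=(0, 1))   # second exposure

# Circular cross-correlation via FFT (matches the np.roll test data)
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Map peak indices to signed displacements
dy = peak[0] if peak[0] <= 32 else peak[0] - 64
dx = peak[1] if peak[1] <= 32 else peak[1] - 64
print(dy, dx)   # recovered displacement: 3, -5
```

Dividing such displacements by the interframe time and the image scale yields the velocity field that the researchers compared with their simulations.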
When testing with seed oil as the liquid, the axial jet is visible in both the simulation and experimental results, as shown below. While the results don't completely line up, they do match on the right side of the crucible in terms of the direction and location of the induced flow.
The velocity fields for the simulation (left) and seed oil test case (right) with a frequency of 20 kHz and amplitude of 30 µm. Images by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
We can also compare the velocities along the rotational axis in the simulation and experiment, which show good agreement near the tip of the sonotrode. The results begin to differ farther from the tip, with the simulation reaching a peak velocity (over twice the experimental maximum) within 10 mm of the tip. The difference between the results decreases as the axial distance increases, with both the simulation and experiment showing a similar decaying behavior.
Comparison of the simulation and experimental results for the velocity magnitude along the rotational axis. Image by D. Rubinetti, D. Weiss, J. Muller, and A. Wahlen and taken from their COMSOL Conference 2016 Munich paper.
The deviation between the results may have a few causes, including inaccurate optical measurements (the experimental data is difficult to collect) and the team's simplified simulations. The underlying issue may be that the flow is not steady in reality. As the experimental plot above confirms, momentum dissipates much more in the experiments than in the model. This suggests that there are unsteady smaller eddies that transfer momentum at a rate not described by the averaged steady flow used in the model.
Through their experimental testing, the research team concluded that their AS model gives a qualitative description of the flow, except in the small region close to the sonotrode, where the description is quantitatively fairly accurate. As a reasonable approach for analyzing AS and predicting fluid flow behavior, it can save time and money by minimizing the required number of physical experiments.
Simulation is also a good choice for testing various fluids, parameters, and geometries. This enables engineers to efficiently study different AS treatments by tailoring the model to fit a specific case. The researchers also note that the model is versatile enough to be used to study other sound-driven fluid motion applications.
The complex interaction of a stationary background flow and an acoustic field can be modeled using the linearized Navier-Stokes physics interfaces in the Acoustics Module. The interfaces allow for a detailed analysis of how a flow, which can be both turbulent and nonisothermal, influences the acoustic field in different systems. This includes all linear effects in which a background flow interacts with and modifies an acoustic field. The linearized Navier-Stokes interfaces do not include flow-induced noise source terms. Basically, the equations solve for the full linear perturbation to the general equations of CFD — mass, momentum, and energy conservation.
Being able to model and simulate the details of how a background flow influences an acoustic field is important in many industries and areas of application. In the automotive industry, the acoustic properties of exhaust and intake systems are altered when a flow passes through them. For example, the transmission loss of a muffler changes depending on the magnitude of the bypass background flow. In aerospace applications, the study of how liners and perforates behave acoustically when a flow is present is of high interest. The detailed acoustic properties (absorption, impedance, and reflection coefficients) of these subsystems influence the system-level behavior of, for example, a jet engine.
In the muffler and liner examples, the attenuation of the acoustic signal by the turbulence present in the background flow can also be captured with the linearized Navier-Stokes equations. Moreover, the background flows in these models are often nonisothermal in nature.
Example of an automotive application. Results from the Helmholtz resonator with flow example presented below. In front, the color surface plot is of the sound pressure level. At the back, the streamlines are of the background flow.
The linearized Navier-Stokes interfaces have a built-in multiphysics coupling to structures. This enables an out-of-the-box setup of fluid-structure interaction (FSI) models in the frequency domain (or in the linear regime in the time domain). The interaction of flow, acoustics, and structural vibrations is important in many applications; one example is flow sensing in a Coriolis flow meter. In general, these interfaces are suited for analyzing how the vibrational behavior of structures changes under fluid loading when a background flow is present.
Example of FSI in the frequency domain: the movement of a Coriolis flow meter actuated at the fundamental frequency. The surface shows the structural deformation (the phase and amplitude are highly exaggerated for visualization) and the open cut-out section of the pipe shows the acoustic pressure on the pipe’s inner surface.
Other applications of the linearized Navier-Stokes interfaces include the study of combustion instabilities and general in-duct acoustics as well as more academic applications like analyzing the onset of flow instabilities or studying regions prone to whistling.
The interfaces now include the Galerkin least squares (GLS) stabilization scheme, enabling more robust simulations. This new default setting better handles the numerical and physical instabilities introduced by the convective and reactive terms included in the governing equations. Moreover, the reformulated slip boundary condition is now well suited when solving models with an iterative solver. This is crucial in cases where large industrial problems have to be solved.
The linearized Navier-Stokes equations represent a linearization to the full set of governing equations for a compressible, viscous, and nonisothermal flow (the Navier-Stokes equations). It is performed as a first-order perturbation around the steady-state background flow defined by its pressure, velocity, temperature, and density (p_{0}, u_{0}, T_{0}, and ρ_{0}). This results in the governing equations for the propagation of small perturbations in the pressure, velocity, and temperature (p, u, and T) — the dependent variables. In perturbation theory, a subscript 1 is sometimes used to express that these variables are first-order perturbations. The governing equations (with subscript 0 on the background fields) read:
$$
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho_0 \mathbf{u} + \rho \mathbf{u}_0 \right) = M \\
&\rho_0 \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}_0 + (\mathbf{u}_0 \cdot \nabla)\mathbf{u} \right) + \rho (\mathbf{u}_0 \cdot \nabla)\mathbf{u}_0 = \nabla \cdot \boldsymbol{\sigma} + \mathbf{F} \\
&\rho_0 C_p \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T_0 + \mathbf{u}_0 \cdot \nabla T \right) + \rho C_p (\mathbf{u}_0 \cdot \nabla) T_0 \\
&\quad - \alpha_p T_0 \left( \frac{\partial p}{\partial t} + \mathbf{u} \cdot \nabla p_0 + \mathbf{u}_0 \cdot \nabla p \right) - \alpha_p T (\mathbf{u}_0 \cdot \nabla) p_0 = \nabla \cdot (\kappa \nabla T) + \Phi + Q
\end{aligned}
\tag{1}
$$
where Φ = ∇u : τ_{0} + ∇u_{0} : τ is the viscous dissipation function; M, F, and Q represent possible source terms; κ is the coefficient of thermal conduction (SI unit: W/m/K); α_{p} is the (isobaric) coefficient of thermal expansion (SI unit: 1/K); β_{T} is the isothermal compressibility (SI unit: 1/Pa); and C_{p} is the specific heat capacity (heat capacity per unit mass) at constant pressure (SI unit: J/kg/K).
In the frequency domain, the time derivatives are, in the usual manner, replaced by a multiplication with iω. The constitutive equations for the stress tensor and the linearized equation of state (density perturbation) are given by:
$$
\boldsymbol{\sigma} = -p\mathbf{I} + \boldsymbol{\tau}, \qquad
\boldsymbol{\tau} = \mu \left( \nabla \mathbf{u} + (\nabla \mathbf{u})^{\mathrm{T}} \right) + \left( \mu_B - \frac{2\mu}{3} \right) (\nabla \cdot \mathbf{u})\mathbf{I}, \qquad
\rho = \rho_0 \left( \beta_T \, p - \alpha_p T \right)
\tag{2}
$$
where τ is the viscous stress tensor (Stokes expression), μ is the dynamic viscosity (SI unit: Pa s), and μ_{B} is the bulk viscosity (SI unit: Pa s).
The Fourier heat conduction law is used in the energy equation. A detailed derivation of the equations can be found in the Acoustics Module User’s Guide. The equations can be solved in the time domain or frequency domain using either the Linearized Navier-Stokes, Transient interface or the Linearized Navier-Stokes, Frequency Domain interface.
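As a quick numeric illustration of the linearized equation of state, the density perturbation can be evaluated for air, where the ideal gas law gives β_T = 1/p₀ and α_p = 1/T₀. The property values below are standard approximations, not taken from a specific model:

```python
# Quick numeric check (assumed ideal-gas air values) of the linearized
# equation of state rho = rho0 * (beta_T * p - alpha_p * T).
# For an ideal gas, beta_T = 1/p0 and alpha_p = 1/T0.

p0, T0, rho0 = 101325.0, 293.15, 1.204   # background pressure, temperature, density
beta_T = 1.0 / p0                        # isothermal compressibility, 1/Pa
alpha_p = 1.0 / T0                       # thermal expansion coefficient, 1/K

p, T = 1.0, 0.0    # a 1 Pa perturbation, taken as isothermal for illustration
rho = rho0 * (beta_T * p - alpha_p * T)  # density perturbation, kg/m^3
print(rho)         # ~1.2e-5 kg/m^3
```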
By taking a closer look at the governing equations presented in (1), you can see that they contain different types of terms:

- Convective terms, which transport the perturbations with the background flow
- Diffusive terms, which account for viscous and thermal losses
- Reactive terms, which involve gradients of the background flow fields and couple the different wave types
Because of the general nature of the equations solved in the interfaces, they naturally model the propagation of acoustic (compressible) waves, vorticity waves, and entropy waves. The latter two types of waves are only convected with the background flow velocity and do not propagate at the speed of sound. As an acoustic wave propagates, it can interact with the flow (through the reactive terms), and energy can be transferred from an acoustic mode to the vorticity and entropy modes and back. The reactive terms in the governing equations are responsible for this flow-acoustic coupling. In this sense, the vorticity and entropy waves are nonacoustic (CFD-like) perturbations to the background flow solution, so to some extent, the equations model the linear interaction between CFD and acoustics.
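A back-of-the-envelope sketch shows why the convected waves are demanding to resolve: at the same frequency, vorticity and entropy waves have much shorter wavelengths than acoustic waves, since they only move at the flow speed. The frequency and flow values below are assumptions for illustration:

```python
# Sketch of why the three wave types need different mesh resolution:
# acoustic waves travel at c +/- u0, while vorticity and entropy waves
# are only convected at the background flow speed u0, giving much
# shorter wavelengths at the same frequency. Values are assumed.

f = 1000.0    # frequency, Hz
c = 343.0     # speed of sound in air, m/s
u0 = 34.3     # background flow speed (Mach 0.1), m/s

lam_acoustic_down = (c + u0) / f   # downstream acoustic wavelength, m
lam_acoustic_up = (c - u0) / f     # upstream acoustic wavelength, m
lam_convected = u0 / f             # vorticity/entropy wavelength, m

print(lam_acoustic_down, lam_acoustic_up, lam_convected)
# the convected wavelength is roughly 10x shorter than the acoustic one
```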
In many aeroacoustics formulations, the reactive terms are disregarded, as they are also responsible for the processes that generate the Kelvin-Helmholtz instabilities. These can be difficult to handle numerically. On the other hand, if the terms are disregarded, accurate modeling of sound attenuation and amplification is lost. The reactive terms are fully included in the linearized Navier-Stokes interfaces.
The growth of the instabilities is handled in two ways in COMSOL Multiphysics. The temporal growth of the instabilities can be handled by selecting the frequency-domain formulation rather than the time-domain formulation. The spatial instabilities, which can arise if the vorticity modes are not properly resolved, are efficiently handled by the GLS stabilization scheme.
Depending on the application modeled with the linearized Navier-Stokes equations, it may be necessary to resolve the acoustic, viscous, and thermal boundary layers. These are naturally created on solid surfaces by an oscillating flow when no-slip and isothermal boundary conditions are present. Typically, it is not necessary to include the details of the boundary layer losses in models that are large compared with the boundary layer thickness. The thermal boundary layer can also often be disregarded in liquids, but it is of comparable importance to the viscous layer in gases. The two effects can be disregarded by selecting either the slip or the adiabatic option on the wall boundary conditions.
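The thicknesses of these layers can be estimated from the standard expressions δ_v = √(2μ/(ρω)) and δ_th = √(2κ/(ρC_pω)). The following sketch evaluates them for air at room temperature, using approximate property values:

```python
import numpy as np

# Acoustic viscous and thermal boundary layer thicknesses on a no-slip,
# isothermal wall, from the standard expressions
#   delta_v  = sqrt(2*mu / (rho*omega))
#   delta_th = sqrt(2*k  / (rho*Cp*omega))
# evaluated for air at roughly 20 C (approximate property values).

f = 100.0                # frequency, Hz
omega = 2 * np.pi * f
mu = 1.81e-5             # dynamic viscosity, Pa s
rho = 1.204              # density, kg/m^3
k = 0.0257               # thermal conductivity, W/m/K
Cp = 1005.0              # specific heat at constant pressure, J/kg/K

delta_visc = np.sqrt(2 * mu / (rho * omega))        # viscous layer, m
delta_therm = np.sqrt(2 * k / (rho * Cp * omega))   # thermal layer, m

print(delta_visc * 1e6, delta_therm * 1e6)  # both on the order of 0.2 mm
```

Comparing these numbers against the model dimensions is a quick way to judge whether slip and adiabatic wall conditions are justified.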
It should be mentioned that one more indirect coupling between the background flow and the acoustics is possible. When an acoustic wave propagates through a region with turbulent background flow, it is attenuated. This effect can be included in the model by coupling the turbulent viscosity from the CFD RANS model to the acoustics model. This effect is important, for example, when analyzing the transmission loss of a muffler system in the presence of a flow.
Solving the linearized Navier-Stokes equations, which falls under the field of computational aeroacoustics (CAA), poses numerical challenges that need to be considered, understood, and handled carefully. As mentioned above, the governing equations are prone to both physical (Kelvin-Helmholtz) and numerical instabilities. Because the interfaces use stabilization, the remaining main numerical challenge is to avoid the introduction of numerical noise in terms involving the background field variables (p_{0}, u_{0}, T_{0}, and ρ_{0}). This is especially true in the reactive terms involving the gradient of the background flow variables.
The likelihood of this problem increases if different meshes are used for the CFD and acoustic models and/or different discretization orders are used for the background flow and acoustics problem. Note that using different meshes or discretization orders is well motivated by the fact that the two problems need to resolve different physics and length scales. To prevent this, a careful mapping of the background flow data from CFD to acoustics is necessary. This is a well-understood and described step in CAA modeling. Additionally, the mapping step can be used to smooth the CFD data. This can be an overall smoothing or a local smoothing of certain details, like the hydrodynamic boundary layer, if its details are not important for the acoustics model.
In COMSOL Multiphysics, the mapping between the mesh is performed by an additional study step. The details of this step are described in the Acoustics Module User’s Guide and in tutorial models using a linearized Navier-Stokes physics interface.
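The idea of the mapping step can be sketched with synthetic 1D data: a sharp background flow profile computed on a fine CFD grid is smoothed and interpolated onto a coarser acoustics grid. A simple moving average stands in here for the dedicated smoothing used in an actual mapping study; grids and data are made up:

```python
import numpy as np

# Sketch of the mapping step: background flow data computed on a fine
# CFD grid is smoothed and interpolated onto a coarser acoustics grid.
# A moving average stands in for the smoothing used in real mapping
# studies; the shear profile and noise are synthetic.

x_cfd = np.linspace(0.0, 1.0, 1001)                  # fine CFD mesh
u0 = np.tanh((x_cfd - 0.5) / 0.02)                   # sharp shear profile
u0_noisy = u0 + 0.01 * np.sin(400 * np.pi * x_cfd)   # small-scale noise

kernel = np.ones(21) / 21.0                          # simple smoothing kernel
u0_smooth = np.convolve(u0_noisy, kernel, mode="same")

x_ac = np.linspace(0.0, 1.0, 101)                    # coarser acoustics mesh
u0_mapped = np.interp(x_ac, x_cfd, u0_smooth)        # mapped background field

print(u0_mapped.shape)
```

The smoothing step removes the short-wavelength content that the acoustics mesh could not resolve anyway, which is exactly the kind of content that injects numerical noise into the reactive terms.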
When performing simulations with a linearized Navier-Stokes physics interface, the following points should be considered:

- Use the GLS stabilization (the default) to handle the physical and numerical instabilities
- Map the background flow data carefully from the CFD mesh to the acoustics mesh, smoothing it where necessary
- Decide whether the viscous and thermal boundary layers need to be resolved or can be treated with slip and adiabatic wall conditions
Helmholtz resonators (used in exhaust systems) attenuate a narrow and specific frequency band. When a flow is present in the system, it changes the resonator’s acoustic properties as well as the subsystem’s transmission loss. The Helmholtz resonator tutorial model investigates the transmission loss in the main duct (the resonator is located as a side branch) when a mean flow is present.
To calculate the mean flow, the SST turbulence model is used for Mach numbers Ma = 0.05 and Ma = 0.1. The Linearized Navier-Stokes, Frequency Domain interface is used to solve the acoustics problem. Next, the acoustics model is coupled to the mean flow velocity and pressure, as well as the turbulent viscosity. The predicted transmission loss shows good agreement with results from a published journal paper (Ref. 1). For the resonances to be located correctly and the amplitude of the transmission loss to be correct, the model must balance the convective and diffusive terms properly, which is achieved in this model.
Transmission loss through the resonator as a function of frequency and Mach number of the background flow.
The pressure distribution inside the system at 100 Hz for Ma = 0.1. A plane wave is incident from the left side upstream of the flow.
In the Acoustic Liner with a Grazing Background Flow tutorial model, the acoustic liner consists of eight resonators with thin slits and the background grazing flow is at Mach number 0.3. The sound pressure level above the liner is calculated and shows good agreement with results from a published research paper (Ref. 2). This example computes the flow via the SST turbulence model in the CFD Module and the acoustic propagation with the Linearized Navier-Stokes, Frequency Domain interface. The acoustic boundary layer is resolved and the default linear discretization is switched to quadratic to improve the spatial resolution near walls.
The curves show the sound pressure level on the surface above the liners for four driving frequencies. The colored part of the curves highlights the extent of the liner. These results show good agreement with the experimental results from the referenced research paper.
The acoustic velocity fluctuations as a plane wave propagates above the liners, showing the first four liners. The driving frequency is 1000 Hz. The color plot shows the velocity amplitude and the arrows show the velocity vector. Near the holes at the surface of the liner, vorticity is generated by the flow-acoustics interaction.
Coriolis flow meters — also called mass or inertial flow meters — measure the mass flow rate of a fluid moving through them. These devices can also compute the density of the fluid and hence the volumetric flow rate. The Coriolis Flow Meter tutorial model demonstrates how to model a generic Coriolis flow meter with a curved geometry.
As a fluid travels through an elastic structure (a curved duct, for instance), it interacts with the movement of the structure when vibrating. The Coriolis effect causes a phase difference between the deformation of two points on the duct, which can be used to determine the mass flow rate.
To model this, the Linearized Navier-Stokes, Frequency Domain interface is coupled to the Solid Mechanics interface via the built-in multiphysics coupling. As for the background mean flow, it is simulated with the Turbulent Flow, SST interface. Using this approach, FSI can be efficiently modeled in the frequency domain.
The phase difference between upstream and downstream points (red dots on the animation below). This curve represents the calibration results needed to run a Coriolis flow meter.
The movement of the Coriolis flow meter for three mass flow rates. The flow meter is actuated at the natural frequency of the structure, f_{d} = 163.5 Hz. The deformation amplitude and phase are exaggerated for visualization. As the flow rate increases, the phase difference upstream and downstream increases.
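The calibration idea can be sketched numerically: the measured phase difference at the drive frequency is converted to a time delay and scaled by a calibration factor to give the mass flow rate. The factor K and the phase readings below are hypothetical:

```python
import numpy as np

# Sketch of turning a Coriolis meter reading into a mass flow rate:
# the phase difference between two points on the duct is converted to
# a time delay and scaled by a calibration factor K. The factor and
# the readings below are hypothetical values for illustration.

f_drive = 163.5   # drive frequency, Hz (from the model)
K = 4.0e5         # assumed calibration factor, kg/s per second of delay

phase_deg = np.array([0.05, 0.10, 0.20])            # measured phase differences
dt = np.deg2rad(phase_deg) / (2 * np.pi * f_drive)  # time delays, s
mdot = K * dt                                        # mass flow rates, kg/s

print(mdot)
```

In practice, K is obtained exactly as in the tutorial: by simulating (or measuring) the phase difference for a series of known mass flow rates and fitting the resulting calibration curve.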
E. Selamet, A. Selamet, A. Iqbal, and H. Kim, “Effect of Flow in Helmholtz Resonator Acoustics: A Three-Dimensional Computational Study vs. Experiments”, SAE International Journal, 2011.
C. K. W. Tam, N. N. Pastouchenko, M. G. Jones, and W. R. Watson, “Experimental validation of numerical simulations for an acoustic liner in grazing flow: Self-noise and added drag”, Journal of Sound and Vibration, vol. 333, 2014.
Probe tubes are attached to microphone cases in order to distance the device from the sound field being measured. When fitting hearing aids, the tube is inserted into the ear canal with the microphone worn on the outside of the ear. This system provides measurements that calibrate and verify the comfort and effectiveness of hearing aids, specifically whether the device amplifies signals to the level that the patient needs. In fact, the American Speech-Language-Hearing Association and the American Academy of Audiology say that in-the-ear measurements are the preferred method for verifying hearing aid performance.
A probe tube microphone performing in-the-ear measurements. Image by Cstokesrees — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
When adding a probe tube to a microphone, we have to consider how these two components interact with one another. For instance, we need to understand how the probe affects the sensitivity of the microphone, thus the measurements that the device delivers. As we show here, multiphysics simulation provides answers.
For this example, we use a time-dependent model that consists of a generic probe tube microphone configuration. It includes:

- An elastic probe tube
- A cylindrical cavity with a connecting cone in front of the microphone
- The microphone diaphragm
- An external incident sound field
The probe tube is made of an elastic material with a Young’s modulus of 0.1 and a Poisson’s ratio of 0.4. In the schematic below, L represents the length of the tube, while D_{0} is its outer diameter. The cavity in front of the microphone is a cylinder with a radius of R and height of H. This cavity is connected to a cone that has a bottom radius of R and top radius of D_{0}. An external sound field with the wave vector k hits the probe tube. Note that this sinusoidal wave moves in the positive x direction and has an amplitude of 1 Pa.
The probe tube microphone configuration.
To model this probe tube microphone design, we use the Pipe Acoustics, Transient interface. In our analysis, the probe tube is treated as a 1D structure — a valid assumption as long as we neglect the interaction between this component and the incoming sound field. We also assume that no significant thermal and viscous boundary losses occur inside the tube. This holds for the current configuration in which the incident field is a monochromatic wave. Since the diaphragm is not a fully rigid structure, we assume a resistive loss that is consistent with the impedance of common condenser microphones. This gives us a fully coupled acoustics simulation, as the probe tube is connected to two separate 3D pressure acoustics domains.
When analyzing a probe tube microphone, an important parameter to consider is the relationship between the pressure at the tip of the probe and the pressure at the diaphragm. This is a necessary step for calibrating the measurement system. The plot below on the left shows that the solution, following an initial transient, becomes periodic after around 4 ms. The system then exhibits a gain of about 1.4 and a phase shift. These two factors depend on the frequency of the applied signal, which is a pure harmonic tone of 500 Hz. The plot on the right depicts the pressure distribution in the xz-plane at the end of the time interval.
Left: Diaphragm pressure vs. probe tip pressure. Right: Pressure distribution in the xz-plane at 8 ms.
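The gain and phase shift can be extracted from the periodic part of a transient solution by projecting both signals onto a complex exponential at the 500 Hz drive frequency. The two signals below are synthetic stand-ins with a known gain of 1.4 and phase shift of −0.6 rad:

```python
import numpy as np

# Sketch of extracting gain and phase shift between the probe tip and
# diaphragm pressures from the periodic part of a transient solution,
# by projecting each signal onto e^{-i w t} at the 500 Hz drive
# frequency. The two signals here are synthetic stand-ins.

f = 500.0
t = np.linspace(0.004, 0.008, 4001)              # periodic window after 4 ms
p_tip = 1.0 * np.sin(2 * np.pi * f * t)          # tip pressure, Pa
p_dia = 1.4 * np.sin(2 * np.pi * f * t - 0.6)    # diaphragm pressure, Pa

def phasor(p):
    # complex amplitude at frequency f (discrete Fourier projection)
    return 2 * np.mean(p * np.exp(-2j * np.pi * f * t))

gain = abs(phasor(p_dia)) / abs(phasor(p_tip))
phase = np.angle(phasor(p_dia) / phasor(p_tip))
print(gain, phase)   # ~1.4 and ~-0.6 rad
```

Repeating this projection at each drive frequency yields the frequency-dependent calibration curve of the probe tube system.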
These results show the potential of using the COMSOL Multiphysics® software to analyze probe tube microphone designs. With a better understanding of how the probe tube and microphone interact, it is possible to further improve the design of these systems for fitting hearing aids and for other applications.
A gearbox assembly generally consists of gears, shafts, bearings, and housing. When operated, a gearbox radiates noise in its surroundings for two main reasons:
Out of all of the components in a gearbox, the primary source of vibration and noise is the gear mesh. A typical path followed by the structural vibration, seen as noise radiation in the surrounding area, can be illustrated like this:

Gear mesh → shafts → bearings → housing → surrounding air
The noise generated due to gear meshing can be classified into two types: gear whine and gear rattle.
Gear whine is one of the most common types of noise in a gearbox, especially when it runs under a loaded condition. Gear whine is caused by the vibration generated in a gear because of the presence of transmission error in the meshing as well as the varying mesh stiffness. This type of noise occurs at the meshing frequency and typically ranges from 50 to 90 dB SPL when measured at a distance of 1 m.
Gear rattle is observed mostly when a gearbox is running under an unloaded condition. Typical examples are diesel engine vehicles such as buses and trucks at idle speed. Gear rattle is an impact-induced noise caused by the unloaded gear pairs of the gearbox. Backlash, which is required for lubrication purposes, is one of the gear parameters that directly affects gear rattle noise. If possible, simply adjusting the amount of backlash can reduce gear rattle.
We know that transmission error is the main cause of gear whine, but what exactly is it? When two rigid gears have a perfect involute profile, the rotation of the output gear is a function of the input rotation and the gear ratio: a constant rotation of the input shaft results in a constant rotation of the output shaft. There can be various unintended and intended deviations from this ideal, such as gear runout, misalignment, and tooth tip and root relief. These geometrical errors or modifications introduce an error in the rotation of the output gear, known as the transmission error (TE). Under dynamic loading, the gear tooth deflection also adds to the transmission error; the combined error is known as the dynamic transmission error (DTE).
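The definition of transmission error translates directly into a short computation: TE is the deviation of the output rotation from the value dictated by the gear ratio. The input motion and the profile-error term below are made up for illustration:

```python
import numpy as np

# Sketch of computing the (static) transmission error of a gear pair:
# TE is the deviation of the output rotation from the ideal value set
# by the gear ratio. The input motion and the profile-error term are
# invented for illustration.

ratio = 2.5                                   # assumed gear ratio
theta_in = np.linspace(0.0, 4 * np.pi, 1000)  # input rotation, rad

# ideal output rotation plus a small periodic meshing error
mesh_error = 5e-5 * np.sin(24 * theta_in)     # profile/runout error, rad
theta_out = theta_in / ratio + mesh_error

TE = theta_out - theta_in / ratio             # transmission error, rad
print(TE.max())                               # peak TE of about 5e-5 rad
```

In a dynamic simulation, the tooth deflection under load adds a further, load-dependent contribution on top of this geometric term, giving the DTE.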
Reducing gear whine or rattle to an acceptable level is a big challenge, especially for modern complex gearboxes, which consist of many gears meshing simultaneously. By accurately simulating these complex behaviors, we can design a quieter gearbox. COMSOL Multiphysics gives designers the ability to accurately identify problems and propose realistic solutions within the allowable design constraints. With such a tool, we can optimize existing designs to reduce noise problems and gain insight into new designs earlier in the process, well before the production stage.
A gearbox model in the COMSOL Desktop®.
Let’s consider a five-speed synchromesh gearbox of a manual-transmission vehicle in order to study the vibration and radiation of gear whine noise to the surrounding area. The gearbox is in a car and used to transfer power from the engine to the wheels.
Geometry of a five-speed synchromesh gearbox of a manual transmission vehicle.
In order to numerically simulate the entire phenomenon of gearbox vibration and noise, we perform two analyses:

1. A multibody analysis in the time domain
2. An acoustic analysis in the frequency domain
In the multibody analysis, we compute the dynamics of the gears and housing vibrations, performed at the specified engine speed and output torque in the time domain. For the acoustic analysis, we compute the sound pressure levels outside the gearbox for a range of frequencies using the normal acceleration of the housing as a source of noise.
First, we look into the gear arrangement in the synchromesh gearbox. Here, helical gears are used to transfer the power from the input end of the drive shaft to the counter shaft and further from the counter shaft to the output end of the drive shaft.
The gear arrangement in the five-speed synchromesh gearbox, excluding the synchronizing rings that connect the gears with the main shaft.
The gears used in the model have the following properties:
| Property | Value |
| --- | --- |
| Pressure angle | 25 [deg] |
| Helix angle | 30 [deg] |
| Gear mesh stiffness | 1e8 [N/m] |
| Contact ratio | 1.25 |
All of the gears on the counter shaft are fixed to the shaft, whereas the gears on the drive shaft can rotate freely. Only one gear at a time is fixed on the shaft. In real life, this is achieved with the help of synchronizing rings. In the model, hinge joints with an activation condition are used to conditionally engage or disengage gears with the drive shaft.
Looking at the shafts, they are assumed rigid and are supported on the housing through hinge joints. The housing is assumed flexible; it is mounted on the ground and connected to the engine at one of its ends. The driving conditions considered for the simulation, in terms of engine speed, load torque, and the engaged gear, are as follows:
| Input | Value |
| --- | --- |
| Engine speed | 5000 [rpm] |
| Load torque | 1000 [N-m] |
| Engaged gear | 5 |
With these settings, it is possible to run a multibody analysis and compute the housing vibrations as shown in this animation:
The von Mises stress distribution in the housing together with the speed of different gears.
In order to have a better understanding of the variation of normal acceleration as a function of time, we can choose any point on the gearbox housing. The time history of the normal acceleration at that point is shown below. Let’s transform this result to the frequency domain using the FFT solver. In this way, we can find the frequency content of the vibration. It is clear from the frequency response plot that the normal acceleration of the housing contains more than one dominant frequency. The frequency band in which the housing vibration is dominant is 1000–3000 Hz.
Time history and frequency spectrum of the normal acceleration at one of the points on the gearbox housing.
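The transformation from the time domain to the frequency domain can be sketched with NumPy's FFT. The synthetic acceleration signal below stands in for the computed housing response, with components placed in the dominant 1000–3000 Hz band:

```python
import numpy as np

# Sketch of turning a time history of normal acceleration into a
# frequency spectrum with an FFT, as done with the FFT study step.
# The signal is a synthetic stand-in with components at 1500 Hz
# and 2500 Hz, inside the band where the housing vibration dominates.

fs = 20000.0                   # sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s of data
acc = 3.0 * np.sin(2 * np.pi * 1500 * t) + 1.5 * np.sin(2 * np.pi * 2500 * t)

spectrum = np.fft.rfft(acc) / len(acc) * 2     # single-sided amplitude
freqs = np.fft.rfftfreq(len(acc), 1 / fs)

dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)   # 1500.0 Hz, the strongest component
```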
Once we have simulated the vibrations in a gearbox, let's see how to model the noise radiation in COMSOL Multiphysics. To begin, we create an air domain outside the gearbox to simulate the noise radiation into the surroundings.
In order to couple multibody dynamics and acoustics, we assume a one-way coupling: the vibrations of the gearbox housing affect the surrounding fluid, whereas the feedback from the acoustic waves to the structure is neglected. Because the exterior fluid is air, this is a good assumption.
The acoustic analysis is performed for a range of frequencies. As the multibody analysis is solved in the time domain, the FFT solver is used to convert the housing accelerations from the time domain to the frequency domain.
The air domain enclosing the gearbox for acoustic analysis. The two microphones placed to measure noise levels are shown.
As a source of noise, the normal acceleration of the gearbox housing is applied on the interior boundaries of the acoustics domain. In order to avoid any reflections from the exterior boundaries of the surrounding domain, we apply a spherical wave radiation condition. With these settings, we can solve for the acoustic analysis and look at the sound pressure level in the near field as well as on the surface of the gearbox housing at different frequencies. For a better understanding of the directivity of the noise radiation, we can create far-field plots in different planes at different frequencies.
The sound pressure level in the near field (left) and at the surface of the gearbox (right).
The far-field sound pressure level at a distance of 1 m in the xy-plane (left) and xz-plane (right).
After visualizing the sound pressure level in the outside field, it is interesting to find out the variation of sound pressure with frequency at a particular location. For this purpose, two microphones are placed in specific locations.
Microphone | Placement | Position
---|---|---
1 | Side of the gearbox | (0, -0.5 m, 0)
2 | Top of the gearbox | (0, 0, 0.75 m)
These microphone locations are defined in a Parameters node under Results, so they can be changed without recomputing the solution.
The frequency spectrum of the pressure magnitude at the two microphone locations.
The pressure response plot at the microphone locations gives a good idea of the frequency content present in the noise. However, wouldn’t it be nice if we could actually listen to the noise recorded at the microphone, just like in a physical experiment? This is possible by writing Java® code in a model method using the magnitude and phase information of the pressure as a function of frequency.
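The post doesn’t show the model method itself, but the idea can be sketched in Python: build a complex spectrum from (hypothetical) magnitude and phase data, inverse-FFT it to a time signal, and write the result to a WAV file:

```python
import numpy as np
import wave

# Hypothetical microphone data: pressure magnitude and phase at the
# frequencies resolved by the sweep (the real model method is written
# in Java; this is only a rough Python analogue).
fs = 8000                          # audio sampling rate, Hz
n = fs                             # 1 s of audio
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
magnitude = np.exp(-(((freqs - 1200.0) / 300.0) ** 2))  # peak near 1.2 kHz
phase = np.zeros_like(freqs)       # assumed phase data

# Complex one-sided spectrum -> time signal via the inverse FFT
signal = np.fft.irfft(magnitude * np.exp(1j * phase), n=n)
signal /= np.max(np.abs(signal))   # normalize to full scale

# Write a 16-bit mono WAV file that can be played back
with wave.open("microphone.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(fs)
    wav.writeframes((signal * 32767).astype(np.int16).tobytes())
```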
Let’s listen to the sound files corresponding to the noise received at the two microphones…
We have already looked at the acoustics results for various frequencies. It would also be nice to see them in the time domain. Let’s transform the results from the frequency domain to the time domain using the FFT solver so that we can visualize the transient wave propagation in the surrounding area of the gearbox.
Animation showing the transient acoustic pressure wave propagation in the surrounding area of the gearbox.
The above approach describes a technique to couple multibody analysis and acoustics simulation in order to accurately compute the noise radiation from a gearbox. This technique can be used early in the design process to improve the gearbox in such a way that the noise radiation is minimal in the range of operating speeds of the gearbox. Additionally, model methods — new functionality as of version 5.3 of the COMSOL Multiphysics® software — enable us to actually hear the noise generated by the gearbox — making the simulation one step closer to a physical experiment.
To develop better hearing aids, engineers continuously improve their designs to enhance sound quality, increase output, reduce feedback problems, and provide new features to help users. For instance, future versions of hearing aids may contain brain-computer interfaces to enable the hard of hearing to more easily listen to individual conversations or sounds while ignoring background noise.
A behind-the-ear (BTE) hearing aid. Image by Udo Schröter — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
By analyzing hearing aids with simulation, engineers can develop innovative devices. For this type of study, they need to analyze how the transducer interacts with the system as a whole. This can be computationally expensive because some studies, such as a vibration isolation analysis of the transducer’s elastic mounting, require fully detailed multiphysics models that include details on the transducer’s inner workings.
For other studies, such as evaluating the electroacoustic response of a hearing aid, engineers can instead use lumped-parameter modeling. In this case, they can create a lumped parameter model of the transducer (or use a model provided by the manufacturer) and couple it to a multiphysics model that represents the rest of the system. This lumped parameter transducer model serves as an electroacoustic analogy, similar to those used in SPICE.
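As a minimal sketch of such an electroacoustic analogy (with invented component values, not actual transducer data), a series R-L-C branch can stand in for a receiver: resistance for losses, inductance for the acoustic mass, and capacitance for the compliance:

```python
import numpy as np

# Illustrative electroacoustic analogy: a receiver modeled as a series
# R-L-C branch. All component values are invented for illustration.
R = 50.0          # ohm (resistive losses)
L = 5e-3          # H   (inertance / acoustic-mass analog)
C = 2e-7          # F   (compliance analog)

freqs = np.linspace(100.0, 20000.0, 2000)
omega = 2 * np.pi * freqs
Z = R + 1j * omega * L + 1.0 / (1j * omega * C)   # series impedance

# Resonance where the reactances cancel: f0 = 1 / (2*pi*sqrt(L*C))
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
f_min = freqs[np.argmin(np.abs(Z))]
print(f"analytic resonance {f0:.0f} Hz, numerical minimum {f_min:.0f} Hz")
```

At the resonance, the impedance magnitude drops to R, which is the kind of behavior a SPICE-style lumped network reproduces without resolving the transducer’s interior.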
Let’s discuss a multiphysics model of a Knowles ED-23146 balanced armature receiver (also known as a miniature loudspeaker), which is based on data provided by Knowles, IL, USA. This model requires both the Acoustics Module and AC/DC Module — add-on products to the COMSOL Multiphysics® software.
A Knowles ED-23146 balanced armature receiver. Image courtesy of Knowles, IL, USA.
We model the receiver, seen on the left-hand side of the image below, as a lumped SPICE network. This lumped receiver model is connected to a test setup that includes a 50-mm earmold tube as well as a generic simplified 0.4-cc measurement coupler (this is a standardized ear canal simulator). We use the test setup to represent the receiver in a BTE hearing aid.
The modeled system includes a receiver, tube, coupler, and measurement microphone. Everything in blue is modeled with finite elements.
To account for the viscous and thermal losses occurring in the narrow tube, we use the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface. We don’t include the losses caused by the impedance jump from the tube to the coupler.
Note that while narrow region acoustics models have a lower computational cost, it’s better to use full thermoviscous acoustics models when working with complicated geometries. The narrow region acoustics models are valid for waveguides with constant cross sections. More information can be found in the Acoustics Module User’s Guide.
By comparing the simulation results with existing measurements, we can see that our model generates good predictions across a broad frequency band.
For example, let’s look at the response at the microphone’s location in the coupler. The image below compares the results from the full model with known measurements and with those from a model that omits viscous and thermal acoustic losses. The full model agrees well with the measurements up to about 14 kHz. Above that frequency, the wavelength becomes comparable to the length scales of structures missing from the simplified model, such as the microphone’s protective mesh, and the lumped parameter model itself becomes imprecise. It is also evident that including the thermoviscous losses is important for getting correct results.
A comparison of the microphone response for a model that includes thermal and viscous losses, a model without these losses, and existing measurements. Measurement data is provided by Knowles, IL, USA.
Next, let’s examine the frequency dependency of the transducer’s electric input impedance (real and imaginary). The results indicate that values from the simulation and measurements are in good agreement.
The electric input impedance (both real and imaginary) as a function of the frequency. The model results are compared to existing measurements. Measurement data is provided by Knowles, IL, USA.
We can also analyze the pressure and sound pressure level distribution within the tube and coupler system for three frequencies (1200, 3200, and 4600 Hz). The model’s evaluated frequencies correspond to the response’s first three peaks. Specifically, they relate to the tube and coupler system’s quarter-, half-, and three-quarter-wave resonances, respectively.
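For intuition, the idealized resonances of the tube alone can be estimated as below; this is only a rough sanity check, since the actual peaks (1200, 3200, and 4600 Hz) are shifted from these idealized values by the coupler volume and end effects:

```python
# Idealized quarter-, half-, and three-quarter-wave resonances of the
# earmold tube, treated as a duct of effective length L. Values are
# illustrative, not the model's actual resonances.
c = 343.0       # speed of sound in air, m/s
L = 0.05        # 50 mm tube length

resonances = [m * c / (4 * L) for m in (1, 2, 3)]
print([round(f) for f in resonances])   # → [1715, 3430, 5145]
```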
The pressure distribution (left) and sound pressure level distribution (right) at three different frequencies.
The new Convected Wave Equation, Time Explicit interface builds on the functionality of the Acoustics Module. The technology behind this interface comes from the discontinuous Galerkin (DG) method, also called DG-FEM, which relies on a solver that is time explicit and very memory lean. Using the Convected Wave Equation, Time Explicit interface enables you to efficiently solve large transient linear acoustics problems that contain many wavelengths in a stationary background flow. It also includes absorbing layers (sponge layers) that can act as effective nonreflecting boundary conditions.
A model of an ultrasound flow meter that uses the Convected Wave Equation, Time Explicit interface. A turbulent flow model is also present to calculate the background flow through the flow meter.
With the absorbing layer technology to truncate the computational domain and the memory-efficient formulation of DG-FEM, you can set up and solve very large problems in the time domain — measured in terms of the number of wavelengths that can be resolved. This makes the Convected Wave Equation, Time Explicit interface suited for modeling the propagation of linear acoustic signals that span large distances in relation to the wavelength.
This interface is useful for linear ultrasound applications, such as ultrasound flow meters and ultrasound sensors in which the time-of-flight parameter is considered. It is also useful for nonultrasound applications, like the transient propagation of audio pulses in rooms and car cabins.
The governing equations solved by the Convected Wave Equation, Time Explicit interface are the linearized Euler equations. These equations assume an adiabatic equation of state (see Ref. 1 and Ref. 2). The mass and momentum conservation equations read:
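The equations themselves appeared as images in the original post. In a standard conservative form consistent with the variable definitions below (verify the exact form against the Acoustics Module User’s Guide), they read:

$$\frac{\partial p}{\partial t} + \nabla \cdot \left( p\,\mathbf{u}_0 + \rho_0 c_0^2\, \mathbf{u} \right) = 0$$

$$\frac{\partial \mathbf{u}}{\partial t} + \nabla \cdot \left( \mathbf{u}\, \mathbf{u}_0^{\mathrm{T}} + \frac{p}{\rho_0}\,\mathbf{I} \right) = \mathbf{0}$$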
The acoustic pressure p, as well as the acoustic velocity perturbation u, are the dependent variables. The speed of sound is c_{0} and the steady-state mean background flow variables are defined with a subscript 0 through the density ρ_{0}, pressure p_{0}, and velocity field u_{0}.
The background flow can be a stationary flow with a velocity gradient that ranges from small to moderate. When the background velocity is set to zero, the equations reduce to the classical wave equation. Note that there are no physical loss mechanisms in the interface and the above equations are written in conservative form.
The Convected Wave Equation, Time Explicit interface is, as mentioned, based on the DG method, a time-explicit formulation that is memory efficient. With this method, it isn’t necessary to invert a full system matrix when stepping forward in time. In contrast, time-implicit methods require inverting this matrix, which consumes a lot of memory when solving large problems. In DG-FEM, only a few small mass matrices, defined on a reference mesh element, are inverted before evolving in time. The method is also quadrature free. In the DG method, computing the local flux vector and the divergence of the flux is the time-consuming step, but it can be run efficiently with BLAS level 3 operations.
Implicit methods are sometimes thought to be faster for small to medium problems that fit in RAM. This is not always true. When making a comparison, it’s important to look at the error level: with an implicit method, it’s tempting to use a time step that is too large, which introduces errors from the time-stepping scheme. The DG method, on the other hand, is more accurate for the same polynomial order and mesh because of its discontinuous elements.
The FEM-based physics interfaces, such as the Pressure Acoustics, Transient or Linearized Navier-Stokes, Transient interfaces, require the use of a time-implicit method. The challenge is that the RAM consumption for implicit methods grows rapidly when increasing the model size or frequency. The latter is due to the fact that the smallest wavelength in the system must be resolved with a certain number of mesh elements. Still, FEM methods are more flexible and can easily be coupled in multiphysics applications.
In its default formulation, the Convected Wave Equation, Time Explicit interface uses quartic (fourth-order) shape functions, a sweet spot for speed and efficiency in wave problems solved with the DG method. This allows us to use a mesh with an element size that is about half of the wavelength for the highest-frequency component that needs to be resolved. This in turn simplifies meshing for large problems.
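This meshing rule of thumb is easy to quantify. For the flow meter example discussed later in this post (water, 2.5 MHz carrier), it gives element sizes of roughly 300 to 400 µm:

```python
# Rule of thumb from the text: with fourth-order shape functions, the
# mesh element size can lie between lambda_min/2 and lambda_min/1.5.
c = 1481.0        # speed of sound in water, m/s
f_max = 2.5e6     # highest frequency component to resolve, Hz

lam_min = c / f_max
h_typ = lam_min / 2.0
h_max = lam_min / 1.5
print(f"lambda_min = {lam_min*1e6:.1f} um, "
      f"element size {h_typ*1e6:.1f}-{h_max*1e6:.1f} um")
```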
As an example, the ultrasound flow meter model presented below consists of 7.6 million degrees of freedom (DOF) and can be solved on a desktop computer using 9.5 GB of RAM. With an implicit formulation, solving a model of this size on a desktop is inconceivable. The solution time depends less on the RAM available and more on the processor speed and number of cores available, as the code runs fully parallelized. The DG-FEM formulation is very well suited for parallelization.
In the Acoustics Module, COMSOL plans to use the DG-FEM formulation more in the future, as we believe it is truly effective for solving large wave propagation problems. We are also continuously working on improving and fine-tuning the method and solvers.
As mentioned above, the Convected Wave Equation, Time Explicit interface comes with an Absorbing Layer feature. This is a type of sponge layer similar to the perfectly matched layer (PML) that already exists in many frequency domain interfaces. The difference lies in the technique that the absorbing layer uses — it combines a scaling system, filtering, and a simple low-reflecting impedance condition.
Inside the layer domain, a coordinate scaling effectively reduces the speed of the propagating waves and bends them toward the outer boundary, so that they hit it at close to normal incidence. Filtering attenuates and removes the high-frequency components that the scaling generates. At the layer’s outer boundary, a simple plane-wave impedance condition absorbs the remaining waves, since normal incidence has been ensured. The animation below, created using the Gaussian Pulse in 2D Uniform Flow: Convected Wave Equation and Absorbing Layers benchmark tutorial, shows absorbing layers in action.
The time evolution of a Gaussian pulse in a uniform background flow. The flow moves toward the right side at Mach 0.5. The absorbing layers absorb the outgoing waves.
As an application example for the Convected Wave Equation, Time Explicit interface, let’s take a look at an ultrasound flow meter in a generic wetted time-of-flight configuration. Wetted ultrasound flow meters have a dedicated signal tube for the ultrasound signal, and the whole device is mounted in the tubing where the flow is measured. To estimate the speed of the main flow, we calculate the difference in arrival times for two signals that simultaneously traverse the flow upstream and downstream.
In our model, water fills the flow meter and the main flow tube has a diameter of 5 mm. The image below shows the background flow field when one symmetry condition is used. The average velocity in the main channel is 10 m/s. (Note that simulating the flow requires the CFD Module.) The signal tube is the small side duct sitting at a 45° angle to the main channel.
To learn more about this model, you can find step-by-step instructions in the Application Library.
Background mean flow inside the ultrasound flow meter.
The animation below shows the propagation of the acoustic pulse downstream in the signal tube, its interaction with the background flow, and the diffraction effects. The signal is a harmonic carrier at 2.5 MHz with a Gaussian pulse envelope. There are absorbing layers placed at the inlet and outlet of the main duct. The animation only shows the pressure signal magnitude on the symmetry plane of the system. As previously noted, this acoustics model involves solving 7.6 million DOF and can be run on a desktop using 9.5 GB of RAM.
Propagation of the acoustic pulse downstream through the signal tube in the ultrasound flow meter.
The figure below shows the received signals for the pulse propagating upstream (green) and downstream (blue). We measure the arrival time difference between the two signals to be 49 ns and use it to estimate the mean flow velocity, getting a value of 10.75 m/s. While the actual value is 10 m/s, we know that the difference in results is due to the flow profile correction factor (FPCF), an important parameter for this model. Using simulation, we can calculate the value of the FPCF, as the flow field is known a priori from simulations. We can also optimize the flow meter geometry and test different detection signals.
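The arrival-time arithmetic can be sketched as follows. The path length here is an assumption inferred from the stated geometry (a 5 mm duct crossed at 45°), not a value given in the post:

```python
import math

# Time-of-flight estimate of the mean flow velocity. The signal path
# crosses the 5 mm duct at 45 degrees, so the assumed path length is
# L = d / cos(45 deg), about 7.07 mm.
c = 1481.0                      # speed of sound in water, m/s
d = 5e-3                        # main duct diameter, m
theta = math.radians(45.0)
L = d / math.cos(theta)         # acoustic path length across the flow

dt = 49e-9                      # measured arrival-time difference, s

# First-order result of L/(c - v*cos) - L/(c + v*cos) ~ 2*L*v*cos/c^2
v = dt * c**2 / (2 * L * math.cos(theta))
print(f"estimated mean velocity: {v:.2f} m/s")   # → 10.75 m/s
```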
Pressure signal profiles for the signals moving upstream and downstream in the ultrasound flow meter. The arrival time difference is used to predict the mean flow velocity in an ultrasound flow meter.
Editor’s note: As of COMSOL Multiphysics® version 5.3, a new mesh quality optimization procedure is available to speed up the DG method. This optimization is important to the performance of the method and should be used whenever possible. To learn more, visit the Release Highlights page or see the Acoustics Module user guide.
When using the Convected Wave Equation, Time Explicit interface, based on the DG-FEM formulation, there are certain general modeling considerations that are good to know. Some practices differ from those associated with the FEM-based interfaces in the Acoustics Module. These guidelines are also available in the Acoustics Module User’s Guide under the “Modeling with the Convected Wave Equation Interface” section.
The absorbing layer is set up from the Definitions node in the model tree, just like a PML, by adding the absorbing layer to the geometric entity that represents the layer. (As with a PML, it is good practice to use the Layers option when creating the geometry of your model.) Once the absorbing layer is set up, we need to place an Acoustic Impedance boundary condition on the outermost boundary of the layer. While the default values usually work well, advanced users can modify the filter parameters for the absorbing layer at the topmost physics level after enabling Advanced Physics Options.
Settings for the Absorbing Layer feature and the Acoustic Impedance boundary condition.
Meshing a model using the Convected Wave Equation, Time Explicit interface is slightly different than with most other physics interfaces in the Acoustics Module. Since the default settings are for using fourth-order shape functions, we can usually achieve proper spatial resolution with a mesh that has an element size set to anything between λ_{min}/2 and λ_{min}/1.5. Note that the internal time-stepping size of a time-explicit method is strictly controlled by the CFL condition and thus the mesh size. This means that the smallest mesh elements in a model control the time step, so avoiding small mesh elements is a good idea if possible. The internal time step used for the Convected Wave Equation, Time Explicit interface is automatically selected, based on the mesh and physics, by the COMSOL Multiphysics® software.
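To get a feel for the CFL scaling, here is a sketch with an assumed stability constant (COMSOL selects the actual step automatically, so the constant below is illustrative only):

```python
# The internal time step of a time-explicit method scales with the CFL
# condition: dt ~ CFL * h_min / c_max. The CFL constant here is an
# assumed illustrative value, not the one COMSOL uses internally.
c = 1481.0        # speed of sound in water, m/s
u0 = 10.0         # background flow speed, m/s
h_min = 3.0e-4    # smallest mesh element size, m (~lambda/2 at 2.5 MHz)
CFL = 0.2         # assumed stability constant

dt = CFL * h_min / (c + u0)     # the fastest wave sets the stable step
print(f"dt ~ {dt*1e9:.1f} ns")  # → dt ~ 40.2 ns
```

Note how halving the smallest element size halves the stable time step, which is why a single tiny element can dominate the solution time.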
When solving large transient models that include millions of DOF, the amount of output data can be very large, resulting in stored files of many GB. A good strategy for reducing the model size is to store data only on the geometric entities that are needed for postprocessing; for example, on a symmetry plane, along a line, or at a point. We can easily accomplish this in COMSOL Multiphysics by using the Store fields in output functionality (located under the Values of Dependent Variables section) in the study step settings. Here, we can choose the selections on which data is stored. Note that the Times specified in the Study Settings section are the times when the solution is stored — they are not related to the internal time stepping.
Reducing the size of stored files can be done by setting the times where the solution is stored and saving data only in the selections where you need it.
When analyzing the results from a simulation run with the Convected Wave Equation, Time Explicit interface, we need to remember that fourth-order elements discretize the dependent variables. This means that within a mesh element, the shape function has a lot of freedom and can contain a lot of spatial details. We can view these details by setting a high Resolution in the Quality section for the plots. When solving a model, the default plots generated already have this option selected, with a custom resolution and the Element refinement set to six. When adding more user-defined plots, we must change the resolution.
Setting a custom resolution with the Element refinement set to six ensures good spatial representation of solutions.
An example of the difference between the custom resolution set for a default plot and the default resolution for an added user-defined plot is displayed in the figures below. At first glance, it looks like the solution on the left is incorrect. However, once the correct resolution is selected in the Quality section, it reveals the true wave nature of the solution.
The acoustic velocity with an incorrect resolution (left) and the correctly configured plot with a high resolution (right).
Editor’s note: The performance numbers shown here have improved with the release of COMSOL® 5.3. We will continue to work on improving these numbers in future versions of the software.
Can you make sound out of light? In his presentation, Carl Meinhart answers this question by starting small, with photons and phonons. The idea is that when an infrared photon interacts with matter in some manner, it can create a Stokes-shifted photon with a lower energy level. Simultaneously, the excess energy from the shift can generate an acoustic phonon. In this way, light can generate acoustics. But, as Meinhart notes in the keynote video, “it’s kind of a chicken-and-egg [scenario]; you need the acoustics and this scattered light to create each other, so they have to exist simultaneously.”
From the video: Carl Meinhart discusses a theory behind converting light into acoustics.
While the idea was originally predicted in the 1920s as Brillouin scattering, it wasn’t observed until the 1960s. Modern researchers can now turn to the COMSOL® software to analyze this theory and all of the relevant multiphysics phenomena. For a specific photonics example, Meinhart examines an innovative design from the Vahala Research Group at Caltech, a pioneer in this field. The Vahala Research Group designed an optical ring that uses whispering gallery modes for the ring instead of guided waveguides. Meinhart explains that when simulating this kind of device, “it’s very important to design the optics and the acoustics simultaneously,” a task that can be achieved with multiphysics simulation.
Through their research, the team found that their design has a very high Q factor. Research like this indicates that very sensitive high-Q resonators can be built by combining photons, phonons, and the concept of Brillouin scattering.
To try this sort of simulation yourself, download the example Meinhart mentions in his presentation, the Optical Ring Resonator Notch Filter tutorial.
Next, Meinhart turns to an industry example: maximizing the speed of a microfluidic valve. When looking to increase speed, a researcher’s first move is often to decrease inertia by making their design light and small. However, physical prototypes of small devices like microfluidic valves are expensive and time consuming to create and difficult to measure experimentally.
Instead, to analyze microfluidic devices, researchers can use the COMSOL Multiphysics® software, which Meinhart states is “an invaluable tool for this process” because “the only way you can really visualize what’s going on is through numerical simulation.”
From the video: Carl Meinhart shares the example of a magnetically actuated microfluidic valve (left) and its approximate real-world size (right).
For a concrete example, Meinhart considers a microfluidic valve being commercialized by Owl Biomedical, Inc. To increase their microvalve’s speed, the group tried using magnetic materials and thin silicon, which bends well and is a high-Q material. The resulting magnetically actuated device can be evaluated by importing the complicated geometry into COMSOL Multiphysics® using a product like LiveLink™ for SOLIDWORKS®. Then, researchers can analyze the design by combining nonlinear magnetics, fluid-structure interaction, and particle tracing simulation studies.
Initial results revealed that this microvalve design contained nonoptimal flow patterns. But, by using simulation to modify the shape over many iterations, researchers can balance the spring forces and optimize the flow and opening and closing speeds. The result? An incredibly fast microfluidic valve design that, when used to create a cell sorter, can sort 55,000 cells in 1 second or 200 million cells per hour. This optimized design has the potential to revolutionize cell sorting through Owl Biomedical’s cell sorter.
To learn more about how Carl Meinhart uses multiphysics simulation to study transport processes in photonics and microfluidics, watch the video at the top of this post.
SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.
Echologics provides specialized services in water loss management, leak detection, and pipe condition assessment. They developed a permanent leak detection system for pipe networks, using acoustic technology. With this solution, Sebastien says, “the pipes can talk to you.”
The location of a leak is measured using the time delay between signals captured with two sensors placed on the pipe. The time delay is determined using the correlation function. This technique also requires knowledge of the mechanical behavior of the pipe and the propagation speed of acoustic waves to accurately locate the leak. To solve this problem, Sebastien created an app using the Application Builder, a built-in tool in the COMSOL Multiphysics® software, to find the exact location of pipe leaks.
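The correlation technique itself can be sketched with synthetic signals. All values here (sensor spacing, wave speed, leak position) are invented for illustration and are not Echologics data:

```python
import numpy as np

# Leak location from the delay between two sensors bracketing the leak.
# The leak noise reaches sensor 1 after d1/c seconds and sensor 2 after
# (D - d1)/c seconds, so sensor 2's signal lags by (D - 2*d1)/c.
rng = np.random.default_rng(1)
fs = 10000.0                 # sampling rate, Hz
c = 1200.0                   # wave speed in the pipe, m/s
D = 100.0                    # sensor spacing, m
d1_true = 30.0               # true leak distance from sensor 1, m

leak = rng.standard_normal(5000)           # broadband leak noise
delay = (D - 2 * d1_true) / c              # arrival-time difference, s
lag_true = int(round(delay * fs))          # delay in samples
s1 = leak                                  # sensor 1 signal
s2 = np.roll(leak, lag_true)               # sensor 2: delayed copy

# The cross-correlation peak gives the measured time delay
corr = np.correlate(s2, s1, mode="full")
lag = np.argmax(corr) - (len(s1) - 1)
d1_est = (D - c * lag / fs) / 2
print(f"estimated leak position: {d1_est:.1f} m from sensor 1")
```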
He explains that the app is advantageous for Echologics because its user interface is designed for ease of use in the field. This includes app dimensions that could easily fit on a tablet device when accessed with the COMSOL Server™ product, for instance. This is particularly useful for Echologics, as their field engineers travel extensively.
With apps, engineers at Echologics can easily run and rerun analyses. For example, an engineer can predict a leak location in a pipe using the app and contact the client to tell them where the leak is located. If the client recently replaced that segment of the pipe with a different material, for example, the engineer can rerun the analysis through the app and provide the exact leak location based on the new information. This enables them to quickly respond to the customer with an updated location.
During his keynote talk, Sebastien discussed how Echologics designed their app so that users can easily navigate its interface. By separating the app into five tabs, field engineers only have to calculate the information they need. For example, if an engineer using the app has already measured the speed of sound in a certain pipe segment, they don’t need to use the Speed Prediction tab in the app. Instead, they can simply input the measured speed in the Leak Location tab that calculates the results.
From the video: Sebastien Perrier demonstrates the custom app built by Echologics for predicting the location of a pipe leak.
After all of the information is entered into the app, it reports the leak’s location in relation to the two closest sensors. Echologics’ app also includes a Visualization tab so that the app users can see their results. For Sebastien, the beauty of this app is that he can “visualize and confirm” when each sensor detects the leak.
Watch Sebastien Perrier give a demonstration of this app in the keynote video at the top of this post.
To avoid detection by sonar during World War II, the German Navy covered their U-boats in rubber sheets with air holes drilled at regular intervals. The same basic technology of embedding periodic patterns in spongy coatings is still in use, although the specifics are evolving. Finding the pattern and material properties that will minimize the echo for a desired range of frequencies is not an easy task, but one that lends itself very well to modeling.
Let’s find out how you can set up a model of an anechoic coating using the COMSOL Multiphysics® software. For our demonstration, we’ll consider a coating discussed in Ref. 1. The authors of this paper propose a quadratic array of tiny cylindrical holes stamped into a thin polydimethylsiloxane (PDMS) film. The film is placed on the submarine hull with the holes facing the steel. Hence, the holes form air bubbles, even when the vessel is submerged in water. Despite having a thickness of only 0.2 mm, this setup results in less than 10% reflectance for most of the frequency range between 1 and 2.8 MHz, and less than 50% reflectance all the way up to 5 MHz.
When setting up models with periodic geometries, the first thing you want to figure out is how far you can reduce the size of the model geometry. The figure below shows the periodic pattern of air cavities. The blue dashed-line square indicates an obvious and completely general choice of unit cell. Flanked by periodic Floquet boundary conditions, this geometry would allow for incident radiation from an arbitrary angle. See our Porous Absorber model for an example of oblique incidence on a periodic structure.
Top view of the periodic pattern with two candidate unit cells.
By assuming perpendicular plane wave incidence, we can exploit not only the periodicity, but also the geometric mirror symmetries. After establishing the x- and y-plane symmetries, it can be easy to forget that there is one mirror plane left, forming a 45-degree angle with both the x- and y-axes. This leaves us with the green solid-line triangle in the illustration, constituting 1/8 of the full periodic unit cell. Keep in mind, of course, that failing to notice and use a symmetry is not the end of the world — it merely makes the model more expensive than necessary to run.
Here is what the resulting geometry looks like, with water above the PDMS and steel below it:
Model geometry produced in COMSOL Multiphysics® with the add-on Acoustics Module.
We will take both the steel and the water to continue indefinitely beyond the modeled geometry. While this is clearly a good assumption for the water, it may seem like a less than obvious choice for the steel. Outer submarine hulls can be just a few millimeters thick, and omitting the other side of the hull means neglecting any reflections that might occur on the inside.
However, the transmission into the steel is small because of the high acoustic impedance contrast between the PDMS and the steel. Also, much of the reflected sound would likely be absorbed by the coating. Therefore, including the full thickness of the steel domain is left as an exercise for the curious reader. If you try this, please tell us about it in the comments section!
Materials that go on “forever” can be modeled either with various low-reflecting boundary conditions or with perfectly matched layers (PMLs). The former work optimally under the assumption of perpendicular plane waves. PMLs are more general, making them the preferred choice in nonperiodic, open geometries. For more information on PMLs, see our blog post on perfectly matched layers for wave electromagnetics problems — the considerations and conclusions are similar in pressure acoustics and structural mechanics.
So, can we expect only perpendicular plane waves at the ends of our geometry? To know for sure, we need a primer on diffraction theory.
The transmitted and reflected waves caused by a plane wave incident on a periodic pattern can be described as a sum of plane waves propagating in a finite number of discrete diffraction angles. In the immediate vicinity of the pattern, you will, of course, also have some arbitrarily shaped evanescent fields. Nevertheless, the propagating waves are all plane.
Typically, most of the acoustic energy will end up in the “zeroth diffraction order”, which is just the refraction and mirror reflection of the incident wave. Reflected higher diffraction orders occur at angles where the path distance between radiation traveling in the same direction from two neighboring unit cells is an integer number of wavelengths. This happens according to the equation
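The equation was rendered as an image in the original post; written as the standard grating relation, consistent with the variable definitions that follow, it is:

$$\sin\theta_{r,m} = \sin\theta_i + \frac{m\, c_i}{f\, d}$$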
Here, m = 0, ±1, ±2, … is the diffraction order; c_{i} is the pressure speed of sound in the incident medium; f is the frequency; d is the width of the repeating unit cell; θ_{i} is the angle of incidence; and θ_{r,m} is the angle of the mth-order reflected diffracted wave.
Similarly, for the transmitted diffraction orders, we have

sin θ_{t,m} = (c_{t}/c_{i})·sin θ_{i} + m·c_{t}/(f·d)

with c_{t} being the pressure speed of sound in the final medium and θ_{t,m} the angle of the mth order transmitted diffracted wave.
Let us now look at the anechoic coating model, with θ_{i} = 0. For an mth order reflected diffracted wave to exist, we need

|m|·c_{i}/(f·d) ≤ 1

So, if f < c_{i}/d, we have no reflected diffracted waves beyond the mirror reflection (m = 0). In the same manner, provided f < c_{t}/d, we have no transmitted diffraction orders. The pressure speed of sound is higher in steel than in water, so diffraction would arise in the reflected waves first. With d = 120 µm and c_{i} = 1481 m/s, we can finally conclude that there is no diffraction at frequencies below c_{i}/d ≈ 12.3 MHz.
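This cutoff check is easy to reproduce numerically. A minimal sketch, using the cell width and water sound speed from the text; the steel sound speed is an assumed textbook value, not taken from the model:

```python
# Cutoff frequencies below which no diffraction orders propagate, for
# normal incidence (theta_i = 0): |m|*c/(f*d) <= 1 requires f >= |m|*c/d.
c_water = 1481.0   # pressure speed of sound in water [m/s] (incident medium)
c_steel = 5800.0   # approximate longitudinal speed in steel [m/s] -- assumed value
d = 120e-6         # width of the repeating unit cell [m]

f_reflected = c_water / d    # first reflected diffraction order appears here
f_transmitted = c_steel / d  # first transmitted diffraction order appears here

print(f"Reflected-order cutoff:   {f_reflected / 1e6:.1f} MHz")   # ~12.3 MHz
print(f"Transmitted-order cutoff: {f_transmitted / 1e6:.1f} MHz")
```

Since the reflected cutoff is the lower of the two, it sets the frequency limit for the diffraction-free assumption.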
Having decided that PMLs are not required in the relevant frequency spectrum, we need only leave a sufficient depth of water and steel in the model so that most of the evanescent wave content will have died out before reaching the exterior boundaries. For boundary conditions, we use a Low-Reflecting Boundary in the steel and the pressure acoustics counterpart, Plane Wave Radiation, in the water.
Speaking of Pressure Acoustics, that interface applies both in the water and in the air cavities. When modeling small confined spaces, the Thermoviscous Acoustics interface can be worth considering as a potentially more accurate option. However, it is only needed if the thermal and/or viscous boundary layers have a significant thickness. At the frequencies that we are concerned with here, these layers do remain much thinner than the dimensions of the cavity.
The steel and PDMS domains are modeled with Solid Mechanics. If you select Acoustic-Solid Interaction, Frequency Domain in the COMSOL Multiphysics® Model Wizard, you get the two relevant interfaces and an Acoustic-Structure Boundary automatically connecting them together.
The model is excited with an incident perpendicular wave added to the plane wave radiation condition. To find out the transmission, reflection, and absorption coefficients, you need to extract what fraction of the energy is passing through, being reflected, and being absorbed, respectively.
The transmitted power is simple. The outward mechanical energy flux is automatically available as solid.nI, so all you need to do is integrate that over the low-reflecting boundary terminating the steel domain. Divide that by the incident power, which for a plane wave has a known analytical expression, and you obtain the transmission coefficient.
The net acoustic intensity comes as a vector (acpr.Ix, acpr.Iy, acpr.Iz). To get the reflected power, take the negative of the z-component and subtract its integral over the inlet from the incident power. Divide by the incident power again and you have the reflection coefficient. Finally, the absorption coefficient is most conveniently achieved from the condition that all three coefficients sum up to 1.
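In postprocessing terms, the bookkeeping looks like the following sketch. The power values are hypothetical placeholders standing in for the boundary integrals described above, not results from the model:

```python
# Energy bookkeeping for the coating model; the three coefficients sum to 1.
# Placeholder powers standing in for the integrals described in the text:
#   W_in  -- incident plane-wave power (known analytical expression)
#   W_out -- integral of solid.nI over the low-reflecting boundary (transmitted)
#   W_ref -- incident power minus the inlet integral of the negative z-intensity
W_in = 1.0e-3   # [W], hypothetical
W_out = 2.0e-4  # [W], hypothetical
W_ref = 5.0e-4  # [W], hypothetical

T = W_out / W_in   # transmission coefficient
R = W_ref / W_in   # reflection coefficient
A = 1.0 - T - R    # absorption follows from energy conservation

print(f"T = {T:.2f}, R = {R:.2f}, A = {A:.2f}")
```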
The plot below shows the resulting transmission, reflection, and absorption coefficients. The results are generally in good agreement with those in the paper (referenced at the end of this post).
Topology optimization is a powerful tool that enables engineers to find optimal solutions to problems related to their applications. Here, we’ll take a closer look at topology optimization as it relates to acoustics and how we optimally distribute acoustic media to obtain a desired response. Several examples will further illustrate the potential of this optimization technique.
Many engineering tasks revolve around optimizing an existing design or a future design for a certain application. Best practices and experiences derived from years of working within a given industry are of great importance when it comes to improving designs. However, optimization problems are often so complex that it is impossible to know if design iterations are pushing things in the right direction. This is where optimization as a mathematical discipline comes into play.
Before we proceed, let’s review some important terminology. In optimization — be it parameter optimization, shape optimization, or in our case topology optimization — there is always at least one so-called objective function. Typically, we want to minimize this function. For acoustic problems, we may want to minimize the sound pressure in a certain region, whereas for structural mechanics problems, we may want to minimize the stresses in a part of a structure. We state this objective as

min F(X)

with F being the objective function. A design variable, X, is varied throughout the optimization process to reach an optimal solution. It is varied within a design domain denoted Ω_{d}, which generally does not make up the entire finite element space Ω, as visualized in the figure below.
The design domain is generally a subset of the entire finite element domain.
Note that since the design variable varies as a function of space over the finite element discretized design domain, it is, strictly speaking, a vector of nodal values. For this particular case, we will simply refer to it as a variable.
The optimization problem may have more than one objective function, and so it will be up to the engineer to decide how large of a weight each of these objectives should carry. Note that because the objectives may oppose each other during the optimization, special care should be taken when setting up the problem.
In addition to the objective function(s), there will usually be some constraints associated with the optimization problem. These constraints reflect some inherent size and/or weight limitations for the problem in question. With the Optimization interface in COMSOL Multiphysics, we can input the design variable, the objective function(s), and the constraints in a systematic way.
With topology optimization, we have an iterative process where the design variable is varied throughout the design domain. The design variable is continuous throughout the domain and takes on values from zero to one:

0 ≤ X ≤ 1 in Ω_{d}
Ideally, we want the design variable to settle near values of either zero or one. In this way, we get a near discrete design, with two distinct (binary) states distributed over the design domain. The interpretation of these two states will depend on the physics related to our optimization. Since most literature addresses topology optimization within the context of structural mechanics, we will first look at this type of physics and address its acoustics counterpart in the next section.
Topology optimization in COMSOL Multiphysics for static structural mechanics was a previous topic of discussion on the COMSOL Blog. To give a brief overview: A so-called MBB beam is investigated with the objective of maximizing the stiffness by minimizing the total strain energy for a given load and boundary conditions. The design domain makes up the entire finite element domain. A constraint is applied to the total mass of the structure. In the design space, Young’s modulus is interpolated via the design variable as

E(X) = X·E_{0}
To push the design toward a binary solution, we can use a so-called solid isotropic material with penalization (SIMP) interpolation

E(X) = X^{p}·E_{0}
where p is the penalization factor, typically taking on a value in the range of three to five. With this interpolation (and an implicit linear interpolation of the density), intermediate values of X are avoided by the solver as they provide less favorable stiffness-to-weight ratios. I have recreated the resulting MBB beam topology from the previous blog post below.
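The penalizing effect is easy to see numerically. With E(X) = X^{p}·E_{0} and a linear density interpolation, the stiffness-to-mass ratio of an intermediate design value is always worse than that of full material, which is exactly what drives the solver toward 0 or 1. A small illustration with assumed material numbers:

```python
# SIMP interpolation: E(X) = X**p * E0, with linear density rho(X) = X * rho0.
# The stiffness-to-mass ratio E(X)/rho(X) = X**(p-1) * (E0/rho0) penalizes
# intermediate X: for p > 1 and 0 < X < 1, the ratio is below that of X = 1.
E0, rho0, p = 200e9, 7850.0, 3  # illustrative steel-like values; p typically 3-5

for X in (0.25, 0.5, 0.75, 1.0):
    ratio = X**(p - 1)  # stiffness-to-mass ratio relative to full material
    print(f"X = {X:4.2f}: relative stiffness/mass = {ratio:.4f}")
```

At X = 0.5, for instance, the design carries half the mass but only an eighth of the stiffness, so the optimizer has no incentive to keep gray regions.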
Recreation of the optimized MBB beam.
In this figure, black indicates a material with a user-defined Young’s modulus of E_{0}. Meanwhile, white corresponds to zero stiffness, indicating that there should be no material.
Let’s now move on to our discussion of acoustic topology optimization, where we have a frequency-dependent solution with wave propagation in an acoustic medium. The design variable is now related to the physics of acoustics. Instead of a binary void-material distribution, our goal is to have a binary air-solid distribution, where “solid” refers to a fluid with a high density and bulk modulus, which emulates a solid structure.
We define four parameters that describe the inertial and compressional behavior of the standard medium and the “solid” medium: Air is given a density of ρ_{1} and a bulk modulus of K_{1}, and the “solid” medium has a higher density of ρ_{2} and a higher bulk modulus of K_{2}. The density ρ and bulk modulus K in the design domain will vary between the two states during the optimization via the design variable, similar to how Young’s modulus varied in our structural mechanics example. But a different interpolation is needed for an acoustics analysis so that the associated values do not tend to zero for a zero-valued design variable, but instead vary between air and the solid, so that

ρ_{1} ≤ ρ(X) ≤ ρ_{2}

and

K_{1} ≤ K(X) ≤ K_{2}
The easiest way to obtain these characteristics is by linear interpolation between the two extreme values. This is not necessarily the best approach, since intermediate values of X are not penalized, so the optimal design may not be binary and would therefore not be feasible to manufacture. Alternative interpolation schemes are given in the literature. In the cases presented here, the so-called rational approximation of material properties (RAMP) interpolation is used (see Ref. 1).
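A common form of the RAMP function is f_q(X) = X/(1 + q·(1 − X)); the exact scheme used in Ref. 1 may differ, so treat this as an illustrative sketch with assumed parameter values:

```python
# RAMP-style interpolation between air (X = 0) and "solid" (X = 1) properties.
# One common form is f_q(X) = X / (1 + q*(1 - X)): q = 0 recovers linear
# interpolation, while larger q penalizes intermediate design values.
def ramp(X, q):
    return X / (1.0 + q * (1.0 - X))

rho1, rho2 = 1.2, 1000.0  # illustrative air and "solid" densities [kg/m^3]
q = 5.0                   # assumed penalization parameter

for X in (0.0, 0.5, 1.0):
    rho = rho1 + (rho2 - rho1) * ramp(X, q)
    print(f"X = {X:3.1f}: rho = {rho:8.2f} kg/m^3")
```

Note that the interpolated density stays at ρ_{1} rather than dropping to zero at X = 0, which is the key difference from the structural SIMP case.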
Just as with structural optimization, we define a design domain where the material distribution can take place, while simultaneously satisfying the constraints. An area or volume constraint can be defined via the design variable. For example, an area constraint on the design domain can be stated as an inequality constraint

∫_{Ω_d} X dA ≤ S_{r}·A_{Ω_d}

where S_{r} is the ratio between the area of the design that is assigned solid properties and the area A_{Ω_d} of the entire design domain.
Let’s first take a look at a silencer (or “muffler”) example. For simplicity, we limit ourselves to a 2D domain. A typical measure used when characterizing a silencer is the so-called transmission loss, denoted TL, which relates the input power to the output power:

TL = 10·log_{10}(W_{in}/W_{out})

The transmission loss is calculated using the so-called three-point method (see Ref. 2). We use this as our objective function, seeking to maximize it at a single frequency (in this case 420 Hz):

max_{X} TL(420 Hz)
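For intuition on the decibel scale of the transmission loss, TL = 10·log10(W_in/W_out), here is a quick sketch with example power ratios:

```python
import math

# Transmission loss TL = 10 * log10(W_in / W_out): the ratio of the power
# entering the silencer to the power leaving it, on a decibel scale.
def transmission_loss(W_in, W_out):
    return 10.0 * math.log10(W_in / W_out)

print(transmission_loss(1.0, 0.5))   # halving the transmitted power -> ~3 dB
print(transmission_loss(1.0, 0.04))  # a 1:25 power ratio -> ~14 dB
```

So an improvement of roughly 14 dB, as achieved by the optimization below, means the transmitted power drops by a factor of about 25 at the target frequency.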
Two design domains are defined above and below a tubular section. The design domain is constrained in such a way that a maximum of 5% of the 2D area is structure, and thus at least 95% must be air:

∫_{Ω_d} X dA ≤ 0.05·A_{Ω_d}

The initial state for the design domain is 100% air, i.e., X = 0. The animation below shows the evolution from the initial state to the resulting topology.
An animation depicting the evolution from the initial state to the optimized silencer topology.
The optimized structure takes on a “double expansion chamber” (see Ref. 3) silencer topology. The transmission loss has increased by approximately 14 dB at the target frequency, as illustrated in the plot below. However, at all frequencies other than the target frequency, the transmission loss has also changed, which may be of great importance for the specific application. Therefore, a single-frequency optimization may not be the best choice for the typical design problem.
Transmission loss for the initial state and optimized silencer.
Shifting gears, let’s now look at how to optimize for two objective functions and two frequencies. Here, we again consider a 2D room with three hard walls and a pressure input at the left side of the room. The room also includes two objective areas, Ω_{1} and Ω_{2}, defined at each corner on the right side of the room. The two objectives are to minimize the squared sound pressure in Ω_{1} at frequency f_{1} and in Ω_{2} at frequency f_{2}:

min F_{1} = ∫_{Ω_1} |p(f_{1})|² dA and min F_{2} = ∫_{Ω_2} |p(f_{2})|² dA

with the circular design domain Ω_{d} and an area constraint of 10% structure. The initial state is X = 0, making the design domain 100% air.
A square 2D room with a circular design domain and two objective domains.
With more than one objective function, we must make some choices regarding the relative weights, or importance, of the different objectives. In this case, the two objectives are given equal weight, and the problem is stated as a so-called min-max problem:

min_{X} max(F_{1}, F_{2})
The figures below show the optimized topology (blue) along with the sound pressure for both frequencies using the same pressure scale. Note how the optimized topology results in a low-pressure zone (green) appearing in the upper-right corner at the first frequency. At the same time, this optimized topology ensures a similar low-pressure zone in the lower-right corner at the second frequency. This would certainly be a challenging task if trial and error were the only choice.
Sound pressure for frequency f_{1} (left) and for frequency f_{2} (right). The optimized topology is shown in blue.
As a third and final example, we’ll optimize a single objective over a frequency range. A sound source radiates into a 2D domain, where we initially have a cylindrical sound field. Two square design domains are present, but because of symmetry, we only consider one half of the geometry in the simulation. In this case, we want a constant magnitude of the on-axis sound pressure at a point 0.4 meter in front of the sound source. The optimization is carried out in a frequency range of 4,000 to 4,200 Hz (50 Hz steps, a total of five frequencies). We can accomplish this via the Global Least-Squares Objective functionality in COMSOL Multiphysics, with the problem stated as

min_{X} Σ_{i} (|p(f_{i})| − p_{obj})²

where p_{obj} is the desired objective pressure at the observation point and the sum runs over the five frequencies.
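Evaluating a least-squares objective of this kind is straightforward. The sketch below uses hypothetical pressure magnitudes at the observation point, purely to show the bookkeeping; it is not data from the model:

```python
# Least-squares objective over a set of frequencies: the optimizer minimizes
# the summed squared deviation of |p| at the observation point from a target.
freqs = [4000, 4050, 4100, 4150, 4200]   # Hz, 50 Hz steps
p_target = 1.0                            # desired pressure magnitude (normalized)
p_obs = [0.70, 0.75, 0.72, 0.68, 0.74]    # |p| at the point -- hypothetical values

objective = sum((p - p_target) ** 2 for p in p_obs)
print(f"least-squares objective = {objective:.4f}")
```

The optimizer then redistributes material in the design domains to drive this sum toward zero across all five frequencies simultaneously.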
The initial state is again X = 0. The optimized topology is shown below, along with the sound field for both the initial state and optimized state.
Sound pressure for the initial state (left) and optimized state (right) at 4 kHz, with the optimized topology shown in blue within the square design domains.
Since the sound pressure magnitude in the observation point of the initial state is lower than the objective pressure, the topology optimization results in the creation of a reflector that focuses the on-axis sound. The sound pressure magnitudes before and after the optimization are shown below. The pressure magnitude is close to the desired objective pressure in the frequency range following the optimization.
The pressure magnitude divided by the objective pressure for the initial and optimized topologies.
Acoustic topology optimization offers great potential for helping acoustic engineers come up with innovative designs. As I have demonstrated today, you can effectively use this technique in COMSOL Multiphysics. With proper formulations of objectives and constraints, it is possible to construct applications with new and innovative topologies — topologies that would most likely not have been found using traditional methods.
I would like to give special thanks to Niels Aage, an associate professor at the Technical University of Denmark, for several fruitful discussions on the topic of optimization.
To learn more about using acoustic topology optimization in COMSOL Multiphysics, we encourage you to download the following example from our Application Gallery: Topology Optimization of Acoustic Modes in a 2D Room.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN ReSound A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN ReSound as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Saying that the world’s oceans are large is an understatement. Oceans cover around 71% of Earth’s surface and the deepest known point, the Challenger Deep in the Mariana Trench, extends down for about 36,000 feet (almost 11 km). To study this massive environment, researchers need powerful, far-reaching tools.
The depth of the Challenger Deep compared to the size of Mount Everest. Image by Nomi887 — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Ocean acoustic tomography, which involves deep-water, low-frequency sound sources, is one option for measuring the temperature of oceans. This system measures the time it takes sound signals to travel between two instruments at known locations, a sound source and a receiver. Because sound travels faster in warmer water, you can use this measurement to extract the average temperature over the distance between the source and the receiver.
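The inversion from travel time to temperature can be sketched numerically. The snippet below uses Medwin's simplified seawater sound-speed formula (at the surface, salinity 35 ppt) and hypothetical range and travel-time values; real tomography uses far more careful sound-speed and path models:

```python
# Average sound speed from travel time, then a rough mean-temperature
# estimate via Medwin's simplified seawater sound-speed approximation
# (S = 35 ppt, depth z = 0), used here purely for illustration:
#   c(T) ~ 1449.2 + 4.6*T - 0.055*T**2 + 0.00029*T**3
def medwin_c(T):
    return 1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3

distance = 100e3     # source-receiver range [m] -- hypothetical
travel_time = 66.0   # measured travel time [s] -- hypothetical

c_avg = distance / travel_time

# Invert c(T) by bisection over a plausible ocean temperature range,
# exploiting that c increases monotonically with T in this range:
lo, hi = -2.0, 30.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if medwin_c(mid) < c_avg:
        lo = mid
    else:
        hi = mid

print(f"average sound speed {c_avg:.1f} m/s -> mean temperature ~{mid:.1f} degC")
```

A faster path (warmer water) thus maps directly to a shorter travel time, which is the principle behind using tomography as a large-scale ocean thermometer.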
To get these measurements, long-range ocean acoustic tomography must be able to use low-frequency signals to cover a broad frequency band, something that often requires a high-power sound source. Therefore, creating a system that can successfully cover a large frequency band, while reducing power consumption via a highly efficient design, is ideal. One particular focus in this field is on resonators, since saving energy in a resonator helps increase overall transducer efficiency in cases where the wavelength is larger than the transducer's dimensions.
In response to this, Andrey K. Morozov at Teledyne Webb Research (TWR) developed a highly efficient sound source design with a tunable resonator. While previous research involved a high-Q resonant organ pipe operating at a frequency band of 200-300 Hz, this study revolves around a new high-frequency sound source that operates at an octave band of 500-1000 Hz. Further, the new high-Q resonant organ pipe design can keep a system in resonance when the transmitted signal has a changing instantaneous frequency. With its small size, this design is helpful for shallow water experiments.
In this design, a digitally synthesized frequency sweep signal is transmitted by a sound projector. The projector and high-Q resonator tune the organ pipe so that it matches a reference signal’s frequency and phase. This resonant tube can operate at any depth, but before it was ready to hit the seas, Morozov studied its design using the COMSOL Multiphysics® software.
As we can see in the schematic below, the organ pipe device is composed of slotted resonator tubes (or pipes) that are moved via a symmetrical Tonpilz transducer. The Tonpilz driver’s piezoceramic stacks move pistons and thereby vary the volume. The two symmetrical pipes that are coupled through the Tonpilz transducer function like a half-wave resonator that has a volume velocity source driver.
Image of a tunable resonant sound source and Tonpilz driver. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
Let’s focus on how these resonator tubes include slots or vents. In order to achieve smooth control of the resonance frequency, an electromechanical actuator moves two sleeves axially along the resonator tubes, maintaining a small gap in between the sleeve and pipe. Through this action, the slots are covered and the actuator can tune the organ pipe in a large frequency range. When the sleeves’ positions relative to the slot change, the equivalent acoustic impedance of the slots also changes, altering the resonance frequency of the entire resonator.
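The tuning principle can be sketched with the textbook half-wave resonance relation f = c/(2·L_eff): covering or uncovering the slots changes the effective acoustic length of the pipe, which shifts the resonance. The lengths below are illustrative round numbers, not the device's actual dimensions:

```python
# A half-wave resonator resonates near f = c / (2 * L_eff). Moving the
# sleeves over the slots effectively changes L_eff, tuning the resonance.
c_water = 1481.0  # speed of sound in water [m/s]

def half_wave_resonance(L_eff):
    return c_water / (2.0 * L_eff)

# Illustrative effective lengths spanning roughly the 500-1000 Hz band:
for L in (1.48, 1.0, 0.74):
    print(f"L_eff = {L:4.2f} m -> f ~ {half_wave_resonance(L):6.1f} Hz")
```

Halving the effective length doubles the resonance frequency, which is consistent with the one-octave (500-1000 Hz) tuning range of the design.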
In the next section, we’ll see how simulation was used to further improve the design of the tunable organ pipe.
Morozov reduced the thickness of the resonator’s walls to make them lighter, which caused them to vibrate and store a large amount of acoustical energy. To prevent acoustical coupling between the main resonator and a mechanical part of the system, he used shock mounts to attach the main resonator pipe to the backbone rail. This design change did not completely avoid unwanted resonance effects in the tuning mechanics, so Morozov turned to simulation for further optimization.
The plot below and to the left represents the sound pressure level at resonance. Here, the vents in the main resonator pipe open and sound energy leaves the organ pipe through the resulting gap. In a low-frequency design, rounded edges in the sleeve cylinder help to prevent dual resonances in this position, but this isn’t a complete solution for a high-frequency resonator.
To learn more, the researcher studied the resonance curves for different sleeve positions, as seen below and to the right, shifting each position in 1 cm intervals.
Left: Simulation results of a tunable organ pipe, performed for a standard spherical driver. Right: Results showing the different sleeve positions and their correlating frequency responses. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
His results showed that the vibrations in the main pipe and the resonating water beneath the sleeve can disturb the main resonance curve. Although both simulation results and experimental tests agree that this problem can be alleviated by increasing wall thickness, the resulting pipe design is too heavy.
To address this issue, Morozov easily tested different design configurations with simulation. He discovered that the tunable mechanism can be improved by ensuring that the gap between the sleeve and the main pipe is present on only one side of the orifice. Using this improved design as a basis, he completed additional studies, including investigating the optimal frequency, particle velocity, and sound pressure of the device, which we’ll focus on next.
Comparing sound pressure levels and frequency in the improved design for various sleeve positions. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
In this new design, the pipe first functions as a half-wavelength resonator and radiates through its main orifices. At the end of the frequency band, the sound is mostly radiated through the completely open tuning vents, as seen in the following images. The transition between these two states is continuous.
Absolute sound pressure when the slots are completely closed at the starting frequency range of 500 Hz (left) and when the slots are completely open at the maximum resonance frequency of 1000 Hz (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
To conclude, these simulations enabled Morozov to successfully visualize the structural acoustics of a new high-Q resonant organ pipe with an octave band of 500 to 1000 Hz and investigate important details, including the optimal profile of the opening slots.
Finally, a physical organ pipe was constructed out of aluminum using the exact dimensions of the model. The initial test pool results were similar to the simulation results and achieved the expected frequency range. However, the resonance frequencies were slightly lower in these tests. This is likely explained by the elliptical shape of the pipe and the limited pool dimensions. Both factors contributed to the decreased resonance frequency.
Due to these results, Morozov altered his experiment by cutting the pipes, as well as by performing another test at the Woods Hole Oceanographic Institution dock.
The altered sound source system (left), tested at the Woods Hole Oceanographic Institution (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
The new experiment indicated that while the simulation could efficiently predict resonance frequencies, the model’s Q-factor is larger than in the experimental results. This difference is expected because real losses are hard to predict. Also, there were slight variations between the model and the realized design.
Designing a tunable resonant system is challenging because you need to precisely adjust parameters and ensure that it achieves the necessary frequency range. Using COMSOL Multiphysics, Morozov managed to achieve the octave frequency range in his tunable sound source design before performing a large number of water tests. He found that the physical sound source parameters reasonably matched the simulation.
This improved design can help scientists measure long-range sound propagation and temperature over large distances in the ocean, allowing them to study everything from small-scale temperature fluctuations to overarching oceanic climate change.