Can you make sound out of light? In his presentation, Carl Meinhart answers this question by starting small, with photons and phonons. The idea is that when an infrared photon interacts with matter, it can create a Stokes-shifted photon with a lower energy. Simultaneously, the excess energy from the shift can generate an acoustic phonon. In this way, light can generate acoustics. But, as Meinhart notes in the keynote video, “it’s kind of a chicken-and-egg [scenario]; you need the acoustics and this scattered light to create each other, so they have to exist simultaneously.”

*From the video: Carl Meinhart discusses a theory behind converting light into acoustics.*

While the idea was originally predicted in the 1920s as *Brillouin scattering*, it wasn’t observed until the 1960s. Modern researchers can now turn to the COMSOL® software to analyze this theory and all of the relevant multiphysics phenomena. For a specific photonics example, Meinhart examines an innovative design from the Vahala Research Group at Caltech, a pioneer in this field. The Vahala Research Group designed an optical ring resonator that uses whispering gallery modes instead of guided waveguide modes. Meinhart explains that when simulating this kind of device, “it’s very important to design the optics and the acoustics simultaneously,” a task that can be achieved with multiphysics simulation.
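For a sense of the scales involved, the frequency of the acoustic phonon generated in backward Brillouin scattering can be estimated from the Brillouin shift formula, *f*_{B} = 2*nv*_{a}/*λ*_{0}. The snippet below is a back-of-envelope sketch using typical textbook values for fused silica at a 1550 nm pump wavelength; the numbers are illustrative and are not taken from Meinhart’s presentation.

```python
# Back-of-envelope Brillouin frequency shift, f_B = 2 * n * v_a / lambda0,
# for backscattering. The values below are typical textbook numbers for
# fused silica, not parameters from the presentation.
n = 1.45           # refractive index of silica (approximate)
v_a = 5960.0       # m/s, longitudinal acoustic velocity in silica (approximate)
lambda0 = 1550e-9  # m, pump wavelength in vacuum

f_B = 2 * n * v_a / lambda0
print(f"{f_B / 1e9:.1f} GHz")  # on the order of 10 GHz, i.e., a hypersonic phonon
```

The ~10 GHz result illustrates why the acoustic and optical fields must be designed together: the phonon wavelength is comparable to the optical wavelength in the material.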

Through their research, the team found that their design has a very high Q factor. Research like this indicates that very sensitive high-Q resonators can be built by combining photons, phonons, and the concept of Brillouin scattering.

To try this sort of simulation yourself, download the example Meinhart mentions in his presentation, the Optical Ring Resonator Notch Filter tutorial.

Next, Meinhart turns to an industry example: maximizing the speed of a microfluidic valve. When looking to increase speed, a researcher’s first move is often to decrease inertia by making their design light and small. However, physical prototypes of small devices like microfluidic valves are expensive and time-consuming to create, and difficult to measure experimentally.

Instead, to analyze microfluidic devices, researchers can use the COMSOL Multiphysics® software, which Meinhart states is “an invaluable tool for this process” because “the only way you can really visualize what’s going on is through numerical simulation.”

*From the video: Carl Meinhart shares the example of a magnetically actuated microfluidic valve (left) and its approximate real-world size (right).*

For a concrete example, Meinhart considers a microfluidic valve being commercialized by Owl Biomedical, Inc. To increase their microvalve’s speed, the group tried using magnetic materials and thin silicon, which bends well and is a high-Q material. The resulting magnetically actuated device can be evaluated by importing the complicated geometry into COMSOL Multiphysics® using a product like LiveLink™ *for* SOLIDWORKS®. Then, researchers can analyze the design by combining nonlinear magnetics, fluid-structure interaction, and particle tracing simulation studies.

Initial results revealed that this microvalve design contained nonoptimal flow patterns. But, by using simulation to modify the shape over many iterations, researchers can balance the spring forces and optimize the flow and opening and closing speeds. The result? An incredibly fast microfluidic valve design that, when used to create a cell sorter, can sort 55,000 cells in 1 second or 200 million cells per hour. This optimized design has the potential to revolutionize cell sorting through Owl Biomedical’s cell sorter.

To learn more about how Carl Meinhart uses multiphysics simulation to study transport processes in photonics and microfluidics, watch the video at the top of this post.

*SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.*

Echologics provides specialized services in water loss management, leak detection, and pipe condition assessment. They developed a permanent leak detection system for pipe networks, using acoustic technology. With this solution, Sebastien says, “the pipes can talk to you.”

The location of a leak is measured using the time delay between signals captured with two sensors placed on the pipe. The time delay is determined using the correlation function. This technique also requires knowledge of the mechanical behavior of the pipe and the propagation speed of acoustic waves to accurately locate the leak. To solve this problem, Sebastien created an app using the Application Builder, a built-in tool in the COMSOL Multiphysics® software, to find the exact location of pipe leaks.
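The cross-correlation technique itself is straightforward to sketch. The snippet below is an illustrative reconstruction in Python, not Echologics’ implementation or the COMSOL app: it assumes a leak located between two sensors a known distance apart, with a known acoustic propagation speed in the pipe.

```python
import numpy as np

def locate_leak(sig1, sig2, fs, sensor_spacing, wave_speed):
    """Estimate the leak position from the cross-correlation time delay.

    Returns the distance (m) from sensor 1, assuming the leak lies between
    the two sensors. All names here are illustrative, not any real API.
    """
    # Cross-correlate: the lag of the peak gives the arrival-time delay
    corr = np.correlate(sig1, sig2, mode="full")
    lags = np.arange(-len(sig2) + 1, len(sig1))
    delay = lags[np.argmax(corr)] / fs  # seconds; t1 - t2
    # Geometry: t1 = d1/c, t2 = (L - d1)/c, so delay = (2*d1 - L)/c
    # => d1 = (L + c*delay)/2
    return 0.5 * (sensor_spacing + wave_speed * delay)

# Synthetic check: leak 30 m from sensor 1 on a 100 m segment, c = 1200 m/s
fs, L, c, d1 = 10_000, 100.0, 1200.0, 30.0
t = np.arange(0, 0.5, 1 / fs)
noise = np.random.default_rng(0).standard_normal(t.size)

def shift(x, tau):
    # Delay a signal by tau seconds (circular shift is fine for this demo)
    return np.roll(x, int(round(tau * fs)))

sig1 = shift(noise, d1 / c)        # leak noise arrives after d1/c seconds
sig2 = shift(noise, (L - d1) / c)  # and after (L - d1)/c seconds here
print(round(locate_leak(sig1, sig2, fs, L, c), 1))  # ≈ 30.0 m from sensor 1
```

The estimate is quantized by the sampling rate; the app additionally needs the correct wave speed, which is exactly what the *Speed Prediction* part of the workflow provides.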

He explains that the app is advantageous for Echologics because its user interface is designed for ease of use in the field. This includes app dimensions that could easily fit on a tablet device when accessed with the COMSOL Server™ product, for instance. This is particularly useful for Echologics, as their field engineers travel extensively.

With apps, engineers at Echologics can easily run and rerun analyses. For example, an engineer can predict a leak location in a pipe using the app and contact the client to tell them where the leak is located. If the client recently replaced that segment of the pipe with a different material, for example, the engineer can rerun the analysis through the app and provide the exact leak location based on the new information. This enables them to quickly respond to the customer with an updated location.

During his keynote talk, Sebastien discussed how Echologics designed their app so that users can easily navigate its interface. By separating the app into five tabs, field engineers only have to calculate the information they need. For example, if an engineer using the app has already measured the speed of sound in a certain pipe segment, they don’t need to use the *Speed Prediction* tab in the app. Instead, they can simply input the measured speed in the *Leak Location* tab that calculates the results.

*From the video: Sebastien Perrier demonstrates the custom app built by Echologics for predicting the location of a pipe leak.*

After all of the information is entered into the app, it reports the leak’s location in relation to the two closest sensors. Echologics’ app also includes a *Visualization* tab so that the app users can see their results. For Sebastien, the beauty of this app is that he can “visualize and confirm” when each sensor detects the leak.

Watch Sebastien Perrier give a demonstration of this app in the keynote video at the top of this post.


To avoid detection by sonar during World War II, the German Navy covered their U-boats in rubber sheets with drilled air holes at regular distances. The same basic technology of embedding periodic patterns in spongy coatings is still in use, although the specifics are evolving. Finding the pattern and material properties that will minimize the echo for a desired range of frequencies is not an easy task, but one that lends itself very well to modeling.

Let’s find out how you can set up a model of an anechoic coating using the COMSOL Multiphysics® software. For our demonstration, we’ll consider a coating discussed in Ref. 1. The authors of this paper propose a quadratic array of tiny cylindrical holes stamped into a thin polydimethylsiloxane (PDMS) film. The film is placed on the submarine hull with the holes facing the steel. Hence, the holes form air bubbles, even when the vessel is submerged in water. Despite having a thickness of only 0.2 mm, this setup results in less than 10% reflectance for most of the frequency range between 1 and 2.8 MHz, and less than 50% reflectance all the way up to 5 MHz.

When setting up models with periodic geometries, the first thing you want to figure out is how far you can reduce the size of the model geometry. The figure below shows the periodic pattern of air cavities. The blue dashed-line square indicates an obvious and completely general choice of unit cell. Flanked by periodic *Floquet* boundary conditions, this geometry would allow for incident radiation from an arbitrary angle. See our Porous Absorber model for an example of oblique incidence on a periodic structure.

*Top view of the periodic pattern with two candidate unit cells.*

By assuming perpendicular plane wave incidence, we can exploit not only the periodicity, but also the geometric mirror symmetries. After establishing the *x*- and *y*-plane symmetries, it can be easy to forget that there is one mirror plane left, forming a 45-degree angle with both the *x*- and *y*-axes. This leaves us with the green solid-line triangle in the illustration, constituting 1/8 of the full periodic unit cell. Keep in mind, of course, that failing to notice and use a symmetry is not the end of the world — it merely makes the model more expensive than necessary to run.

Here is what the resulting geometry looks like, with water above the PDMS and steel below it:

*Model geometry produced in COMSOL Multiphysics® with the add-on Acoustics Module.*

We will take both the steel and the water to continue indefinitely beyond the modeled geometry. While this is clearly a good assumption for the water, it may seem like a less than obvious choice for the steel. Outer submarine hulls can be just a few millimeters thick, and omitting the other side of the hull means neglecting any reflections that might occur on the inside.

However, the transmission into the steel is small because of the high acoustic impedance contrast between the PDMS and the steel. Also, much of the reflected sound would likely be absorbed by the coating. Therefore, including the full thickness of the steel domain is left as an exercise for the curious reader. If you try this, please tell us about it in the comments section!

Materials that go on “forever” can be modeled either with various low-reflecting boundary conditions or with *perfectly matched layers* (PMLs). The former work optimally under the assumption of perpendicular plane waves. PMLs are more general, making them the preferred choice in nonperiodic, open geometries. For more information on PMLs, see our blog post on perfectly matched layers for wave electromagnetics problems — the considerations and conclusions are similar in pressure acoustics and structural mechanics.

So, can we expect only perpendicular plane waves at the ends of our geometry? To know for sure, we need a primer on diffraction theory.

The transmitted and reflected waves caused by a plane wave incident on a periodic pattern can be described as a sum of plane waves propagating in a finite number of discrete diffraction angles. In the immediate vicinity of the pattern, you will, of course, also have some arbitrarily shaped evanescent fields. Nevertheless, the propagating waves are all plane.

Typically, most of the acoustic energy will end up in the “zeroth diffraction order”, which is just the refraction and mirror reflection of the incident wave. Reflected higher diffraction orders occur at angles where the path distance between radiation traveling in the same direction from two neighboring unit cells is an integer number of wavelengths. This happens according to the equation

mc_i = fd(\sin(\theta_i)+\sin(\theta_{r,m}))

Here, *m* = 0, ±1, ±2, … is the diffraction order; *c*_{i} is the pressure speed of sound in the incident medium; *f* is the frequency; *d* is the width of the repeating unit cell; *θ*_{i} is the angle of incidence; and *θ*_{r,m} is the angle of the *m*th order reflected diffracted wave.

Similarly, for the transmitted diffraction orders, we have

mc_i=fd(\sin(\theta_i)+c_i/c_t\sin(\theta_{t,m}))

with *c*_{t} being the pressure wave speed of sound in the final medium and *θ*_{t,m} the angle of the *m*th order transmitted diffracted wave.

Let us now look at the anechoic coating model, with *θ*_{i} = 0. For an *m*th order reflected diffracted wave to exist, we need

-1<\frac{mc_i}{fd}<1

So, if *f* < *c*_{i}/*d*, we have no reflected diffracted waves. In the same manner, provided *f* < *c*_{t}/*d*, we have no transmitted diffraction orders. The pressure speed of sound is higher in steel than in water, so diffraction would arise in the reflected waves first. With *d* = 120 µm and *c*_{i} = 1481 m/s, we can finally conclude that there is no diffraction at frequencies below 12.3 MHz.
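These cutoff conditions are easy to check numerically. In the sketch below, the water speed and cell width are the values given above; the steel wave speed is an assumed nominal value, used only to confirm that the transmitted cutoff lies even higher.

```python
# Quick check of the no-diffraction conditions derived above:
# reflected orders appear for f > c_i/d, transmitted orders for f > c_t/d.
c_water = 1481.0  # m/s, pressure wave speed in the incident medium (water)
c_steel = 5900.0  # m/s, assumed nominal pressure wave speed in steel
d = 120e-6        # m, width of the repeating unit cell

f_reflected = c_water / d   # first reflected diffraction order appears here
f_transmitted = c_steel / d # first transmitted diffraction order appears here

print(f"{f_reflected / 1e6:.1f} MHz")  # ≈ 12.3 MHz, matching the text
```

Since the transmitted cutoff is several times higher, the reflected orders in water are indeed the binding constraint.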

Having decided that PMLs are not required in the relevant frequency spectrum, we need only leave a sufficient depth of water and steel in the model so that most of the evanescent wave content will have died out before reaching the exterior boundaries. For boundary conditions, we use a *Low-Reflecting Boundary* in the steel and the pressure acoustics counterpart, *Plane Wave Radiation*, in the water.

Speaking of *Pressure Acoustics*, that interface applies both in the water and in the air cavities. When modeling small confined spaces, the *Thermoviscous Acoustics* interface can be worth considering as a potentially more accurate option. However, it is only needed if the thermal and/or viscous boundary layers have a significant thickness. At the frequencies that we are concerned with here, these layers do remain much thinner than the dimensions of the cavity.

The steel and PDMS domains are modeled with *Solid Mechanics*. If you select *Acoustic-Solid Interaction, Frequency Domain* in the COMSOL Multiphysics® *Model Wizard*, you get the two relevant interfaces and an *Acoustic-Structure Boundary* automatically connecting them together.

The model is excited with an incident perpendicular wave added to the plane wave radiation condition. To find out the transmission, reflection, and absorption coefficients, you need to extract what fraction of the energy is passing through, being reflected, and being absorbed, respectively.

The transmitted power is simple. The outward mechanical energy flux is automatically available as solid.nI, so all you need to do is integrate that over the low-reflecting boundary terminating the steel domain. Divide that by the incident power, which for a plane wave has a known analytical expression, and you obtain the transmission coefficient.

The net acoustic intensity comes as a vector (acpr.Ix, acpr.Iy, acpr.Iz). To get the reflected power, take the negative of the *z*-component and subtract its integral over the inlet from the incident power. Divide by the incident power again and you have the reflection coefficient. Finally, the absorption coefficient is most conveniently obtained from the condition that all three coefficients sum up to 1.
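The post-processing bookkeeping described above can be sketched in a few lines. The power values below are hypothetical placeholders, not results from the model; in COMSOL Multiphysics® the integrals would come from boundary integration of solid.nI and the acoustic intensity.

```python
def trs_coefficients(w_incident, w_transmitted, w_net_in):
    """Transmission, reflection, and absorption from integrated powers.

    w_net_in is the integral of the net inward acoustic intensity over the
    inlet boundary, so reflected power = w_incident - w_net_in. All inputs
    here are illustrative numbers, not model output.
    """
    T = w_transmitted / w_incident
    R = (w_incident - w_net_in) / w_incident
    A = 1.0 - T - R  # the three coefficients sum to one
    return T, R, A

# Hypothetical integrated powers (arbitrary consistent units)
T, R, A = trs_coefficients(w_incident=1.0, w_transmitted=0.02, w_net_in=0.95)
print(round(T, 2), round(R, 2), round(A, 2))  # 0.02 0.05 0.93
```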

The below plot shows the resulting transmission, reflection, and absorption coefficients. The results are generally in good agreement with those in the paper (referenced at the end of this post).

- Look at other acoustics models and apps with periodic geometries:
- Read related blog posts:
- Learn about the transfer impedance of a perforate
- Another blog post on modeling an RF anechoic chamber discusses similar techniques applied to electromagnetic waves

- V. Leroy, A. Strybulevych, M. Lanoy, F. Lemoult, A. Tourin, and J.H. Page, “Superabsorption of acoustic waves with bubble metascreens”, *Phys. Rev. B*, 91, 020301(R), 2015.

Topology optimization is a powerful tool that enables engineers to find optimal solutions to problems related to their applications. Here, we’ll take a closer look at topology optimization as it relates to acoustics and how we optimally distribute acoustic media to obtain a desired response. Several examples will further illustrate the potential of this optimization technique.

Many engineering tasks revolve around optimizing an existing design or a future design for a certain application. Best practices and experiences derived from years of working within a given industry are of great importance when it comes to improving designs. However, optimization problems are often so complex that it is impossible to know if design iterations are pushing things in the right direction. This is where *optimization* as a mathematical discipline comes into play.

Before we proceed, let’s review some important terminology. In optimization — be it parameter optimization, shape optimization, or in our case topology optimization — there is always at least one so-called *objective function*. Typically, we want to minimize this function. For acoustic problems, we may want to minimize the sound pressure in a certain region, whereas for structural mechanics problems, we may want to minimize the stresses in a part of a structure. We state this objective as

\min_{\chi} F (\chi)

with *F* being the objective function. A *design variable*, *χ*, is varied throughout the optimization process to reach an optimal solution. It is varied within a *design domain*, denoted *Ω*_{d}, which generally does not make up the entire finite element domain.

*The design domain is generally a subset of the entire finite element domain.*

Note that since the design variable varies as a function of space over the finite element discretized design domain, it is, strictly speaking, a vector of nodal values. For this particular case, we will simply address it as a variable.

The optimization problem may have more than one objective function, and so it will be up to the engineer to decide how large of a weight each of these objectives should carry. Note that because the objectives may oppose each other during the optimization, special care should be taken when setting up the problem.

In addition to the objective function(s), there will usually be some *constraints* associated with the optimization problem. These constraints reflect some inherent size and/or weight limitations for the problem in question. With the *Optimization* interface in COMSOL Multiphysics, we can input the design variable, the objective function(s), and the constraints in a systematic way.

With topology optimization, we have an iterative process where the design variable is varied throughout the design domain. The design variable is continuous throughout the domain and takes on values from zero to one over the domain:

0 \leq \chi \leq 1 \quad \forall\ (x, y) \in \Omega_d.

Ideally, we want the design variable to settle near values of either zero or one. In this way, we get a near discrete design, with two distinct (binary) states distributed over the design domain. The interpretation of these two states will depend on the physics related to our optimization. Since most literature addresses topology optimization within the context of structural mechanics, we will first look at this type of physics and address its acoustics counterpart in the next section.

Topology optimization in COMSOL Multiphysics for static structural mechanics was a previous topic of discussion on the COMSOL Blog. To give a brief overview: A so-called MBB beam is investigated with the objective of maximizing the stiffness by minimizing the total strain energy for a given load and boundary conditions. The design domain makes up the entire finite element domain. A constraint is applied to the total mass of the structure. In the design space, Young’s modulus is interpolated via the design variable as

E(\chi) = \left\{ \begin{array}{ll}E_0\ \textrm{for}\ \chi=1\\0\ \textrm{for}\ \chi=0 \end{array} \right..

To help the binary design, we can use a so-called solid isotropic material with penalization (SIMP) interpolation

E (\chi) = \chi^p E_0

where *p* is the penalization factor, typically taking on a value in the range of three to five. With this interpolation (and an implicit linear interpolation of the density), intermediate values of *χ* are avoided by the solver, as they provide less favorable stiffness-to-weight ratios. I have recreated the resulting MBB beam topology from the previous blog post below.

*Recreation of the optimized MBB beam.*

In this figure, black indicates a material with a user-defined Young’s modulus of *E _{0}*. Meanwhile, white corresponds to zero stiffness, indicating that there should be no material.
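The effect of the SIMP penalization can be illustrated in a few lines of code. The sketch below is generic (with *E*_{0} normalized to 1) and is not tied to the MBB model setup.

```python
def simp_youngs_modulus(chi, E0=1.0, p=3):
    """SIMP interpolation E(chi) = chi**p * E0, with penalization factor p."""
    return chi**p * E0

# An intermediate density chi = 0.5 keeps 50% of the mass (linear density
# interpolation) but only 12.5% of the stiffness with p = 3, so the
# optimizer is pushed toward the binary values 0 and 1.
chi = 0.5
stiffness_fraction = simp_youngs_modulus(chi)  # 0.5**3 = 0.125
mass_fraction = chi                            # 0.5
print(stiffness_fraction, mass_fraction)
```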

Let’s now move on to our discussion of acoustic topology optimization, where we have a frequency-dependent solution with wave propagation in an acoustic media. The design variable is now related to the physics of acoustics. Instead of having a binary *void-material* distribution of material, our goal is to have a binary *air-solid* distribution, where “solid” refers to a fluid with a high density and bulk modulus, which emulates a solid structure.

We define four parameters that describe the inertial and compressional behavior of the standard medium and the “solid” medium: Air is given a density of *ρ*_{1} and a bulk modulus of *K*_{1}, and the “solid” medium has a higher density of *ρ*_{2} and a higher bulk modulus of *K*_{2}:

\rho(\chi) = \left\{ \begin{array}{ll}\rho_2\ \textrm{for}\ \chi=1 \\ \rho_1\ \textrm{for} \ \chi=0 \end{array} \right.

and

K(\chi) = \left\{ \begin{array}{ll}K_2\ \textrm{for}\ \chi=1\\K_1\ \textrm{for}\ \chi=0 \end{array} \right..

The easiest way to obtain these characteristics is by linear interpolation between the two extreme values. This is not necessarily the best approach, since intermediate values of *χ* will not be penalized and the optimal design may therefore not be binary, making it infeasible to manufacture. Alternative interpolation schemes are given in the literature. In the cases presented here, the so-called rational approximation of material properties (RAMP) interpolation is used (see Ref. 1).
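As a generic illustration of how such an interpolation behaves, the sketch below uses one common form of the RAMP function; the exact scheme and parameter values used in the post follow Ref. 1 and may differ in detail.

```python
def ramp(chi, q=8.0):
    """One common form of the RAMP function, chi / (1 + q*(1 - chi)).

    q > 0 penalizes intermediate values of chi. This is a generic sketch;
    the interpolation of rho and K actually used in the post follows Ref. 1.
    """
    return chi / (1.0 + q * (1.0 - chi))

def interpolate(chi, v_air, v_solid, q=8.0):
    # Material property between the air value (chi = 0) and solid value (chi = 1)
    return v_air + ramp(chi, q) * (v_solid - v_air)

rho_air, rho_solid = 1.2, 2700.0  # kg/m^3, illustrative values only
print(interpolate(0.0, rho_air, rho_solid))  # pure air density
print(interpolate(1.0, rho_air, rho_solid))  # pure "solid" density
print(ramp(0.5))  # 0.1: halfway chi yields far less than half the change
```

Unlike linear interpolation, the RAMP curve makes intermediate values unattractive to the optimizer, which helps drive the design toward a binary air–solid distribution.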

Just as with structural optimization, we define a design domain where the material distribution can take place while simultaneously satisfying the constraints. Area or volume constraints can be defined via the design variable. For example, an area constraint on the design domain can be stated as an *inequality constraint*

\int_{\Omega_d} \chi\, d\Omega_d \leq S_r

where *S*_{r} is the ratio between the area of the design that is assigned solid properties and the area of the entire design domain.

Let’s first take a look at a silencer (or “muffler”) example. For simplicity, we limit ourselves to a 2D domain. A typical measure used when characterizing a silencer is the so-called transmission loss, denoted *TL*, which is a logarithmic measure of the ratio of input power to output power:

TL = 10 \log_{10} \left(\frac{W_i}{W_o} \right).

The transmission loss is calculated using the so-called three-point method (see Ref. 2). We use this as our objective function, seeking to maximize it at a single frequency (in this case 420 Hz):

\max_{\chi} TL (420 \text {Hz}).

Two design domains are defined above and below a tubular section. The design domain is constrained in such a way that a maximum of 5% of the 2D area is the structure and thus 95% must be air:

\int_{\Omega_d} \chi\, d\Omega_d \leq 0.05.

The initial state for the design domain is 100% air, i.e., *χ* = 0. The animation below shows the evolution from the initial state to the resulting topology.

*An animation depicting the evolution from the initial state to the optimized silencer topology.*

The optimized structure takes on a “double expansion chamber” (see Ref. 3) silencer topology. The transmission loss has increased by approximately 14 dB at the target frequency, as illustrated in the plot below. However, at all frequencies other than the target frequency, the transmission loss has also changed, which may be of great importance for the specific application. Therefore, a single-frequency optimization may not be the best choice for the typical design problem.

*Transmission loss for the initial state and optimized silencer.*

Shifting gears, let’s now look at how to optimize for two objective functions and two frequencies. Here, we again consider a 2D room with three hard walls and a pressure input at the left side of the room. The room also includes two objective areas, *Ω*_{1} and *Ω*_{2}. The goals are to:

- Minimize the sound pressure level in *Ω*_{1} at a frequency *f*_{1} and
- Minimize the sound pressure level in *Ω*_{2} at a frequency *f*_{2} = 1.5 *f*_{1}

with the circular design domain *Ω*_{d} and an area constraint that is 10% structure. The initial state is *χ* = 0, making the design domain 100% air.

*A square 2D room with a circular design domain and two objective domains.*

With more than one objective function, we must make some choices regarding the relative weights, or importance, of the different objectives. In this case, the two objectives are given equal weight, and the problem is stated as a so-called *min-max* problem:

\begin{align}
\min_{\chi}\ \max_{f_1, f_2}\ & SPL_i (\chi, f_i) \\
\text{subject to} \quad & \int_{\Omega_d} \chi\, d\Omega_d \leq 0.1.
\end{align}

The figures below show the optimized topology (blue) along with the sound pressure for both frequencies using the same pressure scale. Note how the optimized topology results in a low-pressure zone (green) appearing in the upper-right corner at the first frequency. At the same time, this optimized topology ensures a similar low-pressure zone in the lower-right corner at the second frequency. This would certainly be a challenging task if trial-and-error was the only choice.

*Sound pressure for frequency* f_{1} *(left) and for frequency* f_{2} *(right). The optimized topology is shown in blue.*

As a third and final example, we’ll optimize a single objective over a frequency range. A sound source is radiating into a 2D domain, where we initially have a cylindrical sound field. Two square design domains are present, but since there is symmetry, we only consider one half of the geometry in the simulation. In this case, we want a constant magnitude of the on-axis sound pressure, *p̄*_{obj}, at a point 0.4 m in front of the sound source. The optimization is carried out in a frequency range of 4,000 to 4,200 Hz (50 Hz steps, a total of five frequencies). We can accomplish this via the Global Least-Squares Objective functionality in COMSOL Multiphysics, with the problem being stated as:

\begin{align}
\min_{\chi}\ & \sum_{i=1}^{5} (\mid p_i (\chi, f_i, 0, 0.4) \mid -\overline{p}_{obj})^2 \\
\text{subject to} \quad & \int_{\Omega_d} \chi\, d\Omega_d \leq 0.1.
\end{align}

The initial state is again *χ* = 0. The optimized topology is shown below, along with the sound field for both the initial state and the optimized state.

*Sound pressure for the initial state (left) and optimized state (right) at 4 kHz, with the optimized topology shown in blue within the square design domains.*

Since the sound pressure magnitude at the observation point of the initial state is lower than the objective pressure, the topology optimization results in the creation of a reflector that focuses the on-axis sound. The sound pressure magnitudes before and after the optimization are shown below. Following the optimization, the pressure magnitude is close to the desired objective pressure across the frequency range.

*The pressure magnitude divided by the objective pressure for the initial and optimized topology.*
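The least-squares objective above is simple to evaluate for a candidate design. In the sketch below, the pressure magnitudes are hypothetical numbers standing in for model output at the five optimization frequencies; they are not results from the post.

```python
import numpy as np

def least_squares_objective(p_magnitudes, p_obj):
    """Sum of squared deviations of |p_i| from the target pressure,
    mirroring the objective stated above. The inputs are hypothetical
    stand-ins for the model's pressure at the five frequencies."""
    p = np.asarray(p_magnitudes)
    return float(np.sum((np.abs(p) - p_obj) ** 2))

p_obj = 1.0  # target pressure magnitude (normalized)
before = least_squares_objective([0.6, 0.7, 0.65, 0.7, 0.6], p_obj)
after = least_squares_objective([0.98, 1.01, 1.0, 0.99, 1.02], p_obj)
print(before > after)  # True: the optimized topology is closer to the target
```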

Acoustic topology optimization offers great potential for helping acoustic engineers come up with innovative designs. As I have demonstrated today, you can effectively use this technique in COMSOL Multiphysics. With proper formulations of objectives and constraints, it is possible to construct applications with new and innovative topologies — topologies that would most likely not have been found using traditional methods.

I would like to give special thanks to Niels Aage, an associate professor at the Technical University of Denmark, for several fruitful discussions on the topic of optimization.

To learn more about using acoustic topology optimization in COMSOL Multiphysics, we encourage you to download the following example from our Application Gallery: Topology Optimization of Acoustic Modes in a 2D Room.

- M.P. Bendsøe and O. Sigmund, *Topology Optimization: Theory, Methods, and Applications*, Springer, 2003.
- T.W. Wu and G.C. Wan, “Muffler performance studies using a direct mixed-body boundary element method and a three-point method for evaluating transmission loss”, Trans. ASME: *J. Vib. Acoust.*, vol. 118, pp. 479–484, 1996.
- Z. Tao and A.F. Seybert, “A review of current techniques for measuring muffler transmission loss”, *SAE International*, 2003.

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN ReSound A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN ReSound as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.


Saying that the world’s oceans are large is an understatement. Oceans cover around 71% of Earth’s surface and the deepest known point, the Challenger Deep in the Mariana Trench, extends down for about 36,000 feet (almost 11 km). To study this massive environment, researchers need powerful, far-reaching tools.

*The depth of the Challenger Deep compared to the size of Mount Everest. Image by Nomi887 — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

Ocean acoustic tomography, which involves deep-water, low-frequency sound sources, is one option for measuring the temperature of oceans. This system measures the time it takes sound signals to travel between two instruments at known locations, a sound source and a receiver. Because sound travels faster in warmer water, you can use this measurement to extract the average temperature over the distance between the source and the receiver.
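A back-of-envelope sketch shows how the travel-time measurement maps to path-averaged sound speed, and hence to temperature information. All numbers below are illustrative, not taken from any real experiment.

```python
def average_sound_speed(distance_m, travel_time_s):
    """Path-averaged sound speed between source and receiver."""
    return distance_m / travel_time_s

# Illustrative numbers: a 1000 km path, with two assumed mean sound speeds
L = 1.0e6
t_cold = L / 1480.0  # travel time if the mean speed is 1480 m/s
t_warm = L / 1484.0  # slightly warmer water -> faster sound, shorter time

dc = average_sound_speed(L, t_warm) - average_sound_speed(L, t_cold)
dt = t_cold - t_warm
print(round(dc, 1), round(dt, 1))  # ≈ 4.0 m/s faster, arriving ≈ 1.8 s earlier
```

A few meters per second of speed change thus produces a travel-time shift of seconds over a long path, which is readily measurable and is what makes tomography sensitive to average temperature.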

To get these measurements, long-range ocean acoustic tomography must be able to use low-frequency signals to cover a broad frequency band, something that often requires a high-power sound source. Therefore, creating a system that can successfully cover a large frequency band, while reducing power consumption via a highly efficient design, is ideal. One particular focus in this field is on resonators, since saving energy in a resonator helps increase overall transducer efficiency in cases where the wavelength is larger than its dimension.

In response to this, Andrey K. Morozov at Teledyne Webb Research (TWR) developed a highly efficient sound source with a tunable resonator. While previous research involved a high-Q resonant organ pipe operating in a frequency band of 200–300 Hz, this study revolves around a new high-frequency sound source that operates in an octave band of 500–1000 Hz. Further, the new high-Q resonant organ pipe design can keep the system in resonance while the transmitted signal’s instantaneous frequency changes. With its small size, this design is well suited for shallow-water experiments.

In this design, a digitally synthesized frequency sweep signal is transmitted by a sound projector. The projector and high-Q resonator tune the organ pipe so that it matches a reference signal’s frequency and phase. This resonant tube can operate at any depth, but before it was ready to hit the seas, Morozov studied its design using the COMSOL Multiphysics® software.

As we can see in the schematic below, the organ pipe device consists of slotted resonator tubes (or pipes) that are driven by a symmetrical Tonpilz transducer. The Tonpilz driver’s piezoceramic stacks move pistons and thereby vary the volume. The two symmetrical pipes, coupled through the Tonpilz transducer, function like a half-wave resonator with a volume velocity source driver.

*Image of a tunable resonant sound source and Tonpilz driver. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

Let’s focus on how these resonator tubes include slots or vents. In order to achieve smooth control of the resonance frequency, an electromechanical actuator moves two sleeves axially along the resonator tubes, maintaining a small gap in between the sleeve and pipe. Through this action, the slots are covered and the actuator can tune the organ pipe in a large frequency range. When the sleeves’ positions relative to the slot change, the equivalent acoustic impedance of the slots also changes, altering the resonance frequency of the entire resonator.
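A rough way to see why moving the sleeves can tune the pipe over an octave is to treat it as an ideal half-wave resonator, *f* = *c*/(2*L*): doubling the effective acoustic length halves the resonance frequency. The lengths below are hypothetical, not dimensions from Morozov’s design.

```python
def half_wave_resonance(c, effective_length):
    """Fundamental frequency of an ideal half-wave resonator, f = c / (2 L).

    A crude stand-in for the slotted organ pipe: covering more of the slots
    effectively lengthens the resonator and lowers the resonance. The lengths
    used below are illustrative only.
    """
    return c / (2.0 * effective_length)

c_water = 1481.0  # m/s, speed of sound in water
for L_eff in (0.75, 1.0, 1.5):  # m, hypothetical effective pipe lengths
    print(round(half_wave_resonance(c_water, L_eff)))
```

With these placeholder lengths, a factor-of-two change in effective length spans roughly 500–1000 Hz, the octave band quoted for the new source.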

In the next section, we’ll see how simulation was used to further improve the design of the tunable organ pipe.

Morozov reduced the thickness of the resonator’s walls to make them lighter, which caused them to vibrate and store a large amount of acoustical energy. To prevent acoustical coupling between the main resonator and a mechanical part of the system, he used shock mounts to attach the main resonator pipe to the backbone rail. This design change did not completely avoid unwanted resonance effects in the tuning mechanics, so Morozov turned to simulation for further optimization.

The plot below and to the left represents the sound pressure level at resonance. Here, the vents in the main resonator pipe open and sound energy leaves the organ pipe through the resulting gap. In a low-frequency design, rounded edges in the sleeve cylinder help to prevent dual resonances in this position, but this isn’t a complete solution for a high-frequency resonator.

To learn more, the researcher studied the resonance curves for different sleeve positions, as seen below and to the right, shifting each position in 1 cm intervals.

*Left: Simulation results of a tunable organ pipe, performed for a standard spherical driver. Right: Results showing the different sleeve positions and their correlating frequency responses. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

His results showed that the vibrations in the main pipe and the resonating water beneath the sleeve can disturb the main resonance curve. Although both simulation results and experimental tests agree that this problem can be alleviated by increasing wall thickness, the resulting pipe design is too heavy.

To address this issue, Morozov easily tested different design configurations with simulation. He discovered that the tunable mechanism can be improved by leaving the gap between the sleeve and the main pipe on only one side of the orifice. Using this improved design as a basis, he completed additional studies, including investigating the optimal frequency, particle velocity, and sound pressure of the device, which we’ll focus on next.

*Comparing sound pressure levels and frequency in the improved design for various sleeve positions. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

In this new design, the pipe first functions as a half-wavelength resonator and radiates through its main orifices. At the upper end of the frequency band, the sound is mostly radiated through the completely open tuning vents, as seen in the following images. The transition between these two states is continuous.

*Absolute sound pressure when the slots are completely closed at the starting frequency range of 500 Hz (left) and when the slots are completely open at the maximum resonance frequency of 1000 Hz (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

To conclude, these simulations enabled Morozov to successfully visualize the structural acoustics of a new high-Q resonant organ pipe with an octave band of 500 to 1000 Hz and investigate important details, including the optimal profile of the opening slots.

Finally, a physical organ pipe was constructed out of aluminum using the exact dimensions of the model. The initial test results in a pool were similar to the simulation results and achieved the expected frequency range. However, the resonance frequencies were slightly lower in these tests, which is likely explained by the elliptical shape of the pipe and the limited pool dimensions; both factors contributed to the decreased resonance frequency.

Due to these results, Morozov altered his experiment by cutting the pipes, as well as by performing another test at the Woods Hole Oceanographic Institution dock.

*The altered sound source system (left), tested at the Woods Hole Oceanographic Institution (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.*

The new experiment indicated that while the simulation could efficiently predict resonance frequencies, the model’s Q-factor is larger than in the experimental results. This difference is expected because real losses are hard to predict. Also, there were slight variations between the model and the realized design.
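One common way to compare a model's Q factor with measurements is to read both off the resonance curve via the half-power (-3 dB) bandwidth, Q = f0/Δf. The sketch below applies this to a synthetic single-resonance response; the curve and its parameters are made up for illustration and are not data from the paper.

```python
import numpy as np

# Estimating Q from a resonance curve via the half-power (-3 dB) bandwidth,
# Q = f_peak / (f2 - f1). The curve here is a synthetic single-resonance
# magnitude response, not data from the paper.

def resonance_magnitude(f, f0, Q):
    """|H(f)| of a damped single-degree-of-freedom resonator."""
    r = f / f0
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (r / Q) ** 2)

def estimate_q(f, mag):
    """Half-power bandwidth estimate of Q from a sampled magnitude curve."""
    peak = mag.max()
    f_peak = f[mag.argmax()]
    above = f[mag >= peak / np.sqrt(2.0)]   # samples within -3 dB of the peak
    return f_peak / (above.max() - above.min())

f = np.linspace(600.0, 800.0, 20001)
mag = resonance_magnitude(f, f0=700.0, Q=50.0)
q_est = estimate_q(f, mag)   # recovers a value close to 50 on a fine grid
```

Unmodeled real-world losses broaden the measured curve, which is exactly why an experimental Q read off this way tends to come out lower than the simulated one.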

Designing a tunable resonant system is challenging because you need to precisely adjust parameters and ensure that the system achieves the necessary frequency range. Using COMSOL Multiphysics, Morozov achieved the octave frequency range in his tunable sound source design before performing a large number of water tests. He found that the physical sound source parameters reasonably matched the simulation.

This improved design can help scientists measure long-range sound propagation and temperature over large distances in the ocean, allowing them to study everything from small-scale temperature fluctuations to overarching oceanic climate change.

- Download the original paper: “Simulation and Test of Tunable Organ Pipe For Ocean Acoustic Tomography”
- Head this way for an overview of the top papers and posters from the COMSOL Conference 2016 Boston
- Learn more about organ pipe design in this blog post: Hear the Sound of an Organ Pipe Design with a Simulation App

Let’s say, for example, that you’re almost done drawing the geometry of a muffler model. It’s nothing too fancy, just a couple of cylinders representing the pipe and an extruded oval forming the main chamber. Then it strikes you: Some of the sections in the pipes and the baffles separating the chambers need to be perforated — and not just by placing a hole here or there. In this case, you could use the *Array* tool to draw an arbitrarily large collection of holes, but meshing and solving the resulting geometry would take too much time and memory.

*Detail of muffler geometry, including perforated sections consisting of a few thousand holes.*

Fear not — there are better solutions. The simplest and most convenient approach is to draw the contours of the perforated regions and apply an *Interior Perforated Plate* condition. We then supply properties, such as the hole diameter and plate thickness, and get a partially transparent surface that represents the perforate.

In many cases, the accuracy of this method meets our simulation needs. However, the accuracy can be compromised if we push the limits of validity for the underlying engineering relation. For example, holes that are very small, lie too close to each other, or have a noncircular shape produce less reliable results.
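A quick screening quantity when judging whether holes lie "too close to each other" is the perforate's porosity, i.e., its open-area fraction. The helper below assumes circular holes on a square grid; the dimensions are hypothetical, and the actual validity limits should always be taken from the documentation of the impedance model in use.

```python
import math

# Quick screening of a perforate layout (illustrative only; the hole
# diameter and pitch are assumed values, not parameters from the model).

def perforate_porosity(d_hole, pitch):
    """Open-area fraction for circular holes on a square grid of given pitch."""
    return math.pi * d_hole**2 / (4.0 * pitch**2)

sigma = perforate_porosity(d_hole=1e-3, pitch=3e-3)   # about 0.087
```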

*The same muffler geometry with the perforations now replaced by boundaries for applying a* Perforated Plate *or* Interior Impedance *condition.*

A more general alternative to the *Interior Perforated Plate* condition is the *Interior Impedance* condition. This allows us to specify a complex-valued transfer impedance, which represents the ratio between the pressure drop across the perforate and the normal particle velocity through it. The *Interior Perforated Plate* condition is a special predefined version of the *Interior Impedance* condition. The value of the impedance can be based on imported measurement data or an analytical expression. If we lack trustworthy measurements of the transfer impedance or a good analytical expression, the impedance can be based on numerical results. This method, as described below, makes it simple to model the impedance numerically.
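In the frequency domain, the quantities in that ratio are complex amplitudes, so the transfer impedance itself is complex: its real part represents resistive losses and its imaginary part the mass-like reactance of the air in the holes. A minimal sketch with made-up amplitudes:

```python
# The transfer impedance in the frequency domain is the complex ratio
# Z_t = (p_up - p_down) / v_n. The amplitudes below are made up for
# illustration; in practice they come from measurement or simulation.

def transfer_impedance(p_up, p_down, v_n):
    """Complex transfer impedance from the pressures on both sides and the normal velocity."""
    return (p_up - p_down) / v_n

Z = transfer_impedance(p_up=1.0 + 0.2j, p_down=0.4 - 0.1j, v_n=2e-3 + 1e-3j)
resistance, reactance = Z.real, Z.imag   # losses and mass-like reaction, respectively
```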

The figure below shows the considered perforate in such a model, described in our Transfer Impedance of a Perforate tutorial. The principle behind this model is rather simple: We send a plane wave towards the hole and then calculate the transfer impedance from the resulting pressure difference across it and the average velocity through it.

*Representation of the perforate with the model domain colored according to the local acoustic velocity field.*

The example model takes advantage of available symmetries, occupying only a quarter of the hole itself as well as half of the distance between holes. The holes in this case have a diameter of 1 mm, which is small enough that we must consider the thermoviscous boundary layers and losses. To account for these factors, we can use the *Thermoviscous Acoustics* interface.
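The relevant length scale for that judgment is the acoustic boundary-layer thickness. A quick estimate, using nominal room-temperature air properties (assumed values, not parameters from the tutorial):

```python
import math

# Viscous and thermal acoustic boundary-layer thicknesses in air,
#   delta_v = sqrt(2*mu/(rho*omega)),  delta_t = delta_v / sqrt(Pr).
# The property values are nominal room-temperature figures (assumed).

def boundary_layers(f, mu=1.81e-5, rho=1.2, Pr=0.71):
    omega = 2.0 * math.pi * f
    delta_v = math.sqrt(2.0 * mu / (rho * omega))
    return delta_v, delta_v / math.sqrt(Pr)

dv, dt = boundary_layers(1000.0)   # roughly 0.07 mm and 0.08 mm at 1 kHz
# For a 1 mm hole (0.5 mm radius), dv is ~14% of the radius, so
# thermoviscous losses clearly cannot be neglected.
```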

To send the plane wave, we utilize the *Background Acoustic Fields* node, a staple of the *Pressure Acoustics* interface that has recently been added to the *Thermoviscous Acoustics* interface. Its *Plane wave* option sends a pressure, velocity, and temperature distribution corresponding to a plane wave with viscous and thermal attenuation. We cap the model by adding perfectly matched layers above and below the model domain.

For even more on transfer impedances in perforated mufflers, we recommend the Thermoviscous Acoustic Impedance Lumping model. In addition to computing the transfer impedance of a perforate, this example demonstrates how to use the result in a model of an entire muffler, thereby summing up and driving home the message of this blog post.

- Try out the highlighted tutorial: Transfer Impedance of a Perforate
- Read our Thermoviscous Acoustics blog series to learn more about the theory of thermoviscous acoustics and how to model it in COMSOL Multiphysics
- Check out all of the blog posts in our Acoustics category

If you’ve ever flown on a commercial airplane, it’s likely that your flight was powered by a turbofan engine. Turbofan engines function by capturing air and sending part of it into a compressor. The compressed air then enters a combustion chamber, where it is ignited with fuel, and the released products propel the plane forward.

*Left: A turbofan engine schematic. Image by K. Aainsqatsi — Own Work. Licensed under CC BY-SA 3.0, via Wikimedia Commons. Right: A real-world turbofan engine. Image by Sanjay Acharya — Own Work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.*

In recent years, the design of turbofan engines has vastly improved, with a particular emphasis on noise reduction. To understand why, consider once again being a passenger on a flight — it can be rather unpleasant to listen to a loud engine. And for those people who live near airports, loud noise from planes as they land and take off can disturb sleep patterns. Reducing the noise generated by airplanes and their engines has therefore been a key point of focus in the aviation industry.

Reducing the excess fan noise that comes from turbofan aeroengines offers one potential solution to this issue. In the COMSOL Multiphysics® software, you can analyze and optimize the radiated noise from a turbofan engine to meet such goals. To learn more, let’s take a look at our simplified tutorial model of a jet pipe.

To analyze a turbofan aeroengine, we can focus on specific elements of its design. In this case, we’ll investigate the radiation of fan noise generated by a turbofan aeroengine’s annular duct. Let’s start by looking at our axisymmetric model geometry, which has a symmetry axis at the engine’s centerline. The model geometry mimics the outlet nozzle of the jet engine (see the schematic above). The gray area in the following schematic represents the interior of the engine in the nozzle. The model obviously uses a very simplified geometry and focuses on the physical principles and model setup.

*Turbofan motor geometry. The gray zone indicates the internal machinery of the engine. Air flows through the jet (M_{1}) as well as around the jet (M_{0}).*

In this model, air flows inside and outside of the duct as uniform mean flows with Mach numbers of M_{1} = 0.45 inside and M_{0} = 0.25 outside. This corresponds to the red and pink regions in the initial schematic of the turbofan engine. Since the air surrounding the engine moves more slowly than the air inside the jet, a vortex sheet (indicated by the dashed lines in the image above) forms in the jet stream, separating the two air flows along the extension of the duct’s wall. Using our model, we can calculate the near field on both sides of the vortex sheet.

When solving our jet pipe model, we used the *Linearized Potential Flow, Frequency Domain* interface in the Acoustics Module to describe acoustic waves within a moving fluid. It’s important to note, however, that the field equations are valid only when working with an irrotational velocity field. Since this is not the case across a vortex sheet, the sheet has a discontinuous velocity potential. To model such discontinuity, we applied the built-in Vortex Sheet boundary condition on the interior boundaries. As for the acoustic field within the duct, we described this element using the sum of the eigenmodes propagating within the duct and then radiating into free space. This is a common approach when setting up sources in this type of simulation.

For our study, we utilized a boundary mode analysis to find the inlet sources. The first step was to investigate circumferential wave numbers *(m = 4, 17, and 24)* and generate various eigenmodes that correspond to different radial mode numbers. The second step was to use three eigenmodes as incident waves inside the duct: *(m,n) = (4, 0), (17, 1), and (24, 1)*. The results indicate that the largest eigenvalue for a given *m* corresponds to the radial mode *n = 0*. The smallest eigenvalue, meanwhile, corresponds to *n = 1*.

*Plot of the eigenmodes featuring circumferential mode shapes m = 4, 17, and 24 and radial modes n = 0 and 1.*
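To build intuition for how the circumferential order m and radial order n determine which modes propagate, the sketch below computes cut-on frequencies for a hard-walled circular duct without flow. This is a textbook simplification, not the annular duct with mean flow used in the model, and the radius and sound speed are assumed values.

```python
import numpy as np
from scipy.special import jnp_zeros

# Cut-on frequencies of a hard-walled *circular* duct without flow, a
# simplified stand-in for the annular duct in the model:
#   f_mn = c * a'_mn / (2 * pi * R),
# where a'_mn is the n-th positive zero of the Bessel derivative Jm'.
# The radius and sound speed are assumed values.

def cuton_frequencies(m, n_radial, radius, c=343.0):
    """First n_radial cut-on frequencies for circumferential order m."""
    alpha = jnp_zeros(m, n_radial)   # zeros of Jm'
    return c * alpha / (2.0 * np.pi * radius)

f_m4 = cuton_frequencies(m=4, n_radial=2, radius=0.5)
# A higher radial order n means a larger Bessel zero and thus a higher
# cut-on frequency, consistent with n = 0 being the "easiest" mode to propagate.
```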

As part of our analysis, we also investigated the source velocity potential. As depicted in the plot below, we used a revolved geometry that included the circumferential wave number contribution to see its spatial shape.

*Model showing a boundary mode of (m, n) = (4, 0).*

To gain further confidence in the results of our analyses, we compared our simulation findings to the results presented in the paper “Theoretical Model for Sound Radiation from Annular Jet Pipes: Far- and Near-Field Solutions” (see Ref. 1 in the model documentation). The plots below, for instance, showcase the near-field pressure from different source eigenmodes in our simulation study. All of the results are solved for a Mach number of M_{1} = 0.45 inside the pipe and M_{0} = 0.25 outside of the pipe.

*From left to right: The near-field solution for (m, n) = (4, 0), (17, 1), and (24, 1).*

Further, we analyzed the near-field sound pressure level and the revolved geometry’s near-field pressure. The results from these studies are highlighted in the plots below, respectively.

*Left: Near-field sound pressure level for (m, n) = (24, 1). Right: Near-field pressure shown in the revolved geometry for (m, n) = (4, 0).*

By comparing our findings to the established literature highlighted above, we were able to further confirm the validity of our results. Such accuracy speaks to benefits of using COMSOL Multiphysics to help reduce noise pollution in turbofan engine designs and thus facilitate important advancements within the aviation industry.

- Try the Jet Pipe tutorial model that was presented here
- Read more aeroacoustics posts on the COMSOL Blog:

Brüel & Kjær, an industry leader in sound and vibration measurement for over 40 years, caters to customers like Airbus, NASA, Ferrari, and more. Their microphones range from working standard microphones to ones that are custom-made for their application. They cover a range of frequencies as well, from infrasonic to ultrasonic. For each desired application and frequency, there are multiple factors in the microphone’s design that affect its performance.

*A 4134 microphone including the protective grid covering the diaphragm.*

When sound enters a microphone, the sound pressure waves cause the diaphragm to vibrate, and these vibrations are then converted into an electrical signal. This process means that modeling a microphone requires accounting for mechanical, electrical, and acoustic phenomena in a tightly coupled setup, something that can only be captured with a multiphysics simulation tool. To see whether a microphone’s design is consistent and reliable, Brüel & Kjær use the COMSOL Multiphysics® software to test the precision of their devices and verify new designs.

The Brüel & Kjær Type 4134 condenser microphone, shown below, is a popular prototype for developing condenser microphones. Simulating condenser microphones requires modeling the diaphragm’s movement, membrane deformations, resonance frequency, and viscous and thermal acoustic losses. Due to a microphone’s small dimensions and large aspect ratios, the thermal and viscous losses affect their performance considerably. All of this results in the model needing to contain a lot of detail in order to be accurate.

*Geometry of the Type 4134 microphone showing the mesh used in the reduced sector geometry.*

To reduce the calculation time while maintaining accuracy, the researchers took advantage of model symmetry to compute thermal stress and resonance frequency. Sound pressure can be simulated with this method as well, but only when the sound is at a normal incidence to the diaphragm. When the sound wave is at a non-normal incidence, a nonsymmetric boundary condition can be used.

After verifying the simulation of the Type 4134 microphone, the researchers modeled other types with parameters that could not be observed in practice. For example, they studied how an air vent affects a microphone’s ability to measure low-frequency sounds. Simulation allowed Brüel & Kjær to test innovative designs and make changes as needed. They can even create custom devices for customers on a case-by-case basis.

In addition to improving their microphones, engineers at Brüel & Kjær also use multiphysics simulation to optimize and test their vibration transducer designs. They aim to create one with a high built-in resistance to withstand harsh environments. To accomplish this, the engineers must create a device that doesn’t have a resonant frequency in the vibration range that it would measure. Resonating in the desired vibration range would compromise the accuracy of the measurement.

*Simulation results of a suspended piezoelectric vibration transducer.*

To design a device that produces a flat response, researchers tried different combinations of materials and geometry. By adding a mechanical filter, they designed a vibration transducer with an error range of no more than 10 to 12%, which is well within the acceptable limits.

No device can be perfect, but simulation provides a way to get as close to perfect as possible. Engineers at Brüel & Kjær can quickly and efficiently test new designs in different scenarios, getting results that they couldn’t determine experimentally. This provides them with unique knowledge to help them create innovative designs and stay ahead of the competition.

- Download models similar to those shown here:
- Want to learn how to model microphones and transducers in COMSOL Multiphysics? Check out these blog posts:
- See how other people are using COMSOL Multiphysics in
*COMSOL News*2015

Before a new motorcycle design is ready to hit the road, it must be optimized for many different factors, one of which is noise reduction. Not only do individual states have specific regulations on motorcycle noise, but many potential customers also prefer quieter motorcycles.

The first step toward designing a quieter product is finding the major contributors to noise generation. In motorcycles, this is the engine, which lacks the enclosed chamber found in cars to help contain engine noise. Furthermore, a motorcycle engine consists of many individual parts that generate noise, including the intake, exhaust, piston slap, gear whine, valve train, and combustion chamber.

*A motorcycle with an exposed engine. Image by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference presentation submission.*

One specific cause of noise in a motorcycle engine is the combustion chamber. A quick rise of combustion pressure in a motorcycle’s combustion chamber causes structural vibrations. Such combustion-induced vibrations radiate noise. In certain frequency ranges, an engine’s radiated noise under combustion excitations is dominant when compared to other sources of engine noise. As such, this is an important element to study when aiming to reduce the noise generated by motorcycle engines.

Instead of solely physical testing, which can be costly and time-intensive, a team from Mahindra Two Wheelers, Ltd. turned to acoustics modeling to investigate how an engine’s structure contributes to noise radiation. Using the Acoustics Module, these researchers performed an acoustic-radiation analysis of a single-cylinder internal combustion (IC) engine under combustion load. Their goal was to identify which areas of the engine create the most noise, and to implement structural changes to reduce the noise produced in these locations. We’ll learn more about their research, which was presented at the COMSOL Conference 2015 Pune, in the next section.

To begin their acoustic analysis, the researchers placed the engine skin (the outer surface of the engine) in a computational domain surrounded by a Cartesian perfectly matched layer (PML), which dampens all outgoing waves with little or no reflection. They also applied a Normal Acceleration boundary condition to the boundaries of the engine skin, as seen in the image below and to the right.

*Engine skin surrounded by a PML domain (left) and the boundaries of the engine skin (right). Images by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference paper submission.*

Additionally, to map the nodal accelerations onto the engine skin mesh, the researchers interpolated them with the *Interpolation* function feature in the COMSOL Multiphysics® software. Since the acoustics model must be solved for each of its 40 frequency steps, the researchers automated the process with Java® code, substantially reducing the solution time. They also performed their acoustic analysis in Batch mode.

*A surface acceleration plot created using interpolation. Image by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference presentation submission.*
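The same automation idea can be sketched outside of Java as well. In the snippet below, both the frequency values and the model file name are hypothetical, and the `comsol batch` flag names are assumptions patterned on COMSOL's command-line interface; check the documentation of the installed version before use.

```python
# Sketch of automating a per-frequency batch run. The "comsol batch" flag
# names (-inputfile, -outputfile, -pname, -plist) are assumptions based on
# COMSOL's command-line interface and should be checked against the installed
# version's documentation; the model file name and frequencies are hypothetical.

freqs = [800.0 + i * 30.0 for i in range(40)]   # 40 frequency steps (assumed spacing)

def batch_command(freq, model="engine_noise.mph"):
    return (f"comsol batch -inputfile {model} "
            f"-outputfile out_{freq:.0f}Hz.mph "
            f"-pname freq -plist {freq}")

commands = [batch_command(f) for f in freqs]
# Each command could then be dispatched with subprocess.run or a job scheduler.
```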

Let’s now turn to the team’s sound pressure level (SPL) studies. In their physical experiments, the researchers measured motorcycle engine noise and SPL in a quiet room. During these tests, they placed a microphone in front of the engine’s side cover, as shown in the picture below.

*The setup of the physical noise experiment. Image by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference presentation submission.*

The microphone was located outside of their computational domain boundaries. This led the researchers to use far-field calculations and the Helmholtz-Kirchhoff (H-K) integral to compute the pressure and acoustic fields outside of the computational domain. To ensure that their far-field study was accurate, they used a thin-layer boundary mesh.

For their SPL simulations, the researchers noted that the motorcycle engine’s noise radiation under combustion load is dominant in a frequency range of 800 Hz to 2000 Hz. Due to this, they analyzed the one-third octave band data of the engine noise signal from the same range. Additionally, the research team generated a sound intensity plot of their engine to better understand what areas radiate the most noise. These acoustic analyses successfully highlighted engine areas with a high noise level, as we can see in the SPL plot below.

*The plot of the surface SPL at 1250 Hz. Image by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference paper submission.*
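For reference, the 800 Hz to 2000 Hz range analyzed here spans five standard one-third-octave bands. The sketch below generates the exact base-2 band centers, f_c = 1000 * 2^(n/3); standards label these with rounded preferred values (800, 1000, 1250, 1600, and 2000 Hz).

```python
# Nominal one-third-octave band center frequencies, f_c = 1000 * 2**(n/3).
# Standards label the bands with rounded "preferred" values; the exact
# base-2 centers computed here differ slightly (e.g., ~1259.9 Hz for the
# band labeled 1250 Hz).

def third_octave_centers(f_lo, f_hi):
    """Base-2 one-third-octave centers whose exact value lies in [f_lo, f_hi]."""
    return [1000.0 * 2.0 ** (n / 3.0)
            for n in range(-10, 11)
            if f_lo <= 1000.0 * 2.0 ** (n / 3.0) <= f_hi]

centers = third_octave_centers(790.0, 2010.0)   # five bands covering 800-2000 Hz
```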

The knowledge gained from these simulation studies enabled the researchers to improve the structural design of their motorcycle engine. By reducing noise in the development phase, they can save both time and money when prototyping their product. For this specific engine design, the researchers at Mahindra Two Wheelers increased rib height and wall thickness and strengthened the mounting location. By using these design modifications, they reduced the engine’s overall SPL by 3 dBA.

*Motorcycle engine design changes in the cylinder head and block. Image by Ulhas Mohite and Niket Bhatia and taken from their COMSOL Conference paper submission.*
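To put the reported 3 dBA reduction in perspective, decibel arithmetic converts level changes into power ratios: a drop of 3 dB corresponds to roughly halving the radiated sound power.

```python
# Converting a level change in dB into a sound power ratio: 10**(dB/10).
# A -3 dB change gives a ratio of about 0.50, i.e., half the radiated power.

def power_ratio(delta_db):
    return 10.0 ** (delta_db / 10.0)

ratio = power_ratio(-3.0)   # about 0.501
```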

- Read the research: “Prediction and Control of Motorcycle Engine Noise under Combustion Load“
- Want to learn about using simulation to study automobiles? Check out these blog posts:
- Read more about acoustics modeling on the COMSOL Blog:

*Oracle and Java are registered trademarks of Oracle and/or its affiliates.*

When driving into work in the morning, many of you may tune into a local radio station to get caught up on the latest news. Or, for those of you who take public transit, your attention may be geared toward listening to announcements for the arrival of the next train or bus. The modes of transportation may vary, but there is a shared characteristic between the sounds that you hear in both cases: They are influenced not only by the design of the sound systems, but also by how the acoustic waves interact with surrounding surfaces.

Just as light waves can reflect off a surface, sound waves can also exhibit the same behavior. The degree to which waves are reflected and the amount of absorption and attenuation of the reflected waves depends on the material, construction, and shape of the various surfaces with which they come into contact. These acoustic reflections are an important point of analysis in various engineering disciplines (i.e., soundproofing, room acoustics, and SONAR applications). Understanding the reflections offers insight into how to further optimize the technology based on its surrounding environment.

*How SONAR technology, like that shown above, performs is influenced by its acoustic reflections off surrounding surfaces. Image by Jean-Michel Roche. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

Take SONAR technology, for instance, which is one of many underwater acoustics applications. Using sound propagation, this technology helps to enable the detection of objects that are underwater as well as facilitate communications with other vessels. To better understand reflection phenomena, and thus advance and optimize such techniques, you can analyze the acoustic properties at the bottom of the measured or scanned body of water. COMSOL Multiphysics provides you with the tools to do so.

In the Application Gallery, you will find a 2D model designed to calculate the reflection coefficient of acoustic waves off a water-sediment surface. In this case, plane homogenous waves are incident from a fluid domain (water), which we model using the classical pressure acoustics that solve the Helmholtz equation. These waves are then reflected and transmitted at a water-sediment interface. To model the sediment domain, we apply Biot’s theory, which solves for the pressure and displacement field of the porous matrix.

As the figure below highlights, Floquet periodic conditions (also known as Bloch conditions) are applied on the acoustic and porous domains. The sizes of the domains are set to depend on the wavelength in the water — in other words, the frequency. Perfectly matched layers (PMLs), meanwhile, are used to truncate the computational domain for the fluid and porous domains.

*A sketch of the water-sediment system.*

With this model in place, it is possible to plot the reflection and absorption coefficients for various combinations of the angle of incidence and frequency. The first plot below, for instance, showcases the reflection coefficient as a function of the angle of incidence for different frequencies. The second plot shows the reflection coefficient as a function of frequency for different angles of incidence.

*Plots illustrating the reflection coefficient as compared to the angle of incidence (left) and as compared to the driving frequency (right).*
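As a simplified analogue of what the model computes, treating the sediment as an equivalent fluid rather than a Biot poroelastic medium, the plane-wave reflection coefficient at a fluid-fluid interface follows from the impedance mismatch and Snell's law. All material values below are assumed nominal figures, not parameters from the model.

```python
import cmath
import math

# Plane-wave reflection coefficient at a fluid-fluid interface, a simplified
# stand-in for the full Biot poroelastic sediment model:
#   R = (Z2*cos(t1) - Z1*cos(t2)) / (Z2*cos(t1) + Z1*cos(t2)),  Zi = rho_i * c_i,
# with Snell's law sin(t2) = (c2/c1)*sin(t1). All material values are assumed.

def reflection_coefficient(theta1, rho1, c1, rho2, c2):
    Z1, Z2 = rho1 * c1, rho2 * c2
    sin_t2 = (c2 / c1) * math.sin(theta1)
    cos_t2 = cmath.sqrt(1.0 - sin_t2**2)   # becomes imaginary past the critical angle
    return (Z2 * math.cos(theta1) - Z1 * cos_t2) / (Z2 * math.cos(theta1) + Z1 * cos_t2)

# Water over a denser, faster "fluid sediment" (assumed nominal values):
R0 = reflection_coefficient(0.0, rho1=1000.0, c1=1480.0, rho2=1800.0, c2=1700.0)
# Beyond the critical angle, |R| -> 1 (total internal reflection):
Rc = reflection_coefficient(math.radians(80.0), 1000.0, 1480.0, 1800.0, 1700.0)
```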

Now, imagine if you could make the underlying physics of this model available in an easy-to-use computational tool that could be used by those with little knowledge of the simulation world. The Application Builder, as we’ll demonstrate next, enables such possibilities.

The Acoustic Reflection Analyzer for a Water-Sediment Interface is based on the previously discussed model, hiding its underlying complexity in a simplified and intuitive interface (shown below). The example that we present here is meant to inspire you in your own app-building processes, tailoring your app’s design to include those elements that are key to your particular analysis.

*The user interface (UI) for the Acoustic Reflection Analyzer for a Water-Sediment Interface.*

As you can see, the app features a series of input parameters that let users easily modify the material properties of the water and the sediment. For the fluid domain (water), such properties include the speed of sound in the fluid, fluid density, and fluid viscosity. For the porous material (sediment), the porosity, pore size parameter, and permeability can also be adjusted, among other properties. By selecting the *Reset to Default Values* button in the ribbon, users can easily revert the input parameters back to their original settings and start fresh in their simulation studies.

The frequencies and angles of incidence to simulate can be easily specified in the *Simulation* section. The *Manual angle sweep* options enable the selection of specific angles of incidence for the incident pressure wave, while the *Complete angle sweep* options sweep a specified number of angles between normal incidence (0°) and grazing incidence (90°). App users can also choose the *Driving frequencies* at which the simulation is conducted.

Looking to the right side of the app’s interface, you’ll find the *Results* section. Here, there are five different tabs that include a series of simulation plots:

- *Water-Sediment Interaction*: reflected pressure, incident pressure, total pressure, or displacement in the porous matrix for the specified driving frequency and angle of incidence
- *R vs theta0*: absolute value of the reflection coefficient as a function of the angle of incidence
- *R vs f0*: absolute value of the reflection coefficient as a function of the driving frequency
- *Absorption vs theta0*: absolute value of the absorption coefficient as a function of the angle of incidence
- *Random Incidence*: random incidence absorption coefficient for each selected frequency (this is an extension of the underlying model)
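The random incidence absorption coefficient in the last tab is commonly defined by the Paris formula, alpha_rand = 2 * Integral from 0 to pi/2 of alpha(theta) * sin(theta) * cos(theta) dtheta. The sketch below evaluates it with a simple midpoint rule for a placeholder angle dependence; the app's exact definition may differ.

```python
import math

# Paris formula for the random-incidence absorption coefficient,
#   alpha_rand = 2 * Int_0^{pi/2} alpha(theta) sin(theta) cos(theta) dtheta,
# evaluated with a midpoint rule. The angle-dependent alpha passed in below
# is a placeholder, not the app's actual absorption curve.

def random_incidence(alpha_of_theta, n=2000):
    total, h = 0.0, (math.pi / 2.0) / n
    for i in range(n):
        t = (i + 0.5) * h
        total += alpha_of_theta(t) * math.sin(t) * math.cos(t) * h
    return 2.0 * total

# Sanity check: for angle-independent absorption, the formula returns alpha itself.
alpha_rand = random_incidence(lambda t: 0.3)
```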

With the Application Builder, you are offered the best of both worlds — a tool that can incorporate complex physics into a user-friendly configuration. It is our hope that the app presented here, as well as the other examples available in our Application Gallery, will encourage you to begin creating apps of your own and experience the many ways in which they can optimize your design workflow.

- Download the Acoustic Reflection Analyzer for a Water-Sediment Interface demo app
- Our Application Gallery features a number of other demo apps pertaining to acoustics. Browse some additional examples:
- Watch this video for a quick introduction on how to turn your COMSOL Multiphysics models into simulation apps

Nanobots that reduce ocean pollution, solar cells that operate in the rain, and devices that transmit wireless data up to *ten* times faster. What binds these developing technologies together, aside from their innovative nature, is the use of a revolutionary material that has been a recurring topic of discussion on the COMSOL Blog. Its name? Graphene.

Recognizing the advantages of this strong, lightweight material, more and more industries have begun to embrace graphene’s potential use for a variety of applications. Take biosensors, for instance. Because of its high electrical conductivity and large surface area, graphene is an optimal material selection for these devices. This can be attributed to the fact that a faster transfer of electrons prompts greater accuracy and selectivity in the detection of biomolecules.

*Glucose monitoring is one application of biosensors. Image by David-i98 (talk) (Uploads) — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

In an effort to help bring new sophistication and reliability to biosensors for therapeutic solutions and personalized medical applications, a team from the Polytechnic University of Bucharest designed and analyzed a 3D multilayered graphene biosensor with COMSOL Multiphysics. Let’s have a look at their research, which was presented at the COMSOL Conference 2015 Grenoble.

As mentioned above, the research presented here centers on a multilayered biosensor design. Single-layer graphene is known to be much more reactive than multilayered graphene structures, and the edges of the material are more reactive than its surface. Since graphene is otherwise rather inert, a multilayered graphene structure is an ideal option for biosensors.

In addition to the design of the biosensor itself, it is also important to consider the interface with which it will interact. In this case, that interface is human skin. Because it is the part of the human body with the largest surface area and because its responses to external and internal stimuli vary, human skin is a good environment through which to collect physical and chemical data.

Before developing their biosensor models, the team considered the following interfaces:

- Human skin to the PVA hydrogel
- PVA hydrogel to the graphene-based structure
- Graphene-based structure to the electrodes
- Graphene/electrodes to the silicon substrate

This series of interfaces led to the development of two biosensor device models that could describe the influence of process variables as well as environmental stimuli. The first of these models is a single-layer graphene/graphene-oxide sensor device with two electrodes. The second model is a multilayered sensor device with four electrodes. In the latter case, both the inclusion and the exclusion of the graphene composite structure were studied to further differentiate between the graphene responses.

*A multilayered graphene biosensor device. Image by E. Lacatus, G.C. Alecu, and A. Tudor and taken from their paper “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”.*

With their models in place, the researchers ran a series of simulations in COMSOL Multiphysics to analyze the biosensing capabilities of both model configurations. Such analyses included measuring the temperature distribution at the interface of the device as well as the electrical potential (shown in the following set of figures). The findings revealed that the graphene-based structure possessed a sensing ability regardless of the specific design.

*Temperature distribution plots at the interface for the device with two electrodes (left) and the device with four electrodes (right). Image by E. Lacatus, G.C. Alecu, and A. Tudor and taken from their paper “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”.*

*Electric potential plots on the interface for the device with two electrodes (left) and the device with four electrodes (right). Image by E. Lacatus, G.C. Alecu, and A. Tudor and taken from their paper “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”.*

After identifying these sensing capabilities, the team tested a number of different biosensor device structures to determine the optimal response for the PVA hydrogel on the sheets of graphene and for the protein-functionalized graphene biosensors. Simulation proved to be a useful tool for analyzing such properties. The simulation results shown below, for instance, highlight the spatial distribution of flux energy on the graphene biosensor as well as the interface charge distributions for the device with four electrodes.

*Simulations corresponding to the spatial distribution of flux energy on the graphene biosensor (left) and the interface charge distributions (right) for the device with four electrodes. Image by E. Lacatus, G.C. Alecu, and A. Tudor and taken from their paper “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”.*

The researchers extended their studies with analyses designed to show how different environmental stimuli influence the biosensor upon reaching its active surface. Using the Acoustics Module, for example, the researchers were able to define the response of the interface to variations in acoustic pressure. The results below illustrate the effects of the acoustic stimuli on the graphene sensing structure for the device with four electrodes.

*Analyzing the impact of acoustic stimuli on the device with four electrodes. Image by E. Lacatus, G.C. Alecu, and A. Tudor and taken from their presentation “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”.*

With COMSOL Multiphysics, the researchers could successfully identify the relevant properties of the graphene biosensing structures, all while relating them to the complex interface of the human skin. This simulation research, combined with the models themselves, provided valuable design solutions in the development of graphene-based biosensors.

- Download the full paper here: “Models for Simulation Based Selection of 3D Multilayered Graphene Biosensors”
- Explore further applications of COMSOL Multiphysics in the design and optimization of graphene-based structures on the COMSOL Blog
- Modeling a biosensor? Read this blog post to see how you can easily turn your model into an app with the Application Builder in COMSOL Multiphysics