When inside a room — a conference room, concert hall, or even a car — everyone has an opinion about whether the “acoustics” are good or bad. Room acoustics studies this notion of sound quality in a quantitative way; in short, it is concerned with assessing the acoustics of enclosed spaces. The Acoustics Module of COMSOL Multiphysics has several tools to simulate the acoustics of rooms and other confined spaces, and I will present those here.
When sound is emitted inside a room, a listener will perceive the sound as a combination of direct sound from the source as well as sound reflected off the walls. At the walls, the sound is reflected, absorbed, and scattered.
Since all of these processes are frequency dependent, a poorly designed meeting room can, for example, be highly reverberant in a frequency band that is important for speech. The room could also have a strong modal behavior (standing waves) at certain critical frequencies that are easily excited. These are things you want to avoid and be able to predict when designing a room.
Architects and civil engineers want to control the sound field by placing absorbers, diffusers, and reflectors in appropriate locations. In concert halls, you want to maximize the listening experience where the audience is located. In office spaces, you want to avoid anything that can seem noisy and disturb the concentration of employees. In classrooms and lecture halls, you want to ensure clear perception of speech. The sound environment is important for various reasons, which is why there are national standards and regulations for the sound environment in many cases.
Refurbishing a badly designed room can be very expensive, so you do not want to rely only on measurements on scale models or measurements done after the fact. Modeling the room acoustic behavior beforehand is essential in order to optimize and perfect the design. Simulation models and measurements need to relate architectural aspects (geometry) to subjective observations using physical measures (metrics). This is done by calculating a wide range of room acoustic measures, such as the reverberation time, early decay time, clarity, and many other standardized parameters.
The modeling approach you want to adopt depends on the studied frequency (the wavelength compared to geometrical features of the room). In the Acoustics Module of the COMSOL suite of FEA software, we essentially offer three approaches packaged in three physics interfaces. The Pressure Acoustics interface can model the modal behavior in rooms. The Ray Acoustics interface and the Acoustic Diffusion Equation interface cover the high frequency limit or reverberant behavior (geometrical acoustics). I discuss the interfaces and their applicability in the sections below.
Animation of the ray front positions as the rays are released inside a small concert hall. The color scale gives an impression of the ray intensity on a logarithmic scale.
As mentioned above, room acoustics is typically divided into three categories, depending on the studied frequency. Or, more specifically, depending on the wavelength compared to the characteristic geometric features of the room in question.
In the low-frequency range, the room resonances dominate. This is known as the modal region. At the other end of the scale, in the high-frequency limit, the wavelength becomes smaller than the characteristic geometrical features of the room. Here, we deal with the reverberant region or the geometrical acoustics limit. Between the modal region and the high-frequency limit, there is a so-called transition zone. Note that there is no clear-cut definition of this zone.
Classical room acoustics theory provides some tools that enable a back-of-the-envelope assessment of the behavior of a room. For a given room, the Schroeder frequency, f_\textrm{s}, predicts the limiting frequency between the modal behavior and the high-frequency reverberant behavior of the room.
The Schroeder frequency is given by:
f_\textrm{s} = 2000 \sqrt{\frac{T_{60}}{V}} \qquad (1)
where V is the room volume (in m^{3}) and T_{60} is the reverberation time (in s), which gives f_\textrm{s} in Hz.
The equation is based on the criterion (suggested by Schroeder) that at the limit, three eigenfrequencies fall into one resonance half-width. The reverberation time (or decay time), T_{60}, is the time required for the sound pressure level (created by an impulse source) to decay 60 dB. A first simple approximate measure of the reverberation time is given by the well-established Sabine formula:
T_{60} = \frac{55.3\, V}{c A}, \qquad A = \sum_i S_i \alpha_i \qquad (2)
Here, c is the speed of sound and A is the total absorption, where S_i and \alpha_i are the surface area and absorption of the i^{th} surface, respectively.
This is possibly the best-known formula in room acoustics. The equation stems from a classical statistical room acoustics analysis assuming a pure diffuse sound field. In a diffuse sound field, the sound pressure level is uniform and the reflected sound dominates. This phenomenon is also known as a reverberant sound field. In such a field, the damping constant (related to the overall absorption) can be approximated and relates to the reverberation time.
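These two classical estimates are easy to evaluate as a back-of-the-envelope check. The sketch below computes the Schroeder frequency for a room roughly matching the small concert hall example discussed later (V = 430 m^{3}, T_{60} ≈ 1.3 s); the surface list passed to the Sabine formula is purely illustrative and not taken from any model.

```python
import math

def sabine_t60(volume, surfaces, c=343.0):
    """Sabine estimate T60 = 55.3*V/(c*A), with total absorption
    A = sum of S_i * alpha_i over all wall surfaces."""
    absorption = sum(s * a for s, a in surfaces)
    return 55.3 * volume / (c * absorption)

def schroeder_frequency(t60, volume):
    """Crossover between modal and reverberant behavior,
    f_s = 2000*sqrt(T60/V), with T60 in s and V in m^3."""
    return 2000.0 * math.sqrt(t60 / volume)

# Values roughly matching the small concert hall discussed in the text
V, t60 = 430.0, 1.3
fs = schroeder_frequency(t60, V)
print(f"Schroeder frequency: {fs:.0f} Hz")  # about 110 Hz

# Purely illustrative surface list: (area in m^2, absorption coefficient)
walls = [(300.0, 0.1), (150.0, 0.3), (100.0, 0.6)]
print(f"Sabine T60: {sabine_t60(V, walls):.2f} s")
```

Note that such simple estimates assume a diffuse field; the small differences from the simulated values in the concert hall model are expected.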
The modal behavior of rooms and enclosed spaces is best analyzed by solving the Helmholtz equation or the scalar wave equation using the finite element method. In the reverberant or high-frequency limit at frequencies above the Schroeder frequency, you may utilize two different approaches. Your choice depends on the assumptions that can be made and the desired level of detail.
The Acoustic Diffusion Equation interface may be used in the purely diffuse sound field limit, neglecting all direct sound. This is a fast method to assess reverberation times and sound pressure level distributions in systems of coupled rooms. The ray tracing capabilities of the Ray Acoustics interface provide a much more detailed picture including the direct sound and early reflections. With this interface, you also have the ability to reconstruct an impulse response.
Up to the Schroeder frequency, the modal behavior of rooms is important: standing waves dominate over the reverberant field. Inside a car, the transition may lie between several hundred hertz and 1000 Hz. In a small office, it may be up to 200 Hz, while in large concert halls, the transition is typically below 50 Hz. In the small concert hall model shown below, the Schroeder frequency is 115 Hz (the reverberation time is about 1.3 s and the volume is 430 m^{3}). The modal behavior is important for subwoofer systems in cinemas, for instance.
The modal behavior as well as the room eigenfrequencies are best analyzed using the Pressure Acoustics interface. A frequency domain study can reproduce a transfer function for the bass system. You can also use it to analyze dead regions or find eigenfrequencies. A transient study is interesting when, for example, looking at bass build-up transients inside a car cabin.
Models of interest here are:
Pressure distribution for the first eigenmode inside a small room. From the Eigenmodes of a Room model.
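For an idealized rectangular room with rigid walls, the eigenfrequencies below the transition zone can even be listed analytically via f = (c/2)·sqrt((n_x/L_x)^2 + (n_y/L_y)^2 + (n_z/L_z)^2), which is a useful sanity check for an FEM eigenfrequency study. The sketch below uses hypothetical office dimensions (5 m × 4 m × 2.5 m), not the geometry of the model above.

```python
import itertools
import math

def room_modes(Lx, Ly, Lz, fmax, c=343.0):
    """Eigenfrequencies of an ideal rigid-walled rectangular room:
    f = (c/2)*sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    nmax = int(2.0 * fmax * max(Lx, Ly, Lz) / c) + 1
    modes = []
    for nx, ny, nz in itertools.product(range(nmax + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial constant-pressure mode
        f = 0.5 * c * math.sqrt((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)
        if f <= fmax:
            modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Hypothetical small office, 5 m x 4 m x 2.5 m
modes = room_modes(5.0, 4.0, 2.5, fmax=100.0)
f0, idx = modes[0]
print(f"lowest mode: {f0:.1f} Hz, (nx, ny, nz) = {idx}")  # axial along 5 m
```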
If you want to compute the trajectories, phase, and intensity of acoustic rays, you should choose the Ray Acoustics interface. Ray acoustics is a good choice in the high-frequency limit, where the acoustic wavelength is smaller than the characteristic geometric features. The interface is not limited to modeling acoustics in closed spaces, like rooms and concert halls, but can also be used in outdoor environments. At exterior boundaries, you can assign various wall conditions, such as combinations of specular and diffuse reflections. Both the impedance and the absorption at these boundaries may depend on the frequency, intensity, and direction of the incident rays.
Below are two figures from the Small Concert Hall model found in the Model Gallery for the Acoustics Module.
The figure to the left depicts the ray paths for a selected number of rays emitted from a source located on the small stage. The figure to the right depicts the energy response as measured in the center of the room. The dots represent the simulated ray response (5,000 rays are released) and the green and red curves represent decay curves based on simple Sabine-like estimates of the reverberation time T_{60}. The cyan curve is a so-called Schroeder integration of the energy response, yielding the energy-decay curve. All four agree well when the response is measured in the center of the room.
Left: Ray path for a selected number of rays emitted from a source located on a small stage. Right: The energy impulse response compared with two simple decay measures and the energy decay curve.
With the Ray Acoustics interface, the response can be measured at any point in the concert hall. The properties of absorbers and diffusers can be both frequency-dependent and angle-of-incidence dependent. Thus, the listening environment can be well described, analyzed, and optimized. The simple estimates are not accurate everywhere in a room and not for complex room geometries.
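The Schroeder integration mentioned above is straightforward to reproduce outside the simulation environment: the decay curve at time t is the backward-integrated tail of the energy impulse response. The sketch below applies it to a synthetic exponential response with a known T_{60}; the line-fit limits of -5 to -25 dB are a common T20-style convention, not a setting taken from the model.

```python
import numpy as np

def schroeder_decay(energy_response, dt):
    """Backward (Schroeder) integration of an energy impulse response,
    returning the energy decay curve in dB relative to its start."""
    tail = np.cumsum(energy_response[::-1])[::-1] * dt
    return 10.0 * np.log10(tail / tail[0])

def t60_from_decay(decay_db, dt, lo=-5.0, hi=-25.0):
    """Fit a line to the decay curve between lo and hi dB and
    extrapolate the slope to a 60 dB drop."""
    t = np.arange(len(decay_db)) * dt
    mask = (decay_db <= lo) & (decay_db >= hi)
    slope = np.polyfit(t[mask], decay_db[mask], 1)[0]
    return -60.0 / slope

# Synthetic exponential energy response with a known T60 of 1.3 s
dt, t60_true = 1e-3, 1.3
t = np.arange(0.0, 2.0, dt)
energy = np.exp(-t * np.log(1e6) / t60_true)  # 60 dB energy decay over t60_true
decay = schroeder_decay(energy, dt)
print(f"estimated T60: {t60_from_decay(decay, dt):.2f} s")  # close to 1.3 s
```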
The Acoustic Diffusion Equation interface solves a diffusion equation for the acoustic energy density distribution in a room. The method is also sometimes referred to as energy finite elements. This method is an extension of the principles used to calculate the Sabine reverberation time in Equation 2. This particular interface is applicable for high-frequency acoustics when the acoustic fields are diffuse. The diffusion of the acoustic energy density depends on the mean free acoustic path and thus on the room geometry. Absorption may be applied at walls and a transmission loss may be applied when coupling rooms. Increased diffusion due to room fitting can be added. Material properties and sources may be specified in frequency bands.
Compared to a ray acoustics simulation, this interface does not include any phase information, direct sound, or early reflections. The interface supports stationary studies for modeling a steady-state sound energy or sound pressure level distribution. You can use a time-dependent study to determine energy decay curves and reverberation times. You can use an eigenvalue study to determine the reverberation time of coupled and uncoupled rooms. The eigenvalue is directly related to the exponential decay time and thus to the reverberation time.
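The relation between the eigenvalue and the reverberation time follows directly from the exponential decay: if the energy density decays as exp(-λt), the level has dropped 60 dB when λt = ln(10^6). A minimal sketch of this conversion:

```python
import math

def t60_from_eigenvalue(lam):
    """If the acoustic energy density decays as exp(-lam*t), the level
    10*log10(w/w0) reaches -60 dB when lam*t = ln(1e6), so
    T60 = ln(1e6)/lam (about 13.8/lam)."""
    return math.log(1e6) / lam

# Example: an eigenvalue of 10.6 1/s corresponds to a T60 of about 1.3 s
print(f"T60 = {t60_from_eigenvalue(10.6):.2f} s")
```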
We utilized all three study types in the One-Family House Acoustics model, which studies the acoustics in a single-family home with a noise source located in the living room.
Energy flux and SPL distribution inside a two-story single-family house.
Check back on the COMSOL blog this spring for specific blog posts about the Acoustic Diffusion Equation and Ray Acoustics physics interfaces.
In the meantime, here is a list of suggested reading material:
We often get requests of the type “I would like to just enter my measured stress-strain curve directly into COMSOL Multiphysics”. In this new blog series, we will take a detailed look at how you can process and interpret material data from tests. We will also explain why it is not a good idea to just enter a simple stress-strain curve as input.
All material models are mathematical approximations of true physical behavior. However, material models cannot always be derived from physical principles, such as mass conservation or the equations of equilibrium. They are by nature phenomenological and based on measurements. The laws of physics do, however, enforce limits on the mathematical structure of material models and on the possible values of material properties.
It is well known, even from everyday life, that different materials exhibit completely different behavior. A material can be very brittle, like glass, or very elastic, like rubber. Choosing a material model is not only determined by the material as such, but also by the operating conditions. If you immerse a piece of rubber into liquid nitrogen, it will become as brittle as glass — a popular educational experiment. Also, if you heat up glass, it will start to creep and show viscoelastic behavior.
When analyzing structural mechanics behavior in COMSOL Multiphysics, you can choose between about 50 built-in material models, many of them featuring several options for their settings. You can also set up and define your own material models, or combine several of the material models to, for example, describe a material exhibiting both creep and plasticity at the same time.
Some of the available classes of materials are:
Without going into details about how you should actually come to the correct decision about an appropriate material model, here are some questions you should ask yourself before you start modeling:
Based on these considerations, you can then make a choice of a suitable material model. Determining the correct parameters to use in this material model will then be more or less difficult.
On one end of the spectrum, there are common materials (such as steel at room temperature) where many engineers know the material data by heart (E = 210 GPa, ν = 0.3, ρ = 7850 kg/m^{3}) and where data is easily found in the literature or through a simple web search.
On the other end of the spectrum, finding the high temperature creep data for a cast iron to be used in an exhaust manifold can be a major project in itself. Many tests at different load levels and at different temperatures are required. A complete test program for this may take half a year and have a price tag of several hundred thousand dollars.
Tensile testing equipment. “Inspekt desk 50kN IMGP8563” by Smial. Original uploader was Smial at de.wikipedia; transferred to Commons by User:Smial using CommonsHelper. (Original text: eigenes Foto, i.e., own photo.) Licensed under CC BY-SA 2.0 de via Wikimedia Commons.
Before starting your simulation with COMSOL Multiphysics, it is not enough to import the geometry of the specimen, select the material model, and apply the loads and other boundary conditions; you should also provide the parameters for the chosen material model in the operating stress-strain and temperature range. These parameters are typically obtained from one or more tests.
The most fundamental test is the uniaxial tensile test. This is also what most engineers in daily life refer to when they state that they have a “Stress-Strain curve.” If you look at the list of questions above, it is evident that even this seemingly simple test can leave many loose ends:
Some materials, like concrete, have little or no capacity to carry loads in tension. Here, the uniaxial compression test is the most fundamental test. It has many properties in common with the tensile test.
Other materials, like steel and rubber, can also be tested in compression. It is actually a good idea to do so, as we will demonstrate later in this blog post.
When using only uniaxial testing (whether in tension, compression, or both), you cannot, however, obtain the full picture of a material's properties. You will need to combine it with additional assumptions, such as isotropy or incompressibility. For many materials, though, such assumptions are well justified by experience.
We have illustrated how the range of a test will affect your conception of the material behavior in the animation below.
It is significantly more difficult to design testing equipment that can create a homogeneous biaxial stress state. Biaxial testing is often used for materials that are available only in thin sheets, like fabrics, for instance. By controlling the ratio between the loads in two perpendicular directions, it is possible to extract much more information than from a uniaxial test.
For soils, which generally need to be confined, triaxial compression is a common test. Triaxial compression tests could in principle be applied to a block of any material, but the testing equipment is difficult to design. The low compressibility of most solid materials also makes triaxial testing less attractive, since the measured displacements will be small when the material is compressed in all directions.
The Triaxial Test model shows a finite element model of a triaxial compression test.
The torsion test, where a cylindrical test specimen is twisted, is a rather simple test that generates a non-uniaxial stress state. The stress state is, however, not homogeneous through the rod. Therefore, some extra processing is needed to translate the moment-versus-angle results to stress-strain results.
In an upcoming blog post in this series, we will make an in-depth demonstration of how to fit measured data to a number of different hyperelastic material models. In the example here, we will assume that you have been able to fit your data to the tests. The raw data consists of two measurements: one in uniaxial tension and another in equibiaxial tension, as shown below.
The nominal stress (force divided by original area) is plotted against stretch (current length divided by original length).
Measured stress-strain curves by Treloar.
Since the data covers a wide range of stretches, the experimental results are clearly nonlinear. The simplest hyperelastic models with one or two parameters will probably not be sufficient to fit the experimental data. The Ogden model with three terms is a popular model for rubber, and it is the model we used here.
A least squares fit will give the results below when assigning equal weights to both data sets. As we can see in the graph, it is possible to fit both experiments very well with a single set of material parameters.
Fitted material parameters using a three-term Ogden model.
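To illustrate what such a fit involves, the sketch below performs an equal-weight least-squares fit of a three-term Ogden model to uniaxial and equibiaxial data. Since the Treloar measurements are not reproduced here, the “data” is synthetic, generated from assumed parameter values; the stress expressions are the standard incompressible Ogden nominal stresses for the two load cases.

```python
import numpy as np
from scipy.optimize import least_squares

def ogden_uniaxial(stretch, mu, alpha):
    """Nominal stress in incompressible uniaxial tension."""
    lam = np.asarray(stretch)
    return sum(m * (lam**(a - 1.0) - lam**(-0.5 * a - 1.0))
               for m, a in zip(mu, alpha))

def ogden_equibiaxial(stretch, mu, alpha):
    """Nominal stress in incompressible equibiaxial tension."""
    lam = np.asarray(stretch)
    return sum(m * (lam**(a - 1.0) - lam**(-2.0 * a - 1.0))
               for m, a in zip(mu, alpha))

def fit_ogden(lam_u, p_u, lam_b, p_b, x0):
    """Equal-weight least-squares fit; x = [mu1, mu2, mu3, a1, a2, a3]."""
    def residuals(x):
        mu, alpha = x[:3], x[3:]
        return np.concatenate([ogden_uniaxial(lam_u, mu, alpha) - p_u,
                               ogden_equibiaxial(lam_b, mu, alpha) - p_b])
    return least_squares(residuals, x0).x

# Synthetic "measurements" generated from assumed parameters (MPa),
# standing in for the real uniaxial and equibiaxial data sets
mu_true, a_true = [0.62, 0.001, -0.01], [1.3, 5.0, -2.0]
lam_u = np.linspace(1.0, 7.0, 30)
lam_b = np.linspace(1.0, 4.0, 20)
p_u = ogden_uniaxial(lam_u, mu_true, a_true)
p_b = ogden_equibiaxial(lam_b, mu_true, a_true)

x = fit_ogden(lam_u, p_u, lam_b, p_b,
              x0=[0.6, 0.002, -0.015, 1.4, 4.8, -1.8])
print("fitted mu:", np.round(x[:3], 4), "alpha:", np.round(x[3:], 2))
```

With real data, the fit is considerably more delicate: Ogden parameters are strongly correlated, so the result can depend on the starting guess and on how the two data sets are weighted.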
But what if the biaxial test had not been available? Fitting only the uniaxial data will give a different set of material parameters, which will of course fit that set of experimental data even more closely, but it would deviate from the biaxial results. This is shown below.
Analytical results for uniaxial and biaxial tension when only the uniaxial data was used to fit the model parameters.
Clearly, the prediction for an equibiaxial stress state will differ between the two sets of parameters. As we can see, the error in stress in the biaxial curve is more than 20% at some stretch levels.
What about other stress states? Two stress states that can be simulated in a simple finite element model are uniaxial compression and pure torsion. The uniaxial stress-strain curve over a wide range of stretches is shown below. The results on the tensile side are not as sensitive to the data set used for obtaining the material parameters as the compressive side is. This is not surprising, as tensile data is used for parameter fitting in both cases, whereas neither of the experiments contains any information about the compressive behavior.
Uniaxial response ranging from compression to tension. The scale on the x-axis is logarithmic.
Note that rubber parts, such as seals, often operate under predominantly compressive stress states. If the data sets used for parameter fitting contain only tension data, this may be a source of inaccuracy when modeling multiaxial stress states.
Finally, let’s have a look at a simulation where a circular bar is twisted. The same type of discrepancies between the results from two sets of material parameters as above can be seen below.
Computed torque as function of the twist angle.
Finally, it should be noted that many hyperelastic models are only conditionally stable. This means that even though the estimated material parameters are perfectly valid for a certain strain range, a unique and continuous stress-strain relation may not even exist for other strain combinations. We often come across such problems in support cases. This is unfortunately rather difficult to detect a priori, since it would require a full search of all possible strain combinations.
Measured data must be processed and analyzed before being used as input for simulations. For material models other than the simpler linear elastic model, it is a good idea to make small examples with a unit cube to assess the behavior under different loading states before using the material model in a large-scale simulation.
So the answer to the request “I would like to just enter my measured stress-strain curve directly into COMSOL Multiphysics” is that such an approach is not recommended. It would turn the software into a black box, when in reality the user must make a number of active decisions in order to obtain meaningful results.
Up next in our Structural Materials series: We will discuss nonlinear elasticity and plasticity.
Let’s consider a thermostat similar to the one that you have in your home. Although there are many different types of thermostats, most of them use the same control scheme: A sensor that monitors temperature is placed somewhere within the system, usually some distance away from the heater. When the sensed temperature falls below a desired lower setpoint, the thermostat switches the heater on. As the temperature rises above a desired upper setpoint, the thermostat switches the heater off. This is known as a bang-bang controller. In practice, you typically only have a single setpoint, and there is an offset, or lag, which is used to define the upper and lower setpoints.
The objective of having different upper and lower setpoints is to minimize the switching of the heater state. If the upper and lower setpoints are the same, the thermostat would constantly be cycling the heater, which can lead to premature component failure. If you do want to implement such a control, you only need to know the current temperature of the sensor. This can be modeled in COMSOL Multiphysics quite easily, as we have highlighted in this previous blog post.
On the other hand, the bang-bang controller is a bit more complex since it does need to know something about the history of the system; the heater changes its state as the temperature rises above or below the setpoints. In other words, the controller provides hysteresis. In COMSOL Multiphysics, this can be implemented using the Events interface.
When using COMSOL Multiphysics to solve time-dependent models, the Events interface is used to stop the time-stepping algorithms at a particular point and offer the possibility of changing the values of variables. The times at which these events occur can be specified either explicitly or implicitly. An explicit event should be used when we know the point in time when something about the system changes. We’ve previously written about this topic on the blog in the context of modeling a periodic heat load. An implicit event, on the other hand, occurs at an unknown point in time and thus requires a bit more setup. Let’s take a look at how this is done within the context of the thermal model shown below.
Sketch of the thermal system under consideration.
Consider a simple thermal model of a lab-on-a-chip device modeled in a 2D plane. A one millimeter thick glass slide has a heater on one side and a temperature sensor on the other. We will treat the heater as a 1W heat load distributed across part of the bottom surface, and we will assume that there is a very small, thermally insignificant temperature sensor on the top surface. There is also free convective cooling from the top of the slide to the surroundings, which is modeled with a heat flux boundary condition. The system is initially at 20°C, and we want to keep the sensor between 45°C and 55°C.
A Component Coupling is used to define the Variable, T_s, the sensor temperature.
The first thing we need to do — before using the Events interface — is define the temperature at the sensor point via an Integration Component Coupling and a Variable, as shown above. The reason why this is done is to make the temperature at this point, T_s, available within the Events interface.
The Events interface itself is added like any other physics interface within COMSOL Multiphysics. It is available within the Mathematics > ODE and DAE interfaces branch.
The Discrete States interface is used to define the state of the heater. Initially, the heater is on.
First, we use the Events interface to define a set of discrete variables, variables which are discontinuous in time. These are appropriate for modeling on/off conditions, as we have here. The Discrete States interface shown above defines a variable, HeaterState, which is multiplied by the applied heat load in the Heat Transfer in Solids problem. The variable can be either one or zero, depending upon the system’s temperature history. The initial condition is one, meaning we are starting our simulation with the heater on. It is important that we set the appropriate initial condition here. It is this HeaterState variable that will be changed depending upon the sensor temperature during the simulation.
Two Indicator States in the Events interface depend upon the sensor temperature.
To trigger a change in the HeaterState variable, we need to first introduce two Indicator States. The objective of the Indicator States is to define variables that will indicate when an event will occur. There are two indicator variables defined. The Up indicator variable is defined as:
T_s - 55[degC]
which goes smoothly from negative to positive as the sensor temperature rises above 55°C. Similarly, the Down indicator variable will go smoothly from negative to positive at 45°C. We will want to trigger a change in the HeaterState variable as these indicator variables change sign.
The HeaterState variable is reinitialized within the Events interface.
We use the Implicit Events interface, since we do not know ahead of time when these events will occur, but we do know under what conditions we want to change the state of the heater. As shown above, two Implicit Event features are used to reinitialize the state of the heater to either zero or one, depending upon when the Up and Down indicator variables become greater than or less than zero, respectively. The event is triggered when the logical condition becomes true. Once this happens, the transient solver will stop and restart with the newly initialized HeaterState variable, which is used to control the applied heat, as illustrated below.
The HeaterState variable controls the applied heat.
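The control scheme itself can be mimicked with a lumped-capacity model to see the hysteresis in action. The sketch below is not the finite element model described above: the 1 W heater, 20°C ambient, and 45/55°C setpoints follow the text, but the heat capacity and convective conductance are assumed values, and the implicit events are handled by simple sign checks inside an explicit time loop.

```python
import numpy as np

def simulate_thermostat(t_end=1800.0, dt=0.1):
    """Bang-bang control of a lumped thermal mass with hysteresis
    between a lower (45 C) and an upper (55 C) setpoint."""
    T_amb, T_lo, T_hi = 20.0, 45.0, 55.0
    C = 5.0      # heat capacity, J/K (assumed)
    hA = 0.02    # convective conductance, W/K (assumed)
    Q = 1.0      # heater power, W
    heater = 1   # start with the heater on, as in the model
    T = T_amb
    history = []
    for t in np.arange(0.0, t_end, dt):
        # the "indicator" sign changes that trigger a state change
        if heater == 1 and T - T_hi > 0.0:
            heater = 0
        elif heater == 0 and T - T_lo < 0.0:
            heater = 1
        T += dt / C * (heater * Q - hA * (T - T_amb))
        history.append((t, T, heater))
    return history

hist = simulate_thermostat()
temps = [T for _, T, _ in hist[len(hist) // 2:]]  # after the initial rise
print(f"sensor temperature cycles between {min(temps):.1f} and {max(temps):.1f} C")
```

A fixed-step explicit loop like this slightly overshoots the setpoints by one time step; the event detection in COMSOL Multiphysics instead locates the exact zero crossing of the indicator variables and restarts the solver there.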
When solving this model, we can make some changes to the solver settings to ensure that we have good accuracy and keep only the most important results. We will want to solve this model for a total time of 30 minutes, and we will store the results only at the time steps that the solver takes. These settings are depicted below.
The study settings for the Time-Dependent Solver set the total solution time from 0-30 minutes, with a relative tolerance of 0.001.
We will need to make some changes within the settings for the Time-Dependent Solver. These changes can be made prior to the solution by first right-clicking on the Study branch, choosing “Show Default Solver”, and then making the two changes shown below.
Modifications to the default solver settings. The event tolerance is changed to 0.001 and the output times to store are set to the steps taken by the solver.
Of course, as with any finite element simulation, we will want to study the convergence of the solution as the mesh is refined and the solver tolerances are made tighter. Representative simulation results are highlighted below and demonstrate how the sensor temperature is kept between the upper and lower setpoints. Also, observe that the solver takes smaller time steps immediately after each event, but larger time steps when the solution varies gradually.
The heater switches on and off to keep the sensor temperature between the setpoints.
We have demonstrated here how implicit events can be used to stop and restart the solver as well as change variables that control the model. This enables us to model systems with hysteresis, such as thermostats, and perform simulations with minimal computational cost.
With its notable display of historical artwork, the Louvre has become a central landmark in Paris, France. As the Louvre grew in popularity — eventually ranking as one of the world’s most visited museums — it became evident that the building’s original entrance could no longer handle the large number of guests admitted on a daily basis. The need for an entrance with a greater capacity prompted the construction of the Louvre Pyramid within the museum’s courtyard in 1989. This structure now serves as the main entrance to the building, taking visitors down into a spacious lobby and then up to the museum level.
The Louvre Pyramid. (“Courtyard of the Louvre Museum, with the Pyramid” by Alvesgaspar — Own work. Licensed under Creative Commons Attribution Share-Alike 3.0, via Wikimedia Commons).
In comparison to the Louvre’s classical architecture, the pyramid was based on a more modern design approach — the space frame. A space frame is a truss-like structure composed of interlocking struts that form a geometric pattern. Requiring few interior supports, these structures offer a lightweight and elegant solution in structural engineering. Additionally, due to their inherent rigidity, space frames can span large areas while remaining strong and stable.
The Louvre pyramid is just one example of a building based on a space frame design. Many other structures, such as the Eden Project in England and Globen in Sweden, have also used a space frame as the basis for their construction. With its common application in modern buildings, it is important to study how loads affect the stability of such structures.
In the new COMSOL Multiphysics version 5.0 model, Instability of a Space Arc Frame, we set up and analyze a space frame. In this benchmark model, the frame undergoes concentrated loading at various points, with a small lateral load implemented to break the structure’s symmetry. The description of the space frame and the applied loads are based on the example from “A Mixed Co-rotational 3D Beam Element for Arbitrarily Large Rotations” by Z.X. Li and L. Vu-Quoc.
A schematic depicting the space frame’s geometry.
As a constraint, all of the frame’s base points are pinned. Vertical concentrated loads P are applied to the top four corners of the space frame. Meanwhile, lateral loads of 0.001*P are applied to the frame’s two front corners. These lateral loads perturb the frame’s symmetry so that the instability develops in a controlled manner. The figure below shows the final state of the deformed frame.
Deformed space frame.
Next, we can evaluate the relationship between the compressive load and the horizontal displacement on point A of the frame. Comparing the reference data with the simulation results, the plot below illustrates a strong agreement between the two findings.
A plot relating load parameter P and displacement v. Here, the simulation results are compared with the reference data.
Furthermore, this plot highlights an instability occurring at a parameter value of around 8.0, even though a deviation from linearity can be seen much earlier. In practice, an imperfect structure’s critical load is often far lower than that of the ideal structure, as was discussed in this previous blog post.
Acoustic radiation force is an important nonlinear acoustic phenomenon that manifests itself as a nonzero time-averaged force exerted by acoustic fields on particles. It underlies acoustophoresis, that is, the movement of objects by sound. One interesting example of this force in action is the acoustic particle levitation discussed in this previous blog post. Today, we shall examine the nature of this force and show how it can be computed using COMSOL Multiphysics.
To understand the nature of the acoustic radiation force, let’s first consider a simple example of a particle in a standing wave pressure field (here assumed to be lossless).
The force on the particle arises as a result of the particle’s finite size: the gradients in the pressure field result in a greater force being exerted on one side of the particle than on the other. However, if we consider a harmonic pressure wave, the force is expected to behave as a harmonic function, which can be expressed as F_\text{harmonic} = F_0\sin (2\pi f_0 t+\phi). I’ve shown this as a black arrow in the animation below.
If time-averaged, the total contribution goes to zero. So, where does the observed nonzero force come from?
This question was first addressed by L. V. King back in 1934 (“On the acoustic radiation pressure on spheres“). In order to understand King’s results, we must take a step back to examine how the governing equations of acoustics are derived.
We will find out that they emerge from the Navier-Stokes equations as a result of a linearization procedure, which is normally carried out in two steps.
First, a very small time-varying perturbation in pressure and velocity is assumed on top of a stationary background field. When time derivatives are applied, the stationary terms drop out and what is left only includes the time-dependent perturbation terms. The remaining expression will contain both linear and nonlinear contributions. The latter appear in the form of products of two or more linear perturbation terms, and they result from convective and inertial terms in the original Navier-Stokes equation.
But, in the simplest acoustic limit, the contribution of nonlinear terms can be neglected because the amplitudes of perturbations considered are very small. For example, 0.01^{2} is much smaller than 0.01 and can therefore be neglected. So, in the second step of the linearization procedure, all the nonlinear terms are neglected and the linear wave equation is obtained.
What King showed was that, in order to understand and evaluate the acoustic radiation force, the nonlinear terms must somehow be retained in the equations.
Keeping the terms up to a second order, the pressure field will appear as a combination of two terms p = p_1 + p_2, where p_1 and p_2 can be expressed in a simplistic form as p_1 = \rho_0 c_0 v, which appears as a linear function of the perturbation velocity v, and p_2 = 1/2 \rho_0 v^2, which appears as a nonlinear function of v. Since, in the acoustic limit, we only consider the cases in which v \ll c_0, where c_0 is the adiabatic speed of sound, we conclude that p_2 \ll p_1.
At this point, we are ready to answer the first question: Where does the acoustic radiation force come from?
Going back to the example of a particle in a standing wave pressure field, let’s examine the linear and nonlinear components of the pressure and the forces produced by these components. In this case, p_1 will be a time-harmonic function p_1 = P_1 \cos(kx)\sin(2\pi f t) and p_2 will be an anharmonic function p_2 = P_2\cos^2(kx)[1-\cos(4\pi f t)] resulting from the nonlinear contribution.
These terms are visualized by the waveforms in the animation above. The forces resulting from these pressure terms are indicated by arrows. The linear force (black arrow) changes both in magnitude and direction so its cycle-averaged contribution is zero, whereas the nonlinear term (red arrow) only changes in magnitude and on average exerts a nonzero force.
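The cycle-averaging argument can be checked numerically. Here is a minimal sketch; the amplitudes and frequency are arbitrary values chosen only for illustration, not taken from the post:

```python
import numpy as np

# Illustrative amplitudes and frequency (assumed, not from the post)
f0 = 1.0e6          # drive frequency [Hz]
F1, F2 = 1.0, 0.01  # linear and nonlinear force amplitudes [N]

# Sample exactly one acoustic period
t = np.linspace(0.0, 1.0 / f0, 10000, endpoint=False)

force_linear = F1 * np.sin(2 * np.pi * f0 * t)           # ~ p_1 contribution
force_nonlinear = F2 * (1 - np.cos(4 * np.pi * f0 * t))  # ~ p_2 contribution

avg_linear = force_linear.mean()        # cycle average ~ 0
avg_nonlinear = force_nonlinear.mean()  # cycle average ~ F2: a net force remains
```

The harmonic term averages to zero over a cycle, while the anharmonic term leaves a nonzero mean, which is exactly the qualitative picture painted by the arrows above.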
The simple analysis above demonstrates the main mechanism underlying the acoustic radiation force phenomenon. Intuitively, we realize that no force will appear if the particle has the same acoustic properties as the surrounding medium. In other words, the radiation force should be a function of not only the size of a particle and the amplitude of the acoustic field, but also of the particle’s acoustic contrast (the ratio of the material properties of the particle relative to the surrounding fluid).
Due to the acoustic contrast, the field incident on the particle will be reflected from its surface, and the radiation force results from the combination of the incident and reflected waves. This makes the problem quite difficult to solve analytically. A solution in closed analytic form has only been given for some limiting cases by a number of authors, starting with King. He considered rigid spherical particles with dimensions much smaller than the wavelength of the incident wave, but much larger than the viscous and thermal skin depths. It was the second assumption that allowed viscous and thermal effects to be neglected.
King’s results have been extended to include compressible particles as in “Acoustic radiation pressure on a compressible sphere“. The results from this study were later confirmed by L. P. Gor’kov in 1962 in “On the forces acting on a small particle in an acoustical field in an ideal fluid”. Viscous and thermal effects become important when the size of the particles becomes comparable to the acoustic boundary layers (thermal and viscous). Results including viscosity were recently presented in 2012 by M. Settnes and H. Bruus.
Gor’kov has developed an elegant approach to expressing the radiation force in terms of time-averaged kinetic and potential energies of stationary acoustic fields of any geometries. His results, when applied to small compressible fluid particles, give the force as a gradient of a potential function U_\text{rad}:
\mathbf{F}_\text{rad} = -\nabla U_\text{rad} \qquad (1)
The potential function U_\text{rad} is expressed using the acoustic pressure and velocity as:
U_\text{rad} = V_p\left[\frac{f_1}{2\rho_0 c_0^2}\langle p^2\rangle - \frac{3 f_2}{4}\rho_0\langle \mathbf{v}\cdot\mathbf{v}\rangle\right] \qquad (2)
where V_p is the volume of the particle and the scattering coefficients are given by:
f_1 = 1-\frac{K_0}{K_p}, \qquad f_2 = \frac{2(\rho_p-\rho_0)}{2\rho_p+\rho_0} \qquad (3)
where K_i are the bulk moduli. The scattering coefficients f_1 and f_2 represent the monopole and dipole coefficients, respectively. This approach, which is based on the scattering theory, is only valid for particles that are small compared to the wavelength \lambda in the limit a/\lambda \ll 1, where a is the radius of the particle.
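As a sketch of how the Gor’kov potential is used in practice, the snippet below evaluates it for a 1D standing wave and recovers the radiation force as the negative gradient of the potential. The material and field parameters are assumed, illustrative values (water and a nylon-like solid), not those of any particular COMSOL model; the small-particle coefficients f_1 = 1 - K_0/K_p and f_2 = 2(\rho_p-\rho_0)/(2\rho_p+\rho_0) are the standard ones:

```python
import numpy as np

# Assumed material/field parameters: a small nylon-like particle in water
# in a 1D standing wave p1 = pa*cos(k z)*cos(2*pi*f*t). Values are
# illustrative only, not taken from any specific model.
rho0, c0 = 1000.0, 1500.0     # fluid density [kg/m^3] and speed of sound [m/s]
rho_p, c_p = 1050.0, 2350.0   # particle density and speed of sound
f, pa = 1.0e6, 1.0e5          # frequency [Hz] and pressure amplitude [Pa]
a = 1.0e-6                    # particle radius [m], a << wavelength

k = 2 * np.pi * f / c0
K0, Kp = rho0 * c0**2, rho_p * c_p**2            # bulk moduli
f1 = 1 - K0 / Kp                                 # monopole coefficient
f2 = 2 * (rho_p - rho0) / (2 * rho_p + rho0)     # dipole coefficient
Vp = 4.0 / 3.0 * np.pi * a**3

def U_rad(z):
    # Gor'kov potential with <p^2> = pa^2 cos^2(kz)/2 and
    # <v^2> = (pa/(rho0*c0))^2 sin^2(kz)/2 for this standing wave
    p2_avg = 0.5 * pa**2 * np.cos(k * z)**2
    v2_avg = 0.5 * (pa / (rho0 * c0))**2 * np.sin(k * z)**2
    return Vp * (f1 * p2_avg / (2 * rho0 * c0**2) - 0.75 * f2 * rho0 * v2_avg)

# Radiation force as the negative gradient of the potential
z = np.linspace(0.0, np.pi / k, 201)
dz = 1e-9
Fz = -(U_rad(z + dz) - U_rad(z - dz)) / (2 * dz)

# Known closed-form result for a 1D standing wave, for comparison
Phi = f1 / 3 + f2 / 2                      # acoustic contrast factor
E_ac = pa**2 / (4 * rho0 * c0**2)          # acoustic energy density
Fz_exact = 4 * np.pi * Phi * k * a**3 * E_ac * np.sin(2 * k * z)
```

The numerical gradient reproduces the textbook closed-form force 4\pi \Phi k a^3 E_\text{ac}\sin(2kz), with the acoustic contrast factor \Phi = f_1/3 + f_2/2.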
The v and p terms that appear in Eq. (1) are the first-order terms that can be obtained by solving a linear acoustic problem. Results in this form are typically obtained using a perturbation method, which is widely practiced in physics. A thorough review and examples of this method applied to nonlinear problems in acoustics and microfluidics can be found in a textbook by Professor Henrik Bruus titled Theoretical Microfluidics.
Eq. (1) is coded in the COMSOL Multiphysics Particle Tracing for Fluid Flow interface to evaluate the acoustic radiation force on particles. But, as mentioned above, it only applies to acoustically small particles and neglects thermoviscous effects. An example can be seen in the Acoustic Levitator model. Knowing the radiation force is important when modeling and simulating systems that handle particles using this phenomenon. These can be, for instance, microfluidic systems that sort and handle cells and other particles. An example of this is discussed in the blog post Acoustofluidic Multiphysics Problem: Microparticle Acoustophoresis.
To extend the theory beyond the limit of acoustically small particles, a numerical approach is required. We will consider that next.
In general, all forces can be expressed using momentum fluxes as \mathbf F = \int_S T \mathbf{n} d\mathbf{a}, where the surface of integration, S, is the external surface of the particle.
Gor’kov has used this fact to obtain a closed-form analytical expression for a force acting on a particle in an arbitrary acoustic field. To compute the nonlinear acoustic radiation force, the momentum flux due to the acoustic field has to be evaluated up to second-order terms. The main appeal of his result is that, as mentioned earlier, the second-order terms can be expressed using the solution of a linear problem.
To implement his method, all we need to do is solve the acoustic problem, use the results to compute the second-order momentum flux, and substitute the solution into the flux integral.
H. Bruus has shown that neglecting the thermoviscous effects, the second-order flux terms are:
\langle\mathbf{T}_2\rangle = \left(\frac{\rho_0\langle v_1^2\rangle}{2} - \frac{\langle p_1^2\rangle}{2\rho_0 c_0^2}\right)\mathbf{I} \qquad (4)
The integral should be taken over a surface of a particle moving in response to the applied force. This means that the surface of integration is a function of time S = S(t). To overcome this difficulty, Yosioka and Kawasima have indicated that the integration can be transformed to an equilibrium surface S_0 that encloses the particle. Compensating for the error with the addition of a convective momentum flux term, the force, in total, becomes:
\mathbf{F} = \oint_{S_0}\left[\left(\frac{\rho_0\langle v_1^2\rangle}{2} - \frac{\langle p_1^2\rangle}{2\rho_0 c_0^2}\right)\mathbf{n} - \rho_0\big\langle(\mathbf{n}\cdot\mathbf{v}_1)\,\mathbf{v}_1\big\rangle\right] da \qquad (5)
All that is left to do now is solve the acoustics problem to obtain the acoustic pressure and velocity and substitute them into the integral in Eq. (5). In contrast to the approach used in Eq. (1) to (3), the force expression given in Eq. (5) is valid for all particle sizes as long as the stress T is given. This approach was recently implemented in COMSOL Multiphysics by a group of researchers from the University of Southampton.
It should be noted that the expression in Eq. (4) is only true when viscous and thermal effects are neglected. If these losses are included, the integration surface S_0 should be taken outside of the boundary layers, or a correct full stress expression for T should be used on the particle surface. A first-principles perturbation approach including thermal and viscous losses was presented at the 2013 ICA-ASA conference by M. J. Herring Jensen and H. Bruus, titled “First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves”. A detailed derivation of the governing equations up to second order, in a form suited for implementation in COMSOL Multiphysics, is given in the paper “Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels”.
To benchmark the method presented by Glynne-Jones et al., let’s compute the acoustic radiation force exerted by a standing wave on a spherical nylon particle immersed in water. We assume a frequency of 1 MHz and a pressure amplitude of 1 bar and implement the model using the Acoustic-Structure Interaction interface in a 2D axisymmetric geometry. The box in the model is four wavelengths high and two wavelengths wide.
Let’s excite a standing wave in this box using a Background Pressure Field condition, set up in such a way that the particle is at a distance of \lambda/8 from the pressure node.
The integrals in Eq. (5) are computed by setting up integration coupling operators in the Component 1 > Definitions node. We need to make sure that the integral is calculated in the revolved geometry by checking the appropriate box and selecting the boundaries of the particle to define the surface of integration.
It is worth mentioning that the force computed with this method is independent of the surface of integration, due to the conservation of flux, as long as the surface is located outside the particle. In fact, using a surface at a larger distance will be more numerically accurate, simply because there will be more points to use for the numerical evaluation of the integral. To perform this integration, we can add another surface external to the particle.
Finally, new flux variables are introduced in the Component 1 > Definitions as Variables 1a node. They are used as arguments for the integration operators to compute the total force.
We are now ready to compare the perturbation approach to an analytical solution.
As expected, they compare well for small particle radii, where the analytical solution considered is valid. Analytical models that include higher harmonics in the scattered-field decomposition offer solutions that agree with the outlined numerical approach for both large and small spherical particles (as in the paper by T. Hasegawa, “Comparison of two solutions for acoustic radiation pressure on a sphere”).
A small discrepancy for small particle radii between analytical and numerical methods may be attributed to the fact that the theoretical models assume that the particle is plastic, whereas in this example, we have considered an elastic particle with bulk modulus of 0.4.
The perturbation method has a number of advantages.
First, it exploits the linear acoustics method to evaluate nonlinear second-order force effects. This allows the analysis to be easily extended to 3D for particles of arbitrary shapes and material composition. For example, we can extend it to simulate acoustic radiation forces on biological cells or microbubbles.
Second, because the acoustic equations are solved in the frequency domain where very efficient numerical methods are well established, the solution time in COMSOL Multiphysics is quite fast even in 3D.
Meanwhile, the disadvantage of this method is that it is driven by theoretical results that rely on a set of simplifying assumptions, and it can only be validated in a limited number of cases. What we would like to have is a numerical method that allows the problem to be solved directly.
We shall see how this can be achieved in the next blog post. Stay tuned!
Previous work on cloaking for flexural waves in elastic plates suffered from limitations and achieved only near invisibility. Now, a research group in Europe has developed a new theoretical framework that both overcomes these limitations and achieves exact cloaking for flexural waves in Kirchhoff-Love plates. To visualize and test the quality of the cloak, they ran COMSOL Multiphysics simulations.
Picture this: There are flexural waves emanating from a source in a thin elastic plate. If you place an object in the plate, it will disturb the waves and you will be able to see it. If a cloak is instead placed around the object, the waves will not be disturbed, thus rendering the object invisible.
Models illustrating an object without and with cloaking. Provided by Daniel J. Colquitt.
While that’s an easy enough concept to grasp, it is not so easy to realize. In order to cloak an object, you need to construct the right metamaterial (metamaterials do not exist in nature; they are artificial materials, engineered to have specific properties).
Metamaterials can be designed such that their material properties mimic the spatial variations created by coordinate transformations, thereby directing light, sound, or other waves in a specific manner. The initial configuration of the electromagnetic or acoustic fields is mapped on a Cartesian mesh, which is then twisted to transform the coordinates.
Illustration of an untwisted Cartesian mesh.
Twisted mesh. Provided by Daniel J. Colquitt.
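The mesh-twisting idea can be sketched with the classic radial cloaking map of transformation optics and acoustics. Note that this particular map and the radii R1, R2 below are illustrative assumptions, not the transformation used in the paper discussed here:

```python
import numpy as np

# The classic radial cloaking map: the disk r < R2 is compressed into the
# annulus R1 < r' < R2, opening a "hole" of radius R1 for the object to
# hide in. R1 and R2 are illustrative values.
R1, R2 = 0.5, 1.0

def cloak_map(x, y):
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    r_new = np.where(r < R2, R1 + r * (R2 - R1) / R2, r)  # outside R2: untouched
    return r_new * np.cos(theta), r_new * np.sin(theta)

# Apply the map to a Cartesian mesh of points ("twisting" the mesh)
x, y = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
xt, yt = cloak_map(x, y)
```

After the map, no mesh point remains inside the inner radius R1, while points outside R2 are left in place, which is why the cloak is invisible from outside its own boundary.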
When it comes to electromagnetic and acoustic cloaking, you would use Maxwell’s equations and the Helmholtz equation, respectively. Both of these equations are invariant when coordinates transform, meaning they are not altered in the process.
However, if you want to mathematically model cloaking for mechanical waves, such as flexural waves in elastic plates, you need a fourth-order partial differential equation (PDE). The general form of the PDE is not invariant; it changes during the coordinate transformation.
Earlier work with applying transformation elastodynamics to achieve cloaking came with strings attached. Following some previous frameworks, calculations yielded tensorial densities and nonsymmetric stresses. Other research produced a framework where the PDE could be applied to thin plates, but only for nonlinear theories.
COMSOL Multiphysics user Daniel J. Colquitt and his colleagues are finally breaking through the limitations of this type of cloaking design.
In the paper “Transformation elastodynamics and cloaking for flexural waves”, D.J. Colquitt, M. Brun, M. Gei, A.B. Movchan, N.V. Movchan, and I.S. Jones present a new framework for transformation elastodynamics for thin plates and an algorithm for broadband cloak design. The thin elastic plates are represented by Kirchhoff-Love plates — a 2D mathematical model that is commonly used to characterize thin plates.
“The equations which govern the flexural displacement of thin elastic plates are very different to those for light and acoustics (e.g., Helmholtz equation),” Colquitt explained to me in an email conversation. He added, “Indeed, the equation of motion for flexural waves involves the biharmonic operator, which is a fourth-order partial differential operator; this is fairly unusual for physical systems where the equations of motion are usually first- or second-order in space (e.g. Electromagnetism, Acoustics, Elasticity).”
So how did the researchers overcome the issue of equation variance during the coordinate transformation? They gave the transformed equation a physical interpretation by introducing a generalized plate model. In this new framework, the invisibility cloak is created by applying a specific combination of pre-stress and body forces to a metamaterial plate. Unlike other frameworks, the new one is completely linear. They also show that their new algorithm successfully ensures symmetric stresses and scalar densities.
To visualize the cloaking results and test the quality of the cloak, Colquitt et al. created simulations using COMSOL Multiphysics software. Apparently, when deciding which software to use for testing out their cloak, flexibility was a key factor.
“The invisibility cloak requires the implementation of an inhomogeneous, anisotropic plate subjected to inhomogeneous and anisotropic pre-stress and body forces. As one can imagine, the governing equations for this system are nonstandard,” Colquitt said. “COMSOL allowed us to directly implement the precise form of the transformed governing equations. This is very attractive for a mathematician — the ability to directly control the system of equations being solved.”
Below are some images and an animation illustrating their results.
Cloaking results. Provided by Daniel J. Colquitt.
Animation of the square cloaked object moving in the flexural waves in the thin plate. Courtesy of Daniel J. Colquitt.
So, the researchers have successfully developed a new theoretical framework that will allow experimentalists to design and build mechanical invisibility cloaks that can hide or protect objects from mechanical vibrations.
As far as putting the framework to use, Colquitt said: “There are many potential applications, such as cloaking, or the isolation of sensitive pieces of equipment from mechanical vibrations by routing the flexural waves around the equipment.”
In short, this is very fascinating research and I highly recommend reading the paper (link below) to get the full scope of their work.
When buying a home, there are several modern conveniences that people look for. One of these is often a washing machine. Having this device within your own home eliminates the time-consuming task of taking your clothes to the laundromat or washing them by hand, giving you the freedom to tackle other tasks around the house — or better yet: relax.
Imagine that it’s laundry day and you are fortunate enough to have a washer in your home. How would you notice that your laundry cycle is complete? One common tell-tale sign is silence.
During the cycle, your washing machine is likely to produce vibration and noise as a result of the uneven distribution of clothes and the structural properties of the machine. We now have a multibody dynamics model that enables you to analyze this common issue via simulation.
The Vibration in a Washing Machine Assembly model depicts a horizontal-axis portable washing machine. The model’s geometry accounts for various parts of the washing machine assembly (including the housing), with varying components defined by different colors.
The geometry of the washing machine.
The machine’s parts are represented as follows:
Part | Color
---|---
Clothes | Red
Drum | White
Tub | Cyan
Motor | Yellow
Pistons | Green
Cylinders | Magenta
Mountings | Blue
Base supports | Black
Housing | Gray
Within this multibody dynamics example, we assume that the housing is modeled as a flexible shell and that the other components of the washing machine are modeled using rigid solids. Additionally, we assume that the clothes do not move in relation to the drum.
The diagram below explains the connection details between various parts of the machine through the use of joints and springs.
In an eigenfrequency analysis, one mode shows the translation of the tub, while another shows its rotational motion about the vertical axis. Each of these eigenmodes also highlights the corresponding deformation of the housing, which is relatively small compared to the motion of the tub.
On the left: The translational tub mode. On the right: The rotational tub mode.
The next set of figures shows the displacement magnitude of the tub with the angular position of the unbalanced clothes for the full time duration. Here, the color red represents the initial time of the trajectory and the color blue illustrates the final time.
To identify the vibrations induced in the housing during the machine’s spinning cycle, we can perform a transient analysis. The graph below depicts the housing deformation in multiple directions at a point on the right side wall.
Next, we can analyze the normal acceleration (a measure of noise emitted by the side walls) of the housing at a point in the middle of the right side wall. These results are illustrated in the graph below on the left. Meanwhile, the graph on the right depicts the frequency spectrum of this acceleration.
We can note that the frequencies of the side wall vibration are predominantly within the range of 0-30 Hz, with a peak at around 1.67 Hz, which can be considered the excitation frequency. This corresponds to the excitation expected from the unbalanced load at the set rotational velocity of the drum. It is interesting to note, however, that the side wall vibration also contains components at frequencies higher than the excitation frequency, which leads to additional noise emission.
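A synthetic signal makes this spectral picture concrete. Assuming a drum speed of 100 RPM, the unbalance excitation lands at 100/60 ≈ 1.67 Hz; all amplitudes and the higher-frequency component below are made up purely for illustration:

```python
import numpy as np

# Synthetic acceleration signal (all values assumed): a rotating unbalance
# at 100 RPM, i.e., 100/60 ~ 1.67 Hz, plus a weaker 25 Hz component that
# stands in for higher-frequency wall vibration.
rpm = 100.0
f_exc = rpm / 60.0   # unbalance excitation frequency [Hz]
f_hi = 25.0          # assumed higher-frequency content [Hz]

fs, T = 200.0, 60.0  # sample rate [Hz] and record length [s]
t = np.arange(0.0, T, 1.0 / fs)
accel = 1.0 * np.sin(2 * np.pi * f_exc * t) + 0.3 * np.sin(2 * np.pi * f_hi * t)

# One-sided amplitude spectrum
spectrum = 2 * np.abs(np.fft.rfft(accel)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

f_peak = freqs[np.argmax(spectrum)]  # dominant peak at the excitation frequency
```

The dominant spectral peak sits at the rotation frequency, with the weaker high-frequency component showing up exactly as the additional content discussed above.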
With this in mind, the washing machine can be designed to mainly emit noise at lower, non-audible frequencies. The choice of materials and design must also ensure that the washing machine is structurally strong enough to limit the vibration magnitude at all frequencies in order to prevent the failure of its components.
The nonlinear stress-strain behavior in solids was already described 100 years ago by Paul Ludwik in his Elemente der Technologischen Mechanik. In that treatise, Ludwik described the nonlinear relation between shear stress \tau and shear strain \gamma observed in torsion tests with what is nowadays called Ludwik’s Law:
\tau = k\,\gamma^{1/n} \qquad (1)
For n=1, the stress-strain curve is linear; for n=2, the curve is a parabola; and for n=\infty, the curve represents a perfectly plastic material. Ludwik just described the behavior (Fließkurve) of what we now call a pseudoplastic material.
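Taking Ludwik’s law in the power form \tau = k\,\gamma^{1/n}, consistent with the three limiting cases above, the curves can be tabulated in a few lines (k and the strain range are arbitrary illustration values):

```python
import numpy as np

# Ludwik's law tau = k * gamma**(1/n); k and the strain range are
# arbitrary illustration values.
k = 100.0
gamma = np.linspace(0.0, 0.05, 6)

tau_linear = k * gamma             # n = 1: straight line
tau_parabola = k * gamma**0.5      # n = 2: parabolic hardening
tau_plastic = k * gamma**0.0       # n -> infinity: tau = k, perfectly plastic
```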
In version 5.0 of the COMSOL Multiphysics simulation software, besides Ludwik’s power law, the Nonlinear Structural Materials Module includes several material models within the family of nonlinear elasticity:
In the Geomechanics Module, we have now included material models intended to represent nonlinear deformations in soils:
The main difference between a nonlinear elastic material and an elastoplastic material (either in metal or soil plasticity) is the reversibility of the deformations. While a nonlinear elastic solid would return to its original shape after a load-unload cycle, an elastoplastic solid would suffer from permanent deformations, and the stress-strain curve would present hysteretic behavior and ratcheting.
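This distinction can be illustrated with a minimal 1D strain-driven return-mapping sketch (assumed material parameters, linear isotropic hardening): a load-unload cycle leaves a residual stress at zero strain, whereas a nonlinear elastic law would simply retrace its curve back to the origin.

```python
# 1D strain-driven elastoplasticity with linear isotropic hardening
# (return-mapping algorithm). All parameters are assumed for illustration.
E, H, sy = 200e3, 20e3, 250.0   # Young's modulus, hardening modulus, yield stress [MPa]

def elastoplastic(strain_path):
    eps_p, alpha, stress = 0.0, 0.0, []
    for eps in strain_path:
        trial = E * (eps - eps_p)              # elastic trial stress
        f = abs(trial) - (sy + H * alpha)      # yield function
        if f > 0:                              # plastic step: return mapping
            dgamma = f / (E + H)
            eps_p += dgamma * (1.0 if trial > 0 else -1.0)
            alpha += dgamma
        stress.append(E * (eps - eps_p))
    return stress

# One load-unload cycle: strain goes 0 -> 0.004 -> 0
path = [i * 1e-4 for i in range(41)] + [(40 - i) * 1e-4 for i in range(1, 41)]
s = elastoplastic(path)
residual = s[-1]  # nonzero stress at zero strain: the loop does not close
```

Because yielding also occurs in compression during unloading (isotropic hardening), the cycle ends with a residual compressive stress, which is the hysteresis the blue curve in the plot further down exhibits.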
Let’s open the Elastoplastic Analysis of a Plate with a Center Hole model, available in the Nonlinear Structural Materials Model Library as elastoplastic_plate, and modify it to solve for one load-unload cycle. Let’s also add one of the new material models included in version 5.0, the Uniaxial data model, and use the stress_strain_curve already defined in the model.
Here’s a screenshot of what those selections look like:
In our example, the stress_strain_curve represents the bilinear response of the axial stress as a function of axial strain, which can be recovered from Ludwik’s law when n=1.
We can compare the stress distribution after laterally loading the plate to a maximum value. The results are pretty much the same, but the main difference is observed after a full load-unload cycle.
Top: Elastoplastic material. Bottom: Uniaxial data model.
Let’s pick the point where we observed the highest stress and plot the x-direction stress component versus the corresponding strain. The green curve shows a nonlinear, yet elastic, relation between stress and strain (the stress path goes from a\rightarrow b \rightarrow a \rightarrow c \rightarrow a). The blue curve portrays the hysteresis loop observed in elastoplastic materials with isotropic hardening (the stress path goes from a\rightarrow b \rightarrow d \rightarrow e).
With the Uniaxial data model, you can also define your own stress-strain curve obtained from experimental data, even if it is not symmetric in both tension and compression.
Heat transfer computation often needs to include surface-to-surface radiation to reflect reality with accuracy. The numerical tools used to simulate surface-to-surface radiation differ significantly from those used for conduction or convection. Whereas the latter are based on local discretization of partial differential equations (PDEs), surface-to-surface radiation relies on non-local quantities — the view factors between diffuse surfaces that emit and receive radiation.
When surface-to-surface radiation is activated, the heat transfer interface creates a set of operators that are evaluated in the same way as the irradiation variables in surface-to-surface radiation. Thanks to these operators, it is possible to retrieve the irradiation variable values and to compute geometrical view factors in a given geometry.
In this blog post, I’ll explain how to compute geometrical view factors in a simple 3D geometry where analytical values of the view factor are available.
The new operators in COMSOL Multiphysics version 5.0 offer full access to all of the information used to generate surface-to-surface radiation equations. This is true for even the more advanced configurations, such as radiation on both sides of a shell and multiple spectral bands with different opacity properties.
Let’s start with the simplest case where we assume that surfaces behave like gray surfaces. In this case, we don’t need to distinguish between the spectral bands. We have two operators, one for each face (up or down) of the surfaces. They are as follows:
radopu(expr_up,expr_down)
radopd(expr_up,expr_down)
These two operators are designed to be evaluated on a boundary where the surface-to-surface radiation is active. Assuming that the heat interface tag is ht, ht.radopu(ht.Ju,ht.Jd) returns the mutual surface irradiation, ht.Gm_u, that is received at the evaluation point on the upside of the boundary. Note that ht.Ju and ht.Jd define the radiosity on the up- and downsides of the boundaries, respectively. Similarly, ht.radopd(ht.Ju,ht.Jd) returns the mutual surface irradiation, ht.Gm_d, on the downside of the boundary.
When multiple spectral bands are considered, a given boundary can be opaque for one spectral band and transparent for another. Hence, one pair of operators is needed per spectral band. They work exactly like the operators for gray surfaces and are named as follows:
ht.radopB1u(expr_up,expr_down) and ht.radopB1d(expr_up,expr_down)
ht.radopB2u(expr_up,expr_down) and ht.radopB2d(expr_up,expr_down)
ht.radopB3u(expr_up,expr_down) and ht.radopB3d(expr_up,expr_down)
Let’s consider two diffuse gray surfaces: S1 and S2. We’ll assume that radiation occurs only on the upside of these surfaces. From a thermal perspective, the view factor between S1 and S2, F_{S1-S2}, is the ratio between the diffuse energy leaving S1 and intercepted by S2 and the total diffuse energy leaving S1.
Using the operators described above, we have
F_{S1-S2} = \frac{\int_{S2} \operatorname{radopu}(I_{S1}\,J,\; I_{S1}\,J)\, da}{\int_{S1} J\, da} \qquad (1)
Note that, for clarity, the ht. prefix has been removed.
Assuming that the radiosity has the same value on all surfaces, the above definition can be simplified and no longer depends on J. In that case, F_{S1-S2} depends only on the geometrical configuration and no longer on the thermal configuration. Let’s call this a geometrical view factor to distinguish it from the view factor based on thermal radiation.
We now have
F_{S1-S2} = \frac{\int_{S2} \operatorname{radopu}(I_{S1},\; I_{S1})\, da}{S1} \qquad (2)
where S1 represents either the surface name or its area and I_{\textrm{S1}} is the function indicator of the surface S1, which returns 1 when it is evaluated on S1 and 0 elsewhere.
In order to get used to the new operators and check their accuracy, I chose a simple configuration. The geometry consists of two concentric spheres of radius R_{int} and R_{ext} (with R_{int} < R_{ext}), as shown below:
The radiation occurs between the external side of the small sphere and the internal side of the large sphere. The geometrical view factors are:

F_{S_{\textrm{int}}-S_{\textrm{int}}} = 0, \quad F_{S_{\textrm{int}}-S_{\textrm{ext}}} = 1, \quad F_{S_{\textrm{ext}}-S_{\textrm{int}}} = \left(\frac{R_{int}}{R_{ext}}\right)^2, \quad F_{S_{\textrm{ext}}-S_{\textrm{ext}}} = 1-\left(\frac{R_{int}}{R_{ext}}\right)^2
Here, S_{int} and S_{ext} represent the interior and exterior sphere, respectively.
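These classic concentric-sphere results are easy to verify together with the reciprocity relation S_1 F_{S1-S2} = S_2 F_{S2-S1}. The radii below are assumed (the post does not state them) and are chosen so that (R_{int}/R_{ext})^2 = 0.09:

```python
import math

# Concentric spheres: the assumed radii give (R_int/R_ext)^2 = 0.09,
# consistent with the values used in this example.
R_int, R_ext = 0.3, 1.0
S_int = 4 * math.pi * R_int**2
S_ext = 4 * math.pi * R_ext**2

F_int_int = 0.0                 # a convex surface cannot see itself
F_int_ext = 1.0                 # all radiation from S_int reaches S_ext
F_ext_int = (R_int / R_ext)**2  # from reciprocity: S_ext*F_ext_int = S_int*F_int_ext
F_ext_ext = 1.0 - F_ext_int     # each row of the view factor matrix sums to 1
```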
To compute the geometrical view factor in the COMSOL Multiphysics simulation software, we need to add a heat interface with surface-to-surface radiation activated, then draw the geometry and build the mesh.
Then, we don’t really need to run a heat transfer simulation, since we are only interested in the geometrical view factor. Computing the initial values is enough to get access to the radopu and radopd operators.
Before doing that, though, we’d better prepare a few tools that will help us with the postprocessing.
In the geometrical view factor expression, we have used the surface indicators I_{\textrm{S1}} and I_{\textrm{S2}}. These are defined as Variables in COMSOL Multiphysics using a Geometric Entity Selection, so that each indicator is 0 everywhere except on the corresponding surface, where it is 1. Let’s name them ext and int.
Screenshots of the Geometric Entity Selection settings for the surface indicators.
Next, we define the integration operators intop_ext and intop_int. They will make it easy to compute surface integrals; for example, the area of S_{\textrm{ext}} can be evaluated as intop_ext(1).
Screenshots of the settings for the integration operators.
We have seen that radiation may occur on the upside, downside, or on both sides of the boundaries. The radiation operators are designed to be able to distinguish the radiation coming from each side. Therefore, we need to check the sides on this model.
We can do this easily via the Diffuse Surface feature, where the radiation direction can be set to “Negative normal direction” (downside) or “Positive normal direction” (upside). Using this option prompts arrows to be automatically displayed to show the direction of radiation leaving the surface. In our example, the radiation occurs on the downside for S_{ext} and the upside for S_{int}.
With all these tools available to us, evaluating the view factor using an expression similar to (1) is straightforward. For example,

F_{S_{\textrm{ext}}-S_{\textrm{ext}}} = \frac{\int_{S_{\textrm{ext}}} \operatorname{radopd}(0,\; I_{S_{\textrm{ext}}})\, da}{S_{\textrm{ext}}} \qquad (3)

is evaluated in COMSOL Multiphysics syntax with intop_ext(comp1.ht.radopd(0,ext))/intop_ext(1). Note that radopd is used because the radiation occurs on the downside of S_{\textrm{ext}}; the first argument of radopd is 0 for the same reason.
Similarly,

F_{S_{\textrm{int}}-S_{\textrm{ext}}} = \frac{\int_{S_{\textrm{ext}}} \operatorname{radopd}(I_{S_{\textrm{int}}},\; 0)\, da}{S_{\textrm{int}}} \qquad (4)

is evaluated with intop_ext(comp1.ht.radopd(int,0))/intop_int(1), where radopd is again used because the radiation is received on the downside of S_{\textrm{ext}}. But this time, the second argument of radopd is 0 because the radiation leaves the upside of S_{\textrm{int}}.
I’ve gathered all the results in a table:
View factor | Analytical value | Computed value | Error
---|---|---|---
F_{S_{\textrm{int}}-S_{\textrm{int}}} | 0 | 0 | 0
F_{S_{\textrm{int}}-S_{\textrm{ext}}} | 1 | 0.998 | 2\cdot 10^{-3}
F_{S_{\textrm{ext}}-S_{\textrm{ext}}} | 0.91 | 0.9102 | 2\cdot 10^{-4}
F_{S_{\textrm{ext}}-S_{\textrm{int}}} | 0.09 | 0.09 | 1\cdot 10^{-6}
Thanks to the radiation operators, we were able to retrieve the geometrical view factor.
With COMSOL Multiphysics 5.0, it is possible to compute geometrical view factors between diffuse surfaces, thanks to dedicated operators. This offers a solution to requests we have received since the surface-to-surface radiation features were released. But these operators can do much more. They are flexible enough to provide all terms of the surface-to-surface radiation equation, and they may also be used to formulate equations for other quantities.
Whether traveling a far distance between work and home or simply caught in a continuous wave of traffic, many of us spend a great deal of time on the road each day. On days where your evenings are free, this might be thought of as a time to relax and listen to a favorite CD or radio station. However, on days where your schedule is a bit more hectic, the lengthy commute can become a source of frustration as you race to make it on time to your destination. No matter the situation, an element that you rely on to make your car ride more enjoyable is comfort within your vehicle.
While on the road, you may have noticed the sensation of vibrations that can sometimes arise from your seat. The root of these vibrations can be traced to a number of sources, including road conditions, speed, engine vibrations, and the design of the vehicle’s seats. While not only a source of discomfort, prolonged exposure to such oscillations can be hazardous to one’s health, potentially resulting in fatigue or pain. With the growing concern behind the impact of these vibrations, some vehicles have begun to implement vibration isolators within the design of their seats in an effort to minimize this effect.
A vehicle’s seat can be a source of vibrations. (“Sedile in pelle di un’Alfa Romeo Giulietta” by Pava — Own work. Licensed under Creative Commons Attribution Share-Alike 3.0, via Wikimedia Commons).
With this biomechanical model built using the Multibody Dynamics Module, we can simulate the human body’s response to such vibrations, thereby helping to optimize the design of vibration isolators as well as analyze ride quality in vehicles.
The Biomechanical model of the human body in a sitting posture makes this analysis possible. An important element in the design of this model is addressing the complexity of the human body and how the different body parts are connected. In this example, we focus on the vibrational impact in six different areas of the body: the head, the torso, the viscera, the pelvis, the thighs, and the legs. Each element is treated as a lumped mass and defined as a rigid body.
To approximate the connections between varying body parts, we apply translational and rotational dampers and springs on the relative motion between the two connected body parts — a connection that is modeled with the elastic version of a fixed joint. This provides the translational and rotational stiffness and damping values between the connected body parts.
A fixed joint is used to model the connections between the body parts directly touching the seat (the legs, the thighs, and the pelvis) and the seat itself, which is the source of the vibration. To model the seat’s cushioning effect, elasticity on the joints is included when needed.
Note that rather than modeling the seat itself, a base motion node is used where the input excitation is 1 m/s^{2} in the vertical direction at three different locations.
We begin with an eigenfrequency analysis designed to determine the damped and undamped natural frequencies of vibration.
The figure below illustrates a rotational eigenmode on the undamped model. Considerable rotational movement is noted in the head and the torso. In comparison, little movement is found in other parts of the model.
We then shift our analysis to the translational eigenmode of the damped model. In the first major translational eigenmode, the results indicate a downward movement in the head, the pelvis, and the viscera, with no significant movement noted in the other body parts. This is illustrated in the figure below.
The second major translational eigenmode (shown below) notes displacement in the downward direction of the head, the torso, and the pelvis, whereas the viscera moves in the upward direction.
This example then features a frequency response analysis around the natural frequencies to analyze three different elements: vertical transmissibility, rotational transmissibility, and apparent mass.
Let’s first focus our attention on vertical transmissibility. Vertical transmissibility refers to the ratio between the head’s vertical acceleration and the seat’s input acceleration. When compared with the excitation frequency, the results show that the primary resonance is visible in the range of 4-6 Hz and the secondary resonance in the range of 8-10 Hz.
Vertical transmissibility vs. excitation frequency.
Rotational transmissibility is the ratio between the head’s angular acceleration and the seat’s input acceleration. With regards to this form of transmissibility, it is important to avoid high values as this can enhance discomfort as well as affect one’s vision. The plot below depicts its variation with the excitation frequency.
Rotational transmissibility vs. excitation frequency.
Finally, apparent mass refers to the ratio of the seat’s force to the seat’s input acceleration. Rather than depicting the end point characteristics of the model, this element conveys the driving point characteristics.
Apparent mass vs. excitation frequency.
In this blog post, we have introduced you to a biomechanical model of the human body, particularly highlighting its application within the automotive industry. We have showed you how, with the Multibody Dynamics interface, you can model various parts of the human body — and their connections — as well as analyze its dynamic response to whole body vibrations.
The piezoelectric modeling interface seeks to:
This will allow you to successfully simulate piezoelectric devices as well as easily extend the simulation by coupling it with any other physics.
You may already be familiar with the three different modules that can be used for simulating piezoelectric materials:
Each of these modules gives you a predefined Piezoelectric Devices interface that you can use for modeling systems that include both piezoelectric and other structural materials. The Acoustics Module offers two predefined interfaces, namely the Acoustic-Piezoelectric Interaction, Frequency Domain interface and the Acoustic-Piezoelectric Interaction, Transient interface. These two allow you to model how piezoelectric acoustic transducers interact with the fluid media surrounding them.
The Piezoelectric Devices interface is available in the list of structural mechanics physics interfaces.
The Acoustic-Piezoelectric Interaction, Frequency Domain and the Acoustic-Piezoelectric Interaction, Transient interfaces are available in the list of acoustics physics interfaces.
These predefined multiphysics interfaces couple the relevant physics governing equations via constitutive laws or boundary conditions. Thus, they offer a good starting point for setting up more complex multiphysics problems involving piezoelectric materials. The new piezoelectric interfaces in COMSOL Multiphysics version 5.0 provide a transparent workflow to visualize the constituent physics interfaces. There is also a separate Multiphysics node that lists how the constituent physics interfaces are connected to each other.
Let’s find out how these multiphysics interfaces are structured.
Upon selecting the Piezoelectric Devices multiphysics interface, you see the constituent physics: Solid Mechanics and Electrostatics. You also see the Piezoelectric Effect branch listed under the Multiphysics node, which controls the connection between Solid Mechanics and Electrostatics.
Part of the model tree showing the physics interfaces and multiphysics couplings that appear upon selecting the Piezoelectric Devices interface.
By default, all modeling domains are assumed to be made of piezoelectric material. If that is not the case, you can deselect the non-piezo structural domains from the branch Solid Mechanics > Piezoelectric Material. These domains then get automatically assigned to the Solid Mechanics > Linear Elastic Material branch. This process ensures that all parts of the geometry are marked as either piezoelectric or non-piezo structural materials and that nothing is accidentally left undefined.
If you are working with other material models that are available with the Nonlinear Structural Materials Module, such as hyperelasticity, you can add that as a branch under Solid Mechanics and assign the relevant parts of your modeling geometry to this branch. The Solid Mechanics node gives us full flexibility to set up a model that involves not only piezoelectric material but also linear and nonlinear structural materials. The best part is that if these materials are geometrically touching each other, the COMSOL software will automatically take care of displacement compatibility across them.
If some parts of the model are not solid at all, like an air gap, you can deselect them in the Solid Mechanics node.
From the Solid Mechanics node, you will also assign any sort of mechanical loads and constraints to the model.
The Electrostatics node allows you to group together all the information related to electrical inputs to the model. This would include, for example, any electrical boundary conditions such as voltage and charge sources. By default, any geometric domain that has been assigned to the Solid Mechanics > Piezoelectric Material branch also gets assigned to the Electrostatics > Charge Conservation, Piezoelectric branch. If you have any other dielectric materials in the model that are not piezoelectric, you could assign them to the Electrostatics > Charge Conservation branch.
The Multiphysics > Piezoelectric Effect branch ensures that the structural and electrostatics equations are solved in a coupled fashion within the domains that are assigned to the Solid Mechanics > Piezoelectric Material (and also the Electrostatics > Charge Conservation, Piezoelectric) branch.
The multiphysics coupling is implemented using the well-known coupled constitutive law for piezoelectric materials. Note that the Electrostatics > Charge Conservation, Piezoelectric branch is mainly used as a placeholder for assigning geometric domains that belong to the piezoelectric material model. This helps the Multiphysics > Piezoelectric Effect branch understand whether a domain assigned to the Electrostatics interface is piezoelectric or not.
Note: For an example of working with the Piezoelectric Devices interface, check out the tutorial on modeling a Piezoelectric Shear Actuated Beam.
It is also possible to add effects of damping or other material losses in dynamic simulations. You can do so by adding one or more of the following subnodes under the Solid Mechanics > Piezoelectric Material branch:
Damping and losses that can be added to a piezoelectric material.
Subnode Name | When to Use the Subnode |
---|---|
Mechanical Damping | Allows you to add purely structural damping. Choose between using Loss Factor (in frequency domain) or Rayleigh damping (for both frequency and time domains) models. |
Coupling Loss | Allows you to add electromechanical coupling loss. Choose between using Loss Factor (for frequency domain) or Rayleigh damping (for both frequency and time domains) models. |
Dielectric Loss | Allows you to add dielectric or polarization loss. Choose between using Loss Factor (for frequency domain) and Dispersion (for both frequency and time domains) models. |
Conduction Loss (Time-Harmonic) | Allows you to add electrical energy dissipation due to electrical resistance in a harmonically vibrating piezoelectric material (for frequency domain only). |
Note: For an example of adding damping to piezoelectric models, check out the tutorial on modeling a Thin Film BAW Composite Resonator.
Additional damping also takes place due to the interaction between a piezoelectric device and its surroundings. This can be modeled in greater details using the Acoustic-Piezoelectric Interaction interfaces.
Upon selecting one of the Acoustic-Piezoelectric Interaction interfaces, you see the constituent physics: Pressure Acoustics, Solid Mechanics and Electrostatics. You also see the Acoustic-Structure Boundary and Piezoelectric Effect branches listed under the Multiphysics node.
Part of the model tree showing the physics interfaces and multiphysics couplings that appear when selecting the Acoustic-Piezoelectric Interaction, Frequency Domain and the Acoustic-Piezoelectric Interaction, Transient interfaces.
By default, all modeling domains are assigned to the Pressure Acoustics interface as well as the Solid Mechanics > Piezoelectric Material and Electrostatics > Charge Conservation, Piezoelectric branches. Note that the Pressure Acoustics interface is designed to simulate acoustic waves propagating in fluid media.
Since COMSOL Multiphysics cannot know a priori which parts of the modeling geometry belong to the fluid domain and which ones are solids, you are expected to provide that information by deselecting the solid domains from the Pressure Acoustics, Frequency Domain (or Pressure Acoustics, Transient) branch and deselecting the fluid domains from the Solid Mechanics and Electrostatics branches.
Once you do that, the boundaries at the interface between the solid and fluid domains are detected and assigned to the Multiphysics > Acoustic-Structure Boundary branch. This branch controls the coupling between the Pressure Acoustics and Solid Mechanics physics interfaces. It does so by considering the acoustic pressure of the fluid to be acting as a mechanical load on the solid surfaces, while the component of the acceleration vector that is normal (perpendicular) to the same surfaces acts as a sound source that produces pressure waves in the fluid.
Note: For an example of Acoustic-Piezoelectric Interaction, check out the tutorial on modeling a Tonpilz Transducer.
The transparency in the workflow as discussed above also paves the way for adding more physics and creating your own multiphysics couplings.
For example, let’s say there is some heat source within your piezoelectric device that produces nonuniform temperature distribution within the device. In order to model this, you can add another physics interface called Heat Transfer in Solids in the model tree and prescribe appropriate heat sources and sinks to find out the temperature profile. You could then add a Thermal Expansion branch under the Multiphysics node to compute additional strains in different parts of the device as a result of the temperature variation.
The Multiphysics > Thermal Expansion branch couples the Heat Transfer in Solids and the Solid Mechanics interfaces. It might also be possible that the piezoelectric material properties have a temperature dependency. You could represent these properties as functions of temperature and let the Multiphysics > Temperature Coupling branch pass on the information related to temperature distribution in the modeling geometry to the Solid Mechanics or even the Electrostatics branches, thereby producing additional multiphysics couplings.
Part of the model tree showing the physics interfaces and multiphysics couplings that you can use to combine piezoelectric modeling with thermal expansion and temperature-dependent material properties.
Similar to adding more physics and multiphysics couplings, it is also possible to disable one or more multiphysics couplings — or even any of the physics interfaces shown in the model tree. This could be immensely helpful for debugging large and complex models.
The model tree on the left shows a scenario where the Piezoelectric Effect multiphysics coupling is disabled. The model tree on the right shows a scenario where the Electrostatics physics interface is disabled.
For example, you can disable the Multiphysics > Piezoelectric Effect branch and solve for the Solid Mechanics and Electrostatics physics interfaces in an uncoupled sense. You could also solve a model by disabling either the Solid Mechanics or the Electrostatics interface.
Running such case studies could help in evaluating how the device would respond to certain inputs if there were no piezoelectric material in place. This approach could also be used to evaluate equivalent structural stiffness or equivalent capacitance of the piezoelectric material.
You could also start by adding only one of the constituent physics, say Solid Mechanics, and after performing some initial structural analysis, go ahead and add the Electrostatics physics interface to the model tree once you are ready to add the effect of a piezoelectric material.
In that case, when you add the Electrostatics physics on top of the existing Solid Mechanics physics in the model tree, the COMSOL software will automatically add the Multiphysics node. From there, you can manually add the Piezoelectric Effect branch. Note that if you take this approach of adding the constituent physics interfaces and multiphysics effect manually, you would also have to manually add the piezoelectric modeling domains to the Solid Mechanics > Piezoelectric Material, the Electrostatics > Charge Conservation, Piezoelectric, and the Multiphysics > Piezoelectric Effect branches.
In a similar fashion, you can continue to add more physics interfaces and multiphysics couplings to your model based on your needs.
To learn more about modeling piezoelectric devices in the COMSOL software environment, you are encouraged to refer to these resources: