To measure blood flow in a painless and noninvasive way for patients, medical professionals can use magnetic flow meters, which rely on electromotive force (EMF). In a flow meter, external coils generate a magnetic field and noncontact electrodes measure the induced EMF. When a patient moves during the measurement process (even by merely taking a breath), blood vessels can move, which affects the flow meter’s sensitivity. This phenomenon is an important point of analysis in cardiac and thoracic medicine.
Magnetic flow meters, which rely on coils and electrodes, are a noninvasive way to measure flow in a patient’s blood vessels.
Researchers from ABB Corporate Research in India built a multiphysics model of this process that includes the effects of fluid-structure interaction (FSI) and electromagnetics. Their aim was to understand how blood vessel movement influences flow meter sensitivity by comparing the meter’s performance when blood vessels are displaced versus in their normal positions.
The research team modeled a blood vessel as a pipe, but with the appropriate biological material properties. They coupled multiple physical effects via built-in physics interfaces in the COMSOL Multiphysics® software. The Laminar Flow interface was used to model the blood flow through the vessel. To account for the magnetic field generated by the coils, as well as the EMF induced by the blood flow and magnetic field, they used the Magnetic and Electric Fields interface.
A schematic of the magnetic flow meter model. Image by S. Dasgupta, K. Ravikumar, P. Nenninger, and F. Gotthardt and taken from their COMSOL Conference 2016 Bangalore paper.
The researchers also used the Structural Mechanics Module, an add-on product to COMSOL Multiphysics, to model the vessel displacement during patient movement. They coupled this analysis with the CFD Module to account for FSI, including how the vessel displacement affects blood flow and how the fluid pressure affects the blood vessel.
This model was used to analyze the sensitivity of the magnetic flow meter when blood vessels are displaced and when they are in a normal position.
The researchers compared the results for a blood vessel in a normal position and displaced by 5 cm toward the upper coil. The simulation results show contour plots of the velocity and magnetic flux density across the pipe (i.e., blood vessel) cross section. Other results show the induced electric potential from the flow and magnetic field interface and the potential distribution across the pipe diameter.
The velocity (top left) and magnetic flux density (bottom left) across the pipe as well as the induced electric potential of the cross section (top right) and length (bottom right) of the pipe. Images by S. Dasgupta et al. and taken from their COMSOL Conference 2016 Bangalore paper.
The plots indicate an increase in the sensitivity of the flow meter between the nondisplaced and displaced blood vessels. The team determined that the increase is not due to velocity, since the velocity profiles are the same for both scenarios. Instead, the reason for the increase is that the displaced vessel shifted to a higher magnetic field zone, and the magnetic flux density in the vessel increased as it moved toward the coil.
The researchers from ABB concluded from their simulation results that blood vessel displacement is a potential issue for medical uses of magnetic flow meters. To address this concern, they proposed that a patient’s body movement could be restricted during these procedures or that a breath-synchronous magnetic field could be generated to compensate for the sensitivity changes.
While these simulations proved useful for the medical field, further studies can account for more real-world conditions, such as pulsatile blood flow as well as variations in vessel properties and body locations.
For the team, this research confirmed that COMSOL Multiphysics can be used to analyze how different phenomena — including fluid flow, structural mechanics, and electromagnetics — interact in a complex application.
Check out the full paper from the COMSOL Conference 2016 Bangalore (it won a Best Paper award!): “Measurement of Blood Flowrate in Large Blood Vessels Using Magnetic Flowmeter”
Fluid-filled pipes, also referred to as fluid-carrying structures, have a large number of industrial applications, such as gas pipelines, automotive mufflers, aircraft fuselage, and underwater pipelines. The size of a pipeline system can range from centimeters to kilometers.
Common applications of fluid-filled pipes. Left: A submerged pipeline. Image by Grand Canyon National Park. Licensed under CC BY 2.0, via Flickr Creative Commons. Center: A model of an aircraft fuselage. Right: An automotive muffler. Image by lw5315us. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
Large pipe systems are difficult to model with simulation software, and because the acoustic and elastic modes don’t exist independently, analyzing them individually doesn’t make things easier. Therefore, we need to understand the effect of fluid loading on the response of the pipe.
At low frequencies, the fluid loading term tends to be small, so the response of the system is dominated by the dynamics of the structure/pipe (Ref. 2). Fluid loading changes the vibrational characteristics of the structure in contact, and consequently, the acoustic radiation. Fluid-loading effects are exhibited strongest by structures in contact with denser fluids, since the fluid forces are proportional to the mean density of the fluid.
Generally, systems can be described by distributed mass and stiffness. There are infinite degrees of freedom (DOFs) for a continuous system, which results in infinite modes. For a finite frequency range, there is a finite number of modes that can be analyzed individually using modal decomposition. The motion of such continuous systems is described by partial differential equations (PDEs) from force/acceleration and force/deformation relations. Examples of such systems are strings, rods, and shafts (second-order PDEs) as well as beams (fourth-order PDEs) and fluid-filled pipes.
The solution to such equations can be visualized using two approaches: a modal approach, which describes the response as a sum of discrete modes, and a wave approach, which describes it in terms of waves propagating along the structure.
Suppose we’re interested in modeling the dynamics of a large system at higher frequencies using the finite element method (FEM). To capture its behavior, each wavelength must be discretized with a sufficient number of elements, which can result in a large number of DOFs and, in turn, more memory and computation time. We can tackle this issue by using wave modes, i.e., representing the system as a guided-wave problem, since the waves travel long distances before they decay.
Another advantage of a wave-based approach is direct access to wave properties, which are important for studying structure-borne sound, computing the frequency response of finite-length waveguides, and computing the energy transmission through structures. You represent these wave modes through dispersion curves, which provide a relationship between the wave number and frequency.
Dispersion curves are essentially separate lines that each represent an individual mode. The only prerequisite for the wave-based method is that the cross section of the system is constant (there is no limit on the length). For modeling long systems, such as fluid-carrying pipes, beams, or rail tracks, the wave-based approach is very useful.
Waves propagate in time and space. The spatial variation is described by a quantity representing phase change per unit distance and is equal to ω/c. This is the wave number, denoted by k. One wavelength corresponds to an x-dependent phase difference of 2π: kλ = 2π.
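As a quick sanity check of these relations, the short script below (with an illustrative frequency and sound speed, not values from the text) computes k and λ for a 1 kHz tone in air and confirms that kλ = 2π:

```python
import math

# Wave number and wavelength of a 1 kHz sound wave in air.
# The frequency and sound speed are illustrative values.
c = 343.0             # speed of sound in air, m/s
f = 1000.0            # frequency, Hz
omega = 2 * math.pi * f

k = omega / c         # wave number: phase change per unit distance, rad/m
lam = c / f           # wavelength, m

print(round(k, 2), round(lam, 3))   # 18.32 rad/m, 0.343 m
```

Multiplying the two results recovers the x-dependent phase difference of 2π over one wavelength.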
When a system is excited with a force at one end, a large number of waves start to propagate toward the other side. Each wave travels at its phase velocity, which can be independent of frequency (nondispersive waves; e.g., longitudinal and shear waves) or frequency dependent (dispersive waves; e.g., bending waves). All of the waves travel together under an envelope. The speed at which the energy is transported is given by the group velocity, the velocity of the envelope, defined as c_{g} = ∂ω/∂k.
Schematic of a dispersion curve.
Dispersion curves explain the dynamics of a coupled system. In a fluid-filled pipe where waves can travel in fluid as well as in the pipe wall, the dispersion curves provide a common wave number or wave mode that propagates into the system as a whole. Dispersion curves also provide insight into what happens inside the system at different frequencies. Let’s see how to compute dispersion curves analytically.
Consider a linear conservative system that is uniform and unbounded in one direction (z). The equation of free vibration can be written as:
(1)

μ(z) ∂^{2}w/∂t^{2} + L(z)w = 0

where μ(z) is the mass density and L(z) is the stiffness operator, a spatial differential operator that depends on ∂/∂z, ∂^{2}/∂z^{2}, and so on.
The exact form varies. In general, w might be a function of 1, 2, or 3 space variables depending on the problem (such as a beam, plate, or acoustic cavity). Under the passage of a time-harmonic wave, the solution of Eq. (1) is w(z,t) = We^{i(ωt – kz)}, where W is the amplitude of the wave, ω is the circular frequency, and k is the wave number.
Substituting w(z,t) into Eq. (1) provides the dispersion/characteristic equation. Its solution is a set of wave numbers, which come in pairs and represent waves traveling in the ±z direction. Wave numbers can be characterized as:

- Purely real, representing propagating waves
- Purely imaginary, representing evanescent waves that decay without propagating
- Complex, representing waves that decay as they propagate
We may want to obtain basic wave modes (such as longitudinal, shear, and bending) of a structure analytically. The systems considered here have a constant cross section and wave propagation in the positive x direction. For computing longitudinal motion, consider a uniform elastic bar with density ρ and Young’s modulus E. The equation of motion for free vibration is given by E ∂^{2}u/∂x^{2} = ρ ∂^{2}u/∂t^{2}. Using the same principle as for time-harmonic motion, we get the dispersion relation k = ω/c_{L}, with phase velocity c_{L} = √(E/ρ) and group velocity c_{g} = c_{L}. Since c_{L} is independent of ω and k, all harmonic waves travel at the same speed. The dispersion relation for shear waves is of the form k = ω/c_{S}, with c_{S} = √(G/ρ), where G is the shear modulus of the material.
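A brief numerical check of these relations, using the steel properties that appear in the table later in this post; deriving the shear modulus via G = E/(2(1 + ν)) is the standard isotropic relation and is an addition here:

```python
import math

# Nondispersive wave speeds in steel: c_L = sqrt(E/rho) for longitudinal
# waves and c_S = sqrt(G/rho) for shear waves. E, nu, and rho match the
# material table in the post; G is derived from E and nu.
E = 2e11        # Young's modulus, N/m^2
nu = 0.3        # Poisson's ratio
rho = 7800.0    # density, kg/m^3

G = E / (2 * (1 + nu))       # shear modulus, ~76.9 GPa
c_L = math.sqrt(E / rho)     # longitudinal wave speed, ~5064 m/s
c_S = math.sqrt(G / rho)     # shear wave speed, ~3140 m/s

# k = omega/c is linear in omega for both wave types, so phase and group
# velocities coincide: all harmonic waves travel at the same speed.
print(round(c_L), round(c_S))
```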
To compute the bending waves, we consider the Euler-Bernoulli and Timoshenko theories, which are based on certain assumptions. The Euler-Bernoulli theory assumes that the cross section of the beam remains plane and perpendicular to the neutral axis during bending, ignoring rotary inertia and shearing effects. This simplifies many terms and yields a fourth-order partial differential equation, EI ∂^{4}w/∂x^{4} + ρA ∂^{2}w/∂t^{2} = 0, which can be easily solved.
The only problem with this assumption is that it is not valid at high frequencies, when the wavelength becomes comparable to the thickness of the structure. The dispersion relation is given by k_{b} = (ω^{2}ρA/EI)^{1/4}, corresponding to phase velocity c_{b} = ω/k_{b} = √ω (EI/ρA)^{1/4} and group velocity c_{gb} = 2c_{b}. The phase speed depends on frequency, so the bending waves are dispersive. A wave packet spreads out as it travels because its higher-frequency components propagate faster.
Other theories, such as Timoshenko, incorporate shear effects and provide more accurate behavior at higher frequencies. For complicated cross sections, analytical solutions are not feasible.
The acoustic pressure field inside a cylindrical duct, which satisfies the acoustic wave equation, is given by:

p(r,θ,z,t) = P_{n} J_{n}(k_{r}r) cos(nθ) e^{i(ωt – k_{z}z)}

where n is the circumferential mode order, P_{n} is the amplitude coefficient, J_{n}(k_{r}r) is the Bessel function of the first kind, k_{z} is the out-of-plane wave number, and θ is the circumferential angle.
The radial wave number k_{r} is determined by the boundary condition for a rigid wall; i.e., J_{n}'(k_{r}r)|_{r=a} = 0, where J_{n}'(k_{r}r) is the derivative of the Bessel function with respect to r. For a given n, this equation has multiple roots, each corresponding to a radial mode. The out-of-plane wave number is then computed using the relation k_{z}^{2} + k_{r}^{2} = k^{2}.
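These relations are easy to evaluate numerically. The sketch below finds the first root of J_{n}' for a few circumferential orders and checks whether each mode propagates at 3 kHz; the duct radius a = 0.0475 m is an assumption (the outer radius minus the wall thickness from the table below), with air at c = 343 m/s:

```python
import numpy as np
from scipy.special import jnp_zeros

# Rigid-wall condition Jn'(k_r * a) = 0 gives k_r = j'_{n,m}/a, where
# j'_{n,m} is the m-th zero of the Bessel-function derivative.
a, c = 0.0475, 343.0                # assumed duct radius (m), sound speed (m/s)
f = 3000.0                          # evaluate the modes at 3 kHz
k = 2 * np.pi * f / c               # free-field wave number

cuton = {}
for n in (0, 1, 2):
    k_r = jnp_zeros(n, 1)[0] / a    # first radial root for order n
    kz_sq = k**2 - k_r**2           # from k_z^2 + k_r^2 = k^2
    cuton[n] = c * k_r / (2 * np.pi)  # cut-on frequency of the mode
    print(n, round(cuton[n]), "propagating" if kz_sq > 0 else "evanescent")
```

The first two cut-on frequencies this produces (~2100 and ~3500 Hz) are consistent with the values quoted for the rigid-walled duct results later in the post.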
Our fluid-filled pipe is linearly elastic and homogeneous. The fluid is purely acoustic, which means it’s compressible, inviscid, and barotropic. The pipe’s modes are computed individually. For the numerical example, the pipe material is steel and the fluid is air. Material properties are given by:
| Material Property | Value |
|---|---|
| Young's modulus, E | 2e11 N/m^{2} |
| Density of steel, ρ_{s} | 7800 kg/m^{3} |
| Poisson's ratio, ν | 0.3 |
| Density of air, ρ_{f} | 1.25 kg/m^{3} |
| Speed of sound in air, c | 343 m/s |
| Outer radius of pipe, r_{o} | 0.05 m |
| Wall thickness, t | 0.0025 m |
We use the Solid Mechanics interface and Pressure Acoustics, Frequency Domain interface to solve the model. We also use the mode analysis study type, where the modes or out-of-plane wave numbers are computed at each frequency. Mode analysis assumes that the mode is harmonic in space; i.e., u(x,y,z) = u(x,y)e^{ik_{z}z}. For free vibrations, this equation can be solved at a given frequency for the out-of-plane wave numbers, k_{z}.
Certain discrete values — eigenvalues — correspond to the wave numbers of the propagating or evanescent modes. The mode analysis study step triggers the solver that can find these wave numbers and the corresponding mode shapes. A parametric sweep of frequency computes the wave numbers at different frequencies.
Settings for computing out-of-plane wave numbers.
The real values of the wave numbers are plotted, since they correspond to propagating wave modes. The cross-sectional shapes are also plotted in terms of total displacement. To read the dispersion curve, we follow the lines (see below). Comparing the bending, shear, and longitudinal modes with analytical solutions makes them easy to identify. We also see that a mode cuts on at around 6000 Hz and propagates from there. Sometimes, the behavior of a mode changes at high frequencies (a bending mode can convert into a shear, longitudinal, or extensional mode). Such behavior can be easily captured with dispersion curves.
Dispersion curves for a hollow cylindrical pipe (left) and rigid-walled acoustic duct (right).
Pipe cross-sectional mode shapes.
The dispersion curves for a cylindrical rigid-walled pipe can be analyzed using the same analogy. The first 6 acoustic modes (see graph above to the right) cut on at around 2000, 3500, 4300, 4800, and 6100 Hz, respectively. The modes are compared with the analytical solutions and the cross-sectional shapes are plotted for the cylindrical duct, also showing the pressure distribution across the duct cross section.
Pressure distribution at different modes.
The wave numbers computed using the COMSOL Multiphysics® software are compared with the analytical wave numbers of the hollow pipe and rigid-walled cylindrical duct, respectively. Results show good agreement, but there are clear differences observed for the bending mode. Since the analytical theory is based on assumptions, it cannot be used for high frequencies. The reliability of numerical results lies in the proper discretization of the domain under study.
Note that a sufficient number of elements per wavelength (~6–8 quadratic elements) must be used to capture the wavelength accurately. Another advantage of the numerical approach is that analytical solutions are difficult to obtain for complex cross sections (such as multilayered pipes or complex cross-sectional shapes). In the plot above to the left, apart from the regular wave modes (i.e., bending, longitudinal, and shear), many other modes are observed in the numerical solutions, and their number increases with the frequency range. These “extra” modes (such as the ring mode) also have physical significance, and they are extremely difficult to obtain via analytical solutions. The system’s overall dynamic response is the superposition of all of the modes.
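As an illustration of this meshing rule (the frequency range and sound speed here are assumed values, not taken from the study):

```python
# Rule-of-thumb mesh sizing: ~6 quadratic elements per wavelength.
# The shortest wavelength in the frequency range sets the largest
# allowable element size.
c = 343.0              # speed of sound in air, m/s (assumed medium)
f_max = 10000.0        # highest frequency to resolve, Hz (assumed)
n_per_wavelength = 6   # quadratic elements per wavelength

lam_min = c / f_max                  # shortest wavelength, ~34 mm
h_max = lam_min / n_per_wavelength   # maximum element size, ~5.7 mm

print(round(lam_min * 1000, 1), round(h_max * 1000, 2))  # in mm
```

Doubling the frequency range halves the allowable element size, which is why high-frequency FEM models grow so quickly in DOFs.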
At higher frequencies, the system’s behavior becomes more complex. Modes overlap with one another, and it’s extremely difficult to understand the behavior of each individual mode. Again, dispersion curves come to the rescue.
Now, we compute the wave numbers for the coupled fluid-filled pipe system. Using the method described earlier, the wave modes are computed with the mode analysis solver in COMSOL Multiphysics for both air and water as the internal fluid.
Dispersion curves for an air-filled (left) and water-filled (right) steel pipe.
The results for the air-filled steel pipe are compared with the uncoupled acoustic and elastic modes. Since the fluid is light, it has minimal effect on the vibrations of the coupled system.
The ring mode can be seen at low frequencies where the pipe resonates as a ring. However, due to Poisson’s effect, there is a slight coupling between the elastic and acoustic parts. As the frequency increases, the motions of the elastic and acoustic parts become strongly coupled, highlighted by a rapid increase in radial vibrations. For instance, branch 1 corresponds to the longitudinal mode and branch 2 corresponds to the coupled mode. Although the coupling between air and steel is weak, at 6000 Hz, the extensional mode converts into an acoustic mode.
Cross-sectional mode shapes for the coupled system are plotted below. They correspond to the displacements in the pipe and pressure field in the fluid domain.
Strong coupling behavior is seen in the plot above for a water-filled pipe. Branch 1 corresponds to the acoustic wave in a rigid-walled cylindrical duct (purely acoustic mode). Considering branch 2, the pipe behaves in vacuo at low frequencies. At high frequencies, the fluid and pipe motion become strongly coupled and the mode converts into a second acoustic mode. Branch 3 originates (cuts on) at around 10,000 Hz. The mode seems to follow the trend of an extensional structural mode and at high frequencies, it again converts into a rigid-walled acoustic mode. We can analyze other branches similarly.
Coupled elastoacoustic wave mode shapes, with air as the internal fluid.
Furthermore, dynamic analysis of the system using dispersion curves can be done at high frequencies. For finite-length systems, these propagation constants, or wave numbers, can be used to compute the forced response with significant computational efficiency.
Suppose you want to reduce the noise radiation from your system. A few easy techniques can be employed, such as using a multilayered/sandwich pipe made of soft rubber material enclosed by two stiff skins or a complicated cross section (maybe elliptical). Such complex configurations can easily be tested using dispersion curves.
However, the analysis must have:
In this blog post, we have discussed how dispersion curves are computed for an infinite-length multiphysics system and how they can be further analyzed for structural mechanics and pressure acoustics. The analysis is performed using the mode analysis solver. In an upcoming blog post, we will demonstrate how to use wave modes to compute the forced response of finite length waveguides.
C.R. Fuller and F.J. Fahy, “Characteristics of wave propagation and energy distributions in cylindrical elastic shells filled with fluid,” Journal of Sound and Vibration, vol. 81, no. 4, pp. 501–518, 1982.
Imagine that you’re at a busy cocktail party on New Year’s Eve. Music and laughter fill the air in a cacophony of sound. You and a friend are chatting in the middle of the crowd, waiting in anticipation for the countdown to begin.
Now, close your eyes and think about trying to listen to your friend.
How did you pick your friend’s voice out from the mixture of noises around you?
The answer to this question lies within the cocktail party effect, a concept popularized by Colin Cherry in 1953. The cocktail party problem involves hearing and focusing on a sound of interest, like a speech signal, in an environment with competing sounds.
To do so, you need to overcome two challenges:
These challenges are exacerbated when the party becomes larger and there are more competing sound sources. As a result, it is difficult to determine the speech signal of interest, recover it from the blending of sounds around you, and then pay attention to it. Despite the challenge, many people are able to naturally solve this problem without thinking much about it.
So how do we do it? Let’s take a look…
A main element at play here is that our brains are able to use grouping cues to determine which sounds go together. For instance, individual sounds often have common amplitude changes across their different frequencies. This means that when we come across sounds at multiple frequencies that stop and start at the same time, our brains interpret these as belonging to the same sound source. Additionally, when frequencies in a sound mix have a harmonic relationship, they are often heard as one sound, since it is likely that they are related to one another.
Fluctuations in natural sounds also make it easier to differentiate between the sounds. Although different sounds can obscure each other at times, when they fluctuate, we get a glimpse of the underlying sounds in the noisy environment. Our auditory system can then fill in the blanks for the obscured sounds by accurately grouping the obscured bits.
Press play to be transported to a noisy cocktail party. At first, you can only hear a melange of sound. Then, you run into an old friend, who starts talking to you. As you focus on what your friend is saying, you are eventually able to filter out the other sounds of the party, effectively turning them into background noise.
Another helpful way our brains solve this problem is by using our understanding of various classes of sounds. Going back to our cocktail party example, if your friend is speaking, you’ll have a better chance of hearing them if they are forming coherent sentences than if they are speaking gibberish. In addition, your perception of sound is more accurate if your friend has an accent that is familiar to you.
Localization and visual cues also help us direct our attention to the correct auditory source. If a target sound is in a different location than undesired sounds, for example, we can more easily differentiate it using our spatial hearing, and as a result, the rest becomes background noise.
While people with normal hearing can typically solve the cocktail party problem on their own, people with impaired hearing may struggle in loud situations. To learn more, we reached out to Abigail Kressner of the Technical University of Denmark. Kressner mentions that one generally accepted theory on why hearing-impaired people struggle in loud situations is that it is due to a “combination of audibility (i.e., whether the signals are loud enough for the hearing-impaired person to hear them) and reduced temporal resolution.”
Kressner elaborates by saying that these issues may “influence a hearing-impaired listener’s ability to segregate different streams of sound within a complex acoustic scene like a cocktail party and that they also may have reduced attentional segregation.” Those who are hearing impaired are also less able to “listen in the dips” between fluctuations of competing noise sources. As we touched on earlier, these fluctuations in the noise provide glimpses of the target speech sound for those with normal hearing, and therefore, they provide clues for understanding the speech. Replicating this ability in machine algorithms for hearing aids is a challenge for hearing aid designers.
The first objective in designing hearing aids is, of course, to make sounds audible for hearing-aid users. But after meeting that requirement, there are many additional features that can be added, including:
Kressner notes that these approaches both encounter the challenge of distinguishing between sound signals and finding the one the listener wants to hear. For instance, you may want to listen to a friend talking in front of you or someone on the other side of the room who has just called your name.
A hearing aid. Image by Udo Schröter — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
How will the hearing aid device know which signal the user wants to listen to? The COCOHA project thinks brain signals (EEG signals) are the answer. This solution still has a lot of work ahead of it, though, including more research into decoding cognitive attention and then using this information to adjust the device and suppress unwanted signals.
Let’s move away from our imaginary cocktail party and instead take a walk through a dense forest. Here, on warm spring evenings, you may hear a chorus of Cope’s gray treefrogs. While each individual call is similar, fitter males give off faster and longer calls. The females listen for these calls, tuning out extra noises and tuning in to the calls of interest. Research into how these frogs achieve this feat and the difference between their ears and human ears could assist in improving the design of both hearing aids and speech recognition systems.
Finding inspiration for improving hearing aid designs in nature; a photo of a Cope’s gray treefrog. Image by Fredlyfish4 — Own work. Licensed under CC BY-SA 4.0, via Wikimedia Commons.
So far, a lot of research into designing hearing aids that account for the cocktail party problem “has been acquired via very controlled, yet unrealistic laboratory experiments,” Kressner notes. This isn’t ideal, because there is “a disconnect between what we see in the laboratory and what we see in the real world.” To move forward and close this gap, Kressner suggests that it could be possible to use, for example, numerical modeling or more realistic psychoacoustic reproduction techniques to better understand what is happening in the real world.
Finding inspiration in simulation; a probe tube microphone, which can be used in association with hearing aids, simulated with the COMSOL Multiphysics® software.
Canadian Nuclear Laboratories (CNL) aims to improve nuclear fuel, because fuel performance limits the efficiency of power generation in nuclear reactors. As Andrew Prudil said in his keynote talk: “If we can increase the power rating of the reactors, that’s worth millions of dollars per day.” Optimized nuclear fuel also enables more green energy on a power grid and reduces the risk of nuclear accidents. Plus, the improved fuel can be used in existing reactors to enhance their performance.
Before engineers can develop improved nuclear fuel, they have to understand its behavior. This is no simple matter, as nuclear fuel experiences multiple coupled physical phenomena during fission, including high temperatures, radiation, mechanical loading, thermal expansion, the creation of fission products such as xenon and krypton, and more.
To learn more about nuclear fuel behavior during a reaction, in which “everything depends on everything else,” CNL turned to the COMSOL Multiphysics® software.
First, Prudil discussed a multiphysics model — created for his PhD thesis — that studies the behavior of nuclear fuel (or pellets, in this case). The Fuel and Sheath Modeling Tool (FAST) simulates a long row of pellets separated by small gaps inside a metal sheath. Each part of the model involves multiple types of physics. For instance, sheaths in nuclear reactors typically use zirconium-based alloys, which consist of anisotropic crystal structures. For accurate results, the model must account for how the crystals behave when pulled in different directions.
From the video: Results for the FAST simulations.
The simulations show how the ends of the pellets push outward to make room for the hot material at the center. The “hourglassing” phenomenon causes the ends of the pellets to create a wavy pattern in the cladding (exaggerated in the image above). FAST can also plot the radial displacement and various stress and strain fields, such as the hydrostatic pressure, von Mises stress, and axial creep. Prudil noted that the results show “very interesting, very rich spatial fields.”
With FAST, it’s possible to look at how nuclear fuel behaves in a continuum — both in terms of a temperature gradient and mechanical loading.
Prudil then discussed a model created at Canadian Nuclear Laboratories that simulates how fission gas forms bubbles on the boundary of a single grain of uranium oxide, a process that involves fission gas products such as xenon and krypton. At the grain boundary, these insoluble gases relieve pressure by forming bubbles, which grow larger and larger and eventually let gases escape.
The CNL model simulates this process for individual bubbles. Instead of using the traditional phase field method, which can be computationally expensive, they created the included phase technique to model the phase interface.
Simulation results for the included phase technique. Animation courtesy of Andrew Prudil; it can be found in the paper: “A novel model of third phase inclusions on two phase boundaries”.
Initially, the simulations show a random distribution of bubbles on the grain boundary. As time progresses, the bubbles combine to minimize the surface energy before collecting at the edges and vertices. CNL validated their approach, determining that they could control the contact angle of a single bubble on an infinite plane.
Wrapping up, Prudil mentioned that COMSOL Multiphysics could also be used to investigate other interesting multiphysics phenomena (e.g., columnar grain growth). With these capabilities, engineers can learn more about nuclear fuel and continue to advance the field.
To learn more about how CNL uses multiphysics modeling to understand the behavior of nuclear fuel, watch the keynote video at the top of this post.
The Nonlinear Structural Materials Module, an add-on product to COMSOL Multiphysics®, provides a plethora of material models, including models for hyperelasticity, isotropic/kinematic hardening plasticity, viscoelasticity, creep, porous plasticity, soils, and more. These material models cover a vast majority of engineering problems within structural analysis.
However, in some situations, the mechanical behavior of a material is not readily expressed in terms of an existing material model. For instance, suppose you developed a specialized material model for a certain alloy and want to use it to solve a large structural mechanics boundary value problem in COMSOL Multiphysics. What do you do?
As a matter of fact, there are three different ways in which you can define your own material:
The implementation of a material model as an external DLL can seem like a complex endeavor, but this blog post demonstrates how to implement an elastoplastic material model in COMSOL Multiphysics using hands-on steps that you can follow.
As a starting point, we need to decide on a material model to implement. We choose an isotropic linear-elastic material with isotropic hardening. This is a simple plasticity model that already exists in COMSOL Multiphysics, but it serves nicely to convey some key points.
First, let’s go over some assumptions, definitions, and nomenclature:
The example material model: Uniaxial stress-strain curve and yield surface in principal stress space.
Now, let’s discuss approaches for implementing a material model as an external material. There are several different ways of calling user-coded routines for external materials, which we refer to as sockets.
We can use the General stress-strain relation socket of the external material to define a complete material model that includes (possibly) both elastic and inelastic strain contributions. This is the more general of the two modeling approaches discussed here. When we use the General stress-strain relation socket, we are faced with two tasks:
We can also use the Inelastic residual strain socket to define a description of an inelastic strain contribution to the overall material model. An example of this would be if we wanted to add our own creep strain term to the built-in linear elastic material. The Inelastic residual strain socket assumes an additive decomposition of the total (Green-Lagrange) strain into elastic and inelastic parts. Thus, this is an adequate assumption when strains are of the order < 10%. When we use the Inelastic residual strain socket of the external material model, we are faced with two tasks:
Two related External Material sockets are the General stress-deformation relation and the Inelastic residual deformation. These are more general versions of those discussed above. Instead of defining the deformation in terms of the Green-Lagrange strain tensor, the deformation gradient is provided. Many large-strain elastoplastic material models use a multiplicative decomposition of the deformation gradient into elastic and plastic parts. In these situations, you would likely want to use one of these sockets instead.
Tip: We link to the source file and model file at the bottom of this blog post.
The complexity of computing the stress tensor varies significantly between material models. In practice, the computation of the stress tensor often needs to be formulated as an algorithm, usually called a stress update algorithm in the literature. In essence, the objective of a stress update algorithm is to compute the stresses, knowing the total strain of the current increment and the material state of the previously converged increment.
These quantities are provided to the external material as input.
The term “material state” represents any solution-dependent internal variables that are required to describe the material. Examples of such variables are plastic strain tensor components, current yield stress, back stress tensor components, damage parameters, effective plastic strain, etc. The choice of such state variables will depend on the material model. We must ensure that the material state is properly initialized at the start of the analysis, and that it is updated at the end of the increment.
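As a sketch, the material state for this model could be collected in a small container like the following. The component ordering and names are our choices for illustration, not those of the downloadable source file:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MaterialState:
    # plastic strain tensor components in Voigt-style storage
    # (11, 22, 33, 12, 23, 13) -- a choice we make for this sketch
    eps_p: List[float] = field(default_factory=lambda: [0.0] * 6)
    # effective (accumulated) plastic strain
    eps_pe: float = 0.0
    # the current yield stress could also be stored here, or simply
    # recomputed from eps_pe via the hardening curve

def initialize() -> MaterialState:
    """Material state at the start of the analysis: no plastic flow yet."""
    return MaterialState()
```

The state is initialized once at the start of the analysis and written back only at the end of a converged increment, so that failed iterations never corrupt it.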
We first need to investigate whether plastic flow occurs during the increment. We do this by assuming that the elastic strain equals the total strain of the current increment, less the (deviatoric) plastic strain of the previously converged increment, ${}^{\mathrm{old}}\boldsymbol{\varepsilon}_p$. This assumption holds true if there is, indeed, no plastic flow during the increment. The deviatoric stress tensor that is computed this way is aptly called a trial stress deviator and is given by

$$\mathbf{s}^{\mathrm{tr}} = 2G\left(\operatorname{dev}(\boldsymbol{\varepsilon}) - {}^{\mathrm{old}}\boldsymbol{\varepsilon}_p\right)$$

with an effective (von Mises) value $\sigma^{\mathrm{tr}}_{\mathrm{eff}} = \sqrt{\tfrac{3}{2}\,\mathbf{s}^{\mathrm{tr}} : \mathbf{s}^{\mathrm{tr}}}$, where $G$ is the shear modulus.
The effective trial stress is compared to the yield stress of the material, still under the assumption that there is no plastic flow during the increment. The yield stress corresponding to the previously converged increment is given by

$${}^{\mathrm{old}}\sigma_Y = \sigma_Y\!\left({}^{\mathrm{old}}\varepsilon_{pe}\right)$$

where $\varepsilon_{pe}$ denotes the effective plastic strain. Notice that the left-superscripted quantities in Steps 1 and 2 represent the material state of the previously converged increment, as we discussed earlier.
Now, we check whether the trial stress causes plastic flow. Put another way: if the trial stress lies inside the yield surface, the response during the increment is purely elastic; if not, plastic flow results. The check is performed using the yield condition:

$$\sigma^{\mathrm{tr}}_{\mathrm{eff}} - {}^{\mathrm{old}}\sigma_Y \le 0$$
Check of the yield condition to determine elastic or elastoplastic computation.
The stress update algorithm now necessarily branches off into either a purely elastic computation or an elastoplastic computation. We will follow each of these branches, starting with the purely elastic branch.
Because we determined that there is no plastic flow during the increment, the trial stress deviator is, in fact, identical to the stress deviator, and the update of the plastic strain tensor and the effective plastic strain is trivial.
We can directly return the pure elastic stress-strain relation as the Jacobian.
The objective of the elastoplastic branch of the stress update algorithm is to compute the stress deviator and update the plastic strains. We begin by again expressing the stress deviator, now knowing that plastic flow takes place during the increment:

$$\mathbf{s} = \mathbf{s}^{\mathrm{tr}} - 2G\,\Delta\boldsymbol{\varepsilon}_p$$

or

$$\left(1 + 2G\,\Delta\lambda\right)\mathbf{s} = \mathbf{s}^{\mathrm{tr}}$$

In the above equation, we used a discrete form of the flow rule, $\Delta\boldsymbol{\varepsilon}_p = \Delta\lambda\,\mathbf{s}$, which states that an increment in plastic strain is proportional to the stress deviator through a so-called plastic multiplier $\Delta\lambda$. Let’s stop for a moment and consider a graphical representation of this equation for stress:
Graphical representation of the correction of the trial stress deviator.
If we compute a trial stress deviator that lies outside the yield surface, we need to make a correction so that the stress deviator is returned to the to-be-determined yield surface. The plastic multiplier determines the exact amount by which the trial stress deviator should be scaled back to give the correct stress deviator. If we compute the plastic multiplier, it is straightforward to then compute the stress deviator and the plastic strain increment.
The key steps are to:
We can relate the plastic multiplier to the effective plastic strain increment using the flow rule, $\Delta\varepsilon_{pe} = \tfrac{2}{3}\,\Delta\lambda\,\sigma_{\mathrm{eff}}$, and then transform the equation for stress into a governing scalar equation:

$$\sigma^{\mathrm{tr}}_{\mathrm{eff}} - 3G\,\Delta\varepsilon_{pe} - \sigma_Y\!\left({}^{\mathrm{old}}\varepsilon_{pe} + \Delta\varepsilon_{pe}\right) = 0$$

This is, in general, a nonlinear equation in $\Delta\varepsilon_{pe}$, and we need to solve it using a suitable iterative scheme. Once it is solved, we are ready to compute the stress tensor, the plastic strain tensor, and the effective plastic strain.
The updated plastic strain tensor and effective plastic strain are stored as state variables.
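The complete stress update, from trial stress through the yield check to the return to the yield surface, can be sketched as follows. This is a small-strain von Mises radial-return sketch in our own notation, with tensors stored as Voigt-style component lists; it is not the downloadable C source file.

```python
import math

def stress_update(strain, eps_p_old, eps_pe_old, G, K, sigma_y, dsigma_y,
                  tol=1e-9, max_iter=50):
    """Radial-return stress update for small-strain von Mises plasticity.
    Tensors are 6-component lists (11, 22, 33, 12, 23, 13); shear entries
    are tensor components, not engineering strains. sigma_y(eps_pe) is the
    hardening curve and dsigma_y its slope."""
    w = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]  # weights for the double contraction
    vol = strain[0] + strain[1] + strain[2]
    strain_dev = [e - (vol / 3.0 if i < 3 else 0.0)
                  for i, e in enumerate(strain)]

    # Step 1: trial stress deviator, assuming a purely elastic increment
    s_tr = [2.0 * G * (strain_dev[i] - eps_p_old[i]) for i in range(6)]
    sig_tr = math.sqrt(1.5 * sum(w[i] * s_tr[i] ** 2 for i in range(6)))

    # Steps 2-3: yield check against the previously converged yield stress
    if sig_tr <= sigma_y(eps_pe_old):
        stress = [s_tr[i] + (K * vol if i < 3 else 0.0) for i in range(6)]
        return stress, list(eps_p_old), eps_pe_old  # elastic: state unchanged

    # Step 4: solve sig_tr - 3*G*d - sigma_y(eps_pe_old + d) = 0 for d
    d = 0.0
    for _ in range(max_iter):
        f = sig_tr - 3.0 * G * d - sigma_y(eps_pe_old + d)
        if abs(f) < tol * sigma_y(eps_pe_old):
            break
        d += f / (3.0 * G + dsigma_y(eps_pe_old + d))  # Newton step

    # radial return: scale the trial deviator back to the yield surface
    sig_eff = sig_tr - 3.0 * G * d
    s = [sig_eff / sig_tr * s_tr[i] for i in range(6)]
    eps_p = [eps_p_old[i] + 1.5 * d / sig_eff * s[i] for i in range(6)]
    stress = [s[i] + (K * vol if i < 3 else 0.0) for i in range(6)]
    return stress, eps_p, eps_pe_old + d
```

A quick sanity check of such a routine is to verify that, after a plastic step, the von Mises stress of the returned tensor sits on the updated yield surface.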
We computed the stresses and updated the material state (the state variables) for our material model. Now, we turn our attention to the Jacobian computation. The Jacobian goes by other names in the literature, such as tangent stiffness, tangent modulus, or tangent operator. In the stress update algorithm, we express the deviatoric and hydrostatic parts of the stress tensor as:

$$\mathbf{s} = \mathbf{s}^{\mathrm{tr}} - 2G\,\Delta\boldsymbol{\varepsilon}_p, \qquad \sigma_{\mathrm{hyd}} = K\,\operatorname{tr}(\boldsymbol{\varepsilon})$$
The Jacobian that we want to compute is the derivative of the second Piola-Kirchhoff stress tensor with respect to the Green-Lagrange strain tensor. For our example material, we assume that strains are small, so we do not need to distinguish between the various measures of stress and strain: they coincide in the small-strain limit. The derivative of stress with respect to strain is written as

$$\mathbf{J} = \frac{\partial\boldsymbol{\sigma}}{\partial\boldsymbol{\varepsilon}}$$
If we use the equations for the deviatoric and hydrostatic stress and the definition of the trial stress, we can express the Jacobian in the following way:

$$\mathbf{J} = \mathbf{D}_{\mathrm{el}} - 2G\,\frac{\partial\boldsymbol{\varepsilon}_p}{\partial\boldsymbol{\varepsilon}}$$

Note that we replaced the increment of the plastic strain tensor by the total plastic strain tensor in the expression above. Their derivatives with respect to strain are the same, by virtue of the additive update of the plastic strains. Recall that our two modeling approaches require differently defined Jacobians. We see immediately how they are related. In the General stress-strain relation, the Jacobian is given by the full expression above. In the Inelastic residual strain, the Jacobian is given by a single term in the expression, namely $\partial\boldsymbol{\varepsilon}_p/\partial\boldsymbol{\varepsilon}$.
The term $\mathbf{D}_{\mathrm{el}}$ is the elastic Jacobian. For a purely elastic computation, the total Jacobian of the General stress-strain relation equals this quantity, while the Jacobian of the Inelastic residual strain is then zero.
If we use the flow rule and the chain rule for differentiation, we arrive at the following expression:
Notice that this expression depends on the plastic multiplier. This suggests that for the current material model, there is little benefit in choosing the Inelastic residual strain over the General stress-strain relation, because both approaches require a full stress update algorithm to compute the plastic multiplier. For other material models, such as creep models, the benefit would be greater. Using the governing scalar equation and the flow rule, we can compute the last derivative in the expression above.
In order to ensure rapid convergence of the global equation solver and ultimately reduce the simulation time, the computed Jacobian should be accurate. Well, what does accurate mean? Simply put, it means that the computed derivative must be consistent with the stress update algorithm that was used to compute stresses. That is, any assumptions or simplifications used in the stress update algorithm should be reflected in the computation of the Jacobian. A derivative based on the stress update algorithm is often called algorithmic or consistent.
In some situations, the Jacobian computation can be cumbersome. It is often possible in these situations to use an approximate Jacobian. Keep in mind that the accuracy of the solution is determined by the stress update algorithm. As long as the Jacobian is not too far off, the global equation solver will still converge to the correct solution, albeit at a lower rate of convergence.
In the sections above, we developed a stress update algorithm and outlined how to compute a Jacobian. Now, we consider a special case for the hardening curve: We assume that the yield stress is a linear function of the effective plastic strain. This is usually called linear hardening, and it is defined by a constant plastic modulus $H$, the constant slope of the hardening curve. As it turns out, linear hardening means that the plastic strain increment can be solved for in closed form:

$$\Delta\varepsilon_{pe} = \frac{\sigma^{\mathrm{tr}}_{\mathrm{eff}} - {}^{\mathrm{old}}\sigma_Y}{3G + H}$$

In the example’s source code file, we have made use of this specialization to linear hardening.
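With linear hardening, the governing scalar equation becomes linear in the plastic strain increment and can be solved directly. A quick sketch (in our own symbol names, which may differ from the source file's), including a check that the updated effective stress lands on the updated yield surface:

```python
def plastic_increment_linear(sig_trial_eff, sig_y_old, G, H):
    """Closed-form effective plastic strain increment for linear
    hardening: solves sig_trial - 3*G*d - (sig_y_old + H*d) = 0 for d."""
    return (sig_trial_eff - sig_y_old) / (3.0 * G + H)

# example: 960 MPa effective trial stress, 250 MPa current yield stress
d = plastic_increment_linear(960e6, 250e6, G=80e9, H=1e9)

# the updated effective stress then sits on the updated yield surface:
sig_eff = 960e6 - 3 * 80e9 * d    # sig_trial - 3*G*d
sig_y_new = 250e6 + 1e9 * d       # sig_y_old + H*d
```

No iteration is needed, which is exactly why this specialization is attractive in the source code.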
Let’s consider an example problem of pulling a plate with a hole.
Dimensions, boundary conditions, and loads for a plate with a hole.
The problem has two symmetry planes and only one-quarter of the plate is modeled. We use the following material parameters:
The problem assumes plane stress. We can compare the predictions of our material model implementation with its built-in counterpart. We expect differences only on the order of numerical round-off, as long as the tolerance used when solving the nonlinear equations is tight enough.
Computations of the effective von Mises stress in MPa of the external material implementation (left) and built-in material model (right).
Computations of the effective plastic strain for the external material implementation (left) and built-in material model (right).
There are actually more scenarios in which you can take advantage of the ability to add external materials.
Consider a situation where you have source code for a material model that has been verified in another context. You may have created it yourself or found it in a textbook or journal paper. In this case, it may be more efficient to use the external material functionality than to recast the model and enter it as a set of ODEs. Even when the code is written in, for example, Fortran or C++, it is usually rather straightforward to wrap it into the C interface used by the external material.
A coded implementation may be computationally more efficient than using the User Defined or extra PDE options. The reason is that the detailed knowledge about the material law makes it possible to devise efficient stress updates, using, for example, local time stepping.
You may want to distribute your material model in a compiled form so that the end user cannot access the source code. As a matter of fact, the third-party product PolyUMod is implemented this way.
PolyUMod software is developed by Veryst Engineering LLC. COMSOL AB and its subsidiaries and products are not affiliated with, endorsed by, sponsored by, or supported by Veryst Engineering LLC.
]]>
Say you just ordered a new loudspeaker. You probably have expectations about the product: It should survive the trip home; withstand falls; and, above all, it should work. As Richard Little said in his keynote talk: “Our customers expect that things just kind of work when [they come] out of the box and they don’t really have to worry about it, because that’s what consumer products are normally expected to do.”
Upholding these performance conditions is the job of Sonos engineers working to create powered wireless loudspeakers. To do so, they have to ensure the performance and durability of the many competing components in a loudspeaker. Little’s team focuses on just one of these components: audio transducers, which convert input electrical signals into sound. In his talk, Little discussed maximizing the durability of transducers via a predictive design process by accounting for:
Little and his group use simulation to effectively analyze the durability of their transducer designs. This enables them to create virtual prototypes, improve the accuracy of their physical prototypes, and reduce time to market.
First, Little discussed a transducer component involving nonmoving parts that need to withstand handling-related stress: the basket. The basket of a transducer is its weakest part. As such, Little’s team works to improve transducer baskets by studying their materials and geometry. Little spoke about finding a type and grade of steel that can prevent a basket from deforming, while still minimizing material costs. This is accomplished by evaluating the basket’s structural integrity with time-dependent mechanical simulations.
From the video: Simulation results for the transducer basket with <130 MPa stress.
The results Little shared indicate that 130 MPa is the targeted yield stress. This is seen as the lowest acceptable yield strength to use when choosing a grade of steel for a design. Of course, there are other options for improving the design’s robustness, including using thicker steel or a different plastic material for the basket. However, these design choices have implications with regard to cost, acoustic performance, and manufacturing requirements.
Switching gears, Little discussed a moving component example involving a flat speaker that is typically placed beneath a television. Due to its design, the speaker’s woofer is shallow and the diaphragm is mostly flat. The voice coil of the woofer is subjected to a great deal of stress where it is attached to the lower diaphragm surface, because the flat diaphragm offers no geometric reinforcement.
Simulation confirms that high stresses exist in this specific location, with some areas having high enough stress to eventually fatigue and fail. The Sonos team addressed this challenge by reinforcing the area of highest stress with a small glued-on secondary ring. This design modification relieves the stress while negligibly impacting cost and acoustic performance.
From the video: The woofer diaphragm design was modified by adding a second ring, reducing stress.
With simulation, Richard Little and the Sonos team managed to accurately analyze stress on audio transducers and improve their designs. “This is a great way of investigating the durability of your product, as opposed to just designing for performance,” Little said. “It’s something that has been very important for us. We want our products to last 10 years out there in the field under normal usage.”
Want to learn more about Sonos’ acoustic simulations and loudspeaker designs? Watch the video at the top of this post.
]]>
I’m going to assume that the engineers, physicists, scientists, and researchers who read the COMSOL Blog don’t hold much stock in the paranormal. Even so, hearing a rattling window in the middle of the night or a whispering noise in an empty house is enough to frighten even a seasoned analytical mind.
When a suspected poltergeist (a supernatural entity that causes physical disturbances) turns out to have a mundane physical explanation, it is known as a false poltergeist. Researchers in this line of work often invoke Occam’s razor to explain such occurrences: The simplest explanation for something is likely the most valid. For instance, the Roswell UFO incident of 1947 is explained most simply by a weather balloon that fell out of the sky, not a flying saucer flown by little gray aliens.
Weather balloon or aliens: Which explanation do you think is the most valid? (Photo from my 2016 visit to the International UFO Museum and Research Center in Roswell, New Mexico.)
In the article “Things that Go Bump in the Night: The Physics of ‘False Poltergeists’” from a past issue of Sound & Vibration magazine, Roman Vinokur discusses common vibroacoustic phenomena that are mistaken for ghosts and supernatural entities. Let’s put on our ghost hunting/acoustician hats and take a look at some of the examples featured in the article.
If you ever wake in the middle of the night to a rumbling or groaning sound, think about Helmholtz resonance before burning sage or calling a medium. The most basic example of a Helmholtz resonator is a glass bottle with a narrow opening. When you hold the bottle up to your lips and blow across its opening, it makes a humming sound.
Helmholtz resonance in action (turn up your sound!)
A room with an open window or door can also act as a Helmholtz resonator. When turbulent airflow passes through an opening in the room, it excites Helmholtz resonance. The natural frequency of Helmholtz resonance depends on the room’s volume, thickness of the walls, and area of the opening. If this value falls within the range of infrasound (below 20 Hz), it can cause creepy sounds in the audible range — perhaps leading to a suspected poltergeist.
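The classic lumped formula for the Helmholtz resonance frequency makes it easy to estimate whether a room-plus-opening system lands in the infrasound range. The room and window dimensions below are made up for illustration, and the end corrections to the neck length are omitted for simplicity:

```python
import math

def helmholtz_frequency(c, opening_area, volume, neck_length):
    """Natural frequency of a Helmholtz resonator,
    f = c / (2*pi) * sqrt(A / (V * L)),
    with c the speed of sound, A the opening area, V the cavity
    volume, and L the neck length (here: the wall thickness).
    End corrections to L are neglected in this sketch."""
    return c / (2 * math.pi) * math.sqrt(opening_area / (volume * neck_length))

# a hypothetical 4 m x 5 m x 2.5 m room (V = 50 m^3) with a partially
# open window (A = 0.1 m^2) in a 0.3 m thick wall:
f = helmholtz_frequency(c=343.0, opening_area=0.1, volume=50.0,
                        neck_length=0.3)
# f comes out at a few hertz -- well below the 20 Hz hearing threshold
```

A bigger room or a smaller opening pushes the frequency even lower, which is why whole rooms so easily resonate in the infrasound range.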
Infrasound can even vibrate our internal organs. This explains why people who recount paranormal experiences often describe feelings of nausea; anxiety; and most commonly, coldness.
Helmholtz resonance can be excited by sound waves propagating from an internal or external noise source. For example, thunder can reverberate in small rooms, which can be perceived as something more malicious than weather. The Sound & Vibration article mentions a building that was rumored to be haunted. In actuality, the only thing haunting the building was revenge. The workers who built it had been scammed by the owner who hired them. To get revenge, the workers embedded empty glass bottles in the building’s roof. The bottles acted as Helmholtz resonators, and wind passing over their openings at night caused tenants to hear roaring sounds at a frequency of 100 Hz.
Besides causing scary sounds, Helmholtz resonators also reduce noise in a wide range of applications. For instance, resonators are used in car exhaust systems because they can attenuate a specific and narrow frequency band. When a mean flow enters a typical exhaust system, a Helmholtz resonator attenuates the sound that is generated (similar to our bottle example above, but with the opposite effect).
An animation showing the pressure distribution for a Helmholtz resonator under certain operating conditions. Automotive designers often turn to acoustics modeling and analysis to evaluate how the presence of flow affects the Helmholtz resonator’s performance.
Learn about modeling aeroacoustics applications with the COMSOL Multiphysics® software in a previous blog post by my colleague Mads Jensen.
Watch any film about a haunted house (if you’re not sure where to start, I can recommend a few!) and there is a scene with creaking floorboards, rattling windows, doors that open and shut on their own, or some combination of these phenomena. In the movies, a ghost is to blame, but the actual cause of such movement and noise isn’t so insidious: It’s mechanical resonance.
When the frequency spectrum of the source of a vibration lies in the infrasound range, it is inaudible (or barely audible) to the human ear. However, the movement caused by the vibration source is easy to hear. Basically, you can sometimes hear the effect of a vibration, but not the cause. This discrepancy is where ghost stories are born.
These roommates have very different explanations for what’s causing the rattling noises between the first and second floors.
Going back to the example of a multistory building, rooms often contain equipment that moves or vibrates. Objects ranging in size from vacuum cleaners to air conditioning units to treadmills can cause noise on another level of a building. If a person hears the noise produced by vibrations but is too far away to hear the cause, they could suspect that a paranormal entity is afoot.
The way skyscrapers are arranged in cities can sometimes form street canyons, also called urban canyons, which alter sound propagation. (The canyon-effect analysis neglects sound absorption in the air and at solid surfaces.) When propagating in a canyon, a sound wave’s energy does not follow the usual distance law valid for open spaces, where a spherical wavefront attenuates by 6 dB for each doubling of the distance. In a canyon, the pressure amplitude instead decreases inversely with the square root of the distance from the noise source, so for every doubling of the distance traveled, the energy of the cylindrical wavefront is attenuated by only 3 dB. Thus, sound can propagate over longer distances in canyons, with less attenuation than in open environments.
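The two spreading laws are easy to compare numerically; a minimal sketch:

```python
import math

def spl_drop_spherical(r, r0=1.0):
    """Free-field (spherical) spreading: level drop of 20*log10(r/r0) dB,
    i.e., 6 dB per doubling of distance."""
    return 20 * math.log10(r / r0)

def spl_drop_cylindrical(r, r0=1.0):
    """Canyon (cylindrical) spreading: level drop of 10*log10(r/r0) dB,
    i.e., 3 dB per doubling of distance."""
    return 10 * math.log10(r / r0)

# after three doublings of distance (1 m -> 8 m):
open_field = spl_drop_spherical(8.0)   # ~18 dB drop
canyon = spl_drop_cylindrical(8.0)     # ~9 dB drop
```

After just three doublings of distance, the canyon preserves roughly 9 dB more of the signal than the open field, which is enough to carry a quiet conversation much farther than intuition suggests.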
Let’s say our multistory building has a wall canyon. (You can picture a wall canyon as the cutout center of a U-shaped building, sometimes called a courtyard building.) If a conversation is happening on a lower level of the building — in front of open windows, of course — the canyon effect causes the sound to propagate to a higher floor. The person on the upper level hears the conversation as if it is happening close to them, but doesn’t see anyone talking. Therefore, the wall canyon causes the perceived effect of the hushed whispers of a ghost.
Due to the canyon effect, the conversation in front of an open window loses little of its volume by the time it reaches the open window a floor above.
Interestingly, temperature inversion can also produce something similar to the canyon effect. At night, the ground cools faster than the air above it, leaving a layer of cool air beneath warmer air. This causes sound to propagate for longer distances because of multiple reflections from the ground. What could possibly be scary about this effect, you ask? Perhaps hearing an owl hoot when there are no owls or trees around for miles…
This acoustic effect can be studied using ray acoustics and the propagation in graded media functionality. It is commonly studied in the field of underwater acoustics, where waves propagate in underwater sound channels generated by temperature or salinity gradients in the water column.
Owls are often seen as ominous, but aren’t they cute?
Let’s go back to Occam’s razor: The simplest explanation is usually the truth. As we’ve discussed, supernatural and paranormal experiences can be explained simply via acoustics and vibrations. But maybe the explanation is even simpler.
Say you’re alone in the house and hear footsteps on the floor above you. Is it a ghost? Alien? Infrasound from the layout of the room? It could just be a roommate or family member who had a change in their schedule and happens to be walking around upstairs when you don’t expect them to. What about noticing a quiet pitter patter while taking a walk outside? It’s probably not a ghost. It could be sound attenuation from the canyon effect. Or, it’s simply a cat out exploring the neighborhood — although if it’s a black cat, I’d still take precautions.
COMSOL’s main office is located north of Boston in Massachusetts. In the southeastern corner of the state, there is an area known to paranormal enthusiasts as the Bridgewater Triangle (a reference to the area’s epicenter, Bridgewater, MA; and the Bermuda Triangle). The area is a hotbed of reported ghosts and UFO sightings as well as other bizarre forms of paranormal activity, like the pukwudgies, tiny creatures who supposedly live in the woods and play tricks on hikers.
Although I used to be wary of venturing to the Bridgewater Triangle and bumping into a ghost, learning about how vibroacoustic effects can cause seemingly supernatural noises has me ready to explore the area.
Before we do anything else, let’s pour some 90°C coffee into a vacuum flask and consider the material properties of the model.
Materials involved:
All material properties except for the foam filler can be pulled directly from the Material Library in the COMSOL Multiphysics® software. As always, when using COMSOL Multiphysics, you can add special material properties manually into the software. In the case of the foam in this example, you would enter the following values:
Tip: The modeling approaches mentioned here are both covered in the Natural Convection Cooling of a Vacuum Flask tutorial model. Please refer to the tutorial MPH-file and accompanying documentation to see exactly how to set up and solve this model, because we won’t go into detail in this blog post.
For a quick and simple model, you can describe the thermal dissipation using predefined heat transfer coefficients. This method helps determine how the coffee cools over time inside the vacuum flask. It’s simple because it doesn’t resolve the flow behavior of the air around the flask, and it’s useful because it still shows the cooling power over time.
Instead of computing heat transfer and flow velocity in the fluid domain, you would simply model the heat flux on the external boundary of the vacuum flask, defined from the heat transfer coefficient, the surface temperature, and the ambient temperature (25°C; a little warmer than standard room temperature):
q = h(T_{∞}-T)
There are many predefined cases where h is known with high accuracy. The Heat Transfer Module (an add-on to COMSOL Multiphysics) includes a library of heat transfer coefficients for easy access.
Another time-saver with this method is the fact that you can avoid predicting whether the flow is turbulent or laminar, because many correlations are valid for a wide range of flow regimes. As long as you use the appropriate h correlations, you can typically arrive at accurate results at a very low computational cost with this method.
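When h is constant, this first approach is essentially Newton's law of cooling, which can be sketched in a few lines. The numbers below are illustrative stand-ins, not the tutorial's actual values:

```python
import math

def coffee_temperature(t, T0, T_inf, h, area, mass, cp):
    """Lumped-capacitance cooling:
    T(t) = T_inf + (T0 - T_inf) * exp(-h*A*t / (m*cp)).
    Valid when internal temperature gradients in the fluid are small."""
    tau = mass * cp / (h * area)   # thermal time constant, in seconds
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# hypothetical effective values for an insulated flask:
# 0.75 kg of coffee (cp ~ 4186 J/(kg*K)), a 0.05 m^2 outer surface,
# and a very low effective h of ~1 W/(m^2*K) thanks to the vacuum gap
T_10h = coffee_temperature(t=10 * 3600, T0=90.0, T_inf=25.0,
                           h=1.0, area=0.05, mass=0.75, cp=4186.0)
```

With these made-up parameters the coffee is still comfortably above ambient after 10 hours; in the actual tutorial, the effective heat transfer coefficient comes from the predefined correlation library rather than a guessed constant.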
What about the second approach? It’s worth considering how the cooling power is distributed on the flask surface as the coffee cools down. To do so, you need to include surrounding fluid flow in the model.
To get a more complete picture of what’s going on with our precious java (seriously, when can I drink it?), we could create a more detailed model of the convective airflow outside the vacuum flask.
Taking the second approach calls for using the Gravity feature available in the Single-Phase Flow interface with the Heat Transfer Module or the CFD Module, which allows you to include buoyancy forces in the model. Typically, you would first need to figure out whether the flow is laminar or turbulent before following this modeling approach. For the sake of brevity here, let’s skip ahead: we know from the tutorial model documentation that the flow is laminar in this case.
The detailed model shows that the warm flask drives vertical air currents along its walls. The currents eventually combine in a thermal plume above the flask and air in the surrounding area is pulled toward the flask, feeding into the vertical flow. (This flow is weak enough that there are no significant changes in dynamic pressure.)
The vortex that forms above the flask’s lid reduces the cooling in that region — something you can’t tell from the first method. In essence, the fluid flow model is better at describing local cooling power than the simple method with the approximated heat transfer coefficient.
So how long will the coffee stay warm in the vacuum flask? Many coffee drinkers like to stay within the range of 50–60°C (roughly 120–140°F), because it’s supposedly when the “coffee notes shine.” Both methods suggest that after 10 hours inside the flask, the coffee will be about 54°C, which is still within the enjoyable range. Of course, if we were to bring the flask outside in cooler temperatures than the assumed 25°C, the coffee would cool down quicker.
A plot of the coffee temperature over time for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.
Though both modeling approaches give very similar results in terms of the coffee temperature over time, it’s a different story when looking at the flask surface’s cooling power:
A plot of the heat transfer coefficient for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.
For fast and accurate results in the long run, you can combine the two approaches. After setting up the more detailed model, you can create and calibrate functions for heat transfer coefficients to use later, via the simpler approach for solving large-scale and time-dependent models.
We saw that there are two different ways to model the convective cooling of coffee inside a vacuum flask over time. The detailed approach is more computationally demanding, as it combines heat transfer and fluid flow, but it’s also more accurate in the sense that it accounts for local effects. By combining both methods, you can save time in the future.
Try it yourself by downloading the tutorial model from the online Application Gallery or within the Application Library inside the COMSOL Multiphysics software. If you have any questions about this model or the COMSOL Multiphysics software, please contact us.
]]>
A trebuchet is a long-range weapon that uses a swinging arm to send a projectile toward a target. The machine is generally associated with hurling boulders at a castle wall to bring it down, but trebuchets have also been used to throw Greek fire and wreak all kinds of havoc. Trebuchets have appeared in several films and TV shows, such as The Return of the King (2003); Marco Polo (2014–2016); and even in Monty Python and the Holy Grail (1975), where a cow was catapulted from inside the castle walls toward an unsuspecting King Arthur!
One nonfictional and historically notable trebuchet is War Wolf (known to the English soldiers at the time as “Ludgar”). In 1304, on one of his campaigns to defeat Scotland, King Edward I besieged Stirling Castle and ordered his engineers to build a giant trebuchet. War Wolf was the largest trebuchet ever made and was rumored to send boulders of about 150 kilograms across a distance of over 200 meters.
A small-scale replica of War Wolf, a counterweight trebuchet that uses a boulder-holding sling at the end of a swinging arm. Image by Ron L. Toms. Licensed under CC BY 3.0, via Wikimedia Commons.
Large trebuchets of this type would typically feature a counterweight roughly ten times the weight of the projectile, which would put War Wolf’s counterweight in the neighborhood of 1.5 tons! The poor prospects of surviving an assault from War Wolf prompted the Scottish garrison inside the castle to offer their surrender. However, the king would not have it, as he was eager to try out his new trebuchet. He forced the Scots to remain inside the castle and restarted his siege. War Wolf proved its worth, and the rest is, as they say, history.
The working principle of a trebuchet is simple. The counterweight is raised and the trebuchet is cocked. When the trebuchet is fired, the counterweight drops, and the potential energy of the system is converted into a combination of kinetic and potential energy. The projectile undergoes a swinging motion and is released at some suitable position along its trajectory. This happens when one end of the sling slips off the tip of the swinging arm.
Here, we build a computational model of a basic trebuchet with the Multibody Dynamics Module and version 5.3 of the COMSOL Multiphysics® software.
Our model uses the following assumptions and physical dimensions:
A schematic of the counterweight trebuchet model.
As the projectile is swung around by the swinging arm, it describes a nontrivial motion of varying velocity. If the trebuchet is to be designed for maximum throwing distance, a question arises: At what point during its trajectory should the projectile be released? Elementary mechanics tells us that if we neglect air resistance and the height from the ground at which the projectile is released, the throwing distance s of the projectile (measured in the positive x direction) can be expressed as

$$s = \frac{v_0^2 \sin(2\alpha)}{g}$$
where v_{0} and α are the velocity and angle at the time of projectile release, respectively, and g is the gravitational acceleration.
Thus, finding the maximum throwing distance is equivalent to finding the combination of v_{0} and α that maximizes s. Intuitively, you might think that the angle of release should be α = 45°. Let’s see if this holds true for the trebuchet model.
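We can test this intuition with a simple ballistic estimate. This sketch neglects air resistance and is separate from the multibody model itself; the speed and height values are illustrative:

```python
import math

def throwing_distance(v0, alpha_deg, h=0.0, g=9.81):
    """Ballistic range for release speed v0 (m/s), release angle alpha
    (degrees), and release height h (m) above the ground, with air
    resistance neglected. For h = 0 this reduces to
    s = v0^2 * sin(2*alpha) / g."""
    a = math.radians(alpha_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    t_flight = (vy + math.sqrt(vy ** 2 + 2 * g * h)) / g
    return vx * t_flight

# sweep integer release angles for a ground-level release...
best_flat = max(range(1, 90), key=lambda a: throwing_distance(40.0, a))
# ...and for a release from 10 m up: the optimum drops below 45 degrees
best_high = max(range(1, 90), key=lambda a: throwing_distance(40.0, a, h=10.0))
```

The flat-ground sweep recovers the textbook 45° optimum, while a raised release point already pulls the optimum angle below 45° — consistent with the trebuchet model's optimum landing well under 45°, where the release height and the machine's kinematics both play a role.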
The animation below shows the motion of the trebuchet as it is fired. The quantity s is shown along the projectile trajectory, and it represents the throwing distance that would follow from releasing the projectile at a certain point on this trajectory.
In the results below, the throwing distance is plotted as a function of the release angle α. The maximum throwing distance is obtained if the projectile is released at α ≈ 38°. The plot reveals that deviations of the order of 5° from this optimum only affect the throwing distance by a few meters. In other words, as long as the release angle is roughly correct, the trebuchet will function as intended.
Now, let’s examine what happens if we modify the length of the sling by ±10% using a parametric sweep. The plot below shows that the maximum throwing distance that can be obtained is greatly affected by the length of the sling. So, if you are in the business of designing trebuchets for medieval kings, you should pay attention to this design parameter.
Using a parametric sweep, you could easily examine the effect of changing other physical lengths in the model (while keeping the counterweight at fixed height for consistency). Try for yourself by downloading the model file from our Application Gallery.
In this blog post, we demonstrated that the Multibody Dynamics Module can be used to build a simple model of a counterweight trebuchet. If you are interested in learning more about multibody dynamics modeling, check out these additional blog posts:
Wondering why War Wolf was also called Ludgar? Apparently, the French name Loup de Guerre (“wolf of war”) proved more than a mouthful for the English soldiers, so it was condensed into “Ludgar”.
Here, we discuss different metrics for gauging the performance of mufflers. One important design parameter is the thickness of the muffler's casing. By performing acoustic-structure interaction simulations, we can see how shell thickness affects muffler performance.
Using the same model setup that was defined in the preceding blog post, we perform a parameterized study to observe the effect of varying shell thicknesses on the muffler. We start at a base thickness of 1 mm, which is the original shell thickness that was used in the previous studies. Then, we halve the base thickness and double it.
The acoustic domain (see below) surrounding the muffler model provides a good means to assess the sound emission into the atmosphere for the different shell thicknesses.
Figure 1. Cross-sectional and isometric views of the muffler model and surrounding acoustic domain.
The transmission loss (TL) from the muffler inlet to the muffler outlet, as defined in the original blog post, is

TL = 10 log_{10}(P_{in}/P_{out})
where P_{in} is the acoustic power at the muffler inlet and P_{out} is the acoustic power at the muffler outlet. The variables P_{in} and P_{out} are dependent on the pressure at the inlet, p_{in}, and outlet, p_{out}, respectively.
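Expressed in code, the TL in decibels follows directly from the two acoustic powers. A minimal sketch, using hypothetical power values for illustration only:

```python
import math

def transmission_loss(p_in, p_out):
    """Transmission loss TL = 10*log10(P_in / P_out) in dB,
    with acoustic powers given in watts."""
    return 10 * math.log10(p_in / p_out)

# Hypothetical values: 1 W incident, 1 mW transmitted -> 30 dB
print(transmission_loss(1.0, 1e-3))
```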
The TL from the inlet to the outlet is computed in this study for the simulation cases with shell thicknesses of 0.5 mm and 2 mm. These TL curves are compared in Figure 2 below, along with the case for the 1-mm shell thickness.
Figure 2. Transmission loss from the muffler inlet to the outlet for shell thicknesses, t, of 0.5 mm, 1 mm, and 2 mm.
The shell mode noted at 172 Hz for a shell thickness of 1 mm (from the previous studies) is found to occur at 180 Hz for the model with a shell thickness of 0.5 mm. In the vicinity of 180 Hz, the peak and dip in the curve for the 0.5-mm model are far more pronounced than those of the 1-mm model for this eigenmode.
For the 0.5-mm case, the difference in the TL at this mode from the peak to the dip is approximately 18 dB, with a frequency spread of 8 Hz and the dip occurring at 188 Hz. This is expected, as the pressure pulses exciting the shell have a greater impact on thinner plates. Accordingly, for the largest computed shell thickness of 2 mm, the curve is smooth in the region where this spike occurs for the 0.5-mm and 1-mm cases.
The behavior of the TL for the 2-mm case is close to that of a pure pressure acoustics simulation, where the boundaries of the muffler are defined as sound hard boundaries. Similarly, the shell mode noted at 342 Hz for the 1-mm shell thickness case is present at 338 Hz for the 0.5-mm shell thickness case, but it is not visible in the TL curve for the 2-mm shell thickness case.
The resonating acoustic mode at 386 Hz is present for all three cases, as indicated by a sharp dip in all three curves at this frequency.
The next notable peak present in all three curves lies between 610 Hz and 640 Hz. As the shell thickness increases, the position of the peak shifts to the right: the peaks occur at 614 Hz, 632 Hz, and 638 Hz for shell thicknesses of 0.5 mm, 1 mm, and 2 mm, respectively. This is consistent with the fact that the muffler structure becomes stiffer with increasing thickness, which raises the frequency of this eigenmode.
Despite the right-shift in frequency for increasing thickness, the amplitude of the peak is greater for 1-mm thickness than 2-mm thickness. It would be expected that a structure with a larger shell thickness would produce a better TL than a structure with a smaller thickness. However, an acoustic eigenfrequency noted in the pressure acoustics case from the original blog post is present in the vicinity of the eigenmode for the 1-mm shell thickness case. This acoustic mode could be in phase with the shell eigenmode for the 1-mm shell thickness, which in turn results in a greater peak in TL at this mode than for the other shell thickness cases.
The final peak observed in all three cases within the computed frequency range occurs in the vicinity of 700 Hz. The frequency spacing of this mode across the shell thicknesses is minute compared with that of the preceding eigenmode. The peaks occur at 696 Hz, 702 Hz, and 700 Hz in the TL curves for the 0.5-mm, 1-mm, and 2-mm shell thicknesses, respectively. It can therefore be deduced that the frequency of this eigenmode is insensitive to the variation in shell thickness. It is likely an acoustic eigenmode, where the stiffness of the shell does not affect the air contained inside the muffler.
The transmission loss from the muffler inlet to the acoustic domain boundary was defined in the previous blog post and is also computed in this study for the muffler model with shell thicknesses of 0.5 mm and 2 mm (as plotted in the figure below). The two curves (solid orange and solid gray) are plotted, along with the TL curves from the previous graph, which account for the shell thicknesses of 0.5 mm and 2 mm (the dashed orange line and dashed gray line).
Figure 3. Transmission loss from the inlet to the outlet compared to the transmission loss from the inlet to the acoustic domain boundary, for shell thicknesses (t) of 0.5 mm and 2 mm.
It is evident that the solid gray curve is smoother than the solid orange curve, with fewer and less sharp peaks and dips. Further, the solid gray curve has a higher TL than the orange curve for most of the computed frequency range. These differences in the solid curves are expected, considering that the muffler shell is stiffer with a thickness of 2 mm as compared with 0.5 mm. A stiffer shell makes the structural response to the air volume in the muffler less pronounced, causing less shell noise to be emitted into the surrounding atmosphere.
Comparisons can also be made between the curves for the two types of TL for each thickness. It can be noted that for the muffler model with a 0.5-mm thickness, the two orange curves coincide with each other far more than the gray curves. The two gray curves (2-mm shell) sit farther apart from each other than the two orange curves (0.5-mm shell) do for most of the computed frequency range. For the orange curves, the TL from the muffler inlet to the acoustic domain boundary drops below the TL from the inlet to the outlet in the vicinity of the 180-Hz shell eigenmode. This indicates that at this mode, more sound is emitted into the surrounding atmosphere than passes through the muffler outlet.
A more acoustics-specific comparison of the transmission loss from the muffler inlet to the acoustic domain boundary for the three shell thicknesses is provided in the plot below, by arranging the data in 1/3 octave bands.
Figure 4. Transmission loss from the muffler inlet to the acoustic boundary, plotted in 1/3 octave bands for the three thicknesses.
Representing the transmission loss for different shell thicknesses by binning the TL in fractional octave bands is akin to how empirical data from acoustic measurements is processed to meet established standards. The graph above clearly shows that the muffler with a shell thickness of 2 mm performs best in most bands, except for the last two. This can be confirmed by looking at the solid gray curve in the line graph discussed at the beginning of this section, which starts to dip after 600 Hz.
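The binning itself is straightforward to sketch. The snippet below energy-averages narrowband TL values within each 1/3-octave band, using the simple base-2 convention for the band centers and edges; established standards (e.g., IEC 61260) define the exact center frequencies slightly differently, so this is an illustrative sketch rather than a standards-compliant filter bank:

```python
import math
from collections import defaultdict

def third_octave_band(freq):
    """Return the 1/3-octave band index n such that the band centered at
    1000 * 2^(n/3) Hz (edges at fc * 2^(+-1/6)) contains freq."""
    return round(3 * math.log2(freq / 1000.0))

def band_average_tl(freqs, tls):
    """Energy-average narrowband TL (dB) per 1/3-octave band:
    average the transmitted power ratios 10^(-TL/10), then convert back."""
    bands = defaultdict(list)
    for f, tl in zip(freqs, tls):
        bands[third_octave_band(f)].append(10 ** (-tl / 10))
    return {n: -10 * math.log10(sum(r) / len(r))
            for n, r in sorted(bands.items())}
```

Averaging the power ratios rather than the dB values ensures that a single deep dip (i.e., a frequency where much power leaks through) dominates its band, which matches how such dips affect measured band levels.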
Aside from the transmission loss, an additional measure for gauging muffler performance is the muffler efficiency, which is defined as

η = (P_{in} − P_{out})/P_{in}
where P_{in} and P_{out} are the acoustic power at the muffler inlet and outlet, respectively.
The muffler efficiency for the three shell thicknesses is plotted below, and it can be seen that the efficiency for each case is quite similar over the computed frequency range.
Figure 5. Muffler efficiency for the muffler inlet to the outlet for the different shell thicknesses.
The muffler performs at almost 100% efficiency from approximately 200 Hz onward for all three cases. The only exception in all cases is at the resonating acoustic mode of 386 Hz, when a sharp dip is observed. The muffler efficiency for computed frequencies below 85 Hz is less than 60%, and the poor performance of the muffler in the low-frequency range is also evident in the TL from the inlet to the outlet, shown at the beginning of the blog post.
A third means to quantify muffler performance is the normalized radiated sound power at the acoustic domain boundary, which is defined as

P*_{out_domain} = P_{out_domain}/P_{in}
where P_{out_domain} is the acoustic power at the acoustic domain boundary. This variable is dependent on p_{out_domain}, the pressure at the acoustic domain boundary.
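Both of these measures reduce to simple ratios of the acoustic powers defined above. A minimal sketch, with hypothetical power values chosen only to illustrate the scales discussed below:

```python
def muffler_efficiency(p_in, p_out):
    """Fraction of the incident acoustic power that is not
    transmitted to the muffler outlet."""
    return 1 - p_out / p_in

def normalized_radiated_power(p_in, p_out_domain):
    """Acoustic power reaching the acoustic domain boundary,
    as a fraction of the incident power."""
    return p_out_domain / p_in

# Hypothetical powers (W): 2% transmitted, 5% radiated to the domain
print(muffler_efficiency(1.0, 0.02))         # 0.98
print(normalized_radiated_power(1.0, 0.05))  # 0.05
```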
The computed P*_{out_domain} for each of the three cases with different shell thicknesses is plotted in Figure 6 below.
Figure 6. Normalized radiated sound power at the acoustic domain boundary for the shell thicknesses.
As expected, for most of the computed frequency range just below 600 Hz, the muffler with the 0.5-mm shell thickness has the highest sound radiation into the acoustic domain and the muffler with the 2-mm thickness has the lowest emitted sound. The sharp drop in the solid orange curve at 188 Hz in Figure 2 is noted as a large spike in the solid orange curve in Figure 6 (above). Therefore, the muffler with a shell thickness of 0.5 mm radiates more than 5% of the incident power into the atmosphere at the eigenmode occurring between 180 Hz and 188 Hz.
Although other peaks are present in the three curves, particularly at frequencies close to eigenmodes, these peaks are minute in comparison to the peak at 188 Hz for the 0.5-mm case, with less than 1% of incident power radiated into the surrounding domain.
The sound pressure level at the peak of normalized radiated sound power for each of the three shell thicknesses is plotted (as isosurfaces) below.
Figure 7. Sound pressure level at 188 Hz, t = 0.5 mm.
Figure 8. Sound pressure level at 342 Hz, t = 1 mm.
Figure 9. Sound pressure level at 634 Hz, t = 2 mm.
It has been shown that the shell thickness drastically influences the performance of a muffler. Naturally, the greater the thickness, the stiffer the structure. Thus, with increasing thickness, the transmission loss curve approaches the hard boundary condition in a pure acoustics analysis (compare Figure 2 to the results from the previous blog post).
Further, the peak sound power radiating into the surrounding air is reduced from more than 5% to less than 1%, merely by increasing the shell thickness from 0.5 mm to 1 mm.
In addition to the reduction of the maximum radiated sound power, it is interesting to note the transmission loss curves in Figure 3. The results exemplify the complexity of the problem: The location of greater transmission loss is not constant, but rather a function of frequency and shell thickness. For example, the intersection of the two 0.5-mm curves marks where the (total) transmission loss into the surrounding air exceeds that at the muffler outlet. As we might expect, the greatest difference in transmission loss with increasing shell thickness generally occurs in the surrounding air. However, at certain frequencies (around 630 Hz), the transmission loss for the 2-mm shell thickness drops even below that of the corresponding 0.5-mm case.
In conclusion, the COMSOL Multiphysics® software provides a remarkably simple way to investigate the interaction between structural elements and gases or fluids. This enables acoustics engineers to easily determine suitable materials and/or structural parameters to obtain the desired behavior of a component. Common applications include vibration analysis, fatigue assessment, and component noise evaluation.
Linus Fagerberg of Lightness by Design is an experienced consultant working with simulation-supported product development. He holds a PhD from KTH Royal Institute of Technology and is specialized in the structural mechanics of composites, stability, and optimization. Linus believes that numerical simulation is a great tool to consistently deliver high-quality products, improve performance, and mitigate risks. Lightness by Design is a COMSOL Certified Consultant based in Stockholm, Sweden.
Active thermal control systems in buildings help keep people comfortable in extreme weather conditions. For example, these systems can maintain a steady temperature indoors in places that are “summer dominant”, which are often very warm during the day and quite cool at night. However, active thermal control systems consume a lot of energy. They are also expensive to run around the clock, making them a less-than-ideal solution for keeping a steady interior temperature in such climates.
Passive thermal control systems are more efficient. One advantage of these systems is that they can run on less energy. They can also work with active thermal control systems, which minimizes the energy needed to maintain an even temperature. Further, passive thermal control systems can be included in a building during construction, helping to lower energy consumption costs over the lifetime of the structure.
A bedroom in the process of being remodeled. The walls are covered in a standard plaster.
Using phase change materials (PCMs) in building elements is one method of passive thermal control. Within a small temperature range, PCMs absorb or release their latent heat, depending on whether they are melting or solidifying, respectively. In doing so, they increase the thermal inertia of a building. Since PCMs can store or provide heat as needed, using them in buildings is particularly advantageous in areas with large daily temperature swings. This ability is less useful in climates that are constantly hot, as the PCM simply stays hot.
To optimize PCMs for use in buildings, a research team from the Frederick Research Center and the University of Cyprus used numerical simulation.
To study the thermal performance of a PCM in a building element, the researchers created a 3D model consisting of a concrete block with a layer of plaster. They added a PCM to the plaster in three different weight fractions: 5%, 10%, and 20%.
It is important that the PCM can adjust to the temperature changes in hot climates. For this reason, Micronal 5038X was used as the PCM, as it has a melting temperature of 26°C. To see how effective the PCM is in managing temperature, the model also includes a reference plaster to act as a comparison.
The geometry of the building element. Image courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich paper.
The model captures the behavior of the PCM with the Heat Transfer with Phase Change interface. The interface helps predict how these types of materials transform when they change phase. In this case, the researchers determined the temperature transfer in the plaster with Micronal 5038X throughout the simulation.
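Phase-change modeling of this kind is commonly handled with an apparent heat capacity formulation, in which the latent heat is folded into a temperature-dependent specific heat that peaks around the melting point. The sketch below illustrates the idea; all material values (solid/liquid heat capacities, latent heat, transition width) are illustrative assumptions, not measured Micronal 5038X data:

```python
import math

def apparent_heat_capacity(T, cp_solid=1500.0, cp_liquid=1800.0,
                           L=110e3, T_m=26.0, dT=2.0):
    """Apparent specific heat capacity (J/(kg*K)) for a PCM melting around
    T_m (degC) over a transition interval dT, distributing the latent heat
    L with a normalized Gaussian pulse. Values are illustrative only."""
    # Smoothed phase indicator: 0 = fully solid, 1 = fully liquid
    theta = 0.5 * (1 + math.erf((T - T_m) / (dT / 2)))
    cp = (1 - theta) * cp_solid + theta * cp_liquid
    # Latent-heat contribution: Gaussian centered at T_m, unit integral,
    # so integrating cp over the transition recovers L
    sigma = dT / 4
    pulse = (math.exp(-((T - T_m) ** 2) / (2 * sigma ** 2))
             / (sigma * math.sqrt(2 * math.pi)))
    return cp + L * pulse
```

Far from the melting point, the function returns the plain solid or liquid heat capacity; near 26°C, it spikes sharply, which is what gives the PCM-enhanced plaster its large thermal inertia over a narrow temperature range.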
The exterior temperatures in the model are based on measured temperatures averaged per hour. Using real temperatures increases the accuracy of the simulation, providing a more realistic look at the thermal performance of the PCM in a specific climate.
Note that you can access meteorological data for over 6000 different areas in COMSOL Multiphysics. To learn how, check out this blog post on thermal modeling of the airflow in and around a house.
Let’s look at how the simulation results compare with experimental data. As we can see below, the temperature peaks and their timing for the reference plaster and the plaster with 5% PCM both show good agreement. Based on this agreement, the model predictions for the plasters that include 10% and 20% PCM can also be trusted.
Comparison of the simulation and experimental results for the reference plaster and plaster with 5% PCM. Image courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich paper.
When looking at the thermal performance of the different plasters, we can see that the higher the percentage of PCM in the plaster, the better the performance. Using Micronal 5038X minimizes the temperature variation inside a building over a 24-hour period. Further, it takes longer for the building to reach its maximum and minimum temperatures.
Another useful aspect of the PCM is that it can adjust to different daytime and nighttime temperatures. In 24 hours, the PCM is able to change phase twice.
Left: The thermal performance of plasters containing different percentages of PCM. Right: The temperature distribution in a building element that uses a PCM-enhanced plaster. Images courtesy A. Kylili, M. Theodoridou, I. Ioannou, and P.A. Fokaides and taken from their COMSOL Conference 2016 Munich presentation.
Based on the researchers’ simulation results, validated with experimental data, the novel PCM-enhanced plaster can be optimized for use in buildings located in hot climates.