Acoustic radiation force is an important nonlinear acoustic phenomenon that manifests itself as a nonzero force exerted by acoustic fields on particles. It is the force behind acoustophoresis, that is, the movement of objects by sound. One interesting example of this force in action is the acoustic particle levitation discussed in this previous blog post. Today, we shall examine the nature of this force and show how it can be computed using COMSOL Multiphysics.
To understand the nature of the acoustic radiation force, let’s first consider a simple example of a particle in a standing wave pressure field (here assumed to be lossless).
The force on the particle arises as a result of the particle's finite size: the gradients in the pressure field exert a greater force on one side of the particle than on the other. However, if we are considering a harmonic pressure wave, then the force is also expected to be a harmonic function, which can be expressed as F_\text{harmonic} = F_0\sin (2\pi f_0 t+\phi). I’ve shown this as a black arrow in the animation below.
If time-averaged, the total contribution goes to zero. So, where does the observed nonzero force come from?
This question was first addressed by L. V. King back in 1934 (“On the acoustic radiation pressure on spheres“). In order to understand King’s results, we must take a step back to examine how the governing equations of acoustics are derived.
We will find out that they emerge from the Navier-Stokes equations as a result of a linearization procedure, which is normally carried out in two steps.
First, a very small time-varying perturbation in pressure and velocity is assumed on top of a stationary background field. When time derivatives are applied, the stationary terms drop out and what is left only includes the time-dependent perturbation terms. The remaining expression will contain both linear and nonlinear contributions. The latter appear in the form of products of two or more linear perturbation terms, and they result from convective and inertial terms in the original Navier-Stokes equation.
But, in the simplest acoustic limit, the contribution of nonlinear terms can be neglected because the amplitudes of perturbations considered are very small. For example, 0.01^{2} is much smaller than 0.01 and can therefore be neglected. So, in the second step of the linearization procedure, all the nonlinear terms are neglected and the linear wave equation is obtained.
What King showed was that, in order to understand and evaluate the effect of the acoustic radiation force, the nonlinear terms must somehow be retained in the equations.
Keeping the terms up to a second order, the pressure field will appear as a combination of two terms p = p_1 + p_2, where p_1 and p_2 can be expressed in a simplistic form as p_1 = \rho_0 c_0 v, which appears as a linear function of the perturbation velocity v, and p_2 = 1/2 \rho_0 v^2, which appears as a nonlinear function of v. Since, in the acoustic limit, we only consider the cases in which v \ll c_0, where c_0 is the adiabatic speed of sound, we conclude that p_2 \ll p_1.
At this point, we are ready to answer the first question: Where does the acoustic radiation force come from?
Going back to the example of a particle in a standing wave pressure field, let’s examine the linear and nonlinear components of the pressure and the forces produced by these components. In this case, p_1 will be a time-harmonic function p_1 = P_1 \cos(kx)\sin(2\pi f t) and p_2 will be an anharmonic function p_2 = P_2\cos^2(kx)[1-\cos(4\pi f t)] resulting from the nonlinear contribution.
These terms are visualized by the waveforms in the animation above. The forces resulting from these pressure terms are indicated by arrows. The linear force (black arrow) changes in both magnitude and direction, so its cycle-averaged contribution is zero, whereas the nonlinear term (red arrow) changes only in magnitude and, on average, exerts a nonzero force.
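The cycle-average argument is easy to verify numerically. Below is a minimal sketch using the standing-wave expressions for p_1 and p_2 quoted above; the amplitudes and the observation point kx = \pi/4 are illustrative assumptions, not values from any model:

```python
import numpy as np

# Time-average the linear and second-order pressure terms at a fixed
# point in the standing wave. Amplitudes P1, P2 and the position
# k*x = pi/4 are illustrative assumptions, with P2 << P1.
f = 1.0e6                      # frequency (Hz)
P1, P2 = 1.0, 0.01             # illustrative amplitudes (Pa)
kx = np.pi / 4                 # fixed observation point

t = np.linspace(0.0, 1.0 / f, 10001)   # one acoustic period
p1 = P1 * np.cos(kx) * np.sin(2 * np.pi * f * t)
p2 = P2 * np.cos(kx) ** 2 * (1 - np.cos(4 * np.pi * f * t))

avg_p1 = p1.mean()   # cycle average of the linear term: ~0
avg_p2 = p2.mean()   # cycle average of the nonlinear term: finite
print(avg_p1, avg_p2)
```

The linear term averages to essentially zero over a cycle, while the nonlinear term keeps a finite mean value, which is exactly the mechanism described above.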
The simple analysis above demonstrates the main mechanism underlying the acoustic radiation force phenomenon. Intuitively, we realize that no force will appear if the particle has the same acoustic properties as the surrounding medium. In other words, the radiation force should be a function of not only the size of a particle and the amplitude of the acoustic field, but also of the particle’s acoustic contrast (the ratio of the material properties of the particle relative to the surrounding fluid).
Due to the acoustic contrast, the field incident on the particle is reflected from its surface, and the radiation force results from the combination of the incident and reflected waves. This makes the problem quite difficult to solve analytically. A closed-form analytical solution has only been given for some limiting cases by a number of authors, starting with King. He considered rigid spherical particles with dimensions much smaller than the wavelength of the incident wave, but much larger than the viscous and thermal skin depths. It was the second assumption that allowed viscous and thermal effects to be neglected.
King’s results were extended to compressible particles by K. Yosioka and Y. Kawasima in “Acoustic radiation pressure on a compressible sphere”. The results from this study were later confirmed by L. P. Gor’kov in 1962 in “On the forces acting on a small particle in an acoustical field in an ideal fluid”. Viscous and thermal effects become important when the size of the particle becomes comparable to the acoustic boundary layers (thermal and viscous). Results including viscosity were presented in 2012 by M. Settnes and H. Bruus.
Gor’kov developed an elegant approach to expressing the radiation force in terms of the time-averaged kinetic and potential energies of stationary acoustic fields of any geometry. His results, when applied to small compressible fluid particles, give the force as the negative gradient of a potential function U_\text{rad}:
\mathbf F_\text{rad} = -\nabla U_\text{rad} \qquad (1)
The potential function U_\text{rad} is expressed using the acoustic pressure and velocity as:
U_\text{rad} = V_p \left[ \frac{f_1}{2 \rho_0 c_0^2} \langle p^2 \rangle - \frac{3 \rho_0 f_2}{4} \langle v^2 \rangle \right] \qquad (2)
where V_p is the volume of the particle and the scattering coefficients are given by:
f_1 = 1 - \frac{K_0}{K_p}, \qquad f_2 = \frac{2(\rho_p - \rho_0)}{2 \rho_p + \rho_0} \qquad (3)
where K_0 and K_p are the bulk moduli of the fluid and the particle, respectively. The scattering coefficients f_1 and f_2 represent the monopole and dipole contributions, respectively. This approach, which is based on scattering theory, is only valid for particles that are small compared to the wavelength \lambda, in the limit a/\lambda \ll 1, where a is the radius of the particle.
The v and p terms that appear in Eq. (1) are the first-order terms that can be obtained by solving a linear acoustic problem. Results in this form are typically obtained using a perturbation method, which is widely used in physics. A thorough review and examples of this method applied to nonlinear problems in acoustics and microfluidics can be found in a textbook by Professor Henrik Bruus titled Theoretical Microfluidics.
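As a rough numeric illustration, the scattering coefficients and the well-known standing-wave form of Gor’kov’s force, F_z = 4\pi\Phi k a^3 E_\text{ac}\sin(2kz) with the contrast factor \Phi = f_1/3 + f_2/2, can be evaluated in a few lines. The nylon material data below are approximate textbook values (assumptions), not the parameters of the benchmark model discussed later:

```python
import numpy as np

# Gor'kov's small-particle radiation force in a 1D standing wave.
# Material data for nylon are rough textbook values (assumptions).
rho0, c0 = 998.0, 1481.0        # water: density (kg/m^3), sound speed (m/s)
rho_p, c_p = 1140.0, 2600.0     # nylon (approximate)
K0 = rho0 * c0 ** 2             # bulk moduli from rho*c^2
Kp = rho_p * c_p ** 2

f1 = 1.0 - K0 / Kp                                 # monopole coefficient
f2 = 2.0 * (rho_p - rho0) / (2.0 * rho_p + rho0)   # dipole coefficient

freq = 1.0e6                    # 1 MHz standing wave
p_a = 1.0e5                     # 1 bar pressure amplitude
a = 1.0e-6                      # particle radius (1 um << wavelength)
k = 2.0 * np.pi * freq / c0
E_ac = p_a ** 2 / (4.0 * rho0 * c0 ** 2)   # acoustic energy density

Phi = f1 / 3.0 + f2 / 2.0       # acoustophoretic contrast factor
z = c0 / freq / 8.0             # lambda/8 from a node: maximum force
Fz = 4.0 * np.pi * Phi * k * a ** 3 * E_ac * np.sin(2.0 * k * z)
print(f1, f2, Fz)
```

For a micron-sized nylon particle this gives a force on the order of tens of femtonewtons, with the positive contrast factor pushing the particle toward the pressure node.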
Eq. (1) is implemented in the COMSOL Multiphysics Particle Tracing for Fluid Flow interface to evaluate the acoustic radiation force on particles. But, as mentioned above, it only applies to acoustically small particles and neglects thermoviscous effects. An example can be seen in the Acoustic Levitator model. Knowing the radiation force is important when modeling and simulating systems that handle particles using this phenomenon, for instance, microfluidic systems that sort and handle cells and other particles. An example of this is discussed in the blog post Acoustofluidic Multiphysics Problem: Microparticle Acoustophoresis.
To extend the theory beyond the limit of acoustically small particles, a numerical approach is required. We will consider that next.
In general, all forces can be expressed using momentum fluxes as \mathbf F = \oint_S \mathbf T \cdot \mathbf{n} \, da, where the surface of integration, S, is the external surface of the particle.
Gor’kov used this fact to obtain a closed-form analytical expression for the force acting on a particle in an arbitrary acoustic field. To compute the nonlinear acoustic radiation force, the momentum flux due to the acoustic field has to be evaluated up to second-order terms. The main appeal of his result is that, as mentioned earlier, the second-order terms can be expressed using the solution of a linear problem.
To implement his method, all we need to do is solve the acoustic problem, use the results to compute the second-order momentum flux, and substitute the solution into the flux integral.
H. Bruus has shown that, when thermoviscous effects are neglected, the second-order flux terms are:
\langle \mathbf T_2 \rangle = -\langle p_2 \rangle \mathbf I = -\left( \frac{\langle p_1^2 \rangle}{2\rho_0 c_0^2} - \frac{\rho_0 \langle v_1^2 \rangle}{2} \right) \mathbf I \qquad (4)
The integral should be taken over a surface of a particle moving in response to the applied force. This means that the surface of integration is a function of time S = S(t). To overcome this difficulty, Yosioka and Kawasima have indicated that the integration can be transformed to an equilibrium surface S_0 that encloses the particle. Compensating for the error with the addition of a convective momentum flux term, the force, in total, becomes:
\mathbf F_\text{rad} = \oint_{S_0} \left( \langle \mathbf T_2 \rangle - \rho_0 \langle \mathbf v_1 \mathbf v_1 \rangle \right) \cdot \mathbf n \, da \qquad (5)
All that is left to do now is solve the acoustics problem to obtain the acoustic pressure and velocity and substitute them into the integral in Eq. (5). In contrast to the approach used in Eq. (1) to (3), the force expression given in Eq. (5) is valid for all particle sizes as long as the stress T is given. This approach was recently implemented in COMSOL Multiphysics by a group of researchers from the University of Southampton.
It should be noted that the expression in Eq. (4) only holds when viscous and thermal effects are neglected. If these losses are included, the integration surface S_0 should be taken outside of the boundary layers, or a correct full stress expression for T should be used on the particle surface. A first-principles perturbation approach including thermal and viscous losses was presented at the 2013 ICA-ASA conference by M. J. Herring Jensen and H. Bruus, titled “First-principle simulation of the acoustic radiation force on microparticles in ultrasonic standing waves”. A detailed derivation of the governing equations up to second order, in a form suited for implementation in COMSOL Multiphysics, is given in the paper “Numerical study of thermoviscous effects in ultrasound-induced acoustic streaming in microchannels“.
To benchmark the method presented by Glynne-Jones et al., let’s compute the acoustic radiation force exerted by a standing wave on a spherical nylon particle immersed in water. We assume a frequency of 1 MHz and a pressure amplitude of 1 bar and implement the model using the Acoustic-Structure Interaction interface in a 2D axisymmetric geometry. The box in the model is four wavelengths high and two wavelengths wide.
Let’s excite a standing wave in this box using a Background Pressure Field condition, set up in such a way that the particle is at a distance of \lambda/8 from the pressure node.
The integrals in Eq. (5) are computed by setting up integration coupling operators in the Component 1 > Definitions node. We need to make sure that the integral is calculated in the revolved geometry by checking the appropriate box and selecting the boundaries of the particle to define the surface of integration.
It is worth noting that the force computed with this method is independent of the surface of integration, due to the conservation of flux, as long as the surface is located outside the particle. In fact, using a surface at a larger distance will be more numerically accurate, simply because there will be more points available for the numerical evaluation of the integral. To perform this integration, we can add another surface external to the particle.
Finally, new flux variables are introduced in the Component 1 > Definitions > Variables 1a node. They are used as arguments for the integration operators to compute the total force.
We are now ready to compare the perturbation approach to an analytical solution.
As expected, they compare reasonably well for small particle radii, where the analytical solution considered is valid. Some analytical models that include higher harmonics in the scattered field decomposition offer solutions that agree with the outlined numerical approach for both large and small spherical particles (see the paper by T. Hasegawa, “Comparison of two solutions for acoustic radiation pressure on a sphere“).
A small discrepancy for small particle radii between analytical and numerical methods may be attributed to the fact that the theoretical models assume that the particle is plastic, whereas in this example, we have considered an elastic particle with bulk modulus of 0.4.
The perturbation method has a number of advantages.
First, it exploits the linear acoustics method to evaluate nonlinear second-order force effects. This allows the analysis to be easily extended to 3D for particles of arbitrary shapes and material composition. For example, we can extend it to simulate acoustic radiation forces on biological cells or microbubbles.
Second, because the acoustic equations are solved in the frequency domain where very efficient numerical methods are well established, the solution time in COMSOL Multiphysics is quite fast even in 3D.
Meanwhile, the disadvantage of this method is that it is driven by theoretical results that rely on a set of simplifying assumptions, and it can only be validated in a limited number of cases. What we would like to have is a numerical method that allows the problem to be solved directly.
We shall see how this can be achieved in the next blog post. Stay tuned!
Previous work on cloaking for flexural waves in elastic plates suffered from limitations and achieved only near invisibility. Now, a research group in Europe has figured out a new theoretical framework to both overcome the limitations and achieve exact cloaking for flexural waves in Kirchhoff-Love plates. To visualize and test the quality of the cloak, they ran COMSOL Multiphysics simulations.
Picture this: There are flexural waves emanating from a source in a thin elastic plate. If you place an object in the plate, it will disturb the waves and you will be able to see it. If a cloak is instead placed around the object, the waves will not be disturbed, thus rendering the object invisible.
Models illustrating an object without and with cloaking. Provided by Daniel J. Colquitt.
While that’s an easy enough concept to grasp, it is not so easy to realize. In order to cloak an object, you need to construct the right metamaterial (metamaterials do not exist in nature; they are artificial materials, engineered to have specific properties).
Metamaterials can be designed such that their material properties mimic the spatial variations created by coordinate transformations, thereby directing light, sound, or other waves in a specific manner. The initial configuration of the electromagnetic or acoustic fields is mapped on a Cartesian mesh, which is then twisted to transform the coordinates.
Illustration of an untwisted Cartesian mesh.
Twisted mesh. Provided by Daniel J. Colquitt.
When it comes to electromagnetic and acoustic cloaking, you would use Maxwell’s equations and the Helmholtz equation, respectively. Both of these equations are invariant when coordinates transform, meaning they are not altered in the process.
However, if you want to mathematically model cloaking for mechanical waves, such as flexural waves in elastic plates, you need a fourth-order partial differential equation (PDE). The general form of the PDE is not invariant; it changes during the coordinate transformation.
Earlier work with applying transformation elastodynamics to achieve cloaking came with strings attached. Following some previous frameworks, calculations yielded tensorial densities and nonsymmetric stresses. Other research produced a framework where the PDE could be applied to thin plates, but only for nonlinear theories.
COMSOL Multiphysics user Daniel J. Colquitt and his colleagues are finally breaking through the limitations of this type of cloaking design.
In the paper “Transformation elastodynamics and cloaking for flexural waves”, D.J. Colquitt, M. Brun, M. Gei, A.B. Movchan, N.V. Movchan, and I.S. Jones present a new framework for transformation elastodynamics for thin plates and an algorithm for broadband cloak design. The thin elastic plates are represented by Kirchhoff-Love plates — a 2D mathematical model that is commonly used to characterize thin plates.
“The equations which govern the flexural displacement of thin elastic plates are very different to those for light and acoustics (e.g., Helmholtz equation),” Colquitt explained to me in an email conversation. He added, “Indeed, the equation of motion for flexural waves involves the biharmonic operator, which is a fourth-order partial differential operator; this is fairly unusual for physical systems where the equations of motion are usually first- or second-order in space (e.g. Electromagnetism, Acoustics, Elasticity).”
So how did the researchers overcome the issue of equation variance during the coordinate transformation? They gave the transformed equation a physical interpretation by introducing a generalized plate model. In this new framework, the invisibility cloak is created by applying a specific combination of pre-stress and body forces to a metamaterial plate. Unlike other frameworks, the new one is completely linear. They also show that their new algorithm successfully ensures symmetric stresses and scalar densities.
To visualize the cloaking results and test the quality of the cloak, Colquitt et al. created simulations using COMSOL Multiphysics software. Apparently, when deciding which software to use for testing out their cloak, flexibility was a key factor.
“The invisibility cloak requires the implementation of an inhomogeneous, anisotropic plate subjected to inhomogeneous and anisotropic pre-stress and body forces. As one can imagine, the governing equations for this system are nonstandard,” Colquitt said. “COMSOL allowed us to directly implement the precise form of the transformed governing equations. This is very attractive for a mathematician — the ability to directly control the system of equations being solved.”
Below are some images and an animation illustrating their results.
Cloaking results. Provided by Daniel J. Colquitt.
Animation of the square cloaked object moving in the flexural waves in the thin plate. Courtesy of Daniel J. Colquitt.
So, the researchers have successfully developed a new theoretical framework that will allow experimentalists to design and build mechanical invisibility cloaks that can hide or protect objects from mechanical vibrations.
As far as putting the framework to use, Colquitt said: “There are many potential applications, such as cloaking, or the isolation of sensitive pieces of equipment from mechanical vibrations by routing the flexural waves around the equipment.”
In short, this is very fascinating research and I highly recommend reading the paper (link below) to get the full scope of their work.
When buying a home, there are several modern conveniences that people look for. One of these is often a washing machine. Having this device within your own home eliminates the time-consuming task of taking your clothes to the laundromat or washing them by hand, giving you the freedom to tackle other tasks around the house — or better yet: relax.
Imagine that it’s laundry day and you are fortunate enough to have a washer in your home. How would you notice that your laundry cycle is complete? One common tell-tale sign is silence.
During the cycle, your washing machine is likely to produce vibration and noise, a result of the uneven distribution of clothes and the structural properties of the machine. We now have a multibody dynamics model that enables you to analyze this common issue via simulations.
The Vibration in a Washing Machine Assembly model depicts a horizontal-axis portable washing machine. The model’s geometry accounts for various parts of the washing machine assembly (including the housing), with varying components defined by different colors.
The geometry of the washing machine.
The machine’s parts are represented as follows:
| Part | Color |
|---|---|
| Clothes | Red |
| Drum | White |
| Tub | Cyan |
| Motor | Yellow |
| Pistons | Green |
| Cylinders | Magenta |
| Mountings | Blue |
| Base supports | Black |
| Housing | Gray |
Within this multibody dynamics example, we assume that the housing is modeled as a flexible shell and that the other components of the washing machine are modeled using rigid solids. Additionally, we assume that the clothes do not move in relation to the drum.
The diagram below explains the connection details between various parts of the machine through the use of joints and springs.
In an eigenfrequency analysis, one mode shows the translation of the tub, while another shows the rotational motion of the tub about the vertical axis. Each of these eigenmodes highlights the corresponding deformation of the housing, which is relatively small compared to the motion of the tub.
On the left: The translational tub mode. On the right: The rotational tub mode.
The next set of figures shows the displacement magnitude of the tub with the angular position of the unbalanced clothes for the full time duration. Here, the color red represents the initial time of the trajectory and the color blue illustrates the final time.
To identify the vibrations induced in the housing during the machine’s spinning cycle, we can perform a transient analysis. The graph below depicts the housing deformation in multiple directions at a point on the right side wall.
Next, we can analyze the normal acceleration (a measure of noise emitted by the side walls) of the housing at a point in the middle of the right side wall. These results are illustrated in the graph below on the left. Meanwhile, the graph on the right depicts the frequency spectrum of this acceleration.
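Outside of COMSOL Multiphysics, the same kind of frequency spectrum can be obtained from any exported transient signal with a discrete Fourier transform. A generic sketch follows; the synthetic signal (a 1.67 Hz fundamental plus a weaker 10 Hz component) is an assumption standing in for the actual acceleration data:

```python
import numpy as np

# Generic postprocessing sketch: extract the dominant frequency of a
# transient signal via the FFT. The synthetic signal below stands in
# for exported acceleration data (an assumption, not model output).
fs = 200.0                          # sampling rate (Hz)
t = np.arange(0.0, 30.0, 1.0 / fs)  # 30 s of transient data
acc = np.sin(2 * np.pi * 1.67 * t) + 0.3 * np.sin(2 * np.pi * 10.0 * t)

spectrum = np.abs(np.fft.rfft(acc)) / len(t)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]            # dominant frequency
print(f_peak)
```

With a 30-second record, the frequency resolution is 1/30 Hz, so the dominant peak lands within a few hundredths of a hertz of the true 1.67 Hz fundamental.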
We can note that the frequencies of the side wall vibration are predominantly within the range of 0-30 Hz, with a peak at around 1.67 Hz, which can be considered the excitation frequency. This corresponds to the frequency expected for the drum and its unbalanced load at the set rotational velocity. It is interesting to note, however, that the side wall also vibrates at frequencies higher than the excitation frequency, which leads to an additional emission of noise.
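If the drum's spin speed is known, the expected excitation frequency is simple to check, since an unbalanced load forces the structure once per revolution. The 100 rpm value below is an assumption chosen to match the observed ~1.67 Hz peak, not a parameter taken from the model:

```python
# An unbalanced rotating load forces the structure once per revolution,
# so the excitation frequency is the rotation rate in Hz. The 100 rpm
# spin speed is an assumed value for illustration.
rpm = 100.0
f_exc = rpm / 60.0                           # excitation frequency (Hz)
harmonics = [n * f_exc for n in range(1, 4)] # where higher peaks may appear
print(f_exc, harmonics)
```

This also explains why the spectrum can contain content above the excitation frequency: integer harmonics of the once-per-revolution forcing, filtered through the structure's own resonances.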
With this in mind, the washing machine can be designed to mainly emit noise at lower, inaudible frequencies. The choice of materials and design must also ensure that the washing machine is structurally strong enough to limit the vibration magnitude at all frequencies in order to prevent the failure of its components.
The nonlinear stress-strain behavior in solids was already described 100 years ago by Paul Ludwik in his Elemente der Technologischen Mechanik. In that treatise, Ludwik described the nonlinear relation between shear stress \tau and shear strain \gamma observed in torsion tests with what is nowadays called Ludwik’s Law:
\tau = k\,\gamma^{1/n} \qquad (1)
For n=1, the stress-strain curve is linear; for n=2, the curve is a parabola; and for n=\infty, the curve represents a perfectly plastic material. Ludwik was, in fact, describing the flow curve (Fließkurve) of what we now call a pseudoplastic material.
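Taking Ludwik’s law in the form \tau = k\gamma^{1/n} (consistent with the three limits just listed), the regimes are easy to see numerically; k = 1 is an arbitrary normalization (an assumption for illustration):

```python
import numpy as np

# Ludwik's law tau = k * gamma**(1/n) for the three regimes above.
# k = 1 is an arbitrary normalization (assumption).
k = 1.0
gamma = np.linspace(0.0, 1.0, 101)
curves = {n: k * gamma ** (1.0 / n) for n in (1, 2, 100)}

# Stress at half the reference strain for each exponent:
# n=1 -> linear, n=2 -> square-root (parabolic) growth,
# large n -> stress nearly independent of strain (perfectly plastic).
tau_at_half = {n: c[50] for n, c in curves.items()}
print(tau_at_half)
```

At half the reference strain, the n = 1 curve gives half the stress, the n = 2 curve about 71% of it, and the n = 100 curve over 99%, i.e., an almost flat, perfectly plastic response.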
In version 5.0 of the COMSOL Multiphysics simulation software, besides Ludwik’s power law, the Nonlinear Structural Materials Module includes several other material models within the family of nonlinear elasticity.
In the Geomechanics Module, we have now included material models intended to represent nonlinear deformations in soils.
The main difference between a nonlinear elastic material and an elastoplastic material (either in metal or soil plasticity) is the reversibility of the deformations. While a nonlinear elastic solid would return to its original shape after a load-unload cycle, an elastoplastic solid would suffer from permanent deformations, and the stress-strain curve would present hysteretic behavior and ratcheting.
Let’s open the Elastoplastic Analysis of a Plate with a Center Hole model, available in the Nonlinear Structural Materials Model Library as elastoplastic_plate, and modify it to solve for one load-unload cycle. Let’s also add one of the new material models included in version 5.0, the Uniaxial data model, and use the stress_strain_curve already defined in the model.
Here’s a screenshot of what those selections look like:
In our example, the stress_strain_curve represents a bilinear response of the axial stress as a function of the axial strain; its linear hardening branch corresponds to Ludwik’s law with n=1.
We can compare the stress distribution after laterally loading the plate to a maximum value. The results are pretty much the same, but the main difference is observed after a full load-unload cycle.
Top: Elastoplastic material. Bottom: Uniaxial data model.
Let’s pick the point where we observed the highest stress and plot the x-direction stress component versus the corresponding strain. The green curve shows a nonlinear, yet elastic, relation between stress and strain (the stress path goes from a\rightarrow b \rightarrow a \rightarrow c \rightarrow a). The blue curve portrays the hysteresis loop observed in elastoplastic materials with isotropic hardening (the stress path goes from a\rightarrow b \rightarrow d \rightarrow e).
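The difference in reversibility can be sketched with a minimal 1D model: a return-mapping elastoplastic update with linear isotropic hardening versus a path-independent nonlinear elastic law built from the same bilinear curve. All material parameters are illustrative assumptions, not those of the plate model:

```python
import numpy as np

# 1D comparison: elastoplastic (isotropic hardening, return mapping)
# vs. nonlinear elastic with the same bilinear stress-strain curve.
E = 200e9          # Young's modulus (Pa), illustrative
H = 20e9           # isotropic hardening modulus (Pa), illustrative
sig_y0 = 250e6     # initial yield stress (Pa), illustrative

def elastoplastic(strain_path):
    """Backward-Euler return mapping with linear isotropic hardening."""
    eps_p, alpha = 0.0, 0.0          # plastic strain, hardening variable
    stresses = []
    for eps in strain_path:
        sig = E * (eps - eps_p)      # elastic trial stress
        f_trial = abs(sig) - (sig_y0 + H * alpha)
        if f_trial > 0.0:            # plastic step: project back
            dgamma = f_trial / (E + H)
            eps_p += dgamma * np.sign(sig)
            alpha += dgamma
            sig = E * (eps - eps_p)
        stresses.append(sig)
    return np.array(stresses)

def nonlinear_elastic(strain_path):
    """Same bilinear curve, but fully reversible (path-independent)."""
    eps_y = sig_y0 / E
    ET = E * H / (E + H)             # tangent modulus past yield
    out = []
    for eps in strain_path:
        s, e = np.sign(eps), abs(eps)
        out.append(s * E * e if e <= eps_y
                   else s * (sig_y0 + ET * (e - eps_y)))
    return np.array(out)

# One load-unload cycle to 2x the yield strain and back to zero
path = np.concatenate([np.linspace(0, 2.5e-3, 50),
                       np.linspace(2.5e-3, 0, 50)])
sig_ep = elastoplastic(path)
sig_ne = nonlinear_elastic(path)
print(sig_ep[-1], sig_ne[-1])   # residual stress: nonzero vs. zero
```

Both models follow the same curve on loading, but after unloading the elastoplastic point carries a large residual stress at zero strain, while the nonlinear elastic point returns exactly to the origin.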
With the Uniaxial data model, you can also define your own stress-strain curve obtained from experimental data, even if it is not symmetric in both tension and compression.
Heat transfer computation often needs to include surface-to-surface radiation to reflect reality with accuracy. The numerical tools used to simulate surface-to-surface radiation differ significantly from those used for conduction or convection. Whereas the latter are based on local discretization of partial differential equations (PDEs), surface-to-surface radiation relies on non-local quantities — the view factors between diffuse surfaces that emit and receive radiation.
When surface-to-surface radiation is activated, the heat transfer interface creates a set of operators that are evaluated in the same way as the irradiation variables in surface-to-surface radiation. Thanks to these operators, it is possible to retrieve the irradiation variable values and to compute the geometrical view factors in a given geometry.
In this blog post, I’ll explain how to compute geometrical view factors in a simple 3D geometry where analytical values of the view factor are available.
The new operators in COMSOL Multiphysics version 5.0 offer full access to all of the information used to generate surface-to-surface radiation equations. This is true for even the more advanced configurations, such as radiation on both sides of a shell and multiple spectral bands with different opacity properties.
Let’s start with the simplest case where we assume that surfaces behave like gray surfaces. In this case, we don’t need to distinguish between the spectral bands. We have two operators, one for each face (up or down) of the surfaces. They are as follows:
`radopu(expr_up,expr_down)`

`radopd(expr_up,expr_down)`
These two operators are designed to be evaluated on a boundary where the surface-to-surface radiation is active. Assuming that the heat interface tag is `ht`, `ht.radopu(ht.Ju,ht.Jd)` returns the mutual surface irradiation, `ht.Gm_u`, that is received at the evaluation point on the upside of the boundary. Note that `ht.Ju` and `ht.Jd` define the radiosity on the up- and downsides of the boundaries, respectively. Similarly, `ht.radopd(ht.Ju,ht.Jd)` returns the mutual surface irradiation, `ht.Gm_d`, on the boundary downside.
When multiple spectral bands are considered, a given boundary can be opaque for one spectral band and transparent for another. Hence, one pair of operators is needed per spectral band. These work exactly like the gray-surface operators and are named as follows:
- `ht.radopB1u(expr_up,expr_down)` and `ht.radopB1d(expr_up,expr_down)`
- `ht.radopB2u(expr_up,expr_down)` and `ht.radopB2d(expr_up,expr_down)`
- `ht.radopB3u(expr_up,expr_down)` and `ht.radopB3d(expr_up,expr_down)`
Let’s consider two diffuse gray surfaces: S1 and S2. We’ll assume that radiation occurs only on the upside of these surfaces. From a thermal perspective, the view factor between S1 and S2, F_{S1-S2}, is the ratio between the diffuse energy leaving S1 and intercepted by S2 and the total diffuse energy leaving S1.
Using the operators described above, we have
F_{S1-S2} = \frac{\int_{S_2} \textrm{radopu}(I_{S_1} J_\textrm{u},\, 0) \, da}{\int_{S_1} J_\textrm{u} \, da} \qquad (1)
Note that, for clarity, the `ht.` prefix has been removed.
Assuming that the radiosity takes the same value on all surfaces, the above definition simplifies and no longer depends on J. In that case, F_{S1-S2} depends only on the geometrical configuration and no longer on the thermal configuration. Let’s call this the geometrical view factor to distinguish it from the view factor based on thermal radiation.
We now have
F_{S1-S2} = \frac{1}{S_1} \int_{S_2} \textrm{radopu}(I_{S_1},\, 0) \, da \qquad (2)
where S1 represents either the surface name or its area and I_{\textrm{S1}} is the function indicator of the surface S1, which returns 1 when it is evaluated on S1 and 0 elsewhere.
In order to get used to the new operators and check their accuracy, I chose a simple configuration. The geometry consists of two concentric spheres of radius R_{int} and R_{ext} (with R_{int} < R_{ext}), as shown below:
The radiation occurs between the external side of the small sphere and the internal side of the large sphere. The geometrical view factors are:

F_{S_\textrm{int}-S_\textrm{int}} = 0, \quad F_{S_\textrm{int}-S_\textrm{ext}} = 1, \quad F_{S_\textrm{ext}-S_\textrm{int}} = \left(\frac{R_{int}}{R_{ext}}\right)^2, \quad F_{S_\textrm{ext}-S_\textrm{ext}} = 1-\left(\frac{R_{int}}{R_{ext}}\right)^2
Here, S_{int} and S_{ext} represent the interior and exterior sphere, respectively.
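As a quick cross-check, the analytical view factors for concentric spheres follow from the summation and reciprocity rules alone. The radius ratio R_{int}/R_{ext} = 0.3 below is an assumption chosen to match the results table later in the post:

```python
# Analytical geometrical view factors for concentric spheres.
# The radius ratio is an assumed value matching the results table.
ratio = 0.3                      # R_int / R_ext (assumption)

F_int_int = 0.0                  # a convex sphere cannot see itself
F_int_ext = 1.0                  # everything leaving S_int hits S_ext
# Reciprocity: S_ext * F_ext_int = S_int * F_int_ext, with areas ~ R^2
F_ext_int = ratio ** 2
F_ext_ext = 1.0 - F_ext_int      # summation rule: each row sums to 1
print(F_ext_int, F_ext_ext)
```

This reproduces the 0.09 and 0.91 reference values used for the comparison below.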
To compute the geometrical view factor in the COMSOL Multiphysics simulation software, we need to add a heat interface with surface-to-surface radiation activated, then draw the geometry and build the mesh.
Then, we don’t really need to run a heat transfer simulation, since we are only interested in the geometrical view factor; computing the initial values is enough to gain access to the `radopu` and `radopd` operators.
Before doing that, though, we’d better prepare a few tools that will help us with the postprocessing.
In the geometrical view factor expression, we have used the surface indicators I_{\textrm{S1}} and I_{\textrm{S2}}. These are defined as Variables in COMSOL Multiphysics using the Geometric Entity Selection, so that each indicator is 0 everywhere except on the corresponding surface, where it is 1. Let’s name them `ext` and `int`.
Screenshots of the Geometric Entity Selection settings for the surface indicators.
Next, we define the integration operators `intop_ext` and `intop_int`. They make it easy to compute surface integrals; for example, the area of S_{ext} can be evaluated as `intop_ext(1)`.
Screenshots of the settings for the integration operators.
We have seen that radiation may occur on the upside, downside, or on both sides of the boundaries. The radiation operators are designed to be able to distinguish the radiation coming from each side. Therefore, we need to check the sides on this model.
We can do this easily via the Diffuse Surface feature, where the radiation direction can be set to “Negative normal direction” (downside) or “Positive normal direction” (upside). Using this option prompts arrows to be automatically displayed to show the direction of radiation leaving the surface. In our example, the radiation occurs on the downside for S_{ext} and the upside for S_{int}.
With all of these tools available to us, evaluating the view factor using an expression similar to Eq. (1) is straightforward. For example,

F_{S_\textrm{ext}-S_\textrm{ext}} = \frac{1}{S_\textrm{ext}} \int_{S_\textrm{ext}} \textrm{radopd}(0,\, I_{S_\textrm{ext}}) \, da \qquad (3)

is evaluated in COMSOL Multiphysics syntax as `intop_ext(comp1.ht.radopd(0,ext))/intop_ext(1)`.

Note that `radopd` is used because the radiation occurs on the downside of S_{\textrm{ext}}; the first argument of `radopd` is 0 for the same reason.
Similarly,

F_{S_\textrm{int}-S_\textrm{ext}} = \frac{1}{S_\textrm{int}} \int_{S_\textrm{ext}} \textrm{radopd}(I_{S_\textrm{int}},\, 0) \, da \qquad (4)

is evaluated with `intop_ext(comp1.ht.radopd(int,0))/intop_int(1)`. Here, `radopd` is again used because the radiation occurs on the downside of S_{\textrm{ext}}, but this time the second argument of `radopd` is 0 because the radiation leaves the upside of S_{\textrm{int}}.
I’ve gathered all the results in a table:

| | Analytical value | Computed value | Error |
|---|---|---|---|
| F_{S_{\textrm{int}}-S_{\textrm{int}}} | 0 | 0 | 0 |
| F_{S_{\textrm{int}}-S_{\textrm{ext}}} | 1 | 0.998 | 2\cdot10^{-3} |
| F_{S_{\textrm{ext}}-S_{\textrm{ext}}} | 0.91 | 0.9102 | 2\cdot10^{-4} |
| F_{S_{\textrm{ext}}-S_{\textrm{int}}} | 0.09 | 0.09 | 1\cdot10^{-6} |
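These analytical values can be cross-checked with basic view factor algebra. Assuming, purely for illustration, an enclosure of two long concentric cylinders with an area ratio A_{int}/A_{ext} = 0.09 (a hypothetical stand-in for the actual geometry), the reciprocity and summation rules reproduce the table:

```python
# View factor cross-check for a two-surface enclosure (assumed concentric
# cylinders with an area ratio of 0.09 -- illustrative geometry only).
A_int, A_ext = 0.09, 1.0   # areas per unit length (arbitrary units)

F_int_int = 0.0                        # a convex surface cannot see itself
F_int_ext = 1.0 - F_int_int            # summation rule on S_int
F_ext_int = A_int / A_ext * F_int_ext  # reciprocity: A1 * F12 = A2 * F21
F_ext_ext = 1.0 - F_ext_int            # summation rule on S_ext

print(F_ext_int, F_ext_ext)  # 0.09 and 0.91, matching the analytical column
```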
Thanks to the radiation operators, we were able to retrieve the geometrical view factors.
With COMSOL Multiphysics 5.0, it is possible to compute geometrical view factors between diffuse surfaces, thanks to dedicated operators. This answers a request we have received ever since the surface-to-surface radiation features were released. These operators can do much more, though: they are flexible enough to provide all of the terms of the surface-to-surface radiation equation, and they can also be used to formulate equations for other quantities.
Whether traveling a far distance between work and home or simply caught in a continuous wave of traffic, many of us spend a great deal of time on the road each day. On days when your evenings are free, this might be a time to relax and listen to a favorite CD or radio station. On days when your schedule is more hectic, however, the lengthy commute can become a source of frustration as you race to make it to your destination on time. No matter the situation, one element you rely on to make your car ride more enjoyable is comfort within your vehicle.
While on the road, you may have noticed vibrations that sometimes arise from your seat. These vibrations can be traced to a number of sources, including road conditions, speed, engine vibrations, and the design of the vehicle's seats. Beyond being a source of discomfort, prolonged exposure to such oscillations can be hazardous to one's health, potentially resulting in fatigue or pain. With growing concern about the impact of these vibrations, some vehicle manufacturers have begun to incorporate vibration isolators into the design of their seats in an effort to minimize this effect.
A vehicle’s seat can be a source of vibrations. (“Sedile in pelle di un’Alfa Romeo Giulietta” by Pava — Own work. Licensed under Creative Commons Attribution Share-Alike 3.0, via Wikimedia Commons).
With this biomechanical model built using the Multibody Dynamics Module, we can simulate the human body’s response to such vibrations, thereby helping to optimize the design of vibration isolators as well as analyze ride quality in vehicles.
The Biomechanical model of the human body in a sitting posture makes this analysis possible. An important element in the design of this model is addressing the complexity of the human body and how the different body parts are connected. In this example, we focus on the vibrational impact in six different areas of the body: the head, the torso, the viscera, the pelvis, the thighs, and the legs. Each element is treated as a lumped mass and defined as a rigid body.
To approximate the connections between varying body parts, we apply translational and rotational dampers and springs on the relative motion between the two connected body parts — a connection that is modeled with the elastic version of a fixed joint. This provides the translational and rotational stiffness and damping values between the connected body parts.
A fixed joint is used to model the connections between the body parts directly touching the seat (the legs, the thighs, and the pelvis) and the seat itself, which is the source of the vibration. To model the seat’s cushioning effect, elasticity on the joints is included when needed.
Note that rather than modeling the seat itself, a base motion node is used where the input excitation is 1 m/s^{2} in the vertical direction at three different locations.
We begin with an eigenfrequency analysis designed to determine the damped and undamped natural frequencies of vibration.
The figure below illustrates a rotational eigenmode on the undamped model. Considerable rotational movement is noted in the head and the torso. In comparison, little movement is found in other parts of the model.
We then shift our analysis to the translational eigenmode of the damped model. In the first major translational eigenmode, the results indicate a downward movement in the head, the pelvis, and the viscera, with no significant movement noted in the other body parts. This is illustrated in the figure below.
The second major translational eigenmode (shown below) notes displacement in the downward direction of the head, the torso, and the pelvis, whereas the viscera moves in the upward direction.
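The eigenfrequency analysis above can be illustrated outside COMSOL with a reduced lumped model. The two-DOF chain below, with made-up masses, stiffnesses, and damper coefficients (not the values of the six-mass model), shows how undamped natural frequencies and damped eigenvalues are extracted:

```python
import numpy as np

# Two-DOF lumped mass-spring-damper sketch (e.g., torso + head).
# All parameter values are illustrative assumptions.
m1, m2 = 35.0, 5.0                 # masses [kg]
k1, k2 = 6.0e4, 4.0e4              # spring stiffnesses [N/m]
c1, c2 = 300.0, 200.0              # damper coefficients [N*s/m]

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])

# Undamped natural frequencies from K*x = w^2 * M*x
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
f_undamped = np.sort(np.sqrt(w2.real)) / (2 * np.pi)

# Damped eigenvalues from the first-order (state-space) form
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
f_damped = np.unique(np.round(np.abs(lam.imag), 6)) / (2 * np.pi)
```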
This example then features a frequency response analysis around the natural frequencies to analyze three different elements: vertical transmissibility, rotational transmissibility, and apparent mass.
Let’s first focus our attention on vertical transmissibility, the ratio between the head's vertical acceleration and the seat's input acceleration. Plotted against the excitation frequency, the results show a primary resonance in the range of 4-6 Hz and a secondary resonance in the range of 8-10 Hz.
Vertical transmissibility vs. excitation frequency.
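Qualitatively, such a resonance peak is what a base-excited single-degree-of-freedom oscillator produces. In the sketch below, the natural frequency and damping ratio are assumptions chosen so the peak lands in the reported 4-6 Hz band:

```python
import numpy as np

# Transmissibility of a base-excited single-DOF oscillator:
# T(r) = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)), r = f/f_n.
f_n = 5.0    # assumed natural frequency [Hz]
zeta = 0.3   # assumed damping ratio

f = np.linspace(0.5, 20.0, 1000)
r = f / f_n
T = np.sqrt((1 + (2 * zeta * r) ** 2) / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

f_peak = f[np.argmax(T)]   # the peak sits slightly below f_n
```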
Rotational transmissibility is the ratio between the head’s angular acceleration and the seat’s input acceleration. With regards to this form of transmissibility, it is important to avoid high values as this can enhance discomfort as well as affect one’s vision. The plot below depicts its variation with the excitation frequency.
Rotational transmissibility vs. excitation frequency.
Finally, apparent mass refers to the ratio of the force at the seat to the seat's input acceleration. Rather than the end-point characteristics of the model, this quantity conveys its driving-point characteristics.
Apparent mass vs. excitation frequency.
In this blog post, we have introduced you to a biomechanical model of the human body, particularly highlighting its application within the automotive industry. We have shown you how, with the Multibody Dynamics interface, you can model various parts of the human body — and their connections — as well as analyze the body's dynamic response to whole-body vibrations.
The piezoelectric modeling interface seeks to:
This will allow you to successfully simulate piezoelectric devices as well as easily extend the simulation by coupling it with any other physics.
You may already be familiar with the three different modules that can be used for simulating piezoelectric materials:
Each of these modules gives you a predefined Piezoelectric Devices interface that you can use for modeling systems that include both piezoelectric and other structural materials. The Acoustics Module offers two predefined interfaces, namely the Acoustic-Piezoelectric Interaction, Frequency Domain interface and the Acoustic-Piezoelectric Interaction, Transient interface. These two allow you to model how piezoelectric acoustic transducers interact with the fluid media surrounding them.
The Piezoelectric Devices interface is available in the list of structural mechanics physics interfaces.
The Acoustic-Piezoelectric Interaction, Frequency Domain and the Acoustic-Piezoelectric Interaction, Transient interfaces are available in the list of acoustics physics interfaces.
These predefined multiphysics interfaces couple the relevant physics governing equations via constitutive laws or boundary conditions. Thus, they offer a good starting point for setting up more complex multiphysics problems involving piezoelectric materials. The new piezoelectric interfaces in COMSOL Multiphysics version 5.0 provide a transparent workflow to visualize the constituent physics interfaces. There is also a separate Multiphysics node that lists how the constituent physics interfaces are connected to each other.
Let’s find out how these multiphysics interfaces are structured.
Upon selecting the Piezoelectric Devices multiphysics interface, you see the constituent physics: Solid Mechanics and Electrostatics. You also see the Piezoelectric Effect branch listed under the Multiphysics node, which controls the connection between Solid Mechanics and Electrostatics.
Part of the model tree showing the physics interfaces and multiphysics couplings that appear upon selecting the Piezoelectric Devices interface.
By default, all modeling domains are assumed to be made of piezoelectric material. If that is not the case, you can deselect the non-piezo structural domains from the branch Solid Mechanics > Piezoelectric Material. These domains then get automatically assigned to the Solid Mechanics > Linear Elastic Material branch. This process ensures that all parts of the geometry are marked as either piezoelectric or non-piezo structural materials and that nothing is accidentally left undefined.
If you are working with other material models that are available with the Nonlinear Structural Materials Module, such as hyperelasticity, you can add that as a branch under Solid Mechanics and assign the relevant parts of your modeling geometry to this branch. The Solid Mechanics node gives us full flexibility to set up a model that involves not only piezoelectric material but also linear and nonlinear structural materials. The best part is that if these materials are geometrically touching each other, the COMSOL software will automatically take care of displacement compatibility across them.
If some parts of the model are not solid at all, like an air gap, you can deselect them in the Solid Mechanics node.
From the Solid Mechanics node, you will also assign any sort of mechanical loads and constraints to the model.
The Electrostatics node allows you to group together all the information related to electrical inputs to the model. This would include, for example, any electrical boundary conditions such as voltage and charge sources. By default, any geometric domain that has been assigned to the Solid Mechanics > Piezoelectric Material branch also gets assigned to the Electrostatics > Charge Conservation, Piezoelectric branch. If you have any other dielectric materials in the model that are not piezoelectric, you could assign them to the Electrostatics > Charge Conservation branch.
The Multiphysics > Piezoelectric Effect branch ensures that the structural and electrostatics equations are solved in a coupled fashion within the domains that are assigned to the Solid Mechanics > Piezoelectric Material (and also the Electrostatics > Charge Conservation, Piezoelectric) branch.
The multiphysics coupling is implemented using the well-known coupled constitutive law for piezoelectric materials. Note that the Electrostatics > Charge Conservation, Piezoelectric branch is mainly used as a placeholder for assigning geometric domains that belong to the piezoelectric material model. This helps the Multiphysics > Piezoelectric Effect branch understand whether a domain assigned to the Electrostatics interface is piezoelectric or not.
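For reference, this constitutive law can be written in the standard stress-charge form, where T is the stress, S the strain, E the electric field, and D the electric displacement (COMSOL also supports the equivalent strain-charge form):

```latex
% Stress-charge form of the piezoelectric constitutive relations
\mathbf{T} = c_E\,\mathbf{S} - e^{\mathsf{T}}\,\mathbf{E}, \qquad
\mathbf{D} = e\,\mathbf{S} + \varepsilon_S\,\mathbf{E}
```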
Note: For an example of working with the Piezoelectric Devices interface, check out the tutorial on modeling a Piezoelectric Shear Actuated Beam.
It is also possible to add effects of damping or other material losses in dynamic simulations. You can do so by adding one or more of the following subnodes under the Solid Mechanics > Piezoelectric Material branch:
Damping and losses that can be added to a piezoelectric material.
| Subnode Name | When to Use the Subnode |
|---|---|
| Mechanical Damping | Allows you to add purely structural damping. Choose between the Loss Factor (frequency domain only) and Rayleigh damping (both frequency and time domains) models. |
| Coupling Loss | Allows you to add electromechanical coupling loss. Choose between the Loss Factor (frequency domain only) and Rayleigh damping (both frequency and time domains) models. |
| Dielectric Loss | Allows you to add dielectric or polarization loss. Choose between the Loss Factor (frequency domain only) and Dispersion (both frequency and time domains) models. |
| Conduction Loss (Time-Harmonic) | Allows you to add electrical energy dissipation due to electrical resistance in a harmonically vibrating piezoelectric material (frequency domain only). |
Note: For an example of adding damping to piezoelectric models, check out the tutorial on modeling a Thin Film BAW Composite Resonator.
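As a side note on the Rayleigh damping option: if what you know is a loss factor measured at two frequencies, equivalent Rayleigh coefficients can be fitted from the approximate relation \eta(\omega) \approx \alpha/\omega + \beta\omega. The frequencies and loss factors below are illustrative assumptions, not data from the tutorial:

```python
import numpy as np

# Fit Rayleigh coefficients (alpha, beta) so that the equivalent loss factor
# eta(omega) = alpha/omega + beta*omega matches two target values.
f1, f2 = 1.0e6, 2.0e6        # [Hz], assumed operating band of a resonator
eta1, eta2 = 0.010, 0.015    # assumed target loss factors at f1 and f2

w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
A = np.array([[1 / w1, w1], [1 / w2, w2]])
alpha, beta = np.linalg.solve(A, [eta1, eta2])

def eta(w):
    return alpha / w + beta * w  # reproduces the two targets exactly
```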
Additional damping arises from the interaction between a piezoelectric device and its surroundings. This can be modeled in greater detail using the Acoustic-Piezoelectric Interaction interfaces.
Upon selecting one of the Acoustic-Piezoelectric Interaction interfaces, you see the constituent physics: Pressure Acoustics, Solid Mechanics and Electrostatics. You also see the Acoustic-Structure Boundary and Piezoelectric Effect branches listed under the Multiphysics node.
Part of the model tree showing the physics interfaces and multiphysics couplings that appear when selecting the Acoustic-Piezoelectric Interaction, Frequency Domain and the Acoustic-Piezoelectric Interaction, Transient interfaces.
By default, all modeling domains are assigned to the Pressure Acoustics interface as well as the Solid Mechanics > Piezoelectric Material and Electrostatics > Charge Conservation, Piezoelectric branches. Note that the Pressure Acoustics interface is designed to simulate acoustic waves propagating in fluid media.
Since COMSOL Multiphysics cannot know a priori which parts of the modeling geometry belong to the fluid domain and which ones are solids, you are expected to provide that information by deselecting the solid domains from the Pressure Acoustics, Frequency Domain (or Pressure Acoustics, Transient) branch and deselecting the fluid domains from the Solid Mechanics and Electrostatics branches.
Once you do that, the boundaries at the interface between the solid and fluid domains are detected and assigned to the Multiphysics > Acoustic-Structure Boundary branch. This branch controls the coupling between the Pressure Acoustics and Solid Mechanics physics interfaces. It does so by considering the acoustic pressure of the fluid to be acting as a mechanical load on the solid surfaces, while the component of the acceleration vector that is normal (perpendicular) to the same surfaces acts as a sound source that produces pressure waves in the fluid.
Note: For an example of Acoustic-Piezoelectric Interaction, check out the tutorial on modeling a Tonpilz Transducer.
The transparency in the workflow as discussed above also paves the way for adding more physics and creating your own multiphysics couplings.
For example, let’s say there is some heat source within your piezoelectric device that produces nonuniform temperature distribution within the device. In order to model this, you can add another physics interface called Heat Transfer in Solids in the model tree and prescribe appropriate heat sources and sinks to find out the temperature profile. You could then add a Thermal Expansion branch under the Multiphysics node to compute additional strains in different parts of the device as a result of the temperature variation.
The Multiphysics > Thermal Expansion branch couples the Heat Transfer in Solids and the Solid Mechanics interfaces. It might also be possible that the piezoelectric material properties have a temperature dependency. You could represent these properties as functions of temperature and let the Multiphysics > Temperature Coupling branch pass on the information related to temperature distribution in the modeling geometry to the Solid Mechanics or even the Electrostatics branches, thereby producing additional multiphysics couplings.
Part of the model tree showing the physics interfaces and multiphysics couplings that you can use to combine piezoelectric modeling with thermal expansion and temperature-dependent material properties.
Similar to adding more physics and multiphysics couplings, it is also possible to disable one or more multiphysics couplings — or even any of the physics interfaces shown in the model tree. This could be immensely helpful for debugging large and complex models.
The model tree on the left shows a scenario where the Piezoelectric Effect multiphysics coupling is disabled. The model tree on the right shows a scenario where the Electrostatics physics interface is disabled.
For example, you can disable the Multiphysics > Piezoelectric Effect branch and solve for the Solid Mechanics and Electrostatics physics interfaces in an uncoupled sense. You could also solve a model by disabling either the Solid Mechanics or the Electrostatics interface.
Running such case studies could help in evaluating how the device would respond to certain inputs if there were no piezoelectric material in place. This approach could also be used to evaluate equivalent structural stiffness or equivalent capacitance of the piezoelectric material.
You could also start by adding only one of the constituent physics, say Solid Mechanics, and after performing some initial structural analysis, go ahead and add the Electrostatics physics interface to the model tree once you are ready to add the effect of a piezoelectric material.
In that case, when you add the Electrostatics physics on top of the existing Solid Mechanics physics in the model tree, the COMSOL software will automatically add the Multiphysics node. From there, you can manually add the Piezoelectric Effect branch. Note that if you take this approach of adding the constituent physics interfaces and multiphysics effect manually, you would also have to manually add the piezoelectric modeling domains to the Solid Mechanics > Piezoelectric Material, the Electrostatics > Charge Conservation, Piezoelectric, and the Multiphysics > Piezoelectric Effect branches.
In a similar fashion, you can continue to add more physics interfaces and multiphysics couplings to your model based on your needs.
To learn more about modeling piezoelectric devices in the COMSOL software environment, you are encouraged to refer to these resources:
Fatigue models are based on physical assumptions and are therefore said to be phenomenological. Since different micromechanical mechanisms govern fatigue under various conditions, many analytical and numerical relations are needed to cover the full spectrum of fatigue. These models, in turn, require dedicated material parameters.
It is well known that fatigue testing is expensive. Many test specimens are necessary, since the impurities responsible for fatigue initiation are randomly distributed in the material. The resulting scatter in fatigue life is clearly visible when you plot all of the test results in an S-N curve.
An S-N curve. The black squares represent individual fatigue tests.
Since the S-N curve — also called the Wöhler curve — is one of the oldest tools for fatigue prediction, there is a good chance that the material data is already available in this form. Many times, the data is given for a 50% failure risk. If you do not have access to the material data, you are faced with a testing campaign.
When you are done, pay attention to the statistical aspect: at each load level, select the same reliability when constructing an S-N curve. This is important because the S-N curve is expressed on a logarithmic scale, where a small difference in input has a large influence on the output. S-N curves for different reliability levels then lie below one another, and you should select a level appropriate for your application. For noncritical structures, a failure rate of 50% might be acceptable, but for critical structures, a significantly lower failure rate should be chosen.
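The statistical bookkeeping can be sketched numerically: fit the median (50%-failure) curve in log-log space, estimate the scatter, and shift the curve by the desired reliability quantile. The test data below is synthetic, generated only to illustrate the procedure:

```python
import numpy as np

# Construct an S-N (Basquin-type) curve at a chosen reliability level.
rng = np.random.default_rng(1)
logN = np.repeat(np.linspace(4, 7, 4), 6)                  # 6 specimens/level
logS = 3.0 - 0.1 * logN + rng.normal(0, 0.02, logN.size)   # synthetic scatter

# Median curve (50% failure risk): least squares in log-log space
b, a = np.polyfit(logN, logS, 1)

# Shift down for higher reliability, assuming normal scatter in log S
resid = logS - (a + b * logN)
sigma = resid.std(ddof=2)
z_90 = 1.2816                  # one-sided 90% reliability quantile
a_90 = a - z_90 * sigma        # the 90% survival curve lies below the median
```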
Always pay attention when you combine fatigue data from different sources. Make sure that the testing conditions and the operating conditions are the same.
Another aspect of fatigue testing concerns the mean stress, which has a substantial influence on the fatigue life. In general, fatigue tests performed at a tensile mean stress give a shorter life than tests performed at a compressive mean stress. This effect is also frequently expressed using the R-value, the ratio between the minimum and maximum stress in the load cycle. Thus, with decreasing mean stress (or R-value), the fatigue life increases.
In the Fatigue Module, the Stress-Life models do not take this effect into account. When using these models, you need to choose material data obtained under the same testing conditions as the operating conditions.
In the cumulative damage model, the Palmgren-Miner linear damage summation uses an S-N curve. However, in this model, the S-N curve is specified with the R-value dependence and the mean stress effect is accounted for.
The mean stress effect.
In case you use a material library where the fatigue data is specified using the maximum stress, you can easily convert it to the stress amplitude using

\sigma_a = \frac{1-R}{2}\,\sigma_{max}

where \sigma_a is the stress amplitude, \sigma_{max} is the maximum stress, and R is the R-value.
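Since the R-value fixes \sigma_{min} = R\,\sigma_{max}, the amplitude follows as \sigma_a = (1-R)\,\sigma_{max}/2. A quick sanity check:

```python
# Convert maximum stress to stress amplitude for a given R-value,
# using sigma_min = R * sigma_max.
def stress_amplitude(sigma_max, R):
    return 0.5 * (1.0 - R) * sigma_max

print(stress_amplitude(200.0, -1))  # fully reversed (R = -1): 200.0
print(stress_amplitude(200.0, 0))   # pulsating (R = 0): 100.0
```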
The stress-based models seem fairly simple. For example, the Findley and the Matake models use the expressions

\left(\frac{\Delta\tau}{2}+k\,\sigma_n\right)_{max} = f

and

\left(\frac{\Delta\tau}{2}\right)_{max}+k\,\sigma_{n,max} = f

respectively, where \Delta\tau/2 is the shear stress amplitude and \sigma_n is the normal stress on the evaluated plane. They depend on only two material constants: f and k. These material parameters are, however, nonstandard material data that can be related to the endurance limit of the material.
Note that the actual values of f and k differ between the two models. The analytical relation is somewhat cumbersome to obtain since the stress-based models are based on the critical plane approach and you need to find a plane where the left-hand sides of the above relations are maximized. This is basically done by expressing the shear and the normal stress as a function of the orientation using the Mohr’s stress circle, maximizing by setting the derivative to zero, and simplifying the resulting relation.
The different steps of the data manipulation will not be shown here. For the Findley model, the material parameters are related to the standard fatigue data through

f = \frac{\sigma_U(R)}{2}\left(\frac{2k}{1-R}+\sqrt{1+\frac{4k^2}{(1-R)^2}}\right)

Here, R is the R-value and \sigma_U(R) is the endurance limit (here taken as a stress amplitude). The argument of the endurance limit indicates that the stress is R-value dependent. For the Matake model, the relation is somewhat simpler and given by

f = \frac{\sigma_U(R)}{2}\left(1+\frac{2k}{1-R}\right)
Since both relations contain two unknown material parameters, you need endurance limits from two different types of fatigue tests. To illustrate this, consider a case where one endurance limit is obtained by alternating the load between a tensile and a compressive value, R=-1. In the second case, the load is cycled between zero and a maximum load, R=0. For the Findley model, this leads to

f = \frac{\sigma_U(-1)}{2}\left(k+\sqrt{1+k^2}\right) \quad \textrm{and} \quad f = \sigma_U(0)\left(k+\frac{1}{2}\sqrt{1+4k^2}\right)
The pair of equations must be solved numerically. Here is the strategy:
For the Matake model, the two fatigue tests lead to

f = \frac{\sigma_U(-1)}{2}\left(1+k\right) \quad \textrm{and} \quad f = \frac{\sigma_U(0)}{2}\left(1+2k\right)

which you can solve analytically.
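A sketch of both solution routes, assuming the standard uniaxial critical-plane specializations of the two criteria (Findley: f = \sigma_U(-1)/2\,(k+\sqrt{1+k^2}) at R=-1 and f = \sigma_U(0)\,(k+\frac{1}{2}\sqrt{1+4k^2}) at R=0; Matake: f = \sigma_U(-1)(1+k)/2 and f = \sigma_U(0)(1+2k)/2). The endurance limits below are illustrative values, not measured data:

```python
from math import sqrt

s_m1 = 250.0   # assumed endurance limit amplitude at R = -1 [MPa]
s_0 = 180.0    # assumed endurance limit amplitude at R = 0 [MPa]

# Findley: find k where both tests predict the same f, by bisection
def g(k):
    f_from_Rm1 = 0.5 * s_m1 * (k + sqrt(1 + k * k))
    f_from_R0 = s_0 * (k + 0.5 * sqrt(1 + 4 * k * k))
    return f_from_Rm1 - f_from_R0

lo, hi = 0.0, 5.0              # bracket: g(0) > 0 and g(5) < 0 here
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
k_findley = 0.5 * (lo + hi)
f_findley = 0.5 * s_m1 * (k_findley + sqrt(1 + k_findley ** 2))

# Matake: the pair of linear equations solves in closed form
k_matake = (s_m1 - s_0) / (2 * s_0 - s_m1)
f_matake = 0.5 * s_m1 * (1 + k_matake)
```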
I would like to share a few examples where the discussed fatigue models are used:
To begin, I would like to highlight several changes in the Linear Elastic Material model of the Membrane interface.
First off, the previous version of the interface always assumed geometric nonlinearity. The new version respects the “Include geometric nonlinearity” setting in the study step, in the same way as the Solid Mechanics interface. The geometrically linear version of the membrane can be used when it acts as cladding on a solid surface. If the membrane is used by itself and not as a cladding, a tensile prestress is, as before, necessary in order to avoid a singularity, because a membrane without stress, or with a compressive stress, has no transverse stiffness. To include the prestress effect, you must enable geometric nonlinearity for the study step.
Another update is that linear elastic materials can now also be orthotropic or anisotropic. This affects the settings of the Damping subnode as well, where non-isotropic loss factors are now allowed.
You may also notice that we have added a Hygroscopic Swelling feature as a subnode to the Linear Elastic Material node. (We described the hygroscopic swelling effect in a previous blog post. Check that out to learn more about the effect.)
All of you who use the Nonlinear Structural Materials Module may now use the Membrane interface to model thin hyperelastic structures by adding a Hyperelastic Material node. In order to illustrate the Hyperelastic Material model using the Membrane interface, we have recreated the Model Library example Inflation of a Spherical Rubber Balloon.
Tip: You can download the new version of the model in the Model Gallery by logging into your COMSOL Access account.
The Membrane interface works on the plane stress assumption, and it is assumed that there is no variation across the thickness of the balloon. Also, it requires a prestress to solve the model due to the absence of bending stiffness in the membrane. For this purpose, a separate study has been created before the inflation of the balloon is carried out in further studies. Results from this analysis are used as initial values for the rest of the inflation analyses. Aside from these two changes, the model is similar to the previous Solid Mechanics version.
The advantage of the Membrane version is that it is more computationally efficient. Why? Because the Membrane interface is defined on boundaries, one geometric entity level lower than the Solid Mechanics interface. The results obtained from the Membrane interface agree with the analytical results. The plot below shows the inflation pressure as a function of circumferential stretch for different hyperelastic material models, compared to the analytical expression for the Ogden model.
As the internal pressure increases, the balloon starts to inflate and its thickness decreases. Since the pressure is uniform over the surface, the thickness is the same along the cross section for any given inflation pressure. The next plot compares the variation of deformed thickness with applied stretch to the balloon obtained from the Membrane interface and the Solid Mechanics interface. We see that the thinning of the balloon is accurately captured by the Membrane interface.
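The analytical balloon curve can be sketched directly. For a thin-walled sphere of an incompressible Ogden material, the inflation pressure versus equibiaxial stretch \lambda is p(\lambda) = (2h_0/r_0)\sum_i \mu_i(\lambda^{\alpha_i-3} - \lambda^{-2\alpha_i-3}). The constants below are the classic Ogden fit to Treloar's rubber data, not necessarily the values used in the model:

```python
import numpy as np

# Inflation pressure of a thin spherical rubber balloon (Ogden material).
h0_over_r0 = 0.01                        # assumed thickness-to-radius ratio
mu = np.array([0.63, 0.0012, -0.01])     # [MPa], Ogden moduli (Treloar fit)
alpha = np.array([1.3, 5.0, -2.0])       # Ogden exponents (Treloar fit)

def pressure(lam):
    return 2 * h0_over_r0 * np.sum(mu * (lam ** (alpha - 3)
                                         - lam ** (-2 * alpha - 3)))

lam = np.linspace(1.0, 8.0, 400)
p = np.array([pressure(x) for x in lam])
# The curve starts at zero, shows a pressure maximum at moderate stretch
# (the inflation instability), and stiffens again at large stretch.
```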
We have added four new feature nodes to the Membrane interface.
They are as follows:
In addition to the specific improvements I just went over, we have made a few general changes to the structural mechanics interfaces that affect the Membrane interface. You will notice that the menus have been restructured for a number of structural mechanics interfaces.
The following interfaces now have restructured menus:
You can see a screenshot of the menu structure for the 2D Axisymmetric Membrane interface below:
As for the Spring Foundation features, we have generalized these so that you can enter the “spring force versus displacement” and the “damping force versus velocity” relations in matrix form, rather than by component.
For 2D Axisymmetric cases, there is a new load type called “Point Load (on Axis)”. With this option, it is now possible to apply loads on a point on a symmetry axis.
For 2D Axisymmetric cases, a Point Load is actually a line load (N/m) since a point represents a ring in axisymmetry. To follow better naming conventions, such a load is now called “Ring Load” in both the Solid Mechanics interface and the Membrane interface.
Models that were made with COMSOL Multiphysics version 4.4 or earlier still use the old Membrane interface, where the new functionality is not available. To utilize the new functionality in old models, we suggest that you add a new Membrane interface and copy all the nodes from the previous interface to the new one.
As always, do not hesitate to contact us if you have any questions.
Evaporation occurs when a liquid vaporizes into a gaseous phase that is not saturated with its vapor. We will exemplify this process and its characteristic properties using water as the liquid and air as the gaseous phase.
Let’s first define the saturation pressure, p_{sat}, at which the thermal equilibrium with the liquid or solid state is reached. It is strongly temperature dependent and there are many approximations out there, which are all very similar but not exactly the same.
The COMSOL Multiphysics simulation software uses the following approximation from Principles of environmental physics by J. L. Monteith and M. H. Unsworth (1990):
(1)

p_{sat}(T) = 610.7\;\textrm{Pa}\cdot 10^{\,7.5\,\frac{T-273.15\,\textrm{K}}{T-35.85\,\textrm{K}}}
For ideal gases, it is easy to determine the saturation concentration at which the relative humidity is 100% with:
(2)

c_{sat} = \frac{p_{sat}(T)}{R\,T}
where R is the ideal gas constant.
The thermodynamic properties of moist air depend on the fraction of water vapor. A mixture formula is used to describe the properties with the proportional amount of dry air and water vapor. Assuming air behaves as an ideal gas, the density reads:
(3)

\rho = \frac{p}{R\,T}\left(M_v\,x_v + M_a\,(1-x_v)\right)

where M_a and M_v are the molar masses of dry air and water vapor, respectively, and x_v is the molar fraction of water vapor.
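Equations 1-3 are easy to evaluate directly. The sketch below restates them in Python; the saturation pressure fit follows the Monteith & Unsworth form quoted above, and the mixture density is the plain ideal-gas expression, which may differ in detail from COMSOL's internal implementation:

```python
R_gas = 8.314       # ideal gas constant [J/(mol*K)]
M_a = 0.028964      # molar mass of dry air [kg/mol]
M_v = 0.018015      # molar mass of water vapor [kg/mol]

def p_sat(T):
    # Saturation pressure [Pa], T in kelvin (Monteith & Unsworth fit)
    return 610.7 * 10 ** (7.5 * (T - 273.15) / (T - 35.85))

def c_sat(T):
    # Saturation concentration [mol/m^3] from the ideal gas law
    return p_sat(T) / (R_gas * T)

def rho_moist(T, p, phi):
    # Moist air density [kg/m^3]; phi = relative humidity (0..1)
    x_v = phi * p_sat(T) / p             # molar fraction of water vapor
    return p / (R_gas * T) * (M_v * x_v + M_a * (1 - x_v))

# At 20 degC, p_sat is roughly 2.3 kPa, and humid air is slightly
# lighter than dry air at the same temperature and pressure.
```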
More details and references about the implementation of the moist air properties as used by COMSOL Multiphysics can be found in the Heat Transfer User’s Guide (located within the Heat Transfer Module).
Before setting up the COMSOL Multiphysics model, let’s consider the effects causing the cooling of the coffee as it evaporates.
We assume a slight air draft around the cup (or beaker, since there is no handle in this case) that accelerates the cooling by transporting heat and removing water vapor from the surface. At the coffee-air interface, vapor escapes from the liquid into the air, causing additional cooling by evaporation.
Sketch of the participating effects in a coffee cup.
The first step is to make use of the symmetry, which reduces the model size and thereby the computational time. For the slight air draft, we use the Turbulent Flow interface with a constant air velocity. A reasonable approximation here is to assume that the flow field won’t change with a change in temperature and moisture content. Hence, we calculate a stationary velocity field in an initial study.
What else do we need to model the evaporative cooling effect?
Thanks to the predefined moist air fluid type, it is easy to implement the evaporative cooling effect in a COMSOL Multiphysics model. To determine the temperature field, we add the Heat Transfer in Fluids interface to the model, whereupon the Multiphysics node appears.
With the Multiphysics node, you can build your non-isothermal flow model sequentially. That's precisely what we'll do here: we start with the Turbulent Flow interface and add the multiphysics couplings one after the other. The Non-Isothermal Flow node defines the two-way coupling between the flow and heat transfer interfaces. Note that in this case, we do not need the strongly coupled approach, since the flow field is assumed to be independent of the temperature and moisture content. Using the properties from the flow interface, the Non-Isothermal Flow node also accounts for the turbulence effects in the heat transfer interface.
Multiphysics node for Non-Isothermal Flow. The node settings define the non-isothermal flow properties: a common density for the heat transfer and flow interface, a turbulence model for heat, flow heating, and the names of the interfaces.
Let’s have a closer look at the steps for modeling evaporation. First, set up the coupling between heat transfer and vapor transport in order to accurately model the evaporative cooling effect and utilize the postprocessing variables that come along with the Moist Air feature of the Heat Transfer Module, such as relative humidity or moisture content.
Settings for heat transfer inside the air domain: (1) Coupling of the flow field (which is done via the Multiphysics node) for convective transport of the moist air. (2) Coupling to the Transport of Diluted Species interface, which gives us the correct input quantity of water vapor (3) to determine the moist air properties according to Equation 2.
The last thing to do is to set up proper boundary conditions. Here, we only discuss the boundary conditions connected to the evaporation. The rest is straightforward and can be read in the documentation of the model.
At the water surface, heat is released by evaporation. The heat of vaporization of water is approximately H_{vap}=2454\,\frac{kJ}{kg}\cdot M_w \approx 44.2\,\frac{kJ}{mol} (it is actually temperature dependent, but a constant value is a reasonable approximation here), with the molar mass of water M_w=18.015\,\frac{g}{mol}. The amount of heat released depends on how much vapor escapes from the water surface into the air. This relates the heat source to the Transport of Diluted Species interface via the total flux normal to the surface, which can be understood as the net flux of water vapor into the air.
At the water surface, the relative humidity is always 100%; that is, the saturation concentration is reached. This defines the concentration of water vapor at the water surface according to Equation 2, with the saturation pressure determined by the Heat Transfer in Fluids interface. All in all, it is a strongly coupled phenomenon that can be implemented in no time.
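As a rough sketch of this boundary condition, the saturation concentration can be evaluated from the ideal gas law. The Magnus-type saturation pressure formula below is an assumption chosen for illustration; COMSOL determines the saturation pressure from its own built-in material data:

```python
from math import exp

R = 8.314  # universal gas constant [J/(mol K)]

def p_sat(T_celsius):
    """Saturation pressure of water [Pa], Magnus-type approximation
    (an assumption for this sketch, not COMSOL's built-in correlation)."""
    return 610.94 * exp(17.625 * T_celsius / (T_celsius + 243.04))

def c_sat(T_celsius):
    """Saturation concentration [mol/m^3] from the ideal gas law,
    in the spirit of Equation 2 in this post."""
    T = T_celsius + 273.15
    return p_sat(T_celsius) / (R * T)

# Boundary condition at the hot water surface: c = c_sat(T)
print(f"c_sat(80 degC) = {c_sat(80):.1f} mol/m^3")
print(f"c_sat(20 degC) = {c_sat(20):.2f} mol/m^3")
```

The steep growth of c_sat with temperature is what drives the strong vapor flux away from the hot coffee surface.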
Next, we take a look at the results of a time-dependent study over 20 minutes. The initial coffee temperature is 80°C, and air at 20°C with a relative humidity of 20% enters the modeling domain, causing the cooling. Below, you can see the resulting temperature and relative humidity distributions, both after 20 minutes.
Temperature distribution after 20 minutes.
Relative humidity after 20 minutes.
The temperature is highest in the shadow zone behind the beaker/coffee cup. Because the saturation pressure increases with temperature, the relative humidity there becomes very low.
Does evaporation have a strong influence on the cooling? We can find out by comparing the average temperature of the coffee — including evaporation — to that of the same model neglecting evaporation.
To do so, we’ll set up a third study solving for the Heat Transfer in Fluids interface only and disable the Boundary Heat Source node. The resulting plot clearly shows that cooling due to evaporation has a significant impact on the overall cooling:
Comparison of the average coffee temperature over time.
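To build intuition for this comparison, here is a toy lumped (0-D) energy balance of the two cases. Every number in it (coffee mass, surface area, heat transfer coefficient, evaporative flux, and the linear scaling of evaporation with temperature) is an illustrative assumption, not a value taken from the model:

```python
# Toy lumped-capacitance comparison of coffee cooling with and without an
# evaporative heat sink. All parameters are illustrative assumptions; the
# actual model solves the full conjugate heat and mass transfer problem.

dt = 1.0             # time step [s]
t_end = 20 * 60      # 20 minutes [s]
m, cp = 0.2, 4186.0  # coffee mass [kg] and specific heat [J/(kg K)]
A = 5.0e-3           # free surface area [m^2]
h = 15.0             # assumed convective coefficient [W/(m^2 K)]
q_evap = 500.0       # assumed evaporative heat flux at 80 degC [W/m^2]
T_air = 20.0         # ambient temperature [degC]

def cool(T0, evaporation):
    """Explicit Euler integration of the lumped energy balance."""
    T = T0
    for _ in range(int(t_end / dt)):
        q = h * A * (T - T_air)  # convective loss [W]
        if evaporation:
            # crude assumption: evaporation weakens linearly as T -> T_air
            q += q_evap * A * (T - T_air) / (80.0 - T_air)
        T -= q * dt / (m * cp)
    return T

T_conv = cool(80.0, evaporation=False)
T_both = cool(80.0, evaporation=True)
print(f"After 20 min, convection only:  {T_conv:.1f} degC")
print(f"After 20 min, with evaporation: {T_both:.1f} degC")
```

Even in this crude 0-D picture, the evaporative term noticeably accelerates the cooling, consistent with the full simulation result above.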
This blog post has shown the basic aspects you need to consider when modeling evaporative cooling. Keep these concepts in mind as we continue the series about evaporation modeling. Next up, we’ll take it a step further and answer the question of what happens in a porous material and how this can be implemented.
For now, feel free to download the Evaporative Cooling model shown here along with detailed instructions from our Model Gallery to try it out for yourself.
In particle tracing and ray tracing simulations, we often need to use the particle or ray properties to change a variable that is defined on a set of domains or boundaries. For example, solid particles in a fluid might exert a significant force on the surrounding fluid, and they may also erode the surfaces they hit.
In previous blog posts, I’ve discussed two other cases in greater detail: divergence of an electron beam due to self-potential and thermal deformation of lenses in a high-powered laser system. Each of these phenomena can be modeled using Accumulators or the specialized features that are derived from them.
An Accumulator is a physics feature that communicates information from particles or rays to the underlying finite element mesh. For each Accumulator feature in a model, a corresponding dependent variable, called an accumulated variable, is declared. These accumulated variables can be defined either within a set of domains or on a set of boundaries, and they can represent any physical quantity, making them extremely flexible.
The Accumulator features can be added to any of the physics interfaces of the Particle Tracing Module. They can also be used in the Geometrical Optics interface, available with the Ray Optics Module, and the Ray Acoustics interface, available with the Acoustics Module.
Depending on the physics interface, more specialized versions of the Accumulator may be available for computing specific types of physical quantities. For example, the Particle Tracing for Fluid Flow interface includes a dedicated Erosion boundary condition that includes several built-in models for computing the rate of erosive wear on a surface.
The Accumulators can be divided into three broad categories: Accumulators on boundaries, which change an accumulated variable when particles or rays strike a surface; Accumulators in domains, which let particles or rays deposit contributions in the mesh elements they pass through; and Nonlocal Accumulators, which communicate information from a particle's current position back to its release point.
We will now investigate each of these varieties in greater detail.
When particles or rays strike a surface, they can affect that surface in a wide variety of ways. For example, a laser can cause a boundary to heat up, sediment particles can erode their surroundings, and sputtering can occur when high-velocity ions strike a wafer in a process chamber. All of these effects require the same basic modeling procedure; we define a variable on the boundary and change its value when particles or rays interact with the boundary.
To begin, let’s consider a simple case in which we want to count the number of times a boundary is hit. We first define a variable, called rpd, for example, which can have a distinct value in every boundary mesh element. Initially, this variable is set to zero in all elements. Every time a particle hits a mesh element on this boundary, we would like to increment the variable on that element by 1.
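Conceptually, this accumulation logic amounts to a per-element counter. The sketch below is an illustration of the idea, not COMSOL code; the element IDs and hit sequence are made up:

```python
# Minimal sketch of a boundary Accumulator that counts particle hits per
# boundary mesh element. Element IDs and the hit sequence are invented.

from collections import defaultdict

# Accumulated variable: one value per boundary element, initially zero.
rpb = defaultdict(float)

def accumulate(element_id, source=1.0):
    """Each wall hit adds the Source expression to the element's value.
    The source can be any expression of particle and boundary variables,
    e.g. the particle's kinetic energy instead of 1."""
    rpb[element_id] += source

# Simulated hit sequence: a particle strikes elements 3, 3, then 7.
for elem in (3, 3, 7):
    accumulate(elem)

print(dict(rpb))  # {3: 2.0, 7: 1.0}
```

Passing a different `source` expression is the sketch-level analog of editing the Source edit field described below.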
The values of the accumulated variable on the boundary elements (illustrated as triangles) after one collision are shown below:
To implement this in COMSOL Multiphysics, we first set up the particle tracing model, then add a “Wall” node to the boundary for which we want to count collisions. In this case, let’s specify that particles are reflected at this surface by selecting the Bounce wall condition. We then add the Accumulator node as a subnode to this Wall.
The settings shown in the following screenshot cause the accumulated variable (called rpb) to be incremented by 1 (the expression in the Source edit field) every time a particle hits the wall.
I have created an animation that demonstrates how the number of collisions with each boundary element is counted over the course of the study. Check it out:
By changing the expression in the Source edit field, it is possible to increment the accumulated variable using any combination of variables that exist on the particle and on the boundary. For example, the accumulated variable may increase by a different amount based on the velocity or mass of incoming particles. The dependent variable need not be dimensionless. In fact, it can represent any physical quantity.
In addition to the generic Accumulator subnode — which can represent anything — dedicated accumulator-based features are available in the different physics interfaces, including the following:
We may also want to transfer information from particles to all of the mesh elements they pass through, not just the boundary elements they touch. We can do so by adding an Accumulator node to the physics interface directly, instead of adding it as a subnode to a Wall or other boundary condition.
For example, we can use an Accumulator to reconstruct the number density of particles within a domain. This technique is used in a benchmark model of free molecular flow through an s-bend in which the Free Molecular Flow interface is used to compute the number density of molecules in a rarefied gas.
Here is the geometry of the s-bend:
The settings window for the Accumulator is shown below.
The expression in the Source edit field is a bit more complicated than in the previous case. The source term R is defined as
(1)

R = \frac{J_\textrm{in} L}{N_p}
where J_{\textrm{in}} (SI unit: 1/(m^2 s)) is the molecular flux at the inlet, L (SI unit: m) is the length of the inlet, and N_{p} (dimensionless) is the number of model particles.
Physically, we can interpret R as the number of real molecules per unit time, per unit length in the out-of-plane direction, that are represented by each model particle. Because this source term acts on the time derivative of the accumulated variable, each particle leaves behind a “trail” in the mesh elements it passes through, which contributes to the number density in those elements.
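In pseudocode, the density accumulation works like the following sketch; the numerical values of the flux, inlet length, time step, and element area are illustrative assumptions, not values from the benchmark model:

```python
# Sketch of a domain Accumulator with "Density" accumulation: each model
# particle adds R*dt, divided by its current mesh element's area, to the
# accumulated variable in that element. All numbers are illustrative.

J_in = 1.0e20       # molecular flux at the inlet [1/(m^2 s)]
L = 0.01            # length of the inlet [m]
N_p = 1000          # number of model particles
R = J_in * L / N_p  # Equation 1: molecules per second, per unit
                    # out-of-plane length, represented by each particle

dt = 1.0e-6         # time step [s]
elem_area = 1.0e-6  # mesh element area [m^2]

n_density = {}      # accumulated variable: number density per element [1/m^3]

def step(element_id):
    """One time step: the particle deposits R*dt/elem_area in its element."""
    n_density[element_id] = n_density.get(element_id, 0.0) + R * dt / elem_area

# One particle spends three steps in element 0, then one step in element 1,
# leaving behind a "trail" of number density.
for elem in (0, 0, 0, 1):
    step(elem)

print(R, n_density)
```

Elements the particle lingers in receive proportionally more density, which is exactly the "trail" behavior described above.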
I have created a second animation in which the number density of molecules is computed using the Accumulator (bottom) and the result is compared to the result of the Free Molecular Flow interface (top). Here it is:
We do see some noise in the particle tracing solution because each particle can only make a uniform contribution to the mesh element it is currently in. However, when the number of particles is large compared to the number of mesh elements, it is still possible to obtain an accurate solution.
In addition to the generic Accumulator node, which can represent anything, dedicated accumulator-based features are available in the different physics interfaces, including the following:
The third variety of Accumulator is a bit more advanced than the previous two. A Nonlocal Accumulator is used to communicate information from a particle’s current position to the initial position from which it was released. The Nonlocal Accumulator can be added to an “Inlet” node, causing it to declare an accumulated variable on the mesh elements on the Inlet boundary.
The Nonlocal Accumulator can be used in some advanced models of surface-to-surface radiation. In many cases, the Surface-to-Surface Radiation physics interface (available with the Heat Transfer Module) can be used to efficiently and accurately model radiative heat transfer. However, the Surface-to-Surface Radiation interface relies on the assumption that all surfaces reflect radiation diffusely. That is, the direction of reflected radiation is completely independent of the direction of incident radiation. It cannot be used, for example, if some of the radiation undergoes specular reflection at smooth, polished, metallic surfaces.
One approach to modeling radiative heat transfer with a combination of specular and diffuse radiation is to use the Mathematical Particle Tracing interface, as demonstrated in the example of mixed diffuse and specular reflection between two parallel plates.
The incident heat flux on each plate is computed by releasing particles from the plate surface, querying the temperature of each surface the particles hit, and communicating this information back to the point at which the particles are initially released. The below image shows the temperature distribution between the two plates, where the top plate is heated by an external Gaussian source.
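The idea behind this nonlocal accumulation can be sketched as a small Monte Carlo estimate. The surface temperatures, hit probabilities, and function names below are all invented for illustration; they are not taken from the model:

```python
# Hedged Monte Carlo sketch of the Nonlocal Accumulator idea in the
# parallel-plate example: rays released from a point on one plate record
# the temperature of whatever surface they hit, and that information is
# accumulated back at the release point. All values here are invented.

import random
random.seed(1)

def surface_temperature(hit):
    """Hypothetical lookup of the temperature of the struck surface [K]."""
    return {"hot_plate": 600.0, "cold_plate": 300.0}[hit]

def trace_ray():
    """Stand-in for the actual ray trace: which surface does this ray hit?"""
    return "hot_plate" if random.random() < 0.7 else "cold_plate"

# Nonlocal accumulation: average T^4 of the struck surfaces, stored at the
# release point (proportional to the incident radiative flux there).
N = 10000
acc = sum(surface_temperature(trace_ray()) ** 4 for _ in range(N)) / N

sigma = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]
print(f"Estimated incident flux at the release point: {sigma * acc:.0f} W/m^2")
```

In the actual model, the ray trace handles both specular and diffuse reflections, but the bookkeeping pattern is the same: information gathered along each ray is written back to the mesh element it was released from.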
We have seen that Accumulators can be used to model interactions between particles or rays and any field that is defined on the surrounding domains or boundaries. The accumulated variables can represent any physical quantity. The Accumulator is the basic building block that allows for sophisticated one-way or two-way coupling between a particle- or ray-based physics interface and any of the other products in the COMSOL product suite.
The Accumulators and related physics features have too many settings and applications to discuss in detail in a single blog post. To learn more about the many options available, please refer to the User’s Guide for the Particle Tracing Module (for particle tracing physics interfaces), the Ray Optics Module (for the Geometrical Optics interface), or the Acoustics Module (for the Ray Acoustics interface).
If you are interested in learning more about any of these products, please contact us.