In the vast majority of simulations involving linear elastic materials, we are dealing with an isotropic material that does not have any directional sensitivity. To describe such a material, only two independent material parameters are required. There are many possible ways to select these parameters, but some of them are more popular than others.

Young’s modulus, shear modulus, and Poisson’s ratio are the parameters most commonly found in tables of material data. They are not independent, since the shear modulus, G, can be computed from Young’s modulus, E, and Poisson’s ratio, \nu, as

G = \frac{E}{2(1+\nu)}

Young’s modulus can be directly measured in a uniaxial tensile test, while the shear modulus can be measured in, for example, a pure torsion test.

In the uniaxial test, Poisson’s ratio determines how much the material will shrink (or possibly expand) in the transverse direction. The allowable range is -1 < \nu < 0.5, where positive values indicate that the material shrinks in the thickness direction while being pulled. A few materials, called *auxetics*, have a negative Poisson’s ratio. A cork in a wine bottle has a Poisson’s ratio close to zero, so its diameter is insensitive to whether the cork is pulled or pushed.

For many metals and alloys, \nu \approx1/3, and the shear modulus is then about 40% of Young’s modulus.

Given the possible values of \nu, the possible ratios between the shear modulus and Young’s modulus are

\frac{1}{3} < \frac{G}{E} < \infty

When \nu approaches 0.5, the material becomes incompressible. Such materials pose specific problems in an analysis, as we will discuss.

The bulk modulus, K, measures the change in volume for a given uniform pressure. Expressed in E and \nu, it can be written as:

K = \frac{E}{3(1-2\nu)}

When \nu= 1/3, the value of the bulk modulus equals the value of Young’s modulus, but for an incompressible material (\nu \to0.5), K tends to infinity.

The bulk modulus is usually specified together with the shear modulus. These two quantities are, in a sense, the most physically independent choices of parameters. The volume change is only controlled by the bulk modulus and the distortion is only controlled by the shear modulus.

The Lamé constants \mu and \lambda are mostly seen in more mathematical treatises of elasticity. The full 3D constitutive relation between the stress tensor \boldsymbol \sigma and the strain tensor \boldsymbol \varepsilon can be conveniently written in terms of the Lamé constants:

\boldsymbol \sigma=2\mu \boldsymbol \varepsilon +\lambda \; \mathrm{trace}(\boldsymbol{\varepsilon}) \mathbf I

The constant \mu is simply the shear modulus, while \lambda can be written as

\lambda = \frac{E \nu}{(1+\nu)(1-2\nu)}
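The conversions above are easy to collect in one place. The following is a minimal sketch (the function name and return convention are our own, not from the post) that maps Young’s modulus and Poisson’s ratio to the shear modulus, bulk modulus, and Lamé’s first parameter; recall that the Lamé constant \mu equals G.

```python
def elastic_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to (G, K, lam).

    G   : shear modulus,          G = E / (2*(1 + nu))
    K   : bulk modulus,           K = E / (3*(1 - 2*nu))
    lam : Lame's first parameter, lam = E*nu / ((1 + nu)*(1 - 2*nu))
    The Lame constant mu is identical to G.
    """
    if not -1.0 < nu < 0.5:
        raise ValueError("Poisson's ratio must lie in (-1, 0.5)")
    G = E / (2.0 * (1.0 + nu))
    K = E / (3.0 * (1.0 - 2.0 * nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return G, K, lam

# For nu = 1/3, the bulk modulus equals Young's modulus,
# and the shear modulus is 3/8 of it (about 40%)
G, K, lam = elastic_parameters(210e9, 1.0 / 3.0)
```

As \nu approaches 0.5, the function makes the singular behavior explicit: K and lam blow up while G stays finite.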

A full table of conversions between the various elastic parameters can be found here.

Some materials, like rubber, are almost incompressible. Mathematically, a fully incompressible material differs fundamentally from a compressible material. Since there is no volume change, the mean stress cannot be determined from it. The state equation expressing the mean stress (pressure), *p*, as a function of the volume change, \Delta V,

p = f(\Delta V)

will no longer exist, and must instead be replaced by a constraint stating that

\Delta V = 0

Another way of looking at incompressibility is to note that the term (1-2\nu) appears in the denominator of the constitutive equations, so that a division by zero would occur if \nu= 0.5. Is it then a good idea to model an incompressible material approximately by setting \nu= 0.499?

It can be done, but in this case, a standard displacement based finite element formulation may give undesirable results. This is caused by a phenomenon called *locking*. Effects include:

- Overly stiff models.
- Checkerboard stress patterns.
- Errors or warnings from the equation solver because of ill-conditioning.

The remedy is to use a *mixed formulation* where the pressure is introduced as an extra degree of freedom. In COMSOL Multiphysics, you enable the mixed formulation by selecting the *Nearly incompressible material* checkbox in the settings for the material model.

*Part of the settings for a linear elastic material with mixed formulation enabled.*

When Poisson’s ratio is larger than about 0.45, or equivalently, the bulk modulus is more than one order of magnitude larger than the shear modulus, it is advisable to use a mixed formulation. An example of the effect is shown in the figure below.

*Stress distribution in a simple plane strain model, \nu = 0.499. The top image shows a standard displacement based formulation, while the bottom image shows a mixed formulation.*

In the solution with only displacement degrees of freedom, the stress pattern shows distortions at the left end where there is a constraint. These distortions are almost completely removed by using a mixed formulation.

In general cases of linear elastic materials, the material properties have a directional sensitivity. The most general case is called anisotropic, which means all six stress components can depend on all six strain components. This requires 21 material parameters. Clearly, it is a demanding task to obtain all of this data. If the stress, \boldsymbol \sigma, and strain, \boldsymbol \varepsilon, are treated as vectors, they are related by the constitutive 6-by-6 symmetric matrix \mathbf D through

\boldsymbol \sigma= \mathbf D \boldsymbol \varepsilon

Fortunately, it is common that nonisotropic materials exhibit certain symmetries. In an orthotropic material, there are three orthogonal directions in which the shear action is decoupled from the axial action. That is, when the material is stretched along one of these principal directions, it will only contract in the two orthogonal directions, but not be sheared. A full description of an orthotropic material requires nine independent material parameters.

The constitutive relation of an orthotropic material is easier to interpret when written in compliance form, \boldsymbol \varepsilon= \mathbf C \boldsymbol \sigma:

\mathbf{C} =
\begin{bmatrix}
\tfrac{1}{E_{\rm X}} & -\tfrac{\nu_{\rm YX}}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZX}}{E_{\rm Z}} & 0 & 0 & 0 \\
-\tfrac{\nu_{\rm XY}}{E_{\rm X}} & \tfrac{1}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZY}}{E_{\rm Z}} & 0 & 0 & 0 \\
-\tfrac{\nu_{\rm XZ}}{E_{\rm X}} & -\tfrac{\nu_{\rm YZ}}{E_{\rm Y}} & \tfrac{1}{E_{\rm Z}} & 0 & 0 & 0 \\
0 & 0 & 0 & \tfrac{1}{G_{\rm YZ}} & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm ZX}} & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm XY}} \\
\end{bmatrix}

Since the compliance matrix must be symmetric, the twelve constants used are reduced to nine through three symmetry relations of the type

\tfrac{\nu_{\rm YX}}{E_{\rm Y}} = \tfrac{\nu_{\rm XY}}{E_{\rm X}}

Note that \nu_{\rm YX} \neq \nu_{\rm XY}, so when dealing with orthotropic data, it is important to make sure that the intended Poisson’s ratio values are used. The notation may not be the same in all sources.

Anisotropy and orthotropy commonly occur in inhomogeneous materials. Often, the properties are not measured, but computed using a homogenization process that upscales from the microscopic to the macroscopic scale. A discussion about such homogenization, in quite another context, can be found in this blog post.

For nonisotropic materials, there are limitations to the possible values of the material parameters similar to those described for isotropic materials. It is difficult to immediately see these limitations, but there are two things to look out for:

- The constitutive matrix \mathbf D must be positive definite.
  - For a general anisotropic material, the only option is to check that all of its eigenvalues are positive.
  - For an orthotropic material, this is true if all six elastic moduli are positive and \nu_{\rm XY}\nu_{\rm YX}+\nu_{\rm YZ}\nu_{\rm ZY}+\nu_{\rm ZX}\nu_{\rm XZ}+2\nu_{\rm YX}\nu_{\rm ZY}\nu_{\rm XZ}<1
- If the material has low compressibility, a mixed formulation must be used.
  - You can estimate an effective bulk modulus and compare it with the values of the shear moduli.
  - In cases of uncertainty, it is better to take the extra cost of the mixed formulation to avoid possible inaccuracies.
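The positive definiteness check can be sketched numerically. The snippet below (a hypothetical example with illustrative material values, assuming NumPy) assembles the orthotropic compliance matrix from the expression given earlier, fills in the minor Poisson’s ratios from the symmetry relations, and tests that all eigenvalues are positive.

```python
import numpy as np

# Illustrative orthotropic data (not measured values)
Ex, Ey, Ez = 150e9, 10e9, 10e9          # elastic moduli, Pa
Gyz, Gzx, Gxy = 4e9, 5e9, 5e9           # shear moduli, Pa
nu_xy, nu_yz, nu_zx = 0.3, 0.4, 0.02    # "major" Poisson's ratios

# The remaining ratios follow from the symmetry relations nu_ij/E_i = nu_ji/E_j
nu_yx = nu_xy * Ey / Ex
nu_zy = nu_yz * Ez / Ey
nu_xz = nu_zx * Ex / Ez

# Compliance matrix in Voigt notation
C = np.zeros((6, 6))
C[:3, :3] = [[1 / Ex, -nu_yx / Ey, -nu_zx / Ez],
             [-nu_xy / Ex, 1 / Ey, -nu_zy / Ez],
             [-nu_xz / Ex, -nu_yz / Ey, 1 / Ez]]
C[3, 3], C[4, 4], C[5, 5] = 1 / Gyz, 1 / Gzx, 1 / Gxy

# C (and hence D = C^{-1}) must be positive definite for a stable material
is_stable = bool(np.all(np.linalg.eigvalsh(C) > 0))
```

Checking the eigenvalues of the compliance matrix is equivalent to checking the stiffness matrix \mathbf D, since the eigenvalues of an inverse are the reciprocals of the original eigenvalues.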

When working with geometrically nonlinear problems, the meaning of “linear elasticity” is really a matter of convention. The issue here is that there are several possible representations of stresses and strains. For a discussion about different stress and strain measures, see this previous blog post.

Since the primary stress and strain quantities in COMSOL Multiphysics are Second Piola-Kirchhoff stress and Green-Lagrange strain, the natural interpretation of linear elasticity is that these quantities are linearly related to each other. Such a material is sometimes called a St. Venant material.

Intuitively, one could expect that “linear elasticity” means that there is a linear relation between force and displacement in a simple tensile test. This will not be the case, since both stresses and strains depend on the deformation. To see this, consider a bar with a square cross section.

*The bar subjected to uniform extension.*

The original length of the bar is L_0 and the original cross-section area is A_0=a_0^2, where a_0 is the original edge of the cross section. Assume that the bar is extended by a distance \Delta so that the current length is L=L_0+\Delta=L_0(1+\xi).

Here, 1+\xi is the axial stretch and \xi can be interpreted as the engineering strain. The new length of the edge of the cross section is a=a_0+d=a_0(1+\eta), where \eta is the engineering strain in the transverse directions.

The force can be expressed as the Cauchy stress \sigma_x in the axial direction multiplied by the current cross-section area:

F = \sigma_x A = \sigma_x A_0 (1+\eta)^2

To use the linear elastic relation, the Cauchy stress \boldsymbol \sigma must be expressed in terms of the Second Piola-Kirchhoff stress \mathbf S. The transformation rule is

\boldsymbol \sigma = J^{-1} \mathbf F \mathbf S \mathbf F^T

where \mathbf F is the deformation gradient tensor, and the volume scale factor is J = \det(\mathbf F). Without going into details, for a uniaxial case

\sigma_x = \frac{F_{xX}}{F_{yY}F_{zZ}}S_X= \frac{(1+\xi)}{(1+\eta)^2}S_X

Since for a St. Venant material in uniaxial extension, the axial stress is related to the axial strain as S_X = E \varepsilon_X, we obtain

F = S_X A_0 (1+\xi) = E A_0 (1+\xi)\varepsilon_X

Given that the axial term of the Green-Lagrange strain tensor is defined as

\varepsilon_X = \frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2 = \xi+\frac{1}{2}\xi^2

the force versus displacement relation is then

F = E A_0 (1+\xi)(\xi + \frac{1}{2}\xi^2)=E A_0 (\xi+\frac{3}{2}\xi^2+\frac{1}{2}\xi^3)

The linear elastic material combined with geometric nonlinearity thus implies a cubic relation between force and engineering strain (or, equivalently, between force and displacement, since \Delta = L_0\xi), as shown in the figure below.

*The uniaxial response of a linear elastic material under geometric nonlinearity.*

As can be seen in the graph, the stiffness of the material approaches zero on the compression side, at \xi = \sqrt{1/3}-1 \approx -0.42. In practice, this means that the simulation will fail at that strain level. It can be argued that there are no real materials that are linear at such large strains, so this should not cause problems in practice. However, linear elastic materials are often used far outside the range of reasonable stresses for several reasons, such as:

- Often, you may want to do a quick “order of magnitude” check before introducing more sophisticated material models.
- There are singularities in the model that cause very high strains at a point. Read more about singularities here.
- In contact problems, the study is always geometrically nonlinear, and high compressive strains often appear locally in the contact zone at some time during the analysis.

In all of these cases, the solver may fail to find a solution if the compressive strains are large. If you suspect this to be the case, it is a good idea to plot the smallest principal strain. If it is smaller than -0.3 or so, we can expect this kind of breakdown. The critical value in terms of the Green-Lagrange strain is found to be -1/3. When this becomes a problem, you should consider changing to a suitable hyperelastic material model.
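The critical values quoted above follow directly from the cubic force relation. As a quick numerical check (a sketch, assuming NumPy; not part of the original post), the tangent stiffness dF/d\xi = E A_0 (1 + 3\xi + \tfrac{3}{2}\xi^2) vanishes at the root of the quadratic closest to zero, and the corresponding Green-Lagrange strain is exactly -1/3:

```python
import numpy as np

# Tangent stiffness dF/dxi is proportional to 1 + 3*xi + 1.5*xi**2;
# find the root of that quadratic closest to zero (the compression side)
xi_critical = max(np.roots([1.5, 3.0, 1.0]))  # equals sqrt(1/3) - 1, about -0.423

# Corresponding Green-Lagrange strain: xi + xi**2/2
eps_gl_critical = xi_critical + 0.5 * xi_critical**2  # exactly -1/3
```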

Compression may not be the only problem. In the analysis above, Poisson’s ratio did not enter the equations. So what happens with the cross section?

By definition in the uniaxial case, the transverse strain is related to the axial strain by

\varepsilon_Y = -\nu \varepsilon_X

When these strains are Green-Lagrange strains, this is a nonlinear relation stating that

\frac{\partial v}{\partial Y} + \frac{1}{2}(\frac{\partial v}{\partial Y})^2 = -\nu (\frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2)

Thus, there is a strong nonlinearity in the change of the cross section. Solving this quadratic equation gives the following relation between the engineering strains

\eta = \sqrt{1-\nu(\xi^2+2\xi)}-1

The result is shown in the figure below.

*Transverse displacement as a function of the axial displacement for uniaxial tension of a St. Venant material. Five different values of Poisson’s ratio are shown.*

As you can see, the cross section collapses quickly at large extensions for higher values of Poisson’s ratio.
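The relation \eta = \sqrt{1-\nu(\xi^2+2\xi)}-1 can be wrapped in a small helper to explore this collapse. This is an illustrative sketch (the function name is our own), which also makes the failure mode explicit: when the radicand goes negative, no real cross section exists anymore.

```python
import numpy as np

def transverse_strain(xi, nu):
    """Transverse engineering strain eta for axial engineering strain xi
    in a St. Venant material: eta = sqrt(1 - nu*(xi**2 + 2*xi)) - 1."""
    radicand = 1.0 - nu * (xi**2 + 2.0 * xi)
    if radicand < 0.0:
        # the square root argument has gone negative: the cross section
        # has collapsed before reaching this extension
        raise ValueError("cross section collapses before this extension")
    return np.sqrt(radicand) - 1.0

# For small strains, the linear relation eta = -nu*xi is recovered.
# For nu = 0.5, the radicand vanishes at xi = sqrt(3) - 1, i.e. the cross
# section collapses at about 73% extension.
```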

If another choice of stress and strain representation had been made — for example, if the Cauchy stress were proportional to the logarithmic, or “true”, strain — it would have resulted in quite a different response. Such a material has a stiffness that decreases with elongation, and its force-displacement response does depend on the value of Poisson’s ratio. Still, both materials can correctly be called “linear elastic”, although the results computed with large-strain elasticity can differ widely between two different simulation platforms.

We have illustrated some limits for the use of linear elastic materials. In particular, the possible pitfalls related to incompressibility and to the combination of linear elasticity with large strains have been highlighted.

If you are interested in reading more about material modeling in structural mechanics problems, check out these blog posts:

- Introducing Nonlinear Elastic Materials
- Obtaining Material Data for Structural Mechanics from Measurements
- Part 2: Obtaining Material Data for Structural Mechanics from Measurements
- Fitting Measured Data to Different Hyperelastic Material Models
- Yield Surfaces and Plastic Flow Rules in Geomechanics
- Computing Stiffness of Linear Elastic Structures: Part 1
- Computing Stiffness of Linear Elastic Structures: Part 2

After obtaining our measured data, the question then becomes this: How can we estimate the material parameters required for defining the hyperelastic material models based on the measured data? One of the ways to do this in COMSOL Multiphysics is to fit a parameterized analytic function to the measured data using the Optimization Module.

In the section below, we will define analytical expressions for stress-strain relationships for two common tests — the *uniaxial test* and the *equibiaxial test*. These analytical expressions will then be fitted to the measured data to obtain material parameters.

Characterizing the volumetric deformation of hyperelastic materials to estimate material parameters can be a rather intricate process. Oftentimes, perfect incompressibility is assumed in order to estimate the parameters. This means that after estimating material parameters from curve fitting, you would have to use a reasonable value for bulk modulus of the nearly incompressible hyperelastic material, as this property is not calculated.

Here, we will fit the measured data to several perfectly incompressible hyperelastic material models. We will start by reviewing some of the basic concepts of the nearly incompressible formulation and then characterize the stress measures for the case of perfect incompressibility.

For nearly incompressible hyperelasticity, the total strain energy density is presented as

W_s = W_{iso}+W_{vol}

where W_{iso} is the isochoric strain energy density and W_{vol} is the volumetric strain energy density. The second Piola-Kirchhoff stress tensor is then given by

S = -p_pJC^{-1}+2\frac{\partial W_{iso}}{\partial C}

where p_{p} is the volumetric stress, J is the volume ratio, and C is the right Cauchy-Green tensor.

You can expand the second term from the above equation so that the second Piola-Kirchhoff stress tensor can be equivalently expressed as

S = -p_pJC^{-1}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right)I-J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} C -\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)C^{-1}\right)

where \bar{I}_{1} and \bar{I}_{2} are invariants of the isochoric right Cauchy-Green tensor \bar{C} = J^{-2/3}C.

The first Piola-Kirchhoff stress tensor, P, and the Cauchy stress tensor, \sigma, can be expressed as a function of the second Piola-Kirchhoff stress tensor as

\begin{align}
P &= FS\\
\sigma &= J^{-1}FSF^{T}
\end{align}

Here, F is the deformation gradient.

Note: You can read more about the description of different stress measures in our previous blog entry “Why All These Stresses and Strains?”

The strain energy density and stresses are often expressed in terms of the stretch ratio \lambda. The *stretch ratio* is a measure of the magnitude of deformation. In a uniaxial tension experiment, the stretch ratio is defined as \lambda = L/L_0, where L is the deformed length of the specimen and L_0 is its original length. In a multiaxial stress state, you can calculate principal stretches \lambda_a\;(a = 1,2,3) in the principal referential directions \hat{\mathbf{N}_a}, which are the same as the directions of the principal stresses. The stress tensor components can be rewritten in the spectral form as

S = \sum^{3}_{a=1} S_{a} \hat{\mathbf{N}}_{a} \otimes \hat{\mathbf{N}}_{a}

where S_{a} represents the principal values of the second Piola-Kirchhoff stress tensor and \hat{\mathbf{N}_{a}} represents the principal referential directions. You can represent the right Cauchy-Green tensor in its spectral form as

C = \sum^{3}_{a=1}\lambda_a^2 \hat{\mathbf{N}}_a\otimes\hat{\mathbf{N}}_a

where \lambda_a indicates the values of the principal stretches. This allows you to express the principal values of the second Piola-Kirchhoff stress tensor as a function of the principal stretches

S_a = \frac{-p_p J}{\lambda_a^2}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right) -J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} \lambda_a^2 -\frac{1}{\lambda_a^2}\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)\right)

Now, let’s consider the uniaxial and biaxial tension tests explained in the initial blog post in our Structural Materials series. For both of these tests, we can derive a general relationship between stress and stretch.

Under the assumption of incompressibility (J=1), the principal stretches for the uniaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda, \lambda_2 = \lambda_3 = \lambda^{-1/2}

The deformation gradient is given by

F = \left(\begin{array}{ccc} \lambda &0 &0 \\ 0 &\frac{1}{\sqrt{\lambda}} &0 \\ 0 &0 &\frac{1}{\sqrt{\lambda}}\end{array}\right)

For uniaxial extension, S_2 = S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_{1} = 2\left(\frac{1}{\lambda} -\frac{1}{\lambda^4}\right) \left(\lambda \frac{\partial W_{iso}}{\partial \bar{I}_{1_{uni}}}+\frac{\partial W_{iso}}{\partial \bar{I}_{2_{uni}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \quad (1)

The isochoric invariants \bar{I}_{1_{uni}} and \bar{I}_{2_{uni}} can be expressed in terms of the principal stretch \lambda as

\begin{align*}
\bar{I}_{1_{uni}} &= \lambda^2+\frac{2}{\lambda} \\
\bar{I}_{2_{uni}} &= 2\lambda + \frac{1}{\lambda^2}
\end{align*}

Under the assumption of incompressibility, the principal stretches for the equibiaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda_2 = \lambda, \; \lambda_3 = \lambda^{-2}

For equibiaxial extension, S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_1 = S_2 = 2\left(1-\frac{1}{\lambda^6}\right)\left(\frac{\partial W_{iso}}{\partial \bar{I}_{1_{bi}}}+\lambda^2\frac{\partial W_{iso}}{\partial \bar{I}_{2_{bi}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \quad (2)

The invariants \bar{I}_{1_{bi}} and \bar{I}_{2_{bi}} are then given by

\begin{align*}
\bar{I}_{1_{bi}} &= 2\lambda^2 + \frac{1}{\lambda^4} \\
\bar{I}_{2_{bi}} &= \lambda^4 + \frac{2}{\lambda^2}
\end{align*}

Let’s now look at the stress versus stretch relationships for a few of the most common hyperelastic material models. We will consider the first Piola-Kirchhoff stress for the purpose of curve fitting.

The total strain energy density for a Neo-Hookean material model is given by

W_s = \frac{1}{2}\mu\left(\bar{I}_1-3\right)+\frac{1}{2}\kappa\left(J_{el}-1\right)^2

where J_{el} is the elastic volume ratio and \mu is a material parameter that we need to compute via curve fitting. Under the assumption of perfect incompressibility and using equations (1) and (2), the first Piola-Kirchhoff stress expressions for the cases of uniaxial and equibiaxial deformation are given by

\begin{align*}
P_{1_{uniaxial}} &= \mu\left(\lambda-\lambda^{-2}\right)\\
P_{1_{biaxial}} &= \mu\left(\lambda-\lambda^{-5}\right)
\end{align*}
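The two Neo-Hookean expressions translate directly into code. The following is a minimal sketch (function names are ours; the parameter value in the example is illustrative, not a fitted value):

```python
def p1_uniaxial_nh(stretch, mu):
    """First Piola-Kirchhoff stress, incompressible Neo-Hookean, uniaxial:
    P1 = mu*(lambda - lambda**-2)."""
    return mu * (stretch - stretch**-2)

def p1_biaxial_nh(stretch, mu):
    """First Piola-Kirchhoff stress, incompressible Neo-Hookean, equibiaxial:
    P1 = mu*(lambda - lambda**-5)."""
    return mu * (stretch - stretch**-5)

# The undeformed state (stretch = 1) carries no stress
assert p1_uniaxial_nh(1.0, 0.4e6) == 0.0
assert p1_biaxial_nh(1.0, 0.4e6) == 0.0
```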

The stress versus stretch relationship for a few of the other hyperelastic material models are listed below. These can be easily derived through the use of equations (1) and (2), which relate stress and the strain energy density.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10}+C_{01}\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+\lambda^2 C_{01}\right)
\end{align*}

Here, C_{10} and C_{01} are Mooney-Rivlin material parameters.

\begin{align*}
\begin{split}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10} + 2C_{20}\lambda\left(\bar{I}_{1_{uni}}-3\right)+C_{11}\lambda\left(\bar{I}_{2_{uni}}-3\right)\right.\\
& \quad \left.+\,C_{01}+2C_{02}\left(\bar{I}_{2_{uni}}-3\right)+C_{11}\left(\bar{I}_{1_{uni}}-3\right)\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+2C_{20}\left(\bar{I}_{1_{bi}}-3\right)+C_{11}\left(\bar{I}_{2_{bi}}-3\right)\right.\\
& \quad \left.+\,\lambda^2C_{01}+2\lambda^2C_{02}\left(\bar{I}_{2_{bi}}-3\right)+\lambda^2 C_{11}\left(\bar{I}_{1_{bi}}-3\right)\right)
\end{split}
\end{align*}

Here, C_{10}, C_{01}, C_{20}, C_{02}, and C_{11} are Mooney-Rivlin material parameters.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}\bar{I}_{1_{uni}}^{\,p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}\bar{I}_{1_{bi}}^{\,p-1}
\end{align*}

Here, \mu_0 and N are Arruda-Boyce material parameters, and c_p are the first five terms of the Taylor expansion of the inverse Langevin function.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\sum_{p=1}^{3}p c_p \left(\bar{I}_{1_{uni}}-3\right)^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\sum_{p=1}^{3}p c_p \left(\bar{I}_{1_{bi}}-3\right)^{p-1}
\end{align*}

Here, the values of c_p are Yeoh material parameters.

\begin{align*}
P_{1_{uniaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-\frac{\alpha_p}{2}-1}\right)\\
P_{1_{biaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-2\alpha_p-1}\right)
\end{align*}

Here, \mu_p and \alpha_p are Ogden material parameters.
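The Ogden expressions generalize naturally to code for any number of terms. Here is a short sketch (our own helper names; the single-term example below is only a sanity check, not a fitted parameter set):

```python
def p1_uniaxial_ogden(stretch, mu, alpha):
    """Uniaxial first Piola-Kirchhoff stress for an incompressible Ogden
    material; mu and alpha are equal-length sequences of Ogden parameters."""
    return sum(m * (stretch**(a - 1.0) - stretch**(-a / 2.0 - 1.0))
               for m, a in zip(mu, alpha))

def p1_biaxial_ogden(stretch, mu, alpha):
    """Equibiaxial counterpart; the transverse stretch is lambda**-2."""
    return sum(m * (stretch**(a - 1.0) - stretch**(-2.0 * a - 1.0))
               for m, a in zip(mu, alpha))
```

A useful sanity check: with a single term and \alpha_1 = 2, the Ogden model reduces to the Neo-Hookean expression \mu(\lambda - \lambda^{-2}).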

Using the *Optimization* interface in COMSOL Multiphysics, we will fit measured stress versus stretch data against the analytical expressions detailed in our discussion above. Note that the measured data we are using here is the *nominal stress*, which can be defined as the force in the current configuration acting on the original area. It is important that the measured data is fit against the appropriate stress measure. Therefore, we will fit the measured data against the analytical expressions for the first Piola-Kirchhoff stress expressions. The plot below shows the measured nominal stress (raw data) for uniaxial and equibiaxial tests for vulcanized rubber.

*Measured stress-strain curves by Treloar.*

Let’s begin by setting up the model to fit the uniaxial Neo-Hookean stress to the uniaxial measured data. The first step is to add an *Optimization* interface to a 0D model. Here, *0D* implies that our analysis is not tied to a particular geometry.

Next, we can define the material parameters that need to be computed as well as the variable for the analytical stress versus stretch relationship. The screenshot below shows the parameters and variable defined for the case of a uniaxial Neo-Hookean material model.

Within the *Optimization* interface, a *Global Least-Squares Objective* branch is added, where we can specify the measured uniaxial stress versus stretch data as an input file. Next, a *Parameter Column* and a *Value Column* are added. Here, we define lambda (stretch) as a measured parameter and specify the uniaxial analytical stress expression to fit against the measured data. We can also specify a weighting factor in the *Column contribution weight* setting. For detailed instructions on setting up the *Global Least-Squares Objective* branch, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

We can now solve the above problem and estimate material parameters by fitting our uniaxial tension test data against the uniaxial Neo-Hookean material model. This is, however, rarely a good idea. As explained in Part 1 of this blog series, the seemingly simple test can leave many loose ends. Later on in this blog post, we will explore the consequence of material calibration based on just one data set.

Depending on the operating conditions, you can obtain a better estimate of material parameters through a combination of measured uniaxial tension, compression, biaxial tension, torsion, and volumetric test data. This compiled data can then be fit against analytical stress expressions for each of the applicable cases.

Here, we will use the equibiaxial tension test data alongside the uniaxial tension test data. Just as we have set up the optimization model for the uniaxial test, we will define another global least-squares objective for the equibiaxial test as well as corresponding parameter and value columns. In the second global least-squares objective, we will specify the measured equibiaxial stress versus stretch data file as an input file. In the value column, we will specify the equibiaxial analytical stress expression to fit against the equibiaxial test data.

The settings of the Optimization study step are shown in the screenshot below. The model tree branches have been manually renamed to reflect the material model (Neo-Hookean) and the two tests (uniaxial and equibiaxial). The optimization algorithm is a Levenberg-Marquardt solver, which is used to solve problems of the least-squares type. The model is now set to optimize the sum of two global least-squares objectives — the uniaxial and equibiaxial test cases.
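The same two-objective least-squares setup can be sketched outside COMSOL. The snippet below is an illustrative stand-in, not the COMSOL workflow: it assumes SciPy is available, generates synthetic “measured” uniaxial and equibiaxial data from a known Neo-Hookean \mu, and recovers the parameter with SciPy’s Levenberg-Marquardt implementation. The weights play the same role as the *Column contribution weight* setting.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" data from a known Neo-Hookean parameter (illustrative)
stretch = np.linspace(1.1, 3.0, 20)
mu_true = 0.4e6  # Pa
p1_uni_data = mu_true * (stretch - stretch**-2)   # uniaxial nominal stress
p1_bi_data = mu_true * (stretch - stretch**-5)    # equibiaxial nominal stress

def residuals(params, w_uni=1.0, w_bi=1.0):
    """Stacked weighted residuals for both tests; params = [mu]."""
    mu = params[0]
    r_uni = w_uni * (mu * (stretch - stretch**-2) - p1_uni_data)
    r_bi = w_bi * (mu * (stretch - stretch**-5) - p1_bi_data)
    return np.concatenate([r_uni, r_bi])

# Levenberg-Marquardt, minimizing the sum of both least-squares objectives
fit = least_squares(residuals, x0=[1.0e6], method="lm")
mu_fitted = fit.x[0]
```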

The plot below depicts the fitted data against the measured data. Equal weights are assigned to both the uniaxial and equibiaxial least-squares fitting. It is clear that the Neo-Hookean material model with only one parameter is not a good fit here, as the test data is nonlinear and has one inflection point.

*Fitted material parameters using the Neo-Hookean model. Equal weights are assigned to both data sets.*

Fitting the curves while specifying unequal weights for the two tests will result in a slightly different fitted curve. Similar to the Neo-Hookean model, we will set up global least-squares objectives corresponding to Mooney-Rivlin, Arruda-Boyce, Yeoh, and Ogden material models. In our calculation below, we will include cases for both equal and unequal weights.

In the case of unequal weights, we will use a higher but arbitrary weight for the entire equibiaxial data set. You may instead want to assign unequal weights only for a certain stretch range. If this is the case, we can split the particular test case into parts, using a separate *Global Least-Squares Objective* branch for each stretch range. This allows us to assign different weights to different stretch ranges.

The plots below show fitted curves for different material models for equal and unequal weights that correspond to the two tests.

*Left: Fitted material parameters using the Mooney-Rivlin, Arruda-Boyce, and Yeoh models, with equal weights assigned to both data sets. Right: The same models fitted with a higher weight assigned to the equibiaxial test data.*

The Ogden material model with three terms fits both data sets quite well for the case of equal weights assigned to both tests.

*Fitted material parameters using the Ogden model with three terms.*

If we only fit uniaxial data and use the computed parameters for plotting equibiaxial stress against the actual equibiaxial test data, we obtain the results in the plots below. These plots show the mismatch in the computed equibiaxial stress when compared to the measured equibiaxial stress. In material parameter estimation, it is best to perform curve fitting for a combination of different significant deformation modes rather than considering only one deformation mode.

*Uniaxial and equibiaxial stress computed by fitting model parameters to only uniaxial measured data.*

To find material parameters for hyperelastic material models, fitting the analytic curves may seem like a solid approach. However, the stability of a given hyperelastic material model may also be a concern. The criterion for determining material stability is known as *Drucker stability*. According to Drucker's criterion, the incremental work associated with an incremental stress should always be greater than zero. If the criterion is violated, the material model will be unstable.
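A full Drucker analysis checks the sign of the incremental work for arbitrary stress increments. As a rough standalone screen, one can at least verify that the fitted model's nominal stress increases monotonically with stretch in each test mode. The sketch below does this numerically for an incompressible Mooney-Rivlin model with two hypothetical parameter sets; this one-dimensional check is necessary but not sufficient for Drucker stability.

```python
import numpy as np

def mooney_rivlin_uniaxial(c10, c01, lam):
    # Nominal uniaxial stress for an incompressible Mooney-Rivlin solid:
    # P = 2*(lam - lam^-2)*(C10 + C01/lam)
    return 2.0 * (lam - lam**-2) * (c10 + c01 / lam)

def tangent_positive(c10, c01, lam_min=0.5, lam_max=3.0, n=400):
    """Screen: is the uniaxial stress monotonically increasing in stretch?"""
    lam = np.linspace(lam_min, lam_max, n)
    p = mooney_rivlin_uniaxial(c10, c01, lam)
    return bool(np.all(np.diff(p) > 0))

print(tangent_positive(0.3, 0.05))   # both constants positive: passes
print(tangent_positive(0.05, -0.2))  # strongly negative C01: fails near lam = 1
```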

In this blog post, we have demonstrated how you can use the *Optimization* interface in COMSOL Multiphysics to fit a curve to multiple data sets. An alternative method for curve fitting that does not require the *Optimization* interface was also a topic of discussion in an earlier blog post. Just as we have used uniaxial and equibiaxial tension data here for the purpose of estimating material parameters, you can also fit the measured data to shear and volumetric tests to characterize other deformation states.

For detailed step-by-step instructions on how to use the *Optimization* interface for the purpose of curve fitting, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

While many different types of laser light sources exist, they are all quite similar in terms of their outputs. Laser light is very nearly single frequency (single wavelength) and coherent. Typically, the output of a laser is also focused into a narrow collimated beam. This collimated, coherent, and single frequency light source can be used as a very precise heat source in a wide range of applications, including cancer treatment, welding, annealing, material research, and semiconductor processing.

When laser light hits a solid material, part of the energy is absorbed, leading to localized heating. Liquids and gases (and plasmas), of course, can also be heated by lasers, but the heating of fluids almost always leads to significant convective effects. Within this blog post, we will neglect convection and concern ourselves only with the heating of solid materials.

Solid materials can be either partially transparent or completely opaque to light at the laser wavelength. Depending upon the degree of transparency, different approaches for modeling the laser heat source are appropriate. Additionally, we must consider the size of the beam and of the heated objects relative to the wavelength of the light. If the laser is very tightly focused, then a different approach is needed compared to a relatively wide beam. If the material interacting with the beam has geometric features that are comparable to the wavelength, we must additionally consider exactly how the beam will interact with these small structures.

Before starting to model any laser-material interactions, you should first determine the optical properties of the material that you are modeling, both at the laser wavelength and in the infrared regime. You should also know the relative sizes of the objects you want to heat, as well as the laser wavelength and beam characteristics. This information will be useful in guiding you toward the appropriate approach for your modeling needs.

In cases where the material is opaque, or very nearly so, at the laser wavelength, it is appropriate to treat the laser as a surface heat source. This is most easily done with the *Deposited Beam Power* feature (shown below), which is available with the Heat Transfer Module as of COMSOL Multiphysics version 5.1. It is, however, also quite easy to manually set up such a surface heat load using only the COMSOL Multiphysics core package, as shown in the example here.

A surface heat source assumes that the energy in the beam is absorbed over a negligibly small distance into the material relative to the size of the object that is heated. The finite element mesh only needs to be fine enough to resolve the temperature fields as well as the laser spot size. The laser itself is not explicitly modeled, and it is assumed that the fraction of laser light that is reflected off the material is never reflected back. When using a surface heat load, you must manually account for the absorptivity of the material at the laser wavelength and scale the deposited beam power appropriately.

*The Deposited Beam Power feature in the Heat Transfer Module is used to model two crossed laser beams. The resultant surface heat source is shown.*
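When setting up such a surface heat load manually, the absorbed flux of a Gaussian beam is often written as q(r) = A (2P/(\pi w_0^2)) \exp(-2r^2/w_0^2), where A is the absorptivity, P the beam power, and w_0 the 1/e^2 spot radius. The sketch below uses hypothetical values for all three and verifies that the profile integrates to the absorbed power A P.

```python
import numpy as np

# Hypothetical beam parameters
P_laser = 100.0      # total beam power, W
absorptivity = 0.6   # fraction absorbed at the laser wavelength (assumed)
w0 = 0.5e-3          # 1/e^2 beam radius, m

def surface_heat_flux(x, y, x0=0.0, y0=0.0):
    """Absorbed heat flux (W/m^2) of a Gaussian beam centered at (x0, y0)."""
    r2 = (x - x0)**2 + (y - y0)**2
    peak = 2.0 * P_laser / (np.pi * w0**2)  # peak intensity of a Gaussian beam
    return absorptivity * peak * np.exp(-2.0 * r2 / w0**2)

# Sanity check: integrating the flux over the surface recovers A*P.
r = np.linspace(0.0, 5 * w0, 2000)
flux = surface_heat_flux(r, 0.0)
absorbed = np.trapz(flux * 2.0 * np.pi * r, r)
print(absorbed, absorptivity * P_laser)
```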

In cases where the material is partially transparent, the laser power will be deposited within the domain, rather than at the surface, and one of several different approaches will be appropriate, depending on the relative geometric sizes and the wavelength.

If the heated objects are much larger than the wavelength, but the laser light itself is converging and diverging through a series of optical elements and is possibly reflected by mirrors, then the functionality in the Ray Optics Module is the best option. In this approach, light is treated as a ray that is traced through homogeneous, inhomogeneous, and lossy materials.

As the light passes through lossy materials (e.g., optical glasses) and strikes surfaces, some power deposition will heat up the material. The absorption within domains is modeled via a complex-valued refractive index. At surfaces, you can use a reflection or an absorption coefficient. Any of these properties can be temperature dependent. For those interested in using this approach, this tutorial model from our Application Gallery provides a great starting point.

*A laser beam focused through two lenses. The lenses heat up due to the high-intensity laser light, shifting the focal point.*

If the heated objects and the spot size of the laser are much larger than the wavelength, then it is appropriate to use the Beer-Lambert law to model the absorption of the light within the material. This approach assumes that the laser light beam is perfectly parallel and unidirectional.

When using the Beer-Lambert law approach, the absorption coefficient of the material and reflection at the material surface must be known. Both of these material properties can be functions of temperature. The appropriate way to set up such a model is described in our earlier blog entry “Modeling Laser-Material Interactions with the Beer-Lambert Law“.

You can use the Beer-Lambert law approach if you know the incident laser intensity and if there are no reflections of the light within the material or at the boundaries.

*Laser heating of a semitransparent solid modeled with the Beer-Lambert law.*
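As a minimal sketch of this approach, the intensity decays into the material as I(z) = (1-R) I_0 \exp(-\alpha z), and the volumetric heat source is Q(z) = \alpha I(z). The reflectance, absorption coefficient, and incident intensity below are illustrative assumptions.

```python
import numpy as np

# Hypothetical material and beam parameters
I0 = 1e7        # incident intensity, W/m^2
R = 0.3         # surface reflectance (assumed)
alpha = 500.0   # absorption coefficient, 1/m (assumed)

z = np.linspace(0.0, 0.01, 200)          # depth into the material, m
I = (1 - R) * I0 * np.exp(-alpha * z)    # Beer-Lambert intensity decay
Q = alpha * I                            # volumetric heat source, W/m^3

# Energy balance: total absorbed power per unit area equals the intensity drop.
absorbed = np.trapz(Q, z)
print(absorbed, (1 - R) * I0 * (1 - np.exp(-alpha * z[-1])))
```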

If the heated domain is large, but the laser beam is tightly focused within it, neither the ray optics nor the Beer-Lambert law modeling approach can accurately solve for the fields and losses near the focus. These techniques do not directly solve Maxwell’s equations, but instead treat light as rays. The beam envelope method, available within the Wave Optics Module, is the most appropriate choice in this case.

The beam envelope method solves the full Maxwell's equations in cases where the field envelope is slowly varying. The approach is appropriate whenever the wave vector is approximately known throughout the modeling domain; that is, when you know roughly the direction in which the light is traveling. This is the case when modeling focused laser light as well as waveguide structures like a Mach-Zehnder modulator or a ring resonator. Since the beam direction is known, the finite element mesh can be very coarse in the propagation direction, thereby reducing computational costs.

*A laser beam focused in a cylindrical material domain. The intensity at the incident side and within the material are plotted, along with the mesh.*
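To judge how quickly the envelope varies near the focus, it is useful to estimate the waist and Rayleigh range, z_R = \pi w_0^2/\lambda, of an ideal Gaussian beam. The wavelength and waist radius below are illustrative assumptions.

```python
import numpy as np

wavelength = 1.064e-6   # Nd:YAG laser wavelength, m (example)
w0 = 10e-6              # beam waist radius at the focus, m (assumed)

z_R = np.pi * w0**2 / wavelength  # Rayleigh range

def beam_radius(z):
    """Gaussian beam radius at distance z from the focus."""
    return w0 * np.sqrt(1.0 + (z / z_R)**2)

print(z_R)               # ~0.295 mm for these values
print(beam_radius(z_R))  # sqrt(2) * w0 at one Rayleigh range
```

Over roughly one Rayleigh range around the focus, the field amplitude changes substantially, which sets the scale on which the envelope must still be resolved.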

The beam envelope method can be combined with the *Heat Transfer in Solids* interface via the *Electromagnetic Heat Source* multiphysics couplings. These couplings are automatically set up when you add the *Laser Heating* interface under *Add Physics*.

*The* Laser Heating *interface adds the* Beam Envelopes *and the* Heat Transfer in Solids *interfaces and the multiphysics couplings between them.*

Finally, if the heated structure has dimensions comparable to the wavelength, it is necessary to solve the full Maxwell's equations without assuming any propagation direction of the laser light within the modeling space. Here, we need to use the *Electromagnetic Waves, Frequency Domain* interface, which is available in both the Wave Optics Module and the RF Module. Additionally, the RF Module offers a *Microwave Heating* interface (similar to the *Laser Heating* interface described above) that couples the *Electromagnetic Waves, Frequency Domain* interface to the *Heat Transfer in Solids* interface. Despite the nomenclature, the RF Module and the *Microwave Heating* interface are appropriate over a wide frequency band.

The full-wave approach requires a finite element mesh that is fine enough to resolve the wavelength of the laser light. Since the beam may scatter in all directions, the mesh must be reasonably uniform in size. A good example of using the *Electromagnetic Waves, Frequency Domain* interface is modeling the losses in a gold nanosphere illuminated by a plane wave, as illustrated below.

*Laser light heating a gold nanosphere. The losses in the sphere and the surrounding electric field magnitude are plotted, along with the mesh.*

You can use any of the previous five approaches to model the power deposition from a laser source in a solid material. Modeling the temperature rise and heat flux within and around the material additionally requires the *Heat Transfer in Solids* interface. Available in the core COMSOL Multiphysics package, this interface is suitable for modeling heat transfer in solids and features fixed temperature, insulating, and heat flux boundary conditions. The interface also includes various boundary conditions for modeling convective heat transfer to the surrounding atmosphere or fluid, as well as modeling radiative cooling to ambient at a known temperature.

In some cases, you may expect that there is also a fluid that provides significant heating or cooling to the problem and cannot be approximated with a boundary condition. For this, you will want to explicitly model the fluid flow using the Heat Transfer Module or the CFD Module, which can solve for both the temperature and flow fields. Both modules can solve for laminar and turbulent fluid flow. The CFD Module, however, has certain additional turbulent flow modeling capabilities, which are described in detail in this previous blog post.

For instances where you are expecting significant radiation between the heated object and any surrounding objects at varying temperatures, the Heat Transfer Module has the additional ability to compute gray body radiative view factors and radiative heat transfer. This is demonstrated in our Rapid Thermal Annealing tutorial model. When you expect the temperature variations to be significant, you may also need to consider the wavelength-dependent surface emissivity.

If the materials under consideration are transparent to laser light, it is likely that they are also partially transparent to thermal (infrared-band) radiation. This infrared light will be neither coherent nor collimated, so we cannot use any of the above approaches to describe the reradiation within semitransparent media. Instead, we can use the radiation in participating media approach. This technique is suitable for modeling heat transfer within a material, where there is significant heat flux inside the material due to radiation. An example of this approach from our Application Gallery can be found here.

In this blog post, we have looked at the various modeling techniques available in the COMSOL Multiphysics environment for modeling the laser heating of a solid material. Surface heating and volumetric heating approaches are presented, along with a brief overview of the heat transfer modeling capabilities. Thus far, we have only considered the heating of a solid material that does not change phase. The heating of liquids and gases — and the modeling of phase change — will be covered in a future blog post. Stay tuned!


My parents love each other to death, but their habits can sometimes clash. My mom enjoys watching late-night television talk shows, while my dad prefers a good night’s sleep whenever he gets the chance. As they will eventually need to downsize, I decided to help them plan a home where they could stay friends.

From past experience, I knew there was no sense in trying to optimize the location of every plant, rug, or bookcase. My dad is constantly moving furniture around to, as he says, “feel the space.” My mom, meanwhile, tends to pull the couch closer and closer to the television rather than admitting that she needs glasses.

In short, they are bound to mess around with the input data and thereby remove digit after digit from the precision of any *a priori* sound level estimate. Luckily, accuracy was not a primary concern. I just needed to establish that my dad would get his beauty sleep.

The obvious start for a “quick and dirty” model of the acoustics in an apartment is the *Acoustic Diffusion Equation* interface. This interface is very easy to use and, in most situations, it is a lot faster than the more accurate *Pressure Acoustics* or *Ray Acoustics* interfaces.

For my simulation, I created a simple drawing of the relevant rooms and included the larger pieces of furniture. With the drawing complete, I set out to create my first acoustic diffusion model. This was a breeze — two power sources representing the stereo speakers connected to the television, absorption coefficients assigned to the walls and the furniture, and…that was it.

Approximate absorption coefficients for common materials are easy to find online. If you want to be more thorough, you can use different values in different frequency bands, or even specify them as arbitrary functions of the frequency. I selected a constant low value for the walls (including the floor and the ceiling) and a higher one for the soft parts of the furniture. To compensate for the lack of carpets and the relatively sparse decoration, I then nudged the wall coefficient slightly in the upward direction. If you decide on a similar time-saving measure, I suggest that you be open about it. My parents understand that acoustic diffusion is not an exact science, and they appreciate my honesty.

*Distribution of the sound pressure level (dB) without a door between the rooms. The red dots indicate my mom’s viewing position and my dad’s head while he is trying to sleep.*

My first solution shows a sound pressure level decreasing by a quite modest 11 dB between the living room couch and the bedroom. Luckily, two important elements that would increase the difference were still missing.

The first element is a door between the rooms. If your door manufacturer cites a transmission loss in dB, make sure to check whether it concerns only transmission through the door itself, or if it was measured with the door in its frame. This makes a difference because a significant amount of sound may sneak through the space between the door and the floor unless you install a fitting. If you have access to a drawing of the door and know the material that was used to make it, you can, of course, run an acoustic-structure interaction analysis to get a second opinion. It is trivial to include a door with a specified transmission loss in an acoustic diffusion simulation.

The second element is the direct sound. The acoustic diffusion equation only deals with the part of the sound that has already struck the walls or the furniture and has become diffuse. With my mom sitting directly in front of the television, there is also a significant direct sound reaching her. By approximating the sound sources as points and neglecting shadowing from the table, it is quite simple to add the direct sound as an analytical expression in terms of the emitted sound power and the local coordinates.
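For a point source in a free field, the direct contribution can be written as L_p = L_W + 10\log_{10}(Q/(4\pi r^2)) and then summed energetically with the diffuse level. The sound power level, distance, and diffuse level below are hypothetical, not the values from my model.

```python
import numpy as np

def direct_spl(L_W, r, Q=1.0):
    """Free-field SPL (dB) at distance r (m) from a point source with
    sound power level L_W (dB re 1 pW) and directivity factor Q."""
    return L_W + 10.0 * np.log10(Q / (4.0 * np.pi * r**2))

def add_levels(*levels):
    """Energetic (incoherent) sum of sound pressure levels in dB."""
    return 10.0 * np.log10(sum(10.0**(L / 10.0) for L in levels))

# Hypothetical numbers: a source with 75 dB sound power level, 2.5 m from
# the couch, combined with a 52 dB diffuse level from the diffusion model.
L_direct = direct_spl(75.0, 2.5)
L_total = add_levels(L_direct, 52.0)
print(L_direct, L_total)
```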

In my second — and final — simulation, I added the direct sound hitting the couch and put up a door between the rooms. The total loss between the couch and the bed was now a much more acceptable 23 dB. I provided my parents with a nice printed report and gave them the thumbs up to move into the home.

*Sound pressure level distribution (dB) with a door added and an approximation of the direct sound included in the living room.*

Diffusion is often discussed as a description of the motion of particles in a gas. The particles travel in straight lines except when, at random intervals, they collide with other gas molecules. The diffusion coefficient is a function of the mean free path between two consecutive collisions.

*Mean free path between collisions in a gas (left) and for sound particles in a room (right).*

The acoustic diffusion equation deals with conceptual “sound particles”, with a density proportional to the local sound energy. These particles do not bounce off the air molecules, but rather off the walls of the room. The mean free path \lambda and, with that, the diffusion coefficient D relate to the proportions of the room. It holds that \lambda = 4V/S, where V is the volume of the room and S is the total surface area of the room’s walls, floor, and ceiling. In turn, D=\lambda c/3, where c is the speed of sound.
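For a rectangular room, these two formulas are easy to evaluate directly; the room dimensions below are a hypothetical example.

```python
# Mean free path lambda = 4V/S and diffusion coefficient D = lambda*c/3
# for a rectangular room.
c = 343.0  # speed of sound in air, m/s

def room_diffusion(length, width, height):
    V = length * width * height
    S = 2.0 * (length * width + length * height + width * height)
    mfp = 4.0 * V / S
    return mfp, mfp * c / 3.0

mfp, D = room_diffusion(5.0, 4.0, 2.5)  # a hypothetical 5 x 4 x 2.5 m room
print(mfp, D)
```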

The implementation of the acoustic diffusion equation in COMSOL Multiphysics is

\frac{\partial{w}}{\partial{t}}+\nabla \cdot (-D_t \nabla w) + c m_a w = q(\textbf{x},t)

The equation is solved for the acoustic energy density w, from which you can derive the sound pressure level and other important measurables. If you drop the time derivative, you get the stationary form. The volume absorption coefficient m_a accounts for the air dissipation, which is often negligible but sometimes important in very large spaces. D_t = D is the diffusion coefficient and q is an optional volumetric sound source. With the alternative formulation

\frac{\partial{w}}{\partial{t}}+\nabla \cdot (-D_t \nabla w) + c (m_a + \frac { \alpha_f } {\lambda_f}) w = q(\textbf{x},t), ~ ~ D_t = \frac {D_f D}{D_f+D}

you can also include an averaged description of the furnishing. Here, \alpha_f is the average absorption coefficient of the furniture (the fittings). The diffusion coefficient D_f and the mean free path \lambda_f derive from the number density and the average cross section of the furniture.

Say, for example, that my parents had wanted to invest in a furniture store. In that case, I would have used this formulation rather than draw each individual item.

The boundary conditions include various ways of specifying the local absorption coefficient and applying sound sources. Point sources are also available.

Like ray acoustics, the acoustic diffusion equation does not account for low-frequency behaviors, such as standing waves or diffraction around corners. These are chiefly important below the *Schroeder frequency*, which you can learn more about in the blog post “Modeling Room Acoustics with COMSOL Multiphysics“. In my parents’ new living room and bedroom, the Schroeder frequencies are 167 Hz and 183 Hz, respectively.
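The Schroeder frequency is commonly estimated as f_S = 2000\sqrt{T_{60}/V}, with the reverberation time T_{60} in seconds and the room volume V in cubic meters. The numbers below are hypothetical, not the ones from my rooms.

```python
import math

def schroeder_frequency(T60, V):
    """Schroeder frequency (Hz) from reverberation time T60 (s) and volume V (m^3)."""
    return 2000.0 * math.sqrt(T60 / V)

f_s = schroeder_frequency(0.35, 50.0)  # hypothetical: T60 = 0.35 s, V = 50 m^3
print(f_s)
```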

For sound to become diffuse, it needs to bounce around the room for a while. Compared with ray acoustics, the main limitation of the acoustic diffusion equation is that it does not include early sound. This means that it will systematically underestimate the sound pressure level in the vicinity of the sound sources — the case for my mom’s viewing position right in front of the television. You can often at least partially compensate for this limitation, as I did, by calculating the direct sound analytically and adding it to the diffuse solution. It can, however, become rather difficult or impossible to do so if there are obstacles near the sources acting to reflect or absorb the sound.

While it can be argued that acoustic diffusion is the least accurate of the three acoustics analysis methods available in COMSOL Multiphysics, it is easier to set up and is often orders of magnitude faster to solve than the other methods. The solution time required to produce the plots presented here was approximately 2.5 seconds on a regular desktop computer. For a ray tracing model with a high number of rays, obtaining good statistics would take at the very least a few minutes — and possibly hours — to solve. Pressure acoustics is the only game in town for the low-frequency, resonance-dominated range. But, for frequencies much greater than the Schroeder frequency, this approach would be out of the question due to the quickly increasing solution time and memory usage.

At the end of the day, if your parents ask you for an opinion on the sound environment in their living quarters, I’d wholeheartedly recommend that you run an acoustic diffusion simulation for them. Alternatively, if you are in the early stages of designing a concert hall or an office space, acoustic diffusion can still be a great tool for obtaining an initial assessment of the high-frequency sound distribution. You can then add ray acoustics to predict the early sound and get a more accurate result, as well as pressure acoustics to investigate the low-frequency behavior.

The simulation referenced in this blog post is available for download here. To learn more about room acoustics, I also encourage you to download the One-Family House Acoustics tutorial from our Application Gallery.


Advanced composites are used extensively throughout the Boeing 787 Dreamliner, as shown in the diagram below. Also known as carbon fiber reinforced plastic (CFRP), the composites are formed from a lightweight polymer binder with dispersed carbon fiber filler to produce materials with high strength-to-weight ratios. Many wing components, for example, are made of CFRP, ensuring that they can support the load imposed during flight while minimizing their overall contribution to the weight of an aircraft.

*Advanced composites are used throughout the body of the Boeing 787. Copyright © Boeing.*

Despite their remarkable strength and light weight, CFRPs are generally not conductive like their aluminum counterparts, thus making them susceptible to lightning strike damage. Therefore, electrically conductive expanded metal foil (EMF) is added to the composite structure layup, shown in the figure below, to dissipate the high current and heat generated by a lightning strike.

*The composite structure layup shown at left consists of an expanded metal foil layer shown at right. This figure is a screenshot from the COMSOL Multiphysics® software model featured in this blog post. Copyright © Boeing.*

The figure also shows the additional coatings on top of the EMF, which are in place to protect it from moisture and environmental species that cause corrosion. Corrosive damage to the EMF could result in lower conductivity, thereby reducing its ability to protect aircraft structures from lightning strike damage. Temperature variations due to the ground-to-air flight cycle can, however, lead to the formation of cracks in the surface protection scheme, reducing its effectiveness.

During takeoff and landing, aircraft structures are subjected to cooling and heating, respectively. Thermal stress manifests as the expansion and compression — or ultimately the displacement — of adjacent layers throughout the depth of the composite structure. Although a single round-trip is not likely to pose a significant risk, over time, each layer of the composite structure contributes to fatigue damage buildup. Repetitive thermal stress results in cumulative strain and higher displacements, which are, in turn, associated with an increased risk of crack formation. The stresses in a material depend on its mechanical properties quantified by measurable attributes such as yield strength, Young’s modulus, and Poisson’s ratio.
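As a rough order-of-magnitude check, the stress in a thin layer that is fully constrained in-plane by its surroundings is \sigma = E \alpha \Delta T/(1-\nu). The temperature swing and handbook-style material properties below are illustrative assumptions, and this simple bound ignores the compliance of the actual layered stack.

```python
# Biaxial thermal stress in a fully constrained thin layer:
# sigma = E * alpha * dT / (1 - nu). Illustrative values only.
def biaxial_thermal_stress(E, alpha, nu, dT):
    return E * alpha * dT / (1.0 - nu)

dT = 70.0  # assumed ground-to-altitude temperature swing, K
stress = {
    name: biaxial_thermal_stress(E, alpha, nu, dT)
    for name, E, alpha, nu in [
        ("aluminum", 70e9, 23e-6, 0.33),
        ("copper", 110e9, 17e-6, 0.34),
    ]
}
for name, s in stress.items():
    print(f"{name}: {s / 1e6:.0f} MPa")
```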

By taking the thermal and mechanical properties of materials into account, it is possible to use simulation to design and optimize a surface protection scheme for aircraft composites that minimizes stress, displacement, and the risk of crack formation.

Evaluating the thermal performance of each layer in the surface protection scheme is essential in order to reduce the risks and maintenance costs associated with damage to the protective coating and EMF. Therefore, researchers at Boeing Research & Technology (BR&T), pictured below, are using multiphysics simulation and physical measurements to investigate the effect of the EMF design parameters on stress and displacement throughout the composite structure layup.

*The research team at Boeing Research & Technology from left to right: Patrice Ackerman, Jeffrey Morgan, Robert Greegor, and Quynhgiao Le. Copyright © Boeing.*

In their work, the researchers at BR&T have developed a coefficient of thermal expansion (CTE) model in COMSOL Multiphysics® simulation software. The figure shown above that presents the composite structure layup and EMF is a screenshot acquired from the model geometry used for their simulations in COMSOL Multiphysics.

The CTE model was used to evaluate heating of the aircraft composite structure as experienced upon descent, where the final and initial temperatures used in the simulations represent the ground and altitude temperatures, respectively. The *Thermal Stress* interface, which couples heat transfer and solid mechanics, was used in the model to simulate thermal expansion and solve for the displacement throughout the structure.

The material properties of each layer in the surface protection scheme as well as of the composites are custom-defined in the CTE model. The relative values of the coefficient of thermal expansion, heat capacity, density, thermal conductivity, Young’s modulus, and Poisson’s ratio are presented in the chart below.

*This graph presents the ratio of each material parameter relative to the paint layer. Copyright © Boeing.*

From the graph, trends can be identified that provide early insight into the behavior of the materials, which aids in making design decisions. For example, the paint layer is characterized by higher values of CTE, heat capacity, and Poisson’s ratio, thus indicating that it will undergo compressive stress upon heating and tensile strain upon cooling.

Multiphysics simulation takes this predictive design capability one big step forward by quantifying the resulting displacement due to thermal stress throughout the entire composite structure layup simultaneously, taking into account the properties of all materials. The following figure shows an example of BR&T’s simulation results and presents the stress distribution and displacement throughout the composite structure.

*Left: Top-down and cross-sectional views of the von Mises stress and displacement in a one-inch square sample of a composite structure layup. Right: Transparency was used to show regions of higher stress, in red. Lower stress is shown in blue. Copyright © Boeing.*

In the plots at the left above, the displacement pattern caused by the EMF is evident through the paint layer at the top of the composite structure while a magnified cross-sectional view shows the variations in displacement above the mesh and voids of the EMF. The cross section also makes it easy to see the stress distribution through the depth of the composite structure, where there is a trend toward lower stress in the topmost layers. Transparency was used in the plot shown at the right to depict the regions of high stress in the composites and EMF, which is noticeably higher at the intersection of the mesh wires. Stress was plotted through the depth of the composite structure layup along the vertical red line shown in the center of the plot. The figure below shows the relative stress in each layer of the composite structure layup for different metallic compositions of the EMF.

*Relative stress in arbitrary units was plotted through the depth of the composite structure layups containing either aluminum (left) or copper EMF (right). Copyright © Boeing.*

The samples vary by the presence of a fiberglass corrosion isolation layer when aluminum is used as the material for the EMF. The fiberglass acts as a buffer, resulting in lower stress in the aluminum EMF when compared with the copper one.

From lightning strike protection to the structural integrity of the composite protection scheme, it all relies on the design of the expanded metal foil layer. The design of the EMF layer can vary by its metallic composition, height, width of the mesh wire, and the mesh aspect ratio. For any EMF design parameter, there is a trade-off between current-carrying capacity, displacement, and weight. By using the CTE model, the researchers at BR&T found that increasing the mesh width and decreasing the aspect ratio are better strategies for increasing the current-carrying capacity of the EMF while minimizing its impact on displacement in the composite structure.

The metal chosen for the EMF can also have a significant effect on stress and displacement in the composite structure, which was investigated using simulation and physical testing. Two composite structures, one with aluminum and the other with copper EMF, underwent thermal cycling with prolonged exposure to moisture in an environmental test chamber. In the results, shown below, the protective layers remained intact for the composite structure with copper EMF. However, for the layup with aluminum, cracking occurred in the primer, at the edges, on surfaces, and was particularly substantial in the mesh overlap regions.

*Photo micrographs of the composite structure layup after exposure to moisture and thermal cycling. A crack in the vicinity of the aluminum EMF is contained within the red ellipse. Copyright © Boeing.*

Simulations confirm the experimental results. Shown below, displacements are noticeably higher throughout the composite structure layup when aluminum is used for the EMF layer, and higher displacements are associated with an increased risk of developing cracks. The higher displacement is easiest to observe in the bottom plots, which show displacement ratios for each EMF height.

*Effect of varying the EMF height on displacement in each layer of the surface protection scheme. Copyright © Boeing.*

The larger displacements caused by the aluminum EMF can be attributed in part to its higher CTE when compared with copper, which exemplifies how important the properties of materials are to the thermal stability of the aircraft composite structures.
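The effect can be illustrated with the free thermal strain mismatch between the EMF metal and the nearly zero-CTE composite, \epsilon = (\alpha_{metal} - \alpha_{CFRP}) \Delta T. The CTE values below are typical handbook numbers and the temperature swing is an assumption.

```python
# Free thermal strain mismatch e = (alpha_metal - alpha_cfrp) * dT.
alpha_cfrp = 2e-6  # in-plane CTE of a quasi-isotropic CFRP laminate, 1/K (assumed)
dT = 70.0          # assumed ground-to-altitude temperature swing, K

mismatch = {
    name: (alpha - alpha_cfrp) * dT
    for name, alpha in [("aluminum", 23e-6), ("copper", 17e-6)]
}
for name, e in mismatch.items():
    print(f"{name}: {e:.2e}")
```

With these numbers, the aluminum-composite mismatch strain is roughly 40% larger than the copper-composite one, consistent with the trend observed in the Boeing results.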

In the early design stages and along with experimental testing, multiphysics simulation offers a reliable means to evaluate the relative impact of the EMF design parameters on stress and displacement throughout the composite structures. An optimized EMF design is essential to minimizing the risk of crack formation in the composite surface protection scheme, which reduces maintenance costs and allows the EMF to perform its important protective function of mitigating lightning strike damage.

Refer to page 4 of *COMSOL News* 2014 to read the original article, “Boeing Simulates Thermal Expansion in Composites with Expanded Metal Foil for Lightning Strike Protection of Aircraft Structures”.

This article was based on the following publicly available resources from Boeing:

- The Boeing Company. “787 Advanced Composite Design.” 2008-2013.
- J.D. Morgan, R.B. Greegor, P.K. Ackerman, Q.N. Le, “Thermal Simulation and Testing of Expanded Metal Foils Used for Lightning Protection of Composite Aircraft Structures,” SAE Int. J. Aerosp. 6(2):371-377, 2013, doi:10.4271/2013-01-2132.
- R.B. Greegor, J.D. Morgan, Q.N. Le, P.K. Ackerman, “Finite Element Modeling and Testing of Expanded Metal Foils Used for Lightning Protection of Composite Aircraft Structures,” Proceedings of 2013 ICOLSE Conference; Seattle, WA, September 18-20, 2013.

To learn more about adding material property data to your COMSOL Multiphysics® simulations, read the blog post series on *Obtaining Material Data for Structural Mechanics Simulations from Measurements* by my colleague Henrik Sönnerlind.

General information about aircraft design and structures can be found in chapter 1 of this handbook on aircraft maintenance from the Federal Aviation Administration.

*BOEING, Dreamliner, and 787 Dreamliner are registered trademarks of The Boeing Company Corporation in the U.S. and other countries.*

By design, heat exchangers transfer heat from one medium to another. When analyzing the efficiency of this heat transfer, it is important to consider the impact of the system’s dimensions. Simulation offers a simplified approach to testing the performance of various designs. With simulation apps, you can now bring this power into the hands of those who are not simulation experts. Let’s get started by exploring the Concentric Tube Heat Exchanger Dimensioning Tool demo app.

With its simple design and ability to operate under high pressures, the concentric tube heat exchanger is a valued resource in many industries for a range of purposes, from food preparation to material processing. In this type of heat exchanger, a pipe is placed inside another pipe, with cold fluid traveling through the inner tube and warm fluid traveling through the space between the inner tube and outer tube. These fluids can flow parallel to one another or they can run in counterflow. While flowing through the system, heat from the warm fluid is transferred through the inner tube to the cold fluid.

As with other heat exchanger configurations, dimensioning quantities are a key indicator of the behavior of a concentric tube heat exchanger. By computing these quantities, engineers can see how efficient the system is at transferring heat as well as how to improve the design to optimize the process. Simulation saves many steps in this analysis by testing different dimensions in a virtual setting. As different design schemes are developed, the designers and manufacturers behind the heat exchanger will turn to you — the simulation engineer — to run their simulations. This can, of course, be a rather time-consuming process, as the design can go through numerous minor modifications before the optimal configuration is achieved.

With the Application Builder, you can now take the physics and functionality behind your model and make it available in an easy-to-use simulation app. Once you have customized the app’s layout to meet specific design needs, it can be shared with your colleagues and customers, who can then run their own simulations. Extending the scope of simulation capabilities not only establishes a more integrated workflow, but it also enables you to take on more simulation projects.

To demonstrate how you can set up a simulation app that determines the optimal dimensions for a concentric tube heat exchanger, we have created a demo app, available for download via COMSOL Multiphysics and the online Application Gallery. I will introduce you to our Concentric Tube Heat Exchanger Dimensioning Tool demo app here and if you want to take it for a test run, feel free to download the files afterward.

The concentric tube heat exchanger demo app is based on a 2D axisymmetric model in which two rectangles are revolved to build the pipes. To solve the flow in each pipe, a laminar or turbulent formulation is used, providing temperature and pressure profiles for both the inner and outer pipes. The results below show the profiles for the default operating parameters within the app.

*On the left: Temperature profiles for the pipes. On the right: Pressure profiles for the pipes.*

Our demo app is customized to allow app users to modify the heat exchanger’s pipes as well as the fluids. Whereas textbook formulas rely on several assumptions, this app can account for a variety of parameters, from the temperature dependence of fluid properties to the different thicknesses of the inner and outer pipes. Using the app, however, remains as simple as applying a textbook formula. The app user simply needs to enter the particular parameter values and, with just one click, they will have their outputs.

When building your own simulation app, you can personalize the user interface to best fit the needs of your analysis. Customizing your app allows you to include only the information that is important to your specific design, creating a more simplified user interface.

Learn more about building and customizing a simulation app in this blog post: “How to Create a Simulation App: Horn Antenna Demo“.

Upon opening the heat exchanger demo app, the *Settings* panel initially appears. Within this panel, app users can change the geometrical parameters, operating conditions, and material properties of the heat exchanger’s tubes.

*The* Settings *panel for the concentric tube heat exchanger app.*

At the top of the application window, there is a ribbon featuring a series of buttons that can be used to shift between different panels. The panels include options for configuring the simulation as well as showing the simulation results. This intuitive format allows app users to easily navigate through the interface, moving from one panel to another.

*Ribbon at the top of the application window.*

Once computed, the app displays the temperature profile and several other quantities. These quantities include the log mean temperature difference; the overall heat transfer coefficient between the two pipes; the effectiveness, which is the ratio of the exchanged heat flux to the maximum possible flux; and the number of transfer units (NTU), which is the ratio of the product of the overall heat transfer coefficient and the heat transfer area to the minimum heat capacity rate.
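These dimensioning quantities follow standard heat exchanger relations from textbooks. A minimal sketch of how they might be evaluated from terminal temperatures and an assumed UA product (all numbers here are illustrative, not values from the app):

```python
import math

def lmtd(dT1, dT2):
    """Log mean temperature difference from the two terminal temperature
    differences of the exchanger."""
    if math.isclose(dT1, dT2):
        return dT1  # limit case: the formula is 0/0 when dT1 == dT2
    return (dT1 - dT2) / math.log(dT1 / dT2)

def effectiveness(q, C_min, T_hot_in, T_cold_in):
    """Ratio of the actual heat transfer rate to the maximum possible rate."""
    q_max = C_min * (T_hot_in - T_cold_in)
    return q / q_max

def ntu(U, A, C_min):
    """Number of transfer units: UA over the minimum heat capacity rate."""
    return U * A / C_min

# Illustrative values (assumed):
print(lmtd(20.0, 10.0))                       # K
print(effectiveness(500.0, 100.0, 350.0, 300.0))
print(ntu(200.0, 0.5, 100.0))
```

The app goes well beyond these textbook relations, since it solves the coupled flow and heat transfer problem, but the outputs it reports are these same standard quantities.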

You can create simulation apps to package results in a simplified format that can be easily understood by other people involved in various stages of the design and manufacturing process. The screenshot below shows the simulation results for the demo app, available in the *Results* panel. We have also configured the UI of this demo app to allow users to access the results by clicking on the *Results* button at any point in time.

*Simulation results.*

Simulation apps are changing the scope of simulation, extending its capabilities to a wider user base. By turning your own model into an app, you can establish a more efficient approach to simulation, empowering others in your organization to run their own simulation tests. You can use the demo app presented here as a foundation for building your own heat exchanger app, tailoring the configuration of the heat exchanger — and the app’s layout — to reflect your own design needs.

- Explore the inner workings of the Concentric Tube Heat Exchanger Dimensioning Tool demo app

Let’s begin with a quick review. When solids enter a humid environment, it is likely that some of them will catch water molecules. The absorption and storage of these molecules can cause the solid to swell up, exposing it to increased stresses and strains. This effect is known as *hygroscopic swelling*.

Hygroscopic swelling is a phenomenon that occurs in various sectors of industry, from wood construction and paper to electronics and food processing. Whether an expected behavior or an undesirable effect, it must be modeled accurately in order to quantify its effects.

The Hygroscopic Swelling feature in COMSOL Multiphysics enables you to do exactly that. Available as a subnode for most material models in the structural mechanics interfaces, this feature allows you to analyze the effect of moisture concentrations within the solids, such as resulting deformations and stresses.

*The user interface (UI) of the Hygroscopic Swelling feature. The main inputs are colored and numbered.*

Using the above figure as our guide, we can now take a closer look at how this feature is used.

Hygroscopic swelling creates an inelastic strain that is proportional to the difference between the concentration and the strain-free reference concentration:

\epsilon_\textrm{hs}=\beta_\textrm{h} C_\textrm{diff}

where the coefficient of hygroscopic swelling \beta_\textrm{h} can be given in the material properties or directly in the node (Number 5 in the screenshot above). It does not have to be constant; it can depend on, for example, temperature or the moisture concentration itself.

In small deformation theory, the hygroscopic swelling contribution is additive — that is, the inelastic strain is the sum of the other inelastic strains and the hygroscopic strain. The coefficient of hygroscopic swelling is a second-order tensor, which can be defined as isotropic, diagonal, or symmetric. The expansion can thus be different in different directions. In wood, this effect is very pronounced.
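As a small numerical sketch of the small-strain case, the swelling strain tensor can be evaluated directly from the formula above. The coefficient values below are assumptions chosen to mimic an orthotropic material such as wood, where swelling across the grain is much larger than along it:

```python
import numpy as np

# Assumed diagonal (orthotropic) coefficient of hygroscopic swelling,
# e.g. wood-like: small along the grain, larger across it.
beta_h = np.diag([6e-5, 30e-5, 30e-5])  # m^3/kg, assumed values
C = 12.0       # kg/m^3, current moisture mass concentration (assumed)
C_ref = 8.0    # kg/m^3, strain-free reference concentration (assumed)

# Inelastic hygroscopic strain tensor, proportional to C - C_ref:
eps_hs = beta_h * (C - C_ref)
print(eps_hs)
```

Because the coefficient is a diagonal tensor here, the strain differs in the three material directions while the shear components remain zero.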

In large deformation theory, available under the Hyperelastic Material model, the hygroscopic contribution is multiplicative — that is, the total deformation gradient tensor F is scaled by the hygroscopic stretch to form the elastic deformation gradient tensor F_\textrm{e} :

\begin{array}{ll}
\epsilon=\frac{1}{2} \left( F_\textrm{e}^\textrm{T}F_\textrm{e}-I \right) & F_\textrm{e}=F J_\textrm{hs}^{-1/3} \\
J_\textrm{hs}= \left(1+\beta_\textrm{h} C_\textrm{diff} \right)^3
\end{array}

In this case, the coefficient of hygroscopic swelling is isotropic, so only uniform volumetric expansion is taken into account.
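The multiplicative split can be sketched in a few lines. The deformation gradient, coefficient, and concentration difference below are assumed illustrative values; the point is that scaling F by the hygroscopic stretch removes the swelling volume change from the elastic part:

```python
import numpy as np

# Assumed inputs for the sketch:
beta_h, C_diff = 2e-4, 5.0
J_hs = (1.0 + beta_h * C_diff) ** 3          # hygroscopic volume ratio

F = np.array([[1.05, 0.02, 0.00],
              [0.00, 0.98, 0.00],
              [0.00, 0.00, 1.01]])           # assumed total deformation gradient

F_e = F * J_hs ** (-1.0 / 3.0)               # elastic deformation gradient
E = 0.5 * (F_e.T @ F_e - np.eye(3))          # Green-Lagrange elastic strain
```

A quick consistency check: since det(F_e) = det(F)/J_hs, the elastic part carries exactly the volume change that is not accounted for by swelling.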

The hygroscopic swelling has two types of effects. When applied to free structures, it induces deformations. When applied to fully constrained structures, deformation is prevented, causing the stress inside the structure to increase instead. In real structures (often partially constrained), the effect is a mixture of these two behaviors.

*Example of a free solid (left column) and a fully constrained solid (right column) subjected to hygroscopic swelling with a constant moisture concentration. The first row shows roller constraints applied on each solid. Plotted results are the displacement field in the second row and von Mises stress in the third row. The free solid is only constrained by two roller conditions, which enables the solid to expand and completely release the stress. On the contrary, the solid constrained with roller conditions all around it shows no displacement but encounters an increase in stress.*

Depending on the selected moisture concentration type (2), the concentration is defined either as a mass concentration ( C_\textrm{mo} and C_\textrm{mo,ref}) or a molar concentration ( c_\textrm{mo} and c_\textrm{mo,ref}). As C_\textrm{diff} is the mass concentration difference, the molar mass M_\textrm{m} must also be specified (4) when molar concentration is used as the input. The default value for M_\textrm{m} is the molar mass of water, 0.018 \; \textrm{kg}/\textrm{mol}.

\epsilon_\textrm{hs}=\beta_\textrm{h} M_\textrm{m} \left(c_\textrm{mo}-c_\textrm{mo,ref} \right) for molar concentration

\epsilon_\textrm{hs}=\beta_\textrm{h}\left(C_\textrm{mo}-C_\textrm{mo,ref} \right) for mass concentration
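The molar-to-mass conversion is just a multiplication by the molar mass. A short sketch with assumed concentration values (only the molar mass of water, 0.018 kg/mol, is a physical constant here):

```python
# Converting a molar concentration difference to the mass concentration
# difference used in the swelling strain. Concentrations are assumed values.
M_m = 0.018                      # kg/mol, molar mass of water
c_mo, c_mo_ref = 500.0, 100.0    # mol/m^3, assumed molar concentrations
beta_h = 2e-4                    # m^3/kg, assumed swelling coefficient

C_diff = M_m * (c_mo - c_mo_ref)   # kg/m^3, mass concentration difference
eps_hs = beta_h * C_diff           # resulting (isotropic) swelling strain
```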

The concentration (1) can either be user-defined or computed by another physics interface. As with any input in COMSOL Multiphysics, user-defined values can be a function of other variables, such as the space coordinates X, Y, and Z.

*On the left: User-defined, space-dependent moisture concentration. On the right: Displacement induced by hygroscopic swelling. The top face, where the concentration is highest, shows the largest displacement.*

The strain-free reference concentration (3) is the moisture concentration at which hygroscopic swelling has no effect. It can often be interpreted as an initial state, or the ex-factory moisture concentration. A moisture concentration higher than the reference concentration represents moistening and causes the solid to expand. A moisture concentration lower than the reference concentration represents drying and causes the solid to shrink.

*Left: Displacement with zero strain reference concentration. Right: Displacement with nonzero strain reference concentration. The applied concentration, which is the same in both cases but lower than the strain reference concentration, implies shrinkage of the solid.*

Often, the moisture concentration within a solid is unknown and has to be computed with a preceding simulation. You can compute the concentration with the *Transport of Diluted Species* interface or the *Transport of Diluted Species in Porous Media* interface. Such an approach is used in our MEMS Pressure Sensor Drift due to Hygroscopic Swelling example, new with COMSOL Multiphysics version 5.1.

One way to feed the computed concentration to the *Solid Mechanics* interface is to specify the desired concentration variable in the combo box of the Hygroscopic Swelling feature. There is, however, an even simpler approach.

In version 5.1, you can use a multiphysics coupling, which becomes available when at least one solid mechanics and one transport physics interface are both present in the model tree. With this coupling feature, you simply have to specify from which transport interface the concentration derives and to which solid mechanics interface you are applying hygroscopic swelling. You will also need to set the reference concentration, the molar mass, and the coefficient of hygroscopic swelling for all of the selected domains. When using the multiphysics coupling, you do not need to add any hygroscopic swelling subnodes to the material models.

*Selecting the participating physics interfaces in the Multiphysics Coupling node for hygroscopic swelling.*

*Left: Moisture concentration computed in the* Transport of Diluted Species interface. *Right: Displacement resulting from hygroscopic swelling.*

In the *Beam*, *Shell*, and *Plate* interfaces, the moisture concentration input is partitioned into an average concentration on the center line or midsurface, and a concentration gradient in the transverse direction(s). The latter causes the structure to bend.

The input for hygroscopic swelling in the *Beam* interface contains concentration gradient in the local *y-* and *z-*directions. In the *Shell* and *Plate* interfaces, it contains a concentration difference between the top face and the bottom face.

*Hygroscopic bending in a 2D* Beam *interface.*

*On the left: Moisture concentration. On the right: Resulting displacement. In the solid, bending is caused by the nonuniform expansion, which is higher on the top face than the bottom face. In the beam, the bending caused by the same effect is captured using the moisture gradient c_{\textrm{g}y}. In both plots, the solid model is placed above the beam model.*

When the “Include moisture as added mass” checkbox is marked (6), the weight of the water that is absorbed or released by the solid will have an effect on the mass-dependent phenomena, such as gravity or rotating frame loads. It will also have an effect on inertial terms in time-dependent or frequency domain studies.
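The effect of the added water mass on a resonance can be estimated by treating the structure as a single-degree-of-freedom oscillator. All numbers in the sketch below are assumptions chosen for illustration:

```python
import math

# Single-degree-of-freedom estimate of the resonance shift from absorbed
# moisture. Stiffness, dry mass, and water uptake are assumed values.
k = 1.0e6        # N/m, assumed stiffness
m_dry = 0.10     # kg, dry mass
m_water = 0.005  # kg, absorbed water (5% mass uptake, assumed)

f_dry = math.sqrt(k / m_dry) / (2.0 * math.pi)
f_wet = math.sqrt(k / (m_dry + m_water)) / (2.0 * math.pi)
# The added mass lowers the resonance frequency.
```

Since the stiffness is unchanged while the mass grows, the wet resonance frequency is always lower than the dry one, which is the trend a frequency sweep of the swollen bar shows.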

*On the left: Displacement of two bars analyzed with a frequency sweep when one of them is subjected to hygroscopic swelling. On the right: Frequency response of the two bars. The water absorbed during hygroscopic swelling increases the mass and decreases the resonance frequency.*

The total mass, including the water mass uptake, can be calculated in a Mass Properties node under *Definitions*. The mass variable can then be used in postprocessing for comparison with the measured mass of the solid — a convenient way to evaluate the moisture concentration in real life.

*Screenshot of the Mass Properties node.*

Taking hygroscopic swelling into consideration is important in the design of many devices. By analyzing how different materials respond to that effect, you can optimize your design so as to prevent the failure of components and to ensure the device’s intended operation. Here, we have demonstrated how the hygroscopic swelling functionality in COMSOL Multiphysics can be a valuable tool for such an analysis. With the Hygroscopic Swelling feature, you can quantify the effects of hygroscopic swelling in a way that is both accurate and efficient.

- Download the tutorial model: MEMS Pressure Sensor Drift due to Hygroscopic Swelling
- Read more about the new multiphysics coupling feature for hygroscopic swelling on our COMSOL Multiphysics 5.1 release highlights page

Heat sinks are components designed to cool off devices by dissipating heat. They can be used passively or in active cooling systems combined with fans for example. When optimizing heat sink designs, you can turn to simulation for guidance. But what if you could simplify the design process by embedding your model in an app? You can — and the Heat Sink with Fins demo app is here to get you started.

Suppose that you work at a heat sink manufacturing company. For a fast estimation of the cooling performance of the heat sink, you may use well-known correlations based on geometry and flow properties. However, these correlations are only available for simple geometrical configurations such as flat surfaces, cylinders, or spheres. You can always use such a correlation to approach thermal characteristics of a real-life geometry, but this is not very accurate. For more precise results, you can use simulation.

As a simulation expert, you are called upon to test out various heat sink designs. With each design change, you run a new simulation and compare results, which may slow down the manufacturing process. Instead, you could turn to the Application Builder, a groundbreaking way for you to turn your models into apps and enable *others* to run and re-run your simulations to reflect simple parameter tweaks.

Building an app allows you to construct a scientifically accurate simulation environment that colleagues without simulation backgrounds can use when testing out different design iterations. You can decide exactly what inputs the users of the simulation app can enter, as well as what results they will receive. The demo app we will show you here also contains a feature that emails a simulation report to the app user once the results are in. More on that in a second.

The COMSOL Multiphysics simulation software allows you to be creative with many aspects of your application. You can choose how your app will look by changing everything from background images to the entire layout. To walk you through some neat app features that you can implement yourself, we’ll have a look at the Heat Sink with Fins demo app.

When it comes to heat sink design, the most important consideration is how well the heat sink can dissipate heat for a given configuration, including flow properties, geometric layout, and more. One benchmark to test this involves putting the heat sink inside a rectangular channel with insulated walls. Then, both the temperature and pressure at the inlet and outlet of the channel are measured. The amount of power that is required in order for the heat sink base to maintain a particular temperature is also measured. Ultimately, the results of the benchmark tell you how much heat the heat sink dissipates, as well as the amount of pressure loss over the channel.
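The post-processing behind such a benchmark is a simple energy balance over the channel. The sketch below uses assumed measured values; the air properties are approximate standard values:

```python
# Benchmark post-processing: dissipated power from an energy balance over
# the channel. All "measured" values below are assumptions for the sketch.
rho = 1.2           # kg/m^3, air density (approximate)
cp = 1005.0         # J/(kg*K), specific heat of air (approximate)
u_in = 0.5          # m/s, inlet velocity (assumed)
A = 0.01            # m^2, channel cross section (assumed)
T_in, T_out = 293.15, 297.15        # K, inlet/outlet temperatures (assumed)
p_in, p_out = 101330.0, 101325.0    # Pa, inlet/outlet pressures (assumed)

m_dot = rho * u_in * A              # kg/s, mass flow through the channel
Q = m_dot * cp * (T_out - T_in)     # W, heat dissipated by the heat sink
dp = p_in - p_out                   # Pa, pressure loss over the channel
```

These two outputs, dissipated power and pressure loss, are exactly the trade-off the app lets its users explore when changing the fin layout.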

The heat sink demo app uses simulation to analyze benchmark experiments of heat sinks in a user interface (UI) that is easier to use than traditional simulation environments. With a simplified UI, app users can observe how design elements impact the functionality of the heat sink. For example, when they add *a few* more fins to the design, they may see that the amount of dissipated heat increases. But when they add *too many* fins, the results will show that the fins obstruct the flow in the channel and cause a large pressure loss without significantly improving the cooling power — not a good heat sink design, in other words. App users can utilize the knowledge they gain from making these small changes to find out how many fins they should include to meet their design criteria.

You may recognize the heat sink model in our demo app from some of our other tutorials. It is made of aluminum and is modeled in a channel containing air with an air inlet and outlet at either end. Here’s what the parameterized geometry looks like:

*The geometry of the underlying heat sink model used in the demo app.*

The equations and boundary conditions necessary to solve the model will run in the background of the simulation app. The app user can update the simulation by changing the parameters without ever having to consider the underlying equations. That way, they do not need to be experts in setting up and running simulations.

When “appifying” the simulation, we need to make it as easy as possible for someone without simulation expertise to test out various designs and operating conditions that apply to the scenario. With the Application Builder, you can configure the UI however you want. In the demo app shown in this blog post, the app user can change the following design parameters:

- Geometry:
- Heat sink base width, depth, and thickness
- Number of fins
- Fin height and thickness

- Operating conditions:
- Air inlet velocity and temperature
- Heat source temperature

We have programmed the demo app to update the geometry based on user inputs and show a new version of the model to reflect the new design. The temperature and velocity fields are visualized in the graphics window to the right and the dissipated power and pressure loss results are displayed numerically in a section titled *Results* on the left-hand side of the app. The app user can also choose to download the results as a simulation report or have the app email the report automatically when the results are ready.

*A screenshot showing the Heat Sink with Fins demo app.*

A simple change in inputs allows app users to visualize how a change in the number of fins affects dissipated power and pressure loss. Even without a simulation background, users of the app can leverage the results to create the ideal heat sink design.

*How the number of fins affects dissipated power and pressure loss.*

Here, a higher number of fins increased the cooling power of the heat sink. The heat sink designer would be quite happy to see that the heat sink has sufficient cooling power. However, this increase in fins also increased the pressure loss. To find the optimal design, app users can easily test multiple heat sink configurations.

Everything about the UI of our demo app was created for ease-of-use. If you want to go behind the scenes and see how we set up the app, I encourage you to open the heat sink app file.

Thanks to the Application Builder, the possibilities are endless when making your own app. Use this demo app as a stepping stone to creating apps that suit the needs of you and your company. After creating your ideal app, you can make it available to your colleagues (or customers, faculty, and students) via COMSOL Server™.

- Download the Heat Sink with Fins demo app now to take it for a test run
- Watch a video that introduces the Application Builder
- Read an in-depth guide on creating simulation apps (with the Corrugated Circular Horn Antenna demo app as an example)
- See how others are using the Application Builder:

In my previous role as a structural analysis consultant, I sometimes came across the problem of how to report ridiculously high stress peaks in a finite element model to a customer. Experienced analysts know when stress peaks are an expected effect of the modeling and can safely be ignored. However, when a requirement that “the stress must nowhere exceed 70% of the yield stress” has been stated, this may still turn out to be an issue. Equally important is the fact that the small red spots in the color plots cannot always be ignored. Thus, we must have appropriate techniques for interpreting the model results.

Sharp reentrant corners will cause a singularity in the derivatives of the dependent variables for all elliptic partial differential equations. In structural mechanics, this means that the strains can become unbounded since the degrees of freedom are the displacements. Unless limited by the material model, the stresses will also be infinite in such a case.

Stresses are investigated in the majority of structural mechanics analyses. This is why singularities present more of an issue in structural mechanics than in most other physics fields. In heat transfer analyses, for instance, you are much more likely to be interested in the temperature than in the local values of the heat flux, which is where a singularity would become evident.

Let’s have a look at a prototype problem. This problem involves a 2 meters by 1 meter rectangular plate, featuring a square cutout with a side of 0.2 meters, that is subjected to pure tension:

*The plate is constrained along the left edge and has a uniform load along the right edge.*

With two different meshes around the hole, the default plots of the effective stress look completely different. Since the peak stress is twice as high in the model with the finer mesh, most details in the stress field are lost. This can of course be remedied by manually adjusting the range of the plots, but it may hide important details at first glance.

*The same effective stress field in two plots. Both plots are automatically scaled by the mesh-dependent peak stress.*

In fact, the smaller the elements that are used in the corner, the higher the values of stress that will be found. The results will not converge since the “true” solution tends toward an infinite value.

*Stress at the corner as a function of element size (logarithmic horizontal axis).*
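The mesh dependence follows a power law: the corner stress grows roughly as a negative power of the element size, so it diverges rather than converges under refinement. A synthetic illustration, where the exponent 0.45 is an assumed value (the actual exponent depends on the corner angle and material):

```python
import math

# Synthetic corner-stress data diverging as h^(-0.45) with element size h.
# The amplitude and exponent are assumed values for illustration only.
h = [0.04, 0.02, 0.01, 0.005, 0.0025]
stress = [100.0 * hi ** -0.45 for hi in h]

# Slope of log(stress) vs. log(h), estimated from the end points:
slope = (math.log(stress[-1]) - math.log(stress[0])) / (
    math.log(h[-1]) - math.log(h[0])
)
# A negative slope means the "converged" value is infinite: each halving
# of the element size multiplies the peak stress by 2^(-slope).
```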

If we investigate the stress field close to the hole, we will find that the stress peak is very localized. In the figure below, the stress is plotted along a vertical cut line drawn at a distance of 0.05 meters from the hole. At this distance, the stress is virtually unchanged, even though the peak stress at the corner varies by a factor of two.

*Stress variation along a cut line (represented in red). Five different mesh sizes are used.*

In the real world, there are seldom perfectly sharp corners. Thus, you could argue that by using an accurate geometry representation containing all fillets, it is possible to avoid singularities. While true, this comes with a price tag. If very small geometrical details must be resolved by the mesh, the model grows enormously in size (especially the case in 3D). Even when a perfect CAD geometry is available, it is common practice to *defeature* the geometry to remove small details that are not important within the scope of the analysis. Therefore, in many cases, we actually deliberately introduce sharp corners at the preprocessing stage.

There are, however, some drawbacks to keeping the sharp corner:

- If the material model is nonlinear, there may be numerical problems at the singularity. For example, the strain rate predicted by a creep model is often proportional to a high power of stress. The high stress at the singularity (a value determined only by the mesh) raised to a power of five may result in strain rates so high that the time stepping is forced to be on the order of milliseconds, when you actually want to study an event taking place over months. If you still want to keep the sharp corner, the remedy here is to enclose the singularity in a small elastic domain.
- Adaptive meshing, error estimates, and the like can fail since the singularity will dominate over the rest of the solution. Exclude the corner from any such procedures.
- When running an optimization where stresses are part of the problem formulation, the singularity will lead to solutions that are optimal only in terms of reducing the amplitude of the unphysical peak stress. In the Multistudy Optimization of a Bracket tutorial, the region where the bracket is bolted is excluded from the search for a maximum stress.
- As previously noted, the high stress peaks tend to obscure more interesting features in the solution, both visually and psychologically.

Physically, if the corner is very sharp, the material will be damaged by the high strains. A brittle material may crack; a ductile material may yield. While it may sound alarming, such damage will only cause a local redistribution of the stresses in most cases. As seen from the perspective of the surrounding structure, the effect is no more dramatic than that of somewhat changing the fillet radius. High, very localized stresses will only be a true problem if the loading is cyclic, which creates a risk for fatigue.

In a building, nobody is concerned that the holes for windows and doors are rectangular with sharp corners. But, in an airliner, you will find that the windows are smoothly rounded since the variation between the pressure in the cabin and the pressure outside will provide a cyclic stress history.

*Left: A rectangular window featuring sharp corners. Image by Jose Mario Pires. Licensed under CC BY-SA 4.0 via Wikimedia Commons. Right: A window with smoothly rounded corners. Image by Orin Zebest. Licensed under CC BY-SA 2.0 via Wikimedia Commons.*

This is in fact recognized by many design standards, where high local stresses are allowed as long as the loads are static. The local corner stresses will not in any way affect the load-bearing capacity of the structure. Using this type of approach does rely on a systematic way of classifying the stress fields. Such methods are, for example, described in the *ASME Boiler & Pressure Vessel Code*.

For cyclic loads, on the other hand, it is important to obtain very accurate stress values. The fatigue life depends strongly on the stress amplitude. In this case, an accurate representation of the fillet is necessary, not only geometrically but also in terms of mesh resolution. If the model becomes too large to handle, you can use *submodeling*, an approach that is described in detail in this blog post.

*The detailed submodel on the right is driven by the results from the global analysis.*

Tip: To further explore the submodeling technique, download the Submodeling Analysis of a Shaft tutorial from our Application Gallery.

A force applied to a single point on a solid will locally give infinite stresses. This is the classical *Boussinesq-Cerruti problem* in the theory of elasticity, where the stresses vary as the inverse of the distance from the loaded point.
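The classical Boussinesq solution gives the vertical stress under a point load P on an elastic half-space as \sigma_z = 3Pz^3/(2\pi R^5), where R is the distance from the loaded point. A short sketch of this formula:

```python
import math

def sigma_z(P, r, z):
    """Vertical stress at radial distance r and depth z below a point
    load P on an elastic half-space (Boussinesq solution)."""
    R = math.hypot(r, z)
    return 3.0 * P * z ** 3 / (2.0 * math.pi * R ** 5)

# Directly below the load (r = 0) this reduces to 3P / (2 pi z^2), so
# halving the depth quadruples the stress -- it is unbounded as z -> 0.
```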

In the real world, point loads do not exist. The force is always distributed over a certain area. From the finite element analysis perspective, the question is whether or not it is worth the effort to resolve this small region. The answer to that question lies in the *Saint-Venant’s principle*, which states that all statically equivalent load distributions give the same result at a distance that is large enough when compared to the size of the loaded area.

Thus, when detailed results are not important within a distance of, say, three times the size of the loaded area, it actually does not matter how you apply the loads, so long as the resulting force and moment are correct. Just as in the case with the corner singularity, you may still need to avoid the effects of singular stresses. Note that line loads will have the same effect as a point load in causing local infinite stresses.

It is worth noting that a point load applied to a beam element or perpendicular to a shell will *not* induce a singularity. The bending of these structural elements is governed by equations other than those of solid mechanics. However, a point load applied in the plane of a shell *will* cause a singularity.

If we think of a constraint in terms of its capability to apply a reaction force, it is evident that the same conclusions drawn for loads also apply to constraints: a constraint applied to a point, for example, causes a singularity. But that is not all. Consider the seemingly symmetric problem below. Here, we have a plate with a constant tensile load on one side and corresponding roller conditions on the other side.

*A square plate with one half of the vertical boundaries constrained and loaded.*

When looking at the stress distribution, it is apparent that the end of the roller condition introduces a singularity that the sudden change in the load does not. A general observation is that the end of a constraint has an effect that is similar to that of a sharp corner.

*Horizontal stress distribution.*

An infinitely stiff environment supporting the structure does not exist in reality. The analyst is again left with a choice: Can I live with the little red spot, or do I need to pay more attention to what is outside of my structure?

If the singularity caused by the boundary condition is not acceptable, you could consider the following approaches:

- Extend the model so that any singularity caused by the boundary condition is moved outside of the area of interest.
- Use a softer boundary condition by applying a Spring Foundation condition, for instance.
- Use infinite elements, which offer a cheap method for extending the computational domain. Learn more with this tutorial.

Situations similar to the one mentioned above are inevitable in many kinds of transitions. An example of such a transition is connecting a rigid domain to a flexible domain.

The art of analyzing welds is so important and complex that it warrants its own blog post. Here, we will only briefly touch on this subject.

Welded structures often consist of thin plates, so it is natural to use shell models in this context. Let’s have a look at the model below. In this example, a stress concentration is evident in the area where the smaller plate is welded to the wide plate.

*Stresses in a simple shell model of two plates welded together.*

The geometry and loads are symmetric with respect to the center of the geometry. The mesh in this model, however, is designed so that it is much finer at one end of the weld. A graph of the stress along the weld line reveals a singularity in the stress field in both plates.

*A stress plot identifying a singularity.*

For many welded structures, such as ship hulls, cargo cranes, and truck frames, dimensioning against fatigue is important. Refining the modeling process by using a solid model is seldom the answer here. The local geometry and quality of a weld are rarely well defined, unless the weld has been ground and X-rayed. The local geometry will differ along the weld and between corresponding welds on two items that nominally should be identical.

When analyzing welds, the most common approach is to average the stress along the weld line or along a parallel line a certain distance away. The cut lines in COMSOL Multiphysics are particularly helpful here. The local coordinate systems also come in handy since stress components parallel and normal to the weld need to be treated differently. These averaged stresses are then compared with handbook values, which are available for a number of weld configurations and weld qualities. To learn more, see *Eurocode 3: Design of steel structures — Part 1-9: Fatigue*.
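The averaging idea can be illustrated outside of any particular software. A minimal sketch, assuming stress values sampled at increasing positions along the weld line (the sample data below is hypothetical), is a length-weighted trapezoidal average:

```python
def averaged_stress(positions, stresses):
    # Length-weighted (trapezoidal) average of a stress component sampled
    # at increasing positions along a weld line, in the spirit of a
    # cut-line average.
    total, length = 0.0, 0.0
    for i in range(len(positions) - 1):
        dx = positions[i + 1] - positions[i]
        total += 0.5 * (stresses[i] + stresses[i + 1]) * dx
        length += dx
    return total / length

# A singular spike at the end of the weld barely moves the average:
x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # position along the weld (m)
s = [100.0, 105.0, 110.0, 108.0, 120.0, 400.0]  # stress samples (MPa)
print(averaged_stress(x, s))
```

Even though the last sample spikes to 400 MPa near the singular end, the averaged value is about 139 MPa, which is why such averages are robust against mesh-dependent peak stresses.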

The worst conceivable geometrical singularity is the one caused by a crack. A crack can be seen as the limiting case of a re-entrant corner, one whose opening angle has shrunk to zero, so many aspects of the corner singularity are also applicable here. When a crack is present in a finite element model, it is typically an area of focus within the study.

*The stress field around a crack tip, with the deformation scaled.*

The stress field around the crack tip is known from analytical solutions, at least for linear elasticity and plasticity under some assumptions. Computing the stress field through finite element analysis, however, can be difficult due to the singularity. Fortunately, it is usually not necessary to study the details at the crack tip. When determining the stress intensity factor, for example, you can use either the *J-integral* or *energy release rate* approach. These methods make use of global quantities far from the crack tip, so that the details at the singularity become less important.
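As a small numerical illustration of these fracture-mechanics quantities, consider the textbook configuration of a center crack in an infinite plate under remote tension (a standard example, not the model discussed above). The stress intensity factor, the singular crack-tip field, and the energy release rate are then all given by simple closed-form expressions:

```python
import math

def k1_center_crack(sigma, a):
    # K_I for a center crack of half-length a in an infinite plate
    # under remote tension sigma: K_I = sigma * sqrt(pi * a)
    return sigma * math.sqrt(math.pi * a)

def sigma_yy_near_tip(K1, r):
    # Leading-order field directly ahead of the tip (theta = 0):
    # sigma_yy ~ K_I / sqrt(2*pi*r), singular as r -> 0
    return K1 / math.sqrt(2.0 * math.pi * r)

def energy_release_rate(K1, E):
    # Plane-stress relation G = K_I^2 / E; under plane strain,
    # E is replaced by E / (1 - nu^2).
    return K1**2 / E

# 100 MPa remote tension, 10 mm half-crack:
K1 = k1_center_crack(sigma=100e6, a=0.01)
print(K1)  # about 17.7 MPa*sqrt(m)
```

Because the 1/sqrt(r) field is known analytically, global methods such as the J-integral can recover K_I without resolving the singular region itself.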

Tip: Looking to explore the use of the J-integral approach in further detail? Consult the Single Edge Crack tutorial in our Application Gallery.

Singularities appear in many finite element models for a number of different reasons. As long as you understand how to interpret the results and how to circumvent some of the consequences, the presence of singularities should not be an issue in your modeling. In fact, many industrial-size models require the intentional use of singularities. Keeping down model size and analysis time often necessitates simplification of geometrical details, loadings, and boundary conditions in a way that introduces singularities.

Simulating fatigue offers valuable insight into how stress can affect the longevity of a structure and its components. This can help identify potential design problems and pave the way for the development of a safer structure. Arriving at this solution, however, often requires running several simulations to test different scenarios. Our Frame Fatigue Life demo app demonstrates how simulation apps can save you time and energy in evaluating the impact of fatigue.

In the real world, structures are regularly subjected to external stresses. The repetition of these stresses can cause structures to weaken over time, which can result in their failure. Simulation helps to identify weaknesses within designs *before* failure occurs by analyzing the impact of applied loads.

Oftentimes, many simulations must be run before reaching an optimized design. This is especially true when modeling fatigue, as small design changes have a large impact on the lifetime of the structure. Those without a background in simulation will rely on you to run new tests as modifications are made to the structure’s existing design. But what if there was a way to shift some of that workload into the hands of other individuals involved in the design and manufacturing process?

Using the Application Builder, you can turn your models into easy-to-use simulation apps. This tool will enable your colleagues and customers to run their own simulations on the app, which you can customize to include only the specific input parameters that are relevant to your design. Take our Frame Fatigue Life demo app, for instance, which can be used to evaluate the fatigue life in a frame with a cutout. Let’s explore this demo app and its features in greater detail.

The Frame Fatigue Life demo app is based on a model of a thin-walled frame with a cutout. In this example, a random moment load is applied along the three principal directions on the right end of the frame; the left end of the frame remains fixed.

*The geometry of the frame.*

Within the frame, the cutout represents the *critical part*, as this is where fatigue can occur. The fatigue lifetime is significantly reduced if the cutout is a square, since sharp corners lead to increased stress in the surrounding area. The focus here is to predict fatigue life and stress distribution at the cutout, along with the relative usage factor at the critical part.

The Frame Fatigue Life demo app takes all of the physics and functionality behind the model and makes them available in a user-friendly interface. You can use this example as a foundation for creating your own fatigue life app, configuring its layout to meet your specific needs.

When building your own app, you have the ability to create a user interface (UI) that is tailored to your particular design. Depending on the focus of your analysis, you can decide which parameters to make available for modification within your app. As the simulation expert, you can program any sequence of model actions based on the app user’s click of a button, helping to reduce human error that can occur when several actions are required to make a change to the design.

Our Frame Fatigue Life demo app features input parameters that allow changes to the frame’s geometry. For further customization, user-defined load histories and materials can also be imported. Say an app user decides to test a different material or load history. In this case, the new input file will be automatically assigned to both fatigue steps after it has been imported. This reduces the level of human error that may occur if the changes are assigned to only one fatigue step rather than both of the steps.

When the app user clicks the *Solve* button, the necessary checks are applied to the inputs. If the inputs are not correctly specified, a message explains the mismatch, giving app users the opportunity to correct their mistakes. If so many changes have been made that it is difficult to identify where the error occurred, the *Reset to Default* button can be used to revert the model to its original state, offering a clearer picture of suitable parameter values and thus helping to identify the inaccuracy.
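The check-before-solve and reset-to-default pattern described above can be sketched in a few lines of ordinary code. The parameter names below are hypothetical placeholders, not the actual inputs of the app:

```python
# Hypothetical parameter names for illustration only.
DEFAULTS = {"width": 1.0, "height": 0.5, "cutout_side": 0.1}

def check_inputs(params):
    # Collect a human-readable message for every invalid input, mirroring
    # the checks an app runs before solving.
    messages = []
    if params["width"] <= 0.0 or params["height"] <= 0.0:
        messages.append("Frame dimensions must be positive.")
    if params["cutout_side"] >= min(params["width"], params["height"]):
        messages.append("The cutout must fit inside the frame.")
    return messages

def reset_to_default(params):
    # Revert all inputs at once, as a Reset to Default button would.
    params.clear()
    params.update(DEFAULTS)

user_inputs = {"width": 1.0, "height": 0.5, "cutout_side": 0.6}
print(check_inputs(user_inputs))   # the cutout does not fit
reset_to_default(user_inputs)
print(check_inputs(user_inputs))   # no messages; safe to solve
```

Centralizing the checks in one place means every solve request is validated the same way, regardless of how many inputs the user has changed.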

Once the proper inputs have been entered, the app solves for fatigue life in the designated frame. To achieve this, three study steps must be computed. The first study step computes the structural response to the applied moments; the following two study steps evaluate fatigue on each side of the shell element. When working with the Model Builder, this would mean having to solve all three studies manually. In the simulation app, however, all three steps are performed automatically once the *Solve* button is selected due to a scheme created with the Method Editor.

The simulation results from the app can be compiled into a report by clicking the *Create Report* button. The report for the demo app includes the predicted stress distribution, fatigue life, and relative fatigue usage at the critical part, as well as information on the original geometry. The app’s documentation file can be found under *Open Documentation*.

The results of the simulation show both sides of the frame, with fatigue life computed on the critical side. The fatigue usage factor on either side of the frame is displayed. At the critical point, the stress cycle distribution and relative usage factor are also shown.

*Simulation results for the fatigue life app.*

Apps mark a revolutionary step forward in simulation, hiding the complexity of models in a simplified and intuitive interface that can be used by individuals throughout an organization. Extending the power of simulation not only helps to lighten your workload, giving time for additional simulation projects, but it also promotes a more integrated approach to product design and development. We encourage you to use our Frame Fatigue Life demo app as a guide to creating your own app, establishing a more efficient way of designing safe structures.

- Download the demo app: Frame Fatigue Life
- See how easy it is to create your own apps with this video: Introducing the Application Builder in COMSOL Multiphysics

Truck-mounted cranes are designed to handle heavy loads. With this in mind, manufacturers and engineers look to optimize the machine’s payload, or *carrying*, capacity. Simulation apps can help expedite the optimization process by extending simulation capabilities into the hands of those who are not experts in simulation through a customized and intuitive interface. Our Truck Mounted Crane Analyzer demo app shows the benefits of this approach.

With their mechanical advantage in providing lifting power as well as mobility, truck-mounted cranes are valued in numerous industries, from construction to electrical line maintenance. When designing these machines, an important consideration is the payload capacity of the crane. In other words, how much weight is the crane able to lift when in operation? Addressing this question involves taking into account the orientation and extension of the crane as well as the capacity of the crane’s hydraulic cylinders, which control the machine’s motions.

The Truck Mounted Crane Analyzer demo app, released with COMSOL Multiphysics® software version 5.1, provides answers — all within an easy-to-use, customized format. Designers who are not experts in setting up simulations need a simplified simulation tool in order to modify the configuration of the crane and the positioning of the payload as well as the capacity of the hydraulic cylinders. By turning your models into apps, you will provide them with the easy-to-use tool that they need to run their own simulations. Let’s take a closer look at this example.

The truck-mounted crane demo app is based on the Truck Mounted Crane tutorial model, which we have previously blogged about. The crane features five hydraulic cylinders: the inner boom cylinder, the outer boom cylinder, and three extension cylinders. The inner boom cylinder is used to rotate the inner boom with respect to the base, while the outer boom cylinder is used to rotate the outer boom with respect to the inner boom. Meanwhile, the extension cylinders are designed to modify the length of the three extensions. The different components of the crane’s geometry are illustrated below.

Parts | Color
---|---
Inner boom | Cyan
Outer boom | Green
Inner boom cylinder | Red
Outer boom cylinder | Blue
Extension cylinders | Magenta

*The geometry of the crane.*

When building your own app, you can adjust what parameters your app user should be able to modify. In our example case, we are creating an app for someone who wants to improve the payload capacity of the truck-mounted crane. Therefore, the demo app has been customized to allow modifications in the capacity of the hydraulic cylinders. In *your* app, you can include the elements that are important to your particular design scenario.

For the inner boom and outer boom cylinders in the crane, the capacity can range between 0.1 ton and 1000 ton (where 1 ton = 1000 kg); the capacity of the extension cylinders, on the other hand, can range between 0.1 ton and 100 ton. These parameters are listed under the *Capacity of Hydraulic Cylinders* section in the demo app, as shown in the screenshot below.

The intuitive user interface (UI) also includes options for changing the inner boom's angle to the horizontal, the angle between the booms, and the total extension length. Located in the *Orientation and Extension* section, these parameters control the crane's configuration as well as the payload's position. To visualize the crane's new configuration in the graphics window, the app user can click the *Update* button in the ribbon. Note that the weight of the crane is fixed at 6.48 ton in this application.

*The app’s user interface.*

After modifying the input parameters, the app user can click the *Compute* button in the ribbon to start the simulation. On a typical desktop computer, computing the crane's payload capacity with the simulation app takes only about 25 seconds.

Once the computation is complete, the payload capacity and the hydraulic cylinder usage results will be displayed in the *Results* section, with the updated input parameters reflected in the position of the linkages in the graphics window in the *Configuration* section. The *Results* section also includes the percent usage of each hydraulic cylinder, highlighting the limiting cylinder (i.e., the cylinder with a usage of 100%).

For the specified configuration of the crane, it is possible that the specified capacities of some of the hydraulic cylinders will not be sufficient to lift even the weight of the crane itself. If this is the case, a warning message will appear, along with the minimum capacity required for each hydraulic cylinder to lift the crane's weight. The minimum required capacity will also be highlighted if it exceeds the cylinder's specified capacity.
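The usage-factor and limiting-cylinder logic is easy to illustrate in isolation. All cylinder names and numbers below are hypothetical and chosen only to show the bookkeeping, not values from the app:

```python
def usage_factors(required, capacity):
    # Percent usage of each cylinder at the computed payload.
    return {name: 100.0 * required[name] / capacity[name] for name in required}

def limiting_cylinder(usage):
    # The cylinder at (or closest to) 100% usage limits the payload.
    return max(usage, key=usage.get)

def insufficient_cylinders(required, capacity):
    # Cylinders whose specified capacity is too low; the returned value
    # is the minimum capacity that would have to be specified instead.
    return {n: required[n] for n in required if required[n] > capacity[n]}

# Hypothetical loads and capacities, in ton:
required = {"inner boom": 95.0, "outer boom": 60.0, "extension": 8.0}
capacity = {"inner boom": 95.0, "outer boom": 100.0, "extension": 10.0}

usage = usage_factors(required, capacity)
print(limiting_cylinder(usage))  # "inner boom", at 100% usage
print(insufficient_cylinders(required, {"inner boom": 80.0, "outer boom": 100.0, "extension": 10.0}))
```

With the second, weaker capacity set, the inner boom cylinder cannot supply the required 95 ton, which is exactly the situation in which the app raises its warning and reports the minimum required capacity.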

Finally, by clicking the *Create Report* button, a Microsoft® Word document can be generated that compiles a report of the app. This report captures the current state of the application, noting modifications to the input parameters that you have decided to make available in your application and the simulation results.

By making simulation capabilities available to people without simulation expertise, simulation apps offer a more integrated approach to product design and development. The revolutionary Application Builder in COMSOL Multiphysics expands the scope of simulation by making the physics and functionality of your model available in a user-friendly format that can be tailored to design needs. Our truck-mounted crane demo app emphasizes the benefits of making your simulations accessible to a wider audience, offering a more efficient method for developing a high-performance machine.

- Download the demo app: Truck Mounted Crane Analyzer
- Learn how to create a simulation app (Corrugated Circular Horn Antenna Simulator example)
- Explore our other demo apps and additional updates to the Application Builder in COMSOL Multiphysics version 5.1 on our Release Highlights page

*Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries.*