When materials, products, or structures fail or do not perform as intended, the result can be property damage or personal injury. Forensic engineering addresses the causes of these types of failures, with tactics ranging from collecting evidence to performing experiments. The findings from the investigations are then typically shared with a judicial forum, such as a trial or arbitration, and used as part of testimony.

In any forensic engineering case, there are many possibilities and uncertainties to consider. Physically testing every situation is a process that can be both time consuming and costly. Tools like COMSOL Multiphysics provide an easier approach to studying these cases and checking the validity of various scenarios. As Stuart Brown, managing partner at Veryst Engineering, noted at the COMSOL Conference in Boston: Simulation enables you to answer “what-if” questions prevalent in forensic engineering.

*Stuart Brown of Veryst Engineering speaks at the COMSOL Conference 2015 Boston.*

In an elevator brake failure analysis case, a team at Veryst Engineering was presented with a number of theories pertaining to the reason for the structure’s failure. Without access to information such as the temperature in the elevator room or the exact number of people in the elevator at any point in time, accounting for the cause of this accident was challenging. Simulation software proved to be a powerful tool in generating answers.

The investigation was first launched in 2007 after an elevator brake in Japan failed, killing a passenger. Solenoid deterioration and the resulting brake lining wear were to blame. Prosecutors claimed that if this failure happened slowly, it should have been noticed and repaired. As a result, criminal charges were filed against multiple maintenance firms who had managed the elevator at different points in time. The question then became: How *quickly* did the failure occur?

*Licenciado Gustavo Díaz Ordaz International Airport (2014) – 03” by Another Believer – Own work. Licensed under CC BY-SA 3.0 via Commons.*

To answer this question, engineers at Veryst turned to simulation to analyze how different factors may have contributed to the failure of the elevator brake. COMSOL Multiphysics enabled the team to easily combine multiple physics and data sources from accident reports and lab experiments all in one platform. They then used this information to accurately model everything from individual brake components to the system as a whole.

Let’s take a closer look at their process for investigating a number of theories in this case.

The first point of focus was evaluating the validity of slow failure theories for the solenoid. One of these theories was that resistive heating caused thermal expansion and contraction in the solenoid. This resulted in high stresses, causing the wires in the solenoid coil to gradually crack. Another theory was that the electromagnetic field produced stresses in the coil, causing it to fail over time. By performing a coupled thermomechanical stress analysis and a coupled electromechanical analysis, respectively, the team at Veryst was able to rule out both of these theories as potential causes of failure.

After addressing the initial theories, the focus was then shifted to analyzing rapid brake failure scenarios. One claim was that a manufacturing variation in the solenoid led to its failure. In a normal solenoid, there should be space between the coils.

But what happens when a solenoid coil is flawed and wires cross over? In this case, the crossed wires can create a pressure point that degrades the insulation between them. The result is a short that heats up locally at first and then quickly spreads, shorting and heating adjacent coils and leading to rapid failure.

To better understand the possibility of such a rapid failure theory in the elevator brake case, Veryst engineers examined an argument put forward by the prosecution that the brake lining wear was consistent with a slow failure. Veryst designed a model in COMSOL Multiphysics to calculate the local wear on each brake lining. Extensive wear experiments led by investigators correlated the rates of overall lining wear to the temperature of the brake drum.

In addition, a classical model of wear linked the local wear rate to the local contact pressure. Veryst combined the experimental results with the wear law to develop a pressure- and temperature-dependent wear model. This model was implemented with the Structural Mechanics Module as well as user-defined differential equations solved on the surfaces of the brake linings.

*Left: Geometry of the elevator brake. Right: Analyzing brake lining wear in COMSOL Multiphysics. Copyright © Veryst Engineering.*
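
A pressure- and temperature-dependent wear law of the kind described above can be sketched with a classical Archard-type model. The coefficients below (`k0`, `beta`, `T_ref`) are hypothetical placeholders, not Veryst's calibrated values:

```python
import numpy as np

# Illustrative Archard-type wear law with a temperature-dependent rate, in the
# spirit of the pressure- and temperature-dependent model described above.
# The coefficients k0, beta, and T_ref are hypothetical, not Veryst's values.

def wear_rate(p, v, T, k0=1e-13, T_ref=293.15, beta=0.01):
    """Local wear depth rate dh/dt [m/s] from contact pressure p [Pa],
    sliding speed v [m/s], and drum temperature T [K]."""
    return k0 * p * v * np.exp(beta * (T - T_ref))

# A hot drum wears the lining dramatically faster than a cool one:
cool = wear_rate(p=2e5, v=1.0, T=300.0)
hot = wear_rate(p=2e5, v=1.0, T=500.0)
print(hot / cool)   # e^(0.01 * 200) ≈ 7.4
```

The exponential temperature dependence is one simple way to reproduce the key finding: a damaged solenoid that overheats the drum can multiply the wear rate, compressing years of expected wear into days.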

The results from the simulation analysis suggested that a damaged solenoid could produce high drum temperatures, resulting in rapid wear of the brake lining. Such wear, as the engineers found, could have occurred in a very short time, even a matter of days. Verified against measured data, these findings provided a self-consistent and scientifically sound explanation for a rapid failure.

*Comparing measured data with simulation results. Copyright © Veryst Engineering.*

Through their simulations, the team at Veryst challenged the slow failure theories and validated the possibility of a rapid failure scenario in the case of the elevator brake. Before the case could close, the team faced another hurdle: They would need to present their simulation findings to a nontechnical audience during the trial. To overcome the hurdle, Veryst leveraged the postprocessing and visualization tools in COMSOL Multiphysics to create easy-to-understand representations of their results. In the end, the judicial tribunal chose to acquit Veryst’s client, finding that they were not negligent in their management of the elevator.

Learn more about Veryst’s use of COMSOL Multiphysics in this failure analysis case on page 34 of *COMSOL News* 2015.

Think about the first architects who designed a bridge above water. The design process likely included several trials and subsequent failures before they could safely allow people to cross the river. COMSOL Multiphysics and the Optimization Module would have helped make this process much simpler, if they had computers at the time, of course. Before we start to discuss building and optimizing bridges, let’s first identify the best design for a simple beam with the help of topology optimization.

In our structural steel beam example, both ends of the beam are on rollers, with an edge load acting on the top of the middle part. The beam’s dimensions are 6 m x 1 m x 0.5 m. In this case, we stay in the linear elastic domain and, due to the dimensions, we can use a 2D plane stress formulation. Note that there is a symmetry axis at x = 3 m.

*Geometry of the beam with loads and constraints.*

Using the beam geometry depicted above, we want to find the best compromise between the amount of material used and the stiffness of the beam. To do so, we need to express the problem in the formal mathematical language of optimization. Every optimization problem consists of finding the best design vector \alpha, such that the objective function F(\alpha) is minimal. Mathematically, this is written as \displaystyle \min_{\alpha} F(\alpha).

The design vector choice defines the type of optimization problem that is being solved:

- If \alpha is a set of parameters driving the geometry (e.g., length or height), we are talking about *parametric optimization*.
- If \alpha controls the exterior curves of the geometry, we are talking about *shape optimization*.
- If \alpha is a function determining whether a certain point of the geometry is void or solid, we are talking about *topology optimization*.

Topology optimization is applied when you have no idea of the best design structure. On the one hand, this method is more flexible than others because any shape can be obtained as a result. On the other hand, the result is not always directly feasible. As such, topology optimization is often used in the initial phase, providing guidelines for future design schemes.

In practice, we define an artificial density function \rho_{design}(X), which is between 0 and 1 for each point X = \lbrace x,y \rbrace of the geometry. For a structural mechanics simulation, this function is used to build a penalized Young’s modulus:

E(X)= \rho_{design} (X) E_0

where E_0 is the true Young’s modulus. Thus, \rho_{design}= 0 corresponds to a void part and \rho_{design}= 1 corresponds to a solid part.

As mentioned before, for the objective function we want to maximize the stiffness of the beam. For structural mechanics problems, maximizing the stiffness is the same as minimizing the compliance. In terms of energy, it is also equivalent to minimizing the total strain energy, defined as:

\displaystyle Ws_{total}=\int_\Omega Ws \ d\Omega=\int_\Omega \frac{1}{2} \sigma: \varepsilon\ d\Omega

Our topology optimization problem is thus written as:

\min_{\rho_{design}}\int_\Omega \frac{1}{2} \sigma (\rho_{design}): \varepsilon\ d\Omega

Now that our optimization problem has been defined, we can set it up in COMSOL Multiphysics. In this blog post, we will not detail the solid mechanics portion of our simulation. There are, however, several tutorials from our Structural Mechanics Module that help showcase this element.

When adding the *Optimization* physics interface, it is possible to define a *Control Variable Field* on a domain. As a first discretization for \rho_{design}, we can select a constant element order, meaning that \rho_{design} takes a single value within each mesh element.

After this step is completed, a new Young’s modulus can be defined for the structural mechanics simulation, such as E(X)=\rho_{design} E_0.

As referenced above, the objective function is an integration over the domain. In the *Optimization* interface, we select *Integral Objective*. The elastic strain energy density is a predefined variable named *solid.Ws*. Thus, the objective can be easily defined as \int_\Omega Ws \ d\Omega.

Our discussion today will not focus on how optimization works in practice. In short, the optimization solver begins with an initial guess and iterates on the design vector until the objective function has reached its minimum.

If we run our optimization problem, we get the results shown below.

*Results from the initial test.*

To maximize the stiffness, the solution is trivial: The optimal design simply keeps the full amount of the original material!

After this initial test, we can conclude that a mass constraint is necessary if we want to make the optimization algorithm select a design. With a constraint of 50 percent, this could be written as:

\int_\Omega \rho \ d\Omega \leq 0.5M_0 \Leftrightarrow \int_\Omega \rho_{design} \ d\Omega \leq 0.5V_0

In COMSOL Multiphysics, a mass constraint can be included by adding an *Integral Inequality Constraint*. Additionally, the initial value for \rho_{design} needs to be set to 0.5 in order to respect this constraint at the initial state.

Let’s have a look at the results from this new problem, which are illustrated in the following animation.

*Results with the addition of a mass constraint.*

While this result is better, a problem remains: We have many areas with intermediate values for \rho_{design}. For the design, we only need to know if a given area is void or not. In order to get mostly 1 or 0 for the \rho_{design}, the intermediate values must be penalized. To do so, we can add an exponent p in the penalized Young’s modulus expression:

E(X)=(\rho_{design})^p E_0

In practice, p is taken between 3 and 5. For instance, if p = 5 and \rho_{design}= 0.5, the penalized Young’s modulus will be 0.03125 E_0, while the contribution to the mass constraint will still be 0.5. As such, the optimization algorithm will tend to drive the design vector toward 0 or 1.
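
The effect of the penalization exponent can be checked with a few lines of Python; the numbers match the example above:

```python
# Effect of the penalization exponent p on an intermediate density.
# The mass cost of rho = 0.5 stays 0.5 regardless of p, but its stiffness
# contribution collapses as p grows, pushing the optimizer toward 0/1 designs.

E0 = 1.0
rho = 0.5
for p in (1, 3, 5):
    print(p, rho**p * E0)   # p = 5 gives 0.03125*E0
```

An intermediate density thus pays full mass "price" for only a tiny fraction of the stiffness, which is exactly why the optimizer abandons gray regions.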

With our new penalized Young’s modulus, we get the following result.

*Results with the new penalized Young’s modulus.*

A beam design has started to emerge! There is, however, a problematic checkerboard design, one that seems to be highly dependent upon the chosen mesh. In order to avoid the checkerboard design, we need to control the variations of \rho_{design} in space. One way to estimate variations of a variable field is to compute its derivative norm integrated on the whole domain:

\int_\Omega |\nabla \rho_{design}|^2 \ d\Omega

A new question arises: How can we minimize both the variation of \rho_{design} and the total strain energy?

Since a scalar objective function is necessary, these objectives must be combined. We can think about adding them, but first, the two expressions need to be scaled to get values around 1. For the stiffness objective, we simply divide by Ws0, the value of the total strain energy when \rho_{design} is constant. For the regularization term, we can take the scaling factor \frac{h_0 h_{max}}{A}, where h_{max} is the maximum mesh size, h_0 is the expected size of details in the solution, and A is the area of the design space. Our final optimization problem is now written as:

\min_{\rho_{design}} \ q\int_\Omega \frac{Ws}{Ws0} \ d\Omega + (1-q)\frac{h_0 h_{max}}{A}\int_\Omega |\nabla \rho_{design}|^2 \ d\Omega

\text{s.t.} \quad \int_\Omega \rho_{design} \ d\Omega \leq 0.5V_0

where the factor q controls the regularization weight.
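
As an illustration, the combined, scaled objective could be evaluated on a uniform grid outside of COMSOL as follows (the field values and parameters below are hypothetical, not taken from the actual model):

```python
import numpy as np

# Sketch of evaluating the combined, scaled objective on a uniform grid.
# Ws is the strain energy density field, rho the design density field;
# Ws0 is the total strain energy of the uniform design.

def combined_objective(Ws, rho, dx, dy, Ws0, h0, q=0.5):
    A = Ws.size * dx * dy                   # area of the design space
    h_max = max(dx, dy)                     # maximum mesh size
    gy, gx = np.gradient(rho, dy, dx)       # spatial derivatives of rho
    stiffness = np.sum(Ws) * dx * dy / Ws0  # scaled compliance term
    regularization = (h0 * h_max / A) * np.sum(gx**2 + gy**2) * dx * dy
    return q * stiffness + (1 - q) * regularization

Ws = np.ones((10, 10))
rho = np.full((10, 10), 0.5)
print(combined_objective(Ws, rho, dx=0.1, dy=0.1, Ws0=1.0, h0=0.05))
# a constant rho incurs no regularization penalty
```

A spatially varying \rho_{design} adds a positive regularization contribution, so checkerboard-like oscillations raise the objective and are suppressed.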

Finally, the discretization of \rho_{design} needs to be changed to Lagrange linear elements to enable the computation of its derivative.

By solving this final problem, we obtain results that offer helpful insight as to the best design structure for the beam.

*Results with regularization.*

Such a design scheme can be seen at different scales in the real world, as illustrated in the bridge below.

*A Warren-type truss bridge. Image in the public domain, via Wikimedia Commons.*

Now that we have set up our topology optimization method, let’s move on to a slightly more complicated design space. We want to answer the question of how to design a bridge above water. To do so, a road zone in the geometry must be defined where the Young’s modulus is not penalized.

*Design space for a through-arch bridge.*

After a few iterations, we obtain an impressive result for the through-arch bridge. Such a result could provide architects with a solid understanding of the design that should be used for the bridge.

*Topology optimization results for a through-arch bridge.*

While the mathematical optimization algorithm had no guidelines on the particular design scheme, the result depicted above likely brings a real bridge design to mind. The Bayonne Bridge, shown below, is just one example among many others.

*The Bayonne Bridge. Image in the public domain, via Wikimedia Commons.*

It is important to note that this topology optimization method can be used in the exact same way for 3D cases. Applying the same bridge design question, the animation below shows a test in 3D for the design of a deck arch bridge.

*3D topology optimization for a deck arch bridge.*

Here, we have described the basics of using the topology optimization method for a structural mechanics analysis. To implement this method on your own, you can download the Topology Optimization of an MBB Beam tutorial from our Application Gallery.

While topology optimization may have initially been built for a mechanical design, the penalization method can also be applied to a large range of physics-based analyses in COMSOL Multiphysics. Our Minimizing the Flow Velocity in a Microchannel tutorial, for instance, provides an example of flow optimization.


In mechanical vibration theory, *vibration nodes* are defined as the points that never move when a wave passes through them. The impact of a ball hitting a racket creates a wave, and the racket, in turn, begins to oscillate and vibrate. By looking at the mode shapes of the racket (held by a player at the end of the grip), we can identify points where the vibration motion is zero (i.e., where the magnitude is zero at any time during vibration). Here are the first three mode shapes of a tennis racket computed with COMSOL Multiphysics:

*The first three mode shapes of a tennis racket, from left to right and top to bottom. The fundamental mode is at 15 Hz, the second mode is at 140 Hz, and the third mode is at 405 Hz.*

As illustrated above, many different points feature this behavior. So why am I talking as if there is only one vibration node? In reality, there is an infinite number of vibration nodes: Upon impact, the ball excites an infinite number of harmonics at different frequencies. With so many frequencies excited at once, which vibration node is the “sweet spot”? Is it the fundamental mode shape vibration node or is it a node that results from the crossing of different harmonics?

The fundamental mode vibration node cannot be the sweet spot for an obvious reason: It is located at the grip. Try hitting the ball with the grip to pass it over the net. If you are very lucky, you may succeed, but most likely, you won’t. The second vibration mode, meanwhile, has two nodes: one at the grip and one on the strings near the frame head. The latter is considered the sweet spot. Any player that hits the ball at this point will feel almost no vibration during impact.

There are, of course, vibration nodes on the strings for higher modes, as depicted in the third mode from the simulations above. However, as the natural frequency of the mode increases, the magnitude of the vibration drastically *decreases*. The graph below shows the frequency response of a sinusoidal load of 5 ms — approximately the duration of a ball’s impact upon hitting a racket — on a beam-like structure. For frequencies higher than 300 Hz, the magnitude is almost zero. That is, the influence of the third mode or higher is negligible. No matter where the ball strikes, even at points where the magnitude of the mode shape has reached its maximum, the higher modes will not have any influence at all because they are not excited.

*A plot showing the frequency response of a 5 ms sinusoidal load.*
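
The frequency content of such a short pulse can also be checked numerically. Assuming the impact is idealized as a 5 ms half-sine force pulse (an assumption for illustration), its spectrum has its first zero at 3/(2T) = 300 Hz, consistent with the cutoff discussed above:

```python
import numpy as np

# Spectrum of a half-sine force pulse of duration T = 5 ms (an idealized
# ball impact). The first zero of the spectrum falls at f = 3/(2T) = 300 Hz,
# which is why modes above roughly 300 Hz are barely excited.

T = 0.005                           # pulse duration [s]
t = np.linspace(0.0, T, 2001)
pulse = np.sin(np.pi * t / T)       # half-sine pulse

def spectrum(f):
    """|F(f)| of the pulse via direct numerical integration."""
    dt = t[1] - t[0]
    real = np.sum(pulse * np.cos(2.0 * np.pi * f * t)) * dt
    imag = np.sum(pulse * np.sin(2.0 * np.pi * f * t)) * dt
    return np.hypot(real, imag)

print(spectrum(0.0))    # peak value, 2T/pi
print(spectrum(300.0))  # essentially zero: the first spectral zero
```

Evaluating `spectrum` between 0 and 300 Hz reproduces the main lobe of the plotted response; beyond 300 Hz only small side lobes remain.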

When the ball hits the tennis racket near one end, with no other force acting on it, the racket will rotate about an axis located toward the other end. As the point where the ball strikes the racket moves closer to the center of mass, the axis of rotation moves farther away. In the case where the ball hits the center of mass exactly, the racket translates without any rotation; the center of rotation is, from a mathematical point of view, at an infinite distance from the racket.

That said, it is possible to find an impact location that produces a center of rotation near the end of the grip, where the player holds the racket. Referred to as the *center of percussion (COP)*, this location is sometimes considered a sweet spot as well. Because the racket rotates about a point near the end of the grip, almost no force is transmitted to the player's hands.

*Compared to older wooden tennis rackets from the 1970s, modern forms of this equipment feature a much larger head. This new design element has been used to move the center of percussion near the middle of the strings rather than by the racket’s frame. Image by CORE-Materials, via Wikimedia Commons.*

Let’s now take a quick look at what happens from a mechanical standpoint. For this purpose, we assume that the racket can be modeled as a rigid beam-like structure.

*Sketch of the beam-like structure. The parameters used in the following equations are defined in this figure.*

A force F applied to a free beam of mass M at a distance b from the center of mass implies that the center of mass translates at a speed V_{cm}. From Newton’s second law,

F=M \frac{d V_{cm}}{dt}

Moreover, a torque is generated by the force F about the center of mass:

Fb = I\frac{d\omega}{dt}

where I is the beam moment of inertia along the rotation axis and \omega is the angular velocity. Consider P, a point at a distance c from the center of mass. The speed v of this point is v=V_{cm}-c\omega, leading to:

\frac{dv}{dt}=(\frac{1}{M}-\frac{cb}{I})F

Since the center of rotation corresponds to the point where there is no translational acceleration, the COP is at a distance b_{cop} from the center of mass, which is given by

b_{cop}=\frac{I}{c_{cr}M}

where c_{cr} is the distance between the center of rotation and the center of mass. Given that the distance between the center of mass and the ideal center of rotation (at the grip end) is known, it is rather straightforward to determine the position of the COP for a particular racket shape.
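
For a quick sanity check, consider an idealized uniform rod instead of a real (head-heavy) racket; the dimensions below are typical but illustrative:

```python
# COP for an idealized racket: a rigid uniform rod of length L. Real rackets
# are head-heavy, so their moment of inertia and COP differ; the numbers
# below are typical but illustrative.

L = 0.69            # racket length [m]
M = 0.3             # racket mass [kg] (cancels out of the final result)
I = M * L**2 / 12   # moment of inertia of a uniform rod about its center
c_cr = L / 2        # desired center of rotation: the grip end

b_cop = I / (c_cr * M)
print(b_cop)        # L/6 = 0.115 m from the center of mass, toward the head
```

For a uniform rod the COP thus sits one sixth of the length from the center of mass; a real racket's larger head shifts both the center of mass and the COP toward the strings.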

The *power point*, sometimes called the third sweet spot, is the best bouncing point. In other words, this is where the ball achieves the most bounce upon contact. From a mathematical standpoint, the power point is defined as the point with the highest *coefficient of restitution (COR)*: the ratio of the ball's rebound speed to its incident speed. (Since the height of a bouncing ball scales with the square of its speed, the COR can be measured as the square root of the ratio of the rebound height to the drop height.) The coefficient of restitution is quite useful in the sense that it is the result of *all* of the design elements that affect the speed of the ball. Design engineers do not need to know the influence of each parameter, as the COR provides a combined overview of all of these factors.
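
As a short illustration of measuring the COR from a drop test (the heights below are illustrative):

```python
import math

# Measuring the COR from a drop test: a ball dropped from height h0 rebounds
# to height h1. Speed at the floor scales as sqrt(2*g*h), so
# COR = v_rebound / v_incident = sqrt(h1 / h0). Heights here are illustrative.

def cor_from_heights(h0, h1):
    return math.sqrt(h1 / h0)

print(cor_from_heights(1.0, 0.56))  # about 0.75, typical of a tennis ball
```

The same ratio can be evaluated at different points on the strings to map out where the rebound is strongest.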

The power point is located at the throat of the racket, near the center of mass. The closer the point is to the throat, the greater the stiffness and the lower the energy loss during racket deformation. When a ball hits a racket, the impact energy is divided into kinetic energy and elastic energy (energy of deformation) throughout the ball, racket, and strings. At the power point, deformation is very small, so the racket gives almost all of the kinetic energy back to the ball.

The power point is very useful when returning a fast serve. Indeed, if you must return a fast serve, you do not have much time to move your racket and prepare your stroke, so you will return the ball as it comes. The closer the ball comes to the power point, the better your return will be.

One last interesting spot on the racket that I’d like to mention is the *dead spot*. When a ball strikes the dead spot, the ball will not bounce at all. All of the ball’s energy is given to the racket and no energy is given back to the ball. This is due to the fact that the effective mass of the racket at the dead spot — usually close to the tip — is equal to the mass of the ball. Mechanically speaking, the ratio between the resulting force and the acceleration at the dead spot is equal to the mass of the ball.

To better understand the physical phenomena at hand, let’s imagine the ideal collision between a rigid ball at an initial speed V_0 and another rigid ball, initially at rest, that features the same mass m. The conservation of energy and the conservation of momentum lead to:

\frac{1}{2}mV_0^2 = \frac{1}{2}mV_1^2+\frac{1}{2}mV_2^2

mV_0=mV_1+mV_2

Therefore, it turns out that:

V_1=0 \ \text{and} \ V_2=V_0

If a ball collides with another ball that is of the same mass but at rest, the first ball will stop dead and give all of its energy to the second. Thus, when a ball hits the dead spot of a racket at rest, the ball will not bounce at all. This would be a very bad spot to use when trying to return a serve. On the other hand, when you actively hit a stationary ball, as in your own serve, the dead spot provides an efficient transfer of momentum from the racket to the ball.

Then, when it is your turn to serve, what is the optimal point? This is not only determined by the mathematics of sweet spots. In most cases, the answer would be rather close to the tip. Because of the way you move your arm, the racket will feature a significantly higher speed at the tip than at the throat. Thus, the optimal point is determined by a combination of high impact speed and good momentum transfer properties.

We have now gained insight into the physical meaning behind the three well-known sweet spots of a tennis racket. At the vibration node, the uncomfortable vibration that tennis players feel in their hand and arm is minimal. At the center of percussion, the initial shock to the player’s hand is also minimal. Lastly, at the power point, the ball rebounds with maximum speed.

*The location of the sweet spots on a tennis racket.*

Perfect your game; check out these additional resources for improving your tennis skills:

- *Tennis Science for Tennis Players*, by H. Brody.
- “Physics of the tennis racket”, by H. Brody.
- “Physics of the tennis racket II: The sweet spot”, by H. Brody.
- “Physics of the tennis racket III: The ball-racket interaction”, by H. Brody.

We have been interested in cloaking for years and have covered this topic in various ways in previous blog posts. Although there are many different types of cloaking, one common theme is how complex the phenomenon is to achieve mathematically (and physically…).

*An ideal cloak, which is modeled as a spherical shell with a smaller sphere inside. In this optical cloaking example, light waves bend around the smaller sphere, causing it to seem invisible.*

The concept starts with *metamaterials*. Metamaterials are artificial materials that depend on a certain structure and arrangement to work. *Cloaking* devices use these metamaterials to bend waves (such as thermal, electromagnetic, acoustic, and mechanical waves) around an object in order to hide or protect it.

Theoretically, different cloaking devices can perform different functions. For instance, electromagnetic cloaking can render things invisible to the human eye, while mechanical cloaking can hide an object from mechanical vibrations and stress. In reality, it’s not a simple task to cloak something, and this is especially true in structural mechanics. However, researchers are taking leaps forward in the realm of cloaking design.

For instance, you might recall reading about cloaking advancements for flexural waves in Kirchhoff-Love plates here on the COMSOL Blog. The research group that led this study overcame limitations that were previously associated with the cloaking of mechanical waves in elastic plates. They created a new theoretical framework for designing and building these invisibility cloaks and used COMSOL Multiphysics software to simulate and analyze the quality of their cloak.

More recently, researchers at the Karlsruhe Institute of Technology in Germany developed a very simple mathematical approach to cloaking based on a direct lattice transformation technique.

The team of scientists began by considering a 2D discrete lattice composed of a single material. Initially, an electrical analogy was studied, in which the lattice points within this structure were connected by resistors. This resistor network acts as a metamaterial, steering the flow of electric current around the region to be cloaked.

In the direct lattice approach, the lattice points of the structure were subjected to a coordinate transformation and the properties of the resistors were kept the same. Because the resistors and the connections between them were the same, the hole in the middle of the lattice and the distortion surrounding it could not be detected from the outside. Thus, in just one simple step, a cloak was successfully created.
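
To make the idea of transforming lattice points concrete, here is a sketch of one radial, Pendry-style cloaking map applied to 2D points; the KIT team's actual direct lattice transformation may differ:

```python
import numpy as np

# Sketch of a radial cloaking transformation applied to 2D lattice points:
# the disk of radius R2 is mapped onto the annulus [R1, R2], opening a hole
# of radius R1, while points outside R2 are left untouched. This is an
# illustrative map, not necessarily the one used by the KIT team.

def cloak_transform(points, R1, R2):
    r = np.linalg.norm(points, axis=1)
    r_safe = np.where(r == 0.0, 1.0, r)           # avoid division by zero
    r_new = np.where(r < R2, R1 + r * (R2 - R1) / R2, r)
    return points * (r_new / r_safe)[:, None]

pts = np.array([[0.5, 0.0], [0.0, 1.0], [2.0, 0.0]])
print(cloak_transform(pts, R1=0.25, R2=1.0))
```

Because the connections between lattice points (the resistors or springs) keep their original properties, only their positions change, and the hole plus the distorted region around it cannot be detected from outside.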

The research team’s initial findings demonstrated the success of this simple and straightforward technique for cloaking in heat conduction, particle diffusion, electrostatics, and magnetostatics. Then, by replacing the resistors in the lattice structure with linear Hooke’s springs, the researchers found that their transformation approach was successful in cloaking elastic solids as well.

To visualize and test the performance of the lattice-transformation cloak, the researchers used COMSOL Multiphysics simulation software. In the simulations, constant pressure was exerted onto the structure and the resulting strain was analyzed. The direct lattice approach was found to result in less error and less strain under various loading conditions — an indicator of very good performance.

Although mathematically *perfect* cloaking will never exist in reality, mechanical cloaking still has a lot of potential uses in the civil engineering and automotive industries. Using this technique, engineers could create strong materials that maintain their strength and durability, even when forming complex shapes. Constructing buildings of such material would help protect them from earthquake damage, for instance.

*Civil engineers could use mechanical cloaking to design support structures for bridges. (By Alicia Nijdam. Licensed under Creative Commons Attribution 2.0 Generic, via Wikimedia Commons).*

With mechanical cloaking, we could also see complex yet lightweight architecture, carbon-fiber-reinforced cars, and tunnels with better stress protection in the future. Check out the links below for more information about this fascinating topic.

- Read more about the Karlsruhe Institute of Technology team’s mechanical cloaking technique from *Phys.org*
- Learn how fractals contribute to the magic of metamaterials
- Can you print an invisibility cloak with a 3D printer?
- Cloaking in science and fiction

3D printing has emerged as a popular manufacturing technique within a number of industries. The growing demand for this method of manufacturing has prompted greater simulation research behind its processes. Engineers at the Manufacturing Technology Centre (MTC) have identified their customers’ interest in a particular additive manufacturing technique known as shaped metal deposition. By building a simulation app, the team is better able to meet the demands of their customers while delivering more efficient and effective simulation results.

Designers and manufacturers are usually interested in testing various design schemes to create the most optimized device or process. As a simulation expert, you will often find yourself running multiple tests to account for each new design. The Application Builder, however, has revolutionized this process. By turning your model into a simulation app, you can enable those without a background in simulation to run their own tests and obtain results with the click of a button.

When designing an app, you can opt to include only those parameters that are important to your end-user’s particular analysis, hiding the model’s complexity while still including all of the underlying physics. As modifications are made to the design, app users can change specific inputs to simulate the performance of the new configurations. The result: A more efficient simulation process that allows engineers to focus on the design outcome rather than the physics behind the model.

Over the past few weeks, we’ve blogged about several of our own demo apps that are designed to help you get started with making apps. Today, we will share with you how a team at the MTC built their own app to analyze and optimize *shaped metal deposition* (SMD), an additive manufacturing (3D printing) technique. Let’s begin by exploring what prompted the development of this app.

*The MTC team behind the creation of the simulation app.*

The 3D printing industry has experienced tremendous growth within the last several years. As new initiatives have further developed the technology, 3D printing has emerged as a favorable method of manufacturing components for medical devices, automobiles, and apparel, to name a few.

At the MTC — which has recently become home to the UK National Centre for Net Shape and Additive Manufacturing — simulation engineers recognized their customers’ interest in additive manufacturing, with particular regard to shaped metal deposition. In contrast to powder-based additive manufacturing techniques, SMD is valued for its capability to build new features on pre-existing components as well as use a number of materials on the same part.

Similar to welding, this manufacturing technology gradually deposits molten metal onto a surface. A cause for concern within this process is that the thermal expansion and subsequent contraction of the molten metal as it cools can deform the cladding. Thus, the final product can differ from the expected result.

*A simulation of the temperature field in the manufactured part, created by the MTC team.*

*Visible deformation on the manufactured part after six deposited layers.*

Using COMSOL Multiphysics, a team at the MTC created a model to better predict the outcome of the design by minimizing deformations or changing the design to account for such deformations. Responding to the growing popularity of this manufacturing technique, the MTC turned their model into a simulation app that could be shared across various departments within their organization.

The simulation app built by the MTC is based on a thermomechanical analysis of thermal stresses and deformation resulting from SMD thermal cycles. The app was designed to predict if the deposition process would create parts that fell within a specific range of tolerances. In some cases, this could require many tests to be run before arriving at an acceptable final deformation. With the app’s intuitive and user-friendly interface, app users are able to easily modify various inputs to test out each new design and analyze its performance.

*The MTC app’s user interface.*

Within the app, the MTC team has given users the ability to easily test out different geometries, change materials, apply meshing sequences, and experiment with various heat sources and deposition paths. The app also includes two predefined parametric geometries, as well as the option to import a custom geometry.

*Running a simulation using the app. This plot represents the temperature field.*

In *COMSOL News* 2015, Borja Lazaro Toralles, an engineer at the MTC, discussed the advantages of taking this approach to analyzing and optimizing SMD. “Were it not for the app, our simulation experts would have to test out each project we wanted to explore, something that would decrease the availability of skilled resources,” Lazaro Toralles noted in the article.

Since its development, the app has been shared with other members of the MTC team who do not possess simulation expertise. Distributing this easy-to-use tool throughout the organization has offered a simple way for team members to test and validate designs, expediting the simulation process and providing customers with faster results. Additionally, because the app is available to MTC engineers, they can respond rapidly and at low cost to companies that want to explore the use of this additive technology.

The team at the MTC has already begun making updates to their simulation app, further enhancing its functionality and adding new resources for the end-users. Using the Physics Builder, the engineers have started designing a customized physics interface that will enable the modeling of more complex tool paths and melt pools. Tailored to their design needs, this interface will offer engineers an easier and faster method of implementation that is less prone to error.

To further improve the usability of the app, the MTC is planning to offer more contextual guidance through the card stack tool provided by the Application Builder. For increased accuracy, they have plans to add the capability of modeling the evolution of the microstructure on a macroscopic level to predict heat-affected zones.

Recognizing the advantages of building simulation apps, the MTC is looking to create additional apps to evaluate topology optimization as well as the modeling of hot isostatic pressing (HIP). They are also interested in potentially linking COMSOL Server™ with their own cluster to provide a secure environment for managing, running, and sharing simulation apps. This would be especially beneficial for those companies that do not possess high computational power.

- Read a related article in *COMSOL News* 2015: “Optimizing 3D Printing Techniques with Simulation Apps”
- To learn more about creating your own simulation apps, watch this video: Introducing the Application Builder in COMSOL Multiphysics
- Check out our series of blog posts on 3D printing

Miniature devices have many applications and researchers are constantly finding new uses for them. One such use, which we’ve blogged about before, is a microfluidic device that could let patients conduct immune detection tests by themselves. But to work in the microscale, devices like this one, of course, rely on even smaller components such as micropumps.

Let’s turn to a tutorial model of a valveless micropump mechanism that was created by Veryst Engineering, LLC using COMSOL Multiphysics version 5.1.

The micropump in the tutorial model creates an oscillatory fluid flow by repeating an upstroke and downstroke motion. The fluid flow enters a horizontal channel containing two tilted microflaps, which are located on either side of the micropump. The microflaps passively bend in reaction to the motion of the fluid and help to generate a net flow that moves in one direction. Through this process, the micropump mechanism is able to create fluid flow without the need for valves.

*The geometry of the micropump mechanism tutorial.*

Please note that the straight lines above the microflaps are there to help the meshing algorithm. Check out the tutorial model document if you’d like to learn how this model was created.

The tutorial calculates the micropump mechanism’s net flow rate over a time period of two seconds — the amount of time it takes for two full pumping cycles. The Reynolds number is set to 16 for this simulation so that we can evaluate the valveless micropump mechanism’s performance at low Reynolds numbers. The *Fluid-Structure Interaction* interface in COMSOL Multiphysics is instrumental in taking the flaps’ effects on the overall flow into account, while also making the model easy to set up.

*Left: At a time of 0.26 seconds, the fluid is pushed down and most of it flows to the outlet on the right. Right: At a time of 0.76 seconds, the fluid is pulled up and most of it flows from the inlet on the left.*

The simulation starts with the micropump’s downstroke, which is when the micropump pushes fluid down into the horizontal channel. This action causes the microflap on the right to bend down and the microflap on the left to curve up. In this position, the left-side microflap is obstructing the flow to the left and the flow channel on the right is widened. This naturally causes the majority of the fluid to flow to the right, since it is the path of least resistance.

During the following pumping upstroke, fluid is pumped up into the vertical chamber. Here, the flow causes the microflaps to bend in opposite directions from the previous case. This shift doesn’t change the direction of the net flow, because now the majority of the fluid is drawn into the flow channel from the inlet on the left.

Due to the natural deformation of the microflaps caused by the moving fluid, both of these stages created a left-to-right net flow rate. But how well did the micropump mechanism do at maintaining this flow over the entire simulation time period?

*The net fluid volume that is pumped from left to right.*

During the two-second test, the net volume pumped from left to right increased continually, with a higher net flow rate during the peaks of the stroke speed. This shows that the valveless micropump mechanism can function even at low Reynolds numbers.

The valveless micropump mechanism could have many future applications, one of which is to work as a fluid delivery system. In such a scenario, a micropump mechanism could take fluid from a droplet reservoir on its left and move it through a microfluidic channel to an outlet on its right. In this post we have shown just one set of simulation results. By experimenting with the tutorial model set up by Veryst Engineering, you can visualize how a valveless micropump may work in different situations and use this information to discover new uses for micropump mechanisms.

- Download the tutorial model: Micropump Mechanism

In the vast majority of simulations involving linear elastic materials, we are dealing with an isotropic material that does not have any directional sensitivity. To describe such a material, only two independent material parameters are required. There are many possible ways to select these parameters, but some of them are more popular than others.

Young’s modulus, shear modulus, and Poisson’s ratio are the parameters most commonly found in tables of material data. They are not independent, since the shear modulus, G, can be computed from Young’s modulus, E, and Poisson’s ratio, \nu, as

G = \frac{E}{2(1+\nu)}

Young’s modulus can be directly measured in a uniaxial tensile test, while the shear modulus can be measured in, for example, a pure torsion test.

In the uniaxial test, Poisson’s ratio determines how much the material will shrink (or possibly expand) in the transverse direction. The allowable range is -1 < \nu < 0.5, where positive values indicate that the material shrinks in the thickness direction while being pulled. There are a few materials, called *auxetics*, which have a negative Poisson’s ratio. A cork in a wine bottle has a Poisson’s ratio close to zero, so that its diameter is insensitive to whether it is pulled or pushed.

For many metals and alloys, \nu \approx 1/3, and the shear modulus is then about 40% of Young’s modulus.

Given the possible values of \nu, the possible ratios between the shear modulus and Young’s modulus are

\frac{1}{3} < \frac{G}{E} < \infty

When \nu approaches 0.5, the material becomes incompressible. Such materials pose specific problems in an analysis, as we will discuss.

The bulk modulus, K, measures the change in volume for a given uniform pressure. Expressed in E and \nu, it can be written as:

K = \frac{E}{3(1-2\nu)}

When \nu = 1/3, the value of the bulk modulus equals the value of Young’s modulus, but for an incompressible material (\nu \to 0.5), K tends to infinity.

The bulk modulus is usually specified together with the shear modulus. These two quantities are, in a sense, the most physically independent choices of parameters. The volume change is only controlled by the bulk modulus and the distortion is only controlled by the shear modulus.

The Lamé constants \mu and \lambda are mostly seen in more mathematical treatises of elasticity. The full 3D constitutive relation between the stress tensor \boldsymbol \sigma and the strain tensor \boldsymbol \varepsilon can be conveniently written in terms of the Lamé constants:

\boldsymbol \sigma=2\mu \boldsymbol \varepsilon +\lambda \; \mathrm{trace}(\boldsymbol{\varepsilon}) \mathbf I

The constant \mu is simply the shear modulus, while \lambda can be written as

\lambda = \frac{E \nu}{(1+\nu)(1-2\nu)}

A full table of conversions between the various elastic parameters can be found here.
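Outside COMSOL Multiphysics, these conversions are also easy to check in a few lines of code. Here is a minimal Python sketch that computes G, K, and \lambda from E and \nu using the relations above; the steel-like input values are purely illustrative.

```python
# Convert (E, nu) into the other common elastic parameters using the
# relations given above. The input values below are illustrative only.

def elastic_parameters(E, nu):
    """Return (G, K, lam): shear modulus, bulk modulus, Lame's first constant."""
    if not -1.0 < nu < 0.5:
        raise ValueError("Poisson's ratio must lie in (-1, 0.5)")
    G = E / (2.0 * (1.0 + nu))                       # shear modulus
    K = E / (3.0 * (1.0 - 2.0 * nu))                 # bulk modulus
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))   # Lame's first constant
    return G, K, lam

# For nu = 1/3, typical of many metals, K equals E and G = 3E/8:
G, K, lam = elastic_parameters(200.0e9, 1.0 / 3.0)  # steel-like E in Pa
```

As a consistency check, the returned values also satisfy \lambda = K - 2G/3.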

Some materials, like rubber, are almost incompressible. Mathematically, a fully incompressible material differs fundamentally from a compressible material. Since there is no volume change, the mean stress cannot be determined from it. The state equation giving the mean stress (pressure), *p*, as a function of the volume change, \Delta V,

p = f(\Delta V)

will no longer exist, and must instead be replaced by a constraint stating that

\Delta V = 0

Another way of looking at incompressibility is to note that the term (1-2\nu) appears in the denominator of the constitutive equations, so that a division by zero would occur if \nu= 0.5. Is it then a good idea to model an incompressible material approximately by setting \nu= 0.499?

It can be done, but in this case, a standard displacement based finite element formulation may give undesirable results. This is caused by a phenomenon called *locking*. Effects include:

- Overly stiff models.
- Checkerboard stress patterns.
- Errors or warnings from the equation solver because of ill-conditioning.

The remedy is to use a *mixed formulation* where the pressure is introduced as an extra degree of freedom. In COMSOL Multiphysics, you enable the mixed formulation by selecting the *Nearly incompressible material* checkbox in the settings for the material model.

*Part of the settings for a linear elastic material with mixed formulation enabled.*

When Poisson’s ratio is larger than about 0.45, or equivalently, the bulk modulus is more than one order of magnitude larger than the shear modulus, it is advisable to use a mixed formulation. An example of the effect is shown in the figure below.

*Stress distribution in a simple plane strain model, \nu = 0.499. The top image shows a standard displacement based formulation, while the bottom image shows a mixed formulation.*

In the solution with only displacement degrees of freedom, the stress pattern shows distortions at the left end where there is a constraint. These distortions are almost completely removed by using a mixed formulation.

In general cases of linear elastic materials, the material properties have a directional sensitivity. The most general case is called anisotropic, which means all six stress components can depend on all six strain components. This requires 21 material parameters. Clearly, it is a demanding task to obtain all of this data. If the stress, \boldsymbol \sigma, and strain, \boldsymbol \varepsilon, are treated as vectors, they are related by the constitutive 6-by-6 symmetric matrix \mathbf D through

\boldsymbol \sigma= \mathbf D \boldsymbol \varepsilon

Fortunately, it is common that nonisotropic materials exhibit certain symmetries. In an orthotropic material, there are three orthogonal directions in which the shear action is decoupled from the axial action. That is, when the material is stretched along one of these principal directions, it will only contract in the two orthogonal directions, but not be sheared. A full description of an orthotropic material requires nine independent material parameters.

The constitutive relation of an orthotropic material is easier to write in compliance form, \boldsymbol \varepsilon= \mathbf C \boldsymbol \sigma:

\mathsf{C} =
\begin{bmatrix}
\tfrac{1}{E_{\rm X}} & -\tfrac{\nu_{\rm YX}}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZX}}{E_{\rm Z}} & 0 & 0 & 0 \\
-\tfrac{\nu_{\rm XY}}{E_{\rm X}} & \tfrac{1}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZY}}{E_{\rm Z}} & 0 & 0 & 0 \\
-\tfrac{\nu_{\rm XZ}}{E_{\rm X}} & -\tfrac{\nu_{\rm YZ}}{E_{\rm Y}} & \tfrac{1}{E_{\rm Z}} & 0 & 0 & 0 \\
0 & 0 & 0 & \tfrac{1}{G_{\rm YZ}} & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm ZX}} & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm XY}}
\end{bmatrix}

Since the compliance matrix must be symmetric, the twelve constants used are reduced to nine through three symmetry relations of the type

\tfrac{\nu_{\rm YX}}{E_{\rm Y}} = \tfrac{\nu_{\rm XY}}{E_{\rm X}}

Note that \nu_{\rm YX} \neq \nu_{\rm XY}, so when dealing with orthotropic data, it is important to make sure that the intended Poisson’s ratio values are used. The notation may not be the same in all sources.

Anisotropy and orthotropy commonly occur in inhomogeneous materials. Often, the properties are not measured, but computed using a homogenization process upscaling from microscopic to macroscopic scale. A discussion about such homogenization — in quite another context — can be found in this blog post.

For nonisotropic materials, there are limitations to the possible values of the material parameters similar to those described for isotropic materials. It is difficult to immediately see these limitations, but there are two things to look out for:

- The constitutive matrix \mathbf D must be positive definite.
  - For a general anisotropic material, the only option is to check whether all of its eigenvalues are positive.
  - For an orthotropic material, this is true if all six elastic moduli are positive and \nu_{\rm XY}\nu_{\rm YX}+\nu_{\rm YZ}\nu_{\rm ZY}+\nu_{\rm ZX}\nu_{\rm XZ}+2\nu_{\rm YX}\nu_{\rm ZY}\nu_{\rm XZ}<1
- If the material has low compressibility, a mixed formulation must be used.
  - It is possible to make an estimate of an effective bulk modulus and the values of the shear moduli.
  - In cases of uncertainty, it is better to take the extra cost of the mixed formulation to avoid possible inaccuracies.
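These orthotropic checks are straightforward to script. The Python sketch below implements them, with the cubic term following the standard positive-definiteness criterion for orthotropic elasticity; the sample values in the sanity checks are illustrative, not measured data.

```python
# Check the stability (positive-definiteness) conditions for an
# orthotropic linear elastic material. Sample values are illustrative.

def orthotropic_is_stable(EX, EY, EZ, GXY, GYZ, GZX, nuXY, nuYZ, nuZX):
    """Return True if the orthotropic elasticity matrix is positive definite."""
    # All six elastic moduli must be positive.
    if min(EX, EY, EZ, GXY, GYZ, GZX) <= 0.0:
        return False
    # The remaining Poisson's ratios follow from the symmetry relations
    # of the type nu_YX / E_Y = nu_XY / E_X.
    nuYX = nuXY * EY / EX
    nuZY = nuYZ * EZ / EY
    nuXZ = nuZX * EX / EZ
    # Determinant-type condition on the Poisson's ratios.
    return (nuXY * nuYX + nuYZ * nuZY + nuZX * nuXZ
            + 2.0 * nuYX * nuZY * nuXZ) < 1.0

# Isotropic sanity check (E = 1, nu = 0.3, G = E / 2.6): stable.
assert orthotropic_is_stable(1, 1, 1, 1/2.6, 1/2.6, 1/2.6, 0.3, 0.3, 0.3)
# Excessive Poisson's ratios violate positive definiteness.
assert not orthotropic_is_stable(1, 1, 1, 0.4, 0.4, 0.4, 0.8, 0.8, 0.8)
```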

When working with geometrically nonlinear problems, the meaning of “linear elasticity” is really a matter of convention. The issue here is that there are several possible representations of stresses and strains. For a discussion about different stress and strain measures, see this previous blog post.

Since the primary stress and strain quantities in COMSOL Multiphysics are Second Piola-Kirchhoff stress and Green-Lagrange strain, the natural interpretation of linear elasticity is that these quantities are linearly related to each other. Such a material is sometimes called a St. Venant material.

Intuitively, one could expect that “linear elasticity” means that there is a linear relation between force and displacement in a simple tensile test. This will not be the case, since both stresses and strains depend on the deformation. To see this, consider a bar with a square cross section.

*The bar subjected to uniform extension.*

The original length of the bar is L_0 and the original cross-section area is A_0=a_0^2, where a_0 is the original edge of the cross section. Assume that the bar is extended a distance \Delta so that the current length is L=L_0+\Delta=L_0(1+\xi).

Here, 1+\xi is the axial stretch and \xi can be interpreted as the engineering strain. The new length of the edge of the cross section is a=a_0+d=a_0(1+\eta), where \eta is the engineering strain in the transverse directions.

The force can be expressed as the Cauchy stress \sigma_x in the axial direction multiplied by the current cross-section area:

F = \sigma_x A = \sigma_x A_0 (1+\eta)^2

To use the linear elastic relation, the Cauchy stress \boldsymbol \sigma must be expressed in terms of the Second Piola-Kirchhoff stress \mathbf S. The transformation rule is

\boldsymbol \sigma = J^{-1} \mathbf F \mathbf S \mathbf F^T

where \mathbf F is the deformation gradient tensor, and the volume scale is defined as J = \det(\mathbf F). Without going into details, for a uniaxial case

\sigma_x = \frac{F_{xX}}{F_{yY}F_{zZ}}S_X= \frac{(1+\xi)}{(1+\eta)^2}S_X

Since for a St. Venant material in uniaxial extension, the axial stress is related to the axial strain as S_X = E \varepsilon_X, we obtain

F = S_X A_0 (1+\xi) = E A_0 (1+\xi)\varepsilon_X

Given that the axial term of the Green-Lagrange strain tensor is defined as

\varepsilon_X = \frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2 = \xi+\frac{1}{2}\xi^2

the force versus displacement relation is then

F = E A_0 (1+\xi)(\xi + \frac{1}{2}\xi^2)=E A_0 (\xi+\frac{3}{2}\xi^2+\frac{1}{2}\xi^3)

A linear elastic material combined with geometric nonlinearity thus implies a cubic relation between force and engineering strain (or force versus displacement, since \Delta =L_0\xi), as shown in the figure below.

*The uniaxial response of a linear elastic material under geometric nonlinearity.*
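The cubic relation is easy to explore numerically. In this small Python sketch, the force is nondimensionalized by E A_0, so no material data is needed:

```python
import math

# Dimensionless force F / (E * A0) for a St. Venant material in uniaxial
# extension, and its derivative with respect to xi (the tangent stiffness).

def force_ratio(xi):
    """F / (E * A0) = xi + (3/2) xi^2 + (1/2) xi^3."""
    return xi + 1.5 * xi**2 + 0.5 * xi**3

def stiffness_ratio(xi):
    """d(F / (E * A0)) / d(xi)."""
    return 1.0 + 3.0 * xi + 1.5 * xi**2

# The tangent stiffness vanishes on the compression side at
# xi = sqrt(1/3) - 1, which is about -0.42:
xi_crit = math.sqrt(1.0 / 3.0) - 1.0
```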

As can be seen in the graph, the stiffness of the material approaches zero on the compression side at \xi = \sqrt{1/3}-1 \approx -0.42. In practice, this means that the simulation will fail at that strain level. It can be argued that there are no real materials that are linear at large strains, so this should not cause problems in practice. However, linear elastic materials are often used far outside the range of reasonable stresses for several reasons, such as:

- Often, you may want to do a quick “order of magnitude” check before introducing more sophisticated material models.
- There are singularities in the model that cause very high strains at a point.
  - Read more about singularities here.
- In contact problems, the study is always geometrically nonlinear.
  - Often, high compressive strains appear locally in the contact zone at some time during the analysis.

In all of these cases, the solver may fail to find a solution if the compressive strains are large. If you suspect this to be the case, it is a good idea to plot the smallest principal strain. If it is smaller than -0.3 or so, we can expect this kind of breakdown. The critical value in terms of the Green-Lagrange strain is found to be -1/3. When this becomes a problem, you should consider changing to a suitable hyperelastic material model.

Compression may not be the only problem. In the analysis above, Poisson’s ratio did not enter the equations. So what happens with the cross section?

By definition in the uniaxial case, the transverse strain is related to the axial strain by

\varepsilon_Y = -\nu \varepsilon_X

When these strains are Green-Lagrange strains, this is a nonlinear relation stating that

\frac{\partial v}{\partial Y} + \frac{1}{2}(\frac{\partial v}{\partial Y})^2 = -\nu (\frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2)

Thus, there is a strong nonlinearity in the change of the cross section. Solving this quadratic equation gives the following relation between the engineering strains

\eta = \sqrt{1-\nu(\xi^2+2\xi)}-1

The result is shown in the figure below.

*Transverse displacement as a function of the axial displacement for uniaxial tension of a St. Venant material. Five different values of Poisson’s ratio are shown.*

As you can see, the cross section collapses quickly at large extensions for higher values of Poisson’s ratio.
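The collapse is easy to reproduce from the quadratic relation above. Here is a minimal Python sketch (Green-Lagrange strains, St. Venant material, uniaxial stress); the value \nu = 0.49 is just an example of a nearly incompressible case:

```python
import math

def transverse_strain(xi, nu):
    """Transverse engineering strain eta for axial engineering strain xi,
    from eta = sqrt(1 - nu * (xi**2 + 2*xi)) - 1."""
    radicand = 1.0 - nu * (xi**2 + 2.0 * xi)
    if radicand < 0.0:
        raise ValueError("cross section has collapsed at this stretch")
    return math.sqrt(radicand) - 1.0

# At small strains, the linear result eta = -nu * xi is recovered:
eta_small = transverse_strain(1.0e-3, 0.3)

# For nu = 0.49, the radicand reaches zero at xi = sqrt(1 + 1/nu) - 1,
# i.e., the cross section has shrunk to nothing at roughly 74% extension.
xi_collapse = math.sqrt(1.0 + 1.0 / 0.49) - 1.0
```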

If another choice of stress and strain representation had been made — for example, if the Cauchy stress were proportional to the logarithmic, or “true” strain — it would have resulted in quite a different response. Instead, such a material has a stiffness that decreases with elongation, where the force-displacement response does depend on the value of Poisson’s ratio. Still, both materials can correctly be called “linear elastic”, although the results computed with large strain elasticity can differ widely between two different simulation platforms.

We have illustrated some limits for the use of linear elastic materials. In particular, the possible pitfalls related to incompressibility and to the combination of linear elasticity with large strains have been highlighted.

If you are interested in reading more about material modeling in structural mechanics problems, check out these blog posts:

- Introducing Nonlinear Elastic Materials
- Obtaining Material Data for Structural Mechanics from Measurements
- Part 2: Obtaining Material Data for Structural Mechanics from Measurements
- Fitting Measured Data to Different Hyperelastic Material Models
- Yield Surfaces and Plastic Flow Rules in Geomechanics
- Computing Stiffness of Linear Elastic Structures: Part 1
- Computing Stiffness of Linear Elastic Structures: Part 2

After obtaining our measured data, the question then becomes this: How can we estimate the material parameters required for defining the hyperelastic material models based on the measured data? One of the ways to do this in COMSOL Multiphysics is to fit a parameterized analytic function to the measured data using the Optimization Module.

In the section below, we will define analytical expressions for stress-strain relationships for two common tests — the *uniaxial test* and the *equibiaxial test*. These analytical expressions will then be fitted to the measured data to obtain material parameters.

Characterizing the volumetric deformation of hyperelastic materials to estimate material parameters can be a rather intricate process. Oftentimes, perfect incompressibility is assumed in order to estimate the parameters. This means that after estimating material parameters from curve fitting, you would have to use a reasonable value for bulk modulus of the nearly incompressible hyperelastic material, as this property is not calculated.

Here, we will fit the measured data to several perfectly incompressible hyperelastic material models. We will start by reviewing some of the basic concepts of the nearly incompressible formulation and then characterize the stress measures for the case of perfect incompressibility.

For nearly incompressible hyperelasticity, the total strain energy density is presented as

W_s = W_{iso}+W_{vol}

where W_{iso} is the isochoric strain energy density and W_{vol} is the volumetric strain energy density. The second Piola-Kirchhoff stress tensor is then given by

S = -p_pJC^{-1}+2\frac{\partial W_{iso}}{\partial C}

where p_{p} is the volumetric stress, J is the volume ratio, and C is the right Cauchy-Green tensor.

You can expand the second term from the above equation so that the second Piola-Kirchhoff stress tensor can be equivalently expressed as

S = -p_pJC^{-1}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right)I-J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} C -\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)C^{-1}\right)

where \bar{I}_{1} and \bar{I}_{2} are invariants of the isochoric right Cauchy-Green tensor \bar{C} = J^{-2/3}C.

The first Piola-Kirchhoff stress tensor, P, and the Cauchy stress tensor, \sigma, can be expressed as a function of the second Piola-Kirchhoff stress tensor as

\begin{align}
P &= FS\\
\sigma &= J^{-1}FSF^{T}
\end{align}

Here, F is the deformation gradient.

Note: You can read more about the description of different stress measures in our previous blog entry “Why All These Stresses and Strains?”

The strain energy density and stresses are often expressed in terms of the stretch ratio \lambda. The *stretch ratio* is a measure of the magnitude of deformation. In a uniaxial tension experiment, the stretch ratio is defined as \lambda = L/L_0, where L is the deformed length of the specimen and L_0 is its original length. In a multiaxial stress state, you can calculate principal stretches \lambda_a\;(a = 1,2,3) in the principal referential directions \hat{\mathbf{N}_a}, which are the same as the directions of the principal stresses. The stress tensor components can be rewritten in the spectral form as

S = \sum_{a=1}^{3} S_{a}\, \hat{\mathbf{N}}_{a} \otimes \hat{\mathbf{N}}_{a}

where S_{a} represents the principal values of the second Piola-Kirchhoff stress tensor and \hat{\mathbf{N}_{a}} represents the principal referential directions. You can represent the right Cauchy-Green tensor in its spectral form as

C = \sum_{a=1}^{3} \lambda_a^2\, \hat{\mathbf{N}}_a \otimes \hat{\mathbf{N}}_a

where \lambda_a indicates the values of the principal stretches. This allows you to express the principal values of the second Piola-Kirchhoff stress tensor as a function of the principal stretches

S_a = \frac{-p_p J}{\lambda_a^2}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right) -J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} \lambda_a^2 -\frac{1}{\lambda_a^2}\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)\right)

Now, let’s consider the uniaxial and biaxial tension tests explained in the initial blog post in our Structural Materials series. For both of these tests, we can derive a general relationship between stress and stretch.

Under the assumption of incompressibility (J=1), the principal stretches for the uniaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda, \lambda_2 = \lambda_3 = \lambda^{-1/2}

The deformation gradient is given by

F = \left(\begin{array}{ccc} \lambda & 0 & 0 \\ 0 & \frac{1}{\sqrt{\lambda}} & 0 \\ 0 & 0 & \frac{1}{\sqrt{\lambda}} \end{array}\right)

For uniaxial extension, S_2 = S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_{1} = 2\left(\frac{1}{\lambda} -\frac{1}{\lambda^4}\right) \left(\lambda \frac{\partial W_{iso}}{\partial \bar{I}_{1_{uni}}}+\frac{\partial W_{iso}}{\partial \bar{I}_{2_{uni}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \qquad (1)

The isochoric invariants \bar{I}_{1_{uni}} and \bar{I}_{2_{uni}} can be expressed in terms of the principal stretch \lambda as

\begin{align*}
\bar{I}_{1_{uni}} &= \lambda^2+\frac{2}{\lambda} \\
\bar{I}_{2_{uni}} &= 2\lambda + \frac{1}{\lambda^2}
\end{align*}
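As a quick sanity check on the uniaxial stress relation above, the Python sketch below evaluates it for an isochoric energy with \partial W_{iso}/\partial \bar{I}_1 = \mu/2 and \partial W_{iso}/\partial \bar{I}_2 = 0 (an incompressible neo-Hookean solid) and compares against the closed form S_1 = \mu(1-\lambda^{-3}) that this choice implies; \mu is an illustrative parameter.

```python
# Evaluate the general uniaxial stress relation
#   S1 = 2*(1/lam - 1/lam**4) * (lam * dW_dI1 + dW_dI2)
# for constant dW_dI1 = mu/2 and dW_dI2 = 0, and compare it with the
# closed form S1 = mu * (1 - lam**-3).

mu = 1.3  # illustrative material parameter

def S1_general(lam, dW_dI1, dW_dI2):
    return 2.0 * (1.0 / lam - 1.0 / lam**4) * (lam * dW_dI1 + dW_dI2)

checks = []
for lam in [0.8, 1.0, 1.5, 2.5]:
    general = S1_general(lam, mu / 2.0, 0.0)
    closed = mu * (1.0 - lam**-3)
    checks.append(abs(general - closed))
```

The two expressions agree to machine precision over the sampled stretches.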

Under the assumption of incompressibility, the principal stretches for the equibiaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda_2 = \lambda, \; \lambda_3 = \lambda^{-2}

For equibiaxial extension, S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_1 = S_2 = 2\left(1-\frac{1}{\lambda^6}\right)\left(\frac{\partial W_{iso}}{\partial \bar{I}_{1_{bi}}}+\lambda^2\frac{\partial W_{iso}}{\partial \bar{I}_{2_{bi}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \qquad (2)

The invariants \bar{I}_{1_{bi}} and \bar{I}_{2_{bi}} are then given by

\begin{align*}
\bar{I}_{1_{bi}} &= 2\lambda^2 + \frac{1}{\lambda^4} \\
\bar{I}_{2_{bi}} &= \lambda^4 + \frac{2}{\lambda^2}
\end{align*}

Let’s now look at the stress versus stretch relationships for a few of the most common hyperelastic material models. We will consider the first Piola-Kirchhoff stress for the purpose of curve fitting.

The total strain energy density for a Neo-Hookean material model is given by

W_s = \frac{1}{2}\mu\left(\bar{I}_1-3\right)+\frac{1}{2}\kappa\left(J_{el}-1\right)^2

where J_{el} is the elastic volume ratio and \mu is a material parameter that we need to compute via curve fitting. Under the assumption of perfect incompressibility and using equations (1) and (2), the first Piola-Kirchhoff stress expressions for the cases of uniaxial and equibiaxial deformation are given by

\begin{align*}
P_{1_{uniaxial}} &= \mu\left(\lambda-\lambda^{-2}\right)\\
P_{1_{biaxial}} &= \mu\left(\lambda-\lambda^{-5}\right)
\end{align*}
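With the uniaxial expression in hand, estimating \mu from test data reduces to a one-line least-squares fit, since P_1 is linear in \mu. The blog’s workflow uses the Optimization Module in COMSOL Multiphysics; the plain-Python sketch below, with synthetic “measured” data generated around \mu = 1 in arbitrary stress units, only illustrates the idea.

```python
# Least-squares estimate of the neo-Hookean parameter mu from uniaxial
# data, using P1 = mu * (lam - lam**-2). The "measurements" are synthetic,
# generated with mu = 1.0 plus a small alternating perturbation.

stretches = [1.1, 1.2, 1.4, 1.6, 1.8, 2.0]
basis = [lam - lam**-2 for lam in stretches]
p_meas = [1.0 * b + 0.01 * (-1.0)**i for i, b in enumerate(basis)]

# P1 is linear in mu, so the normal equation collapses to a single ratio:
mu_fit = sum(p * b for p, b in zip(p_meas, basis)) / sum(b * b for b in basis)
```

The fitted value lands very close to the \mu = 1 used to generate the data.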

The stress versus stretch relationship for a few of the other hyperelastic material models are listed below. These can be easily derived through the use of equations (1) and (2), which relate stress and the strain energy density.

\begin{align*}

P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10}+C_{01}\right)\\

P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+\lambda^2 C_{01}\right)

\end{align*}


Here, C_{10} and C_{01} are Mooney-Rivlin material parameters.

\begin{align}\begin{split}

P_{1_{uniaxial}}& = 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10} + 2C_{20}\lambda\left(I_{1_{uni}}-3\right)+C_{11}\lambda\left(I_{2_{uni}}-3\right)\\

& \quad +C_{01}+2C_{02}\left(I_{2_{uni}}-3\right)+C_{11}\left(I_{1_{uni}}-3\right)\right)\\

P_{1_{biaxial}}& = 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+2C_{20}\left(I_{1_{bi}}-3\right)+C_{11}\left(I_{2_{bi}}-3\right)\\

& \quad +\lambda^2C_{01}+2\lambda^2C_{02}\left(I_{2_{bi}}-3\right)+\lambda^2 C_{11}\left(I_{1_{bi}}-3\right)\right)

\end{split}

\end{align}


Here, C_{10}, C_{01}, C_{20}, C_{02}, and C_{11} are Mooney-Rivlin material parameters.

\begin{align}

P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{uni}}^{p-1}\\

P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{bi}}^{p-1}

\end{align}


Here, \mu_0 and N are Arruda-Boyce material parameters, and c_p are the first five coefficients of the Taylor expansion of the inverse Langevin function.
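The Arruda-Boyce expressions can be sketched in the same way. The coefficients below are the commonly cited first five Taylor coefficients of the inverse Langevin function; the function itself is our own illustrative construction, not a COMSOL API:

```python
import numpy as np

# Commonly cited first five Taylor coefficients of the inverse
# Langevin function: 1/2, 1/20, 11/1050, 19/7000, 519/673750.
C_P = [1/2, 1/20, 11/1050, 19/7000, 519/673750]

def arruda_boyce_P1(lam, mu0, N, mode="uniaxial"):
    """Nominal stress for the incompressible Arruda-Boyce model."""
    lam = np.asarray(lam, dtype=float)
    if mode == "uniaxial":
        I1 = lam**2 + 2.0 / lam       # first invariant, uniaxial
        kin = lam - lam**-2
    else:  # equibiaxial
        I1 = 2.0 * lam**2 + lam**-4   # first invariant, equibiaxial
        kin = lam - lam**-5
    dWdI1 = mu0 * sum(p * cp / N**(p - 1) * I1**(p - 1)
                      for p, cp in enumerate(C_P, start=1))
    return 2.0 * kin * dWdI1
```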

\begin{align}

P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{uni}}-3\right)^{p-1}\\

P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{bi}}-3\right)^{p-1}

\end{align}


Here, the values of c_p are Yeoh material parameters.

\begin{align}

P_{1_{uniaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-\frac{\alpha_p}{2}-1}\right)\\

P_{1_{biaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-2\alpha_p-1}\right)

\end{align}


Here, \mu_p and \alpha_p are Ogden material parameters.
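The Ogden expressions lend themselves to the same kind of sketch (again, our own illustration rather than COMSOL's implementation). A useful consistency check is that a one-term Ogden model with \alpha = 2 reduces to the Neo-Hookean expression:

```python
import numpy as np

def ogden_P1(lam, mu, alpha, mode="uniaxial"):
    """Nominal stress for an incompressible Ogden model with an
    arbitrary number of (mu_p, alpha_p) term pairs."""
    lam = np.asarray(lam, dtype=float)
    P = np.zeros_like(lam)
    for mu_p, a_p in zip(mu, alpha):
        if mode == "uniaxial":
            P = P + mu_p * (lam**(a_p - 1) - lam**(-a_p / 2 - 1))
        else:  # equibiaxial
            P = P + mu_p * (lam**(a_p - 1) - lam**(-2 * a_p - 1))
    return P

# One-term Ogden with alpha = 2 reduces to the Neo-Hookean expression
# mu*(lam - lam**-2), a convenient consistency check.
P_check = ogden_P1(2.0, mu=[0.4], alpha=[2.0])
```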

Using the *Optimization* interface in COMSOL Multiphysics, we will fit measured stress versus stretch data against the analytical expressions detailed in our discussion above. Note that the measured data we are using here is the *nominal stress*, which is defined as the force in the current configuration acting on the original area. It is important that the measured data is fit against the appropriate stress measure. Therefore, we will fit the measured data against the analytical expressions for the first Piola-Kirchhoff stress. The plot below shows the measured nominal stress (raw data) for uniaxial and equibiaxial tests on vulcanized rubber.

*Measured stress-strain curves by Treloar.*

Let’s begin by setting up the model to fit the uniaxial Neo-Hookean stress to the uniaxial measured data. The first step is to add an *Optimization* interface to a 0D model. Here, *0D* implies that our analysis is not tied to a particular geometry.

Next, we can define the material parameters that need to be computed as well as the variable for the analytical stress versus stretch relationship. The screenshot below shows the parameters and variable defined for the case of a uniaxial Neo-Hookean material model.

Within the *Optimization* interface, a *Global Least-Squares Objective* branch is added, where we can specify the measured uniaxial stress versus stretch data as an input file. Next, a *Parameter Column* and a *Value Column* are added. Here, we define lambda (stretch) as a measured parameter and specify the uniaxial analytical stress expression to fit against the measured data. We can also specify a weighting factor in the *Column contribution weight* setting. For detailed instructions on setting up the *Global Least-Squares Objective* branch, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

We can now solve the above problem and estimate material parameters by fitting our uniaxial tension test data against the uniaxial Neo-Hookean material model. This is, however, rarely a good idea. As explained in Part 1 of this blog series, the seemingly simple test can leave many loose ends. Later on in this blog post, we will explore the consequence of material calibration based on just one data set.

Depending on the operating conditions, you can obtain a better estimate of material parameters through a combination of measured uniaxial tension, compression, biaxial tension, torsion, and volumetric test data. This compiled data can then be fit against analytical stress expressions for each of the applicable cases.

Here, we will use the equibiaxial tension test data alongside the uniaxial tension test data. Just as we have set up the optimization model for the uniaxial test, we will define another global least-squares objective for the equibiaxial test as well as corresponding parameter and value columns. In the second global least-squares objective, we will specify the measured equibiaxial stress versus stretch data file as an input file. In the value column, we will specify the equibiaxial analytical stress expression to fit against the equibiaxial test data.

The settings of the Optimization study step are shown in the screenshot below. The model tree branches have been manually renamed to reflect the material model (Neo-Hookean) and the two tests (uniaxial and equibiaxial). The optimization algorithm is a Levenberg-Marquardt solver, which is used to solve problems of the least-square type. The model is now set to optimize the sum of two global least-square objectives — the uniaxial and equibiaxial test cases.
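COMSOL's Levenberg-Marquardt solver handles the general nonlinear case. For the one-parameter Neo-Hookean model, the objective happens to be linear in \mu, so the same idea of a weighted sum of two least-squares objectives can be sketched with a direct solve. This is an illustrative sketch with synthetic placeholder data, not the tutorial's workflow:

```python
import numpy as np

def fit_neo_hookean(lam_uni, P_uni, lam_bi, P_bi, w_uni=1.0, w_bi=1.0):
    """Weighted least-squares fit of the single Neo-Hookean parameter mu
    to combined uniaxial and equibiaxial nominal-stress data. The model
    is linear in mu, so a direct lstsq call can replace the iterative
    Levenberg-Marquardt solver needed for general nonlinear models."""
    f = np.concatenate([w_uni * (lam_uni - lam_uni**-2),
                        w_bi * (lam_bi - lam_bi**-5)])
    y = np.concatenate([w_uni * P_uni, w_bi * P_bi])
    mu, *_ = np.linalg.lstsq(f[:, None], y, rcond=None)
    return mu[0]

# Synthetic check: data generated with mu = 0.4 should be recovered.
lam = np.linspace(1.1, 3.0, 20)
mu_fit = fit_neo_hookean(lam, 0.4 * (lam - lam**-2),
                         lam, 0.4 * (lam - lam**-5))
```

Changing `w_uni` and `w_bi` shifts the compromise between the two tests, which is the same effect as the column contribution weights in the *Global Least-Squares Objective* settings.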

The plot below depicts the fitted data against the measured data. Equal weights are assigned to both the uniaxial and equibiaxial least-squares fitting. It is clear that the Neo-Hookean material model with only one parameter is not a good fit here, as the test data is nonlinear and has one inflection point.

*Fitted material parameters using the Neo-Hookean model. Equal weights are assigned to both of the test data.*

Fitting the curves while specifying unequal weights for the two tests will result in a slightly different fitted curve. Similar to the Neo-Hookean model, we will set up global least-squares objectives corresponding to Mooney-Rivlin, Arruda-Boyce, Yeoh, and Ogden material models. In our calculation below, we will include cases for both equal and unequal weights.

In the case of unequal weights, we will use a higher but arbitrary weight for the entire equibiaxial data set. It is possible that you may want to assign unequal weights only for a certain stretch range instead of the entire stretch range. If this is the case, we can split the particular test case into parts, using a separate *Global Least-Squares Objective* branch for each stretch range. This will allow us to assign weights in correlation with different stretch ranges.

The plots below show fitted curves for different material models for equal and unequal weights that correspond to the two tests.

*Left: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. In these cases, equal weights are assigned to both test data. Right: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. Here, higher weight is assigned to equibiaxial test data.*

The Ogden material model with three terms fits both test data quite well for the case of equal weights assigned to both tests.

*Fitted material parameters using the Ogden model with three terms.*

If we only fit uniaxial data and use the computed parameters for plotting equibiaxial stress against the actual equibiaxial test data, we obtain the results in the plots below. These plots show the mismatch in the computed equibiaxial stress when compared to the measured equibiaxial stress. In material parameter estimation, it is best to perform curve fitting for a combination of different significant deformation modes rather than considering only one deformation mode.

*Uniaxial and equibiaxial stress computed by fitting model parameters to only uniaxial measured data.*

To find material parameters for hyperelastic material models, fitting the analytic curves may seem like a solid approach. However, the stability of a given hyperelastic material model may also be a concern. The criterion for determining material stability is known as *Drucker stability*. According to Drucker's criterion, the incremental work associated with an incremental stress should always be greater than zero. If the criterion is violated, the material model will be unstable.

In this blog post, we have demonstrated how you can use the *Optimization* interface in COMSOL Multiphysics to fit a curve to multiple data sets. An alternative method for curve fitting that does not require the *Optimization* interface was also a topic of discussion in an earlier blog post. Just as we have used uniaxial and equibiaxial tension data here for the purpose of estimating material parameters, you can also fit the measured data to shear and volumetric tests to characterize other deformation states.

For detailed step-by-step instructions on how to use the *Optimization* interface for the purpose of curve fitting, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

Advanced composites are used extensively throughout the Boeing 787 Dreamliner, as shown in the diagram below. Also known as carbon fiber reinforced plastic (CFRP), the composites are formed from a lightweight polymer binder with dispersed carbon fiber filler to produce materials with high strength-to-weight ratios. Many wing components, for example, are made of CFRP, ensuring that they can support the load imposed during flight while minimizing their overall contribution to the weight of an aircraft.

*Advanced composites are used throughout the body of the Boeing 787. Copyright © Boeing.*

Despite their remarkable strength and light weight, CFRPs are generally not conductive like their aluminum counterparts, thus making them susceptible to lightning strike damage. Therefore, electrically conductive expanded metal foil (EMF) is added to the composite structure layup, shown in the figure below, to dissipate the high current and heat generated by a lightning strike.

*The composite structure layup shown at left consists of an expanded metal foil layer shown at right. This figure is a screenshot from the COMSOL Multiphysics® software model featured in this blog post. Copyright © Boeing.*

The figure also shows the additional coatings on top of the EMF, which are in place to protect it from moisture and environmental species that cause corrosion. Corrosive damage to the EMF could result in lower conductivity, thereby reducing its ability to protect aircraft structures from lightning strike damage. Temperature variations due to the ground-to-air flight cycle can, however, lead to the formation of cracks in the surface protection scheme, reducing its effectiveness.

During takeoff and landing, aircraft structures are subjected to cooling and heating, respectively. Thermal stress manifests as the expansion and compression — or ultimately the displacement — of adjacent layers throughout the depth of the composite structure. Although a single round-trip is not likely to pose a significant risk, over time, each layer of the composite structure contributes to fatigue damage buildup. Repetitive thermal stress results in cumulative strain and higher displacements, which are, in turn, associated with an increased risk of crack formation. The stresses in a material depend on its mechanical properties quantified by measurable attributes such as yield strength, Young’s modulus, and Poisson’s ratio.
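To get a feel for the magnitudes involved, a classic thin-film estimate gives the equibiaxial stress in a fully constrained layer as \sigma = E \Delta\alpha \Delta T / (1-\nu). The property values below are illustrative placeholders of our own, not Boeing's measured data:

```python
def constrained_thermal_stress(E, nu, d_alpha, dT):
    """Equibiaxial stress in a thin layer whose in-plane thermal
    expansion is fully constrained by the substrate."""
    return E * d_alpha * dT / (1.0 - nu)

# Illustrative placeholder values (not Boeing data): a copper-like
# layer, a CTE mismatch of 1e-5 1/K, and a 100 K temperature swing.
sigma = constrained_thermal_stress(E=110e9, nu=0.34, d_alpha=1e-5, dT=100.0)
```

Even this rough estimate lands in the hundreds of megapascals, which illustrates why the CTE mismatch between layers matters for crack formation.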

By taking the thermal and mechanical properties of materials into account, it is possible to use simulation to design and optimize a surface protection scheme for aircraft composites that minimizes stress, displacement, and the risk of crack formation.

Evaluating the thermal performance of each layer in the surface protection scheme is essential in order to reduce the risks and maintenance costs associated with damage to the protective coating and EMF. Therefore, researchers at Boeing Research & Technology (BR&T), pictured below, are using multiphysics simulation and physical measurements to investigate the effect of the EMF design parameters on stress and displacement throughout the composite structure layup.

*The research team at Boeing Research & Technology from left to right: Patrice Ackerman, Jeffrey Morgan, Robert Greegor, and Quynhgiao Le. Copyright © Boeing.*

In their work, the researchers at BR&T have developed a coefficient of thermal expansion (CTE) model in COMSOL Multiphysics® simulation software. The figure shown above that presents the composite structure layup and EMF is a screenshot acquired from the model geometry used for their simulations in COMSOL Multiphysics.

The CTE model was used to evaluate heating of the aircraft composite structure as experienced upon descent, where the final and initial temperatures used in the simulations represent the ground and altitude temperatures, respectively. The *Thermal Stress* interface, which couples heat transfer and solid mechanics, was used in the model to simulate thermal expansion and solve for the displacement throughout the structure.

The material properties of each layer in the surface protection scheme as well as of the composites are custom-defined in the CTE model. The relative values of the coefficient of thermal expansion, heat capacity, density, thermal conductivity, Young’s modulus, and Poisson’s ratio are presented in the chart below.

*This graph presents the ratio of each material parameter relative to the paint layer. Copyright © Boeing.*

From the graph, trends can be identified that provide early insight into the behavior of the materials, which aids in making design decisions. For example, the paint layer is characterized by higher values of CTE, heat capacity, and Poisson’s ratio, thus indicating that it will undergo compressive stress and tensile strain upon heating and cooling.

Multiphysics simulation takes this predictive design capability one big step forward by quantifying the resulting displacement due to thermal stress throughout the entire composite structure layup simultaneously, taking into account the properties of all materials. The following figure shows an example of BR&T’s simulation results and presents the stress distribution and displacement throughout the composite structure.

*Left: Top-down and cross-sectional views of the von Mises stress and displacement in a one-inch square sample of a composite structure layup. Right: Transparency was used to show regions of higher stress, in red. Lower stress is shown in blue. Copyright © Boeing.*

In the plots at the left above, the displacement pattern caused by the EMF is evident through the paint layer at the top of the composite structure, while a magnified cross-sectional view shows the variations in displacement above the mesh and voids of the EMF. The cross section also makes it easy to see the stress distribution through the depth of the composite structure, where there is a trend toward lower stress in the topmost layers. Transparency was used in the plot at the right to depict the regions of high stress in the composites and EMF; the stress is noticeably higher at the intersections of the mesh wires. Stress was plotted through the depth of the composite structure layup along the vertical red line shown in the center of the plot. The figure below shows the relative stress in each layer of the composite structure layup for different metallic compositions of the EMF.

*Relative stress in arbitrary units was plotted through the depth of the composite structure layups containing either aluminum (left) or copper EMF (right). Copyright © Boeing.*

The samples differ in that a fiberglass corrosion isolation layer is included when aluminum is used as the material for the EMF. The fiberglass acts as a buffer, resulting in lower stress in the aluminum EMF than in the copper one.

From lightning strike protection to the structural integrity of the composite protection scheme, it all relies on the design of the expanded metal foil layer. The design of the EMF layer can vary by its metallic composition, height, width of the mesh wire, and the mesh aspect ratio. For any EMF design parameter, there is a trade-off between current-carrying capacity, displacement, and weight. By using the CTE model, the researchers at BR&T found that increasing the mesh width and decreasing the aspect ratio are better strategies for increasing the current-carrying capacity of the EMF that minimize its impact on displacement in the composite structure.

The metal chosen for the EMF can also have a significant effect on stress and displacement in the composite structure, which was investigated using simulation and physical testing. Two composite structures, one with aluminum and the other with copper EMF, underwent thermal cycling with prolonged exposure to moisture in an environmental test chamber. In the results, shown below, the protective layers remained intact for the composite structure with copper EMF. However, for the layup with aluminum, cracking occurred in the primer, at the edges, on surfaces, and was particularly substantial in the mesh overlap regions.

*Photo micrographs of the composite structure layup after exposure to moisture and thermal cycling. A crack in the vicinity of the aluminum EMF is contained within the red ellipse. Copyright © Boeing.*

Simulations confirm the experiment results. Shown below, displacements are noticeably higher throughout the composite structure layup when aluminum is used for the EMF layer, where higher displacements are associated with an increased risk for developing cracks. The higher displacement is easiest to observe in the bottom plots, which show displacement ratios for each EMF height.

*Effect of varying the EMF height on displacement in each layer of the surface protection scheme. Copyright © Boeing.*

The larger displacements caused by the aluminum EMF can be attributed in part to its higher CTE when compared with copper, which exemplifies how important the properties of materials are to the thermal stability of the aircraft composite structures.

In the early design stages and along with experimental testing, multiphysics simulation offers a reliable means to evaluate the relative impact of the EMF design parameters on stress and displacement throughout the composite structures. An optimized EMF design is essential to minimizing the risk of crack formation in the composite surface protection scheme, which reduces maintenance costs and allows the EMF to perform its important protective function of mitigating lightning strike damage.

Refer to page 4 of *COMSOL News* 2014 to read the original article, “Boeing Simulates Thermal Expansion in Composites with Expanded Metal Foil for Lightning Strike Protection of Aircraft Structures”.

This article was based on the following publicly available resources from Boeing:

- The Boeing Company. “787 Advanced Composite Design.” 2008-2013.
- J.D. Morgan, R.B. Greegor, P.K. Ackerman, Q.N. Le, “Thermal Simulation and Testing of Expanded Metal Foils Used for Lightning Protection of Composite Aircraft Structures,” SAE Int. J. Aerosp. 6(2):371-377, 2013, doi:10.4271/2013-01-2132.
- R.B. Greegor, J.D. Morgan, Q.N. Le, P.K. Ackerman, “Finite Element Modeling and Testing of Expanded Metal Foils Used for Lightning Protection of Composite Aircraft Structures,” Proceedings of 2013 ICOLSE Conference; Seattle, WA, September 18-20, 2013.

To learn more about adding material property data to your COMSOL Multiphysics® simulations, read the following blog post series on *Obtaining Material Data for Structural Mechanics Simulations from Measurements* by my colleague Henrik Sönnerlind:

General information about aircraft design and structures can be found in chapter 1 of this handbook on aircraft maintenance from the Federal Aviation Administration.

*BOEING, Dreamliner, and 787 Dreamliner are registered trademarks of The Boeing Company Corporation in the U.S. and other countries.*

Let’s begin with a quick review. When solids are exposed to a humid environment, many of them absorb water molecules. The absorption and storage of these molecules can cause the solid to swell, exposing it to increased stresses and strains. This effect is known as *hygroscopic swelling*.

Hygroscopic swelling is a phenomenon that occurs in various sectors of industry, from wood construction and paper to electronics and food processing. Whether an expected behavior or an undesirable effect, it must be modeled accurately in order to quantify its effects.

The Hygroscopic Swelling feature in COMSOL Multiphysics enables you to do exactly that. Available as a subnode for most material models in the structural mechanics interfaces, this feature allows you to analyze the effect of moisture concentrations within the solids, such as resulting deformations and stresses.

*The user interface (UI) of the Hygroscopic Swelling feature. The main inputs are colored and numbered.*

Using the above figure as our guide, we can now take a closer look at how this feature is used.

Hygroscopic swelling creates an inelastic strain that is proportional to the difference between the concentration and the strain-free reference concentration:

\epsilon_\textrm{hs}=\beta_\textrm{h} C_\textrm{diff}

where the coefficient of hygroscopic swelling \beta_\textrm{h} can be given in the material properties or directly in the node (Number 5 in the screenshot above). It does not have to be constant; it can depend on, for example, temperature or the moisture concentration itself.

In small deformation theory, the hygroscopic swelling contribution is additive — that is, the inelastic strain is the sum of the other inelastic strains and the hygroscopic strain. The coefficient of hygroscopic swelling is a second-order tensor, which can be defined as isotropic, diagonal, or symmetric. The expansion can thus be different in different directions. In wood, this effect is very pronounced.

In large deformation theory, available under the Hyperelastic Material model, the hygroscopic contribution is multiplicative — that is, the total deformation gradient tensor F is scaled by the hygroscopic stretch to form the elastic deformation gradient tensor F_\textrm{e} :

\begin{array}{ll}

\epsilon=\frac{1}{2} \left( F_\textrm{e}^\textrm{T}F_\textrm{e}-I \right) & F_\textrm{e}=F J_\textrm{hs}^{-1/3}

\\

J_\textrm{hs}= \left(1+\beta_\textrm{h} C_\textrm{diff} \right)^3

\end{array}


In this case, the coefficient of hygroscopic swelling is isotropic, so only uniform volumetric expansion is taken into account.
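The small-strain relation \epsilon_\textrm{hs} = \beta_\textrm{h} C_\textrm{diff} and the large-deformation volume ratio J_\textrm{hs} can be sketched as follows. This is a minimal illustration with made-up numbers, not COMSOL's internal implementation:

```python
def hygro_strain(beta_h, C, C_ref):
    """Small-strain hygroscopic swelling: eps_hs = beta_h*(C - C_ref)."""
    return beta_h * (C - C_ref)

def hygro_volume_ratio(beta_h, C, C_ref):
    """Large-deformation swelling volume ratio J_hs = (1 + beta_h*C_diff)**3.
    The elastic deformation gradient is then F_e = F * J_hs**(-1.0/3.0)."""
    return (1.0 + beta_h * (C - C_ref))**3

# Example: a moisture uptake of 50 kg/m^3 with beta_h = 2e-4 m^3/kg
# gives a 1% swelling strain and a volume ratio slightly above 1.03.
eps = hygro_strain(2e-4, 50.0, 0.0)
J_hs = hygro_volume_ratio(2e-4, 50.0, 0.0)
```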

Hygroscopic swelling has two types of effects. When applied to free structures, it induces deformations. When applied to fully constrained structures, deformation is prevented, causing the stress inside the structure to increase. In real structures, which are often partially constrained, the effect is a mixture of these two behaviors.

*Example of a free solid (left column) and a fully constrained solid (right column) subjected to hygroscopic swelling with a constant moisture concentration. The first row shows roller constraints applied on each solid. Plotted results are the displacement field in the second row and von Mises stress in the third row. The free solid is only constrained by two roller conditions, which enables the solid to expand and completely release the stress. On the contrary, the solid constrained with roller conditions all around it shows no displacement but encounters an increase in stress.*

Depending on the selected moisture concentration type (2), the concentration is defined either as a mass concentration ( C_\textrm{mo} and C_\textrm{mo,ref}) or a molar concentration ( c_\textrm{mo} and c_\textrm{mo,ref}). As C_\textrm{diff} is the mass concentration difference, the molar mass M_\textrm{m} must also be specified (4) when molar concentration is used as the input. The default value for M_\textrm{m} is the molar mass of water, 0.018 \; \textrm{kg}/\textrm{mol}.

\epsilon_\textrm{hs}=\beta_\textrm{h} M_\textrm{m} \left(c_\textrm{mo}-c_\textrm{mo,ref} \right) for molar concentration

\epsilon_\textrm{hs}=\beta_\textrm{h}\left(C_\textrm{mo}-C_\textrm{mo,ref} \right) for mass concentration

The concentration (1) can either be user-defined or computed by another physics interface. As with any input in COMSOL Multiphysics, user-defined values can be a function of other variables, such as the space coordinates X, Y, and Z.

*On the left: User-defined, space-dependent moisture concentration. On the right: Displacement induced by hygroscopic swelling. The top face, where the concentration is highest, shows the largest displacement.*

The strain-free reference concentration (3) is the moisture concentration at which hygroscopic swelling has no effect. It can often be interpreted as an initial state, or the ex-factory moisture concentration. A moisture concentration higher than the reference concentration represents moistening and causes the solid to expand. A moisture concentration lower than the reference concentration represents drying and causes the solid to shrink.

*Left: Displacement with zero strain reference concentration. Right: Displacement with nonzero strain reference concentration. The applied concentration, which is the same in both cases but lower than the strain reference concentration, implies shrinkage of the solid.*

Often, the moisture concentration within a solid is unknown and has to be computed with a preceding simulation. You can compute the concentration with the *Transport of Diluted Species* interface or the *Transport of Diluted Species in Porous Media* interface. Such an approach is used in our MEMS Pressure Sensor Drift due to Hygroscopic Swelling example, new with COMSOL Multiphysics version 5.1.

One way to feed the computed concentration to the *Solid Mechanics* interface is to specify the desired concentration variable in the combo box of the Hygroscopic Swelling feature. There is, however, an even simpler approach.

In version 5.1, you can use a multiphysics coupling, which becomes available when at least one solid mechanics interface and one transport physics interface are present in the model tree. With this coupling feature, you simply specify which transport interface the concentration derives from and which solid mechanics interface hygroscopic swelling is applied to. You will also need to set the reference concentration, the molar mass, and the coefficient of hygroscopic swelling for all of the selected domains. When using the multiphysics coupling, you do not need to add any hygroscopic swelling subnodes to the material models.

*Selecting the participating physics interfaces in the Multiphysics Coupling node for hygroscopic swelling.*

*Left: Moisture concentration computed in the* Transport of Diluted Species interface. *Right: Displacement resulting from hygroscopic swelling.*

In the *Beam*, *Shell*, and *Plate* interfaces, the moisture concentration input is partitioned into an average concentration on the center line or midsurface, and a concentration gradient in the transverse direction(s). The latter causes the structure to bend.

The input for hygroscopic swelling in the *Beam* interface contains the concentration gradients in the local *y-* and *z-*directions. In the *Shell* and *Plate* interfaces, it contains a concentration difference between the top and bottom faces.

*Hygroscopic bending in a 2D* Beam *interface.*

*On the left: Moisture concentration. On the right: Resulting displacement. In the solid, bending is caused by the nonuniform expansion, which is higher on the top face than the bottom face. In the beam, the bending caused by the same effect is captured using the moisture gradient c_{\textrm{g}y}. In both plots, the solid model is placed above the beam model.*

When the “Include moisture as added mass” checkbox is marked (6), the weight of the water that is absorbed or released by the solid will have an effect on the mass-dependent phenomena, such as gravity or rotating frame loads. It will also have an effect on inertial terms in time-dependent or frequency domain studies.

*On the left: Displacement of two bars analyzed with a frequency sweep when one of them is subjected to hygroscopic swelling. On the right: Frequency response of the two bars. The water absorbed during hygroscopic swelling increases the mass and decreases the resonance frequency.*

The total mass, including the water mass uptake, can be calculated in a Mass Properties node under *Definitions*. The mass variable can then be used in postprocessing for comparison with the measured mass of the solid — a convenient way to evaluate the moisture concentration in real life.

*Screenshot of the Mass Properties node.*

Taking hygroscopic swelling into consideration is important in the design of many devices. By analyzing how different materials respond to that effect, you can optimize your design so as to prevent the failure of components and to ensure the device’s intended operation. Here, we have demonstrated how the hygroscopic swelling functionality in COMSOL Multiphysics can be a valuable tool for such an analysis. With the Hygroscopic Swelling feature, you can quantify the effects of hygroscopic swelling in a way that is both accurate and efficient.

- Download the tutorial model: MEMS Pressure Sensor Drift due to Hygroscopic Swelling
- Read more about the new multiphysics coupling feature for hygroscopic swelling on our COMSOL Multiphysics 5.1 release highlights page

In my previous role as a structural analysis consultant, I sometimes came across the problem of how to report ridiculously high stress peaks in a finite element model to a customer. Experienced analysts know when stress peaks are an expected effect of modeling and can be safely ignored. However, when a requirement that “the stress must nowhere exceed 70% of the yield stress” has been stated, this may still turn out to be an issue. Equally important is the fact that the small red spots in the color plots cannot always be ignored. Thus, we must have appropriate techniques for interpreting the model results.

Sharp reentrant corners will cause a singularity in the derivatives of the dependent variables for all elliptic partial differential equations. In structural mechanics, this means that the strains can become unbounded since the degrees of freedom are the displacements. Unless limited by the material model, the stresses will also be infinite in such a case.

Stresses are investigated in the majority of structural mechanics analyses. This is why singularities present more of an issue in structural mechanics than in most other physics fields. In heat transfer analyses, for instance, you are much more likely to be interested in the temperature than in the local values of the heat flux, which is the quantity in which a singularity would become evident.

Let’s have a look at a prototype problem. This problem involves a 2-meter-by-1-meter rectangular plate with a square cutout, 0.2 meters on a side, that is subjected to pure tension:

*The plate is constrained along the left edge and has a uniform load along the right edge.*

With two different meshes around the hole, the default plots of the effective stress look completely different. Since the peak stress is twice as high in the model with the finer mesh, the automatic color range washes out most details in the stress field. This can of course be remedied by manually adjusting the range of the plots, but important details may still be hidden at first glance.

*The same effective stress field in two plots. Both plots are automatically scaled by the mesh-dependent peak stress.*

In fact, the smaller the elements that are used in the corner, the higher the values of stress that will be found. The results will not converge since the “true” solution tends toward an infinite value.

*Stress at the corner as a function of element size (logarithmic horizontal axis).*
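The strength of a corner singularity can actually be quantified analytically. For a corner with traction-free faces, Williams' classical characteristic equation for the symmetric (mode I) case reads sin(λφ) + λ sin(φ) = 0, where φ is the angle occupied by the material and the stresses behave as r^(λ−1) near the corner. The sketch below (plain Python, simple bisection) solves this equation for a crack (φ = 360°) and for the 90° reentrant corner of the cutout (φ = 270°):

```python
import math

def smallest_singular_exponent(material_angle):
    """Smallest root lambda in (0, 1) of Williams' mode I characteristic
    equation sin(lambda * phi) + lambda * sin(phi) = 0, where phi is the
    angle occupied by the material. Stresses behave as r**(lambda - 1)."""
    phi = material_angle
    f = lambda lam: math.sin(lam * phi) + lam * math.sin(phi)
    lo, hi = 0.3, 0.999  # brackets the first root for reentrant corners
    for _ in range(200):  # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam_crack = smallest_singular_exponent(2.0 * math.pi)   # crack: lambda = 0.5
lam_corner = smallest_singular_exponent(1.5 * math.pi)  # 90-degree reentrant corner
print(lam_crack, lam_corner)  # about 0.500 and 0.544
```

For the 90° reentrant corner, λ ≈ 0.544, so the stress grows like r^(−0.456) toward the corner. This also predicts the slope of a mesh-refinement curve like the one above: halving the element size raises the reported peak stress by roughly 2^0.456 ≈ 1.4.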

If we investigate the stress field close to the hole, we will find that the stress peak is very localized. In the figure below, the stress is plotted along a vertical cut line drawn at a distance of 0.05 meters from the hole. At this distance, the stress is virtually unchanged, even though the peak stress at the corner varies by a factor of two.

*Stress variation along a cut line (represented in red). Five different mesh sizes are used.*

In the real world, there are seldom perfectly sharp corners. Thus, you could argue that by using an accurate geometry representation containing all fillets, it is possible to avoid singularities. While true, this comes with a price tag. If very small geometrical details must be resolved by the mesh, the model grows enormously in size, especially in 3D. Even when a perfect CAD geometry is available, it is common practice to *defeature* the geometry to remove small details that are not important within the scope of the analysis. Therefore, in many cases, we actually deliberately introduce sharp corners at the preprocessing stage.

There are, however, some drawbacks to keeping the sharp corner:

- If the material model is nonlinear, there may be numerical problems at the singularity. For example, the strain rate predicted by a creep model is often proportional to a high power of stress. The high stress at the singularity (a value determined only by the mesh) raised to a power of five may result in strain rates so high that the time stepping is forced to be on the order of milliseconds, when you actually want to study an event taking place over months. If you still want to keep the sharp corner, the remedy here is to enclose the singularity in a small elastic domain.
- Adaptive meshing, error estimates, and the like can fail since the singularity will dominate over the rest of the solution. Exclude the corner from any such procedures.
- When running an optimization where stresses are part of the problem formulation, the singularity will lead to solutions that are optimal only in terms of reducing the amplitude of the unphysical peak stress. In the Multistudy Optimization of a Bracket tutorial, the region where the bracket is bolted is excluded from the search for a maximum stress.
- As previously noted, the high stress peaks tend to obscure more interesting features in the solution, both visually and psychologically.
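The creep argument in the first bullet is easy to quantify. With a Norton-type law, the strain rate is proportional to the stress raised to the exponent n, so a mesh-dependent overestimate of the peak stress is strongly amplified. A minimal sketch (all numbers illustrative, not from any particular material):

```python
# Norton creep law: strain_rate = A * sigma**n (A and n are illustrative)
A = 1e-48            # creep coefficient, 1/(s * Pa^n), hypothetical value
n = 5.0              # stress exponent, a typical magnitude for metals
sigma_true = 200e6   # a realistic local stress, Pa
sigma_mesh = 400e6   # mesh-dependent singular peak, twice as high

rate_true = A * sigma_true**n
rate_mesh = A * sigma_mesh**n
ratio = rate_mesh / rate_true
print(ratio)  # doubling the stress multiplies the creep rate by 2**5 = 32
```

A factor of 32 in the local strain rate is what forces the time-stepping algorithm to take correspondingly smaller steps near the singularity.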

Physically, if the corner is very sharp, the material will be damaged by the high strains. A brittle material may crack; a ductile material may yield. While it may sound alarming, such damage will only cause a local redistribution of the stresses in most cases. As seen from the perspective of the surrounding structure, the effect is no more dramatic than that of somewhat changing the fillet radius. High, very localized stresses will only be a true problem if the loading is cyclic, which creates a risk for fatigue.

In a building, nobody is concerned that the holes for windows and doors are rectangular with sharp corners. But, in an airliner, you will find that the windows are smoothly rounded since the variation between the pressure in the cabin and the pressure outside will provide a cyclic stress history.

*Left: A rectangular window featuring sharp corners. Image by Jose Mario Pires. Licensed under CC BY-SA 4.0 via Wikimedia Commons. Right: A window with smoothly rounded corners. Image by Orin Zebest. Licensed under CC BY-SA 2.0 via Wikimedia Commons.*

This is in fact recognized by many design standards, where high local stresses are allowed as long as the loads are static. The local corner stresses will not in any way affect the load-bearing capacity of the structure. Using this type of approach does rely on a systematic way of classifying the stress fields. Such methods are, for example, described in the *ASME Boiler & Pressure Vessel Code*.

For cyclic loads, on the other hand, it is important to obtain very accurate stress values. The fatigue life depends strongly on the stress amplitude. In this case, an accurate representation of the fillet is necessary, not only geometrically but also in terms of mesh resolution. If the model becomes too large to handle, you can use *submodeling*, an approach that is described in detail in this blog post.

*The detailed submodel on the right is driven by the results from the global analysis.*

Tip: To further explore the submodeling technique, download the Submodeling Analysis of a Shaft tutorial from our Application Gallery.

A force applied to a single point on a solid will locally give infinite stresses. This is the classical *Boussinesq-Cerruti problem* in the theory of elasticity, where the stresses vary as the inverse of the square of the distance from the loaded point.

In the real world, point loads do not exist. The force is always distributed over a certain area. From the finite element analysis perspective, the question is whether it is worth the effort to resolve this small region. The answer lies in *Saint-Venant's principle*, which states that all statically equivalent load distributions give the same result at a distance that is large compared to the size of the loaded area.

Thus, when detailed results are not important within a distance of, say, three times the size of the loaded area, it actually does not matter how you apply the loads, so long as the resulting force and moment are correct. Just as in the case with the corner singularity, you may still need to avoid the effects of singular stresses. Note that line loads will have the same effect as a point load in causing local infinite stresses.
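Both points can be illustrated with the Boussinesq solution for a point load P applied normal to a half-space, where the vertical stress is σ_z = 3Pz³/(2πR⁵), with R the distance from the load. The sketch below (plain Python, all numbers illustrative) replaces one concentrated load with four statically equivalent loads on a small patch and checks how quickly the difference dies out with depth:

```python
import math

def sigma_z(P, dx, dy, z):
    """Boussinesq vertical stress at depth z and horizontal offset (dx, dy)
    from a point load P applied normal to the surface of a half-space."""
    R = math.sqrt(dx**2 + dy**2 + z**2)
    return 3.0 * P * z**3 / (2.0 * math.pi * R**5)

P = 1000.0  # total load, N (illustrative)
a = 0.01    # size of the loaded patch, m (illustrative)

# Four statically equivalent loads of P/4 at the corners of the patch
corners = [(s * a / 2, t * a / 2) for s in (-1, 1) for t in (-1, 1)]

rel_diffs = []
for z in (a, 3 * a, 10 * a):  # evaluation depths below the patch center
    concentrated = sigma_z(P, 0.0, 0.0, z)
    distributed = sum(sigma_z(P / 4, x0, y0, z) for (x0, y0) in corners)
    rel_diffs.append(abs(concentrated - distributed) / concentrated)
    print(f"z = {z:5.3f} m: relative difference {rel_diffs[-1]:.1%}")
```

At a depth equal to the patch size, the two load descriptions disagree strongly; by ten patch sizes, the difference has dropped to about a percent, which is Saint-Venant's principle in action.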

It is worth noting that a point load applied to a beam element or perpendicular to a shell will *not* induce a singularity. The bending of these structural elements is governed by different equations than those of solid mechanics. However, a point load applied in the plane of a shell *will* cause a singularity.

If we think of a constraint in terms of its capability to apply a reaction force, it is evident that the conclusions drawn for loads carry over: a constraint applied to a single point, for example, causes a singularity just as a point load does. But that is not all. Consider the seemingly symmetric problem below. Here, we have a plate with a constant tensile load on one side and corresponding roller conditions on the other side.

*A square plate with one half of the vertical boundaries constrained and loaded.*

When looking at the stress distribution, it is apparent that the end of the roller condition introduces a singularity that the sudden change in the load does not. A general observation is that the end of a constraint has an effect that is similar to that of a sharp corner.

*Horizontal stress distribution.*

An infinitely stiff environment supporting the structure does not exist in reality. The analyst is again left with a choice: Can I live with the little red spot, or do I need to pay more attention to what is outside of my structure?

If the singularity caused by the boundary condition is not acceptable, you could consider the following approaches:

- Extend the model so that any singularity caused by the boundary condition is moved outside of the area of interest.
- Use a softer boundary condition by applying a Spring Foundation condition, for instance.
- Use infinite elements, which offer a cheap method for extending the computational domain. Learn more with this tutorial.

Situations similar to the one mentioned above are inevitable in many kinds of transitions. An example of such a transition is connecting a rigid domain to a flexible domain.

The art of analyzing welds is so important and complex that it warrants its own blog post. Here, we will only briefly touch on this subject.

Welded structures often consist of thin plates, so it is natural to use shell models in this context. Let’s have a look at the model below. In this example, a stress concentration is evident in the area where the smaller plate is welded to the wide plate.

*Stresses in a simple shell model of two plates welded together.*

The geometry and loads are symmetric with respect to the center of the geometry. The mesh in this model, however, is designed so that it is much finer at one end of the weld. A graph of the stress along the weld line reveals a singularity in the stress field in both plates.

*A stress plot identifying a singularity.*

For many welded structures — ship hulls, cargo cranes, and truck frames — dimensioning against fatigue is important. Refining the modeling process by using a solid model is seldom the answer here. The local geometry and quality of a weld are rarely well defined, unless the weld has been ground and X-rayed. The local geometry will differ along the weld and between the corresponding welds on two items that nominally should be identical.

When analyzing welds, the most common approach is to average the stress along the weld line or along a parallel line a certain distance away. The cut lines in COMSOL Multiphysics are particularly helpful here. The local coordinate systems also come in handy since stress components parallel and normal to the weld need to be treated differently. These averaged stresses are then compared with handbook values, which are available for a number of weld configurations and weld qualities. To learn more, see *Eurocode 3: Design of steel structures — Part 1-9: Fatigue*.
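The averaging step itself is straightforward once the stress has been sampled along a cut line. A minimal sketch (plain Python; the sampled profile is made up for illustration) computes the length-weighted average of a stress component with the trapezoidal rule, a quantity that is far less mesh sensitive than the peak value:

```python
def line_average(s, sigma):
    """Length-weighted average of stress samples sigma at positions s
    along a cut line, using the trapezoidal rule."""
    total = 0.0
    for i in range(len(s) - 1):
        total += 0.5 * (sigma[i] + sigma[i + 1]) * (s[i + 1] - s[i])
    return total / (s[-1] - s[0])

# Illustrative samples along a 0.1 m weld line: a nominal stress of
# 80 MPa with a localized, mesh-dependent peak at one end
s = [0.0, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1]          # position, m
sigma = [240e6, 110e6, 85e6, 80e6, 80e6, 80e6, 80e6]  # stress, Pa

avg = line_average(s, sigma)
print(avg / 1e6, "MPa")  # about 92 MPa: near the nominal level, not the 240 MPa peak
```

In COMSOL Multiphysics itself, the same quantity can be obtained with a line average operator on a cut line, with the stress tensor rotated into a weld-aligned coordinate system first.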

The worst conceivable geometrical singularity is the one caused by a crack. A crack can be seen as the limiting case of a reentrant corner, where the material occupies a full 360° around the tip, so many aspects of the corner singularity are also applicable here. When a crack is present in a finite element model, it is typically an area of focus within the study.

*The stress field around a crack tip, with the deformation scaled.*

The stress field around the crack tip is known from analytical solutions, at least for linear elasticity and plasticity under some assumptions. Computing the stress field through finite element analysis, however, can be difficult due to the singularity. Fortunately, it is usually not necessary to study the details at the crack tip. When determining the stress intensity factor, for example, you can use either the *J-integral* or *energy release rate* approach. These methods make use of global quantities far from the crack tip, so that the details at the singularity become less important.
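For the single edge crack under remote tension, for example, handbook formulas give the mode I stress intensity factor directly, and for linear elasticity, the J-integral equals the energy release rate, J = G = K_I²/E in plane stress. A sketch using the commonly quoted polynomial geometry factor (valid for roughly a/W ≤ 0.6; all input numbers illustrative):

```python
import math

def K_I_edge_crack(sigma, a, W):
    """Mode I stress intensity factor for a single edge crack of length a
    in a strip of width W under remote tension sigma, using the standard
    handbook polynomial for the geometry factor."""
    r = a / W
    f = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
    return f * sigma * math.sqrt(math.pi * a)

sigma = 100e6  # remote tension, Pa (illustrative)
a = 0.005      # crack length, m
W = 0.05       # strip width, m
E = 210e9      # Young's modulus, Pa (steel)

K = K_I_edge_crack(sigma, a, W)  # about 14.8 MPa*sqrt(m)
J = K**2 / E                     # J-integral = energy release rate, plane stress
print(K / 1e6, "MPa*sqrt(m),", J, "J/m^2")
```

Note that nothing in this evaluation depends on resolving the singular field at the tip, which is exactly why the J-integral and energy release rate approaches are so robust in finite element models.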

Tip: Looking to explore the use of the J-integral approach in further detail? Consult the Single Edge Crack tutorial in our Application Gallery.

Singularities appear in many finite element models for a number of different reasons. As long as you understand how to interpret the results and how to circumvent some of the consequences, the presence of singularities should not be an issue in your modeling. In fact, many industrial-size models require the intentional use of singularities. Keeping down model size and analysis time often necessitates simplification of geometrical details, loadings, and boundary conditions in a way that introduces singularities.
