In mechanical vibration theory, *vibration nodes* are defined as the points that never move when a wave passes through them. The impact of a ball creates a wave that causes the racket, in turn, to oscillate and vibrate. By looking at the mode shapes of the racket — held by a player at the end of the grip — we can identify points where the vibration motion is zero (i.e., where the magnitude is zero at any time during vibration). Here are the first three mode shapes of a tennis racket computed with COMSOL Multiphysics:

*The first three mode shapes of a tennis racket, from left to right and top to bottom. The fundamental mode is at 15 Hz, the second mode is at 140 Hz, and the third mode is at 405 Hz.*

As illustrated above, many different points feature this behavior. So why speak as if there is only one vibration node? There is, in fact, an infinite number of them. Upon impact, the ball excites an infinite series of harmonics at different frequencies, all at the same time. But which vibration node is the “sweet spot”? Is it the vibration node of the fundamental mode shape, or a node that results from the crossing of different harmonics?

The fundamental mode vibration node cannot be the sweet spot for an obvious reason: It is located at the grip. Try hitting the ball with the grip to pass it over the net. If you are very lucky, you may succeed, but most likely, you won’t. The second vibration mode, meanwhile, has two nodes: one at the grip and one on the strings near the frame head. The latter is considered the sweet spot. Any player who hits the ball at this point will feel almost no vibration during impact.

There are, of course, vibration nodes on the strings for higher modes, as depicted in the third mode from the simulations above. However, as the natural frequency of the mode increases, the magnitude of the vibration drastically *decreases*. The graph below shows the frequency response of a sinusoidal load of 5 ms — approximately the duration of a ball’s impact upon hitting a racket — on a beam-like structure. For frequencies higher than 300 Hz, the magnitude is almost zero. That is, the influence of the third mode or higher is negligible. No matter where the ball strikes, even at points where the magnitude of the mode shape has reached its maximum, the higher modes will not have any influence at all because they are not excited.

*A plot showing the frequency response of a sinusoidal load of 5 ms.*
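To see why a short impulse barely excites the higher modes, we can inspect the spectrum of the pulse directly. Below is a minimal Python sketch (independent of the COMSOL model; the 100 kHz sampling rate and half-sine pulse shape are illustrative assumptions) that computes the magnitude spectrum of a 5 ms half-sine force pulse:

```python
import numpy as np

def pulse_spectrum(duration=0.005, fs=100_000.0, total_time=1.0):
    """Magnitude spectrum of a half-sine force pulse of the given duration."""
    t = np.arange(0.0, total_time, 1.0 / fs)
    f = np.zeros_like(t)
    mask = t < duration
    f[mask] = np.sin(np.pi * t[mask] / duration)  # half-sine impact pulse
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(f)) / len(t)
    return freqs, spectrum

freqs, spectrum = pulse_spectrum()
low = spectrum[freqs <= 50.0].max()    # spectral content near DC
high = spectrum[freqs >= 300.0].max()  # content at and above 300 Hz
print(high / low)  # small ratio: modes above ~300 Hz are barely excited
```

A half-sine pulse of duration T has its first spectral null at 3/(2T), which for T = 5 ms lands exactly at 300 Hz, consistent with the plot described above.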

When the ball hits the tennis racket near one end, with no other force acting on it, the racket will rotate about an axis toward the other end. As the point where the ball strikes the racket becomes closer to the center of mass, the distance from the axis of rotation will decrease. In a case where the ball hits the center of mass, the racket will translate without any rotation. The center of rotation is, from a mathematical point of view, at an infinite distance from the racket.

That said, it is possible to find an impact location, at a certain distance from the center of mass, that produces a center of rotation near the end of the grip where the player holds the equipment. Referred to as the *center of percussion (COP)*, this location is sometimes considered a sweet spot as well. Because the racket rotates about a point near the end of the grip, avoiding the player’s hands, no force is applied to them.

*Compared to older wooden tennis rackets from the 1970s, modern forms of this equipment feature a much larger head. This new design element has been used to move the center of percussion near the middle of the strings rather than by the racket’s frame. Image by CORE-Materials, via Wikimedia Commons.*

Let’s now take a quick look at what happens from a mechanical standpoint. For this purpose, we assume that the racket can be modeled as a rigid beam-like structure.

*Sketch of the beam-like structure. The parameters used in the following equations are defined in this figure.*

A force F applied to a free beam of mass M at a distance b from the center of mass implies that the center of mass translates at a speed V_{cm}. From Newton’s second law,

F=M \frac{d V_{cm}}{dt}

Moreover, a torque is generated by the force F about the center of mass:

Fb = I\frac{d\omega}{dt}

where I is the beam moment of inertia along the rotation axis and \omega is the angular velocity. Consider P, a point at a distance c from the center of mass. The speed v of this point is v=V_{cm}-c\omega, leading to:

\frac{dv}{dt}=(\frac{1}{M}-\frac{cb}{I})F

Since the center of rotation corresponds to the point where there is no translational acceleration, the COP is at a distance b_{cop} from the center of mass, which is given by

b_{cop}=\frac{I}{c_{cr}M}

where c_{cr} is the distance between the center of rotation and the center of mass. Given that the distance between the center of mass and the ideal center of rotation (at the grip end) is known, it is rather straightforward to determine the position of the COP for a particular racket shape.
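To make the formula concrete, here is a small Python sketch with illustrative numbers (the length, mass, and uniform-beam idealization are hypothetical, not taken from a real racket):

```python
def cop_distance(I, M, c_cr):
    """Distance b_cop from the center of mass to the center of percussion,
    given the moment of inertia I, mass M, and the distance c_cr between
    the center of rotation and the center of mass."""
    return I / (c_cr * M)

# uniform beam of length L and mass M, with the desired center of
# rotation at the grip end (c_cr = L / 2)
L, M = 0.7, 0.3        # hypothetical values: 0.7 m long, 0.3 kg
I = M * L ** 2 / 12    # moment of inertia about the center of mass
b_cop = cop_distance(I, M, L / 2)
print(b_cop)  # L / 6, on the head side of the center of mass
```

For a uniform beam, the COP sits at L/6 from the center of mass; a real racket’s head-heavy mass distribution shifts this point, which is exactly what the larger modern head designs exploit.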

The *power point*, sometimes called the third sweet spot, is the best bouncing point. In other words, this is where the ball achieves the most bounce upon contact. From a mathematical standpoint, the power point is defined as the point with the highest *coefficient of restitution (COR)*, the ratio of the ball’s rebound speed to its incident speed (in a drop test, this equals the square root of the ratio of the rebound height to the drop height). The coefficient of restitution is quite useful in the sense that it is the result of *all* of the design elements that affect the speed of the ball. Design engineers do not need to know the influence of each parameter, as the COR provides a combined overview of all of these factors.
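As a quick illustration of how the COR is measured in practice, here is a minimal Python sketch of a hypothetical drop test (the heights are made-up numbers, not measurements):

```python
import math

def coefficient_of_restitution(h_drop, h_rebound):
    """COR as the rebound-to-incident speed ratio. In a drop test the
    speeds follow v = sqrt(2 g h), so the COR equals the square root
    of the rebound-to-drop height ratio."""
    return math.sqrt(h_rebound / h_drop)

# hypothetical drop test: released from 1.0 m, rebounds to 0.64 m
print(coefficient_of_restitution(1.0, 0.64))  # 0.8
```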

The power point is located at the throat of the racket, near the center of mass. The closer the point is to the throat, the greater the stiffness and the lower the energy loss during racket deformation. When a ball hits a racket, the impact energy is divided into kinetic energy and elastic energy (energy of deformation) throughout the ball, racket, and strings. At the power spot, deformation is very small, causing the racket to give almost all of the kinetic energy back to the ball.

The power spot is very useful when returning a fast serve. Indeed, if you must return a fast serve, you do not have much time to move your racket and prepare your stroke, so you will return the ball as it comes. In this situation, the closer the ball strikes to the power spot, the better your return will be.

One last interesting spot on the racket that I’d like to mention is the *dead spot*. When a ball strikes the dead spot, the ball will not bounce at all. All of the ball’s energy is given to the racket and no energy is given back to the ball. This is due to the fact that the effective mass of the racket at the dead spot — usually close to the tip — is equal to the mass of the ball. Mechanically speaking, the ratio between the resulting force and the acceleration at the dead spot is equal to the mass of the ball.

To better understand the physical phenomena at hand, let’s imagine the ideal collision between a rigid ball at an initial speed V_0 and another rigid ball, initially at rest, that features the same mass m. The conservation of energy and the conservation of momentum lead to:

\frac{1}{2}mV_0^2 = \frac{1}{2}mV_1^2+\frac{1}{2}mV_2^2

mV_0=mV_1+mV_2

Therefore, it turns out that:

V_1=0 \ \text{and} \ V_2=V_0

If a ball collides with another ball that is of the same mass but at rest, the ball will stop dead and give all of its energy to the other ball. Thus, when a ball hits the dead spot of a racket at rest, the ball will not bounce at all. This would be a very bad spot to use when trying to return a serve. On the other hand, when you actively hit a stationary ball, as in your own serve, the dead spot provides an efficient transfer of momentum and energy from the racket to the ball.
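The algebra above can be checked with the general 1D elastic-collision formulas, which follow from the same two conservation laws (the 57 g ball mass below is only illustrative):

```python
def elastic_collision(m1, v0, m2):
    """Final speeds after a 1D elastic collision between ball 1 (initial
    speed v0) and ball 2 (initially at rest), derived from conservation
    of kinetic energy and momentum."""
    v1 = (m1 - m2) / (m1 + m2) * v0
    v2 = 2.0 * m1 / (m1 + m2) * v0
    return v1, v2

# equal masses: a 57 g ball striking an effective mass of 57 g
print(elastic_collision(0.057, 30.0, 0.057))  # ball 1 stops dead; ball 2 takes v0
```

Setting m1 = m2 makes the first term vanish and the second equal v0, reproducing V_1 = 0 and V_2 = V_0 above.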

Then, when it is your turn to serve, what is the optimal point? This is not only determined by the mathematics of sweet spots. In most cases, the answer would be rather close to the tip. Because of the way you move your arm, the racket will feature a significantly higher speed at the tip than at the throat. Thus, the optimal point is determined by a combination of high impact speed and good momentum transfer properties.

We have now gained insight into the physical meaning behind the three well-known sweet spots of a tennis racket. At the vibration node, the uncomfortable vibration that tennis players feel over their hand and arm is minimal. At the center of percussion, the initial shock to the player’s hand is also minimal. Lastly, at the power point, the ball rebounds with the maximum level of speed.

*The location of the sweet spots on a tennis racket.*

Perfect your game; check out these additional resources for improving your tennis skills:

- *Tennis Science for Tennis Players*, H. Brody
- “Physics of the tennis racket”, H. Brody
- “Physics of the tennis racket II: The ‘sweet spot’”, H. Brody
- “Physics of the tennis racket III: The ball-racket interaction”, H. Brody

Laboratory test results provide clinicians with important insight into the best treatment methods for their patients. However, there is often a period of waiting between the time of testing and when the results are made available — a time frame that can range from several hours to even days. With this waiting period comes delays in starting patients’ treatments, as the clinicians must obtain the test results before they can move forward in their assessments.

In recent years, there has been a healthcare industry shift from such conventional laboratory tests to rapid point-of-care testing. *Point-of-care testing* refers to simple medical tests that deliver fast results and can be conveniently performed in a variety of settings, from the home to a physician’s office. Along with increasing patients’ roles in managing their healthcare, these tests expedite clinicians’ decision-making processes, allowing them to suggest treatments more quickly. As noted in a 2010 report from the National Institutes of Health, accelerating this process can greatly enhance the delivery of healthcare as well as address issues relating to health disparities.

*A simple method for testing blood glucose. Image by Karl101 — Own work, via Wikimedia Commons.*

So what types of point-of-care tests are available today? Blood glucose testing, pregnancy testing, food pathogen screening, and cholesterol screening are some common examples. As technologies continue to grow, point-of-care testing is becoming a viable option for more and more laboratory-based clinical tests. Such is the case for polymerase chain reaction (PCR) tests.

*Polymerase chain reaction* is a technology that is designed to copy small segments of DNA with the goal of generating enough sequences to perform analyses. This method of DNA amplification relies on thermal cycling. As the particular DNA segment of interest is exposed to repeated cycles of heating and cooling, the molecule is amplified at an exponential rate.

The roots of PCR technology can be traced back to the work of H. Gobind Khorana and Kjell Kleppe in 1966. Through a process they deemed *repair replication*, a small synthetic DNA molecule was duplicated and then quadrupled via two primers (short nucleic acid sequences) and DNA polymerase (enzymes that create DNA molecules by assembling nucleotides).

Using this initial research as his foundation, Kary Mullis, an American biochemist, added repeated thermal cycling into the mix. Through this cycling process, DNA sequences could be rapidly copied, with the amplification becoming increasingly fast over time. Several years later, a thermostable DNA polymerase — *Taq polymerase* — was implemented within the process. This DNA polymerase automated the thermocycler-based process, removing the need for continuous handling throughout amplification.

*Top: PCR test tubes. Image by Madprime — Own work, via Wikimedia Commons. Bottom: Adding PCR test tubes to a thermal cycler. Image by Karl Mumm — Own work, via Wikimedia Commons.*

The polymerase chain reaction process can be broken down into a series of steps:

- The selected DNA molecule is exposed to heat to separate the double-stranded DNA molecule into single strands — a process known as *denaturation*. DNA must be separated into single strands in order to be copied.
- The reaction temperature is reduced so that the primers can anneal to their matching sequences on the initial DNA strand. The DNA polymerase then binds to the annealed primer.
- The DNA polymerase synthesizes a new strand of DNA, using the single-stranded DNA molecule and the primers as a template. The amplification process continues exponentially.
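The exponential character of the amplification is easy to quantify: with perfect efficiency, each thermal cycle doubles the number of target molecules. A short Python sketch (the starting copy number and efficiency are illustrative assumptions):

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Ideal PCR yield: each thermal cycle multiplies the number of
    target molecules by (1 + efficiency); efficiency = 1 means doubling."""
    return initial_copies * (1.0 + efficiency) ** cycles

# 10 starting copies, 30 cycles of perfect doubling
print(pcr_copies(10, 30))  # 10 * 2**30, roughly 1e10 copies
```

In practice the per-cycle efficiency is below 1 and eventually plateaus, which is why real protocols typically run a few tens of cycles.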

PCR tests have found use in a variety of medical and biological applications, including DNA cloning, the identification of genetic fingerprints, and the detection and diagnosis of infectious diseases. These tests typically take an hour or so to complete and utilize a conventional heater that is both expensive and requires a lot of power. As such, PCR tests have not been practical for point-of-care testing in the past. A recent design, however, could lead to new advancements.

Looking to advance the speed of conventional PCR tests, a team of researchers at UC Berkeley turned to the power of LEDs. In their experiments, the researchers used thin films of gold deposited onto a plastic chip featuring microfluidic wells. These wells, which the LEDs were positioned underneath, were designed to hold the PCR mixture with the sample of DNA. As part of their research, the team performed electromagnetic simulations using COMSOL Multiphysics to help define their geometry and material parameters.

With the LEDs in place, the electrons at the interface of the gold films and a DNA solution were successfully heated. Such behavior can be explained by *plasmonics*, which describes the interaction between light and free electrons on the surface of a metal. When exposed to light, free electrons become excited and begin to oscillate, causing heat to be generated. When the lights turn off, the oscillations stop and heat is no longer produced.
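This on/off heating behavior can be caricatured with a lumped-capacitance model. The sketch below is purely illustrative (the heat-input and cooling-rate parameters are invented and unrelated to the researchers’ electromagnetic simulations); it simply shows the temperature rising while the light source is on and relaxing toward ambient when it is off:

```python
def led_cycles(T_amb=25.0, q_on=400.0, k=2.0, dt=1e-3,
               t_on=0.5, t_off=0.5, n_cycles=3):
    """Lumped-capacitance sketch: while the LED is on, a constant heat
    input q_on (K/s) drives the film temperature up; with the LED off,
    the film relaxes toward ambient at rate k (Newton cooling)."""
    T = T_amb
    trace = []
    for _ in range(n_cycles):
        for _ in range(int(t_on / dt)):   # light on: heating dominates
            T += dt * (q_on - k * (T - T_amb))
            trace.append(T)
        for _ in range(int(t_off / dt)):  # light off: pure cooling
            T += dt * (-k * (T - T_amb))
            trace.append(T)
    return trace

trace = led_cycles()
print(max(trace))  # peaks far above ambient, then relaxes each off-phase
```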

The research team found that their plasmonic PCR system helped to speed up the thermal cycling process, generating test results within a matter of minutes. Comparing their design with conventional PCR tests, the researchers found that both methods compared well in their ability to amplify a sample of DNA. With its simple, low-cost design and ability to deliver fast results, this LED-based system could help bring PCR tests outside of the laboratory and into a variety of environments.

- *Berkeley News* article: “Heating and cooling with light leads to ultrafast DNA diagnostics”
- *Light: Science & Applications* article: “Ultrafast photonic PCR”

Biofuels are recognized as a valued source of renewable energy, with applications ranging from heating buildings to powering transportation. Increasing the availability of these fuels requires an understanding of the processes behind biomass conversion. With the help of COMSOL Multiphysics® simulation software, researchers at NREL are seeking to optimize such processes, making biofuel conversion more efficient and cost-effective.

Fossil fuels are at the core of our industrial society, helping to generate steam and electricity as well as power modes of transportation. As the industry continues to grow and the demands for energy increase, concern for depleting these nonrenewable energy sources as well as their impact on the environment also increases. In an effort to conserve such resources and mitigate the negative environmental effects associated with their mass consumption, many industries are looking to renewable energies, like biofuels, as a viable alternative.

Most recognized for their use as transportation fuel, *biofuels* are a renewable source of hydrocarbon liquid fuel produced mainly from plant materials, also referred to as *biomass*. Second generation biofuels circumvent the “food-vs-fuel” dilemma, as they are produced from agricultural waste and excess forest residues. Using a biomass source is particularly advantageous because it is a carbon-neutral process that reduces greenhouse gas emissions. However, due to the costs associated with the production process itself, renewable biofuels have yet to become economically competitive with fossil fuels.

Before optimizing biofuel production, Peter Ciesielski and fellow researchers at NREL knew that they would first need to obtain a better fundamental understanding of the dynamics behind the conversion processes. For that, they turned to COMSOL Multiphysics.

Let’s first take a closer look at the specific biofuel conversion process that the researchers analyzed — pyrolysis.

*Pyrolysis* is a thermochemical process that breaks down biomass through exposure to high temperatures in the absence of oxygen. During this process, the biomass is vaporized to form a mixture of compounds, called *bio-oil*, as well as additional gaseous products and char. With further refinement of the bio-oil, hydrocarbon biofuels are produced.

*Left: Preparing woody biomass for pyrolysis. Image credit: Warren Gretz, NREL 05756. Right: Accounting for physical processes in pyrolysis is an important step before creating a model. Image credit: Phil Shepherd, NREL 03677.*

*Fast pyrolysis* takes things one step further. Often used for woody biomass, this thermochemical conversion route utilizes an extremely high heat transfer rate to decompose the biomass, with internal temperatures reaching upwards of 500°C in a second. In this process, ensuring efficient heat and mass transfer is key, as this helps to minimize char formation and speed up favorable reactions.

*Peter Ciesielski of NREL acquires images of wood biomass for his research of fast pyrolysis.*

When Ciesielski and his colleagues first began developing their model of biomass, they made it a point to take internal microstructure into account — an element that had been ignored in previous research. “Since COMSOL® has geometry tools, physics, meshing, and solvers already implemented, we can spend more time making the biomass model geometry really accurate,” Ciesielski stated in a *COMSOL News* 2015 article.

*A close-up of the biomass particle model.*

With their structurally accurate model in place, the team set out to simulate heat and mass transfer within fast pyrolysis. In this process, the breakdown of biomass begins with the application of very high temperatures — about 500°C — to an oxygen-free reaction vessel for a couple of seconds. After implementing these conditions, the researchers used the *Conjugate Heat Transfer* interface to simulate heat transfer between the outer fluid domain, comprised of nitrogen gas, and the biomass particle. Through this analysis, they could identify how long it would take for a particle of a given size, shape, and structure to reach optimal decomposition temperatures.

*Simulating heat transfer in the biomass particle.*
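A back-of-the-envelope check on those heating times: heat penetrates a particle on a time scale of roughly r²/α, where α is the thermal diffusivity. In the sketch below, the particle size and diffusivity are rough assumed values for illustration, not NREL’s data:

```python
def conduction_time_scale(radius, diffusivity):
    """Characteristic time t ~ r^2 / alpha for heat to diffuse
    through a particle of the given radius."""
    return radius ** 2 / diffusivity

alpha_wood = 2e-7  # m^2/s, rough order of magnitude for dry wood
print(conduction_time_scale(0.5e-3, alpha_wood))  # ~1 s for a 0.5 mm particle
```

The seconds-scale estimate is consistent with the fast-pyrolysis conditions described above, and it shows why particle size, shape, and microstructure matter so much to the heat-up time.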

Additional simulations were performed to evaluate the diffusion of sulfuric acid, a chemical used to pretreat biomass before its conversion into biofuel. Using the *Transport of Diluted Species* interface, the researchers ran transient simulations of mass transport in the microstructure and solid particle geometries. The findings from both of the studies suggested that a detailed or *microstructured* model is useful in evaluating and optimizing biofuel conversion processes, as a simplified solid model may not adequately capture differences between biomass feedstocks.

Looking to further understand and improve biofuel production via fast pyrolysis, Ciesielski and his colleagues are planning to analyze rapid phase transitions and chemical reactions in future simulations. After obtaining a basic understanding of transport in biomass, the team hopes to run effective correlations with low-order models for a variety of process parameters and biomass feedstocks. These results can be used to optimize the performance of large-scale reactors designed to produce industrial quantities of biofuel.

What’s more, the research team at NREL was eager to discover that their biomass model is general enough to apply to any process that takes biomass as a feedstock. As such, the model provides an accurate foundation for modeling a variety of conversion processes and enhancing their efficiency. This has opened a lot of doors, paving the way for future advancements in biofuel development.

- Read the full story about NREL’s simulation research on page 31 of *COMSOL News* 2015
- Take a closer look at how NREL analyzed and developed biomass models in this journal article: “Biomass Particle Models with Realistic Morphology and Resolved Microstructure for Simulations of Intraparticle Transport Phenomena”

We have been interested in cloaking for years and have covered this topic in various ways in previous blog posts. Although there are many different types of cloaking, one common theme is how complex the phenomenon is to achieve mathematically (and physically…).

*An ideal cloak, modeled as a spherical shell with a smaller sphere inside. In this optical cloaking example, light waves bend around the smaller sphere, causing it to seem invisible.*

The concept starts with *metamaterials*. Metamaterials are artificial materials that depend on a certain structure and arrangement to work. *Cloaking* devices use these metamaterials to bend waves (such as thermal, electromagnetic, acoustic, and mechanical waves) around an object in order to hide or protect it.

Theoretically, different cloaking devices can perform different functions. For instance, electromagnetic cloaking can render things invisible to the human eye, while mechanical cloaking can hide an object from mechanical vibrations and stress. In reality, it’s not a simple task to cloak something — and this is especially true in structural mechanics. However, researchers are taking leaps forward in the realm of cloaking design.

For instance, you might recall reading about cloaking advancements for flexural waves in Kirchhoff-Love plates here on the COMSOL Blog. The research group that led this study overcame limitations that were previously associated with the cloaking of mechanical waves in elastic plates. They created a new theoretical framework for designing and building these invisibility cloaks and used COMSOL Multiphysics software to simulate and analyze the quality of their cloak.

More recently, researchers at the Karlsruhe Institute of Technology in Germany developed a very simple mathematical approach to cloaking based on a direct lattice transformation technique.

The team of scientists began by considering a 2D discrete lattice comprised of one material. Initially, an electrical analogy was studied, in which the lattice points within this structure were connected by resistors. This resistor network was designed to act as a discrete metamaterial, guiding the flow of electric current around a region and creating a cloak.

In the direct lattice approach, the lattice points of the structure were subjected to a coordinate transformation and the properties of the resistors were kept the same. Because the resistors and the connections between them were the same, the hole in the middle of the lattice and the distortion surrounding it could not be detected from the outside. Thus, in just one simple step, a cloak was successfully created.
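The coordinate transformation itself can be sketched in a few lines. The following Python snippet applies a radial push of the kind commonly used in transformation-based cloak designs (the exact mapping used by the KIT team may differ; this is an illustrative example, not their implementation):

```python
import math

def cloak_map(x, y, R1, R2):
    """Radial push transform: lattice points inside radius R2 are moved
    outward so that a hole of radius R1 opens up, while points at or
    beyond R2 stay fixed. Assumes (x, y) is not exactly the origin."""
    r = math.hypot(x, y)
    if r >= R2:
        return (x, y)  # outside the cloaked region: untouched
    r_new = R1 + r * (R2 - R1) / R2
    scale = r_new / r
    return (x * scale, y * scale)

print(cloak_map(0.5, 0.0, R1=0.2, R2=1.0))  # pushed outward to r = 0.6
print(cloak_map(2.0, 0.0, R1=0.2, R2=1.0))  # beyond R2: unchanged
```

Because the springs (or resistors) connecting the displaced points keep their original properties, an outside observer probing the lattice cannot detect the hole or the distortion around it.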

The research team’s initial findings demonstrated the success of this simple and straightforward technique for cloaking in heat conduction, particle diffusion, electrostatics, and magnetostatics. Then, by replacing the resistors in the lattice structure with linear Hooke’s springs, the researchers found that their transformation approach was successful in cloaking elastic solids as well.

To visualize and test the performance of the lattice-transformation cloak, the researchers used COMSOL Multiphysics simulation software. In the simulations, constant pressure was exerted onto the structure and the resulting strain was analyzed. The direct lattice approach was found to result in less error and less strain under various loading conditions — an indicator of very good performance.

Although mathematically *perfect* cloaking will never exist in reality, mechanical cloaking still has a lot of potential uses in the civil engineering and automotive industries. Using this technique, engineers could create strong materials that maintain their strength and durability, even when forming complex shapes. Constructing buildings of such material would help protect them from earthquake damage, for instance.

*Civil engineers could use mechanical cloaking to design support structures for bridges. (By Alicia Nijdam. Licensed under Creative Commons Attribution 2.0 Generic, via Wikimedia Commons).*

With mechanical cloaking, we could also see complex yet lightweight architecture, carbon-enforced cars, and tunnels with better stress protection in the future. Check out the links below for more information about this fascinating topic.

- Read more about the Karlsruhe Institute of Technology team’s mechanical cloaking technique from *Phys.org*
- Learn how fractals contribute to the magic of metamaterials
- Can you print an invisibility cloak with a 3D printer?
- Cloaking in science and fiction

3D printing has emerged as a popular manufacturing technique within a number of industries. The growing demand for this method of manufacturing has prompted greater simulation research behind its processes. Engineers at the Manufacturing Technology Centre (MTC) have identified their customers’ interest in a particular additive manufacturing technique known as shaped metal deposition. By building a simulation app, the team is better able to meet the demands of their customers while delivering more efficient and effective simulation results.

Designers and manufacturers are usually interested in testing various design schemes to create the most optimized device or process. As a simulation expert, you will often find yourself running multiple tests to account for each new design. The Application Builder, however, has revolutionized this process. By turning your model into a simulation app, you can enable those without a background in simulation to run their own tests and obtain results with the click of a button.

When designing an app, you can opt to include only those parameters that are important to your end-user’s particular analysis, hiding the model’s complexity while still including all of the underlying physics. As modifications are made to the design, app users can change specific inputs to simulate the performance of the new configurations. The result: A more efficient simulation process that allows engineers to focus on the design outcome rather than the physics behind the model.

Over the past few weeks, we’ve blogged about several of our own demo apps that are designed to help you get started with making apps. Today, we will share with you how a team at the MTC built their own app to analyze and optimize *shaped metal deposition* (SMD), an additive manufacturing (3D printing) technique. Let’s begin by exploring what prompted the development of this app.

*The MTC team behind the creation of the simulation app.*

The 3D printing industry has experienced tremendous growth within the last several years. As new initiatives have further developed the technology, 3D printing has emerged as a favorable method of manufacturing components for medical devices, automobiles, and apparel, to name a few.

At the MTC — which has recently become home to the UK National Centre for Net Shape and Additive Manufacturing — simulation engineers recognized their customers’ interest in additive manufacturing, with particular regards to shaped metal deposition. In contrast to powder-based additive manufacturing techniques, SMD is valued for its capability to build new features on pre-existing components as well as use a number of materials on the same part.

Similar to welding, this manufacturing technology gradually deposits molten metal on a surface. A cause for concern within this process is that the thermal expansion of the molten metal, followed by contraction as it cools, can deform the cladding. Thus, the final product can differ from the expected result.

*A simulation of heating in the manufactured part, created by the MTC team.*

*Visible deformation on the manufactured part after six deposited layers.*

Using COMSOL Multiphysics, a team at the MTC created a model to better predict the outcome of the design by minimizing deformations or changing the design to account for such deformations. Responding to the growing popularity of this manufacturing technique, the MTC turned their model into a simulation app that could be shared across various departments within their organization.

The simulation app built by the MTC is based on a thermomechanical analysis of thermal stresses and deformation resulting from SMD thermal cycles. The app was designed to predict if the deposition process would create parts that fell within a specific range of tolerances. In some cases, this could require many tests to be run before arriving at an acceptable final deformation. With the app’s intuitive and user-friendly interface, app users are able to easily modify various inputs to test out each new design and analyze its performance.

*The MTC app’s user interface.*

Within the app, the MTC team has given users the ability to easily test out different geometries, change materials, apply meshing sequences, and experiment with various heat sources and deposition paths. The app also includes two predefined parametric geometries, as well as the option to import a custom geometry.

*Running a simulation using the app. This plot represents the temperature field.*

In *COMSOL News* 2015, Borja Lazaro Toralles, an engineer at the MTC, discussed the advantages of taking this approach to analyzing and optimizing SMD. “Were it not for the app, our simulation experts would have to test out each project we wanted to explore, something that would decrease the availability of skilled resources,” Lazaro Toralles noted in the article.

Since its development, the app has been shared with other members of the MTC team who do not possess simulation expertise. Distributing this easy-to-use tool throughout the organization has offered a simple way for team members to test and validate designs, expediting the simulation process and providing customers with faster results. Additionally, the availability of the app to the MTC engineers means that they are able to respond to companies who want to explore the use of this additive technology very rapidly and at a low cost.

The team at the MTC has already begun making updates to their simulation app, further enhancing its functionality and adding new resources for the end-users. Using the Physics Builder, the engineers have started designing a customized physics interface that will enable the modeling of more complex tool paths and melt pools. Tailored to their design needs, this interface will offer engineers an easier and faster method of implementation that is less prone to error.

To further improve the usability of the app, the MTC is planning to offer more contextual guidance through the card stack tool provided by the Application Builder. For increased accuracy, they have plans to add the capability of modeling the evolution of the microstructure on a macroscopic level to predict heat-affected zones.

Recognizing the advantages of building simulation apps, the MTC is looking to create additional apps to evaluate topology optimization as well as the modeling of hot isostatic pressing (HIP). They are also interested in potentially linking COMSOL Server™ with their own cluster to provide a secure environment for managing, running, and sharing simulation apps. This would be especially beneficial for those companies that do not possess high computational power.

- Read a related article in *COMSOL News* 2015: “Optimizing 3D Printing Techniques with Simulation Apps”
- To learn more about creating your own simulation apps, watch this video: Introducing the Application Builder in COMSOL Multiphysics
- Check out our series of blog posts on 3D printing

Miniature devices have many applications and researchers are constantly finding new uses for them. One such use, which we’ve blogged about before, is a microfluidic device that could let patients conduct immune detection tests by themselves. But to work in the microscale, devices like this one, of course, rely on even smaller components such as micropumps.

Let’s turn to a tutorial model of a valveless micropump mechanism that was created by Veryst Engineering, LLC using COMSOL Multiphysics version 5.1.

The micropump in the tutorial model creates an oscillatory fluid flow by repeating an upstroke and downstroke motion. The fluid flow enters a horizontal channel containing two tilted microflaps, which are located on either side of the micropump. The microflaps passively bend in reaction to the motion of the fluid and help to generate a net flow that moves in one direction. Through this process, the micropump mechanism is able to create fluid flow without the need for valves.

*The geometry of the micropump mechanism tutorial.*

Please note that the straight lines above the microflaps are there to help the meshing algorithm. Check out the tutorial model document if you’d like to learn how this model was created.

The tutorial calculates the micropump mechanism’s net flow rate over a time period of two seconds — the amount of time it takes for two full pumping cycles. The Reynolds number is set to 16 for this simulation so that we can evaluate the valveless micropump mechanism’s performance at low Reynolds numbers. The *Fluid-Structure Interaction* interface in COMSOL Multiphysics is instrumental in taking into account the flaps’ effects on the overall flow, as well as making it an easy model to set up.
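As a rough sanity check of the flow regime, the Reynolds number can be estimated from Re = ρUL/μ. A minimal sketch; the fluid properties, velocity, and channel size below are hypothetical values chosen only so that the formula reproduces Re = 16, which the tutorial sets directly:

```python
def reynolds_number(rho, velocity, length, mu):
    """Return the dimensionless Reynolds number rho*U*L/mu."""
    return rho * velocity * length / mu

# Water-like fluid in a 1 mm channel at 16 mm/s (hypothetical numbers)
re = reynolds_number(rho=1000.0, velocity=0.016, length=1e-3, mu=1e-3)
print(re)  # 16.0
```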

*Left: At a time of 0.26 seconds, the fluid is pushed down and most of it flows to the outlet on the right. Right: At a time of 0.76 seconds, the fluid is pulled up and most of it flows from the inlet on the left.*

The simulation starts with the micropump’s downstroke, which is when the micropump pushes fluid down into the horizontal channel. This action causes the microflap on the right to bend down and the microflap on the left to curve up. In this position, the left-side microflap is obstructing the flow to the left and the flow channel on the right is widened. This naturally causes the majority of the fluid to flow to the right, since it is the path of least resistance.

During the following pumping upstroke, fluid is pumped up into the vertical chamber. Here, the flow causes the microflaps to bend in opposite directions from the previous case. This shift doesn’t change the direction of the net flow, because now the majority of the fluid is drawn into the flow channel from the inlet on the left.

Due to the natural deformation of the microflaps caused by the moving fluid, both of these stages create a left-to-right net flow. But how well does the micropump mechanism maintain this flow over the entire simulation time period?

*The net fluid volume that is pumped from left to right.*

During the two-second test, the net volume pumped from left to right increased continuously, with a higher net flow rate during the peaks of the stroke speed. This shows that the valveless micropump mechanism can function even at low Reynolds numbers.

The valveless micropump mechanism could have many future applications, one of which is to work as a fluid delivery system. In such a scenario, a micropump mechanism could take fluid from a droplet reservoir on its left and move it through a microfluidic channel to an outlet on its right. In this post we have shown just one set of simulation results. By experimenting with the tutorial model set up by Veryst Engineering, you can visualize how a valveless micropump may work in different situations and use this information to discover new uses for micropump mechanisms.

- Download the tutorial model: Micropump Mechanism

First proposed by Adrian Bejan in 1996, the *constructal law* is a theory that summarizes the generation of design and evolution phenomena within nature. With the foundation that designs universally evolve in a particular direction in time, the law states that the shape and structure of a flow system will change over time to facilitate an easier flow. In other words, design configurations will evolve in a direction that enables greater access for those currents flowing throughout it.

Take a plant, for instance. For a plant to thrive, it is important that its structure promotes the flow of nutrients and water. The plant will thus reorient its branches over time to ensure that its design facilitates the proper flow. Another example is a river. When spreading into the sea, a river will often encounter obstacles resulting from settling sediment. As such, the river will change the direction of its flow to avoid running into these obstructions.

While common in nature, these tree-like architectures are evident in manmade designs as well. Aircraft designs, for instance, have evolved over time to transport a greater number of people and goods across further distances. As noted by research from Bejan and his colleagues, maintaining the proportionality of the engine mass to the body of the aircraft has been important to aircraft success. When comparing this to statistics from various mammals, insects, and birds, a nearly identical pattern was found between mass and speed — a reflection of the connection between the law of design in nature and the evolution of manmade technologies.

*The constructal law has shown similarities between the pattern of manmade designs and elements of nature. Left: Image by Altair78, via Wikimedia Commons. Right: Image by Dan Pancamo, via Wikimedia Commons.*

Recently, Bejan and a team of researchers from Duke University and Université de Toulouse in France used the theory behind the constructal law to enhance the performance of phase change energy storage systems by maximizing the melting rate of the phase-change materials. Before diving into their research, let’s take a closer look at phase change energy storage technology.

Energy efficiency is an important consideration in the design of modern technologies. In an effort to reduce environmental impact and save on costs, designers and manufacturers often turn to energy storage techniques as a solution.

One common method used to reduce energy consumption is thermal energy storage technology. Used since the late 19th century, phase change energy storage technology has become a valued approach to energy storage in refrigeration systems as well as commercial buildings. This energy storage technique involves the heating or cooling of a storage medium. The thermal energy is then collected and set aside until it is needed in the future.

Phase-change materials are often used as a storage medium within the thermal energy storage process. When undergoing phase change, a *phase-change material* (PCM) absorbs a great deal of heat at a nearly constant temperature. As it continues to absorb heat, the material does not experience a significant rise in temperature until it is fully melted.

Once the ambient temperature of the environment around the material decreases, the PCM becomes solid again, releasing its stored heat. The use of phase-change materials as a medium is particularly valued, as these materials are able to store high capacities of thermal energy and exhibit isothermal behavior throughout the charge and discharge process.
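The appeal of PCMs can be made concrete with a quick estimate of the heat absorbed during charging. The sketch below uses ballpark figures for a paraffin-like PCM; the specific heat, latent heat, and temperature window are illustrative assumptions, not data from the study:

```python
def stored_heat(mass, c_solid, latent_heat, delta_T):
    """Heat absorbed [J] when `mass` kg of PCM is warmed by delta_T K
    and then fully melted."""
    sensible = mass * c_solid * delta_T  # heating toward the melting point
    latent = mass * latent_heat          # absorbed at near-constant temperature
    return sensible + latent

# 1 kg of paraffin-like PCM: c ~ 2000 J/(kg K), L ~ 200 kJ/kg, 5 K window
q = stored_heat(mass=1.0, c_solid=2000.0, latent_heat=2.0e5, delta_T=5.0)
print(q)  # 210000.0, of which ~95% is stored isothermally in the phase change
```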

*A schematic demonstrating how a phase-change material works. Image by Pazrev — Own work, via Wikimedia Commons.*

Looking to improve the performance of phase change energy storage systems, Bejan and his team of researchers applied the constructal law to their analysis of energy storage through the melting of a phase-change material. In their study, heating was applied along invading lines with the ability to freely morph.

Using COMSOL Multiphysics in their simulation research, the team found that the melted material evolved into an S-shaped flow. By enabling the heat to spread freely through the cold material as in a tree-like architecture, the material was able to melt more quickly. The findings also showed that increasing the tree structure’s complexity and varying its branching angle and stem length simultaneously helped to further accelerate the melting process.

While a heating and cooling coil is traditionally embedded in a phase-change material, the researchers found that the most effective way to spread the heat within the volume was to allow a natural flow to develop and evolve over time. As the constructal law indicates, giving a design the freedom to morph naturally ultimately enhances the performance of the system. Applying the theory behind the constructal law to phase change energy storage systems paves the way for advancing the energy efficiency of this technology, enabling it to continue to evolve and improve well into the future.

Solar energy is created when photovoltaic cells made up of a semiconducting material, such as silicon, transform sunlight (photons) into electricity (voltage). In photovoltaic cells, microthin layers of silicon make up integrated circuits that conduct electricity.

Photovoltaic cells range from a single cell to a group of cells called a *module*, or a group of modules called an *array*. Different amounts of solar cells can power anything from a small electronic device, such as a cellphone, to a whole building. Solar cells are used in a variety of applications, including transportation, telecommunication, space exploration, and more.

The power derived from solar cells provides a clean and sustainable source of energy, as solar energy is a renewable resource. The operation of solar cells involves no moving parts and produces no emissions, which makes them even more energy efficient.

As much as solar energy is a preferable electricity source, there are still some issues with how solar cells are manufactured. Although the operation of solar cells is efficient, manufacturing them consumes a great deal of power in itself. The silicon needed for solar cells also has to be extremely pure. The purity of the silicon directly affects how efficiently sunlight is converted into electricity, a quantity known as the *photovoltaic conversion efficiency*. While common metallurgical silicon is 99.9% pure, the silicon used for photovoltaic cells must be 99.9999% pure.

To fill the need for highly pure silicon for solar cells, a research team at EMIX developed the cold crucible continuous casting (4C) process, which transforms metallurgical silicon into a substance ready for solar cell use. First, silicon is fed into a water-cooled crucible and inductively heated to a melting temperature of 1414°C. Then, the silicon is electromagnetically mixed inside the crucible. Lorentz forces prevent contact between the silicon and the crucible walls.

The mixture homogenizes at the solid-liquid interface, which enhances crystallization conditions for a high purity. The melted substance is pulled through the bottom of the crucible, cooled, and solidified into a rod through an annealing process. Finally, the high-purity silicon is sawed into ingots and sold to solar cell manufacturers, where it is sliced into 200-micrometer thick wafers for use in photovoltaic cells.

*The geometry of the crucible used in the 4C process. Copyright © EMIX.*

EMIX has been using COMSOL Multiphysics software nearly as long as they’ve been manufacturing photovoltaic-quality silicon. Using the *Heat Transfer in Fluids* interface and the *Laminar Flow* interface in the Heat Transfer Module, the research team can adjust certain variables to ensure that the 4C process optimizes production rates and is as energy efficient as the solar cells it produces.

*The research team at EMIX from left to right: Julien Givernaud, Elodie Pereira, Nicolas Pourade, Florine Boulle, Alexandre Petit. Copyright © EMIX.*

By simulating the 4C process, the EMIX team can test a variety of different variables, including:

- Cooling method
- Pull rate
- Crucible and coil shapes
- Characteristics of the furnace
- Effect of the electromagnetic field
- Shape of the solid-liquid interface
- Effect of elastic stresses on crystallization behavior

*A simulation of the 4C process using COMSOL Multiphysics simulation software. Copyright © EMIX.*

Using COMSOL Multiphysics simulation software to enhance the 4C process has proved to be extremely beneficial for EMIX. They were able to estimate both inductance and impedance in their silicon manufacturing, improve their crucible design for electrical efficiency, and test different parameters for continuous casting. This led to higher production rates, lower stresses in the silicon ingots, energy savings of 15%, and a 30% increase in pull rate.

Thanks to an innovative, optimized, and energy efficient 4C process, EMIX has been able to streamline silicon production. With simulation, they have even identified processes that they will soon test on an industrial scale. For now, we can be sure that the future of photovoltaic cells and solar energy is looking bright.

- Read more about how EMIX uses simulation on page 20 of *COMSOL News* 2015
- Run a similar simulation: Download our Continuous Casting tutorial to get started

Marangoni convection — also called *thermocapillary convection* — is important in a number of processes, including welding, crystal growth, and electron beam melting. Due to the types of metals used and the extremely high temperatures involved, performing experiments to analyze Marangoni convection often proves to be rather challenging. The impact of gravity, which mixes up this convective effect with the Marangoni effect, also adds to the difficulty of studying this phenomenon.

At NASA, researchers analyzed Marangoni convection to see how mass and heat move within a fluid under microgravity conditions. Conducting the experiment in microgravity enabled the research team to create silicone oil columns much larger than those that could be studied on Earth, offering a more detailed look at the flow and instability within them. Additionally, suppressing the influence of gravity helped eliminate the possibility of gravity-induced deformation, thus enhancing the accuracy of their results.

With numerical experiments, it is very easy to separate effects that are simply impossible to remove in an experiment on Earth. Our Marangoni Effect tutorial uses a transparent liquid at ambient temperatures to find the velocity field induced through the Marangoni effect in a fluid with known thermo-physical properties. The transparency of the silicone oil makes it easy to implement and compare our simulation results with the microgravity experimental findings.

To begin, we must solve the Navier-Stokes equations to model the velocity field and pressure distribution in the fluid. Keep in mind that variations in temperature affect the velocity and cause a buoyancy force that needs to be represented in the equations. This can be done by using the Boussinesq approximation in the Navier-Stokes equations.

With the *Laminar Flow* interface, we can solve the momentum balance equations. To solve for heat transfer, we use the *Heat Transfer in Fluids* interface. Finally, we use the *Non-Isothermal Flow* multiphysics coupling to set the convective term in the heat equation and the *Marangoni Effect* multiphysics coupling to impose that the shear stress is proportional to the temperature gradient.

*The setup of the tutorial model. The Multiphysics node contains both the nonisothermal coupling and the Marangoni effect.*

This simulation presents three multiphysics couplings that must be solved using the nonlinear solver:

- Because the fluid density, \rho, depends on temperature, which is accounted for via the Boussinesq approximation, the gravity force, -\rho \textbf{g}, is given by an expression that includes temperature.
- Convective heat transfer depends on the velocity of the momentum balance.
- The Marangoni effect relates the shear stress applied at the free surface to the surface temperature gradient.
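The third coupling can be stated in one line: at the free surface, the viscous shear stress balances the variation of surface tension with temperature. A minimal numeric sketch; the temperature coefficient of surface tension and the gradient below are illustrative values, not the tutorial's data:

```python
def marangoni_shear_stress(dsigma_dT, dT_dx):
    """Surface shear stress [Pa] from the Marangoni condition:
    tau = -(d(sigma)/dT) * dT/dx, driving flow from hot to cold."""
    return -dsigma_dT * dT_dx

# Surface tension usually falls with temperature (dsigma/dT < 0),
# so the stress drags fluid away from the hot wall.
tau = marangoni_shear_stress(dsigma_dT=-6e-5, dT_dx=2.0 / 0.01)  # 2 K over 1 cm
print(round(tau, 6))  # 0.012
```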

In our simulations, we analyze a gradual increase in the temperature difference between the vertical walls. For an almost unnoticeable temperature difference of 1 mK, the temperature and velocity fields are only weakly coupled, and the temperature decrease appears linear from left to right.

*The results of a Marangoni effect simulation after only a small change in temperature. The background color represents the temperature field and the red arrows indicate the velocity field. The black lines are isotherms.*

With an increase of 50 mK, Marangoni convection increases the fluid flow and temperature distribution. The temperature decrease is no longer linear across the plot.

*The results of the simulation after a temperature increase of 50 mK.*

Finally, we test a temperature difference of 2 K. The temperature and velocity fields are distinctly coupled and the fluid accelerates at the surface where the temperature gradient is highest.

*The results of the simulation when the temperature difference is raised to 2 K.*

As indicated by the simulation results, the Marangoni effect becomes predominant as the difference in temperature increases.

For the same temperature difference of 2 K, we can easily remove the gravity contribution and keep the Marangoni effect. With the same objective of understanding how buoyancy forces compare with the Marangoni effect, we can instead disable the Marangoni contribution at the surface, leaving the surface free of stress. The results show that the Marangoni effect dominates the buoyancy forces. The shape of the curve shows a peak close to the cold right wall, which is characteristic of the behavior of fluids with high Prandtl numbers.

*The results of the horizontal velocity at the surface versus the horizontal coordinate (m) for a temperature difference of 2 K. Blue represents both the Marangoni effect and the buoyancy effect; green represents only the Marangoni effect; and red represents only the buoyancy effect.*

In this blog post, we have demonstrated how to set up a model representing an experiment combining gravity and Marangoni effects. Separating these two effects is challenging in an experimental setting. In numerical simulations, this process is straightforward, facilitating an understanding of each effect.

You can reproduce the results shown here by downloading the Marangoni Effect tutorial from our Application Gallery. This example uses the Non-Isothermal Flow and Marangoni Effect multiphysics couplings available in the Heat Transfer Module.

While we have focused our attention here on single-phase flows, it is worth mentioning that the Marangoni effect is also handled in the two-phase flow interfaces, which are available in the CFD Module and the Microfluidics Module.


In the vast majority of simulations involving linear elastic materials, we are dealing with an isotropic material that does not have any directional sensitivity. To describe such a material, only two independent material parameters are required. There are many possible ways to select these parameters, but some of them are more popular than others.

Young’s modulus, shear modulus, and Poisson’s ratio are the parameters most commonly found in tables of material data. They are not independent, since the shear modulus, G, can be computed from Young’s modulus, E, and Poisson’s ratio, \nu, as

G = \frac{E}{2(1+\nu)}

Young’s modulus can be directly measured in a uniaxial tensile test, while the shear modulus can be measured in, for example, a pure torsion test.

In the uniaxial test, Poisson’s ratio determines how much the material will shrink (or possibly expand) in the transverse direction. The allowable range is -1 < \nu < 0.5, where positive values indicate that the material shrinks in the thickness direction while being pulled. There are a few materials, called *auxetics*, which have a negative Poisson’s ratio. A cork in a wine bottle has a Poisson’s ratio close to zero, so that its diameter is insensitive to whether it is pulled or pushed.

For many metals and alloys, \nu \approx1/3, and the shear modulus is then about 40% of Young’s modulus.

Given the possible values of \nu, the possible ratios between the shear modulus and Young’s modulus are

\frac{1}{3} < \frac{G}{E} < \infty

When \nu approaches 0.5, the material becomes incompressible. Such materials pose specific problems in an analysis, as we will discuss.

The bulk modulus, K, measures the change in volume for a given uniform pressure. Expressed in E and \nu, it can be written as:

K = \frac{E}{3(1-2\nu)}

When \nu= 1/3, the value of the bulk modulus equals the value of Young’s modulus, but for an incompressible material (\nu \to0.5), K tends to infinity.

The bulk modulus is usually specified together with the shear modulus. These two quantities are, in a sense, the most physically independent choices of parameters. The volume change is only controlled by the bulk modulus and the distortion is only controlled by the shear modulus.

The Lamé constants \mu and \lambda are mostly seen in more mathematical treatises of elasticity. The full 3D constitutive relation between the stress tensor \boldsymbol \sigma and the strain tensor \boldsymbol \varepsilon can be conveniently written in terms of the Lamé constants:

\boldsymbol \sigma=2\mu \boldsymbol \varepsilon +\lambda \; \mathrm{trace}(\boldsymbol{\varepsilon}) \mathbf I

The constant \mu is simply the shear modulus, while \lambda can be written as

\lambda = \frac{E \nu}{(1+\nu)(1-2\nu)}

A full table of conversions between the various elastic parameters can be found here.
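The conversions above are straightforward to script. A minimal sketch starting from (E, \nu), using the formulas quoted in the text; the numeric values in the example are only an illustration:

```python
def elastic_constants(E, nu):
    """Return (G, K, lam): shear modulus, bulk modulus, and first Lame
    constant, computed from Young's modulus E and Poisson's ratio nu."""
    if not -1.0 < nu < 0.5:
        raise ValueError("Poisson's ratio must lie in (-1, 0.5)")
    G = E / (2.0 * (1.0 + nu))                       # shear modulus
    K = E / (3.0 * (1.0 - 2.0 * nu))                 # bulk modulus
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))   # first Lame constant
    return G, K, lam

# For nu = 1/3, K equals E and G is 37.5% of E, consistent with the text
G, K, lam = elastic_constants(E=200e9, nu=1.0 / 3.0)
print(round(G / 200e9, 3), round(K / 200e9, 3))  # 0.375 1.0
```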

Some materials, like rubber, are almost incompressible. Mathematically, a fully incompressible material differs fundamentally from a compressible material. Since there is no volume change, it is not possible to determine the mean stress from it. The state equation relating the mean stress (pressure), *p*, to the volume change, \Delta V,

p = f(\Delta V)

will no longer exist, and must instead be replaced by a constraint stating that

\Delta V = 0

Another way of looking at incompressibility is to note that the term (1-2\nu) appears in the denominator of the constitutive equations, so that a division by zero would occur if \nu= 0.5. Is it then a good idea to model an incompressible material approximately by setting \nu= 0.499?

It can be done, but in this case, a standard displacement based finite element formulation may give undesirable results. This is caused by a phenomenon called *locking*. Effects include:

- Overly stiff models.
- Checkerboard stress patterns.
- Errors or warnings from the equation solver because of ill-conditioning.

The remedy is to use a *mixed formulation* where the pressure is introduced as an extra degree of freedom. In COMSOL Multiphysics, you enable the mixed formulation by selecting the *Nearly incompressible material* checkbox in the settings for the material model.

*Part of the settings for a linear elastic material with mixed formulation enabled.*

When Poisson’s ratio is larger than about 0.45, or equivalently, the bulk modulus is more than one order of magnitude larger than the shear modulus, it is advisable to use a mixed formulation. An example of the effect is shown in the figure below.

*Stress distribution in a simple plane strain model, \nu = 0.499. The top image shows a standard displacement based formulation, while the bottom image shows a mixed formulation.*

In the solution with only displacement degrees of freedom, the stress pattern shows distortions at the left end where there is a constraint. These distortions are almost completely removed by using a mixed formulation.

In general cases of linear elastic materials, the material properties have a directional sensitivity. The most general case is called anisotropic, which means all six stress components can depend on all six strain components. This requires 21 material parameters. Clearly, it is a demanding task to obtain all of this data. If the stress, \boldsymbol \sigma, and strain, \boldsymbol \varepsilon, are treated as vectors, they are related by the constitutive 6-by-6 symmetric matrix \mathbf D through

\boldsymbol \sigma= \mathbf D \boldsymbol \varepsilon

Fortunately, it is common that nonisotropic materials exhibit certain symmetries. In an orthotropic material, there are three orthogonal directions in which the shear action is decoupled from the axial action. That is, when the material is stretched along one of these principal directions, it will only contract in the two orthogonal directions, but not be sheared. A full description of an orthotropic material requires nine independent material parameters.

The constitutive relation of an orthotropic material is easier to express in compliance form, \boldsymbol \varepsilon= \mathbf C \boldsymbol \sigma:

\mathbf C =

\begin{bmatrix}

\tfrac{1}{E_{\rm X}} & -\tfrac{\nu_{\rm YX}}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZX}}{E_{\rm Z}} & 0 & 0 & 0 \\

-\tfrac{\nu_{\rm XY}}{E_{\rm X}} & \tfrac{1}{E_{\rm Y}} & -\tfrac{\nu_{\rm ZY}}{E_{\rm Z}} & 0 & 0 & 0 \\

-\tfrac{\nu_{\rm XZ}}{E_{\rm X}} & -\tfrac{\nu_{\rm YZ}}{E_{\rm Y}} & \tfrac{1}{E_{\rm Z}} & 0 & 0 & 0 \\

0 & 0 & 0 & \tfrac{1}{G_{\rm YZ}} & 0 & 0 \\

0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm ZX}} & 0 \\

0 & 0 & 0 & 0 & 0 & \tfrac{1}{G_{\rm XY}} \\

\end{bmatrix}

Since the compliance matrix must be symmetric, the twelve constants used are reduced to nine through three symmetry relations of the type

\tfrac{\nu_{\rm YX}}{E_{\rm Y}} = \tfrac{\nu_{\rm XY}}{E_{\rm X}}

Note that \nu_{\rm YX} \neq \nu_{\rm XY}, so when dealing with orthotropic data, it is important to make sure that the intended Poisson’s ratio values are used. The notation may not be the same in all sources.

Anisotropy and orthotropy commonly occur in inhomogeneous materials. Often, the properties are not measured, but computed using a homogenization process upscaling from the microscopic to the macroscopic scale. A discussion about such homogenization, in quite another context, can be found in this blog post.

For nonisotropic materials, there are limitations to the possible values of the material parameters similar to those described for isotropic materials. It is difficult to immediately see these limitations, but there are two things to look out for:

- The constitutive matrix \mathbf D must be positive definite.
  - For a general anisotropic material, the only option is to check whether all of its eigenvalues are positive.
  - For an orthotropic material, this is true if all six elastic moduli are positive and \nu_{\rm XY}\nu_{\rm YX}+\nu_{\rm YZ}\nu_{\rm ZY}+\nu_{\rm ZX}\nu_{\rm XZ}+2\nu_{\rm YX}\nu_{\rm ZY}\nu_{\rm XZ}<1

- If the material has low compressibility, a mixed formulation must be used.
  - It is possible to estimate an effective bulk modulus and compare it with the values of the shear moduli.
  - In cases of uncertainty, it is better to take the extra cost of the mixed formulation to avoid possible inaccuracies.
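As a sketch of how the orthotropic check might be automated, the function below tests that all moduli are positive and that the determinant condition on the Poisson's ratios holds. The glass/epoxy-like material values in the example are hypothetical, chosen only to exercise the function:

```python
def orthotropic_is_stable(E, G, nu):
    """Check the orthotropic stability conditions.
    E = (Ex, Ey, Ez), G = (Gyz, Gzx, Gxy), nu = (nu_xy, nu_yz, nu_zx).
    The reciprocal ratios follow from the symmetry of the compliance
    matrix: nu_yx = nu_xy*Ey/Ex, and so on."""
    Ex, Ey, Ez = E
    nu_xy, nu_yz, nu_zx = nu
    if any(m <= 0 for m in E + G):
        return False
    nu_yx = nu_xy * Ey / Ex
    nu_zy = nu_yz * Ez / Ey
    nu_xz = nu_zx * Ex / Ez
    # The determinant of the upper-left 3x3 block of the compliance
    # matrix must be positive, which gives the inequality below
    s = (nu_xy * nu_yx + nu_yz * nu_zy + nu_zx * nu_xz
         + 2.0 * nu_yx * nu_zy * nu_xz)
    return s < 1.0

# Hypothetical glass/epoxy-like values (Pa): stable
print(orthotropic_is_stable((45e9, 12e9, 12e9), (4.5e9, 5.5e9, 5.5e9),
                            (0.28, 0.40, 0.075)))  # True
# Pushing one Poisson's ratio too far violates positive definiteness
print(orthotropic_is_stable((45e9, 12e9, 12e9), (4.5e9, 5.5e9, 5.5e9),
                            (0.28, 0.40, 0.9)))    # False
```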

When working with geometrically nonlinear problems, the meaning of “linear elasticity” is really a matter of convention. The issue here is that there are several possible representations of stresses and strains. For a discussion about different stress and strain measures, see this previous blog post.

Since the primary stress and strain quantities in COMSOL Multiphysics are Second Piola-Kirchhoff stress and Green-Lagrange strain, the natural interpretation of linear elasticity is that these quantities are linearly related to each other. Such a material is sometimes called a St. Venant material.

Intuitively, one could expect that “linear elasticity” means that there is a linear relation between force and displacement in a simple tensile test. This will not be the case, since both stresses and strains depend on the deformation. To see this, consider a bar with a square cross section.

*The bar subjected to uniform extension.*

The original length of the bar is L_0 and the original cross-section area is A_0=a_0^2, where a_0 is the original edge of the cross section. Assume that the bar is extended at a distance \Delta so that the current length is L=L_0+\Delta=L_0(1+\xi).

Here, 1+\xi is the axial stretch and \xi can be interpreted as the engineering strain. The new length of the edge of the cross section is a=a_0+d=a_0(1+\eta), where \eta is the engineering strain in the transverse directions.

The force can be expressed as the Cauchy stress \sigma_x in the axial direction multiplied by the current cross-section area:

F = \sigma_x A = \sigma_x A_0 (1+\eta)^2

To use the linear elastic relation, the Cauchy stress \boldsymbol \sigma must be expressed in terms of the Second Piola-Kirchhoff stress \mathbf S. The transformation rule is

\boldsymbol \sigma = J^{-1} \mathbf F \mathbf S \mathbf F^T

where \mathbf F is the deformation gradient tensor, and the volume scale factor is defined as J = \det(\mathbf F). Without going into details, for a uniaxial case

\sigma_x = \frac{F_{xX}}{F_{yY}F_{zZ}}S_X= \frac{(1+\xi)}{(1+\eta)^2}S_X

Since for a St. Venant material in uniaxial extension, the axial stress is related to the axial strain as S_X = E \varepsilon_X, we obtain

F = S_X A_0 (1+\xi) = E A_0 (1+\xi)\varepsilon_X

Given that the axial term of the Green-Lagrange strain tensor is defined as

\varepsilon_X = \frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2 = \xi+\frac{1}{2}\xi^2

the force versus displacement relation is then

F = E A_0 (1+\xi)(\xi + \frac{1}{2}\xi^2)=E A_0 (\xi+\frac{3}{2}\xi^2+\frac{1}{2}\xi^3)

The linear elastic material combined with geometric nonlinearity actually implies a cubic relation between force and engineering strain (or force versus displacement, since \Delta =L_0\xi), as shown in the figure below.

*The uniaxial response of a linear elastic material under geometric nonlinearity.*

As can be seen in the graph, the stiffness of the material approaches zero at the compression side, \xi = \sqrt{{1}/{3}}-1 \approx -0.42. In practice, this means that the simulation will fail at that strain level. It can be argued that there are no real materials that are linear at large strains, so this should not cause problems in practice. However, linear elastic materials are often used far outside the range of reasonable stresses for several reasons, such as:

- Often, you may want to do a quick “order of magnitude” check before introducing more sophisticated material models.
- There are singularities in the model that cause very high strains at a point.
  - Read more about singularities here.
- In contact problems, the study is always geometrically nonlinear.
  - Often, high compressive strains appear locally in the contact zone at some time during the analysis.

In all of these cases, the solver may fail to find a solution if the compressive strains are large. If you suspect this to be the case, it is a good idea to plot the smallest principal strain. If it is smaller than -0.3 or so, we can expect this kind of breakdown. The critical value in terms of the Green-Lagrange strain is found to be -1/3. When this becomes a problem, you should consider changing to a suitable hyperelastic material model.
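The cubic force relation and the critical strain quoted above can be verified numerically. A minimal sketch in pure Python, independent of any simulation software:

```python
import math

def normalized_force(xi):
    """F/(E*A0) for a St. Venant material in uniaxial tension,
    as derived above: xi + (3/2)*xi^2 + (1/2)*xi^3."""
    return xi + 1.5 * xi**2 + 0.5 * xi**3

def normalized_stiffness(xi):
    """d(F/(E*A0))/dxi; it vanishes at xi = sqrt(1/3) - 1."""
    return 1.0 + 3.0 * xi + 1.5 * xi**2

xi_crit = math.sqrt(1.0 / 3.0) - 1.0
print(round(xi_crit, 2))                             # -0.42
print(abs(round(normalized_stiffness(xi_crit), 9)))  # 0.0
print(round(xi_crit + 0.5 * xi_crit**2, 9))          # -0.333333333, i.e., -1/3
```

The last line evaluates the Green-Lagrange strain at the critical point, recovering the value of -1/3 stated in the text.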

Compression may not be the only problem. In the analysis above, Poisson’s ratio did not enter the equations. So what happens with the cross section?

By definition in the uniaxial case, the transverse strain is related to the axial strain by

\varepsilon_Y = -\nu \varepsilon_X

When these strains are Green-Lagrange strains, this is a nonlinear relation stating that

\frac{\partial v}{\partial Y} + \frac{1}{2}(\frac{\partial v}{\partial Y})^2 = -\nu (\frac{\partial u}{\partial X} + \frac{1}{2}(\frac{\partial u}{\partial X})^2)

Thus, there is a strong nonlinearity in the change of the cross section. Solving this quadratic equation gives the following relation between the engineering strains

\eta = \sqrt{1-\nu(\xi^2+2\xi)}-1

The result is shown in the figure below.

*Transverse displacement as a function of the axial displacement for uniaxial tension of a St. Venant material. Five different values of Poisson’s ratio are shown.*

As you can see, the cross section collapses quickly at large extensions for higher values of Poisson’s ratio.
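A small numerical sketch of this relation (`xi_collapse` is our own helper name): for \nu = 0.5, the square-root argument reaches zero, meaning a complete collapse of the cross section, already at about 73% elongation:

```python
import math

def eta(xi, nu):
    # Transverse engineering strain for a St. Venant material in uniaxial tension;
    # the argument is clamped at zero, where the cross section has fully collapsed
    arg = 1.0 - nu * (xi**2 + 2.0 * xi)
    return math.sqrt(max(arg, 0.0)) - 1.0

def xi_collapse(nu):
    # Axial strain at which the square-root argument vanishes (our helper)
    return math.sqrt(1.0 + 1.0 / nu) - 1.0

print(eta(0.5, 0.5))               # ~ -0.39: strong contraction at 50% elongation
print(xi_collapse(0.5))            # ~ 0.732 for nu = 0.5
print(eta(xi_collapse(0.5), 0.5))  # -1: the cross section is gone
```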

If another choice of stress and strain representation had been made — for example, if the Cauchy stress were proportional to the logarithmic, or “true”, strain — the response would have been quite different. Such a material instead has a stiffness that decreases with elongation, and its force versus displacement response does depend on the value of Poisson’s ratio. Still, both materials can correctly be called “linear elastic”, although the results computed with large strain elasticity can differ widely between two different simulation platforms.
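To see how large the difference can be, the sketch below compares the St. Venant response with a hypothetical material in which the Cauchy stress equals E times the logarithmic strain and the transverse logarithmic strain is -\nu times the axial one (our own assumptions, for illustration only):

```python
import math

E, A0 = 1.0, 1.0  # normalized modulus and reference area

def force_st_venant(xi):
    # St. Venant: S = E * eps_GL, F = S * A0 * (1 + xi) -> cubic in xi
    return E * A0 * (xi + 1.5 * xi**2 + 0.5 * xi**3)

def force_hencky(xi, nu):
    # Hypothetical "true stress vs. log strain" material (our sketch):
    # sigma = E*ln(1+xi); transverse log strain = -nu*ln(1+xi),
    # so the current area is A0*(1+xi)**(-2*nu) and F = sigma * A
    return E * A0 * math.log(1.0 + xi) * (1.0 + xi) ** (-2.0 * nu)

# At 50% elongation, the two "linear elastic" materials differ widely,
# and the second one depends on Poisson's ratio
print(force_st_venant(0.5))
print(force_hencky(0.5, 0.3))
print(force_hencky(0.5, 0.5))
```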

We have illustrated some limits for the use of linear elastic materials. In particular, the possible pitfalls related to incompressibility and to the combination of linear elasticity with large strains have been highlighted.

If you are interested in reading more about material modeling in structural mechanics problems, check out these blog posts:

- Introducing Nonlinear Elastic Materials
- Obtaining Material Data for Structural Mechanics from Measurements
- Part 2: Obtaining Material Data for Structural Mechanics from Measurements
- Fitting Measured Data to Different Hyperelastic Material Models
- Yield Surfaces and Plastic Flow Rules in Geomechanics
- Computing Stiffness of Linear Elastic Structures: Part 1
- Computing Stiffness of Linear Elastic Structures: Part 2

After obtaining our measured data, the question then becomes this: How can we estimate the material parameters required for defining the hyperelastic material models based on the measured data? One of the ways to do this in COMSOL Multiphysics is to fit a parameterized analytic function to the measured data using the Optimization Module.

In the section below, we will define analytical expressions for stress-strain relationships for two common tests — the *uniaxial test* and the *equibiaxial test*. These analytical expressions will then be fitted to the measured data to obtain material parameters.

Characterizing the volumetric deformation of hyperelastic materials to estimate material parameters can be a rather intricate process. Oftentimes, perfect incompressibility is assumed in order to estimate the parameters. This means that after estimating material parameters from curve fitting, you would have to use a reasonable value for the bulk modulus of the nearly incompressible hyperelastic material, as this property is not calculated by the fit.

Here, we will fit the measured data to several perfectly incompressible hyperelastic material models. We will start by reviewing some of the basic concepts of the nearly incompressible formulation and then characterize the stress measures for the case of perfect incompressibility.

For nearly incompressible hyperelasticity, the total strain energy density is presented as

W_s = W_{iso}+W_{vol}

where W_{iso} is the isochoric strain energy density and W_{vol} is the volumetric strain energy density. The second Piola-Kirchhoff stress tensor is then given by

S = -p_pJC^{-1}+2\frac{\partial W_{iso}}{\partial C}

where p_{p} is the volumetric stress, J is the volume ratio, and C is the right Cauchy-Green tensor.

You can expand the second term from the above equation so that the second Piola-Kirchhoff stress tensor can be equivalently expressed as

S = -p_pJC^{-1}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right)I-J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} C -\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)C^{-1}\right)

where \bar{I}_{1} and \bar{I}_{2} are invariants of the isochoric right Cauchy-Green tensor \bar{C} = J^{-2/3}C.

The first Piola-Kirchhoff stress tensor, P, and the Cauchy stress tensor, \sigma, can be expressed as a function of the second Piola-Kirchhoff stress tensor as

\begin{align}
P& = FS\\
\sigma& = J^{-1}FSF^{T}
\end{align}

Here, F is the deformation gradient.

Note: You can read more about the description of different stress measures in our previous blog entry “Why All These Stresses and Strains?”

The strain energy density and stresses are often expressed in terms of the stretch ratio \lambda. The *stretch ratio* is a measure of the magnitude of deformation. In a uniaxial tension experiment, the stretch ratio is defined as \lambda = L/L_0, where L is the deformed length of the specimen and L_0 is its original length. In a multiaxial stress state, you can calculate principal stretches \lambda_a\;(a = 1,2,3) in the principal referential directions \hat{\mathbf{N}_a}, which are the same as the directions of the principal stresses. The stress tensor components can be rewritten in the spectral form as

S = \sum_{a=1}^{3} S_{a} \hat{\mathbf{N}_{a}} \otimes \hat{\mathbf{N}_{a}}

where S_{a} represents the principal values of the second Piola-Kirchhoff stress tensor and \hat{\mathbf{N}_{a}} represents the principal referential directions. You can represent the right Cauchy-Green tensor in its spectral form as

C = \sum_{a=1}^{3}\lambda_a^2 \hat{\mathbf{N}_a}\otimes\hat{\mathbf{N}_a}

where \lambda_a indicates the values of the principal stretches. This allows you to express the principal values of the second Piola-Kirchhoff stress tensor as a function of the principal stretches

S_a = \frac{-p_p J}{\lambda_a^2}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right) -J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} \lambda_a^2 -\frac{1}{\lambda_a^2}\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)\right)

Now, let’s consider the uniaxial and biaxial tension tests explained in the initial blog post in our Structural Materials series. For both of these tests, we can derive a general relationship between stress and stretch.

Under the assumption of incompressibility (J=1), the principal stretches for the uniaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda, \lambda_2 = \lambda_3 = \lambda^{-1/2}

The deformation gradient is given by

F = \left(\begin{array}{ccc} \lambda &0 &0 \\ 0 &\frac{1}{\sqrt{\lambda}} &0 \\ 0 &0 &\frac{1}{\sqrt{\lambda}}\end{array}\right)

For uniaxial extension S_2 = S_3 = 0, the volumetric stress p_{p} can be eliminated to give

S_{1} = 2\left(\frac{1}{\lambda} -\frac{1}{\lambda^4}\right) \left(\lambda \frac{\partial W_{iso}}{\partial \bar{I}_{1_{uni}}}+\frac{\partial W_{iso}}{\partial \bar{I}_{2_{uni}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \;\;\;\; (1)

The isochoric invariants \bar{I}_{1_{uni}} and \bar{I}_{2_{uni}} can be expressed in terms of the principal stretch \lambda as

\begin{align*}
\bar{I}_{1_{uni}} &= \lambda^2+\frac{2}{\lambda} \\
\bar{I}_{2_{uni}} &= 2\lambda + \frac{1}{\lambda^2}
\end{align*}
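These uniaxial invariant expressions are easy to verify numerically. The short sketch below (names are our own) builds the principal stretches, checks incompressibility, and evaluates both invariants of C:

```python
lam = 1.7  # an arbitrary stretch level

# Principal stretches for incompressible uniaxial extension
l1, l2, l3 = lam, lam**-0.5, lam**-0.5

# Volume ratio J = det(F) = l1*l2*l3 should be exactly 1
J = l1 * l2 * l3

# Invariants of C = diag(l1^2, l2^2, l3^2)
c = [l1**2, l2**2, l3**2]
I1 = sum(c)
I2 = 0.5 * (sum(c)**2 - sum(ci**2 for ci in c))

print(J)                              # 1.0
print(I1, lam**2 + 2.0 / lam)         # both equal I1_uni
print(I2, 2.0 * lam + 1.0 / lam**2)   # both equal I2_uni
```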

Under the assumption of incompressibility, the principal stretches for the equibiaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda_2 = \lambda, \; \lambda_3 = \lambda^{-2}

For equibiaxial extension S_3 = 0, the volumetric stress p_{p} can be eliminated to give

S_1 = S_2 = 2\left(1-\frac{1}{\lambda^6}\right)\left(\frac{\partial W_{iso}}{\partial \bar{I}_{1_{bi}}}+\lambda^2\frac{\partial W_{iso}}{\partial \bar{I}_{2_{bi}}}\right),\; P_1 = \lambda S_1,\; \sigma_1 = \lambda^2 S_1 \;\;\;\; (2)

The invariants \bar{I}_{1_{bi}} and \bar{I}_{2_{bi}} are then given by

\begin{align*}
\bar{I}_{1_{bi}} &= 2\lambda^2 + \frac{1}{\lambda^4} \\
\bar{I}_{2_{bi}} &= \lambda^4 + \frac{2}{\lambda^2}
\end{align*}
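The same kind of numerical check works for the equibiaxial invariants (again, names are our own):

```python
lam = 1.4  # an arbitrary stretch level

# Principal stretches for incompressible equibiaxial extension
l1, l2, l3 = lam, lam, lam**-2
J = l1 * l2 * l3  # det(F), should be 1

c = [l1**2, l2**2, l3**2]
I1 = sum(c)
I2 = 0.5 * (sum(c)**2 - sum(ci**2 for ci in c))

print(J)                                # 1.0
print(I1, 2.0 * lam**2 + 1.0 / lam**4)  # both equal I1_bi
print(I2, lam**4 + 2.0 / lam**2)        # both equal I2_bi
```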

Let’s now look at the stress versus stretch relationships for a few of the most common hyperelastic material models. We will consider the first Piola-Kirchhoff stress for the purpose of curve fitting.

The total strain energy density for a Neo-Hookean material model is given by

W_s = \frac{1}{2}\mu\left(\bar{I}_1-3\right)+\frac{1}{2}\kappa\left(J_{el}-1\right)^2

where J_{el} is the elastic volume ratio and \mu is a material parameter that we need to compute via curve fitting. Under the assumption of perfect incompressibility and using equations (1) and (2), the first Piola-Kirchhoff stress expressions for the cases of uniaxial and equibiaxial deformation are given by

\begin{align*}
P_{1_{uniaxial}} &= \mu\left(\lambda-\lambda^{-2}\right)\\
P_{1_{biaxial}} &= \mu\left(\lambda-\lambda^{-5}\right)
\end{align*}
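These expressions can be double-checked with a simple finite difference, since for a perfectly incompressible material the uniaxial first Piola-Kirchhoff stress is the derivative of the strain energy density with respect to the stretch, and in the equibiaxial case it is half that derivative (two equal stretches do work). A sketch with our own helper names:

```python
mu = 1.0  # Neo-Hookean material parameter (normalized)

def W_uni(lam):
    # Incompressible Neo-Hookean energy along the uniaxial path: I1 = lam^2 + 2/lam
    return 0.5 * mu * (lam**2 + 2.0 / lam - 3.0)

def W_bi(lam):
    # Equibiaxial path: I1 = 2*lam^2 + 1/lam^4
    return 0.5 * mu * (2.0 * lam**2 + lam**-4 - 3.0)

def dW(W, lam, h=1e-6):
    # Central finite difference of the strain energy density
    return (W(lam + h) - W(lam - h)) / (2.0 * h)

lam = 1.5
# Uniaxial: P1 = dW/dlam; equibiaxial: P1 = (1/2) dW/dlam
print(dW(W_uni, lam), mu * (lam - lam**-2))
print(0.5 * dW(W_bi, lam), mu * (lam - lam**-5))
```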

The stress versus stretch relationship for a few of the other hyperelastic material models are listed below. These can be easily derived through the use of equations (1) and (2), which relate stress and the strain energy density.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10}+C_{01}\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+\lambda^2 C_{01}\right)
\end{align*}

Here, C_{10} and C_{01} are Mooney-Rivlin material parameters.

\begin{align}\begin{split}
P_{1_{uniaxial}}& = 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10} + 2C_{20}\lambda\left(I_{1_{uni}}-3\right)+C_{11}\lambda\left(I_{2_{uni}}-3\right)\right.\\
& \quad \left.+\,C_{01}+2C_{02}\left(I_{2_{uni}}-3\right)+C_{11}\left(I_{1_{uni}}-3\right)\right)\\
P_{1_{biaxial}}& = 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+2C_{20}\left(I_{1_{bi}}-3\right)+C_{11}\left(I_{2_{bi}}-3\right)\right.\\
& \quad \left.+\,\lambda^2C_{01}+2\lambda^2C_{02}\left(I_{2_{bi}}-3\right)+\lambda^2 C_{11}\left(I_{1_{bi}}-3\right)\right)
\end{split}\end{align}

Here, C_{10}, C_{01}, C_{20}, C_{02}, and C_{11} are Mooney-Rivlin material parameters.

\begin{align}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{uni}}^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{bi}}^{p-1}
\end{align}

Here, \mu_0 and N are Arruda-Boyce material parameters, and c_p are the first five terms of the Taylor expansion of the inverse Langevin function.
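For reference, the commonly quoted values of these coefficients are c_1 = 1/2, c_2 = 1/20, c_3 = 11/1050, c_4 = 19/7000, and c_5 = 519/673750. The sketch below (our own helper names) evaluates the uniaxial Arruda-Boyce stress and checks that for very large N only the first term survives, so the model reduces to Neo-Hookean with \mu = \mu_0:

```python
# First five coefficients of the inverse Langevin Taylor expansion
c = [1 / 2, 1 / 20, 11 / 1050, 19 / 7000, 519 / 673750]

def P1_uni_ab(lam, mu0, N):
    # Uniaxial Arruda-Boyce first Piola-Kirchhoff stress (perfect incompressibility)
    I1 = lam**2 + 2.0 / lam
    s = sum((p + 1) * c[p] * I1**p / N**p for p in range(5))
    return 2.0 * (lam - lam**-2) * mu0 * s

lam, mu0 = 1.5, 1.0
print(P1_uni_ab(lam, mu0, 1e9))   # N -> infinity limit
print(mu0 * (lam - lam**-2))      # Neo-Hookean with mu = mu0
print(P1_uni_ab(lam, mu0, 8.0))   # finite N stiffens the response
```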

\begin{align}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{uni}}-3\right)^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{bi}}-3\right)^{p-1}
\end{align}

Here, the values of c_p are Yeoh material parameters.

\begin{align}
P_{1_{uniaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-\frac{\alpha_p}{2}-1}\right)\\
P_{1_{biaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-2\alpha_p-1}\right)
\end{align}

Here, \mu_p and \alpha_p are Ogden material parameters.

Using the *Optimization* interface in COMSOL Multiphysics, we will fit measured stress versus stretch data against the analytical expressions detailed in our discussion above. Note that the measured data we are using here is the *nominal stress*, which can be defined as the force in the current configuration acting on the original area. It is important that the measured data is fit against the appropriate stress measure. Therefore, we will fit the measured data against the analytical expressions for the first Piola-Kirchhoff stress expressions. The plot below shows the measured nominal stress (raw data) for uniaxial and equibiaxial tests for vulcanized rubber.

*Measured stress-strain curves by Treloar.*

Let’s begin by setting up the model to fit the uniaxial Neo-Hookean stress to the uniaxial measured data. The first step is to add an *Optimization* interface to a 0D model. Here, *0D* implies that our analysis is not tied to a particular geometry.

Next, we can define the material parameters that need to be computed as well as the variable for the analytical stress versus stretch relationship. The screenshot below shows the parameters and variable defined for the case of a uniaxial Neo-Hookean material model.

Within the *Optimization* interface, a *Global Least-Squares Objective* branch is added, where we can specify the measured uniaxial stress versus stretch data as an input file. Next, a *Parameter Column* and a *Value Column* are added. Here, we define lambda (stretch) as a measured parameter and specify the uniaxial analytical stress expression to fit against the measured data. We can also specify a weighting factor in the *Column contribution weight* setting. For detailed instructions on setting up the *Global Least-Squares Objective* branch, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

We can now solve the above problem and estimate material parameters by fitting our uniaxial tension test data against the uniaxial Neo-Hookean material model. This is, however, rarely a good idea. As explained in Part 1 of this blog series, the seemingly simple test can leave many loose ends. Later on in this blog post, we will explore the consequence of material calibration based on just one data set.

Depending on the operating conditions, you can obtain a better estimate of material parameters through a combination of measured uniaxial tension, compression, biaxial tension, torsion, and volumetric test data. This compiled data can then be fit against analytical stress expressions for each of the applicable cases.

Here, we will use the equibiaxial tension test data alongside the uniaxial tension test data. Just as we have set up the optimization model for the uniaxial test, we will define another global least-squares objective for the equibiaxial test as well as corresponding parameter and value columns. In the second global least-squares objective, we will specify the measured equibiaxial stress versus stretch data file as an input file. In the value column, we will specify the equibiaxial analytical stress expression to fit against the equibiaxial test data.

The settings of the Optimization study step are shown in the screenshot below. The model tree branches have been manually renamed to reflect the material model (Neo-Hookean) and the two tests (uniaxial and equibiaxial). The optimization algorithm is a Levenberg-Marquardt solver, which is suited to problems of the least-squares type. The model is now set to minimize the sum of the two global least-squares objectives for the uniaxial and equibiaxial test cases.
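To make the least-squares idea concrete outside COMSOL, here is a minimal sketch for the one-parameter Neo-Hookean case, where the fit for \mu is linear and has a closed form, so no Levenberg-Marquardt iterations are needed. The "measured" data below is synthetic, generated from an assumed \mu = 0.4 MPa, not Treloar's measurements:

```python
# Hypothetical stretch levels for the two tests
lam_uni = [1.2, 1.5, 2.0, 3.0]
lam_bi = [1.2, 1.5, 2.0]

mu_true = 0.4  # MPa, assumed value used only to generate synthetic data
P_uni = [mu_true * (l - l**-2) for l in lam_uni]
P_bi = [mu_true * (l - l**-5) for l in lam_bi]

# Neo-Hookean "basis" values for both tests, with equal weights
f = [l - l**-2 for l in lam_uni] + [l - l**-5 for l in lam_bi]
y = P_uni + P_bi

# Closed-form least squares: minimize sum over i of (y_i - mu*f_i)^2
mu_fit = sum(yi * fi for yi, fi in zip(y, f)) / sum(fi * fi for fi in f)
print(mu_fit)  # recovers the assumed 0.4
```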

The plot below depicts the fitted data against the measured data. Equal weights are assigned to both the uniaxial and equibiaxial least-squares fitting. It is clear that the Neo-Hookean material model with only one parameter is not a good fit here, as the test data is nonlinear and has one inflection point.

*Fitted material parameters using the Neo-Hookean model. Equal weights are assigned to both sets of test data.*

Fitting the curves while specifying unequal weights for the two tests will result in a slightly different fitted curve. Similar to the Neo-Hookean model, we will set up global least-squares objectives corresponding to Mooney-Rivlin, Arruda-Boyce, Yeoh, and Ogden material models. In our calculation below, we will include cases for both equal and unequal weights.

In the case of unequal weights, we will use a higher, but arbitrary, weight for the entire equibiaxial data set. You may instead want to assign unequal weights only to a certain stretch range. In that case, we can split the particular test case into parts, using a separate *Global Least-Squares Objective* branch for each stretch range. This allows us to assign different weights to different stretch ranges.

The plots below show fitted curves for different material models for equal and unequal weights that correspond to the two tests.

*Left: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. In these cases, equal weights are assigned to both test data. Right: Fitted material parameters using Mooney-Rivlin, Arruda-Boyce, and Yeoh models. Here, higher weight is assigned to equibiaxial test data.*

The Ogden material model with three terms fits both test data quite well for the case of equal weights assigned to both tests.

*Fitted material parameters using the Ogden model with three terms.*

If we only fit uniaxial data and use the computed parameters for plotting equibiaxial stress against the actual equibiaxial test data, we obtain the results in the plots below. These plots show the mismatch in the computed equibiaxial stress when compared to the measured equibiaxial stress. In material parameter estimation, it is best to perform curve fitting for a combination of different significant deformation modes rather than considering only one deformation mode.

*Uniaxial and equibiaxial stress computed by fitting model parameters to only uniaxial measured data.*

To find material parameters for hyperelastic material models, fitting the analytic curves may seem like a solid approach. However, the stability of a given hyperelastic material model may also be a concern. The criterion for determining material stability is known as *Drucker stability*. According to Drucker's criterion, the incremental work associated with an incremental stress must always be greater than zero. If the criterion is violated, the material model is unstable.
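As a simplified illustration (our own one-dimensional sketch, not the full tensorial criterion), one can check that the tangent stiffness along the uniaxial path stays positive. A two-parameter Mooney-Rivlin model with a strongly negative C_{01} fails this check near \lambda = 1, while a parameter set with positive coefficients passes:

```python
def P1_uni_mr(lam, C10, C01):
    # Uniaxial first Piola-Kirchhoff stress, two-parameter Mooney-Rivlin
    return 2.0 * (1.0 - lam**-3) * (lam * C10 + C01)

def tangent(lam, C10, C01, h=1e-6):
    # Numerical tangent dP/dlam; a Drucker-type check requires it to stay positive
    return (P1_uni_mr(lam + h, C10, C01) - P1_uni_mr(lam - h, C10, C01)) / (2.0 * h)

lams = [1.0 + 0.1 * k for k in range(1, 40)]
print(all(tangent(l, 0.3, 0.1) > 0 for l in lams))   # True: stable parameter set
print(all(tangent(l, 0.1, -0.3) > 0 for l in lams))  # False: tangent goes negative
```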

In this blog post, we have demonstrated how you can use the *Optimization* interface in COMSOL Multiphysics to fit a curve to multiple data sets. An alternative method for curve fitting that does not require the *Optimization* interface was also a topic of discussion in an earlier blog post. Just as we have used uniaxial and equibiaxial tension data here for the purpose of estimating material parameters, you can also fit the measured data to shear and volumetric tests to characterize other deformation states.

For detailed step-by-step instructions on how to use the *Optimization* interface for the purpose of curve fitting, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.