Cam-follower mechanisms are categorized based on the input and output motion of their configurations. The different types of cam-follower mechanisms are described below.
When a follower moves along a guide while a cam rotates, the motion is categorized as a rotating cam and translating follower. This is further categorized based on the motion of the follower. If the motion is along the axis passing through the center of rotation of the cam, then it is called a radial inline follower, whereas if the motion is along an offset from the axis, it is called a radial offset follower.
Animation showing the displacement and velocity plot of the radial inline follower.
Animation showing the displacement and velocity plot of the radial offset follower.
When the rotary motion of the cam is converted into oscillatory motion, the configuration is known as a rotating cam and oscillatory follower.
Animation showing an example of an oscillatory cam and follower.
When both the cam and the follower exhibit translational motion, the configuration is known as a wedge cam and follower. In this case, the motion of the follower is produced by the profile height of the cam.
Animation showing a displacement plot of a wedge cam.
There are cases in which the cam is stationary and the follower traces the profile of the cam. This type of arrangement is classified as a stationary cam and moving follower.
Animation showing displacement plot of a stationary cam and moving follower.
A point follower is a mechanism in which a pin on a follower slides in a slot. The slot can be of any profile.
Animation showing a displacement plot of a pin moving in a curved slot.
There are many ways to transform one type of motion into another. The variety of the cam-follower mechanism is limited only by your own imagination. For example, you can use a combination of the above configurations to generate a combined effect. A few common examples are barrel cams and end cams, which are used to convert rotary motion into translational and oscillatory motion.
Animation showing the combined translational and oscillatory motion of a barrel cam.
The Cam-Follower joint type, available with the Multibody Dynamics Module as of COMSOL Multiphysics® software version 5.3a, is used to model applications in which a point follows a surface. In other words, the contact is unique and occurs at a single point.
Usually, the active component is called the cam and the passive component is called the follower, but in the COMSOL® software, this feature is built in such a way that you can model both components as either active or passive.
With the default settings, the cam and follower will always remain in contact, which means that no chattering is allowed. To model intermittent contact, the Activation Conditions feature under the Cam-Follower joint type can be used. In addition, the cam and follower can both be modeled as rigid or flexible components.
In order to ensure that the point on the follower always follows the specified cam surface, an offset from the cam boundary is defined such that the gap distance is always zero. The implementation of these constraints is illustrated below:
The equation and sketch for the Cam-Follower feature in COMSOL Multiphysics with the add-on Structural Mechanics Module and Multibody Dynamics Module.
If you are not interested in the details of the formulation, you can skip to the next section.
To understand the formulation, let’s take a look at an example of a cam and roller configuration in which both the cam and the roller have been modeled as rigid components. For a rigid body, a point on the body can be located from the origin of a body-fixed (local) coordinate system (X, Y) by a position vector $\mathbf{X}_P$. Since the point is fixed to the body, the elements of this vector are constant in the body-fixed coordinate system, while in the global coordinate system, they vary with the rotation. The transformation from the local to the global orientation is represented in terms of a rotation matrix $\mathbf{R}$.
Schematic of the cam and roller configuration.
Now, the absolute position of a point P on body i can be represented as follows:

$$\mathbf{x}_P^i = \mathbf{x}_c^i + \mathbf{R}^i \mathbf{X}_P^i$$
Here, the position of point c is computed using the center of rotation of the rigid body, and it also serves as the origin for the body-fixed coordinate system.
So, the vector between the candidate contact points on the two bodies is

$$\mathbf{d} = \mathbf{x}_P^B - \mathbf{x}_P^A = \left(\mathbf{x}_c^B + \mathbf{R}^B \mathbf{X}_P^B\right) - \left(\mathbf{x}_c^A + \mathbf{R}^A \mathbf{X}_P^A\right)$$
As the common contact point is not fixed with respect to the bodies (A) and (B), a constraint needs to be defined between the two bodies to maintain continuous contact.
In COMSOL Multiphysics, the constraint is defined on the vector d such that its magnitude in the normal direction of the closest point on the cam surface is equal to the offset value.
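To make this constraint concrete, here is a minimal numerical sketch (not the COMSOL implementation) of the idea: the cam boundary is discretized as points with outward normals, the closest point to the follower is found, and the gap that the constraint drives to zero is the signed normal distance minus the offset. The cam dimensions and roller radius below are illustrative assumptions.

```python
import numpy as np

def normal_gap(cam_pts, cam_normals, follower_pt, offset):
    """Signed distance from follower_pt to the cam surface along the local
    outward normal, minus the prescribed offset (e.g., a roller radius)."""
    d = follower_pt - cam_pts                  # vectors from surface points to follower
    i = np.argmin(np.linalg.norm(d, axis=1))   # closest discretized surface point
    return d[i] @ cam_normals[i] - offset      # the gap the constraint drives to zero

# Example: eccentric circular cam (radius 30 mm, center 5 mm above the
# rotation axis), with a roller follower of radius 10 mm riding on top.
theta = np.linspace(0.0, 2*np.pi, 3600, endpoint=False)
center = np.array([0.0, 5.0])
cam_pts = center + 30.0*np.column_stack((np.cos(theta), np.sin(theta)))
cam_normals = np.column_stack((np.cos(theta), np.sin(theta)))  # outward normals

follower = np.array([0.0, 45.0])  # roller center: 5 + 30 + 10 above the axis
print(normal_gap(cam_pts, cam_normals, follower, offset=10.0))  # ~0: contact maintained
```

A positive gap would indicate separation; the Cam-Follower constraint, as described above, enforces a zero gap at all times unless intermittent contact is activated.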
For our example, let’s consider a spring-loaded valve-opening mechanism that has a rocker arm and a radial cam. In this mechanism, the cam rotation is prescribed and a spring is attached to the valve to restrict its motion.
Geometry of a valve-opening mechanism with a rocker arm and radial cam.
The objective of this analysis is twofold:
One of the main objectives behind mounting the spring on the valve is to force the follower to follow the cam profile and to avoid intermittent contacts between the cam and follower. Hence, the optimal value of the valve spring stiffness should enforce the contact between the cam and follower at all times, while simultaneously requiring the least torque to rotate the cam shaft.
Animation showing a displacement plot of the radial cam.
The follower velocity first increases when the follower comes in contact with the opening flank region of the cam profile. Later, it decreases and becomes zero at the tip of the nose region. Similarly, it increases in the reverse direction and becomes zero when the follower comes in contact with the heel region of the cam profile. In COMSOL Multiphysics, the sign convention for the joint’s velocity and acceleration is decided based on the specified axis of the joint: the value is positive if the destination attachment moves along the positive direction of the joint axis. In this case, the joint axis is defined along the z-axis, so the velocity is positive when the follower is moving upward and vice versa.
Variation of the follower velocity (left) and follower acceleration (right) with cam rotation.
You can see that the acceleration values are negative in the range of 60° to 120° of cam rotation. This is the region where the follower has a tendency to lose contact with the cam profile, which depends on the valve spring stiffness for a given camshaft RPM. By plotting the connection force vs. cam rotation, you can check which spring ensures continuous contact. The sign convention for the contact force is such that a positive value indicates that the cam and follower are no longer in contact, while a negative value shows that contact is still maintained.
Variation of the cam-follower connection force with cam rotation (left) and torque required to rotate the cam shaft (right) for different values of the valve spring stiffness.
Out of the four valve stiffness values, only 20 kN/m and 30 kN/m can enforce a continuous contact between the cam and follower. To choose the optimal value of valve stiffness, we can look at the required torque plot. It can be observed in this plot that the required torque is less for a value of 20 kN/m, hence this is the optimal value of the valve stiffness among the values considered in this analysis.
One common problem in cam design is determining the cam profile that generates a desired follower motion. In COMSOL Multiphysics, you can easily create a geometry based on the follower rise function, defined as the displacement of the follower as a function of the cam rotation. The first step is to derive the cam radius from the follower rise function. This relationship can be established analytically if the follower rise function is known in advance, which is often the case, since the desired output motion of the follower is frequently specified. The analytical approach is simpler than the graphical approach of constructing the cam surface, because the follower rise function is usually a combination of elementary functions.
Schematic representing the variation of the cam profile with the cam angle.
From the above image, it is clear that the radius r and the follower height h are functions of the angle of rotation, and that the follower rise function is simply the difference between the cam profile and the base circle. The relation between them is

$$r(\theta) = R_b + h(\theta)$$

where $R_b$ is the base circle radius.
Now, if the follower rise function is known, the variation of the radius (r) will represent a circle for each value of θ. So, it will give a family of curves. In order to generate the cam profile, we need to plot the envelope of the curves. This can be easily done in COMSOL Multiphysics using the Parametric Curve option.
Let’s take a simple example to illustrate this concept. Consider a simple knife edge radial follower with a known follower rise and a cam angle rotation. The rise function is such that there is outstroke during 60° of cam rotation, dwell for the next 30° of cam rotation, return stroke during the next 60° of cam rotation, and dwell for the remaining 210° of cam rotation.
Follower height as a function of the cam angle. (Follower rise function.)
First, you need to create an interpolation function and enter the data for h vs. θ, i.e., the follower rise function. Thanks to the flexibility of the COMSOL® software, you can directly import the follower rise function. After importing it, the cam surface can be generated using a parametric curve. For this, you first determine the base circle radius and then express the radius as a function of the follower rise function and the base circle radius. The parametric form is similar to that of a circle; the only difference is that the radius is a function of θ. To do this in COMSOL Multiphysics, you can use the interpolation function to define the follower rise function and then use it in the parametric curve to define r, the radius of the cam surface. Usually, the data describes a piecewise curve, so it is good practice to create a separate profile curve for each section of outstroke, return stroke, and dwell. Finally, you use the Convert to Solid operation to generate the cam profile.
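The same steps can be sketched outside of COMSOL. The script below builds the follower rise function for the knife-edge example (outstroke over 60°, dwell for 30°, return over 60°, dwell for the remaining 210°) and evaluates the parametric cam profile r(θ) = R_b + h(θ). The 10 mm lift, 25 mm base circle radius, and the simple harmonic motion law are illustrative assumptions, as the post does not specify them.

```python
import numpy as np

lift, r_base = 10.0, 25.0  # mm (assumed lift and base circle radius)

def h(theta_deg):
    """Follower rise function: outstroke 0-60°, dwell 60-90°,
    return 90-150°, dwell 150-360° (simple harmonic motion assumed)."""
    th = theta_deg % 360
    if th < 60:                                      # outstroke
        return lift*0.5*(1 - np.cos(np.pi*th/60))
    elif th < 90:                                    # dwell at full lift
        return lift
    elif th < 150:                                   # return stroke
        return lift*0.5*(1 + np.cos(np.pi*(th - 90)/60))
    else:                                            # dwell on the base circle
        return 0.0

# Cam profile in parametric form, like a circle but with r a function of theta:
theta = np.linspace(0, 360, 721)
r = np.array([r_base + h(t) for t in theta])
x = r*np.cos(np.radians(theta))
y = r*np.sin(np.radians(theta))
```

The arrays x and y trace the cam surface, the analogue of what the Parametric Curve feature produces before the Convert to Solid step.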
Schematic of the generated cam profile.
A plot of the follower rise as a function of the cam rotation, after performing a simulation with the generated cam profile.
There are also cases in which the motion of the follower is a combination of different analytical expressions, such as uniform, parabolic, simple harmonic, cycloidal, or general polynomial motion. In these cases, the cam profile can be easily created using the Analytic Function feature, combining the different motions over the full cam rotation. The analytic function accepts a symbolic expression, so you can write it directly as a function of θ.
In order to get a smoother surface representation, it is useful to increase the shape function order for the displacement to Quadratic. This applies both when the cam is rigid and when it is flexible.
If possible, use a fine mesh on the cam boundary to improve the accuracy of the mesh normal used in the Cam-Follower connection node.
The Structural Mechanics Module contains advanced tools for mechanical analyses. See what other types of analysis are possible by clicking the button below.
Note: The Cam-Follower feature also requires the Multibody Dynamics Module, an add-on to the Structural Mechanics Module.
In a recent video on YouTube from standupmaths, science enthusiasts Matt Parker and Hugh Hunt discuss and demonstrate the “mystery” of a tuning fork. When you strike a tuning fork and hold it against a tabletop, it seems to double in frequency. As it turns out, the explanation behind this mystery can be boiled down to nonlinear solid mechanics.
When you hold a vibrating tuning fork in your hand, the bending motion of the prongs sets the air around them in motion. The pressure waves in the air propagate as sound. You can hear it, but it is not a very efficient conversion of the mechanical vibration into acoustic pressure.
When you hold the stem of the tuning fork to a table, an axial motion in the stem connects to the tabletop. The motion is much smaller than the transverse motion of the prongs, but it has the potential to set the large flat tabletop in motion — a surface that is a far better emitter of sound than the thin prongs of a tuning fork. The tabletop surface will act as a large loudspeaker diaphragm.
Our tuning fork.
To investigate this interesting behavior, we created a solid mechanics computational model of a tuning fork. The model is based on a tuning fork that one of my colleagues keeps in her handbag. The tone of the device is a reference A4 (440 Hz), the material is stainless steel, and the total length is about 12 cm.
First, let’s have a look at the displacement as the tuning fork is vibrating in its first eigenmode:
The mode shape for the fundamental frequency of the tuning fork.
If we study the displacements in detail, it turns out that even though the overall motion of the prongs is in the transverse direction (the x direction in the picture), there are also some small vertical components (in the z direction), consisting of two parts: an axial displacement at the prong tips and an axial displacement of the stem.
The displacements are shown in the figures below. The mode is normalized so that the maximum total displacement is 1. The peak axial displacement is 0.03 and the displacement in the stem is 0.01.
Total displacement vectors in the first eigenmode.
Axial displacements only. Note that the scales differ between figures. The center of gravity is indicated by the blue sphere.
Now, let’s turn to the sound emission. By adding a boundary element representation of the acoustic field to the model, the sound pressure level in the surrounding air can be computed. The amplitude of the vibration at the prong tips is set to 1 mm. This is approximately the maximum feasible value if the tuning fork is not to be overloaded from a stress point of view.
As can be seen in the figure below, the intensity of the sound decreases rather fast with the distance from the tuning fork, and also has a large degree of directionality. Actually, if you turn a tuning fork around its axis beside your ear, the near-silence in the 45-degree directions is striking.
Sound pressure level (dB) and radiation pattern (inset) around the tuning fork.
We now add a 2-cm-thick wooden table surface to the model. It measures 1 by 1 m and is supported at the corners. The stem of the tuning fork is in contact with a point at the center of the table. As can be seen below, the sound pressure levels are quite significant in a large portion of the air domain above and outside the table.
Sound pressure levels above the table when the stem of the tuning fork is attached to the table.
For comparison, we plot the sound pressure level for the same air domain when the tuning fork is held up. The difference is quite stunning, with very low sound pressure levels in all parts of the air above the table except in the vicinity of the tuning fork. This matches our experience with tuning forks as shown in the original YouTube video.
Sound pressure levels for the tuning fork when held up.
So far, we have not touched on the original question: Why does the frequency double when the tuning fork is placed on the table? One possible explanation could be that there is such a natural frequency, which has a motion that is more prominent in the vertical direction. For a vibrating string, for example, the natural frequencies are integer multiples of the fundamental frequency.
This is not the case for a tuning fork. If the prongs are approximated as cantilever beams in bending, the lowest natural frequency is given by the expression

$$f_1 = \frac{1.875^2}{2 \pi} \sqrt{\frac{EI}{\rho A L^4}}$$
The quantities in this expression are: E, the Young’s modulus; I, the area moment of inertia of the cross section; ρ, the mass density; A, the cross-sectional area; and L, the length of the prong.
For our tuning fork, this evaluates to 435 Hz, so the formula provides a good approximation.
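As a quick check of this formula, the script below evaluates the first two cantilever frequencies for one steel prong. The 80 mm prong length comes from the estimate later in the post; the 4 mm circular diameter and the standard steel properties are assumed values for illustration.

```python
import numpy as np

# Assumed material and geometry (L = 80 mm is quoted later in the post):
E, rho = 200e9, 7850.0     # steel: Young's modulus (Pa), density (kg/m^3)
L, d = 0.080, 0.004        # prong length and assumed circular diameter (m)

I = np.pi*d**4/64          # second moment of area, circular cross section
A = np.pi*d**2/4           # cross-sectional area

# Cantilever eigenvalue roots of cos(lam)*cosh(lam) = -1:
lam1, lam2 = 1.875104, 4.694091
f1 = lam1**2/(2*np.pi)*np.sqrt(E*I/(rho*A*L**4))
f2 = lam2**2/(2*np.pi)*np.sqrt(E*I/(rho*A*L**4))
print(f1, f2/f1)  # close to 440 Hz, and the factor ~6.27 quoted below
```

With these assumed dimensions, the formula lands within a few hertz of the reference A4, consistent with the 435 Hz approximation quoted for the actual tuning fork.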
The second natural frequency of a cantilever beam is

$$f_2 = \frac{4.694^2}{2 \pi} \sqrt{\frac{EI}{\rho A L^4}}$$
This frequency is a factor 6.27 higher than the fundamental frequency. It cannot be involved in the frequency doubling. However, there are other mode shapes besides those with symmetric bending. Could one of them be involved in the frequency doubling?
This is unlikely for two reasons. The first reason is that the frequency doubling phenomenon can be observed for tuning forks with different geometries, and it would be too much of a coincidence if all of them have an eigenmode with exactly twice the fundamental natural frequency. The second reason is that nonsymmetrical eigenmodes have a significant transverse displacement at the stem, where the tuning fork is clenched. Such eigenmodes will thus be strongly damped by your hand, and have an insignificant amplitude. One such mode, with a natural frequency of 1242 Hz, is shown in the animation below.
The tuning fork’s first eigenmode at 440 Hz, an out-of-plane mode with an eigenfrequency of 1242 Hz, and the second bending mode with an eigenfrequency of 2774 Hz.
Let’s summarize what we know about the frequency-doubling phenomenon. Since it is only experienced when we press the tuning fork to the table, the double frequency vibration has a strong axial motion in the stem. Also, we can see from a spectrum analyzer (you can download such an app on a smartphone) that the level of vibration at the double frequency decays relatively quickly. There is a transition back to the fundamental frequency as the dominant one.
The dependency on the amplitude suggests a nonlinear phenomenon. The axial movement of the stem indicates that the stem compensates for a change in the location of the center of mass of the prongs.
Without going into the details of the math, it can be shown that for the bending cantilever, the center of mass shifts down, relative to the original length L, by a distance

$$\delta = \beta \frac{a^2}{L}$$
Here, a is the transverse motion at the tip and the coefficient β ≈ 0.2.
The important observation is that the vertical movement of the center of mass is proportional to the square of the vibration amplitude. Also, the center of mass will be at its lowest position twice per cycle (both when the prong bends inward and when it bends outward), thus the double frequency.
With a = 1 mm and a prong length of L = 80 mm, the maximum shift in the position of the center of mass of the prongs can be estimated to

$$\delta = 0.2 \cdot \frac{(1\ \mathrm{mm})^2}{80\ \mathrm{mm}} \approx 0.0025\ \mathrm{mm}$$
The stem has a significantly smaller mass than the prongs, so it has to move even more for the total center of gravity to maintain its position. The stem displacement amplitude can thus be estimated to 0.005 mm. This should be seen in relation to what we know from the numerical experiments above. The linear (440 Hz) part of the axial motion is of the order of a/100; in this example, 0.01 mm.
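The arithmetic behind these estimates can be laid out in a few lines. The prong-to-stem mass ratio of 2 is an assumed illustrative value, chosen only to reproduce the 0.005 mm stem estimate from the mass-balance argument above.

```python
# Back-of-the-envelope estimate of the second-order stem motion (units: mm).
beta = 0.2            # coefficient from the cantilever bending shape
a = 1.0               # prong tip amplitude
L = 80.0              # prong length

delta_cm = beta*a**2/L           # downward shift of the prongs' center of mass
mass_ratio = 2.0                 # m_prongs / m_stem (assumed for illustration)
stem_amp = mass_ratio*delta_cm   # stem motion needed to keep the total CoG fixed
linear_part = a/100              # linear 440 Hz axial stem motion, ~1% of a

print(delta_cm, stem_amp, linear_part)  # 0.0025 0.005 0.01
```

Note how the second-order amplitude scales with a², while the linear part scales with a: this is exactly why a hard strike is needed to make the doubled frequency noticeable.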
In reality, the tuning fork is a more complex system than a pure cantilever beam, and the connection region between the stem and the prongs will affect the results. For the tuning fork analyzed here, the second-order displacements are actually less than half of the back-of-the-envelope predicted 0.005 mm.
Still, the axial displacement caused by the second-order moving mass effect is significant. Furthermore, when it comes to emitting sound, it is the velocity, not the displacement, that is important. So, if displacement amplitudes are equal at 440 Hz and 880 Hz, the velocity at the double frequency is twice that at the fundamental frequency.
Since the amplitude of the axial vibration at 440 Hz is proportional to the prong amplitude a, and the amplitude of the 880 Hz vibration is proportional to a², it is necessary to strike the tuning fork hard enough to experience the frequency-doubling effect. As the vibration decays, the relative importance of the nonlinear term decreases. This is clearly seen on the spectrum analyzer.
The behavior can be investigated in detail by performing a geometrically nonlinear transient dynamic analysis. The tuning fork is set in motion by a symmetric impulse applied horizontally on the prongs, and is then left free to vibrate. It can be seen that the horizontal prong displacement is almost sinusoidal at 440 Hz, while the stem moves up and down in a clearly nonlinear manner. The stem displacement is highly nonsymmetrical, since the 440 Hz contribution is synchronous with the prong displacement, while the 880 Hz term always gives an additional upward displacement.
Due to the nonlinearity of the system, the vibration is not completely periodic. Even the prong displacement amplitude can vary from one cycle to another.
The blue line shows the transverse displacement at the prong tip, and the green line shows the vertical displacement at the bottom of the stem.
If the frequency spectrum of the stem displacement plotted above is computed using FFT, there are two significant peaks at 440 Hz and 880 Hz. There is also a small third peak around the second bending mode.
Frequency spectrum of the vertical stem displacement.
To actually see the second-order term at 880 Hz in action, we can subtract the part of the stem vibration that is in phase with the prong bending from the total stem displacement. This displacement difference is seen in the graph below as the red curve.
The total axial stem displacement (blue), the prong bending proportional stem displacement (dashed green), and the remaining second-order displacement (red).
How did we perform this calculation? Well, we know from the eigenfrequency analysis that the amplitude of the axial stem vibration is about 1% of the transverse prong displacement (actually 0.92%). In the graph above, the dashed green curve is 0.0092 times the current displacement of the prong tip (not shown in the graph). This curve can be considered as showing the linear 440 Hz term — a more or less pure sine wave. That value is then subtracted from the total stem displacement, and what is left is the red curve. The second-order displacement is zero when the prong is straight, and peaks both when the prong has its maximum inward bending and when it has its maximum outward bending.
Actually, the red curve looks very much like its time variation is proportional to sin²(ωt). It should be, since that displacement, according to the analysis above, is proportional to the square of the prong displacement. Using a well-known trigonometric identity, sin²(ωt) = (1 − cos(2ωt))/2. Enter the double frequency!
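This mechanism is easy to reproduce with a toy signal: a linear term proportional to the prong displacement plus a quadratic term proportional to its square. The 0.0092 factor is the one from the eigenfrequency analysis above; the 0.004 second-order amplitude is an assumed illustrative value.

```python
import numpy as np

fs, f0 = 44100, 440.0
t = np.arange(0, 1.0, 1/fs)
prong = np.sin(2*np.pi*f0*t)            # normalized prong tip displacement
stem = 0.0092*prong + 0.004*prong**2    # linear term + second-order term

spec = np.abs(np.fft.rfft(stem))
freqs = np.fft.rfftfreq(len(t), 1/fs)
peak = freqs[1 + np.argmax(spec[1:])]   # skip the DC bin produced by sin^2
print(peak)                             # 440.0 — the fundamental still dominates
print(spec[880] > 100*spec[850])        # True — a distinct line appears at 880 Hz
```

The spectrum shows exactly what the spectrum analyzer app reveals for a real tuning fork: a dominant line at 440 Hz plus a clear second line at 880 Hz generated by the squared term.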
Commenters on the original video from standupmaths have noticed that some tuning forks work better than others, and with some tuning forks, it is difficult to see the frequency doubling at all. As discussed above, the first criterion is that you hit it hard enough in order to get into the nonlinear regime. But there are also geometrical differences influencing the ratio between the amplitude of the two types of vibration.
For instance, prongs that are heavy relative to the stem will cause large double-frequency displacements, since the stem must move more in order to maintain the center of gravity. Slender prongs can have a larger amplitude–length (a/L) ratio, thus increasing the nonlinear term.
The design of the region where the prongs meet the stem is important. If it is stiff, then the amplitude of the fundamental frequency vibration in the stem will be reduced, and the relative importance of the double-frequency vibration is larger.
The cross section of the prongs will also have an influence. If we return to the expression for the natural frequency,

$$f_1 = \frac{1.875^2}{2 \pi} \sqrt{\frac{EI}{\rho A L^4}}$$
it can be seen that the moment of inertia of the cross section plays a role. A prong with a square cross section with side d has

$$I = \frac{d^4}{12}$$

while a prong with a circular cross section with diameter d has

$$I = \frac{\pi d^4}{64}$$
Thus, for two tuning forks that look the same when viewed from the side, the one with the square profile must have prongs that are a factor 1.14 longer to give the same fundamental frequency. If we assume the same maximum bending stress in the two tuning forks, the one with the square profile can have a transverse displacement amplitude that is a factor 1.14² larger than that of the circular one, because of its higher load-carrying capacity. In addition, if the stem is kept at a fixed size, it becomes proportionally lighter compared to the longer prongs. All of these contributions add up to a 70% increase in the vertical stem vibration amplitude when moving from a circular profile to a square profile.
In addition, tuning forks with a circular cross section usually have a design that is more flexible at the connection between the prongs and the stem, and thus a higher level of vibration at the fundamental frequency.
The conclusion is that a tuning fork with a square cross section is more likely to exhibit the frequencydoubling behavior than one with a circular cross section.
So, is it actually the doubled frequency that we hear? In most cases, the answer is “no.” The fundamental frequency is still there, even though it may have a lower amplitude than the component at the double frequency. But given the way our senses work, we hear the fundamental frequency, although with a different timbre. It is difficult, but not impossible, to strike the tuning fork so hard that the sound level of the double frequency is significantly dominant.
The frequency doubling occurs due to a nonlinear phenomenon, where the stem of the tuning fork must move upward, in order to compensate for the small lowering of the center of mass of the prongs as they approach the outermost positions of their bending motion.
Note that it is not the fact that the tuning fork is connected to the table that causes the frequency doubling. The reason that we measure it in that case is that the sound emitted by the resonating table surface is caused by the axial stem motion, whereas the sound we hear from the tuning fork that is held up is dominated by the prong bending. The motion is the same in both cases, as long as the impedance of the table is ignored. In fact, you can measure the doubled frequency with a tuning fork when held up as well, but it is 30 dB or so below the fundamental frequency.
Additive manufacturing is the process of creating a 3D object by adding one or more materials on top of each other layer by layer. To learn more about this type of manufacturing, we reached out to Professor Frédéric Roger of the Mines Telecom Institute, Lille-Douai Center. (IMT is a French public institution dedicated to higher education, research, and innovation in engineering and digital technologies.)
Professor Roger says that, in a sense, additive manufacturing is a bit like sewing or weaving. In both processes, a heterogeneous finished product is created by controlling how different raw materials are consolidated. In weaving, the materials are usually thread and yarn; however, additive manufacturing can use many materials, including polymers, metal alloys, ceramics, and composites.
Choosing the right materials is important for creating an ideal finished product, be it a warm blanket (left, woven by my grandmother) or a customized aerospace part (right). Right image in the public domain in the United States, via Wikimedia Commons.
This wide range of materials means that additive manufacturing can be used to design a large amount of unique objects across many industries. For instance, Roger mentions that by using the right materials and thermodynamic conditions, engineers can make objects that withstand or adapt to severe environmental conditions. Such objects could even adapt to certain temperatures or chemical conditions by changing their shape or releasing chemical species (like drugs) that are trapped in a matrix. A transformation over time would add another dimension to the printed part, resulting in “4D printing”.
Sometimes, additive manufacturing parts are inspired by natural forms, like the bioinspired example pictured here. Image courtesy Frédéric Roger.
According to Roger, the many opportunities that come with additive manufacturing make it “an unavoidable manufacturing process,” as it “offers new opportunities to develop optimized structures with advanced materials.” However, before engineers can create these structures, they have to improve the additive manufacturing process.
Since additive manufacturing is a complex process, it can be difficult to study. This technique varies based on the materials involved and the specific type of additive manufacturing. Studying this process also requires accounting for many different effects, such as:
To account for these factors, engineers can use the COMSOL Multiphysics® software, which Roger mentions is “a unique software that has great advantages in the simulation of additive manufacturing.” The software helps engineers to not only “optimize the additive manufacturing process but also to predict the mechanical and microstructural consequences on the product.” Through this, engineers can include all of the relevant physics and determine the ideal manufacturing conditions and part geometries that balance the needs of stiffness, weight reduction, and heat dissipation.
Left: An example of the additive manufacturing process, which involves many different physics. Image by Les Pounder — Own work. Licensed under CC BYSA 2.0, via Flickr Creative Commons. Right: Example of an additive manufacturing part created with two materials and filled with a honeycomb inner structure. Image courtesy Frédéric Roger.
A challenge is that analyzing the additive manufacturing process while coupling the relevant physics can result in large model sizes and long computational times. To overcome this issue, Roger implements several different simulation strategies, such as activating mesh properties, using adaptive remeshing, and performing sequential simulations.
By taking a sequential approach, Roger is able to better analyze the succession of thermodynamic states that a material experiences during additive manufacturing. At the same time, this approach helps to reduce the complexity of the multiphysics couplings by dissociating them over time. As such, sequential simulations provide a way to comprehensively model and optimize the additive manufacturing process while reducing computational costs.
For their simulations, Roger and his team focused on fused-deposition modeling (FDM®), a common additive manufacturing technique that is both affordable and enables control over process parameters. The aim of the study was to optimize the internal and external geometry of a printed thermoplastic part and achieve the best possible performance. To accomplish these goals in an efficient manner, the team split their analysis into three parts, discussed below.
For more information about this study, check out the researchers’ paper.
In the first part of the study, the researchers wanted to minimize the total weight of a printed structure while maintaining a material distribution that maximizes stiffness. To do so, they used topological optimization and structural mechanics analysis to study a mechanical structure exposed to a tensile load.
Original geometry and boundary conditions (left) and the Young’s modulus distribution that defines the optimal shape by color contrast (right). Left image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble presentation. Right image courtesy Frédéric Roger.
Through the studies, they found an optimal shape for the part, determining that the middle of the shape has the highest stress levels. As such, the researchers divided the structure into domains based on the stress concentration field: a high-stress middle area surrounded by two low-stress areas. In the following study, they used this information to apply specific manufacturing conditions to the high-stress zone.
The stress fields in the optimized geometry. Image courtesy Frédéric Roger.
In the second study, the researchers aimed to increase the stability of the high-stress zone in their part by testing two possible infill strategies:
In the heterogeneous case, the team created a more resistant domain in the high-stress middle area by using a higher density of infill. At the same time, they minimized the weight of the external areas by using less material. The results indicated that the ideal geometry contains 60% material in the high-stress region and 20% material in the low-stress regions.
Printing an optimized part using one material with varying densities. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.
As shown below, the multimaterial case involved using red ABS plastic on the ends of the part and black conductive ABS with improved mechanical properties in the middle. The team found that they could replace the conductive ABS with ABS-like materials that contain reinforcing fillers to increase stiffness.
Printing an optimized part using two materials. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.
After optimizing the inner and outer designs of the 3D-printed part, the researchers modeled the fused thermoplastic deposition process and evaluated manufacturing parameters. The resulting simulations helped them to accurately predict thermal history, wetting conditions, polymer crystallization, interactions between filaments, and residual stresses and strains. One example is shown below, depicting the plastic strain during the heating and cooling process.
The fusion and solidification of a disk that is irradiated by a laser beam as well as the resulting plastic strain evolution. This analysis takes Newtonian fluid flow and solid thermomechanical properties into account. Animation courtesy Frédéric Roger.
The study also investigated the heat and mass transfer within the first two layers of a thin-walled tube. The researchers were then able to analyze the plastic droplet deposition process and identify areas where the filaments reached fusion temperature, ~230°C for ABS droplets. The animations of the material deposition study are shown below. They depict a heat source moving along a deposition pattern and heating the filaments up to fusion temperature. The extruder path domain in the simulations is premeshed and the meshes are continuously activated depending on the extruder’s position.
Two-layer circular deposition (top). The moving heat source represents the hot ABS deposition. The thermal expansion of the two layers (amplified by a factor of five), showing the moving heat source activating the properties of the material (bottom). Here, blue indicates a nonactivated mesh and the physical properties (thermal conductivity and stiffness) are close to zero. Animations courtesy Frédéric Roger.
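Why the thermal history matters for adhesion can be illustrated with a lumped-capacitance estimate of how long a freshly deposited thin layer stays hot. All property and geometry values below are rough assumptions for ABS, not parameters from the study:

```python
import math

def cooling_time(T_start, T_end, T_amb, h=50.0, rho=1040.0, cp=1500.0,
                 thickness=0.3e-3):
    """Time [s] for a thin layer to cool from T_start to T_end (in deg C).

    Assumes lumped capacitance (uniform layer temperature) with an
    effective convective/radiative coefficient h acting on one face.
    """
    tau = rho * cp * thickness / h  # thermal time constant [s]
    return tau * math.log((T_start - T_amb) / (T_end - T_amb))

# Rough window during which the next filament can still fuse well:
t_fuse = cooling_time(230.0, 110.0, 25.0)  # on the order of seconds
```

Such an estimate only bounds the behavior; the simulations described above resolve the actual temperature field between filaments, including conduction into previously deposited layers.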
Using these simulations, Roger and his team predicted the temperature field between the filaments during the deposition process, an important factor that affects filament adhesion. Similar analyses could help researchers compare different additive manufacturing conditions and determine the best deposition strategy for a specific application.
Roger says that these simulations enabled his team to “define an additive manufacturing part whose internal and external architectures give it the best possible industrial performance.” Of course, this is only the start of what can be achieved by combining additive manufacturing and multiphysics simulation.
If you have any tips for using COMSOL Multiphysics to study the additive manufacturing process, be sure to let us know in the comments below!
FDM is a registered trademark of Stratasys, Inc.
Triaxial testing is a method used to determine the stress-strain properties of soils by subjecting soil samples to constant lateral pressure while increasing vertical pressure. This test measures stresses in three mutually perpendicular directions.
The mechanical behavior of a rock, sand, or soil sample can be complicated to analyze, depending on the specimen. Even if the soil seems stable at first, the construction process can subject it to shifts and uneven settlements later on. Just look at what happened to the Tower of Pisa. This popular tourist destination was built on softer ground (clay, sand, and shells), which shifted over time and caused the tower to lean off-kilter.
The Leaning Tower of Pisa, Italy. Image by Alkarex Malin äger — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
A triaxial test obtains the shear strength parameter measurements needed for a construction project. One of the reasons why triaxial testing is so common is because of its versatility. During the procedure, you are able to control drainage; measure pore water pressures, stress, and strain; increase loads; and observe deflections until sample failure.
In a typical triaxial test, the soil sample is placed inside a rubber membrane and then axially compressed while maintaining a constant radial pressure. If the soil is cohesive, you can prepare the samples directly from saturated, compacted samples. If the soil lacks cohesion, you can use a mold to keep the shape needed for the test.
There are three main triaxial tests: consolidated drained (CD), consolidated undrained (CU), and unconsolidated undrained (UU).
Triaxial tests have a wide variety of application areas. For instance, triaxial testing is used in the oil and gas industry to determine the properties of shale cores and predict how soil responds during natural gas extraction. Also, triaxial shear tests are used for building dams and embankments. For applications such as underground expansion in cities as well as commercial and residential construction, triaxial testing is performed before excavation.
One advantage of triaxial testing is the adaptable machinery design, which includes many modification options. If you are testing high-strength rock, for instance, you can change the cylinder sleeve material from rubber to a thin metal sheeting. You can also vary the design of the triaxial testing apparatus to account for large loads or different methods of compression.
A triaxial rock test system. Image by Jingyi Cheng, Zhijun Wan, Yidong Zhang, Wenfeng Li, Syd S. Peng, Peng Zhang — Own work. Licensed under CC BY 4.0, via Wikimedia Commons.
How does triaxial testing apply to other environmental conditions that are harder to predict, like seismic shifts? During an earthquake, for instance, the mechanical behavior of soil can change considerably. The type of change it undergoes depends on several variables, such as:
In addition, earthquakes can initiate soil liquefaction. This type of quicksand can continue to cause damage long after the earthquake is over. Triaxial testing is often used to help predict structural damage in the event of a natural disaster.
Liquefaction after the earthquake in Niigata, Japan, in 1964. Image in the public domain in the United States, via Wikimedia Commons.
The Triaxial Earthquake and Shock Simulator (TESS) is an advanced example of a triaxial testing method for earthquake preparation. This device is operated by the U.S. Army Engineer Research and Development Center. TESS is able to independently control three axes at the same time, which provides more realistic earthquake conditions for engineers who want to test facilities and equipment for vulnerabilities.
Using the Geomechanics Module, an add-on to the Structural Mechanics Module and COMSOL Multiphysics, you can model a triaxial testing apparatus to examine its loading and unloading curves.
In this example, the apparatus consists of a cylinder that presses the soil sample from the top. A flexible membrane controls the surrounding pressure, which allows changes in the radial forces.
In the schematic below, you see how the different boundaries of the testing apparatus can be defined. Because the soil is subjected to loading at the top of the apparatus, the Prescribed Displacement boundary condition is used to slowly increase the vertical displacement.
Dimensions, boundary conditions, and boundary load for the triaxial apparatus.
This model is a starting point for testing the soil’s model parameters. The soil properties are taken from a standard clay material. This example uses the Soil Plasticity feature in the COMSOL® software with the Drucker-Prager criterion.
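For readers unfamiliar with the criterion, the Drucker-Prager surface is a smooth cone in stress space that is often fitted to Mohr-Coulomb cohesion and friction-angle data. The sketch below uses the standard match in triaxial compression; it illustrates the textbook criterion rather than COMSOL's internal implementation, and the clay parameters are assumed:

```python
import math

def drucker_prager_params(c, phi_deg):
    """Drucker-Prager constants matched to Mohr-Coulomb in triaxial compression.

    c: cohesion [Pa], phi_deg: friction angle [degrees].
    """
    phi = math.radians(phi_deg)
    denom = math.sqrt(3.0) * (3.0 - math.sin(phi))
    alpha = 2.0 * math.sin(phi) / denom
    k = 6.0 * c * math.cos(phi) / denom
    return alpha, k

def dp_yield(s1, s2, s3, alpha, k):
    """Yield function F = sqrt(J2) + alpha*I1 - k (tension positive).

    F < 0 means elastic; F = 0 means the stress state lies on the yield surface.
    """
    I1 = s1 + s2 + s3
    mean = I1 / 3.0
    J2 = ((s1 - mean)**2 + (s2 - mean)**2 + (s3 - mean)**2) / 2.0
    return math.sqrt(J2) + alpha * I1 - k

# Assumed clay-like parameters: 10 kPa cohesion, 30 degree friction angle
alpha, k = drucker_prager_params(10e3, 30.0)
hydrostatic_F = dp_yield(-10e3, -10e3, -10e3, alpha, k)  # elastic (F < 0)
```

With this matching, the cone touches the Mohr-Coulomb surface along the triaxial-compression meridian, which is the loading path a triaxial test applies.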
After applying a vertical displacement and a confinement pressure to the specimen sample, you can study the results of the static response and collapse load for various confinement pressures.
After loading the soil sample, you can look at the effective plastic strain. Most of the sample suffers from plastic deformation (shown in red), with only a small part in the elastic region (shown in blue).
From here, you can test different confinement pressures by plotting the additional loading stress on the soil sample caused by the prescribed top displacement. Below is the extra loading stress for three different confinement pressures on the porous matrix:
Depending on the loading and draining conditions of the soil samples, you can conduct one or more of the three triaxial testing methods.
To get started with modeling a triaxial testing apparatus, click the button below. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file for detailed instructions.
What if there was a bright side to hitting a pothole? Innovations in vehicle suspension technology could make this possible. Potential developments include a method for converting kinetic energy into electrical energy to power vehicles, software-driven shocks that can mitigate potholes, and mechanical suspension settings that adjust with voice commands.
Enhanced suspension systems are not possible without first developing a strong foundation. The suspension system in any vehicle, after all, needs to adapt to load variations, absorb dips and bumps in the road, and more. If not, common suspension problems arise, such as poor wheel alignment, wearing springs, and damaged dampers.
An example of a chassis with a suspension system. Image by Christopher Ziemnowicz — Own work. Licensed under CC BY-SA 2.5, via Wikimedia Commons.
By setting up a simplified lumped model in the COMSOL Multiphysics® software, you can analyze and optimize vehicle suspension system designs.
Available as of version 5.3a of COMSOL Multiphysics®, the Lumped Mechanical System interface can be used for modeling discrete mechanical systems in a nongraphical format. This can be in terms of masses, dampers, and springs. You have the option of connecting these systems to a 2D or 3D Multibody Dynamics interface. When modeling a lumped mechanical system, you can use both the Lumped Mechanical System and Multibody Dynamics interfaces within the Multibody Dynamics Module.
In this tutorial, the lumped model of the vehicle suspension system has three main components: the wheels, the seats, and the vehicle body.
The lumped model of a vehicle suspension system with three main components.
Each wheel has one degree of freedom (DOF) and is represented by a green circle in the image above. Each seat is represented by a blue circle and also has one DOF. At the center of gravity, the body has three DOFs: the roll and pitch rotations and the heave (vertical) displacement.
You can use a Rigid Domain node and Prescribed Displacement/Rotation subnode in the Multibody Dynamics interface to restrict the number of DOFs for the body.
To model the wheel and seat, you use the Mass, Spring, and Damper nodes within the Lumped Mechanical System interface. The full vehicle model includes all four wheels and four seats, and both components are defined as a subsystem.
In the schematic below, the mass (m), spring (k), and damper (c) are shown. The lumped model of the wheel accounts for its mass and stiffness, as well as the stiffness and damping of the vehicle suspension. The lumped model of the seat accounts for its stiffness and damping, as well as the mass of the passenger.
The lumped model of a wheel and seat.
The Lumped Mechanical System interface enables you to model the vehicle body as an External Source in the lumped mechanical system. This helps to connect the suspension system with the vehicle body at the wheel-body and body-seat points.
Through a transient analysis, you can compute both the vehicle motion and seat vibration levels for a given road profile. In this scenario, the bump height for the road is 4 cm and the width is 7.5 cm. The vehicle is assumed to be moving with a constant speed of 40 km/h. The road profile is modeled by assuming a series of bumps on the road, but only the left wheels of the vehicle are assumed to be moving over the bumps.
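To see the kind of response such a model produces, a stripped-down quarter-car version (one wheel plus a quarter of the body) can be integrated directly. The bump size and speed match the text, but the masses, stiffnesses, and damping values are illustrative assumptions, not the tutorial's parameters:

```python
import math

def road_height(x, bump_height=0.04, bump_width=0.075):
    """Half-sine bump profile [m] starting at position x = 0 [m]."""
    if 0.0 <= x <= bump_width:
        return bump_height * math.sin(math.pi * x / bump_width)
    return 0.0

def simulate(v=40.0 / 3.6, dt=1e-5, t_end=0.5,
             m_s=300.0, m_u=40.0,   # sprung (body) / unsprung (wheel) mass [kg]
             k_s=2.0e4, c_s=1.5e3,  # suspension stiffness [N/m] / damping [N*s/m]
             k_t=2.0e5):            # tire stiffness [N/m]
    """Semi-implicit Euler integration of a 2-DOF quarter car over the bump."""
    z_s = z_u = vz_s = vz_u = 0.0  # displacements about static equilibrium
    peak = 0.0
    t = 0.0
    while t < t_end:
        zr = road_height(v * t)
        f_susp = k_s * (z_u - z_s) + c_s * (vz_u - vz_s)  # force on the body
        f_tire = k_t * (zr - z_u)                          # force on the wheel
        vz_s += dt * f_susp / m_s
        vz_u += dt * (f_tire - f_susp) / m_u
        z_s += dt * vz_s
        z_u += dt * vz_u
        peak = max(peak, abs(z_s))
        t += dt
    return peak

peak_heave = simulate()  # peak body displacement, a fraction of the bump height
```

The full tutorial couples four such wheel and seat subsystems to a rigid body with roll, pitch, and heave DOFs, which is what produces the asymmetric response when only the left wheels hit the bumps.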
Let’s take a look at the time history of the vehicle’s roll, pitch, and heave. These results could be useful for designing shocks that intuitively reduce the amount of roll, pitch, and heave after hitting a pothole.
As shown below, the roll rotation is larger than the pitch rotation for the given road excitation as the left side of the vehicle is moving over the bumps given in the road profile. You can also see the corresponding velocities for the roll, pitch, and heave motions in the velocity plot below on the right. Two different frequencies — low and high — correspond to the natural frequencies for the components of the system.
Vehicle roll, pitch, and heave motions at the center of gravity (left) and corresponding vehicle velocities (right).
If you want to harness the kinetic energy induced by hitting a pothole, for example, you need to determine how the vehicle moves and the rate at which it moves. In this case, you can analyze the time history of displacement and acceleration at all four seat locations. The seat displacement results show that the left side of the vehicle has a much larger displacement because this side goes over the bumps in the road, whereas the right side does not.
Time history of seat displacements (left) and seat accelerations (right).
Finally, to determine how soft or hard the suspension is and modify it accordingly, we want to find out what the forces are in the springs. The results show that the force magnitude in the spring and damper of a wheel is much larger than that of a seat. This is because the force is absorbed by the inertia of the wheels and the vehicle body, so only a fraction of the force is transmitted from the wheel to the seat. Additionally, the frequency of vibration is much lower for the forces in the seat compared to the forces in the wheel — making for a smoother ride.
Forces in the springs and damper of the frontleft wheel (left) and frontleft seat (right).
This simplified model provides a solid foundation for analyzing vehicle suspension, which you can then compare to data from experiments. With verified results, you can enhance suspension system designs for realworld performance.
Try the Lumped Model of a Vehicle Suspension System tutorial yourself via the button above. From there, you can download the MPH-file if you have a COMSOL Access account and a valid software license.
In an interference fit, also known as a press fit, two parts are joined together with minimal space in between. Initially, the inner object has a slightly larger diameter than the outer one. The outer part is heated until it is large enough to fit the inner part. After the inner part is inserted, the outer object shrinks over it during cooling, which strengthens the fastening. A suboptimal interference fit can cause the parts to come loose, bulge excessively because of permanent plastic deformations, or make it impossible for the parts to fit together. However, with the right interference fit, the parts essentially fuse into one piece that can be held together, even when met with a large amount of torque.
A new take on the popular Goldilocks fairy tale: Instead of a bowl of porridge heated to her preferred temperature, Goldilocks finds an interference fit for two pipes that is neither too tight nor too loose.
An optimized interference fit reduces unwanted effects in a structure. For instance, in ball bearing assemblies, the interference fit protects against the bearing sliding on the shaft. If the interference fit is too loose, sliding will still occur, but if it is too tight, the ball bearing will experience increased operation temperature and wear particles.
For the best performance of a structure, the interference fit needs to be optimized with the application in mind. By building a simulation app, it is possible to efficiently evaluate parameters that affect the interference fit between two parts.
The Interference Fit Calculator computes and visualizes the interference fit between two pipes. The app includes an Input section where app users can enter different geometry parameters to quickly and easily test designs.
In this example, the inputs include:
The userfriendly interface of the app makes it easy to compute and visualize the results of the interference fit analysis. You can include buttons such as Solve, Reset to Default, Create Report, and Open Documentation for app users to run and view different analyses.
The different Results tabs of the app enable you to visualize how slight parameter changes affect the interference fit, as computed by the underlying model. The results show the maximum transferable torque and axial force, as well as the effective stress, contact pressure, and pipe deformation for different inputs.
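The quantities reported by the app follow from the classical thick-walled-cylinder (Lamé) press-fit relations found in machine design texts. The sketch below shows those textbook formulas for two pipes of the same material (so Poisson's ratio cancels); the dimensions, interference, and friction coefficient are assumed example values, not the app's defaults:

```python
import math

def contact_pressure(delta, d, D_o, D_i, E):
    """Contact pressure [Pa] from the diametral interference delta [m].

    d: nominal interface diameter, D_o: outer diameter of the outer pipe,
    D_i: inner diameter of the inner pipe, E: Young's modulus (same material).
    """
    hub = (D_o**2 + d**2) / (D_o**2 - d**2)
    shaft = (d**2 + D_i**2) / (d**2 - D_i**2)
    return E * delta / (d * (hub + shaft))

def max_torque(p, d, L, mu):
    """Largest torque [N*m] the fit can transmit before slipping."""
    return p * mu * math.pi * d**2 * L / 2.0

def max_axial_force(p, d, L, mu):
    """Largest axial force [N] the fit can transmit before slipping."""
    return p * mu * math.pi * d * L

# Assumed example: 50 mm steel interface, 30 um diametral interference
p = contact_pressure(delta=30e-6, d=0.05, D_o=0.07, D_i=0.035, E=200e9)
T = max_torque(p, d=0.05, L=0.04, mu=0.15)  # contact pressure ~20 MPa
```

An app adds value on top of such closed-form estimates by also checking for plastic deformation and resolving the actual stress and contact-pressure distributions.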
The Interference Fit Calculator is an example of what you can create with the Application Builder, a builtin tool included with the COMSOL Multiphysics® software. As you are in control of an app’s design, you can include different inputs and outputs to suit your needs.
The Interference Fit Calculator in action.
With a simulation app, you can test different parameters to optimize the interference fit for your specific structural application.
Click the button below to try the Interference Fit Calculator example.
One common treatment for atherosclerosis is a procedure called percutaneous transluminal angioplasty, which removes or compresses unwanted plaque that has built up in a patient’s coronary artery. This procedure sometimes relies on stents, placed within a blocked artery by an angioplasty balloon.
After reaching the intended location, the balloon inflates the stent, which locks into an expanded position. The balloon is then deflated and removed, while the stent remains in the artery. The expanded stent functions like a scaffold, keeping the blood vessel open and enabling blood to flow normally.
A stent example. Image by Lenore Edman — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
Of course, for the angioplasty procedure to be a success, the tools used must perform as intended. If the ends of the stent expand more than its middle — a common defect known as dogboning — the artery can face serious damage. Another potential issue is foreshortening, which makes it challenging to position the stent and can also damage the artery.
To avoid these issues and make the angioplasty a success, it’s necessary to evaluate stent designs. One step in this process is analyzing the deformation experienced by a stent.
For this example, let’s examine a Palmaz-Schatz stent model, the geometry of which is seen below. This model looks at the stress and deformation in a stainless steel stent that is expanded via a radial outward pressure on the tube’s inner surface. (The pressure represents the balloon expansion.) The original diameter of the stent is 0.74 mm, but after the expansion period, the middle section has a diameter of 2 mm.
Thanks to the inherent symmetry of the stent’s geometry, we can minimize the computational costs of this simulation by reducing the size of the model to 1/24 of its original geometry.
The full stent geometry. The reduced geometry used in this example is represented by the darker meshed area.
First, let’s look at the various stresses and strains experienced by the stent during operation. Below, we see the stress distribution in the stent at maximum balloon inflation (left) and the residual stress in the stent after balloon deflation (right). As expected, stress in the stent is reduced after the balloon deflates.
Stress in the stent during balloon expansion (left) and after balloon deflation (right).
Moving on, we analyze how the effects of dogboning (blue) and foreshortening (green) change in relation to pressure during balloon inflation. Using this plot, we can check for potentially harmful effects in the stent design and optimize its performance.
Dogboning and foreshortening in the stent vs. the pressure in the angioplasty balloon.
We also examine the effective plastic strains in the tube at maximum dogboning, as seen in the following image.
Effective plastic strains and deformation at the time of maximum dogboning. The peak value is about 25%.
In regard to the recoil parameters, note that the longitudinal recoil is around −0.9%, the distal radial recoil is about 0.4%, and the central radial recoil is approximately 0.7%. These parameters provide more details on how the stent behaves when the inflated balloon is removed.
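The defect measures discussed above are simple ratios of the deformed dimensions. Definitions vary somewhat across the stent literature, so the conventions below are one common choice, and the input dimensions are hypothetical:

```python
def dogboning(d_distal, d_central):
    """Relative over-expansion of the stent ends versus its middle."""
    return (d_distal - d_central) / d_central

def foreshortening(L_initial, L_expanded):
    """Relative axial shortening of the stent during expansion."""
    return (L_initial - L_expanded) / L_initial

def radial_recoil(d_inflated, d_deflated):
    """Relative loss of diameter after the balloon is deflated."""
    return (d_inflated - d_deflated) / d_inflated

# Hypothetical post-expansion dimensions (mm):
db = dogboning(2.3, 2.0)        # ends 0.3 mm wider than the middle -> 15%
fs = foreshortening(10.0, 9.6)  # stent shortens by 0.4 mm -> 4%
```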
With the information provided by simulations like this one, engineers can improve the design of stents and optimize their use in biomedical applications. To try this example for yourself, click on the button below.
During the thermophotovoltaic (TPV) cell energy production process, fuel burns within an emitting device that intensely radiates heat. Photovoltaic (PV) cells capture this radiation and convert it into electricity, with an efficiency of 1–20%. The required efficiency depends on the intended application of the cell. For example, efficiency is not a major factor when TPVs are used to cogenerate electricity within heat generators. On the other hand, efficiency is critical when TPVs are used as electric power sources for vehicles.
Left: Simplified schematic depicting the electricity generation process of a TPV. Right: An image from a prototype TPV system. Right image courtesy Dr. D. Wilhelm, Paul Scherrer Institute, Switzerland.
To improve the efficiency of TPV systems, engineers need to maximize radiative heat transfer, but this comes with a catch. The more radiation in the system, the less radiation converted to electric power. These losses — as well as conductive heat transfer — raise the temperature of the PV cell. If the temperature increases too much, it can exceed the operating temperature range of the PV cell, causing it to stop functioning.
One option for increasing the operation temperature of a TPV system is to use highefficiency semiconductor materials, which can withstand temperatures up to 1000°C. Since these materials tend to be expensive, engineers can reduce costs by combining smallerarea PV cells with mirrors that focus radiation onto the cells. Of course, there is a limit to how much the beams can be focused, since the cells overheat if the radiation intensity gets too high.
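The design tension described here, where more radiation means more power but also more heating, is rooted in the Stefan-Boltzmann law: net radiative exchange between surfaces grows with the fourth power of absolute temperature. The idealized estimate below treats the emitter and cell as parallel blackbody-like surfaces; the emissivity and temperatures are assumed values for illustration:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def net_radiative_flux(T_emitter, T_cell, emissivity=1.0):
    """Idealized net radiative flux [W/m^2] between two gray surfaces."""
    return emissivity * SIGMA * (T_emitter**4 - T_cell**4)

# Raising the emitter from 1800 K to 2000 K boosts the flux by roughly 65%,
# which illustrates how quickly a hotter emitter can overheat the cell.
q_1800 = net_radiative_flux(1800.0, 1200.0)
q_2000 = net_radiative_flux(2000.0, 1200.0)
```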
Engineers designing TPV devices need to find optimal system geometries and operating conditions that maximize performance, minimize material costs, and ensure that the device temperature stays within the operating range. Heat transfer simulation can help achieve these design goals.
This example uses the Heat Transfer Module and the Surface-to-Surface Radiation interface to determine how operating conditions (e.g., the flame temperature) affect the efficiency of a normal TPV system as well as the temperature of the system’s components. The goal is to maximize surface-to-surface radiative heat fluxes while minimizing conductive heat fluxes. In this model, the effects of geometry changes are also evaluated.
The model geometry includes an emitter, mirrors, insulation, and a PV cell that is cooled by water on its back side. For details on setting up this model — including how to add conduction, surface-to-surface radiation, and convective cooling — take a look at the TPV cell model documentation.
The TPV system model geometry.
To minimize the computational costs of the simulation, we use sector symmetry and reflection to reduce the computational domain to one sixteenth of the original geometry. When modeling the surface-to-surface radiation, we expand this view to account for the presence of all of the surfaces in the full geometry.
First, let’s check the voltaic efficiency of the PV cell for a range of cell temperatures. In doing so, we see that the efficiency decreases as the temperature increases. When the temperature of the cell exceeds 1600 K, the efficiency is 0. As such, the maximum operational temperature for the PV cell design is 1600 K.
Plotting PV cell voltaic efficiency versus temperature.
In the next plots, we see how the temperature of the emitter affects the temperature of the PV cell and the electric output power. The cell temperature plot (left image below) indicates that the emitter temperature must be under ~1800 K to keep the PV cell below its maximum operating temperature of 1600 K.
Keeping this in mind, let’s take a look at the electric power output results (right image below). From the results, we conclude that the maximum electric power is achieved when the emitter temperature is ~1600 K.
Plotting PV cell temperature (left) and electric output power (right) against operating temperature.
Moving on, let’s examine the temperature distribution in the PV cell for the optimal operating condition (left image below) and compare it to a temperature that exceeds this operating temperature (right image below). The two plots highlight how the device’s temperature distribution varies due to operating conditions.
The stationary temperature distribution in the full TPV system when the emitter temperature is 1600 K (left) and 2000 K (right).
Looking closer at the plot of the optimal emitter temperature of 1600 K, we see that the PV cells are heated to a sustainable temperature of slightly above 1200 K. It is important to note that the outside part of the insulation reaches a temperature of 800 K, indicating that a large amount of heat is transferred to the surrounding air. In addition, the irradiative flux significantly varies around the PV cell circumference and insulation jacket.
To determine the cause of this variation, we generate a plot of the irradiative flux for a single sector of symmetry at a temperature of 1600 K. The graph indicates that the variation is caused by shadowing and is related to the mirror positions. Using this plot, we could optimize the cell size and placement of the mirrors for a PV design.
The irradiation flux at the TPV cell, insulation inner surface, mirrors, and emitter.
Using models like the one discussed here, engineers can efficiently find optimal operating conditions for TPV devices, minimizing prototype development and testing.
To try this TPV cell example yourself, download the model files above.
Picture a micromirror as a single string on a guitar. The string is so light and thin that when you pluck it, the surrounding air dampens the string’s motion, bringing it to a standstill.
This damping effect is important to many MEMS devices, and micromirrors in particular have a wide variety of potential applications. For instance, these mirrors can be used to control optic elements, an ability that makes them useful in the microscopy and fiber optics fields. Micromirrors are found in scanners, heads-up displays, medical imaging, and more. Additionally, MEMS systems sometimes use integrated scanning micromirror systems for consumer and telecommunications applications.
Closeup view of an HDTV micromirror chip. Image by yellowcloud — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.
When developing a micromirror actuator system, engineers need to account for its dynamic vibrating behavior and damping, both of which greatly affect the operation of the device. Simulation provides a way to analyze these factors and accurately predict system performance in a timely and costefficient manner.
To perform an advanced MEMS analysis, you can combine features in the Structural Mechanics Module and Acoustics Module, two add-on products to the COMSOL Multiphysics simulation platform. Let’s take a look at frequency-domain (time-harmonic) and transient analyses of a vibrating micromirror.
We model an idealized system that consists of a vibrating silicon micromirror — which is 0.5 by 0.5 mm with a thickness of 1 μm — surrounded by air. A key parameter in this model is the penetration depth; i.e., the thickness of the viscous and thermal boundary layers. In these layers, energy dissipates via viscous drag and thermal conduction. The thickness of the viscous and thermal layers is characterized by the following penetration depth scales:

δ_visc = sqrt(μ/(π f ρ)) and δ_therm = sqrt(k/(π f ρ C_p)) = δ_visc/sqrt(Pr)

where f is the frequency, ρ is the fluid density, μ is the dynamic viscosity, k is the coefficient of thermal conduction, C_p is the heat capacity at constant pressure, and Pr = μ C_p/k is the nondimensional Prandtl number.
For air, when the system is excited at a frequency of 10 kHz (which is typical for this model), the viscous and thermal scales are 22 µm and 18 µm, respectively. These are comparable to the geometric scales, like the mirror thickness, meaning that thermal and viscous losses must be included. Moreover, in real systems, the mirrors may be located near surfaces or in close proximity to each other, creating narrow regions where the damping effects are accentuated.
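These boundary layer scales are easy to reproduce with a few lines of Python. The air properties below are typical ambient values and are assumptions, so the results match the quoted scales in order of magnitude rather than exactly:

```python
import math

def viscous_depth(f, mu=1.81e-5, rho=1.2):
    """Viscous boundary layer thickness [m] at frequency f [Hz]."""
    return math.sqrt(mu / (math.pi * f * rho))

def thermal_depth(f, k=0.026, rho=1.2, cp=1005.0):
    """Thermal boundary layer thickness [m] at frequency f [Hz]."""
    return math.sqrt(k / (math.pi * f * rho * cp))

dv = viscous_depth(10e3)   # ~22 um, matching the value quoted above
dt = thermal_depth(10e3)   # also tens of micrometers
```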
The frequency-domain analysis provides insight into the frequency response of the system, including the location of the resonance frequencies, Q-factor of the resonance, and damping of the system.
The micromirror model geometry, showing the symmetry plane, fixed constraint, and torquing force components.
In this example, we use three separate interfaces: the Shell interface to model the structural vibrations of the thin mirror, the Thermoviscous Acoustics, Frequency Domain interface to model the air in the region near the mirror, and the Pressure Acoustics, Frequency Domain interface to model the surrounding air farther away.
By modeling the detailed thermoviscous acoustics and using the Thermoviscous Acoustics, Frequency Domain interface, we can explicitly include thermal and viscous damping while solving the full linearized Navier-Stokes, continuity, and energy equations. In doing so, we accomplish one of the main goals for this model: accurately calculating the damping experienced by the mirror.
To set up and combine the three interfaces, we use the Acoustic-Thermoviscous Acoustics Boundary and Thermoviscous Acoustics-Structure Boundary multiphysics couplings. We then solve the model using a frequency-domain sweep and an eigenfrequency study. These analyses enable us to study the resonance frequency of the mirror under a torquing load in the frequency domain.
Let’s take a look at the displacement of the micromirror for a frequency of 10 kHz and when exposed to the torquing force. In this scenario, the displacement mainly occurs at the edges of the device. To view displacement in a different way, we also plot the response at the tip of the micromirror over a range of frequencies.
Micromirror displacement at 10 kHz for phase 0 (left) and the absolute value of the z-component of the displacement field at the micromirror tip (right).
Next, let’s view the acoustic temperature variations (left image below) and acoustic pressure distribution (right image below) in the micromirror for a frequency of 11 kHz. As we can see, the maximum and minimum temperature fluctuations occur opposite to one another and there is an antisymmetric pressure distribution. The temperature fluctuations are closely related to the pressure fluctuations through the equation of state. Note that the temperature fluctuations fall to zero at the surface of the mirror, where an isothermal condition is applied. The temperature gradient near the surface gives rise to the thermal losses.
Temperature fluctuation field within the thermoviscous acoustics domain (left) and the pressure isosurfaces (right).
The two animations below show a dynamic extension of the frequency-domain data using the time-harmonic nature of the solution. Both animations depict the mirror movement in a highly exaggerated manner, with the first one showing an instantaneous velocity magnitude in a cross section and the second showing the acoustic temperature fluctuations. These results indicate that there are high-velocity regions close to the edge of the micromirror. We determine the extent of this region into the air via the scale of the viscous boundary layer (viscous penetration depth). We can also identify the thermal boundary layer or penetration depth using the same method.
Animation of the time-harmonic variation in the local velocity.
Animation of the time-harmonic variation in the acoustic temperature fluctuations.
When the problem is formulated in the frequency domain, eigenmodes or eigenfrequencies can also be identified. From the eigenfrequency study (also performed in the model), we can determine the vibrating modes, shown in the animation below (only half the mirror is shown as symmetry applies). Our results show that the fundamental mode is around 10.5 kHz, with higher modes at 13.1 kHz and 39.5 kHz. The complex value of the eigenfrequency is related to the Q-factor of the resonance and thus the damping. (This relationship is discussed in detail in the Vibrating Micromirror model documentation.)
Animation of the first three vibrating modes of the micromirror.
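The relation between a complex eigenfrequency and the Q-factor is the standard one, Q = |Re(f)| / (2 |Im(f)|). A minimal sketch (the example eigenfrequency below is a made-up illustration value, not a result from the model):

```python
def q_factor(f_complex):
    """Q-factor from a complex eigenfrequency: Q = |Re(f)| / (2 |Im(f)|)."""
    return abs(f_complex.real) / (2.0 * abs(f_complex.imag))

f_eig = 10.5e3 + 2.6j  # hypothetical solver output in Hz, for illustration
print(f"Q ~ {q_factor(f_eig):.0f}")
```

A small imaginary part relative to the real part thus corresponds to a lightly damped, high-Q resonance.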
As of version 5.3a of the COMSOL® software, a different take on this example solves for the transient behavior of the micromirror. Using the same geometry, we extend the frequency-domain analysis into a transient analysis. To achieve this, we swap the frequency-domain interfaces with their corresponding transient interfaces and adjust the settings of the transient solver. In the simulation, the micromirror is actuated for a short time and exhibits damped vibrations.
The resulting model includes some of the most advanced air and gas damping mechanisms that COMSOL Multiphysics has to offer. For instance, the Thermoviscous Acoustics, Transient interface generates the full details for the viscous and thermal damping of the micromirror from the surrounding air.
In addition, by coupling the transient perfectly matched layer capabilities of pressure acoustics to the thermoviscous acoustics domain, we can create efficient nonreflecting boundary conditions (NRBCs) for this model in the time domain.
Let’s start with the displacement results. The 3D results (left image below) visualize the displacement of the micromirror and the pressure distribution at a given time. We also generate a plot (right image below) to illustrate the damped vibrations caused by thermal and viscous losses. The green curve represents the undamped response of the micromirror when the surrounding air is not coupled to the mirror movement. The time-domain simulations make it possible to study transients of the system, like the decay time, and the response of the system to an anharmonic forcing.
Micromirror displacement and pressure distribution (left) and the transient evolution of the mirror displacement (right).
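As a rough illustration of extracting a decay time from a damped ring-down like this, here is a sketch on a synthetic damped cosine. The frequency, decay time, and amplitude are invented for the example and are not results from the model:

```python
import math

# Synthetic ring-down: x(t) = A * exp(-t/tau) * cos(2*pi*f*t)
f, tau, A = 10.5e3, 0.8e-3, 1.0   # hypothetical frequency, decay time, amplitude
dt = 1.0 / (200 * f)               # 200 samples per period
ts = [i * dt for i in range(int(5 * tau / dt))]
xs = [A * math.exp(-t / tau) * math.cos(2 * math.pi * f * t) for t in ts]

# Estimate tau from positive peaks: ln(peak amplitude) decays linearly in t
peaks = [(t, x) for t, x, xp, xn in zip(ts[1:], xs[1:], xs[:-1], xs[2:])
         if x > xp and x > xn and x > 0]
(t0, x0), (t1, x1) = peaks[0], peaks[-1]
tau_est = (t1 - t0) / math.log(x0 / x1)
print(f"tau_est = {tau_est*1e3:.2f} ms (true {tau*1e3:.2f} ms)")
```

The same envelope-fitting idea can be applied to the displacement history exported from a transient simulation.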
We can also examine the acoustic temperature variations surrounding the micromirror. The isothermal condition at the micromirror surface produces an acoustic thermal boundary layer. As with the frequency-domain example, the highest and lowest temperatures are located opposite to one another.
In addition, by calculating the acoustic velocity variations of the micromirror, we see that a no-slip condition at the micromirror surface results in a viscous boundary layer.
Acoustic temperature variations (left) as well as acoustic velocity variations for the x-component (center) and z-component (right).
These examples demonstrate that we can analyze micromirrors using advanced modeling features available in the Acoustics Module in combination with the Structural Mechanics Module. For more details on modeling micromirrors, check out the tutorials below.
The French scientist Barré de Saint-Venant formulated his famous principle in 1855, but it was more of an observation than a strict mathematical statement:
“If the forces acting on a small portion of the surface of an elastic body are replaced by another statically equivalent system of forces acting on the same portion of the surface, this redistribution of loading produces substantial changes in the stresses locally, but has a negligible effect on the stresses at distances which are large in comparison with the linear dimensions of the surface on which the forces are changed.”
B. de Saint-Venant, Mém. savants étrangers, vol. 14, 1855.
Portrait of Saint-Venant. Image in the public domain, via Wikimedia Commons.
Many great minds within the field of applied mechanics — Boussinesq, Love, von Mises, Toupin, and others — were involved in stating Saint-Venant's principle in a more exact form and providing mathematical proofs for it. As it turns out, this is quite difficult for more general cases, and research on the topic is still ongoing. (The debate has at times been quite heated.)
Let’s start with something quite simple: a thin rectangular plate with a circular hole at some distance from the loaded edge, which is being pulled axially. If we are interested in the stress concentration at the hole, then how important is the actual load distribution?
Three different load types are applied at the rightmost boundary:
As seen in the plots below, the stress distribution at the hole is not affected by how the load is applied. The key here is, of course, that the hole is far enough from the load.
Von Mises stress contours for the three load cases.
Another way of visualizing this scenario is by using principal stress arrows. Such a plot emphasizes the stress field as a flux and gives a good feeling for the redistribution.
Principal stress plot for the three load cases. Note that there is a singularity when a point load is used.
By graphing the stress along a line, we can see that all three cases converge to each other at a distance from the edge, which is approximately equal to the width of the plate.
Stress along the upper edge as a function of the distance from the loaded boundary. The distance is normalized by the width of the plate.
If the hole is moved closer to the loaded boundary, the situation changes. The stress state around the hole now depends on the load distribution. Even more interesting is that the distance at which the three stress fields agree is now twice as far from the loaded boundary. The application of Saint-Venant's principle requires that the stresses are free to redistribute; in this case, that redistribution is partially blocked by the hole.
Stress along the upper edge with the hole closer to the loaded boundary.
Note that Saint-Venant's principle tells us that there is no difference in the stress state at a distance that is of the order of the linear dimension of the loaded area. The loaded area to be taken into consideration, however, may not be the area that is actually loaded! This statement may sound strange, but think of it this way: When the hole is far away, we may compute the stress concentration factor using a handbook (mine says 4.32) rather than by an FE solution. The handbook approach contains an implicit assumption that the load is evenly distributed as in the first load case. So even if the actual load was applied to only a small part of the boundary, the critical distance in that case is related to the size of the whole boundary.
When solving the problem using the finite element method (FEM), the hole can be arbitrarily close to the load. What sets the limit is that, from the physical point of view, the load distribution must be well defined. As soon as we make assumptions about redistribution, however, there is an implicit assumption about the load distribution, which may differ from the actual one.
So far, we have said that the stresses are the same independent of the load details at some suitable distance. Since we are dealing with linear elasticity here, it is always possible to superimpose load cases. When working with proofs of Saint-Venant's principle, it is easier to formulate a principle along these lines: The stresses caused by a load system with no resulting force or moment will be small at a distance that is of the same order of magnitude as the size of the loaded boundary.
Thus, we study the stress caused by the difference between the two load systems with equal resultants. Most modern proofs are based on estimates of the decay of the strain energy density for such a zero-resultant system.
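The superposition argument can be checked numerically. The sketch below compares the resultants of a uniform edge traction and a statically equivalent mid-edge point load, confirming that their difference is a zero-resultant system; the edge width and load magnitude are arbitrary illustration values:

```python
# Two statically equivalent loads on a plate edge y in [0, w]:
#   case 1: uniform traction t(y) = F/w
#   case 2: point load F at mid-edge y = w/2
# Their difference has zero resultant force and zero resultant moment, so by
# Saint-Venant's principle its stress field decays away from the boundary.
w, F = 0.1, 1000.0          # hypothetical edge width [m] and total load [N]
n = 100_000
dy = w / n
ys = [(i + 0.5) * dy for i in range(n)]

# Resultants of the uniform traction (midpoint-rule quadrature)
F1 = sum(F / w * dy for _ in ys)
M1 = sum(F / w * (y - w / 2) * dy for y in ys)  # moment about the mid-edge

# Resultants of the point load (taken about the same point)
F2, M2 = F, 0.0

print(F1 - F2, M1 - M2)  # both differences vanish: a zero-resultant system
```

Note that equal resultants say nothing about the local stresses right at the boundary; the point-load case is still singular there.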
Returning to the problem above, we can compute the difference between the load cases. Doing so allows us to study the actual decay of stress or strain energy density for the difference of the stress fields.
Logarithm of strain energy density for the zero-resultant load cases.
The strain energy density along the plate for the zero-resultant load cases. The energy is integrated along the vertical direction in order to produce a quantity that is only a function of the distance from the load.
The decay in the logarithm of the strain energy density is more or less linear with the distance from the loaded boundary. This is actually in line with what modern proofs predict: an exponential decay of the strain energy density. We can also clearly see how the hole temporarily reduces the decay rate.
For thinner structures like shells, beams, and trusses, it is well known that Saint-Venant's principle cannot be applied the same way as for a more "solid" object. Disturbances travel longer distances than we would expect, because the load paths in a thin structure are much more limited. This is the same phenomenon we saw with the hole in the example above, only more pronounced.
Here, we study a beam with a standard IPE100 cross section. The end of the beam is subjected to an axial stress, with an amplitude that has a linear distribution in both cross-sectional directions.
Load distribution, displayed as contours and arrows.
Due to the symmetries, this load has a zero resultant force, as well as zero moment around all axes. The height of the cross section is 100 mm, so if the standard form of Saint-Venant's principle is applicable, then the stresses should be small at a distance of approximately 100 mm from the end section.
Effective stress in the beam. The red contour indicates where the stress is less than 5% of the peak applied stress.
It turns out that in order for the stress to be below 5% of the peak applied stress, we have to travel almost a meter along the beam. Thus, the load redistribution is much less efficient here, since the equilibration between the top and bottom flanges requires moment transfer through the thin web.
If you are familiar with the theory for nonuniform torsion of beams (i.e., warping theory or Vlasov theory), you will recognize that the applied load has a significant bimoment. The bimoment is a cross-sectional quantity with the physical dimension force × length².
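To make the bimoment concrete, the sketch below evaluates the resultants of a stress that is linear in both cross-sectional directions over a plain rectangular section. The sectorial coordinate is simplified to ω = y·z, which is a hypothetical choice (the true warping function of an IPE100 section differs), but it shows the essential point: the force and moments vanish while the bimoment does not.

```python
# Resultants of the end load sigma(y, z) = s0*(y/a)*(z/b) on a rectangular
# section |y| <= a, |z| <= b. The sectorial coordinate omega(y, z) = y*z is
# a simplification for illustration only.
s0, a, b = 1.0, 0.05, 0.05   # illustrative stress amplitude and half-widths
n = 400
dA = (2 * a / n) * (2 * b / n)
ys = [-a + (i + 0.5) * 2 * a / n for i in range(n)]
zs = [-b + (j + 0.5) * 2 * b / n for j in range(n)]

N = My = Mz = B = 0.0
for y in ys:
    for z in zs:
        s = s0 * (y / a) * (z / b)
        N  += s * dA             # axial force
        My += s * z * dA         # bending moment about y
        Mz += s * y * dA         # bending moment about z
        B  += s * (y * z) * dA   # bimoment, dimension force*length^2

print(N, My, Mz, B)  # N, My, Mz ~ 0 while B remains finite
```

A load with zero force and zero moment but nonzero bimoment is exactly the kind of system that decays slowly along a thin-walled beam.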
Maybe (this is just my personal speculation) an efficient Saint-Venant's principle for this case should require not only zero force and moment but also zero bimoment. This can be accomplished by adding four point loads that provide a counteracting bimoment. The result of such an analysis is shown below.
Effective stress with four point loads that also provide a zero bimoment. The 5% stress contour is now much closer to the loaded boundary.
The applied point loads, which are not optimally placed on purpose, give extremely high (actually singular) local stresses. However, the stress does drop off much faster and is below 5% after about 100 mm. The 5% limit is still in terms of the applied distributed load, so it is not adjusted for the new local stresses. The logarithmic decay rate of the strain energy density is three times faster after the point loads are added.
In some cases, you can intuitively consider Saint-Venant's principle to be applicable to the FE discretized problem. Here, we look at distributed loads and nonconforming meshes.
In the FE model, loads are always applied at the mesh nodes, even though you specify them as a continuous boundary load. The load is internally distributed to the nodes of the element using the principle of virtual work, as shown in the example below.
A linearly distributed load and how it is applied at the nodes of a secondorder Lagrange element with side length L.
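The virtual-work (consistent) nodal loads can be reproduced with a few lines of quadrature. This sketch uses the standard 1D second-order Lagrange shape functions on the natural coordinate ξ ∈ [0, 1]; for a uniform load it recovers the classic 1/6, 2/3, 1/6 split of the resultant q·L:

```python
# Consistent nodal loads f_i = integral_0^1 N_i(xi) * q(xi) * L dxi
# for a 1D quadratic Lagrange element with nodes at xi = 0, 1/2, 1.
def shape_funcs(xi):
    return [(1 - xi) * (1 - 2 * xi),  # end node 1
            4 * xi * (1 - xi),        # midside node
            xi * (2 * xi - 1)]        # end node 2

def nodal_loads(q0, q1, L, n=10_000):
    """Midpoint-rule quadrature for a linearly varying load q(xi) = q0 + (q1-q0)*xi."""
    f = [0.0, 0.0, 0.0]
    for k in range(n):
        xi = (k + 0.5) / n
        q = q0 + (q1 - q0) * xi
        for i, N in enumerate(shape_funcs(xi)):
            f[i] += N * q * L / n
    return f

# Uniform unit load on a unit-length element: the 1/6, 2/3, 1/6 split
print(nodal_loads(1.0, 1.0, 1.0))
```

Note that the sum of the nodal loads always equals the load resultant L·(q0 + q1)/2, which is why many different distributions produce identical nodal loads.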
There is, however, an infinite number of load distributions that give the same nodal loads as long as they share the same resultant force and moment. Obviously, the solution to the finite element problem is the same for all of these cases. From Saint-Venant's principle, however, we can conclude that all such loads should give essentially the same stress field as soon as we are some distance away.
Since the size of the area over which we redistribute loads is an element face, the linear dimension after which there is no difference is essentially one element layer inside the structure. Thus, the solution in the outermost layer of elements may not correspond to the actual load, but further in, it does.
As an example, we can load a rectangular plate with a boundary load that has an exponential stress distribution. The stress computed with a fine mesh is shown below.
Contour plot of the axial stress distribution.
Because of Saint-Venant's principle, the stress field is redistributed to a pure bending state at some distance from the loaded edge, just as we expect. This, however, is not the target of the current discussion. Rather, we investigate the difference between the stress distribution above, and what we get with a number of coarse meshes.
Error in axial stress for three different meshes. Note the different scales. As expected, the error is smaller when the mesh is finer.
As can be seen in the figure, the error quickly decreases after the first element layer. What we see here is actually a combination of mesh convergence and the redistribution of stresses implied by Saint-Venant's principle.
A nonconforming mesh occurs when the shape functions in two connected elements do not match. The most common case is when an assembly is connected using identity pairs and continuity conditions. To exemplify this, we can study a straight bar with an intentionally nonmatching mesh. With a simple load case, such as uniaxial tension, it is possible to study the stress disturbances caused by the transition.
Axial stress at a nonconforming mesh transition. Secondorder elements are used.
The forces transmitted by the nodes at the two sides do not match the assumption of constant stress. Again, this can be seen as a local load redistribution over an area that is the element size. Using the reasoning of Saint-Venant, the disturbance should fade away at an "element-sized" distance from the transition. Let's investigate what happens if the mesh is refined in the axial direction.
Region with more than 0.1% error in stress. Three different discretizations are used in the axial direction.
It turns out that the region of disturbance is not affected much by the discretization in the direction perpendicular to the transition boundary. This is exactly what Saint-Venant's principle tells us.
Without making use of Saint-Venant's principle, many structural analyses are difficult to perform, simply because the detailed load distribution is not known.
The principle is formally only valid for linear elastic materials. In practice, we also intuitively use it on a daily basis for other situations. If, for example, the material in the "plate with a hole" example were elastoplastic, we would expect the two distributed loads to give equivalent results, as long as the yield stress is above the stress applied at the boundary, so that plastic deformation occurs only around the hole. The point load, however, always gives a different solution, since the material yields around the loaded point. For a longer discussion, read this blog post on singularities at point loads.
Learn more about using the COMSOL Multiphysics® software for FEA.
Born in 1707 in Basel, Switzerland, Leonhard Euler (pronounced “oiler”) was a prolific mathematician who published more than 800 articles during his lifetime. He studied under the famous Johann Bernoulli and received his master’s degree in philosophy from the University of Basel. Before moving to St. Petersburg, Russia, to work at the university, Euler submitted his first paper to the Paris Academy of Sciences, coming in second place at only 19 years old.
A portrait of Leonhard Euler. Image in the public domain, via Wikimedia Commons.
Euler quickly rose through the academic ranks and in 1733 succeeded Bernoulli as the chair of mathematics in St. Petersburg. Euler moved to Berlin in 1741 at the invitation of King Frederick II. In his 25 years there, he wrote around 380 articles and the first volume of his seminal book Introductio in Analysin Infinitorum, which formally defined functions for the first time; introduced the notation f(x); popularized the symbols e and π; and established the critical formula e^(ix) = cos x + i sin x.
Joseph-Louis Lagrange (pronounced "luh-gronj") was born Giuseppe Lodovico Lagrangia in Turin. Today, this city is the capital of the region of Piedmont in Italy, but when Lagrange was born in 1736, it was ruled by the Duke of Savoy as part of the Kingdom of Sardinia. Lagrange developed an interest in mathematics and, after working independently on novel topics, began corresponding with Euler, whom he succeeded when Euler left Berlin.
A portrait of Joseph-Louis Lagrange. Image in the public domain, via Wikimedia Commons.
In Berlin, Lagrange developed most of the mathematics for which he is famous today. He played an important role in the development of variational calculus and came up with the Lagrangian approach to mechanics. Although Lagrangian mechanics makes the same predictions as Newton’s laws of motion, the Lagrangian functional introduced by Lagrange allows the classical mechanics of many problems to be described in a mathematically more straightforward and insightful manner than in Newtonian mechanics. Lagrange also developed the method of Lagrange multipliers, which allows constraints on systems of equations to be introduced easily in a variational approach.
The mathematical formulations of Euler and Lagrange are fundamental to the finite element method, which is used to solve equations in COMSOL Multiphysics.
In the Eulerian method, the dynamics of a system are considered from the viewpoint of an observer measuring the system’s evolution with respect to a fixed system of coordinates. This coordinate system is called the spatial frame in COMSOL Multiphysics. It could be understood to correspond to the laboratory frame in physical analysis, since the system of coordinates is oriented according to a fixed set of axes without any reference to the orientation of the components of the physical system itself.
The figure below illustrates a thin plate of material whose structural mechanics are modeled in a 2D plane. The plate is fixed to a rigid wall at the lefthand side and is deformed under its own weight, as gravity acts downward. With the results plotted in the spatial frame, we see the deformation of the object, as we would expect to observe in the laboratory.
A thin plate fixed to the gray block at the left deforms under its own weight, as viewed in the spatial (lab) frame. The deflection at the tip is about 5 mm for the given mechanical properties.
Formulating physical equations seems very natural in the Eulerian method. Indeed, this is the common formulation for problems such as electromagnetics and fluid physics, in which the field variables are expressed as functions of the fixed coordinates in the spatial frame.
For mechanical problems, though, the Lagrangian method offers a helpful alternative. In the Lagrangian method, the mechanical equations are written with reference to small individual volumes of the material, which will move within an object as it displaces or deforms dynamically. To put it another way, the object itself always appears undeformed from the point of view of the Lagrangian coordinate system, since the latter stays attached to the deforming object and moves with it, but external forces in the surroundings appear to change their orientation from the deforming object’s perspective. The corresponding coordinate system, which moves along with the deforming object, is called the material frame in COMSOL Multiphysics.
A point within the object, as measured in the spatial frame, is displaced from the position of the same point as expressed in the material frame by the mechanical displacement of that point. In the image below, we focus our view on the tip of the deforming plate in the example above and animate its deformation as the density of the object increases so that the weight increases too. As you can see, the material frame coordinate system (red grid and arrows) deforms together with the object, as the object’s dimensions in the spatial frame change. This means that anisotropic material properties — such as mechanical properties of composite materials — can be expressed conveniently in the material frame.
Zoomed-in view of the tip of a thin plate deforming under its own weight, as its density is increased. The red grid denotes the material frame coordinates, tied to the object, as viewed in the spatial (lab) frame. The red and green arrows show the x- and y-coordinate orientations of the material frame, as viewed in the spatial frame.
In the limit of very small strains for this type of mechanical problem, the spatial and material frames are nearly coincident, because the mechanical displacement is small compared to the object's size. In this case, it is common to use the "engineering strain" to define the elastic stress-strain relation for the object, and the resulting stress-strain equations are linear. As the mechanical displacement increases, though, the linear approximation used to evaluate the engineering strain is increasingly inaccurate — the exact Green-Lagrange strain is required. In COMSOL Multiphysics, the term "geometric nonlinearity" means that the Green-Lagrange strain is used.
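The difference between the two strain measures is easy to quantify in 1D, where for a stretch λ = l/L0 the engineering strain is λ − 1 and the Green-Lagrange strain is (λ² − 1)/2:

```python
# 1D comparison of strain measures for a uniaxial stretch lam = l/L0:
#   engineering strain:    lam - 1
#   Green-Lagrange strain: (lam^2 - 1)/2
def eng_strain(lam):
    return lam - 1.0

def green_lagrange_strain(lam):
    return 0.5 * (lam * lam - 1.0)

for lam in (1.001, 1.01, 1.1, 1.5):
    e, E = eng_strain(lam), green_lagrange_strain(lam)
    print(f"stretch {lam}: eng {e:.4f}, Green-Lagrange {E:.4f}, "
          f"rel. diff {(E - e) / e:.1%}")
```

The difference, (λ − 1)²/2, is quadratic in the strain itself, which is why the linear (engineering) measure is fine at 0.1% strain but noticeably off at 10%.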
For further details on the mathematics, see my colleague Henrik Sönnerlind’s blog post on geometric nonlinearity.
Geometric nonlinearity is handled in COMSOL Multiphysics by allowing the spatial frame to be separated from the material frame, according to a frame transformation due to the computed mechanical displacement. It remains convenient to access the material frame to express properties such as anisotropic mechanical material properties, since these properties will usually remain aligned with the material frame coordinates, even as the object deforms.
By contrast, external forces such as gravity have a fixed orientation in the spatial frame. From the perspective of the material frame, external forces like gravity change direction as the object deforms. The image below shows the tip of the thin plate as above, but here, the displacement magnitude is plotted with colors. Arrows are used to illustrate the force due to gravity, as expressed in the material frame coordinates. Since the material frame coordinates remain fixed with respect to the object, the dimensions of the object appear not to change. However, the displacement magnitude increases with the object's weight, and the gravity force changes direction more and more with respect to the deformed material as the deformation grows.
Zoomedin view of the tip of a thin plate deforming under its own weight as its density increases. The plot is in the material frame as used for the Lagrangian formulation, so the deformation is not apparent, although displacement increases. The red arrows indicate the apparent direction of gravity (which is constant in the spatial frame) as perceived from the material frame of reference within the deforming object.
Neither the Lagrangian nor Eulerian formulation is more “physical” or “correct” than the other. They are simply different mathematical approaches to describing the same phenomena and equations. Through coordinate transformation, we can always transform the physical equations for any phenomenon from the material frame to the spatial frame or vice versa. From the perspective of interpretation and implementation, though, each approach has certain advantages and common applications. Some of these are summarized in the table below:
In brief:

Eulerian method — Strengths: the equations are formulated in a fixed spatial (laboratory) frame, which is natural for field quantities. Common applications: fluid flow and electromagnetics.

Lagrangian method — Strengths: the coordinate system follows the material, so material properties, anisotropy, and boundaries are easy to track. Common applications: structural and solid mechanics.
What about multiphysics problems, such as fluid-structure interaction (FSI) or geometrically nonlinear electromechanics? In these cases, one physical equation might be formulated most naturally with the Eulerian method, while another might be better expressed with the Lagrangian method. This is where the ALE method comes in. This method solves the equations on a third coordinate system, which is not required to match either the spatial frame or the material frame coordinate systems.
The third coordinate system is called the mesh frame in COMSOL Multiphysics. There is one mathematical mapping between the spatial frame and the underlying mesh frame, and one between the material frame and the underlying mesh frame, so at all points in time, the equations formulated in the spatial and material frames can be transformed into the mesh frame to be solved.
In domains representing solids in a model, mechanical displacement is predicted using structural mechanics equations in the Lagrangian formulation. Here, the relation of the spatial and material frames is given by the mechanical displacement, as above. The ALE method adds more equations to allow the apparent positions and shapes of mesh elements in neighboring domains to displace in the spatial frame. That is in order to account for how mechanical deformation can change the shape of the boundaries of any domain where the physics are described in the Eulerian formulation. These additional equations are called a Moving Mesh or Deformed Geometry in COMSOL Multiphysics.
At boundaries between Lagrangian and Eulerian domains, a boundary condition for these additional equations requires that the spatial-frame displacement of the Eulerian domain (as defined through the moving mesh) match the mechanical displacement of the adjacent Lagrangian domain, that is, the displacement of the spatial frame away from the material frame. Even where no mechanical equations are solved, such that no Lagrangian method is used, the ALE method can still be used to express moving boundaries due to deposition or loss of material.
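A minimal 1D analogue can make this concrete: one boundary node follows a prescribed (structural) displacement, the other is fixed, and the interior mesh displacement is smoothed by a discrete Laplace equation. This toy sketch is an assumption for illustration (Laplace smoothing is only one of several smoothing options), not COMSOL's actual implementation:

```python
# 1D ALE-style mesh smoothing: boundary displacements are prescribed and the
# interior mesh displacement solves d'' = 0 (discrete Laplace equation), which
# in 1D simply interpolates linearly between the boundary values.
def smooth_mesh(x, d_left, d_right=0.0, iters=5000):
    d = [0.0] * len(x)
    d[0], d[-1] = d_left, d_right
    for _ in range(iters):  # Jacobi iteration; comprehension reads the old d
        d = [d[0]] + [0.5 * (d[i - 1] + d[i + 1])
                      for i in range(1, len(x) - 1)] + [d[-1]]
    return [xi + di for xi, di in zip(x, d)]

x = [i / 10 for i in range(11)]       # undeformed mesh on [0, 1]
x_new = smooth_mesh(x, d_left=0.05)   # left boundary moved by 0.05
print(x_new[:3])
```

The point of the smoothing is purely numerical: it keeps the interior elements well shaped while the boundaries follow the physics.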
If you find the ALE method quite mathematical, that’s OK! It’s a difficult concept to follow in the abstract. To better understand the way the ALE method works, let’s take a look at an example within COMSOL Multiphysics.
The ALE method plays an important role in modeling FSI. In COMSOL Multiphysics, this method enables the automated bidirectional coupling of fluid flow and structural deformation, a capability demonstrated in our Micropump Mechanism tutorial model.
At the heart of this micropump mechanism are two cantilevers, which perform the same function as valves in conventional pumping devices. These cantilevers are flexible enough that the fluid flow causes them to deform. As fluid is alternately pumped into or out of the channel at the top, the force of the fluid flow causes the two cantilevers to deform so that fluid flows out to the right or in from the left.
The micropump mechanism. Pumping fluid into or out of the top tube produces opposite reactions in the two cantilevers, pushing fluid in or out of the chamber. Even though there is no time-averaged net flow into the upper tube, there is a time-averaged net movement of fluid from left to right.
The cantilevers deform enough that there is an appreciable change in the position of the boundary where the fluid and solid meet: a geometrically nonlinear case. The self-consistent handling of the fluid's pressure on the solid and the solid's force on the fluid, together with the deformation of the mesh, are handled automatically by the Fluid-Structure Interaction interface. The interface employs the ALE method to account for the change in shape in the solid and fluid regions.
For solids, the mechanical equations with geometric nonlinearity define the displacement of the spatial frame with respect to the material frame. In the fluid equations, it’s necessary to deform the mesh on which the equations are solved in order to express the displacement of the solid boundaries in the spatial frame where the fluid equations are formulated. The deformation at the boundaries is controlled by the mechanical displacement from the solution to the structural problem. Within the fluid, though, the exact position or orientation of mesh nodes isn’t important, as the equations are formulated in the fixed spatial frame. Instead, the deformation of the mesh is smoothed in order to ensure that the numerical problem remains stable with highquality mesh elements.
To explain the ALE method for the FSI problem, we could paraphrase a common explanation for general relativity: forces due to fluid flow (Eulerian) tell the structure how to deform in the material frame (Lagrangian), while the structural deformation (Lagrangian) tells the mesh how to move in the spatial frame (Eulerian).
Top: The micropump’s operation, including pressure, flow, and cantilever deformation, as plotted in the spatial frame. Bottom: Mesh deformations calculated by the ALE method.
As of COMSOL Multiphysics version 5.3a, the Moving Mesh feature to define mesh deformation in this type of problem is located under Component > Definitions. This allows consistency in the definition of material and spatial frames between all physics included in a model, even if several physics interfaces are included. The screen capture below shows where these settings are located in the COMSOL Multiphysics Model Builder tree.
Screen capture showing Moving Mesh features under Component > Definitions, and physical coupling between two physics interfaces through Multiphysics > Fluid-Structure Interaction.
Turning to an electrochemical problem, the Copper Deposition in a Trench tutorial model shows that the ALE method can be vital for simulating electrodeposition problems. In this model, copper is deposited onto a circuit board that has a small “trench”. The deposited copper layer becomes thick compared to the overall size of the trench, so the size and orientation of the copper surface change appreciably as deposition proceeds. Since the rate of copper deposition at different points on this surface is nonuniform, the shape and movement of the boundary cannot be neglected.
A schematic of the physical problem being solved in the electrodeposition model.
To calculate the rate of deposition at a given point on the copper electrode-electrolyte interface, we need the concentration of the species and the electrolyte potential of the solution adjacent to that point. As the deposition progresses and the boundary moves, the shape of the electrolyte volume has to change continuously. Similarly, the concentration and potential distributions on the altered shape must be recalculated.
The coupling of the deposition rate to the boundary motion rate and the calculation of the changing shape are accomplished with the ALE method and fully automated multiphysics couplings with the Tertiary Current Distribution and Deformed Geometry interfaces. Here, the Deformed Geometry displaces the copper surface in the spatial frame at a rate proportional to the local current density for electrodeposition, as computed from the electrochemical interface.
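The "rate proportional to the local current density" follows Faraday's law of electrolysis, which gives a boundary normal velocity v = i·M/(n·F·ρ). A sketch with standard copper constants (the current density below is a made-up illustration value, not one from the model):

```python
# Surface deposition velocity from Faraday's law: v = i * M / (n * F * rho)
F_CONST = 96485.0   # Faraday constant, C/mol
M_CU = 63.55e-3     # molar mass of copper, kg/mol
RHO_CU = 8960.0     # density of copper, kg/m^3
N_E = 2             # electrons transferred per Cu(2+) ion

def deposition_velocity(i):
    """Boundary normal velocity [m/s] for a local current density i [A/m^2]."""
    return i * M_CU / (N_E * F_CONST * RHO_CU)

v = deposition_velocity(100.0)  # hypothetical current density of 100 A/m^2
print(f"{v*1e9:.2f} nm/s = {v*3600*1e6:.2f} um/h")
```

Because the current density varies along the trench surface, so does this velocity, which is exactly why the boundary shape evolves nonuniformly.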
With this model, we can accurately account for the deposition process in order to optimize its parameters. We can also experiment with different applied potentials and deposition surface geometries to improve the uniformity of the deposition, which produces a more efficient process and a higher-quality end product.
Animations showing the evolution of the deposition process in time. It is clear that the deposition happens unevenly, resulting in a pinching of the trench opening at its top.
Thermal ablation, discussed in this previous blog post, involves a very high temperature applied to an object, causing the surface to melt and vaporize. Examples of thermal ablation include the removal of material by lasers — such as in the etching process, laser drilling, or laser eye surgery — and a spacecraft’s heat shield as it reenters the atmosphere.
Animation showing the effect of thermal ablation on a material.
Since we expect that an object’s shape will change when some of its material is removed, deforming meshes are clearly a key part of thermal ablation simulation. What we need to know is how the shape of the object will change. This depends on how we balance the applied heat with heat lost to ablation and heat dissipation throughout the structure by mechanisms such as conduction.
To obtain this information, we can predict the temperature profile as a function of space and time by solving the heat transfer equations using the Heat Transfer interface. Because the mass and shape of the object are changing, the Heat Transfer interface is coupled to a Deformed Geometry interface, using the ALE method to displace the boundary according to the rate of ablation. The Heat Transfer equations predict the temperature distribution in the object as its shape evolves.
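One common way to express such a balance is a steady surface energy budget in which the incoming flux supplies both the sensible heat to reach the ablation temperature and the latent heat of ablation, giving a recession speed v = q/(ρ·(c_p·ΔT + L)). The numbers below are placeholders for illustration and are not taken from any model:

```python
# Steady surface energy balance for ablation:
#   v = q_in / (rho * (cp * (T_abl - T0) + L_abl))
# where q_in is the absorbed flux and L_abl an effective heat of ablation.
def recession_speed(q_in, rho, cp, T_abl, T0, L_abl):
    """Surface recession speed [m/s] from an energy balance at the surface."""
    return q_in / (rho * (cp * (T_abl - T0) + L_abl))

v = recession_speed(q_in=1e7,     # W/m^2, applied flux (placeholder)
                    rho=2700.0,   # kg/m^3 (placeholder)
                    cp=900.0,     # J/(kg K) (placeholder)
                    T_abl=2700.0, # K, ablation temperature (placeholder)
                    T0=300.0,     # K, initial temperature (placeholder)
                    L_abl=1.0e7)  # J/kg, effective heat of ablation (placeholder)
print(f"{v*1e3:.3f} mm/s")
```

In the full model, conduction carries part of the flux into the bulk, so the actual recession rate is lower than this surface-only estimate; the ALE boundary motion is driven by the locally resolved balance.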
By performing these steps, we can attain accurate calculations for the thermal ablation process. Moreover, we can determine the final shape of the object after ablation is complete. This might enable us to check whether a laser weld will fall within acceptable tolerances or whether a spacecraft will survive an emergency landing.
The contributions of Leonhard Euler and JosephLouis Lagrange in the field of mathematics have paved the way for simulating a variety of systems involving multiphysics applications. The combination of their individual methods has led to the development of the ALE method, which can be used to predict physical behavior when objects deform or displace. By properly accounting for these movements, you can set up highly accurate models. Remember to thank Euler and Lagrange as you investigate these and other models that exploit the ALE method!
The ALE method is one of many builtin physics capabilities in the COMSOL Multiphysics® software. See more of them: