Before the invention of gears, people used wheels to transfer the rotation of one shaft to another with the help of friction. The major drawback in using these frictional wheels was the slippage beyond a certain torque value, as the maximum torque that could be transmitted was limited by the frictional torque. To overcome this limitation, people began using toothed wheels, more commonly known nowadays as cogwheels or gears.
Gear pair created using the Parts Library in the Multibody Dynamics Module.
The main purpose behind gears is to avoid slippage. This is why the teeth of one gear are inserted between the teeth of the mating gear, a process referred to as gear meshing. Compared to the gear’s core region, the gear’s mesh region is more flexible. Hence, accounting for the stiffness of the gear mesh is important when trying to accurately capture the dynamics and vibrations in the system.
Gear mesh stiffness depends on several different parameters and, most importantly, it varies with the gear rotation. This makes the problem nonlinear, and the continuously varying gear mesh stiffness gives rise to vibrations in the system. These vibrations in different parts of the transmission system result in noise radiation. Therefore, it is crucial to evaluate gear mesh stiffness and include it in the gear model.
To examine gear mesh stiffness, we assume that the gears are elastic bodies and model the contact between them. We then perform a stationary parametric analysis to determine the mesh stiffness of the gears for different positions in a mesh cycle. A mesh cycle is defined as the amount of gear rotation after which the next tooth takes the position of the first one.
Now, to understand this process, let’s take an example in which two gears, both made of steel, have the following properties:
Property | Symbol | Pinion | Wheel |
---|---|---|---|
Number of teeth | n | 20 | 30 |
Pitch diameter | d_{p} | 50 mm | 75 mm |
Pressure angle | α | 25° | 25° |
Gear width | w_{g} | 10 mm | 10 mm |
In this example, both gears are hinged at their respective centers. Using the penalty contact approach, we model the contact between the teeth of the two gears. The boundaries of the two gears in contact with each other are shown below. For more details about how to set up this model, you can check out the tutorial titled: Vibrations in a Compound Gear Train.
The contact pair boundaries (left) and the finite element mesh (right) in the gear pair.
Because the mesh stiffness changes for the gears’ different positions in the mesh cycle, we rotate both gears parametrically to compute the variation of gear mesh stiffness. The rotation of the pinion (θ_{p}) about the out-of-plane axis is prescribed in such a way that the pinion rotates for two mesh cycles. The rotation of the wheel (θ_{w}) about the out-of-plane axis is defined as the following:
θ_{w} = −θ_{p}/g_{r} + θ_{t}

where g_{r} is the gear ratio with a value of 1.5 and θ_{t} is the twist with a value of 0.5°.
The wheel is given a twist, θ_{t}, and the required twisting moment, T, is evaluated on the hinge joint. Hence, the torsional stiffness of the gear pair is computed as:

k_{t} = T/θ_{t}
Once we know the torsional stiffness, we can define the stiffness along the line of action as:

k = k_{t}/(d_{pw}cos(α)/2)^{2}
where d_{pw} is the pitch diameter of the wheel and α is the pressure angle.
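As a quick sanity check, this conversion can be sketched in a few lines (the moment value in the usage line is purely hypothetical; the geometry follows the table above):

```python
import math

def mesh_stiffness_along_loa(T, theta_t, d_pw, alpha):
    """Convert a hinge-joint twisting moment into gear mesh stiffness.

    T       -- twisting moment evaluated on the hinge joint (N*m)
    theta_t -- prescribed twist of the wheel (rad)
    d_pw    -- pitch diameter of the wheel (m)
    alpha   -- pressure angle (rad)
    """
    k_t = T / theta_t                   # torsional stiffness (N*m/rad)
    r_b = 0.5 * d_pw * math.cos(alpha)  # base-circle radius of the wheel
    return k_t / r_b**2                 # stiffness along the line of action (N/m)

# Hypothetical moment of 10 N*m for the 0.5 deg twist and the table's geometry:
k = mesh_stiffness_along_loa(T=10.0, theta_t=math.radians(0.5),
                             d_pw=0.075, alpha=math.radians(25))
```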
The von Mises stress distribution in the gear pair for different positions in a mesh cycle. This shows high stress levels at the contact points along the line of action.
The figure below shows the variation of the computed gear mesh stiffness with the rotation of the pinion for two mesh cycles. We can see that the gear mesh stiffness is periodic from one mesh cycle to the next, increasing at the beginning of each cycle and decreasing later on. This happens due to the changing contact ratio: at the beginning of a mesh cycle, the contact ratio increases from 1 to 2, but then it drops back down to 1.
The variation of gear mesh stiffness with the pinion rotation.
In the previous section, we saw that gear mesh stiffness varies with the gear’s position in the mesh cycle. It also depends on several other parameters, including the number of teeth (or module), the pressure angle, and the addendum.
Let’s focus on investigating the effect of gear tooth parameters on the mesh stiffness. While doing so, we keep the same geometric and material properties that were given in the first table.
To look at the effect of the number of teeth or module on gear mesh stiffness, we consider different values for the number of teeth on the pinion.
We then compute the number of teeth on the wheel by using the gear ratio, which is set to 1.5. The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the number of teeth (n_{p} = 20, 28, 36).
The von Mises stress distribution in the gear pair for different values of n_{p}.
The variation of gear mesh stiffness with pinion rotation for three different values of the number of teeth (n_{p} = 20, 28, 36). The stiffness is comparatively higher and smoother for a greater number of teeth or for a smaller module.
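The interplay between the number of teeth and the module can be sketched as follows, assuming the pinion pitch diameter stays fixed at the value from the first table:

```python
def gear_from_pinion(n_p, d_pp, g_r):
    """Derive module and wheel data from the pinion definition.

    n_p  -- number of teeth on the pinion
    d_pp -- pitch diameter of the pinion (mm), held fixed here
    g_r  -- gear ratio (wheel teeth / pinion teeth)
    """
    module = d_pp / n_p      # module: pitch diameter per tooth (mm)
    n_w = round(g_r * n_p)   # number of teeth on the wheel from the gear ratio
    d_pw = module * n_w      # wheel pitch diameter for the same module
    return module, n_w, d_pw

# More pinion teeth on the same pitch diameter means a smaller module:
results = {n_p: gear_from_pinion(n_p, d_pp=50.0, g_r=1.5) for n_p in (20, 28, 36)}
```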
To understand the effect of pressure angle on gear mesh stiffness, we look at three different values of the pressure angle.
The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the pressure angle (α = 20°, 25°, 35°).
The variation of gear mesh stiffness with pinion rotation for three different values of the pressure angle (α = 20°, 25°, 35°). The stiffness increases with a larger pressure angle.
After investigating the effects of module and pressure angle, we now examine the effect of different addendum values on gear mesh stiffness.
The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the addendum-to-pitch-diameter ratio (adr = 0.6, 0.75, 0.9).
The variation of gear mesh stiffness with pinion rotation for three different values of the addendum-to-pitch-diameter ratio (adr = 0.6, 0.75, 0.9). The stiffness is comparatively higher for higher values of the addendum; however, it also fluctuates more, which may lead to higher vibration levels in the transmission system.
After evaluating gear mesh stiffness using the static contact analysis, the next step is to include the stiffness in the gear model so that we can perform an NVH analysis of the full transmission system.
The gear mesh stiffness and damping added along the line of action between the two gears.
In the multibody dynamics analysis, we use the evaluated gear mesh stiffness in the Gear Elasticity node under the Gear Pair node. In this analysis, we write gear mesh stiffness as a function of gear rotation. By default, gear mesh stiffness is assumed periodic in a mesh cycle. However, it is also possible to assume that it is periodic in a full revolution.
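As a rough sketch of that idea, a stiffness curve sampled from the static study can be wrapped into a function that is periodic over one mesh cycle (a minimal nearest-sample lookup, not COMSOL's implementation; a real model would interpolate):

```python
import math

def periodic_mesh_stiffness(k_samples, n_teeth):
    """Build k(theta) from stiffness samples over one mesh cycle.

    k_samples -- stiffness values sampled at equal rotation steps over
                 one mesh cycle (e.g., from the static contact study)
    n_teeth   -- number of teeth on the gear whose rotation theta is used
    """
    mesh_cycle = 2 * math.pi / n_teeth  # rotation after which the teeth repeat
    def k(theta):
        # position inside the current mesh cycle, normalized to [0, 1)
        s = (theta % mesh_cycle) / mesh_cycle
        i = int(s * len(k_samples)) % len(k_samples)
        return k_samples[i]             # nearest sample (could interpolate)
    return k
```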
In order to dampen the vibrations, we can add gear mesh damping in the Gear Elasticity node, entered either as a function of the mesh stiffness or explicitly. The former works well when the gear mesh stiffness variation is available. If we don’t have the exact gear mesh stiffness variation, we can instead use the gear tooth stiffness of the wheel and the pinion, which can be evaluated simply by applying a load on a gear tooth and measuring the deflection. The gear tooth stiffness also varies over a mesh cycle, but as an approximation, we can enter it as a constant average value.
Finding the overall gear mesh stiffness also requires determining the contact ratio. In simple words, the contact ratio is a measure of the average number of teeth in contact during the period in which a tooth comes into and goes out of contact with the mating gear. To show how different values of the contact ratio affect the stiffness, let’s examine a few cases.
In the first case, only a single pair of teeth is in contact for all positions in the mesh cycle. The typical variation of the gear tooth stiffness is shown below.
The typical variation of the gear tooth stiffness for the pair of teeth in contact.
In the second case, two pairs of teeth are in contact for all positions in the mesh cycle. We can see from the following image that, except for a phase difference, the second pair of teeth has the same stiffness as the first pair. The total stiffness of the gear mesh is the sum of the individual tooth-pair stiffnesses.
The typical variation of the gear tooth stiffness for the first and second pair of teeth when the contact ratio equals 2.
In the third case, the pairs of teeth that are in contact change for different positions in the mesh cycle. For certain positions, there is only one pair of teeth in contact, whereas in other positions, there are two pairs of teeth in contact. The stiffness of the second pair of teeth goes to zero when it loses contact for certain positions in the mesh cycle. This results in large fluctuations in the overall gear mesh stiffness, which leads to vibrations in the system.
The typical variation of the gear tooth stiffness for the first and second pair of teeth when the contact ratio is between 1 and 2.
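The bookkeeping behind these three cases can be sketched with a toy model in which every engaged tooth pair contributes a constant stiffness (a deliberate simplification; real tooth-pair stiffness varies over the engagement window):

```python
def engaged_pairs(s, c):
    """Number of tooth pairs in contact at mesh-cycle position s (0 <= s < 1).

    Pair j engages on the interval [j, j + c) in mesh-cycle units, where
    c is the contact ratio; successive pairs are shifted by one cycle.
    """
    return sum(1 for j in range(-3, 1) if j <= s < j + c)

def mesh_stiffness(s, c, k_pair=1.0):
    """Total mesh stiffness: the sum over the engaged pairs, with each
    engaged pair idealized as contributing a constant stiffness k_pair."""
    return k_pair * engaged_pairs(s, c)

# c = 1: always one pair; c = 2: always two; 1 < c < 2: the count (and hence
# the total stiffness) jumps between 1 and 2 within the cycle.
profile = [mesh_stiffness(s / 10, 1.5) for s in range(10)]
```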
To demonstrate the effect of gear mesh stiffness on gear dynamics, let’s use a pair of helical gears as an example. We first perform a transient study to compare a rigid gear mesh, a gear mesh with constant stiffness, and a gear mesh with varying stiffness. We then analyze the effect of each type of gear mesh on the angular velocity of the driven gear as well as on the contact force. More details about this tutorial model can be found in the Application Gallery.
The figure below shows the variation of the driven gear’s angular velocity for a constant angular velocity of the driver gear. For a rigid gear mesh, the driven gear rotates at a constant speed. When the gear mesh stiffness is constant, the driven gear’s speed initially fluctuates before settling down to a constant value. With a varying gear mesh stiffness, the speed continues to fluctuate about the mean value, giving rise to vibrations.
Driven gear angular velocity for different types of gear meshes.
We can observe a similar trend in the contact forces. The rigid and constant-stiffness gear meshes eventually settle to a constant contact force, but the varying-stiffness gear mesh causes the contact force to fluctuate about the mean value. The contact force variation is periodic with respect to the mesh cycle, varying from about 150 N to 450 N with a mean value of 250 N. This large variation in the contact force within a mesh cycle causes vibrations in other parts of the system, which may lead to noise radiation in the surrounding area.
Variation of the contact force with gear rotation for different types of gear meshes.
The variation of gear mesh stiffness, which depends on several geometric and material parameters, plays an important role in the NVH analysis of a transmission system. With COMSOL Multiphysics and the Multibody Dynamics Module, we can calculate its variation by combining a contact analysis with the parameterized gears in the Parts Library. We can then use the computed gear mesh stiffness in the multibody dynamics model to accurately capture the dynamics of gears working together with the other parts of the transmission system.
Stay tuned for the next blog post in our Gear Modeling series, where we’ll show you how to simulate gearbox noise and vibrations generated due to varying gear mesh stiffness. In the meantime, we encourage you to browse the additional resources below.
Let’s begin with a brief introduction to rotordynamics modeling. As we have mentioned previously on the blog, rotordynamics analysis helps enhance the functionality and safety of rotating machines, which are used across many industries, from aerospace technology to power generation.
For instance, say that you want to make sure that a generator, one type of rotating machine, avoids instabilities, damaging resonances, and failure caused by a poor design. You can use rotordynamics analysis to study the vibrations that both influence the physical behavior of the generator and are exacerbated by the generator’s rotation and structure.
A generator (left) and a 3D model of a generator (right).
With simulation software, you can increase the accuracy and simplicity of your rotordynamics studies. And now, with the Rotordynamics Module, this process has become even more user friendly and flexible.
The Rotordynamics Module helps you set appropriate design parameters to keep responses within acceptable operating limits by analyzing resonances, stresses, strains, and the effects of lateral and torsional vibrations on rotating machinery. Additionally, you can use this module to take a closer look at how stationary and moving rotor components affect your design, as well as other parameters such as critical speeds, natural frequencies, and mode shapes. We’ll delve into a few specific benefits and features in the next section.
One of the main benefits of the Rotordynamics Module is flexibility. With this module, you can easily customize your simulation analysis to study specific parts of a rotating assembly or the whole structure.
The latter of these options is achieved with the Solid Rotor interface in the Rotordynamics Module, which uses a 3D CAD geometry to represent the rotor and solid elements for the finite element model. By studying all of the components in a rotating assembly, you can generate the most accurate results possible. While modeling the entire system is not necessary to estimate the stress and deformation of the rotor, it will increase the accuracy of the simulation. To obtain the distribution of the stress and deformation fields over the whole domain, modeling the rotor with solid elements is necessary.
Using this interface, you can include nonlinear geometric effects, fully describe geometrical asymmetries, account for phenomena such as spin softening and stress stiffening, and more.
A crankshaft model that uses the Solid Rotor interface to analyze the bearings’ pressure distribution in the lubricant as well as the von Mises stresses.
What if you want your model to be less computationally expensive? You can turn to the Beam Rotor interface for a faster and more computationally efficient option for modeling rotating machines. In this interface, an edge along the rotor axis defines the rotor and the other rotating machine components are defined by creating points at their respective locations.
As another advantage, this module simplifies the modeling process for two key elements in a rotor system: foundations and bearings. First, let’s look at foundations, which are broken down into three different modeling options:
A model of the pressure in a hydrodynamic bearing.
Resting upon these foundations are bearings. Let’s first focus on journal bearings, which you can model in two ways with the Rotordynamics Module:
This second option utilizes three different interfaces where the hydrodynamic part is based on a full Reynolds equation implementation. The Hydrodynamic Bearings interface models the behavior of journal bearings in detail and features an easy method for modeling an oil lubricant between a journal and bushing. The Solid Rotor with Hydrodynamic Bearing and Beam Rotor with Hydrodynamic Bearing interfaces both analyze a rotor, hydrodynamic bearing, and their interactions. However, as the names imply, the former uses solid elements to describe the rotor and the latter uses beams to define an approximated rotor.
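To give a flavor of what such a hydrodynamic calculation involves, here is the classic Ocvirk short-bearing approximation, a textbook closed-form solution of the Reynolds equation for short journal bearings (not the module's full implementation; the parameter values in the usage line are hypothetical):

```python
import math

def short_bearing_pressure(theta, z, mu, omega, c, eps, L):
    """Ocvirk short-bearing film pressure (Pa).

    theta -- angle measured from the maximum film thickness (rad)
    z     -- axial position from the bearing midplane (m)
    mu    -- lubricant dynamic viscosity (Pa*s)
    omega -- journal angular speed (rad/s)
    c     -- radial clearance (m)
    eps   -- eccentricity ratio (0..1)
    L     -- bearing length (m)
    """
    p = (3.0 * mu * omega / c**2) * (L**2 / 4.0 - z**2) \
        * eps * math.sin(theta) / (1.0 + eps * math.cos(theta))**3
    return max(p, 0.0)  # pi-film assumption: cavitating (negative) pressure -> 0

# Hypothetical operating point: 50 mPa*s oil, 100 rad/s, 0.1 mm clearance
p_mid = short_bearing_pressure(math.pi / 2, 0.0, 0.05, 100.0, 1e-4, 0.5, 0.05)
```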
If you’re interested in modeling thrust bearings, the Rotordynamics Module has you covered. This module includes three types of thrust bearings and behaviors: no clearance bearings, bearing stiffness and damping coefficients, and bearing forces and moments.
Using the previously discussed features and functionality, you can design a model that fits your specific needs. However, there is even more customization available in the Rotordynamics Module, which offers a variety of study types.
With the included study types, you can easily model gyroscopic effects. Vibrational effects, meanwhile, are modeled from the perspective of a corotating observer. To achieve this, we use a coordinate system that rotates along with the rotor. This removes the need to physically rotate the rotor to simulate the assembly, simplifying the modeling process. Modeling in a corotating frame also allows an eigenfrequency analysis of a rotating system, which would otherwise be impossible due to the nonlinearity of the rotation when the system is observed in a space-fixed frame.
The available study types apply to static and dynamic analyses and include:
Note that for rotordynamics analyses, the definition of a stationary study differs from conventional analyses.
After you’ve run your study, it’s time to format your results and share them with other people. Doing so requires choosing the plot type that best visualizes your particular results. Take a look at the four images below to see a few of the available plot types that you can create based on your rotordynamics analyses.
Whirl plots plot a rotor’s mode shapes around its axis.
Campbell plots plot the variation of a rotor’s natural frequencies in relation to its speed.
Waterfall plots plot frequency spectrum variations when the rotor’s angular speed increases.
Orbit plots plot rotor displacements at certain points, including the locations of bearings and disks.
To learn more about the Rotordynamics Module, click on the button below. You can also contact us with any questions you may have. Happy modeling!
Let’s consider a symmetric two-bar structure under a compressive load, as shown in the following figure:
Two bars under compression.
We assume that the bars are linearly elastic, so that the force in a bar, F, is

F = EAΔ/L_{0}

where EA is the axial stiffness of the bar, Δ is the elongation, and L_{0} is the original length.
Using Pythagoras’ theorem, the vertical force can then be written as an explicit function of the vertical displacement:
The following quantities have been nondimensionalized:
The force as a function of displacement is shown in the graph below. The example is actually a buckling problem with snap-through behavior: between points A and C, no unique solution exists for a prescribed force. In a previous blog post, we further discuss the concept of buckling in structures.
The compressive force in the bars increases until they are horizontal, but beyond point A, its vertical projection decreases faster than the axial force grows. This is why the vertical force decreases.
Force as a function of vertical displacement.
If we build a finite element model of this structure and try to increase the load, the analysis will probably fail when we reach the first peak at point A. We can, however, easily trace the solution by prescribing the vertical displacement at the loaded point, rather than the force. The applied force can then be obtained as the reaction force. The graph above was created using that method.
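That displacement-controlled sweep can be sketched directly (hypothetical geometry: half-span a, initial apex height h, unit axial stiffness EA):

```python
import math

def vertical_force(delta, a=1.0, h=0.5, EA=1.0):
    """Vertical force needed to push the apex of the two-bar truss down by delta.

    The bar ends are pinned at (-a, 0) and (a, 0); the loaded apex starts at
    height h. Each bar is linearly elastic with axial stiffness EA.
    """
    L0 = math.hypot(a, h)             # original bar length
    L = math.hypot(a, h - delta)      # current bar length
    N = EA * (L0 - L) / L0            # compressive axial force in each bar
    return 2.0 * N * (h - delta) / L  # vertical projection of both bar forces

# Sweep the prescribed displacement and recover the force as a "reaction":
forces = [vertical_force(d / 100) for d in range(0, 101)]
```

Because the displacement, not the force, is the controlled quantity, the sweep passes straight through the region where the force decreases.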
The tangential stiffness for this single-degree-of-freedom system is defined as the rate of change of the force with respect to the displacement:

K = dF/dδ

where δ is the vertical displacement.
The stiffness is thus negative between points A and B. A negative stiffness is often related to numerical and physical instabilities.
Stiffness as a function of vertical displacement.
There are several material models within the field of solid mechanics that contain a negative slope of the stress-strain curve, either as an intentional effect or with certain choices of parameters. For example, some models for concrete are designed like this. In the physical interpretation of this behavior, cracks form when the material model is loaded in tension. The load carried by a test specimen will then decrease. The cohesive zone models used for describing decohesion in the COMSOL Multiphysics® software also show this type of behavior.
A strain-softening material.
At the material level, decreasing stress with increasing strain indicates a negative stiffness:

dσ/dε < 0
Such a material can only be tested under a condition of prescribed displacement; otherwise, it will fail immediately when the peak load is reached. The negative stiffness is thus related to a material instability.
In general, the stress and strain states are multiaxial, and stress and strain are represented by second-order tensors. In the multiaxial case, we must use a more general criterion for material stability: for any small change in the strain state, the corresponding change in the stress state must be such that the sum of the products of all stress and strain component increments is positive. That is,

dσ : dε > 0
Or, written in component form,

dσ_{ij}dε_{ij} > 0

with summation implied over the repeated indices.
This is called Drucker’s stability criterion or Hill’s stability criterion.
The discretized form used in finite element analysis implies that the constitutive matrix relating stress increments to strain increments must be positive definite in order for the material to be stable. This is a condition that is generally computationally expensive to check for nonlinear materials. For a linear elastic material, the requirement can be directly converted into the well-known requirements E > 0 and −1 < ν < 1/2.
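For the isotropic linear elastic case, the positive-definiteness check reduces to the closed-form eigenvalues of the 6×6 Voigt stiffness matrix, which can be verified in a few lines:

```python
def is_stable_isotropic(E, nu):
    """Drucker stability for linear isotropic elasticity.

    The 6x6 Voigt stiffness matrix (engineering shear strains) has the
    closed-form eigenvalues 3*lam + 2*mu (hydrostatic), 2*mu (twice,
    deviatoric normal modes), and mu (three shear modes). The matrix is
    positive definite iff all of these are positive, which is equivalent
    to E > 0 and -1 < nu < 0.5.
    """
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lame's first parameter
    mu = E / (2 * (1 + nu))                   # shear modulus
    eigs = (3 * lam + 2 * mu, 2 * mu, mu)
    return all(e > 0 for e in eigs)
```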
How can we reconcile this with the fact that we sometimes need to work with material models that do not fulfill the stability criterion? The important point is that the material can be locally unstable while the structure as a whole is still stable.
To understand this behavior, we can think of the material in the structure as a set of connected springs. Some springs are purely elastic and represent the undamaged material, while one spring represents the failing material. Consider the three springs in the figure below.
A three-spring system. The extension of the failing spring is denoted u_{1}.
The spring k_{1} represents the material with the damage model, whereas the other two springs are purely elastic. The material model for the first spring is bilinear.
Material model for the nonlinear spring. The peak force F_{m} is reached at the displacement u_{m}.
The force in the lower branch is independent of damage:

F_{3} = k_{3}u

where u is the external displacement and k_{3} is the stiffness of the lower spring.
Before the peak load is reached, the force in the upper branch is

F = k_{1}k_{2}u/(k_{1} + k_{2})
since the two springs are connected in series.
The damage starts when the force in the upper branch is F_{m}; that is, when the external displacement is

u = F_{m}(k_{1} + k_{2})/(k_{1}k_{2})
and the corresponding external force is

F = F_{m}(1 + k_{3}(k_{1} + k_{2})/(k_{1}k_{2}))
During the degradation, the force in the damaged spring decreases linearly. Denoting the magnitude of the softening slope by k_{s}, it can be written as

F_{1} = F_{m} − k_{s}(u_{1} − u_{m})

The same force also passes through spring 2, so that

F_{1} = k_{2}(u − u_{1})

These two relations determine u_{1} as a function of the external displacement:

u_{1} = (k_{2}u − F_{m} − k_{s}u_{m})/(k_{2} − k_{s})

In order to give a reasonable solution, u_{1} must increase when the external displacement is increased. Thus, it is necessary that k_{s} < k_{2}. This is actually a clue to a very general result: a quick decrease in the force (or stress) is more susceptible to instability than a slower decrease.

Finally, we can derive the relation between the total external force and displacement during the degradation phase:

F = k_{3}u + k_{2}(u − u_{1})

with the slope dF/du = k_{3} − k_{2}k_{s}/(k_{2} − k_{s}).
Thus, the external force can either increase or decrease when the external displacement increases, depending on the relative stiffness in the two branches. This simple model can thereby predict two types of instability:
In either case, a slower decrease of the force in the damage model is beneficial. In other words, the stiffer the surroundings, the more plausible it is that the whole system will be stable.
A globally stable system (left) and a system where the stiffness in the lower branch is too small to maintain stability (right).
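The full response of the three-spring system fits in a short function. Here the post-peak branch is assumed to be linear with slope ks (so the force drops as F_m − ks·(u1 − u_m) after the peak), and the fully failed state is clamped to zero force:

```python
def external_force(u, k1, k2, k3, F_m, ks):
    """Total force on the three-spring system at external displacement u.

    Upper branch: the damaging spring (initial stiffness k1, peak force F_m,
    assumed post-peak law F = F_m - ks*(u1 - u_m)) in series with elastic k2.
    Lower branch: elastic spring k3, in parallel. Requires ks < k2.
    """
    u_m = F_m / k1                         # spring-1 extension at the peak
    u_onset = F_m * (k1 + k2) / (k1 * k2)  # external displacement at damage onset
    if u <= u_onset:
        upper = k1 * k2 * u / (k1 + k2)    # two elastic springs in series
    else:
        # solve F_m - ks*(u1 - u_m) = k2*(u - u1) for the internal extension u1
        u1 = (k2 * u - F_m - ks * u_m) / (k2 - ks)
        upper = max(k2 * (u - u1), 0.0)    # clamp: zero force once fully failed
    return upper + k3 * u                  # parallel branches add

# A stiff lower branch keeps the total response rising (globally stable),
# while a soft one lets the total force drop (snap-type instability):
stable = [external_force(u / 10, 2.0, 1.0, 2.0, 1.0, 0.5) for u in range(0, 31)]
unstable = [external_force(u / 10, 2.0, 1.0, 0.1, 1.0, 0.5) for u in range(0, 31)]
```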
In reality, we are not free to make arbitrary choices about force and stiffness. The area under the triangular force-displacement curve in the material model represents the energy dissipated by the process. The energy dissipation and the displacement (or strain) at final failure have a physical meaning.
The damaged part of a structure elongates while its force decreases. If the external displacement remains fixed, then the elastic parts of the structure must contract to compensate. This means that elastic energy is released. The only way this energy can be absorbed is by doing work on the damaged part. If, for a certain incremental displacement, the energy released by the elastic parts is larger than the work needed to produce the same displacement in the cracking part, the state is unstable.
Years ago, a friend of mine at the Department of Solid Mechanics at KTH Royal Institute of Technology in Stockholm performed some interesting experiments where he studied the stability of cracks in a ductile steel using extremely long three-point bend test specimens. The tests highlighted that crack stability is not only a function of the local stress state, but also of the capacity that the stored energy in the test specimen has to drive crack propagation. The longest test specimen in the experiments was 26 meters and occupied a large portion of the lab! The experiment was reported in the article “The stability of very long bend specimens” in the International Journal of Pressure Vessels and Piping.
With softening material models, it is extremely difficult to achieve convergence in a finite element model if the stress state is homogeneous.
In a physical material, the strength does not have a perfectly uniform distribution. When increasing the load, a crack will form at the location with the lowest strength, even if the stress state is homogeneous. When this happens, the surrounding material is unloaded.
Consider this example of three elastic blocks joined by two glue layers:
In real life, one glue layer will fail before the other. The slightly stronger layer will then be unloaded as the force through the part decreases. We cannot predict which layer will fail, since that is controlled by manufacturing inaccuracies. In the mathematical model, however, both layers fail simultaneously. Numerically, the iterations may not converge because the failure jumps back and forth between the two layers.
In a finite element model, the stresses are evaluated at each integration point within each element. When the load is increased above the maximum value, the failure may even jump between the elements or individual integration points within the same element (if the stress is the same everywhere).
This behavior implies that if we implement our own material model containing strain softening, we should test it using a single first-order element and under prescribed displacements. In this way, we ensure a homogeneous prescribed strain field, so the stress is the same everywhere in the element. One example is Mazars’ damage model, which we described in a previous blog post. If we were to change the element shape functions to quadratic in that model, the analysis would no longer converge.
Does this mean that damage models are meaningless? Not at all. However, we must be careful to avoid indeterminate states. If a structure and its boundary conditions are symmetric, that symmetry must be employed in order to avoid indeterminacy. We can often solve problems with axial symmetry by using an axially symmetric model, while this may be impossible using a model of a 3D solid sector. Another approach is to allow a slight random spatial disturbance of the material data. This approach actually mimics nature, where strength values are randomly distributed. Also, it is important to increase the loading slowly in order to avoid large portions of the structure switching to a failed state at the same time.
In some material models, for example, within soil plasticity, strongly mesh-dependent thin layers with high shear strains can occur. These layers are called shear bands. When yielding is first initiated, the surrounding elements or even integration points are unloaded. The first elements to yield continue to accumulate plastic strains. It is interesting that this type of instability can actually be seen in real soil and is not only an artifact in the numerical model. Just as in nature, we cannot predict the exact location and distribution of the shear bands in the model.
As mentioned in the initial example, using prescribed displacements rather than prescribed forces is a good way to stabilize the numerical problem. However, this approach is essentially limited to the following cases:
There is a more general method, which we can use to continue solving past a point of instability. In this method, we first prescribe an arbitrary quantity that is known to monotonically increase and then add an extra equation that solves for the corresponding value of the prescribed load or displacement.
To demonstrate this technique, let’s augment the initial example with a spring, so that the load is applied by prescribing the displacement of the end of the spring. If the spring is very stiff, this is essentially the same as prescribing the displacement directly.
Bar system loaded through a spring.
If the spring is softer, the system may become unstable, since too much energy can be released by the spring. The critical value is

k_{crit} = −min(dF/dδ)
This is the magnitude of the most negative stiffness of the bar assembly, which occurs when the bars are horizontal. The relation between force and displacement at point 1 when varying the spring stiffness is shown below. The spring stiffness is given as

k = βk_{crit}
where the coefficient β is varied from an essentially stiff spring to values below the critical value.
Force as a function of the displacement at point 1 when varying the spring stiffness.
For values of β smaller than one, the solution fails when the spring stiffness equals the “negative” stiffness of the bar assembly.
If a prescribed force is used instead, all solutions will fail at the first peak load. By using prescribed displacement, it is possible to continue the analysis further. For lower spring stiffness values, we are still limited by the state when the internal instability causes failure.
The solution that we want to track has a monotonous vertical displacement at point 2, but prescribing it directly is not possible, since this would change the problem fundamentally. Instead, we add an equation stating: “Set the spring end displacement at point 1 so that the monitored displacement at point 2 has the prescribed value.” To do this, we add a Global Equation node in which a new unknown variable, disp_at_P1, is added.
The Global Variable definition.
The equation determining the value of disp_at_P1 states that disp_at_P2 - delta = 0. The variable delta is the monotonous parameter incremented in the Stationary study step, and disp_at_P2 is a variable that contains the current value of the displacement at point 2.
Settings for the study step, where delta is used as the auxiliary sweep parameter.
The displacement at point 1 is then prescribed to have the value disp_at_P1 that satisfies the global equation.
Settings for the prescribed displacement at point 1.
With this modification, it is possible to trace the solution through the instability. As seen in the following graph, even strong instabilities can be bypassed using this method.
Force as a function of the displacement at point 1 when varying the spring stiffness after stabilization with a Global Equation node.
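Conceptually, the same continuation trick can be sketched outside the software: prescribe the monotonous apex displacement, evaluate the truss force, and back-compute the spring-end displacement from the extra equation (hypothetical geometry values: half-span a, initial apex height h, unit axial stiffness EA):

```python
import math

def trace_through_instability(k_spring, a=1.0, h=0.5, EA=1.0, steps=100):
    """Trace the spring-loaded two-bar truss past its instability.

    Instead of prescribing the spring-end displacement u1 (point 1), we
    prescribe the monotonically increasing apex displacement u2 (point 2)
    and solve the extra equation u1 = u2 + F(u2)/k_spring for the control
    point, mimicking the Global Equation approach.
    """
    path = []
    for i in range(steps + 1):
        u2 = 2.0 * h * i / steps       # monotonous sweep parameter
        L0 = math.hypot(a, h)
        L = math.hypot(a, h - u2)
        N = EA * (L0 - L) / L0         # compressive bar force
        F = 2.0 * N * (h - u2) / L     # vertical force at the apex
        u1 = u2 + F / k_spring         # spring-end displacement (point 1)
        path.append((u1, F))
    return path
```

For a soft spring, the recovered point-1 displacement is non-monotonic even though the sweep parameter always increases, which is exactly why direct displacement control at point 1 fails there.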
Water is a tricky chemical compound. As opposed to most liquids, water becomes less dense as it freezes and is therefore lighter as a solid than a liquid (as is illustrated by ice cubes floating in a glass of water).
When you hold ice in your hands, it gets slippery as it melts from the warmth of your body. So why, in freezing temperatures, are you able to slip on ice when walking or skating around an ice rink? Surely, the ice is not melting in such a cold environment, right? Until recently, the accepted theory for why ice is slippery suggested that the pressure from your body lowers the melting point of the ice, melting a thin surface layer and making it slippery. The true answer to why ice is slippery, however, involves more research, debate, and thought.
Over the years, ice has proven to be a complex subject of study.
Let’s take a closer look at why ice is slippery.
The original theory for why ice is slippery centers on the idea of pressure melting. The theory suggests that when pressure is put on ice, it melts the top layer of the ice and creates a thin layer of water that enables you to skate, ski, and slide. The pressure melting theory is based on the fact that the freezing temperature of water is lower than 0°C for a particular range of high pressures, as shown in Figure 2 in Ref. 1.
The pressure melting theory doesn’t fully explain ice slipperiness for many reasons. Think of a thin, sharp ice skate on a surface of ice, for instance. The pressure from the thin skate may be concentrated, but there is not enough pressure to explain melting at low temperatures (Ref. 1).
In the 1880s, an engineer named John Joly tried to prove that pressure melting is what enables us to skate. He calculated a pressure of 466 atmospheres and a corresponding melting point of -3.5°C at the edge of a skate blade. Joly never explained how skating is possible at temperatures lower than -3.5°C, which is contradictory because most skaters prefer ice at -5.5°C or colder (Ref. 1).
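Joly's figure can be checked with the Clausius-Clapeyron relation, dT/dP = T(v_liquid - v_solid)/L. The property values below are standard textbook numbers for water and ice near 0°C, not values taken from his original calculation.

```python
# Melting-point depression of ice under a skate blade, via the
# Clausius-Clapeyron relation: dT/dP = T * (v_liquid - v_solid) / L.
T = 273.15                # melting temperature, K
L = 3.34e5                # latent heat of fusion, J/kg
v_liquid = 1.0 / 1000.0   # specific volume of water, m^3/kg
v_solid = 1.0 / 917.0     # specific volume of ice, m^3/kg

# Negative slope: pressure lowers the melting point (ice melts under load)
dT_dP = T * (v_liquid - v_solid) / L   # K per Pa

P = 466 * 101325.0   # Joly's skate-edge pressure of 466 atm, in Pa
delta_T = dT_dP * P  # melting-point depression at that pressure
print(f"Melting point under the blade: {delta_T:.1f} degC")  # about -3.5 degC
```

The result reproduces Joly's -3.5°C, and it also shows why the theory fails: even hundreds of atmospheres only buy a few degrees of melting-point depression, far too little to explain skating at -10°C or colder.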
The pressure melting theory also doesn’t account for how a person’s foot, ski, or snowboard is able to slide across frozen ice and snow. Since all of these items have less pressure distribution than an ice skate, how can they possibly generate enough heat in cold temperatures to cause significant melting?
The Skating Minister by Henry Raeburn.
Another theory, frictional heating, states that the friction from skating, skiing, or walking generates heat that melts the top layer of ice. This creates a thin layer of water, similar to pressure melting, on which objects can slide. Frictional heating, however, doesn’t explain how you can slip on ice while just standing still.
Up until recently, the mystery of why ice is slippery stood at a standstill. Surprisingly, an observation made nearly two centuries ago is what led to a breakthrough in the research. In 1850, British physicist and chemist Michael Faraday stuck two ice cubes together and observed that they froze to each other. He presumed that there were thin liquid layers on each ice cube that were nonvisible to the human eye and that these liquid layers froze together and caused the ice cubes to stick.
This observation led to the accepted belief that ice has a thin liquid layer on the surface, even at temperatures well below freezing. Called premelting, this process occurs when a liquid layer forms at the bulk melting point of the ice. On a molecular level, premelting occurs because the molecules on the top layer of the ice vibrate without something to hold them together (like another layer of ice molecules).
A schematic of the premelting process between two phases. Image by Aetherwind — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Premelting alone doesn’t account for the slipperiness of ice or why we are able to ski and skate with ease. It takes a combination of premelting and the friction on top of this premelted layer of ice, whether from ski, skate, or shoe, to cause slippage. This relatively new and accepted theory is what we now use to explain the concept of slippery ice.
The behavior of ice is hard to study experimentally on the molecular level. We can’t study the interface between ice and a sliding object (known as the buried interface) because the object blocks where we would scatter electrons or particles. However, there is experimental data that shows how the temperature of the ice and speed of the sliding object affect the friction.
Recent research on the slipperiness of ice from the Jülich Research Center in Germany prompted the development of a mathematical equation that takes into account shear stress and ice temperature in order to connect it to experimental data. This research suggests that the buried interface exhibits premelting behavior, just like the free surface of the ice. The results from this research show that, over a range of velocities and temperatures, ice friction acts as a function of the sliding speed and temperature.
We now know that ice is slippery due to a combination of premelting and friction, but how does this information relate to winter sports?
The conditions of ice and the surrounding environment play a role in a variety of winter sports. Ice that is closer to the bulk melting temperature is known as “soft” ice and leads to slower conditions for skating. The optimal temperature for figure skating, for example, is -5.5°C, as skaters prefer soft ice for balanced landings. Hockey players, on the other hand, thrive on ice that is colder and further from the bulk melting temperature. The optimal temperature for hockey players is -9°C, which provides “hard”, and therefore fast, ice (Ref. 1).
In speed skating, a skater’s time is influenced by the ice conditions as well. According to my colleague Ad van der Linden, before the arrival of artificial ice rinks for speed skating some sixty years ago, ice conditions were far from equal for skaters. In those days, competitions could be a lottery, with skaters having to compete outdoors in arctic and often changing weather conditions. Nowadays, indoor climate-controlled ice rinks provide more equality in terms of skating conditions. Regular resurfacing of the ice with specialized motorized equipment helps reduce unequal skating conditions even further.
My colleagues Andrew Griesmer and Edmund Dondero ice skating at a company sporting event.
However, in today’s speed skating, even the interval of resurfacing has become a point of discussion. In the resurfacing process, the ice gets covered with a thin layer of water, which freezes on top of the existing surface. Immediately after resurfacing, skating times are relatively slow. About 10 to 15 minutes later, there is an optimal time to skate before the ice starts to slow the skaters down again. But since top speed skaters take 30 seconds or less to complete a 400-meter lap, the differences between “fast” and “slow” ice are only small fractions of a second. Therefore, skating directly after a resurfacing may be more of a psychological disadvantage than an actual one.
For skiing and snowboarding, you can still slide at extreme arctic temperatures, even as low as -30°C, because premelting still occurs in ice and snow this cold. At temperatures below that, which the average skier would rarely encounter anyway, the snow may be too cold for the sporting equipment to move through. Some experts have even described snow conditions at these temperatures as “sandlike” (Ref. 1).
Further research into understanding the properties of ice could help competitive figure skaters, speed skaters, skiers, and snowboarders improve their performance. It could also be useful for the design and development of snow tires and specialized winter shoes.
If you have performed any ice-related research, leave us a comment below!
R. Rosenberg, “Why Is Ice Slippery?”, Physics Today, Dec. 2005, pp. 50–55.
The lavish baked Alaska is made by placing ice cream in a pie dish or, for the classic dome shape, a bowl. Then, you cover the ice cream with slices of sponge cake to form the bottom of the dessert. Next, you flip the ice cream and sponge cake upside down and onto a flat dish and cover the ice cream with meringue (egg whites and sugar whipped to a stiff foam) as if you were frosting a cake. The entire dessert is put into a really hot oven long enough for the meringue to caramelize, but short enough to keep the ice cream from melting.
A classic dome-shaped baked Alaska, with a layer of caramelized meringue around the ice cream.
As the story goes, the name “baked Alaska” was coined in 1867 at Delmonico’s restaurant in New York City to celebrate the recent acquisition of Alaska from the Russian Empire. The true origin of the baked Alaska, however, has always been up for debate.
The ice cream in the baked Alaska stays frozen, even when placed in a hot oven, by taking advantage of the insulating properties of the trapped air in the cellular structure of the foam components (the meringue and sponge cake). The ice cream is surrounded by meringue and sponge cake, which conduct heat very poorly. This keeps the intense heat in the oven from reaching the ice cream.
Ice cream starts to melt at about -3°C, so it’s imperative to caramelize the meringue before the temperature in the ice cream approaches the melting point. The following factors have the most effect on the temperature in the ice cream:

- The thickness of the meringue layer
- The oven temperature
- The time spent in the oven
- The initial temperature of the ice cream
All of these factors can be simulated using a COMSOL Multiphysics model of the baked Alaska. Such simulations can provide insight into the quantitative effect of each of the factors to ensure that the dessert turns out to be a success at the dinner table every time.
For the geometry of our baked Alaska model, we use a half sphere for the dessert’s common dome shape. The geometry includes a layer of sponge cake at the bottom and a layer of meringue covering the dome-shaped ice cream. The thickness of the meringue layer is added as a parameter so that it will be easy to vary. The thickness is initially set to 2 cm.
For the same reason, the oven temperature is also added as a parameter and is initially set to 250°C. Some recipes call for an oven temperature of around 220°C and a cooking time of about 8 to 10 minutes, whereas other recipes use a higher temperature of around 250°C and just a couple of minutes in the oven. Our simulations will show whether the dessert comes out as expected in both cases.
We set up a time-dependent heat transfer simulation using the Heat Transfer in Solids interface in COMSOL Multiphysics. As input to the simulation, we need to provide the density, thermal conductivity, and heat capacity for the ice cream, meringue, and sponge cake. The values used in this simulation come from the book The Kitchen as Laboratory: Reflections on the Science of Food and Cooking, edited by Vega et al.
The material properties are added to three Material nodes, which are assigned to the domains in the geometry that represent the ice cream, meringue, and sponge cake, respectively. From the data, the meringue and the sponge cake are equally poor thermal conductors, which means they both provide ample thermal insulation covering the ice cream.
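Before running the full model, a back-of-the-envelope estimate shows why the foam works. Heat penetrates a distance of roughly 2·sqrt(αt) into a material of thermal diffusivity α after time t. The property values below are rough assumptions for an air-filled foam, not the exact data from the book cited above.

```python
import math

# Order-of-magnitude check on how slowly oven heat penetrates the meringue.
# The material data are rough assumptions for an air-filled foam.
k = 0.05      # thermal conductivity, W/(m*K)
rho = 150.0   # density, kg/m^3
cp = 2000.0   # heat capacity, J/(kg*K)

alpha = k / (rho * cp)  # thermal diffusivity, m^2/s

def penetration_depth(t):
    # Classic conduction estimate: heat has noticeably penetrated
    # a distance of roughly 2*sqrt(alpha*t) after time t.
    return 2.0 * math.sqrt(alpha * t)

for minutes in (4, 8, 12):
    d_cm = 100 * penetration_depth(60 * minutes)
    print(f"{minutes:2d} min: ~{d_cm:.1f} cm")
```

With these assumed values, the penetration depth is under 2 cm after 8 minutes but exceeds it by 12 minutes, which is consistent with the simulation results below: a thin meringue layer fails first at longer baking times.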
For modeling heat transfer, the Heat Transfer in Solids interface uses the material properties from the respective materials.
The initial temperatures are set to -18°C for the ice cream (a typical freezer temperature), 8°C for the meringue (typical if using eggs that have been stored in a refrigerator), and 20°C (room temperature) for the sponge cake.
For the boundary conditions, a heat flux with a large heat transfer coefficient represents the convective heat flux in the oven affecting the temperature of the baked Alaska.
We can set up a time-dependent study to simulate the temperature in the baked Alaska from the moment it is put in the oven to the point where it has stayed there for 12 minutes, which is a bit beyond what would be required according to most recipes. In addition, a full parametric sweep is added to analyze the effects of varying the meringue layer and the oven temperature. The following image shows how you can choose to plot the temperature distribution from any combination of those two parameters and the time.
Selecting values for the thickness of the meringue layer, oven temperature, and time for a fully parametric time-dependent simulation.
As the primary result from the simulation, the temperature in the entire baked Alaska can be displayed as a volume plot, as shown below.
The temperature field (in degrees Celsius) inside the baked Alaska after 4 minutes with a meringue layer of 2 cm and an oven temperature of 250°C.
Next, we perform a more quantitative analysis using a cross-sectional temperature profile. As an extension of this model, we also show an example of how to vary one of the initial temperatures to analyze what happens if the ice cream has been left to thaw for a while before putting the baked Alaska in the oven.
Let’s consider two cases from the various combinations of parameter values in the simulation:

- A 1-cm meringue layer with an oven temperature of 200°C
- A 2-cm meringue layer with an oven temperature of 250°C

A criterion for keeping the ice cream frozen is that the temperature stays at -3°C or lower, even where the ice cream is close to the meringue or sponge cake.
The following plot shows the temperature along the centerline of the baked Alaska, from the bottom to the top, after 4, 8, and 12 minutes, with a meringue layer of 1 cm and an oven temperature of 200°C. The plots use a Cut Line 3D data set to evaluate the temperature along the centerline, and the temperature is plotted with a line graph in a 1D Plot Group.
The temperature profiles from the bottom to the top of the baked Alaska after four minutes (blue), eight minutes (green), and twelve minutes (red).
In the close-up plot below, additional straight lines represent the ice cream melting point and the meringue-ice cream border. These lines are created using extra Line Plot nodes in the same plot group and two parameters that define the levels.
The temperature toward the top of the baked Alaska after four minutes (blue), eight minutes (green), and twelve minutes (red). The ice cream melting point is the dotted black line and the vertical magenta line indicates the meringue-ice cream border. After eight minutes, the outer part of the ice cream starts to melt.
The simulation shows that the ice cream is intact after 4 minutes, but after 8 minutes, the 1-cm meringue layer is too thin to prevent the ice cream from melting from the outside. If we were to switch to a 2-cm layer, the baked Alaska could remain in the oven for a full 12 minutes, if necessary.
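The conduction problem behind these results can be caricatured in one dimension: an explicit finite-difference model of heat flowing from the oven through the meringue into the ice cream. Everything here is an assumption for illustration only, including the diffusivities and the crude treatment of the material interface, so the numbers are qualitative, not the model's results.

```python
# A drastically simplified 1D stand-in for the baked Alaska model:
# explicit finite-difference conduction through a meringue layer into
# the ice cream, with the oven side held at a fixed temperature.
# All material data below are rough assumptions for illustration.

L_meringue = 0.02   # meringue thickness, m
L_ice = 0.05        # ice cream thickness, m
dx = 0.001
n = int((L_meringue + L_ice) / dx) + 1

# Assumed thermal diffusivities, m^2/s (crude per-cell material assignment)
alpha_meringue = 1.7e-7
alpha_ice = 1.8e-7
alpha = [alpha_meringue if i * dx <= L_meringue else alpha_ice for i in range(n)]

T = [8.0 if i * dx <= L_meringue else -18.0 for i in range(n)]  # initial temps, degC
T[0] = 250.0  # oven side held fixed (limit of a very large heat transfer coefficient)

dt = 0.25 * dx * dx / max(alpha)  # stable explicit time step
t, t_end = 0.0, 480.0             # simulate 8 minutes
while t < t_end:
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha[i] * dt / dx**2 * (T[i - 1] - 2 * T[i] + T[i + 1])
    Tn[-1] = Tn[-2]  # insulated inner boundary
    T = Tn
    t += dt

i_interface = int(L_meringue / dx)
print(f"meringue/ice interface after 8 min: {T[i_interface]:.1f} degC")
print(f"deep inside the ice cream:          {T[-1]:.1f} degC")
```

Even this toy model captures the qualitative picture: after 8 minutes, a steep temperature gradient sits inside the meringue layer while the interior of the ice cream is essentially untouched.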
Let’s switch to another parametric solution to see what happens if we increase the meringue layer to 2 cm and the temperature to 250°C. The following plot shows the simulation results toward the top of the baked Alaska.
The temperature toward the top of the baked Alaska after four minutes (blue), eight minutes (green), and twelve minutes (red). After eight minutes, the outer part of the ice cream is still frozen despite the hotter oven, thanks to the thick layer of meringue.
The simulation shows that the ice cream hasn’t melted, even after 8 minutes, thanks to the thicker layer of meringue that insulates the ice cream from the high heat. A temperature of 250°C and an oven time of 4 to 5 minutes are values that you find in several recipes for the baked Alaska, and our COMSOL Multiphysics simulation confirms that the ice cream should remain intact with those oven times and temperatures with a 2-cm meringue layer.
Let’s consider a case where the ice cream has been out of the freezer for some time, which some bakers may do to more easily shape it into a dome. If we rerun the simulation for the case with a meringue layer of 2 cm and an oven temperature of 250°C, but with an initial ice cream temperature of -10°C instead of -18°C, then we get the following results.
The temperature toward the top of the baked Alaska after two minutes (blue), four minutes (green), and eight minutes (red). After four minutes, the ice cream already starts melting.
The higher temperature in the ice cream, as expected, makes it reach the melting point a lot quicker, so it is important to keep the ice cream as cold as possible. In this case, it already starts melting after about three minutes, so if you don’t handle the baked Alaska quickly, it might be slightly melted by the time it reaches the dinner table.
To summarize, these simulations have shown that if you keep the meringue layer thick enough, the baking time reasonably short, and the ice cream cold, you should be able to make a dessert with a nicely caramelized meringue on the outside and delicious frozen ice cream on the inside. In turn, this classic dessert will continue to impress dinner guests by its apparent defiance of the laws of thermodynamics.
Using COMSOL Multiphysics, we can show that the layers of thermally insulating meringue and sponge cake can prevent the intense heat of the oven from melting the ice cream, making for a perfect baked Alaska. Bon appétit!
Imagine a cricket ball sailing through the air at around 145 km/h (90 mph). A batsman stands ready, bat in hand. In the brief moment before the ball arrives, the player is most likely thinking of how to best hit a shot. There are many ways for the cricket ball to connect with the bat, but if a batsman knows the location of a sweet spot, he or she may be able to deliver a better shot by taking advantage of an optimal zone that enables maximum stroke power with the least amount of effort.
A batsman during a cricket game aiming his shot to hit the sweet spot of the bat. Image by Pulkit Sinha — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
Current research on the physics and science behind the game of cricket centers on the performance of the batsmen and bowlers. In fact, we’ve even covered this topic before in a blog post highlighting swing bowling techniques. However, one area of cricket that seems to lack in research is the cricket bat itself. For instance, a structural mechanics analysis can help to find sweet spots in a bat’s design that can improve the quality of the batter’s shots.
Richie Latchman (left) and Yogeshwar Mulchand (right) in front of their poster “Determination of the ‘Sweet Spot’ of a Cricket Bat using COMSOL Multiphysics®”.
Researchers from the University of the West Indies, St. Augustine took a swing at this challenge by using COMSOL Multiphysics to investigate the sweet spots in a cricket bat. This research is useful not only for players and coaches, but also for sporting equipment companies, where simulation is used to analyze sporting goods.
To start, let’s delve into the physics behind cricket bats. The bending modes of a bat are the main vibrational modes affecting its performance. While a freely supported bat has several bending modes of vibration, a handheld bat can be seen as a clamped cantilever beam.
A bat’s first two bending modes are important to its performance, and between them is a “sweet zone” distinguished by its minimal vibrations and energy loss. This information comes from research by D. A. Russell (Ref. 6 and Ref. 11 in the research paper).
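The clamped-cantilever idealization can be made concrete with the Euler-Bernoulli beam formula for a clamped-free beam. The dimensions and willow properties below are illustrative assumptions, not the values from the research paper, and a real bat is far from a uniform beam, so only the qualitative pattern carries over.

```python
import math

# Euler-Bernoulli estimate of the first bending frequencies of a
# hand-clamped bat, idealized as a uniform rectangular willow beam.
# All dimensions and material data are illustrative assumptions.
E = 7.0e9      # Young's modulus of willow, Pa (assumed)
rho = 420.0    # density, kg/m^3 (assumed)
L = 0.85       # clamped length, m (assumed)
b, h = 0.108, 0.04           # blade width and thickness, m (assumed)
I = b * h**3 / 12.0          # second moment of area
A = b * h                    # cross-sectional area

# Clamped-free beam eigenvalue roots of cos(x)*cosh(x) = -1
roots = [1.8751, 4.6941, 7.8548]
freqs = [(r**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A)) for r in roots]
for i, f in enumerate(freqs, start=1):
    print(f"bending mode {i}: {f:.1f} Hz")
```

Whatever property values are plugged in, the frequency ratio between the second and first bending modes of a uniform clamped-free beam is fixed at (4.6941/1.8751)² ≈ 6.27; the full 3D model's frequencies differ because the bat's cross section and material vary along its length.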
In a typical cricket bat, the handle is most sensitive to strain when a ball is played. According to research by Jones (Ref. 14 in the research paper), the thicker edge has increased durability. Further, the area with more wood behind the blade, the swell position, yields better rebounding qualities and transfers greater force to a struck ball. It follows that the sweet spot could be located above this wider area of the bat.
Schematic of a cricket bat. Image by Y. Mulchand, A. Pooransingh, and R. Latchman and taken from their COMSOL Conference 2016 Boston paper.
For their studies, the research team defined a sweet spot as the bat position in which the maximum energy is conveyed with the least amount of vibration. Note that there are other ways the term “sweet spot” can be defined for sporting equipment.
At the core of the team’s research is a 3D model of a common willow cricket bat. They selected the Kingwood material in COMSOL Multiphysics to account for the willow wood that is used to make cricket bats. The team added further parameters to the material based on their research into willow wood. The bat was also modeled as a free object in all areas except for the handle, which was fixed in space.
The front (top image) and back (bottom image) of a 3D cricket bat model constructed in COMSOL Multiphysics. Images by Y. Mulchand, A. Pooransingh, and R. Latchman and taken from their COMSOL Conference 2016 Boston paper.
The researchers used the Structural Mechanics Module to analyze the deformations as well as stresses and strains in the solid structures. They also performed an eigenfrequency analysis to discover the natural frequencies of vibration and the related mode shapes of the bat.
Through this work, the team found the cricket bat’s first six mode shapes, eigenmodes, and eigenfrequencies, as shown in the images below. In their results, the color bar indicates displacement from the bat’s natural position. Here, red shows a large movement and blue represents a lack of vibration when the bat is in its rest position at the specified frequency.
The first six mode shapes of a cricket bat. Top row: The cricket bat at mode shape 1 (left), mode shape 2 (middle), and mode shape 3 (right). Bottom row: The cricket bat at mode shape 4 (left), mode shape 5 (middle), and mode shape 6 (right). Images by Y. Mulchand, A. Pooransingh, and R. Latchman and taken from their COMSOL Conference 2016 Boston paper.
Let’s take a closer look at these results, focusing on modes 1, 3, and 6, which are of interest for the game. The research shows that bat deformation causes a vertical motion around the handle, with high displacement and vibrations at the bat’s toe in mode shapes 1, 3, and 6. In mode shapes 3 and 6, the bat acts like a pivot, with no displacement or vibrations in the lower-mid region. Mode shape 6, shown in the bottom-right figure, is the only case where the bat also acts like a pivot in the upper-mid region, with no displacement or vibrations there.
Mode Shape | Eigenfrequency
---|---
1 | 1.1 Hz |
2 | 1.5 Hz |
3 | 9.6 Hz |
4 | 10.3 Hz |
5 | 17.2 Hz |
6 | 27.9 Hz |
The eigenfrequencies of the cricket bat at the different mode shapes.
When observing the simulation results, note that the researchers assumed that the cricket bat model has the exact same dimensions and material properties as an actual cricket bat. They also didn’t consider the bat’s age. While the sweet spot locations are determined solely by the geometry used, changes in the material data will affect the model’s natural frequencies.
Based on the results of the research team, there is a sweet spot located in the middle of the bat that is concentrated at the lower-mid area, 10 to 15 cm from the bat’s toe. There is another sweet zone 20 cm from the handle’s base, where the handle connects to the shoulder.
As for whether these results can help you improve your cricket stroke, you’ll find out when you take your next shot.
Topology optimization is a powerful tool that enables engineers to find optimal solutions to problems related to their applications. Here, we’ll take a closer look at topology optimization as it relates to acoustics and how we optimally distribute acoustic media to obtain a desired response. Several examples will further illustrate the potential of this optimization technique.
Many engineering tasks revolve around optimizing an existing design or a future design for a certain application. Best practices and experiences derived from years of working within a given industry are of great importance when it comes to improving designs. However, optimization problems are often so complex that it is impossible to know if design iterations are pushing things in the right direction. This is where optimization as a mathematical discipline comes into play.
Before we proceed, let’s review some important terminology. In optimization — be it parameter optimization, shape optimization, or in our case topology optimization — there is always at least one so-called objective function. Typically, we want to minimize this function. For acoustic problems, we may want to minimize the sound pressure in a certain region, whereas for structural mechanics problems, we may want to minimize the stresses in a part of a structure. We state this objective as
min_{X} F(X)

with F being the objective function. A design variable X is varied throughout the optimization process to reach an optimal solution. It is varied within a design domain denoted Ω_{d}, which generally does not make up the entire finite element space Ω, as visualized in the figure below.
The design domain is generally a subset of the entire finite element domain.
Note that since the design variable varies as a function of space over the finite element discretized design domain, it is, strictly speaking, a vector. For this particular case, we will simply refer to it as a variable.
The optimization problem may have more than one objective function, and so it will be up to the engineer to decide how large of a weight each of these objectives should carry. Note that because the objectives may oppose each other during the optimization, special care should be taken when setting up the problem.
In addition to the objective function(s), there will usually be some constraints associated with the optimization problem. These constraints reflect some inherent size and/or weight limitations for the problem in question. With the Optimization interface in COMSOL Multiphysics, we can input the design variable, the objective function(s), and the constraints in a systematic way.
With topology optimization, we have an iterative process where the design variable is varied throughout the design domain. The design variable is continuous throughout the domain and takes on values from zero to one over the domain:

0 ≤ X ≤ 1 in Ω_{d}
Ideally, we want the design variable to settle near values of either zero or one. In this way, we get a near discrete design, with two distinct (binary) states distributed over the design domain. The interpretation of these two states will depend on the physics related to our optimization. Since most literature addresses topology optimization within the context of structural mechanics, we will first look at this type of physics and address its acoustics counterpart in the next section.
Topology optimization in COMSOL Multiphysics for static structural mechanics was a previous topic of discussion on the COMSOL Blog. To give a brief overview: A so-called MBB beam is investigated with the objective of maximizing the stiffness by minimizing the total strain energy for a given load and boundary conditions. The design domain makes up the entire finite element domain. A constraint is applied to the total mass of the structure. In the design space, Young’s modulus is interpolated via the design variable as

E = X E_{0}

To help drive a binary design, we can instead use a so-called solid isotropic material with penalization (SIMP) interpolation

E = X^{p} E_{0}

where p is the penalization factor, typically taking on a value in the range of three to five. With this interpolation (and an implicit linear interpolation of the density), intermediate values of X are avoided by the solver, as they provide less favorable stiffness-to-weight ratios. I have recreated the resulting MBB beam topology from the previous blog post below.
Recreation of the optimized MBB beam.
In this figure, black indicates a material with a user-defined Young’s modulus of E_{0}. Meanwhile, white corresponds to zero stiffness, indicating that there should be no material.
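A few lines of arithmetic show why SIMP pushes the design variable toward 0 or 1: an intermediate value buys stiffness at the rate X^p but pays mass at the rate X, so its stiffness-to-mass ratio X^(p-1) is always below one.

```python
# Why SIMP penalizes intermediate design-variable values: the
# stiffness-to-mass ratio x**(p-1) is below 1 for any 0 < x < 1.
E0 = 1.0  # nondimensional reference Young's modulus

def simp_stiffness(x, p=3):
    # SIMP interpolation of Young's modulus: E = x**p * E0
    return x**p * E0

for x in (0.25, 0.5, 0.75, 1.0):
    ratio = simp_stiffness(x) / x  # stiffness per unit mass
    print(f"x = {x:4.2f}: E = {simp_stiffness(x):.4f}, stiffness/mass = {ratio:.4f}")
```

At X = 0.5 with p = 3, for example, the material carries half the mass but only one-eighth of the stiffness, so the solver prefers to concentrate material into fully solid regions.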
Let’s now move on to our discussion of acoustic topology optimization, where we have a frequency-dependent solution with wave propagation in an acoustic medium. The design variable is now related to the physics of acoustics. Instead of a binary void-material distribution, our goal is to have a binary air-solid distribution, where “solid” refers to a fluid with a high density and bulk modulus, which emulates a solid structure.
We define four parameters that describe the inertial and compressional behavior of the standard medium and the “solid” medium: Air is given a density of ρ_{1} and a bulk modulus of K_{1}, and the “solid” medium has a higher density of ρ_{2} and a higher bulk modulus of K_{2}. The density ρ and bulk modulus K in the design domain will vary between the two states during the optimization via the design variable, similar to the variance of Young’s modulus in our structural mechanics example. But a different interpolation is needed for an acoustics analysis so that the associated values do not tend to zero for a zero-valued design variable, but instead vary between the air and solid values, so that

ρ(X) ∈ [ρ_{1}, ρ_{2}]

and

K(X) ∈ [K_{1}, K_{2}]
The easiest way to obtain these characteristics is by linear interpolation between the two extreme values. This is not necessarily the best approach, since intermediate values of X will not be penalized and the optimal design may therefore not be binary and, as such, not feasible to manufacture. Alternative interpolation schemes are given in the literature. In the cases presented here, the so-called rational approximation of material properties (RAMP) interpolation is used (see Ref. 1).
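The difference between the two schemes is easy to visualize numerically. Below, a generic RAMP-type curve, X/(1 + q(1 − X)), is compared with linear interpolation; the exact quantity being interpolated and the value of q follow Ref. 1, so this sketch only shows the generic shape.

```python
# Comparing linear interpolation with a RAMP-type interpolation between
# two material states. The penalization parameter q is an assumed value.
q = 5.0

def linear(x):
    return x

def ramp(x):
    # RAMP-type interpolation: stays low until x approaches 1
    return x / (1.0 + q * (1.0 - x))

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:4.2f}: linear = {linear(x):.3f}, RAMP = {ramp(x):.3f}")
```

Both curves agree at the endpoints, but RAMP keeps intermediate values of the design variable unattractive (the interpolated property stays close to the air value until X nears one), which helps the optimizer settle on a binary air-solid design.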
Just as with structural optimization, we define a design domain where the material distribution can take place while simultaneously satisfying the constraints. An area or volume constraint can be defined via the design variable. For example, an area constraint on the design domain can be stated as an inequality constraint

A_{solid} / A_{d} ≤ S_{r}

where S_{r} is the ratio between A_{solid}, the area of the design that is assigned solid properties, and A_{d}, the area of the entire design domain.
Let’s first take a look at a silencer (or “muffler”) example. For simplicity, we limit ourselves to a 2D domain. A typical measure used when characterizing a silencer is the so-called transmission loss, denoted TL, which compares the power input to the power output:

TL = 10 log_{10}(W_{in} / W_{out})
The transmission loss is calculated using the so-called three-point method (see Ref. 2). We use it as our objective function, seeking to maximize it at a single frequency (in this case, 420 Hz).
Two design domains are defined above and below a tubular section. The design domain is constrained in such a way that a maximum of 5% of the 2D area is structure, and thus at least 95% must be air (i.e., S_{r} = 0.05).
The initial state for the design domain is 100% air. The animation below shows the evolution from the initial state to the resulting topology.
An animation depicting the evolution from the initial state to the optimized silencer topology.
The optimized structure takes on a “double expansion chamber” (see Ref. 3) silencer topology. The transmission loss has increased by approximately 14 dB at the target frequency, as illustrated in the plot below. However, at all frequencies other than the target frequency, the transmission loss has also changed, which may be of great importance for the specific application. Therefore, a single-frequency optimization may not be the best choice for the typical design problem.
Transmission loss for the initial state and optimized silencer.
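To put the 14 dB improvement in perspective, the decibel definition of transmission loss can be inverted into a power ratio. The transmitted power value below is hypothetical; only the 14 dB difference comes from the result above.

```python
import math

# What a 14 dB transmission-loss improvement means in terms of power:
# TL = 10*log10(W_in / W_out), so +14 dB cuts the transmitted power
# by a factor of 10**1.4, roughly 25.
def transmission_loss(w_in, w_out):
    return 10.0 * math.log10(w_in / w_out)

w_in = 1.0
w_out_before = 0.5                    # hypothetical transmitted power, initial design
w_out_after = w_out_before / 10**1.4  # 14 dB more transmission loss

tl_gain = transmission_loss(w_in, w_out_after) - transmission_loss(w_in, w_out_before)
print(f"TL gain: {tl_gain:.1f} dB, power reduction: {w_out_before / w_out_after:.0f}x")
```

In other words, the optimized topology transmits roughly 25 times less acoustic power at the target frequency than the initial straight duct.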
Shifting gears, let’s now look at how to optimize for two objective functions and two frequencies. Here, we again consider a 2D room with three hard walls and a pressure input at the left side of the room. The room also includes two objective areas, Ω_{1} and Ω_{2}, defined at each corner at the right side of the room. The two objectives are as follows:
F_{1}: minimize the sound pressure in Ω_{1} at the first frequency, and F_{2}: minimize the sound pressure in Ω_{2} at the second frequency, with the circular design domain Ω_{d} and an area constraint of at most 10% structure. The initial state makes the design domain 100% air.
A square 2D room with a circular design domain and two objective domains.
With more than one objective function, we must make some choices regarding the relative weights, or importance, of the different objectives. In this case, the two objectives are given equal weight, and the problem is stated as a so-called min-max problem:

min_{X} max(F_{1}, F_{2})
The figures below show the optimized topology (blue) along with the sound pressure for both frequencies using the same pressure scale. Note how the optimized topology results in a low-pressure zone (green) appearing in the upper-right corner at the first frequency. At the same time, this optimized topology ensures a similar low-pressure zone in the lower-right corner at the second frequency. This would certainly be a challenging task if trial-and-error was the only choice.
Sound pressure for frequency f_{1} (left) and for frequency f_{2} (right). The optimized topology is shown in blue.
As a third and final example, we’ll optimize a single objective over a frequency range. A sound source radiates into a 2D domain, where we initially have a cylindrical sound field. Two square design domains are present, but since the geometry is symmetric, we only consider one half of it in the simulation. In this case, we want a constant magnitude of the on-axis sound pressure at a point 0.4 m in front of the sound source. The optimization is carried out over a frequency range of 4,000 to 4,200 Hz (50 Hz steps, for a total of five frequencies). We can accomplish this via the Global Least-Squares Objective functionality in COMSOL Multiphysics, with the objective being the sum over the five frequencies of the squared deviation between the computed pressure magnitude and the desired pressure.
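The bookkeeping behind such a least-squares objective can be sketched in a few lines. The pressure values below are made up purely for illustration; only the frequency list and the idea of a summed squared deviation come from the example above.

```python
# Sketch of a global least-squares objective over several frequencies:
# the optimizer minimizes the summed squared deviation between the
# computed pressure magnitude and the target at each frequency.
freqs = [4000, 4050, 4100, 4150, 4200]  # Hz
p_target = 1.0                          # desired on-axis pressure magnitude (normalized)

def lsq_objective(p_by_freq):
    return sum((abs(p) - p_target) ** 2 for p in p_by_freq.values())

p_initial = {f: 0.6 for f in freqs}                              # too quiet initially
p_optimized = {f: 0.98 + 0.01 * i for i, f in enumerate(freqs)}  # close to target

print(f"objective, initial:   {lsq_objective(p_initial):.4f}")
print(f"objective, optimized: {lsq_objective(p_optimized):.4f}")
```

Driving this single scalar toward zero forces the response flat across all five frequencies at once, which is exactly what distinguishes this example from the single-frequency silencer optimization.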
The initial state is again 100% air. The optimized topology is shown below, along with the sound field for both the initial state and the optimized state.
Sound pressure for the initial state (left) and optimized state (right) at 4 kHz, with the optimized topology shown in blue within the square design domains.
Since the sound pressure magnitude in the observation point of the initial state is lower than the objective pressure, the topology optimization results in the creation of a reflector that focuses the on-axis sound. The sound pressure magnitudes before and after the optimization are shown below. The pressure magnitude is close to the desired objective pressure in the frequency range following the optimization.
The pressure magnitude relative to the objective pressure for the initial and optimized topology.
Acoustic topology optimization offers great potential for helping acoustic engineers come up with innovative designs. As I have demonstrated today, you can effectively use this technique in COMSOL Multiphysics. With proper formulations of objectives and constraints, it is possible to construct applications with new and innovative topologies — topologies that would most likely not have been found using traditional methods.
I would like to give special thanks to Niels Aage, an associate professor at the Technical University of Denmark, for several fruitful discussions on the topic of optimization.
To learn more about using acoustic topology optimization in COMSOL Multiphysics, we encourage you to download the following example from our Application Gallery: Topology Optimization of Acoustic Modes in a 2D Room.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN ReSound A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN ReSound as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Saying that the world’s oceans are large is an understatement. Oceans cover around 71% of Earth’s surface and the deepest known point, the Challenger Deep in the Mariana Trench, extends down for about 36,000 feet (almost 11 km). To study this massive environment, researchers need powerful, far-reaching tools.
The depth of the Challenger Deep compared to the size of Mount Everest. Image by Nomi887 — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Ocean acoustic tomography, which involves deep-water, low-frequency sound sources, is one option for measuring the temperature of oceans. This system measures the time it takes sound signals to travel between two instruments at known locations, a sound source and a receiver. Because sound travels faster in warmer water, you can use this measurement to extract the average temperature over the distance between the source and the receiver.
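The inversion step described above can be sketched in a few lines. This is a simplified illustration, not the method used in practice: the linearized sound-speed relation c ≈ 1449 + 4.6·T (m/s, T in °C) and all of the numbers below are illustrative assumptions that ignore salinity and depth effects.

```python
# Sketch: recover the average water temperature from an acoustic travel time.
# Assumes a simplified, linearized sound-speed relation c(T) = 1449 + 4.6*T;
# the coefficients are illustrative, not an oceanographic standard.

def avg_sound_speed(distance_m, travel_time_s):
    """Average sound speed along the path between source and receiver."""
    return distance_m / travel_time_s

def temperature_from_speed(c):
    """Invert the linearized speed-temperature relation c = 1449 + 4.6*T."""
    return (c - 1449.0) / 4.6

distance = 100e3     # 100 km between source and receiver (illustrative)
travel_time = 66.89  # measured travel time in seconds (illustrative)
c = avg_sound_speed(distance, travel_time)
print(f"average speed: {c:.1f} m/s, "
      f"average temperature: {temperature_from_speed(c):.1f} °C")
```

Because sound speed increases with temperature, a shorter travel time over the same known distance maps directly to a warmer average temperature along the path.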
To get these measurements, long-range ocean acoustic tomography must be able to use low-frequency signals to cover a broad frequency band, something that often requires a high-power sound source. Therefore, creating a system that can successfully cover a large frequency band, while reducing power consumption via a highly efficient design, is ideal. One particular focus in this field is on resonators, since saving energy in a resonator helps increase overall transducer efficiency in cases where the wavelength is larger than the transducer’s dimensions.
In response to this, Andrey K. Morozov at Teledyne Webb Research (TWR) developed a highly efficient, tunable sound resonator design. While previous research involved a high-Q resonant organ pipe operating at a frequency band of 200-300 Hz, this study revolves around a new high-frequency sound source that operates at an octave band of 500-1000 Hz. Further, the new high-Q resonant organ pipe design can keep a system in resonance when the transmitted signal has a changing instantaneous frequency. With its small size, this design is helpful for shallow water experiments.
In this design, a digitally synthesized frequency sweep signal is transmitted by a sound projector. The projector and high-Q resonator tune the organ pipe so that it matches a reference signal’s frequency and phase. This resonant tube can operate at any depth, but before it was ready to hit the seas, Morozov studied its design using the COMSOL Multiphysics® software.
As we can see in the schematic below, the organ pipe device is composed of slotted resonator tubes (or pipes) that are driven by a symmetrical Tonpilz transducer. The Tonpilz driver’s piezoceramic stacks move pistons and thereby vary the volume. The two symmetrical pipes that are coupled through the Tonpilz transducer function like a half-wave resonator that has a volume velocity source driver.
Image of a tunable resonant sound source and Tonpilz driver. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
Let’s focus on how these resonator tubes include slots or vents. In order to achieve smooth control of the resonance frequency, an electromechanical actuator moves two sleeves axially along the resonator tubes, maintaining a small gap between the sleeve and the pipe. Through this action, the slots are covered and the actuator can tune the organ pipe over a large frequency range. When the sleeves’ positions relative to the slot change, the equivalent acoustic impedance of the slots also changes, altering the resonance frequency of the entire resonator.
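The tuning principle can be illustrated with the textbook formula for a half-wave resonator, f = c/(2·L_eff): lengthening the effective acoustic length lowers the resonance frequency. This sketch is only a rough proxy for the real slot-impedance mechanism; the effective lengths and the water sound speed below are illustrative assumptions.

```python
# Sketch: fundamental resonance of a half-wave (open-open) pipe as a proxy
# for how covering the slots changes the resonator's effective length.
# The real slot acoustic impedance is more involved; numbers are illustrative.

C_WATER = 1500.0  # assumed speed of sound in water, m/s

def half_wave_resonance(effective_length_m):
    """Fundamental of a half-wave resonator: f = c / (2 * L_eff)."""
    return C_WATER / (2.0 * effective_length_m)

# Covering the slots increases the effective length, lowering the frequency:
for L in (0.75, 1.0, 1.25, 1.5):
    print(f"L_eff = {L:5.2f} m -> f = {half_wave_resonance(L):7.1f} Hz")
```

With these assumed lengths, the fundamental sweeps from 1000 Hz down to 500 Hz, spanning the same octave band as the device described above.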
In the next section, we’ll see how simulation was used to further improve the design of the tunable organ pipe.
Morozov reduced the thickness of the resonator’s walls to make them lighter, which caused them to vibrate and store a large amount of acoustical energy. To prevent acoustical coupling between the main resonator and a mechanical part of the system, he used shock mounts to attach the main resonator pipe to the backbone rail. This design change did not completely avoid unwanted resonance effects in the tuning mechanics, so Morozov turned to simulation for further optimization.
The plot below and to the left represents the sound pressure level at resonance. Here, the vents in the main resonator pipe open and sound energy leaves the organ pipe through the resulting gap. In a low-frequency design, rounded edges in the sleeve cylinder help to prevent dual resonances in this position, but this isn’t a complete solution for a high-frequency resonator.
To learn more, the researcher studied the resonance curves for different sleeve positions, as seen below and to the right, shifting each position in 1 cm intervals.
Left: Simulation results of a tunable organ pipe, performed for a standard spherical driver. Right: Results showing the different sleeve positions and their correlating frequency responses. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
His results showed that the vibrations in the main pipe and the resonating water beneath the sleeve can disturb the main resonance curve. Although both simulation results and experimental tests agree that this problem can be alleviated by increasing wall thickness, the resulting pipe design is too heavy.
To address this issue, Morozov easily tested different design configurations with simulation. He discovered that the tunable mechanism can be improved by ensuring that the gap between the sleeve and the main pipe is present on only one side of the orifice. Using this improved design as a basis, he completed additional studies, including investigating the optimal frequency, particle velocity, and sound pressure of the device, which we’ll focus on next.
Comparing sound pressure levels and frequency in the improved design for various sleeve positions. Image by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
In this new design, the pipe first functions as a half-wavelength resonator and radiates through its main orifices. At the end of the frequency band, the sound is mostly radiated through the completely open tuning vents, as seen in the following images. The transition between these two states is continuous.
Absolute sound pressure when the slots are completely closed at the starting frequency range of 500 Hz (left) and when the slots are completely open at the maximum resonance frequency of 1000 Hz (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
To conclude, these simulations enabled Morozov to successfully visualize the structural acoustics of a new high-Q resonant organ pipe with an octave band of 500 to 1000 Hz and investigate important details, including the optimal profile of the opening slots.
Finally, a physical organ pipe was constructed out of aluminum using the exact dimensions of the model. The initial test pool results were similar to the simulation results and achieved the expected frequency range. However, the resonance frequencies were slightly lower in these tests. This is likely explained by the elliptical shape of the pipe and the limited pool dimensions. Both factors contributed to the decreased resonance frequency.
Due to these results, Morozov altered his experiment by cutting the pipes, as well as by performing another test at the Woods Hole Oceanographic Institution dock.
The altered sound source system (left), tested at the Woods Hole Oceanographic Institution (right). Images by Andrey K. Morozov and taken from his COMSOL Conference 2016 Boston paper.
The new experiment indicated that while the simulation could efficiently predict resonance frequencies, the model’s Q-factor is larger than in the experimental results. This difference is expected because real losses are hard to predict. Also, there were slight variations between the model and the realized design.
Designing a tunable resonant system is challenging because you need to precisely adjust parameters and ensure that it achieves the necessary frequency range. Using COMSOL Multiphysics, Morozov managed to achieve the octave frequency range in his tunable sound source design before performing a large amount of water tests. He found that the physical sound source parameters reasonably matched the simulation.
This improved design can help scientists measure long-range sound propagation and temperature over large distances in the ocean, allowing them to study everything from small-scale temperature fluctuations to overarching oceanic climate change.
Elastoplastic materials combine two principal types of behavior: elastic deformation, which is reversible deformation, and plastic deformation (or plasticity), which is irreversible and leaves a permanent deformation upon unloading. In order to model this type of material behavior, we need to use a constitutive relation that connects the stress state not only to the current strain state, but also to the previously accumulated plastic strains and to their development.
Plastic deformation in a pressure vessel subjected to internal pressure, showing the elastic region (dark blue) and some plasticity (red).
Generally, when there is an increase in stress and the initial yield stress (the elastic limit) is surpassed, the elastoplastic material is strained much more than for a corresponding stress increase in the elastic region. The material is hardened by plastic deformation, but the response in the plastic regime varies greatly among different materials.
For metallic materials, hardening is commonly described by three different types of behavior: isotropic, kinematic, and mixed hardening.
In the figures below, we can visualize the stress-strain relation for a uniaxial loading for the three types of hardening. In the first step, the material is stretched until a significant plastic strain is reached. At this point, the current yield stress, σ_{ys}, is above the initial yield stress, σ_{ys0}. So far, the stress-strain curve follows the same path for all three types of hardening. In the second step, the loading direction is reversed and the material is compressed until the onset of yielding in compression.
The stress-strain relation in a uniaxial load case for three hardening models: isotropic, kinematic, and mixed.
With isotropic hardening, a material can be compressed at most 2σ_{ys} before the onset of reversed yielding. With kinematic hardening, a material can be compressed at most 2σ_{ys0}. With mixed hardening, the compression is in between the two, having 2σ_{ys0} < Δσ < 2σ_{ys}. Both kinematic and mixed hardening result in a so-called back stress or shift stress, which is a new stress level that is equally far from yielding in tension and compression. Before the onset of plasticity and in the case of isotropic hardening, the back stress is zero.
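The reversed-yielding behavior described above can be captured in a small sketch. The function below parameterizes the hardening split with a single kinematic fraction (0 for pure isotropic, 1 for pure kinematic); the yield stress values are illustrative, not material data from the text.

```python
# Sketch: onset of reversed yielding after a tensile overload for the three
# hardening models. sig_y0 is the initial yield stress; sig_y is the current
# (hardened) yield stress reached in tension. Values are illustrative.

def reversed_yield_stress(sig_y0, sig_y, kinematic_fraction):
    """Stress at which compressive yielding starts after loading to sig_y.

    kinematic_fraction = 0 -> pure isotropic, 1 -> pure kinematic,
    values in between -> mixed hardening.
    """
    back_stress = kinematic_fraction * (sig_y - sig_y0)            # shift of the yield surface
    radius = sig_y0 + (1 - kinematic_fraction) * (sig_y - sig_y0)  # (expanded) size of the surface
    return back_stress - radius

sig_y0, sig_y = 200.0, 300.0  # MPa, illustrative
for frac, name in [(0.0, "isotropic"), (1.0, "kinematic"), (0.5, "mixed")]:
    s = reversed_yield_stress(sig_y0, sig_y, frac)
    print(f"{name:9s}: reversed yielding at {s:7.1f} MPa "
          f"(stress swing {sig_y - s:.0f} MPa)")
```

With these numbers, the stress swing before reversed yielding is 2σ_{ys} = 600 MPa for isotropic hardening, 2σ_{ys0} = 400 MPa for kinematic hardening, and 500 MPa for the 50/50 mixed case, exactly the ordering described above.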
Besides this type of deformation hardening, some metallic materials also demonstrate more complex types of behavior. One example is viscoplasticity, where the plastic behavior is strain rate dependent.
You can access a collection of material models that can be used for modeling elastoplastic materials in the Nonlinear Structural Materials Module. The choice of fatigue model, however, depends not only on the material model, but also on the loading characteristics. We discuss the influence of loading conditions on the choice of fatigue model in a previous blog post.
When working with nonlinear materials such as elastoplastic materials, the material response of the first load cycle generally differs from the material response of the second cycle. This is caused by the first load cycle, which can both shift the yield surface and change the yield stress. The consecutive load cycles can then either oscillate around a new stress-strain state or cause further accumulation of inelastic strains. When studying fatigue, we must first find a stable load cycle, which is representative for the subsequent cycles. Therefore, when modeling elastoplastic materials, we often need to simulate several load cycles before reaching a stable load cycle.
We discuss the different types of load cycles in another blog post: Modeling Thermal Fatigue in Nonlinear Materials.
Let’s go over how to model fatigue in elastoplastic materials with two of the types of hardening, kinematic and isotropic hardening, using COMSOL Multiphysics.
Let’s take a look at the Elastoplastic Low-Cycle Fatigue of Cylinder with a Hole tutorial model. Here, the component is loaded beyond the point of yielding. The material stabilizes almost immediately, since a stable load cycle is obtained already during the second cycle. However, the stable load cycle consists of both elastic and plastic deformations. This is possible since the material is modeled with kinematic hardening, which means that the yield surface moves between two positions: tension and compression.
For most applications that involve kinematic hardening, a full elastoplastic analysis must be performed. The model size can be somewhat reduced by dividing the model into domains where plasticity develops and domains where only elastic deformation takes place. This method is useful because plasticity is computationally expensive to model, requiring us to evaluate an additional seven degrees of freedom as opposed to the three displacements in elastic materials.
It is common that fatigue failure originates from the presence of a notch. In this case, an approximate solution can be used; for example, the Neuber correction for plasticity based on the Ramberg-Osgood material model. Based on the elastic solution, this approximate method computes an elastoplastic stress-strain state at a notch. This method is fast, but the further away we move from the notch, the lower the accuracy of the results. This method is demonstrated in a related example model: Notch Approximation to Low-Cycle Fatigue Analysis of Cylinder with a Hole.
We can compare the two methods in the figures below. Due to high strain and multiaxial load conditions at the hole, we predict fatigue using the low-cycle-fatigue Smith-Watson-Topper (SWT) model. The results at the critical spot are similar for both methods. The computation time, on the other hand, differs significantly. For the elastoplastic model, computation time is a few minutes, compared to a few seconds for the notch approximation.
A low-cycle fatigue prediction, based on a full elastoplastic analysis (left) and a notch approximation (right). Results display the logarithm of the number of cycles to failure. The same color scale is used in both figures.
In another tutorial model, Standing Contact Fatigue, a surface-hardened material is subjected to a compressive load cycle. Affected by the hardening process, the tested material has three distinct layers with different material properties. The material is strong closest to the surface (the case), while it is weak deep inside (the core). In between, there is a thin transition layer where both the material properties and residual stress sharply change.
The plastic properties of the material differ through the depth. In the case layer, the hardening follows a linear isotropic model, while in the core, it follows an exponential hardening model. In the transition layer, the hardening function is exponential and parameterized. The function for the material parameters is chosen such that the material model of the transition layer at the interface with the case corresponds to the case model, and the interface with the core corresponds to the core model.
During the first load cycle, the material is compressed past the point of yielding and plasticity grows on the subsurface level. Since the yield surface expands in isotropic hardening, each consecutive load cycle that is not as high in magnitude as the first cycle will not introduce any further plasticity, thus the stable load cycle is elastic. Although high strains develop during the first load cycle, any consecutive cycle will result in small strain changes. It is therefore reasonable to assume that a stress-driven, high-cycle fatigue model is suitable for fatigue evaluation.
In the case of a predominantly compressive load, the Dang Van model is useful for fatigue modeling, since it takes the compressive mean stress into account. You can access the Dang Van model for these types of simulations in the Fatigue Module.
Fatigue prediction in a surface-hardened material. Fatigue usage factor is displayed. The highest risk of fatigue is in the near-surface case layer, with a lower risk of fatigue in the deep core layer.
By simulating fatigue in common types of elastoplastic materials with COMSOL Multiphysics, we can better understand and predict the occurrence of fatigue failure.
Humans have used drying as a method for preserving food since ancient times. Since then, the drying process has expanded from open-air drying or sun drying to other drying techniques, such as solar drying, freeze drying, and vacuum drying. Drying is also a key process in many other application areas, from the pharmaceutical industry to plastics.
Today, we’ll focus on the chemical process of vacuum drying, which is particularly useful when drying heat-sensitive materials such as food and pharmaceutical drugs. Vacuum dryers, commonly called vacuum ovens in the pharmaceutical industry, also offer other benefits. Because they require lower temperatures to operate, vacuum dryers use less energy and therefore, reduce costs. They also recover solvents and avoid oxidation.
A rotary vacuum dryer. Image by Matylda Sęk — Own Work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Vacuum dryers remove water and organic solvents from a wet powder. The dryer operates by reducing the pressure around a liquid in a vacuum, thereby decreasing the liquid’s boiling point and increasing the evaporation rate. As a result, the liquid dries at a quicker rate — another major benefit of this process.
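The pressure-boiling point relation can be illustrated with the Antoine equation for water. The coefficients below are the commonly tabulated set for water valid roughly between 1 and 100°C; this is a back-of-the-envelope sketch, not part of the COMSOL model.

```python
# Sketch: how reducing pressure lowers the boiling point of water, using the
# Antoine equation log10(P) = A - B/(C + T) with T in °C and P in mmHg.
# Coefficients are the commonly tabulated set for water (approx. 1-100 °C).
import math

A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mmhg):
    """Invert the Antoine equation: T = B / (A - log10(P)) - C."""
    return B / (A - math.log10(pressure_mmhg)) - C

for p in (760.0, 200.0, 50.0, 10.0):  # from atmospheric down to rough vacuum
    print(f"P = {p:6.1f} mmHg -> water boils at {boiling_point_celsius(p):5.1f} °C")
```

At atmospheric pressure (760 mmHg) the formula returns the familiar 100°C, while at 50 mmHg water already boils below 40°C, which is why vacuum drying is gentle enough for heat-sensitive products.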
For vacuum drying to be effective, we need to decrease drying times without harming the products, which means that we need to maintain a strict control of the operating conditions. To balance these goals and to understand how operating conditions influence the product, you can use the multiphysics modeling capabilities of COMSOL Multiphysics.
Today, we’ll analyze the vacuum drying process of a Nutsche filter-dryer model. The dryer works by heating a wet cake from the bottom and the side walls of a container and by decreasing the pressure in the gas phase on the top of the cake. This example is based on a paper published by Murru et al. (Ref. 1 in the model documentation).
Let’s start by taking a closer look at our model. The vacuum dryer is comprised of a cylindrical drum filled with wet cake, which consists of three different phases: solid powder particulates, a liquid solvent, and a gas. As such, the cake’s material properties need to include the properties of all three individual phases, which vary depending on the proportion of each phase in the cake. The portion of each phase is determined by the volume fraction, which is one of our modeled variables.
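As a minimal sketch of this idea, the effective cake properties can be written as volume-fraction weighted averages of the individual phase properties. The simple linear mixture rule and the numbers below are illustrative assumptions; the actual model may use a more elaborate averaging scheme.

```python
# Sketch: effective (volume-averaged) cake property from the three phase
# volume fractions. A plain volume-fraction weighting is the simplest mixture
# rule; all values below are illustrative.

def effective_property(fractions, phase_values):
    """Volume-fraction weighted average; the fractions must sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * v for f, v in zip(fractions, phase_values))

theta = (0.6, 0.3, 0.1)  # volume fractions: solid powder, liquid solvent, gas
k = (1.2, 0.25, 0.026)   # assumed thermal conductivity of each phase, W/(m*K)
print(f"effective conductivity: {effective_property(theta, k):.3f} W/(m*K)")
```

As the solvent evaporates, the liquid fraction shrinks and the gas fraction grows, so the effective properties drift over time, which is why the volume fraction is carried as a modeled variable.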
The cake is modeled as a rectangular geometry with a radius of 40 cm and height of 10 cm in a 2D axisymmetric component. At the top, our model is exposed to a low-pressure head space. Meanwhile, heat flux boundary conditions at the filter dryer’s side and bottom boundaries account for a 60°C heating fluid.
The vacuum drying process in an axisymmetric Nutsche filter dryer.
Moving on, our tutorial combines evaporation and heat transfer modeling in order to study the cake’s liquid phase profiles and temperature. We calculate the cake’s solvent volume fraction with the Coefficient Form PDE interface and simulate heat transfer with the Heat Transfer in Solids interface. To solve the moisture transport in porous media, we use a predefined multiphysics interface in the Heat Transfer Module. We also include solvent evaporation by using both a heat-sink and mass-sink term and approximate the solvent transport as a diffusion process.
Our model makes the following assumptions:
In these situations, we can use a step function to smoothly ramp both the evaporation rate and diffusion coefficient down to zero.
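A smoothed step of this kind can be sketched as follows. The cosine transition shape and the transition width are illustrative choices standing in for COMSOL’s built-in smoothed step functions.

```python
# Sketch: a smoothed step used to ramp the evaporation rate and the moisture
# diffusivity to zero as the liquid volume fraction approaches zero. The
# cosine shape and width eps are illustrative stand-ins for COMSOL's
# smoothed step functions.
import math

def smooth_step(x, eps=0.01):
    """0 for x <= 0, 1 for x >= eps, with a smooth (cosine) transition."""
    if x <= 0.0:
        return 0.0
    if x >= eps:
        return 1.0
    return 0.5 * (1.0 - math.cos(math.pi * x / eps))

liquid_fraction = 0.004  # nearly dried-out region (illustrative)
D0 = 1e-8                # base moisture diffusivity, m^2/s (illustrative)
D = D0 * smooth_step(liquid_fraction)
print(f"ramp factor = {smooth_step(liquid_fraction):.3f}, D = {D:.2e} m^2/s")
```

The smooth transition avoids the convergence problems a hard cutoff would cause in the nonlinear solver while still driving both sink terms to zero in fully dried regions.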
We see that our simulation results are as predicted. Let’s start by examining our analysis of the cake after 30 hours have passed. As seen below, the cake’s temperature is close to that of the heating fluid (60°C) at both the side and bottom boundaries, and the liquid phase’s volume fraction is lowest near these heated boundaries and highest at the cake’s center. Additionally, the apparent moisture diffusivity is highest at the cake’s center and almost zero in places where the liquid phase has evaporated. Considering our model’s assumptions, these results are all expected.
The cake’s temperature (left), volume fraction of the liquid phase (middle), and apparent moisture diffusivity (right) after 30 hours.
Switching gears, let’s expand our timescale to look at the evaporation rate after 10, 20, and 30 hours. This study also yields expected results, since it shows evaporation beginning at the heated walls and decreasing when the amount of solvent at these boundaries lessens. During this process, the evaporation front shifts toward the cake’s center.
The evaporation rate after 10 (left), 20 (middle), and 30 (right) hours.
The quantitative results generated by our simulation study are in good agreement with previous research, confirming their validity. As such, we can use this model to accurately predict how dry a product is as a function of time. Using this information, we can minimize the amount of time that a product is exposed to elevated temperatures. Additionally, we can change the dryer’s size if we want to reduce the drying time when working with heat-sensitive products. Through multiphysics simulation, we can design more efficient and effective vacuum dryers for use in a variety of industries.
As a refresher, let’s begin by reviewing some of the key concepts behind modeling gears in COMSOL Multiphysics. A gear is defined in a Gear node as a rigid body with six degrees of freedom in the form of translations and rotations at the center of rotation. It is used in a Gear Pair node in the model tree in order to connect with another gear. Here, you can specify a finite stiffness for the gear mesh or gear tooth, either for individual gears or for the pair. A mathematical formulation is used to describe the connection between two gears, without any need for a defined, realistic gear geometry to detect the contact between the two gears. Therefore, you can represent a gear with either a realistic gear geometry or any similar geometry of a disc.
It is possible to compute the inertial properties of a gear from the geometry using its calculated mass density, or you can directly enter the properties in the form of mass and moment of inertia in the node’s edit fields. You can also apply external forces and moments on the gear as well as constrain certain degrees of freedom of a gear. For instance, when modeling torsional vibrations, all of the degrees of freedom except the axial rotation can be constrained.
COMSOL Multiphysics offers a number of standard gear types, each with its own merit and applications. As mentioned above, the gear is an abstract object, but if you want to add a realistic geometry for visualization, you can access the Part Libraries, where you can find various types of gears and racks.
In the following images, you can see the various types of gears and racks available and the geometrical parameters needed for their mathematical descriptions.
A Spur Gear (left) and Helical Gear (right) with their external gear mesh.
A Spur Gear (left) and Helical Gear (right) with their internal gear mesh.
A Bevel Gear (left) and Worm Gear (right).
A Spur Rack (left) and Helical Rack (right).
The inputs required to model each gear type are shown in the respective figures.
After selecting the appropriate gear type, you can then define the parameters controlling the size and shape of the gear teeth. As an example, defining a helical gear requires parameters such as the number of teeth (n), the pitch diameter (d_{p}), the pressure angle (α), and the helix angle (β):
A screenshot showing the settings window for a helical gear. Various inputs required to model a helical gear, including gear properties, gear axis, center of rotation, and density are shown.
The next step is to define the position and orientation of the gear. The gear position is defined in terms of the center of rotation. This is the point at which the degrees of freedom are created and the rotation is interpreted. The forces and moments acting on the gear due to meshing with other gears are also interpreted about this point. By default, the center of rotation is set to the center of mass of the gear, but there are other ways to define it explicitly as well.
The gear orientation is specified in terms of the gear axis, which is the axis of rotation passing through the center of rotation. The gear axis is used when creating the gear local coordinate system. Also interpreted about this axis is the gear rotation, a degree of freedom in the Gear Pair node.
You can mount gears in one of two ways: on a flexible or a rigid shaft. These devices can be mounted either rigidly or with a finite stiffness using a fixed joint. Joints are the features used to connect two components by allowing certain relative motion between them.
When there is no clearance between the gear and the shaft in the geometry, the objects can be either in an assembly state or a union state. For a flexible shaft, gears are by default rigidly mounted on the shaft if both the gear and shaft are in a union state.
It is not necessary to model a shaft in order to mount gears, as the devices can be mounted directly to the ‘ground’ either rigidly or with a finite stiffness using a hinge joint. The prescribed displacement/rotation subnode of a gear can also be used for this purpose.
Note that it is also possible to support shafts using hinge joints, which can be rigid or have a finite stiffness.
Figure showing gears with an actual geometry as well as those modeled through equivalent discs. Different mounting methods for gears and shafts are also depicted.
In order to connect the different types of gears that you have defined in your model, you can use a Gear Pair node. This node can connect spur, helical, and bevel gears. You can also use Worm and Wheel as well as Rack and Pinion nodes for their specific cases. These nodes connect two gears in such a way that there is no relative motion along the line of action at the contact point. The remaining displacements and rotations of the two gears are independent of each other.
Each Gear Pair node adds two degrees of freedom.
The Gear Pair node also adds constraints in order to connect the two gears.
For a line contact model, one more constraint is added to restrict the relative rotation about a line joining the two gear centers. If friction is included, frictional forces are obtained using the contact force, which is computed as the reaction force of the contact point constraint. These frictional forces are then applied on both gears in a plane perpendicular to the line of action.
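For an ideal pair, the kinematics that these constraints enforce reduce to the familiar tooth-count ratio between the angular velocities: with no slip at the pitch point, the pitch-line velocities of the two gears must match. The sketch below illustrates this; the sign convention and the speeds are illustrative.

```python
# Sketch: the basic kinematic relation a gear pair enforces. With no slip at
# the pitch point, pitch-line velocities match, so the angular velocities
# follow the inverse tooth-count (gear) ratio. Sign convention here: an
# external mesh reverses the direction of rotation, an internal mesh does not.

def driven_angular_velocity(omega_driver, n_driver, n_driven, internal=False):
    """Angular velocity of the driven gear from the tooth-count ratio."""
    ratio = n_driver / n_driven
    return ratio * omega_driver if internal else -ratio * omega_driver

# Pinion and wheel from the example in this series: 20 and 30 teeth.
omega_pinion = 100.0  # rad/s (illustrative)
omega_wheel = driven_angular_velocity(omega_pinion, 20, 30)
print(f"wheel speed: {omega_wheel:.2f} rad/s")  # opposite sense, 2/3 magnitude
```

The gear mesh elasticity, backlash, and friction features discussed later all act as perturbations around this ideal kinematic relation.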
In a Gear Pair node, you can select any two gears defined in the model. But in order to achieve proper tooth meshing, a set of gears must fulfill certain compatibility criteria; for example, both gears must have the same module and the same pressure angle.
All these checks are automatically performed and an error message is issued during equation compilation if the two selected gears are not compatible.
Examples of incompatible gear mesh. In the figure on the left, the gears have different modules. In the figure on the right, the gears have different pressure angles.
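A compatibility check covering the two criteria illustrated in the figure above (equal module and equal pressure angle) might be sketched as follows. The real checks involve additional parameters; the data structure and the values here are hypothetical.

```python
# Sketch: a simplified compatibility check for two meshing gears, covering
# the two criteria shown in the figure above (equal module and equal
# pressure angle). The real check involves further parameters.

def compatible(gear_a, gear_b, tol=1e-9):
    """Return (ok, reason) for a candidate gear pair."""
    if abs(gear_a["module"] - gear_b["module"]) > tol:
        return False, "gears have different modules"
    if abs(gear_a["pressure_angle"] - gear_b["pressure_angle"]) > tol:
        return False, "gears have different pressure angles"
    return True, "compatible"

pinion = {"module": 2.5e-3, "pressure_angle": 25.0}  # hypothetical values
wheel = {"module": 2.5e-3, "pressure_angle": 20.0}
print(compatible(pinion, wheel))  # -> (False, 'gears have different pressure angles')
```

COMSOL performs the equivalent checks automatically at equation compilation, as noted above, so an incompatible pair is caught before the solution starts.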
A coordinate system for each gear is defined using the gear axis and center of rotation of both gears. The first axis of the coordinate system triad is the gear axis itself. The second axis is the direction pointing from the center of rotation to the contact point. The third axis is normal to the plane containing the first two axes. This coordinate system is attached to the gear and varies with the changes in gear orientation. Note, however, that it does not rotate with the gear rotation about its own axis.
A schematic showing coordinate systems and other parameters for both gears connected by a gear pair.
These quantities are illustrated in the above figure of a gear pair.
The gear tooth coordinate system is defined for both gears by rotating the gear coordinate system with the tooth angle matrix. This matrix is constructed using the helix angle and the cone angle.
The line of action, meanwhile, is defined as the normal direction of the gear tooth surface at the contact point on the pitch circle. This is the direction along which the forces are transferred from one gear to another. It is defined by rotating the third axis of the gear tooth coordinate system (gear tangent) about the first axis of the gear tooth coordinate system with the pressure angle (α). Based on the direction of the driver gear, the gear tangent can be rotated either clockwise or counterclockwise.
Two figures depicting the line of action and the direction of rotation of the driver gear. The line of action is defined due to the fact that the driver gear and tangent rotate in the clockwise direction (left) and counterclockwise direction (right).
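The construction of the line of action described above can be sketched with Rodrigues’ rotation formula: rotate the gear tangent about the gear axis by the pressure angle, with the sign set by the driver’s rotation direction. The axis and tangent vectors below are illustrative; the 25° pressure angle matches the example gears in this series.

```python
# Sketch: constructing the line of action by rotating the gear tangent about
# the gear axis by the pressure angle, as described above. Uses Rodrigues'
# rotation formula; the axis and tangent vectors are illustrative.
import math

def rotate_about_axis(v, axis, angle):
    """Rodrigues' formula: rotate vector v about a unit axis by angle (rad)."""
    c, s = math.cos(angle), math.sin(angle)
    dot = sum(a * b for a, b in zip(axis, v))
    cross = (axis[1] * v[2] - axis[2] * v[1],
             axis[2] * v[0] - axis[0] * v[2],
             axis[0] * v[1] - axis[1] * v[0])
    return tuple(c * vi + s * ci + (1 - c) * dot * ai
                 for vi, ci, ai in zip(v, cross, axis))

gear_axis = (0.0, 0.0, 1.0)     # first axis of the gear tooth coordinate system
gear_tangent = (0.0, 1.0, 0.0)  # third axis (tangent at the contact point)
alpha = math.radians(25.0)      # pressure angle of the example gears

# The sign of the rotation follows the driver's direction of rotation:
line_of_action = rotate_about_axis(gear_tangent, gear_axis, -alpha)
print(tuple(round(c, 4) for c in line_of_action))
```

Flipping the sign of the angle reproduces the two cases in the figure above, one for a driver rotating clockwise and one for counterclockwise.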
The contact between the two gears is modeled through analytically founded equations. These are independent of the finite element mesh and thus much faster and more robust compared to mesh-based contact methods. To compute contact forces and moments, you can choose between two methods.
The point of contact on each gear is defined via the center of rotation, displacement vector at the center of rotation, contact point offset from the gear center, pitch radius, and cone angle. Based on the orientation of both gears, different gear pairs can be classified into one of two configurations: parallel or intersecting, or neither parallel nor intersecting (crossed).
For a parallel or intersecting configuration, the contact point offset from the pinion center is the input and the contact point offset from the wheel center is automatically computed. The contact model can be selected as either a point contact model or a line contact model.
For a configuration that is neither parallel nor intersecting, the contact point offset from the pinion, as well as the wheel center, is automatically computed. The reason for this is that there is always a point contact and the contact point can be uniquely determined.
From left to right: Thin gears (point contact model), thick gears (line contact model), and thick gears with an axial offset.
Now that we’ve explored gears in further detail and how to connect them, let’s look at various examples of gear pairs classified based on their configurations. You can use many gear pairs together in order to model complex parallel and planetary gear trains.
Some examples of the parallel axis configuration are spur gears and parallel helical gears, with either external or internal gear meshes.
Bevel gears, meanwhile, offer an example of an intersecting axis configuration.
Set of spur gears and parallel helical gears with an external gear mesh.
Set of spur gears, one with an internal gear mesh and the other with an external gear mesh, as well as a set of bevel gears.
Some examples of a crossed (neither parallel nor intersecting) axis configuration are crossed helical gears and the worm and wheel.
Set of crossed helical gears with an external gear mesh and the worm and wheel.
Rack and pinion with a straight gear mesh.
When it comes to modeling gears, there are many important elements to consider to optimize your simulation results. As we’ve demonstrated here today, the new features and functionality for gear modeling in COMSOL Multiphysics allow you to address such elements, providing you with more useful insight into how to improve your gear design.
In the next blog post in our Gear Modeling series, we’ll discuss how you can use advanced features on your gear pairs (i.e., gear mesh elasticity, backlash, transmission error, and friction) in order to perform simulations requiring greater fidelity. We’ll also show you how these parameters affect the dynamics of your system. Stay tuned!