Can you make sound out of light? In his presentation, Carl Meinhart answers this question by starting small, with photons and phonons. The idea is that when an infrared photon interacts with matter in some manner, it can create a Stokes-shifted photon with a lower energy. Simultaneously, the excess energy from the shift can generate an acoustic phonon. In this way, light can generate acoustics. But, as Meinhart notes in the keynote video, “it’s kind of a chicken-and-egg [scenario]; you need the acoustics and this scattered light to create each other, so they have to exist simultaneously.”
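To get a feel for the frequencies involved, consider the standard expression for the Brillouin (Stokes) frequency shift in backscattering, f_B = 2·n·v_a/λ. The material values below are representative assumptions for fused silica at telecom wavelengths, not figures from Meinhart's presentation:

```python
# Back-of-the-envelope Brillouin (Stokes) shift for backscattering:
# f_B = 2 * n * v_a / lambda0. The material values are representative
# assumptions for fused silica, not figures from the presentation.
n = 1.45         # refractive index of silica (assumed)
v_a = 5960.0     # acoustic velocity in silica, m/s (assumed)
lam = 1.55e-6    # infrared pump wavelength, m (assumed)

f_B = 2 * n * v_a / lam   # acoustic phonon frequency, Hz
print(f"Brillouin shift: {f_B / 1e9:.1f} GHz")
```

The shift lands in the ~11 GHz range, which is why the generated phonons are acoustic (hypersonic) rather than optical in character.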
From the video: Carl Meinhart discusses a theory behind converting light into acoustics.
While the idea was originally predicted in the 1920s as Brillouin scattering, it wasn’t observed until the 1960s. Modern researchers can now turn to the COMSOL® software to analyze this theory and all of the relevant multiphysics phenomena. For a specific photonics example, Meinhart examines an innovative design from the Vahala Research Group at Caltech, a pioneer in this field. The group designed an optical ring resonator that relies on whispering gallery modes rather than guided waveguides. Meinhart explains that when simulating this kind of device, “it’s very important to design the optics and the acoustics simultaneously,” a task that can be achieved with multiphysics simulation.
Through their research, the team found that their design has a very high Q factor. Research like this indicates that very sensitive high-Q resonators can be built by combining photons, phonons, and the concept of Brillouin scattering.
To try this sort of simulation yourself, download the example Meinhart mentions in his presentation, the Optical Ring Resonator Notch Filter tutorial.
Next, Meinhart turns to an industry example: maximizing the speed of a microfluidic valve. When looking to increase speed, a researcher’s first move is often to decrease inertia by making their design light and small. However, physical prototypes of small devices like microfluidic valves are expensive and time-consuming to create and difficult to measure experimentally.
Instead, to analyze microfluidic devices, researchers can use the COMSOL Multiphysics® software, which Meinhart states is “an invaluable tool for this process” because “the only way you can really visualize what’s going on is through numerical simulation.”
From the video: Carl Meinhart shares the example of a magnetically actuated microfluidic valve (left) and its approximate real-world size (right).
For a concrete example, Meinhart considers a microfluidic valve being commercialized by Owl Biomedical, Inc. To increase their microvalve’s speed, the group tried using magnetic materials and thin silicon, which bends well and is a high-Q material. The resulting magnetically actuated device can be evaluated by importing the complicated geometry into COMSOL Multiphysics® using a product like LiveLink™ for SOLIDWORKS®. Then, researchers can analyze the design by combining nonlinear magnetics, fluid-structure interaction, and particle tracing simulation studies.
Initial results revealed that this microvalve design contained nonoptimal flow patterns. But, by using simulation to modify the shape over many iterations, researchers can balance the spring forces and optimize the flow as well as the opening and closing speeds. The result? An incredibly fast microfluidic valve design that, when used to create a cell sorter, can sort 55,000 cells in 1 second, or 200 million cells per hour. This optimized design has the potential to revolutionize cell sorting through Owl Biomedical’s cell sorter.
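The quoted throughput figures are consistent with each other, as a one-line check shows:

```python
# Quick check of the quoted throughput figures.
cells_per_second = 55_000
cells_per_hour = cells_per_second * 3600
print(cells_per_hour)   # 198000000, i.e., roughly 200 million per hour
```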
To learn more about how Carl Meinhart uses multiphysics simulation to study transport processes in photonics and microfluidics, watch the video at the top of this post.
SOLIDWORKS is a registered trademark of Dassault Systèmes SolidWorks Corp.
Echologics provides specialized services in water loss management, leak detection, and pipe condition assessment. They developed a permanent leak detection system for pipe networks, using acoustic technology. With this solution, Sebastien says, “the pipes can talk to you.”
The location of a leak is measured using the time delay between signals captured with two sensors placed on the pipe. The time delay is determined using the correlation function. This technique also requires knowledge of the mechanical behavior of the pipe and the propagation speed of acoustic waves to accurately locate the leak. To solve this problem, Sebastien created an app using the Application Builder, a built-in tool in the COMSOL Multiphysics® software, to find the exact location of pipe leaks.
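The correlation-based localization can be sketched in a few lines. The snippet below is a hypothetical illustration, not Echologics' actual algorithm; the sampling rate, wave speed, and sensor spacing are all assumed values:

```python
import numpy as np

# Hypothetical sketch of correlation-based leak location (not
# Echologics' actual algorithm). A leak between two sensors emits the
# same acoustic signature, which arrives earlier at the nearer sensor.
rng = np.random.default_rng(0)
fs = 10_000.0   # sampling rate, Hz (assumed)
v = 1200.0      # acoustic propagation speed in the pipe, m/s (assumed)
D = 100.0       # distance between the two sensors, m (assumed)
d_true = 30.0   # leak sits 30 m from sensor 1

# Arrival-time difference: d/v at sensor 1 vs. (D - d)/v at sensor 2
delay = (D - 2 * d_true) / v
lag_true = round(delay * fs)

s1 = rng.standard_normal(4000)   # leak noise recorded at sensor 1
s2 = np.roll(s1, lag_true)       # same noise, delayed, at sensor 2

# The peak of the cross-correlation function gives the time delay
corr = np.correlate(s2, s1, mode="full")
lag = int(np.argmax(corr)) - (len(s1) - 1)
dt = lag / fs

d_est = (D - v * dt) / 2.0
print(f"estimated leak position: {d_est:.1f} m from sensor 1")
```

Rerunning with a different propagation speed v, for example after a client replaces a pipe segment with a different material, immediately yields an updated location, which is exactly the workflow the app supports.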
He explains that the app is advantageous for Echologics because its user interface is designed for ease of use in the field. This includes app dimensions that could easily fit on a tablet device when accessed with the COMSOL Server™ product, for instance. This is particularly useful for Echologics, as their field engineers travel extensively.
With apps, engineers at Echologics can easily run and rerun analyses. For example, an engineer can predict a leak location in a pipe using the app and contact the client to tell them where the leak is located. If the client recently replaced that segment of the pipe with a different material, for example, the engineer can rerun the analysis through the app and provide the exact leak location based on the new information. This enables them to quickly respond to the customer with an updated location.
During his keynote talk, Sebastien discussed how Echologics designed their app so that users can easily navigate its interface. By separating the app into five tabs, field engineers only have to calculate the information they need. For example, if an engineer using the app has already measured the speed of sound in a certain pipe segment, they don’t need to use the Speed Prediction tab in the app. Instead, they can simply input the measured speed into the Leak Location tab, which calculates the results.
From the video: Sebastien Perrier demonstrates the custom app built by Echologics for predicting the location of a pipe leak.
After all of the information is entered into the app, it reports the leak’s location in relation to the two closest sensors. Echologics’ app also includes a Visualization tab so that the app users can see their results. For Sebastien, the beauty of this app is that he can “visualize and confirm” when each sensor detects the leak.
Watch Sebastien Perrier give a demonstration of this app in the keynote video at the top of this post.
To avoid detection by sonar during World War II, the German Navy covered their U-boats in rubber sheets with air holes drilled at regular intervals. The same basic technology of embedding periodic patterns in spongy coatings is still in use, although the specifics are evolving. Finding the pattern and material properties that will minimize the echo for a desired range of frequencies is not an easy task, but it is one that lends itself very well to modeling.
Let’s find out how you can set up a model of an anechoic coating using the COMSOL Multiphysics® software. For our demonstration, we’ll consider a coating discussed in Ref. 1. The authors of this paper propose a square array of tiny cylindrical holes stamped into a thin polydimethylsiloxane (PDMS) film. The film is placed on the submarine hull with the holes facing the steel. Hence, the holes form air bubbles, even when the vessel is submerged in water. Despite having a thickness of only 0.2 mm, this setup results in less than 10% reflectance for most of the frequency range between 1 and 2.8 MHz, and less than 50% reflectance all the way up to 5 MHz.
When setting up models with periodic geometries, the first thing you want to figure out is how far you can reduce the size of the model geometry. The figure below shows the periodic pattern of air cavities. The blue dashed-line square indicates an obvious and completely general choice of unit cell. Flanked by periodic Floquet boundary conditions, this geometry would allow for incident radiation from an arbitrary angle. See our Porous Absorber model for an example of oblique incidence on a periodic structure.
Top view of the periodic pattern with two candidate unit cells.
By assuming perpendicular plane wave incidence, we can exploit not only the periodicity, but also the geometric mirror symmetries. After establishing the x- and y-plane symmetries, it can be easy to forget that there is one mirror plane left, forming a 45-degree angle with both the x- and y-axes. This leaves us with the green solid-line triangle in the illustration, constituting 1/8 of the full periodic unit cell. Keep in mind, of course, that failing to notice and use a symmetry is not the end of the world — it merely makes the model more expensive than necessary to run.
Here is what the resulting geometry looks like, with water above the PDMS and steel below it:
Model geometry produced in COMSOL Multiphysics® with the add-on Acoustics Module.
We will assume that both the steel and the water extend indefinitely beyond the modeled geometry. While this is clearly a good assumption for the water, it may seem like a less than obvious choice for the steel. Outer submarine hulls can be just a few millimeters thick, and omitting the other side of the hull means neglecting any reflections that might occur on the inside.
However, the transmission into the steel is small because of the high acoustic impedance contrast between the PDMS and the steel. Also, much of the reflected sound would likely be absorbed by the coating. Therefore, including the full thickness of the steel domain is left as an exercise for the curious reader. If you try this, please tell us about it in the comments section!
Materials that go on “forever” can be modeled either with various low-reflecting boundary conditions or with perfectly matched layers (PMLs). The former work optimally under the assumption of perpendicular plane waves. PMLs are more general, making them the preferred choice in nonperiodic, open geometries. For more information on PMLs, see our blog post on perfectly matched layers for wave electromagnetics problems — the considerations and conclusions are similar in pressure acoustics and structural mechanics.
So, can we expect only perpendicular plane waves at the ends of our geometry? To know for sure, we need a primer on diffraction theory.
The transmitted and reflected waves caused by a plane wave incident on a periodic pattern can be described as a sum of plane waves propagating in a finite number of discrete diffraction angles. In the immediate vicinity of the pattern, you will, of course, also have some arbitrarily shaped evanescent fields. Nevertheless, the propagating waves are all plane.
Typically, most of the acoustic energy will end up in the “zeroth diffraction order”, which is just the refraction and mirror reflection of the incident wave. Reflected higher diffraction orders occur at angles where the path difference between radiation traveling in the same direction from two neighboring unit cells is an integer number of wavelengths. This happens according to the equation

$$\sin\theta_{r,m} = \sin\theta_{i} + \frac{m\, c_{i}}{f d}$$
Here, m = 0, ±1, ±2, … is the diffraction order; c_{i} is the pressure speed of sound in the incident medium; f is the frequency; d is the width of the repeating unit cell; θ_{i} is the angle of incidence; and θ_{r,m} is the angle of the mth-order reflected diffracted wave.
Similarly, for the transmitted diffraction orders, we have

$$\sin\theta_{t,m} = \frac{c_{t}}{c_{i}}\sin\theta_{i} + \frac{m\, c_{t}}{f d},$$
with c_{t} being the pressure wave speed of sound in the final medium and θ_{t,m} the angle of the mth order transmitted diffracted wave.
Let us now look at the anechoic coating model, with θ_{i} = 0. For an mth order reflected diffracted wave to exist, we need

$$\left|\sin\theta_{r,m}\right| = \frac{|m|\, c_{i}}{f d} \leq 1.$$
So, if f < c_{i}/d, we have no reflected diffracted waves. In the same manner, provided f < c_{t}/d, we have no transmitted diffraction orders. The pressure speed of sound is higher in steel than in water, so diffraction would arise in the reflected waves first. With d = 120 µm and c_{i} = 1481 m/s, we can finally conclude that there is no diffraction at frequencies below 12.3 MHz.
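These cutoff conditions are easy to check numerically. The water speed and cell width are the values quoted above; the steel speed is an assumed textbook value:

```python
# No mth-order diffracted wave can propagate below f = |m| * c / d,
# so the first orders (m = +/-1) appear at f = c / d.
d = 120e-6        # width of the repeating unit cell, m
c_water = 1481.0  # pressure speed of sound in water, m/s
c_steel = 5900.0  # pressure wave speed in steel, m/s (assumed value)

f_cut_reflected = c_water / d     # first reflected order appears here
f_cut_transmitted = c_steel / d   # first transmitted order appears here
print(f"reflected orders appear above {f_cut_reflected / 1e6:.1f} MHz")
```

Since the transmitted cutoff sits far higher, the reflected orders in the water set the 12.3 MHz limit.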
Having decided that PMLs are not required in the relevant frequency spectrum, we need only leave a sufficient depth of water and steel in the model so that most of the evanescent wave content will have died out before reaching the exterior boundaries. For boundary conditions, we use a Low-Reflecting Boundary in the steel and the pressure acoustics counterpart, Plane Wave Radiation, in the water.
Speaking of Pressure Acoustics, that interface applies both in the water and in the air cavities. When modeling small confined spaces, the Thermoviscous Acoustics interface can be worth considering as a potentially more accurate option. However, it is only needed if the thermal and/or viscous boundary layers have a significant thickness. At the frequencies that we are concerned with here, these layers do remain much thinner than the dimensions of the cavity.
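A quick estimate supports that claim. The viscous boundary layer thickness follows from δ = √(2μ/(ρω)); the air properties below are assumed standard values:

```python
import math

# Viscous (acoustic) boundary layer: delta = sqrt(2 * mu / (rho * omega)).
# Standard air properties are assumed; at 1 MHz the layer is on the
# order of a couple of microns, small compared with the cavity size.
mu = 1.81e-5   # dynamic viscosity of air, Pa*s (assumed)
rho = 1.2      # density of air, kg/m^3 (assumed)
f = 1e6        # representative frequency, Hz

delta = math.sqrt(2 * mu / (rho * 2 * math.pi * f))
print(f"viscous boundary layer at 1 MHz: {delta * 1e6:.2f} um")
```

A couple of microns against cavity dimensions of tens of microns justifies staying with Pressure Acoustics here.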
The steel and PDMS domains are modeled with Solid Mechanics. If you select Acoustic-Solid Interaction, Frequency Domain in the COMSOL Multiphysics® Model Wizard, you get the two relevant interfaces and an Acoustic-Structure Boundary automatically connecting them together.
The model is excited with an incident perpendicular wave added to the plane wave radiation condition. To find out the transmission, reflection, and absorption coefficients, you need to extract what fraction of the energy is passing through, being reflected, and being absorbed, respectively.
The transmitted power is simple to obtain. The outward mechanical energy flux is automatically available as solid.nI, so all you need to do is integrate it over the low-reflecting boundary terminating the steel domain. Divide that by the incident power, which for a plane wave has a known analytical expression, and you obtain the transmission coefficient.
The net acoustic intensity comes as a vector (acpr.Ix, acpr.Iy, acpr.Iz). To get the reflected power, take the negative of the z-component and subtract its integral over the inlet from the incident power. Divide by the incident power again and you have the reflection coefficient. Finally, the absorption coefficient is most conveniently achieved from the condition that all three coefficients sum up to 1.
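The bookkeeping amounts to a few divisions. The two integrated powers below are made-up placeholder numbers standing in for the surface integrals described above, not results from the model:

```python
# Energy bookkeeping for the three coefficients. The integrated powers
# are placeholder values, not results from the model.
P_inc = 1.0      # incident plane wave power (normalized)
P_trans = 0.05   # integral of solid.nI over the terminating boundary (placeholder)
P_net_in = 0.85  # net inward acoustic power through the inlet (placeholder)

T = P_trans / P_inc              # transmission coefficient
R = (P_inc - P_net_in) / P_inc   # reflection coefficient
A = 1.0 - T - R                  # absorption coefficient, from T + R + A = 1
```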
The plot below shows the resulting transmission, reflection, and absorption coefficients. The results are generally in good agreement with those in Ref. 1.
In a reciprocating engine — a power generator prominent in the automotive industry — the failure of one part can lead to the failure of the entire engine. This is an element we highlighted in a previous blog post, where we analyzed fatigue in a reciprocating engine’s connecting rods. While this approach provides a valuable way to optimize the engine’s design and improve its operational lifetime, there are many other engine parts to take into account as well.
Fatigue life prediction for a three-cylinder reciprocating engine’s connecting rods.
Consider the crankshaft of the engine, for instance. This mechanical part takes the reciprocating motion of the pistons, which are connected to the crankshaft, and converts it into rotational motion. By design, the crank pins, sometimes called crank journals, are eccentric to the axis of rotation of the shaft to enable the conversion between these two motions. But this eccentricity of the crank pin produces unbalanced forces when the crankshaft undergoes rotation. To balance such forces, some dead masses, or balance masses, are added to the crankshaft. However, due to the axial offset of the dead masses from the crank pin, an unbalanced bending moment is produced along the length of the part. The location of the balance masses is therefore selected in order to minimize this unbalanced bending moment — otherwise known as balancing the rotor.
The eccentricity of these crank pins and balance masses and the axial offset between them can cause the crankshaft to undergo self-excited vibrations when under rotation. As is the case with other machines that include rotating parts, these vibrations can impact the safety and performance of the individual part as well as the entire system that it helps power.
The new Rotordynamics Module includes functionality that enables you to perform an accurate vibration analysis of an engine’s crankshaft. Today, we’ll explore an example from our Application Gallery that showcases these features.
Let’s begin by looking at our model geometry. For this example, we use the crankshaft from a three-cylinder reciprocating engine. The crankshaft’s geometry is depicted in the schematic below, with both the flywheel and bearing locations highlighted.
Geometry for the engine’s crankshaft.
In the analysis, the rotor is assumed to undergo only self-excited vibrations that are caused by eccentric masses. The load on the crank pin as a result of the piston is neglected. To reduce high-frequency vibrations, material damping is applied to the rotor.
At steady state, the crankshaft’s angular speed is 3000 rpm. Initially, however, the speed is ramped up to ensure a smooth start-up. The length of the ramp is selected so that the rotor completes one full revolution while the speed increases linearly from 0 to 3000 rpm, after which it continues at this constant angular speed.
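The ramp length follows from a one-line calculation: with a linear ramp, the average speed is half the final speed, and we require exactly one revolution during the ramp.

```python
# One full revolution during a linear ramp from 0 to 3000 rpm:
# 0.5 * (rpm / 60) * t_ramp = 1 revolution  =>  t_ramp = 120 / rpm
rpm = 3000.0
t_ramp = 120.0 / rpm   # seconds
print(f"ramp time: {t_ramp:.2f} s")   # 0.04 s
```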
To accurately model the crankshaft-bearing assembly, we can use the Solid Rotor with Hydrodynamic Bearing multiphysics coupling, which combines the Solid Rotor and Hydrodynamic Bearing interfaces.
To account for the thin fluid-film flow in the journal bearing, we can use the Hydrodynamic Journal Bearing feature, available in the Hydrodynamic Bearing interface.
The following plot shows the stress profile for the crankshaft. As the plot indicates, the bearing close to the flywheel carries the maximum load, which generates the maximum stress in the corresponding journal. The highest pressure is also produced in this bearing.
Stress within the crankshaft and pressure distribution on the bearing surfaces.
Looking at the journal orbits, we can see that they are stable for each of the four bearings. In the steady state, each journal attains its respective equilibrium position. This is highlighted in the plot on the left below. The plot on the right, meanwhile, shows the lateral displacement components for the third journal. The results indicate that these lateral displacements undergo damped vibration and reach an equilibrium value at steady state, as referenced above.
Left: Plot depicting the orbits of the center of the journals. Right: Plot showing the lateral displacement components for the third journal.
One of the first types of commercialized MEMS devices was the piezoresistive pressure sensor. This device, which continues to dominate the pressure sensor market, is valuable in a range of industries and applications. Measuring blood pressure as well as gauging oil and gas levels in vehicle engines are just two examples.
Piezoresistive pressure sensors have applications in the biomedical field as well as the automotive industry. Left: A blood pressure measurement device. Image by Andrew Butko. Licensed under CC BY-SA 3.0, via Wikimedia Commons. Right: A vehicle’s oil gauge. Image by Marcus Yeagley. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.
While piezoresistive pressure sensors require additional power to operate and feature higher noise limits, they offer many advantages over their capacitive counterparts. For one, they are easier to integrate with electronics. They also have a more linear response in relation to the applied pressure and are shielded from RF noise.
But like other MEMS devices, piezoresistive pressure sensors include multiple physics within their design. And in order to accurately assess a sensor’s performance, you need to have tools that enable you to couple these different physics and describe their interactions. The features and functionality of COMSOL Multiphysics enable you to do just that. From your simulation results, you can get an accurate overview of how your device will perform before it reaches the manufacturing stage.
To illustrate this, let’s take a look at an example from our Application Gallery.
The design of our Piezoresistive Pressure Sensor, Shell tutorial model is based on a pressure sensor that was previously manufactured by a division of Motorola that later became Freescale Semiconductor, Inc. While the production of the sensor has stopped, there is a detailed analysis provided in Ref. 1 and an archived data sheet available from the manufacturers in Ref. 2.
Our model geometry consists of a square membrane that is 20 µm thick, with sides that are 1 mm in length. A supporting region that is 0.1 mm wide is included around the edges of the membrane. This region is fixed on its underside, indicating a connection to the thicker handle of the device’s semiconducting material. Near one of the membrane’s edges, you can see an X-shaped piezoresistor (Xducer™) as well as some of its associated interconnects. Only some interconnects are included, as their conductivity is high enough that they don’t contribute to the device’s output.
Geometry of the sensor model (left) and a detailed view of the piezoresistor geometry (right).
A voltage is applied across the [100] oriented arm of the X, generating a current down this arm. When pressure induces deformations in the diaphragm in which the sensor is implanted, it results in shear stresses in the device. From these stresses, an electric field or potential gradient that is transverse to the direction of the current flow occurs in the [010] arm of the X — a result of the piezoresistance effect. Across the width of the transducer, this potential gradient adds up, eventually producing a voltage difference between the [010] arms of the X.
For this case, we assume that the piezoresistor is 400 nm thick and features a uniform p-type density of 1.31 x 10^{19} cm^{-3}. While the interconnects are said to have the same thickness, their dopant density is assumed to be 1.45 x 10^{20} cm^{-3}.
With regard to orientation, the semiconducting material’s edges are aligned with the x- and y-axes of the model as well as the [110] directions of the silicon. The piezoresistor, meanwhile, is oriented at a 45° angle to the material’s edge, meaning that it lies in the [100] direction of the crystal. To define the orientation of the crystal, a coordinate system is rotated 45° about the z-axis in the model. This is easy to do with the Rotated System feature provided by the COMSOL software.
In this example, we use the Piezoresistance, Boundary Currents interface to model the structural equations for the domain as well as the electrical equations on a thin layer that is coincident with a boundary in the geometry. Using this kind of 2D “shell” formulation significantly reduces the computational resources required to simulate thin structures. Note that both the MEMS Module and the Structural Mechanics Module are used to perform this analysis.
To begin, let’s look at the displacement of the diaphragm after a 100 kPa pressure is applied. As the simulation plot below shows, the displacement at the center of the diaphragm is 1.2 µm. In Ref. 1, a simple isotropic model predicts a displacement of 4 µm at this point. Considering that the analytic model is derived from a crude variational guess, these results show reasonable agreement with one another.
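As a rough order-of-magnitude cross-check (not the anisotropic silicon model used in the tutorial), the classical result for a clamped square plate under uniform load gives a similar deflection. The isotropic silicon constants below are assumed representative values:

```python
# Clamped square plate, uniform load (Timoshenko coefficient):
# w_max ~ 0.00126 * p * a^4 / D, with flexural rigidity
# D = E * h^3 / (12 * (1 - nu^2)). Isotropic silicon constants are
# assumed; the tutorial model uses anisotropic material data.
E = 170e9    # Young's modulus of silicon, Pa (assumed)
nu = 0.28    # Poisson's ratio (assumed)
h = 20e-6    # membrane thickness, m
a = 1e-3     # membrane side length, m
p = 100e3    # applied pressure, Pa

D = E * h**3 / (12 * (1 - nu**2))
w_max = 0.00126 * p * a**4 / D
print(f"estimated center deflection: {w_max * 1e6:.1f} um")
```

This lands near 1 µm, the same order as the simulated 1.2 µm center displacement.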
The displacement of the diaphragm following a 100 kPa applied pressure.
Ref. 1 quotes a more accurate value for the shear stress in local coordinates at the midpoint of the diaphragm edge: 35 MPa. This is in good agreement with the extreme value from our simulation study, 38 MPa in magnitude. In theory, the shear stress should be greatest at the midpoint of the diaphragm edge.
Shear stress in the piezoresistor’s local coordinate system.
The following graph shows the shear stress along the edges of the diaphragm. The maximum local shear stress of 38 MPa is at the center of each of the edges.
Local shear stress along two of the diaphragm’s edges.
Given that the dimensions of the device and the doping levels are estimates, the model’s output during normal operation is in good agreement with the information presented in the manufacturer’s data sheet. For instance, in the model, an operating current of 5.9 mA is obtained with an applied bias of 3 V. The data sheet notes a similar current of 6 mA. Further, the model generates a voltage output of 54 mV. As indicated by the data sheet, the actual device produces a potential difference of 60 mV.
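A simple Ohm's-law view ties the quoted operating points together:

```python
# Effective resistance implied by the quoted operating points.
V = 3.0                    # applied bias, V
R_model = V / 5.9e-3       # from the simulated 5.9 mA current
R_datasheet = V / 6.0e-3   # from the data sheet's 6 mA current
print(f"model: {R_model:.0f} ohm, data sheet: {R_datasheet:.0f} ohm")
```

The two effective resistances differ by under 2%, consistent with the stated agreement.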
Lastly, we look at the detailed current and voltage distribution inside the Xducer™ sensor. As noted by Ref. 3, a “short-circuit effect” may occur when voltage-sensing elements increase the current-carrying silicon wire’s width locally. This effect essentially means that the current spreads out into the sense arms of the X. The short-circuit effect is illustrated in the plot below. Also highlighted is the asymmetry of the potential, which is a result of the piezoresistive effect.
Current density and electric potential for a device with a 3 V bias and an applied pressure of 100 kPa.
S.D. Senturia, “A Piezoresistive Pressure Sensor”, Microsystem Design, chapter 18, Springer, 2000.
Motorola Semiconductor MPX100 series technical data, document: MPX100/D, 1998.
M. Bao, Analysis and Design Principles of MEMS Devices, Elsevier B. V., 2005.
Xducer™ is believed to be a trademark of Freescale Semiconductor, Inc. f/k/a Motorola, Inc. Neither Freescale Semiconductor Inc. nor Motorola, Inc. has in any way provided any sponsorship or endorsement of, nor do they have any connection or involvement with, COMSOL Multiphysics® software or this model.
Natural convection is a type of transport that is induced by buoyancy in a fluid. This buoyancy is in turn caused by the fluid’s variations in density with temperature or composition.
You may be familiar with the concept of natural convection in indoor climate systems. In this scenario, hot air rises to the ceiling close to heat sources and cool air sinks to the floor close to cold surfaces, such as the windows during winter.
Electronics cooling is another type of process that often depends on natural convection in order to work. For example, we do not want to use noisy fans to cool the amplifiers and TVs in home cinema systems. Electronic devices that need to operate in quiet environments often rely on natural convection to circulate air over their built-in heat sinks.
Free convection around a splayed pin fin heat sink that is heated from below. The animation shows the value of the velocity in the air around the heat sink.
Less obvious natural convection problems are found in industries such as chemical and food processing. Environmental sciences and meteorology also involve natural convection problems, as scientists and engineers try to predict and understand transport in air and water.
In all of the cases mentioned above, it is important for engineers and scientists to understand and design systems to control natural convection. In this context, mathematical modeling is the perfect tool. In the latest version of COMSOL Multiphysics, it is easier to define and solve problems involving natural convection. We have introduced a number of new capabilities for this purpose.
The Weakly compressible flow option for the fluid flow interfaces neglects the influence of pressure waves, which are seldom important in natural convection. It allows for larger time steps and shorter solution times for natural convection problems.
The Incompressible flow option with the Boussinesq approximation for buoyancy-driven flow linearizes density using a coefficient of thermal expansion. This option includes the density variation only as a volume force in the momentum equations. This implies an even larger simplification compared to the Weakly compressible flow option, but it still gives an excellent and efficient description for systems with small density variations. This simplification is almost always valid for free convection in water subjected to small temperature differences.
The Gravity feature makes it easy to define a reference point for hydrostatic pressure and also automatically accounts for hydrostatic pressure variations at vertical boundaries.
Let’s learn more about these new features and how you can apply them in your natural convection modeling problems.
The Nonisothermal Flow interface includes the Weakly compressible flow option, which simplifies flow problems by neglecting density variations with respect to pressure. This option also eliminates pressure waves from the description; resolving such waves requires a dense mesh and small time steps, and therefore a relatively long computation time. In natural convection, pressure waves usually have very little influence, which means that we lose very little fidelity in the model’s description of reality by making this simplification.
The continuity equation for a compressible fluid looks as follows:
(1)

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho \mathbf{u} \right) = 0$$
where ρ denotes density and u is the velocity vector.
For a gas, density is proportional to pressure and inversely proportional to temperature. For example, for an ideal gas, this gives:

(2)

$$\rho = \frac{p M}{R T}$$

where p is the pressure, M is the gas’ molar mass, R is the universal gas constant, and T is the temperature.
If we neglect the dynamic effects of the density changes, we get:
(3)

$$\nabla \cdot \left( \rho \mathbf{u} \right) = 0$$
If we use the expression for the density of an ideal gas and neglect the influence of pressure on density, we obtain the following continuity equation:
(4)

$$\nabla \cdot \left( \frac{p_{ref}\, M}{R T}\, \mathbf{u} \right) = 0$$

where p_{ref} denotes the constant reference pressure.
This means that variations of density are taken into account only in terms of temperature variations. The variations in density may cause an expansion of the fluid, but the direct dynamic effects of those expansions on the pressure field are neglected when using the Weakly compressible flow settings.
In addition to the density expression in the continuity equation, selecting the Gravity check box in the settings for the fluid flow interface adds a volume force to the momentum equation in the direction of gravity. By default, this is the negative z-direction. This force looks as follows:
(5)

$$\mathbf{F} = \rho \mathbf{g}$$
where density, ρ, is a function of temperature and g is the gravitational acceleration vector.
For an ideal gas, density is inversely proportional to temperature.
We can find the settings for the Weakly compressible flow option by selecting the Nonisothermal Flow interface or the Conjugate Heat Transfer interface. Selecting the Fluid Flow interface node in the Model Builder shows the settings window below. Selecting the Weakly compressible flow option removes the dependency between pressure and density, while selecting the Gravity check box automatically adds the buoyancy volume force to the momentum equation.
Settings window for the fluid flow interface showing the Weakly compressible flow option and gravity feature.
The figure below shows the flow between two vertically positioned circuit boards. Only the unit cell of one circuit board is shown in the figure. The second circuit board is placed just in front, with its back facing the board that is visible. The flow is completely driven by buoyancy; i.e., there is no fan.
The flow velocity at the inlet is around 0.2 m/s and around 0.3 m/s at the outlet. There is no inlet of air from the sides, which means that the difference in velocity is due to the expansion caused by the increase in temperature along the height of the channel between the circuit boards.
Buoyancy-driven flow between vertical circuit boards. The expansion is seen in the color legend for the arrows, where the flow velocity is around 0.2 m/s at the inlet and 0.3 m/s at the outlet.
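For an ideal gas at nearly constant pressure in a channel of constant cross section, equation (4) implies that u/T stays constant along the flow, so the velocity ratio translates into a temperature ratio. The inlet temperature below is an assumed ambient value, making this a rough consistency check rather than a model result:

```python
# With density ~ 1/T and constant mass flow through a constant-area
# channel, u / T stays constant, so u_out / u_in = T_out / T_in.
u_in, u_out = 0.2, 0.3   # m/s, read from the figure
T_in = 300.0             # K, assumed ambient inlet temperature
T_out = T_in * u_out / u_in
print(f"implied bulk outlet temperature: {T_out:.0f} K")
```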
When the changes in density are negligible in terms of the influence of expansion on the velocity field, we can use the Incompressible flow option with the Boussinesq approximation for natural convection. This implies that the continuity equation is simplified even more than with the Weakly compressible flow option by treating the fluid as incompressible. In this case, the continuity equation becomes as follows:
∇ · u = 0 (6)
Instead, a small change in density is accounted for in a volume force, which is introduced in the momentum equation in the opposite direction of gravity; by default, the z-direction. The small change in density is obtained by linearizing the fluid’s density at a reference temperature. The z-component of the volume force becomes as follows:
F_{z} = g ρ_{ref} α ΔT (7)
where g is the gravitational acceleration, ρ_{ref} is the density at the reference temperature, α is the coefficient of thermal expansion of the fluid, and ΔT is the temperature difference measured against the reference temperature.
The advantage of using the Boussinesq approximation for buoyancy-driven flow is that the nonlinearities in the fluid flow equations are reduced and the problem becomes easier to solve numerically, requiring fewer iterations and allowing for larger time steps in time-dependent problems.
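To see why the linearization works well only for small temperature differences, consider this minimal sketch comparing the Boussinesq density with the exact ideal-gas density (the reference values are illustrative assumptions):

```python
# Hedged sketch: how good is the Boussinesq linearization
# rho(T) ~ rho_ref*(1 - alpha*dT)? For an ideal gas, alpha = 1/T_ref
# and the exact density is rho_ref*T_ref/T. Values are illustrative.
T_ref = 300.0
rho_ref = 1.18          # kg/m^3, assumed reference density
alpha = 1.0 / T_ref     # 1/K, thermal expansion coefficient of an ideal gas

def rho_exact(T):
    return rho_ref * T_ref / T

def rho_boussinesq(T):
    return rho_ref * (1.0 - alpha * (T - T_ref))

for dT in (1.0, 10.0, 100.0):
    T = T_ref + dT
    err = abs(rho_boussinesq(T) - rho_exact(T)) / rho_exact(T)
    print(dT, err)  # relative error grows like (dT/T_ref)^2
```

For a 10 K difference the relative density error is about 0.1%, while for a 100 K difference it exceeds 10%, which is why the weakly compressible option is preferred for large temperature variations.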
A typical example where the Boussinesq approximation can give a realistic description of the flow is for the modeling of liquid water subjected to relatively small temperature differences. The figure below shows natural convection in a glass of water heated from below. Here, we obtain a very complex flow pattern with an upward flow close to the middle and bottom of the glass and with downward flows between the vertical walls and the middle.
Natural convection in a glass of water. The plot shows the velocity field in the glass and the temperature distribution in the walls of the glass.
We can obtain the Incompressible flow option with the Boussinesq approximation for buoyancy-driven flow by selecting the settings shown in the figure below for the fluid flow interfaces in COMSOL Multiphysics.
Selecting the Incompressible flow option, Gravity feature, and reduced pressure gives the Boussinesq approximation for a natural convection problem.
When modeling fully compressible flow, the pressure’s time dependency is included in the continuity equation, since density is a function of pressure for compressible fluids. This also means that it is usually sufficient to include an initial condition for the pressure in order to get a well-posed problem, even when we do not prescribe pressure at a boundary.
For weakly compressible and incompressible flows, the time-dependent pressure term in the continuity equation is neglected according to the discussions above. If there are no boundary conditions that set the pressure, the pressure field becomes undetermined, unless we set it in some point in the domain.
In COMSOL Multiphysics, we can use a so-called pressure point constraint in order to avoid an undetermined pressure field. The absence of a reference pressure point is often the source of problems with convergence when solving natural convection problems.
The settings for the pressure point constraint in the water glass example.
The equations that describe natural convection usually involve the momentum equation, the continuity equation, and the energy transport or mass transport equation. If buoyancy is driven by temperature differences, then the energy equation is fully coupled with the fluid flow equations (the Navier-Stokes equations). For natural convection, this coupling is fairly tight. This means that the most robust way to solve the equations is to use the fully coupled solver in COMSOL Multiphysics.
The solver branch in the model tree with the fully coupled solver option.
For very large problems, a segregated approach may be a preferable option. For example, if there are many chemical species and if buoyancy is caused by variations in density due to chemical composition, then a segregated approach may be the only viable option for getting decent memory consumption in the solution process.
I would like to end this blog post with one more natural convection problem. I often think about natural convection when I smoke a cigar. Although I do not want to promote smoking, my favorite natural convection problem is the smoke from a cigar on a cold winter day. The figure below shows a lighted cigar resting on an ashtray with the flow distribution caused by the heat from combustion.
Natural convection (with a small forced component) around a lighted cigar resting on an ashtray.
Some of the flow caused by the lighted cigar is actually forced convection, since a large part of the tobacco turns to smoke, dropping in density from around 500–1000 kg/m^{3} down to about 1 kg/m^{3}. This can be described as an inlet for the flow at the boundary between the ash and the air surrounding the cigar.
3D printing, also called additive manufacturing, is the process of creating 3D objects by placing successive layers of a material on top of one another. One common rapid prototyping technique for 3D printing is fused-deposition modeling (FDM®). FDM technology™ creates 3D models through the process of extrusion.
A 3D printer (left) and close-up view of a 3D printer head (right). Images by Rahman, Schott, and Sadhu and taken from their COMSOL Conference 2016 Boston presentation.
The extrusion process involves feeding a thermoplastic material into an extruder, where it passes through a heater, is transformed into a molten thermoplastic, and is forced through the extruder's nozzle.
Researchers from North South University, the Department of Plastics Engineering at the University of Massachusetts Lowell, and IRays Teknology Ltd. in Bangladesh studied conventional extruder technology that has been altered to fit a commercial 3D printer. The adapted design uses a continuous plastic ribbon on a large spool that is fed through an extruder, as shown below.
The plastic material used in the printer is the thermoplastic polymer acrylonitrile butadiene styrene (ABS), one of the two most common thermoplastics used in FDM®. For the printing process used in this research, a computer controls the movement of the 3D extruder head, which creates the layers leading to the buildup of a 3D object.
An extruder adapted for a commercial 3D printer. A plastic ribbon is fed from a spool to the extruder/printer, where heating and forced flow extrude a thin and continuous amount of extruded ABS. This process creates layers of hardened ABS, which form the final 3D-printed object. Image by Rahman, Schott, and Sadhu and taken from their COMSOL Conference 2016 Boston paper.
For their study, the researchers wanted to investigate the cooling and transition stages of a 3D printer head with heat transfer simulation. They used the COMSOL Multiphysics® software to identify how different elements affect their 3D printer design.
To accurately study the cooling and casting process in a 3D printer, the research team developed a 2D axisymmetric model of a 3D printer head and analyzed its fluid and thermal aspects as well as the change in the tensile modulus. They assumed that ABS flows continuously through the narrow nozzle and that its volume remains constant during the solidification process. Further, the team assumed that the velocity of the ABS is constant and uniform.
With this model, the researchers investigated the glass-liquid transition, or the glass transition of ABS. Glass transition is a reversible change that occurs over the glass-transition temperature range. Within this temperature range, an amorphous material like ABS changes from a hard and brittle state to a rubbery and viscous state when exposed to an increase in temperature.
This process occurs first when ABS is heated to a molten state in the 3D printer’s extruder and again as it cools and solidifies upon exiting the extruder. This secondary glass-transition change is the key point of focus in this study.
Let’s look at the extrusion of ABS from the nozzle, which can be seen in the left image below. The results indicate that the glass transition of ABS occurs successfully. However, the team also aimed to analyze how well this process works.
A closer look at the 3D printer head shows that the streamlines near the nozzle form a vortex. In a real-world 3D printer, this eddy flow may generate a nonuniform surface quality that causes problems. By finding this issue while still in the design phase, engineers can optimize the nozzle design early on in the process.
Left: The velocity magnitude resulting from one revolution of the 2D axisymmetric data set. Right: The surface velocity magnitude (mm/s) and total heat flux streamlines of a 3D printer head. Images by Rahman, Schott, and Sadhu and taken from their COMSOL Conference 2016 Boston paper.
Next, the team examined the heat flux, as seen in the graphs below. They were able to confirm that the nozzle area has a very large conductive heat flux. This is an expected result, since ABS releases a significant amount of heat during its secondary phase transition.
The researchers located the conductive heat flux peak at the flow’s center in the transition zone. By modeling the conductive heat flux at the outer boundary, the researchers found that most of the cooling process happens directly outside the nozzle. These simulations yielded an interesting conclusion: The heat flux is not constant, but instead varies along the length of the nozzle wall. This information is useful for optimizing the cooling rate and choice of cooling method for a 3D printer.
Left: Conductive heat flux through the outer boundary. Right: The surface temperature distribution, with arrows indicating the total heat flux direction. Images by Rahman, Schott, and Sadhu and taken from their COMSOL Conference 2016 Boston paper.
Switching gears, the researchers were also able to see how different factors affect the glass transition of ABS. For example, the following plot suggests that smaller values of ΔT have a steeper glass transition.
The fraction of plastic in a liquid state along the centerline for different values of ΔT. Image by Rahman, Schott, and Sadhu and taken from their COMSOL Conference 2016 Boston paper.
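One way to reproduce the qualitative behavior in this plot is with a smoothed-step liquid fraction over the transition range. The tanh profile and parameter values below are assumptions for illustration, not the function used by the researchers:

```python
import math

# Hedged sketch: a smoothed-step liquid fraction over a glass-transition
# range of width dT, centered at T_g. The actual function in the paper is
# not given here; tanh is one common choice. Values are illustrative.
T_g = 378.0  # K, assumed glass-transition temperature for ABS (~105 deg C)

def liquid_fraction(T, dT):
    """Fraction of material in the rubbery/viscous state at temperature T."""
    return 0.5 * (1.0 + math.tanh(4.0 * (T - T_g) / dT))

# A smaller transition width dT gives a steeper transition, as in the plot:
slope_narrow = liquid_fraction(T_g + 0.5, 5.0) - liquid_fraction(T_g - 0.5, 5.0)
slope_wide = liquid_fraction(T_g + 0.5, 20.0) - liquid_fraction(T_g - 0.5, 20.0)
print(slope_narrow > slope_wide)
```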
By modeling the glass transition of ABS in a 3D printer head, the researchers identified a narrow secondary transition within the glass-transition temperature range. Their simulation research into relevant factors, such as the surface velocity magnitude, heat flux, and surface temperature distribution, enables them to better understand 3D printers and use this knowledge to improve their designs.
For instance, this research could be used to optimize die designs, helping to ensure that a 3D-printed model cures as quickly as possible. Further, the team noted that they can use heat transfer analysis to predict how fillers influence the printer's cure rate and end-use properties.
Find out more about 3D printing and the benefits of heat transfer analysis by exploring the links in the next section.
FDM is a registered trademark of Stratasys, Inc.
FDM Technology is a trademark of Stratasys, Inc.
For those in the semiconductor industry, rapid thermal processing (RTP) is recognized as an important step in producing semiconductors. In this manufacturing process, silicon wafers are heated to temperatures greater than 1000°C in a few seconds or less. This is often achieved by using high-intensity lasers or lamps as heat sources. The temperature of the wafer is then slowly reduced in order to prevent any dislocations or wafer breakage that could occur as a result of thermal shock. The applications for RTP range from activating dopants to chemical vapor deposition, a topic that we've discussed previously on the blog.
Rapid thermal annealing (RTA) is a subset of RTP. This process involves rapidly heating an individual wafer from ambient temperature to somewhere in the range of 1000 to 1500 K. For RTA to be effective, a few considerations need to be made. For one, the step must occur quickly; otherwise, the dopants diffuse too much. Also important to the step's success is preventing overheating and nonuniform temperature distributions. This creates the need for accurate measurements of the wafer's temperature during RTA, which are typically achieved using either thermocouples or IR sensors.
A schematic of a common RTA apparatus.
An IR sensor, when ideally positioned, only receives radiation that is reflected and emitted by the silicon wafer. This is otherwise known as secondary radiation. Other desirable characteristics of sensors include short response times and high levels of accuracy. To design an optimal IR sensor, you could perform a parameter optimization in COMSOL Multiphysics. Before that step, however, our focus here is on using simulation to determine whether an IR sensor is a more appropriate choice for an RTA configuration than an inexpensive thermocouple.
As highlighted in the diagram above, RTA often makes use of double-sided heating in many applications. In such a setup, IR lamps are placed above and below the silicon wafer. For our Rapid Thermal Annealing tutorial, we chose to model a single-sided heating apparatus.
The model geometry for the RTA configuration.
In the above figure, the components are stored in a chamber featuring temperature-controlled walls with a set point of 400 K. The geometry of the chamber walls is thus omitted, as the enclosure can be treated as a closed cavity. The model further assumes that radiation and convection cooling dominate the physical system. A heat transfer coefficient is used to model the convective cooling of the wafer and sensor to the gas.
The lamp, meanwhile, is treated as a solid object that has a volume heat source of 25 kW. Insulation is included on all sides of the object, except for the top surface. It is through this top surface, which faces the wafer, that the heat leaves the lamp as radiation. The model uses a low heat capacity for the solid to capture its transient start-up time. The other thermal properties of the lamp are the same as those of copper metal.
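To see why a low heat capacity shortens the lamp's start-up transient, here is a hedged lumped-capacitance sketch. All values (mass, emissivity, radiating area, target temperature) are illustrative assumptions, not the tutorial's settings:

```python
# Hedged back-of-the-envelope sketch (not the tutorial model): a lumped
# thermal balance for the lamp, m*cp*dT/dt = Q - sigma*eps*A*(T^4 - T0^4),
# integrated with explicit Euler. All numbers are illustrative assumptions.
SIGMA = 5.670e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

def heatup_time(cp, m=0.1, Q=25e3, A=0.01, eps=0.9, T0=400.0, T_target=2000.0):
    """Time for the lamp to reach T_target; a lower cp gives a faster start-up."""
    T, t, dt = T0, 0.0, 1e-4
    while T < T_target:
        dTdt = (Q - SIGMA * eps * A * (T**4 - T0**4)) / (m * cp)
        T += dTdt * dt
        t += dt
    return t

# Copper-like cp vs. an artificially low cp, as used to shorten the transient:
print(heatup_time(cp=385.0), heatup_time(cp=38.5))
```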
Let’s begin by looking at the temperature distribution in the lamp, wafer, and sensor after 10 seconds of heating. As the simulation plot shows, there is a significant difference between the temperature of the wafer, which is around 1800 K, and the temperature of the sensor, which is around 1100 K. You may also notice that there is an uneven temperature distribution in the wafer. While not included in our example model, reconfiguring the heat source could help address this issue.
The transient temperature field after ten seconds of heating.
We also want to see how well the temperature of the sensor reflects that of the wafer’s surface. For this purpose, it is helpful to plot the temperature transient of the centerpoint on the wafer’s surface facing the lamp along with the temperature at a certain point on the top surface of the sensor. In the plot below, these two measurements are denoted by T_{wafer} and T_{sensor}, respectively. The temperature transient of the lamp (T_{lamp}) and the irradiation power at the surface of the sensor I_{sensor} are also shown.
Comparison of the temperature transients for the individual components of the RTA configuration along with the irradiation power at the sensor’s surface.
As the results indicate, the temperature of the sensor poorly reflects that of the wafer. A thermocouple's signal would therefore not be very useful in regulating this process. The IR detector, however, shows good agreement with the characteristics of the wafer temperature. Accurate measurements of the wafer's temperature can thus be obtained from the IR signal with a simple scalar amplification.
There are some drawbacks to IR sensors that are important to mention as well. For instance, the IR sensor has far less inertia than the wafer. While the wafer needs some time to heat up, the sensor detects the radiation as soon as it starts. Further, an IR signal depends on the wafer’s emissivity. Because the emissivity varies with temperature, the response is nonlinear. The signal is also rather sensitive to changes within the geometry. With tools like COMSOL Multiphysics, you can fully study such phenomena and gain a better understanding of how to optimize your RTA configuration for successful semiconductor manufacturing.
Before the invention of gears, people used wheels to transfer the rotation of one shaft to another with the help of friction. The major drawback in using these frictional wheels was the slippage beyond a certain torque value, as the maximum torque that could be transmitted was limited by the frictional torque. To overcome this limitation, people began using toothed wheels, more commonly known nowadays as cogwheels or gears.
Gear pair created using the Parts Library in the Multibody Dynamics Module.
The main purpose behind gears is to avoid slippage. This is why the teeth of one gear are inserted between the teeth of the mating gear, a process referred to as gear meshing. Compared to the gear’s core region, the gear’s mesh region is more flexible. Hence, accounting for the stiffness of the gear mesh is important when trying to accurately capture the dynamics and vibrations in the system.
Gear mesh stiffness depends on several different parameters and, most importantly, it varies with the gear rotation. This makes the problem nonlinear, and the continuously varying gear mesh stiffness gives rise to vibrations in the system. These vibrations in different parts of the transmission system result in noise radiation. Therefore, it is crucial to evaluate gear mesh stiffness and include it in the gear model.
To examine gear mesh stiffness, we assume that the gears are elastic bodies and model the contact between them. We then perform a stationary parametric analysis to determine the mesh stiffness of the gears for different positions in a mesh cycle. A mesh cycle is defined as the amount of gear rotation after which the next tooth takes the position of the first one.
Now, to understand this process, let’s take an example in which two gears, both made of steel, have the following properties:
| Property | Symbol | Pinion | Wheel |
|---|---|---|---|
| Number of teeth | n | 20 | 30 |
| Pitch diameter | d_{p} | 50 mm | 75 mm |
| Pressure angle | α | 25° | 25° |
| Gear width | w_{g} | 10 mm | 10 mm |
In this example, both gears are hinged at their respective centers. Using the penalty contact approach, we model the contact between the teeth of the two gears. The boundaries of the two gears in contact with each other are shown below. For more details about how to set up this model, you can check out the tutorial titled: Vibrations in a Compound Gear Train.
The contact pair boundaries (left) and the finite element mesh (right) in the gear pair.
Because the mesh stiffness changes for the gears’ different positions in the mesh cycle, we rotate both gears parametrically to compute the variation of gear mesh stiffness. The rotation of the pinion (θ_{p}) about the out-of-plane axis is prescribed in such a way that the pinion rotates for two mesh cycles. The rotation of the wheel (θ_{w}) about the out-of-plane axis is defined as the following:
θ_{w} = θ_{p}/g_{r} + θ_{t}

where g_{r} is the gear ratio with a value of 1.5 and θ_{t} is the twist with a value of 0.5°.
The wheel is given a twist, θ_{t}, and the required twisting moment, T, is evaluated on the hinge joint. Hence, the torsional stiffness of the gear pair is evaluated as:

k_{t} = T/θ_{t}
Once we know the torsional stiffness, we can define the stiffness along the line of action as:

k = k_{t}/((d_{pw}/2) cos α)^{2}
where d_{pw} is the pitch diameter of the wheel and α is the pressure angle.
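The two steps above can be sketched numerically as follows. The twisting moment value is an assumed placeholder, since in the tutorial it is evaluated on the wheel's hinge joint:

```python
import math

# Hedged sketch of the conversion described above: from the evaluated
# twisting moment T and the prescribed twist theta_t to the torsional
# stiffness, then to the stiffness along the line of action.
d_pw = 0.075                 # m, pitch diameter of the wheel (from the table)
alpha = math.radians(25.0)   # pressure angle (from the table)
theta_t = math.radians(0.5)  # prescribed twist
T = 10.0                     # N*m, assumed twisting moment (placeholder value)

k_torsional = T / theta_t               # N*m/rad, torsional stiffness
r_b = 0.5 * d_pw * math.cos(alpha)      # m, moment arm along the line of action
k_mesh = k_torsional / r_b**2           # N/m, stiffness along the line of action
print(k_torsional, k_mesh)
```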
The von Mises stress distribution in the gear pair for different positions in a mesh cycle. This shows high stress levels at the contact points along the line of action.
The figure below shows the variation of computed gear mesh stiffness with the rotation of the pinion for two mesh cycles. We can see that the gear mesh stiffness is periodic in each mesh cycle as well as across multiple mesh cycles, increasing in the beginning and then later decreasing. This happens due to the changing contact ratio. In the beginning of a mesh cycle, the contact ratio increases from 1 to 2, but then drops back down to 1.
The variation of gear mesh stiffness with the pinion rotation.
In the previous section, we saw that gear mesh stiffness varies with the gear’s position in the mesh cycle. It also depends on several other parameters, some of which are listed here:
Let’s focus on investigating the effect of gear tooth parameters on the mesh stiffness. While doing so, we keep the same geometric and material properties that were given in the first table.
To look at the effect of the number of teeth or module on gear mesh stiffness, we consider different values for the number of teeth on the pinion.
We then compute the number of teeth on the wheel by using the gear ratio, which is set to 1.5. The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the number of teeth (n_{p} = 20, 28, 36).
The von Mises stress distribution in the gear pair for different values of n_{p}.
The variation of gear mesh stiffness with pinion rotation for three different values of the number of teeth (n_{p} = 20, 28, 36). The stiffness is comparatively higher and smoother for a greater number of teeth or for a smaller module.
To understand the effect of pressure angle on gear mesh stiffness, we look at three different values of the pressure angle.
The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the pressure angle (α = 20°, 25°, 35°).
The variation of gear mesh stiffness with pinion rotation for three different values of the pressure angle (α = 20°, 25°, 35°). The stiffness increases with a larger pressure angle.
After investigating the effects of module and pressure angle, we now examine the effect of different addendum values on gear mesh stiffness.
The other two gear tooth parameters are fixed to the following values:
Gear meshes for three different values of the addendum-to-pitch-diameter ratio (adr = 0.6, 0.75, 0.9).
The variation of gear mesh stiffness with pinion rotation for three different values of the addendum-to-pitch-diameter ratio (adr = 0.6, 0.75, 0.9). The stiffness is comparatively higher for higher values of the addendum; however, it also fluctuates more, which may lead to higher vibration levels in the transmission system.
After evaluating gear mesh stiffness using the static contact analysis, the next step is to include the stiffness in the gear model so that we can perform an NVH analysis of the full transmission system.
The gear mesh stiffness and damping added along the line of action between the two gears.
In the multibody dynamics analysis, we use the evaluated gear mesh stiffness in the Gear Elasticity node under the Gear Pair node. In this analysis, we write gear mesh stiffness as a function of gear rotation. By default, gear mesh stiffness is assumed periodic in a mesh cycle. However, it is also possible to assume that it is periodic in a full revolution.
In order to dampen the vibrations, we can add gear mesh damping in the Gear Elasticity node. This can be entered either as a function of the mesh stiffness or explicitly. Entering it as a function of the mesh stiffness works well when the gear mesh stiffness variation is available. If we don't have the exact gear mesh stiffness variation, we can instead use the gear tooth stiffness for the wheel and the pinion. The tooth stiffness can be evaluated simply by applying a load on a gear tooth and measuring the deflection. The gear tooth stiffness is also a function of the mesh cycle, although, as an approximation, we can enter it as a constant average value.
Finding the overall gear mesh stiffness also requires determining the contact ratio. In simple words, the contact ratio can be defined as a measure of the average number of teeth in contact during the period in which a tooth comes into and goes out of contact with the mating gear. To show how different values of the contact ratio affect the stiffness, let's examine a few cases.
In the first case, only a single pair of teeth is in contact for all positions in the mesh cycle. The typical variation of the gear tooth stiffness is shown below.
The typical variation of the gear tooth stiffness for the pair of teeth in contact.
In the second case, two pairs of teeth are in contact for all positions in the mesh cycle. We can see from the following image that, except for a phase difference, the second pair of teeth has the same stiffness as the first pair. The total stiffness of the gear mesh is the sum of the individual tooth stiffnesses.
The typical variation of the gear tooth stiffness for the first and second pair of teeth when the contact ratio equals 2.
In the third case, the pairs of teeth that are in contact change for different positions in the mesh cycle. For certain positions, there is only one pair of teeth in contact, whereas in other positions, there are two pairs of teeth in contact. The stiffness of the second pair of teeth goes to zero when it loses contact for certain positions in the mesh cycle. This results in large fluctuations in the overall gear mesh stiffness, which leads to vibrations in the system.
The typical variation of the gear tooth stiffness for the first and second pair of teeth when the contact ratio is between 1 and 2.
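The three cases above can be illustrated by summing phase-shifted tooth-stiffness profiles. The half-sine shape, peak stiffness, and contact ratio below are assumptions for illustration, not the computed stiffness from the model:

```python
import math

# Hedged sketch: total mesh stiffness as the sum of individual tooth-pair
# stiffness profiles, each entering contact one mesh cycle after the last.
K_TOOTH = 2.0e8   # N/m, assumed peak single-tooth-pair stiffness

def tooth_stiffness(x):
    """Stiffness of one tooth pair vs. mesh-cycle coordinate x.
    The pair is in contact for a duration of CR mesh cycles."""
    CR = 1.6  # assumed contact ratio, between 1 and 2
    return K_TOOTH * math.sin(math.pi * x / CR) if 0.0 <= x <= CR else 0.0

def mesh_stiffness(x):
    """Total stiffness: sum over tooth pairs, periodic in one mesh cycle."""
    x = x % 1.0
    return tooth_stiffness(x) + tooth_stiffness(x + 1.0)

# With 1 < CR < 2, some positions have two pairs in contact, others only one,
# which produces the large fluctuations described above:
print(mesh_stiffness(0.3), mesh_stiffness(0.8))
```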
To demonstrate the effect of gear mesh stiffness on gear dynamics, let’s use a pair of helical gears as an example. We first perform a transient study to compare a rigid gear mesh, gear mesh with a constant stiffness, and a gear mesh with a varying stiffness. We then analyze the effects of different types of gear mesh on the angular velocity of the driven gear as well as on the contact force. More details about this tutorial model can be found in the Application Gallery.
The figure below shows the variation of the driven gear’s angular velocity for the constant angular velocity of the driver gear. For a rigid gear mesh, the driven gear rotates at a constant speed. When the gear mesh stiffness is constant, the driven gear initially fluctuates before settling down to a constant speed. The gear mesh that has a varying stiffness continues to fluctuate about the mean value, giving rise to the vibrations.
Driven gear angular velocity for different types of gear meshes.
We can observe a similar trend in the contact forces. The rigid and constant-stiffness gear mesh eventually begin to maintain a constant contact force, but the varying-stiffness gear mesh causes the contact force to fluctuate about the mean value. The contact force variation is periodic with respect to the mesh cycle, and the contact force varies from about 150 N to 450 N, with a mean value of 250 N. This large variation in the contact force within a mesh cycle rotation causes vibrations in other parts of the system. This may lead to noise radiation in the surrounding area.
Variation of the contact force with gear rotation for different types of gear meshes.
The variation of gear mesh stiffness, which depends on several geometric and material parameters, plays an important role in the NVH analysis of a transmission system. With COMSOL Multiphysics and the Multibody Dynamics Module, we can calculate its variation by combining a contact analysis with the parameterized gears in the Parts Library. We can then use the computed gear mesh stiffness in the multibody dynamics model to accurately capture the dynamics of gears working together with the other parts of the transmission system.
Stay tuned for the next blog post in our Gear Modeling series, where we’ll show you how to simulate gearbox noise and vibrations generated due to varying gear mesh stiffness. In the meantime, we encourage you to browse the additional resources below.
Let’s begin with a brief introduction to rotordynamics modeling. As we have mentioned previously on the blog, rotordynamics analysis helps enhance the functionality and safety of rotating machines, which are used across many industries, from aerospace technology to power generation.
For instance, say that you want to make sure that a generator, one type of rotating machine, avoids instabilities, damaging resonances, and failure caused by a poor design. You can use rotordynamics analysis to study the vibrations that both influence the physical behavior of the generator and are exacerbated by the generator’s rotation and structure.
A generator (left) and a 3D model of a generator (right).
With simulation software, you can increase the accuracy and simplicity of your rotordynamics studies. And now, with the Rotordynamics Module, this process has become even more user friendly and flexible.
The Rotordynamics Module helps you set appropriate design parameters to keep responses within acceptable operating limits by analyzing resonances, stresses, strains, and the effects of lateral and torsional vibrations on rotating machinery. Additionally, you can use this module to take a closer look at how stationary and moving rotor components affect your design, as well as other parameters such as critical speeds, natural frequencies, and mode shapes. We’ll delve into a few specific benefits and features in the next section.
One of the main benefits of the Rotordynamics Module is flexibility. With this module, you can easily customize your simulation analysis to study specific parts of a rotating assembly or the whole structure.
The latter of these options is achieved with the Solid Rotor interface in the Rotordynamics Module, which uses a 3D CAD geometry to represent the rotor and solid elements for finite element modeling. By studying all of the components in a rotating assembly, you can generate the most accurate results possible. While modeling the entire system is not strictly necessary to compute the stress and deformation of the rotor, doing so increases the accuracy of the simulation. To obtain the distribution of the stress and deformation fields throughout the whole domain, modeling the rotor with solid elements is necessary.
Using this interface, you can include nonlinear geometric effects, fully describe geometrical asymmetries, account for phenomena such as spin softening and stress stiffening, and more.
A crankshaft model that uses the Solid Rotor interface to analyze the bearings’ pressure distribution in the lubricant as well as the von Mises stresses.
What if you want your model to be less computationally expensive? You can turn to the Beam Rotor interface for a faster and more computationally efficient option for modeling rotating machines. In this interface, an edge along the rotor axis defines the rotor and the other rotating machine components are defined by creating points at their respective locations.
As another advantage, this module simplifies the modeling process for two key elements in a rotor system: foundations and bearings. First, let’s look at foundations, which are broken down into three different modeling options:
A model of the pressure in a hydrodynamic bearing.
Resting upon these foundations are bearings. Let’s first focus on journal bearings, which you can model in two ways with the Rotordynamics Module:
This second option utilizes three different interfaces where the hydrodynamic part is based on a full Reynolds equation implementation. The Hydrodynamic Bearings interface models the behavior of journal bearings in detail and features an easy method for modeling an oil lubricant between a journal and bushing. The Solid Rotor with Hydrodynamic Bearing and Beam Rotor with Hydrodynamic Bearing interfaces both analyze a rotor, hydrodynamic bearing, and their interactions. However, as the names imply, the former uses solid elements to describe the rotor and the latter uses beams to define an approximated rotor.
If you’re interested in modeling thrust bearings, the Rotordynamics Module has you covered. This module includes three types of thrust bearings and behaviors: no clearance bearings, bearing stiffness and damping coefficients, and bearing forces and moments.
Using the previously discussed features and functionality, you can design a model that fits your specific needs. However, there is even more customization available in the Rotordynamics Module, which offers a variety of study types.
With the included study types, you can easily model gyroscopic effects. Vibrational effects, meanwhile, are modeled from the perspective of a corotating observer. To achieve this, we use a coordinate system that rotates along with the rotor, which removes the need to physically rotate the rotor when simulating the assembly and thus simplifies the modeling process. Modeling in a corotating frame also allows an eigenfrequency analysis of a rotating system, which would otherwise be impossible due to the nonlinearity of the rotation when the system is observed in a space-fixed frame.
The available study types apply to static and dynamic analyses and include:
Note that for rotordynamics analyses, the definition of a stationary study differs from that used in conventional analyses.
After you’ve run your study, it’s time to visualize your results and share them with others. Doing so requires choosing the plot type that best presents your particular results. Take a look at the four images below to see a few of the available plot types you can create from your rotordynamics analyses.
Whirl plots show a rotor’s mode shapes around its axis.
Campbell plots show how a rotor’s natural frequencies vary with its angular speed.
Waterfall plots show how the frequency spectrum varies as the rotor’s angular speed increases.
Orbit plots show the rotor displacements at certain points, such as the locations of bearings and disks.
To learn more about the Rotordynamics Module, click on the button below. You can also contact us with any questions you may have. Happy modeling!
Let’s consider a symmetric two-bar structure under a compressive load, as shown in the following figure:
Two bars under compression.
We assume that the bars are linearly elastic, so that the force in a bar is F = EAΔ/L_{0}, where Δ is the elongation and L_{0} is the original length.
Using Pythagoras’ theorem, the vertical force can then be written as an explicit function of the vertical displacement:
The following quantities have been nondimensionalized:
The force as a function of displacement is shown in the graph below. The example is actually a buckling problem with snap-through behavior. Between points A and C, no unique solution exists. In a previous blog post, we further discuss the concept of buckling in structures.
The compressive force in the bars increases until they are horizontal, but beyond point A, its vertical projection decreases even faster. This is why the vertical force decreases.
Force as a function of vertical displacement.
If we build a finite element model of this structure and try to increase the load, the analysis will probably fail when we reach the first peak at point A. We can, however, easily trace the solution by prescribing the vertical displacement at the loaded point, rather than the force. The applied force can then be obtained as the reaction force. The graph above was created using that method.
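The displacement-controlled tracing described above is easy to reproduce in a few lines of Python. The geometry and stiffness values below are illustrative assumptions, not values from the article:

```python
import math

# Hypothetical geometry and stiffness (illustrative values only):
a, h = 1.0, 0.2          # half-span and initial apex height of the two bars
k = 1.0                  # axial stiffness EA/L0 of each bar
L0 = math.hypot(a, h)    # undeformed bar length

def vertical_force(w):
    """Applied vertical force P in equilibrium with apex displacement w."""
    L = math.hypot(a, h - w)                 # current bar length
    return 2.0 * k * (L0 - L) * (h - w) / L  # vertical component of both bar forces

def tangent_stiffness(w, dw=1e-6):
    """Numerical dP/dw; it goes negative between the two limit points."""
    return (vertical_force(w + dw) - vertical_force(w - dw)) / (2.0 * dw)

# Sweep the *displacement* (displacement control): the entire curve,
# including the descending branch, is traced without difficulty.
for i in range(11):
    w = 2.0 * h * i / 10.0
    print(f"w = {w:4.2f}  P = {vertical_force(w):+9.6f}  dP/dw = {tangent_stiffness(w):+8.4f}")
```

Sweeping the force instead would fail at the first limit point, exactly as described above.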
The tangential stiffness for this single degree of freedom system is defined as the rate of change in force with respect to displacement:
The stiffness is thus negative between points A and B. A negative stiffness is often related to numerical and physical instabilities.
Stiffness as a function of vertical displacement.
There are several material models within the field of solid mechanics that contain a negative slope of the stress-strain curve, either as an intentional effect or with certain choices of parameters. For example, some models for concrete are designed like this. In the physical interpretation of this behavior, cracks form when the material model is loaded in tension. The load carried by a test specimen will then decrease. The cohesive zone models used for describing decohesion in the COMSOL Multiphysics® software also show this type of behavior.
A strain-softening material.
At the material level, decreasing stress with increasing strain indicates a negative stiffness:
Such a material can only be tested under a condition of prescribed displacement; otherwise, it will fail immediately when the peak load is reached. The negative stiffness is thus related to a material instability.
In general, the stress and strain states are multiaxial. Stress and strain are represented by second-order tensors. In the multiaxial case, we must use a more general criterion for material stability: For any small change in the strain state, the corresponding change in the stress state must be such that the sum of the products of all stress and strain components is positive. That is,
Or, written in component form,
This is called Drucker’s stability criterion or Hill’s stability criterion.
The discretized form used in finite element analysis implies that the constitutive matrix relating stress increments to strain increments must be positive definite in order for the material to be stable. This is a condition that is generally computationally expensive to check for nonlinear materials. For a linear elastic material, the requirement can be directly converted into the well-known requirements E > 0 and −1 < ν < 0.5.
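For the isotropic linear elastic case, the positive-definiteness check reduces to the sign of the two distinct eigenvalues of the elasticity tensor. A minimal sketch (the function name is ours):

```python
def isotropic_material_is_stable(E, nu):
    """Drucker/Hill stability for isotropic linear elasticity: the
    constitutive matrix is positive definite iff its two distinct
    eigenvalues, 3K = E/(1 - 2*nu) and 2G = E/(1 + nu), are positive."""
    if nu == 0.5 or nu == -1.0:   # limiting values: the matrix degenerates
        return False
    return E / (1.0 - 2.0 * nu) > 0.0 and E / (1.0 + nu) > 0.0

# Equivalent to the classical requirements E > 0 and -1 < nu < 0.5:
print(isotropic_material_is_stable(210e9, 0.3))   # typical steel -> True
print(isotropic_material_is_stable(210e9, 0.6))   # -> False
```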
How can we reconcile the fact that we sometimes need to work with material models that do not fulfill the stability criterion? The important insight is that the material can be locally unstable while the structure as a whole remains stable.
To understand this behavior, we can think of the material in the structure as connected springs. Some springs are purely elastic and represent the undamaged material, while one spring follows a damage model and can fail. Consider the three springs in the figure below.
A three-spring system. The extension of the failing spring is denoted u_{1}.
The spring k_{1} represents the material with the damage model, whereas the other two springs are purely elastic. The material model for the first spring is bilinear.
Material model for the nonlinear spring. The peak force F_{m} is reached at the displacement u_{m}.
The force in the lower branch is independent of damage:
Before the peak load is reached, the force in the upper branch is
since the two springs are connected in series.
The damage starts when the force in the upper branch reaches the peak force F_{m}; that is, when the external displacement is
and the corresponding external force is
During the degradation, the force in the damaged spring decreases with its extension. The same force also passes through spring 2, so that F_{1} = k_{2}(u − u_{1}), where u denotes the external displacement.
These two relations determine u_{1} as a function of the external displacement:
In order to give a reasonable solution, u_{1} must increase when the external displacement is increased. Thus, it is necessary that k_{2} be larger than the magnitude of the softening slope of the damaged spring. This is actually a clue to a very general result: a quick decrease in the force (or stress) is more susceptible to instability than a slower decrease.
Finally, we can derive the relation between the total external force and displacement during the degradation phase:
Thus, the external force can either increase or decrease when the external displacement increases, depending on the relative stiffness in the two branches. This simple model can thereby predict two types of instability:
In either case, a slower decrease of the force in the damage model is beneficial. In other words, the stiffer the surroundings, the more plausible it is that the whole system will be stable.
A globally stable system (left) and a system where the stiffness in the lower branch is too small to maintain stability (right).
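The two branches can be put into code to make the stability argument concrete. All parameter values below are illustrative assumptions, and the damaged spring is taken as bilinear, softening from its peak force F_m at extension u_m down to zero at an assumed extension u_f:

```python
# Sketch of the three-spring system under displacement control
# (all parameter values are illustrative, not from the article).
k1, k2, k3 = 1.0, 0.8, 0.3   # damaging spring, series spring, lower branch
F_m = 1.0                    # peak force of the damaging spring
u_m = F_m / k1               # spring-1 extension at the peak
u_f = 3.0 * u_m              # assumed spring-1 extension at complete failure
s = F_m / (u_f - u_m)        # magnitude of the softening slope (s < k2 here)

def upper_branch_force(u):
    """Force in the series combination k1 + k2 at external displacement u."""
    u_damage = F_m * (1.0 / k1 + 1.0 / k2)   # external displacement at damage onset
    if u <= u_damage:
        return u * k1 * k2 / (k1 + k2)       # elastic series stiffness
    # During degradation, equilibrium of the series branch gives
    # k2*(u - u1) = F_m*(u_f - u1)/(u_f - u_m), solved for u1:
    u1 = (k2 * u - s * u_f) / (k2 - s)       # requires k2 > s to be meaningful
    return max(k2 * (u - u1), 0.0)           # zero after complete failure

def external_force(u):
    """Total force: parallel elastic lower branch plus the damaging upper branch."""
    return k3 * u + upper_branch_force(u)

for u in (1.0, 2.0, 2.25, 2.6, 3.0, 3.5):
    print(f"u = {u:4.2f}  F = {external_force(u):6.4f}")
```

With k3 = 0.3 the total force decreases during degradation, so the system is globally unstable under force control; raising the lower-branch stiffness to, say, k3 = 1.5 keeps the total slope positive throughout, illustrating that stiffer surroundings stabilize the system.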
In reality, we are not free to make arbitrary choices about force and stiffness. The area under the triangular force-displacement curve in the material model represents the energy dissipated by the process. The energy dissipation and the displacement (or strain) at final failure have a physical meaning.
The damaged part of a structure elongates while its force decreases. If the external displacement remains fixed, then the elastic parts of the structure must contract to compensate. This means that elastic energy is released. The only way the energy can be absorbed is by doing work on the damaged part. If, for a certain incremental displacement, the energy released by the elastic parts is larger than the work needed to produce the same displacement in the cracking part, the state is unstable.
Years ago, a friend of mine at the Department of Solid Mechanics at KTH Royal Institute of Technology in Stockholm performed some interesting experiments where he studied the stability of cracks in a ductile steel using extremely long three-point bend test specimens. The tests highlighted that crack stability is not only a function of the local stress state, but also of the capacity that the stored energy in the test specimen has to drive crack propagation. The longest test specimen in the experiments was 26 meters and occupied a large portion of the lab! The experiment was reported in the article “The stability of very long bend specimens” in the International Journal of Pressure Vessels and Piping.
With softening material models, it is extremely difficult to achieve convergence in a finite element model if the stress state is homogeneous.
In a physical material, the strength does not have a perfectly uniform distribution. When increasing the load, a crack will form at the location with the lowest strength, even if the stress state is homogeneous. When this happens, the surrounding material is unloaded.
Consider this example of three elastic blocks joined by two glue layers:
In real life, one glue layer will fail before the other. The slightly stronger layer will then be unloaded as the force through the part decreases. We cannot predict which layer will fail, since that is controlled by manufacturing inaccuracies. In the mathematical model, however, both layers fail simultaneously. Numerically, the iterations may not converge because the failure jumps back and forth between the two layers.
In a finite element model, the stresses are evaluated at each integration point within each element. When the load is increased above the maximum value, the failure may even jump between the elements or individual integration points within the same element (if the stress is the same everywhere).
This behavior implies that if we implement our own material model containing strain softening, we should test it using a single first-order element under prescribed displacements. In this way, we ensure a homogeneous strain field, with the same stress everywhere in the element. One example is the Mazars damage model, which we described in a previous blog post. If we were to change the element shape functions in that model to quadratic, the analysis would no longer converge.
Does this mean that damage models are meaningless? Not at all. However, we must be careful to avoid indeterminate states. If a structure and its boundary conditions are symmetric, that symmetry must be employed in order to avoid indeterminacy. We can often solve problems with axial symmetry by using an axially symmetric model, while this may be impossible using a model of a 3D solid sector. Another approach is to allow a slight random spatial disturbance of the material data. This approach actually mimics nature, where strength values are randomly distributed. Also, it is important to increase the loading slowly in order to avoid large portions of the structure switching to a failed state at the same time.
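The idea of a slight random disturbance of the material data can be illustrated with a toy model of parallel brittle fibers. Everything here (fiber count, 1% scatter, unit stiffness, load step) is an assumption for illustration: with perfectly identical strengths, the model could not decide which fiber fails first.

```python
import random

random.seed(1)

# N parallel brittle fibers under equal strain. A tiny random
# perturbation of the nominal strength resolves the indeterminacy.
N = 5
nominal_strength = 1.0
strengths = [nominal_strength * (1.0 + 0.01 * random.uniform(-1, 1))
             for _ in range(N)]

# Increase the strain slowly and monotonically; each fiber fails when
# its stress (here stress = strain, unit stiffness) exceeds its own strength.
failed = set()
strain = 0.0
order = []
while len(failed) < N:
    strain += 1e-3                      # slow loading avoids mass failure
    for i, s in enumerate(strengths):
        if i not in failed and strain >= s:
            failed.add(i)
            order.append(i)
print("failure order:", order)
```

The weakest fiber fails first and unloads nothing here (the fibers are independent), but in a coupled structure this sequential failure is exactly what prevents the iterations from jumping back and forth between equally stressed locations.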
In some material models, for example, within soil plasticity, strongly mesh-dependent thin layers with high shear strains can occur. These layers are called shear bands. When yielding is first initiated, the surrounding elements or even integration points are unloaded. The first elements to yield continue to accumulate plastic strains. It is interesting that this type of instability can actually be seen in real soil and is not only an artifact in the numerical model. Just as in nature, we cannot predict the exact location and distribution of the shear bands in the model.
As mentioned in the initial example, using prescribed displacements rather than prescribed forces is a good way to stabilize the numerical problem. However, this approach is essentially limited to the following cases:
There is a more general method, which we can use to continue solving past a point of instability. In this method, we first prescribe an arbitrary quantity that is known to monotonically increase and then add an extra equation that solves for the corresponding value of the prescribed load or displacement.
To demonstrate this technique, let’s augment the initial example with a spring, so that the load is applied by prescribing the displacement of the spring’s end. If the spring is very stiff, this is essentially the same as prescribing the displacement directly.
Bar system loaded through a spring.
If the spring is softer, the system may become unstable, since too much energy can be released by the spring. The critical value is
This is the most “negative stiffness” of the bar assembly, which occurs when the bars are horizontal. The relation between force and displacement at point 1 when varying the spring stiffness is shown below. The spring stiffness is given as
where the coefficient β is varied from values giving an essentially rigid spring down to values below the critical one.
Force as a function of the displacement at point 1 when varying the spring stiffness.
For values of β smaller than one, the solution fails when the spring stiffness equals the “negative” stiffness of the bar assembly.
If a prescribed force is used instead, all solutions will fail at the first peak load. By using prescribed displacement, it is possible to continue the analysis further. For lower spring stiffness values, we are still limited by the state when the internal instability causes failure.
The solution that we want to track has a monotonically increasing vertical displacement at point 2, but prescribing it directly is not possible, since this would change the problem fundamentally. Instead, we add an equation stating: “Set the spring end displacement at point 1 so that the monitored displacement at point 2 has the prescribed value.” To do this, we add a Global Equation node in which a new unknown variable, disp_at_P1, is added.
The Global Variable definition.
The equation determining the value of disp_at_P1 states that disp_at_P2 - delta = 0. The variable delta is the monotonic parameter incremented in the Stationary study step, and disp_at_P2 is a variable that contains the current value of the displacement at point 2.
Settings for the study step, where delta is used as the auxiliary sweep parameter.
The displacement at point 1 is then prescribed to the value that satisfies the global equation.
Settings for the prescribed displacement at point 1.
With this modification, it is possible to trace the solution through the instability. As seen in the following graph, even strong instabilities can be bypassed using this method.
Force as a function of the displacement at point 1 when varying the spring stiffness after stabilization with a Global Equation node.
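The continuation idea can be mimicked outside the software. The sketch below uses an illustrative two-bar geometry (all values are assumptions) with a spring softer than the critical stiffness; sweeping the monitored apex displacement and back-computing the spring-end displacement traces the full equilibrium path, including the snap-back that direct displacement control cannot follow:

```python
import math

# Illustrative two-bar geometry and stiffness (assumed values):
a, h, k = 1.0, 0.2, 1.0
L0 = math.hypot(a, h)
# Spring softer than the most negative bar stiffness, which for this
# geometry works out to 2*k*(L0 - a)/a (about 0.04 here):
k_s = 0.02

def bar_force(w):
    """Vertical force carried by the bar assembly at apex displacement w."""
    L = math.hypot(a, h - w)
    return 2.0 * k * (L0 - L) * (h - w) / L

def point1_displacement(w):
    """Spring-end displacement that equilibrates a monitored apex displacement w.
    This plays the role of the global equation: sweep w (= delta)
    monotonically and solve for the point-1 value; here it is explicit."""
    return w + bar_force(w) / k_s

for i in range(9):
    delta = 2.0 * h * i / 8.0          # the monotonic sweep parameter
    print(f"delta = {delta:5.3f}  u1 = {point1_displacement(delta):+8.4f}")
```

Note that the point-1 displacement is non-monotonic along the swept path; that snap-back is precisely why prescribing it directly fails once the spring stiffness drops below the critical value.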