In 1773, André Bordier, a Swiss naturalist, was the first to use the term “fluid” to describe the movement of mountain glaciers. However, it took more than a century for scientists to agree on a unified description of glacier dynamics.
One of the most confusing aspects of glaciers is the observation that ice exhibits both viscous and plastic behavior, depending on the glacier. British physicist John Glen observed and described this intermediate behavior using a nonlinear relationship between stress and strain rate. Known as shear thinning, this classical behavior applies to many different fluids (e.g., ketchup and blood).
The life of any mountain glacier can be schematically described as follows:
Sketch of a typical mountain glacier.
Thus, we have a dynamical process for the ice, even at steady state (when the snowfall exactly compensates for the melting): creep. This fluid model is a standard Navier–Stokes equation with one simplification: the Stokes (low-Reynolds) approximation, which neglects the advection term. A typical value for the Reynolds number is Re = 10^{-15}, so the assumption undoubtedly holds.
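As a quick sanity check, we can estimate this Reynolds number from representative scales. The values below are rough assumptions for a typical alpine glacier, not measured data:

```python
# Order-of-magnitude Reynolds number for glacier flow.
# All values are representative assumptions, not measurements.
SECONDS_PER_YEAR = 3.15e7

rho = 917.0                   # ice density [kg/m^3]
U = 100.0 / SECONDS_PER_YEAR  # velocity scale: ~100 m/yr, in m/s
L = 200.0                     # thickness scale [m]
mu = 1.0e13                   # effective viscosity of ice [Pa*s] (assumed)

Re = rho * U * L / mu
print(f"Re = {Re:.1e}")       # inertia is utterly negligible
```

Depending on the assumed viscosity, this lands between 10^{-13} and 10^{-15}; either way, the Stokes approximation is beyond doubt.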
The simulation of viscous flow generally assumes a linear relationship between stress and strain rate. This assumption describes a Newtonian fluid. Many fluids are, indeed, Newtonian in their standard condition (e.g., water and air). However, many fluids exhibit a variation of their viscosity when submitted to shearing. A slightly more general approach is to use a constitutive law that describes the viscosity as a function of a certain power of the shear rate. Mathematically speaking, it is written μ = μ_{0}·γ̇^{n−1}, where γ̇ is the shear rate, classically defined as the norm of the strain rate tensor ε̇.
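A minimal sketch of this power-law behavior, using illustrative dimensionless parameters rather than real ice values:

```python
def viscosity(gamma_dot, mu0, n):
    """Power-law (Ostwald-de Waele) viscosity: mu = mu0 * gamma_dot**(n - 1)."""
    return mu0 * gamma_dot ** (n - 1)

mu0 = 1.0       # consistency (illustrative, dimensionless)
n = 1.0 / 3.0   # shear-thinning exponent used for ice

# For n < 1, the viscosity drops as the shear rate grows:
print(viscosity(1.0, mu0, n))            # 1.0
print(round(viscosity(8.0, mu0, n), 6))  # 0.25 (shear thinning: 8**(-2/3) = 1/4)
```

With n = 1, the law degenerates to a constant viscosity, i.e., a Newtonian fluid.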
Get more details in these blog posts about non-Newtonian fluids and the non-Newtonian behavior of ketchup.
To completely define the flow law, two parameters need to be evaluated: the power-law exponent n and the consistency μ_{0}.
In the case of ice, it is common to take n = 1/3. However, the viscosity of ice depends not only on the shear rate but also on the temperature and pressure. The consistency is then defined in order to represent these dependencies. A classical way to define the consistency in ice modeling is to use an Arrhenius law (Ref. 1): μ_{0} = A_{0}·exp(Q/(nRT′)), where A_{0} is a prefactor, Q is the activation energy, R is the ideal gas constant, and T′ is the temperature relative to the pressure melting point.
Indeed, the pressure dependency is reflected through the shift of the ice’s melting point with pressure (which is lowered with increasing pressure). Using the Clausius–Clapeyron relation, we obtain T′ = T + βp, where β is the Clapeyron constant. The values for A_{0} and Q are a matter of debate. (Ref. 2)
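Putting the two previous relations together, a hedged sketch of the temperature- and pressure-dependent consistency might look as follows; the prefactor and activation energy below are placeholders, since, as noted, their values are debated:

```python
import math

R = 8.314       # ideal gas constant [J/(mol*K)]
n = 1.0 / 3.0   # power-law exponent for ice
Q = 60e3        # creep activation energy [J/mol] (typical below -10 C; debated)
A0 = 1.0e3      # prefactor (illustrative placeholder; also debated)
beta = 7.4e-8   # Clausius-Clapeyron constant for pure ice [K/Pa]

def consistency(T, p):
    """Arrhenius consistency mu0(T', p), where T' = T + beta*p is the
    temperature relative to the pressure melting point."""
    T_prime = T + beta * p
    return A0 * math.exp(Q / (n * R * T_prime))

# Warmer ice is softer (lower consistency); pressure has a much weaker effect:
print(consistency(263.15, 1e5) > consistency(268.15, 1e5))  # True
```

This strong temperature sensitivity is exactly why the thermal coupling matters in the warming experiment later in this post.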
This elegant flow law, established empirically over the years through rigorous lab work, nevertheless failed to predict the high velocities observed on real-life glaciers. It took many years to understand what was missing. Consensus emerged in the late 1950s with J. Weertman’s notion of basal sliding.
At the equation level, the basal sliding law of glaciers is no different from the viscous slip concept introduced by H. Navier a century earlier, based on molecular interaction considerations. However, the physical process behind this law, in the case of ice flow, is still a matter of debate and is not the subject of this post. Let’s just recall that it is written u_{t} = (L_{s}/μ)·τ_{b}, where u_{t} is the velocity at the base; μ is the viscosity (nonlinear here); τ_{b} is the basal traction, or the shear stress at the bedrock; and L_{s} is called the slip length (Ref. 2). This quantity is crucial in glacier flow modeling, as it accounts for a large part of the mass flux downstream, adds to the flowing movement, and represents a behavior similar to a rigid movement.
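In code, the slip law is a one-liner. The magnitudes below are assumptions chosen only to illustrate the order of magnitude of the sliding contribution, not Mer de Glace measurements:

```python
def basal_velocity(tau_b, mu, L_s):
    """Navier-type viscous slip: u_t = (L_s / mu) * tau_b."""
    return L_s / mu * tau_b

# Illustrative magnitudes (assumed):
tau_b = 1.0e5   # basal shear stress [Pa] (~1 bar, a classic glaciology scale)
mu = 1.0e13     # effective ice viscosity near the bed [Pa*s]
L_s = 250.0     # slip length [m]

u_t = basal_velocity(tau_b, mu, L_s)   # [m/s]
print(f"{u_t * 3.15e7:.1f} m/yr")      # sliding contribution, tens of m/yr
```

With these numbers, sliding alone contributes several tens of meters per year, which is consistent with the roughly 60% sliding share found in the simulation below.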
The Mer de Glace, which translates to “Sea of Ice”, is a mountain glacier located in the Mont Blanc massif in the French Alps above the Chamonix Valley. Considered the largest glacier in France, it has been widely observed and monitored because of its considerable moving speed for a valley glacier (around 100 meters per year) and its significant retreat and decrease in size in the past 80 years. Studies show an average loss of 5 meters per year in thickness and 30 meters per year in length.
Below (right) is a sketch of a geometry for a valley glacier, set up using COMSOL Multiphysics and the CAD geometric kernel (available with the CAD Import Module). The geometry approximately mimics the measurements and visuals of the Mer de Glace (left).
Left: Aerial photo of the Mer de Glace in 1909. Image in the public domain, via Wikimedia Commons. (Annotations added by the author.) Right: Model geometry, colored by thickness.
Let’s simulate the nonisothermal flow of the ice mass downslope, under its own weight and subject to basal sliding.
In terms of fluid flow, the inflow and outflow boundary conditions are normal constraints corresponding to the pressure applied by the ice that is not included in the domain; this is simply an assigned hydrostatic (or cryostatic) pressure. The upstream boundary weighs on the domain, thus contributing to a streamwise velocity, while the downstream boundary resists the flow. The surface of the glacier is a free surface.
In terms of heat transfer, the surface is considered to be at an ambient temperature. The boundary in contact with the bedrock is normally subject to a geothermal heat flux, which could be modeled as a boundary condition. However, since such a value is spatially varying and generally unknown, a temperature is imposed in the present case. This way, we ensure that the ice remains at a temperature below 0°C, thus avoiding the phase change and latent heat flux contribution. It is worth noting that this aspect could be taken into account using a Material with Phase Change interface. Heat is allowed to enter and leave the domain at the inflow and outflow boundaries.
An extruded mesh is used that is consistent with the aspect ratio of the geometry.
External weather conditions are important input data for geophysical simulations. Accessing the ASHRAE 2017 database directly through the Heat Transfer interface, we can import the average external temperature and wind velocities at a given time of the year for more than 6000 weather stations all over the world. Here, we use the data from the Grand Saint Bernard station in the Swiss Alps, located 16 km from the Mer de Glace at roughly the same altitude, for the first of February at noon. The ambient temperature is imposed at the glacier surface, and the wind velocities are used to simulate a convective heat flux at the surface.
First, we run the simulation without basal sliding to see how much the viscous flow contributes to the observed velocities of the glacier. The results are expected to be around 120 meters per year at the top of the glacier and 90 meters per year at the end of the glacier.
As we can see on the left-hand side of the plot, solely based on the viscous law described here, we get only 50% of the expected velocity.
We can introduce a viscous slip with a slip length L_{s} = 250 m and run the simulation again. Below, we plot the velocity at the surface along the central flowline of the glacier for both cases.
Now the velocity is globally much higher and better matches the expected magnitude. It is interesting to note that the viscous sliding does not simply introduce a uniform shift of the velocities. Because the slip law involves the nonlinear viscosity, it adds a nonlinear contribution and is therefore not a purely rigid motion. With this value for the slip length, sliding contributes around 60% of the velocity at the surface.
Next, let’s move on to the results about the effect of temperature on the ice flow, which is an important coupling in the context of recent climate change studies. To quantify the effect of global warming on the glacier, let’s consider the following experiment. According to data, the temperature has been globally stable between 1940 and 1970, so we can assume that the glacier reached a steady state during this period. Measurements show that the global average temperature has increased by around 1 degree over 50 years. We can thereby simulate the transient flow of ice over these 50 years with the average temperature steadily increasing for a total of 1 degree.
To see the effect of this change in temperature, we can plot the evolution of the mass outflux, in cubic meters per year, at the downstream boundary.
It’s interesting to observe the delay between the beginning of the temperature increase and the beginning of the glacier’s response. The linear temperature increase starts in 1970 and the first significant effects are observed around 8 years later. This delay is mainly due to the time for a temperature change at the surface to propagate through the whole glacier (and thus increase the average temperature in the whole bulk). As a result, the output mass flux increases by around 12% during this period, leading to a net extra ice loss of 10 meters over the period, a 6% increase (compared to the case where temperature would have remained steady).
Let’s compare our results with the data about the Mer de Glace discussed earlier. If the glacier decreases in thickness by 5 meters per year over our domain (5500 meters long and 600 meters wide on average), we get around 15 Mt of ice loss per year. Even assuming that all of this ice flux melts at a lower altitude (which is not the case), the 3 Mt/yr computed for 2017 is much smaller than the real mass loss that the Mer de Glace has undergone in past decades. This is because the simulation does not take into account the negative surface mass balance (the accumulation of snow minus the melting of ice at the surface).
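The back-of-the-envelope estimate above is easy to verify, assuming an ice density of about 900 kg/m³:

```python
rho_ice = 900.0   # ice density used for the estimate [kg/m^3] (assumed)
length = 5500.0   # model domain length [m]
width = 600.0     # average width [m]
thinning = 5.0    # observed thinning rate [m/yr]

mass_loss = rho_ice * length * width * thinning   # [kg/yr]
print(f"{mass_loss / 1e9:.2f} Mt/yr")             # ~15 Mt of ice per year
```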
Surface mass balance, in terms of modeling, is a data input and itself the product of complicated physics. As an example, hotter summers have a very strong effect on glaciers, because the small amount of snow that normally falls in the summer acts as a shield against solar radiation, thereby protecting the glacier from a large part of summer melting. If the summer snowfall does not occur, the melting is then much greater. This extra melting results in large infiltrations of liquid water through crevasses, which eventually form a subglacial hydrological network that plays a significant role in the basal slipperiness, mostly via the lubrication of the ice–bedrock interface and the water pressure “lifting” the glacier.
The fact that the simulation is performed for a given geometry of the glacier, neglecting the evolution of the geometry through the surface mass balance and dynamics, is also important, since the geometry affects the dynamics.
This blog post has presented the setup and solution of a simple glacier flow model with COMSOL Multiphysics. The COMSOL® software offers specialized functionality for most problems involved in such modeling scenarios. The main limitation here, and in glaciology in general, is the data; typically, the topographical data, basal slip length, surface mass balance, accumulation, and melting.
In an upcoming blog post, we will demonstrate how to get the most out of glacier simulations using sensitivity analysis and data assimilation through the Optimization Module. Stay tuned!
The process of natural convection, also called buoyancy flow or free convection, involves temperature and density gradients that cause a fluid (like air) to move, leading to the transport of heat. Unlike forced convection, no fans or external sources are needed to generate fluid flow — just differences in temperature and density.
Natural convection in air has a wide range of applications in various industries. In the electronics field, this phenomenon dissipates heat in devices, which helps prevent them from overheating. Additionally, structures like solar chimneys and Trombe walls take advantage of this heat transport method to heat and cool buildings. The agricultural industry also depends on natural convection, which helps in the drying and storage of various products.
Natural convection of air through vertical circuit boards.
With the COMSOL Multiphysics® software, it is possible to study natural convection in air for both 2D and 3D models. Let’s take a look at one example…
The Buoyancy Flow in Air tutorial shows how to model natural convection in air for two geometries:
In both cases, all of the edges are insulated except for the left and right sides, which are set to a low and high temperature, respectively. The temperature difference (around 10 K) leads to density gradients in the air, generating buoyancy flow. Note that the cube has more sides than the square, which influences how the air flows.
To simplify the model setup, there are a couple of builtin features in COMSOL Multiphysics that we can use. First up is the predefined Nonisothermal Flow interface, which couples fluid dynamics and heat transfer in the model. We can also use the Material Library to easily determine the thermophysical properties of air.
Next, we can estimate the flow regime by computing the Grashof, Rayleigh, and Prandtl numbers. The Grashof and Rayleigh numbers suggest that the flow is laminar, with a velocity of around 0.2 m/s. As for the Prandtl number, it indicates that viscosity doesn’t influence the buoyancy of the air and that the shear layer thickness is about 3 mm.
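These dimensionless numbers can be estimated by hand. The sketch below assumes a 10 cm cavity and standard air properties near room temperature, so the tutorial's own values may differ slightly:

```python
import math

g = 9.81            # gravitational acceleration [m/s^2]
L = 0.1             # cavity side length [m] (assumed: 10 cm)
dT = 10.0           # temperature difference between hot and cold walls [K]
nu = 1.6e-5         # kinematic viscosity of air [m^2/s]
kappa = 2.2e-5      # thermal diffusivity of air [m^2/s]
beta = 1.0 / 300.0  # thermal expansion of an ideal gas at ~300 K [1/K]

Gr = g * beta * dT * L**3 / nu**2   # Grashof number
Pr = nu / kappa                     # Prandtl number
Ra = Gr * Pr                        # Rayleigh number

U = math.sqrt(g * beta * dT * L)    # free-convection velocity scale
delta = L / Gr**0.25                # laminar shear-layer thickness scale

print(f"Gr = {Gr:.2e}, Ra = {Ra:.2e}, Pr = {Pr:.2f}")
print(f"U ~ {U:.2f} m/s, delta ~ {delta * 1e3:.1f} mm")
```

With these assumptions, Ra stays far below the ~10⁹ transition to turbulence, the velocity scale comes out near 0.2 m/s, and the shear layer near 3 mm, matching the estimates quoted above.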
For more details on estimating the flow regime, download the model documentation from the Application Gallery.
Note: The Buoyancy Flow in Water tutorial model demonstrates a similar model setup with water instead of air.
Let’s take a look at the results, starting with the velocity magnitude of air in the 2D square. In the left image below, we see that the velocity increases as the air nears the left and right edges, with a maximum velocity of 0.05 m/s. While this is a bit lower than the estimated velocity calculated using the Grashof and Rayleigh numbers, it is still in the same order of magnitude. Further, the shear layer thickness (3 mm) corresponds with the estimate from the Prandtl number.
The velocity magnitude (left) and velocity profile (right) of air in the 2D square.
As shown below, the results for the velocity magnitude in the 3D cube are similar to those for the 2D square.
Velocity magnitude in the cube.
Next up, let’s look at the temperature results for the 2D geometry. A single convective cell fills the square, with the air flowing around the edges. We see that the flow of air is faster at the left and right sides, where the temperature differences are the greatest.
The temperature field in the square.
The 3D results show a slightly different scenario. There are small convective cells in the cube at the corners of a vertical plane perpendicular to the heated sides. As mentioned, this difference is likely due to how the front and back sides in the cube affect the airflow.
The temperature and velocity fields in the 3D cube.
The model geometries in the Buoyancy Flow in Air tutorial are rather simple, but the example provides you with a solid foundation for modeling natural convection in more detailed models that represent real-world applications.
For more details about this example, go to the Application Gallery via the button above. From there, you can download the MPH-file and step-by-step instructions on how to build the model.
Let’s consider a buoyant sphere that is dropped into a water-filled beaker. The sphere, with a radius of 0.15 cm and a density of 800 kg/m^{3}, is initially suspended in air. The cylindrical beaker has a radius of 0.65 cm and a height of 1.7 cm, and it is partially filled with water. The system is small enough to assume laminar flow, which also allows for a faster solution process on a regular PC. A larger system can be modeled in exactly the same way but would require more computational resources.
To model the motion of the sphere through the water surface, we need to model the fluid flow in both the air and water phases. If we consider the sphere to be rigid, we can compute the motion of it by solving the ordinary differential equations (ODEs) of motion that account for the force experienced by the sphere falling through the air, hitting the water surface, and finally interacting with both air and water. The total fluid stress exerted by the fluids on solid boundaries is given by
(1)   f = (−pI + μ(∇u_{fluid} + (∇u_{fluid})^{T}))·n
where u_{fluid} denotes the fluid velocity field, p the fluid pressure, μ the dynamic viscosity of the fluid, n the outward normal to the boundary, and I the identity matrix.
Buoyant sphere suspended in air above the water surface in a water-filled beaker. Part of the beaker wall has been hidden to show the sphere.
Assuming that the solid only moves in the vertical direction (z), we can write
(2)   m·dv_{s}/dt = F_{z} − mg,   du_{s}/dt = v_{s}
where m denotes the mass, v_{s} the velocity in the z direction, and u_{s} the displacement in the z direction of the sphere.
The force F_{z} can be calculated by integrating the z-component of the total stress from Eq. 1 over the surface of the sphere. The above equations of motion (ODEs) are solved along with the finite element model to compute the displacement and velocity of the sphere.
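To get a feel for the expected damped, oscillatory response, here is a deliberately crude stand-in for the full coupled problem: a sphere bobbing about its flotation equilibrium with a linearized buoyancy restoring force, and an assumed linear damping coefficient in place of the integrated fluid stress of Eq. 1:

```python
import math

# Crude stand-in for the full FSI problem. The damping coefficient c is a
# hypothetical substitute for the integrated fluid stress F_z of Eq. 1;
# Eq. 2 is time stepped with semi-implicit (symplectic) Euler.
r = 0.0015                               # sphere radius [m]
rho_s, rho_w = 800.0, 1000.0             # sphere and water densities [kg/m^3]
m = rho_s * 4.0 / 3.0 * math.pi * r**3   # sphere mass [kg]
k = rho_w * 9.81 * math.pi * r**2        # linearized buoyancy stiffness [N/m]
c = 2e-5                                 # damping coefficient [N*s/m] (assumed)

u, v = 0.5e-3, 0.0   # initial offset from equilibrium [m] and velocity [m/s]
dt = 1e-4            # time step [s]
for _ in range(20000):            # simulate 2 s of motion
    a = (-k * u - c * v) / m      # dv/dt from the net vertical force
    v += a * dt
    u += v * dt                   # du/dt = v

print(f"offset after 2 s: {abs(u) * 1e3:.3f} mm")  # decays toward equilibrium
```

Even this toy version reproduces the qualitative picture of the full simulation: an oscillation whose amplitude decays as the surrounding fluid damps the motion.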
Since the sphere moves through both air and water, a two-phase flow model is formulated on a spatial frame with time-dependent spatial coordinates to model both fluids. The spatial frame is the usual fixed global Euclidean system with spatial coordinates (x, y). These spatial coordinates, corresponding to the mesh nodes of the fluid domains, are functions of time and represent the current location of a point in space. This is achieved by solving the fluid flow problem on a Moving Mesh interface.
The mesh, on which the fluid flow problem is solved, deforms with a mesh deformation that is equal to the displacement of the sphere at the interfacing boundaries between the sphere and the fluid domains. Inside the fluid domains, the mesh deformation is computed using an appropriate smoothing method that is available in the Moving Mesh interface.
(3)   u_{mesh} = (0, u_{s})   on the boundaries of the sphere
On the fluid boundaries interfacing the sphere, the fluid velocity is given by
(4)   u_{fluid} = (0, v_{s})
We can set up the above problem as a 2D axisymmetric analysis. To model this problem, we couple a Two-Phase Flow interface with a Moving Mesh interface and Global Equations by defining the coupling expressions illustrated in Eqs. 1–4.
Geometry of the 2D axisymmetric model showing the initial position of the water surface represented by the horizontal line. The sphere is represented as a boundary in the model domain because we are modeling it as a rigid object using ODEs.
The fluid flow in this case is laminar, in both the air and water phases, so we can use the Laminar Two-Phase Flow, Phase Field interface. (Read this blog post on how to choose a multiphase flow interface for more information.) This multiphysics interface automatically defines a coupling between the laminar flow and the phase field method, which is used to track the detailed shape of the water surface.
The Laminar Two-Phase Flow, Phase Field interface enables us to model the boundaries of the sphere as moving wetted walls. The Wetted Wall boundary condition of the Phase Field interface lets us specify the contact angle between the water surface and the solid walls, while the Wall Movement setting in the Wall boundary condition of the Laminar Flow interface lets us specify the velocity of the moving solid walls. An Outlet boundary condition is used for the boundary corresponding to the top of the beaker.
The structure of the model setup and the various interfaces — namely Laminar Flow, Phase Field, and Moving Mesh — is shown in the screenshot below:
In the Phase Field interface, we specify the value of the phase field variable as Φ = −1 (air, Fluid 1) and Φ = 1 (water, Fluid 2) for the initial configuration, along with the boundary representing the initial position of the water surface, with the air above and the water below the horizontal line. The initial configuration is shown in the figure above.
We use a Wetted Wall boundary condition to specify the contact angle between the air–water interface and the solid walls of the beaker and sphere. For simplicity, we assume the same contact angle for both the sphere and the walls of the beaker.
The velocity field in the fluids is automatically coupled to the phase field method to account for advection. The properties of the fluids and surface tension coefficient are specified in the multiphysics coupling between the Laminar Flow and Phase Field interfaces.
The Moving Mesh interface solves for the mesh deformation within the fluid domains due to movement of the sphere. We also use boundary conditions to describe how the mesh at the fluid boundaries should deform. The important boundary condition here is Mesh Displacement, which is equal to the sphere’s displacement on the spherical boundaries (Eq. 3).
The movement of the sphere is purely due to the gravitational load. However, the fluid exerts a drag force as defined by Eq. 1. The z-component of this force can be computed by integrating the built-in variable spf.T_stressz, which gives the total stress in the z direction exerted on the sphere boundary by the surrounding fluid, over the sphere boundaries.
Learn more in this blog post on how to compute lift and drag.
The ODEs (Eq. 2) are solved using the Global Equations interface. The screenshot below shows one of the equations implemented in the Global Equations interface, where vs is the velocity of the sphere as noted in Eq. 2, vst is the time derivative of velocity (or the acceleration), and intop1 is an integration operator defined over the solid boundaries.
Settings of the Global Equations interface used to solve for the velocity of the sphere.
Since there is a large movement of the sphere, the mesh deformation will be large. To help keep the element quality high and avoid inverted elements, we can use the Automatic Remeshing feature, which stops a transient simulation based on a mesh quality metric and remeshes the current deformed shape before it continues.
The plot below shows the sphere falling on the water surface, the water–air interface (Φ = 0.5), and a cross-sectional plot of the fluid velocity magnitude.
Animation of the movement of the buoyant sphere in the water-filled beaker, the water surface, and the fluid velocity magnitude plotted on a cross-sectional slice. To see the variation in velocity magnitude at later stages of the simulation, the maximum value of the color scale is capped at 0.1 m/s.
The figure below shows the plot of the displacement of the sphere versus time. As expected, its motion is damped by the fluid surrounding it and the displacement amplitude decreases exponentially with time.
In this blog post, we have demonstrated how to model the oscillating motion of a buoyant sphere. Here, the sphere and its motion are considered axisymmetric with no rotational motion. In general, should the situation require, the same strategy discussed here can be applied to model both rotation and translation. Modeling such a case would require solving for additional rotational motion of the rigid object using ODEs. It would be much easier to instead use the builtin functionality of the Solid Mechanics interface to automatically solve for the rigid body motions. This would also allow for modeling a larger system where the flow becomes turbulent and the motion can take place in all directions.
Access the file for the model featured in this blog post by clicking the button below:
To keep the model simple and take advantage of the symmetry of a balloon, we can build a 2D axisymmetric geometry that consists of only a rectangle and an ellipse, together with slightly larger duplicates to account for the rubber balloon. Our goal is to see what happens if we let the same amount of water into different-sized balloons. To do so, we can parameterize the geometry and use a scaling factor to change the initial size of the balloon, while the material thickness and the neck radius remain constant.
Geometry of two deflated water balloons of different sizes. The size is controlled by the stretching factor fact: fact = 1 (left) and fact = 2 (right).
The water balloon model makes use of new features in version 5.3a of COMSOL Multiphysics, including improved fluid-structure interaction (FSI) functionality and a realigned moving mesh.
As of COMSOL Multiphysics version 5.3a, FSI is modeled via a Multiphysics node. The node connects the physics from Fluid Mechanics and Structural Mechanics interfaces. In contrast to earlier versions of the software, where there was a separate Fluid-Structure Interaction interface, we can now use all of the available features from the two-way coupled physics.
The interfaces and the moving mesh after adding the FSI physics.
In this example, it is easy to take gravity into account: all we need to do is select the corresponding check box in the Laminar Flow interface settings. This activates gravity, which in turn affects the mechanical behavior through the hydrostatic pressure in the water. We can expect a noticeable effect of gravity on the results, and this effect will be more significant in the larger water balloon, because it contains more mass from the beginning.
On the mechanical side, the physics settings can likewise be set up quickly. We only have to define a suitable material model that describes the hyperelastic properties of the material of the balloon correctly. In the Application Library, the Inflation of a Spherical Rubber Balloon model contains a variety of hyperelastic materials. We can use the Ogden model here, because it reproduces the analytical solution most accurately.
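To get a feel for what the Ogden model predicts here, the classic thin-shell result gives the inflation pressure of a spherical balloon as a function of stretch. The moduli below are Ogden's well-known three-term rubber fit, and the geometry is an assumed one, not the values from the Application Library model:

```python
def ogden_balloon_pressure(lam, t0, R0, mu, alpha):
    """Inflation pressure of a thin-walled spherical Ogden balloon.

    Classic thin-shell result for an incompressible Ogden material:
    P = (2*t0/R0) * sum_p mu_p * (lam**(alpha_p - 3) - lam**(-2*alpha_p - 3)),
    where lam is the circumferential stretch.
    """
    s = sum(m * (lam ** (a - 3) - lam ** (-2 * a - 3))
            for m, a in zip(mu, alpha))
    return 2 * t0 / R0 * s

# Ogden's classic three-term fit for vulcanized rubber (moduli in Pa):
mu = [6.3e5, 1.2e3, -1.0e4]
alpha = [1.3, 5.0, -2.0]

t0, R0 = 2e-4, 0.02   # wall thickness and initial radius [m] (assumed)
for lam in (1.0, 1.5, 3.0):
    print(lam, ogden_balloon_pressure(lam, t0, R0, mu, alpha))
```

Scanning this function over the stretch reveals the well-known pressure maximum of rubber balloons, which is why the hardest part of blowing one up is the beginning.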
Interested in details about fitting measured data to different hyperelastic material models? Check out this previous blog post.
By the way, copying model interfaces between different models is now very simple. Since version 5.3a of the COMSOL® software, interfaces and components can be exchanged via the copypaste functionality — even between two running COMSOL Multiphysics® simulations! This means that we can efficiently insert material settings from another model into the water balloon model.
Parameters of the hyperelastic Ogden material model used for the balloon.
Another improvement in COMSOL Multiphysics version 5.3a is the new positioning of the Moving Mesh interface. It is now found at a more prominent position under Definitions. One advantage of the new structure is that it helps avoid accidental overlaps between deforming and nondeforming areas. For the water balloon model, this improvement means that we have only two tasks for the mesh: selecting the balloon’s interior water as a Deforming Domain and adding a Prescribed Normal Mesh Displacement on the symmetry axis (to avoid unwanted movement away from this axis due to numerical inaccuracies).
The final step before solving the water balloon model is to set the water flow timing. Turning the tap on and off quickly, with a defined amount of time in between, can be expressed by a rectangle function. This function is multiplied by the inlet velocity of 15 cm/s, creating a flow of about 1.4 l/min.
The water inlet velocity is controlled via a rectangle function.
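The inlet control can be sketched as a simple rectangle function of time; the switch times below are assumptions (the model itself uses a smoothed rectangle function). We can also back out the inlet radius implied by the quoted velocity and flow rate:

```python
import math

def inlet_velocity(t, v0=0.15, t_on=0.1, t_off=2.1):
    """Inlet velocity [m/s]: v0 while the tap is open, 0 otherwise.

    The switch times t_on and t_off are assumed values, and the actual
    model uses a smoothed rectangle function rather than this sharp one.
    """
    return v0 if t_on <= t <= t_off else 0.0

# Back out the inlet radius implied by 15 cm/s and ~1.4 l/min:
Q = 1.4e-3 / 60.0                    # 1.4 l/min in m^3/s
r = math.sqrt(Q / (math.pi * 0.15))  # from Q = v0 * pi * r^2
print(f"implied inlet radius ~ {r * 1e3:.1f} mm")
```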
We can carry out a parametric sweep study to compare the simulation results for three different initial balloon sizes. All three balloons are filled with the same amount of water because the inlet velocity and the filling time period are the same. By far, the largest stress occurs in the smallest balloon. This is as expected, because the small balloon has the smallest surface and the largest relative volume increase.
Von Mises stress distribution in the balloon material after inflation for three different initial sizes. (Note: These plots were created with the Cividis color table, a color table that is optimized for people with color vision deficiency, new as of COMSOL Multiphysics version 5.3a.)
These results call for some animations! If we take a look at the transient behavior of the inflation, we clearly see the influence of gravity on the largest balloon, which oscillates before the water injection even starts. There is no prestress in the balloon, so it sags a bit until the counterforce from the material is large enough to compensate for gravity.
Animation of the von Mises stress in the smallest water balloon during inflation.
Animation of the von Mises stress in the mediumsized water balloon during inflation.
Animation of the von Mises stress in the largest water balloon during inflation.
The FSI functionality in COMSOL Multiphysics version 5.3a includes useful enhancements and is more user friendly than in previous software versions. With surprisingly little effort, it’s possible to set up a complex FSI model and solve it in a short time.
I am very curious to see how you use these new features to master your modeling challenges!
Optogenetics is a neurological procedure in which light stimulates genetically modified photosensitive neurons to transmit electrical nerve impulses. Unlike with electrical probes, optical probes can target neurons precisely without damaging the surrounding brain tissue.
A close-up view of optogenetics. Image by S. Berry and taken from his COMSOL Conference 2017 Boston presentation.
How does optogenetics work? Neurons are genetically modified to express light-sensitive membrane proteins, making the brain susceptible to selective photoexcitation. When stimulated by light, these proteins transport ions into (or out of) the targeted neurons, which excites electrical responses that trigger nerve impulses in the brain.
As the functions of the brain are vast, so are the behaviors and treatments that optogenetics can facilitate. For instance, optogenetics can be used to treat neurological and psychiatric disorders like depression and schizophrenia — a significant improvement over controversial early treatments like lobotomies and electroconvulsive therapy (ECT).
Optogenetics can potentially be used to treat Parkinson’s disease, a degenerative brain condition that affects the central nervous system and causes tremors and shakes. Also, when testing the effect of optogenetics on laboratory mice, researchers found that it can treat heart arrhythmia and hearing loss.
Optogenetics could help treat this mouse’s heart arrhythmia.
In a more experimental application, researchers are looking into how optogenetics can help us understand the formation and consolidation of memories. Optogenetics could change the way we perceive and catalog past experiences. Think of the 2004 science fiction film Eternal Sunshine of the Spotless Mind: A couple experiences a tumultuous breakup and undergoes a procedure that erases all memories of their relationship, only to meet and start dating again as “strangers”.
A research team from MIT aimed to optimize the design of an optical probe using simulation. Their probe design includes a microlens with a liquid-liquid interface of oil and water. The lens must be able to provide both active focusing and beam steering in a single optical element in order to deliver light from the probe to the neurons. The researchers used the COMSOL Multiphysics® software to analyze the size of the liquid microlens as well as the angle and depth of its taper. They presented their findings at the COMSOL Conference 2017 Boston, where they won a Best Paper award.
A schematic of the microlens design. Image by S. Berry and taken from his COMSOL Conference 2017 Boston presentation.
The optical probe’s microlens can achieve both focusing and steering through the electrowetting effect, in which electrodes are used to focus the lens and direct the light to the targeted cell.
The researchers modeled two-phase flow in the microlens using the level set method in COMSOL Multiphysics. This method helped them determine the location of the fluid interface and define the fluid domain. These initial results were used to simulate the fluid motion in the lens over time.
For the first time-dependent simulation, the same voltage was applied to the left and right taper walls (V_{L} and V_{R}) to find the resulting contact angles. By comparing the contact angle to the applied voltage, the researchers were able to see which taper angles (α) and voltages (V) cause concave and convex liquid-liquid interface profiles. Concave profiles result in a negative optical power for the probe, while convex profiles lead to a positive optical power.
Equal voltage is applied to the left and right walls of the taper. Different voltages for a taper angle of 45° cause convex, concave, and flat liquid-liquid interface profiles. Image by S. Berry, S. Redmond, P. Robinson, T. Thorsen, M. Rothschild, and E.S. Boyden and taken from their COMSOL Conference 2017 Boston paper.
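The dependence of the contact angle on the applied voltage in electrowetting is commonly described by the classical Young–Lippmann equation. The sketch below illustrates that relation in general terms; it is not the MIT team's model, and every material parameter in it is an invented placeholder.

```python
import math

def contact_angle(V, theta0_deg, eps_r, d, gamma):
    """Young-Lippmann: cos(theta) = cos(theta0) + eps*V^2 / (2*gamma*d).

    V          -- applied voltage (V)
    theta0_deg -- contact angle at zero voltage (degrees)
    eps_r      -- relative permittivity of the dielectric layer
    d          -- dielectric layer thickness (m)
    gamma      -- oil/water interfacial tension (N/m)
    """
    eps0 = 8.854e-12  # vacuum permittivity (F/m)
    cos_theta = math.cos(math.radians(theta0_deg)) + eps_r * eps0 * V**2 / (2.0 * gamma * d)
    cos_theta = min(cos_theta, 1.0)  # crude saturation guard
    return math.degrees(math.acos(cos_theta))

# Illustrative parameters only -- not values from the study.
for V in (0, 20, 40):
    print(f"{V} V -> contact angle {contact_angle(V, 160, 3.0, 1e-6, 0.04):.1f} deg")
```

The key qualitative point the paper exploits is visible here: raising the voltage lowers the contact angle, which is what lets the electrodes switch the interface between concave and convex profiles.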
Next, the research team turned their attention to the focusing of the probe. They used simulation to analyze the lens size, liquids that make up the lens interface, and voltage through electrowetting. The results show that a steeper taper angle causes a smaller dynamic range for the operating voltage. For instance, for a 75° taper angle, the microlens only has a positive optical power between 34 and 40 V.
With a taper angle of 75° (red line), the microlens must operate within a range of 34 to 40 V to have a positive optical power. Image by S. Berry et al. and taken from their COMSOL Conference 2017 Boston paper. The data points are calculated from the simulations, while the curves are interpolated from the simulation results.
The third simulation dealt with focusing and steering combined. To happen simultaneously, these effects require the liquid microlens to be curved. With a curved shape, the lens can maintain a spherical profile while being shifted. To study the profile, a fixed voltage was applied to V_{L} and a voltage sweep was applied to V_{R}. The results highlight the importance of the taper cavity’s geometry as well as the ability to apply different voltages to the taper’s different walls.
Liquid-liquid interface profiles for microlenses with different taper angles and different voltages applied to each side of the taper. Image by S. Berry et al. and taken from their COMSOL Conference 2017 Boston paper.
As the taper angle neared 90°, the researchers observed that the liquid-liquid interface became flat and acted like a prism. If the sole goal was beam steering, this would be an ideal behavior — but optogenetics requires both beam steering and focusing.
To determine the operational space of the liquid microlens, the researchers compared the steering angle and focal length of the lens. Both of these values were calculated with the radius of curvature of the liquid-liquid interface profile found via the simulation, combined with the ray transfer matrix method.
Operational space for the optical probe with a 15° (left) and 45° (right) taper angle. Image by S. Berry et al. and taken from their COMSOL Conference 2017 Boston paper. The optical probe with a 15° taper angle gives a larger steering angle and a lower minimum focal length than the one with a 45° taper angle, which suggests that the probe with a 15° taper angle is the superior design. Unfortunately, the better design is also difficult to manufacture.
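The ray transfer (ABCD) matrix method mentioned above treats each optical element as a 2×2 matrix acting on a ray's height and angle. A minimal sketch of the idea follows; the refractive indices and radius of curvature are placeholder values, not numbers from the paper.

```python
import numpy as np

def refraction(R, n1, n2):
    """ABCD matrix for refraction at a spherical interface of radius R (m)."""
    return np.array([[1.0, 0.0],
                     [(n1 - n2) / (R * n2), n1 / n2]])

def free_space(d):
    """ABCD matrix for propagation over a distance d (m)."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

# Illustrative values only: water (n = 1.33) to oil (n = 1.50),
# interface radius of curvature 0.5 mm.
n_water, n_oil, R = 1.33, 1.50, 0.5e-3
M = refraction(R, n_water, n_oil)

# Effective focal length of the single refracting surface: f = -1/C.
f = -1.0 / M[1, 0]
print(f"focal length: {f*1e3:.2f} mm")

# Trace a ray entering parallel to the axis at height 0.1 mm and
# propagate it to the focal plane -- it should cross the axis there.
ray_in = np.array([0.1e-3, 0.0])   # [height (m), angle (rad)]
ray_out = free_space(f) @ M @ ray_in
print(f"height at focal plane: {ray_out[0]:.2e} m")
```

Once the radius of curvature of the liquid-liquid interface is extracted from the simulation, composing matrices like these gives the focal length and, with an off-axis shift, a steering angle.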
With these simulations, the research group learned about the behavior of the microlens when the geometry and operating conditions change. Using models and simulations, the team arrived at a decent design that was relatively easy to manufacture. They plan to use these analyses as a starting point for more complex research into optical probe microlenses, including moving beyond 2D models to perform 3D simulations of their optical probe designs. With an optimized probe for optogenetics, we could see advanced brain research and treatment in the near future.
Get more details in the full paper from the COMSOL Conference 2017 Boston: “Liquid Microlenses with Adjustable Focusing and Beam Steering For Single Cell Optogenetics”
Drilled shafts are deep foundation elements that are both cost effective and high performing. They can be used in a variety of soil strata with minimal noise and vibration issues. Due to these benefits, drilled shafts are used to support heavy structures around the world.
Schematic of a drilled shaft in use. Image by J. Asirvatham, A. E. Tejada-Martinez, and G. Mullins and taken from their COMSOL Conference 2017 Boston presentation.
Drilled shafts are simple structures, consisting of deep cylindrical holes drilled into soil or rock. Often, construction workers use a drilling fluid or slurry to maintain the hole’s stability during the excavation process. They do so by pumping an amount of slurry equal in volume to the removed soil into the excavated hole.
When the hole reaches a required depth, a reinforced steel (rebar) cage is added. The excavation hole is then filled with concrete via a long pump truck hose or tremie. This device pumps the concrete into the bottom of the hole, preventing it from mixing with the slurry.
The main stages of the drilled shaft creation process: Drilling the excavation hole (left), inserting the rebar cage (middle), and filling the hole with concrete (right). Images by J. Asirvatham, A. E. Tejada-Martinez, and G. Mullins and taken from their COMSOL Conference 2017 Boston presentation.
In an ideal situation, concrete effortlessly displaces the lighter slurry as it fills the excavation hole. However, this is not always the case in reality. Different rebar cage shapes and placements change how the concrete rises and can generate head differentials between the concrete inside and outside the cage. The kinematics of the concrete flowing into the excavation hole can also cause anomalies. For instance, a poor flow of concrete from the tremie can produce a drilled shaft of lower quality across the entire cross section and depth.
Left: Comparison of an idealized concrete flow to an actual concrete flow in situations involving a drilling fluid or slurry. Right: An anomaly caused by poor concrete flow performance. Images by J. Asirvatham, A. E. Tejada-Martinez, and G. Mullins and taken from their COMSOL Conference 2017 Boston paper and presentation, respectively.
When performing experimental tests to investigate these issues, a research team from the Department of Civil and Environmental Engineering at the University of South Florida noted that the flow pattern of concrete into drilled shaft excavation holes has a large influence on the properties of the hardened cast of the drilled shaft. In addition, they saw that the placement of the rebar cage could cause anomalies in the form of creases.
To achieve a concrete flow that properly fills a drilled shaft hole, the research team decided to simulate the drilled shaft concreting process. We take a look at this work below.
For their study, the team created a preliminary 2D axisymmetric drilled shaft model consisting of a rectangular element that is 7 feet deep with a 4-foot diameter. At the center of the model is a tremie pipe with a 10-inch diameter. Surrounding this pipe are rebars, which take the shape of vertical elements with gaps. The model can account for the rheological properties of the concrete, the potential structural blockages, and the two-phase flow of concrete and slurry.
The researchers used a level set method, included in a two-phase flow interface in COMSOL Multiphysics, to compute the motion of the interface between the concrete and slurry.
Drilled shaft model geometry. Image by J. Asirvatham, A. E. Tejada-Martinez, and G. Mullins and taken from their COMSOL Conference 2017 Boston paper.
With this model, the team simulated the flow patterns and volume fraction of concrete and slurry over a 4minute time period. They also calculated the head differential between the inside and outside of the rebar cage.
At the beginning of the time period, the concrete (depicted as red in the plots below) remains within the tremie, while the slurry (depicted as blue) fills the rest of the excavation hole. As time progresses, concrete begins to flow out of the tremie and move vertically up within the rebar cage. After developing the required head, the concrete starts to flow radially out of the rebar cage and spreads into the annular space. This continues as concrete fills more and more of the excavation hole, displacing the slurry. However, the concrete does not fill the space evenly, resulting in a head differential between the inside and outside of the rebar cage.
Non-Newtonian simulation results, showing the volume fraction of the concrete (red) and slurry (blue) at different times. Images by J. Asirvatham, A. E. Tejada-Martinez, and G. Mullins and taken from their COMSOL Conference 2017 Boston paper.
The non-Newtonian simulations calculate a head differential of 0.35 meters (14 inches), which is within the observed experimental range of 0.20 to 0.40 meters (8 to 16 inches). On the other hand, the Newtonian simulations show a difference of 0.90 meters (36 inches), which is higher than the observed range. Thus, the non-Newtonian flow model is more appropriate for this concrete flow simulation and demonstrates that simulation can successfully calculate the concrete head differential in a drilled shaft.
The researchers used their preliminary simulation results to predict how the concrete flow velocity and rebar spacing affect the concrete head differential between the inside and outside of the rebar cage. Here, the researchers found that the differential increases as the concrete velocity rises and when the clear spacing of the rebar is reduced.
The initial 2D axisymmetric model results are promising because the calculated flow patterns are consistent with the observations from the project site. In the future, the researchers want to examine how this process is influenced by the size of the drilled shaft and the design of the rebars. They plan on expanding their model to 3D to further study the concreting process.
These models can help to determine the ideal rheological properties and optimized workability of fresh concrete so that it can better fill a drilled shaft with a given size and rebar arrangement.
Before we do anything else, let’s pour some 90°C coffee into a vacuum flask and consider the material properties of the model.
Materials involved:
All material properties except for the foam filler can be pulled directly from the Material Library in the COMSOL Multiphysics® software. As always, when using COMSOL Multiphysics, you can add special material properties manually into the software. In the case of the foam in this example, you would enter the following values:
Tip: The modeling approaches mentioned here are both covered in the Natural Convection Cooling of a Vacuum Flask tutorial model. Please refer to the tutorial MPH-file and accompanying documentation to see exactly how to set up and solve this model, because we won’t go into detail in this blog post.
For a quick and simple model, you can describe the thermal dissipation using predefined heat transfer coefficients. This method helps determine how the coffee cools over time inside the vacuum flask. It’s simple because it doesn’t resolve the flow behavior of the air around the flask, and it’s useful because it still shows the cooling power over time.
Instead of computing heat transfer and flow velocity in the fluid domain, you would simply model the heat flux on the external boundary of the vacuum flask, defined from the heat transfer coefficient, the surface temperature, and the ambient temperature (25°C; a little warmer than standard room temperature):
q = h(T_{∞} − T)
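Under strong simplifications — a single lumped coffee temperature and a constant effective heat transfer coefficient — this flux expression leads to exponential cooling. A rough sketch of that lumped model follows; the mass, area, and coefficient values are chosen for illustration and are not the tutorial's actual values.

```python
import math

# Illustrative parameters -- not the tutorial model's actual values.
m, cp = 0.5, 4186.0        # coffee mass (kg) and specific heat (J/(kg*K))
h = 1.0                    # effective heat transfer coefficient (W/(m^2*K))
A = 0.047                  # external surface area of the flask (m^2)
T_inf, T0 = 25.0, 90.0     # ambient and initial temperatures (degC)

# Lumped capacitance: m*cp*dT/dt = -h*A*(T - T_inf)  =>  exponential decay.
tau = m * cp / (h * A)     # time constant (s)

def T(t_hours):
    """Coffee temperature (degC) after t_hours, assuming Newton cooling."""
    t = t_hours * 3600.0
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

for t in (0, 5, 10):
    print(f"after {t:2d} h: {T(t):.1f} degC")
```

With these placeholder numbers, the decay lands near the temperature range the tutorial reports after 10 hours, but the point of the sketch is only the form of the solution, not the specific values.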
There are many predefined cases where h is known with high accuracy. The Heat Transfer Module (an add-on to COMSOL Multiphysics) includes a library of heat transfer coefficients for easy access.
Another time-saver with this method is the fact that you can avoid predicting whether the flow is turbulent or laminar, because many correlations are valid for a wide range of flow regimes. As long as you use the appropriate h correlations, you can typically arrive at accurate results at a very low computational cost with this method.
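As an example of such a correlation, the Churchill–Chu expression gives the average Nusselt number for natural convection on a vertical surface over a very wide Rayleigh number range, from which h follows. A sketch with approximate air properties (rough textbook numbers, not the values used in the tutorial):

```python
def churchill_chu_vertical(Ra, Pr):
    """Average Nusselt number for natural convection on a vertical plate
    (Churchill & Chu correlation, valid across laminar and turbulent Ra)."""
    return (0.825 + 0.387 * Ra**(1/6)
            / (1 + (0.492 / Pr)**(9/16))**(8/27))**2

# Rough air properties at a ~50 degC film temperature (approximate values).
g, beta = 9.81, 1.0 / 323.0             # gravity (m/s^2), expansion coeff (1/K)
nu, alpha, k = 1.8e-5, 2.6e-5, 0.028    # viscosity, diffusivity (m^2/s), conductivity (W/(m*K))
Pr = nu / alpha

L = 0.2                  # flask height (m), illustrative
dT = 90.0 - 25.0         # surface-to-ambient temperature difference (K)
Ra = g * beta * dT * L**3 / (nu * alpha)

Nu = churchill_chu_vertical(Ra, Pr)
h = Nu * k / L
print(f"Ra = {Ra:.2e}, Nu = {Nu:.1f}, h = {h:.1f} W/(m^2*K)")
```

Note that the single Ra-dependent expression covers both flow regimes, which is exactly why such correlations let you skip the laminar-versus-turbulent question in the simple approach.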
What about the second approach? It’s worth considering how the cooling power is distributed on the flask surface as the coffee cools down. To do so, you need to include surrounding fluid flow in the model.
To get a more complete picture of what’s going on with our precious java (seriously, when can I drink it?), we could create a more detailed model of the convective airflow outside the vacuum flask.
Taking the second approach calls for using the Gravity feature available in the Single-Phase Flow interface with the Heat Transfer Module or the CFD Module, which allows you to include buoyancy forces in the model. Typically, you would first need to figure out whether the flow is laminar or turbulent before following this modeling approach. For the sake of brevity here, let’s skip ahead: we know from the tutorial model documentation that the flow is laminar in this case.
The detailed model shows that the warm flask drives vertical air currents along its walls. The currents eventually combine in a thermal plume above the flask and air in the surrounding area is pulled toward the flask, feeding into the vertical flow. (This flow is weak enough that there are no significant changes in dynamic pressure.)
The vortex that forms above the flask’s lid reduces the cooling in that region — something you can’t tell from the first method. In essence, the fluid flow model is better at describing local cooling power than the simple method with the approximated heat transfer coefficient.
So how long will the coffee stay warm in the vacuum flask? Many coffee drinkers like to stay within the range of 50–60°C (roughly 120–140°F), because it’s supposedly when the “coffee notes shine.” Both methods suggest that after 10 hours inside the flask, the coffee will be about 54°C, which is still within the enjoyable range. Of course, if we were to bring the flask outside in cooler temperatures than the assumed 25°C, the coffee would cool down quicker.
A plot of the coffee temperature over time for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.
Though both modeling approaches give very similar results in terms of the coffee temperature over time, it’s a different story when looking at the flask surface’s cooling power:
A plot of the heat transfer coefficient for the two modeling approaches. The blue line denotes the first approach and the green line denotes the second approach.
For fast and accurate results in the long run, you can combine the two approaches. After setting up the more detailed model, you can create and calibrate functions for heat transfer coefficients to use later, via the simpler approach for solving largescale and timedependent models.
We saw that there are two different ways to model the convective cooling of coffee inside a vacuum flask over time. The detailed approach is more computationally demanding, as it combines heat transfer and fluid flow, but it’s also more accurate in the sense that it accounts for local effects. By combining both methods, you can save time in the future.
Try it yourself by downloading the tutorial model from the online Application Gallery or within the Application Library inside the COMSOL Multiphysics software. If you have any questions about this model or the COMSOL Multiphysics software, please contact us.
You can save a lot of computational time when running a simulation if the impact of temperature variations on the flow field is negligible compared to the accuracy required for the solution. In such cases, we can compute the flow field in the first study step and then use it as an input for the heat transfer problem solved in the second study step, which is easy to do in COMSOL Multiphysics.
Instead of solving a two-way coupled problem (flow ↔ transport), we solve a simpler one-way coupled problem (flow → transport). The reduction in computational time and memory is even higher if the solution of the flow field can be reused several times; e.g., when a parametric study for different heat transfer conditions is carried out for the same flow field.
The one-way coupling approach can be applied for all types of fluid flow, including turbulent regimes and flow in porous media. It is also possible to apply this technique to any advected field, provided that the coupling is weak; e.g., for chemical species transport in dilute solutions.
The important criterion for the validity of the one-way coupling approach is that the influence of temperature variations on the flow field is much smaller than the accuracy required in the computation. We have to check that the variations in density and viscosity caused by temperature changes are small enough that their impact on the flow field falls within the accuracy limit of the analyses. It is recommended to use the mean flow temperature as the reference temperature for density and viscosity in the one-way coupled case.
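A quick order-of-magnitude version of this check is to compare the relative changes in density and viscosity over the expected temperature range against the accuracy target. A sketch for water between 20 and 40°C, using rough textbook property values:

```python
# Rough check of the one-way coupling criterion for water between 20 and 40 degC.
# Property values are approximate textbook numbers, used only for illustration.
rho_20, rho_40 = 998.2, 992.2        # density (kg/m^3)
mu_20, mu_40 = 1.002e-3, 0.653e-3    # dynamic viscosity (Pa*s)

drho = abs(rho_40 - rho_20) / rho_20
dmu = abs(mu_40 - mu_20) / mu_20

accuracy_target = 0.05  # say we accept 5% error in the flow field
print(f"relative density change:   {drho:.1%}")
print(f"relative viscosity change: {dmu:.1%}")
print("density variation within target:  ", drho < accuracy_target)
print("viscosity variation within target:", dmu < accuracy_target)
```

Here the density change is well under 1%, but the viscosity change is roughly a third of its value — which illustrates why evaluating the viscosity at the mean flow temperature, or narrowing the temperature range, can decide whether one-way coupling is acceptable.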
The best way to check the validity of a one-way coupled approach is to solve a test model and compare the results to a two-way coupled solution of the same problem. Pick a few sample points in the analysis where the fully coupled problem is computed and verify the simplified approach against the full solution. If these points fall within the required accuracy, we can use the simplified approach for the bulk of the computations. The samples need to be selected wisely, as the verification points must fall inside the simulation window of operation. Ideally, these points should be the extreme conditions, and all other computations should fall within the extreme points.
If it turns out that the one-way coupling is not a suitable simplification for a certain simulation task, using this technique can still be helpful. The approach of solving the decoupled problem first is a good option to get good initial guesses for the fully coupled problem for steady nonisothermal flows. There are cases where the flow field does not converge unless a decent initial guess is provided, which is what we can obtain with the approach discussed here.
Let’s try out the one-way coupling approach using the Cross-Flow Heat Exchanger tutorial model. This type of heat exchanger is found in lab-on-a-chip devices in biotechnology and microreactors, such as for micro-sized fuel cells.
The modeled part of the micro-sized heat exchanger.
The modeled system consists of two sets of channels, one hot and one cold, arranged in a cross-flow pattern with five channels in each set, as shown in the figure above. The model is reduced due to the symmetry of the heat exchanger.
If we check the study nodes of the model, we find two stationary study steps. In the first study step, only laminar flow (spf) is selected for solving, while in the second study step, heat transfer (ht) is selected together with the multiphysics coupling Nonisothermal Flow (nitf1). The flow field is solved in the first study step, and the result is automatically used in the second step because of the coupling provided by the Nonisothermal Flow multiphysics node. This study setup is preset and available as of COMSOL Multiphysics version 5.3; it is called Stationary, One-Way Coupled, NITF for stationary simulations and Time Dependent, One-Way Coupled, NITF for transient simulations.
We can compare the results of the one-way coupled approach with a two-way coupled version by adding a new study with a stationary, fully coupled study step. After computing both studies, it turns out that the results vary only slightly. The heat transfer coefficient, probably the most interesting result of the model, becomes 1547.8 W/(m^{2}K) for the two-way coupling and 1548.1 W/(m^{2}K) for the one-way coupling. The difference of less than 0.2‰ is probably a lot smaller than the numerical error in the two computations. Further, the computation time is halved — from about 3 minutes for the two-way coupled problem to less than 1.5 minutes for the one-way coupled problem.
A comprehensive comparison of the two approaches can be found in the slideshow presentation available with the model documentation.
Temperature results of the one-way coupling (left) and two-way coupling (right) stationary solutions.
If the transient behavior of the model is of interest, other study combinations are possible. For example, we can add two time-dependent study steps, where the flow is solved first, followed by a transient heat transfer study step (Time Dependent, One-Way Coupled, NITF). We can also create a study sequence with a stationary flow and a transient heat transfer study if the flow conditions (except temperature) do not change with time. The table below gives an overview of the different study combinations and their respective computation times for a simulation time of 10 seconds.
| Study Type | Computation Time (Seconds) |
|---|---|
| One-way coupled (stationary) | 77 |
| Two-way coupled (stationary) | 183 |
| One-way coupled (time dependent) | 571 |
| Two-way coupled (time dependent) | 806 |
| One-way coupled stationary flow and time-dependent heat transfer | 246 |
The computation times of different study approaches on an Intel® Core™ processor E5-1620 @ 3.70 GHz machine.
As expected, cases where the transient heat transfer is one-way coupled with stationary flow fields are computed in less time. The demonstration problem is obviously small with respect to the computational time, but the simplified approach discussed in this blog post becomes a more important option as a problem grows.
Intel and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries.
Adhesives ensure that electronic components stay bonded — a vital facet of the microelectronics industry. In flip chip packages, an underfill adhesive is used to bond the die to the substrate and protect the fragile solder connections. Standard industry practice is to first bond the die to the substrate and then dispense the underfill adhesive material at the die edge, having it wick between the die and substrate to fill the gap. This practice is known as capillary underfill.
Flip chip package. Image courtesy Veryst Engineering.
A point of focus in flip chip assembly, as with any manufacturing process, is lowering costs and increasing the rate of production. A potential technology that can help speed up flip chip assembly, as compared with capillary underfill, is nonconducting film (NCF). In this assembly approach, the underfill adhesive is applied to wafers before die preparation and thermocompression bonding. However, there are two main opposing challenges when using NCF:
Balancing these two requirements is difficult, as the bonding process only lasts a few seconds (with heating/cooling rates in the 100°C-per-second range) and the NCF materials exhibit complex chemorheology.
To study this and other aspects that influence underfill adhesives, Veryst Engineering, a COMSOL Certified Consultant, modeled an NCF during the thermocompression bonding process.
Chemorheology is the study of how the matter in a reacting system deforms and flows. There are two different yet interconnected aspects of chemorheology. The deformation and flow of matter is particularly important near the beginning of the curing process, as it’s affected by factors like heating rates and pressure. Integrally tied to these changes is the chemistry taking place, including the rate of reaction, mechanisms, kinetics, and ending of the chemical reactions. The viscosity and, to a smaller degree, density of the matter may change during the curing process. Thus, the chemorheology of a material has a strong impact on the time it takes to complete the bonding process.
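Cure kinetics of this kind are often represented by an autocatalytic (Kamal-type) rate law with an Arrhenius temperature dependence. The sketch below is a generic illustration of that approach, not Veryst's actual model — the rate constants, reaction orders, and temperature profile are all invented for illustration.

```python
import math

# Kamal-type autocatalytic cure model:  d(alpha)/dt = k(T) * alpha^m * (1 - alpha)^n
# with Arrhenius rate k(T) = A * exp(-Ea / (R * T)).
# All parameter values below are invented for illustration.
A, Ea, R = 1.0e6, 6.0e4, 8.314   # pre-exponential (1/s), activation energy (J/mol), gas constant
m, n = 0.3, 1.7                  # reaction orders

def dalpha_dt(alpha, T):
    k = A * math.exp(-Ea / (R * T))
    return k * max(alpha, 1e-4)**m * (1 - alpha)**n  # small seed avoids the alpha = 0 lock

# Hypothetical 3 s bonding cycle: ramp 25 -> 250 degC in 1 s, hold, then cool.
def temperature(t):
    if t < 1.0:
        return 298.15 + (523.15 - 298.15) * t
    elif t < 2.5:
        return 523.15
    return 523.15 - (523.15 - 298.15) * (t - 2.5) / 0.5

alpha, t, dt = 0.0, 0.0, 1e-4
while t < 3.0:
    alpha += dalpha_dt(alpha, temperature(t)) * dt  # forward Euler step
    t += dt
print(f"degree of cure after 3 s: {alpha:.2f}")
```

The qualitative behavior — almost no cure during the ramp, rapid autocatalytic cure at the hold temperature, and a stall as the material cools — is what makes the coupling between temperature profile and cure chemistry so sensitive in a seconds-long bonding cycle.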
To simulate an NCF undergoing thermocompression bonding, Veryst created a multiphysics model that involves:
In the thermocompression process, there are two important variables: the temperature and compression profiles, which the bonding tool applies to both the die and substrate. In the COMSOL Multiphysics® software, the Heat Transfer in Solids interface is used to track heating and cooling in the model. In addition, a Two-Phase Flow, Moving Mesh interface accounts for the compression of the die into the substrate and tracks the boundary of the underfill and air.
The compression of the flip chip package. Image courtesy Veryst Engineering.
The formation of the underfill fillet is simulated with a chemorheology model created using the Domain ODEs interface. This enabled Veryst to analyze the progression of the underfill adhesive cure as well as account for how the cure and temperature influence viscosity.
Let’s examine the simulation results for two different material and process combinations used in the thermocompression bonding simulation. In the first combination, shown below on the left, the shape of the underfill fillet is convex. This material and process combination is not preferable, as it can negatively affect the reliability of the electronic device. The convex shape leads to stress concentration at the edges of the underfill. As for the second material and process combination, shown below on the right, the underfill fillet is concave — a more desired shape for this type of assembly, since edge stresses are better distributed.
The fillet shape and degree of cure for two material/process combinations. Image courtesy Veryst Engineering.
Apart from the fillet shape, the degree of cure at the end of the process is also of concern. Achieving a higher degree of cure during the bonding process means less postcuring is needed to complete the cure.
The figure below illustrates the evolution of the flow and cure of the NCF material during a 3-second bonding process. The COMSOL Multiphysics model provides guidance on what cure rates to target to achieve the optimal cure profile for a given bonding process. The chemistry of the material can then be tailored to achieve the optimal cure profile.
The degree of cure over three seconds for the preferred material/process combination. Image courtesy Veryst Engineering.
Veryst’s simulations show that engineers have the ability to simulate fillet shape and degree of cure for different material/process combinations in the thermocompression bonding process. With this capability, we can optimize NCFs for microelectronic assembly processes, improving the throughput and cost of manufacturing electronic components.
Microgravity is the condition of “free fall” experienced by, for example, objects like satellites that “fall” toward Earth but never actually reach its surface. In this condition, gravity and weight do exist, but are not measurable on a scale. Some people refer to this as zero gravity.
Applying conditions of microgravity to certain systems and processes enables scientists to study them without accounting for effects like hydrostatic pressure and sedimentation. By investigating biological processes exposed to microgravity conditions, we can advance technologies associated with tissue engineering, stem cell research, vaccine development, and more.
One area where microgravity is proving helpful is cancer research. From previous studies, we know that microgravity exposure suppresses immune cell activity and changes genomic and proteomic expressions. As such, scientists are investigating whether these changes also influence cancer development. The goal is to find novel therapeutic targets for metastatic cancer cells by influencing their migration, and therefore their activity.
A research team from SUNY Polytechnic Institute and SpacePharma, Inc. joined forces to develop a culturing system to test how microgravity affects metastatic cancer cell migration. This system isolates gravity as an experimental variable, thereby determining its contribution to cellular function in normal gravity on Earth. Considering the behavior of the culturing system in Earth’s gravity will initially provide insight into how microgravity conditions can be used for labscale experiments. The simulation technique will eventually be translated and used for space flight experiments within a low Earth orbit (LEO).
The setup for performing cell culture experiments on a chip (left) and a CAD representation (right). Images by A. Dhall, T. Masiello, L. Butt, M. Strohmayer, M. Hemachandra, N. Tokranova, and J. Castracane and taken from their COMSOL Conference 2016 Boston poster.
Running these microgravity experiments in the normal gravity of Earth can be difficult and requires a robust system design. CFD simulation is one way to help understand this problem, augment a good design, and optimize operating and flow conditions.
First, let’s take a closer look at the cell culturing system, which exposes human cancer cells (contained in cell culture chambers) to microgravity conditions. To increase the number of cells during cell maintenance, the system supplies growth media via a media inlet. The system can also reduce the number of cells — and avoid overcrowding — by lifting the cells with trypsin and flushing them out. Another key element in this system is chemoattractants, which influence cell migration.
The initial design of the culturing system. Image by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston poster.
To perform the preliminary analyses of the culturing system, the research team used two interfaces:
When using the Single-Phase Flow interface, the team tested for backflow into the cell culture chamber when the outer channel is flushed with cell growth media. From their results, the researchers found that using either valves or nozzle-diffuser flow can help avoid backflow.
The potential backflow in a cell culture system that occurs due to flushing media through the outer channels of a culture unit. Image by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston presentation.
Simulation was also used to calculate the optimal flow rate range under the chosen operating conditions. In these studies, the researchers modified the culture chip system to contain three chambers, as shown below.
Modified culture chip system with three chambers. Image by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston presentation.
The results, shown below, indicate that when the flow from the inner chamber is less than or equal to the flow from the outer chambers, the cell growth media do not leak into the outer chambers. However, as the flow into the inner chamber increases, the media within the inner chamber spread outward, eventually leaking into the outer chambers via the third channel.
The optimal flow rate range for a threechamber chip. In these plots, the researchers varied the ratio of the input velocity in the inner chambers (V_{IC}) to input velocity in the outer chambers (V_{OC}) and visualized the resulting flow. Images by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston presentation.
In the image below, the flow in the outer chambers runs opposite to the flow in the inner chamber. The result is that the cell growth media leak through all of the channels. Using the information they learned about the leakage, the team can improve the design of the cell culture chip.
Leakage in the cell culture chip system when the flow of the cell growth media has equal and antiparallel input velocities. Image by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston paper.
The final simulations are of the diffusion of the chemoattractant along a gradient. The chemoattractant has an initial concentration of 0.04 mM and travels through a 0.6-mm migration channel. Simulating this migration shows that the researchers can establish a gradient at a practical timescale for cell migration experiments.
Diffusion of the chemoattractant over time. Image by A. Dhall et al. and taken from their COMSOL Conference 2016 Boston presentation.
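The timescale to establish such a gradient can be estimated from the classical 1D semi-infinite diffusion solution, C(x, t) = C_{0} erfc(x / (2√(Dt))). A sketch using the study's channel length and source concentration, with an assumed small-molecule diffusivity (a typical order-of-magnitude estimate, not a value from the paper):

```python
import math

C0 = 0.04    # source concentration (mM), from the study
L = 0.6e-3   # migration channel length (m), from the study
D = 5e-10    # assumed small-molecule diffusivity (m^2/s) -- illustrative only

def concentration(x, t):
    """Semi-infinite 1D diffusion from a constant source at x = 0."""
    return C0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

# Characteristic diffusion time across the channel: t ~ L^2 / D.
t_char = L**2 / D
print(f"characteristic time: {t_char/60:.1f} min")
for minutes in (5, 15, 30):
    c = concentration(L, minutes * 60.0)
    print(f"after {minutes:2d} min, far-end concentration: {c*1000:.2f} uM")
```

With the assumed diffusivity, the characteristic time L²/D comes out on the order of ten minutes, consistent with the paper's conclusion that a gradient can be established at a practical timescale for migration experiments.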
Designing a functional culturing system is the key to successfully studying cancer cell migration in microgravity conditions, thus identifying new therapeutic targets for metastatic cell behavior. In the future, the research team plans to enhance their study by looking into how the cell growth media and chemoattractant interact.
Consider a wing moving in air, say at a constant speed u. The disturbances created by the wing propagate as pressure waves in air (here, sound waves), so these disturbances propagate at the local speed of sound, a. If the speed of the wing is smaller than the speed of sound, u < a, the disturbances propagate faster than the wing itself. Therefore, the fluid upstream is influenced by the presence of the wing before the wing reaches that location. As the speed of the wing is below the speed of sound, this is known as subsonic flow. This phenomenon can also be represented in terms of the Mach number, Ma = u/a < 1.
When the speed of the wing approaches a fraction of the speed of sound — at around Ma = 0.3 — compressible effects become significant. When the speed is greater than the local speed of sound (i.e., supersonic, Ma > 1), the fluid upstream is no longer influenced by the wing before it reaches that location, since the pressure waves have not propagated there yet, as shown in the image below (Mach cone in red). The disturbance propagation gives us qualitative insight into the demarcation between the subsonic and supersonic flow regimes.
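The Mach cone geometry sketched above can be quantified: the disturbances trail a supersonic object inside a cone of half-angle μ = arcsin(1/Ma). Here is a minimal Python sketch (the function name is my own, not from the text):

```python
import math

def mach_angle_deg(mach: float) -> float:
    """Half-angle (in degrees) of the Mach cone trailing a supersonic object.

    The pressure disturbances pile up along a cone of half-angle
    mu = arcsin(1/Ma); the cone only exists for Ma >= 1.
    """
    if mach < 1.0:
        raise ValueError("Mach cone only exists for supersonic flow (Ma >= 1)")
    return math.degrees(math.asin(1.0 / mach))

# At Ma = 2.0 the cone half-angle is 30 degrees; faster objects
# trail narrower cones.
print(round(mach_angle_deg(2.0), 3))  # 30.0
```

Note that as Ma grows, the cone tightens around the flight path, which is why the upstream fluid in a supersonic flow receives no warning of the approaching object.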
The disturbance propagation for subsonic (left), sonic (middle), and supersonic (right) wing speeds, where the circles represent the pressure/sound waves. The arrow represents the direction of movement of the object.
To quantitatively resolve the fluid flow, we first determine whether the flow will be in the laminar flow regime or if it will have transitioned into a turbulent flow regime. The flow regime is based on the Reynolds number, Re = ρuL/μ. Here, ρ, μ, and u are the fluid density, dynamic viscosity, and speed, respectively. The characteristic length (the chord length in the case of a wing) is denoted by L. We then solve either the Navier-Stokes equations with continuity and constitutive relations for laminar flow or the Reynolds-averaged Navier-Stokes equations with continuity and constitutive relations for turbulent flow. Depending on the problem, a suitable turbulence model can be used to resolve the turbulence.
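As a quick illustration of this flow regime check, consider sea-level air past a wing chord. The property values and flow speed below are my own illustrative assumptions, not taken from the text:

```python
def reynolds_number(rho: float, u: float, L: float, mu: float) -> float:
    """Reynolds number Re = rho*u*L/mu for a characteristic length L."""
    return rho * u * L / mu

# Illustrative values (my assumptions): sea-level air
# (rho = 1.225 kg/m^3, mu = 1.81e-5 Pa*s) past a 1 m chord at 100 m/s.
Re = reynolds_number(rho=1.225, u=100.0, L=1.0, mu=1.81e-5)
print(f"Re = {Re:.2e}")  # ~6.8e6, well past typical transition thresholds
```

With a Reynolds number in the millions, the flow would be solved with the Reynolds-averaged equations and a turbulence model rather than the laminar Navier-Stokes equations.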
If the flow becomes highly compressible, as is the case for supersonic flows, the energy equation has to be solved in addition to the mass and momentum conservation equations mentioned earlier, whether the flow is in the laminar or turbulent flow regime. The viscous effects in supersonic flows are usually negligible, except in regions with sharp gradients, such as shocks or boundary layers. If the viscous effects in the regions of interest are negligible, then removing the viscous terms from the Navier-Stokes equations yields the Euler equations. These inviscid flow equations can be solved analytically for simple geometries.
The supersonic flow field around a diamond airfoil, showing the different regions in the flow field.
For simplicity, consider the benchmark case of supersonic flow past the cross section of a wing with a diamond airfoil. This topic has already been investigated by COMSOL Multiphysics® users in the paper “Numerical Study of Navier-Stokes Equations in Supersonic Flow over a Double Wedge Airfoil using Adaptive Grids”. As mentioned earlier, the fluid upstream (zone 0 in the figure above) is not influenced by the disturbances caused by the airfoil. So when the fluid approaches the airfoil, it faces an abrupt decrease in flow area (similar to a concave corner) and has to suddenly change its direction to match the boundary conditions in zone 1.
This process can only occur if there is a discontinuity in the flow. This discontinuity is known as a shock wave and is very thin: its thickness is on the order of the mean free path of the gas molecules, which is around 50 nm in air at atmospheric pressure. When a shock wave is inclined to the flow direction, due to the shape of the object and the flow’s Mach number, it is called an oblique shock wave. Across a shock, the static pressure, temperature, and density increase, while the Mach number and total pressure decrease.
As mentioned above, when we assume inviscid flow (usually a valid assumption for estimating flow properties in supersonic flows), the shock angle (σ) of a weak shock can be determined from the inclination of the object, which here is the half angle of the diamond airfoil (δ), and the upstream Mach number (Ma_{0}):

$$\tan\delta = 2\cot\sigma\,\frac{Ma_0^2\sin^2\sigma - 1}{Ma_0^2\left(\gamma + \cos 2\sigma\right) + 2}$$
For a fluid with specific heat ratio γ, the Mach number after the oblique shock in zone 1 can be estimated from:

$$Ma_1 = \frac{1}{\sin(\sigma - \delta)}\sqrt{\frac{1 + \frac{\gamma - 1}{2}\,Ma_0^2\sin^2\sigma}{\gamma\,Ma_0^2\sin^2\sigma - \frac{\gamma - 1}{2}}}$$
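The weak-branch shock angle has no closed-form solution for σ, so it is typically found numerically. A hedged sketch in Python, assuming γ = 1.4 and using a simple scan-plus-bisection root finder (all names are mine):

```python
import math

GAMMA = 1.4  # specific heat ratio of air (assumption)

def deflection_angle(sigma: float, ma0: float, gamma: float = GAMMA) -> float:
    """Flow deflection delta produced by an oblique shock of angle sigma
    (all angles in radians)."""
    return math.atan(2.0 / math.tan(sigma)
                     * (ma0 ** 2 * math.sin(sigma) ** 2 - 1.0)
                     / (ma0 ** 2 * (gamma + math.cos(2.0 * sigma)) + 2.0))

def weak_shock_angle(delta: float, ma0: float, gamma: float = GAMMA) -> float:
    """Weak-branch shock angle sigma for deflection delta, found by scanning
    up from the Mach angle and bisecting the first crossing."""
    hi = math.asin(1.0 / ma0) + 1e-9  # a shock is never shallower than the Mach angle
    step = 1e-3
    while deflection_angle(hi, ma0, gamma) < delta:
        hi += step
        if hi >= math.pi / 2.0:
            raise ValueError("deflection too large: the shock detaches")
    lo = hi - step
    for _ in range(60):  # bisect between the last two scan points
        mid = 0.5 * (lo + hi)
        if deflection_angle(mid, ma0, gamma) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mach_after_oblique_shock(ma0: float, delta: float, gamma: float = GAMMA) -> float:
    """Mach number behind a weak oblique shock with deflection delta (radians)."""
    sigma = weak_shock_angle(delta, ma0, gamma)
    mn0 = ma0 * math.sin(sigma)  # normal component ahead of the shock
    mn1 = math.sqrt(((gamma - 1.0) * mn0 ** 2 + 2.0)
                    / (2.0 * gamma * mn0 ** 2 - (gamma - 1.0)))
    return mn1 / math.sin(sigma - delta)

# Benchmark conditions: Ma0 = 2.0 and a 15 degree half angle give a shock
# angle of about 45.3 degrees and a downstream Mach number of about 1.45.
delta = math.radians(15.0)
print(round(math.degrees(weak_shock_angle(delta, 2.0)), 2))  # 45.34
print(round(mach_after_oblique_shock(2.0, delta), 2))        # 1.45
```

The `ValueError` branch corresponds to the physical detachment of the shock discussed later in the post: beyond the maximum deflection angle, no attached oblique shock solution exists.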
After the shock along the diamond airfoil, the flow encounters an expanding area around the top of the airfoil (similar to a convex corner). The change in the flow direction to match the boundary conditions is achieved through an expansion fan, also known as a Prandtl-Meyer expansion fan. Across an expansion fan, the static pressure, temperature, and density decrease, while the Mach number increases. Again, by assuming inviscid flow, we can estimate the Mach number after the expansion fan based on Ma_{1} and the turning angle (θ) from:

$$\theta = \nu(Ma_2) - \nu(Ma_1), \qquad \nu(Ma) = \sqrt{\frac{\gamma + 1}{\gamma - 1}}\,\arctan\sqrt{\frac{\gamma - 1}{\gamma + 1}\left(Ma^2 - 1\right)} - \arctan\sqrt{Ma^2 - 1}$$
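Since the Prandtl-Meyer function ν(Ma) increases monotonically with Ma, it can be inverted numerically for Ma_{2}. A sketch under the same γ = 1.4 assumption (function names are mine):

```python
import math

GAMMA = 1.4  # specific heat ratio of air (assumption)

def prandtl_meyer(ma: float, gamma: float = GAMMA) -> float:
    """Prandtl-Meyer function nu(Ma) in radians; defined for Ma >= 1."""
    g = (gamma + 1.0) / (gamma - 1.0)
    m2 = ma ** 2 - 1.0
    return math.sqrt(g) * math.atan(math.sqrt(m2 / g)) - math.atan(math.sqrt(m2))

def mach_after_expansion(ma1: float, theta: float, gamma: float = GAMMA) -> float:
    """Mach number after turning through an expansion fan by theta radians,
    found by bisecting nu(Ma2) = nu(Ma1) + theta (nu is monotonic in Ma)."""
    target = prandtl_meyer(ma1, gamma) + theta
    lo, hi = ma1, 50.0  # the flow always speeds up across an expansion fan
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Benchmark estimate: starting from Ma1 of about 1.45 and turning through
# 2 x 15 = 30 degrees over the apex gives Ma2 of roughly 2.56, close to
# the theoretical value tabulated later in the post.
print(round(mach_after_expansion(1.45, math.radians(30.0)), 2))
```

Small differences from the tabulated theoretical value arise from rounding Ma_{1} before inverting ν.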
Based on the above equations, we can estimate the shock angle and the Mach number across a shock wave or expansion fan. Similar estimates for pressure, density, and temperature can also be computed. However, for more complicated geometries where the shock/expansion waves interact or for highly viscous fluids, it would be more appropriate to resolve the flow through the numerical solution of the conservation equations in their entirety. Let’s model this simple benchmark case of a diamond airfoil and compare the results with estimates from the above equations.
As mentioned earlier, the flow in this scenario becomes highly compressible at supersonic speeds, which leads to a strong coupling of all of the conservation equations. These equations now have to be solved together to resolve the shock waves and the expansion fans that occur in the flow field, which can be done using the High Mach Number Flow interface in the CFD Module. Let’s take a look at how to set up a model of supersonic flow past a diamond airfoil using the High Mach Number Flow interface.
The model configuration, including the boundary conditions.
A schematic of the model setup, along with the static pressure and temperature at the inlet, is shown above. Based on the inlet conditions and airfoil chord length, the Reynolds number is found to be greater than 3 x 10^{5}, indicating that the flow is in the turbulent regime. Therefore, the k-ε turbulence model is used here.
A hybrid flow condition is used for the outlet boundary condition. This condition allows the flow at that boundary to be either supersonic or subsonic; if the flow is subsonic, the static pressure specified in the condition’s settings is enforced. A no-slip condition is defined on the airfoil, and a combination of slip wall and outlet (hybrid flow) boundary conditions is used on the top and bottom walls to mimic the inviscid model equations that are solved analytically. In addition, since this is a highly nonlinear problem, we can achieve better convergence by specifying initial conditions close to the final solution, as discussed in a previous blog post on nonlinear static finite element problems.
As mentioned earlier, a shock wave is a form of discontinuity in the flow field. Therefore, a very fine mesh must be used at and around the pressure wave in order to numerically resolve this region. The spatial location of the shocks or expansion fans is not known ahead of time. In addition, we might be interested in computing the flow field for varying angles of attack and different inlet conditions, which would result in different locations of shocks in the domain. Therefore, if a uniform mesh is used for such problems, it would need to have a very high mesh density everywhere, significantly increasing the computational cost. However, in the COMSOL Multiphysics® software, we can take advantage of adaptive mesh refinement (an option available in the solvers) to dynamically adjust the mesh refinement in the domain, such that a fine mesh is created near the discontinuities and a comparatively coarser mesh is used in the remaining domain. Here, the conditions for adaptive mesh refinement are defined under the settings for the Stationary study step.
The results from the study and those from the expressions given in the previous section are listed in the table below for comparison. The results from COMSOL Multiphysics are evaluated in the midsection of the zones along the streamline, shown in the streamline plot below. The simulation results and theoretical estimates are within 3% of each other, as indicated by the tabulated results. It is also important to note that any viscous effects present in the problem are taken into account by the numerical model, making it more physically accurate than the inviscid theory on which the analytical estimates are based.
| Zone Number | Mach Number (Theory) | Mach Number (Model) | Static Pressure (atm) (Theory) | Static Pressure (atm) (Model) | Static Temperature (K) (Theory) | Static Temperature (K) (Model) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 2.0 | 2.0 | 1.0 | 1.0 | 300 | 300 |
| 1 | 1.45 | 1.44 | 2.22 | 2.21 | 378 | 381 |
| 2 | 2.55 | 2.53 | 0.40 | 0.41 | 231 | 237 |
A contour plot of the static pressure (in Pa) along with the streamlines (in blue) for zero angle of attack, an inlet Mach number of 2.0, pressure of 1 atm, and temperature of 300 K.
A comparison of the shock angle is qualitatively shown in the Mach number surface plot below. At the location of the shock, a sudden change in the Mach number is observed. In the following plot, a black line has been added to indicate where the shock would be located based on the theoretical estimates. There is good agreement between the theoretical estimates and the simulation results.
A surface plot of the Mach number. The black lines indicate the location of the shock based on expressions for shock angle, zero angle of attack, an inlet Mach number of 2.0, pressure of 1 atm, and temperature of 300 K.
Below, we can see the temperature surface plots for varying angles of attack. In addition to the temperature changes on the top and bottom sides of the airfoil, the change in the shock angle with the change in angle of attack can be clearly observed in these plots. At a 10° angle of attack, the angle of inclination at the bottom becomes 25° (as the half angle is 15°). The analytical theory predicts detachment of the shock, which is also observed in the numerical analysis (shown in the figure below on the right).
A surface plot of the temperature (in K) for varying angles of attack (α) for an inlet Mach number of 2.0, pressure of 1 atm, and temperature of 300 K.
In this blog post, we have discussed supersonic flow characteristics such as shocks and expansion fans. With an example of supersonic flow over a diamond airfoil, we have shown how these characteristics can be resolved in COMSOL Multiphysics using the High Mach Number Flow interface. The shock angle and flow properties on the airfoil from our simulations are in good agreement with analytical estimates from inviscid compressible flow theory.
For more information on modeling supersonic or transonic flows, check out these example models from the Application Gallery: