Humans have used drying as a method for preserving food since ancient times. Since then, the drying process has expanded from open-air drying or sun drying to other drying techniques, such as solar drying, freeze drying, and vacuum drying. Drying is also a key process in many other application areas, from the pharmaceutical industry to plastics.
Today, we’ll focus on the chemical process of vacuum drying, which is particularly useful when drying heat-sensitive materials such as food and pharmaceutical drugs. Vacuum dryers, commonly called vacuum ovens in the pharmaceutical industry, also offer other benefits. Because they require lower temperatures to operate, vacuum dryers use less energy and therefore reduce costs. They also enable solvent recovery and help avoid oxidation.
A rotary vacuum dryer. Image by Matylda Sęk — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Vacuum dryers remove water and organic solvents from a wet powder. The dryer operates by reducing the pressure around a liquid in a vacuum, thereby decreasing the liquid’s boiling point and increasing the evaporation rate. As a result, the liquid dries at a quicker rate — another major benefit of this process.
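The pressure dependence of the boiling point can be illustrated with the Antoine equation. Below is a minimal sketch for water (coefficients valid for roughly 1–100°C; the solvent in an actual dryer model would use its own coefficients):

```python
from math import log10

# Antoine coefficients for water: log10(P[mmHg]) = A - B/(C + T[degC]),
# valid for roughly 1-100 degC
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(p_mmhg):
    """Temperature (degC) at which water's vapor pressure equals p_mmhg."""
    return B / (A - log10(p_mmhg)) - C

bp_atm = boiling_point_c(760.0)   # ~100 degC at atmospheric pressure
bp_vac = boiling_point_c(100.0)   # ~52 degC under moderate vacuum
```

Lowering the head-space pressure from 760 mmHg to 100 mmHg drops water's boiling point by almost 50°C, which is exactly why vacuum drying suits heat-sensitive products.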
For vacuum drying to be effective, we need to decrease drying times without harming the products, which means that we need to maintain a strict control of the operating conditions. To balance these goals and to understand how operating conditions influence the product, you can use the multiphysics modeling capabilities of COMSOL Multiphysics.
Today, we’ll analyze the vacuum drying process of a Nutsche filter-dryer model. The dryer works by heating a wet cake from the bottom and the side walls of a container and by decreasing the pressure in the gas phase on the top of the cake. This example is based on a paper published by Murru et al. (Ref. 1 in the model documentation).
Let’s start by taking a closer look at our model. The vacuum dryer consists of a cylindrical drum filled with wet cake, which comprises three different phases: solid powder particulates, a liquid solvent, and a gas. As such, the cake’s material properties need to include the properties of all three individual phases, which vary depending on the proportion of each phase in the cake. The proportion of each phase is determined by the volume fraction, which is one of our modeled variables.
The cake is modeled in a 2D axisymmetric component as a rectangle representing a cylinder with a radius of 40 cm and a height of 10 cm. At the top, the cake is exposed to a low-pressure head space. Meanwhile, heat flux boundary conditions at the filter-dryer’s side and bottom boundaries account for a 60°C heating fluid.
The vacuum drying process in an axisymmetric Nutsche filter dryer.
Moving on, our tutorial combines evaporation and heat transfer modeling in order to study the cake’s liquid phase profiles and temperature. We calculate the cake’s solvent volume fraction with the Coefficient Form PDE interface and simulate heat transfer with the Heat Transfer in Solids interface. To solve the moisture transport in porous media, we use a predefined multiphysics interface in the Heat Transfer Module. We also include solvent evaporation by using both a heat sink and a mass sink term and approximate the solvent transport as a diffusion process.
Our model makes the following assumptions:
In these situations, we can use a step function to smoothly ramp both the evaporation rate and diffusion coefficient down to zero.
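As a sketch of that idea (COMSOL's built-in step function with smoothing plays this role in the model; the cutoff and width below are made-up numbers):

```python
def smoothstep(x, x0, width):
    """C1-continuous ramp from 0 to 1 centered at x0 over the given width,
    analogous to a smoothed step function."""
    t = (x - x0) / width + 0.5
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def effective_rate(base_rate, liquid_fraction, cutoff=0.01, width=0.01):
    """Ramp the evaporation rate (or diffusion coefficient) smoothly to zero
    as the liquid volume fraction approaches the cutoff."""
    return base_rate * smoothstep(liquid_fraction, cutoff, width)
```

Ramping rather than switching off abruptly keeps the source terms differentiable, which helps the nonlinear solver converge as the solvent is depleted.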
We see that our simulation results are as predicted. Let’s start by examining our analysis of the cake after 30 hours have passed. As seen below, the cake’s temperature is close to that of the heating fluid (60°C) at both the side and bottom boundaries, and the liquid phase’s volume fraction is lowest near these heated boundaries and highest at the cake’s center. Additionally, the apparent moisture diffusivity is highest at the cake’s center and almost zero in places where the liquid phase has evaporated. Considering our model’s assumptions, these results are all expected.
The cake’s temperature (left), volume fraction of the liquid phase (middle), and apparent moisture diffusivity (right) after 30 hours.
Switching gears, let’s expand our timescale to look at the evaporation rate after 10, 20, and 30 hours. This study also yields expected results, since it shows evaporation beginning at the heated walls and decreasing when the amount of solvent at these boundaries lessens. During this process, the evaporation front shifts toward the cake’s center.
The evaporation rate after 10 (left), 20 (middle), and 30 (right) hours.
The quantitative results generated by our simulation study are in good agreement with previous research, confirming their validity. As such, we can use this model to accurately predict how dry a product is as a function of time. Using this information, we can minimize the amount of time that a product is exposed to elevated temperatures. Additionally, we can change the dryer’s size if we want to reduce the drying time when working with heatsensitive products. Through multiphysics simulation, we can design more efficient and effective vacuum dryers for use in a variety of industries.
Industrial mixers are a key element in many fields, from the pharmaceutical and food industries to consumer products and plastics. Further, the purpose of mixers can vary greatly. Mixers are not only used to combine elements and create homogeneous mixtures, but also to reduce the size of particles and drive chemical reactions.
An industrial mixer. Image by Erikoinentunnus — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Mixers are required for efficient and timely production as well as for producing a uniform product quality within a batch and between batches. In some cases, mixers are required for the safe operation of systems, for example, in exothermic reactions that may create hot spots and runaway reactions (explosions) under poor mixing. With modeling, we can run inexpensive and streamlined experiments with different mixer designs in order to optimize the mixing process, avoid poor product quality, and meet safety requirements.
To resolve these issues, you can turn to COMSOL Multiphysics, which provides you with the tools for testing a wide assortment of mixers. In the next section, we’ll discuss three different mixer design examples that speak to the versatility of COMSOL Multiphysics.
A typical batch mixer generally consists of two main components: a vessel and an impeller, both of which can vary in type and shape. Baffles can also be added to the device to improve the mixing by suppressing the formation of the main vortex in the bulk.
The importance of the baffles depends on the type of impeller. Radial impellers, for instance, require baffles to work; otherwise, the solution simply rotates like a merry-go-round and little mixing is achieved. With baffles, vertical mixing is created as the radially discharged solution hits the walls of the vessel. Axial impellers, on the other hand, create a vertical mixing flow at the impeller, which means they do not require baffles to achieve mixing. However, axial impellers also have a radial component, so baffles can be used to increase radial mixing, if desired.
Let’s take a look at a mixer’s vessel, shown below, which is typically modeled as a vertical cylinder with either a dished or a flat bottom.
Side views of a flat-bottom mixer (above) and a dished-bottom mixer (below).
Within the vessel, the fluid is mixed by a rotating impeller. The rotation and design of the impeller determine the axial and radial directions in which the liquid is discharged. As such, impellers come in many different designs, enabling them to be used for a variety of different industrial purposes. Here, we will investigate a six-blade Rushton disc turbine, which is a radial impeller used for high-shear mixing, and a more general-purpose pitched-blade impeller, which is an axial impeller.
A Rushton disc turbine with six blades (left) and a pitched-blade impeller with four blades (right).
By combining these two common types of vessels with two types of impellers, we create two separate geometries (shown below) and three separate studies. All three studies use the Frozen Rotor study type and the Rotating Machinery, Fluid Flow interface.
The first study involves the laminar mixing of silicone oil in a baffled flat-bottom mixer that contains a Rushton turbine with six blades rotating at 40 rps. While we focus on the highest of three rotation rates in this example, you can easily adjust the rotation to simulate the slower rotation rates. This first example is based on a PhD thesis by M.J. Rice entitled High Resolution Simulation of Laminar and Transitional Flows in a Mixing Vessel (see Ref. 1 in the model documentation) and includes comparisons from the PhD thesis Study of Viscous and Viscoelastic Flows with Reference to Laminar Stirred Vessels by J. Hall (see Ref. 2 in the model documentation).
Two mixer geometries, one combining a baffled flat-bottom mixer and a Rushton turbine (left) and one with a baffled dished-bottom mixer and a four-blade pitched impeller (right).
Moving on, our next two examples deal with the turbulent mixing of water within a baffled dished-bottom mixer. This mixer contains a pitched four-blade impeller that rotates at 20 rpm. It’s possible to reduce the computational time required to solve these models by using periodicity and only simulating a quarter of the domain.
Our turbulent mixing examples enable you to explore how different models affect your results. Here, we compare a k-epsilon (k-ε) model, which has a quick convergence rate, to a k-omega (k-ω) model, which works better for flows with recirculation regions.
Let’s begin by looking at the velocity magnitude and in-plane velocity vectors for our three models. These results provide a general view of the circulation patterns in the mixing vessels for all three of our examples.
For our first mixer model, the laminar mixing example, we can see that the fluid is discharged radially outward by the Rushton turbine, creating two zonal vortices. The resulting compartmentalization phenomenon, which is common for radial impellers, is also displayed in our simulation results. This leads to mixing between the top and bottom vortices, albeit less intense than the mixing inside each individual vortex.
The velocity magnitude (xz-plane) and in-plane velocity vectors (yz-plane) for the laminar mixing example.
On the other hand, the velocity magnitude and vector projection for the turbulent flow k-ε model indicate that the fluid is expelled axially and radially by the pitched-blade impeller. As a result, a large zonal vortex is generated from the top to the bottom of the vessel. Additionally, a small zonal vortex appears below the impeller, where heavy dispersed particles can aggregate.
The velocity magnitude (xz-plane) and in-plane velocity vectors (yz-plane) for the k-ε turbulence model example.
The third study reveals that the turbulent flow k-ω model also produces a large zonal vortex, similar to the k-ε example. This time, however, the vortex core is more vertically stretched. For its part, the smaller zonal vortex located beneath the impeller is stretched in the radial direction. Another difference lies in the torque and power draw values, both of which are higher than in the k-ε model. While the k-ω model is a good choice for these types of flows, we still need to determine whether its results are actually more accurate than those of the k-ε model. Comparing simulation results to experiments is, therefore, a necessary next step.
The velocity magnitude and in-plane velocity vectors for the k-ω turbulence model example.
Finally, our simulations reveal that all three examples generate good approximations for at least a few averaged flow quantities. Our results from the frozen rotor simulation for the laminar mixing study can easily be used as initial conditions for a new time-dependent study.
It’s easy to modify the mixer geometries presented here to fit a wide assortment of mixer designs and conditions. Simply change the parameters in the supplied model to alter the types of components and properties of the geometry. For further customization, you can also add your own subsequences into the mix. With this, you can create a customized model to fit your specific application.
For more information on how to improve your mixer simulations, check out the resources in the next section.
When modeling thin fractures within a 3D porous matrix, you can efficiently describe their pressure field by modeling them as 2D objects via the Fracture Flow interface. Significant fracture flux calculation issues, however, may arise for systems of practical interest, such as hydraulic fractures contained within unconventional reservoirs. See how a hybrid approach overcomes such difficulties.
To model an actual fracture as a 2D object using the Subsurface Flow Module, you first solve for the pressure field (through a tangential form of Darcy’s law) within the internal surface representing the fracture’s lateral extent. In principle, you can then calculate the corresponding fluid flux through the actual fracture cross section by multiplying the component of the velocity vector normal to an edge delimiting the 2D fracture object by the fracture’s thickness. This approach is much more computationally efficient, as a very thin but otherwise ample 3D object can now be described as a 2D object, one that only needs to be meshed as a surface.
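In a post-processing script, this 2D flux calculation amounts to a line integral scaled by the thickness. Here is a minimal numerical sketch with a made-up uniform edge velocity (in COMSOL itself this would be an edge integration coupling):

```python
import numpy as np

def fracture_outlet_flux(u_n, s, d_hf):
    """Flow rate through the fracture cross section for a 2D fracture model:
    trapezoidal integration of the edge-normal velocity u_n (m/s) over the
    outlet edge, sampled at arc-length positions s (m), multiplied by the
    fracture thickness d_hf (m)."""
    return d_hf * float(np.sum(0.5 * (u_n[1:] + u_n[:-1]) * np.diff(s)))

# Toy check: uniform u_n = 1e-3 m/s along a 0.2 m edge, thickness 1.27 cm
s = np.linspace(0.0, 0.2, 51)
u_n = np.full_like(s, 1e-3)
q = fracture_outlet_flux(u_n, s, 1.27e-2)  # = 1.27e-2 * 1e-3 * 0.2 m^3/s
```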
Say you have a 2D fracture object with the following characteristics:
In a system of interest such as this, significant fracture flux computational errors can occur. Let’s take a look at one such example.
Note: With the latest version of the COMSOL Multiphysics® software — version 5.2a — you can also model heat transfer in thin fractures. This is made possible via the Heat Transfer in Fractures interface.
The system shown below features a 3D penny-shaped hydraulic fracture embedded in a reservoir block and connected to a horizontal well. The inlet for this simplified system consists of the two reservoir boundaries shown in green at the top and the back of the block. The only outlet is through the narrow boundary where the fracture disc connects to the wellbore. Both the inlets and the outlet are set as pressure boundary conditions, with the values of ΔP and 0, respectively. The geometry only considers one quarter of the actual system, as it takes advantage of existing symmetry.
A 3D penny-shaped fracture (shown in blue) embedded in a reservoir block and hydraulically connected to a horizontal well (shown in red). The two reservoir inlet boundaries are highlighted in green.
Note that the dimensions of the above system are not representative of cases of practical interest. The dimensions are scaled down to allow for adequate 3D meshing of the discoidal fracture, which has a radius of 7.62 m (25 ft) and a thickness of 1.27 cm. (Properly meshing 3D fractures with radii of hundreds of feet, as encountered in field applications, would be quite computationally expensive.) The wellbore radius is 12.7 cm (5 in), while the reservoir block’s dimensions are approximately 8 m x 15 m x 15 m (25 ft x 50 ft x 50 ft). The entire mesh consists of 2,246,298 tetrahedral elements, 657,720 of which are used for the discoidal fracture domain alone. The minimum and average element quality values of the latter are 0.148 and 0.700, respectively, while the average quality for the entire mesh is 0.673.
Outlet boundaries for the 3D (shown in green) and 2D (shown in red) realizations of the actual hydraulic fracture of thickness d_{HF}.
Darcy’s law is used to solve for the pressure field p in incompressible, single-phase, and stationary flow parametric studies for various values of the drawdown ΔP. The fluid is a light liquid hydrocarbon with a dynamic viscosity value of 0.26 cP. The permeability of the reservoir matrix is taken as 1 mD, while that of the (propped) hydraulic fracture is 45.6 darcy.
The fracture flux calculation issue referenced above is depicted in the following figure. This figure shows the inlet and outlet flow rates as functions of the drawdown ΔP when the hydraulic fracture (HF) is described as either a 3D or 2D object. While the first three curves (for the inlet flow rates and the 3D outlet flow rate) overlap as expected, the outlet flow rate for the 2D case represents only a quarter of the inlet flow rate. The fluxes for the first three curves were calculated as integrals of the normal component of the fluid velocity vector, u·n, over the respective inlet and outlet surfaces. The outlet flow rate for the 2D fracture, meanwhile, was calculated as an integral of u·n along the outlet edge, multiplied by the fracture thickness d_{HF}: Q_{2D} = d_{HF} ∫ u·n dl.
The flux calculation issue remains regardless of the applied meshing and no matter how u·n is probed, with its integrand expressed as (dl.nx*dl.u + dl.ny*dl.v + dl.nz*dl.w), (sys1.e_n1*dl.u + sys1.e_n2*dl.v + sys1.e_n3*dl.w), dl.bndflux/dl.rho, or (root.nx*dl.u + root.ny*dl.v + root.nz*dl.w). The ‘dl.’ identifier stands for the applied interface (the Darcy’s Law interface); {nx, ny, nz} are the Cartesian components of the unit vector normal to the edge; and {u, v, w} are the Cartesian components of the fluid velocity vector u.
Inlet and outlet flow rates as functions of the drawdown ΔP when the actual HF is modeled as either a 3D or 2D object for the respective system.
Notice that when the hydraulic fracture is described as a 2D object, the discoidal fracture (3D) domain is omitted from the model and is considered instead only through its inner lateral boundary. Otherwise, the geometry and mesh are identical between the 2D and 3D descriptions. This simplification greatly reduces the size of the system and thus represents one of the most attractive elements of the Fracture Flow interface: it enables the modeling of much larger fracture surfaces with proper meshing. As such, it would be quite useful if there was a way to work around the 2D fracture outlet flux issue.
A hybrid approach, which combines a 2D description of the fracture away from the wellbore with a 3D one in its immediate vicinity, makes this possible. The figure below shows the meshed geometry of the hybrid implementation. The 3D component of the fracture is represented by the blue domain, while the 2D component is represented by a red surface, which depicts the boundary toward the porous matrix of the actual fracture. Note that the 3D part of the actual fracture that corresponds to the 2D component is excluded from the model.
Meshed geometry of the hybrid fracture implementation. The blue domain represents the 3D component of the fracture, while the red surface represents the 2D component. The latter is chosen as the inner lateral boundary of the actual fracture (toward the matrix).
In the hybrid approach, the pressure field continues to be properly accounted for at any point within the actual fracture, while the flux through the outlet boundary is computed without the shortcoming of the 2D description. The following table compares relevant quantities for the 3D, 2D, and hybrid realizations of the hydraulic fracture. These computations were performed using a direct solver on a machine with an Intel® Core™ i7-4770 processor and 32 GB RAM.
Hydraulic fracture | Degrees of freedom | Memory (GB) | Time per iteration (s) | Normalized inlet flow rate | Normalized outlet flow rate
------------------ | ------------------ | ----------- | ---------------------- | -------------------------- | ---------------------------
3D | 3,231,747 | 23.74 | 247.5 | 1 | 1.00026
2D | 2,354,490 | 15.98 | 153.5 | 0.99948 | 0.24992
Hybrid | 2,397,891 | 16.50 | 158.0 | 0.99941 | 0.99967
Comparison of relevant quantities for the 3D, 2D, and hybrid realizations.
The plot below shows, in logarithmic scale, the pressure profiles along a diagonal line within the YZ-plane containing the 2D fracture surface for a drawdown of 100 psi. This probe line is delimited by the outlet (wellbore) at the lower-right part of the surface and by the inlet of the reservoir block at the other end. The white line at the inset of the plot highlights the probe line. The surface color of the inset corresponds to the pressure value within the probed YZ-plane, and the guiding arrows help map important graphing points on it. The graph’s curves overlap for all three cases, indicating that the pressure field solution is practically identical among the three fracture descriptions: 3D, 2D, and hybrid.
Pressure profiles along a face diagonal line within the YZ-plane.
Flux calculation issues can occur with a solely 2D description of a fracture. As we’ve demonstrated here today, the proposed hybrid approach for describing an actual fracture provides a viable solution. As such, this technique can be applied to various systems of practical interest that feature a greater number of arbitrarily thin fractures.
Ionut Prodan is the principal of Boffin Solutions, LLC, a COMSOL Certified Consultant. Prior to his founding of Boffin Solutions, Ionut worked within upstream technology at Shell and Marathon Oil. He earned his doctorate in physics from Rice University, where he conducted research on the photoassociation of ultracold atoms and computational solidstate chemistry.
Intel and Intel Core are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
The Chemical Reaction Engineering, CFD, and Plasma modules all include different variations of the equations for the transport of chemical species in a concentrated solution, such as the Maxwell-Stefan equations and the mixture-averaged model. In a concentrated solution, the model equations have to account for the interactions between all species in the solution, while a model of a dilute solution only includes the interaction between the solute and the solvent. The schematic below illustrates the difference between these two descriptions.
Dilute solutions (left) and concentrated solutions (right). In a dilute solution, the interactions are dominated by solute-solvent and solvent-solvent interactions. In a concentrated solution, all species interact with one another.
Along with these interactions, the velocity field in a concentrated solution is defined as the sum of the fluxes of all species, i:

(1) u = (1/ρ) Σ_i n_i

where n_i denotes the flux of species i in kg/(m^{2}s) and ρ represents the density (kg/m^{3}). For a dilute solution, the velocity field is instead given by the velocity of the solvent:

(2) u = u_solvent
As we can see from the images above and Eq. (1), the transport of species and fluid flow are tightly coupled for concentrated solutions.
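Eq. (1) is easy to illustrate numerically. Here is a minimal sketch with made-up fluxes for a solvent and two solutes (all values hypothetical):

```python
def mixture_velocity(fluxes, density):
    """Mass-averaged velocity of the mixture, Eq. (1): u = (1/rho) * sum_i n_i,
    with n_i the species mass fluxes in kg/(m^2*s) and rho in kg/m^3."""
    return sum(fluxes) / density

# Hypothetical 1D fluxes: solvent plus two solutes, one moving upstream
n = [0.98, 0.15, -0.05]   # kg/(m^2*s)
rho = 1000.0              # kg/m^3
u = mixture_velocity(n, rho)  # 1.08e-3 m/s
```

Because every species flux contributes to u, any change in composition feeds back into the flow field, which is why the two sets of equations must be solved as a coupled system.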
In earlier versions of COMSOL Multiphysics, the Reacting Flow interface was a single multiphysics interface with its own domain settings and boundary conditions, specifically designed for coupling flow and chemical species transport and reactions. This approach was user-friendly in nature, as everything was predefined. However, some of the general flexibility in COMSOL Multiphysics was lost in this predefined physics interface. Say you wanted to make larger changes separately to the transport of concentrated species equations and the flow equations. To do so, you would have to define the problem by adding the two types of physics interfaces separately, instead of using the predefined multiphysics interface, and then manually construct the multiphysics coupling.
With the new Reacting Flow interface for a concentrated solution, you can handle this tight coupling while maintaining the ability to manipulate the transport equations and the fluid flow settings separately. The coupling itself is defined in the Multiphysics node. With such functionality, you can, for instance, change from laminar flow to turbulent flow or change the transport model from the Maxwell-Stefan equations to the mixture-averaged model.
Let’s see how this is manifested in the model tree and in the settings for the Multiphysics node. As the following screenshot shows, all of the usual nodes for the constituent physics interfaces can be modified while the coupling is predefined in the Multiphysics node. The predefined coupling controls the mass fluxes and, when summed over all species, satisfies the continuity equation for the flow. As such, the two sets of equations are fully coupled in a bidirectional way.
The model tree with the Reacting Flow multiphysics node selected. Here, we can select which physics interfaces to couple. We can also make changes to the flow model that allow us to include turbulent reacting flow. This is an additional flexibility, with preserved ease of use, as compared to previous multiphysics interfaces.
Another benefit of the new Reacting Flow multiphysics interface is found in the Study node. We have the ability to solve for the fluid flow equations to obtain a decent initial guess for the total flux. In a second step, we can solve only for the transport of species, with the velocity field given by the previous solution of the fluid flow equations.
Now we have a decent solution for the fluid flow and for the composition of the solution, which we can use as an initial guess for the fully coupled problem. Therefore, the last study step (Step 3) involves solving for the fluid flow and the transport of chemical species in a fully coupled scheme. Note though that the fully coupled scheme itself may also be sequential for a large number of species in 3D, but the loop over all species and the fluid flow is performed automatically.
The three study steps (1, 2, and 3) solve for the fluid flow, the transport of chemical species, and the fully coupled problem, respectively. The automatically generated solver configuration shows the intermediate steps that store the flow field, the concentration field, and the final step that then solves for the fully coupled problem using the stored solutions as an initial guess.
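The same initialization strategy can be mimicked with a toy coupled system. In the sketch below, the two "solvers" are simple stand-ins for the flow and transport solves, not the actual equations: solve the flow alone, solve the transport with the stored flow, then iterate the coupled problem from that warm start.

```python
def solve_flow(c):
    """Stand-in for a fluid flow solve with the composition field c held fixed."""
    return 1.0 / (1.0 + c)

def solve_transport(u):
    """Stand-in for a species transport solve with the velocity u held fixed."""
    return u / 2.0

# Step 1: fluid flow only, assuming an initial composition of zero
u = solve_flow(0.0)
# Step 2: species transport only, using the stored flow field
c = solve_transport(u)
# Step 3: fully coupled problem, warm-started from the stored solutions
for _ in range(60):
    u = solve_flow(c)
    c = solve_transport(u)
residual = abs(u - solve_flow(c)) + abs(c - solve_transport(u))
```

Because the coupled iteration starts from a consistent flow and composition, it converges quickly; starting the fully coupled problem from zero fields is typically far less robust.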
By utilizing the new Reacting Flow multiphysics interface, you have the ability to solve a range of interesting problems, such as the one shown below. In this case, we can see the flow and concentration in a tubular reactor that converts methane into hydrogen. The model combines the transport of species in concentrated solutions, fluid flow in free and porous media, and heat transfer with the endothermic reactions and the heated jacket on the reactor’s cylindrical outer walls.
Concentration of hydrogen in a reactor that converts methane into hydrogen. The reaction is endothermic and heat is supplied at the cylindrical walls, which yields a higher hydrogen production close to the walls.
In order to transport food and other perishables, refrigerated trucks are designed to maintain a cold temperature. If these products are not kept properly cooled, they can heat above an acceptable temperature and be damaged while in transit. This is especially true when products are exposed to heat for a prolonged period of time.
A refrigerated truck.
While it may seem straightforward to keep a truck properly cooled, there are a few important elements to consider. For one, the cooling system’s design must be optimized in order to preserve the correct temperature. Additionally, the truck’s walls need to be sufficiently insulated to help maintain this desired temperature, which requires choosing the right materials for the job.
Now consider when goods are loaded or unloaded from the truck. The open- and closed-door cycles that refrigerated trucks undergo make addressing the above elements even more challenging. It is, however, important to account for these periods in order to obtain realistic results for the vehicle’s cooling efficiency.
Let’s see how SIMTEC, a COMSOL Certified Consultant, helped Air Liquide, a leader in technologies, gases, and services for industries and health, model this process.
When it comes to choosing the best thermal insulation materials for an efficient aircooling system, it is important to have a thorough understanding of the aerothermal configuration inside a truck’s refrigerated box. As we’ll demonstrate here, this can be achieved by coupling CFD and heat transfer in COMSOL Multiphysics.
For their simulation studies, the team at SIMTEC created an aerothermal model to simulate heat transfer during the normal operation of a refrigerated truck. Their goal: Predict the temperature and air velocity distribution inside the truck and find out what happens upon closing and opening the rear door.
We can begin by taking a closer look at their model geometry. The engineers modeled the refrigerated area of the truck (the refrigerated box) as a parallelepipedic box with a cooling system. The cooling system consists of two circular apertures used for air extraction and a rectangular aperture used to blow refrigerated air into the main box. Since the model geometry features a yz symmetry plane, it is possible to fully describe the physical phenomena occurring in the entire refrigerated compartment by modeling only half of the box.
Geometry of the refrigerated truck. The entire refrigerated box geometry, the half-box geometry, and a close-up view of the ventilation and cooling system are shown. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble presentation.
For their analyses, the team investigated two stages of a refrigerated truck’s operating cycle, using two different computational methods to predict the temperature and airflow distribution. In the first stage, which lasts for about three hours, the truck’s rear door is closed and the cooling system is turned on. Both the fans and the refrigerating unit operate in order to cool the refrigerated box. The air from the refrigerating system is initially at a temperature of −27°C and later heats up when coming into contact with the warmer box walls. During this cooling period, the engineers partially decoupled their simulations to minimize the system’s degrees of freedom.
Cool air moving through a refrigerated truck with its rear door closed. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble presentation.
In the second stage, which lasts around ten minutes, the truck’s rear door is opened after the cooling period, and both the refrigerating unit and the ventilation system are switched off. Unlike the first stage, this step fully couples the laminar CFD and heat transfer equations. The team then used these equations to analyze the airflow into the box as well as the temperature change during this time period.
To answer this question, let’s begin by looking at the simulation findings from the first stage, when the truck’s rear door is closed. The figure below illustrates an air flow streamline of the local velocity after 10,000 seconds (approximately 2 hours and 45 minutes). At this point in time, the air reaches its maximum velocity at the roof of the box facing the inlet and along the door wall. The velocity decreases rapidly as the air flows through the rest of the box.
Streamlines showing the air’s local velocity after 10,000 seconds in the closed-door stage. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble paper.
The temperature field has a very similar distribution to the air velocity, with the coldest areas corresponding to high-velocity regions and vice versa. The plot below indicates that the warmest region is the recirculation zone located at the bottom of the box, where the air temperature rises above 0°C.
Streamlines showing the air flow’s local temperature after 10,000 seconds in the closed-door stage. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble paper.
The engineers also calculated the heat losses within the refrigerated box. By doing so, they found that air cools down inside the truck and global heat losses increase over time. Most of the heat is lost through the lateral and rear walls, both of which feature similar loss profiles. Because the floor is composed of the thickest materials and is the most insulated surface, a very limited amount of energy is lost through its surface.
Averaged heat loss for each of the five walls during the closed-door cooling period. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble paper.
Shifting gears to the second stage… With both the cooling and ventilation systems off, the engineers stressed that the only driving force for the air is the natural convection caused by the difference between outer and inner air temperatures. Since the air within the box is much cooler, warmer outside air flows in.
As illustrated in the following simulations, hot air rapidly enters the box at first, but after 50 seconds, the average air velocity drops below 10 cm/s. At 500 seconds, the average air velocity is as low as 2 cm/s, which may be due to the fact that the temperature difference between the box and the outside environment is greatly reduced by this point.
The average air velocity in the truck box at 2, 10, 50, and 500 seconds after opening the rear door. Images by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble paper.
As for the temperature, about 10 seconds after the truck’s rear door opens, the temperature of most of the box matches the outside temperature (around 25°C). One exception is the area around the walls, where thermal inertia helps the surrounding air stay cool.
The temperature in the truck box at 2, 10, 50, and 500 seconds after opening the rear door. Images by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble paper.
The team later compared their thermal simulation results to physical experiments in which the rear door of the truck was opened and closed multiple times. They found that the model predictions were in reasonable agreement with the experimental temperatures. The simulation results, however, show oscillations during the open-door periods that were not observed in the physical experiments. One explanation suggested by the engineers is that the temperature was sampled at different locations in the model and by the physical sensor. Another possible cause is the sensor’s intrinsic inertia, which may have a small smoothing effect on the measured temperature, whereas the model reports the instantaneous air temperature.
A plot comparing simulation results with experimental data. Image by Alexandre Oury, Patrick Namy, and Mohammed Youbi-Idrissi and taken from their COMSOL Conference 2015 Grenoble presentation.
With COMSOL Multiphysics, the team at SIMTEC found it easy to couple turbulent CFD and heat transfer in order to perform aerothermal simulations of a refrigerated truck model. These findings have the potential to serve as a powerful tool for designing the next generation of refrigerated trucks, identifying the optimal materials for their walls and indicating ways to enhance the power specifications and locations of their cooling systems.
To learn more about our COMSOL Certified Consultant SIMTEC and their services, visit their website.
As the human population grows, so too does the amount of trash we produce. In fact, the amount of solid waste we generate may almost double by 2025. A large portion of the waste ends up in landfills, some of which are the same size as entire towns. Land, however, is a finite resource and eventually there will be nowhere to put our trash. Furthermore, these growing landfills can negatively affect their surrounding environments.
A landfill. Image by Alan Levine — Own Work. Licensed under CC BY 2.0, via Flickr Creative Commons.
As an example, let’s look at a common type of landfill: a dry-tomb landfill. As its name suggests, this design entombs waste in order to keep out moisture, which reduces microbial growth and activity. While dry-tomb landfills are generally cost effective, a variety of issues arise if moisture does enter. For instance, the resulting microbial activity causes landfills to produce methane, a greenhouse gas much more potent than carbon dioxide. In the United States, landfills are the third-largest source of methane emissions. Landfills can also generate a harmful leachate that, if not treated, seeps into the water table and causes environmental and health problems. For these reasons, researchers are investigating alternative landfill designs, one of which is the aerobic bioreactor landfill.
There are two types of bioreactor landfills: aerobic, which operates with free air, and anaerobic, which operates without it. Here, we’ll focus on aerobic bioreactor landfills, in which air and moisture are pushed into a landfill in order to enhance aerobic microbial activity. This, in turn, increases the biodegradation rate, accelerating waste decomposition and more rapidly creating space for additional trash. This method also minimizes hazardous leachate and methane production when compared to anaerobic landfills.
We can turn existing landfills into aerobic bioreactor landfills by injecting air and recirculating the leachate to create an even distribution of bacteria and nutrients. The resulting aerobic landfill needs to be continuously monitored to ensure that the oxygen and moisture levels are ideal for aerobic microbial decomposition. Without outside intervention, the oxygen supply would be too low to sustain the aerobic bacteria, creating an anaerobic environment that is lethal to them. The hazardous leachate produced as a result may poison groundwater and harm the surrounding environment.
Before implementing this conversion process, further analysis needs to be done. Research using physical experiments could take years before reaching conclusions. For a more time-effective method, researchers at the University of Western Ontario used simulation to analyze the conversion process.
For their study, the researchers modeled waste as a porous medium. They used both heat transfer and chemical reaction simulations to gain a better understanding of how aerobic landfills operate. With COMSOL Multiphysics, they were also able to include biological kinetic equations using distributed ordinary differential equations (ODEs).
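To give a flavor of the kind of biological kinetics such distributed ODEs describe, here is a minimal sketch of aerobic biomass growing on oxygen with Monod kinetics, integrated with a forward-Euler step. The rate constants, yield, and units are illustrative assumptions, not parameters from the paper.

```python
# Sketch of Monod-type aerobic growth kinetics, the kind of biological ODE
# that can be coupled to transport equations. All constants are illustrative.

def monod_rate(mu_max, s, k_s):
    """Monod specific growth rate (1/day) at substrate concentration s."""
    return mu_max * s / (k_s + s)

def simulate(biomass0, oxygen0, mu_max=2.0, k_s=0.5, yield_x=0.4,
             dt=0.01, days=1.0):
    """Forward-Euler integration of biomass x and oxygen s (arbitrary units)."""
    x, s = biomass0, oxygen0
    for _ in range(int(days / dt)):
        mu = monod_rate(mu_max, s, k_s)
        dx = mu * x * dt                # biomass growth
        ds = -(mu * x / yield_x) * dt   # oxygen consumed per unit growth
        x += dx
        s = max(s + ds, 0.0)            # oxygen cannot go negative
    return x, s

x_end, s_end = simulate(biomass0=0.1, oxygen0=8.0)
```

As oxygen is depleted, the Monod term throttles growth, which is exactly why continuous air injection matters in the aerobic design.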
The researchers created a 2D model, seen below, consisting of a 20 m by 20 m landfill cell with air injection wells at the corners and an extraction well at the center. With this model, the researchers studied how different key factors affected the landfill conversion process.
The landfill cell geometry. Image by Hecham M. Omar and Sohrab Rohani and taken from their COMSOL Conference 2015 Boston paper.
First, the team investigated the role of temperature in their simulations, using it as an indicator of successful biodegradation within the landfill. This is possible because aerobic biodegradation is exothermic, generating heat during the process. Therefore, when the landfill stays around ambient temperature, only minimal biodegradation occurs, whereas an increase in temperature indicates successful biodegradation.
It’s important to keep in mind that an increase in temperature is only beneficial in moderation. If the landfill isn’t monitored, the temperature will continue to rise and kill the aerobic bacteria. Thus, a major goal for researchers is to keep an aerobic bioreactor landfill at the perfect temperature, not too hot and not too cold.
With this in mind, the researchers investigated two methods for controlling the temperature: increasing the airflow rate and injecting leachate.
For the first method, they increased the airflow rate fourfold. Rather than greatly decreasing the temperature, this caused the air to flow farther into the waste before heating up to the waste’s temperature, and the cell temperature became more homogeneous due to convection.
The temperature after one day has passed for different airflow rates. The starting temperature for all three was 293 K. Images by Hecham M. Omar and Sohrab Rohani and taken from their COMSOL Conference 2015 Boston paper.
As for the second method, we see below that the temperature is significantly reduced by injecting leachate. This indicates that the leachate flow rate is more effective at controlling landfill temperature than the airflow rate. Here, leachate recirculation functions as a heat sink for the heat generated by the biomass, helping to ensure that the heat doesn’t reach unacceptable levels. Without this temperaturecontrol method, the biomass would overheat and begin to die, reducing its concentration to almost zero and slowing the biodegradation process.
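The heat-sink role of recirculated leachate comes down to a steady-state energy balance: the biodegradation heat is carried away by raising the leachate temperature, Q = ṁ·cp·ΔT. The sketch below uses illustrative numbers (the heat load and flow rates are assumptions, not values from the study).

```python
# Back-of-the-envelope energy balance for leachate recirculation acting as
# a heat sink for biodegradation heat. Numbers are illustrative.

CP_WATER = 4186.0  # J/(kg K); leachate approximated as water

def leachate_outlet_temp(q_bio_w, m_dot_kg_s, t_in_k):
    """Steady-state leachate outlet temperature after absorbing q_bio watts."""
    return t_in_k + q_bio_w / (m_dot_kg_s * CP_WATER)

# Doubling the injection rate halves the leachate temperature rise:
t_low  = leachate_outlet_temp(q_bio_w=5000.0, m_dot_kg_s=0.05, t_in_k=293.0)
t_high = leachate_outlet_temp(q_bio_w=5000.0, m_dot_kg_s=0.10, t_in_k=293.0)
```

This inverse relationship between flow rate and temperature rise is why the leachate injection rate is such an effective temperature-control knob.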
The temperature after one day has passed for different leachate injection rates. The starting temperature for all three was 293 K. Images by Hecham M. Omar and Sohrab Rohani and taken from their COMSOL Conference 2015 Boston paper.
Another factor affecting the conversion of anaerobic landfills to aerobic bioreactor landfills is the initial aerobic biomass concentration. In the following images, you can see how various biomass concentrations affect the temperature. When there is a low initial aerobic biomass concentration, the biomass grows slowly and produces less heat. On the other hand, when the initial biomass concentration gets too high, the aerobic bacteria grow too rapidly and produce a harmful amount of heat. From this, the researchers concluded that the initial aerobic biomass concentration was a key factor in the landfill conversion process.
The temperature after one day has passed for different initial aerobic biomass concentrations. The starting temperature for all three was 293 K. Images by Hecham M. Omar and Sohrab Rohani and taken from their COMSOL Conference 2015 Boston paper.
With multiphysics modeling, the research team gained a better understanding of the factors that affect the aerobic landfill conversion process more quickly than they could have with experimental testing. Simulation also enabled them to study scenarios that would be difficult or impossible to study physically.
Although their model is consistent with both the researchers’ expectations and existing literature, the team notes that it still needs to be validated against experimental and industrial data. Looking ahead, the aerobic bioreactor landfill design studied here could also be improved through other methods, such as injecting aerobic sludge to raise the initial aerobic biomass concentration and thereby speed up the landfill conversion process.
When it comes to describing the velocity and pressure fields inside the system you are analyzing, there are many equations that could be appropriate. You could, for example, adequately describe a fluid slowly moving in a porous bed with Darcy’s law. But if the fluid moves rapidly, you may need to use the Brinkman equations. While there are many options available, today we will focus on the Navier-Stokes equations, as they are the most common in fluid flow analysis. Note that most of the explanations and practices highlighted here will also apply to the equations referenced above.
The first step is to characterize the type of flow that you are modeling based on fluid density. All fluids are compressible; that is, their density depends on absolute pressure and temperature through a thermodynamic relation, ρ = ρ(pA, T). However, from a practical point of view, most liquids can be safely described as having a density that depends uniquely on temperature, ρ = ρ(T). Density is, of course, a function of another variable in some cases, such as the salt concentration in the Elder problem.
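The two density descriptions above can be made concrete with a short sketch: a gas following the ideal-gas thermodynamic relation ρ(pA, T), versus a liquid whose density depends on temperature alone. The liquid’s linear expansion model and its coefficients are illustrative assumptions.

```python
# Two density descriptions: an ideal gas, rho(p_A, T), and a liquid whose
# density is modeled as a function of temperature only (pressure ignored).

R_AIR = 287.05  # specific gas constant of air, J/(kg K)

def rho_gas(p_abs, temp_k):
    """Ideal-gas density: depends on absolute pressure AND temperature."""
    return p_abs / (R_AIR * temp_k)

def rho_liquid(temp_k, rho_ref=998.0, t_ref=293.15, alpha=2.1e-4):
    """Liquid density with an illustrative linear thermal-expansion model."""
    return rho_ref * (1.0 - alpha * (temp_k - t_ref))

rho_air = rho_gas(101325.0, 293.15)  # roughly 1.2 kg/m^3 at ambient conditions
rho_water_hot = rho_liquid(353.15)   # density drops slightly with heating only
```

For the liquid, halving or doubling the pressure would leave the computed density unchanged, which is exactly the practical simplification described above.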
In the Single-Phase Flow interface available in COMSOL Multiphysics, there are three possible formulations for the momentum and mass conservation equations: Compressible flow (Ma < 0.3), Weakly compressible flow, and Incompressible flow. You can easily select from these compressibility options within the Laminar Flow settings, as highlighted below.
Choosing a compressibility option in COMSOL Multiphysics.
In general, the various properties of a fluid are not constant and may depend on a number of quantities. Whether it is necessary to account for such dependencies in your modeling processes is up to you. Since the focus is on mass, momentum, and energy conservation equations in this blog post, we will review how COMSOL Multiphysics deals with the viscosity μ, density ρ, thermal conductivity k, and heat capacity Cp for the different compressibility options.
We’ll begin our discussion with isothermal flow simulations and later turn our attention to nonisothermal cases.
Pumps, mixers, airfoils, and multiphase systems… These are just some of the devices that are often modeled as isothermal. Isothermal flow simulations assume that μ, ρ, k, and Cp are not dependent on temperature. If the properties are defined as functions of temperature, they are evaluated at the reference value. This means that the energy equation is only weakly coupled to the other two equations, through the convective term, and it can be computed at a later time, if needed. It is, however, not always possible to make such an approximation.
Given the freedom that the user interface (UI) of COMSOL Multiphysics grants us, we can first study and solve a single physics problem and then build a multiphysics problem on top of the initial solution. Keep in mind that neglecting the energy conservation equation, even if you’re not directly interested in the temperature field, is valid only below a certain Mach number (Ma < 0.3).
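The Mach number criterion mentioned above is easy to check by hand: Ma = u/c, with the speed of sound c = √(γRT) for an ideal gas. The sketch below evaluates this criterion for two illustrative velocities (the values are assumptions for demonstration).

```python
# Quick check of the Ma < 0.3 criterion: below this Mach number, density
# changes caused by the motion itself are small and the flow can often be
# treated as incompressible. Velocities are illustrative.
import math

GAMMA, R_AIR = 1.4, 287.05  # air: heat capacity ratio, specific gas constant

def mach_number(velocity, temp_k):
    """Ma = u / c, with c = sqrt(gamma * R * T) for an ideal gas."""
    return velocity / math.sqrt(GAMMA * R_AIR * temp_k)

def can_assume_incompressible(velocity, temp_k, limit=0.3):
    return mach_number(velocity, temp_k) < limit

ma_fan  = mach_number(20.0, 293.15)   # a fan-driven flow: well below 0.3
ma_fast = mach_number(150.0, 293.15)  # a fast gas flow: above 0.3
```

At room temperature, c ≈ 343 m/s, so the 0.3 limit corresponds to roughly 100 m/s; most cooling and ventilation flows sit far below it.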
Let’s take a look at how to select the appropriate compressibility option for your modeling case.
Compressible flow (Ma < 0.3), the most general case, makes no assumptions about the system that is being solved. COMSOL Multiphysics takes into account any dependency that the fluid properties may have on the model variables. In isothermal flows, the temperature is typically uniform, so the fluid properties (such as density and viscosity) remain constant, evaluated at the reference values. Even so, the properties can still vary with pressure or other quantities, such as concentration. With this formulation, which is the most computationally expensive, we can model any kind of flow, including incompressible situations. Our Flow Around an Inclined NACA 0012 Airfoil tutorial model provides one example of how to use the compressible flow formulation.
For the Weakly compressible flow option, new with COMSOL Multiphysics® version 5.2a, the equations look the same as they do for the Compressible flow (Ma < 0.3) option. The only difference is that if the density is pressure dependent, it is evaluated at the reference absolute pressure. All of the density’s other dependencies, such as species concentration, are still accounted for, so we can use this formulation to model volume forces given by concentration gradients.
The Incompressible flow formulation, meanwhile, is valid whenever the density ρ can be regarded as constant (i.e., when modeling isothermal liquids or gases at low velocities). This option is also used in immiscible two- or three-phase flow simulations where the density is constant. When applying the Incompressible flow formulation, COMSOL Multiphysics automatically uses the reference temperature and pressure to evaluate ρ. In addition, it uses the reference temperature to evaluate the viscosity μ. Of course, there are many applications in which these quantities depend on another variable, such as a species concentration. In these cases, the density must be explicitly evaluated at a reference value for these variables within the interface. To learn more, download our Water Purification Reactor model example.
Density specification in the case of isothermal weakly compressible and incompressible flows.
Also note that the form of the equations, shown within the Equation section, changes according to the selected option.
The different formulations of the compressible and incompressible Navier-Stokes equations.
Nonisothermal flow simulations typically relate to cooling and heating applications, namely conjugate heat transfer. These simulations can refer to systems that are governed by natural, forced, or mixed convection.
Depending on the type of system that is being analyzed and the hypothesis that is assumed to be true, any of the compressible options can be appropriate for nonisothermal simulations. Since Compressible flow (Ma < 0.3) is the only meaningful formulation for gases subject to high pressure changes, we will focus here on systems that are below the Mach number limit, and those with fluid properties that are uniquely dependent on temperature. (There is a dedicated interface available for modeling high Mach number systems, as highlighted in this Sajben diffuser tutorial model.) The system of equations — mass, momentum, and energy conservation — is completely coupled, as the velocity appears inside the energy equation. Meanwhile, the pressure will appear explicitly in the momentum equation; the temperature will appear explicitly in the energy equation; and both temperature and pressure may be inside the fluid properties in these two equations.
Couplings of momentum and energy equations. In the case of natural convection, a part of the volume force depends on temperature gradients.
For convective heat transfer simulations, the Compressible flow (Ma < 0.3) option can be used to analyze forced and natural convection.
Forced convection with this option applies when the properties of the fluid vary in a nonnegligible way with pressure and temperature. This is the case for high-speed gas systems, where the pressure changes have a nonnegligible influence on density. As previously noted, the density of liquids rarely depends on pressure, so for liquids this option behaves exactly like the Weakly compressible flow formulation. See our Shell-and-Tube Heat Exchanger tutorial model to learn more.
In natural convection, the driving force is the buoyancy force due to temperature gradients. The Compressible flow (Ma < 0.3) option must be used for gases in closed cavities in order for the system of equations to be consistent. In fact, if the cavity volume and total mass are constant, then the average density needs to be constant. Pressure changes help balance out the density variations that are caused by temperature variations. Interested in modeling such a system? Refer to our Free Convection in a Light Bulb tutorial, which exemplifies the setup of transient conjugate heat transfer models with radiation.
The Weakly compressible flow option comes with a simplified formulation that usually leads to increased computational speed. It is the default option when the predefined Nonisothermal Flow or Conjugate Heat Transfer coupling is opened in COMSOL Multiphysics. This formulation, which basically neglects the green arrow coupling shown in the previous image, can be used to analyze forced and natural convection.
In the case of forced convection, the Weakly compressible flow option can be applied to the simulation of water or other fluids and is often valid for modeling gases in open systems (see this heat sink model example). The same considerations are valid for natural convection, as demonstrated in our vacuum flask tutorial.
The Incompressible flow option can be applied to both forced and natural convection as well. The forced convection case applies when you want to run simulations in cascade. For instance, it is sometimes useful to compute the flow field at a reference temperature and then compute the temperature field in a second simulation. This can be a powerful approximation when the fluid properties do not vary much within the simulation’s temperature and pressure range. One good candidate for this type of modeling is a heat exchanger that includes liquids. You can also apply this approach to obtain more consistent initial values for highly nonlinear stationary problems. After computing the flow field and then the temperature field with a “frozen velocity”, it is possible to gain considerable convergence improvements by using these results as initial values for the fully coupled simulation.
For natural convection, COMSOL Multiphysics implements the Boussinesq approximation. The reference temperature and pressure, specified in the interface, are used to compute the density, viscosity, heat capacity, and thermal conductivity. Further, the software automatically computes the coefficient of thermal expansion for the fluid, αp, by taking the derivative of the density around the reference temperature, Tref, and uses it to impose the buoyancy force, F = -ρref αp (T - Tref) g, where g denotes the gravity vector. You also have the option to enter the coefficient directly, as shown below.
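To see what this Boussinesq machinery amounts to, here is a sketch that numerically differentiates a density model around the reference temperature to obtain αp = -(1/ρ)·dρ/dT, then uses it in the buoyancy term. The linear density model and its slope are illustrative assumptions, not values from the software.

```python
# Sketch of the Boussinesq bookkeeping: compute alpha_p = -(1/rho) d(rho)/dT
# at the reference temperature, then evaluate the buoyancy contribution.
# The density model below is an illustrative linear fit.

G = 9.81  # m/s^2, magnitude of the gravity acceleration

def rho(temp_k, rho_ref=1.2, t_ref=293.15, slope=-0.004):
    """Illustrative density model (kg/m^3) around the reference temperature."""
    return rho_ref + slope * (temp_k - t_ref)

def alpha_p(t_ref, dT=0.01):
    """Central-difference estimate of alpha_p = -(1/rho) * d(rho)/dT (1/K)."""
    drho_dT = (rho(t_ref + dT) - rho(t_ref - dT)) / (2.0 * dT)
    return -drho_dT / rho(t_ref)

def buoyancy_force(temp_k, t_ref=293.15):
    """Upward buoyancy contribution per unit volume (N/m^3): warmer fluid
    (T > T_ref) is lighter and is pushed up."""
    return rho(t_ref) * alpha_p(t_ref) * (temp_k - t_ref) * G
```

The sign convention makes the physics visible: fluid warmer than the reference gets a positive (upward) force, fluid cooler than it a negative one.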
Options for specifying the density.
When utilizing the gravity option, an important concern is the need for consistent boundary conditions and initial values, particularly when dealing with natural convection simulations. This could be a nontrivial task since a domain force, the buoyancy force, is working inside the system, and we need to account for its presence. Consider, for instance, a system like a pipe where a hydrostatic head is present. Here, it is clear that we simply can’t impose a constant pressure as a boundary condition if the boundary itself is not perpendicular to the gravity vector.
Besides enabling you to model all of the above systems and situations, COMSOL Multiphysics helps to address initial values and boundary conditions for each case. To learn more, check out our Gravity and Boundary Conditions tutorial model.
For forced convection, the flow and temperature coupling is taken care of at the Multiphysics node level. Within the Nonisothermal Flow interface, the equations are coupled and the fluid flow and heat transfer properties are synchronized (see the screenshot below). Depending on the compressibility option that is chosen, COMSOL Multiphysics will operate in the background to make the appropriate changes to the fluid properties, making them consistent with the selected formulation. Additionally, the Nonisothermal Flow interface takes care of implementing thermal wall functions and computing turbulent heat conduction.
Settings window for the Nonisothermal Flow interface.
If it is necessary to include a buoyancy force due to temperature or concentration gradients, then you must select the Include gravity check box. This generates values in the Reference Values section that can be used to compute the hydrostatic pressure approximation, alongside the reference temperature and pressure. Selecting the Include gravity check box also causes a new subnode to appear: Gravity. Here, you can specify the direction of the gravitational acceleration acting on the system. When the Gravity subnode is added, the hydrostatic contribution taken at the reference temperature and pressure is automatically added inside the boundary conditions, where appropriate.
To model natural convection, you simply need to use the Gravity feature together with the Nonisothermal Flow interface. Together, they model the coupled flow and temperature fields in the presence of the gravity acceleration.
Changes prompted by selecting the Include gravity check box.
The following simulation plots come from our vacuum flask tutorial model, which evaluates the thermal performance of a bottle holding a hot fluid. In this system, the air outside the flask flows in an open system: the flask is resting on a table in a wide room. These considerations make this example a helpful resource for understanding the use, assumptions, and results of the various formulations. The Compressible flow (Ma < 0.3) option, for instance, is always applicable. Since the air flows in an open system, the Weakly compressible flow option is also applicable. And lastly, because the density changes are small, the Incompressible flow option can represent the system appropriately as well.
Plots comparing velocity, temperature, and density fields (respectively) for simulations using the Compressible flow (Ma < 0.3), Weakly compressible flow, and Incompressible flow formulations. We performed these simulations by simply toggling between the three compressibility options.
Graphs comparing velocity, temperature, and density fields (respectively) for simulations using the Compressible flow (Ma < 0.3), Weakly compressible flow, and Incompressible flow formulations. We performed these evaluations at the red dashed line, shown in the plot on the right in the previous set of images, after a simulation time of 10 hours.
Choosing the correct compressibility option is key for solving your system in an accurate and efficient way. COMSOL Multiphysics provides you with functionality that allows you to model both natural and forced convection with ease, while still offering various modeling choices and giving you complete control over the simulations at hand. This results in an optimized approach to the numerical analysis of fluid flow and temperature fields, further advancing your engineering design.
Use the table below as a helpful guide for choosing the compressibility option that is most appropriate for your modeling needs.
| Compressibility Option | Isothermal Flow | Nonisothermal Flow |
| --- | --- | --- |
| Compressible flow (Ma < 0.3) | Any flow; fluid properties may depend on pressure and other quantities | Gases subject to large pressure and temperature changes; natural convection of gases in closed cavities |
| Weakly compressible flow | Density evaluated at the reference pressure; other dependencies (e.g., concentration) retained | Forced and natural convection of liquids, and of gases in open systems |
| Incompressible flow | Constant density; isothermal liquids or gases at low velocities | Cascade (“frozen velocity”) simulations; Boussinesq approximation for natural convection |
In the various tropical and subtropical regions where it is cultivated and naturalized, the date palm tree is recognized as a valuable resource. The wood, for instance, can be used to construct huts, bridges, and aqueducts. The leaves, when mature, are incorporated into the design of mats, screens, and baskets. But what this type of palm tree is most widely known for is the sweet fruit that it produces: dates.
A date palm tree. Image by Madhif. Licensed under CC BY 3.0, via Wikimedia Commons.
Since ancient times, dates have been cultivated across various regions, from Mesopotamia to Egypt. The ancient Egyptians, for example, used the fruits to make wine and also ate them at harvest. As traders began to bring these fruits to other areas, the popularity of dates and their cultivation extended to Northern Africa, Spain, Mexico, and the United States.
Today, dates are commonly eaten as a snack and are an ingredient in many savory dishes. They are also sometimes used to make vinegar and syrup as well as to form feedstock when mixed with a grain.
In Tunisia, a leading cultivator of dates, these sweet fruits are highly valued as both a food source and financial resource. This is especially true with regards to the Deglet Nour date. Grown in inland oases, Deglet Nour dates are easily recognized by their translucent light color and honeylike taste.
Deglet Nour dates. Image by M. Dhifallah. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Agronomic practices have a strong impact on the dates’ chemical composition — specifically the moisture content — and thus the overall quality of the fruit. In recent years, an increased proportion of dry dates in Tunisia has prompted the use of thermal processing to generate softer fruits that possess a more suitable appearance and characteristics. For this process to be effective, it is important to control the key unit of operation: hydration. For example, if the hydration time is too long, the shelf stability of the fruit could decrease. On the other hand, if the hydration time is too short, the final product quality might be unacceptable.
In an effort to address this concern and optimize the hydration process, one team of researchers turned to experimental tests and simulation analyses. We’ll explore their findings in the next section.
For their experiments, the researchers selected a sample of Deglet Nour dates that were harvested in 2014 and stored at a temperature of 4°C and a relative humidity of 65%. They used these dates to conduct hydration experiments at the laboratory scale. To approach industrial conditions, the dates were placed in a closed environment at atmospheric pressure, with temperatures ranging between 50°C and 65°C.
Such an environment was achieved by placing the dates in the head space of a metallic enclosure, which was filled with water and heated via a temperature-controlled hot plate. As they were housed in the head space, the dates themselves did not have any contact with the water. Additionally, as a means to prevent overpressure and maintain atmospheric pressure, no insulation was included on the cover of the enclosure.
Schematic depicting the experimental setup. Image by S. Curet, A. Lakoud, and M. Hassouna and taken from their COMSOL Conference 2015 Grenoble paper.
During the hydration process, the dates were weighed at regular intervals. The researchers determined the average moisture content of the dates’ flesh by drying 3 grams of dates in an oven at a temperature of 105°C for a minimum of 18 hours. By monitoring the dates’ temperature for the various hydration times, they found that the temperature could be considered homogeneous.
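The oven-drying measurement above reduces to a simple mass balance: the water driven off is the difference between the wet and dry sample masses. The sketch below computes the standard dry-basis and wet-basis moisture contents; the sample masses are illustrative, not data from the study.

```python
# Moisture content from an oven-drying measurement: weigh the sample before
# and after drying, then apply a mass balance. Masses here are illustrative.

def moisture_dry_basis(m_wet, m_dry):
    """Moisture content as kg of water per kg of dry matter."""
    return (m_wet - m_dry) / m_dry

def moisture_wet_basis(m_wet, m_dry):
    """Mass fraction of water in the wet sample (0 to 1)."""
    return (m_wet - m_dry) / m_wet

# A hypothetical 3 g flesh sample that loses 0.75 g of water in the oven:
x_db = moisture_dry_basis(3.0, 2.25)  # dry-basis moisture content
x_wb = moisture_wet_basis(3.0, 2.25)  # wet-basis moisture fraction
```

The dry basis is usually preferred in drying and hydration work because the denominator (the dry matter) stays constant as water moves in or out.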
After conducting their experiments, the group shifted gears to performing simulation analyses of the hydration of dates. The image below depicts the 2D geometry that they designed. The geometry includes the date flesh, date pit, and the saturated air that allows the date to be hydrated.
2D model geometry. Image by S. Curet, A. Lakoud, and M. Hassouna and taken from their COMSOL Conference 2015 Grenoble paper.
Using this model, the researchers computed the moisture distribution in the date’s flesh during hydration and then calculated the average moisture concentration as a function of time. Note that only mass transfer phenomena were taken into account, as the temperature was considered to be homogeneous.
While experiments were conducted for several types of dates, the researchers focused on using the results from two samples of one type of Deglet Nour date, which were slightly harder and drier than the others. The following series of plots compare the average moisture concentration of Date 1 and Date 2. The simulation curves are found to fit quite well with the experimental data. From these plots, we can also see that there is no decrease in the rate of moisture uptake. One possible reason for this behavior is the short length of the hydration times as compared to the maximum industry processing times.
Plots comparing the average moisture concentration from experiments and simulations in Date 1 (left) and Date 2 (right). Images by S. Curet, A. Lakoud, and M. Hassouna and taken from their COMSOL Conference 2015 Grenoble paper.
The plot below shows the moisture concentration distribution within a date’s flesh after 14,640 seconds (around 4 hours) of hydration. As the plot illustrates, the moisture concentration gradient is highest near the surface in contact with the air. From there, the concentration gradient decreases to zero toward the center of the date’s flesh, where the moisture concentration stays at its initial value. This behavior indicates that diffusion occurs mainly at the outer surface of the date for the range of hydration times considered.
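This steep-near-the-surface, flat-at-the-center profile is characteristic of Fickian diffusion over short times. The 1D finite-difference sketch below reproduces the behavior: concentration fixed at the surface in contact with saturated air, zero flux at the center. The diffusivity, layer thickness, and concentrations are illustrative assumptions, not the paper’s parameters.

```python
# 1D explicit finite-difference (FTCS) sketch of Fickian moisture diffusion,
# dc/dt = D * d2c/dx2, with a fixed surface concentration and a zero-flux
# condition at the center. All physical parameters are illustrative.

def diffuse_1d(c_surface, c_init, diffusivity, length, n=50, t_end=14640.0):
    """Return the concentration profile from the surface (x=0) to the center."""
    dx = length / (n - 1)
    dt = 0.4 * dx * dx / diffusivity  # satisfies stability: dt <= dx^2/(2D)
    c = [c_init] * n
    c[0] = c_surface                  # surface node in contact with moist air
    t = 0.0
    while t < t_end:
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + diffusivity * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
        new[-1] = new[-2]             # zero-flux (symmetry) at the center
        c, t = new, t + dt
    return c

# ~4 hours of hydration of a 5 mm flesh layer with D ~ 1e-10 m^2/s:
profile = diffuse_1d(c_surface=1.0, c_init=0.2, diffusivity=1e-10,
                     length=5e-3, t_end=14640.0)
```

With these values, the diffusion length √(Dt) is only about 1.2 mm, so the center of the 5 mm layer remains essentially at its initial concentration, mirroring the simulation result described above.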
Simulation plot of the moisture concentration distribution in a date after 4 hours of hydration. Image by S. Curet, A. Lakoud, and M. Hassouna and taken from their COMSOL Conference 2015 Grenoble paper.
For any food processing technique, the goal is to balance efficiency with overall quality. In the thermal processing of dates, optimizing the hydration method is a key step for realizing this balance, saving energy while ensuring a high-quality product.
The simulation-based approach highlighted here offers valuable insight into the mass transfer phenomena that take place in the hydration operation. As such, the theoretical model underlying this approach can be used as a resource to optimize the hydration of dates by predicting the necessary times to achieve a desired water content, as well as to reduce the overall processing time.
The food that we consume on a day-to-day basis, particularly carbohydrates, is a powerful source of energy for the human body. For the body to utilize energy from carbohydrates and store glucose for later use, it is crucial that its cells properly absorb the sugar. The key to this process is insulin, a hormone that the body signals the pancreas to release into the bloodstream, allowing sugars to enter the cells and be used for energy.
But what happens when the body fails to produce enough insulin or if it doesn’t work in the way that it should? In this case, the glucose fails to be absorbed by the cells and will instead remain in the bloodstream, resulting in rather high blood glucose levels. Referred to as diabetes, this metabolic disease relates to cases where the body produces little or no insulin (Type 1) or does not properly process blood sugar or glucose (Type 2). Note that in the latter type, a lack of insulin can develop as the disease progresses.
A device for injecting insulin. Image by Sarah G. Licensed under CC BY 2.0, via Flickr Creative Commons.
In both Type 1 and Type 2 diabetes, insulin injections serve as a viable treatment option. These injections, however, can cause pain when applied by a heavy single-needle mechanical pump. To minimize patients’ discomfort, researchers have investigated the potential of using a microneedle-based MEMS drug delivery device to administer insulin dosages. Not only would the stackable structure be minimal in size and easy to apply to the skin, but it would also provide a safer and less painful approach to applying injections.
Here’s a look at how a research team from the University of Ontario Institute of Technology used simulation to evaluate such a device…
Let’s begin with the design of the micropump model. The researchers developed a MEMS-based insulin micropump, placing a piezoelectric actuator on top of a silicone diaphragm membrane, with a viscous Newtonian fluid flowing through the pump. Note that the design itself is based on the minimum dosage requirement for diabetes patients — this typically ranges from 0.01 to 0.015 units per kg per hour.
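To get a feel for that dosage requirement, we can convert the hourly rate into a total basal amount over a day. The 70 kg body mass below is a hypothetical example for illustration, not a figure from the paper:

```java
public class DosageEstimate {
    // Minimum basal dosage range cited above (units per kg per hour).
    static final double MIN_RATE = 0.01;
    static final double MAX_RATE = 0.015;

    // Basal insulin delivered over a given period for a given body mass.
    static double unitsDelivered(double massKg, double ratePerKgPerHour, double hours) {
        return massKg * ratePerKgPerHour * hours;
    }

    public static void main(String[] args) {
        double mass = 70.0; // hypothetical 70 kg patient
        System.out.printf("24 h basal range: %.1f to %.1f units%n",
                unitsDelivered(mass, MIN_RATE, 24),
                unitsDelivered(mass, MAX_RATE, 24));
    }
}
```

For this hypothetical patient, the pump would need to deliver roughly 16.8 to 25.2 units over a day, which gives a sense of the small flow rates the micropump must sustain.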
Vibrations from the actuator create a positive/negative volume in the main chamber of the pump, which then pulls the fluid from the inlet gate and pushes it toward the outlet gate. Two flapper check valves control the direction of the fluid from the inlet to the outlet leading to the microneedle array, with a distributor connecting the outlet gate to the microneedle substrate. The established discharge pressure then pushes the fluid out of the microneedles to the skin’s outer layer.
The following set of images shows the dimensions and cross section of the micropump as well as a more detailed layout of the model setup.
The MEMS-based piezoelectric micropump design. Image by F. Meshkinfam and G. Rizvi and taken from their COMSOL Conference 2015 Boston paper.
A 2D layout of the micropump model setup. Image by F. Meshkinfam and G. Rizvi and taken from their COMSOL Conference 2015 Boston paper.
To accurately study the performance of the micropump, the researchers utilized three different physics interfaces in COMSOL Multiphysics: the Solid Mechanics, Piezoelectric Devices, and Fluid-Structure Interaction (FSI) interfaces. The fluid flow that occurs from the inlet to the outlet via the action of the flapper check valve is described by the Navier-Stokes equations. Upon a wave signal exciting the piezoelectric actuator, the diaphragm disk and piezoelectric actuator move together, with an FSI moving mesh presenting the deformed solid boundary to the fluid domain as a moving wall boundary condition. Within the solid wall of the pump, this moving mesh follows the structural deformation. The FSI interface also accounts for the fluid force acting on the solid boundary, making the coupling between the fluid and solid domains fully bidirectional.
For their simulation analyses, the research team applied different input voltages and input exciting frequencies to the micropump design, studying various elements of the device’s behavior. The range of the voltages spanned from 10 to 110 V, while the exciting frequencies ranged from 1 to 3 Hz.
Let’s look at the results for an input voltage of 110 V and an input exciting frequency of 1 Hz. The plot on the left depicts the inflow and outflow rates, showing very little leakage for both. The plot on the right shows the established discharge and suction pressures at the inlet and outlet. At the inlet gate, a negative pressure denotes suction pressure, while a positive pressure at the outlet gate represents discharge pressure.
Left: Inflow and outflow rates. Right: Discharge and suction pressures. Images by F. Meshkinfam and G. Rizvi and taken from their COMSOL Conference 2015 Boston paper.
As part of their analyses, the researchers measured the stress and deflection of the flapper check valves as well as the velocity field of the fluid. You can see their results in the following set of plots.
Left: Von Mises stress in the flapper check valves and velocity field of the fluid. Right: Deflection of the flapper check valves. Images by F. Meshkinfam and G. Rizvi and taken from their COMSOL Conference 2015 Boston paper.
The studies shown here, as well as those conducted with alternative inputs, indicate that the micropump design performs properly across the full range of pressures and flow rates considered. Such a configuration can therefore serve as a viable alternative for applying insulin injections, providing a safer and less painful method of treatment for diabetes patients. The researchers hope to use their simulation findings as a foundation for creating more durable and dynamic insulin micropump designs in the future.
At NASA’s Marshall Space Flight Center, researchers are reaching for the stars, quite literally. As participants in the Advanced Exploration Systems (AES) program, they seek to advance new technologies that could allow for future space missions that extend past Earth’s orbit.
The organization participates in the Life Support Systems Project (LSSP), which is part of NASA’s AES program. The goals of this project are to extend the duration of crew missions, improve reliability, reduce risks, and address gaps in technology with regard to life support systems. Both the LSSP and AES are directly derived from the architecture of the International Space Station (ISS).
The International Space Station. Image by the NASA Goddard Space Flight Center. Licensed under CC BY 2.0, via Flickr Creative Commons.
When it comes to manned vessels, one element required for successful space travel is an efficient and reliable carbon dioxide removal assembly (CDRA). Since these systems directly affect the health and well-being of a vessel’s crew, optimizing their design is of great importance. That, however, is easier said than done. Physical tests require a great deal of time and energy, and simulation studies can hit snags due to the inherently complex nature of CDRA systems. Such challenges have prompted engineers to explore new approaches to system development.
To supplement testing and avoid overcomplicated simulations, researchers at NASA’s Marshall Space Flight Center used COMSOL Multiphysics to create a 1D model of a 4-bed molecular sieve (4BMS), a component of the ISS CDRA system. Their goal: Identify problem areas, with the hopes of eventually optimizing a CDRA 4BMS system.
Let’s begin by taking a closer look at the CDRA 4BMS system analyzed in this study. The main method for gas separation used by atmosphere revitalization systems is adsorption in packed beds of pelletized sorbents. As illustrated in the schematic below, this system operates by first sending cabin air through a desiccant bed that adsorbs water vapor from the air. The cooler and blower then precondition this dry air before passing it through a sorbent bed that removes CO_{2}. When the air stream enters a second, desorbing desiccant bed, the water vapor is added back and the air is returned to the cabin.
Schematic depicting a CDRA 4BMS system. Image by R. Coker and J. Knox and taken from their COMSOL Conference presentation.
As the process above occurs, there is also another process taking place in the 4BMS. The second sorbent bed is closed at one end and heated on the other, which releases CO_{2} from the bed. This is followed by a ten-minute air save mode that helps to recover most of the air trapped in the sorbent bed. After this, the bed is vented into space.
The entire sequence highlighted in this section is called a half-cycle and lasts for approximately 155 minutes. In the next half-cycle, the two adsorbing beds become desorbing beds, and vice versa.
The CDRA 4BMS, as you can see, is an intricate system. Yet here, predictive 1D models are accurate enough to help design the system’s desiccant beds. Although the sorbent beds of this system aren’t cylindrical, and the heaters create the potential for a complex multidimensional flow path, the researchers observed that air flows relatively uniformly through the channels. This prompted them to use a 1D approximation to study the beds, creating a fully coupled model to solve for concentrations, pressures, and temperatures.
For their study, the researchers modeled the transport of two concentrated sorbate species, carbon dioxide and water, in air. This mixture flows through four linked beds of sorbent pellets. Calculated effluent mass fractions of CO_{2} and H_{2}O from the upstream beds were applied as inputs for the next downstream bed. The carbon dioxide beds utilized a heater-assisted vacuum desorption model, with heat transfer involving the gas, porous media, solid housing, and can insulation modeled as well. Further, the researchers used distributed PDEs and Toth isotherms to determine the adsorption rates and pellet loading.
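The Toth isotherm mentioned above relates the equilibrium pellet loading to the sorbate partial pressure. A minimal sketch of its standard functional form is shown below; the parameter values are purely illustrative and are not taken from the researchers' model:

```java
public class TothIsotherm {
    /**
     * Equilibrium loading from the Toth isotherm:
     *   q = qs * b * p / (1 + (b*p)^t)^(1/t)
     * qs: saturation loading (mol/kg), b: affinity constant (1/Pa),
     * t: heterogeneity exponent (dimensionless), p: partial pressure (Pa).
     * For t = 1, this reduces to the Langmuir isotherm.
     */
    static double loading(double qs, double b, double t, double p) {
        return qs * b * p / Math.pow(1.0 + Math.pow(b * p, t), 1.0 / t);
    }

    public static void main(String[] args) {
        // Illustrative parameters only (not from the NASA paper).
        double qs = 3.0, b = 1e-3, t = 0.5;
        for (double p : new double[]{10, 100, 1000}) {
            System.out.printf("p = %6.0f Pa -> q = %.4f mol/kg%n",
                    p, loading(qs, b, t, p));
        }
    }
}
```

The loading rises with pressure and saturates at qs as b*p grows large, which is the behavior used to predict how much CO_{2} and H_{2}O each bed can hold.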
The figure below shows the idealized schematic for the 4BMS model. Here, we can see that the team only modeled the glass beads and parts of the beds containing sorbents. In their model, the researchers handled the glass bead layers the same way as the sorbent layers, aside from the H_{2}O and CO_{2} having zero adsorption and desorption capacity.
The idealized 4BMS model. Images by R. Coker and J. Knox and taken from their COMSOL Conference paper submission.
To validate their model, the team used an ISS CDRA 4BMS ground test. It is important to note that some of the inputs, including total sorbent mass, degree of thermal insulation, and pressure drops across the beds, are unknown. Further, in order to achieve faster results and increase numerical stability, the initial bed loading conditions were set close to the expected final results.
The following series of graphs compares the simulation results with experimental data for sorbent bed temperatures and carbon dioxide partial pressures. Overall, the experimental system converged fairly quickly, with the data in the graphs generated from the test’s fourth half-cycle.
When plotting the temperature at the thermocouple (TC) locations for both the baseline data and the COMSOL Multiphysics model (shown below), the researchers noted that although their model’s cooling rate during adsorption was slightly too fast, it matched the data fairly well during desorption. This could be because the 1D model has a large, simple geometry and uses an ad hoc method to include fins.
Plot comparing sorbent temperature and time. Image by R. Coker and J. Knox and taken from their COMSOL Conference paper submission.
The next step involved comparing the partial pressure of carbon dioxide at the desiccant bed influent and effluent. As the graph below indicates, both simulation and experimental results feature a spike at the beginning of the half-cycle. The plot further shows a rise at the end of the half-cycle, indicating that the sorbent bed is fully breaking through. This is fine, however, since a full sorbent bed maximizes CO_{2} removal efficiency during desorption.
Plot comparing CO_{2} partial pressure and time. Image by R. Coker and J. Knox and taken from their COMSOL Conference paper submission.
The researchers at NASA’s Marshall Space Flight Center successfully created a fully functional model of a 4BMS that can be used for the predictive modeling of an entire ISS CDRA 4BMS system. Looking ahead, the team notes that their 1D model shows promising capabilities for predicting the behavior of such a system and thus locating any potential problem areas in the 4BMS. They have, for instance, already used it to look for unexpected heat leaks in sorbent beds and to predict breakthrough behavior in desiccant beds.
The plan is to eventually validate the model against other CDRA4EU data sets. After doing so, the researchers can use it as a resource to guide the design process for the next generation of CDRA 4BMS systems, as well as to optimize atmosphere revitalization systems such as the one on the ISS.
The National Aeronautics and Space Administration (NASA) does not endorse the COMSOL Multiphysics® software.
As a starting point for our Laminar Static Particle Mixer Designer app, we will consider the Particle Trajectories in a Laminar Static Mixer tutorial, which you can download from our Application Gallery. This model is designed to evaluate the mixing performance of a static mixer by computing particle trajectories throughout the device. To learn more about this tutorial and about mixer modeling in general, I encourage you to check out these previous blog posts:
The geometry that is used in the tutorial referenced above is the same one that we will use within our app. Shown in the figure below, the model consists of a tube featuring three twisted blades with alternating rotations. The mixing blades are illustrated as gray surfaces, with the outline of the surrounding pipe also depicted. As particles are carried through the pipe by the fluid, they are mixed together by the static mixing blades.
The geometry of the laminar static mixer model.
Using our Laminar Static Particle Mixer Designer app, shown below, we can first compute the trajectories of particles as they move throughout the mixer. Then, using some built-in postprocessing tools, we can quantitatively and qualitatively evaluate how the mixer performs.
A screenshot of the user interface (UI) of the Laminar Static Particle Mixer Designer.
The app includes a large number of geometry parameters and material properties, with the option to create a mixer that utilizes one, two, or three helical mixing blades. Modifying the number of model particles and postprocessing parameters is also possible through the app’s advanced settings.
To better visualize the distribution of different species in the static mixer, we can release particles and compute their trajectories using the Particle Tracing for Fluid Flow interface. The particle positions are computed via a Newtonian formulation of the equations of motion, where the position vector components are calculated by solving a set of second-order equations:

(1)

\frac{d}{dt}\left(m_p \frac{d\mathbf{q}}{dt}\right) = \mathbf{F}_t

where \mathbf{q} (SI unit: m) is the particle position, m_p (SI unit: kg) is the particle mass, and \mathbf{F}_t (SI unit: N) is the total force on the particles. The Newtonian formulation takes the inertia of the particles into account, allowing them to cross velocity streamlines.
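To illustrate what solving the Newtonian formulation involves numerically, here is a minimal, standalone sketch that integrates a second-order equation of motion as two coupled first-order ODEs with an explicit Euler step. This is only a 1D illustration of the principle, not COMSOL's actual solver, which uses more sophisticated adaptive time stepping in 3D:

```java
import java.util.function.BiFunction;

public class NewtonianParticle {
    // Integrate m * q'' = F(q, v) as two first-order ODEs with explicit Euler.
    // Returns {q, v} after nSteps steps of size dt.
    static double[] integrate(double m, double q0, double v0,
                              BiFunction<Double, Double, Double> force,
                              double dt, int nSteps) {
        double q = q0, v = v0;
        for (int i = 0; i < nSteps; ++i) {
            double a = force.apply(q, v) / m; // acceleration from the total force
            q += v * dt;                      // dq/dt = v
            v += a * dt;                      // dv/dt = F/m
        }
        return new double[]{q, v};
    }

    public static void main(String[] args) {
        // Free particle (zero force): position grows linearly with time.
        double[] qv = integrate(1e-9, 0.0, 0.5, (q, v) -> 0.0, 1e-3, 1000);
        System.out.printf("q = %.3f m, v = %.3f m/s%n", qv[0], qv[1]);
    }
}
```

In the mixer model, the force callback would evaluate the drag exerted by the surrounding fluid, which is discussed next.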
In this model, the only force on the particles is the drag force, which is computed using Stokes’ drag law:

(2)

\mathbf{F}_D = \frac{m_p}{\tau_p}\left(\mathbf{u} - \mathbf{v}\right), \qquad \tau_p = \frac{\rho_p d_p^2}{18\mu}

where the following applies:

- \mathbf{u} (SI unit: m/s) is the fluid velocity
- \mathbf{v} (SI unit: m/s) is the particle velocity
- \tau_p (SI unit: s) is the particle velocity response time
- \rho_p (SI unit: kg/m^3) is the particle density
- d_p (SI unit: m) is the particle diameter
- \mu (SI unit: Pa·s) is the dynamic viscosity of the fluid
Stokes’ drag law is applicable for particles with a relative Reynolds number much less than one; that is,

(3)

\mathrm{Re}_r = \frac{\rho\,|\mathbf{u} - \mathbf{v}|\,d_p}{\mu} \ll 1

where \rho (SI unit: kg/m^3) is the density of the fluid. This is true in the present case. A representative sample of particles in the solution is depicted below. These particles are released at the bottom-right corner of the mixer and flow to the top-left corner. The color expression indicates the initial z-coordinate of the particles at the inlet, and it can be used to visualize the final positions of particles relative to their initial positions in the mixer cross section.
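As a quick sanity check of the Stokes regime, we can evaluate the particle response time and the relative Reynolds number for representative values. The numbers below (10 μm particles in a water-like fluid with a 1 cm/s slip velocity) are illustrative assumptions, not parameters taken from the tutorial:

```java
public class StokesDragCheck {
    // Particle velocity response time tau_p = rho_p * d_p^2 / (18 * mu).
    static double responseTime(double rhoP, double dp, double mu) {
        return rhoP * dp * dp / (18.0 * mu);
    }

    // Relative Reynolds number Re_r = rho * |u - v| * d_p / mu.
    static double relativeReynolds(double rho, double slip, double dp, double mu) {
        return rho * slip * dp / mu;
    }

    public static void main(String[] args) {
        double dp   = 10e-6;  // particle diameter (m), assumed
        double rhoP = 2200.0; // particle density (kg/m^3), assumed
        double rho  = 1000.0; // fluid density (kg/m^3), assumed
        double mu   = 1e-3;   // dynamic viscosity (Pa*s), assumed
        double slip = 0.01;   // relative speed |u - v| (m/s), assumed

        System.out.printf("tau_p = %.3e s, Re_r = %.3e%n",
                responseTime(rhoP, dp, mu), relativeReynolds(rho, slip, dp, mu));
        // Re_r is well below one for these values, so Stokes' drag law applies.
    }
}
```

For these values, Re_r = 0.1, comfortably below one, which is the kind of check that justifies using Stokes' drag law in a laminar mixer.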
Plot illustrating particle trajectories in the static mixer.
To some extent, we can judge the uniformity of a mixture by observation alone. In this example, the mixing performance can be visualized by creating a phase portrait of the particle positions. In a phase portrait, particles can be plotted in an arbitrary 2D phase space — that is, they can be arranged in a 2D plot in which the axes can be user-defined expressions. Phase portraits are, for example, often used to plot particle position versus momentum in a certain direction, a phase space distribution.
In the following animation, a phase portrait is used to observe the change in the transverse position of each particle as it moves throughout the mixer. Since the pipe is oriented in the y direction, the transverse directions are the x and z directions. The color expression denotes the quadrant that each particle occupied at the initial time; that is, dark blue particles were released with positive x- and z-coordinates, and so on.
A phase portrait indicates the transverse position of particles as they move throughout the mixer.
The phase portrait shows, qualitatively, that the particles are mixed together imperfectly at the outlet. Regions of higher and lower particle number density remain, along with visible clusters of particles of the same color — particles originating in the same quadrant.
One potential drawback of the phase portrait is that it plots the particles in phase space at equal times, not at equal y-coordinates. This can produce a somewhat misleading visualization of the mixer, as some of the particles may move closer to the mixing blades and therefore potentially reach the outlet much later than other particles. An alternative option is to create a Poincaré map, which plots the intersection points of particles with a plane at a specified location.
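A Poincaré map needs the point where each trajectory pierces the cut plane, which generally falls between two stored time steps. A minimal way to find that point is to interpolate linearly along the saved positions; this is only a sketch of the idea, not COMSOL's internal algorithm:

```java
public class PlaneCrossing {
    // Linearly interpolate the point where a particle path crosses the
    // plane y = yCut between two stored positions p0 and p1 ({x, y, z}).
    // Assumes the segment actually crosses the plane (p0[1] != p1[1]).
    static double[] crossing(double[] p0, double[] p1, double yCut) {
        double s = (yCut - p0[1]) / (p1[1] - p0[1]); // fraction along the segment
        return new double[]{
            p0[0] + s * (p1[0] - p0[0]),
            yCut,
            p0[2] + s * (p1[2] - p0[2])
        };
    }

    public static void main(String[] args) {
        double[] hit = crossing(new double[]{0, 0, 0}, new double[]{2, 1, 4}, 0.5);
        System.out.printf("(%.2f, %.2f, %.2f)%n", hit[0], hit[1], hit[2]);
    }
}
```

Collecting these intersection points for every particle at a given cut plane produces the 2D scatter shown in a Poincaré map.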
In the following image, at each cut plane, the particles are colored according to whether they were released with positive (blue) or negative (red) initial x-coordinates. Once again, we can observe a clustering of red and blue particles at the outlet.
A Poincaré map shows the location of particles on a 2D plot.
Quite a lot of information about the mixer performance can be obtained from phase portraits and Poincaré maps, but most of it is too subjective for industrial applications. A human observer can judge approximately whether different species are completely unmixed, partially mixed, or well-mixed, but the lines between these definitions are hazy and difficult to quantify. For example, any observer can see that the previous images include pockets of particles of the same color, but it is much more difficult to assign a numerical value to describe how well-mixed they are.
Fortunately, the Application Builder and Method Editor provide the tools to create specialized, high-end postprocessing routines that can assign numeric values to the performance of a specific mixer geometry. A common metric for evaluating spatial uniformity of particles is the index of dispersion, defined as the ratio of the variance to the mean:
(4)

D = \frac{\sigma^2}{\bar{n}}

The mean and variance are computed by subdividing the outlet into a number of regions, or quadrats, of equal area. Because the outlet is circular, it can be subdivided into annular regions of equal area by drawing concentric circles of radii

r_i = R\sqrt{\frac{i}{N_r}}, \qquad i = 1, 2, \ldots, N_r

where R is the radius of the outlet and N_r is the number of radial subdivisions. The annular regions can each be partitioned into domains of equal area by drawing diameters at angles

\varphi_j = \frac{j\pi}{2 N_\varphi}, \qquad j = 0, 1, \ldots, 2N_\varphi - 1

where N_\varphi is the number of angular subdivisions per quadrant. The subdivision produces N = 4 N_r N_\varphi quadrats of equal area. Letting n_i denote the number of particles in the ith quadrat, the average number of particles in each quadrat is

\bar{n} = \frac{1}{N}\sum_{i=1}^{N} n_i

The variance of the number of particles per quadrat is

\sigma^2 = \frac{1}{N}\sum_{i=1}^{N} \left(n_i - \bar{n}\right)^2
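The quadrat bookkeeping and the index of dispersion can be sketched in a self-contained form, separate from the COMSOL model API. This is an illustration of the technique described above, not the app's method verbatim:

```java
public class QuadratDispersion {
    // Map a point (x, z) inside a circle of radius R to a quadrat index,
    // using nr equal-area annuli and 4*nphi equal-area sectors.
    static int quadratIndex(double x, double z, double R, int nr, int nphi) {
        int ir = (int) Math.floor((x * x + z * z) * nr / (R * R)); // annulus index
        double phi = Math.atan2(z, x);
        if (phi < 0) phi += 2 * Math.PI;                           // map angle to [0, 2*pi)
        int iphi = (int) Math.floor(phi / (Math.PI / (2 * nphi))); // sector index
        return 4 * nphi * ir + iphi;
    }

    // Index of dispersion D = variance / mean of the per-quadrat counts.
    static double indexOfDispersion(int[] counts) {
        double mean = 0;
        for (int c : counts) mean += c;
        mean /= counts.length;
        double var = 0;
        for (int c : counts) var += (c - mean) * (c - mean);
        var /= counts.length;
        return var / mean;
    }

    public static void main(String[] args) {
        // Perfectly uniform counts give D = 0; clustering raises D.
        System.out.println(indexOfDispersion(new int[]{5, 5, 5, 5}));  // prints 0.0
        System.out.println(indexOfDispersion(new int[]{20, 0, 0, 0})); // prints 15.0
    }
}
```

The two synthetic count arrays show the metric's behavior: uniform counts give a zero index, while piling every particle into one quadrat inflates it sharply.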
The following method (p_computeIndexOfDispersion) is used in the app to compute the index of dispersion.
/*
 * p_computeIndexOfDispersion
 * This method computes the index of dispersion at the outlet.
 * The method is called in p_initApplication and in m_compute.
 */

// Get the x- and z-coordinates of the particles at the outlet
// and store them in matrices qx and qz, respectively.
model.result().numerical().create("par1", "Particle");
model.result().numerical("par1").set("solnum", new String[]{"14"});
model.result().numerical("par1").set("expr", "qx");
model.result().numerical("par1").set("unit", "m");
double[][] qx = model.result().numerical("par1").getReal();
model.result().numerical("par1").set("expr", "qz");
model.result().numerical("par1").set("unit", "m");
model.result().numerical("par1").set("solnum", new String[]{"14"});
double[][] qz = model.result().numerical("par1").getReal();

// Use the "at" operator to get the initial x-coordinates of all particles
// and store them in matrix qx0.
model.result().numerical("par1").set("expr", "at(0,qx)");
model.result().numerical("par1").set("unit", "m");
model.result().numerical("par1").set("solnum", new String[]{"14"});
double[][] qx0 = model.result().numerical("par1").getReal();

// The Particle Evaluation is no longer needed.
model.result().numerical().remove("par1");

double Ra = model.param().evaluate("Ra"); // Radius of the outlet
int Np = qx.length;                 // Number of particles
int Nr = nbrR;                      // Number of subdivisions in the radial direction
int Nphi = nbrPhi;                  // Number of subdivisions per quadrant in the azimuthal direction
int nbrQuad = Nr*4*Nphi;            // Total number of quadrats (regions)
double deltaPhi = Math.PI/(2*Nphi); // Angular width of each quadrat
int index = 0;
int ir = 0;
int iphi = 0;
int[] x = new int[nbrQuad];         // Array to store the number of points per quadrat

// Begin loop over all particles.
for (int i = 0; i < Np; ++i) {
  // Determine which quadrat each particle is in.
  ir = (int) Math.floor((Math.pow(qx[i][0], 2)+Math.pow(qz[i][0], 2))*Nr/Math.pow(Ra, 2));
  iphi = (int) Math.floor(Math.atan2(qz[i][0], qx[i][0])*Math.signum(qz[i][0])/deltaPhi);
  if (Math.signum(qz[i][0]) < 0) {
    iphi = (int) Math.floor((2*Math.PI-Math.atan2(qz[i][0], qx[i][0])*Math.signum(qz[i][0]))/deltaPhi);
  }
  index = 4*Nphi*ir+iphi;
  // Consider only half of the particles when evaluating mixer performance.
  if (qx0[i][0] < 0) {
    x[index] = x[index]+1;
  }
}

// Compute the mean number of particles per quadrat.
double sum = 0;
for (int i = 0; i < nbrQuad; ++i) {
  sum += x[i];
}
double xmean = sum/nbrQuad;

// Compute the variance and the index of dispersion (variance/mean).
sum = 0;
for (int i = 0; i < nbrQuad; ++i) {
  sum += Math.pow(x[i]-xmean, 2);
}
indexOfDispersion = (sum/nbrQuad)/xmean;
The last line of this method returns the index of dispersion. In general, a reduction of the index of dispersion corresponds to an improvement in the uniformity of the particle distribution. With the default parameters in the app, the index of dispersion is approximately 900 when three mixing blades are used, 1200 when two blades are used, and 1400 when only one blade is used. Thus, the index of dispersion quantitatively shows what we can see by looking at the plots: that a larger number of mixing blades produces a more uniform mixture of particles.
Today, we have shown you how the Application Builder can advance your studies of static mixers. By creating an app, you can optimize the overall design workflow by spreading simulation capabilities to a wider audience, with the opportunity to gain a more accurate overview of mixing performance by assigning numerical values to different mixer geometries.
Interested in learning more about how to design simulation apps of your own? Be sure to check out the resources below.