I recently read a *NewScientist* article, “Soap-bubble cyclone is a deadly storm in miniature”, in which a group of physicists at the University of Bordeaux, France, heated soap bubbles from underneath to create spinning storm-like structures. The resulting patterns resembled tropical cyclones in Earth’s atmosphere, and it got me thinking about how to model this using COMSOL Multiphysics.

But first, let’s go over the concept of a shear layer.

A shear layer is a region of finite thickness with steep velocity gradients separating two parallel streams in contact and moving at different speeds. For the Kelvin-Helmholtz instability, small perturbations of the shear layer trigger its evolution into an array of large-scale vortices dissipating into smaller ones.

Since this is easier to visualize than to explain in words, let’s take a look at two natural phenomena.

In the figure below, you can see a rolling wavy cloud, also known as a *billow*. The stream above the cloud is moving towards the right, while the cloud can be either still or moving towards the left. A small perturbation in their separation plane, i.e., the shear layer, triggers the rolling motion that gives these clouds their characteristic shape, which indicates a region of high turbulence that aircraft should avoid.

*Rolling wavy cloud caused by the Kelvin-Helmholtz instability. Photo credit: GRAHAMUK.*

Jupiter’s Great Red Spot, shown below, developed between 150 and 300 years ago, is about 2-3 times the size of Earth, and is the fiercest manifestation of the Kelvin-Helmholtz instability in our solar system. At least four shear layers can be seen in this image, with the Great Red Spot being the biggest roll trapped between two jet streams.

*Image taken by a Voyager 2 flyby showing Jupiter’s storms in the neighborhood of the Great Red Spot. Photo credit: National Aeronautics and Space Administration (NASA).*

The Kelvin-Helmholtz instability can be found everywhere in nature and can also be exploited industrially, for instance to enhance mixing. When working to predict or prevent its onset, simulation can be of great help to designers.

In order to trigger the Kelvin-Helmholtz instability, we need to define an initial velocity field consisting of a horizontally aligned shear layer of finite thickness with a vertical velocity perturbation. To simulate a jet with COMSOL Multiphysics, we can use the values from the article “A Second-Order Projection Method for the Incompressible Navier Stokes Equations”.

We want to use the following expressions for the initial velocity field:

If we type these expressions into the Initial Values settings window in the COMSOL software:

we can visualize the velocity field:

*Velocity magnitude and arrow surface plot at t = 0 s.*
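Before (or instead of) entering expressions like these in the software, it can help to sanity-check them in a quick script. The exact expressions appear as images in the original post, so the jet profile and parameter values below (ρ = 30, δ = 0.05) are the ones commonly quoted with the Bell–Colella–Glaz benchmark and should be treated as assumptions:

```python
import numpy as np

# Doubly periodic shear layer on the unit square: a horizontal jet bounded
# by two shear layers (tanh profiles of thickness ~1/rho), plus a small
# vertical perturbation of amplitude delta. rho = 30 and delta = 0.05 are
# the values commonly used with this benchmark -- assumed here.
rho, delta = 30.0, 0.05
n = 128
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))

u = np.where(y <= 0.5, np.tanh(rho * (y - 0.25)), np.tanh(rho * (0.75 - y)))
v = delta * np.sin(2.0 * np.pi * x)
```

Plotting the magnitude of `(u, v)` reproduces the central band seen in the figure above: a jet moving one way, the surrounding fluid moving the other, separated by two thin shear layers.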

In addition to the CFD analysis, we can also perform a mass transport simulation to try to mimic the results obtained from the work mentioned in the *NewScientist* article on soap-bubble cyclones.

We want to use the following initial concentration expression:

If we type this into the Initial Values settings window:

we can visualize the concentration:

*Concentration at t = 0 s.*

The simulation is performed inside a unit square with Periodic Flow Conditions, so as to let the initial velocity field develop into a doubly periodic shear layer. To produce the above results, the density and viscosity should be set to 1 kg/m^{3} and 10^{-4} Pa·s, respectively. A similar Periodic Condition should be used for the mass transport simulation.
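With the quoted material properties, a quick back-of-the-envelope Reynolds number shows why the flow is vigorous enough for the instability to develop. The characteristic length and velocity scales below (the unit square side and the jet velocity amplitude) are my assumptions:

```python
# Back-of-the-envelope Reynolds number for the settings quoted above.
# rho and mu come from the post; L and U are assumed characteristic scales
# (unit square side, unit jet velocity amplitude).
rho = 1.0     # kg/m^3
mu = 1e-4     # Pa*s
L = 1.0       # m
U = 1.0       # m/s
Re = rho * U * L / mu
print(Re)     # 10000.0
```

A Reynolds number of order 10^4 means viscous diffusion is far too weak to smooth out the perturbation before the vortices roll up.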

The simulation results show how the shear layers surrounding the central jet evolve into periodic vortices as shown in the animation below. Three rolls can be observed, two on top, rotating counterclockwise, and one below, rotating clockwise. The shear layers wrap around these rolls and are progressively thinned by the large straining field caused by the Kelvin-Helmholtz instability. Kinetic energy is transferred from the jet to the shear layers, which will continue to thin and dissipate into smaller vortices.

*Animation of the CFD analysis results. Top-left plot: Pressure field. Top-right plot: Velocity magnitude and Arrow Surface. Bottom-left plot: Vorticity surface, vorticity contours, and velocity Arrow Surface. Bottom-right plot: Arrow Line. Left legend: Vorticity [1/s]. Top-right legend: Pressure [Pa]. Bottom-right legend: Velocity magnitude [m/s].*
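In postprocessing terms, the vorticity shown in the animation is just the curl of the velocity field. As a minimal sketch (the verification field below is my own illustration, not the simulation data), it can be computed on a doubly periodic grid with central differences:

```python
import numpy as np

def vorticity(u, v, dx, dy):
    # Out-of-plane vorticity w = dv/dx - du/dy, with periodic wrap-around
    # handled by np.roll (central differences).
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2.0 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * dy)
    return dvdx - dudy

# Check against a field with known vorticity:
# u = sin(2*pi*y), v = 0  gives  w = -2*pi*cos(2*pi*y)
n = 256
coords = np.linspace(0.0, 1.0, n, endpoint=False)  # periodic grid
x, y = np.meshgrid(coords, coords)
u, v = np.sin(2.0 * np.pi * y), np.zeros((n, n))
w = vorticity(u, v, 1.0 / n, 1.0 / n)
err = np.abs(w - (-2.0 * np.pi * np.cos(2.0 * np.pi * y))).max()
```

The second-order central difference keeps the error small here; on the actual simulation data, the same stencil applied to the computed `(u, v)` arrays would reproduce the vorticity surfaces in the animation.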

Given the symmetry of the initial values and the boundary conditions we used, the evolution of the top and bottom layers is mirrored. Below, you can see the results at t = 2 s. The high-frequency components of the vorticity field are smeared because the mesh is not fine enough to resolve the details of the flow.

*CFD analysis results at t = 2 s.*

In order to overcome such limitations, a finer mesh is needed, which will result in higher memory demand and longer computational time — both of which are outside the scope of this blog post. Nevertheless, in this simulation, the larger vortices don’t exhibit oscillations or shape distortions, which indicates a good resolution of the overall dynamic behavior of the jet. Their onset is characteristically slow until a point is reached when the instability develops quickly and fully.

In the mass transport simulation, we consider the concentration to diffuse and be passively advected by the jet. In the simulation results below, you can notice that the dynamics of the concentration tightly follow the jet evolution even in the presence of the diffusion mechanism. The simulation results resemble the phenomena discussed in the *NewScientist* soap bubble article.

*Mass transport simulation animation. Legend: Concentration [mol/m^{3}].*

When studying the dynamics of a physical phenomenon with simulation, postprocessing is of paramount importance since it can confirm a physical intuition, give us better insight into what we’re studying, or show unexpected behavior. In this case, we can expect dust or ice crystals to be trapped within the rolls.

Here, I used the Particle Tracing Module to simulate such a scenario. The animation and particle trajectories plot confirm what we intuitively expected.

*Particle tracing simulation animation.*

The next step could be the determination of residence time, which, in the case of the Great Red Spot, might range from 150 to 300 years.

Imaging systems have long been the subject of study for famous physicists like Maxwell, who proposed a fish-eye lens that uses a gradient-index medium to image between a pair of points in space. The pair is defined by two opposite points lying on the spherical surface. Such a lens was supposed to be a “perfect imaging” system or, in other words, a system capable of focusing (imaging) the smallest detail from one point of its surface to another. Maxwell’s proposal was considered impossible to implement with an ordinary material with a positive index of refraction due to the diffraction limit. In practice, this means that in processes like photolithography, the size of the features of an electronic device cannot be smaller than the wavelength of the light being used.
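To put a number on the diffraction limit, a Rayleigh-type estimate of the minimum resolvable feature is d ≈ k₁·λ/NA. The wavelength, numerical aperture, and prefactor below are illustrative assumptions of mine, not values from the story:

```python
# Rough diffraction-limited resolution estimate for optical lithography,
# d ~ k1 * lambda / NA. All three values below are illustrative assumptions.
wavelength_nm = 193.0   # a common deep-UV excimer laser line
na = 0.9                # numerical aperture of the projection optics
k1 = 0.61               # Rayleigh-type prefactor
d_min_nm = k1 * wavelength_nm / na
print(round(d_min_nm, 1))  # 130.8
```

With conventional positive-index optics, features much below this ~100 nm scale are out of reach, which is exactly the barrier super imaging aims to break.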

In 2004 it was proven that an artificial material with a negative refractive index (also known as a *metamaterial*) could be used to overcome the diffraction limit. Later, in 2009, a breakthrough theory showed that an ordinary material could in fact be used to manufacture a Maxwell fish-eye lens. The latter approach intrigued professor Juan Carlos Miñano and his research team at Cedint Polytechnic University of Madrid. They decided to use simulation to prove the theory that the diffraction limit could be surpassed by designing a device with the equivalent optical properties of a Maxwell fish-eye lens, but with a different geometry: a spherical geodesic waveguide.

A spherical geodesic waveguide, which was designed by Miñano and his colleagues, is a very thin spherical metallic waveguide filled with a non-magnetic material (see figure below). At the moment, it’s still a proposed device that can be studied, optimized, and fabricated thanks to simulation. Miñano’s team couldn’t resort to geometrical optics, and therefore, to solve Maxwell’s equations with real-world accuracy, they decided to rely on COMSOL Multiphysics and the RF Module. The spherical geodesic waveguide model was designed and simulated using COMSOL by postdoctoral researcher Dejan Grabovickic from Miñano’s group.

*The spherical geodesic waveguide with the drain port on top (left), where a cross section of the coaxial cable and its mesh are shown. The cross section (right) of the spherical geodesic waveguide including the drain port.*

The spherical geodesic waveguide is designed for short-distance transmission and demonstrated super imaging properties: it can sense changes in the position of its receiver that are much smaller than the wavelength of the light being used. Super imaging could drastically reduce the size of integrated electronics, and as Dejan states in the *IEEE Spectrum* magazine insert, *Multiphysics Simulation*, it could allow for the production of integrated electronics that are “much smaller than what is the state of the art — something like 100 times smaller.”

- Catch the full details on this fascinating story of scientific ingenuity in the User Story Gallery: “A 100-Fold Improvement in Lithography Resolution Realized with a 150-Year-Old ‘Perfect Imaging’ System”
- Read more user stories similar to this one in the *IEEE Spectrum* insert, *Multiphysics Simulation*
- For those of you interested in learning more about the theory behind and the implementation of the simulation set up by Dejan, I suggest you also read:

Modern wetsuits are made of *neoprene*, and they don’t actually prevent us from getting wet — instead, they are designed to keep us warm. If a wetsuit is tight enough, the layer of water that fills the gap between the wetsuit and our skin will be so thin that it will readily be warmed up by our body heat. After the initial chill, the insulation offered by the foam neoprene allows us to swim and enjoy the water without experiencing its true temperature. In order to maintain their thermal insulation properties while in use, wetsuits are designed to prevent what is called *flushing*: cold water entering the wetsuit through the ankles, wrists, and neck.

Why does foamed neoprene offer such good thermal insulation? Foamed neoprene contains small bubbles of nitrogen gas. The presence of such a gas with a very low thermal conductivity reduces heat transfer by almost an order of magnitude, compared to having the skin directly in contact with water. In addition to its favorable thermal properties, there are almost no convection currents in a gas trapped in such tiny bubbles. Heat transport in foamed neoprene occurs mainly through *diffusion*, a slow transport mechanism that reduces heat loss from our body.

To build my model I used COMSOL Multiphysics and the Heat Transfer Module, and considered a 2D domain made with the following layers, from left to right:

| Material | Thickness/Diameter | Thermal Properties |
|---|---|---|
| Skin | 2 mm | Same as water |
| Water | 0.5 mm | Temperature dependent, taken from the Material Library available in COMSOL Multiphysics |
| Comfortable fabric (nylon) | 0.2 mm | Same as above |
| Titanium metal oxide | 0.05 mm | Same as above |
| Neoprene | 3 mm | Same as above |
| Nitrogen bubbles | 0.025 mm | Same as above |
| Resistant fabric (nylon) | 0.2 mm | Same as above |
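As a back-of-the-envelope check of why the foam matters, one can sum the conduction resistances of the layers in series, R = Σ tᵢ/kᵢ. The thicknesses come from the table above; the conductivity values are typical handbook numbers that I am assuming here, not the temperature-dependent data used in the actual model:

```python
# Series thermal-resistance sketch of the wetsuit stack. Thicknesses (m)
# are from the table; conductivities (W/(m*K)) are assumed handbook values.
layers_foamed = [
    ("water film",      0.5e-3, 0.6),
    ("nylon liner",     0.2e-3, 0.25),
    ("foamed neoprene", 3.0e-3, 0.05),
    ("nylon shell",     0.2e-3, 0.25),
]

def total_resistance(layers):
    # R = sum of thickness / conductivity, in K*m^2/W
    return sum(t / k for _, t, k in layers)

R_foamed = total_resistance(layers_foamed)

# Same stack with solid neoprene (k ~ 0.2 W/(m*K), also an assumed value)
layers_solid = [row if row[0] != "foamed neoprene"
                else ("solid neoprene", 3.0e-3, 0.2) for row in layers_foamed]
R_solid = total_resistance(layers_solid)
```

With a 100 W/m² body heat flux, the temperature drop across the suit is ΔT = q·R, so the roughly 3–4× higher resistance of the foamed stack is what buys the extra minutes of warmth discussed below.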

The values provided in the table above are based on data that I found in an online search. If needed, or if I come across new information or specifications, I can test different materials and geometrical properties with just a few clicks. Additionally, my model is fully parameterized. That’s one of the main advantages of simulation — it enables the verification and optimization of a design right on the computer, before launching an expensive experimental campaign. The model geometry is depicted below, in Figure 1.

*Figure 1: Foamed neoprene model geometry with postprocessing cut line (in red).*

Boundary conditions and initial temperature used in the model are summarized below:

| Body Heat Flux, left boundary | Sea Temperature, right boundary | Initial Temperature, all domains |
|---|---|---|
| 100 W/m^{2} (Ref. 2 and 3) | 17.2°C (Ref. 4, September) | 36.1°C (Ref. 5) |

The simulation results shown in Figure 2 confirm that having a foamed neoprene wetsuit, with the nitrogen bubble insulation, is a good choice, especially if we’re in cold water for more than five minutes. Compared to a non-foamed neoprene wetsuit, we gain three more minutes before the skin temperature drops to 24°C.

*Figure 2. Skin temperature vs. time.*

Figures 3 and 4 provide more details about the temperature profile along a horizontal cut line and the temperature distribution across the whole simulation domain, respectively.

*Figure 3. From top to bottom: model geometry with postprocessing cut line (in red); temperature values along the cut line at t = 2, 6, and 9 min.*

*Figure 4. Temperature distribution at t=2 minutes (left) and 9 minutes (right).*

I’m sure you noticed in Figure 1 that I actually drew in the nitrogen bubbles. Alternatively, I could have applied the technique for simulating Homogenized Pore Scale Flow and Thermal Conduction, which represents the bubbles with their equivalent volume-averaged thermal properties, so that they can be simulated without being drawn and without losing too much accuracy. COMSOL also provides boundary conditions specifically designed to model domains that are geometrically much smaller than the rest of the model, which would have let me avoid drawing each bubble individually. See Figures 5, 6, and 7 for more details about the options available in the Heat Transfer Module.

*Figure 5.* Highly Conductive Layer *boundary condition (click to enlarge).*

*Figure 6.* Thin Thermally Resistive Layer *boundary condition (click to enlarge).*

*Figure 7.* Thermal Contact *boundary condition (click to enlarge).*

When creating the model, I used the following resources to set it up and select the model properties:

- Homogenized Pore Scale Flow and Thermal Conduction functionality in COMSOL, which allows for the use of image data to represent 2D material distributions and use the image’s color or gray scale to identify regions with different materials
- Thermoregulation, a resource about how the body regulates its heat output power
- Skin description
- Boston’s average water temperature
- Hypothermia

*Figure 1 (adapted from Anderson). (a) Vector sum of forces acting on a moving sailboat; (b) forces acting on a sailboat under the mainsail only and proceeding at a constant course and speed; (c) main parts of a sailboat; (d) frontal view of a bulb keel.*

When the wind is coming from the side of the boat, something is needed to impede excessive side-slipping, and this “something” is the keel. Let’s take a look at figures 1a and 1b, where the forces acting on a sailboat moving at constant speed and course are depicted.

The mainsail acts like an asymmetric wing moving through the air. The generated lift force has two components: one driving the sailboat forward and another causing it to drift downwind. The keel acts like a symmetric wing moving through the water at an angle of attack because of the downwind drift induced by the mainsail. A lift force is also generated in this case, with its main component counteracting the downwind drift. When all forces acting on the sailboat balance each other, the sailboat still drifts downwind, but only slightly, thanks to the keel. Such a sideways motion is called *leeway*. Sailboat designers thereby face a paradox (Hunter & Killing): they need to allow the boat to *have some leeway* in order for the keel to have an angle of attack and generate the lift needed to *oppose leeway*.

Keel design has evolved to take advantage of wing theory and computational fluid dynamics (CFD) analysis. The current trend for recreational sailboats is to have a not-too-deep, narrow keel with a bulb with elliptic edges at the bottom. Such a design delivers the proper amount of righting moment with minimal drag. As a CFD buff, I often ponder the physics of sailing when I’m out on the water. It’s in that spirit that I have performed a CFD analysis of a bulb keel, and I’d like to share the results with you.

The bulb keel geometry used for the CFD analysis has an elliptic cross section and is shown in figure 2. Water velocity is set to 5 knots, about 86% of the hull speed for a 19-foot sailboat, and is at a 45° angle with the bulb keel, which means that the sailboat is sailing upwind at an angle of about 30° (figure 1b). I’ve built the geometry directly in COMSOL Multiphysics and used the CFD Module to take advantage of its turbulent flow modeling capabilities.
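The “about 86% of the hull speed” figure can be checked with the standard empirical displacement-hull formula, hull speed ≈ 1.34·√(LWL in feet); treating the 19-foot boat length as the waterline length is my assumption:

```python
import math

# Empirical displacement hull speed in knots: 1.34 * sqrt(LWL in feet).
# Using the 19-foot boat length as the waterline length is an assumption.
def hull_speed_knots(lwl_feet):
    return 1.34 * math.sqrt(lwl_feet)

hs = hull_speed_knots(19.0)   # about 5.84 knots
frac = 5.0 / hs               # about 0.86
```

So the 5-knot inflow used in the simulation indeed sits at roughly 86% of hull speed for a boat of this size.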

The goal of the simulation was to observe in detail the fluid flow around the keel, determine pressure and total stress distribution, observe vortex formation responsible for induced drag, and find generated lift and its center of pressure. Simulation results are shown in the figures below.

*Figure 2. Port view of the bulb keel 3D geometry.*

*Figure 3. Velocity isosurfaces. Views: port side (image on the left) and starboard side (image on the right).*

*Figure 4. Pressure distribution. Views: port side (image on the left) and starboard side (image on the right).*

*Figure 5. Streamlines show vortex formation behind the bulb keel. Water is flowing at a 45° angle with the bulb keel. Views: port side (image on the top), from the stern (image on the bottom left), and starboard side (image on the bottom right).*

*Figure 6. Total stress components distribution on the bulb keel. Views: starboard side (image on the top) and port side (image on the bottom).*

*Figure 7. Position of the center of pressure (initial point of the red vector) and direction of the generated lift. Note that the center of pressure is placed outside the bulb keel; meaning the pressure distribution results in applying both a force and a clockwise moment on the keel. The magnitude of the moment can be determined by multiplying the magnitude of the force and its distance from the center of gravity of the keel.*

Designing a sailboat is a challenging task, an art that requires engineering skills and the best simulation software. Would almost 450 pounds-force of generated lift be enough to provide my sailboat with the right amount of leeway and heeling? A lot depends on the choice of rudder, hull, and sails. Next time I go sailing in Boston Harbor, I’ll take a closer look at the sailboats available at Courageous Sailing Center. The next step in exploring the physics of sailing is to add a rudder to my simulation.

- *The Physics of Sailing Explained*, B.D. Anderson, Sheridan House, 2003
- *Yacht Design Explained*, D. Hunter and S. Killing, W.W. Norton, 1998

When dealing with cooling, the first decision to make is: active or passive? If we go for an active cooling technology, then we have to rely on mechanical components. Such a solution is not “green” and usually results in high operating costs. What if we opt for a passive cooling method instead? In this case our knowledge of physics will help us satisfy our cooling needs without energy-consuming components. A good example of how to apply the principles of physics to implement a passive cooling technology is the design of “green” buildings. Another example that I’m sure you will find interesting is wine cellars, where the thermal properties of the ground surrounding the cellar help mitigate the variations in temperature from the outside environment.

Active cooling applied to wine storage is pretty simple. You can think of it as a bigger version of a refrigerator. Simulation can help a lot in achieving an efficient design where, basically, if we have enough power, we can make our cooling system work even if the size of the storage or the distribution of the wine racks is not optimal.

What I believe is more challenging (and fun!) than an actively cooled cellar is the design of a passively cooled one. It’s true that its operating costs are very low and its performance doesn’t depend on external sources of energy, like electricity. Yet, its design needs to satisfy several constraints that are not set by us. In order to succeed, we have to answer questions like: Given certain thermal properties of the ground, or a location subject to a certain kind of climate, how deep should I dig for my cellar for it to work? How big should it be? Where should I put my wine racks?

When simulation wasn’t as fast, accurate, and easy to perform as it is today, the construction of a passively cooled cellar relied a lot on experience. I’m sure that more than a few of them ended up too cold or too warm (wine should be stored at around 12°C, and deviations from this value shouldn’t exceed ±2°C). Digging deeper, or reducing the depth of a cellar once it has already been built and is operating, must have been expensive.

Today, we can easily set up a simulation and find the optimal cellar depth given the thermal properties of the ground and the variation of the temperature above it as a function of time. In Figure 1, I’ve depicted the computational domain. As you can see, I’ve opted for a 2D axisymmetric simulation and used an infinite element domain to reduce the size of the simulation without sacrificing its accuracy. In other words, an infinite element domain is a domain that is mathematically stretched out towards infinity. We don’t have to worry about implementing such a fantastic technology; COMSOL Multiphysics takes care of it for us under the hood.

*Figure 1. Domains used for the simulation. The simulation is 2D axisymmetric.*

I positioned the cellar at a depth of 2 m, added wine racks (I assumed their thermal properties to be the same as water), included the air (which is a good insulator) present in the cellar, and assigned an initial temperature of 12°C. As a boundary condition at the surface of the ground, I used the expression T* _{ground}* = T

It’s interesting to see from the results shown in Figure 2 that the thermal properties of the ground and the depth and geometry of the cellar enable the dampening of the heat wave originating from the surface of the ground. In this case, both the thermal properties and the depth are optimal. The initial value of 12°C doesn’t vary more than 3°C. This design can be accepted by most wine buffs and can be further refined by running a simulation where different depths and cellar configurations are explored.

*Figure 2. Temperature as a function of time in different points of the computational domain.*

Below you can see a 3D representation of the temperature field after 1 year (the temperature in the infinite element domain is not shown). We can note that at the height of summer the temperature of the cellar is still acceptable, with the ground and air helping greatly to insulate it.

*Figure 3. Temperature distribution in the computational domain after 1 year.*

Figure 4 below depicts the temperature profile along the axis of symmetry with the cellar placed between 2 m and 4 m below the surface of the ground. This is one of my favorite plots since it allows us to really “see” the thermal dampening effect of the ground surrounding the cellar and capture the temperature inversion that occurs during the summer and winter. Pretty neat, wouldn’t you say?
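The dampening and the phase lag behind both effects can be sketched analytically: for a semi-infinite ground with a sinusoidal surface temperature, T(z, t) = T_mean + ΔT·e^(−z/d)·sin(ωt − z/d), where d = √(2α/ω) is the damping depth. The soil diffusivity α below is a typical literature value that I am assuming, not the one used in the model, and T_mean and ΔT are illustrative:

```python
import numpy as np

# Damped annual heat wave in semi-infinite ground:
#   T(z, t) = T_mean + dT * exp(-z/d) * sin(omega*t - z/d),  d = sqrt(2*alpha/omega)
# alpha is an assumed typical soil diffusivity; T_mean and dT are illustrative.
alpha = 1.0e-6                     # m^2/s
period = 365.0 * 24 * 3600         # one year, in seconds
omega = 2.0 * np.pi / period
d = np.sqrt(2.0 * alpha / omega)   # damping depth, about 3.2 m

def T_ground(z, t, T_mean=12.0, dT=10.0):
    return T_mean + dT * np.exp(-z / d) * np.sin(omega * t - z / d)

# Amplitude of the yearly swing at 3 m depth vs. at the surface
amp_surface = 10.0
amp_3m = 10.0 * np.exp(-3.0 / d)
```

The exponential factor reproduces the damping, and the z/d phase shift is exactly the summer/winter temperature inversion visible along the cut line: a few meters down, the ground is warmest when the surface is coldest.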

*Figure 4. Temperature profile as a function of depth along the axis of symmetry r=0 (see Figure 1). The vertical dashed line represents the average temperature of ≈12°C. The two horizontal dashed lines represent the ceiling (z=-2 m) and the floor (z=-4 m) of the cellar, respectively. The four solid lines represent the temperature profile for different seasonal points. Profiles are not symmetrical around the average temperature due to the damped wave nature of the solution.*

As you’ve seen here, by modeling temperature virtually, it’s not only possible to design our own passive cooled wine cellar, but it’s relatively easy, too.


*Ficus microcarpa: overview (left), aerial roots detail (right). Pictures taken in Naples, FL, USA.*

The upward transport of water and nutrients is mainly due to phenomena like capillary pressure and osmosis. This means that we are dealing with a fluid dynamics problem; the fluid is flowing because of a pressure difference between two points. So nature has to solve the following problem: given a certain pressure difference, what root distribution would maximize the flow rate and synthesis rate of nutrients? That problem further reminds me of a simulation with a very similar goal — the optimization of a microreactor.

In this simulation a solution is pumped through a catalytic bed, or microreactor, where a reactant undergoes chemical reaction within the porous catalyst. The simulation accounts for fluid dynamics of the solution pumped through the catalytic bed, mass transport of the reactant, and its reaction rate. The distribution of catalyst in the catalytic bed determines the total reaction rate: a large amount of catalyst results in a low flow rate through the bed while less catalyst gives a high flow rate but low conversion of the reactant. Now we need simulation to optimize our design. COMSOL Multiphysics can find an optimal catalyst distribution by maximizing the reactant conversion for a given total pressure difference across the bed. Microreactors are widely used in process engineering due to their energy efficiency, scalability, safety, and their finer degree of control. I think that you will agree that the time invested in optimizing them is well worth it.

*Simulation results: distribution of the porous catalyst and velocity vector field and streamlines are shown.*

The figure above shows one of the possible optimal distributions of a porous catalyst found by COMSOL Multiphysics. To me, it resembles the aerial roots of the ficus microcarpa, and many other “nutrient distribution networks” that can be found in nature too for that matter.

We have to bow to Mother Nature: in order to mimic what she has optimized naturally over thousands of years, we need cutting-edge multiphysics simulation technology.


*Open bag of marshmallows.*

Finding my open bag of partially dried-out marshmallows raised the question: what is the diffusion coefficient of water inside the marshmallow bag? I realized that this could be answered by observing what happened to the open bag of marshmallows on my counter; I knew the geometry of the bag, what day it was opened, and I assumed that the “marshmallow water content profile” after 30 days was a straight line. This was the *experimental profile* I used to set up a mass transport by diffusion simulation with the goal of deducing the diffusion coefficient of water in the bag of marshmallows (from here on referred to as *Diff*).

*Note: In order to find the value of the water content at each end of the experimental profile, I needed the water concentration values of both the air and the fresh marshmallows. The former can be derived by knowing that the absolute humidity of air with a relative humidity of 30% at 20°C is ≈5.2 g/m^{3}, which corresponds to a water concentration of ≈0.3 mol/m^{3}. For the latter, I had to do some research and decided to use the data provided here, corresponding to a water concentration of ≈7,781 mol/m^{3}.*

The initial water concentration in the bag is 7,781 mol/m^{3}, while its value at the open end of the bag is 0.3 mol/m^{3}. The figure below shows the geometry used for the simulation, along with the line where the *experimental profile* is compared with the simulations for different values of *Diff*.

*Marshmallow bag geometry and profile line along which the simulated water concentration is compared to the experimental profile.*

I ran the simulation for different values of *Diff*, expressed in m^{2}/s, and compared the results to the experimental profile (see plot below). The value of *Diff* that best fit the experimental profile was 2·10^{-9} m^{2}/s.
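A parameter sweep of this kind can be sketched even in a toy 1D setting. The bag length and the grid below are my own assumptions (the post doesn’t give the bag dimensions); only the two boundary concentrations come from the text:

```python
import numpy as np

# Toy 1D sketch of the Diff sweep. L_bag and the grid are assumptions;
# c_fresh and c_air are the concentrations quoted in the post.
L_bag, days = 0.2, 30                 # m (assumed), days
c_fresh, c_air = 7781.0, 0.3          # mol/m^3

def profile_after(diff, n=101):
    """Explicit finite-difference water concentration profile after `days`."""
    dx = L_bag / (n - 1)
    dt = 0.4 * dx**2 / diff           # below the explicit stability limit
    steps = int(days * 24 * 3600 / dt)
    c = np.full(n, c_fresh)
    c[0] = c_air                      # open end of the bag, held fixed
    for _ in range(steps):
        c[1:-1] += diff * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[-1] = c[-2]                 # closed end of the bag: zero flux
    return c

c30 = profile_after(2e-9)             # the best-fit value from the post
```

Repeating `profile_after` for a range of `diff` values and comparing each curve to the assumed straight-line experimental profile is exactly the fitting procedure described above, just in one dimension.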

*Simulated water concentration vs. experimental profile.*

*Isosurfaces and surface plots of water concentration for Diff = 2·10^{-9} m^{2}/s.*

Even though I had to rely on several simplifying assumptions (an experimental profile determined “by hand”; a diffusion coefficient independent of water concentration and time; neglect of the shape and packing of the marshmallows and of any chemical reactions; and constant boundary conditions), I was able to establish the order of magnitude of the diffusion coefficient and start to understand more about what is happening.

I might even try different bag shapes to find out which shape is less prone to dry out the marshmallows. The next step could also be to include all the effects that I’ve neglected here in a new simulation that would accurately answer the question: how long does it take to dry all the marshmallows out?

Well, in my case, after a couple of months, a quarter of the bag is still edible. You can start from the information provided in this post, run your simulation, and let me know when the last marshmallow will be as hard as a rock!

There’s a lot of applied physics at work when cooking food. If you’re intrigued by this, check out the following resources:

Plenty of simulations related to the food industry have also been presented at the COMSOL Conference over the years. Here are a few that grabbed my attention:


In our earlier blog post, we explained that oscillatory waves of Ω^h appear as smoother waves on the coarser grid Ω^2h (Ω^2h has been used to denote the grid with a spacing 2h). Along the same lines, we can show that the smooth waves on Ω^h appear more oscillatory on Ω^2h. It is a good idea, then, to move to a coarser grid when convergence begins to stall: predominantly smooth waves on Ω^h will appear more oscillatory on Ω^2h, where the smoothing property of iterative methods makes them more efficient.

As such, coarse grids can be used to generate improved initial guesses as in the following nested iteration procedure:

- Iterate on Au = f on the coarsest grid
- …
- Iterate on Au = f on Ω^4h to obtain an initial guess for Ω^2h
- Iterate on Au = f on Ω^2h to obtain an initial guess for Ω^h
- Iterate on Au = f on Ω^h to obtain a final approximation to the solution

Nested iteration seems to be a good procedure, but once we are on the fine grid, what happens if we still have smooth error components and convergence begins to stall? The residual equation comes in handy at this point, with multigrid methods showing us how to use it effectively together with nested iteration.

Note: Information between the grids is transferred through two operators: *prolongation* (from a coarse grid to a fine grid) and *restriction* (from a fine grid to a coarse grid). Although they represent an important part of multigrid methods and are a fascinating subject, we will not cover them in today’s blog post for the sake of brevity.

The solution u of the system Au = f is unknown while we compute the approximate solution v, and the norm of the *algebraic error* e = u − v is a measure of how well our iterative method is *converging* to the solution. (The equation can also be written as u = v + e.) Since the algebraic error is unknown as well, we have to rely on a measurement that can be computed: the *residual* r = f − Av.

After some algebra, we identify the important relationship between e and r: the *residual equation* Ae = r. The residual equation is important, as it leads us to the idea of residual correction. That is, we compute the approximate solution v and the residual r = f − Av, solve Ae = r directly thanks to the residual equation, and then compute a new approximate solution using the definition of the algebraic error: u = v + e.
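As a tiny numerical illustration of residual correction (a generic 5×5 system of my own, not from the post), solving the residual equation exactly recovers the solution from any approximation:

```python
import numpy as np

# For any approximation v of the solution u of A u = f, solving the
# residual equation A e = r exactly and correcting recovers u = v + e.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)  # a well-conditioned matrix
u = rng.standard_normal(5)                         # the "unknown" solution
f = A @ u
v = u + 0.1 * rng.standard_normal(5)               # a rough approximation of u
r = f - A @ v                                      # the computable residual
e = np.linalg.solve(A, r)                          # residual equation A e = r
u_new = v + e                                      # residual correction
```

Of course, solving Ae = r exactly is as hard as the original problem; the point of multigrid is to solve it approximately on a coarser, cheaper grid.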

Further, with the residual equation, we can iterate directly on the error and use the residual correction to improve the approximate solution. While iterating on Ω^h, convergence will eventually begin to stall if smooth error components are still present. We can then iterate the residual equation on the coarser grid Ω^2h, return to Ω^h, and correct the approximate solution first obtained there. The procedure, called *coarse grid correction*, involves the following steps:

- Iterate ν1 times on Au = f on the fine grid Ω^h to obtain the approximate solution v, until reaching a desirable convergence
- Compute the residual r = f − Av
- Solve the residual equation Ae = r on the coarse grid Ω^2h to obtain the algebraic error e (the restriction operator needed by r has been omitted)
- Correct the approximate solution with the error: v ← v + e (the prolongation operator needed by e has been omitted)
- Iterate ν2 times on Au = f on Ω^h to obtain the approximate solution v until a desired convergence is reached
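The five steps above can be put together in a minimal two-grid sketch for a hypothetical 1D Poisson problem. Weighted Jacobi serves as the smoother, full weighting as the restriction, and linear interpolation as the prolongation; all of these are illustrative choices:

```python
import numpy as np

def poisson_matrix(n):
    """Matrix for -u'' = f on [0, 1] with n intervals (interior points only)."""
    h = 1.0 / n
    return (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2

def restrict(r):
    """Full-weighting restriction: fine-grid values -> coarse-grid values."""
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):
    """Linear interpolation: coarse-grid values -> fine-grid values."""
    out = np.zeros(2 * len(e) + 1)
    padded = np.concatenate(([0.0], e, [0.0]))
    out[0::2] = 0.5 * (padded[:-1] + padded[1:])
    out[1::2] = e
    return out

def two_grid(f, n, nu1=5, nu2=5, omega=2/3):
    A, Ac = poisson_matrix(n), poisson_matrix(n // 2)
    v = np.zeros(n - 1)
    for _ in range(nu1):                 # 1. pre-smooth nu1 times on the fine grid
        v += omega * (f - A @ v) / np.diag(A)
    r = restrict(f - A @ v)              # 2. residual, restricted to the coarse grid
    e = np.linalg.solve(Ac, r)           # 3. solve the residual equation directly
    v += prolong(e)                      # 4. correct v with the prolonged error
    for _ in range(nu2):                 # 5. post-smooth nu2 times
        v += omega * (f - A @ v) / np.diag(A)
    return v
```

A single call such as `two_grid(np.pi**2 * np.sin(np.pi * np.linspace(0, 1, 33)[1:-1]), 32)` already removes most of the error of the zero initial guess.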

We now know what to do when convergence begins to stall on the finest grid!

The coarse grid correction outlined above can be represented by this diagram:

*Coarse grid correction procedure involving the fine grid Ω^h and the coarse grid Ω^2h.*

One question remains about the coarse grid correction: What if we don’t reach the desired error convergence on the coarse grid?

Let’s think about this for a moment. Solving the residual equation on the coarse grid is not at all different from solving a new equation. We can apply coarse grid correction to the residual equation on Ω^2h, which means that we need an even coarser grid Ω^4h for the correction step. If a direct solution is possible on the coarsest grid, we don’t need to apply the coarse grid correction anymore and our solution method will look like a V, as shown in the figure below.

*V-cycle multigrid solution method.*

This solution method is called the *V-cycle multigrid*. It is apparent that multigrid methods are recursive in nature, and a V-cycle can be extended to more than two levels. Coarse grid steps are quicker to compute than fine ones, and the error converges faster as we move down to coarser grids.

As it pays off to spend more effort on coarse grids, a method called *W-cycle* can be introduced where the computation stays on coarse steps longer. If the initial guess for the deepest V-cycle is instead obtained from shallower V-cycles, then we have what is called the *full multigrid cycle* (FMG). While the FMG is a more expensive approach, it also allows for faster convergence than just the V-cycle and the W-cycle.

*A plot comparing the V-cycle, W-cycle, and FMG.*
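Because the method is recursive, a V-cycle fits naturally into a short recursive function. The sketch below, again for a hypothetical 1D Poisson problem with weighted Jacobi smoothing, full weighting, and linear interpolation, descends until a one-point grid where the direct solve is trivial:

```python
import numpy as np

def apply_A(v, h):
    """Matrix-free application of the 1D Poisson operator -u'' (interior points)."""
    vl = np.concatenate(([0.0], v[:-1]))
    vr = np.concatenate((v[1:], [0.0]))
    return (2.0 * v - vl - vr) / h**2

def smooth(v, f, h, sweeps, omega=2/3):
    """Weighted Jacobi sweeps (the diagonal of A is 2 / h^2)."""
    for _ in range(sweeps):
        v = v + omega * (f - apply_A(v, h)) * h**2 / 2.0
    return v

def restrict(r):
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):
    out = np.zeros(2 * len(e) + 1)
    padded = np.concatenate(([0.0], e, [0.0]))
    out[0::2] = 0.5 * (padded[:-1] + padded[1:])
    out[1::2] = e
    return out

def v_cycle(v, f, h, nu1=3, nu2=3):
    if len(v) == 1:                           # coarsest grid: direct solve is trivial
        return np.array([0.5 * h**2 * f[0]])
    v = smooth(v, f, h, nu1)                  # pre-smooth on the current grid
    r = restrict(f - apply_A(v, h))           # restricted residual
    e = v_cycle(np.zeros(len(r)), r, 2 * h)   # recurse on the residual equation
    v = v + prolong(e)                        # coarse grid correction
    return smooth(v, f, h, nu2)               # post-smooth

# Example: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x).
n = 64
x = np.arange(1, n) * (1.0 / n)
f = np.pi**2 * np.sin(np.pi * x)
v = np.zeros(n - 1)
for _ in range(8):
    v = v_cycle(v, f, 1.0 / n)
```

Each cycle reduces the error by a roughly constant factor, independent of the grid size; this grid-independent convergence is the hallmark of multigrid.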

Here, we have presented the V-cycle, W-cycle, and FMG in their simplest forms. In order to solve complex multiphysics problems and reach optimal performance, multigrid methods are not used alone; they are also used to improve the convergence of other iterative methods. This is what happens under the hood in COMSOL Multiphysics.

The techniques that multigrid methods piece together have been used for quite some time and are known for their limitations. The beauty of multigrid methods comes from their simplicity and the fact that they integrate all of these ideas in such a way that overcomes limitations, producing an algorithm that is more powerful than the sum of its elements.

- Check out the previous blog post in this series: On Solvers: Multigrid Methods
- Read *A Multigrid Tutorial* by William L. Briggs

*Editor’s note: This blog post was updated on 4/18/16.*

The differential equations that describe a real application admit an analytical solution only when several simplifying assumptions are made. While the insights we gain from this approach are valuable, they are not enough to confirm that our design is efficient or to reduce the number of prototypes that are needed to obtain a complete understanding.

Numerical solution methods for models based on partial differential equations (PDEs) associated with your engineering problem overcome such limitations and allow you to represent the problem as a system of algebraic equations. Linear algebraic problems in *matrix form*, Ax = b, where x is the solution vector, are often a central part of the computational problem for the numerical solution process. Once we have determined A and b, all we have to do is “find” x.

When it comes to solution methods for linear algebraic problems, they can either be *direct* or *iterative*. Direct methods find an approximation of the solution by matrix factorization in a number of operations that depends on the number of unknowns. Factorization is expensive, but once it has been computed, it is relatively inexpensive to solve for new right-hand sides b. The approximate solution only becomes available once all of the operations required by the factorization algorithm have been executed.
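The “factor once, then solve cheaply for each new right-hand side” property of direct methods can be sketched with a Cholesky factorization, one common factorization for symmetric positive definite matrices. This is a minimal illustration with the O(n^2) triangular solves written out explicitly, not how production direct solvers are implemented:

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L; O(n^2) per right-hand side."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U; O(n^2) per right-hand side."""
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Factor once (the expensive O(n^3) step) ...
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
L = np.linalg.cholesky(A)              # A = L L^T

# ... then every new right-hand side b costs only two triangular solves.
for b in (np.array([1.0, 2.0]), np.array([5.0, -1.0])):
    x = back_sub(L.T, forward_sub(L, b))
    print(np.allclose(A @ x, b))       # True
```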

Iterative methods begin with an *approximated initial guess* x0 and then proceed to improve it by a succession of *iterations* x1, x2, and so on. This means that, in contrast to direct methods, we can stop the iterative algorithm at any iteration and have access to the current approximate solution. Stopping the iterative process too early, however, will result in an approximate solution with poor accuracy.
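A minimal sketch of this stop-at-any-iteration behavior, using the classical Jacobi method on a small, hypothetical system:

```python
import numpy as np

# A small diagonally dominant system (illustrative values), for which
# the Jacobi method is guaranteed to converge.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])

x = np.zeros(3)                        # the approximated initial guess x0
for k in range(200):
    x = x + (b - A @ x) / np.diag(A)   # one Jacobi iteration
    # The current approximation x is available here at every iteration;
    # we stop as soon as the relative residual is small enough.
    if np.linalg.norm(b - A @ x) < 1e-8 * np.linalg.norm(b):
        break

print(k, np.linalg.norm(b - A @ x))    # converges after a few dozen iterations
```

Loosening the tolerance trades accuracy for speed, which is exactly the knob direct methods do not offer.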

When it comes to choosing between direct or iterative solution methods, there are several factors to consider.

The first consideration is the application and the computer that is used. Since direct methods are expensive in terms of memory and time intensive for CPUs, they are preferable for small- to medium-sized 2D and 3D applications. Conversely, iterative methods have a lower memory consumption and, for large 3D applications, they outperform direct methods. Further, it is important to note that iterative methods are more difficult to tune and more challenging to get working for matrices arising from multiphysics problems.

As you can see, many different variables come into play when we have to make a decision about the best solver for the problem at hand. My suggestion is to use a simulation software that allows you to access the best-in-class solution methods. This means that you will be able to tackle your application with the right tools since your choice will be based on the physics involved and the computational resources that are available. It is indeed a good time to simulate your application since solution methods have improved at a rate even greater than the hardware, which has exploded in capability within the past several decades (see the plot below). All of the solution methods mentioned in the plot are available in COMSOL Multiphysics.

*A plot highlighting the increasing performance rate of solution methods (solid) and hardware (dashed) as compared to time. Legend: Banded GE (Gaussian Elimination, direct method), Gauss-Seidel (iterative method), Optimal SOR (Successive Over Relaxation, iterative method), CG (Conjugate Gradient, iterative method), Full MG (Multigrid, iterative method). Source: Report to the President of the United States titled “Computational Science: Ensuring America’s Competitiveness”, written by the President’s Information Technology Advisory Committee in 2005.*

My colleagues developing the solvers in COMSOL Multiphysics continually take advantage of these improvements, ensuring that we offer you high-performance methods. The rest of this blog post will focus on the main ideas behind multigrid methods, as they are among the most powerful methods available.

In order to introduce you to the basic ideas behind this solution method, I will present you with numerical experiments exposing the intrinsic limitations of iterative methods that multigrid methods are built on. For the sake of brevity, I will analyze only their qualitative behavior (I suggest you read *A Multigrid Tutorial* by William L. Briggs for more details on what is mentioned here).

Let’s start iterating with an approximated initial guess consisting of *Fourier* modes. They can be written as v_j = sin(jkπ/n), where k is the *wave number* and j = 0, 1, …, n indexes the grid points.

*Fourier modes for different wave numbers k. The mode with wave number k consists of k half sine waves on the domain. For small values of k, there are long, smooth waves. For larger values, there are short, oscillatory waves.*

The qualitative plot below shows that convergence is slower for smooth waves and quicker for oscillatory waves. Smooth waves are detrimental to the performance of these simple iterative methods, an intrinsic limitation. Typically, an approximated initial guess will contain several Fourier modes, and convergence will start to slow down after the first few iterations. This is because oscillatory components are efficiently eliminated from the error, while the smooth ones prevail and are left almost unchanged at every iteration. In other words, convergence stalls. This is the reason why simple iterative methods don’t perform well: they converge too slowly for small wave numbers.

*Log of the maximum norm of the algebraic error plotted against the iteration number for three different initial guesses.*
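This stalling behavior is easy to reproduce numerically. The sketch below applies weighted Jacobi sweeps to a problem whose exact solution is zero, so the initial guess is the error itself, and compares how much of a smooth mode versus an oscillatory mode survives (the grid size, wave numbers, and sweep count are illustrative):

```python
import numpy as np

n = 64
x = np.arange(1, n) * (1.0 / n)

def remaining_error(k, sweeps=30, omega=2/3):
    """Apply weighted Jacobi to -u'' = 0 (exact solution u = 0), starting from
    the k-th Fourier mode, and return the max norm of the surviving error."""
    v = np.sin(k * np.pi * x)          # initial guess = initial error
    for _ in range(sweeps):
        vl = np.concatenate(([0.0], v[:-1]))
        vr = np.concatenate((v[1:], [0.0]))
        v = v + omega * (0.5 * (vl + vr) - v)
    return np.abs(v).max()

print(remaining_error(1))     # smooth mode: barely reduced after 30 sweeps
print(remaining_error(48))    # oscillatory mode: essentially eliminated
```

After 30 sweeps, the smooth k = 1 mode retains most of its amplitude, while the oscillatory k = 48 mode is damped to round-off levels.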

It is possible to overcome the limitations of simple iterative methods by modifying them to increase their efficiency for all Fourier modes. Let’s take a look at how multigrid methods take advantage of such limitations.

Say, for instance, that we also use a coarse grid during the solution process. There are a few good reasons to do this…

To start, the error associated with smooth waves will appear to have a relatively higher wave number, meaning that convergence will be more effective. Secondly, it’s cheaper to compute an improved initial guess for the original grid on a coarse grid through a nested iteration procedure. Clearly, restriction and prolongation methods need to be developed for mapping from one grid to another. Lastly, the residual correction procedure can also benefit from treatment on a coarse grid when convergence begins to stall, leading to the idea of coarse grid correction.

We’ll share more details about nested iteration and coarse grid correction in a follow-up blog post. Stay tuned!

*Editor’s note: This blog post was updated on 4/15/16.*

Requirements are usually set by our customers, and when we meet these requirements, they understand the final design to be effective. The performance of this design depends on our creativity, skills, and stubbornness. While designers can count on a plethora of specialists and procedures to help them reach an effective design, when it comes to efficiency they’re almost on their own. An efficient design relies on the knowledge of the physics and technology at hand. *Reaching a desired outcome* makes something effective, but reaching the outcome in the *best possible way* makes it efficient. For example, we talked about efficient design solutions for coaxial heat exchangers and concluded that thanks to the counter-flow arrangement, we could exchange more heat within the same pipe length. This design is more efficient, while both parallel- and counter-flow arrangements are effective.


Nowadays, simulation makes it possible to verify our design before an actual prototype is built. This means that we can catch any flaws in the conceptual stage and anticipate the level of performance to be expected down the line. We can now simulate our design and answer questions like “is it effective?” or “is it efficient?” well in advance.

Here are some more examples of efficient designs spanning several applications and points of view:

- User-defined simulation package based on COMSOL Multiphysics and its Java API
- High-precision motion controlled positioning MEMS system
- Cooling of a LED module
- Ultrasound-induced control of motion of microparticles and cells in bio-chip technology
- Structural optimization of the Wright brothers’ flyer

I’m sure you have worked on making your design more efficient lately. Let us know more about it in the comments field below!


Solids deform under an applied stress and reach a position of equilibrium in which their deformation ceases. When the stress is removed, they will go back to their original shape. On the contrary, fluids deform continuously and don’t retain any particular shape. In scientific terms, we say that solids are characterized by *elasticity* while fluids are characterized by *viscosity*.

In between these two extremes lie *non-Newtonian fluids*, exhibiting both elasticity and viscosity. A suspension of starch in water belongs to this broad class of fluids and can be further classified as shear thickening: its viscosity increases with increasing shear rate. Another example of *shear thickening* fluid is sand soaked in water. The material dancing on a speaker’s cone on the TV show must therefore have been a non-Newtonian fluid.

If the answer is yes, I’m pretty sure you noticed that the shear created by the brush made the paint thin and capable of wetting out the wall surface evenly. Once applied, the paint regained its original higher viscosity, leaving your wall without drips and runs. This is what is called a *shear thinning* fluid, the exact opposite of shear thickening. Can you recall other fluids exhibiting a non-Newtonian behavior?
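In simulation, shear thinning and shear thickening are often described by generalized Newtonian models such as the power-law (Ostwald-de Waele) model, where the viscosity depends on the shear rate alone. Here is a minimal sketch with illustrative parameter values:

```python
import numpy as np

def power_law_viscosity(shear_rate, m=1.0, n=0.5):
    """Power-law (Ostwald-de Waele) model: mu = m * shear_rate**(n - 1).
    n < 1: shear thinning; n > 1: shear thickening; n = 1: Newtonian.
    The consistency index m and flow index n here are illustrative values."""
    return m * shear_rate ** (n - 1.0)

rates = np.array([0.1, 1.0, 10.0, 100.0])   # shear rates in 1/s
print(power_law_viscosity(rates, n=0.5))    # decreasing with shear rate
print(power_law_viscosity(rates, n=1.5))    # increasing with shear rate
```

With n < 1, the computed viscosity drops as the shear rate grows, which is exactly the wall-paint behavior described above.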

When looking to simulate non-Newtonian fluids, one example that comes to mind is the injection molding of a polymer. A ready-to-run model is available to download from the Model Gallery. I suggest starting from this model if you’re interested in simulating non-Newtonian flows. Below are a few images highlighting the shear thinning behavior of this polystyrene solution flowing into a mold.

*In this model, the polystyrene solution flows into the mold from the top to the bottom. Results show how, for a shear thinning fluid, the dynamic viscosity is lower where the shear rate is higher. Top: velocity magnitude. Bottom left: shear rate. Bottom right: dynamic viscosity.*

*Dynamic viscosity as a function of shear rate. Values are measured at point p1.*

*Top left: shear rate as a function of inlet pressure. Top right: dynamic viscosity as a function of inlet pressure. Values are measured at point p1. Bottom: comparison between the non-Newtonian polystyrene solution and an equivalent Newtonian fluid. As expected, the volumetric flow rate is higher for the polystyrene solution since its viscosity is lower at higher shear rates.*