Here’s a question for all you electromagnetics-focused simulation engineers out there: Have you ever looked in envy at your structural, fluid, and chemical counterparts as they mesh their models with the click of a button, while you struggle to mesh your infinite elements or perfectly matched layers? Well, now you too can enjoy automatic meshing with a click (or two). Let me show you how.

Modeling the electromagnetic phenomena around an object is common practice, and it necessitates the creation of a boundary to enclose an air domain around your geometry. In theory, we should include a very large (technically infinite) domain in our model, but for obvious reasons, we do not.

In COMSOL Multiphysics, this problem is resolved by adding infinite element and perfectly matched layer (PML) domains, which take the place of these large open boundaries. Infinite element domains allow for the modeling of unbounded domains. Perfectly matched layers absorb all of the waves that enter the domain, preventing waves from reflecting off the boundaries.

For a more detailed description of these features, see page 300 of the COMSOL Reference Manual found in the documentation.

Meshing these virtual domains to provide correct results is a time-consuming process when done manually. However, with version 5.0, COMSOL Multiphysics will mesh the domains for you with a Physics-Controlled Mesh option.

*Image and model created by my colleague Jiyoun Munn. With all the time he saved by auto meshing his PMLs, he created this awesome image.*

Meshing infinite element domains is now simply a two-click process. In fact, I can sum up the process of using this feature in one image:

*Enable automatic meshing from the settings window of the physics interface node (e.g., Magnetic Fields, Electric Currents, etc.).*

After enabling the Physics-Controlled Mesh feature and clicking the mesh button, COMSOL Multiphysics will mesh the infinite element domains with a Swept mesh (and the rest of the domains accordingly). That is all.

Of course, there are always those of you who like to have a hand in everything — you know who you are. As is the case with almost all COMSOL Multiphysics features, you are still free to inspect the mesh sequence and make edits to your liking.

The video below shows the short process of implementing and using the Physics-Controlled Mesh for infinite elements, as well as how to inspect and edit it.

If Electromagnetic Waves, Frequency Domain simulations are more your speed, we’ve got you covered too. This process is ever so slightly more involved: it takes a click of the check box and a specification of the maximum element size. When meshing the Electromagnetic Waves, Frequency Domain interface, the maximum element size must be at most 20% of the wavelength, so dividing your wavelength parameter by 5 is good practice. Alternatively, you can enter a numerical value directly.
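To make the 20% rule concrete, here is a quick back-of-the-envelope calculation (plain Python, not COMSOL syntax; the 2.4 GHz operating frequency is just an assumed example):

```python
# Maximum element size for an Electromagnetic Waves, Frequency Domain mesh:
# at most 20% of the wavelength, i.e., the wavelength divided by 5.
c0 = 299792458.0         # speed of light in vacuum, m/s
freq = 2.4e9             # assumed example frequency, Hz
wavelength = c0 / freq   # free-space wavelength, ~0.125 m
h_max = wavelength / 5   # maximum element size, ~0.025 m
print(wavelength, h_max)
```

In the COMSOL settings themselves, you would typically enter the equivalent expression (your wavelength parameter divided by 5) directly in the maximum element size field.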

*Enable automatic meshing and specify the maximum element size (based on the wavelength) through dividing it by 5.*

The rest of the process is the same as for infinite elements. Check out the video below to see the full process for the automatic meshing of perfectly matched layers and how to inspect and edit the mesh sequence.

For any COMSOL simulation software users who have models with infinite elements or perfectly matched layers, this simple feature will transform your modeling experience by eliminating the arduous task of manually meshing these domains.

If you haven’t already… Download version 5.0 today and get started with automatic meshing for your electromagnetic simulations.


*Contour plots* are suited to show scalar quantities on a boundary of your model.

Let’s take a look at a structural mechanics example: the Stresses in a Pulley model, which involves stress in a driving pulley that’s caused by loading from the driving belt.

If you want to follow along, this model can also be found under Model Libraries > COMSOL Multiphysics > Structural Mechanics > stresses in pulley.

This particular model examines the pulley at a “frozen” moment in time (a technique known as *kinetostatic analysis*) and evaluates the stress and deformation at different RPMs. The solved model already has a contour plot in the results (2D Plot Group 2), where the contours show the von Mises stress at different locations in the pulley.

As seen in the image below, each contour level indicates a surface where the stress is at a constant level (specified by the color legend):

We can see that the stress is highest in the areas of the pulley that connect the inner region to the outer circle.

Let’s take a look at the contour plot’s settings window.

The Levels tab contains options that control the number of surfaces (levels) depicted in the plot. Adding more levels gives finer control over the surfaces and is useful in cases where you need higher precision.

The image below shows the contour plot with 40 levels instead of 10:

Now we’ll change the contour type (under the Coloring and Style tab) to *Lines* and use 20 levels. Instead of showing surfaces, it will now show only lines that separate each region. This can make it easier to distinguish the colors when using many levels, instead of only a few.

One benefit of using this type of plot in a structural mechanics case is that if the maximum allowable stress is known, then contours can immediately indicate if the threshold has been exceeded. For contour lines, there is a *Level labels* checkbox that appears in the settings:

If we select this, markers will appear on the plot that show which stress values correspond to which line. This conveniently offers a quick view of the values (though, of course, further evaluation is needed if contours indicate stresses close to the maximum):

In order to visualize filled surfaces along with the stress level labels, we can simply include two contour plots: one with the labeled lines and one with filled surfaces. Plotting these together shows the boundaries of each region as a line, along with the stress surfaces for each:

There you have it. Although we’ve explored these contour types using a structural mechanics example, these techniques are applicable to many applications. Next, we’ll take a look at the three-dimensional cousin of contour plots — *isosurfaces* — using an acoustics example.

Isosurfaces, like contours, show where a quantity is constant throughout a region, but as surfaces in 3D rather than lines, so the two plot types are not interchangeable. Isosurfaces are helpful for a variety of applications as well. A classic example is showing the sound pressure levels in an acoustic device.

For instance, below is an isosurface plot from the Loudspeaker Driver in a Vented Enclosure model that displays the acoustic pressure in a vented loudspeaker enclosure. Again, the color legend indicates the value of the quantity on each surface, rather than showing a gradient.

In COMSOL Multiphysics, this model is available under File > Model Libraries > Acoustics Module > Electroacoustic Transducers > vented loudspeaker enclosure, if you have the Acoustics Module installed.

Note that in COMSOL Multiphysics version 5.0, a new color table named *Spectrum* shows colors in a slightly different range than the *Rainbow* option:

For an acoustics model like this, the settings allow us to choose the specific parameter value (frequency, in Hz) to display the isosurfaces. The image above shows acoustic pressure in the loudspeaker enclosure for a frequency of 1651.6 Hz. When the frequency is set to 1919.3 Hz, the plot looks rather different:

As with contour plots, we can specify the number of levels in the plot settings. One advantage that we don’t see with contour plots, however, is the *interactive positioning tool*:

This allows us to shift all of the surfaces together by a certain factor and see live updates in the Graphics window — try it out yourself. The figures below show the isosurfaces shifted by -10 (left) and 10 (right).

That’s it for this introduction. Hopefully this has given you a good starting point for working with isosurface and contour plots. Next time, we’ll continue exploring plot types by using streamlines to depict fluid flow.

- Download the Stresses in a Pulley model
- Download the Loudspeaker Driver in a Vented Enclosure model
- Read the previous installment of the postprocessing series: Using Slice Plots to Show Results on Cross-Sectional Surfaces

Importing meshes into COMSOL Multiphysics is often necessary when interfacing between different programs. With COMSOL Multiphysics version 5.0, these meshes can be converted into solid geometry objects for further investigation and modeling. You can also perform Boolean operations on the new geometry for CFD, electromagnetics, and acoustics applications.

There are multiple options for importing meshes into COMSOL Multiphysics software for users who interface with specialized mesh programs. The mesh formats supported by COMSOL software include NASTRAN® Bulk Data, VRML v1, and STL. Building upon this practice, users of COMSOL Multiphysics version 5.0 now have the option to do more with these imported meshes.

The Create Geometry from Mesh feature adds an additional Component to your model with the geometry included, created from the imported mesh. With this new feature, you can create a surrounding domain around this geometry, as shown in the video further down the page. After creating this separate domain, you can perform Boolean operations such as Difference, Intersection, and Union. If the original geometry is to be meshed as well, the new Copy feature for meshes comes in handy. No need to re-mesh the original geometry — simply copy the mesh from the original Component into the new one.

Note that, for the time being, this can only be done when there are no intersections between the geometry objects.

There are a few applications that come to mind when introducing this modeling technique. The surrounding domain could be the fluid surrounding a solid object or the air surrounding an electromagnetic or acoustics device. For the electromagnetic and acoustic examples, this comes in handy when using Infinite Elements (AC/DC Module) and Perfectly Matched Layers (RF, Wave Optics, Acoustics Modules). When modeling these applications, there needs to be a domain between the geometry and the Infinite Elements/Perfectly Matched Layer domains.

Here is a short list of the possible applications and associated physical phenomena involved:

- CFD — Fluid-structure interactions
- Electromagnetics — E-field and B-field in the surrounding media
- Acoustics — Sound pressure level and acoustic pressure field in the surrounding media
- Your own application

*In this video, we will show you how to create a geometry from an imported mesh. So, let’s get started. Here, we have COMSOL Multiphysics open and our mesh already imported, which we want to turn into a geometry. To do this, we go to the* Mesh *ribbon and select “Create Geometry from Mesh”.*

*Now, we are going to create a block that will completely surround our imported mesh geometry, with the geometry lying at the center of the block. Click “Build Selected”, and you’ll see our block has been generated.*

*Now, if we turn on the Transparency, we can see our imported mesh geometry lying within the center of the block. And what we’re going to do is subtract this imported mesh geometry from the geometry of the block. To do this, we go back again to the* Geometry *ribbon, and under Booleans and Partitions select “Difference”. Click the block as the object to add and the imported mesh geometry as the object to subtract. Click “Build Selected” and our new geometry has been created.*

*Now, we can go to the* Mesh *ribbon, and selecting “Mesh 2”, we can now click “Build Mesh” to generate the mesh. So, here, the meshing for our finalized geometry has been created.*

*Now, let’s go under the* Results *ribbon and create a Mesh data set for the second mesh. Change the label to reflect the mesh being used, which is Mesh 2, as well as the mesh selected. Now we can turn off the Transparency, and create a 3D Plot Group, which will house our Mesh plot for the second mesh. So, again, we change the label to reflect the mesh being shown, the Level to Volume, the Element color to Gray, and then we enable an Element Filter, to show the elements located in the *x*-direction at a position greater than 25. Now we can click “Plot”, and we can see that we’ve successfully subtracted our imported mesh geometry from the geometry of the block.*

*This concludes our tutorial on how to create a geometry from an imported mesh.*

For many of the different types of physics simulated with COMSOL Multiphysics, a weak formulation, or weak form, is used behind the scenes to construct the mathematical model. Understanding the weak form will help us gain insight into how the COMSOL software works internally as well as enable us to write our own equations when there is no built-in interface available for the particular physics involved in our model.

You may also be interested in reading my colleague Bettina Schieche’s blog post “The Strength of the Weak Form”.

Let’s consider a concrete example of 1D heat transfer at steady state with no heat source. Specifically, we are interested in the temperature, T, as a function of the position x in the domain defined by the interval $1\le x\le 5$. For simplicity, we assume the thermal conductivity is unity. Then, the heat flux, q, in the positive *x*-direction is given by the negative of the temperature gradient:

(1)

q(x)=-\partial_x T(x)

and the conservation of the heat flux (with no heat source in the domain) simply says

(2)

\partial_x q(x)=0

This is the main equation that we want to solve. Its solution will give us the temperature profile within the domain. Equations of this form appear in many different disciplines. For example, in electrostatics, T is replaced by the electric potential and q by the electric field, while in elasticity, T becomes the displacement and q becomes the stress.
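Incidentally, for this 1D example the exact solution is simple to state, and it will serve as a useful check later: Eq. (2) forces the heat flux to be constant, and Eq. (1) then makes the temperature linear,

T(x) = a + b\,x \mbox{ , } q(x) = -b

where the constants a and b are determined by the boundary conditions.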

Here, we start to see why COMSOL Multiphysics can solve coupled multiphysics problems with such ease: No matter what physical mechanisms are involved, they are modeled by equations, and once the equations are written down, they can be discretized and solved straightforwardly by the core algorithms of the COMSOL software.

Some readers may ask why we chose such a seemingly simple example, whose analytical solution can be easily obtained by very simple math or physical arguments. The reason is two-fold:

- We want to focus on the central idea of the weak form and not be distracted by the math of a complicated physical system.
- In subsequent posts, we will expand the example to more than one domain to demonstrate the coupling between two equation systems through their boundary conditions. Starting with a more complicated example now will almost certainly obscure the central theme later when the example is expanded.

Equation (2) involves the first derivative of the heat flux, q, or the second derivative of the temperature, T, which may cause numerical issues in practical situations where the differentiability of the temperature profile may be limited. For example, at a boundary where the adjacent materials have different values of thermal conductivity, the first derivative of the temperature T becomes discontinuous and the second derivative of T cannot be evaluated numerically. The main idea of the weak form is to turn the differential equation into an integral equation, so as to lessen the burden on the numerical algorithm in evaluating derivatives.

To turn the differential equation (2) into an integral equation, a naive first approach may be to integrate it over the entire domain $1\le x\le 5$:

\int_1^5 \partial_x q(x) \,dx = 0

This asks that the *average value* of $\partial_x q(x)$ in the entire domain is zero. Indeed, it seems “too weak” as compared to the original differential equation, which asks that $\partial_x q(x)$ should be zero everywhere in the interval $1\le x\le 5$. To improve upon it, we can ask instead that the average value of $\partial_x q(x)$ in a very narrow domain is zero, say,

\int_{3.49}^{3.51} \partial_x q(x) \,dx = 0

The integral only involves the value of $\partial_x q(x)$ in the vicinity of x=3.5. Thus, the relation above requires it not to be too far away from zero: $\partial_x q(3.5) \approx 0$. Extending the same idea to all locations in the entire domain $1\le x\le 5$, we see that the original differential equation may be approximated by a set of integral equations, like this:

(3)

\int_1^{1.01} \partial_x q(x) \,dx = 0 \mbox{ , }\int_{1.01}^{1.02} \partial_x q(x) \,dx = 0 \mbox{ , }\int_{1.02}^{1.03} \partial_x q(x) \,dx = 0 \mbox{ , . . . }

The higher the number of integral equations in the set, the better the approximation. In the limit of an infinite number of such integral equations, we recover the original differential equation. It is cumbersome or even impossible to write out all the integral equations in the set, but we can apply the same idea in a different way.

The main idea is to sample the value of $\partial_x q(x)$ in a narrow range. This is done by integrating it over a narrow range in Eq. (3) above. The same kind of sampling can be done by multiplying the integrand by a weight function $\tilde{T}(x)$ that is non-trivial only in a narrow range, as shown pictorially below:

Then, we can integrate the product $\partial_x q(x)\tilde{T}(x)$ over the entire domain $1\le x\le 5$ for a variety of weight functions $\tilde{T}(x)$. Each weight function limits the contribution of the integrand to a narrow range centered around a different x value, thus achieving the same effect as the collection of integral equations in Eq. (3). This leads us to the weak formulation, which states that the relation

(4)

\int_1^5 \partial_x q(x) \tilde{T}(x) \,dx = 0

should hold for a set of weight functions $\tilde{T}(x)$, commonly called *test functions*. For every value of x, say x=3.5, we can choose a test function $\tilde{T}(x)$ that is a narrow weight function centered around x=3.5. Plugging this test function into Eq. (4) would sample the value of $\partial_x q(x)$ in the vicinity of x=3.5 and so require it not to be too far away from zero: $\partial_x q(3.5) \approx 0$.

By plugging a large number of narrow weight functions as test functions into Eq. (4), each centered at a different location in the interval $1\le x\le 5$, the value of the function $\partial_x q(x)$ will be clamped down to zero everywhere within the domain.

Footnote: In the picture above, we intentionally plotted $\partial_x q(x)$ as an arbitrary curve, not the final solution to the equation, to emphasize the fact that we haven’t found the solution yet. Later on in the solution process, this arbitrary curve will be pushed up and down by a collection of test functions to reach the shape of the final solution.
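The sampling effect of a narrow weight function is easy to verify numerically. The sketch below (plain Python with NumPy; the integrand $\sin(x)$ is an arbitrary stand-in, not the actual $\partial_x q(x)$) shows that integrating a function against a narrow, unit-area bump centered at x = 3.5 essentially returns the function's value there:

```python
import numpy as np

x = np.linspace(1.0, 5.0, 100001)   # the domain 1 <= x <= 5
dx = x[1] - x[0]

def f(x):
    return np.sin(x)  # arbitrary stand-in for the integrand

# Narrow rectangular weight centered at x0, normalized to unit area
x0, width = 3.5, 0.01
w = (np.abs(x - x0) < width).astype(float)
w /= (w * dx).sum()

# Integrating f against the narrow weight samples f near x0
sampled = (f(x) * w * dx).sum()
print(sampled, f(x0))  # nearly identical values
```

As the weight gets narrower, the sampled integral converges to the point value, which is exactly why a large family of narrow test functions can pin down the integrand everywhere.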

Note that the order of differentiation in the integrand of Eq. (4) is still the same as in Eq. (2) (after all, it’s the same function $\partial_x q(x)$), but it can be reduced using the method of integration by parts:

(5)

q(x=5) \tilde{T}(x=5)-q(x=1) \tilde{T}(x=1)-\int_1^5 q(x) \partial_x \tilde{T}(x) \,dx = 0
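Spelling out that step: integration by parts moves the derivative from q onto the test function, leaving boundary terms,

\int_1^5 \partial_x q(x) \tilde{T}(x) \,dx = \Big[ q(x)\tilde{T}(x) \Big]_{x=1}^{x=5} - \int_1^5 q(x) \partial_x \tilde{T}(x) \,dx

and setting this equal to zero and expanding the boundary term gives Eq. (5).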

Now, there is no derivative of the heat flux, q, in the equation; in terms of the temperature, T, the order of differentiation is reduced from two to one. What about the first derivative of the test function $\tilde{T}(x)$, which just now appeared in the equation?

As we have seen in the previous section, the test function serves as a tool for us to find the solution to the equation. Thus, we have the freedom to choose any conveniently differentiable form for it.

The first two terms of Eq. (5) involve the heat flux and test function at the domain boundaries x=1 and x=5, with the heat flux, q, defined in the positive *x*-direction. We can rewrite them in terms of the flux going out of the domain and move them to the right-hand side:

(6)

\int_1^5 \partial_x T(x) \partial_x \tilde{T}(x) \,dx = -\Lambda_1 \tilde{T}_1 -\Lambda_2 \tilde{T}_2

Here, $\Lambda$ is the outgoing flux, and the subscripts 1 and 2 represent the domain boundaries x=1 and x=5, respectively: $\Lambda_1\equiv -q(x=1)$, $\Lambda_2\equiv +q(x=5)$, $\tilde{T}_1\equiv \tilde{T}(x=1)$, $\tilde{T}_2\equiv \tilde{T}(x=5)$.

Also, we have used the heat flux relation (1) to write the integrand in terms of the temperature, T, and its test function, $\tilde{T}$. The right-hand side of the equation provides a natural way to assign boundary conditions in terms of the heat flux. The simplest is to set both $\Lambda_1$ and $\Lambda_2$ to zero to get insulating boundary conditions (no heat flux through the boundaries).

This is exactly the reason why in COMSOL Multiphysics the default boundary condition for heat transfer is “Thermal Insulation”, the one for solid mechanics is “Free (no boundary force)”, and the one for fluid flow is “Wall (zero flow across the boundary)”. This kind of boundary condition, which specifies the flux or force (the first derivative of the variable being solved), is commonly called the *natural boundary condition* or the *Neumann boundary condition*.

Another type of boundary condition, commonly called the *fixed boundary condition* or the *Dirichlet boundary condition*, specifies the value of the variable being solved. In our current example, it specifies the value of the temperature at a point on the boundary. This kind of boundary condition is usually needed to set up a well-posed problem with a unique solution. For example, in fluid flow, we need to specify the pressure somewhere (not just the flow); and in solid mechanics, we need to specify the displacement somewhere (not just the force).

As we have seen in the example, the weak formulation provides a natural way to specify the heat flux at a boundary. How do we specify a fixed temperature at a boundary, then?

The trick is to take advantage of the mathematical structure of the natural boundary condition and apply the same idea of using test functions to clamp down the solution. Conceptually, to maintain a fixed temperature at a boundary point, a certain heat flux coming from the outside of the boundary is needed to compensate for the heat flux inside of the boundary. The weak formulation poses the problem as this: Find the heat flux needed to maintain the fixed temperature at the boundary point.

For example, if we want to specify the outgoing flux $\Lambda$ to be 2 at x=1 and the temperature T to be 9 at x=5, then we introduce a new unknown variable $\lambda_2$ and its corresponding test function $\tilde{\lambda}_2$, and write Eq. (6) as:

(7)

\int_1^5 \partial_x T(x) \partial_x \tilde{T}(x) \,dx = -2 \tilde{T}_1 -\lambda_2 \tilde{T}_2 -\tilde{\lambda}_2 (T_2-9)

Here, on the right-hand side, the first term specifies the outgoing flux of 2 at x=1 and the second term specifies the unknown flux at x=5; both terms come directly from the natural boundary condition terms on the right-hand side of Eq. (6).

The new variable $\lambda_2$ represents the unknown heat flux to be determined at the boundary x=5. The third term is added to the equation to force the solution to be T=9 at x=5 by means of the test function $\tilde{\lambda}_2$, in the same fashion as in the earlier discussion about the test function $\tilde{T}(x)$.

So far, we have been discussing a very simple one-dimensional example. In higher dimensions, such as a 2D surface domain or a 3D volume domain, the equations become more complicated, but the basic idea remains the same.

The weak formulation turns a differential equation into an integral equation. Integration by parts reduces the order of differentiation to provide numerical advantages, and generates natural boundary conditions for specifying fluxes at the boundaries. In the simple 1D example, the boundary is the two end points and the flux is a single value at each point.

In 2D and 3D, the boundary is the closed curve and the closed surface enclosing the domain, respectively. The right-hand side of Eq. (6) becomes the line or surface integral of the incoming flux density, in other words, the total incoming flux. In essence, the process of integration by parts in 2D and 3D uses the divergence theorem to obtain the line or surface integral of the flux at the boundary of the modeling domain.

In this blog post, we chose the simple 1D example so that the central idea wouldn’t be obscured by the complexity of the math.

Today, we learned about the central idea of the weak formulation: using test functions to clamp down the solution. Integrating the weak form by parts provides the numerical benefit of reduced differentiation order. It also provides a natural way to specify boundary conditions in terms of the fluxes or forces (the first derivatives of the variables being solved), the so-called natural boundary condition or the Neumann boundary condition.

When solving a practical problem, it’s often necessary to specify the variable being solved — not just its derivative — via the so-called fixed boundary condition or the Dirichlet boundary condition. We saw that the weak formulation uses the same mechanism of test functions and its natural boundary conditions to construct additional terms for the fixed boundary conditions.

So far, we have left the equations in their original analytical forms without any numerical approximation. In the next blog post, we will implement the weak form equation (7) in COMSOL Multiphysics to solve it numerically. After that, we will discuss how the numerical approximation is done internally, how the same problem can be solved in different ways, and how different boundary conditions can be set up for different types of problems.


The overall color scheme has changed to structure and organize the information better, as well as to reinforce the COMSOL Multiphysics workflow. Each workflow step has its own color. For example, all the *Geometry* icons are red-orange, all the *Materials* icons are yellow-orange, all the *Physics* icons are blue, and all the *Results* icons are red. The *Results* tab also has different icons for the various dimensions (3D, 2D, and 1D).

*The new color scheme helps to differentiate between the model workflow steps.*

Last year’s introduction of the Multiphysics node really spoke to our product and who we are as a company. Setting up models with *multiple* physical phenomena should be as easy as it is with *one* phenomenon. The Multiphysics node helps realize that notion; you model one physics at a time and the multiphysics nodes automatically couple the different physics together.

With COMSOL Multiphysics 5.0, we are introducing an additional 15 multiphysics couplings to the Multiphysics node, spanning several application areas (required modules are listed in parentheses):

- Non-Isothermal Flow including Conjugate Heat Transfer (CFD Module or Heat Transfer Module)
- Fluid-Structure Interaction for Fixed Geometry (Structural Mechanics Module or MEMS Module)
- Semiconductor-Electromagnetic Waves Coupling for Optoelectronics (Wave Optics Module and Semiconductor Module)
- Plasma Heat Source (Plasma Module)
- Lorentz Force (Plasma Module)
- Static Current Density Component (Plasma Module)
- Induction Current Density Component (Plasma Module)
- Piezoelectric Effect (Structural Mechanics Module, MEMS Module, or Acoustics Module)
- Acoustic-Structure Boundary (Acoustics Module)
- Thermoacoustic-Structure Boundary (Acoustics Module)
- Aeroacoustic-Structure Boundary (Acoustics Module)
- Acoustic-Porous Boundary for Poroelastic Waves (Acoustics Module)
- Porous-Structure Boundary for Poroelastic Waves (Acoustics Module)
- Background Potential Flow Coupling (Acoustics Module)
- Acoustic-Thermoacoustic Boundary (Acoustics Module)

A few of the more significant UI-related improvements involve materials. The ribbon now contains a *Materials* tab to go along with the other steps in the workflow, and materials can now be created globally for models with the same materials in multiple components. This lends itself to the most significant new addition, which is the new “Switch” feature to power material sweeps and function sweeps.

With the new material sweep feature, you use the available materials with their defined material properties directly in a sweep, avoiding the tedious task of re-entering material properties or material property functions. Previously, you had to manually enter material property values, which was rather time-consuming. Alternatively, if your material properties could only be defined as functions and you wanted to sweep over properties described by different functions, you had to get clever in setting up a parameter sweep. The parameters were abstract values included to control which parts of the function would be active for a particular step in the sweep. This made it difficult to postprocess the results and opened up the possibility for errors.

Now, a material sweep is much easier to define, and it is also easier and more intuitive to process the results, as you get the actual material names and values in the plot settings along with derived value evaluations. Functions are generic in COMSOL Multiphysics; therefore, the function sweep is introduced, in analogy to the material sweep, for domain settings, boundary conditions, initial conditions, loads, constraints, and so on that are defined by functions.

First, you must create a Switch under one of the Materials nodes. If you don’t have materials already, create them from the Switch node. Otherwise, simply drag and drop previously defined materials in the model tree into the Switch node. In the *Study* tab, there is a “Material Sweep” button. After adding the Material Sweep, you will be prompted to add a Switch function and choose which cases in the Switch should be included. After running the simulation, you can switch between the materials that were swept over and examine the varying results. A “Function Sweep” works the same way. As I mentioned earlier, the Switch function can be applied globally or locally.

*A view of the Model Builder window with an added Switch, with two materials and Material Sweep.*

*The results settings, switching between the two materials.*

All of these details and more can be found on our Release Highlights page (including videos!).


*Slice plots*, which can be applied to many model types, show one or more cross-sectional surfaces that indicate how a variable changes over a distance or across a specific area of the model.

If you recall Ruud’s last post, he demonstrated arrow plots by using the Flow in a Pipe Elbow model. Let’s continue with his example and investigate slice plots as another tool to help us understand the velocity and pressure drop across the pipe.

The two figures below, which might ring a bell if you read Ruud’s last post, show slice plots that travel through the entire pipe:

These visuals give a very clear, immediate picture of what’s happening to the flow in the center of the pipe. This is done by plotting a virtual surface that vertically slices the geometry down the middle.

To create these slice plots, we can add a new 3D plot group in the Flow in a Pipe Elbow model. Following that, we can head up to the ribbon and choose *Slice* from the 3D Plot Group tab.

Note: In the version I present here, I have added some plot outlines and a gray surface to help give a feel for the pipe shape — to give it a wall, if you like. I’ve also mirrored the main solution so that a centered slice plot will bisect the entire pipe geometry and not the half that was originally modeled. This makes for better visualization for the purpose of this blog post.

When you click the Slice feature in the ribbon, you’ll probably notice that the software plots a series of planar surfaces within the pipe, by default.

Let’s take a look at the settings:

COMSOL Multiphysics automatically plotted the velocity on five surfaces that lie on *yz*-planes and shift in the *x*-direction. If we change the *Plane* field to *xy*-planes and the number of planes to *1*, we’ll see Ruud’s original slice plot, which is shown above.

Now, let’s take a closer look.

If we return to the default plot, with five *yz*-planar surfaces, we’ll see this:

Here, it’s very easy to see the velocity inside the pipe.

There’s also a simple way to move the slices around, so that we can see the velocity in different areas. If we select the *Interactive* checkbox, we can use the slider to shift the surfaces in whichever direction the planes don’t depend on. In this case, we’ll shift the surfaces in the *x*-direction:

As we can see, this will become especially useful when you want to create an array of multiple slice plots, but don’t want them to interfere with each other.

Suppose that we want to see the velocity on a slice right in the pipe bend that is perpendicular to the wall. Let’s change the plane type to *General* and choose *Point and normal* for the entry method. This will allow us to create a diagonal plane in the right-angle bend. However, it’s important to make sure that the plane is correctly oriented.

In order to choose the right point and vector, let’s first take a look at the parameters. These can be found under the *Global Definitions* node:

The inlet and outlet length, that is, the distance from one open end of the pipe to the bend, is represented by the variable *L*. This accounts for most of the vertical distance that we’ll want to apply to our plane. The variable *Rc* gives the radius of the bend, so let’s also add that to the distance, making the *x*-coordinate *L+Rc*.

The normal vector needs to be tangent to the direction of flow around the bend; at the location of this point, that is the (1,1,0) direction.
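The point-and-normal arithmetic above is easy to sanity-check in a few lines. The values assigned to `L` and `Rc` below are placeholders; in the actual model, they come from the Global Definitions node:

```python
import numpy as np

# Hypothetical parameter values; in the model these come from Global Definitions.
L = 1.0   # inlet/outlet length (straight section)
Rc = 0.2  # radius of the bend

# Point on the diagonal slice plane: the x-coordinate is L + Rc.
point = np.array([L + Rc, 0.0, 0.0])

# Normal tangent to the flow direction in the bend: (1, 1, 0), normalized.
normal = np.array([1.0, 1.0, 0.0])
normal = normal / np.linalg.norm(normal)
```

COMSOL Multiphysics normalizes the plane normal internally, so entering (1,1,0) directly is equivalent.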

Here’s what it looks like:

This diagonal slice shows the velocity gradient as the fluid changes direction to follow the pipe. The fluid is moving slowest on the outside of the curve and fastest toward the inside.

Now we can add a few slices, defined by *yz*-planes and *xz*-planes, to tell us the velocity in the horizontal and vertical sections of the pipe.

Tip: I did this by duplicating the original slice plot twice, rather than creating two new plots. I also chose Slice 1 under the Inherit Style tab in the settings for both of the duplicate plots, so that the color and data ranges will be the same for all three.

The locations of the points that will define these slices are *(L,0,0)* and *(L+2*Rc,Rc,0)*, which will place planes on either side of the bend, where the curve begins and ends. Their respective normal vectors are *(1,0,0)* and *(0,1,0)*:
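As a quick check on these two plane definitions: a point *p* lies on the plane through *p0* with normal *n* exactly when *n* · (*p* - *p0*) = 0. A short sketch, again with stand-in values for *L* and *Rc*:

```python
import numpy as np

L, Rc = 1.0, 0.2  # stand-in parameter values

# Plane where the curve begins: point (L, 0, 0), normal (1, 0, 0).
p0_a, n_a = np.array([L, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
# Plane where the curve ends: point (L + 2*Rc, Rc, 0), normal (0, 1, 0).
p0_b, n_b = np.array([L + 2 * Rc, Rc, 0.0]), np.array([0.0, 1.0, 0.0])

def on_plane(p, p0, n, tol=1e-12):
    """True if p lies on the plane defined by point p0 and normal n."""
    return abs(np.dot(n, p - p0)) < tol

# Any point with x = L lies on the first plane, regardless of y and z:
print(on_plane(np.array([L, 0.3, -0.1]), p0_a, n_a))  # True
```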

What happens if we check the *Additional parallel planes* box? In the Slice 3 plot, it will create a series of surfaces that run parallel to the one created by the plane data:

This looks a little messy, but bear with me for a minute. Remember that interactive slider we saw earlier? If we want to get rid of that long surface in the vertical section of the pipe, we can shift everything over a little. If all the surfaces are shifted by 0.035, the planes don’t overlap:

We can repeat this process with the horizontal surfaces (whose normal vector is *(1,0,0)*):

Now the series of slice plots shows the velocity change in a cross section as water flows through the pipe, with the fluid flow becoming more turbulent after the bend.

That’s it for a brief introduction to slice plots. Stay tuned for our next installment in the postprocessing series: contour and isosurface plots.

The plot below shows the amount of memory needed to solve various 3D finite element problems in terms of the number of degrees of freedom (DOF) in the model.

*Memory requirements (with a second-polynomial curve fit) with respect to degrees of freedom for various representative cases.*

There are five different cases presented here:

- Case 1: A heat transfer problem of a spherical shell. There is radiative heat transfer between all of the surfaces. The model is solved with the default iterative solver.
- Case 2: A structural mechanics problem of a cantilevered beam, solved with the default direct solver.
- Case 3: A wave electromagnetics problem solved with the default iterative solver.
- Case 4: The same structural mechanics problem as Case 2, but using an iterative solver.
- Case 5: A heat transfer problem of a block of material. Only conductive heat transfer is considered. The model is solved with the default iterative solver.

What you should see from this graph is that, with a computer that has 64 GB of random access memory (RAM), you can solve problems that range in size anywhere from ~26,000 DOF on the low end all the way up to almost 14 million degrees of freedom. So why this wide range of numbers? Let’s look at how to interpret these data…

For most problems, COMSOL Multiphysics solves a set of governing partial differential equations via the finite element method, which takes your CAD model and subdivides the domains into *elements*, which are defined by a set of nodes on the boundaries.

At each node, there will be at least one *unknown*, and the number of these unknowns is based upon the physics that you are solving. For example, when solving for temperature, you only have a single unknown (called T, by default) at each node. When solving a structural problem, you are instead computing strains and the resultant stresses, thus you are solving for three unknowns (u,v,w), which are the displacements of each node in the x-y-z space.

For a turbulent fluid flow problem, you are solving for the fluid velocities (also called u,v,w by default) and pressure (p) as well as extra unknowns describing the turbulence. If you are solving a diffusion problem with many different species, you will have as many unknowns per node as you have chemical species. Additionally, different physics within the same model can have a different default *discretization* order, meaning there can be additional nodes along the element edges, as well as in the element interior.

*A second-order tetrahedral element solving for the temperature field, T, will have a total of 10 unknowns per element, while a first-order element solving the laminar Navier-Stokes equations for velocity, \mathbf{u}=(u_x,u_y,u_z), and pressure, p, will have a total of 16 unknowns per element.*

COMSOL Multiphysics will use the information about the physics, material properties, boundary conditions, element type, and element shape to assemble a system of equations (a square matrix), which needs to be solved to get the answer to the finite element problem. The size of this matrix is the number of *degrees of freedom* (DOFs) of the model, where the number of DOFs is a function of the number of elements, the discretization order used in each physics, and the number of variables solved for.
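The per-element counts quoted in the caption above follow from simple bookkeeping: a first-order tetrahedron has 4 vertex nodes, a second-order tetrahedron adds a midpoint node on each of its 6 edges (10 nodes total), and each node carries one unknown per field component. A sketch of that arithmetic:

```python
# Node counts for a tetrahedral element by discretization order.
# First order: 4 vertex nodes. Second order: 4 vertices + 6 edge midpoints = 10.
TET_NODES = {1: 4, 2: 10}

def unknowns_per_element(order, fields_per_node):
    """Unknowns per tetrahedral element: nodes times unknowns per node."""
    return TET_NODES[order] * fields_per_node

# Second-order element, temperature only (T): 10 x 1 = 10 unknowns.
print(unknowns_per_element(2, 1))  # 10

# First-order element, laminar Navier-Stokes (u, v, w, p): 4 x 4 = 16 unknowns.
print(unknowns_per_element(1, 4))  # 16
```

Note that neighboring elements share nodes, so the total DOF count of a model is far less than the per-element count times the number of elements.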

These systems of equations are typically sparse, which means that most of the terms in the matrix are zero. For most types of finite element models, each node is only connected to the neighboring nodes in the mesh. Note that element shape matters; a mesh composed of tetrahedra will have different matrix sparsity from a mesh composed of hexahedra (brick) elements.

Some models will include non-local couplings between nodes, resulting in a relatively dense system matrix. Radiative heat transfer is a typical problem that will have a dense system matrix. There is radiative heat exchange between any surfaces that can see each other, so each node on the radiating surfaces is connected to every other node. The result of this is clearly seen in the plots I shared at the beginning of this blog post. The thermal model that includes radiation has much higher memory requirements than the thermal model without radiation.
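The effect of a non-local coupling on matrix sparsity can be illustrated with a toy SciPy example (this is not COMSOL's actual assembly, just a sketch of the idea): a 1D conduction chain gives a tridiagonal matrix, and coupling a handful of "radiating surface" nodes to each other fills in a dense block.

```python
import numpy as np
from scipy import sparse

n = 100  # nodes in a toy 1D conduction model

# Local (nearest-neighbor) coupling: a tridiagonal system matrix.
local = sparse.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print(local.nnz)  # 298 nonzeros out of 10,000 entries

# Add a non-local coupling among the first 20 "radiating surface" nodes:
# every such node couples to every other, filling in a dense 20x20 block.
A = local.tolil()
A[:20, :20] = 1.0
A = A.tocsr()
print(A.nnz)  # far more nonzeros, from coupling just 20 nodes
```

Scaled up to the millions of DOFs in a real 3D model, this fill-in is what drives up the memory cost of radiation problems.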

You should see, at this point, that it is not just the number of DOFs, but also the sparsity of the system matrix that will affect the amount of memory needed to solve your COMSOL Multiphysics model. Let’s now take a look at how your computer manages memory.

COMSOL Multiphysics uses the memory management algorithms provided by the operating system (OS) that you are working with. The performance of these algorithms is quite similar across all of the latest operating systems that we support.

The OS creates a Virtual Memory Stack, which the COMSOL software sees as a continuous space of free memory. This continuous block of virtual memory can actually map to different physical locations, so some part of the data may be stored within RAM and other parts will be stored on the hard disk. The OS manages where (in RAM or on disk) the data is actually stored, and by default you do not have any control over this. The amount of virtual memory is controlled by the OS, and it is not something that you usually want to change.

Under ideal circumstances, the data that COMSOL Multiphysics needs to store will fit entirely within RAM, but once there is no longer enough space, part of the data will spill over to the hard disk. When this happens, performance of all programs running on the computer will be noticeably degraded.

If too much memory space is requested by the COMSOL software, then the OS will determine that it can no longer manage memory efficiently (even via the hard disk) and will tell COMSOL Multiphysics that there is no more memory available. This is the point at which you will get an out-of-memory message and COMSOL Multiphysics will stop trying to solve the model.

Next, let’s take a look at what COMSOL Multiphysics is doing when you get this out-of-memory message and what you can do about it.

When you set up and solve a finite element problem, there are three memory intensive steps: *Meshing*, *Assembly*, and *Solving*.

**Meshing:** During the meshing step, the CAD geometry is subdivided into finite elements. The default meshing algorithm applies a free tetrahedral mesh over most of the modeling space. Free tetrahedral meshing of large, complex structures will require a lot of memory. In fact, it can sometimes require more memory than actually solving the system of equations, so it is possible to run out of memory even at this step. If you do find that meshing is taking significant time and memory, then you should subdivide (or *partition*) your geometry into smaller subdomains. Generally, the smaller the domains, the less memory intensive they are to mesh. By meshing in a sequence of operations, rather than all at once, you can reduce the memory requirements. Within the context of this blog entry, it is also assumed that there are no modeling simplifications (such as exploiting symmetry or using thin layer boundary conditions) that could be leveraged to simplify the model and reduce the mesh size.

**Assembly:** During the assembly step, COMSOL Multiphysics forms the system matrix as well as a vector describing the loads. Assembling and storing this matrix requires significant memory (possibly more than the meshing step, but always less than the solution step). If you run out of available memory here, you should increase the amount of RAM in your system.

**Solving:** During the solution step, COMSOL Multiphysics employs very general and robust algorithms capable of solving nonlinear problems, which can consist of arbitrarily coupled physics. At the very core of these algorithms, however, the software will always be solving a system of linear equations, and this can be done using either direct or iterative methods. So let’s look at these two methods from the point of view of when they should be used and how much memory they need.

Direct solvers are very robust and can handle essentially any problem that will arise during finite element modeling. The sparse matrix direct solvers used by COMSOL Multiphysics are the MUMPS, PARDISO, and SPOOLES solvers. There is also a dense matrix solver, which should only be used if you know the system matrix is fully populated.

The drawback to all of these solvers is that the memory and time required go up very rapidly as the number of DOFs and the matrix density increase; the scaling is very close to quadratic with respect to the number of DOFs.

As of writing this, both the MUMPS and PARDISO direct solvers in the COMSOL software come with an *out-of-core* option. This option overrides the OS’s memory management and lets COMSOL Multiphysics directly control how much data will be stored in RAM and when and how to start writing data to the hard drive. Although this is superior to the OS’s memory management algorithm, it will be slower than solving the problem entirely in RAM.

If you have access to a cluster supercomputer, such as Amazon Elastic Compute Cloud™ on Amazon Web Services™, you can also use the MUMPS solver to distribute the problem over many nodes of the cluster. Although this does allow you to solve much larger problems, it is also important to realize that solving on a cluster may be slower than solving on a single machine.

Due to their aggressive (approximately quadratic) scaling with problem size, the direct solvers are only used as the default for a few of the 3D physics interfaces (although they are almost always used for 2D models, for which their scaling is much better).

The most common case where the direct solver is used by default is for 3D structural mechanics problems. While this choice has been made for robustness, it is also possible to use an iterative solver for many structural mechanics problems. The method for switching the solver settings is demonstrated in the example model of the stresses in a wrench.

Iterative solvers require much less memory than the direct solvers, but they require more customization of the settings to get them to work well.

With all of the predefined physics interfaces where it is reasonable to do so, we have provided default iterative solver suggestions that are selected for robustness. These settings are handled automatically and do not require any user interaction, so as long as you are using the built-in physics interfaces, you do not need to worry about these settings.

The memory and time needed by an iterative solver will be much less than that of a direct solver for the same problem, so when they can be used, they should be. The scaling as the problem size increases is much closer to linear, as opposed to the quadratic scaling typical of the direct solvers.

At the time of writing this, the iterative solvers should be used on a computer that has enough RAM to solve the problem, so if you get an out-of-memory message when using an iterative solver, you should upgrade the amount of RAM on your computer.

It is also possible to use an iterative solver on a cluster computer using Domain Decomposition methods. This class of iterative methods has recently been introduced into the software, so stay tuned for more details about this in the future.

Although the data shown above do provide an upper and lower bound of memory requirements, these bounds are quite wide. We’ve seen that a small change to a model, such as a non-local coupling like radiative heat transfer, can significantly change memory requirements. So let’s introduce a general recipe for how you can predict memory requirements.

Start with a representative model that contains the combination of physics you want to solve and approximates the true geometric complexity. Begin with as coarse a mesh as possible, and then gradually increase the mesh refinement. Alternatively, start with a smaller representative model and gradually increase the size.

Solve each model and monitor memory requirements. Observe the default solver being used. If it is a direct solver, use the out-of-core option in your tests, or consider if an iterative solver can be used instead. Fit a second-order polynomial to the data, and use this curve to predict the memory required by the size of the larger problem that you eventually want to solve. This is the most reliable way to predict the memory requirements of large, complex, 3D multiphysics models.
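The last step of this recipe, fitting and extrapolating a second-order polynomial, takes only a few lines of NumPy. The DOF counts and memory figures below are invented purely for illustration; in practice, they would come from your own test solves:

```python
import numpy as np

# Measured memory use (GB) at a few mesh refinements of a representative
# model; these numbers are made up for illustration.
dofs   = np.array([1e5, 2e5, 4e5, 8e5])
memory = np.array([1.1, 2.6, 7.0, 21.5])

# Fit memory as a second-order polynomial in the number of DOFs.
coeffs = np.polyfit(dofs, memory, deg=2)

# Extrapolate to the target problem size.
target_dofs = 3e6
predicted = np.polyval(coeffs, target_dofs)
print(f"Predicted memory at {target_dofs:.0e} DOF: {predicted:.0f} GB")
```

A second-order fit is appropriate because, as discussed above, direct solvers scale close to quadratically with DOFs; an iterative solver's near-linear scaling will simply show up as a small quadratic coefficient.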

As we have now seen, the memory needed will depend upon (at least) the geometry, mesh, element types, combination of physics being solved, couplings between the physics, and the scope of any non-local model couplings. At this point, it should also be made clear that it is not generally possible to predict the memory requirements in all cases. You may need to repeat this procedure several times for variations of your model.

It is also fair to say that setting up and solving large models in the most efficient way possible is something that can require some deep expertise of not just the solver settings, but also of finite element modeling in general. If you do have a particular modeling concern, please contact your COMSOL Support Team for guidance.

You should now have an understanding of why the memory requirements for a COMSOL Multiphysics model can vary dramatically. You should also be able to predict with confidence the memory requirements of your larger models and decide what kind of hardware is appropriate for your modeling challenges.

*Amazon Web Services and Amazon Elastic Compute Cloud are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.*

In a recent blog post, Lexi explained how to best use line, surface, and volume plots. We will now look into arrow plots and how you can use these to your advantage. After a beginner’s guide, you’ll get a “look in the kitchen” via a very interesting industrial application where arrow plots played a crucial design role in winning a consulting assignment.

The arrow plot is a very powerful tool for visualizing field distributions. In COMSOL Multiphysics, we typically compute fields — whether this pertains to turbulent flow velocities, magnetic fields, chemical species transport, or thermal gradients. All of these benefit greatly from being visualized properly.

*Generator: Using arrow plots for magnetic fields.*

*Ozone reactor: Using arrow plots for fluid flow vectors.*

Let’s go over the options step-by-step, continuing to use the Heat Sink model that Lexi used in our previous postprocessing blog post as an example.

Starting out, our model only has one 3D Plot Group with a surface plot of the temperature. By clicking on “3D Plot Group 1” in the ribbon or right-clicking “3D Plot Group 1” in the model tree, we can add an *Arrow Volume* plot.

You will notice that arrow plots exist in three dimensions:

- *Arrow Volume*, which plots the arrows in three dimensions: inside a volume.
- *Arrow Surface*, which plots the arrows in two dimensions: on a surface.
- *Arrow Line*, which plots the arrows in one dimension: along a line.

We’ll go for the arrow volume plot in this example, which gives us the following result:

Although useful, it’s hard to gain significant insight with the default settings here. The problem is two-fold: the arrow color doesn’t stand out enough, and the volume is stacked with too many layers of arrows.

Let’s start by tackling the first issue. It is more effective to choose a contrasting color; we’ll go with magenta for now. The arrow color can be modified in the settings window, as displayed here:

Furthermore, we’re better off plotting only one layer of arrows. This can be changed in the same settings window, where the grid points can be chosen:

That gives the following result:

Hmm… Still not so great. What we need are more arrows in the in-plane directions, which are the *x*- and *y*-directions here. We’ll change them to 35 to get:

A more insightful perspective can be obtained by clicking the “XY View” button, located above the Graphics Window.

We can easily see how the air finds its way around the heat sink rods, as well as the reduced velocity behind them. This is where the arrow plot provides unique insight into what is going on; if we were instead using experiments, obtaining this information would be very difficult, if possible at all.

My roots are in experimental physics, and over the years, I’ve seen and felt the obstacles involved in performing such measurements. Think: Expensive downtime, challenging device scale (from microfluidics to enormous water filtration plants), equipment cost, having to interpolate — sometimes guess — between a limited amount of data points, probes influencing the measurement itself, and so on.

As you will notice in the plot, the arrow length is proportional to the velocity by default. This can also be set to *Normalized* or *Logarithmic*. These two options are great, for instance, for visualizing local vortices, which generally have low velocity amplitudes compared to mid-stream values.

Until now, we’ve chosen just one color for the arrows, but they can also be given a color proportional to any expression. This can be done by right-clicking “Arrow Volume 1” and choosing “Color expression”. By default, this will be given the *Rainbow* color table.

Changing the color table of the surface plot, we end up with the following image:

Now that you have an idea of how to use arrow plots, let’s have a look at *why* this is good to know.

I am from The Netherlands, and as you might know, about one third of The Netherlands is below sea level. For this reason, we’ve been very active in the field of water management, starting as early as the 10th century. It’s still a big engineering field today, and the following case falls under the umbrella of this application area.

Pumping stations are used to move water to a higher elevation level, such as to bring water to canals, drain flooded land, and for other water infrastructure applications. These pumps are very large, with capacities of 50 m³/min and up. Below is an example of a water pump station:

*Pumping station. “Gemaal van sasse” by Vincent de Groot — http://www.videgro.net — Own work. Licensed under CC BY 2.5 via Wikimedia Commons.*

In our specific industry example (*not* pictured above), a new type of pump was designed to handle a dynamic range of water levels on the inlet side. For various reasons, the designers decided to build an elbow-shaped pipe on the inlet side (imagine the shape of a classic faucet, but with the flow direction reversed). The specs of the pump — power and discharge — were more than sufficient for the job, but when connected to the system, the pump efficiency was measured as being much too low. The target specification could not be met, and, with deadlines fast approaching, the engineers were under pressure to quickly come up with a diagnosis and solution.

Confidentiality of the case prevents me from going into details, but it turns out that their chosen pipe geometry, at certain water levels, creates a swirling flow going into the pump entrance, which severely limits its efficiency. Testing numerous design variations in the field isn’t easy or cheap. Moreover, while you can measure pump efficiency, you would still be in the dark as to the details of the specific flow behavior. This would mean one has to measure a very expensive “black box”, with little idea whether the next design iteration will improve things or why.

For these reasons, the company hired an engineering firm that uses COMSOL Multiphysics with the CFD Module to resolve the issue.

While I cannot provide specific details, I can show conceptually what was going on and why arrow plots were useful in convincing management to go for a specific design modification. For demonstration purposes, I’m using the Flow in a Pipe Elbow model, which is available in our Model Gallery:

As the water flow is highly turbulent, going through a 90-degree bend will strongly distort the field. This can be seen in the velocity plot above, but we can perhaps make it more clear with an arrow plot. With our previous settings, the plot looks as follows:

Because the water speed is still predominantly in the pipe direction, the in-plane speed is somewhat disguised. Let’s do something about that.

As always, you can modify any expression in COMSOL Multiphysics. In this case, we could multiply the in-plane directions of the flow field by a large factor, but it’s even easier to simply set the *y*-component of the arrow plot to zero:
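Conceptually, zeroing the along-pipe component is just a projection of each velocity vector onto the cross-sectional plane. A small NumPy sketch with made-up velocity samples (the *y*-axis standing in for the pipe direction, as in the plot above):

```python
import numpy as np

# A few sample velocity vectors (u, v, w); v is the along-pipe component here.
# The values are made up for illustration.
velocity = np.array([
    [0.05, 3.0, -0.02],
    [-0.10, 2.8, 0.08],
    [0.12, 3.1, 0.04],
])

# Zero the along-pipe (y) component, leaving only the in-plane swirl.
in_plane = velocity.copy()
in_plane[:, 1] = 0.0

# The dominant axial speed no longer masks the small in-plane motion:
print(np.linalg.norm(velocity, axis=1))
print(np.linalg.norm(in_plane, axis=1))
```

In the plot settings, this corresponds to replacing the *y*-component expression of the arrow plot with 0 while leaving the *x*- and *z*-components unchanged.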

And the result:

We can now very clearly see how the swirling flow components fade away over distance. Notice that these are not your typical plumbing-sized pipes, but sewage pipe size — the distance of flow settlement is significant and the water moves at violent speeds.

We can also export the results as an animation:

Here’s a closer look:

In the situation of the pump designers mentioned above, the geometry of the entrance in combination with certain water levels created much worse flow distortion effects than those from the elbow bend. Now, to be clear, there are other postprocessing methods to put hard numbers on how quickly (or slowly) the anisotropy dissipates. However, in this specific case, the above images were of decisive importance in convincing the higher-ups of what the problem was and how it could be solved. The consulting firm in question managed to win the assignment over competitors by providing a quick and cost-effective solution guided by simulations.

If you have any interesting tips or applications related to the topic presented here, make sure to share them via the comments section below. Stay tuned for our next postprocessing topic: slice plots.


To demonstrate these three plot types, I’ll use the example of an aluminum heat sink, often used for cooling electrical circuitry components. This model is available from the COMSOL Multiphysics Model Libraries if you have the Heat Transfer Module or CFD Module.

The heat sink is made of aluminum, shaped with a cluster of pillars for cooling, and mounted on a silica glass plate. In the model set-up, it sits inside a rectangular channel with an inlet and outlet for air flow. The base of the heat sink initially experiences 1 watt of heat flux, which is generated by an external heat source.

This model includes coupled thermal and fluid flow effects to analyze temperature gradients and cooling power, resulting from thermal conduction and convection.

In some ways, it is easiest (visually) to use surfaces to demonstrate the plot settings in COMSOL Multiphysics. Surface plots are used to show results quantities on the boundaries of a model’s geometry. They can be added either by right-clicking the *Results* node of the Model Builder or by using the Results tab in the ribbon.

First, I’ll add a 3D plot group, and then I’ll add a surface plot to it. (When you add a 3D plot group from the ribbon, a new tab called *3D Plot Group 1* will appear and you can add surface, line, and volume plots from there, in addition to using the Model Builder.)

Adding a surface plot automatically creates a plot showing the temperature on every boundary in the geometry. However, if I simply create a surface plot, it will look something like this:

This is because the air domain is blocking our view. In order to actually see the heat sink inside, I’ll need to hide some entities. In the Model Builder, expand *Component 1 > Definitions > View*. The View node is where you can hide boundaries, edges, or entire domains and control the scene lighting of your model. (Take a look at my blog post about the graphics window to learn more about using the View node.)

Now, I’ll right-click the View node and choose *Hide Geometric Entities*. I’ll set the geometric entity level to *Boundary*. Then, I’ll select the faces of the channel that are blocking our view of the heat sink (boundaries 1, 2, and 4). If you’re following along in the model, you might also select boundary 121, which is the channel inlet.

The boundaries will turn purple when clicked, indicating that they are selected.

Now, if I return to the plot group, we can see the whole heat sink:

Note that choosing a selection of boundaries in a plot — similar in some ways to hiding geometric entities — allows you to show results on only the boundaries you’ve chosen. This is done by creating a solution under the Data Sets node: Right-click *Data Sets* and choose “Solution”, then right-click the *Solution* node and choose “Add Selection”. This selection works the same way as for hiding entities; set the geometric entity level to the correct type and choose the boundaries (or edges, or domains) that you want. When you create a new plot, make sure to choose this solution as the data set.

The settings for color and style make it easy to control the look of the results plots. For instance, here I’ve changed the color table to *ThermalLight*:

You can also control the color and data range by dragging the sliders under the *Range* tab. These two options allow you to visualize results on only a specific interval.

Adjusting the color range will align the colors representing the maximum and minimum temperature (white and dark red, respectively) with the chosen maximum and minimum temperatures. For instance, in the plot below, I have set the color range minimum to 320 (temperature is in Kelvin). This is helpful for cases where you’re only concerned with results inside a certain interval — so in this case, I only want to see a gradient in areas of the heat sink that are hotter than 320 K:

The manual data range controls something a little different. Rather than changing the color shown for a specific data range, this only plots the data interval specified by the maximum and minimum. Raising the minimum or lowering the maximum will actually remove data points from the plot:
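The distinction between the two ranges can be mimicked in a few lines of NumPy: clamping values to an interval (what the color range does to the colors) versus discarding values outside it (what the manual data range does to the plotted data). The temperatures below are made up for illustration:

```python
import numpy as np

temps = np.array([295.0, 310.0, 325.0, 340.0, 355.0])  # sample temperatures, K
lo, hi = 320.0, 350.0

# Color range: every value is still plotted, but the colors saturate at the
# ends of the interval (295 and 310 both map to the color of 320).
color_values = np.clip(temps, lo, hi)

# Data range: values outside the interval are removed from the plot entirely
# (marked NaN here, so nothing is drawn for them).
data_values = np.where((temps >= lo) & (temps <= hi), temps, np.nan)
```

So with the color range, a 295 K point still appears (colored as if it were 320 K), while with the data range it vanishes from the plot.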

Another interesting feature of the *Coloring* and *Style* tab is a check-box under the main node (*3D Plot Group 1*) labeled *Plot data set edges*. Unchecking it will cause the plot to remove the black lines on the edges of the geometry. This is easiest to see on the plot with the altered color range, below:

Line and volume plots can be added the same way I added a surface plot in the previous example, by right-clicking the Results node or using the ribbon. Next, I’ll add a new 3D plot group with a line plot. Using the View node again, this time to hide the edges of the channel, I have plotted only the edges of the heat sink. This plot will show the change in temperature on the individual edges, allowing us to see clearly how the temperature is changing along the height of the pillars:

Note: All of the examples shown will use 3D plot groups, but they have an equivalent in 2D as well. For instance, in a 2D plot group, this type of plot would be used to show temperature on the edges that lie in a certain plane.

Likewise, a volume plot shows the change in a variable through an entire 3D domain. Volume plots can often save you the trouble of selecting many individual boundaries, the way you might need to for a surface plot. For instance, if I wanted to see the heat sink only, I could create a surface plot using a data set that contains all the boundaries of the heat sink. But in the figure below, I have plotted the temperature on the volume of the heat sink domain (excluding the channel domain), and we can see the temperature gradient:

That’s it for a run-down of these plot types and how to control their coloring and style! Hopefully this demonstration will help you get started in some effective postprocessing. These are just a few of the plot types available in COMSOL Multiphysics — stay tuned for future posts demonstrating other plot techniques, such as arrows, streamlines, contours, and some application-specific types. We will also demonstrate using cut line plots, which allow you to plot any quantity along an arbitrary line through the model.


In a 3D model, we are often interested in understanding how a solution evolves along a particular direction. Let’s consider the 3D Laminar Static Mixer model, where the fluid velocity and concentration distribution are primarily varying along the axial direction. This model shows how the static geometric obstacles induce secondary flows in the device, which, in turn, improve the mixing characteristics. The plot below shows the steady-state concentration distribution on parallel sections (or slices).

*Concentration slice plot in a laminar static mixer.*

The plot shows how the concentration changes from a perfectly unmixed state to a mixed state. Let’s look at how we can present this information using an animation. Our objective is to stitch the static slices together to create the frames of a movie.

The first step is to add a parameter corresponding to the axial coordinate. For this example model, we add a parameter *zp* as shown in the screenshot below.

*Adding a parameter in the Parameters table.*

Second, we need to change the slice plot settings. In the Plane Data section, let’s change the Entry Method to “Coordinates” with *zp* as the input to the *z*-coordinates field. These changes enable us to plot the solution in the plane *z* = *zp*.

*Plane data settings for concentration slice plot.*

Last, we need to add an Animation or Player node to the model (right-click on Export under the Results node and choose “Animation”). We would like to generate an animation that plots the solution as the slice location *zp* moves from one end of the mixer to the other.

In the Animation settings, choose the Subject as “Concentration (chds)”, which is the concentration slice plot.

Now, let’s change the Sequence type to “Result parameter” with *zp* as the Parameter ranging from -6 to 36, as shown below. One could also use similar animation settings in the Player node (right-click on Export and choose “Player”) to visualize animations within the COMSOL Desktop.

*Animation settings for changing the slice location.*

Now, click on the Export button to create the desired animation. The resulting animation for our Laminar Static Mixer model appears below.

*Animation showing concentration evolution along the axial direction.*

It is simple to further customize your animations in terms of frame rate, orientation, animation format, etc. This is done through various options in the Animation and Plot settings. For example, the animation below shows the same sequence in an *xy* view, without the data set (geometric) edges.

*Animation showing concentration evolution (xy view) along the axial direction, without the data set edges.*

We have used a very simple three-step procedure to combine cross-sectional data (parallel slices) to create an animation for a 3D steady-state model. Now, suppose we are interested in finding the maximum (or minimum) value of any variable in each of these parallel slices.
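The slice-sweep idea behind this procedure can be sketched outside COMSOL with synthetic data. In the short Python sketch below, the field and all of its parameters are toy assumptions (this is not exported COMSOL data); only the axial range of -6 to 36 is taken from the settings described above. One 2D frame is generated per slice position *zp*, just as the Animation node sweeps the plane *z* = *zp*:

```python
import numpy as np

# Toy sketch of the slice-sweep idea: one 2D frame per axial position zp.
# The concentration field below is synthetic, NOT exported COMSOL data.
y = np.linspace(-1.0, 1.0, 40)
_, Y = np.meshgrid(np.linspace(-1.0, 1.0, 40), y)

def concentration(zp):
    # Toy concentration: sharply segregated at the inlet (zp = -6),
    # relaxing toward the fully mixed value 0.5 downstream
    mixing = np.exp(-0.1 * (zp + 6.0))
    return 0.5 + 0.5 * mixing * np.tanh(5.0 * Y)

zp_values = np.linspace(-6.0, 36.0, 50)
frames = [concentration(zp) for zp in zp_values]   # one frame per slice
```

Each array in `frames` could then be rendered in sequence (for example, with matplotlib's `FuncAnimation`) to reproduce the stitched-slice movie.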

Do you think we can create a plot of cross-sectional maximum (or minimum) values along the axial direction? I will address this postprocessing question in a follow-up blog entry. Stay tuned!

We have all experienced the boredom and frustration of being stuck in a traffic jam. Very often, traffic congestion comes and goes for no obvious reason. Employing the analogy to gas dynamics, we can now simulate traffic flow using the equation-based modeling capabilities of COMSOL Multiphysics and gain a better understanding of why congestion happens.

The Texas Transportation Institute estimates that in 2000, the 75 largest metropolitan areas experienced 3.6 billion vehicle hours of delay, resulting in 5.7 billion gallons of wasted fuel and $67.5 billion in lost productivity. We are inclined to blame bad weather, poorly timed traffic lights, and other drivers for not merging into the exit lane in advance. It must be *someone’s* fault.

According to a 2005 report by the Federal Highway Administration (FHWA), about 40% of traffic congestion in the U.S. occurs as a result of sheer volume of traffic. Between 1980 and 1999, vehicle miles of travel grew by 76%, while the amount of new roads or lanes increased by only 1.5%. There simply aren’t enough roads, or roads large enough. However, can a large volume of vehicles alone cause congestion? If everyone drives at a constant speed of 55 mph, congestion is unlikely to happen, isn’t it?

A group of Japanese scientists, Sugiyama et al., conducted an experiment in March 2008 to prove us wrong. As shown in this video, 22 homogeneously distributed cars are initially traveling at the same speed on a circuit. A traffic jam then appears out of nowhere and propagates backward like a solitary wave.

MIT professor Rodolfo Rosales calls this phenomenon a *phantom traffic jam*, which arises without any bottleneck or obstacle. When the traffic density passes a critical threshold, even a small perturbation in the traffic flow can amplify into traveling waves of high traffic density. This means that in rush hour traffic, when we reach for a cup of coffee or fiddle with the radio, the perturbation in vehicle speed might just be enough to leave the road haunted by phantom traffic jams.

Besides experiments, phantom traffic jams can be observed in a numerical simulation study. Since traffic flow resembles inviscid fluid flow, the phantom traffic jams can be modeled as detonation waves produced by explosions. A widely known model depicting this phenomenon is the Payne-Whitham model. The corresponding partial differential equations (PDEs) are implemented as an equation-based model in COMSOL Multiphysics.
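In its common form, the Payne-Whitham model couples a continuity equation for the vehicle density with a momentum-like equation for the mean velocity. The notation below is the standard one from the literature, not necessarily the variable names used in the COMSOL implementation:

```latex
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v)}{\partial x} = 0
\qquad
\frac{\partial v}{\partial t} + v \frac{\partial v}{\partial x}
= \frac{V(\rho) - v}{\tau} - \frac{c_0^2}{\rho}\frac{\partial \rho}{\partial x}
```

Here, \(\rho\) is the traffic density, \(v\) the mean speed, \(V(\rho)\) the equilibrium speed drivers relax toward, \(\tau\) the driver reaction time, and \(c_0\) an anticipation ("traffic sound") speed.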

The equations for the density and velocity of the cars are entered directly in the COMSOL Multiphysics user interface (UI). Because no coding is required, such a model can be implemented in a few minutes.
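As a rough illustration of what solving these equations involves, here is a minimal finite-difference sketch in Python. This is not the COMSOL implementation: the Lax-Friedrichs discretization, the linear equilibrium-speed relation, and every parameter value below are assumptions chosen for demonstration (only the 500 m periodic road, the reaction time of 3.3 seconds, and the car count of roughly 27 come from the model description).

```python
import numpy as np

# Minimal Lax-Friedrichs sketch of the Payne-Whitham equations on a
# 500 m periodic road. NOT the COMSOL implementation; the scheme and
# all parameter values are illustrative assumptions.
L_road = 500.0              # road length [m]
N = 200                     # grid points
dx = L_road / N
dt = 0.01                   # time step [s]
tau = 3.3                   # driver reaction time [s]
c0 = 5.0                    # anticipation ("traffic sound") speed [m/s]
rho_max = 0.1               # jam density [vehicles/m]
v_max = 25.0                # free-flow speed [m/s]

def V_eq(rho):
    # Equilibrium speed: linear Greenshields relation (an assumption)
    return v_max * (1.0 - rho / rho_max)

x = np.arange(N) * dx
# ~27 cars on the circuit, plus a small localized perturbation
rho = 0.054 + 0.002 * np.exp(-((x - 250.0) / 20.0) ** 2)
v = V_eq(rho)

def ddx(f):
    # Central difference with periodic boundaries
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def avg(f):
    # Lax-Friedrichs neighbor average (adds stabilizing diffusion)
    return 0.5 * (np.roll(f, -1) + np.roll(f, 1))

mass0 = rho.sum() * dx      # total number of cars; conserved by the scheme
for _ in range(2000):       # advance 20 seconds
    rho_new = avg(rho) - dt * ddx(rho * v)
    v_new = (avg(v) - dt * (v * ddx(v) + (c0**2 / rho) * ddx(rho))
             + dt * (V_eq(rho) - v) / tau)
    rho, v = rho_new, v_new
```

Whether the perturbation grows into a jamiton or is damped depends on the density, the equilibrium-speed relation, and the numerical diffusion of the scheme; the equation-based COMSOL model resolves this much more faithfully than this coarse sketch.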

The COMSOL Multiphysics model consists of a one-dimensional line 500 meters in length with periodic boundaries. In the following animation, the two ends of the line are connected to form a circuit, where 27 cars are traveling clockwise. Traffic density is represented by the radius of the red curve on the outside as well as the colors on the inner ring. A small initial perturbation is introduced to the traffic flow, which later grows into a traveling peak on the traffic density curve. Correspondingly, the congestion locations change from green to yellow and red in the color ring. We can clearly see how small sections of congestion emerge, form a single traffic jam, and propagate backwards in traffic, i.e. counter-clockwise, just like in the video of the experiment we linked to above.

*Traffic density evolution normalized against maximum density.*

Reducing the number of cars on the circuit can obviously alleviate the congestion. Another interesting aspect of the model is that we assume each driver has a reaction time of 3.3 seconds. That means a driver needs 3.3 seconds to fully adjust their speed to match the speed of traffic. This delay in reaction is partially responsible for amplifying the initial perturbation. Supposing all the cars are equipped with radar-guided cruise control and the reaction time is reduced to 0.4 seconds, the congestion can be eliminated altogether — something to factor in next time you buy a car!

Although the Payne-Whitham model only simulates a simplified version of our daily commute, more complexity can be added using the COMSOL Multiphysics equation-based modeling capabilities. The COMSOL software can not only handle different types of PDEs, but also ordinary differential equations (ODEs), algebraic equations, and transcendental equations. These capabilities are all included in the core functionality of COMSOL Multiphysics.

To benefit more from the flexibility of building your own custom models, be sure to check out the following references:

- U.S. Department of Transportation, Federal Highway Administration, Office of Operations, "21st Century Operations Using 21st Century Technologies."
- Y. Sugiyama et al., "Traffic jams without bottlenecks—experimental evidence for the physical mechanism of the formation of a jam."
- R. Rosales et al., "Traffic modeling—phantom traffic jams and traveling jamitons."
- M. Flynn et al., "Self-sustained nonlinear waves in traffic flow."