When we use the term *CAD geometry*, we are referring to a set of data structures that provide a very precise method for describing the shapes of parts. This method is called *boundary representation*, or *B-rep*. A B-rep model for solids consists of topological entities (faces, edges, and vertices) and their geometrical representation (surfaces, curves, and points). A face is a bounded portion of a surface, an edge is a bounded segment of a curve, and a vertex lies at a point.

In the B-rep data structures, surfaces are often represented by *Non-Uniform Rational B-Splines*, or *NURBS*. The B-rep model of a part is used as the basis for other operations, such as generating tooling paths in Computer Aided Manufacturing software, creating Rapid Prototyping files, and — most importantly — for your COMSOL Multiphysics modeling, generating the finite element mesh.

Your first choice of element type will usually be a tetrahedral mesh for 3D models or a triangular mesh for 2D models. Any 3D geometry can be meshed with tetrahedral (“tet”) elements, and any 2D geometry can be meshed with triangles. Additionally, these are the only elements that support Adaptive Mesh Refinement.

For the rest of this blog post, we will focus on the 3D case, since it is the most computationally challenging. At a very conceptual level, the COMSOL tetrahedral meshing algorithm first applies a mesh on all of the surfaces of an object. This surface mesh is then used to “seed” the volume mesh, from which the tetrahedral elements “grow” inwards. As these elements grow and intersect, their sizes are adjusted with the objective of keeping the elements as isotropic (similar edge lengths and included angles) as possible and of maintaining reasonably gradual transitions between smaller and larger elements.

An issue that you can run into with this algorithm is that the meshing is done based upon the underlying topological entities. There is no way for the meshing algorithm to insert larger elements if the underlying entities are small. As we saw in the previous blog post “Working with Imported CAD Designs,” we can use the CAD repair and defeaturing tools to simplify the geometry.

However, when these algorithms attempt to remove topological entities, they often need to modify the underlying NURBS surfaces and are therefore somewhat limited. An alternative in COMSOL Multiphysics software is to use Virtual Operations, which can keep the existing geometrical representations as a basis for constructing a new alternative topological structure purely for the purposes of meshing and defining the physics.

Let us take a look at the virtual operations and see what you can do with them through a series of examples. The first ten options in the Virtual Operations menu actually only represent five unique capabilities, but they can be used in different ways.

*The Virtual Operations menu.*

Let’s look at a quick example for each of these five.

The below image demonstrates the Ignore Vertices feature (top) and the Form Composite Edges feature (bottom), which result in the same geometry.

Below is a demonstration of the Ignore Edges feature (top) and the Form Composite Faces feature (bottom), which result in the same geometry.

The following image demonstrates that the Ignore Faces feature (top) can be used to ignore any faces that lie between two adjacent domains, resulting in a single domain. The Form Composite Domains feature (bottom) will also combine multiple domains into a single domain.

As shown next, the Collapse Edges feature (top) and the Merge Vertices feature (bottom) will result in the same geometry. The Merge Vertices feature gives the additional option of choosing which vertex to remove and which one to keep.

The Collapse Faces command (top) and the Merge Edges command (bottom) stand out, since they have been designed to work even in those cases where the faces are not continuous. A useful application for these commands is to get rid of slivers resulting from the union of components that are slightly misaligned or do not fit for other reasons.

Lastly, the Mesh Control Points, Edges, Faces, and Domains features will hide points, edges, faces, or domains during the set-up of the physics; however, these geometric entities will still be present during the meshing step. By using these operations, you can gain greater control over the meshing process by designating geometric entities for the control of the mesh size and distribution. The physics set-up is kept simple by excluding the control entities. A typical area of application is in CFD simulations, where regions of steep gradients in a volume need a high mesh density.

It appears that we have a lot of options here, and you may wonder which of these features you should be using. In practice, the Form Composite Faces feature can usually be your first choice. Almost all of the issues that you will typically run into, with the exception of forming composite domains, can be handled with this feature.

Let’s look at a case from the COMSOL Multiphysics Model Library: a structural model of the stresses and strains in a combination wrench. The provided CAD geometry has some relatively complex sculpted surfaces, fillets, and blends, which result in small faces in some parts of the model. These small faces force the tet mesher to use smaller elements, but Virtual Operations can be used to avoid this.

*A detailed view of a CAD file shows that small faces result in a fine mesh. Using the Virtual Operations allows larger elements in these regions.*

We can use the Form Composite Faces feature to abstract whole sets of faces. You can simply select all of the faces and then *deselect* those faces that you do not want to abstract. This is acceptable and recommended if you know you do not need high fidelity of the mesh in certain regions where there are many small faces.

*Virtual Operations can be used to combine sets of surfaces and significantly simplify some parts of the geometry.*

We have now seen why you would want to use these Virtual Operations and the many ways in which they can be used. If you want to see a step-by-step guide for using these features to simplify your geometry, please see the Model Library example on using Virtual Operations on a Wheel Rim Geometry.


COMSOL Multiphysics has three add-on products for electromagnetic wave propagation: the Ray Optics Module, the Wave Optics Module, and the RF Module. Let’s take a look at the differences.

The RF Module and the Wave Optics Module both offer an *Electromagnetic Waves, Frequency Domain* interface, which solves the full-wave form of Maxwell’s equations via the finite element method (FEM). This requires a finite element mesh that is fine enough to resolve the electromagnetic waves, as shown in the figure below.

*Full-wave simulation of scattering off of a metallic sphere. The variations in the magnitude of the electric field require a fine mesh everywhere.*

This approach is appropriate when the solutions we are interested in have significant variations in all directions and are on a length scale comparable to the wavelength.

The Wave Optics Module also includes the *Electromagnetic Waves, Beam Envelopes* interface, which solves a modified version of the full-wave Maxwell’s equations, again via the finite element method. The Beam Envelopes formulation requires, as input, an approximate and slowly varying wave vector. Rather than solving for the electromagnetic fields themselves, this formulation solves for the slowly varying electric field amplitude.

*Beam envelopes simulation of a directional coupler. The gradual variation in the field magnitude allows for a very coarse mesh in that direction.*

The advantage of the Beam Envelopes formulation is that a very coarse mesh can be used in the direction of propagation. The limitation is that the wave vector field must be approximately uniform or slowly varying throughout the modeling domain. However, this is indeed the case for a range of important optical devices such as optical fibers or directional couplers.

The Ray Optics Module includes the *Geometrical Optics* interface, which treats electromagnetic waves as rays. It does not use the finite element method; instead, it traces the rays through the modeling domain by solving a set of ordinary differential equations for the position and wave vector. Although the domains through which the rays travel must be meshed, the mesh can be very coarse. Only at curved surfaces must the mesh be refined.

*Geometrical optics simulation of a plane wave scattering from a cylinder. The ray intensity decreases after the rays are reflected by the curved surface, causing the waves to diverge. A very coarse mesh can be used, except on the curved boundaries.*

The Ray Optics Module traces rays of light propagating through different media and can consider many different behaviors of the rays at boundaries. The wavelength-dependence of the refractive indices of the media can be considered. It is also possible to compute the intensity, the phase, and the polarization of light and how these vary as the ray goes through different media and across boundaries.

Let’s now take a deeper look at the various physical phenomena that can be modeled.

*Refraction and reflection at a dielectric interface.*

A ray of light propagating through a medium of uniform refractive index will travel in a straight line. When the ray encounters an interface between materials of different refractive indices, the ray will be partially reflected and partially refracted. This behavior is governed by Snell’s Law and the Fresnel equations and is handled automatically, by the Ray Optics Module, at interfaces between different materials.
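The behavior at such an interface can be sketched outside of COMSOL in a few lines of code. The following Python snippet illustrates the standard Snell and Fresnel relations themselves (it is not a representation of the module's internal implementation), computing the refraction angle and the s- and p-polarized power reflectances:

```python
import numpy as np

def snell_fresnel(n1, n2, theta_i):
    """Refraction angle and Fresnel power reflectances at a dielectric
    interface, for a ray incident at angle theta_i (radians) going from
    index n1 into index n2. Returns (theta_t, R_s, R_p)."""
    sin_t = n1 * np.sin(theta_i) / n2          # Snell's law
    if abs(sin_t) > 1.0:
        return None, 1.0, 1.0                  # total internal reflection
    theta_t = np.arcsin(sin_t)
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)   # s-polarized amplitude
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)   # p-polarized amplitude
    return theta_t, r_s**2, r_p**2

# Air-to-glass at 45 degrees incidence:
theta_t, Rs, Rp = snell_fresnel(1.0, 1.5, np.radians(45))
```

At normal incidence with n1 = 1 and n2 = 1.5, both reflectances reduce to the familiar 4%.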

*A light ray bends as it passes through a graded index material.*

Light propagating through a medium with a non-uniform refractive index will bend in the direction of a relatively higher refractive index. Such graded index behavior can be modeled simply by defining the refractive index as a smooth, spatially varying function. The Ray Optics Module inherits the powerful tools of COMSOL Multiphysics for creating spatially varying materials.

For instance, the Luneburg Lens example model available in the Model Library of the Ray Optics Module defines the refractive index simply as *sqrt(2-(x^2+y^2+z^2))*. Alternatively, you can define spatially distributed media as a look-up table from a file or, more spectacularly, as a function of another physics field quantity, such as n = f(T(x,y,z)), where n is the refractive index, f is some function, and T(x,y,z) is a spatially varying temperature field computed by a heat transfer simulation in COMSOL Multiphysics. More on this in a blog post coming soon.
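To make the graded-index bending concrete, here is a minimal Python sketch that integrates the eikonal ray equation, d/ds (n dr/ds) = grad n, with a simple Euler scheme. The linear index profile `n_slab` is a hypothetical example chosen for illustration, not a profile taken from the Model Library:

```python
import numpy as np

def grad_n(pos, n, h=1e-6):
    """Numerical gradient of a 2D refractive index field n(x, y)."""
    x, y = pos
    return np.array([(n(x + h, y) - n(x - h, y)) / (2 * h),
                     (n(x, y + h) - n(x, y - h)) / (2 * h)])

def trace_ray(n, pos, direction, ds=1e-3, steps=2000):
    """Euler integration of the eikonal ray equation d/ds (n dr/ds) = grad n.
    The auxiliary variable u = n * dr/ds is the optical direction vector."""
    pos = np.array(pos, float)
    u = n(*pos) * np.array(direction, float) / np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(steps):
        pos += ds * u / n(*pos)     # advance the ray position
        u += ds * grad_n(pos, n)    # bend toward higher index
        path.append(pos.copy())
    return np.array(path)

# Hypothetical graded-index slab: index increases linearly with y
n_slab = lambda x, y: 1.0 + 0.5 * y
path = trace_ray(n_slab, pos=(0.0, 0.0), direction=(1.0, 0.0))
```

As expected, the horizontally launched ray curves toward the region of higher refractive index.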

*Specular (left) and diffuse (right) reflection of a ray of light.*

At boundaries, the ray can propagate through unimpeded as if the boundary were completely transparent, it can be completely absorbed, or it can be reflected. Reflections occur at surfaces of materials through which light cannot pass and will be either specular, diffuse, or a mixture of the two. Specular reflection occurs on highly polished metal surfaces, whereas most other surfaces reflect more diffusely.

*Reflection and transmission through a (possibly multi-layer) thin dielectric film.*

It is also possible to model structures composed of thin layers of different materials, such as dielectric mirrors or anti-reflective coatings. These can be modeled by adding one or more Thin Dielectric Film nodes to a boundary. The effective reflection and transmission coefficients through the multi-layer stack are then computed without explicitly modeling each layer. This is demonstrated in the Anti-Reflective Coating, Multilayer model.

*Reflection and transmission into various diffraction orders from an optical grating.*

On the other hand, structures with periodic wavelength-scale variation in the plane of the boundary can be modeled with the Grating boundary condition. Diffraction gratings have periodic variations in their structure and can split and diffract a ray into several different rays, which are termed *diffraction orders*. It is also possible to compute the characteristics of the grating via the full-wave formulation and use this as an input, as demonstrated in the Diffraction Grating model.

*The polarization of a ray of light changes as it goes through various optical elements.*

Lastly, boundary conditions can be used to manipulate the polarization of the ray. Linear polarizers, linear and circular wave retarders, ideal depolarizers, and optical components with arbitrary Mueller matrices can all be represented as boundary conditions. These conditions are demonstrated in the Linear Wave Retarder model.

The rays themselves can be launched into the model from domains, boundaries, and any user-specified points. The rays can have a spherical, hemispherical, or conical distribution. It is also possible to model illumination from the sun by specifying a position on Earth. Along with the path of the ray, the intensity, polarization, and phase can also be computed, if desired. This makes it possible to compute both optical intensity on surfaces and interference patterns. Examples of this include modeling a solar dish and computing the interference pattern of a Michelson interferometer.

The Ray Optics Module does not directly consider interactions with structures that have size comparable to the wavelength.

For example, consider a plane wave scattering off of a diamond-shaped metallic object as shown below. If the wavelength is comparable to the object size, there will be significant diffraction around the object and the region behind it will get illuminated. Similarly, a plane wave incident upon a wavelength-scale slit will experience significant diffraction and broadening. Modeling either of these effects requires a full-wave approach using the Wave Optics Module or the RF Module.

*A diamond-shaped object scatters an electromagnetic wave in all directions (left). There is significant illumination behind the scatterer. A plane wave incident upon a slit (right) will spread out. The color in both plots indicates the electric field norm.*

The Geometrical Optics approach, on the other hand, does not consider these diffractive phenomena. Rays representing a plane wave will be reflected specularly from the surfaces and will not illuminate the region behind the object. Rays passing through a slit will not spread out. These are both valid approximations if the wavelength of light is much smaller than the object’s size.

*A diamond-shaped object in a plane wave using the Geometrical Optics approach (left) and a plane wave passing through a slit (right) does not experience any diffraction.*

Currently, the Ray Optics Module also does not consider refractive indices that are dependent upon the intensity of light. However, such problems can be addressed with the Beam Envelopes formulation in the Wave Optics Module, as demonstrated in the example of Self Focusing in BK-7 Glass.

The complete capabilities of the Ray Optics Module are demonstrated by the Model Library examples, available within the software and on our online Model Gallery.

If you are interested in using the Ray Optics Module for any of your modeling needs, please contact us.


The plot below shows the amount of memory needed to solve various different 3D finite element problems in terms of the number of degrees of freedom (DOF) in the model.

*Memory requirements (with a second-polynomial curve fit) with respect to degrees of freedom for various representative cases.*

There are five different cases presented here:

- Case 1: A heat transfer problem of a spherical shell. There is radiative heat transfer between all of the surfaces. The model is solved with the default iterative solver.
- Case 2: A structural mechanics problem of a cantilevered beam, solved with the default direct solver.
- Case 3: A wave electromagnetics problem solved with the default iterative solver.
- Case 4: The same structural mechanics problem as Case 2, but using an iterative solver.
- Case 5: A heat transfer problem of a block of material. Only conductive heat transfer is considered. The model is solved with the default iterative solver.

What you should see from this graph is that, with a computer that has 64 GB of random access memory (RAM), you can solve problems that range in size anywhere from ~26,000 DOF on the low end all the way up to almost 14 million degrees of freedom. So why this wide range of numbers? Let’s look at how to interpret these data…

For most problems, COMSOL Multiphysics solves a set of governing partial differential equations via the finite element method, which takes your CAD model and subdivides the domains into *elements*, which are defined by a set of nodes on the boundaries.

At each node, there will be at least one *unknown*, and the number of these unknowns is based upon the physics that you are solving. For example, when solving for temperature, you only have a single unknown (called T, by default) at each node. When solving a structural problem, you are instead computing strains and the resultant stresses, thus you are solving for three unknowns (u,v,w), which are the displacements of each node in the x-y-z space.

For a turbulent fluid flow problem, you are solving for the fluid velocities (also called u,v,w by default) and pressure (p) as well as extra unknowns describing the turbulence. If you are solving a diffusion problem with many different species, you will have as many unknowns per node as you have chemical species. Additionally, different physics within the same model can have a different default *discretization* order, meaning there can be additional nodes along the element edges, as well as in the element interior.

*A second-order tetrahedral element solving for the temperature field, T, will have a total of 10 unknowns per element, while a first-order element solving the laminar Navier-Stokes equations for velocity, \mathbf{u}=(u_x,u_y,u_z), and pressure, p, will have a total of 16 unknowns per element.*

COMSOL Multiphysics will use the information about the physics, material properties, boundary condition, element type, and element shape to assemble a system of equations (a square matrix), which need to be solved to get the answer to the finite element problem. The size of this matrix is the number of *degrees of freedom* (DOFs) of the model, where the number of DOFs is a function of the number of elements, the discretization order used in each physics, and the number of variables solved for.
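As a back-of-envelope check of these per-element counts, the node counts below are standard properties of Lagrange tetrahedral elements (they are not COMSOL-specific data), and they reproduce the figures quoted in the caption above:

```python
# Node counts for Lagrange tetrahedral elements: a first-order tet has
# its 4 corner vertices as nodes; a second-order tet adds a node at the
# midpoint of each of its 6 edges.
TET_NODES = {1: 4, 2: 10}

def unknowns_per_element(order, n_fields):
    """Unknowns per tetrahedral element for n_fields dependent variables."""
    return TET_NODES[order] * n_fields

# Second-order element, one temperature field T:
print(unknowns_per_element(2, 1))   # 10
# First-order element, velocity (u, v, w) and pressure p:
print(unknowns_per_element(1, 4))   # 16
```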

These systems of equations are typically sparse, which means that most of the terms in the matrix are zero. For most types of finite element models, each node is only connected to the neighboring nodes in the mesh. Note that element shape matters; a mesh composed of tetrahedra will have different matrix sparsity from a mesh composed of hexahedra (brick) elements.
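A rough storage estimate shows why this sparsity matters so much. In the sketch below, the nonzeros-per-row figure and the per-entry index overhead are assumed, illustrative values, not measurements of any particular solver:

```python
def matrix_memory_gb(n_dof, nnz_per_row=None):
    """Rough storage estimate for a double-precision system matrix.
    If nnz_per_row is given, assume sparse storage (8 bytes per value
    plus ~8 bytes of index bookkeeping per entry); otherwise dense."""
    if nnz_per_row is None:
        bytes_total = 8 * n_dof ** 2            # dense: every pair coupled
    else:
        bytes_total = (8 + 8) * n_dof * nnz_per_row
    return bytes_total / 1024**3

# 1 million DOFs with ~60 nonzeros per row (an assumed figure for a
# 3D scalar problem) versus a fully dense matrix of the same size:
sparse_gb = matrix_memory_gb(1_000_000, nnz_per_row=60)   # ~0.9 GB
dense_gb = matrix_memory_gb(1_000_000)                    # ~7450 GB
```

The several-thousand-fold gap between these two estimates is why a non-local coupling, such as surface-to-surface radiation, can change the memory requirements so dramatically.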

Some models will include non-local couplings between nodes, resulting in a relatively dense system matrix. Radiative heat transfer is a typical problem that will have a dense system matrix. There is radiative heat exchange between any surfaces that can see each other, so each node on the radiating surfaces is connected to every other node. The result of this is clearly seen in the plots I shared at the beginning of this blog post. The thermal model that includes radiation has much higher memory requirements than the thermal model without radiation.

You should see, at this point, that it is not just the number of DOFs, but also the sparsity of the system matrix that will affect the amount of memory needed to solve your COMSOL Multiphysics model. Let’s now take a look at how your computer manages memory.

COMSOL Multiphysics uses the memory management algorithms provided by the Operating System (OS) that you are working with. Regardless of which OS you are using, the performance of these algorithms is quite similar on all of the latest OS’s that we support.

The OS creates a Virtual Memory Stack, which the COMSOL software sees as a continuous space of free memory. This continuous block of virtual memory can actually map to different physical locations, so some part of the data may be stored within RAM and other parts will be stored on the hard disk. The OS manages where (in RAM or on disk) the data is actually stored, and by default you do not have any control over this. The amount of virtual memory is controlled by the OS, and it is not something that you usually want to change.

Under ideal circumstances, the data that COMSOL Multiphysics needs to store will fit entirely within RAM, but once there is no longer enough space, part of the data will spill over to the hard disk. When this happens, performance of all programs running on the computer will be noticeably degraded.

If too much memory space is requested by the COMSOL software, then the OS will determine that it can no longer manage memory efficiently (even via the hard disk) and will tell COMSOL Multiphysics that there is no more memory available. This is the point at which you will get an out-of-memory message and COMSOL Multiphysics will stop trying to solve the model.

Next, let’s take a look at what COMSOL Multiphysics is doing when you get this out-of-memory message and what you can do about it.

When you set up and solve a finite element problem, there are three memory intensive steps: *Meshing*, *Assembly*, and *Solving*.

**Meshing:** During the meshing step, the CAD geometry is subdivided into finite elements. The default meshing algorithm applies a free tetrahedral mesh over most of the modeling space. Free tetrahedral meshing of large, complex structures will require a lot of memory. In fact, it can sometimes require more memory than actually solving the system of equations, so it is possible to run out of memory even at this step. If you do find that meshing is taking significant time and memory, then you should subdivide (or *partition*) your geometry into smaller sub-domains. Generally, the smaller the domains, the less memory intensive they are to mesh. By meshing in a sequence of operations, rather than all at once, you can reduce the memory requirements. Within the context of this blog entry, it is also assumed that there are no modeling simplifications (such as exploiting symmetry or using thin layer boundary conditions) that could be leveraged to simplify the model and reduce the mesh size.

**Assembly:** During the assembly step, COMSOL Multiphysics forms the system matrix as well as a vector describing the loads. Assembling and storing this matrix requires significant memory — possibly more than the meshing step, but always less than the solution step. If you run out of available memory here, you should increase the amount of RAM in your system.

**Solving:** During the solution step, COMSOL Multiphysics employs very general and robust algorithms capable of solving nonlinear problems, which can consist of arbitrarily coupled physics. At the very core of these algorithms, however, the software will always be solving a system of linear equations, and this can be done using either direct or iterative methods. So let’s look at these two methods from the point of view of when they should be used and how much memory they need.

Direct solvers are very robust and can handle essentially any problem that will arise during finite element modeling. The sparse matrix direct solvers used by COMSOL Multiphysics are the MUMPS, PARDISO, and SPOOLES solvers. There is also a dense matrix solver, which should only be used if you know the system matrix is fully populated.

The drawback to all of these solvers is that the memory and time required goes up very rapidly as the number of DOFs and the matrix density increase; the scaling is very close to quadratic with respect to number of DOFs.

As of writing this, both the MUMPS and PARDISO direct solvers in the COMSOL software come with an *out-of-core* option. This option overrides the OS’s memory management and lets COMSOL Multiphysics directly control how much data will be stored in RAM and when and how to start writing data to the hard drive. Although this is superior to the OS’s memory management algorithm, it will be slower than solving the problem entirely in RAM.

If you have access to a cluster supercomputer, such as the Amazon Web Services™ Amazon Elastic Compute Cloud™, you can also use the MUMPS solver to distribute the problem over many nodes of the cluster. Although this does allow you to solve much larger problems, it is also important to realize that solving on a cluster may be slower than solving on a single machine.

Due to their aggressive (approximately quadratic) scaling with problem size, the direct solvers are only used as the default for a few of the 3D physics interfaces (although they are almost always used for 2D models, for which their scaling is much better).

The most common case where the direct solver is used by default is for 3D structural mechanics problems. While this choice has been made for robustness, it is also possible to use an iterative solver for many structural mechanics problems. The method for switching the solver settings is demonstrated in the example model of the stresses in a wrench.

Iterative solvers require much less memory than the direct solvers, but they require more customization of the settings to get them to work well.

With all of the predefined physics interfaces where it is reasonable to do so, we have provided default iterative solver suggestions that are selected for robustness. These settings are handled automatically and do not require any user interaction, so as long as you are using the built-in physics interfaces, you do not need to worry about these settings.

The memory and time needed by an iterative solver will be much less than for a direct solver on the same problem, so when they can be used, they should be. The scaling as the problem size increases is much closer to linear, as opposed to the quadratic scaling typical of the direct solvers.

At the time of writing this, the iterative solvers should be used on a computer that has enough RAM to solve the problem, so if you get an out-of-memory message when using an iterative solver, you should upgrade the amount of RAM on your computer.

It is also possible to use an iterative solver on a cluster computer using Domain Decomposition methods. This class of iterative methods has recently been introduced into the software, so stay tuned for more details about this in the future.

Although the data shown above do provide an upper and lower bound of memory requirements, these bounds are quite wide. We’ve seen that making a small change to a model, such as introducing a non-local coupling like radiative heat transfer, can significantly change memory requirements. So let’s introduce a general recipe for how you can predict memory requirements.

Start with a representative model that contains the combination of physics you want to solve and approximates the true geometric complexity. Begin with as coarse a mesh as possible, and then gradually increase the mesh refinement. Alternatively, start with a smaller representative model and gradually increase the size.

Solve each model and monitor memory requirements. Observe the default solver being used. If it is a direct solver, use the out-of-core option in your tests, or consider if an iterative solver can be used instead. Fit a second-order polynomial to the data, and use this curve to predict the memory required by the size of the larger problem that you eventually want to solve. This is the most reliable way to predict the memory requirements of large, complex, 3D multiphysics models.
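This fitting step can be sketched in a few lines of NumPy. The measurements below are entirely hypothetical and stand in for the values you would record from your own sequence of test runs:

```python
import numpy as np

# Hypothetical measurements: (DOFs, GB of RAM) recorded from a series
# of successively refined meshes of the same representative model.
dofs = np.array([2e5, 4e5, 8e5, 1.6e6])
mem_gb = np.array([1.1, 2.6, 6.8, 19.0])

# Fit a second-order polynomial to the measured data...
coeffs = np.polyfit(dofs, mem_gb, deg=2)

# ...and extrapolate to the target problem size (here, 5 million DOFs)
predicted = np.polyval(coeffs, 5e6)
print(f"Predicted memory at 5M DOFs: {predicted:.0f} GB")
```

If the predicted value exceeds the RAM you have available, you know before committing to the full run that you need to simplify the model, switch solvers, or upgrade the hardware.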

As we have now seen, the memory needed will depend upon (at least) the geometry, mesh, element types, combination of physics being solved, couplings between the physics, and the scope of any non-local model couplings. At this point, it should also be made clear that it is not generally possible to predict the memory requirements in all cases. You may need to repeat this procedure several times for variations of your model.

It is also fair to say that setting up and solving large models in the most efficient way possible is something that can require some deep expertise of not just the solver settings, but also of finite element modeling in general. If you do have a particular modeling concern, please contact your COMSOL Support Team for guidance.

You should now have an understanding of why the memory requirements for a COMSOL Multiphysics model can vary dramatically. You should also be able to predict with confidence the memory requirements of your larger models and decide what kind of hardware is appropriate for your modeling challenges.

*Amazon Web Services and Amazon Elastic Compute Cloud are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.*

Let’s start by considering a very simplified model of a computer, composed of just three parts: random access memory (RAM), which is used to store information; a processing unit, which performs mathematical operations on the information; and a bus, which transfers the data between the two.

*Schematic of the key parts of a computer.*

For the purposes of this blog post, let’s imagine that all of the data about the problem is sitting in the RAM and that this data gets moved over to the processing unit via the bus. The memory bus itself can be composed of several channels operating in parallel, effectively increasing throughput. The processing unit can be composed of several chips, each of which can have several computational cores that are able to work on data simultaneously, after it has been loaded from the memory via the bus. Let’s use this as our mental model of the computer sitting on our desktop.

Many problems in computer science can be thought of as games that we played as children. Let’s look at three of the classics.

First, let’s try to find a face in the crowd, à la *Where’s Waldo?*

*A photo from the COMSOL Conference. Can you find me?*

Suppose we have a photo with hundreds of people — what’s the fastest way of finding one person?

You could scan through the entire image by yourself, checking faces one by one to see if they match the person you are searching for. But, this can be quite slow. You can also invite your friends over to help. In this case, you would first subdivide the picture into smaller pieces. Each person can then independently work on one piece at a time.

In the language of computer science, we would say that this game is *completely parallel*.

Having two people working will halve the solution time, four people will cut the solution time in four, and so on. But, there is a limit — you can only have as many friends helping you as there are faces in the crowd. Beyond that point, inviting more people to help won’t speed up the process, and it may even slow things down.

Next, let’s try to solve a jigsaw puzzle.

*Can you put together the image?*

This is a bit more complicated — you can have multiple people working at once, but they cannot work independently. Each person will take a few dozen puzzle pieces for themselves from the main pile and try to fit them together, both with each other and with the pieces that their friends are assembling. They will pass pieces back and forth, and they will be in constant communication with each other.

A computer scientist would call this a *partially parallel* game.

Although adding more people will decrease the solution time, it will not be a simple mathematical relationship. Suppose you have a 1,000-piece puzzle, and 10 people with 100 pieces each. They will spend relatively more time working on their own pieces and less time talking. On the other hand, if you have 100 people with 10 pieces each, there will be a lot more talking and moving pieces around. And what will happen when you have 1,000 people working on a puzzle with 1,000 pieces? Try that one at home!

You can probably see that for a puzzle of a certain size, there is some maximum number of people that can be working on it. This number will be much lower than the number of puzzle pieces. Adding more people won't speed things up noticeably.
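These diminishing returns have a classic quantitative counterpart, Amdahl's law. As a rough illustration (our sketch, not something from the software): if a fraction `p` of the work can be parallelized, then `n` workers give a speedup of at most `1/((1-p) + p/n)`.

```python
def speedup(p: float, n: int) -> float:
    """Amdahl's law: ideal speedup on n workers for a task
    whose parallelizable fraction is p."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelizable, returns diminish quickly;
# the speedup approaches, but never exceeds, 1/(1-p) = 10.
for n in (1, 2, 4, 8, 1000):
    print(n, round(speedup(0.9, n), 2))
```

The serial fraction here plays the same role as the "talking and moving pieces around" in the puzzle game.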

Finally, let’s try to stack some blocks on top of each other to form a tower, and then raise the height of the tower by taking the blocks from the lower levels and adding them to the top without causing the structure to topple over.

*How high can you stack the tower? (JENGA® tower standing on one tile. “Jenga distorted” by Guma89 — Own work. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons.)*

In this game, only one person can play at a time, and we can say that the game is *completely serial*.

Playing with more people won’t finish the game any faster, and if you invite too many people to play, some of them will never get a chance to do anything. In fact, it’s probably fastest (albeit not very sociable) to play this game by yourself.

You can probably already see the relationships between playing these games and using COMSOL Multiphysics. How about we start classifying the problems you solve in COMSOL Multiphysics into these categories:

- **Partially Parallel:** All problems in COMSOL Multiphysics have a partially parallel component. Solving a system of linear equations is a partially parallelizable problem. Thus, no matter what class of problem you are solving, some (usually significant) fraction of the solution time is spent solving a partially parallel problem. For stationary, frequency-domain, or eigenfrequency problems, almost all of the time is spent solving the system of linear equations.
- **Completely Parallel:** A completely parallel problem arises when you use the *Parametric Sweep* functionality, such as when sweeping over a range of geometric dimensions. Each step in the parameter sweep still needs to solve a partially parallelizable problem, but the parameters can be solved independently, with no information exchanged between the various cases.
- **Completely Serial:** A serial problem arises when subsequent parts of the solution depend on previously computed values. Time-dependent models, models using continuation methods, and optimization problems fall into this category. All such models still need to solve a system of linear equations, but they do so sequentially. The possible speedup is mainly governed by the speedup possible when solving the partially parallel system of linear equations.

When solving, COMSOL Multiphysics is spending most of its time solving a partially parallel problem. So, what actually happens in the hardware?

The COMSOL software starts with the information about the loads, boundary conditions, material properties, and finite element mesh and generates data that is used during the solution process. Many gigabytes of memory are needed to store the system matrices, generate intermediate information, and compute the solution. Ideally, all of this information should be stored in the RAM.

It is worth noting that COMSOL Multiphysics does offer hard-disk based solvers as well, but these will be slower than when data is accessed directly from the RAM. Their advantage is that they allow you to solve larger problems.

Of course, this data has to be operated on by the processors, so it turns out that the bottleneck in the solution on a desktop computer is actually the bandwidth of the memory bus — much more so than the processor speed, or even the number of processor cores.

Cluster computers are really nothing more than ordinary computers, or *nodes*, connected with an additional communication layer.

Let’s assume that we are working with a cluster where each node is equivalent in performance to that of our single computer. Data passes between the nodes via the interconnect hardware. The interconnect speed is dependent upon not just the type of hardware, but also the physical configuration. In sum, it is usually slower than the memory bus speed on any individual node. This introduces an additional consideration.

*A simple model of a cluster with four compute nodes.*

We have already seen that the partially parallelizable case is the most important to understand, so we’ll focus on that.

On a cluster, we would say that we are solving this problem in a *distributed parallel* sense. In the context of our game of putting together a puzzle, we can think of this as grouping several of our friends in different rooms of our house, and giving them each a stack of pieces. You would now additionally need to send pieces and information back and forth between different rooms.

COMSOL Multiphysics adjusts the solution algorithm to maximize the amount of work done locally and minimize the amount of data that is passed back and forth. These distributed parallel solvers, which are available when you use the Floating Network License, adjust the solution algorithm to efficiently split the problem up onto the different nodes of the cluster. Again, we can see that there is a limit. If there are too many nodes involved, we will just be communicating data back and forth all the time. So, for each particular problem, there is some number of nodes beyond which solution speed will not improve.

Now, if you have a completely parallel problem, such as a parametric sweep, where each step in the sweep can be solved entirely within the RAM available on one node, then a cluster is an almost perfect way to speed up your modeling. You could use up to as many nodes as there are parameter values that you want to sweep over.

You should now have an understanding of the different types of problems that COMSOL Multiphysics solves, in terms of parallelization and how these relate to performance.

When working on a single computer, the performance bottleneck is the bus speed rather than the clock speed and number of processors. For desktop machines, we also publish some more specific hardware purchasing guidelines in our Knowledge Base. For cluster computers, performance can be much more variable, depending on problem size, cluster architecture, and the type of problem being solved. If you want more technical details about clusters, please see this series of blogs on hybrid parallel computing.

*Amazon Web Services, the “Powered by Amazon Web Services” logo, and Amazon Elastic Compute Cloud are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.*

*JENGA® is a registered trademark owned by Pokonobe Associates.*

The coil heat exchanger we’ll consider is shown in the figure below.

*A copper coil carries hot water through a duct carrying cold air.*

Copper tubing is helically wound so that it can be inserted along the axis of a circular air duct. Cold air moves through the duct, and hot water is pumped through the tubing. The air flow pattern and the temperature of the air and copper pipes will be computed using the *Conjugate Heat Transfer* interface. Since the geometry is almost axisymmetric, we can simplify our modeling by assuming that the geometry and the air flow are entirely axisymmetric. Thus, we can use the *2D axisymmetric Conjugate Heat Transfer* interface. Since the airspeed is high, a turbulent flow model is used; in this case, the k-epsilon model.

We can assume that the water flowing inside of the pipe is a fully developed flow. We can also assume that the temperature variation of the water is small enough that the density does not change, hence the average velocity will be constant. Therefore, we do not need to model the flow of the water at all and can instead model the heat transfer between the fluid and the pipe walls via a forced convective heat transfer correlation.

The Convective Heat Flux boundary condition uses a Nusselt number correlation for forced internal convection to compute the heat transfer between the water and copper tubing. This boundary condition is applied at all inside boundaries of the copper piping. As inputs, it takes the pipe dimensions, fluid type, fluid velocity, and fluid temperature. With the exception of the fluid temperature, all of these quantities remain constant between the turns of the tubing.
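For background, correlations of this type compute a heat transfer coefficient from a Nusselt number, h = Nu·k/D. A minimal sketch using the classic Dittus-Boelter correlation for turbulent internal flow (an illustrative choice on our part; the correlation built into the boundary condition may differ):

```python
def htc_dittus_boelter(Re: float, Pr: float, k: float, D: float,
                       heating: bool = True) -> float:
    """Heat transfer coefficient h = Nu*k/D from the Dittus-Boelter
    correlation Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 when the fluid is
    being heated and n = 0.3 when it is being cooled.
    Valid roughly for fully developed turbulent flow, Re > 1e4,
    0.6 < Pr < 160.
    Re: Reynolds number, Pr: Prandtl number,
    k: fluid thermal conductivity [W/(m*K)], D: pipe diameter [m]."""
    n = 0.4 if heating else 0.3
    Nu = 0.023 * Re**0.8 * Pr**n
    return Nu * k / D
```

In this model the water inside the pipe is being cooled, so the n = 0.3 branch would apply.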

As the hot water is being pumped through the copper coils, it cools down. However, since the model is axisymmetric, each turn of the coil is independent of the others, unless we explicitly pass information between them. That is, we must apply a separate Convective Heat Flux boundary condition at the inside boundaries of each coil turn.

This raises the question: How do we compute the temperature drop between each turn and incorporate this information into our model?

Consider the water passing through one turn of the copper coil. The heat lost by the water equals the heat transfer into the copper pipes. Under the assumption of constant material properties, and neglecting viscous losses, the temperature drop of the water passing through one turn of the pipe is:

\Delta T = \frac{Q}{\dot m C_p} = \frac{\int q'' dA}{\dot m C_p}

where \dot m is the mass flow rate, C_p is the specific heat of water, and Q is the total heat lost by the water, which is equal to the integral of the heat flux into the copper, integrated over the inside boundaries of the coil. This integral can be evaluated via the Integration Component Coupling, defined over the inside coil boundaries.
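The temperature-drop formula above is simple enough to sanity-check outside the model. A quick sketch (the actual heat Q per turn comes from the COMSOL integration coupling; the numbers below are illustrative assumptions):

```python
def delta_T(Q: float, mdot: float, cp: float) -> float:
    """Temperature drop of the water across one coil turn.
    Q: total heat lost through the turn's inner walls [W]
    mdot: mass flow rate [kg/s]
    cp: specific heat of water [J/(kg*K)]"""
    return Q / (mdot * cp)

# For example, 500 W lost by a flow of 0.01 kg/s of water:
# delta_T(500.0, 0.01, 4186.0) ≈ 11.9 K
```

In the model itself, Q is evaluated by `intop1` over the inside boundaries of each turn.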

*The Integration Component Coupling defined over a boundary. Note: The integral is computed in the revolved geometry.*

Using these coupling operators, we can define a set of user-defined variables for the temperature drop:

`DT1 = intop1(-nitf.nteflux/mdot0/Cp0)`

This evaluates the temperature drop along the first turn of the pipe. We can define a different temperature drop variable for each turn of the pipe and use them sequentially for each turn.

*The water temperature in the sixth turn considers the temperature drop in the first five turns.*

*Flow field and temperature plot (left) and the temperature along a line through the center of the coils (right).*

Since this is a 2D axisymmetric model, it will solve very quickly. We can examine the temperature and the flow fields and plot the temperature drop along a line down the center of the coils. We can observe that the water cools down between each turn of the coil, and the air heats up.

This can be considered a parallel-flow heat exchanger, since the hot and cold fluids flow in the same overall direction. If we wanted to change this model to the counter-flow configuration, we could simply switch the air inlet and outlet conditions so that the fluids travel in opposite directions.

What other kinds of heat exchanger configurations do you think this technique can be applied to?


First, let us look at a rectangular wound multi-turn coil, as shown in the figure below. A spherical modeling domain contains a rectangular coil. The coil domain represents several hundred turns of wire wrapped around a rectangular profile. The lead wires that excite the coil are omitted from the model, and we treat the coil as a closed loop of current. The *Multi-turn coil* feature is used to compute and apply a uniform current distribution around the profile of the coil, and the steady-state magnetic field is plotted. Note that the coil domain has a constant cross-sectional area as we follow the path of the current around it.

*A rectangular coil with current flowing around the winding direction. Current flow (blue arrows) and magnetic field are plotted.*

This modeling domain has three planes of symmetry, i.e. planes about which the geometry is exactly mirrored. Let us now see how we can use this geometric symmetry as well as our knowledge of the magnetic field and what direction the current is flowing to reduce the size of our modeling domain.

The Magnetic Insulation boundary condition represents a mirror symmetry plane for the magnetic field. The magnetic field will be exactly mirrored as you cross the plane.

*The Magnetic Insulation boundary condition — “cut perpendicular to J and parallel to B“.*

This boundary condition also means that the magnetic field is zero in the normal direction to the boundary. That is, the magnetic field must be tangential to this boundary. As a consequence, this boundary condition has the physical interpretation of a boundary through which current can only flow in the *normal* direction. The modeling rule can be summarized: “Use Magnetic Insulation to cut perpendicular to **J** and parallel to **B**.”

The Perfect Magnetic Conductor boundary condition, on the other hand, represents a mirror symmetry plane for the current. From a mathematical point of view, it can be thought of as the “opposite” of the Magnetic Insulation boundary condition.

*The Perfect Magnetic Conductor boundary condition — “cut perpendicular to B and parallel to J“.*

The current vector will be exactly mirrored as you cross the plane and can have no normal component, so the current must flow *tangentially*. This boundary condition enforces that the magnetic field can have no tangential component as you approach the boundary, so the magnetic field can only point in the normal direction and cannot change sign as you cross the boundary. The modeling rule can be summarized: “Use Perfect Magnetic Conductor to cut perpendicular to **B** and parallel to **J**.”

The original geometry can be reduced in size to a one-eighth model representing the original geometry. Orthogonal planes through the center of the coil are used to partition the domains as shown below.

*A one-eighth symmetry model of a rectangular coil with current flowing around the winding direction. The Magnetic Insulation (magenta) and Perfect Magnetic Conductor (cyan) boundary conditions are applied along the appropriate symmetry planes for this problem.*

The Magnetic Insulation boundary condition is applied at the two boundaries representing the planes through which the current will flow normally. If the coil is excited with a voltage boundary condition, it is important to reduce the voltage by a factor of two for each Magnetic Insulation symmetry condition applied. If the coil is excited with current, the applied current does not need to be changed, but the postprocessed coil voltage should be scaled by a factor of two for each Magnetic Insulation symmetry condition.

The Perfect Magnetic Conductor condition is used at the plane along which the current will flow tangentially. Since the Perfect Magnetic Conductor condition cuts the coil in half, it is important to divide the applied current in half when a current excitation is used. On the other hand, if a voltage excitation is used, then the postprocessed coil current must be scaled up by a factor of two for each Perfect Magnetic Conductor symmetry plane.
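The excitation and postprocessing bookkeeping described above can be collected into a small helper. This is our own sketch (the function and argument names are ours, not COMSOL's):

```python
def symmetry_factors(n_mi: int, n_pmc: int) -> tuple[int, int]:
    """Scaling for a coil model reduced by symmetry planes.
    n_mi : number of Magnetic Insulation planes cutting the coil
    n_pmc: number of Perfect Magnetic Conductor planes cutting the coil
    Returns (voltage_factor, current_factor):
      - voltage excitation: apply V_full / voltage_factor and multiply
        the postprocessed coil current by current_factor;
      - current excitation: apply I_full / current_factor and multiply
        the postprocessed coil voltage by voltage_factor.
    Each symmetry plane of a given type contributes a factor of two."""
    return 2**n_mi, 2**n_pmc
```

For the one-eighth model above (two Magnetic Insulation planes and one Perfect Magnetic Conductor plane cutting the coil), this gives factors of 4 and 2.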

In almost all cases, the Magnetic Insulation and Perfect Magnetic Conductor boundary conditions are sufficient to significantly reduce the size of your model. As we saw earlier, these conditions enforce the current and magnetic fields to be either normal or tangential to the boundary. But what if we have a geometric symmetry plane where the fields do not have such a symmetry? In such cases, the Periodic (boundary) Condition may be appropriate.

*The Periodic Condition is used when all we know is that the solution must be periodic.*

The Periodic (boundary) Condition allows for more general symmetry where both the current and the magnetic field vector can be at an angle to the boundary. The usage of this condition is limited to cases where the magnetic sources as well as the structure are periodic in space. Typically, the full geometry can be reduced to the smallest repetitive element, a unit cell, limited by periodic conditions.

Consider the structure of a toroidal inductor wound with a single strand of wire, shown below. The wire can be modeled fairly accurately as a single continuous spiral around the toroid, as long as we again neglect the asymmetry due to the lead wires. We can model the wire as an edge current, flowing tangentially to the wires.

*A spirally wound toroidal inductor. The arrows (blue) indicate the direction of current flow. The magnetic field in the core is shown.*

To exploit as much symmetry as possible here, we can consider the unit cell that is just a small slice of the original model containing a single turn of the winding. The Periodic Condition is used along the sides of the slice. When using this boundary condition, the mesh must be identical on the periodic faces, so the *Copy Face* functionality should be used to ensure identical meshes. As we can see from the image below, the model size can be reduced by a factor equal to the number of windings, which greatly reduces the computational cost.

*Periodic Conditions can greatly reduce the model size for certain geometries.*

The generality of the Periodic Condition comes at a price compared to the more basic Magnetic Insulation and Perfect Magnetic Conductor conditions. As it links the unknown fields on one side of the geometry to those on the opposite side, it makes the system matrix more dense and expensive to solve. Therefore, do not use it if the more basic conditions apply.

By reducing the model size, we also reduce the computational requirements significantly. In fact, computational requirements grow much faster than linearly with problem size, so the more symmetries we can use, the better. Even if you don't have symmetry in the full problem that you want to solve, it is often advisable to work with a smaller model that *does* have symmetry during the initial development stages of your modeling.

Serious engineering and scientific computing problems involve working with large amounts of data — anywhere from megabytes to gigabytes of data are commonly generated during a COMSOL Multiphysics simulation. To generate and store this data, we will want to use computers with fast processors, a lot of Random Access Memory (RAM), and a large hard drive. To visualize this data, it is important to have a high-end graphics card.

The COMSOL Multiphysics workflow can be divided into four steps:

- **Preprocessing:** Creating the geometry and defining the physics. The user is constantly interacting with the user interface (UI), but this step is usually not very computationally intensive.
- **Meshing:** This step can require a lot of interaction with the UI and can also be computationally demanding.
- **Solving:** This is the most computationally demanding step. However, the user only rarely interacts with the UI.
- **Postprocessing:** The most interactive step. Most of the computations are handled by the graphics card.

Under ideal conditions, everyone would be working on a high-end computer with more than enough memory and processing power for all of these steps. But realistically, we have to make do with what is available on each user’s desktop. If we need to solve larger models, we will want to access a shared computing resource across a network.

Here lies the issue: Passing data back and forth across a network is a lot slower than passing data around *inside your computer*, especially when it comes to the graphics-intensive postprocessing step. This becomes particularly apparent when using a virtual desktop application, which has to continuously send many megabytes of graphics data over the network. So, let’s see how the COMSOL Multiphysics Client-Server mode addresses this issue.

*One COMSOL Multiphysics Floating Network License is used during Client-Server operation.*

Users typically start COMSOL Multiphysics on their local desktop or laptop computer and start drawing the geometry, defining material properties, applying loads, and setting boundary conditions. Since this primarily involves selecting objects on the screen and typing, the computational requirements are quite low.

Once your users start meshing and solving larger models, however, they can quickly exceed their local computational resources. Meshing and solving takes both time and RAM. If a model requires more RAM than what is available locally, the computer will become quite unresponsive for some time. Rather than upgrading each computer, you can use Client-Server mode so that users can access a remote computing resource.

At any time while using COMSOL Multiphysics, it is possible to connect to a remote computing resource via the Client-Server mode of operation. This is a two-step process. First, log onto the remote system and invoke the COMSOL Multiphysics Server, which will start up the COMSOL Multiphysics Server process and open a network connection. Second, on the local machine, simply enter the network connection information into an open session of COMSOL Multiphysics. The software then transparently streams the model data and results back and forth over the network and uses the remote computing resource for all computations.

It is possible to disconnect from the COMSOL Multiphysics Server as long as it is not in the middle of a computation. This will free up the shared computing resource for other users. It can be good practice to do so during postprocessing, which primarily involves visualization of data and is less computationally intensive. Displaying the results is always handled locally, so it is important to have a good graphics card. (Tip: Check out this list of tested graphics cards.)

You can see that running your simulations in Client-Server mode allows each part of your IT infrastructure to do what it does best. You can run the COMSOL Multiphysics Server on your high-performance computing resources, while your users work on their local machines for graphics visualization. Other than the number of licenses that you have available, there is no limit to the number of simultaneous users running a server at any one time. In fact, you can run COMSOL Multiphysics in Client-Server mode *all the time*. Preprocessing, meshing, solving, and any non-graphical postprocessing computations can all be done on the server. By taking advantage of your organization's shared computing resources, your users will not need an upgrade of their desktop computers every time they want to run a larger COMSOL Multiphysics model.

The COMSOL Client-Server capabilities, available as part of the Floating Network License, allow you to run your large COMSOL Multiphysics models on the best computers that you have available, so you do not have to buy every user a large workstation. It is a great option for any organization.

- Please contact us if you would like to learn more
- Learn how to update an FNL

Consider the problem of taking the integral of a quadratic function:

*The integral is the area of the shaded region*.

We can evaluate this integral within COMSOL Multiphysics by using the *integrate* function, which has the syntax: `integrate(u^2,u,0,2,1e-3)`. Here, the first argument is the expression to integrate, the second is the variable of integration, the third and fourth arguments are the limits of integration, and the optional fifth argument is the relative tolerance of the integral, which must be between 0 and 1. If the fifth argument is omitted, the default value of 1e-3 is used. We can call this function anywhere within the set-up of the model.
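As a cross-check outside COMSOL, the same definite integral can be evaluated numerically. Simpson's rule is exact for quadratics, so even a coarse rule recovers the exact value of 8/3:

```python
def simpson(f, a: float, b: float, n: int = 100) -> float:
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Equivalent of integrate(u^2, u, 0, 2):
print(simpson(lambda u: u**2, 0.0, 2.0))  # ≈ 2.6667 (= 8/3)
```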

Here, we’ll use it within the *Global Equations* interface:

*The Global Equation for* Integral *computes the integral between the specified limits.*

There aren’t any big surprises here, so far. We can solve this problem in COMSOL Multiphysics or by hand. But suppose we turn the problem around a bit. What if we know what the integral should evaluate to, but don’t know the upper limit of the integral?

Let’s look at how to solve the following problem for the upper limit, u_b:

6=\int_0^{u_b} u^2du

We can solve this by changing the Global Equation such that it solves for the upper limit of the integral:

*The Global Equation for u_b solves for the upper limit of the interval for which the integral evaluates to 6.*

There are a few changes in the above Global Equation. The variable is changed to u_b, and the expression that must equal zero becomes `6-integrate(u^2,u,0,u_b)`. The software will then find a value for u_b such that the integral equals the specified value.

Note that the initial value of u_b is non-zero. Since the software uses the Newton-Raphson method to solve this equation, we should not start from a point where the slope of the function is zero. After solving the problem, we find that u_b = 2.621.
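A sketch of what such a Newton-Raphson iteration looks like, using the closed-form antiderivative u_b^3/3 for the residual (this is our illustration, not COMSOL's internal code):

```python
def solve_upper_limit(target: float = 6.0, u0: float = 1.0,
                      tol: float = 1e-10) -> float:
    """Newton-Raphson on r(u) = target - u^3/3, i.e. find the upper
    limit u_b of the integral of u^2 from 0 to u_b equal to target.
    u0 must be non-zero, since r'(u) = -u^2 vanishes at u = 0."""
    u = u0
    for _ in range(50):
        r = target - u**3 / 3.0   # residual
        dr = -u**2                # derivative of residual
        step = r / dr
        u -= step
        if abs(step) < tol:
            break
    return u

print(solve_upper_limit())  # ≈ 2.621 (exactly 18^(1/3))
```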

Now, let’s complicate things a bit more and solve the following problem for both limits of the interval, u_a and u_b:

6=\int_{u_a}^{u_b} u^2du

Since we have two unknowns, we clearly need one more equation, so let's additionally require that (u_b-u_a)-1=0.

*An additional equation is added to specify the difference between the upper and lower limits of the interval.*

Solving the model, shown above, gives us values of u_a = 1.932 and u_b = 2.932. It would also be possible to solve this with a single Global Equation by writing `6-integrate(u^2,u,u_b-1,u_b)` as the equation to solve for u_b, but it is interesting to see that we can solve multiple equations simultaneously.
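The two-unknown version can also be checked by hand. Eliminating u_a with the constraint u_b = u_a + 1 reduces the residual ((u_a+1)^3 - u_a^3)/3 - 6 to u_a^2 + u_a + 1/3 - 6, which Newton's method solves quickly (again an illustrative sketch, not the software's internal algorithm):

```python
def solve_interval(target: float = 6.0, a0: float = 1.0,
                   tol: float = 1e-12) -> tuple[float, float]:
    """Find (u_a, u_b) with u_b = u_a + 1 such that the integral of
    u^2 from u_a to u_b equals target. Uses the closed form
    ((a+1)^3 - a^3)/3 = a^2 + a + 1/3 and Newton's method."""
    a = a0
    for _ in range(100):
        r = a * a + a + 1.0 / 3.0 - target  # residual
        dr = 2 * a + 1                      # its derivative
        a -= r / dr
        if abs(r) < tol:
            break
    return a, a + 1.0

ua, ub = solve_interval()
print(round(ua, 3), round(ub, 3))  # 1.932 2.932
```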

Next, let’s put the above technique into practice to determine the operating conditions of a heat exchanger. Consider the Model Library example of the geothermal heating of water circulating through a network of pipes submerged in a pond.

*Water pumped through a submerged network of pipes is heated up.*

In this example, the Pipe Flow Module is used to model water at 5°C (278.15 K) pumped into a network of pipes and heated up by the relatively warmer water in a pond. The temperature of the water in the pond varies between 10°C and 20°C with depth. The computed temperature at the output is 11.1°C (284.25 K). If the mass flow rate of water is specified to be 4 kg/s, then the total absorbed heat is:

Q=\int_{278.15K}^{284.25K}\dot m C_p(T)dT=99kW

where \dot m is the mass flow rate and C_p(T) is the specific heat, which is temperature dependent.
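As a back-of-the-envelope check of this number, we can evaluate the integral with a constant specific heat standing in for the temperature-dependent C_p(T) used in the model (the 4186 J/(kg·K) value is our assumption, not the model's):

```python
def absorbed_heat(mdot: float, cp: float, T_in: float, T_out: float) -> float:
    """Q = integral of mdot*Cp dT; for constant Cp this reduces to
    mdot * Cp * (T_out - T_in). All temperatures in kelvin."""
    return mdot * cp * (T_out - T_in)

Q = absorbed_heat(4.0, 4186.0, 278.15, 284.25)
print(Q / 1e3, "kW")  # ≈ 102 kW, in line with the reported 99 kW
```

The small difference from 99 kW comes from the temperature dependence of C_p(T), which the constant-Cp estimate ignores.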

Now, in fact, the network of pipes we have here is a closed-loop system, but we simply aren’t modeling the part of the system between the pipe outlet and inlet. This model contains an implicit assumption that as the water gets pumped from the outlet back to the inlet, it is cooled back down to exactly 5°C.

So, instead of assuming that the temperature of the water coming into the pipe is a constant temperature, let’s consider this closed-loop system connected to another heat exchanger that removes a specified amount of heat. Suppose that this heat exchanger can only extract 10 kW. What will the temperature of the water in the pipes be?

Clearly, the first step here would be to write the integral for the extracted heat, in terms of the unknown limits, T_{in} and T_{out}:

Q=10kW=\int_{T_{in}}^{T_{out}}\dot m C_p(T)dT

The second condition that we need to include is the relationship between the input and output temperatures. This is computed by our existing finite element model. The model uses a fixed temperature boundary condition at the pipe inlet and computes the temperature along the entire length of the pipe. Therefore, all we need to do is add a Global Equation to our existing model to compute the (initially unknown) inlet temperature, T_in, in terms of the extracted heat, and the temperature difference between the inlet and outlet.

*The Global Equation that specifies the total heat extracted from the pond loop.*

Let’s look at the equation for T_in, the inlet temperature to the pipe flow model, in detail:

`10[kW]-integrate(4[kg/s]*mat1.def.Cp,T,T_in,T_out)`

Starting from the right, T_out is the computed outlet temperature. It is available within the Global Equation via the usage of the Integration Coupling Operator, defined at the outlet point of the flow network. That is, T_out=intop1(T), which is defined as a global variable within the Component Definitions.

T_in is the temperature at the inlet to the pipe network, which is the quantity that we want to compute; T is the temperature variable, which is used within the material definitions; and mat1.def.Cp is the expression for the temperature dependent specific heat defined within the Materials branch.
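To see how such a closed-loop balance behaves, here is a deliberately toy version. In the real model, T_out is computed by the finite element solution; below we stand in a hypothetical effectiveness model for the pond loop (ETA and T_POND are pure assumptions for illustration) and solve the 10 kW balance for the inlet temperature by bisection:

```python
MDOT, CP = 4.0, 4186.0     # kg/s and J/(kg*K); constant-Cp assumption
T_POND, ETA = 288.15, 0.5  # assumed pond temperature and effectiveness

def t_out(t_in: float) -> float:
    """Hypothetical stand-in for the FE model's outlet temperature."""
    return t_in + ETA * (T_POND - t_in)

def residual(t_in: float, q_target: float = 10e3) -> float:
    """Extracted heat minus heat absorbed in the pond loop."""
    return q_target - MDOT * CP * (t_out(t_in) - t_in)

# Bisection for the inlet temperature at which exactly 10 kW is absorbed.
lo, hi = 273.15, T_POND
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(mid, 2), "K")  # ≈ 286.96 K with these assumed numbers
```

COMSOL Multiphysics does the analogous thing with the Global Equation above, except that the residual involves the full finite element solution rather than a toy formula.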

*The closed-loop solution. 10 kW is extracted at this operating point. Note how the water heats up and cools down within the pond under these operation conditions.*

You can see from the techniques we’ve outlined here that you can not only take an integral, but even solve for the limits of that integration, and make this equation a part of the rest of your model. The example presented here considers a heat exchanger. Where else do you think you could use this powerful technique?


Very broadly speaking, whenever you are modeling a problem that involves computing the velocity and/or the pressure field in a fluid as well as the stresses and strains in a solid material interacting with that fluid, you are solving a fluid-structure interaction (FSI) model.

When modeling an FSI problem, there are various assumptions that we can make to simplify the modeling complexity and reduce the computational burden. To get us started, let’s look at the most complete FSI model that you can build in COMSOL Multiphysics: the fluid flow around a cylinder.

*Deformation of a flexible object in the wake of a cylinder in crossflow.*

The fluid wake behind the cylinder induces large oscillations in the solid protruding from the back of the cylinder. Solving this type of model requires that we address three problems. First, the Navier-Stokes equations are solved in the fluid flow regions. Next, the displacements in the solids are computed. Finally, the deformation of the mesh in the fluid region is solved to account for the deforming region through which fluid can flow.

This nonlinear multiphysics coupling is handled with the *Fluid-Structure Interaction* interface that is available within the MEMS Module or the Structural Mechanics Module. Such models can be solved in either the time-domain or as a steady-state (time-invariant) problem.

The example above considers a linear relationship between the stress and strain in the solid material. If you would like to model materials with a nonlinear stress-strain relationship, such as a hyperelastic material model commonly used to describe rubbers and polymers, you will also want to use the Nonlinear Structural Materials Module.

*Peristaltic pump: A roller pumps fluid along a flexible tube. Image credit: Veryst Engineering.*

On the other hand, you may know ahead of time that the structural displacements will be relatively small, but the stresses may be significant. You can still use the *Fluid-Structure Interaction* interface in this case, but with a *One-Way Coupled* solver, which computes the flow solution and then applies the fluid loads to the structural problem. That way, you avoid computing the deformation of the mesh.

It is also possible to put together this type of one-way coupled FSI problem from scratch, without using the *Fluid-Structure Interaction* interface at all. This is demonstrated in the Fluid-Structure Interaction in Aluminum Extrusion example. Additionally, if you are dealing with a very high-speed flow and are not interested in the short time-span chaotic oscillations in the flow, then you can use a turbulent fluid flow model as part of your FSI model. Both the CFD Module and the Heat Transfer Module include various turbulence models appropriate for different flow regimes.

*Solar Panel in Periodic Flow: The turbulent air flow around a solar panel and the resultant structural stresses are computed.*

If you know in advance that you are dealing with a vibrating structure in a fluid, then you can usually assume that the structural displacements will be relatively small. As a consequence, any induced bulk motion in the surrounding fluid is negligible. However, since the structure is vibrating, a pressure wave will be excited in the fluid and sound will radiate away. Within the COMSOL software, this is handled via the *Acoustic-Structure Interaction* interfaces available in the Acoustics Module.

These interfaces assume that the variations in the displacements of the solids are relatively small and therefore do not induce significant bulk motion of the fluid, but only variations in the fluid pressure field. It is possible to solve such problems in the time domain, but it can also be assumed that the displacements and pressure vary sinusoidally in time. This allows us to model in the frequency domain, which is much less computationally intensive. Bulk losses due to fluid viscosity and material damping can be included.
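Schematically, this frequency-domain assumption means that each field quantity is written as a small harmonic variation about its equilibrium value; for the pressure, for example:

p(\mathbf{x},t)=p_{eq}+\operatorname{Re}\left\{\hat{p}(\mathbf{x})\,e^{i\omega t}\right\}

where \hat{p} is the complex pressure amplitude solved for at each angular frequency \omega. Solving for the complex amplitude at a given frequency replaces time stepping entirely, which is where the computational savings come from.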

*The sound pressure level radiated by a loudspeaker.*

It is further possible to solve a *Thermoacoustic-Solid Interaction* problem, which solves a linearized frequency-domain version of the Navier-Stokes equations and can consider losses in the explicitly modeled thermal and viscous boundary layers. Although this is more computationally expensive than an acoustic-solid interaction problem, it is still much more efficient than solving the full FSI problem.

*Vibrating Micromirror: The stresses and displacements of a vibrating micromirror and the surrounding air velocity.*

The Acoustics Module can also handle wave propagation through poroelastic media, such as wetted soils, biological tissues, and sound-damping foams, using the *Poroelastic Waves* interface. This solves for both the structural displacements as well as the pressure of the fluid in the pores of the solid. An example is computing the reflection of a sound wave off a water-sediment interface.

If you are interested in modeling poroelastic media in the steady state or time domain instead of the frequency domain, then you will need the Subsurface Flow Module. It is meant for modeling the steady-state or transient pressure-driven flow and static stresses in soils and other porous media. This module contains a *Poroelasticity* interface, so you can model poroelastic fluid-structure interaction in the steady-state and transient regimes.

*Open Hole Multilateral Well: The stresses in the soil and the fluid velocity in the poroelastic domain are plotted.*

All of the approaches I just described explicitly model the volume of the fluid and solve for the velocity and/or the pressure throughout these volumes. In circumstances where we have thin layers of fluid, such as in a hydrodynamic bearing, we can avoid a volumetric model of the fluid entirely and instead solve the Reynolds equation, which describes only the pressure in a thin film of fluid.
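For reference, one common form of the incompressible Reynolds equation for a thin film of thickness h(x,y,t), with one surface sliding at speed U in the x-direction, is:

\frac{\partial}{\partial x}\left(\frac{h^3}{12\mu}\frac{\partial p}{\partial x}\right)+\frac{\partial}{\partial y}\left(\frac{h^3}{12\mu}\frac{\partial p}{\partial y}\right)=\frac{U}{2}\frac{\partial h}{\partial x}+\frac{\partial h}{\partial t}

where \mu is the fluid viscosity. Only the pressure p appears as an unknown on the film surface, which is what makes this approach so much cheaper than a volumetric flow model.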

In this approach, we only solve for fluid flow along a boundary of the domain. This interface is available in the CFD Module and the MEMS Module. We can even take things further and only solve for the fluid flow along a line. In other words, we can solve for flow along a pipe using the Pipe Flow Module.

For an example of a model that considers both pressure variations along the length of a pipe as well as the effect of the elasticity of the pipe walls, please check out this example of solving the Water Hammer Equation.

*Tilted Pad Thrust Bearing: The pressure field in a lubricating layer and deformation of a tilted pad thrust bearing.*

You can probably see that we started with the most complex case and are looking at ways to simplify the computations, especially of the fluid flow field. Let’s take this to the extreme and consider the case of a fluid that isn’t moving at all, but does provide a hydrostatic pressure load on the structure.

In such situations, we can take advantage of the core features of COMSOL Multiphysics: the User-Defined Equations, Component Coupling Operators, and Global Equations. These allow you to include an arbitrary equation in the model to represent an additional variable, such as fluid pressure. As we discussed in a previous blog post, you can include the effect of either a compressible or an incompressible fluid within a deforming enclosed cavity, as well as the hydrostatic pressure.

Now that we've surveyed several ways to simplify the fluid flow problem while still computing the stresses, let's finish up by turning things around: modeling the motion of a fluid in cases where the rigid body motion of the solid is known. Such situations can be handled via the Mixer Module, which is meant for addressing mixers and stirred vessels.

The motion of the solid structure is, in this case, completely defined via the rotation, and the movement of the fluid is computed. We can additionally compute the stresses in the moving solid objects, if we assume linear elastic deformations of the solids. This can be handled by a one-way coupled solution that first solves for the fluid flow field due to the moving mixer, and then computes the stresses under the assumption that the structural deformation is small.

*Flow fields in a stirred mixing vessel.*

As you can see, COMSOL Multiphysics is capable of handling a wide range of fluid-structure interaction modeling problems. If you’ve seen something here that you are interested in, or if there is something you’re curious about in this area that isn’t covered here, please contact us.


Consider a rubber balloon, completely filled with water, resting on a surface with a hole in it while being pushed from the top by an indenter. The deformation of the balloon is due to the weight of the fluid as well as the indenter pushing down from the top, as seen below. The rubber material is modeled with a hyperelastic material model. We will use the technique explained in the previous entry to keep the volume of the cavity constant as it deforms.

The deformation of the balloon is due, in part, to the weight of the fluid, which causes it to bulge outwards and into the depression. It also deforms due to the compression from above, which causes it to bulge outwards and upwards. As a consequence of this compression, the depth of the fluid inside the balloon will change. We want to solve for this change in depth without having to solve the Navier-Stokes equations for the fluid flow, since we are only interested in the static (time-invariant) solution.

*A rubber balloon filled with water is compressed at the center. As the balloon is squeezed, the location of the highest point and the depth of fluid changes, altering the hydrostatic pressure distribution.*

A container of fluid will exert a hydrostatic pressure on its walls:

p(z)=p_0+\rho g (z_0-z)

where \rho is the density of the fluid, g is the acceleration due to gravity, z_0 is the location of the top of the container, and p_0 is the pressure of the fluid at the top of the container. Since the balloon is filled with an incompressible fluid, the pressure, p_0, will increase as we squeeze it with the indenter.
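As a quick numeric illustration of this formula (the density, depth, and top pressure below are placeholder values, not the model's actual parameters):

```python
# Hydrostatic pressure p(z) = p0 + rho*g*(z0 - z) for an illustrative water column
rho = 1000.0   # water density, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2
z0 = 0.10      # z-location of the top of the fluid, m
p0 = 0.0       # gauge pressure at the top of the container, Pa

def p(z):
    """Hydrostatic pressure at height z."""
    return p0 + rho * g * (z0 - z)

print(p(0.0))  # pressure at the bottom of the 0.1 m column, in Pa
```

The pressure is largest at the bottom of the column and decreases linearly to p_0 at the top, exactly as the equation above states.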

We can also see, from the image above, that the depth of the fluid changes as the balloon is compressed. Furthermore, it appears as if computing the depth requires knowing the location of the top and bottom of the container. So, how do we incorporate this change in depth? Let’s find out…

As shown below, there are two components to the pressure loads applied inside the balloon. The first part of the load is computed from the Global Equation. The second pressure load is due to the hydrostatic pressure. Ideally, this second pressure load would be based upon the depth of the fluid, but this depth is a variable that we don’t know. So instead, let’s enter a hydrostatic load based only upon the z-location, which could have an arbitrary zero level.

*The applied pressure load on the inside boundary of the balloon is the sum of the pressure load computed by the Global Equation and the hydrostatic pressure. The hydrostatic pressure is ramped up during the solution.*

*The Global Equation constrains the volume to remain constant during deformation.*

So, it appears here as if we are applying a pressure load to constrain the volume and a load that is directly proportional to the z-location, but we are not correctly computing the hydrostatic pressure, since we do not know z_0. As it turns out, however, the Global Equation does a little bit more than you might first expect.

To see this, let us slightly re-write the equation for the pressure inside the balloon:

p(z)=(p_0+\rho g z_0)-\rho g z

We can see right away that this almost exactly matches the equation we entered as the pressure load, p(z)=P_0-\rho g z, except that the pressure we are computing via the Global Equation is the pressure at the top of the container plus an offset due to the unknown z-location of the top. So, although we are only solving for a single additional variable, P_0, it accounts for two physical effects: the change in pressure due to the volume constraint as well as the change in the z-location of the top of the fluid.
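To see numerically that the single lumped unknown reproduces the full hydrostatic profile, here is a minimal sketch (the values of p_0 and z_0 are arbitrary placeholders, not taken from the model):

```python
# Check that P0 = p0 + rho*g*z0 lumps the two unknown effects into one variable
rho, g = 1000.0, 9.81     # water density (kg/m^3), gravitational acceleration (m/s^2)
p0, z0 = 200.0, 0.10      # assumed pressure and z-location at the top of the fluid
P0 = p0 + rho * g * z0    # the single unknown solved for by the Global Equation

def p_full(z):
    return p0 + rho * g * (z0 - z)   # full expression: p(z) = p0 + rho*g*(z0 - z)

def p_lumped(z):
    return P0 - rho * g * z          # as entered in the load: p(z) = P0 - rho*g*z

for z in (0.0, 0.05, 0.10):
    assert abs(p_full(z) - p_lumped(z)) < 1e-9
```

Whatever values p_0 and z_0 take as the balloon deforms, the solver only ever needs to find the single combination P_0.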

Since this model contains both geometric and material nonlinearities and a nonlinearity due to the contact, converging to the solution can be difficult. To address this, we will use load ramping to slowly increase the effect of gravity on the model, and to gradually squeeze the balloon. A 2D-axisymmetric model is used to exploit the symmetry of the structure.

*The* Maximum Coupling Operator *is used to find the highest point inside the cavity for postprocessing.*

After we solve the model, we can postprocess the magnitude of the hydrostatic pressure by using the *Maximum Coupling Operator* to compute the maximum z-location along the inside boundary of the balloon.

*The solution where the arrows indicate the hydrostatic pressure load that varies with depth.*

The plot above shows the hydrostatic pressure load on the inside of the balloon. The length of the arrows is given by the expression: WaterDensity*g_const*(maxop1(z)-z), where maxop1(z) gives the z-location at the top of the deformed cavity.

In this example, we have modeled the varying depth of a fluid in a deformable container (a balloon, in this case). The Global Equation that is used to solve for the fluid pressure that keeps the volume constant also accounts for the change in the depth of the fluid as the balloon deforms.

By using this approach, we solve a fluid-structure interaction problem without explicitly having to solve the Navier-Stokes equations, thus saving significant computational resources. If you are interested in this type of modeling, or would like more details about this model, please contact us.


First, let’s look at one of the most common transmission line structures: the coaxial cable. An inner and outer conductor are separated by a dielectric, and a wave travels along the length of the cable in a TEM mode, meaning that the electric and magnetic fields are transverse to the direction of propagation.

If we can assume that the conductors and the dielectric are lossless (a good approximation for many cases), we can compute an impedance, as demonstrated in the Model Gallery benchmark example on Finding the Impedance of a Coaxial Cable (the model can also be found in the Model Library).

In that example, we draw a 2D cross section of the coax and specify the dielectric properties as well as an operating frequency below the cutoff frequency for any TE or TM modes. The COMSOL software will then solve an eigenvalue problem for the out-of-plane propagation constant as well as the fields, which can be used to compute the impedance of the cable. This approach is very efficient in terms of computation, but only works for TEM waveguides of uniform cross section.

Now, let’s consider a coaxial waveguide with a corrugated outer conductor. These are used when mechanical flexibility is desired.

*A corrugated coaxial cable. The slice plot is of the electric field and the arrow plot is the magnetic field.*

Such waveguides will not operate in a purely TEM mode, meaning that there is some electric and magnetic field component in the direction of propagation. However, we will assume that these components are small; as a consequence, we can define the impedance as:

Z_0=\frac{\left|V\right|^2}{2P}

where V is the voltage, which can be evaluated by taking the path integral of the electric field along any line between the inner and outer conductor, and P is the integral of the Poynting flux over any cross section. You can use *Integration Coupling Operators* along the edges and boundaries of the modeling domain to compute these quantities.
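As a sanity check of this definition, the sketch below evaluates Z_0=|V|^2/(2P) from the known analytic TEM fields of an ideal lossless air-filled coax; the radii are arbitrary illustrative values, not taken from any particular model:

```python
import math

# Z0 = |V|^2 / (2P) evaluated from the analytic TEM fields of a lossless coax
a, b = 0.5e-3, 1.725e-3   # inner/outer conductor radii, m (illustrative values)
eta = 376.730313668       # wave impedance of the air dielectric, ohms

V0 = 1.0                  # peak voltage between the conductors

def E_r(r):               # radial electric field of the TEM mode
    return V0 / (r * math.log(b / a))

def H_phi(r):             # azimuthal magnetic field of the TEM mode
    return E_r(r) / eta

# Time-averaged Poynting flux integrated over the annular cross section
N = 10000
dr = (b - a) / N
P = 0.0
for i in range(N):
    r = a + (i + 0.5) * dr                          # midpoint rule
    P += 0.5 * E_r(r) * H_phi(r) * 2.0 * math.pi * r * dr

Z0 = abs(V0) ** 2 / (2.0 * P)
print(Z0)   # matches the textbook result (eta / (2*pi)) * ln(b/a)
```

For a true TEM line this recovers the standard closed-form coax impedance; for the corrugated waveguide, where no closed form exists, the same voltage and power integrals are evaluated on the computed fields instead.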

Rather than computing a long section of the waveguide, we can consider only one periodic section of the structure itself. But the effective wavelength of the signal traveling down the waveguide will be much longer than this, so we use the *Floquet periodic boundary condition* to specify that the wave traveling down the waveguide has a specified propagation constant.
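Schematically, the Floquet condition relates the fields on the two periodic faces, separated by the period vector \mathbf{a}, through a phase factor set by the specified propagation constant \mathbf{k}_F:

\mathbf{E}(\mathbf{r}+\mathbf{a})=\mathbf{E}(\mathbf{r})\,e^{-j\mathbf{k}_F\cdot\mathbf{a}}

so a single unit cell carries all of the information about a wave whose effective wavelength spans many periods.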

*The Floquet periodic boundary condition interface.*

Via this approach, we can then solve an eigenvalue problem to compute the frequency of the wave that will have this propagation constant. When using a periodic boundary condition, we also need to ensure that the mesh on the boundaries is periodic.

*The Copy Face feature will ensure that the mesh on the periodic faces is identical. The interior is then meshed with free tetrahedral elements.*

Once the solution is computed for a single unit cell, we can evaluate the impedance at that frequency. We can also sweep over a range of effective wavelengths to compute the impedance as a function of frequency and observe that the impedance rises at higher frequencies. This indicates that we are approaching the frequency at which TE or TM modes will be present, at which point we can no longer use this approach.

Here, we have shown that you can compute an impedance for a waveguide with periodic structure operating in the quasi-TEM regime. The *Floquet Periodic* boundary conditions and the *Copy Face* functionality are used to set up a unit cell model, which is solved to extract the impedance for a range of frequencies.

If you have questions about this type of modeling, please contact us.
